That's an insightful question! Testing "security through obscurity" on a login page typically involves observing the server's responses to different types of login attempts.
In this lab, you've already performed a manual test of this concept:
- Attempt 1: Username `test`, Password `password123`. You likely know `test` is not a valid username in our system, and `password123` is not a valid password.
  - Observed Response: "Invalid username or password"
- Attempt 2: Username `admin`, Password `admin`. You might suspect `admin` could be a valid username, but `admin` is not the correct password.
  - Observed Response: "Invalid username or password"
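The two manual attempts above can be sketched as code. This is a minimal illustration only: the `check_login` function and the `admin`/`s3cret` credentials are invented stand-ins for the lab server's actual logic.

```python
# Hypothetical stand-in for the lab server's login check.
# The credentials below are invented for this demo.
VALID_CREDENTIALS = {"admin": "s3cret"}

def check_login(username: str, password: str) -> str:
    """Return the same generic error no matter which field was wrong."""
    if VALID_CREDENTIALS.get(username) == password:
        return "Welcome!"
    return "Invalid username or password"

# Attempt 1: neither the username nor the password is valid.
response_1 = check_login("test", "password123")
# Attempt 2: the username may be valid, but the password is wrong.
response_2 = check_login("admin", "admin")

print(response_1)                 # Invalid username or password
print(response_2)                 # Invalid username or password
print(response_1 == response_2)   # True: the responses leak nothing
```

Because both branches of a failed login collapse into one return value, an attacker cannot tell which field was at fault.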
To formally "test" it, you would look for consistency in the error messages:
- Try a username you know doesn't exist (e.g., `not_a_user123`).
- Try a username you suspect might exist (or one you know exists, if you have that information) with a clearly wrong password (e.g., `admin` with `wrongpassword`).
If both scenarios yield the exact same generic error message (like "Invalid username or password"), then the system is employing "security through obscurity": it does not reveal whether the username itself was invalid or only the password was incorrect.
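That comparison can be expressed in a few lines. This is a sketch: the response strings below are canned examples standing in for whatever your lab server actually returns for each scenario.

```python
def enumeration_blocked(resp_unknown_user: str, resp_wrong_password: str) -> bool:
    """True if both probe scenarios produced identical error messages."""
    return resp_unknown_user == resp_wrong_password

# Scenario A: a username you know doesn't exist (e.g. not_a_user123).
resp_a = "Invalid username or password"
# Scenario B: a likely-valid username with a clearly wrong password
# (e.g. admin with wrongpassword).
resp_b = "Invalid username or password"

print(enumeration_blocked(resp_a, resp_b))  # True: nothing is given away
```

If the two messages ever differ, the server is distinguishing "unknown user" from "wrong password", which enables username enumeration.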
In more advanced penetration testing, automated tools can:
- Compare response sizes or timings: Sometimes, even when the error message text is identical, the server may return a slightly different response size or take fractionally longer to process a request for a valid user than for an invalid one. These subtle differences can leak information, although a basic lab setup like this one usually won't exhibit them.
- Analyze error message variations: Tools can automate attempting many usernames and passwords and look for any variations in the error messages returned.
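The timing idea can be demonstrated with a deliberately leaky mock server. Everything here is invented for the demo: the credentials, the 50 ms "hash" delay (a real server's leak, if any, would be far subtler), and the fact that only valid usernames pay the hashing cost.

```python
import time

# Invented credentials for the demo.
USERS = {"admin": "s3cret"}

def slow_hash(password: str) -> str:
    time.sleep(0.05)  # stands in for an expensive password hash (e.g. bcrypt)
    return password[::-1]

def leaky_login(username: str, password: str) -> str:
    # Bug under test: the expensive hash only runs for known usernames,
    # so valid usernames take measurably longer even with identical output.
    if username in USERS:
        slow_hash(password)
    return "Invalid username or password"

def timed(fn, *args) -> float:
    """Return how long one call to fn took, in seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

t_valid = timed(leaky_login, "admin", "wrongpassword")
t_invalid = timed(leaky_login, "not_a_user123", "wrongpassword")
print(f"valid-user attempt:   {t_valid:.3f}s")
print(f"unknown-user attempt: {t_invalid:.3f}s")
```

Even though both calls return the identical generic message, the timing gap reveals which usernames exist. The usual fix is to perform the same (dummy) hashing work on every attempt, valid username or not.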
For the purpose of this lab and most basic scenarios, simply observing the consistent generic error message (as you've already done!) is how you confirm that "security through obscurity" is being used against username enumeration.
Does that make sense? We're essentially trying to trick the system into giving us more information than it intends to!