PHP client PIN security - php

I'm currently developing a system which has a functionality where clients can view details of their purchases/renewals/etc by supplying a PIN "number".
A PIN is being used instead of login information because of the type of clients we're targeting. The PIN is printed on documents sent to them.
The view shown when they supply the PIN does not reveal highly sensitive information such as credit card etc, but less sensitive one such as product name, type, price, barcode, repairs etc.
The issue in question is the PIN. I opted to use a random 5-character PIN (0-9, a-z, A-Z), case sensitive.
I'll be removing some homoglyphs ('I','1','l','0','O','rn','vv'), so the real number of combinations is somewhat lower.
I've got a couple of questions about this:
Is this practice acceptable?
Should I write a lockout mechanism to "ban" traffic from IPs with a large number of failed attempts?
Should I write an error checking system (similar to Luhn's algo in credit card numbers)?
Should I make use of a captcha system?

As for the CAPTCHA and lockout - I'd go for the CAPTCHA, and delay 1) clients that fail the CAPTCHA and 2) invalid logins: before checking, sleep 1 second on the 1st attempt, 2 on the second, 4 on the third, and 8 on every subsequent attempt. This won't inconvenience normal users too much, but it will slow down an attacker significantly. No matter what you do, people will get it wrong - no need to ban them outright.
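A minimal sketch of that kind of escalating per-attempt delay, assuming the failure count is kept in the session (the session key and the pin_is_valid() lookup are illustrative, not from the question):

<?php
session_start();

// Progressive delay before checking the submitted PIN:
// 1 s on the 1st failed attempt, 2 s on the 2nd, 4 s on the 3rd, 8 s thereafter.
$failed = $_SESSION['pin_failures'] ?? 0;          // hypothetical session key

if ($failed > 0) {
    sleep(min(8, 2 ** ($failed - 1)));
}

if (pin_is_valid($_POST['pin'] ?? '')) {           // pin_is_valid() stands in for your own lookup
    unset($_SESSION['pin_failures']);
    // ... show the purchase details ...
} else {
    $_SESSION['pin_failures'] = $failed + 1;
    // ... show the error (and, after a few failures, the CAPTCHA) ...
}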
The checksum - might be useful as a 6th character for detecting typing errors, not for security.
As far as password strength goes, this is a weak password - I wouldn't use it as the only form of authorization for anything stronger than "sharing pictures of lolcats". Consider a longer one, or your clients might even accidentally access each other's data (and they tend to get really upset when that happens: "you mean that anyone could see my data like that?!").

A PIN is being used instead of login
information because of the type of
clients we're targeting. The PIN is printed on documents sent to them.
Very strange, but yes, you could build it like this. I think you should reconsider whether it is really necessary. If I understand you correctly, you send the document via snail mail? Isn't it possible, for example, to send the user a PIN and then have them sign in with OpenID (LightOpenID)? I would lock it down to just Google's OpenID provider, because those accounts are reasonably "safe"; this way you add another layer of security, and Google already uses a CAPTCHA to verify accounts.
Is this practice acceptable?
I think it is acceptable, although very strange.
Should I write a lockout mechanism to
"ban" traffic from IPs with a large
amount of failed attempts?*
I think you should write a lockout mechanism, because brute-forcing a password is already easily done, and brute-forcing a short PIN takes no effort at all. I don't think you should do it by IP, though: end users can sit behind a shared router and would all be blocked, and attackers can use a botnet to spread the attempts across many addresses.
I read about Hashcash today thanks to stackoverflow.com and found it very interesting; maybe you could use that to protect yourself against automated attacks.
Should I write an error checking
system (similar to Luhn's algo in
credit card numbers)?
I don't think so.
Should I make use of a captcha system?
The only reliable way to prevent automated attacks is a CAPTCHA, so I think you should. Google/Twitter/etc. don't use CAPTCHAs because they are user friendly, but because they are the only working way to stop automated attacks. If you implement the PIN-plus-OpenID idea with Google, you can skip this step, because Google already has you covered.

First of all, ask not only for the PIN but also for something simple the customer knows, like with snail mail systems where you're often asked for your ZIP code. That sorts out people who do not know the "shared secret".
The CAPTCHA, if it's not annoyingly hard, makes sense, as it helps reduce "guess" attempts by bots. As Stefan mentioned, banning by IP is problematic because of shared IPs.
You could also implement some kind of "tar pit" that delays the processing of incoming connections when form posts fail your error checking. A simple algorithmic error check lets you avoid a useless database lookup for a malformed PIN.
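If you do add a check character purely for typo detection (as suggested above), a minimal sketch could look like this; the alphabet and the simple weighted mod-sum are illustrative, not a standard algorithm:

<?php
// Illustrative check character: a weighted sum over the PIN alphabet, modulo the alphabet size.
// This only catches typing mistakes before the database lookup; it adds nothing to security.
const PIN_ALPHABET = '23456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'; // I, 1, l, 0, O removed

function check_char(string $pin): string
{
    $n   = strlen(PIN_ALPHABET);
    $sum = 0;
    foreach (str_split($pin) as $i => $ch) {
        $sum += ($i + 1) * (int) strpos(PIN_ALPHABET, $ch);
    }
    return PIN_ALPHABET[$sum % $n];
}

function looks_well_formed(string $code): bool
{
    // The 6th character must match the check character computed from the first five.
    return strlen($code) === 6
        && check_char(substr($code, 0, 5)) === $code[5];
}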

1) Yes, depends on target audience though.
2) Sometimes it makes sense, sometimes it won't work because of how the system is used and how many clients share an IP address.
3) What value would it add? Won't that just help people trying to find a working PIN?
4) Depends on target audience, and what kind of captcha.

Yes, but it depends on the value of the information. If the information is valuable and you think someone may try to break in, you should consider additional protections.
It may be a good idea if the information you are protecting has a high value. In that case, warn the user that they have a limited number of attempts, and keep a log file to monitor failed PIN entries. Bear in mind that users behind NAT may share one IP address (everyone in an office or a school, for example, and some ISPs put a large group of customers behind a single IP), so don't block an IP for long: 15-30 seconds every 3-5 failures should be enough to stop a brute-force attack, and you can double it each time the same source keeps failing. Most importantly, block only the PIN entry form, not the whole site.
It's not strictly needed, but you can implement it; as I said, it also depends on the value of the information.
It's a great idea for keeping out proxies and crawlers, but I recommend something different: use an image with a question like "five plus 2 =" or "what's the color of a red apple?". These are much harder for crawlers to answer and much easier for users.
I also recommend using mt_rand() to generate the PIN (it is more efficient and better distributed than the default rand() and is built into PHP). Removing homoglyphs is a good way to avoid typing errors, but I would also create a longer code with separators, like
AXV2-X342-3420
so the user can tell it's a code and can easily count how many characters are left, or notice that they entered the wrong one.
I would avoid a case-sensitive PIN: upper-case characters are easier to read, and some users will simply type the code in all lower or all upper case even if you state clearly that it is case sensitive.
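As a rough sketch of that kind of generator (the alphabet and group layout are illustrative; on PHP 7+, random_int() is the cryptographically secure choice, while mt_rand() is fast but not meant for secrets):

<?php
// Generate a code like "AXV2-X342-3420" from a homoglyph-free, upper-case alphabet.
const CODE_ALPHABET = '23456789ABCDEFGHJKLMNPQRSTUVWXYZ'; // no 0/O, 1/I/l

function generate_code(int $groups = 3, int $groupLen = 4): string
{
    $max   = strlen(CODE_ALPHABET) - 1;
    $parts = [];
    for ($g = 0; $g < $groups; $g++) {
        $part = '';
        for ($i = 0; $i < $groupLen; $i++) {
            $part .= CODE_ALPHABET[random_int(0, $max)]; // CSPRNG, PHP 7+
        }
        $parts[] = $part;
    }
    return implode('-', $parts);
}

echo generate_code(); // e.g. "K7Q2-M9XW-4TZC"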

The axiom "If you're going to roll your own security, you've already failed," applies here.
For your 5-character [0-9, A-Z, a-z] PIN, you're generating roughly 30 bits of entropy (62^5 ≈ 9.2 × 10^8 ≈ 2^29.8), and less once the homoglyphs are removed.
At 1,000 guesses per second (which is very slow), an attacker can exhaust the whole space in under two weeks, and will hit some valid PIN far sooner if many PINs are live at once. Is that acceptable? Maybe for trivial systems where bypassing security has very little impact.
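For reference, the keyspace arithmetic can be checked in a few lines (the 1,000 guesses per second figure is just the example rate used above):

<?php
// Keyspace for a 5-character PIN over a 62-symbol alphabet (0-9, a-z, A-Z).
$alphabet = 62;
$length   = 5;

$combinations = $alphabet ** $length;          // 916,132,832
$bits         = $length * log($alphabet, 2);   // ~29.8 bits of entropy

$guessesPerSecond = 1000;                      // example rate from the text
$daysToExhaust    = $combinations / $guessesPerSecond / 86400; // ~10.6 days

printf("%d combinations, %.1f bits, %.1f days to exhaust at %d guesses/s\n",
       $combinations, $bits, $daysToExhaust, $guessesPerSecond);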
Should I write a lockout mechanism to "ban" traffic from IPs with a large amount of failed attempts?
IPs can be spoofed.
Should I write an error checking system (similar to Luhn's algo in credit card numbers)?
That would actually reduce the entropy of the PIN, making it easier to break into your system.
Should I make use of a captcha system?
If you feel you need the exercise. Captchas have been broken and are useless for anything other than as a speed bump.
Update
Unfortunately, there is no cut-and-dried solution for computer security, as it is still an immature (undermature?) field. Anyone who says, "Oh, do this-and-this and you'll be fine" is skipping one of the most fundamental issues around security.
Security is always a tradeoff
The ultimately secured system cannot be accessed. On the other end, the ultimately-accessible system has no barrier to entry. Obviously, both extremes are unacceptable.
Your job as a software developer is to find the sweet spot between the two. This will depend upon several factors:
The technical expertise of your users
The willingness of your users to put up with security
The cost (time and money) to implement and maintain (i.e., a more sophisticated system will generate more support calls)
The impact a breach would have on your users and company
The likelihood of a breach: are you the US Department of Defense? Visa? You're probably under attack now. Bob's Bicycle Shop in Ojai, CA is on the other end of the spectrum.
From your question, I take it that you're effectively generating their "password" for them. What if you flipped it on its head? Make the PIN their account identifier, and the first time they log into your system have them create a password* that is then required from then on.
*Yes, if this is a bank, then this is not a good idea.

Related

Using correct IP with PHP [duplicate]

As a response to the recent Twitter hijackings and Jeff's post on Dictionary Attacks, what is the best way to secure your website against brute force login attacks?
Jeff's post suggests putting in an increasing delay for each attempted login, and a suggestion in the comments is to add a captcha after the 2nd failed attempt.
Both these seem like good ideas, but how do you know what "attempt number" it is? You can't rely on a session ID (because an attacker could change it each time) or an IP address (better, but vulnerable to botnets). Simply logging it against the username could, using the delay method, lock out a legitimate user (or at least make the login process very slow for them).
Thoughts? Suggestions?
I think a database-persisted short lockout period for the given account (1-5 minutes) is the only way to handle this. Each userid in your database contains a timeOfLastFailedLogin and numberOfFailedAttempts. When numberOfFailedAttempts > X, you lock out for some minutes.
This means you're locking the userid in question for some time, but not permanently. It also means you're updating the database for each login attempt (unless it is locked, of course), which may cause other problems.
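A minimal sketch of that per-account lockout, assuming a users table with the numberOfFailedAttempts and timeOfLastFailedLogin columns named above and a PDO connection (table name and thresholds are illustrative):

<?php
// Per-account lockout: suspend the user ID for a few minutes after X failures.
const MAX_FAILURES    = 5;
const LOCKOUT_SECONDS = 300; // 5 minutes

function is_locked_out(PDO $db, string $username): bool
{
    $stmt = $db->prepare(
        'SELECT numberOfFailedAttempts, timeOfLastFailedLogin
           FROM users WHERE username = ?'
    );
    $stmt->execute([$username]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    return $row
        && $row['numberOfFailedAttempts'] >= MAX_FAILURES
        && (time() - strtotime($row['timeOfLastFailedLogin'])) < LOCKOUT_SECONDS;
}

function record_failure(PDO $db, string $username): void
{
    $db->prepare(
        'UPDATE users
            SET numberOfFailedAttempts = numberOfFailedAttempts + 1,
                timeOfLastFailedLogin  = NOW()
          WHERE username = ?'
    )->execute([$username]);
}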
At least one whole country in Asia is NAT'ed, so IPs cannot be used for anything.
In my eyes there are several possibilities, each having cons and pros:
Forcing secure passwords
Pro: Will prevent dictionary attacks
Con: Will also hurt adoption, since most users cannot remember complex passwords, even if you explain how to memorize them, for example by remembering sentences: "I bought 1 Apple for 5 Cent in the Mall" leads to "Ib1Af5CitM".
Lockouts after several attempts
Pro: Will slow down automated tests
Con: It's easy for third parties to lock out users
Con: Making them persistent in a database can result in a lot of write processes in such huge services as Twitter or comparables.
Captchas
Pro: They prevent automated testing
Con: They are consuming computing time
Con: Will "slow down" the user experience
HUGE CON: They are NOT barrier-free
Simple knowledge checks
Pro: Will prevent automated testing
Con: "Simple" is in the eye of the beholder.
Con: Will "slow down" the user experience
Different login and username
Pro: This is a technique that is rarely seen, but in my eyes a pretty good start for preventing brute-force attacks.
Con: Depends on the user's choice of the two names.
Use whole sentences as passwords
Pro: Increases the size of the searchable space of possibilities.
Pro: Are easier to remember for most users.
Con: Depends on the user's choice.
As you can see, the "good" solutions all depend on the user's choice, which again reveals the user as the weakest link in the chain.
Any other suggestions?
You could do what Google does: after a certain number of tries they show a CAPTCHA, and after a couple more failures with the CAPTCHA they lock you out for a couple of minutes.
I tend to agree with most of the other comments:
Lock after X failed password attempts
Count failed attempts against username
Optionally use CAPTCHA (for example, attempts 1-2 are normal, attempts 3-5 are CAPTCHA'd, further attempts blocked for 15 minutes).
Optionally send an e-mail to the account owner to remove the block
What I did want to point out is that you should be very careful about forcing "strong" passwords, as this often means they'll just be written on a post-it on the desk/attached to the monitor. Also, some password policies lead to more predictable passwords. For example:
If the password cannot be any previously used password and must include a number, there's a good chance that it'll be a common password with a sequential number after it. If you have to change your password every 6 months, and a person has been there two years, chances are their password is something like password4.
Say you restrict it even more: must be at least 8 characters, cannot have any sequential letters, must have a letter, a number and a special character (this is a real password policy that many would consider secure). Trying to break into John Quincy Smith's account? Know he was born March 6th? There's a good chance his password is something like jqs0306! (or maybe jqs0306~).
Now, I'm not saying that letting your users have the password password is a good idea either, just don't kid yourself thinking that your forced "secure" passwords are secure.
To elaborate on the best practice:
What krosenvold said: log num_failed_logins and last_failed_time in the user table (except when the user is suspended), and once the number of failed logins reaches a threshold, suspend the user for 30 seconds or a minute. It is the best practice.
That method effectively eliminates single-account brute-force and dictionary attacks. However, it does not prevent an attacker from switching between user names - ie. keeping the password fixed and trying it with a large number of usernames. If your site has enough users, that kind of attack can be kept going for a long time before it runs out of unsuspended accounts to hit. Hopefully, he will be running this attack from a single IP (not likely though, as botnets are really becoming the tool of the trade these days) so you can detect that and block the IP, but if he is distributing the attack... well, that's another question (that I just posted here, so please check it out if you haven't).
One additional thing to remember about the original idea is that you should of course still try to let the legitimate user through, even while the account is being attacked and suspended -- that is, IF you can tell the real user and the bot apart.
And you CAN, in at least two ways.
If the user has a persistent login ("remember me") cookie, just let him pass through.
When you display the "I'm sorry, your account is suspended due to a large number of unsuccessful login attempts" message, include a link that says "secure backup login - HUMANS ONLY (bots: no lying)". Joke aside, when they click that link, give them a reCAPTCHA-authenticated login form that bypasses the account's suspend status. That way, IF they are human AND know the correct login+password (and are able to read CAPTCHAs), they will never be bothered by delays, and your site will be impervious to rapid-fire attacks.
Only drawback: some people (such as the vision-impaired) cannot read CAPTCHAs, and they MAY still be affected by annoying bot-produced delays IF they're not using the autologin feature.
What ISN'T a drawback: that the autologin cookie doesn't have a similar security measure built-in. Why isn't this a drawback, you ask? Because as long as you've implemented it wisely, the secure token (the password equivalent) in your login cookie is twice as many bits (heck, make that ten times as many bits!) as your password, so brute-forcing it is effectively a non-issue. But if you're really paranoid, set up a one-second delay on the autologin feature as well, just for good measure.
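A sketch of issuing such a high-entropy autologin token (the cookie name, column name and lifetime are illustrative, and $db/$userId are assumed to exist); only a hash of the token is stored server side:

<?php
// Issue a "remember me" token with ~256 bits of entropy (PHP 7+).
$token = bin2hex(random_bytes(32));            // 64 hex chars = 256 bits

// Store only a hash of the token server side (illustrative column name).
$stmt = $db->prepare('UPDATE users SET autologin_hash = ? WHERE id = ?');
$stmt->execute([hash('sha256', $token), $userId]);

// Send the raw token to the browser; HttpOnly + Secure keep it away from scripts and plain HTTP.
setcookie('autologin', $token, [                // PHP 7.3+ options syntax
    'expires'  => time() + 60 * 60 * 24 * 30,   // 30 days
    'path'     => '/',
    'secure'   => true,
    'httponly' => true,
    'samesite' => 'Lax',
]);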
You should implement a cache in the application not associated with your backend database for this purpose.
First and foremost, delaying only legitimate usernames gives away your valid customer base en masse, which can itself be a problem even if usernames are not a closely guarded secret.
Second, depending on your application, you can be a little smarter with application-specific delay countermeasures than you could be if you stored the data in a DB.
It's also resistant to high-speed attempts that would otherwise leak a DoS condition into your backend DB.
Finally, it is acceptable to make some decisions based on IP. If you see single attempts from one IP, chances are it's an honest mistake; with attempts from multiple IPs from who knows how many systems, you may want to take other precautions or notify the end user of shady activity.
It's true that large proxy federations can have massive numbers of IP addresses reserved for their use, but most make a reasonable effort to keep your source address stable for a period of time for legacy reasons, since some sites have a habit of tying cookie data to the IP.
Do what most banks do: lock out the username/account after X login failures. But I wouldn't be as strict as a bank, where you must call in to unlock your account; I would just use a temporary lockout of 1-5 minutes. Unless, of course, the web application is as data-sensitive as a bank. :)
This is an old post. However, I thought of putting my findings here so that it might help any future developer.
We need to prevent brute-force attacks so that an attacker cannot harvest the user names and passwords of a website login. Many systems have some open-ended URLs which do not require an authentication token or API key for authorization, and most of these APIs are critical; for example, Signup, Login and Forgot Password APIs are often open (i.e. they do not require validation of an authentication token). We need to ensure that these services are not abused. As stated earlier, I am just putting my findings here from studying how we can prevent a brute-force attack efficiently.
Most of the common prevention techniques are already discussed in this post. I would like to add my concerns regarding account locking and IP address locking. I think locking accounts is a bad idea as a prevention technique; I am putting some points here to support my case.
Account locking is bad
An attacker can cause a denial of service (DoS) by locking out large numbers of accounts.
Because you cannot lock out an account that does not exist, only valid account names will lock. An attacker could use this fact to harvest usernames from the site, depending on the error responses.
An attacker can cause a diversion by locking out many accounts and flooding the help desk with support calls.
An attacker can continuously lock out the same account, even seconds after an administrator unlocks it, effectively disabling the account.
Account lockout is ineffective against slow attacks that try only a few passwords every hour.
Account lockout is ineffective against attacks that try one password against a large list of usernames.
Account lockout is ineffective if the attacker is using a username/password combo list and guesses correctly on the first couple of attempts.
Powerful accounts such as administrator accounts often bypass lockout policy, but these are the most desirable accounts to attack. Some systems lock out administrator accounts only on network-based logins.
Even once you lock out an account, the attack may continue, consuming valuable human and computer resources.
Consider, for example, an auction site on which several bidders are fighting over the same item. If the auction web site enforced account lockouts, one bidder could simply lock the others' accounts in the last minute of the auction, preventing them from submitting any winning bids. An attacker could use the same technique to block critical financial transactions or e-mail communications.
IP address locking for an account is a bad idea too
Another solution is to lock out an IP address with multiple failed logins. The problem with this solution is that you could inadvertently block large groups of users by blocking a proxy server used by an ISP or large company. Another problem is that many tools utilize proxy lists and send only a few requests from each IP address before moving on to the next. Using widely available open proxy lists at websites such as http://tools.rosinstrument.com/proxy/, an attacker could easily circumvent any IP blocking mechanism. Because most sites do not block after just one failed password, an attacker can use two or three attempts per proxy. An attacker with a list of 1,000 proxies can attempt 2,000 or 3,000 passwords without being blocked. Nevertheless, despite this method's weaknesses, websites that experience high numbers of attacks, adult Web sites in particular, do choose to block proxy IP addresses.
My proposition
Do not lock the account. Instead, consider adding an intentional server-side delay to login/signup attempts after consecutive wrong attempts.
Track user location based on the IP address of login attempts, which is a common technique used by Google and Facebook. Google sends an OTP, while Facebook presents other security challenges such as identifying the user's friends in photos.
Use Google reCAPTCHA for web applications, SafetyNet for Android, and a proper mobile application attestation technique for iOS, in login or signup requests.
Device cookie
Build an API call monitoring system to detect unusual calls to a certain API endpoint.
Propositions Explained
Intentional delay in response
The password authentication delay significantly slows down the attacker, since the success of the attack is dependent on time. An easy solution is to inject random pauses when checking a password. Adding even a few seconds' pause will not bother most legitimate users as they log in to their accounts.
Note that although adding a delay could slow a single-threaded attack, it is less effective if the attacker sends multiple simultaneous authentication requests.
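For example, a tiny randomized pause before the credential check (the bounds are arbitrary):

<?php
// Random pause of 0.5-2.5 seconds before verifying credentials; barely noticeable
// to a real user, but it slows a single-threaded guessing loop considerably.
usleep(random_int(500000, 2500000));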
Security challenges
This technique can be described as adaptive security challenges based on the user's earlier actions in the system. For a new user, it might fall back to default challenges.
When should we throw a security challenge? There are several possible points:
When the user is trying to log in from a location they have not been near before.
After wrong login attempts.
What kind of security challenge might the user face?
If the user has set up security questions, we can ask for the answers to those.
For applications like WhatsApp, Viber etc., we can take some random contact names from the phone book and ask for their numbers, or vice versa.
For transactional systems, we can ask the user about their latest transactions and payments.
API monitoring panel
Build a monitoring panel for API calls.
Look for the conditions that could indicate a brute-force attack or other account abuse in the API monitoring panel.
Many failed logins from the same IP address.
Logins with multiple usernames from the same IP address.
Logins for a single account coming from many different IP addresses.
Excessive usage and bandwidth consumption from a single user.
Failed login attempts from alphabetically sequential usernames or passwords.
Logins with suspicious passwords hackers commonly use, such as ownsyou (ownzyou), washere (wazhere), zealots, hacksyou etc.
For internal system accounts we might consider allowing login only from certain IP addresses. If the account locking needs to be in place, instead of completely locking out an account, place it in a lockdown mode with limited capabilities.
Here are some good reads.
https://en.wikipedia.org/wiki/Brute-force_attack#Reverse_brute-force_attack
https://www.owasp.org/index.php/Blocking_Brute_Force_Attacks
http://www.computerweekly.com/answer/Techniques-for-preventing-a-brute-force-login-attack
I think you should log against the username. This is the only constant (anything else can be spoofed). And yes, it could lock out a legitimate user for a day, but if I must choose between a hacked account and a closed account (for a day), I definitely choose the lockout.
By the way, after a third failed attempt (within a certain time) you can lock the account and send a release mail to the owner. The mail contains a link to unlock the account. This is a slight burden on the user, but the cracker is blocked. And if even the mail account is hacked, you can set a limit on the number of unlocks per day.
A lot of message boards that I log into give me 5 attempts at logging into an account; after those 5 attempts the account is locked for an hour or fifteen minutes. It may not be pretty, but it certainly slows down a dictionary attack on one account. Nothing stops a dictionary attack against multiple accounts at the same time, i.e. try 5 times, switch to a different account, try another 5 times, then circle back, but it sure does slow down the attack.
The best defense against a dictionary attack is to make sure the passwords are not in a dictionary! Set up some sort of password policy that checks the letters against a dictionary and requires a number or symbol in the password.
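A rough sketch of that kind of dictionary check, assuming a plain word list on disk ($candidate and the word-list path are illustrative):

<?php
// Reject passwords that are just a dictionary word, optionally with digits/symbols bolted on.
function is_dictionary_password(string $password, string $wordlist = '/usr/share/dict/words'): bool
{
    $stripped = strtolower(preg_replace('/[\d\W_]+/', '', $password)); // keep letters only
    foreach (file($wordlist, FILE_IGNORE_NEW_LINES) as $word) {
        if ($stripped === strtolower(trim($word))) {
            return true;
        }
    }
    return false;
}

$ok = !is_dictionary_password($candidate)
    && preg_match('/\d/', $candidate)           // at least one digit
    && preg_match('/[^a-zA-Z\d]/', $candidate); // at least one symbol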
You could add some form of CAPTCHA test, but beware that most of them make access more difficult for visually or hearing impaired people. An interesting form of CAPTCHA is to ask a question:
What is the sum of 2 and 2?
And if you record the last login failure, you can skip the CAPTCHA if it is old enough. Only do the CAPTCHA test if the last failure was during the last 10 minutes.
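A sketch of that kind of question-based check combined with the "skip if the last failure is old" rule (the session keys and the 10-minute window are illustrative):

<?php
session_start();

$lastFailure  = $_SESSION['last_login_failure'] ?? 0;   // illustrative session key
$needsCaptcha = (time() - $lastFailure) < 600;           // only within 10 minutes of a failure

if ($needsCaptcha && !isset($_SESSION['captcha_answer'])) {
    $a = random_int(1, 9);
    $b = random_int(1, 9);
    $_SESSION['captcha_answer'] = $a + $b;
    echo "What is the sum of $a and $b?";
}

// On submit: compare the posted answer with the stored one before checking the password.
$captchaOk = !$needsCaptcha
    || (int) ($_POST['captcha'] ?? -1) === ($_SESSION['captcha_answer'] ?? null);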
For .NET Environment
Dynamic IP Restrictions
The Dynamic IP Restrictions Extension for IIS provides IT Professionals and Hosters a configurable module that helps mitigate or block Denial of Service Attacks or cracking of passwords through Brute-force by temporarily blocking Internet Protocol (IP) addresses of HTTP clients who follow a pattern that could be conducive to one of such attacks. This module can be configured such that the analysis and blocking could be done at the Web Server or the Web Site level.
Reduce the chances of a Denial of Service attack by dynamically blocking requests from malicious IP addresses
Dynamic IP Restrictions for IIS allows you to reduce the probabilities of your Web Server being subject to a Denial of Service attack by inspecting the source IP of the requests and identifying patterns that could signal an attack. When an attack pattern is detected, the module will place the offending IP in a temporary deny list and will avoid responding to the requests for a predetermined amount of time.
Minimize the possibilities of Brute-force-cracking of the passwords of your Web Server
Dynamic IP Restrictions for IIS is able to detect request patterns that indicate an attempt to crack the Web Server's passwords. The module will place the offending IP on a deny list for a predetermined amount of time. In situations where authentication is done against Active Directory Services (ADS), the module helps maintain the availability of the Web Server by avoiding unnecessary authentication challenges to ADS.
Features
Seamless integration into IIS 7.0 Manager.
Dynamic blocking of requests from an IP address based on either of the following criteria:
The number of concurrent requests.
The number of requests over a period of time.
Support for a list of IPs that are allowed to bypass Dynamic IP Restriction filtering.
Blocking of requests can be configured at the Web Site or Web Server level.
Configurable deny actions allow IT Administrators to specify what response is returned to the client. The module supports returning status codes 403 and 404, or closing the connection.
Support for IPv6 addresses.
Support for web servers behind a proxy or firewall that may modify the client IP address.
http://www.iis.net/download/DynamicIPRestrictions
Old post, but let me share what I use as of the end of 2016. I hope it still helps.
It's a simple approach, but I think it's a powerful way to prevent login attacks. At least, I use it on every site of mine, and it needs no CAPTCHA or any other third-party plugins.
When the user first reaches the login page, we create a session counter:
$_SESSION['loginFail'] = 10; // any number you prefer
If the login succeeds, we destroy it and let the user in:
unset($_SESSION['loginFail']); // put it after creating the login session
But if the login fails, then alongside the usual error message we decrease the counter by 1:
$_SESSION['loginFail']--; // reduce by 1 for every error
and if the user fails 10 times, we redirect them to another website or any other page:
if (isset($_SESSION['loginFail']) && $_SESSION['loginFail'] < 1) {
    header('Location: https://google.com/'); // or any other page
    exit();
}
This way, the user can no longer open or reach our login page, because it redirects them to another site.
The user has to close the browser (destroying the loginFail session we created) and open it again to see our login page again.
Is it helpful?
There are several aspects to consider when preventing brute force:
Password Strength
Force users to create a password that meets specific criteria:
The password should contain at least one uppercase letter, lowercase letter, digit and symbol (special character).
The password should have a minimum length defined by your policy.
The password should not contain the user name or the public user ID.
With a minimum password-strength policy, brute force will take longer to guess the password; meanwhile, your app can detect such an attack and mitigate it.
reCAPTCHA
You can use reCAPTCHA to keep out bot scripts with brute-force capability. It's fairly easy to add Google reCAPTCHA to a web application, and it comes in several flavors such as Invisible reCAPTCHA and reCAPTCHA v3.
Dynamic IP Filtering Policy
You can dynamically identify request patterns and block an IP if the pattern matches your attack criteria. One of the most popular techniques for filtering login attempts is throttling; read up on throttling techniques in PHP to learn more. A good dynamic IP filtering policy also protects you from attacks like DoS and DDoS, although it doesn't help against DRDoS.
CSRF Prevention Mechanism
CSRF stands for cross-site request forgery: other sites submitting forms to your PHP script/controller. Laravel has a well-defined approach to preventing CSRF; if you are not using such a framework, you have to design your own (for example JWT-based) CSRF prevention mechanism. If your site is CSRF-protected, it becomes much harder to launch an automated brute-force attack through your forms; it's like closing the main gate.
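Without a framework, a minimal session-based CSRF token (a simpler variant than the JWT approach mentioned above) might look like this:

<?php
session_start();

// Generate one token per session and embed it in every form.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

function csrf_field(): string
{
    return '<input type="hidden" name="csrf_token" value="'
         . htmlspecialchars($_SESSION['csrf_token']) . '">';
}

// On every POST, reject requests whose token does not match.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $sent = $_POST['csrf_token'] ?? '';
    if (!hash_equals($_SESSION['csrf_token'], $sent)) {
        http_response_code(403);
        exit('Invalid CSRF token');
    }
}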

Is it possible to somehow get this randomly generated key for my site and access the SQL?

I have a PHP/JS site where information is encoded and put into the database. The encryption key for the information is randomly generated, then given back to the user after they submit a post through a form. The encryption key is not stored in my database at all. A separate, randomly generated ID is formed and stored in the database, used to look up the item itself before deciphering it.
My question is: is it possible at all to look through the logs and find information that would reveal the key? I am trying to make it impossible to read any of the SQL data without either being the person who has the code (who can do whatever they want with it) or mounting a brute-force attack (unavoidable if someone gets my SQL database).
Just to re-iterate my steps:
User sends information through POST
The PHP file generates a random ID and access key. The data is encrypted with the access key and put in the database with the ID as the PRIMARY KEY.
The PHP file echoes just the random ID and the access key.
The website uses jQuery to create a link from the key, e.g. mysite.com?i=cYFogD3Se8RkLSE1CA [9 characters A-Z a-z 0-9 = ID][9 characters A-Z a-z 0-9 = key]
Is there any possibility that someone with access to my server could read the information? I don't want it to be possible even for me to read the messages without the key. The information has to be decodable; it can't be one-way encoding.
I like your system of having the URL contain the decryption key, so that not even you can access the data without information that exists only on the user's computer.
I still see a few gotchas in this.
URLs are often saved in web server logs. If you're logging to disk, and they get the disk, then they get the keys.
If the attacker has access to your database, he may have enough access to your system to secretly install software that logs the URLs. He could even do something as prosaic as turn logging back on.
The person visiting your site will have the URL bookmarked at least (otherwise it is useless to him) and it will likely appear in his browser history. Normally, bookmarks and history are not considered secure data. Thus, an attacker to a user's computer (either by sitting down directly or if the computer is compromised by malware) can access the data as well. If the payload is desirable enough, someone could create a virus or malware that specifically mines for your static authentication token, and could achieve a reasonable hit rate. The URLs could be available to browser plugins, even, or other applications acting under a seemingly reasonable guise of "import your bookmarks now".
So it seems to me that the best security is then for the client to not just have the bookmark (which, while it is information, it is not kept in anyone's head so can be considered "something he has"), but also for him to have to present "something he knows", too. So encrypt with his password, too, and don't save the password. When he presents the URL, ask for a password, and then decrypt with both (serially or in combination) and the data is secure.
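A sketch of combining the URL token ("something you have") with the user's password ("something you know") into one key, assuming openssl and PBKDF2; the iteration count, cipher and helper names are illustrative:

<?php
// Derive one encryption key from both the URL token and the user's password;
// neither the key nor the password is stored server side.
function derive_key(string $urlToken, string $password, string $salt): string
{
    return hash_pbkdf2('sha256', $urlToken . $password, $salt, 100000, 32, true);
}

function encrypt_payload(string $plaintext, string $key): string
{
    $iv     = random_bytes(16);
    $cipher = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    return base64_encode($iv . $cipher);   // store this, with the salt, in the database
}

function decrypt_payload(string $stored, string $key)
{
    $raw = base64_decode($stored);
    return openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key,
                           OPENSSL_RAW_DATA, substr($raw, 0, 16));
}
// For production use, an authenticated mode (AES-GCM) or libsodium would be preferable.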
Finally, I know that Google's two-factor authentication can be used by third parties (for example, I use it with Dropbox). This creates another "something you have" by requiring the person accessing the resource to have his cell phone, or nothing. Yes, there is recourse if you lose your cell phone, but it usually involves another phone number, or a special Google-supplied one-time long password that has been printed out and stashed in one's wallet.
Let's start with some basic definitions:
Code Protecting data by translating it to another language, usually a private language. English translated to Spanish is encoded, but it's not very secure since many people understand Spanish.
Cipher Protecting data by scrambling it up using a key. A letter substitution cipher first documented by Julius Caesar is an example of this. Modern techniques involve mathematical manipulation of binary data using prime numbers. The best techniques use asymmetric keys; the key that is used to encipher the data cannot decipher it, a different key is needed. This allows the public key to be published and is the basis of SSL browser communication.
Encryption Protecting data by encoding and/or enciphering it.
All of these terms are often used interchangeably but they are different and the differences are sometimes important. What you are trying to do is to protect the data by a cipher.
If the data is "in clear" then if it is intercepted it is lost. If it is enciphered, then both the data and the key need to be intercepted. If it is enciphered and encoded, then the data, the key and the code need to be intercepted.
Where is your data vulnerable?
The most vulnerable place for any data is when it is in the clear, in the personal possession of somebody: on a storage device (USB, CD, a piece of paper) or inside their head, since that person is vulnerable to inducement or coercion. This is the foundation of Wikileaks - people who are trusted with in-confidence information are induced to betray that confidence; the ethics of this I leave to your individual consciences.
When it is in transit between the client and the server and vice versa. Except for data of national security importance the SSL method of encryption should be adequate.
When it is in memory in your program. The source code of your program is the best place to store your keys; however, the keys themselves need to be stored encrypted with a password that you enter each time your program runs (best), that is entered when you compile and publish, or that is embedded in your code (worst) - a rough sketch of the first option appears below. Unless you have a very good reason, one key should be adequate, not one per user. You should also keep in-memory data encrypted except when you actually need it, and you should use any in-memory in-clear data structures immediately and destroy them as soon as you are finished with them. The key has to be stored somewhere or else the data is irrecoverable. But consider: who has access to the source code (including backups and superseded versions), and how can you check for backdoors or trojans?
When it is in transit between your program's machine and the data store. If you only send encrypted data between the program and the data store and DO NOT store the key in the data store this should be OK.
When it is stored in the data store. Ditto.
Do not overlook physical security; quite often the easiest way to steal data is to walk up to the server and copy the hard drive. Many companies (and, sadly, defence/security forces) spend millions on online data security and then put their data in a room with no lock. They also have access protocols that a 10-year-old child could circumvent.
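As a rough sketch of the "password you enter each time your program runs" option from the in-memory point above - the ENCRYPTED_MASTER_KEY and MASTER_KEY_IV constants, the prompt and the openssl usage are all hypothetical placeholders:

<?php
// Sketch only: the master key exists in the source only in encrypted form and
// is unlocked with a passphrase typed in when the program starts (CLI assumed).
const ENCRYPTED_MASTER_KEY = '...base64 ciphertext produced offline...';
const MASTER_KEY_IV        = '...base64 IV used when it was encrypted...';

echo 'Passphrase: ';
$passphrase = trim(fgets(STDIN));

$masterKey = openssl_decrypt(
    base64_decode(ENCRYPTED_MASTER_KEY),
    'aes-256-cbc',
    hash('sha256', $passphrase, true),
    OPENSSL_RAW_DATA,
    base64_decode(MASTER_KEY_IV)
);

if ($masterKey === false) {
    exit('Wrong passphrase - master key could not be recovered.');
}
// Use $masterKey only where needed, and overwrite it as soon as you are done.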
You now have lovely encrypted data - how are you going to stop your program from serving it up in the clear to anyone who asks for it?
This brings us to identification, validation and authorisation. More definitions:
Identification - A claim made by a person that they are so-and-so. This is usually handled in a computer program by a user name. In physical security applications it is by a person presenting themselves and saying "I am so-and-so"; this can be explicit, by a verbal statement or by presenting an identity document like a passport, or implicit, such as a guard you know recognising you.
Validation - This is the proof that a person is who they say they are. In a computer this is the role of the password; more accurately, it proves that they know the password of the person they claim to be, which is the big, massive, huge and insurmountable problem in the whole thing. In physical security it is done by comparing physical metrics (appearance, height, etc.), as documented in a trusted document like a passport, against the claim; you need to have protocols in place to ensure that you can trust the document. Incidentally, this is the main cause of problems with face-recognition technology used to identify bad guys: it uses a validation technique to try to identify someone. "This guy looks like Bad Guy #1"; guess what? So do a lot of people in a population of 7 billion.
Authorisation - Once a person has been identified and validated, they are then given authorisation to do certain things and go to certain places. They may be given a temporary identification document for this; think of a visitor ID badge or a cookie. Depending on where they go, they may be required to re-identify and re-validate themselves; think of a bank's website: you identify and validate yourself to see your bank accounts, and you do it again to make transfers or payments.
By and large, this is the weakest part of any computer security system; it is hard for me to steal your data, but far easier for me to steal your identity and have the data given to me.
In your case, this is probably not your concern; provided that you do the normal thing of allowing the user to set, change and retrieve their password in the normal commercial manner, you have probably done all you can.
Remember, data security is a trade off between security on the one hand and trust and usability on the other. Make things too hard (like high complexity passwords for low value data) and you compromise the whole system (because people are people and they write them down).
Like everything in computers – users are a problem!
Why are you protecting this data, and what are you willing to spend to do so?
This is a classic risk-management question. In effect, you need to consider the adverse consequences of losing this data, the risk of this happening with your present level of safeguards, and whether the reduction in risk that additional safeguards would bring is worth their cost.
Losing the data can mean any or all of:
Having it made public
Having it fall into the wrong person's hands
Having it destroyed, maliciously or accidentally. (Backup, people!)
Having it changed. If you know it has been changed this is equivalent to losing it; if you don’t this can be much, much worse since you may be acting on false data.
This type of thinking is what leads to the classification of data in defence and government into Top Secret, Secret, Restricted and Unrestricted (Australian classifications). The human element intervenes again here: due to the nature of bureaucracy there is no incentive to give a document a low classification and plenty of disincentive, so documents are routinely over-classified. This in turn means that many documents with a Restricted classification end up being distributed to people who don't have the appropriate clearance, simply to make the damn thing work.
You can think of this as a hierarchy as well; my personal way of thinking about it is:
Defence of the Realm - Compromise will have serious adverse consequences for the strategic survival of my country/corporation/family, whatever level you are thinking about.
Life and Death - Compromise will put someone's life or health in danger.
Financial - Compromise will allow someone to have their money/car/boat/space shuttle stolen.
Commercial - Compromise will cause loss of future financial gain.
Humiliating - Compromise will cause embarrassment. Of course, if you are a politician this is probably No. 1.
Personal - These are details that you would rather not have released but aren't particularly earth-shaking. I would put my personal medical history in here, but the impact of contravening privacy laws may push it up to Humiliating (if people find out) or Financial (if you get sued or prosecuted).
Private - This is stuff that is nobody else's business but doesn't actually hurt you if they find out.
Public - Print it in the paper for all anyone cares.
Irrespective of the level, you don't want any of this data lost or changed, but if it is, you need to know that it has happened. For the Nazis, having their Enigma cipher broken was bad; not knowing it had happened was catastrophic.
In the comments below, I have been asked to describe best practice. This is impossible without knowing the risk of the data (and risk tolerance of the organisation). Spending too much on data security is as bad as spending too little.
First and most importantly, you need a really good, watertight legal disclaimer.
Second, don’t store the user’s data at all.
Instead, when the user submits the data (using SSL), generate a hash of the SessionID and your system's datetime. Store this hash in your table along with the datetime and get back the record ID. Encrypt the user's data with this hash, generate a URL with the record ID and the encrypted data within it, and send this back to the user (again using SSL). Security of this URL is now the user's problem, and you no longer have any record of what they sent (make sure it is not logged).
Routinely, delete stale (4h,24h?) records from the database.
When a retrieval request comes in (using SSL), look up the hash; if it's not there, tell the user the URL is stale. If it is, decrypt the data they sent back, return it (using SSL) and delete the record from your database.
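A condensed sketch of that flow, assuming PHP sessions, openssl and PDO; the table, column and parameter names are made up for illustration:

<?php
// Sketch only, submit side: the key (the hash) stays in the database, the
// encrypted payload travels in the URL, and nothing readable is stored.
$hash = hash('sha256', session_id() . microtime(true));

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare('INSERT INTO tickets (hash, created_at) VALUES (?, NOW())')->execute([$hash]);
$recordId = $pdo->lastInsertId();

$iv      = random_bytes(16);
$payload = openssl_encrypt($_POST['data'], 'aes-256-cbc', $hash, OPENSSL_RAW_DATA, $iv);
$url     = 'https://example.com/retrieve.php?id=' . $recordId
         . '&d=' . rawurlencode(base64_encode($iv . $payload));

// Retrieval side: purge stale records, look up the hash, decrypt what the
// user sent back, return it and delete the record.
$pdo->exec('DELETE FROM tickets WHERE created_at < NOW() - INTERVAL 4 HOUR');
$stmt = $pdo->prepare('SELECT hash FROM tickets WHERE id = ?');
$stmt->execute([$_GET['id']]);
$hash = $stmt->fetchColumn();
if ($hash === false) {
    exit('This link has expired.');
}
$raw  = base64_decode($_GET['d']);
$data = openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $hash, OPENSSL_RAW_DATA, substr($raw, 0, 16));
$pdo->prepare('DELETE FROM tickets WHERE id = ?')->execute([$_GET['id']]);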
Let's have a little think:
Use SSL - Data is encrypted
Use username/password for authorisation
If somebody breaks that, you do have a problem with security.
Spend the effort on fixing that. Disaster recovery is a waste of effort in this case; just get the base cases correct.

Security of using a long hash in the querystring to identify a dynamic document

I'm working on a small web app that generates exercise program printouts. A user (i.e. a personal trainer) can create an exercise program, then enter the email address of one of their clients. A link to the exercise program then gets sent to the client, like so...
http://www.myurl.com/generate.php?hash=abiglonghash...
The hash is a sha512 string.
I don't want people to be able to easily discover other people's exercise programs. At the same time, I would really prefer to avoid prompting people for additional password info, etc., when they click on that link. I would like a client to click on the link in their email and immediately get their program, no fuss.
I'm wondering what thoughts are as to the security of the above, without additional authentication? I know it's not Fort Knox, but it seems about as safe to me as a typical "Forgot your password" system. Any thoughts, suggestions as to how this could be improved, without getting into full-blown user authentication?
Thanks in advance,
A "forgot password" system typically does a few things:
Requires you to "know" something once you get to the page (like your mother's maiden name, your high school, etc)
Sends you your new password in an email. Even if you get to the 'forgot password' URL of another user, the new password is sent to that user's email address on file. This means you would need access to their inbox, as well as their "secret question"
For your purposes, a SHA512 string should be secure enough. Using a SHA512 hash is similar to using a UUID, in theory: it is long enough that it is statistically improbable anyone could guess someone else's hash. The odds of it happening are astronomically low.
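The one caveat is that the unguessability comes from what you feed into the hash, not from its length alone; a small sketch of the difference, with PHP's random_bytes assumed to be available:

<?php
// Sketch only: hashing a cryptographically random input gives a 128-character
// token with no pattern an attacker can exploit.
$hash = hash('sha512', random_bytes(64));

// Hashing predictable data (don't do this) lets an attacker rebuild the token:
// $hash = hash('sha512', $clientEmail . date('Y-m-d'));

$link = 'http://www.myurl.com/generate.php?hash=' . $hash;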
Of course, there are always easier ways than guessing to get access to someone else's hash. Things like the user's browser history, intercepting their net traffic, looking over their shoulder, etc. Only SSL combined with a protective login system would protect against those things.
Security always brings up questions like these. Obviously the only way to keep all the people you want out is to keep everyone out! But that is not going to work.
If all you are interested in doing is shallowly hiding a user's workout program from another, then what you are doing is not a problem at all and there are no security issues.
If someone guesses (or investigates and finds) a link to another workout program, then there is nothing you can do to stop it with your system. If that is a concern, then you are going to have to come up with another method to know who you are letting in. If you don't care about people actively trying to get at others' workout programs and only trying to stop this from happening incidentally, then you have no problem.
Technically it's roughly as safe as a login/password pair of 64 characters each (a SHA512 hash is 128 hex characters), i.e. very safe. Even brute forcing for random records is not an option: with a billion users and a billion brute-force attempts per second, it would take well over 10^100 times the age of the universe before you found a working hash.
In practice there are other considerations, such as caching proxies, browser history and so forth. I would recommend against doing this with important data, but it really comes down to how you present and explain it to users.
I think this is not a bad idea as long as you don't allow those "magic tokens" to continue to work forever. You should store the tokens in a database and keep track of whether they've been used. Once a client uses the magic token to reach the system, they really should create a "normal" account with a username and password, like every other site on the internet. After that use, the token should be marked as "used" and further uses of it should be disallowed.
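A sketch of that "mark it as used" idea, assuming a hypothetical tokens table with a used flag:

<?php
// Sketch only: honour the magic token once, then force a normal login.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT id, used FROM tokens WHERE token = ?');
$stmt->execute([$_GET['hash']]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false || $row['used']) {
    exit('This link is no longer valid - please log in with your account instead.');
}

$pdo->prepare('UPDATE tokens SET used = 1 WHERE id = ?')->execute([$row['id']]);
// ...show the exercise program and prompt the client to create a username/password...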

PHP: Anti-Flood/Spam system

I'm actually working on a PHP project that will feature a user system (login, register, send lost password to email, ...) and I think this may be very vulnerable to brute-force attacks and/or spam (e.g. having a password sent to someone's email 1000 times - use your imagination).
Do today's webservers (Apache, IIS) have some sort of built-in defense against Brute-Force?
What would be the best way to implement an anti-spam/flood system if, for example, I want one page to be callable no more than twice a minute, while another page may be called up to 100 times a minute or so?
I would definitely have to store IP addresses, the time they last visited a page and the number of visits somewhere - but would it be efficient enough to store this in a text file or a database (MySQL)?
Should I use captchas for things like registering/recovering lost passwords?
Are "text" captchas viable? (Something like "What is 5 plus 9 minus 2? ")
The page won't be used by that many users (100-200) - do I actually have to implement all these things?
Regarding CAPTCHAs: I would recommend against using CAPTCHAs unless you really need it. Why?
it's ugly.
it's annoying for your users. You shouldn't make them jump through hoops to use your site.
There are some alternatives which are very simple, can be very effective and are entirely transparent to (almost all) users.
Honeypot fields: add a field to your forms with a common name like "website". Beside it, add a label saying something to the effect of "don't write in this box". Using Javascript hide the input and label. When you receive a form submission, if there's anything in the field, reject the input.
Users with JS won't see it and will be fine. Users without JS will just have to follow the simple instruction. Spambots will fall for it and reveal themselves.
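A minimal honeypot sketch; the field name "website" and the markup are just an example:

<!-- In the form: a field real users never see (hide it with CSS/JS) -->
<p class="hp"><label for="website">Leave this field empty</label>
<input type="text" name="website" id="website" value=""></p>

<?php
// On submission: anything in the honeypot means a bot filled in every field.
if (!empty($_POST['website'])) {
    exit; // silently discard the submission
}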
Automatic faux-CAPTCHA: This is similar to the above. Add an input field with a label saying "Write 'Alex'" (for example). Using Javascript (and knowing that most automated spam bots won't be running JS), hide the field and populate it with 'Alex'. If the submitted form doesn't have the magic word there, then ignore it.
Users with JS won't see it and will be fine. Users without JS will just have to follow the simple instruction. Spambots won't know what to do and you can ignore their input.
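And the server-side half of the faux-CAPTCHA, with a made-up field name; the JavaScript that hides and pre-fills the field is omitted here:

<?php
// Sketch only: a small script on the page hides the 'magic' field and sets its
// value to 'Alex'; humans without JS simply type it in as the label asks.
if (!isset($_POST['magic']) || trim($_POST['magic']) !== 'Alex') {
    exit; // not filled in correctly - treat it as a bot and ignore the input
}
// ...process the genuine submission...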
This will safeguard you from 99.9% of automated spam bots. What it won't do, even in the slightest, is safeguard you against a targeted attack. Someone could customise their bot to avoid the honeypot or always fill in the correct value.
Regarding Brute Force blocking: A server-side solution is the only viable way to do this obviously. For one of my current projects, I implemented a brute force protection system very similar to what you describe. It was based on this Brute Force Protection plugin for CakePHP.
The algorithm is fairly simple, but a little confusing initially.
User requests some action (reset password, for example)
Run: DELETE FROM brute_force WHERE expires < NOW()
Run:
SELECT COUNT(*) FROM brute_force
WHERE action = 'passwordReset'
AND ip = <their ip address>
If the count is greater than X then tell them to wait a while.
Otherwise, run:
INSERT INTO brute_force (ip, action, expires)
VALUES (<their ip address>, 'passwordReset', DATE_ADD(NOW(), INTERVAL Y MINUTE))
Continue with the reset password function.
This will allow users to only try resetting a password X times in Y minutes. Tweak these values as you see fit. Perhaps 3 resets in 5 minutes? Additionally, you could have different values for each action: for some things (eg: generate a PDF), you might want to restrict it to 10 in 10 minutes.
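Wired together in PHP, the same algorithm might look roughly like this (PDO assumed, table and columns as in the SQL above):

<?php
// Sketch only: returns true when this IP has hit the limit for this action.
function tooManyAttempts(PDO $pdo, string $ip, string $action, int $max, int $minutes): bool {
    // 1. Drop expired rows.
    $pdo->exec('DELETE FROM brute_force WHERE expires < NOW()');

    // 2. Count recent attempts for this IP and action.
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM brute_force WHERE action = ? AND ip = ?');
    $stmt->execute([$action, $ip]);
    if ((int) $stmt->fetchColumn() >= $max) {
        return true; // tell the user to wait a while
    }

    // 3. Record this attempt and let it through.
    $stmt = $pdo->prepare(
        'INSERT INTO brute_force (ip, action, expires)
         VALUES (?, ?, DATE_ADD(NOW(), INTERVAL ? MINUTE))'
    );
    $stmt->execute([$ip, $action, $minutes]);
    return false;
}

// e.g. at most 3 password resets per IP in any 5-minute window
if (tooManyAttempts($pdo, $_SERVER['REMOTE_ADDR'], 'passwordReset', 3, 5)) {
    exit('Too many attempts - please wait a few minutes and try again.');
}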
Yes, storing an IP address, last accessed and times accessed in a database would be fine.
Using CAPTCHAs for register/recovering password is advised so that e-mail addresses cannot be spammed. Also to stop brute forcing.
Yes, text CAPTCHAs are possible, although far easier for someone to crack and write a script to automate the answer. For a free CAPTCHA, I'd recommend Recaptcha.
That really depends on how much you care about security. I'd certainly recommend using a CAPTCHA as they are simple to implement.
Don't try to implement all the logic in your PHP - the lower in your stack you can implement it, the more efficiently it can be dealt with.
Most firewalls (including iptables on Linux) have connection throttling. Also, have a look at mod_security for DDoS/brute-force attack prevention.
You should design your application around the idea that these kinds of attacks will not give the attacker access to the app - at the end of the day there's no way you can prevent a DoS attack, although you can limit its effectiveness.
There's not a lot of value in relying on a consistent IP address from your attacker - there's lots of ways of getting around that.
e.g. keep track of the number of password reset requests between logins by each user. In your password reset form, respond (to the client) in exactly the same way if the user submits an unknown email address. Log invalid email addresses.
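For instance, a sketch of answering identically whether or not the address is known, while still logging the miss server-side ($pdo and sendResetEmail() are assumed helpers):

<?php
// Sketch only: the client sees the same message either way, so the reset form
// can't be used to discover which email addresses have accounts.
$stmt = $pdo->prepare('SELECT id FROM users WHERE email = ?');
$stmt->execute([$_POST['email']]);
$userId = $stmt->fetchColumn();

if ($userId !== false) {
    sendResetEmail($userId);                                  // assumed helper
} else {
    error_log('Password reset requested for unknown address: ' . $_POST['email']);
}
echo 'If that address is registered, a reset link has been sent to it.';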
HTH
C.
Besides doing what Gazler is telling you, you should also have some way of counting login attempts in general. If the total of all login attempts is bigger than X, then either start using the sleep command or just say the servers are under high load.
Storing IP addresses is good practice for logging and tracking, but I think that just a CAPTCHA would stop spamming, brute-force attacks and flooding.
Recaptcha is indeed a good solution.
Sure, your target audience might not be large, but if it's publicly accessible then it's vulnerable.
Text CAPTCHAs are easily cracked these days, believe me.
For an anti-spam/flood system you could log IP addresses (in MySQL, preferably) and add a time limit between login retries.
