This is mostly a generic question, but I am specifically working with ZF2. I'd like to understand exactly how the CSRF token works and what determines how long a TTL (timeout) I should have on it.
I understand that it generates a unique hash for each form that is rendered and checks that hash when the form is submitted, but what does it check it against? Is it keeping a copy of the hash on the server side as well? Is it single use? I'm guessing not, since I can re-submit the same form multiple times (i.e. hit refresh on the page that results from the form post). But it must be specific to that form and my current session, right?
Why does it need to time-out? Currently, I somewhat frequently have pages where I submit, or re-submit, the form after 5 mins (the default ZF2 CSRF timeout) and I get the CSRF error message. This is an annoyance for legitimate user interaction. Should I increase or remove the timeout? What's the security trade off?
I'm hoping by having a better understanding of the mechanics of ZF2's CSRF system I'll be better informed for decisions like this. Also, what are your recommended CSRF configuration options?
I have tried two approaches:
-- Using form_open: with this approach I am able to add a field with the CSRF token, and the token appears in the request header as well as in the cookies. But the same CSRF token is generated every time, and hence it is not able to prevent the attack.
Also, I need to know whether, apart from adding the token on the client side, there is any need to check it on the server side, or whether that is done automatically.
-- Using a hidden input field with custom form tags: with this, I added a random token as the input field, and I am still not able to avoid the attack.
For the second approach, I need to know what changes we need to make in the Security.php file, and whether we have to do any server-side check for this as well.
The first approach is advised mainly because the CI code is well-tested, tried-and-true code. I assume the second method is something you intend to write yourself. If that's the case you are reinventing the wheel without good cause.
Using the CI code, it is important to understand that the hash value of the token will not change unless you use the following in config.php:
$config['csrf_regenerate'] = TRUE;
The other thing you need to know is that a new hash will be generated only when a POST request is made to the server. That's fine because the need for CSRF protection is only relevant for POST requests.
When making multiple GET requests, i.e. loading a <form> a number of times in succession, you will likely see the same hash value each time. But if you submit the form and then reload it you will see a new hash value.
Finally, you should know that the CSRF values are only checked for POST requests and are not checked for GET requests.
The hash value will be removed from $_POST after it is successfully validated.
All of the above happens automatically if you use the $config setting shown in combination with form_open().
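A minimal sketch of how that looks, assuming CodeIgniter 3 with the form helper loaded; the route, view, and field names are just placeholders, not anything from the question:

// application/config/config.php
$config['csrf_protection'] = TRUE;   // check the token on every POST
$config['csrf_token_name'] = 'csrf_token';
$config['csrf_cookie_name'] = 'csrf_cookie';
$config['csrf_expire'] = 7200;       // token lifetime in seconds
$config['csrf_regenerate'] = TRUE;   // issue a fresh token after each POST

// In a view (e.g. a comment form); form_open() inserts the hidden CSRF
// field automatically, and the Security class rejects any POST whose
// token does not match the cookie, so no manual server-side check is needed.
echo form_open('comments/store');
echo form_input('comment');
echo form_submit('submit', 'Send');
echo form_close();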
I need to make sure a request comes from a user submitting a form on the website rather than an automated POST request.
I could use
HTTP_REFERER - but this is not reliable
hidden input field with random value from session - but what's to stop a spammer from going to my form, getting the value from the hidden field, and pasting it into his "program" as part of his automated request?
Any other options?
You could use an HMAC approach whereby you hash part of the POST payload using a hashing algorithm keyed with a secret known only to your PHP code and your backend. Store the calculated hash in the HTTP headers, not as part of the form payload. All you need to do then is validate the data being submitted server-side by recalculating the hash with the secret key; if the hash value doesn't check out, you know it's a bogus submission. See this for details.
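A minimal sketch of that idea in plain PHP; the FORM_SECRET_KEY constant and the X-Payload-Signature header name are examples of mine, not part of any standard:

// Shared secret known only to your own code (example value).
define('FORM_SECRET_KEY', 'replace-with-a-long-random-secret');

// Sender side: sign the raw POST body and put the signature in a header.
$payload   = http_build_query(['comment' => 'hello', 'user_id' => 42]);
$signature = hash_hmac('sha256', $payload, FORM_SECRET_KEY);
// e.g. sent as the X-Payload-Signature header alongside the POST body.

// Receiver side: recompute the HMAC over the raw body and compare.
$rawBody  = file_get_contents('php://input');
$received = isset($_SERVER['HTTP_X_PAYLOAD_SIGNATURE'])
    ? $_SERVER['HTTP_X_PAYLOAD_SIGNATURE'] : '';
$expected = hash_hmac('sha256', $rawBody, FORM_SECRET_KEY);

if (!hash_equals($expected, $received)) {   // constant-time comparison
    http_response_code(403);
    exit('Bogus submission');
}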
Also, basic cookie security flags like HttpOnly instruct browsers not to permit access to your cookies via client-side scripts (VBScript, JavaScript, etc.), so your tokens would be a little harder to steal.
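For completeness, a hedged example of setting those flags in PHP, assuming PHP 7.3+ for the array-options form of setcookie(); the cookie name is arbitrary. Note that HttpOnly hides the cookie from scripts, while the Secure flag is what protects it in transit:

setcookie('csrf_cookie', bin2hex(random_bytes(32)), [
    'expires'  => time() + 3600,
    'path'     => '/',
    'secure'   => true,    // only send over HTTPS
    'httponly' => true,    // not readable from JavaScript
    'samesite' => 'Lax',   // extra CSRF mitigation in modern browsers
]);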
I'm afraid there is no way: if you examine your option #2,
hidden input field with random value from session - but what's to stop a spammer from going to my form, getting the value from the hidden field, and pasting it into his "program" as part of his automated request?
what you describe is exactly what a browser would do. It would "go to your form", "get the value from the hidden field", and submit it to your server.
And you can't distinguish between two identical modi operandi.
You can make life difficult for the spammer in a number of ways.
For example, the hidden field might be populated by a Javascript snippet; all non-JS browsers (and all your customers with JS disabled) will be bounced.
You could require a session authentication for starters; that way you'll be able to block the spammer later on by pulling his account.
Just for the laughs (I do not recommend doing this - clunky, risky, error-prone, you make bad enemies), you could employ psychological tactics: whatever response your system is expected to give in case of success (e.g. a spampost becoming visible), you can give also in case of failure but only towards the same IP address that elicited the response, and only for five minutes. Most automated spam-tools won't even check; and the majority of human "spambot tuners" will be satisfied once they see their spam appearing, and go on chuckling to their next victim. If they check later, their bots will still appear to work. With a bit of luck they'll believe there's a human webmaster psychotically hunting and cancelling their spam, and again they'll move on.
I've just set up simple CSRF protection in my application. It creates a unique crumb which is validated against a session value upon submitting a form.
Unfortunately this means now that I can't keep multiple instances (tabs in the browser) of my application open simultaneously as the CSRF crumbs collide with each other.
Should I create an individual token for each actual form, or use a mutual, shared crumb for all my forms?
What is common sense here?
You can do either. It depends on the level of security you want.
The OWASP Enterprise Security API (ESAPI) uses the single token per user session method. That is probably a pretty effective method assuming you have no XSS holes and you have reasonably short session timeouts. If you allow sessions to stay alive for days or weeks, then this is not a good approach.
Personally, I do not find it difficult to use a different token for each instance of each form. I store a structure in the user's session with key-value pairs. The key for each item is the ID of the form; the value is another structure that contains the token and an expiry date for that token. Typically I will only allow a token to live for 10-20 minutes, then it expires. For longer forms I may give it a longer expiry time.
If you want to be able to support the same form in multiple browser tabs in the same session, then my method becomes a little trickier, but it could still easily be done by having unique form IDs.
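A minimal sketch of that per-form token store using plain PHP sessions; the function names and the 20-minute default TTL are mine, mirroring the 10-20 minute window described above:

// Issue a token for a given form ID and remember it with an expiry.
function issue_csrf_token($formId, $ttlSeconds = 1200) {   // ~20 minutes
    $token = bin2hex(random_bytes(32));
    $_SESSION['csrf'][$formId] = [
        'token'   => $token,
        'expires' => time() + $ttlSeconds,
    ];
    return $token;   // embed this in a hidden field for that form
}

// Validate and consume the token when the form comes back.
function check_csrf_token($formId, $submitted) {
    if (empty($_SESSION['csrf'][$formId])) {
        return false;
    }
    $entry = $_SESSION['csrf'][$formId];
    unset($_SESSION['csrf'][$formId]);            // single use
    return $entry['expires'] >= time()
        && hash_equals($entry['token'], (string) $submitted);
}

Using a unique form ID per tab (e.g. appending a random suffix) is what lets the same form live in several tabs at once without the tokens colliding.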
The OWASP Cheat Sheet has the most definitive answers for this sort of thing. It discusses different approaches and balancing of security vs. usability.
In brief they recommend having a single token per (browser) session. In other words in your case the same token is shared among tabs. The cheat sheet also emphasizes that it is very important not to expose your site to cross site scripting vulnerabilities, as this subverts the per session CSRF token strategy.
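A minimal sketch of that single-token-per-session strategy in plain PHP; the session key and field name csrf_token are arbitrary:

session_start();

// Create the per-session token once; every form in every tab reuses it.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// In every form:
// <input type="hidden" name="csrf_token" value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">

// On every POST:
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $submitted = isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '';
    if (!hash_equals($_SESSION['csrf_token'], $submitted)) {
        http_response_code(403);
        exit('Invalid CSRF token');
    }
}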
As far as I know about CSRF, you can use:
1) A random number saved into the session:
Add it to a hidden input field; then, when you receive the submission, you can check the hidden field against the session value.
2) A static config variable (like the previous option, but with no session):
The hidden field will contain this value (a variable from the config). When you validate, you check the posted hidden value against the config security key value.
3) The HTTP referer:
You can use the HTTP referer to know where the user came from and then check it against your real domain (this method can be attacked if your website contains XSS); a rough sketch of this check follows below.
As far as I know, you can use any of these solutions :)
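As promised, a rough sketch of the referer check from option 3 in plain PHP; the expected host is an example value, and (as noted in the question above) the Referer header is not reliable, so treat this as a weak extra check rather than the only defence:

$expectedHost = 'www.example.com';   // your real domain (example value)
$referer      = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$refererHost  = parse_url($referer, PHP_URL_HOST);

// Reject POSTs whose Referer host does not match our own host.
if ($_SERVER['REQUEST_METHOD'] === 'POST' && $refererHost !== $expectedHost) {
    http_response_code(403);
    exit('Cross-site request rejected');
}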
I inherited some code that was recently attacked where the attacker sent repeated remote form submissions.
I implemented a prevention using a session auth token that I create for each user (not the session id). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated).
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session then include the session id (via cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some points about these solutions being only single window. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without a session? UPDATE: this particular page is just a single-page form (no login), so starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack that would not use sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referer (it must be from the same domain), and check an auth token in each POST request (the POST token value must be the same as the token in the cookie).
The first two will make it really hard to do any harmful CSRF request, as such requests are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referer should be impossible or very hard to do in modern browsers. The third will make it completely impossible to do any harmful action without stealing the user's cookies or sniffing his traffic.
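A minimal sketch of the third point (a token in a cookie that must match the POSTed token, i.e. the double-submit pattern) in plain PHP; the cookie and field names are just examples:

// Render time: create a token, store it in a cookie and echo the same
// value into a hidden form field.
if (empty($_COOKIE['auth_token'])) {
    $token = bin2hex(random_bytes(32));
    setcookie('auth_token', $token, 0, '/');
} else {
    $token = $_COOKIE['auth_token'];
}
// <input type="hidden" name="auth_token" value="<?= htmlspecialchars($token) ?>">

// Submit time: the POSTed value must equal the cookie value.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $fromPost   = isset($_POST['auth_token'])   ? $_POST['auth_token']   : '';
    $fromCookie = isset($_COOKIE['auth_token']) ? $_COOKIE['auth_token'] : '';
    if ($fromCookie === '' || !hash_equals($fromCookie, $fromPost)) {
        http_response_code(403);
        exit('Token mismatch');
    }
}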
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how could starting a valid session of the attacker's own do any harm?
Nonces will make all your links ugly - I have never seen anyone using them anymore. And I think your site could be DoSed using them, as you must save and search all the nonces in a database - a lot of requests to generate nonces may increase your database size really fast (and searching for them will be slow). (A sketch of nonce generation and tracking follows after this list.)
If you allow only one nonce per user_id to prevent the DoS attack in (2), then if a user opens a page, then opens another page and then submits the first page, his request will be denied, as a new nonce was generated and the old one is already invalid.
How else will you identify a unique user without a session ID, be it a cookie, GET or POST variable?
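Regarding nonces, a rough sketch of generating and tracking one in a database, assuming an existing PDO connection and a hypothetical nonces(nonce, expires_at) table:

// Hypothetical schema: CREATE TABLE nonces (nonce CHAR(64) PRIMARY KEY, expires_at INT);

function create_nonce(PDO $pdo, $ttl = 600) {
    $nonce = bin2hex(random_bytes(32));
    $stmt  = $pdo->prepare('INSERT INTO nonces (nonce, expires_at) VALUES (?, ?)');
    $stmt->execute([$nonce, time() + $ttl]);
    return $nonce;   // put this in a hidden field or the URL
}

function consume_nonce(PDO $pdo, $nonce) {
    // Deleting the row makes the nonce strictly single-use.
    $stmt = $pdo->prepare('DELETE FROM nonces WHERE nonce = ? AND expires_at >= ?');
    $stmt->execute([(string) $nonce, time()]);
    return $stmt->rowCount() === 1;
}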
UPD: As we are not talking about CSRF anymore: you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most of the form fields they see that have plausible names, even if they are really hidden from a user); a sketch of this follows at the end of this answer.
Javascript mouse trackers (you can analyse recorded mouse movements to detect bots).
File request log analysis (when a page is loaded, the javascript/css/images should be loaded too in most cases, but some (really rare) users have them turned off).
Javascript form changes (a hidden (or not) field is added to the form with javascript and is required on the server side: bots usually don't execute javascript).
Traffic analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submitting, etc.).
And more, but at the end of the day some modern bots use total emulation of real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable - besides complex image recognition algorithms, you can now buy 1000 CAPTCHAs solved by humans for any site for as low as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots - each case is different: sometimes you will have to create a complex defence system yourself, sometimes just a little tweak will help.
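For the first item in that list, a minimal honeypot sketch in plain PHP; the field name "website" is just bait and the CSS class name is an example:

// In the form: a bait field hidden from humans with CSS and left empty by them.
// <input type="text" name="website" class="hp-field" autocomplete="off">
// .hp-field { position: absolute; left: -9999px; }

// On submit: real users never fill it, naive bots usually do.
if ($_SERVER['REQUEST_METHOD'] === 'POST' && !empty($_POST['website'])) {
    http_response_code(400);
    exit('Spam detected');
}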
Is this enough for CSRF protection:
A random string is generated, $_SESSION['hash'] stores it
A hidden value (in $_POST['thing']) in a form contains the random string
When the form is submitted, it checks if $_SESSION['hash'] equals $_POST['thing'], and continues if they match
One of my site's users keeps telling me that my site is vulnerable, but I can't tell if he's just trolling me. Is there anything else that I can do?
What I think you are missing is limiting the token to a small window of time.
You should have a look at Chris's CSRF article. A quick summary:
a CSRF attack must include a valid token (anti-CSRF token) in order to perfectly mimic the form submission.
The validity of the token can also be limited to a small window of time, such as five minutes
If you use a token in all of your forms as I have suggested, you can eliminate CSRF from your list of concerns. While no safeguard can be considered absolute (an attacker can theoretically guess a valid token), this approach mitigates the majority of the risk. Until next month, be safe.
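A hedged sketch of combining the session token with that small time window in plain PHP; the 300-second TTL mirrors the five minutes mentioned in the quote, and the function names are mine:

session_start();

// Issue a token together with its creation time.
function issue_token() {
    $_SESSION['csrf'] = [
        'value'   => bin2hex(random_bytes(32)),
        'created' => time(),
    ];
    return $_SESSION['csrf']['value'];
}

// Accept the token only if it matches and is younger than the allowed window.
function token_is_valid($submitted, $maxAge = 300) {
    return !empty($_SESSION['csrf'])
        && hash_equals($_SESSION['csrf']['value'], (string) $submitted)
        && (time() - $_SESSION['csrf']['created']) <= $maxAge;
}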
If it's unique to every user, then it should be enough. Even if it's the same for the duration of the user's session, it's still OK, but I would suggest re-generating it periodically.
Also, you may want to use different tokens for each form. For example, if you have a login form and a comments form, it's better to use different tokens for them, but it's not 100% necessary.
Why do you assume that just because someone says your site is vulnerable, it has to do with a CSRF attack? There are so many other possible vulnerabilities.
Maybe your web server is outdated and vulnerable, maybe the PHP version is not the most recent one. Maybe the user was able to log in to your server via ssh or telnet. Maybe the user was able to guess the admin password.
Maybe you let people log in by cookie and store login credentials in cookies.
There are just too many things other than CSRF that could be exploited. There is also a possibility that the user is wrong or does not know what he is talking about, or maybe he just wants to make you nervous.
Each time they load the page, it changes IF it's not already set.
Well, there is your problem. Once a token is retrieved, all further actions can be easily performed with it. I usually make the token valid for one single request and regenerate it afterwards.
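A minimal sketch of that single-use behaviour, assuming plain PHP sessions; the token is discarded after every check so a captured value cannot be replayed:

session_start();

// Always hand out the current token, creating one if needed.
function current_token() {
    if (empty($_SESSION['one_time_token'])) {
        $_SESSION['one_time_token'] = bin2hex(random_bytes(32));
    }
    return $_SESSION['one_time_token'];
}

// Check the submitted value, then throw the token away either way,
// so the next request gets a fresh one.
function validate_and_rotate($submitted) {
    $ok = !empty($_SESSION['one_time_token'])
        && hash_equals($_SESSION['one_time_token'], (string) $submitted);
    unset($_SESSION['one_time_token']);   // single use
    return $ok;
}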
From http://en.wikipedia.org/wiki/Cross-site_request_forgery:
you can additionally decrease the lifetime of the cookie,
check the HTTP Referer header,
and use a CAPTCHA - but not every user likes it.
However, your action with a secret key is still better than nothing...