As far as I understand, to prevent CSRF attacks a web developer should create a token and put it in a hidden field of the form. He should also save the same token in the session and then, when the form is submitted, check that the two tokens are equal.
This leads me to a question: is it necessary to apply this technique to all forms? For example, take a sign-in form. I can't see any harm done to the site and/or the user if it has no CSRF protection, because at that point the user has no privileges (unlike after signing in). The same goes for sign-up... Am I right?
P.S. If I'm wrong, please explain the concept to me.
The danger that CSRF tries to prevent is when you have the following situation:
The user has signed-in or whatever, and has a certain level of authority
The bad guy exploits that authority without the user's permission
Sometimes this is done by tricking the user into making an HTTP request without knowing it, for example via an image's src attribute, as in the snippet below.
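For instance, here is a rough sketch of that image trick; the URL and parameters are entirely made up. An attacker embeds this on a page they control, and the victim's browser fires the request with the victim's session cookie attached:

```html
<!-- Attacker-controlled page: the browser tries to load the "image" and thereby
     sends a GET request to the victim's bank, carrying the victim's session cookie.
     The URL and parameters are illustrative only. -->
<img src="https://bank.example/transfer?to=attacker&amount=500" width="1" height="1" alt="">
```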
The forms you want to protect are the forms that require this authority.
On the off chance that this didn't actually make sense, Chris Shiflett has an excellent article on CSRF (which you may very well have already read).
Generally speaking, you want to protect a form any time its submission will result in a change of content or state, be it adding, removing, or editing content, or sharing it with an external source ("share on xyz!").
An example of a form you wouldn't need to protect is a search box, since it doesn't result in any change of content.
If you're unsure, any form which will result in something being saved/deleted (whether it's on your site or not) should be protected.
And if you are really unsure, just add the token; it doesn't cost anything to be safe.
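Here is a minimal sketch of the technique described in the question, assuming PHP sessions; the session key, field name, and target script are all illustrative:

```php
<?php
// Minimal sketch: generate a per-session CSRF token and embed it in the form.
// The key $_SESSION['csrf_token'] and the "csrf_token" field name are illustrative.
session_start();

if (empty($_SESSION['csrf_token'])) {
    // random_bytes() (PHP 7+) gives a cryptographically secure random token.
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
?>
<form action="transfer.php" method="post">
    <input type="hidden" name="csrf_token"
           value="<?php echo htmlspecialchars($_SESSION['csrf_token']); ?>">
    <!-- ... other fields ... -->
    <button type="submit">Submit</button>
</form>
```

On submission, the server compares the posted value against the session copy and rejects the request if they differ.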
Related
Using PHP, is there a way to check for certain how a web server resource is requested, while minimising the room for forging and hacking? Or, in simpler terms, how do you check whether it is a hyperlink click, a direct URL, a submission of an HTML form, programmatic access, etc.?
What if tools other than PHP are used?
You can't make this bulletproof, but a good start is the $_SERVER superglobal.
For example, with $_SERVER['REQUEST_METHOD'] you can determine whether the request was a GET/POST/PUT/...
Regarding link clicks versus direct access, $_SERVER['HTTP_REFERER'] might help.
Finally, $_SERVER['REQUEST_URI'] might hold some usable info to determine whether the script was run through a web server (like Apache) or from the console.
Please see the documentation for more info on that.
And as I mentioned before: you can't be sure of anything here, but if you just want some information and don't use it for security purposes, you are fine.
If you want security, you need to look into what's possible with .htaccess.
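A rough sketch of the $_SERVER checks mentioned above; none of it is authoritative, since the referrer can be forged or simply omitted. The console check uses php_sapi_name() rather than REQUEST_URI, which is an alternative of my choosing:

```php
<?php
// Rough, non-authoritative sketch of the $_SERVER checks described above.

// GET / POST / PUT / ... — an HTML form submission is normally a POST.
$method = $_SERVER['REQUEST_METHOD'] ?? '(none)';

// Did the request (claim to) come from a page on this same host?
$refererHost = parse_url($_SERVER['HTTP_REFERER'] ?? '', PHP_URL_HOST);
$sameSite    = ($refererHost !== null && $refererHost === ($_SERVER['HTTP_HOST'] ?? ''));

// Web server (Apache, etc.) versus command line.
$isCli = (php_sapi_name() === 'cli');

if ($isCli) {
    echo "Run from the console\n";
} elseif ($method === 'POST' && $sameSite) {
    echo 'Looks like a form submission from this site (not guaranteed!)';
} else {
    echo 'Probably a direct URL, link click, or external request';
}
```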
It sounds like you might be alluding to cross-site request forgery (CSRF) attacks. CSRF attacks are where an attacker can cause a request to a site that appears to come from a legitimate user (e.g. "Transfer $500 to account 1234."). These attacks abuse the trust that a website has in the visitor. The request might appear legitimate because it is accompanied by a session cookie that matches the user's authentication session.
To prevent CSRF attacks, developers usually add hidden form fields containing "CSRF tokens." The token is submitted along with the form and is a means of ensuring the submission is legitimate: upon submission, the CSRF token is verified, usually by comparing it to the token value stored or computed on the server.
For implementation details, check out OWASP. One important note: do not expose tokens in the URL (i.e. through GET requests), as that can leak them to potential attackers.
If you have concerns beyond CSRF, please feel free to update your question and I'd be happy to provide additional information that is more specific to your needs.
I am working on a project that allows a user to customize a web page. Currently, it is possible for this user to add HTML to the page. From my research, it appears that XSS attacks are generally used to hijack sessions / steal cookies. My question is, if visitors are not allowed to add any content to this page, and if the user who customizes the page is the only person able to log in, is it necessary to prevent XSS attacks? My thought is no, because the only cookies available to steal are his or her own.
Maaaybe, since it sounds like you're saying that only User A can see HTML submitted by User A, but you'd still have to be careful, since we're assuming that User A is aware of all data being submitted on his or her behalf. Consider the following attack.
Trick User A into visiting my malicious site.
Auto-submit a form containing malicious HTML to your website.
Redirect User A to your website, where they will see my malicious HTML that steals their cookies and sends them to me.
Maybe you'd be okay if you implemented CSRF protection that prevents me from submitting on the user's behalf, but it's still kinda scary. If users need to be able to use HTML (but they usually don't), consider a tool like HTML Purifier that allows HTML known to be safe but blocks potentially malicious HTML: that way, most legitimate use cases would probably be satisfied, but XSS will be darn near impossible, even if the other security systems fall apart. It's easy to implement, and a small price to pay for that additional level of security.
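A minimal sketch of the HTML Purifier approach mentioned above, assuming the library is installed via Composer (the ezyang/htmlpurifier package); the allowed-tag whitelist and field name are illustrative:

```php
<?php
// Minimal HTML Purifier sketch: allow a safe subset of HTML, strip everything else.
// Assumes the ezyang/htmlpurifier package is installed via Composer.
require_once __DIR__ . '/vendor/autoload.php';

$config = HTMLPurifier_Config::createDefault();
// Whitelist of allowed tags/attributes (illustrative).
$config->set('HTML.Allowed', 'p,b,i,em,strong,a[href],ul,ol,li,br');

$purifier = new HTMLPurifier($config);

$dirtyHtml = $_POST['custom_html'] ?? '';
$cleanHtml = $purifier->purify($dirtyHtml); // script tags, event handlers, etc. are removed
```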
I'm making a web game for fun with lots of forms that post data to php pages. What are some methods to preventing people from using their own web forms to post to my site?
My knowledge of PHP is not too advanced, so while I've been researching this topic, unfortunately the answers I've found have confused me a little. I found this SO question from earlier that addresses the issue: Secure ajax form POST. I'm a little confused by the first answer and was wondering if somebody could provide an example in PHP. Some specific points I'm struggling with are:
how would you save a token somewhere on your server?
how do you decide what that token should be?
if somebody is on the website, can't they just view the source, see the token in the hidden input element, and use that in their own third-party form?
Thanks for your help!
There are a number of CSRF prevention libraries you can use. One is CSRFGuard from OWASP.
You may also wish to read the main CSRF page to understand the issues and the CSRF Prevention Cheat Sheet to understand the principles behind the implementation design.
If you've read up and understood the issues, it should be a simple job to add your own protection, if you wish to construct your own implementation.
Bear in mind that if you have any XSS vulnerabilities, your CSRF protection can be simply bypassed. So be sure to understand XSS Prevention also.
The token is stored in the user's session along with an expiration date/time.
The token can be generated automatically, per-user. It needs to be random enough to avoid guessing.
Yes, but you can combat this by using per-user token generation and expiration. If a token is submitted without an existing user session, or if the token has expired in the current session, redirect the user to an appropriate notification of failure.
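A sketch of the verification side, with the expiry stored alongside the token in the session as described above; the key names and the one-hour lifetime are just placeholders:

```php
<?php
// Sketch: verify a submitted CSRF token against the session copy, with an expiry.
// Session key names and the 1-hour lifetime are illustrative.
session_start();

$submitted = $_POST['csrf_token'] ?? '';
$stored    = $_SESSION['csrf_token'] ?? null;
$expires   = $_SESSION['csrf_token_expires'] ?? 0;

if ($stored === null || time() > $expires || !hash_equals($stored, $submitted)) {
    // Missing, expired, or mismatched token: reject and tell the user why.
    http_response_code(403);
    exit('This form has expired or is invalid; please reload the page and try again.');
}

// Token was valid; rotate it so the same value cannot be replayed.
$_SESSION['csrf_token']         = bin2hex(random_bytes(32));
$_SESSION['csrf_token_expires'] = time() + 3600;

// ... process the form ...
```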
I inherited some code that was recently attacked; the attacker sent repeated remote form submissions.
I implemented a countermeasure using a session auth token that I create for each user (not the session ID). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated).
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session and then include that session ID (via a cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some comments about these solutions working only for a single window. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without a session? UPDATE: this particular page is just a single form (no login), so starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack and that doesn't use sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POST token value must be the same as the token in the cookie).
The first two will make it really hard to make any harmful CSRF request, since such requests are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referrer should be impossible or at least very hard in modern browsers. The third will make it essentially impossible to perform any harmful action without stealing the user's cookies or sniffing their traffic.
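A rough sketch of those three checks (the cookie/POST comparison is often called the "double-submit cookie" pattern); the cookie and field names are illustrative:

```php
<?php
// Rough sketch of the three checks described above. Names are illustrative.

// 1. Only accept state-changing requests via POST.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit('POST required');
}

// 2. The referrer must point back to this site (browsers may omit it, and it
//    can be forged outside a browser, so this is only a supporting check).
$refererHost = parse_url($_SERVER['HTTP_REFERER'] ?? '', PHP_URL_HOST);
if ($refererHost !== ($_SERVER['HTTP_HOST'] ?? '')) {
    http_response_code(403);
    exit('Bad referrer');
}

// 3. The token in the POST body must match the token in the cookie.
$cookieToken = $_COOKIE['auth_token'] ?? '';
$postToken   = $_POST['auth_token'] ?? '';
if ($cookieToken === '' || !hash_equals($cookieToken, $postToken)) {
    http_response_code(403);
    exit('Token mismatch');
}

// ... request looks legitimate, process it ...
```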
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how would starting the attacker's own valid session do any harm?
Nonces will make all your links ugly - I have never seen anyone using them anymore. And I think your site could be DoSed using them, as you must save and search all the nonces in a database - a lot of requests to generate nonces may increase your database size really fast (and searching them will be slow).
If you allow only one nonce per user_id to prevent the DoS attack in (2), then if the user opens a page, then opens another page, and then submits the first page, the request will be denied because a new nonce was generated and the old one is already invalid.
How else would you identify a unique user without a session ID, be it in a cookie, GET, or POST variable?
UPD: As we are not talking about CSRF anymore, you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most form fields they see that have promising names, even if they are really hidden from the user) - a minimal sketch follows this list
JavaScript mouse trackers (you can analyse the recorded mouse movements to detect bots)
File request log analysis (when a page is loaded, the JavaScript/CSS/images should be loaded too in most cases, but some (really rare) users have them turned off)
JavaScript form changes (a hidden (or not) field that is added to the form with JavaScript and is required on the server side: bots usually don't execute JavaScript)
Traffic analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submission, etc.).
And more, but at the end of the day some modern bots use total emulation of real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable - besides complex image recognition algorithms, you can now buy 1,000 CAPTCHAs solved by humans for any site for as low as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots - each case is different: sometimes you will have to create a complex defence system yourself, sometimes just a little tweak will help.
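A minimal sketch of the honeypot idea from the first item in the list above; the field name "website" and the target script are arbitrary:

```php
<!-- Form page (sketch): a honeypot field hidden from humans with CSS. -->
<form action="submit.php" method="post">
    <div style="position:absolute; left:-9999px;" aria-hidden="true">
        <input type="text" name="website" value="" tabindex="-1" autocomplete="off">
    </div>
    <!-- ... the real form fields ... -->
    <button type="submit">Send</button>
</form>

<?php
// submit.php (sketch): naive bots tend to fill every field they see,
// so a non-empty honeypot is a strong spam signal.
if (!empty($_POST['website'])) {
    http_response_code(400);
    exit('Submission rejected');
}
// ... continue processing the real fields ...
```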
I don't run a mission-critical web site, so I'm not looking for an industrial-strength solution. However, I would like to protect against basic attacks such as someone mocking up a false page on their hard disk and attempting to gain unauthorized access. Are there any standard techniques to ensure that form submissions are only accepted from legitimate users?
A few techniques come close:
Produce a form key for every form. The key would relate to a database record, and something else unique about the page view (the userID, a cookie, etc.). A form cannot be posted if the form key does not match for that user/cookie. The key is used only once, preventing an automated tool from posting again using a stolen key (for that user).
The form key can also be a shared-secret hash: the PHP generating the form can hash the cookie and userID, for example, producing something you can verify when the form is posted (see the sketch after this list).
You can add a CAPTCHA, requiring the user to verify that they are human.
You can also limit the number of posts from that user/cookie (throttling), which can prevent certain forms of automated abuse.
You can't guarantee that the form isn't posted from disk, but you can limit how easily it is automated.
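A sketch of the shared-secret hash variant from the second point above, using an HMAC over the user ID and a server-side secret; the secret, field name, and hashed inputs are all illustrative:

```php
<?php
// Sketch of the "shared-secret hash" form key (second point above).
// $serverSecret, the "form_key" field name, and the hashed inputs are illustrative.
session_start();
$serverSecret = 'long-random-secret-known-only-to-the-server';

// When rendering the form: derive a key the server can recompute later.
$userId  = $_SESSION['user_id'] ?? 0;
$formKey = hash_hmac('sha256', $userId . '|' . session_id(), $serverSecret);
// Embed $formKey in a hidden input named "form_key".

// When the form is posted: recompute the key and compare in constant time.
$expected = hash_hmac('sha256', $userId . '|' . session_id(), $serverSecret);
if (!hash_equals($expected, (string) ($_POST['form_key'] ?? ''))) {
    http_response_code(403);
    exit('Invalid form key');
}
```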
You can't. There's no reliable way to distinguish between an HTTP request generated by a user on your page and one generated by a malicious user with their own web page.
Just use a proper password authentication approach, and no-one will be able to break anything unless they know the password (regardless of where the HTTP requests are coming from). Once you have reliable server-side authentication, you don't need to waste time jumping through non-robust hoops worrying about this scenario.
You should not create a login system yourself, because it is difficult to get right (security-wise). You should NOT store your users' passwords (in any form whatsoever) on your site (dangerous) - take for example lifehacker.com, which got compromised (my account too :(). You should use something like LightOpenID (as Stack Overflow also uses OpenID) for your authentication.
The remaining forms you have on your site should have the following protection (at least):
CSRF protection: this link explains thoroughly what CSRF is and, more importantly, how to protect against it
Use http-only cookies: http-only sessions, http-only cookies
Protect against XSS using a filter.
Use PDO prepared statements to protect yourself against SQL injection (a sketch follows below).
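A sketch of the PDO prepared-statement point above; the DSN, credentials, and table/column names are illustrative:

```php
<?php
// Prepared-statement sketch: placeholders keep user input out of the SQL string,
// preventing injection. Connection details and table/column names are illustrative.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare('INSERT INTO comments (user_id, body) VALUES (:user_id, :body)');
$stmt->execute([
    ':user_id' => $_SESSION['user_id'],
    ':body'    => $_POST['body'] ?? '',
]);
```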
I also recommend:
Save the IP of the computer that sends the form (to block it at the server if it's annoying)
Use CAPTCHA when required, to avoid robots...
Redirect users to another page once the data has been processed, so the POST data won't be resubmitted when the page is refreshed (a sketch follows below)
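A Post/Redirect/Get sketch for the last point above; the /thank-you.php target is illustrative:

```php
<?php
// Post/Redirect/Get sketch: after processing the POST, redirect so that
// refreshing the next page does not resubmit the form data.
// ... validate and save $_POST data here ...

header('Location: /thank-you.php', true, 303); // 303 See Other
exit;
```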
Proper validation of form data is important to protect your form from hackers and spammers!
Strip unnecessary characters (extra space, tab, newline) from the user input data (with the PHP trim() function)
Remove backslashes (\) from the user input data (with the PHP stripslashes() function)
For more detail, you can refer to Form Validation.
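A sketch of the clean-up steps above; test_input() is just an illustrative helper name, and htmlspecialchars() is an extra step beyond the two listed, added so the value can be echoed back safely:

```php
<?php
// Sketch of basic form-input clean-up. The helper name is illustrative.
function test_input($data) {
    $data = trim($data);             // strip extra spaces, tabs, newlines
    $data = stripslashes($data);     // remove backslashes
    $data = htmlspecialchars($data); // encode <, >, &, " for safe re-display (extra step)
    return $data;
}

$name  = test_input($_POST['name'] ?? '');
$email = test_input($_POST['email'] ?? '');
```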