Protect PHP script against CSRF... without PHP session (cross site) - PHP

I have a public form that posts data to a PHP script.
The form is not hosted on the same domain, and the site it lives on doesn't run PHP either, so the protection cannot be built around a PHP session.
The goal is to allow only this form to post to that PHP script.
The question "How do I provide more security for checking source of the request" explains how to implement CSRF protection using a PHP session, but I wonder how I could protect mine without one. Is it possible?

POST requests are harder to forge than GET requests, so you have that going for you, which is nice. Just make sure you're not using $_REQUEST in your script.
You cannot use sessions here, but the principle is the same: you have to implement some kind of a "handshake" between the form and your PHP script. There are a few different approaches when sessions are not an option.
The simplest thing to do would be to check the HTTP Referer header. This will not work when the form page is served over HTTPS and the script over plain HTTP (browsers omit the Referer header on that downgrade), and it can also be defeated through an open-redirect vulnerability.
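A minimal sketch of such a check, assuming the form lives on a hypothetical host forms.example.com. Keep in mind the header can be absent (privacy extensions, the HTTPS-to-HTTP case above) and can be forged outside a browser, so treat this as a weak first line of defence, not a guarantee:

<?php
// Reject requests whose Referer header is missing or from the wrong host.
$allowedHost = 'forms.example.com'; // placeholder for the host serving the form

$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$host = parse_url($referer, PHP_URL_HOST);

if ($host !== $allowedHost) {
    http_response_code(403);
    exit('Request origin not allowed.');
}
// ...process the POST data...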
Another way to go would be a CAPTCHA. Not user friendly or fashionable these days, I know, but it would make request forgery much harder, because an attacker could no longer run the exploit behind the scenes without any user input. Look into reCAPTCHA (Google's "I am not a robot" checkbox): https://www.google.com/recaptcha/intro/index.html
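Here is a rough sketch of the server-side half of that, verifying the token against Google's siteverify endpoint. 'YOUR_SECRET_KEY' is a placeholder, and you should check Google's current documentation for the exact parameters:

<?php
// Verify the reCAPTCHA response that the widget posted along with the form.
$postData = http_build_query(array(
    'secret'   => 'YOUR_SECRET_KEY',
    'response' => isset($_POST['g-recaptcha-response']) ? $_POST['g-recaptcha-response'] : '',
    'remoteip' => $_SERVER['REMOTE_ADDR'],
));

$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => 'Content-Type: application/x-www-form-urlencoded',
    'content' => $postData,
)));

$result = json_decode(
    file_get_contents('https://www.google.com/recaptcha/api/siteverify', false, $context),
    true
);

if (empty($result['success'])) {
    http_response_code(403);
    exit('CAPTCHA verification failed.');
}
// ...process the form...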
This is a tricky situation, because a form on one host posting to a script on another is basically CSRF in itself, so you want to allow it, but only for one host. Complete security without any user interaction might be impossible here, so just try to make it as hard as possible for a would-be attacker to mess with your script, or accept some pain on the UX side. Personally, I would go with reCAPTCHA.

Related

PHP POST security: how to prevent client-side modification

On my current website I use jQuery and POST requests between different PHP files to get and update information. Currently I'm not using either SSL or home-grown encryption to hide the plain text in the requests; that will come later.
I'm wondering how to prevent client-side POST modification, besides sanitizing and validating the inputs before using them. Some of the information passed between the PHP documents is hard to predict, and therefore hard to validate.
Got any tricks up your sleeves?
I was thinking I could use session-stored data in PHP to validate that it was the actual server that sent the request. But I guess that session data can be "tapped" in many ways?
Choose one:
You can store data in session between requests (more server memory)
You can sign the data sent to the client using an HMAC (more server CPU), then check it on the next request on the server
There's no excuse not to use HTTPS these days; there are three free certificate vendors now.
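A minimal sketch of option two, the HMAC approach. $secret is a long random value that never leaves the server; hash_equals() needs PHP 5.6+, and all names here are illustrative:

<?php
// Sign data before sending it to the client; verify it on the way back.
$secret = 'long-random-server-side-secret';

function sign($data, $secret) {
    return $data . '|' . hash_hmac('sha256', $data, $secret);
}

function verify($signed, $secret) {
    $pos = strrpos($signed, '|');
    if ($pos === false) {
        return null;
    }
    $data = substr($signed, 0, $pos);
    $mac  = substr($signed, $pos + 1);
    // Constant-time comparison of the recomputed MAC against the one received.
    return hash_equals(hash_hmac('sha256', $data, $secret), $mac) ? $data : null;
}

// Usage: put sign('user_id=42', $secret) in a hidden field; on the next
// request, verify($_POST['state'], $secret) returns the data if untampered,
// or null if it was modified client-side.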
Two important things about HTTP: it is, by nature, stateless, so every request is independent of any previous request; and, more importantly, it is based on trust. Once data hits the server (specifically the PHP script), it is impossible to know where that request originated and whether the data can be trusted. This means the only way to ensure data is clean and secure is to sanitize and validate it.
Because of the inherent trust in HTTP, any client can forge a request with malicious intent. There are ways to make this harder, and depending on what you are trying to protect you can spend more time and resources to do so. The steps differ depending on what you are trying to accomplish. Are you trying to stop a malicious user from stealing other users' information? Are you trying to stop them from accessing data on your server that they should not (SQL injection, directory traversal)? Are you trying to prevent the user from impersonating another user (session hijacking)? Are you trying to prevent the user from injecting malicious JavaScript (XSS)? Depending on your goal and your risk, you would invest time and energy to prevent one or all of these things.
Lastly, HTTPS only mitigates man-in-the-middle attacks (and perhaps session hijacking), not any of the scenarios above, so you still need to clean and scrub all data that your PHP receives.
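As a small illustration of that sanitize-and-validate step, here is a sketch using PHP's filter extension (field names are made up for the example): reject the request outright if the data does not match the types and ranges you expect.

<?php
// Validate incoming POST fields before doing anything with them.
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$age   = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, array(
    'options' => array('min_range' => 0, 'max_range' => 130),
));

if ($email === false || $email === null || $age === false || $age === null) {
    http_response_code(400);
    exit('Invalid input.');
}
// $email and $age are now known to be well-formed; still escape them on use
// (prepared statements for SQL, htmlspecialchars() for HTML output).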

How safe is a site if it's not served to those who have disabled JavaScript?

I am using jQuery and JavaScript extensively in my new project, including form validation, because I don't want to burden the server with PHP validation. So I am restricting my site from people who have disabled JavaScript in their browsers. I am trying to redirect them using a meta tag:
<meta http-equiv="refresh" content="2; URL=../../enablejs.html">
I assume that this is safe, because if JavaScript is not enabled they will not be able to access my site.
But I still have doubts and need your advice. Is it completely safe? If not, what areas do I need to concentrate on?
This is a terrible, terrible idea.
because I don't want to burden the server with PHP validation
You mean, you don't want to burden yourself with implementing it :)
I can relate. Everyone hates doing stuff twice. But server-side validation is not a negotiable extra; client-side validation can easily be circumvented and is for user convenience only. Server-side validation is always needed for safety and security.
Apart from it being a bad idea, there is no way of reliably excluding users who have JavaScript turned off. JavaScript runs on the client side, and its presence or absence can easily be faked to the server.
Client-side anything is never, ever safe. You always need server-side validation. It's not a "burden", it's a necessity. I don't even need your website to submit (unvalidated) data to your server; in the end it all just boils down to HTTP requests. If you do not validate everything the user does on the server, you have no security.
I am using jQuery and JavaScript extensively in my new project, including form validation, because I don't want to burden the server with PHP validation.
That shouldn't save a significant burden. It should give faster feedback to users though, which is good.
So I am restricting my site from people who have disabled JavaScript in their browsers.
That is a waste of time. The proportion of submissions that will come from users with JS disabled will be tiny.
I am trying to redirect them using meta tag
That's a very user hostile thing to do.
I assume that this is safe, because if JavaScript is not enabled they will not be able to access my site.
If you mean that it avoids the need to write server-side validation routines, then you are wrong. If someone wants to attack the site (rather than submit bad data by accident), they can construct HTTP requests manually.
No, that's not safe. Client-side validations are nowhere near safe. Even with JavaScript enabled, anyone can bypass your validations. Using the Chrome console, I can alter any text in your input boxes, or any other input, without your validation noticing it.
Use server-side validation or you're screwed.
No, this is not safe. Never rely on the browser for form validation. Form validation in the browser should only be to improve user experience, not to protect your data. You need to add some PHP validation.
Also, are people who have JavaScript disabled not supposed to use your site? You should make JavaScript degrade gracefully so that your site is still usable without it.
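To make that concrete, here is a sketch of the server-side counterpart to a client-side dropdown: even if JavaScript "guarantees" one of these values, re-check it on the server, because the request can be forged. The 'country' field and its values are purely illustrative:

<?php
// Whitelist check: only accept values the server itself offered.
$allowed = array('se', 'no', 'dk', 'fi');
$country = isset($_POST['country']) ? $_POST['country'] : '';

if (!in_array($country, $allowed, true)) {
    http_response_code(400);
    exit('Invalid value.');
}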
Using client-side validation is a recipe for disaster. Never, ever trust client input: client inputs include GET (URLs included), POST, Flash...
All inputs should be validated by a server-side scripting language like PHP, ASP.NET, Java...
If you use PHP, check http://www.phpclasses.org/ and look for form validation scripts and cross-site scripting (XSS) filters, or use the validation classes offered in frameworks like Zend or CodeIgniter.
http://en.wikipedia.org/wiki/Cross-site_scripting

Guide to Securing jQuery AJAX Code?

I am creating a web application that uses jQuery's AJAX calls, as jQuery deals with all of the browser inconsistencies.
However, as the code is easily readable from the browser, I have concerns about what security measures I can use to protect the web application from attack.
I will obviously be doing authentication checks in the server-side code to ensure that users have access to the data they are trying to access. However, I have also been looking into ways of stopping CSRF attacks, as well as ways of "obscuring" the code so it is not easily readable via View Source in the browser.
What steps should I be taking to ensure that security is at a good level?
Also, is injecting data into a jQuery script via PHP a bad idea?
Thanks!
There's no easy answer to your main question. You should read the OWASP guide on CSRF prevention and go from there.
And there are plenty of options out there for obfuscating JavaScript, but none of them will increase the security of your code. If an attacker really wanted to read your obfuscated code, he could just pick through it by hand, or write a parser for it and de-obfuscate it. Not a viable security technique.
Also, is injecting data into a jQuery script via PHP a bad idea?
As long as you have no problem with the world seeing that data, no, it is not a bad idea. If the data is sensitive, you'll probably want to keep it server-side, or hash it with a salt and then insert the hashed value into the script. Of course, this hash is rather unusable client-side, because you must not include your salt in anything client-side (that would defeat the purpose of obscuring the data in the first place). If you want to make use of that data, you'll need to AJAX it back to your server and process it there.

How to Implement Generic CSRF Tokens with jQuery AJAX?

I am currently developing jQuery code that sends data to the server via AJAX, where it is then inserted into the database based on the request parameters.
However, I am rather concerned that this could be abused by CSRF attacks, which would make things rather insecure. I have tried to research this and only found answers for specific frameworks such as Django and Rails, whereas I am after a generic implementation for use with PHP.
I have read that you can use jQuery's ajaxSend() function so that a token is sent with EVERY AJAX request, but I have no idea how this can be implemented, as JavaScript obviously has no access to the PHP session variables. Would the use of cookies be secure enough?
Basically I need to be able to check the origin of the request to ensure that the request is genuine and not a forged request used to take advantage of the system.
If anyone can point me in the right direction that would be most appreciated!
Well, do know that $.ajax sends cookies, including the PHP session cookie, with its requests. Using that feature changes your attack from CSRF to session hijacking, but it's a start. Next, run your service over SSL if you can, to avoid the session hijacking.
I'm sure there are other ways to do this as well, but for vanilla PHP, this seems to work. Someone correct me if I'm wrong, please.
Here's how it's done in Django, but nothing about it is framework-specific (besides the CSRF token being stored in a cookie called 'csrftoken'): https://docs.djangoproject.com/en/1.3/ref/contrib/csrf/#ajax
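For a generic PHP take on the same idea, here is a sketch: generate one token per session, print it into the page so JavaScript can read it, and require it on every state-changing AJAX request. The X-CSRF-Token header name is a common convention rather than a standard, and openssl_random_pseudo_bytes() is used for wider PHP version compatibility:

<?php
// Generate a per-session CSRF token once.
session_start();

if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
}

// In the page template, echo the token into the markup, e.g. into a
//   <meta name="csrf-token" content="..."> tag, so the client can read it.
// jQuery can then attach it to every request, for example:
//   $.ajaxSetup({ headers: { 'X-CSRF-Token': token } });

// In the endpoint handling the AJAX POST:
$sent = isset($_SERVER['HTTP_X_CSRF_TOKEN']) ? $_SERVER['HTTP_X_CSRF_TOKEN'] : '';
if ($sent === '' || $sent !== $_SESSION['csrf_token']) {
    http_response_code(403);
    exit('Invalid CSRF token.');
}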

Using a session token or nonce for Cross-Site Request Forgery (CSRF) protection?

I inherited some code that was recently attacked where the attacker sent repeated remote form submissions.
I implemented a prevention measure using a session auth token that I create for each user (not the session ID). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated):
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session then include the session id (via cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some points about these solutions being only single window. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without one? UPDATE: this particular page is just a single form (no login), so starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack without using sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POST token value must be the same as the token in a cookie).
The first two make it really hard to perform any harmful CSRF request, since those are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referrer should be impossible or very hard in modern browsers. The third makes it completely impossible to perform any harmful action without stealing the user's cookies or sniffing their traffic.
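A minimal sketch of that cookie/POST token match (often called a "double-submit cookie"). Another origin can force the browser to POST, but it cannot read the cookie to copy its value into the form field; cookie and field names here are illustrative:

<?php
// When rendering the form: ensure a random token cookie exists.
if (isset($_COOKIE['csrf'])) {
    $token = $_COOKIE['csrf'];
} else {
    $token = bin2hex(openssl_random_pseudo_bytes(32));
    setcookie('csrf', $token, 0, '/');
}
// ...and echo the same $token into a hidden 'csrf' input in the form.

// When handling the POST: the two values must match.
if (!isset($_POST['csrf'], $_COOKIE['csrf'])
    || $_POST['csrf'] !== $_COOKIE['csrf']) {
    http_response_code(403);
    exit('CSRF check failed.');
}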
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how would starting a valid session of the attacker's own do any harm?
Nonces will make all your links ugly - I have never seen anyone use them anymore. And I think your site could be DoSed using them, as you must save and search all the nonces in a database - a lot of requests generating nonces may increase your database size really fast (and searching them will be slow).
If you allow only one nonce per user_id to prevent the DoS attack from (2), then if a user opens a page, then opens another page, and then submits the first page, the request will be denied, because a new nonce was generated and the old one is already invalid.
How else would you identify a unique user without a session ID, be it in a cookie, GET or POST variable?
UPD: As we are not talking about CSRF anymore, you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most of the form fields they see that have plausible names, even if they are hidden from a real user) - see the sketch after this list
JavaScript mouse trackers (you can analyse recorded mouse movements to detect bots)
File request log analysis (when a page is loaded, its JavaScript/CSS/images should be loaded too in most cases, though some (really rare) users have them turned off)
JavaScript form changes (a hidden (or not) field added to the form by JavaScript and required on the server side: bots usually don't execute JavaScript)
Traffic analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submission, etc.)
And more. But at the end of the day, some modern bots fully emulate real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable - besides complex image-recognition algorithms, you can now buy 1,000 human-solved CAPTCHAs for any site for as little as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots - each case is different: sometimes you will have to build a complex defence system yourself; sometimes just a little tweak will help.
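A sketch of the honeypot idea from the first bullet in the list above: render an extra field that real users never see (hide it with CSS), then reject any submission where it is filled in. The field name 'website' is illustrative, chosen to look tempting to a bot:

<?php
// In the form markup (hidden via CSS, e.g. .hp { display: none; }):
//   <input type="text" name="website" class="hp" tabindex="-1" autocomplete="off">

// In the handler:
if (!empty($_POST['website'])) {
    // Humans leave it blank; naive bots fill it in.
    http_response_code(400);
    exit('Submission rejected.');
}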
