In the HTML form I wrote, a user can upload a file that is sent on submission via Ajax. The script lives at exampledomain.com/upload.php.
I want to avoid a situation where somebody makes their own form with an action pointing at upload.php and starts spamming my server with unauthorized files.
Anybody can fill out my HTML form, so login/password protection for uploads is not an option for me. Is there any efficient way of making sure that upload.php is triggered only from the HTML form on my website?
One way of dealing with the problem would be sessions, but I'm not sure how well they work with Ajax. Any suggestions?
Although Cross Site Request Forgery is normally classed as an attack on already-authenticated users' accounts via another site, the techniques used to combat it will work for you to some degree, even though, as you say, your users aren't authenticated as such.
Specifically, generating a nonce (one-time unique value) with every form, as described in Jeff Atwood's article here, will help by preventing other sites from simply POSTing data to your servers -- if you validate that the nonce value sent with the POST request is one that you've recently generated, it must at least have come from someone who's "visited" your site somehow.
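A minimal sketch of that idea in PHP, using the session to store the token (the field and session-key names here are illustrative, and random_bytes()/hash_equals() need PHP 7/5.6+):

<?php
// form.php -- issue a fresh nonce and embed it in the form
session_start();
$_SESSION['upload_token'] = bin2hex(random_bytes(32)); // unpredictable one-time value
?>
<form action="upload.php" method="post" enctype="multipart/form-data">
    <input type="hidden" name="token" value="<?php echo $_SESSION['upload_token']; ?>">
    <input type="file" name="userfile">
    <input type="submit" value="Upload">
</form>

<?php
// upload.php -- refuse any request that doesn't carry the nonce we issued
session_start();
if (!isset($_POST['token'], $_SESSION['upload_token'])
        || !hash_equals($_SESSION['upload_token'], $_POST['token'])) {
    http_response_code(403);
    exit('Invalid token');
}
unset($_SESSION['upload_token']); // one-time use
// ... handle the upload ...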
However, that won't prevent spammers from attacking your site by scraping the nonce values from your site, i.e. pretending to be your users and using your actual forms. For that, you probably want to look into various techniques like CAPTCHAs, blacklists, and other validations.
Personally, I think if you're not going to use authenticated users (by which I mean at least requiring a CAPTCHA plus email-address validation for a user to register, then requiring them to authenticate before uploading), you'd pretty much have to use a good CAPTCHA system, probably combined with regular checks for unusual activity and sampling of the uploads, to combat the inevitable spam and attacks. There's a reason most popular web services require these checks, sadly.
(Incidentally, sessions work fine with Ajax -- just make sure to use the same session-handling code in the page that responds to the Ajax request. But without an authenticated user, the sessions won't buy you anything in the way of security, as far as I can see. A spambot can cope perfectly well with sessions.)
I want to avoid a situation where somebody makes their own form with an action pointing at upload.php and starts spamming my server with unauthorized files.
That's impossible.
How to allow PHP upload from only one domain?
Just so you know, files are uploaded not from a "domain" but from the user's PC.
As long as registration on your site is unrestricted, anyone can spam. You have to choose another protection against this imaginary threat.
The best way to protect against spammers is a CAPTCHA (try reCAPTCHA). Other techniques that come to mind are a unique code in a hidden form field (good for protecting against CSRF attacks) and verifying the domain name with $_SERVER['HTTP_REFERER'], but those are relatively easy to counter.
Add a PHP variable to the page to act like a password, but rather than a static value, which could be copied, use a calculated value that changes every minute.
The calculated value sent will have to match the value calculated in upload.php for the file to be accepted.
This still doesn't stop spamming via your site though.
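A rough sketch of that scheme, assuming a server-side secret that never leaves the server; the token changes every minute, and upload.php accepts the current or previous minute's value to allow for the gap between page load and submission:

<?php
// Secret known only to the server (illustrative value -- keep it out of the web root)
define('FORM_SECRET', 'change-me-to-something-random');

// Token for the current minute (pass -1 for the previous minute)
function minute_token($offset = 0) {
    return hash_hmac('sha256', (string)(floor(time() / 60) + $offset), FORM_SECRET);
}

// In the page that renders the form, embed minute_token() in a hidden
// input named "t"; then in upload.php:
$sent = isset($_POST['t']) ? $_POST['t'] : '';
if (!hash_equals(minute_token(), $sent) && !hash_equals(minute_token(-1), $sent)) {
    http_response_code(403);
    exit('Stale or missing token');
}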
You could also calculate the size of the uploaded file in kilobytes, attach that integer to a session variable, and check that variable on the next landing page after the login!
I have a public form that publishes POST data to a PHP script.
This form is not located on the same domain and doesn't use PHP either, so the protection cannot be built around a PHP session.
The goal is to allow only this form to post to that PHP script.
The question "How do I provide more security for checking source of the request" tells how to implement CSRF protection using a PHP session, but I wonder how I could protect mine without it. Is it possible?
POST requests are harder to fake compared to GET requests, so you have that going for you, which is nice. Just make sure you're not using $_REQUEST in your script.
You cannot use sessions here, but the principles are the same - you gotta implement some kind of a "handshake" between a form and your PHP script. There are a few different approaches if sessions are not an option.
The simplest thing to do would be to check HTTP referrers. This will not work if the form is on HTTP and the script is under HTTPS, and it can also be defeated via an open-redirect vulnerability.
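A minimal version of that check in PHP might look like the following (the host name is a stand-in for wherever the form actually lives); remember the header can be absent or forged, so treat it as a speed bump only:

<?php
// Accept the POST only if the Referer header names the expected host.
$expected = 'forms.example.com'; // hypothetical host serving the form
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (parse_url($referrer, PHP_URL_HOST) !== $expected) {
    http_response_code(403);
    exit('Bad referrer');
}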
Another way to go would be CAPTCHAs. I know, not user friendly or fashionable these days, but that would make request forgery much harder, as a hacker could not make his exploit work behind the scenes without any user input. You should look into reCAPTCHA (Google's "I am not a robot" checkbox): https://www.google.com/recaptcha/intro/index.html
This is a tricky situation, because a form on one host and a script on another is basically CSRF in itself, so you want to allow it, but only for one host. Complete security without any user interaction might be impossible here, so just try to make it as hard as possible for a would-be hacker to mess with your script, or suffer on the UX side. Personally, I would go with reCAPTCHA.
So, it's impossible to do AJAX requests securely without using SSL. I get it. You can either view-source the data that's being sent via JavaScript, or you can directly access the PHP page by spoofing headers, yada yada.
But let's say this web app doesn't particularly require true security, and instead it's just a sort of game to keep most reverse-engineers at bay. What sort of hurdles should I employ?
I'm not looking for some ridiculously over-the-top Javascript implementation of an encryption algorithm. I want simplicity as well as mild security... if that isn't contradictory by nature. So, what would you guys recommend?
For example, I'm running a contest where, if a user clicks an image successfully (jQuery), it passes their userid and a timestamp to a PHP page, both salted with random data, MD5-hashed, and then MIME-encoded. The PHP page then validates the userid and timestamp, and returns a winning "code" in the form of another salted MD5 hash. I'm also employing multiple header checks to help ensure the request comes from a valid location. Am I missing anything, or is that about all I can do? It seems like someone could just fire the jQuery click event and ruin the whole thing, but I don't see how I can prevent that.
I'll be awarding the answer to anyone who comes up with an ingenious faux-security mechanism! Or... just to whoever tells me why I'm stupid this time.
I believe header checks can be easily fooled. Doesn't hurt though.
Since your algorithm is exposed on the client side, the user can simply send the appropriate data to your server with an automated script to fool your server into thinking it was clicked.
In addition to that, you have to watch out for session hijacking. A user can essentially submit this Ajax request on behalf of someone else, especially if they have the algorithm. Does your application behave differently for certain users? If so, session hijacking could turn into a privilege escalation issue.
It is not necessarily true that you need to encrypt the payload with SSL in your case in order to build a secure application. From what you've described, there is no sensitive data being sent over the wire.
Ensure that you have some basic sanity checks on the server side to catch automated or malicious behavior. For example, if you find that header information is missing, you may want to flag or alert that someone is toying with the request. Another thing worth examining is the pattern of requests.
A more secure model is to have the server assign the user some session token that they cannot reverse-engineer. This session token ideally should begin with the timestamp instead of the username to promote the avalanche effect of the salted hashing algorithm.
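A sketch of issuing such a token in PHP (the salt value and variable names are illustrative, and $userId is assumed to come from your own user handling):

<?php
session_start();
// Issue a token the client cannot reconstruct: timestamp first, then the
// user id, hashed together with a server-side salt the client never sees.
$salt  = 'server-side-secret'; // illustrative; keep it out of the web root
$token = hash('sha256', time() . '|' . $userId . '|' . $salt);
$_SESSION['click_token'] = $token;
// Send $token to the client with the page; when the Ajax request comes
// back, compare the submitted value against $_SESSION['click_token']
// using hash_equals() before handing out the winning code.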
Since it seems like your application deals with prizes and potentially money, I would invest some more time in securing this app. Hope these tips have helped you.
I inherited some code that was recently attacked where the attacker sent repeated remote form submissions.
I implemented a prevention using a session auth token that I create for each user (not the session id). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated).
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session then include the session id (via cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some points about these solutions being single-window only. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without one? UPDATE: this particular page is just a single-page form (no login), so starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack and that would not use sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POSTed token value must be the same as the token in the cookie).
The first two will make it really hard to do any harmful CSRF request, as those are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referrer should be impossible or very hard to do in modern browsers. The third will make it completely impossible to do any harmful action without stealing the user's cookies or sniffing his traffic.
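Put together, those three checks might look roughly like this in PHP (the cookie and field names are illustrative; how the auth_token cookie gets issued at login is left out):

<?php
// 1. Data-changing actions: POST only
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit;
}

// 2. Referrer must name our own domain (coarse check; the header can be absent or forged)
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (parse_url($referrer, PHP_URL_HOST) !== 'www.example.com') {
    http_response_code(403);
    exit;
}

// 3. Double-submit token: the value POSTed by the form must match the cookie
if (!isset($_POST['auth_token'], $_COOKIE['auth_token'])
        || !hash_equals($_COOKIE['auth_token'], $_POST['auth_token'])) {
    http_response_code(403);
    exit;
}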
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then an attacker must know the user's token from the cookie to send it along with the request, so what harm can starting a valid session of the attacker's own do?
Nonces will make all your links ugly -- I don't see anyone using them anymore. And I think your site could be DoSed using them, as you must save and search all the nonces in a database: a flood of requests that generate nonces can grow your database really fast (and searching it will be slow).
If you allow only one nonce per user_id to prevent the DoS attack in (2), then when a user opens a page, opens another page, and then submits the first page, his request will be denied, because a new nonce was generated and the old one is already invalid. (This is the "single window" limitation you asked about.)
How else would you identify a unique user without a session ID, be it in a cookie, GET, or POST variable?
UPD: as we are not talking about CSRF anymore, you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most of the form fields they see that have promising names, even if those fields are actually hidden from the user) -- see the sketch after this list
JavaScript mouse trackers (you can analyse recorded mouse movements to detect bots)
File request log analysis (when a page is loaded, its JavaScript/CSS/images should be loaded too in most cases, though some (really rare) users have them turned off)
JavaScript form changes (a hidden (or not) field added to the form with JavaScript and required on the server side: bots usually don't execute JavaScript)
Traffic analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submission, etc.)
and more, but at the end of the day some modern bots completely emulate real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable: besides complex image-recognition algorithms, you can now buy 1,000 human-solved CAPTCHAs for any site for as little as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots -- each case is different: sometimes you will have to build a complex defence system yourself, sometimes just a little tweak will help.
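To illustrate the first item in the list above, a honeypot field might be done like this (the field name "website" and the CSS trick are just one common convention):

<!-- In the form: a field humans never see or fill; hidden via CSS rather
     than type="hidden" so that naive bots still treat it as fillable -->
<div style="position:absolute; left:-9999px" aria-hidden="true">
    <label>Leave this empty: <input type="text" name="website"></label>
</div>

<?php
// On the server: a non-empty honeypot means something automated filled it in
if (!empty($_POST['website'])) {
    http_response_code(400);
    exit; // silently drop it, or log it for analysis
}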
I'm currently writing a web application which uses forms and PHP $_POST data (so far, so standard! :)). However (and this may be a noob query), I've just realised that, theoretically, someone could put together an HTML file on their own computer with a fake form, set the form's action to one of the scripts used on my site, populate it with their own random data, and submit it, causing problems.
I sanitise data etc., so I'm not (too) worried about XSS or injection-style attacks; I just don't want someone to be able to, for instance, add nonsense items to a shopping cart.
Now, I realise that for some of the scripts I can write in protection, such as only allowing things into a shopping cart that can be found in the database, but there may be certain situations where it wouldn't be possible to predict all cases.
So, my question is: is there a reliable way of making sure that my PHP scripts can only be called by forms hosted on my site? Perhaps some HTTP referrer check in the scripts themselves, though I've heard this can be unreliable, or maybe some .htaccess voodoo? It seems like too large a security hole (especially for things like customer reviews or any customer input) to just leave open. Any ideas would be very much appreciated. :)
Thanks again!
http://en.wikipedia.org/wiki/Cross-site_request_forgery
http://www.codewalkers.com/c/a/Miscellaneous/Stopping-CSRF-Attacks-in-Your-PHP-Applications/
http://www.owasp.org/index.php/PHP_CSRF_Guard
There exists a simple rule: Never trust user input.
All user input, no matter what the case, must be verified by the server. Forged POST requests are the standard way to perform SQL injection attacks or other similar attacks. You can't trust the referrer header, because that can be forged too. Any data in the request can be forged. There is no way to make sure the data has been submitted from a secure source, like your own form, because any and all possible checks require data submitted by the user, which can be forged.
The one and only way to defend yourself is to sanitize all user input. Only accept values that are acceptable. If a value, like an ID, refers to a database entity, make sure it exists. Never insert unvalidated user input into queries, and so on. The list goes on.
While it takes experience to recognize all the different cases, here are the most common ones you should watch out for:
Never insert raw user input into queries. Either escape it using functions such as mysql_real_escape_string() or, better yet, use prepared statements through an API like PDO (see the sketch after this list). Using raw user input in queries can lead to SQL injection.
Never output user-submitted data directly to the browser. Always pass it through functions like htmlentities(). Even if the data comes from the database, you shouldn't trust it, as the original source of all data is generally the user. Outputting data carelessly to the user can lead to XSS attacks.
If any user submitted data must belong to a limited set of values, make sure it does. For example, make sure that any ID submitted by the user exists in the database. If the user must select value from a drop down list, make sure the selected value is one of the possible choices.
Any and all input validation, such as allowed letters in usernames, must be done on the server side. Form validation on the client, such as JavaScript checks, is merely for the user's convenience; it provides no data security.
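A short sketch of the first two points, assuming a PDO connection is already available in $pdo and the table/column names are made up:

<?php
// 1. Never interpolate raw input into SQL: use a prepared statement
$stmt = $pdo->prepare('SELECT id, name FROM products WHERE id = ?');
$stmt->execute(array($_POST['product_id']));
$product = $stmt->fetch(PDO::FETCH_ASSOC);
if ($product === false) {
    exit('No such product'); // the submitted ID must refer to a real row
}

// 2. Never echo user-supplied data raw: escape it for HTML output
echo htmlentities($product['name'], ENT_QUOTES, 'UTF-8');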
Take a look at the Nettuts tutorial on the topic.
Just updating my answer with a previously accepted answer on the same topic.
The answer to your question is short and unambiguous:
is there a reliable way of making sure that my PHP scripts can only be called by forms hosted on my site?
Of course not.
In fact, no scripts are ever called by forms hosted on your site. All scripts are called by forms hosted in the client's browser.
Knowing that will help you understand the matter.
it wouldn't be possible to predict all cases.
On the contrary, it would.
All good sites do that.
There is nothing hard in it, though.
Each form contains a limited number of parameters, and you just have to check every parameter -- that's all.
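Checking every parameter can be as plain as the following sketch (the field names and allowed values are made up for illustration):

<?php
// Whitelist-check each expected field; reject anything out of range
$errors = array();

if (!isset($_POST['qty']) || !ctype_digit($_POST['qty']) || (int)$_POST['qty'] < 1) {
    $errors[] = 'qty must be a positive integer';
}

$allowedSizes = array('S', 'M', 'L');
if (!isset($_POST['size']) || !in_array($_POST['size'], $allowedSizes, true)) {
    $errors[] = 'size must be one of the offered options';
}

if ($errors) {
    http_response_code(400);
    exit(implode("\n", $errors));
}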
As you have said, ensuring that products exist in the database is a good start. If you take address information with a ZIP or postcode, make sure it's valid for the city provided. Make countries and cities drop-downs, and check that the city is valid for the country provided.
If you take email addresses, make sure they are valid addresses, and maybe send a confirmation email with a link before the transaction is authorised. The same goes for phone numbers (a confirmation code in a text), although validating a phone number may be hard.
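For the email check, PHP's built-in filter is the usual starting point (the form field name here is assumed):

<?php
$email = isset($_POST['email']) ? trim($_POST['email']) : '';
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    exit('Please supply a valid email address');
}
// Then send the confirmation link to $email before authorising the transaction.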
Never store credit card or payment details if it can be avoided at all (I'm inclined to believe that there are very few situations where it is needed to store details).
Basically, the rule is: make sure that all inputs are what you expect them to be. You're not going to catch everything (names and addresses have to accept virtually any character), but that should get most of them.
I don't think there is any way of completely ensuring that submissions are coming from your form. The HTTP referrer and perhaps hidden fields in your form may help, but they are not reliable. All you can do is validate everything as strictly as possible.
I don't see the problem, as long as you trust your way of sanitizing data... and you say you sanitize it.
You do know about http://php.net/manual/en/function.strip-tags.php, http://www.php.net/manual/en/function.htmlentities.php, and http://www.php.net/manual/en/filter.examples.validation.php,
right?
I'm looking for a way to encrypt an HTML form in PHP such that I can then decrypt it in the browser using JavaScript. This should work transparently to the user, and JavaScript input validation must also work on the form (I know how to do this). When the user submits the form, it must be encrypted again and sent to the server using an "AJAX" request.
Edit: this will be used as an alternative CAPTCHA system, so scripts cannot submit forms except by some clever design.
Edit 2: I know this is breakable; everything is. Car locks are breakable, but we still use them. It is not meant to be the ultimate CAPTCHA, but a speed bump that will drive away all but the most persistent people.
Thank you
This is the same problem as with DRM: the user has the ciphertext. The decryption is done on the user's system, so the user must have the key too. If the user has both the key and the ciphertext, all encryption is pointless.
If you just want to transmit data safe from outside snoopers, why not just use SSL (HTTPS)?
You can use base64.
<?php
echo base64_encode('html source');
?>
and then you can use the jQuery plugin http://plugins.jquery.com/project/base64 or plain JavaScript http://www.webtoolkit.info/javascript-base64.html to decode it.
If you're trying to use this to stop spam, I've got some bad news for you:
The price of humans who'll spam blogs is falling to zero
This is a reality. On a site I run, I had a CAPTCHA system set up that spam was still getting through. All but about two of the spammers were coming from poorer regions of the world, so I suspected there were companies paying people to spam.
To test this, I set accounts created by people in certain regions to be visible only to them, and after they posted some content I alerted them to the fact that their account had been auto-hidden, providing a form to contact us and complain if they were a legitimate user. Upon doing this we started getting about ten emails a day from people angry that we had hidden their accounts; yet when we checked the content they had added, they were spammers! It sounds crazy, but unfortunately it now seems to be humans doing the bulk of the spam. The spammers know we use CAPTCHAs, so they have adapted. :(
CAPTCHAs are fast becoming useless (if they aren't already). Adding a link so users can report spam, and having karma levels where trusted users are granted admin privileges so that their flagging automatically hides spam without prior confirmation (as Stack Overflow does), is really the only effective way to stop spam now.
For a CAPTCHA, the only way to defeat scripts is something that can only be done by a human, such as recognizing something in an image or doing some math.
All decryption that's done by the browser can be just as easily done by automated scripts.