I am using a simple PHP API that takes requests and connects to a MySQL DB to store/retrieve user information. Example: login is done using an HTTP POST to the API with username and password.
How do I prevent people from flooding my API with requests, potentially making thousands of accounts in a few seconds?
Thanks
You could serve a token that is generated and remembered on the server side, rendered into your forms, and validated when the form is sent back to your server. That prevents the user from simply generating new POST requests without actually requesting the corresponding form from your server, since they need the token generated on your server to get the request through.
Then there is the already-mentioned CAPTCHA, which in my view would be too much for a login form, but for things like registering a new user a CAPTCHA in combination with the token sounds very good to me.
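A minimal sketch of that token approach, assuming PHP sessions; the file and field names are illustrative:

    <?php
    // form.php -- generate a token and embed it in the form
    session_start();

    // random_bytes() requires PHP 7+; older versions need openssl_random_pseudo_bytes()
    $token = bin2hex(random_bytes(32));
    $_SESSION['form_token'] = $token;
    ?>
    <form method="post" action="register.php">
        <input type="hidden" name="form_token"
               value="<?php echo htmlspecialchars($token); ?>">
        <!-- username / password fields here -->
    </form>

    <?php
    // register.php -- validate the token before processing the request
    session_start();

    if (!isset($_POST['form_token'], $_SESSION['form_token'])
        || !hash_equals($_SESSION['form_token'], $_POST['form_token'])) {
        http_response_code(403);
        exit('Invalid or missing form token');
    }
    unset($_SESSION['form_token']); // one-time use: a replayed token is rejected

Making the token single-use is what stops a script from fetching the form once and replaying the POST thousands of times.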
UPDATE
I just found this article about flood protection of script execution in general. I think their approach is good, as is the IP tracking, provided you have the ability to use memcached or something similar to speed the checks up.
First, when registering a user, also save her IP address at the time of registration in the DB.
If the IP already exists within 45 minutes of previous registration, reject the request.
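A sketch of that check using PDO, assuming an existing $pdo connection and a users table with ip_address and created_at columns (names are illustrative):

    <?php
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM users
         WHERE ip_address = ?
           AND created_at > NOW() - INTERVAL 45 MINUTE'
    );
    $stmt->execute([$_SERVER['REMOTE_ADDR']]);

    if ($stmt->fetchColumn() > 0) {
        http_response_code(429); // Too Many Requests
        exit('Please wait before registering another account');
    }

Keep in mind that NATs and shared proxies put many users behind one IP, so a hard 45-minute block can lock out legitimate registrations.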
Another method is the CAPTCHA; I personally prefer a different method that I have found to be even more effective against robots.
Instead of asking the user to "type what they see in an image" to verify they are human (or a robot with sophisticated image processing),
add another field (for example, city) and make it hidden with JavaScript.
Robots will submit that field to the server; humans will not.
Note that a robot must run the JavaScript in order to know which fields are hidden, and that is a time-consuming process that they usually don't do.
(see the Turing halting problem)
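A minimal honeypot sketch along those lines (the field name is illustrative):

    <form method="post" action="register.php">
        <!-- honeypot: humans never see this field; naive bots fill it in -->
        <div id="city-field">
            <label>City <input type="text" name="city" value=""></label>
        </div>
        <script>document.getElementById('city-field').style.display = 'none';</script>
        <!-- real form fields here -->
    </form>

    <?php
    // register.php -- a non-empty honeypot field means a bot filled in the form
    if (!empty($_POST['city'])) {
        http_response_code(400);
        exit;
    }

Hiding the field with CSS works too, but doing it in JavaScript specifically filters out clients that never execute scripts, which is the author's point above.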
Related
I have a simple PHP script which accepts a $_REQUEST from a JavaScript Ajax call and adds a post to the DB.
But I need to ensure that only JavaScript requests from my domain are allowed, to prevent someone from submitting thousands of junk posts to my DB.
My question is, how do I ensure that my script only accepts $_REQUEST from my domain?
Thanks
The short answer is: You can't.
It sounds like you need to introduce the usual defences against CSRF: generate a random security token and store it in a cookie (or session) as well as in your HTML document, then submit the token as part of your request and compare it to the one in the cookie. If they match, it is an intentional post from the user, not their browser being tricked into making the request by another site.
This won't stop people submitting "thousands of junk posts" though. You also need to authenticate users and check they are authorised to make a submission before allowing it to go through.
You should also consider rate-limiting checks and spam filtering.
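A rough per-IP rate-limiting sketch using APCu, assuming the APCu extension is installed (the key prefix and limits are illustrative):

    <?php
    // allow at most 10 submissions per IP per 60 seconds
    $key = 'post_rate_' . $_SERVER['REMOTE_ADDR'];

    // apcu_add() only succeeds when the key does not exist yet,
    // so the 60-second TTL starts at the first request in the window
    if (!apcu_add($key, 1, 60)) {
        if (apcu_inc($key) > 10) {
            http_response_code(429);
            exit('Rate limit exceeded');
        }
    }

If you run multiple web servers you would want a shared store such as memcached or Redis instead, since APCu is per-machine.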
You use a 'secret' key, the response, and the remote IP to validate.
Google has provided this for you
https://developers.google.com/recaptcha/docs/verify
https://developers.google.com/recaptcha/docs/display
https://developers.google.com/recaptcha/docs/invisible#auto_render
Works like a charm.
Once you implement it, you get an admin panel here:
https://www.google.com/recaptcha/admin
There you set the domains to be only your URLs.
This does what you want: the form validates from your domain only, using both keys, with server-side and client-side integration. If someone tries to generate the "key" using their own domain, reCAPTCHA will detect it as spam.
(see the verify link above)
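A sketch of the server-side verification step against the siteverify endpoint from the docs linked above, assuming the cURL extension is available; the secret key is your own from the admin panel:

    <?php
    // verify the g-recaptcha-response value the client posted with the form
    $ch = curl_init('https://www.google.com/recaptcha/api/siteverify');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS     => http_build_query([
            'secret'   => 'YOUR_SECRET_KEY',
            'response' => isset($_POST['g-recaptcha-response'])
                              ? $_POST['g-recaptcha-response'] : '',
            'remoteip' => $_SERVER['REMOTE_ADDR'],
        ]),
    ]);
    $result = json_decode(curl_exec($ch), true);
    curl_close($ch);

    if (empty($result['success'])) {
        http_response_code(403);
        exit('CAPTCHA verification failed');
    }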
I have a webpage in which the user is awarded X points on clicking a button. The button sends an AJAX request (jQuery) to a PHP file which then awards the points. It uses POST.
Since it is all client side, the PHP file and its parameters are visible to the user.
Can the user automate this process by making a form with the same fields and sending the request?
How can I avoid this type of CSRF? Even session authentication is not useful.
You should handle that on the server side if you really want to prevent multi-voting or prevent the same people from voting several times on the same subject.
This is why real votes always use authenticated users and never anonymous votes.
By checking that the request really is an XmlHttpRequest (with @Shaun Hare's response code or with the Stack Overflow question linked in your question's comments) you will block some of the CSRF, but you won't prevent a repost from the user using tools like LiveHttpHeaders' 'replay' and such. Everything coming from the client side can be forged, everything.
Edit: if it's not a voting system, as you commented, the problem is the same: you need 'something' to know whether the user is doing this action for the first time, or whether he can still do this action. There are a lot of different options available.
You can set a token on your page, use that token in the AJAX requests, and invalidate the token for later usage on the server side. This is one way; the problem is where to store these tokens server-side (sessions, caches, etc.).
Another way is to check on the server side that the situation is still valid (for example, a request asking to update 'something' could carry a hash/marker/timestamp that you verify against the current server-side state).
This is a very generic question; solutions depend on the reality of the 'performed action'.
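A sketch of the single-use-token idea with the session as the store (file and field names are illustrative):

    <?php
    // page.php -- issue a single-use token for the "award points" action
    session_start();
    $_SESSION['award_token'] = bin2hex(random_bytes(32));
    // render it into the page for the AJAX call to send back,
    // e.g. as a data-token attribute on the button

    <?php
    // award.php -- consume the token; a second POST with the same token fails
    session_start();
    $sent = isset($_POST['token']) ? $_POST['token'] : '';

    if (!isset($_SESSION['award_token'])
        || !hash_equals($_SESSION['award_token'], $sent)) {
        http_response_code(403);
        exit;
    }
    unset($_SESSION['award_token']); // invalidated: the action cannot be replayed
    // ... award the points here ...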
Check that it is an Ajax call in PHP by checking
$_SERVER['HTTP_X_REQUESTED_WITH']
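For example (jQuery and most JS libraries set this header, but any client, cURL included, can spoof it, so treat this as a convenience check rather than security):

    <?php
    $isAjax = isset($_SERVER['HTTP_X_REQUESTED_WITH'])
        && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest';

    if (!$isAjax) {
        http_response_code(400);
        exit('Expected an Ajax request');
    }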
So I have a form that can create an account, and right now the account is created by calling a JavaScript REST API. I was thinking that this might be really easy to abuse programmatically, since all someone would need to do is look at the JavaScript to find the URL to spam, and that it might be safer to do the processing in a PHP script. Then I thought, well, they could just look at the form to find the URL just as easily if I don't do it through JavaScript. The form will process only POST data, but I'm not sure that is enough, or whether it matters if I process it through JavaScript or PHP.
What is the best way to prevent someone from spamming a form programmatically (i.e. prevent them from writing server-side code, like PHP, or client-side code, like JavaScript, that spams the processing script)?
One way is to use a CAPTCHA, such as reCAPTCHA, to filter the bots out, but it's not 100% protection.
Using a CAPTCHA is probably the first method:
Google's Version
Secondly, I would do data checking on the server side and possibly email verification; if the e-mail is not verified, I would have a cron job clean out the rows in your table that lack e-mail verification.
With these two methods you should avoid a good majority of it.
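A sketch of such a cleanup job, assuming a users table with email_verified and created_at columns (the names and the 24-hour window are illustrative):

    <?php
    // cleanup.php -- run from cron, e.g.: 0 * * * * php /path/to/cleanup.php
    $pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');

    // delete accounts that never verified their e-mail within 24 hours
    $pdo->exec(
        'DELETE FROM users
         WHERE email_verified = 0
           AND created_at < NOW() - INTERVAL 24 HOUR'
    );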
Go for reCAPTCHA. It's pretty easy.
You can obtain a key pair there by registering your website URL. Use that key to generate the reCAPTCHA image/textbox in your form. Your form's data will be posted and added to the database only if the entry matches the text displayed in the image, otherwise not (that's a server-side check that you have to perform). You'll find plenty of related code on Google :)
Another technique, which most websites nowadays follow, is to send an account activation link to the user via email. An account gets created only when that activation link is clicked. You can also set an expiration time (say, 24 hours) for this purpose.
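A sketch of issuing and honouring such an activation link (table/column names and the URL are illustrative):

    <?php
    // after inserting the unverified account row with id $userId:
    $token = bin2hex(random_bytes(32));

    $stmt = $pdo->prepare(
        'UPDATE users
         SET activation_token = ?, token_expires = NOW() + INTERVAL 24 HOUR
         WHERE id = ?'
    );
    $stmt->execute([$token, $userId]);

    mail($email, 'Activate your account',
         'Click to activate: https://example.com/activate.php?token=' . $token);

    <?php
    // activate.php -- activate only if the token matches and has not expired
    $stmt = $pdo->prepare(
        'UPDATE users SET active = 1, activation_token = NULL
         WHERE activation_token = ? AND token_expires > NOW()'
    );
    $stmt->execute([isset($_GET['token']) ? $_GET['token'] : '']);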
Hope this helps.
What are some methods that could be used to secure a login page from being logged into by a remote PHP script using cURL? Checking the referrer and user agent won't work, since those can be set with cURL. The ideal solution would avoid a CAPTCHA; the point of this question is to figure out whether that is possible.
One approach is to include some JavaScript in your login form and make it so that the form cannot possibly be successfully submitted unless that JavaScript has run. This makes your login form usable only for people with JavaScript turned on, which cURL doesn't have. If the necessary JavaScript is some kind of challenge/response that differs every time (for instance, use something like http://www.ohdave.com/rsa/ to make it non-trivial), the presence of the correctly set value in the form is good evidence that JavaScript ran.
You won't be able to stop all automated scripts, though; it is easy enough to write scripts that drive an actual browser engine, and those will pass this test.
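A toy version of that idea, assuming PHP sessions; the 'challenge' here is trivial arithmetic, far weaker than the RSA approach linked above, since a scraper that parses the page can still compute it:

    <?php
    // login_form.php -- issue a fresh challenge with every form (PHP 7+ for random_int)
    session_start();
    $a = random_int(100, 999);
    $b = random_int(100, 999);
    $_SESSION['challenge_answer'] = $a + $b;
    ?>
    <form method="post" action="login.php"
          onsubmit="document.getElementById('challenge').value =
                        <?php echo $a; ?> + <?php echo $b; ?>;">
        <input type="hidden" id="challenge" name="challenge" value="">
        <!-- username / password fields here -->
    </form>

    <?php
    // login.php -- reject clients whose JavaScript never ran
    session_start();
    $answer = isset($_POST['challenge']) ? (int)$_POST['challenge'] : -1;

    if (!isset($_SESSION['challenge_answer'])
        || $answer !== $_SESSION['challenge_answer']) {
        http_response_code(403);
        exit('JavaScript required');
    }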
There isn't any simple way to prevent it. If the script knows the username and password, it will be able to log in.
You could use a captcha so that automated logins won't be able to read it, but that will be a burden on actual users as well.
If you are concerned about it being used to try to brute-force a login, you could require some additional information after several attempts (a sketch follows the list below):
Disable the account and require reactivation via email
Require a captcha after several unsuccessful attempts
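A sketch of counting failed attempts per account, assuming failed_logins and locked columns on the users table (the names and the threshold of 5 are illustrative):

    <?php
    // after a failed password check:
    $pdo->prepare('UPDATE users SET failed_logins = failed_logins + 1
                   WHERE username = ?')
        ->execute([$username]);

    $stmt = $pdo->prepare('SELECT failed_logins FROM users WHERE username = ?');
    $stmt->execute([$username]);

    if ($stmt->fetchColumn() >= 5) {
        // lock the account and require reactivation via e-mail,
        // or demand a CAPTCHA on the next attempt
        $pdo->prepare('UPDATE users SET locked = 1 WHERE username = ?')
            ->execute([$username]);
    }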
If I understand correctly:
you have a login page which executes a login script
the login script is being hit by a remote cURL script...
Solution
In the login page, place a hidden element with a secret unique code that can be used only once, and save this secret code in the session. In the login script, look in the session for this code and compare it with what was posted to the script; they must be the same to proceed. Then clear it from the session...
more about subject: http://en.wikipedia.org/wiki/Cross-site_request_forgery
cURL is no different from any other client (e.g. a browser). You could use a nonce tied to a session in a hidden input field to prevent POST requests from being made directly, but there are still ways around that. It's also a good idea to limit the number of login attempts per minute to make brute-force attacks more difficult, if that's what you're worried about.
I was looking at the livehttpheaders plugin for Firefox and decided to test my login page. I noticed that the parameters shown inside of it contain my login and password. For example:
username=sarmenhb&password=thepassword&submit=Login
in plain text.
I do not see this on other sites.
What could I be doing wrong? I see this as a security flaw. The login page just validates and logs in the user. All fields are run through mysql_real_escape_string (in case that is relevant).
The information has to get to the server from the client somehow. Use SSL if you are worried about security.
Even if you do an MD5 hash in JavaScript, this does not help, because it is trivial to submit the hash to the login page, and the hash effectively becomes the password. Everything is plain text until it, or the transport, is encrypted. POST the variables and use SSL.
To add from my comment below: you may not see the headers for other sites because they may use AJAX, the POST method, or another client-side mechanism to authenticate.
This reminds me of a certain building in a large city (I am sure there are others in other places) where they have a web based interface to the building concierge. Residents can log on to a web site (over http) and specify (among other things) who is allowed to enter their apartment for repairs etc in their absence. I am sure the whole thing was programmed by someone's nephew who is a 'guru'.
I am sure it is, shall we say, good enough.
You're seeing it for your site and not for others because livehttpheaders shows the URL for GET requests, but doesn't show the content for POST requests.
Sending login information through GET requests is a minor extra security hole compared to sending it via POST, in that the URLs of GET requests are often logged in various places, whereas almost no one logs POST content. Does everyone with permission to look at the webserver logs have permission to know the CEO's password?
However, as others have pointed out, unless you're using https: for login, data is going across the network in plain text whether you use GET or POST. This is almost always bad.
Still, as an intermediate measure I would change your app to send the username and password via POST, not GET, so that you don't end up storing usernames and passwords in your webserver logs; it's no use encrypting over the wire with https if you then write the username and password to an insufficiently protected logfile on the server.
When you are using http and submit a form, the form contents are sent across the wire "in the clear", as you're seeing. When that form submission includes credentials, then yes, you have a security issue.
Among your alternatives are:
Use https, so that over-the-wire communication is encrypted
Use OpenID for login, which pushes credential management off onto the user's OpenID provider
Use Javascript on the client side to encrypt the credentials before posting the form
The latter approach tends to get people into trouble if they're not very careful, because the mechanism for encrypting the credentials is fully visible to anyone who cares to inspect the JavaScript.
LiveHttpHeaders shows POST requests as well. POST sends the data much as GET does; the difference is that in GET the variables are passed in the URL itself, whereas in POST they are sent in the body of the HTTP request.
For somewhat better security you can hash in JS (only the password, or token + password), but that can still be attacked, e.g. using rainbow tables against MD5 or any other unsalted hash.
SSL is the only way to achieve high security.