I have a webpage in which the user is awarded X points on clicking a button. The button sends an AJAX request (jQuery) to a PHP file, which then awards the points. It uses POST.
Since this all happens client side, the PHP file and its parameters are visible to the user.
Can the user automate this process by making a form with the same fields and sending the request?
How can I avoid this type of CSRF? Even session authentication is not useful.
You should handle that on the server side if you really want to prevent multi-voting or stop the same person from voting several times on the same subject.
This is why real votes always use authenticated users and never anonymous votes.
By checking that the request really is an XmlHttpRequest (with Shaun Hare's response code or with the Stack Overflow question linked in your question's comments) you will block some of the CSRF, but you won't prevent a repost from the user using tools like LiveHttpHeaders' 'replay' and such. Everything coming from the client side can be forged, everything.
Edit: if it's not a voting system, as you commented, the problem is the same: you need 'something' to know whether the user is doing this action for the first time, or whether he can still do this action. There are a lot of different options available.
You can set a token on your page, use that token in the AJAX requests, and invalidate it for later usage on the server side. This is one way. The problem is where to store these tokens server side (sessions, caches, etc.).
Another way is to check on the server side that the situation is still valid (for example, a request asking to update 'something' could carry a hash/marker/timestamp that you can verify against the current server-side state); a sketch follows below.
This is a very generic question; solutions depend on the reality of the 'performed action'.
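To make the state-check idea concrete, here is a minimal PHP sketch (all names are hypothetical, not from the question): the page embeds a fingerprint of the current server-side state, and the update handler verifies it before acting, so a replayed or forged request against changed state is rejected.

<?php
// Sketch: verify that the state the client saw is still the current state.
session_start();

// When rendering the page: embed a fingerprint of the current state
// (here, the user's point total) for the AJAX call to send back.
$currentPoints = 100; // e.g. loaded from the database
$stateHash = hash_hmac('sha256', 'points:' . $currentPoints, session_id());

// When handling the AJAX POST: recompute from the *current* server-side
// state and compare with what the client sent.
$sent = isset($_POST['state_hash']) ? $_POST['state_hash'] : '';
if (!hash_equals($stateHash, $sent)) {
    http_response_code(409); // state changed, or request was forged/replayed
    exit('Stale or invalid request');
}
// ...award the points / perform the update...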
Check that it is an AJAX call in PHP by inspecting
$_SERVER['HTTP_X_REQUESTED_WITH']
Related
I want to use POST to update a database and don't want people doing it manually, i.e., it should only be possible through AJAX in a client. Is there some well-known cryptographic trick to use in this scenario?
Say I'm issuing a GET request to insert a new user into my database at site.com/adduser/<userid>. Someone could overpopulate my database by issuing fake requests.
There is no way to avoid forged requests in this case, as the client browser already has everything necessary to make the request; it is only a matter of some debugging for a malicious user to figure out how to make arbitrary requests to your backend, and probably even using your own code to make it easier. You don't need "cryptographic tricks", you need only obfuscation, and that will only make forging a bit inconvenient, but still not impossible.
It can be achieved.
Whenever you render a page which is supposed to make such a request, generate a random token and store it in the session (for an authenticated user) or the database (in case this request is publicly allowed).
Instead of calling site.com/adduser/<userid>, call site.com/adduser/<userid>/<token>.
Whenever you receive such a request, check whether the token is valid (against the session or database).
If the token is correct, process the request and remove the used token from the session / DB.
If the token is incorrect, reject the request (a sketch of this flow follows).
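A minimal PHP sketch of that flow, assuming session storage (the session key and URL layout are illustrative, not from the question):

<?php
session_start();

// 1. When rendering the page: generate a token and remember it.
$token = bin2hex(openssl_random_pseudo_bytes(16));
$_SESSION['adduser_tokens'][$token] = time();
// Render the link as site.com/adduser/<userid>/<token>

// 2. When handling the request: validate and consume the token.
$sent = isset($_GET['token']) ? $_GET['token'] : '';
if (isset($_SESSION['adduser_tokens'][$sent])) {
    unset($_SESSION['adduser_tokens'][$sent]); // single use: remove it
    // ...process the adduser request...
} else {
    http_response_code(403); // missing, reused or forged token
}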
I don't really need to restrict access to the server (although that would be great), I'm looking for a cryptographic trick that would allow the server to know when things are coming from the app and not forged by the user using a sniffed token.
You cannot do this. It's almost one of the fundamental problems with client/server applications. Here's why it doesn't work: Say you had a way for your client app to authenticate itself to the server - whether it's a secret password or some other method. The information that the app needs is necessarily accessible to the app (the password is hidden in there somewhere, or whatever). But because it runs on the user's computer, that means they also have access to this information: All they need is to look at the source, or the binary, or the network traffic between your app and the server, and eventually they will figure out the mechanism by which your app authenticates, and replicate it. Maybe they'll even copy it. Maybe they'll write a clever hack to make your app do the heavy lifting (You can always just send fake user input to the app). But no matter how, they've got all the information required, and there is no way to stop them from having it that wouldn't also stop your app from having it.
Prevent Direct Access To File Called By ajax Function seems to address the question.
You can (among other solutions, I'm sure)...
use session management (log in to create a session);
send a unique key to the client which needs to be returned before it expires (can't be re-used, and can't be stored for use later on);
and/or set headers as in the linked answer.
But anything can be spoofed if people try hard enough. The only completely secure system is one which no-one can access at all.
This is the same problem as CSRF, and the solution is the same: use a token in the AJAX request which you've previously stored elsewhere (or can regenerate, e.g. by encrypting the parameters using the session id as a key). Chris Shiflett has some sensible notes on this, and there's an OWASP project for detecting CSRF with PHP.
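For the 'regenerate' variant, a hedged sketch using a keyed hash (an HMAC rather than literal encryption, which serves the same purpose here); the function and parameter names are made up:

<?php
session_start();

// Derive the token from the parameters plus the session id, so the
// server can recompute it on arrival instead of storing anything.
function request_token($action, $userId) {
    return hash_hmac('sha256', $action . '|' . $userId, session_id());
}

// When rendering: embed request_token('adduser', $userId) in the AJAX call.
// When handling: recompute and compare in constant time.
$userId = isset($_GET['userid']) ? $_GET['userid'] : '';
$sent   = isset($_GET['token'])  ? $_GET['token']  : '';
if (!hash_equals(request_token('adduser', $userId), $sent)) {
    http_response_code(403);
    exit('Invalid token');
}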
This is an authorization issue: only authorized requests should result in the creation of a new user. So when receiving such a request, your server needs to check whether it's from a client that is authorized to create new users.
Now the main issue is how to decide which request is authorized. In most cases, this is done via user roles and/or some ticketing system. With user roles, you'll have additional problems to solve, like user identification and user authentication. But if that is already solved, you can easily map users onto roles (e.g. Alice is an admin and Bob is a regular user) and only admins are authorized to create new users.
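A bare-bones sketch of that role check (the session layout and role names are assumptions, and identification/authentication are presumed already solved):

<?php
session_start();
// Assume authentication has already populated $_SESSION['user'].
$user = isset($_SESSION['user']) ? $_SESSION['user'] : null;

// Only admins are authorized to create new users.
if ($user === null || $user['role'] !== 'admin') {
    http_response_code(403);
    exit('Not authorized to create users');
}
// ...create the new user...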
It works like any other web page: login authentication, check the referrer.
The solution is adding the marked setRequestHeader line below to your AJAX requests. You should also look at basic authentication; this header check will not be the only protector. You can catch incoming requests with this code on your server page:
Ajax Call
function callit()
{
    var xmlhttp = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP"); // legacy IE fallback
    xmlhttp.onreadystatechange = function () {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            document.getElementById('alp').innerHTML = xmlhttp.responseText;
        }
    };
    xmlhttp.open("get", "call.asp", true);
    xmlhttp.setRequestHeader("X-Requested-With", "XMLHttpRequest"); // the key line
    xmlhttp.send();
}
PHP/ASP Requested Page Answer
ASP
If Request.ServerVariables("HTTP_X_REQUESTED_WITH") = "XMLHttpRequest" Then
'Do stuff
Else
'Kill it
End If
PHP
if( isset( $_SERVER['HTTP_X_REQUESTED_WITH'] ) && ( $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest' ) )
{
//Do stuff
} else {
//Kill it
}
I am using Laravel 5.2.15.
There is a list of records in a webpage, with an Edit and a Delete button for each record. I have two approaches for deleting a record:
Use jQuery and send an Ajax request to the server.
Place a form tag with a delete button in each row.
I have the following questions:
In case I use Approach 1, can it cause any issue when the site is viewed from Android or iPhone? I also have the option to do server-side validation using the Request class.
In case of Approach 2, will it make the page heavy? I am using pagination, so 10 records will be displayed per page.
Please guide me on which approach I should go with, or suggest alternatives if both approaches are incorrect.
The questions you have don't really focus on the main reasons to choose one above the other. They differ mostly in how the request is sent to the server and how the page is refreshed to show the results.
Using Ajax is a very common approach and relies on using Javascript, a technology that has been available in all browsers for a very long time. Compatibility will not be a problem, as most of the internet wouldn't function without it anyway (and you can even use your second approach as a fallback mechanism). The request you send is typically an HTTP DELETE request to a REST endpoint, so that the server then knows to delete the record1. Upon receiving the success response from the server, the page is responsible for updating itself by removing the row corresponding to the just-deleted record, and possibly fetching new records to still have 10 rows on that page. No page refresh required, but some Javascript is.
Your second approach is kind of old school, in that the form you submit contains some kind of identifier so that the server knows what to do. This is a full page load and should be an HTTP POST request if you want to do it properly2. Following the Post/Redirect/Get idiom, the server then sends a redirect response so that the browser triggers yet another normal page load as a GET request to show the user the updated list of records. You do not have to update the page manually yourself, at the cost of having annoying page reloads (which aren't really expected anymore in this day and age).
My advice would be to go with the first approach. It is the modern way of doing things and allows for non-reloading pages. It does, however, require some additional work on the client side (in Javascript) to update the page accordingly.
As a side note, CSRF must be taken care of in both instances really. Always include a CSRF token with every 'update' action you perform on the server.
1 You have to program this yourself, of course :)
2 Browsers don't generally support anything other than GET and POST, although the HTTP specification allows for much more request methods.
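For the first approach in Laravel 5.2, the server side could look roughly like this; the route, controller and Record model are made-up names, and the CSRF token (sent by the client as the X-CSRF-TOKEN header) is verified by the stock VerifyCsrfToken middleware:

// app/Http/routes.php
Route::delete('records/{id}', 'RecordController@destroy');

// app/Http/Controllers/RecordController.php
use Illuminate\Http\Request;

class RecordController extends Controller
{
    public function destroy(Request $request, $id)
    {
        if (!$request->ajax()) {
            abort(400); // optional hardening: only serve AJAX callers
        }
        \App\Record::findOrFail($id)->delete();
        return response()->json(['deleted' => (int) $id]);
    }
}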
It depends upon your requirements, but you should go with the first approach. If you use the second approach, you will have to refresh the page since you cannot handle the response; so if you delete 5 items, the page needs to be refreshed 5 times, and you may not send more than 1 delete request at a time. If you use the first approach, since it's Ajax and Javascript, you can display an appropriate message depending on the result, with no unnecessary page refresh. Plus, as you mentioned, you can do validation using the Request class, so you can handle bad or malicious requests. And CSRF won't be that much of a problem, since you can check whether the request is Ajax or not using Request::ajax(). So the first approach is better, mostly because there is no page refresh.
Both approaches are fine ;)
But the 2nd approach would be better than the first one; using it, you can prevent CSRF attacks too.
I would suggest you use method 1 with certain modifications.
Use a GET request to delete the record.
Send a CSRF token, and don't forget to encrypt the id of the record.
Add your delete URL to the href.
Then, when you make the Ajax request, use the URL from the href. You could send some additional parameter like is_ajax=1, but Laravel already checks for the jQuery header, so the Request::ajax() method will let you know whether the request was an Ajax request or a normal one.
Now all you need to do is send different responses for Ajax and normal requests.
HOPE THIS HELPS :D
Another drawback of your second approach which hasn't been mentioned is displaying validation errors, specifically from your edit and even your delete actions.
If you have multiple forms for each set of data, showing errors from validation would be a pain. But if you follow approach number 1, just by getting a reference to the submitted row element you can easily append an alert div whenever a validation error occurs.
As for the delete action, somebody else might have already deleted some shared data, so you might also want to tell the user that somebody already threw this out.
I am using a simple PHP API that takes requests and connects to a MySQL DB to store/retrieve user information. Example: login is done using an HTTP POST to the API with username and password.
How do I prevent people from flooding my API with requests, potentially making thousands of accounts in a few seconds?
Thanks
You could serve a token generated and remembered on the server side, which is rendered with your forms and validated when the form is sent back to your server. That prevents the user from just generating new POST requests without actually requesting the corresponding form from your server, since they need the token generated on your server to get through.
Then there is the mentioned captcha, which would be way too much for a login form from my point of view, but when it comes to things like registering a new user, the captcha in combination with the token sounds very good to me.
UPDATE
I just found this article, which is about flood protection of script execution in general. I think their approach is good, as is the IP tracking, provided you have the ability to use memcache or something similar to speed the checks up.
First, when registering a user, also save her IP address at the time of registration in the DB.
If the IP already exists within 45 minutes of a previous registration, reject the request (a rough sketch follows).
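A sketch of that throttle with PDO; the table name, columns and credentials are placeholders, and the 45-minute window is just the example value from above:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$ip  = $_SERVER['REMOTE_ADDR'];

$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM registrations
     WHERE ip = ? AND created_at > DATE_SUB(NOW(), INTERVAL 45 MINUTE)'
);
$stmt->execute(array($ip));

if ($stmt->fetchColumn() > 0) {
    http_response_code(429); // too many registrations from this IP
    exit('Please try again later');
}
// ...create the account, storing $ip and the current timestamp...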
Another method is the Captcha; I personally prefer a different method that I found to be even more effective against robots.
Instead of asking the user to "type what they see in an image" to verify they are human (and not a robot with sophisticated image processing),
add another field (for example, city), and hide it with JavaScript.
Robots will submit that field to the server, and humans will not.
Note that a robot must run the JavaScript in order to know which fields are hidden, and that is a time-consuming process that they usually don't do.
(see the Turing halting problem)
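A minimal version of that honeypot check; the field name 'city' follows the example above, and the input itself is rendered in the form and hidden client side with JavaScript:

<?php
// A real user never sees the hidden 'city' field, so any
// non-empty value marks the sender as a bot.
if (!empty($_POST['city'])) {
    http_response_code(400);
    exit; // silently drop the bot submission
}
// ...process the genuine registration...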
I'm using tokens to prevent CSRF attacks in my application. But this application is a single page and heavily AJAX based, so I need to find a way to provide a valid token for N actions in a single page:
e.g. a document fragment loads with 3 possible actions; each action needs a token to work properly, but the server side has just one valid token at a time...
Since each token is just an encrypted nonce (the value isn't based on a specific form), I came up with the idea of automating the token assignment for each action with something like this:
The app intercepts an AJAX call and checks whether it's a sensitive action (e.g. deleting a user)
A token is requested from the server before the action proceeds
The token is generated and then added to the action request
The action is executed, since the request included a valid token
Do the same for any subsequent actions executed via AJAX
I believe that method isn't effective enough, because an attacker can use a script that does exactly the same thing my app does (retrieve a token and append it to the request).
How can I improve my method to be effective against CSRF attacks?
Additional info: My backend environment is PHP/Phalcon and the tokens are generated using Phalcon.
A simpler method than using tokens for an AJAX-only application might be to check headers that can only be present in an AJAX request from your own domain.
Two options are:
Checking the Origin Header
Checking the X-Requested-With header
The Origin header can also be used for normal HTML form submissions, and you would verify that this contains the URL of your own site before the form submission does its processing.
If you decide to check the X-Requested-With header, you will need to make sure it is added to each AJAX request client side (JQuery will do this by default). As this header cannot be sent cross domain (without your server's consent to the browser first), checking that this is present and set to your own value (e.g. "XMLHttpRequest") will verify that the request is not a CSRF attack.
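A hedged PHP sketch of both checks (replace the allowed origin with your own domain; note that some older browsers omit the Origin header, so decide how a missing value should be treated):

<?php
// Origin check: reject anything not coming from our own site.
$origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
if ($origin !== 'https://www.example.com') {
    http_response_code(403);
    exit('Cross-origin request rejected');
}

// X-Requested-With check: jQuery adds this header to AJAX calls by default.
$xrw = isset($_SERVER['HTTP_X_REQUESTED_WITH']) ? $_SERVER['HTTP_X_REQUESTED_WITH'] : '';
if ($xrw !== 'XMLHttpRequest') {
    http_response_code(403);
    exit('Not an AJAX request');
}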
I had to deal with something similar a while ago. Requesting nonces with AJAX is a super bad idea; IMHO, it invalidates the whole point of having them if the attacker can simply generate one without reloading the page. I ended up implementing the following:
Nonce module (the brain of the operation) that handles creation, destruction, validation and hierarchy of nonces (e.g., child nonces for one page with multiple inputs).
Whenever a form / certain input is rendered, a nonce is generated and stored in the session with an expiry timestamp.
When the user is done with an action / form / page, the nonce with its hierarchy is destroyed. The request may return a new nonce if the action is repetitive.
Upon generating a new nonce, old ones are checked and expired ones are removed.
The major trouble with it is deciding when a nonce expires and cleaning them up, because they grow like bacteria on steroids. You don't want a user to submit a form that was open for an hour and get stuck because the nonce has expired or been deleted. In those situations you can return a 'timed out, please try again' message with a regenerated nonce, so the following request passes.
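A condensed sketch of that lifecycle with expiry and cleanup (the storage layout and one-hour TTL are assumptions, and the hierarchy part is omitted for brevity):

<?php
session_start();

function nonce_create($ttl = 3600) {
    $nonce = bin2hex(openssl_random_pseudo_bytes(16));
    $_SESSION['nonces'][$nonce] = time() + $ttl; // expiry timestamp
    return $nonce;
}

function nonce_consume($nonce) {
    if (!isset($_SESSION['nonces'])) {
        $_SESSION['nonces'] = array();
    }
    // Garbage-collect expired nonces so they don't pile up.
    foreach ($_SESSION['nonces'] as $n => $expires) {
        if ($expires < time()) {
            unset($_SESSION['nonces'][$n]);
        }
    }
    if (!isset($_SESSION['nonces'][$nonce])) {
        return false; // unknown or expired: regenerate and ask the user to retry
    }
    unset($_SESSION['nonces'][$nonce]); // single use
    return true;
}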
As already suggested, nothing is 100% bulletproof, and in most cases this is overkill. I think this approach is a good balance between being paranoid and not wasting days of time on it. Did it help me a lot? It did more for my paranoia than for the actual security.
Logically thinking, the best thing you can do in these situations is to analyse the behaviour behind the requests and time them out if they get suspicious. For example: 20 requests per minute from one IP; track mouse / keyboard and make sure they are active between the requests. In other words, ensure that requests are not automated, instead of ensuring they come with valid nonces.
I am using AJAX to send requests to one of the PHP pages in my site... but I do this from my HTML page. Is this secure?
What if others know my PHP page and send AJAX requests to it from their own script? This may cause security problems.
How can I avoid this?
You seem to be trying to defend against CSRF attacks.
You can include a nonce in your page, then require that all AJAX requests have that nonce.
Since the attacker is on a different domain, he will have no way of getting the nonce.
The only way they can send AJAX requests to your page is if they are on the same domain (i.e. their script would have to be hosted on your domain).
AJAX won't work cross-domain (unless your server allows it via CORS), so from that angle it's quite secure.
There is very little you can do to stop this; the only thing that can help prevent it is having a good application architecture.
For example, the following rules will help:
Try and keep your Ajax down to read-only.
If you have to use Ajax to write, then you should follow these rules:
Only allow users that are logged in to submit data.
Validate, validate & validate your POST data; make sure it's exactly as you expect it.
Implement a form-hashing technique that generates a unique hash for every form on every page, and validate it against a variable within the session, aka a nonce.
If the user is logged in, make sure there's a validation period, such as "You must wait 30 seconds before posting" (a sketch of this follows the list).
Always use session_regenerate_id() after you call session_start().
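A quick sketch of that posting delay, kept in the session (the 30-second window and the session key name are just the example values):

<?php
session_start();
$now  = time();
$last = isset($_SESSION['last_post_at']) ? $_SESSION['last_post_at'] : 0;

if ($now - $last < 30) {
    http_response_code(429);
    exit('You must wait 30 seconds before posting again.');
}
$_SESSION['last_post_at'] = $now;
// ...accept the post...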
These are just a few pointers that should get you on your way. When researching these you will come across other techniques used by other site owners, but you should always remember the following 2 rules:
Never trust your users, just act like you do
Whitelist and never blacklist.