I don't understand how using a 'challenge token' would add any sort of prevention: what value should be compared with what?
From OWASP:
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires.
If I understand the process correctly, this is what happens.
I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session which is compared with the session/cookie upon form submission.
But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy" but that makes no sense, because all CSRF attacks ARE of the same origin as the user, just tricking the user into doing actions he/she didn't intend.
Is there any alternative other than appending the token to every single URL as a query string? Seems very ugly and impractical, and makes bookmarking harder for the user.
The attacker has no way to get the token: another site can make your browser send a request to yours, but the same-origin policy prevents it from reading your pages, so it can never learn the hidden value. Therefore the forged requests won't have any effect.
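To make the flow concrete, here is a minimal PHP sketch of the scheme described above; the file name, field name, and token length are illustrative assumptions, not a definitive implementation:

<?php
// On login / first page load: generate the token once and keep it in the session.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32)); // unguessable random value
}

// When rendering any form, embed the same token as a hidden field.
echo '<form method="post" action="/transfer.php">'
   . '<input type="hidden" name="csrf_token" value="' . $_SESSION['csrf_token'] . '">'
   . '...</form>';

// On submission: a third-party page can make this browser POST here, but it
// cannot read this form, so it cannot know the hidden field's value.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
        http_response_code(403);
        exit('CSRF check failed');
    }
    // ...process the state-changing request...
}

The comparison is not circular: the session value and the hidden field only match when your site actually served the form to this browser. An attacker's page can trigger the request but can never fill in the matching field.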
I recommend this post from Gnucitizen. It has a pretty decent CSRF explanation: http://www.gnucitizen.org/blog/csrf-demystified/
CSRF Explained with an analogy - Example:
You open the front door of your house with a key.
Before you go inside, you speak to your neighbour
While you are having this conversation, an impersonator walks in while the door is still unlocked.
They go inside, pretending to be you!
Nobody inside your house notices anything different - your wife is like, ‘oh crud*, he’s home’.
The impersonator helps himself to all of your money, and perhaps plays some Xbox on the way out....
Summary
CSRF basically relies on the fact that you opened the door to your house and then left it open, allowing someone else to simply walk in and pretend to be you.
What is the way to solve this problem?
When you first open the door to your house, your doorman gives you a piece of paper with a long and very random number written on it:
"ASDFLJWERLI2343234"
Now, if you wanna get into your own house, you have to present that piece of paper to the doorman to get in.
So now when the impersonator tries to get into your house, the doorman asks:
"What is the random number written on the paper?"
If the impersonator doesn't have the correct number, then he won't get in. Either that or he must guess the random number correctly - which is a very difficult task. What's worse is that the random number is valid for only 20 minutes (for example). So now the impersonator must guess correctly, and not only that, he has only 20 minutes to get the right answer. That's way too much effort! So he gives up.
Granted, the analogy is a little strained, but I hope it is helpful to you.
*crud = (Create, Read, Update, Delete)
You need to keep researching this topic for yourself, but I guess that's why you are posting to SO :). CSRF is a very serious and widespread vulnerability type that all web app developers should be aware of.
First of all, there is more than one same-origin policy. But the most important part is that a script hosted on http://whatever.com cannot READ data from http://victim.com, but it can SEND data via POST and GET. If the request only contains information that is known to the attacker, then the attacker can forge a request in the victim's browser and send it anywhere. Here are 3 XSRF exploits that build requests that do not contain a random token.
If the site contains a random token then you have to use XSS to bypass the protection that the Same Origin Policy provides. Using XSS you can force javascript to "originate" from another domain, then it can use XmlHttpRequest to read the token and forge the request. Here is an exploit I wrote that does just that.
Is there any alternative other than appending the token to every single URL as a query string? Seems very ugly and impractical, and makes bookmarking harder for the user.
There is no reason to append the token to every URL on your site, as long as you ensure that all GET requests on your site are read-only. If you are using a GET request to modify data on the server, you'd have to protect it using a CSRF token.
The funny part with CSRF is that while an attacker can make any http request to your site, he cannot read back the response.
If you have GET URLs without a random token, the attacker will be able to make a request, but he won't be able to read back the response. If that URL changed some state on the server, the attacker's job is done. But if it just generated some HTML, the attacker has gained nothing and you have lost nothing.
Related
Working on an app that lets a user call someone by clicking on them. After the call, a new activity is started, FeedbackActivity, where the user enters feedback regarding the person they called; this is uploaded, and the server crunches the numbers over time and produces a "rating."
However, the app does not have a traditional "login and password" behavior... (and it is important that it does not have this) so there is nothing preventing a user from maliciously entering negative feedback over and over again... or worse, loading
http://www.example.com/feedback.php?personICalled=334875634&feedback=blahblahblah
into a browser and just reloading it over and over again.
So, we need to make sure that people can only give feedback on someone they have actually called.
I had the idea of having some sort of "token" be sent to the server when the user clicks "call." Then the server saves this token.
Then, when they subsequently upload feedback, it would look like:
http://www.example.com/feedback.php?personICalled=334875634&feedback=blahblahblah,&token=[same token sent prior]
This way, the server checks that such a token was ever saved, and if so, then it saves the feedback, otherwise not.
Or, better yet, there could be a secret formula known only to the server (and the app), whereby [token checked upon feedback given] is a (complex mathematical) function of [token uploaded at phone call time].
But obviously this wouldn't be that hard for someone to figure out by looking at the app source code, or by observing the y=f(x) relationship over time and deducing the formula... there has to be a better way to do this.
I read about the Diffie-Hellman key exchange... and it seems to me there must be a way of implementing this for this purpose... but I'm not a Harvard graduate, it's been a while since discrete math, and I'm not particularly knowledgeable about cryptography... and the wiki page makes my head hurt!
Take this diagram, for example
If anyone could tell me how the terms "Common paint," "Secret Colors," "Public Transport" and "Common Secret" translate to my scenario, I think I might just be able to figure this out.
I'm guessing that Public Transport = internet... I've got that far.
First thing: Diffie-Hellman is not going to solve your problem. There are a ton of things that can go wrong in crypto, so don't play with it unless you really know that you need it and know what you want it for.
What is your real requirement? A user should be able to enter feedback only one time per call. You do not need crypto to solve this.
When the user makes a call, generate a token. Send that token to the user and also store it in the database. When the call is finished, allow the user to "consume" the token by providing feedback associated with that token. The server verifies that the token exists in the database (and has not already been consumed). Assuming it is there, accept the feedback and then remove the token from the database (it has been consumed). If it is not there, do not accept the feedback.
You can improve things by also storing a time with the token (the time it was generated). Don't let them provide feedback if they try to consume it too soon. Expire the tokens if they are not consumed after some max life period. This prevents people from repeatedly calling a user or tokens living indefinitely in your database (DoS).
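A minimal PDO sketch of that consume-a-token flow; the call_tokens schema, the timings, and the connection details are assumptions:

<?php
// Assumed schema: call_tokens(token CHAR(64) PRIMARY KEY, created_at DATETIME)
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// When the call starts: mint a token, store it, hand it to the app.
function issue_token(PDO $pdo): string {
    $token = bin2hex(random_bytes(32));
    $pdo->prepare('INSERT INTO call_tokens (token, created_at) VALUES (?, NOW())')
        ->execute([$token]);
    return $token;
}

// When feedback arrives: accept it only if the token exists, is old enough to
// represent a real call, has not expired, and then delete (consume) it.
function consume_token(PDO $pdo, string $token): bool {
    $stmt = $pdo->prepare(
        'DELETE FROM call_tokens
         WHERE token = ?
           AND created_at <= NOW() - INTERVAL 1 MINUTE  -- too soon: not a real call
           AND created_at >= NOW() - INTERVAL 1 DAY'    -- too old: expired
    );
    $stmt->execute([$token]);
    return $stmt->rowCount() === 1; // the token is consumed exactly once
}

Doing the consume as a single conditional DELETE keeps the check-and-remove atomic, so two simultaneous submissions with the same token cannot both succeed.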
You might also restrict people by IP address. Allow a user to receive only one rating from an IP address in any reasonable time period (one day). The IP addresses can be stored along with the feedback in the database.
Hi, I want to collect usage data for my program, and I want to use PHP to do this (http://site.com/php?v="0.0.1"&n="Application"). How would I make the PHP page work only for my program, and not for somebody else who has the link? How would I make sure the request really came from my application?
There are various ways, but I think the really simple one is:
Send as a parameter some kind of token that only you and the end script know.
http://site.com/php?v=0.0.1&n=Application&token=SDGH36THGB
Now in your PHP script, you can fetch the token parameter from the request and check it against something that is already saved.
<?php
$myToken = 'SDGH36THGB'; // the value only you and this script know

if (isset($_GET['token']) && hash_equals($myToken, $_GET['token'])) {
    // it's my application accessing the page
} else {
    // show an error or something
}
Now that was level 1; very simple, ain't it? But what if someone comes to know the token you're using? It can easily be exploited.
Level 2 can be something like this: there's a formula that only you and the script know, e.g. the token is always today's date in yymmddHH format.
Now the script can check the token against the actual time frame to see whether it is correct. So every time you make a request there's a different token value depending on the current time, and the script checks this token against the current time frame.
Now even if someone comes to know a single token, he can't replay it, since that token will stop working after an hour. You can devise more complex logic yourself.
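As a sketch of level 2, here is a slightly hardened variant of the same idea: rather than sending a guessable date string, both sides HMAC the current hour with a shared secret. The $secret value and the parameter name are assumptions:

<?php
$secret = 'change-me'; // compiled into the client application as well

function hourly_token(string $secret, int $offsetHours = 0): string {
    $hour = gmdate('YmdH', time() + $offsetHours * 3600); // e.g. "2024010114"
    return hash_hmac('sha256', $hour, $secret);
}

// Accept the current hour's token, or the previous hour's, to tolerate
// requests that straddle the hour boundary.
$given = $_GET['token'] ?? '';
if (hash_equals(hourly_token($secret), $given) ||
    hash_equals(hourly_token($secret, -1), $given)) {
    // request came from a client that knows the secret
} else {
    http_response_code(403);
    exit('Invalid token');
}

As with the plain date formula, anyone who decompiles the client learns the secret, so this only raises the bar; it doesn't eliminate the problem.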
Level 3: OAuth. It's pretty difficult, but the best part is you can already find very good implementation libraries with a quick google - https://www.google.co.in/search?q=oauth+in+php&oq=oauth+in+php&aqs=chrome..69i57j0l5.4208j0&sourceid=chrome&espvd=210&es_sm=122&ie=UTF-8
Have your users register their software, and on registration give them a user ID. Send that user ID as a get arg to the php page to identify the source of the request.
Have a multistep process for registering stats. Application requests a code from the server, application hashes code with a secret salt, and then application sends this salted hash back to the server along with all the usage data.
In both of these methods, you still can't tell if some user is manufacturing the requests. You're writing the client in Java, which means that no matter how complicated you make the verification process your end application will be decompilable and your methods will be reproducible. The first solution has the advantage of isolating stats from individual users, so if a user does end up trying to screw with your system you could prune all entries from that user.
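A rough PHP sketch of the second method (the challenge-then-salted-hash handshake); the endpoint layout, the parameter names, and $secret are assumptions, not a definitive design:

<?php
session_start();
$secret = 'shared-secret-compiled-into-the-app';

if (isset($_GET['get_challenge'])) {
    // Step 1: hand out a random challenge and remember it for this session.
    $_SESSION['challenge'] = bin2hex(random_bytes(16));
    echo $_SESSION['challenge'];
    exit;
}

if (isset($_POST['stats'], $_POST['response'], $_SESSION['challenge'])) {
    // Step 2: the client sent back sha256(challenge . secret) with its data.
    $expected = hash('sha256', $_SESSION['challenge'] . $secret);
    unset($_SESSION['challenge']); // each challenge is single use
    if (hash_equals($expected, $_POST['response'])) {
        // ...record $_POST['stats']...
        echo 'ok';
    } else {
        http_response_code(403);
    }
}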
A number of my pages are produced from results pulled from MySQL using $_GET. It means the URLs end like this: /park.php?park_id=1. Is this a security issue, and would it be better to hide the query string from the URL? If so, how do I go about doing it?
Also I have read somewhere that Google doesn't index URLs with a ?, this would be a problem as these are the main pages of my site. Any truth in this?
Thanks
It's only a security concern if this is sensitive information. For example, you send a user to this URL:
/park.php?park_id=1
Now the user knows that the park currently being viewed has a system identifier of "1" in the database. What happens if the user then manually requests this?:
/park.php?park_id=2
Have they compromised your security? If they're not allowed to view park ID 2 then this request should fail appropriately. But is it a problem if they happen to know that there's an ID of 1 or 2?
In either case, all the user is doing is making a request. The server-side code is responsible for appropriately handling that request. If the user is not permitted to view that data, deny the request. Don't try to stop the user from making the request, because they can always find a way. (They can just manually type it in. Even without ever having visited your site in the first place.) The security takes place in responding to the request, not in making it.
There is some data they're not allowed to know. But an ID probably isn't that data. (Or at least shouldn't be, because numeric IDs are very easy to guess.)
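In PHP terms, the check belongs in the request handler. A minimal sketch, assuming a park_permissions table and a logged-in user ID in the session (both hypothetical):

<?php
// park.php
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=parks', 'user', 'pass');

$parkId = (int)($_GET['park_id'] ?? 0); // the cast also defuses injection via this value

// Authorize the request itself; never rely on which links you happened to show.
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM park_permissions WHERE user_id = ? AND park_id = ?'
);
$stmt->execute([$_SESSION['user_id'] ?? 0, $parkId]);
if ((int)$stmt->fetchColumn() === 0) {
    http_response_code(403);
    exit('You are not allowed to view this park.');
}
// ...fetch and render the park...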
No, there is absolutely no truth to it.
ANY data that comes from a client is subject to spoofing. No matter if it's in a query string, or a POST form or URL. It's as simple as that...
As far as "Google doesn't index URLs with a ?", who-ever told you that has no clue what they are talking about. There are "SEO" best practices, but they have nothing to do with "google doesn't index". It's MUCH more fine grained than that. And yes, Google will index you just fine.
@David does show one potential issue with using an identifier in a URL. In fact, this has a very specific name: A4: Insecure Direct Object Reference.
Note that it's not that using the ID is bad. It's that you need to authorize the user for the URL. So doing permissions solely by the links you show the user is BAD. But if you also authorize them when they hit the URL, you should be fine.
So no, in short, you're fine. You can go with "pretty urls", but don't feel that you have to because of anything you posted here...
How to prevent USER from doing automated posts/spam?
Here is my way of doing it: a new PHP session token for each page request, which has its own limitations - no multi-tabbing.
I use a new token for each page as a defense against both CSRF and automated attacks. Let's say we have a forum that uses AJAX to post threads, validated by a PHP session.
add_answer.php?id=123
<?php
session_start();
// Determines whether the request came via ajax (HTTP header stuff; trivially forgeable).
function is_ajax() {
    return ($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') === 'XMLHttpRequest';
}
if (!is_ajax()) { // a normal page load: mint a fresh token for this page
    $_SESSION['token'] = md5(rand()); // weak; bin2hex(random_bytes(16)) would be stronger
}
// ...the page then makes some ajax request to ajax.php?id=123&token=<token>...
?>
ajax.php?id=123
<?php
session_start();
if (isset($_SESSION['token'], $_GET['token'])
    && hash_equals($_SESSION['token'], $_GET['token'])) {
    echo 'MYSQL INSERT stuff';
} else {
    echo 'Invalid Request';
}
?>
Everything works fine until the user opens page.php?id=456 in another tab; then the ajax returns 'Invalid Request' for ajax.php?id=123. This is related to another question I asked. There it was suggested to use only one session hash the whole time, until the user logs out - only then is the session regenerated. But if the token stays the same, an attacker could simply read it once and run the automated attacks. Any ideas on that?
Anyhow, what's your way of preventing automated AJAX attacks?
PS:
Don't torture users with captchas.
Google failed to show me something useful on this.
Take this as a challenge.
Or at least upvote the answers of the experts that you think are a brilliant way of doing this.
It sounds like your objection to letting the session stay open as long as the browser is open is the issue of automated attacks. Unfortunately, refreshing the token on each page load only deters the most amateur attackers.
First, I assume we're talking about attacks specifically targeted at your site. (If we're talking about the bots that just roam around and submit various forms, not only would this not stop them, but there are far better and easier ways to do so.) If that's the case, and I'm targeting my site, here's what my bot would do:
Load form page.
Read token on form page.
Submit automated request with that token.
Go to step 1.
(Or, if I investigated your system enough, I'd realize that if I included the "this is AJAX" header on each request, I could keep one token forever. Or I'd realize that the token is my session ID, and send my own PHPSESSID cookie.)
This method of changing the token on each page load would do absolutely nothing to stop someone who actually wanted to attack you all that badly. Therefore, since the token has no effect on automation, focus on its effects on CSRF.
From the perspective of blocking CSRF, creating one token and maintaining it until the user closes the browser seems to accomplish all goals. Simple CSRF attacks are defeated, and the user is able to open multiple tabs.
TL;DR: Refreshing the token on each request doesn't boost security. Go for usability and do one token per session.
However! If you're extremely concerned about duplicate form submissions, accidental or otherwise, this issue can still easily be resolved. The answer is simple: use two tokens for two different jobs.
The first token will stay the same until the browser session ends. This token exists to prevent CSRF attacks. Any submission from this user with this token will be accepted.
The second token will be uniquely generated for each form loaded, and will be stored in a list in the user's session data of open form tokens. This token is unique and is invalidated once it is used. Submissions from this user with this token will be accepted once and only once.
This way, if I open a tab to Form A and a tab to Form B, each one has my personal anti-CSRF token (CSRF taken care of), and my one-time form token (form resubmission taken care of). Both issues are resolved without any ill effect on the user experience.
Of course, you may decide that that's too much to implement for such a simple feature. I think it is, anyway. Regardless, a solid solution exists if you want it.
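For concreteness, a minimal PHP sketch of the two-token scheme; the names and the storage layout are illustrative:

<?php
session_start();

// Token 1: per-session anti-CSRF token, constant until the browser session ends.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// Token 2: a one-time token minted per rendered form, kept in an open list.
function issue_form_token(): string {
    $token = bin2hex(random_bytes(16));
    $_SESSION['form_tokens'][$token] = time();
    return $token; // embed both tokens as hidden fields when rendering the form
}

// On submission: the CSRF token must match, and the form token must still be
// in the open list; consuming it blocks duplicate submissions.
function validate_submission(string $csrf, string $form): bool {
    if (!hash_equals($_SESSION['csrf_token'], $csrf)) {
        return false; // CSRF check failed
    }
    if (!isset($_SESSION['form_tokens'][$form])) {
        return false; // unknown or already-consumed form token
    }
    unset($_SESSION['form_tokens'][$form]); // one and only one use
    return true;
}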
If you're trying to prevent having one client DoS you, an uncommon but workable strategy would be to include a hashcash token in the request (there are already PHP and JavaScript implementations).
In order to prevent breaking tabbed browsing and back buttoning, ideally you'd want the hashcash token's challenge to contain both a per-session anti-forgery token and a uniqueness portion newly generated for each request. In order to minimize the impact on usability if you have a large token cost, start precomputing the next token in your page as soon as you've expended the previous one.
Doing this limits the rate at which a client can produce valid requests: each hashcash token can only be used once (so you'll need to keep a cache of valid, already-spent tokens attached to the session to prevent endless reuse of a single token), tokens can't be computed in advance of the session start (because of the random anti-forgery value), and it costs a nontrivial amount of CPU time to generate each new valid token but only a trivial amount to validate one.
While this doesn't prevent automation of your AJAX API per se, it does constrain high-volume hammering of the API.
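A sketch of the server-side half, under these assumptions: the client must find a nonce such that sha256(challenge . nonce) starts with a set number of zero hex digits, and already-spent nonces are cached in the session:

<?php
session_start();

if (empty($_SESSION['pow_challenge'])) {
    $_SESSION['pow_challenge'] = bin2hex(random_bytes(16)); // per-session anti-forgery value
    $_SESSION['spent_nonces'] = [];
}

// Validation is one hash; generation costs the client ~16^$difficulty attempts.
function valid_pow(string $nonce, int $difficulty = 5): bool {
    if (isset($_SESSION['spent_nonces'][$nonce])) {
        return false; // this token was already spent
    }
    $digest = hash('sha256', $_SESSION['pow_challenge'] . $nonce);
    if (substr($digest, 0, $difficulty) !== str_repeat('0', $difficulty)) {
        return false; // not enough work was done
    }
    $_SESSION['spent_nonces'][$nonce] = true; // remember it as spent
    return true;
}

if (!valid_pow($_POST['nonce'] ?? '')) {
    http_response_code(429);
    exit('Proof-of-work check failed');
}
// ...handle the AJAX request normally...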
How to prevent USER from doing automated posts/spam?
This could likely be solved in the same manner as regular requests. A token per page load and stopping new tabs may be overkill. Certainly a time-sensitive token per form may mitigate CSRF attacks to some degree. But otherwise, instead of restricting the user experience, it may be best to define and implement a submission policy engine.
At the risk of sounding pompous or demeaning to everyone: Often sites use a points-based reward system, such as "karma" or "badges". Such systems actually add to the user experience as submissions then become a sort of game for users. They may often restrict the ability to post submissions to only trusted users or by a max number during a given time-frame. Take a look at SO's system for a good use case.
A very basic answer just demonstrating some common site policies:
If the user exceeded a count of x number of posts in the past y minutes, deny the DB insert and display a "Sorry, too soon since your last post" warning. This can be achieved by querying the DB for a count of the user's posts over a given recent time period before allowing the new record insert (see the sketch after this list).
If the user doesn't have a certain karma threshold - for example, new users or those repeatedly marked as spammers - deny the DB write and display a "Sorry, you haven't been here long enough" or a "Sorry, you spam too much" warning. This can be achieved by querying the DB for a total of the user's "karma", which is managed in a separate table or site module, before allowing the new record insert.
If the site is small and manageable enough to be moderated by just one or two users, have all new user requests and posts reviewed and approved first. This can be achieved by holding new entries in a separate table for review before moving to the live table, or by having an "approved" flag column on the main table.
Furthermore, a count of policy violations can be kept on each user, and if it exceeds a certain point over a given time period, you may opt to have them automatically banned for a certain time period. The ban can be put into effect by denying all db writes related to that user if you wish.
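A minimal PDO sketch of the first policy in the list above; the posts table, its columns, and the limits are assumptions:

<?php
// Allow the insert only if the user posted fewer than $maxPosts times
// in the last $windowMinutes.
function may_post(PDO $pdo, int $userId, int $maxPosts = 5, int $windowMinutes = 10): bool {
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM posts
         WHERE user_id = ? AND created_at >= NOW() - INTERVAL ? MINUTE'
    );
    $stmt->execute([$userId, $windowMinutes]);
    return (int)$stmt->fetchColumn() < $maxPosts;
}

// Usage before the INSERT:
// if (!may_post($pdo, $userId)) { exit('Sorry, too soon since your last post'); }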
On the note about "HTTP header stuff": headers only give you a best guess, offered as a courtesy, at what the client is requesting. They are only as difficult to forge as cookies, and forging cookies only takes a click of the mouse. And honestly, I personally wouldn't have it any other way.
I have a script that uses JSONP to make cross-domain ajax calls. This works great, but my question is: is there a way to prevent other sites from accessing and getting data from these URLs? I basically would like to make a list of sites that are allowed and only return data if they are in the list. I am using PHP and figure I might be able to use "HTTP_REFERER", but I have read that some browsers will not send this info... Any ideas?
Thanks!
There really is no effective solution. If your JSON is accessible through the browser, then it is equally accessible to other sites. To the web server, a request originating from a browser and one originating from another server are virtually indistinguishable aside from the headers. Like ILMV commented, referrers (and other headers) can be falsified. They are, after all, self-reported.
Security is never perfect. A sufficiently determined person can overcome any security measures in place, but the goal of security is to create a deterrent high enough that most people would be dissuaded from putting in the time and resources necessary to compromise it.
With that thought in mind, you can create a barrier of entry high enough that other sites would probably not bother making requests with the barriers of entry put into place. You can generate single use tokens that are required to grab the json data. Once a token is used to grab the json data, the token is then subsequently invalidated. In order to retrieve a token, the web page must be requested with a token embedded within the page in javascript that is then put into the ajax call for the json data. Combine this with time-expiring tokens, and sufficient obfuscation in the javascript and you've created a high enough barrier.
Just remember, this isn't impossible to circumvent. Another website could extract the token out of the javascript, and or intercept the ajax call and hijack the data at multiple points.
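A sketch of the single-use token idea in PHP; the file names, the expiry window, and the callback handling are assumptions:

page.php - embed a single-use token into the page's JavaScript:

<?php
session_start();
$_SESSION['jsonp_token'] = bin2hex(random_bytes(16));
$_SESSION['jsonp_token_time'] = time();
echo "<script>var jsonpToken = '{$_SESSION['jsonp_token']}';</script>";

data.php - serve the JSONP only for a fresh, unused token, then burn it:

<?php
session_start();
$ok = isset($_SESSION['jsonp_token'], $_GET['token'])
    && hash_equals($_SESSION['jsonp_token'], $_GET['token'])
    && (time() - $_SESSION['jsonp_token_time']) < 300; // time-expiring: 5 minutes
unset($_SESSION['jsonp_token']); // single use, whether the check passed or not

if (!$ok) {
    http_response_code(403);
    exit;
}
$callback = preg_replace('/[^\w.]/', '', $_GET['callback'] ?? 'callback'); // no script injection
header('Content-Type: application/javascript');
echo $callback . '(' . json_encode(['data' => 'the payload']) . ');';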
Do you have access to the servers/sites that you would like to give access to the JSONP?
What you could do, although it is not ideal, is to add a record to a db with the IP of the user on page load, marking them as allowed to view the JSONP; then on the JSONP load, check whether that record exists. Perhaps have an expiry on the record if appropriate.
e.g.
http://mysite.com/some_page/ - user loads page, add their IP to the database of allowed users
http://anothersite.com/anotherpage - as above, add to database
load JSONP, check the IP exists in the database.
After one hour, delete the record from the db so another page load would be required, for example.
Although this could quite easily be worked around if the scraper (or other sites) managed to work out what method you are using to allow users to view the JSONP, they'd only have to hit the page first.
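A rough PDO sketch of that flow; the allowed_ips table (with ip as its primary key) and the one-hour window are assumptions:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// On the page that is allowed to use the JSONP: record the caller's IP.
$pdo->prepare('REPLACE INTO allowed_ips (ip, seen_at) VALUES (?, NOW())')
    ->execute([$_SERVER['REMOTE_ADDR']]);

// On the JSONP endpoint: serve only if a fresh record exists for this IP.
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM allowed_ips
     WHERE ip = ? AND seen_at >= NOW() - INTERVAL 1 HOUR'
);
$stmt->execute([$_SERVER['REMOTE_ADDR']]);
if ((int)$stmt->fetchColumn() === 0) {
    http_response_code(403);
    exit;
}
// ...emit the JSONP response...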
How about using a cookie that holds a token used with every jsonp request?
Depending on the setup you can also use a variable if you don't want to use cookies.
Working with importScripts from a Web Worker is much the same as JSONP.
Make a double check like theAlexPoon said: main script to web worker, web worker to server and back, with a security query at each hop. If the web worker answers the main script without being asked, or with the wrong token, it's better to redirect your website to nirvana. If the server is asked with the wrong token, don't answer. Cookies will not be sent with an importScripts request, because document is not available at the web worker level. Always send security-relevant cookies with a POST request.
But there are still a lot of risks. The man in the middle knows how.
I'm certain you can do this with htaccess -
Ensure your headers are sending "HTTP_REFERER" - I don't know any browser that won't send it if you tell it to. (If you're still worried, fall back gracefully.)
Then use htaccess to allow/deny access from the right referer.
# deny all except requests whose Referer matches domain.com
SetEnvIf Referer "domain\.com" allowed_referer
Order Deny,Allow
Deny from all
Allow from env=allowed_referer