This question already has answers here:
How to prevent direct access to my JSON service?
(7 answers)
Closed 3 years ago.
I'm calling the PHP code from Ajax like this:
ajaxRequest.open("GET", "func.php" + queryString, true);
Since it's a GET request, anyone can see it by simply examining the headers. The data being passed is not sensitive, but it could potentially be abused, since it is also trivial to get the parameter names.
How do I prevent direct access to http://mysite/func.php yet allow my ajax page access to it?
Also, I have tried the solution posted here, but it doesn't work for me - I always get the 'Direct access not permitted' message.
Most Ajax requests/frameworks should set this particular header, which you can use to filter Ajax vs. non-Ajax requests. I use it to help determine the response type (JSON/HTML) in plenty of projects:
if( isset( $_SERVER['HTTP_X_REQUESTED_WITH'] ) && ( $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest' ) )
{
// allow access....
} else {
// ignore....
}
edit:
You can add this yourself in your own Ajax requests with the following in your javascript code:
var xhrobj = new XMLHttpRequest();
xhrobj.open("GET", "func.php" + queryString, true); // open() must be called before setRequestHeader()
xhrobj.setRequestHeader("X-Requested-With", "XMLHttpRequest");
What I use is PHP sessions plus a hash that is sent with each request. The hash is generated server-side using some algorithm.
Mmm... you could generate a one-time password on session start, store it in $_SESSION, and add a parameter to your Ajax call that re-transmits it (something like a captcha). It would be valid for that session only.
This would shield you from automated attacks, but a human who has access to your site could still do it manually. Still, it could be the basis for devising something more complicated.
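A minimal PHP sketch of that one-time-password idea (the function, parameter, and file names here are my own illustration, not from the answer):

```php
<?php
// page.php - issue a one-time token when the page is served
session_start();
$_SESSION['ajax_token'] = bin2hex(random_bytes(16)); // random, per session
// Embed it in the page so the JavaScript can send it back, e.g.:
// <script>var ajaxToken = "<?php echo $_SESSION['ajax_token']; ?>";</script>

// func.php - validate the token on every Ajax request
session_start();
if (!isset($_GET['token'], $_SESSION['ajax_token'])
    || !hash_equals($_SESSION['ajax_token'], $_GET['token'])) {
    http_response_code(403);
    exit; // no token, or a token that doesn't belong to this session
}
// ... handle the request ...
```

hash_equals() is used instead of == to avoid timing side channels; random_bytes() requires PHP 7+.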
Anyone in this thread who suggested looking at headers is wrong in some way or other. Anything in the request (HTTP_REFERER, HTTP_X_REQUESTED_WITH) can be spoofed by an attacker who isn't entirely incompetent, including shared secrets [1].
You cannot prevent people from making an HTTP request to your site. What you want to do is make sure that users must authenticate before they make a request to some sensitive part of your site, by way of a session cookie. If a user makes unauthenticated requests, stop right there and give them an HTTP 403.
Your example makes a GET request, so I guess you are concerned with the resource requirements of the request [2]. You can do some simple sanity checks on HTTP_REFERER or HTTP_X_REQUESTED_WITH headers in your .htaccess rules to stop new processes from being spawned for obviously fake requests (or dumb search-crawlers that won't listen to robots.txt), but if the attacker fakes those, you'll want to make sure your PHP process quits as early as possible for non-authenticated requests.
[1] It's one of the fundamental problems with client/server applications. Here's why it doesn't work: Say you had a way for your client app to authenticate itself to the server - whether it's a secret password or some other method. The information that the app needs is necessarily accessible to the app (the password is hidden in there somewhere, or whatever). But because it runs on the user's computer, that means they also have access to this information: All they need is to look at the source, or the binary, or the network traffic between your app and the server, and eventually they will figure out the mechanism by which your app authenticates, and replicate it. Maybe they'll even copy it. Maybe they'll write a clever hack to make your app do the heavy lifting (You can always just send fake user input to the app). But no matter how, they've got all the information required, and there is no way to stop them from having it that wouldn't also stop your app from having it.
[2] GET requests in a well-designed application have no side-effects, so nobody making them will be able to make a change on the server. Your POST requests should always be authenticated with session plus CSRF token, to let only authenticated users call them. If someone attacks this, it means they have an account with you, and you want to close that account.
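A sketch of that session-plus-CSRF-token check for a POST endpoint (the variable and field names are illustrative, not prescribed by the answer):

```php
<?php
session_start();

// Reject anyone who has not logged in through the normal flow.
if (empty($_SESSION['user_id'])) {
    http_response_code(403);
    exit;
}

// Reject POSTs that don't carry the CSRF token issued with the page.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!isset($_SESSION['csrf_token'], $_POST['csrf_token'])
        || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
        http_response_code(403);
        exit;
    }
}
// ... the state-changing operation goes here ...
```

The token itself would be generated once per session (e.g. with bin2hex(random_bytes(16))) and echoed into the form or page that makes the request.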
Put the following code at the very top of the PHP file that is called by Ajax. It will serve Ajax requests, but will "die" if it is called directly from the browser.
define('AJAX_REQUEST', isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest');
if(!AJAX_REQUEST) {die();}
Personally, I choose not to output anything after "die()", as an extra security measure. Meaning that I prefer to show just a blank page to the "intruder", rather than giving out hints such as "if" or "why" this page is protected.
I would question why you are so convinced that no one should be able to visit that file directly. Your first action really should be to assume that people may visit the page directly and design around that eventuality. If you are still convinced you want to close access to this file, then you should know that you cannot trust $_SERVER variables for this, as the origins of $_SERVER values can be difficult to determine and the headers can be spoofed. In some testing I did, I found those headers ($_SERVER['HTTP_REFERER'] and $_SERVER['HTTP_X_REQUESTED_WITH']) to be unreliable as well.
I solved this problem by preparing a check function that does three things:

1. check the referer: $_SERVER['HTTP_REFERER'];
2. check the X-Requested-With header: $_SERVER['HTTP_X_REQUESTED_WITH'];
3. check the origin via a bridge file

If all three pass, you get to see the PHP file called by Ajax; if even one fails, you don't. Points 1 and 2 were already explained; the bridge-file solution works as follows:
Bridge File
Imagine the following scenario: a page A.php calls B.php via Ajax, and you want to prevent direct access to B.php.

1) When A.php is loaded, it generates a complicated random code.

2) The code is copied into a file C.txt that is not directly accessible from the web (secured by the httpd configuration).

3) At the same time, the code is embedded in clear text in the rendered HTML of the A.php page, for example as an attribute of body:

data-bridge="ehfwiehfe5435ubf37bf3834i"

4) This embedded code is retrieved from JavaScript and sent via an Ajax POST request to B.php.

5) B.php gets the code and checks whether it exists in the C.txt file.

6) If the code matches, it is popped out of C.txt and B.php proceeds.

7) If no code is sent (you tried to access the B page directly), or it doesn't match at all (you supplied an old, captured code, or tricked with a custom one), B.php dies.

This way you can access page B only via an Ajax call generated from the parent page A. The key for B.php is given only and always by A.php.
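The bridge-file steps could be sketched roughly like this in PHP (the path and field names are invented for illustration):

```php
<?php
// A.php - generate the code and record it in the bridge file
$code = bin2hex(random_bytes(16));
file_put_contents('/var/private/C.txt', $code . PHP_EOL, FILE_APPEND | LOCK_EX);
// ... render the page with <body data-bridge="<?php echo $code; ?>"> ...

// B.php - accept the request only if the code is in C.txt, then pop it out
$path  = '/var/private/C.txt';
$sent  = $_POST['bridge'] ?? '';
$codes = is_file($path)
    ? file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
    : [];
$pos = array_search($sent, $codes, true);
if ($sent === '' || $pos === false) {
    die(); // direct access, or an old/forged code
}
unset($codes[$pos]); // one-time use: pop the code out of C.txt
file_put_contents($path, implode(PHP_EOL, $codes) . PHP_EOL, LOCK_EX);
// ... serve the Ajax response ...
```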
There is no point in doing this. It doesn't add any actual security.
All the headers that indicate a request is being made via Ajax (like HTTP_X_REQUESTED_WITH) can be forged on the client side.
If your Ajax is serving sensitive data, or allowing access to sensitive operations, you need to add proper security, like a login system.
I tried this:
1) In the main PHP file (the one that sends the Ajax request), create a session value with something random, like $_SESSION['random_value'] = 'code_that_creates_something_random'; Make sure the session value is created before the $.post.
2) then
$.post( "process_request.php",
{
input_data:$(':input').serializeArray(),
random_value_to_check:'<?php echo htmlspecialchars( $_SESSION['random_value'], ENT_QUOTES, "UTF-8"); ?>'
}, function(result_of_processing) {
//do something with result (if necessary)
});
3) and in process_request.php
if( isset($_POST['random_value_to_check']) and
trim($_POST['random_value_to_check']) == trim($_SESSION['random_value']) ){
//do what necessary
}
Earlier I defined a session value, then a hidden input field with that value, and then sent the hidden field's value with Ajax. But then I decided the hidden input field isn't necessary, because the value can be sent without it.
I have a simplified version of Edoardo's solution.
Web page A creates a random string, a [token], and saves a file with that name on disk in a protected folder (eg. with .htaccess with Deny from all on Apache).
Page A passes the [token] along with the AJAX request to the script B (in OP's queryString).
Script B checks if the [token] filename exists and if so it carries on with the rest of the script, otherwise exits.
You will also need to set up some cleaning script, e.g. with cron, so the old tokens don't accumulate on disk.
It is also good to have script B delete the [token] file right away, to limit multiple requests.
I don't think that HTTP headers check is necessary since it can be easily spoofed.
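A compact sketch of this variant, using one file per token (the directory path and parameter name are placeholders):

```php
<?php
$tokenDir = '/var/private/tokens/'; // protected, e.g. Deny from all in .htaccess

// Page A: create the token file and pass the token along with the Ajax request
$token = bin2hex(random_bytes(16));
touch($tokenDir . $token);
// ... emit $token into the page for the queryString ...

// Script B: carry on only if the token file exists, then delete it right away
$sent = basename($_GET['token'] ?? ''); // basename() blocks ../ path tricks
if ($sent === '' || !is_file($tokenDir . $sent)) {
    exit; // direct access or unknown token
}
unlink($tokenDir . $sent); // one request per token
// ... rest of script B ...
```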
Based on your description, I assume you're trying to prevent outright rampant abuse, but don't need a rock-solid solution.
From that, I would suggest using cookies:
Just call setcookie() on the page that uses the AJAX, and check $_COOKIE for the correct values in func.php. This gives you some reasonable assurance that anyone calling func.php has visited your site recently.
If you want to get fancier, you could set and verify unique session ids (you might do this already) for assurance that the cookie isn't being forged or abused.
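For example (the cookie name and value here are arbitrary):

```php
<?php
// On the page that embeds the Ajax call:
setcookie('ajax_ok', '1', 0, '/'); // session cookie, valid site-wide

// In func.php:
if (!isset($_COOKIE['ajax_ok']) || $_COOKIE['ajax_ok'] !== '1') {
    http_response_code(403);
    exit; // caller has not visited the site first
}
// ... serve the Ajax response ...
```

As the answer notes, this only deters casual abuse; a deliberate attacker can set the cookie manually.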
I tried many suggestions; none solved the problem. Finally I validated the target PHP file's parameters, and that was the only way to limit direct access to the PHP file.
** Note: putting the PHP file behind an .htaccess restriction caused the Ajax connection in the main HTML page to fail as well.
Related
I asked a similar question before, and the answer was simply:
if JavaScript can do it, then any client can do it.
But I still want to find a way to restrict AJAX calls to JavaScript.
The reason is:
I'm building a web application, when a user clicks on an image, tagged like this:
<img src='src.jpg' data-id='42'/>
JavaScript calls a PHP page like this:
$.ajax("action.php?action=click&id=42");
then action.php inserts rows in database.
But I'm afraid that some users could automate entries that "click" all the ids, simply by calling the necessary URLs, since they are visible in the source code.
How can I prevent such a thing, and make sure it works only on click, and not by calling the url from a browser tab?
p.s.
I think a possible solution would be to use encryption: generate a key on user visit, and call the action page with that key, or a hash/md5sum/whatever of it. But I think it can be done without turning it into a security problem. Am I right? Moreover, I'm not sure this method is a solution, since I don't know anything about this kind of security or its implementation.
I'm not sure there is a 100% secure answer. A combination of a server generated token that is inserted into a hidden form element and anti-automation techniques like limiting the number of requests over a certain time period is the best thing I can come up with.
[EDIT]
Actually, a good solution would be to use CAPTCHAs.
Your question isn't really "How can I tell AJAX from non-AJAX?" It's "How do I stop someone inflating a score by repeated clicks and ballot stuffing?"
In answer to the question you asked, the answer you quoted was essentially right. There is no reliable way to determine whether a request is being made by AJAX, a particular browser, a CURL session or a guy typing raw HTTP commands into a telnet session. We might see a browser or a script or an app, but all PHP sees is:
GET /resource.html HTTP/1.1
host:www.example.com
If there's some convenience reason for wanting to know whether a request was AJAX, some javascript libraries such as jQuery add an additional HTTP header to AJAX requests that you can look for, or you could manually add a header or include a field to your payload such as AJAX=1. Then you can check for those server side and take whatever action you think should be made for an AJAX request.
Of course there's nothing stopping me using CURL to make the same request with the right headers set to make the server think it's an AJAX request. You should therefore only use such tricks where whether or not the request was AJAX is of interest so you can format the response properly (send a HTML page if it's not AJAX, or JSON if it is). The security of your application can't rely on such tricks, and if the design of your application requires the ability to tell AJAX from non-AJAX for security or reliability reasons then you need to rethink the design of your application.
In answer to what you're actually trying to achieve, there are a couple of approaches. None are completely reliable, though. The first approach is to deposit a cookie on the user's machine on first click, and to ignore any subsequent requests from that user agent if the cookie is in any subsequent requests. It's a fairly simple, lightweight approach, but it's also easily defeated by simply deleting the cookie, or refusing to accept it in the first place.
Alternatively, when the user makes the AJAX request, you can record some information about the requesting user agent along with the fact that a click was submitted. You can, for example, store a hash (computed with something stronger than MD5!) of the client's IP and user agent string, along with a timestamp for the click. If you see a lot of the same hash with closely grouped timestamps, then there's possibly an abuse attempt. Again, this trick isn't 100% reliable, because user agents can send any string they want as their user agent string.
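One possible sketch of that logging idea, assuming a MySQL table `clicks(fingerprint, created_at)` and a threshold that are both invented for illustration:

```php
<?php
// Fingerprint the caller; many identical hashes with closely grouped
// timestamps suggest automated clicking.
$fingerprint = hash('sha256',
    $_SERVER['REMOTE_ADDR'] . '|' . ($_SERVER['HTTP_USER_AGENT'] ?? ''));

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Count recent clicks from the same fingerprint.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM clicks
                       WHERE fingerprint = ?
                         AND created_at > NOW() - INTERVAL 1 MINUTE');
$stmt->execute([$fingerprint]);
if ($stmt->fetchColumn() > 10) { // arbitrary rate limit
    http_response_code(429);
    exit;
}

// Record this click.
$pdo->prepare('INSERT INTO clicks (fingerprint, created_at) VALUES (?, NOW())')
    ->execute([$fingerprint]);
```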
Use the POST method instead of GET. Read the documentation here http://api.jquery.com/jQuery.post/ to learn how to use the POST method in jQuery.
You could, for example, implement a check if the request is really done with AJAX, and not by just calling the URL.
if(!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
// Yay, it is ajax!
} else {
// no AJAX, man..
}
This solution may need more reflection, but it might do the trick.
You could use tokens as stated in Slicedpan's answer. When serving your page, you would generate UUIDs for each image and store them in the session / database.
Then serve your html as
<img src='src.jpg' data-id='42' data-uuid='uuidgenerated'/>
Your ajax request would become
$.ajax("action.php?action=click&uuid=uuidgenerated");
Then, on the PHP side, check for the UUID in your memory/database, and allow the transaction or not. (You can also check for custom headers sent on Ajax, as stated in other responses.)
You would also need to purge UUIDs: on token lifetime, on window unload, on session expiry...
This method won't let you know whether the request comes from an XHR, but you'll be able to limit their number.
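Server-side, the UUID check could be sketched like this, assuming the tokens are kept in the session (all names are illustrative):

```php
<?php
session_start();

// When rendering the page: one token per image id
$uuid = bin2hex(random_bytes(16));
$_SESSION['img_tokens'][$uuid] = 42; // maps token -> image id
// ... <img src='src.jpg' data-id='42' data-uuid='<?php echo $uuid; ?>'/> ...

// action.php: accept the click only if the token was issued, then purge it
$uuid = $_GET['uuid'] ?? '';
if (!isset($_SESSION['img_tokens'][$uuid])) {
    http_response_code(403);
    exit; // unknown, expired, or already-used token
}
$imageId = $_SESSION['img_tokens'][$uuid];
unset($_SESSION['img_tokens'][$uuid]); // one click per token
// ... insert the click for $imageId into the database ...
```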
I am working on a live weather data page. Our weather module outputs the data in CSV and stores it on my webserver. I then use a PHP script to translate the CSV into an array, encode it as JSON, and output it, so that my jQuery Ajax script can call it every so often to get the latest data and update the page. This is working great so far.
My question is: how can I prevent the URL used to retrieve the JSON (the URL of the aforementioned PHP script) from being opened and viewed in a browser? I tried adjusting the permissions, but with no success.
Thanks in advance to any who are willing to help.
There's no real way of doing that, since the Ajax call also comes from the browser. There's no real difference between a proper browser call and an Ajax call. A GET call is a GET call.
EDIT
As per @Adeneo's suggestion, implementing a pseudo-security through some kind of key would be a good way of making it harder for people to view the page, even though there's no way of literally blocking the call.
Also, adding a header to your Ajax call and verifying the presence of that header in your backend script makes it a bit harder to spoof.
Another idea: if the service is called only once per page view, you could set up a key in your JavaScript, provided by your server, to append to your Ajax call. When the server gets called, the key becomes invalid after use, preventing someone from calling the service with the same key twice.
There is no way of (reliably) identifying a browser, as anything that is not backed by some form of authentication token can be faked. The server relies on the client to be honest.
You can detect whether a request is an Ajax request, though. Here is a link to one way of doing it:
http://davidwalsh.name/detect-ajax
This is how he does it:
/* AJAX check */
if(!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
/* special ajax here */
die($content);
}
You will want to reverse the statements in the if since it die()s when the request IS ajax.
There are other ways of detecting Ajax, none of which are 100% secure, including setting a GET variable that helps you identify an Ajax call (but that GET variable can also be sent from the browser's address line, so... you get the picture).
Short answer: you cannot.
Long answer: you could implement a simple Browser Sniffing. Or search for far more advanced methods.
$browser = get_browser(null, true);
if ($browser['parent'] == "whatever-identifies-clients-that-have-access") {
    // Code to output JSON here.
}
else {
    header('HTTP/1.1 403 Forbidden');
}
But note that this is not security. At the very most, it throws up a barrier; but preventing is impossible.
Edit: this assumes the client is not a browser; I wrongly assumed a (mobile) client of some sort was accessing the JSON. When it is a browser, you cannot deny access. At all. AJAX comes from that browser too.
In simplest terms, I utilize external PHP scripts throughout my client's website for various purposes such as getting search results, updating content, etc.
I keep these scripts in a directory:
www.domain.com/scripts/scriptname01.php
www.domain.com/scripts/scriptname02.php
www.domain.com/scripts/scriptname03.php
etc..
I usually execute them using jQuery AJAX calls.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed via AJAX or MANUALLY via URL by the user.
IS THIS POSSIBLE??
I have searched absolutely everywhere and tried various methods involving the $_SERVER[] array, but still no success.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed via AJAX or MANUALLY via URL by the user.
IS THIS POSSIBLE??
No, not with 100% reliability. There's nothing you can do to stop the client from simulating an Ajax call.
There are some headers you can test for, though, namely X-Requested-With. They would prevent an unsophisticated user from calling your Ajax URLs directly. See Detect Ajax calling URL
Most AJAX frameworks will send an X-Requested-With: header. Assuming you are running on Apache, you can use the apache_request_headers() function to retrieve the headers and check for it/parse it.
Even so, there is nothing preventing someone from manually setting this header - there is no real 100% foolproof way to detect this, but checking for this header is probably about as close as you will get.
Depending on what you need to protect and why, you might consider requiring some form of authentication, and/or using a unique hash/PHP sessions, but this can still be reverse engineered by anyone who knows a bit about Javascript.
As an idea of things you can verify: if you check all of these before servicing your request, it will afford a degree of certainty (although not much, and none at all if someone is deliberately trying to circumvent your system):
Store a unique hash in a session value, and require it to be sent back to you by the AJAX call (in a cookie or a request parameter), so you can compare them on the server side and verify that they match
Check the X-Requested-With: header is set and the value is sensible
Check that the User-Agent: header is the same as the one that started the session
The more things you check, the more chance an attacker will get bored and give up before they get it right. Equally, the longer/more system resources it will take to service each request...
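Put together, those checks might be sketched as follows (all names, and the exact set of checks, are illustrative):

```php
<?php
session_start();

function looks_like_our_ajax(): bool {
    // 1) one-time hash stored in the session, echoed back by the caller
    if (!isset($_REQUEST['hash'], $_SESSION['hash'])
        || !hash_equals($_SESSION['hash'], $_REQUEST['hash'])) {
        return false;
    }
    // 2) the X-Requested-With header is present and sensible
    if (($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') !== 'XMLHttpRequest') {
        return false;
    }
    // 3) the User-Agent matches the one recorded at session start
    if (($_SERVER['HTTP_USER_AGENT'] ?? '') !== ($_SESSION['ua'] ?? '')) {
        return false;
    }
    return true;
}

if (!looks_like_our_ajax()) {
    http_response_code(403);
    exit;
}
// ... service the request ...
```

As the answer itself warns, every one of these signals can be forged; this only raises the effort required.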
There is no 100% reliable way to prevent a user, if he knows the address of your request, from invoking your script.
This is why you have to authenticate every request to your script. If your script is only to be called by authenticated users, check for the authentication again in your script. Treat it as you will treat incoming user input - validate and sanitize everything.
On Edit: The same could be said for any script which the user can access through the URL. For example, consider profile.php?userid=3
I have a javascript on my webpage which makes a call to a php file and passes some values to the php file. How can i know for sure that the call to the php file was from the js on my webpage and not directly entering the php url from the browsers address bar?
You'll want to check whether $_SERVER['HTTP_X_REQUESTED_WITH'] is XMLHttpRequest. That will prevent people from directly typing the URL into their browsers while allowing Ajax requests.
if ( ! empty($_SERVER['HTTP_X_REQUESTED_WITH']) && $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest' )
{
// Do something useful
}
You might want to see Do all browsers support PHP's $_SERVER['HTTP_X_REQUESTED_WITH']? for an explanation of what browsers/JavaScript libraries send the HTTP_X_REQUESTED_WITH header.
You can use the following info:

the headers that are being sent (Ajax calls usually add special headers), e.g.:
X-Requested-With: XMLHttpRequest
the referring URL in $_SERVER

Both can be hacked, though.
If you need a very safe solution, you need to create a unique code for each request and send it to the server from your JS. This code needs to change after each time it is used.
You can't. Anything you expose to ajax you expose to the world. What's more is that someone wanting to get into that page could just run their own javascript on the page anyway and spoof any HTTP headers they want. These are the people you presumably want to keep out anyway. You shouldn't use measures that will limit non-malicious users but provide relatively little, if any, trouble to malicious users.
That said, I have never heard of HTTP_X_REQUESTED_WITH, and based on preliminary reading it's not a good idea to use it. I usually use POST and check that $_SERVER['REQUEST_METHOD'] is POST, because most modern browsers will use GET for any request made from the address bar (that's my experience, anyway). Again, this is not to keep bad people out; it's just to prevent accidents.
Another measure you should take is to check that a flag is sent to the page to help signal it's coming from a verified source. Depending on how secure the Ajax page is supposed to be, you may also want to verify the session, etc. A somewhat safe way is to create a unique hash (e.g. md5) and store it in the DB with a timestamp. The hash indicates that the page can be visited, say, up to three times. When the user clicks your Ajax link, it sends the hash. The hash is then flagged as consumed and cannot be reused. The hash should also go stale some time after creation (5 minutes? It depends, and it's up to you).
Finally, make sure that this page is ignored by friendly robots (robots.txt, meta if possible, etc.). This will prevent people reaching it accidentally via search engines.
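A sketch of that consumable-hash idea, assuming an existing PDO connection `$pdo` and an invented table `ajax_hashes(hash, uses_left, created_at)`:

```php
<?php
// Issue: store a hash with a timestamp; here it allows up to 3 visits
// and goes stale after 5 minutes (both limits from the answer's example).
$hash = md5(uniqid('', true)); // the answer suggests md5 as an example
$pdo->prepare('INSERT INTO ajax_hashes (hash, uses_left, created_at)
               VALUES (?, 3, NOW())')->execute([$hash]);

// Consume: decrement on each Ajax hit; stale or exhausted hashes fail.
$stmt = $pdo->prepare('UPDATE ajax_hashes SET uses_left = uses_left - 1
                       WHERE hash = ? AND uses_left > 0
                         AND created_at > NOW() - INTERVAL 5 MINUTE');
$stmt->execute([$_GET['hash'] ?? '']);
if ($stmt->rowCount() === 0) {
    http_response_code(403);
    exit; // unknown, stale, or used-up hash
}
// ... serve the Ajax response ...
```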
I have several pages that call in content via jQuery .ajax. I don't want the content visible on the page, so that's why I went with .ajax instead of showing/hiding the content. I want to protect the files inside the AJAX directory from being directly accessible through the browser URL. I know that PHP headers can be spoofed, and I don't know whether it is better to use an "access" key or to try doing it via .htaccess.
My question is: which is the more reliable method? There is no logged-on/not-logged-on user status, and the main pages need to be able to pull in content from the pages in the AJAX directories.
thx
Make a temporary time-coded session variable. Check the variable in the php output file before echoing the data.
OR, if you don't want to use sessions.. do this:
$key = base64_encode(time().'abcd');
In the read file:
base64_decode the key
explode by 'abcd'
read the time. Allow a 5-second buffer: if the time falls within 5 seconds of the stamped request, you are legit.
To make it more secure, you can change your encrypting / decrypting mechanism.
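The steps above might be implemented like this (the 'abcd' separator comes from the answer; the rest is my own sketch):

```php
<?php
// Issuing page: stamp the current time into the key
$key = base64_encode(time() . 'abcd');
// ... hand $key to the JavaScript that makes the request ...

// Read file: accept the key only within a 5-second window
$parts = explode('abcd', base64_decode($_GET['key'] ?? ''));
$stamp = (int) $parts[0];
if ($stamp <= 0 || abs(time() - $stamp) > 5) {
    exit; // missing, malformed, or expired key
}
// ... echo the data ...
```

Note this is obfuscation, not encryption: anyone who decodes the base64 can mint fresh keys, which is the weakness the next answer points out.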
I would drop this idea because there is no secure way to do it.
Your server will never be able to tell apart a "real" Ajax request from a "faked" one, as every aspect of the request can be forged on the client side. An attacker just has to look at a packet sniffer to see what requests your page makes. It is trivial to replicate the requests.
Any solution you work out will do nothing but provide a false sense of security. If you have data you need to keep secret, you will need to employ some more effective protection, like authentication.
Why not keep the content outside the web server's document root, and have a PHP script that validates whether the person should see it and then sends it to them?
So, you have getcontent.php, which can look at a cookie, or a token that was given to the JavaScript page and is used to make the request; it then fetches the real content, sets the mime type, and streams it to the user.
This way you can change your logic as to who should have access, without changing any of the rest of your application.
To the browser there is no real difference between http://someorg.net/myimage.gif and http://someorg.net/myscript.php?token=887799&img_id=ddtw88, but obviously it will need to work with GET, so a time-limited value is necessary, as the user can see and reuse it.
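A sketch of such a getcontent.php (the token check, path, and mime type are placeholders; validate however fits your application):

```php
<?php
session_start();

// Validate the caller: a cookie, a session token, a time-limited value...
if (!isset($_GET['token'], $_SESSION['token'])
    || !hash_equals($_SESSION['token'], $_GET['token'])) {
    http_response_code(403);
    exit;
}

// The real content lives outside the web root, so it can't be fetched directly.
$file = '/var/content/' . basename($_GET['img_id'] ?? '') . '.gif';
if (!is_file($file)) {
    http_response_code(404);
    exit;
}
header('Content-Type: image/gif');
readfile($file); // stream the real content to the user
```

basename() is used on the id so a caller can't walk up the directory tree with ../ sequences.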