Ajax Security (I hope) - PHP

I'm building a browser game and I'm using a heavy amount of Ajax instead of page refreshes. I'm using PHP and JavaScript. After a lot of work I noticed that Ajax isn't exactly secure. The threat I'm worried about: if someone wants to look up someone's information on my SQL server, they'd just need to send the right parameters to the .php file associated with my Ajax calls. I was using GET-style Ajax calls, which was a bad idea. Anyway, after a lot of research I have the following security measures in place. I switched to POST (which isn't really any more secure, but it's a minor deterrent). I also have a referer check in place, which again can be faked, but it's another deterrent.
The final measure I have in place, and the focus of this question: when my website is loaded, an 80-char hex key is generated and saved in the session, and when I'm sending the Ajax call I also send the challenge key in the form of
challenge=<?php echo $_SESSION["challenge"]; ?>
Now when the Ajax PHP file reads this, it checks whether the sent challenge matches the session challenge. By itself this wouldn't do much, because you can simply open up Firebug and see what challenge is being sent. So what I'm having it do is generate a new challenge in the session once the old one has been used.
So my question is: how secure is this? From where I'm standing, it looks like one could only see what the challenge key was after it was sent, and then it renews and they couldn't see it again until the next call, making it impossible to send a faked request from another source. Does anyone see any loophole in this security method, or have any additional thoughts or ideas?

Your definition of "secure" is vague. You seem less interested in preventing data from being intercepted, and more interested in keeping people from submitting custom requests to your server. That isn't security, that is just good application design - your program shouldn't accept requests which cause the internal state to break.There is absolutely nothing you can do to prevent people from submitting whatever data they want to. The solution is to validate the data they're submitting server-side, not to try to prevent them from submitting the data client-side, which will always fail.
I switched to POST
You shouldn't bother; that has nothing to do with security. Use whichever HTTP verb is appropriate for the request. Are you querying information? Use a GET request. Are you updating, inserting, or deleting information? Use POST.
say someone wants to look up someones information on my SQL server they'd just need to key in right information to my .php file associated
You should be authenticating all requests to make sure they have access to the data they're querying. SSL will help you perform the authentication securely.
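For instance, a minimal sketch of such a gate at the top of an Ajax handler, assuming a login system that stored the user's id in the session (the key name auth_user_id is made up here):

<?php
// Top of an Ajax handler such as sql.php.
session_start();

if (!isset($_SESSION['auth_user_id'])) {
    http_response_code(403);   // not logged in: refuse before touching the DB
    exit;
}

$userId = (int) $_SESSION['auth_user_id'];
// ... handle the request, scoping every query to $userId ...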
when my website is loaded i have a 80 char hex key generated and saved in the session, and when im sending the ajax call i am also sending the challenge key
This isn't going to help. The entire premise of your question seems to be that the user has Firebug or a similar HTTP debugging tool installed. If they do, your session key is rendered useless.

See the answer by 'meagar'.
I'd like to mention:
By passing around an identifier in Session, you're doing what the Session is already doing. There's usually a cookie with a unique identifier similar to the one you're generating, which is telling your application, essentially, who that person is. This is how PHP sessions work, in general.
What you would need to do, in this case, is check, for a given request - POST or GET - that the particular user (whose unique user ID, or similar, is stored in the Session) has permission to add/change/delete/whatever with that particular request.
So for a "search" request, you would only return results that User X has permission to view. That way, you don't worry about what they send - if the user doesn't have permission to do something, the system knows not to let them do it.
Hence "you should be authenticating all requests".
Someone feel free to add to this.

function mysqlRequest(type, server, name, value, sync) {
    $.ajax({
        type: 'POST',
        url: 'sql.php',
        data: "server=s" + server + "&type=" + type + "&name=" + name +
              "&value=" + value +
              "&challenge=<?php echo $_SESSION['challenge']; ?>",
        cache: false,
        dataType: 'json',
        async: sync,
        success: function (data) {
        },
        complete: function () {}
    });
}


What's the difference between the GET and POST methods? Which one is more secure? What are the (dis)advantages of each?
It's not a matter of security. The HTTP protocol defines GET-type requests as safe and idempotent, while POSTs may have side effects. In plain English, that means that GET is used for viewing something, without changing it, while POST is used for changing something. For example, a search page should use GET, while a form that changes your password should use POST.
Also, note that PHP confuses the concepts a bit. A POST request gets input from the query string and through the request body. A GET request just gets input from the query string. So a POST request is a superset of a GET request; you can use $_GET in a POST request, and it may even make sense to have parameters with the same name in $_POST and $_GET that mean different things.
For example, let's say you have a form for editing an article. The article-id may be in the query string (and, so, available through $_GET['id']), but let's say that you want to change the article-id. The new id may then be present in the request body ($_POST['id']). OK, perhaps that's not the best example, but I hope it illustrates the difference between the two.
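A tiny sketch of that scenario (the file name and ids are illustrative):

<!-- The form posts to a URL that still carries a query string -->
<form method="post" action="edit_article.php?id=42">
    <input type="text" name="id" value="43"> <!-- the NEW article id -->
    <input type="submit" value="Save">
</form>

<?php
// edit_article.php
$currentId = $_GET['id'];   // "42", from the query string
$newId     = $_POST['id'];  // "43", from the request body
?>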
When the user enters information in a form and clicks Submit , there are two ways the information can be sent from the browser to the server: in the URL, or within the body of the HTTP request.
The GET method, which was used in the example earlier, appends name/value pairs to the URL. Unfortunately, the length of a URL is limited, so this method only works if there are only a few parameters. The URL could be truncated if the form uses a large number of parameters, or if the parameters contain large amounts of data. Also, parameters passed in the URL are visible in the address field of the browser, which is not the best place for a password to be displayed.
The alternative to the GET method is the POST method. This method packages the name/value pairs inside the body of the HTTP request, which makes for a cleaner URL and imposes no size limitations on the form's output. It is also more secure.
The best answer was the first one.
You are using:
GET when you want to retrieve data (GET DATA).
POST when you want to send data (POST DATA).
There are two common "security" implications of using GET. Since data appears in the URL string, it's possible someone looking over your shoulder at the address bar/URL may be able to view something they should not be privy to, such as a session token that could potentially be used to hijack your session. Keep in mind everyone has camera phones.
The other security implication of GET has to do with GET variables being logged in most web servers' access logs as part of the requested URL. Depending on the situation, the regulatory climate, and the general sensitivity of the data, this can potentially raise concerns.
Some clients/firewalls/IDS systems may frown upon GET requests containing an excessive amount of data and may therefore provide unreliable results.
POST supports advanced functionality such as support for multi-part binary input used for file uploads to web servers.
POST requires a Content-Length header, which may increase the complexity of an application-specific client implementation, as the size of the data submitted must be known in advance; this prevents the client request from being formed in a single-pass, incremental mode. Perhaps a minor issue for those choosing to abuse HTTP by using it as an RPC (Remote Procedure Call) transport.
Others have already done a good job in covering the semantic differences and the "when" part of this question.
I use GET when I'm retrieving information from a URL and POST when I'm sending information to a URL.
You should use POST if there is a lot of data, or sort-of sensitive information (really sensitive stuff needs a secure connection as well).
Use GET if you want people to be able to bookmark your page, because all the data is included with the bookmark.
Just be careful of people hitting REFRESH with the GET method, because the data will be sent again every time without warning the user (POST sometimes warns the user about resending data).
This W3C document explains the use of HTTP GET and POST.
I think it is an authoritative source.
The summary is (section 1.3 of the document):
Use GET if the interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user should be held accountable for the results of the interaction.
The GET and POST methods have nothing to do with the server technology you are using; they work the same in PHP, ASP.NET, or Ruby. GET and POST are part of the HTTP protocol.
As mark noted, POST is more secure. POST forms are also not cached by the browser.
POST is also used to transfer large quantities of data.
The reason for using POST when making changes to data:
A web accelerator like Google Web Accelerator will click all (GET) links on a page and cache them. This is very bad if the links make changes to things.
A browser caches GET requests so even if the user clicks the link it may not send a request to the server to execute the change.
To protect your site/application against CSRF you must use POST. To completely secure your app you must then also generate a unique identifier on the server and send that along in the request.
Also, don't put sensitive information in the query string (only option with GET) because it shows up in the address bar, bookmarks and server logs.
Hopefully this explains why people say POST is 'secure'. If you are transmitting sensitive data you must use SSL.
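As a sketch of that unique-identifier (CSRF token) idea in PHP (random_bytes needs PHP 7+, hash_equals needs PHP 5.6+; file and field names are illustrative):

<?php
// When rendering the page: create the token once per session.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
?>
<form method="post" action="change_password.php">
    <input type="hidden" name="csrf_token"
           value="<?php echo $_SESSION['csrf_token']; ?>">
    <input type="password" name="new_password">
    <input type="submit" value="Change">
</form>

<?php
// change_password.php: refuse the POST unless the token matches.
session_start();
if (!isset($_POST['csrf_token'])
        || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    http_response_code(403);
    exit;
}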
GET and POST are HTTP methods which can achieve similar goals.
GET is basically for just getting (retrieving) data. A GET request should not have a body, so aside from cookies, the only place to pass information is in the URL, and URLs are limited in length. GET is less secure than POST because the data sent is part of the URL.
Never use GET when sending passwords, credit card numbers, or other sensitive information! The data is visible to everyone in the URL, and the request can be cached.
GET is harmless when reloading or hitting the back button; it can be bookmarked, its parameters remain in the browser history, and only ASCII characters are allowed.
POST may involve anything, like storing or updating data, ordering a product, or sending e-mail. A POST request has a body.
The POST method is better suited for passing sensitive and confidential information to the server: it is not visible in the query parameters of the URL, and the parameters are not saved in the browser history. There are no restrictions on data length. When reloading, the browser should alert the user that the data is about to be re-submitted. A POST request cannot be bookmarked.
All or perhaps most of the answers in this question and in other questions on SO relating to GET and POST are misguided. They are technically correct and they explain the standards correctly, but in practice it's completely different. Let me explain:
GET is considered to be idempotent, but it doesn't have to be. You can pass parameters in a GET to a server script that makes permanent changes to data. Conversely, POST is considered not idempotent, but you can POST to a script that makes no changes to the server. So this is a false dichotomy and irrelevant in practice.
Further, it is a mistake to say that GET cannot harm anything if reloaded - of course it can if the script it calls and the parameters it passes are making a permanent change (like deleting data for example). And so can POST!
Now, we know that POST is (by far) more secure because it doesn't expose the parameters being passed, and it is not cached. Plus you can pass more data with POST and it also gives you a clean, non-confusing URL. And it does everything that GET can do. So it is simply better. At least in production.
So in practice, when should you use GET vs. POST? I use GET during development so I can see and tweak the parameters I am passing. I use it to quickly try different values (to test conditions for example) or even different parameters. I can do that without having to build a form and having to modify it if I need a different set of parameters. I simply edit the URL in my browser as needed.
Once development is done, or at least stable, I switch everything to POST.
If you can think of any technical reason that this is incorrect, I would be very happy to learn.
The GET method is used to send less sensitive data, whereas the POST method is used to send sensitive data.
Using the POST method you can send a large amount of data compared to the GET method.
Data sent by the GET method is visible in the browser's address bar, whereas data sent by the POST method is not.
Use the GET method if you want to retrieve resources from a URL. You can always see the last page if you hit the back button of your browser, and it can be bookmarked, so it is not as secure as the POST method.
Use the POST method if you want to 'submit' something to the URL. For example, to create a Google account you may need to fill in all the detailed information, then you hit the 'submit' button (the POST method is called here). Once you submit successfully and try to hit the back button of your browser, you will get an error or a new blank form, instead of the last page with the filled form.
I find this list pretty helpful
GET
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should (almost) never be used when dealing with sensitive data
GET requests have length restrictions
GET requests should be used only to retrieve data
POST
POST requests are not cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
The GET method:
It is limited to sending roughly 256 characters of data
When using this method, the information can be seen in the browser
It is the default method used by forms
It is not so secure
The POST method:
It is used for sending unlimited amounts of data
With this method, the information cannot be seen in the browser
You have to mention the POST method explicitly
It is more secure than the GET method
It provides more advanced features

How to make sure AJAX is called by JavaScript?

I asked a similar question before, and the answer was simply:
if JavaScript can do it, then any client can do it.
But I still want to find a way to restrict AJAX calls to JavaScript.
The reason is :
I'm building a web application. When a user clicks on an image, tagged like this:
<img src='src.jpg' data-id='42'/>
JavaScript calls a PHP page like this:
$.ajax("action.php?action=click&id=42");
then action.php inserts rows in database.
But I'm afraid that some users could automate entries that "click" all the ids by calling the necessary URLs, since they are visible in the source code.
How can I prevent such a thing, and make sure it works only on click, and not by calling the url from a browser tab?
p.s.
I think a possible solution would be using encryption, like generating a key on each user visit and calling the action page with that key, or a hash/md5sum/whatever of it. But I think it can be done without turning it into a security problem. Am I right? Moreover, I'm not sure this method is a solution, since I don't know anything about this kind of security or its implementation.
I'm not sure there is a 100% secure answer. A combination of a server generated token that is inserted into a hidden form element and anti-automation techniques like limiting the number of requests over a certain time period is the best thing I can come up with.
[EDIT]
Actually, a good solution would be to use CAPTCHAs.
Your question isn't really "How can I tell AJAX from non-AJAX?" It's "How do I stop someone inflating a score by repeated clicks and ballot stuffing?"
In answer to the question you asked, the answer you quoted was essentially right. There is no reliable way to determine whether a request is being made by AJAX, a particular browser, a CURL session or a guy typing raw HTTP commands into a telnet session. We might see a browser or a script or an app, but all PHP sees is:
GET /resource.html HTTP/1.1
Host: www.example.com
If there's some convenience reason for wanting to know whether a request was AJAX, some JavaScript libraries such as jQuery add an additional HTTP header to AJAX requests that you can look for, or you could manually add a header or include a field in your payload such as AJAX=1. Then you can check for those server side and take whatever action you think should be taken for an AJAX request.
Of course there's nothing stopping me using CURL to make the same request with the right headers set to make the server think it's an AJAX request. You should therefore only use such tricks where whether or not the request was AJAX is of interest so you can format the response properly (send a HTML page if it's not AJAX, or JSON if it is). The security of your application can't rely on such tricks, and if the design of your application requires the ability to tell AJAX from non-AJAX for security or reliability reasons then you need to rethink the design of your application.
In answer to what you're actually trying to achieve, there are a couple of approaches. None are completely reliable, though. The first approach is to deposit a cookie on the user's machine on the first click, and to ignore any subsequent requests from that user agent that carry the cookie. It's a fairly simple, lightweight approach, but it's also easily defeated by simply deleting the cookie, or refusing to accept it in the first place.
Alternatively, when the user makes the AJAX request, you can record some information about the requesting user agent along with the fact that a click was submitted. You can, for example, store a hash (made with something stronger than MD5!) of the client's IP and user agent string, along with a timestamp for the click. If you see a lot of the same hash with closely grouped timestamps, then there's possibly abuse being attempted. Again, this trick isn't 100% reliable, because clients can send any string they want as their user agent string.
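A sketch of that logging scheme (the table, the salt, and the $pdo connection are assumptions; SHA-256 stands in for the "something stronger than MD5"):

<?php
// Fingerprint the client so repeated abuse clusters in the data.
$fingerprint = hash('sha256',
    $_SERVER['REMOTE_ADDR'] . '|' .
    $_SERVER['HTTP_USER_AGENT'] . '|' .
    'some-long-server-side-salt');

$stmt = $pdo->prepare(
    'INSERT INTO clicks (image_id, fingerprint, clicked_at) VALUES (?, ?, NOW())'
);
$stmt->execute(array((int) $_GET['id'], $fingerprint));

// Later, look for abuse:
//   SELECT fingerprint, COUNT(*) FROM clicks
//   GROUP BY fingerprint HAVING COUNT(*) > 100;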
Use the POST method instead of GET. Read the documentation here http://api.jquery.com/jQuery.post/ to learn how to use the POST method in jQuery.
You could, for example, implement a check of whether the request was really made with AJAX, and not by just calling the URL:
if(!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
// Yay, it is ajax!
} else {
// no AJAX, man..
}
This solution may need more reflection but might do the trick.
You could use tokens, as stated in Slicedpan's answer. When serving your page, you would generate a uuid for each image and store them in the session / database.
Then serve your html as
<img src='src.jpg' data-id='42' data-uuid='uuidgenerated'/>
Your ajax request would become
$.ajax("action.php?action=click&uuid=uuidgenerated");
Then, on the PHP side, check for the uuid in your session/database, and allow the transaction or not. (You can also check for the custom headers sent with Ajax, as stated in other responses.)
You would also need to purge uuids: on token lifetime, on window unload, on session expiry...
This method won't let you know whether the request comes from an XHR, but you will be able to limit the number of requests.
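A sketch of that uuid scheme using the session as the store (random_bytes is PHP 7+; the key names are made up):

<?php
// Rendering the page: one token per image, remembered in the session.
session_start();
$uuid = bin2hex(random_bytes(16));
$_SESSION['click_tokens'][$uuid] = 42;    // token -> image id
echo "<img src='src.jpg' data-id='42' data-uuid='$uuid'/>";
?>

<?php
// action.php: redeem the token exactly once.
session_start();
$uuid = isset($_GET['uuid']) ? $_GET['uuid'] : '';
if (!isset($_SESSION['click_tokens'][$uuid])) {
    http_response_code(403);
    exit;
}
$imageId = $_SESSION['click_tokens'][$uuid];
unset($_SESSION['click_tokens'][$uuid]);  // purged: no replay
// ... insert the click row for $imageId ...
?>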

Is it possible to make a secure JS script or function?

Is it possible to have a secure piece of Javascript code in a web application? By secure I mean that we can do things like query the server for permissions, and do operations that cannot be altered by the client?
Example:
var flag = 0;
$.ajax({
    async: false,
    url: "/check_permission_script.php",
    success: function (data) {
        flag = parseInt(data);
    }
});
if (flag != 1) {
    display_normal_content();
} else {
    display_secure_content();
}
Here I want to query the server to check whether the user has permission to see the secure content. If they have permission, we use display_secure_content() to show them the secure content; if not, we use display_normal_content() to display normal content. The problem is that via a debugging console it is easy to set the flag variable to 1 on the client's computer, or just call the display_secure_content() function directly.
My motivation for doing things this way is to have a nice web app that uses Ajax to fetch new content without having to reload the page.
So the question is, can we have JS scripts that are secure against client manipulation? Or is this simply impossible by the nature of the web infrastructure?
Thanks!!
By the very nature of JavaScript, this is not possible.
Anything you want to not be seen by the client cannot be sent to the client at all. All authentication/authorization should happen server-side.
You can still use AJAX for loading data in your interface, but make sure the checks are in place server-side to keep sensitive data from leaking out.
Short answer, no - not with JavaScript alone. JavaScript executes on the client-side, so anything you put in it is accessible and by extension modifiable by the client.
Several tools exist to help with "security through obscurity" such as obfuscating the code, but this will not help you for your end goal.
What could help, given your current setup, is to have your Ajax call contact a server-side PHP page that handles all security/validation and returns the content to display. Done this way, the client-facing JavaScript only has the ability to request, not to validate or choose what to display.
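A bare-bones sketch of that split (the file names and session key are assumptions):

<?php
// content.php: the client only asks; the server decides what it gets.
session_start();
if (!empty($_SESSION['can_view_secure'])) {
    echo file_get_contents('secure_content.html');
} else {
    echo file_get_contents('normal_content.html');
}
?>

// Client side: inject whatever the server chose to send.
$.get('content.php', function (html) {
    $('#content').html(html);
});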
You could query the session id against your internal database and return a public/private-key-encrypted token which contains a key for decrypting a blob of data, then pass this as a parameter to the JavaScript function, which uses the returned key to decrypt the blob.
This solution does not require reloading the page, and whilst it works in theory, you would have to return the page with the secure content encrypted with a different key each time. I wouldn't recommend actually trying this.
The server should know what the user can and cannot see. If the flag is changed on the client, the server should not trust it; it should do its own validation when it gets the request. Security 101 stuff.
JavaScript is a client side scripting language. It's meant to be this way.
If you need a secure script, use PHP.

AJAX security and user management

I am working on a web application that will be hosted on a server that is "on the internet", not a LAN.
The app uses quite a bit of AJAX and has about 12 AJAX handler files for its functions.
My question: instead of asking anybody here to write a tutorial on AJAX security, does anybody know of any good resources (website, book, whatever) that can help me with securing these files?
Right now, as long as you know the variable names a handler is looking for, you can freely get data from the database.
I was thinking maybe session validation, or something along those lines, for the logged-in user.
Anyway, if you have any good resources I'll do the homework myself.
Thanks
AJAX calls are generally used to access web services, which is what it seems you are using them for here. If that is the case then what you need to be concerned about is the security layer that you have provided in the server-side scripting language you are using (looks like you are using PHP as per your question's tags).
The same way that you do authentication and protection for other pages on your site that aren't accessed via AJAX calls you can implement for your web services. For instance, if you require authentication for your application then you can store the user's ID in $_SESSION. From there you can check to make sure the user is logged in via $_SESSION whenever one of your web services is requested.
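With a dozen handler files, a small shared guard keeps that check in one place; a sketch (file and key names are illustrative):

<?php
// auth_guard.php -- require_once this at the top of every Ajax handler.
session_start();
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit(json_encode(array('error' => 'login required')));
}
?>

<?php
// one of the 12 handlers:
require_once 'auth_guard.php';
// ... from here on, the request is known to belong to $_SESSION['user_id'] ...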
I've often seen AJAX calls that check the X-REQUESTED-WITH HTTP header to "verify" that the request originated from AJAX. Depending on how you're sending your AJAX calls (with XmlHttpRequest or a JS library), you can either use the standard value for this header, or set it to a custom value. That way, you can do something similar to this in PHP to check if the page was requested with AJAX:
http://davidwalsh.name/detect-ajax
if( !empty($_SERVER['HTTP_X_REQUESTED_WITH']) &&
    strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    // the request carried the AJAX header
}
It is important to note that since it's an HTTP header, it can be spoofed, so it is by no means foolproof.
Here is a good resource. Securing Ajax Applications: Ensuring the Safety of the Dynamic Web
However, a very simple method is to use an MD5 hash with a private key, e.g. USER_NAME+PRIVATE_KEY. If you know the user's name on the website/login, you can provide that key as an MD5 hash assigned to a JavaScript variable. Then simply pass the user's name in your AJAX request, and the REST service can take the same private key plus the user's name and compare the two hashes. You're simply sending across a hash and the user name. It's simple and effective, and virtually impossible to reverse unless you have a simple private key.
So in your javascript you might have this set:
var user='username';
var hash='925c35bae29a5d18124ead6fd0771756'
Then, when you send your request you send something like this:
myService.php?user=username&hash=925c35bae29a5d18124ead6fd0771756&morerequests=goodthings
When you check it, in the service you would do something like this
<?php
if (md5($_REQUEST['user'] . "_privatekey") == $_REQUEST['hash']) {
    echo 'passed validation';
} else {
    echo 'sorry charlie';
}
?>
Obviously you would need to use PHP or something else to generate the hash with the private key, but I think you get the general idea. _privatekey should be something complex in case you do have a troll who tries to hack it.
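The generating side of that scheme might look something like this (the _privatekey suffix is the same placeholder as above, and the session key is an assumption):

<?php session_start(); $user = $_SESSION['username']; /* assumes login stored it */ ?>
<script>
var user = '<?php echo $user; ?>';
var hash = '<?php echo md5($user . "_privatekey"); ?>';
</script>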

Prevent Direct Access To File Called By Ajax Function [duplicate]

I'm calling the php code from ajax like this:
ajaxRequest.open("GET", "func.php" + queryString, true);
Since it's a GET request, anyone can see it by simply examining the headers. The data being passed is not sensitive, but it could potentially be abused, since it is also trivial to get the parameter names.
How do I prevent direct access to http://mysite/func.php yet allow my ajax page access to it?
Also, I have tried the solution posted here, but it doesn't work for me; I always get the 'Direct access not premitted' message.
Most Ajax requests/frameworks should set this particular header, which you can use to filter Ajax vs. non-Ajax requests. I use this to help determine the response type (JSON/HTML) in plenty of projects:
if( isset( $_SERVER['HTTP_X_REQUESTED_WITH'] ) && ( $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest' ) )
{
// allow access....
} else {
// ignore....
}
edit:
You can add this yourself in your own Ajax requests with the following in your javascript code:
var xhrobj = new XMLHttpRequest();
// setRequestHeader() may only be called after open()
xhrobj.open("GET", "func.php" + queryString, true);
xhrobj.setRequestHeader("X-Requested-With", "XMLHttpRequest");
What I use is: PHP sessions plus a hash that is sent with each request. This hash is generated using some algorithm on the server side.
Mmm... you could generate a one-time password on session start, store it in $_SESSION, and add a parameter to your Ajax call that re-transmits it (something like a CAPTCHA). It would be valid for that session only.
This would shield you from automated attacks, but a human who has access to your site could still do this manually; still, it could be the basis for devising something more complicated.
Anyone in this thread who suggested looking at headers is wrong in some way or other. Anything in the request (HTTP_REFERER, HTTP_X_REQUESTED_WITH) can be spoofed by an attacker who isn't entirely incompetent, including shared secrets [1].
You cannot prevent people from making an HTTP request to your site. What you want to do is make sure that users must authenticate before they make a request to some sensitive part of your site, by way of a session cookie. If a user makes unauthenticated requests, stop right there and give them a HTTP 403.
Your example makes a GET request, so I guess you are concerned with the resource requirements of the request [2]. You can do some simple sanity checks on HTTP_REFERER or HTTP_X_REQUESTED_WITH headers in your .htaccess rules to stop new processes from being spawned for obviously fake requests (or dumb search-crawlers that won't listen to robots.txt), but if the attacker fakes those, you'll want to make sure your PHP process quits as early as possible for non-authenticated requests.
[1] It's one of the fundamental problems with client/server applications. Here's why it doesn't work: Say you had a way for your client app to authenticate itself to the server - whether it's a secret password or some other method. The information that the app needs is necessarily accessible to the app (the password is hidden in there somewhere, or whatever). But because it runs on the user's computer, that means they also have access to this information: All they need is to look at the source, or the binary, or the network traffic between your app and the server, and eventually they will figure out the mechanism by which your app authenticates, and replicate it. Maybe they'll even copy it. Maybe they'll write a clever hack to make your app do the heavy lifting (You can always just send fake user input to the app). But no matter how, they've got all the information required, and there is no way to stop them from having it that wouldn't also stop your app from having it.
[2] GET requests in a well-designed application have no side-effects, so nobody making them will be able to make a change on the server. Your POST requests should always be authenticated with session plus CSRF token, to let only authenticated users call them. If someone attacks this, it means they have an account with you, and you want to close that account.
Put the following code at the very top of the PHP file that is called by Ajax. It will serve Ajax requests, but will "die" if it is called directly from the browser.
define('AJAX_REQUEST', isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest');
if(!AJAX_REQUEST) {die();}
Personally, I choose not to output anything after "die()", as an extra security measure. Meaning that I prefer to show just a blank page to the "intruder", rather than giving out hints such as "if" or "why" this page is protected.
I would question why you are so convinced that no one should be able to visit that file directly. Your first action really should be to assume that people may visit the page directly and to design around this eventuality. If you are still convinced you want to close access to this file, then you should know that you cannot trust $_SERVER variables for this, as the origins of $_SERVER can be difficult to determine and the values of the headers can be spoofed. In some testing I did, I found those headers ($_SERVER['HTTP_REFERER'] and $_SERVER['HTTP_X_REQUESTED_WITH']) to be unreliable as well.
I solved this problem by preparing a check function that does three things:
1) checks the referer, $_SERVER['HTTP_REFERER'];
2) checks the X-Requested-With header, $_SERVER['HTTP_X_REQUESTED_WITH'];
3) checks the origin via a bridge file.
If all three pass, you succeed in reaching the PHP file called by Ajax; if just one fails, you don't get in.
Points 1 and 2 were already explained; the bridge file solution works like this:
Bridge file
Imagine the following scenario: page A.php calls B.php via Ajax, and you want to prevent direct access to B.php.
1) When the A.php page is loaded, it generates a complicated random code.
2) The code is copied into a file C.txt that is not directly accessible from the web (httpd secured).
3) At the same time, the code is embedded in clear text in the rendered HTML of the A.php page, for example as an attribute of body: data-bridge="ehfwiehfe5435ubf37bf3834i"
4) This embedded code is retrieved by JavaScript and sent via an Ajax POST request to B.php.
5) B.php gets the code and checks whether it exists in the C.txt file.
6) If the code matches, it is popped out of C.txt and the B.php page is accessible.
7) If the code is not sent (in case you try to access the B page directly) or does not match (in case you supply an old trapped code or trick it with a custom code), B.php dies.
This way you can access page B only via an Ajax call generated from the parent page A. The key for pageB.php is given only and ever by pageA.php.
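A condensed sketch of that bridge (random_bytes is PHP 7+; the /var/private path is an assumption, and the real C.txt must sit outside the web root or be denied by httpd):

<?php
// A.php: generate the code, append it to the bridge file, embed it in the page.
$code = bin2hex(random_bytes(16));
file_put_contents('/var/private/C.txt', $code . "\n", FILE_APPEND | LOCK_EX);
echo "<body data-bridge=\"$code\">";
?>

<?php
// B.php: serve the request only if the code is on file, then pop it out.
$bridge = '/var/private/C.txt';
$sent   = isset($_POST['bridge']) ? $_POST['bridge'] : '';
$codes  = is_file($bridge) ? file($bridge, FILE_IGNORE_NEW_LINES) : array();
$pos    = array_search($sent, $codes, true);
if ($sent === '' || $pos === false) {
    die();
}
unset($codes[$pos]);   // pop the code: one-time use
file_put_contents($bridge, implode("\n", $codes) . "\n", LOCK_EX);
// ... handle the Ajax request ...
?>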
There is no point in doing this. It doesn't add any actual security.
All the headers that indicate that a request is being made via Ajax (like HTTP_X_REQUESTED_WITH) can be forged on client side.
If your Ajax is serving sensitive data, or allowing access to sensitive operations, you need to add proper security, like a login system.
I tried this
1) In the main PHP file (the one from which the Ajax request is sent), create a session entry with some random value, like $_SESSION['random_value'] = 'code_that_creates_something_random'; Make sure the session value is created before the $.post.
2) Then:
$.post( "process_request.php",
{
input_data:$(':input').serializeArray(),
random_value_to_check:'<?php echo htmlspecialchars( $_SESSION['random value'], ENT_QUOTES, "UTF-8"); ?>'
}, function(result_of_processing) {
//do something with result (if necessary)
});
3) And in process_request.php:
session_start(); // needed before reading $_SESSION
if( isset($_POST['random_value_to_check']) and
    trim($_POST['random_value_to_check']) == trim($_SESSION['random_value']) ){
    //do what is necessary
}
Earlier I defined the session value, put it in a hidden input field, and sent the value of the hidden input field with the Ajax call. But then I decided the hidden input field was unnecessary, because the value can be sent without it.
I have a simplified version of Edoardo's solution.
Web page A creates a random string, a [token], and saves a file with that name on disk in a protected folder (eg. with .htaccess with Deny from all on Apache).
Page A passes the [token] along with the AJAX request to the script B (in OP's queryString).
Script B checks if the [token] filename exists and if so it carries on with the rest of the script, otherwise exits.
You will also need to set up some cleaning script, e.g. with cron, so old tokens don't accumulate on disk.
It is also good to have script B delete the [token] file right away, to limit repeated requests.
I don't think the HTTP header check is necessary, since headers can easily be spoofed.
Based on your description, I assume you're trying to prevent outright rampant abuse, but don't need a rock-solid solution.
From that, I would suggest using cookies:
Just setcookie() on the page that is using the AJAX, and check $_COOKIE for the correct values on func.php. This will give you some reasonable assurance that anyone calling func.php has visited your site recently.
If you want to get fancier, you could set and verify unique session ids (you might do this already) for assurance that the cookie isn't being forged or abused.
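A bare-bones version of that cookie check (the cookie name and value are placeholders):

<?php
// On the page that fires the Ajax call:
setcookie('visited', '1', time() + 3600);
?>

<?php
// func.php: reject callers that never loaded the page.
if (!isset($_COOKIE['visited']) || $_COOKIE['visited'] !== '1') {
    http_response_code(403);
    exit;
}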
I tried many of these suggestions and none solved the problem. Finally I validated the PHP target file's parameters, and that was the only way I found to limit direct access to the PHP file.
** Putting the PHP file behind an .htaccess restriction broke the Ajax connection from the main HTML page.
