Does PHP's ignore_user_abort() function have any security implications?
I'm thinking of DoS. For example, when the function is exposed to anonymous traffic in code that is resource-expensive.
As an addition to the previous answer, I'd like to add that the risk is not bigger, but is shifted a little.
If the goal is to overload the server by calling an expensive script many times, it is clear that calling ignore_user_abort(true); relieves the attacker of the need to keep the connection open. The script will continue to execute regardless of the connection status and consume resources.
In contrast, without ignore_user_abort(true); the script would end its execution on the first output attempt after the connection is lost (if no output happens, the script will be just as consuming as in the first variant [1]).
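To illustrate, here is a minimal sketch, where expensive_work() is a hypothetical placeholder for the costly operation: with the first line in place the loop runs to completion even if the client disconnects; without it, the script would be terminated at the next output attempt after the disconnect.

<?php
// Keep running even if the client disconnects mid-request.
ignore_user_abort(true);
set_time_limit(0); // for illustration only: lifts the execution time cap

for ($i = 0; $i < 100; $i++) {
    expensive_work(); // hypothetical placeholder for the costly operation
    // Without ignore_user_abort(true), this output attempt is where the
    // script would be aborted once the client has disconnected.
    echo ".";
    flush();
}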
In the case of a DoS attack (and especially a DDoS attack), the attacker likely has no problem at all opening (and holding open) a lot of connections. Therefore, from this perspective, ignore_user_abort makes no difference.
I can't think of any further security-related implications of using this functionality.
I would even claim that most PHP developers do not really know that the execution of their scripts might stop somewhere in the middle just because the connection is lost. I think most would guess that their scripts will execute until the end in all cases, although this is not the default behavior.
I don't see any direct security implications with the ignore_user_abort() function.
In terms of a DoS attack, considering the containing script is
resource expensive
exposed to anonymous traffic
the concern should be server overload, which could lead to temporary or indefinite interruption or suspension of services.
If possible, it would be wise to find alternatives for such resource-expensive code:
If the containing script is used to simulate a cron task, it would be wise to use crontab instead.
If possible, it would be wise to put a programmatic restriction in place to run only one instance of such resource-expensive code, irrespective of how many page hits the containing script gets (see the sketch below).
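A minimal sketch of such a single-instance guard, using an exclusive file lock; the lock file path and the run_expensive_task() helper are assumptions for illustration.

<?php
// Try to take an exclusive, non-blocking lock; if another instance
// already holds it, bail out instead of piling up work.
$lock = fopen('/tmp/expensive-task.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    header('HTTP/1.1 429 Too Many Requests');
    exit('Another instance is already running.');
}

run_expensive_task(); // hypothetical placeholder for the costly work

flock($lock, LOCK_UN);
fclose($lock);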
Hopefully this is of some help.
Related
Do I really need to do mysql_close()? Why or why not?
Is there a trigger that closes the link after mysql_connect even if I don't do mysql_close?
According to the documentation:
Using mysql_close() isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
Personally, I always like to make sure I pedantically close anything that I open, but it's not required.
In most cases calling mysql_close will not make any difference, performance-wise. But it's always good practice to close out resources (file handles, open sockets, database connections, etc.) that your program is no longer using.
This is especially true if you're doing something that may potentially take a few seconds, for instance reading and parsing data from a REST API. Since the API call goes over the network, poor network conditions can cause your script to block for several seconds. In this case, the appropriate time to open the database connection is after the REST call is completed and parsed.
To sum up my answer, the two big rules are:
Only allocate resources (file handles, sockets, database connections, etc.) when your program is ready to use them.
Free up resources immediately after your program is done with them.
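A minimal sketch of both rules together, using the legacy mysql_* API from the question; the API URL, credentials, and table name are illustrative assumptions.

<?php
// Do the slow network work first, with no database resources held.
$data = json_decode(file_get_contents('https://api.example.com/feed'), true);

// Rule 1: open the connection only once the program is ready to use it.
$db = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('app', $db);
mysql_query("INSERT INTO feed_cache (payload) VALUES ('" .
            mysql_real_escape_string(json_encode($data), $db) . "')", $db);

// Rule 2: free the resource immediately after the program is done with it.
mysql_close($db);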
what's the benefit of closing the link?
The benefit is that you can free the connection to the database, and corresponding resources in the database server, earlier than the PHP request cleanup would do it.
Say, for example, you query all the data your request will need in the first 20 milliseconds of the request, but then your PHP code spends another 80 milliseconds running code and formatting the results. That means 80% of the time the app is holding open a db connection without needing to, and on average 8 out of 10 connection threads on the db server are idle and using resources.
The manual says:
Using mysql_close() isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
So, no, not really. It is helpful to free up resources before attempting an operation that potentially consumes large amounts of them, though it probably won't make a big difference.
what's the benefit of closing the link?
Normally, there is no benefit in closing the link yourself, as it will be automatically closed.
I can only think of a couple of benefits:
If your script has a lot of processing to do after it has finished using the database, closing the database link early may help free up a little memory and other resources (such as MySQL connections) while your script continues with other things. This is very unlikely to matter in most scripts, since most scripts terminate fairly quickly after they have finished with the database connection anyway, so the time the connection is held open before the PHP script terminates is comparatively short.
Completeness and cleanliness of code. It can give you a good feeling, and is generally good code hygiene to close what you have opened, even though in this case it isn't technically required.
I was wondering whether it helps significantly, in terms of performance and especially memory, to close sessions as soon as you're done using them, usually in my case near the top of the script. I rarely need the session open midway through the script or later; however, I've been used to just leaving sessions open in case I needed one somewhere down the line, letting PHP auto-close it at the end of the script, assuming this doesn't really cost much in terms of performance, if anything.
Any ideas?
For a small application, keeping the session open is okay. But for a big application with time-intensive operations, this can pull down the performance of the application: with the default file-based session handler, each request holds a lock on the session file until the session is closed, so concurrent requests from the same user are serialized.
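A minimal sketch of closing the session early with session_write_close(); the 'user_id' key and do_long_running_work() are illustrative assumptions.

<?php
session_start();

// Read whatever the script needs from the session up front.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// Write the session and release its lock so that concurrent requests
// from the same user are no longer blocked.
session_write_close();

do_long_running_work($userId); // hypothetical long-running work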
This will be a newbie question, but I'm learning PHP for one sole purpose (atm): to implement a solution. Everything I've learned about PHP was learned in the last 18 hours.
The goal is adding indirection to my JavaScript GET requests to allow for cross-domain access to another website. I also don't wish to flood said website, so I want to put safeguards in place. I can't rely on them being in JavaScript, because that can't account for other peers sending their requests.
So right now I have the following makeshift code, without any throttling measures:
<?php
$expires = 15; // cache lifetime in seconds

if (!isset($_GET["target"]))
    exit();

$fn = "cache/" . md5($_GET["target"]);

if (!isset($_GET["cache"])) {
    // Serve the cached copy if it exists and is still fresh;
    // otherwise fetch the target directly.
    if (file_exists($fn) && time() - filemtime($fn) <= $expires)
        echo file_get_contents($fn);
    else
        echo file_get_contents($_GET["target"]);
}
else if (isset($_GET["data"])) {
    // Store client-supplied data under the target's cache key.
    file_put_contents($fn, $_GET["data"]);
}
?>
It works perfectly, as far as I can tell (it doesn't account for the improbable checksum clash). Now what I want to know, and what my search queries on Google refuse to procure for me, is how PHP actually launches and when it ends.
Obviously if I were running my own web server I'd have a bit more insight into this; I'm not, and I have no shell access either.
Basically I'm trying to figure out whether I can control in the code when the script ends, and whether every GET request to the PHP file launches a new instance of the script or whether it can 'wake up' the same script. The reason is that I wish to track whether it has already sent a request to 'target' within the last n milliseconds, and it seems a bit wasteful to dump the value to a save file and then recover it, over and over, for something that doesn't need to be kept in memory for very long.
Every HTTP request starts a new instance of the interpreter; it's basically an implementation detail whether this is a whole new process, or a reuse of an existing one.
This generally pushes you towards good, simple, and scalable designs: you can run multiple server processes and threads, and you won't get varying behaviour depending on whether the request goes back to the same instance or not.
Loading a recently-touched file will be very fast on Linux, since it will come right from the cache. Don't worry about it.
Do worry about the fact that by directly appending request parameters to the path you have a serious security hole: people can get data=../../../etc/passwd and so on. Read http://www.php.net/manual/en/security.variables.php and so on. (In this particular example you're hashing the inputs before putting them in the path so it's not a practical problem but it is something to watch for.)
More generally, if you want to hold a cache across multiple requests the typical thing these days is to use memcached.
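For example, a minimal sketch with the PECL Memcached extension; the server address, key prefix, and 15-second lifetime are illustrative assumptions.

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$target = $_GET['target'];
$key = 'proxy:' . md5($target);

// Serve from the shared cache; on a miss, fetch and store the body so
// that every server process sees the same entry for the next 15 seconds.
$body = $mc->get($key);
if ($body === false) {
    $body = file_get_contents($target);
    $mc->set($key, $body, 15);
}
echo $body;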
PHP works on a per-request basis, i.e. each request for a PHP file is seen as a new instance. Each instance ends, generally, when the connection is closed. You can, however, use sessions to save data between requests for a specific user.
For basic use of sessions look into:
session_start()
$_SESSION
session_destroy()
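A minimal sketch using those functions to remember, per user, when the last upstream request was sent; the 'last_request' key and the 500 ms threshold are illustrative assumptions, and note that sessions are per-user, so this does not limit other visitors' requests.

<?php
session_start();

// Refuse the request if this user hit us less than 500 ms ago.
$last = isset($_SESSION['last_request']) ? $_SESSION['last_request'] : 0;
if (microtime(true) - $last < 0.5) {
    exit('Too soon; try again shortly.');
}
$_SESSION['last_request'] = microtime(true);

// session_destroy() would discard the stored data entirely:
// session_destroy();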
I have a PHP function that I want to make available publicly on the web, but it uses a lot of server resources each time it is called.
What I'd like to happen is that a user who calls this function is forced to wait for some time before the function is called (or, at the least, before they can call it a second time).
I'd greatly prefer this 'wait' to be enforced on the server-side, so that it can't be overridden by dubious clients.
I plan to insist that users log into an online account.
Is there an efficient way I can make the user wait, without using server resources?
Would 'sleep()' be an appropriate way to do this?
Are there any known problems with using sleep()?
Is there a better solution to this?
Excuse my ignorance, and thanks!
sleep would be fine if you were using PHP as a command line tool for example. For a website though, your sleep will hold the connection open. Your webserver will only have a finite number of concurrent connections, so this could be used to DOS your site.
A better - but more involved - way would be to use a job queue. Add the task to a queue which is processed by a scheduled script and update the web page using AJAX or a meta-refresh.
sleep() is a bad idea in almost all possible situations. In your case, it's bad because it keeps the connection to the client open, and most webservers have a limit of open connections.
sleep() will not help you at all. The user could just load the page twice at the same time, and the command would be executed twice right after each other.
Instead, you could save a timestamp in your database for when your function was last invoked. Then, before invoking it, you should check the database to see if a suitable amount of time has passed. If it has, invoke the function and update the timestamp in the database.
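A minimal sketch of that check using PDO; the DSN, credentials, table, and 60-second cooldown are illustrative assumptions, and a single atomic UPDATE ... WHERE guard would also close the small race window between the read and the write.

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$cooldown = 60; // seconds that must pass between invocations

// Assumes a seeded job_state row for the 'expensive' job.
$lastRun = (int) $pdo->query(
    "SELECT last_run FROM job_state WHERE job = 'expensive'"
)->fetchColumn();

if (time() - $lastRun < $cooldown) {
    exit('Please wait before calling this again.');
}

$pdo->exec("UPDATE job_state SET last_run = " . time() .
           " WHERE job = 'expensive'");
expensive_function(); // hypothetical resource-heavy work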
If you're planning on enforcing a user login, then the problem just got a whole lot simpler.
Have a record in the database listing users and the last time they used your resource-consuming service, and measure the time difference between then and now. If the time difference is too low, deny access and display an error message.
This is best handled at the server level. No reason to even invoke PHP for repeat requests.
Like many sites, I use Nginx, and you can use its rate limiting to block repeat requests over a certain number, e.g. three requests per IP per hour.
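A minimal sketch in Nginx configuration; the zone name, rate, and location are illustrative, allowing roughly one request per minute per client IP with a small burst.

limit_req_zone $binary_remote_addr zone=expensive:10m rate=1r/m;

server {
    location /expensive.php {
        # Requests beyond the burst are rejected (503 by default).
        limit_req zone=expensive burst=3 nodelay;
        # ... pass to PHP-FPM as usual ...
    }
}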
What is the best way to break up a recursive function that is using a ton of resources?
For example:
function do_a_lot() {
    // a lot of code and processing is done here
    // it takes a lot of execution time
    if ($true) {
        // if true we have to do all of that processing again
        do_a_lot();
    }
}
Is there any way to make the server only take the brunt of the first execution and then break the recursion up into separate processes? Or am I dreaming?
Honestly, if your function is using up that much of your system's resources, I'd most likely refactor my code. That said, while it's not truly multithreading, you could perhaps look at using popen to fork off your process.
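A minimal sketch of that idea, where worker.php is a hypothetical script that carries on the heavy processing in its own process:

<?php
// Launch the worker detached from this request: output is discarded
// and the trailing & lets the web request return immediately.
$handle = popen('php worker.php > /dev/null 2>&1 &', 'r');
if ($handle !== false) {
    pclose($handle);
}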
One of the rules of PHP is "share nothing". That means every PHP process is independent and shares nothing with the others. So if you want to break your execution across several PHP processes, you'll have to store the data somewhere. It can be memcached storage, or a database, or the session, as you wish.
Then you'll need to 'fork' your PHP process. There are solutions available to get this done on the server side. IMHO these are all hacks: dangerous, and not in keeping with the PHP/web way, with the exception of 'work queue' tools.
I think the nicest way is to break your task up with AJAX. This will give you a clean user interface and will avoid any long response timeout in the web process. That is: show a 'working zone' to your user, then ask via AJAX for the first step of the job, get the response (storing it on the server side), then ask for the next step, store the new response, and so on. You can even add a 'stop that stuff' button on the client side.
You can also search for 'php work queue' on Google.
If it's a long-running task, divide and conquer with Gearman.
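A minimal sketch with the PECL Gearman extension; the server address, job name, and payload are illustrative assumptions, and a separate worker process registered for the 'do_a_lot' job would pick the work up outside the web request.

<?php
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

// Fire-and-forget: the job is queued and this request returns at once.
$client->doBackground('do_a_lot', json_encode(array('depth' => 0)));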