Force timeout on a MySQL query from PHP - php

Is there a way to force Mysql from PHP to kill a query if it didn't return within a certain time frame?
I sometimes see expensive queries running for hours (obviously by this time the HTTP connection has timed out or the user has left). Once enough such queries accumulate, they start to affect overall performance badly.

Long-running queries are a sign of poor design. It is best to inspect the queries in question and see how they could be optimised; killing them automatically would just be ignoring the problem.
If you still want this, you could use the SHOW PROCESSLIST command to get all running processes and then use KILL x to kill a client connection. For this to work you have to do the check from another PHP script, since multithreading is not possible within a single request. Also, it is not advisable to grant the MySQL user your PHP application uses the privileges needed to mess with the server.
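A minimal sketch of such a watchdog, assuming a separate MySQL account that has the PROCESS privilege and is allowed to KILL other sessions (the credentials and the 30-second threshold below are placeholders):
<?php
// run from cron, not as the web-facing application user
$db = new mysqli('localhost', 'watchdog_user', 'secret', 'information_schema');

$long = $db->query(
    "SELECT id, time, info FROM PROCESSLIST
     WHERE command = 'Query' AND time > 30"
);

while ($row = $long->fetch_assoc()) {
    // KILL QUERY aborts the statement but leaves the client connection open
    $db->query('KILL QUERY ' . (int) $row['id']);
    error_log("Killed query {$row['id']} after {$row['time']}s: {$row['info']}");
}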

Warning: this kind of intended behaviour should now be handled with something like upstart.
You want to create daemons for this sort of thing, but you could also use cron.
Just have a script look at set intervals for queries running longer than some threshold and kill them.
-- this SELECT only generates the KILL statements for all processes that have
-- been executing for more than 10 seconds; the statements still have to be run
SELECT CONCAT('KILL ', id, ';')
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE state = 'executing' AND `time` >= 10;
However, if queries are running for such a long time, they should be optimized.
On the other hand, you may be trying to administer a shared server with some rogue users. In that scenario you should state in the terms of service that scripts will be monitored and disabled, and that is exactly what should be done with the offending ones.

Related

Make PHP script call itself after some time

I have some limitations with my host and my scripts can't run for longer than 2 or 3 seconds. But the time they take to finish will certainly increase as the database gets larger.
So I thought about making the script stop what it is doing and call itself after 2 seconds, for example.
First I tried using cURL, then I made some attempts with wget, but there is always a problem with waiting for the response and timeouts (with cURL, for example, I just need to ping the script, not wait for a response), or with server permissions (the functions used to run wget, such as exec, seem to be disabled on my server, or something like that).
What do you think is the best idea to make a PHP script ping/call itself?
On Unix/Linux systems I would personally recommend scheduling cron jobs to keep running the scripts at certain intervals.
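As a rough illustration (the path and the five-minute interval are placeholders), a crontab entry like this keeps a CLI PHP script running on a schedule instead of having it call itself:
# run the script every 5 minutes via the PHP CLI; adjust path and interval to taste
*/5 * * * * /usr/bin/php /path/to/script.php >> /var/log/myscript.log 2>&1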
Maybe this SO link will help you.
PHP scripts generally don't call other PHP scripts. It is possible to spawn a background process as illustrated here, but I don't think that's what you're after. If so, you'd be better off using cron as discussed above.
Calling a function every X amount of seconds with the same script is certainly possible, but this does the opposite of what you want since it would only extend the run time of the script in question.
What you seem to be asking is, contrary to your comment, somewhat paradoxical. A process that calls method() every so often is still a long running process and is subject to the same restrictions as any other process on the server, regardless of the fact that it may be sitting idle for short intervals.
As far as I can see your options are:
Extend the PHP max_execution_time directive (see the sketch after this list), or have your sysadmin do so if they are willing
Revise your script so that it completes within the time limit
Move to a new server
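As a rough sketch of the first option (the value is illustrative, and restrictive shared hosts may disable or silently ignore these calls):
<?php
// try to raise the limit for this script only; 0 would remove the limit entirely
ini_set('max_execution_time', 300);
// set_time_limit(300); does the same and also restarts the timer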

What is killing my PHP process, and leaving so many sleeping mysql connections?

I'm having trouble investigating an issue with many sleeping MySQL connections.
Once every one or two days I notice that all (151) MySQL connections
are taken, and all of them seem to be sleeping.
I investigated this, and one of the most reasonable explanations is that the PHP script was just killed, leaving a MySQL connection behind. We log visits at the beginning of the request, and update that log when the request finishes, so we can tell that indeed some requests do start, but don't finish, which indicates that the script was indeed killed somehow.
Now, the worrying thing is, that this only happens for 1 specific user, and only on 1 specific page. The page works for everyone else, and when I log in as this user on the Production environment, and perform the exact same action, everything works fine.
Now, I have two questions:
I'd like to find out why the PHP script is killed. Could this possibly have anything to do with the client? Can a client do 'something' to end the request and kill the php script? If so, why don't I see any evidence of that in the Apache logs? Or maybe I don't know what to look for? How do I find out if the script was indeed killed or what caused it?
how do I prevent this? Can I somehow set a limit to the number of MySQL connections per PHP session? Or can I somehow detect long-running and sleeping MySQL connections and kill them? It isn't an option for me to set the connection timeout to a shorter time, because there are processes which run considerably longer, and the 151 connections are used up in less than 2 minutes. Also, increasing the number of connections is no solution. So, basically: how do I kill processes which have been sleeping for more than, say, 1 minute?
Best solution would be that I find out why the request of 1 user can eat up all the database connections and basically bring down the whole application. And how to prevent this.
Any help greatly appreciated.
You can decrease the wait_timeout variable of the MySQL server. This specifies the number of seconds MySQL waits for activity on a non-interactive connection before it aborts the connection. The default value is 28800 seconds, which seems quite high. You can set this dynamically by executing SET GLOBAL wait_timeout = X; once.
You can still increase it for cronjobs again. Just execute the query SET SESSION wait_timeout = 28800; at the beginning of the cronjob. This only affects the current connection.
Please note that this might cause problems too if you set it too low, although I don't expect many: most scripts should finish in less than a second, so setting wait_timeout=5 should cause no harm…
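Put together, the two statements from this answer look like this (the global value is illustrative; SET GLOBAL requires a privileged account):
-- lower the server-wide default for new non-interactive connections
SET GLOBAL wait_timeout = 60;
-- at the start of a long cron job, restore the old default for that connection only
SET SESSION wait_timeout = 28800;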

Website Down After Performing PHP Action

Say I have a website hosted on a remote server. I navigate to a page on this website and attempt to perform a specific action (the details of which I can go into more depth if necessary, but for the time being let me just say that the action involves running a program on the server with data obtained from a database.)
The page just continually loads. So I attempt to navigate to the main website. That now continually loads without resolving as well. After about a day the website comes back, so perhaps there is some automated process that kills tasks after a certain time has passed. My question is this:
Am I able to kill this task or perform any action to allow me to navigate to the website without waiting a full day? I can go into more detail if necessary.
Thanks.
By request, some more in-depth information.
The PHP script retrieves text-based information from the database. Based on this information the PHP script calls an executable program. The output from the executable is output to the screen.
I've checked the MySQL processlist and found a process that took a particularly long time; I killed it, so it may be that the executable is still continually running. If so, how would I go about determining this, and if not, is there anything else it could potentially be? Thanks.
Solved:
Alright, so basically I followed Mike Purcell's advice, which was to show the list of MySQL processes and kill any that had been running for a substantially long time. Once that was done, it was just a matter of restarting mysql and httpd. Thanks again to everyone who commented.
Sounds like that mysterious action may have caused an un-optimized query to be executed, which may cause other queries to hang until the bad query has finished executing. If you have access to the mysql server via terminal you could issue the following commands to kill the long running query:
mysql> show processlist;
This command will output any currently running queries. Pay attention to the time column, this will display the time in seconds of how long a query has been executing. In theory you should never have any queries running past a few seconds, but some queries may take upwards of 10 minutes depending on the query and the dataset involved. The other column to note is the id column, with this value you can kill a query manually (much like killing a process on a linux machine).
mysql> kill 387; # 387 is just an example id from the processlist
Now when you run the show processlist command again, that query should disappear from the process list.
Looks like you DoS'ed your own website!
Maybe this webpage shouldn't be a webpage, but a task performed manually or via cron? Otherwise, anyone who finds this page will be able to kill your website whenever they want...
If this program needs so many resources, you should limit it somehow: try to optimize it, or try to limit the resources allotted to it (I don't know exactly how, I just know it's possible).
If the problem is already there, you can try to kill the process (ps aux | grep <processname>, then kill -9 <pid>), or the query if the problem comes from MySQL (inside your mysql client, show processlist; then kill <query id>). Try the SQL approach first.
If the site is locked because of an intensive MySQL process, or a MySQL process gone rogue, the best thing you could do is isolate the process to its own database thread so normal tables don't get locked.
Another option is to switch some of your tables from (presumably) MyISAM to the InnoDB engine, if that can work. Batch insertions will be slower and the performance profile will vary, but you won't be subjected to such severe locking.
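Switching an individual table's engine is a single statement (the table name is hypothetical; rebuilding a large table can itself take a long time, so schedule it carefully):
ALTER TABLE photos ENGINE = InnoDB;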

Cron job for php script that requires VERY long execution time

I have a PHP script run as a cron job that executes a set of simple tasks, looping over each user in the database, and it takes about 30 minutes to complete. This process starts over every hour and needs to be as fast and efficient as possible. The problem I'm having is that, as with any server script, execution time varies, and I need to figure out the best cron time settings.
If I run cron every minute, I need to stop the last loop of the script 20 seconds before the end of the minute to make sure that the current loop finishes in time. Over the course of the hour this adds up to a lot of wasted time.
I'm wondering whether it's a bad idea to simply remove the PHP execution time limit, run the script once an hour and let it run to completion... is this a bad idea?
Instead of setting max_execution_time you could also use set_time_limit() to reset the counter on every loop. This ensures your script never runs out of time unless something is seriously hanging within the current loop (and taking longer than the max_execution_time).
Basically this should make your script run as long as it needs while giving it a 30-second timeout between two set_time_limit() calls.
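A minimal sketch of that pattern (the per-user function is a placeholder for whatever work the cron job does on each user):
<?php
foreach ($users as $user) {
    // give every iteration its own 30-second budget; if a single iteration
    // hangs for longer than that, the script is still aborted
    set_time_limit(30);
    processUser($user); // hypothetical per-user task
}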
Assuming you'd like the work done ASAP, don't use cron. Cron is good for things that need to happen at specific times. It's often abused to simulate a background process that would ideally process work as soon as work appears. You should probably write a daemon that runs continuously. (Note: you could also look at a message/work-queue type system, there are nice libraries out there to do this too)
You can write a daemon from scratch using the pcntl functions (since you don't care about multiple worker processes, it's super-easy to get a process running in the background), or cheat and just make a script that runs forever and run it via screen, or leverage some solid library code like PEAR's System_Daemon or nanoserv.
Once the daemonization stuff is taken care of, all you really care about is having a loop that runs forever. You'll want to take care that your script doesn't leak memory, or consume too many resources.
Generally, you can do something like:
<?php
// some setup code
while (true) {
    $todo = figureOutIfIHaveWorkToDo();
    foreach ($todo as $something) {
        // do stuff with $something
        // remember to clean up resources so you don't leak memory!
        usleep(/* some integer */);
    }
    usleep(/* some other integer */);
}
And it'll work pretty well.
Setting the time limit to 0 and letting it do its thing is fairly typical of PHP based cronjobs (in my experience), but this is also the point when you should ask yourself a few important questions, such as "Should I rewrite this job in a compiled language?" and "Am I using all of my tools (database, etc) to their maximum efficiency?"
That said, maybe better than completely removing the time limit would be to set it to the upper limit you actually want. If that means 48 minutes, then set_time_limit(48 * 60);
I really think you shouldn't set the timeout to 0; that is just looking for trouble. At most, set it to 59*60 seconds. Setting it to 0 might cause security problems: if a script hangs, it will hang almost forever until the server host stops the execution. It is considered bad practice to do so.
I have used the php command-line interface for similar long running tasks in the past. You probably do not want to remove the execution time limit for any request.
Sounds like a great idea if there's little chance that it will take more than an hour. Note, however, that the wrong bug can be a really good way of making it take longer than expected.
To avoid all sorts of nasty problems, you should have a guard file with the process ID of the script. On startup, you should check that the file doesn't exist, or, if it does, that the process with the ID stored in it is no longer running (via a kill(pid, 0) call). If these conditions are met, create a new file with the script's PID and delete the file when you're done.
This is the same trick that many daemons use to ensure it isn't already running. If the daemon was killed suddenly, the file will still exist but the PID of the process therein is unlikely to be running.
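A rough PHP sketch of that guard-file trick (the lock file path is a placeholder, and posix_kill needs the posix extension):
<?php
$lockFile = '/tmp/myjob.pid'; // placeholder path

if (is_file($lockFile)) {
    $oldPid = (int) file_get_contents($lockFile);
    // signal 0 sends nothing but reports whether the process still exists
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {
        exit("Previous run (PID $oldPid) is still active.\n");
    }
}

file_put_contents($lockFile, getmypid());

// ... do the long-running work here ...

unlink($lockFile);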
Depending on what your script does, removing the time limit can lead to problems. If, for example, you are polling an external server that is unresponsive while the job is running, and your cron job takes 2 hours instead of 30 minutes to complete, you may get a stack of PHP processes being fired up even though the previous ones haven't completed yet. This can cause system instability and crashes.
You probably have two options:
Make sure that no other instance of your script is running beforehand, otherwise exit() on start.
Consider changing your cronjob into a daemon.
Does it have to run hourly like clockwork?
If not, split the job (you mentioned it was more than one simple task) and do each task every hour?
Or split it per user: do A-M one hour, then N-Z the next?

Is PHP suitable for very large projects? Can it be transaction-safe?

That question may appear strange.
But every time I have done PHP projects in the past, I encountered this sort of bad experience:
Scripts stop running after 10 seconds. This results in very bad database inconsistencies (a bad example for a deleting loop: a user is about to delete a photo album; the album object gets deleted from the database, and then halfway through deleting the photos the script gets killed right where it is, and 10,000 photos are left with no reference).
It's not transaction-safe. I've never found a way to do something securely, to ensure it's done. If the script gets killed, it gets killed, right in the middle of a loop. That never happened on Tomcat with Java. Java runs and runs and runs, however long it takes.
Lots of newsletter scripts try to get around that problem by splitting the job up into lots of packages, i.e. sending 100 at a time, then reloading the page (oh man, really stupid), doing the next batch, and so on. Most often something hangs or the script takes longer than 10 seconds, and your platform is crippled.
But then, I hear that very big projects use PHP, like studivz (the German Facebook clone, actually the biggest German website). So there is a tiny light of hope that this bad behaviour just comes from unprofessional hosting companies that simply kill PHP scripts because their servers are so bad. What's the truth about this? Can it be configured in such a way that scripts never get killed because they take a little longer?
Is PHP suitable for very large projects?
Whenever I see a question like that, I get a bit uneasy. What does very large mean? What may be large to you, may be small to me or vice versa. And that is even assuming that we use the same metric. Are you measuring time to build the project, complete life-cycle of the project, money that are involved, number of people using it, number of developers to build/maintain it, etc. etc.
That said, the problems you're describing sound like you don't know your technology well enough. That would be a problem for you regardless of which technology you picked. For example, use database transactions to ensure atomicity, and use asynchronous offline jobs to process long-running tasks (such as dispatching a mailing list).
A lot of the bad behaviour is handled by good frameworks like the Zend Framework.
Anything that takes longer than 10 seconds is really messed up, but you can always raise the execution time with http://de3.php.net/set_time_limit
A lot of big sites are written in PHP: Facebook, Wikipedia, StudiVZ, Digg.com, etc. A lot of the things you are talking about are just configuration issues; maybe you should look into that?
Are you looking for set_time_limit() and ignore_user_abort()?
Performance is not a feature you can just throw in after most of the site is done.
You have to design the site for heavy load.
If a database task normally involves 10K rows, you should be prepared not just for execution time issues, but for other maintenance questions as well.
Worst case: make a consistency tool to check and fix those errors.
Better: instead of physically deleting the images, just flag them and let background services take care of the expensive operations (a sketch follows below).
Best: you can utilize a job queue service and add this job to the queue.
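A rough sketch of the flag-instead-of-delete idea (the schema is hypothetical; a cron job or queue worker does the slow part later, in small batches):
-- in the web request: cheap, instant soft delete
UPDATE photos SET deleted_at = NOW() WHERE album_id = 42;
-- in the background job: the expensive cleanup, repeated until nothing is left
DELETE FROM photos WHERE deleted_at IS NOT NULL LIMIT 500;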
If you do need to do transactions in php, you can just do:
mysql_query("BEGIN");
/// do your queries here
mysql_query("COMMIT");
The commit command will just complete the transaction.
If any errors occur, you can just rollback with:
mysql_query("ROLLBACK");
Edit: Note this will only work if your tables use a storage engine that supports transactions, such as InnoDB.
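The mysql_* functions shown above have been removed from modern PHP; a hedged equivalent of the same pattern with PDO (the connection details are placeholders) looks like this:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret'); // placeholders
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();
    // ... do your queries here ...
    $pdo->commit();
} catch (Exception $e) {
    // undo everything done since beginTransaction()
    $pdo->rollBack();
    throw $e;
}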
You can configure how much time is allowed for executing a script, either in the php.ini setting or via ini_set/set_time_limit
Instead of studivz (the German Facebook clone), you could look at the actual Facebook which is entirely PHP. Or Digg. Or many Yahoo sites. Or many, many others.
ignore_user_abort is probably what you're looking for, but you could also add another layer in terms of scheduled maintenance jobs. They basically run on a specified interval and do various things to make sure your data/filesystem are in a state that you want... deleting old/unlinked files is just one of many things you can do.
For large loops like deleting photo albums or sending thousands of emails, you're looking for ignore_user_abort and set_time_limit.
Something like this:
ignore_user_abort(true); // user leaving the webpage will not kill the script
set_time_limit(0);       // script can take as long as it wants
for ($i = 0; $i < 10000; $i++)
    costly_very_important_operation();
Be carefull however that this could potentially run the script forever:
ignore_user_abort(true); // user leaving the webpage will not kill the script
set_time_limit(0);       // script can take as long as it wants
while (true)
    do_something();
That script will never die, unless you restart your server.
Therefore it is best never to set the time limit to 0.
Technically no programming language is transaction safe, it's the database that needs to be transaction safe. So if the script/code running dies or disconnects, for whatever reason, the transaction will be rolled back.
Putting queries in a loop is a very bad idea unless it is specifically designed to run in batches, breaking a much larger set into smaller pieces. Adjusting PHP timers and limits is generally a stop-gap solution; you are still dependent on the client browser if you use the web to kick off a script.
If I have a long process that needs to be kicked off by a browser, I "disconnect" the process from the browser and web server so control is returned to the user while the script runs. PHP scripts run from the command line can run for hours if you want. You can then use AJAX, or reload the page, to check on the progress of the long running script.
There are security concerns with this code, but to "disconnect" a process from PHP running under something like Apache:
exec("nohup /usr/bin/php -f /path/to/script.php > /dev/null 2>&1 &");
But that really has nothing to do with PHP being suitable for large projects or being transaction safe. PHP can be used for large projects, but since by default there is no code that remains "resident" between hits, it can get slow if not designed right. Also, since there is no namespace support, you want to plan ahead if you have a large development team.
It's fine for a Java based system to take a few minutes to startup, initialize and load all the default objects. But this is unacceptable with PHP. PHP will take more planning for larger systems. The question is, when does the time saved in using PHP get wasted by the additional planning time required for a large system?
The reason you most likely experienced bad database consistencies in the past is because you were using the MyISAM engine for mysql (which DOES NOT support transactions). Use InnoDB instead, it supports transactions and performs row level locking.
Or use PostgreSQL.
Many, many sites are made in PHP. However, you will not hear about the millions of PHP pages that do not exist anymore because they were abandoned. Those pages may have burned all the company's money dealing with the PHP mess, or maybe they went bankrupt because their software was so crappy that customers did not want it… PHP seems good at the start, but it does not scale very well. Yes, there are many huge websites made in PHP, but they are the exception rather than the norm.
