After my host enabled suPHP, a previously working script has been timing out after roughly 3 minutes (it varies, but as far as I can tell it has never run for more than 3).
The odd part is that the script is not throwing any errors that I can see (and yes, full PHP error reporting/logging is enabled, and all MySQL queries have been checked for errors as well); it simply stops.
Refreshing the page will load more of the data the script is supposed to process (probably because the MySQL queries have been cached), but if there is a lot of data to process it never fully executes.
The other oddity is that I can run test scripts for over 10 minutes on the same host with set_time_limit(0); and the like.
Has anyone else had to deal with this, or does anyone know what is causing the timeout and how to fix it (assuming that dropping suPHP is not an option)? There was also a simultaneous update from PHP 5.2.x to 5.3.x, but I doubt that is causing the issue.
I've seen this happen when memory runs out - the script just ends without an error. If you have a loop, try using the memory functions to dump the memory status as you go. Also, use phpinfo() to see what your maximum memory allowance is - the switch to suPHP may have changed that to your detriment.
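For instance, a minimal sketch of dumping memory inside the loop (the $rows and processRow() names are only placeholders for whatever the script actually iterates over):

<?php
// Report the configured limit once, then log usage as the loop progresses.
error_log('memory_limit: ' . ini_get('memory_limit'));

foreach ($rows as $i => $row) {      // $rows / processRow() are placeholders
    processRow($row);

    if ($i % 100 === 0) {
        error_log(sprintf(
            'row %d: %.1f MB in use, peak %.1f MB',
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        ));
    }
}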
MySQL 5.1.73
Apache/2.2.15
PHP 5.6.13
CentOS release 6.5
CakePHP 3.1
After about 4 minutes (3 min, 57 seconds) the import process I'm running stops. There are no errors or warnings in any log that I can find. The import process consists of a lot of SQL calls and data processing, nothing too crazy, but it can take about 10 minutes to get through 5500 records if it's doing a full compare for updates.
Firefox: Secure Connection Failed - The connection to the server was reset while the page was loading.
Chrome: ERR_NO RESPONSE
The PHP time limit is set to 900, and it is working: if I set it to 5 seconds I get a timeout error, so the 900-second limit is not being reached.
I can make another controller sleep for 10 minutes without this error happening, which indicates that something in the actual program is causing the failure, rather than the hosting service killing the request for taking too long (I've read about VPS providers doing this to prevent spam).
PHP error reporting is turned all the way up in php.ini and, just to be sure, in the controller itself.
The import process completes if I reduce the size of the file being imported. If the file is just long enough, the import will complete AND the browser will still show the error message, which indicates to me that it's not failing at the same point of execution each time.
I have deleted all the cache and restarted the server.
I do not see any output in the Apache logs other than that the request was made.
I do not see any errors in the MySQL log; however, I don't know whether that's because it isn't turned on.
The exact same code works on my local machine without any issue. It's not a perfect match to the server, but it's close: Ubuntu Desktop vs. CentOS, PHP 5.5 vs. PHP 5.6.
I have kept an eye on the memory usage and don't see any issues there.
At this point I'm looking for any good suggestions on what else to look at or insights into what could be causing the failure. There are a lot of possible places to look, and without an error, it's really difficult to narrow down where the issue might be. Thanks in advance for any advice!
UPDATE
After taking a closer look at the memory usage during the request, I noticed it was getting much higher than it ideally should be.
The httpd (Apache) process gets killed and a new one is spawned. Once the new process runs out of memory, the error shows up in the browser. When I looked previously, usage was only at 30%, probably because the old process had just been killed. Watching it the whole way through, I saw it get as high as 80%, which, together with the other processes, was enough to run the machine out of memory - and a killed process can't log anything, hence no errors or warnings. It is interesting to me that the process just starts right back up.
I found a command to show which processes had been killed due to memory which proved very useful:
dmesg | egrep -i 'killed process'
I had similar problems with DebugKit.
I had a bug in my code at the memory peak, and the context was being written to HTML in the error "log".
I've been migrating an old client's site (Kohana 2.3) from one of my servers to a third-party server, and I'm now getting a "premature end of script headers" error after about 30-40 seconds of processing whenever I attempt to export data from my database or send emails to my clients.
I've tried raising both the maximum memory limit and the maximum execution time in php.ini, to no avail; the same error is produced.
I tried manually reducing the number of elements to be exported and got the script to run without erroring at somewhere between 700 and 750 elements, but that threshold goes up and down every time I run the script. The live data I'm using contains over 5000 elements.
memory_get_peak_usage() reports that these scripts use a bit under 16M of memory at most, so I'm reasonably sure I'm not going over any memory limits, as my PHP memory limit is 256M.
Setting the PHP time limit to 5 seconds generates a timeout error instead of the premature end of script headers error, but, that being expected, it isn't helpful.
The strange thing is that nothing is being written to any logs. I've checked the PHP logs, the Kohana logs and the Apache logs, and nothing points me toward what could be causing this issue.
I was wondering if anyone had encountered this before or had any ideas about where to go with this.
Check whether you have a /var/log/apache/suexec.log file; if so, and if suEXEC is the problem, it will explain why it is refusing to execute your script properly. One easy fix to try is adding "-w" to the end of the first line of your Perl script, i.e. changing the first line from "#!/usr/bin/perl" to "#!/usr/bin/perl -w", and seeing if that makes suEXEC happy. Another common fix is to make sure that your CGI script has the same user/group ownership as your cgi-bin folder.
I am using a script with set_time_limit(60*60*24) to process a large number of images, but after 1k images or so (1 or 2 minutes) the script stops without showing any errors on the command line.
I'm also using a logger that, on shutdown (via register_shutdown_function), writes to a file any error thrown by the script. But when this script stops, nothing is written, even though it should write something even when no errors are thrown; it has worked perfectly with every other script in every other situation I've had.
Apache error_log doesn't show anything either.
Any ideas?
Edit: My environment is CentOS 5.5 with PHP 5.3.
It is probably running out of memory.
ini_set('memory_limit', '1024M');
May get you going if you can allocate that much.
Please make sure you're not running in safe mode:
http://php.net/manual/en/features.safe-mode.php
Please note that register_shutdown_function does NOT guarantee that the associated function will be executed every time, so you should not rely on it.
See http://php.net/register_shutdown_function
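As a point of reference, a shutdown logger along these lines (the file path and wording are illustrative, not the asker's actual code) will only fire when PHP shuts down normally; a hard kill such as the OOM killer or a segfault bypasses it entirely:

<?php
register_shutdown_function(function () {
    // error_get_last() returns the last fatal/parse error, if there was one.
    $error = error_get_last();
    $line  = date('c') . ' script shut down';
    if ($error !== null) {
        $line .= sprintf(' after error [%d] %s in %s:%d',
            $error['type'], $error['message'], $error['file'], $error['line']);
    }
    file_put_contents('/tmp/shutdown.log', $line . PHP_EOL, FILE_APPEND);
});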
To debug the issue, check the PHP error log (which is NOT the Apache error log when you're running PHP from the console; check your php.ini or ini_get('error_log') to find out where it is).
A workaround may be to write a simple wrapper script in bash that executes the PHP script and then runs whatever you want executed at the end.
Also note that PHP's time limit doesn't count time spent in external, non-PHP activity, like network calls, some library functions, ImageMagick, etc.
So the time limit you set may actually last much longer than you expect it to.
I have a long-running script that dies out for no reason. It's supposed to run for over 8 hours, but it dies after an hour or two with no errors, nothing. I tried running it via CLI and via HTTP; no difference.
I have the following parameters set:
set_time_limit(0);
ini_set('memory_limit', '1024M');
I've been monitoring the memory usage, and it doesn't go over 200M.
Is there anything else that I'm missing? Why would it die out?
One possible explanation could be that the PHP garbage collector is interfering with the script. That could be why you're seeing random die offs. When the garbage collector is turned on, the cycle-finding algorithm is executed whenever the root buffer runs full.
The PHP manual states:
The rationale behind the ability to turn the mechanism on and off, and to initiate cycle collection yourself, is that some parts of your application could be highly time-sensitive.
You could try disabling the PHP garbage collector using gc_disable. The manual recommends you call gc_collect_cycles right before disabling to free the buffer.
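A minimal sketch of what that would look like (PHP 5.3+, where these functions were introduced):

<?php
gc_collect_cycles();   // empty the root buffer first, as the manual suggests
gc_disable();          // keep the cycle collector out of the long-running section

// ... the long-running work goes here ...

gc_enable();           // optionally re-enable it once the time-sensitive part is done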
Another explanation could be the code itself. An 8-hour script is a long script, and if it's complex it could easily be hitting a snag that causes it to exit. For your troubleshooting now, you should definitely turn error reporting up to report everything using error_reporting(-1);.
Also, if your script is communicating with other services, say a database for example, it's quite possible that could be the issue. If the database server runs out of memory or times out, it could be causing your script to hang and die. If this is the case, you could split up your connections to the database and connect/disconnect at specific timed intervals during the script to keep that connection fresh. The same mentality could be applied to any other service you may be communicating with.
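A rough sketch of that idea using mysqli; the interval of 500 records and the helper names are made up for illustration:

<?php
// Recycle the database connection periodically so a server-side timeout
// can't take down the whole run. connectDb(), $records and processRecord()
// are placeholders for the real code.
function connectDb() {
    $db = new mysqli('localhost', 'user', 'pass', 'dbname');
    if ($db->connect_errno) {
        error_log('MySQL connect failed: ' . $db->connect_error);
        exit(1);
    }
    return $db;
}

$db = connectDb();
foreach ($records as $i => $record) {
    if ($i > 0 && $i % 500 === 0) {   // every 500 records, reconnect
        $db->close();
        $db = connectDb();
    }
    processRecord($db, $record);
}
$db->close();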
For testing purposes, you could make your script write each successful query to a log file, including a timestamp from when the query begins and another from when it ends. You might not get any errors, but it may help you determine whether there is a specific problem query or a query that hangs for longer than usual. You could also check that your MySQL connection is still valid and print something out to confirm that as well.
An example log file:
[START 2011/01/21 13:12:23] MySQL Connection: TRUE [END 2011/01/21 13:12:28] Query took 5s
[START 2011/01/21 13:12:28] MySQL Connection: TRUE [END 2011/01/21 13:12:37] Query took 9s
[START 2011/01/21 13:12:39] MySQL Connection: TRUE [END 2011/01/21 13:12:51] Query took 12s
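A wrapper along these lines could produce that kind of log; the log path is an assumption and $db is assumed to be an open mysqli handle:

<?php
// Log the start/end timestamps, connection state and duration of each query,
// in the same shape as the sample log above.
function loggedQuery(mysqli $db, $sql) {
    $start = time();
    $alive = $db->ping() ? 'TRUE' : 'FALSE';   // is the connection still valid?
    $result = $db->query($sql);
    $end = time();

    file_put_contents('/tmp/query.log', sprintf(
        "[START %s] MySQL Connection: %s [END %s] Query took %ds\n",
        date('Y/m/d H:i:s', $start), $alive, date('Y/m/d H:i:s', $end), $end - $start
    ), FILE_APPEND);

    return $result;
}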
It's probably something related to the code.
I have scripts running weeks and months with no trouble.
Your database connection might time out and produce an error.
It's also possible that you're running out of file descriptors if you open connections or files, or that your shared memory region is full. It depends on the code.
Check the system logs to make sure SELinux is not interfering with you - that would keep your script from printing any error. From the system logs you can also see whether you have crossed per-user limits on any system resources (see ulimit).
It's really strange that you run it in the CLI and get nothing, not even a segfault. Did you check both stdout and stderr?
Maybe it segfaults.
Try launching your script this way:
$ ulimit -c unlimited
$ php script.php
Then see if you find a core dump file (core.xxxx) in the working directory when it dies.
Apache also has its own request timeout; you will need to tweak the httpd.conf file.
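If that Apache-side timeout turns out to be the culprit, raising the Timeout directive in httpd.conf and restarting Apache would look something like this (the value is only an example):

# httpd.conf - example value; pick something longer than your slowest request
Timeout 900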
I have a PHP script that grabs a chunk of data from a database, processes it, and then looks to see if there is more data. This process runs indefinitely, and I run several of these at a time on a single server.
It looks something like:
<?php
while ($shouldStillRun)
{
    // do stuff
}
logThatWeExitedLoop();
?>
The problem is, after some time, something causes the process to stop running and I haven't been able to debug it and determine the cause.
Here is what I'm using to get information so far:
error_log - Logging all errors, but no errors are shown in the error log.
register_shutdown_function - Registered a custom shutdown function. This does get called, so I know the process isn't being killed by the server; it's being allowed to finish (or at least I assume that's the case, given that this gets called?).
debug_backtrace - Logged a debug_backtrace() in my custom shutdown function. This shows only one call and it's my custom shutdown function.
Log if it reaches the end of the script - Outside the loop, I have a function that logs that the script exited the loop (and therefore would be reaching the end of the source file normally). When the script dies randomly, this is not logged, so whatever kills it does so while it's in the middle of processing.
What other debugging methods would you suggest for finding the culprit?
Note: I should add that this is not an issue with max_execution_time, which is disabled for these scripts. The time before being killed is inconsistent. It could run for 10 seconds or 12 hours before it dies.
Update/Solution: Thank you all for your suggestions. By logging the output, I discovered that when a MySQL query failed, the script was set to die(). D'oh. I updated it to log the MySQL errors and then terminate. It's working like a charm now!
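For anyone hitting the same thing, the fix was roughly a change from a bare die() to logging the error first - sketched here with the old mysql_* API that was current at the time; the exact code is an assumption:

<?php
// Before: a failed query silently ended the script.
// $result = mysql_query($sql) or die();

// After: record what actually went wrong, then stop.
$result = mysql_query($sql);
if ($result === false) {
    error_log('MySQL error: ' . mysql_error() . ' -- query was: ' . $sql);
    exit(1);
}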
I'd log your script's memory usage. Maybe it acquires too much memory, hits the memory limit, and dies?
Remember, PHP has a directive in php.ini that controls how long a script may run: max_execution_time.
Make sure that you are not going over this, or use set_time_limit() to increase the execution time. Is this program running through a web server or via the CLI?
Adding my bad experiences with PHP, from looking through some background scripts I wrote earlier this year: sorry, but PHP is a terrible scripting language for doing anything over long stretches of time. I see that newer PHP (which we haven't upgraded to) adds the ability to force the GC to run. The problem I've been having comes from using too much memory, because the GC almost never runs to clean up after itself. Anything that references itself recursively will also never be freed.
Creating an array of 100,000 items allocates memory, but then setting the array to an empty array, or splicing it all out, does NOT free that memory immediately and doesn't mark it as unused (i.e., making a new 100,000-element array increases memory further).
My personal solution was to write a Perl script that ran forever and called system("php my_php.php"); when needed, so that the interpreter would be freed completely each time. I'm currently supporting 5.1.6; this might be fixed in 5.3+, or at the very least there are now GC commands you can use to force the GC to clean up.
Simple script
#!/usr/bin/perl -w
use strict;

while (1) {
    if ( system("php /to/php/script.php") != 0 ) {
        sleep(30);
    }
}
Then in your PHP script:
<?php
// do a single processing block
if ( $moreblockstodo ) {
    exit(0);
} else {
    // no? then let's sleep for a bit until we get more
    exit(1);
}
?>
I'd log the state of the function to a file in a few different places in each loop.
You can get the contents of most variables as a string with var_export, using the var_export($varname,true) form.
You could just log this to a certain file, and keep an eye on it. The latest state of the function before the log ends should provide some clues.
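A small helper in that spirit might look like this; the file path and variable names are only examples:

<?php
// Append a labelled snapshot of selected variables to a state log.
function logState($label, array $vars) {
    $entry = '[' . date('c') . '] ' . $label . "\n";
    foreach ($vars as $name => $value) {
        $entry .= '  ' . $name . ' = ' . var_export($value, true) . "\n";
    }
    file_put_contents('/tmp/script-state.log', $entry, FILE_APPEND);
}

// Inside the loop, at a few different places:
logState('top of loop', array('shouldStillRun' => $shouldStillRun, 'row' => $row));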
Sounds like whatever is happening is not a standard PHP error. You should be able to throw your own errors inside a try...catch statement and log them from there. I don't have more details than that because I'm on my phone, away from a PC.
I've encountered this before on one of our projects at work. We have a similar setup - a PHP script checks the DB for tasks to be done (such as sending out an email, updating records, or processing some data). The PHP script has a while loop inside, set to:
while (true) {
    // do something
}
After a while, the script also gets killed somehow. I've already tried most of what has been said here - setting max_execution_time, using var_export to log all output, placing a try/catch, redirecting the script output (php ... > output.txt), etc. - and we've never been able to find out what the problem is.
I think PHP just isn't built to run background tasks by itself. I know this doesn't answer your question (how to debug this), but the way we worked around it was to use a cron job to call the PHP file every 5 minutes. This is similar to Jeremy's answer of using a Perl script - it ensures that the interpreter is freed after each execution is done.
If this is on Linux, look into the system logs - the process could have been killed by the OOM (out-of-memory) killer (unlikely; you'd also see other problems if that were happening) or by a segmentation fault (some versions of PHP don't like some versions of extensions, resulting in weird crashes).