We've inherited a platform that has a cron job that every minute curls a local PHP script three times with different parameters (curl -s -o --url https://localhost/myscript.php?option=XYZ -k). The script runs for about 1 minute and it's possible that multiple instances with the same option overlap for a short time. The script logs to a different file per option, and each log entry starts with the timestamp of when that script instance started, so it acts as an instance identifier.
The script has this skeleton:
<?php
$option=XYZ;
$scriptId = time();
$file = "log_$option.txt";
file_put_contents($file,"\n$scriptId: Start\n",FILE_APPEND);
session_start();
$expires = time()+60;
file_put_contents($file,"\n$scriptId: Expires at $expires\n",FILE_APPEND);
while(time()<$expires){
file_put_contents($file,"\n$scriptId: Not expired at ".time()."\n",FILE_APPEND);
switch($option){
case X:
do_db_stuff();
break;
...
}
file_put_contents($file,"\n$scriptId: Will sleep at ".time()."\n",FILE_APPEND);
sleep(13);
file_put_contents($file,"\n$scriptId: Woke up at ".time()."\n",FILE_APPEND);
}
file_put_contents($file,"\n$scriptId: Finished at ".time()."\n",FILE_APPEND);
Normally this script runs fine (even if instance A is in its final sleep when instance B starts and they overlap), but on occasion we have two issues that we can confirm with the logs:
sometimes it sleeps for less than 13 seconds (a variable amount of time, always less than 13);
sometimes the script stops (no more logging after the "Will sleep" one, and we can verify that no db stuff is being done). [Update on this in Edit 2]
We've looked into the possible causes but couldn't find any:
PHP max_execution_time is set to 240 seconds and the script never takes more than one and a half minutes;
the sleep() documentation says it is per session, but curl isn't using cookies, so each instance should get a different session (and even if they shared the same session it would always block, since we always execute three script instances, which it doesn't);
the hosting tech team says there are no errors in either the server error log or the PHP error log at the timestamps when these issues happen.
I can't reproduce the issues at will, but they happen at least once a day.
What I'd like to know is: what could be interfering with the sleep behaviour, and how can I detect or fix it?
Additional information:
linux system
mysql 5.5
apache
php 5.3
php max_execution_time set to 240
Edit 1: Just to clarify: we actually have 3 options, so it writes to 3 log files, one per option. At any given time there can be up to two instances per option running (each instance of the same option overlaps a small amount of time).
Edit 2: As per Jan's suggestion, I added logging of the sleep() return value. The script has already stopped once with that logging in place:
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:29
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:37 with sleep return 5
[2016-01-05, 13:11:01] Not expired at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:38 with sleep return 13
... no more log from instance [2016-01-05, 13:11:01] ...
[2016-01-05, 13:12:01] Start
According to the sleep documentation:
If the call was interrupted by a signal, sleep() returns a non-zero value. On Windows, this value will always be 192 (the value of the WAIT_IO_COMPLETION constant within the Windows API). On other platforms, the return value will be the number of seconds left to sleep.
So according to the documentation and the log it seems that the sleep is being cut short due to an interrupt.
How can I know which signal caused this (pcntl_signal?), where it came from, and is there any way to avoid it?
Edit 3: I've added code to handle signals with pcntl_signal (trying to register handlers for signals 1 through 255) and log them; the issue still happens but that log remains empty.
You can define signal handlers with pcntl_signal.
With those handlers you can log when an interruption happens, but AFAIK you can't detect where it comes from.
Also you can use pcntl_alarm for your delayed jobs.
Check PHP Manual - PCNTL Alarm
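A minimal sketch of that (assuming the pcntl extension is actually loaded, which normally means running via the CLI rather than mod_php, and using an assumed log file name):
<?php
// Requires the pcntl extension; it is typically available in the CLI SAPI
// and usually not under Apache's mod_php.
declare(ticks = 1); // on PHP 5.3, ticks are needed so pending signals get dispatched

$file = 'log_signals.txt'; // assumed log file name

// Register a logging handler for a few catchable signals
// (SIGKILL and SIGSTOP can never be caught).
foreach (array(SIGHUP, SIGINT, SIGTERM, SIGUSR1, SIGUSR2, SIGALRM) as $signo) {
    pcntl_signal($signo, function ($caught) use ($file) {
        file_put_contents($file, "Caught signal $caught at " . time() . "\n", FILE_APPEND);
    });
}

// Optional: ask the kernel to deliver SIGALRM in 60 seconds as a self-imposed deadline.
pcntl_alarm(60);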
Given your stack, the interrupt signal could come from Apache. Depending on how Apache communicates with PHP on your stack, there are several timeout options in the Apache configuration that could interfere with the script execution.
If your cron calls curl on localhost, maybe it could call the PHP file directly instead, and so avoid using Apache processes to keep the request alive? You'll then have to set the max_execution_time value in your dedicated CLI php.ini file.
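For example (the PHP binary path, script path and option names below are assumptions, not taken from your setup), the crontab could look like:
* * * * * /usr/bin/php -f /var/www/myscript.php X >> /var/log/myscript_X.log 2>&1
* * * * * /usr/bin/php -f /var/www/myscript.php Y >> /var/log/myscript_Y.log 2>&1
* * * * * /usr/bin/php -f /var/www/myscript.php Z >> /var/log/myscript_Z.log 2>&1
with the script reading its option from $argv[1] instead of $_GET when invoked from the CLI.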
Related
I have a PHP script that will be executed by requests from the application admins. It does lots of stuff and takes at least 20 minutes (depending on the database size).
The Apache TimeOut directive is set to 300 (5 minutes), which closes the connection and returns a 500 status code if my PHP script takes longer than that to finish.
Setting a long max_execution_time in the PHP ini just for this script is useless:
<?php
// long script
ini_set("max_execution_time", 3600);// 1 hour
// Apache still responds with the same "Connection: close" header and 500 status code
And I don't want to change the global Apache TimeOut directive just for those couple of scripts, because if I did, any request would be able to take a very long time, which creates scope for DDoS vulnerabilities. Is this right?
Is there any way to allow only this script to run longer at the Apache level?
Have you tried PHP's set_time_limit() function?
https://www.php.net/manual/en/function.set-time-limit.php
In addition to setting the initial execution time, the manual says that calling it resets the time already expended to zero and restarts the counter from the limit provided.
So if you want to be sure, you could call set_time_limit(0) (0 means no limit) regularly throughout your script to make sure you never hit a limit.
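A minimal sketch of that idea (the loop body is just a placeholder for your real work):
<?php
set_time_limit(0); // remove the limit before the long work starts

$batches = range(1, 10); // placeholder work items
foreach ($batches as $batch) {
    // ... do one chunk of the long-running work here ...
    sleep(1);           // stand-in for real processing
    set_time_limit(0);  // reset the counter after each chunk, as the manual describes
}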
I need one of my scripts (not all of them) to run for only five seconds. If execution has not finished within five seconds, it should be dropped.
So I use
ini_set('max_execution_time', 5);
Also if I do
ini_get('max_execution_time');
it shows five seconds, but the script is not interrupted after 5 seconds.
P.S
safe_mode = off
nginx -> php-fpm
set_time_limit(5) also has no effect
You can call the set_time_limit() function at the top of your code as follows:
set_time_limit(5);
NB: If you don't place it at the top of your code and the PHP script has already run for 3 seconds when you call set_time_limit(5), the total allowed execution time becomes 3 + 5 = 8 seconds, not 5 seconds as expected.
Update
From the PHP documentation:
Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
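A small sketch that illustrates the quoted behaviour (values chosen only for illustration):
<?php
set_time_limit(2);

sleep(5);        // idle time: not counted on Linux, so no error here
                 // (on Windows this alone would already exceed the limit)

while (true) {}  // CPU-bound loop: this time *is* counted, so the script dies with
                 // "Maximum execution time of 2 seconds exceeded"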
I'm having a bit of a server issue.
I'm using the following random script just to produce a timeout:
set_time_limit(1);
$x = 0;
while ($x < 1000)
{
}
The problem I'm facing is that the server actually takes around 10-25 seconds to finish the script and produce the fatal error "Fatal error: Maximum execution time of 1 second exceeded",
whereas on my local machine the error appears almost instantly. I've disabled custom error handlers on the production server, as I thought they might be the cause, but I'm still facing the same issue.
Any ideas as to what could be causing this?
Edit
Just to clarify, max execution time IS being set successfully on production and the error message is just the same as local - "Fatal error: Maximum execution time of 1 second exceeded"
It just takes around 10-25 seconds for the error to eventually appear on production.
Your production server may have disabled setting INI values from PHP scripts, which is good practice. Instead of using set_time_limit(1) in your script, you should set this in php.ini.
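That is, in php.ini (or whatever per-vhost/per-pool override your host exposes) rather than in the script:
max_execution_time = 1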
Okay, I assume your production server is Linux/Unix and your local server is Windows? Apparently they handle set_time_limit differently:
From this URL: http://www.soliantconsulting.com/blog/2010/02/phps-max_execution_time-different-on-windows-and-linux
On Windows, the PHP INI variable, "max_execution_time," is the maximum clock time between when the PHP process starts and when it finishes.
On Linux and OS X, the same variable represents the maximum CPU time that the PHP process can take. CPU time is less than clock time because processes on Linux (and OS X) contend for CPU time, and sometimes sit idle.
For example, if you have a PHP program which makes a request to a web service, like FileMaker's Custom Web Publishing, on Linux the time the PHP program waits for a response is not counted, while on Windows it is. You can see this by running the script:
<?php
set_time_limit(15); /* Sets max_execution_time */
sleep(30); /* Wait thirty seconds */
echo "Hello World!\n";
?>
If you run this on Windows, it will fail with an error message, but on Linux or OS X, the time spent sleeping is time the process is "idle," so the execution time is considered to be almost zero.
set_time_limit(1);
echo "Max Execution Time: ".ini_get('max_execution_time');
That should tell you if your set call is even doing anything. If you're on shared hosting you may not even be allowed to change it.
I'm running a really long task in PHP. It's a website crawler, and it has to be polite and sleep for 5 seconds per page to prevent overloading the server. The script starts with:
ignore_user_abort(1);
session_write_close();
ob_end_clean();
while (@ob_end_flush());
set_time_limit(0);
ini_set('max_execution_time',0);
After a few hours (between 3 and 7) the script dies without any visible reason.
I've checked
apache error log (nothing)
php_errors.log (nothing)
output for errors (10 578 467b of debug output, no errors)
memory consumption (stable, around 3M from memory_get_usage(true) checked every 5 sec, limit set to 512M)
It's not the browser, because I checked with both wget and Chrome with the same result.
Output is sent to the browser every 2-3 seconds, so I don't think that's the cause, plus I ignore user aborts.
Is there any other place I can check to find the issue?
I think there's a problem in the rest of your script, not Apache.
Try profiling your application using register_tick_function with a light profiler like this, logging the memory usage; it may be that.
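A rough sketch of such a lightweight tick profiler (the log file name and the intervals are assumptions):
<?php
declare(ticks = 100); // run registered tick functions every 100 low-level statements

function tick_profiler()
{
    static $last = 0;
    $now = microtime(true);
    if ($now - $last >= 5) { // log at most once every 5 seconds
        $last = $now;
        $line = date('c') . ' mem=' . memory_get_usage(true)
              . ' peak=' . memory_get_peak_usage(true) . "\n";
        file_put_contents('profiler.log', $line, FILE_APPEND);
    }
}
register_tick_function('tick_profiler');

// ... the crawler's main loop runs below this point ...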
I am running a huge import to my database (about 200k records) and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds every pass (internal server error)... it only does about 200 records at a time, sometimes less. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why it would be doing this.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page but it did not work.
It may also be helpful to note that my error file reads: "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page and, of course, running from the command line:
php -f filename.php
as Brent says is the sensible way round it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. Should be long enough for most but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line. ulimit -t will also show you just the limits on cpu time.
If your cpu is limited, you might have to process your import in batches.
First, you should be running the script from the command line if it's going to take a while. At the very least your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-type: text/html") before the import kicks off.
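A hedged sketch of that (the placeholder data and the 200-row interval are assumptions):
<?php
header("Content-type: text/html"); // send headers before the long work starts

$rows = range(1, 1000); // placeholder for the parsed import data
foreach ($rows as $i => $row) {
    // ... insert $row into the database here ...
    if ($i % 200 === 0) {
        echo '.';  // push a little output to the browser periodically
        flush();   // so the connection is not treated as idle and cut off
    }
}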
If you are on a shared host, then it's possible there are restrictions on the system when any long running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally loosened for non-web running scripts. Thus, running it from the command line would help.
The 45 seconds could be a coincidence -- it could be how long it takes for you to reach the memory limit. Increasing the memory limit would look like:
ini_set('memory_limit', '256M');
It could also be the actual db connection that is timing out.. what db server are you using?
For me, mssql times out with an extremely unhelpful error, "Database context changed", after 60 seconds by default. To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all, max_input_time and set_time_limit(0) will only work on a VPS or dedicated server. Instead, you can follow some rules in your implementation like below:
First, read the whole CSV file.
Then grab only 10 entries (rows) or fewer and make an AJAX call to import them into the DB.
Call the AJAX endpoint with 10 entries each time, and after each call echo something out to the browser. With this method your script will never time out.
Follow the same method until all of the CSV rows are finished (a rough sketch follows).
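A hedged sketch of the server side of that approach (import.php, the POST field names and data.csv are assumptions, and the browser-side AJAX loop is omitted):
<?php
// import.php: each request imports only a small slice of the CSV,
// so no single request runs long enough to hit a timeout.
$offset = isset($_POST['offset']) ? (int) $_POST['offset'] : 0;
$limit  = 10;

$all  = array_map('str_getcsv', file('data.csv'));
$rows = array_slice($all, $offset, $limit);

foreach ($rows as $row) {
    // ... insert $row into the database here ...
}

// Tell the browser-side loop where to continue; it keeps calling until 'done' is true.
echo json_encode(array(
    'next' => $offset + count($rows),
    'done' => count($rows) < $limit,
));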