Potential reasons for a PHP script not ending after the specified timeout?

I'm having a bit of a server issue. I'm using the following trivial script just to produce a timeout:
set_time_limit(1);
$x = 0;
while ($x < 1000) // $x is never incremented, so this loop spins forever
{
}
The problem I'm facing is that the server actually takes around 10-25 seconds to finish the script and produce the fatal error "Fatal error: Maximum execution time of 1 second exceeded", whereas on my local machine the error appears almost instantly. I've disabled custom error handlers on the production server, as I thought they might be the cause, but I'm still facing the same issue.
Any ideas as to what could be causing this?
Edit
Just to clarify: max execution time IS being set successfully on production, and the error message is the same as on local - "Fatal error: Maximum execution time of 1 second exceeded". It just takes around 10-25 seconds for the error to eventually appear on production.

Your production server may have disabled changing INI settings from PHP scripts, which is good practice. Instead of calling set_time_limit(1) in your script, you should set this in php.ini.
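For example, a minimal php.ini sketch (the value here is illustrative, not a recommendation):
; maximum script execution time, in seconds
max_execution_time = 30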

Okay, I assume your production server is Linux/Unix and your local server is Windows? Apparently they handle set_time_limit differently:
From this URL: http://www.soliantconsulting.com/blog/2010/02/phps-max_execution_time-different-on-windows-and-linux
On Windows, the PHP INI variable, "max_execution_time," is the maximum clock time between when the PHP process starts and when it finishes.
On Linux and OS X, the same variable represents the maximum CPU time that the PHP process can take. CPU time is less than clock time because processes on Linux (and OS X) contend for CPU time, and sometimes sit idle.
For example, if you have a PHP program which makes a request to a web service, like FileMaker's Custom Web Publishing, on Linux the time the PHP program waits for a response is not counted, while on Windows it is. You can see this by running the script:
<?php
set_time_limit(15); /* Sets max_execution_time */
sleep(30); /* Wait thirty seconds */
echo "Hello World!\n";
?>
If you run this on Windows, it will fail with an error message, but on Linux or OS X, the time spent sleeping is time the process is "idle," so the execution time is considered to be almost zero.

set_time_limit(1);
echo "Max Execution Time: ".ini_get('max_execution_time');
That should tell you if your set call is even doing anything. If you're on shared hosting you may not even be allowed to change it.

Related

php sleep function odd behaviour

We've inherited a platform with a cron job that, every minute, curls a local PHP script three times with different parameters (curl -s -o --url https://localhost/myscript.php?option=XYZ -k). That script runs for about 1 minute, and it's possible for multiple instances with the same option to overlap for a short time. The script logs to a different file per option given, and each log entry starts with the timestamp of when the script started, so it acts as an instance identifier.
The script has this skeleton:
<?php
$option = 'XYZ'; // set per cron invocation
$scriptId = time();
$file = "log_$option.txt";
file_put_contents($file, "\n$scriptId: Start\n", FILE_APPEND);
session_start();
$expires = time() + 60;
file_put_contents($file, "\n$scriptId: Expires at $expires\n", FILE_APPEND);
while (time() < $expires) {
    file_put_contents($file, "\n$scriptId: Not expired at " . time() . "\n", FILE_APPEND);
    switch ($option) {
        case 'X':
            do_db_stuff();
            break;
        // ... other options ...
    }
    file_put_contents($file, "\n$scriptId: Will sleep at " . time() . "\n", FILE_APPEND);
    sleep(13);
    file_put_contents($file, "\n$scriptId: Woke up at " . time() . "\n", FILE_APPEND);
}
file_put_contents($file, "\n$scriptId: Finished at " . time() . "\n", FILE_APPEND);
Normally this script runs fine (even when instance A is sleeping for the last time as instance B starts), but on occasion we have two issues that we can confirm in the logs:
sometimes it sleeps for less than 13 seconds (a variable amount of time, but always less than 13);
sometimes the script stops (no more logging after the "Will sleep" entry, and we can verify that no db stuff is being done). [Update on this in Edit 2]
We've looked into the possible causes but couldn't find any:
php max_execution_time is set to 240 seconds, and the script never takes more than a minute and a half;
the sleep documentation says it is per session, but curl isn't using cookies, so each instance should get a different session (and if they did share a session it would always block, since we always execute three script instances, which it doesn't);
the hosting tech team says there are no errors in either the server error log or the php error log at the timestamps where these issues happen.
I can't reproduce the issues at will, but they happen at least once a day.
What I'd like to know is: what can be interfering with the sleep behaviour, and how can I detect or fix it?
Additional information:
linux system
mysql 5.5
apache
php 5.3
php max_execution_time set to 240
Edit 1: Just to clarify: we actually have 3 options, so it writes to 3 log files, one per option. At any given time there can be up to two instances per option running (each instance of the same option overlaps for a small amount of time).
Edit 2: As per Jan's suggestion, I added logging of the sleep() return value. The script has already stopped once with that logging in place:
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:29
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:37 with sleep return 5
[2016-01-05, 13:11:01] Not expired at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:38 with sleep return 13
... no more log from instance [2016-01-05, 13:11:01] ...
[2016-01-05, 13:12:01] Start
According to the sleep documentation:
If the call was interrupted by a signal, sleep() returns a non-zero value. On Windows, this value will always be 192 (the value of the WAIT_IO_COMPLETION constant within the Windows API). On other platforms, the return value will be the number of seconds left to sleep.
So, according to the documentation and the log, it seems that the sleep is being cut short by a signal.
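The logging behind those "sleep return" entries boils down to something like this (a sketch; $file and $scriptId as in the skeleton above):
$left = sleep(13); // 0 after a full sleep; on Linux, the seconds left if a signal interrupted it
file_put_contents($file, "\n$scriptId: Woke up at " . time() . " with sleep return $left\n", FILE_APPEND);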
How can I know which signal caused this (pcntl_signal?), where it came from, and is there any way to avoid it?
Edit 3: I've added code to handle signals with pcntl_signal (trying to register signals 1 through 255) and log them; the issue still happens, but that log remains empty.
You can define signal handlers with pcntl_signal.
With those handlers you can log when an interruption happens. But AFAIK you can't detect where it comes from.
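A minimal sketch of that kind of logging (the signal list and log file name are illustrative; pcntl requires the pcntl extension, and handlers only fire with declare(ticks=1) or explicit pcntl_signal_dispatch() calls):
declare(ticks = 1);

function log_signal($signo)
{
    // Append each caught signal to a log so interrupted sleeps can be correlated.
    file_put_contents('signals.log', date('c') . ": caught signal $signo\n", FILE_APPEND);
}

// SIGKILL and SIGSTOP cannot be caught; register handlers for the common ones.
foreach (array(SIGHUP, SIGINT, SIGTERM, SIGUSR1, SIGUSR2, SIGALRM) as $signo) {
    pcntl_signal($signo, 'log_signal');
}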
Also you can use pcntl_alarm for your delayed jobs.
Check PHP Manual - PCNTL Alarm
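A sketch of the alarm-based approach (the handler body and the 13-second interval are assumptions mirroring the script above):
declare(ticks = 1);

// Request a SIGALRM in 13 seconds instead of sleeping; the handler runs when it fires.
pcntl_signal(SIGALRM, function ($signo) {
    // Do the next tick of the job here, then re-arm the alarm.
    pcntl_alarm(13);
});
pcntl_alarm(13); // arm the first alarm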
Given your stack, the interrupt signal could come from Apache. Depending on how Apache communicates with PHP on your stack, there are several timeout options in the Apache configuration that could interfere with the script's execution.
If your cron calls curl on localhost, maybe it could call the PHP file directly instead, and so avoid tying up Apache processes to keep the request alive? You'll have to edit your dedicated CLI php.ini file with the max_execution_time value.
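A sketch of what that crontab entry might look like (the path is a placeholder):
# hypothetical crontab entry: invoke the script with the CLI binary instead of curl
* * * * * php -f /path/to/myscript.php XYZ
Note that a CLI invocation does not populate $_GET, so the script would read the option from $argv instead, e.g.:
$option = isset($argv[1]) ? $argv[1] : 'XYZ'; // CLI argument replaces ?option=XYZ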

PHP Fatal error: Max execution time exceeded - EVEN after set_time_limit(0)

I keep running into the following PHP error when running my script:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\wamp\apps\sqlbuddy1.3.3\functions.php on line 22
I already put this in my PHP file, and I STILL get this error message.
set_time_limit(0);
Am I missing something?
Edit: This error only shows up after SEVERAL minutes, not after 30 seconds. Could something also be delaying its appearance?
set_time_limit() has no effect when running in safe_mode:
This function has no effect when PHP is running in safe mode. There is no workaround other than turning off safe mode or changing the time limit in the php.ini.
You can check the value of safe_mode and max_execution_time with phpinfo().
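Or check inline with a quick sketch:
var_dump(ini_get('safe_mode'), ini_get('max_execution_time'));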
Given that you are using Windows and experiencing the timeout later than 30s, you might have a reset of the timeout (set_time_limit(30)) somewhere else in your code:
The set_time_limit() function [..] only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
Search your code for:
ini_set('max_execution_time', 30)
set_time_limit(30)
Rather than relying on the PHP file to change the php.ini settings, you should do it yourself. Find where the php.ini is kept for WAMP, and change/add this line (no trailing semicolon - in ini files a semicolon starts a comment):
max_execution_time = 500
There is a PHP configuration setting that disallows scripts from changing the time limit. You can change this behaviour in the php.ini file.

My php script is killed automatically by the OS (DEBIAN)

I have a PHP script that is killed after 10 minutes by the OS (Debian), and I don't want it to be killed.
Someone told me that my server maybe has a task monitor that kills long-running processes as a safeguard against lockups.
I don't know how to access the task monitor or how to disable it.
What does the Apache error log say?
If you have this error:
Fatal error: Maximum execution time of ...
then you have to turn off the limit on PHP script execution time, which can be done like this:
set_time_limit(0); // limit in seconds, 0 for unlimited
while (1) // test loop
{
    $a = 0;
}
You may also set this option in the php.ini file; it's max_execution_time.

Why might a PHP script stop?

I'm running a really long task in PHP. It's a website crawler, and it has to be polite and sleep for 5 seconds between pages to prevent overloading the server. The script starts with:
ignore_user_abort(1);             // keep running even if the client disconnects
session_write_close();            // release the session lock
ob_end_clean();
while (@ob_end_flush());          // flush and disable all output buffers
set_time_limit(0);
ini_set('max_execution_time', 0);
After a few hours (between 3-7h) the script dies without any visible reason.
I've checked:
the apache error log (nothing)
php_errors.log (nothing)
the output for errors (10 578 467 bytes of debug output, no errors)
memory consumption (stable, around 3M from memory_get_usage(true) checked every 5 seconds, with the limit set to 512M)
It's not the browser, because I checked with both wget and Chrome with the same result.
Output is sent to the browser every 2-3 seconds, so I don't think that's the fault, plus I ignore user aborts.
Is there any other place I can check to find the issue?
I think there's a problem in the rest of your script, not Apache.
Try profiling your application using register_tick_function with a light profiler like this one, and log the memory usage; it may be that.
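A minimal sketch of that idea (the tick interval and log file name are assumptions, not the linked profiler):
declare(ticks = 1000); // run the tick function every 1000 low-level statements

function log_memory()
{
    // Record a timestamped memory reading so a leak or spike shows up in the log.
    file_put_contents('memory.log', date('c') . ' ' . memory_get_usage(true) . "\n", FILE_APPEND);
}

register_tick_function('log_memory');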

PHP Script Times out after 45 seconds

I am running a huge import to my database (about 200k records), and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds on every pass (internal server error); it only does about 200 records at a time, sometimes fewer. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why this would happen.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page, but it did not work.
It may also be helpful to note that my error file reads: "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page, and of course running from the command line:
php -f filename.php
as Brent says, is the sensible way around it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. That should be long enough for most, but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line. ulimit -t will also show you just the limits on cpu time.
If your cpu is limited, you might have to process your import in batches.
First, you should be running the script from the command line if it's going to take a while. At the very least, your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-type: text/html") before the import kicks off.
If you are on a shared host, it's possible there are restrictions on the system whereby long-running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally looser for scripts not run through the web server, so running it from the command line would help.
The 45 seconds could be a coincidence; it could be how long it takes to reach the memory limit. Increasing the memory limit would look like:
ini_set('memory_limit', '256M');
It could also be the actual DB connection that is timing out. What DB server are you using?
For me, mssql times out with an extremely unhelpful error, "Database context changed", after 60 seconds by default. To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all, max_input_time and set_time_limit(0) will only work on a VPS or dedicated server. Instead, you can structure your implementation along these lines (see the sketch after this list):
First, read the whole CSV file.
Then grab only 10 entries (rows) or fewer and make an AJAX call to import them into the DB.
Call the AJAX endpoint each time with the next 10 entries, echoing something to the browser after each batch. This way your script will never time out.
Repeat until the CSV rows are finished.
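A hedged sketch of the server side of that idea (the file name data.csv is hypothetical, and the DB insert is left as a stub):
<?php
// Import a small slice of the CSV per request so no single request runs long.
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 10;

$rows = array_slice(file('data.csv'), $offset, $batch);
foreach ($rows as $row) {
    $fields = str_getcsv($row);
    // ... insert $fields into the database here ...
}

// Tell the caller whether to request the next batch.
echo count($rows) === $batch ? 'next=' . ($offset + $batch) : 'done';
The client-side AJAX loop then keeps requesting ?offset=N until it sees 'done'.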
