PHP Scripts Unresponsive After 120 Seconds

This one really has me stumped. I have not run across this problem on any other server I have worked on.
This is on an Ubuntu 10.04.1 LTS server with PHP 5.3.2-1ubuntu4.5.
When I have a PHP script that produces no output for more than 120 seconds, none of its subsequent output is ever shown; however, statements that produce no output are still executed. This happens for both php5-cgi and php5 (cli). For example:
1. $iSleep = 120;
2. echo 'Now: '.date('H:i:s')."\n";
3. echo 'Sleeping for: '.$iSleep."\n";
4. echo 'Will wake up at: '.date('H:i:s', (time()+$iSleep))."\n";
5. sleep($iSleep);
6. echo 'Woke up at: '.date('H:i:s')."\n";
7. mail('test@example.com', 'Subject', 'Message');
I will get all the output back to the screen through line 4. Line 6 will never appear on the screen, but I will get an email from line 7. If I change line 1 to 119 or less, the code executes fully as expected. Please let me know if there are any other settings (php.ini) or version numbers you would like to know. Thanks in advance for your time.

My answer is also mostly a guess, but you have the normal max_execution_time setting to consider. By default this is 30 seconds, as per the documentation, but note one caveat it mentions:
The maximum execution time is not affected by system calls, stream operations etc. Please see the set_time_limit() function for more details.
I am positive mail() is a system call, so you will want to use set_time_limit() as described.
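The suggestion boils down to something like this minimal sketch (set_time_limit(0) removes the limit entirely):
<?php
set_time_limit(0); // 0 means no execution-time limit for this script

$iSleep = 120;
echo 'Now: ' . date('H:i:s') . "\n";
sleep($iSleep);
echo 'Woke up at: ' . date('H:i:s') . "\n";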
Hopefully this solves your issue.

PHP appears to respond properly when I connect from other clients. I need to figure out what makes the client I am connecting from different.

Related

php sleep function odd behaviour

We've inherited a platform with a cronjob that, every minute, curls a local PHP script three times with different parameters (curl -s -o --url https://localhost/myscript.php?option=XYZ -k). Each run takes about 1 minute, and it's possible for multiple instances with the same option to overlap for a short time. The script logs to a different file per option, and each log line starts with the timestamp at which the script started, so that timestamp acts as an instance identifier.
The script has this skeleton:
<?php
$option = 'XYZ'; // placeholder for one of the three option values
$scriptId = time(); // acts as the instance identifier in the logs
$file = "log_$option.txt";
file_put_contents($file, "\n$scriptId: Start\n", FILE_APPEND);
session_start();
$expires = time() + 60;
file_put_contents($file, "\n$scriptId: Expires at $expires\n", FILE_APPEND);
while (time() < $expires) {
    file_put_contents($file, "\n$scriptId: Not expired at " . time() . "\n", FILE_APPEND);
    switch ($option) {
        case 'X':
            do_db_stuff();
            break;
        ...
    }
    file_put_contents($file, "\n$scriptId: Will sleep at " . time() . "\n", FILE_APPEND);
    sleep(13);
    file_put_contents($file, "\n$scriptId: Woke up at " . time() . "\n", FILE_APPEND);
}
file_put_contents($file, "\n$scriptId: Finished at " . time() . "\n", FILE_APPEND);
Normally this script runs fine (even when instance A is sleeping for the last time while instance B starts), but on occasion we have two issues that we can confirm from the logs:
- sometimes it sleeps for less than 13 seconds (a variable amount of time, always less than 13);
- sometimes the script stops (no more logging after the "Will sleep" one, and we can verify that no db stuff is being done). [Update on this in Edit 2]
We've looked into the possible causes but couldn't find any:
- php max_execution_time is set to 240 seconds, and the script never takes more than a minute and a half;
- the sleep documentation says the delay is per session, but curl isn't using cookies, so each instance should get its own session (and if they did share one, it would always block, since we always run three script instances, and it doesn't);
- the hosting tech team says there are no errors in either the server error log or the PHP error log at the timestamps where these issues happen.
I can't reproduce the issues at will, but they happen at least once a day.
What I'd like to know is: what could be interfering with the sleep behaviour, and how can I detect or fix it?
Additional information:
Linux system
MySQL 5.5
Apache
PHP 5.3
PHP max_execution_time set to 240
Edit 1: Just to clarify: we actually have 3 options, so the job writes to 3 log files, one per option. At any given time there can be up to two instances per option running (consecutive instances of the same option overlap for a small amount of time).
Edit 2: As per @Jan's suggestion, I added logging of the sleep() return value. The script has already stopped once with that logging in place:
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:29
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:37 with sleep return 5
[2016-01-05, 13:11:01] Not expired at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:38 with sleep return 13
... no more log from instance [2016-01-05, 13:11:01] ...
[2016-01-05, 13:12:01] Start
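For reference, the logging change amounted to something like this (a sketch reusing the variables from the skeleton above):
$ret = sleep(13);
file_put_contents($file, "\n$scriptId: Woke up at " . time() . " with sleep return $ret\n", FILE_APPEND);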
According to the sleep documentation:
If the call was interrupted by a signal, sleep() returns a non-zero value. On Windows, this value will always be 192 (the value of the WAIT_IO_COMPLETION constant within the Windows API). On other platforms, the return value will be the number of seconds left to sleep.
So, according to the documentation and the log, it seems the sleep is being cut short by an interrupt.
How can I find out which signal caused this (pcntl_signal?), where it came from, and whether there is any way to avoid it?
Edit 3: I've added code to handle signals with pcntl_signal (trying to register every signal from 1 to 255) and log them; the issue still happens, but the signal log remains empty.
You can define signal handlers with pcntl_signal.
With those handlers you can log when an interruption happens, but AFAIK you can't detect where it comes from.
You can also use pcntl_alarm for your delayed jobs; check PHP Manual - PCNTL Alarm.
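A minimal sketch of that approach, assuming the pcntl extension is available (it is normally only compiled into CLI builds, not the Apache module) and logging to a hypothetical signal_log.txt:
<?php
declare(ticks = 1); // on PHP 5.3, lets pending signals be dispatched between statements

function logSignal($signo) {
    // Note: SIGKILL and SIGSTOP can never be caught, so an outright kill
    // would still leave this log empty.
    file_put_contents('signal_log.txt', date('H:i:s') . ": caught signal $signo\n", FILE_APPEND);
}

// Register the handler for the common catchable termination-style signals.
foreach (array(SIGHUP, SIGINT, SIGTERM, SIGUSR1, SIGUSR2, SIGALRM) as $signo) {
    pcntl_signal($signo, 'logSignal');
}

$left = sleep(13);
if ($left > 0) {
    file_put_contents('signal_log.txt', date('H:i:s') . ": sleep interrupted, {$left}s left\n", FILE_APPEND);
}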
Given your stack, the interrupt signal could come from Apache. Depending on how Apache communicates with PHP on your stack, there are several timeout options in the Apache configuration that could interfere with script execution.
If your cron job curls localhost, maybe it could call the PHP file directly instead, and so avoid tying up Apache processes to keep the request alive. You would then need to set the max_execution_time value in your dedicated CLI php.ini.
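A sketch of that idea; the crontab line and paths are hypothetical, and the script would need to accept its option from the command line as well:
<?php
// Hypothetical crontab entry calling the script via the PHP CLI instead of curl:
//   * * * * * /usr/bin/php /var/www/myscript.php XYZ
// Accept the option from the query string (web) or from argv (CLI):
$option = isset($_GET['option']) ? $_GET['option']
        : (isset($argv[1]) ? $argv[1] : null);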

PHP terminates script unexpectedly

I'm working in PHP, creating a system with a lot of PHP-driven elements, and I have noticed that some of my pages stop displaying text produced by the echo command.
I have made a small example of this. Of course, my program is not supposed to just print all numbers from 1 to 10000, but this example demonstrates how the script simply terminates without any warnings.
Example code:
<?php
for ($i = 1; $i <= 10000; $i++) {
    echo $i, '<br>';
}
?>
Output:
1
2
More numbers...
8975
8976
8977
8
What is causing this? Is it a buffer issue, and how do I resolve it?
The fact that your code runs to completion on the CLI suggests to me that your script is exceeding the max_execution_time runtime configuration.
Note in the linked documentation that the value of this setting on the CLI is 0, which means it never times out in that environment.
The default setting in the browser is 30 seconds.
You can show your current setting in the browser with:
echo ini_get('max_execution_time');
And you should be able to increase it with:
ini_set('max_execution_time', 0); // turns off timeout
If the script you have shown us behaves as you describe, then there's something very wrong going on. If this is a Unix or Linux based system and it's repeatedly exhibiting this behaviour, then the kernel is terminating the script; unless it has been configured not to do so, the kernel will be forcing a core dump of the process.
Either go build a new system to run your code on or Google how to capture and diagnose a core dump on your operating system.
Update:
If xdebug is reporting that the process is still running then it probably hasn't dumped its core, but "not producing output" != "not running". What state is the process in? What happens when you redirect the output? What is the end-to-end output channel when it misbehaves?
The problem did not lie directly with my PHP installation or the application itself, but somewhere in my IDE, PhpStorm. When running the code with the same PHP interpreter outside of the IDE's wrappers, it all works fine. The procedures described by the many users here helped with that. Thank you.

Max execution time

I have a site which is hosted on a dev site for demonstration to the client, and everything works without problem. However, when I download the files and database to my local EasyPHP installation, I receive the following error:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.4.0\www\PC Estimating\classes\database.class.php on line 23
The database details for the connection are correct, as the Database object is already used on part of the template before this error is shown.
My question is, why does the system work fine on a live server, but not on EasyPHP?
You should check the max_execution_time setting in the php.ini files on your server and on your local installation.
By the way, what is done on line 23?
Copied from my comment to make it easier to find the solution:
Some things really do run slower on Windows. On Mac/Unix, PHP connects to MySQL through a file socket, while on Windows it has to use TCP/IP. Try using "127.0.0.1" instead of "localhost" when connecting to the DB.
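In connection code the change is just the host string; the credentials below are placeholders:
<?php
// "127.0.0.1" forces an immediate TCP/IP connection, skipping the slow
// "localhost" name resolution some Windows setups exhibit.
$db = mysqli_connect('127.0.0.1', 'user', 'password', 'mydb');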
This issue has two possible solutions:
1) Increase the max_execution_time in your php.ini. First locate this file, then edit it. Find this line:
max_execution_time=30
and replace it with:
max_execution_time=120
And then restart your webserver.
This will increase the limit from 30 seconds to 120 seconds. You can increase it even more, according to your application's needs.
2) If this setting doesn't solve the problem, you may have to look into your PHP application, because there may be an infinite loop or something similar.
More details about this problem:
https://www.copahost.com/blog/increase-php-max-execution-time/
Because your PC is slow compared to the server, and/or your code is really badly optimised.

AjaXplorer [written in PHP] is too slow on IIS

I've installed AjaXplorer (a very nice web file explorer), written in PHP, on my IIS (Windows Server 2008 SP2 x64). It runs too slowly for me.
What could be the cause? Are there settings in php.ini I should change? Or, maybe, is something wrong with IIS?
I use 32-bit PHP, with php-cgi.exe as the interpreter.
Regards,
First off, CGI will always be slow: it needs to boot the entire PHP runtime for each request. Try using FastCGI instead (whether you're on IIS 7 or IIS 6)...
After that, try to see why it's slow. Is it because the PHP script takes a long time to execute (meaning it's a code issue), or is it because of the server configuration? To test, add this at the start of the entry point of the PHP application (index.php):
// Record the request start time as early as possible.
define('START_TIME_CUSTOM', microtime(true));

function onEndTimeCompute() {
    $timeTaken = microtime(true) - START_TIME_CUSTOM;
    echo "Completed In: " . number_format($timeTaken, 4) . " Seconds\n";
}

// Runs at the very end of the request, even if die() is called.
register_shutdown_function('onEndTimeCompute');
That writes Completed In: n Seconds at the end of the generated output (even if die() is called). It may cause some issues if Ajax calls are expected to return JSON, so don't leave it in as a rule; use it just to figure out what's going on.
So, if the total request takes 1 second, yet you see Completed In: 0.004 Seconds, you know that the PHP code itself is not the issue (it's either in the setup of the interpreter by CGI, or somewhere else in IIS)...
That should at least show you where the problem is...

PHP Script Times out after 45 seconds

I am running a huge import to my database (about 200k records) and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds on every pass (internal server error)... it only does about 200 records at a time, sometimes fewer. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why it is doing this.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page but it did not work.
It may also be helpful to note that my error log reads "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page and, of course, running from the command line:
php -f filename.php
as Brent says is the sensible way round it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. That should be long enough for most cases, but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line; ulimit -t will show just the limit on CPU time.
If your CPU time is limited, you might have to process your import in batches.
First, you should be running the script from the command line if it's going to take a while. At the very least, your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-type: text/html") before the import kicks off.
If you are on a shared host, it's possible there are restrictions on the system whereby long-running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally loosened for scripts that are not run through the web server, so running it from the command line would help.
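A rough sketch of the browser-friendly variant; import_batch() and $batches below are hypothetical stand-ins for the real import code:
<?php
// Send the header early and emit something periodically so the browser
// (and any buffering layer in between) keeps seeing a live response.
header('Content-type: text/html');
foreach ($batches as $batch) { // hypothetical iterable of row chunks
    import_batch($batch);      // hypothetical helper that imports one chunk
    echo '.';                  // small progress marker sent to the client
    flush();                   // push the output to the browser immediately
}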
The 45 seconds could be a coincidence; it could be how long it takes for you to reach the memory limit. Increasing the memory limit would look like:
ini_set('memory_limit', '256M');
It could also be the actual DB connection that is timing out. What DB server are you using?
For me, mssql times out with an extremely unhelpful error, "Database context changed", after 60 seconds by default. To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all, max_input_time and set_time_limit(0) will generally only work on a VPS or dedicated server. Instead, you can structure your implementation along these lines (see the sketch after this list):
1. First read the whole CSV file.
2. Then grab only 10 entries (rows) or fewer and make an Ajax call to import them into the DB.
3. Keep calling the Ajax endpoint with 10 entries at a time, and echo something to the browser after each batch; with this method your script will never time out.
4. Follow the same method until the CSV rows are finished.
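A sketch of such a batch endpoint; the file name import_batch.php, data.csv, and the import_row() helper are all hypothetical:
<?php
// import_batch.php: the browser calls this repeatedly with an increasing
// offset, so no single request runs long enough to hit a timeout.
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batchSize = 10;

// Read and parse the CSV, then take just this request's slice of rows.
$allRows = array_map('str_getcsv', file('data.csv'));
$rows    = array_slice($allRows, $offset, $batchSize);
foreach ($rows as $row) {
    import_row($row); // hypothetical helper that inserts one row into the DB
}

// Tell the client whether to request the next batch.
echo json_encode(array('done' => count($rows) < $batchSize, 'next' => $offset + $batchSize));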
