Setting ini max_execution_time doesn't work - PHP

Before I used nginx and PHP-FPM I was on Apache, so when I wanted just one of my cron jobs to run without an execution time limit, I used these lines in my PHP code:
set_time_limit(0);
ini_set('max_execution_time', 0);
But after migrating from Apache to nginx, this code no longer works. I know there are ways to change nginx.conf to increase the maximum execution time,
but I want to handle this in PHP code. Is there a way?
I want to designate just one file that can run PHP code without a time limit.

Try this:
Increase PHP script execution time with Nginx
You can follow the steps below to increase the timeout value. The PHP default is 30 seconds:
Changes in php.ini
Suppose you want to raise the execution time limit for PHP scripts from 30 seconds (the default) to 300 seconds.
vim /etc/php5/fpm/php.ini
Set…
max_execution_time = 300
Under Apache, for applications running PHP as a module, the change above would have sufficed. But in our case we need to make this change in two more places.
Changes in PHP-FPM
This is only needed if you have previously uncommented the request_terminate_timeout parameter. It is commented out by default and takes the value of max_execution_time found in php.ini.
Edit…
vim /etc/php5/fpm/pool.d/www.conf
Set…
request_terminate_timeout = 300
Changes in Nginx Config
To increase the time limit for example.com only:
vim /etc/nginx/sites-available/example.com
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}
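If, as in the question, you only want to lift the limit for a single script, you can scope the longer timeout to that one file with an exact-match location block instead. A sketch; the filename and socket path are assumptions:
location = /long-cron-job.php {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 3600;
}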
If you want to increase the time limit for all sites on your server, you can edit the main nginx.conf file:
vim /etc/nginx/nginx.conf
Add the following in the http { } section:
http {
    # ...
    fastcgi_read_timeout 300;
    # ...
}
Reload PHP-FPM & Nginx
Don't forget to do this so that the changes you have made take effect:
service php5-fpm reload
service nginx reload
Or try this:
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
fastcgi has its own set of timeouts and checks to prevent it from stalling on a locked-up process. They would kick in if, for instance, you set PHP's execution time limit to 0 (unlimited) and then accidentally created an infinite loop. Or if you were running some application other than PHP that didn't have any timeout protections of its own and it failed.

I think that with php-fpm and nginx you "can't" set this time from PHP alone.
What you could do is redirect, with parameters indicating where to continue, but you must track how long your script has been running so you can redirect before the timeout hits.
If your process runs in a browser window, do the redirect with JavaScript, because the browser may limit the number of redirects... or do it with Ajax.
Hope that helps.
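For example, a minimal sketch of that redirect loop (hasMoreWork(), doWorkChunk(), the job.php filename and the 50-second budget are all hypothetical):
<?php
$offset  = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
$started = time();
while (hasMoreWork($offset)) {      // hypothetical helper
    doWorkChunk($offset);           // hypothetical helper
    $offset++;
    if (time() - $started > 50) {   // redirect before the ~60s server timeout
        header('Location: /job.php?offset=' . $offset);
        exit;
    }
}
echo 'Done';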

You can add request_terminate_timeout = 300 to your server's php-fpm pool configuration if you have already tried all the other solutions here.

ini_set('max_execution_time', 0);
Do this if "Safe Mode" is off:
set_time_limit(0);
Place this at the top of your PHP script and let your script loose!
Note: if your PHP setup is running in safe mode, you can only change it from the php.ini file.
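Putting it together, a minimal sketch of such a script (the job itself is hypothetical):
<?php
set_time_limit(0);                  // lift PHP's own limit ("Safe Mode" must be off)
ini_set('max_execution_time', 0);   // same effect, for good measure
// ... long-running work goes here ...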

Related

Magento: Increase max_execution_time for SOAP calls only?

In a Magento system, I have a global max_execution_time set to 60 seconds for PHP requests. However, this limit is not enough for some calls to the SOAP API. What I would like to do is increase the execution time to, say, 10 minutes for API requests, without affecting the normal frontend pages.
How can I increase max_execution_time for Magento SOAP requests only?
For Apache
This can be configured in your vhost with a <LocationMatch "/api/"> node. Place the max execution time inside that node (along with any other rules, such as max memory) and they will be applied only to requests that hit /api/.*
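A sketch of what that vhost block might look like under mod_php (the exact limits are assumptions):
<LocationMatch "/api/">
    php_value max_execution_time 600
    php_value memory_limit 512M
</LocationMatch>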
For Nginx
In Nginx, you should be able to accomplish the same thing with:
location ~ ^/api {
    fastcgi_read_timeout 600;
}

NginX issues HTTP 499 error after 60 seconds despite config (PHP and AWS)

At the end of last week I noticed a problem on one of my medium AWS instances where Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything that I can find on the internet including several entries here on Stack Overflow, nothing works.
I've tried modifying the PHP settings, PHP-FPM settings and Nginx settings. You can see a question I raised on the NginX forums on Friday (http://forum.nginx.org/read.php?9,237692), though that has received no response, so I am hoping I might find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end
http://pastebin.com/WQX4AqEU
Once that's set up, you just need to create the sample PHP file I am testing with, which is:
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP I'll create several config files so that they can be easily disabled.
Update the PHP FPM Config to include external config files
echo '
include=/usr/local/php/php-fpm.d/*.conf
' | sudo tee -a /usr/local/php/etc/php-fpm.conf   # tee, since sudo doesn't apply to '>>'
Create a new PHP-FPM config to override the request timeout
echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log
' | sudo tee /usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks file
echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' | sudo tee /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP errors
echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' | sudo tee /usr/local/php/conf.d/errors.ini
echo '[PHP]
post_max_size=32M
upload_max_filesize=32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' | sudo tee /usr/local/php/conf.d/filesize.ini
As you can see, this increases the socket timeout to 6 minutes and will help log errors.
Finally, I'll edit some of the NginX settings to increase the timeouts on that side.
First I edit the file /etc/nginx/nginx.conf and add this to the http directive:
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (see the PasteBin entry) and add the following settings into the server directive:
client_max_body_size 200;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally, I add the following into the location ~ \.php$ section of the server directive:
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, I restart both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry within the NginX example.error.log.
So, where am I going wrong? This just works on Apache when I set PHP's max execution time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think too much has been increased, as it should only require changing PHP's max_execution_time and default_socket_timeout, plus NginX's fastcgi_read_timeout within just the server->location directive.
Update 1
Having performed some further tests to show that the problem is not the client dying, I have modified the test file to:
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page, I can see the content of the file being set to the first string. 60 seconds later the error appears in the NginX log. 10 seconds after that the contents of the file change to the second string, proving that PHP is completing the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from an HTTP 499 to an HTTP 200, though still nothing is returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) directly onto the box (using apt) and then increased the execution time, the problem appears on Apache as well. The symptoms are the same as with NginX: an HTTP 200 response, but the actual client connection times out beforehand.
I've also started to notice, in the NginX logs, that if I test using Firefox, it makes a double request (like this PHP script executing twice when it runs longer than 60 seconds). Though that appears to be the client re-requesting when the script fails.
The cause of the problem is the Elastic Load Balancers on AWS. They time out, by default, after 60 seconds of inactivity, which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
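If you prefer the command line, the same idle timeout can be raised with the AWS CLI (a sketch for a classic ELB; the load balancer name is a placeholder):
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"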
Actually, I faced the same issue on one server and figured out that after the nginx configuration changes I hadn't restarted the nginx server, so with every hit of the nginx URL I was getting a 499 HTTP response. After an nginx restart it started working properly with HTTP 200 responses.
I thought I would leave my two cents. First, the problem is probably not related to PHP (though it still could be; PHP always surprises me :P). It's mainly caused by a server being proxied to itself, more specifically a hostname/alias issue: in your case it could be that the load balancer is requesting nginx, nginx is calling back to the load balancer, and it keeps going that way.
I have experienced a similar issue with nginx as the load balancer and Apache as the webserver/proxy.
In my case, nginx was sending a request to an AWS ALB and getting a timeout with a 499 status code.
The solution was to add this line:
proxy_next_upstream off;
The default value for this in current versions of nginx is proxy_next_upstream error timeout;, which means that on a timeout it tries the next 'server', which in the case of an ALB is the next IP in the list of resolved IPs.
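In context, that might look like this (a sketch; the upstream address is an assumption):
location / {
    proxy_pass http://my-alb.example.com;
    proxy_read_timeout 300;
    proxy_next_upstream off;   # don't retry against the next resolved IP on timeout
}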
You need to find where the problem lives. I don't know the exact answer, but let's try to find it.
We have three elements here: nginx, php-fpm and PHP. As you said, the same PHP settings under Apache are fine. Is everything else the same? Did you try Apache instead of nginx on the same OS/host/etc.?
If we can see that PHP is not a suspect, then we have two suspects: nginx and php-fpm.
To exclude nginx: try setting up the same "system" with Ruby. See https://github.com/garex/puppet-module-nginx to get an idea of the simplest Ruby setup. Or use Google (it may be even better).
My main suspect here is php-fpm.
Try playing with these settings (a combined sketch follows this list):
php-fpm's request_terminate_timeout
nginx's fastcgi_ignore_client_abort
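For example (a sketch; the values are assumptions):
; /etc/php5/fpm/pool.d/www.conf
request_terminate_timeout = 300

# nginx, inside the location ~ \.php$ block
fastcgi_ignore_client_abort on;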

How to handle timeouts with php5-fpm + nginx timeout php.ini

How do I handle timeouts with PHP in php5-fpm + nginx configurations?
I tried to make a simple script with just
sleep(60);
php.ini
max_execution_time = 30
fast_cgi
fastcgi_connect_timeout 60;
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
The script stops at 50s due to the backend timeout. What do I have to do to:
enable the max_execution_time in php.ini?
enable ini_set to change the execution time to 0 directly in the script?
And why does fastcgi get to control timeouts over everything, instead of PHP itself?
It was basically the fact that on Linux the timeout counts only the actual "PHP work" and not time spent in stream functions, and moreover not sleep(); that's why I never reached the limit and the fastcgi timeout always kicked in. On Windows, by contrast, the actual "human" (wall clock) time elapsed counts.
From the PHP docs:
The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script, such as system calls using system(), stream operations, database queries, etc., is not included when determining the maximum time that the script has been running. This is not true on Windows, where the measured time is real.
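A short script demonstrating the quoted behaviour on Linux (the 2-second limit is an arbitrary choice for the demo):
<?php
ini_set('max_execution_time', 2);
sleep(5);                    // not counted on Linux, so the script survives
echo "Survived the sleep\n";
while (true) {}              // pure CPU work: fatal "Maximum execution time
                             // of 2 seconds exceeded" after ~2 seconds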
Try using set_time_limit in your PHP code.
When using php-cgi (php-fpm), php.ini's max_execution_time will not take effect; instead, the FPM configuration item request_terminate_timeout controls script execution time.
In php-fpm.conf, set this item like so:
request_terminate_timeout = 60s

Apache and/or PHP Timeouts - Stumped

I have a PHP script that, when called via a browser, times out after exactly 60 seconds. I have modified httpd.conf and set the Timeout directive to 300. I have modified all PHP timeout settings to extend longer than 60 seconds. When I run the script from the command line it completes. When I execute it through the browser, each time, after 60 seconds, POOF, timeout.
I have also checked for the existence of timeout directives in any of the .htaccess files. Nothing there... I am completely stumped.
I am also forcing set_time_limit(0) within the PHP code.
I've been digging and testing for a week and have exhausted my knowledge. Any help is greatly appreciated.
You need to make sure you are setting a higher timeout limit both in PHP and in Apache.
If you set a high max_execution_time in php.ini your script won't time out; however, if you are not flushing the script's output buffer to the browser on a regular basis, the script might time out on the Apache end due to a network timeout. See the sketch below for one way to flush periodically.
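A sketch of that periodic flushing (doChunk() is a hypothetical unit of work):
<?php
set_time_limit(0);
for ($i = 0; $i < 1000; $i++) {
    doChunk($i);              // hypothetical unit of work
    echo '.';                 // emit a byte so the connection shows activity
    if (ob_get_level() > 0) {
        ob_flush();           // flush any user-level output buffer first
    }
    flush();                  // push PHP's buffer out to the web server
}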
In httpd.conf do:
Timeout 216000
In php.ini do:
max_execution_time = 0
(setting it to 0 makes it never time out, like with a CLI (command line) script).
Make sure you restart Apache after you are done! On most Linux distros you can do this by issuing the command (as root):
service httpd restart
Hope this helps!
There are numerous places where the max time can be set. If you are using FastCGI, especially through something such as Virtualmin, there is an additional set of max_execution_time settings that are hidden from you unless you have access.
In short, you will need to figure out all the places, given your PHP stack setup, where there can be an execution time limiter, raise those values, restart the server, and then do
set_time_limit(0);
for good measure.
Without more information about your specific setup and given my experience in dealing with execution time hangups in PHP, that's the most I can tell you.

exec from PHP is causing a "Premature end of script headers: php-cgi.exe" error

I have a PHP script which calls an external command using exec to compile a spatial database query result into a shape file. For tables with lots of records (say 15,000), this command can take as long as 7 minutes to execute. The script works fine for runs which do not take too long (maybe <2 min), but on longer runs (like the 7-minute one) the page displays "500 Internal Server Error". Upon reviewing the Apache server log, the error states: "Premature end of script headers: php-cgi.exe". I have already adjusted the PHP maximum execution time to allow more than enough time, so I know it is not that. Is there an Apache maximum that's being hit, or something else?
"Premature end of script headers" means that the webserver's timeout for CGI scripts was exceeded by your script. This is a webserver timeout and has nothing to do with the php.ini configuration. You need to look at your CGI handler configuration to increase the time allowed for CGI scripts to run.
E.g. if you are using mod_fastcgi, you may want to specify the following option in your Apache config: FastCgiServer -idle-timeout 600, which will give you a timeout of 10 minutes. By default fastcgi allows 30 seconds. You can find other fastcgi options here: http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html
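In context, that directive might look like this (the php-cgi path is an assumption):
FastCgiServer "C:/php/php-cgi.exe" -idle-timeout 600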
Apparently, the default httpd.conf file included with Apache 2.4 doesn't automatically include the extra file httpd-default.conf under /conf/extra (probably by design), which contains the Timeout parameter. Since the Timeout parameter isn't defined, it reverts to a low built-in default, and your script(s) time out.
I did the following to resolve it:
Opened httpd.conf and added the following line to the bottom:
Include conf/extra/httpd-default.conf
Restarted Apache, and it worked.
An alternative would be to simply add the following line(s) to the httpd.conf file:
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
Hope this helps someone out there :)
