Zend and long execution action - php

I would like to run the action http://site.com/rss/rss_import, which downloads many large files.
I use:
ignore_user_abort();
set_time_limit(0);
After about 60 seconds I get the following message:
504 Gateway timeout
When I run rss_import.php directly, the 504 error does not occur.
What can I do about that?

A 504 Gateway Timeout (you are probably using nginx) is web-server-related, not PHP-related. The web server simply stops waiting for data from the PHP FastCGI backend.
Either change the nginx config (see http://wiki.nginx.org/HttpFastcgiModule#fastcgi_read_timeout) or run the import from the command line, as ArneRie has already suggested.
//edit: In the (unlikely) case that you are using Apache with FastCGI, here is the corresponding Apache parameter, too: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgidiotimeout
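For illustration, the nginx change might look like the following; the FastCGI address and the 300-second value are assumptions, not taken from the question:

```nginx
# inside the location that passes PHP to FastCGI
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;   # adjust to your php-fcgi address
    fastcgi_read_timeout 300;      # wait up to 5 minutes for PHP output
    include fastcgi_params;
}
```

Reload nginx afterwards (nginx -s reload) so the new timeout takes effect.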

Related

PHP scripting timing out after 60 seconds

I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server, but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the TimeOut value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get past this limit is to add this line to your script:
ini_set('max_execution_time', 0); // 0 is the documented "no limit" value; -1 also works in practice
That way you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one: there are two, one used only by the PHP CLI and one used by PHP running under Apache.
For future reference, to find the actual file used by PHP under Apache, just run
<?php
phpinfo();
?>
and look for "Loaded Configuration File".
I finally worked out why the request times out: the problem lies with the virtual server hosting.
The request from the web browser is sent to the hosting server which then directs the request to the virtual server (acts like a separate server). Because the hosting server doesn't get a response back from the virtual server after 60 seconds, it times out and sends a response back to the web browser saying exactly this. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem most likely comes from the PHP configuration, not from the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set ('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't hit the timeout when running the script on the command line. The phpinfo() you gave me shows max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS8, the below settings worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't enough anymore; you now have to restart PHP-FPM as well:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The "gateway" here refers to the PHP worker: the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered on the internet (I have not verified this myself), it keeps executables (workers) running in the background, waiting for you to give them PHP scripts to execute. The time saving is that the executables are always running; because they are already running, you suffer no start-up time penalty.
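To make the worker-timeout idea concrete, the relevant knob usually lives in the php-fpm pool config; the path and value below are illustrative for CentOS 8, not taken from the thread:

```ini
; /etc/php-fpm.d/www.conf (illustrative path)
[www]
; kill a worker stuck on a single request after 2 minutes;
; raise this if your scripts legitimately run longer
request_terminate_timeout = 120s
```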

nginx throws "504" gateway error when trying to run a phantomjs script from php

I'm trying to run the phantomjs script like so:
$max_time = ini_get('max_execution_time');
set_time_limit(0);
$result = shell_exec($path_to_phantomjs);
// Do stuff with result here...
set_time_limit($max_time);
It's a scraping script that takes a few minutes to complete, but I want to wait for its result and manipulate it on my server. It is also important for me that this script can be run from the client side and return some results to it for analysis etc.
This fails with a 504 error from nginx; it should be noted that the same code works well enough on my (local) Apache server.
A 504 error means nginx hit its timeout while waiting for the page from the backend (PHP). To fix this, increase the values of the following variables in your PHP proxy/FastCGI location to something higher than your script's execution time (in seconds):
fastcgi_read_timeout
proxy_read_timeout
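As a sketch, assuming a typical php-fpm socket path, the location block might become:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed socket path
    fastcgi_read_timeout 600;                 # longer than the phantomjs run
    include fastcgi_params;
}

# if PHP is instead reached via proxy_pass, raise this in that location:
# proxy_read_timeout 600;
```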

502 Bad gateway error exact after 30 seconds

On my page there is a script which takes a long time to execute fully. While it is running, after 30 seconds, I get a 502 Bad Gateway error. I have searched for this and it seems to be related to Apache's KeepAlive feature. I've tried a few things to keep the connection alive, such as:
set_time_limit(-1);
header("Connection: Keep-Alive");
header("Keep-Alive: timeout=600, max=100");
ini_set('max_execution_time', 30000);
ini_set('memory_limit', '-1');
I have also tried an Ajax function that hits a page on the server every 5 seconds, but nothing has worked for me.
I'm using PHP + MySQL + Apache on a Linux server.
If you are using some type of hosting, it is quite possible that between your client and your server there is a proxy or a load balancer with its connection time limit set to 30 seconds; it's quite a common setup.
Try to investigate the logs to find out which service returns the 502.
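A quick way to see which tier is answering is to grep the access logs for the 502 status code. A self-contained sketch follows; the sample-log path is a stand-in, so in practice point grep at your real nginx or Apache access log:

```shell
# Write one sample access-log line, then filter for 502 responses.
# In practice: grep ' 502 ' /var/log/nginx/access.log
printf '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "GET /slow.php HTTP/1.1" 502 0\n' > /tmp/sample_access.log
grep ' 502 ' /tmp/sample_access.log
```

Whichever log contains the 502 entries (the proxy's or the web server's) tells you which service is cutting the connection.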

NginX issues HTTP 499 error after 60 seconds despite config. (PHP and AWS)

At the end of last week I noticed a problem on one of my medium AWS instances where nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything that I can find on the internet including several entries here on Stack Overflow, nothing works.
I've tried modifying the PHP settings, PHP-FPM settings and nginx settings. You can see a question I raised on the nginx forums on Friday (http://forum.nginx.org/read.php?9,237692), though that has received no response, so I am hoping that I might be able to find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end
http://pastebin.com/WQX4AqEU
Once that's set up, you just need to create the sample PHP file I am testing with:
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP I'll create several config files so that they can be easily disabled
Update the PHP FPM Config to include external config files
echo '
include=/usr/local/php/php-fpm.d/*.conf
' | sudo tee -a /usr/local/php/etc/php-fpm.conf
Create a new PHP-FPM config to override the request timeout
echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log
' | sudo tee /usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks file
echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' | sudo tee /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP errors
echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' | sudo tee /usr/local/php/conf.d/errors.ini
echo '[PHP]
post_max_size = 32M
upload_max_filesize = 32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' | sudo tee /usr/local/php/conf.d/filesize.ini
As you can see, this increases the socket timeout to six minutes and will help log errors.
Finally, I'll edit some of the NginX settings to increase the timeout's that side
First I edit the file /etc/nginx/nginx.conf and add this to the http directive
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (See the pastebin entry) and add the following settings into the server directive
client_max_body_size 200;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally, I add the following into the location ~ \.php$ section of the server directive
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, I restart both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry in the nginx logs.
So, where am I going wrong? This just works on apache when I set PHP's max execution time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think too much has been increased, as it should only need PHP's max_execution_time and default_socket_timeout changed, plus nginx's fastcgi_read_timeout within the server->location directive.
Update 1
Having performed some further tests to show that the problem is not the client dying, I have modified the test file to be
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page, I can see the contents of the file set to the first string. 60 seconds later the error appears in the nginx log. 10 seconds after that, the contents of the file change to the second string, proving that PHP completes the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from a HTTP 499 to a HTTP 200 though nothing is still returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) straight onto the box (using apt) and then increased the execution time, the problem does appear to happen on Apache as well. The symptoms are now the same as with nginx: an HTTP 200 response, but the actual client connection times out beforehand.
I've also started to notice, in the nginx logs, that if I test using Firefox it makes a double request (as in "PHP script executes twice when longer than 60 seconds"). Though that does appear to be the client re-requesting when the script fails.
The cause of the problem is the Elastic Load Balancers on AWS. They, by default, timeout after 60 seconds of inactivity which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
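For reference, the same idle-timeout change can be made with the AWS CLI on a classic ELB; the load-balancer name and the 300-second value below are placeholders, not from the question:

```shell
# raise the classic ELB idle timeout to 5 minutes
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"
```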
Actually, I faced the same issue on one server and figured out that after changing the nginx configuration I hadn't restarted nginx, so every hit on the nginx URL got a 499 response. After an nginx restart it started working properly, with HTTP 200 responses.
I thought I would leave my two cents. First, the problem is not related to PHP (well, it still could be; PHP always surprises me :P). That's for sure. It is mainly caused by a server proxying to itself, more specifically a hostname/alias issue: in your case it could be that the load balancer is requesting nginx and nginx is calling back the load balancer, and it keeps going that way.
I have experienced a similar issue with nginx as the load balancer and apache as the webserver/proxy
In my case - nginx was sending a request to an AWS ALB and getting a timeout with a 499 status code.
The solution was to add this line:
proxy_next_upstream off;
The default value for this in current versions of nginx is proxy_next_upstream error timeout; - which means that on a timeout it tries the next 'server' - which in the case of an ALB is the next IP in the list of resolved ips.
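In context, the fix looks roughly like this; the upstream address and timeout value are illustrative:

```nginx
location / {
    proxy_pass http://internal-my-alb.example.com;  # assumed ALB endpoint
    proxy_read_timeout 120;
    # on a timeout, do not retry the next resolved ALB IP
    proxy_next_upstream off;
}
```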
You need to find out where the problem lives. I don't know the exact answer, but let's try to find it.
We have three elements here: nginx, php-fpm and PHP. As you said, the same PHP settings under Apache are fine. But is it really the same setup? Did you try Apache instead of nginx on the same OS/host/etc.?
If we see that PHP is not the suspect, then we have two suspects left: nginx and php-fpm.
To exclude nginx: try to set up the same "system" on Ruby. See https://github.com/garex/puppet-module-nginx to get an idea of the simplest Ruby setup. Or use Google (that may be even better).
My main suspect here is php-fpm.
Try to play with these settings:
php-fpm's request_terminate_timeout
nginx's fastcgi_ignore_client_abort
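A minimal sketch of how those two settings pair up (paths and values are examples, not from the question):

```nginx
# nginx: keep talking to php-fpm even if the client disconnects
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_ignore_client_abort on;
    include fastcgi_params;
}

# php-fpm pool config (e.g. www.conf):
#   request_terminate_timeout = 120s   (hard cap per request)
```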

How to Extend the Request/Connection Timeout on Apache-FastCGI-PHP application using .htaccess

I am trying to extend the connection/request timeout on our allotted server space.
The reason I am trying to do this is that some operations in my application take more than 120 seconds, and the server does not wait for the operation to complete: it returns a 500 Internal Server Error after exactly 120 seconds.
To test it i placed the below script on server:
<?php
sleep(119);
echo "TEST";
?>
It returns TEST to the browser after 119 seconds.
But when I place the script below:
<?php
sleep(121);
echo "TEST";
?>
It returns a 500 Internal Server Error after 120 seconds.
We have set max_execution_time = 360 in php.ini, but the problem still exists.
We have Apache installed with FastCGI.
I am trying to extend it to 360 seconds using .htaccess, because on shared hosting that is the only way I can.
Any solutions or suggestions? Thanks in advance.
FastCGI is a different beast; using set_time_limit will not solve the problem. I'm not sure what you can do with .htaccess, but the normal setting you're looking for is called FcgidIOTimeout; you can try to change that in the .htaccess, though I'm not sure whether it's allowed there.
See the directives on the Apache mod_fcgid page; if you're using an old version (before the directives were renamed in 2.3.6), you might need to set IPCCommTimeout instead.
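If the host allows these directives at .htaccess level at all (mod_fcgid normally expects them in the server config, so treat this as a sketch to try), it would look like:

```apache
# mod_fcgid 2.3.6+ directive name
FcgidIOTimeout 360
# pre-2.3.6 name:
# IPCCommTimeout 360
```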
I would suggest that 120 seconds is far too long for a user to wait for a request over a web server; if things take this long to run, try running your script from the command line with PHP CLI instead.
Try this, hope it will work:
set_time_limit(0); // 0 = no limit; or pass a number of seconds, e.g. set_time_limit(360);
