So I have a script which loops doing multiple cURL calls. After about 7-9 minutes it randomly stops execution. I have set the .user.ini file to adjust these settings:
max_execution_time = 30000
max_input_time = 200
I believe I have FastCGI but can't for the life of me figure out why this keeps dying on me. I have a submit form on the front end, and when it dies I just get a 500 with nothing in the error log. Anything else I could be missing? Is there some PHP setting somewhere limiting the number of cURL calls or the execution time?
EDIT: So this issue was definitely FastCGI limiting my time with the param "FcgidBusyTimeout". My hosting company upped it for me as a test and everything worked great. The problem now is that because I'm on shared hosting they won't raise FastCGI timeouts for individual customers. I'm going to try having my script chain onto itself (kind of like a function loop where it calls itself again) and see if the new processes get me past the timeout issue.
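A rough sketch of the self-chaining idea (the URL, the offset parameter, and the batch size below are placeholders, and process_batch/more_work_remains stand in for my real code):
<?php
// each invocation gets a fresh FastCGI timeout window
ignore_user_abort(true);            // keep running even if the caller disconnects
$offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
$batch  = 50;                       // placeholder batch size

process_batch($offset, $batch);     // hypothetical: the cURL work for this slice

if (more_work_remains($offset + $batch)) {   // hypothetical check
    // fire-and-forget request that restarts the script for the next slice
    $ch = curl_init('https://example.com/job.php?offset=' . ($offset + $batch));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 500);  // don't wait for the child to finish
    curl_exec($ch);
    curl_close($ch);
}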
FastCGI has its own timeout.
<IfModule mod_fcgid.c>
IPCConnectTimeout 20
IPCCommTimeout 120
FcgidBusyTimeout 200
</IfModule>
So if your PHP timeout is high enough, it's possible that your FastCGI processes were killed after that time.
If you have heavy scripts, it's better to run them from the CLI; then only the PHP timeout applies (see the sketch below).
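For example, a long job can refuse to run under the web SAPI entirely; a minimal sketch (the script name is just an example):
#!/usr/bin/env php
<?php
// run only from the command line, where no web server / FastCGI timeout applies
if (php_sapi_name() !== 'cli') {
    http_response_code(403);
    exit('Run this from a shell instead: php long-job.php');
}
set_time_limit(0);   // the CLI default is already 0 (no limit), but be explicit
// ... long-running work here ...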
This has been asked and answered before (https://stackoverflow.com/a/12686252/219116), but the solution there is not working for me.
mod_fcgid config
<IfModule mod_fcgid.c>
AddHandler fcgid-script .fcgi
FcgidIPCDir /var/run/mod_fcgid/
FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
FcgidIdleTimeout 60
FcgidProcessLifeTime 120
FcgidMaxRequestsPerProcess 500
FcgidMaxProcesses 150
FcgidMaxProcessesPerClass 144
FcgidMinProcessesPerClass 0
FcgidConnectTimeout 30
FcgidIOTimeout 600
FcgidIdleScanInterval 10
FcgidMaxRequestLen 269484032
</IfModule>
php-cgi script
#!/bin/bash
export PHPRC=/var/www/vhosts/example.com/etc/
export PHP_FCGI_MAX_REQUESTS=5000
exec /usr/bin/php-cgi
System details
CentOS Linux release 7.1.1503 (Core)
httpd-2.4.6-31.el7.centos.x86_64
mod_fcgid-2.3.9-4.el7.x86_64
php56u-cli-5.6.12-1.ius.centos7.x86_64
So my FcgidMaxRequestsPerProcess is set to 500 and my PHP_FCGI_MAX_REQUESTS is set to 10x that, as suggested in the previous answers and the Apache documentation. And yet I still get these errors:
[Thu Nov 19 18:16:48.197238 2015] [fcgid:warn] [pid 6468:tid 139726677858048]
(32)Broken pipe: [client X.X.X.X:41098] mod_fcgid: ap_pass_brigade failed in handle_request_ipc function
I ran into the same problem about a year back. I tried many things, and in the end, after reading the documentation and some trial and error, my problem was gone. First, the important settings:
FcgidBusyTimeout 300 [default]
FcgidBusyScanInterval 120 [default]
The purpose of this directive is to terminate hung applications. The default timeout may need to be increased for applications that can take longer to process the request. Because the check is performed at the interval defined by FcgidBusyScanInterval, request handling may be allowed to proceed for a longer period of time.
FcgidProcessLifeTime 3600 [default]
Idle application processes which have existed for greater than this time will be terminated, if the number of processes for the class exceeds FcgidMinProcessesPerClass.
This process lifetime check is performed at the frequency of the configured FcgidIdleScanInterval.
FcgidZombieScanInterval 3 [seconds default]
The module checks for exited FastCGI applications at this interval. During this period of time, the application may exist in the process table as a zombie (on Unix).
Note: decrease or increase all of the above options according to your application's processing time and needs, or apply them to a specific vhost.
But my problem was really resolved by one more option. The settings above tuned my server, but after some time the errors started coming back. What actually fixed it was:
FcgidOutputBufferSize 65536 [default]
I changed it to:
FcgidOutputBufferSize 0
This is the maximum amount of response data the module will read from the FastCGI application before flushing the data to the client. Setting it to 0 flushes the data instantly instead of waiting for 64KB to accumulate, which really helped me flush out processes much faster.
Other issues I ran into
If the 500 error is coming from Nginx timing out, the fix:
/etc/nginx/nginx.conf
keepalive_timeout 125;
proxy_read_timeout 125;
proxy_connect_timeout 125;
fastcgi_read_timeout 125;
Intermittently I would get the MySQL "MySQL server has gone away" error, which required one more tweak:
/etc/my.cnf
wait_timeout = 120
Then, just for funsies, I went ahead and upped my PHP memory limit, just in case:
/etc/php.ini
memory_limit = 256M
Using SuExec
mod_fastcgi doesn't work at all under SuExec on Apache 2.x. I had nothing but trouble from it (it also had numerous other issues in our testing). The real cause of your problem is SuExec.
In my case it happened at startup: when I start Apache, mod_fcgid spawns exactly 5 processes for each vhost. Now, when using a simple upload script and submitting a file larger than 4-8KB, all of those child processes are killed at once for the specific vhost the script was executed on.
It might be possible to make a debug build or crank up logging in mod_fcgid, which might give a clue.
I tried mod_fastcgi in the meantime for a year, and like many others I can say that SuExec is nothing but trouble and never runs smoothly, in any case I've seen.
The warning has nothing to do with any of the Fcgidxxx options; it is simply caused by clients closing their side of the connection before the server gets a chance to respond.
From the actual source:
/* Now pass any remaining response body data to output filters */
if ((rv = ap_pass_brigade(r->output_filters, brigade_stdout)) != APR_SUCCESS) {
    if (!APR_STATUS_IS_ECONNABORTED(rv)) {
        ap_log_rerror(APLOG_MARK, APLOG_WARNING, rv, r,
                      "mod_fcgid: ap_pass_brigade failed in "
                      "handle_request_ipc function");
    }
    return HTTP_INTERNAL_SERVER_ERROR;
}
Credit goes to Avian's Blog, which figured this out.
This error can occur when a website uses asynchronous requests. These do not show up directly as an erroneous result on a web page, but they trigger the execution of PHP scripts. If such scripts fail during their execution and do not return a result, this or similar strange errors are logged. What you need to do is identify the JavaScript (AJAX) calls to your PHP scripts and find out why the execution of those scripts is failing.
So yes, clients are closing the connection before waiting for the server to respond, but they do so because they never receive a response to the AJAX call, and that in turn is caused by a faulty script on the website.
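One way to see why such a script dies silently is to log fatal errors from a shutdown handler; a minimal sketch to drop at the top of the suspect AJAX endpoint (the log message format is just an example):
<?php
// log fatal errors that would otherwise vanish behind an empty AJAX response
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && ($err['type'] & (E_ERROR | E_PARSE | E_CORE_ERROR | E_COMPILE_ERROR))) {
        error_log(sprintf('AJAX endpoint died: %s in %s:%d',
                          $err['message'], $err['file'], $err['line']));
    }
});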
I have a PHP script that keeps timing out at 45 seconds. I have done everything I can find related to execution time and currently have the following.
In my php.ini:
max_execution_time = 0
In my .htaccess:
# set timeout to 10 minutes
<IfModule mod_php5.c>
php_value max_execution_time 600
</IfModule>
And in the script that timesout:
ini_set("memory_limit", "-1");
set_time_limit(0);
In the script, when I echo ini_get('max_execution_time') I get 0, so it looks like everything is right. Are there other resource limits at play that are keeping the script from running? I've researched memory_limit, input time, etc., but I'm thinking there's something here I don't know about.
The script does a while loop against a table and then crawls different sites according to each record. When I limit the result to 1 or 2 records it works fine, but with any more than that it goes to a 404 page not found. To me this means it times out, but does the 404 error indicate something else is going on? Thanks
A 404 error does not mean your script timed out; it just means that the URLs you hit were Not Found.
You need to evaluate your script and maybe send a HEAD request to check the status code.
See the list of HTTP status codes and their meanings.
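A minimal sketch of such a check with cURL (the URL is a placeholder):
<?php
// send a HEAD request first and only crawl the page if it actually exists
$url = 'https://example.com/page-to-crawl';   // placeholder
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD: headers only, no body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 404) {
    // skip this record instead of letting the whole crawl fall over
}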
I have a PHP script that, when called via a browser, times out after exactly 60 seconds. I have modified httpd.conf and set the Timeout directive to 300. I have modified all PHP timeout settings to be longer than 60 seconds. When I run the script from the command line it completes. When I execute it through the browser, each time, after 60 seconds, POOF, timeout.
I have also checked for timeout directives in all of the .htaccess files. Nothing there. I am completely stumped.
I am also forcing set_time_limit(0) within the PHP code.
I've been digging and testing for a week and have exhausted my knowledge. Any help is greatly appreciated.
You need to make sure you are setting a higher timeout limit both in PHP and in Apache.
If you set a high max_execution_time in php.ini your script won't time out; however, if you are not flushing the script's output buffer to the browser on a regular basis, the script might time out on the Apache end due to a network timeout.
In httpd.conf do:
Timeout 216000
In php.ini do:
max_execution_time = 0
(setting it to 0 makes it never time out, like with a CLI (command line) script).
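To keep output moving so Apache sees activity, you can flush periodically from PHP; a minimal sketch ($workItems and do_heavy_work are placeholders, and server-side buffering can still delay the bytes):
<?php
set_time_limit(0);
ob_implicit_flush(true);          // flush after every piece of output
while (ob_get_level() > 0) {      // drop buffers that would swallow the output
    ob_end_flush();
}

foreach ($workItems as $item) {   // placeholder work list
    do_heavy_work($item);         // hypothetical long step
    echo '.';                     // heartbeat keeps the connection busy
    flush();
}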
Make sure you restart Apache after you are done! On most Linux distros you can do this by issuing the command (as root):
service httpd restart
Hope this helps!
There are numerous places where the max time can be set. If you are using FastCGI, especially through something such as Virtualmin, there is an additional set of max_execution_time settings that are hidden from you unless you have access.
In short, you will need to figure out all the places in your PHP stack where an execution time limit can be set, raise those values, restart the server, and then do
set_time_limit(0);
for good measure.
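A quick probe can help map those places: print the effective limits from each SAPI; a minimal sketch (run it once through the browser and once from the CLI and compare):
<?php
// show which limits are actually in effect for this SAPI
foreach (['max_execution_time', 'max_input_time', 'memory_limit'] as $key) {
    echo $key, ' = ', ini_get($key), PHP_EOL;
}
echo 'sapi = ', php_sapi_name(), PHP_EOL;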
Without more information about your specific setup and given my experience in dealing with execution time hangups in PHP, that's the most I can tell you.
I have a PHP script that uses exec to call an external command which compiles a spatial database query result into a shapefile. On tables with lots of records (say 15,000), this command can take as long as 7 minutes to execute. Jobs that do not take too long (maybe <2 min) work fine, but on longer ones (like the 7-minute one) the page displays "500 internal server error". Reviewing the Apache server log, the error states: "Premature end of script headers: php-cgi.exe". I have already adjusted the PHP maximum execution time to allow more than enough time, so I know it is not that. Is there an Apache maximum that's being hit, or something else?
Premature end of script headers means that the web server's timeout for CGI scripts was exceeded by your script. This is a web server timeout and has nothing to do with php.ini configuration. You need to look at your CGI handler configuration to increase the time allowed for CGI scripts to run.
E.g. if you are using mod_fastcgi you may want to specify the following option in your Apache config: FastCgiServer -idle-timeout 600, which will give you a timeout of 10 minutes. By default fastcgi allows 30 seconds. You can find other fastcgi options here: http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html
Apparently, the default httpd.conf file included in Apache 2.4 doesn't automatically include the extra file httpd-default.conf under /conf/extra (probably by design), which includes the timeout parameter. Since the timeout parameter isn't defined, it reverts back to the default value of 30 seconds, and your script(s) time out.
I did the following to resolve it:
Opened httpd.conf and added the following line to the bottom:
Include conf/extra/httpd-default.conf
Restarted Apache, and it worked.
An alternative would be to simply add the following line(s) to the httpd.conf file:
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
Hope this helps someone out there :)
When I use set_time_limit and the script runs for any amount of time greater than 360 seconds, it throws a 500 error.
At 359 seconds, nothing; at 360 and above, error.
I don't have access to php.ini, how can I fix this bug?
"...script runs for any amount of time greater than 360 seconds, it throws a 500 error."
It sounds like you're hitting another timeout somewhere. If your server uses FastCGI, for example, Apache and/or the FastCGI process could be configured to only wait for six minutes (360 seconds) before timing out. It also could be that there's a reverse proxy sitting between you and Apache with the same timeout, though proxy timeouts are usually 504s, not 500s.
Please examine your server configuration. If you're on shared hosting, ask your host about the timeout.
If your script needs to execute for an extended time, you may wish to find another way to run it.
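One such way, assuming a Unix-like host that allows exec, is to push the long job into a detached background process so the web request returns immediately; a rough sketch (the worker path is a placeholder):
<?php
// hand the long job to a background CLI process; the web request returns at once
exec('php /path/to/worker.php > /dev/null 2>&1 &');
echo 'Job started; check back later for the results.';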
If you use Apache, you can change the maximum execution time via .htaccess with this line:
php_value max_execution_time 200