On my page, there is a script which takes a long time to execute fully. While it is processing, after 30 seconds, I get a 502 Bad Gateway error. I have searched for this and it seems to be related to the KeepAlive feature of Apache. I've tried a few things to keep the connection alive, such as:
set_time_limit(-1);
header("Connection: Keep-Alive");
header("Keep-Alive: timeout=600, max=100");
ini_set('max_execution_time', 30000);
ini_set('memory_limit', '-1');
I have also tried calling an Ajax function that hits a page on the server every 5 seconds, but nothing has worked for me.
I'm using PHP + MySQL + Apache on a Linux server.
If you are using some type of hosting, it is quite possible that there is a proxy or a load balancer between your client and your server with a connection time limit set to 30 seconds. That's a common setup.
Investigate the logs of each service in the chain to find out which one returns the 502.
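For example, a quick check of the access logs might look like this (a sketch; the log paths vary by distribution and are assumptions here):

```shell
# The service that *logged* the 502 (rather than merely received it
# from a backend) is the one that generated it.
# Log locations are assumptions -- adjust to your setup.
check_logs() {
  for log in "$@"; do
    if [ -f "$log" ]; then
      echo "== $log =="
      grep ' 502 ' "$log" | tail -n 5   # five most recent 502 responses
    fi
  done
}
check_logs /var/log/nginx/access.log /var/log/apache2/access.log
```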
Context
I am working on a PHP server-sent events (SSE) application running on PHP 7.4 and Apache 2.4 on Ubuntu 20.10. The app does what it's supposed to, but an increased number of users (connections? SSE connections?) presumably causes the server to hang. I would like to be able to handle a relatively large number of users (~1000), but my SSE events fire rarely (~3 times in 15 minutes) and only look for and send a few string values found in a text file on the server.
Problem
My problem is that under some circumstances, including an increased number of clients (~70 to 100), Apache starts hanging. New HTTP requests are not reported in the access log, no errors are reported in the error log, and any requests sent from the browser seem to load forever with no answer from the server. Server load (processor, RAM) at that moment is minimal, and I can access the server via SSH or FTP normally.
What I've tried
This happens with the default Apache configuration, so following online advice I tried turning off the mpm_prefork module and activating mpm_event and php7.4-fpm. Not much changed, except that the number of supported clients went up by a few dozen; even that may not be accurate, since I cannot test it manually and can only have the application live-tested when I get a chance.
I've tried turning off the SSE element of the application, and in that case Apache does not hang (but then I can't push updates to clients, which is what I need SSE for). So the SSE connections are probably overloading something and hanging Apache, but I don't know what.
I assume the hanging has to do with the number of open connections or processes. As far as I've learned, I can control that only in /etc/apache2/apache2.conf (I tried setting MaxKeepAliveRequests 0) and in /etc/php/7.4/fpm/pool.d/www.conf (I tried setting pm.max_children = 250, pm.start_servers = 10, pm.min_spare_servers = 5, pm.max_spare_servers = 15, pm.max_requests = 1000), but to no avail.
My questions
What can I do to increase the number of connections/SSE processes Apache supports?
What can I do to find out what causes Apache to hang, or what typically causes such hangs?
Any other ideas or suggestions on how to solve the hanging?
My server-side code is
<?php
header('Content-Type: text/event-stream; charset=utf-8');
header('Cache-Control: no-store');
header('Connection: keep-alive');
header('Content-Encoding: none'); // no trailing semicolon in the header value
set_time_limit(0);

while (true) {
    if (configurationChanged()) {
        echo "data: " . newConfiguration() . "\n\n";
        if (ob_get_level() > 0) {
            ob_flush(); // flush without closing the buffer (ob_end_flush() only works once)
        }
        flush();
    } else {
        sleep(3);
    }
    if (connection_aborted()) break;
}
?>
My client code is
var source = new EventSource('myScript.php', {withCredentials: false});

source.onopen = function (event) {
    console.log("Connection opened.");
};

source.onmessage = function (event) {
    console.log(event.data);
    // Do stuff with the obtained data here
};
Thanks for reading this.
The solution
My main problem was that I didn't expect Apache to hang while there were still resources available on my server. A lack of experience caused me to waste many hours before I realized I should look for causes in
Apache error log /var/log/apache2/error.log
FPM log /var/log/php7.4-fpm.log
I tried re-configuring the mpm_event module according to the link given in the comment. While it helped increase the number of concurrent users by a few dozen, the same problem recurred when the number of users grew further.
What did help was setting pm = ondemand in /etc/php/7.4/fpm/pool.d/www.conf, which avoids having to define the worker parameters manually. I'm not sure why that is not the default, or not more widely recommended. My problem seemed to be solved.
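For reference, the pool settings involved look roughly like this (a sketch; pm.process_idle_timeout is an optional extra and its value here is a guess, not taken from my working config):

```ini
; /etc/php/7.4/fpm/pool.d/www.conf (sketch)
pm = ondemand                  ; spawn workers only when requests arrive
pm.max_children = 250          ; hard upper bound on concurrent workers
pm.process_idle_timeout = 10s  ; reap workers idle longer than this
```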
However, a new issue started occurring. The FPM log /var/log/php7.4-fpm.log started reporting two kinds of errors:
[mpm_event:error] ... AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
which would leave my web application hanging for users for a few minutes, then returning to normal without any intervention.
[proxy_fcgi:error] ... (70007)The timeout specified has expired: ... AH01075: Error dispatching request to : (polling), referer: ...
which would kill my web application for my users (so I added JavaScript that reloads the target PHP script for users whenever the SSE connection ends)
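The reload logic mentioned in the parenthesis can be sketched roughly like this (names are illustrative; `createSource` is a factory so the logic does not depend on a browser):

```javascript
// Sketch: recreate the EventSource whenever the connection dies.
// `createSource` returns an EventSource-like object with close().
function keepAlive(createSource, retryMs) {
  var source = createSource();
  source.onerror = function () {
    source.close(); // drop the dead connection
    setTimeout(function () {
      keepAlive(createSource, retryMs); // open a fresh one
    }, retryMs);
  };
  return source;
}

// In the browser:
// keepAlive(function () { return new EventSource('myScript.php'); }, 3000);
```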
For 1.
I tried to follow the error message's instruction to "Increase ServerLimit" and added ServerLimit 250 to /etc/apache2/mods-enabled/mpm_event.conf. That didn't solve the problem.
I found this Apache bug report, but I was using a version in which it should have been fixed. I then found this page suggesting I should change mpm_event to mpm_worker. That worked like a charm and solved problem 1.
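For anyone making the same switch, an mpm_worker configuration looks roughly like this (a sketch; the values are illustrative defaults, not tuned figures from my server):

```apache
# /etc/apache2/mods-available/mpm_worker.conf (sketch)
<IfModule mpm_worker_module>
    ServerLimit          16
    ThreadsPerChild      25
    MaxRequestWorkers    400   # must not exceed ServerLimit * ThreadsPerChild
</IfModule>
```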
For 2.
Problem 2 was related to my PHP SSE application, specifically to the SSE script's timeout. What did NOT help was simply adding set_time_limit(0); to my PHP script. The timeout was reached by proxy_fcgi according to the error, so I had to edit /etc/apache2/apache2.conf and set
Timeout 3600
ProxyTimeout 3600
This increases the maximum execution time of any script to 1 hour (3600 seconds). This is not an ideal solution, but I haven't found a way to extend the execution time of only a particular script (in my case, the SSE PHP script running in an infinite loop).
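One idea I have not verified: mod_proxy workers can carry their own timeout, so routing only the SSE script through a dedicated FastCGI worker might avoid raising the global limits. A sketch (untested; the socket path and backend name are assumptions):

```apache
<FilesMatch "sse\.php$">
    SetHandler "proxy:unix:/run/php/php7.4-fpm.sock|fcgi://sse-backend"
</FilesMatch>
<Proxy "fcgi://sse-backend">
    ProxySet timeout=3600
</Proxy>
```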
Hope this helps someone!
I'm connecting to a web service using SOAP in PHP. Everything worked fine with XAMPP on my computer, but when I moved the PHP code to a web server, my timeout problems started.
First I got a timeout after 30 seconds; max_execution_time fixed that.
Then I got a timeout after 60 seconds; default_socket_timeout fixed that.
Now I get a 503 error after exactly 5 minutes, and I am not sure how to fix that. I know the reason is simply that I am receiving a lot of data, and the fact that it is always exactly 5 minutes (plus 1 to 2 seconds) means that it is a timeout.
But what can I do about it? Do I need a third time extender?
Edit
I also got the 503 error when this was my only code:
<?php
echo "hello";
sleep(305);
?>
So I assume there must be a timeout somewhere that I don't know about, but I have no idea how to check for those timeouts or whether I can actually change them.
I have a site hosted on Rackspace Cloud Sites. I have a troubleshooting script I am trying to run to figure out some problems with the site.
Cloud Sites has a 30-second timeout, and the script times out before the results page can load. I spoke with their support and they advised me to put a 'page loading' script at the top of the PHP file to keep the connection open, but I have no idea how to do that, and the googling I have done hasn't been much help.
The script I am trying to run is too long to include here, but if anyone needs it you can find it here: http://forum.joomla.org/viewtopic.php?f=621&t=582860
Edit: no matter what I set the execution time to in the script, the load balancers Rackspace uses still time out after 30 seconds. They have told me to run a 'page loading' script at the beginning of the script to keep the connection open, so I am about to start looking into how to do that.
You could try the set_time_limit() function:
http://php.net/manual/en/function.set-time-limit.php
By default, a PHP script times out after 30 seconds.
Use the set_time_limit( int $seconds ) function to extend the maximum execution time.
You can also use ini_set() and set the max_execution_time:
ini_set("max_execution_time", 300);
EDIT
If the above doesn't work, then there is probably a secondary mechanism that times out blocking connections. What you could try in this situation is flushing some data at a regular interval.
ob_start(); // enable output buffering

// output something at a regular interval, e.g. inside your processing loop
echo " ";
ob_flush();
flush(); // also push PHP's internal buffer out to the web server

// at end of script
ob_end_flush();
Hope this helps.
I would like to run the action http://site.com/rss/rss_import, which downloads many large files.
I use:
ignore_user_abort(true); // without the argument, the call only reads the current setting
set_time_limit(0);
After about 60 seconds I get the following message:
504 Gateway timeout
When I run rss_import.php directly, the 504 error does not occur.
What can I do about that?
A 504 Gateway Timeout (you are probably using nginx) is web-server-related, not PHP-related: the server simply stops waiting for data from the PHP FastCGI process.
Either change the nginx config (see http://wiki.nginx.org/HttpFastcgiModule#fastcgi_read_timeout) or use the command line as ArneRie has already suggested.
//edit: In the (unlikely) case that you are using Apache with FastCGI, the corresponding Apache parameter is here: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgidiotimeout
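For the nginx case, the directive mentioned above goes into the location block that passes requests to PHP; a sketch (the socket path is an assumption; the default read timeout is 60 seconds, which matches the behavior described in the question):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed socket path
    fastcgi_read_timeout 300;                 # raise from the 60s default
}
```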
When I use set_time_limit and the script runs for any amount of time greater than 360 seconds, it throws a 500 error.
At 359 seconds, nothing; at 360 and above, the error.
I don't have access to php.ini. How can I fix this?
script runs for any amount of time greater than 360 seconds, it throws a 500 error.
It sounds like you're hitting another timeout somewhere. If your server uses FastCGI, for example, Apache and/or the FastCGI process could be configured to only wait for six minutes (360 seconds) before timing out. It also could be that there's a reverse proxy sitting between you and Apache with the same timeout, though proxy timeouts are usually 504s, not 500s.
Please examine your server configuration. If you're on shared hosting, ask your host about the timeout.
If your script needs to execute for an extended time, you may wish to find another way to run it.
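One common way to do that is to move the long-running work out of the request cycle entirely, for example into a cron job run with the PHP CLI, where web server and proxy timeouts do not apply (the paths here are placeholders):

```
# crontab entry (sketch): run the task every 15 minutes via the PHP CLI
*/15 * * * * php /path/to/long_task.php >> /var/log/long_task.log 2>&1
```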
If you use Apache, you can change the maximum execution time via .htaccess with this line:
php_value max_execution_time 200