Context
I am working on a PHP Server-Sent Events (SSE) application running on PHP 7.4 and Apache 2.4 on Ubuntu 20.10. The app does what it's supposed to, but an increased number of users (connections? SSE connections?) presumably causes the server to hang. I would like to be able to handle a relatively large number of users (~1000), but my SSE events fire rarely (~3 times in 15 minutes) and only look for and send a few string values found in a text file on the server.
Problem
My problem is that under some circumstances, including an increased number of clients (~70 to 100), Apache starts hanging. New HTTP requests are not reported in the access log, no errors are reported in the error log, and any requests sent from a browser seem to load forever with no answer from the server. Server load (processor, RAM) at that moment is minimal, and I can access the server via SSH or FTP normally.
What I've tried
This happens with the default Apache configuration, so following online advice I tried turning off the mpm_prefork module and activating mpm_event and php7.4-fpm. Not much changed, except that the number of supported clients went up by a few dozen, and even that may not be accurate, since I cannot test it manually; I can only have the application live-tested when I get the chance.
I've tried turning off the SSE element of the application, and in that case I have no Apache hanging issues (but I can't update clients' info, for which I need SSE). That means SSE is probably causing an overload or Apache hang in some respect, but I don't know what.
I assume the Apache hang has to do with the number of open connections or processes. As far as I've learned, I can control that only in /etc/apache2/apache2.conf (I tried setting MaxKeepAliveRequests 0) and in /etc/php/7.4/fpm/pool.d/www.conf (I tried setting pm.max_children = 250, pm.start_servers = 10, pm.min_spare_servers = 5, pm.max_spare_servers = 15, pm.max_requests = 1000), but to no avail.
My questions
What can I do to increase the number of connections/SSE processes Apache supports?
What can I do to find out what causes Apache to hang, or what typically causes that?
Any other ideas/suggestions on how to solve the Apache hanging?
My server-side code is
<?php
header('Content-Type: text/event-stream; charset=utf-8');
header("Cache-Control: no-store");
header('Connection: keep-alive');
header('Content-Encoding: none'); // discourage compression so events are not buffered
set_time_limit(0); // let the SSE loop run indefinitely

// End output buffering once, before the loop: calling ob_end_flush()
// on every iteration raises a notice once no buffer is left.
while (ob_get_level() > 0) {
    ob_end_flush();
}

while (true) {
    if (configurationChanged()) {
        echo "data: " . newConfiguration() . "\n\n";
        flush(); // push the event to the client immediately
    } else {
        sleep(3); // check the configuration file again in 3 seconds
    }
    if (connection_aborted()) {
        break; // the client disconnected, stop the loop
    }
}
?>
My client code is
var source = new EventSource('myScript.php', {withCredentials: false});

source.onopen = function (event) {
    console.log("Connection opened.");
};

source.onmessage = function (event) {
    console.log(event.data);
    // Do stuff with the obtained data here
};
Thanks for reading this.
The solution
My main problem was that I didn't expect Apache to hang while there were still resources available on my server. A lack of experience caused me to waste many hours before I realized I should look for causes in:
Apache error log /var/log/apache2/error.log
FPM log /var/log/php7.4-fpm.log
I tried reconfiguring the mpm_event module according to the link given in the comments. While it helped to increase the number of concurrent users by a few dozen, the same problem started occurring again when the number of users increased further.
What did help was setting pm = ondemand in /etc/php/7.4/fpm/pool.d/www.conf, to avoid having to define the process-manager parameters manually. I'm not sure why that is not the default, or at least more widely recommended. My problem seemed to be solved.
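For reference, a minimal sketch of the pool settings I mean (the pm.max_children value is the one from my earlier experiments; pm.process_idle_timeout is optional and shown with its default):

; /etc/php/7.4/fpm/pool.d/www.conf
pm = ondemand
; hard upper bound on workers spawned on demand
pm.max_children = 250
; stop workers that have been idle this long
pm.process_idle_timeout = 10s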
However, a new issue started occurring. The FPM log /var/log/php7.4-fpm.log started reporting two kinds of errors:
1. [mpm_event:error] ... AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
This would leave my web application hanging for users for a few minutes, then going back to normal without any intervention.
2. [proxy_fcgi:error] ... (70007)The timeout specified has expired: ... AH01075: Error dispatching request to : (polling), referer: ...
This would kill my web application for my users, so I added JavaScript to reload the target PHP script for my users if the SSE connection ended (a sketch follows).
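That client-side fallback looked roughly like this (a sketch rather than my exact code; note that EventSource already retries on its own while the connection is only temporarily interrupted):

var source;
function connect() {
    source = new EventSource('myScript.php', {withCredentials: false});
    source.onmessage = function (event) {
        console.log(event.data);
        // Do stuff with the obtained data here
    };
    source.onerror = function () {
        // readyState CLOSED means the browser has given up retrying,
        // so recreate the EventSource ourselves.
        if (source.readyState === EventSource.CLOSED) {
            connect();
        }
    };
}
connect();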
For 1.
I tried to follow the error message's instruction ("Increase ServerLimit") and added ServerLimit 250 to /etc/apache2/mods-enabled/mpm_event.conf. That didn't solve the problem.
I found this Apache bug report, but I was using a version in which that should already have been fixed. I then found this page suggesting I should change mpm_event to mpm_worker. That worked like a charm and solved problem 1.
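For the record, switching the MPM on Debian/Ubuntu looks roughly like this (module and service names as shipped with Ubuntu 20.10; only one MPM can be enabled at a time):

sudo a2dismod mpm_event
sudo a2enmod mpm_worker
sudo systemctl restart apache2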
For 2.
Problem 2 was related to my PHP SSE application, specifically to the SSE script's timeout. What did NOT help was simply adding set_time_limit(0); to my PHP script. According to the error, the timeout was being reached by proxy_fcgi, so I had to edit /etc/apache2/apache2.conf and set
Timeout 3600
ProxyTimeout 3600
This increased the maximum execution time of every script to 1 hour (3600 seconds). This is not an ideal solution, but I haven't been able to find a way to raise the limit only for a particular script (in my case, the SSE PHP script running in an infinite loop).
Hope this helps someone!
I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes the data to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the Timeout value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
Then you are only amending the execution time for this script and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one; there are two, one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run:
<?php
phpinfo();
?>
Then look for "Loaded Configuration File".
I finally worked out the reason the request times out. The problem lies with the virtual server hosting.
The request from the web browser is sent to the hosting server, which then directs the request to the virtual server (which acts like a separate server). Because the hosting server doesn't get a response back from the virtual server within 60 seconds, it times out and sends a response back to the web browser saying exactly that. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late, as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem should come from the PHP file and not the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't get a timeout when running the script on the command line. The phpinfo() output you gave me has max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS 8, the settings below worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't good enough anymore. You now have to do this:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The term "gateway" here refers to the PHP worker, meaning the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered from the internet (I have not verified this myself), it basically has executables (workers) running in the background, waiting for you to give it PHP scripts to execute. The time saving comes from the executables always running: because they are already running, you suffer no start-up penalty.
I'm on Windows 7 (using a VirtualBox VM) to do some development. I have PHP 5.5.13, Apache 2.4.9 and Chrome 35.0.1916.153. I'm seeing some very inconsistent behavior when I run the following code:
index2.php
<?php
echo "Load time before 'file_get_contents': ".(microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'])."<br/>\r\n";
echo file_get_contents('http://custom.local/index3.php');
echo "Load time after 'file_get_contents': ".(microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'])."<br/>\r\n";
exit;
?>
index3.php
<?php
echo "Load time inside 'index3.php': ".(microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'])."<br/>\r\n";
exit;
?>
Here are the results in Chrome (I get response times between 8 and 28 seconds with curl_exec as well); the response time fluctuates a lot:
Load time before 'file_get_contents': 0.002000093460083
Load time inside 'index3.php': 0.0026569366455078
Load time after 'file_get_contents': 15.411452054977
It seems that, after all, it was an Apache issue.
Chrome's advanced option "Predict network actions to improve page load performance" would use more threads than provisioned by the default Apache config (2.4.9 Windows build).
The fix in httpd.conf is simple:
AcceptFilter http none
More details here:
http://www.apachelounge.com/viewtopic.php?p=28142#28142
I've set up an Apache server as localhost on an openSUSE 13.1 64-bit system and I'm currently testing my PHP scripts.
In Konqueror 4.11.5 everything seems fine, but with Firefox 29.0.1 there is a strange phenomenon:
Every 10th time or so the connection fails. Firefox reports: "Connection determined".
The failed connection is listed neither in error_log nor in access_log.
The error must occur quite "early", because my PHP script output.php calls "itself" almost immediately via
header("Location: output.php?changed_url");
yet the Firefox error appears BEFORE output.php is opened for the second time.
I have no idea what to do about this. It's quite an annoying issue.
All answers will be appreciated! Thanks in advance!
I guess you are missing
exit;
after the header() location change.
So you have an open script, Firefox redirecting to the next one (itself) while still keeping the first one open, and so on. I think Firefox doesn't like this kind of loop ;)
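A minimal sketch of the fix (the surrounding code is hypothetical):

<?php
// ... whatever work decides that a redirect is needed ...
header("Location: output.php?changed_url");
exit; // stop here so the script doesn't keep running after the redirect
?>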
Do you have a .htaccess file there? Have you tried using Firefox from a different OS or computer? I bet it's related to your installation of Firefox :) (I'm no pro, take this as a guess)
For the past few hours, our server hangs every time a session_start() is done.
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write and also has read permissions on all parent folders.
According to the admins, no changes were made on the server, and there is no special code registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many reasons for this; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x').
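A minimal sketch of such a handler (the class name and the error handling are made up for illustration; register it before session_start()):

<?php
class FileSessionHandler implements SessionHandlerInterface
{
    private $savePath;

    public function open($savePath, $sessionName)
    {
        $this->savePath = $savePath;
        return is_dir($savePath) || mkdir($savePath, 0700, true);
    }

    public function close()
    {
        return true;
    }

    public function read($id)
    {
        $file = "{$this->savePath}/sess_{$id}";
        return is_readable($file) ? (string) file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        // 'w' truncates or creates the file; unlike 'x' it cannot fail
        // just because a stale session file already exists.
        $fp = fopen("{$this->savePath}/sess_{$id}", 'w');
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    }

    public function destroy($id)
    {
        $file = "{$this->savePath}/sess_{$id}";
        return !is_file($file) || unlink($file);
    }

    public function gc($maxlifetime)
    {
        foreach ((array) glob("{$this->savePath}/sess_*") as $file) {
            if (filemtime($file) + $maxlifetime < time()) {
                unlink($file);
            }
        }
        return true;
    }
}

session_set_save_handler(new FileSessionHandler(), true);
session_start();
?>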
B. Never point the session entropy file at /dev/random, whether in php.ini or at runtime as below; this will cause your session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running under a normal user account. Apache will then, of course, have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created (the exact line is shown below)
Otherwise, your scripts will seem to 'hang' on session_start().
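For the php.ini edit in the last step, the relevant line is just this (PREFIX stands for wherever you installed):

session.save_path = "PREFIX/tmp"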
If this helps:
In my scenario, session_start() was hanging while I was using the XDebug debugger within PhpStorm (the IDE) on Windows. I found that there was a clear cause: whenever I killed the debug session from within PhpStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5 x64, PHP 5.2.10-1. A clean ANSI file in the web root containing nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two PCs: one a Mac mini with XAMPP, the other Windows 10 with XAMPP.
On both, PHP took forever to run session_start(). Both run PHP 7.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and immediately release the lock when done:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
// For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file and the problem is solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and where database-based sessions get out of sync by storing out-of-date session data (such as storing the session saves in the wrong order).
This is caused by any subsequent request that loads a session while one is already open (simultaneous requests): AJAX, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS file) delivered via a PHP script, etc.
With file-based sessions, file locking prevents the session from being written, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save; so, for example, a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your AJAX or resource delivery script doesn't need to use sessions, the easiest fix is simply to remove session usage from it.
Otherwise, you'd best make yourself a coffee and do the following:
Write or employ a session handler (if you are not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend this code:
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your AJAX or resource delivery PHP script, add this code (after the session is started):
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately, if you need simultaneous requests that each load a session, it is required.
NOTE: if your AJAX or resource delivery script does actually need to write/save data, then you need to store it somewhere other than the session, such as a database.
Just put session_write_close(); before session_start(), as below:
<?php
session_write_close();
session_start();
// ...
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason my sessions were dying was ultimately that I was storing a lot of data in them after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing memory_limit, those session_start() calls no longer killed my script.
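If you can't edit php.ini, the limit can also be raised per script (the 256M value is only an illustration; pick one that fits your session data):

<?php
ini_set('memory_limit', '256M'); // must run before session_start()
session_start();
?>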
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case, it seems it was the NFS share that was locking the session. After restarting the NFS server and enabling only one node of web clients, the sessions worked normally.
Yet another few cents that might help someone: in my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at the time session_start() ran. The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION and to reverse the process after getting the data back out of the session, as sketched below.
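A minimal sketch of that approach ($cart, Cart::toArray() and Cart::fromArray() are hypothetical names):

<?php
// Before saving: store plain JSON instead of live objects, so that
// session_start() never has to unserialize a class that isn't loaded.
$_SESSION['cart'] = json_encode($cart->toArray());

// After reading: rebuild the object from the decoded array.
$cart = Cart::fromArray(json_decode($_SESSION['cart'], true));
?>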