Monitor php-fpm max process count script - php

We have an issue where, on the production server, some bug in our system locks/hangs a php-fpm process and never releases it. Over a period of 10-15 minutes this causes more processes to lock up (probably while trying to access a shared resource that is never released), and after a while the server cannot serve any new users because no free php-fpm processes are available.
In parallel with trying to find what is creating that deadlock, we were thinking of creating a simple cron job which runs every 1-2 minutes and, if it sees the process count above X, either kills all php-fpm processes or restarts php-fpm.
What do you think of that as a simple temporary fix for the problem?
A simple PHP script:
$processCount = (int) shell_exec("ps aux | grep '[p]hp-fpm' | grep -c USERNAME"); // the [p] trick keeps grep from counting itself
$killAll = $processCount >= 60;
if ($killAll) {
    echo "killing all processes";
    // Note: shell_exec() never throws, so the original try/catch was a no-op
    shell_exec("kill -9 $(lsof -t -i:9056)"); // kill whatever is holding the FPM port
    shell_exec("sudo service php56u-php-fpm restart");
    $processCount = (int) shell_exec("ps aux | grep '[p]hp-fpm' | grep -c USERNAME"); // check how many now
}
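For completeness, a sketch of wiring such a check into cron (the script path and log file are assumptions):
*/2 * * * * php /path/to/fpm-watchdog.php >> /var/log/fpm-watchdog.log 2>&1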

Killing all PHP processes doesn't seem like a good solution to your problem. It would also kill legitimate processes and return errors to visitors, and generally just bury the problem deeper. You may also introduce data inconsistencies, corrupt files, and cause other problems by killing processes indiscriminately.
Maybe it would be better to set some timeout, so the process would be killed if it takes too long to execute.
You could add something like this to php-fpm pool config:
request_terminate_timeout = 3m
and/or max_execution_time in php.ini
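In php.ini that would look like this (the value is only an example):
max_execution_time = 180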
You can also enable logging in php-fpm config:
slowlog = /var/log/phpslow.log
request_slowlog_timeout = 2m
This will log slow requests and may help you find the culprit of your issue.

It's not a good solution to kill PHP processes. In your PHP-FPM pool config file (/etc/php5/pool.d/www.conf),
set pm.max_requests = 100 so that after 100 requests the worker process exits and another process starts for subsequent requests.
There may also be a problem with your code; please make sure each request actually ends.
So if the problem is in your script, try request_terminate_timeout = 2m:
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
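Putting the two pool directives discussed above together, a minimal sketch (the values are only examples):
pm.max_requests = 100
request_terminate_timeout = 2m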
Please note that if you are doing some long polling, this may affect your code.

Related

PHP script timing out after 60 seconds

I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes the data to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the TimeOut value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
That way you are only amending the execution time for this script and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one; there are two, one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run
<?php
phpinfo();
?>
and look for "Loaded Configuration File".
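Alternatively, a one-line sketch using PHP's built-in php_ini_loaded_file() (available since PHP 5.2.4) prints the same path directly:
<?php
// Echoes the path of the php.ini actually loaded by this SAPI.
echo php_ini_loaded_file();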
I finally worked out the reason the request times out. The problem lies with having virtual server hosting.
The request from the web browser is sent to the hosting server which then directs the request to the virtual server (acts like a separate server). Because the hosting server doesn't get a response back from the virtual server after 60 seconds, it times out and sends a response back to the web browser saying exactly this. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem most likely comes from the PHP side, not from the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set ('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't get a timeout when running the script on the command line. The phpinfo() output you gave shows max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS 8, the settings below worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't good enough anymore. You have to do this now:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The "gateway" here is the PHP worker, meaning the worker timed out because that's how it was configured; it has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered from the internet (I have not verified this myself), it basically keeps executables (workers) running in the background, waiting for you to give them PHP scripts to execute. The time saving is that the executables are always running, so you suffer no start-up penalty.

APC restarts sometimes

After installing APC and looking at the apc.php status script, I see the uptime restarts every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0
APC caches live as long as their hosting process; it could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered wherever php -b is used)
php-fpm: pm.max_requests individually for every pool.
You could try setting the option you are using to its "doesn't matter" value (usually 0) and then test the setup with a simple hello-world PHP script and apachebench: ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed) to see whether the worker PIDs change or not.
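A minimal hello.php for that test might simply print the worker PID, so a change between requests shows the worker was recycled (the filename is just an example):
<?php
// Prints the PID of the worker process serving this request; if a
// recycle limit such as MaxConnectionsPerChild or pm.max_requests
// kicks in during the ab run, this PID will change.
echo "hello from worker pid " . getmypid() . "\n";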
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every 2 hours.
TTL must never be set to 0
Just read the manual to understand how TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC, copy it to your webserver to check memory usage.
You must allow enough memory so that APC has 20% free after some hours of running. Check this on a regular basis.
If you don't have enough memory available, use the filters option to prevent rarely accessed files from being cached.
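If you would rather check this from code than eyeball apc.php, here is a minimal sketch, assuming the APC extension (and its apc_sma_info() function) is loaded:
<?php
// Report the percentage of free APC shared memory and warn below 20%.
$info  = apc_sma_info(true);                   // limited info is enough here
$total = $info['num_seg'] * $info['seg_size']; // total shared memory
$free  = $info['avail_mem'] / $total * 100;    // free, as a percentage
printf("APC free memory: %.1f%%\n", $free);
if ($free < 20) {
    echo "Warning: APC is low on memory; full cache clears are likely.\n";
}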
Check my answer there
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today, found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go to WHM > Apache Configuration > Piped Log Configuration and enable Piped Apache Logs.

How to handle timeouts with php5-fpm + nginx timeout php.ini

How do you handle timeouts with PHP in php5-fpm + nginx configurations?
I tried to make a simple script with just
sleep(60);
php.ini
max_execution_time = 30
fastcgi:
fastcgi_connect_timeout 60;
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
The script stops at 50s with a timeout from the backend. What do I have to do to
enable the max_execution_time in php.ini
enable ini_set to change the execution time to 0 directly in the script
Why does fastcgi get to control timeouts over everything instead of PHP itself?
It basically came down to the fact that on Linux the timeout counts only the actual "PHP work", not time spent in stream functions, and not sleep(); that's why I never reached the limit and the fastcgi timeout always kicked in first. On Windows, by contrast, the real elapsed ("human") time counts.
from the PHP doc:
The set_time_limit() function and the configuration directive
max_execution_time only affect the execution time of the script
itself. Any time spent on activity that happens outside the execution
of the script such as system calls using system(), stream operations,
database queries, etc. is not included when determining the maximum
time that the script has been running. This is not true on Windows
where the measured time is real.
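A tiny demonstration sketch of this Linux behavior (not meant for production):
<?php
// On Linux, only actual script execution time counts toward the limit.
set_time_limit(2);
sleep(5);        // sleep is NOT counted, so no timeout fires here
echo "survived the sleep\n";
while (true) {}  // pure CPU work: fatal "Maximum execution time" error after ~2s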
Try using set_time_limit in your PHP code.
When using php-cgi (php-fpm), php.ini's max_execution_time will not take effect;
the FPM configuration item request_terminate_timeout handles script execution time instead.
In php-fpm.conf set this item like below:
request_terminate_timeout = 60s

Apache and/or PHP Timeouts - Stumped

I have a PHP script that, when called via a browser, times out after exactly 60 seconds. I have modified httpd.conf and set the Timeout directive to 300. I have modified all PHP timeout settings to extend longer than 60 seconds. When I run the script from the command line it completes. When I execute it through the browser, every time, after 60 seconds, POOF, timeout.
I have also checked for timeout directives in any of the .htaccess files. Nothing there. I am completely stumped.
I am also forcing set_time_limit(0) within the PHP code.
I've been digging and testing for a week and have exhausted my knowledge. Any help is greatly appreciated.
You need to make sure you are setting a higher timeout limit both in PHP and in Apache.
If you set a high max_execution_time in php.ini your script won't time out; however, if you are not flushing the script's output buffer to the browser on a regular basis, the script might time out on the Apache end due to a network timeout.
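A sketch of that flushing approach, where process_row() is a hypothetical stand-in for your own long-running work:
<?php
set_time_limit(0);           // no PHP-side limit for this one script
while (ob_get_level() > 0) { // disable output buffering so flush() reaches Apache
    ob_end_flush();
}
for ($i = 0; $i < 50000; $i++) {
    process_row($i);         // hypothetical unit of work, e.g. one CSV line
    if ($i % 1000 === 0) {
        echo ' ';            // trickle a byte of output
        flush();             // keeps the connection from hitting a network timeout
    }
}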
In httpd.conf do:
Timeout 216000
In php.ini do:
max_execution_time = 0
(setting it to 0 makes it never time out, like with a CLI (command line) script).
Make sure you restart Apache after you are done! On most Linux distros you can do this by issuing the following command (as root):
service httpd restart
Hope this helps!
There are numerous places where the max time can be set. If you are using FastCGI, especially through something such as Virtualmin, there is an additional set of max_execution_time settings that are hidden from you unless you have access.
In short, you will need to figure out all the places in your particular PHP stack where an execution-time limit can apply, raise those values, restart the server, and then do
set_time_limit(0);
for good measure.
Without more information about your specific setup and given my experience in dealing with execution time hangups in PHP, that's the most I can tell you.

Making PHP scripts time out so they don't kill my server

The cause was probably that I ran out of disk space, causing everything to work strangely. I will leave this question up anyway in case anyone else has a similar issue.
I have a few PHP scripts that have hung for a long time, but apparently they are not really using much CPU time, as they don't get killed. Still, they are making it impossible for lighttpd to spawn any more PHP processes, since the maximum number of them has already been spawned.
I'm aware of set_time_limit, which can be used as a function or put into php.ini to control the maximum CPU time a script can run. What I want is to limit all PHP scripts run by my web server (lighttpd) not in CPU time, but in clock time.
In case it matters, this is the PHP part from my lighttpd config file.
fastcgi.server = (".php" => ((
"bin-path" => "/opt/local/bin/php5-cgi",
"socket" => "/tmp/php.socket" + var.PID,
"min-procs" => 16,
"max-procs" => 16,
"idle-timeout" => 15,
)))
Here is my server-status from lighttpd. You can see that PHP has been running much longer than I bargained for and has caused the server to clog up. Strangely there also seem to be more PHP procs than my max-procs.
legend
. = connect, C = close, E = hard error
r = read, R = read-POST, W = write, h = handle-request
q = request-start, Q = request-end
s = response-start, S = response-end
388 connections
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhrhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhrhhhhhhhhhhhhhhhhhhhhhhhhrhhhhhhhhhhh
hhhhrhhhhhhhhhhrhrhhhrrhrhhhhhrhhhrhhhhhhrhhhrrrhr
rrhrrrhrhhhhrrhrrhhrrhrrhrrrrrrrrrrrrh
Connections
Client IP: Read: Written: State: Time: Host: URI: File:
204.16.33.51 0/0 0/0 handle-req 1361 ... (a PHP script)
204.16.33.46 0/0 0/0 handle-req 1420 ... (another PHP script)
... gazillion lines removed ...
Any ideas that could help me set up a configuration that I don't have to constantly babysit would be much appreciated!
You're probably best off editing the php.ini file and setting the resource limits there.
;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;
max_execution_time = 30 ; Maximum execution time of each script, in seconds
max_input_time = 60 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume
I'm not sure you can do that in lighttpd. You could, however, set up a "spinner" script to periodically check for hung processes and kill them.
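A hedged sketch of such a spinner script, run from cron; the php5-cgi process name, the 300-second threshold, and ps's etimes column (elapsed seconds, present in reasonably recent procps) are all assumptions to adapt:
<?php
// Kill PHP FastCGI workers whose elapsed wall-clock time exceeds a limit.
$out = shell_exec("ps -C php5-cgi -o pid=,etimes=");
foreach (explode("\n", trim((string) $out)) as $line) {
    if (!preg_match('/^\s*(\d+)\s+(\d+)$/', $line, $m)) {
        continue; // skip anything that isn't "pid elapsed"
    }
    if ((int) $m[2] > 300) {        // hung for more than 5 minutes
        echo "killing pid {$m[1]} (running for {$m[2]}s)\n";
        posix_kill((int) $m[1], 9); // 9 = SIGKILL; requires the posix extension
    }
}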
