Making PHP scripts time out so they don't kill my server - php

The cause was probably that I ran out of disk space, causing everything to behave strangely. I will leave this question up anyway in case anyone else has a similar issue.
I have a few PHP scripts that have hung for a long time, but apparently they are not using much CPU time, so they don't get killed. Still, they are making it impossible for lighttpd to spawn any more PHP processes, as the maximum number of them has already been spawned.
I'm aware of set_time_limit(), which can be called as a function or set via max_execution_time in php.ini to control the maximum CPU time a script can run. What I want is to limit all PHP scripts run by my web server (lighttpd) not in CPU time, but in wall-clock time.
In case it matters, this is the PHP part from my lighttpd config file.
fastcgi.server = (".php" => ((
"bin-path" => "/opt/local/bin/php5-cgi",
"socket" => "/tmp/php.socket" + var.PID,
"min-procs" => 16,
"max-procs" => 16,
"idle-timeout" => 15,
)))
Here is my server-status from lighttpd. You can see that PHP has been running much longer than I bargained for and has clogged up the server. Strangely, there also seem to be more PHP procs than my max-procs allows.
legend
. = connect, C = close, E = hard error
r = read, R = read-POST, W = write, h = handle-request
q = request-start, Q = request-end
s = response-start, S = response-end
388 connections
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhrhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
hhhhhhhhhhhhhrhhhhhhhhhhhhhhhhhhhhhhhhrhhhhhhhhhhh
hhhhrhhhhhhhhhhrhrhhhrrhrhhhhhrhhhrhhhhhhrhhhrrrhr
rrhrrrhrhhhhrrhrrhhrrhrrhrrrrrrrrrrrrh
Connections
Client IP: Read: Written: State: Time: Host: URI: File:
204.16.33.51 0/0 0/0 handle-req 1361 ... (a PHP script)
204.16.33.46 0/0 0/0 handle-req 1420 ... (another PHP script)
... gazillion lines removed ...
Any ideas that could help me set up a configuration that I don't have to constantly babysit would be much appreciated!

You're probably best off editing the php.ini file and setting the resource limits there.
;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;
max_execution_time = 30 ; Maximum execution time of each script, in seconds
max_input_time = 60 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume

I'm not sure you can do that in lighttpd. You could, however, set up a "spinner" script to periodically check for hung processes and kill them.
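A minimal sketch of such a spinner, assuming the workers show up in ps as php5-cgi and that anything older than 300 seconds of wall-clock time is hung (both are assumptions to adjust; ps -o etimes= needs a reasonably recent procps):
<?php
// Kill PHP FastCGI workers whose wall-clock age exceeds $maxSeconds.
$maxSeconds = 300;
$out = shell_exec("ps -C php5-cgi -o pid=,etimes=");
foreach (explode("\n", trim((string) $out)) as $line) {
    if ($line === '') continue;
    list($pid, $age) = preg_split('/\s+/', trim($line));
    if ((int) $age > $maxSeconds) {
        echo "killing PID $pid (running {$age}s)\n";
        shell_exec("kill -9 " . (int) $pid);
    }
}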

Related

Monitor php-fpm max processes count script

We have an issue on our production server: some bug in our system locks/hangs a php-fpm process, which is never released. Over a period of 10-15 minutes this causes more processes to lock (probably while trying to access a shared resource that is never released), and after a while the server cannot serve any new users because no free php-fpm processes are available.
In parallel with trying to find what is creating that deadlock, we were thinking of creating a simple cron job which runs every 1-2 minutes and, if it sees the process count above X, either kills all php-fpm processes or restarts php-fpm.
What do you think of that simple temporary fix for the problem?
A simple PHP script:
// The [p]hp-fpm pattern keeps grep from matching its own process.
$processCount = (int) shell_exec("ps aux | grep '[p]hp-fpm' | grep -c USERNAME");
if ($processCount >= 60) {
    echo "killing all processes";
    // Note: shell_exec() never throws, so the original try/catch was a no-op.
    // Kill whatever is holding the FPM port, then restart the service.
    shell_exec("kill -9 $(lsof -t -i:9056)");
    shell_exec("sudo service php56u-php-fpm restart");
    // check how many processes remain
    $processCount = (int) shell_exec("ps aux | grep '[p]hp-fpm' | grep -c USERNAME");
}
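For reference, the cron entry driving this could look like the following; the script path is just a placeholder:
*/2 * * * * php /path/to/fpm-check.php >> /var/log/fpm-check.log 2>&1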
Killing all PHP processes doesn't seem like a good solution to your problem. It would also kill legitimate processes, return errors to visitors, and generally just bury the problem deeper. By killing processes indiscriminately you may also introduce data inconsistencies, corrupt files, and other problems.
Maybe it would be better to set a timeout, so that a process is killed only if it takes too long to execute.
You could add something like this to php-fpm pool config:
request_terminate_timeout = 3m
and/or max_execution_time in php.ini
You can also enable logging in php-fpm config:
slowlog = /var/log/phpslow.log
request_slowlog_timeout = 2m
This will log slow requests and may help you find the culprit of your issue.
It's not a good solution to kill PHP processes. In your PHP-FPM pool config file (/etc/php5/pool.d/www.conf),
set pm.max_requests = 100 so that after 100 requests the process will close and another process will start to handle subsequent requests.
There may also be a problem with your code; please make sure each request actually finishes.
If the problem is in your script, try request_terminate_timeout = 2m:
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
Please note that if you are doing some long polling, this may affect your code.

Limits set in the php.ini file

I have a system where I want users to be able to download large amounts of data. Some of the files can be over 800 MB. The problem is that PHP times out before the download is complete. I can get just under 250 MB worth; someone on a slower computer got considerably less.
I think the problem lies in the php.ini file and have increased some of the values, which hasn't made any difference. I've found three sections of the file that possibly need to be changed, but I don't know what they do and can't seem to find out in the PHP manual. I was wondering if somebody could tell me what they do and whether they could affect my issue.
; Default timeout for socket based streams (seconds)
default_socket_timeout = 360
; Connect timeout
;mssql.connect_timeout = 5
; Query timeout
;mssql.timeout = 60
; Default timeout in seconds.
pfpro.defaulttimeout = 30
Can anyone help me?
Have you tried this?
set_time_limit(0);
This is a likely candidate to have its value raised:
max_execution_time = 1800
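If raising the limits still isn't enough, another option is to disable the limit and stream the file in chunks, which also avoids holding the whole file in memory. A minimal sketch, where the file path and chunk size are assumptions:
<?php
// Stream a large file in chunks so the download is not cut short.
set_time_limit(0);                 // disable the execution-time limit for this request
$path = '/data/export.zip';        // hypothetical file
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
$fh = fopen($path, 'rb');
while (!feof($fh)) {
    echo fread($fh, 8192);         // send 8 KB at a time
    flush();                       // push the chunk out to the client
}
fclose($fh);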

Linux background task maximum process time

Is there any fixed limit on how long a background task can run?
This is how I run the script (background task) manually:
php /var/www/html/app_v2/console.php massbulkinsert app.example.com 10 > /dev/null &
This script processes a huge data set; it takes about 1 hour to complete.
The first time it stopped at the 10100th record; the second time at the 9975th record. There is no pattern to where it terminates.
In top, the script's PID and the MySQL PID were at 98%, 100%, and 130% CPU most of the time, and free memory was about 200 MB. There is enough disk space.
It's a bit of a wild guess, but when a smaller amount of data succeeds and larger amounts crash, it usually has to do with memory issues.
You should have a look at /etc/php5/cli. There is probably also a folder named cgi in there; depending on how your framework executes the background script, I would expect one of these two configurations to be used.
The files with the 'ini' extension are PHP configuration files, and these are among the values you're interested in (values are the defaults on Debian 8):
; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 30
; Maximum amount of memory a script may consume
; http://php.net/memory-limit
memory_limit = -1
Note that there is also a timeout on how long the script can spend reading data sent to it through, say, a pipe (max_input_time). But judging by your command, you're not piping values in via stdin; you're most likely reading a file already on the disk.
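If memory is the suspect, a cheap way to confirm it is to log memory use per batch and watch whether it grows without bound before the crash point. A rough sketch; the data source, batch size, and insertBatch() helper are all hypothetical:
<?php
// Log memory consumption after each batch of the bulk insert.
$records   = file('/data/import.csv');     // hypothetical data source
$batchSize = 500;                          // assumption: tune to your data
foreach (array_chunk($records, $batchSize) as $i => $batch) {
    insertBatch($batch);                   // hypothetical insert helper
    fprintf(STDERR, "batch %d: %.1f MB\n",
        $i, memory_get_usage(true) / 1048576);
}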
Hope it helps

APC restarts sometimes

After installing APC and watching the apc.php status script, I see the uptime reset every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0
An APC cache lives only as long as its hosting process. It could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered anywhere php -b is used)
php-fpm: pm.max_requests individually for every pool.
You could try setting the option you are using to its "doesn't matter" value (usually 0), then test the setup with a simple hello-world PHP script and ApacheBench: ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed) to see whether the worker PIDs change or not.
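For the test script, something as small as this will do; it just prints which worker PID served the request, so repeated requests reveal whether processes are being recycled:
<?php
// Print the worker PID; if it changes between requests, workers are being recycled.
echo getmypid();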
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every 2 hours.
TTL must never be set to 0
Just read the manual to understand how TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC, copy it to your webserver to check memory usage.
You must allow enough memory so APC still has 20% free after some hours of running. Check this on a regular basis.
If you don't have enough memory available, use the filters option to prevent rarely accessed files from being cached.
Check my answer there
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today, found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go to Access WHM > Apache Configuration > Piped Log Configuration and enable Piped Apache Logs.

PHP Script Times out after 45 seconds

I am running a huge import to my database (about 200k records) and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds every pass (internal server error)... it only does about 200 records at a time, sometimes less. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why it would be doing this.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page but it did not work.
It may also be helpful to note that my error file reads: "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page and, of course, running from the command line:
php -f filename.php
which, as Brent says, is the sensible way round it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. Should be long enough for most but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line; ulimit -t will show just the limit on CPU time.
If your CPU time is limited, you might have to process your import in batches.
First, you should be running the script from the command line if it's going to take a while. At the very least, your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-type: text/html") before the import kicks off.
If you are on a shared host, it's possible there are system restrictions whereby long-running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally loosened for non-web scripts, so running it from the command line would help.
The 45 seconds could be a coincidence; it could be how long it takes for you to reach the memory limit. Increasing the memory limit would look like this:
ini_set('memory_limit', '256M');
It could also be the actual DB connection that is timing out. What DB server are you using?
For me, MSSQL times out after 60 seconds by default, with an extremely unhelpful error: "Database context changed". To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all,
max_input_time and
set_time_limit(0)
will often only work as expected on a VPS or dedicated server; shared hosts commonly override them. Instead, you can structure your implementation along the following lines (see the sketch after this list):
First, read the whole CSV file.
Then grab only 10 entries (rows) or fewer and make an AJAX call to import them into the DB.
Call the AJAX endpoint with 10 entries each time, and echo something to the browser after each call; this way your script will never time out.
Follow the same method until the CSV rows are finished.
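A minimal sketch of such an endpoint; the CSV path, the offset parameter, and the importRow() helper are all hypothetical:
<?php
// Each AJAX call imports one small slice of the CSV, so no single
// request runs long enough to hit the timeout.
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 10;
$rows   = array_slice(file('/data/import.csv'), $offset, $batch);
foreach ($rows as $row) {
    importRow(str_getcsv($row));           // hypothetical insert helper
}
// Tell the browser where to resume; null means the import is done.
echo json_encode(array('next' => count($rows) ? $offset + $batch : null));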
