After installing APC, I see in the apc.php script that the uptime resets every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0
An APC cache lives only as long as its hosting process. It could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered anywhere php -b is used)
php-fpm: pm.max_requests individually for every pool.
You could try setting the option you are using to its "doesn't matter" value (usually 0) and test the setup with a simple hello world PHP script and apachebench: ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed) to see whether the worker PIDs are changing or not.
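For example, hello.php can be as small as this sketch, which just reports the PID of the worker that served the request:
<?php
// hello.php: report which worker process served this request.
// If the PID keeps changing during the ab run, the workers (and the
// APC cache living inside them) are being recycled.
echo "hello from PID " . getmypid() . "\n";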
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every 2 hours.
TTL must never be set to 0
Just read the manual to understand how TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC; copy it to your webserver to check memory usage.
You must allow enough memory so that APC still has 20% free after running for some hours. Check this on a regular basis.
If you don't have enough memory available, use the apc.filters option to prevent rarely accessed files from being cached.
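As a rough sketch, the relevant php.ini settings look like this (the size, TTL and filter pattern are only placeholders you have to tune for your own codebase):
apc.shm_size = 128M
apc.ttl = 3600
apc.filters = "-/var/www/legacy/rarely-used/.*"
Depending on the APC version, the size may need to be given as a plain number of megabytes (128) instead of 128M.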
Check my answer there
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today, found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go into WHM > Apache Configuration > Piped Log Configuration and enable Piped Apache Logs.
I have an ownCloud installation on Ubuntu with 5.3 GB of RAM.
Every day I get an out-of-memory event which kills the MySQL process and the ownCloud website fails. So I have to restart that server (dedicated to ownCloud) at least once a day (not a good practice...).
Here is a ps aux --sort -pmem | more;
There are more than 50 processes like files:scan --all. They increase continuously until the OOM.
I read something about occ but I can't figure out how to disable it.
I tried to edit mpm_prefork.conf and set;
I also read about editing overcommit_memory and disabling it, but I don't want to cause a kernel panic either.
Every process uses about 1.1% of memory. In a few hours they reach 100% and the kernel kills MySQL and other processes.
Any idea/solution?
We have an issue where, on a production server, some bug in our system locks/hangs a php-fpm process which is never released. Over a period of 10-15 minutes this causes more processes to lock (probably while trying to access a shared resource that is not released), and after a while the server cannot serve any new users because no free php-fpm processes are available.
In parallel with trying to find what is creating that deadlock, we were thinking of creating a simple cron job which runs every 1-2 minutes and, if it sees the process count above X, either kills all php-fpm processes or restarts php-fpm.
What do you think of that simple temporary fix for the problem?
A simple PHP script:
<?php
// Count the php-fpm worker processes owned by USERNAME.
$processCount = (int) shell_exec("ps aux | grep php-fpm | grep -c USERNAME");
$killAll = $processCount >= 60;

if ($killAll) {
    echo "killing all processes";
    // Kill whatever is still bound to the FPM port (9056 here).
    shell_exec("kill -9 $(lsof -t -i:9056)");
    // Restart the php-fpm service.
    shell_exec("sudo service php56u-php-fpm restart");
    // Check how many workers are left now.
    $processCount = (int) shell_exec("ps aux | grep php-fpm | grep -c USERNAME");
}
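The cron entry behind it would be something along these lines in /etc/cron.d/ (the script path and log file are only placeholders):
*/2 * * * * root php /path/to/fpm-watchdog.php >> /var/log/fpm-watchdog.log 2>&1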
Killing all PHP processes doesn't seem like a good solution to your problem. It would also kill legitimate processes, return errors to visitors, and generally just hide the problem deeper. You may also introduce data inconsistencies, corrupt files, and other problems by killing processes indiscriminately.
Maybe it would be better to set some timeout, so the process would be killed if it takes too long to execute.
You could add something like this to php-fpm pool config:
request_terminate_timeout = 3m
and/or max_execution_time in php.ini
You can also enable logging in php-fpm config:
slowlog = /var/log/phpslow.log
request_slowlog_timeout = 2m
This will log slow requests and may help you find the culprit of your issue.
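Put together, the relevant part of the pool configuration would look something like this (the pool name is just the conventional default; the timeouts and log path are the examples from above):
[www]
request_terminate_timeout = 3m
request_slowlog_timeout = 2m
slowlog = /var/log/phpslow.log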
It's not a good solution to kill PHP processes. In your php-fpm config file (/etc/php5/pool.d/www.conf),
set pm.max_requests = 100 so that after 100 requests the process will close and another process will start for the remaining requests.
Also, maybe there's a problem with your code; please make sure the request is ending.
If the problem is with your script, try request_terminate_timeout = 2m:
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
Please note that if you are doing some long polling, this may affect your code.
I have a PHP script that, when called via a browser, times out after exactly 60 seconds. I have modified httpd.conf and set the Timeout directive to 300. I have modified all PHP timeout settings to extend longer than 60 seconds. When I run the script from the command line it completes. When I execute it through the browser, each time after 60 seconds, POOF, timeout.
I have also checked for the existence of timeout directives in any of the .htaccess files. Nothing there. I am completely stumped.
I am also forcing set_time_limit(0) within the PHP code.
I've been digging and testing for a week and have exhausted my knowledge. Any help is greatly appreciated.
You need to make sure you are setting a higher timeout limit both in PHP and in Apache.
If you set a high max_execution_time in php.ini your script won't time out; however, if you are not flushing the output buffer of the script's results to the browser on a regular basis, the script might time out on the Apache end due to a network timeout.
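For example, if the script produces no visible output while it works, you can keep Apache's side of the connection alive by flushing a little output every so often; a minimal sketch (the sleep loop only simulates a long-running job):
<?php
set_time_limit(0);            // no PHP-side execution limit
for ($i = 0; $i < 120; $i++) {
    sleep(1);                 // stand-in for one chunk of real work
    echo ".";                 // emit a little output
    if (ob_get_level() > 0) {
        ob_flush();           // flush PHP's own output buffer, if one is active
    }
    flush();                  // push the output on to Apache and the browser
}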
In httpd.conf do:
Timeout 216000
In php.ini do:
max_execution_time = 0
(setting it to 0 makes it never time out, like with a CLI (command line) script).
Make sure you restart Apache after you are done! On most Linux distros you can do this by issuing the command (as root):
service httpd restart
Hope this helps!
There are numerous places where the max time can be set. If you are using FastCGI, especially through something such as Virtualmin, there is an additional set of max_execution_time settings that are hidden from you unless you have access.
In short, you will need to figure out all the places in your PHP stack where there can be an execution time limiter, raise those values, restart the server, and then do
set_time_limit(0);
for good measure.
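For instance, if your stack happens to use mod_fcgid, its own I/O and busy timeouts can cut a request off regardless of max_execution_time; something like the following in the Apache configuration (the values are only examples) may need raising:
FcgidIOTimeout 300
FcgidBusyTimeout 300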
Without more information about your specific setup and given my experience in dealing with execution time hangups in PHP, that's the most I can tell you.
I have no idea why or how this came to be, but for some odd reason PHP scripts on my server, once they use ini_set to try to change the memory_limit setting, crash completely. No error messages, no nothing. If I call the script through the browser, all I get is a blank page.
Any hints on this?
Update:
running 'free' returns
total used free shared buffers cached
Mem: 8190820 7922056 268764 0 565124 6598656
-/+ buffers/cache: 758276 7432544
Swap: 2102456 0 2102456
Is something hogging my memory?
Running ps aux | grep apache gives me 'ERROR: Unsupported option (BSD syntax)'.
Checking manually, I found a whole bunch of lines referring to:
/usr/sbin/apache2 -k start
All at about 0.3% memory usage and owned by 'www-data'.
The scary part is that none of the processes listed by 'ps aux' uses more than 0.8% of the memory. And if I add up all the percentages listed, I never arrive at what 'free' is telling me.
I seem to remember there being a problem with requesting anything over 2GB. I think 2GB is the magic cut-off in at least some versions of PHP.
Try with this code:
ini_set('memory_limit', '-1');
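To check whether the override is being applied at all before things go wrong, a quick sketch is to print the limit before and after the call:
<?php
echo "before: " . ini_get('memory_limit') . "\n";
$old = ini_set('memory_limit', '-1');   // returns the previous value, or false on failure
echo "after:  " . ini_get('memory_limit') . "\n";
var_dump($old);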
I'm having an issue with memcached. Not sure if it's memcached, PHP, or TCP sockets, but every time I run a benchmark with 50 or more concurrency against a page that uses memcached, some of those requests fail under apache ab. I get the (99) Cannot assign requested address error.
When I do a concurrency test of 5000 against a regular phpinfo() page, everything is fine. No failed requests.
It seems like memcached cannot support high concurrency, or am I missing something? I'm running memcached with the -c 5000 flag.
Server: (2) Quad Core Xeon 2.5Ghz, 64GB ram, 4TB Raid 10, 64bit OpenSUSE 11.1
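For reference, a typical way to start memcached with that connection limit looks like this (the memory size and user are only example values; the relevant part here is -c):
memcached -d -u memcached -m 1024 -c 5000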
Ok, I've figured it out. Maybe this will help others who have the same problem.
It seems like the issue can be a combination of things.
Set server.max-worker in lighttpd.conf to a higher number.
Original: 16 Now: 32
Turn off keep-alive in lighttpd.conf; it was keeping the connections open for too long.
server.max-keep-alive-requests = 0
Change ulimit -n open files to a higher number.
ulimit -n 65535
If you're on linux use:
server.event-handler = "linux-sysepoll"
server.network-backend = "linux-sendfile"
Increase max-fds
server.max-fds = 2048
Lower the TCP TIME_WAIT before recycling; this closes connections faster.
In /etc/sysctl.conf add:
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
Make sure you force it to reload with: /sbin/sysctl -p
After I've made the changes, my server is now running 30,000 concurrent connections and 1,000,000 simultaneous requests without any issue, failed requests, or write errors with apache ab.
Command used to benchmark: ab -n 1000000 -c 30000 http://localhost/test.php
My Apache can't get even close to this benchmark. Lighttpd makes me laugh at Apache now. Apache crawls at around 200 concurrency.
I'm using just a 4-byte integer as a page counter for testing purposes. Other PHP pages work fine even with 5,000 concurrent connections and 100,000 requests. This server has a lot of horsepower and RAM, so I know that's not the issue.
The page that seems to die has nothing but 5 lines of code to test the page counter using memcached. Making the connection gives me this error: (99) Cannot assign requested address.
The problem starts to arise at 50 concurrent connections.
I'm running memcached with -c 5000 for 5000 concurrency.
Everything is on one machine (localhost)
The only process running is SSH, Lighttpd, PHP, and Memcached
There are no users connected to this box (test machine)
Linux -nofile is set to 32000
That's all I have for now; I'll post more information as I find it. It seems like there are a lot of people with this problem.
I just tested something similar with a file:
<?php
// Connect to the local memcached instance and bump a visitor counter.
$mc = memcache_connect('localhost', 11211);
$visitors = memcache_get($mc, 'visitors') + 1;
memcache_set($mc, 'visitors', $visitors, 0, 30); // flags = 0, expire after 30 seconds
echo $visitors;
running on a tiny virtual machine with nginx, php-fastcgi, and memcached.
I ran ab -c 250 -t 60 http://testserver/memcache.php from my laptop in the same network without seeing any errors.
Where are you seeing the error? In your php error log?
This is what I used for Nginx/php-fpm, adding these lines in /etc/sysctl.conf (on Rackspace dedicated servers with Memcached/Couchbase/Puppet):
# Memcached fix
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
I hope it helps.