I have a Windows Server that has random spikes of high CPU usage, and looking at Process Explorer and Windows Task Manager, it seems there are a high number of php-cgi.exe processes running concurrently, sometimes 6-8 instances, each taking around 10-15% of CPU. Sometimes they are so bad that they make the server unresponsive.
In the FastCGI settings, I've set MaxInstances to 4, so there shouldn't be more than 4 php-cgi.exe processes running simultaneously. I would appreciate advice or direction on how to actually limit the number of instances to 4.
Additional notes: I've also set instanceMaxRequests to 10000 and PHP_FCGI_MAX_REQUESTS to 10000.
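For reference, a minimal sketch of the relevant applicationHost.config section (the PHP path is illustrative; adjust it to your install):

```xml
<!-- applicationHost.config, system.webServer/fastCgi section.
     fullPath is illustrative -- point it at your php-cgi.exe. -->
<fastCgi>
  <application fullPath="C:\PHP\php-cgi.exe"
               maxInstances="4"
               instanceMaxRequests="10000">
    <environmentVariables>
      <environmentVariable name="PHP_FCGI_MAX_REQUESTS" value="10000" />
    </environmentVariables>
  </application>
</fastCgi>
```

One thing worth checking: php-cgi.exe processes are spawned as children of the IIS worker process (w3wp.exe), so each application pool serving PHP maintains its own FastCGI process pool. With two pools and maxInstances="4" you can legitimately see up to 8 php-cgi.exe processes.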
I am running IIS 10 on Windows Server 2016. I am using PHP 8.1.4 NTS with FastCGI. PHP is optimized following the usual recommendations.
I noticed that the server's response times start to increase after about 2 hours. For example, the TTFB is roughly 150-200ms after IIS/worker processes are started. Sites load very quickly. Then, after about 2 hours or so, performance starts to decline where TTFB increases upward and eventually plateaus at around 500ms. Sometimes, it will even go as high as 800ms.
If I recycle the IIS application pool, we're back to ~200ms, where it stays for another 2 hours or so.
I'm trying to keep our server response times fast, and am curious what could be causing the performance to degrade after a few hours. Should we set the pool to recycle more often? That can work, but it seems like something else is going on, and we shouldn't have to do that.
The server does not have high CPU, disk, or RAM usage. The w3wp and php-cgi processes have very little memory usage (10-20MB each). CPU is almost always under 10%, and RAM is only 50% in use.
Optimized IIS FastCGI parameters and application pool parameters to the recommended settings (10k requests, etc.)
Reviewed the MySQL 8.0 server logs for slow queries, but nothing poorly performing was found.
I run my WordPress PHP website on Windows Server with IIS 7.5. I have noticed that even when there are no visitors on my website, some php-cgi.exe processes keep running for a long time, each at 15-20% CPU, so together they push the CPU to almost 100%, slowing down my website when a new visitor comes in.
I turned off all plugins and turned off rewrite rules, but nothing helped.
What can be done to stop the php-cgi.exe processes from running after a visitor has left my website? Or any other ideas?
Thanks
This question is specific to PHP rather than IIS.
There are many reasons that can lead to the php-cgi.exe process consuming 100% CPU in Task Manager.
For example, malformed PHP scripts containing infinite loops: such poorly coded scripts keep the php-cgi.exe worker busy on the server and can eat 100% of the CPU.
A large number of PHP processes on the server also takes up a considerable amount of resources: every user request can create processes on the server, so CPU usage shoots up.
Besides, PHP can cause high CPU usage if fetching certain content repeatedly fails due to insufficient user privileges.
See this blog for more details.
https://bobcares.com/blog/php-cgi-exe-high-cpu/
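For the infinite-loop case specifically, one simple guard is to cap script runtime and memory in php.ini, so a runaway script gets killed instead of pinning a CPU core; the values below are illustrative:

```ini
; php.ini -- illustrative limits, tune to your application
max_execution_time = 30   ; abort scripts that run longer than 30 seconds
memory_limit = 128M       ; bound the memory of a single request as well
```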
I want to find a way to limit the CPU usage of a PHP script.
The script runs from a cron task and works in CLI mode.
The problem is that after it starts, CPU usage is at 100%, which causes the site on the same server to stop responding for the duration of the background task.
Is it possible to limit CPU usage for this script? For example, to 50% maximum.
VPS Linux Ubuntu 16.
RAM 6 GB.
CPU: 2 cores.
PHP 7.2.
You could use nice or renice to lower the process priority; e.g. renice +10 1234 makes process 1234 low priority for scheduling (the range is -20 to +19, where smaller values mean higher priority).
With cpulimit it is possible to cap CPU usage, e.g. cpulimit -l 50 -p 1234 limits process 1234 to 50% of one core.
See also https://scoutapm.com/blog/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups
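A minimal sketch of both approaches (the busy loop just stands in for the real script, and the cpulimit line assumes the package is installed):

```shell
#!/bin/sh
# Start a CPU-bound job at the lowest scheduling priority (nice 19)
nice -n 19 sh -c 'while :; do :; done' &
PID=$!

# Show the nice value the scheduler sees for the job (prints 19)
ps -o ni= -p "$PID"

# Alternatively, cap it to 50% of one core (requires cpulimit):
# cpulimit -l 50 -p "$PID" &

kill "$PID"
```

In a crontab, the same idea fits on one line, e.g. `0 3 * * * nice -n 19 cpulimit -l 50 php /path/to/script.php` (path illustrative; cpulimit invocation syntax varies slightly by version). Keep in mind that nice only deprioritizes the script when other processes compete for the CPU, while cpulimit enforces a hard cap.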
I have a web application written in Laravel/PHP that is in its early stages and generally serves about 500-600 reqs/min. We use MariaDB and Redis for caching, and everything is on AWS.
For events we want to promote on our platform, we send out a push notification (mobile platform) to all users, which results in a roughly 2-minute traffic burst that takes us to 3.5k reqs/min.
At our current server scale, this completely bogs down the application servers' CPU which usually operate at around 10% CPU. The Databases and Redis clusters seem fine during this burst.
Looking at the logs, it seems all PHP-FPM worker pool processes get occupied and begin queuing up requests from the Nginx upstream.
We currently have:
three m4.large servers (2 cores, 8gb RAM each)
dynamic PHP-FPM process management, with a max of 120 child processes (servers) on each box
My questions:
1) Should we increase the FPM pool? It seems that memory-wise, we're probably nearing our limit
2) Should we decrease the FPM pool? It seems possible that we're spinning up so many processes that the CPU gets bogged down and is unable to really complete any of them; we might therefore get better results with fewer.
3) Should we simply use larger boxes with more RAM and CPU, which will allow us to add more FPM workers?
4) Is there any FPM performance tuning that we should be considering? We use Opcache, however, should we switch to static process management for FPM to cut down on the overhead of processes spinning up and down?
There are too many child processes relative to the number of cores.
First, you need to know the server's status at normal and at burst times.
1) Check the number of php-fpm processes.
ps -ef | grep 'php-fpm: pool' | wc -l
2) Check the load average. With 2 cores, a load of 2 or more means work is starting to be delayed.
top
htop
glances
3) Depending on the service, start by adjusting to roughly twice the number of cores.
; Example
;pm.max_children = 120 ; normal) pool 5, load 0.1 / burst) pool 120, load 5 **Bad**
;pm.max_children = 4 ; normal) pool 4, load 0.1 / burst) pool 4, load 1
pm.max_children = 8 ; normal) pool 6, load 0.1 / burst) pool 8, load 2 **Good**
load 2 = Maximum Performance 2 cores
It is more accurate to test the web server under a load similar to the actual load, using ApacheBench (ab).
ab -c100 -n10000 http://example.com
Time taken for tests: 60.344 seconds
Requests per second: 165.72 [#/sec] (mean)
100% 880 (longest request)
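Applied to the 2-core m4.large boxes in the question, a pool definition in this spirit would cut the pool from 120 workers down to near the core count (all values illustrative, and pm = static avoids the churn of workers spinning up and down):

```ini
; php-fpm pool configuration -- illustrative values for a 2-core box
pm = static
pm.max_children = 8        ; roughly 2-4x the core count; verify with load average
pm.max_requests = 10000    ; recycle workers periodically to contain slow leaks
```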
We have a very lightweight tracking script written in PHP, running on Apache/2.2.14 (Ubuntu). The script will receive a high number of concurrent connections, but each connection will be fast. Currently, we are using prefork, configured as follows:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 600
MaxClients 600
MaxRequestsPerChild 0
We also have KeepAlive Off
I've played around with these settings quite a bit and have been testing with ApacheBench. Any time I raise the concurrent connections with ab, I get "apr_socket_recv: Connection reset by peer (104)". I've also raised the max number of file descriptors.
I'm wondering if any apache gurus out there can point me in the right direction for this type of setup (high number of lightweight connections). What are the optimum values for StartServers, Min/MaxSpareServers, etc? Is the worker MPM worth looking into? Any thoughts are welcome.
I can give you some hints:
Try to use Apache in worker mode instead of prefork. To do that, either put PHP in FastCGI mode (php-fpm) or take the risk of keeping it in mod_php inside a threaded Apache worker (the risk is that some external libraries may conflict, e.g. over locale settings; but if your PHP tracking code is small, you may be able to verify that everything in it is thread-safe -- PHP 5 without any external libs is thread-safe).
If your MaxClients is 600, then put 600 in StartServers, MinSpareServers, and MaxSpareServers; otherwise Apache creates new forks at a very slow rate:
the parent process creates new children at a maximum rate of 1 per second.
If you think your server can handle 600 forks, then take the RAM, create the 600 forks up front, and maybe alter the MaxRequestsPerChild setting to something like 3000, so that old forks are occasionally removed and recreated (avoiding memory leaks). You will not lose any time to the fork creation rate, and Apache will not lose any time managing the creation and deletion of children.
Disabling KeepAlive, as you did, is a good thing in your case.
To know the right value for MaxClients, either in prefork or worker mode, just test it: track the memory used by one fork and divide the size of your available RAM by that number. Be careful: PHP will use some RAM as well; with mod_php this RAM is included in the Apache fork's memory usage, while with php-fpm it lives in the php-fpm processes. Check the memory_limit setting in PHP for the maximum size of one PHP process.
Reduce your PHP RAM usage so that you can run more PHP scripts in parallel. Do not build big arrays, keep the session light, etc. Using the APC opcode cache may reduce your memory footprint (and do other nice things as well), as may using PHP 5.3 instead of 5.2.
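If you do switch to the worker MPM, a starting point could look like the following (every number is illustrative; size ServerLimit x ThreadsPerChild against your measured per-process RAM as described above):

```apacheconf
# Illustrative worker-MPM sizing for many short requests with KeepAlive Off
<IfModule mpm_worker_module>
    StartServers          4
    ServerLimit          16
    ThreadsPerChild      64
    MaxClients         1024
    MaxRequestsPerChild 3000
</IfModule>
```

Note that MaxClients must not exceed ServerLimit times ThreadsPerChild (16 x 64 = 1024 here).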