cgi/fastcgi process - 0% CPU and N% Memory? (php)

I have a PHP WordPress website hosted on IIS and have been using FastCGI; below is my FastCGI configuration.
Below is a screenshot of Task Manager.
There are many FastCGI processes using 0% CPU but each consuming a portion of RAM. Is this OK, or is there some misconfiguration causing it?
I researched this online but did not find anything.

As most web servers do, IIS reuses its PHP processes many times before restarting them. It leaves idle processes running so it can serve incoming requests without the extra latency of spinning up new processes.
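If the idle pool itself is a concern, its size and lifetime are tunable in IIS's <fastCgi> settings. A minimal sketch using appcmd, assuming a typical php-cgi.exe path (the path and numbers are illustrative, not your actual values):

    %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi ^
      /[fullPath='C:\php\php-cgi.exe'].maxInstances:4 ^
      /[fullPath='C:\php\php-cgi.exe'].idleTimeout:300 ^
      /[fullPath='C:\php\php-cgi.exe'].instanceMaxRequests:10000 ^
      /commit:apphost

maxInstances caps how many php-cgi.exe processes the pool may hold, idleTimeout (in seconds) retires processes that sit unused, and instanceMaxRequests recycles each process after that many requests. So a handful of 0% CPU php-cgi.exe entries in Task Manager is expected behavior, not a misconfiguration.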

Related

php-cgi suddenly very slow with minimal CPU on DMZ server

We have a Moodle IIS implementation where the primary data/IIS server is on our LAN, but we also have a public-facing IIS server in our DMZ. Until recently, performance when accessing Moodle via the DMZ server was on par with accessing it via the LAN server; but last week I noticed that access via the DMZ was very slow and I was often getting 500 timeouts. I increased the FastCGI Activity Timeout and the timeouts disappeared, but the site is now painfully slow.
I monitored Activity Monitor when browsing the site using the LAN server and php-cgi.exe shows CPU goes up while actively browsing (20-25% or so). Monitoring the same on the DMZ server shows no change in CPU utilisation for the php-cgi processes - they all stay at 0-1%.
I moved the DMZ server to the LAN and the performance was immediately as expected: pages loaded quickly and php-cgi CPU utilisation goes up to 20-25% while browsing.
I tested pings and bandwidth by copying files between the LAN and DMZ servers; pings are around 20 ms, and bandwidth seems capped at 100 Mbps on the DMZ. That was unexpected, but I don't have historic pings to prove that latency used to be lower and bandwidth used to be higher.
Our core network provider recently performed maintenance and access to our DMZ dropped completely for a period until they 'fixed' the issue. It feels like they've introduced a bottleneck recently (traffic now routing through a 100 Mbps adapter?) and I have an open ticket, but I'm not sure how to prove this is the issue.
The only logs I can think to check are for IIS and looking at response-time. It looks like this has gone up 2-4x since the maintenance, but it's not as conclusive as I'd like (I'm guessing due to a good amount being locally cached). Is there anything else I could/should be looking at?
Servers are Windows Server 2012 R2 Datacenter, PHP is 7.4 NTS 64-bit, and Moodle is 3.10.
Many thanks.
It is difficult to reproduce your problem from the description alone. When a server hangs, crashes, or performs poorly, it is usually necessary to capture the server's thread stacks (a thread dump) for analysis. So I suggest you open a case via https://support.microsoft.com, where professional technicians can assist you in capturing and analyzing the dump file.
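Independent of a support case, raw network numbers between the two servers would strengthen the ticket with the network provider. A hedged sketch using iperf3 and ping (hostnames are placeholders):

    REM on the LAN server: start a listener
    iperf3 -s

    REM on the DMZ server: run a 10-second throughput test against it
    iperf3 -c lan-server.example.local -t 10

    REM sample round-trip latency
    ping -n 20 lan-server.example.local

If throughput sits just under 100 Mbps regardless of load, that is consistent with the suspected 100 Mbps link in the path.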

High CPU on Windows Server IIS 7.5: php-cgi.exe processes remain at high CPU although there are no visitors on the website

I run my WordPress PHP website on Windows Server with IIS 7.5. I have noticed that even when there are no visitors on my website, some php-cgi.exe processes keep running for a long time, each at 15-20% CPU, so together they push the CPU to almost 100%, slowing the website down when a new visitor comes in.
I turned off all plugins and I turned off the rewrite rules, but nothing helped.
What can be done to stop the php-cgi.exe processes from running after a visitor leaves the website? Or is there any other idea?
Thanks
The question is specific to PHP rather than IIS.
There are many reasons that can lead to the php-cgi.exe process consuming 100% CPU in Task Manager.
For example, malformed PHP scripts containing an infinite loop: such poorly coded scripts keep the php-cgi.exe PHP worker program busy and will eat 100% of the CPU.
A large number of PHP processes on the server also takes up a considerable amount of server resources; if every user instance spawns many processes, CPU usage shoots up.
Besides, if PHP runs into trouble fetching certain content because of missing user privileges, that can also cause high CPU usage.
See this blog for more details: https://bobcares.com/blog/php-cgi-exe-high-cpu/
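Whatever the root cause, PHP-level limits keep a single runaway script from pinning a core indefinitely. A minimal php.ini sketch (values illustrative):

    ; php.ini
    ; abort any script that runs longer than 30 seconds
    max_execution_time = 30
    ; abort any script that tries to allocate more than 128 MB
    memory_limit = 128M

This does not fix a buggy plugin, but it turns a stuck php-cgi.exe into a logged error instead of a permanently hot process.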

How best to troubleshoot an Apache web server with many requests in "being processed" status

I am running a Linux CentOS/Plesk box with a medium/high-traffic PrestaShop e-commerce website.
I use the stock Plesk configuration, with PHP 7.0 FPM served by Apache behind Nginx as a reverse proxy. I have only made some tweaks to the FPM pool settings according to the server's capacity, basically increasing the MaxChildren value to serve more requests.
For some days now I have been encountering occasional website slowdowns that I am trying to troubleshoot (the website has not seen any particular increase in visits lately).
I have already performed some checks:
- server logs: no notable entries in the error logs
- server load average: OK
- I/O "wa" value: OK
- MySQL has no slow queries during the slowdowns (SHOW FULL PROCESSLIST never returns long-running queries)
- netstat: no DDoS or strange connections
I installed Apache's mod_status and noticed on the server-status page that during slowdowns there is a high number of "requests currently being processed" in "W" [sending reply] status (up to 70-80 of them for several seconds). So I can correlate the slowdowns with Apache being busy delivering requests, but I can't figure out why, or which application/page component is the source of the problem.
My question is: how can I identify the culprit of the slowdowns (a PHP script? a stuck external service during an Apache request?)
thank you.
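One standard way to find out which script is holding those "W" workers is PHP-FPM's slow log, which writes a stack trace for any request that exceeds a threshold. A minimal pool sketch (paths and threshold are illustrative; Plesk keeps its pool files elsewhere):

    ; FPM pool configuration, e.g. www.conf
    ; dump a PHP backtrace for any request slower than 5 seconds
    request_slowlog_timeout = 5s
    slowlog = /var/log/php-fpm/www-slow.log
    ; optionally, hard-kill requests that run far too long
    request_terminate_timeout = 30s

During the next slowdown, the traces in the slow log point at the page, query, or external call where workers are stuck.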

External connections from PHP/Apache max out server resources. How to deal with this?

I am connecting to an external service (e.g. Algolia) from my PHP backend. Say the external service takes a while to respond (e.g. >1 sec). For that duration, the Apache process keeps waiting on it and is unable to complete the request.
Further incoming traffic then causes Apache to fork more processes, because the previously forked ones are still waiting for their requests to complete.
In the worst case, if too many simultaneous requests hit the URL that invokes the external service, a lot of Apache processes go into a waiting state, ultimately causing the server's CPU load to skyrocket and the Ubuntu server to stop responding entirely.
Is there a workaround for this problem?
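One common mitigation, sketched here as an assumption rather than taken from the thread, is to bound the external call with client-side timeouts so a worker is always released quickly. A minimal PHP example using cURL (the URL and limits are placeholders):

    <?php
    // Call the external service with hard connect/total timeouts so an
    // Apache worker can never block on it for more than a few seconds.
    $ch = curl_init('https://external-service.example.com/search?q=shoes');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // max seconds to establish the connection
    curl_setopt($ch, CURLOPT_TIMEOUT, 3);        // max seconds for the whole request
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response === false) {
        // Timed out or failed: fall back to a cached or degraded result
        // instead of letting the request (and the Apache process) hang.
        $response = null;
    }

Beyond timeouts, the deeper fix is to take the slow call out of the request path entirely, for example by caching its results or pushing the work onto a queue.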

php5-fpm children and requests

I have a question.
I own a 128 MB VPS running a simple blog that gets just a hundred hits per day.
I have nginx + php5-fpm installed. Considering the low traffic and the limited RAM, I set FPM's pm to static with one server running. While doing random tests, such as running PHP scripts over HTTP that last over 30 minutes, I tried to open the blog from the same machine and noticed that the site was basically unreachable. So I went to the configuration and read this:
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; **This value sets the limit on the number of simultaneous requests that will be
; served.**
What shocked me the most is that I didn't know this, because I had always assumed that a PHP child would handle hundreds of requests at the same time, like an HTTP server does!
Did I get that right?
If, for example, I launch 2 php-fpm children and start 2 "long scripts" at the same time, will all the sites using the same PHP backend be unreachable?? How is this usable?
You may think: "duh! a PHP script (web page) is usually processed in 100 ms"... no doubt about that, but what happens if you have pages that take about 10 seconds each, and I have 10 visitors with php-fpm set to 5 servers, so only 5 requests are accepted at a time? Will they all be queued, or will they experience timeouts?
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second file will require the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" if the RAM used is actually less? I know I'm missing the big picture here.
You've basically got it right. You configured a static number of workers (and that number was "one") -- so that's exactly what you got.
But you don't understand quite how things typically work, since you say:
I always assumed that a PHP child would handle hundreds of requests
at the same time, like an HTTP server does!
I'm not really familiar with nginx, but consider the typical mod_php setup in Apache. If you're using mod_php, then you're using the prefork MPM, so every concurrent HTTP request is handled by a distinct httpd process (no threads). If you're tuning your apache/mod_php server for low memory, you're going to have to tweak Apache settings to limit the number of processes it will spawn (in particular, MaxClients).
Failing to tune this stuff means that when you get a large traffic spike, apache starts spawning a huge number of heavy processes (remember, it's mod_php, so you have the whole PHP interpreter embedded in each httpd process), and you run out of memory, and then everything starts swapping, and your server starts emitting smoke.
Tuned properly (meaning: tuned so that you ignore requests instead of allocating memory you don't have for more processes), clients will time out, but when traffic subsides, things go back to normal.
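For concreteness, that prefork tuning lives in httpd.conf; a rough sketch with illustrative low-memory values (Apache 2.2-era directive names, matching the MaxClients mentioned above):

    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       4
        # hard cap on concurrent httpd processes (MaxRequestWorkers in Apache 2.4)
        MaxClients           10
        # recycle children periodically to bound per-process memory growth
        MaxRequestsPerChild 500
    </IfModule>

Requests beyond MaxClients wait in the listen queue rather than spawning new interpreter-laden processes.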
Compare that with fpm and a smarter web server architecture like apache-worker or nginx. Now you have some much larger (still configurable!) pool of threads to handle HTTP requests, and a separate pool of php-fpm processes to handle just the requests that require PHP. It's basically the same thing: if you don't set limits on how many processes/threads can be created, you are asking for trouble. But if you do tune, you come out ahead, since only a fraction of your requests use PHP. So essentially, the average amount of memory needed per HTTP request is lower, and thus you can handle more requests with the same amount of memory.
But setting the number to "1" is too extreme. At "1", it doesn't even matter if you choose static or dynamic, since either way you'll just have one php-fpm process.
So, to try to give explicit answers to particular questions:
You may think: "duh! a PHP script (web page) is usually processed in 100 ms"... no doubt about that, but what happens if you have pages that take about 10 seconds each, and I have 10 visitors with php-fpm set to 5 servers, so only 5 requests are accepted at a time? Will they all be queued, or will they experience timeouts?
Yes, they'll all queue and eventually time out. The fact that you regularly have scripts that take 10 seconds to run is the real culprit here, though. There are lots of ways to architect around that (caching, work queues, etc.), but the right solution depends entirely on what you're trying to do.
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
They do apply. You can set up an apache/mod_php server the same way as you have with nginx/php-fpm -- just set apache's MaxClients to 1!
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second file will require the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" if the RAM used is actually less? I know I'm missing the big picture here.
Especially on Linux, lots of things that report memory usage can be very misleading. But think about it this way: that 30 KB is negligible, because most of PHP's memory was already allocated when the httpd process started.
A 128 MB VPS is pretty tight, but it should be able to handle more than one PHP process.
If you want to optimize, do something like this:
For PHP:
pm = static
pm.max_children = 4
For nginx, figure out how to control process and connection counts (whatever the equivalents of Apache's MaxClients, StartServers, MinSpareServers, and MaxSpareServers are), as sketched below.
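For reference, the rough nginx equivalents look like this (illustrative values; nginx workers are event-driven, so these numbers mean something different than Apache's per-request processes):

    # nginx.conf
    worker_processes  1;          # one worker is plenty for a small VPS
    events {
        worker_connections  256;  # max concurrent connections per worker
    }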
Then figure out how to generate some realistic load (ApacheBench, Siege, JMeter, etc.) and use vmstat, free, and top to watch your memory usage. Adjust pm.max_children and the nginx settings to be as high as possible without causing any significant swapping (according to vmstat). For example:
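A hedged example of that workflow (URL and numbers are placeholders):

    # generate load: 500 requests, 10 at a time
    ab -n 500 -c 10 http://your-blog.example.com/

    # meanwhile, watch memory and swap activity (the si/so columns)
    vmstat 1

    # and check overall memory headroom
    free -m

If si/so stay at zero under load, there is room to raise pm.max_children; if they climb, back off.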
