I use PHP-FPM with a STATIC pool, and the problem is that 2-3 of the 20 worker processes run at 80-100% CPU while the rest stay unused.
My question is: why do the other 17 processes stay unused?
We use an AWS c4.large instance.
Our Docker container is allocated 1024 CPU units and 2560 MB of RAM.
[Screenshots: Docker containers on the instance, all processes in the container, and top output.]
The PHP-FPM pm = static setting depends heavily on how much free memory your server has. Basically, if you are suffering from low server memory, then pm = ondemand or pm = dynamic may be better options. On the other hand, if you have the memory available, you can avoid much of the PHP process manager (PM) overhead by setting pm = static at the maximum capacity of your server. In other words, when you do the math, pm.max_children should be set to the maximum number of PHP-FPM processes that can run without creating memory-availability or cache-pressure issues, and not so high as to overwhelm the CPU(s) and leave a pile of pending PHP-FPM operations.
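For example, a back-of-the-envelope sketch (the 4 GB usable and 60 MB per worker here are assumptions; measure your own averages):

; pool config sketch, assuming ~4 GB usable for PHP and ~60 MB per worker
pm = static
pm.max_children = 60   ; 4096 MB / 60 MB ≈ 68, rounded down for headroom
pm.max_requests = 500  ; recycle each worker after 500 requests to cap slow leaks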
Related
I am trying to configure our server cluster to handle large spikes in traffic. What I have noticed is that when a spike hits, we see a lot of failures because PHP-FPM has to spawn many workers quickly.
I can offset this by setting pm.start_servers higher so the PHP-FPM processes are already waiting, but now this gives me some RAM-management dilemmas.
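To illustrate, the kind of change I mean looks like this (the numbers are just what I have been testing with, not a recommendation):

pm = dynamic
pm.max_children = 50
pm.start_servers = 30      ; pre-spawned so a spike does not trigger mass forking
pm.min_spare_servers = 10
pm.max_spare_servers = 40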
On a test server with just me and some crons using it, I spin up a load of workers and watch the RAM. Over time the PHP-FPM processes' memory usage steadily increases.
Why is that RAM left allocated inside the worker?
I am trying to understand why these processes gain RAM and then just keep it. What is that RAM, and when exactly does PHP-FPM recycle it?
A few months ago we moved our e-commerce website to a VPS after struggling with poor performance on shared hosting platforms. To handle an increase in traffic (avg. 300-500 daily visitors), we tweaked our PHP-FPM settings and increased pm.max_children from the default of 5 to 50. Currently, the PHP-FPM "pool" processes are showing high CPU usage (30-40%). Any tips to make those "pool" processes use less CPU? Thanks!
VPS Specs:
2 CPUs
Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
4GB RAM
WHM: CentOS 7.8 v86.0.18
Ecommerce platform: OpenCart 3.0.2.0
FPM has nothing to do with the CPU usage; that's your code.
That said, don't just arbitrarily change the number of worker processes without a sound basis for doing so, e.g. actual resource statistics.
With 300-500 daily users you're really unlikely to have 50 concurrent requests unless you're doing something strange.
The place I'm currently working at peaks at about 600 concurrent users and a grand maximum of 15-20 connections actually simultaneously doing anything. [Note: Much larger/broader backing infrastructure]
Do you really expect each CPU core to handle 25 simultaneous requests?
Can you reasonably fit 50 requests' worth of RAM into that 4GB?
Are you fine with those 50 idle PHP processes consuming 10-15MB of RAM apiece?
All that said, we can't tell you what in your code is using up resources, and it's not possible for you to post enough information for us to make more than a vague guess. You need to put measurements in place to find where that resource usage is happening, profile your code to find out why, and tune your infrastructure configuration to fit your specific application's requirements.
There's no one "magic" config that works for everyone.
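As a starting point for that measurement, something like this (assuming GNU ps/awk and processes named php-fpm; the count includes the master) reports the average resident memory per process:

# average resident memory per php-fpm process (ps reports RSS in KB)
ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {printf "%d procs, avg %.1f MB RSS\n", n, sum/n/1024}'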
I have a web application written in Laravel/PHP that is in its early stages and generally serves about 500-600 reqs/min. We use MariaDB, plus Redis for caching, and everything is on AWS.
For events we want to promote on our platform, we send out a push notification (mobile platform) to all users, which results in a roughly 2-minute traffic burst that takes us to 3.5k reqs/min.
At our current scale, this completely bogs down the application servers' CPUs, which usually operate at around 10%. The databases and Redis clusters seem fine during the burst.
Looking at the logs, it seems all PHP-FPM worker pool processes get occupied and requests from the Nginx upstream start to queue.
We currently have:
three m4.large servers (2 cores, 8 GB RAM each)
dynamic PHP-FPM process management, with a maximum of 120 child processes on each box
My questions:
1) Should we increase the FPM pool? Memory-wise, we're probably nearing our limit.
2) Should we decrease the FPM pool? It seems possible that we're spinning up so many processes that the CPU gets bogged down and can't really complete any of them, so we might get better results with fewer.
3) Should we simply use larger boxes with more RAM and CPU, which will allow us to add more FPM workers?
4) Is there any FPM performance tuning we should be considering? We already use OPcache; should we switch to static process management for FPM to cut down on the overhead of processes spinning up and down?
There are too many child processes in relation to the number of cores.
First, you need to know the server status at normal and burst times.
1) Check the number of php-fpm worker processes (the [p] trick stops grep from counting itself):
ps -ef | grep '[p]hp-fpm: pool' | wc -l
2) Check the load average. With 2 cores, a sustained load of 2 or more means work is starting to queue:
top
htop
glances
3) Depending on the service, start from about twice the number of cores and adjust from there.
; Example
;pm.max_children = 120 ; normal) pool 5, load 0.1 / burst) pool 120, load 5 **Bad**
;pm.max_children = 4 ; normal) pool 4, load 0.1 / burst) pool 4, load 1
pm.max_children = 8 ; normal) pool 6, load 0.1 / burst) pool 8, load 2 **Good**
load 2 = both cores fully utilized (the maximum for 2 cores)
It is more accurate to test the web server under a load similar to the real one using Apache Bench (ab):
ab -c100 -n10000 http://example.com/
Time taken for tests: 60.344 seconds
Requests per second: 165.72 [#/sec] (mean)
100% 880 (longest request)
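While ab runs, you can watch the pool size and load average from a second shell, for example:

# refresh every second: load average plus current php-fpm pool process count
watch -n1 "uptime; ps -ef | grep -c '[p]hp-fpm: pool'"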
I recently migrated my application from a single server with Docker to Google Kubernetes Engine for scaling reasons. I am new to the Kubernetes platform and may not yet fully understand its concepts, but I do get the basics.
I have successfully migrated my application to a cluster of 3 nodes, each with 1 vCPU and 3.75 GB RAM.
Now I am wondering what the best configuration is for the php-fpm processes running in a Kubernetes cluster. I have read a few articles on how to set up php-fpm processes, such as:
https://serversforhackers.com/c/php-fpm-process-management
https://www.kinamo.be/en/support/faq/determining-the-correct-number-of-child-processes-for-php-fpm-on-nginx
On my cluster I have Elasticsearch, Redis, a frontend, and a REST API, and from my understanding of Kubernetes, each runs in its own pod. I checked the pod for the REST API and saw 1 vCPU and 3.75 GB RAM, which is what I set in my cluster specs. Only 1.75 GB of RAM is left, so other services or pods must be using the memory.
So now I want to increase the following values, based on the articles shared above:
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 4
pm.max_spare_servers = 8
But my problem is that since the pod runs on a shared worker node, if I change the configuration based on the memory left (per the articles above on calculating pm.max_children), the pod might end up consuming all the remaining memory, leaving nothing to allocate to the other services. Does my problem make sense, or is there an idea I am missing?
Based on the article, since my worker has 3.75 GB RAM and other services already consume about 1.5 GB, my best aim is about 1 GB of RAM for PHP-FPM:
1024 MB / 60 MB per process ≈ 17, which brings us to pm.max_children = 17
pm.max_children = 17
pm.start_servers = 8
pm.min_spare_servers = 7
pm.max_spare_servers = 10
pm.max_requests = 500
Which leads me to the question: how do you compute the php-fpm child-process count on a Kubernetes cluster when other services or pods share the same resources?
Thank you for reading until the end, and thanks in advance for your inputs.
GKE comes with multiple system pods (such as kube-dns and fluentd). Some of these pods do not scale up much, so if you add additional nodes, you will have more available resources per node.
The nodes also run an OS, so some of the memory is assigned to that.
You can also view the resources available per node with kubectl describe no | grep Allocatable -A 5.
This shows the amount of resources left after the node's own consumption.
With kubectl describe no | grep Allocated -A 5 you can view the amount of memory and CPU already requested by current pods.
All this being said, you should choose the number of child processes based on your needs. Once you know the amount of memory the pod will need, set resource requests and limits in your pod config so that the Kubernetes scheduler can place the php-fpm pod on a node with sufficient resources.
Kubernetes' strength is that you tell it what you want and it tries to make that happen. Instead of worrying too much about how much you can fit, choose an amount for your pod based on your expected/required performance and tell Kubernetes that's how much memory you need. This way you can also scale the number of pods with an HPA (Horizontal Pod Autoscaler) instead of managing and scaling up the number of child processes.
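As a sketch, with a hypothetical rest-api deployment sized around the ~1 GB PHP-FPM budget from the question:

# reserve memory/CPU for the php-fpm pod so the scheduler places it accordingly
kubectl set resources deployment rest-api --requests=cpu=250m,memory=1Gi --limits=memory=1280Mi
# scale the number of pods on CPU with an HPA instead of hand-tuning child counts
kubectl autoscale deployment rest-api --min=2 --max=6 --cpu-percent=70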
I have an Nginx + PHP5-FPM server with a few high-traffic websites.
From my understanding of the PHP5-FPM pool config, I understood that:
static = immediately creates N child processes, so they do not need to be opened/re-opened; they are already there and can be used when needed, otherwise they sit sleeping.
dynamic = opens a limited number of child processes and re-spawns them within bounds (min/max spare servers).
ondemand = I specify the max number of child processes, and children are created on demand when needed and closed when no longer needed, keeping memory usage low at the cost of a few milliseconds of response time.
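In pool-config terms, I understand the three modes roughly like this (values are placeholders):

pm = static
pm.max_children = 20           ; all 20 workers always running

pm = dynamic                   ; alternative
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8

pm = ondemand                  ; alternative
pm.max_children = 20
pm.process_idle_timeout = 10s  ; idle workers exit after this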
From my tests with a high-traffic WordPress website, I noticed that:
If I use "static", the website is definitely faster and can immediately handle a high number of concurrent connections, but memory usage keeps climbing, and after N hours it seems to use almost all the available RAM. So I have to use a cron job to reload PHP5-FPM every hour with /etc/init.d/php5-fpm reload (the exact crontab line is below this list).
If I use "dynamic", it uses less RAM, but after N concurrent connections there are frequent 502 errors (though maybe I configured it badly).
If I use "ondemand", the site is a little slower (roughly +50-100 ms response time), but it handles all the traffic without using too much RAM.
So my personal conclusion is that "ondemand" is really the best method in terms of low, controlled memory usage; the only downside is the +50-100 ms of response time, but in my case that is not a big problem.
Are my assumptions correct?
You didn't mention WHY you want to keep memory low. Assuming this machine is dedicated to serving PHP-FPM, keeping memory low doesn't help your application in any way. You have memory; use it.
Therefore, in this case, "static" is the best choice, with pm.max_requests set to something that will keep memory leaks (if you have any) under control.
If this machine is shared with other tasks, then keeping memory low is ideal. In this case, "dynamic" is the best compromise between speed and memory usage.
"ondemand" is a good choice only when the PHP-FPM engine will be used infrequently and the machine's primary purpose is something else.
You can also configure PHP-FPM to restart automatically when child processes die within a determined period of time.
In the global configuration file php-fpm.conf you can tell PHP-FPM to restart itself if 5 child processes die within 1 minute, giving children 10 seconds to react to control signals:
; php-fpm.conf
emergency_restart_threshold = 5
emergency_restart_interval = 1m
process_control_timeout = 10s
So you can keep using "dynamic" without the cron job.
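After any of these edits, it is worth validating the config and reloading (assuming the php5-fpm binary and init script from the question; paths differ per distro):

# test the configuration, then reload only if it parses cleanly
php5-fpm -t && /etc/init.d/php5-fpm reload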