I have an Nginx + PHP5-FPM server hosting a few high-traffic websites.
From my reading of the PHP5-FPM pool config, I understand that:
static = immediately creates N child processes so they never need to be spawned or re-spawned; they are already running and can be used when needed, otherwise they sit idle ("sleeping").
dynamic = opens a limited number of child processes and re-spawns them as limits are reached (min/max spare servers).
ondemand = I specify the maximum number of child processes, and children are created on demand when needed and closed when no longer needed, keeping memory usage low but adding a few milliseconds to the response time.
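For reference, these are roughly the pool settings each mode revolves around (the numbers are placeholders, not recommendations):

; pm = static: a fixed pool of workers
pm = static
pm.max_children = 20

; pm = dynamic: a pool that grows and shrinks between bounds
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8

; pm = ondemand: workers forked on demand, reaped when idle
pm = ondemand
pm.max_children = 20
pm.process_idle_timeout = 10s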
From my tests with a high-traffic WordPress website, I noticed that:
If I use "static", the website is definitely faster and can immediately handle a high number of concurrent connections, but memory usage keeps climbing, and after N hours it seems to consume almost all the available RAM. So I have to use a cronjob to periodically (every hour) reload PHP5-FPM with /etc/init.d/php5-fpm reload.
If I use "dynamic", it uses less RAM, but beyond N concurrent connections I get frequent 502 errors (though maybe I misconfigured it).
If I use "ondemand", the site is a little slower (roughly +50-100 ms response time), but it can handle all the high traffic without using too much RAM.
So my personal conclusion would be that "ondemand" is really the best method in terms of low/controlled memory usage; the only downside is the +50-100 ms of response time, but in my case that is not a big problem.
Are my assumptions correct?
You didn't mention WHY you want to keep memory low. Assuming this machine is dedicated to serving PHP-FPM, keeping memory low doesn't help your application in any way. You have memory; use it.
Therefore, in this case, "static" is the best choice, with pm.max_requests set to something that will keep memory leaks (if you have any) under control.
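A minimal sketch of that setup (the child count is a placeholder; size it to your RAM):

pm = static
pm.max_children = 50
; recycle each worker after N requests to contain slow memory leaks
pm.max_requests = 500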
If this machine is shared with other tasks, then keeping memory low is ideal. In this case, "dynamic" is the best compromise between speed and memory usage.
"ondemand" is a good choice only when the PHP-FPM engine will be used infrequently and the machine's primary purpose is something else.
You can also configure PHP-FPM to restart itself automatically when it detects child processes dying within a given period of time.
In the global configuration, php-fpm.conf, you can tell PHP-FPM to restart if 5 child processes die within 1 minute, and to give children up to 10 seconds to react to signals from the master process:
; php-fpm.conf
emergency_restart_threshold = 5
emergency_restart_interval = 1m
process_control_timeout = 10s
So you can continue using "dynamic" without using cron.
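If you change these settings, it's worth validating the configuration before reloading (the binary and init-script names assume a Debian-style PHP 5 setup):

php5-fpm -t && /etc/init.d/php5-fpm reload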
My website is constantly returning 522 Connection timeout errors. I upgraded my VPS to a dedicated server, but it's still the same.
So I found this solution online: PHP-FPM tuning. What will happen if I increase everything to the very maximum?
This is my configuration:
PHP-FPM Pool Options:
Max Requests: 1000000000000000
Process Idle Timeout: 1000000000000000
Max Children: 1000000000000000
Limitations
The maximum values for those fields are bounded by their integer type.
The number of processes is limited by the kernel, to roughly 25,000-50,000.
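You can check your kernel's process-ID ceiling yourself; a common default is shown below, but yours may differ:

sysctl kernel.pid_max
# typically: kernel.pid_max = 32768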
What happens when you set ridiculously high values?
Depending on the amount of traffic, you might be happy with the server for hours, weeks, or months. After a while, though, the server will probably become unresponsive.
The exact behaviour depends on many factors and can be totally unpredictable.
What should you do?
There are basic guidelines for where the settings should land, like spawning around cores x 2 processes (think hyperthreading), and so on.
The suggested values are just an orientation, not advice that fits all needs.
The settings depend heavily on your code: how much memory it uses, how much CPU time, how much it leaks, and so on.
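One common back-of-envelope approach: measure the average resident size of one worker and divide the RAM you can spare by it. A sketch, assuming the process is named php-fpm (on older Debian/Ubuntu it is php5-fpm) and a hypothetical 4 GB budget:

ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {printf "avg worker: %.1f MB\n", sum/n/1024}'
# Max Children ~= 4096 MB / (avg worker MB)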
522 Connection timeout
Various issues can lead to a connection timeout. Your PHP application might:
experience fatal errors (segfaults),
have run into infinite loops,
itself be waiting on locks or responses,
or sit behind a badly configured network / firewall.
Try to use a debugger on your code and watch the error log closely.
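If the hangs are inside PHP, PHP-FPM's slow log can show you where requests stall; a minimal pool-config sketch (the path and threshold are placeholders):

request_slowlog_timeout = 10s
slowlog = /var/log/php-fpm/slow.log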
Current config:
16 GB RAM, 4 CPU cores, Apache/2.2 using the prefork module (set to 700 MaxClients, since the average process size is ~22 MB), with the suexec and suphp mods enabled (PHP 5.5).
The back end of the site uses CakePHP 2 and stores data on a MySQL server. The site consists of text and some compressed images on the front end, with data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peaks I currently reach 700+ simultaneous connections with ease, which fills MaxClients. When I run apachectl status at those moments, I can see that all the processes are in use.
The CPU is fine, but the RAM gets completely used.
Potential traffic:
The traffic might grow to ~200,000 unique visitors daily, or even more. It might also not. But if it happens, I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, with, say, 192 GB of RAM and 20 cores.
I could keep exactly the same config (which means I would then be able to handle ~10x my current traffic with it).
But I wonder if there is a better config for my case that uses fewer resources while being just as efficient (and is proven to be so)?
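As a back-of-envelope check (assuming the same ~22 MB average per Apache process, which may not hold on the new machine):

echo $(( 16 * 1024 / 22 ))    # current box: ~744 processes fit in 16 GB
echo $(( 192 * 1024 / 22 ))   # proposed box: ~8936 processes fit in 192 GB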
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes will contribute to less CPU busy time.
Observations:
A) 5.5.54 is past End of Life; newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even on 5.5.54.
C) You should be able to migrate gracefully to innodb_file_per_table once you turn on the option. Your tables are already managed by the InnoDB engine.
Case
Currently I am developing an application using Laravel 4. I installed a profiler to see stats about my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) on my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? That would mean that with 100 concurrent connections, memory consumption would be about 1.2 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption drops dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?
No, you don't really need to worry about this.
12 MB is not really a large amount for a PHP program, and 100 concurrent connections is a lot.
To put it into context: assume your PHP page takes half a second to run. By Little's law, concurrency = request rate x request duration, so sustaining 100 concurrent connections at 0.5 s per request means 200 requests per second, i.e. 12,000 page loads per minute. That's a lot more traffic than any of my sites get, I can tell you that.
Of course, if your page takes longer than half a second to load, this number will come down quickly, and your 100 concurrent connections can become a possibility much more easily.
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to hit issues with your server software before you hit them with PHP. Apache ships with a default limit on the maximum number of simultaneous connections (MaxClients), and the stock value is of the same order of magnitude as 100. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely want more servers anyway.)
As for the 12 MB of memory usage, you're unlikely ever to get much less than that for a PHP program. PHP needs a chunk of memory just to run in the first place, and the framework needs a chunk too, so most of your 12 MB is due to that. This means that although your small program may use 12 MB, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
If you do have high traffic, and performance issues as a result, there are various ways you can mitigate the problem. The main one is by using caching. PHP 5.5 comes with an OpCache module built-in, which will cache your programs for you so that it doesn't have to do all the bootstrap work such as loading all the files every time. For some systems, this can have a dramatic impact on performance.
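For example, a minimal opcache block for php.ini (the values are illustrative, not tuned recommendations):

; php.ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.revalidate_freq=60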
There are also other layers of caching you can use, such as a server-level page cache like Varnish, which will cache your static pages so that PHP doesn't even need to be called if the page content hasn't changed.
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)
I have a question.
I own a 128 MB VPS with a simple blog that gets just a hundred hits per day.
I have nginx + php5-fpm installed. Given the low traffic and the small amount of RAM, I decided to set FPM to static with 1 server running. While doing some random tests, like running PHP scripts over HTTP that last over 30 minutes, I tried to open the blog on the same machine and noticed that the site was basically unreachable. So I went to the configuration and read this:
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; **This value sets the limit on the number of simultaneous requests that will be
; served**
What shocked me the most is that I didn't know this; I had always assumed that a PHP child would handle hundreds of requests at the same time, the way an HTTP server does!
Did I get that right?
If, for example, I launch 2 php-fpm children and start 2 "long scripts" at the same time, will all the sites using the same PHP backend be unreachable? How is this usable?
You may think: -duh! a php script (web page) is usually processed in 100ms- ... no doubt about that, but what happens if you have pages that run for about 10 seconds each and I have 10 visitors, with php-fpm running 5 servers and therefore accepting only 5 requests at a time? Will they all be queued, or will they experience timeouts?
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues; apparently those limits don't apply there, it being a different way of using PHP.
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second file will require the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Given this, why is mod_php considered the "bad guy" if the RAM used is actually less? I know I'm missing the big picture here.
You've basically got it right. You configured a static number of workers (and that number was "one") -- so that's exactly what you got.
But you don't quite understand how things typically work, since you say:
I had always assumed that a PHP child would handle hundreds of requests
at the same time, the way an HTTP server does!
I'm not really familiar with nginx, but consider the typical mod_php setup in Apache. If you're using mod_php, then you're using the prefork MPM for Apache, so every concurrent HTTP request is handled by a distinct httpd process (no threads). If you're tuning your apache/mod_php server for low memory, you'll have to tweak the Apache settings that limit the number of processes it will spawn (in particular, MaxClients).
Failing to tune this stuff means that when you get a large traffic spike, apache starts spawning a huge number of heavy processes (remember, it's mod_php, so you have the whole PHP interpreter embedded in each httpd process), and you run out of memory, and then everything starts swapping, and your server starts emitting smoke.
Tuned properly (meaning: tuned so that you ignore requests instead of allocating memory you don't have for more processes), clients will time out, but when traffic subsides, things go back to normal.
Compare that with FPM and a smarter web server architecture like apache-worker or nginx. Now you have some much larger pool of threads (still configurable!) to handle HTTP requests, and a separate pool of php-fpm processes to handle just the requests that require PHP. It's basically the same thing: if you don't set limits on how many processes/threads can be created, you are asking for trouble. But if you do tune, you come out ahead, since only a fraction of your requests uses PHP. So essentially the average amount of memory needed per HTTP request is lower, and you can handle more requests with the same amount of memory.
But setting the number to "1" is too extreme. At "1", it doesn't even matter if you choose static or dynamic, since either way you'll just have one php-fpm process.
So, to try to give explicit answers to particular questions:
You may think: -duh! a php script (web page) is usually processed in 100ms- ... no doubt about that, but what happens if you have pages that run for about 10 seconds each and I have 10 visitors, with php-fpm running 5 servers and therefore accepting only 5 requests at a time? Will they all be queued, or will they experience timeouts?
Yes, they'll all queue and eventually time out. The fact that you regularly have scripts that take 10 seconds to run is the real culprit here, though. There are lots of ways to architect around that (caching, work queues, etc.), but the right solution depends entirely on what you're trying to do.
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues; apparently those limits don't apply there, it being a different way of using PHP.
They do apply. You can set up an apache/mod_php server the same way you have set up nginx/php-fpm -- just set Apache's MaxClients to 1!
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second file will require the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Given this, why is mod_php considered the "bad guy" if the RAM used is actually less? I know I'm missing the big picture here.
Especially on Linux, many tools that report memory usage can be very misleading. But think about it this way: that 30 KB is negligible, because most of PHP's memory was already allocated when the httpd process started.
A 128 MB VPS is pretty tight, but it should be able to handle more than one PHP process.
If you want to optimize, do something like this:
For PHP:
pm = static
pm.max_children = 4
For nginx, figure out how to control process and thread counts (the rough equivalents of Apache's MaxClients, StartServers, MinSpareServers, and MaxSpareServers are nginx's worker_processes and worker_connections).
Then figure out how to generate some realistic load (apachebench, siege, jmeter, etc.), and use vmstat, free, and top to watch your memory usage. Adjust pm.max_children and the nginx settings to be as high as possible without causing any significant swapping (according to vmstat); an example run is sketched below.
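A sketch of such a test loop (the URL and the numbers are placeholders):

ab -n 2000 -c 20 http://your-blog.example/    # generate load
vmstat 1                                      # watch the si/so columns for swapping
free -m                                       # check memory between runs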
We have a very lightweight tracking script written in PHP, running on Apache/2.2.14 (Ubuntu). The script will receive a high number of concurrent connections, but each connection will be fast. Currently, we are using prefork, configured as follows:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 600
MaxClients 600
MaxRequestsPerChild 0
We also have KeepAlive Off
I've played around with these settings quite a bit and have been testing with ApacheBench. Any time I raise the number of concurrent connections in ab, I get "apr_socket_recv: Connection reset by peer (104)". I've also raised the max number of file descriptors.
I'm wondering if any apache gurus out there can point me in the right direction for this type of setup (high number of lightweight connections). What are the optimum values for StartServers, Min/MaxSpareServers, etc? Is the worker MPM worth looking into? Any thoughts are welcome.
I can give you some hints:
Try to use Apache in worker mode instead of prefork. To do that, either put PHP in FastCGI mode (php-fpm) or take the risk of keeping it in mod_php inside a threaded Apache worker (the risk is that some external libraries may conflict, e.g. over locale settings; but if your PHP tracking code is small, you may be able to verify that everything in it is multi-thread enabled -- PHP 5 without any external libs is multi-thread enabled).
If your MaxClients is 600, then put 600 in StartServers, MinSpareServers, and MaxSpareServers as well; otherwise Apache creates new forks at a very slow rate:
the parent process creates new children at a maximum rate of 1 per second.
If you think your server can handle 600 forks, then take the RAM, create the 600 forks up front, and maybe set MaxRequestsPerChild to something like 3000, so that old forks are occasionally removed and recreated (mitigating memory leaks). You will not lose any time on the fork creation rate, and Apache will not waste time managing the creation and deletion of children. A sketch of the resulting config follows.
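Something like this, using the values from the advice above (a starting point, not a prescription):

StartServers        600
MinSpareServers     600
MaxSpareServers     600
ServerLimit         600
MaxClients          600
MaxRequestsPerChild 3000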
Disabling KeepAlive, as you did, is a good thing in your case.
To know the right value for MaxClients, in either prefork or worker mode, just test it: track the memory used by one fork and divide the size of your available RAM by that number. Be careful: PHP will use some RAM as well. With mod_php that RAM is counted inside the Apache fork's memory usage; with php-fpm it lives in the php-fpm processes. Check PHP's memory_limit setting for the maximum size of one PHP process.
Reduce your PHP RAM usage so that you can run more PHP scripts in parallel: do not build big arrays, keep the session light, and so on. Using the APC opcode cache can reduce your memory footprint (and do other nice things as well), as can using PHP 5.3 instead of 5.2.
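A minimal php.ini sketch for enabling APC (PHP 5.2/5.3 era; the shared-memory size is a placeholder):

extension=apc.so
apc.enabled=1
apc.shm_size=64M
; note: older APC builds expect a bare number of megabytes (apc.shm_size=64)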