My web app is plain PHP, no frameworks or anything, and my server has 32 GB of RAM. When there is heavy traffic on the website it becomes very slow (2-second page loads become 50 seconds).
I tried load testing with ApacheBench:
ab -n 20000 -c 1000 http://mywebsite.com/
After some time it returns:
apr_socket_connect(): No connection could be made because the target machine actively refused it.
Is there any way to allocate more RAM to PHP, or is something else going on?
Most likely you are hitting the maximum number of concurrent connections. Consider the suggestions below:
For Windows Server (MPM_WinNT)
Configure ThreadLimit and ThreadsPerChild in httpd-mpm.conf. Set the values high enough to handle your expected number of concurrent requests.
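As an illustrative sketch (the numbers are examples, not values tuned for your hardware; 1920 is the stock ThreadLimit on WinNT), the relevant section of httpd-mpm.conf might look like:

```apacheconf
<IfModule mpm_winnt_module>
    # ThreadLimit must be >= ThreadsPerChild; raise both together
    ThreadLimit              1920
    ThreadsPerChild          1920
    MaxConnectionsPerChild      0
</IfModule>
```

Raising ThreadsPerChild alone has no effect once it hits ThreadLimit, so the two are usually adjusted in tandem.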
For Linux (Module Dependent)
Depending on the MPM module you are using, set the MaxRequestWorkers directive to the number of concurrent connections you want to handle.
For additional reference, see the documentation for the directives that need to be configured so your server can properly handle concurrent requests: Apache MPM Modules
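For example, with the event MPM a hedged starting point (the numbers below are illustrative and should be validated with load testing against your workload) could be:

```apacheconf
<IfModule mpm_event_module>
    StartServers               4
    ServerLimit               16
    ThreadsPerChild           64
    MaxRequestWorkers       1024   # must not exceed ServerLimit * ThreadsPerChild
    MaxConnectionsPerChild     0
</IfModule>
```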
Related
We are working on an online exam system that will see high traffic at peak times. The system is built with Laravel, with MySQL as the database.
Our server has 40 cores and 64 GB of RAM.
I installed Nginx in one Docker container,
PHP in another container,
and MySQL in a third.
We also tested the system the classic way, with all services running directly on the same server without Docker.
But the problem is that the system cannot handle more than 600 users at the same time.
We tried Nginx performance tuning, and php-fpm tuning as well, and nothing worked.
I don't know why this is happening; we tried every architecture that might help, but nothing works, and each PHP-FPM process uses about 27% of a CPU.
When I use ApacheBench to send requests to the server and then try to load the home page in my browser, it takes a long time to respond and often times out.
I am using nginx:stable-alpine as the Nginx Docker image and php:7.4-fpm-alpine as the PHP image.
I did not change any of the default configs for PHP or Nginx.
so what is the best configuration for PHP and Nginx which I have to apply in order to have the best performance?
htop output before sending requests
htop output after sending requests with
sudo ab -n 5000 -c 600 https://demo.qsinav.com/
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section
thread_cache_size=100 # from 9 to reduce threads_created RPhr of 78
net_buffer_length=32K # from 16K to reduce # packets per hour
query_cache_size=0 # from 1M to conserve RAM, since query_cache_type is OFF
slow_query_log=ON # from OFF to surface slow queries so they can be corrected proactively
long_query_time=1 # from 10 seconds - if overwhelmed with entries, raise by 1 second at a time
innodb_write_io_threads=64 # from 4 to expedite writing to tables on storage devices
There are many more opportunities to improve the configuration. Free downloadable utility scripts can assist with performance tuning; findfragtables.sql in particular is a helpful reminder to OPTIMIZE heavily used tables periodically.
I set up my server on DigitalOcean and it was working fine at first. But even with a good machine there, the server could not handle more than 250 to 300 requests.
I checked the Apache log and found the following error:
[Wed Dec 11 01:14:49.586728 2019] [mpm_prefork:error] [pid 1993] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
I googled this error and learned that Apache has a request limit which I needed to raise. So I put the configuration below in /etc/apache2/sites-available/000-default.conf:
<IfModule mpm_prefork_module>
StartServers 10
MinSpareServers 10
MaxSpareServers 20
ServerLimit 2000
MaxRequestWorkers 1500
MaxConnectionsPerChild 10000
</IfModule>
After adding this I restarted Apache and everything worked fine; the server can now handle up to 1000 requests.
I don't have deep knowledge of Apache, but I got rid of the trouble. My question is: how many requests can the configuration above handle? Do I need to change anything for 5K users? We usually have up to 5K users online at the same time.
We have a good machine on the digital ocean with the following configuration.
64 GB Memory
25 GB Disk
Ubuntu 16.04.6 x64
My question is how many requests can be handled by this above configuration.
No one can give you that answer, because it depends completely on too many factors: how fast is your application? What size is the droplet? Which data center is the droplet in? Are you using a database? Is that database on the same server or external? And so on.
You need to perform load testing to stress test your environment and that will tell you.
Just increasing the maximum number of concurrent processes without benchmarking is a bad idea. The server might be able to manage that many processes, but that does not mean it will do so efficiently.
Do I need to change anything for 5K users? We usually have up to 5K users online at the same time.
As GBWDev noted, with that many users you probably want to scale out your environment. Load test your server to find the point at which performance degrades; if, for example, request times double after 1000 users, you might want to add a load balancer and three servers.
I also use DigitalOcean in this setup, and my own benchmarking and scaling experience has shown that it is far cheaper to scale horizontally with lots of small droplets than vertically with one of their mega beast servers.
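One way to do that stepped load testing is to run ab at increasing concurrency levels and compare the rates it reports. As a minimal sketch (the sample summary below is hypothetical, just to exercise the parser), the parsing side can look like this:

```python
import re

def parse_ab_output(output: str) -> dict:
    """Extract key metrics from ApacheBench's summary output."""
    metrics = {}
    m = re.search(r"Requests per second:\s+([\d.]+)", output)
    if m:
        metrics["rps"] = float(m.group(1))
    m = re.search(r"Time per request:\s+([\d.]+) \[ms\] \(mean\)", output)
    if m:
        metrics["mean_ms"] = float(m.group(1))
    m = re.search(r"Failed requests:\s+(\d+)", output)
    if m:
        metrics["failed"] = int(m.group(1))
    return metrics

# Hypothetical ab summary lines, for demonstration only:
sample = """\
Concurrency Level:      600
Time taken for tests:   12.345 seconds
Failed requests:        17
Requests per second:    405.02 [#/sec] (mean)
Time per request:       1481.41 [ms] (mean)
"""
print(parse_ab_output(sample))
```

Run ab at, say, 100, 250, 500, 1000 concurrent connections, feed each run's output through this, and the concurrency where rps stops climbing (or failures spike) is your degradation point.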
I have a question.
I have a 128 MB VPS with a simple blog that gets just a hundred hits per day.
I have nginx + php5-fpm installed. Given the low traffic and limited RAM, I set php-fpm to static with a single worker. While running some random tests, i.e. PHP scripts over HTTP that run for more than 30 minutes, I tried to open the blog from the same machine and noticed the site was basically unreachable. So I went to the configuration and read this:
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; **This value sets the limit on the number of simultaneous requests that will be
; served**
What shocked me most is that I didn't know this; I had always assumed that a PHP child would handle hundreds of requests at the same time, like an HTTP server does!
Did I get that right?
If, for example, I launch 2 php-fpm children and start 2 long-running scripts at the same time, will every site using the same PHP backend be unreachable? How is this usable?
You may think: "duh! a PHP script (web page) is usually processed in 100 ms". No doubt about that, but what happens if some pages run for about 10 seconds each, and I have 10 visitors with php-fpm running 5 workers, so only 5 requests are accepted at a time? Will they all be queued, or will they experience timeouts?
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues; apparently those limits don't apply there, PHP being used in a different way.
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second request triggers the creation of another worker to handle it, using about 4 MB more RAM. If I do the same with apache/mod_php, the second file only uses about 30 KB more RAM (inside the Apache server). Given this, why is mod_php considered the "bad guy" when it actually uses less RAM? I know I'm missing the big picture here.
You've basically got it right. You configured a static number of workers (and that number was "one") -- so that's exactly what you got.
But you don't understand quite how things typically work, since you say:
I always assumed that a PHP child would handle hundreds of requests
at the same time, like an HTTP server does!
I'm not really familiar with nginx, but consider the typical mod_php setup in Apache. If you're using mod_php, you're using the prefork MPM, so every concurrent HTTP request is handled by a distinct httpd process (no threads). If you're tuning your apache/mod_php server for low memory, you'll have to tweak Apache settings to limit the number of processes it spawns (in particular, MaxClients).
Failing to tune this stuff means that when you get a large traffic spike, apache starts spawning a huge number of heavy processes (remember, it's mod_php, so you have the whole PHP interpreter embedded in each httpd process), and you run out of memory, and then everything starts swapping, and your server starts emitting smoke.
Tuned properly (meaning: tuned so that you ignore requests instead of allocating memory you don't have for more processes), clients will time out, but when traffic subsides, things go back to normal.
Compare that with fpm and a smarter web server architecture like apache-worker, or nginx. Now you have some much larger pool of threads (still configurable!) to handle HTTP requests, and a separate pool of php-fpm processes to handle just the requests that require PHP. It's basically the same situation: if you don't set limits on how many processes/threads can be created, you are asking for trouble. But if you do tune, you come out ahead, since only a fraction of your requests use PHP. So essentially, the average amount of memory needed per HTTP request is lower, and thus you can handle more requests with the same amount of memory.
But setting the number to "1" is too extreme. At "1", it doesn't even matter if you choose static or dynamic, since either way you'll just have one php-fpm process.
So, to try to give explicit answers to particular questions:
You may think: "duh! a PHP script (web page) is usually processed in 100 ms". No doubt about that, but what happens if some pages run for about 10 seconds each, and I have 10 visitors with php-fpm running 5 workers, so only 5 requests are accepted at a time? Will they all be queued, or will they experience timeouts?
Yes, they'll all queue, and eventually timeout. The fact that you regularly have scripts that take 10 seconds to run is the real culprit here, though. There are lots of ways to architect around that (caching, work queues, etc), but the right solution depends entirely on what you're trying to do.
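The queueing arithmetic here is easy to sketch. Assuming 5 workers, 10 simultaneous arrivals, and 10-second scripts (and ignoring all other overhead), the second batch of 5 requests waits a full 10 seconds before it even starts:

```python
def completion_times(n_requests: int, n_workers: int, service_s: float) -> list[float]:
    """FIFO queue, all requests arriving at t=0: each batch of n_workers
    finishes one service time after the previous batch."""
    return [((i // n_workers) + 1) * service_s for i in range(n_requests)]

times = completion_times(n_requests=10, n_workers=5, service_s=10.0)
print(times)       # first 5 finish at 10 s, the last 5 at 20 s
print(max(times))  # 20.0 -- the last visitors wait twice the script runtime
```

With a typical 30-second request timeout, a third batch would already be at risk of timing out, which is why long-running scripts and a small static pool combine so badly.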
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues; apparently those limits don't apply there, PHP being used in a different way.
They do apply. You can set up an apache/mod_php server the same way as you have with nginx/php-fpm -- just set apache's MaxClients to 1!
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2 on the FastCGI machine, the second request triggers the creation of another worker to handle it, using about 4 MB more RAM. If I do the same with apache/mod_php, the second file only uses about 30 KB more RAM (inside the Apache server). Given this, why is mod_php considered the "bad guy" when it actually uses less RAM? I know I'm missing the big picture here.
Especially on Linux, many tools that report memory usage can be very misleading. But think of it this way: that 30 KB is negligible, because most of PHP's memory was already allocated when the httpd process started.
A 128 MB VPS is pretty tight, but it should be able to handle more than one PHP process.
If you want to optimize, do something like this:
For PHP:
pm = static
pm.max_children=4
for nginx, figure out how to control the process and thread counts (the equivalents of Apache's MaxClients, StartServers, MinSpareServers, MaxSpareServers)
Then figure out how to generate some realistic load (ApacheBench, siege, JMeter, etc.). Use vmstat, free, and top to watch your memory usage. Adjust pm.max_children and the nginx settings as high as possible without causing any significant swapping (according to vmstat).
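For the nginx side of that tuning, a hedged low-memory starting point (illustrative values for a 128 MB VPS, to be adjusted while watching vmstat) might be:

```nginx
worker_processes  1;           # one worker is plenty for a low-traffic blog

events {
    worker_connections  128;   # cap concurrent connections to bound memory
}

http {
    keepalive_timeout  15;     # release idle connections quickly
}
```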
I am developing a big application and I have to load test it. It is an EC2-based cluster with one High-CPU Extra Large instance for the application, which runs PHP / Nginx.
The application reads data from a Redis server holding some 5k - 10k key-value pairs, builds the response, logs the data to a MongoDB server, and replies to the client.
Whenever I send a single request to the app server, it does all its computation in about 20 - 25 ms, which is awesome.
I am now trying to do some load testing: I run a PHP-based app on my laptop that sends many thousands of requests to the server over 20 - 30 seconds. During this load period, whenever I open the app URL in the browser it still reports an execution time of around 25 - 35 ms, which is again cool, so I am sure Redis and Mongo are not the bottleneck. But it takes about 25 seconds to get the response back during load.
The High-CPU Extra Large instance has 8 GB of RAM and 8 cores.
Also, during the load test, top shows about 4 - 6 php_cgi processes consuming some 15 - 20% of CPU.
I have 50 worker processes in nginx with 1024 worker connections.
What could be causing the bottleneck?
If this doesn't work out, I am seriously considering moving to a whole Java application with an embedded web server and an embedded cache.
UPDATE - increasing PHP_FCGI_CHILDREN to 8 halved the response time during load.
50 worker processes is too many; you only need one worker process per CPU core. More worker processes cause extra inter-process context switching, which wastes time.
What you can do now:
1. Set worker processes to the minimum (one worker per CPU, e.g. 4 worker processes if you have 4 CPU cores), but worker connections to the maximum (10240, for example).
2. Tune the TCP stack via sysctl. With many connections you can hit the stack's limits.
3. Get statistics from the nginx stub_status module (you can use munin + nginx; it is easy to set up and gives you enough information about system status).
4. Check nginx's error.log and the system message log for errors.
5. Tune nginx itself (decrease connection timeouts and the maximum request size).
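The worker-count and stub_status suggestions above can be sketched in nginx.conf like this (the worker count and connection limit are examples; match worker_processes to your CPU count):

```nginx
worker_processes  4;               # one per CPU core

events {
    worker_connections  10240;
}

http {
    server {
        listen 80;
        location /nginx_status {
            stub_status  on;       # expose counters for munin/monitoring
            allow 127.0.0.1;       # keep the status page private
            deny  all;
        }
    }
}
```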
I hope that helps you.
We have a very lightweight tracking script written in PHP, running on Apache/2.2.14 (Ubuntu). The script will receive a high number of concurrent connections, but each connection will be fast. Currently, we are using prefork, configured as follows:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 600
MaxClients 600
MaxRequestsPerChild 0
We also have KeepAlive Off
I've played around with these settings quite a bit and have been testing with ApacheBench. Any time I raise the number of concurrent connections in ab, I get "apr_socket_recv: Connection reset by peer (104)". I've also raised the maximum number of file descriptors.
I'm wondering if any apache gurus out there can point me in the right direction for this type of setup (high number of lightweight connections). What are the optimum values for StartServers, Min/MaxSpareServers, etc? Is the worker MPM worth looking into? Any thoughts are welcome.
I can give you some hints:
try to use Apache in worker mode instead of prefork. To do that, either run PHP in FastCGI mode (php-fpm) or take the risk of keeping mod_php inside a threaded Apache worker (the risk being that some external libraries may conflict, like locale settings; but if your PHP tracking code is small, you may be able to verify that everything in it is thread-safe -- PHP 5 without any external libraries is)
If your MaxClients is 600, then set StartServers, MinSpareServers, and MaxSpareServers to 600 as well; otherwise Apache creates new forks at a very slow rate:
the parent process creates new children at a maximum rate of 1 per second.
If you think your server can handle 600 forks, then allocate the RAM, create the 600 forks up front, and maybe set MaxRequestsPerChild to something like 3000, so that old forks are occasionally removed and recreated (avoiding memory leaks). You will not lose any time on the fork creation rate, and Apache will not waste any time managing the creation and deletion of children.
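Put together with the questioner's MaxClients of 600, that advice would look something like the sketch below; treat the numbers as a starting point to be checked against available RAM, not as tuned values:

```apacheconf
<IfModule mpm_prefork_module>
    StartServers            600
    MinSpareServers         600
    MaxSpareServers         600
    ServerLimit             600
    MaxClients              600
    MaxRequestsPerChild    3000   # recycle children occasionally to avoid leaks
</IfModule>
```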
Disabling KeepAlive, as you did, is a good thing in your case.
To know the right value for MaxClients, in either prefork or worker mode, just test it: track the memory used by one fork and divide your available RAM by that number. Be careful: PHP will use some RAM as well. With mod_php that RAM is part of the Apache fork's memory usage; with php-fpm it lives in the php-fpm processes (check PHP's memory_limit setting for the maximum size of one PHP process).
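That division is easy to make explicit. A minimal sketch with made-up numbers (say 30 MB resident per prefork child and 2 GB of RAM left over for Apache, keeping 10% headroom for the OS and PHP itself):

```python
def max_clients(available_ram_mb: float, per_child_mb: float, headroom: float = 0.1) -> int:
    """RAM-based ceiling for MaxClients, keeping a safety margin."""
    usable = available_ram_mb * (1.0 - headroom)
    return int(usable // per_child_mb)

# Hypothetical measurements: 2048 MB free, ~30 MB resident per fork
print(max_clients(available_ram_mb=2048, per_child_mb=30))  # 61
```

Measure per_child_mb on your own server (e.g. from top's RES column during load) rather than trusting any example figure.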
Reduce your PHP RAM usage so that you can run more PHP scripts in parallel: do not build big arrays, keep the session light, etc. Using the APC opcode cache can reduce your memory footprint (and do other nice things as well), as can using PHP 5.3 instead of 5.2.
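Concretely, the knobs involved live in php.ini; the values below are examples only (and the apc.shm_size syntax varies across APC versions):

```ini
memory_limit = 64M        ; hard cap on RAM per PHP process
apc.enabled = 1           ; APC opcode cache (PHP 5.2/5.3 era)
apc.shm_size = 32M        ; shared cache segment, saves per-process RAM
```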