We are working on an online exam system that will have high concurrent traffic. The system is developed with Laravel, using MySQL as the database.
Our server has 40 cores and 64 GB of RAM.
I installed Nginx in one Docker container, PHP in a second, and MySQL in a third.
We also tested the system the classic way, with all services running directly on the same server without Docker.
The problem is that the system cannot handle more than 600 concurrent users.
We tried Nginx performance tuning and nothing worked, and the same goes for PHP-FPM.
I don't know why this is happening; we tried every architecture that might help, but nothing works, and every PHP-FPM process uses about 27% of a CPU core.
When I use ApacheBench to send requests to the server and then try to load the home page in my browser, it takes a long time to respond and often times out.
I am using nginx:stable-alpine as the Nginx Docker image and php:7.4-fpm-alpine as the PHP image.
I did not change any of the default configs for PHP or Nginx.
So what is the best configuration for PHP and Nginx that I should apply to get the best performance?
htop output before sending requests (screenshot).
htop output after sending requests with:
sudo ab -n 5000 -c 600 https://demo.qsinav.com/
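A likely factor worth checking: the stock www.conf in the php:7.4-fpm images defaults to pm = dynamic with pm.max_children = 5, which caps each pool at five concurrent PHP requests. Below is a minimal sketch of the pool settings that usually get raised, with illustrative values for a 40-core/64 GB box rather than tested recommendations:

    ; www.conf pool settings (illustrative values, assumptions only)
    pm = dynamic
    ; rough sizing: RAM available to PHP divided by average worker size,
    ; e.g. ~48 GB / ~60 MB per worker gives ~800; start lower and measure
    pm.max_children = 400
    pm.start_servers = 40
    pm.min_spare_servers = 20
    pm.max_spare_servers = 80
    ; recycle workers periodically to contain memory growth
    pm.max_requests = 1000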
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section
thread_cache_size=100 # from 9, to reduce the threads_created rate per hour of 78
net_buffer_length=32K # from 16K, to reduce the number of packets per hour
query_cache_size=0 # from 1M, to conserve RAM since query_cache_type is OFF
slow_query_log=ON # from OFF, to allow awareness and proactive correction
long_query_time=1 # from 10 seconds; if overwhelmed with entries, raise by 1 second at a time
innodb_write_io_threads=64 # from 4, to expedite writing to tables on storage devices
There are many more opportunities to improve the configuration; in particular, the utility script findfragtables.sql is helpful as a reminder to periodically OPTIMIZE heavily used tables.
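To see whether changes like thread_cache_size are paying off, one simple check is to watch the counters these suggestions reference (standard MySQL statements, shown here as a sketch):

    SHOW GLOBAL STATUS LIKE 'Threads_created';      -- should grow slowly once the cache is sized right
    SHOW GLOBAL STATUS LIKE 'Uptime';               -- divide by this to get a per-hour rate
    SHOW GLOBAL VARIABLES LIKE 'thread_cache_size'; -- confirm the running value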
Current config:
16GB RAM, 4 CPU cores, Apache/2.2 using the prefork module (set to 700 MaxClients, since the average process size is ~22MB), with the suexec and suphp modules enabled (PHP 5.5).
The back end of the site runs CakePHP2 and stores data on a MySQL server. The site consists of text and some compressed images on the front end, with data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peak I easily reach 700+ simultaneous connections, which fills MaxClients. When I run apachectl status at those moments, I can see that all processes are in use.
The CPU is fine, but the RAM is fully used.
Potential traffic:
Traffic might grow to ~200,000 unique visitors daily, or even more. It might not, but if it happens I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, for example with 192GB RAM and 20 cores.
I could keep exactly the same config (which means I would then be able to handle 10x my current traffic with it).
But I wonder if there is a better config for my case that uses fewer resources while being just as efficient (and is proven to be so)?
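For rough sizing, the prefork arithmetic from the figures above would go something like this (the headroom number is an illustrative assumption, not a tested value):

    # RAM available to Apache: 192 GB minus headroom for MySQL/OS (say 32 GB) = 160 GB
    # MaxClients ~ 160 GB / 22 MB per process ~ 7400
    <IfModule mpm_prefork_module>
        ServerLimit  7400
        MaxClients   7400
    </IfModule>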
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section:
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes should contribute to lower CPU usage.
Observations:
A) 5.5.54 is past End of Life; newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even on 5.5.54.
C) You should be able to migrate gracefully to innodb_file_per_table once you turn on the option, since your tables are already managed by the InnoDB engine.
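A sketch of that migration (the table name is a placeholder): in 5.5 the setting is dynamic, and an existing table only moves out of the shared tablespace into its own .ibd file when it is rebuilt.

    SET GLOBAL innodb_file_per_table = ON;
    -- rebuild a table so it gets its own .ibd file (mydb.mytable is hypothetical)
    ALTER TABLE mydb.mytable ENGINE=InnoDB;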
I am using a dedicated server for my PHP application. The server slows down day by day; after a reboot, everything goes back to normal. I cache my JSON results as files and serve them to clients. When everything is normal, response time is about 50 ms, but when the server slows down, response time goes up to 17 seconds or more.
This issue affects the whole server; I can't even log in with SSH when it happens.
I don't have much knowledge about servers.
How can I track down this problem?
The system has been up for 6 days now, and the slowdown has started again.
Here are my results:
# lsof | wc -l
34255
# free
total used free shared buff/cache available
Mem: 32641048 1826832 6598216 232780 24216000 29805868
Swap: 16760828 0 16760828
My server has 32GB RAM, an 8-core CPU, and CentOS 7.
I run a Laravel application with 500 unique users daily.
I restarted the MySQL, httpd, and nginx services and cleared the memory cache; nothing changed. Only a server reboot helps.
Static files are served normally, but files served by the PHP application and other HTTP responses are very slow, and they get slower day by day.
Logging in with SSH is getting slower too; I use Plesk as a control panel, and it is also getting slower.
I mean this problem affects not only my application but the whole server.
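A first-pass way to track it while the slowdown is happening (a generic diagnostic sketch, not specific to this Plesk setup):

    uptime                  # load averages
    top -b -n 1 | head -20  # top CPU/memory consumers, batch mode
    vmstat 5 5              # run queue, swap and I/O pressure over ~25 seconds
    iostat -x 5 3           # per-disk latency/utilization (from the sysstat package)
    lsof | wc -l            # open file descriptors, compared day over day
    dmesg | tail -50        # kernel messages (OOM killer, disk errors)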
My web app is native PHP, no frameworks or anything, and my server has 32GB RAM. When there is heavy traffic on the website, it becomes very slow (2-second page loads become 50 seconds).
I tried an AB test using
ab -n 20000 -c 1000 mywebsite.com
and after some time it returns:
apr_socket_connect(): No connection could be made because the target machine actively refused it.
Is there any way to allocate more RAM to PHP, or is something else the issue?
Most likely, you are hitting the maximum number of concurrent connections. Consider the suggestions below:
For Windows Server (MPM_WinNT)
Configure ThreadLimit and ThreadsPerChild in httpd-mpm.conf. The values should be high enough to handle multiple concurrent requests.
For Linux (module dependent)
Depending on the MPM module you are using, set the MaxRequestWorkers directive to the number of connections you want to handle.
For additional reference, the Apache MPM Modules documentation covers the directives that need to be configured so your server can properly handle multiple concurrent requests.
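As a concrete illustration, an mpm_event block in httpd-mpm.conf might look like the sketch below; the numbers are assumptions to adapt to your RAM and per-process footprint, not drop-in values:

    <IfModule mpm_event_module>
        ServerLimit              16
        ThreadsPerChild          64
        MaxRequestWorkers        1024    # must not exceed ServerLimit x ThreadsPerChild
        MaxConnectionsPerChild   10000   # recycle processes to bound memory growth
    </IfModule>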
Bear with me here.
I'm seeing some php.ini processes (?) running, or processes touching php.ini, that are using 80% or more of the CPU, and I have no idea what would cause this. All database processing is offloaded to a separate VPS, and the whole service is fronted by a CDN. I've provided a screenshot of top -cM.
Setup:
MediaTemple DV level 2 application server (the server we are looking at in the images), 8 cores, 2GB RAM
MediaTemple VE level 1 database server
Cloudflare CDN
CentOS 6.5
Nginx
MySQL 5.4, etc.
EDIT
I'm seeing about 120K pageviews a day here, with a substantial number of concurrent connections.
Where do I start looking to find what is causing this?
Thanks in advance.
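One way to start (a generic sketch; the php.ini path below is an assumption for this box): top -c shows full command lines, and entries that mention php.ini are usually PHP workers started with an explicit -c /path/to/php.ini flag rather than php.ini itself "running". To confirm what they are:

    # full command lines of the top CPU consumers
    ps aux --sort=-%cpu | head -15
    # which processes actually hold php.ini open (path is an assumption)
    lsof /etc/php.ini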
I am developing a big application and I have to load test it. It is an EC2-based cluster with one High-CPU Extra Large instance for the application, which runs PHP/Nginx.
The application is responsible for reading data from a Redis server holding some 5k-10k key/value pairs; it then builds the response, logs the data to a MongoDB server, and replies to the client.
Whenever I send a request to the app server, it does all its computation in about 20-25 ms, which is awesome.
I am now trying to do some load testing, and I run a PHP-based app on my laptop to send requests to the server: many thousands of them quickly, over 20-30 seconds. During this load period, whenever I open the app URL in the browser, it replies with an execution time of around 25-35 ms, which is again cool. So I am sure that Redis and Mongo are not the bottleneck. But it is taking about 25 seconds to get the response back during load.
The High-CPU Extra Large instance has 8GB RAM and 8 cores.
Also, during the load test, the top command shows about 4-6 php_cgi processes consuming some 15-20% of CPU.
I have 50 worker processes in nginx and 1024 worker connections.
What could be causing the bottleneck?
If this doesn't work out, I am seriously considering moving to a Java application with an embedded web server and an embedded cache.
UPDATE: increased PHP_FCGI_CHILDREN to 8 and it halved the response time during load.
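For reference, PHP_FCGI_CHILDREN is an environment variable that php-cgi reads at startup to decide how many children to fork. A minimal sketch of setting it when launching through spawn-fcgi (the paths and port are assumptions):

    # php-cgi forks 8 children; each is recycled after 1000 requests
    PHP_FCGI_CHILDREN=8 PHP_FCGI_MAX_REQUESTS=1000 \
      spawn-fcgi -a 127.0.0.1 -p 9000 -f /usr/bin/php-cgi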
50 worker processes is too many; you need only one worker process per CPU core. Using more worker processes causes inter-process context switching, which wastes a lot of time.
What you can do now:
1. Set worker processes to the minimum (one worker per CPU, e.g. 4 worker processes if you have 4 CPU units), but set worker connections to the maximum (10240, for example); see the config sketch after this list.
2. Tune the TCP stack via sysctl. You can hit stack limits when you have many connections.
3. Get statistics from the nginx stub_status module (you can use Munin + nginx; it's easy to set up and gives you enough information about system status).
4. Check the nginx error.log and the system messages log for errors.
5. Tune nginx (decrease connection timeouts and maximum request size).
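A minimal nginx.conf sketch along those lines, sized for the 8-core instance described above (the values are illustrative assumptions, not a tested config):

    worker_processes  8;              # one worker per core
    events {
        worker_connections  10240;    # high per-worker connection limit
    }
    http {
        server {
            listen 80;
            # expose counters for monitoring via the stub_status module
            location /nginx_status {
                stub_status  on;
                allow 127.0.0.1;      # restrict to local monitoring agents
                deny  all;
            }
        }
    }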
I hope that helps you.