PHP-FPM processes causing high CPU usage on VPS

A few months ago we moved our e-commerce website to a VPS after struggling with poor performance on shared hosting. To handle an increase in traffic (avg. 300-500 daily visitors), we tweaked our PHP-FPM settings and increased Max Children from the default of 5 to 50. Currently, the PHP-FPM "pool" processes are showing high CPU usage (30-40%). Any tips to make those "pool" processes use less CPU? Thanks!
VPS Specs:
2 CPUs
Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
4GB RAM
WHM: CentOS 7.8 v86.0.18
Ecommerce platform: OpenCart 3.0.2.0

FPM itself has nothing to do with the CPU usage; it's your code.
That said, don't just arbitrarily change the number of worker processes without a sound basis for doing so, e.g. actual resource statistics.
With 300-500 daily users you're really unlikely to have 50 concurrent requests unless you're doing something strange.
The place I'm currently working at peaks at about 600 concurrent users and a grand maximum of 15-20 connections actually simultaneously doing anything. [Note: Much larger/broader backing infrastructure]
Do you really expect each CPU core to handle 25 simultaneous requests?
Can you reasonably fit 50 requests' worth of RAM into that 4GB?
Are you fine with those 50 idle PHP processes consuming 10-15MB RAM apiece?
All that said, we can't tell you what in your code is using up resources, and it's not possible for you to post enough information for us to make more than a vague guess. You need to put things in place to measure where that resource usage is happening, profile your code to find out why, and tune your infrastructure configuration to accommodate your specific application requirements.
There's no one "magic" config that works for everyone.
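As a starting point for right-sizing rather than guessing, here is a minimal pool sketch for a 2-core/4GB box. The per-worker figure is an assumption; measure your own OpenCart workers (e.g. with top or ps) before committing to numbers:

; /etc/php-fpm.d/www.conf (illustrative values only)
; assumed budget: ~1GB left for PHP after MySQL/OS/caches, at ~64MB per busy worker
pm = dynamic
pm.max_children = 16      ; ~1GB / ~64MB per busy worker
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500     ; recycle workers periodically to contain slow leaks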

Related

IIS - PHP 8.1.4 Performance Degrades After 2 Hours

I am running IIS 10 on Windows Server 2016. I am using PHP 8.1.4 NTS with FastCGI. PHP is optimized following the usual recommendations.
I noticed that the server's response times start to increase after about 2 hours. For example, TTFB is roughly 150-200ms right after the IIS worker processes start, and sites load very quickly. Then, after about 2 hours, performance starts to decline: TTFB creeps up and eventually plateaus at around 500ms, sometimes going as high as 800ms.
If I recycle the IIS application pool, TTFB drops back to ~200ms and stays there for another 2 hours.
I'm trying to keep our server response times fast and am curious what could be causing performance to degrade after a few hours. Should we set the pool to recycle more often? That can work, but it seems like something else is going on, and you shouldn't have to do that.
The server does not have high CPU, disk, or RAM usage. The w3wp and php-cgi processes have very little memory usage (10-20MB each). CPU is almost always under 10%, and RAM is only 50% in use.
I have optimized the IIS FastCGI parameters and application pool parameters to the recommended settings (10k requests, etc.; see the sketch below).
I have also reviewed the MySQL 8.0 server logs for slow queries, but none were found.
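For reference, the "10k requests" figure usually refers to FastCGI's instanceMaxRequests, and the usual rule is to keep it at or below PHP's own PHP_FCGI_MAX_REQUESTS so php-cgi recycles itself before IIS force-kills it. A sketch with appcmd (the php-cgi.exe path is an assumption):

%windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi /"[fullPath='C:\PHP\php-cgi.exe'].instanceMaxRequests:10000" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi /+"[fullPath='C:\PHP\php-cgi.exe'].environmentVariables.[name='PHP_FCGI_MAX_REQUESTS',value='10000']" /commit:apphost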

My server is getting high traffic and reaching its limits - what should my new structure be?

Current config:
16GB RAM, 4 CPU cores, Apache 2.2 using the prefork module (set to MaxClients 700, since the avg. process size is ~22MB), with the suexec and suphp mods enabled (PHP 5.5).
The back end uses CakePHP 2 and stores data on a MySQL server. The site consists of text and some compressed images on the front end, with data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peaks I currently reach 700+ simultaneous connections easily, which fills MaxClients. When I run apachectl status at those moments, I can see that all processes are in use.
The CPU is fine, but the RAM gets completely used.
Potential traffic:
The traffic might grow to ~200,000 unique visitors daily, or even more. It might also not. But if it happens I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, for example with 192GB RAM and 20 cores.
I could keep exactly the same config (which means I would then be able to handle 10x my current traffic).
But I wonder if there is a better, proven config for my case that uses fewer resources while being just as efficient?
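For reference, the arithmetic behind the current ceiling and the proposed upgrade, using the ~22MB average process size quoted above:

16GB RAM / ~22MB per prefork child ≈ 700-730 children, which matches the current MaxClients
192GB RAM / ~22MB per child ≈ 8,700 children, before reserving RAM for MySQL, caches, and the OS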
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes will contribute to less CPU busy time.
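To verify the effect, check the relevant counters before and after, for example:

SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';
SHOW GLOBAL VARIABLES LIKE 'thread_cache_size';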
Observations:
A) 5.5.54 is past End of Life, newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even with 5.5.54.
C) You should be able to gracefully migrate to innodb_file_per_table once you turn on the option. Your tables are already managed by the InnoDB engine.
For additional assistance, including free downloads of utility scripts, please view my profile and Network profile.

Optimal memory for page loading in hosted environment?

In a PHP shared-hosting environment, what would be an optimal memory consumption for loading a page? My current PHP script consumes 3,183,440 bytes (~3MB) of memory. What should I consider good memory usage if I want to serve, say, 10,000 users in parallel?
Please be detailed, as I am a novice in optimization part.
Thanks in advance
3MB isn't that bad. Keep in mind that parts of PHP are shared; depending on which server is used (IIS, nginx, Apache, etc.), you can also set up pools and clusters when you need to scale up.
But the old adage "testing is knowledge" applies here: run load tests against the site at 10 -> 100 -> 1000 concurrent connections and look at the performance metrics; that will give you more insight into how much memory is required (see the sketch below).
For comparison, the site I normally work on averages 300+ concurrent users online with memory usage just under 600MB; however, when I run certain processes locally, a single request easily uses 16MB.
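Two quick, illustrative ways to ground those numbers: first, note that 10,000 truly simultaneous requests at ~3MB each would need on the order of 30GB for PHP alone, so concurrency limits matter more than the per-page figure. Second, a minimal sketch for measuring the real peak per request:

<?php
// log the real peak memory (in bytes) this request used, including PHP's internal buffers
register_shutdown_function(function () {
    error_log('peak memory: ' . memory_get_peak_usage(true) . ' bytes');
});

For the load test itself, a plain ApacheBench run such as "ab -n 1000 -c 100 http://yoursite/page.php" shows how latency and memory scale as concurrency grows.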

Need help with a rough calculation of concurrent users

Here are the website specs for the configuration below.
Social networking site - 70% dynamic content
Linux CentOS 6.6
Apache web server
PHP
Server specs x 2 (main server and SQL server):
4 x Intel® Xeon® E5-4640 v2 2.20GHz, 20M Cache, 8.0GT/s QPI, 10 Core
48 x 16GB (768 GB) RDIMM, 1600MT/s, Low Volt RAM
4 x 300GB 15K RPM SAS 6Gbps
for other storage = Dell Storage Direct-Attached Storage (DAS)
network = 10 gigabit / sec
Assume that memcached, a load balancer, and other extra servers are present but not included here (I just need a rough calculation).
My questions are:
How many concurrent users (users clicking at the same time) can this platform handle, assuming each user's average connection is 512 kilobit/sec?
Which factor do concurrent users depend on most? (RAM > CPU > HDD, is this right?)
I am not an expert; this question is for educational purposes only.
This question is very vague. The load that you can support will depend on the complexity of your PHP code and database design... planning (and even testing) load is a complicated topic.
You could also configure your hardware in a variety of ways which will have an impact on performance. Which RAID system you use will depend on whether your application is read or write heavy, as will your database design.
You will also need to consider whether to use Virtualisation for backup/redundancy which adds a layer of performance overhead...
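One ceiling that can be computed from the numbers given is bandwidth alone: 10 Gbit/s of network divided by 512 kbit/s per user is roughly 19,500 users saturating their links at the same instant. In practice the real limit will be hit by CPU or the database well before that, for the reasons above.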

Apache Server Slots, Memory, KeepAlive, PHP and SSL - how to speed up

On a Debian web server (VPS) with a good CPU, 6GB RAM, and a fast backbone Internet connection, I run a PHP application. PHP runs in "prefork" mode (incl. APC opcache), because whenever you search for PHP and the worker MPM, there are abundant warnings regarding thread safety. The PHP application is quite large, so each server process requires about 20 to 30MB RAM. Sensitive data is processed by the application, so all connections to the Apache server are SSL encrypted.
Typically, the application shows no or few images (about 1-3 files incl. CSS and JS per request) and each user sends a new request about every minute (30 sec to 4 minutes, depending on the user).
Recently, this application faced a big storm of user requests (it was planned, not a DoS: about 2,500 concurrent users). While the CPU did fine (<50% use), my server quickly ran out of slots. The point is that, in prefork mode, each slot requires memory, and 6GB is just enough for a MaxClients of about 200 slots.
Problem 1: According to Apache server-status, most slots were occupied "..reading..", sometimes for 10 seconds and more, while PHP processing takes 0.1 to 2 seconds. Little data is sent by the users, so I guess this is actually the SSL handshake. This, of course, occupies lots of slots. (I also enabled and configured mod_reqtimeout to drop very slow clients and, following http://unhandledexpression.com/2013/01/25/5-easy-tips-to-accelerate-ssl/ , used SSLHonorCipherOrder to prefer faster ciphers; SSLCertificateChainFile is also transmitted.)
Problem 2: If I enable KeepAlive (only 1 or 2 seconds) to reduce the SSL overhead, slots are kept open and therefore occupied twice as long as the PHP processing itself would require.
Problem 3: If I actually wanted to serve 2,500 users and use KeepAlive to speed up SSL, I would require 2,500 slots. However, I won't have a machine with 32GB RAM.
With enough users on the server to test its limits, I was stuck at about 110 requests per second, at about 50% CPU load on a quad-core system (max. 400%), and fewer req/sec if I (re-)enabled KeepAlive. 110 req/sec on a modern web server seems ridiculous! I cannot believe that this is actually all Apache, PHP, and SSL can perform.
Is there a major flaw in my thinking? Am I hitting a basic limitation of prefork mode? Did I ignore the obvious? Is SSL actually such a performance-eater? Thanks for any hints!
I'm the author of that article about SSL performance. I don't think the handshake is responsible for the 8+ seconds on reads. You can get useful information by using http://www.webpagetest.org/ . The handshake is done when a request is marked as "connected".
My guess would be that the slow processing of the PHP app with a lot of concurrent users can make some users wait a lot more.
Here are some ideas to get better performance:
I don't think KeepAlive would be a good idea if each client only sends a request every minute.
You could enable SSL session tickets to reduce the handshake overhead.
The worker MPM works fine for a lot of different setups, so I encourage you to try it out.
Caching will probably not help you if the clients receive a different response every time.
You should test PHP-FPM; that could speed up the PHP code.
Also test APC, to cache precompiled PHP code.
I don't know anything about the architecture of your app, but you could defer sending the results: get the data from the client, send an immediate answer ("processing data..." or something like that), process the data in a background process, then on the next request, send the calculated answer.
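A minimal sketch of that deferred pattern, assuming a jobs table and hypothetical helpers store_job() and fetch_result():

<?php
// request A: accept the data, queue it, and answer immediately
$jobId = store_job($_POST);              // e.g. INSERT INTO jobs (payload, status) VALUES (..., 'pending')
echo json_encode(['status' => 'processing', 'job' => $jobId]);

// a separate CLI worker (cron job or daemon) picks up pending jobs,
// does the heavy processing, and writes the result back to the jobs table

// request B (the client's next poll): return the result once it is ready
$result = fetch_result($_GET['job']);    // NULL until the worker has finished
echo $result === null
    ? json_encode(['status' => 'processing'])
    : json_encode(['status' => 'done', 'result' => $result]);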
