PHP-FPM: ondemand vs dynamic vs static?

Last night I visited the web application from a browser and got a "can't be reached" 504 message.
On checking the client's server, I found this timeout error in the web server log:
The timeout specified has expired: [client xxxxxxxx] AH01075: Error dispatching request to : (polling)
I advised the client that this was due to them not having tuned their PHP-FPM values. Basically, the problem comes down to their use of the dynamic process manager in the FPM config, and I suggested they try the ondemand setting instead and see how that goes.
The site in question loads fine from our end, but I am wondering whether this is the best advice, i.e. changing from dynamic to ondemand on a VPS with 8 GB RAM, 2 cores and a 180 GB SSD. Any recommendations on which is best?
The client is running a WordPress app on a high-traffic site.
The web application stack is an NGINX + Apache2 hybrid (I can use .htaccess). Let me know what else you need to know.
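For reference, here is a minimal sketch of the two pool configurations being compared; the values are placeholders that would need tuning for the actual workload, and the pool file path depends on the distribution:

; dynamic: keeps a pool of spare workers warm between requests
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6

; ondemand: forks workers only when requests arrive, frees RAM when they idle out
pm = ondemand
pm.max_children = 20
pm.process_idle_timeout = 10s
pm.max_requests = 500

The trade-off the question is really about: ondemand keeps idle memory low but pays a fork cost on traffic bursts, while dynamic (and static) keep workers ready at the expense of RAM.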

Related

cgi/fastcgi process - 0% CPU and N% Memory?

I have a PHP WordPress website hosted on IIS and have been using FastCGI. Below is my FastCGI configuration, followed by a screenshot of Task Manager.
There are many FastCGI processes that are using 0% CPU but consuming a certain portion of RAM. Is this OK, or is there some misconfiguration causing this?
I researched online to find the reason for this, but did not find anything.
As most web servers do, IIS reuses its PHP processes multiple times before restarting them. It leaves idle processes running so it can serve incoming requests without the extra latency of spinning up new processes.
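If those idle workers ever need to be capped or recycled sooner, the relevant knobs live in the <fastCgi> section of IIS's applicationHost.config; a rough sketch with a placeholder path and illustrative values:

<fastCgi>
  <!-- maxInstances caps concurrent php-cgi workers, instanceMaxRequests recycles them
       after N requests, and idleTimeout (seconds) removes workers that sit idle -->
  <application fullPath="C:\php\php-cgi.exe"
               maxInstances="4"
               instanceMaxRequests="200"
               idleTimeout="300" />
</fastCgi>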

NGinx + PHP-FPM - Response time of 2.2x minutes for the first request

I deployed Nginx, PHP-FPM and PHP 8 on an EC2 / Amazon Linux 2 instance (T4g / ARM) to run a PHP application, as I had done for the previous version of this application with PHP 7.
It runs well, except for first requests. Whatever the action (clicking a button, submitting text, etc.), the first request always takes around 2.2 minutes, then the following ones run quickly.
The browsers (Firefox and Chrome) are just waiting for the response, then react normally.
I see nothing in the logs (in particular, the slow log is empty) and the caches seem to work well.
I guess I missed a configuration point. Based on my reading, I tried a lot of things in the PHP-FPM and PHP configuration, but without success.
Has anyone already encountered this kind of issue?
Thanks in advance
Fred
What I have already tried:
- enabling all logging for PHP-FPM and PHP,
- increasing the memory available to the process,
- checking the system parameters (ulimit, etc.),
- etc.
You've not provided details of the nginx config, nor the fpm config.
I see nothing from the logs
There's your next issue. The default (combined) log format does not show any timing information. Try adding $upstream_response_time and $request_time to your log file. This should tell you if the issue is outside your host, between nginx and PHP, or on the PHP side.
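A sketch of what that could look like in the nginx config (the format name and log path are placeholders; the variables are standard nginx ones):

# combined-style log with upstream and total request timing appended
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
                 'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log timed;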
You should also be monitoring the load and CPU when those first couple of hits arrive along with the opcache usage.
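One way to eyeball opcache usage is a small script served through the same FPM pool (checking from the CLI would show the CLI's opcache, not FPM's); a minimal sketch:

<?php
// Report opcache hit rate and memory fill for the pool serving this script.
$status = opcache_get_status(false); // false: skip the per-script breakdown
if ($status === false) {
    exit('opcache is not enabled for this SAPI');
}
printf(
    "hit rate: %.1f%%, memory used: %.1f MB (free: %.1f MB)\n",
    $status['opcache_statistics']['opcache_hit_rate'],
    $status['memory_usage']['used_memory'] / 1048576,
    $status['memory_usage']['free_memory'] / 1048576
);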
First of all, thanks to #symcbean for the pointer. It helped me find the script that was taking a long time to render and fix the problem.
The problem was not due to the configuration of Nginx, PHP-FPM or PHP. It was caused by an obscure auto-update parameter in the application running on top of these components, which forced the application to call a remote server and blocked rendering.

How best to troubleshoot an Apache web server with lots of requests in "being processed" status

I am running a Linux CentOS/Plesk box with a medium/high traffic PrestaShop e-commerce website.
I use the stock Plesk configuration with PHP 7.0 FPM served by Apache, with Nginx as a reverse proxy. I only made some tweaks to the FPM pool settings according to the server's capacity, basically increasing the MaxChildren value to serve more requests.
For some days I have been encountering occasional website slowdowns that I am trying to troubleshoot (the website has not had any particular increase in visits lately).
I have already performed some checks on:
- server logs: no particular errors in the error logs
- server load average (it is OK)
- I/O "wa" value is OK
- the MySQL server has no slow queries during slowdowns (SHOW FULL PROCESSLIST never returns long-running queries)
- netstat (no DDoS / strange connections)
I installed mod_status for Apache and noticed from the server-status page that during slowdowns I have a high number of "requests currently being processed" in "W" status [sending reply]; during slowdowns there can be up to 70-80 of those requests for several seconds. So I can correlate the slowdowns with Apache being busy delivering requests, but I can't figure out why, or which application/webpage component is the source of the problem.
My question here is really a request for advice on how to identify the culprit of the slowdowns (a PHP script? a stuck external service during the Apache request?).
thank you.
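One way to narrow this kind of thing down is the PHP-FPM slow log, which dumps a stack trace of any request running longer than a threshold; a sketch, assuming extra directives can be added to the Plesk-generated FPM pool (the log path is a placeholder):

; in the FPM pool configuration for the site
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/example-slow.log

A script stuck on an external service would then show up in the trace with the blocking call at the top.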

I am receiving this error: (1226) User 'qe' has exceeded the 'max_user_connections' resource (current value: 30)

So, after researching this a lot, I am seeking help from somebody who has encountered this and found a way out.
We developed a PTC script for a client and it worked fine, but as the user base grew it started displaying the error below:
Error : (1226) User 'qe' has exceeded the 'max_user_connections' resource (current value: 30)
After seeking help, some people said it is a server-related issue, while others pointed out that it was an issue with the database design of the script.
I am looking for a way to solve this problem; I have tried tons of things.
We are using GoDaddy hosting at the moment. They increased the limit from 30 to 50, but I'm sure the problem is going to show up again.
There's no problem with the database, the problem is in how you handle database connections from your software.
The way your script is set up is that every connection to your web server also opens a connection towards MySQL. That's not the scenario you want.
Raising the limit won't fix the issue, it will just delay yet another error. What you should do is use persistent connections.
One of the reasons using php-fpm instead of server APIs such as mod_php is preferred is that a fixed number of PHP processes is booted and a pool of connections to backing services can be kept alive.
The flow would be the following:
- use php-fpm; both Apache and nginx can speak to php-fpm processes over the FastCGI interface
- spawn a relatively low number of child processes for php-fpm. This shouldn't be overly large; the default config usually works out. I'll guess you don't run a hexa-core system, so 4-6 child processes should be fine
- use persistent MySQL connections (see the sketch after this answer)
What does this do? Your server accepts the request and sends it to php-fpm, which processes it when it becomes free. Each process uses 1 connection to MySQL. This means you can never hit some sort of hard limit like you have.
If your server is busy, the server should queue up the requests until PHP is capable of handling them. Be it Apache or nginx that you use, this approach will work well.
If your site is busy, it's likely that the web server can accept connections and serve static content faster than PHP can process dynamic content. In that case you have the option of adding another physical machine (or more) that runs php-fpm. Instructing your web server to round-robin between the machines that serve PHP is trivial for both of the mentioned web servers.
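As an illustration, a minimal nginx sketch (backend addresses and the web root are placeholders) that spreads PHP requests across two php-fpm hosts, round-robin by default:

# pool of php-fpm backends; nginx rotates between them round-robin by default
upstream php_pool {
    server 10.0.0.11:9000;
    server 10.0.0.12:9000;
}

server {
    listen 80;
    root /var/www/html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_pool;
    }
}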
Bottom line is that you want to utilize your resources in an optimal way. Opening and closing MySQL connections on every request isn't optimal. Pooling connections is.
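A minimal sketch of what a persistent connection looks like in PHP, assuming PDO with placeholder credentials (mysqli offers the same via a "p:" host prefix):

<?php
// Persistent PDO connection: reused by the same php-fpm child across requests
// instead of being opened and torn down on every page load.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app;charset=utf8mb4', // placeholder DSN
    'app_user',                                        // placeholder credentials
    'secret',
    [
        PDO::ATTR_PERSISTENT => true,
        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
    ]
);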

mod_fcgid + PHP + apache lockup

I'm running a fairly typical LAMP stack with PHP running through mod_fcgid. I'd consider the server to be under "high load" given the amount of traffic it receives.
There is an intermittent problem, where Apache is reporting all connections to be in the "Sending content" state ("W" on the monitor) when accessing sites that rely on PHP.
There are no PHP errors to speak of; it's as though PHP isn't actually getting called during these "lockup" periods. However, in the Apache site logs I'm seeing the following:
(103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
[warn] mod_fcgid: can't apply process slot for /var/www/cgi-bin/php.fcgi
During this time I can still access sites that do not depend on PHP, such as the apache status and HTML-only virtual hosts (that don't have the PHP handler include).
The php.fcgi script has PHP_FCGI_MAX_REQUESTS=500 set, because I have read there is a race condition problem with PHP running in CGI mode. The fcgid.conf also has MaxProcessCount=15 set.
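For context, a typical php.fcgi wrapper of the kind being described looks roughly like this (the php-cgi path is a placeholder; the actual script may differ):

#!/bin/sh
# Recycle each php-cgi process after 500 requests to work around the
# FastCGI race condition mentioned above.
PHP_FCGI_MAX_REQUESTS=500
export PHP_FCGI_MAX_REQUESTS
exec /usr/bin/php-cgi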
Has anyone else experienced this bug, and if so, how can it be resolved?
I managed to fix this one myself.
To solve this problem, add stricter checks for process hangs in the FastCGI configuration and reduce the lifetime of your PHP instances:
# timeout for connecting to a FastCGI process
IPCConnectTimeout 20
# retire PHP processes sooner, both by age and by idle time (scanned every 30s)
ProcessLifeTime 120
IdleTimeout 60
IdleScanInterval 30
# kept one below PHP_FCGI_MAX_REQUESTS (500) so mod_fcgid recycles a process before PHP exits on its own
MaxRequestsPerProcess 499
MaxProcessCount 100
Depending on your requirements, this can work well for a properly configured server handling in excess of 50k hits per hour.
You will find the number of recorded defunct / "zombie" PHP processes increases significantly. This is good, however, as previously the processes would have simply become unresponsive and the FastCGI manager would have continued to pipe requests to them!
I would also advise removing all override directives from your php.fcgi script, as this can cause problems with your system. Try to manage as much as possible from the primary FastCGI configuration in Apache.
We went with Nginx + http://php-fpm.org/
Try "strace -p".
I also saw lock-ups happen when some PHP software was trying to request a file from the same server it's running on (file_get_contents('http://localhost...')).
