I'm using IIS 7.5 with PHP and I'm having trouble with my application: it is VERY slow and can take more than 2 minutes to display the login screen.
I believe this is due to some kind of queue of requests to process.
I've taken a look at the "Worker Processes" menu in IIS and found that there are tens of requests in the DefaultAppPool which seem to be waiting for a response.
Is this normal? How can I get rid of them?
I think you have a bottleneck somewhere in your code, because servers like Nginx, Apache, and IIS all perform well in most situations (high-load sites are a separate topic).
So I suggest you try profiling your code. For example, you can use xhprof:
https://github.com/phacility/xhprof
xhprof will show you where the bottleneck in your code is.
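As a minimal sketch, assuming the xhprof extension is installed and the repository's helper classes are on the include path, a request can be wrapped like this (the "my_app" namespace is just an illustrative label):

```php
<?php
// Start profiling at the top of the front controller.
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

// ... run the application ...

// At the end of the request, collect and save the profile.
$data = xhprof_disable();

include_once 'xhprof_lib/utils/xhprof_lib.php';
include_once 'xhprof_lib/utils/xhprof_runs.php';

$runs = new XHProfRuns_Default();
// The returned run id can be opened in the bundled xhprof_html UI,
// which lists functions sorted by inclusive/exclusive wall time.
$run_id = $runs->save_run($data, 'my_app');
```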
I deployed Nginx, php-fpm and PHP 8 on an EC2 instance running Amazon Linux 2 (T4g / ARM) to run a PHP application, as I had done for the previous version of this application with PHP 7.
It runs well, except for all first requests. Whatever the action (clicking a button, submitting text, etc.), the first request always takes about 2.2 minutes; the following ones run quickly.
The browsers (Firefox and Chrome) just wait for the response, then react normally.
I see nothing in the logs (in particular, the slow log is empty) and the caches seem to work well.
I guess I missed a configuration point. Based on my reading, I tried many things in the php-fpm and PHP configuration, but without success.
Has anyone already encountered this kind of issue?
Thanks in advance
Fred
Enabling all logs for php-fpm and PHP,
Increasing the memory available to the process,
Checking the system parameters (ulimit, etc.),
etc.
You've not provided details of the nginx config, nor the fpm config.
I see nothing from the logs
There's your next issue. The default (combined) log format does not show any timing information. Try adding $upstream_response_time and $request_time to your log format. This should tell you if the issue is outside your host, between nginx and PHP, or on the PHP side.
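A sketch of what that looks like in nginx.conf, assuming the stock combined format as a starting point (the format name "timed" and the log path are placeholders):

```nginx
# Extend the combined format with timing fields (http block).
log_format timed '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log timed;
```

$request_time covers the whole request as seen by nginx, while $upstream_response_time covers only the time php-fpm took; a large gap between the two points at the network or the client side rather than PHP.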
You should also be monitoring the load and CPU when those first couple of hits arrive along with the opcache usage.
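For the opcache side, a quick sanity check can be scripted, assuming the Zend OPcache extension is loaded (drop this in a throwaway script on the server, not in production code):

```php
<?php
// Print a few opcache health indicators; opcache_get_status(false)
// omits the per-script list to keep the output small.
$status = opcache_get_status(false);

printf("enabled:     %s\n", $status['opcache_enabled'] ? 'yes' : 'no');
printf("memory used: %d bytes\n", $status['memory_usage']['used_memory']);
printf("hit rate:    %.1f%%\n",
       $status['opcache_statistics']['opcache_hit_rate']);
```

A near-zero hit rate on those slow first requests would suggest the cache is cold or being invalidated.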
First of all, thanks to @symcbean for the pointer. It helped me find the script that was taking a long time to render and fix the problem.
The problem was not in the configuration of Nginx, PHP-FPM or PHP. It was caused by an obscure auto-update parameter of the application running on these components, which forced the application to call a remote server and blocked rendering.
I am running a Debian 7 server. After performing load testing with JMeter on my website, I noticed that MySQL was dying after 50 users, PHP after 100+ users, and Apache 2 after 200+ users. Now my question is: what is the best way to restart these services if they are terminated or frozen?
Restarting a service means killing all its current processes and starting new ones. In the meantime you have lost/dropped all the requests from legitimate users, who will see an HTTP error or a timeout when their connections are dropped.
I would ask myself: are you happy with 200+ users? Is MySQL your bottleneck? Etc.
Use some sort of monitoring service like New Relic and, as a workaround, restart those services when alerts start coming in, either manually or automatically.
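For the automatic route on a Debian 7 box, one common option is monit, which watches a pidfile or port and restarts the service when the check fails. A hedged sketch (pidfile paths and init scripts vary by distro and must be checked against your system):

```
# /etc/monit/conf.d/web-stack — restart services when they die.
check process mysql with pidfile /var/run/mysqld/mysqld.pid
    start program = "/etc/init.d/mysql start"
    stop program  = "/etc/init.d/mysql stop"
    if failed host 127.0.0.1 port 3306 then restart

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed host localhost port 80 protocol http then restart
```

Note this only papers over the symptom; the services will keep dying under the same load until the underlying bottleneck is fixed.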
But if you want to improve your site's performance, deploy your service on better infrastructure so it can scale to bigger numbers, or improve the code/app architecture used on your site, e.g. put some extra caching between MySQL and your application.
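The "caching between MySQL and your application" idea can be sketched in PHP with the memcached extension; the key, TTL, and table name here are made up for illustration:

```php
<?php
// Cache an expensive query result for 60 seconds so repeated hits
// under load stop reaching MySQL. Assumes a local memcached server.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function get_user_count(Memcached $cache, PDO $db) {
    $count = $cache->get('user_count');
    if ($cache->getResultCode() === Memcached::RES_NOTFOUND) {
        // Cache miss: hit the database once, then store the result.
        $count = (int)$db->query('SELECT COUNT(*) FROM users')->fetchColumn();
        $cache->set('user_count', $count, 60);
    }
    return $count;
}
```

Checking getResultCode() rather than comparing the value to false distinguishes a genuine miss from a legitimately cached falsy value.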
Also, it would be interesting to know how you managed to test Apache, MySQL and PHP separately, especially httpd vs PHP processes, with a tool like JMeter; in my experience it can certainly test Apache separately and MySQL separately, but your PHP scripts and Apache are really tightly bound together.
I have an Enterprise PHP application hosted on RHEL 5.5. It works with MySQL and perl scripts.
It is causing regular CPU and memory spikes. I can see httpd and MySQL processes in top command output.
I know I can profile individual PHP scripts. But is there a way to get statistics about how many web hits my application got, which script was called with what arguments, and what its execution time was?
I intend to start refactoring and optimizing the top 10 scripts that show up in the results, until the numbers become acceptable.
Your first port of call is your web server logs. You should be able to correlate the spikes in CPU usage with URLs.
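To make that correlation possible, the access log needs a timing field, which the default Apache formats lack. A sketch for httpd.conf (the format name and log path are placeholders):

```apache
# Log the request duration so slow URLs can be matched against CPU
# spikes. %D is the time taken in microseconds (%T is whole seconds).
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog logs/access_timed.log timed
```

Sorting the resulting log by the last field surfaces the slowest scripts, which gives you the "top 10" list to start refactoring from.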
Consider using a log analyser such as Webalizer, which can extract lots of useful usage data from the Apache logs.
For web server statistics you can always use AWStats; it is a great handy tool for statistical information such as number of hits, dynamic reports, etc. http://awstats.sourceforge.net/
Thanks & Regards,
Alok Thaker
I develop a web server's PHP scripts that retrieve tiled images covering the Earth's surface, as in World Wind.
Suppose I have get_image.php that returns image/jpeg or image/png as the response.
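For context, such a script typically looks something like the following hypothetical sketch; the tile directory layout and parameter names are invented for illustration (written against PHP 5.3, hence isset() rather than the later ?? operator):

```php
<?php
// Hypothetical tile server: map z/x/y parameters to a file on disk.
$x = isset($_GET['x']) ? (int)$_GET['x'] : 0;
$y = isset($_GET['y']) ? (int)$_GET['y'] : 0;
$z = isset($_GET['z']) ? (int)$_GET['z'] : 0;

$path = sprintf('tiles/%d/%d/%d.png', $z, $x, $y);
if (!is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}
header('Content-Type: image/png');
header('Content-Length: ' . filesize($path));
readfile($path);
```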
The initial conditions: I have Windows Server 2003 to test my script on, with Apache 2.2.16 preinstalled using a threaded MPM, so I had to install thread-safe PHP 5.3 to use it as an Apache module.
After the script was successfully written, I decided to load-test it with JMeter. Starting with a single virtual user issuing at most one request per 0.5 seconds, and adding a virtual user every minute, at nine virtual users I start to get unhandled requests, although the server's Task Manager shows no more than 8-10% CPU. The maximum throughput I get is 1600-1700 successful responses per minute (with eight virtual users).
I am not a system administrator and have no experience with heavy load, so my questions are: could this be the PHP thread-safety problem discussed here? How can I pinpoint the problem I'm facing? Would it be better to run my script as CGI under IIS + FastCGI, or should I look at a Linux-based web server?
P.S. I also use a memcached server and php_memcache.dll (the so-called thread-safe version) downloaded from http://downloads.php.net/pierre. This module is not officially supported on Windows and is probably not really thread safe, so it could compound the problem if the issue described really is PHP thread safety.
If I were you, I would use Xdebug to track down which parts of your application are eating the time and causing the failed requests.
Since you are on Windows (I presume), you can use a handy little program called WinCacheGrind to open and review the output files. It will show you, line by line or function by function, exactly how long the different blocks of code in your application take to run.
If you find that there is nothing surprising in the grind then that is the time to start looking at the environment.
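The Xdebug setup behind this can be sketched as a php.ini fragment. These are the Xdebug 2.x settings contemporary with PHP 5.3 (Xdebug 3 later replaced them with xdebug.mode=profile and xdebug.output_dir); the DLL name and output path are assumptions for a Windows install:

```ini
; php.ini — enable the Xdebug profiler.
zend_extension = "C:\php\ext\php_xdebug.dll"
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = "C:\xdebug_profiles"
; The cachegrind.out.* files written there open in WinCacheGrind.
```

Note that profiling every request adds significant overhead, so enable it only for the test run, not under full load.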
I want to test something that happens when Apache crashes. The thing I want to test involves Windows asking me if I want to send an error report. Is there any way to make Apache crash so that Windows asks me to send an error report for it?
Just kill the running Apache instance.
In Windows: go to Task Manager and kill the process.
In Linux: pkill processname
Take a look at Advanced Process Termination, especially its crash options, those might do what you want (display the send error report message box), although I haven't tested it. It's worth a shot though.
I agree with the earlier suggestion that you should crash it from within Windows.
The basics: on Unix, Apache handles connections by forking worker processes. Since Windows doesn't have a built-in fork(), Apache runs there as a single multi-threaded child process instead, and that arrangement can be glitchy, especially with many concurrent connections.
For me, every time I "restart" Apache on Windows while a connection is still open, I get an "Illegal Operation" from Apache's process. I'm not sure that can be reproduced 100% of the time, but it does happen to me from time to time when I restart.
Alex provides a possible answer here:
Microsoft Application Verifier [...] can do fault injection (Low Resource Simulation) that makes various API calls fail, at configurable rates. [...]