We have a Moodle implementation on IIS where the primary data/IIS server is on our LAN, but we also have a public-facing IIS server in our DMZ. Until recently, performance when accessing Moodle via the DMZ server was on par with accessing it via the LAN server; but last week I noticed that access via the DMZ was very slow and I was often getting 500 errors from timeouts. I increased the FastCGI Activity Timeout and the timeouts disappeared, but the site is now painfully slow.
Watching Activity Monitor while browsing the site via the LAN server, php-cgi.exe CPU usage climbs to 20-25% during active browsing. The same test against the DMZ server shows no change in CPU utilisation for the php-cgi processes: they all stay at 0-1%.
I moved the DMZ server onto the LAN and performance was immediately as expected: pages loaded quickly and php-cgi CPU utilisation went up to 20-25% while browsing.
I tested latency and bandwidth by copying files between the LAN and DMZ servers: pings are around 20ms, and throughput to the DMZ appears capped at 100 Mbps. That was unexpected, but I don't have historic pings to prove that latency used to be lower and bandwidth used to be higher.
Our core network provider recently performed maintenance and access to our DMZ dropped completely for a period until they 'fixed' the issue. It feels like they've introduced a bottleneck recently (traffic now routing through a 100 Mbps adapter?) and I have an open ticket, but I'm not sure how to prove this is the issue.
The only logs I can think to check are the IIS logs, looking at response time (time-taken). That appears to have gone up 2-4x since the maintenance, but it's not as conclusive as I'd like (I'm guessing because a good amount is cached locally). Is there anything else I could/should be looking at?
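For what it's worth, a quick script along these lines can average the time-taken field from the IIS logs; the log path and the presence of a time-taken column are assumptions to adjust for your W3C logging configuration:

```php
<?php
// Rough sketch: average the time-taken field (ms) from an IIS W3C log.
// Assumptions: log path is an example; column positions come from the
// #Fields: header, which varies with your logging configuration.
$logFile = 'C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex230101.log'; // example path

$fields = [];
$total = 0;
$count = 0;

foreach (file($logFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    if (strpos($line, '#Fields:') === 0) {
        // Map field names to column indexes from the header line.
        $fields = array_flip(array_slice(explode(' ', $line), 1));
        continue;
    }
    if ($line[0] === '#' || !isset($fields['time-taken'])) {
        continue; // skip other comment lines, or logs without time-taken
    }
    $cols = explode(' ', $line);
    $total += (int) $cols[$fields['time-taken']];
    $count++;
}

if ($count > 0) {
    printf("Requests: %d, average time-taken: %.1f ms\n", $count, $total / $count);
}
```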
Servers are Windows Server 2012 R2 Datacenter, PHP is 7.4 NTS x64, and Moodle is 3.10.
Many thanks.
It is difficult to reproduce your problem from the description alone. When a server hangs, crashes, or performs poorly, it is usually necessary to capture the server's thread stacks (a thread dump) for analysis. So I suggest you open a case via https://support.microsoft.com, where professional technicians can assist you in capturing and analyzing the dump file.
Related
I run my WordPress PHP website on Windows Server with IIS 7.5. I have noticed that even when there are no visitors to my website, some php-cgi.exe processes keep running for a long time, each at 15-20% CPU, so together they push the CPU to almost 100%, slowing down my website when a new visitor comes in.
I turned off all plugins and turned off the rewrite rules, but nothing helped.
What can be done to stop the php-cgi.exe processes from running after a visitor has left my website? Or any other ideas?
Thanks
This question is specific to PHP rather than IIS.
There are many reasons why the php-cgi.exe process can end up consuming 100% CPU in Task Manager.
For example, malformed PHP scripts containing an infinite loop: these poorly coded scripts keep the php-cgi.exe worker program on the server busy and will eat 100% of the CPU.
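As a contrived sketch of that failure mode (hypothetical code, not taken from any real plugin or script):

```php
<?php
// Guard while debugging: cap this request at 10 seconds so a runaway
// loop fails fast instead of pinning a worker indefinitely
// (the 10-second limit is a hypothetical value).
set_time_limit(10);

// Bug: the exit condition is never updated, so this loop spins at
// 100% CPU on one php-cgi.exe worker until the time limit kills it.
$done = false;
while (!$done) {
    $status = 'waiting'; // nothing here ever sets $done = true
}
```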
A large number of PHP processes on the server will also take up a considerable amount of server resources; if every user request spawns additional processes, CPU usage shoots up.
Besides that, PHP can burn CPU when it repeatedly fails to fetch certain content because it lacks the necessary user privileges.
See this blog for more details: https://bobcares.com/blog/php-cgi-exe-high-cpu/
I am hosting a PHP/MySQL site on Windows Server 2008, PHP v5.4, IIS 6.1. The PHP install was set up by the hosting company as part of Plesk. It is a development site, so there is almost zero traffic at present, but despite this, every so often php-cgi.exe starts multiple processes which occasionally reach the maximum memory available on the server and bring my site down.
The increases in memory use do not appear to correlate with times when there would be visitors. They occur a few times during the day (I'm not sure if there is a pattern), but I've noticed it seems to happen at about 11:15pm every night.
I've tried looking in the Windows Application logs but nothing shows up, and the IIS logs don't appear to have anything either. I'm not running a scheduled job (that I know of) that would cause this.
Could anyone please advise me on what this might be? The server has 1GB RAM and hosts only one very basic site.
ISSUE: My website is extremely slow processing pages, and oftentimes just hangs.
OLD environment: I used to run Apache 2.2 (from Apachelounge), PHP 5.3, and MySQL 5.5 on a Windows 2008 Web Server physical box, hosted by Peer1 (ServerBeach). The website ran fast and reliably. At the time, I was using "mysql_" commands versus "mysqli_".
NEW environment: I now have a Google Compute Engine VM running Windows Server 2012 R2. I installed Apache 2.4 from apachehaus.com (httpd-2.4.12-x64-vc11-r2.zip), PHP 5.6.9 (VC11 x64 Thread Safe), and MySQL 5.6.25 (64-bit). With some tweaking of the new httpd.conf and vhosts.conf files, I got my website back up and running, and I converted all the mysql_* references successfully to mysqli_*. Here's what I'm noticing: if I pull up Task Manager on the Apache server and watch the httpd process, every time I navigate to a page the CPU usage climbs drastically and then slowly falls back to 0%. Often it even hits 95-100% CPU utilization and stays there for a good 5 seconds before slowly dropping back down. Apache's memory utilization also grows with every page request and never goes down. The website and MySQL calls work and pages display fine; it just takes a long time to process.
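In case it helps, I'm considering logging peak memory per request to rule the PHP side in or out, with something like this (the log path is just a placeholder):

```php
<?php
// Append peak memory usage for each request to a log file, so growth
// across requests can be charted. The log path is a placeholder.
register_shutdown_function(function () {
    $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    $line = sprintf(
        "%s %s peak=%.2f MB\n",
        date('Y-m-d H:i:s'),
        $uri,
        memory_get_peak_usage(true) / 1048576
    );
    file_put_contents('C:\\temp\\php-mem.log', $line, FILE_APPEND);
});
```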
I believe this to be a PHP issue, but I could be wrong. Recommendations are welcome.
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and communicates with a locally hosted PHP script via a proxy and AJAX requests.
Recently we've moved the aforementioned locally hosted server from a horrible XP-based WAMP server to a virtualised Server 2008 machine running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls have started taking in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms, as it should for a local server on a relatively small-scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
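In its simplest form, that profiling looks something like this (the usleep() is a placeholder standing in for the code under suspicion):

```php
<?php
// Minimal wall-clock profiling: time a suspect block of code.
$start = microtime(true);

// ... the code under suspicion, e.g. the DB query or the AJAX handler body ...
usleep(250000); // placeholder standing in for real work

$elapsedMs = (microtime(true) - $start) * 1000;
error_log(sprintf('block took %.1f ms', $elapsedMs));
```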
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and everything will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of FastCGI processes or by increasing the amount of server RAM.
We are using JMeter to test our PHP application running on the Apache 2 web server. I can load up JMeter with 25 or 50 threads and the load on the server does not increase; however, the response time from the server does. The more threads, the slower the response time. It seems like JMeter or Apache is queuing the requests. I have changed the MaxClients value in the Apache configuration file, but this does not change the problem. While JMeter is running I can use the application myself and get respectable response times. What gives? I would expect to be able to tax my server down to 0% idle by increasing the number of threads. Can anyone help point me in the right direction?
Update: I found that if I remove sessions from my application, I am able to simulate a full load on the server. I have tried re-enabling sessions and using an HTTP Cookie Manager for each thread, but it does not seem to make an impact.
You need to identify where the bottleneck is occurring, and then attempt to remediate the problem.
The JMeter client should run on a well-equipped machine. I prefer a Solaris/Unix server running the JVM, but for fewer than 200 threads a modern Windows machine will do just fine. JMeter itself can become a bottleneck, and you won't get any meaningful results once it does. Additionally, it should run on a separate machine from the one you're testing, and preferably on the same network; WAN latency can become a problem if your test rig and server are far apart.
The second thing to check is your Apache workers. Apache has a module, mod_status, which will show you the state of every worker. It's possible to have your pool size set too low; from mod_status, you'll be able to see how many workers are in use. Too few, and Apache won't have any workers free to process requests, and the requests will queue up. Too many, and Apache may exhaust the memory on the box it's running on.
Next, you should check your database. If it's on a separate machine, the database could have an IO or CPU shortage.
If you're hitting a bottleneck, and the server and DB are on the same machine, you'll generally hit a CPU, RAM, or IO limit. I listed those in the order in which they are easiest to identify. If your app is CPU-bound, you can easily see CPU usage go to 100%. If you run out of RAM, your machine will start swapping; on both Windows and Unix it's fairly easy to see the available free RAM. Lastly, you may be IO-bound. This too can be monitored using various tools or stats, but it's not as obvious as CPU.
Lastly, and specific to your question, one thing stands out: it's possible to have a huge number of session files stored in a single directory. PHP often stores session information in files, and if this directory gets large, it takes PHP an increasingly long time to find a session. If you ran your test with cookies turned off, the PHP app may have created thousands of session files, one for each user request. A Windows server will slow down faster than a Unix server here, due to differences in the way the two operating systems store directories.
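As a quick sanity check, something along these lines will count the accumulated session files (assuming the default files-based session handler; note that session.save_path can carry an 'N;' depth prefix on some setups):

```php
<?php
// Count the session files PHP has accumulated in its save path.
// Assumes the default 'files' session handler with a plain path.
$path = session_save_path();
if ($path === '') {
    $path = sys_get_temp_dir(); // PHP falls back to the system temp dir
}

$count = 0;
foreach (new FilesystemIterator($path, FilesystemIterator::SKIP_DOTS) as $file) {
    if (strpos($file->getFilename(), 'sess_') === 0) {
        $count++;
    }
}
echo "Session files in $path: $count\n";
```

If the count is in the tens of thousands, tightening session garbage collection or splitting the save path across subdirectories should help.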
Are you using a Constant Throughput Timer? If JMeter can't service the throughput with the threads allocated to it, you'll see this queueing and blowouts in the response time. To figure out if this is the problem, try adding more threads.
I also found a report of this happening when there are JavaScript calls inside the script. In that instance, try moving the JavaScript calls to the test plan element at the top of the script, or look for ways to pre-calculate the value.
Try requesting a static file served by Apache rather than by PHP, to see whether the problem is in the Apache config or the PHP config.
Also check your network connections and configuration. Our JMeter testing was progressing nicely until it hit a wall; we eventually realized we only had a 100 Mb connection and it was saturated, and going to gigabit fixed it. Your network cards or switch may be running at a lower speed than you think, especially if their speed setting is "auto".