Apache deflate and progressive HTTP response - PHP

I work on an e-commerce website, and we are seeing a significant performance difference between our production server and our test server.
Both are VMs running on Windows Server 2008 R2 64-bit with WampServer 2.5 (Apache 2.4.9 / PHP 5.5.12).
(Note: I know WampServer is not meant for production use, but for now it's still the best solution for us, as we depend heavily on a Windows environment for our databases and other tools. We tried to optimize the Apache and PHP configuration for production, and we replicated that configuration on our test server, so we have the same environment on both machines.)
Everything ran fine until today. We were trying to improve Apache's file compression configuration (disabling it for images, enabling it for HTML files, etc.) when we noticed a major difference between the two servers.
On the same page (for testing purposes: a huge product list with a lot of content and images to display), same request, same user, same browser:
The production server seems to "prepare" the whole document before sending it. For several seconds I have to wait and watch a blank page, then everything shows up at once. In Chrome Dev Tools, the Waiting time is around 7 s and the Receiving time around 50 ms.
The test server does just the opposite: no blank page for seconds, the header displays very quickly and the rest of the content arrives step by step while I can already browse the page. The Waiting time is around 200 ms and the Receiving time around 11 s.
On my own development machine, I can reproduce both behaviours by toggling Apache's mod_deflate configuration.
So after several attempts, we simply disabled mod_deflate on the production server, and then on the test server. Both now have the exact same configuration, and still there is this big difference.
I also looked at the php.ini files, thinking about caching issues or something similar, but same deal here: both configuration files match, yet the two servers still behave differently.
We spent hours searching the web for answers, but nothing seems to work...
Please, can somebody help us with this?
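A minimal test page, sketched below, makes the two behaviours easy to compare; the padding size and loop length are arbitrary choices, not something from the original question. If the whole response is being buffered (by mod_deflate or by PHP output buffering), all the lines appear at once after about five seconds; if the response streams, one line appears per second.
<?php
// Streaming test page (a sketch). Compare how it renders on both servers.
header('Content-Type: text/html; charset=utf-8');
while (ob_get_level() > 0) {   // drop any PHP-level output buffers first
    ob_end_flush();
}
for ($i = 1; $i <= 5; $i++) {
    // Pad each chunk so it is large enough not to sit in small network buffers.
    echo str_pad("chunk $i at " . date('H:i:s') . "<br>\n", 4096);
    flush();
    sleep(1);
}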

Related

How to troubleshoot performance difference between 2 web servers?

I have a production virtual web server that is being migrated to a new virtual web server on the same local network. The problem is that there is a performance issue on the new server.
For example, there is one page that loads in about 1 second on the original server but takes over 25 seconds to load on the new one. I have already ruled out the database connection as the problem.
Both servers are Ubuntu Apache servers running PHP. There are slight differences in the software versions on the two servers, which I will list as best I can here.
My main question is: is there a general way to profile the web requests on each server?
Similar to the way I can profile a Python script or function and get a breakdown of which parts of the program take the most time, I would like to profile the web requests on one server compared to the other.
Of course, web requests to a server are fundamentally different from programs run on a local computer, but I need to find where the bottleneck is. Any help is greatly appreciated.
Old Server Config
Ubuntu 14.04 - PHP version 5.5.9
New Server Config
Ubuntu 16.04 - PHP version 5.6.31 (also tested with version 7, same result)
I would suggest logging the PHP script execution time.
If the slowness comes from somewhere in the PHP execution, you will notice it easily.
Log a timestamp at the start and one at the end. Then you can stress test both servers and compare execution times.
I seriously doubt the problem comes from PHP, but if you do that you could also see the difference with PHP 7, which should be around 30% faster.
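A minimal sketch of that kind of timing log (the log path is an assumption; use any file PHP can write to):
<?php
// Sketch: include this at the top of the entry script, or load it via
// auto_prepend_file in php.ini, to log total execution time per request.
$timer_start = microtime(true);
register_shutdown_function(function () use ($timer_start) {
    $ms  = (microtime(true) - $timer_start) * 1000;
    $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    // /var/log/php_timing.log is a placeholder path; pick one PHP can write to.
    error_log(sprintf("%s %s %.1f ms\n", date('c'), $uri, $ms),
              3, '/var/log/php_timing.log');
});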

Apache localhost not responding to clients until reset

I have set up a local server on a regular desktop (not a server-class machine) and have 3-4 client machines accessing the local web application I developed on the server via a Wi-Fi router (the server is connected to the router via cable, all clients via Wi-Fi).
When two of the clients are connected to the application all is well, but when a third (or more) machine joins in, there are periods where none of the machines gets any response from the server (the application web page keeps loading until I manually restart Apache on the server via Services). At times the server responds when one of the clients refreshes their page, but most of the time I have to restart Apache.
This occurs roughly once an hour on average (over 6 hours of continuous usage) while the clients are using the application.
Server is running Windows 7 and Apache v2.4 with PHP v5.4
Server and all client machines are running AVG Internet Security
Firewall is handled by AVG Internet Security
Is this issue due to the code in my application, the desktop not being able to handle requests like a server machine would, the antivirus, or a mix of the three?
If so, how can I set up the server to reset automatically?
Thanks
UPDATE
It is an application where users write reports on files after reviewing information.
- Frequent SQL requests for file data
- No images
- Some pages contain multiple SQL queries that build the page content
- The network has no internet connection
- The code does not request external information from the internet
- All client machines run the application in the Google Chrome web browser
This rarely happens, but sometimes the number of connections is restricted by a third-party component the application uses. It is hard to figure out the reason without more details, such as what your app does and which error code Apache or HTTP is returning.
This kind of situation is difficult to track, especially on Windows, where diagnostic tools aren't as readily available as on other platforms.
I suppose you can try to rule out the antivirus by either running the server and clients with no antivirus at all for some hours, or by disabling and re-enabling the antivirus when the hangup occurs.
Apart from that, you would need to pinpoint where the error occurs:
in the connection stage (Windows OS is the problem)
in the response stage (Apache is the problem - try fiddling with the child spawning parameters)
in the management stage (PHP is the problem - you can probably check this by switching the setup between PHP-as-a-module and PHP-as-CGI)
in the data stage (that is, the connection to the SQL server). You can check this by setting up some pages that use different combinations of session, database, and output buffering and seeing whether those pages remain reachable even when the application is hung up.
For an example of the last, if a page such as
<?php print date("H:i:s"); phpinfo(); ?>
remains reachable and correctly refreshes (that's why the date() call is there) even when the app does not respond, this demonstrates that Windows, Apache and PHP are "innocent", and either the SQL server has issues, or you are not interfacing with it correctly. It might, for example, be the case (though unlikely in this instance) that the resident PHP module is accumulating connections to the SQL server and not releasing them, so that after a while you need to stop Apache (thereby freeing the module) and restart it.
If this were the case, even if it's not a "real" fix, you can set up Apache so that all children die and are replaced after a small number of requests (the MaxConnectionsPerChild directive, formerly MaxRequestsPerChild; once it was 150, but when leaks all but disappeared I believe the default became 0 -- Apache children no longer die. Check it out, I might well misremember).
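As a sketch of the probe pages suggested above (the query parameter names and the mysqli credentials are placeholders, not something from the original answer):
<?php
// probe.php -- a diagnostic sketch. Call probe.php, probe.php?session=1 and
// probe.php?db=1 while the application is hung to see which layer stops responding.
if (!empty($_GET['session'])) {
    session_start();                     // exercises the session handler / file locking
    print "session OK<br>\n";
}
print date("H:i:s") . "<br>\n";          // a refreshing time proves the page really re-ran
if (!empty($_GET['db'])) {
    $db = new mysqli('localhost', 'user', 'password', 'appdb');   // placeholder credentials
    if ($db->connect_error) {
        print "db error: " . $db->connect_error . "<br>\n";
    } else {
        print "db OK<br>\n";
        $db->close();
    }
}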

Upgraded to PHP 7 on Windows: php_soap.dll causing crashes / frequent 500 errors

I have two servers, a test server running Windows 7... and a prod server running Windows Server 2008. (Yeah, it's unfortunate that they're different OS's.)
For months now, they've been running on PHP 5.4.1.4.
I decided to upgrade them to PHP 7. Everything went completely fine with the test box. But of course, it doesn't get much traffic.
On the prod / Windows Server 2008 box, it seemed like web apps would run for a minute or two and then show a "500 error". I could refresh and sometimes they'd work again; sometimes it would take a few minutes.
Nothing was getting written to the NEW PHP error log (even though IIS's PHP Manager section showed that we were pointed at the correct INI and the correct log file).
The web server's failed-request logs simply indicated that FastCGI was failing because of too many 500 errors.
I checked Event Viewer and I would see application crashes that would point to php_soap.dll.
Now, that file is THERE and it's the same size as the one I have over in non-prod.
Still, I thought perhaps it was because my scripts were getting 500 errors for a valid reason. So I investigated one of them. Confirmed that it was an exact match to a working one on the test box. Refreshed it...and it worked fine. Refreshed some more, 500 errors.
So, finally, I went into IIS Manager -> PHP Manager and disabled the SOAP extension.
I then STOPPED seeing the massive number of failed requests and I stopped seeing the 500 errors... for everything except the one script I have that makes SOAP calls.
I tried copying the dll from the test box over to the prod box. Enabled the extension again in PHP. The issue returned. So, I've pointed us back to the 5.4.1.4 config for now.
Any ideas on how I might figure out why this dll is causing issues and/or how to fix it?
Thanks!
-= Dave =-
I know this is old now, but I had a similar problem, and it turns out that the cached WSDL files are not binary compatible between versions of PHP.
By default in php.ini the SOAP module has caching enabled. In order to avoid a crash, you'll either need to clear the current WSDL cache or change the cache location for the new PHP install.
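A rough sketch of both options; the cache location depends on the soap.wsdl_cache_dir setting, so the cleanup path below is only a guess:
<?php
// Sketch: disable the WSDL cache for the current request, or clear stale
// cache files left behind by the previous PHP build.
ini_set('soap.wsdl_cache_enabled', '0');
ini_set('soap.wsdl_cache_ttl', '0');

// One-off cleanup of existing cache files (they are named wsdl-*). The temp
// directory is an assumption; check soap.wsdl_cache_dir for the real location.
$files = glob(sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'wsdl-*');
if ($files) {
    foreach ($files as $file) {
        @unlink($file);
    }
}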
Hope that helps...
I think I figured out a fix. I'm not sure why I'm only having to do this on the prod server, though: the directories containing my scripts that make SOAP calls were set to allow both Anonymous Authentication and Windows Authentication. When I was manually testing things in my browser, it would accept me and run the script as Anonymous.
I suddenly realized a theory: my erroring script was being called by remote desktop gadgets... so it was probably also defaulting to executing anonymously. But I have another script with SOAP in it that's run by a Scheduled Task (as a specific user), and I had NOT seen that one erroring!
So, I turned off Anonymous Authentication on the directory I was testing and reran my script from my browser. Sure, I had to log in, but it then worked! I checked my version of the Desktop Gadget that calls a SOAP script in that same directory...and it was now working too!
I think the key reason why I did not see this on the test machine is that we really don't have any Desktop Gadgets pointed at those proxy SOAP scripts over there. That, plus I had THOUGHT that my script that runs via Scheduled Task was failing on the prod machine, as I know I saw it throw 500 errors within the first few minutes after I activated PHP 7 the first time... and that same Scheduled Task script DOES run over on the test box.
Thanks!

IIS+PHP random Error 500.19

I'm running a WIMP stack on Windows Server 2012 R2 Standard, and I have a strange issue with IIS: after working for a random period of time (usually from several hours to days), most requests start to fail with an HTTP 500.19 error page.
What is weird is that it works for a period of time. Changing the user access settings on web.config or restarting the website both resolve the issue temporarily; however, it's a recurring problem.
I do realise that WIMP is not the most desirable stack (normally I'd opt for LNMP myself); however, our website will be migrated to an ASP.NET app soon, and since we had to do a server upgrade beforehand, we opted to install Windows on it to make the transition faster.
EDIT: After some testing, changing the NTFS permissions results in the same error until the website is restarted. Might that have something to do with the issue?
Also, I'm running the website with a pre-selected user rather than pass-through, if that helps.

Why Is Apache Giving 403?

I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs from a desktop app I am building in Xcode / Objective-C. The 12 POST requests are just a few KB each and go out instantly one after the other, and the Apache error log shows...
client denied by server configuration: /the-path/the-file.php
Apache 2.0, PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by 1 second) into the PHP file and that fixed it. This makes me think that I am breaking some limit on requests from a single IP within a certain amount of time. I have googled and combed through php.ini and the Apache configs, but I cannot find what that directive/setting might be.
I should mention that, although it varies, the first 4 or 5 POSTs usually work, then it starts returning the 403 error intermittently after that. It really acts like it's bogging down.
Any ideas?
The error tells you everything: most likely your VPS has flood control on its web server, which kicks in after 4 or 5 quick sequential hits. This has nothing to do with PHP itself; it is entirely down to Apache. In other words, your home setup is not the same as the VPS's setup.
Try disabling or configuring mod_evasive. It is an Apache module that provides evasive action in the event of an HTTP DoS, DDoS, or brute-force attack. (You can read more about it here.) To disable mod_evasive, use:
a2dismod mod-evasive
service apache2 restart
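Before changing anything server-side, you can confirm the rate-limit theory with a quick replay script. This is only a sketch standing in for the Objective-C client; the URL and the delay value are placeholders.
<?php
// Replay 12 small POSTs against the endpoint, optionally with a delay.
$url   = 'https://example.com/the-path/the-file.php';   // placeholder endpoint
$delay = 0;                                              // try 0, then e.g. 250000 (microseconds)

for ($i = 1; $i <= 12; $i++) {
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => array('probe' => $i),
        CURLOPT_RETURNTRANSFER => true,
    ));
    curl_exec($ch);
    printf("request %2d -> HTTP %d\n", $i, curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
    if ($delay > 0) {
        usleep($delay);
    }
}
If the 403s disappear as soon as a small delay is added, flood control (mod_evasive or the host's own limiter) is almost certainly the cause.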
