I have a PHP application that runs a report, firing off about 1 million SQL queries to SQL Server within 90 seconds. During that time no one else can use this web-based application - the egg timer just spins and nothing loads until the report times out or completes. I tested this in an isolated environment with just myself in it: I ran the report in one browser, and any actions from other browser windows against the same site hung.
Some details about the testing environment:
Windows 2008 R2 x64 - IIS 7.5 - PHP 5.3.8 via FastCGI
Windows 2008 R2 x64 - SQL Server 2008 R2 x64
FastCGI settings in IIS:
Instance MaxRequests = 200
Max Instances = 16
Activity Timeout = 70
Idle Timeout = 300
Queue Length = 1000
Rapid Fails PerMin = 10
Request Timeout = 90
Each SQL request completes in less than 60ms on the SQL Server side. CPU load on both the web server and the SQL server stays below 10%. The web server has 16GB of RAM, about 60% of which remains free while the report is running.
It seems that PHP is firing off so many requests to the SQL server that it becomes too busy to handle anything else. If that's the case, there should be something I can tweak to make PHP handle more concurrent requests.
Does anyone know? Please help!
I'll just stab in the dark here and assume it's due to session locking.
When you use the standard session handler that comes with PHP, it makes sure that your session files cannot be corrupted by using (advisory) write locks throughout the course of your script's execution (unless session_write_close() is called earlier).
Other scripts that try to access the same session (your browser would pass the same cookie value) will wait for the lock to get released, as long as it takes.
You can verify this by using two completely different browsers to simulate two users (one runs the report, the other browses the site). If the second browser isn't blocked, you can be pretty sure it's due to session locking.
This shouldn't normally be a problem, because you will know when you're running a report, but if it causes issues nonetheless you can consider two things:
don't start a session for the report script (but this also means unauthorized users could try to run your report script)
close the session before the grunt work starts, by calling session_write_close() after you have verified the user's identity (see the sketch below).
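A minimal sketch of that second option; run_report() and the user_id session key are placeholders, not from the original post:

    <?php
    session_start();

    // Verify the user's identity while the session is still open.
    if (empty($_SESSION['user_id'])) {
        header('HTTP/1.0 403 Forbidden');
        exit('Not authorized');
    }

    // Release the session lock so other requests from the same browser
    // stop queueing behind this one. $_SESSION can still be read after
    // this; writes simply won't be persisted.
    session_write_close();

    // The long-running grunt work happens after the lock is released.
    echo run_report($_SESSION['user_id']); // run_report() is hypothetical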
Related
Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an onKeyUp event handler tied to that field. Every time a character is typed into the field, a request is sent to the server, which runs multiple MySQL queries to fetch the data. The MySQL queries altogether take about 1-2 seconds. But since a request is sent after every character typed, this easily overloads the server.
The solution to this problem was to cancel the previous XHR request before sending a new one. That seemed to work fine for me (for about a year) until today. I'm not sure whether Bluehost changed any configuration on the server (I have a VPS), or any PHP/Apache settings, but right now my application is very slow given the number of users I have.
I would understand a gradual decrease in performance caused by database growth, but this happened suddenly over the weekend and speeds dropped roughly tenfold: a usual request that took about 1-2 seconds before now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of queries to see what the process monitor (top) would show. As I expected, for every new request a PHP process was created and put in a queue for processing. This queueing, apparently, accounted for most of the wait time.
Now I'm confused: is it possible that before (hypothetical changes on the server) every XHR abort was actually causing the corresponding PHP process to quit, reducing the load on the server and therefore making it faster? And now for some reason this no longer works?
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally, it is fast - just like the server used to be. But on Windows I don't have a handy process monitor like top, so I cannot see whether PHP processes are actually created and killed accordingly.
Not sure how to do the troubleshooting at this point.
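For what it's worth, PHP normally only notices that the client has gone away when it tries to send output, so a script that is busy inside a MySQL query will usually keep running after an XHR abort. A rough sketch of how a script can detect the abort between work steps (do_work_step() is a hypothetical unit of work):

    <?php
    // Keep running after a disconnect so the script can notice it itself.
    ignore_user_abort(true);

    while (do_work_step()) { // hypothetical: one query/batch per iteration
        // A dropped connection is only detected while writing output,
        // so emit and flush a byte between steps.
        echo ' ';
        flush();
        if (connection_aborted()) {
            exit; // the XHR was aborted; stop wasting server time
        }
    }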
I'm working on an application which gets some data from a web service using a PHP SOAP client. The web service accesses the client's SQL server, which has very slow performance (some requests take several minutes to run).
Everything works fine for the smaller requests, but if the browser is waiting for 2 minutes, it prompts me to download a blank file.
I've increased PHP's max_execution_time, memory_limit and default_socket_timeout, but the browser always seems to stop waiting at exactly 2 minutes.
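(For reference, those three directives live in php.ini; the values below are purely illustrative, not recommendations:)

    max_execution_time     = 300   ; seconds a script may run
    memory_limit           = 256M
    default_socket_timeout = 300   ; seconds to wait on socket reads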
Any ideas on how to get the browser to hang around indefinitely?
You could change your architecture from pull to push. Then the user can carry on using your web application and be notified when the data is ready.
Or, as a simple workaround (not ideal), if you are able to modify the SOAP server you could add another web service that checks whether the data is ready; the client could then call it every 30 seconds to poll for the data rather than waiting on one long request.
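A minimal sketch of that status endpoint, assuming some job_status() lookup exists server-side; both job_status() and the job parameter are made up for illustration:

    <?php
    header('Content-Type: application/json');

    $jobId  = isset($_GET['job']) ? $_GET['job'] : '';
    $status = job_status($jobId); // hypothetical lookup: 'pending' or 'done'

    echo json_encode(array('ready' => ($status === 'done')));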
The web server was timing out - in my case, Apache. I initially thought it was something else because I had increased the timeout value in httpd.conf and it was still stopping after two minutes. However, I'm using Zend Server, which has an additional configuration file that was setting the timeout to 120 seconds - once I increased that, the browser no longer stops after two minutes.
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and, to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP box to a virtual Server 2008 machine running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
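A bare-bones version of that approach is to wrap the suspect code in microtime() calls:

    <?php
    $start = microtime(true);

    // ... the code under suspicion, e.g. the getCategories query ...

    $elapsed = microtime(true) - $start;
    error_log(sprintf('took %.4f seconds', $elapsed));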
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap, and swapping makes everything very slow. If the disk activity is very high, you know you've found your culprit. Solve it by reducing the maximum number of FastCGI processes or by increasing the amount of server RAM.
I have a Lighttpd (1.4.28) web server running on CentOS 5.3 and PHP 5.3.6 in FastCGI mode.
The server itself is a quad core with 1GB of RAM and is used to record viewing statistics for a video platform.
Each request consists of a very small bit of XML being POSTed, and the receiving PHP script performs a simple INSERT or UPDATE MySQL query. The PHP returns a very small response to acknowledge the request.
These requests are made very frequently and I need the system to handle as many concurrent connections as possible at a high rate of requests per second.
I have disabled keep-alive, as only single requests will be made and so I don't need to keep connections open.
One of the things that concerns me is that in server-status I am seeing a lot of connections in the 'read' state. I take it this is controlled by server.max-read-idle, which is set to 60 by default? Is it OK to change this to something like 5, as I am seeing the majority of connections being kept open for long periods of time?
Also, what else can I do to optimise lighttpd to serve lots of small requests?
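For what it's worth, the relevant knobs would look something like this in lighttpd.conf; the values are only a starting point, not tested recommendations:

    server.max-read-idle           = 5     # drop idle 'read' connections sooner
    server.max-keep-alive-requests = 0     # keep-alive off, as described above
    server.max-fds                 = 2048  # raise the file-descriptor ceiling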
This is my first experience setting up lighttpd as I thought it would be more suitable than apache in this case.
Thanks
Irfan
I believe the problem is not in the web server but in your PHP application, especially the MySQL part.
I would replace lighty with Apache + mod_php, and MySQL with some NoSQL store such as Redis, which would queue the INSERT requests to the database. Then I would write a daemon / crontab job that inserts the data into MySQL.
We did something like this before, but instead of Redis we created TXT files in one directory.
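A rough sketch of the Redis-queue idea, assuming the phpredis extension; the key name and payload shape are made up for illustration:

    <?php
    // Request handler: enqueue the stat instead of hitting MySQL directly.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $redis->rPush('stats_queue', json_encode($_POST));

    // A separate cron/daemon script then drains the queue into MySQL:
    // while ($item = $redis->lPop('stats_queue')) { /* batched INSERT */ }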
So last night I had a CPU spike to 100% for 30 minutes, which made the server not very usable :-(
The server doesn't seem to be the quickest even today (it's running on the Amazon cloud!).
The application is a chat application which only has about 5 registered users, whose clients poll a PHP script every 5 seconds for new information (each request hits MySQL).
Running some commands I found on the net, it returned that I have 200 thousand connections - is this the number currently active, or the total since the server came up?
Can anyone offer any advice on whether there is anything out of the ordinary in the below?
(Note these stats are from today, when there were only 2 users logged in.)
That's 120 thousand connections since your MySQL server came up, not the number currently active. As for the 100% CPU, check the Apache logs in /var/logs/apacheX for excessive error messages; that may be what is making the server slow.
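For reference, MySQL tracks both numbers; a quick check (mysqli assumed, credentials are placeholders):

    <?php
    $db = new mysqli('localhost', 'user', 'pass');

    // Cumulative: connection attempts since mysqld last started.
    $total = $db->query("SHOW GLOBAL STATUS LIKE 'Connections'")->fetch_row();

    // Live: connections open at this moment.
    $open = $db->query("SHOW GLOBAL STATUS LIKE 'Threads_connected'")->fetch_row();

    printf("since startup: %s, open now: %s\n", $total[1], $open[1]);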