I'm having problems with my PHP/MySQL website. It runs fine on my development machine, but on GoDaddy it has started giving me problems. After running it multiple times I get either a 500 error (Internal Server Error) or a connection timeout. I'm now convinced that it's not the web host, as files like sitemap.xml load very fast.
I profiled it with the NuSphere profiler and the total time it takes to load the scripts is 143.0 ms. Using the Zend Controller benchmark tool (without any performance-related components) I can make an average of 12 requests per second against my local script.
Using the PHP function memory_get_usage() I get 1340648 (bytes).
My questions are:
What is an acceptable amount of time for a script to take to load?
How can I find out the CPU utilization of my scripts?
How can I find out the memory utilization of my scripts?
I use Windows with Zend CE. I have checked the error logs and nothing shows up. I have Googled, but none of the solutions seem to work.
If it's timing out on the server, it's more than likely because a reference to a resource in your script is not pointing to the correct location, leading to a function call failing or some other resource not being found.
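As for questions 2 and 3 (CPU and memory utilization), a rough picture can be had from inside the script itself. A minimal sketch, assuming PHP 7+ if getrusage() is needed on Windows; the logging destination and format are illustrative:

    <?php
    // Rough per-request measurements taken from inside the script itself.
    $startTime = microtime(true);

    // ... your application code runs here ...

    $wallTime = microtime(true) - $startTime;   // elapsed wall-clock time
    $peakMem  = memory_get_peak_usage(true);    // peak memory in bytes

    // getrusage() reports the CPU time consumed by this process
    // (on Windows it is only available from PHP 7 onwards).
    $usage   = getrusage();
    $cpuTime = $usage['ru_utime.tv_sec'] + $usage['ru_utime.tv_usec'] / 1000000;

    error_log(sprintf(
        "wall: %.3fs, cpu: %.3fs, peak memory: %.2f MB",
        $wallTime,
        $cpuTime,
        $peakMem / 1048576
    ));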
I have a web server application with Apache, PHP and MySQL in Windows Server 2008. The server also serves web pages and images.
Recently I have noticed that some users (8 out of 150) who upload images get a response time from Apache of, for example, 200 seconds, while the execution time of the PHP script itself is 2 seconds. Other users are not affected, and they're using the same script.
I know these times because I'm logging each request in a MySQL table.
To obtain the Apache response time before the execution ends, I use:
microtime(true) - $_SERVER["REQUEST_TIME_FLOAT"]
And to obtain the PHP execution time I use
microtime(true) - $GLOBALS["tiempo_inicio_ejecucion"];
where $GLOBALS["tiempo_inicio_ejecucion"] is another microtime value that I capture at the beginning of the script's execution.
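Put together, the logging boils down to something like this (a minimal sketch; the error_log() call stands in for the INSERT into the logging table):

    <?php
    // Captured at the very start of the script.
    $GLOBALS["tiempo_inicio_ejecucion"] = microtime(true);

    // ... handle the upload ...

    // Just before logging the request:
    $apacheTime = microtime(true) - $_SERVER["REQUEST_TIME_FLOAT"];       // since Apache received the request
    $phpTime    = microtime(true) - $GLOBALS["tiempo_inicio_ejecucion"];  // since the script started running

    // The difference is the time spent before the script started running,
    // e.g. receiving the upload body or something holding on to the file.
    $delay = $apacheTime - $phpTime;

    error_log(sprintf("apache=%.3fs php=%.3fs pre-script=%.3fs", $apacheTime, $phpTime, $delay));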
The server load is low; CPU and RAM are far from their limits.
If I try to reproduce this behaviour by uploading files from my own PC, I can't reproduce it; the upload is fast.
I suppose it is some network issue, but I can't get it solved, or maybe it is a network issue on the clients' side.
How can I know what is happening here?
Thanks in advance.
Possible suggestion: A virus checker.
Your server may have a virus checker installed which scans files automatically when they are created. This will include scanning uploaded files. It is possible that it is running low on resources or being given a low priority by the server, so scans of the uploaded files take a long time. However, it won't release the file to the web server until the scan is complete, and thus the server takes a long time to start running the PHP code.
I have no idea if this is actually your problem, but I have seen similar problems on other Windows Server boxes. It can be a very difficult problem to diagnose.
So, after researching this a lot, I am seeking help from somebody who has encountered this and found a way out.
We developed a PTC script for a client and it worked fine, but as the user base grew it started displaying the error below:
Error : (1226) User 'qe' has exceeded the 'max_user_connections' resource (current value: 30)
After seeking help, some people said it's a server-related issue, while others pointed out that it was an issue with the database design of the script.
I'm looking for a way to solve this problem; I have tried tons of things.
I'm using GoDaddy hosting at the moment. They increased the limit from 30 to 50, but I'm sure the problem is going to show up again.
There's no problem with the database; the problem is in how you handle database connections in your software.
The way your script is set up, every request to your web server also opens a new connection to MySQL. That's not the scenario you want.
Raising the limit won't fix the issue; it will just delay the next error. What you should do is use persistent connections.
One of the reasons why using php-fpm instead of server APIs such as mod_php is preferred is that a fixed number of PHP processes is booted and a pool of connections to services is created.
The flow would be the following:
use php-fpm; both Apache and nginx can use the FastCGI interface to talk to php-fpm processes
spawn a relatively low number of child processes for php-fpm. This shouldn't be overly large; the default config usually works out. I'll guess that you don't run a hexa-core system, so 4-6 child processes should be fine
use persistent MySQL connections (a sketch follows at the end of this answer)
What does this do? Your web server accepts the request and hands it to php-fpm, which processes it when a worker becomes free. Each process uses one connection to MySQL, so you can never hit a hard connection limit like the one you're running into.
If your server is busy, it should queue up the requests until PHP is able to handle them. Whether you use Apache or nginx, this approach will work well.
If your site is busy, it's likely that the web server accepts connections and serves static content faster than PHP can process dynamic content. In that case you have the option of adding another physical machine (or more) that runs php-fpm. Instructing your web server to round-robin between the machines that serve PHP is trivial for both of the mentioned web servers.
The bottom line is that you want to use your resources optimally. Opening and closing MySQL connections on every request isn't optimal; pooling connections is.
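As a concrete illustration of persistent connections, the change is typically a single option when the connection is created. A minimal sketch assuming PDO with the MySQL driver; the host, database name and credentials are placeholders:

    <?php
    // Persistent connection via PDO: the connection is kept open and reused
    // by the same PHP process across requests instead of being reopened each time.
    $pdo = new PDO(
        'mysql:host=localhost;dbname=your_db;charset=utf8',
        'your_user',
        'your_password',
        array(
            PDO::ATTR_PERSISTENT => true,
            PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
        )
    );

    // The mysqli equivalent is prefixing the host with "p:":
    // $mysqli = new mysqli('p:localhost', 'your_user', 'your_password', 'your_db');

Combined with a small, fixed pool of php-fpm workers, the number of open MySQL connections stays bounded by the number of workers rather than by the number of incoming requests.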
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format, and to communicate with a locally-hosted PHP script it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Server 2008 distribution running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the getCategories SQL takes 0.00023 s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
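A bare-bones version of that approach (a minimal sketch using microtime(); the logged message is illustrative):

    <?php
    $start = microtime(true);

    // ... the code handling the AJAX request, e.g. the getCategories query ...

    $elapsed = microtime(true) - $start;
    error_log(sprintf("Request handled in %.4f seconds", $elapsed));

If the logged time is small while the AJAX call still takes over a second, the time is being lost outside the script (FastCGI queueing, the proxy, or the network), which narrows the search.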
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of fastcgi processes or by increasing the amount of server RAM.
I've got two servers: my local server and a remote production server. They have basically the same config: Ubuntu 10.10, Apache 2, PHP 5.3, PHP-APC, MySQL, etc. I also have copies of a webapp on both servers, and here's the problem with PHP:
On my local server the webapp uses only ~4 MB of memory, but on my production server memory usage spikes up to 50 MB for no good reason. I ran the memory_get_peak_usage() function to get memory usage at different stages of the webapp's execution, and I've found that on the production server memory spikes from 0.7 MB up to 49 MB on function calls such as class_exists().
What could be the problem?
Thanks.
I hate to sound like a bore, but have you verified that they have exactly the same Apache/PHP config? Configuration differences can easily become the source of this sort of discrepancy.
Also, are they experiencing the same sort of load? Code running on a server under load can behave very differently from code running with ample, uncontested resources.
Are there any other differences, in terms of other running applications, that could be affecting things?
It may be worth profiling the code on both servers to see if there are any per-request differences. XHProf[1] is a great tool for this, and it can safely be run in production (as long as you read the instructions).
[1] http://phpadvent.org/2010/profiling-with-xhgui-by-paul-reinheimer
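If the xhprof extension is installed on both servers, the basic usage is roughly the following (a minimal sketch; the output path is a placeholder and XHGui is optional):

    <?php
    // Requires the xhprof PECL extension.
    xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

    // ... run the request or code path you want to compare between the servers ...

    $profile = xhprof_disable();

    // Dump the raw profile for later inspection (or feed it to XHGui).
    file_put_contents('/tmp/xhprof.' . uniqid() . '.json', json_encode($profile));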
OK, I've found where the problem was. There is a class that was creating a cache file containing information about the user's browser (in order to recognize them later). Apparently there was a problem with that file and/or its parser, so it was using too much memory. I've since cleared the cache files, and if the situation repeats, I'll ditch that class altogether.
Thanks to all who answered/commented on the problem.
I am debugging my application here, and in a nutshell the application is dying on my online server, or maybe it's my server that's dying. I have checked this application on three different servers and all exhibited similar results: the application would run for a while, but once I started opening more and more requests I'd get a network error or the site would fail to load.
I suspect it's my code, so I need to find out how to make it less resource-intensive; in fact, I don't know why it is doing this in the first place. It runs OK on my localhost machine, though.
Or is it because I'm hosting it on a shared host? Should I look for specialised hosting for an application like this? There are a lot of complex database queries and AJAX requests in my application.
As far as checking how much memory your script is using, you can periodically call memory_get_usage(true) at points in your code to identify which parts of the script are using the memory. memory_get_peak_usage(true) returns the maximum amount of memory that has been used.
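A minimal sketch of that kind of checkpointing (the labels are illustrative; dividing by 1048576 just converts bytes to MB):

    <?php
    function log_memory($label)
    {
        // true = memory allocated from the system, not only what the script itself used
        error_log(sprintf(
            "[%s] current: %.2f MB, peak: %.2f MB",
            $label,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        ));
    }

    log_memory('start');
    // ... load data, run the database queries ...
    log_memory('after queries');
    // ... build and send the response ...
    log_memory('end');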
You say your application runs OK for a while. Is this a single script which is running all this time, or many different page requests/visitors? There is usually a max_execution_time for each script (often defaulting to 30 seconds). This can be changed in code on a per-script basis by calling set_time_limit().
There is also an inherent memory_limit as set in php.ini. This could be 64M or lower on a shared host.
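You can check what the host actually allows from inside a script; a minimal sketch:

    <?php
    // Report the limits the host has configured for this script.
    echo 'memory_limit: ', ini_get('memory_limit'), PHP_EOL;
    echo 'max_execution_time: ', ini_get('max_execution_time'), ' seconds', PHP_EOL;

    // Many shared hosts won't let you raise these, but you can try:
    // ini_set('memory_limit', '128M');
    // set_time_limit(60);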
"...once I'd be opening more and more requests..." - There is a limit to the number of simultaneous (ajax) requests a client can make with the server. Browsers could be set at 8 or even less (this can be altered in Firefox via about:config). This is to prevent a single client from swamping the server with requests. A server could be configured to ban clients that open too many requests!
A shared host could be restrictive. However, provided the host isn't hosting too many sites, they can be quite powerful servers, giving you access to a lot of power for a short time. Emphasis on short time: it's in the interest of the host to rein in scripts that consume too many resources on a shared server, as other customers would be affected.
Should I look for specialised hosting for hosting an application?
You'll have to be more specific. Most websites these days are 'applications'. If you are doing more than simply serving webpages and are constantly running intensive scripts that run for a period of time then you may need to go for dedicated hosting. Not just for your benefit, but for the benefit of others on the shared server!
The answer is probably that your hosting company has a fairly restrictive php.ini configuration. They could, for example, limit the amount of time a script can run for, or the amount of memory a script can use.
What does your code attempt to do?
You might consider making use of memory_get_usage() and/or memory_get_peak_usage().