Request is terminating without response - PHP

In a site I'm currently writing, I'm facing a strange, mind-boggling behaviour: after seconds of 100% CPU usage, the server responds with nothing and simply closes the connection. If I reduce the work done by this request, it starts working as normal.
XDebug is showing that the bottleneck is not SQL but CPU usage (some functions are called 20,000 times, many object instantiations, etc.). Another interesting side effect: the request time grows disproportionately; Firebug gives me 600 ms for a working request, and 2.2 s for a request that terminates empty, with only twice as much work.
I'm pretty sure that it's not the execution time limit (it's set to indefinite), nor the memory limit (as no PHP error comes back).
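For reference, here's a minimal sketch of how I'm confirming those limits at runtime (standard PHP calls, nothing specific to my code):

    <?php
    set_time_limit(0); // 0 = no execution time limit
    error_log('max_execution_time: ' . ini_get('max_execution_time'));
    error_log('memory_limit: ' . ini_get('memory_limit'));
    error_log('peak memory so far: ' . memory_get_peak_usage(true) . ' bytes');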
For the record, I'm using: Apache 2.2.12 mpm-prefork/Ubuntu, PHP 5.2.10
Similar behaviour has been observed on Windows.
Any hints to explain this behaviour? Could Apache be killing processes it suspects are stuck in an infinite loop? Or is there some log file I could look at?

Apache's error.log says
zend_mm_heap corrupted
which leads to a PHP bug page with a workaround. The corruption seems to appear under heavy load and may be related to a zend_extension.
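For the record, the workaround usually cited for this error (I'm assuming it's the same one the bug page describes) is to disable the Zend memory manager in the environment Apache starts from, e.g. /etc/apache2/envvars on Debian/Ubuntu, then restart Apache:

    export USE_ZEND_ALLOC=0

This swaps Zend's allocator for plain system malloc, so it costs some performance and is more a diagnostic aid than a fix.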

Related

Possible causes for connection interrupted, LAMP stack

MySQL 5.1.73
Apache/2.2.15
PHP 5.6.13
CentOS release 6.5
Cakephp 3.1
After about 4 minutes (3 minutes 57 seconds) the import process I'm running stops. There are no errors or warnings in any log that I can find. The import process consists of a lot of SQL calls and data processing, nothing too crazy, but it can take about 10 minutes to get through 5,500 records if it's doing a full compare for updates.
Firefox: Secure Connection Failed - The connection to the server was reset while the page was loading.
Chrome: ERR_NO_RESPONSE
The PHP time limit (set_time_limit) is set to 900, and that is working: I can set it to 5 seconds and get a timeout error, so the 900-second limit is not being reached.
I can make another controller sleep for 10 minutes and this error does not happen, indicating that something in the actual program is causing the failure, not the hosting service killing the request because it's taking too long (I've read about VPS providers doing this to prevent spam).
PHP error reporting is turned all the way up in php.ini and, just to be sure, in the controller itself.
The import process completes if I reduce the size of the file being imported. If the file is just long enough, it will complete AND still show the browser error, which indicates to me that it's not failing at the same point of execution each time.
I have deleted all the cache and restarted the server.
I do not see any output in the Apache logs other than the fact that the request was made.
I do not see any errors in the MySQL log; however, I don't know whether that's because logging isn't turned on.
The exact same code works on my local host without any issue. It's not a perfect match for the server, but it's close: Ubuntu Desktop vs. CentOS, PHP 5.5 vs. PHP 5.6.
I have kept an eye on the memory usage and don't see any issues there.
At this point I'm looking for any good suggestions on what else to look at or insights into what could be causing the failure. There are a lot of possible places to look, and without an error, it's really difficult to narrow down where the issue might be. Thanks in advance for any advice!
UPDATE
After taking a closer look at the memory usage during the request, I noticed it was getting much higher than it ideally should be.
The httpd (Apache) process gets killed and a new one is spawned. Once the new process runs out of memory, the error shows up in the browser. When I looked previously, memory usage was only at 30%, probably because the old process had just been killed. Watching it the whole way through, I saw it get as high as 80%, which together with the other processes was enough to run the machine out of memory, and a killed process can't log anything, hence the lack of errors or warnings. It is interesting to me that the process just starts right back up.
I found a command that shows which processes have been killed by the kernel's out-of-memory killer, which proved very useful:
dmesg | egrep -i 'killed process'
I did have similar problems with DebugKit.
I had a bug in my code during the memory peak, and its context was written out as HTML in the error "log".
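For anyone debugging something similar, here is a minimal sketch of the kind of in-loop logging that helps (importRecord() and $records are hypothetical placeholders for the real import code):

    <?php
    // Log peak memory every 500 records so an OOM-killed process
    // still leaves a trail in the error log before it dies.
    foreach ($records as $i => $record) {
        importRecord($record); // hypothetical per-record import step
        if ($i % 500 === 0) {
            error_log(sprintf('row %d: peak memory %.1f MB',
                $i, memory_get_peak_usage(true) / 1048576));
        }
    }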

How do I make my PHP web service return a 500 when it's slow?

My web service is really fast, but people sometimes complain that it times out for some unknown, glitchy reason. Is there some header or something I can put at the top of the script that will return a 500 and terminate any server-side processing if the script takes longer than, say, 2 seconds?
I think you're coming at this from the wrong angle - if you were to do something like this, what makes you think the 500 response wouldn't be 'glitchy' too, sometimes failing to trigger while the script still times out?
Technically you could probably achieve something using PHP's register_tick_function(), but if you're not certain what the cause of the original glitch is, I would suggest logging, debugging and resolving that as the more sustainable solution.
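For completeness, a minimal sketch of what that tick-based approach could look like (the 2-second budget is illustrative; http_response_code() assumes PHP 5.4+, on older versions send the raw header() line instead):

    <?php
    declare(ticks=1000); // run the tick handler every 1000 low-level statements

    $deadline = microtime(true) + 2.0; // illustrative 2-second budget

    register_tick_function(function () use ($deadline) {
        if (microtime(true) > $deadline) {
            if (!headers_sent()) {
                http_response_code(500);
            }
            exit; // stop any further server-side processing
        }
    });

Note that ticks only fire between PHP statements, so this cannot interrupt a single long-blocking call such as a slow database query or sleep().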
If, for example, the timeout is related to server load or connectivity/network timeouts, I doubt this workaround would be worth the effort of writing it.

What can be causing an "exceeded process limit" error?

I launched a website about a week ago and I sent out an email blast to a mailing list telling everyone the website was live. Right after that the website went down and the general error log was flooded with "exceeded process limit" errors. Since then, I've tried to really clean up a lot of the code and minimize database connections. I will still see that error about once a day in the error log. What could be causing this error? I tried to call the web host and they said it had something to do with my code but couldn't point me in any direction as to what was wrong with the code or which page was causing the error. Can anyone give me any more information? Like for instance, what is a process and how many processes should I have?
Wow. Big question.
Obviously, you're maxing out your Apache child worker processes. To get a rough idea of how many you can afford, use top to find the rough memory footprint of one httpd process. If you are using WordPress or another CMS, it could easily be 50-100 MB each (with the PHP module for Apache). Then, assuming the machine is only used for web serving, take your total memory, subtract a chunk for OS use, and divide the rest by 100 MB (in this example). That's the maximum number of worker processes you can have; set it in your httpd.conf (see the sketch after the next paragraph). Once you do this and restart Apache, watch top and make sure you don't start swapping memory. If you do, you've set the number of workers too high.
If there is other stuff running, like a MySQL server, make space for that before you compute the number of workers. If the resulting number is small, to roughly quote a great man, 'you're gonna need a bigger boat'. Just kidding. If you see really high memory usage for an httpd process, like over 100 MB, you can lower MaxRequestsPerChild to shorten the life of each httpd process, which helps clean up bloated workers.
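As a worked example (all numbers are assumptions for illustration): with 2 GB of RAM, 512 MB reserved for the OS and MySQL, and roughly 100 MB per child, (2048 - 512) / 100 gives about 15 workers:

    <IfModule mpm_prefork_module>
        MaxClients          15
        MaxRequestsPerChild 500
    </IfModule>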
Another area to look at is the response time of each request: how long does one request take? For a quick check, use the Firebug plugin for Firefox and look at the Net tab to see how long your initial request takes to come back (not images and such). If requests take more than 1 or 2 seconds to respond, that's a big problem, as you get a sort of logjam. The cause could be PHP code or MySQL queries taking too long. To address this, if you're using WordPress, use a good caching plugin to lower the load on MySQL.
Honestly, though, unless you're simply underusing memory by having too few workers, optimizing your Apache isn't something easily addressed in a short post without details on your server (memory, CPU count, etc.) and your httpd.conf settings.
Note: if you don't have server access, you'll have a hard time figuring out memory usage.
The process limit is typically enforced by shared hosting providers and generally refers to the number of processes executing under your account. That usually equates to the number of simultaneous connections being made to your site (assuming one PHP process per connection).
There are many factors that come into play. You should find out from your hosting provider what that limit is, and then find a new host that can handle your load.

How to fix Apache instability?

I have configured a simple LAMP stack on Debian and I am experiencing some problems with the Apache web server.
Every 3-4 hours the web server enters a deadlock and all requests that hit the database block. The server creates a new child for each request, so the number of processes increases very quickly. After a few seconds Monit notices something is wrong and restarts Apache.
I suspect this problem is caused by the way PHP handles database connection pooling, because the server is still able to answer requests for static content. Have you experienced this kind of behaviour? What should I try?
Update: Problem solved. It seems it's a bad idea to use APC for both opcode caching and user data. I am now using Memcache for user data and APC only for opcodes. I still get segmentation faults from time to time, but the server is stable most of the time.
I would suspect that the problems are:
A difficult long-running database query which blocks further requests. This is easy to hit if you're using the MySQL MyISAM engine, which has only table-level locking: readers can easily block writers and vice versa, so a single tricky query on, say, a user table can pretty much block the entire server while the database waits for I/O. You can usually diagnose this with SHOW PROCESSLIST (see the one-liner after this list) or a tool that runs it for you.
Having set MaxClients much too high for the RAM available on a prefork server; almost everyone does this. If you are using a "fat" prefork Apache (e.g. with in-process PHP), don't set MaxClients higher than your RAM can support. That is probably a lot less than the typical values of 100 or 150.
These two things conspire to cause the issue you're seeing. Both need to be fixed, since each can cause problems on its own.
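For reference, a quick way to run that check from a shell (the credentials are placeholders):

    mysql -u root -p -e 'SHOW FULL PROCESSLIST'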
This is based entirely on guesswork and experience.
Why don't you have a look at the logs? /var/log/apache2/* is a good place to start. What was requested just before the server died? From there you can probably deduce what's going wrong. As PHP scripts are terminated after 30 seconds by default, the mistake would need to be quite massive to cause something like this.
Check your timeout settings in /etc/apache2/apache2.conf; I have seen similar problems when Timeout is set high and the system gets hit with a bunch of dropped connections.
The mysql-slow log is also useful for finding slow, problem-causing queries.

sleep() silently hogs CPU

I'm running Apache on Linux within VMWare.
One of the PHP pages I'm requesting does a sleep(), and I find that if I request a second page while the first page is sleep()'ing, the second page hangs, waiting for the first page's sleep() to finish.
Has anyone else seen this behaviour?
I know that PHP isn't multi-threaded, but this seems like gross mishandling of the CPU.
Edit: I should've mentioned that CPU usage doesn't spike. What I mean by CPU "hogging" is that no other PHP page seems able to use the CPU while the first page is sleep()'ing.
It could be that the called page opens a session and then doesn't commit it; in that case, see this answer for a solution.
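In outline, a minimal sketch of that fix, assuming the default file-based session handler (which holds a lock on the session file for the whole request):

    <?php
    session_start();
    // ... read whatever you need from $_SESSION ...
    session_write_close(); // commit and release the session lock
    sleep(30); // other requests from the same browser can now proceed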
What this probably means is that your Apache is only using one child process.
Therefore:
The one child process is handling a request (in this case sleeping, but it could be doing real work; Apache can't tell the difference), so when a new request comes in, it has to wait until the first process is done.
The solution would be to increase the number of child processes Apache is allowed to spawn (the MaxClients directive if you're using the prefork MPM), or to simply remove the sleep() from the PHP script.
Without knowing exactly what's going on in your script it's hard to say, but you can probably get rid of the sleep().
Are you actually seeing the CPU go to 100%, or just that no other pages are being served? How many Apache instances are you running? Are they all stopping when you run sleep() in one of them?
PHP's sleep() function simply blocks the process in the underlying system sleep call for n seconds. It doesn't release any memory, but it should not increase CPU load at all.
