Possible causes for connection interrupted, LAMP stack - php

MySQL 5.1.73
Apache/2.2.15
PHP 5.6.13
CentOS release 6.5
CakePHP 3.1
After about 4 minutes (3 min, 57 seconds) the import process I'm running stops. There are no errors or warnings in any log that I can find. The import process consists of a lot of SQL calls and data processing, nothing too crazy, but it can take about 10 minutes to get through 5500 records if it's doing a full compare for updates.
Firefox: Secure Connection Failed - The connection to the server was reset while the page was loading.
Chrome: ERR_EMPTY_RESPONSE
PHP's set_time_limit() is set to 900, and it is working: if I set it to 5 seconds I get an error. The limit is not being reached.
I can make another controller sleep() for 10 minutes without this error happening, indicating that something in the actual program is causing the failure, and not the hosting service killing long-running requests (I've read about VPS providers doing this to prevent spam).
PHP error reporting is turned all the way up in php.ini and, just to be sure, in the controller itself.
The import process completes if I reduce the size of the file being imported. If the file is just long enough, it will complete AND still show the browser message. This indicates to me that it's not failing at the same point of execution each time.
I have deleted all the cache and restarted the server.
I do not see any output in the Apache logs other than that the request was made.
I do not see any errors in the MySQL log; however, I don't know if that's because logging isn't enabled.
The exact same code works on my local host without any issue. It's not a perfect match for the server, but it's close: Ubuntu Desktop vs. CentOS, PHP 5.5 vs. PHP 5.6.
I have kept an eye on the memory usage and don't see any issues there.
At this point I'm looking for any good suggestions on what else to look at or insights into what could be causing the failure. There are a lot of possible places to look, and without an error, it's really difficult to narrow down where the issue might be. Thanks in advance for any advice!
UPDATE
After taking a closer look at the memory usage during the request, I noticed it was getting much higher than it ideally should be.
The httpd (Apache) process gets killed and a new one is spawned; once the new process runs out of memory, the error shows up in the browser. When I had looked previously, usage was only at 30%, probably because the old process had just been killed. Watching it the whole way through, I saw it get as high as 80%, which, combined with the other processes, was enough to run the machine out of memory, and a killed process can't log anything, hence the lack of errors or warnings. It is interesting to me that the process just starts right back up.
I found a command that shows which processes have been killed due to memory pressure, which proved very useful:
dmesg | egrep -i 'killed process'
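For future reference, this is the kind of in-loop memory logging that made the climb visible (a rough sketch; $records, the batch size of 100, and the loop body are placeholders for the real import):

<?php
// Log memory every N records so a steady climb shows up in the PHP error
// log, even if the process is OOM-killed before it can report anything.
foreach ($records as $i => $record) {
    // ... the real import/compare work goes here ...
    if ($i % 100 === 0) {
        error_log(sprintf('record %d: %.1f MB used, %.1f MB peak',
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576));
    }
}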

I had similar problems with DebugKit.
I had a bug in my code during the memory peak, and the whole context was being written out as HTML in the error "log".

Related

PHP-CLI crash with no logs on the server

I have a PHP cron job that crashes without anything in php_errors on a Debian server. Most of the time (99% of the time), it works fine and I have no problem with it.
But randomly it just stops, and there is nothing in any log on the server. I have the issue on 2 different servers (with very similar installs), always when the load increases.
I installed systemd-coredump on the server because I suspected a segfault in one of the PHP libraries (the script is complex and makes a lot of web service calls), but it didn't log anything on the last crash.
Out-of-memory errors are logged in php_errors, so that doesn't seem to be the problem.
What can I do to gather any logs that might give me a hint about what happens and why my PID just stops?
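The only extra instrumentation I can think of adding is a shutdown handler at the top of the script (sketch below; the log path is invented), although I understand it won't fire on a SIGKILL or on a segfault inside native code:

<?php
// Runs on normal exit and on PHP-level fatal errors, but NOT if the
// process is killed with SIGKILL or dies inside an extension.
register_shutdown_function(function () {
    file_put_contents(
        '/var/log/my-cron-exit.log',   // invented path
        date('c') . ' shutdown, last error: '
            . var_export(error_get_last(), true) . "\n",
        FILE_APPEND
    );
});

If nothing shows up there either, that at least confirms the process was killed from outside (a signal or the OOM killer), and dmesg / the kernel log is the next place to look.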

A lot of PHP processes popping up at once and eating all memory

I have a cloud server at Rackspace with cPanel installed. I have some 16 sites running on it; 14 of them run under a single account (a Drupal multisite installation). Everything had been running fine for the last 5 months. Recently my server became unresponsive and had to be rebooted. Later it was found that the server had run out of memory.
The issue now continues to occur intermittently. When it happens, I can see a lot of PHP processes popping up at once; memory usage increases until only about 200 MB of the total 8 GB is left.
/usr/bin/php /home/username/public_html/index.php
All sites become inaccessible and the load average spikes. After 5-8 minutes the huge number of PHP processes disappears and memory usage comes back to normal; the whole episode lasts no more than 5-6 minutes.
I checked all the server logs and could not find any trace of the issue. I also checked the server with maldet and rkhunter and found no trace of malicious code or backdoors.
The strange part is that the issue is not tied to peak hours; it occurs during off-peak hours as well. There is no pattern to when it happens.
Yesterday I found 150 PHP instances running at once.
Can someone point me in the right direction? Is this a server-side issue, or something to do with the sites' internal functions?
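In the meantime I'm considering a crude watchdog run from cron every minute (sketch below; the threshold and log path are arbitrary) just to get the spikes timestamped along with a process list:

<?php
// Count running PHP processes; if there's a spike, record when it
// happened and what those processes were actually executing.
$count = (int) trim(shell_exec('ps -C php --no-headers | wc -l'));
if ($count > 50) {                       // arbitrary threshold
    file_put_contents(
        '/var/log/php-spike.log',        // arbitrary path
        date('c') . " $count php processes:\n"
            . shell_exec('ps -C php -o pid,etime,args'),
        FILE_APPEND
    );
}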

Script keeps dying out after a while, not a timeout / memory issue

I have a long-running script that dies for no reason. It's supposed to run for over 8 hours, but it dies after an hour or two; no errors, nothing. I tried running it via CLI and via HTTP, no difference.
I have the following parameters set:
set_time_limit(0);
ini_set('memory_limit', '1024M');
I've been monitoring the memory usage, and it doesn't go over 200M.
Is there anything else that I'm missing? Why would it die out?
One possible explanation could be that the PHP garbage collector is interfering with the script. That could be why you're seeing random die-offs. When the garbage collector is turned on, the cycle-finding algorithm is executed whenever the root buffer runs full.
The PHP manual states:
The rationale behind the ability to turn the mechanism on and off, and to initiate cycle collection yourself, is that some parts of your application could be highly time-sensitive.
You could try disabling the PHP garbage collector with gc_disable(). The manual recommends calling gc_collect_cycles() right before disabling, to free the buffer.
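For example, a minimal sketch of that sequence:

<?php
// Flush whatever is sitting in the root buffer, then stop the cycle
// collector from running during the time-sensitive section.
gc_collect_cycles();
gc_disable();

// ... long-running work ...

gc_enable(); // optional: turn collection back on afterwards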
Another explanation could be the code itself. An 8-hour script is a long script, and if it's complex, it could easily be hitting a snag that causes it to exit. For your troubleshooting now, you should definitely turn error reporting up to report everything, using error_reporting(-1);.
Also, if your script is communicating with other services, a database for example, it's quite possible that could be the issue. If the database server runs out of memory or times out, it could cause your script to hang and die. If this is the case, you could split up your database connections, connecting and disconnecting at timed intervals during the script to keep the connection fresh. The same approach could be applied to any other service you're communicating with.
For testing purposes only, you could make your script write a line to a log file for each query, including a timestamp for when the query begins and another for when it ends. You might not get any errors, but it may help you determine whether there is a specific problem query or whether a query is hanging longer than usual. You could also check that your MySQL connection is still valid and print that out as well (a sketch of such a wrapper follows the example log).
An example log file:
[START 2011/01/21 13:12:23] MySQL Connection: TRUE [END 2011/01/21 13:12:28] Query took 5s
[START 2011/01/21 13:12:28] MySQL Connection: TRUE [END 2011/01/21 13:12:37] Query took 9s
[START 2011/01/21 13:12:39] MySQL Connection: TRUE [END 2011/01/21 13:12:51] Query took 12s
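A minimal sketch of a wrapper that would produce a log like that (assuming mysqli; the log path is invented):

<?php
// Times each query, checks that the connection is alive beforehand, and
// appends one line per query in the format shown above.
function timed_query(mysqli $db, $sql, $logFile = '/tmp/query.log')
{
    $start = time();
    $alive = $db->ping() ? 'TRUE' : 'FALSE';
    $result = $db->query($sql);
    $end = time();
    file_put_contents($logFile, sprintf(
        "[START %s] MySQL Connection: %s [END %s] Query took %ds\n",
        date('Y/m/d H:i:s', $start), $alive,
        date('Y/m/d H:i:s', $end), $end - $start
    ), FILE_APPEND);
    return $result;
}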
It's probably something related to the code. I have scripts that run for weeks and months with no trouble.
Your database connection might time out and produce an error.
It's also possible you're running out of file descriptors if you keep opening connections or files, or that your shared memory region is full; it depends on the code.
Check the system logs to make sure SELinux is not messing with you; that would keep your script from printing any error. The system logs will also show whether you have crossed user limits on any system resources (see ulimit).
It's really strange that you get nothing when running it via CLI, not even a segfault. Did you watch both stdout and stderr?
Maybe it segfaults.
Try launching your script this way:
$ ulimit -c unlimited
$ php script.php
Then see if you find a core dump file (core.xxxx) in the working directory when it dies.
Apache also has its own request timeout, which you may need to tweak in httpd.conf.
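If it's the core Timeout directive that's biting (an assumption about this setup), for example:

# httpd.conf - seconds Apache waits during a request before giving up
Timeout 900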

Request is terminating without response

On a site I'm currently writing, I'm facing strange, mind-boggling behaviour: after seconds of 100% CPU usage, the server responds with nothing and just closes the connection. If I limit the work this request does, it starts working normally.
XDebug shows that the bottleneck is not SQL but CPU usage (some functions called 20,000 times, many object instances, etc.). Another interesting side effect: the request time is considerably longer; Firebug gives me 600 ms for a working request, and 2.2 s for an emptily terminating request doing only twice as much work.
I'm pretty sure it's not the execution time limit (it's set to unlimited), nor the memory limit (as no PHP error comes back).
For the record, I'm using: Apache 2.2.12 mpm-prefork/Ubuntu, PHP 5.2.10
Similar behaviour has been observed on Windows.
Any hints to explain this behaviour? Is Apache maybe killing threads it supposes to be stuck in an infinite loop? Or is there some log file I could look at?
Apache's error.log is saying
zend_mm_heap corrupted
which leads to a PHP bug page with a workaround. It seems to appear under heavy load and may be related to a zend_extension.
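A commonly cited workaround for zend_mm_heap corrupted (possibly the one on that bug page, though I can't be sure) is disabling the Zend memory manager so the system allocator is used instead; it slows PHP down but often surfaces or sidesteps the corruption:

$ USE_ZEND_ALLOC=0 php script.php

For mod_php, the variable has to be set in Apache's startup environment (on Ubuntu, e.g. /etc/apache2/envvars) rather than per request.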

suPHP / PHP script timeout

After my host enabled suPHP, a previously working script has been timing out after ~3 minutes (it varies, but the script has not run for more than 3, AFAIK).
The odd part is that the script is not throwing any errors that I can see (and yes, full PHP error reporting/logging is enabled, and all MySQL queries have been checked for errors too); it simply stops.
Refreshing the page will load more of the data the script is supposed to process (probably because the MySQL queries have been cached), but if there is a lot of data to process, it never fully executes.
The other oddity is that I can run test scripts for over 10 minutes on the same host with set_time_limit(0); etc.
Has anyone else had to deal with this, or does anyone know what is causing the timeout and how to fix it (assuming that dropping suPHP is not an option)? There was also a simultaneous update from PHP 5.2.x to 5.3.x, but I doubt that is causing the issue.
I've seen this happen when memory runs out: the script just ends without an error. If you have a loop, try using the memory functions to dump the memory status. Also, use phpinfo() to see what your maximum memory allowance is; the switch to suPHP may have changed it to your detriment.
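For instance (a rough sketch; $workItems and the interval of 500 are placeholders for whatever your script loops over):

<?php
// Compare current usage against the configured limit; suPHP may have
// switched you to a php.ini with a much lower memory_limit.
echo 'memory_limit: ' . ini_get('memory_limit') . "\n";

foreach ($workItems as $i => $item) {
    // ... process $item ...
    if ($i % 500 === 0) {
        printf("item %d: %.1f MB used, %.1f MB peak\n",
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576);
    }
}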
