I've written a PHP script that takes in two CSV files, processes them and returns an HTML table. I developed it on my MacBook running on Apache. When I uploaded the script to our production server, it began having problems. Production is an Ubuntu 10.04 LTS running nginx that forwards requests to Apache/PHP.
I added some debugging statements and tailed the logs so I can see exactly where execution stops, but no errors are thrown anywhere. The first file is 1.9 MB, and the script processes 366 KB of it before it fails. I've tested this several times and it always fails at the same place. I don't believe the file is the problem, as it's the same file I used while testing the script and it never caused a problem on my MacBook.
I've searched the internet and have increased several timeout parameters in nginx to no avail. I'm not sure where to look or what to look for at this point. Can someone point me in the right direction?
Have you fully turned on error reporting on the server?
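A minimal sketch of what "fully turned on" could look like, placed at the top of the script; the log path is an assumption. The `register_shutdown_function` part is worth adding for silent deaths specifically, since it still runs when a fatal error bypasses the normal handlers:

```php
<?php
// Force maximum error visibility (normally set in php.ini, but doing it
// in the script itself rules out per-host config differences).
error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/php_debug.log'); // assumed path

// Fatal errors often skip normal error handlers; a shutdown function
// still runs and can record the last error before the process exits.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null) {
        error_log(sprintf('shutdown: %s in %s:%d',
            $err['message'], $err['file'], $err['line']));
    }
});
```

If the log stays empty even with this in place, the process is probably being killed from outside PHP (OOM killer, proxy timeout), and the next place to look is the kernel log and the nginx/Apache logs.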
Related
I have a PHP cron job on a Debian server that crashes without leaving anything in php_errors. Most of the time (99% of the time) it works fine and I have no problem with it.
But randomly it just stops, and there is nothing in any log on the server. I've seen the issue on 2 different servers (with very similar installs), always when the load increases.
I installed systemd-coredump on the server because I suspected a segfault in one of the PHP libraries (the script is complex and makes a lot of web service calls), but it didn't log anything on the last crash.
Out-of-memory errors are logged properly in php_errors, so that doesn't seem to be the problem.
What can I do to gather any logs that would give me a hint about what is happening and why my PID just stops?
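One low-tech way to at least capture *how* the process dies is to have cron run the job through a small wrapper that records the exit status; a sketch, where the script path and log location are assumptions:

```shell
#!/bin/sh
# Hypothetical wrapper: point the crontab at this instead of the PHP
# script directly, so the exit status is recorded somewhere.
LOG=/var/log/cron-wrapper.log   # assumed log location

php /path/to/job.php            # assumed script path
status=$?

# Exit codes above 128 conventionally mean "killed by signal (status - 128)",
# e.g. 139 = SIGSEGV (11), 137 = SIGKILL (9, often the OOM killer).
if [ "$status" -gt 128 ]; then
    echo "$(date): job killed by signal $((status - 128))" >> "$LOG"
elif [ "$status" -ne 0 ]; then
    echo "$(date): job exited with status $status" >> "$LOG"
fi
```

If the log shows signal 9 at every crash, check `dmesg` for OOM-killer activity; signal 11 would point back at the segfault theory even when no core file was written.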
I have FastCGI with PHP.
However, server runs fine for several hours and then suddenly I have php error log full of:
mod_fcgid: can't apply process slot for /var/www/php-bin/website/php
At the same time, there is no spike in activity on the web, CPUs are not spiking, all seems normal based on the server usage.
If I restart apache, all is running ok again for several hours and then situation repeats.
I have tried to set higher values to fcgid settings:
FcgidMaxProcesses 18000
FcgidMaxProcessesPerClass 3800
but the problem still persists.
What is interesting, I also have a second server with a totally different setup and software versions (FastCGI as a PHP module), and the same problem sometimes occurs there as well, though not as frequently.
So I am wondering: can this problem be caused by a PHP script? Both servers share some of the same PHP libraries.
In general, how do I track down what is causing this? On the debug server the problem is nonexistent, and on production I cannot blindly change settings and restart the server over and over again.
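If PHP children are hanging rather than exiting, the slot pool can fill up even without a traffic spike, which would match the "no spike in activity" observation. One thing worth trying, as a sketch rather than a known fix (the values are illustrative guesses), is recycling fcgid processes more aggressively so a leaking or wedged child doesn't live forever:

```apache
# Recycle each PHP child after a fixed number of requests, and reap
# children that sit idle or run too long (values are illustrative):
FcgidMaxRequestsPerProcess 500
FcgidIdleTimeout           300
FcgidProcessLifeTime       3600
FcgidBusyTimeout           300
```

If raising the process limits only delays the error instead of fixing it, that also supports the leaking/hanging-children theory rather than a genuine capacity problem.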
MySQL 5.1.73
Apache/2.2.15
PHP 5.6.13
CentOS release 6.5
Cakephp 3.1
After about 4 minutes (3 min, 57 seconds) the import process I'm running stops. There are no errors or warnings in any log that I can find. The import process consists of a lot of SQL calls and data processing, nothing too crazy, but it can take about 10 minutes to get through 5500 records if it's doing a full compare for updates.
Firefox: Secure Connection Failed - The connection to the server was reset while the page was loading.
Chrome: ERR_EMPTY_RESPONSE
PHP's time limit is set to 900, and it is working: if I set it to 5 seconds I get an error, so the limit is not being reached.
I can sleep another controller for 10 minutes, and this error does not happen, indicating that something in the actual program is causing it to fail, and not the hosting service killing the request because it's taking too long (read about VPS doing this to prevent spam).
The php errors are turned all the way up in the php.ini, and just to be sure, in the controller itself.
The import process completes if I reduce the size of the file being imported. If it's just long enough, it will complete AND show the browser message. This indicates to me it's not failing at the same point of execution each time.
I have deleted all the cache and restarted the server.
I do not see any output in the Apache logs other than that the request was made.
I do not see any errors in the MySQL log; however, I don't know whether that's because logging isn't turned on.
The exact same code works on my local host without any issue. It's not a perfect match to the server, but it's close: Ubuntu Desktop vs CentOS, PHP 5.5 vs PHP 5.6.
I have kept an eye on the memory usage and don't see any issues there.
At this point I'm looking for any good suggestions on what else to look at or insights into what could be causing the failure. There are a lot of possible places to look, and without an error, it's really difficult to narrow down where the issue might be. Thanks in advance for any advice!
UPDATE
After taking a closer look at the memory usage during the request, I noticed it was getting much higher than it ideally should.
The httpd (Apache) process gets killed and a new thread is spawned. Once the new thread runs out of memory, the error shows up in the browser. When I looked at it previously it was only at 30%, probably because the old process had just been killed. Watching it the whole way through, I saw it get as high as 80%, which together with the other processes was enough to run the machine out of memory; and a killed process can't log anything, hence the absence of errors or warnings. It is interesting to me that the process just starts right back up.
I found a command to show which processes had been killed due to memory which proved very useful:
dmesg | egrep -i 'killed process'
I did have similar problems with DebugKit.
I had a bug in my code during a memory peak, and the context was written to the HTML in the error "log".
I'm having problems with my PHP/MySQL website. It runs fine on my development machine, but on GoDaddy it has started giving me problems. After running it multiple times I get either error 500 (Internal Server Error) or a connection timeout. I'm now convinced it's not the web host, as static files like sitemap.xml load very fast.
I attempted profiling it with the NuSphere profiler, and the total time it takes to load the scripts is 143.0 ms. Using the Zend Controller benchmark tool (without any performance-related components) I can make an average of 12 requests per second against my local script.
Using the PHP function memory_get_usage() I get 1340648.
My questions are:
What is an acceptable amount of time for a script to take to load?
How can I measure the CPU utilization of my scripts?
How can I measure the memory utilization of my scripts?
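For the last two questions, PHP itself can report both numbers. A minimal sketch of measuring a section of a script; note that getrusage() is not available on Windows before PHP 7, so on a Windows/Zend CE setup you may only get the time and memory figures:

```php
<?php
$start = microtime(true);

// ... the work you want to measure (placeholder) ...
$data = array_fill(0, 10000, 'x');

// Wall-clock time for the measured section, in milliseconds.
$elapsedMs = (microtime(true) - $start) * 1000;

// Memory currently allocated to this script, and its peak so far.
$currentBytes = memory_get_usage();
$peakBytes    = memory_get_peak_usage();

// CPU time consumed (user + system), where getrusage() exists.
if (function_exists('getrusage')) {
    $ru = getrusage();
    $cpuMs = ($ru['ru_utime.tv_sec'] + $ru['ru_stime.tv_sec']) * 1000
           + ($ru['ru_utime.tv_usec'] + $ru['ru_stime.tv_usec']) / 1000;
}
```

There is no single "allowable" load time; compare the same script's numbers between your local machine and the host to see where they diverge.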
I use Windows with Zend CE. I have checked the error logs and nothing shows up. I have googled, but none of the solutions seem to work.
If it's timing out on the server, it's more than likely because a link to a resource in your script is not pointing to the correct location, leading to a function call not working or some other resource not being found.
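A quick way to check for that; a sketch, where the listed paths are placeholders for whatever your script actually includes or reads:

```php
<?php
// Hypothetical list of resources the script depends on.
$resources = array(
    __DIR__ . '/config.php',
    __DIR__ . '/data/import.csv',
);

foreach ($resources as $path) {
    if (!file_exists($path)) {
        // Log instead of dying, so every missing path shows up at once.
        error_log("missing resource: $path");
    }
}
```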
After my host enabled suPHP, a previously working script has been timing out after ~3 minutes (it varies, but the script has not run for more than 3, AFAIK).
The odd part is, the script is not throwing any errors that I can see (and yes, full PHP error reporting/logging is enabled, and all MySQL queries have been checked for errors as well); it simply stops.
Refreshing the page will load more of the data the script is supposed to process (probably because the MySQL queries have been cached...), but if there is a lot of data to process it never fully executes.
The other oddity is that I can run test scripts for over 10 minutes on the same host w/ set_time_limit(0); / etc.
Has anyone else had to deal with this, or does anyone know what is causing the timeout and how to fix it (assuming that dropping suPHP is not an option)? There was also a simultaneous update from PHP 5.2.x to 5.3.x, but I doubt that is causing the issue. :/
I've seen this happen when memory runs out - the script just ends without error. If you have a loop, try using the memory functions to dump the memory status. Also, use phpinfo() to see what your maximum memory allowance is - the switch to suPHP may have changed that to your detriment.
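A sketch of what dumping the memory status inside the loop could look like; the loop body and `$rows` variable are placeholders for the script's actual work:

```php
<?php
// Log current and peak memory every N iterations, so the last line in
// the log shows how close the script was to memory_limit when it died.
$limit = ini_get('memory_limit');

foreach ($rows as $i => $row) {   // $rows is the hypothetical work list
    // ... process $row ...

    if ($i % 100 === 0) {
        error_log(sprintf('row %d: %.1f MB used, %.1f MB peak (limit %s)',
            $i,
            memory_get_usage() / 1048576,
            memory_get_peak_usage() / 1048576,
            $limit));
    }
}
```

If the peak climbs steadily toward the limit and the log then stops mid-run, that points at memory exhaustion rather than a timeout.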