I have a PHP app that calls a class called Client. Every so often I get a sort of timeout error. I thought it was SQL at first, but it turns out it's pointing to the class itself.
Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\htdocs\ClientPortal\classes\Connections.php on line 3
<?php
session_start();
class Connections { //line 3
Does anyone know what's going on here?
thanks,
Billy
PHP scripts have a maximum time they're allowed to execute for, as declared in the php.ini.
You can circumvent this if you really want by adding the following line:
ini_set('max_execution_time', 123456);
where 123456 is the number of seconds you want the limit to be.
You can also use the set_time_limit function, which I only just found out about and assume does the same thing. I've always just done the former though.
You can change it in the php.ini file, but you might be using your script to do a batch operation or something. You wouldn't want a PHP script that is being accessed by an end user to sit there hanging for 30 seconds or more though, so you're better off leaving it at the default or even turning it down in the php.ini file, and setting the max_execution_time on an as-needed basis.
As seengee points out in the comment below, you can set the max_execution_time to 0 to stop the error from ever happening, but seengee is right to say that at least for a web request, you really shouldn't do this. For the php command line interpreter, this behaviour is the default though.
If you're seeing this problem for things that are supposed to be used by end-users through a web request, you might have to do some profiling to work out the real cause. If you're doing MySQL queries, start by turning on the slow query log. It's particularly good at letting you know when you've forgotten an index, or if you're doing something else inefficient.
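For reference, turning the slow query log on is a few lines of MySQL server configuration (my.cnf); the file path and the one-second threshold below are just example values, adjust them for your setup:
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1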
You can also shove a few $s = microtime(true); yourstuff(); var_dump(microtime(true)-$s); things around to get a vague overview of which bits are slowing things down, just make sure you don't leave any of them in afterwards!
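For example, a rough sketch of that kind of ad hoc timing (doSlowThing() here is just a stand-in for whatever code you suspect):
<?php
// quick-and-dirty timing of a suspect block; remove once you've found the culprit
$s = microtime(true);            // high-resolution start time
doSlowThing();                   // stand-in for the code you suspect is slow
$elapsed = microtime(true) - $s; // seconds spent in the block above
var_dump($elapsed);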
If you're still struggling to find the root cause, set xdebug up on your local machine and run the profiler. The extension is available as a precompiled windows binary (although there seems to be a confusing array of versions). You can inspect the results of running the profiler using wincachegrind.
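For what it's worth, enabling the profiler is only a couple of php.ini lines; these are the Xdebug 2-era settings that match the wincachegrind workflow above (Xdebug 3 renamed them to xdebug.mode = profile and xdebug.output_dir), and the output directory is just an example:
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = "C:\xdebug-profiles"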
In my production environment I'm observing a sporadic issue where pages take a long time to load. In the error logs we are seeing:
PHP Fatal error: Maximum execution time of 30 seconds exceeded
The affected line is where a session is being created for the user.
The directories are physical. There are over 3.5 million files in the directory. Garbage collection is set to 31 days for sessions in PHP.
The issue is sporadic, so I can't trigger it on demand. The behavior is consistent in that it is always the session start that takes more than 30 seconds to execute; the lines before it run fine. If I list the contents of the sessions directory (ls /var/www/sessions/), it takes over 45 seconds just from the command line. I think application monitoring would be good, but this seems to be an issue at the system level.
I've looked at the CloudWatch metrics but don't see a bottleneck involving disk reads there.
Could anyone advise on what issues we might be running into and how to resolve them?
PHP uses session.gc_probability to sporadically clean up the sessions folder. Make sure to set it to 0 in production so your API/page calls don't hang.
I suggest checking the session.gc_maxlifetime value; it will give you an idea of how long the session files are kept.
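For example, something along these lines in php.ini would stop requests from ever running the collector (the maxlifetime value simply mirrors the 31-day policy from the question):
session.gc_probability = 0       ; never run garbage collection during a request
session.gc_maxlifetime = 2678400 ; 31 days, in seconds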
You can call session_gc() to force a cleanup manually (and probably reproduce your issue); see https://www.php.net/manual/en/function.session-gc.php for more details. If it hangs for too long when run from the command line, you might consider deleting the entire session folder instead (WARNING: this will kill all users' sessions).
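A minimal sketch of such a manual cleanup script, run from the CLI rather than from a web request (session_gc() requires PHP 7.1+ and an active session):
<?php
// one-off cleanup script; run it from the CLI so no web request has to pay the cost
ini_set('session.gc_maxlifetime', 31 * 24 * 3600); // keep the 31-day retention policy
session_start();                                   // session_gc() needs an active session
$deleted = session_gc();                           // number of purged sessions, or false on failure
session_write_close();
echo "Removed $deleted stale session files\n";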
Note that some distros/packages automatically install a session garbage collection cron job. I had issues a long time ago with too many files in the folder, and the cron job simply hung (more details: https://serverfault.com/questions/511609/why-does-debian-clean-php-sessions-with-a-cron-job-instead-of-using-phps-built).
As a long-term solution, I would move away from file-based sessions and use Redis to handle them, especially on AWS where disk performance is not the best. I'm not sure what framework you use (most modern ones have built-in support for this), but you can also find framework-less solutions online.
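As a sketch of the framework-less route, assuming the phpredis extension is installed, the switch is mostly two php.ini settings (the host and port below are placeholders):
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
You can usually also set these per application with ini_set() before session_start().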
I've been migrating an old client's site (Kohana 2.3) from one of my servers to a third-party server and am now getting a "premature end of script headers" error when I either attempt to export data from my database or attempt to send emails to my clients, after about 30-40 seconds of processing.
I've attempted editing php.ini to raise both my maximum memory limit and maximum execution time, to no avail: the same error is produced.
I attempted to manually reduce the number of elements to be exported and got the script to run without erroring at somewhere between 700-750 elements, but this figure goes up and down whenever I run the script. The live data I'm using contains over 5000 elements.
Running memory_get_peak_usage() shows that I'm using a peak of a bit under 16M of memory to execute these scripts, so I'm reasonably sure I'm not going over any memory limits, as my PHP memory limit is 256M.
Setting the PHP time limit to 5 seconds generates a timeout error instead of the premature end of script headers error, but since that is expected, it is not helpful.
The strange thing is that nothing is being written to any logs. I've checked the php logs, the Kohana logs and the apache logs, and there is nothing that seems to point me in a direction of what could be causing this issue.
I was wondering if anyone had encountered this before or had any ideas with where I should go with this.
Check whether you have a /var/log/apache/suexec.log file; if so, and if suEXEC is the problem, it will explain why it's refusing to execute your script properly. One easy fix to try (for CGI scripts) is adding "-w" to the end of the first line of your Perl script, i.e. changing the first line from "#!/usr/bin/perl" to "#!/usr/bin/perl -w", and seeing if that makes suEXEC happy. Another common fix is to make sure that your CGI script has the same user/group ownership as your cgi-bin folder.
I have a timeout issue with my host settings, but I don't know exactly which setting is responsible for it. When I run this code in a PHP page:
<?php
sleep(30);
echo "Done";
?>
It gives me "tcp error operation timeout".
But when I change it to sleep(20), it runs successfully. Please help me find out which item in my php.ini file is responsible for this timeout. I've tried to Google it and have already tried a lot of suggestions, but with no luck, and I'm now stuck at this point trying to get my PHP script to go live.
Based on the error message, I suspect that this is not PHP timing out, but the actual web server "pulling the plug" (although 30 seconds is a ridiculously short time limit for that).
That is certainly not the normal PHP error message, and you shouldn't be able to induce a PHP timeout using sleep, since that uses up no actual CPU time.
Without knowing what kind of host you're running under (Apache? Nginx? IIS? Shared hosting? A VPS? etc) it's hard to know where, and even if, you could change this.
Take a look at your time limit in your configuration.
You can modify the time limit of the script with this line of code:
set_time_limit(60);
The default value is 30 seconds. Each call to set_time_limit() restarts the timeout counter from zero, so calling set_time_limit(30) part-way through the script gives it another 30 seconds from that point, on top of the time already used.
Take a look at the set_time_limit documentation.
30 seconds is the default PHP timeout. If you need to sleep(30), you must increase the timeout. There are two ways to change it from a PHP script:
set_time_limit($seconds);
OR
ini_set('max_execution_time', $seconds);
I would like to share this answer with everyone who is searching for a similar issue. After spending several hours searching for the root cause, I finally found, without changing any piece of code, that the Network Error (tcp_error) "A communication error occurred: Operation timed out" disappeared when I ran my script on another machine. So the issue was with either my machine or my internet connection; honestly, I don't know which. Anyway, the cause of this error might not be found only in the code.
According to the documentation:
max_execution_time only affect the execution time of the script itself.
Any time spent on activity that happens outside the execution of the script
such as system calls using system(), stream operations, database queries, etc.
is not included when determining the maximum time that the script has been running.
This is not true on Windows where the measured time is real.
This is confirmed by testing:
Will not time out
<?php
set_time_limit(5);
$sql = mysqli_connect('localhost','root','root','mysql');
$query = "SELECT SLEEP(10) FROM mysql.user;";
$sql->query($query) or die($query.'<br />'.$sql->error);
echo "You got the page";
Will time out
<?php
set_time_limit(5);
while (true) {
// do nothing
}
echo "You got the page";
Our problem is that we really would like PHP to time out, regardless of what it is doing, after a given amount of time (as we don't want to keep resources busy if we know we've failed to deliver a page in an acceptable amount of time, like 10 seconds). We know we can play with settings such as the MySQL wait_timeout for the SQL queries, but the page timeout will depend on the number of queries that are executed.
Some people have tried to come up with workarounds, but it doesn't seem implementable.
Q: Is there an easy way to get a real (wall-clock) PHP max_execution_time on Linux, or are we better off timing out elsewhere, such as at the Apache level?
This is quite tricky advice, but it will definitely do what you want, if you are willing to modify and recompile PHP.
Take a look at the PHP source code at https://github.com/php/php-src/blob/master/Zend/zend_execute_API.c (the file is Zend/zend_execute_API.c), at function zend_set_timeout. This is the function that implements time limit. Here's how it works on different platforms:
On Windows, it creates a new thread, starts a timer on it, and when the timer fires it sets a global variable called timed_out to 1; the PHP execution core checks this variable regularly and then exits (very simplified).
On Cygwin, it uses setitimer with ITIMER_REAL, which measures real (wall-clock) time, including any sleep or wait, and then raises a signal that interrupts whatever is being processed and stops execution.
On other Unix systems, it uses setitimer with ITIMER_PROF, which only measures CPU time spent by the current process (both in user mode and kernel mode). This means that time spent waiting for other processes (like MySQL) doesn't count towards the limit.
Now what you want to do is change the itimer on your Linux build from ITIMER_PROF to ITIMER_REAL, which of course you need to do manually: edit, recompile, install, etc. The other difference between the two is that they use different signals when the timer runs out. So my suggestion is to change the ifdef:
# ifdef __CYGWIN__
into
# if 1
so that you set both ITIMER_REAL and the signal that PHP waits for to SIGALRM.
Anyway, this whole idea is largely untested (I use it on one very specific system where ITIMER_PROF is broken, and it seems to work), unsupported, etc. Use it at your own risk. It may work with PHP itself, but it could break other modules, in PHP and in Apache, if they use the SIGALRM signal or another timer for whatever reason.
This is an old and answered question. But for the sake of helping others, I wanted to point out the request_terminate_timeout php-fpm option. If you're using PHP-FPM, it is most likely what you need.
If set, this option allows you to tell PHP-FPM to kill a request after N seconds, regardless of what PHP does.
See http://php.net/manual/en/install.fpm.configuration.php#request-terminate-timeout for details.
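For illustration, it lives in the pool configuration file (e.g. www.conf); the 30-second value is just an example, and 0 (the default) disables it:
request_terminate_timeout = 30s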
From httpd.conf:
Timeout: The number of seconds before receives and sends time out
Timeout 300
I have a massive amount of data that needs to be read from MySQL, analyzed, and, based on the results, split up and stored in new records.
Five records take about 20 seconds, but the records vary in length, so I can't really estimate how long the program will take. However, I have calculated that the process should not take much longer than 5 hours, so I'd like to run it overnight and feel quite sure that when I come back to the office the next morning the program is done.
Assuming the code is fail-safe (I know, right ;), how should I set up the Apache/PHP/MySQL settings so that when I execute the script I can be sure that the program will not time out and/or run out of RAM?
(It is basically running in a loop, fetching sets of 100 rows until it can't any more, so I am hoping the fact that the variables are reset at the beginning of each iteration will keep the memory usage constant.)
The actual size of the database when dumped is 14 MB, so the volume of data is not that high.
(On a side note, it might also be that I haven't assigned enough resources in the server settings, and maybe that's why it takes 20 seconds to process 5 records.)
Make sure you have removed any max_execution_time limits by setting this to 0 (unlimited) in your PHP.ini or by calling set_time_limit(0). This will ensure that PHP doesn't stop the script mid-execution.
If at all possible, you should run the script from the CLI so that you don't have to worry about Apache timing your request out (it shouldn't, but it might).
Since you are working with only 15 MB of data, I wouldn't worry about memory usage (128 MB is the default in PHP). If you are really worried, you can remove the memory limit in PHP by setting memory_limit to either a higher number or -1 (infinite memory).
Keep in mind that modifying php.ini will affect all scripts interpreted by that installation. I prefer to use the appropriate ini-setting functions at the top of my scripts to avoid dangerous global changes.
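A minimal sketch of that per-script approach (the values here are only examples):
<?php
// per-script overrides: these affect only this script, not the whole installation
set_time_limit(0);               // 0 = no execution time limit
ini_set('memory_limit', '256M'); // raise the ceiling for this run; -1 would mean no limit
// ... long-running batch work goes here ...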
On a side note: This doesn't really sound like a job for PHP. I'm not trying to discourage your use of PHP here, but there are other languages that are better suited for command line usage.
Better to make your script exit after a batch and then restart it, storing the point where it left off last time. This ensures you don't accumulate memory leaks, that the script doesn't run out of memory due to some error in garbage collection, and that execution continues after an unexpected failure.
A simple shell command would be :
while [ 1 ]; do php myPhpScript.php a; done
You can add other checks to ensure it keeps running properly.
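A minimal sketch of the checkpoint idea; the checkpoint file, table name and credentials are hypothetical placeholders, not taken from the question:
<?php
// myPhpScript.php -- processes one batch per run, then exits; the shell loop above restarts it
$checkpointFile = __DIR__ . '/last_id.txt';
$lastId = is_file($checkpointFile) ? (int) file_get_contents($checkpointFile) : 0;
$db = mysqli_connect('localhost', 'user', 'pass', 'mydb'); // placeholder credentials
$result = $db->query("SELECT id, payload FROM records WHERE id > $lastId ORDER BY id LIMIT 100");
if ($result->num_rows === 0) {
    exit(0); // nothing left to process (the shell loop will simply start another idle run)
}
while ($row = $result->fetch_assoc()) {
    // ... analyze $row['payload'] and store the derived records here ...
    $lastId = (int) $row['id'];
}
file_put_contents($checkpointFile, $lastId); // remember where we got to
// exiting here releases all memory; the next run picks up from $lastId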
I'd like to point out that scripts run via the PHP CLI default to having no time limit, unlike scripts run through CGI, mod_php, etc.
And, as stated, avoid running this via Apache.
However, if you MUST do this through the web, consider breaking it down. You can make a page that processes 5-10 results, appends to the dump file, then prints out either a meta refresh or some JavaScript to reload the page with a parameter telling it where it's up to, and continues until done.
Not recommended though.
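If you do go that route anyway, a rough sketch of the meta-refresh approach (the ?offset parameter and the batch size are made up for illustration):
<?php
// hypothetical chunked export page: handle a small batch, then tell the browser to come back
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 10;
// ... process rows $offset to $offset + $batch - 1 and append them to the dump file ...
$processed = 10; // pretend a full batch was handled; set this lower when the data runs out
if ($processed === $batch) {
    // not finished: reload this page with the next offset
    echo '<meta http-equiv="refresh" content="1;url=?offset=' . ($offset + $batch) . '">';
    echo 'Processed up to row ' . ($offset + $batch) . '...';
} else {
    echo 'Done.';
}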
Adding to some of the other good options here: you might want to look at http://www.electrictoolbox.com/article/php/process-forking/ and also consider sending some requests to /dev/null if you don't need them to give back feedback.
Don't do this through a web interface. Run it from the command line; but also look at whether your code can be optimised, or set break points and do it in "chunks".
First of all, put set_time_limit(0); at the beginning of the script (see http://php.net/manual/en/function.set-time-limit.php).
As for memory, take care of it by unsetting any variables, arrays, or references that you no longer need at the end of each iteration.
Better to run the script from the shell (CLI) or as a cron job.
As far as I know, MySQL connections do not time out, so you should be safe by setting:
php_value max_execution_time X
in a .htaccess file, or by placing set_time_limit(X) at the beginning of your script, where X is a comfortable value in seconds.
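Putting that together, a rough sketch (the table, columns and credentials are illustrative only):
<?php
// rough sketch: unlimited run time plus per-iteration cleanup so memory stays flat
set_time_limit(0);
$db = mysqli_connect('localhost', 'user', 'pass', 'mydb'); // placeholder credentials
$offset = 0;
while (true) {
    $result = $db->query("SELECT id, payload FROM records LIMIT 100 OFFSET $offset");
    if ($result->num_rows === 0) {
        break; // no more rows to process
    }
    while ($row = $result->fetch_assoc()) {
        // ... analyze the row and store the new records here ...
    }
    $result->free(); // release the result set
    unset($row);     // drop anything you no longer need before the next iteration
    $offset += 100;
}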