I wonder if there is a way to extend the maximum execution time and maximum input time for a PHP scaled app on OpenShift. I need to make an AJAX request that needs about 5 minutes to execute completely. I tried adding some config to set php_value in the .htaccess file, but it didn't work. I think the problem is that the latency is too long, maybe more than 1 minute.
5 minutes is a LONG time to have something making an ajax request. Can you break the request up into shorter chunks or anything? I'd be more concerned about fixing that issue than making the PHP request run longer.
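One hedged sketch of that chunked approach on the PHP side (the endpoint, load_items() and process_item() below are hypothetical placeholders): the client keeps calling the endpoint with an increasing offset until it reports done, so no single request has to run for 5 minutes.
<?php
// Hypothetical endpoint: process one slice of the long job per request.
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$chunkSize = 100; // tune so one chunk finishes well under the server's limits

$items = load_items($offset, $chunkSize); // hypothetical data loader
foreach ($items as $item) {
    process_item($item);                  // hypothetical per-item work
}

header('Content-Type: application/json');
echo json_encode([
    'nextOffset' => $offset + count($items),
    'done'       => count($items) < $chunkSize,
]);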
The problem
I am using Laravel 5.3 to import a huge tab-separated file (more than 1 million rows and more than 25 columns) into a MySQL database using functions in controller code (I am refraining from posting all the code here). While processing the file I encounter the following error:
FatalErrorException in Connection.php line 720:
Maximum execution time of 30 seconds exceeded
Please note that the application imports a different number of rows before failing on each run.
Question
I know we can fix this using either of following:
changing php.ini suggested here
adding ini_set('max_execution_time', 300); at the beginning of public/index.php as suggested here
Any number of reasons might be behind this, and I am more interested in knowing exactly where it is running out of time. Laravel doesn't provide any more detail than the message above. I would really appreciate it if someone could suggest ways to debug this. Things that would help:
Is the time an aggregate of all requests made by a method?
Does memory overload cause this?
Will it help to chunk the data and handle it through multiple requests?
Environment
Laravel 5.3
Centos 7 on vagrant
MySQL
It's not a specific operation running out of time. It's... everything, combined, from start to finish.
max_execution_time integer
This sets the maximum time in seconds a script is allowed to run before it is terminated by the parser. This helps prevent poorly written scripts from tying up the server. The default setting is 30.
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
The idea, here, is that for a web service, generally speaking, only a certain amount of time from request to response is reasonable. Obviously, if it takes 30 seconds (an arbitrary number for "reasonableness") to return a response to a web browser or from an API, something probably isn't working as intended. A lot of requests tying up server resources would result in a server becoming unresponsive to any subsequent requests, taking the entire site down.
The max_execution_time parameter is a protective control to mitigate the degradation of a site when a script -- for example -- gets stuck in an endless loop or otherwise runs for an unreasonable amount of time. The script execution is terminated, freeing resources that were being consumed, usually in an unproductive way.
Is the time an aggregate of all requests made by a method?
It's the total runtime for everything in the script -- not one specific operation.
Does memory overload cause this?
Not typically, except perhaps when the system is constrained for memory and uses a swap file, since swap thrashing can consume a great deal of time.
Will it help to chunk the data and handle it through multiple requests?
In this case, yes, it may make sense to work with smaller batches, which (generally speaking) should reduce the runtime. Everything is a tradeoff, though: larger batches may or may not be more efficient in terms of processing time per unit of work, which is workload-specific and rarely linear.
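As an illustration only (not the asker's code), here is a sketch of reading the tab-separated file in fixed-size batches and inserting each batch with Laravel's query builder; the file path, table name, and column mapping are assumptions.
<?php
// Sketch: stream a large TSV and insert it in batches instead of row by row.
use Illuminate\Support\Facades\DB;

$handle    = fopen(storage_path('app/import.tsv'), 'r'); // assumed location
$batchSize = 1000;
$batch     = [];

while (($row = fgetcsv($handle, 0, "\t")) !== false) {
    $batch[] = [
        'col_a' => $row[0], // map columns as appropriate
        'col_b' => $row[1],
    ];
    if (count($batch) >= $batchSize) {
        DB::table('imports')->insert($batch); // one query per 1000 rows
        $batch = [];
    }
}

if ($batch) {
    DB::table('imports')->insert($batch); // insert the final partial batch
}
fclose($handle);
Each batch could equally be dispatched to a queued job, so no single web request has to hold the whole import.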
I have a script which generates 3 different image sizes from a library of images and, as you may guess, it takes a while to do its job: approximately 5 minutes for 400 images.
The default maximum execution time of 30 seconds was not enough, so I decided to change it in php.ini by setting
max_execution_time = 1800. I checked the updated value in phpinfo() and it confirmed that the new time limit is 1800. Just to be sure that the error is not caused by a MySQL timeout either, I updated mysql.connect_timeout = 1800.
The problem is that my script is still timing out after 30 seconds when it should not be.
What I was thinking about was setting
set_time_limit(1800)
at the beginning of every script involved in the process, but this would require me to set it in processors, controllers, and so on.
I tried to search for an internal setting regarding script execution time but found none.
Does anybody have any idea how to force the script to run longer without timing out?
UPDATE
The error is 500
MODX has nothing to do with it. Change the setting in your php.ini; check the docs here.
Also, why are you slamming such a heavy script all at once?
Use getCount to get the total number of images, then place a foreach loop processing a fixed number of images inside another foreach loop that sleeps or waits to taper out the load.
My server would probably process 400 images with little effort in under 30 seconds. You may also want to look at memory_limit in your config. I use 256 MB, but I also have a couple dozen cores on the server and a massive amount of memory.
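A minimal sketch of the batch-and-pause pattern described above; the image list and the process_image() helper are hypothetical placeholders.
<?php
// Sketch: process images in fixed-size batches, pausing between batches
// so the server isn't slammed all at once.
$images    = glob('/path/to/library/*.jpg'); // stands in for getCount()/a query
$batchSize = 25;

foreach (array_chunk($images, $batchSize) as $batch) {
    foreach ($batch as $image) {
        process_image($image); // hypothetical: generate the 3 sizes here
    }
    sleep(2); // taper the load before starting the next batch
}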
I have a script that loads a CSV via cURL; once I have the CSV, it adds each of the records to the database and, when finished, displays the total number of records added.
With fewer than 500 records it executes just fine. The problem is that whenever the number of records is too big, the execution is interrupted at some point and the browser displays the download dialog with a file named like the last part of my URL, without extension and containing nothing: no warning, error, or any kind of message. The database shows that it added some of the records, and if I run the script several times it adds a small number more.
I have tried to find someone with a similar situation but haven't found one yet.
I would appreciate any insight into the matter; I'm not sure if this is a Symfony2 problem, a server configuration problem, or something else.
Thanks in advance.
Probably your script is reaching the maximum PHP execution time, which is 30 seconds by default. You can change it in the controller doing the lengthy operation with the PHP set_time_limit() function. For example:
set_time_limit(300); // 300 seconds = 5 minutes
That's more a limitation of your webserver/environment PHP is running in.
Increase max_execution_time to allow your webserver to run the request longer. An alternative would be writing a console command; the CLI environment isn't restricted in most setups.
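As a rough illustration of the console route (Symfony 2 style; the class name, command name, and import logic are invented placeholders):
<?php
// Sketch of a Symfony 2 console command; run with `app/console app:import-csv`.
namespace AppBundle\Command;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ImportCsvCommand extends Command
{
    protected function configure()
    {
        $this->setName('app:import-csv')
             ->setDescription('Import the CSV without the web request time limit');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // CLI PHP defaults to max_execution_time = 0, so a long import is fine here.
        // ... fetch the CSV via cURL and insert the records ...
        $output->writeln('Import finished.');
    }
}
Running the import this way keeps it out of the web request entirely, so neither the web server's time limit nor the browser is involved.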
I have a PHP script running a loop which could go on for hours on end. However, after about 50 minutes I get the following error, even though the script has been running far beyond 60 seconds:
Fatal error: Maximum execution time of 60 seconds exceeded in
/path/script.php on line 275
The memory usage by the time the script fails is 11359848 bytes (10.8336 MB).
Any ideas what sort of thing could actually be causing the script to trip out like this?
The maximum execution time is not real time but CPU time.
So if you send, for example, an HTTP request that takes 10 hours to finish (i.e. you wait for I/O), you can easily stay within the 60-second limit. But if you try breaking a hash by brute force (i.e. something where the script is actually doing work), you'll hit the time limit after roughly 60 seconds of real time.
The solution for your problem is pretty simple: set_time_limit(0); disables the time limit unless PHP is running in safe_mode, but if that's the case it's time to change the hosting company.
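For illustration, the call simply goes at the top of the long-running script; the work loop below is a placeholder.
<?php
set_time_limit(0); // no execution time limit for this script (has no effect in safe_mode)

while ($job = fetch_next_job()) { // hypothetical work queue
    handle_job($job);             // each iteration can take as long as it needs
}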
I have a few doubts about the maximum execution time set in php.ini.
Assuming max_execution_time is 3 minutes, consider the following cases:
I have a process which will end in 2 minutes.
But it's in a loop and it should run 5 times, so it becomes 10 minutes.
Will the script run properly without showing a timeout error? Why?
The PHP function just prints the data and takes only 2 minutes.
But the query execution is taking 5 minutes.
Will the script run without error? Why?
My single PHP process itself takes 5 minutes.
But I am calling the script from the command line.
Will it work properly? Why?
How are memory allocation and execution time related?
If the execution time for a script is very high
But it returns only a small amount of data
Will it affect memory or not? Why?
I want to learn what is happening internally; that is why I am asking these questions.
I don't want to just increase the time limit and memory limit.
The rules on max_execution_time are relatively simple.
Execution time starts to count when the file is interpreted. Time needed beforehand to prepare the request, handle uploaded files, for the web server to do its thing, etc. does not count towards the execution time.
The execution time is the total time the script runs, including database queries, regardless of whether it's running in loops or not. So in the first and second cases, the script will terminate with a timeout error because that's the defined behaviour of max_execution_time.
External system calls made with exec() and the like do not count towards the execution time, except on Windows. (Source) That means you could run an external program that takes longer than max_execution_time.
When called from the command line, max_execution_time defaults to 0. (Source) So in the third case, your script should run without errors.
Execution time and memory usage have nothing to do with each other. A script can run for hours without reaching the memory limit. If it does reach it, that is often due to a loop where variables are not unset and previously reserved memory is not freed properly.
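A small illustrative loop (not from the question) showing how memory stays flat over a long run when each iteration frees what it allocated:
<?php
// Runtime and memory are independent: this loop can run for a long time
// while memory use stays flat, because each iteration releases its data.
for ($i = 0; $i < 1000000; $i++) {
    $chunk = str_repeat('x', 100000); // allocate roughly 100 KB of work data
    // ... do something with $chunk ...
    unset($chunk);                    // free it before the next iteration

    if ($i % 100000 === 0) {
        echo memory_get_usage(true) . " bytes in use\n";
    }
}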