Does running a PHP script from SSH bypass max execution time? - php

Basically I have an issue. I am posting to my users' Facebook statuses using a cron job, but when I run the cron from the browser I get an error after about 30 seconds. I have edited the .ini file to raise the max execution time, but it doesn't seem to work.
It updates the statuses of the first 700ish users but after that it stops.
Can I run it from the terminal or is there anything I can check/do to get around this?

When running PHP scripts from the command line, the default max execution time is 0 - that is, unlimited. From an HTTP context there are other settings that can shut down your script, including the Apache Timeout directive. This is definitely a job I'd run through the PHP CLI.
I would enable error logging, which will show which limits your script is running into. There are a lot of possibilities - you may be hitting the memory limit, the execution time may be too low, the Facebook API may be rate-limiting your requests, etc.
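As a minimal sketch (not the asker's code), you could guard the job so the heavy work only ever runs from the CLI; the message text is just an illustration:
if (php_sapi_name() !== 'cli') {
    die('Run this script from the terminal or cron, not the browser.');
}
set_time_limit(0); // the CLI default is already 0 (unlimited), so this is just being explicit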

Make sure that you'll see errors by doing:
error_reporting(E_ALL);
ini_set('display_errors',1);
at the top of your script.
You could be running into a max_execution_time ceiling, or you could be running out of memory, etc. Error messages will help with determining that.
As Frank Farmer implies in his comment, you can use set_time_limit(0); in your script to allow it to run indefinitely.
If you're having memory limit issues, you can raise the memory limit in your script (ini_set('memory_limit',...);) -- but you should really consider fixing your code so it doesn't keep consuming memory.
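As a rough illustration of the "fix your code" point (the fetch/post helpers below are placeholders, not the asker's actual functions), processing users in small batches and freeing each batch keeps memory flat:
$offset = 0;
while ($users = fetch_users_batch($offset, 100)) { // hypothetical helper returning at most 100 rows
    foreach ($users as $user) {
        post_facebook_status($user);                // hypothetical helper
    }
    unset($users);   // let PHP reclaim the batch before fetching the next one
    $offset += 100;
}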

Related

About max execution timeout

I have a question about the server's max execution timeout.
If I call a server API to run something huge that cannot finish within the time limit set by the server's php.ini max_execution_time config, will the process on the server still continue?
- if so, will it run endlessly?
- if not, does the process stop immediately, or does it cancel the loop iterations one by one and finish all processing?
In my experience, when I receive a max execution timeout on local hosting, the data has already been processed.
So I am not sure whether the request is just stuck waiting on the response until the timeout, or whether the server keeps running after throwing the max execution timeout exception.
It really depends on what your PHP code is like.
Usually the code execution will halt. You can alter this behaviour using ignore_user_abort().
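A minimal sketch of that suggestion, placed at the top of the script:
ignore_user_abort(true); // keep running even if the client disconnects
set_time_limit(0);       // remove the script's own execution limit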
The PHP interpreter runs scripts against the php.ini configuration and checks settings such as max_execution_time = 500 and max_input_time = 500.
PHP doesn't continue to run the script after max_execution_time is reached. It simply kills the script.
What can also happen is that the script starts a database query; normally the query will keep running on the database server until it finishes, no matter what happens to the script. You may also get a Gateway Timeout coming from the web server; for Apache, check httpd.conf and look for the Timeout setting.
If you need to run a script that takes a lot more time to execute than the rest of your website, you should have the web page (the PHP on the server) fork a new process as a background script (the part that takes a long time), then inform the user via async status updates or an email that processing has ended. You should not extend max_execution_time for every script just for one exception.
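A rough sketch of that approach; the script name and log path are placeholder assumptions:
// hand the slow part to a background process instead of raising max_execution_time globally
exec('nohup php /path/to/long_task.php > /tmp/long_task.log 2>&1 &');
echo 'Processing started; you will be notified when it finishes.';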
It doesn't continue after the exception is thrown. It's simply cut when the time is up.
Anything before the timeout is executed, unless the code is specifically designed to prevent this.
The process won't continue; it stops immediately once the time limit set in the server's php.ini max_execution_time config has been reached, and PHP throws a max execution timeout error.
See here (How to increase maximum execution time in php) if you want to increase the maximum execution time in a PHP file.
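For reference, the usual ways to raise the limit for a single script (300 seconds is just an example value):
ini_set('max_execution_time', 300); // seconds; on the CLI the default is already 0 (unlimited)
// or equivalently:
set_time_limit(300);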

Very long script keeps failing

I have a script that updates my database with listings from eBay. The amount of sellers it grabs items from is always different and there are some sellers who have over 30,000 listings. I need to be able to grab all of these listings in one go.
I already have all the data pulling/storing working since I've created the client side app for this. Now I need an automated way to go through each seller in the DB and pull their listings.
My idea was to use CRON to execute the PHP script which will then populate the database.
I keep getting Internal Server Error pages when I'm trying to execute a script that takes a very long time to execute.
I've already set
ini_set('memory_limit', '2G');
set_time_limit(0);
error_reporting(E_ALL);
ini_set('display_errors', true);
in the script but it still keeps failing at about the 45 second mark. I've checked ini_get_all() and the settings are sticking.
Are there any other settings I need to adjust so that the script can run for as long as it needs to?
Note the warnings from the set_time_limit function:
This function has no effect when PHP is running in safe mode. There is no workaround other than turning off safe mode or changing the time limit in the php.ini.
Are you running in safe mode? Try turning it off.
This is the bigger one:
The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
Are you using external system calls to make the requests to eBay, or long calls to the database?
Look for particularly long operations by profiling your php script, and looking for long operations (> 45 seconds). Try to break those operations into smaller chunks.
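A simple sketch of that kind of profiling; the fetch function is a placeholder for whatever call talks to eBay or the database:
$start = microtime(true);
$listings = fetch_seller_listings($seller); // placeholder for the long operation
error_log(sprintf('fetch for %s took %.1fs', $seller, microtime(true) - $start));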
Well, as it turns out, I overlooked the fact that I was testing the script through the browser, which means Apache was handling the PHP process via mod_fcgid, which had a timeout of exactly 45 seconds.
Executing the script directly from shell and CRON works just fine.

Script keeps dying out after a while, not a timeout / memory issue

I have a long running script that dies out for no reason. It's supposed to run for over 8 hours, but dies out after an hour or two, no errors, nothing. I tried running it via CLI and via http, no difference.
I have the following parameters set:
set_time_limit(0);
ini_set('memory_limit', '1024M');
I've been monitoring the memory usage, and it doesn't go over 200M.
Is there anything else that I'm missing? Why would it die out?
One possible explanation could be that the PHP garbage collector is interfering with the script. That could be why you're seeing random die offs. When the garbage collector is turned on, the cycle-finding algorithm is executed whenever the root buffer runs full.
The PHP manual states:
The rationale behind the ability to turn the mechanism on and off, and to initiate cycle collection yourself, is that some parts of your application could be highly time-sensitive.
You could try disabling the PHP garbage collector using gc_disable. The manual recommends you call gc_collect_cycles right before disabling to free the buffer.
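A minimal sketch of that suggestion:
gc_collect_cycles(); // free whatever is already in the root buffer
gc_disable();        // then keep the collector out of the way for the long run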
Another explanation could be the code itself. An 8 hour script is a long script and if it's complex, it could easily be hitting a snag that causes the script to exit. I think for your troubleshooting now, you should definitely turn error reporting to report everything using error_reporting(-1);.
Also, if your script is communicating with other services, say a database for example, it's quite possible that could be the issue. If the database server runs out of memory or times out, it could be causing your script to hang and die. If this is the case, you could split up your connections to the database and connect/disconnect at specific timed intervals during the script to keep that connection fresh. The same mentality could be applied to any other service you may be communicating with.
You could, for testing purposes only, purposely make your script write to a log file on each successful query, making sure to include the timestamp from when the query begins and another from when the query ends. You might not get any errors, but it may help you determine if there is a specific problem query or if a query is hanging for longer than usual. You could also check that your MySQL connection is still valid and print out something to inform you of that as well.
An example log file:
[START 2011/01/21 13:12:23] MySQL Connection: TRUE [END 2011/01/21 13:12:28] Query took 5s
[START 2011/01/21 13:12:28] MySQL Connection: TRUE [END 2011/01/21 13:12:37] Query took 9s
[START 2011/01/21 13:12:39] MySQL Connection: TRUE [END 2011/01/21 13:12:51] Query took 12s
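A rough sketch of code that would produce log lines like those; the $link connection and $sql query are placeholders:
$startTime = time();
$alive = mysqli_ping($link) ? 'TRUE' : 'FALSE'; // is the MySQL connection still usable?
mysqli_query($link, $sql);                      // the long-running query
file_put_contents('query.log', sprintf(
    "[START %s] MySQL Connection: %s [END %s] Query took %ds\n",
    date('Y/m/d H:i:s', $startTime), $alive,
    date('Y/m/d H:i:s'), time() - $startTime
), FILE_APPEND);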
It's probably something related to the code.
I have scripts running for weeks and months with no trouble.
Your database connection might time out and output an error.
It's also possible you run out of file descriptors if you open connections or files, or your shared memory region is full. It depends on the code.
Check the system logs to make sure SELinux is not interfering with you; in that case your script would not print any error. From the system logs you can also see whether you have crossed user limits on any system resources (see ulimit).
It's really strange if you run it in the CLI and get nothing, not even a segfault. Did you check both stdout and stderr?
Maybe it segfaults.
Try launching your script this way:
$ ulimit -c unlimited
$ php script.php
And see if you find a core dump file (core.xxxx) in the working directory when it dies.
Apache also has its own script timeout; you will need to tweak the httpd.conf file.

suPHP / PHP script timeout

After my host enabled suPHP, a previously working script has been timing out after ~3 min (it varies, but the script has not run for more than 3 minutes, AFAIK).
The odd part is, the script is not throwing any errors that I can see (and yes, full PHP error reporting/logging is enabled, and all MySQL queries have been checked for errors too); it simply stops.
Refreshing the page will load more of the data the script is supposed to process (probably because the MySQL queries have been cached...), but if there is a lot of data to process it never fully executes.
The other oddity is that I can run test scripts for over 10 minutes on the same host w/ set_time_limit(0); / etc.
Has anyone else had to deal with this, or does anyone know what is causing the timeout and how to fix it (assuming that dropping suPHP is not an option)? There was also a simultaneous update from PHP 5.2.x to 5.3.x, but I doubt that is causing the issue. :/
I've seen this happen when memory runs out - the script just ends without error. If you have a loop, try using the memory functions to dump the memory status. Also, use phpinfo() to see what your maximum memory allowance is - the switch to suPHP may have changed that to your detriment.
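A quick sketch of dumping memory status from inside the loop:
// log current usage against the configured limit on each iteration
error_log(sprintf('memory: %d bytes used, limit %s',
    memory_get_usage(true), ini_get('memory_limit')));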

PHP Timeouts and FTP function

In implementing the backup script I described in this serverfault question, I ran into some timeout issues that have prompted optimizations to the code (namely, backing up one file per execution of the script and doing everything I can to minimize the number of file-hashes I am calculating over the very large data files).
So far, that seems to have solved the timeout issue, but given the size of the files, there is certainly room for the transfer to take longer than the standard 30s allotted before a script times out. If that happens, I assume the file will simply be cut off as partially transferred. Is there any way to protect against this?
Note that I am operating on a shared-hosting environment, so editing the php.ini file is not an option.
If it's enabled, you can call set_time_limit(). Alternatively, if you run php from the command line (via cron or similar), max execution time does not apply.
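A small sketch of that first suggestion; the $conn connection and file paths are placeholders:
set_time_limit(0); // if allowed on the host; has no effect in safe mode
$ok = ftp_put($conn, 'backups/file.tar.gz', '/local/file.tar.gz', FTP_BINARY); // placeholder paths
if (!$ok) {
    // the transfer failed or was cut short - retry or flag it rather than trusting a partial upload
}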
Can you try running the ftp job via the shell? Might work on a shared host...
shell_exec('nohup ftp my-ftp-command 2> /dev/null');
According to the set_time_limit() documentation, this should never be an issue, because time spent on activities outside the script is not included when calculating the script's execution time for timeout purposes.
