I am trying to run a PHP file through a cron job. It starts running fine, but after a certain period the server terminates its execution, so I never get my desired output. I am writing the output to a text file: after the cron job starts, some output is stored in the text file, but the process is terminated before execution completes.
I also call the mail function at the beginning and at the end of the file, but I only ever receive the beginning message.
I set the max_execution_time and memory_limit settings to unlimited and verified them via phpinfo(). Everything looks fine there, but the file still never completes its execution. I can run the file through the browser, but it takes a very long time, so I have to do it another way, such as cron. If anyone can provide a better solution in this regard, I will be grateful.
Turn off safe_mode in php.ini and remove the maximum execution time limit by adding
set_time_limit(0)
at the beginning of your script. See the PHP manual entry for set_time_limit().
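A minimal sketch of such a script header (ignore_user_abort() and the memory setting are optional extras, not part of the original answer):

<?php
set_time_limit(0);             // no execution time limit
ignore_user_abort(true);       // keep running if the client disconnects
ini_set('memory_limit', '-1'); // optional: lift the memory cap (use with care)

// ... long-running work ...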
I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
Run from the server (via cron), it mostly works too, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but put stuff in memory. I have definitely eliminated any possibility that the path is the problem: I test the file with a conditional is_readable(), output the result, and have sent myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable($fs)) {
    mail('myaddress', 'cron', 'before require'); // this works reliably
    require ($fs); // can be an empty file ie. <?php ?>
    mail('myaddress', 'cron', 'after require'); // this never works.
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single include-ed file, but certainly the one being run has newline (LF) endings (Linux-style), as opposed to carriage return + line feed (CR LF) (Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 throws fatal errors on certain things that PHP 5.3 did not, such as Division by Zero.
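If you can edit the calling script, one way to surface such a fatal yourself is to catch it (a sketch reusing the question's $fs and mail() pattern; in PHP 7 most fatals, including a ParseError inside the required file, implement Throwable):

try {
    require $fs;
} catch (\Throwable $e) {
    mail('myaddress', 'cron', 'require failed: ' . $e->getMessage());
    exit(1);
}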
If you have no access to the server config to turn all errors on, then calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the top of the included file, on the line right after the first <?php tag, and seeing whether it loads. If it does, move the pair down line by line (though don't split a control structure!) and retest after each move; when the output stops appearing, you know the error is on the line above.
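If you can at least change settings at runtime, another option is to force error display and logging at the top of the entry script (a sketch; some hosts override display_errors, and the log path is hypothetical):

error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/php_debug.log');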
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON, and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be the issue, I did a lot of unit testing, removing prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke while run by CRON.
As nzn indicated, this is most likely caused by an error triggered from the included file. From the outside it is hard to diagnose. A likely cause is a relative include/require within that file. A way to verify that is to run the script on the console from a different working directory. A fix might be either to call cd from cron before starting PHP or to do a chdir(__DIR__) within the primary file before doing further includes.
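A minimal sketch of the chdir(__DIR__) approach (the relative include shown is hypothetical):

// Pin the working directory to this file's directory so relative
// includes resolve the same way under cron as in the browser.
chdir(__DIR__);
require 'lib/helpers.php';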
I made a script that shouldn't return anything to the browser (no echo, no print, and no interruptions of the code with blank space like ?> <?), and that uses ignore_user_abort(true); so that the process doesn't stop once the browser window is closed.
Thus, once the script is launched, it should run to the end.
The script is designed for a newsletter, and it sends one email every 5 seconds via mail(); to respect my provider's spam policies.
That said, what's happening is that after about 20 minutes of work (the total number of emails is 1002), the script "collapses" with no error returned.
Hence my question: is there a lifetime limit for scripts running with ignore_user_abort(true);?
EDIT
Following the suggestion of Hanky (below), I added the line:
set_time_limit(0);
but the issue persists.
So whilst ignore_user_abort(true); will prevent the script from stopping after a visitor browses away from the page, it is set_time_limit(0); that removes the time limit. You can also change the PHP memory_limit in your php.ini, or by setting something like php_value memory_limit 2048M in your .htaccess file.
To list the current defaults you can run echo ini_get('max_execution_time'); (in seconds) or echo ini_get('memory_limit'); (returns a value such as 128M).
This being said, it sounds like your PHP script is better suited to being run from the CLI. From what you have described, it doesn't really need to serve anything to the web browser, and the command line is a better fit for PHP scripts that operate as background processes rather than returning a front-end to the user.
You can run a file from the command line simply with php script.php or php -f script.php.
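For example, a crontab entry along these lines (the PHP binary and paths are placeholders) would run the script nightly at 2am and capture its output:

0 2 * * * /usr/bin/php -f /path/to/newsletter.php >> /path/to/newsletter.log 2>&1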
Initially there was no way to solve the issue, and the provider is still investigating.
Meanwhile, following your suggestions, I was able to make it run. I created a TEST file and fired it to verify:
exec("/php5.5/bin/php -f /web/htdocs/www.mydomain.tld/home/test.php > /dev/null 2>&1 &");
It worked. I set up a sleep(600); and it sent 6 emails, plus one that informs me when the process has really finished.
It runs in a transparent way till the end.
Thank you so much for your support.
I have a PHP script with an infinite while loop; it must run 24/7.
From another PHP file, how can I check whether that script is running on the server or has stopped?
And how can I send a signal to Apache to stop and re-execute that file?
I'm going to assign numbers to the files. The file that runs 24/7 will be the first file, and the file that changes the state of the first one will be called the second file.
Now, the first file can write to a file or to a database, say, every 10 minutes. That way you know whether it's running by checking that file and the last date it was written. Then you create a second file or database table and write into it the state you want the first file to be in, for example: active or disabled. The first file reads that file/table, and if the state is disabled, it stops executing the script.
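A minimal sketch of that heartbeat/state pattern (paths and intervals are placeholders):

// First file: the 24/7 loop.
$heartbeat = '/tmp/worker_heartbeat.txt';
$control   = '/tmp/worker_state.txt'; // contains "active" or "disabled"

while (true) {
    file_put_contents($heartbeat, time()); // record "I'm alive"
    if (trim((string) @file_get_contents($control)) === 'disabled') {
        break; // the second file asked us to stop
    }
    // ... do one unit of work ...
    sleep(600); // every 10 minutes
}

// Second file: treat the worker as dead if no heartbeat for 15 minutes.
$last  = (int) @file_get_contents('/tmp/worker_heartbeat.txt');
$alive = (time() - $last) < 900;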
The easiest approach might be to save a timestamped status in memcache or another shared location, and have the other PHP script check that status timestamp.
It's easy to kill an Apache process and hit the page again; that will restart the script. Or you can add a signal handler to restart on SIGHUP.
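A sketch of such a handler (requires the pcntl extension, which is generally available to CLI PHP but not under Apache mod_php):

pcntl_signal(SIGHUP, function () {
    // re-read config here, or exit so a supervisor restarts the script
    exit(0);
});

while (true) {
    // ... work ...
    pcntl_signal_dispatch(); // deliver any pending signals
    sleep(5);
}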
You cannot check whether a specific file is running; you have to check whether a process is still running that file. That also means this isn't something Apache can do for you. You'll either have to:
1) send an OS-dependent kill signal to the process running the script, or
2) check for a stop signal from inside the script.
The former requires a lot of privileges on the server, so it's probably easier to do the second.
The easiest way is for the running file to write to a database or file somewhere to signify that it's still going, and to read the same location for a stop signal every so often. If the running process sees a stop signal, it can simply break out of whatever loop is keeping it going.
The second script can set the signal at whatever point and for whatever reason, and the other script will terminate shortly after.
If the first script terminates, have it write a file to disk called error.txt, and make the second script check for it every minute or so. Once the second script spots an error.txt file, that is its signal to restart the first script. A more real-time approach would be to use a database with the current timestamp.
I have a PHP script that runs on a shared hosting environment server. This PHP script takes a long time to run. It may take 20 to 30 minutes to finish a run. It is a recurring background process. However I do not have control over when the process starts (it could be triggered every five minutes, or every three hours, no one knows).
Anyway, at the beginning of this script I would like to detect whether the previous run is still going. If the earlier run has not finished, I would not run the script again; if it is not running, I run the new process.
In other words, here is some pseudo code. Let's call the script abc.php:
1. Start script abc.php.
2. Check if an older instance of abc.php is still running. If it is, terminate.
3. If it is not running, continue with abc.php and do your work, which might take 30 minutes or more.
How can I do that? Please keep in mind this is shared hosting.
UPDATE: I was thinking of using a DB detection mechanism. When the script starts it would set a value in the DB, STARTED=TRUE, and when done it would set STARTED=FALSE. However this solution is not robust, because there is no guarantee that the script will terminate properly; it might get interrupted and therefore never update STARTED to FALSE. So the DB solution is out of the question. It has to be process detection of some sort, or maybe a different solution that I did not think of. Thanks.
If this is a CGI process, I would try using exec + ps, if the latter is available in your environment. A quick SO search turns up this answer: https://stackoverflow.com/a/7182595/177920
You'll need a script that is responsible for (and separate from) checking whether your target script is running, of course; otherwise you'll always see that your target script is running, given the order of operations in your pseudo code.
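A sketch of that exec + ps check (unix-only, and it assumes exec() is allowed on the shared host):

// The [a] trick stops grep from matching its own process entry.
exec("ps aux | grep '[a]bc.php'", $lines);
if (count($lines) > 1) { // one entry is the current process itself
    exit("abc.php is already running\n");
}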
You can implement a simple locking mechanism: create a tmp lock file when the script starts, and check beforehand whether the lock file already exists. If it does, don't run the script; if it doesn't, create the lock file and run the script. At the end of a successful run, delete the lock file so that everything works properly next time.
if (!locked()) {
    lock();
    // your code here
    unlock();
} else {
    echo "script already running";
}

function lock()   { file_put_contents("write.lock", 'running'); }
function locked() { return file_exists("write.lock"); }
function unlock() { return unlink("write.lock"); }
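One caveat: the file-existence check above has exactly the stale-lock problem the UPDATE worries about, since a crash leaves write.lock behind. A variant built on flock() avoids that, because the OS releases the lock automatically when the process dies (a sketch; the lock path is a placeholder):

$fp = fopen('/tmp/abc.lock', 'c'); // 'c' creates the file if it doesn't exist
if (!$fp || !flock($fp, LOCK_EX | LOCK_NB)) {
    exit("script already running\n");
}

// ... your 30-minute work here ...

flock($fp, LOCK_UN);
fclose($fp);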
If I run shell_exec(php file), will it start the shell execution and continue with the calling PHP file, or will it try to complete everything in the shell-executed PHP file first, and only then run the rest of the PHP file that executed it?
It will complete the shell execution first, and then it will run the rest of the code in the PHP file.
shell_exec(), as stated in the documentation, returns the complete output as a string, so it has to be a "blocking" function. That means it will block the execution of the rest of your code until the command is complete.
Depending on the command you want to execute, you may want to force the process to run in the background by adding the & character at the end of the command. This is assuming, of course, that you are running on a unix-based server.
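A sketch of both behaviours (paths are placeholders; the background form assumes a unix shell):

// Blocking: waits for other.php to finish and returns its output.
$output = shell_exec('php /path/to/other.php');

// Non-blocking: detach with & and discard output, so shell_exec
// returns immediately and this script carries on.
shell_exec('php /path/to/other.php > /dev/null 2>&1 &');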