I have a cron file, cron/cron1.php, which I have set up to run every minute, so the next run starts one minute after the previous one.
Now I want to run this file three times in parallel each minute, but the file takes more than 2 minutes to execute.
Can I run it in parallel from a single file like this?
file1.php
<?php
include("cron/cron1.php"); // run separately
sleep(5);
include("cron/cron1.php"); // run separately
sleep(5);
include("cron/cron1.php"); // run separately
?>
In the file above each cron1.php starts 5 seconds after the previous one, but only once the previous include has finished its process. As I said, each cron1.php takes more than 2 minutes to complete, so I couldn't achieve what I want this way.
Is there any process, multithreading, or other approach that lets me start each cron1.php with a 5-second delay? I would then set file1.php up as the cron job.
PHP DOES SUPPORT MULTI-THREADING
http://php.net/pthreads
Here is a multi-threaded example of the kind of logic you require:
<?php
define("SECOND", 1000000);
define("LOG", Mutex::create());
/*
* Log safely to stdout
* @param string message the format string for log
* @param ... args the arguments for sprintf
* @return void
*/
function slog($message, $args = []) {
$args = func_get_args();
if ((count($args) > 0) &&
($message = array_shift($args))) {
$time = microtime(true);
Mutex::lock(LOG);
echo vsprintf(
"{$time}: {$message}\n", $args);
Mutex::unlock(LOG);
}
}
class MyTask extends Thread {
public $id;
public $done;
public function __construct($id) {
$this->id = $id;
$this->done = false;
}
public function run() {
slog("%s#%d entered ...", __CLASS__, $this->id);
/* don't use sleep in threads */
$this->synchronized(function(){
/* simulate some work */
$this->wait(10 * SECOND);
});
slog("%s#%d leaving ...", __CLASS__, $this->id);
$this->done = true;
}
}
$threads = [];
function get_next_id(&$threads) {
foreach ($threads as $id => $thread) {
if ($thread->done) {
return $id;
}
}
return count($threads);
}
do {
slog("Main spawning ...");
$id = get_next_id($threads);
$threads[$id] = new MyTask($id);
$threads[$id]->start();
slog("Main sleeping ...");
usleep(5 * SECOND);
} while (1);
?>
This will spawn a new thread every 5 seconds; the threads each take 10 seconds to execute.
You should try to find ways of increasing the speed of individual tasks, perhaps by sharing some common set of data.
What you could do is run multiple processes at the same time, with something like this:
exec('php cron/cron1.php > /dev/null 2>&1 &');
exec('php cron/cron1.php > /dev/null 2>&1 &');
exec('php cron/cron1.php > /dev/null 2>&1 &');
Each exec call will run in the background so you can have as many as needed.
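To get the 5-second offsets from the original question, a minimal file1.php sketch along these lines (assuming the cron/cron1.php path from the question and that the php binary is on the PATH) could be:
<?php
// Launch three detached cron1.php processes, 5 seconds apart.
// Each exec() returns immediately because of the trailing "&".
exec('php cron/cron1.php > /dev/null 2>&1 &');
sleep(5);
exec('php cron/cron1.php > /dev/null 2>&1 &');
sleep(5);
exec('php cron/cron1.php > /dev/null 2>&1 &');
?>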
I want to add a cron job dynamically once a user installs a PHP application on their server, like an admin configuration step: I need to set the cron job from PHP while the user goes through the configuration settings.
I am using CodeIgniter to set the cron job. I also added the same job manually from cPanel and it works fine.
You can create a file in
/etc/cron.d
and use PHP's file_put_contents() to update the file. However, you'll need to elevate PHP or Apache's permissions (depending on your PHP handler). This isn't recommended since it leads to security issues.
If you're using Cron to run PHP scripts, then you can call the scripts directly using PHP every X amount of time. Create a variable in your database representing the last time your script was run. If X amount of time passed since the script was run, then you run the script and update the variable.
If the script takes a long time to execute, then use PHP to run it in a separate process, so the user doesn't have to wait for it to finish.
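A minimal sketch of that last-run check (the settings table, column names and the $db PDO connection here are made up for illustration):
<?php
// Hypothetical pseudo-cron: run the task at most once per hour,
// based on a last_run timestamp stored in the database.
$interval = 3600; // seconds between runs
$lastRun  = (int) $db->query("SELECT value FROM settings WHERE name = 'last_run'")->fetchColumn();
if (time() - $lastRun >= $interval) {
    $db->exec("UPDATE settings SET value = " . time() . " WHERE name = 'last_run'");
    // start the long-running script in its own process so the visitor doesn't wait
    exec('php /path/to/long_task.php > /dev/null 2>&1 &');
}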
Thanks
First you have to create a text file, for example crontab.txt.
Then write the job into it from PHP, like below:
exec ( 'sudo crontab -u apache -r' ); // remove the existing crontab for the apache user
file_put_contents ( '/var/www/html/YOUR_PROJECT/crontab.txt', '25 15 * * * php /var/www/html/YOUR_PROJECT/YOUR_CONTROLLER/YOUR_METHOD'.PHP_EOL );
And lastly, you have to install that file as the crontab, like this:
exec ( 'crontab /var/www/html/YOUR_PROJECT/crontab.txt' );
Hope this will help you.
You can do this from PHP using shell_exec():
shell_exec('echo "25 15 * * * <path to php> /var/www/cronjob/helloworld.php > /var/www/cronjob/cron.log" | crontab -')
Or you can use this class:
class Crontab {
// In this class, array instead of string would be the standard input / output format.
// Legacy way to add a job:
// $output = shell_exec('(crontab -l; echo "'.$job.'") | crontab -');
static private function stringToArray($jobs = '') {
$array = explode("\r\n", trim($jobs)); // trim() gets rid of the last \r\n
foreach ($array as $key => $item) {
if ($item == '') {
unset($array[$key]);
}
}
return $array;
}
static private function arrayToString($jobs = array()) {
$string = implode("\r\n", $jobs);
return $string;
}
static public function getJobs() {
$output = shell_exec('crontab -l');
return self::stringToArray($output);
}
static public function saveJobs($jobs = array()) {
$output = shell_exec('echo "'.self::arrayToString($jobs).'" | crontab -');
return $output;
}
static public function doesJobExist($job = '') {
$jobs = self::getJobs();
if (in_array($job, $jobs)) {
return true;
} else {
return false;
}
}
static public function addJob($job = '') {
if (self::doesJobExist($job)) {
return false;
} else {
$jobs = self::getJobs();
$jobs[] = $job;
return self::saveJobs($jobs);
}
}
static public function removeJob($job = '') {
if (self::doesJobExist($job)) {
$jobs = self::getJobs();
unset($jobs[array_search($job, $jobs)]);
return self::saveJobs($jobs);
} else {
return false;
}
}
}
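A brief usage sketch for the class above (the schedule line is just an example):
// Add a job if it isn't already installed, list the crontab, then remove it again.
$job = '30 2 * * * /usr/bin/php /var/www/cronjob/helloworld.php';
if (!Crontab::doesJobExist($job)) {
    Crontab::addJob($job);
}
print_r(Crontab::getJobs()); // everything currently in the crontab
Crontab::removeJob($job);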
Hi, thank you for the replies; based on them I got my answer.
I added a new cron job to the crontab file using PHP (CodeIgniter).
The following is my answer:
$phppath = exec('which php');
$user_file_path = getcwd();
$cronjob1 = "0 0 1 * * $phppath $user_file_path/index.php automatic_updates/leave_update_cron";
// run each cron job
// get all current cron jobs
$output = shell_exec('crontab -l');
// add our new job
file_put_contents('/tmp/crontab.txt', $output.$cronjob1.PHP_EOL);
// once the job is appended, install the new crontab file
exec('crontab /tmp/crontab.txt');
This will add a new cron job without deleting anything in the current crontab file.
I have a PHP script that is used to query an API and download some JSON information and insert it into a MySQL database; we'll call this scriptA.php. I need to run this script multiple times a minute, preferably as many times a minute as I can without allowing two instances to run at the same exact time or with any overlap. My solution to this has been to create scriptB.php and put it on a one minute cron job. Here is the source code of scriptB.php...
function next_run()
{
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "http://somewebsite.com/scriptA.php");
curl_exec($curl);
curl_close($curl);
unset($curl);
}
$i = 0;
$times_to_run = 7;
$function = array();
while ($i++ < $times_to_run) {
$function = next_run();
sleep(3);
}
My question at this point is how cURL performs when used in a loop: does this code trigger scriptA.php and THEN, once it has finished loading, start the next cURL request? Does the 3 second sleep even make a difference, or will this literally run as fast as the time it takes each cURL request to complete? My objective is to time this script and run it as many times as possible in a one minute window without two iterations of it being run at the same time. I don't want to include the sleep statement if it is not needed. I believe what happens is that cURL will run each request upon finishing the last; if I am wrong, is there some way that I can instruct it to do this?
"Preferably as many times in a minute as I can without allowing two instances to run at the same exact time or with any overlap" - then you shouldn't use a cronjob at all, you should use a daemon. But if you for some reason have to use a cronjob (e.g. if you're on a shared webhosting platform that doesn't allow daemons), I guess you could use the sleep hack to run the same code several times a minute:
* * * * * /usr/bin/php /path/to/scriptA.php
* * * * * sleep 10; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 20; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 30; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 40; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 50; /usr/bin/php /path/to/scriptA.php
That should make it execute every 10 seconds.
As for making sure it doesn't run in parallel if the previous execution hasn't finished yet, add this to the start of scriptA:
call_user_func ( function () {
static $lock;
$lock = fopen ( __FILE__, "rb" );
if (! flock ( $lock, LOCK_EX | LOCK_NB )) {
// failed to get a lock, probably means another instance is already running
die ();
}
register_shutdown_function ( function () use (&$lock) {
flock ( $lock, LOCK_UN );
} );
} );
and it will just die() if another instance of scriptA is already running. However, if you want it to wait for the previous execution to finish instead of just exiting, remove LOCK_NB. That could be dangerous, though: if a majority of the executions take more than 10 seconds, you'll have more and more processes waiting for the previous execution to finish, until you run out of RAM.
As for your cURL questions:
"Does this code trigger scriptA.php and THEN, once it has finished loading, start the next cURL request?" - that is correct: curl waits until the page has completely loaded, which usually means the entire scriptA has completed. (You can tell scriptA to finish the page load prematurely with the fastcgi_finish_request() function if you really want, but that's unusual.)
"Does the 3 second sleep even make a difference, or will this literally run as fast as the time it takes each cURL request to complete?" - yes, the sleep will make the loop 3 seconds slower per iteration.
"My objective is to time this script and run it as many times as possible in a one minute window without two iterations of it being run at the same time." - then make it a daemon that never exits, rather than a cronjob.
"I don't want to include the sleep statement if it is not needed." - it's not needed.
"I believe what happens is that cURL will run each request upon finishing the last." - this is correct.
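If you did want scriptA.php to return to the cURL caller immediately and keep working in the background of the same request, a minimal sketch (this relies on running under PHP-FPM, where fastcgi_finish_request() is available) would be:
<?php
// scriptA.php sketch: end the page load for the caller, then keep working.
echo "accepted\n";
fastcgi_finish_request(); // the cURL request in scriptB.php returns here
// ... the actual API download / MySQL insert work continues below ...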
I need to run this script multiple times a minute, preferably as many times in a minute as I can without allowing two instances to run
You're in luck, as I wrote a class to handle just such a thing. You can find it on my GitHub here:
https://github.com/ArtisticPhoenix/MISC/blob/master/ProcLock.php
I'll also copy the full code at the end of this post.
The basic idea is to create a file; I will call it afile.lock for this example. In this file is recorded the PID (the process ID) of the current process that is run by cron. Then when cron attempts to run the process again, it checks this lock file and sees if there is a PHP process running that is using this PID.
If there is, it updates the modified time of the file (and throws an exception).
If there is not, then you are free to create a new instance of the "worker".
As a bonus, the modified time of the lock file can be used by the script (whose PID we are tracking) as a way of shutting down in the event the file is not updated. So, for example, if cron is stopped or the lock file is manually deleted, you can set it up in such a way that the running script will detect this and self-destruct.
So not only can you keep multiple instances from running, you can tell the current instance to die if cron is turned off.
The basic usage is as follows. In the cron file that starts up the "worker"
//define a lock file (this is actually optional)
ProcLock::setLockFile(__DIR__.'/afile.lock');
try{
//if you didn't set a lock file you can pass it in with this method call
ProcLock::lock();
//execute your process
}catch(\Exception $e){
if($e->getCode() == ProcLock::ALREADY_LOCKED){
//just exit or what have you
}else{
//some other exception happened.
}
}
It's basically that easy.
Then, in the running process, you can check every so often (for example if you have a loop that runs something):
$expires = 90; //1 1/2 minute (you may need a bit of fudge time)
foreach($something as $a=>$b){
$lastAccess = ProcLock::getLastAccess();
if(false == $lastAccess || $lastAccess + $expires < time()){
//if last access is false (no lock file)
//or last access + expiration, is less then the current time
//log something like killed by lock timeout
exit();
}
}
Basically what this says is that either the lock file was deleted while the process was running, or cron failed to update it before the expiration time. Here we are giving it 90 seconds, and cron should be updating the lock file every 60 seconds. As I said, the lock file's mtime (modified time) is refreshed automatically: when cron calls lock() and canLock() finds the previous process still running, canLock() runs touch($lockfile), which updates the mtime.
Obviously you can only self-kill the process in this way if it is actively checking the access and expiration times.
This script is designed to work on both Windows and Linux. On Windows, under certain circumstances, the lock file won't properly be deleted (sometimes when hitting Ctrl+C in the CMD window); however, I have taken great pains to make sure this does not happen, so the class file contains a custom register_shutdown_function that runs when the PHP script ends.
When running something using the ProcLock in the browser, please note that the process ID will always be the same no matter which tab it's run in. So if you open one tab that is process-locked, then open another tab, the process locker will see it as the same process and allow it to lock again. To properly run it in a browser and test the locking, it must be done using two separate browsers, such as Chrome and Firefox. It's not really intended to be run in the browser, but this is one quirk I noticed.
One last note: this class is completely static, as you can have only one process ID per running process, which should be obvious.
The tricky parts are
making sure the lock file is disposed of in the event of even critical PHP failures
making sure another process didn't pick up the PID number when it was freed from PHP. This can be done with relative accuracy: we can tell if a PHP process is using it, and if so we assume it's the process we need. There is much less chance that a re-used PID would show up for another process very quickly, and even less that it would be another PHP process
making all this work on both Linux and Windows
Lucky for you, I have already invested sufficient time in this to do all these things. This is a more generic version of an original lock script I made for my job, which we have used successfully in this way for 3 years to maintain control over various synchronous cron jobs: everything from sFTP upload scanning and expired-file cleanup to RabbitMQ message workers that run for an indefinite period of time.
In any case, here is the full code; enjoy.
<?php
/*
(c) 2017 ArtisticPhoenix
For license information please view the LICENSE file included with this source code GPL3.0.
Process Locker
==================================================================
This is a pseudo implementation of mutex since php does not have
any thread synchronization objects
This class uses files to provide locking functionality.
Lock will be released in following cases
1 - user calls unlock
2 - when this lock object gets deleted
3 - when request or script ends
4 - when pid of lock does not match self::$_pid
==================================================================
Only one Lock per Process!
-note- when running in a browser typically all tabs will have the same PID
so the locking will not be able to tell if it's the same process, to get
around this run in CLI, or use 2 different browsers, so the PID numbers are different.
This class is static for the simple fact that locking is done per-process, so there is no need
to ever have duplicate ProcLocks within the same process
---------------------------------------------------------------
*/
final class ProcLock {
/**
* exception code numbers
* @var int
*/
const DIRECTORY_NOT_FOUND = 2000;
const LOCK_FIRST = 2001;
const FAILED_TO_UNLOCK = 2002;
const FAILED_TO_LOCK = 2003;
const ALREADY_LOCKED = 2004;
const UNKNOWN_PID = 2005;
const PROC_UNKNOWN_PID = 2006;
/**
* process _key
* @var string
*/
protected static $_lockFile;
/**
*
* @var int
*/
protected static $_pid;
/**
* No construction allowed
*/
private function __construct(){}
/**
* No clones allowed
*/
private function __clone(){}
/**
* globally sets the lock file
* @param string $lockFile
*/
public static function setLockFile( $lockFile ){
$dir = dirname( $lockFile );
if( !is_dir( dirname( $lockFile ))){
throw new Exception("Directory {$dir} not found", self::DIRECTORY_NOT_FOUND); //pid directory invalid
}
self::$_lockFile = $lockFile;
}
/**
* return global lockfile
*/
public static function getLockFile() {
return ( self::$_lockFile ) ? self::$_lockFile : false;
}
/**
* safe check for local or global lock file
*/
protected static function _chk_lock_file( $lockFile = null ){
if( !$lockFile && !self::$_lockFile ){
throw new Exception("Lock first", self::LOCK_FIRST); //
}elseif( $lockFile ){
return $lockFile;
}else{
return self::$_lockFile;
}
}
/**
*
* @param string $lockFile
*/
public static function unlock( $lockFile = null ){
if( !self::$_pid ){
//no pid stored - not locked for this process
return;
}
$lockFile = self::_chk_lock_file($lockFile);
if(!file_exists($lockFile) || unlink($lockFile)){
return true;
}else{
throw new Exception("Failed to unlock {$lockFile}", self::FAILED_TO_UNLOCK ); //no lock file exists to unlock or no permissions to delete file
}
}
/**
*
* @param string $lockFile
*/
public static function lock( $lockFile = null ){
$lockFile = self::_chk_lock_file($lockFile);
if( self::canLock( $lockFile )){
self::$_pid = getmypid();
if(!file_put_contents($lockFile, self::$_pid ) ){
throw new Exception("Failed to lock {$lockFile}", self::FAILED_TO_LOCK ); //no permission to create pid file
}
}else{
throw new Exception('Process is already running[ '.$lockFile.' ]', self::ALREADY_LOCKED );//there is a process running with this pid
}
}
/**
*
* @param string $lockFile
*/
public static function getPidFromLockFile( $lockFile = null ){
$lockFile = self::_chk_lock_file($lockFile);
if(!file_exists($lockFile) || !is_file($lockFile)){
return false;
}
$pid = file_get_contents($lockFile);
return intval(trim($pid));
}
/**
*
* @return number
*/
public static function getMyPid(){
return ( self::$_pid ) ? self::$_pid : false;
}
/**
*
* @param string $lockFile
* @param string $myPid
* @throws Exception
*/
public static function validatePid($lockFile = null, $myPid = false ){
$lockFile = self::_chk_lock_file($lockFile);
if( !self::$_pid && !$myPid ){
throw new Exception('no pid supplied', self::UNKNOWN_PID ); //no stored or injected pid number
}elseif( !$myPid ){
$myPid = self::$_pid;
}
return ( $myPid == self::getPidFromLockFile( $lockFile ));
}
/**
* update the mtime of lock file
* @param string $lockFile
*/
public static function canLock( $lockFile = null){
if( self::$_pid ){
throw new Exception("Process was already locked", self::ALREADY_LOCKED ); //process was already locked - call this only before locking
}
$lockFile = self::_chk_lock_file($lockFile);
$pid = self::getPidFromLockFile( $lockFile );
if( !$pid ){
//if there is not a pid then there is no lock file and it's ok to lock it
return true;
}
//validate the pid in the existing file
$valid = self::_validateProcess($pid);
if( !$valid ){
//if it's not valid - delete the lock file
if(unlink($lockFile)){
return true;
}else{
throw new Exception("Failed to unlock {$lockFile}", self::FAILED_TO_UNLOCK ); //no lock file exists to unlock or no permissions to delete file
}
}
//if there was a valid process running return false, we cannot lock it.
//update the lock file's mtime - this is useful for a heartbeat, a periodic keepalive script.
touch($lockFile);
return false;
}
/**
*
* @param string $lockFile
*/
public static function getLastAccess( $lockFile = null ){
$lockFile = self::_chk_lock_file($lockFile);
clearstatcache( true, $lockFile );
if( file_exists( $lockFile )){
return filemtime( $lockFile );
}
return false;
}
/**
*
* @param int $pid
*/
protected static function _validateProcess( $pid ){
$task = false;
$pid = intval($pid);
if(stripos(php_uname('s'), 'win') > -1){
$task = shell_exec("tasklist /fi \"PID eq {$pid}\"");
/*
'INFO: No tasks are running which match the specified criteria.
'
*/
/*
'
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
php.exe 5064 Console 1 64,516 K
'
*/
}else{
$cmd = "ps ".intval($pid);
$task = shell_exec($cmd);
/*
' PID TTY STAT TIME COMMAND
'
*/
}
//print_rr( $task );
if($task){
return ( preg_match('/php|httpd/', $task) ) ? true : false;
}
throw new Exception("pid detection failed {$pid}", self::PROC_UNKNOWN_PID); //failed to parse the pid look up results
//this has been tested on CentOs 5,6,7 and windows 7 and 10
}
/**
* destroy a lock ( safe unlock )
*/
public static function destroy($lockFile = null){
try{
$lockFile = self::_chk_lock_file($lockFile);
self::unlock( $lockFile );
}catch( Exception $e ){
//ignore errors here - this is called from destruction so we don't care if it fails or succeeds
//generally a new process will be able to tell if the pid is still in use so
//this is just a cleanup process
}
}
}
/*
* register our shutdown handler - if the script dies, unlock the lock
* this is superior to __destruct(), because the shutdown handler runs even in situations where PHP exhausts all memory
*/
register_shutdown_function(array('ProcLock', 'destroy'));
I am using the Symfony 3 framework with the pheanstalk PHP library. I run the app on a server with Debian Jessie. The job creation works fine, and if I run the worker from the terminal it works like it should. But when I add the command to crontab I see that the command does not work. There is no log in /var/mail/user to help me with debugging. I will be very glad for any help.
This is my worker command (only the execute function):
protected function execute(InputInterface $input, OutputInterface $output)
{
$output->writeln("\n<info>Beanstalk worker service started</info>");
$tubes = $this->getContainer()->get('app.job_manager.power_plant')->getTubes();
if ($input->getOption('one-check')) {
// run once
foreach ($tubes as $tubeName) {
if ($tubeName != "default") {
$this->getContainer()->get('app.queue_manager')->fetchQueue($tubeName);
}
}
$output->writeln("\n<info>Beanstalk worker completed check and stopped</info>");
} else {
// run forever
set_time_limit(0);
ini_set('max_execution_time', 0); // max_execution_time() is not a function; set the ini value instead
while (1) {
foreach ($tubes as $tubeName) {
if ($tubeName != "default") {
$this->getContainer()->get('app.queue_manager')->fetchQueue($tubeName);
}
}
}
$output->writeln("\n<info>Beanstalk worker stopped</info>");
}
}
This is my app.queue_manager function to get a job from the queue and run the command:
public function fetchQueue($tubeName)
{
if ($this->pheanstalk->getConnection()->isServiceListening()) {
while (true === is_object($job = $this->pheanstalk->watch($tubeName)->ignore('default')->reserve(self::WATCH_TIMEOUT))) {
$data = json_decode($job->getData(), true);
$this->worker($data);
$this->pheanstalk->delete($job);
}
}
}
And the worker that runs the command:
public function worker($data)
{
$application = new Application($this->kernel);
$application->setAutoExit(false);
$parameters = [];
$parameters['command'] = $data['command'];
foreach ($data['meta'] as $key => $param) {
$parameters[$key] = $param;
}
$input = new ArrayInput($parameters);
$output = new NullOutput();
return $application->run($input, $output);
}
This is my crontab entry that does not work:
@reboot /usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start
I created another crontab entry that works OK. It runs every 15 minutes, and the difference is that it does not have the infinite loop (while(1)), so it goes through the tubes only once and then finishes. But that is not what I want. I want the infinite loop that runs all the time, as in the first crontab entry:
*/15 * * * * /usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check
If it works in the console but not from cron, it's probably a file/user permissions problem. Check that.
You can have any error reported to you by email with this job:
*/15 * * * * /usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check 2>&1 | mail -s "mysql_dump" example@mail.example
I used rc.local. It solved the problem. Thanks, FreudianSlip.
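For reference, the rc.local approach just starts the worker once at boot instead of via cron. A sketch (assuming the paths from the question and a hypothetical log location) could go in /etc/rc.local before the final exit 0:
# start the beanstalk worker in the background at boot
/usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start > /var/log/beanstalk-worker.log 2>&1 &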
I have one php script, and I am executing this script via cron every 10 minutes on CentOS.
The problem is that if the cron job takes more than 10 minutes, another instance of the same cron job will start.
I tried one trick, that is:
Created one lock file with PHP code (much like a PID file) when the cron job started.
Removed the lock file with PHP code when the job finished.
And when any new cron job started executing the script, I checked whether the lock file exists and, if so, aborted the script.
But there can be one problem: the lock file might not get deleted by the script for some reason.
Then the cron job will never run again.
Is there any way I can stop a cron job from executing again if it is already running, with Linux commands or something similar?
Advisory locking is made for exactly this purpose.
You can accomplish advisory locking with flock(). Simply apply the function to a previously opened lock file to determine if another script has a lock on it.
$f = fopen('lock', 'w') or die ('Cannot create lock file');
if (flock($f, LOCK_EX | LOCK_NB)) {
// yay
}
In this case I'm adding LOCK_NB to prevent the next script from waiting until the first has finished. Since you're using cron there will always be a next script.
If the current script prematurely terminates, any file locks will get released by the OS.
Maybe it is better to not write code if you can configure it:
https://serverfault.com/questions/82857/prevent-duplicate-cron-jobs-running
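One configuration-only option along those lines (assuming util-linux's flock(1) utility is installed, as it is on most Linux distributions) is to wrap the command in the crontab entry itself:
*/10 * * * * /usr/bin/flock -n /tmp/myjob.lock /usr/bin/php /path/to/script.php
The -n flag makes flock give up immediately if another run still holds the lock, so overlapping runs are simply skipped.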
flock() worked out great for me - I have a cron job with database requests scheduled every 5 minutes, so not having several running at the same time is crucial. This is what I did:
$filehandle = fopen("lock.txt", "c+");
if (flock($filehandle, LOCK_EX | LOCK_NB)) {
// code here to start the cron job
flock($filehandle, LOCK_UN); // don't forget to release the lock
} else {
// throw an exception here to stop the next cron job
}
fclose($filehandle);
In case you don't want to kill the next scheduled cron job, but simply pause it till the running one is finished, then just omit the LOCK_NB:
if (flock($filehandle, LOCK_EX))
This is a very common problem with a very simple solution: cronjoblock, a simple 8-line shell-script wrapper that applies locking using flock:
https://gist.github.com/coderofsalvation/1102e56d3d4dcbb1e36f
By the way, cronjoblock also reverses cron's spammy email behaviour: it only outputs something if stuff goes wrong. This is handy with respect to cron's MAILTO variable: the stdout/stderr output will be suppressed (so cron will not send mails) unless the given process has an exit code > 0.
Note that as of PHP 5.3.2 the automatic unlocking when the file's resource handle is closed was removed, so with flock() the unlocking now always has to be done manually.
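So with the snippets above, on those PHP versions, release the lock yourself before closing the handle; a minimal sketch:
flock($f, LOCK_UN); // release the advisory lock explicitly
fclose($f);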
I use this:
<?php
// Create a PID file
if (is_file (dirname ($_SERVER['SCRIPT_NAME']) . "/.processing")) { die (); }
file_put_contents (dirname ($_SERVER['SCRIPT_NAME']) . "/.processing", "processing");
// SCRIPT CONTENTS GOES HERE //
@unlink (dirname ($_SERVER['SCRIPT_NAME']) . "/.processing");
?>
#!/bin/bash
ps -ef | grep -v grep | grep capture_12hz_sampling_track.php
if [ $? -eq 1 ];
then
nohup /usr/local/bin/php /opt/Apache/htdocs/cmsmusic_v2/script/Mp3DownloadProcessMp4/capture_12hz_sampling_track.php &
else
echo "Already running"
fi
Another alternative:
<?php
/**
* Lock manager to ensure our cron doesn't run twice at the same time.
*
* Inspired by the lock mechanism in Mage_Index_Model_Process
*
* Usage:
*
* $lock = Mage::getModel('stcore/cron_lock');
*
* if (!$lock->isLocked()) {
* $lock->lock();
* // Do your stuff
* $lock->unlock();
* }
*/
class ST_Core_Model_Cron_Lock extends Varien_Object
{
/**
* Process lock properties
*/
protected $_isLocked = null;
protected $_lockFile = null;
/**
* Get lock file resource
*
* @return resource
*/
protected function _getLockFile()
{
if ($this->_lockFile === null) {
$varDir = Mage::getConfig()->getVarDir('locks');
$file = $varDir . DS . 'stcore_cron.lock';
if (is_file($file)) {
$this->_lockFile = fopen($file, 'w');
} else {
$this->_lockFile = fopen($file, 'x');
}
fwrite($this->_lockFile, date('r'));
}
return $this->_lockFile;
}
/**
* Lock process without blocking.
* This method allows protecting against multiple processes running, and provides fast lock validation.
*
* @return Mage_Index_Model_Process
*/
public function lock()
{
$this->_isLocked = true;
flock($this->_getLockFile(), LOCK_EX | LOCK_NB);
return $this;
}
/**
* Lock and block process.
* If a new instance of the process tries to validate the locking state,
* the script will wait until the process is unlocked
*
* @return Mage_Index_Model_Process
*/
public function lockAndBlock()
{
$this->_isLocked = true;
flock($this->_getLockFile(), LOCK_EX);
return $this;
}
/**
* Unlock process
*
* @return Mage_Index_Model_Process
*/
public function unlock()
{
$this->_isLocked = false;
flock($this->_getLockFile(), LOCK_UN);
return $this;
}
/**
* Check if process is locked
*
* @return bool
*/
public function isLocked()
{
if ($this->_isLocked !== null) {
return $this->_isLocked;
} else {
$fp = $this->_getLockFile();
if (flock($fp, LOCK_EX | LOCK_NB)) {
flock($fp, LOCK_UN);
return false;
}
return true;
}
}
/**
* Close file resource if it was opened
*/
public function __destruct()
{
if ($this->_lockFile) {
fclose($this->_lockFile);
}
}
}
Source: https://gist.github.com/wcurtis/9539178
I was running a php cron job script that dealt specifically with sending text messages using an existing API. On my local box the cron job was working fine, but on my customer's box it was sending double messages. Although this doesn't make sense to me, I double checked the permissions for the folder responsible for sending messages and the permission was set to root. Once I set the owner as www-data (Ubuntu) it started behaving normally.
This might not be the issue for you, but if it's a simple cron script I would double-check the permissions.
Is there a way you can abort a block of code if it's taking too long in PHP? Perhaps something like:
//Set the max time to 2 seconds
$time = new TimeOut(2);
$time->startTime();
sleep(3);
$time->endTime();
if ($time->timeExpired()){
echo 'This function took too long to execute and was aborted.';
}
It doesn't have to be exactly like above, but are there any native PHP functions or classes that do something like this?
Edit: Ben Lee's answer with pcntl_fork would be the perfect solution, except that it's not available for Windows. Is there any other way to accomplish this with PHP that works for Windows and Linux, but doesn't require an external library?
Edit 2: XzKto's solution works in some cases, but not consistently and I can't seem to catch the exception, no matter what I try. The use case is detecting a timeout for a unit test. If the test times out, I want to terminate it and then move on to the next test.
You can do this by forking the process, and then using the parent process to monitor the child process. pcntl_fork is a method that forks the process, so you have two nearly identical programs in memory running in parallel. The only difference is that in one process, the parent, pcntl_fork returns a positive integer which corresponds to the process id of the child process. And in the other process, the child, pcntl_fork returns 0.
Here's an example:
$pid = pcntl_fork();
if ($pid == 0) {
// this is the child process
} else {
// this is the parent process, and we know the child process id is in $pid
}
That's the basic structure. Next step is to add a process expiration. Your stuff will run in the child process, and the parent process will be responsible only for monitoring and timing the child process. But in order for one process (the parent) to kill another (the child), there needs to be a signal. Signals are how processes communicate, and the signal that means "you should end immediately" is SIGKILL. You can send this signal using posix_kill. So the parent should just wait 2 seconds then kill the child, like so:
$pid = pcntl_fork();
if ($pid == 0) {
// this is the child process
// run your potentially time-consuming method
} else {
// this is the parent process, and we know the child process id is in $pid
sleep(2); // wait 2 seconds
posix_kill($pid, SIGKILL); // then kill the child
}
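A small variant of that parent branch (a sketch, not part of the original answer) avoids sleeping the full 2 seconds when the child finishes early, by polling with pcntl_waitpid() and WNOHANG:
$pid = pcntl_fork();
if ($pid == 0) {
    // child: run the potentially time-consuming method
} else {
    // parent: poll the child for up to 2 seconds instead of sleeping unconditionally
    $expires = time() + 2;
    while (time() < $expires && pcntl_waitpid($pid, $status, WNOHANG) == 0) {
        usleep(100000); // check roughly every 100 ms
    }
    if (pcntl_waitpid($pid, $status, WNOHANG) == 0) {
        posix_kill($pid, SIGKILL);    // still running after the timeout
        pcntl_waitpid($pid, $status); // reap the killed child
    }
}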
You can't really do that if your script pauses on one command (for example sleep()), other than by forking, but there are a lot of workarounds for special cases: asynchronous queries if your program pauses on a DB query, proc_open if your program pauses on some external execution, etc. Unfortunately they are all different, so there is no general solution.
If your script waits on a long loop/many lines of code, you can do a dirty trick like this:
declare(ticks=1);
class Timouter {
private static $start_time = false,
$timeout;
public static function start($timeout) {
self::$start_time = microtime(true);
self::$timeout = (float) $timeout;
register_tick_function(array('Timouter', 'tick'));
}
public static function end() {
unregister_tick_function(array('Timouter', 'tick'));
}
public static function tick() {
if ((microtime(true) - self::$start_time) > self::$timeout)
throw new Exception;
}
}
//Main code
try {
//Start timeout
Timouter::start(3);
//Some long code to execute that you want to set timeout for.
while (1);
} catch (Exception $e) {
Timouter::end();
echo "Timeouted!";
}
but I don't think it is very good. If you specify the exact case I think we can help you better.
This is an old question, and has probably been solved many times by now, but for people looking for an easy way to solve this problem, there is a library now: PHP Invoker.
You can use the declare construct with a tick function to abort when the execution time exceeds the limit: http://www.php.net/manual/en/control-structures.declare.php
Here is a code example of how to use it:
define("MAX_EXECUTION_TIME", 2); # seconds
$timeline = time() + MAX_EXECUTION_TIME;
function check_timeout()
{
if( time() < $GLOBALS['timeline'] ) return;
# timeout reached:
print "Timeout!".PHP_EOL;
exit;
}
register_tick_function("check_timeout");
$data = "";
declare( ticks=1 ){
# here the process that might require long execution time
sleep(5); // Comment this line to see this data text
$data = "Long process result".PHP_EOL;
}
# Ok, process completed, output the result:
print $data;
With this code you will see the timeout message.
If you want to get the long process result inside the declare block, you can just remove the sleep(5) line or increase the MAX_EXECUTION_TIME defined at the start of the script.
What about set_time_limit(), if you are not in safe mode?
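For completeness, a one-line sketch; note that on Linux set_time_limit() counts only actual script execution time, so sleep(), stream operations and database queries do not count toward the limit:
set_time_limit(2); // fatal error once 2 seconds of execution time are used up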
Cooked this up in about two minutes, I forgot to call $time->startTime(); so I don't really know exactly how long it took ;)
class TimeOut{
public function __construct($time=0)
{
$this->limit = $time;
}
public function startTime()
{
$this->old = microtime(true);
}
public function checkTime()
{
$this->new = microtime(true);
}
public function timeExpired()
{
$this->checkTime();
return ($this->new - $this->old > $this->limit);
}
}
And the demo.
I don't really get what your endTime() call does, so I made checkTime() instead, which also serves no real purpose but to update the internal values. timeExpired() calls it automatically because it would sure stink if you forgot to call checkTime() and it was using the old times.
You can also move the code that pauses into a 2nd script and execute it via a cURL call with a timeout set. The other obvious solution is to fix the cause of the pause.
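A minimal sketch of that idea (slow.php is a hypothetical script containing the code that may hang):
<?php
$ch = curl_init('http://localhost/slow.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 2); // give up after 2 seconds
$result = curl_exec($ch);
if ($result === false) {
    // timed out (or otherwise failed); curl_error($ch) tells you which
    echo 'This function took too long to execute and was aborted.';
}
curl_close($ch);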
Here is my way to do that, thanks to the other answers:
<?php
class Timeouter
{
private static $start_time = FALSE, $timeout;
/**
* @param integer $seconds Time in seconds
* @param null $error_msg
*/
public static function limit($seconds, $error_msg = NULL)
: void
{
self::$start_time = microtime(TRUE);
self::$timeout = (float) $seconds;
register_tick_function([ self::class, 'tick' ], $error_msg);
}
public static function end()
: void
{
unregister_tick_function([ self::class, 'tick' ]);
}
public static function tick($error)
: void
{
if ((microtime(TRUE) - self::$start_time) > self::$timeout) {
throw new \RuntimeException($error ?? 'You code took too much time.');
}
}
public static function step()
: void
{
usleep(1);
}
}
Then you can try it like this:
<?php
try {
//Start timeout
Timeouter::limit(2, 'You code is heavy. Sorry.');
//Some long code to execute that you want to set timeout for.
declare(ticks=1) {
foreach (range(1, 100000) as $x) {
Timeouter::step(); // Not always necessary
echo $x . "-";
}
}
Timeouter::end();
} catch (Exception $e) {
Timeouter::end();
echo $e->getMessage(); // 'You code is heavy. Sorry.'
}
I made a script in PHP using pcntl_fork and a lock file to control the execution of external calls, killing them after the timeout.
#!/usr/bin/env php
<?php
if(count($argv)<4){
print "\n\n\n";
print "./fork.php PATH \"COMMAND\" TIMEOUT\n"; // TIMEOUT IN SECS
print "Example:\n";
print "./fork.php /root/ \"php run.php\" 20";
print "\n\n\n";
die;
}
$PATH = $argv[1];
$LOCKFILE = $argv[1].$argv[2].".lock";
$TIMEOUT = (int)$argv[3];
$RUN = $argv[2];
chdir($PATH);
$fp = fopen($LOCKFILE,"w");
if (!flock($fp, LOCK_EX | LOCK_NB)) {
print "Already Running\n";
exit();
}
$tasks = [
"kill",
"run",
];
function killChilds($pid,$signal) {
exec("ps -ef| awk '\$3 == '$pid' { print \$2 }'", $output, $ret);
if($ret) return 'you need ps, grep, and awk';
foreach ($output as $t) { // each() is deprecated; iterate directly
if ( $t != $pid && $t != posix_getpid()) {
posix_kill($t, $signal);
}
}
}
$pidmaster = getmypid();
print "Add PID: ".(string)$pidmaster." MASTER\n";
foreach ($tasks as $task) {
$pid = pcntl_fork();
$pidslave = posix_getpid();
if($pidslave != $pidmaster){
print "Add PID: ".(string)$pidslave." ".strtoupper($task)."\n";
}
if ($pid == -1) {
exit("Error forking...\n");
}
else if ($pid == 0) {
execute_task($task);
exit();
}
}
while(pcntl_waitpid(0, $status) != -1);
echo "Do stuff after all parallel execution is complete.\n";
unlink($LOCKFILE);
function execute_task($task_id) {
global $pidmaster;
global $TIMEOUT;
global $RUN;
if($task_id=='kill'){
print("SET TIMEOUT = ". (string)$TIMEOUT."\n");
sleep($TIMEOUT);
print("FINISHED BY TIMEOUT: ". (string)$TIMEOUT."\n");
killChilds($pidmaster,SIGTERM);
die;
}elseif($task_id=='run'){
###############################################
### START EXECUTION CODE OR EXTERNAL SCRIPT ###
###############################################
system($RUN);
################################
### END ###
################################
killChilds($pidmaster,SIGTERM);
die;
}
}
Test Script run.php
<?php
$i=0;
while($i<25){
print "test... $i\n";
$i++;
sleep(1);
}