PHP cURL Timing Issue

I have a PHP script that queries an API, downloads some JSON, and inserts that information into a MySQL database; we'll call this scriptA.php. I need to run this script multiple times a minute, preferably as many times in a minute as I can without allowing two instances to run at the same exact time or with any overlap. My solution to this has been to create scriptB.php and put it on a one-minute cron job. Here is the source code of scriptB.php...
function next_run()
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, "http://somewebsite.com/scriptA.php");
    curl_exec($curl);
    curl_close($curl);
    unset($curl);
}
$i = 0;
$times_to_run = 7;

while ($i++ < $times_to_run) {
    next_run();
    sleep(3);
}
My question at this point is how cURL performs when used in a loop: does this code trigger scriptA.php and THEN, once it has finished loading, start the next cURL request? Does the 3-second sleep even make a difference, or will this literally run as fast as the time it takes each cURL request to complete? My objective is to time this script and run it as many times as possible in a one-minute window without two iterations of it running at the same time. I don't want to include the sleep statement if it is not needed. I believe what happens is that cURL will run each request upon finishing the last; if I am wrong, is there some way that I can instruct it to do this?

"preferably as many times in a minute as I can without allowing two instances to run at the same exact time or with any overlap" - then you shouldn't use a cron job at all, you should use a daemon. But if you for some reason have to use a cron job (e.g., if you're on a shared webhosting platform that doesn't allow daemons), I guess you could use the sleep hack to run the same code several times a minute:
* * * * * /usr/bin/php /path/to/scriptA.php
* * * * * sleep 10; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 20; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 30; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 40; /usr/bin/php /path/to/scriptA.php
* * * * * sleep 50; /usr/bin/php /path/to/scriptA.php
should make it execute every 10 seconds.
as for making sure it doesn't run in parallel if the previous execution hasn't finished yet, add this to the start of scriptA:
call_user_func(function () {
    static $lock;
    $lock = fopen(__FILE__, "rb");
    if (!flock($lock, LOCK_EX | LOCK_NB)) {
        // failed to get a lock, probably means another instance is already running
        die();
    }
    register_shutdown_function(function () use (&$lock) {
        flock($lock, LOCK_UN);
    });
});
and it will just die() if another instance of scriptA is already running. However, if you want it to wait for the previous execution to finish instead of just exiting, remove LOCK_NB... but that could be dangerous: if every execution, or even just a majority of them, takes more than 10 seconds, you'll have more and more processes waiting for the previous execution to finish, until you run out of RAM.
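If you do want waiting but with a safety valve, a middle ground (my sketch, not part of the original answer) is to keep LOCK_NB and retry with a deadline:
call_user_func(function () {
    static $lock;
    $lock = fopen(__FILE__, "rb");
    $deadline = time() + 8; // assumption: give up before the next 10-second cron slot fires
    while (!flock($lock, LOCK_EX | LOCK_NB)) {
        if (time() >= $deadline) {
            die(); // previous run is still going; skip this slot instead of piling up
        }
        sleep(1);
    }
    register_shutdown_function(function () use (&$lock) {
        flock($lock, LOCK_UN);
    });
});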
as for your curl questions,
"My question at this point is how cURL performs when used in a loop, does this code trigger scriptA.php and THEN once it has finished loading at that point start the next cURL request" - that is correct; curl waits until the page has completely loaded, usually meaning the entire scriptA has completed. (You can tell scriptA to finish the page load prematurely with the fastcgi_finish_request() function if you really want, but that's unusual.)
"Does the 3 second sleep even make a difference or will this literally run as fast as the time it takes each cURL request to complete" - yes, the sleep will make the loop 3 seconds slower per iteration.
"My objective is to time this script and run it as many times as possible in a one minute window without two iterations of it being run at the same time" - then make it a daemon that never exits, rather than a cron job.
"I don't want to include the sleep statement if it is not needed." - it's not needed.
"I believe what happens is cURL will run each request upon finishing the last" - this is correct.
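If you want to verify the sequential behaviour yourself, you can time each request; a minimal sketch using the same URL as the question and the standard curl_getinfo() call:
$url = "http://somewebsite.com/scriptA.php";
for ($i = 1; $i <= 7; $i++) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); // capture instead of echoing the response
    curl_exec($curl); // blocks until scriptA.php has finished responding
    $elapsed = curl_getinfo($curl, CURLINFO_TOTAL_TIME); // seconds curl spent on this request
    curl_close($curl);
    printf("run %d took %.2f seconds\n", $i, $elapsed);
}
Each printed time will be at least as long as one full run of scriptA.php, confirming that the requests never overlap.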

"I need to run this script multiple times a minute, preferably as many times in a minute as I can without allowing two instances to run"
You're in luck, as I wrote a class to handle just such a thing. You can find it on my GitHub here:
https://github.com/ArtisticPhoenix/MISC/blob/master/ProcLock.php
I'll also copy the full code at the end of this post.
The basic idea is to create a file; I will call it afile.lock for this example. In this file is recorded the PID, or process ID, of the current process run by cron. Then when cron attempts to run the process again, it checks this lock file and sees if there is a PHP process running that is using this PID.
If there is, it updates the modified time of the file (and throws an exception).
If there is not, then you are free to create a new instance of the "worker".
As a bonus, the modified time of the lock file can be used by the script (whose PID we are tracking) as a way of shutting down in the event the file is not updated. For example: if cron is stopped, or if the lock file is manually deleted, you can set it up in such a way that the running script will detect this and self-destruct.
So not only can you keep multiple instances from running, you can tell the current instance to die if cron is turned off.
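The idea in miniature (a conceptual sketch only; it assumes the posix extension is available, whereas the real class below shells out to ps/tasklist so it stays portable):
$lockFile = __DIR__ . '/afile.lock';
$pid = is_file($lockFile) ? (int) trim(file_get_contents($lockFile)) : 0;

if ($pid && posix_kill($pid, 0)) { // signal 0 only tests whether the process exists
    touch($lockFile);              // heartbeat: refresh the mtime for the watchdog
    exit("already running\n");
}
file_put_contents($lockFile, getmypid()); // claim the lock with our own PID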
The basic usage is as follows. In the cron file that starts up the "worker"
//define a lock file (this is actually optional)
ProcLock::setLockFile(__DIR__.'/afile.lock');

try {
    //if you didn't set a lock file you can pass it in with this method call
    ProcLock::lock();
    //execute your process
} catch (\Exception $e) {
    if ($e->getCode() == ProcLock::ALREADY_LOCKED) {
        //just exit or what have you
    } else {
        //some other exception happened
    }
}
It's basically that easy.
Then, in the running process, you can check every so often (for example, if you have a loop that runs something):
$expires = 90; //1 1/2 minutes (you may need a bit of fudge time)

foreach ($something as $a => $b) {
    $lastAccess = ProcLock::getLastAccess();
    if (false == $lastAccess || $lastAccess + $expires < time()) {
        //if last access is false (no lock file)
        //or last access + expiration is less than the current time
        //log something like "killed by lock timeout"
        exit();
    }
}
Basically what this says is that either the lock file was deleted while the process was running, or cron failed to update it before the expiration time. So here we are giving it 90 seconds, and cron should be updating the lock file every 60 seconds. As I said, the lock file is updated automatically whenever lock() is called: lock() calls canLock(), and if a valid process already holds the lock, canLock() runs touch($lockfile), which updates the mtime (modified time), before returning false.
Obviously you can only self-kill the process in this way if it is actively checking the access and expiration times.
This script is designed to work on both Windows and Linux. On Windows, under certain circumstances, the lock file won't be properly deleted (sometimes when hitting Ctrl+C in the CMD window); however, I have taken great pains to make sure this does not happen, so the class file registers a custom shutdown function that runs when the PHP script ends.
When running something using the ProcLock in the browser, please note that the process ID will always be the same no matter which tab it's run in. So if you open one tab that is process-locked, then open another tab, the process locker will see it as the same process and allow it to lock again. To properly run it in a browser and test the locking, it must be done using two separate browsers, such as Chrome and Firefox. It's not really intended to be run in the browser, but this is one quirk I noticed.
One last note: this class is completely static, as you can have only one process ID per running process, which should be obvious.
The tricky parts are
making sure the lock file is disposed of in the event of even critical PHP failures
making sure another process didn't pick up the PID number when it was freed from PHP. This can be done with relative accuracy, in that we can tell if a PHP process is using it, and if so we assume it's the process we need; there is little chance a re-used PID would show up for another process very quickly, and even less that it would be another PHP process
making all this work on both Linux and Windows
Lucky for you, I have already invested sufficient time in this to do all these things. This is a more generic version of an original lock script I made for my job, which we have used successfully for three years to maintain control over various synchronous cron jobs, everything from sFTP upload scanning and expired-file cleanup to RabbitMQ message workers that run for an indefinite period of time.
In any case, here is the full code. Enjoy.
<?php
/*
(c) 2017 ArtisticPhoenix
For license information please view the LICENSE file included with this source code GPL3.0.

Process Locker
==================================================================
This is a pseudo implementation of a mutex, since PHP does not have
any thread synchronization objects.

This class uses files to provide locking functionality.

The lock will be released in the following cases:
1 - user calls unlock
2 - when this lock object gets deleted
3 - when the request or script ends
4 - when the pid of the lock does not match self::$_pid
==================================================================
Only one Lock per Process!

-note- when running in a browser, typically all tabs will have the same PID,
so the locking will not be able to tell if it's the same process; to get
around this, run in CLI, or use 2 different browsers, so the PID numbers are different.

This class is static for the simple fact that locking is done per-process, so there is no need
to ever have duplicate ProcLocks within the same process.
---------------------------------------------------------------
*/
final class ProcLock
{
    /**
     * exception code numbers
     * @var int
     */
    const DIRECTORY_NOT_FOUND = 2000;
    const LOCK_FIRST = 2001;
    const FAILED_TO_UNLOCK = 2002;
    const FAILED_TO_LOCK = 2003;
    const ALREADY_LOCKED = 2004;
    const UNKNOWN_PID = 2005;
    const PROC_UNKNOWN_PID = 2006;

    /**
     * process _key
     * @var string
     */
    protected static $_lockFile;

    /**
     * @var int
     */
    protected static $_pid;

    /**
     * No construction allowed
     */
    private function __construct(){}

    /**
     * No clones allowed
     */
    private function __clone(){}

    /**
     * globally sets the lock file
     * @param string $lockFile
     */
    public static function setLockFile($lockFile){
        $dir = dirname($lockFile);
        if(!is_dir($dir)){
            throw new Exception("Directory {$dir} not found", self::DIRECTORY_NOT_FOUND); //pid directory invalid
        }
        self::$_lockFile = $lockFile;
    }

    /**
     * return global lock file
     */
    public static function getLockFile(){
        return (self::$_lockFile) ? self::$_lockFile : false;
    }

    /**
     * safe check for local or global lock file
     */
    protected static function _chk_lock_file($lockFile = null){
        if(!$lockFile && !self::$_lockFile){
            throw new Exception("Lock first", self::LOCK_FIRST);
        }elseif($lockFile){
            return $lockFile;
        }else{
            return self::$_lockFile;
        }
    }

    /**
     * @param string $lockFile
     */
    public static function unlock($lockFile = null){
        if(!self::$_pid){
            //no pid stored - not locked for this process
            return;
        }
        $lockFile = self::_chk_lock_file($lockFile);
        if(!file_exists($lockFile) || unlink($lockFile)){
            return true;
        }else{
            throw new Exception("Failed to unlock {$lockFile}", self::FAILED_TO_UNLOCK); //no lock file exists to unlock or no permissions to delete file
        }
    }

    /**
     * @param string $lockFile
     */
    public static function lock($lockFile = null){
        $lockFile = self::_chk_lock_file($lockFile);
        if(self::canLock($lockFile)){
            self::$_pid = getmypid();
            if(!file_put_contents($lockFile, self::$_pid)){
                throw new Exception("Failed to lock {$lockFile}", self::FAILED_TO_LOCK); //no permission to create pid file
            }
        }else{
            throw new Exception('Process is already running[ '.$lockFile.' ]', self::ALREADY_LOCKED); //there is a process running with this pid
        }
    }

    /**
     * @param string $lockFile
     */
    public static function getPidFromLockFile($lockFile = null){
        $lockFile = self::_chk_lock_file($lockFile);
        if(!file_exists($lockFile) || !is_file($lockFile)){
            return false;
        }
        $pid = file_get_contents($lockFile);
        return intval(trim($pid));
    }

    /**
     * @return number
     */
    public static function getMyPid(){
        return (self::$_pid) ? self::$_pid : false;
    }

    /**
     * @param string $lockFile
     * @param string $myPid
     * @throws Exception
     */
    public static function validatePid($lockFile = null, $myPid = false){
        $lockFile = self::_chk_lock_file($lockFile);
        if(!self::$_pid && !$myPid){
            throw new Exception('no pid supplied', self::UNKNOWN_PID); //no stored or injected pid number
        }elseif(!$myPid){
            $myPid = self::$_pid;
        }
        return ($myPid == self::getPidFromLockFile($lockFile));
    }

    /**
     * update the mtime of lock file
     * @param string $lockFile
     */
    public static function canLock($lockFile = null){
        if(self::$_pid){
            throw new Exception("Process was already locked", self::ALREADY_LOCKED); //process was already locked - call this only before locking
        }
        $lockFile = self::_chk_lock_file($lockFile);
        $pid = self::getPidFromLockFile($lockFile);
        if(!$pid){
            //if there is not a pid then there is no lock file and it's ok to lock it
            return true;
        }
        //validate the pid in the existing file
        $valid = self::_validateProcess($pid);
        if(!$valid){
            //if it's not valid - delete the lock file
            if(unlink($lockFile)){
                return true;
            }else{
                throw new Exception("Failed to unlock {$lockFile}", self::FAILED_TO_UNLOCK); //no lock file exists to unlock or no permissions to delete file
            }
        }
        //if there was a valid process running return false, we cannot lock it.
        //update the lock file's mtime - this is useful as a heartbeat, a periodic keepalive signal.
        touch($lockFile);
        return false;
    }

    /**
     * @param string $lockFile
     */
    public static function getLastAccess($lockFile = null){
        $lockFile = self::_chk_lock_file($lockFile);
        clearstatcache(true, $lockFile); //clear the stat cache for this file so filemtime() is fresh
        if(file_exists($lockFile)){
            return filemtime($lockFile);
        }
        return false;
    }

    /**
     * @param int $pid
     */
    protected static function _validateProcess($pid){
        $task = false;
        $pid = intval($pid);
        if(stripos(php_uname('s'), 'win') > -1){
            $task = shell_exec("tasklist /fi \"PID eq {$pid}\"");
            /*
            'INFO: No tasks are running which match the specified criteria.
            '
            */
            /*
            '
            Image Name                     PID Session Name        Session#    Mem Usage
            ========================= ======== ================ =========== ============
            php.exe                       5064 Console                    1     64,516 K
            '
            */
        }else{
            $cmd = "ps ".intval($pid);
            $task = shell_exec($cmd);
            /*
            '  PID TTY      STAT   TIME COMMAND
            '
            */
        }
        if($task){
            return (preg_match('/php|httpd/', $task)) ? true : false;
        }
        throw new Exception("pid detection failed {$pid}", self::PROC_UNKNOWN_PID); //failed to parse the pid lookup results
        //this has been tested on CentOS 5, 6, 7 and Windows 7 and 10
    }

    /**
     * destroy a lock (safe unlock)
     */
    public static function destroy($lockFile = null){
        try{
            $lockFile = self::_chk_lock_file($lockFile);
            self::unlock($lockFile);
        }catch(Exception $e){
            //ignore errors here - this is called from destruction so we don't care if it fails or succeeds
            //generally a new process will be able to tell if the pid is still in use so
            //this is just a cleanup process
        }
    }
}
/*
 * register our shutdown handler - if the script dies, unlock the lock.
 * this is superior to __destruct(), because the shutdown handler runs even in situations where PHP exhausts all memory
 */
register_shutdown_function(array('ProcLock', 'destroy'));

Related

laravel queue timing out all the time

I have successfully developed a Laravel job in 5.5 locally using the database queue; however, moving the application to a testing server has thrown up a constant issue.
Every job fails immediately and is placed in the failed jobs list.
The only error that is being returned is:
has been attempted too many times or run too long. The job may have previously timed out
This is generated in the core Worker.php through the function
markJobAsFailedIfAlreadyExceedsMaxAttempts($connectionName, $job, $maxTries)
I have 'tries' set to 5 in the job, and the database config timeout is set longer than the execution timeout;
however, it still fails as if it never tried in the first place.
This is the code that generates the error:
/**
 * Mark the given job as failed if it has exceeded the maximum allowed attempts.
 *
 * This will likely be because the job previously exceeded a timeout.
 *
 * @param  string  $connectionName
 * @param  \Illuminate\Contracts\Queue\Job  $job
 * @param  int  $maxTries
 * @return void
 */
protected function markJobAsFailedIfAlreadyExceedsMaxAttempts($connectionName, $job, $maxTries)
{
    $maxTries = ! is_null($job->maxTries()) ? $job->maxTries() : $maxTries;

    $timeoutAt = $job->timeoutAt();

    if ($timeoutAt && Carbon::now()->getTimestamp() <= $timeoutAt) {
        return;
    }

    if (! $timeoutAt && ($maxTries === 0 || $job->attempts() <= $maxTries)) {
        return;
    }

    $this->failJob($connectionName, $job, $e = new MaxAttemptsExceededException(
        $job->resolveName().' has been attempted too many times or run too long. The job may have previously timed out.'
    ));

    throw $e;
}
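One common culprit worth checking (an assumption on my part, not something the question confirms) is the retry_after value for the database connection in config/queue.php: if it is shorter than the worker's --timeout or a job's real runtime, the queue re-dispatches the job while it is still running, and you get exactly this error. A sketch of the relevant config:
// config/queue.php (illustrative values)
'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        // Must be greater than the worker's --timeout and the longest job runtime,
        // otherwise jobs are marked as timed out and retried mid-run.
        'retry_after' => 120,
    ],
],
Then start the worker with something like php artisan queue:work --timeout=90, keeping --timeout below retry_after.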

Track folder using php and inotify

I need help understanding how I can make inotify work with PHP.
I have a main file where I call an instance of an inotify class I created.
This works for 30 seconds and then PHP throws a timeout error. In that time window it can in fact print information from new files and deleted ones. It kind of works, but...
My questions for you guys are:
How can I make it persistent and stable? I mean, I can set the request timeout to unlimited time, but that doesn't seem to be good practice. How do I deal with this?
Is it supposed to work like this: I call the function and PHP hangs in that loop until a new change happens?
My index.php
$teste = new Inotify_service();
$teste->add_watch('files');
class Inotify_service
{
    private $instance;
    private $watch_id;

    public function __construct()
    {
        $this->instance = inotify_init();
        stream_set_blocking($this->instance, 0); // needed so inotify_read() will operate in non-blocking mode
    }

    /**
     * [add_watch Adds a new watch or modify an existing watch for the file or directory specified in pathname]
     * @param [string] $pathname [description]
     */
    public function add_watch($pathname)
    {
        $this->watch_id = inotify_add_watch($this->instance, $pathname, IN_CREATE | IN_DELETE);

        while (true) {
            // read events
            $events = inotify_read($this->instance);
            // if the event is happening within our 'Files directory'
            if ($events[0]['wd'] === $this->watch_id) {
                // a file was created
                if ($events[0]['mask'] === IN_CREATE) {
                    printf("Created file: %s in Files directory\n", $events[0]['name']);
                // a file was deleted
                } else if ($events[0]['mask'] === IN_DELETE) {
                    printf("Deleted file: %s in Files directory\n", $events[0]['name']);
                }
            }
        }

        // stop watching our directories (unreachable while the loop above runs forever)
        inotify_rm_watch($this->instance, $this->watch_id);

        // close our inotify instance
        fclose($this->instance);
    }
}
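For what it's worth, one way to make such a watcher long-running without busy-waiting (a sketch under two assumptions: CLI usage, and the inotify descriptor being usable with stream_select(), which it is) is to lift the execution time limit and block until the descriptor is readable:
set_time_limit(0); // remove the 30-second max_execution_time for this long-running script

$fd = inotify_init();
$watch_id = inotify_add_watch($fd, 'files', IN_CREATE | IN_DELETE);

while (true) {
    $read = array($fd);
    $write = null;
    $except = null;
    // block for up to 5 seconds waiting for events instead of spinning
    if (stream_select($read, $write, $except, 5) > 0) {
        foreach (inotify_read($fd) as $event) {
            if ($event['mask'] & IN_CREATE) {
                printf("Created file: %s in Files directory\n", $event['name']);
            } elseif ($event['mask'] & IN_DELETE) {
                printf("Deleted file: %s in Files directory\n", $event['name']);
            }
        }
    }
}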

How to avoid race hazard with multiple requests?

In order to protect the script from race hazards, I am considering the approach described by this code sample:
$file = 'yxz.lockctrl';

// if file exists, it means that some other request is running
while (file_exists($file)) {
    sleep(1);
}

file_put_contents($file, '');
// do some work
unlink($file);
If I go this way, is it possible to create a file with the same name simultaneously from multiple requests?
I know that there is a PHP mutex. I would like to handle this situation without any extensions (if possible).
The task for the program is to handle bids in an auctions application. I would like to process every bid request sequentially, with the lowest possible latency.
From what I understand, you want to make sure only a single process at a time is running a certain piece of code. A mutex or similar mechanism could be used for this. I myself use lock files to have a solution that works on many platforms and doesn't rely on a specific library only available on Linux, etc.
For that, I have written a small Lock class. Do note that it uses some non-standard functions from my library, for instance to get where to store temporary files, etc. But you could easily change that.
<?php

class Lock
{
    private $_owned = false;
    private $_name = null;
    private $_lockFile = null;
    private $_lockFilePointer = null;

    public function __construct($name)
    {
        $this->_name = $name;
        $this->_lockFile = PluginManager::getInstance()->getCorePlugin()->getTempDir('locks') . $name . '-' . sha1($name . PluginManager::getInstance()->getCorePlugin()->getPreference('EncryptionKey')->getValue()).'.lock';
    }

    public function __destruct()
    {
        $this->release();
    }

    /**
     * Acquires a lock
     *
     * Returns true on success and false on failure.
     * Could be told to wait (block) and if so for a max amount of seconds or return false right away.
     *
     * @param bool $wait
     * @param null $maxWaitTime
     * @return bool
     * @throws \Exception
     */
    public function acquire($wait = false, $maxWaitTime = null)
    {
        $this->_lockFilePointer = fopen($this->_lockFile, 'c');
        if (!$this->_lockFilePointer) {
            throw new \RuntimeException(__('Unable to create lock file', 'dliCore'));
        }

        if ($wait && $maxWaitTime === null) {
            $flags = LOCK_EX;
        } else {
            $flags = LOCK_EX | LOCK_NB;
        }

        $startTime = time();
        while (1) {
            if (flock($this->_lockFilePointer, $flags)) {
                $this->_owned = true;
                return true;
            } else {
                if ($maxWaitTime === null || time() - $startTime > $maxWaitTime) {
                    fclose($this->_lockFilePointer);
                    return false;
                }
                sleep(1);
            }
        }
    }

    /**
     * Releases the lock
     */
    public function release()
    {
        if ($this->_owned) {
            @flock($this->_lockFilePointer, LOCK_UN);
            @fclose($this->_lockFilePointer);
            @unlink($this->_lockFile);
            $this->_owned = false;
        }
    }
}
Usage
Now you can have two processes that run at the same time and execute the same script.
Process 1
$lock = new Lock('runExpensiveFunction');

if ($lock->acquire()) {
    // Some expensive function that should only run one at a time
    runExpensiveFunction();
    $lock->release();
}
Process 2
$lock = new Lock('runExpensiveFunction');

// Check will be false since the lock will already be held by someone else, so the function is skipped
if ($lock->acquire()) {
    // Some expensive function that should only run one at a time
    runExpensiveFunction();
    $lock->release();
}
Another alternative would be to have the second process wait for the first one to finish instead of skipping the code.
$lock = new Lock('runExpensiveFunction');

// Process will now wait for the lock to become available. A max wait time can be set if needed.
if ($lock->acquire(true)) {
    // Some expensive function that should only run one at a time
    runExpensiveFunction();
    $lock->release();
}
Ram disk
To limit the number of writes to your HDD/SSD with the lock files, you could create a RAM disk to store them in.
On Linux you could add something like the following to /etc/fstab
tmpfs /mnt/ramdisk tmpfs nodev,nosuid,noexec,nodiratime,size=1024M 0 0
On Windows you can download something like ImDisk Toolkit and create a ramdisk with that.

php mutex for ram based wordpress cache in php

I'm trying to implement a cache for a high-traffic WP site in PHP. So far I've managed to store the results to a ramfs and load them directly from the .htaccess. However, during peak hours more than one process is generating a certain page, and it is becoming an issue.
I was thinking that a mutex would help, and I was wondering if there is a better way than system("mkdir cache.mutex").
I agree with @gries, a reverse proxy is going to be a really good bang-for-the-buck way to get high performance out of a high-volume WordPress site. I've leveraged Varnish with quite a lot of success, though I suspect you can do so with nginx as well.

How to prevent the cron job execution, if it is already running

I have one PHP script, and I am executing this script via cron every 10 minutes on CentOS.
The problem is that if the cron job takes more than 10 minutes, then another instance of the same cron job will start.
I tried one trick, that is:
Created one lock file with PHP code (same as pid files) when the cron job started.
Removed the lock file with PHP code when the job finished.
And when any new cron job started execution of the script, I checked if the lock file exists and, if so, aborted the script.
But there can be a problem: if the lock file is not deleted or removed by the script for any reason, the cron will never start again.
Is there any way I can stop the execution of a cron job if it is already running, with Linux commands or similar?
Advisory locking is made for exactly this purpose.
You can accomplish advisory locking with flock(). Simply apply the function to a previously opened lock file to determine if another script has a lock on it.
$f = fopen('lock', 'w') or die('Cannot create lock file');
if (flock($f, LOCK_EX | LOCK_NB)) {
    // yay
}
In this case I'm adding LOCK_NB to prevent the next script from waiting until the first has finished. Since you're using cron there will always be a next script.
If the current script prematurely terminates, any file locks will get released by the OS.
Maybe it is better not to write code if you can configure it instead:
https://serverfault.com/questions/82857/prevent-duplicate-cron-jobs-running
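Along those lines, if the util-linux flock wrapper is available (an assumption; it ships with most Linux distributions), the whole problem can be solved in the crontab entry itself:
*/10 * * * * /usr/bin/flock -n /tmp/mycron.lock /usr/bin/php /path/to/script.php
The -n flag makes flock exit immediately if the previous run still holds /tmp/mycron.lock, so overlapping runs are skipped rather than queued.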
flock() worked out great for me - I have a cron job with database requests scheduled every 5 minutes, so not having several running at the same time is crucial. This is what I did:
$filehandle = fopen("lock.txt", "c+");

if (flock($filehandle, LOCK_EX | LOCK_NB)) {
    // code here to start the cron job
    flock($filehandle, LOCK_UN); // don't forget to release the lock
} else {
    // throw an exception here to stop the next cron job
}

fclose($filehandle);
In case you don't want to kill the next scheduled cron job, but simply pause it till the running one is finished, then just omit the LOCK_NB:
if (flock($filehandle, LOCK_EX))
This is a very common problem with a very simple solution: cronjoblock, a simple 8-line shell-script wrapper that applies locking using flock:
https://gist.github.com/coderofsalvation/1102e56d3d4dcbb1e36f
btw. cronjoblock also reverses cron's spammy email behaviour: only output something if stuff goes wrong. This is handy in respect to cron's MAILTO variable. The stdout/stderr output will be suppressed (so cron will not send mails) unless the given process has an exit code > 0.
Note that as of PHP 5.3.2, the automatic unlocking when the file's resource handle is closed was removed; unlocking now always has to be done manually.
I use this:
<?php
// Create a PID file
if (is_file(dirname($_SERVER['SCRIPT_NAME']) . "/.processing")) { die(); }
file_put_contents(dirname($_SERVER['SCRIPT_NAME']) . "/.processing", "processing");

// SCRIPT CONTENTS GOES HERE //

@unlink(dirname($_SERVER['SCRIPT_NAME']) . "/.processing");
?>
#!/bin/bash

ps -ef | grep -v grep | grep capture_12hz_sampling_track.php
if [ $? -eq 1 ]; then
    nohup /usr/local/bin/php /opt/Apache/htdocs/cmsmusic_v2/script/Mp3DownloadProcessMp4/capture_12hz_sampling_track.php &
else
    echo "Already running"
fi
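A slightly more robust variant of that wrapper (a sketch; pgrep -f matches against the full command line, avoiding the grep -v grep dance):
#!/bin/bash
# skip this run if the script is already running
if ! pgrep -f capture_12hz_sampling_track.php > /dev/null; then
    nohup /usr/local/bin/php /opt/Apache/htdocs/cmsmusic_v2/script/Mp3DownloadProcessMp4/capture_12hz_sampling_track.php &
else
    echo "Already running"
fi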
Another alternative:
<?php
/**
 * Lock manager to ensure our cron doesn't run twice at the same time.
 *
 * Inspired by the lock mechanism in Mage_Index_Model_Process
 *
 * Usage:
 *
 * $lock = Mage::getModel('stcore/cron_lock');
 *
 * if (!$lock->isLocked()) {
 *     $lock->lock();
 *     // Do your stuff
 *     $lock->unlock();
 * }
 */
class ST_Core_Model_Cron_Lock extends Varien_Object
{
    /**
     * Process lock properties
     */
    protected $_isLocked = null;
    protected $_lockFile = null;

    /**
     * Get lock file resource
     *
     * @return resource
     */
    protected function _getLockFile()
    {
        if ($this->_lockFile === null) {
            $varDir = Mage::getConfig()->getVarDir('locks');
            $file = $varDir . DS . 'stcore_cron.lock';
            if (is_file($file)) {
                $this->_lockFile = fopen($file, 'w');
            } else {
                $this->_lockFile = fopen($file, 'x');
            }
            fwrite($this->_lockFile, date('r'));
        }
        return $this->_lockFile;
    }

    /**
     * Lock process without blocking.
     * This method allows protecting against multiple processes running, with fast lock validation.
     *
     * @return Mage_Index_Model_Process
     */
    public function lock()
    {
        $this->_isLocked = true;
        flock($this->_getLockFile(), LOCK_EX | LOCK_NB);
        return $this;
    }

    /**
     * Lock and block process.
     * If a new instance of the process tries to validate the locking state,
     * the script will wait until the process is unlocked.
     *
     * @return Mage_Index_Model_Process
     */
    public function lockAndBlock()
    {
        $this->_isLocked = true;
        flock($this->_getLockFile(), LOCK_EX);
        return $this;
    }

    /**
     * Unlock process
     *
     * @return Mage_Index_Model_Process
     */
    public function unlock()
    {
        $this->_isLocked = false;
        flock($this->_getLockFile(), LOCK_UN);
        return $this;
    }

    /**
     * Check if process is locked
     *
     * @return bool
     */
    public function isLocked()
    {
        if ($this->_isLocked !== null) {
            return $this->_isLocked;
        } else {
            $fp = $this->_getLockFile();
            if (flock($fp, LOCK_EX | LOCK_NB)) {
                flock($fp, LOCK_UN);
                return false;
            }
            return true;
        }
    }

    /**
     * Close file resource if it was opened
     */
    public function __destruct()
    {
        if ($this->_lockFile) {
            fclose($this->_lockFile);
        }
    }
}
Source: https://gist.github.com/wcurtis/9539178
I was running a PHP cron job script that dealt specifically with sending text messages using an existing API. On my local box the cron job was working fine, but on my customer's box it was sending double messages. Although this doesn't make sense to me, I double-checked the permissions for the folder responsible for sending messages, and the owner was set to root. Once I set the owner to www-data (Ubuntu) it started behaving normally.
This might not be the issue for you, but if it's a simple cron script I would double-check the permissions.
