I've got a bunch of php scripts scheduled to run every couple of minutes in cron on a CentOS machine. I would like for every script to self check if the previous instance of it is still running when it starts and stop if it is.
This is how I manage tasks and make sure each one only runs one instance at a time:
public function startTask($taskname)
{
    // Create a simple lock file marking the task as running
    $lockfile = $taskname . "RUNNING";
    file_put_contents($lockfile, "running");
}

public function endTask($taskname)
{
    // Remove the lock file when the task finishes
    $lockfile = $taskname . "RUNNING";
    unlink($lockfile);
}

public function isTaskRunning($taskname)
{
    $lockfile = $taskname . "RUNNING";
    return file_exists($lockfile);
}
You call startTask('name') when you start the task and then endTask('name') when you finish. And on the first line of the task you use
if (isTaskRunning('name')) {
    die('already running');
}
Put these in a config class or something that's included in all task files and you're away.
Use a lock file:
<?php
$lockfile = "/tmp/lock.txt";
$fp = fopen($lockfile, "r+");
if (flock($fp, LOCK_EX)) {  // acquire an exclusive lock
    ftruncate($fp, 0);      // truncate file
    fwrite($fp, sprintf("Started: %s\nPID: %s", date('Y-m-d H:i:s'), getmypid()));
    // perform your tasks here.
    fflush($fp);            // flush output before releasing the lock
    flock($fp, LOCK_UN);    // release the lock
} else {
    echo "Couldn't get the lock!\nCheck $lockfile for more info.";
}
fclose($fp);
Alternatively, if you are using a database, you can use a named lock like this:
<?php
$process = "myProcess";
$sql = mysql_query("select get_lock('$process', 0)");
$got_lock = (bool)mysql_fetch_array($sql)[0];

// If process is already running exit
if (!$got_lock) {
    echo "Process running";
    exit;
}

// Run my process
for ($i = 0; $i < 100000000; $i++) {
    echo $i;
}

// Release the lock
mysql_query("select release_lock('$process')");
This form of locking is called a named lock. It doesn't "lock" the database; it just creates a named lock, and when you call it, MySQL checks whether that name is already held. It is nothing like a table lock or a row lock; it was built into MySQL for exactly this kind of application.
You can have as many locks as you need, and they are automatically released once the client disconnects from MySQL: the process finishes, PHP breaks/crashes, MySQL crashes (not 100% sure on this one), etc.
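Note that the mysql_* functions used above were removed in PHP 7. As a hedged sketch, the same named-lock pattern with mysqli might look like this (the connection details are placeholders for your own):

<?php
// Hypothetical connection details; adjust for your environment.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

$process = 'myProcess';
$res = $db->query("SELECT GET_LOCK('" . $db->real_escape_string($process) . "', 0)");
$got_lock = (bool)$res->fetch_row()[0];

if (!$got_lock) {
    echo "Process already running\n";
    exit;
}

// Run the long task here.

// Release the lock explicitly (it is also released when the connection closes).
$db->query("SELECT RELEASE_LOCK('" . $db->real_escape_string($process) . "')");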
You can add a simple echo at the end of your PHP script (maybe with a timestamp), which will be visible in your cron logs and shows whether the script reached the end and finished its task.
I'm working on a cron system and need to ensure a script executes only once at a time. With the code below, I run the script a first time and, while it's looping (for delaying purposes), run it again; file_exists always returns false in the second run, even though the first run prints the file's contents once its loop is done.
Cronjob.php:
include "Locker.class.php";
Locker::$LockName = __DIR__.'/OneTime_[cron].lock';
$Locker = new Locker();
for ($i = 0 ; $i < 1000000; $i++){
echo 'Z';
$z = true;
ob_end_flush();
ob_start();
}
Locker.class.php:
class Locker {
    static $LockName;

    function __construct($Expire) {
        if (!basename(static::$LockName)) {
            die('Locker: Not a filename.');
        }
        // It doesn't help
        clearstatcache();
        if (file_exists(static::$LockName)) { // returns false always
            die('Already running');
        } else {
            $myfile = fopen(static::$LockName, "x"); // Tried with 'x' and 'w', no luck
            fwrite($myfile, 'Keep it alive'); // Tried with file_put_contents also, no luck
            fclose($myfile);
        }
        // The following function returns true by the way!
        // echo file_exists(static::$LockName);
    }

    function __destruct() {
        // It outputs content
        echo file_get_contents(static::$LockName);
        unlink(static::$LockName);
    }
}
What is the problem? Why does file_exists always return false?
I suspect the PHP parser has noticed that you never use the variable $Locker, so it immediately destroys the object, which runs the destructor and removes the file. Try putting a reference to the object after the loop:
include "Locker.class.php";
Locker::$LockName = __DIR__.'/OneTime_[cron].lock';
$Locker = new Locker();
for ($i = 0 ; $i < 1000000; $i++){
echo 'Z';
$z = true;
ob_end_flush();
ob_start();
}
var_dump($Locker);
If your goal is to prevent a potentially long-running job from executing multiple copies at the same time, you can take a simpler approach and just flock() the script file itself.
This would go in cronjob.php
<?php
$wb = false;
$fp = fopen(__FILE__, 'r');
if (!$fp) die("Could not open file");

// Non-blocking exclusive lock on the script file itself
$locked = flock($fp, LOCK_EX | LOCK_NB, $wb);
if (!$locked) die("Couldn't acquire lock!\n");

// do work here
sleep(20);

flock($fp, LOCK_UN);
fclose($fp);
To address your actual question: running your code, I found the file goes away because, on a subsequent call, the second instance prints "Already running" and then its destructor runs and deletes the file before the initial task finishes.
The flock method above solves this problem. Otherwise, you'll need to ensure that only the process that actually creates the lock file is able to delete it (and take care that it never gets left around too long).
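As a minimal sketch of that idea, loosely based on the Locker class above (an illustration, not a drop-in replacement): remember whether this process actually created the lock file, and only delete it in the destructor if it did.

<?php
class Locker
{
    public static $LockName;
    private $ownsLock = false; // only the creator may delete the file

    public function __construct()
    {
        // 'x' mode fails if the file already exists, so the existence
        // check and the creation happen in one atomic step.
        $handle = @fopen(static::$LockName, 'x');
        if ($handle === false) {
            die('Already running');
        }
        fwrite($handle, (string)getmypid());
        fclose($handle);
        $this->ownsLock = true;
    }

    public function __destruct()
    {
        // A rejected second instance never sets $ownsLock, so it can no
        // longer delete the first instance's lock file on shutdown.
        if ($this->ownsLock && file_exists(static::$LockName)) {
            unlink(static::$LockName);
        }
    }
}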
So I want to lock a file so I can see that a PHP process is already running. The example code looks as follows:
<?php
$file = fopen("test.txt", "w+");
if (flock($file, LOCK_EX))
{
    fwrite($file, "Write something");
    sleep(10);
    flock($file, LOCK_UN);
}
else
{
    echo "Error locking file!";
}
fclose($file);
?>
But the problem is that when I execute this file and then execute it again, the second one waits for the first to be done with the lock, so both end up executing successfully. Only the first one should execute successfully. Anyone know how to do this?
It sounds like you don't want flock to be blocking? You just want the first process to obtain the lock and the second one to fail?
To do that, you can use the LOCK_NB flag to stop the flock call from blocking:
$file = fopen("test.txt", "r+");
if (flock($file, LOCK_EX | LOCK_NB))
{
    fwrite($file, "Write something");
    sleep(10);
    flock($file, LOCK_UN);
}
else
{
    echo "Error locking file!";
}
fclose($file);
More info is available on the PHP flock doc page - http://php.net/manual/en/function.flock.php
You could use a second file called e.g. "text.txt.updated" and maintain the state there, i.e. whether "text.txt" was already updated or not. Something like the following. Note that I've opened "text.txt" in append mode to check whether two concurrent runs really wrote to the file only once.
<?php
if (! ($f = fopen("text.txt", "a")))
    print("Cannot write text.txt\n");
elseif (! flock($f, LOCK_EX))
    print("Error locking text.txt\n");
else {
    print("Locked text.txt\n");
    if (file_exists("text.txt.updated"))
        print("text.txt already updated\n");
    else {
        print("Updating text.txt and creating text.txt.updated\n");
        fwrite($f, "Write something\n");
        if ($stamp = fopen("text.txt.updated", "w"))
            fwrite($stamp, "whatever\n");
        else
            print("Oooooops, can't create the updated state file\n");
        sleep(10);
    }
    flock($f, LOCK_UN);
}
?>
I have a PHP script running each minute from cron, but sometimes it takes longer than one minute to finish.
My question is: what is the best way to ensure that only one instance is running at a time?
I use this code:
$output = shell_exec('ps aux | grep some_script.php | grep -v grep'); // get all processes containing "some_script.php" and exclude the current grep process
$trimmed = $output ? rtrim($output, PHP_EOL) : '';   // trim the newline at the end
$processes = explode(PHP_EOL, $trimmed);             // get the array of lines (i.e. processes)
$procCnt = count($processes);                        // get the number of lines

if ($procCnt > 2) {
    echo "busy\n";
    exit(); // exit if the number of processes is more than 2 (see explanation below)
}
If there is one some_script process running from cron, shell_exec returns something like this:
apache 13593 0.0 0.0 9228 1068 ? Ss 18:20 0:00 /bin/bash -c php -f /srv/www/robot/some_script.php 2>&1
apache 13602 0.0 0.0 290640 10544 ? S 18:20 0:00 php -f /srv/www/robot/some_script.php
So if there are more than 2 lines in the output, I call exit().
I want to ask: am I on the right track, or is there a better way?
Any help would be appreciated
Checking that way creates a race condition where two processes can retrieve the list at the same time, then both decide to exit. Depending on what you're trying to do, this may or may not be a problem.
A possible better alternative is to create a lock of some sort. A simple one I've used is a directory that only exists while the process is running - mkdir is atomic, it will either succeed (no other process is running) or fail (another process has already created it). Just make sure to remove it when complete:
if (!mkdir("lock_dir")) {
echo "busy\n";
exit();
}
register_shutdown_function(function() {
rmdir("lock_dir");
});
Or better, it looks like flock was made for a similar purpose. This is the example from the manual:
<?php
$fp = fopen("/tmp/lock.txt", "r+");

if (flock($fp, LOCK_EX)) {  // acquire an exclusive lock
    ftruncate($fp, 0);      // truncate file
    fwrite($fp, "Write something here\n");
    fflush($fp);            // flush output before releasing the lock
    flock($fp, LOCK_UN);    // release the lock
} else {
    echo "Couldn't get the lock!";
}

fclose($fp);
?>
Just hold the lock for the script's runtime, similar to my first example.
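A rough sketch of that combination, assuming /tmp/lock.txt is writable: take a non-blocking exclusive lock at startup and keep the handle open until the script exits.

<?php
$fp = fopen('/tmp/lock.txt', 'c'); // create if missing, don't truncate
if (!$fp || !flock($fp, LOCK_EX | LOCK_NB)) {
    echo "busy\n";
    exit();
}

// Do the long-running work while the lock is held.
sleep(60);

// Optional: the lock is also released automatically when the handle
// is closed or the script ends.
flock($fp, LOCK_UN);
fclose($fp);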
The simplest method is to create a file when the process starts and delete it at the very end. Before starting, check whether the file exists: if it doesn't, create it and proceed; if it does, die.
Another option is to limit Apache's MaxClients to one (if that is a valid option in your case).
Just for OOP style, I use a separate helper class.
flock — Portable advisory file locking
register_shutdown_function — Register a function for execution on shutdown
<?php
class ProcessHelper
{
    const PIDFILE = 'yourProcessName.pid';

    /** Keep the handle open for the whole run; closing it would release the lock. */
    private static $fp;

    public static function isLocked()
    {
        self::$fp = fopen(self::PIDFILE, 'w');
        if (!flock(self::$fp, LOCK_EX | LOCK_NB, $wouldBlock)) {
            if ($wouldBlock) {
                // this file is locked by another process
                var_dump('DO NOTHING');
                return true;
            }
        } else {
            var_dump('Do something and remove PID file');
            register_shutdown_function(function () {
                unlink(self::PIDFILE);
            });
            sleep(5); // for test
        }
        return false;
    }
}
class FooService
{
    public function init()
    {
        if (!ProcessHelper::isLocked()) {
            var_dump('count 1000');
            for ($i = 0; $i < 10000; $i++) {
                echo $i;
            }
        }
    }
}

$foo = new FooService();
$foo->init();
Currently, I try to prevent an onlytask.php script from running more than once like this:
$fp = fopen("/tmp/"."onlyme.lock", "a+");
if (flock($fp, LOCK_EX | LOCK_NB)) {
echo "task started\n";
//
while (true) {
// do something lengthy
sleep(10);
}
//
flock($fp, LOCK_UN);
} else {
echo "task already running\n";
}
fclose($fp);
and there is a cron job to execute the above script every minute:
* * * * * php /usr/local/src/onlytask.php
It works for a while. After a few days, when I do:
ps auxwww | grep onlytask
I found that there are two instances running! Not three or more, not one. I killed one of the instances. After a few days, there are two instances again.
What's wrong with the code? Are there other alternatives to ensure that only one instance of onlytask.php is running?
P.S. My /tmp/ folder is not cleaned up; ls -al /tmp/*.lock shows the lock file was created on day one:
-rw-r--r-- 1 root root 0 Dec 4 04:03 onlyme.lock
You should use the x flag when opening the lock file:
<?php
$lock = '/tmp/myscript.lock';
$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Do processing
    while (true) {
        echo "Working\n";
        sleep(2);
    }
    fclose($f);
    unlink($lock);
}
Note from the PHP manual
'x' - Create and open for writing only; place the file pointer at the
beginning of the file. If the file already exists, the fopen() call
will fail by returning FALSE and generating an error of level
E_WARNING. If the file does not exist, attempt to create it. This is
equivalent to specifying O_EXCL|O_CREAT flags for the underlying
open(2) system call.
And here is O_EXCL explanation from man page:
O_EXCL - If O_CREAT and O_EXCL are set, open() shall fail if the file
exists. The check for the existence of the file and the creation of
the file if it does not exist shall be atomic with respect to other
threads executing open() naming the same filename in the same
directory with O_EXCL and O_CREAT set. If O_EXCL and O_CREAT are set,
and path names a symbolic link, open() shall fail and set errno to
[EEXIST], regardless of the contents of the symbolic link. If O_EXCL
is set and O_CREAT is not set, the result is undefined.
UPDATE:
A more reliable approach: run a main script that acquires the lock, runs the worker script, and then releases the lock.
<?php
// File: main.php

$lock = '/tmp/myscript.lock';
$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Spawn worker which does processing (redirect stderr to stdout)
    $worker = './worker 2>&1';
    $output = array();
    $retval = 0;
    exec($worker, $output, $retval);

    echo "Worker exited with code: $retval\n";
    echo "Output:\n";
    echo implode("\n", $output) . "\n";

    // Cleanup the lock
    fclose($f);
    unlink($lock);
}
Here goes the worker. Let's raise a fake fatal error in it:
#!/usr/bin/env php
<?php
// File: worker (must be executable +x)

for ($i = 0; $i < 3; $i++) {
    echo "Processing $i\n";
    if ($i == 2) {
        // Fake fatal error
        trigger_error("Oh, fatal error!", E_USER_ERROR);
    }
    sleep(1);
}
Here is the output I got:
galymzhan#atom:~$ php main.php
Worker exited with code: 255
Output:
Processing 0
Processing 1
Processing 2
PHP Fatal error: Oh, fatal error! in /home/galymzhan/worker on line 8
PHP Stack trace:
PHP 1. {main}() /home/galymzhan/worker:0
PHP 2. trigger_error() /home/galymzhan/worker:8
The main point is that the lock file is cleaned up properly so you can run main.php again without problems.
Now I check whether the process is running with ps and wrap the PHP script in a bash script:
#!/bin/bash

PIDS=`ps aux | grep onlytask.php | grep -v grep`
if [ -z "$PIDS" ]; then
    echo "Starting onlytask.php ..."
    php /usr/local/src/onlytask.php >> /var/log/onlytask.log &
else
    echo "onlytask.php already running."
fi
and run the bash script by cron every minute.
<?php
$sLock = '/tmp/yourScript.lock';

if (file_exists($sLock)) {
    die('There is a lock file');
}

file_put_contents($sLock, 1);

// A lot of code

unlink($sLock);
You can add an extra check by writing the PID into the lock file and then verifying it when the file exists. To secure it even more, you can fetch all running processes with "ps fax" and check whether the PID is in the list.
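A sketch of that extra check, assuming the posix extension is available and the lock file stores the creator's PID (signal 0 doesn't kill anything; it only tests whether the process exists):

<?php
$sLock = '/tmp/yourScript.lock';

if (file_exists($sLock)) {
    $oldPid = (int)file_get_contents($sLock);
    // Signal 0 only checks that the PID is still alive.
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {
        die('There is a lock file and its process is still running');
    }
    // Stale lock file left by a crashed run; fall through and take over.
}

file_put_contents($sLock, getmypid());

// A lot of code

unlink($sLock);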
Try using the presence of the file and not its flock flag:
$lockFile = "/tmp/"."onlyme.lock";
if (!file_exists($lockFile)) {
touch($lockFile);
echo "task started\n";
//
// do something lengthy
//
unlink($lockFile);
} else {
echo "task already running\n";
}
You can use lock files, as some have suggested, but what you are really looking for is the PHP Semaphore functions. These are kind of like file locks, but designed specifically for what you are doing, restricting access to shared resources.
Never use unlink or rename on lock files: doing so breaks your LOCK_EX on Linux. For example, after a lock file is unlinked or renamed, any other script will always get true from flock().
The best way to detect a previous clean exit is to write a few bytes to the lock file just before releasing it with LOCK_UN; then, after acquiring LOCK_EX, read those bytes back and ftruncate the handle.
Important note: All tested on PHP 5.4.17 on Linux and 5.4.22 on Windows 7.
Example code:
set semaphore:
$handle = fopen($lockFile, 'c+');
if (!is_resource($handle) || !flock($handle, LOCK_EX | LOCK_NB)) {
    if (is_resource($handle)) {
        fclose($handle);
    }
    $handle = false;
    echo SEMAPHORE_DENY;
    exit;
} else {
    $data = fread($handle, 2);
    if ($data !== 'OK') {
        $timePreviousEnter = fileatime($lockFile);
        echo SEMAPHORE_ALLOW_AFTER_FAIL;
    } else {
        echo SEMAPHORE_ALLOW;
    }
    fseek($handle, 0);
    ftruncate($handle, 0);
}
leave semaphore (better called in a shutdown handler):
if (is_resource($handle)) {
    fwrite($handle, 'OK');
    flock($handle, LOCK_UN);
    fclose($handle);
    $handle = false;
}
Added a check for stale locks to galymzhan's answer (not enough rep to comment), so that if the process dies, old lock files are cleared after three minutes and cron can start the process again. This is what I use:
<?php
$lock = '/tmp/myscript.lock';

// remove stale locks older than 180 seconds
if (file_exists($lock) && time() - filemtime($lock) > 180) {
    unlink($lock);
}

$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Do processing
    while (true) {
        echo "Working\n";
        sleep(2);
    }
    fclose($f);
    unlink($lock);
}
You can also add a timeout to the cron job so that the PHP process is killed after, say, 60 seconds, with something like:
* * * * * user timeout -s 9 60 php /dir/process.php >/dev/null
I'm trying to write a PHP script that I want to ensure only has a single instance of it running at any given time. All of this talk about different ways of locking, and race conditions, and etc. etc. etc. is giving me the willies.
I'm confused as to whether lock files are the way to go, or semaphores, or using MySQL locks, or etc. etc. etc.
Can anyone tell me:
a) What is the correct way to implement this?
AND
b) Point me to a PHP implementation (or something easy to port to PHP?)
One way is to use the PHP function flock with a dummy file that will act as a watchdog.
At the beginning of the job, if the file already holds a LOCK_EX lock, we can either exit or wait.
Php flock documentation: http://php.net/manual/en/function.flock.php
For these examples, a file called lock.txt must be created first.
Example 1: if another twin process is running, this one quits cleanly, without retrying, printing a status message.
It reports an error state if the file lock.txt isn't reachable.
<?php
$fp = fopen("lock.txt", "r+");

if (!flock($fp, LOCK_EX | LOCK_NB, $blocked)) {
    if ($blocked) {
        // another process holds the lock
        echo "Couldn't get the lock! Other script in run!\n";
    } else {
        // couldn't lock for another reason, e.g. no such file
        echo "Error! Nothing done.";
    }
} else {
    // lock obtained
    ftruncate($fp, 0); // truncate file

    // Your job here
    echo "Job running!\n";

    // Leave a breather
    sleep(3);

    fflush($fp);         // flush output before releasing the lock
    flock($fp, LOCK_UN); // release the lock
}

fclose($fp); // Empty memory
Example 2, FIFO (first in, first out): we want the process to wait and execute after any queued processes ahead of it:
<?php
$fp = fopen("lock.txt", "r+");

if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    ftruncate($fp, 0);     // truncate file

    // Your job here
    echo "Job running!\n";

    // Leave a breather
    sleep(3);

    fflush($fp);         // flush output before releasing the lock
    flock($fp, LOCK_UN); // release the lock
}

fclose($fp);
It is also doable with fopen in 'x' mode, creating the file at start and erasing it when the script ends.
Create and open for writing only; place the file pointer at the
beginning of the file. If the file already exists, the fopen() call
will fail by returning FALSE
http://php.net/manual/en/function.fopen.php
However, in a Unix environment, for fine tuning, I found it easier to record the PIDs of all background scripts with getmypid() in a DB or a separate JSON file.
When a task ends, the script is responsible for declaring its state in this file (e.g. success/failure/debug info, etc.) and then removing its PID. In my view this makes it simpler to build admin tools and daemons, and you can use posix_kill() to kill a PID from PHP if necessary.
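A rough sketch of such a PID registry in a JSON file (the path and field names are made-up examples, and no locking is shown, so concurrent writers could still race on the registry itself):

<?php
$registry = '/tmp/workers.json';

// Register this worker's PID.
$workers = [];
if (is_file($registry)) {
    $workers = json_decode(file_get_contents($registry), true) ?: [];
}
$workers[getmypid()] = ['script' => basename(__FILE__), 'started' => date('c'), 'status' => 'running'];
file_put_contents($registry, json_encode($workers, JSON_PRETTY_PRINT));

// ... do the work ...

// Declare the final state and drop this PID from the running list.
$workers = json_decode(file_get_contents($registry), true) ?: [];
unset($workers[getmypid()]);
$workers['last_run'][basename(__FILE__)] = ['status' => 'success', 'ended' => date('c')];
file_put_contents($registry, json_encode($workers, JSON_PRETTY_PRINT));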
Micro-Services are composed using Unix-like pipelines.
Services can call services.
https://en.wikipedia.org/wiki/Microservices
See also: Prevent PHP script using up all resources while it runs?
// borrowed from 2 answers on stackoverflow
// Note: error() and info() are assumed to be logging helpers defined elsewhere.

function IsProcessRunning($pid) {
    return shell_exec("ps aux | grep " . $pid . " | wc -l") > 2;
}

function AmIRunning($process_file) {
    // Check I am running from the command line
    if (PHP_SAPI != 'cli') {
        error('Run me from the command line');
        exit;
    }

    // Check if I'm already running and kill myself off if I am
    $pid_running = false;
    $pid = 0;
    if (file_exists($process_file)) {
        $data = file($process_file);
        foreach ($data as $pid) {
            $pid = (int)$pid;
            if ($pid > 0 && IsProcessRunning($pid)) {
                $pid_running = $pid;
                break;
            }
        }
    }

    if ($pid_running && $pid_running != getmypid()) {
        if (file_exists($process_file)) {
            file_put_contents($process_file, $pid);
        }
        info('I am already running as pid ' . $pid . ' so stopping now');
        return true;
    } else {
        // Make sure file has just me in it
        file_put_contents($process_file, getmypid());
        info('Written pid with id ' . getmypid());
        return false;
    }
}
/*
 * Make sure there is only one instance running at a time
 */
$lockdir = '/data/lock';
$script_name = basename(__FILE__, '.php');

// The file to store our process id (DS is assumed to be DIRECTORY_SEPARATOR)
$process_file = $lockdir . DS . $script_name . '.pid';

$am_i_running = AmIRunning($process_file);
if ($am_i_running) {
    exit;
}
Use semaphores:
$key = 156478953; // this should be unique for each script
$maxAcquire = 1;
$permissions = 0666;
$autoRelease = 1;      // releases the semaphore when the request shuts down (you don't have to worry about die(), exit() or return)
$non_blocking = false; // if true, fails instantly if the semaphore is not free

$semaphore = sem_get($key, $maxAcquire, $permissions, $autoRelease);
if (sem_acquire($semaphore, $non_blocking)) { // blocking (prevents simultaneous multiple executions)
    processLongCalculation();
    sem_release($semaphore);
}
See:
https://www.php.net/manual/en/function.sem-get.php
https://www.php.net/manual/en/function.sem-acquire.php
https://www.php.net/manual/en/function.sem-release.php
You can go for the solution that best fits your project; the two simple ways to achieve this are file locking or database locking.
For implementations of file locking, check http://us2.php.net/flock
If you already use a database, create a table, generate a known token for that script, put it there, and just remove it at the end of the script. To avoid problems on errors, you can use expiry times.
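For instance, a hedged sketch of that table-based approach with PDO, using an expiry column so a crashed run doesn't block the script forever (the DSN, table, and column names are made up for the example):

<?php
// Hypothetical table:
//   CREATE TABLE script_locks (token VARCHAR(64) PRIMARY KEY, expires_at DATETIME);
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$token = 'onlytask';

// Drop any lock that has passed its expiry time.
$pdo->prepare('DELETE FROM script_locks WHERE token = ? AND expires_at < NOW()')
    ->execute([$token]);

// Try to take the lock; the primary key makes this fail if it is already held.
$stmt = $pdo->prepare('INSERT INTO script_locks (token, expires_at) VALUES (?, NOW() + INTERVAL 10 MINUTE)');
try {
    $stmt->execute([$token]);
} catch (PDOException $e) {
    die("Another instance holds the lock\n");
}

// ... run the script ...

$pdo->prepare('DELETE FROM script_locks WHERE token = ?')->execute([$token]);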
Perhaps this could work for you,
http://www.electrictoolbox.com/check-php-script-already-running/
If you are using PHP on Linux, I think the most practical way is:
<?php
if (shell_exec('ps aux | grep ' . __FILE__ . ' | wc -l') > 3) {
    exit('already running...');
}
?>
Another way to do it is with a file flag and an exit callback. The exit callback ensures that the file flag is reset to 0 however PHP execution ends, including fatal errors.
<?php
function exitProcess() {
    if (file_get_contents('inprocess.txt') != '0') {
        file_put_contents('inprocess.txt', '0');
    }
}

if (file_get_contents('inprocess.txt') == '1') {
    exit();
}

file_put_contents('inprocess.txt', '1');
register_shutdown_function('exitProcess');

/**
 execute stuff
 **/
?>