How to detect whether a PHP script is already running? - php

I have a cron script that executes a PHP script every 10 minutes. The script checks a queue and processes the data in the queue. Sometimes the queue has enough data to last over 10 minutes of processing, creating the potential of two scripts trying to access the same data. I want to be able to detect whether the script is already running to prevent launching multiple copies of it. I thought about creating a database flag that says a script is processing, but if the script were ever to crash it would leave the flag in the positive state. Is there an easy way to tell whether the PHP script is already running, from within a PHP or shell script?

You can just use a lock file. PHP's flock() function is a thin wrapper around Unix's flock(2), which provides advisory locks on files.
If you don't explicitly release them, the OS will automatically release these locks for you when the process holding them terminates, even if it terminates abnormally.
You can also follow the loose Unix convention of making your lock file a 'PID file' - that is, upon obtaining a lock on the file, have your script write its PID to it. Even if you never read this from within your script, it will be convenient for you if your script ever hangs or goes crazy and you want to find its PID in order to manually kill it.
Here's a copy/paste-ready implementation:
#!/usr/bin/php
<?php
$lock_file = fopen('path/to/yourlock.pid', 'c');
if ($lock_file === false) {
    throw new Exception(
        "Unexpected error opening the lock file. Perhaps you don't have " .
        "permission to write to the lock file or its containing directory?"
    );
}
$got_lock = flock($lock_file, LOCK_EX | LOCK_NB, $wouldblock);
if (!$got_lock && !$wouldblock) {
    throw new Exception("Unexpected error locking the lock file.");
}
else if (!$got_lock && $wouldblock) {
    exit("Another instance is already running; terminating.\n");
}

// Lock acquired; let's write our PID to the lock file for the convenience
// of humans who may wish to terminate the script.
ftruncate($lock_file, 0);
fwrite($lock_file, getmypid() . "\n");

/*
The main body of your script goes here.
*/
echo "Hello, world!";

// All done; we blank the PID file and explicitly release the lock
// (although this should be unnecessary) before terminating.
ftruncate($lock_file, 0);
flock($lock_file, LOCK_UN);
Just set the path of your lock file to wherever you like and you're set.

If you need it to be absolutely crash-proof, you should use semaphores (sem_get()/sem_acquire()), which are released automatically when PHP finishes handling the request, even if the script crashes.
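A minimal sketch of that semaphore approach, assuming the sysvsem extension is available (it is bundled with PHP on most Unix builds); the ftok project id and the single-slot count are arbitrary choices here:

```php
// Derive a System V IPC key from this script's path.
$key = ftok(__FILE__, 'q');

// One slot means only one process may hold the semaphore at a time.
// The default auto_release behaviour means the OS undoes the acquire
// if the process dies without releasing it - that's the crash-proof part.
$sem = sem_get($key, 1);

// Non-blocking acquire: give up immediately if another instance holds it.
if (!sem_acquire($sem, true)) {
    exit("Another instance is already running; terminating.\n");
}

// ... process the queue ...

sem_release($sem);
```

Note the non-blocking flag to sem_acquire() requires PHP 5.6.1 or later; on older versions the call would block instead of exiting.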
A simpler approach would be to create a DB record or a file at the beginning of the execution and remove it at the end. You could always check the "age" of that record/file, and if it's older than, say, 3 times the normal script runtime, assume it crashed and remove it.
There's no "silver bullet", it just depends on your needs.
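The file variant of that age check can be sketched like this (the path and the "three times the normal runtime" threshold are placeholder choices to adapt):

```php
$lock = '/tmp/queue-processor.lock';
$maxAge = 3 * 600; // three times a normal 10-minute run, in seconds

clearstatcache();
if (file_exists($lock) && (time() - filemtime($lock)) < $maxAge) {
    // A recent lock exists; assume another instance is still working.
    exit("Another instance appears to be running; terminating.\n");
}

// Stale or missing lock: (re)claim it; touch() records the start time.
touch($lock);

// ... process the queue ...

unlink($lock);
```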

If you are running Linux, this should work at the top of your script:
$running = exec("ps aux | grep " . basename(__FILE__) . " | grep -v grep | wc -l");
if ((int)$running > 1) {
    exit;
}

A common way for *nix daemons (though not necessarily PHP scripts, but it will work) is to use a .pid file.
When the script starts, check for the existence of a .pid file named for the script (generally stored in /var/run/). If it doesn't exist, create it, set its contents to the PID of the process running the script (using getmypid), then continue with normal execution. If it does exist, read the PID from it and see whether that process is still running, for example by running ps $pid. If it is running, exit. Otherwise, overwrite its contents with your PID (as above) and continue normal execution.
When execution finishes, delete the file.
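A minimal sketch of those steps, using Linux's /proc to test whether the old pid is still alive (the path is a placeholder; /var/run usually requires root, so /tmp is used here):

```php
$pidFile = '/tmp/myscript.pid';

if (file_exists($pidFile)) {
    $oldPid = (int)trim(file_get_contents($pidFile));
    // On Linux, /proc/<pid> exists exactly when that pid is running.
    if ($oldPid > 0 && file_exists('/proc/' . $oldPid)) {
        exit("Already running as pid $oldPid\n");
    }
}

// Stale or missing pid file: record our own pid and continue.
file_put_contents($pidFile, getmypid());

// ... normal execution ...

unlink($pidFile);
```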

I know this is an old question, but in case someone else is looking here I'll post some code. This is what I did recently in a similar situation and it works well. Put this code at the top of your file, and if the same script is already running it will leave it be and end the new one.
I use it to keep a monitoring system running at all times. A cron job starts the script every 5 minutes, but unless the other one has stopped for some reason (usually because it has crashed, which is very rare!) the new one will just exit itself.
// The file to store our process ID
define('PROCESS_FILE', 'process.pid');

// Check I am running from the command line
if (PHP_SAPI != 'cli') {
    log_message('Run me from the command line');
    exit;
}

// Check if I'm already running and kill myself off if I am
$pid_running = false;
if (file_exists(PROCESS_FILE)) {
    $data = file(PROCESS_FILE);
    foreach ($data as $pid) {
        $pid = (int)$pid;
        if ($pid > 0 && file_exists('/proc/' . $pid)) {
            $pid_running = $pid;
            break;
        }
    }
}
if ($pid_running && $pid_running != getmypid()) {
    if (file_exists(PROCESS_FILE)) {
        file_put_contents(PROCESS_FILE, $pid_running);
    }
    log_message('I am already running as pid ' . $pid_running . ' so stopping now');
    exit;
} else {
    // Make sure the file has just me in it
    file_put_contents(PROCESS_FILE, getmypid());
    log_message('Written pid with id ' . getmypid());
}
It will NOT work without modification on Windows (there is no /proc filesystem), but should be fine on Unix-based systems.

You can use the LockHandler class introduced in Symfony 2.6 (replaced by the Lock component in later Symfony versions).
$lock = new LockHandler('update:contents');
if (!$lock->lock()) {
    echo 'The command is already running in another process.';
    exit;
}

This worked for me: set a database record with a lock flag and a timestamp. My script should complete well within 15 minutes, so I added a lastlocked field to check:
$lockresult = mysql_query("
    SELECT *
    FROM queue_locks
    WHERE `lastlocked` > DATE_SUB(NOW(), INTERVAL 15 MINUTE)
      AND `locked` = 'yes'
      AND `queid` = '1'
    LIMIT 1
");
$LockedRowCount = mysql_num_rows($lockresult);
if ($LockedRowCount > 0) {
    echo "this script is locked, try again later";
    exit;
} else {
    // Set the DB record to locked and carry on
    $result = mysql_query("
        UPDATE `queue_locks` SET `locked` = 'yes', `lastlocked` = CURRENT_TIMESTAMP WHERE `queid` = 1;
    ");
}
Then unlock it at the end of the script:
$result = mysql_query("UPDATE `queue_locks` SET `locked` = 'no' WHERE `queid` = 1;");

I know this is an old question, but there's an approach which hasn't been mentioned before that I think is worth considering.
One of the problems with a lockfile or database flag solution, as already mentioned, is that if the script fails for some reason other than normal completion it won't release the lock. And therefore the next instance won't start until the lock is either manually cleared or cleared by a clean-up function.
If, though, you are certain that the script should only ever be running once, then it's relatively easy to check from within the script whether it is already running when you start it. Here's some code:
function checkrun() {
    exec("ps auxww", $ps);
    $r = 0;
    foreach ($ps as $p) {
        if (strpos($p, basename(__FILE__)) !== false) {
            $r++;
            if ($r > 1) {
                echo "too many instances, exiting\n";
                exit();
            }
        }
    }
}
Simply call this function at the start of the script, before you do anything else (such as open a database handler or process an import file), and if the same script is already running then it will appear twice in the process list - once for the previous instance, and once for this one. So, if it appears more than once, just exit.
A potential gotcha here: I'm assuming that you will never have two scripts with the same basename that may legitimately run simultaneously (eg, the same script running under two different users). If that is a possibility, then you'd need to extend the checking to something more sophisticated than a simple substring on the file's basename. But this works well enough if you have unique filenames for your scripts.
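If that is a concern, one stricter variant is to match the script's full path instead of its basename, for example via pgrep -f (an assumption here: the procps pgrep utility is installed, as it is on most Linux systems):

```php
// Count processes whose full command line contains this script's
// absolute path. pgrep -f matches against the whole command line,
// and -c prints a count instead of the pids.
exec('pgrep -fc ' . escapeshellarg(__FILE__), $out, $status);
$instances = isset($out[0]) ? (int)$out[0] : 0;

// The current process counts as one match; more than one means
// another copy of this exact script is running.
if ($instances > 1) {
    echo "too many instances, exiting\n";
    exit();
}
```

Two scripts with the same basename in different directories then no longer collide, since the full path is part of the pattern.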

Assuming this is a Linux server and you have cron jobs available:
#!/bin/bash
# Check for a running script and start it if it is not already running
check=$(ps -fea | grep -v grep | grep script.php | wc -l)
date=$(date +%Y-%m-%d" "%H:%M:%S)
if [ "$check" -lt 1 ]; then
    echo "[$date] Starting script" >> /path/to/script/log/
    /sbin/script # Call the script here - see below
fi
The script file (/sbin/script above):
#!/bin/sh
/usr/bin/php /path/to/your/php/script.php

Check if a PHP script is already running
If you have long running batch processes with PHP that are run by cron and you want to ensure there’s only ever one running copy of the script, you can use the functions getmypid() and posix_kill() to check to see if you already have a copy of the process running. This post has a PHP class for checking if the script is already running.
Each process running on a Linux/Unix computer has a pid, or process identifier. In PHP this can be retrieved using getmypid(), which returns an integer. This pid number can be saved to a file, and each time the script runs a check is made to see whether the file exists. If it does, the posix_kill() function can be used to see if a process is running with that pid number.
My PHP class for doing this is below. Please feel free to use this and modify to suit your individual requirements.
class pid {
    protected $filename;
    public $already_running = false;

    function __construct($directory) {
        $this->filename = $directory . '/' . basename($_SERVER['PHP_SELF']) . '.pid';
        if (is_writable($this->filename) || is_writable($directory)) {
            if (file_exists($this->filename)) {
                $pid = (int)trim(file_get_contents($this->filename));
                if (posix_kill($pid, 0)) {
                    $this->already_running = true;
                }
            }
        } else {
            die("Cannot write to pid file '$this->filename'. Program execution halted.\n");
        }
        if (!$this->already_running) {
            $pid = getmypid();
            file_put_contents($this->filename, $pid);
        }
    }

    public function __destruct() {
        if (!$this->already_running && file_exists($this->filename) && is_writeable($this->filename)) {
            unlink($this->filename);
        }
    }
}
Use the class like this:
$pid = new pid('/tmp');
if ($pid->already_running) {
    echo "Already running.\n";
    exit;
} else {
    echo "Running...\n";
}

Inspired by Mark Amery's answer I created this class. This might help someone. Simply change the "temp/lockFile.pid" to where you want the file placed.
class ProcessLocker
{
    private $lockFile;
    private $gotLock;
    private $wouldBlock;

    function __construct()
    {
        $this->lockFile = fopen('temp/lockFile.pid', 'c');
        if ($this->lockFile === false) {
            throw new Exception("Unable to open the file.");
        }
        $this->gotLock = flock($this->lockFile, LOCK_EX | LOCK_NB, $this->wouldBlock);
    }

    function __destruct()
    {
        $this->unlockProcess();
    }

    public function isLocked()
    {
        return !$this->gotLock && $this->wouldBlock;
    }

    public function lockProcess()
    {
        if (!$this->gotLock && !$this->wouldBlock) {
            throw new Exception("Unable to lock the file.");
        }
        ftruncate($this->lockFile, 0);
        fwrite($this->lockFile, getmypid() . "\n");
    }

    public function unlockProcess()
    {
        ftruncate($this->lockFile, 0);
        flock($this->lockFile, LOCK_UN);
    }
}
Simply use the class as such in the beginning of your script:
$locker = new ProcessLocker();
if (!$locker->isLocked()) {
    $locker->lockProcess();
} else {
    // The process is locked
    exit();
}

Related

PHP lock function with fopen seems to fail

I've written a lock function to protect functions from running simultaneously. But I get the impression that the lock isn't reliable and the protected function can still run simultaneously when two instances start within a couple of seconds of each other.
My lock function:
function lockFunction($function) {
    $lock = SERVER_ROOT.'/lock/'.$function.'.lock';
    $f = @fopen($lock, 'x');
    if ($f === false) {
        # Lockfile exists, script is running
        return false;
    } else {
        # Script is not running, now locked
        fclose($f);
        return true;
    }
}
When the function is done, I unlock it:
function unlockFunction($function) {
    @unlink(SERVER_ROOT.'/lock/'.$function.'.lock');
}
How I use the function:
script.php
if (lockFunction('functiontolock')) {
    # Run code that may not be run simultaneously
    # sleep just as an example of runtime.
    sleep(10);
    unlockFunction('functiontolock');
}
Can this cause issues when script.php is run twice within e.g. 2 seconds?
Note
The server this runs on uses cluster technology, meaning that files are on a file server, the database on a different server, etc. The files are accessed via the network. I don't know if that slows down this function.

flock call within function always 'succeeds', ignoring previous lock

To prevent multiple instances of a PHP-based daemon I wrote from ever running simultaneously, I wrote a simple function to acquire a lock with flock when the process starts, and called it at the start of my daemon. A simplified version of the code looks like this:
#!/usr/bin/php
<?php
function acquire_lock () {
    $file_handle = fopen('mylock.lock', 'w');
    $got_lock_successfully = flock($file_handle, LOCK_EX);
    if (!$got_lock_successfully) {
        throw new Exception("Unexpected failure to acquire lock.");
    }
}

acquire_lock(); // Block until all other instances of the script are done...

// ... then do some stuff, for example:
for ($i = 1; $i <= 10; $i++) {
    echo "Sleeping... $i\n";
    sleep(1);
}
?>
When I execute the script above multiple times in parallel, the behaviour I expect to see - since the lock is never explicitly released throughout the duration of the script - is that the second instance of the script will wait until the first has completed before it proceeds past the acquire_lock() call. In other words, if I run this particular script in two parallel terminals, I expect to see one terminal count to 10 while the other waits, and then see the other count to 10.
This is not what happens. Instead, I see both scripts happily executing in parallel - the second script does not block and wait for the lock to be available.
As you can see, I'm checking the return value from flock and it is true, indicating that the (exclusive) lock has been acquired successfully. Yet this evidently isn't preventing another process from acquiring another 'exclusive' lock on the same file.
Why, and how can I fix this?
Simply store the file pointer resource returned from fopen in a global variable. In the example given in the question, $file_handle is automatically destroyed upon going out of scope when acquire_lock() returns, and this releases the lock taken out by flock.
For example, here is a modified version of the script from the question which exhibits the desired behaviour (note that the only change is storing the file handle returned by fopen in a global):
#!/usr/bin/php
<?php
function acquire_lock () {
    global $lock_handle;
    $lock_handle = fopen('mylock.lock', 'w');
    $got_lock_successfully = flock($lock_handle, LOCK_EX);
    if (!$got_lock_successfully) {
        throw new Exception("Unexpected failure to acquire lock.");
    }
}

acquire_lock(); // Block until all other instances of the script are done...

// ... then do some stuff, for example:
for ($i = 1; $i <= 10; $i++) {
    echo "Sleeping... $i\n";
    sleep(1);
}
?>
Note that this seems to be a bug in PHP. The changelog from the flock documentation states that in version 5.3.2:
The automatic unlocking when the file's resource handle is closed was removed. Unlocking now always has to be done manually.
but at least for PHP 5.5, this is false; flock locks are released both by explicit calls to fclose and by the resource handle going out of scope.
I reported this as a bug in November 2014 and may update this question and answer pair if it is ever resolved. In case I get eaten by piranhas before then, you can check the bug report yourself to see if this behaviour has been fixed: https://bugs.php.net/bug.php?id=68509

Make sure one copy of php script running in background

I'm using a cron job to run a PHP script every minute.
I also need to make sure only one copy is running, so if the script is still running after 2 minutes, cron should not start another instance.
Currently I have two options, and I would like your feedback and any further options you might have:
Option 1: create a tmp file when the PHP script starts and remove it when it finishes (checking for the file's existence). The problem with this option is that if the script crashes for any reason, the tmp file is never deleted and the script will not run again.
Option 2: run a bash script like the one below to control the PHP script's execution. Good, but I'm looking for something that can be done within PHP.
#!/bin/bash
function rerun {
    BASEDIR=$(dirname $0)
    echo $BASEDIR/$1
    if ps -ef | grep -v grep | grep $1; then
        echo "Running"
        exit 0
    else
        echo "NOT running"
        /usr/local/bin/php $BASEDIR/$1 &
        exit $?
    fi
}
rerun myphpscript.php
rerun myphpscript.php
PS: I just saw "Mutex class" at http://www.php.net/manual/en/class.mutex.php but not sure if it's stable and anyone tried it.
You might want to use my library ninja-mutex, which provides a simple interface for handling a mutex. Currently it can use flock, memcache, redis or mysql to handle the lock.
Below is an example which uses memcache:
<?php
require 'vendor/autoload.php';

use NinjaMutex\Lock\MemcacheLock;
use NinjaMutex\Mutex;

$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$lock = new MemcacheLock($memcache);
$mutex = new Mutex('very-critical-stuff', $lock);
if ($mutex->acquireLock(1000)) {
    // Do some very critical stuff
    // and release the lock after you finish
    $mutex->releaseLock();
} else {
    throw new Exception('Unable to gain lock!');
}
I often use the flock program that comes with many Linux distributions directly in my crontabs, like:
* * * * * flock -n /var/run/mylock.LCK /usr/local/bin/myprogram
Of course it is still possible to start two simultaneous instances of myprogram if you do it by hand, but crond will only ever start one.
flock being a small compiled binary makes it super fast to launch compared to a potentially larger chunk of PHP code. This is especially a benefit if you have many longer-running executions, which it is not perfectly clear that you actually have.
If you're not on a NFS mount, you can use flock() (http://php.net/manual/en/function.flock.php):
$fh = fopen('guestbook.txt', 'a') or die($php_errormsg);
$tries = 3;
while ($tries > 0) {
    $locked = flock($fh, LOCK_EX | LOCK_NB);
    if (!$locked) {
        sleep(5);
        $tries--;
    } else {
        // don't go through the loop again
        $tries = 0;
    }
}
if ($locked) {
    fwrite($fh, $_REQUEST['guestbook_entry']) or die($php_errormsg);
    fflush($fh) or die($php_errormsg);
    flock($fh, LOCK_UN) or die($php_errormsg);
    fclose($fh) or die($php_errormsg);
} else {
    print "Can't get lock.";
}
From: http://docstore.mik.ua/orelly/webprog/pcook/ch18_25.htm
I found the best solution for me was to create a separate database user for the script and limit that user's concurrent connections to 1.
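In MySQL that limit can be set per account; a sketch of the idea (the account name and host here are placeholders, and the syntax shown is the MySQL 5.7+ ALTER USER form):

```sql
-- Allow the batch account at most one simultaneous connection.
-- A second run of the script then fails to connect and can simply exit.
ALTER USER 'queue_runner'@'localhost' WITH MAX_USER_CONNECTIONS 1;
```

The second instance must treat the connection error as "already running" rather than as a fatal failure.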

how to synchronise multiple processes in PHP to simulate wait()/notifyAll()

I'm trying to test a race condition in PHP. I'd like to have N PHP processes get ready to do something, then block. When I say "go", they should all execute the action at the same time. Hopefully this will demonstrate the race.
In Java, I would use Object.wait() and Object.notifyAll(). What can I use in PHP?
(Either Windows or linux native answers are acceptable)
1. Create a file "wait.txt"
2. Start N processes, each with the code shown below
3. Delete the "wait.txt" file.
...
<?php
while (file_exists('wait.txt')) {}
runRaceTest();
Usually with PHP a file-lock approach is used: one creates a RUN_LOCK or similar file and checks file_exists("RUN_LOCK"). This system is also used to guard against potential endless loops in recursive threads.
I decided to require the file for execution. Another approach may be that the existence of the file invokes the blocking algorithm; that depends on your situation. The safer option should always be the easier one to achieve.
Wait code:
/* Prepare the program */
/* ... */

/* Block until it's time to go */
define("LOCK_FILE", "RUN_UNLOCK"); // I'd define this in some config.php
while (!file_exists(LOCK_FILE)) {
    usleep(1); // No sleep will eat lots of CPU
}

/* Execute the main code */
/* ... */

/* Delete the "run" file, so that no further executions are allowed */
usleep(1); // Just to be sure - we want other processes to start their execution phase too
if (file_exists(LOCK_FILE))
    unlink(LOCK_FILE);
I guess it would be nice to have a blocking function for that, like this one:
function wait_for_file($filename, $timeout = -1) {
    if ($timeout >= 0) {
        $start = microtime(true) * 1000; // Remember the start time
    }
    while (!file_exists($filename)) { // Check the file existence
        if ($timeout >= 0) { // Only calculate when a timeout is set
            if ((microtime(true) * 1000 - $start) > $timeout) // Compare current time with start time
                return false; // Return failure
        }
        usleep(1); // Save some CPU
    }
    return true; // Return success
}
It implements a timeout. You don't need one, but maybe someone else will.
Usage:
header("Content-Type: text/plain; charset=utf-8");
ob_implicit_flush(true); while (@ob_end_clean()); // Flush buffers so the output will be a live stream
define("LOCK_FILE", "RUN_FOREST_RUN"); // Define the lock file name again
echo "Starting the blocking algorithm. Waiting for file: ".LOCK_FILE."\n";
if (wait_for_file(LOCK_FILE, 10000)) { // Wait for 10 seconds
    echo "File found and deleted!\n";
    if (file_exists(LOCK_FILE)) // May have been deleted by other processes
        unlink(LOCK_FILE);
} else {
    echo "Wait failed!\n";
}
This will output:
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
Wait failed!
~or~
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
File found and deleted!
PHP doesn't have multithreading, and it's not planned to be implemented either.
You can try hacks with sockets, though, or 0MQ to communicate between multiple processes.
See Why does PHP not support multithreading?
Php multithread

getting pid of spawned exec in phing

I am using phing and running a Selenium server via ExecTask. Sometimes I need to stop the running server by killing its process.
Is there a possibility in phing of getting the PID of a process spawned in ExecTask?
No, ExecTask cannot give the pid of spawned processes directly. It can only return its exit status and output.
Maybe you can modify the command that you run in ExecTask itself to save the pid of the spawned process. You can use $! to get the pid of the most recent background command.
job1 &      # start job1 in the background; end the command with &
p1=$!       # store the pid
echo $p1    # print the pid of job1
When you want to kill the selenium server you can call this in another ExecTask :
kill pid_to_kill
I am not sure if the changes made in shell environment with ExecTask stay or not. If yes then you can use $p1. Replace pid_to_kill with $p1 to kill job1. Else you will have to echo the pid and use the value from its output.
Otherwise you will have to do pgrep name_of_program. It will list the pids of all processes matching that name; you can then kill them with pkill.
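For example (the pattern here is a placeholder; note that pgrep/pkill take a name pattern, not a pid):

```shell
# List pid and name of every process whose name matches the pattern.
# "|| true" keeps the script going when nothing matches.
pgrep -l your_script.php || true

# Kill every process whose full command line (-f) matches the pattern.
pkill -f your_script.php || true
```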
Instead of launching the process you want to kill (the Selenium server in your case) directly from the exec task, use the exec task to launch a script (I used bash, but ruby, python etc. will work too). This script starts the desired task and echoes its pid. Substitute the required path and executable you want to run in the snippet below.
#!/bin/bash
./path_to_executable/process_to_run &
echo $!
Note the "&": it sends the process to the background and allows phing to continue building your project. The last line outputs the pid, which can then be captured and saved to a file by the phing exec task. To save this pid, add the output option to the phing exec task:
<exec command="your_script" spawn="true" output="./pid.txt" />
The output option saves the output of the exec task to a pid.txt file in the current directory. Note that you may need to chown this file (to the user running phing) so that it can be read later.
In a separate task, you can read the pid from the file and then use an exec task to kill the process.
<loadfile property="pid" file="./pid.txt" />
<exec command="kill ${pid}" dir="./" />
Note: in the above you may need to prepend sudo to the kill command (depending on who owns the process and how it was started).
Optional but worth considering is adding a task to remove the pid.txt file. This will prevent any possibility of killing the wrong process (based on a stale pid). You may also want to sanity check the content of the pid.txt file since in the event of an error it could contain something other than the pid.
While this may not be the most direct or optimal solution it does work.
It is possible; you can use the second parameter of the exec command.
exec("Script To Run", $output);
The second parameter receives the output of the command as an array, one element per line. So to show the full, readable output I would use a foreach loop:
exec("ifconfig", $output); // Presuming you are developing for a Linux server
foreach ($output as $outputvar) {
    echo $outputvar . "<br>";
}
After that, I would use something like strpos to find the string you are looking for in the $output lines.
I hope this is something similar to what you are looking for.
I ended up creating a phing task that saves the pid of the launched program and stops it when you ask it to. It uses Cocur\BackgroundProcess to start the process in the background, and it can also return the pid.
<?php
require_once "phing/Task.php";

class BackgroundExecTask extends Task {
    protected $command = null;
    protected $executable = null;
    protected $id = null;
    protected static $pidMap = [];

    public function init() {
        if (!class_exists('\Cocur\BackgroundProcess\BackgroundProcess')) {
            throw new BuildException("This task requires the Cocur BackgroundProcess component installed and available on the include path", $this->getLocation());
        }
    }

    public function main() {
        switch ($this->command) {
            case "start":
                return $this->start();
            case "stop":
                return $this->stop();
        }
    }

    protected function start() {
        $process = new \Cocur\BackgroundProcess\BackgroundProcess($this->executable);
        $process->run();
        // you can also return the pid
        //$this->project->setProperty($this->pidProperty, $process->getPid());
        self::$pidMap[$this->id] = $process;
    }

    protected function stop() {
        self::$pidMap[$this->id]->stop();
    }

    public function setCommand($command)
    {
        $this->command = "" . $command;
    }

    public function setExecutable($executable)
    {
        $this->executable = "" . $executable;
    }

    public function setId($id)
    {
        $this->id = "" . $id;
    }
}
Usage:
<backgroundexec id="myprogram" command="start" executable="somebinary" />
<backgroundexec id="myprogram" command="stop" />
