What is the benefit of PCNTL-PHP for a daemon process - php

I was researching and trying to write a daemon process using PHP, and I found myself compelled to recompile PHP to enable PCNTL. Then I started to run some tests. I took the simple orphan example:
#!/usr/bin/env php
<?php
$pid = pcntl_fork();
if ($pid === -1) {
    echo("Could not fork! \n");
    die;
} elseif ($pid) {
    echo("shell root tree \n");
} else {
    echo "Child process \n";
    chdir("/");
    fclose(STDIN);
    fclose(STDOUT);
    fclose(STDERR);
    $STDIN  = fopen('/dev/null.txt', 'r');
    $STDOUT = fopen('/dev/null.txt', 'wb');
    $STDERR = fopen('/dev/null.txt', 'wb');
    posix_setsid();
    while (1) {
        echo ".";
        sleep(1);
    }
}
Then I ran the script:
$ cd /var/www
$ ./test.php
Everything went well: the file /dev/null.txt was cleared and then updated by the infinite loop every second.
Then I wondered about the benefit of PCNTL, so I changed the code:
#!/usr/bin/env php
<?php
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);
$STDIN  = fopen('/dev/null.txt', 'r');
$STDOUT = fopen('/dev/null.txt', 'wb');
$STDERR = fopen('/dev/null.txt', 'wb');
while (1) {
    echo ".";
    sleep(1);
}
Both of the previous examples gave me the same results.
Have I missed something? Can you guide me?

Both your examples do basically the same thing, except that the first one forks before continuing. Forking is how processes become daemons in UNIX and its derivatives.
Since forking leaves the parent and child processes sharing the same STDIN, STDOUT and STDERR descriptors, it's common to just close them, as you did.
In your trivial example, forking serves no purpose. Because you fopen() three times while no other descriptors are open, the new handles become descriptors 0, 1 and 2, matching input, output and error, which is why your echo "."; ends up in that file.
Moreover, /dev/null.txt is just a regular file that happens to have that name, not the special /dev/null null device.
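Where forking actually pays off is in the full daemonization sequence: the parent exits so the shell gets its prompt back, and the child calls posix_setsid() to detach from the controlling terminal (setsid() fails in a process group leader, which is why the fork must come first). A minimal sketch of the usual sequence, assuming the PCNTL and POSIX extensions, and using the real /dev/null device rather than a file named like it:
#!/usr/bin/env php
<?php
// Minimal daemonization sketch (assumes the PCNTL and POSIX extensions).
$pid = pcntl_fork();
if ($pid === -1) {
    die("Could not fork!\n");
} elseif ($pid) {
    exit(0); // parent exits; the shell gets its prompt back
}

// Child: become session leader, detached from the controlling terminal.
// This must happen after the fork, since a process group leader cannot setsid().
posix_setsid();

chdir('/'); // don't keep any mount point busy
umask(0);

// Replace the standard descriptors with the real null device.
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);
$stdin  = fopen('/dev/null', 'r'); // becomes descriptor 0
$stdout = fopen('/dev/null', 'w'); // becomes descriptor 1
$stderr = fopen('/dev/null', 'w'); // becomes descriptor 2

while (true) {
    // daemon work goes here
    sleep(1);
}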

Related

PHP: Need to close STDIN in order to read STDOUT?

I recently tried to communicate with a binary on my Ubuntu web server [1] using the PHP function proc_open. I can establish a connection and define the pipes STDIN, STDOUT, and STDERR. Nice.
Now the binary I am talking to is an interactive computer algebra system, so I would like to keep both STDOUT and STDIN alive after the first command, such that I can still use the application a few lines later in an interactive manner (direct user input from a web front end).
However, as it turns out, the PHP functions to read the STDOUT of the binary (either stream_get_contents or fgets) need a closed STDIN before they can work. Otherwise the program deadlocks.
This is a severe drawback since I cannot just reopen STDIN after closing it. So my question is: why does my script deadlock when I read STDOUT while STDIN is still alive?
Thanks
Jens
[1] proc_open returns false but does not write in error file - permissions issue?
my source:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "./error.log", "a")
);
// define current working directory where files would be stored
$cwd = './';
// open reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);
if (is_resource($process)) {
    // some valid Reduce commands
    fwrite($pipes[0], 'load excalc; operator x; x(0) := t; x(1) := r;');
    // if the following line is removed, the script deadlocks
    fclose($pipes[0]);
    echo "output: " . stream_get_contents($pipes[1]);
    // close remaining pipe & close process
    fclose($pipes[1]);
    proc_close($process);
}
EDIT:
The following code kind of works. Kind of, because it uses usleep calls to wait for the non-blocking STDOUT to be filled with data. How do I do that more elegantly?
@Elias: By polling the $status['running'] entry you can only determine whether the overall process is still running, not whether the process is busy or idling... That is why I have to include these usleep calls.
define('TIMEOUT_IN_MS', '100');
define('TIMEOUT_STEPS', '100');

function getOutput($pipes) {
    $result = "";
    $stage = 0;
    $buffer = 0;
    do {
        $char = fgets($pipes[1], 4096);
        if ($char != null) {
            $buffer = 0;
            $stage = 1;
            $result .= $char;
        } else if ($stage == 1) {
            usleep(TIMEOUT_IN_MS / TIMEOUT_STEPS);
            $buffer++;
            if ($buffer > TIMEOUT_STEPS) {
                $stage++;
            }
        }
    } while ($stage < 2);
    return $result;
}
$descriptorspec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
// define current working directory where files would be stored
$cwd = './';
// open reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);
if (is_resource($process)) {
    stream_set_blocking($pipes[1], 0);
    echo "startup output:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'on output; load excalc; operator x; x(0) := t; x(1) := r;' . PHP_EOL);
    echo "output 1:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'coframe o(t) = sqrt(1-2m/r) * d t, o(r) = 1/sqrt(1-2m/r) * d r with metric g = -o(t)*o(t) + o(r)*o(r); displayframe;' . PHP_EOL);
    echo "output 2:<br><pre>" . getOutput($pipes) . "</pre>";
    // close pipes & close process
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);
}
This reminds me of a script I wrote a while back. While it might serve as inspiration to you (or others), it doesn't do what you need. What it does contain is an example of how you can read the output of a stream, without having to close any of the streams.
Perhaps you can apply the same logic to your situation:
$allInput = array(
    'load excalc; operator x; x(0) := t; x(1) := r;'
); // array with strings to pass to proc
if (is_resource($process))
{
    $output = '';
    $input = array_shift($allInput);
    do
    {
        usleep(200); // make sure the running process is ready
        fwrite(
            $pipes[0],
            $input . PHP_EOL, // add EOL
            strlen($input) + 1
        );
        fflush($pipes[0]); // flush buffered data, write to stream
        usleep(200);
        $status = proc_get_status($process);
        while (($out = fread($pipes[1], 1024)) && !feof($pipes[1])) {
            $output .= $out;
        }
    } while ($status['running'] && ($input = array_shift($allInput)));
    // proc_close & fclose calls here
}
Now, seeing as I don't know what it is exactly you are trying to do, this code will need to be tweaked quite a bit. You may, for example, find yourself having to set the STDIN and STDOUT pipes as non-blocking.
It's a simple matter of adding this, right after calling proc_open, though:
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
Play around, have fun, and perhaps let me know if this answer was helpful in any way...
My guess would be that you're doing everything correctly, except that the binary is never notified that it has received all the input and can start to work. By closing STDIN, you kick off the work, because it is then clear that there will be no more input. If you don't close STDIN, the binary keeps waiting for more input while your side waits for its output.
You probably need to end your input with a newline, or whatever other protocol action is expected of you. Or perhaps closing STDIN is the action that's expected. Unless the process is specifically designed to stay open and keep consuming input, you can't force it to. If the process reads all input, processes it, returns output and then quits, there's no way to make it stay alive to process more input later. If the process explicitly supports that behaviour, there should be documentation on how to delimit your input.
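As a more elegant alternative to the usleep() polling in the question's EDIT, stream_select() blocks until data is actually available on the pipe or a timeout expires. A minimal sketch, assuming $pipes[1] was made non-blocking with stream_set_blocking() as in the question's code (the timeout value is illustrative):
// Collect whatever output is currently available on a non-blocking pipe,
// waiting up to $timeoutMs for each chunk instead of sleeping blindly.
function readAvailable($pipe, $timeoutMs = 100) {
    $result = '';
    while (true) {
        $read = array($pipe);
        $write = $except = array();
        // Blocks until $pipe is readable or the timeout expires.
        $ready = stream_select($read, $write, $except, 0, $timeoutMs * 1000);
        if ($ready === false || $ready === 0) {
            break; // error or timeout: assume the process is done talking for now
        }
        $chunk = fread($pipe, 4096);
        if ($chunk === false || $chunk === '') {
            break; // EOF or nothing left to read
        }
        $result .= $chunk;
    }
    return $result;
}
Calls like getOutput($pipes) in the EDIT could then be replaced with readAvailable($pipes[1]).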

Ensure that there is only one running PHP process on Linux

I have a PHP script running every minute from cron, but sometimes it takes longer than one minute to finish.
My question is: what is the best way to ensure that only one instance is running at any given time?
I use this code:
$output = shell_exec('ps aux | grep some_script.php | grep -v grep'); // get all processes containing "some_script.php" and exclude the current grep process
$trimmed = rtrim($output, PHP_EOL); // trim the newline at the end
$processes = explode(PHP_EOL, $trimmed); // get the array of lines (i.e. processes)
$procCnt = count($processes); // get the number of lines
if ($procCnt > 2) {
    echo "busy\n";
    exit(); // exit if there are more than 2 processes (see explanation below)
}
If there is one some_script process running from cron, shell_exec returns something like this:
apache 13593 0.0 0.0 9228 1068 ? Ss 18:20 0:00 /bin/bash -c php -f /srv/www/robot/some_script.php 2>&1
apache 13602 0.0 0.0 290640 10544 ? S 18:20 0:00 php -f /srv/www/robot/some_script.php
so if I have more than 2 lines in the output, I call exit().
I want to ask: am I on the right track? Or is there a better way?
Any help would be appreciated.
Checking that way creates a race condition: two processes can retrieve the list at the same time and then both decide to exit. Depending on what you're trying to do, this may or may not be a problem.
A possible better alternative is to create a lock of some sort. A simple one I've used is a directory that only exists while the process is running - mkdir is atomic, so it will either succeed (no other process is running) or fail (another process has already created it). Just make sure to remove it when complete:
if (!mkdir("lock_dir")) {
    echo "busy\n";
    exit();
}
register_shutdown_function(function() {
    rmdir("lock_dir");
});
Or better, it looks like flock was made for a similar purpose. This is the example from the manual:
<?php
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    ftruncate($fp, 0);     // truncate file
    fwrite($fp, "Write something here\n");
    fflush($fp);           // flush output before releasing the lock
    flock($fp, LOCK_UN);   // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
?>
Just hold the lock for the script's runtime, similar to my first example.
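For the cron case specifically, you want the non-blocking variant, so a second instance exits immediately rather than queueing behind the first. A minimal sketch (the lock path is illustrative):
$fp = fopen('/tmp/some_script.lock', 'c'); // 'c' creates the file if missing, without truncating it
if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
    echo "busy\n";
    exit(1);
}

// ... the script's actual work goes here ...

// The kernel drops the lock automatically when the process exits,
// even after a crash, so no stale-lock cleanup is needed.
flock($fp, LOCK_UN);
fclose($fp);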
The simplest method is to create a file when the process starts and delete it at the very end; check whether the file exists, and either proceed and create it, or die.
Another option is to limit Apache's MaxClients to one (if that is a valid option in your case).
For OOP style, I use a separate helper class.
flock — Portable advisory file locking
register_shutdown_function — Register a function for execution on shutdown
<?php
class ProcessHelper
{
    const PIDFILE = 'yourProcessName.pid';

    public static function isLocked()
    {
        $fp = fopen(self::PIDFILE, 'w');
        if (!flock($fp, LOCK_EX | LOCK_NB, $wouldBlock)) {
            if ($wouldBlock) {
                // this file is locked by another process
                var_dump('DO NOTHING');
                return true;
            }
        } else {
            var_dump('Do something and remove PID file');
            register_shutdown_function(function () {
                unlink(self::PIDFILE);
            });
            sleep(5); // for testing
        }
        return false;
    }
}

class FooService
{
    public function init()
    {
        if (!ProcessHelper::isLocked()) {
            var_dump('count 10000');
            for ($i = 0; $i < 10000; $i++) {
                echo $i;
            }
        }
    }
}

$foo = new FooService();
$foo->init();

How to prevent PHP script running more than once?

I am currently trying to prevent an onlytask.php script from running more than once:
$fp = fopen("/tmp/" . "onlyme.lock", "a+");
if (flock($fp, LOCK_EX | LOCK_NB)) {
    echo "task started\n";
    //
    while (true) {
        // do something lengthy
        sleep(10);
    }
    //
    flock($fp, LOCK_UN);
} else {
    echo "task already running\n";
}
fclose($fp);
and there is a cron job to execute the above script every minute:
* * * * * php /usr/local/src/onlytask.php
It works for a while. But after a few days, when I do:
ps auxwww | grep onlytask
I find that there are two instances running! Not three or more, not one. I kill one of the instances. After a few days, there are two instances again.
What's wrong with the code? Are there other alternatives to ensure that only one instance of onlytask.php is running?
P.S. My /tmp/ folder is not cleaned up. ls -al /tmp/*.lock shows that the lock file was created on day one:
You should use the x flag when opening the lock file:
<?php
$lock = '/tmp/myscript.lock';
$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Do processing
    while (true) {
        echo "Working\n";
        sleep(2);
    }
    fclose($f);
    unlink($lock);
}
Note from the PHP manual
'x' - Create and open for writing only; place the file pointer at the
beginning of the file. If the file already exists, the fopen() call
will fail by returning FALSE and generating an error of level
E_WARNING. If the file does not exist, attempt to create it. This is
equivalent to specifying O_EXCL|O_CREAT flags for the underlying
open(2) system call.
And here is the O_EXCL explanation from the man page:
O_EXCL - If O_CREAT and O_EXCL are set, open() shall fail if the file
exists. The check for the existence of the file and the creation of
the file if it does not exist shall be atomic with respect to other
threads executing open() naming the same filename in the same
directory with O_EXCL and O_CREAT set. If O_EXCL and O_CREAT are set,
and path names a symbolic link, open() shall fail and set errno to
[EEXIST], regardless of the contents of the symbolic link. If O_EXCL
is set and O_CREAT is not set, the result is undefined.
UPDATE:
A more reliable approach: run a main script which acquires the lock, runs the worker script, and releases the lock.
<?php
// File: main.php
$lock = '/tmp/myscript.lock';
$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Spawn worker which does processing (redirect stderr to stdout)
    $worker = './worker 2>&1';
    $output = array();
    $retval = 0;
    exec($worker, $output, $retval);
    echo "Worker exited with code: $retval\n";
    echo "Output:\n";
    echo implode("\n", $output) . "\n";
    // Clean up the lock
    fclose($f);
    unlink($lock);
}
Here goes the worker. Let's raise a fake fatal error in it:
#!/usr/bin/env php
<?php
// File: worker (must be executable +x)
for ($i = 0; $i < 3; $i++) {
    echo "Processing $i\n";
    if ($i == 2) {
        // Fake fatal error
        trigger_error("Oh, fatal error!", E_USER_ERROR);
    }
    sleep(1);
}
Here is the output I got:
galymzhan@atom:~$ php main.php
Worker exited with code: 255
Output:
Processing 0
Processing 1
Processing 2
PHP Fatal error: Oh, fatal error! in /home/galymzhan/worker on line 8
PHP Stack trace:
PHP 1. {main}() /home/galymzhan/worker:0
PHP 2. trigger_error() /home/galymzhan/worker:8
The main point is that the lock file is cleaned up properly so you can run main.php again without problems.
Now I check whether the process is running with ps and wrap the PHP script in a bash script:
#!/bin/bash
PIDS=`ps aux | grep onlytask.php | grep -v grep`
if [ -z "$PIDS" ]; then
    echo "Starting onlytask.php ..."
    php /usr/local/src/onlytask.php >> /var/log/onlytask.log &
else
    echo "onlytask.php already running."
fi
and run the bash script by cron every minute.
<?php
$sLock = '/tmp/yourScript.lock';
if (file_exists($sLock)) {
    die('There is a lock file');
}
file_put_contents($sLock, 1);

// A lot of code

unlink($sLock);
You can add an extra check by writing the PID into the file and then checking it inside the file_exists branch.
To secure it even more, you can fetch all running processes with ps fax and check whether that PID is in the list.
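A sketch of that PID check, assuming the POSIX extension is available. posix_kill() with signal 0 sends nothing but reports whether the PID exists; note that a recycled PID can still produce a false positive:
$sLock = '/tmp/yourScript.lock';
if (file_exists($sLock)) {
    $oldPid = (int) file_get_contents($sLock);
    // Signal 0 performs no action; it only checks that the process exists.
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {
        die('There is a lock file and its process is still alive');
    }
    unlink($sLock); // stale lock left behind by a dead process
}
file_put_contents($sLock, getmypid());

// A lot of code

unlink($sLock);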
Try using the presence of the file, rather than flock:
$lockFile = "/tmp/" . "onlyme.lock";
if (!file_exists($lockFile)) {
    touch($lockFile);
    echo "task started\n";
    //
    // do something lengthy
    //
    unlink($lockFile);
} else {
    echo "task already running\n";
}
You can use lock files, as some have suggested, but what you are really looking for is the PHP Semaphore functions. These are kind of like file locks, but designed specifically for what you are doing, restricting access to shared resources.
Never use unlink on lock files, or other functions like rename. It breaks your LOCK_EX on Linux: after an unlink or rename of the lock file, any other script will always get true from flock().
The best way to detect a previous clean exit is to write a few bytes to the lock file at the end, before LOCK_UN, and then, after acquiring LOCK_EX, read those bytes back and ftruncate the handle.
Important note: all of this was tested on PHP 5.4.17 on Linux and 5.4.22 on Windows 7.
Example code:
Set the semaphore:
$handle = fopen($lockFile, 'c+');
if (!is_resource($handle) || !flock($handle, LOCK_EX | LOCK_NB)) {
    if (is_resource($handle)) {
        fclose($handle);
    }
    $handle = false;
    echo SEMAPHORE_DENY;
    exit;
} else {
    $data = fread($handle, 2);
    if ($data !== 'OK') {
        $timePreviousEnter = fileatime($lockFile);
        echo SEMAPHORE_ALLOW_AFTER_FAIL;
    } else {
        echo SEMAPHORE_ALLOW;
    }
    fseek($handle, 0);
    ftruncate($handle, 0);
}
Leave the semaphore (better called in a shutdown handler):
if (is_resource($handle)) {
    fwrite($handle, 'OK');
    flock($handle, LOCK_UN);
    fclose($handle);
    $handle = false;
}
Added a check for old stale locks to galymzhan's answer (not enough *s to comment), so that if the process dies, old lock files will be cleared after three minutes, letting cron start the process again. That's what I use:
<?php
$lock = '/tmp/myscript.lock';
if (file_exists($lock) && time() - filemtime($lock) > 180) {
    // remove stale locks older than 180 seconds
    unlink($lock);
}
$f = fopen($lock, 'x');
if ($f === false) {
    die("\nCan't acquire lock\n");
} else {
    // Do processing
    while (true) {
        echo "Working\n";
        sleep(2);
    }
    fclose($f);
    unlink($lock);
}
You can also add a timeout to the cron job so that the php process will be killed after, let's say 60 seconds, with something like:
* * * * * user timeout -s 9 60 php /dir/process.php >/dev/null

PDFTK with php://memory

Question: Is it possible to use php://memory in an exec or passthru command?
I can use PHP variables in exec or passthru with no problem, but I am having trouble with php://memory.
Background:
I am trying to eliminate all of my temporary PDF file writing with PDFTK. Currently:
1) I write a temporary fdf file
2) form-fill a temporary pdf file using #1
3) repeat #1 and #2 for all the pdfs
4) merge all pdfs together.
This works, but it creates a lot of files and is the bottleneck.
I would like to speed things up with pdftk by making use of the virtual file php://memory.
First, I am trying to just virtualize the fdf file used in #1. Answering this alone is enough for a 'correct answer'. :)
The code is as follows:
$fdf = 'fdf file contents here';
$tempFdfVirtual = fopen("php://memory", 'r+');
if ($tempFdfVirtual) {
    fwrite($tempFdfVirtual, $fdf);
} else {
    echo "Failure to open temporary fdf file";
    exit;
}
rewind($tempFdfVirtual);

$url = "unfilled.pdf";
$temppdf_fn = "output.pdf";
$command = "pdftk $url fill_form $tempFdfVirtual output $temppdf_fn flatten";
$error = "";
exec($command, $error);
if ($error != "") {
    $_SESSION['err'] = $error;
} else {
    $_SESSION['err'] = 0;
}
I am getting an error code of 1. If I do a stream_get_contents($tempFdfVirtual), it shows the contents.
Thanks for looking!
php://memory and php://temp (and in fact any file descriptor) are only available to the currently running PHP process. Besides, $tempFdfVirtual is a resource handle, so it makes no sense to put it in a string.
You should pass the data from your resource handle to the process through its standard input. You can do this with proc_open, which gives you more control over input and output to the child process than exec.
Note that for some reason you can't pass a php://memory file descriptor to a process. PHP will complain:
Warning: proc_open(): cannot represent a stream of type MEMORY as a File Descriptor
Use php://temp instead, which is supposed to be exactly the same except that it switches to a temporary file once the stream grows big enough.
This is a tested example that illustrates the general pattern of code that uses proc_open(). This should be wrapped up in a function or other abstraction:
$testinput = "THIS IS A TEST STRING\n";

$fp = fopen('php://temp', 'r+');
fwrite($fp, $testinput);
rewind($fp);

$cmd = 'cat';
$dspec = array(
    0 => $fp,
    1 => array('pipe', 'w'),
);
$pp = proc_open($cmd, $dspec, $pipes);

// busy-wait until the process is finished running.
do {
    usleep(10000);
    $stat = proc_get_status($pp);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // index in $pipes will match index in $dspec
    // note only descriptors created by proc_open will be in $pipes
    // i.e. $dspec indexes with an array value.
    $output = stream_get_contents($pipes[1]);
    if ($output == $testinput) {
        echo "TEST PASSED!!";
    } else {
        echo "TEST FAILED!! Output does not match input.";
    }
} else {
    echo "TEST FAILED!! Process has non-zero exit status.";
}

// cleanup
// close pipes first, THEN close the process handle.
foreach ($pipes as $pipe) {
    fclose($pipe);
}
// Only file descriptors created by proc_open() will be in $pipes.
// We still need to close file descriptors we created ourselves and
// passed to it. We can do this before or after proc_close().
fclose($fp);
proc_close($pp);
Untested Example specific to your use of PDFTK:
// Command takes input from STDIN
$command = "pdftk unfilled.pdf fill_form - output tempfile.pdf flatten";

$descriptorspec = array(
    0 => $tempFdfVirtual, // feed stdin of the process from this file descriptor
    // 1 => array('pipe', 'w'), // you can also grab stdout from a pipe, no need for a temp file
);
$prochandle = proc_open($command, $descriptorspec, $pipes);

// busy-wait until it finishes running
do {
    usleep(10000);
    $stat = proc_get_status($prochandle);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // ran successfully
    // output is in that filename
    // or in the file handle in $pipes if you told the command to write to stdout.
}

// cleanup
foreach ($pipes as $pipe) {
    fclose($pipe);
}
proc_close($prochandle);
It's not just that you're using php://memory; it's any file handle. File handles only exist for the current process. For all intents and purposes, the handle you get back from fopen cannot be transferred to any place outside your script.
As long as you're working with an outside application, you're pretty much stuck using temporary files. Your only other option is to try to pass the data to pdftk on stdin and retrieve the output on stdout (if it supports that). As far as I know, the only way to invoke an external process with that kind of access to its descriptors (stdin/stdout) is the proc_ family of functions, specifically proc_open.

PHP locking / making sure a given script is only running once at any given time

I'm trying to write a PHP script that I want to ensure only has a single instance of it running at any given time. All of this talk about different ways of locking, and race conditions, and etc. etc. etc. is giving me the willies.
I'm confused as to whether lock files are the way to go, or semaphores, or using MySQL locks, or etc. etc. etc.
Can anyone tell me:
a) What is the correct way to implement this?
AND
b) Point me to a PHP implementation (or something easy to port to PHP?)
One way is to use the PHP function flock with a dummy file that acts as a watchdog. At the beginning of the job, if the file already carries a LOCK_EX flag, the script can either exit or wait.
PHP flock documentation: http://php.net/manual/en/function.flock.php
For these examples, a file called lock.txt must be created first.
Example 1: if another twin process is running, this one will quit cleanly without retrying, printing a status message. It reports an error state if the file lock.txt isn't reachable.
<?php
$fp = fopen("lock.txt", "r+");
if (!flock($fp, LOCK_EX | LOCK_NB, $blocked)) {
    if ($blocked) {
        // another process holds the lock
        echo "Couldn't get the lock! Other script in run!\n";
    } else {
        // couldn't lock for another reason, e.g. no such file
        echo "Error! Nothing done.";
    }
} else {
    // lock obtained
    ftruncate($fp, 0); // truncate file
    // Your job here
    echo "Job running!\n";
    // Leave a breather
    sleep(3);
    fflush($fp); // flush output before releasing the lock
    flock($fp, LOCK_UN); // release the lock
}
fclose($fp); // free the handle
Example 2, FIFO (first in, first out): we want the process to wait and execute after any queued processes:
<?php
$fp = fopen("lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    ftruncate($fp, 0); // truncate file
    // Your job here
    echo "Job running!\n";
    // Leave a breather
    sleep(3);
    fflush($fp); // flush output before releasing the lock
    flock($fp, LOCK_UN); // release the lock
}
fclose($fp);
It is also doable with fopen in x mode, creating the file at the start and erasing it when the script ends.
Create and open for writing only; place the file pointer at the
beginning of the file. If the file already exists, the fopen() call
will fail by returning FALSE
http://php.net/manual/en/function.fopen.php
However, in a Unix environment, for finer control, I found it easier to record the PIDs of all background scripts with getmypid() in a DB, or in a separate JSON file.
When a task ends, the script is responsible for declaring its state in that file (e.g. success/failure/debug info, etc.) and then removing its PID. In my view this makes it simpler to build admin tools and daemons. Use posix_kill() to kill a PID from PHP if necessary.
Micro-Services are composed using Unix-like pipelines.
Services can call services.
https://en.wikipedia.org/wiki/Microservices
See also: Prevent PHP script using up all resources while it runs?
// borrowed from 2 answers on stackoverflow
function IsProcessRunning($pid) {
    return shell_exec("ps aux | grep " . $pid . " | wc -l") > 2;
}

function AmIRunning($process_file) {
    // Check I am running from the command line
    if (PHP_SAPI != 'cli') {
        error('Run me from the command line'); // error() is an assumed logging helper
        exit;
    }
    // Check if I'm already running and kill myself off if I am
    $pid_running = false;
    $pid = 0;
    if (file_exists($process_file)) {
        $data = file($process_file);
        foreach ($data as $pid) {
            $pid = (int)$pid;
            if ($pid > 0 && IsProcessRunning($pid)) {
                $pid_running = $pid;
                break;
            }
        }
    }
    if ($pid_running && $pid_running != getmypid()) {
        if (file_exists($process_file)) {
            file_put_contents($process_file, $pid);
        }
        info('I am already running as pid ' . $pid . ' so stopping now'); // info() is an assumed logging helper
        return true;
    } else {
        // Make sure the file has just me in it
        file_put_contents($process_file, getmypid());
        info('Written pid with id ' . getmypid());
        return false;
    }
}

/*
 * Make sure there is only one instance running at a time
 */
$lockdir = '/data/lock';
$script_name = basename(__FILE__, '.php');
// The file to store our process id in
$process_file = $lockdir . DS . $script_name . '.pid'; // DS is assumed to be DIRECTORY_SEPARATOR
$am_i_running = AmIRunning($process_file);
if ($am_i_running) {
    exit;
}
Use semaphores:
$key = 156478953; //this should be unique for each script
$maxAcquire = 1;
$permissions =0666;
$autoRelease = 1; //releases semaphore when request is shut down (you dont have to worry about die(), exit() or return
$non_blocking = false; //if true, fails instantly if semaphore is not free
$semaphore = sem_get($key, $maxAcquire, $permissions, $autoRelease);
if (sem_acquire($semaphore, $non_blocking )) //blocking (prevent simultaneous multiple executions)
{
processLongCalculation();
}
sem_release($semaphore);
See:
https://www.php.net/manual/en/function.sem-get.php
https://www.php.net/manual/en/function.sem-acquire.php
https://www.php.net/manual/en/function.sem-release.php
You can go for the solution that best fits your project; the two simple ways to achieve this are file locking and database locking.
For implementations of file locking, check http://us2.php.net/flock
If you already use a database, create a table, generate a known token for that script, put it there, and just remove it after the end of the script. To avoid problems on errors, you can use expiry times.
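A sketch of that database variant, assuming PDO with MySQL and a hypothetical script_locks table (the table name, token, and credentials are placeholders):
// Assumed table: CREATE TABLE script_locks (token VARCHAR(64) PRIMARY KEY, expires_at DATETIME)
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$token = 'onlytask.php'; // known token for this script

// Clear any expired lock left behind by a crashed run.
$pdo->prepare('DELETE FROM script_locks WHERE token = ? AND expires_at < NOW()')
    ->execute(array($token));

// The primary key makes the insert atomic: only one process can succeed.
$stmt = $pdo->prepare('INSERT IGNORE INTO script_locks (token, expires_at)
                       VALUES (?, NOW() + INTERVAL 10 MINUTE)');
$stmt->execute(array($token));
if ($stmt->rowCount() === 0) {
    exit("already running\n");
}

register_shutdown_function(function () use ($pdo, $token) {
    $pdo->prepare('DELETE FROM script_locks WHERE token = ?')->execute(array($token));
});

// ... long-running work here ...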
Perhaps this could work for you,
http://www.electrictoolbox.com/check-php-script-already-running/
If you are using PHP on Linux, I think the most practical way is:
<?php
if (shell_exec('ps aux | grep ' . __FILE__ . ' | wc -l') > 3) {
    exit('already running...');
}
?>
Another way to do it is with a file flag and an exit callback. The exit callback ensures that the file flag is reset to 0 however PHP execution ends, including on fatal errors.
<?php
function exitProcess() {
    if (file_get_contents('inprocess.txt') != '0') {
        file_put_contents('inprocess.txt', '0');
    }
}

if (file_get_contents('inprocess.txt') == '1') {
    exit();
}

file_put_contents('inprocess.txt', '1');
register_shutdown_function('exitProcess');

/**
execute stuff
**/
?>
