running a single process in the background at all times - php

How can I run a process in the background at all times?
I want to create a process that manages some work queues based on info from a database.
I'm currently doing this with a cron job that runs every minute and makes 30 calls with a sleep(2) interval between them. While this works OK, I've noticed a race condition from time to time.
Is it possible to just keep the same process running all the time? I would still have the cron job attempt to start it periodically, but it would shut down if it sees itself already running.
Or is this a bad idea? Is there any possibility of a memory leak or other issues occurring?

Some years ago, before I knew about message-queue systems, Node.js, and similar tools, I used code like the following and added it to cron to run every minute:
<?php
// Path to the lock file, e.g. /home/user1/bin/cronjob1.php.lock
define('LOCK_FILE', __DIR__ . "/" . basename(__FILE__) . '.lock');

// Check whether a previous instance of this script is still running.
function isLocked()
{
    // Lock file exists, but is the process it points to still alive?
    if (is_file(LOCK_FILE)) {
        $pid = trim(file_get_contents(LOCK_FILE)); // PID stored in the .lock file
        $pids = explode("\n", trim(`ps -e | awk '{print $1}'`)); // currently running PIDs
        if (in_array($pid, $pids)) {
            return true; // previous process is still running
        }
    }
    // Write a fresh lock file containing our own PID.
    file_put_contents(LOCK_FILE, getmypid() . "\n");
    return false; // previous process was not running
}

// Exit if a previous instance of this script holds the lock.
if (isLocked()) {
    die("Already running.\n");
}

// From this point on we are the only running instance.
set_time_limit(0);

// Clean up the lock file on shutdown; a statement placed after the
// infinite loop below would never be reached.
register_shutdown_function(function () {
    unlink(LOCK_FILE);
});

while (true) {
    // some ops
    sleep(1);
}
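For completeness, the cron side of this pattern is a single crontab entry that attempts a start every minute; the lock check above makes the redundant starts exit immediately. The PHP binary and script paths below are placeholders:

```shell
# hypothetical crontab entry: try to start the worker every minute
* * * * * /usr/bin/php /home/user1/bin/cronjob1.php >/dev/null 2>&1
```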

You could use a tool called forever, which requires Node.js.
Once you have Node installed, install forever with:
$ [sudo] npm install forever -g
To run a script forever:
forever start app.js

Related

prevent multiple instances of same php script

I am executing the following bash script on an Ubuntu 16.04 virtual machine at startup, via rc.local.
#!/bin/bash
# Loop forever (until break is issued)
(while true; do
sudo php /var/www/has/index.php external communication
done ) &
As you can see, the bash script executes a PHP script continuously. Over time, the script might take longer to execute, and scripts like the one above keep starting even though another instance of the same script is still running. So, I want to know: how can I prevent a new instance of the PHP script from executing if an existing instance is running?
You can use file locking to acquire an exclusive lock. If the lock is already held, you can end the script or wait until the lock is released.
I suggest you read up on http://php.net/manual/en/function.flock.php
$fp = fopen("/tmp/lock.txt", "c"); // "c" creates the file if needed; "r+" fails when it doesn't exist
if (flock($fp, LOCK_EX | LOCK_NB)) { // exclusive lock; LOCK_NB returns immediately instead of blocking
    // Execute logic
    flock($fp, LOCK_UN);
} else {
    echo "Couldn't get the lock!";
}

Why does PHP child process become zombie?

I have a script that does several tasks.
In order to avoid timeouts, memory limits, cross-variable issues, and so on, I decided to have a main script that forks all the tasks as separate PHP processes.
I can run the main script manually and it works fine.
I can run every single child process manually and each works fine.
However, from time to time I see that some of the child processes run forever and I have to kill them from top.
Does anybody know why a PHP process executed via the CLI would become a zombie, failing to close itself and also keeping the main process alive?
The Spawn process:
foreach ($OPS as $OP) {
    $command = $PHP_BIN . " " . __DIR__ . "/process_this.php op_id=" . $OP["id"];
    exec($command);
    sleep(5);
}
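There is no answer recorded here, but one hedged guess: exec() waits for each command to finish, so a child that never exits also keeps the main script alive. A common workaround, sketched below as an assumption rather than the poster's fix, is to background the child through the shell and redirect its output; the launcher then returns immediately, and the detached child is reparented instead of lingering under the main script (sleep stands in for the real php process_this.php command):

```shell
#!/bin/sh
# Sketch: launch a worker detached so the launcher does not block.
launch_worker() {
    # the redirection matters: without it the caller can still block
    # reading the child's output even though "&" backgrounds it
    sh -c "sleep 1 >/dev/null 2>&1 &"   # stand-in for: php process_this.php op_id=$1
}

start=$(date +%s)
launch_worker 42
end=$(date +%s)
echo $((end - start))   # the call returns immediately, not after the worker
```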

Check if a php file command is already running on cron

I have a cron job that executes every minute, but I discovered that it's running multiple times. I'm looking for a way to check whether the process is still running and, if so, not start a new one, or else to terminate the already running process before starting a new one.
If you want a PHP solution, the simple way is to create a lock file: each time the script is executed, check whether the file exists and exit if it does; if not, let the script run to the end. But I think it's better to use flock in the cron instruction ;)
<?php
$filename = "myscript.lock";
$lifelimit = 120; // lock lifetime in seconds, to recover from errors

/* check the lifetime of the lock file, if it exists */
if (file_exists($filename)) {
    $lifetime = time() - filemtime($filename);
} else {
    $lifetime = 0;
}

/* run only if there is no lock file, or if it is too old */
if (!file_exists($filename) || $lifetime > $lifelimit) {
    if ($lifetime > $lifelimit) {
        unlink($filename); // remove the stale lock file
    }
    $file = fopen($filename, "w+"); // create the lock file
    if ($file == false) {
        die("lock file could not be created, check permissions");
    }
    /* Your process */
    unlink($filename); // remove the lock file when done
} else {
    exit(); // process already in progress
}
There are many possible variants to test this. For example, you can update the DB when a task is in progress and check that flag on each run. Or you can open a file and lock it, and when your script executes, test whether you can acquire the lock. Or you can read the process list and continue execution only if your script is not already in it.
So the main goal is to create a flag somewhere that tells your script a run is already in progress. Which variant is better for your specific case is your choice.
UPDATE
After some research, I found that the system flock command is a good way to do the same thing. It might be useful:
* * * * * flock -n /some/lockfile command_to_run_every_minute
As for all the locking mechanisms (including flock and lock files), please note that if something goes wrong, your cron job will never run automatically:
flock: if the server crashes (and this happens sometimes), you'll have an everlasting lock on the command until it is removed manually.
lock file: if the command fails with a fatal error, your file won't be removed, so the cron job won't be able to start (sure, you can use error handlers, but that won't save you from failures due to memory limits or a server reset).
So I reckon running the system "ps aux" is the best solution. Sure, this works only on Linux-based systems (and macOS).
The snippet below could be a good solution to the problem. Please note that "grep -v grep" is necessary to filter the grep invocation itself out of the shell_exec result.
#!/usr/bin/php -f
<?php
// "grep -v grep" removes the grep command itself from the listing;
// the process running this very script matches too, so more than one
// match means another instance is running
$count = (int) shell_exec("ps aux | grep -v grep | grep -c " . basename(__FILE__));
echo $count > 1 ? "running" : "not running";
This code is handy when you need to determine whether this particular file is already being run by a cron job.

Don't run a cron php task until the last one has finished

I have a php-cli script that is run by cron every 5 minutes. Because this interval is short, multiple processes end up running at the same time. That's not what I want, since this script writes a numeric id into a text file, incrementing it on each run. When several writers write to this file at the same time, the value written is incorrect.
I have tried to use PHP's flock function to block writing to the file while another process is writing to it, but it doesn't work:
$fw = fopen($path, 'r+');
if (flock($fw, LOCK_EX)) {
    ftruncate($fw, 0);
    fwrite($fw, $latestid);
    fflush($fw);
    flock($fw, LOCK_UN);
}
fclose($fw);
So I suppose the solution is to create a bash script that verifies whether an instance of this PHP script is running and, if so, waits until it has finished. But I don't know how to do that. Any ideas?
The solution I'm using with a bash script is this:
exec 9>/path/to/lock/file
if ! flock -n 9 ; then
echo "another instance is running";
exit 1
fi
# this now runs under the lock until 9 is closed (it will be closed automatically when the script ends)
This creates file descriptor 9 pointing at the lock file, and flock -n makes any new instance exit immediately unless no other instance of the script is currently holding the lock (the lock is released automatically when the script ends and descriptor 9 is closed).
How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
I don't really understand how incrementing a counter every 5 minutes can result in multiple processes trying to write the counter file at the same time, but...
A much simpler approach is to use a basic locking mechanism similar to the one below:
<?php
$lock_filename = 'nobodyshouldincrementthecounterwhenthisfileishere';
if (file_exists($lock_filename)) {
    return;
}
touch($lock_filename);
// your stuff...
unlink($lock_filename);
Being this simple, the approach will not handle a situation where the script breaks before it can remove the lock file, in which case the script would never run again until the file is removed manually.
More sophisticated approaches are also possible, as you suggest: e.g. fork the job into its own process, write the PID into a file, and then, before running the job, check whether the process with that PID is still running.
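That PID-file check can be sketched in shell as follows (the pidfile path is an assumption; kill -0 sends no signal, it only tests whether the PID still exists):

```shell
#!/bin/sh
# Sketch: refuse to start when the PID stored in the pidfile is alive.
PIDFILE=/tmp/myjob.pid   # hypothetical location

if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "already running"
    exit 1
fi

echo $$ > "$PIDFILE"
# ... the actual job runs here ...
rm -f "$PIDFILE"
```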
To prevent the next run of a program (such as the next cron job) from starting while the previous run is still in progress, I recommend checking for a running process of the program, either built into the program itself or as an external check. Just execute, before starting your program:
ps -ef|grep <process_name>|grep -v grep|wc -l
and check its result: only if it is 0 should your program be started.
Of course, you must guarantee the absence of third-party processes with similar names (for this purpose, give your program a longer, unique name), and the name of your program must not contain the pattern "grep".
This works well in combination with the normal, regular starting of your program configured in a cron table and performed by the cron daemon.
If your check is written as an external script, an entry in the crontab might look like:
<time_specification> <your_starter_script> <your_program> ...
Two important remarks: the exit code of your_starter_script must be 0 when it does not start your program, and it is better to completely prohibit the script from writing to stdout or stderr.
Such a starter is very short and a simple programming exercise, so I don't feel the need to provide its complete code.
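Since the complete starter is left as an exercise, here is one possible sketch of the check it would perform (the program name and paths are placeholders, not code from the answer):

```shell
#!/bin/sh
# Sketch of the starter's core: count running processes whose command
# line matches a (unique) program name.
count_running() {
    # "grep -v grep" drops the grep command itself from the listing
    ps -ef | grep "$1" | grep -v grep | wc -l
}

# Usage sketch, per the answer's remarks: the name must be unique,
# must not contain the pattern "grep", and the starter's own command
# line must not mention it either.
# if [ "$(count_running my_unique_prog)" -eq 0 ]; then
#     my_unique_prog >/dev/null 2>&1
# fi
# exit 0   # exit code must be 0 even when nothing was started
```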
Instead of using cron to run your script every 5 minutes, how about using at to schedule your script to run again 5 minutes after it finishes? Near the end of your script, you can use shell_exec() to run an at command that schedules the script to run again in 5 minutes. at reads the command to run from standard input, like so:
echo "/path/to/script" | at now + 5 minutes
Or, perhaps even simpler than my previous answer (using at to schedule the script to run again in 5 minutes), make your script a daemon by using a non-terminating loop, like so:
while (1) {
    // whatever your script does here....
    sleep(300); // wait 5 minutes
}
Then, you can do away with scheduling by way of cron or at altogether. Just simply run your script in the background from the command line, like so:
/path/to/your/script &
Or, add /path/to/your/script in /etc/rc.local to make your script start automatically when the machine boots.

Kill or stop execution of PHP script initiated by CRON?

I have scheduled a cron job that calls/executes a PHP script every five minutes. The PHP script performs the following tasks:
Checks the flag value in the database to identify whether the previous run is still executing. A value of 1 in the DB says that the process is still running, while a value of 0 means it is not.
If the flag value is 1, exit the PHP script; else continue to the next step.
Update the flag value in the database from 0 to 1.
Execute the business logic.
Update the flag value back from 1 to 0, so that the next cron run can execute if data is available in the user tables.
All works fine so far; depending on the size of the user-uploaded data, the process takes 35 to 40 minutes on average to complete.
Question: is there any way to kill or stop the execution of the PHP script once started by cron? Maybe a button to let users stop the execution, upload new data, and wait for the next cron run. I can take care of resetting all the flags and data; it's just the killing of the PHP script that I am trying to figure out.
I did some googling and figured I can use a command like:
killall -9 php
to kill all PHP processes running on the server, but I'm not sure how to do this through PHP.
Try this:
ps aux | grep 'part_of_the_name_of_your_script' | grep -v grep | awk '{print $2}' | xargs kill -9
Or in your crontab file use crun and variable CRUN_TIME
see crun -h
A lock file would be very appropriate for this task.
The PHP script can attempt to create a new file; if none exists already, you can safely know that the script is the only instance running at the present time. If the file exists, you can simply exit the script.
Example:
<?php
if (file_exists('/var/run/my-script')) {
    exit(1); // already running
}
file_put_contents('/var/run/my-script', getmypid());

/** Business Logic **/

unlink('/var/run/my-script');
exit(0);
?>
You can try system() or exec() to send the kill signal, but it might not work (or return permission-denied errors): cron processes are executed by either the configured user or root, and the web server user doesn't usually have permission to signal those processes.
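If the script records its PID as in the lock-file answer above (it stores getmypid()), a targeted alternative to killall is to signal just that PID. A shell sketch (the pidfile path is an assumption):

```shell
#!/bin/sh
# Sketch: stop the job whose PID was saved by the running script.
stop_job() {
    # $1 is the pidfile; SIGTERM lets the script clean up, unlike -9
    [ -f "$1" ] || return 1
    kill -TERM "$(cat "$1")" 2>/dev/null
}

# Usage sketch, matching the file written by the PHP example above:
# stop_job /var/run/my-script
```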
