I have a cron job that executes every minute, but I discovered that it's running multiple times. I'm looking for a way to check whether the process is still running and, if so, not start a new one, or to terminate the already running process before starting a new one.
If you want a PHP solution, the simple way is to create a lock file: each time the script is executed, check whether the file exists; if it does, exit, otherwise let the script run to the end. But I think it's better to use flock in the cron instruction ;)
<?php
$filename = "myscript.lock";
$lifelimit = 120; // lock lifetime in seconds, used to recover from stale locks

/* Check the age of the lock file, if it exists */
if (file_exists($filename)) {
    $lifetime = time() - filemtime($filename);
} else {
    $lifetime = 0;
}

/* Run only if there is no lock file, or if the existing one is too old */
if (!file_exists($filename) || $lifetime > $lifelimit) {
    if ($lifetime > $lifelimit) {
        unlink($filename); // remove the stale lock file
    }
    $file = fopen($filename, "w+"); // create the lock file
    if ($file === false) {
        die("Could not create the lock file, check permissions");
    }
    /* Your process */
    unlink($filename); // remove the lock file after your process finishes
} else {
    exit(); // process already in progress
}
There are many possible variants. For example, you can set a flag in the database when the task is in progress and test that flag on each run. Or you can open a file and lock it, and when your script starts, test whether you can acquire the lock. Or you can read the process list and continue execution only if your script is not already there.
So the main goal is to create a flag somewhere that tells your script it is already in progress. Which variant is better for your specific case is your choice.
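To illustrate the database-flag variant, here is a minimal sketch; the cron_flags table, its columns, the task name and the connection details are all hypothetical, and the WHERE guard on the UPDATE is what makes the check-and-set atomic:

<?php
// Hypothetical schema: cron_flags(task VARCHAR PRIMARY KEY, in_progress TINYINT),
// with a row for 'mytask' seeded in advance.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Atomically claim the flag: the UPDATE only matches if no other run holds it
$stmt = $pdo->prepare(
    "UPDATE cron_flags SET in_progress = 1 WHERE task = 'mytask' AND in_progress = 0"
);
$stmt->execute();

if ($stmt->rowCount() === 0) {
    exit("Process already in progress\n");
}

try {
    /* Your process */
} finally {
    // Release the flag even if the task throws
    $pdo->exec("UPDATE cron_flags SET in_progress = 0 WHERE task = 'mytask'");
}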
UPDATE
After some research I found a good variant: using the system flock command to do the same thing. Might be useful:
* * * * * flock -n /some/lockfile command_to_run_every_minute
As for all the locking mechanisms (including flock and lock files), please note that if something goes wrong, your cron job may never run automatically again:
flock: if the server crashes (and this happens sometimes) you can end up with an everlasting lock on the command (until it is removed manually).
lock file: if the command fails with a fatal error, your file won't be removed, so the cron job won't be able to start (sure, you can use error handlers, but that won't save you from failures due to memory limits or a server reset).
So I reckon running the system's ps aux is the best solution. Of course, this only works on Linux-based systems (and macOS).
This snippet could be a good solution for the problem. Please note that grep -v grep is necessary to filter the grep process itself out of the shell_exec result.
#!/usr/bin/php -f
<?php
// List processes, drop the grep itself, then look for this script's file name
$result = shell_exec("ps aux | grep -v grep | grep " . basename(__FILE__));
echo strlen($result) ? "running" : "not running";
This code is handy when you need to determine whether this particular file is already being run by a cron job.
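Building on that snippet, here is a minimal sketch of how the same check could guard against a second instance. Excluding the current PID is my own addition, since the script will always find itself in the process list:

<?php
// Find processes running this script, excluding grep itself
$cmd = "ps aux | grep -v grep | grep " . basename(__FILE__);
$lines = array_filter(explode("\n", trim((string) shell_exec($cmd))));

foreach ($lines as $line) {
    $cols = preg_split('/\s+/', $line);
    // The second column of ps aux output is the PID
    if (isset($cols[1]) && (int) $cols[1] !== getmypid()) {
        exit("Already running\n");
    }
}
/* Safe to continue: no other instance found */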
Related
I am executing the following bash script on an Ubuntu 16.04 virtual machine at startup via rc.local.
#!/bin/bash
# Loop forever (until break is issued)
(while true; do
    sudo php /var/www/has/index.php external communication
done) &
As you can see, the bash script executes a PHP script continuously. Over time, the script might take longer to execute, and sometimes a script like the one above starts even though another instance of that same script is still running. So I want to know: how can I prevent a new instance of the PHP script from executing if there is an existing instance?
You can use file locking to acquire an exclusive lock. If the lock is already held, you can end the script or wait until it is released.
I suggest you read up on http://php.net/manual/en/function.flock.php
$fp = fopen("/tmp/lock.txt", "c"); // "c" creates the file if it doesn't exist
if (flock($fp, LOCK_EX | LOCK_NB)) { // acquire an exclusive, non-blocking lock
    // Execute logic
    flock($fp, LOCK_UN); // release the lock when done
} else {
    echo "Couldn't get the lock!"; // drop LOCK_NB above if you'd rather wait
}
fclose($fp);
I'm running a PHP loop that 'scans' a directory every 60 seconds until a file with a given name is found:
<?php
do {
    if (file_exists("../path/file.txt")) {
        // Do Stuff
        $status = "File Found";
        echo $status;
    } else {
        $status = "File Not Found";
        sleep(60);
    }
} while ($status == "File Not Found");
?>
In this example, would removing sleep() require more server resources?
Thank you,
In a nutshell, yes, but don't worry about it.
While sleep() is executing, CPU processing of your script virtually stops, so yes, it frees up processing resources. (The script stays in memory, so that memory is still used, but this shouldn't be a problem on a modern machine.)
If your goal is to do this every 60 seconds, the best practice would be to make your PHP script a cron job and run it at low priority.
Configure Cron for a low priority PHP script:
crontab -e
Add the following:
* * * * * /usr/bin/nice -n 12 php -q /path/file.php
And replace the /path/file.php with the full path to your PHP script.
Ensure your PHP script is ready
Edit your script's file permissions to allow execution.
chmod ug+rwx /path/file.php
(Again replacing /path/file.php with your actual PHP script's full path.)
Lastly, it's a good idea to make these the very first 2 lines in your PHP script, if you intend to run it this way:
#!/usr/bin/php5
<?php
Yep, every instruction you run costs resources, whether it be register space, memory, or disk IO.
In this case DO NOT REMOVE your sleep() -- polling constantly without rest is a great way to needlessly crush your resources.
In this case, until the file shows up you'll be looping like mad, wasting processor cycles and possibly some disk IO on the conditional check file_exists("../path/file.txt").
Waiting a minute between iterations is tremendously less costly than constant, as-fast-as-you-can conditional checks.
How can I run a process in the background at all times?
I want to create a process that manages some work queues based on the info from a database.
I'm currently doing it using a cron job, where I run the cron job every minute, and have 30 calls with a sleep(2) interval. While this is working OK, I've noticed from time to time that there is a race condition.
Is it possible to just run the same process all the time? I would still have the cron job attempt to start periodically, but it would just shut down if it sees itself running.
Or is this a bad idea? Is there any possibility of a memory leak or other issues occurring?
Some years ago, before I knew about MQ systems, Node.js and the like, I used code like this, added to cron to run every minute:
<?php
// define path to lock file, e.g. /home/user1/bin/cronjob1.lock
define('LOCK_FILE', __DIR__ . "/" . basename(__FILE__) . '.lock');

// check whether the previous process is still running
function isLocked()
{
    // lock file exists, but let's check if the process is actually running
    if (is_file(LOCK_FILE)) {
        $pid = trim(file_get_contents(LOCK_FILE)); // read process id from .lock file
        $pids = explode("\n", trim(`ps -e | awk '{print $1}'`)); // currently running process ids
        if (in_array($pid, $pids)) // $pid exists among running process ids
            return true; // it's ok, the process is running
    }
    // write a new .lock file with our own process id in it
    file_put_contents(LOCK_FILE, getmypid() . "\n");
    return false; // previous process was not running
}

// exit if a previous run of this script is still going
if (isLocked()) die("Already running.\n"); // locked, exiting

// from this point on we run as the new process
set_time_limit(0);
while (true) {
    // some ops; break out of the loop when done so the cleanup below is reached
    sleep(1);
}
// cleanup before finishing
unlink(LOCK_FILE);
You could use something called forever, which requires Node.js.
Once you have Node installed, install forever with:
$ [sudo] npm install forever -g
To run a script forever:
forever start app.js
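If I remember correctly, forever can also supervise non-Node scripts via its -c flag, so something along these lines should work for the PHP worker (the script path is just a placeholder):
forever start -c php /path/to/worker.php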
I am trying to stop my cron script from running in parallel. I need it so that if there is no current execution of it, the script is allowed to run until it completes, times out, or an exception occurs.
I have been trying to use PHP's flock function to acquire a file lock, run the script and then release the lock. However, it still looks like I am able to run the script multiple times in parallel. Am I missing something?
Btw, I am developing on Mac OS X with the Mac filesystem; maybe this is the reason the file locks are being ignored? Though the PHP documentation only talks about NTFS filesystems?
// Construct cron lock file path
$cronLockFilePath = realpath(APPLICATION_PATH . '/locks/cron');

// Get cron lock file
$cronLockFile = fopen($cronLockFilePath, 'r');

// Lock cron lock file
if (flock($cronLockFile, LOCK_EX)) {
    echo 'lock';
    sleep(10);
} else {
    echo 'no lock';
}
Your idea is basically correct, but tinkering with file locks generally leads to strange behaviour.
Just create a file when the script starts and delete it at the end. The presence of the file indicates whether the cron is already running. Make absolutely sure that the file is deleted at the end, even if the cron runs into an error halfway through.
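A minimal sketch of that idea, using register_shutdown_function so the marker file is removed even on most fatal errors; the file name is just an example, and note this check is not fully race-free (see the flock answers above for an atomic alternative):

<?php
$lockFile = '/tmp/mycron.running'; // example marker file

if (file_exists($lockFile)) {
    exit("Cron is already running.\n");
}
touch($lockFile);

// Remove the marker even if the script dies halfway through
register_shutdown_function(function () use ($lockFile) {
    if (file_exists($lockFile)) {
        unlink($lockFile);
    }
});

/* Do the actual cron work here */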
From the documentation:
Warning
On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
You can try creating and deleting a file, or writing something into it.
I think what you could do is write a regular file somewhere (lock.txt or something) when the script starts executing, without any flocks, and remove it when the script stops running. Then always check on initialization whether that file already exists; if it does, another instance is running.
I have a multi-part question about a PHP script. I am creating a script that updates the database every second. There is no other way to model it; it has to be done every second.
Now I am running CentOS and I am new to it. The first noob question is:
How do I run a PHP file via SSH? I read it is just # php path-to/myfile.php, but I tried to echo something and I don't see it in the output.
Now I don't think that starting the file is going to be a problem. One problem, I guess, will be the following; I don't know if it is even possible, but here goes.
Is it possible for me to be a hundred percent sure that the file is only run once? What happens if I run the file again by accident?
I was wondering further: if I write to a log every second, I can know that everything is running OK. If there is an error or something wrong, the log file will stop growing.
Is writing to a log file done with fopen, fwrite and fclose? Isn't this going to take a lot of time, and isn't there an easier method on CentOS?
OK, another big point I have is what happens when I run the file. Is the file run from memory, or does it use the file on disk? Does it respond to changes made to the file, for example to stop the execution of the script?
Can I implement some kind of stop mechanism in the file itself? Or is there a command I can use to stop the file?
Another option I know of is implementing a cron job that runs every minute. This cron job executes the PHP file; the PHP file loops for one minute, updating everything needed, then terminates. I implemented this method, but just used a browser: I browsed to my file and opened it. I saw the browser was busy for a minute, but it didn't update anything in the database. Does anyone have an idea what the reason for this can be?
Another question I have about the cron job method: what is the command I fill in in the Plesk panel? Is it the same as the above command, just php and the file name, or are there special flags like -f, -q, or something?
Sorry for all the noob questions.
If someone can help me i really appreciate it.
Ciao!
The simplest way to ensure only one copy of your script is running is to use flock() to obtain a file lock. For example:
<?php
$fp = fopen("/tmp/lock.txt", "r+");

// add LOCK_NB if you want to exit immediately instead of waiting for the lock
if (flock($fp, LOCK_EX)) { // do an exclusive lock
    ftruncate($fp, 0); // truncate file
    fwrite($fp, "Write something here\n");
    flock($fp, LOCK_UN); // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
?>
So basically you'd have a dummy file set up where your script, upon starting, tries to acquire a lock. If it succeeds, it runs. If not, it exits. That way only one copy of your script can be running at a time.
Note: flock() is what is called an advisory locking method, meaning it only works if you use it. So this will stop your own script from being run multiple times but won't do anything about any other scripts, which sounds fine in your situation.
You can't always rely on the lock within the script itself, as stated in the comments to the previous answer. This might be a solution:
#Mins Hours Days Months Day of week
* * * * * lockfile -r 0 /tmp/the.lock && { php parse_tweets.php; rm -f /tmp/the.lock; }
* * * * * lockfile -r 0 /tmp/the.lock && { php get_tweets.php; rm -f /tmp/the.lock; }
This way, even if the script crashes, the lock file will still be released. Taken from here: https://unix.stackexchange.com/a/158459