I am executing the following bash script at startup on an Ubuntu 16.04 virtual machine via rc.local.
#!/bin/bash
# Loop forever (until break is issued)
(while true; do
sudo php /var/www/has/index.php external communication
done ) &
As you can see, the bash script executes a PHP script continuously. Over time, the script might take longer to execute. Sometimes scripts like the one above start again even though another instance of the same script is still running. So I want to know: how can I prevent a new instance of the PHP script from executing if an existing instance is already running?
You can use file locking to acquire an exclusive lock. If the lock is already held, you can end the script or wait until the lock is released.
I suggest you read up on http://php.net/manual/en/function.flock.php
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock (blocks until available)
    // Execute logic
    flock($fp, LOCK_UN); // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
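If you want the script to end instead of waiting, a non-blocking attempt fits the looping setup above better: the new run exits straight away while a previous one still holds the lock. A minimal sketch, assuming a lock path writable by the script's user (/var/run works here because rc.local runs as root):

<?php
// Hypothetical wrapper around the work done by index.php.
// "c" opens the file for writing, creating it if needed, without truncating.
$fp = fopen("/var/run/has-index.lock", "c");
if ($fp === false) {
    die("Could not open lock file");
}
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit; // another instance is already running
}
// ... do the actual work here ...
flock($fp, LOCK_UN);
fclose($fp);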
Related
I have a cron job that executes every minute. But I discovered that it's running multiple times. I'm looking for a way to check whether the process is still running and, if so, not start a new one, or terminate the already running process before starting a new one.
If you want a PHP solution, the simple way is to create a lock file: each time the script is executed, check if the file exists and, if so, exit the script; if not, let the script run to the end. But I think it's better to use flock in the cron instruction ;)
<?php
$filename = "myscript.lock";
$lifelimit = 120; // lifetime in seconds, so a stale lock can't block forever

/* Check lifetime of the file if it exists */
if (file_exists($filename)) {
    $lifetime = time() - filemtime($filename);
} else {
    $lifetime = 0;
}

/* Proceed if the file doesn't exist or is too old */
if (!file_exists($filename) || $lifetime > $lifelimit) {
    if ($lifetime > $lifelimit) {
        unlink($filename); // delete it if it exists and is too old
    }
    $file = fopen($filename, "w+"); // create the lock file
    if ($file == false) {
        die("file didn't create, check permissions");
    }
    /* Your process */
    unlink($filename); // remove the lock file after your process
} else {
    exit(); // process already in progress
}
There are many possible variants. For example, you can set a flag in the database when the task is in progress and test that flag on each run. Or you can open a file and lock it, and when your script executes, test whether you can lock that file. Or you can read the process list and continue execution only if your script isn't already in it.
So the main goal is to create a flag somewhere that tells your script it is already in progress. Which one is better for your specific case is your choice.
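For the database variant, MySQL's GET_LOCK() gives you an atomic, server-side flag, and the lock disappears automatically when the connection dies, so a crashed run can't leave it stuck. A rough sketch, with placeholder DSN, credentials and lock name:

<?php
// Placeholder connection details.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
// Timeout 0: return immediately instead of waiting for the lock.
$got = $pdo->query("SELECT GET_LOCK('my_cron_task', 0)")->fetchColumn();
if (!$got) {
    exit(); // another run holds the lock
}
/* Your process */
$pdo->query("SELECT RELEASE_LOCK('my_cron_task')");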
UPDATE
After some research I found a good variant: using the system flock command to do the same thing (-n makes it fail immediately instead of waiting when the lock is taken). Might be useful:
* * * * * flock -n /some/lockfile command_to_run_every_minute
As for all the locking mechanisms (including flock and lock files), please note that if something goes wrong, your cron job may never run automatically again:
flock: the kernel releases the lock when the holding process exits, so even a server crash won't leave a stale lock, but a hung process will hold the lock forever (until it is killed manually).
lock file: if the command fails with a fatal error, your file won't be removed, so the cron job won't be able to start (sure, you can use error handlers; that still won't save you from failures due to memory limits or a server reset).
So I reckon running the system ps aux is the best solution. Sure, this works only on Linux-based systems (and macOS).
The snippet below could be a good solution for the problem. Please note that "grep -v grep" is necessary to filter the pipeline's own grep processes out of the shell_exec result, and the check must also exclude its own PID or it will always match itself.
#!/usr/bin/php -f
<?php
// "grep -v grep" drops the grep processes themselves; "grep -v getmypid()"
// drops this script's own ps entry, otherwise it always reports "running".
$result = shell_exec("ps aux | grep -v grep | grep -v " . getmypid() . " | grep " . basename(__FILE__));
echo strlen($result) ? "running" : "not running";
This code is handy when you need to determine whether this particular file is being run by a cron job.
I have a web application where the user can choose a list of scripts to execute. The executions are then added to a table in MySQL, and each one has its own state, like "pending", "success", "failed" or "in progress". The user can also choose to stop an execution.
The problem is that only one script can be executed at a time, so the others have to wait until it is finished.
My environment is Linux (Ubuntu) and the scripts are in PHP.
I thought about a crontab that executes a PHP script; this PHP script would grab the information from the SQL table and check whether another execution is running by looking for an execution with an "in progress" state. If there is one, it would simply exit; otherwise it would start another execution that has the "pending" state.
Is there any other solution for this?
It's better to use an atomic check. The way you do this with the database is not atomic: after you have checked that no other scripts are running, but before you have written that the current script is starting, another process may perform the same check, and therefore you'll get two concurrent scripts running.
Also, if the script terminates abnormally for any reason, it won't update the database, so other scripts won't be able to start at all.
A more reliable way is to use file locking:
$lock_file = 'some_path/process.lock';
$fd = fopen($lock_file, "w");
if (!$fd) {
    throw new Exception("Can't open file, check permissions on " . $lock_file, 1);
}
if (!flock($fd, LOCK_EX | LOCK_NB)) {
    throw new AlreadyRunningException("Can't lock the file - another script is already running", 0);
}
Then, after the script job is done, unlock the file:
flock($fd, LOCK_UN);
fclose($fd);
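To guarantee the unlock also runs when the job throws, the job can be wrapped in try/finally (and if the process dies outright, the operating system drops the flock anyway, which is exactly why this beats the database flag). A sketch along the lines of the snippets above, assuming PHP 5.5+ for finally:

<?php
$fd = fopen('some_path/process.lock', 'w');
if (!$fd) {
    throw new Exception("Can't open file, check permissions on some_path/process.lock", 1);
}
if (!flock($fd, LOCK_EX | LOCK_NB)) {
    exit("Another script is already running\n");
}
try {
    // ... the actual job ...
} finally {
    flock($fd, LOCK_UN); // released even if the job threw
    fclose($fd);
}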
On my Linux server, I need to synchronize multiple scripts, written in BASH and PHP, so that only one of them is able to start a system-critical job - a series of BASH/PHP commands that would mess things up if performed simultaneously by two or more scripts. From my experience with multithreading in C++, I'm familiar with the notion of a mutex, but how do I implement a mutex for a bunch of scripts that run in separate processes and, of course, aren't written in C++?
Well, the first solution that comes to mind would be making sure that each of the scripts initially creates a "lock flag" file to let other scripts know that the job is "locked", and then deletes the file after it's done with the job. But, as I see it, the file writing and reading operations would have to be completely atomic for this approach to work with 100% probability, and the same requirement would apply to any other synchronization method. And I'm pretty sure that file write/read operations are not atomic, at least not across all existing Linux/Unix systems.
So what is the most flexible and reliable way to synchronize concurrent BASH and PHP scripts?
I'm not a PHP programmer, but the documentation says it provides a portable version of flock that you can use. The first example snippet looks pretty close to what you want. Try this:
<?php
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    // Do your critical section here, while you hold the lock
    flock($fp, LOCK_UN); // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
?>
Note that by default flock waits until it can acquire the lock. You can use LOCK_EX | LOCK_NB if you want it to exit immediately in the case where another copy of the program is already running.
Using the name "/tmp/lock.txt" may be a security hole (I don't want to think hard enough to decide whether it truly is) so you should probably choose a directory that can only be written to by your program.
You can use the flock command to atomically lock your flag file. The -e option acquires an exclusive lock (this is the default, so you can also omit it).
From the man page:
By default, if the lock cannot be immediately acquired, flock waits until the lock is available.
So if all your bash/PHP scripts try to lock the file exclusively, only one can successfully acquire it and the rest of them will wait for the lock.
If you don't want to wait, use -w <seconds> to time out, or -n to fail immediately.
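On the PHP side, the same lock file can be contended with a timed wait that mirrors flock -w, since PHP's flock() and the flock(1) utility both end up at the flock(2) system call on Linux. A minimal sketch, with an assumed lock path and timeout:

<?php
$fp = fopen('/var/lock/critical-job.lock', 'c');
$deadline = time() + 10; // roughly equivalent to flock -w 10
$locked = false;
while (time() < $deadline) {
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        $locked = true;
        break;
    }
    usleep(200000); // retry every 0.2 s
}
if (!$locked) {
    exit("Timed out waiting for the lock\n");
}
// ... system-critical job ...
flock($fp, LOCK_UN);
fclose($fp);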
A fuser-based lock in Bash (it guarantees that no two processes access the protected resource at the same time, but may occasionally report a failed locking attempt even though no process holds the resource; that is almost improbable, though):
#!/bin/bash
set -eu

# Try to take the lock on $1; returns 1 if another process already holds it.
function mutex {
    local file=$1 pid pids

    exec 8>>"$file"                               # keep the lock file open on fd 8 for our lifetime
    { pids=$(/sbin/fuser -f "$file"); } 2>&- 9>&- # list PIDs holding the file open, silencing fuser's noise
    for pid in $pids; do
        [[ $pid = $$ ]] && continue               # our own PID is expected
        exec 8>&-                                 # close our descriptor and give up
        return 1                                  # locked by another pid
    done
}
I am trying to stop my cron script from running in parallel. I need it so that if there is no current execution of it, the script will be allowed to run until it is complete, the script times out or an exception occurs.
I have been trying to use the PHP flock function to engage a file lock, run the script and then release the lock. However, it still looks like I am able to run the script multiple times in parallel. Am I missing something?
Btw, I am developing on Mac OS X with the Mac filesystem; maybe this is the reason the file locks are being ignored? Though the PHP documentation only seems to talk about NTFS filesystems.
// Construct cron lock file path
$cronLockFilePath = realpath(APPLICATION_PATH . '/locks/cron');

// Get cron lock file
$cronLockFile = fopen($cronLockFilePath, 'r');

// Lock cron lock file
if (flock($cronLockFile, LOCK_EX)) {
    echo 'lock';
    sleep(10);
} else {
    echo 'no lock';
}
Your idea is basically correct, but tinkering with file locks generally leads to strange behaviour.
Just create a file on script start and delete it at the end. The presence of the file will indicate whether the cron is already running. Make absolutely sure that the file is deleted at the end, even if the cron runs into an error halfway through.
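One way to make reasonably sure of that is register_shutdown_function(), which PHP still runs after most fatal errors (though not after a kill -9 or a machine crash). A sketch with an assumed lock path:

<?php
$lockFile = __DIR__ . '/cron.lock';
// Note: the exists/create pair is not atomic, but for a cron that fires
// once a minute it is usually good enough.
if (file_exists($lockFile)) {
    exit('Already running');
}
file_put_contents($lockFile, getmypid());
register_shutdown_function(function () use ($lockFile) {
    unlink($lockFile); // removed even when the script dies with a fatal error
});
// ... cron work ...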
From the documentation:
Warning: On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
You can try to create and delete a file, or write something into it.
I think what you could do is write a regular file somewhere (lock.txt or something) when the script starts executing, without any flocks, and remove it when the script stops running. Then always check upon initialization whether that file already exists - if it does, another instance is running.
I have a multi-part question about a PHP script file. I am creating a file that updates the database every second. There is no other modeling method; it has to be done every second.
Now I am running CentOS and I am new to it. The first noob question is:
How do I run a PHP file via SSH? I read it is just # php path-to/myfile.php. But I tried to echo something, and I don't see it in the output.
Now I don't think that starting the file is going to be a problem. One problem, I guess, will be the following; I don't know if it is even possible, but here goes:
Is it possible for me to be a hundred percent sure that the file is only run once? What happens if I accidentally run the file again?
I was wondering further: if I write to a log every second, I can know whether everything is running OK. If there is an error or something wrong, the log file will stop.
Is writing to a log file done with fopen, write and close? Isn't this going to take a lot of time? Isn't there an easier method on CentOS?
OK, another big point: what happens when I run the file? Is the file loaded into memory, or does it keep using the file on disk? Does it respond to changes made to the file, for example to stop the execution of the script?
Can I implement some kind of stop mechanism in the file itself? Or is there a command I can use to stop the file?
Another option I know of is implementing a cron job that runs every minute. This cron job executes the PHP file; the PHP file loops for one minute, updating everything needed, then terminates. I implemented this method, but just used a browser: I browsed to my file and opened it. I saw the browser was busy for a minute, but it didn't update anything in the database. Does anyone have an idea what the reason for this can be?
Another question I have about the cron job method: what is the command I fill in in the Plesk panel? Is it the same as the above command, just php and the file name? Or are there special options like -f, -q or something?
Sorry for all the noob questions.
If someone can help me I really appreciate it.
Ciao!
The simplest way to ensure only one copy of your script is running is to use flock() to obtain a file lock. For example:
<?php
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // do an exclusive lock
    ftruncate($fp, 0); // truncate file
    fwrite($fp, "Write something here\n");
    flock($fp, LOCK_UN); // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
?>
So basically you'd have a dummy file set up where your script, upon starting, tries to acquire a lock. If it succeeds, it runs. If not, it exits. That way only one copy of your script can be running at a time.
Note: flock() is what is called an advisory locking method, meaning it only works if you use it. So this will stop your own script from being run multiple times but won't do anything about any other scripts, which sounds fine in your situation.
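For completeness, the "if not, it exits" behaviour looks like this with a non-blocking lock attempt (same dummy file idea; the path is arbitrary):

<?php
$fp = fopen("/tmp/lock.txt", "c"); // "c" creates the file if missing
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit("Another copy is running\n");
}
// ... script body runs here while the lock is held ...
flock($fp, LOCK_UN);
fclose($fp);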
You can't always rely on the lock within the script itself, as stated in the comment to the previous answer. This might be a solution:
#Mins Hours Days Months Day of week
* * * * * lockfile -r 0 /tmp/the.lock && { php parse_tweets.php; rm -f /tmp/the.lock; }
* * * * * lockfile -r 0 /tmp/the.lock && { php get_tweets.php; rm -f /tmp/the.lock; }
This way, even if the script crashes, the lock file will still be removed (the rm -f runs regardless of the PHP exit status). Taken from here: https://unix.stackexchange.com/a/158459