I have a project that needs to send notifications via WebSockets continuously. It should connect to a device that returns the overall status in string format. The system processes it and then sends notifications based on various conditions.
Since the scheduler can only repeat a task as often as once a minute, I need to find a way to execute the function every second.
Here is my app/Console/Kernel.php:
<?php
...
class Kernel extends ConsoleKernel
{
    ...
    protected function schedule(Schedule $schedule)
    {
        $schedule->call(function () {
            // connect to the device and process its response
        })->everyMinute();
    }
}
PS: If you have a better idea to handle the situation, please share your thoughts.
Usually, when you need finer granularity than one minute, you have to write a daemon.
I'd advise you to try it; it's not as hard as it was a few years ago. Just start with a simple loop inside a CLI command:
while (true) {
    doPeriodicStuff();
    sleep(1);
}
One important thing: run the daemon via supervisord. You can take a look at articles about Laravel's queue listener setup; it uses the same approach (a daemon plus supervisord). A config section can look like this:
[program:your_daemon]
command=php artisan your:command --env=your_environment
directory=/path/to/laravel
stdout_logfile=/path/to/laravel/app/storage/logs/your_command.log
redirect_stderr=true
autostart=true
autorestart=true
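For completeness, a minimal sketch of what such a daemon command could look like in Laravel. The class name, signature, and checkDeviceAndNotify() helper are placeholders, not part of the original question:
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class StreamDeviceStatus extends Command
{
    protected $signature = 'device:stream-status';

    protected $description = 'Poll the device every second and push WebSocket notifications';

    public function handle()
    {
        while (true) {
            // hypothetical helper: connect to the device, parse the status
            // string and send notifications based on it
            $this->checkDeviceAndNotify();

            sleep(1);
        }
    }

    private function checkDeviceAndNotify()
    {
        // the device/WebSocket logic from the question goes here
    }
}
With this in place, the command= line in the supervisord config above would point at php artisan device:stream-status (or whatever signature you choose).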
For per-second execution you can add the command to a cron job that fires every minute:
* * * * * /usr/local/php56/bin/php56 /home/hamshahr/domains/hamshahrapp.com/project/artisan taxis:beFreeTaxis 1>> /dev/null 2>&1
and then, inside the command, loop for the remaining seconds of the minute:
<?php

namespace App\Console\Commands;

use App\Contracts\Repositories\TaxiRepository;
use App\Contracts\Repositories\TravelRepository;
use App\Contracts\Repositories\TravelsRequestsDriversRepository;
use Carbon\Carbon;
use Illuminate\Console\Command;

class beFreeRequest extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'taxis:beFreeRequest';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'change is_active to 0 after 1 min if it is still 1';

    /**
     * @var TravelsRequestsDriversRepository
     */
    private $travelsRequestsDriversRepository;

    /**
     * Create a new command instance.
     *
     * @param TravelsRequestsDriversRepository $travelsRequestsDriversRepository
     */
    public function __construct(
        TravelsRequestsDriversRepository $travelsRequestsDriversRepository
    ) {
        parent::__construct();
        $this->travelsRequestsDriversRepository = $travelsRequestsDriversRepository;
    }

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        $count = 0;
        while ($count < 59) {
            $startTime = Carbon::now();
            $this->travelsRequestsDriversRepository->beFreeRequestAfterTime();
            $endTime = Carbon::now();
            $totalDuration = $endTime->diffInSeconds($startTime);
            if ($totalDuration > 0) {
                $count += $totalDuration;
            } else {
                $count++;
            }
            sleep(1);
        }
    }
}
$schedule->call(function () {
    while (some-condition) {
        runProcess();
    }
})->name("someName")->withoutOverlapping();
Depending on how long your runProcess() takes to execute, you can use sleep(seconds) inside the loop for finer tuning.
some-condition is normally a flag that you can change at any time to control the infinite loop; e.g. you can use file_exists(path-to-flag-file) to manually start or stop the process whenever you need.
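As a rough illustration of that flag-file idea (storage_path('app/run.flag') and runProcess() are placeholders):
$schedule->call(function () {
    // keep looping while the flag file exists; delete it to stop the loop
    while (file_exists(storage_path('app/run.flag'))) {
        runProcess();
        sleep(1); // tune this to the granularity you need
    }
})->name('someName')->withoutOverlapping();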
This might be a bit of an old question, but I recommend you take a look at the laravel-short-schedule package from Spatie. It allows you to run commands at sub-minute or even sub-second intervals.
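If I recall the package's README correctly (double-check the current docs), usage looks roughly like this; device:check is a placeholder command name, and the short-schedule worker itself has to be kept running, e.g. under supervisord:
// in app/Console/Kernel.php
protected function shortSchedule(\Spatie\ShortSchedule\ShortSchedule $shortSchedule)
{
    // run an artisan command every second
    $shortSchedule->command('device:check')->everySecond();
}
The worker is then started with php artisan short-schedule:run (again, verify against the package documentation).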
You can also try running the job 60 times within the minute, using sleep(1) between iterations.
When using the Laravel5 scheduler:
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
We receive the following default output if no command is ready to run:
# No scheduled commands are ready to run.
How can we disable this default Laravel 5 message? We don't want any output if there is no command ready to run. Ideally, we would be able to configure that message and the return code ourselves.
You can create a new command in app/Console/Commands similar to below, which extends the default schedule:run command.
It overrides the handle method while leaving everything else as-is to avoid having Laravel output the "No scheduled commands are ready to run." line when it didn't do anything.
By using a different name there's no need to worry about conflicts, and you can still run the original php artisan schedule:run command at any time if you so desire.
<?php

namespace App\Console\Commands;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Console\Scheduling\ScheduleRunCommand;

class RunTasks extends ScheduleRunCommand
{
    /**
     * The console command name.
     *
     * @var string
     */
    protected $name = 'run:tasks';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Custom task runner with no default output';

    /**
     * Create a new command instance.
     *
     * @param \Illuminate\Console\Scheduling\Schedule $schedule
     * @return void
     */
    public function __construct(Schedule $schedule)
    {
        parent::__construct($schedule);
    }

    /**
     * Execute the console command.
     *
     * @return void
     */
    public function handle()
    {
        foreach ($this->schedule->dueEvents($this->laravel) as $event) {
            if (! $event->filtersPass($this->laravel)) {
                continue;
            }

            if ($event->onOneServer) {
                $this->runSingleServerEvent($event);
            } else {
                $this->runEvent($event);
            }

            $this->eventsRan = true;
        }

        if (! $this->eventsRan) {
            // Laravel would output the default text here. You can remove
            // this if statement entirely if you don't want output.
            //
            // Alternatively, define some custom output with:
            // $this->info("My custom 'nothing ran' message");
        }
    }
}
Verify that Laravel sees your new command:
php artisan | grep run:tasks
Finally update your cron to run the new command:
* * * * * cd /path-to-your-project && php artisan run:tasks >> /dev/null 2>&1
As I mentioned in the comments, I see two possibilities.
You can filter the output, removing the line you don't want:
* * * * * cd /path-to-your-project && php artisan schedule:run | awk '{ if (/No scheduled commands are ready to run./ && !seen) { seen = 1 } else print }'
Or you can override with your own command:
$ php artisan make:command ScheduleRunCommand
By making your own command (mostly copy/paste from ScheduleRunCommand) or by extending ScheduleRunCommand, as @dave-s proposed.
And if you still want php artisan schedule:run to use your new command, you need to register it in a service provider:
$this->app->extend('schedule.run', function () {
    return new \App\Console\Commands\ScheduleRunCommand;
});
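For context, that binding would typically live in a service provider's register() method, e.g. app/Providers/AppServiceProvider.php (assuming your class is App\Console\Commands\ScheduleRunCommand):
public function register()
{
    // make schedule:run resolve to the custom command
    $this->app->extend('schedule.run', function () {
        return new \App\Console\Commands\ScheduleRunCommand;
    });
}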
If you look at the code for Laravel at https://github.com/laravel/framework/blob/78505345f2a34b865a980cefbd103d8eb839eedf/src/Illuminate/Console/Scheduling/ScheduleRunCommand.php#L82
public function handle()
{
    foreach ($this->schedule->dueEvents($this->laravel) as $event) {
        if (! $event->filtersPass($this->laravel)) {
            continue;
        }

        if ($event->onOneServer) {
            $this->runSingleServerEvent($event);
        } else {
            $this->runEvent($event);
        }

        $this->eventsRan = true;
    }

    if (! $this->eventsRan) {
        $this->info('No scheduled commands are ready to run.');
    }
}
You can see that it's handled via the $this->info handler.
The info handler is defined in Command.php; it calls the line method, which writes through the output handler that is set up in the run command.
So, in essence, to intercept this you should be able to override the OutputStyle (which is based on Symfony's SymfonyStyle) by binding your own output handler before the commands are run in the file your cron job calls.
The best working scenario I can think of is using an OutputFormatter that simply returns an empty string when the message matches your target string.
$this->output->setFormatter( new MyCatchemAllFormatter() );
And in the class you would define something along the lines of:
use Symfony\Component\Console\Formatter\OutputFormatter;

class MyCatchemAllFormatter extends OutputFormatter
{
    public function formatAndWrap(string $message, int $width)
    {
        if ($message != 'No scheduled commands are ready to run.') {
            return parent::formatAndWrap($message, $width);
        }

        // swallow the default "nothing to run" message
        return '';
    }
}
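The setFormatter() call would then go into your own command (for instance an overridden schedule:run) before the normal scheduler logic executes. A minimal sketch, assuming MyCatchemAllFormatter is the class above:
public function handle()
{
    // install the formatter that swallows the default message,
    // then let the usual scheduler logic run
    $this->output->setFormatter(new MyCatchemAllFormatter());

    return parent::handle();
}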
I understand that my solution is DIRTY and I'll get downvotes from most SO users, but it's quick to do without registering additional service providers, modifying classes, etc.
I've checked the sources and found this at line 81 of ScheduleRunCommand:
public function handle()
{
    foreach ($this->schedule->dueEvents($this->laravel) as $event) {
        if (! $event->filtersPass($this->laravel)) {
            continue;
        }

        if ($event->onOneServer) {
            $this->runSingleServerEvent($event);
        } else {
            $this->runEvent($event);
        }

        $this->eventsRan = true;
    }

    if (! $this->eventsRan) { // L81
        $this->info('No scheduled commands are ready to run.'); // L82
    }
}
The quickest way to "cheat" is to copy that class to app/Console/ScheduleRunCommand.php and copy the file back to its original vendor path every time composer dump-autoload is called.
1) Copy the original file to the app/Console folder:
cp vendor/laravel/framework/src/Illuminate/Console/Scheduling/ScheduleRunCommand.php app/Console/ScheduleRunCommand.php
2) Add the following line to the scripts > post-autoload-dump section of composer.json (see the sketch after these steps):
cp app/Console/ScheduleRunCommand.php vendor/laravel/framework/src/Illuminate/Console/Scheduling/ScheduleRunCommand.php
3) Modify your message in app/Console/ScheduleRunCommand.php (line 82):
if (! $this->eventsRan) {
    $this->info('NOTHING');
}
4) Run composer dump-autoload.
As a result, the scheduler now prints your custom message ('NOTHING' in this example) instead of the default one.
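For reference, a rough sketch of how that fragment might sit in composer.json; keep whatever entries are already present in the post-autoload-dump array:
"scripts": {
    "post-autoload-dump": [
        "cp app/Console/ScheduleRunCommand.php vendor/laravel/framework/src/Illuminate/Console/Scheduling/ScheduleRunCommand.php"
    ]
}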
Hello, I need some help with executing Symfony commands from inside another command. I am not new to this; I have created many commands and run them from inside other commands and controllers, and it has always worked. But I do not understand why this one does not work like the others. I run one command all the time, and from time to time I create some extra workers when there are many jobs, to give that one worker some help (the one-check option).
I created a command to run a Beanstalk worker with this class:
<?php
namespace AppBundle\Command;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Question\Question;
use Symfony\Component\Console\Input\InputOption;
use Symfony\Component\Console\Command\LockableTrait;
/**
* Class BeanstalkWorkerCommand
* Command start the Beanstalk worker. Get the job from queue and preform job
*
* @method configure()
* @method execute(InputInterface $input, OutputInterface $output)
* @method authenticateUser(InputInterface $input, OutputInterface $output)
* @method checkAuthentication($username, $password)
*/
class BeanstalkWorkerCommand extends ContainerAwareCommand
{
use LockableTrait;
protected function configure()
{
$this
->setName('beanstalk:worker:start')
->setDescription('Start the infinite Beanstalk worker. Get the job from the queue and perform it')
->addOption(
'one-check',
'o',
InputOption::VALUE_NONE,
'If set, the worker will check the tubes only once and die if there are no jobs in the queue'
)
;
}
protected function execute(InputInterface $input, OutputInterface $output)
{
$output->writeln("\n<info>Beanstalk worker service started</info>");
$tubes = $this->getContainer()->get('app.job_manager.power_plant')->getTubes();
if ($input->getOption('one-check')) {
// run once
foreach ($tubes as $tubeName) {
if ($tubeName != "default") {
$this->getContainer()->get('app.queue_manager')->fetchQueue($tubeName);
}
}
$output->writeln("\n<info>Beanstalk worker completed check and stoped</info>");
} else {
// run forever
set_time_limit(0);
ini_set('xdebug.max_nesting_level', 1000);
if (!$this->lock()) {
$output->writeln('The command is already running in another process.');
return 0;
}
while (1) {
foreach ($tubes as $tubeName) {
if ($tubeName != "default") {
$this->getContainer()->get('app.queue_manager')->fetchQueue($tubeName);
}
usleep(100000); // sleep() only accepts whole seconds, so use usleep() for a 0.1 s pause
}
}
$output->writeln("\n<error>Beanstalk worker service has stoped</error>");
}
}
}
Then I run another command to create some extra workers with these functions:
public function startExtraWorkers(OutputInterface $output)
{
$numOfExtraWorkers = $this->getContainer()->getParameter('num_of_extra_beanstalk_workers');
for ($i=0; $i < $numOfExtraWorkers; $i++) {
$payload = [
"--one-check" => TRUE
];
$this->getContainer()->get('app.job_manager.queue')->createAndSendToQueue('beanstalk:worker:start', $payload, 10);
}
$output->writeln("\n<question>".$numOfExtraWorkers." extra benastalk workers started!</question>");
return TRUE;
}
public function createAndSendToQueue($command, $payload, $priority = 65536)
{
$jobData = $this->createJob($command, $payload);
return $this->job->enqueue($command, $jobData, $priority);
}
public function enqueue($job, array $args, $priority = 65536, $delay = 0, $ttr = 120)
{
if (!preg_match('/[a-z0-9\.]+/i', $job)) {
throw new InvalidArgumentException("Invalid job name");
}
$args = json_encode($args);
return $this->pheanstalk->put($args, $priority, $delay, $ttr);
}
And the problem is that if I run this command from the terminal or via a cron job it works, but when I run it through this function it does not. I can see that the command has been executed, but for some unknown reason it does not do anything.
When I run the following, I can see that all the commands have been executed, but they do not perform the job the way they do when I run the same command from the terminal or via a cron job:
ps ax |grep "beanstalk:worker:start --one-check"
Output (the first one was started from this function and the second one by the cron job; only the second one works):
31934 ? Ss 0:00 /bin/sh -c /usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check
31935 ? S 0:00 /usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check
Can anyone give me some advice on why this is not working like the other commands, and why the same command runs fine from a cron job or the terminal?
Thanks for the help!
Try using:
/bin/sh -c '/usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check'
Mind the quotes.
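In other words, wherever the worker process is spawned from PHP, make sure the whole console invocation reaches the shell as one properly quoted string. A rough sketch using plain exec(); the paths are taken from the output above, and detaching with discarded output is only illustrative:
// build the console invocation as a single string
$cmd = '/usr/bin/php /var/www/mose-base/bin/console beanstalk:worker:start --one-check';

// hand it to the shell as one quoted argument, run in the background, discard output
exec('/bin/sh -c ' . escapeshellarg($cmd) . ' > /dev/null 2>&1 &');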
I'm currently making a mail client with Symfony 2.7. I've made a command to collect new mails with IMAP and create entities from them.
This command runs fine on the command line; it collects and displays new mails.
Here is the command:
protected function configure()
{
parent::configure();
$this
->setName('app:mails:collect')
->setDescription('Collects new mails from every Mailboxes.');
}
protected function execute(InputInterface $input, OutputInterface $output)
{
$em = $this->getContainer()->get('doctrine')->getEntityManager();
$mailboxes = $em->getRepository('MIPMailBundle:MailBox')->findAllActive();
foreach ($mailboxes as $mailbox){
$imapBox = new ImapMailbox('{'.$mailbox->getServer().':143/notls/norsh/novalidate-cert}INBOX',
$mailbox->getAdress(), $mailbox->getPassword(), "web/mails/", "UTF-8");
if (count($mailbox->getMails()) == 0){
$output->writeln("Getting mails from mailbox...");
$mailsIds = $imapBox->searchMailbox('ALL');
if(!$mailsIds) {
$output->writeln($mailbox->getAdress() . " is empty");
}
} else {
$output->writeln("Searching for new mails...");
$mailsIds = $imapBox->searchMailbox('UNSEEN');
if(!$mailsIds) {
$output->writeln("No new mail for " . $mailbox);
}
}
foreach ($mailsIds as $mailId){
//Creates new mail entities...
$imapBox->markMailAsRead($mailId);
}
}
$em->flush();
}
I want this command to be run every minute by the server, so I've made a cron job for it:
* * * * * /path/to/myapp/app/console app:mails:collect
But the command doesn't work when run by cron. I tried writing the output to a file, but the file is empty. In the cron logs, I can see the job being executed:
Jul 19 09:39:01 dev CRON[26967]: (web) CMD (/path/to/myapp/app/console app:mails:collect)
but it doesn't work... I tried to specify the environment (I work in the dev env) in the command:
* * * * * /path/to/myapp/app/console app:mails:collect --env=dev
This was unsuccessful. Any idea why? Thanks
This works fine for me: cd into the directory first, then run the command. Try this and see what happens.
* * * * * cd /path/to/myapp && php app/console app:mails:collect --env=dev
Use either bin/console or app/console, depending on your setup.
Note: when you run out of options, try a very simple command (something like printing "hello world" to the terminal) to see whether the problem is your command or something else.
Try specifying the PHP executable path:
* * * * * /path/to/php /path/to/myapp/app/console app:mails:collect --env=dev
For example:
* * * * * /usr/bin/php /path/to/myapp/app/console app:mails:collect --env=dev
I'm writing a program for my Raspberry Pi, that consists of two main parts:
A C-Program that uses the Spotify-API "Libspotify" to search for music and for playing it.
A PHP-program, running on an apache2-Webserver, to control the Spotify-program from a PC or Smartphone in my local network.
The communication between these two separate programs works over a couple of files.
The C-part for receiving the user inputs is being called once a second and works like this:
#include <stdio.h>
#include <sys/stat.h>

void get_input() {
    int i = 0, c;
    FILE *file;
    struct stat stat;
    char buffer[INPUT_BUFFER_SIZE];

    file = fopen(PATH_TO_COMMUNICATION, "r");
    if (file == NULL) {
        perror("Error opening PATH_TO_COMMUNICATION");
        return;
    }

    fstat(fileno(file), &stat);
    if (stat.st_size > 1) {
        /* leave room for the terminating '\0' */
        while ((c = fgetc(file)) != EOF && i < INPUT_BUFFER_SIZE - 1) {
            buffer[i] = c;
            ++i;
        }
        buffer[i] = '\0';
        parse_input(buffer);
    }

    fclose(file);
    clear_file(PATH_TO_COMMUNICATION);
}
So, via fwrite() in PHP, I can send commands to the C-program.
Afterwards, the C program parses the input and writes the results to a "results" file. Once this is done, I write the last contents of the "communication" file to a "last_query" file, so that in PHP I can see when the entire results have been written to "results":
function search($query) {
    write_to_communication("search ".$query);

    do { // Wait for results
        $file = fopen("../tmp/last_query", "r");
        $line = fgets($file);
        fclose($file);
        time_nanosleep(0, 100000000);
    } while ($line != $query);

    echo get_result_json();
}
It does work already, but I don't like this approach at all. There is a lot of polling and unnecessary opening and closing of different files. Also, in the worst case the program needs more than a second before it starts to do anything with the user input. Furthermore, there can be race conditions when the C program tries to read the file while the PHP program is writing to it.
So, now to my question: what is the "right" way to achieve nice and clean communication between the two program parts? Is there some completely different approach without the ugly polling and without race conditions? Or can I improve the existing code so that it gets nicer?
I suppose that you write both the PHP and the C code yourself, and that you have access to compilers and so on? What I would do is not start up a different process and use inter-process communication at all, but write a PHP C++ extension that does all of this for you.
The extension starts up a new thread, and this thread picks up all instructions from an in-memory instruction queue. When you're ready to pick up the result, the final instruction is sent to the thread (the instruction to close down), and when the thread has finally finished, you can pick up the result.
You could use a program like this:
#include <phpcpp.h>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <unistd.h>
/**
* Class that takes instructions from PHP, and that executes them in CPP code
*/
class InstructionQueue : public Php::Base
{
private:
/**
* Queue with instructions (for simplicity, we store the instructions
* in strings, much cooler implementations are possible)
*
* @var std::queue
*/
std::queue<std::string> _queue;
/**
* The final result
* @var std::string
*/
std::string _result;
/**
* Counter with number of instructions
* @var int
*/
int _counter = 0;
/**
* Mutex to protect shared data
* @var std::mutex
*/
std::mutex _mutex;
/**
* Condition variable that the thread uses to wait for more instructions
* @var std::condition_variable
*/
std::condition_variable _condition;
/**
* Thread that processes the instructions
* @var std::thread
*/
std::thread _thread;
/**
* Procedure to execute one specific instruction
* @param instruction
*/
void execute(const std::string &instruction)
{
// @todo
//
// add your own the implementation, for now we just
// add the instruction to the result, and sleep a while
// to pretend that this is a difficult algorithm
// append the instruction to the result
_result.append(instruction);
_result.append("\n");
// sleep for a while
sleep(1);
}
/**
* Main procedure that runs the thread
*/
void run()
{
// need the mutex to access shared resources
std::unique_lock<std::mutex> lock(_mutex);
// keep looping
while (true)
{
// go wait for instructions
while (_counter == 0) _condition.wait(lock);
// check the number of instructions, leap out when empty
if (_queue.size() == 0) return;
// get instruction from the queue, and reduce queue size
std::string instruction(std::move(_queue.front()));
// remove front item from queue
_queue.pop();
// no longer need the lock
lock.unlock();
// run the instruction
execute(instruction);
// get back the lock for the next iteration of the main loop
lock.lock();
}
}
public:
/**
* C++ constructor
*/
InstructionQueue() : _thread(&InstructionQueue::run, this) {}
/**
* Copy constructor
*
* We just create a brand new queue when it is copied (copy constructor
* is required by the PHP-CPP library)
*
* @param queue
*/
InstructionQueue(const InstructionQueue &queue) : InstructionQueue() {}
/**
* Destructor
*/
virtual ~InstructionQueue()
{
// stop the thread
stop();
}
/**
* Method to add an instruction
* @param params Object representing PHP parameters
*/
void add(Php::Parameters &params)
{
// first parameter holds the instruction
std::string instruction = params[0];
// need a mutex to access shared resources
_mutex.lock();
// add instruction
_queue.push(instruction);
// update instruction counter
_counter++;
// done with shared resources
_mutex.unlock();
// notify the thread
_condition.notify_one();
}
/**
* Method to stop the thread
*/
void stop()
{
// is the thread already finished?
if (!_thread.joinable()) return;
// thread is still running, send instruction to stop (which is the
// same as not sending an instruction at all but just increasing the
// instruction counter); lock the mutex to access protected data
_mutex.lock();
// add instruction
_counter++;
// done with shared resources
_mutex.unlock();
// notify the thread
_condition.notify_one();
// wait for the thread to finish
_thread.join();
}
/**
* Retrieve the result
* @return string
*/
Php::Value result()
{
// stop the thread first
stop();
// return the result
return _result;
}
};
/**
* Switch to C context to ensure that the get_module() function
* is callable by C programs (which the Zend engine is)
*/
extern "C" {
/**
* Startup function that is called by the Zend engine
* to retrieve all information about the extension
* @return void*
*/
PHPCPP_EXPORT void *get_module() {
// extension object
static Php::Extension myExtension("InstructionQueue", "1.0");
// description of the class so that PHP knows
// which methods are accessible
Php::Class<InstructionQueue> myClass("InstructionQueue");
// add methods
myClass.method("add", &InstructionQueue::add);
myClass.method("result", &InstructionQueue::result);
// add the class to the extension
myExtension.add(std::move(myClass));
// return the extension
return myExtension;
}
}
You can use this instruction queue from a PHP script like this:
<?php
$queue = new InstructionQueue();
$queue->add("instruction 1");
$queue->add("instruction 2");
$queue->add("instruction 3");
echo($queue->result());
As an example I've only added a silly implementation that sleeps for a while, but you could run your API-calling functions in there.
The extension uses the PHP-CPP library (see http://www.php-cpp.com).
I have one PHP script that I am executing via cron every 10 minutes on CentOS.
The problem is that if the cron job takes more than 10 minutes, another instance of the same cron job will start.
I tried one trick:
Created a lock file with PHP code (much like a PID file) when the cron job started.
Removed the lock file with PHP code when the job finished.
And when a new cron job started executing the script, I checked whether the lock file exists and, if so, aborted the script.
But there is one problem: if the lock file is not deleted or removed by the script for any reason, the cron job will never start again.
Is there any way I can stop a cron job from executing again if it is already running, using Linux commands or something similar?
Advisory locking is made for exactly this purpose.
You can accomplish advisory locking with flock(). Simply apply the function to a previously opened lock file to determine if another script has a lock on it.
$f = fopen('lock', 'w') or die('Cannot create lock file');

if (flock($f, LOCK_EX | LOCK_NB)) {
    // we got the lock: no other instance is running, so do the work here
}
In this case I'm adding LOCK_NB to prevent the next script from waiting until the first has finished. Since you're using cron there will always be a next script.
If the current script prematurely terminates, any file locks will get released by the OS.
Maybe it is better to not write code if you can configure it:
https://serverfault.com/questions/82857/prevent-duplicate-cron-jobs-running
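For example, on systems that ship the flock(1) utility, the crontab entry itself can do the locking without any PHP changes (the lock-file path and script path below are placeholders):
*/10 * * * * /usr/bin/flock -n /tmp/myscript.lock /usr/bin/php /path/to/script.php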
flock() worked out great for me - I have a cron job with database requests scheduled every 5 minutes, so not having several running at the same time is crucial. This is what I did:
$filehandle = fopen("lock.txt", "c+");

if (flock($filehandle, LOCK_EX | LOCK_NB)) {
    // code here to start the cron job
    flock($filehandle, LOCK_UN); // don't forget to release the lock
} else {
    // throw an exception here to stop the next cron job
}

fclose($filehandle);
In case you don't want to kill the next scheduled cron job, but simply pause it till the running one is finished, then just omit the LOCK_NB:
if (flock($filehandle, LOCK_EX))
This is a very common problem with a very simple solution: cronjoblock, a simple 8-line shell-script wrapper that applies locking using flock:
https://gist.github.com/coderofsalvation/1102e56d3d4dcbb1e36f
By the way, cronjoblock also reverses cron's spammy email behaviour: it only outputs something if stuff goes wrong. This is handy with respect to cron's MAILTO variable. The stdout/stderr output will be suppressed (so cron will not send mails) unless the given process has an exit code > 0.
Note that since PHP 5.3.2 the automatic unlocking when the file's resource handle is closed has been removed; unlocking now always has to be done manually.
I use this:
<?php
// Create a PID file
if (is_file(dirname($_SERVER['SCRIPT_NAME']) . "/.processing")) { die(); }
file_put_contents(dirname($_SERVER['SCRIPT_NAME']) . "/.processing", "processing");

// SCRIPT CONTENTS GOES HERE //

@unlink(dirname($_SERVER['SCRIPT_NAME']) . "/.processing");
?>
#!/bin/bash
ps -ef | grep -v grep | grep capture_12hz_sampling_track.php
if [ $? -eq 1 ];
then
nohup /usr/local/bin/php /opt/Apache/htdocs/cmsmusic_v2/script/Mp3DownloadProcessMp4/capture_12hz_sampling_track.php &
else
echo "Already running"
fi
Another alternative:
<?php
/**
* Lock manager to ensure our cron doesn't run twice at the same time.
*
* Inspired by the lock mechanism in Mage_Index_Model_Process
*
* Usage:
*
* $lock = Mage::getModel('stcore/cron_lock');
*
* if (!$lock->isLocked()) {
* $lock->lock();
* // Do your stuff
* $lock->unlock();
* }
*/
class ST_Core_Model_Cron_Lock extends Varien_Object
{
/**
* Process lock properties
*/
protected $_isLocked = null;
protected $_lockFile = null;
/**
* Get lock file resource
*
* @return resource
*/
protected function _getLockFile()
{
if ($this->_lockFile === null) {
$varDir = Mage::getConfig()->getVarDir('locks');
$file = $varDir . DS . 'stcore_cron.lock';
if (is_file($file)) {
$this->_lockFile = fopen($file, 'w');
} else {
$this->_lockFile = fopen($file, 'x');
}
fwrite($this->_lockFile, date('r'));
}
return $this->_lockFile;
}
/**
* Lock process without blocking.
* This method allow protect multiple process runing and fast lock validation.
*
* @return Mage_Index_Model_Process
*/
public function lock()
{
$this->_isLocked = true;
flock($this->_getLockFile(), LOCK_EX | LOCK_NB);
return $this;
}
/**
* Lock and block process.
* If new instance of the process will try validate locking state
* script will wait until process will be unlocked
*
* @return Mage_Index_Model_Process
*/
public function lockAndBlock()
{
$this->_isLocked = true;
flock($this->_getLockFile(), LOCK_EX);
return $this;
}
/**
* Unlock process
*
* @return Mage_Index_Model_Process
*/
public function unlock()
{
$this->_isLocked = false;
flock($this->_getLockFile(), LOCK_UN);
return $this;
}
/**
* Check if process is locked
*
* @return bool
*/
public function isLocked()
{
if ($this->_isLocked !== null) {
return $this->_isLocked;
} else {
$fp = $this->_getLockFile();
if (flock($fp, LOCK_EX | LOCK_NB)) {
flock($fp, LOCK_UN);
return false;
}
return true;
}
}
/**
* Close file resource if it was opened
*/
public function __destruct()
{
if ($this->_lockFile) {
fclose($this->_lockFile);
}
}
}
Source: https://gist.github.com/wcurtis/9539178
I was running a php cron job script that dealt specifically with sending text messages using an existing API. On my local box the cron job was working fine, but on my customer's box it was sending double messages. Although this doesn't make sense to me, I double checked the permissions for the folder responsible for sending messages and the permission was set to root. Once I set the owner as www-data (Ubuntu) it started behaving normally.
This might not be the issue for you, but if it's a simple cron script I would double-check the permissions.