I'm creating a web app where users can specify a time and date to run 2 scheduled tasks (one at the start date and one at the end date). As these are only run once each I didn't know if a cron job would be appropriate.
The other option I thought of would be to save all of the task times to a DB and run a cron job every hour to check if $usertime == NOW(), etc. But I was worried about jobs overlapping, etc.
Thoughts?
Additional: Many users can create many tasks that run 2 scripts each.
cron is great for scripts that run on a regular basis, but for a one-off (or two-off) script that should run at a particular time, the Unix 'at' command is the right tool. You can drive it directly from PHP with code like this:
/****
 * Schedule a command using the 'at' command
 *
 * To do this you need to ensure that the www-data user is allowed to
 * use 'at' - it must not be listed in /etc/at.deny (and must appear in
 * /etc/at.allow if that file exists).
 *
 * EXAMPLE USAGE ::
 *
 *   scriptat('/usr/bin/command-to-execute', 'time-to-run');
 *
 * The time-to-run should be in this format: strftime("%Y%m%d%H%M", $unixtime),
 * which is passed to 'at -t' (the POSIX [[CC]YY]MMDDhhmm timestamp format).
 **/
function scriptat($cmd = null, $time = null) {
    // Both parameters are required
    if (!$cmd) {
        error_log("******* ScriptAt: cmd not specified");
        return false;
    }
    if (!$time) {
        error_log("******* ScriptAt: time not specified");
        return false;
    }
    // We need to locate the php executable
    if (!file_exists("/usr/bin/php")) {
        error_log("~ ScriptAt: Could not locate /usr/bin/php");
        return false;
    }
    $fullcmd = "/usr/bin/php -f " . escapeshellarg($cmd);
    // 'at -t' accepts the YYYYMMDDHHMM timestamp produced above
    $r = popen("/usr/bin/at -t " . escapeshellarg($time), "w");
    if (!$r) {
        error_log("~ ScriptAt: unable to open pipe for AT command");
        return false;
    }
    fwrite($r, $fullcmd);
    pclose($r);
    error_log("~ ScriptAt: cmd={$cmd} time={$time}");
    return true;
}
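For example, to schedule a task two hours from now (the script path here is just an illustration):

$when = strftime("%Y%m%d%H%M", time() + 7200); // two hours from now
scriptat('/var/www/tasks/start_task.php', $when);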
I'd do it like that: save the settings in a database and check, when needed, whether a task should start.
You could run a checking/initiating cron job every minute. Just make sure the checking code is not too heavy (it should exit quickly) - a database query for a couple of rows shouldn't be a problem to run every minute.
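A minimal sketch of such a checker, assuming a tasks table with run_at, state and script columns (all of those names are illustrative):

<?php
// check_tasks.php - invoked by cron every minute; exits quickly
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$due = $db->query("SELECT id, script FROM tasks
                   WHERE run_at <= NOW() AND state = 'pending'");

foreach ($due as $task) {
    // Claim the row first, so an overlapping run cannot pick it up too
    $claim = $db->prepare("UPDATE tasks SET state = 'running'
                           WHERE id = ? AND state = 'pending'");
    $claim->execute([$task['id']]);
    if ($claim->rowCount() === 0) {
        continue; // another run already claimed this task
    }
    // Launch the script in the background so the checker itself stays light
    exec('/usr/bin/php -f ' . escapeshellarg($task['script']) . ' >/dev/null 2>&1 &');
}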
If the "task" is really heavy, you should consider a daemon instead of a cronjob calling php. Here is a good & easy-to-read introduction: Create daemons in PHP
Edit: I took for granted that even though the tasks are only run once each, you have multiple users mapping 1:1 to that "once each", and thereby a pair of jobs per user. If not, at (as the comments say) looks worthy of an experiment.
Whatever mechanism you choose (cron/at/daemon), I would only put the start task into the queue. Part of the start task's work is to schedule the end task: it can either place the end task into the future or, if the end time has already elapsed, start it immediately. That way the two will never overlap.
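A rough sketch of that idea (all function names here are placeholders for whatever mechanism you pick):

<?php
// start_task.php - the only job that gets scheduled up front
runStartWork($taskId);

$endTime = getEndTime($taskId); // e.g. read the end date from the DB
if ($endTime <= time()) {
    // The end time passed while the start task ran: run the end task now
    runEndWork($taskId);
} else {
    // Otherwise hand the end task to cron/at/daemon for later
    scheduleEndTask($taskId, $endTime);
}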
I would also favour the PHP/DB-and-cron option. It seems simpler and gives more flexibility - you could choose multiple threads etc. if performance dictates.
Related
I have a problem with terminating processes started from a queue job.
I use the yii2-queue extension to run some long-running system commands that have a total execution time limit, controlled by the getTtr() method of RetryableJobInterface. The command may take anywhere from minutes to hours to fully complete, but I need to kill it after it hits the 60-minute mark.
<?php

use Symfony\Component\Process\Process;
use yii\base\BaseObject;
use yii\queue\RetryableJobInterface;

class TailJob extends BaseObject implements RetryableJobInterface
{
    public function getTtr()
    {
        return 10;
    }

    public function execute($queue)
    {
        $process = new Process('tail -f /var/log/dpkg.log');
        $process->setTimeout(60);
        $process->run();
    }

    public function canRetry($attempt, $error)
    {
        return false;
    }
}
Now, the problem that I face is that even when queue/listen kills the job, the tail command (it's just an example; in production I need to run a different command) keeps running in the background. Is there any way I can force the system to kill the tail command when the job is killed?
Your script needs to keep checking whether the timeout was reached (this applies when you launch the process with start() instead of the blocking run()); e.g.

while ($process->isRunning()) {
    $process->checkTimeout();
    usleep(200000);
}
Read more about "Process Timeout" here:
https://symfony.com/doc/current/components/process.html
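Putting it together inside the job, a sketch might look like this (note that newer versions of the Process component expect the command as an array; checkTimeout() stops the child and throws a ProcessTimedOutException once the limit is exceeded):

use Symfony\Component\Process\Exception\ProcessTimedOutException;
use Symfony\Component\Process\Process;

$process = new Process(['tail', '-f', '/var/log/dpkg.log']);
$process->setTimeout(3600); // 60 minutes
$process->start();

try {
    while ($process->isRunning()) {
        // checkTimeout() terminates the child and throws once the limit is hit
        $process->checkTimeout();
        usleep(200000);
    }
} catch (ProcessTimedOutException $e) {
    // the tail process has already been stopped at this point; log / clean up as needed
}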
Run the command with a timeout
$process = new Process('timeout 3600 tail -f /var/log/dpkg.log');
This will limit the process to a maximum of 60 minutes (3600 seconds).
If your script kills it first, that's fine; if it doesn't, the process will die when the timeout is reached.
https://linux.die.net/man/1/timeout
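If the wrapped command traps or ignores the default TERM signal, GNU timeout can escalate to KILL (a variation on the line above, using its --kill-after option):

$process = new Process('timeout --kill-after=30 3600 tail -f /var/log/dpkg.log');

This sends TERM after 3600 seconds and, if the process is still alive 30 seconds later, follows up with KILL.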
I want to create a queue (Amazon SQS) that only runs jobs every X seconds. So if suddenly 50 jobs are submitted, they end up in the queue. The queue listener then pulls a job, does something, and waits X seconds. After that, the next job is pulled, followed by another X-second pause, and so on.
For the queue listener, the sleep option only determines how long the worker will "sleep" if there are no new jobs available. So it will only sleep if there is nothing in the queue.
Or should I just put in a pause(x) in my PHP code?
[edit] I just tested the sleep method with both a FIFO and a standard AWS SQS queue and it messes up the whole queue. Suddenly jobs are (successfully) resubmitted 3 times, after which they go into a failed state. Moreover, the delay given in my code (3-4 min) was ignored; instead a one-minute delay was applied.
<?php

namespace App\Jobs;

use App\City;

class RetrieveStations extends Job
{
    protected $cities;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($cities)
    {
        $this->cities = $cities;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        // code here
        doSomething();
        sleep(X); // X = pause in seconds
    }
}
I have the exact same problem to solve. I'm using Laravel 5.8 and I don't see how I can get the queue worker to wait a fixed period between jobs.
I'm now thinking of using a scheduled task to handle this. I can schedule a task to run, say, every 5 minutes and run the following artisan command:
$schedule->command('queue:work --queue=emails --once')->everyFiveMinutes();
This will take one job from the queue and run it. Unfortunately, there's not much more granular control over how often a job is processed.
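For context, that line would live in the schedule() method of app/Console/Kernel.php; a minimal sketch:

<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Pull and run a single job from the "emails" queue every five minutes
        $schedule->command('queue:work --queue=emails --once')->everyFiveMinutes();
    }
}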
Exactly - you need to put your PHP code to sleep; there is no other way.
See PHP's sleep().
I have the following (simple) lock code for a Laravel 5.3 command:
private $hash = null;

public final function handle() {
    try {
        $this->hash = md5(serialize([static::class, $this->arguments(), $this->options()]));
        $this->info("Generated signature ".$this->hash, "v");
        if (Redis::exists($this->hash)) {
            $this->hash = null;
            throw new \Exception("Method ".$this->signature." is already running");
        }
        Redis::set($this->hash, true);
        $this->info("Running method", "vv");
        $this->runMutuallyExclusiveCommand(); // Actual command is not important here
        $this->cleanup();
    } catch (\Exception $e) {
        $this->error($e->getMessage());
    }
}

public function cleanup() {
    if (is_string($this->hash)) {
        Redis::del($this->hash);
    }
}
This works fine if the command is allowed to go through its execution cycle normally (including when a PHP exception is thrown). However, the problem arises when the command is interrupted by other means (e.g. Ctrl-C, or the terminal window being closed). In that case the cleanup code is not run, the command is considered to still be "executing", and I need to manually remove the entry from the cache in order to restart it. I have tried running the cleanup code in a __destruct function, but that does not seem to be called either.
My question is, is there a way to set some code to be ran when a command is terminated regardless how it was terminated?
Short answer is no. When you kill the running process, whether by Ctrl-C or by closing the terminal, you terminate it outright. You would need an interrupt handler in your shell that links to your cleanup code, but that is way out of scope.
There are other options, however. Cron jobs can run at intermittent intervals to perform cleanup tasks and other helpful things. You could also create a start-up routine that runs prior to your current code and does the cleanup for you before calling your current routine. I believe your best bet is a cron job that runs at given intervals, looks for entries in the cache that are no longer appropriate, and cleans them up. Here is a decent site to get you started with cron jobs: https://www.linux.com/learn/scheduling-magic-intro-cron-linux
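One pragmatic variation on that idea (my suggestion, not something from the original code): give the lock a TTL when it is created, so that even if the process is killed the key expires on its own once the command's maximum expected runtime has passed:

// In handle(), instead of Redis::set($this->hash, true):
Redis::setex($this->hash, 3600, 1); // lock self-destructs; 3600s is an assumed ceiling

With that in place, a killed command leaves a stale lock behind for at most an hour rather than forever.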
I just discovered that a ConsoleEvents::TERMINATE event exists in Symfony.
I want to use it to execute some additional process after the command execution (without delaying the command itself).
But I want that process to run only when one specific command finishes, not for all commands (because I think ConsoleEvents::TERMINATE is fired for every command).
I really don't know how to do that.
Regards.
You can access the instance of the command from the ConsoleTerminateEvent.
It's almost a copy-paste from the documentation of the Console component. With full Symfony, registering the listener looks a little different, but you should get the idea.
use Symfony\Component\Console\ConsoleEvents;
use Symfony\Component\Console\Event\ConsoleTerminateEvent;

$dispatcher->addListener(
    ConsoleEvents::TERMINATE,
    function (ConsoleTerminateEvent $event) {
        $command = $event->getCommand();
        // If it's not the command you want, do nothing
        if (!$command instanceof YourDesiredCommand) {
            return;
        }
        // put your logic here
    }
);
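In a full Symfony application, the equivalent is usually an event subscriber registered as a service (a sketch assuming service autoconfiguration is enabled; the subscriber class and command names are placeholders):

use Symfony\Component\Console\ConsoleEvents;
use Symfony\Component\Console\Event\ConsoleTerminateEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class YourDesiredCommandTerminateSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return [ConsoleEvents::TERMINATE => 'onTerminate'];
    }

    public function onTerminate(ConsoleTerminateEvent $event)
    {
        // Only react when the command that just finished is the one we care about
        if (!$event->getCommand() instanceof YourDesiredCommand) {
            return;
        }
        // put your post-command logic here
    }
}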
I've set up a Wiki family consisting of a small number of Wikis that have (and are expected to continue having) low to moderate traffic.
When you run a single MediaWiki, it runs a job on every page request, which is nice for keeping links and categories up to date, but I can't get this behaviour to work for a wiki family.
I have a wiki setup with a branching LocalSettings.php (switching on SERVER_NAME) and, despite searching (and asking on MediaWiki), I have found no way to keep this job behaviour; instead, jobs queue up, presumably because the automatically run maintenance scripts do not know which wiki they originate from.
Is there a way to fix or circumvent this? I have not found any kind of variable supplied when the job queue is run that could be passed into LocalSettings.php so that the correct settings are loaded and the jobs can run properly.
Generally, jobs are run on each page load within the context of the current wiki, so in your case there should be no problem with the queue, because your LocalSettings.php file is branched. In certain circumstances, though, the job queue may become overloaded; in that case you will need to disable the default queue behaviour (by setting $wgJobRunRate = 0;) and configure a maintenance script runner in crontab. This can be tricky for a branched farm, but I think it will work like this:
* * * * * php /path/to/your/wiki/maintenance/runJobs.php --wiki domainA.com
* * * * * php /path/to/your/wiki/maintenance/runJobs.php --wiki domainB.com
* * * * * php /path/to/your/wiki/maintenance/runJobs.php --wiki domainC.com
In this scenario, two constants will be available in LocalSettings.php during script execution: MW_DB and MW_PREFIX (use only MW_DB). So you will need to modify your LocalSettings.php like this:
...
$activeWiki = 'defaultWiki';
$switchVar = $_SERVER['SERVER_NAME'];
if (defined('DO_MAINTENANCE') && defined('MW_DB')) {
    $switchVar = MW_DB;
}
switch ($switchVar) {
    ...
}
...
Problem found - the issue was that the wikis were behind a permission gate (just a regular Apache one), and async jobs don't inherit the permissions, so I had to disable async jobs to solve it.
In case anyone else hits this problem: $wgRunJobsAsync = false; should be added to LocalSettings.php.