I've created a simple Laravel queue setup: a controller with a method that dispatches jobs onto the queue, and a job class that handles the logic.
My plan is to run multiple queue workers via a Supervisor config, and that part works fine.
Supervisor config:
/etc/supervisor/conf.d/worker-node.conf
[program:laravel-worker]
...
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=admin
numprocs=8 ; 8 concurrent workers
...
The problem is that I want to log whenever a worker picks up a job from the queue and starts processing it. That part is easy, I can just call Log::info(...), but I also want to log the worker's unique identifier. It could be the process ID, a worker ID, a worker number, whatever is available. I want this so I can inspect which worker handled which job. Is such a thing possible in Laravel? I know the worker processes are daemons, but I think it should be possible to get the process ID somehow. Expected log output:
laravel.log
[2021-09-28 14:12:54] local.INFO: [worker identifier here] started processing JOB id 156
[2021-09-28 14:12:54] local.INFO: [worker identifier here] started processing JOB id 187
[2021-09-28 14:12:54] local.INFO: [worker identifier here] started processing JOB id 1214
Job class:
class ProcessLocationJob implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
protected $fetchLocation;
/**
* Execute the job.
*
* @return void
*/
public function handle()
{
$this->fetchLocation->execute();
}
/**
* Handle a job failure.
*
* @param \Throwable $exception
* @return void
*/
public function failed(Throwable $exception)
{
$jobId=''; //Some JOB ID or PID is needed here
Log::error("job id {$jobId} failed: {$exception->getMessage()}" );
}
}
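I'm guessing something along these lines might work, but I'm not sure whether it's the right approach. This is just a rough sketch: I'm assuming getmypid() reflects the worker process because the job runs inside it, that the job instance exposed by InteractsWithQueue provides the queue job id, and that the Log facade is imported:
public function handle()
{
    // assumption: the job executes inside the queue worker's PHP process,
    // so getmypid() should return that worker's PID
    $workerPid = getmypid();

    // the InteractsWithQueue trait exposes the underlying queue job instance
    $jobId = $this->job ? $this->job->getJobId() : null;

    Log::info("[worker {$workerPid}] started processing JOB id {$jobId}");

    $this->fetchLocation->execute();
}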
TL;DR: How can I get the worker's PID, or any other identifier, inside the job's handle() method?
Any thoughts and alternatives are appreciated. Thank you.
Edit #1: Added job class
Related
I've got a tasks table with status and deadline columns. How can the status be changed to "expired" automatically once the current date becomes greater than the task's deadline date? Are there any realtime event listeners in Laravel?
I guess this is how the event listener class should look, but I'm not sure what to do next.
<?php
namespace App\Events;
use App\Models\Task;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;
class DeadlineExpired
{
use Dispatchable, InteractsWithSockets, SerializesModels;
/**
* The task instance.
*
* @var \App\Models\Task
*/
public $task;
/**
* Create a new event instance.
*
* @param \App\Models\Task $task
* @return void
*/
public function __construct(Task $task)
{
$this->task = $task;
}
}
Since you're only checking the date, the check only needs to run once a day at midnight. Use the Laravel scheduler to do the job.
First create an invokable class (importing your App\Models\Task model):
class UpdateTasks
{
    public function __invoke()
    {
        // mark every task whose deadline has passed as expired
        Task::whereDate('deadline', '<', today())->update(['status' => 'expired']);
    }
}
Then in your app/Console/Kernel.php, inside the schedule() method, register it:
$schedule->call(new UpdateTasks())->daily();
Finally, configure a cron job on your server that runs the scheduler; the usual setup runs it every minute and lets Laravel decide which tasks are due:
php artisan schedule:run
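The standard crontab entry from the Laravel docs looks like this (replace the placeholder path with your project's path):
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1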
There are realtime event listeners, but they require an action to fire them. For example, Eloquent model events fire when a model is created, updated, or deleted.
There is no built-in "listener" that polls every model waiting for a field you defined to change.
If there is further logic you would like to run when a Task becomes expired (like sending an email), your best bet is to check for newly expired tasks from the scheduler, as sketched below.
The scheduler runs every minute, driven by cron.
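A rough sketch of such a scheduled check, which also fires the DeadlineExpired event from the question for each newly expired task; the class name and namespace are just this sketch's choices:
<?php

namespace App\Console;

use App\Events\DeadlineExpired;
use App\Models\Task;

class ExpireTasks
{
    public function __invoke()
    {
        // find tasks past their deadline that are not yet expired,
        // flag them, and fire an event for any follow-up listeners
        Task::whereDate('deadline', '<', today())
            ->where('status', '!=', 'expired')
            ->each(function (Task $task) {
                $task->update(['status' => 'expired']);

                DeadlineExpired::dispatch($task);
            });
    }
}
You could register it the same way, for example $schedule->call(new ExpireTasks())->everyMinute(); if you want statuses to flip close to the actual deadline.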
I am using the WithoutOverlapping job middleware in Laravel, but it does not seem to work when jobs are dispatched at the same time (and take less than a second to run).
Example
class TestWaitJob implements ShouldQueue
{
use Dispatchable;
use InteractsWithQueue;
use Queueable;
use SerializesModels;
public Charge $charge;
public function __construct(Charge $charge)
{
$this->charge = $charge;
}
/**
* @return array
*/
public function middleware(): array
{
return [
(new WithoutOverlapping($this->charge->id))
->releaseAfter(30)
];
}
/**
* @return void
*/
public function handle()
{
$charge = $this->charge->current_state_key_name;
return;
}
}
This is a deliberately simple job; it does nothing, and it is what I have used to test.
If I go to my application and dispatch two copies of the job at the same time (using tinker):
TestWaitJob::dispatch(Charge::find(1)); TestWaitJob::dispatch(Charge::find(1)); 1;
both jobs are processed by Horizon at the same time.
If I add a simple sleep(1) to the job's handle method, I get the expected behaviour, which is:
Job begins processing and acquires lock
The next job cannot acquire lock so is released back to the queue.
So with the sleep line in place, one job is processed immediately (in about a second) and the next job completes 31 seconds later, which matches up exactly with releaseAfter(30).
I have been looking at this for hours; every time I introduce a delay the jobs process as expected, but with no delay they process at the same time. The application is financial in nature, so I cannot afford to potentially process jobs at the same time.
Any help/advice would be greatly appreciated; one workaround I've been toying with is sketched below. (P.S. I am using Redis as the cache driver.)
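The idea, and it may well be the wrong one, is to take an explicit atomic lock per charge and hold it for a fixed 30-second window instead of letting it go the instant the sub-second job finishes. This assumes the Cache facade (Illuminate\Support\Facades\Cache) is imported; the lock key format is made up:
public function handle()
{
    // hypothetical guard: hold a 30-second lock per charge, independent of
    // how quickly the job itself finishes
    $lock = Cache::lock('charge-lock:'.$this->charge->id, 30);

    if (! $lock->get()) {
        // another worker recently took the lock for this charge;
        // put this job back on the queue and retry later
        $this->release(30);

        return;
    }

    $charge = $this->charge->current_state_key_name;
}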
I am using the latest version of Homestead.
I also have Laravel Horizon set up.
I am using Redis as the queue driver.
Laravel is version 5.6 and is a fresh install.
What's happening is my jobs are all failing (even though the job exits correctly).
I am running the job through command line by using a custom command:
vagrant@homestead:~/myapp$ artisan crawl:start
vagrant@homestead:~/myapp$ <-- No CLI errors after running
app/Console/Commands/crawl.php
<?php
namespace MyApp\Console\Commands;
use Illuminate\Console\Command;
use MyApp\Jobs\Crawl;
class crawl extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'crawl:start';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Start long running job.';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
Crawl::dispatch();
}
}
app/Jobs/Crawl.php
<?php
namespace MyApp\Jobs;
use Illuminate\Bus\Queueable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
class Crawl implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
/**
* The number of seconds the job can run before timing out.
*
* @var int
*/
public $timeout = 3600;
/**
* The number of times the job may be attempted.
*
* @var int
*/
public $tries = 1;
/**
* Create a new job instance.
*
* @return void
*/
public function __construct()
{
}
/**
* Execute the job.
*
* @return void
*/
public function handle()
{
// fully qualified so this refers to \MyApp\Crawl, not this job class
$crawl = new \MyApp\Crawl();
$crawl->start();
}
}
app/Crawl.php
<?php
namespace MyApp;
class Crawl
{
public function start()
{
ini_set('memory_limit','256M');
set_time_limit(3600);
echo "Started.";
sleep(30);
echo "Exited.";
exit();
}
}
worker.log
[2018-03-21 10:14:27][1] Processing: MyApp\Jobs\Crawl
Started.
Exited.
[2018-03-21 10:15:59][1] Processing: MyApp\Jobs\Crawl
[2018-03-21 10:15:59][1] Failed: MyApp\Jobs\Crawl
From Horizon's failed job detail
Failed At 18-03-21 10:15:59
Error Illuminate\Queue\MaxAttemptsExceededException:
MyApp\Jobs\Crawl has been attempted too many
times or run too long. The job may have previously
timed out. in /home/vagrant/app/vendor/laravel
/framework/src/Illuminate/Queue/Worker.php:396
laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/vagrant/myapp/artisan queue:work --sleep=3 --tries=1 --timeout=3600
autostart=true
autorestart=true
user=vagrant
numprocs=1
redirect_stderr=true
stdout_logfile=/home/vagrant/myapp/worker.log
config/queue.php
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'default',
'retry_after' => 90,
'block_for' => null,
],
.env
QUEUE_DRIVER=redis
Synopsis
Looking at my worker.log I can see that the output from my class has worked:
Started.
Exited.
But the job is reported as failed. Why?
Strangely, also in the worker.log, it says Processing twice for one job:
[2018-03-21 10:15:59][1] Processing: MyApp\Jobs\Crawl
[2018-03-21 10:15:59][1] Failed: MyApp\Jobs\Crawl
Any help is greatly appreciated!
UPDATE
Removing the exit() has resolved the issue - this is strange as the PHP manual says that you can use exit() to exit the program "normally":
https://secure.php.net/manual/en/function.exit.php
<?php
//exit program normally
exit;
exit();
exit(0);
Removing the exit() has resolved the issue - this is strange as the PHP manual says that you can use exit() to exit the program "normally"
This is true for regular programs, but a queued job in Laravel doesn't follow the same lifecycle.
When the queue system processes a job, that job executes in an existing queue worker process. Specifically, the queue worker fetches the job data from the backend and then calls the job's handle() method. When that method returns, the queue worker runs some code to finalize the job.
If we exit from a job—by calling exit(), die(), or by triggering a fatal error—PHP stops the worker process running the job as well, so the queue system never finishes the job lifecycle, and the job is never marked "complete."
We don't need to explicitly exit from a job. If we want to finish the job early, we can simply return from the handle() method:
public function handle()
{
// ...some code...
if ($exitEarly) {
return;
}
// ...more code...
}
Laravel also includes the InteractsWithQueue trait, which provides an API that lets a job manage itself. In this case, we can call the delete() method from a job that uses this trait:
public function handle()
{
if ($exitEarly) {
$this->delete();
}
}
But the job is reported as failed. Why? Strangely, also in the worker.log, it says Processing twice for one job
As described above, the job could not finish successfully because we called exit(), so the queue system dutifully attempted to retry the job.
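For illustration, here is one way the Crawl class from the question could be reworked to avoid echo and exit(), logging instead and simply returning so the worker can mark the job as complete:
<?php

namespace MyApp;

use Illuminate\Support\Facades\Log;

class Crawl
{
    public function start()
    {
        ini_set('memory_limit', '256M');
        set_time_limit(3600);

        Log::info('Crawl started.');

        sleep(30);

        Log::info('Crawl exited.');

        // no exit() here: returning lets the queue worker finish the job lifecycle
    }
}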
I have a queue that sends requests to a remote service. Sometimes this service undergoes maintenance. I want all queued tasks to pause and retry in 10 minutes when that situation is encountered. How do I implement that?
You can use the Queue::looping() event listener to pause an entire queue or connection (not just an individual job class). Unlike other methods, this will not put each job in a cycle of pop/requeue while the queue is paused, meaning the number of attempts will not increase.
Here's what the docs say:
Using the looping method on the Queue facade, you may specify
callbacks that execute before the worker attempts to fetch a job from
a queue.
https://laravel.com/docs/5.8/queues#job-events
What this doesn't document very well is that if the callback returns false then the worker will not fetch another job. For example, this will prevent the default queue from running:
Queue::looping(function (\Illuminate\Queue\Events\Looping $event) {
// $event->connectionName (e.g. "database")
// $event->queue (e.g. "default")
if ($event->queue == 'default') {
return false;
}
});
Note: the queue property of the event contains the value given on the command line when the worker process was started, so if your worker is watching more than one queue (e.g. artisan queue:work --queue=high,default), the value of queue in the event will be 'high,default'. As a precaution, you may instead want to explode the string on commas and check whether default is in the list.
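A small variation on the listener above shows that check:
Queue::looping(function (\Illuminate\Queue\Events\Looping $event) {
    // $event->queue may be a comma-separated list such as "high,default"
    if (in_array('default', explode(',', $event->queue))) {
        return false;
    }
});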
So for example, if you want to create a rudimentary circuit breaker to pause the mail queue when your mail service returns a maintenance error, then you can register a listener like this in your EventServiceProvider.php:
/**
* Register any events for your application.
*
* @return void
*/
public function boot()
{
parent::boot();
Queue::looping(function (\Illuminate\Queue\Events\Looping $event) {
if (($event->queue == 'mail') && (cache()->get('mail-queue-paused'))) {
return false;
}
});
}
This assumes you have a mechanism somewhere else in your application that detects the appropriate situation; in this example, that mechanism would need to assign a value to the mail-queue-paused key in the shared cache (because that's what my code checks for). There are much more robust solutions, but setting a specific well-known key in the cache (and expiring it automatically) is simple and achieves the desired effect.
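For instance, the detection code, wherever it lives, might flag the outage like this and let the cache expiry lift the pause automatically (the 10-minute window is just an example):
// pause the mail queue; the pause lifts itself when the key expires
cache()->put('mail-queue-paused', true, now()->addMinutes(10));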
<?php
namespace App\Jobs;
use ...
class SendRequest implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
const REMOTE_SERVER_UNAVAILABLE = 'remote_server_unavailable';
private $msg;
private $retryAfter;
public function __construct($msg)
{
$this->msg = $msg;
$this->retryAfter = 10;
}
/**
* Execute the job.
*
* @return void
*/
public function handle(){
try {
// if a previous attempt hit a RemoteServerException (the flag below is still
// cached), redispatch a delayed copy of the job and return without sending.
if(Cache::get(self::REMOTE_SERVER_UNAVAILABLE)) {
self::dispatch($this->msg)->delay(Carbon::now()->addMinutes($this->retryAfter));
return;
}
// send request to remote server
// ...
} catch (RemoteServerException $e) {
// set a cache flag that expires after 10 minutes, if it does not already exist.
Cache::add(self::REMOTE_SERVER_UNAVAILABLE, '1', Carbon::now()->addMinutes($this->retryAfter));
// the remote service is under maintenance, so redispatch a new delayed job.
self::dispatch($this->msg)->delay(Carbon::now()->addMinutes($this->retryAfter));
}
}
}
I am trying to dispatch jobs in Laravel into redis. If I do
Queue::push('LogMessage', array('message' => 'Time: '.time()));
then a job is placed into the queues:default key in redis. (note that I do not yet have the listener running; I'm just trying to show that the queue is being used.) However, if I do
$this->dispatch(new AudienceMetaReportJob(1));
then nothing is added to redis and the job executes immediately.
The Job definition:
<?php
use App\Jobs\Job;
use Illuminate\Queue\InteractsWithQueue;
/**
* Simple Job used to build an Audience Report. Its purpose is to
* allow code to dispatch the command to build the report.
*/
class AudienceMetaReportJob extends Job {
use InteractsWithQueue;
protected $audience_id;
/**
* @param $audience_id
*/
public function __construct($audience_id){
$this->audience_id = $audience_id;
}
/**
* Execute the job.
*
* @return void
*/
public function handle(){
$audience_meta = new AudienceMeta();
$audience_meta->reportable = Audience::findOrFail($this->audience_id);
$audience_meta->clear()
->build()
->save();
}
}
Notes:
The Audience job appears to complete properly, but it does so as if the QUEUE_DRIVER env variable were set to sync.
Redis has been configured properly: I can manually use the Queue::push() function and can read from Redis.
I have set the QUEUE_DRIVER env variable to redis, and the problem persists even if I hardcode redis in the config/queue.php file: 'default' => 'redis',
I do not have php artisan queue:listen running.
What other configuration am I missing? Why does the Queue class work with redis, while the dispatch function does not?
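For reference, this is how I check from tinker whether anything actually reaches Redis (assuming the default queues:default key mentioned above):
use Illuminate\Support\Facades\Redis;

// number of jobs currently sitting in the default Redis queue
Redis::llen('queues:default');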