I want to push messages to multiple SQS queues, either in parallel or one after another, but it should be dynamic: when I start the worker it should fetch messages from both queues and differentiate between them.
How can I achieve this in Lumen?
UPDATE
How can I use multiple workers for different queues with different Amazon SQS instances?
As far as I can see, Lumen and Laravel use the exact same code to handle queues, so here's something that might work, though I haven't tested it.
Run the queue worker as:
php artisan queue:work --queue=queue1,queue2
This means that jobs in queue1 are processed before jobs in queue2 (unfortunately, this is the only way to listen to multiple queues with a single worker).
Then in your job:
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // Check which queue this particular job instance came from
        if ($this->job->getQueue() === 'queue1') {
            // Things
        } else {
            // Different things
        }
    }
}
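For completeness, here's a sketch of pushing the same job onto either queue when dispatching, so the worker above has something to differentiate:

dispatch(new MyJob)->onQueue('queue1'); // handled by the first branch
dispatch(new MyJob)->onQueue('queue2'); // handled by the second branch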
If you need to use multiple connections, you can't do that with a single worker; however, you can run multiple workers at a time. First configure your connections, e.g. in your config/queue.php:
'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => 'your-public-key',
        'secret' => 'your-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-account-id',
        'queue' => 'your-queue-name',
        'region' => 'us-east-1',
    ],
    'sqs2' => [
        'driver' => 'sqs',
        'key' => 'your-other-public-key',
        'secret' => 'your-other-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-other-account-id',
        'queue' => 'your-other-queue-name',
        'region' => 'us-east-1',
    ],
],
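With both connections defined, you can point a job at either one when dispatching. A minimal sketch, reusing the MyJob class from above:

dispatch(new MyJob)->onConnection('sqs')->onQueue('queue1');
dispatch(new MyJob)->onConnection('sqs2')->onQueue('queue2');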
If you're using supervisor, then set up your supervisor configuration; if not, you'll have to start both workers manually. Here's a supervisor configuration you can use:
[program:laravel-sqs-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --queue=queue1
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
[program:laravel-sqs2-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs2 --queue=queue2
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker-sqs2.log
Alter the paths and user settings according to your app.
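After adding the config, reload supervisor so it picks up both programs (standard supervisorctl usage):

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-sqs-worker:* laravel-sqs2-worker:*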
Related
I'm using supervisor to run Laravel jobs, and I had a long-running job that was timing out, so I changed the retry_after value from 90 to 3600 in config/queue.php.
It seems like now when I run php artisan queue:listen locally, it waits for 3600 (seconds?) before it attempts to run the job.
I just pushed some changes out to the prod environment and it seems to be doing the same thing there. Can someone give some insight into how to set up my jobs so they run quickly but also don't time out?
Here's my job definition in my supervisor conf file:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/my_project/artisan queue:work database --sleep=3 --tries=3 --timeout=2400
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=ec2-user
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/html/my_project/storage/logs/supervisor.log
stopwaitsecs=3600
startsecs=0
And in my config/queue.php file I have:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 3600,
],
To me the documentation is not clear on exactly what all the various flags do.
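For reference, here's how the two settings interact (this explanation comes from the Laravel docs, not the original post): retry_after on the connection is how long a reserved job may run before the queue releases it for another attempt, while the worker's --timeout flag is a hard cap on a single job's runtime. --timeout should always be at least several seconds shorter than retry_after, which the setup above satisfies:

php artisan queue:work database --sleep=3 --tries=3 --timeout=2400
# kills a stuck job after 2400s, safely before retry_after (3600s) re-releases it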
Hi, below is the supervisor config file for one of my projects:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /livesites/siteA.example.com/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/livesites/siteA.example.com/storage/logs/worker.log
It's running fine. I have another project (siteB.example.com) with redis as the QUEUE_CONNECTION in its .env. What should the config file for that look like? Will there be any issue running two projects' queues on the same server?
First, if the two projects are on different connections (Redis and Database), there shouldn't be any problem.
But if the connections are the same (both on Database or both on Redis), one solution is to use a different queue for each project.
For example, in the siteA project push your jobs onto a siteA queue and in the siteB project push your jobs onto a siteB queue. Then create two separate supervisor config files and in each of them put --queue=siteA or --queue=siteB in the artisan command arguments.
siteA.conf:
command=php /livesites/siteA.example.com/artisan queue:work database --queue=siteA --sleep=3 --tries=3
siteB.conf:
command=php /livesites/siteB.example.com/artisan queue:work database --queue=siteB --sleep=3 --tries=3
and finally, in your Laravel code, dispatch each job to the appropriate queue. In the siteA project:
dispatch((new Job)->onQueue('siteA'));
and in the siteB project:
dispatch((new Job)->onQueue('siteB'));
Or you can globally change the default queue for each project in config/queue.php as below:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'siteA', // or 'siteB'
    'retry_after' => 90,
],
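With the default queue changed per project, a plain dispatch lands on that project's queue without an explicit onQueue() call:

dispatch(new Job); // goes to the 'siteA' queue in the siteA project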
I have an application running many different kinds of jobs that take anywhere from several seconds to an hour. This requires me to have two separate queue connections, because retry_after is bound to a connection instead of a queue (even though both use Redis, which is annoying, but not the issue right now). One connection has a retry_after of 600 seconds, the other 3600 seconds (one hour). My jobs implement ShouldQueue and use the traits Queueable, SerializesModels, Dispatchable and InteractsWithQueue.
Now for the problem. I created a TestJob that sleeps for 900 seconds to make sure we pass the 600-second retry_after limit. When I try to dispatch a job on a specific connection using:
dispatch(new Somejob)->onConnection('redis-long-run')
or as I used to do it previously in the constructor of the job (which always used to work):
public function __construct() {
$this->onConnection('redis-long-run');
}
The job gets picked up by the queue worker and runs for 600 seconds, after which the worker retries the job, notices it has already run once, and fails it. 300 seconds later, the original job finishes successfully. If my workers allow for more than one try, the duplicate jobs run in parallel for 300 seconds.
In my test job I'm also printing out $this->connection, which does show the correct connection being used, so my guess is that the connection is simply being ignored downstream.
I'm using Laravel 5.8.35 and PHP 7.3 in a Docker environment. Supervisor handles my workers.
Edit: I've confirmed the behavior persists after upgrading to Laravel v6.5.1
Steps To Reproduce:
Set your queue driver to Redis.
Create two different Redis connections in queue.php: one named redis with a retry_after of 600, the other named redis-long-run with a retry_after of 3600. In my case they also have different queues, though I'm not sure that's required for this test.
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 600,
        'block_for' => null,
    ],
    'redis-long-run' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'long_run',
        'retry_after' => 3600,
        'block_for' => null,
    ],
],
Create a little command to dispatch our test job three times
<?php

namespace App\Console\Commands;

use App\Jobs\TestFifteen;
use Illuminate\Console\Command;

class TestCommand extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'test:fifteen';

    /**
     * Create a new command instance.
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return void
     */
    public function handle()
    {
        for ($i = 1; $i <= 3; $i++) {
            dispatch(new TestFifteen($i))->onConnection('redis-long-run');
        }
    }
}
Create the test job
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use function sleep;
use function var_dump;

class TestFifteen implements ShouldQueue
{
    use Queueable, SerializesModels, Dispatchable, InteractsWithQueue;

    private $testNumber;

    public function __construct($testNumber)
    {
        $this->onConnection('redis-long-run');
        $this->testNumber = $testNumber;
    }

    public function handle()
    {
        var_dump("Started test job {$this->testNumber} on connection {$this->connection}");
        sleep(900);
        var_dump("Finished test job {$this->testNumber} on connection {$this->connection}");
    }
}
Run your queue workers. I use supervisor with the following config for these workers:
[program:laravel-queue-default]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=default --tries=3 --timeout=600
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:laravel-queue-long-run]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=long_run --tries=1 --timeout=3540
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Execute the artisan command php artisan test:fifteen
So am I doing something wrong, or is the applied connection really not respected?
Also, what's the design philosophy behind not being able to decide on a per-job or per-queue basis what the retry_after should be? Why can't I pick Redis as my queue driver and decide that queue-1 retries after 60 seconds and queue-2 retries after 120 seconds? It feels so unnatural having to set up two connections for this when they use exactly the same Redis instance.
Anyway, here's hoping someone can shine some light on this issue. Thank you in advance.
From my understanding your connection is always redis, so you should specify the queue instead when dispatching:
dispatch(new TestFifteen($i))->onQueue('long_run');
A connection refers to a driver (redis, sync, SQS, etc.), while a queue is a named stack of jobs within that connection.
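A minimal sketch, if you want to be explicit about both when dispatching:

dispatch(new TestFifteen($i))
    ->onConnection('redis-long-run')
    ->onQueue('long_run');

Note also that your long-run worker is started with queue:work redis --queue=long_run, so it reads retry_after from the redis connection (600 seconds), not from redis-long-run; starting it with queue:work redis-long-run --queue=long_run instead should apply the 3600-second value.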
I need your help.
I'm working with Laravel queues and the Linux supervisor tool (exactly as in the documentation).
Now I have a very weird issue.
When I use this command without a delay:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data));
It's working fine.
But when I use the delay option:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->delay(60);
the job fails and does not continue anymore.
I can see the job in my failed_jobs table.
Now... When I'm not working with the supervisor tool and just run the command in my terminal:
php artisan queue:listen
the command with the delay option and other queue tasks work fine.
This is what my laravel-worker config looks like:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/Poptin/artisan queue:work database --sleep=3 --tries=3 --daemon
autostart=true
autorestart=true
user=ubuntu
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/<project>/worker.log
What do you think I need to do to fix it?
Also... how can I use a different queue for a different job, like this?
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->onQueue('autoresponder')->delay(60);
Currently, I have only the default queue. Where do I declare other queues in my config/queue.php file?
'connections' => [

    'sync' => [
        'driver' => 'sync',
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'expire' => 60,
    ],
You don't need to declare additional queues in config/queue.php; the queue key there only sets the default name. Dispatch your job:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->delay(60);
$this->dispatch($job);
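If you dispatch to a named queue, e.g. with ->onQueue('autoresponder'), remember to start a worker that listens on that queue, otherwise the job will sit there unprocessed:

php artisan queue:work database --queue=autoresponder,default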
So... eventually I solved the issue by creating a new supervisor worker on a different connection and queue, like this:
[program:autoresponder-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/<Project>/artisan queue:listen autoresponder --sleep=5 --tries=3
autostart=true
autorestart=true
user=ubuntu
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/html/<Project>/worker.log
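One caveat (this is standard artisan behavior, not something stated above): the first argument to queue:listen is a connection name, so this config assumes an autoresponder connection exists in config/queue.php. If the intent is an autoresponder queue on the existing database connection, the command would instead be:

command=php /var/www/html/<Project>/artisan queue:listen database --queue=autoresponder --sleep=5 --tries=3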
I have two Laravel 5.1 applications that use beanstalkd and supervisord to manage queue jobs.
The supervisord.conf file has the two programs defined as:
[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite
process_name=%(program_name)s_%(process_num)02d
directory=/sciere/sites/diagbovespa.aceite.pro.br
numprocs=2
user=apache
redirect_stderr=true
autostart=true
autorestart=true
stdout_logfile=/sciere/sites/diagbovespa.aceite.pro.br/storage/logs/queue_supervisord.log
[program:questionarioise-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite
process_name=%(program_name)s_%(process_num)02d
directory=/sciere/sites/questionarioise.aceite.pro.br
numprocs=2
user=apache
redirect_stderr=true
autostart=true
autorestart=true
stdout_logfile=/sciere/sites/questionarioise.aceite.pro.br/storage/logs/queue_supervisord.log
The queue.php file for the diagbovespa application has beanstalkd defined as:
'beanstalkd' => [
'driver' => 'beanstalkd',
'host' => 'localhost',
'queue' => 'default',
'ttr' => 60,
],
and the queue.php for the questionarioise application has beanstalkd defined as:
'beanstalkd' => [
'driver' => 'beanstalkd',
'host' => 'localhost',
'queue' => 'questionarioise',
'ttr' => 60,
],
So beanstalkd has two queues defined: default and questionarioise.
The problem is that when I send an email via the Laravel default queue (program:diagbovespa-default-queue), sometimes I receive the email from diagbovespa, sometimes from questionarioise.
What am I missing in the supervisord and/or beanstalkd configuration?
Your queue workers don't have a queue name specified, so they'll pick up jobs with any queue label.
In your configs you have 'queue' => 'default' and 'queue' => 'questionarioise'. You need to update your queue workers so that each listens for and handles only its own jobs:
[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=default
And:
[program:questionarioise-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=questionarioise
Though I'd suggest changing the first queue name from default to something more specific, like diagbovespa, and using that in supervisord as well.
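For example (assuming you also change 'queue' => 'diagbovespa' in that project's queue.php so dispatched jobs land on the renamed queue):

'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'diagbovespa',
    'ttr' => 60,
],

[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=diagbovespa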