I'm using supervisor to run Laravel jobs, and I had a long-running job that was timing out, so I changed the retry_after value from 90 to 3600 in config/queue.php.
It seems like now when I run php artisan queue:listen locally, it waits 3600 (seconds?) before it attempts to run the job.
I just pushed some changes out to the prod environment and it seems to be doing the same thing there. Wondering if someone can give some insight on how to set up my jobs so they run quickly, but also don't time out.
Here's my job definition in my supervisor conf file:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/my_project/artisan queue:work database --sleep=3 --tries=3 --timeout=2400
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=ec2-user
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/html/my_project/storage/logs/supervisor.log
stopwaitsecs=3600
startsecs=0
And in my config/queue.php file I have:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 3600,
],
The documentation is not clear to me on exactly what all the various flags do.
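For reference, the documented rule of thumb is that retry_after must be comfortably larger than the worker's --timeout; otherwise a job can be released and handed to a second worker while the first one is still running it. A minimal consistent pairing, reusing the paths and values from this setup:

; supervisor: the worker kills a job after 2400s
command=php /var/www/html/my_project/artisan queue:work database --sleep=3 --tries=3 --timeout=2400

// config/queue.php: only re-dispatch a job after it has been reserved for 3600s (> 2400)
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 3600,
],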
Related
I have a web application that runs a job to convert videos into HLS using the aminyazdanpanah/php-ffmpeg-video-streaming package. However, after about 2 minutes the job fails and throws the error:
Symfony\Component\Process\Exception\ProcessTimedOutException: The process '/usr/bin/ffmpeg -y -i...'
exceeded the timeout of 300 seconds. in /var/www/vendor/symfony/process/Process.php:1206
The Laravel job has its timeout set to 7200s.
My supervisor setup also specifies a timeout of 7200s:
[program:app_worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --tries=1 --timeout=7200 --memory=2000
autostart=true
autorestart=true
I have also set my PHP max_execution_time to 7200 in the ini file.
In the job handle() function I also call set_time_limit(7200); to set the time limit.
I have restarted the queue worker and cleared my cache but that doesn't seem to solve the issue.
It seems Symfony just ignores the timeout specification from Laravel.
I noticed that it failed after about 2 minutes because in my config/queue.php file redis retry_after was set to 90.
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
    'after_commit' => false,
],
I increased that to 3600, so the job stopped failing after 2 minutes, but it kept failing after 300s.
I later traced down the timeout to be coming from aminyazdanpanah/php-ffmpeg-video-streaming FFmpeg::create().
By default, the function sets a timeout of 300s, so I had to pass a config to the function to increase the timeout:
use Streaming\FFMpeg;

$ffmpeg = FFMpeg::create([
    'timeout' => 3600,
]);
And this solved the timeout issue.
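To summarize, each layer has its own timeout here: retry_after on the queue connection, --timeout on the worker, and the package's own process timeout. A hedged sketch of how the pieces fit together in a job class (the class name is hypothetical; the 'timeout' key is the package config described above, and public $timeout is Laravel's per-job limit):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Streaming\FFMpeg;

class ConvertVideoToHls implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Laravel worker-side limit; keep it below the connection's retry_after.
    public $timeout = 7200;

    public function handle()
    {
        // Package-side limit; overrides the 300s default it passes to Symfony Process.
        $ffmpeg = FFMpeg::create([
            'timeout' => 7200,
        ]);

        // ... open the source video and export HLS as in the package docs ...
    }
}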
Hi, below is the supervisor config file for one of my projects:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /livesites/siteA.example.com/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/livesites/siteA.example.com/storage/logs/worker.log
It's running fine. I have another project (siteB.example.com) with redis as the QUEUE_CONNECTION in its .env. What should the config file for that look like? Will there be any issue running two projects' queues on the same server?
First, if the two projects are on different connections (Redis and Database), there shouldn't be any problem.
But if the connections are the same (both on Database or both on Redis), one solution is to use a different queue for each project.
For example, in the siteA project push your jobs onto a siteA queue and in the siteB project push your jobs onto a siteB queue. Then create two separate supervisor config files and in each of them put --queue=siteA or --queue=siteB in the artisan command arguments.
siteA.conf:
command=php /livesites/siteA.example.com/artisan queue:work database --queue=siteA --sleep=3 --tries=3
siteB.conf:
command=php /livesites/siteB.example.com/artisan queue:work database --queue=siteB --sleep=3 --tries=3
And finally, in your Laravel code, dispatch each job to the appropriate queue. In the siteA project:
dispatch((new Job)->onQueue('siteA'));
and in the siteB project:
dispatch((new Job)->onQueue('siteB'));
Or you can globally change the default queue for each project in config/queue.php, as below:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'siteA', // or 'siteB'
    'retry_after' => 90,
],
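A variant of the same idea (my own suggestion, not part of the answer above): read the queue name from an environment variable so the identical config/queue.php can ship to both projects. QUEUE_NAME is a hypothetical key you would set per project in .env:

'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => env('QUEUE_NAME', 'default'), // .env: QUEUE_NAME=siteA or QUEUE_NAME=siteB
    'retry_after' => 90,
],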
I need your help.
I'm working with the Laravel queue and the Linux supervisor tool (exactly like in the documentation).
Now I have a very weird issue.
When I use this command without the delay option:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data));
It works fine.
But when I use the delay option:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->delay(60);
the job fails and does not continue anymore.
I can see the job in my failed_jobs table.
Now... When I'm not working with the supervisor tool and just run the command in my terminal:
php artisan queue:listen
The job with the delay option, and other queue tasks, work fine.
This is what my laravel-worker config looks like:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/Poptin/artisan queue:work database --sleep=3 --tries=3 --daemon
autostart=true
autorestart=true
user=ubuntu
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/<project>/worker.log
What do you think I need to do in order to fix it?
Also... how can I use a different queue for a different job? Something like this:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->onQueue('autoresponder')->delay(60);
Currently I have only the default queue. Where do I declare other queues in my config/queue.php file?
'connections' => [

    'sync' => [
        'driver' => 'sync',
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'expire' => 60,
    ],
And this is how I dispatch the job:
$job = (new SendAutoresponderEmail($poptin,$autoresponder,$data))->delay(60);
$this->dispatch($job);
So... eventually I solved the issue by creating a new supervisor worker on a different connection and queue, like this:
[program:autoresponder-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/<Project>/artisan queue:listen autoresponder --sleep=5 --tries=3
autostart=true
autorestart=true
user=ubuntu
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/html/<Project>/worker.log
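With that worker in place, a job only reaches it when it is dispatched onto the autoresponder queue, using the same dispatch call as above:

$job = (new SendAutoresponderEmail($poptin, $autoresponder, $data))
    ->onQueue('autoresponder')
    ->delay(60);
$this->dispatch($job);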
We're currently running two Laravel applications on the same dedicated server. Each application utilizes Laravel's queueing system for background jobs and notifications. Each uses redis for the driver. Neither defines any specific queues; they are both using the default. Our supervisor .conf is as follows:
[program:site-one-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/siteone.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/siteone.com/storage/logs/worker.log
[program:site-two-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/sitetwo.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/sitetwo.com/storage/logs/worker.log
Everything worked as expected before adding the configuration for the second site. After adding it, we noticed during testing that when an event on sitetwo.com triggered a notification to be queued, the email addresses which should have received the notification did not, and it was instead sent to two email addresses that only exist within the database for siteone.com!
Everything seems to function as expected as long as only one of the above supervisor jobs is running.
Is there somehow a conflict between the two different applications using the same queue name for processing? Did I botch the supervisor config? Is there something else that I'm missing here?
The class name is all Laravel cares about when reading the queue. So if you have two sites dispatching the job App\Jobs\CoolEmailSender, then whichever application picks it up first is going to process it, regardless of which one dispatched it.
I can think of two things here: multiple Redis instances, or unique queue names passed to --queue.
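For the first option, a cheaper approximation (my hedged suggestion, not the answerer's) than running two Redis server instances is pointing each app at a different Redis database index; recent Laravel versions read it from REDIS_DB in config/database.php:

# siteone.com .env
REDIS_DB=0

# sitetwo.com .env
REDIS_DB=1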
I just changed APP_ENV and APP_NAME in the .env file and it worked for me.
For example:
First .env: APP_ENV=local APP_NAME=localapp
Second .env: APP_ENV=staging APP_NAME=stagingapp
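The likely reason this works: recent Laravel versions derive the default Redis key prefix from APP_NAME in config/database.php, so apps with different names stop sharing queue keys. The stock option looks roughly like this:

'redis' => [
    'options' => [
        // distinct APP_NAME => distinct key prefix => no shared queue keys
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
],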
Maybe late, but did you try modifying the queue config in config/queue.php?
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'project1', // 'project2'...
    'retry_after' => 90,
    'block_for' => null,
],
Then run the queue with --queue=project1.
Note: this answer is for those who have multiple domains, multiple apps, and one database.
You can dispatch & listen to your job from multiple servers by specifying the queue name.
App 1
dispatch((new Job($payload))->onQueue('app1'));
php artisan queue:listen --queue=app1
App 2
dispatch((new Job($payload))->onQueue('app2'));
php artisan queue:listen --queue=app2
I have two Laravel 5.1 applications that use beanstalkd and supervisord to manage queue jobs.
The supervisord.conf file has the two programs defined as:
[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite
process_name=%(program_name)s_%(process_num)02d
directory=/sciere/sites/diagbovespa.aceite.pro.br
numprocs=2
user=apache
redirect_stderr=true
autostart=true
autorestart=true
stdout_logfile=/sciere/sites/diagbovespa.aceite.pro.br/storage/logs/queue_supervisord.log
[program:questionarioise-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite
process_name=%(program_name)s_%(process_num)02d
directory=/sciere/sites/questionarioise.aceite.pro.br
numprocs=2
user=apache
redirect_stderr=true
autostart=true
autorestart=true
stdout_logfile=/sciere/sites/questionarioise.aceite.pro.br/storage/logs/queue_supervisord.log
The queue.php file for the diagbovespa application has beanstalkd defined as:
'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'default',
    'ttr' => 60,
],
and the queue.php for the questionarioise application has beanstalkd defined as:
'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'questionarioise',
    'ttr' => 60,
],
So beanstalkd has two queues (tubes): default and questionarioise.
The problem is that when I send an email via the Laravel default queue (program:diagbovespa-default-queue), sometimes I receive the email from diagbovespa and sometimes from questionarioise.
What am I missing in the supervisord and/or beanstalkd configuration?
Your queue workers don't have a queue name specified, so they'll pick up any jobs with any queue label.
In your configs you have 'queue' => 'default' and 'queue' => 'questionarioise'. You need to update your queue workers so each listens for and handles only its own jobs:
[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=default
And:
[program:questionarioise-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=questionarioise
Though I'd suggest changing the first queue name from default to something more specific like diagbovespa, and using that in supervisord as well.
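A sketch of that suggestion (values illustrative):

// diagbovespa config/queue.php
'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'diagbovespa',
    'ttr' => 60,
],

; supervisord.conf
[program:diagbovespa-default-queue]
command=php artisan queue:listen --tries=2 --env=aceite --queue=diagbovespa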