I am having some trouble with a custom Laravel queue connection/queue. This particular connection/queue is used for jobs that may run anywhere from 5 minutes to 10 hours (large data aggregations and data rebuilds).
I have a Supervisor conf defined as:
[program:laravel-worker-extended]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --queue=refreshQueue,rebuildQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
autostart=true
autorestart=true
user=root
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/storage/logs/queue-worker.log
I have a queue connection defined as:
'refreshQueue' => [
'driver' => 'database',
'table' => 'jobs',
'queue' => 'refreshQueue',
'retry_after' => 420, // Retry after 7 minutes
],
I’m adding a job to the queue with a Command via:
AggregateData::dispatch()->onConnection('refreshQueue')->onQueue('refreshQueue');
When DatabaseQueue is constructed, retryAfter is 420 as defined. However, here are my job logs:
[2020-01-22 18:25:37] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:25:37] local.INFO: Aggregating data
[2020-01-22 18:27:08] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:27:08] local.ALERT: AGGREGATION FAILED: Aggregation in progress
Why does it continue to retry after 90 seconds when I explicitly tell it to retry after 420?
I’ve rebuilt my container, restarted the queue, and done about everything else I can to debug... and then, after waiting a while, I get this final log output:
[2020-01-22 18:25:37] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:25:37] local.INFO: Aggregating data
[2020-01-22 18:27:08] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:27:08] local.ALERT: AGGREGATION FAILED: Aggregation in progress
[2020-01-22 18:33:04] local.INFO: [COMPLETE] Aggregating data
[2020-01-22 18:33:04] local.INFO: Queue job finishedIlluminate\Queue\CallQueuedHandler#call
I can't quite grasp why the queue continues to retry the job after 90 seconds. Am I doing something wrong here?
Editing for some additional context here:
The job sets an in_progress flag when it begins, so that it cannot run twice at exactly the same time. The logs can be interpreted as:
BEGINNING AGGREGATION: First line in the handle() method of the job
AGGREGATION FAILED: Aggregation in progress: The failed() method of the job handles failures via exception. This line shows that the queue attempted the job again and encountered the flag already set to 1, meaning another run was currently processing. The flag is reset to 0 when the job completes or a different exception (not 'in-progress') is encountered.
Queue job finishedIlluminate\Queue\CallQueuedHandler#call is additional debug output I added in a service provider that listens for queue job-completed events.
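For reference, here is a minimal sketch of roughly what that guard looks like. This is not the actual job; the table and column names (aggregation_status, in_progress) are assumptions for illustration only:

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

class AggregateData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function handle()
    {
        Log::info('BEGINNING AGGREGATION');

        // Guard: bail out if another run already holds the flag.
        if (DB::table('aggregation_status')->value('in_progress')) {
            throw new Exception('Aggregation in progress');
        }

        DB::table('aggregation_status')->update(['in_progress' => 1]);

        Log::info('Aggregating data');
        // ... long-running aggregation work (minutes to hours) ...

        DB::table('aggregation_status')->update(['in_progress' => 0]);
        Log::info('[COMPLETE] Aggregating data');
    }

    public function failed(Exception $e)
    {
        Log::alert('AGGREGATION FAILED: ' . $e->getMessage());

        // Reset the flag unless the failure was the guard itself.
        if ($e->getMessage() !== 'Aggregation in progress') {
            DB::table('aggregation_status')->update(['in_progress' => 0]);
        }
    }
}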
This might have something to do with the timeout you're using. From the docs:
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
I've figured out the issue here. In queue.php I was defining a connection named refreshQueue. However, in my Supervisor conf I was using:
command=php /var/www/artisan queue:work --queue=refreshQueue,rebuildQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
as the command (note the --queue option), when the command should have been:
command=php /var/www/artisan queue:work refreshQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
Note the lack of --queue. The connection has the retry_after defined, not the queue itself.
This is a valuable lesson in the difference between connections and queues.
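To make that distinction concrete, here is a hedged sketch of how the pieces relate, based on the config and commands above (exact file layout may differ):

// config/queue.php -- retry_after lives on the CONNECTION, not on an individual queue.
'connections' => [
    'refreshQueue' => [
        'driver'      => 'database',
        'table'       => 'jobs',
        'queue'       => 'refreshQueue', // default queue name for this connection
        'retry_after' => 420,            // honored only when a worker runs on this connection
    ],
],

// Worker command: the first argument selects the CONNECTION, --queue selects queues on it.
//   php artisan queue:work refreshQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
//
// Running `queue:work --queue=refreshQueue` (no connection argument) uses the default
// connection and its retry_after (typically 90), which is why the job was retried early.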
Related
How can you retry all failed jobs in Laravel Horizon? There appears to be no "Retry All" button and the artisan command doesn't work as the failed jobs aren't stored in a table.
The queue:retry command accepts all in place of an individual job ID:
php artisan queue:retry all
This will push all of the failed jobs back onto your Redis queue for retry:
The failed job [44] has been pushed back onto the queue!
The failed job [43] has been pushed back onto the queue!
...
If you didn't create the failed jobs table according to the installation guide with:
php artisan queue:failed-table
php artisan migrate
Then you may be up a creek. Maybe try interacting with Redis manually and accessing the list of failed jobs directly (assuming the failed job entries haven't been wiped; it looks like they persist in Redis for a week by default, based on the settings in config/horizon.php).
as the failed jobs aren't stored in a table
Actually, you should create that table. From the Laravel Horizon documentation:
You should also create the failed_jobs table which Laravel will use to
store any failed queue jobs:
php artisan queue:failed-table
php artisan migrate
Then, to retry failed jobs:
Retrying Failed Jobs
To view all of your failed jobs that have been inserted into your
failed_jobs database table, you may use the queue:failed Artisan
command:
php artisan queue:failed
The queue:failed command will list the job ID, connection, queue,
and failure time. The job ID may be used to retry the failed job. For
instance, to retry a failed job that has an ID of 5, issue the
following command:
php artisan queue:retry 5
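If you prefer to do this from code rather than the console, the same commands can be invoked programmatically. A hedged sketch (the 'id' argument name comes from the queue:retry signature in recent Laravel versions; verify against your own version):

<?php

use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\DB;

// Inspect what is currently in the failed_jobs table.
$failed = DB::table('failed_jobs')->get(['id', 'connection', 'queue', 'failed_at']);

foreach ($failed as $job) {
    // Retry a single failed job by its ID (equivalent to `php artisan queue:retry {id}`).
    Artisan::call('queue:retry', ['id' => [$job->id]]);
}

// Or retry everything at once (equivalent to `php artisan queue:retry all`).
Artisan::call('queue:retry', ['id' => ['all']]);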
I've configured queuing on Laravel 5.4 using the "beanstalkd" queue driver... I deployed it on CentOS 7 (cPanel) and installed Supervisor... but I have two main problems.
In the logs, I found this exception: "local.ERROR: exception 'PDOException' with message 'SQLSTATE[42S02]: Base table or view not found: 1146 Table '{dbname}.failed_jobs' doesn't exist'". So Question #1 is: should I configure any database tables for the "beanstalkd" queue driver? If so, could you please describe the structure of these tables?
I've also configured the queue:work command in the Supervisor config file as follows:
[program:test-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/****/****/artisan queue:work beanstalkd --sleep=3 --tries=3
autostart=true
autorestart=true
user=gcarpet
numprocs=8
redirect_stderr=true
stdout_logfile= /home/*****/*****/storage/logs/supervisor.log
I found that supervisor.log contained multiple calls for the job, even after the first call was "Processed". Question #2: I dispatched the job once, but it was pushed onto the queue several times. I need a solution for this; I don't want the same job to be pushed onto the queue multiple times.
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Failed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Failed: App\Jobs\{JobName}
Please note the time difference between processed and failed jobs. I also set the driver's 'retry_after' to 900 once and to 90 another time, and it didn't seem to make any difference.
Create the table using the migration as documented.
php artisan queue:failed-table
php artisan migrate
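For reference, the migration generated by queue:failed-table creates roughly the following structure (this is from the stock Laravel 5.x migration; column details can vary slightly between versions):

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('failed_jobs', function (Blueprint $table) {
    $table->increments('id');
    $table->text('connection');    // connection the job was dispatched on
    $table->text('queue');         // queue name
    $table->longText('payload');   // serialized job
    $table->longText('exception'); // exception that caused the failure
    $table->timestamp('failed_at')->useCurrent();
});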
The job failed, so it is retried.
This behaviour is specified by the 'tries' option, which your queue worker either receives on the command line
php artisan queue:work --tries=3
...or the tries property of the specific job.
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class Reader implements ShouldQueue
{
    // The maximum number of times the job may be attempted.
    public $tries = 5;
}
You are currently seeing jobs retried 3 times and then failing.
Check your logging output and the failed_jobs table to see what exceptions have been thrown and fix those appropriately.
A job is retried whenever the handle() method throws an exception.
After a couple of retries, the job will fail and the failed() method will be invoked.
Failed jobs will be stored in the failed_jobs table for later reference or manual retrying.
Also note: there is a timeout and a retry_after, which need to be set independently.
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
See Job Expirations & Timeouts.
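Putting those pieces together, a hypothetical job might look like the sketch below (class name and values are assumptions; failed() receives the exception once all attempts are exhausted):

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ImportFeed implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public $tries = 3;      // attempts before the job is marked as failed
    public $timeout = 120;  // keep this below the connection's retry_after

    public function handle()
    {
        // Any uncaught exception here fails the current attempt; the queue
        // will retry the job until $tries is exhausted.
        throw new Exception('Something went wrong');
    }

    public function failed(Exception $e)
    {
        // Invoked after the final attempt; the job is also written to failed_jobs.
    }
}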
I have a strange behavior when a Job runs:
On the dev server (Windows 7, PHP 7.2.10) everything works fine;
on the production server (Linux CentOS, PHP 7.0.10) it throws an exception:
Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out.
config/queue.php
'database' => [
'driver' => 'database',
'table' => 'jobs',
'queue' => 'default',
'retry_after' => 90,
],
This happens after a job is queued... when it starts working... it fails after about 30 seconds.
The exception is recorded in the failed_jobs table.
I thought it could depend on the PHP max_execution_time directive, but when I run
php -r "echo ini_get('max_execution_time') . PHP_EOL;"
it shows me zero (no timeout ... which is correct)
The job is queued this way:
dispatch((new Syncronize($file))->onQueue('sync'));
The Syncronize job has no timeout (it has 1 try) and simply calls two artisan commands, which work perfectly on both the prod and dev servers when called from the shell.
https://pastebin.com/mnaHWq71
To start jobs on the dev server I use:
php artisan queue:work --queue=sync,newsletter,default
On the prod server I use this:
https://pastebin.com/h7uv5gca
Any idea what could be the cause?
Found the problem... it was in my service /etc/init.d/myservice:
cd /var/www/html/
case "$1" in
start)
php artisan queue:work --queue=sync,newsletter,default &
echo $!>/var/run/myservice.pid
echo "server daemon started"
;;
I didn't check whether the process was already running, so I launched it twice.
I saw 2 processes in ps axu, and it seems that this was the cause.
This check solved it:
if [ -e /var/run/myservice.pid ]; then
echo "Service is running. Call stop first".
exit 1
fi
We're currently running two Laravel applications on the same dedicated server. Each application utilizes Laravel's queueing system for background jobs and notifications. Each uses Redis for the driver. Neither defines any specific queues; they both use the default. Our Supervisor .conf is as follows:
[program:site-one-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/siteone.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/siteone.com/storage/logs/worker.log
[program:site-two-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/sitetwo.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/sitetwo.com/storage/logs/worker.log
Everything worked as expected before adding the configuration for the second site. After adding it, we were testing and noticed that when an event on sitetwo.com triggered a notification to be sent to the queue, the email addresses that should have received the notifications did not; instead, the notifications were sent to two email addresses that only exist in the database for siteone.com!
Everything seems to function as expected as long as only one of the above supervisor jobs is running.
Is there somehow a conflict between the two different applications using the same queue name for processing? Did I botch the supervisor config? Is there something else that I'm missing here?
The name of the class is all Laravel cares about when reading the queue. So if you have 2 sites dispatching the job App\Jobs\CoolEmailSender, then whichever application picks it up first is going to process it, regardless of which invoked it.
I can think of 2 things here:
multiple Redis instances, or
unique queue names passed to --queue
A sketch of the first option is below.
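A hedged sketch of the "multiple Redis instances" option: point the second application at its own Redis server (or port) via its .env, so the two apps never share queue keys. The host/port values here are placeholders, and env names follow the Laravel 5.x defaults:

// config/database.php (both apps ship this by default; only the env values differ)
'redis' => [
    'client' => 'predis',

    'default' => [
        'host'     => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port'     => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],

// siteone.com .env:  REDIS_HOST=127.0.0.1  REDIS_PORT=6379
// sitetwo.com .env:  REDIS_HOST=127.0.0.1  REDIS_PORT=6380   (a second Redis instance)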
I just changed APP_ENV and APP_NAME in the .env file and it worked for me.
For example:
First .env: APP_ENV=local APP_NAME=localapp
Second .env: APP_ENV=staging APP_NAME=stagingapp
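This likely works because, in recent Laravel versions, the default config/database.php derives a Redis key prefix from APP_NAME, so each app's queue keys end up namespaced separately. Roughly (check your own config/database.php, since older versions don't ship this prefix):

// config/database.php (the stock file imports Illuminate\Support\Str at the top)
'redis' => [
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        // Keys become e.g. "localapp_database_queues:default" vs "stagingapp_database_queues:default"
        'prefix'  => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
],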
Maybe late, but did you try modifying the queue config at config/queue.php?
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'project1', // 'project2'...
'retry_after' => 90,
'block_for' => null,
],
Then run the queue worker with --queue=project1.
Note: This answer is for those who have multiple domains, multiple apps, and one database.
You can dispatch & listen to your job from multiple servers by specifying the queue name.
App 1
dispatch((new Job($payload))->onQueue('app1'));
php artisan queue:listen --queue=app1
App 2
dispatch((new Job($payload))->onQueue('app2'));
php artisan queue:listen --queue=app2
My stack set-up consists of the following
Machine1 - Main Server (Running laravel)
Machine2 - MySql Server for the laravel codebase
Machine3 - Beanstalkd worker
I have set up Supervisord on Machine1 and added the following queue listener:
[program:queue1]
command=php artisan queue:listen --queue=queue1 --tries=2
...
My Laravel queue config file (app/config/queue.php) reads the following:
'beanstalkd' => array(
'driver' => 'beanstalkd',
'host' => '--- Machine3 IP ---',
'queue' => 'queue1',
'ttr' => 60,
),
And I have installed beanstalkd on Machine3 along with Beanstalk console, and I can see my tasks being pushed to the queue and executing successfully. However, I am not sure whether Machine3 is actually executing them; the reason for my suspicion is the high CPU usage on the main server compared to no spikes in CPU usage on Machine3.
I completely shut down my beanstalkd server to check whether the queue would still process, and the outcome was an error reported by Laravel indicating it could not connect to the beanstalkd server.
I read somewhere that you need to have your Laravel codebase on the beanstalkd server (Machine3) too. Is that really the way to go?
Whichever machine you run queue:listen on is the machine that does the actual processing of the queue.
At the moment all you are doing is storing the queued jobs on Machine3 but processing them on Machine1.
So you need to have Machine3 run the queue:listen command if you want it to process the queue.
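In other words, to have Machine3 do the work, the Laravel codebase (with the same queue config and database credentials) needs to be deployed on Machine3 as well, and the queue:listen/Supervisor process moved there. A hedged sketch of the split, with placeholder values:

// app/config/queue.php on BOTH machines (same codebase):
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host'   => 'MACHINE3_IP', // beanstalkd itself still lives on Machine3
    'queue'  => 'queue1',
    'ttr'    => 60,
),

// Machine1: only pushes jobs (dispatching from the web app).
// Machine3: runs the worker, so it does the heavy processing:
//   php artisan queue:listen --queue=queue1 --tries=2
// Machine3 also needs database access to Machine2, since the jobs run their queries there.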