Laravel beanstalkd queue repeating jobs before retry time - php

I've configured queuing on Laravel 5.4 using the "beanstalkd" queue driver. I deployed it on CentOS 7 (cPanel) and installed Supervisor, but I have two main problems.
In the logs, I found this exception: "local.ERROR: exception 'PDOException' with message 'SQLSTATE[42S02]: Base table or view not found: 1146 Table '{dbname}.failed_jobs' doesn't exist'". So Question #1 is: should I configure any database tables for the "beanstalkd" queue driver? If so, could you please state the structure of these tables?
I've also configured the queue:work command in the Supervisor config file as follows:
[program:test-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/****/****/artisan queue:work beanstalkd --sleep=3 --tries=3
autostart=true
autorestart=true
user=gcarpet
numprocs=8
redirect_stderr=true
stdout_logfile=/home/*****/*****/storage/logs/supervisor.log
I found that supervisor.log contained multiple calls for the job even after the first call was "Processed". Question #2: I dispatched the job once, but it was pushed onto the queue several times. I need a solution for this problem; I don't want the same job pushed onto the queue multiple times.
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Failed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Failed: App\Jobs\{JobName}
Please note the time difference between the processed and failed jobs. I also set the driver's 'retry_after' to 900 once and to 90 another time, and it didn't seem to make any difference.

Create the table using the migration as documented.
php artisan queue:failed-table
php artisan migrate
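For reference, the generated migration creates roughly the following table. Treat this as a sketch rather than the canonical schema, since the exact columns vary between Laravel versions:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('failed_jobs', function (Blueprint $table) {
    $table->increments('id');
    $table->text('connection');    // queue connection the job ran on
    $table->text('queue');         // queue name
    $table->longText('payload');   // serialized job
    $table->longText('exception'); // exception that caused the failure
    $table->timestamp('failed_at')->useCurrent();
});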
The job failed, so it is retried.
This behaviour is specified by the 'tries' option that either your queue worker receives on the command line
php artisan queue:work --tries=3
...or the tries property of the specific job.
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class Reader implements ShouldQueue
{
    public $tries = 5;
}
You are currently seeing jobs retried 3 times and then failing.
Check your logging output and the failed_jobs table to see what exceptions have been thrown and fix those appropriately.
A job is retried whenever the handle method throws.
After a couple of retries, the job will fail and the failed() method will be invoked.
Failed jobs will be stored in the failed_jobs table for later reference or manual retrying.
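As an illustration, here is a minimal sketch of that lifecycle; the class name, exception message, and logging are made up for the example:

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Log;

class ExampleJob implements ShouldQueue
{
    use InteractsWithQueue;

    public $tries = 3;

    public function handle()
    {
        // Any uncaught exception here counts as a failed attempt;
        // the job is released back onto the queue until 'tries' is exhausted.
        throw new Exception('Something went wrong');
    }

    public function failed(Exception $exception)
    {
        // Invoked once after the final attempt; the job is then
        // recorded in the failed_jobs table.
        Log::error('ExampleJob failed: '.$exception->getMessage());
    }
}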
Also note: there is a timeout and a retry_after value, which need to be set independently.
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
See Job Expirations & Timeouts in the queue documentation.
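For example, assuming the beanstalkd connection in config/queue.php looks like the Laravel 5.4 default (values here are illustrative), pair retry_after with a shorter --timeout:

'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'default',
    'retry_after' => 90, // seconds before a reserved job is released for retry
],

php artisan queue:work beanstalkd --sleep=3 --tries=3 --timeout=60

With retry_after=90 and --timeout=60, a stuck worker is killed before its job becomes eligible for retry, so the same job is never handed to two workers at once; a retry_after shorter than a job's actual run time is the likely cause of the duplicate "Processing" lines in the log above.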

Related

Backup database in Laravel 8

I am using https://spatie.be/docs/laravel-backup/v7/introduction for backups, and when I tried to back up the database using php artisan backup:run I got these errors:
Backup failed because The dump process failed with exitcode 2 : Misuse of shell builtins : sh: 1: /opt/lampp/bin/mysql/mysqldump: not found
Sending notification failed
Backup failed because: The dump process failed with exitcode 2 : Misuse of shell builtins : sh: 1: /opt/lampp/bin/mysql/mysqldump: not found
Swift_RfcComplianceException
Address in mailbox given [] does not comply with RFC 2822, 3.6.2.
at vendor/swiftmailer/swiftmailer/lib/classes/Swift/Mime/Headers/MailboxHeader.php:355
351▕ */
352▕ private function assertValidAddress($address)
353▕ {
354▕ if (!$this->emailValidator->isValid($address, new RFCValidation())) {
➜ 355▕ throw new Swift_RfcComplianceException('Address in mailbox given ['.$address.'] does not comply with RFC 2822, 3.6.2.');
356▕ }
357▕ }
358▕ }
359▕
+38 vendor frames
39 artisan:37
Illuminate\Foundation\Console\Kernel::handle()
What should I do?
I think the problem is in config/database.php, in this code:
'dump' => [
    'dump_binary_path' => '/opt/lampp/bin/mysql', // only the path, so without `mysqldump` or `pg_dump`
    'use_single_transaction',
    'timeout' => 60 * 5, // 5 minute timeout
],
Where can I find mysqldump in lampp on Ubuntu?
PS: I am using Laravel 8 on Ubuntu.
Laravel doesn't have an official command to take a DB backup like the one you mentioned.
You must use the following package:
https://spatie.be/docs/laravel-backup/v7/installation-and-setup
To see Laravel's default artisan commands, you can run the following command; it lists all of the commands available in php artisan.
php artisan
The result will look like this:
Laravel Framework 8.42.1
Usage:
command [options] [arguments]
Options:
-h, --help Display help for the given command. When no command is given display help for the list command
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
--env[=ENV] The environment the command should run under
-v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
Available commands:
clear-compiled Remove the compiled class file
db Start a new database CLI session
down Put the application into maintenance / demo mode
env Display the current framework environment
help Display help for a command
inspire Display an inspiring quote
list List commands
migrate Run the database migrations
optimize Cache the framework bootstrap files
serve Serve the application on the PHP development server
test Run the application tests
tinker Interact with your application
ui Swap the front-end scaffolding for the application
up Bring the application out of maintenance mode
auth
auth:clear-resets Flush expired password reset tokens
cache
cache:clear Flush the application cache
cache:forget Remove an item from the cache
cache:table Create a migration for the cache database table
config
config:cache Create a cache file for faster configuration loading
config:clear Remove the configuration cache file
db
db:seed Seed the database with records
db:wipe Drop all tables, views, and types
event
event:cache Discover and cache the application's events and listeners
event:clear Clear all cached events and listeners
event:generate Generate the missing events and listeners based on registration
event:list List the application's events and listeners
key
key:generate Set the application key
make
make:cast Create a new custom Eloquent cast class
make:channel Create a new channel class
make:command Create a new Artisan command
make:component Create a new view component class
make:controller Create a new controller class
make:event Create a new event class
make:exception Create a new custom exception class
make:factory Create a new model factory
make:job Create a new job class
make:listener Create a new event listener class
make:mail Create a new email class
make:middleware Create a new middleware class
make:migration Create a new migration file
make:model Create a new Eloquent model class
make:notification Create a new notification class
make:observer Create a new observer class
make:policy Create a new policy class
make:provider Create a new service provider class
make:request Create a new form request class
make:resource Create a new resource
make:rule Create a new validation rule
make:seeder Create a new seeder class
make:test Create a new test class
migrate
migrate:fresh Drop all tables and re-run all migrations
migrate:install Create the migration repository
migrate:refresh Reset and re-run all migrations
migrate:reset Rollback all database migrations
migrate:rollback Rollback the last database migration
migrate:status Show the status of each migration
notifications
notifications:table Create a migration for the notifications table
optimize
optimize:clear Remove the cached bootstrap files
package
package:discover Rebuild the cached package manifest
queue
queue:batches-table Create a migration for the batches database table
queue:clear Delete all of the jobs from the specified queue
queue:failed List all of the failed queue jobs
queue:failed-table Create a migration for the failed queue jobs database table
queue:flush Flush all of the failed queue jobs
queue:forget Delete a failed queue job
queue:listen Listen to a given queue
queue:prune-batches Prune stale entries from the batches database
queue:restart Restart queue worker daemons after their current job
queue:retry Retry a failed queue job
queue:retry-batch Retry the failed jobs for a batch
queue:table Create a migration for the queue jobs database table
queue:work Start processing jobs on the queue as a daemon
route
route:cache Create a route cache file for faster route registration
route:clear Remove the route cache file
route:list List all registered routes
sail
sail:install Install Laravel Sail's default Docker Compose file
sail:publish Publish the Laravel Sail Docker files
schedule
schedule:list List the scheduled commands
schedule:run Run the scheduled commands
schedule:test Run a scheduled command
schedule:work Start the schedule worker
schema
schema:dump Dump the given database schema
session
session:table Create a migration for the session database table
storage
storage:link Create the symbolic links configured for the application
stub
stub:publish Publish all stubs that are available for customization
ui
ui:auth Scaffold basic login and registration views and routes
ui:controllers Scaffold the authentication controllers
vendor
vendor:publish Publish any publishable assets from vendor packages
view
view:cache Compile all of the application's Blade templates
view:clear Clear all compiled view files
Update:
Looks like the issue is with dump_binary_path,
so it should be 'dump_binary_path' => '/opt/lampp/bin' in the config file.
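Putting that together, the corrected dump section would look like this (assuming mysqldump really lives in /opt/lampp/bin; running `which mysqldump` will confirm the path on your machine):

'dump' => [
    'dump_binary_path' => '/opt/lampp/bin', // directory only; the dumper appends `mysqldump` itself
    'use_single_transaction',
    'timeout' => 60 * 5, // 5 minute timeout
],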

Laravel Custom Queue Not Respecting retry_after

I am having some trouble with a custom Laravel queue connection/queue. This particular connection/queue is being used for jobs which may run anywhere from 5 minutes to 10 hours (large data aggregations and data rebuilds).
I have a supervisor conf defined as
[program:laravel-worker-extended]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --queue=refreshQueue,rebuildQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
autostart=true
autorestart=true
user=root
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/storage/logs/queue-worker.log
I have a queue connection defined as:
'refreshQueue' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'refreshQueue',
    'retry_after' => 420, // Retry after 7 minutes
],
I’m adding a job to the queue with a Command via:
AggregateData::dispatch()->onConnection('refreshQueue')->onQueue('refreshQueue');
When DatabaseQueue is constructed, retryAfter is 420 as defined. However, here are my job logs:
[2020-01-22 18:25:37] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:25:37] local.INFO: Aggregating data
[2020-01-22 18:27:08] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:27:08] local.ALERT: AGGREGATION FAILED: Aggregation in progress
Why does it continue to retry after 90 seconds when I explicitly tell it to retry after 420?
I've rebuilt my container, restarted the queue, and done about everything else I can to debug... and after waiting a while, I get this final log output:
[2020-01-22 18:25:37] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:25:37] local.INFO: Aggregating data
[2020-01-22 18:27:08] local.INFO: BEGINNING AGGREGATION
[2020-01-22 18:27:08] local.ALERT: AGGREGATION FAILED: Aggregation in progress
[2020-01-22 18:33:04] local.INFO: [COMPLETE] Aggregating data
[2020-01-22 18:33:04] local.INFO: Queue job finishedIlluminate\Queue\CallQueuedHandler#call
I can't quite grasp why the queue continues to retry the job after 90 seconds. Am I doing something wrong here?
Editing for some additional context here:
This method sets an in_progress flag when it begins, so that it cannot be run twice at the same exact time. The logs can be interpreted as:
BEGINNING AGGREGATION: First line in the handle() method of the job
AGGREGATION FAILED: Aggregation in progress: The failed() method of the job handles failures via exception. This line shows that the job was attempted again and encountered the flag already set to 1, meaning another run is currently processing. The flag is reset to 0 when the job completes or a different exception (not 'in-progress') is encountered.
Queue job finishedIlluminate\Queue\CallQueuedHandler#call: further debugging I added in the service provider to listen for queue-completed events.
This might have something to do with the timeout you're using. From the docs:
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
I've figured out the issue here. In queue.php I was defining a connection refreshQueue. However, in my supervisor conf I was using:
command=php /var/www/artisan queue:work --queue=refreshQueue,rebuildQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
as the command (note the --queue option), when the command should have been:
command=php /var/www/artisan queue:work refreshQueue --sleep=3 --timeout=86400 --tries=2 --delay=360
Note the lack of --queue. The connection has the retry_after defined, not the queue itself.
This is a valuable lesson in the difference of connections vs queues.
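To make the distinction concrete, a short sketch under the same assumptions as above:

// config/queue.php -- retry_after belongs to a *connection* entry:
'connections' => [
    'refreshQueue' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'refreshQueue',
        'retry_after' => 420,
    ],
],

# Worker started on the default connection: the default connection's
# retry_after (commonly 90) applies, even though it polls refreshQueue:
php artisan queue:work --queue=refreshQueue,rebuildQueue

# Worker started on the refreshQueue connection: retry_after=420 applies:
php artisan queue:work refreshQueue --queue=refreshQueue,rebuildQueue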

How to retry all failed jobs from Redis queue in Laravel Horizon

How can you retry all failed jobs in Laravel Horizon? There appears to be no "Retry All" button and the artisan command doesn't work as the failed jobs aren't stored in a table.
The queue:retry command accepts all in place of an individual job ID:
php artisan queue:retry all
This will push all of the failed jobs back onto your redis queue for retry:
The failed job [44] has been pushed back onto the queue!
The failed job [43] has been pushed back onto the queue!
...
If you didn't create the failed jobs table according to the installation guide with:
php artisan queue:failed-table
php artisan migrate
Then you may be up a creek. Maybe try interacting with Redis manually and accessing the list of failed jobs directly (assuming the failed job entries haven't been wiped; they appear to persist in Redis for a week by default, based on the config settings in config/horizon.php).
as the failed jobs aren't stored in a table
Actually, you should create that table. From the Laravel Horizon documentation:
You should also create the failed_jobs table which Laravel will use to
store any failed queue jobs:
php artisan queue:failed-table
php artisan migrate
Then, to retry failed jobs:
Retrying Failed Jobs
To view all of your failed jobs that have been inserted into your
failed_jobs database table, you may use the queue:failed Artisan
command:
php artisan queue:failed
The queue:failed command will list the job ID, connection, queue,
and failure time. The job ID may be used to retry the failed job. For
instance, to retry a failed job that has an ID of 5, issue the
following command:
php artisan queue:retry 5
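If you need to trigger the same thing from code (for example in a scheduled command), here is a minimal sketch using the Artisan facade. This assumes queue:retry's signature, which accepts "all" in place of a job ID in recent Laravel versions:

use Illuminate\Support\Facades\Artisan;

// Equivalent to running `php artisan queue:retry all` from the shell.
Artisan::call('queue:retry', ['id' => ['all']]);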

Laravel multiple apps same server do queues conflict if same name?

We're currently running two Laravel applications on the same dedicated server. Each application utilizes Laravel's queueing system for background jobs and notifications, and each uses Redis for the driver. Neither defines any specific queues; they both use the default. Our Supervisor .conf is as follows:
[program:site-one-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/siteone.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/siteone.com/storage/logs/worker.log
[program:site-two-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/sitetwo.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/sitetwo.com/storage/logs/worker.log
Everything worked as expected before adding the configuration for the second site. After adding it, we noticed while testing that when an event on sitetwo.com triggered a notification to be queued, the email addresses which should have received the notifications did not; instead, the notifications were sent to two email addresses that only exist within the database for siteone.com!
Everything seems to function as expected as long as only one of the above supervisor jobs is running.
Is there somehow a conflict between the two different applications using the same queue name for processing? Did I botch the supervisor config? Is there something else that I'm missing here?
The name of the class is all Laravel cares about when reading the queue. So if you have 2 sites dispatching the job App\Jobs\CoolEmailSender, then whichever application picks it up first is going to process it, regardless of which one dispatched it.
I can think of 2 things here:
- multiple Redis instances, or
- unique queue names passed to --queue
I just changed APP_ENV and APP_NAME in the .env file and it worked for me.
For Example:
First .env: APP_ENV=local APP_NAME=localapp
Second .env: APP_ENV=staging APP_NAME=stagingapp
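This most likely works because recent Laravel versions derive the default Redis key prefix from APP_NAME in config/database.php, so apps with different names stop sharing keys. The stock configuration looks roughly like this (check your own copy, as defaults vary by version):

// config/database.php (the skeleton has `use Illuminate\Support\Str;` at the top)
'redis' => [
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        // Default prefix is derived from APP_NAME, e.g. "localapp_database_"
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
    // ...connections...
],

Setting a distinct REDIS_PREFIX per application achieves the same isolation more explicitly.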
Maybe late, but did you try modifying the queue config at config/queue.php?
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'project1', // 'project2'...
    'retry_after' => 90,
    'block_for' => null,
],
Then run the queue worker with --queue=project1.
Note: This answer is for those who have multiple domains, multiple apps, and one database.
You can dispatch & listen to your job from multiple servers by specifying the queue name.
App 1
dispatch((new Job($payload))->onQueue('app1'));
php artisan queue:listen --queue=app1
App 2
dispatch((new Job($payload))->onQueue('app2'));
php artisan queue:listen --queue=app2

Laravel Queue Driver not calling handle() on jobs, but queue:listen daemon is logging jobs as processed

I've taken over a Laravel 5.2 project where handle() was being called successfully with the sync queue driver.
I need a driver that supports dispatch(..)->delay(..) and have attempted to configure both database and beanstalkd, with numerous variations, unsuccessfully - handle() is no longer getting called.
Current setup
I am using Forge for server management and have set up a daemon, which is automatically kept running by Supervisor, for this command:
php /home/forge/my.domain.com/envoyer/current/artisan queue:listen --sleep=3 --tries=3
I've also tried queue:work, naming 'database'/'beanstalkd', with and without specifying --sleep, --tries, --daemon.
I have an active beanstalkd worker running on forge.
I have set the default driver to beanstalkd in config/queue.php and QUEUE_DRIVER=beanstalkd in my .env from within Envoyer, which has worked fine for other environment variables.
After build deployment Envoyer runs the following commands successfully:
php artisan config:clear
php artisan migrate
php artisan cache:clear
php artisan queue:restart
Debug information
The log from my queue:listen daemon within .forge says it processed a job!
[2017-07-04 08:59:13] Processed: App\Jobs\GenerateRailwayReport
Where that job class is defined like this:
use Exception;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateRailwayReport extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $task_id;

    public function __construct($task_id)
    {
        $this->task_id = $task_id;
        clock("GenerateRailwayReport constructed"); // Logs Fine
    }

    public function handle()
    {
        clock("Handling Generation of railway report"); // Never Logs
        // Bunch of stuff all commented out during my testing
    }

    public function failed(Exception $e)
    {
        clock("Task failed with exception:"); // Never Logs
        clock($e);
    }
}
My beanstalkd worker log within .forge has no output in it.
Nothing in my failed_jobs table.
Really, really appreciate any help at this point!
