Dispatching to queue always chooses the default connection

I have an application running many different kinds of jobs that take anywhere from a few seconds to an hour. This requires me to have two separate queue connections, because retry_after is bound to a connection instead of a queue (even though both use Redis, which is annoying but not the issue right now). One connection has a retry_after of 600 seconds, the other 3600 seconds (one hour). My jobs implement ShouldQueue and use the Queueable, SerializesModels, Dispatchable and InteractsWithQueue traits.
Now for the problem. I created a TestJob that sleeps for 900 seconds to make sure we pass the retry_after limit. When I try to dispatch a job on a specific connection using:
dispatch(new Somejob)->onConnection('redis-long-run')
or, as I previously did it, in the constructor of the job (which always used to work):
public function __construct()
{
    $this->onConnection('redis-long-run');
}
The job gets picked up by the queue worker and runs for 600 seconds, after which the worker restarts the job, notices it has already run once, and fails it. 300 seconds later, the original run completes successfully. If my worker allows more than one try, the duplicate jobs run in parallel for 300 seconds.
In my test job I'm also printing out $this->connection, which does show the correct connection being used, so my guess is the dispatcher is just ignoring it completely.
I'm using Laravel 5.8.35 and PHP 7.3 in a Docker environment. Supervisor handles my workers.
Edit: I've confirmed the behavior persists after upgrading to Laravel v6.5.1
Steps To Reproduce:
Set your queue driver to Redis.
Create two different Redis connections in queue.php: one named redis with a retry_after of 600, the other named redis-long-run with a retry_after of 3600. In my case they also use different queues, though I'm not sure this is required for this test.
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 600,
        'block_for' => null,
    ],

    'redis-long-run' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'long_run',
        'retry_after' => 3600,
        'block_for' => null,
    ],
]
Create a little command to dispatch our test job three times
<?php

namespace App\Console\Commands;

use App\Jobs\TestFifteen;
use Illuminate\Console\Command;

class TestCommand extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'test:fifteen';

    /**
     * Create a new command instance.
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return void
     */
    public function handle()
    {
        for ($i = 1; $i <= 3; $i++) {
            dispatch(new TestFifteen($i))->onConnection('redis-long-run');
        }
    }
}
Create the test job
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use function sleep;
use function var_dump;

class TestFifteen implements ShouldQueue
{
    use Queueable, SerializesModels, Dispatchable, InteractsWithQueue;

    private $testNumber;

    public function __construct($testNumber)
    {
        $this->onConnection('redis-long-run');
        $this->testNumber = $testNumber;
    }

    public function handle()
    {
        var_dump("Started test job {$this->testNumber} on connection {$this->connection}");
        sleep(900);
        var_dump("Finished test job {$this->testNumber} on connection {$this->connection}");
    }
}
Run your queue workers. I use supervisor with the following config for these workers.
[program:laravel-queue-default]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=default --tries=3 --timeout=600
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:laravel-queue-long-run]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=long_run --tries=1 --timeout=3540
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Execute the artisan command php artisan test:fifteen
So am I doing something wrong or is the applied connection really not respected?
Also, what's the design philosophy behind not being able to decide on a per-job or per-queue basis what the retry_after should be, so that Redis could serve as my actual queue driver with a single connection? Why can't I pick Redis as my queue handler and decide that queue-1 retries after 60 seconds and queue-2 retries after 120 seconds? It feels so unnatural having to set up two connections for this when they use exactly the same Redis instance and everything.
Anyway, here's hoping someone can shed some light on this issue. Thank you in advance.

From my understanding your connection is always redis, because that is the connection both of your workers are started on (queue:work redis), so you should specify the queue instead when dispatching:
dispatch(new TestFifteen($i))->onQueue('long_run');
The connection is a different driver or backend (redis, sync, SQS, etc.), while queues are different named stacks of jobs on that connection. Since both of your connections point at the same Redis instance, a worker started on the redis connection will also pick up jobs pushed to the long_run queue, and it applies that connection's retry_after of 600, which matches the behavior you're seeing.
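One caveat, as a sketch on top of that answer: retry_after is read from the connection the worker itself is started on, not from the job. So if the 3600-second window is the goal, the long-run supervisor program from the report would also need to name the redis-long-run connection explicitly, something like:
command=php /var/www/artisan queue:work redis-long-run --queue=long_run --tries=1 --timeout=3540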

Related

I could not run the artisan schedule:run cron command on shared hosting

I want to achieve task scheduling in my Laravel 5.8 project. For that, I have created a custom artisan command, send:credentials, which sends emails to specific users based on their status.
sendUserCredentials.php
<?php

namespace App\Console\Commands;

use App\Mail\credentialsEmail;
use App\Models\userModel;
use Illuminate\Console\Command;
use Mail;

class sendUserCredentials extends Command
{
    protected $signature = 'send:credentials';
    protected $description = 'Credentials send Successfully!';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $users = userModel::select(["email", "username", "role", "id"])->where("credentials", "NO")->get();

        foreach ($users as $key => $user) {
            Mail::to($user->email)->send(new credentialsEmail($user));
            userModel::where("id", $user->id)->update(["credentials" => "SEND"]);
        }
    }
}
I added this command to kernel.php so that I can run it using the Laravel task scheduler.
kernel.php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected $commands = [
        Commands\sendUserCredentials::class,
    ];

    protected function schedule(Schedule $schedule)
    {
        $schedule->command('send:credentials')
            ->everyMinute();
    }

    protected function commands()
    {
        $this->load(__DIR__.'/Commands');

        require base_path('routes/console.php');
    }
}
So on my local server everything works like a charm when I run the command php artisan schedule:run,
but on the shared server, when I run the scheduler using the cron entry * * * * * /path/to/project/artisan schedule:run >> /dev/null 2>&1, it gives me an error like this:
local.ERROR: The Process class relies on proc_open, which is not available on your PHP installation. {"exception":"[object] (Symfony\\Component\\Process\\Exception\\LogicException(code: 0): The Process class relies on proc_open, which is not available on your PHP installation. at /path/to/vendor/vendor/symfony/process/Process.php:143)
BUT when I run the artisan command directly via the cron job, * * * * * /path/to/project/artisan send:credentials >> /dev/null 2>&1, there is no error and the emails are sent successfully!
I am using Laravel 5.8 and deployed my website on Namecheap shared hosting. The following command helped me execute the cron job properly:
*/5 * * * * /usr/local/bin/php /home/YOUR_USER/public_html/artisan schedule:run >> /home/YOUR_USER/public_html/cronjobs.txt
As Namecheap allows a minimum interval of 5 minutes, the above command executes every 5 minutes and its output is written to a text file.
The error
The Process class relies on proc_open, which is not available on your PHP installation.
is caused by the Flare error reporting service being enabled in debug mode. To solve this, follow the steps below.
Add the file /config/flare.php with the content below:
'reporting' => [
    'anonymize_ips' => true,
    'collect_git_information' => false,
    'report_queries' => true,
    'maximum_number_of_collected_queries' => 200,
    'report_query_bindings' => true,
    'report_view_data' => true,
],
Then clear the bootstrap cache with the command below:
php artisan cache:clear && php artisan config:clear
Most probably the issue will be solved. Otherwise, check this link.

How to use multiple SQS (queue service) instances in Lumen?

I want to push messages to multiple SQS queues, in parallel or one after another, but it should be dynamic, and when I start the worker it should fetch messages from both queues and differentiate between them.
How can I achieve this in Lumen?
UPDATE
How can I use multiple workers for different queues with different Amazon SQS instances?
As far as I can see, Lumen and Laravel use exactly the same code to handle queues, so here's something that might work, though I haven't tested it.
Run the queue worker as:
php artisan queue:work --queue=queue1,queue2
This means that jobs in queue1 are processed before jobs in queue2 (unfortunately this is the only way to listen to multiple queues with a single worker).
Then in your job:
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        if ($this->job->getQueue() === 'queue1') {
            // Things
        } else {
            // Different things
        }
    }
}
If you need to use multiple connections you can't do that with a single worker; however, you can run multiple workers at a time. First configure your connections, e.g. in your config/queue.php:
'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => 'your-public-key',
        'secret' => 'your-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-account-id',
        'queue' => 'your-queue-name',
        'region' => 'us-east-1',
    ],

    'sqs2' => [
        'driver' => 'sqs',
        'key' => 'your-other-public-key',
        'secret' => 'your-other-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-other-account-id',
        'queue' => 'your-other-queue-name',
        'region' => 'us-east-1',
    ],
]
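The dispatch side isn't shown in the original answer; as a minimal sketch, assuming a job class MyJob that uses the Queueable trait, you can target each connection and queue explicitly:

// Push to the first connection ('sqs'), using its default queue.
dispatch(new MyJob($payload));

// Push to the second connection and a specific queue on it.
dispatch((new MyJob($payload))->onConnection('sqs2')->onQueue('queue2'));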
If you're using supervisor then set up your supervisor configuration; if not, you'll have to start both workers manually. Here's a supervisor configuration you can use:
[program:laravel-sqs-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --queue=queue1
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
[program:laravel-sqs2-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs2 --queue=queue2
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
Alter the paths and user settings according to your app.

Laravel: calling the queue worker from another artisan command

I'm using Laravel 5.4 and I want to call queue:work from a build script. The problem is that I want to determine which queue to use based on an env variable I'm setting on the server. Am I safe to use a command like the one I wrote?
What happens if the worker stops working? Can I restart the worker gracefully?
Any tips or suggestions are welcome!
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Artisan;

class SetupQueueWorker extends Command
{
    protected $signature = 'queue:setup-worker';

    const QUEUE_TO_USE_ENV_KEY = 'USE_QUEUE';

    public function handle()
    {
        // php artisan queue:work --tries=2 --queue=so_test --sleep=5
        $queueToUse = env(self::QUEUE_TO_USE_ENV_KEY);

        if (is_null($queueToUse)) {
            throw new \RuntimeException('Environment variable to determine which queue should be used is not set. Env key: '.self::QUEUE_TO_USE_ENV_KEY);
        }

        // Blocks here until the worker process exits.
        $exit = Artisan::call('queue:work', [
            '--tries' => 2, '--queue' => $queueToUse, '--sleep' => 5,
        ]);

        throw new \RuntimeException('Queue worker stopped listening.');
    }
}

Laravel multiple apps on the same server: do queues conflict if they have the same name?

We're currently running two Laravel applications on the same dedicated server. Each application utilizes Laravel's queueing system for background jobs and notifications, and each uses Redis for the driver. Neither defines any specific queues; they both use the default. Our supervisor .conf is as follows:
[program:site-one-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/siteone.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/siteone.com/storage/logs/worker.log
[program:site-two-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/sitetwo.com/artisan queue:work --sleep=5 --tries=1
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/sitetwo.com/storage/logs/worker.log
Everything worked as expected before adding the configuration for the second site. After adding it, we were testing and noticed that when invoking an event on sitetwo.com that triggered a notification to be queued, the email addresses which should have received the notification did not, and it was instead sent to two email addresses that only exist within the database for siteone.com!
Everything seems to function as expected as long as only one of the above supervisor jobs is running.
Is there somehow a conflict between the two different applications using the same queue name for processing? Did I botch the supervisor config? Is there something else that I'm missing here?
The name of the class is all Laravel cares about when reading the queue. So if you have two sites dispatching the job App\Jobs\CoolEmailSender, then whichever application picks it up first is going to process it, regardless of which one dispatched it.
I can think of 2 things here: multiple Redis instances, or unique queue names passed to --queue.
I just changed APP_ENV and APP_NAME in the .env file and it worked for me.
For example:
First .env: APP_ENV=local APP_NAME=localapp
Second .env: APP_ENV=staging APP_NAME=stagingapp
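That plausibly works because, from Laravel 5.7 onward, the default config/database.php derives the Redis key prefix from APP_NAME, so each application writes its queue keys under its own prefix instead of both sharing queues:default. A sketch of the relevant default (check your own config/database.php, which imports Illuminate\Support\Str):

'redis' => [
    // ...
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        // Keys are prefixed per application, so two apps with different
        // APP_NAME values no longer collide on the same queue keys.
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
],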
Maybe late, but did you try modifying the queue config at config/queue.php?

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'project1', // 'project2' in the second app...
    'retry_after' => 90,
    'block_for' => null,
],

Then run the queue with --queue=project1.
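Applied to the supervisor config from the question, that would look something like this sketch (one distinct queue name per site):

command=php /var/www/siteone.com/artisan queue:work --sleep=5 --tries=1 --queue=project1
command=php /var/www/sitetwo.com/artisan queue:work --sleep=5 --tries=1 --queue=project2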
Note: This answer is for those who have multiple domains, multiple apps, and one database.
You can dispatch and listen to your jobs from multiple servers by specifying the queue name.
App 1
dispatch((new Job($payload))->onQueue('app1'));
php artisan queue:listen --queue=app1
App 2
dispatch((new Job($payload))->onQueue('app2'));
php artisan queue:listen --queue=app2

Laravel Queue Driver not calling handle() on jobs, but queue:listen daemon is logging jobs as processed

I've taken over a Laravel 5.2 project where handle() was being called successfully with the sync queue driver.
I need a driver that supports dispatch(..)->delay(..) and have attempted to configure both database and beanstalkd, with numerous variations, unsuccessfully - handle() is no longer getting called.
Current setup
I am using Forge for server management and have set up a daemon, which is automatically kept running by Supervisor, for this command:
php /home/forge/my.domain.com/envoyer/current/artisan queue:listen --sleep=3 --tries=3
I've also tried queue:work, naming 'database'/'beanstalkd', with and without specifying --sleep, --tries, --daemon.
I have an active beanstalkd worker running on forge.
I have set the default driver to beanstalkd in config/queue.php and QUEUE_DRIVER=beanstalkd in my .env from within Envoyer, which has worked fine for other environment variables.
After build deployment Envoyer runs the following commands successfully:
php artisan config:clear
php artisan migrate
php artisan cache:clear
php artisan queue:restart
Debug information
The log my queue:listen daemon produces within .forge says it processed a job!
[2017-07-04 08:59:13] Processed: App\Jobs\GenerateRailwayReport
Where that job class is defined like this:
class GenerateRailwayReport extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $task_id;

    public function __construct($task_id)
    {
        $this->task_id = $task_id;

        clock("GenerateRailwayReport constructed"); // Logs fine
    }

    public function handle()
    {
        clock("Handling Generation of railway report"); // Never logs

        // Bunch of stuff, all commented out during my testing
    }

    public function failed(Exception $e)
    {
        clock("Task failed with exception:"); // Never logs
        clock($e);
    }
}
My beanstalkd worker log within .forge has no output in it.
Nothing in my failed_jobs table.
Really, really appreciate any help at this point!
