Laravel 5.5 - Horizon not running second queue automatically - php

Using Laravel Horizon on Forge with Redis, I have a default queue and a notifications queue.
The notification jobs all pile up with a paused status under Recent Jobs and do NOT get processed. This is the code used:
$event->owner->notify((new ItemWasLiked($event))->onQueue('notifications'));
The only way I found to process them was to manually run the following command explicitly for notifications to process:
php artisan queue:work --queue=notifications
Shouldn't this be automatic as it comes in? What am I missing?

We need to instruct Horizon to start a queue worker that processes the notifications queue in addition to the default queue by adding an element to the queue worker configuration in config/horizon.php:
'environments' => [
    ...
    '(environment name)' => [
        'supervisor-1' => [
            ...
            'queue' => ['default', 'notifications'],
        ],
    ],
],
The 'queue' directive declares which queues a Horizon worker watches for jobs. The out-of-the-box configuration specifies only the default queue, so the worker will only process jobs pushed to that queue. The above is roughly equivalent to:
php artisan queue:work --queue=default,notifications
...where the first queue in the comma-separated list has the highest priority and the last has the lowest. Note that Horizon prioritizes queues by allocating a greater share of worker processes to them rather than by draining them strictly in order of priority.
Alternatively, we could add a second worker group to the configuration that processes the second queue:
'(environment name)' => [
    'supervisor-1' => [
        ...
        'queue' => ['default'],
    ],
    'supervisor-2' => [
        ...
        'queue' => ['notifications'],
    ],
],
...with this configuration, Horizon starts separate queue worker processes for each of the two queues, and they run simultaneously.
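For reference, a fuller sketch of what such a two-supervisor block might look like (the environment name, process counts, and balance strategy here are assumptions; adjust them to your own setup):

'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 3,
            'tries' => 3,
        ],
        'supervisor-2' => [
            'connection' => 'redis',
            'queue' => ['notifications'],
            'balance' => 'simple',
            'processes' => 3,
            'tries' => 3,
        ],
    ],
],

Keep in mind that Horizon only reads this file when it starts, so restart it after changing the configuration (for example with php artisan horizon:terminate, letting the process monitor bring it back up).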

Related

Laravel Job throwing Symfony\Component\Process\Exception\ProcessTimedOutException

I have a web application that runs a job to convert videos into HLS using the aminyazdanpanah/php-ffmpeg-video-streaming package. However, after about 2 minutes the job fails and throws this error:
Symfony\Component\Process\Exception\ProcessTimedOutException: The process '/usr/bin/ffmpeg -y -i...'
exceeded the timeout of 300 seconds. in /var/www/vendor/symfony/process/Process.php:1206
The Laravel job has its timeout set to 7200s.
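For context, that job-level timeout is typically declared as a public $timeout property on the job class; a minimal sketch (the class name here is made up):

class ConvertVideoJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Maximum number of seconds the job may run before the worker kills it.
    public $timeout = 7200;

    public function handle()
    {
        set_time_limit(7200); // also raise PHP's own execution limit
        // ... run the ffmpeg conversion here
    }
}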
My supervisor setup also specifies a timeout of 7200s:
[program:app_worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --tries=1 --timeout=7200 --memory=2000
autostart=true
autorestart=true
I have also set my php max_execution_time to 7200s in the ini file.
In the job handle() function I also call set_time_limit(7200); to set the time limit.
I have restarted the queue worker and cleared my cache but that doesn't seem to solve the issue.
It seems Symfony just ignores the timeout specification from Laravel.
I noticed that it failed after about 2 minutes because in my config/queue.php file redis retry_after was set to 90.
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
    'after_commit' => false,
],
I increased that to 3600, so the job stopped failing after 2 minutes, but it kept failing after 300s.
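In other words, the relevant line in config/queue.php ended up as something like this (the exact value just needs to comfortably exceed the longest-running job):

'retry_after' => 3600,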
I later traced the timeout down to aminyazdanpanah/php-ffmpeg-video-streaming's FFMpeg::create().
By default, the function sets a timeout of 300s, so I had to pass a config to the function to increase the timeout:
use Streaming\FFMpeg;

$ffmpeg = FFMpeg::create([
    'timeout' => 3600,
]);
And this solved the timeout issue.

Laravel Redis Jobs are not Being Queued

I am using Laravel with Phpredis and I've created a webhook that adds a job to the queue. I've followed the docs for the integration, but my jobs are not being queued.
.env
QUEUE_CONNECTION=redis
config/database.php
'client' => env('REDIS_CLIENT', 'phpredis'),
config/queue.php
...
'connections' => [
    ...
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
        'block_for' => null,
    ],
    ...
],
...
I am using Windows with Xampp and redis-server.exe is running. This is what I am getting when the job is being added to the queue:
[9672] 03 Nov 21:44:03 - Accepted 127.0.0.1:52945
[9672] 03 Nov 21:44:03 - Client closed connection
This is how I'm adding the jobs to queue:
ProcessPhotos::dispatch($settings, $data, $id);
And this is how I'm trying to run the queued jobs:
php artisan queue:work
or
php artisan queue:listen
I am running one of the previous commands and nothing happens, and I'm also not receiving any errors. It's like the queue is empty (I've also checked the queue length using this code and got 000).
I've also tried to set a key in Redis, and that seemed to work. Does anybody know what's happening? I'm thinking of moving to the database driver if I can't get this solved ...
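For anyone debugging a similar setup, one quick way to inspect the pending-job count directly is via the Redis facade, since the queue is just a Redis list (a sketch assuming the default queue name on the default connection; e.g. run it in php artisan tinker):

use Illuminate\Support\Facades\Redis;

// Pending jobs sit in a list keyed "queues:<queue name>" on the queue's Redis connection.
Redis::connection()->llen('queues:default');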
I've fixed the issue!
It turned out that something was wrong with the server. (I reinstalled the Redis extension and it still wasn't working; then I changed the server version and it worked.)
I reinstalled the Redis extension from here and switched to this server version. The rest of the settings are the same as in my previous post.

Timed out job hangs for 15 or 30 minutes and then runs

Horizon Version: 3.7.2 / 3.4.7
Laravel Version: 6.17.0
PHP Version: 7.4.4
Redis Driver & Version: predis 1.1.1 / phpredis 5.2.1
Database Driver & Version: -
We are having strange errors with our Horizon. Basically this is what happens:
- A job is queued and starts processing.
- After 90 seconds (our timeout config value) it times out.
- After 120 seconds (our retry_after value) the job is retried.
- The retried job succeeds.
- After 15 or 30 minutes, the original job (the one that timed out) finishes, actually running the job again.
It seems this can happen to any kind of job. For example, if it's a queued mailable, the user gets an email first, then 15 or 30 minutes later the user gets the same email again.
Here are our config files.
config/database.php:
'redis' => [
    'client' => env('REDIS_CLIENT', 'predis'),
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('DEFAULT_QUEUE_NAME', 'default'),
    'retry_after' => 120, // 2 minutes
    'block_for' => null,
],
config/horizon.php:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => env('HORIZON_CONNECTION', 'redis'),
            'queue' => [env('DEFAULT_QUEUE_NAME', 'default')],
            'balance' => 'simple',
            'processes' => 10,
            'tries' => 3,
            'timeout' => 90,
        ],
    ],
]
Here is how it looks in the Horizon dashboard (screenshots omitted): when the initial job times out, it stays in that state in Recent Jobs while the retries are running, and after almost half an hour its status changes. The tags are the same in both screenshots; I just blacked out the names.
Here are the logs we are seeing (times here are in UTC)
[2020-04-22 11:24:59][88] Processing: App\Mail\ReservationInformation
[2020-04-22 11:29:00][88] Failed: App\Mail\ReservationInformation
[2020-04-22 11:29:00][88] Processing: App\Mail\ReservationInformation
[2020-04-22 11:56:21][88] Processed: App\Mail\ReservationInformation
Note: with Predis we also see some logs like Error while reading line from the server. [tcp://REDIS_HOST:6379], but with PHPRedis there were none.
We tried a lot of different combinations to eliminate the problem, and it happened in every one of them, so we think it must be something with Horizon.
We tried:
- Predis with Redis 5 and Redis 3
- Predis with different read_write_timeout values
- PHPRedis with Redis 5 and Redis 3
- THP was enabled on one server, so we also tried all combinations with a server that has THP disabled
- We were at Laravel 6.11 and Horizon 3.4.7, then upgraded to Laravel 6.14 and Horizon 3.7.2
There is only one instance of Horizon running, and no other queue is handled in this Horizon instance.
Any information or tips to try are welcome!
For us this turned out to be a configuration error in our systems. We were using OpenShift and Docker. We adjusted these values in our containers/systems:
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time
and for now everything works normally.
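For anyone hitting the same thing on a plain Linux host, these are standard kernel keepalive settings and can be tightened via sysctl; the values below are only illustrative, not the ones we used:

# /etc/sysctl.d/99-tcp-keepalive.conf
net.ipv4.tcp_keepalive_time = 300    # idle seconds before keepalive probes start
net.ipv4.tcp_keepalive_intvl = 60    # seconds between probes
net.ipv4.tcp_keepalive_probes = 5    # failed probes before the connection is dropped

Apply with sysctl --system; in containers these usually have to be set on the host or through the container runtime rather than inside the container itself.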

How to use multiple SQS(queue service) instance in Lumen?

I wanted to push messages to multiple SQS queues, in parallel or one after another, but it should be dynamic, and when I start the worker it should fetch the messages from both queues and differentiate between them.
How can I achieve this in Lumen?
UPDATE
How to use multiple worker for different queues with different amazon SQS instances?
As far as I can see, Lumen and Laravel use the exact same code to handle queues, so here's something that might work, though I haven't tested it.
Run the queue worker as:
php artisan queue:work --queue=queue1,queue2
This will mean that jobs in queue1 are processed before jobs in queue2 (unfortunately this is the only way to listen to multiple queues)
Then in your job:
class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // Branch on the queue this job was pulled from.
        if ($this->job->getQueue() === 'queue1') {
            // Things
        } else {
            // different things
        }
    }
}
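To route a job to a specific connection and queue from the application side, the onConnection() and onQueue() methods from the Queueable trait can be chained before dispatching; a small usage sketch (the queue names match the worker commands above, and the connection names match the config defined below):

// Push to the first SQS connection/queue...
dispatch((new MyJob())->onConnection('sqs')->onQueue('queue1'));

// ...or to the second one.
dispatch((new MyJob())->onConnection('sqs2')->onQueue('queue2'));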
If you need to use multiple connections, you can't do that with a single worker; however, you can run multiple workers at the same time. First configure your connections, e.g. in your config/queue.php:
'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => 'your-public-key',
        'secret' => 'your-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-account-id',
        'queue' => 'your-queue-name',
        'region' => 'us-east-1',
    ],
    'sqs2' => [
        'driver' => 'sqs',
        'key' => 'your-other-public-key',
        'secret' => 'your-other-secret-key',
        'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-other-account-id',
        'queue' => 'your-other-queue-name',
        'region' => 'us-east-1',
    ],
]
If you're using supervisor, set up your supervisor configuration; if not, you'll have to start both workers manually. Here's a supervisor configuration you can use:
[program:laravel-sqs-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --queue=queue1
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
[program:laravel-sqs2-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs2 --queue=queue2
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
Alter the paths and user settings according to your app.

Running beanstalkd worker on a remote server

My stack set-up consists of the following
Machine1 - Main Server (Running laravel)
Machine2 - MySql Server for the laravel codebase
Machine3 - Beanstalkd worker
I have setup Supervisord on Machine1 and added the following queue listener
[program:queue1]
command=php artisan queue:listen --queue=queue1 --tries=2
...
My Laravel queue config file (app/config/queue.php) reads the following:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host' => '--- Machine3 IP ---',
    'queue' => 'queue1',
    'ttr' => 60,
),
I have installed beanstalkd on Machine3 along with Beanstalk console, and I can see my tasks being pushed to the queue and executing successfully. However, I am not sure whether Machine3 is actually executing them; the reason for my suspicion is the high CPU usage on the main server compared to no spikes in CPU usage on Machine3.
I completely shut down my beanstalkd server to check whether the queue would still process, and the outcome was an error reported by Laravel indicating it could not connect to the beanstalkd server.
I read somewhere that you need to have your Laravel codebase on the beanstalkd server (Machine3) too; was that really the way to go?
Whichever machine you run queue:listen on is the machine that does the actual processing of the queue.
At the moment all you are doing is storing the queues on machine3, but processing them on machine1.
So you need to have machine3 run the queue:listen command if you want it to process the queue.
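In practice that means Machine3 needs its own copy of the Laravel codebase (pointing at the same MySQL server on Machine2) plus its own supervisord entry, roughly along these lines (the path is an assumption):

[program:queue1-machine3]
command=php /path/to/laravel/artisan queue:listen --queue=queue1 --tries=2
autostart=true
autorestart=true

The queue config on Machine3 can keep pointing at the same beanstalkd host (which is local to that machine), and Machine1 then only pushes jobs instead of listening for them.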
