Laravel fails to acknowledge jobs for long-running processes

I have a problem running long queue jobs. I'm currently using Laravel 5.0. I used to queue the jobs on the database and had no problem with that, but I needed to move off the DB, so I switched to RabbitMQ and am integrating this package:
https://github.com/vyuldashev/laravel-queue-rabbitmq/tree/v5.0
Everything works well with short jobs, ones taking less than 3 or 4 minutes, but I'm trying to run a queue listener for jobs taking more than 10 minutes. The problem is they never get acknowledged and remain unacked; after exactly 16.6 minutes (the default TTL) the worker moves on to the next job and the previous one is still not acked. I also sometimes get a broken pipe or broken connection if the process takes too long.
I believe the problem is with the worker itself, not the package I'm using. Here are two examples of the queue listener commands I'm trying to use; could you advise how to use them better, or what options I should pass?
php artisan queue:listen rabbitmq --queue=QUEUENAME --timeout=0 --tries=2
php artisan queue:work rabbitmq --queue=QUEUENAME --daemon --tries=2

You can set the $timeout per job like so:
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class LongProcessJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    /**
     * The number of seconds the job can run before timing out.
     * @var int
     */
    public $timeout = 120;
}
See the Laravel Queues documentation for more details.
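As a rough sketch of running the worker for long jobs (QUEUENAME and the 900-second value are placeholders, and this assumes a Laravel version whose queue:work supports --timeout): choose a --timeout longer than your longest job and, depending on your version, make sure the connection's retry_after or expire value in config/queue.php is larger than that timeout so a job is not retried while it is still running.

php artisan queue:work rabbitmq --queue=QUEUENAME --tries=2 --timeout=900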

Related

Laravel 5.7 - Queued jobs are too slow

I use Laravel 5.7 with 3 queued jobs, and the time between jobs is far too long.
In the first job I loop over the items of an RSS feed and dispatch each item to the second job, and so on. I won't go into the details, but the calculations involved are trivial and should not take any time.
The problem is that every dispatch to a job takes a lot of time, and Horizon and Telescope do not help me debug it.
The machine I use has 32 GB of RAM, and there are several worker processes (15 per queue) running the queues.
[program:mywebsite_feeder]
command=/RunCloud/Packages/php72rc/bin/php artisan queue:work redis --queue=feeder --tries=3 --sleep=0
directory=/home/runcloud/webapps/mywebsite
redirect_stderr=true
autostart=true
autorestart=true
user=runcloud
numprocs=15
process_name=%(program_name)s_%(process_num)s
I have this error in laravel.log:
production.ERROR: App\Jobs\FeederJob has been attempted too many times
or run too long. The job may have previously timed out.
By default, Laravel queue workers sleep for 3 seconds when there are no jobs available.
You should use the --sleep=0 option.
I had the same issue and searched a lot, but nothing helped; there are even some issues about this bug on the Horizon GitHub repository, but with no useful solution. The problem appears to be a Horizon/Redis bug with heavy tasks.
Finally, I switched from Redis and Horizon to an SQL database as the queue connection (whatever you use in your project; for me, MSSQL), and that fixed the problem.
Note: use --timeout=0 in your artisan command.
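A minimal sketch of that switch, assuming a stock Laravel 5.7 setup (the env key is QUEUE_CONNECTION on 5.7, older versions use QUEUE_DRIVER; the queue name here is just the one from the Supervisor config above):

# .env: point the default queue connection at the database driver
QUEUE_CONNECTION=database

# create the jobs table, migrate, and run the worker without a timeout
php artisan queue:table
php artisan migrate
php artisan queue:work --queue=feeder --tries=3 --timeout=0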

How to prevent Laravel queue job, schedule and cron job from killing the server

My problem is more of a fallback issue, I think. I have two queued jobs in my Laravel job queue, and I am using the database driver. The first creates credentials for my user on another site via API calls, and the second sends an email for verification and 2FA. There is also another command that updates my unit conversion rate.
protected function schedule(Schedule $schedule)
{
    $schedule->command('update:conversionRate')->everyFiveMinutes();
    $schedule->command('queue:work')->everyMinute();
}
Jobs are added to my queue using the dispatch helper and the ShouldQueue interface: the API call uses the dispatch function, while the email uses ShouldQueue.
This works, because I can see the jobs in my database. But when the server cron job runs, it crashes, and my log file shows that my MySQL user has reached its maximum connection limit, so nobody can access the database with that user account.
So my question: how do I set up the cron job and queue:work so that they do not crash the server?
As I understand it, your problem is the maximum number of connections to the database.
The first solution, though not the best one, is to increase the database connection limit.
The second solution is to look at the queue setup itself. Have you tried a driver other than the database, for example Redis or Beanstalkd?
You also run the command every minute. It is bad practice to drive queues from a cron job; Supervisor exists for this (see the sketch after this answer).
Also, try the queue:work command with these parameters.
Example
php artisan queue:work --sleep=3 --tries=3 --daemon
--sleep=3 makes the worker pause for 3 seconds between checks when there are no items in the queue.
--tries=3 means that if an item cannot be processed for some reason, the worker will attempt it 3 times and then move on to the next item; by default it would keep retrying many times.
Experiment with these options.
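A minimal Supervisor program sketch for that worker (the paths, program name, and user are hypothetical; adjust them to your server):

[program:laravel-worker]
command=php /path/to/your/app/artisan queue:work --sleep=3 --tries=3 --daemon
directory=/path/to/your/app
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true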

How to manually release all the queued jobs in Laravel?

I need to know whether there is a way to use the internal Laravel API to force the release of all queued jobs. The reason is that our queue implementation has a mechanism that releases a job back onto the queue with a 5-minute delay if there was a problem during its execution. The problem is that we also need some sort of refresh feature that triggers all of those "delayed" jobs manually, since we want some control over when those delayed jobs run while keeping the fail-safe mechanism intact. Is there any way to implement this using Laravel?
You can run the php artisan queue:work command to start the queue worker. If you wish to start this from code, you can call the command programmatically (see the sketch below).
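A minimal sketch of calling the worker programmatically, for example from a controller or a console command of your own (the --once option assumes a Laravel version where queue:work supports it; the queue name is just an example):

use Illuminate\Support\Facades\Artisan;

// process a single job from the default queue, then stop
Artisan::call('queue:work', [
    '--once'  => true,
    '--queue' => 'default',
]);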

Most efficient way of implementing an email queue in Laravel

I want to implement a queue for sending out emails in Laravel. I have the queue working fine, but am worried about efficiency. These are my settings:
I have created the jobs table and set up the .env file to use the queues with my local database.
I have set up this crontab on the server:
* * * * * php /var/www/imagine.dev/artisan schedule:run >> /dev/null 2>&1
And I have set up a schedule in app\Console\Kernel.php, so I don't have to manually enter 'queue:listen' through the console every time.
$schedule->command('queue:listen');
Now to my question: is this efficient? I am worried about queue:listen running in the background all the time, consuming CPU and memory.
I have been trying to run queue:listen only once every 5 minutes and then put it to sleep with
$schedule->command('queue:listen --sleep 300');
but again, I am not sure if this is the best approach.
Another thing I tried is using 'queue:work', but this only processes one job at a time.
Ideally, I would like a way to process all the queued jobs every 5 minutes, avoiding constant use of memory and CPU.
What is the best approach?
Not sure which version of Laravel you're using, but I suspect it's 5.2 or earlier.
You do not need to run this every minute; it continues to run until it is manually stopped.
From Laravel 5.2 documentation:
Note that once this task has started, it will continue to run until it is manually stopped. You may use a process monitor such as Supervisor to ensure that the queue listener does not stop running.
So maybe you want to look into Supervisor.
Also, if this is helpful at all, you can chain ->everyFiveMinutes() onto $schedule->command(...) (a sketch follows); there are several other frequency methods available as well. See Laravel Scheduling.
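A minimal sketch of that chaining in app\Console\Kernel.php (the command and options are only an example):

protected function schedule(Schedule $schedule)
{
    // queue:listen keeps running once started, so schedule it every five minutes
    // and let withoutOverlapping() skip the run if a listener is already alive
    $schedule->command('queue:listen')
             ->everyFiveMinutes()
             ->withoutOverlapping();
}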

Laravel 4: Queues and Multiple Listeners

If I am running Beanstalk with Supervisor on a server with a Laravel 4 application, and I want it to process all queues asynchronously, as many as it can at the same time, can I have multiple listeners running at the same time? Will they be smart enough not to "take" the same to-do item from the queue, or will they all reach for the same one at the same time and thus not work the way I want? In short, I want to use queues to process multiple tasks at a time; can this be done?
php artisan queue:listen && php artisan queue:listen && php artisan queue:listen
In short, I want to use Queues to process multiple tasks at a time -- can this be done?
In short: yes, it can be done. Every job taken by a worker is locked until it is released, which means other workers will get different jobs to process.
IMO it's better to configure Supervisor to run multiple queue:work commands. Each one will take only one job, process it, and stop execution.
It's not encouraged to run PHP scripts in an infinite loop (as queue:listen does), because after some time they can develop memory issues (leaks, etc.).
You can configure Supervisor to re-run finished workers; a sketch follows.
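A minimal Supervisor sketch of that setup (paths and the program name are hypothetical; numprocs controls how many workers run in parallel, and autorestart re-launches each worker after it finishes its single job):

[program:laravel4-worker]
command=php /path/to/app/artisan queue:work
directory=/path/to/app
process_name=%(program_name)s_%(process_num)s
numprocs=3
autostart=true
autorestart=true
redirect_stderr=true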
