I need to know if there is a way to use the internal Laravel API to force the release of all queued jobs. The reason is that we have a queue implementation with a mechanism that releases a job back onto the queue for 5 minutes if there was a problem during its execution. The problem is that we also need some sort of refresh feature that manually triggers all of those "delayed" jobs, since we need a bit of control over when to run them while keeping the fail-safe mechanism intact. Is there a way to implement this using Laravel?
You can run the php artisan queue:work command to start the queue worker. If you wish to start this from code, you can call the command programmatically.
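For illustration, a rough sketch of the programmatic call (note that queue:work normally blocks in a worker loop, so in recent Laravel versions you would pass a flag such as --once to make it return):

use Illuminate\Support\Facades\Artisan;

// Process a single job from the queue and return,
// instead of blocking in the usual worker loop.
Artisan::call('queue:work', ['--once' => true]);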
Related
I have a Product model with id, name and price.
The price value is stored in an external API and I need to fetch it every minute in order to update it in the database.
Looking through the Laravel documentation I found two ways to implement this:
Create an artisan command (https://laravel.com/docs/8.x/artisan) and add it to task scheduling (https://laravel.com/docs/8.x/scheduling#scheduling-artisan-commands)
Create a job (https://laravel.com/docs/8.x/queues) and add it to task scheduling (https://laravel.com/docs/8.x/scheduling#scheduling-artisan-commands)
First of all, is there any other approach I should take into consideration?
If not, which one of the above would be the best approach and why is it correct for my use case?
As per my comments on one of your previous questions on this topic, whether you use a queue or not depends on your use case.
An Artisan command is a process that executes once and performs a task or tasks and then exits when that task is complete. It is generally run from the command line rather than through a user action. You can then use the task scheduling of your command's host operating system (e.g. a CRON job) to execute that command periodically. It will faithfully execute it when you schedule it to be done.
A Queued job will execute when the Job turns up next in the queue, in priority order. Let's say you send your API call (from your other post) to the queue to be processed. Another system then decides it needs to send out emails urgently (with a higher priority). Suddenly, your Job, which was next, is now waiting for 2000 other Jobs to finish (which might take a half hour). Then, you're no longer receiving new data until your Job executes.
With a scheduled job, you have a time critical system in place. With queues, you have a "when I get to it" approach.
Hope this makes the difference clearer.
With Laravel it is a lot easier to use the built-in scheduler. You have to add only one entry to the crontab, which runs the command php artisan schedule:run on your project EVERY MINUTE. After that you don't have to think about configuring the crontab on the server; you just add commands to the Laravel scheduler and they will work as expected.
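That single crontab entry looks like this (the project path is a placeholder):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1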
You should probably use Cron Job Task Scheduling, which would be the first approach you mentioned.
For this type of use case, commands are commonly the easiest and cleanest approach.
There are a few things to do in order to make it work as expected:
Create a new command that takes care of hitting the endpoint and storing the retrieved data in the database
In the Kernel.php file, register your command and its run frequency (every minute)
Run php artisan schedule:run
You can read more about how to create commands in the Laravel documentation (https://laravel.com/docs/8.x/artisan).
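A rough sketch of both steps (the command name prices:fetch, the API URL, and the response shape are placeholders, not from the original question):

// app/Console/Commands/FetchPrices.php
use App\Models\Product;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Http;

class FetchPrices extends Command
{
    protected $signature = 'prices:fetch';
    protected $description = 'Fetch prices from the external API and update products';

    public function handle()
    {
        // Hypothetical response shape: [{"id": 1, "price": 9.99}, ...]
        $prices = Http::get('https://api.example.com/prices')->json();

        foreach ($prices as $item) {
            Product::where('id', $item['id'])->update(['price' => $item['price']]);
        }
    }
}

// app/Console/Kernel.php -- register it to run every minute
protected function schedule(Schedule $schedule)
{
    $schedule->command('prices:fetch')->everyMinute();
}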
I have a cron job that runs every 5 hours. It calls a PHP script, and this script makes a call to an external API to sync some data.
The problem is that sometimes I get a timeout from the API and the job fails.
Are there any mechanisms to let crontab auto-retry or auto-recover jobs that have failed?
I have tried adding an extra job and calling it manually in case of any failures.
What is the best approach to do so?
Cron only runs at a specific time or every minute/hour/day etc. It doesn't check the return code, so it's not that easy peasy lemon squeezy at all...
In my opinion you have a few options how to do it:
Create some kind of scheduler where you can re-register your CRON job if it fails; in this case you will need one more CRON job to read your scheduler and run the proper command. The scheduler can be database / file / NoSQL based. In the scheduler you can have a flag like (bool) executed which lets the scheduler know which tasks are already done.
Use queues (e.g. RabbitMQ) so a failed task re-queues itself.
Use a framework. I'm using Symfony to manage commands I created and execute them (check the second link below) based on a database, also using the enqueue/enqueue-bundle package to manage queues in Symfony.
I think if you are not so advanced with PHP, I'd recommend going for a self-made scheduler based on a database (MySQL / PostgreSQL / NoSQL) with Symfony (check the second link below). In this case you just have to SELECT all non-executed records (commands) from the database and run them.
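A very rough sketch of that database-backed retry loop in plain PHP (table and column names are invented for illustration; this script would itself run from a second CRON entry):

$pdo = new PDO('mysql:host=localhost;dbname=scheduler', 'user', 'secret');

// Pick up every task that has not completed successfully yet.
$tasks = $pdo->query('SELECT id, command FROM tasks WHERE executed = 0')->fetchAll(PDO::FETCH_ASSOC);

foreach ($tasks as $task) {
    exec($task['command'], $output, $exitCode);

    if ($exitCode === 0) {
        // Mark as done only on success; failed tasks stay in the table
        // and will be retried on the next pass.
        $stmt = $pdo->prepare('UPDATE tasks SET executed = 1 WHERE id = ?');
        $stmt->execute([$task['id']]);
    }
}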
Further reading
Laravel - Queues, retrying failed jobs
Symfony - calling another commands in command
Queues package for PHP (incl. Symfony)
enqueue/enqueue-bundle
What you can do is something like this:
https://crontab.guru/#1_0-23_13__
1 0-23 13 * *
Start the job at 1 minute past every hour on the 13th of each month.
“At minute 1 past every hour from 0 through 23 on day-of-month 13.”
...then in your code you'd have some logic to detect if the process/script already ran correctly... if yes, skip the run attempt; otherwise let it run and then set a flag to check against on the subsequent run attempt.
Hope you get the idea.
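As a crude illustration of that flag idea (runApiSync() and the flag path are hypothetical placeholders for your own sync logic):

// Flag file for today's run; the path is a made-up example.
$flagFile = '/tmp/api-sync-' . date('Y-m-d') . '.done';

// A previous run already succeeded today, so skip this attempt.
if (file_exists($flagFile)) {
    exit(0);
}

// runApiSync() stands in for your existing sync call.
if (runApiSync()) {
    touch($flagFile); // mark success so later attempts skip
}
// On failure no flag is written, so the next scheduled run retries.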
You can use supervisord:
supervisord website
Or handle API timeout in code.
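For example, a minimal supervisord program entry (paths and names are placeholders):

[program:api-sync]
command=php /path/to/sync-script.php
autostart=true
; restart only when the script exits with an unexpected (non-zero) code
autorestart=unexpected
exitcodes=0
stderr_logfile=/var/log/api-sync.err.log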
My problem is more of a fallback issue, I think. I now have two queued jobs in my Laravel job queue, and I am using the database driver. The first command creates credentials for my user on another site via API calls, and the second sends an email for verification and 2FA. Also, there is another command that updates my unit conversion rate.
protected function schedule(Schedule $schedule)
{
    $schedule->command('update:conversionRate')->everyFiveMinutes();
    $schedule->command('queue:work')->everyMinute();
}
Jobs are added to my queue using the dispatch helper and the ShouldQueue interface: the API call uses the dispatch function, while the email uses ShouldQueue.
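Roughly like this, simplified (the class names here are just placeholders):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

// Queued automatically because it implements ShouldQueue.
class SendVerificationEmail implements ShouldQueue
{
    use Queueable;

    public function handle()
    {
        // send the verification / 2FA email here
    }
}

// ...and elsewhere the API-credentials job is pushed with the helper:
dispatch(new CreateRemoteCredentials($user));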
Now, it works in that I can see the jobs in my database. But when the server's cron job runs it crashes, and my log file shows that my MySQL user has reached its maximum connection limit. Hence nobody can access the database using that user account.
So my question is: how do I set up the cron job and queue:work so that they do not crash the server?
As I understand it, your problem is the maximum number of connections to the database.
The first solution, though not the best one, is to increase the connection limit of the database.
The second solution is to work on the queue driver. Have you tried using a driver other than the database, for example Redis or Beanstalkd?
You also run the command every minute. It's bad practice to use a cron job for queues; there is Supervisor for this.
Also, try using the parameters of the queue:work command.
Example
php artisan queue:work --sleep=3 --tries=3 --daemon
--sleep=3 — the worker will pause for 3 seconds before polling again when there are no queue items to process
--tries=3 — if for some reason an item cannot be processed, the worker will try it 3 times and then proceed to the next element; by default it will keep retrying indefinitely
Experiment with these options.
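If you do keep the scheduler approach, here is a hedged sketch (assuming Laravel 5.7+ for the --stop-when-empty flag; withoutOverlapping() stops the scheduler from launching a second worker while one is still running, which is what piles up database connections):

protected function schedule(Schedule $schedule)
{
    $schedule->command('update:conversionRate')->everyFiveMinutes();

    // Drain the queue and exit instead of running forever,
    // and never start a new worker while the previous one is alive.
    $schedule->command('queue:work --stop-when-empty')
             ->everyMinute()
             ->withoutOverlapping();
}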
I want to implement a queue for sending out emails in Laravel. I have the queue working fine, but am worried about efficiency. These are my settings:
I have created the jobs table and set up the .env file to use the queues with my local database.
I have set up this crontab on the server:
* * * * * php /var/www/imagine.dev/artisan schedule:run >> /dev/null 2>&1
And I have set up a schedule in app\Console\Kernel.php, so I don't have to manually enter queue:listen every time through the console.
$schedule->command('queue:listen');
Now to my question: I would like to know if this is efficient. I am worried about having queue:listen running all the time in the background, consuming CPU and memory.
I have been trying to run queue:listen only once every 5 minutes, and then put it to sleep with
$schedule->command('queue:listen --sleep 300');
but again, am not sure if this is the best approach.
Another thing I tried is using queue:work, but this only processes one job at a time.
Ideally, I would like a way to process all the queued jobs every 5 minutes, avoiding constant use of memory and CPU.
What is the best approach?
Not sure which version of Laravel you're using, but I suspect it's 5.2 or earlier.
You do not need to run this every minute; queue:listen continues to run until it is manually stopped.
From Laravel 5.2 documentation:
Note that once this task has started, it will continue to run until it is manually stopped. You may use a process monitor such as Supervisor to ensure that the queue listener does not stop running.
So maybe you want to look into Supervisor
Also, if this is helpful at all, you can chain ->everyFiveMinutes() onto $schedule->command(...). There are several other methods available as well: see Laravel Scheduling.
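For example (emails:send is just an illustrative command name):

$schedule->command('emails:send')->everyFiveMinutes();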
If I am running Beanstalk with Supervisor on a server with a Laravel 4 application, and I want it to process all queues asynchronously -- as many as it can at the same time -- can I have multiple listeners running at the same time? Will they be smart enough not to "take" the same to-do item from the queue, or will they all reach for the same one at the same time, and thus not work in the way I'm wanting? In short, I want to use Queues to process multiple tasks at a time -- can this be done?
php artisan queue:listen && php artisan queue:listen && php artisan queue:listen
In short, I want to use Queues to process multiple tasks at a time -- can this be done?
In short - yes, it can be done. Every job taken by a worker is locked until its release, which means that other workers will get different jobs to process.
IMO it's better to configure Supervisor to run multiple queue:work commands. Each one will take only one job, process it, and stop execution.
It's not encouraged to run PHP scripts in an infinite loop (as queue:listen does), because after some time they can develop memory issues (leaks, etc.).
You can configure Supervisor to re-run finished workers.
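A sketch of such a Supervisor configuration (paths and the worker count are illustrative):

[program:laravel-worker]
command=php /var/www/app/artisan queue:work --tries=3
; required when numprocs > 1 so each process gets a unique name
process_name=%(program_name)s_%(process_num)02d
numprocs=3
autostart=true
; re-run the worker each time it finishes a job and exits
autorestart=true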