I want to queue mails as explained at https://laravel.com/docs/5.5/mail#queueing-mail.
This is what I did so far:
I changed QUEUE_DRIVER in the .env file
QUEUE_DRIVER=database
I created the jobs table:
php artisan queue:table
php artisan migrate
I add a mail to the queue like this:
Mail::to($request->user())
->queue(new OrderShipped($order));
I set up a cron job that will send the queued mails, as explained in the docs:
protected function schedule(Schedule $schedule)
{
    // note: $schedule->command() expects the artisan command name,
    // not the full "php artisan ..." string
    $schedule->command('queue:work --once')->everyMinute();
}
If I wrote only $schedule->command('queue:work')->everyMinute(); then the worker process would never stop, so the server would eventually be very busy with a lot of parallel worker processes, right?
Did I miss anything important in order to queue mails with Laravel? Also, if I wanted to send at most 5 mails every minute, how could I achieve that?
I think instead of a cron job, it is better to set up a Supervisor configuration. It will help to monitor the queue jobs, and it can be easily configured by following the documentation:
https://laravel.com/docs/5.5/queues#supervisor-configuration
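For reference, a minimal sketch of such a Supervisor program section, along the lines of the linked docs (path, user and process count are placeholders to adapt to your server):
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; placeholder path to your application's artisan script
command=php /var/www/app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
; number of parallel workers
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/app/worker.log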
I think instead of starting the command
php artisan queue:work --once
every minute, it's better to start the queue worker once and add a sleep timer:
php artisan queue:work --sleep=60
This would do roughly one job per minute (strictly speaking, --sleep is the number of seconds the worker pauses when no job is available, so a backlog of queued jobs is still processed back to back). If one wants to do about 5 jobs every minute, one can reduce the sleep time:
php artisan queue:work --sleep=12
I'm trying to dispatch a job on my local dev machine.
I set up my .env and queue config: QUEUE_CONNECTION=database
I also ran the migrations:
php artisan queue:table
php artisan migrate
Then I dispatch my job with something like: myjob::dispatchNow();
And finally run my worker: php artisan queue:work
With all that, the code launches and executes well, but not the job. The job is never created in the database jobs table (nor in failed_jobs).
Am I missing any step?
Do I need something else on my local machine to run queued jobs?
Thanks for any help.
You are using synchronous dispatching by calling myjob::dispatchNow(). The database does not need to be involved, as the job runs during the same request and your queue worker never knows about it. It is the equivalent of having QUEUE_CONNECTION=sync.
If you use myjob::dispatch() before you start the queue worker, you will see the job in your database.
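To illustrate the difference (MyJob stands in for your job class):
// queued: writes a record to the jobs table, to be picked up by queue:work
MyJob::dispatch();
// synchronous: runs handle() immediately in the current request,
// so the jobs table is never touched
MyJob::dispatchNow();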
I want to run Laravel jobs asynchronously and have them work forever. As far as I understand, I need to set up Jobs and push them onto separate queues.
I have set .env - QUEUE_DRIVER=database and run php artisan queue:table and php artisan migrate accordingly.
and I have run php artisan make:job MyJob
(at this point the jobs table is empty, though I don't know if I did something wrong)
The point that mainly confused me is: how is it going to start all the jobs and run them forever, or run a job in the first place?
As far as I understand, to trigger the job I need to call:
MyFirstJob::dispatch();
but where do I need to call it so that it runs all the time, forever?
You need to register all your jobs, like below, inside the schedule() method in app/Console/Kernel.php, and then the scheduler will handle all the jobs:
$schedule->job(new Job1)->everyMinute();
$schedule->job(new Job2)->everyMinute();
$schedule->job(new Job3)->everyMinute();
You can get a better idea from this link:
https://spiderwebsolutions.com.au/laravel-5-1-and-job-queues-tutorial/
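Keep in mind that the scheduler itself only fires if a single cron entry invokes it every minute, per the Laravel docs (the project path is a placeholder):
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1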
I'm implementing a command that will process uploaded files.
The files can contain up to 300MB of data, so the job needs to be queued and I also expect that it takes a while to complete.
My problem is, when I run php artisan queue:listen it gets the job from the queue and starts processing it normally, but after around 20 seconds it freezes. The job doesn't throw any exception, and it doesn't continue either, so it's not removed from the queue.
I'm using the database driver. Am I missing something here?
php artisan queue:listen does not output errors for the user. Run php artisan queue:work and it will output the errors. This command will only run one job from the queue, so you need to make sure that the next job is the one you want to debug.
Maybe it does not freeze; it only seems frozen because you see nothing happening in the command line interface after you run php artisan queue:work or php artisan queue:listen.
That was my case when executing Laravel queues by queue name on the command line. I was running the command
php artisan queue:work
which was not running my queued jobs in the jobs table. Then I realized it was only working for jobs with a queue column value of 'default', and I had given them names like sendemail, inboxemail etc. So when I changed this value to 'default' in the queue column of the jobs table, the job ran instantly, since I had the CLI open and the php artisan queue:work command was active.
So if you want to run only a specific queue by queue name, run:
php artisan queue:listen --queue=sendemail
or
php artisan queue:listen --queue=inboxemail
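The queue name comes from how the job is dispatched; as a sketch (SendEmailJob is a placeholder class), a worker started with --queue=sendemail would pick up a job dispatched like this:
// push the job onto the 'sendemail' queue instead of 'default'
SendEmailJob::dispatch()->onQueue('sendemail');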
For anyone making the same mistake as I did: DO NOT CREATE TABLES WITH FOREIGN KEYS REFERENCING THE "JOBS" TABLE!
It's not a memory leak or anything like that. I had another table, job_info, with a foreign key referring to the jobs table. As it turned out, Laravel wasn't able to remove the job from the table after it finished successfully (as it normally does), because deleting the row would break the relationships in the database. So it kept retrying until max attempts were exceeded, leading to an exception with no information about the actual problem.
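For illustration, a hypothetical migration like this is the kind of thing to avoid, because the constraint blocks the worker from deleting completed jobs:
// DON'T do this: job rows are deleted by the worker on success,
// and this foreign key makes that deletion fail
Schema::create('job_info', function (Blueprint $table) {
    $table->bigIncrements('id');
    $table->unsignedBigInteger('job_id');
    $table->foreign('job_id')->references('id')->on('jobs');
});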
I'm having a bit of trouble grasping the concept of queues in Laravel 5.0. From what I understand, queues store a list of Commands to be executed either by the php artisan queue:listen command or the php artisan queue:work --daemon command.
Correct me if I'm wrong, but php artisan queue:listen just waits until there is a command in the queue and then executes it, right? Then what does the php artisan queue:work --daemon command do in comparison? Does it only work on one command at a time?
Anyway, the task I wish to complete is this: I want to periodically check if there are commands in the queue, and if there are, I wish to execute them. Since this is a periodic problem, I assume cron would be used, but how do I check if there are outstanding commands to be executed? Would I run a query on my queue table? If so, how would I dispatch the command? Or should I just cron the php artisan queue:listen command?
The main difference between queue:listen and queue:work is that the second one only checks if there's a job waiting, takes it, and handles it. That's it.
The listener, though, runs as a background process checking for available jobs all the time. If there's a new job, it is then handled (the process is slightly more involved, but that's the main idea).
So basically, if you need to handle your commands (jobs) as soon as they appear, you might want to use the queues.
And if you need to do something periodically (for example every 2 hours, or every Monday at 9 AM etc.), you should go with cron + Schedule.
I wouldn't recommend combining them as you described. Also keep in mind that jobs can be delayed if you don't want them to be handled ASAP.
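A minimal sketch of such a delayed push with the Laravel 5-era queue facade (SendReminder is a placeholder command/job class):
// push SendReminder onto the queue, to be handled no earlier than 10 minutes from now
Queue::later(600, new SendReminder());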
If I am running Beanstalk with Supervisor on a server with a Laravel 4 application, and I want it to process all queues asynchronously -- as many as it can at the same time -- can I have multiple listeners running at the same time? Will they be smart enough to not "take" the same to-do item from the queue, or will they all reach for the same one at the same time, and thus not work in the way I'm wanting? In short, I want to use Queues to process multiple tasks at a time -- can this be done?
# backgrounding each listener with & runs them in parallel
php artisan queue:listen & php artisan queue:listen & php artisan queue:listen &
In short, I want to use Queues to process multiple tasks at a time -- can this be done?
In short: yes, it can be done. Every job taken by a worker is locked until it is released, which means that other workers will get different jobs to process.
IMO it's better to configure Supervisor to run multiple queue:work commands. Each one will take only one job, process it, and stop execution.
It's not encouraged to run PHP scripts in an infinite loop (as queue:listen does), because after some time they can have memory issues (leaks etc.).
You can configure Supervisor to re-run finished workers.
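A hedged fragment of what that looks like in a Supervisor program section (names and paths are placeholders):
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; placeholder path; in Laravel 4 queue:work processes one job and exits
command=php /var/www/app/artisan queue:work
; eight workers pulling jobs in parallel
numprocs=8
; re-run each worker as soon as it finishes
autorestart=true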