I am creating a web application in Laravel in which users bid on multiple games. Bids are placed both by front-end users and by a cron job, which bids on each game every second. As a result, collisions were happening between bids when the same row was accessed at the same time. To resolve this concurrency issue I decided to use Laravel queues for bidding. But I have multiple games, so I want bids for different games to be processed simultaneously; I just don't want bids for the same game to be processed at the same time, because that would reintroduce the concurrency issue. I want to know about the multiple-queue system in Laravel. After searching the internet, I learned about multiple queues, like:
php artisan queue:work --queue=myJobQueue,myJobQueue1,myJobQueue2,...,myJobQueue7
But I am not sure how it works. Can someone explain in detail whether all 7 queues work simultaneously or one by one?
php artisan queue:work --queue=myJobQueue,myJobQueue1,myJobQueue2,...,myJobQueue7 sets the priority in which the queues are processed. With this command, all jobs on myJobQueue will be executed before the worker moves on to jobs on myJobQueue1, then myJobQueue2, and so on in that order.
However, if you want jobs on these queues to be executed simultaneously, you can run a worker for each queue in the background:
php artisan queue:work --queue=myJobQueue & php artisan queue:work --queue=myJobQueue1 & php artisan queue:work --queue=myJobQueue2 &
This runs each worker as a separate process in the background.
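To connect this to the bidding scenario in the question: if you route each game's bids onto a queue of its own and run exactly one worker per queue, bids for the same game are processed strictly one at a time, while different games proceed in parallel. A minimal sketch of the dispatch side, assuming a hypothetical ProcessBid job and a game-{id} queue-naming convention (neither is from the question):

// One queue per game serializes that game's bids;
// the queue name is derived from the game's ID.
dispatch((new ProcessBid($bid))->onQueue('game-' . $game->id));

Each per-game queue then needs its own single worker, e.g. php artisan queue:work --queue=game-1.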
Are you looking for the queue:listen command?
queue:work boots the framework once and keeps processing jobs as they arrive, whereas queue:listen spawns a fresh queue:work process for every job, which is slower but picks up code changes without a restart.
If you run php artisan queue:listen --queue=myJobQueue,myJobQueue1,myJobQueue2,...,myJobQueue7, a single listener watches all 7 queues in that priority order and picks up new jobs on its own.
In your code, you can dispatch jobs like the following:
dispatch((new MyJob)->onQueue('myJobQueue'));
You might want to use a tool like Supervisor to make sure queue:listen is always running in the background.
Hope this helps!
Like Ben V said, it is highly recommended to use Supervisor to keep the workers active at all times, especially if you want to run one or more workers per queue, or if you want the queues to be processed simultaneously.
Here is an example Supervisor configuration file:
[program:laravel-worker-myJobQueue]
process_name=%(program_name)s_%(process_num)s
command=php artisan queue:work --queue=myJobQueue
numprocs=8
autostart=true
autorestart=true
[program:laravel-worker-myJobQueue1]
process_name=%(program_name)s_%(process_num)s
command=php artisan queue:work --queue=myJobQueue1
numprocs=1
autostart=true
autorestart=true
The above configuration creates 8 workers for myJobQueue and one worker for myJobQueue1. Multiple workers can help speed things up, but they can cause trouble for jobs that try to access the same row in the database, in which case you want to limit the queue to a single worker.
You then simply dispatch jobs to the correct queue using
dispatch((new MyJob)->onQueue('myJobQueue'));
or
dispatch((new MyJob)->onQueue('myJobQueue1'));
This might be old, but just in case: all of the answers are on point, but you must set the .env variable QUEUE_CONNECTION to something other than sync. If your configuration is set to sync, Laravel will take every job in order of entry to the queue, finishing one before starting the next. If it is set to database or redis, jobs can be taken in parallel where needed, which is the whole idea of setting priorities. You should check this article (it helped me): https://medium.com/hexavara-tech/optimize-laravel-with-redis-caching-made-easy-bf486bf4c58. You also need to configure your queue connection in config/queue.php, for example in the 'connections' array:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default', // the default queue name used when a job is dispatched without onQueue()
    'retry_after' => 90,
],
The same applies to the other connections in this file.
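For reference, the .env line mentioned above is just:
QUEUE_CONNECTION=database
(On older Laravel versions the variable was named QUEUE_DRIVER instead.)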
If you are testing on a local machine, you can create a .bat file inside your project and enter these lines in it:
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
start "" php artisan queue:work --queue=low
Each line starts one worker process, so these ten lines start ten workers running at once.
Also, I included --queue=low because I have a queue named low.
This is for local machines only; for production, check out Supervisor.
In addition to the other answers here, you can also dispatch a job on a specified queue in Laravel like so:
MyJob::dispatch()->onQueue('myJobQueue');
Or, within the Job constructor:
public function __construct()
{
    $this->onQueue('myJobQueue');
}
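For context, a minimal sketch of what the complete job class might look like with that constructor (the class and queue names are just the placeholders used above):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct()
    {
        // onQueue() comes from the Queueable trait; every dispatch
        // of this job now defaults to the myJobQueue queue.
        $this->onQueue('myJobQueue');
    }

    public function handle()
    {
        // The actual work goes here.
    }
}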
Remember to start your queue from the terminal:
php artisan queue:listen --queue=myJobQueue
I use Laravel jobs and queues to defer some tasks. What I want to do is store the uploaded files on the storage server and then do some processing on them. So I have defined two jobs, and I run them in two queues.
The files are stored in the /tmp/ path, but due to the use of php-fpm and nginx, this path is actually /tmp/systemd-private-e8***2-php-fpm.service-3zCfIf/tmp/
Storage::disk('...') is also used to access these files.
Everything works fine when I run the queue worker with the following command:
php artisan queue:work database --queue=StoreMedia,ProcessMedia --sleep=3 --tries=5
The problem arises when I use Supervisor. In this case, the files are not found ... while they exist ... The path does not seem to be correct, and Storage::disk('...')->exists(...) returns false.
The Supervisor settings are as follows:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/app/artisan queue:work database --queue=StoreMedia,ProcessMedia --sleep=3 --tries=5
autostart=true
autorestart=true
user=root
numprocs=8
redirect_stderr=true
stdout_logfile= /var/www/html/app/storage/logs/worker.log
stopwaitsecs=3600
How should I use Supervisor in this mode? Is the problem with Supervisor's access permissions to the files? I think in this case the path is interpreted as /tmp/x/y.mp4 instead of /tmp/systemd-private-e8***2-php-fpm.service-3zCfIf/tmp/x/y.mp4, and that has caused the problem. What is the solution?
On my Linux server I have the following cron:
* * * * * php /var/www/core/v1/general-api/artisan schedule:run >> /dev/null 2>&1
The CRON works correctly. I have a scheduled command defined in my Kernel.php as such:
protected function schedule(Schedule $schedule)
{
    $schedule->command('pickup:save')
        ->dailyAt('01:00');
    $schedule->command('queue:restart')->hourly();
}
The scheduled task at 1AM runs my custom command php artisan pickup:save. The only thing this command does is dispatch a Job I have defined:
public function handle()
{
    $job = (new SaveDailyPropertyPickup());
    dispatch($job);
}
So this job is dispatched, and since I am using the database driver for my queues, a new row is inserted into the jobs table.
Everything works perfectly up to here.
Since I need a queue listener to process the queue and since this queue listener has to run basically forever, I start the queue listener like this:
nohup php artisan queue:listen --tries=3 &
This writes all output from nohup to a file called nohup.out in my /home directory.
What happens is this: the first time, the queue is processed and the code defined in the handle function of my SaveDailyPropertyPickup job is executed.
AFTER it has executed once, my queue listener just exits. When I check the nohup.out logs, I can see the following error:
In Process.php line 1335:
The process "'/usr/bin/php7.1' 'artisan' queue:work '' --once --queue='default'
--delay=0 --memory=128 --sleep=3 --tries=3" exceeded the timeout of 60 seconds.
I checked this answer, and it says to specify the timeout as 0 when starting the queue listener, but there are also answers recommending against this approach. I haven't tried it, so I don't know if it will work in my situation.
Any recommendations for my current situation?
The Laravel version is 5.4
Thanks
Call it with the --timeout parameter; figure out how long your job takes and scale from there.
nohup php artisan queue:listen --tries=3 --timeout=600
In your config you need to update retry_after; it has to be larger than the timeout, to avoid the same job running twice at the same time. This assumes you use beanstalkd.
'beanstalkd' => [
    ...
    'retry_after' => 630,
    ...
],
In more professional settings, I often end up with one queue for short-running jobs and another for long-running operations.
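As a sketch of that split, you run one worker per queue with a timeout matched to the workload (the queue names short and long are made up for illustration):

php artisan queue:work --queue=short --timeout=60 &
php artisan queue:work --queue=long --timeout=900 &

Jobs are then dispatched onto whichever queue matches their expected runtime, so a slow job can never hold up the fast ones.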
I have a problem with Laravel jobs.
I configured Laravel jobs to work with the database driver, and that part is working.
When I dispatch a job, the entry is created in the database and the constructor is executed.
However, the handle function is never executed ... and the jobs stay in the jobs table.
Has anyone had this problem before?
(I use Laravel 5.7.)
I found the problem...
I'm using a different queue name than the default, and in config/queue.php, in the database array, the default queue name is set to "default".
So when I execute php artisan queue:work, it waits on the default queue.
When I execute php artisan queue:work --queue=QUEUENAME, it works!
Thanks everybody.
If you are using the default queue, you should listen to it with:
php artisan queue:work
or
php artisan queue:work --sleep=1 --tries=5 --timeout=60
If you are not using the default queue, then specify the custom queue:
php artisan queue:work --sleep=1 --tries=5 --timeout=60 --queue=customQueue
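For completeness, the dispatch side has to target that same queue name, otherwise the worker above will never see the job (customQueue here is just the placeholder from the command):

MyJob::dispatch()->onQueue('customQueue');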
I have a Laravel application that sends several emails, and some of these emails have to wait some time before being sent.
So I'm using the database queue driver, and on localhost I run php artisan schedule:run, which executes this command:
$schedule->command('queue:work')->everyMinute();
and it works perfectly.
Now I have moved the project to cPanel shared hosting, and to run the schedule command I created a cron job:
/usr/local/bin/php /path to project/artisan schedule:run
Since I always need to be watching for emails to send, I set the cron job to run every minute, and it works for the first 5 or 10 minutes.
Then I start receiving a 503 error from the server because I hit the process limit, probably because of the cron job executions. And right now the server will be down for 24 hours.
How can I solve this? What is the best solution?
Thank you
I use shared hosting and had a similar issue. If your hosting service allows the PHP function shell_exec(), you could do this:
protected function schedule(Schedule $schedule)
{
    if (!strstr(shell_exec('ps xf'), 'php artisan queue:work'))
    {
        $schedule->command('queue:work --timeout=60 --tries=1')->everyMinute();
    }
}
Your cron job seems OK. By the way, if your hosting server is down for 24 hours, you may want to consider another host, my friend.
queue:work is a long-running process, and this check ensures it is running on your server: it listens to your queue and does the jobs. It also means that if you make changes to your production files, the worker will not pick the changes up. Have a look at my top -ac output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2398733 user 20 0 466m 33m 12m S 0.0 0.1 0:03.15 /opt/alt/php72/usr/bin/php artisan queue:work --timeout=60 --tries=1
2397359 user 20 0 464m 33m 12m S 0.0 0.1 0:03.04 /usr/local/bin/php /home/user/booklet/artisan schedule:run
2398732 user 20 0 105m 1308 1136 S 0.0 0.0 0:00.00 sh -c '/opt/alt/php72/usr/bin/php' 'artisan' queue:work --timeout=60 --tries=1 >> '/home/user/booklet/storage/queue.log' 2>&1
As you can see, the worker is at the top; another process simply writes everything it does to a log file. You have to kill 2398733 after uploading new changes to your prod server. The process will restart by itself in less than 5 minutes, because of the schedule:run cron job.
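A gentler alternative to killing the PID by hand is php artisan queue:restart, which signals running workers to exit gracefully after finishing their current job; the schedule check above then starts a fresh worker with your new code:

php artisan queue:restart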
Update October 2019
protected function schedule(Schedule $schedule)
{
    if (!strstr(shell_exec('ps xf'), 'php artisan queue:work'))
    {
        $schedule->command('queue:work --timeout=60 --tries=1')->withoutOverlapping();
    }
}
The ->withoutOverlapping() method prevents the scheduler from starting a second queue:work process while the previous one is still running, so worker processes do not pile up and the artisan schedule:run command exits properly.
You can prevent this from happening with withoutOverlapping on the cron task.
By default, scheduled tasks will be run even if the previous instance of the task is still running. To prevent this, you may use the withoutOverlapping method:
$schedule->command('emails:send')->withoutOverlapping();
https://laravel.com/docs/5.7/scheduling#preventing-task-overlaps
This way, your cron will restart the queue:work task if it fails for some reason, but it won't fire up multiple instances of it.
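Putting the two pieces together, the scheduler entry could look like this (the flags are illustrative, not prescriptive):

$schedule->command('queue:work --tries=3 --timeout=60')->everyMinute()->withoutOverlapping();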
In the Laravel 4.2 docs it says that if I want to retry a failed job from the failed jobs table, I should do:
php artisan queue:retry 5
where 5 is the job id.
How can I retry all failed jobs at once?
You can retry all failed jobs by running: php artisan queue:retry all
Here is the official doc: https://laravel.com/docs/7.x/queues#retrying-failed-jobs
Laravel docs says:
To retry all of your failed jobs, use queue:retry with all as the ID:
php artisan queue:retry all
However, this doesn't work for me. I get "No failed job matches the given ID.".
What I did was run a command that allows me to execute PHP:
php artisan tinker
And wrote this:
for ($i = 100; $i <= 150; $i ++) Artisan::call('queue:retry', ['id' => $i]);
Here, 100 and 150 are the bounds of your failed job ID range. I used to retrieve them from the DB dynamically, but that won't work if you use another queue driver.
What this does is loop through the IDs in the range you specified and call a php artisan queue:retry XXX command for every single one of them.
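If you are on the database queue driver, here is a sketch of retrieving the IDs dynamically instead of hard-coding the range (this assumes Laravel's default failed_jobs table):

use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\DB;

// Retry every job currently recorded in failed_jobs.
foreach (DB::table('failed_jobs')->pluck('id') as $id) {
    Artisan::call('queue:retry', ['id' => $id]);
}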
I couldn't find an answer to this (I don't think Laravel provides it by default), so I wrote a bash script to retry all the jobs I needed:
#!/bin/bash
for i in {53..800}
do
    php artisan queue:retry $i
done
One way to do this via artisan is to specify a range. Even if not every ID in the range exists, artisan will still fire all of the failed jobs, skipping over the ones it can't find. For example, if you have a bunch of jobs sparsely populated between IDs 200 and 510 you can do:
php artisan queue:retry --range 200-510
I've created a command to execute this operation:
https://gist.github.com/vitorbari/0ed093cf336278311ec070ab22b3ec3d
You can use:
To retry all of your failed jobs, execute the queue:retry command and pass all as the ID:
php artisan queue:retry all
Source : https://laravel.com/docs/9.x/queues#retrying-failed-jobs
Check the result via:
php artisan queue:failed
> No failed jobs!
Important note: if you don't have workers running, remember to run the queue, as this command only pushes the failed jobs back onto the queue and WILL NOT execute them. (Which makes sense, if you think about it.)
Try using the command below in your CLI:
php artisan queue:retry all
This pushes all failed jobs back onto the queue; then run them again with:
php artisan queue:work