I am running into an issue since I switched to Redis as my queue driver in Laravel. I am dispatching jobs, but they aren't always being picked up by the queue. I am testing this by dispatching the job in Tinker while php artisan queue:work runs in a separate terminal, and sometimes I have to dispatch the job two or three times before it is picked up.
Here is the job I am dispatching:
namespace App\Jobs;
use App\Events\GameFunction;
use App\Events\GameUpdate;
use App\Http\Livewire\GolfGame;
use App\Models\Cards;
use App\Models\Games;
use App\Models\Scores;
use App\Models\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;
use romanzipp\QueueMonitor\Traits\IsMonitored;
class BotPlay implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    use IsMonitored;

    public $game;

    // The rest of the class body was trimmed in the original post; the
    // dispatch call below implies a constructor along these lines.
    public function __construct(Games $game)
    {
        $this->game = $game;
    }

    public function handle()
    {
        // bot-turn logic omitted in the original post
    }
}
I am calling it in Tinker like this: BotPlay::dispatch($game); and getting Illuminate\Foundation\Bus\PendingDispatch as the response each time.
Is there something I need to do differently with Redis when dispatching the job?
Thanks!
Please check ...
Have you started a queue worker?
If not, start one by running the Artisan command below:
php artisan queue:work
Hope this is helpful.
This turned out to be a mistake in my Laravel setup: I had two workers running, both picking up jobs, so the jobs didn't always appear in the queue I was watching but were being picked up by the other worker. With a single worker it works as expected.
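One way to avoid this kind of mixup is to pin both the dispatch and the worker to an explicitly named queue, so you can see exactly which worker should pick the job up. A minimal sketch (the queue name bots is an arbitrary example):

// Dispatch onto an explicitly named queue.
BotPlay::dispatch($game)->onQueue('bots');

Then run a worker that listens only on that queue:

php artisan queue:work redis --queue=bots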
Related
My problem is more of a follow-up, I think. I now have two queued jobs in my Laravel job queue, and I am using the database driver. The first command creates credentials for my user on another site via API calls, and the second sends an email for verification and 2FA. There is also another command that updates my unit conversion rate.
protected function schedule(Schedule $schedule){
$schedule->command('update:conversionRate')->everyFiveMinutes();
$schedule->command('queue:work')->everyMinute();
}
Jobs are added to my queue using the dispatch helper and the ShouldQueue interface: the API call uses the dispatch function, while the email uses ShouldQueue.
It works, in the sense that I can see the jobs in my database. But when the server's cron job runs, it crashes, and my log file shows that my MySQL user has reached its maximum connection limit, so nobody can access the database using that user account.
So my question is: how do I set up the cron job and queue:work so that they do not crash the server?
As I understand it, your problem is the maximum number of connections to the database.
The first solution, though not the best one, is to increase the database's connection limit.
The second solution is to change how you work with the queue. Have you tried a driver other than database, for example redis or beanstalkd?
You are also running the command every minute. It is bad practice to drive queues from a cron job; Supervisor exists for this.
Also, try using the parameters of the queue:work command.
Example
php artisan queue:work --sleep=3 --tries=3 --daemon
--sleep=3 tells the worker how long to pause when no jobs are available, so it does not poll the queue in a tight loop.
--tries=3 means that if an item fails for some reason, it will be attempted 3 times before the worker moves on to the next item; by default it would keep retrying indefinitely.
Experiment with these options.
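If you do stay on cron for now, here is a hedged sketch of a schedule that avoids piling up workers (and database connections): --once makes the worker process a single job and exit, and withoutOverlapping() stops the scheduler from launching a new run while the previous one is still going.

protected function schedule(Schedule $schedule)
{
    $schedule->command('update:conversionRate')->everyFiveMinutes();

    // One job per run, and never two workers at once.
    $schedule->command('queue:work --once')->everyMinute()->withoutOverlapping();
}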
I have a problem running long-lived queue workers; I am currently using Laravel 5.0. I used to queue jobs on the database and had no problem with that, but I needed to move off the DB, so I went to RabbitMQ and am integrating this package:
https://github.com/vyuldashev/laravel-queue-rabbitmq/tree/v5.0
Everything works well with short jobs, ones taking less than 3 or 4 minutes, but I am trying to run a queue listener for jobs that take more than 10 minutes. The problem is they never get acknowledged: they remain unacked, and after exactly 16.6 minutes (the default TTL) the worker moves on to the next job with the first still not acked. I also sometimes get a broken pipe or broken connection when the process takes too long.
I believe the problem is with the worker itself, not the package I am using. Here are two examples of the queue listener I am trying to use; could you advise how to use them better, or what options I should pass to them:
php artisan queue:listen rabbitmq --queue=QUEUENAME --timeout=0 --tries=2
php artisan queue:work rabbitmq --queue=QUEUENAME --daemon --tries=2
You can set the $timeout per job like so:
namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class LongProcessJob implements ShouldQueue
{
    /**
     * The number of seconds the job can run before timing out.
     * @var int
     */
    public $timeout = 120;
}
see Laravel Queues for more details.
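The worker also enforces its own timeout, and a per-job $timeout overrides the command-line value only for that job, so for long jobs it helps to raise the worker's limit as well. A sketch, assuming the longest job needs up to 15 minutes:

php artisan queue:work rabbitmq --queue=QUEUENAME --tries=2 --timeout=900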
I want to queue mails as explained at https://laravel.com/docs/5.5/mail#queueing-mail.
This is what I did so far:
I changed QUEUE_DRIVER in the .env file
QUEUE_DRIVER=database
I created the jobs table:
php artisan queue:table
php artisan migrate
I add a mail to the queue like this:
Mail::to($request->user())
->queue(new OrderShipped($order));
I set up a cron job that will send the queued mails, as explained in the docs, like this:
protected function schedule(Schedule $schedule)
{
$schedule->command('queue:work --once')->everyMinute();
}
If I instead wrote $schedule->command('queue:work')->everyMinute(); the worker process would never stop, so the server would eventually be very busy with many parallel worker processes, right?
Did I miss anything important in order to queue mails with Laravel? Also, if I wanted to send at most 5 mails every minute, how could I achieve that?
I think that instead of a cron job, it is better to set up a Supervisor configuration. It will monitor the queue workers, and it can be easily configured using the following documentation:
https://laravel.com/docs/5.5/queues#supervisor-configuration
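A minimal sketch of such a Supervisor program entry, assuming the project lives at /var/www/app (adjust the paths and process count to your setup):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log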
I think that instead of starting the command below every minute:
php artisan queue:work --once
it's better to start the queue worker once with a sleep timer:
php artisan queue:work --sleep=60
this makes the worker sleep 60 seconds whenever the queue is empty, so roughly one job per minute as mails trickle in (jobs already waiting in the queue are still processed back to back). To handle up to 5 jobs per minute, reduce the sleep time:
php artisan queue:work --sleep=12
I want to run asynchronous Laravel jobs that keep working forever. As far as I understand, I need to set up Jobs and push them onto separate queues.
I have set QUEUE_DRIVER=database in .env and run php artisan queue:table and php artisan migrate accordingly,
and I have run php artisan make:job MyJob.
(At this point the jobs table is empty, though; I don't know if I did something wrong.)
The part that mainly confused me: how do the jobs get started and run forever, and how is a job run initially?
As far as I understand, to trigger the job I need to call:
MyFirstJob::dispatch();
but where do I need to call it so that it runs all the time, forever?
You need to put all your jobs, e.g.
$schedule->job(new Job1)->everyMinute();
$schedule->job(new Job2)->everyMinute();
$schedule->job(new Job3)->everyMinute();
inside the schedule() function in app/Console/Kernel.php, and then the scheduler will handle all the jobs.
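Put together, a sketch of that Kernel using the MyFirstJob from the question (note that the scheduler itself still needs the standard cron entry running php artisan schedule:run every minute, and a queue worker must be running to process the queued jobs):

namespace App\Console;

use App\Jobs\MyFirstJob;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Push MyFirstJob onto the queue every minute; a running
        // worker (php artisan queue:work) will pick it up.
        $schedule->job(new MyFirstJob)->everyMinute();
    }
}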
You can get a better idea from this link:
https://spiderwebsolutions.com.au/laravel-5-1-and-job-queues-tutorial/
I have a class in my Symfony 2.3 project that makes some HTTP requests and takes a while to run.
I would like to run this task as a background process, so that the server returns an answer to the client and the background process continues running.
Do you know how to do that in Symfony?
I found the Process Component: http://symfony.com/doc/current/components/process.html but I am not sure if I can run a class method from there.
A simple way to do this is to separate the heavy lifting from the response by using a queue and a symfony command to process the queue.
http://symfony.com/doc/current/components/console/introduction.html
Create a symfony command that processes the jobs added to a queue, then add the work to be done to the queue from your controller. The queue will probably be implemented as a database table of jobs.
That way you can return a success response to the user and run a cron job on the server regularly to process the work you require.
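A rough sketch of such a command for Symfony 2.3; the app:process-queue name and the app.job_repository service are hypothetical placeholders for however you store the queued work:

<?php

namespace AppBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ProcessQueueCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('app:process-queue')
             ->setDescription('Process pending jobs from the queue table');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // Hypothetical service that reads pending jobs from a database table.
        $jobs = $this->getContainer()->get('app.job_repository')->findPending();

        foreach ($jobs as $job) {
            // ... run the slow HTTP requests for this job here ...
            $output->writeln(sprintf('Processed job %s', $job->getId()));
        }
    }
}

A cron entry such as * * * * * php app/console app:process-queue then drains the queue regularly while your controller returns immediately.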
This is something you could easily do with the enqueue library. First, you can choose from a variety of transports, such as AMQP, STOMP, Redis, Amazon SQS, Filesystem, and so on.
Second, it is super easy to use. Let's start with installation:
You have to install the enqueue/enqueue-bundle library and one of the transports. Assuming you choose the filesystem transport, that's the enqueue/fs library:
composer require enqueue/enqueue-bundle enqueue/fs
Now let's see how you can send messages from your POST script:
<?php

use Enqueue\Client\ProducerInterface;
use Symfony\Component\DependencyInjection\Container;

/** @var Container $container */
/** @var ProducerInterface $producer */
$producer = $container->get('enqueue.client.producer');
$producer->sendCommand('a_background_task', 'task_data');
For the consumption side, you have to create a processor service and tag it with the enqueue.client.processor tag:
<?php

use Enqueue\Client\CommandSubscriberInterface;
use Enqueue\Psr\PsrContext;
use Enqueue\Psr\PsrMessage;
use Enqueue\Psr\PsrProcessor;

class BackgroundTask implements PsrProcessor, CommandSubscriberInterface
{
    public static function getSubscribedCommand()
    {
        return 'a_background_task';
    }

    public function process(PsrMessage $message, PsrContext $context)
    {
        // do the job
        return self::ACK;
    }
}
And run a consumer with a command:
./bin/console enqueue:consume --setup-broker -vvv
In production you will most likely need more than one consumer, and if a process exits it has to be restarted. To address this you need some sort of process manager. There are several options:
http://supervisord.org/ - requires an extra service, and it has to be configured properly.
A pure PHP process manager like this, based on the Symfony Process component and pure PHP code. It can handle process restarts, correct exit on a SIGTERM signal, and a lot more.
A PHP/Swoole process manager like this. It requires the Swoole PHP extension, but its performance is amazing.