What would prevent jobs in a queue from processing? [PHP / Laravel 5]

I have a queue I set up in Laravel 5 to delete companies and associated records. Each time this happens, a lot of work happens on the back-end, so queues are my best option.
I set up my config/queue.php file along with my .env file so that the database driver will be used. I am using the Queue::pushOn method to push jobs onto a queue called company_deletions. Ex.
Queue::pushOn('company_deletions', new CompanyDelete($id));
Where CompanyDelete is a command created with php artisan command:make CompanyDelete --queued
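For reference, the generated class looks roughly like this (a sketch of what the Laravel 5.0 --queued stub produces, not my exact code):
<?php namespace App\Commands;

use App\Commands\Command;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Bus\SelfHandling;
use Illuminate\Contracts\Queue\ShouldBeQueued;

class CompanyDelete extends Command implements SelfHandling, ShouldBeQueued {

    use InteractsWithQueue, SerializesModels;

    protected $id;

    // Store the company id so it gets serialized into the queued payload.
    public function __construct($id)
    {
        $this->id = $id;
    }

    // Called by the queue worker when the job is processed.
    public function handle()
    {
        // delete the company and its associated records here
    }
}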
I have tried to get my queue to process using the following commands:
php artisan queue:work
php artisan queue:work company_deletions
php artisan queue:listen
php artisan queue:listen company_deletions
php artisan queue:work database
php artisan queue:listen database
Sometimes when looking at the output of the above commands, I get the following error:
[InvalidArgumentException]
No connector for []
Even when I don't get an error, I cannot get it to actually process the jobs for some reason. When I look in my jobs table, I can see the job on the queue, however the attempts column shows 0, reserved shows 0, and reserved_at is null. Am I missing some steps? I have looked over the documentation several times and cannot for the life of me figure out what is wrong. I don't see anything in the Laravel error logs either. What would prevent these jobs from being processed once they are in the jobs database? Any help is appreciated.

I ran into a similar issue because I wasn't adding the jobs to the default queue:
$job = (new EmailJob(
    $this->a,
    $this->b,
    $this->c,
    $this->d,
    $e
))->onQueue('emails');
Then I have to listen to that specific queue:
php artisan queue:listen --queue=emails
In your case it would be:
php artisan queue:listen --queue=company_deletions
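Since your job was pushed with Queue::pushOn('company_deletions', ...), you can also name the connection explicitly at the same time, which should rule out the "No connector for []" error as well (database here is assumed from your config/queue.php):
php artisan queue:listen database --queue=company_deletions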

Related

Laravel dispatch job run async

I have a function that posts some content, pushes a job onto a queue, and returns a response to the user even before the queue completes the job.
For that I changed QUEUE_DRIVER in .env to database, and the records are saved in the jobs table, but to execute these jobs I have to call the command php artisan queue:work. That is my question: how do I call this command from the code, or what should I do whenever there are jobs in the table?
The command
php artisan queue:work
should always be running; it will check for new jobs and dispatch them as they arrive.
It has to keep running in the background, though, so you can't launch it from your application code.
You can also run
php artisan queue:work --tries=5
which, for example, will attempt each job up to 5 times before marking it as failed.
Plus, you can install Supervisor; it will restart queue:work automatically whenever it fails.
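For example, a minimal Supervisor program entry might look like this (the paths and program name are placeholders, adjust them to your project):
[program:laravel-worker]
command=php /path/to/your/project/artisan queue:work --tries=5
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/path/to/your/project/storage/logs/worker.log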

Laravel - Artisan::call() doesn't take argument

I'm using Laravel 5.4 and PHP 7.0.
I have a lot of failed jobs in the table that I want to re-queue. I have written a script to go through a list of IDs that I pulled from the database and I want a foreach to re-queue each one. Pretty simple stuff.
My issue is that when I run
foreach ($jobsToRetry as $failedJob) {
    Artisan::call('queue:retry '.$failedJob);
}
I receive the following error:
Command "queue:retry 1" is not defined.
Did you mean one of these?
queue:failed
queue:failed-table
queue:flush
queue:forget
queue:listen
queue:restart
queue:retry
queue:table
queue:work
It needs to be using the command "queue:retry" and have the parameter separate but I just can't figure out how to get that to work.
Pass the parameter in the arguments array:
Artisan::call('queue:retry', ['id' => $failedJob]);
Or, if you need to pass an option rather than an argument, prefix its name with --:
Artisan::call('queue:retry', ['--yourparameter' => $failedJob]);
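So, assuming $jobsToRetry still holds the failed job IDs from your script, the full loop becomes:
// Pass the id as a separate argument instead of concatenating it into the command string.
foreach ($jobsToRetry as $failedJob) {
    Artisan::call('queue:retry', ['id' => $failedJob]);
}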

Laravel 5.5 Job with delay fires instantly instead of waiting

In my application I am dispatching a job onto the work queue with a delay time, but it runs instantly instead of waiting for the delay. In my config and .env I am using the database driver.
No job has been inserted into my database jobs table so far.
My config:
'default' => env('QUEUE_DRIVER', 'database')
My controller code:
Log::info('Request Status Check with Queues Begins', ['method' => __METHOD__]);
MyGetInfo::dispatch($this->name,$this->password,$this->id,$trr->id)->onQueue('work')->delay(12);
return json_encode($data);
The value of QUEUE_DRIVER must be set to database in the .env file.
Make sure to run this afterwards:
php artisan config:clear
Also run:
php artisan queue:listen
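Putting it together, a sketch of the working setup (the 12-second delay and the work queue are taken from the question; the Carbon helper is just one way to express the delay):
In .env:
QUEUE_DRIVER=database
In the controller:
MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)
    ->onQueue('work')
    ->delay(now()->addSeconds(12));
And since the job sits on the work queue, the worker has to be pointed at it:
php artisan queue:listen --queue=work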

How can I learn more about why my Laravel Queued Job failed?

The Situation
I'm using Laravel Queues to process large numbers of media files; an individual job is expected to take minutes (let's just say up to an hour).
I am using Supervisor to run my queue, and I am running 20 processes at a time. My supervisor config file looks like this:
[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log
In my duplitron-worker.log I noticed Failed: Illuminate\Queue\CallQueuedHandler#call occurs occasionally and I would like to better understand what exactly is failing. Nothing appears in my laravel.log file (which is where exceptions would normally appear).
The Question
Is there a handy way for me to learn more about what is causing my job to fail?
In the newer Laravel versions there's an exception column in the failed_jobs table that has all the info you need. Thanks cdarken and Toskan for pointing this out!
==== OLD METHOD BELOW
Here's what I always do, but first - make sure you have a failed-jobs table! It's well documented, look it up :)
Run the php artisan queue:failed command to list all the failed jobs, and pick the one you're after. Write the ID down.
Then, make sure to stop your queue workers with supervisorctl stop duplitron-worker:*
Lastly, make sure your .env setting for APP_DEBUG = true.
Then run php artisan queue:retry {step_job_1_id}
Now manually run php artisan queue:listen --timeout=XXX
If the error is structural (and most are), you should get the failure with debug stack in your log file.
Good luck with debugging :-)
As #cdarken pointed out, the exception can be found in the failed_jobs database table, in the exception column. Thanks #cdarken, I wish this had been posted as an answer rather than a comment.
Run these commands to create the failed_jobs table in the database:
php artisan queue:failed-table
php artisan migrate
Run the queue worker: php artisan queue:work --tries=2
Check the exception reason in the failed_jobs table you've just created.
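If you'd rather stay in the terminal, here is a quick sketch using php artisan tinker to read the most recent exception (the column names are those of the default failed_jobs migration):
php artisan tinker
// Fetch the exception text of the most recently failed job
DB::table('failed_jobs')->orderBy('failed_at', 'desc')->value('exception');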
If you're using the database driver, then go to the failed_jobs table and look for the exception there.
This worked for me:
in vendor/laravel/framework/src/Illuminate/Notifications/SendQueuedNotifications.php
just remove "use Illuminate\Queue\SerializesModels;" on line 6
and modify line 11 to "use Queueable;".

laravel 5.1 not seeing changes to Job file without VM restart

I have created a new Job in a Laravel 5.1 app, running in a Homestead VM. I've set it to be queued and have code in the handle method.
The handle() method previously expected a param to be passed, but it is no longer required and I've removed the param from the handle method.
However, when the queue runs the job I get an error saying:
[2015-06-17 14:08:46] local.ERROR: exception 'ErrorException' with message 'Missing argument 1 for Simile\Jobs\SpecialJob::handle()' in /home/vagrant/Code/BitBucket/simile-app/app/Jobs/SpecialJob.php:31
line 31 of that file is:
public function handle()
It's no longer expecting any parameters, unless there's a default one that's not documented.
Now ANY changes I make, including commenting out ALL content in the Job file, are not seen when I run the queue. I still get the same error.
I've tried restarting nginx, php5-fpm, supervisor, and beanstalkd, and running: artisan cache:clear, artisan clear-compiled, artisan optimize, composer dumpautoload.
Nothing works.
The only way I can get Laravel to see any updates to the Job file is to restart the VM: vagrant halt, then vagrant up.
The job is triggered in a console command like this:
$this->dispatch(new SpecialJob($site->id));
Here is the full code of the SpecialJob.php file:
http://laravel.io/bin/qQQ3M#5
I tried creating another new Job and tested it; I get the same result.
All other non-Job files update instantly, no issue. It's just the Job files, as if an old copy is being cached somewhere I can't find.
When running the queue worker as a daemon, you must tell the worker to restart after a code change.
Since daemon queue workers are long-lived processes, they will not pick up changes in your code without being restarted. So, the simplest way to deploy an application using daemon queue workers is to restart the workers during your deployment script. You may gracefully restart all of the workers by including the following command in your deployment script:
php artisan queue:restart
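During development on Homestead you can also avoid the long-lived worker entirely; a sketch, assuming the default connection:
php artisan queue:listen
Because queue:listen starts a fresh worker process for each job, it picks up code changes without a restart, at the cost of being slower than a daemon worker.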
