I have created a new Job in a Laravel 5.1 app running in a Homestead VM. I've set it to be queued and have code in the handle method.
The handle() method previously expected a param to be passed, but it is no longer required and I've removed the param from the handle method.
However, when the queue runs the job I get an error saying:
[2015-06-17 14:08:46] local.ERROR: exception 'ErrorException' with message 'Missing argument 1 for Simile\Jobs\SpecialJob::handle()' in /home/vagrant/Code/BitBucket/simile-app/app/Jobs/SpecialJob.php:31
line 31 of that file is:
public function handle()
It's no longer expecting any parameters, unless there's a default one that's not documented.
Now ANY changes I make, including commenting out ALL content in the Job file, are not seen when I run the queue. I still get the same error.
I've tried restarting nginx, php5-fpm, supervisor, and beanstalkd, and running: artisan cache:clear, artisan clear-compiled, artisan optimize, composer dumpautoload.
Nothing works.
The only way I can get Laravel to see any updates to the Job file is to restart the VM: vagrant halt, then vagrant up.
The job is triggered in a console command like this:
$this->dispatch(new SpecialJob($site->id));
Here is the full code of the SpecialJob.php file:
http://laravel.io/bin/qQQ3M#5
I tried creating another new Job and tested it; I get the same result.
All other non-job files update instantly, no issue. It's just the Job files, as if an old copy is being cached somewhere I can't find.
When running the queue worker as a daemon, you must tell the worker to restart after a code change.
Since daemon queue workers are long-lived processes, they will not pick up changes in your code without being restarted. So, the simplest way to deploy an application using daemon queue workers is to restart the workers during your deployment script. You may gracefully restart all of the workers by including the following command in your deployment script:
php artisan queue:restart
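For example, a deployment script might end with that restart signal so daemon workers pick up the new code. A minimal sketch (the steps before the restart depend entirely on your own deployment process):

composer install --no-dev
php artisan migrate --force
php artisan queue:restart

Note that queue:restart only tells each worker to exit once its current job finishes; your process monitor (Supervisor, for instance) is what actually starts the workers again.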
Related
I am trying to run a time-consuming script in the background with Laravel 8, but can't quite get it to work. I am trying to follow the docs here https://laravel.com/docs/8.x/queues in combination with the tutorial found here: https://learn2torials.com/a/how-to-create-background-job-in-laravel
As per the docs, I should run the following commands to get started with queues/jobs in Laravel
php artisan queue:table
php artisan migrate
Then we should create our Job with the following command
php artisan make:job TestJob
In App\Jobs is our newly created job file: TestJob.php
Again following the docs, I should put my time consuming script/code in the handle() method of TestJob.php. I have written the following code in handle() for test purposes:
public function handle()
{
    // Do some time-consuming stuff
    sleep(30);
}
Next, according to the docs, we should dispatch our job with the following line of code: TestJob::dispatch(), anywhere in our app. So for test purposes, I put this line directly into our routes file, like this:
Route::get('/', function () {
    // Run this job in the background and continue
    \App\Jobs\TestJob::dispatch();
    // After the job is started/queued, return the view
    return view('welcome');
});
That should be it, as I understand from the docs, but it is not working as I expected. The code in handle() gets executed, but the return view('welcome'); is executed AFTER the job is completed.
I was expecting the script to be executed and, while it runs in the background, the next line of code to be executed. How can I make it run in the background so the user does not have to wait for the script to finish?
I have googled a lot, and according to the tutorial linked to earlier, I should have the following line: QUEUE_DRIVER=database in my .env file. I have set this, and also set it in config/queue.php with the following line: 'default' => env('QUEUE_CONNECTION', 'database'), but still the same result.
I also found the following solution for Laravel 5 here on SO (link), where it is suggested that we should also run php artisan queue:listen, but it's the same result again.
Any help would be much appreciated!
By default the .env file has QUEUE_CONNECTION=sync.
Meaning, the sync connection uses the main thread for the execution of tasks. Hence, it has to first complete before moving on to the next line of code.
To make tasks run in the background so that your main application thread won't block and you can serve client requests more quickly, try using a different connection, e.g. database.
To do this, simply change QUEUE_CONNECTION=database in your .env file.
You may run php artisan queue:listen on your local computer set-up to process tasks as they come in.
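Putting those steps together, a rough local sketch (assuming the database driver and that the jobs table has not been created yet):

# .env
QUEUE_CONNECTION=database

php artisan queue:table
php artisan migrate

# keep this running in a separate terminal
php artisan queue:listen

With the listener running, TestJob::dispatch() returns almost immediately and the sleep(30) happens in the worker process rather than in the web request.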
NOTE: On a production server, it may be more convenient to set up something more robust that automatically restarts your processes if they fail, e.g. a Supervisor configuration.
I have a function that posts some content, pushes a job onto a queue, and returns a response to the user even before the queue completes the job.
For that I changed the .env QUEUE_DRIVER to database, and records are saved in the jobs table, but to execute these jobs I have to call the command php artisan queue:work. And that is my question: how do I call this command in the code, or what should I do whenever there are jobs in the table?
The command
php artisan queue:work
should always be running; it will check whether there are new jobs and process them.
It has to keep running continuously, so you can't execute it from your application code.
You can also run
php artisan queue:work --tries=5
This, for example, will attempt each job up to 5 times before it is marked as failed.
You can also install Supervisor, which will restart queue:work automatically if it stops or fails.
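As a minimal sketch, a Supervisor program entry could look like the following (the program name, paths, and user are placeholders you would adapt; the file usually lives in /etc/supervisor/conf.d/):

[program:laravel-worker]
command=php /var/www/your-app/artisan queue:work --tries=5
autostart=true
autorestart=true
numprocs=1
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/your-app/storage/logs/worker.log

After saving it, run supervisorctl reread, supervisorctl update, and supervisorctl start laravel-worker:* so the worker is started and kept alive.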
Hello, I've set up queues with Laravel 5.1.
I perform an HTTP request (POST), which is routed to the respective controller.
Controller executes the following:
// try saving model
try {
    $lg = new User();
    $lg->fill($request);
    $lg->save();
} catch (QueryException $e) {
    Log::error($e->getCode());
}

// create Job instance
$job = new ProcessUser($lg);

// dispatch job
$queue_id = $this->dispatch($job);
Also, if I
dump($queue_id);
instead of getting the ID key of the queue, I get back 0.
Everything works as expected on my local dev environment with Homestead.
But on production where I have CentOS...
I expected the job just to be queued. Instead it seems like it's processed right away. (I can never see the job inserted in my queue.)
On my server (CentOS 6) I installed supervisor.
And it is stopped:
$ service supervisord status
supervisord is stopped
And also... I highly doubt it could work, since I didn't configure it in
/etc/supervisor.conf
What am I missing?
How can I check how the queue is being processed?
I have never issued any artisan command like
$ php artisan queue:*
Sorry all,
I realised I hadn't configured the .env file properly: it should be set to
QUEUE_DRIVER=database
but it was set to
QUEUE_DRIVER=sync
I didn't know that the "sync" config would process the queue right away...
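For reference, in Laravel 5.1 the default connection comes from config/queue.php, which reads that env variable (a sketch of the stock config line, not the whole file):

// config/queue.php
'default' => env('QUEUE_DRIVER', 'sync'),

With sync, dispatching runs the job handler immediately inside the same request, which is also why $queue_id came back as 0 rather than a real queue ID.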
The Situation
I'm using Laravel Queues to process large numbers of media files; an individual job is expected to take minutes (let's just say up to an hour).
I am using Supervisor to run my queue, and I am running 20 processes at a time. My supervisor config file looks like this:
[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log
In my duplitron-worker.log I noticed Failed: Illuminate\Queue\CallQueuedHandler#call occurs occasionally and I would like to better understand what exactly is failing. Nothing appears in my laravel.log file (which is where exceptions would normally appear).
The Question
Is there a handy way for me to learn more about what is causing my job to fail?
In the newer Laravel versions there's an exception column in the failed_jobs table that has all the info you need. Thanks cdarken and Toskan for pointing this out!
==== OLD METHOD BELOW
Here's what I always do, but first - make sure you have a failed-jobs table! It's well documented, look it up :)
Run the php artisan queue:failed command to list all the failed jobs, and pick the one you're after. Write the ID down.
Then, make sure to stop your queue workers with supervisorctl (e.g. supervisorctl stop duplitron-worker:*).
Lastly, make sure APP_DEBUG=true is set in your .env.
Then run php artisan queue:retry {id}, using the ID you wrote down earlier.
Now manually run php artisan queue:listen --timeout=XXX
If the error is structural (and most are), you should get the failure with debug stack in your log file.
Good luck with debugging :-)
As @cdarken pointed out, the exception can be found in your failed_jobs database table, in the exception column. Thanks @cdarken; I wish his answer were an answer and not a comment.
Run these commands to create a failed_jobs table in the database:
php artisan queue:failed-table
php artisan migrate
Run the queue worker: php artisan queue:work --tries=2
Check the exception reason in the failed_jobs table you've just created.
If you're using the database driver, then go to the failed_jobs table and look for the exception there.
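A quick way to read the latest failure from the terminal is tinker (a small sketch; it just queries the same failed_jobs table):

php artisan tinker
>>> DB::table('failed_jobs')->orderBy('id', 'desc')->first();

The exception field on the returned record contains the full stack trace of the most recent failed job.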
This worked for me:
in vendor/laravel/framework/src/Illuminate/Notifications/SendQueuedNotifications.php
just remove "use Illuminate\Queue\SerializesModels;" on line 6
and modify line 11 to "use Queueable;"
I have added some jobs to a queue in Laravel. However, I forgot to put $job->delete() in the function, and there is an error in my function. This means the job never ends: it keeps getting put back onto the queue and keeps erroring in my log file. How can I delete it from the command line?
I am using beanstalkd for my queuing.
I am using Redis instead of Beanstalkd but this should be the same in both. Restarting Redis doesn't solve the problem. I looked at RedisQueues in the Laravel 4.2 API Docs and found:
public Job|null pop(string $queue = null)
//Pop the next job off of the queue.
This is the same if you look at BeanstalkdQueue.
I threw it into app/routes.php inside dd()*, loaded that page, and voila.
Route::get('/', function() {
    dd(Queue::pop());
    #return View::make('hello');
});
NOTE: Reload the page once per queue.
The job was pulled off the queue. I would like to see a cleaner solution, but this worked for me more than once.
*dd($var) = Laravel's die and dump function = die(var_dump($var))
Edit 1: For Redis
The above obviously isn't the best solution so here is a better way. Be careful!
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails.
For Redis, use FLUSHDB. This will flush the Redis database, not Laravel's database. In the terminal:
$ redis-cli
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> exit
Restart Beanstalk. On Ubuntu:
sudo service beanstalkd restart
I made an artisan command which will clear all the jobs in your queue. You can optionally specify the connection and/or the pipe.
https://github.com/morrislaptop/laravel-queue-clear
Important note: this solution works only for Beanstalkd.
There are two solutions:
1- From Your PHP Code
To delete jobs programmatically, you can do this:
// Queue the job first. (YourJobProcessor is a class with a method called fire, like fire($job, $data))
$res = Queue::later(5, YourJobProcessor::class, $data, 'queue_name');
// Get the job you just pushed from the queue
$job = Queue::getPheanstalk()->useTube("queue_name")->peek($res);
// Delete that job from the queue
$res = Queue::getPheanstalk()->useTube("queue_name")->delete($job);
If everything went well, the job will not execute; otherwise, the job will execute after 5 seconds.
2- From Command Line (Linux and Mac only)
From the command line (on Linux and Mac) you can use beanstool.
For example, if you want to delete 100 ready jobs from the queue_name tube you can do the following:
for i in {1..100}; do beanstool delete -t queue_name --state=ready; done
For Redis users, instead of flushing, using redis-cli I ran this command:
KEYS *queue*
on the Redis instance holding queued jobs,
then deleted whatever keys were in the response:
DEL queues:default queues:default:reserved
The only way I could do it was to restart my computer. I could not find a way to delete a job.
I've used this php-based web admin console in the past.
Otherwise, I believe you'll find yourself using Terminal + telnet, although I can't find any documentation for deleting via telnet (just viewing a list of jobs in the queue).
It seems that most articles tell you to use your code + library of choice and loop over queued jobs to delete them in this situation.
Here is a Laravel 5.1 compatible command which allows you to clear a Beanstalkd queue. The command takes the queue name as an argument ('default' by default). Do not forget to register it in app/Console/Kernel.php.
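The command's code is not included above, but a minimal sketch of what it might look like follows. It assumes beanstalkd is your default queue connection (so Queue::getPheanstalk() is available, as in the earlier snippet) and uses the Pheanstalk client that ships with that driver; the class name and signature are placeholders:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Queue;
use Pheanstalk\Exception\ServerException;

class ClearBeanstalkdQueue extends Command
{
    // The queue (tube) name argument defaults to 'default', matching the description above
    protected $signature = 'beanstalkd:clear {queue=default}';

    protected $description = 'Delete all ready, delayed and buried jobs from a Beanstalkd tube';

    public function handle()
    {
        $tube = $this->argument('queue');

        // getPheanstalk() is exposed by the Beanstalkd queue connection
        $pheanstalk = Queue::getPheanstalk();

        // Peek at each job state in turn and delete until nothing is left
        foreach (['peekReady', 'peekDelayed', 'peekBuried'] as $peek) {
            try {
                while ($job = $pheanstalk->{$peek}($tube)) {
                    $pheanstalk->delete($job);
                }
            } catch (ServerException $e) {
                // Pheanstalk throws NOT_FOUND once there are no more jobs in that state
            }
        }

        $this->info("Cleared Beanstalkd tube '{$tube}'.");
    }
}

To register it, add the class to the $commands array in app/Console/Kernel.php.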