Hello, I've set up queues with Laravel 5.1.
I perform an HTTP POST request, which is routed to the respective controller.
The controller executes the following:
// try saving the model
try {
    $lg = new User();
    $lg->fill($request->all()); // fill() expects an array, not the Request object itself
    $lg->save();
} catch (QueryException $e) {
    Log::error($e->getCode());
}

// create the job instance
$job = new ProcessUser($lg);

// dispatch the job
$queue_id = $this->dispatch($job);
Also, if I
dump($queue_id);
instead of getting the ID of the queued job, I get back 0.
Everything works as expected on my local dev environment with Homestead.
But on production, where I have CentOS, I expected the job just to be queued; instead it seems to be processed right away. (I can never see the job inserted in my queue.)
On my server (CentOS 6) I installed supervisor.
And it is stopped:
$ service supervisord status
supervisord is stopped
And also... I highly doubt it could work, since I didn't configure it in
/etc/supervisor.conf
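For reference, a worker entry in that file would normally look something like this minimal sketch (the program name and paths are placeholders, following the pattern in the Laravel docs):

[program:laravel-worker]
command=php /path/to/your/app/artisan queue:work --daemon --tries=3
autostart=true
autorestart=true
numprocs=1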
What am I missing?
How can I check how the queue is being processed?
I have never issued any artisan command like
$ php artisan queue:*
Sorry all,
I realised I hadn't configured the .env file properly: it needs
QUEUE_DRIVER=database
but it was set to
QUEUE_DRIVER=sync
I didn't know that the "sync" driver processes the queue right away: it runs jobs synchronously in the same request, which is why nothing ever showed up in the queue and dispatch() returned 0.
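For anyone hitting the same thing, the usual follow-up steps with the database driver are the stock artisan commands (assuming a standard Laravel 5.1 setup):

php artisan queue:table
php artisan migrate
php artisan queue:listen

The first two create the jobs table the driver stores queued jobs in; queue:listen keeps a worker running to process them.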
Related
Good day all. I have a persistent problem with a Laravel app I created. I want to send bulk emails to subscribers, and I want to queue the emails (jobs) so that they won't slow down my app. I am using a shared hosting account. Without the queue, the mails send fine; with it, they do not.
I am using database as the queue connection and I have the "jobs" table set up in my database. I guess up to this point everything is working well, because whenever I send the mails I can see the jobs in the database. The problem is that I can't seem to run the queue:work command on the shared hosting the way I can on my local system.
Furthermore, I created a command using php artisan make:command cronEmail, and inside app/Console/Kernel.php I set up the schedule method as follows:
protected function schedule(Schedule $schedule)
{
    $schedule->command('queue:work --tries=3')
             ->cron('* * * * *') // standard cron expressions have five fields, not six
             ->withoutOverlapping();
}
I then created a cron job in my cPanel as follows:
php /home/myrootfolder/mywebsitefolder/artisan queue:work >> /dev/null 2>&1
Yet I still don't get any result; the mails are not sent.
My website files are set up as follows:
Inside the root folder of the cPanel account, I created a new folder called "mywebsitefolder" where I put all my Laravel files except for the "public" folder. The contents of the public folder are placed inside the root folder's "public_html". I then edited my index.php accordingly, as shown below:
require __DIR__.'/../mywebsitefolder/vendor/autoload.php';
$app = require_once __DIR__.'/../mywebsitefolder/bootstrap/app.php';
So, could it be that I am not pointing to my "artisan" correctly, or am I getting everything completely wrong?
Please, if anyone knows a better way of doing this or can see where I am going wrong, I'll be glad. Thanks in advance.
You can do this easily from your cPanel: you will find a terminal in the Advanced section. Open it and run this command:
nohup php artisan queue:work --daemon &
It will run your queue worker in the background. Make sure you are in your project's root path when you run the command above.
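If the host kills long-lived processes (common on shared hosting), an alternative sketch is to let cron start a short-lived worker every minute instead; the path below is the one from the question, and the --stop-when-empty flag requires Laravel 5.7 or newer:

* * * * * php /home/myrootfolder/mywebsitefolder/artisan queue:work --stop-when-empty >> /dev/null 2>&1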
I am having an issue running the Laravel scheduler to send mails from the queue.
The setup is as follows: Laravel 5.7.
I have configured the scheduler (app/Console/Kernel.php) as shown below:
protected function schedule(Schedule $schedule)
{
    $schedule->command('queue:work --tries=3')
             ->everyFiveMinutes()
             ->withoutOverlapping();
}
The DB is set up as per the Laravel docs. As soon as I click the link in my UI, I can see the entry in the DB.
The .env has QUEUE_CONNECTION=database, and the same setting is in config/queue.php.
(If I change database to sync, it works perfectly.)
My cron job on the server is as follows (I just tried to log the cron output):
/usr/local/bin/php /home/XXX/YYY/artisan schedule:run 1>> /home/XXX/public_html/junk/cron_log.php 2>&1
I can see the cron logs getting updated every five minutes, but all they say is:
"No scheduled commands are ready to run"
Exactly the same code and settings worked last night (before going to bed). I had tested more than 40 email send attempts and the DB entries were getting deleted. The only change I made was saving the scheduler with everyFiveMinutes(), but now it is not working.
I can understand mails arriving slowly, but why aren't the DB entries being deleted like last night?
This may be useful to others who are using Laravel 5.7 on GoDaddy shared hosting.
The issue above was that the dispatched email jobs were not executing (the cron jobs were running, but the database entries were not being cleared). The culprit seems to be
->withoutOverlapping();
After I deleted this method, I now see the cron_log entries correctly and I have also received the mails. My cron_log entries look like this:
Running scheduled command: '/opt/alt/php71/usr/bin/php' 'artisan' queue:work --tries=3 > '/dev/null' 2>&1
I am guessing the withoutOverlapping() method has a problem in cron execution. I have not changed anything else in the code.
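A plausible explanation (my assumption, not confirmed in the thread): withoutOverlapping() takes a mutex for the command, and because queue:work runs indefinitely by default, the first invocation never releases it, so every later schedule:run reports "No scheduled commands are ready to run". If you want to keep the overlap protection on Laravel 5.7, one hedged sketch is to make each worker exit once the queue is drained:

protected function schedule(Schedule $schedule)
{
    // --stop-when-empty (available in recent 5.7 releases) makes the worker
    // exit when the queue is drained, so the withoutOverlapping() mutex
    // is released before the next scheduled run.
    $schedule->command('queue:work --tries=3 --stop-when-empty')
             ->everyFiveMinutes()
             ->withoutOverlapping();
}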
In my application I am dispatching a job on the work queue with a delay time, but it runs instantly, not waiting for the delay. In my config and .env I am using the database driver.
No job is ever inserted into the jobs table in my database.
My config:
'default' => env('QUEUE_DRIVER', 'database')
My controller code:
Log::info('Request Status Check with Queues Begins', ['method' => __METHOD__]); // the log context must be an array
MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)->onQueue('work')->delay(12);
return json_encode($data);
The value of QUEUE_DRIVER must be set to database in the .env file; the sync driver runs jobs immediately in the same request and ignores any delay, which is why nothing shows up in the jobs table.
Make sure to run this afterwards:
php artisan config:clear
Also run:
php artisan queue:listen
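For reference, a minimal sketch of the same dispatch once the database driver is active (the class and arguments are the ones from the question; now()->addSeconds(12) is just a more explicit way to state the delay):

MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)
    ->onQueue('work')
    ->delay(now()->addSeconds(12));

Note that a plain php artisan queue:work only watches the default queue; to process this job the worker has to be told about the work queue, e.g. php artisan queue:work --queue=work.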
I have created a new Job in a Laravel 5.1 app, running in a Homestead VM. I've set it to be queued and have code in the handle method.
The handle() method previously expected a param to be passed, but it is no longer required and I've removed the param from the handle method.
However, when the queue runs the job I get an error saying:
[2015-06-17 14:08:46] local.ERROR: exception 'ErrorException' with message 'Missing argument 1 for Simile\Jobs\SpecialJob::handle()' in /home/vagrant/Code/BitBucket/simile-app/app/Jobs/SpecialJob.php:31
line 31 of that file is:
public function handle()
It's no longer expecting any parameters, unless there's a default one that's not documented.
Now ANY changes I make, including commenting out ALL content in the Job file, are not seen when I run the queue; I still get the same error.
I've tried restarting nginx, php5-fpm, supervisor, and beanstalkd, and running artisan cache:clear, artisan clear-compiled, artisan optimize, and composer dumpautoload.
Nothing works.
The only way I can get Laravel to see any updates to the Job file is to restart the VM: vagrant halt, then vagrant up.
The job is triggered in a console command like this:
$this->dispatch(new SpecialJob($site->id));
Here is the full code of the SpecialJob.php file:
http://laravel.io/bin/qQQ3M#5
I tried creating another new Job and tested it; I get the same result.
All other non-Job files update instantly with no issue. It's just the Job files, as if an old copy is being cached somewhere I can't find.
When running the queue worker as a daemon, you must tell the worker to restart after a code change.
Since daemon queue workers are long-lived processes, they will not pick up changes in your code without being restarted. So, the simplest way to deploy an application using daemon queue workers is to restart the workers during your deployment script. You may gracefully restart all of the workers by including the following command in your deployment script:
php artisan queue:restart
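In this case (assuming the Homestead/beanstalkd setup from the question), that means running the restart from inside the VM after each code change and then starting the listener again:

php artisan queue:restart
php artisan queue:listen

queue:restart only signals the running workers to exit after their current job; if Supervisor manages them, it will bring them back up automatically.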
I have added some jobs to a queue in Laravel. However, I forgot to put $job->delete() in the function, and there is an error in my function. This means the job never ends: it keeps getting released back onto the queue and keeps erroring in my log file. How can I delete it from the command line?
I am using beanstalkd for my queuing.
I am using Redis instead of Beanstalkd, but this should be the same in both. Restarting Redis doesn't solve the problem. I looked at RedisQueue in the Laravel 4.2 API docs and found:
public Job|null pop(string $queue = null)
//Pop the next job off of the queue.
It is the same if you look at BeanstalkdQueue.
I threw it into app/routes.php inside dd()*, loaded that page, and voila:
Route::get('/', function() {
    dd(Queue::pop());
    // return View::make('hello');
});
NOTE: Reload the page once per queued job.
The job was pulled off the queue. I would like to see a cleaner solution, but this has worked for me more than once.
*dd($var) = Laravel's die and dump function = die(var_dump($var))
Edit 1: For Redis
The above obviously isn't the best solution, so here is a better way. Be careful!
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails.
For Redis, use FLUSHDB. This flushes the Redis database, not Laravel's database, but note it removes every key in that Redis DB, not just the queue keys. In the terminal:
$ redis-cli
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> exit
Restart Beanstalk. On Ubuntu:
sudo service beanstalkd restart
I made an artisan command which will clear all the jobs in your queue. You can optionally specify the connection and/or the pipe.
https://github.com/morrislaptop/laravel-queue-clear
Important note: this solution works only for Beanstalkd.
There are two solutions:
1- From your PHP code
To delete jobs programmatically, you can do this:
// Queue the job first. (YourJobProcessor is a class with a method like fire($job, $data).)
$res = Queue::later(5, YourJobProcessor::class, $data, 'queue_name');
// Get the job you just pushed from the queue
$job = Queue::getPheanstalk()->useTube('queue_name')->peek($res);
// Delete that job from the queue
$res = Queue::getPheanstalk()->useTube('queue_name')->delete($job);
If everything went well, the job will not execute; otherwise, it will execute after 5 seconds.
2- From the command line (Linux and Mac only)
From the command line you can use beanstool.
For example, if you want to delete 100 ready jobs from the queue_name tube, you can do the following:
for i in {1..100}; do beanstool delete -t queue_name --state=ready; done
For Redis users, instead of flushing, I ran this command with redis-cli on the Redis instance holding the queued jobs:
KEYS *queue*
then deleted whatever keys came back in the response:
DEL queues:default queues:default:reserved
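Depending on the Laravel version, delayed jobs are kept in a separate sorted set (an assumption based on Laravel's Redis key naming, so check the KEYS output first); if one shows up, delete it the same way:

DEL queues:default:delayed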
The only way I could do it was to restart my computer; I could not find a way to delete a job.
I've used this php-based web admin console in the past.
Otherwise, I believe you'll find yourself using Terminal + telnet, although I can't find any documentation for deleting via telnet (just for viewing a list of jobs in the queue).
It seems that most articles tell you to use your code + library of choice and loop over the queued jobs to delete them in this situation.
Here is a Laravel 5.1 compatible command which allows you to clear a Beanstalkd queue. The command takes the queue name as an argument ('default' by default). Do not forget to register it in app/Console/Kernel.php.
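The original snippet didn't survive here, so below is only a minimal sketch of what such a command could look like, built from the Queue::getPheanstalk() calls shown earlier in the thread (the class name, signature, and exception handling are my assumptions, not the original code):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Queue;

class ClearBeanstalkdQueue extends Command
{
    protected $signature = 'queue:clear-beanstalkd {queue=default}';

    protected $description = 'Clear all ready jobs from a Beanstalkd tube';

    public function handle()
    {
        $queue = $this->argument('queue');
        $pheanstalk = Queue::getPheanstalk();

        // Keep peeking at the next ready job and deleting it until the tube is empty;
        // peekReady() throws a ServerException (NOT_FOUND) once no ready jobs remain.
        try {
            while ($job = $pheanstalk->useTube($queue)->peekReady()) {
                $pheanstalk->delete($job);
            }
        } catch (\Pheanstalk\Exception\ServerException $e) {
            // Tube drained.
        }

        $this->info("Cleared ready jobs on tube: {$queue}");
    }
}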