Laravel Queue - Database in cache? - php

I have an application with multiple databases and a function that "chooses" the correct database.
The problem is: when I start php artisan queue:work --tries=3, the worker only goes through MySqlConnector.php the first time, so I can only connect to the correct database for the first job.
Attempts that failed:
Disconnecting: \DB::disconnect('database name');
Clearing the cache: \Cache::flush();
Changing the default MySQL config: config(['database.connections.queue' => $correctDatabase]);
And several others I don't even remember anymore.
How do I make sure that every time the queue worker picks up a job, it connects to the correct database again?
Note: I set the correct database configuration inside MySqlConnector in the connect method.
public function connect(array $config)
{
    // ...
    // multipleDatabases() is my custom function
    if (multipleDatabases('connection') !== null) {
        $config = multipleDatabases('database');
    }
    // ...
}
This code works fine.

I solved this problem by using listen instead of work:
php artisan queue:listen --tries=3
From the Laravel documentation:
When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
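This works because queue:listen boots the framework fresh for every job, so the database is chosen again each time, while queue:work keeps a single booted application (and its resolved connections) in memory between jobs. If you prefer to stay on queue:work, a minimal sketch of resetting the connection at the start of each job, assuming the job carries the target database name and uses the mysql connection (both are assumptions):

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class TenantAwareJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $database;

    // Hypothetical: the dispatcher passes in which database this job targets
    public function __construct($database)
    {
        $this->database = $database;
    }

    public function handle()
    {
        // Point the mysql connection at this job's database
        config(['database.connections.mysql.database' => $this->database]);

        // Drop the connection the long-running worker has cached, then reconnect
        DB::purge('mysql');
        DB::reconnect('mysql');

        // ... the rest of the job now runs against the correct database
    }
}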

Related

laravel run failed job locally instead of pushing it back on the queue

Is it possible to run a failed job from the failed_jobs table locally for debugging?
I only found php artisan queue:retry in the docs, but this pushes the job back on the queue instead of executing it.
I would like to directly run the failed job instead.
You can add this to your FailedJob model (it uses the Artisan facade, Illuminate\Support\Facades\Artisan):
public function retry()
{
    // Give this failed job its own queue name so the worker below only picks up this job
    $this->queue = 'failed_jobs_'.$this->id;
    $this->save();

    // Push the job back onto that queue, then run a single worker iteration against it
    Artisan::call('queue:retry '.$this->id);
    Artisan::call('queue:work --once --queue='.$this->queue);
}
And then run/debug the job locally like this:
FailedJob::find($id)->retry();
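Depending on your Laravel version, Artisan::call may not accept the arguments inside the command string; in that case the same calls can be made with explicit parameter arrays (a sketch, assuming the stock queue:retry and queue:work signatures):

// Inside retry(), equivalent to the string form above
Artisan::call('queue:retry', ['id' => [$this->id]]);
Artisan::call('queue:work', ['--once' => true, '--queue' => $this->queue]);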

Laravel - Catch php artisan commands

I've made some changes in my config/app to use multiple databases selected by the front-end;
now I have to say which database I want to access in \Request()->header('database').
It works perfectly. The problem is: when I try to run any artisan commands, my logic dies because the database isn't provided.
So I need to pass the database to artisan commands, like this:
php artisan migrate --database=sandiego_school
php artisan migrate:rollback --database=newyork_school
How can I observe all commands to get that argument?
In this case I guess you should create your own commands that override the commands you want to call; then, in the handle method of each command, you can specify the connection you want to work on (see the sketch after this answer):
\DB::setDefaultConnection($connection);
Or you can simply add the header to the request:
request()->headers->set('database', $dbname);
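A minimal sketch of such a wrapper command; the command name, option name, and the delegation to migrate are illustrative assumptions:

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class TenantMigrate extends Command
{
    // Hypothetical usage: php artisan tenant:migrate --database=sandiego_school
    protected $signature = 'tenant:migrate {--database= : The school connection to use}';

    protected $description = 'Run migrations against the selected school database';

    public function handle()
    {
        // Make the chosen connection the default so the rest of the logic resolves it
        DB::setDefaultConnection($this->option('database'));

        // Delegate to the built-in migrate command against that connection
        $this->call('migrate', ['--database' => $this->option('database')]);
    }
}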

Laravel 5.5 Job with delay fires instantly instead of waiting

In my application I am dispatching a job on the "work" queue with a delay, but it runs instantly instead of waiting for the delay. In my config and .env I am using the database driver.
No job has been inserted into my database jobs table so far.
My config:
'default' => env('QUEUE_DRIVER', 'database')
My controller code:
Log::info('Request Status Check with Queues Begins', __METHOD__);
MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)->onQueue('work')->delay(12);
return json_encode($data);
The value of QUEUE_DRIVER must actually be set to database in your .env file.
Make sure to run this afterwards:
php artisan config:clear
Also run:
php artisan queue:listen
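For reference, the relevant .env line looks like this (assuming the database connection itself is already configured); delayed jobs only land in the jobs table when this driver is active:

QUEUE_DRIVER=database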

Deleting queued jobs in laravel

I have added some jobs to a queue in Laravel. However, I forgot to put $job->delete() in the function and there is an error in my function. This means the job never ends: it keeps being put back onto the queue and keeps erroring in my log file. How can I delete it from the command line?
I am using beanstalkd for my queuing.
I am using Redis instead of Beanstalkd, but this should be the same for both. Restarting Redis doesn't solve the problem. I looked at RedisQueue in the Laravel 4.2 API docs and found:
public Job|null pop(string $queue = null)
//Pop the next job off of the queue.
It is the same if you look at BeanstalkdQueue.
I threw it into a dd() in app/routes.php, loaded that page and voila.
Route::get('/', function() {
    dd(Queue::pop());
    #return View::make('hello');
});
NOTE: Reload the page once per queued job.
The queue was pulled off the stack. I would like to see a cleaner solution but this worked for me more than once.
*dd($var) = Laravel's die and dump function = die(var_dump($var))
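If there are many stuck jobs, a sketch that drains them in one request instead of one reload per job (same Queue::pop call; delete() is available on the job object it returns, and the /flush-queue route is made up):

Route::get('/flush-queue', function() {
    // Keep popping until the queue is empty, deleting each job as we go
    while ($job = Queue::pop()) {
        $job->delete();
    }

    return 'Queue drained';
});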
Edit 1: For Redis
The above obviously isn't the best solution so here is a better way. Be careful!
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails.
For Redis, use FLUSHDB. This will flush the Redis database, not Laravel's application database. In the terminal:
$ redis-cli
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> exit
Restart Beanstalk. On Ubuntu:
sudo service beanstalkd restart
I made an artisan command which will clear all the jobs in your queue. You can optionally specify the connection and/or the pipe.
https://github.com/morrislaptop/laravel-queue-clear
Important note: This solution works only for Beanstalkd.
There are two solutions:
1- From Your PHP Code
To delete jobs programmatically, you can do this:
// Queue the job first (YourJobProcessor is a class with a method like fire($job, $data))
$res = Queue::later(5, YourJobProcessor::class, $data, 'queue_name');
// Get the job from the queue you just pushed it to
$job = Queue::getPheanstalk()->useTube('queue_name')->peek($res);
// Delete that job from the queue
$res = Queue::getPheanstalk()->useTube('queue_name')->delete($job);
If everything went well, the job will not execute; otherwise it will execute after 5 seconds.
2- From Command Line (Linux and Mac only)
From the command line (on Linux and Mac) you can use beanstool.
For example, if you want to delete 100 ready jobs from the queue_name tube you can do the following:
for i in {1..100}; do beanstool delete -t queue_name --state=ready; done
For Redis users, instead of flushing, I ran this command with redis-cli:
KEYS *queue*
on the Redis instance holding the queued jobs, then deleted whatever keys came back in the response:
DEL queues:default queues:default:reserved
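The same two steps as a single shell one-liner, a sketch that assumes redis-cli is pointed at the right instance and that every matched key is safe to delete:

# Inspect the output of KEYS first if you are unsure what will be removed
redis-cli KEYS "queues:*" | xargs redis-cli DEL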
The only way I could do it was to restart my computer; I could not find a way to delete a job.
I've used this PHP-based web admin console in the past.
Otherwise, I believe you'll find yourself using Terminal + telnet, although I can't find any documentation for deleting via telnet (just for viewing the list of jobs in the queue).
It seems that most articles tell you to use your code + library of choice and loop over the queued jobs to delete them in this situation.
Here is a Laravel 5.1 compatible command which allows you to clear a Beanstalkd queue. The command takes the queue name as an argument ('default' by default). Do not forget to register it in app/Console/Kernel.php.
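A minimal sketch of what such a command could look like, assuming the default queue connection is beanstalkd so Queue::getPheanstalk() is available (the class name and command signature are illustrative):

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Queue;
use Pheanstalk\Exception\ServerException;

class ClearBeanstalkdQueue extends Command
{
    protected $signature = 'beanstalkd:clear {queue=default}';

    protected $description = 'Delete all ready, delayed and buried jobs from a Beanstalkd tube';

    public function handle()
    {
        $tube = $this->argument('queue');
        $pheanstalk = Queue::getPheanstalk();

        // Peek at each job state and delete until the tube is empty for that state
        foreach (['peekReady', 'peekDelayed', 'peekBuried'] as $peek) {
            try {
                while ($job = $pheanstalk->useTube($tube)->{$peek}()) {
                    $pheanstalk->delete($job);
                }
            } catch (ServerException $e) {
                // Pheanstalk throws NOT_FOUND when nothing is left to peek; move on
            }
        }

        $this->info("Cleared Beanstalkd tube: {$tube}");
    }
}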

Queue work on "sync" driver, but not on Beanstalkd

I have a quite simple job that runs on the Laravel 4 framework. When the queue driver is set to "sync", it works fine. But when I set it to 'beanstalkd', it simply doesn't run! I already ran the artisan commands php artisan queue:listen and php artisan queue:work, but neither seems to work.
When I type php artisan queue:work it gives me the following error:
[ErrorException]
Trying to get property of non-object
Here's my beanstalkd connection configuration:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host'   => 'localhost:11300',
    'queue'  => 'default',
),
I've already tried setting the 'host' to '0.0.0.0' and '127.0.0.1'.
Any ideas why it isn't working?
EDIT:
Here's a piece of the fire() method's code.
static public function fire($job, $data)
{
    ini_set('memory_limit', '512M');
    set_time_limit(300);

    $hotel_ids = $data['hotels'];
    self::$client = $data['client'];
    self::$currency = $data['currency'];

    // A list of paths to the generated PDFs
    $paths = array();

    foreach ($hotel_ids as $list) {
        $hotels = Hotel::whereIn('id', $list)->orderBy('name', 'asc')->get();
        $paths[] = self::makePDF($hotels);
    }

    #self::sentPDFs($paths);
    $job->delete();
}
EDIT 2:
The job itself runs on the sync driver, though my concern is with beanstalkd. I installed the beanstalkd console, a way of viewing the jobs and the queue graphically. Here's another interesting thing: the job is queued, it gets into the 'ready' state and then goes back! And that keeps happening: it gets into the ready state and then (I believe) some sort of error occurs and it drops out! I don't know what the error is, since it doesn't show up with the sync driver.
Another interesting thing: if I remove all the code from the fire method and leave only, for example, Log::error('Error');, the same exact thing happens!
Have you installed Pheanstalk? It's required to use beanstalkd with the Laravel queue system.
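If it's missing, it can be pulled in with Composer; the exact version constraint should match your Laravel version, so this is only the general form:

composer require pda/pheanstalk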
Check your firewall configuration. I added port 11300 to the firewall tables and it works!
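For example, on Ubuntu with ufw (assuming that is the firewall in use; beanstalkd listens on port 11300 by default):

sudo ufw allow 11300/tcp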
