I have a fairly simple job that runs on the Laravel 4 framework. When the queue driver is set to "sync", it works fine. But when I set it to 'beanstalkd', it simply DOESN'T RUN! I have already run the artisan commands php artisan queue:listen and php artisan queue:work, but neither seems to work.
When I type php artisan queue:work it gives me the following error:
[ErrorException]
Trying to get property of non-object
Here's my beanstalkd connection configuration:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host' => 'localhost:11300',
    'queue' => 'default',
),
I've already tried setting the 'host' to '0.0.0.0' and '127.0.0.1'.
Any ideas why it isn't working?
EDIT:
Here's the relevant code from the fire() method.
public static function fire($job, $data)
{
    ini_set('memory_limit', '512M');
    set_time_limit(300);

    $hotel_ids = $data['hotels'];
    self::$client = $data['client'];
    self::$currency = $data['currency'];

    // A list of paths to the generated PDFs
    $paths = array();

    foreach ($hotel_ids as $list) {
        $hotels = Hotel::whereIn('id', $list)->orderBy('name', 'asc')->get();
        $paths[] = self::makePDF($hotels);
    }

    #self::sentPDFs($paths);

    $job->delete();
}
EDIT 2:
The job itself runs fine on the sync driver, so my suspicion is on beanstalkd. I installed beanstalkd console, a tool for viewing the jobs and the queue graphically. Here's another interesting thing: the job is queued, it enters the 'ready' state, then drops back out! And that keeps repeating: it enters the ready state, and then (I believe) some sort of error happens and it gets kicked out! I don't know what the error is, since it doesn't appear with the sync driver.
Another interesting thing: if I remove all code from the fire method and leave only, for example, Log::error('Error');, the same exact thing happens!
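For reference, this is roughly how such a job gets pushed onto the queue in Laravel 4 (the job class name below is hypothetical; only the payload keys are taken from the fire() signature above). One thing worth noting: beanstalkd payloads are JSON-encoded, so any object placed in $data under the sync driver (a client model, for instance) will not arrive as an object on the worker side, which is one possible source of a "Trying to get property of non-object" error.

// Hedged sketch: 'HotelPdfJob' and the variables are placeholders.
Queue::push('HotelPdfJob', array(
    'hotels'   => $hotelIdLists, // array of arrays of hotel ids
    'client'   => $clientId,     // prefer scalar ids over model objects here
    'currency' => $currency,
));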
Have you installed Pheanstalk? It's required to use beanstalkd with the Laravel queue system.
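If it isn't installed yet, it can be pulled in with Composer. The exact version constraint depends on your Laravel 4 point release (4.2 suggests roughly the ~2.1 series), so treat this as a sketch and check the suggests section of the framework's composer.json:

composer require pda/pheanstalk:~2.1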
Check your firewall configuration. I added port 11300 to the firewall tables and it works!
I can't log anything from my job's handle method with the Log facade. Logging from a controller or any other part of the application with this facade works fine.
I have tried the solution here: Logging not working in laravel queue job, but it does not work with Laravel 6.17, and here: https://stackoverflow.com/a/55206727/10767428, but it does not affect the behaviour in any way.
PHP 7.2
Laravel 6.17
APP_ENV=local
APP_LOG_LEVEL=debug
Laravel runs inside Docker with an Alpine image (other details are unrelated).
Here is my job:
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class MyJob implements ShouldQueue
{
    use Dispatchable;
    use InteractsWithQueue;
    use Queueable;
    use SerializesModels;

    public function handle()
    {
        Log::warning("HI");
    }
}
The job is correctly handled when I dispatch it, but nothing shows up in my storage/app/logs/laravel.log.
This file and the entire storage folder have 777 permissions.
I use "single" driver in config/logging.php :
'single' => [
    'driver' => 'single',
    'path' => storage_path('logs/laravel.log'),
    'level' => 'debug',
],
Any ideas?
EDIT 07/17/2020
As requested, here is my config/horizon.php:
https://pastebin.com/jkQLcDKF
EDIT 07/20/2020
I can log from the same job when I call it with the dispatchNow method instead of dispatch. Any ideas why?
If your queue is running under Supervisor, your log output would be in the Supervisor logs. Share your queue/Supervisor configuration if you're not able to find it.
You could also check your docker logs as well.
If anyone else is having issues, Horizon has to be restarted when changes are made to the jobs. Otherwise, the job that runs will not reflect your code changes.
This is regardless of the environment. That's why it's a good idea to restart horizon when you re-deploy to the server.
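For example, the standard way to restart Horizon's workers after a deploy so they pick up the new job code (Supervisor or your process manager will boot Horizon again after it terminates):

php artisan horizon:terminate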
According to the Laravel docs I should be able to specify a job-specific timeout:
If the timeout is specified on the job, it will take precedence over any timeout specified on the command line [...]
So, when I run artisan queue:listen without the --timeout option and I define the timeout inside the job (like Laravel tells me to):
public $timeout = 600;
I expect the timeout of that specific job to be 600 seconds. Unfortunately, I still get a ProcessTimedOutException. A custom timeout only works when I run the queue with --timeout=600.
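For context, this is where the property lives; a minimal sketch of such a job class (the class name and body are hypothetical):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class GenerateReport implements ShouldQueue
{
    use Queueable;

    // The number of seconds the job can run before timing out.
    public $timeout = 600;

    public function handle()
    {
        // long-running work here
    }
}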
I'm using Laravel 6 with PHP 7.4. As recommended by Laravel I've also enabled the pcntl PHP extension. For the queue I use the database driver with the following config:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],
I opened a bug report because I couldn't get this to work. However, it seems like the timeout specified inside the job class only takes precedence over the timeout specified in the command line when running the queue with queue:work.
I've tested this and can confirm that it works with queue:work. According to a commenter on my bug report it doesn't work with queue:listen because:
queue:listen runs several processes while queue:work is a single process. queue:listen sets a timeout for the process it runs so we don't leave ghost processes running on the machine in case the master process was killed for some reason.
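In short, running the worker directly honours the job-level $timeout property, so no --timeout flag is needed:

php artisan queue:work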
I have an application with multiple databases, and a function to "choose" the correct database.
The problem is: when I start php artisan queue:work --tries=3, the worker only goes through MySqlConnector.php the first time.
So I can only connect to the correct database for the first job.
Failed attempts:
Disconnecting: \DB::disconnect('database name');
Clearing the cache: \Cache::flush();
Changing the default MySQL config: config(['database.connections.queue' => $correctDatabase]);
And several others I don't even remember anymore.
How do I make sure that whenever the queue worker picks up a job, it connects back to the correct database?
Note: I set the correct database configuration inside MySqlConnector, in the connect method.
public function connect(array $config)
{
    // ...

    // multipleDatabases is my custom function
    if (multipleDatabases('connection') !== null) {
        $config = multipleDatabases('database');
    }

    // ...
}
This code works fine.
I solved this problem by using queue:listen instead of queue:work:
php artisan queue:listen --tries=3
From the Laravel documentation:
When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work:
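If you'd rather keep queue:work, one alternative (a hedged sketch; resolveDatabaseConfig() is a stand-in for your own lookup and 'mysql' for your connection name) is to rebuild the connection at the start of each job's handle() method, so the long-running worker doesn't reuse the connection from the first job. DB here is the Illuminate\Support\Facades\DB facade:

public function handle()
{
    // Point the connection config at the database this job should use.
    config(['database.connections.mysql' => resolveDatabaseConfig()]);

    // Drop the cached connection and reconnect with the new config.
    DB::purge('mysql');
    DB::reconnect('mysql');

    // ... the rest of the job
}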
In my application I am dispatching a job onto the 'work' queue with a delay. But it runs instantly instead of waiting for the delay. In my config and .env I am using the database driver.
No job has been inserted into my database jobs table so far.
My config:
'default' => env('QUEUE_DRIVER', 'database')
My controller code:
Log::info('Request Status Check with Queues Begins', ['method' => __METHOD__]);

MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)
    ->onQueue('work')
    ->delay(12);

return json_encode($data);
The value of QUEUE_DRIVER must be set to database in the .env file.
Make sure to run this afterwards:
php artisan config:clear
Also run:
php artisan queue:listen
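For reference, the .env line the answer refers to, plus the same dispatch with an explicit delay time (a sketch based on the question's code; the now() helper requires Laravel 5.5 or later):

QUEUE_DRIVER=database

MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)
    ->onQueue('work')
    ->delay(now()->addSeconds(12));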
My stack set-up consists of the following:
www.main.com - Main Server (Main Application code & supervisord)
www.queue-server.com - Beanstalkd installed here (No code here only beanstalkd)
I'm using Laravel 4.2.
I have setup Supervisord on www.main.com and added the following queue listener:
php artisan queue:work --queue=test --env=test
My app/config/queue.php file settings are as below:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host' => 'www.queue-server.com',
    'queue' => 'test',
    'ttr' => 60,
),
From my understanding, it should push and process jobs on the www.queue-server.com server, but that server shows no CPU spikes, while the www.main.com server shows high CPU usage.
So my questions are:
Is my setup correct, or do I have to change something?
I want to process my jobs on the www.queue-server.com server. How can I achieve that?
The beanstalkd server is just the storage for the queue data itself; it does no processing. It's the php artisan queue:work command that actually processes the queue. This is why you are seeing the higher load on your www.main.com server: although your queue is stored on the other server, the main server is the one processing the jobs.
If you want the www.queue-server.com server to process the queue, you need to deploy your application there as well and run the artisan command from that server.
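For example, a Supervisor program on www.queue-server.com could run the worker there once the application code is deployed on that box (the paths and user below are assumptions); the queue config on that server can then point 'host' at localhost:

[program:queue-test-worker]
command=php /var/www/app/artisan queue:work --queue=test --env=test
directory=/var/www/app
user=www-data
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/var/log/queue-test-worker.log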