Setup Remote beanstalkd Laravel 4.2 - php

My stack set-up consists of the following:
www.main.com - Main Server (Main Application code & supervisord)
www.queue-server.com - Beanstalkd installed here (No code here only beanstalkd)
I'm using Laravel 4.2.
I have setup Supervisord on www.main.com and added the following queue listener:
php artisan queue:work --queue=test --env=test
My app/config/queue.php file settings are as below:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host'   => 'www.queue-server.com',
    'queue'  => 'test',
    'ttr'    => 60,
),
From my understanding, jobs should be pushed to and processed on the www.queue-server.com server, but that server shows no CPU spikes; instead, www.main.com shows high CPU usage.
So my questions are:
Is my setup correct, or do I have to change something?
I want to process my job on www.queue-server.com server. How can I achieve that?

The beanstalkd server is just the storage for the queue data itself; it does no processing. It's the php artisan queue:work command that actually processes the queue. This is why you are seeing the higher load on your www.main.com server: although the queue is stored on the other server, the main server is the one currently processing it.
If you wish for the www.queue-server.com server to process the queue you need to install your application there as well and run the artisan command from there.
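As a sketch of that second setup (assuming the application is deployed at a hypothetical /var/www/app on www.queue-server.com, with the same app/config/queue.php shown above), the supervisord program on the queue server would look something like:

```ini
; /etc/supervisor/conf.d/laravel-worker.conf on www.queue-server.com
; Paths and user here are hypothetical; the worker must run
; where the application code actually lives.
[program:laravel-worker]
command=php artisan queue:work --queue=test --env=test
directory=/var/www/app
autostart=true
autorestart=true
user=www-data
```

With both copies of the app pointing their 'host' at www.queue-server.com, jobs pushed from www.main.com are stored on and processed by the queue server.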


Laravel queue not honouring job specific timeout

According to the Laravel docs I should be able to specify a job specific timeout:
If the timeout is specified on the job, it will take precedence over any timeout specified on the command line [...]
So, when I run artisan queue:listen without the --timeout option and I define the timeout inside the job (like Laravel tells me to):
public $timeout = 600;
I expect the timeout of that specific job to be 600 seconds. Unfortunately, I still get a ProcessTimedOutException. A custom timeout only works when I run the queue with --timeout=600.
I'm using Laravel 6 with PHP 7.4. As recommended by Laravel I've also enabled the pcntl PHP extension. For the queue I use the database driver with the following config:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],
I opened a bug report because I couldn't get this to work. However, it seems the timeout specified inside the job class only takes precedence over the timeout specified on the command line when running the queue with queue:work.
I've tested this and can confirm that it works with queue:work. According to a commenter on my bug report, it doesn't work with queue:listen because:
queue:listen runs several processes while queue:work is a single process. queue:listen sets a timeout for the process it runs so we don't leave ghost processes running on the machine in case the master process was killed for some reason.
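A minimal illustration of the distinction (class name hypothetical; behaviour as described in the bug-report comment):

```php
// In the job class — honoured by queue:work, ignored by queue:listen:
class ProcessReport implements ShouldQueue
{
    public $timeout = 600; // takes precedence over --timeout under queue:work
}
```

Running php artisan queue:work --timeout=60 gives this job its full 600 seconds; under php artisan queue:listen --timeout=60 the 60-second process-level limit applies regardless of the property.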

How to use Laravel 5 to receive Amazon SNS message from SQS?

I am trying to get the email responses (bounce, complaint and delivery) from Amazon SES through SNS. On the Amazon SQS console I can see that the messages are already in the queue, so I am sure the setup on the Amazon side is correct.
Then, using Laravel 5.5 and following the official guide, I set up a queue listener for SQS. I skip the part about dispatching jobs to the queue, as that will be done by SNS. In the job handler, for simplicity, I just var_dump what I receive. The job looks like this:
public function handle($testing_message)
{
    var_dump($testing_message);
    echo "testing handle!\n";
}
The config for that looks something like this:
'sqs' => [
    'driver' => 'sqs', // mainly to show that I am using the correct driver
    'key'    => env('SQS_KEY', 'your-public-key'),
    'secret' => env('SQS_SECRET', 'your-secret-key'),
    'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
    'queue'  => env('SQS_QUEUE', 'your-queue-name'),
    'region' => env('SQS_REGION', 'us-east-1'),
],
For security, the actual values are hidden in .env. I then run:
composer require aws/aws-sdk-php ~3.0
php artisan config:cache
php artisan queue:listen
However, the process just sits there running, with no response and no error message.
I want to ask:
How do I know if the connection to the queue is correct?
If the connection is correct, why is there no response from SQS? (I can see from the Amazon console that there are already messages in the SQS queue.)
I'm not sure why you are using a queue rather than a simple HTTPS endpoint. An endpoint removes a lot of complexity and doesn't require polling.
There is a simple php example that shows what is needed.
The biggest gotcha is that you need to be able to confirm the SNS subscription before SNS will start sending POST requests to your endpoint.
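As a sketch of that approach (plain PHP, no framework; the field names Type, SubscribeURL and Message come from the SNS HTTP notification format), an endpoint might look like:

```php
<?php
// Minimal SNS endpoint sketch for SES bounce/complaint/delivery notifications.
$raw = file_get_contents('php://input');
$msg = json_decode($raw, true);

if (!is_array($msg)) {
    http_response_code(400);
    echo "no SNS payload";
    exit;
}

if ($msg['Type'] === 'SubscriptionConfirmation') {
    // Visit SubscribeURL once to confirm, or SNS will never deliver messages.
    file_get_contents($msg['SubscribeURL']);
    echo "subscription confirmed";
} elseif ($msg['Type'] === 'Notification') {
    // The SES bounce/complaint/delivery object is JSON inside the Message field.
    $ses = json_decode($msg['Message'], true);
    error_log('SES notification: ' . ($ses['notificationType'] ?? 'unknown'));
    echo "ok";
}
```

The key point is the SubscriptionConfirmation branch: until the SubscribeURL has been fetched, SNS will not POST any notifications to the endpoint.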

Laravel 5.5 Job with delay fires instantly instead of waiting

In my application I am dispatching a job onto the work queue with a delay, but it runs instantly instead of waiting for the delay time. In my config and .env I am using the database driver.
No job has been inserted into my jobs table so far.
My config:
'default' => env('QUEUE_DRIVER', 'database')
My controller code:
Log::info('Request Status Check with Queues Begins', ['method' => __METHOD__]);
MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)->onQueue('work')->delay(12);
return json_encode($data);
The value of QUEUE_DRIVER must be set to database in the .env file.
Make sure to run this afterwards:
php artisan config:clear
Then start the listener:
php artisan queue:listen
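For reference, a sketch of the intended dispatch (same job and arguments as in the question): in Laravel 5.5, delay() accepts either a number of seconds or a DateTime/Carbon instance, and it only has an effect on a real queue driver such as database, never on sync.

```php
// Controller: push onto the 'work' queue, to run no earlier than 12s from now.
MyGetInfo::dispatch($this->name, $this->password, $this->id, $trr->id)
    ->onQueue('work')
    ->delay(now()->addSeconds(12)); // equivalent to ->delay(12)
```

If the job still runs immediately and never appears in the jobs table, the sync driver is still in effect, which usually means a cached config is overriding the .env value.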

laravel 5.1 not seeing changes to Job file without VM restart

I have created a new Job in a Laravel 5.1 app, running in a Homestead VM. I've set it to be queued and have code in the handle method.
The handle() method previously expected a parameter to be passed, but it is no longer required and I've removed the param from the handle method.
However, when the queue runs the job I get an error saying:
[2015-06-17 14:08:46] local.ERROR: exception 'ErrorException' with message 'Missing argument 1 for Simile\Jobs\SpecialJob::handle()' in /home/vagrant/Code/BitBucket/simile-app/app/Jobs/SpecialJob.php:31
line 31 of that file is:
public function handle()
It's no longer expecting any parameters, unless there's a default one that's not documented.
Now ANY changes I make, including commenting out ALL content in the Job file, are not seen when I run the queue. I still get the same error.
I've tried restarting nginx, php5-fpm, supervisor, and beanstalkd, and running: artisan cache:clear, artisan clear-compiled, artisan optimize, composer dumpautoload.
Nothing works.
The only way I can get Laravel to see any updates to the Job file is to restart the VM: vagrant halt, then vagrant up.
The job is triggered in a console command like this:
$this->dispatch(new SpecialJob($site->id));
Here is the full code of the SpecialJob.php file:
http://laravel.io/bin/qQQ3M#5
I tried created another new Job and tested, I get the same result.
All other non-Job files update instantly with no issue. It's just the Job files, as if an old copy were being cached somewhere I can't find.
When running the queue worker as a daemon, you must tell the worker to restart after a code change.
Since daemon queue workers are long-lived processes, they will not pick up changes in your code without being restarted. So, the simplest way to deploy an application using daemon queue workers is to restart the workers during your deployment script. You may gracefully restart all of the workers by including the following command in your deployment script:
php artisan queue:restart

Queue work on "sync" driver, but not on Beanstalkd

I have a quite simple job that runs on the Laravel 4 framework. When the queue driver is set to "sync", it works fine. But when I set it to 'beanstalkd', it simply DOESN'T RUN! I have already run the artisan commands php artisan queue:listen and php artisan queue:work, but neither seems to work.
When I type php artisan queue:work it gives me the following error:
[ErrorException]
Trying to get property of non-object
Here's my beanstalkd connection configuration:
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host'   => 'localhost:11300',
    'queue'  => 'default',
),
I've already tried setting the 'host' to '0.0.0.0' and '127.0.0.1'.
Any ideas why it isn't working?
EDIT:
Here's a piece of code from the fire() method:
public static function fire($job, $data)
{
    ini_set('memory_limit', '512M');
    set_time_limit(300);

    $hotel_ids = $data['hotels'];
    self::$client = $data['client'];
    self::$currency = $data['currency'];

    // A list of paths to the generated PDFs
    $paths = array();
    foreach ($hotel_ids as $list) {
        $hotels = Hotel::whereIn('id', $list)->orderBy('name', 'asc')->get();
        $paths[] = self::makePDF($hotels);
    }
    #self::sentPDFs($paths);

    $job->delete();
}
EDIT 2:
The job itself runs on the sync driver, so my suspicion is on beanstalkd. I installed beanstalkd console, a way of viewing the jobs and queues graphically. Here's another interesting thing: the job is queued, enters the 'ready' state, then drops back out, and that keeps repeating! It enters the ready state, then (I believe) some sort of error happens and it gets kicked out. I don't know what the error is, since it doesn't appear with the sync driver.
Another interesting thing: if I remove all the code from the fire method and leave only, for example, Log::error('Error');, the same exact thing happens!
Have you installed Pheanstalk? It's required to use beanstalkd with the Laravel queue system.
Check your firewall configuration. I added port 11300 to the firewall tables and it works!
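Independently of the firewall fix, a quick way to verify that the worker's machine can actually reach beanstalkd is a plain socket check (no framework or Pheanstalk needed; adjust $host to match the value in app/config/queue.php):

```php
<?php
// Quick connectivity check for a beanstalkd server.
$host = '127.0.0.1'; // or 'www.queue-server.com' for the remote setup
$port = 11300;       // beanstalkd's default port

$conn = @fsockopen($host, $port, $errno, $errstr, 2);
if ($conn) {
    echo "beanstalkd reachable on {$host}:{$port}\n";
    fclose($conn);
} else {
    echo "beanstalkd unreachable on {$host}:{$port} ({$errstr})\n";
}
```

If this reports unreachable, the worker can never pull jobs, which matches the symptom of jobs bouncing in and out of the ready state.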
