Amazon has announced its new SQS FIFO queues, and I'd like to use them with the Laravel queue to solve some concurrency issues.
I've created several new queues and changed the configuration. However, I get a MissingParameter error that says:
The request must contain the parameter MessageGroupId.
So I modified the file vendor/laravel/framework/src/Illuminate/Queue/SqsQueue.php:
public function pushRaw($payload, $queue = null, array $options = [])
{
    $response = $this->sqs->sendMessage([
        'QueueUrl' => $this->getQueue($queue),
        'MessageBody' => $payload,
        'MessageGroupId' => env('APP_ENV', getenv('APP_ENV')),
    ]);

    return $response->get('MessageId');
}

public function later($delay, $job, $data = '', $queue = null)
{
    $payload = $this->createPayload($job, $data);
    $delay = $this->getSeconds($delay);

    return $this->sqs->sendMessage([
        'QueueUrl' => $this->getQueue($queue),
        'MessageBody' => $payload,
        'DelaySeconds' => $delay,
        'MessageGroupId' => env('APP_ENV', getenv('APP_ENV')),
    ])->get('MessageId');
}
I'm using APP_ENV as the group ID (it's a single queue, so the value doesn't matter much; I just want everything to be FIFO).
But I'm still getting the same error message. How can I fix it? Any help would be appreciated.
(By the way, where does the SDK define sendMessage? I can find a stub for it, but I couldn't find the detailed implementation.)
I want to point out to others who might stumble across the same issue that, although editing SqsQueue.php works, the change will easily be wiped out by a composer install or composer update. An alternative is to implement a new Illuminate\Queue\Connectors\ConnectorInterface for SQS FIFO and add it to Laravel's queue manager.
My approach is as follows:
Create a new SqsFifoQueue class that extends Illuminate\Queue\SqsQueue but supports SQS FIFO.
Create a new SqsFifoConnector class that extends Illuminate\Queue\Connectors\SqsConnector that would establish a connection using SqsFifoQueue.
Create a new SqsFifoServiceProvider that registers the SqsFifoConnector to Laravel's queue manager.
Add SqsFifoServiceProvider to your config/app.php.
Update config/queue.php to use the new SQS FIFO Queue driver.
Example:
Create a new SqsFifoQueue class that extends Illuminate\Queue\SqsQueue but supports SQS FIFO.
<?php

class SqsFifoQueue extends \Illuminate\Queue\SqsQueue
{
    public function pushRaw($payload, $queue = null, array $options = [])
    {
        $response = $this->sqs->sendMessage([
            'QueueUrl' => $this->getQueue($queue),
            'MessageBody' => $payload,
            'MessageGroupId' => uniqid(),
            'MessageDeduplicationId' => uniqid(),
        ]);

        return $response->get('MessageId');
    }
}
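One thing to note about this sketch: because MessageGroupId is generated with uniqid(), every message lands in its own group, and SQS only guarantees ordering within a group. If you need strict ordering across all messages, use a fixed group ID instead.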
Create a new SqsFifoConnector class that extends Illuminate\Queue\Connectors\SqsConnector that would establish a connection using SqsFifoQueue.
<?php

use Aws\Sqs\SqsClient;
use Illuminate\Support\Arr;

class SqsFifoConnector extends \Illuminate\Queue\Connectors\SqsConnector
{
    public function connect(array $config)
    {
        $config = $this->getDefaultConfiguration($config);

        if ($config['key'] && $config['secret']) {
            $config['credentials'] = Arr::only($config, ['key', 'secret']);
        }

        return new SqsFifoQueue(
            new SqsClient($config), $config['queue'], Arr::get($config, 'prefix', '')
        );
    }
}
Create a new SqsFifoServiceProvider that registers the SqsFifoConnector to Laravel's queue manager.
<?php

class SqsFifoServiceProvider extends \Illuminate\Support\ServiceProvider
{
    public function register()
    {
        $this->app->afterResolving('queue', function ($manager) {
            $manager->addConnector('sqsfifo', function () {
                return new SqsFifoConnector;
            });
        });
    }
}
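Using afterResolving('queue', ...) defers the registration until the queue manager is actually resolved from the container, so the connector is added regardless of the order in which providers boot.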
Add SqsFifoServiceProvider to your config/app.php.
<?php

return [
    'providers' => [
        // ...
        SqsFifoServiceProvider::class,
    ],
];
Update config/queue.php to use the new SQS FIFO Queue driver.
<?php

return [
    'default' => 'sqsfifo',
    'connections' => [
        'sqsfifo' => [
            'driver' => 'sqsfifo',
            'key' => 'my_key',
            'secret' => 'my_secret',
            'queue' => 'my_queue_url',
            'region' => 'my_sqs_region',
        ],
    ],
];
Your queues should now support SQS FIFO.
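To smoke-test the connection, you could dispatch any queueable job onto it explicitly. A minimal sketch; SomeJob and $payload are placeholders for a job class and data in your own app:

<?php

use Illuminate\Support\Facades\Queue;

// Push a job onto the 'sqsfifo' connection defined in config/queue.php.
Queue::connection('sqsfifo')->push(new SomeJob($payload));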
Shameless plug: while working on the steps above, I created a laravel-sqs-fifo Composer package to handle this at https://github.com/maqe/laravel-sqs-fifo.
FIFO queues work differently from standard AWS SQS queues, so you need a separate driver to handle them.
I faced the same situation, and the package below was a lifesaver.
https://packagist.org/packages/shiftonelabs/laravel-sqs-fifo-queue
In queue.php:
'sqs-fifo' => [
    'driver' => 'sqs-fifo',
    'key' => env('SQS_KEY'),
    'secret' => env('SQS_SECRET'),
    'prefix' => env('SQS_PREFIX'),
    'queue' => env('SQS_QUEUE'),
    'region' => env('SQS_REGION'),
    'group' => 'default',
    'deduplicator' => 'unique',
],
Then:
dispatch(new TestJob([]))->onQueue('My_Mail_Queue.fifo');
NB:
You need to specify the default queue name your application will use in the .env file:
SQS_QUEUE=My_Default_queue.fifo
You also need to specify all the queue names your application uses in the listener (if you use the same queue name for the whole application, you don't need to specify queue names in the listener):
php artisan queue:listen --queue=My_Default_queue.fifo,My_Mail_Queue.fifo,My_Message_Queue.fifo
Apart from the MessageGroupId, a FIFO queue also needs a MessageDeduplicationId, or content-based deduplication enabled on the queue; a sketch of enabling the latter follows.
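With content-based deduplication switched on, SQS derives the deduplication ID from an SHA-256 hash of the message body, so you don't have to send an explicit MessageDeduplicationId. A minimal sketch with the AWS SDK; the region and queue URL are placeholders:

<?php

use Aws\Sqs\SqsClient;

$sqs = new SqsClient([
    'region' => 'us-east-1',   // placeholder
    'version' => 'latest',
]);

// Enable content-based deduplication on the FIFO queue, so messages
// sent without a MessageDeduplicationId are deduplicated by body hash.
$sqs->setQueueAttributes([
    'QueueUrl' => 'https://sqs.us-east-1.amazonaws.com/123456789012/My_Mail_Queue.fifo', // placeholder
    'Attributes' => ['ContentBasedDeduplication' => 'true'],
]);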
Related
I've upgraded Lumen from 5.8 to 9.1 and PHP from 7.x to 8.1.
In my environment we ship JSON logs to Kibana, but after the upgrade the log format changed to something like this:
2022-11-08 11:56:47 App\Domain\Email\Listeners\MoveEmailToProcessed{"message":"Marked email as processed","context":{"object_key":"incoming/0i02qrl9t4llea3fhfm2t6184r3nv9rs7nb4cu01","email":"wx1vx6fnek7rqhcr@lead-import.test.com"},"level":200,"level_name":"INFO","channel":"production","datetime":"2022-11-08T11:56:48.006053+00:00","extra":{}}
447.64ms DONE
while it was and should stay raw JSON like this:
{"message":"Marked email as processed","context":{"object_key":"incoming/0i02qrl9t4llea3fhfm2t6184r3nv9rs7nb4cu01","email":"wx1vx6fnek7rqhcr#lead-import.test.com"},"level":200,"level_name":"INFO","channel":"production","datetime":"2022-11-08T11:56:48.006053+00:00","extra":{}}
I have this configured as a separate channel in Lumen using Monolog:
'stdout-json' => [
    'driver' => 'monolog',
    'handler' => StreamHandler::class,
    'with' => [
        'stream' => 'php://stdout',
    ],
    'formatter' => Monolog\Formatter\JsonFormatter::class,
],
Since the format changed, the logs don't end up in Kibana as expected. The question is, what am I missing? The config didn't change, and I didn't see anything specific in any upgrade guide I followed. Any idea what might be wrong?
Oh, and I don't use any Lumen 'shortcuts', so the logger is injected as it should be. The code looks like this:
public function __construct(Cloud $s3, LoggerInterface $logger)
{
    $this->s3 = $s3;
    $this->logger = $logger;
}

public function handle(EmailProcessed $emailProcessed): void
{
    $filename = explode('/', $emailProcessed->getObjectKey())[1];
    $newPath = sprintf('%s/%s', self::PROCESSED_DIR, $filename);

    if ($this->s3->exists($newPath)) {
        $this->s3->delete($newPath);
    }

    $this->s3->move($emailProcessed->getObjectKey(), $newPath);

    $this->logger->info(
        'Marked email as processed',
        [
            'object_key' => $emailProcessed->getObjectKey(),
            'email' => $emailProcessed->getEmail(),
        ]
    );
}
I am trying to understand whether my issue is a limitation in Laravel's logic or a problem in my config. I have this in my queue.php file:
'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'token' => env('AWS_SESSION_TOKEN'),
        'prefix' => env('SQS_PREFIX'),
        'region' => env('AWS_REGION'),
    ],
],
Now, when I run this command:
php artisan queue:work sqs --tries 3 --queue='high-priority'
I would expect it to process the outstanding jobs in the 'high-priority' queue; instead, it throws this exception:
ErrorException : Undefined index: queue
at {redacted}/vendor/laravel/framework/src/Illuminate/Queue/Connectors/SqsConnector.php:26
22| $config['credentials'] = Arr::only($config, ['key', 'secret', 'token']);
23| }
24|
25| return new SqsQueue(
> 26| new SqsClient($config), $config['queue'], $config['prefix'] ?? ''
27| );
28| }
29|
30| /**
Now I can see that Laravel constructs the new SqsQueue using the config value under the queue key, so it seems that to use multiple queues here I would need to define an individual connection for every queue (which seems like complete overkill).
So really, I'm wondering whether this is a limitation or whether I've misunderstood how the config is passed into the construction of the SqsClient.
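For what it's worth, in the stock connector $config['queue'] is only the default queue name; the --queue option overrides it per worker, so one connection with a queue key should cover multiple queues. A sketch, assuming SQS_QUEUE holds the default queue name and the prefix resolves to the right queue URLs:

'sqs' => [
    'driver' => 'sqs',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'token' => env('AWS_SESSION_TOKEN'),
    'prefix' => env('SQS_PREFIX'),
    'queue' => env('SQS_QUEUE', 'default'), // default queue name; --queue=... overrides it at runtime
    'region' => env('AWS_REGION'),
],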
class ProcessComment extends Job implements ShouldQueue
{
    use InteractsWithQueue;

    /**
     * @var int
     */
    public $tries = 1;

    public function handle(SomeDependency $someDependency)
    {
        // method body....
        // tries to connect to a database
        // deliberately provide the wrong database url so that the job
        // will throw an exception and hence fail
    }
}
The problem is that when I run php artisan queue:work --daemon or php artisan queue:work --daemon --tries=1, the --tries option doesn't seem to work. In my Redis queue I continuously see the attempts it makes, like this. It should try only once and, if the job fails, ignore it and move on.
"EXEC"
1522044746.165780 [0 172.20.0.5:48992] "WATCH" "queues:comments:reserved"
1522044746.166110 [0 172.20.0.5:48992] "ZRANGEBYSCORE" "queues:comments:reserved" "-inf" "1522044746"
1522044746.166718 [0 172.20.0.5:48992] "UNWATCH"
1522044746.167436 [0 172.20.0.5:48992] "LPOP" "queues:comments"
1522044746.168051 [0 172.20.0.5:48992] "ZADD" "queues:comments:reserved" "1522044806" {"some serialized data here ... "attempts: 4"}
and so on
This is my config/queue.php:
'default' => env('QUEUE_DRIVER', 'redis'),

'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'comments',
    ],
],
I've Googled this a lot but couldn't find a satisfactory answer.
Thanks
I'm using Phinx to execute migrations across hundreds of applications on multiple servers. Every application should execute the same migrations.
To make this work, there is an instance of the app on a central server that is aware of all the configs and other information needed for the bootstrap process (which is done based on applicationId).
That central instance (let's call it adminapp) executes a command that receives applicationIds through STDIN and then loops over them, bootstrapping each application and running the migration command.
<?php

namespace Command\Db;

use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use App\Command\AppCommand;

class MigrateBulkCommand extends AppCommand
{
    protected function configure()
    {
        $this
            ->setName('command:blah')
            ->setDescription('Executes SQL migrations across multiple applications. Expects ApplicationIDs to be passed as a new-line delimited string on STDIN.')
        ;
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $stdin = $this->getStdin();

        if ($stdin === false) {
            throw new \RuntimeException("Bulk migration command requires applicationIds to be passed to STDIN.");
        }

        $applicationIds = explode("\n", $stdin);

        foreach ($applicationIds as $applicationId) {
            try {
                $this->bootstrap($applicationId);
            } catch (\Exception $e) {
                $output->writeln(sprintf("<error>Bootstrap process failed for applicationId `%s`</error>", $applicationId));
                continue; // skip migrating an application that failed to bootstrap
            }

            $command = new \Phinx\Console\Command\Migrate();
            $migrationInput = new \Symfony\Component\Console\Input\ArrayInput([]);
            $returnCode = $command->run($migrationInput, $output);

            $output->writeln(sprintf("<info>Migrations for applicationId `%s` executed successfully.</info>", $applicationId));
        }
    }
}
Now, Phinx expects its configuration to be present in the form of a config file. What I'm trying to do is reuse the DB connection resource (PDO) and pass it to the Phinx command Phinx\Console\Command\Migrate on the fly, together with the DB name.
I've seen in the Phinx documentation that this is possible with a PHP config file, but I can't find a way to do it on the fly (during initialization of the Phinx\Console\Command\Migrate class).
The Phinx docs suggest:
require 'app/init.php';

global $app;
$pdo = $app->getDatabase()->getPdo();

return array('environments' =>
    array(
        'default_database' => 'development',
        'development' => array(
            'name' => 'devdb',
            'connection' => $pdo
        )
    )
);
Is there a way, without horrible hacking, to pass a PDO connection resource and DB name to \Phinx\Console\Command\Migrate?
I ended up extending the Phinx config class \Phinx\Config\Config and creating a fromArray method:
$command = new \Phinx\Console\Command\Migrate();

$command->setConfig(\MyNamespace\Config::fromArray([
    'paths' => [
        'migrations' => APPLICATION_PATH . "/../db/migrations",
        'seeds' => APPLICATION_PATH . "/../db/seeds"
    ],
    'environments' => [
        'default_database' => 'production',
        'production' => [
            'name' => $db->get('dbname'),
            'adapter' => 'mysql',
            'host' => $db->get('host'),
            'port' => $db->get('port'),
            'user' => $db->get('username'),
            'pass' => $db->get('password'),
        ]
    ]
]));
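Based on the config-file example from the Phinx docs quoted in the question, the same config array should also accept the already-open PDO handle via the connection key instead of repeating the credentials. A minimal sketch, assuming $pdo and $dbName come from the bootstrapped application:

$command = new \Phinx\Console\Command\Migrate();

$command->setConfig(new \Phinx\Config\Config([
    'paths' => [
        'migrations' => APPLICATION_PATH . "/../db/migrations",
    ],
    'environments' => [
        'default_database' => 'production',
        'production' => [
            // Reuse the existing PDO connection; Phinx then only needs
            // the database name alongside it.
            'name' => $dbName,
            'connection' => $pdo,
        ],
    ],
]));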
I used the filesystems.php file to configure my S3.
When I try to put content in the bucket I receive this error:
Encountered a permanent redirect while requesting https://s3-us-west-2.amazonaws.com/MYBUCKET... Are you sure you are using the correct region for this bucket?
I then tried to access the URL and got this message:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
On the same page I get:
<Endpoint>s3.amazonaws.com</Endpoint>
How could I remove the region from the URL Laravel generates?
You can create a custom service provider like this:
use Illuminate\Support\ServiceProvider;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter;
use League\Flysystem\Filesystem;
use Aws\Laravel\AwsServiceProvider;
use Storage;

class AwsS3ServiceProvider extends ServiceProvider
{
    /**
     * Perform post-registration booting of services.
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('s3', function ($app, $config) {
            $client = new S3Client([
                'credentials' => [
                    'key' => $config['key'],
                    'secret' => $config['secret'],
                ],
                'region' => $config['region'],
                'version' => $config['version'],
                'endpoint' => $config['endpoint'],
                'ua_append' => [
                    'L5MOD/' . AwsServiceProvider::VERSION,
                ],
            ]);

            return new Filesystem(new AwsS3Adapter($client, $config['bucket_name']));
        });
    }

    /**
     * Register bindings in the container.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
And add the endpoint variable to config/filesystems.php as well:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'version' => 'latest',
    'endpoint' => env('AWS_ENDPOINT'),
    'bucket_name' => env('AWS_BUCKET_NAME'),
],
See the docs for details on how to extend the Storage facade.
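Once the provider is registered, writes go through the usual Storage API and should hit the configured endpoint. A quick sketch; the path and contents are placeholders:

use Illuminate\Support\Facades\Storage;

// Uses the custom 's3' driver extension registered above, endpoint included.
Storage::disk('s3')->put('path/to/file.txt', 'file contents');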
The simplest approach is to create a new disk that uses the S3 driver in config/filesystems.php. You don't need to create a service provider - the S3 driver will pick up the endpoint from the disk config if supplied.
'spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => 'https://nyc3.digitaloceanspaces.com',
    'region' => 'nyc3',
    'bucket' => env('DO_SPACES_BUCKET'),
],
Set the DO_SPACES_KEY, DO_SPACES_SECRET and DO_SPACES_BUCKET environment variables to the appropriate values.
Source: https://laracasts.com/discuss/channels/laravel/custom-file-driver-digital-ocean-spaces?page=1
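As a quick check that the disk resolves correctly, you could write a file through it and read back its URL; the filename here is a placeholder:

use Illuminate\Support\Facades\Storage;

// Writes to the Spaces bucket through the 'spaces' disk defined above,
// then builds the public URL for the stored object.
Storage::disk('spaces')->put('example.txt', 'Hello, Spaces!');
$url = Storage::disk('spaces')->url('example.txt');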
You can use the https://github.com/aws/aws-sdk-php-laravel package.
It lets you specify your settings in a config file as desired.
If the bucket hasn't been created in AWS, create it and use the same bucket name in your filesystem config; that will resolve the issue.