I'm using Symfony 4.4 with Redis for the session.
I have some controllers, and I wish to update the DB in the background after sending a reply to the user.
So I have written a code like this:
class GetCatController extends AbstractController
{
    public function getCatController(LoggerInterface $logger, ManagerRegistry $doctrine, SessionInterface $session, ValidatorInterface $validator)
    {
        [...]
        $replyToSend = new JsonResponse($reply, 200);
        $replyToSend->send();
        // my background activity, e.g. some queries on the DB
        [...]
        return null;
    }
}
But I have some problems with the sessions.
Is there a better way to create and run background work after sending the reply to the user?
There are two decent ways to do this.
If you are running PHP under php-fpm (not mod_php), you can listen for the kernel.terminate event and do the work after the response has been sent. In Symfony pre-4.4 the event object is called PostResponseEvent (it is TerminateEvent from 4.4/5.0).
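As a minimal sketch (the class name and the work inside are illustrative), a subscriber on kernel.terminate in Symfony 4.4 could look like this; under php-fpm it runs after the response has already been flushed to the client:

namespace App\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\TerminateEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class BackgroundWorkSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents(): array
    {
        return [KernelEvents::TERMINATE => 'onTerminate'];
    }

    public function onTerminate(TerminateEvent $event): void
    {
        // put the extra DB queries / slow work here;
        // the client has already received the JSON response at this point
    }
}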
The better choice may be with Symfony Messenger. Here, you would create a message object, with all the information needed to perform the task, and send it to a background queue (Redis is supported as a queue). A worker then consumes that message, and does the task.
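A rough sketch of that Messenger setup, assuming Messenger is installed and a redis transport is configured and routed in messenger.yaml (all class names here are illustrative):

// src/Message/UpdateCatStats.php - the message carries only the data the task needs
namespace App\Message;

class UpdateCatStats
{
    private $catId;

    public function __construct(int $catId)
    {
        $this->catId = $catId;
    }

    public function getCatId(): int
    {
        return $this->catId;
    }
}

// src/MessageHandler/UpdateCatStatsHandler.php - runs in the worker started
// with `bin/console messenger:consume`, not in the web request
namespace App\MessageHandler;

use App\Message\UpdateCatStats;
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class UpdateCatStatsHandler implements MessageHandlerInterface
{
    public function __invoke(UpdateCatStats $message)
    {
        // do the DB work for $message->getCatId() here
    }
}

// In the controller: dispatch and return immediately
// (MessageBusInterface $bus would be injected as an argument)
$bus->dispatch(new UpdateCatStats($catId));
return new JsonResponse($reply, 200);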
At my work, we are currently struggling with the UUID generation approach in a web application.
We do DDD and our persistent storage is a remote API that we own.
Here is a simplified example of my code:
class HireFooHandler
{
    private $repository;
    private $mailer;

    public function __construct(FooRepository $repository, HiringMailer $mailer, ...)
    {
        $this->repository = $repository; // behind the repository, there are API calls
        $this->mailer = $mailer;
        ...
    }

    public function handle(HireFooCommand $command)
    {
        try {
            $foo = new Foo($command->uuid, $command->baz, ...);
            $this->repository->hire($foo);
            $this->mailer->sendHiredMail($foo);
            ...
        } catch (...) {
            ...
        }
    }
}
Some of my co-workers are not comfortable with the app generating the UUID and sending it to the API.
They would prefer to let the server side handle the UUID generation and, if in this example "foo" is saved, send the generated UUID back to the app in the response.
The main argument is that the API will be public too, and they don't want clients to have to generate the UUID.
On the other hand, from the point of view of my application and code, since my entity Foo should never be in an invalid state, I have to provide the UUID to create it, and therefore generate it on the client side, which I think is fine.
So here I am, struggling with this and not knowing what the best approach is: where should I generate my UUID? Client side? Server side?
I proposed making the API more flexible by adding an optional uuid field to the POST endpoint that can be validated, so the API can receive a UUID from the client and, if none is given, generate its own.
But I'm not sure this is good practice.
If someone has some input, I'll be glad to read it :)
You should understand why it is good to have UUID generation on the client side; the biggest benefit for me is that it allows doing async or complex stuff. For example, the client can send a request to a server and, without waiting for the response, start working with the generated UUID. This approach gives the client a lot of benefit from UUIDs, and your PHP app as well once you start using UUIDs instead of database-generated IDs, because you no longer have to wait for the insert and can, for example, send events right away. Also, there is a kind of middle solution: generating the UUID in the controller while creating the command. There is a more interesting concept behind this: commands never fail.
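As a rough sketch of that middle solution (ramsey/uuid and the exact HireFooCommand constructor are assumptions on my part):

use Ramsey\Uuid\Uuid;

function hireFoo($commandBus, string $baz): string
{
    // the identity is generated app-side, so the command (and the Foo entity
    // built from it) is complete and valid from the start
    $uuid = Uuid::uuid4()->toString();

    $commandBus->handle(new HireFooCommand($uuid, $baz));

    // the caller already knows the id and does not have to wait for the
    // remote API's response to get it
    return $uuid;
}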
I'm adding functionality to a pre-existing app, using Laravel 5.8.38 and the SQS queue driver.
I'm looking for a way to log the receipt handle of queue messages as they're processed, so that we can manually delete messages from the queue for jobs that have gone horribly wrong (without the receipt handle, we'd have to wait for the visibility timeout to be reached).
I'm not super familiar with Laravel and am trying to figure things out as I go. We have two types of queued jobs:
a custom class implementing Illuminate\Contracts\Queue\ShouldQueue, that also uses the Illuminate\Queue\InteractsWithQueue, Illuminate\Foundation\Bus\Dispatchable and Illuminate\Bus\Queueable traits (our class gets queued directly)
a custom command, extending Illuminate\Console\Command, that runs via Illuminate\Foundation\Console\QueuedCommand
For the custom class, browsing through the source for InteractsWithQueue and Illuminate/Queue/Jobs/SqsJob I discovered I could access the receipt handle directly:
$sqsJob = $this->job->getSqsJob();
\Log::info("Processing SQS job {$sqsJob["MessageId"]} with handle {$sqsJob["ReceiptHandle"]}");
This works great! However, I can't figure out how to do a similar thing from the console command.
Laravel's QueuedCommand implements ShouldQueue and uses Illuminate\Bus\Queueable, so my current guess is that I'll need to extend this, use InteractsWithQueue, and retrieve and log the receipt handle from there. However, if I do that, I can't figure out how I would modify Artisan::queue('app:command', $commandOptions); to queue my custom QueuedCommand class instead.
Am I almost there? If so, how can I queue my custom QueuedCommand class instead of the Laravel one? Or, is there a better way to do this?
OK, I had just posted this question and then realised that a suggestion a colleague offered provided a way forward.
So, here's my solution in case it helps anyone else!
Laravel fires an Illuminate\Queue\Events\JobProcessing event when processing any new queue job. I just needed to register a listener in app/Providers/EventServiceProvider.php:
protected $listen = [
    'Illuminate\Queue\Events\JobProcessing' => [
        'App\Listeners\LogSQSJobDetails',
    ],
];
and then provide the listener to handle it:
namespace App\Listeners;

use Illuminate\Queue\Events\JobProcessing;

class LogSQSJobDetails
{
    public function __construct()
    {
    }

    public function handle(JobProcessing $event)
    {
        // the job on the event is the SqsJob instance, so the raw SQS payload
        // (including MessageId and ReceiptHandle) is available via getSqsJob()
        $sqsJob = $event->job->getSqsJob();
        \Log::info("Processing SQS job {$sqsJob["MessageId"]} with handle {$sqsJob["ReceiptHandle"]}");
    }
}
This works great - and means I can also now remove the addition to my custom class from earlier.
I'm using Lumen (Laravel) to handle an AWS SQS queue with the sqs-plain driver for the connection. That driver works the same way as the default sqs driver but allows me to use custom JSON in the queue message body.
Based on that package I've created a job handler that works fine, but I'm having trouble understanding how to properly release a message back to the queue.
In the handler method I've injected a SqsJob and tried calling the release method which isn't working as I'd expect.
It seems that the method SqsJob::release calls changeMessageVisibility on the SQS API and the parent release method, which simply sets a class variable $this->released = true; on the job.
So the "releasing" of the message isn't really done, as much as the message is just being marked as released and the visibility of the message in the queue is changed. Not handling the release would result in the message being "handled" and deleted from the queue.
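For reference, SqsJob::release() in Laravel 5.x does roughly this (paraphrased from the framework source, not copied verbatim):

public function release($delay = 0)
{
    parent::release($delay); // only marks the job: $this->released = true

    // pushes the message's visibility timeout forward; it does not re-queue
    // or delete the message itself
    $this->sqs->changeMessageVisibility([
        'QueueUrl' => $this->queue,
        'ReceiptHandle' => $this->job['ReceiptHandle'],
        'VisibilityTimeout' => $delay,
    ]);
}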
By trial and error I found out that throwing an exception in the handler method sends the message back to the queue.
Here's a simplified version of the job handler:
namespace App\Jobs;

use Illuminate\Contracts\Queue\Job as LaravelJob;
use Illuminate\Queue\Jobs\SqsJob;
use Illuminate\Support\Facades\Log;

class HandlerJob extends Job
{
    protected $data;

    /**
     * @param SqsJob $job
     * @param array  $data
     */
    public function handle(SqsJob $job, array $data)
    {
        if ($this->validData($data)) {
            // handle the job
        } else {
            $job->release(60);
            // actually release the job back to the queue
            throw new \Exception('Invalid Data');
        }
    }

    private function validData($data)
    {
        // not relevant
    }
}
This seems like a basic task to be handled by the queue worker but I can't figure it out.
What is the proper way of releasing a message back into the queue using the Laravel/Lumen framework?
I am wondering how to deal with this. I have a webhook endpoint which responds to a webhook call from Github.
It starts a long-running process in which it clones the repository from which the webhook call was made.
/**
 * The webhook endpoint.
 *
 * @param Request $request
 * @return mixed
 * @throws \Exception
 */
public function webhook(Request $request)
{
    // The type of GitHub event that we receive.
    $event = $request->header('X-GitHub-Event');

    $url = $this->createCloneUrl();
    $this->cloneGitRepo($url);

    return new Response('Webhook received successfully', 200);
}
The problem with this is that Github gives an error when the response is not provided soon enough.
We couldn’t deliver this payload: Service Timeout
This is rightfully so because my cloneGitRepo method is simply blocking the execution of the response and it is taking too long.
How can I still deliver a response to acknowledge to Github that the webhook call has been made successfully and start my long running process?
I am using Laravel for all of this with Redis, maybe something can be accomplished there? I am open to all suggestions.
What you're looking for is a queued job. Laravel makes this very easy with Laravel Queues.
With queues, you set up a queue driver (database, redis, Amazon SQS, etc.), and then you have one or more queue workers that are continuously running. When you put a job on the queue from your webhook method, it will be picked up by one of your queue workers and run in a separate process. Dispatching a job to the queue is very quick, so your webhook method will return quickly while the real work is done by the queue worker.
The linked documentation has all the details, but the general process will be:
Setup a queue connection. You mention you're already using redis, I would start with that.
Use php artisan make:job CloneGitRepo to create a CloneGitRepo job class.
It should implement the Illuminate\Contracts\Queue\ShouldQueue interface so that Laravel knows to send this job to a queue when it is dispatched.
Make sure to define properties on the class for any data you pass into the constructor. This is necessary so the worker can rebuild the job correctly when it is pulled off the queue.
The queue worker will call the handle() method to process the job. Any dependencies can be type hinted here and they will be injected from the IoC container (a sketch of such a job class is shown after the dispatch examples below).
To dispatch the job to the queue, you can either use the global dispatch() helper function, or call the static dispatch() method on the job itself.
dispatch(new CloneGitRepo($url));
CloneGitRepo::dispatch($url);
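A minimal sketch of what that CloneGitRepo job class might look like (the clone command inside handle() is purely illustrative):

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class CloneGitRepo implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    private $url;

    public function __construct(string $url)
    {
        // keep the data the worker needs to rebuild the job
        $this->url = $url;
    }

    public function handle()
    {
        // long-running work happens here, in the queue worker process
        exec('git clone ' . escapeshellarg($this->url));
    }
}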
So, your webhook would look like:
public function webhook(Request $request)
{
    // The type of GitHub event that we receive.
    $event = $request->header('X-GitHub-Event');

    $url = $this->createCloneUrl();
    CloneGitRepo::dispatch($url);

    return new Response('Webhook received successfully', 200);
}
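On the worker side (assuming the redis connection from config/queue.php), you keep a worker process running, e.g. under Supervisor; note that the env key is QUEUE_DRIVER on older 5.x releases and QUEUE_CONNECTION on newer ones:

# .env
QUEUE_CONNECTION=redis

# keep this running so dispatched jobs are actually processed
php artisan queue:work redis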
I have a Slack notification class that sends a message inside our company Slack account, in a specific channel, every time a user performs the activation process.
The system works, but it's manually tested and that's not cool.
The notification is sent by a listener attached to an UserHasBeenActivated event, the listener is the following:
public function handle(UserHasBeenActivated $event)
{
    Notification::route("slack", config("services.slack.user.url"))
        ->notify(new UserActivated($event->user));
}
Pretty straightforward. The problem here is that the notification is on-demand and thus difficult to test... because there isn't any sort of documentation on how to test on-demand notifications!
At the moment I'm stuck here:
public function it_sends_a_notification_when_an_user_is_activated()
{
    Notification::fake();

    $user = factory(User::class)->states("deleted")->create();
    $user->activate();

    Notification::assertSentTo(
        $user,
        UserActivated::class
    );
}
Of course this test fails: the activate() method is what triggers the UserHasBeenActivated event and, in turn, all the listeners, one of which sends the corresponding notification.
Do you know how to test on-demand notifications? Is there any hidden API that I am missing?
For the newcomers
Laravel introduced in v5.5.14 the ability to test anonymous (on-demand) notifications using the provided Notification::fake() system.
More about this new feature here: https://github.com/laravel/framework/pull/21379
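As a sketch of how the test above can be rewritten with that feature (assuming Laravel >= 5.5.14; the exact assertion callback signature has shifted slightly between framework versions): on-demand notifications are recorded against an AnonymousNotifiable rather than the user, so that is what you assert against.

use Illuminate\Notifications\AnonymousNotifiable;
use Illuminate\Support\Facades\Notification;

/** @test */
public function it_sends_a_notification_when_an_user_is_activated()
{
    Notification::fake();

    $user = factory(User::class)->states("deleted")->create();
    $user->activate();

    Notification::assertSentTo(
        new AnonymousNotifiable,
        UserActivated::class,
        function ($notification, $channels, $notifiable) {
            // check that it was routed to the configured Slack webhook
            return $notifiable->routes['slack'] === config('services.slack.user.url');
        }
    );
}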