There I am, diving into the world of queues and all their goodness, and it hit me:
session data is lost when the application pushes a task to the queue, because Laravel serializes the job's information.
Having found out how to send data to queues, a question remains:
given that the queue pushes information to a single class,
how do I make that information persistent (like a session) across other classes for the duration of this task?
Coding Example:
//Case where the user object is needed by each class
class queueme {
...
//function called by queue
function atask($job,$data)
{
//Does xyz
if (isset($data['user_id']))
{
//Push user_id to another class
anotherclass::anothertask($data['user_id']);
}
}
}
class anotherclass {
...
function anothertask($user_id)
{
//Does abc
//Yup, that anotherofanother class needs user_id, we send it again.
anotherofanotherclass::yetanothertask($user_id);
}
}
The above code illustrates my problem.
Do I have to pass the $user_id or User object around if my classes need this information?
Isn't there a cleaner way to do it?
When you queue up a job, you should pass all the data the job needs to do its work. So if it's a job to resize a user's avatar, the necessary information is the user's primary key, so we can pull their model out in the job. Just like if you're viewing a user's profile page in the browser, the necessary information (the user's primary key) is likely provided in the request URI (e.g. users/profile/{id}).
Sessions won't work for queue jobs, because sessions are used to carry state over from browser requests, and queue jobs are run by the system, so they simply don't exist. But that's fine, because it's not good practice for every class to be responsible for looking up data. The class that handles the request (a controller for an HTTP request, or a job class for a queue job) can take the input and look up models and such, but every call thereafter can pass those objects around.
Back to the user avatar example. You would pass the ID of the user as a primitive when queueing the job. You could pass the whole user model, but if the job is delayed for a long time, the state of that user could have changed in the meanwhile, so you'd be working with inaccurate data. And also, as you mention, not all objects can be serialised, so it's best to just pass the primary key to the job and it can pull it fresh from the database.
So queue your job:
Queue::push('AvatarProcessor', [$user->id]);
When your job is fired, pull the user fresh from the database and then you're able to pass it around to other classes, just like in a web request or any other scenario.
class AvatarProcessor {
public function fire($job, $data)
{
$user_id = $data[0]; // the user id is the first item in the array
$user = User::find($user_id); // re-pull the model from the database
if ($user == null)
{
// handle the possibility the user has been deleted since
// the job was pushed
}
// Do any work you like here. For an image manipulation example,
// we'll probably do some work and upload a new version of the avatar
// to a cloud provider like Amazon S3, and change the URL to the avatar
// on the user object. The method accepts the user model, it doesn't need
// to reconstruct the model again
(new ImageManipulator)->resizeAvatar($user);
$user->save(); // save the changes the image manipulator made
$job->delete(); // delete the job since we've completed it
}
}
As mentioned by maknz, the data needs to be passed explicitly to the job. But in the job handle() method, you can use session():
public function handle()
{
    session()->put('query_id', 'H123214e890890');
    // ...
}
Then your variable is directly accessible in any class:
$query_id = session()->get('query_id');
I was given the idea to use Queue::before in the AppServiceProvider as a way to check for jobs I no longer want to run and delete them, without having to add checks to every job I write.
Background: I am working on a SaaS that does audits; an audit can run for hours and consist of thousands of jobs. If I can look for an audit id inside the jobs as they come through and compare it with a cached array of audit ids that have been cancelled, I can save time.
So where I am stuck is: how do I unwrap the job in Queue::before to get an id to check? (Standard Laravel queue code, using RabbitMQ.)
The jobs are wrapped in a layer or two of event classes, and I cannot dump the data to the screen to inspect it, only to log files, since it is running in the queue.
In app/Providers/AppServiceProvider.php:
Queue::before(function (JobProcessing $event) {
    // $event->connectionName
    // $event->job
    $job = $event->job->payload();
    $obj = unserialize($job['data']['data']);
});
As far as I can tell, for the events I am interested in, the payload has a data key, which itself has a data key holding the serialised object I am after. This does not seem like the best way to do it, and I cannot see how to interact with it in a better way.
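For illustration, the kind of check I am hoping to end up with would look roughly like this (the payload layout, the getAuditId() accessor on the unserialized object, and the cache key holding the cancelled audit ids are all guesses on my part):
use Illuminate\Queue\Events\JobProcessing;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Queue;

// In AppServiceProvider::boot()
Queue::before(function (JobProcessing $event) {
    $payload = $event->job->payload();
    $wrapped = unserialize($payload['data']['data']); // the inner event/command object

    // Array of cancelled audit ids, maintained elsewhere (assumed cache key)
    $cancelledAuditIds = Cache::get('cancelled_audit_ids', []);

    if (in_array($wrapped->getAuditId(), $cancelledAuditIds)) {
        $event->job->delete(); // drop the job without running it
    }
});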
Thanks.
I am in the middle of a similar problem involving webhook delivery. Through a developer portal, we are allowing users to re-queue a webhook (to short-cut the wait on backed-off delivery attempts). Since this could create a second job for the same webhook, we sought a way to identify the original as out of date.
app/Jobs/DeliverWebhook.php constructor:
public function __construct(Webhook $webhook)
{
$this->webhook = $webhook;
$this->queued_at = Carbon::now();
Cache::put(
'DeliverWebhook.'. $this->webhook->id .'.QueuedAt',
$this->queued_at,
Carbon::now()->addDays(3)
);
}
Here, you can see we've attached a queued_at attribute to this instance of the job. (We can probably also make this more unique with use of something like uniqid() or random_bytes() to avoid potential double-click issues or similar hiccups when queuing.)
The second part is that we set the semi-unique cache key to match this queued_at time. I set it to expire in 3 days, past the end of our backed-off retry attempts.
Now, when a job is picked up for processing, I can check the job instance's queued_at attribute against the cached value, and delete the job if it is old.
In my AppServiceProvider boot method:
Queue::before(function ($event) {
if ($event->job->queue == 'webhooks' && $event->job->getName() == 'DeliverWebhook') {
$cache_key = 'DeliverWebhook.'. $event->job->instance->webhook->id .'.QueuedAt';
if ($event->job->instance->queued_at < Cache::get($cache_key)) {
$event->job->delete();
throw new JobRequeuedException;
}
}
});
An exception is thrown at the end because the queue worker, by default, does not check if the job is deleted before calling $job->fire(). Throwing the exception forces the worker to skip fire() and jump into the handleJobException() method.
NOTE: I still need to test this appropriately.
I am implementing a PHP application with CQRS.
Let's say I have a CreateOrderCommand, and when I do
$command = new CreateOrderCommand(/** some data**/);
$this->commandBus->handle($command);
The CommandBus just passes the command to the proper CreateOrderCommandHandler class, as simply as:
abstract class SimpleCommandBus implements CommandBus
{
/** @var ICommandHandlerLocator */
protected $locator;
/**
* Executes command
*
* @param Command $command
*/
public function handle(Command $command)
{
$handler = $this->locator->getCommandHandler($command);
$handler->handle($command);
}
}
Everything is OK so far.
But handle() is a void method, so I do not know anything about the progress or the result. What can I do to be able to, for example, fire a CreateOrderCommand and then, in the same process, acquire the newly created entity's id (probably with some passive waiting for its creation)?
public function createNewOrder(/** some data**/){
$command = new CreateOrderCommand(/** some data**/);
$this->commandBus->handle($command);
// something that will wait until command is done
$createdOrder = // some magic that retrieves some adress to result data
return $createdOrder;
}
And to get closer to what CQRS can provide, the command bus should be able to have a RabbitMqCommandBus implementation that just serializes the command and sends it to a RabbitMQ queue.
The process that finally handles the command might then be some long-running consumer, and some kind of communication between processes is needed here, so that the consumer can somehow inform the original user process that it is done (with some information, for example the id of the new entity).
I know there is a solution with a GUID: I could mark the command with a GUID. But then what:
public function createNewOrder(/** some data**/){
$command = new CreateOrderCommand(/** some data**/);
$this->commandBus->handle($command);
$guid = $command->getGuid();
// SOME IMPLEMENTATION
return $createdOrder;
}
SOME IMPLEMENTATION should watch for events (so I need to implement some event system too) related to the command with that specific GUID, so that it can, for example, echo progress, or on an OrderCreatedEvent simply return the ID carried by that event. The consumer process that asynchronously handles the command might, for example, feed events to RabbitMQ, and the user-facing client would pick them up and respond appropriately (echo progress, or return the newly created entity, for example).
But how do I do that? And is the GUID solution the only one? What are acceptable implementations? Or what point am I missing? :)
The easiest way to get the id of the created aggregate/entity is to add it to the command. So the frontend generates the id and passes it along with the data. But to make this solution work, you need to use UUIDs instead of normal database integers, otherwise you may end up duplicating identifiers on the database side.
If the command is async and performs time-consuming actions, you can of course publish events from the consumer, so the client receives the information in real time via e.g. websockets.
Or poll the backend from time to time for the existence of the order with the id from the command, and when the resource exists, redirect the user to the right page.
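A rough sketch of the first option, assuming the ramsey/uuid package (the command and bus names mirror the ones in your question; the extra constructor argument for the id is an assumption):
use Ramsey\Uuid\Uuid;

public function createNewOrder(/** some data **/)
{
    // The caller generates the identifier up front, so it is known immediately,
    // even if the command is handled asynchronously later.
    $orderId = Uuid::uuid4()->toString();

    $command = new CreateOrderCommand($orderId /*, some data */);
    $this->commandBus->handle($command);

    // No need to wait for a result: the id is already known and can be returned,
    // used to poll the backend, or used for a redirect as described above.
    return $orderId;
}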
I'm fairly new to domain driven design concepts and I've run into a problem with returning proper responses in an API while using a command bus with commands and command handlers for the domain logic.
Let's say we’re building an application with a domain driven design approach. We have a back end and front end portion. The back end has all of our domain logic with an exposed API. The front end uses the API to make requests to the application.
We're building our domain logic with commands and command handlers mapped to a command bus. Under our Domain directory we have a command for creating a post resource called CreatePostCommand. It's mapped to its handler CreatePostCommandHandler via the command bus.
final class CreatePostCommand
{
private $title;
private $content;
public function __construct(string $title, string $content)
{
$this->title = $title;
$this->content = $content;
}
public function getTitle() : string
{
return $this->title;
}
public function getContent() : string
{
return $this->content;
}
}
final class CreatePostCommandHandler
{
private $postRepository;
public function __construct(PostRepository $postRepository)
{
$this->postRepository = $postRepository;
}
public function handle(Command $command)
{
$post = new Post($command->getTitle(), $command->getContent());
$this->postRepository->save($post);
}
}
In our API we have an endpoint for creating a post. This is routed to the createPost method in a PostController under our Application directory.
final class PostController
{
private $commandBus;
public function __construct(CommandBus $commandBus)
{
$this->commandBus = $commandBus;
}
public function createPost($req, $resp)
{
$command = new CreatePostCommand($req->getTitle(), $req->getContent());
$this->commandBus->handle($command);
// How do we get the data of our newly created post to the response here?
return $resp;
}
}
Now in our createPost method we want to return the data of our newly created post in our response object so our front end application can know about the newly created resource. This is troublesome since we know that by definition the command bus should not return any data. So now we're stuck in a confusing position where we don't know how to add our new post to the response object.
I'm not sure how to proceed with this problem from here, several questions come to mind:
Is there an elegant way to return the post's data in the response?
Am I incorrectly implementing the Command/CommandHandler/CommandBus pattern?
Is this simply just the wrong use case for the Command/CommandHandler/CommandBus pattern?
First, notice that if we wire the controller directly to the command handler, we face a similar problem:
public function createPost($req, $resp)
{
$command = new CreatePostCommand($req->getTitle(), $req->getContent());
$this->createPostCommandHandler->handle($command);
// How do we get the data of our newly created post to the response here?
return $resp;
}
The bus is introducing a layer of indirection, allowing you to decouple the controller from the command handler, but the problem you are running into is more fundamental.
I'm not sure how to proceed with this problem from here
TL;DR - tell the domain what identifiers to use, rather than asking the domain what identifier was used.
public function createPost($req, $resp)
{
// TADA
$postId = $req->getPostId();
$command = new CreatePostCommand($postId, $req->getTitle(), $req->getContent());
$this->createPostCommandHandler->handle($command);
// happy path: redirect the client to the correct url
$this->redirectTo($resp, $postId);
}
In short, the client, rather than the domain model or the persistence layer, owns the responsibility of generating the id of the new entity. The application component can read the identifier in the command itself, and use that to coordinate the next state transition.
The application, in this implementation, is simply translating the message from the DTO representation to the domain representation.
An alternative implementation uses the command identifier, and derives from that command the identities that will be used
$command = new CreatePostCommand(
$this->createPostId($req->getMessageId()),
$req->getTitle(), $req->getContent());
Named UUIDs are a common choice in the latter case; they are deterministic, and have small collision probabilities.
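For example, with the ramsey/uuid package (an assumption; any library offering name-based UUIDs works, and the namespace constant here is just an illustrative choice), createPostId() could be as small as:
use Ramsey\Uuid\Uuid;

private function createPostId(string $messageId): string
{
    // Deterministic: the same message id always yields the same post id,
    // so retrying the same request cannot create a second post.
    return Uuid::uuid5(Uuid::NAMESPACE_URL, $messageId)->toString();
}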
Now, that answer is something of a cheat -- we've really only demonstrated that we don't need a result from the command handler in this case.
In general, we would prefer to have one; Post/Redirect/Get is a good idiom to use for updating the domain model, but when the client gets the resource, we want to make sure they are getting a version that includes the edits they just made.
If your reads and writes are using the same book of record, this isn't a problem -- whatever you read is always the most recent version available.
However, CQRS is a common architectural pattern in domain driven design, in which case the write model (handling the post) will redirect to the read model -- which is usually publishing stale data. So you may want to include a minimum version in the get request, so that the handler knows to refresh its stale cache.
Is there an elegant way to return the post's data in the response?
There's an example in the code sample you provided with your question:
public function createPost($req, $resp)
Think about it: $req is a representation of the http request message, which is roughly analogous to your command, and $resp is essentially a handle to a data structure that you can write your result into.
In other words, pass a callback or a result handle with your command, and let the command handler fill in the details.
Of course, that depends on your bus supporting callbacks; not guaranteed.
Another possibility, which doesn't require changing the signature of your command handler, is to arrange that the controller subscribes to events published by the command handler. You coordinate a correlation id between the command and the event, and use that to pull up the result event that you need.
The specifics don't matter very much -- the event generated when processing the command could be written to a message bus, or copied into a mailbox, or....
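A bare-bones sketch of the correlation-id variant (every name here -- eventMailbox, awaitByCorrelationId, getPostId, the extra constructor argument -- is made up for illustration, not a real library API):
// Inside PostController::createPost()
$correlationId = bin2hex(random_bytes(16));

$command = new CreatePostCommand($correlationId, $req->getTitle(), $req->getContent());
$this->commandBus->handle($command);

// The handler publishes a PostCreated event tagged with the same correlation id;
// the controller waits for (or polls) the matching event and reads the result from it.
$event = $this->eventMailbox->awaitByCorrelationId($correlationId, 5 /* seconds */);

if ($event !== null) {
    // write $event->getPostId() into $resp, or redirect to the new resource
}
return $resp;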
I am using this approach and I am returning command results. However, this solution works only if the command handlers are part of the same process. Basically, I'm using a mediator; the controller and the command handler each get an instance of it (usually as a constructor dependency).
Pseudo code controller
var cmd= new MyCommand();
var listener=mediator.GetListener(cmd.Id);
bus.Send(cmd);
//wait until we get a result or timeout
var result=listener.Wait();
return result;
Pseudo code command handler function
var result = new CommandResult();
// add some data here
mediator.Add(result, cmd.Id);
That's how you get immediate feedback. However, this shouldn't be used to implement a business process.
Btw, this has nothing to do with DDD; it's basically a message-driven CQS approach which can be, and is, used in a DDD app.
I am new to Laravel. I have been working on a Laravel 5 app with different types of users. I need to know which type of user is currently logged in, in different sections of my views.
Currently, I have been doing something like the following in various controller methods; with the user object, I can determine which type of user it is in my view:
In Controller:
public function someMethod(){
$user = Auth::user();
return view('applications.show', compact('user'));
}
In View:
if($user->is_manager)
// do this
else if($user->is_admin)
// do that
Because I need information about the user type in various views, I have been calling Auth::user() in several places, and I am beginning to think this is adding load on the DB. Is it better to store the user type in a session variable, and what kind of data should I be storing in my session?
It wouldn't be an issue storing it in the session.
In the is_manager function in your User class, you could do something like the following...
public function is_manager()
{
// Check if the session has been set first.
if(\Session::has('is_manager')) {
return \Session::get('is_manager');
}
// Do your necessary logic to determine if the user is a manager, ex...
$is_manager = $this->roles()->where('name', '=', 'manager')->count() == 1;
// Drop it in the session
\Session::put('is_manager', $is_manager);
return $is_manager;
}
Keep in mind if your session driver is set to database, then this obviously isn't going to help.
We have organized common model calls as follows:
Base classes for controllers, models, libraries, composers, commands and jobs that fetch these models. All related classes extend these base classes and thus have everything they have.
A master view composer to serve as the base data gatherer for all views.
Query caching through Redis. All the above classes get the user through a cached query with a timeout of an hour, so the query is executed only once per hour per user.
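The cached lookup itself can be as simple as something along these lines (key name and lifetime are illustrative; note that Laravel 5 versions before 5.8 interpret the TTL as minutes, later versions as seconds):
use Illuminate\Support\Facades\Cache;

// Fetch the user once per hour; calls within that hour hit Redis instead of the database.
$user = Cache::remember('user.' . $userId, 60 * 60, function () use ($userId) {
    return User::find($userId);
});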
I started development on a CakePHP project a few weeks ago. From the beginning I have been struggling with the amount of code inside the controllers. In most cases the controllers have more lines of code than the models. Knowing the expression "skinny controller, fat model", I have been searching for some days now for a way to put more code in the models.
The question that arises at this point is: where to draw the line? What should the controller do and what should the model do? There are already some questions/answers on this, but I'm looking for a more practical explanation. As an example, I've put a function below which currently lives in the controller. I think part of this code can and should be moved to the model. So my question is: what part can I move to the model and what should remain in the controller?
/**
* Save the newly added contacts and family members.
*/
public function complete_contacts()
{
if ($this->request->is('post')) {
if (isset($this->data['FamilyMembers'])) {
$selected_user = $this->Session->read('selected_user');
$family_members = $this->data['FamilyMembers'];
$this->ContactsConnection->create();
foreach ($family_members as $family_member) {
// connection from current user to new user
$family_member['ContactsConnection']['contact_family_member_id'] = $selected_user['id'];
$family_member['ContactsConnection']['nickname'] = $selected_user['first_name'];
$this->ContactsConnection->saveAll($family_member);
// inverted connection from new user to current user
$inverted_connection['ContactsConnection']['family_member_id'] = $selected_user['id'];
$inverted_connection['ContactsConnection']['contact_family_member_id'] = $this->FamilyMember->inserted_id;
$inverted_connection['ContactsConnection']['nickname'] = $family_member['FamilyMember']['nickname'];
$this->ContactsConnection->saveAll($inverted_connection);
}
}
}
}
Should I create a function in the FamilyMember model called: "save_new_family_member($family_member, $selected_user)"?
As far as the purposes of the M and the C go:
The model manages the behavior and data of the application domain,
responds to requests for information about its state (usually from the
view), and responds to instructions to change state (usually from the
controller).
The controller receives user input and initiates a response by making
calls on model objects. A controller accepts input from the user and
instructs the model and a view port to perform actions based on that
input.
I would suggest you pass
$selected_user = $this->Session->read('selected_user');
to your Model and perform your foreach inside your Model. You may want to change the rules for how the data is stored, or perform some transformations on it, and the Controller should be blind to this. Basically, use the Controller to get your information (often from the View) to the Model. Don't directly manipulate the Model from the Controller. In short, YES, create the function that you suggested :) (a rough sketch follows below).
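For illustration, the suggested method might look roughly like this (a sketch only: it assumes FamilyMember is associated with ContactsConnection and simply mirrors the save logic from your controller; getInsertID() stands in for the inserted_id you used there):
// app/Model/FamilyMember.php
class FamilyMember extends AppModel {

    public function saveNewFamilyMember($family_member, $selected_user) {
        // connection from current user to new user
        $family_member['ContactsConnection']['contact_family_member_id'] = $selected_user['id'];
        $family_member['ContactsConnection']['nickname'] = $selected_user['first_name'];
        if (!$this->ContactsConnection->saveAll($family_member)) {
            return false;
        }

        // inverted connection from new user to current user
        $inverted = array(
            'ContactsConnection' => array(
                'family_member_id' => $selected_user['id'],
                'contact_family_member_id' => $this->getInsertID(),
                'nickname' => $family_member['FamilyMember']['nickname'],
            ),
        );
        return $this->ContactsConnection->saveAll($inverted);
    }
}
The controller loop then shrinks to a single call per member: $this->FamilyMember->saveNewFamilyMember($family_member, $selected_user);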
That being said, sometimes I find myself in a position where my Controller has to do more than I'd like; in that case, at least break the task down into helper methods so that your controller is more manageable and you can reuse code where needed.
You are doing it right.
You can of course create some methods in the model and make it fat with:
function updateContactFamilyMemberId($id){}
function updateNickname($nickname){}
...
In my opinion it would still be correct, but unnecessary.