Best Practices for Laravel 5 Command Bus - php

I am trying to refactor controllers and have taken a look at Laravel's command bus.
After reading a bunch of articles and watching a few videos, it seems that this may be a great way to refactor my controllers.
However, it also seems that I shouldn't be returning anything from a command.
When using commands you follow the Command-query separation (CQS) principle: a function is either a query (i.e. it returns something) or a command (i.e. it affects state). Both are mutually exclusive. So a command is not supposed to return anything and a query is not supposed to modify anything.
source
I have created the command CreateCustomer:
namespace App\Commands;

use QuickBooks_IPP_Service_Customer;
use App\Commands\Command;
use Illuminate\Contracts\Bus\SelfHandling;
use Illuminate\Http\Request;

class CreateCustomer extends Command implements SelfHandling
{
    private $qb;
    private $customer_service;
    private $customer;
    private $is_same_address;
    private $name;
    private $primary_phone;
    ...

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct(Request $request)
    {
        $this->qb = Quickbooks::Instance();
        $this->qb->ipp->version(QuickBooks_IPP_IDS::VERSION_3);
        $this->customer_service = new QuickBooks_IPP_Service_Customer();
        $this->is_same_address = $request->input('customer.isSameAddress');
        $this->name = ucwords(strtolower($request->input('customer.name')));
        $this->primary_phone = $request->input('customer.primaryPhone');
    }
    ...

    /**
     * Execute the command.
     *
     * @return void
     */
    public function handle()
    {
        $this->customer->setDisplayName($this->name);
        ...
        $this->customer_service->add(...);
    }
}
Three questions regarding best practices:
After calling $this->customer_service->add(), a customer id is returned. How can I send this id back to the controller?
Where would it be best to incorporate an activity log?
Activity:
$activity = new Activity();
$activity->event = 'Created Customer: ' . $this->name;
$activity->user = Auth::user()->name;
$activity->save();
Would it be best to just include this at the end of CreateCustomerCommand?
What about an event?
Event:
event(new CustomerWasCreatedOrUpdated);
I am new to application architecture and am looking for a way to keep my controllers simple and maintainable. I would love if someone could point me in the right direction.

First, kudos to you for striving to keep your controllers "simple and maintainable". You may not always achieve this goal, but reaching towards it will more often than not pay off.
How can I send the ID back to the controller?
A command is a specialized case of a general service. Your command is free to declare additional public methods that interrogate the result of changing state. If the command were used by a CLI application, that application might do something like echo $this->command->getAddedCustomerId(). A web-based controller could use it similarly.
However, the advice you quoted -- to either change state with no output or to query with output -- is sage. If you're changing state and you need to know the result of changing that state, you're probably abusing a command.
As an analogy consider the Linux command "useradd", which you would invoke like useradd -m 'Clara Barton' cbarton. That command runs and gives you just a success or failure indication. But note that you gave it the primary key, "cbarton". You can independently query for that key, like grep cbarton /etc/passwd, but importantly useradd didn't create an ID for you.
In summary, a command that changes state should at most tell you success or failure. If you wish to inspect the result of that state change, you should have given the command the keys necessary to locate the state change.
So what you probably want is a general service. A command might use that service, a controller might use the service, a model might use the service. But a service is just a general class that performs one job, and gives you the necessary API.
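As a sketch of that idea, a hypothetical CustomerService could return the new id directly to whichever caller needs it. The class name, methods, and in-memory storage below are illustrative assumptions, not the question's QuickBooks code:

```php
<?php

// Hypothetical general service: both a command handler and a controller
// could depend on it. The in-memory array stands in for real persistence.
class CustomerService
{
    private $customers = [];
    private $nextId = 1;

    /**
     * Create a customer and return its new id -- an API the caller can
     * use directly, unlike a strict CQS command.
     */
    public function createCustomer(string $name): int
    {
        $id = $this->nextId++;
        $this->customers[$id] = $name;
        return $id;
    }

    public function findName(int $id): ?string
    {
        return $this->customers[$id] ?? null;
    }
}

$service = new CustomerService();
$id = $service->createCustomer('Clara Barton');
echo $id . "\n";                      // 1
echo $service->findName($id) . "\n";  // Clara Barton
```

A controller would receive this service via constructor injection and get the id back from the same call that performs the work.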
Where to incorporate an activity log?
Assuming you're not using PHP-AOP, a careful and rigorous practice for activity logging should be established up front and followed throughout the development lifecycle.
To a large extent, the location of the activity log depends upon the major architectural model of your system. If you rely heavily on events, then a good place might be in an extension of the Event facade or a log event. If you rely on DI extensively, then you might pass the Logger in code you decide needs logging.
In the specific case of a command, you could go either way, again depending upon your major architectural model. If you eschewed events, then you would inject the logger via Laravel's normal type-hinting DI. If you leveraged events, then you might do something like Event::fire('log', new LogState('Blah Blah', compact('foo', 'bar')));
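For the DI route, a minimal sketch of injecting a logger into a handler might look like the following. The LoggerInterface and ArrayLogger here are simplified stand-ins written for this example, not the real psr/log package:

```php
<?php

// Simplified stand-in for a PSR-3 style logger interface.
interface LoggerInterface
{
    public function info($message, array $context = []);
}

// Test double that records log entries in memory.
class ArrayLogger implements LoggerInterface
{
    public $records = [];

    public function info($message, array $context = [])
    {
        $this->records[] = ['message' => $message, 'context' => $context];
    }
}

class CreateCustomerHandler
{
    private $logger;

    // The container injects whatever logger implementation is bound.
    public function __construct(LoggerInterface $logger)
    {
        $this->logger = $logger;
    }

    public function handle(string $name)
    {
        // ... create the customer ...
        $this->logger->info('Created Customer: ' . $name);
    }
}

$logger = new ArrayLogger();
(new CreateCustomerHandler($logger))->handle('Clara Barton');
echo $logger->records[0]['message'] . "\n"; // Created Customer: Clara Barton
```

Because the handler depends only on the interface, the activity log can be swapped for a null logger in tests or a database-backed logger in production.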
That said, the most important thing is that you rely on a pluggable and configurable logging service, one that you can swap out and tweak for testing, QA, and production needs.
What about events?
Well, events are great. Until they're not. In my experience, events can really get complicated as they're, IMO, abused for passing data around and affecting state.
Events are like teleporters: you're going down a path, then the event fires, and suddenly you're teleported all the way across the code base and pop up in a totally different place, then you do something and get dropped right back where you were. You have to think a certain way and be efficient at following the code when Events are at play.
If Laravel events are your first introduction to events, I would discourage you from leveraging them heavily. Instead, I would suggest you limit them to one particular package or portion of your application until you get a feel for the power they offer, and the architectural and developmental rigor they require.

Related

Laravel patterns - Usage of jobs vs services

I was wondering how most developers use these two Laravel tools.
In Laravel, you can handle business logic with Services, or with Jobs (let's talk only about not-queueable jobs, only those ones that run in the same process).
For example, a user wants to create an entity, let's say a Book. You can handle the entity creation with a service or by dispatching a job.
Using a job it would be something like this:
class PostBook extends Job
{
    ...
    public function handle(Book $bookEntity)
    {
        // Business logic here.
    }
    ...
}

class BooksController extends Controller
{
    public function store(Request $request)
    {
        ...
        dispatch(new PostBook($request->all()));
        ...
    }
}
Using a service, it would be something like this:
class BookService
{
    public function store(Request $request)
    {
        // Business logic here.
    }
}

class BooksController extends Controller
{
    public function store(Request $request)
    {
        ...
        // I could inject the service instead.
        $bookService = app(\App\Services\BookService::class);
        $bookService->store($request);
        ...
    }
}
The question is, how do you choose one way or the other, and why?
Surely there are two "schools" in this matter, but I would like to understand the pros and cons of each one.
"Business logic" can be handled with anything, so it seems like what's really being asked is which option is better for repeating the same business logic without repeating code.
A Job class typically does one thing, as defined by its handle() method. It's difficult to exclude queued jobs from the comparison because running them synchronously usually defeats the purpose, which is to handle slow, expensive, or unreliable actions (such as calling a web API) after the current request has been completed and a response has been shown to the user.
If all jobs were expected to be synchronous, it wouldn't be much different than defining a function for your business logic. That's actually very close to what dispatching a synchronous job does: Somewhere down the call stack it ends up running call_user_func([$job, 'handle']) to invoke a single method on your job object. More importantly, a synchronous job lacks the mechanism for retrying jobs that might have failed due to external causes such as network failures.
Services, on the other hand, are an easy way to encapsulate the logic around a component, and they may do more than one thing. In this context, a component may be thought of as a piece of your application that could be swapped out for a different implementation without having to modify code that was using it. A perfect example, included in the framework, is the Filesystem service (most commonly accessed with the Storage facade).
Consider if you didn't store books by inserting them into your database, but instead by posting to an external API. You may have a BookRepository service that not only has a store() method, but also has a get(), update(), list(), delete(), or any number of other methods. All of these requests share logic for authenticating to the external web service (like adding headers to requests), and your BookRepository class can encapsulate that re-usable logic. You can use this service class inside of scheduled artisan commands, web controllers, api controllers, jobs, middleware, etc. — without repeating code.
Using this example, you might create a Job for storing a new book so you don't make users wait when there are slow API responses (and it can retry when there are failures). Internally, your job calls your Service's store() method when it runs. The work done by the service is scheduled by the job.
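The division of labour described above can be sketched roughly like this. Class names are illustrative; a real Laravel job would extend the framework's Job class and be resolved through the container:

```php
<?php

// The service owns the "how": reusable business logic shared by
// controllers, jobs, commands, etc.
class BookService
{
    public $stored = [];

    public function store(array $attributes): void
    {
        // shared logic: validation, auth headers for the external API, etc.
        $this->stored[] = $attributes;
    }
}

// The job owns the "when": it can be queued and retried, and simply
// delegates the real work to the service.
class PostBook
{
    private $attributes;

    public function __construct(array $attributes)
    {
        $this->attributes = $attributes;
    }

    // In Laravel, the container would type-hint-inject BookService here.
    public function handle(BookService $service): void
    {
        $service->store($this->attributes);
    }
}

$service = new BookService();
(new PostBook(['title' => 'Dune']))->handle($service);
echo count($service->stored) . "\n"; // 1
```

The point of the split: deleting the job would cost you queueing and retries, but none of the business logic, which lives in one place.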

Output progress for console and web applications

I'm writing a Yii2 app that is mainly used as a console application. I have components or (micro)services that fetch some data from a server, handle it, and save the information I need to the database. Of course there are several for loops, and in these loops I output my current progress with yii\helpers\Console::updateProgress or just echo, to watch the progress and test the output (like "starting with xxx products"). On some events I log important information with Yii::info() or Yii::error() and so on. Normally a cron job handles tasks like pullProductUpdates or something else.
However, in some cases I need the method (i.e. pullProductUpdates) in my web application too. But then there must not be any active echo or Console::updateProgress commands.
Of course I don't have problems with the logging methods from Yii, because I configured the log targets and they will not echo anything. But I'm uncertain how to handle the echo commands...
One way is to check whether $_SERVER['REMOTE_ADDR'] is set or not. In the console it will evaluate to null, so I can wrap an if {} else {} around it. A probably better solution is to write a log() or progress() method. Could a trait be useful?
So how should I design a solution? Is there any pattern for this? Should my services implement an interface like Loggable or Progressable? Or should I use Logger/Progress objects and some kind of DI (dependency injection)? I don't want to write those log() or progress() methods more than once. Besides, I can't use a progress function in a web application. One reason is that I don't know how to do that (if it's even possible with PHP here), but that would be another question.
Thanks in advance.
As a PHP programmer you should be aware of and use the PSRs. In this case you should use dependency injection and the PSR-3 LoggerInterface.
For a web application you should configure your composition root to use a logger implementation that logs to a file. For a console application you should log to the terminal.
The composition root is the place where you configure your Dependency Injection Container (DIC). See more about Yii DIC here.
In order to do that you should be able to switch between these two composition roots by an environment variable or by php_sapi_name.
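A minimal sketch of that switch, using illustrative ConsoleLogger/FileLogger stand-ins rather than real Yii log targets:

```php
<?php

// Common interface the services depend on; they never know which
// implementation they received.
interface Logger
{
    public function log(string $message): void;
}

class ConsoleLogger implements Logger
{
    public function log(string $message): void
    {
        echo $message . "\n"; // progress goes straight to the terminal
    }
}

class FileLogger implements Logger
{
    public $lines = [];

    public function log(string $message): void
    {
        $this->lines[] = $message; // a real one would append to a file
    }
}

// The composition root picks the implementation once, based on the SAPI.
// php_sapi_name() returns 'cli' in a terminal and e.g. 'fpm-fcgi' or
// 'apache2handler' on the web.
function makeLogger(string $sapi): Logger
{
    return $sapi === 'cli' ? new ConsoleLogger() : new FileLogger();
}

echo get_class(makeLogger('cli')) . "\n";      // ConsoleLogger
echo get_class(makeLogger('fpm-fcgi')) . "\n"; // FileLogger
```

With this shape, pullProductUpdates receives a Logger and calls log() unconditionally; whether that echoes or stays silent is decided entirely at the composition root.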

Decoupling output in Symfony command line

Note: I refer to the Symfony Console component quite a lot in my question, but I think this question could be considered broader if thought about in the context of any user interface.
I am using the Symfony Console component to create a console application. I am trying to keep class coupling to a minimum, as it makes unit testing easier and is generally better practice.
My app has some processes which may take some time to complete, so I want to keep the user informed as to what is happening using progress bars and general text output as it runs. Symfony requires an instance of the Symfony Console OutputInterface to be passed to any command's execute method. So far so good; I can create progress bars and output text as I please. However, all of the heavy lifting of my app doesn't happen in the commands' execute methods and is instead within the core classes of my application. These classes shouldn't and don't know they are being used in a console application.
I am struggling to keep it this way because I don't know how to provide feedback to the console (or whatever user interface) without injecting the output class into my core. Doing so would result in tight coupling between the console output class and my application core. I have thought about using an event dispatcher (Symfony has one), but that too means my core will be coupled with the dispatcher (maybe that's fine). Ideally, I need to sort of "bubble" my application state back up to the execute method of the invoked command, where I can then perform output.
Could someone point me in the right direction, please? I feel like this must actually be quite a common case but can't find much about it.
Thanks for your time in advance!
I have successfully used the event dispatcher approach before. You can trigger events at the start, progress, and end of processing, for example, and have an event listener update the progress bar based on that.
<?php
$progress = $this->getHelperSet()->get('progress');
$dispatcher = $this->getContainer()->get('event_dispatcher');

$dispatcher->addListener('my_start_event', function (GenericEvent $event) use ($progress, $output) {
    $progress->start($output, $event->getArgument('total'));
});

$dispatcher->addListener('my_progress_event', function () use ($progress) {
    $progress->advance();
});

$dispatcher->addListener('my_finish_event', function () use ($progress) {
    $progress->finish();
});
If you really want to avoid coupling of the event dispatcher in your service, you could extend or decorate your class (probably implementing a shared interface), and only use the event dispatcher there. You would need an extension point (public or protected method) in the base class however to be able to notify of any progress.
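The decorator idea might be sketched like this. All names are illustrative, and the callable stands in for $progress->advance() or an event dispatch:

```php
<?php

interface ImporterInterface
{
    public function import(array $items): void;
}

// Core class: knows nothing about consoles, progress bars, or events.
class Importer implements ImporterInterface
{
    public function import(array $items): void
    {
        foreach ($items as $item) {
            // ... process $item ...
        }
    }
}

// Decorator: adds progress notification around the core importer.
class NotifyingImporter implements ImporterInterface
{
    private $inner;
    private $onAdvance;

    public function __construct(ImporterInterface $inner, callable $onAdvance)
    {
        $this->inner = $inner;
        $this->onAdvance = $onAdvance;
    }

    public function import(array $items): void
    {
        foreach ($items as $item) {
            $this->inner->import([$item]);
            ($this->onAdvance)(); // e.g. $progress->advance() in a command
        }
    }
}

$ticks = 0;
$importer = new NotifyingImporter(new Importer(), function () use (&$ticks) {
    $ticks++;
});
$importer->import(['a', 'b', 'c']);
echo $ticks . "\n"; // 3
```

The command's execute method builds the decorated importer with a closure over the progress bar; the core Importer stays testable with no console coupling at all.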

DDD, PHP - where to perform the validation?

I started playing with DDD recently. Today I'm having a problem with placing validation logic in my application; I'm not sure which layer I should pick. I searched the internet and can't find a unified solution that solves my problem.
Let's consider the following example. The User entity is composed of ValueObjects such as id (UUID), age, and e-mail address.
final class User
{
    /**
     * @var \UserId
     */
    private $userId;

    /**
     * @var \DateTimeImmutable
     */
    private $dateOfBirth;

    /**
     * @var \EmailAddress
     */
    private $emailAddress;

    /**
     * User constructor.
     * @param UserId $userId
     * @param DateTimeImmutable $dateOfBirth
     * @param EmailAddress $emailAddress
     */
    public function __construct(UserId $userId, DateTimeImmutable $dateOfBirth, EmailAddress $emailAddress)
    {
        $this->userId = $userId;
        $this->dateOfBirth = $dateOfBirth;
        $this->emailAddress = $emailAddress;
    }
}
Non business logic related validation is performed by ValueObjects. And it's fine.
I'm having trouble placing the validation of business logic rules.
What if, let's say, we would need to let Users have their own e-mail address only if they are 18+?
We would have to check the age for today, and throw an Exception if it's not ok.
Where should I put it?
Entity - check it while creating the User entity, in the constructor?
Command - check it while performing the Insert/Update/whatever command? I'm using Tactician in my project, so should it be a job for the Command or the Command Handler?
Where to place validators responsible for checking data with the repository?
Like email uniqueness. I read about the Specification pattern. Is it ok if I use it directly in the Command Handler?
And last, but not least.
How to integrate it with UI validation?
All of the stuff I described above is about validation at the domain level. But let's consider performing commands from a REST server handler. My REST API client expects me to return full information about what went wrong in case of input data errors. I would like to return a list of fields with error descriptions.
I can actually wrap all the command preparation in a try block and listen for Validation-type exceptions, but the main problem is that this would give me information about only a single error, up to the first exception.
Does this mean that I have to duplicate my validation logic at the controller level (i.e. with zend-inputfilter - I'm using ZF2/3)? It sounds inconsistent...
Thank you in advance.
I will try to answer your questions one by one and additionally give my two cents here and there and how I would solve the problems.
Non business logic related validation is performed by ValueObjects
Actually ValueObjects represent concepts from your business domain, so these validations are actually business logic validations too.
Entity - check it while creating User entity, in the constructor?
Yes, in my opinion you should try to add this kind of behavior as deep down in the Aggregates as you can. If you put it into Commands or Command Handlers you lose cohesiveness, and business logic leaks out into the Application layer. And I would even go further: ask yourself whether there are hidden concepts within your model that are not made explicit. In your case those are an AdultUser and an UnderagedUser (they could both implement a UserInterface) that actually have different behavior. In these cases I always strive to model this explicitly.
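A rough sketch of making those concepts explicit. The factory, the fixed reference date, and the exact age rule are illustrative assumptions, not the question's code:

```php
<?php

final class UnderageUserException extends \Exception {}

// Factory that enforces the invariant at construction time: the rule
// can never be bypassed, because there is no other way to build a user.
final class UserFactory
{
    public static function create(\DateTimeImmutable $dateOfBirth, ?string $email): string
    {
        // Fixed reference date keeps this example deterministic; real
        // code would use a clock abstraction or 'today'.
        $age = $dateOfBirth->diff(new \DateTimeImmutable('2020-01-01'))->y;

        if ($email !== null && $age < 18) {
            throw new UnderageUserException('Only adults may own an e-mail address');
        }

        // Stand-ins for returning an AdultUser / UnderagedUser instance.
        return $age >= 18 ? 'AdultUser' : 'UnderagedUser';
    }
}

echo UserFactory::create(new \DateTimeImmutable('1990-05-01'), 'a@b.c') . "\n"; // AdultUser

try {
    UserFactory::create(new \DateTimeImmutable('2010-05-01'), 'a@b.c');
} catch (UnderageUserException $e) {
    echo "rejected\n";
}
```

Because the invariant lives in the factory, a Command Handler cannot construct an underaged user with an e-mail address even by accident.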
Like email uniqueness. I read about the Specification pattern. Is it ok, if I use it directly in Command Handler?
The Specification pattern is nice if you want to be able to combine complex queries with logical operators (especially for the Read Model). In your case I think this is overkill. Adding a simple containsUserForEmail($emailValueObject) method to the UserRepositoryInterface and calling it from the Use Case is fine.
<?php
$userRepository
    ->containsUserForEmail($emailValueObject)
    ->hasOrThrow(new EmailIsAlreadyRegistered($emailValueObject));
How to integrate it with UI validation?
So first of all there already should be client side validation for the fields in question. Make it easy to use your system in the right way and hard to use it in the wrong way.
Of course there still needs to be server side validation. We currently use the schema validation approach where we have a central schema registry from which we fetch a schema for a given payload and then can validate JSON payloads against that JSON Schema. If it fails we return a serialized ValidationErrors object. We also tell the client via the Content-Type: application/json; profile=https://some.schema.url/v1/user# header how it can build a valid payload.
You can find some nice articles on how to build a RESTful API on top of a CQRS architecture here and here.
Just to expand on what tPl0ch said, as I have found it helpful... While I have not been in the PHP stack in many years, this is largely a theoretical discussion anyhow.
One of the larger problems faced in practical applications of DDD is that of validation. Traditional logic would dictate that validation has to live somewhere, when it really should live everywhere. What has probably tripped people up more than anything when applying this to DDD is the principle that a domain is never "in an invalid state". CQRS has gone a long way toward addressing this, and you are using commands.
Personally, the way that I do this, is that commands are the only way to alter state. Even if I require the creation of a domain service for a complex operation, it is the commands which will do the work. A traditional command handler will dispatch a command against an aggregate and put the aggregate into a transitional state. All of this is fairly standard, but I additionally delegate the responsibility of validation of the transition to the commands themselves, as they already encompass business logic, as well. If I am creating a new Account, for example, and I require a first name, last name, and email address, I should be validating that as being present in the command, before it ever is attempted to be applied to the aggregate through the command handler. As such, each of my command handlers have not just awareness of the command, but also a command validator.
This validator ensures that the state of the command will not compromise the domain, which allows me to validate the command itself, at a point where I do not incur additional cost related to having to validate somewhere in the infrastructure or implementation. Since the only way I have to mutate state is through commands, I do not put any of that logic directly into the domain objects themselves. That is not to say that the domain model is anemic, far from it, actually. There is an assumption that if you are not validating in the domain objects themselves, the domain immediately becomes anemic. But the aggregate needs to expose the means to set these values - generally through a method - and the command is translated to provide these values to that method. One of the semi-common approaches that you see is that logic is put into the property setters, but since you are only setting a single property at a time, you could more easily leave the aggregate in an invalid state. If you look at the command as being validated for the purpose of mutating that state as a single operation, you see that the command is a logical extension of the aggregate (and from a code organizational standpoint, lives very near, if not under, the aggregate).
Since I am only dealing with command validation at that point, I generally will have persistence validation, as well. Essentially, right before the aggregate is persisted, the entire state of the aggregate is validated at once. The ultimate goal is to get a command to persist, so that means I will have a single persistence validator per aggregate, but as many command validators as I have commands. That single persistence validator provides the infallible check that the command has not mutated the aggregate in a way that violates the overarching domain concerns. It also has awareness that a single aggregate can have multiple valid transitional states, which is something not easily caught in a command. By multiple states, I mean that the aggregate may be valid as an "insert" for persistence, but perhaps is not valid for an "update" operation. The easiest example of that would be that I could not update or delete an aggregate which has not been persisted.
All of these can be surfaced to the UI, in my own implementation. The UI will hand the data to an application service, the application service will create the command, and it will invoke a "Validate" method on my handler which will return any validation failures within the command without executing it. If validation errors are present, the application service can yield to the controller, returning any validation errors that it finds, and allow them to surface up. Additionally, pre-submit, the data can be sent in, follow the same path for validation, and return those validation errors without physically submitting the data. It is the best of both worlds. Command violations can happen often, if the user is providing invalid input. Persistence violations, on the other hand, should happen rarely, if ever at all, outside of testing. It would imply that a command is mutating state in a way that is not supported by the domain.
Finally, post-validation of a command, the application service can execute it. The way that I have built my own infrastructure is that the command handler is aware of if the command was validated immediately before execution. If it was not, the command handler will execute the same validation that is exposed by the "Validate" method. The difference, however, is that it will be surfaced as an exception. Goal at this point is to halt execution, as an invalid command cannot enter the domain.
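A condensed sketch of that two-phase flow. All names are illustrative, and a real implementation would return a richer ValidationErrors object rather than an array of strings:

```php
<?php

final class CreateAccountCommand
{
    public $firstName;
    public $lastName;
    public $email;

    public function __construct($firstName, $lastName, $email)
    {
        $this->firstName = $firstName;
        $this->lastName  = $lastName;
        $this->email     = $email;
    }
}

// One validator per command: checks the command before it ever touches
// the aggregate, so all failures can be surfaced to the UI at once.
final class CreateAccountValidator
{
    /** @return string[] list of validation failures (empty = valid) */
    public function validate(CreateAccountCommand $cmd): array
    {
        $errors = [];
        if (empty($cmd->firstName)) $errors[] = 'firstName is required';
        if (empty($cmd->lastName))  $errors[] = 'lastName is required';
        if (empty($cmd->email))     $errors[] = 'email is required';
        return $errors;
    }
}

final class CreateAccountHandler
{
    private $validator;

    public function __construct(CreateAccountValidator $validator)
    {
        $this->validator = $validator;
    }

    public function execute(CreateAccountCommand $cmd): void
    {
        // Re-validate at execution time: an invalid command must never
        // enter the domain, so here a failure becomes an exception.
        if ($this->validator->validate($cmd) !== []) {
            throw new \InvalidArgumentException('Invalid command may not enter the domain');
        }
        // ... mutate the aggregate ...
    }
}

$validator = new CreateAccountValidator();
$bad = new CreateAccountCommand('Clara', '', '');
echo count($validator->validate($bad)) . "\n"; // 2 field errors for the UI
```

The application service calls validate() first and hands any failures back to the controller; only a clean command reaches execute().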
Although the samples are in Java (again, not my platform of choice), I highly recommend Vaughn Vernon's "Implementing Domain-Driven Design". It really pulls a lot of the concepts in the Evans' material together with the advances in the DDD paradigm, such as CQRS+ES. At least for me, the material in Vernon's book, which is also a part of the "DDD Series" of books, changed the way I fundamentally approach DDD as much as the Blue Book introduced me to it.

SUT - Testing Internal Behaviour TDD

I have a question about testing requirements.
Let's take example:
class Article {
    public void deactive() { /* some behaviour which deactivates the article */ }
}
My requirements state that an Article can be deactivated. I will implement it as part of my Article class.
The test will call the deactive method, but what now? I need to check at the end of the test whether the requirement was fulfilled, so I need to implement an isDeactived method.
And here is my question: should I really do it? This method will only be used by my test case, nowhere else. So I am complicating my class interface just to assert that the article is really deactivated.
What is the right way to implement it?
It's generally considered ok to add test hooks to a class. It's better to have a slightly more cluttered interface and know your class works than it is to make it untestable. But there are some other solutions.
If you're ok making your method protected or package-private, you might be able to use something like Guava's @VisibleForTesting annotation. Languages other than Java might have similar libraries.
You could also inherit from Article to get access to its protected fields (private fields are not visible to subclasses).
class ArticleTest extends Article {
    @Test
    public void deactiveTest() {
        this.deactive();
        assertTrue(this.isDeactive);
    }
}
This all assumes you have some field you're using to mark whether the object is active or not.
It might be the case that you're causing some side effects, like calling the database, and a couple of services to say you're deactivating that article. If so, you should mock the collaborators that you're using to make the side effects and verifying that you're calling them correctly.
For example (in Java/Mockito-like pseudocode):
@Test
public void deactiveTest() {
    // or use the dependency injection framework of your choice...
    Article underTest = new Article(databaseMock, httpMock);
    underTest.deactive();
    verify(databaseMock).calledOnce().withParams(expectedParams);
    verify(httpMock).calledOnce().withParams(expectedParams);
}
A final possibility, if that field affects the behavior of some other method or function, you could try (again in pseudocode):
article.deactive()
result = article.post() // maybe returns true if active, else false?
assertFalse(result)
This way, you're testing the resulting behavior, not just checking the internal state.
It sounds like you are writing a test along the lines of:
assertThatCallingDeactiveMarksArticleAsDeactivated
With an isDeactivated method, this test becomes trivial; however, as you've said, your Article class doesn't contain this method. So the question becomes: should it have that method? The answer really depends on what it means for the Article class to be deactivated.
I would expect an active Article to behave differently in some way from a deactivated Article. Otherwise, it seems like the state change doesn't have a reason / there is nothing to test.
To give a practical example, looking at it from the perspective of a client of the Article class. Something triggers the call to deactive. It may be something as simple as the user clicking on a Deactivate button/link on the user interface, which calls through to the Article class. After this call, I'd expect the user interface to reflect in some way that the Article was deactive (for example by greying out the button/link), but for it to do that, the UI needs to be able to read the state of the Article, which brings us back to the question how does it do that and/or why isn't the isDeactivated method needed by the current code?
Without knowing more about the implementation of the deactive method (does it simply set a flag, or does it call out to other code in an observable way) and how the state change effects the behaviour of Article and it clients it's hard to give a more concrete response.
Ideally you don't want to test the internals of a method or class, as this makes tests brittle and tightly coupled. If you refactor the production code, your test has a higher chance of also needing to be refactored, thus losing the benefit of the test. You want to try to test the class's behaviour as a whole (i.e. what does calling deactivate do).
Check out Kent Beck's 4 rules for simple design (in priority order).
1) All Tests Pass
2) Expresses Intent
3) Eliminates Duplication
4) Fewest Elements
The last rule is that a system shouldn't be any larger than it needs to be, which is where your question falls. Given that this is the least important element, and that it's better to 1) pass a test and 2) express intent, it would be acceptable (in my view) to simply add an isActive() method.
This also makes the class more useful and generic: if you deactivate something, it seems logical to be able to verify its state.
Also, as previously mentioned, there must be an effect of calling deactivate, which in itself should be tested, so try to test this. Maybe an integration test is better placed, or you have to mock or stub another class.
