Preface: I'm attempting to use the repository pattern in an MVC architecture with relational databases.
I've recently started learning TDD in PHP, and I'm realizing that my database is coupled much too closely with the rest of my application. I've read about repositories and using an IoC container to "inject" them into my controllers. Very cool stuff. But now I have some practical questions about repository design. Consider the following example.
<?php

class DbUserRepository implements UserRepositoryInterface
{
    protected $db;

    public function __construct($db)
    {
        $this->db = $db;
    }

    public function findAll()
    {
    }

    public function findById($id)
    {
    }

    public function findByName($name)
    {
    }

    public function create($user)
    {
    }

    public function remove($user)
    {
    }

    public function update($user)
    {
    }
}
Issue #1: Too many fields
All of these find methods use a select all fields (SELECT *) approach. However, in my apps, I'm always trying to limit the number of fields I get, as this often adds overhead and slows things down. For those using this pattern, how do you deal with this?
Issue #2: Too many methods
While this class looks nice right now, I know that in a real-world app I need a lot more methods. For example:
findAllByNameAndStatus
findAllInCountry
findAllWithEmailAddressSet
findAllByAgeAndGender
findAllByAgeAndGenderOrderByAge
Etc.
As you can see, there could be a very, very long list of possible methods. And then if you add in the field selection issue above, the problem worsens. In the past I'd normally just put all this logic right in my controller:
<?php

class MyController
{
    public function users()
    {
        $users = User::select('name, email, status')
            ->byCountry('Canada')->orderBy('name')->rows();

        return View::make('users', array('users' => $users));
    }
}
With my repository approach, I don't want to end up with this:
<?php

class MyController
{
    public function users()
    {
        $users = $this->repo->get_first_name_last_name_email_username_status_by_country_order_by_name('Canada');

        return View::make('users', array('users' => $users));
    }
}
Issue #3: Impossible to match an interface
I see the benefit in using interfaces for repositories, so I can swap out my implementation (for testing purposes or other). My understanding of interfaces is that they define a contract that an implementation must follow. This is great until you start adding additional methods to your repositories like findAllInCountry(). Now I need to update my interface to also have this method; otherwise, other implementations may not have it, and that could break my application. But this feels insane...a case of the tail wagging the dog.
Specification Pattern?
This leads me to believe that a repository should only have a fixed number of methods (like save(), remove(), find(), findAll(), etc.). But then how do I run specific lookups? I've heard of the Specification Pattern, but my understanding is that it only filters a full set of records in memory (via IsSatisfiedBy()), which clearly has major performance issues if you're pulling from a database.
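For context, here is a minimal sketch of how a specification is usually applied in memory; the ActiveUserSpecification class and the findAll() call are hypothetical, but they show why the cost worries me: every record is loaded before IsSatisfiedBy() throws most of them away.

<?php

// Hypothetical sketch of in-memory specification filtering.
interface UserSpecification
{
    public function isSatisfiedBy($user);
}

class ActiveUserSpecification implements UserSpecification
{
    public function isSatisfiedBy($user)
    {
        return $user->status === 'active';
    }
}

// The repository must load *every* user before the specification filters them.
$spec = new ActiveUserSpecification();
$activeUsers = array_filter($repo->findAll(), array($spec, 'isSatisfiedBy'));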
Help?
Clearly, I need to rethink things a little when working with repositories. Can anyone enlighten me on how this is best handled?
I thought I'd take a crack at answering my own question. What follows is just one way of solving the issues 1-3 in my original question.
Disclaimer: I may not always use the right terms when describing patterns or techniques. Sorry for that.
The Goals:
Create a complete example of a basic controller for viewing and editing Users.
All code must be fully testable and mockable.
The controller should have no idea where the data is stored (meaning it can be changed).
Example to show a SQL implementation (most common).
For maximum performance, controllers should only receive the data they need—no extra fields.
Implementation should leverage some type of data mapper for ease of development.
Implementation should have the ability to perform complex data lookups.
The Solution
I'm splitting my persistent storage (database) interaction into two categories: R (Read) and CUD (Create, Update, Delete). My experience has been that reads are really what causes an application to slow down. And while data manipulation (CUD) is actually slower, it happens much less frequently, and is therefore much less of a concern.
CUD (Create, Update, Delete) is easy. This will involve working with actual models, which are then passed to my Repositories for persistence. Note, my repositories will still provide a Read method, but simply for object creation, not display. More on that later.
R (Read) is not so easy. No models here, just value objects. Use arrays if you prefer. These objects may represent a single model or a blend of many models, anything really. These are not very interesting on their own, but how they are generated is. I'm using what I'm calling Query Objects.
The Code:
User Model
Let's start simple with our basic user model. Note that there is no ORM extending or database stuff at all. Just pure model glory. Add your getters, setters, validation, whatever.
class User
{
    public $id;
    public $first_name;
    public $last_name;
    public $gender;
    public $email;
    public $password;
}
Repository Interface
Before I create my user repository, I want to create my repository interface. This will define the "contract" that repositories must follow in order to be used by my controller. Remember, my controller will not know where the data is actually stored.
Note that my repositories will only ever contain these three methods. The save() method is responsible for both creating and updating users, simply depending on whether or not the user object has an id set.
interface UserRepositoryInterface
{
    public function find($id);
    public function save(User $user);
    public function remove(User $user);
}
SQL Repository Implementation
Now to create my implementation of the interface. As mentioned, my example was going to be with an SQL database. Note the use of a data mapper to prevent having to write repetitive SQL queries.
class SQLUserRepository implements UserRepositoryInterface
{
    protected $db;

    public function __construct(Database $db)
    {
        $this->db = $db;
    }

    public function find($id)
    {
        // Find a record with the id = $id
        // from the 'users' table
        // and return it as a User object
        return $this->db->find($id, 'users', 'User');
    }

    public function save(User $user)
    {
        // Insert or update the $user
        // in the 'users' table
        $this->db->save($user, 'users');
    }

    public function remove(User $user)
    {
        // Remove the $user
        // from the 'users' table
        $this->db->remove($user, 'users');
    }
}
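The Database class used here is a stand-in for whatever data mapper you use; its code isn't shown in this answer. As a rough sketch (the class body below is my assumption, not part of the original), its save() could branch on whether the model already has an id, which is what lets the repository's save() cover both create and update:

<?php

// Hypothetical data mapper sketch -- not part of the original answer.
class Database
{
    protected $pdo;

    public function __construct(\PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // Insert when the model has no id yet, otherwise update.
    public function save($model, $table)
    {
        $data = get_object_vars($model);

        if (empty($data['id'])) {
            unset($data['id']);
            $columns      = implode(', ', array_keys($data));
            $placeholders = implode(', ', array_fill(0, count($data), '?'));
            $stmt = $this->pdo->prepare("INSERT INTO {$table} ({$columns}) VALUES ({$placeholders})");
            $stmt->execute(array_values($data));
            $model->id = (int) $this->pdo->lastInsertId();
        } else {
            $id = $data['id'];
            unset($data['id']);
            $assignments = implode(' = ?, ', array_keys($data)) . ' = ?';
            $stmt = $this->pdo->prepare("UPDATE {$table} SET {$assignments} WHERE id = ?");
            $stmt->execute(array_merge(array_values($data), [$id]));
        }
    }
}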
Query Object Interface
Now with CUD (Create, Update, Delete) taken care of by our repository, we can focus on the R (Read). Query objects are simply an encapsulation of some type of data lookup logic. They are not query builders. By abstracting them like our repository, we can change their implementation and test them more easily. An example of a Query Object might be an AllUsersQuery or AllActiveUsersQuery, or even MostCommonUserFirstNames.
You may be thinking "can't I just create methods in my repositories for those queries?" Yes, but here is why I'm not doing this:
My repositories are meant for working with model objects. In a real world app, why would I ever need to get the password field if I'm looking to list all my users?
Repositories are often model specific, yet queries often involve more than one model. So what repository do you put your method in?
This keeps my repositories very simple—not a bloated class of methods.
All queries are now organized into their own classes.
Really, at this point, repositories exist simply to abstract my database layer.
For my example I'll create a query object to look up "AllUsers". Here is the interface:
interface AllUsersQueryInterface
{
    public function fetch($fields);
}
Query Object Implementation
This is where we can use a data mapper again to help speed up development. Notice that I am allowing one tweak to the returned dataset—the fields. This is about as far as I want to go with manipulating the performed query. Remember, my query objects are not query builders. They simply perform a specific query. However, since I know that I'll probably be using this one a lot, in a number of different situations, I'm giving myself the ability to specify the fields. I never want to return fields I don't need!
class AllUsersQuery implements AllUsersQueryInterface
{
    protected $db;

    public function __construct(Database $db)
    {
        $this->db = $db;
    }

    public function fetch($fields)
    {
        return $this->db->select($fields)
            ->from('users')
            ->orderBy('last_name, first_name')
            ->rows();
    }
}
Before moving on to the controller, I want to show another example to illustrate how powerful this is. Maybe I have a reporting engine and need to create a report for AllOverdueAccounts. This could be tricky with my data mapper, and I may want to write some actual SQL in this situation. No problem, here is what this query object could look like:
class AllOverdueAccountsQuery implements AllOverdueAccountsQueryInterface
{
    protected $db;

    public function __construct(Database $db)
    {
        $this->db = $db;
    }

    public function fetch()
    {
        return $this->db->query($this->sql())->rows();
    }

    public function sql()
    {
        return "SELECT...";
    }
}
This nicely keeps all my logic for this report in one class, and it's easy to test. I can mock it to my heart's content, or even use a different implementation entirely.
The Controller
Now the fun part—bringing all the pieces together. Note that I am using dependency injection. Typically dependencies are injected into the constructor, but I actually prefer to inject them right into my controller methods (routes). This minimizes the controller's object graph, and I actually find it more legible. Note, if you don't like this approach, just use the traditional constructor method.
class UsersController
{
    public function index(AllUsersQueryInterface $query)
    {
        // Fetch user data
        $users = $query->fetch(['first_name', 'last_name', 'email']);

        // Return view
        return Response::view('all_users.php', ['users' => $users]);
    }

    public function add()
    {
        return Response::view('add_user.php');
    }

    public function insert(UserRepositoryInterface $repository)
    {
        // Create new user model
        $user = new User;
        $user->first_name = $_POST['first_name'];
        $user->last_name = $_POST['last_name'];
        $user->gender = $_POST['gender'];
        $user->email = $_POST['email'];

        // Save the new user
        $repository->save($user);

        // Return the id
        return Response::json(['id' => $user->id]);
    }

    public function view(SpecificUserQueryInterface $query, $id)
    {
        // Load user data
        if (!$user = $query->fetch($id, ['first_name', 'last_name', 'gender', 'email'])) {
            return Response::notFound();
        }

        // Return view
        return Response::view('view_user.php', ['user' => $user]);
    }

    public function edit(SpecificUserQueryInterface $query, $id)
    {
        // Load user data
        if (!$user = $query->fetch($id, ['first_name', 'last_name', 'gender', 'email'])) {
            return Response::notFound();
        }

        // Return view
        return Response::view('edit_user.php', ['user' => $user]);
    }

    public function update(UserRepositoryInterface $repository, $id)
    {
        // Load user model
        if (!$user = $repository->find($id)) {
            return Response::notFound();
        }

        // Update the user
        $user->first_name = $_POST['first_name'];
        $user->last_name = $_POST['last_name'];
        $user->gender = $_POST['gender'];
        $user->email = $_POST['email'];

        // Save the user
        $repository->save($user);

        // Return success
        return true;
    }

    public function delete(UserRepositoryInterface $repository, $id)
    {
        // Load user model
        if (!$user = $repository->find($id)) {
            return Response::notFound();
        }

        // Delete the user
        $repository->remove($user);

        // Return success
        return true;
    }
}
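One piece not shown above is the container configuration that makes the method injection work. The binding API below is hypothetical (Laravel's container happens to look similar), and SpecificUserQuery stands in for an implementation of SpecificUserQueryInterface that isn't shown here; this is just to illustrate the idea:

// Hypothetical IoC container bindings -- the $container API is an assumption.
$container->bind('UserRepositoryInterface', 'SQLUserRepository');
$container->bind('AllUsersQueryInterface', 'AllUsersQuery');
$container->bind('SpecificUserQueryInterface', 'SpecificUserQuery');

// When a route such as GET /users is dispatched, the container reflects on
// UsersController::index(), sees the AllUsersQueryInterface type hint, and
// passes in an AllUsersQuery built around the shared Database instance.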
Final Thoughts:
The important things to note here are that when I'm modifying (creating, updating or deleting) entities, I'm working with real model objects and performing the persistence through my repositories.
However, when I'm displaying (selecting data and sending it to the views), I'm not working with model objects, but rather plain old value objects. I only select the fields I need, and it's designed so I can maximize my data-lookup performance.
My repositories stay very clean, and instead this "mess" is organized into my model queries.
I use a data mapper to help with development, as it's just ridiculous to write repetitive SQL for common tasks. However, you absolutely can write SQL where needed (complicated queries, reporting, etc.). And when you do, it's nicely tucked away into a properly named class.
I'd love to hear your take on my approach!
July 2015 Update:
I've been asked in the comments where I ended up with all this. Well, not that far off actually. Truthfully, I still don't really like repositories. I find them overkill for basic lookups (especially if you're already using an ORM), and messy when working with more complicated queries.
I generally work with an ActiveRecord style ORM, so most often I'll just reference those models directly throughout my application. However, in situations where I have more complex queries, I'll use query objects to make these more reusable. I should also note that I always inject my models into my methods, making them easier to mock in my tests.
Based on my experience, here are some answers to your questions:
Q: How do we deal with bringing back fields we don't need?
A: From my experience this really boils down to dealing with complete entities versus ad-hoc queries.
A complete entity is something like a User object. It has properties and methods, etc. It's a first class citizen in your codebase.
An ad-hoc query returns some data, but we don't know anything beyond that. As the data gets passed around the application, it travels without context. Is it a User? A User with some Order information attached? We don't really know.
I prefer working with full entities.
You are right that you will often bring back data you won't use, but you can address this in various ways:
Aggressively cache the entities so you only pay the read price once from the database (see the sketch after this list).
Spend more time modeling your entities so they have good distinctions between them. (Consider splitting a large entity into two smaller entities, etc.)
Consider having multiple versions of entities. You can have a User for the back end and maybe a UserSmall for AJAX calls. One might have 10 properties and one has 3 properties.
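To make the first of those points concrete, here is a rough sketch of a caching decorator around the UserRepositoryInterface from the example above. The $cache object and its get/set/delete methods are assumptions, not part of the original code.

<?php

// Hypothetical caching decorator -- the cache API is an assumption.
class CachedUserRepository implements UserRepositoryInterface
{
    protected $repository;
    protected $cache;

    public function __construct(UserRepositoryInterface $repository, $cache)
    {
        $this->repository = $repository;
        $this->cache = $cache;
    }

    public function find($id)
    {
        // Pay the database read price only once per user.
        $key = 'user.' . $id;
        if (!$user = $this->cache->get($key)) {
            $user = $this->repository->find($id);
            $this->cache->set($key, $user);
        }
        return $user;
    }

    public function save(User $user)
    {
        $this->repository->save($user);
        $this->cache->set('user.' . $user->id, $user);
    }

    public function remove(User $user)
    {
        $this->repository->remove($user);
        $this->cache->delete('user.' . $user->id);
    }
}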
The downsides of working with ad-hoc queries:
You end up with essentially the same data across many queries. For example, with a User, you'll end up writing essentially the same select * for many calls. One call will get 8 of 10 fields, one will get 5 of 10, one will get 7 of 10. Why not replace them all with one call that gets 10 out of 10? The reason this is bad is that it is murder to refactor/test/mock.
It becomes very hard to reason at a high level about your code over time. Instead of statements like "Why is the User so slow?" you end up tracking down one-off queries and so bug fixes tend to be small and localized.
It's really hard to replace the underlying technology. If you store everything in MySQL now and want to move to MongoDB, it's a lot harder to replace 100 ad-hoc calls than it is a handful of entities.
Q: I will have too many methods in my repository.
A: I haven't really seen any way around this other than consolidating calls. The method calls in your repository really map to features in your application. The more features, the more data specific calls. You can push back on features and try to merge similar calls into one.
The complexity at the end of the day has to exist somewhere. With a repository pattern we've pushed it into the repository interface instead of maybe making a bunch of stored procedures.
Sometimes I have to tell myself, "Well it had to give somewhere! There are no silver bullets."
I use the following interfaces:
Repository - loads, inserts, updates and deletes entities
Selector - finds entities based on filters, in a repository
Filter - encapsulates the filtering logic
My Repository is database agnostic; in fact, it doesn't specify any persistence; it could be anything: an SQL database, an XML file, a remote service, an alien from outer space, etc.
For searching capabilities, the Repository constructs a Selector, which can be filtered, LIMIT-ed, sorted and counted. In the end, the selector fetches one or more Entities from the persistence.
Here is some sample code:
<?php
interface Repository
{
public function addEntity(Entity $entity);
public function updateEntity(Entity $entity);
public function removeEntity(Entity $entity);
/**
 * @return Entity
 */
public function loadEntity($entityId);
public function factoryEntitySelector(): Selector;
}
interface Selector extends \Countable
{
public function count();
/**
* @return Entity[]
*/
public function fetchEntities();
/**
* @return Entity
*/
public function fetchEntity();
public function limit(...$limit);
public function filter(Filter $filter);
public function orderBy($column, $ascending = true);
public function removeFilter($filterName);
}
interface Filter
{
public function getFilterName();
}
Then, one implementation:
class SqlEntityRepository
{
...
public function factoryEntitySelector()
{
return new SqlSelector($this);
}
...
}
class SqlSelector implements Selector
{
...
private function adaptFilter(Filter $filter):SqlQueryFilter
{
return (new SqlSelectorFilterAdapter())->adaptFilter($filter);
}
...
}
class SqlSelectorFilterAdapter
{
public function adaptFilter(Filter $filter):SqlQueryFilter
{
$concreteClass = (new StringRebaser(
'Filter\\', 'SqlQueryFilter\\'))
->rebase(get_class($filter));
return new $concreteClass($filter);
}
}
The idea is that the generic Selector uses Filter, but the implementation SqlSelector uses SqlQueryFilter; the SqlSelectorFilterAdapter adapts a generic Filter to a concrete SqlQueryFilter.
The client code creates Filter objects (which are generic filters), but in the concrete implementation of the selector those filters are transformed into SQL filters.
Other selector implementations, like InMemorySelector, transform from Filter to InMemoryFilter using their specific InMemorySelectorFilterAdapter; so, every selector implementation comes with its own filter adapter.
Using this strategy my client code (in the business layer) doesn't care about a specific repository or selector implementation.
/** @var Repository $repository */
$selector = $repository->factoryEntitySelector();
$selector->filter(new AttributeEquals('activated', 1))->limit(2)->orderBy('username');
$activatedUserCount = $selector->count(); // evaluates to 100, ignores the limit()
$activatedUsers = $selector->fetchEntities();
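For reference, a concrete generic filter such as the AttributeEquals used above might look roughly like this. This is my own sketch, ignoring the Filter\ namespace prefix the StringRebaser-based adapter expects; the corresponding SqlQueryFilter\AttributeEquals would then translate it into an `activated = ?` condition for the WHERE clause.

// Sketch only: a generic filter as used by the client code above.
class AttributeEquals implements Filter
{
    private $attribute;
    private $value;

    public function __construct($attribute, $value)
    {
        $this->attribute = $attribute;
        $this->value = $value;
    }

    public function getFilterName()
    {
        return 'AttributeEquals';
    }

    public function getAttribute()
    {
        return $this->attribute;
    }

    public function getValue()
    {
        return $this->value;
    }
}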
P.S. This is a simplification of my real code
I'll add a bit on this as I am currently trying to grasp all of this myself.
#1 and 2
This is a perfect place for your ORM to do the heavy lifting. If you are using a model that implements some kind of ORM, you can just use its methods to take care of these things. Make your own orderBy functions that implement the Eloquent methods if you need to. Using Eloquent, for instance:
class DbUserRepository implements UserRepositoryInterface
{
public function findAll()
{
return User::all();
}
public function get(Array $columns)
{
return User::select($columns);
}
}
What you seem to be looking for is an ORM. There is no reason your Repository can't be based around one. This would require User to extend Eloquent, but I personally don't see that as a problem.
If you do however want to avoid an ORM, you would then have to "roll your own" to get what you're looking for.
#3
Interfaces aren't supposed to be hard and fast requirements. Something can implement an interface and add to it. What it can't do is fail to implement a required function of that interface. You can also extend interfaces like classes to keep things DRY.
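For example (the names here are just illustrative), a narrow base contract can be extended for implementations that also support country lookups:

// Illustrative only: keep the base contract small and extend it
// for implementations that can also search by country.
interface BasicUserRepositoryInterface
{
    public function find($id);
    public function findAll();
}

interface CountrySearchableUserRepositoryInterface extends BasicUserRepositoryInterface
{
    public function findAllInCountry($country);
}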
That said, I'm just starting to get a grasp, but these realizations have helped me.
I can only comment on the way we (at my company) deal with this. First of all performance is not too much of an issue for us, but having clean/proper code is.
First of all we define models, such as a UserModel, that use an ORM to create UserEntity objects. When a UserEntity is loaded from a model, all fields are loaded. For fields referencing foreign entities, we use the appropriate foreign model to create the respective entities. For those entities, the data will be loaded on demand. Now your initial reaction might be ...???...!!!, so let me give you a bit of an example:
class UserEntity extends PersistentEntity
{
public function getOrders()
{
return $this->getField('orders'); // OrderModel creates OrderEntities with only the ids set
}
}
class UserModel {
protected $orm;
public function findUsers(IGetOptions $options = null)
{
return $this->orm->getAllEntities(/*...*/); // ORM creates a list of UserEntities
}
}
class OrderEntity extends PersistentEntity {} // use your imagination
class OrderModel
{
public function findOrdersById(array $ids, IGetOptions $options = null)
{
//...
}
}
In our case $db is an ORM that is able to load entities. The model instructs the ORM to load a set of entities of a specific type. The ORM contains a mapping and uses it to inject all the fields for that entity into the entity. For foreign fields, however, only the ids of those objects are loaded. In this case the OrderModel creates OrderEntities with only the ids of the referenced orders. When PersistentEntity::getField gets called on the OrderEntity, the entity instructs its model to lazy-load all the fields into the OrderEntities. All the OrderEntities associated with one UserEntity are treated as one result set and will be loaded at once.
The magic here is that our model and ORM inject all data into the entities, and the entities merely provide wrapper functions for the generic getField method supplied by PersistentEntity. To summarize, we always load all the fields, but fields referencing a foreign entity are loaded only when necessary. Just loading a bunch of fields is not really a performance issue; loading all possible foreign entities, however, would be a huge performance hit.
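To make that a bit more tangible, here is my guess at the rough shape of the PersistentEntity::getField mechanism. The actual framework code isn't shown in this answer, so the lazyLoad() call and the field bookkeeping below are assumptions:

// Hypothetical sketch of the lazy-loading getField described above.
abstract class PersistentEntity
{
    protected $model;       // the model that created this entity
    protected $fields = []; // stub entities start with just their id

    public function __construct($model, array $fields)
    {
        $this->model = $model;
        $this->fields = $fields;
    }

    public function setField($name, $value)
    {
        $this->fields[$name] = $value;
    }

    protected function getField($name)
    {
        // A stub entity only carries its id; the first access to any other
        // field asks the model to load the whole result set of stubs at once.
        if (!array_key_exists($name, $this->fields)) {
            $this->model->lazyLoad($this);
        }
        return isset($this->fields[$name]) ? $this->fields[$name] : null;
    }
}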
Now on to loading a specific set of users based on a where clause. We provide an object-oriented package of classes that allow you to specify simple expressions that can be glued together. In the example code I named it GetOptions. It's a wrapper for all possible options for a select query. It contains a collection of where clauses, a group-by clause and everything else. Our where clauses are quite complicated, but you could obviously make a simpler version easily.
$objOptions->getConditionHolder()->addConditionBind(
new ConditionBind(
new Condition('orderProduct.product', ICondition::OPERATOR_IS, $argObjProduct)
)
);
The simplest version of this system would be to pass the WHERE part of the query as a string directly to the model.
I'm sorry for this quite complicated response. I tried to summarize our framework as quickly and clearly as possible. If you have any additional questions, feel free to ask them and I'll update my answer.
EDIT: Additionally, if you really don't want to load some fields right away, you could specify a lazy-loading option in your ORM mapping. Because all fields are eventually loaded through the getField method, you could load some fields at the last minute when that method is called. This is not a very big problem in PHP, but I would not recommend it for other systems.
These are some different solutions I've seen. There are pros and cons to each of them, but it is for you to decide.
Issue #1: Too many fields
This is an important aspect, especially when you take into account index-only scans. I see two solutions to dealing with this problem. You can update your functions to take an optional array parameter that contains a list of columns to return. If this parameter is empty, you'd return all of the columns in the query. This can be a little weird: based on the parameter, you could retrieve either an object or an array. You could also duplicate all of your functions so that you have two distinct functions that run the same query, but one returns an array of columns and the other returns an object.
public function findColumnsById($id, array $columns = array()){
if (empty($columns)) {
// use *
}
}
public function findById($id) {
$data = $this->findColumnsById($id);
// ... hydrate and return an object built from $data
}
Issue #2: Too many methods
I briefly worked with Propel ORM a year ago, and this is based on what I can remember from that experience. Propel has the option to generate its class structure based on the existing database schema. It creates two objects for each table. The first object is a long list of access functions similar to what you currently have listed: findByAttribute($attribute_value). The next object inherits from this first object, and you can update this child object to build in your more complex getter functions.
Another solution would be using __call() to map undefined functions to something actionable. Your __call method would be able to parse findById and findByName into different queries.
public function __call($function, $arguments) {
if (strpos($function, 'findBy') === 0) {
$parameter = substr($function, 6, strlen($function));
// SELECT * FROM $this->table_name WHERE $parameter = $arguments[0]
}
}
I hope this helps at least somewhat.
Issue #3: Impossible to match an interface
I see the benefit in using interfaces for repositories, so I can swap
out my implementation (for testing purposes or other). My
understanding of interfaces is that they define a contract that an
implementation must follow. This is great until you start adding
additional methods to your repositories like findAllInCountry(). Now I
need to update my interface to also have this method, otherwise, other
implementations may not have it, and that could break my application.
But this feels insane...a case of the tail wagging the dog.
My gut tells me this maybe requires an interface that implements query-optimized methods alongside generic methods. Performance-sensitive queries should have targeted methods, while infrequent or lightweight queries get handled by a generic handler, maybe at the expense of the controller doing a little more juggling.
The generic methods would allow any query to be implemented, and so would prevent breaking changes during a transition period. The targeted methods allow you to optimize a call when it makes sense to, and it can be applied to multiple service providers.
This approach would be akin to hardware implementations performing specific optimized tasks, while software implementations do the light work or flexible implementation.
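As a rough sketch of what that split could look like (the method names and the $criteria array are purely illustrative):

<?php

// Illustrative only: one generic entry point for light or infrequent lookups,
// plus a few targeted methods that each implementation is free to optimize.
interface UserRepositoryInterface
{
    // Generic handler: flexible, but the controller does a little more juggling.
    public function findBy(array $criteria, array $fields = []);

    // Targeted, performance-sensitive lookups; each implementation can apply
    // indexes, caching, denormalized tables, etc.
    public function findAllInCountry($country);
    public function findActiveUsersForDashboard();
}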
I think GraphQL is a good candidate in such a case to provide a large-scale query language without increasing the complexity of data repositories.
However, there's another solution if you don't want to go for GraphQL for now: use a DTO, an object that carries the data between processes, in this case between the service/controller and the repository.
An elegant answer is already provided above; however, I'll try to give another example that I think is simpler and could serve as a starting point for a new project.
As shown in the code, we would need only 4 methods for CRUD operations. The find method would be used for both listing and reading, by passing a query object argument.
Backend services could build the defined query object based on a URL query string or based on specific parameters.
The query object (SomeQueryDto) could also implement a specific interface if needed, and is easy to extend later without adding complexity.
<?php
interface SomeRepositoryInterface
{
public function create(SomeEntityInterface $entityData): SomeEntityInterface;
public function update(SomeEntityInterface $entityData): SomeEntityInterface;
public function delete(int $id): void;
public function find(SomeEntityQueryInterface $query): array;
}
class SomeRepository implements SomeRepositoryInterface
{
public function find(SomeEntityQueryInterface $query): array
{
$qb = $this->getQueryBuilder();
foreach ($query->getSearchParameters() as $attribute) {
$qb->where($attribute['field'], $attribute['operator'], $attribute['value']);
}
return $qb->get();
}
}
/**
* Provide query data to search for tickets.
*
* @method SomeQueryDto userId(int $id, string $operator = null)
* @method SomeQueryDto categoryId(int $id, string $operator = null)
* @method SomeQueryDto createdAt(string $date, string $operator = null)
*/
class SomeQueryDto
{
/** @var array */
const QUERYABLE_FIELDS = [
'id',
'subject',
'user_id',
'category_id',
'created_at',
];
/** @var array */
const STRING_DB_OPERATORS = [
'eq' => '=', // Equal to
'gt' => '>', // Greater than
'lt' => '<', // Less than
'gte' => '>=', // Greater than or equal to
'lte' => '<=', // Less than or equal to
'ne' => '<>', // Not equal to
'like' => 'like', // Search similar text
'in' => 'in', // one of range of values
];
/**
* @var array
*/
private $searchParameters = [];
const DEFAULT_OPERATOR = 'eq';
/**
* Build this query object out of query string.
* ex: id=gt:10&id=lte:20&category_id=in:1,2,3
*/
public static function buildFromString(string $queryString): SomeQueryDto
{
$query = new self();
parse_str($queryString, $queryFields);
foreach ($queryFields as $field => $operatorAndValue) {
[$operator, $value] = explode(':', $operatorAndValue);
$query->addParameter($field, $operator, $value);
}
return $query;
}
public function addParameter(string $field, string $operator, $value): SomeQueryDto
{
if (!in_array($field, self::QUERYABLE_FIELDS)) {
throw new \Exception("$field is invalid query field.");
}
if (!array_key_exists($operator, self::STRING_DB_OPERATORS)) {
throw new \Exception("$operator is invalid query operator.");
}
if (!is_scalar($value)) {
throw new \Exception("$value is invalid query value.");
}
array_push(
$this->searchParameters,
[
'field' => $field,
'operator' => self::STRING_DB_OPERATORS[$operator],
'value' => $value
]
);
return $this;
}
public function __call($name, $arguments)
{
// camelCase to snake_case
$field = strtolower(preg_replace('/(?<!^)[A-Z]/', '_$0', $name));
if (in_array($field, self::QUERYABLE_FIELDS)) {
return $this->addParameter($field, $arguments[1] ?? self::DEFAULT_OPERATOR, $arguments[0]);
}
}
public function getSearchParameters()
{
return $this->searchParameters;
}
}
Example usage:
$query = new SomeQueryDto();
$query->userId(1)->categoryId(2, 'ne')->createdAt('2020-03-03', 'lte');
$entities = $someRepository->find($query);
// Or by passing the HTTP query string
$query = SomeQueryDto::buildFromString('created_at=gte:2020-01-01&category_id=in:1,2,3');
$entities = $someRepository->find($query);
I suggest https://packagist.org/packages/prettus/l5-repository as a vendor package to implement repositories/criteria etc. in Laravel 5 :D
I agree with @ryan1234 that you should pass around complete objects within the code and should use generic query methods to get those objects.
Model::where(['attr1' => 'val1'])->get();
For external/endpoint usage I really like the GraphQL method.
POST /api/graphql
{
query: {
Model(attr1: 'val1') {
attr2
attr3
}
}
}
class Criteria {}
class Select {}
class Count {}
class Delete {}
class Update {}
class FieldFilter {}
class InArrayFilter {}
// ...
$criteria = new Criteria();
$filter = new FieldFilter();
$filter->set($criteria, $entity, $property, $value);
$select = new Select($criteria);
$count = new Count($criteria);
$count->getRowCount();
$select->fetchOne(); // fetchAll();
I understand that multiple inheritance1 is simply not supported in PHP, and while many "hacks" or workarounds exist to emulate it, I also understand that an approach such as object composition is likely more flexible, stable, and understandable than such workarounds. Curiously, PHP 5.4's traits would be the fitting solution, but we're not quite there yet, are we?
Now, this isn't simply an "amidoinitrite?" question, but I'd like to ensure that my approach makes sense to others.
Given I have classes Action and Event (there are more, but we'll keep it brief) and they both require (near) identical methods, the obvious approach would be: create a common base class, extend, and go; they are, after all, conceptually similar enough to constitute being siblings in a class hierarchy (I think).
The problem is Event needs to extend a class (Exception) that itself cannot extend anything. The methods (and properties) all pertain to "attribute" values, we'll call them "options" and "data", where "options" are values stored at class level, and "data" are values stored at instance level.
With the exception of (no pun intended) the Exception class, I can simply create a common class that all pertinent objects extend in order to inherit the necessary functionality, but I'm wondering what I can do to avoid the seemingly inevitable code duplication in Event; also, other classes that are not conceptually similar enough to be siblings need this functionality.
So far the answer seems to be, using the object composition approach, create a Data class, and manage it at two points:
At object instantiation, create a Data instance to be used with the object as "data".
At some point (through a static initialize() method perhaps) create a Data instance to be used statically with the class as "options".
Interfaces, named IData and IOption for example, would be implemented by classes needing this functionality. IData simply enforces the instance methods of the Data class on the consumer, and calls would be forwarded to the instance Data property object, whereas IOption would enforce similarly named methods (substitute "data" for "option") and those methods would forward to the static Data property object.
What I'm looking at is something like this (the methods are somewhat naive in appearance, but I've slimmed them for brevity here):
interface IData{
public function setData($name, $value);
public function putData($name, &$variable);
public function getData($name = null);
}
interface IOption{
public static function initializeOptions();
public static function setOption($name, $value);
public static function setOptions(Array $options);
public static function getOptions($name = null);
}
class Data implements IData{
private $_values = array();
public function setData($name, $value){
$this->_values[$name] = $value;
}
public function putData($name, &$variable){
$this->_values[$name] = &$variable;
}
public function getData($name = null){
if(null === $name){
return $this->_values;
}
if(isset($this->_values[$name])){
return $this->_values[$name];
}
return null;
}
}
class Test implements IData, IOption{
private static $_option;
private $_data;
public static function initializeOptions(){
self::$_option = new Data();
}
public static function setOption($name, $value){
self::$_option->setData($name, $value);
}
public static function setOptions(Array $options){
foreach($options as $name => $value){
self::$_option->setData($name, $value);
}
}
public static function getOptions($name = null){
return self::$_option->getData($name);
}
public function __construct(){
$this->_data = new Data();
}
public function setData($name, $value){
$this->_data->setData($name, $value);
return $this;
}
public function putData($name, &$variable){
$this->_data->putData($name, $variable);
return $this;
}
public function getData($name = null){
return $this->_data->getData($name);
}
}
So where do I go from here? I can't shake the feeling that I'm moving away from good design with this; I've introduced an irreversible dependency between the client classes and the storage classes, which the interfaces can't explicitly enforce.
Edit: Alternatively, I could keep the reference to Data (wherever necessary) public, eliminating the need for proxy methods, thus simplifying the composition. The problem then, is that I cannot deviate from the Data class functionality, say for instance if I need to make getData() act recursively, as this snippet exemplifies:
function getData($name = null){
if(null === $name){
// $parent_object would refer to $this->_parent
// in the Test class, given it had a hierarchal
// implementation
return array_replace($parent_object->getData(), $this->_values);
}
// ...
}
Of course, this all boils down to separate definitions on a per-class basis, to support any deviation from a default implementation.
I suppose the end-all here is that I'm having trouble understanding where code duplication is "alright" (or more accurately, unavoidable) and where I can extract common functionality into a container, and how to reference and use the contained functionality across classes, deviating (typically negligibly) where necessary. Again, traits (in my cursory testing on the beta) seem to be a perfect fit here, but the principle of composition has existed long before 5.4 (and PHP entirely, for that matter), and I'm certain that there is a "classic" way to accomplish this.
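For what it's worth, here is roughly what the trait version would look like once 5.4 lands. This is only a sketch of the instance-level "data" half, mirroring the Data class above, and the trait/class names are my own:

// Sketch: the same instance-level "data" behaviour expressed as a PHP 5.4 trait,
// removing the need for the forwarding methods in Test.
trait HasData
{
    private $_values = array();

    public function setData($name, $value)
    {
        $this->_values[$name] = $value;
        return $this;
    }

    public function putData($name, &$variable)
    {
        $this->_values[$name] = &$variable;
        return $this;
    }

    public function getData($name = null)
    {
        if (null === $name) {
            return $this->_values;
        }
        return isset($this->_values[$name]) ? $this->_values[$name] : null;
    }
}

class Action
{
    use HasData;
}

class Event extends Exception
{
    use HasData; // works even though Event already extends Exception
}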
1. Interestingly, the page for multiple inheritance at Wikipedia has been flagged for copyright investigation. Diamond problem seemed like a fitting substitute.
EDIT: I've just read your question again and you seem to be suggesting that you are actually using the getters and setters to manipulate the data. If this is the case then could you provide me with more detail on what it is that you're trying to achieve. I suspect that how you've decided to model your objects and data is what has led you to this situation and that an alternative would solve the problem.
You don't need multiple inheritance. You don't even need most of the code you've written.
If the purpose of the classes 'Data' and 'Option' is simply to store data, then use an array. Or, if you prefer the syntax of an object, cast the array to an object or use an instance of stdClass:
$person = (object)array(
'name' => 'Peter',
'gender' => 'Male'
);
OR
$person = new stdClass;
$person->name = 'Peter';
$person->gender = 'Male';
Having a whole bunch of getters and setters that don't actually do anything to the data is pointless.
(This question uses PHP as context but isn't restricted to PHP only. e.g. Any language with built in hash is also relevant)
Let's look at this example (PHP):
function makeAFredUsingAssoc()
{
return array(
'id'=>1337,
'height'=>137,
'name'=>"Green Fred");
}
Versus:
class Fred
{
public $id;
public $height;
public $name;
public function __construct($id, $height, $name)
{
$this->id = $id;
$this->height = $height;
$this->name = $name;
}
}
function makeAFredUsingValueObject()
{
return new Fred(1337, 137, "Green Fred");
}
Method #1 is of course terser; however, it may easily lead to errors such as:
$myFred = makeAFredUsingAssoc();
return $myFred['naem']; // notice teh typo here
Of course, one might argue that $myFred->naem will equally lead to error, which is true. However having a formal class just feels more rigid to me, but I can't really justify it.
What would be the pros/cons to using each approach and when should people use which approach?
Under the surface, the two approaches are equivalent. However, you get most of the standard OO benefits when using a class: encapsulation, inheritance, etc.
Also, look at the following examples:
$arr['naem'] = 'John';
is perfectly valid and could be a difficult bug to find.
On the other hand,
$class->setNaem('John');
will never work.
A simple class like this one:
class PersonalData {
protected $firstname;
protected $lastname;
// Getters/setters here
}
Has a few advantages over an array.
There is no possibility of making typos. $data['firtsname'] = 'Chris'; will work, while $data->setFirtsname('Chris'); will throw an error.
Type hinting: PHP arrays can contain everything (including nothing), while a well-defined class contains only specified data.
public function doSth(array $personalData) {
$this->doSthElse($personalData['firstname']); // What if "firstname" index doesn't exist?
}
public function doSth(PersonalData $personalData) {
// I am guaranteed that following method exists.
// In worst case it will return NULL or some default value
$this->doSthElse($personalData->getFirstname());
}
We can add some extra code before set/get operations, like validation or logging:
public function setFirstname($firstname) {
if (/* doesn't match "firstname" regular expression */) {
throw new InvalidArgumentException('blah blah blah');
}
if (/* in debug mode */) {
log('Firstname set to: ' . $firstname);
}
$this->firstname = $firstname;
}
We can use all the benefits of OOP like inheritance, polymorphism, type hinting, encapsulation and so on...
As mentioned before, all of our "structs" can inherit from some base class that provides implementations of the Countable, Serializable or Iterator interfaces, so our structs can be used in foreach loops, etc.
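For instance, a small base class along those lines might look like this (a sketch of my own, not from the original answer):

// Sketch of a base "struct" class that makes foreach and count() work
// on the concrete struct's properties.
abstract class AbstractStruct implements IteratorAggregate, Countable
{
    public function getIterator(): Traversable
    {
        // Iterate over the concrete struct's properties,
        // so foreach ($personalData as $field => $value) works.
        return new ArrayIterator(get_object_vars($this));
    }

    public function count(): int
    {
        return count(get_object_vars($this));
    }
}

// PersonalData from above would then simply extend it:
class PersonalData extends AbstractStruct
{
    protected $firstname;
    protected $lastname;
    // Getters/setters here
}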
IDE support.
The only disadvantage seems to be speed. Creation of an array and operating on it is faster. However we all know that in many cases CPU time is much cheaper than programmer time. ;)
After thinking about it for some time, here's my own answer.
The main thing about preferring value objects over arrays is clarity.
Consider this function:
// Yes, you can specify parameter types in PHP
function MagicFunction(Fred $fred)
{
// ...
}
versus
function MagicFunction(array $fred)
{
}
The intent is clearer. The function author can enforce his requirement.
More importantly, as the user, I can easily look up what constitutes a valid Fred. I just need to open Fred.php and discover its internals.
There is a contract between the caller and the callee. Using value objects, this contract can be written as syntax-checked code:
class Fred
{
public $name;
// ...
}
If I used an array, I can only hope my user would read the comments or the documentation:
// IMPORTANT! You need to specify 'name' and 'age'
function MagicFunction(array $fred)
{
}
Depending on the use case, I might use either one. The advantage of the class is that I can use it like a type and use type hints on methods or any introspection methods. If I just want to pass around some random dataset from a query or something, I'd likely use the array. So I guess as long as Fred has special meaning in my model, I'd use a class.
On a sidenote:
Value objects are supposed to be immutable. At least if you are referring to Eric Evans's definition in Domain-Driven Design. In Fowler's PoEAA, value objects do not necessarily have to be immutable (though it is suggested), but they should not have identity, which Fred clearly has.
Let me pose this question to you:
What's so different about making a typo like $myFred['naem'] and making a typo like $myFred->naem? The same issue still exists in both cases and they both error.
I like to use KISS (keep it simple, stupid) when I program.
If you are simply returning a subset of a query from a method, simply return an array.
If you are storing the data as a public/private/static/protected variable in one of your classes, it would be best to store it as a stdClass.
If you are going to later pass this to another class method, you might prefer the strict typing of the Fred class, i.e. public function acceptsClass(Fred $fredObj)
You could just as easily have created a standard class as opposed to an array if it is to be used as a return value. In this case you couldn't care less about strict typing.
$class = new stdClass();
$class->param = 'value';
$class->param2 = 'value2';
return $class;
A pro for the hash: It is able to handle name-value combinations which are unknown at design time.
When the return value represents an entity in your application, you should use an object, as this is the purpose of OOP. If you just want to return a group of unrelated values then it's not so clear cut. If it's part of a public API, though, then a declared class is still the best way to go.
Honestly, I like them both.
Hash arrays are way faster than making objects, and time is money!
But, JSON doesn't like hash arrays (which seems a bit like OOP OCD).
Maybe for projects with multiple people, a well-defined class would be better.
Hash arrays might take more CPU time and memory (an object has a predefined amount), though it's hard to be sure for every scenario.
But what really sucks is thinking too much about which one to use. Like I said, JSON doesn't like hashes. Oops, I used an array. Now I've got to change a few thousand lines of code.
I don't like it, but it seems that classes are the safer way to go.
The benefit of a proper value object is that there's no way to actually make an invalid one and no way to change one that exists (integrity and "immutability"). With only getters and type-hinted parameters, there's NO WAY to screw it up in compilable code, which you can obviously easily do with malleable arrays.
Alternatively you could validate in a public constructor and throw an exception, but this provides a gentler factory method.
class Color
{
public static function create($name, $rgb) {
// validate both (example check -- adapt to whatever name/RGB format you accept)
$bothValid = is_string($name) && $name !== ''
    && preg_match('/^#[0-9A-Fa-f]{6}$/', (string) $rgb);
if ($bothValid) {
return new self($name, $rgb);
} else {
return false;
}
}
public function getName() { return $this->_name; }
public function getRgb() { return $this->_rgb; }
protected function __construct($name, $rgb)
{
$this->_name = $name;
$this->_rgb = $rgb;
}
protected $_name;
protected $_rgb;
}
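Usage of the factory would then look something like this (still assuming the illustrative name/RGB check added above):

// Hypothetical usage of the factory above.
$red = Color::create('red', '#FF0000');   // Color instance
$bad = Color::create('', 'not-a-colour'); // false -- no half-built object exists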
I have worked with OOP languages for over 10 years.
If you understand the way objects work, you will love it.
Inheritance, Polymorphism, Encapsulation and Overloading are the key advantages of OOP.
On the other hand, when we talk about PHP we have to consider that PHP isn't a full-featured object-oriented language.
For example, we can't use method overloading or constructor overloading (at least not in a straightforward way).
Associative arrays in PHP are a VERY nice feature, but I think they harm PHP enterprise applications.
When you write code you want to end up with a clean and maintainable application.
Another thing that you lose with associative arrays is IntelliSense.
So I think if you want to write cleaner and more maintainable code, you have to use the OOP features where they are provided.
I prefer to have hard-coded properties like in your second example. I feel like it more clearly defines the expected class structure (and all possible properties on the class). As opposed to the first example which boils down to just always remembering to use the same key names. With the second you can always go back and look at the class to get an idea of the properties just by looking at the top of the file.
You'll better know you're doing something wrong with the second one -- if you try to echo $this->doesntExist you'll get an error, whereas if you try to echo $array['doesntExist'] you won't.