I'll admit, I haven't unit tested much... but I'd like to. With that being said, I have a very complex registration process that I'd like to optimize for easier unit testing. I'm looking for a way to structure my classes so that I can test them more easily in the future. All of this logic is contained within an MVC framework, so you can assume the controller is the root where everything gets instantiated from.
To simplify, what I'm essentially asking is how to set up a system where you can manage any number of third-party modules with CRUD updates. These third-party modules are all RESTful API driven, and response data is stored in local copies. Something like the deletion of a user account would need to trigger the deletion of all associated modules (which I refer to as providers). These providers may have a dependency on another provider, so the order of deletions/creations is important. I'm interested in which design patterns I should specifically be using to support my application.
Registration spans several classes and stores data in several db tables. Here's the order of the different providers and methods (they aren't statics, just written that way for brevity):
Provider::create('external::create-user') initiates registration at a particular step of a particular provider. The double-colon syntax in the first param indicates the class should trigger creation on providerClass::providerMethod. I had made a general assumption that Provider would be an interface with the methods create(), update(), delete() that all other providers would implement (see the sketch after this list). How this gets instantiated is likely something you need to help me with.
$user = Provider_External::createUser() creates a user on an external API, returns success, and user gets stored in my database.
$customer = Provider_Gapps_Customer::create($user) creates a customer on a third party API, returns success, and stores locally.
$subscription = Provider_Gapps_Subscription::create($customer) creates a subscription associated to the previously created customer on the third party API, returns success, and stores locally.
Provider_Gapps_Verification::get($customer, $subscription) retrieves a row from an external API. This information gets stored locally. Another call is made which I'm skipping to keep things concise.
Provider_Gapps_Verification::verify($customer, $subscription) performs an external API verification process. The result of which gets stored locally.
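To make the shape of this concrete, here's a rough sketch of the interface and dispatch I have in mind. Only the method names and the 'class::method' string format come from my description above; everything else (names, signatures, the injected registry) is illustrative:

interface ProviderInterface
{
    public function create($input = null);
    public function update($input = null);
    public function delete($input = null);
}

// One possible way to route 'external::create-user' without static calls:
// a dispatcher holding an injected slug => provider-instance registry.
class ProviderDispatcher
{
    private $providers;

    public function __construct(array $providers)
    {
        $this->providers = $providers;
    }

    public function create($step)
    {
        list($slug, $method) = explode('::', $step);
        // 'create-user' => createUser()
        $camel = lcfirst(str_replace(' ', '', ucwords(str_replace('-', ' ', $method))));
        return $this->providers[$slug]->$camel();
    }
}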
This is a really dumbed down sample as the actual code relies upon at least 6 external API calls and over 10 local database rows created during registration. It doesn't make sense to use dependency injection at the constructor level because I might need to instantiate 6 classes in the controller without knowing if I even need them all. What I'm looking to accomplish would be something like Provider::create('external') where I simply specify the starting step to kick off registration.
The Crux of the Problem
So as you can see, this is just one sample of a registration process. I'm building a system where I could have several hundred service providers (external API modules) that I need to sign up for, update, delete, etc. Each of these providers gets related back to a user account.
I would like to build this system in a manner where I can specify an order of operations (steps) when triggering the creation of a new provider. Put another way, allow me to specify which provider/method combination gets triggered next in the chain of events since creation can span so many steps. Currently, I have this chain of events occurring via the subject/observer pattern. I'm looking to potentially move this code to a database table, provider_steps, where I list each step as well as its following success_step and failure_step (for rollbacks and deletes). The table would look as follows (a sketch of a runner that walks it comes after):
# surrogate key (one provider will have many steps)
id int(11) unsigned auto_increment primary key,
# the id of the parent provider row
provider_id int(11) unsigned,
# the short, slug name of the step for using in codebase
step_name varchar(60),
# the name of the method correlating to the step
method_name varchar(120),
# the steps that get triggered on success of this step
# can be comma delimited; multiple steps could be triggered in parallel
triggers_success varchar(255),
# the steps that get triggered on failure of this step
# can be comma delimited; multiple steps could be triggered in parallel
triggers_failure varchar(255),
created_at datetime,
updated_at datetime,
index (provider_id, step_name)
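For illustration, a rough sketch of a runner that walks this table; fetchStepRow() and dispatch() are hypothetical helpers standing in for the real lookup and method invocation:

function runStep($providerId, $stepName)
{
    $step = fetchStepRow($providerId, $stepName); // SELECT ... FROM provider_steps
    $ok = dispatch($step['method_name']);         // invoke providerClass::providerMethod

    $next = $ok ? $step['triggers_success'] : $step['triggers_failure'];
    foreach (array_filter(explode(',', (string) $next)) as $nextStep) {
        runStep($providerId, trim($nextStep));
    }
}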
There are so many decisions to make here... I know I should favor composition over inheritance and create some interfaces. I also know I'm likely going to need factories. Lastly, I have a lot of domain model shit going on here... so I likely need business domain classes. I'm just not sure how to mesh them all together without creating an utter mess in my pursuit of the holy grail.
Also, where would be the best place for the db queries to take place?
I have a model for each database table already, but I'm interested in knowing where and how to instantiate the particular model methods.
Things I've Been Reading...
Design Patterns
The Strategy Pattern
Composition over Inheritance
The Factory Method pattern
The Abstract Factory pattern
The Builder pattern
The Chain-of-Responsibility pattern
You're already working with the pub/sub pattern, which seems appropriate. Given nothing but your comments above, I'd be considering an ordered list as a priority mechanism.
But it still doesn't smell right that each subscriber is concerned with the order of operations of its dependents for triggering success/failure. Dependencies usually seem like they belong in a tree, not a list. If you stored them in a tree (using the composite pattern) then the built-in recursion would be able to clean up each dependency by cleaning up its dependents first. That way you're no longer worried about prioritizing in which order the cleanup happens - the tree handles that automatically.
And you can use a tree for storing pub/sub subscribers almost as easily as you can use a list.
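A hedged sketch of that composite idea (names invented): each node cleans up its dependents before itself, so the deletion order falls out of the recursion:

class ProviderNode
{
    /** @var ProviderNode[] */
    private $dependents = array();

    public function addDependent(ProviderNode $node)
    {
        $this->dependents[] = $node;
    }

    public function delete()
    {
        // Recurse first: dependents are removed before their parent.
        foreach ($this->dependents as $dependent) {
            $dependent->delete();
        }
        $this->doDelete(); // the provider-specific API call + local cleanup
    }

    protected function doDelete()
    {
        // overridden per provider
    }
}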
Using a test-driven development approach could get you what you need, and would ensure your entire application is not only fully testable, but completely covered by tests that prove it does what you want. I'd start by describing exactly what you need to do to meet one single requirement.
One thing you know you want to do is add a provider, so a TestAddProvider() test seems appropriate. Note that it should be pretty simple at this point, and have nothing to do with a composite pattern. Once that's working, you know that a provider has a dependent. Create a TestAddProviderWithDependent() test, and see how that goes. Again, it shouldn't be complex. Next, you'd likely want to TestAddProviderWithTwoDependents(), and that's where the list would get implemented. Once that's working, you know you want the Provider to also be a Dependent, so a new test would prove the inheritance model worked. From there, you'd add enough tests to convince yourself that various combinations of adding providers and dependents worked, and tests for exception conditions, etc. Just from the tests and requirements, you'd quickly arrive at a composite pattern that meets your needs. At this point I'd actually crack open my copy of GoF to ensure I understood the consequences of choosing the composite pattern, and to make sure I didn't add an inappropriate wart.
Another known requirement is to delete providers, so create a TestDeleteProvider() test, and implement the DeleteProvider() method. You won't be far away from having the provider delete its dependents, too, so the next step might be creating a TestDeleteProviderWithADependent() test. The recursion of the composite pattern should be evident at this point, and you should only need a few more tests to convince yourself that deeply nested providers, empty leafs, wide nodes, etc., all will properly clean themselves up.
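To give a flavor of those first deletion tests, here's a rough PHPUnit sketch; the Provider class and its isDeleted()/addDependent() methods are hypothetical stand-ins for whatever your earlier tests drove out:

class ProviderTest extends PHPUnit_Framework_TestCase
{
    public function testDeleteProvider()
    {
        $provider = new Provider('customer');
        $provider->delete();
        $this->assertTrue($provider->isDeleted());
    }

    public function testDeleteProviderWithADependent()
    {
        $provider  = new Provider('customer');
        $dependent = new Provider('subscription');
        $provider->addDependent($dependent);

        $provider->delete();

        // The composite recursion cleans up the dependent first.
        $this->assertTrue($dependent->isDeleted());
        $this->assertTrue($provider->isDeleted());
    }
}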
I would assume that there's a requirement for your providers to actually provide their services. Time to test calling the providers (using mock providers for testing), and adding tests that ensure they can find their dependencies. Again, the recursion of the composite pattern should help build the list of dependencies or whatever you need to call the correct providers correctly.
You might find that providers have to be called in a specific order. At this point you might need to add prioritization to the lists at each node within the composite tree. Or maybe you have to build an entirely different structure (such as a linked list) to call them in the right order. Use the tests and approach it slowly. You might still have people concerned that you delete dependents in a particular externally prescribed order. At this point you can use your tests to prove to the doubters that you will always delete them safely, even if not in the order they were thinking.
If you've been doing it right, all your previous tests should continue to pass.
Then come the tricky questions. What if you have two providers that share a common dependency? If you delete one provider, should it delete all of its dependencies even though a different provider needs one of them? Add a test, and implement your rule. I figure I'd handle it through reference counting, but maybe you want a copy of the provider for the second instance, so you never have to worry about sharing children, and you keep things simpler that way. Or maybe it's never a problem in your domain. Another tricky question is if your providers can have circular dependencies. How do you ensure you don't end up in a self-referential loop? Write tests and figure it out.
After you've got this whole structure figured out, only then would you start thinking about the data you would use to describe this hierarchy.
That's the approach I'd consider. It may not be right for you, but that's for you to decide.
Unit Testing
With unit testing, we only want to test the code that makes up the individual unit of source code, typically a class method or function in PHP (Unit Testing Overview). This means we don't want to actually test the external API in unit testing; we only want to test the code we are writing locally. If you do want to test entire workflows, you are likely wanting to perform integration testing (Integration Testing Overview), which is a different beast.
As you specifically asked about designing for unit testing, let's assume you actually mean unit testing as opposed to integration testing, and submit that there are two reasonable ways to go about designing your Provider classes.
Stub Out
The practice of replacing an object with a test double that (optionally) returns configured return values is referred to as stubbing. You can use a stub to "replace a real component on which the SUT depends so that the test has a control point for the indirect inputs of the SUT. This allows the test to force the SUT down paths it might not otherwise execute". Reference & Examples
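For example, inside a PHPUnit test method, a stub of a hypothetical ApiClient wrapper around the remote REST calls might look like this (the wrapper class, its method, and the injectable constructor are assumptions, not your actual code):

$stub = $this->getMock('ApiClient');
$stub->expects($this->any())
     ->method('createUser')
     ->will($this->returnValue(array('id' => 42, 'status' => 'ok')));

// The SUT now receives a canned response instead of hitting the real API.
$provider = new Provider_External($stub);
$user = $provider->createUser();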
Mock Objects
The practice of replacing an object with a test double that verifies expectations, for instance asserting that a method has been called, is referred to as mocking.
You can use a mock object "as an observation point that is used to verify the indirect outputs of the SUT as it is exercised. Typically, the mock object also includes the functionality of a test stub in that it must return values to the SUT if it hasn't already failed the tests but the emphasis is on the verification of the indirect outputs. Therefore, a mock object is a lot more than just a test stub plus assertions; it is used in a fundamentally different way".
Reference & Examples
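And the same double used as a mock, where the emphasis shifts to verifying the outgoing call (again using the hypothetical ApiClient wrapper):

$mock = $this->getMock('ApiClient');
$mock->expects($this->once())
     ->method('deleteUser')
     ->with($this->equalTo(42));

$provider = new Provider_External($mock);
$provider->deleteUser(42); // the expectation is verified when the test finishes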
Our Advice
Design your class to allow both stubbing and mocking. The PHPUnit Manual has an excellent example of stubbing and mocking web services. While this doesn't help you out of the box, it demonstrates how you would go about implementing the same for the RESTful API you are consuming.
Where is the best place for the db queries to take place?
We suggest you use an ORM and not solve this yourself. You can easily Google PHP ORMs and make your own decision based on your own needs. Our advice is to use Doctrine: we use Doctrine, it suits our needs well, and over the past few years we have come to appreciate how well the Doctrine developers know the domain. Simply put, they do it better than we could ourselves, so we are happy to let them do it for us.
If you don't really grasp why you should use an ORM, see Why should you use an ORM? and then Google the same question. If you still feel like you can roll your own ORM or otherwise handle the database access yourself better than the guys dedicated to it, we would expect you to already know the answer to the question. If you feel you have a pressing need to handle it yourself, we suggest you look at the source code for a number of ORMs (see Doctrine on GitHub) and find the solution that best fits your scenario.
Thanks for asking a fun question, I appreciate it.
Every single dependency relationship within your class hierarchy must be accessible from the outside world (it shouldn't be highly coupled). For instance, if you are instantiating class A within class B, class B must have setter/getter methods implemented for the class A instance it holds.
http://en.wikipedia.org/wiki/Dependency_injection
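A bare-bones illustration of that point, with generic class names:

class B
{
    private $a;

    public function __construct(A $a)   // constructor injection
    {
        $this->a = $a;
    }

    public function setA(A $a)          // setter injection, if you prefer
    {
        $this->a = $a;
    }
}

// A test can now hand B a mock of A instead of letting B build its own.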
The foremost problem I can see with your code - and the one that actually hinders you from testing it - is the use of static class method calls:
Provider::create('external::create-user')
$user = Provider_External::createUser()
$customer = Provider_Gapps_Customer::create($user)
$subscription = Provider_Gapps_Subscription::create($customer)
...
It's epidemic in your code - even if you "only" outlined them as static for "brevity". Such an attitude is not brevity; it's counter-productive for testable code. Avoid it at all costs, including when asking a question about unit testing: this is known bad practice, and it is known that such code is hard to test.
After you've converted all static calls into object method invocations and used Dependency Injection instead of static global state to pass the objects along, you can just do unit testing with PHPUnit, including making use of stub and mock objects collaborating in your (simple) tests.
So here is a TODO:
Refactor static method calls into object method invocations.
Use Dependency Injection to pass objects along.
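A minimal before/after sketch of that refactoring; the CustomerProvider type and RegistrationService are invented for illustration:

// Before: hard-wired static call - no seam for a test double.
$customer = Provider_Gapps_Customer::create($user);

// After: the collaborator is injected, so a test can pass a stub/mock.
class RegistrationService
{
    private $customerProvider;

    public function __construct(CustomerProvider $customerProvider)
    {
        $this->customerProvider = $customerProvider;
    }

    public function register($user)
    {
        return $this->customerProvider->create($user);
    }
}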
And you very much improved your code. If you argue that you can not do that, do not waste your time with unit-testing, waste it with maintaining your application, ship it fast, let it make some money, and burn it if it's not profitable any longer. But don't waste your programming life with unit-testing static global state - it's just stupid to do.
Think about layering your application with defined roles and responsibilities for each layer. You may like to take inspiration from Apache Axis' message flow subsystem. The core idea is to create a chain of handlers through which the request flows until it is processed. Such a design facilitates pluggable components which may be bundled together to create higher-order functions.
Further, you may like to read about Functors/Function Objects, particularly Closure, Predicate, Transformer and Supplier, to create your participating components. Hope that helps.
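A rough sketch of such a handler chain (all names invented): each handler processes the request and hands its result to the next, so higher-order flows are just compositions of small pluggable parts:

interface Handler
{
    public function handle($request);
}

class HandlerChain implements Handler
{
    private $handlers = array();

    public function add(Handler $handler)
    {
        $this->handlers[] = $handler;
        return $this; // fluent, so chains read top-to-bottom
    }

    public function handle($request)
    {
        foreach ($this->handlers as $handler) {
            $request = $handler->handle($request);
        }
        return $request;
    }
}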
Have you looked at the state design pattern? http://en.wikipedia.org/wiki/State_pattern
You could model all your steps as different states in a state machine, and it would look like a graph. You could store this graph in your database table/XML; every provider can also have its own graph representing the order in which execution should happen.
So when you get into a certain state you may trigger one or more events (save user, get user). I don't know your application specifics, but events can be re-used by other providers.
If a step fails, a different graph path is executed.
If you abstract it correctly, you could have a loosely coupled system which follows the orders given by the graph and executes events based on state.
Then later, if you need to add some other provider, you only need to create a graph and/or some new events.
Here is some example: https://github.com/Metabor/Statemachine
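As a tiny illustration of the graph idea, the transitions could be as simple as a keyed array (step names invented) that the state machine walks, picking the success or failure edge after each event:

$transitions = array(
    'create-user' => array(
        'success' => 'create-customer',
        'failure' => 'abort',
    ),
    'create-customer' => array(
        'success' => 'create-subscription',
        'failure' => 'delete-user',
    ),
    'create-subscription' => array(
        'success' => 'verify',
        'failure' => 'delete-customer',
    ),
);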
Related
I'm a novice, but struggling hard to implement this interactive application I'm working on "the right way" or at least a good way in terms of scalability, maintainability, modularity, development speed and tool independence. That's why I chose the REST design guides and a framework which implements MVC.
However I can't get my head around where to put what in the following situation, and any input or reading material from a more experienced developer in these techniques would be greatly appreciated:
I'm developing a single-page web app which creates a resource that has several nested resources within. In the create methods and alike, I need to call the create methods of the nested resources. Right now every GET request is answered with JSON, which the front end then parses, shows, and adds dynamically to the page. The question is: where should these create and store methods for nested resources live, in the controller or in the model?
Currently, my approach is: since the controller's job is to handle user input, interact with the model, and return the view accordingly, the nested store methods are in the model (since the nested resources aren't created independently), while their create methods are in the controller (since they're requested from AJAX calls, which aren't nested), and so on. I'm worried that this is too mixed up and not general.
Am I OK? Am I mixed up? I don't wanna make a mess for my coworkers to understand. Theory becomes tricky when applied...
I'm gonna have a go at this. I am myself still learning about this as well, so if any information is wrong, please correct me.
In terms of scalability, you should always be able to create any model independently, even though at this point it appears not strictly necessary. The REST paradigm stands for exactly this: Each model (a.k.a. resource) has its own (sub)set of CRUD endpoints, which a client application can use to perform any action, on any composition of data (compositions in which elementary entities are mostly the models you specify).
Furthermore, a model should be concerned with its own data only, and that data is typically found in a single table (in the case of relational datastores). In many cases models specify facilities to read related resources, so that this data can be included when requested. That might look like the line below, and the response is ideally fully compliant with the JSON API specification:
GET //api/my-resources/1?include=related-resource
However, a model should never create (POST), update (PUT) or delete (DELETE) these relations, at least not without explicit instructions to do so.
If you have a procedure where a model and its nested models (I assume related models) are to be created in a single go, an extra endpoint can be created for this action. You'd have to come up with a sensible name for that set of resources, and use that throughout your application's HTTP/support layer. For instance, for creation of such a set, the request might be:
POST //api/sensible-name { your: 'data' }
Keep the { your: 'data' } part as close to a typical JSON API format as possible, preferably fully compliant. Then, in your back end (I suppose Laravel, in your case) you'd want to create a factory implementation that might be called <SensibleName>Factory, which takes care of figuring out how to map the posted data to different models, and how their relations should be specified. Under the hood, this factory just uses the model classes' creation facilities to get you where you want to go.
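A rough sketch of what that factory might do, assuming Eloquent-style models; every name here is a placeholder:

class SensibleNameFactory
{
    public function createFromPayload(array $payload)
    {
        // Map the posted document onto the parent model...
        $parent = ParentResource::create($payload['data']['attributes']);

        // ...then onto each nested resource, wiring up the relation.
        foreach ($payload['data']['nested'] as $nested) {
            $parent->children()->create($nested['attributes']);
        }

        return $parent;
    }
}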
If you instead automated this process in your model, it would be impossible to create the resources independently.
If you instead automated it in any single-resource controller, that would be non-compliant with the REST paradigm.
With the factory pattern, you explicitly use that class to perform the higher-level action, so none of the above concerns apply (leaving aside whether this approach is in accordance with REST at all).
The key takeaway is that the exact same result must still be achievable by performing multiple requests to single-resource endpoints, and your additional /api/sensible-name endpoint just substitutes for the need to call to those multiple endpoints, for the purpose of convenience, when you DO want to create multiple records in a single go.
Note that my statement has little to do with what endpoints to create to fetch nested resources. This SO question has some pretty good conversation as to what is acceptable, and how your specific needs might relate to that.
In the end, it's all about what works for you and your application; REST and the like are just philosophies that propose an approach to needs that are common in general web development.
I need help with something I can't get my head wrapped around regarding the Repository and Service/Use-case patterns (part of DDD) that I want to implement in my next (Laravel PHP) project.
All seems clear. Just one part of DDD is confusing: the data structures returned by repositories. People seem to choose the data structure the Repository should return (arrays or entities), but every choice has disadvantages. One is performance, going by my experience in past projects. Another is that you don't have interfaces for simple data structures (arrays or plain object attributes).
I’ll start with explaining the experience I have with a previous project. This project had flaws but some good strengths I learned from and like to see in my new project but with solving some design mistakes.
Previous experience
In the past I’ve build a website that was API Centric using the Kohana framework and Doctrine 2 ORM (data mapper pattern). The flow looked like this:
Website controller → API client (HMVC calls) → API controller → Custom Repository → Doctrine 2 ORM native Repository/Entity-manager
My custom Repository returned plain arrays using Doctrine2 DQL. Doctrine2 recommends array result data for read only operations. And yes, it made my site nice and light. The API controller just converted the array data to JSON. Simple as that.
In the past my company created projects relying fully on loaded Doctrine2 entities and it’s something we regretted due to performance.
My REST API supported queries like
/api/users?include_latest_adverts=2&include_location=true
on the users resource. The API controller passed include_location to the repository which directly included the location relation. The controller read latest_adverts=2 and called the adverts repository to get the latest 2 adverts of each user. Arrays were returned.
For example, the first user array:
[
    name,
    avatar,
    adverts [
        advert 1 [
            name,
            price
        ],
        advert 2 [
            ….
        ]
    ]
]
This proved to be very successful. My whole website was using the API. It would be very easy to add a new client, because the API was already perfectly in production using OAuth. The whole website runs on it.
But this design had flaws too. My controller still contained A LOT of logic for validation, mailing, params, or filters like has_adverts=true to get users with adverts only. It would mean that if I created a new port, like a totally new CLI interface, I would have to duplicate a lot of these controllers due to all the validation etc. But no duplication if I created a new client. So at least one problem was solved :-)
My admin panels were completely coupled to the Doctrine2 repository/entity manager to speed up development (sort of). Why? Because my API had fat controllers with special functionality for the website only (special validation, mailing for registering etc). I would have had to redo work or refactor a lot, so I decided to use the Entities directly to still have some sort of clear way of writing code, instead of rewriting all my API controllers and moving them to Services (for site & admin). Time was an issue in fixing my design mistakes.
For my next project I want all code to go through my own custom repositories and services. One flow for good separation.
New project (using DDD ideas) and dilemma with data structures
While I like the idea of being API centric, I don’t want my next project to be API centric in core because I think the same functionality should be available without the HTTP protocol in between. I want to design the core using DDD ideas.
But I liked the idea of using a layer that just talks like an API and returns simple arrays. The perfect base for any new port, including my own frontend. My idea is to consider my Service classes as the API interface (return the array data), do the validation etc. I could have Services specially for the website (registering) and plain services used by the Admin or background processes. In some admin cases a Service would not be required anyway for simple CRUD editing; I could just use Repositories directly. Controllers would be very thin. With this, creating a real REST API would just be a matter of creating new controllers using the same Services my frontend controller classes use.
For internal logic like business rules it would be useful to have Entities (clear interfaces) instead of arrays from repositories. This way I could benefit from defining some methods that do some logic based on attributes. BUT if I were using Doctrine2 and my repositories always returned Entities, my application would suffer a big performance hit!!
One data structure ensures performance but no clear interfaces; the other ensures clear interfaces but bad performance when using a Data Mapper pattern like Doctrine 2 (now or in the future). Also I could end up with two data types, which would be confusing.
I was thinking something similar to this flow:
Controller (thin) → UserService (incl. validation) → UserRepository (just storage) → Eloquent ORM
Why Eloquent instead of Doctrine2? Because I want to stick a bit to what’s common within the Laravel framework and community. So I could benefit from third party modules, for example to generate admin interfaces or similar based on models (bypassing my DDD rules). Other than using third party modules, I would design my core stuff so switching should always be easy and not affect data structure choices or performance.
Eloquent is an Active Record pattern, so I would be tempted to convert this data to POPOs like Doctrine2 entities. But nope... as said above, with Doctrine2 real models would make the system very fat. So I fall back to simple arrays again, knowing this would work for both and for any other implementation in the future.
But it feels bad to always rely on arrays. Especially when creating internal business rules. A developer would have to guess at array keys, have no autocompletion in his IDE, and could not have special methods like in Entity classes. But having two ways of dealing with data feels bad too. Or I am just too much of a perfectionist ;) I want ONE clear data structure for all!
Building interfaces and POPOs would mean a lot of duplicate work. I would need to convert an Eloquent model (just a table mapper, not an entity) to an entity object implementing this interface. All extra work. And eventually my last layer would be just like an API, thus converting it to arrays again. Which is extra work too. Arrays seem the deal again.
It seemed so easy reading up on DDD and Hexagonal. It seems so logical! But in reality I struggle with this one simple issue while trying to stick to OOP principles. I want to use arrays because it's the only way to be 100% sure I am not dependent on any model or querying choice of my ORM regarding performance etc., and don't have duplicate work in converting to arrays for views or an API. But there's no clear contract on how a user array should look. I want to speed up my project using these patterns, not slow it down :-) So having many converters is not an option.
Now I have read a lot of topics. One makes POPOs & interfaces that conform to proper entities like Doctrine2 could return, but with all the extra work for Eloquent. Switching to Doctrine2 should then be fairly easy, but would impact performance badly, or one would need to convert Doctrine2 array data to these own entity interfaces. Others choose to return simple arrays.
One convinces people to use Doctrine2 instead of Eloquent, but they leave out the fact that Doctrine2 is heavy and you really need to use array results for read only operations.
We design repositories to be changeable, right? Not just because it's "nice" by design. So how could we rely on full Entities if they have such a big impact on performance or cause duplicate work? Even when coupled to Doctrine2 only, this same issue would arise due to its performance!
All ORM implementations would be able to return arrays, thus no duplicate work there. Good performance. But we miss clear contracts. And we don’t have interfaces for arrays or class attributes (as a workaround)... Ugh ;)
Am I just missing a building block in our programming languages? Interfaces on simple data structures??
Is it wise to make everything arrays and have advanced business logic talk to these arrays? Thus no classes with clear interfaces. Any precalculated data (normally returned by an Entity method) would be within an array key defined by the Service class. If not wise, what's the alternative, considering all of the above?
I would really appreciate if someone with great experience in this “domain” considering performance, different ORM implementations, etc could tell me how he/she has dealt with this?
Thanks in advance!
I think what you are dealing with is something similar to what I'm struggling with. The solution I think works best is:
Entities/Repositories
Use and pass around Entities always when performing Write operations (Creating things, Updating things, Deleting things, and complex combinations thereof).
Sometimes you may use Entities when doing Read operations (when you anticipate the Read might need to be used for a Write soon after... i.e. ->findById is soon followed by ->save).
Anytime you are working with an Entity (whether it be Write or Read), the Repositories need to be the place to go. You should be able to tell new developers that they can only persist to the database through Entities and the Repository.
The Entities will have properties that represent some Domain Object (many times they represent a database table with the fields of a table, but not always). They will also contain the domain logic/rules with them (ie. validation, calculations) so they are not anemic. You may additionally have some domain services if your Entities need help interacting with other Entities (need to trigger other events), or you just need an additional place to handle some extra domain logic (perform Repository calls to check for some unique conditions).
Your Repositories will solely be for working with Entities. The Repositories could accept Entities and do some persistence work with them. Or they could accept just some parameters, and do some reading/fetching into full Entities.
Some Repositories will know how to save some Domain Objects that are more complex than others. Perhaps an Entity that has a property which contains a list of other Entities that need to be saved alongside the main entity (you can dive deeper into learning about Aggregate Roots if you want).
The interfaces to Repositories rest in your Domain layer, but not the actual implementations of those Repositories. That way you can have an Eloquent version or whatever.
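Sketched out, that separation might look like this (names invented; the Eloquent mapping is only indicative):

// Domain layer: only the contract.
interface UserRepositoryInterface
{
    public function findById($id);
    public function save(User $user);
}

// Infrastructure layer: the Eloquent-backed implementation.
class EloquentUserRepository implements UserRepositoryInterface
{
    public function findById($id)
    {
        $row = EloquentUser::find($id);
        return $row ? new User($row->id, $row->name, $row->email) : null;
    }

    public function save(User $user)
    {
        // map the entity's properties back onto the table row
    }
}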
Other Queries (Table Data Gateway)
These queries won't work with Entities. They'll just be accepting parameters and returning things like Arrays or POPO's (Plain Old PHP Objects).
Many times you will need to perform Reads that do not return nicely into a single Entity. These Reads are typically more for reporting (not for CRUD-like operations, like Reading a user into an edit form that is eventually submitted and saved). For example, you might have a report that is 200 rows of JOINed data. If you used the Repository and tried to return large, deep objects (with all the relationships populated, or even lazy-loaded) then you are going to have performance issues. Instead, use the Table Data Gateway pattern. You are just displaying data and not really needing OOP power here. The outputted data could however contain IDs, which through the UI could be used to initiate calls to Repository persistence methods.
As you are developing your app, when you come across the need for a new Read/Report query, create a new method in some class somewhere in your Table Data Gateway folder. You may find you have already created a similar query, so see how you can consolidate the other query. Use some parameters if necessary to make the gateway method's queries more flexible in particular ways (i.e. columns to select, sort order, pagination, etc.). Don't make your queries too flexible though; this is where query builders/ORMs go wrong! You need to constrain your queries to a certain extent to where if you need to replace them (perhaps a different database engine) then you can easily perceive what the allowed variations are and aren't. It's up to you to find the right balance between flexibility (so you have more DRY code) and constraints (so you can optimize/replace queries later).
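For instance, a gateway method for a report might be no more than a constrained query returning plain rows (table and column names made up):

class AdvertReportGateway
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    /** @return array[] one associative array per JOINed row */
    public function latestAdvertsWithUser($limit = 200)
    {
        $stmt = $this->pdo->prepare(
            'SELECT u.name, a.title, a.price
               FROM adverts a
               JOIN users u ON u.id = a.user_id
              ORDER BY a.created_at DESC
              LIMIT :limit'
        );
        $stmt->bindValue(':limit', (int) $limit, PDO::PARAM_INT);
        $stmt->execute();
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}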
You can create services in your Domain to handle receiving parameters, then passing them to the Table Data Gateway, and then receiving back arrays to do some more mutating on. This will keep your Domain logic in the domain (and out of the infrastructure/persistence layer of the Repository & Table Data Gateway).
Again, just like the Repository, use interfaces in your domain services so that the implementation details stay out of your Domain layer, and resides in the actual Table Data Gateway folder.
Separation of Concerns or Single Responsibility Principle
The majority of the questions in the dropdown list of questions that "may already have your answer" only explain "theory" and are not concrete examples that answer my simple question.
What I'm trying to accomplish
I have a class named GuestbookEntry that maps to the properties that are in the database table named "guestbook". Very simple!
Originally, I had a static method named getActiveEntries() that retrieved an array of all GuestbookEntry objects that had entries in the database. Then while learning how to properly design php classes, I learned two things:
Static methods are not desirable.
Separation of Concerns
My question:
Dealing with Separation of Concerns, if the GuestbookEntry class should only be responsible for managing single guestbook entries then where should this getActiveEntries() method go? I want to learn the absolute proper way to do this.
I guess there are actually two items in the question:
As #duskwuff points out, there is nothing wrong with static methods per se; if you know their caveats (e.g. "Late Static Binding") and limitations, they are just another tool to work with. However, the way you model the interaction with the DB does have an impact on separation of concerns and, for example, unit testing.
For different reasons there is no "absolute proper way" of doing persistence. One of the reasons is that each way of tackling it has different tradeoffs; which one is better for your project is hard to tell. The other important reason is that languages evolve, so a new language feature can improve the way frameworks handle things. So, instead of looking for the perfect way of doing it, you may want to consider different ways of approaching OO persistence, assuming that you want to use a relational database:
Use the Active Record pattern. What you have done so far looks like is in the Active Record style, so you may find it natural. The active record has the advantage of being simple to grasp, but tends to be tightly coupled with the DB (of course this depends on the implementation). This is bad from the separation of concerns view and may complicate testing.
Use an ORM (like Doctrine or Propel). In this case most of the hard work is done by the framework (DB mapping, foreign keys, cascade deletes, some standard queries, etc.), but you must adapt to the framework rules (e.g. I recall having a lot of problems with the way Doctrine 1 handled hierarchies in a project; AFAIK these things are solved in Doctrine 2).
Roll your own framework to suit your project needs and your programming style. This is clearly a time-consuming task, but you learn a lot.
As a general rule of thumb I try to keep my domain models as independent as possible from things like DB, mainly because of unit tests. Unit tests should be running all the time while you are programming and thus they should run fast (I always keep a terminal open while I program and I'm constantly switching to it to run the whole suite after applying changes). If you have to interact with the DB then your tests will become slow (any mid-sized system will have 100 or 200 test methods, so the methods should run in the order of milliseconds to be useful).
Finally, there are different techniques to cope with objects that communicate with DBs (e.g. mock objects), but a good piece of advice is to always have a layer between your domain model and the DB. This will make your model more flexible to changes and easier to test.
EDIT: I forgot to mention the DAO approach, also described in #MikeSW's answer.
HTH
Stop second-guessing yourself. A static method getActiveEntries() is a perfectly reasonable way to solve this problem.
The "enterprisey" solution that you're looking for would probably involve something along the lines of creating a GuestbookEntryFactory object, configuring it to fetch active entries, and executing it. This is silly. Static methods are a tool, and this is an entirely appropriate job for them.
GetActiveEntries should be a method of a repository/DAO which will return an array of GuestBookEntry. That repository of course can implement an interface so here you have easy testing.
I disagree with using a static method for this, as it's clearly a persistence access issue. The GuestBookEntry should care only about the 'business' functionality and never about the db. That's why it's useful to use a repository: that object will bridge the business layer to the db layer.
Edit: My PHP's rusty, but you get the idea.
interface IRetrieveEntries
{
    public function GetActiveEntries();
}

class EntriesRepository implements IRetrieveEntries
{
    private $_db;

    public function __construct($db)
    {
        $this->_db = $db;
    }

    public function GetActiveEntries()
    {
        /* Use $this->_db to retrieve the actual entries.
           You can use an ORM, PDO, whatever you want. */
        //return $entries;
    }
}
You'll pass the interface around everywhere you need to access the functionality, i.e. you don't couple the code to the actual repository. You will use it for unit testing as well. The point is that you encapsulate the real data access in the GetActiveEntries method; the rest of the app won't know about the database.
About the repository pattern, you can read some tutorials I've written (ignore the C# used; the concepts are valid in any language).
I am designing a new architecture for my company's software product. I am fairly new to unit testing. I read some horror stories about the use of singletons and static methods, but I am not clearly understanding the problem with using them and would appreciate some enlightenment.
Here is what I am doing:
I have a multi-tier architecture. At the server side level, I am using a series of reusable objects to represent database tables, called "Handlers". These handlers use other objects, like XMLObjects, XMLTables, different Datastructures, etc. Most of these are homemade, not your prepacked objects. Anyway, on top of this layer is a pseudo-singleton layer. The primary purpose of this is to simplify higher levels of server side code and create seamless class management. I can say:
$tablehandler = databasename::Handler('tablename')
...to get a table. I don't see the inherent problem with this. I am using a stack of handlers (an associative array) to contain instances of the different objects. I am not using globals, and all static data members are protected or private. My question is how this can cause problems with unit testing. I am not looking for bandwagon rhetoric; I am looking for a causal relationship. I would also appreciate any other insight into this. I feel like it's an extremely flexible and efficient architecture at this point. Any help here would be great.
Using static methods to provide access to managed instances--whether they be singletons, pooled objects, or on-the-fly instances--isn't a problem for testing as long as you provide a way for the tests to inject their own mocks and stubs as needed and remove them once complete. From your description I don't see anything that would block that ability. It's merely up to you to build it when necessary.
You'll experience difficulty if you have a class that makes static calls to methods that do the work without an extension point or going through an instance. For example, a call like this
UserTable::findByEmail($email);
makes testing difficult because you cannot plug in a mock, memory-only table, etc. However, changing it to
UserTable::get()->findByEmail($email);
solves the problem because the test can call UserTable::set($mock) in the setup code. For a more detailed example, see this answer.
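One way that get()/set() seam might be built (entirely illustrative):

class UserTable
{
    private static $instance;

    public static function get()
    {
        if (self::$instance === null) {
            self::$instance = new self();
        }
        return self::$instance;
    }

    // Tests inject a mock here, and call set(null) in teardown.
    public static function set(UserTable $instance = null)
    {
        self::$instance = $instance;
    }

    public function findByEmail($email)
    {
        // the real database lookup
    }
}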
Recently I moved to Symfony 2 and I have a little question.
Let's say I have the following Model:
"catalog" which contains "catalogs". The model gets its data from files but also needs a database connection to verify things.
In the Zend framework and other past projects I loaded the dependencies via a static object which forms a kind of "registry".
As I understand it, Symfony 2 uses its service pattern (dependency injection) instead. But how does this apply to my case?
Must I create a service for every model class I use, to auto-inject all dependencies? Or is it perfectly valid to create an instance of my object myself and set, for example, the database connection in my constructor?
Creating a service for every class which needs dependencies seems a little bit overkill to me.
You can certainly create classes and inject dependencies the old-fashioned way, but take the time to learn the details of creating services. I think you will find:
Adding a new service is trivial. Copy/paste a few lines of configuration, adjust the class, id and maybe some parameters and you are done. Takes much less time than creating the actual class.
Very soon you will progress from just injecting a database connection to injecting other services as well as perhaps some parameters. Do you really want to have to remember to do all that stuff each time you need to new an object?
Using service id's can divorce your controllers from the exact location/name of a class. The first time you need to do some refactoring and maybe move some services into their own bundle or perhaps swap out a service with another you will be glad that you won't need to hunt down all your code and make changes.
S2 is not really focused on "Models". Instead, think in terms of a service named CatalogManager which wraps access to assorted catalog functionality.
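If it helps, here is a minimal sketch using the standalone DependencyInjection component; the service id, class name, and injected connection are all illustrative:

use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Definition;
use Symfony\Component\DependencyInjection\Reference;

$container = new ContainerBuilder();
$container->setDefinition('catalog_manager', new Definition(
    'Acme\CatalogBundle\Service\CatalogManager',
    array(new Reference('database_connection'))
));

// Anywhere you have the container (e.g. a controller), the service
// comes fully wired:
$catalogManager = $container->get('catalog_manager');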