Execute a method on every instantiated object of a certain PHP class

So I have this problem in PHP: I have a class called unit that is a reference to a table called units, so when I update a row in the units table I have to update my unit object by calling a method called refresh(), like $unit->refresh(). This works fine for me because all updates to a single row are made through the unit object.
The problem arises when other classes update the units table. For example, let's say I have a class called units (in the plural). This class makes massive changes to the units table, changing rows that may be referenced by objects of the unit class.
So I was wondering whether a static method on the unit class could make all created objects of type unit call their refresh() method, or maybe there is another way to handle this (like an ORM, or a design pattern). I have two requirements: 1) I work on PostgreSQL and will not change that; 2) I use a lot of user functions, triggers and complex SQL (lots of time and date calculations, inner selects and so on).
So what could be useful in this kind of situation?

Your Unit sounds like an ActiveRecord. While you could, in theory, have your Units collection do something like this:
public function refreshAll()
{
    foreach ($this->units as $unit) {
        if ($unit->isModified) {
            $unit->update();
        }
    }
}
I strongly discourage doing so, because that would result in one query per Unit instance. Round trips to the database are most often the bottleneck; just imagine you have to update a couple hundred or even thousand instances.
A better approach would be to collect all the queries required for the next transaction and issue them in one request, e.g. something like
public function refreshAll()
{
    $sql = '';
    foreach ($this->units as $unit) {
        if ($unit->isModified) {
            $sql .= $unit->getSql();
        }
    }
    // these are dummy method calls on an imaginary db adapter;
    // PostgreSQL supports transactions, and whether you can send
    // several statements in one call depends on the driver
    $this->dbAdapter->startTransaction();
    $this->dbAdapter->multiQuery($sql);
    $this->dbAdapter->commitOrRollback();
}
Another option would be to use a dedicated Unit of Work pattern.
Excerpts at Google Books:
Unit of Work in Martin Fowler's POEAA
Unit of Work in Matt Zandstra's PHP Objects, Patterns and Practice
Note that when using a Unit of Work, you also might want to consider removing the database access code from your Unit instances completely (removing the ActiveRecord), because you are then shifting the responsibility of saving your objects into other classes (which is good).
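For illustration, here is a minimal sketch of what a hand-rolled Unit of Work could look like; the class and method names (and the PDO usage) are assumptions, not something from the original question:

// Minimal Unit of Work sketch (illustrative only).
// It collects dirty Unit objects and flushes them in a single transaction.
class UnitOfWork
{
    private $dirty = array();

    public function registerDirty(Unit $unit)
    {
        // spl_object_hash avoids registering the same instance twice
        $this->dirty[spl_object_hash($unit)] = $unit;
    }

    public function commit(PDO $pdo)
    {
        $pdo->beginTransaction();
        try {
            foreach ($this->dirty as $unit) {
                // assumes Unit can produce the SQL needed to persist its changes
                $pdo->exec($unit->getSql());
            }
            $pdo->commit();
        } catch (Exception $e) {
            $pdo->rollBack();
            throw $e;
        }
        $this->dirty = array();
    }
}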

Why not create an object pool and then call the refresh method on every object in the pool when needed? It's by far the simplest and most elegant solution for the problem at hand.
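A rough sketch of that idea, assuming the pool lives in the Unit class itself; all names here are illustrative, and note that the static array keeps every registered object alive until it is explicitly released:

// Simple registry ("pool") of live Unit objects.
class Unit
{
    private static $pool = array();

    public function __construct()
    {
        self::$pool[spl_object_hash($this)] = $this;
    }

    public function release()
    {
        // the pool holds strong references, so objects must be removed explicitly
        unset(self::$pool[spl_object_hash($this)]);
    }

    public function refresh()
    {
        // re-read this unit's row from the units table
    }

    public static function refreshAllInstances()
    {
        foreach (self::$pool as $unit) {
            $unit->refresh();
        }
    }
}

// after Units (plural) has done its bulk changes:
Unit::refreshAllInstances();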

You already said the answer. I would go for an ORM (since you are working with PHP, your best bet is Doctrine). As Gordon mentioned "Unit of Work" in the comments, Doctrine uses this pattern for exactly this purpose. For an example of the refreshing, look at the Doctrine docs on refreshing objects/relations.
If an ORM is too heavy for you, the already mentioned Unit of Work is your answer, and you could write your own lightweight UoW for the project. This example explains the Doctrine 2.0 UoW and what to do with it.

Related

How to make database transaction in PHP OOP

In my obsolete procedural code (which I'd now like to translate into OOP) I have simple database transaction code like this:
mysql_query("BEGIN");
mysql_query("INSERT INTO customers SET cid=$cid,cname='$cname'");
mysql_query("INSERT INTO departments SET did=$did,dname='$dname'");
mysql_query("COMMIT");
If I build OOP classes Customer and Department for mapping customers and departments database tables I can insert table records like:
$customer=new Customer();
$customer->setId($cid);
$customer->setName($cname);
$customer->save();
$department=new Department();
$department->setId($did);
$department->setName($dname);
$department->save();
My Customer and Department classes internally use other DB class for querying database.
But how can I make $customer->save() and $department->save() parts of a database transaction?
Should I have one outer class starting/ending the transaction, with the Customer and Department classes instantiated in it, or should the transaction be started somehow in Customer (like Customer::startTransaction()) and ended in Department (like Department::endTransaction())? Or...
An additional object is the way to go. Something like this:
$customer=new Customer();
$customer->setId($cid);
$customer->setName($cname);
$department=new Department();
$department->setId($did);
$department->setName($dname);
$transaction = new Transaction();
$transaction->add($customer);
$transaction->add($department);
$transaction->commit();
You can see that there are no calls to the save() method on $customer and $department anymore; the $transaction object takes care of that.
Implementation can be as simple as this:
class Transaction
{
    private $stack;

    public function __construct()
    {
        $this->stack = array();
    }

    public function add($entity)
    {
        $this->stack[] = $entity;
    }

    public function commit()
    {
        mysql_query("BEGIN");
        foreach ($this->stack as $entity) {
            $entity->save();
        }
        mysql_query("COMMIT");
    }
}
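One thing this sketch leaves out is error handling. A hedged variant using PDO (an assumption; the original uses mysql_query) could roll everything back if any save() fails, provided all entities write through the same connection:

// Variant of the Transaction class with rollback on failure (assumes PDO,
// and that every entity's save() uses this same PDO connection).
class Transaction
{
    private $pdo;
    private $stack = array();

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function add($entity)
    {
        $this->stack[] = $entity;
    }

    public function commit()
    {
        $this->pdo->beginTransaction();
        try {
            foreach ($this->stack as $entity) {
                $entity->save();   // each entity issues its own INSERT/UPDATE
            }
            $this->pdo->commit();
        } catch (Exception $e) {
            $this->pdo->rollBack();
            throw $e;
        }
    }
}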
How can I make $customer->save() and $department->save() parts of a database transaction?
You don't have to do anything besides start the transaction.
In most DBMS interfaces, the transaction is "global" to the database connection. If you start a transaction, then all subsequent work is automatically done within the scope of that transaction. If you commit, you have committed all changes since the last transaction BEGIN. If you rollback, you discard all changes since the last BEGIN (there's also an option to rollback to the last transaction savepoint).
I've only used one database API that allowed multiple independent transactions to be active per database connection simultaneously (that was InterBase / Firebird). But this is so uncommon, that standard database interfaces like ODBC, JDBC, PDO, Perl DBI just assume that you only get one active transaction per db connection, and all changes happen within the scope of the one active transaction.
Should I have one outer class starting/ending the transaction, with the Customer and Department classes instantiated in it, or should the transaction be started somehow in Customer (like Customer::startTransaction()) and ended in Department (like Department::endTransaction())? Or...
You should start a transaction, then invoke domain model classes like Customer and Department, then afterwards, either commit or rollback the transaction in the calling code.
The reason for this is that domain model methods can call other domain model methods. You never know how many levels deep these calls go, so it's really difficult for the domain model to know when it's time to commit or rollback.
For some pitfalls of doing this, see How do detect that transaction has already been started?
But they don't have to know that. Customer and Department should just do their work, inserting and deleting and updating as needed. Once they are done, the calling code decides if it wants to commit or rollback the whole set of work.
In a typical PHP application, a transaction is usually the same amount of work as one PHP request. It's possible, though uncommon, to do more than one transaction during a given PHP request, and it's not possible for a transaction to span across multiple PHP requests.
So the simple answer is that your PHP script should start a transaction near the beginning of the script, before invoking any domain model classes, then commit or rollback at the end of the script, or once the domain model classes have finished their work.
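As a sketch of that advice, assuming the domain classes write through the same PDO connection (the DSN and credentials below are placeholders):

// Script-level transaction demarcation with PDO (illustrative).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // domain model classes just do their work...
    $customer = new Customer();
    $customer->setId($cid);
    $customer->setName($cname);
    $customer->save();

    $department = new Department();
    $department->setId($did);
    $department->setName($dname);
    $department->save();

    // ...and the calling code decides the outcome
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}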
You are migrating to OOP, and that's great, but soon you will find yourself migrating to an architecture with a well-differentiated data access layer, including a more complex way of separating data from control. Right now I guess you are using some kind of data access object; that is a great first-approach pattern, but you can certainly go further. Some of the answers here already lead you in that direction. You shouldn't think of your objects as the basis of your architecture, with some helper objects to query the database. Instead, you should think about a fully featured layer, with all the generic classes required to take care of the communication with the database, that you will reuse in all your projects, and then have business-level objects, like Customer or Department, that know as little as possible about database implementations.
For this you will certainly have an outer class handling transactions, but probably also another taking care of security, another building queries and providing a single API regardless of the database engine, and even a class that reads objects in order to put them in the database, so the object itself doesn't even know that it is meant to end up in a database.
Achieving this would be long, hard work, but afterwards you would have a custom and widely reusable layer that makes your projects more scalable, more stable and more trustworthy. You would learn a lot, and you would end up with some kind of DBAL or ORM of your own.
But that still wouldn't be the best solution, since there are people who have already spent years doing exactly that, and it would be hard to achieve what they already have.
So what I recommend for any medium-sized project is that you take database abstraction as seriously as you can and pick any open-source ORM that happens to be easy to use; in the end you will save time and get a much better system.
For example, Doctrine has a very nice way of handling transactions and concurrency, in two ways: implicit, taking care of the normal operations automatically, or explicit, when you need to take over and control transaction demarcation yourself. Check it out here. There are also some other, more complex possibilities like transaction nesting, and others.
The most famous and reliable ORMs are
Doctrine, and
Propel
I use Doctrine mostly, since it has a module to integrate with Zend Framework 2 that I like, but Propel has some aspects that I like a lot.
You would probably have to refactor some things, and you may not feel like doing it at this point, but I can say from my experience that this is one of those things you don't even want to think about, and years after you start using it you realize how much time you wasted :-) I recommend you consider this, if not now, then in your very next project.
UPDATE
Some thoughts after Tomas' comment.
It's true that for not-so-big projects (especially if you are not very familiar with ORMs, or your model is very complex) it can be a big effort to integrate a vendor ORM.
But what I can say, after years of developing projects of all sizes, is that for any medium-sized one I would use at least a custom, less serious and more flexible home-made ORM, with a set of generic classes and as few business-oriented repositories as possible, where an entity knows its table (and probably other related tables), and where you can encapsulate some SQL or custom query calls, all centered on that entity (for example the main table of the entity, the table of pictures associated with that entity, and so on), in order to provide the controller with a single interface to the data. That way the database engine is independent of the model's API and, just as important, the controller doesn't have to be aware of any DBMS aspects, like the use of transactions, something that exists purely to satisfy model-related, low-level DBMS needs. I mean, your controller may know that it is storing stuff in a database, but it certainly doesn't have to know what a transaction is.
For sure this is a philosophical discussion, and there can be many equally valid points of view.
For any custom ORM, I would recommend starting with a DAO/DTO generator that can create the main classes from your database, so you only need to adapt them to your needs at the points where you find exceptions to the normal create-read-update-delete behavior. This also reminds me that you can look for PHP CRUD and find some useful and fun tools.

PHP DataMapper with multiple persistence layers

I am writing a system in PHP that has to write to three persistence layers:
One web service
Two databases (one mysql one mssql)
The reason for this is legacy systems and cannot be changed.
I want to use the DataMapper pattern, and I am trying to establish the best way to achieve what I want. I have an interface like the following:
<?php
$service = $factory->getService()->create($entity);
?>
Below is some contrived and cut down code for brevity:
<?php
class Post extends AbstractService
{
    protected $_mapper;

    public function create(Entity $post)
    {
        return $this->_mapper->create($post);
    }
}

class AbstractMapper
{
    protected $_persistence;

    public function create(Entity $entity)
    {
        $data = $this->_prepareForPersistence($entity);
        return $this->_persistence->create($data);
    }
}
?>
My question is that, because there are three persistence layers, there will likely also need to be three mappers, one for each. I'd like a clean, design-pattern-inspired interface to make this work.
I see it as having three options:
Inject three mappers into the Service and call create on each
$_mapper is an array/collection and it iterates through them calling create on each
$_mapper is actually a container object that acts as a
further proxy and calls create on each
Something strikes me as wrong with each of these solutions and would appreciate any feedback/recognised design patterns that might fit this.
I have had to solve a similar problem but very many years ago in the days of PEAR DB. In that particular case, there was a need to replicate the data across multiple databases.
We did not have the problem of the different databases having different mappings though so it was a fair bit simpler.
What we did was to facade the DB class and override the getResult function (or whatever it was called). This function then analysed the SQL; if it was a read, it would send it to just one backend, and if it was a write, it would send it to all of them.
This actually worked really well for a very heavily utilised site.
From that background, I would suggest entirely facading all of the persistence operations. Once you have done that, the implementation details are less relevant and can be changed at any time.
From this perspective, any of your implementation ideas seem like a reasonable approach. There are various things you will want to think about though.
What if one of the backends throw an error?
What is the performance impact of writing to three database servers?
Can the writes be done asynchronously (if so, ask the first question again)
There is potentially another way to solve this problem as well: use stored procedures. If you have a primary database server, you could write a trigger which, on commit (or thereabouts), connects to the other databases and synchronises the data.
If the data update does not need to be immediate, you could get the primary database to log changes and have another script that regularly "fed" this data into the other system. Again, the issue of errors will need to be considered.
Hope this helps.
First, a bit of terminology: what you call three layers, are in fact, three modules, not layers. That is, you have three modules within the persistence layer.
Now, the basic premise of this problem is this: you MUST have three different pieces of persistence logic, corresponding to three different storage sources. This is something that you can't avoid. Therefore, the question is just about how to invoke the write operation on these modules (assuming that for reads you don't need to call all three, or if you do, that is a separate question anyway).
Of the three options you have listed, in my opinion the first one is better, because it is the simplest of the three. The other two will still need to call the three modules separately, with the additional work of introducing a container or some sort of data structure. You still can't avoid calling the three modules somewhere.
If working with the first option, then you obviously need to work with interfaces, to provide a uniform abstraction for the user/client (in this case the service).
My point is that:
1. There is an inherent complexity in your problem which you can't simplify further.
2. The first option is better, because the other two make things more complex, not simpler.
I think option #2 is the best; I would go with that. If you had, say, 10+ mappers then option #3 would make sense, shifting the create logic into the mapper container itself, but since you have a reasonable number of mappers it makes more sense to just inject them and iterate over them. Extending functionality by adding another mapper would then be a matter of adding one line to your dependency injection configuration.
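A possible shape for option #2, assuming the mappers are passed in as an array; the concrete mapper class names and the factory wiring are made up for illustration:

// The service receives an array of mappers and iterates over them.
class Post extends AbstractService
{
    /** @var AbstractMapper[] */
    protected $_mappers;

    public function __construct(array $mappers)
    {
        $this->_mappers = $mappers;
    }

    public function create(Entity $post)
    {
        $results = array();
        foreach ($this->_mappers as $mapper) {
            $results[] = $mapper->create($post);
        }
        return $results;
    }
}

// wiring, e.g. in the factory / DI configuration (names are hypothetical):
$service = new Post(array(
    new WebServiceMapper($webServiceClient),
    new MysqlMapper($mysqlAdapter),
    new MssqlMapper($mssqlAdapter),
));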

php oop MVC design - proper architecture for an application to edit data

Now that I have read an awful lot of posts, articles, questions and answers on OOP, MVC and design patterns, I still have questions on what is the best way to build what I want to build.
My little framework is built in an MVC fashion. It uses Smarty as the viewer, and I have a class set up as the controller that is called from the URL.
Now, where I think I get lost is in the model part. I might be mixing models and classes/objects too much (or too little).
Anyway, an example. The aim is to get a list of users that reside in my database:
The application is called by e.g. "users/list". The controller then runs the function list, which opens an instance of a class "user" and asks that class to retrieve a list from the table. Once it is returned, the controller pushes it to the viewer by assigning the result set (an array) to the template and setting the template.
The user would then click on a line in the table, which would tell the controller to start "user/edit", for example - which would in turn create a form and fill it with the user data for me to edit.
so far so good.
Right now I have all of that combined in one user class - so that class would have functions like create, getMeAListOfUsers, update etc. and properties like hairType and noseSize.
But proper OOP design would want me to separate "user" (with properties like login name, big nose, curly hair) from "get me a list of users", which feels more like a "user manager" class.
If I were to implement a user manager class, what should it look like? Should it be an object (I can't really compare it to a real-world thing), or should it be a class with just public functions so that it more or less looks like a set of functions?
Should it return an array of found records (like array([0]=>array("firstname"=>"dirk", "lastname"=>"diggler"))), or should it return an array of objects?
All of that is still a bit confusing to me, and I wonder if anyone can give me a little insight on how best to approach this.
The level of abstraction you need for your processing and data (business logic) depends on your needs. For example, for an application with Transaction Scripts (which is probably the case with your design), the class you describe that fetches and updates the data from the database sounds valid to me.
You can generalize things a bit more by using a Table Data Gateway, Row Data Gateway or Active Record even.
If you get the feeling that you then duplicate a lot of code in your transaction scripts, you might want to create your own Domain Model with a Data Mapper. However, I would not just blindly do this from the beginning because this needs much more code to get started. Also it's not wise to write a Data Mapper on your own but to use an existing component for that. Doctrine is such a component in PHP.
Another existing ORM (Object Relational Mapper) component is Propel which provides Active Records.
If you're just looking for a quick way to query your database, you might find NotORM inspiring.
You can find the patterns listed in italics at http://martinfowler.com/eaaCatalog/index.html, which lists all patterns in the book Patterns of Enterprise Application Architecture.
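To make the lighter options above concrete, here is a minimal Table Data Gateway sketch; the PDO usage, table and column names are assumptions for illustration:

// Minimal Table Data Gateway for a users table (assumes PDO and
// a users table with id/firstname/lastname columns).
class UsersGateway
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function findAll()
    {
        return $this->pdo->query('SELECT * FROM users')->fetchAll(PDO::FETCH_ASSOC);
    }

    public function find($id)
    {
        $stmt = $this->pdo->prepare('SELECT * FROM users WHERE id = ?');
        $stmt->execute(array($id));
        return $stmt->fetch(PDO::FETCH_ASSOC);
    }

    public function update($id, array $fields)
    {
        $stmt = $this->pdo->prepare('UPDATE users SET firstname = ?, lastname = ? WHERE id = ?');
        $stmt->execute(array($fields['firstname'], $fields['lastname'], $id));
    }
}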
I'm not an expert at this, but I have recently done pretty much exactly the same thing. The way I set it up is that I have one class for several rows (Users) and one class for one row (User). The "several rows" class is basically just a collection of (static) functions, and they are used to retrieve row(s) from a table, like so:
$fiveLatestUsers = Users::getByDate(5);
And that returns an array of User objects. Each User object then has methods for retrieving the fields in the table (like $user->getUsername() or $user->getEmail() etc.). I used to just return an associative array, but then you run into occasions where you want to modify the data before it is returned, and that's where having a class with methods for each field makes a lot of sense.
Edit: The User object also has methods for updating and deleting the current row:
$user->setUsername('Gandalf');
$user->save();
$user->delete();
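Putting that answer together, a minimal sketch of the two classes could look like this; the SQL, the hypothetical Db::pdo() connection helper and the column names are assumptions:

// "Several rows" class: static finders that return User objects.
class Users
{
    public static function getByDate($limit)
    {
        $stmt = Db::pdo()->prepare('SELECT * FROM users ORDER BY created DESC LIMIT ?');
        $stmt->bindValue(1, (int) $limit, PDO::PARAM_INT);
        $stmt->execute();

        $users = array();
        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $users[] = new User($row);
        }
        return $users;
    }
}

// "One row" class: field accessors plus save() for the current row.
class User
{
    private $row;

    public function __construct(array $row)
    {
        $this->row = $row;
    }

    public function getUsername()
    {
        return $this->row['username'];
    }

    public function setUsername($name)
    {
        $this->row['username'] = $name;
    }

    public function save()
    {
        $stmt = Db::pdo()->prepare('UPDATE users SET username = ? WHERE id = ?');
        $stmt->execute(array($this->row['username'], $this->row['id']));
    }
}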
Another alternative to Doctrine and Propel is PHP ActiveRecord.
Doctrine and Propel are really mighty beasts. If you are doing a smaller project, I think you are better off with something lighter.
Also, when talking about third-party solutions there are a lot of MVC frameworks for PHP like: Kohana, Codeigniter, CakePHP, Zend (of course)...
All of them have their own ORM implementations, usually lighter alternatives.
For the Kohana framework there is also Auto Modeler, which is supposedly very lightweight.
Personally I'm using Doctrine, but it's a huge project. If I were doing something smaller, I'd sooner go with a lighter alternative.

beginning OOP question about classes using classes

I'm trying to replace a site written procedurally with a nice set of classes as a learning exercise.
So far, I've created a record class that basically holds one line in the database's main table.
I also created a loader class which can:
loadAllFromUser($username)
loadAllFromDate($date)
loadAllFromGame($game)
These methods grab all the valid rows from the database, pack each row into a record, and stick all the records into an array.
But what if I want to just work with one record? I took a stab at that and ended up with code that was nearly identical to my procedural original.
I also wasn't sure where that one record would go. Does my loader class have a protected record property?
I'm somewhat confused.
EDIT - also, where would I put something like the HTML template for outputting a record to the site? Does that go in the record class, in the loader, or in a third class?
I recommend looking into using something like Doctrine for abstracting your db-to-object stuff, other than for learning purposes.
That said, there are many ways to model this type of thing, but in general it seems like the libraries (home-grown or not) that handle it tend to move towards having, at a high level:
A class that represents an object that is mapped to the db
A class that represents the way in which that object is mapped to the db
A class that represents methods for retrieving objects from the db
Think about the different tasks that need to be done, and try to encapsulate them cleanly. The Law of Demeter is useful to keep in mind, but don't get too bogged down with trying to grok everything in object-oriented design theory right this moment -- it can be much more useful to think, design, code, and see for yourself where the weaknesses in your designs lie.
For your "work with one record, but without duplicating a bunch of code" problem, you could have your loadAllFromUser methods call a private method that takes (for instance) a parameter for the number of records to be retrieved, where a null parameter means retrieve all the records.
You can take that a step further and implement __call on your loader class. Assuming it can know or find out about the fields you want to load by, you can construct the parameters to a function that does the loading programmatically -- look at the common parts of your functions, see what differs, and see if you can find a way to make those different parts into function parameters, or something else that allows you to avoid repetition.
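A sketch of that __call idea, assuming a single records table and that the column name can be derived from the method name (both assumptions for illustration):

// Dynamic loader: loadAllFromUser('bob') becomes a query on the "user" column.
class Loader
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function __call($name, $args)
    {
        // matches loadAllFromUser, loadOneFromDate, loadAllFromGame, ...
        if (preg_match('/^load(All|One)From(\w+)$/', $name, $m)) {
            // the column name comes from the method name, not from user input
            $column = strtolower($m[2]);
            $sql = sprintf('SELECT * FROM records WHERE %s = ?', $column);
            if ($m[1] === 'One') {
                $sql .= ' LIMIT 1';
            }
            $stmt = $this->pdo->prepare($sql);
            $stmt->execute(array($args[0]));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }
        throw new BadMethodCallException($name);
    }
}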
MVC is worth reading up on with regard to your second question. At the least, I would probably want to have that in a separate class that expects to be passed a record to render. The record probably shouldn't care about how it's represented in HTML, and the thing that makes markup for a record shouldn't care about how the record is obtained. In general, you probably want to try to make things as standalone as possible.
It's not an easy thing to get used to, and most of "getting good" at this sort of design is a matter of practice. For actual functionality, tests can help a lot. Say you're writing your loader class, and you know that calling loadAllFromUser($me) should return an array of three specific records with your dataset (even if it's a dataset used for testing only); if you have something you can run that calls that on your loader and checks for the right results, it helps you know that your code is at least right from the standpoint of behavior, if not from design -- and when you change the design you can ensure that it still behaves correctly. PHPUnit seems to be the most popular tool for this in PHP-land.
Hopefully this points you in a useful group of directions instead of just being confusing :) Good luck, and godspeed.
You can encapsulate the unique parts of loadAllFrom... and loadOneFrom... within utility methods:
private function loadAll($tableName) {
    // fetch all records from tableName
}

private function loadOne($tableName) {
    // fetch one record from tableName
}
and then you won't see so much duplication:
public function loadAllFromUser() {
    return $this->loadAll("user");
}

public function loadOneFromUser() {
    return $this->loadOne("user");
}
If you like, you can break it down further like so:
private function load($tableName, $all = true) {
    // return all or one record from tableName
    // default is all
}
you can then replace all of those methods with calls such as:
$allUsers = $loader->load("users");
$date = $loader->load("date", false);
You could check the arguments coming into your method and decide from there.
$args = func_get_args();
if (count($args) > 1) {
    // do something
} else {
    // do something else
}
Something simple like this could work. Or you could make two separate methods inside your class for handling each type of request, much like karim's example. Whichever works best for what you would like to do.
Hopefully I understand what you are asking though.
To answer your edit:
Typically you will want to create a view class. This will be responsible for handling the HTML output of the data. It is good practice to keep these separate. The best way to do this is by injecting your 'data class' object directly into the view class, like so:
class HTMLview
{
    private $data;

    public function __construct(Loader $_data)
    {
        $this->data = $_data;
    }
}
And then continue with the output now that this class holds your processed database information.
It's entirely possible and plausible that your record class can have a utility method attached to itself that knows how to load a single record, given that you provide it a piece of identifying information (such as its ID, for example).
The pattern I have been using is that an object can know how to load itself, and also provides static methods to perform "loadAll" actions, returning an array of those objects to the calling code.
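A minimal sketch of that pattern, with the table name, columns and PDO usage assumed for illustration:

// A Record that can load itself by id and also offers a static loadAll helper.
class Record
{
    public $data = array();

    public function load(PDO $pdo, $id)
    {
        $stmt = $pdo->prepare('SELECT * FROM records WHERE id = ?');
        $stmt->execute(array($id));
        $this->data = $stmt->fetch(PDO::FETCH_ASSOC);
        return $this;
    }

    public static function loadAllFromUser(PDO $pdo, $username)
    {
        $stmt = $pdo->prepare('SELECT * FROM records WHERE username = ?');
        $stmt->execute(array($username));

        $records = array();
        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $record = new self();
            $record->data = $row;
            $records[] = $record;
        }
        return $records;
    }
}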
So, I'm going through a lot of this myself with a small open-source web app I develop as well. I wrote most of it in a crunch, procedurally, because that's how I knew to make a working (heh, yeah) application in the shortest amount of time - and now I'm going back through and implementing heavy OOP and MVC architecture.

What does a Data Mapper typically look like?

I have a table called Cat, and a PHP class called Cat. Now I want to make a CatDataMapper class, so that Cat extends CatDataMapper.
I want that Data Mapper class to provide basic functionality for doing ORM, and for creating, editing and deleting Cat.
For that purpose, maybe someone who knows this pattern very well could give me some helpful advice? I feel it would be a little bit too simple to just provide some functions like update(), delete(), save().
I realize a Data Mapper has this problem: First you create the instance of Cat, then initialize all the variables like name, furColor, eyeColor, purrSound, meowSound, attendants, etc.. and after everything is set up, you call the save() function which is inherited from CatDataMapper. This was simple ;)
But now, the real problem: You query the database for cats and get back a plain boring result set with lots of cats data.
PDO features some ORM capability to create Cat instances. Let's say I use that, or let's even say I have a mapDataset() function that takes an associative array. However, as soon as I get my Cat object from a data set, I have redundant data. At the same time, twenty users could pick up the same cat data from the database and edit the cat object, i.e. rename the cat, and save() it, while another user is still thinking about setting another furColor. When all of them save their edits, everything is messed up.
Err... ok, to keep this question really short: What's good practice here?
From DataMapper in PoEAA:
"The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other. With Data Mapper the in-memory objects needn't know even that there's a database present; they need no SQL interface code, and certainly no knowledge of the database schema. (The database schema is always ignorant of the objects that use it.) Since it's a form of Mapper (473), Data Mapper itself is even unknown to the domain layer."
Thus, a Cat should not extend CatDataMapper because that would create an is-a relationship and tie the Cat to the Persistence layer. If you want to be able to handle persistence from your Cats in this way, look into ActiveRecord or any of the other Data Source Architectural Patterns.
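For comparison, a minimal sketch of the has-a alternative: Cat stays a plain domain object and a separate mapper moves data between it and the table (column names and PDO usage are assumptions):

// Cat knows nothing about the database.
class Cat
{
    public $id;
    public $name;
    public $furColor;
}

// The mapper transfers data between Cat objects and the cat table.
class CatDataMapper
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function find($id)
    {
        $stmt = $this->pdo->prepare('SELECT id, name, fur_color FROM cat WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);

        $cat = new Cat();
        $cat->id = $row['id'];
        $cat->name = $row['name'];
        $cat->furColor = $row['fur_color'];
        return $cat;
    }

    public function save(Cat $cat)
    {
        $stmt = $this->pdo->prepare('UPDATE cat SET name = ?, fur_color = ? WHERE id = ?');
        $stmt->execute(array($cat->name, $cat->furColor, $cat->id));
    }
}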
You usually use a DataMapper when using a Domain Model. A simple DataMapper would just map a database table to an equivalent in-memory class on a field-to-field basis. However, when the need for a DataMapper arises, you usually won't have such simple relationships. Tables will not map 1:1 to your objects. Instead, multiple tables could form one Object Aggregate and vice versa. Consequently, implementing just CRUD methods can easily become quite a challenge.
Apart from that, it is one of the more complicated patterns (covers 15 pages in PoEA), often used in combination with the Repository pattern among others. Look into the related questions column on the right side of this page for similar questions.
As for your question about multiple users editing the same Cat, that's a common problem called Concurrency. One solution to that would be locking the row, while someone edits it. But like everything, this can lead to other issues.
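A common alternative to locking the row is optimistic locking: keep a version column and refuse the save if somebody else saved first. A sketch, assuming the cat table has a version column and the Cat object carries a $version property (both assumptions):

// Optimistic locking: the UPDATE only succeeds if the row still has the
// version we originally read; otherwise someone else saved in the meantime.
function saveCat(PDO $pdo, Cat $cat)
{
    $stmt = $pdo->prepare(
        'UPDATE cat SET name = ?, fur_color = ?, version = version + 1
         WHERE id = ? AND version = ?'
    );
    $stmt->execute(array($cat->name, $cat->furColor, $cat->id, $cat->version));

    if ($stmt->rowCount() === 0) {
        throw new RuntimeException('Cat was modified by someone else; reload and retry.');
    }
    $cat->version++;
}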
If you rely on ORMs like Doctrine or Propel, the basic principle is to create a static class that gets the actual data from the database (for instance, Propel would create CatPeer), and the results retrieved by the Peer class are then "hydrated" into Cat objects.
The hydration process is the process of converting a "plain boring" MySQL result set into nice objects having getters and setters.
So for a retrieve you'd use something like CatPeer::doSelect(). Then for a new object you'd first instantiate it (or retrieve an instance from the DB):
$cat = new Cat();
The insertion would be as simple as doing $cat->save(); that would be equivalent to an insert (or an update if the object already exists in the DB... the ORM should know how to tell the difference between new and existing objects by using, for instance, the presence or absence of a primary key).
Implementing a Data Mapper is very hard in PHP < 5.3, since you cannot read/write protected/private fields. You have a few choices when loading and saving the objects:
Use some kind of workaround, like serializing the object, modifying it's string representation, and bringing it back with unserialize
Make all the fields public
Keep them private/protected, and write mutators/accessors for each of them
The first method has the possibility of breaking with a new release and is a very crude hack; the second one is considered (very) bad practice.
The third option is also considered bad practice, since you should not provide getters/setters for all of your fields, only for the ones that need them. Your model gets "damaged" from a pure DDD (domain-driven design) perspective, since it contains methods that are only needed because of the persistence mechanism.
It also means that now you have to describe another mapping for the fields -> setter methods, next to the fields -> table columns.
PHP 5.3 introduces the ability to access/change all types of fields, by using reflection:
http://hu2.php.net/manual/en/reflectionproperty.setaccessible.php
With this, you can achieve a true data mapper, because the need to provide mutators for all of the fields has ceased.
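A minimal sketch of such reflection-based hydration (the class and field names are illustrative):

// The mapper writes directly into private fields, so the domain class
// needs no setters (requires PHP >= 5.3 for setAccessible()).
class Cat
{
    private $id;
    private $name;
}

class ReflectionHydrator
{
    public function hydrate($className, array $row)
    {
        $object = new $className();
        $reflection = new ReflectionClass($className);

        foreach ($row as $field => $value) {
            if ($reflection->hasProperty($field)) {
                $property = $reflection->getProperty($field);
                $property->setAccessible(true);  // the PHP 5.3 feature mentioned above
                $property->setValue($object, $value);
            }
        }
        return $object;
    }
}

// $cat = $hydrator->hydrate('Cat', array('id' => 1, 'name' => 'Whiskers'));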
"PDO features some ORM capability to create Cat instances. Let's say I use that, or let's even say I have a mapDataset() function that takes an associative array. However, as soon as I get my Cat object from a data set, I have redundant data. At the same time, twenty users could pick up the same cat data from the database and edit the cat object, i.e. rename the cat, and save() it, while another user is still thinking about setting another furColor. When all of them save their edits, everything is messed up."
In order to keep track of the state of the data, typically an IdentityMap and/or a UnitOfWork would be used to keep track of all the different operations on mapped entities... and at the end of the request cycle all the operations would then be performed.
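For the IdentityMap half of that, a minimal sketch (names are illustrative): every Cat loaded during a request is kept in one map, so two lookups for the same id return the same instance instead of two diverging copies.

// Minimal IdentityMap sketch.
class IdentityMap
{
    private $cats = array();

    public function get($id)
    {
        return isset($this->cats[$id]) ? $this->cats[$id] : null;
    }

    public function add($id, Cat $cat)
    {
        $this->cats[$id] = $cat;
    }
}

// inside the mapper's find():
// if ($cat = $identityMap->get($id)) { return $cat; }
// ...otherwise load from the DB, then $identityMap->add($id, $cat);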
To keep the answer short:
You have an instance of Cat. (Maybe it extends CatDbMapper, or Cat3rdpartycatstoreMapper.)
You call:
$cats = $cat_model->getBlueEyedCats();
//then you get an array of Cat objects, in the $cats array
I don't know what you use; you might take a look at some PHP framework for a better understanding.
