I'm developing a stock and warehouse management system using relational databases (MySQL) and PHP. Because the stock products will have multiple characteristics (widths, heights, weights, measures, colors, etc.), I need a database model for storing these attributes that also makes it possible to add/edit new attributes, alter product types, and so on.
So, in the current concept I can see only 3 viable models:
store all attributes in a single table, as separate columns, and serve them to the end user to fill in based on product type (probably category)
the EAV (Entity-Attribute-Value) model, which would involve something like this:
a category table containing classes of attributes
a class of attributes table that will contain separate classes with multiple attributes (this way a class of attributes can be added to a category without manually adding the same attributes to similar categories one after the other)
an attributes table responsible for the attribute itself
an attribute values table where we store the values
Store all common attributes in a single table and create separate tables for each different category type: this model would require changing the database every time a new category type is encountered
The second model is inspired by the approach described here.
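Roughly, this is the structure I have in mind for the second model (table and column names are just placeholders):

CREATE TABLE category                 (id INT PRIMARY KEY, name VARCHAR(255));
CREATE TABLE attribute_class          (id INT PRIMARY KEY, name VARCHAR(255));
-- link table so a whole class of attributes can be attached to a category at once
CREATE TABLE category_attribute_class (category_id INT, attribute_class_id INT);
CREATE TABLE attribute                (id INT PRIMARY KEY, attribute_class_id INT, name VARCHAR(255));
CREATE TABLE attribute_value          (product_id INT, attribute_id INT, value VARCHAR(255));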
After reading a lot about the EAV model I now have doubts about it, and I am a little concerned about how I will connect the different product attributes to orders/invoices and so on. Even form validation seems like it will be a real pain with the EAV model. Still, I wouldn't want a single table with 100+ columns, having to add new columns whenever a new attribute is needed.
So, the question would be: is there a cheaper solution? Or could the EAV model be improved?
I know it's a long and old debate, but everybody just points to NoSQL and I can only rely on an RDBMS.
EDIT:
The downsides of these approaches (or of most of the approaches I found) are:
for a given attribute there should probably be a unit of measure (e.g. a weight attribute should have a drop-down with measurement units)
a given attribute may be mandatory or optional
all attributes should be validated on form submit
So far, the only feasible solution seems to be creating a new table for every new category and handling all the custom attributes and rules in that table. But, yet again, that ends up being a real pain whenever a new category has to be set up.
EDIT 2:
From my point of view, the option of using a JSON column in MySQL does not solve any of the downsides mentioned above. Or maybe I am wrong and I don't clearly see the big picture.
I gather that these are your primary requirements:
Flexible attributes
Your exact need here is unclear: it sounds like you either expect the attributes to change, or at least expect that not all attributes will be applicable to all products (i.e. a sparse matrix)
Products are also categorized, and the category will (at least partially) determine what attributes are applicable to a product
The attributes themselves may have additional properties aside from their value, that must be provided by the user (i.e. a unit that goes with a weight)
Input validation is a must, and checks things like:
All required attributes are present
Attributes which are not applicable are not present
Attributes have valid values
User-provided attribute properties have valid values
You probably also want to make sure you can search/filter efficiently by attributes
These different requirements all result in different technical needs, and different technical solutions. Some are matters of database, and some will have to be solved in code regardless of database choice. Obviously you are aware of some of these issues, but I think it is worth really breaking it down:
Flexible Attributes
Having a list of flexible attributes (as you know) does not work well with RDBMS systems where your table schema has to be pre-defined. This includes pretty much all of the SQLs, and definitely MySQL. The issue is that changing the table schema is expensive and for large tables can take minutes or hours, making it practically impossible to add attributes if you have to add a column to a table to do it.
Even if your list of attributes rarely changes, a large table of attributes is very inefficient if most products don't have a value for most attributes (i.e. a sparse matrix).
In the long run, you just won't get anywhere if your attributes are stored as a column in tables. Even if you break it down per-category, you are still going to have large empty tables that you can't add columns to dynamically.
If you stick with an RDBMS, your only real option is an EAV system. Having considered, researched, and implemented EAV systems, I wouldn't worry too much about all the hype you hear about them on the internet. I know that there are lots of articles out there talking about the EAV "anti-pattern", and I'm the kind of person who takes proper use of software design patterns seriously, but EAV does have a perfectly valid time and place, and this is it. In the long run you will not be able to do this on an RDBMS without EAV.

You could certainly look at a NoSQL system that is designed for this specific kind of problem, but when the rest of your database is in a standard RDBMS, installing or switching to a NoSQL system just to store your attribute values is almost certainly overkill. You certainly aren't going to want to lose the ACID compliance that an RDBMS comes with, and most NoSQL systems don't guarantee ACID compliance. There is a wave of NewSQL systems out there that are designed to get the best of both worlds, but if this is just one part of a larger application (which I'm sure is the case), it probably isn't worth investigating completely new technologies just to make this one feature happen.

You could also consider using something like JSON storage inside MySQL to store your attribute values. That is a viable option now that MySQL has better JSON support, but it only makes a small change to the big picture: you would still need all your other EAV tables to keep track of allowed attributes, categories, etc. It is only the attribute values that you would be able to place inside of the JSON data, so the potential benefits of JSON storage are relatively small (and it has other issues that I will mention further down).
So in summary, I would say that as long as the rest of your application runs on an RDBMS, it is perfectly reasonable to use EAV to manage flexible attributes. If you were trying to build your entire system as an EAV inside of an RDBMS, then you would definitely be wasting your time and I'd tell you to go find a good NoSQL database that fits the problem you are trying to solve. The disadvantages of EAV do still apply though: you can't easily perform consistency checks within your RDBMS, and will have to do that yourself in code.
Categorized products with category-specific attributes
You've pretty much got it here. This is relatively straightforward inside an EAV system. You will have your attributes table, you will have a category table, and then you will need a standard one-to-many or many-to-many relationship between the attributes and categories tables, which will determine which attributes are available to which category. You obviously also have a relationship between products and categories, so you know which products therefore need which attributes.
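As a rough sketch (table and column names are assumptions, and the is_required flag is just one way to mark mandatory attributes per category):

-- which attributes are available to which category (many-to-many)
CREATE TABLE category_attribute (
    category_id  INT UNSIGNED NOT NULL,
    attribute_id INT UNSIGNED NOT NULL,
    is_required  TINYINT(1) NOT NULL DEFAULT 0,
    PRIMARY KEY (category_id, attribute_id)
);

-- a product belongs to a category, so its applicable attributes follow from the join
SELECT a.*
FROM products p
JOIN category_attribute ca ON ca.category_id = p.category_id
JOIN attributes a ON a.id = ca.attribute_id
WHERE p.id = 42;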
Your option #3 is designed to fulfill this requirement, but having a table with each attribute as a column will scale very poorly as your system grows, and will definitely break if you ever need to dynamically add attributes. You don't want to be running ALTER TABLE statements on the fly, especially if you have more than a few thousand records.
Managing attribute properties
It is one thing to store dynamic attributes and values. It is another problem entirely to store dynamic attributes, values, and associated meta data (e.g. storing a weight as well as the unit the weight is in). This however is no longer a database problem, but rather a code problem. In terms of actually storing the information, your best bet is probably to store the meta data inside your attribute values table and rely on some code abstractions to handle the input validation as well as form building. That can get quite complicated quite fast, especially if done wrong, and talking through such a system would take another entire post.

However, I think you are on the right track: for a fancier attribute that requires both a value and meta data, you somehow assign a class that is responsible for input processing and form validation. For instance, for a simple text field you have a "text" class that reads the user's value out of the form and stores it in the proper "attribute_values" table, with no meta data stored. Then for your "weight" attribute you would have a "weight" class that stores the number given by the user (e.g. 0.5) but also stores the unit the user specified with that number (e.g. 'lbs') and persists both to the "attribute_values" table (in pseudo-SQL): INSERT INTO attribute_values value='0.5', meta_data='{"unit":"lbs"}', product_id=X, attribute_id=X. Ironically, JSON probably would be a good way to store this meta data, since the exact meta data kept will also vary by attribute type, and I doubt you would want another level of tables to handle that variation in your EAV tables.
Again, this is more of a code problem than a storage problem. If you decided to use JSON storage, the overall picture for meeting this requirement wouldn't change: your "attribute type classes" would simply store the meta data in a different way. That would probably look something like: UPDATE products SET attributes='{"weight":0.5,"unit":"lbs"}' WHERE id=X
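For concreteness, the two variants above might look like this in valid MySQL (table and column names are assumptions):

-- EAV variant: the value plus attribute-specific meta data kept as JSON
INSERT INTO attribute_values (product_id, attribute_id, value, meta_data)
VALUES (42, 7, '0.5', '{"unit": "lbs"}');

-- JSON-column variant: everything lives in one JSON document per product
UPDATE products
SET attributes = JSON_SET(COALESCE(attributes, '{}'), '$.weight', 0.5, '$.unit', 'lbs')
WHERE id = 42;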
Input Validation
This will have to be handled exclusively by code regardless of how you store your data, so this requirement doesn't matter much in terms of deciding your database structure. A class-based system as described above will also be able to handle input validation, if properly executed.
Sort/Search/Filter
This doesn't matter if you are exclusively using your attributes for data storage/retrieval, but will you be searching on attributes at all? With a proper EAV system and good indexes, you can actually search/sort efficiently in an RDBMS system (although it can start to get painful if you search by more than a handful of indexes at a time). I haven't looked in detail, but I'm pretty sure that using JSON for storage won't scale well when it comes to searching. While MySQL can work with JSON now and search the columns directly, I seriously doubt that such searching/sorting makes use of MySQL indexes, which means that it won't work with large databases. I could be wrong on that one though. It would be worth digging into before committing to a MySQL/JSON storage setup, if you were going to do something like that.
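If you do dig into it, one thing worth checking is generated columns (MySQL 5.7+), which can expose a JSON path as a regular, indexable column. This is only a sketch with assumed table and column names, not something I have benchmarked here:

-- expose a JSON attribute as a generated column and index it
ALTER TABLE products
    ADD COLUMN weight_kg DECIMAL(10,3)
        GENERATED ALWAYS AS (CAST(attributes->>'$.weight' AS DECIMAL(10,3))) VIRTUAL,
    ADD INDEX idx_products_weight (weight_kg);

-- this query can then use idx_products_weight
SELECT id FROM products WHERE weight_kg BETWEEN 1 AND 5;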
Depending on your needs, this is also a good place to complement an RDBMS with a NoSQL system. Having managed large-ish (~1.5 million product) e-commerce systems before, I have found that MySQL tends to fall flat in the searching/sorting category, especially if you are doing any kind of text searching. In an e-commerce system a query like "show me the results that best match the term 'blue truck' and have the attribute 'For ages 3-5'" is common, but doing something like that in MySQL is nearly impossible, primarily because of the need for relevancy-based sorting and scoring.

We solved this problem by using Apache Solr (Elasticsearch is a similar solution), and it managed our searching/sorting/search-term scoring very well. In this case it was a two-database solution: MySQL kept all the actual data and stored attributes in EAV tables, and any time something got updated we pushed a record of everything to Apache Solr for additional storage. When a query came in from a user we would query Solr, which was an expert at text searching and could also handle the attribute filtering with no trouble, and then we would pull the full product record out of our MySQL database.

The system worked beautifully. We had 1.5 million products and thousands of custom attributes, and had no trouble running the whole thing off of a single virtual server. Obviously there was a lot of code going on behind the scenes, but the point is that it definitely worked and wasn't difficult to maintain. We never had any issues with performance from either MySQL or Solr.
Well, this is just one approach. You could simplify this if you don't need or want all of this.
You could, for example, use a JSON column in MySQL to store all of the extra attributes. Another idea: add a JSON column to the product type to store the custom attributes and their types, and use it to draw the form on the screen.
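A minimal sketch of that idea, assuming a products and a product_types table and MySQL 5.7+ for the JSON type:

-- fixed columns for the common fields, JSON for everything category-specific
ALTER TABLE products ADD COLUMN extra_attributes JSON;

-- the product type stores the attribute definitions used to draw the form
ALTER TABLE product_types ADD COLUMN attribute_definitions JSON;

UPDATE product_types
SET attribute_definitions = '[{"name": "weight", "type": "decimal", "unit": "kg", "required": true}]'
WHERE id = 3;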
I would recommend going through an existing EAV database first in order to understand how the tables and their values are created.
You can follow the Magento DB structure, which uses the EAV model.
EAV stands for the Entity-Attribute-Value model. Let's have a closer look at each part.
Entity: data items are represented as entities; an entity can be a product, a customer, or a category. In the database, each entity has a record.
Attribute: these belong to an entity; for example, a Customer entity has attributes like Name, Age, Address, etc. In the Magento database, all attributes are listed in a single table.
Value: simply the values of the attributes; for example, for the Name attribute the value could be "Rajat".
EAV is used when you have many attributes for an entity and these attributes are dynamic (added/removed).
Also, there is a high possibility that many of these attributes will have empty or null values most of the time.
In such a situation the EAV structure has many advantages, mainly more efficient MySQL storage.
In your case, categories can also have attributes, products can also have attributes, and so on with customers, etc.
Let's take categories as an example. The following are the tables provided by Magento:
1. catalog_category_entity
2. catalog_category_entity_datetime
3. catalog_category_entity_decimal
4. catalog_category_entity_int
5. catalog_category_entity_text
6. catalog_category_entity_varchar
7. catalog_category_flat
Follow this link to learn more about these tables:
Magento Category Tables
For attributes that are select boxes, you can put the dropdown values under option values.
Follow this link to understand the Magento EAV structure, which will give you a clear picture of how the EAV model works and how you can make the best use of it.
magento table structure
There are three approaches if you want to stick with a relational database.
The first is best if you know in advance the attributes for all the products. You choose one of the three ways to store polymorphic data in a relational model.
It's "clean" from a relational point of view - you're just using rows and columns, but each of the 3 options has its own benefits and drawbacks.
If you don't know your attributes at development time, I'd recommend against these solutions - they'd require significant additional tooling.
The next option is EAV. The benefits and drawbacks are well documented - but your focus on "validating input forms" is only one use case for the data, and I think you could easily find your data becomes "write only". Providing sorting/filtering, for instance, becomes really hard ("find all products with a height of at least 12, and sort by material_type" is almost impossible using the EAV model).
The option I prefer is a combination of relational data for the core, invariant data, and document-centric (JSON/XML) for the variant data.
MySQL can query JSON natively - so you can sort/filter by the variant attributes. You'd have to create your own validation logic, though - perhaps by integrating JSON Schema in your data entry applications.
By using JSON Schema, you can introduce concepts that "belong together", and provide lookup values. For instance, if you have product weight, your schema might say weight always must have a unit of measure, with the valid options being kilogram, milligram, ounce, pound etc.
If you have foreign key relationships in the variant data, you have a problem - for instance, "manufacturer" might link to a manufacturers table. You can either model this as an explicit column, or keep it in the JSON and do without SQL's built-in foreign key tools like joins.
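A small sketch of that hybrid layout (names assumed), using the height/material_type example from above:

-- core, invariant data as normal columns; variant data in a JSON document
CREATE TABLE products (
    id              INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    sku             VARCHAR(64) NOT NULL,
    manufacturer_id INT UNSIGNED,   -- an explicit column where a real foreign key matters
    variant_attrs   JSON
);

-- filtering on the variant attributes
SELECT id, sku
FROM products
WHERE variant_attrs->>'$.material_type' = 'steel'
  AND CAST(variant_attrs->>'$.height' AS DECIMAL(10,2)) >= 12;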
I am trying to insert data into a MySQL table and I don't yet know how many parameters there will be.
So I need to find a way to insert parameters dynamically.
You can do it this way:
Insert into mytable (parameters) VALUES ('Ford;red;100kW;diesel;');
Insert into mytable (parameters) VALUES ('Ford;red;100kW;electric;40kWH');
So if you have to add an electric car, you need the kWh, which you didn't need for the diesel car.
Or you do it this way:
Insert into mytable(name, color, kW, engine) VALUES ('Ford', 'red', '100', 'diesel')
ALTER TABLE mytable ADD kWh VARCHAR(255) AFTER engine
Insert into mytable(name, color, kW, engine, kWh) VALUES ('Ford', 'red', '100', 'electric', '40')
In the first case you have to handle all the data with string operations like explode; in the second case you have to allow the user to add columns with PHP.
What is the better way? Or is there another way that is even better?
I don't see any similarity with the other question.
This is a very difficult question. I hope I understood it correctly.
Disclaimer
This is a long post and it may contain errors. Feel free to correct them or ask for clarification. It is also (because of the nature of the question) somewhat opinion-based. I tried to balance all the possibilities.
I assumed an object-oriented approach, i.e. that objects are what should be stored.
TL;DR: It might be best to not do this programmatically.
Evaluation
The first approach has the advantage that you can split the text at runtime and dynamically create a new object, which will not lead to a PDOException (or whatever you are using). However, it also has its disadvantages: it can lead to you using reflection a lot. Why? If you want to alter the table at runtime, I have to assume you do not know what kind of objects to expect. This leads to you creating those objects "on the fly", which also suggests that you should store the object's name somehow.
The second approach raises a question: how do you read from that database?
If you read from that database dynamically (i.e. programmatically defining which columns you are going to need): how do you know which columns to read from? What guarantees that the columns will exist? You would have to check that each column you request exists, which can get messy very fast. And if a column does exist but is not set, what will its default value be?
If you read from that database statically: why not design the database beforehand to hold the kWh column? It might be null in some rows, but you could compensate for that by ignoring them.
If you know the object you want to use beforehand, design your database to be able to hold it.
Another way to approach this
Or is there another way that is even better?
You may be happy using relational databases and abstracting them with a Data Access Object. Even though this answer dips deep into design aspects, you may do best by designing your application first. Go ahead and create an EER diagram that represents your data structure. You can have a generic car entity that is extended by the petrol car and the electric car (and even a fusion car). There are plenty of tools out there that help you create such a diagram and convert it into DDL for the database of your choice.
Conclusion
To be concrete, if you really have to alter the table at runtime, I would recommend going with the second approach and adding a default value to the new column. However, based on the question you asked, I can't really see you getting far with it. It would mean that you have an unknown object that you want to store.
If that is the case, why not create a new table with the object's name that holds the fields of the object as columns? That would allow access like this (assuming a repository that stores said object):
$object = new TestObject();
$repository->store($object);
Upon calling $repository->store(), the repository will check whether the database has a table called "TestObject" and, if not, create it. If the table already exists, it can then proceed to alter it and add any missing columns. Loading would then look like this:
$object = $repository->load("TestObject");
The repository would now check for the TestObject table and could create a new TestObject at runtime like this:
function load($name) {
    // Instantiate the class whose name matches the table (e.g. "TestObject").
    $returnValue = new $name();
    // Set the fields based on the database entries.
    return $returnValue;
}
This has the big advantage that you only have to check whether or not the table exists and either (if yes) create a new object or (if not) throw an exception.
Of course this comes with more problems. Error handling is not covered here (for example, what happens with namespaces, or if there are multiple objects with the same name), to keep it simple. But this should bring the point across.
Sorry for the long post, have a nice day :)
I've been trying to create an application where everything is effectively an object with a series of fields. I've abstracted it to the level that you have the following tables:
ObjectTemplate
Field
LinkObjectTemplateField
FieldType
Each ObjectTemplate has a series of fields (a many-to-many relationship), which can be found in LinkObjectTemplateField. Field is linked to FieldType (a many-to-one relationship). Field also has an ObjectTemplateID field - so let's suppose we have an object template called Section, and another object template called Question (as in for a questionnaire). Section would have Question as a field for questionnaire designers to use to define which questions appear in a section. Each Question would then be linked to a series of Values (or none at all in the case that it is of FieldType 'Text').
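For reference, the structure described above looks roughly like this (column names abbreviated):

CREATE TABLE FieldType               (ID INT PRIMARY KEY, Name VARCHAR(255));
CREATE TABLE ObjectTemplate          (ID INT PRIMARY KEY, Name VARCHAR(255));
CREATE TABLE Field                   (ID INT PRIMARY KEY, FieldTypeID INT, ObjectTemplateID INT, Name VARCHAR(255));
CREATE TABLE LinkObjectTemplateField (ObjectTemplateID INT, FieldID INT);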
We're able to create fields, field types and object templates so far. However, I've come to realise that actually all 3 of these could be represented within the above tables, and I could probably kill off one of these tables too (so I only have ObjectTemplate and LinkObjectTemplateField, where Field is an ObjectTemplate in its own right, so there is a link simply between ObjectTemplate and itself via LinkObjectTemplateField).
My aim is to have one table structure for ALL object types, both as it currently stands and in the future. I'll have a class which picks up all of the fields for a particular object, and the fields it is expecting based on the objecttemplate, and decides how to present the fields based on the template. This seems to be getting very complex and I keep finding myself getting confused. I have a week left to work on this, so my questions are: should I plough on with this? Are there any better techniques to achieve this, or any flaws in my approach? Should I have stuck with the old structure (an entire table for each object type, with the same fields as most other object types for the core details - name, description, deleted etc.)?
Edit
I have been going over my approach again and come to the following conclusions:
Each object type, including the object template itself, should have its own record in the objecttemplate table.
Each object template, field and fieldtype should then have its own row in the object table.
In this way, for example, Text, Dropdown etc. will be objects using the fieldtype object template. The IDs of these will be used in the functions for writing the forms - they will be declared as constants and referenced via MAIN::TEXT, MAIN::DROPDOWN and so on.
You are effectively trying to implement a form of EAV, and unless you actually need the flexibility it brings, it is considered an anti-pattern.
Such an "inner platform" is usually a poor replica of the real thing. In a nutshell:
It's difficult to enforce constraints that are otherwise available to "normal" tables and fields, including data types, NULL-ability, CHECKs, keys, and foreign keys.
You no longer have a good "target" for setting permissions or creating triggers.
It's difficult to limit an index to a specific "column", or make it use a "native" type.
It's difficult to reconstruct the "original" object. Usually, a lot of JOINing is required and the resulting object is not represented as a single row (which may be awkward for the client). Indexes and the query optimizer can no longer work optimally.
So unless you absolutely have to be able to change data structure without changing database structure, just use what DBMS already provides through "normal" tables/columns/constraints...
My aim is to have one table structure for ALL object types, both as it currently stands and in the future.
Well, you kind of already have that built-in to your DBMS: it's called "data dictionary". Yes, you change it through CREATE/ALTER/DROP statements instead of INSERT/UPDATE/DELETE, but at the logical level it's a similar thing.
Should I have stuck with the old structure (an entire table for each object type, with the same fields as most other object types for the core details - name, description, deleted etc.)?
Probably.
BTW, if you have a lot of common fields (and/or constraints), consider putting them in a common "base" table and then "inheriting" other tables from it.
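A minimal sketch of that "base table" idea (table and column names assumed):

-- common fields live in one base table...
CREATE TABLE object_base (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(255) NOT NULL,
    description TEXT,
    deleted     TINYINT(1) NOT NULL DEFAULT 0
);

-- ...and each concrete type adds only its own columns, sharing the base id
CREATE TABLE question (
    id            INT UNSIGNED PRIMARY KEY,
    question_text TEXT NOT NULL,
    FOREIGN KEY (id) REFERENCES object_base (id)
);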
I'm currently at an impasse in regard to the structural design of my website. At the moment I'm using objects to simplify the structure of my site (I have a person object, a party object, a position object, etc.) and in theory each of these is a row from its respective table in the database.
Now from what I've learnt, OO Design is good for keeping things simple and easy to use/implement, which I agree with - it makes my code look so much cleaner and easier to maintain, but what I'm confused about is how I go about linking my objects to the database.
Let's say there is a person page. I create a person object, which equals one MySQL query (which is reasonable), but then that person might have multiple positions which I need to fetch and display on a single page.
What I am currently doing is using a method called getPositions on the person object, which gets the data from MySQL and creates a separate position object for each row, passing in the data as an array. That keeps the queries down to a minimum (2 per page), but it seems like a horrible implementation and, to me, breaks the rules of object-oriented design (should I want to change a MySQL row, I'd need to change it in multiple places), yet the alternative is worse.
In this case the alternative is just getting the IDs that I need and then creating separate positions, passing in the ID, which then goes on to get the row from the database in the constructor. If you have 20 positions per page, it can quickly add up, and I've read about how much WordPress is criticised for its high number of queries per page and its CPU usage. The other thing I'll need to consider in this case is sorting, and doing it this way means I'll need to sort the data using PHP, which surely can't be as efficient as doing it natively in MySQL.
Of course, pages will be (and can be) cached, but to me, this seems almost like cheating for poorly built applications. In this case, what is the correct solution?
The way you're doing it now is at least on the right track. Having an array in the parent object with references to the children is basically how the data is represented in the database.
I'm not completely sure from your question if you're storing the children as references in the parent's array, but you should be and that's how PHP should store them by default. If you also use a singleton pattern for your objects that are pulled from the database, you should never need to modify multiple objects to change one row as you suggest in your question.
You should probably also create multiple constructors for your objects (using static methods that return new instances) so you can create them from their ID and have them pull the data, or just create them from data you already have. The latter case would be used when you're creating children; you can have the parent pull all of the data for its children and create all of them using only one query. Getting a child from its ID will probably be used somewhere else, so it's good to have if it's needed.
For sorting, you could create additional private (or public if you want) arrays that have the children sorted in a particular way with references to the same objects the main array references.
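A rough illustration of the "multiple constructors" idea using static factory methods (the class and table names are made up here, and a PDO connection is assumed):

<?php
class Position
{
    private $data;

    private function __construct(array $row)
    {
        $this->data = $row;
    }

    // Build from a row the parent has already fetched (no extra query).
    public static function fromRow(array $row)
    {
        return new self($row);
    }

    // Build from an ID when the row is not already in hand (one extra query).
    public static function fromId(PDO $pdo, $id)
    {
        $stmt = $pdo->prepare('SELECT * FROM positions WHERE id = ?');
        $stmt->execute(array($id));
        return new self($stmt->fetch(PDO::FETCH_ASSOC));
    }
}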
I have a table called Cat, and a PHP class called Cat. Now I want to make a CatDataMapper class, so that Cat extends CatDataMapper.
I want that Data Mapper class to provide basic functionality for doing ORM, and for creating, editing and deleting Cat.
For that purpose, maybe someone who knows this pattern very well could give me some helpful advice? I feel it would be a little bit too simple to just provide some functions like update(), delete(), save().
I realize a Data Mapper has this problem: First you create the instance of Cat, then initialize all the variables like name, furColor, eyeColor, purrSound, meowSound, attendants, etc.. and after everything is set up, you call the save() function which is inherited from CatDataMapper. This was simple ;)
But now, the real problem: You query the database for cats and get back a plain boring result set with lots of cats data.
PDO features some ORM capability to create Cat instances. Let's say I use that, or let's even say I have a mapDataset() function that takes an associative array. However, as soon as I get my Cat object from a data set, I have redundant data. At the same time, twenty users could pick up the same cat data from the database and edit the cat object, i.e. rename the cat, and save() it, while another user is still thinking about setting another furColor. When all of them save their edits, everything is messed up.
Err... ok, to keep this question really short: What's good practice here?
From DataMapper in PoEA
The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other. With Data Mapper the in-memory objects needn't know even that there's a database present; they need no SQL interface code, and certainly no knowledge of the database schema. (The database schema is always ignorant of the objects that use it.) Since it's a form of Mapper (473), Data Mapper itself is even unknown to the domain layer.
Thus, a Cat should not extend CatDataMapper because that would create an is-a relationship and tie the Cat to the Persistence layer. If you want to be able to handle persistence from your Cats in this way, look into ActiveRecord or any of the other Data Source Architectural Patterns.
You usually use a DataMapper when using a Domain Model. A simple DataMapper would just map a database table to an equivalent in-memory class on a field-to-field basis. However, when the need for a DataMapper arises, you usually won't have such simple relationships. Tables will not map 1:1 to your objects. Instead, multiple tables could form one Object Aggregate and vice versa. Consequently, implementing just CRUD methods can easily become quite a challenge.
Apart from that, it is one of the more complicated patterns (covers 15 pages in PoEA), often used in combination with the Repository pattern among others. Look into the related questions column on the right side of this page for similar questions.
As for your question about multiple users editing the same Cat, that's a common problem called Concurrency. One solution to that would be locking the row, while someone edits it. But like everything, this can lead to other issues.
If you rely on ORMs like Doctrine or Propel, the basic principle is to create a static class that gets the actual data from the database (for instance, Propel would create CatPeer), and the results retrieved by the Peer class are then "hydrated" into Cat objects.
The hydration process is the process of converting a "plain boring" MySQL result set into nice objects having getters and setters.
So for a retrieval you'd use something like CatPeer::doSelect(). Then for a new object you'd first instantiate it (or retrieve an instance from the DB):
$cat = new Cat();
The insertion would be as simple as doing: $cat->save(); That'd be equivalent to an insert (or an update if the object already exists in the DB... The ORM should know how to tell the difference between new and existing objects by using, for instance, the presence or absence of a primary key).
Implementing a Data Mapper is very hard in PHP < 5.3, since you cannot read/write protected/private fields. You have a few choices when loading and saving the objects:
Use some kind of workaround, like serializing the object, modifying its string representation, and bringing it back with unserialize
Make all the fields public
Keep them private/protected, and write mutators/accessors for each of them
The first method has the possibility of breaking with a new release and is a very crude hack; the second one is considered a (very) bad practice.
The third option is also considered bad practice, since you should not provide getters/setters for all of your fields, only the ones that need it. Your model gets "damaged" from a pure DDD (domain driven design) perspective, since it contains methods that are only needed because of the persistence mechanism.
It also means that now you have to describe another mapping for the fields -> setter methods, next to the fields -> table columns.
PHP 5.3 introduces the ability to access/change all types of fields, by using reflection:
http://hu2.php.net/manual/en/reflectionproperty.setaccessible.php
With this, you can achieve a true data mapper, because the need to provide mutators for all of the fields has ceased.
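As a rough sketch of how that helps a mapper (the Cat class and the idea that column names match field names are assumptions):

<?php
class CatDataMapper
{
    // Hydrate private fields directly, without setters, via reflection (PHP 5.3+).
    public function hydrate(array $row)
    {
        $cat = new Cat();
        foreach ($row as $column => $value) {
            $property = new ReflectionProperty('Cat', $column);
            $property->setAccessible(true);
            $property->setValue($cat, $value);
        }
        return $cat;
    }
}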
PDO features some ORM capability to create Cat instances. Let's say I use that, or let's even say I have a mapDataset() function that takes an associative array. However, as soon as I get my Cat object from a data set, I have redundant data. At the same time, twenty users could pick up the same cat data from the database and edit the cat object, i.e. rename the cat, and save() it, while another user is still thinking about setting another furColor. When all of them save their edits, everything is messed up.
In order to keep track of the state of the data, typically an IdentityMap and/or a UnitOfWork would be used to keep track of all the different operations on mapped entities... and at the end of the request cycle all the operations would then be performed.
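For illustration only, a bare-bones identity map might look like this (not any particular library's API):

<?php
// One shared instance per (class, id) pair for the duration of a request,
// so the same row is never represented by two different objects at once.
class IdentityMap
{
    private $objects = array();

    public function get($class, $id)
    {
        $key = $class . '#' . $id;
        return isset($this->objects[$key]) ? $this->objects[$key] : null;
    }

    public function add($class, $id, $object)
    {
        $this->objects[$class . '#' . $id] = $object;
    }
}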
To keep the answer short:
You have an instance of Cat. (Maybe it extends CatDbMapper, or Cat3rdpartycatstoreMapper)
You call:
$cats = $cat_model->getBlueEyedCats();
//then you get an array of Cat objects, in the $cats array
I don't know what you use; you might take a look at some PHP frameworks for a better understanding.