I'm developing a Laravel application and have started using Redis as a caching system. I'm thinking of caching all records of a specific model, since users may frequently make API requests that involve it. Would a valid solution be storing each record in a hash, where the field is that record's unique ID and the value is the record's data, or is this use case too complicated for a simple key-value database like Redis? I'm also curious how I would create model instances from the hash when I retrieve all the data from it. Replies are appreciated!
Short answer: yes, you can store a model, a collection, or basically anything in Redis's key-value cache, as long as the key you provide is unique and can be derived again when you need the data back. Redis can even be used as a primary database.
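For example, here is a minimal sketch of the hash approach, assuming an App\Models\User model and Laravel's Redis facade (the hash key 'users' is an illustrative choice):

    <?php
    // A minimal sketch, assuming a User model and Laravel's Redis facade.
    // The hash key 'users' is an illustrative choice.

    use App\Models\User;
    use Illuminate\Support\Facades\Redis;

    // Cache every record in one hash: field = primary key, value = JSON attributes.
    foreach (User::all() as $user) {
        Redis::hset('users', $user->id, json_encode($user->getAttributes()));
    }

    // Later: pull the whole hash back and rebuild Eloquent models from it.
    $rows = array_map(
        fn ($json) => json_decode($json, true),
        Redis::hgetall('users')
    );

    // hydrate() turns raw attribute arrays into a Collection of User models.
    $users = User::hydrate(array_values($rows));

One design note: keep the hash key derivable (here, simply 'users'), so any request can retrace it, as mentioned above.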
Long answer
Ultimately, I think it depends on the implementation. There is a lot of optimization that can be done before someone can or should consider caching all models. For "simple" records that involve large datasets, I would advise first optimizing your queries and code and checking the results. Examples:
Select only data you need, not entire models.
Use the Database Query Builder for interacting with the database when targeting large record sets, rather than Eloquent (Eloquent is significantly slower due to the overhead of the Active Record pattern).
Consider using the toBase() method. This retrieves all the data but does not create Eloquent models, saving precious resources (see the sketch after this list).
Use tools like the Laravel Debugbar to analyze your queries and discover the slow ones.
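For instance, a minimal sketch of the first three points, assuming a User model with id and name columns:

    <?php
    // A minimal sketch of the first three points, assuming a User model
    // with 'id' and 'name' columns.

    use App\Models\User;
    use Illuminate\Support\Facades\DB;

    // Select only the data you need, not entire models.
    $names = User::query()->select('id', 'name')->get();

    // Skip Eloquent entirely for large reads: the Query Builder returns
    // plain stdClass rows with far less overhead.
    $rows = DB::table('users')->select('id', 'name')->get();

    // Or keep the Eloquent query syntax but skip model instantiation.
    $rows = User::query()->select('id', 'name')->toBase()->get();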
For large datasets that do not change often, or where further optimization is no longer possible: caching is the way to go!
There is no right answer here, but maybe this helps you on your way! There are plenty of packages that implement similar behaviour.
Related
I'm building a web app using Laravel + AngularJS and I have a question. Do I need a model for each table in my database? There are 87 tables in my database, and I need to query all of them depending on the input the user provides.
I just want to be sure whether every table must have a model file or whether just one is enough.
There are two ways you can access your DB tables:
Eloquent ORM (dependent on Models)
DB Facade Query Builder (independent of Models)
The former is the cleaner and better approach for performing DB queries and related tasks; the latter is messier and will be difficult to manage in a large application like yours with 80+ tables.
Also, if you're using the Eloquent way, it's better to have a base model holding common code that child models inherit. For example, if you want to store the ID of the user who made a DB change, you can assign Auth::id() to the changed_by field on your table in the boot function, as sketched below.
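A minimal sketch of that idea, assuming every table has a changed_by column (the class names are illustrative):

    <?php
    // A minimal sketch of the base-model idea, assuming every table has
    // a 'changed_by' column; class names are illustrative.

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Support\Facades\Auth;

    abstract class BaseModel extends Model
    {
        protected static function boot()
        {
            parent::boot();

            // Record who made the change on every save.
            static::saving(function (Model $model) {
                $model->changed_by = Auth::id();
            });
        }
    }

    // Each of the 80+ tables then needs only a thin child model.
    class User extends BaseModel
    {
        protected $table = 'users';
    }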
With the DB Facade, you have to hard-code the table name every time you perform a DB operation, which leads to inconsistency if you ever have to rename a table. That's a rare scenario, but it will still be difficult to manage when a single file performs operations on several tables. One option is to create a global table-name constant that every DB operation references, as sketched below.
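A minimal sketch of that constant idea (the class and constant names are illustrative):

    <?php
    // A minimal sketch of the table-name-constant idea; the class and
    // constant names are illustrative.

    use Illuminate\Support\Facades\DB;

    final class Tables
    {
        public const USERS = 'users';
    }

    // Every query references the constant, so renaming a table means
    // changing one line instead of hunting through the codebase.
    $rows = DB::table(Tables::USERS)->where('active', 1)->get();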
Conclusion:
Yes, creating 80+ models to implement the Eloquent way is painful, but only in the short term. As the application grows it will be easier for you to manage, it will help other developers who start working on it by giving them an overview of the DB, and it will improve code readability.
It depends on how you'd like to handle queries.
If you'd like to use Eloquent ORM, you need model classes to handle objects and relationships. That means one model per table, except for intermediate relationship tables, which can be accessed through the pivot attribute.
Raw SQL queries are also supported. You don't really need model classes for them, as each result within the result array will be a plain PHP stdClass object. You do need to write raw SQL, though.
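For instance, a minimal sketch of the raw-SQL route (the table and columns are illustrative):

    <?php
    // A minimal sketch of the raw-SQL route; the table and columns are
    // illustrative. No model class is involved.

    use Illuminate\Support\Facades\DB;

    $rows = DB::select('select id, name from users where active = ?', [1]);

    foreach ($rows as $row) {
        // Each $row is a plain stdClass object, not an Eloquent model.
        echo $row->name;
    }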
See the Laravel documentation.
I am working on a project that heavily uses joins in the controllers to get to the data, instead of model relationships (e.g., using leftJoins rather than establishing proper model relationships with hasMany etc. and retrieving through those).
My question is really about performance. Is using model relationships to access data faster than using leftJoins? I am not sure whether Laravel actually performs a leftJoin under the hood.
I am on a tight schedule, so I want to decide whether it's actually worth refactoring the code to use model relationships and whether that would provide performance gains.
Thanks in advance
Laravel and Eloquent may execute queries differently when using a relationship as opposed to a join. You can use barryvdh/laravel-debugbar to see the query execution.
My observation is that Laravel will tend to use two queries instead of a join if possible. It will execute the first query and then, using the IDs returned from it, execute a second query on the "joined" table as a "where myID in (1,2,3,4,5..)", stitching the results together.
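A minimal sketch contrasting the two styles, assuming a User model with a hasMany posts relationship:

    <?php
    // A minimal sketch contrasting the two styles, assuming a User model
    // with a hasMany 'posts' relationship.

    use App\Models\User;
    use Illuminate\Support\Facades\DB;

    // Relationship style: Laravel runs two queries --
    //   select * from users
    //   select * from posts where user_id in (1, 2, 3, ...)
    // -- and stitches the results together in PHP.
    $users = User::with('posts')->get();

    // Join style: one SQL statement, but flat rows instead of nested models.
    $rows = DB::table('users')
        ->leftJoin('posts', 'posts.user_id', '=', 'users.id')
        ->select('users.*', 'posts.title')
        ->get();

The debugbar mentioned above will show you exactly which queries each style produces in your environment.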
As others have suggested, building a quick test and seeing what works best in your environment would be the best bet. I prefer to use the relationships as much as possible because you can easily read what your code is doing.
Joins and relationships are not opposites of each other. Relationships exist for ease of development: cleaner, more readable code that is easy to maintain. The performance difference is almost nonexistent unless you are developing a data miner of some sort.
Edited
If you are focusing on performance, you can work on implementing caching in an efficient way.
I have a form where a user enters their studentID, Name, Major, etc., and that info is then used to fill a 'student' class and create a 'student' object.
Now, I want to store these objects somewhere, somehow, and I want to be able to pull them back to use their data. I've looked into 'object serialization', but I'm not quite sure it will fulfill my needs, as I don't fully understand how it works... any help would be great, thanks.
And I don't want to create a database, at all. No MySQL is allowed for this little assignment of mine...
You could serialize your object as is and persist it to disk, but what happens if you want to search for students based on some criteria?
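For what it's worth, a minimal sketch of that serialize-to-disk route (the Student class and file name are illustrative):

    <?php
    // A minimal sketch of serializing a Student object to disk; the
    // class and file name are illustrative.

    class Student
    {
        public function __construct(
            public string $studentId,
            public string $name,
            public string $major,
        ) {}
    }

    $student = new Student('S1001', 'Ada', 'CS');

    // Persist: serialize() turns the object into a storable string.
    file_put_contents('students.dat', serialize($student));

    // Restore: unserialize() rebuilds the object, properties intact.
    $restored = unserialize(file_get_contents('students.dat'));
    echo $restored->name; // "Ada"

Note that searching then means loading and scanning every stored object, which is exactly the limitation described above.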
Usually, a relational database is used, along with an ORM to map your PHP objects to RDBMS tables.
Wikipedia has a list of ORMs and frameworks for PHP. If you want something more lightweight than managing a DB server, at least look into SQLite.
Be careful when storing serialized objects. You might lose the ability to do certain things, like searching records. And if you don't understand how serialization works, your objects might behave unexpectedly.
A better approach would be to store the individual properties in database tables and then fetch the data back into objects. To do this you can use an ORM like Doctrine, which maps objects to database tables and persists them. Or a simple database class using PDO should fulfill your basic needs as well.
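A minimal sketch of the PDO route, using SQLite so no MySQL server is needed (the table, columns, and class are illustrative):

    <?php
    // A minimal sketch of the PDO route, using SQLite so no MySQL server
    // is needed; table, column, and class names are illustrative.

    class Student
    {
        public function __construct(
            public string $studentId,
            public string $name,
            public string $major,
        ) {}
    }

    $pdo = new PDO('sqlite:students.db');
    $pdo->exec('CREATE TABLE IF NOT EXISTS students (
        student_id TEXT PRIMARY KEY, name TEXT, major TEXT
    )');

    // Store individual properties as columns...
    $stmt = $pdo->prepare('INSERT OR REPLACE INTO students VALUES (?, ?, ?)');
    $stmt->execute(['S1001', 'Ada', 'CS']);

    // ...and fetch the rows back into Student objects.
    $students = [];
    foreach ($pdo->query('SELECT * FROM students', PDO::FETCH_ASSOC) as $row) {
        $students[] = new Student($row['student_id'], $row['name'], $row['major']);
    }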
Some time ago I started working with the Yii Framework, and I saw some things that "do not let me sleep." Here I want to talk about my doubts regarding how Yii users use Active Record.
I have seen many people add the application's business rules directly to the Active Record classes, the same ones generated by Gii. I deeply believe that this is a misinterpretation of what Active Record is, and a violation of the SRP.
Early on, SRP is easier to apply. ActiveRecord classes handle persistence, associations and not much else. But bit-by-bit, they grow. Objects that are inherently responsible for persistence become the de facto owner of all business logic as well. And a year or two later you have a User class with over 500 lines of code, and hundreds of methods in its public interface. Callback hell ensues.
When I talked about this with some people, my view was criticized. But when I asked:
And when you need to regenerate your Active Record, full of business rules, through Gii, what do you do? Rewrite? Copy and paste? That's great, congratulations!
The only answer I got was silence.
So, here is what I am currently doing to reach a slightly better architecture: I generate the Active Records in an /ar folder, and inside the /models folder I add the Domain Model.
By the way, it is the Domain Model that owns the business rules, and it is the Domain Model that uses the Active Records to persist and retrieve data; the Active Record is thus the Data Model. See the sketch below.
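A minimal sketch of the split I mean (the names are illustrative; UserRecord is the Gii-generated Active Record in /ar, User is the Domain Model in /models):

    <?php
    // A minimal sketch of the split; names are illustrative. UserRecord
    // is the Gii-generated Active Record in /ar, User is the Domain
    // Model in /models.

    class User
    {
        private $record;

        public function __construct(UserRecord $record)
        {
            $this->record = $record;
        }

        // The business rule lives in the Domain Model...
        public function moveToGroup($groupId)
        {
            if ($this->record->is_locked) {
                throw new DomainException('Locked users cannot change groups.');
            }

            // ...while the Active Record is used purely as the Data Model.
            $this->record->group_id = $groupId;
            $this->record->save();
        }
    }

Regenerating through Gii then only touches UserRecord; the business rules in User survive untouched.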
What do you think of this approach?
If I'm wrong somewhere, please tell me why before criticizing harshly.
Some of the comments on this article are quite helpful:
http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/
In particular, the idea that your models should grow out of a strictly 'fat model' setup as you need more structure seems quite wise.
Are you having issues now, or mainly trying to plan ahead? This may be hard to plan ahead for and may just need refactoring as you go...
Edit:
Regarding moveUserToGroup (in your comment below), I can see how having that might bother you. I found this while thinking about your question: https://gist.github.com/justinko/2838490. An equivalent setup that you might use for your moveUserToGroup would be a CFormModel subclass. It will give you the ability to do validations, etc., but can be more specific to what you're trying to handle (and can use multiple AR objects to achieve your objectives instead of just one).
I often use CFormModel to handle forms that have multiple AR objects or forms where I want to do other things.
Sounds like that may be what you're after. More details available here:
http://www.yiiframework.com/doc/guide/1.1/en/form.overview
The definition of Active Record, according to Martin Fowler:
An object carries both data and behavior. Much of this data is persistent and needs to be stored in a database. Active Record uses the most obvious approach, putting data access logic in the domain object. This way all people know how to read and write their data to and from the database.
When you segregate data and behavior, you no longer have an Active Record. Two common related patterns are Data Mapper and Table/Row Data Gateway (the latter more closely tied to RDBMSs).
Again, Fowler says:
The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other. With Data Mapper the in-memory objects needn't know even that there's a database present; they need no SQL interface code, and certainly no knowledge of the database schema.
And again:
A Table Data Gateway holds all the SQL for accessing a single table or view: selects, inserts, updates, and deletes. Other code calls its methods for all interaction with the database.
A Row Data Gateway gives you objects that look exactly like the record in your record structure but can be accessed with the regular mechanisms of your programming language. All details of data source access are hidden behind this interface.
A Data Mapper is usually storage independent: the mapper recovers data from the storage and creates the mapped objects (plain old objects). A mapped object knows absolutely nothing about being stored somewhere else.
As I said, TDG/RDG are more closely tied to a relational table. The TDG object represents the structure of the table and implements all the common operations. The RDG object contains the data of one single row of the table. Unlike the mapped object of a Data Mapper, the RDG object is aware that it is part of a whole, because it references its container TDG.
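To make the contrast concrete, here is a minimal sketch of a Data Mapper (the names are illustrative; the Person object knows nothing about the database):

    <?php
    // A minimal sketch of a Data Mapper; names are illustrative. The
    // Person object knows nothing about the database or SQL.

    class Person
    {
        public ?int $id = null;
        public string $name = '';
    }

    class PersonMapper
    {
        public function __construct(private PDO $pdo) {}

        public function find(int $id): ?Person
        {
            $stmt = $this->pdo->prepare('SELECT id, name FROM people WHERE id = ?');
            $stmt->execute([$id]);
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($row === false) {
                return null;
            }

            // The mapper, not the domain object, handles all data access.
            $person = new Person();
            $person->id = (int) $row['id'];
            $person->name = $row['name'];

            return $person;
        }

        public function insert(Person $person): void
        {
            $stmt = $this->pdo->prepare('INSERT INTO people (name) VALUES (?)');
            $stmt->execute([$person->name]);
            $person->id = (int) $this->pdo->lastInsertId();
        }
    }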
What is the best way in doctrine2 to deal with different databases that share the same schema? Currently I:
generate entities separately for every database, adding the namespace and database name to every metadata object, putting them in different namespaces (XXX\Base\EntityClass) but with the same alias
create one EntityManager per database (even if they share the same connection)
create a proxy which passes calls to multiple EntityManagers and collects the responses
merge the responses into one output
Is there a simpler way of dealing with multiple databases in doctrine2?
I can't answer for doctrine2, but I'm doing this in C#.
One set of entities, with strong names and strong types, defined in terms of what the rest of the application needs. This maps the schema, but isn't tied to either database.
One facade that knows which database you're using at the moment and directs requests to one of two...
Separate data access namespaces, that handle a common set of operations, and populate results into the single set of entities, which are returned to the requestor through the facade.
Static code generators, based on reading the schema from the database catalog, are useful. You may want to pick one database as the model, if you can infer everything you need to know about the other from it.
Dynamic code generators are also useful, for inserts, updates, where clauses, etc.
Invest some time in a framework to support all of this. Decide whether you need to keep metadata at run time, and whether it's primarily to support queries or change operations. Provide a common method for extracting data from result sets for either database, so that you can get strongly named and typed result sets back to your application without regard to the underlying database.
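Coming back to the doctrine2 side of the question, a minimal sketch of the one-EntityManager-per-database setup, assuming the classic Doctrine 2 EntityManager::create() API (the database names, credentials, and MyEntity class are illustrative):

    <?php
    // A minimal sketch of one EntityManager per database in Doctrine 2,
    // using the classic EntityManager::create() API. Database names,
    // credentials, and the MyEntity class are illustrative.

    use Doctrine\ORM\EntityManager;
    use Doctrine\ORM\Tools\Setup;

    // One shared set of entity classes and one shared configuration.
    $config = Setup::createAnnotationMetadataConfiguration([__DIR__ . '/Entities'], true);

    $makeEm = function (string $dbname) use ($config) {
        return EntityManager::create([
            'driver'   => 'pdo_mysql',
            'user'     => 'app',
            'password' => 'secret',
            'dbname'   => $dbname,
        ], $config);
    };

    $managers = [$makeEm('base_one'), $makeEm('base_two')];

    // The "proxy" step from the question: run the same query against
    // every base and merge the responses into one output.
    $results = [];
    foreach ($managers as $em) {
        $results = array_merge(
            $results,
            $em->getRepository(MyEntity::class)->findBy(['status' => 'active'])
        );
    }

If the schemas really are identical, sharing one set of entity classes across the managers might also remove the need for the per-database namespaces described in the question.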