I'm using a MySQL database; at the moment I've got a table "posts" with the columns id, title, text, author_id, and image.
I need to provide the possibility of uploading several images for one post in my blog. What's the best way of organizing my DB structure in this case, and how is it usually done in Yii 2?
At the moment I just have functionality for saving one image and keeping its path in a table field.
Should I keep an array in DB or create another relations table?
When you're working with a conventional RDBMS like MySQL:
It seems you're going from a one-to-one to a zero-or-one-to-many relation, in which case I'd recommend creating another table for your files (for example: image*), containing a foreign key image.post_id to posts.id. An added benefit is that you will be able to store some metadata about each image more neatly, instead of creating a load of extra (but perhaps unneeded) columns in the posts table.
The cleanest solution (IMHO) is usually to stay close to the data structures of your DBMS, instead of placing arbitrary data structures inside a text field, no matter what framework or language you use.
This is different when working with a NoSQL database such as MongoDB, where, depending on the use case, you may want an array property images on your posts document containing image objects.
*The Yii naming convention for tables is singular rather than plural.
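For illustration, a minimal sketch of such a table might look like this (only the post_id foreign key comes from the answer above; the metadata columns are assumptions):

```sql
-- Hypothetical "image" table: one row per uploaded file, many per post.
CREATE TABLE image (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    post_id  INT UNSIGNED NOT NULL,           -- FK to posts.id
    path     VARCHAR(255) NOT NULL,           -- where the file is stored
    alt_text VARCHAR(255) NULL,               -- example metadata column
    position INT UNSIGNED NOT NULL DEFAULT 0, -- display order within the post
    FOREIGN KEY (post_id) REFERENCES posts (id) ON DELETE CASCADE
);
```

Fetching all images for a post is then a simple SELECT * FROM image WHERE post_id = ? ORDER BY position.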
I have an old DB under my application where not all relations are actual SQL relations; some of them are stored in a string column.
Example:
Tables: Tags, Articles
The Articles table has a 'tags_ids' column containing values like '33;44;82;' (the tag IDs).
I would like to know if I can use the Laravel Backpack relationships UI with this kind of data.
I surely will have to "mutate" the data during the get and the set, but I can't find a way to do it.
I don't think Laravel supports something like that directly as a relationship.
You could certainly write a custom column template in Backpack that would explode $entry->tags_ids on ;, query Tags for those IDs, and then display them in a loop.
That said, IMHO, you'd be much better off adding an intermediate table like article_has_tag to map your tags and articles properly; then you could use all the built-in features of Laravel and Backpack normally. If you still need the old application to work with the original data structure, you could write "after insert/update" triggers for the articles and article_has_tag tables to keep them in sync (being careful not to cause an infinite loop, of course).
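As a hedged sketch (table and column names are assumptions based on the question, and the trigger-based sync is omitted), the pivot table and a one-off backfill from the old string column might look like this:

```sql
-- Pivot table mapping articles to tags.
CREATE TABLE article_has_tag (
    article_id INT UNSIGNED NOT NULL,
    tag_id     INT UNSIGNED NOT NULL,
    PRIMARY KEY (article_id, tag_id),
    FOREIGN KEY (article_id) REFERENCES articles (id),
    FOREIGN KEY (tag_id)     REFERENCES tags (id)
);

-- One-off backfill: split 'tags_ids' values like '33;44;82;' into rows.
-- The derived numbers table caps this at 10 tags per article; extend as needed.
INSERT IGNORE INTO article_has_tag (article_id, tag_id)
SELECT a.id,
       CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(a.tags_ids, ';', n.n), ';', -1)
            AS UNSIGNED)
FROM articles a
JOIN (SELECT 1 n UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
      UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) n
  ON n.n <= LENGTH(a.tags_ids) - LENGTH(REPLACE(a.tags_ids, ';', ''));
```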
I'm pretty new to Laravel, so I'm struggling with the logic for what is essentially a CMS with multiple content types.
Say I have 3 content types: Food, Books and Cars. Every item in all content types has a name, a URL and a couple of other fields.
I can create, update and delete any of these resources with what is most likely the same code replicated 3 times. The only difference would be in create and update, as the field names differ between them.
Should I just duplicate these fields/functions for each controller, or create some common ground in one place?
The crossover of fields/functions will not be huge initially; however, it seems inefficient: if I had, say, 10 content types and wanted to add one field to all of them, I would have to update code in a large number of places.
If I had a central "Node" that contained the IDs and common fields for ALL items in every content type, then had this linked to individual tables for the custom fields, I'd be in a much better position when I want to add, update or delete common fields.
I've currently got 3 controllers and have only worked on one so far so I have an index(), show() and edit() function in the controller.
As a test, I created a Node model with php artisan make:model Node -mcr and simply extended the existing controllers so they were extending NodeController, which just threw up an error like this:
Declaration of App\Http\Controllers\FoodController::show(App\Food $food) should be compatible with App\Http\Controllers\NodeController::show(App\Node $node)
This is likely not the way to go about it anyway, but I simply do not know the recommended practice for this.
The most appropriate and standard practice for your problem is this:
Have a single table, let's say node, which will contain all the common fields. Have another table, categories, related to node (1-m), to categorize the type of node (car, book, food, etc.). Then make one more table, let's say node_meta, which will store all additional attributes depending on the type of node.
(You may have a look at the WordPress CMS database ER diagram, which has a similar DB design.)
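A minimal sketch of that layout might look as follows (all names and column choices are illustrative):

```sql
-- Category lookup: one row per content type.
CREATE TABLE categories (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL             -- 'car', 'book', 'food', ...
);

-- Common fields shared by every content type.
CREATE TABLE node (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    category_id INT UNSIGNED NOT NULL,
    name        VARCHAR(255) NOT NULL,
    url         VARCHAR(255) NOT NULL,
    FOREIGN KEY (category_id) REFERENCES categories (id)
);

-- Type-specific attributes stored as key/value rows.
CREATE TABLE node_meta (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    node_id    INT UNSIGNED NOT NULL,
    meta_key   VARCHAR(100) NOT NULL,     -- e.g. 'isbn' for books
    meta_value TEXT,
    FOREIGN KEY (node_id) REFERENCES node (id)
);
```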
A polymorphic relation is not a good idea for this: it has limitations when it comes to querying the underlying data. For example, you cannot apply a whereHas query, and there is still no official solution to this problem.
I've been trying to create an application where everything is effectively an object with a series of fields. I've abstracted it to the level that you have the following tables:
ObjectTemplate
Field
LinkObjectTemplateField
FieldType
Each ObjectTemplate has a series of fields (a many-to-many relationship), which can be found in LinkObjectTemplateField. Field is linked to FieldType (a many-to-one relationship). Field also has an ObjectTemplateID field. So let's suppose we have an object template called Section, and another object template called Question (as in for a questionnaire): Section would have Question as a field, which questionnaire designers would use to define which questions appear in a section. Each Question would then be linked to a series of Values (or none at all, in the case that it is of FieldType 'Text').
We're able to create fields, field types and object templates so far. However, I've come to realise that actually all 3 of these could be represented within the above tables, and I could probably kill off one of those tables too (so I only have ObjectTemplate and LinkObjectTemplateField, where Field is an ObjectTemplate in its own right, so there is a link simply between ObjectTemplate and itself via LinkObjectTemplateField).
My aim is to have one table structure for ALL object types, both as it currently stands and in the future. I'll have a class which picks up all of the fields for a particular object, and the fields it is expecting based on the objecttemplate, and decides how to present the fields based on the template. This seems to be getting very complex and I keep finding myself getting confused. I have a week left to work on this, so my questions are: should I plough on with this? Are there any better techniques to achieve this, or any flaws in my approach? Should I have stuck with the old structure (an entire table for each object type, with the same fields as most other object types for the core details - name, description, deleted etc.)?
Edit
I have been going over my approach again and come to the following conclusions:
Each object type, including object template itself, should have its own record in the objecttemplate table.
Each object template, field and fieldtype should then have its own row in the object table.
In this way, for example, Text, Dropdown etc. will be objects using the fieldtype object template. The IDs of these will be used in the functions for writing the forms - they will be declared as constants and referenced via MAIN::TEXT, MAIN::DROPDOWN and so on.
You are effectively trying to implement a form of EAV (entity-attribute-value), which, unless you actually need the flexibility it brings, is considered an anti-pattern.
Such "inner platform" is usually a poor replica of the real thing. In a nutshell:
It's difficult to enforce constraints that are otherwise available to "normal" tables and fields, including data types, NULL-ability, CHECKs, keys, and foreign keys.
You no longer have a good "target" for setting permissions or creating triggers.
It's difficult to limit an index to a specific "column", or make it use a "native" type.
It's difficult to reconstruct the "original" object. Usually, a lot of JOINing is required, and the resulting object is not represented as a single row (which may be awkward for the client). Indexes and the query optimizer can no longer work optimally.
So unless you absolutely have to be able to change the data structure without changing the database structure, just use what the DBMS already provides through "normal" tables/columns/constraints...
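To make that concrete, here is a small, hypothetical example of what plain tables enforce for free, none of which the generic ObjectTemplate/Field layout gives you without extra machinery:

```sql
CREATE TABLE section (
    id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL
);

CREATE TABLE question (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    section_id INT UNSIGNED NOT NULL,              -- real foreign key
    label      VARCHAR(255) NOT NULL,              -- NULL-ability enforced
    field_type ENUM('text','dropdown') NOT NULL,   -- native type check
    FOREIGN KEY (section_id) REFERENCES section (id)
);
```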
My aim is to have one table structure for ALL object types, both as it currently stands and in the future.
Well, you kind of already have that built-in to your DBMS: it's called "data dictionary". Yes, you change it through CREATE/ALTER/DROP statements instead of INSERT/UPDATE/DELETE, but at the logical level it's a similar thing.
Should I have stuck with the old structure (an entire table for each object type, with the same fields as most other object types for the core details - name, description, deleted etc.)?
Probably.
BTW, if you have a lot of common fields (and/or constraints), consider putting them in a common "base" table and then "inheriting" other tables from it.
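A rough sketch of that base-table idea, using the core fields named in the question (table names are invented):

```sql
-- Common "base" table holding the shared core fields.
CREATE TABLE object_base (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(255) NOT NULL,
    description TEXT,
    deleted     TINYINT(1) NOT NULL DEFAULT 0
);

-- A concrete type "inherits" by sharing the base table's primary key.
CREATE TABLE questionnaire (
    id         INT UNSIGNED PRIMARY KEY,   -- same value as object_base.id
    start_date DATE,                       -- type-specific columns go here
    FOREIGN KEY (id) REFERENCES object_base (id)
);
```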
I have a database with a table post; a user can submit many types of posts, and all these types share some properties while also having properties of their own. For example, both video and standard posts have a description, but only video posts have a video link/file property.
What I did was create a table post containing the common properties, like creationDate and description.
I then created other tables containing the other properties, and as the number of post types grows, I think I will have to add more tables.
Of course, the problem with this design is that when I want to retrieve one post, I have to retrieve its data from the posts table, then use its ID and Type to retrieve data from the type table (e.g. the videos table). And when I want to retrieve data of different types on one page, I'll have to handle many tables.
This doesn't seem practical, since I'm working with PHP/MySQL on an Apache server.
Is there any other better idea I can implement to get the same result?
Your design follows the technique described as class-table-inheritance.
You might want to explore using shared-primary-key as a way to speed things up: it lets you use post_id as a FK without having a different FK type for each post type, and it also speeds up joins.
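For illustration, a minimal shared-primary-key sketch (column names are assumptions based on the question):

```sql
-- Supertype table with the common properties.
CREATE TABLE posts (
    post_id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    post_type     VARCHAR(20) NOT NULL,   -- 'video', 'standard', ...
    creation_date DATETIME NOT NULL,
    description   TEXT
);

-- Subtype table reuses posts.post_id as both its PK and its FK.
CREATE TABLE videos (
    post_id    INT UNSIGNED PRIMARY KEY,
    video_link VARCHAR(255) NOT NULL,
    FOREIGN KEY (post_id) REFERENCES posts (post_id)
);

-- Retrieving a full video post is then a single-key join:
SELECT p.creation_date, p.description, v.video_link
FROM posts p
JOIN videos v ON v.post_id = p.post_id
WHERE p.post_id = 42;
```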
The down side is that you will have to add a new table whenever you discover a new post type. But then, if you were building an object model you would have to add a new subclass at the same discovery.
I am lost on how best to approach the site search component. I have a user-content site similar to Yelp. People can search for local places, local events, local photos, members, etc. So if I enter "Tom" in the search box, I expect the search to return results from all user objects that match Tom. The word Tom can be anywhere: in a restaurant name, in the description of the restaurant, in a review, in someone's comment, etc.
So if I design this purely using normalized SQL, I will need to join about 15 object tables to scan all the different user objects, plus scan multiple columns in each table to cover all the fields. Now, I don't know if this is how it is normally done, or if there is a better way. I have seen things like Solr/Apache/Elasticsearch, but I am not sure how they fit into my use case, and even if I use them, I assume I still need to scan all 15 tables + 30-40 columns, correct? My platform is PHP/MySQL. Also, is there any coding / component architecture / DB design practice to follow for this? A friend said I should combine all objects into 1 table, but that won't work, as you can't combine photos, videos, comments, pages, profiles, etc. into 1 table, so I am lost on how to implement this.
Probably your friend meant combining all the searchable fields into one table.
The basic idea would be to create a table that acts as the index. One column is indexable and stores words, whereas the other column contains a list of references to objects that contain that word in one of those fields (for example, an object may be a picture, and its searchable fields might be title and comments).
The list of references can be stored in many ways; you could, for example, have a string of variable length, say a BLOB, and in it store a JSON-encoded array of the IDs & types of the objects, so that you could easily find them afterwards by doing a search for that ID in the table corresponding to the type of the object.
Of course, on any addition / removal / modification of indexable data, you should update your index accordingly. You can use lazy update techniques that eventually update the index in the background, because most people expect indexes to be accurate only to within a few minutes of the current state of the data. (One implementation of such an index is Apache Cassandra, but I wouldn't use it for small-scale projects where you don't need distributed databases and the like.)
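A bare-bones sketch of such an index table (all names are invented for illustration):

```sql
-- One row per distinct word; refs holds a JSON-encoded array of the
-- objects containing that word, e.g. [{"type": "photo", "id": 17}, ...]
CREATE TABLE search_index (
    word VARCHAR(100) NOT NULL PRIMARY KEY,
    refs BLOB NOT NULL
);

-- Find everything mentioning "tom"; decode refs in application code
-- and fetch each object from the table matching its type.
SELECT refs FROM search_index WHERE word = 'tom';
```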