I have an entity that contains several choice fields, and to normalize the database, the best approach seems to be linking these fields to lookup tables. Each lookup table has two columns: the first is the primary key, an integer type, and the second is the lookup value, usually a string of several words.
To display an entity object, I need to query each lookup table to get the values. Is this the standard way of doing it, or does anyone else have another method? Should there be only one lookup table, or would I need a different lookup table for each field? I think I need one for each field, since I have to allow a user to choose the value that applies to them and only want to show the appropriate choices for each field.
Everything is stored in Doctrine and the database, correct? No arrays or simple lookup objects stored only in Symfony/PHP?
I am using this as a reference for my naming and for creating queries:
doctrine join multiple tables
In the end, I decided to add a lookup table to my database and another entity to my Symfony project. The form that displays the choice selection uses the EntityType field, and when I need to display the underlying selection in Twig, I added a data transformer following the Symfony docs: http://symfony.com/doc/current/cookbook/form/data_transformers.html
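For illustration, a per-field lookup table and the join used to resolve the stored id back to its display value might look like this (all table and column names here are made up for the example, not taken from the question):

```sql
-- One lookup table per choice field
CREATE TABLE status_type (
    id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    value VARCHAR(255) NOT NULL
);

-- The entity table stores only the lookup id
CREATE TABLE item (
    id             INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    status_type_id INT UNSIGNED,
    FOREIGN KEY (status_type_id) REFERENCES status_type (id)
);

-- Resolve the stored id to its display value in one query
SELECT i.id, st.value AS status
FROM item i
LEFT JOIN status_type st ON st.id = i.status_type_id;
```

With one such table per field, populating a form's choice list is just a `SELECT` on the matching lookup table, so each field only ever shows its own values.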
I have already indexed a database (SQL) with a single table, which is in sync with its Elasticsearch index. Now I want to index a database with multiple normalized tables. So, how should I index those tables? Should I write multiple JOIN queries in my Logstash file while indexing the database tables, or should I index each table one by one and perform multiple index searches? For the second way, I do not know how to form the Elasticsearch queries that correspond to the relevant SQL queries. I am new to Elasticsearch, so any guidance on the problem would be appreciated. Here I am also attaching the schema of the database. One more thing: I am using the PHP client for searching and displaying data.
First of all, everything will depend on how you want to build your indexes in Elasticsearch, that is, whether you want an index for each table or one index covering several tables.
My advice is:
Create a trigger in the database to audit every change (insert, update, delete) and store it in a changes table along with the action and a state.
Create a view for each type of change or table; how you build these views will depend on how you want to index everything.
Use the JDBC input to query the views for rows whose state is pending (raw).
Use filters to normalize your data and adjust it to your Elasticsearch structure.
Use a JDBC output to update the database, setting the processed rows' state to processed so they no longer appear in the query.
In addition to these points, my recommendation is to keep those tables in a single index, for example employee, where you can create a different nested object for each related entity in the database, such as department, etc., and include the code and description of each one. Let me know if anything is unclear 😀
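As a rough sketch of the JDBC-input step above (the connection string, credentials, view name, and index name are all assumptions, not taken from your schema), a Logstash pipeline could look like this; note that marking rows as processed afterwards needs a separate mechanism, such as the community JDBC output plugin or a scheduled job:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user              => "user"
    jdbc_password          => "secret"
    jdbc_driver_class      => "com.mysql.cj.jdbc.Driver"
    schedule               => "* * * * *"
    statement              => "SELECT * FROM v_employee_changes WHERE state = 'pending'"
  }
}
output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "employee"
    document_id => "%{id}"
  }
}
```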
For the past couple of years I've been working on my own lightweight PHP CMS that I use for my personal projects. The one thing it's missing is an easy database solution.
I am looking to create a simple content-type database framework in which I can specify a new type (user, book, event, etc.) and then be able to load everything related to it automatically.
For some content types, there could be fields that can only have one value and some that can have zero to many values, so I will use a new table for the latter. Take the example:
table: event
columns: id, name, description, date
table: event_people
columns: id_event, id_user
table: event_pictures
columns: id_event, picture
Events will have a bunch of fields that contain a value such as the description, but there could also be a bunch of pictures and people going to it.
I want to be able to create a generic PHP class that will load all the information for a content type. My current thought process is to make an entity loader function that I can give an id and a type:
Entity::load($id, "event");
From this, I was going to get all of the tables with the prefix "event", load all of the data with the passed-in ID, and then store it in a multidimensional array. I feel like there is probably a more efficient way to do this, however. I'd like to stay away from having a config file someplace that specifies all of the content types and their child tables, because I want to be able to add a new child table and have it picked up automatically.
Is there any way to store this relationship directly within the MySQL tables? I don't do a lot of database work, and I've only recently started to use foreign keys (what a life saver). Would it be more efficient to see which tables have a foreign key related to the id column in the event table, and if so, how would this be done? I'm also open to different ways of storing this information.
Note: I'm doing this just for fun so please don't refer me to use any premade frameworks. I'd like to create this myself.
I think your approach of searching for all tables with the prefix event is sensible. The only way I can think of to be more efficient is to have an "entity_relationship" table that you could query. It would allow you flexibility in your naming convention, avoid naming conflicts, and this lookup should be more efficient than a pattern-match search.
Then, whenever a new object type with its own table is added, you make an entry in the relationship table:
INSERT INTO entity_relationship VALUES
('event','event_people'),
('event','event_pictures'),
('event','event_documents'),
('event','event_performers');
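Alternatively, since the question asks about discovering the relationship from foreign keys: MySQL exposes foreign-key metadata through `information_schema`, so the child tables can be found automatically without a registry table. A sketch, assuming the child tables declare foreign keys on `event.id` and the schema is called `my_database` (both names are examples):

```sql
-- Find every table that declares a foreign key referencing event.id
SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_SCHEMA = 'my_database'
  AND REFERENCED_TABLE_NAME   = 'event'
  AND REFERENCED_COLUMN_NAME  = 'id';
```

Your `Entity::load()` could run this once per type (and cache the result) to learn which child tables to pull rows from.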
I'm hitting a dead end on the best practice for storing a large number of options and values in my MySQL database and then assigning them to properties. The way I usually do this (the example is for real estate) is to create a table called "pool", with an auto-increment value as the ID and a varchar to store the value, in this case "Above Ground", plus another row for "In-ground". Then in my property table I would have a column "has_pool" with the proper ID value from the "pool" table assigned. Obviously the problem is that with hundreds of options (fireplace, water view, etc.) for each property, my number of database tables will get very large very fast, and my left joins would become out of control on the front side.
Can someone point me in the right direction on what the best practice would be to easily populate new values for the property attributes and keep the query count down to a minimum? I feel like there is a simple solution but my research so far has not made it apparent to me. Thank you!
One way you could do this is to create an 'options' table with three columns: id, menuId, and value.
Create another table called 'menus', with two fields: id and name.
Add the menu names (pool, fireplace, etc.) to the menus table, and then add the possible values to the options table, including the id of the menu each value is related to.
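A sketch of that layout in MySQL; the `property_options` junction table is my addition for attaching selected options to properties, and all names and types are illustrative:

```sql
CREATE TABLE menus (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL           -- e.g. 'pool', 'fireplace'
);

CREATE TABLE options (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    menu_id INT UNSIGNED NOT NULL,
    value   VARCHAR(255) NOT NULL,       -- e.g. 'Above Ground', 'In-ground'
    FOREIGN KEY (menu_id) REFERENCES menus (id)
);

-- One junction table links any number of selected options to a property,
-- so adding a new attribute is just a new row in menus, not a new table.
CREATE TABLE property_options (
    property_id INT UNSIGNED NOT NULL,
    option_id   INT UNSIGNED NOT NULL,
    PRIMARY KEY (property_id, option_id),
    FOREIGN KEY (option_id) REFERENCES options (id)
);
```

This keeps the schema at two fixed tables plus one junction table, instead of one table per option, and a property's attributes come back with a single join through `property_options`.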
I'd store all the values serialized (e.g. JSON, XML, or YAML) in a blob, and then define inverted-index tables for the attributes I want to be searchable.
I describe this technique and alternatives in my presentation Extensible Data Modeling with MySQL.
Also see http://bret.appspot.com/entry/how-friendfeed-uses-mysql
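A minimal sketch of the blob-plus-inverted-index idea (the table names and the JSON layout are assumptions for the example):

```sql
-- All attributes live serialized in one blob on the main table
CREATE TABLE properties (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    attributes BLOB        -- e.g. JSON: {"pool": "In-ground", "fireplace": "yes"}
);

-- One narrow inverted-index table per attribute you want to search on
CREATE TABLE properties_pool (
    pool        VARCHAR(50)  NOT NULL,
    property_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (pool, property_id)
);

-- Finding properties with an in-ground pool never touches the blob
SELECT property_id FROM properties_pool WHERE pool = 'In-ground';
```

Non-searchable attributes cost nothing beyond the blob, and each searchable one costs a single small table maintained by application code when the blob changes.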
In my database there is a table, which has a column of the type text. This column holds a serialized array. This array is read and stored by another application, and I cannot change its format.
The serialized array holds a selection of database names, table names and column names in two different languages.
I would like to write a controller, entity, form, etc. in Symfony2 that is able to modify this serialized array.
There is a script that I can use that can provide an array of all possible db names, table names and column names that each serialized array may contain.
The goal is to present a list of check boxes where users can select db's, tables and columns. Next, they can do a translation of the names.
Since all data is so volatile, I am not sure whether this is even possible in Symfony2.
An alternative is to make the following entities: { database, table, column } and do it fully OO. And then I could export a selection in a serialized array, to the external application that expects it that way...
Can you guys follow my reasoning? Am I overlooking a strategy here...?
Added:
The array is a nested array up to the fifth degree. Databases contain tables, which contain columns. And every item has an original name and a translated name.
I think you answered your own question:
An alternative is to make the following entities: { database, table, column } and do it fully OO. And then I could export a selection in a serialized array to the external application that expects it that way...
You would start with a master entity mapped to your table.
class SomeEntity
{
    protected $serializedInfo;

    public function getDatabases()
    {
        // Process $serializedInfo into an array of Database objects and return it
    }
}
You then pass SomeEntity to a SomeEntityFormType, which in turn uses a collection of DatabaseFormTypes. The DatabaseFormType then has a collection of TableFormTypes, and so on.
Eventually your form is posted and SomeEntity is updated; you then serialize again before persisting. It should be straightforward. It might be a bit more challenging if you want users to add information, but even then it is doable.
I know it's really late, but I was really busy with university, so I couldn't answer sooner.
This is what I think is best to do.
Imagine that the table containing the column that holds your array is called foo.
So you make an entity called Foo that contains a field (type text) with whatever name you like.
Now the tricky part is to make an object called Database that contains all the relations you need (to Table objects, and from Table objects to Column objects).
So even though I told you to make the field type text, you will pass the Database object to this field.
So how is it going to work?
The Database object will have a __toString method that returns the serialized array of the object the way you want.
This way, when Doctrine2 tries to save the Database object in the text field, it will be saved as the string that __toString returns.
And you will have a getDatabase method that converts the serialized array back into the Database object.
This is the idea I have; I'm not sure whether it suits you or not.
I want to create a database for my users, in which I will need to store around 50 different pieces of info for each user.
Example:
Contact info will have (address, phone, email, home_phone, etc...)
Personal info will have (name, last_name, dob, birth_city, work, etc...)
Referee info ... (6 items)
etc.
So I have many categories, each containing at least 5-6 elements, so my question is:
Should I create a column for each item (about 50 fields per user), or is it better to create one column for each category and use serialize to store an array in that field (around 6 columns, each holding an array of 6-7 items)?
What would be best practice? And in case I go for the array choice, should I make the column type text, since I won't be able to decide an exact varchar size for all items?
I think serializing an array and storing it in a relational database is a bad idea. To employ the full power of a relational database, including the wide range of SQL queries that can work on your data, you should think about a proper relational design with one or more tables and relations between them. Think about primary and foreign keys and normalization. For more specific advice, you should post more info about your example.
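As one possible shape for the categories in the question (the column types are guesses), a table per category with a foreign key back to users keeps every item individually queryable instead of buried in a serialized blob:

```sql
CREATE TABLE users (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY
);

-- One-to-one category tables: the user_id is both PK and FK
CREATE TABLE contact_info (
    user_id    INT UNSIGNED PRIMARY KEY,
    address    VARCHAR(255),
    phone      VARCHAR(50),
    email      VARCHAR(255),
    home_phone VARCHAR(50),
    FOREIGN KEY (user_id) REFERENCES users (id)
);

CREATE TABLE personal_info (
    user_id    INT UNSIGNED PRIMARY KEY,
    name       VARCHAR(100),
    last_name  VARCHAR(100),
    dob        DATE,
    birth_city VARCHAR(100),
    work       VARCHAR(100),
    FOREIGN KEY (user_id) REFERENCES users (id)
);
```

With this layout you can still fetch everything for one user with a couple of joins, but you can also filter or index on any single field (say, `dob` or `email`), which a serialized text column can never support.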