Most efficient method to store MySQL options and values in database - php

I'm hitting a dead end with the best practice for storing a large number of options and values in my MySQL database and then assigning them to properties. The way I usually do this (the example is for real estate) is to create a table called "pool", with an auto-increment ID column and a varchar column to store the value, in this case "Above Ground", plus another row for "In-ground". Then in my property table I would have a column "has_pool" holding the proper ID value from the "pool" table. Obviously the problem is that with hundreds of options (fireplace, water view, etc.) for each property, my number of database tables will get very large very fast, and my LEFT JOINs would get out of control on the front end.
Can someone point me in the right direction on what the best practice would be to easily populate new values for the property attributes and keep the query count down to a minimum? I feel like there is a simple solution but my research so far has not made it apparent to me. Thank you!

One way you could do this is to create an 'options' table with three columns: id, menuId, and value.
Create another table called 'menus' with two fields: id and name.
Add the menu names (pool, fireplace, etc.) to the menus table, and then add the possible values to the options table, including the id of the menu each one relates to.
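A minimal sketch of that layout (column names and types here are assumptions, not an exact schema):

CREATE TABLE menus (
  id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL               -- e.g. 'pool', 'fireplace'
);

CREATE TABLE options (
  id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  menu_id INT UNSIGNED NOT NULL,            -- which menu this value belongs to
  value   VARCHAR(255) NOT NULL,            -- e.g. 'Above Ground', 'In-ground'
  FOREIGN KEY (menu_id) REFERENCES menus (id)
);

-- The property row then stores the chosen option id per attribute, for example:
-- ALTER TABLE property ADD COLUMN pool_option_id INT UNSIGNED NULL;

This keeps the number of lookup tables at two, no matter how many option categories you add.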

I'd store all the values serialized (e.g. JSON or XML or YAML) into a blob, and then define inverted index tables for attributes I want to be searchable.
I describe this technique and alternatives in my presentation Extensible Data Modeling with MySQL.
Also see http://bret.appspot.com/entry/how-friendfeed-uses-mysql
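As a rough illustration of that approach (all names below are made up for the sketch, not taken from the presentation): keep the full serialized record in one blob column, and maintain a narrow index table for just the attributes you want to filter on.

CREATE TABLE property (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  attributes BLOB NOT NULL                  -- serialized JSON/XML/YAML of all options
);

-- Inverted index: one row per searchable attribute/value pair
CREATE TABLE property_attr_index (
  attr_name   VARCHAR(64)  NOT NULL,        -- e.g. 'pool'
  attr_value  VARCHAR(255) NOT NULL,        -- e.g. 'In-ground'
  property_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (attr_name, attr_value, property_id),
  FOREIGN KEY (property_id) REFERENCES property (id)
);

-- Find all properties with an in-ground pool:
SELECT property_id FROM property_attr_index
WHERE attr_name = 'pool' AND attr_value = 'In-ground';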

Related

How to store multiple values in a row, column or table? What is most efficient?

I have a problem: I want to create a data page where users can see how they've progressed in losing weight. I start by collecting their weight, but in order to actually do anything with the data, I need multiple values.
For example - below is a picture of my columns, where vaegtUsers and hoejdeUsers are the weight and height. However, I can't really figure out how to store multiple weight values in one single column. Is there some way to get around this? I've done a bit of research and almost everyone says it's not possible. Should I just add a new column for each "new" weight ID, or keep creating tables for each individual user? Or should I do something completely different?
Since it appears a single user may have multiple weights at different times, you have a one-to-many relationship. You should create a second table (for instance, Measure) which refers to your User table.
This table could contain columns such as ID, UserId, MeasureDate, and Weight.
You could also include the height measurement in this table if your users are not yet fully grown, and therefore likely to have varying heights at different points in time. Otherwise, height can stay in the user table.
On a side note, I would advise you to read up on database normalization for relational databases.
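A minimal sketch of that one-to-many layout (column names and types are assumptions; adapt them to your existing users table):

CREATE TABLE measure (
  id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  user_id      INT UNSIGNED NOT NULL,
  measure_date DATE NOT NULL,
  weight       DECIMAL(5,2) NOT NULL,       -- e.g. kilograms
  FOREIGN KEY (user_id) REFERENCES users (id)
);

-- One user, many weigh-ins over time:
SELECT measure_date, weight FROM measure
WHERE user_id = 42
ORDER BY measure_date;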

Creating an entity framework SQL table structure

For the past couple of years I've been working on my own lightweight PHP CMS that I use for my personal projects. The one thing it's missing is an easy databasing solution.
I am looking to create a simple content-type database framework in which I can specify a new type (user, book, event, etc.) and then be able to load everything related to it automatically.
For some content types, there could be fields that can only have one value and some that can have zero to many values, so I will use a new table for the latter. Take this example:
table: event
columns: id, name, description, date
table: event_people:
columns: id_event, id_user
table: event_pictures:
columns: id_event, picture
Events will have a bunch of fields that contain a value such as the description, but there could also be a bunch of pictures and people going to it.
I want to be able to create a generic PHP class that will load all the information for a content type. My current thought is to make an entity loader function to which I can pass an id and a type:
Entity::load($id, "event");
From this I was going to find all of the tables with the prefix "event", load all of the data matching the passed-in ID, and then store it in a multidimensional array. I feel like there is probably a more efficient way to do this, however. I'd like to stay away from having a config file someplace that specifies all of the content types and their child tables, because I want to be able to add a new child table and have it picked up automatically.
Is there any way to store this relationship directly within the MySQL tables? I don't do a lot of databasing and I've only recently started to use foreign keys (what a life saver). Would it be more efficient to see which tables have a foreign key related to the id column in the event table, and if so, how would this be done? I'm also open to different ways of storing this information.
Note: I'm doing this just for fun so please don't refer me to use any premade frameworks. I'd like to create this myself.
I think your approach of searching for all tables with the prefix event is sensible. The only way I can think of to be more efficient is to have an "entity_relationship" table that you could query. It would allow you flexibility in your naming convention, avoid naming conflicts, and this lookup should be more efficient than a pattern-match search.
Then, whenever a new object type with its own table is added, you make an entry in the relationship table:
INSERT INTO entity_relationship VALUES
('event','event_people'),
('event','event_pictures'),
('event','event_documents'),
('event','event_performers');
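Alternatively, if the child tables declare foreign keys to event.id as the question mentions, MySQL already exposes those relationships through information_schema, so you could discover the child tables at runtime without a lookup table or naming convention. A sketch (the schema name is a placeholder for your database name):

SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_SCHEMA = 'my_cms'     -- your database name
  AND REFERENCED_TABLE_NAME   = 'event'
  AND REFERENCED_COLUMN_NAME  = 'id';

The entity loader could run this once per type and cache the result, then query each returned table by its foreign key column.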

save array in mysql field and search in that field

I have a mysql table looking like this:
id
some_field1
some_field2
variable_fields
datetime
...
Now I want to store more than 1 value in variable_fields like this:
user_id:5;message_id:10
The reason why I do not create a separate field for every value I want to store is that these values differ throughout the project, so I am storing different values over the course of the project.
At some time variable_fields contains this value:
user_id:5;message_id:10
And at some other time it contains this value:
car_id:56;payment_id:45
This wouldn't be a big problem but I want to be able to search in this field. So something like: variable_fields LIKE '%payment_id:45%'.
This obviously takes time for MySQL. Is there another way of handling this instead of creating a field for every value? Some kind of dynamic field in MySQL?
I'm happy for any kind of help. Thank you in advance!
Best regards,
Freddy
If you add a MyISAM full-text index or employ any other full-text tool on that column (e.g. Sphinx, Lucene), the searches you described will work much better; however, that isn't advisable.
I would suggest either dividing the dynamic metadata into different tables per case and keeping a type_id in the main table, or keeping columns for all options, set to NULL by default. It really depends on whether there is a simple division or whether this is truly dynamic and changing over time. In case you divide the data into several tables, a JOIN according to type_id gives you the ability to query by those specific field values. Be sure to create an index on the mutual id in both tables.
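A rough sketch of the per-case variant, with invented table and column names:

-- Main table keeps a type marker instead of a packed string
ALTER TABLE main_table ADD COLUMN type_id TINYINT UNSIGNED NOT NULL;

-- One side table per kind of metadata
CREATE TABLE meta_message (
  main_id    INT UNSIGNED NOT NULL PRIMARY KEY,
  user_id    INT UNSIGNED NOT NULL,
  message_id INT UNSIGNED NOT NULL,
  KEY (message_id)
);

CREATE TABLE meta_payment (
  main_id    INT UNSIGNED NOT NULL PRIMARY KEY,
  car_id     INT UNSIGNED NOT NULL,
  payment_id INT UNSIGNED NOT NULL,
  KEY (payment_id)
);

-- Indexed lookup instead of variable_fields LIKE '%payment_id:45%':
SELECT m.* FROM main_table m
JOIN meta_payment p ON p.main_id = m.id
WHERE p.payment_id = 45;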

MySQL: how many columns can I have in a users table? Should I store arrays or not?

I want to create a database for my users, in which I will need to store around 50 different pieces of info for each user.
For example:
Contact info will have (address, phone, email, home_phone, etc...)
Personal info will have (name, last_name, dob, birth_city, work, etc...)
Referee info will have (6 items)
etc.
So I have many categories, each containing at least 5-6 elements, so my question is:
Should I create a column for each item (about 50 fields per user), or is it better to create one column for each category and use serialize to store an array in that field (around 6 columns, each holding an array of 6-7 items)?
What would be best practice? And in case I go for the array choice, should I make the column type TEXT, since I won't be able to decide an exact varchar size for all the items?
I think serializing an array and storing it in a relational database is a bad idea. To be able to employ the full power of a relational database, including the wide range of SQL queries you can run on your data, you should think about a proper relational database design involving one or more tables and relations between them. Think about primary and foreign keys and normalization. For more specific advice, you should post more info about your case.
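For example (just a sketch with assumed names and types, since the real fields weren't posted), each category could become its own table keyed by the user id instead of a serialized text column:

CREATE TABLE users (
  id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name      VARCHAR(100) NOT NULL,
  last_name VARCHAR(100) NOT NULL,
  dob       DATE
);

CREATE TABLE user_contact (
  user_id    INT UNSIGNED NOT NULL PRIMARY KEY,
  address    VARCHAR(255),
  phone      VARCHAR(30),
  email      VARCHAR(255),
  home_phone VARCHAR(30),
  FOREIGN KEY (user_id) REFERENCES users (id)
);

-- Queries stay simple and indexable, e.g. finding a user by email:
SELECT u.* FROM users u
JOIN user_contact c ON c.user_id = u.id
WHERE c.email = 'someone@example.com';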

How to apply normalization on MySQL using PHP

Please help, I don't really have any idea, although I've done some reading on the topic. All I know is that normalization is used to make the data in the database more efficient and easier to handle, that it can also be used to save disk space, and lastly, that if you use normalization you will end up generating more tables.
Now I have a lot of questions to ask.
First, how does normalization help to save disk space, or whatever space is occupied by the database?
Second, is it possible to add data to multiple tables using only one query?
Please help, I'm just a newbie wanting to learn from you. Thanks.
OK, a couple of things:
PHP has got nothing to do with this. Normalization is about modelling data.
Normalization is not about saving disk space. It is about organizing data so that it is easily maintainable, which in turn is a way to maintain data integrity.
Normalization is typically described in a few stages, or 'normal forms'. In practice, people who design relational databases often intuitively 'get it right' most of the time. But it is still good to be aware of the normal forms and what their characteristics are. There is a lot of documentation on that on the internet (e.g. http://en.wikipedia.org/wiki/Database_normalization), and you should certainly do your own research, but the most important stages are:
Unnormalized data: in this stage, data is not truly tabular ('relational'). There is a lot of discussion about what tabular really means, and experts disagree with one another, but most people agree that data is unnormalized if there are multi-valued attributes (columns that can contain a list of values in a single row), or if there are repeating groups (multiple columns, or multiple groups of columns, for storing the same type of data).
Example of multi-valued column: person (first_name, last_name, phonenumbers)
Here, phonenumbers implies there could be multiple phone numbers, stored in one column.
Example of repeating group: person(first_name, last_name, child1_first_name, child1_birth_date, child2_first_name, child2_birth_date..., childN_first_name, childN_birth_date)
Here, the person table has a number of column pairs (child_first_name, child_birth_date) to store the person's children.
Note that something like order (shipping_address, billing_address) is not a repeating group: the addresses for billing and shipping may be similar pieces of data, but each has its own distinct role for an order, and each represents a different aspect of an order. child1 through childN do not - children do not have specific roles, and the list of children is variable (you never know how many groups you should reserve in advance).
In both cases, multi-valued columns and repeating groups, you basically have a "nested table" structure - a table within a table. Data is said to be in 1NF (first normal form) if neither of these occurs.
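To make that concrete, a 1NF version of the person examples above splits the nested data into separate tables, one row per value (column types are assumptions):

CREATE TABLE person (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(100),
  last_name  VARCHAR(100)
);

-- One row per phone number, instead of a multi-valued column
CREATE TABLE person_phonenumber (
  person_id   INT UNSIGNED NOT NULL,
  phonenumber VARCHAR(30)  NOT NULL,
  PRIMARY KEY (person_id, phonenumber),
  FOREIGN KEY (person_id) REFERENCES person (id)
);

-- One row per child, instead of child1..childN column pairs
CREATE TABLE person_child (
  person_id        INT UNSIGNED NOT NULL,
  child_first_name VARCHAR(100) NOT NULL,
  child_birth_date DATE,
  FOREIGN KEY (person_id) REFERENCES person (id)
);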
1NF is about structural characteristics: the tabular form of the data. All subsequent normal forms have to do with eliminating redundancy. Redundancy occurs when the same information is independently stored multiple times. Redundancy is bad: if you want to change some fact, you have to change it in multiple places. If you forget to change one of them, you have inconsistent data - the data contradicts itself.
There are a lot of processes that can eliminate redundancy, each leading to a higher normal form, all the way from 1NF up to 6NF. However, most databases are adequately normalized at 3NF (or a slight variation of it called Boyce-Codd normal form, BCNF). You should study 2NF and 3NF, but the principle is very simple: a table is adequately normalized if:
the table is in 1NF
the table has a key (a column or column combination whose values are required, and which uniquely identifies a row - i.e. there can be only one row having that combination of values in the key columns)
there are no functional dependencies between the non-key columns
non-key columns are not functionally dependent upon part of the key (but are completely functionally dependent upon the entire key).
Functional dependency means that a column's value can be derived from another column. A simple example:
order_item (order_id, item_number, customer_id, product_code, product_description, amount)
Let's assume (order_id, item_number) is the key. product_code and product_description are functionally dependent upon each other: for one particular product_code, you will always find the same product_description (as if product_description is a function of product_code). The problem is now: suppose a product description changes for a particular product code - you have to change all orders that use that product_code. Forget only one and you have an inconsistent database.
The way to solve it is to create a new product table with (product_code, product_description), having (product_code) as the key, and then, instead of storing all product fields in order_item, only store a reference to a row in the product table in the order_item records (in this case, order_item should only keep product_code, which is sufficient to look up a row in the product table and find the product_description).
So as you can see, with this solution you do actually save space (by not storing all those product descriptions in each order_item that happens to order the product) and you do get more tables (product is split off from order_item). But just remember that it is not about saving disk space: it is about eliminating redundancy, thus making it easier to maintain the data, because now you only have to change one row in the product table to change the description.
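In SQL, the split described above might look like this (column types are assumptions):

CREATE TABLE product (
  product_code        VARCHAR(20)  NOT NULL PRIMARY KEY,
  product_description VARCHAR(255) NOT NULL
);

CREATE TABLE order_item (
  order_id     INT UNSIGNED NOT NULL,
  item_number  INT UNSIGNED NOT NULL,
  customer_id  INT UNSIGNED NOT NULL,
  product_code VARCHAR(20)  NOT NULL,   -- reference instead of a copied description
  amount       INT UNSIGNED NOT NULL,
  PRIMARY KEY (order_id, item_number),
  FOREIGN KEY (product_code) REFERENCES product (product_code)
);

-- Changing a description is now a single, non-redundant update:
UPDATE product SET product_description = 'New description'
WHERE product_code = 'ABC-123';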
There are a lot of similar questions on StackOverflow already, for example, Can someone please give an example of 1NF, 2NF and 3NF in plain english?
Look in the Related sidebar to the right for a bunch of them. That'll get you started.
As for your specific questions:
Normalization saves disk space by reducing redundant data storage. This has another benefit: if you have multiple copies of a given entity attribute in your database, they can get out of sync, while if you have a normalized database and use referential integrity, this cannot happen.
The INSERT statement references only one table. A TRIGGER on the insert statement can add rows to other tables, but there's no way to supply data to the trigger other than those columns in the table that spawned it.
When you need to insert dependent rows after inserting a row to the parent table, use the LAST_INSERT_ID() function to retrieve the auto-generated primary key value of the last INSERT statement in your session.
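For example, inserting a parent row and then a dependent row in the same session could look like this (table and column names are illustrative):

INSERT INTO orders (customer_id, order_date)
VALUES (42, CURDATE());

-- LAST_INSERT_ID() returns the auto_increment id generated above;
-- it is tracked per connection, so it is safe under concurrency.
INSERT INTO order_item (order_id, item_number, product_code, amount)
VALUES (LAST_INSERT_ID(), 1, 'ABC-123', 2);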
I think you will learn this when you start creating the schema for your database.
Please think in reverse when you add a field that already exists somewhere else in your database.
By reverse I mean: ask yourself, if I have to modify this field, how many queries do I have to run?
You will probably end up with the answer that you have to run the query 2 or X times to modify the content of your column.
Keep it simple: assign an ID to each piece of content that would otherwise be duplicated in your database.
For example, take the column address.
This is not good:
update clients set address = 'new address' where clientid=500;
update orders set address = 'new address' where orderid=300;
A good approach would be to create an addresses table and run a single query:
update addresses set address = 'new address' where addressid=100;
Then use the address id 100 everywhere in your database tables as a foreign key reference (clients + orders). This way the id 100 itself never changes, but if you update the content of the address, all linked tables pick up the change.
The third normal form is enough for you this time.
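A sketch of how that could be wired up (column types and the existing table layouts are assumed):

CREATE TABLE addresses (
  addressid INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  address   VARCHAR(255) NOT NULL
);

-- clients and orders reference the address by id instead of copying the text
ALTER TABLE clients ADD COLUMN addressid INT UNSIGNED,
  ADD FOREIGN KEY (addressid) REFERENCES addresses (addressid);
ALTER TABLE orders  ADD COLUMN addressid INT UNSIGNED,
  ADD FOREIGN KEY (addressid) REFERENCES addresses (addressid);

-- Now the single UPDATE on addresses (shown above) propagates everywhere.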
Normalization is a set of rules. The more you follow, the higher the 'level' of normalization your database has. In general, level 3 (third normal form) is the highest level sought after.
Normalized data is theoretically 'purer' than non-normalized data. This makes it easier to reason about, and it removes redundancy, which reduces the chance of data getting out of sync.
From a practical viewpoint, however, normalized data isn't always the best design, even if it is in theory. If you don't really know the finer points, aiming for normalized data isn't such a bad idea, though.
In phpMyAdmin > 4.3.0, under Structure -> Table structure, above the table you get:
"Print", "Propose table structure", "Track table", "Move columns", "Improve table structure". Under "Improve table structure" you get a wizard which says:
Improve table structure (Normalization):
Select up to what step you want to normalize
First step of normalization (1NF)
Second step of normalization (1NF+2NF)
Third step of normalization (1NF+2NF+3NF)
To question 2: no, it is not possible to insert data into multiple tables with one query.
See the INSERT syntax.
In addition to other answers, you can also search here on SO for normalization and find e.g. the question: Normalization in MySQL
