Get details from another mysql table - php

I have a table containing information about a certain month, and one column in each row holds the MySQL row IDs of another table, used to grab multiple pieces of information from it.
Is there a more efficient way to get the information than exploding the IDs and running separate SQL queries for each? Here is an example:
Row ID | Name | Other Sources
1      | Test | 1,2,7
The Other Sources column holds the IDs of rows from the other table, which looks like this:
Row ID | Name  | Information  | Link
1      | John  | No info yet? | http://blah.com
2      | Liam  | No info yet? | http://blah.com
7      | Steve | No info yet? | http://blah.com
and overall the information returned would look like the below:
Hi this page is called test... here is a list of our sources
- John (No info yet?) find it here at http://blah.com
- Liam (No info yet?) find it here at http://blah.com
- Steve (No info yet?) find it here at http://blah.com
Here is what I would do: explode Other Sources on the comma and then run a separate SQL query for each ID. I am sure there must be a better way?

Looks like a classic many-to-many relationship. You have pages and sources - each page can have many sources and each source could be the source for many pages?
Fortunately this is very much a solved problem in relational database design. You would use a 3rd table to relate the two together:
Pages (PageID, Name)
Sources (SourceID, Name, Information, Link)
PageSources (PageID, SourceID)
The key for the "PageSources" table would be both PageID and SourceID.
Then, to get all the sources for a page, for example, you would use this SQL:
SELECT s.*
FROM Sources s
INNER JOIN PageSources ps ON s.SourceID = ps.SourceID
WHERE ps.PageID = 1;
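For the PHP side, a minimal sketch of running that query with PDO might look like this ($pdo and $pageId are assumptions; the table and column names come from the schema above):
<?php
// A sketch, assuming a PDO connection in $pdo and the page ID in $pageId.
$stmt = $pdo->prepare(
    'SELECT s.Name, s.Information, s.Link
     FROM Sources s
     INNER JOIN PageSources ps ON s.SourceID = ps.SourceID
     WHERE ps.PageID = ?'
);
$stmt->execute([$pageId]);

// Print the list in the format the question describes.
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $source) {
    echo "- {$source['Name']} ({$source['Information']}) find it here at {$source['Link']}\n";
}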

Not easily with your table structure. If you had another table like:
ID | Source
1  | 1
1  | 2
1  | 7
Then join is your friend. With things the way they are, you'll have to do some nasty splitting on comma-separated values in the "Other Sources" field.

Maybe I'm missing something obvious (been known to happen), but why are you using a single field in your first table with a comma-delimited set of values rather than a simple join table? The solution, if you do that, is trivial.

The problem with these tables is that a multi-valued column doesn't work well with SQL. Tables in this format are not considered normalized, as multi-valued columns are forbidden in First Normal Form and above.
First Normal Form means...
There's no top-to-bottom ordering to the rows.
There's no left-to-right ordering to the columns.
There are no duplicate rows.
Every row-and-column intersection contains exactly one value from the applicable domain (and nothing else).
All columns are regular [i.e. rows have no hidden components such as row IDs, object IDs, or hidden timestamps].
—Chris Date, "What First Normal Form Really Means", pp. 127-8
Anyway, the best way to do it is with a many-to-many relationship. This is done by putting a third table in the middle, as Dominic Rodger does in his answer.

Related

List table content based on table settings (php / mysql)

I have a clue on how to do this, but I was wondering if there are other methods out there, maybe a "best practice" approach.
I have a page that lists a number of datasets that can be found in a "catalogue" table in mysql, like the one below.
+----+----------+------+--------------------------+
| id | name     | type | listItems                |
+----+----------+------+--------------------------+
| 1  | dataset1 | SQL  | id, name, location, type |
| 2  | dataset2 | SQL  | id, gdp, import, export  |
+----+----------+------+--------------------------+
The datasets are different and have different structures. What I'm trying to achieve is that when I click one of these links, I'm shown all the records in the respective table. Normally this is just a matter of extracting data from a table, but as I mentioned, the data can differ. From the first dataset I want to list the id, name, location and type fields, whereas from the second dataset I'm looking for id, gdp, import, export and abbreviation. Not only are the columns different, but I don't want to extract all of them, just some.
My initial thought was to have an extra column in the catalogue table (the listItems column), specifying each table's default columns to be extracted. These would be stored in the following format:
id, name, location, type
Then, when I list items, I identify which dataset I'm using, extract these values from the catalogue table, and then query the database.
Is there a better way to do this?
You are part way there.
Next, you write PHP code to create the SELECT statement using the dataset name and list of columns.
After that, you may realize that you want different formatting: right justified numbers, maybe with commas; anchor tags for values that look like hyperlinks; left justify strings; etc.
How far do you want to take this? It can all be done in PHP, and there is where most of it belongs. Your "catalog" is about the only thing to store in the database, and very little is done via SQL.
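To illustrate the SELECT-building step, here is a rough sketch; it assumes a PDO connection in $pdo, that the name column holds the actual table name, and that $catalogueId comes from the clicked link:
<?php
// Fetch the catalogue row for the clicked dataset (names are assumptions).
$stmt = $pdo->prepare('SELECT name, listItems FROM catalogue WHERE id = ?');
$stmt->execute([$catalogueId]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

// Whitelist the column names; identifiers cannot be bound as parameters,
// so validate them before interpolating into the SQL string.
$columns = array_map('trim', explode(',', $row['listItems']));
foreach ($columns as $col) {
    if (!preg_match('/^[A-Za-z_][A-Za-z0-9_]*$/', $col)) {
        throw new InvalidArgumentException("Bad column name: $col");
    }
}

$sql  = 'SELECT ' . implode(', ', $columns) . ' FROM ' . $row['name'];
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);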

PHP & MySQL performance - One big query vs. multiple small

For a MySQL table I am using the InnoDB engine, and the structure of my tables looks like this:
Table user
id | username | etc...
---|----------|-------
1  | bruce    | ...
2  | clark    | ...
3  | tony     | ...
Table user-emails
id | person_id | email
---|-----------|--------------------------
1  | 1         | bruce@wayne-ent.com
2  | 1         | ceo@wayne-ent.com
3  | 2         | clark.k@daily-planet.com
To fetch data from the database I've written a tiny framework. E.g. on __construct($id) it checks if there is a person with the given id; if so, it creates the corresponding model and saves only the id field to an array. At runtime, if I need another field from the model, it fetches just that value from the database, saves it to the array and returns it. The same goes for the emails field: the code accesses the user-emails table and gets all the emails for the corresponding user.
For small models this works all right, but now I am working on another project where I have to fetch a lot of data at once for a list, and that takes some time. I also know that many connections to MySQL and many queries are quite stressful for the server, so...
My question now is: Should I fetch all data at once (with left joins etc.) while constructing the model and save the fields as an array or should I use some other method?
Why do people insist on referring to entities and domain objects as "models"?
Unless your entities are extremely large, I would populate the entire entity when you need it. And if "email list" is part of that entity, I would populate that too.
As I see it, the question is more related to "what to do with tables, that are related by foreign keys".
Let's say you have Users and Articles tables, where each article has a specific owner associated by a user_id foreign key. In this case, when populating the Article entity, I would only retrieve the user_id value instead of pulling in all the information about the user.
But in your example with Users and UserEmails, the emails seem to be a part of the User entity, and something that you would often call via $user->getEmailList().
TL;DR
I would do this in two queries when populating the User entity (sketched below):
select all you need from the Users table and apply it to the User entity
select all the user's emails from the UserEmails table and apply them to the User entity
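A minimal sketch of that two-query population with PDO (the User class and its setters are illustrative; the table and column names come from the question):
<?php
// Query 1: the User row itself.
$stmt = $pdo->prepare('SELECT id, username FROM `user` WHERE id = ?');
$stmt->execute([$userId]);
$data = $stmt->fetch(PDO::FETCH_ASSOC);

$user = new User();
$user->setId($data['id']);
$user->setUsername($data['username']);

// Query 2: all of the user's emails, applied to the same entity.
$stmt = $pdo->prepare('SELECT email FROM `user-emails` WHERE person_id = ?');
$stmt->execute([$userId]);
$user->setEmailList($stmt->fetchAll(PDO::FETCH_COLUMN));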
P.S. You might want to look at the data mapper pattern for the "how" part.
In my opinion you should fetch all your fields at once, and divide queries in a way that makes your code easier to read/manage.
When we're talking about one query or two, the difference is usually negligible unless the combined query (with JOINs or whatever) is overly complex. Usually an index or two is the solution to a very slow query.
If we're talking about one vs hundreds or thousands of queries, that's when the connection/transmission overhead becomes more significant, and reducing the number of queries can make an impact.
It seems that your framework suffers from premature optimization. You are hyper-concerned about fetching too many fields from a row, but why? Do you have thousands of columns or something?
The time consuming part of your query is almost always the lookup, not the transmission of data. You are causing the database to do the "hard" part over and over again as you pull one field at a time.

MYSQL output multiple rows with just a single row in mysql database

I have data that should produce one output row for each social medium the user interacted with.
There are 4 interactions: fblike_point, fbshare_point, tweet_point, and follow_point.
So let's say I've interacted with fblike_point and tweet_point, judging from the data below.
The output should therefore have 2 rows, since I've interacted with fblike_point and tweet_point.
Output:
2013-05-14 | fblike_point
2013-05-14 | tweet_point
If I interacted 4 times, it should output 4 rows with the corresponding social media interactions.
Well, I can manage to do this, but it feels redundant; for example, I'm running four separate MySQL queries in PHP to select the data:
SELECT date_participated, fblike_point FROM table WHERE fblike_point = 1
SELECT date_participated, fbshare_point FROM table WHERE fbshare_point = 1
SELECT date_participated, tweet_point FROM table WHERE tweet_point = 1
SELECT date_participated, follow_point FROM table WHERE follow_point = 1
Is there a shorter way to do this?
If I interacted 4 times, it should output 4 rows
With your data schema, you'd either need the four distinct queries you quoted, or a UNION over these.
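A sketch of the UNION variant, keeping the question's placeholder table name (backticked, since table is a reserved word); the string literal in the second column labels which interaction each row represents:
SELECT date_participated, 'fblike_point' AS interaction
  FROM `table` WHERE fblike_point = 1
UNION ALL
SELECT date_participated, 'fbshare_point'
  FROM `table` WHERE fbshare_point = 1
UNION ALL
SELECT date_participated, 'tweet_point'
  FROM `table` WHERE tweet_point = 1
UNION ALL
SELECT date_participated, 'follow_point'
  FROM `table` WHERE follow_point = 1;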
it feels redundant
This is redundant because of the way your schema is organized. If you want to treat these different interactions alike (which makes a lot of sense), you'd want an extra table for them, with one column identifying the row of your original table it refers to, and a second column (probably of an ENUM type) identifying the social medium. Together they would form the primary key of that table.
You can then create a VIEW from the actual tables which looks just like your table does now. That way you can maintain compatibility to existing queries and still provide more flexible queries for those cases where you need them.
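A sketch of what that could look like; all names here are illustrative, and the VIEW reproduces the old wide layout for existing queries:
-- One row per interaction, instead of four flag columns.
CREATE TABLE interactions (
    entry_id    INT NOT NULL,  -- refers to the original table's row
    interaction ENUM('fblike_point', 'fbshare_point',
                     'tweet_point', 'follow_point') NOT NULL,
    PRIMARY KEY (entry_id, interaction)
);

-- The output from the question becomes a single join:
SELECT t.date_participated, i.interaction
FROM original_table t
INNER JOIN interactions i ON i.entry_id = t.id;

-- Compatibility view that emulates the old flag columns:
CREATE VIEW old_layout AS
SELECT t.id, t.date_participated,
       COALESCE(MAX(i.interaction = 'fblike_point'), 0)  AS fblike_point,
       COALESCE(MAX(i.interaction = 'fbshare_point'), 0) AS fbshare_point,
       COALESCE(MAX(i.interaction = 'tweet_point'), 0)   AS tweet_point,
       COALESCE(MAX(i.interaction = 'follow_point'), 0)  AS follow_point
FROM original_table t
LEFT JOIN interactions i ON i.entry_id = t.id
GROUP BY t.id, t.date_participated;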

Multilang catalog (with custom fields) DB structure design

Soon I'll be working on a catalog (PHP + MySQL) that will have multilang content support, and I'm now considering the best approach to the database structure. At the moment I see 3 ways of handling multiple languages:
1) Having separate tables for each language's data, i.e. schematically it'll look like this:
There will be one table, Main_Content_Items, storing basic data that cannot be translated, like ID, creation_date, hits, votes and so on. There will be only one of these, and it will refer to all languages.
And here are the tables that will be duplicated for each language:
Common_Data_LANG table (example: common_data_en_us), storing common/"static" fields that can be translated but are present for any catalog item: title, desc and so on...
Extra_Fields_Data_LANG table, storing extra-field data that can be translated but can differ for custom item groups, i.e.: | id | item_id | field_type | value | ...
Then, on an item request, we look in the table matching the user/default language and join the translatable data with the main_content table.
Pros:
we can update the "main" data (i.e. hits, votes...), which is updated most often, with only one query
we don't need to duplicate data 4 or more times if we have 4 or more languages, compared with a structure using a single table with a 'lang' field. So MySQL queries would go through a 100,000-record catalog (for example) rather than 400,000 or more
Cons:
+2 tables for each language
2) Using a 'lang' field in the content tables:
Main_Content_Items table (storing basic data that cannot be translated, like ID, creation_date, hits, votes and so on...)
Common_Data table (storing common/"static" fields that can be translated but are present for any catalog item: | id | item_id | lang | title | desc | and so on...)
Extra_Fields_Data table (storing extra-field data that can be translated but can differ for custom item groups, i.e.: | id | item_id | lang | field_type | value | ...)
So we'll join common_data and extra_fields to main_content_items according to the 'lang' field.
Pros:
we can update the "main" data (i.e. hits, votes...), which is updated most often, with only one query
we only need 3 tables for content data
Cons:
the common_data and extra_fields tables are filled with data for all languages, so they are X times bigger and queries run slower
3) Same as the 2nd way, but with the Main_Content_Items table merged into Common_Data, which has the 'lang' field:
Pros:
...?
Cons:
we need to update the "main" data (i.e. hits, votes...), which is updated most often, once for every language
the common_data and extra_fields tables are filled with data for all languages, so they are X times bigger and queries run slower
I'll be glad to hear suggestions about "what is better" and "why"? Or are there better ways?
Thanks in advance...
I've given a similar answer in this question and highlighted the advantages of this technique (it would, for example, be important for me to let the application decide on the language and build the query accordingly, only changing the lang parameter in the WHERE clause of the SQL query).
This gets pretty close to your second solution. I didn't quite get the "extra_fields" part, but if it makes sense, you could(!) merge it into the common_data table. I would advise you against the first idea, since there will be too many tables and it can be easy to lose track of the items in there.
To your edit: I still consider the second approach the better one (it's my opinion, so it's relative ;)). I'm no expert on optimization, but I think that with proper indexes and proper table structure, speed should not be a problem. As always, the best way to find the most effective approach is to try both methods and see which is best, since speed will vary with data, structure, and so on.
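For illustration, a minimal sketch of a query against the second structure (table and column names are taken from the question; only the :lang parameter changes per request):
SELECT m.id, m.hits, m.votes,
       c.title, c.`desc`          -- `desc` backticked: reserved word
FROM Main_Content_Items m
INNER JOIN Common_Data c ON c.item_id = m.id
WHERE c.lang = :lang;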

One ID for every database column, how to do?

I'm working on a food database; every food has a list of properties (fats, energy, vitamins, etc.).
These properties are spread across 50 different columns for proteins, fat, carbohydrates, vitamins, elements, etc. (there are a lot of them).
The number of columns could increase in the future, but not by much; 80 would be an extreme case.
Each column needs an individual reference to one bibliography entry out of a whole list in another table (needed to check whether the value is reliable or not).
The IDs should contain a number, a NULL value, or 0 for one specific exception reference (which will point to another table).
I've thought of some solutions, but they are very different from each other, and I'm a rookie with databases, so I have no idea which is best.
Consider value_1 as proteins, value_2 as carbohydrates, etc.
The best (I hope) 2 alternatives I thought are:
(1) Create one varchar(255?) column holding all 50 IDs, something like this:
column energy (7.00)
column carbohydrates (89.95)
column fats (63.12)
column value_bibl_ids (165862,14861,816486) ## as a varchar
etc...
In this case, I can split it on "," into an array and check the IDs, but I'm still worried about coding practicality... this could save a lot of columns, but I don't know how practical it would be in terms of scalability either.
Mainly, I thought this option would be useful for query optimization (I hope!).
(2) Simply use an additional ID column for every value, so:
column energy (7.00)
column energy_bibl_id (165862)
column carbohydrates (89.95)
column carbohydrates_bibl_id (14861)
column fats (63.12)
column fats_bibl_id (816486)
etc...
That is a hefty number of columns, but it's much clearer than the first option, especially regarding the relation between each value column and its ID.
(3) Create a relational table between the values and the bibliographies, so:
table values
energy
carbohydrates
fats
value_id --> point to table values_and_bibliographies val_bib_id
table values_and_bibliographies
val_bib_id
energy_id --> point to table bibliographies biblio_id
carbohydrates_id --> point to table bibliographies biblio_id
fats_id --> point to table bibliographies biblio_id
table bibliographies
biblio_id
biblio_name
biblio_year
I don't know if these are the best solutions, and I'd be grateful if someone could shed some light on this!
You need to normalize that table. What you are doing is madness and will cause you to lose hair. They are called relational databases so you can do what you want without adding columns: you structure things so that you add rows instead.
Please use real names and we can whip a schema out.
edit: Good edit. #3 is getting close to a sane design, but you are still very unclear about what a bibliography is doing in a food schema! I think this is what you want: you can have a food and its components linked to a bibliography. I assume a bibliography is something like a recipe?
FOODS
id | name
1  | broccoli
2  | chicken

COMPONENTS
id | name
1  | carbs
2  | fat
3  | energy

BIBLIOGRAPHIES
id | name         | year
1  | chicken soup | 1995

FOOD_COMPONENTS (links foods to their components)
id | food_id | component_id | bib_id | value
1  | 1       | 1            | 1      | 25 grams
2  | 1       | 2            | 1      | 13 ounces
So to get the data you use a join:
SELECT *
FROM FOOD_COMPONENTS fc
INNER JOIN COMPONENTS c ON fc.component_id = c.id
INNER JOIN FOODS f ON fc.food_id = f.id
INNER JOIN BIBLIOGRAPHIES b ON fc.bib_id = b.id
WHERE b.name = 'chicken soup';
You seriously need to consider redesigning your database structure: it isn't recommended to keep adding columns to a table whenever you want to store additional data that relates to it.
In a relational database you can relate tables to one another through foreign keys. Since you want to store a bunch of values that relate to your data, create a new table (called values or whatever), and then use the id from your original table as a foreign key in the new table.
The design you have proposed will make writing queries a major headache, not to mention the abundance of NULL values you will have in your table, assuming you don't need to fill every column.
Here's one approach you could take to allow you to add attributes all day long without changing your schema:
Table: Food - each row is a food you're describing
Id
Name
Description
...
Table: Attribute - each row is a numerical attribute that a food can have
Id
Name
MinValue
MaxValue
Unit (probably a 'repeating group', so should technically be in its own table)
Table: Bibliography - I don't know what this is, but you do
Id
...
Table: FoodAttribute - one record for each instance of a food having an attribute
Food
Attribute
Bibliography
Value
So you might have the following records
Food #1 = Cheeseburger
Attribute #1 = Fat (Unit = Grams)
Bibliography #1 = whatever relates to cheeseburgers and fat
Then, if a cheeseburger has 30 grams of fat, there would be an entry in the FoodAttribute table with 1 in the Food column, 1 in the Attribute column, a 1 in the Bibliography column, and 30 in the Value column.
(Note, you may need some other mechanisms to deal with non-numeric attributes.)
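As a sketch, the schema above might look like this in MySQL DDL (the column types are assumptions):
CREATE TABLE Food (
    Id          INT AUTO_INCREMENT PRIMARY KEY,
    Name        VARCHAR(100) NOT NULL,
    Description TEXT
);

CREATE TABLE Attribute (
    Id       INT AUTO_INCREMENT PRIMARY KEY,
    Name     VARCHAR(100) NOT NULL,
    MinValue DECIMAL(10,2),
    MaxValue DECIMAL(10,2),
    Unit     VARCHAR(20)   -- technically belongs in its own table
);

CREATE TABLE Bibliography (
    Id INT AUTO_INCREMENT PRIMARY KEY
    -- whatever else a bibliography needs
);

-- One record per food-attribute pair, with its source and value.
CREATE TABLE FoodAttribute (
    Food         INT NOT NULL,
    Attribute    INT NOT NULL,
    Bibliography INT NOT NULL,
    Value        DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (Food, Attribute),
    FOREIGN KEY (Food)         REFERENCES Food(Id),
    FOREIGN KEY (Attribute)    REFERENCES Attribute(Id),
    FOREIGN KEY (Bibliography) REFERENCES Bibliography(Id)
);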
Read about Data Modeling and Database Normalization for more info on how to approach these types of problems...
Appending more columns to a table isn't recommended or popular in the DB world, except with a NoSQL system.
Elaborate your intentions please :)
Why, for the love of $deity, are you doing this by columns? That way lies madness!
Decompose this table into rows, then put a column on each row. Without knowing more about what this is for and why it is like it is, it's hard to say more.
I re-read your question a number of times, and I believe you are in fact attempting a relational schema; your concern is with the number of columns (you mention possibly 80) associated with a table. I assure you that 80 columns on a table is fine from a computational perspective. Your database can handle it. From a coding perspective, it may be high.
Proposed (1) will fail when you want to add a column. You're effectively storing all your columns in a single comma-delimited column. Bad.
I don't understand (2). It sounds the same as (3).
(3) is correct in spirit, but your example is muddled and unclear. Whittle your problem down to a simple case with five columns or so and edit your question or post again.
In short, don't worry about the number of columns right now. It's low on the priority list.
If you have no need to form queries based on the arbitrary key/value pairs you'd like to add to every record, you could in a pinch serialize()/unserialize() an associative array and put that into a single field.
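For example (a sketch; $pdo, the foods table, the extra column and the variable names are all made up for illustration):
<?php
// Store arbitrary extra properties in one column via serialize().
$extra = ['vitamin_c' => 12.5, 'source_note' => 'lab data'];

$stmt = $pdo->prepare('UPDATE foods SET extra = ? WHERE id = ?');
$stmt->execute([serialize($extra), $foodId]);

// ...and read them back later:
$extra = unserialize($row['extra']);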
