SELECT FROM 100 tables in 1 database - php

I am trying to SELECT id, description, title FROM table1, table2, ..., table100.
Say I get this working, is it better for me to just combine all my tables in phpMyAdmin?
The problem is I have around 100 tables, all for different categories of books, so I want to keep them separated in their individual tables.
I am trying to make a search engine that searches all the books in the entire database. All tables have the same column names.
So all I am really trying to do is search the entire database's tables for an id, description, and title. My search works, but I can only search one table, and every solution I have found online only works efficiently with 2 or 3 tables.
Thanks in advance.

The best option is to redesign your database, merging everything into a single table with an additional "category" column.
In the meantime, you can create a view that UNIONs the tables, with an additional column for the category.
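A minimal sketch of such a view, assuming the per-category tables are named table1 ... table100 as in the question (the category labels here are made up for illustration):

CREATE VIEW all_books AS
SELECT id, description, title, 'fiction' AS category FROM table1
UNION ALL
SELECT id, description, title, 'history' AS category FROM table2
UNION ALL
-- ... one SELECT per remaining table ...
SELECT id, description, title, 'poetry' AS category FROM table100;

The search then becomes a single query over the view, e.g. SELECT id, description, title FROM all_books WHERE title LIKE '%...%'.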

I recommend redesigning the model and unifying these 100 tables into one, adding a new category column with an integer value rather than a string. That way, you can index the category column together with the other fields (id, description, title) to speed up the query.
This solution makes it much easier to avoid pain later.
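A minimal sketch of what that single table could look like; the names and types are illustrative, not from the original post:

CREATE TABLE books (
    id INT NOT NULL,
    description TEXT,
    title VARCHAR(255),
    category INT NOT NULL,                      -- integer category id, not a string
    PRIMARY KEY (id),
    INDEX idx_category_title (category, title)  -- composite index for category searches
);

An integer key keeps the index small, and the composite index lets MySQL narrow a search to one category before matching titles.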

I recommend keeping one table A with id, description, title, category and creating another table B with the categories. Table A has to have a foreign key to the categories table. Then create a query to retrieve the books with a specific category.
Example:
SELECT b.id, b.description, b.title, c.name AS category
FROM books b
JOIN categories c ON c.id = b.category_id
WHERE c.name = 'drama'
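For completeness, a hedged sketch of the two tables that query assumes; the column names are my guesses, not from the answer:

CREATE TABLE categories (
    id INT PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);

CREATE TABLE books (
    id INT PRIMARY KEY,
    description TEXT,
    title VARCHAR(255),
    category_id INT NOT NULL,
    FOREIGN KEY (category_id) REFERENCES categories(id)
);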

I think this comes down to the database design itself, as most here have mentioned. You have a few options, depending on how much time you have on your hands:
(Short term / quick fix) Create a central table with all your current fields plus a category column to differentiate between the current tables you have. The insert for each table will be something like "INSERT INTO newtable (id, description, title, category) SELECT id, description, title, 'Fiction' FROM table1;"
If your tables are incrementally named like table1, table2 up to table100, you could write a quick PHP script that iterates through the insert, incrementing the table name on each iteration until the last table; alternatively, you can generate the statements in SQL itself, as sketched below.
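If you'd rather not write the loop in PHP, here is a sketch that uses information_schema to generate one INSERT per table; run the statements it outputs as a second step. This assumes the table1...table100 naming and identical columns, and simply reuses the table name as the category label:

SELECT CONCAT(
    'INSERT INTO newtable (id, description, title, category) ',
    'SELECT id, description, title, ''', table_name, ''' FROM ', table_name, ';'
) AS stmt
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'table%';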
In the long run, you could invest in a JSON field that houses all your other data, excluding the keys that pertain to a single entry.


Add second (conditional) result from second table to SQL query

I have two tables in a database. One stores names/details of users with an index ID; the other stores articles they have written, which just keeps the user's ID as a reference (field author). So far so simple. I can easily query a list of articles and include in the query a request for the user's name and status:
SELECT a.name, a.status, s.* FROM articles s, author a WHERE s.author=a.id
The problem comes when I occasionally have a second author credit, referenced in field author2. Up till now I've been doing what I assume is a very inefficient second query when I iterate through the results, just to get the second author's name and status from the table (pseudocode):
while (fetch a row) {
    if (author2 != 0) {
        query("SELECT name, status FROM author WHERE id = author2");
    }
    // etc.
}
While this worked fine in PHP/MySQL (even if clunky), I'm forced to upgrade to PHP7/PDO and I'd like to get the benefits of unbuffered queries, so this nested query won't work. Obviously one simple solution would be to PDO->fetchAll() the entire result set first, then iterate the rows in a foreach loop and run these extra queries per row.
But it would be far more efficient to get that second bit of data somehow incorporated into the main query, pulling from the author table using the second ID (author2) as well as the main ID, so that there are name2 and status2 fields added to each row. I just cannot see how to do it...
It should be noted that while the primary author ID field is ALWAYS non-zero, the author2 field will contain zero if there is no second ID, and there is NO author ID 0 in the author table, so any solution would need to handle an author2 ID of 0 by providing null strings or something in those fields, rather than giving an error. (Or far less elegantly, a dummy author ID 0 with null data could be added to the author table, I suppose.)
Can anyone suggest a revised original query that can avoid such secondary queries?
Never use commas in the FROM clause. Always use proper, explicit, standard JOIN syntax.
For your query, use LEFT JOIN:
SELECT s.*,
       a1.name, a1.status,
       a2.name AS name2, a2.status AS status2
FROM articles s
LEFT JOIN author a1 ON s.author = a1.id
LEFT JOIN author a2 ON s.author2 = a2.id
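Since author2 holds 0 rather than NULL when there is no second author, the LEFT JOIN simply finds no author row for id 0 and returns NULL for name2 and status2. If you'd rather get empty strings, as the question suggests, a small variation (my addition, not part of the original answer):

SELECT s.*,
       a1.name, a1.status,
       COALESCE(a2.name, '') AS name2,
       COALESCE(a2.status, '') AS status2
FROM articles s
LEFT JOIN author a1 ON s.author = a1.id
LEFT JOIN author a2 ON s.author2 = a2.id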
Gordon Linoff's answer looks like what you need.
I would have added this as a comment but it is too long of a message...
I just have a question/comment regarding normalization of the database. Would there ever be an instance when there is an author3? If so then you should probably have an ArticleAuthor table. Since you are rebuilding the code anyway this may be an improvement to consider.
I don't know the names and data types of the information you are storing so this is a primitive example of the structure I would suggest.
Table Article
    ArticleID
    ArticleData...

Table Author
    AuthorID
    AuthorName
    AuthorStatus

Table ArticleAuthor
    ArticleID
    AuthorID
If the Status is dependent on the Author/Article combination, then AuthorStatus would be moved to the ArticleAuthor table, like this:
Table ArticleAuthor
    ArticleID
    AuthorID
    Status
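A hedged DDL sketch of that structure, with my own guesses for the data types; the GROUP_CONCAT query at the end shows how to pull any number of authors per article:

CREATE TABLE Article (
    ArticleID INT PRIMARY KEY,
    Title VARCHAR(255)              -- stands in for "ArticleData..."
);

CREATE TABLE Author (
    AuthorID INT PRIMARY KEY,
    AuthorName VARCHAR(255),
    AuthorStatus VARCHAR(50)
);

CREATE TABLE ArticleAuthor (
    ArticleID INT NOT NULL,
    AuthorID INT NOT NULL,
    PRIMARY KEY (ArticleID, AuthorID),
    FOREIGN KEY (ArticleID) REFERENCES Article(ArticleID),
    FOREIGN KEY (AuthorID) REFERENCES Author(AuthorID)
);

-- all authors of each article, comma-separated, however many there are
SELECT s.ArticleID, GROUP_CONCAT(a.AuthorName) AS authors
FROM Article s
LEFT JOIN ArticleAuthor aa ON aa.ArticleID = s.ArticleID
LEFT JOIN Author a ON a.AuthorID = aa.AuthorID
GROUP BY s.ArticleID;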

MySQL performance with large WHERE IN() clause

Let's say we have a table with 4 columns: id (int 11, indexed), title, content, category (varchar 5).
I have a user select a category. Each category can contain up to 999 objects. Using SELECT id FROM table WHERE category = ? I get a list of all objects.
I then have the user select/deselect some of the objects. After which I need to select the content of the remaining selected objects.
Now my question is: should I worry about performance when using SELECT content FROM table WHERE id IN($array)? Would it be better to use SELECT content FROM table WHERE category = ? AND id IN($array)? The idea being that I filter it down to at most 999 objects before performing the IN...
Does this make any sense? Or should I not be using the IN() at all?
It sounds like you always have content showing on the screen?
999 is a long list to put on the screen. Re-think your UI.
When selected/deselected, what happens? Do you gray out the content? If so, that is a UI issue, not a database issue. If you store the subset that is currently "selected", then how/where is that stored? And, do you want to store it after each select/deselect? Or wait until he clicks "Submit"?
In other words, I don't see why this is a database question.
Back to the queries in question:
INDEX(category)
SELECT ... FROM tbl WHERE category = ...;    -- this is optimal

PRIMARY KEY(id)
SELECT ... FROM tbl WHERE id IN (...);       -- optimal for an arbitrary set

INDEX(category, id)
SELECT ... FROM tbl WHERE category = ... AND id IN (...);
-- use this only if both parts are needed for filtering,
-- not as an optimization

MySQL Merge 3 Tables into 1

I am creating a search box in PHP, using MySQL as the database, but the search covers 3 tables: Colours, Products and Categories. These all have an ID number and can be linked. I have tried INNER JOIN, LEFT, RIGHT, everywhere, but no luck; the query will sometimes work, but spits out multiple items. So I am looking at creating a one-table-fits-all scenario where all the table field names are in one table that I can easily query. I have manually created the table, but is there any way of copying the data from the 3 tables into that main one? I do not mind doing it separately, with a query that only handles one table at a time, but I would love not to have to manually type all the data, as there are 600+ rows.
Here is the code I am currently trying to use:
SELECT
    categories.Product_Type, items_colors.ColourImageurl,
    items_list.description, items_list.Description2,
    items_list.title, items_list.id, categories.title AS title2,
    items_colors.itemID, items_colors.`Colour Name`
FROM items_list
LEFT JOIN categories ON categories.Product_Type = items_list.CatID
LEFT JOIN items_colors ON items_list.id = items_colors.itemID
WHERE items_list.visible = 1
  AND items_colors.`Colour Name` LIKE '%".$search."%'
categories defines what type of product you are selecting, items_list has a list of all the sub category names, and items_colors has a list of all the colour names that link to the items_list products. When I use this query it outputs 4 copies of one item and I'm not sure why.
If you are getting the data from a query, you can use a CREATE TABLE ... AS SELECT statement to create the new table with data from the old tables.
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
[(create_definition,...)]
[table_options]
[partition_options]
select_statement
Check here for more info: http://dev.mysql.com/doc/refman/5.1/en/create-table.html
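Applied to the question's tables, a hedged one-statement version might look like this; the join conditions are taken from the query above, so verify them against your schema:

CREATE TABLE combined AS
SELECT
    i.id, i.title, i.description, i.Description2,
    c.title AS category,
    col.`Colour Name` AS colour, col.ColourImageurl
FROM items_list i
LEFT JOIN categories c ON c.Product_Type = i.CatID
LEFT JOIN items_colors col ON col.itemID = i.id;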

Updating a many-to-many connector table all at once

Say I have two tables:
articles
categories
There's a many-to-many table that connects them.
CREATE TABLE cat2art (
    article_id INT,
    category_id INT
);
For a specific article, I have a 'new list' of category IDs, and we need to update the cat2art table with these.
Some categories got removed, some got added and some stayed where they were. What is the most effective way to update this table?
I could naively delete all records with the specified article_id and simply add them again. However, if I were to record a date in that same table tracking when an article was linked to a category, that information would be destroyed.
I'm looking for a great pattern that easily solves this issue. This question is specifically for PHP and MySQL, but answers in other languages are also fine provided they are applicable to PHP+MySQL as well.
Other systems support the MERGE statement, which would do exactly what you want.
In MySQL, however, you need at least two queries (it cannot delete and insert/update in a single statement):
DELETE FROM cat2art
WHERE article_id = $art_id
  AND category_id NOT IN ($new_cat_1, $new_cat_2, …);

INSERT IGNORE INTO cat2art
VALUES
    ($art_id, $new_cat_1),
    ($art_id, $new_cat_2),
    …;
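Since that is two statements, it may be worth wrapping them in a transaction so no reader ever sees a half-updated link set; a minimal sketch, assuming an InnoDB table and example values:

START TRANSACTION;

DELETE FROM cat2art
WHERE article_id = 42
  AND category_id NOT IN (1, 2, 3);  -- 42 and (1, 2, 3) are example values

INSERT IGNORE INTO cat2art (article_id, category_id)
VALUES (42, 1), (42, 2), (42, 3);

COMMIT;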
You can define (article_id, category_id) as a unique key, and when inserting a connection use the INSERT IGNORE syntax. That way, if the connection already exists, it will not be added again, nor will it update the existing record, and the create_date column stays untouched.
example:
INSERT IGNORE INTO cat2art (article_id, category_id, create_date)
VALUES(100,200,NOW());
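The table sketched in the question lacks both the unique key and the create_date column this answer relies on; a hedged ALTER to add them might look like:

ALTER TABLE cat2art
    ADD COLUMN create_date DATETIME,
    ADD UNIQUE KEY uq_article_category (article_id, category_id);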

group by mysql option

I am writing a converter to transfer data from old systems to new systems, using PHP + MySQL.
I have one table that contains millions of records with duplicate entries. I want to transfer that data into a new table with the duplicates removed. I am using the following query and pseudocode to perform this task:
INSERT INTO table2
SELECT * FROM table1
ON DUPLICATE KEY UPDATE customer_information = CONCAT('$firstName', ',', '$lastName')
It takes ages to process one table :(
I am wondering: is it possible to use GROUP BY to get all the grouped records automatically, rather than going through each record and checking for duplicates?
For example
SELECT * FROM table1 GROUP BY firstName, lastName
-- then insert only one record per group into table2, adding all the users'
-- first and last names into a column ALL_NAMES, comma-separated
EDIT
There are different records for each customer, with different information. A row counts as a duplicate if the user's first and last name are the same. In the new table, we will add just one row per customer, with the products they bought in separate columns (we have only 4 products).
I don't know what you are trying to do with customer_information, but if you just want to transfer the non-duplicated set of data from one table to another, this will work:
INSERT IGNORE INTO table2(field1, field2, ... fieldx)
SELECT DISTINCT field1, field2, ... fieldx
FROM table1;
DISTINCT will take care of rows that are exact duplicates. But if you have rows that are only partial duplicates (like the same last and first names but a different email) then IGNORE can help. If you put a unique index on table2(lastname,firstname) then IGNORE will make sure that only the first record with lastnameX, firstnameY from table1 is inserted. Of course, you might not like which record of a pair of partial duplicates is chosen.
ETA
Now that you've updated your question, it appears that you want to put the values of multiple rows into one field. Generally speaking, this is a bad idea: denormalizing your data this way makes it much less accessible. Also, if you are grouping by (lastname, firstname), every name within a group is identical, so an allnames column would hold nothing useful; because of this, my example uses allemails instead. In any event, if you really need to do this, here's how:
INSERT INTO table2(lastname, firstname, allemails)
SELECT lastname, firstname, GROUP_CONCAT(email) as allemails
FROM table1
GROUP BY lastname, firstname;
If they are really duplicate rows (every field is the same) then you can use:
select DISTINCT * from table1
instead of :
select * from table1
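Combined with the INSERT, the whole transfer then becomes a single statement (assuming table2 has the same column layout as table1):

INSERT INTO table2
SELECT DISTINCT * FROM table1;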
