I want to code a basic search box for my PHP-based online shopping website. The problem is that the data is spread across 50 tables, categorised by product type, i.e.
table 1 - Mobile Phones
table 2 - Laptops
...
table 50 - Air Conditioners
I can code it with a query chain like:
SELECT * FROM table1
if 0 rows returned
SELECT * FROM table2
if 0 rows returned
next
... up to table 50
But this can slow the website down, as each keypress can lead to up to 50 query executions. Is there anything else I can do about it?
Options:
1) Normalise your tables so there's just one table to search (may be difficult if different products have different fields)
2) Use SQL Unions (can be very slow):
SELECT column_name(s) FROM table1
UNION
SELECT column_name(s) FROM table2;
3) Query each table, store results in array, use usort to sort the array and then output them.
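Option 3 can be sketched quickly. Below is a minimal, hypothetical example using SQLite in Python (the table names, columns, and sample rows are invented for illustration; the equivalent PHP version would use `usort()` for the final sort):

```python
import sqlite3

# Hypothetical schema: each product category lives in its own table
# with at least (name, price) columns, mirroring the 50-table layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mobile_phones (name TEXT, price REAL);
CREATE TABLE laptops       (name TEXT, price REAL);
INSERT INTO mobile_phones VALUES ('Acme Phone X', 199.0);
INSERT INTO laptops       VALUES ('Acme Book 13', 899.0);
""")

def search_all(term, tables):
    """Run one LIKE query per table, merge the rows, sort in code."""
    results = []
    for table in tables:
        rows = conn.execute(
            f"SELECT name, price FROM {table} WHERE name LIKE ?",
            (f"%{term}%",),
        ).fetchall()
        results.extend(rows)
    # Equivalent of PHP's usort(): order the merged result set by price.
    results.sort(key=lambda row: row[1])
    return results

print(search_all("Acme", ["mobile_phones", "laptops"]))
```

Note this still runs one query per table; it just merges and sorts in application code instead of abandoning the search after the first non-empty table.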
We have records with a count field on a unique id.
The columns are:
mainId = unique
mainIdCount = 1320 (this 'views' field gets a + 1 when the page is visited)
How can you insert all these mainIdCounts as separate records into another table IN ANOTHER DATABASE in one query?
Yes, I do mean 1320 times an insert with the same mainId! :-)
We actually have records that go over 10,000 per id. It just has to be like this.
It's a weird one, but we really do need copies of all these (just) counts like this.
The most straightforward way to do this is with a JOIN between your table and another row source that provides a set of integers. We match each row from the original table to as many rows from the integer set as needed to produce the desired number of copies.
As a brief example of the pattern:
INSERT INTO newtable (mainId,n)
SELECT t.mainId
, r.n
FROM mytable t
JOIN ( SELECT 1 AS n
UNION ALL SELECT 2
UNION ALL SELECT 3
UNION ALL SELECT 4
UNION ALL SELECT 5
) r
WHERE r.n <= t.mainIdCount
If mytable contains row mainId=5 mainIdCount=4, we'd get back rows (5,1),(5,2),(5,3),(5,4)
Obviously, the rowsource r needs to be of sufficient size. The inline view I've demonstrated here would return a maximum of five rows. For larger sets, it would be beneficial to use a table rather than an inline view.
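Here is the same pattern sketched with SQLite in Python (table and column names taken from the example above; SQLite stands in for MySQL, and the join condition is written as ON rather than WHERE, which is equivalent here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable  (mainId INT, mainIdCount INT);
CREATE TABLE newtable (mainId INT, n INT);
INSERT INTO mytable VALUES (5, 4);
""")

# Cross-join each row to an inline set of integers and keep only
# n <= mainIdCount, so a row with count 4 expands into 4 rows.
conn.execute("""
INSERT INTO newtable (mainId, n)
SELECT t.mainId, r.n
FROM mytable t
JOIN (SELECT 1 AS n UNION ALL SELECT 2
      UNION ALL SELECT 3 UNION ALL SELECT 4
      UNION ALL SELECT 5) r
  ON r.n <= t.mainIdCount
""")

rows = conn.execute("SELECT mainId, n FROM newtable ORDER BY n").fetchall()
print(rows)   # [(5, 1), (5, 2), (5, 3), (5, 4)]
```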
This leads to the followup question, "How do I generate a set of integers in MySQL",
e.g. Generating a range of numbers in MySQL
And getting that done is a bit tedious. We're looking forward to an eventual feature in MySQL that will make it much easier to return a bounded set of integer values; until then, having a pre-populated table is the most efficient approach.
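For what it's worth, engines that support recursive CTEs (SQLite, and MySQL from 8.0) can generate a bounded integer range without a helper table. A minimal sketch in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Recursive CTE generating the integers 1..5; raise the bound in the
# WHERE clause for a larger range.
rows = conn.execute("""
WITH RECURSIVE seq(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 5
)
SELECT n FROM seq
""").fetchall()
print([r[0] for r in rows])   # [1, 2, 3, 4, 5]
```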
I have a product table with 1.5m rows. I want to fetch products and filter on them (by category, brand, etc.).
I do it like this
(the scenario: I enter the category whose id is 35 and I want to fetch products from it and its subcategories):
SELECT distinct r_category as r_category,brand as brand,price as price,offer_id FROM Product where Product.status = 1 and Product.r_category IN (35,279,280,274,276,278,277,275,39,294,295,296,38,292,293,290,291,289,34,271,272,273,36,283,282,284,281,37,285,286,288,287,350,351,348,349,4)
This query gives me the categories, brands and prices of the matching products.
I get products in about 0.049 seconds, but when I run this query it rises to 7 seconds.
0.14223003387451 first query (with limit 20)
7.0965619087219 last query time ( about 957895 rows)
(My table is InnoDB, and I have indexes on r_category, brand, price, etc.)
Thanks for all
To decrease the query time you can:
Add indexes on all columns used in the WHERE clause (status and r_category in this example).
Add FORCE INDEX(r_category) -> SELECT distinct r_category as r_category,brand as brand,price as price,offer_id FROM Product FORCE INDEX(r_category) where Product.status = 1 and Product.r_category IN (35,279,280,274,276,278,277,275,39,294,295,296,38,292,293,290,291,289,34,271,272,273,36,283,282,284,281,37,285,286,288,287,350,351,348,349,4)
If possible, paginate your query with LIMIT and OFFSET. There are a lot of memory allocations when you receive 957,895 rows from the database in PHP; when your script uses less memory it will run a little faster.
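The pagination idea can be sketched like this (SQLite in Python, with an invented one-column version of the Product table; the real query would keep the full column list and IN clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (offer_id INT, r_category INT, status INT)")
# 50 hypothetical rows, all active and in category 35.
conn.executemany("INSERT INTO Product VALUES (?, 35, 1)",
                 [(i,) for i in range(50)])

def fetch_page(page, per_page=20):
    """Fetch one page of matching products instead of all rows at once."""
    offset = page * per_page
    return conn.execute(
        "SELECT offer_id FROM Product "
        "WHERE status = 1 AND r_category IN (35) "
        "LIMIT ? OFFSET ?",
        (per_page, offset),
    ).fetchall()

print(len(fetch_page(0)))   # 20
print(len(fetch_page(2)))   # 10 (only 50 rows in total)
```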
I'm struggling a bit on the best way to do this with as little performance hit as possible.
Here's the setup...
Search results page with search refining filters that make an AJAX call to a PHP handler which returns a new (refined) set of results.
I have 4 tables that contain all of the data I need to connect to in the PHP handler code.
Table 1 - Main table of records with main details
Table 2 - Ratings for each product from professional rating company #1
Table 3 - Ratings for each product from professional rating company #2
Table 4 - Ratings for each product from professional rating company #3
The refiners on the search results page are jQuery sliders with ranges from the lowest allowed rating to the highest for each.
When a slider handle is moved, a new AJAX call is made with the new value(s) and the database query will run to create a fresh set of refined results.
Getting the data I need from Table 1 is the easy part. What I'm struggling with is how to efficiently include a join on the other 3 tables and only picking up rows that match the refining values/ranges. Table 2, 3, and 4 all have multiple columns for year (2004-2012) and when I made an initial attempt to put it all into one query, it bogged down.
Table 2, 3, and 4 hold the various ratings for each record in Table 1.
The columns in Table 2, 3, and 4 are...
id - productID - y2004 - y2005 - y2006 - y2007 - ... you get the idea.
Each year column has a numeric value for each record (default is 0).
What I need to do is efficiently select records that match the refiner ranges selected by the user across all 4 tables at once.
An example refiner search would be...get all records from Table 1 where price is between $25 and $50 AND where Table 2 records have a rating (from any year/column) between 1 - 4 AND where Table 3 records have a rating (from any year/column) between 80 - 100 AND where Table 4 records have a rating (from any year/column) between 80 - 100.
Any advice on how to set this up with as much performance as possible?
My suggestion would be to use a different table structure. You should merge Table 2, 3 and 4 into a single ratings table with the following structure:
id | productID | companyID | year | rating
Then you could rewrite your query as:
SELECT p.*
FROM products p
JOIN ratings r ON p.id = r.productID
WHERE p.price BETWEEN 25 AND 50
AND (
( r.companyID = 1 AND r.rating BETWEEN 1 AND 4 )
OR ( r.companyID = 2 AND r.rating BETWEEN 80 AND 100 )
OR ( r.companyID = 3 AND r.rating BETWEEN 80 AND 100 )
)
GROUP BY p.id
HAVING COUNT(DISTINCT r.companyID) = 3
The GROUP BY/HAVING at the end ensures each returned product has a qualifying rating from all three companies, not just any one of them.
This way the performance would surely increase. Also, your tables will be more scalable, both with the years and the number of companies.
One more thing: if you have a lot of fields in your products table, it might be more useful to execute 2 queries instead of joining. The reason for this is that you are fetching redundant data - every joined row will have the columns for product, even though you only need it once. This is a side-effect of joins, and there is probably a performance threshold where it will be more useful to query twice than to join. It is up to you to decide if/when that is the case.
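A small end-to-end sketch of the merged-ratings approach, using SQLite in Python with invented sample data (product 1 satisfies all three companies' ranges, product 2 only one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INT PRIMARY KEY, price REAL);
CREATE TABLE ratings  (productID INT, companyID INT, year INT, rating REAL);
INSERT INTO products VALUES (1, 30.0), (2, 40.0);
INSERT INTO ratings VALUES
 (1, 1, 2010, 3), (1, 2, 2010, 90), (1, 3, 2010, 85),
 (2, 1, 2010, 3);
""")

# HAVING COUNT(DISTINCT companyID) = 3 keeps only products with a
# qualifying rating from every company, not just one of them.
rows = conn.execute("""
SELECT p.id
FROM products p
JOIN ratings r ON p.id = r.productID
WHERE p.price BETWEEN 25 AND 50
  AND ( (r.companyID = 1 AND r.rating BETWEEN 1 AND 4)
     OR (r.companyID = 2 AND r.rating BETWEEN 80 AND 100)
     OR (r.companyID = 3 AND r.rating BETWEEN 80 AND 100) )
GROUP BY p.id
HAVING COUNT(DISTINCT r.companyID) = 3
""").fetchall()
print(rows)   # [(1,)]
```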
I took over managing an internal website for the company I'm working for, and I need to get data out of a MySQL database. The problem I'm encountering is that the data is in 6 different tables, all with the same fields, but the rows are all unique (a row starts in one table and then gets completely moved to a different table after it is processed by an employee).
Is there an easy way to query against all 6 at once? It would also be useful to be able to retrieve the title of the table it came from.
I'm using PHP to run the query and display it. Would it be better to create another table that defines where all the rows are, have a unique id and then another field for which table it's in?
To complete this query, use UNION ALL. Note that MySQL wants * qualified with the table name when it appears alongside other select-list items:
select 'Table1' as TableName, Table1.* from Table1
union all
select 'Table2', Table2.* from Table2
union all
select 'Table3', Table3.* from Table3
...and so on
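The pattern can be sketched with SQLite in Python (table names and rows are invented; the literal string column records which table each row came from):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (id INT, item TEXT);
CREATE TABLE Table2 (id INT, item TEXT);
INSERT INTO Table1 VALUES (1, 'new order');
INSERT INTO Table2 VALUES (2, 'processed order');
""")

# UNION ALL stitches the per-table results together; the constant
# 'Table1'/'Table2' column labels the source table of each row.
rows = conn.execute("""
SELECT 'Table1' AS TableName, id, item FROM Table1
UNION ALL
SELECT 'Table2', id, item FROM Table2
ORDER BY id
""").fetchall()
print(rows)
# [('Table1', 1, 'new order'), ('Table2', 2, 'processed order')]
```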
For better database design, you would want one table with all the rows in it, plus a designated Status table linked via a StatusID column that says what stage a given row is in. A table for each stage in a process is a poor design and will only lead to massive headaches down the road.
If you can't reorganize the tables so that you have just one with all rows and a marker for where in the process they are, I would go for a UNION-approach. Ie:
SELECT 'Data from Table 1', t1.field1, ...
FROM Table1 t1
UNION
SELECT 'Data from Table 2', t2.field1, ...
FROM Table2 t2
UNION
(Table3, 4, 5 and 6 in the same manner)
....
That way you can see where the data is originating from and you get all 6 at once. Just remember that you have to have the exact same field list in all parts of the UNION.
You could create a code generator that generates SQL statements to query the 6 tables. The generator would create a UNION of 6 selects and add a "table" column to each select with a constant value equal to the name of the table queried. That would make writing the statements easy, though I wouldn't say that writing the generator would be.
I have a table split across 100 databases in MySQL (i.e. the 1st x rows of the table are in database_1, the 2nd x rows in database_2, ..., the last x rows in database_100).
Each table has a row whenever a user visits a friend for a game.
The columns are iuin, logtime, beuin.
iuin is the user id of the visitor.
beuin is the user id of the friend who was visited.
logtime is when the visit was made.
I would like to find the # of distinct friends who were visited during a week.
There are roughly 300k distinct users who are visited per day.
However, when I extended my code to calculate for a week, I ran out of memory.
My code basically does an SQL query using SELECT DISTINCT beuin for a selected week for the table in each database. I store all the beuin in an array if it's not already stored (so I count distinct friends who were visited), and return the size of the array at the end.
FYI, I can't edit the database around such as joining all the tables in different databases into one table.
Are there any alternative ways I can do this?
Thanks
It's hard to say much without seeing your code, but I think you can solve this problem inside MySQL itself. My quick solution:
Create a table in the first database - CREATE TABLE IF NOT EXISTS users_ids (user_id INT NOT NULL PRIMARY KEY);
Truncate users_ids
Run 100 queries like INSERT IGNORE INTO db1.users_ids SELECT DISTINCT beuin FROM db1.table1 WHERE logtime BETWEEN ... ; (one per database, with the selected week in the BETWEEN; the primary key plus INSERT IGNORE deduplicates across all databases)
SELECT COUNT(*) FROM users_ids;
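A sketch of the whole approach with SQLite in Python (two invented per-database visit tables stand in for the 100; SQLite's INSERT OR IGNORE plays the role of MySQL's INSERT IGNORE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits_db1 (iuin INT, beuin INT, logtime TEXT);
CREATE TABLE visits_db2 (iuin INT, beuin INT, logtime TEXT);
INSERT INTO visits_db1 VALUES (1, 10, '2012-01-02'), (2, 11, '2012-01-03');
INSERT INTO visits_db2 VALUES (3, 10, '2012-01-04'), (4, 12, '2012-01-05');
CREATE TABLE users_ids (user_id INT NOT NULL PRIMARY KEY);
""")

for table in ("visits_db1", "visits_db2"):
    # Duplicate beuin values (friend 10 appears in both tables) are
    # silently skipped thanks to the primary key + OR IGNORE.
    conn.execute(
        f"INSERT OR IGNORE INTO users_ids "
        f"SELECT DISTINCT beuin FROM {table} "
        f"WHERE logtime BETWEEN '2012-01-01' AND '2012-01-08'"
    )

count = conn.execute("SELECT COUNT(*) FROM users_ids").fetchone()[0]
print(count)   # 3 distinct friends visited that week (beuin 10, 11, 12)
```

The key point is that the deduplication happens in the database, so PHP never has to hold the full id set in memory.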