Multiple 'SELECT column WHERE ...' queries vs. selecting all the rows - php

I have to select 4 rows randomly from a table.
Is it better to randomly generate 4 ids and perform 4 requests of the form 'select column from table where id = ...'?
Or to select all the rows in one request and choose from them afterwards?

If you are able to generate random existing ids, I think the best approach is to use a clause like where id in (id1, id2, id3, id4). This returns the 4 records in a single query, so no unnecessary queries are run and no unnecessary records are fetched.
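A minimal sketch of that approach, with hypothetical table and column names (items, some_column):
SELECT some_column
FROM items
WHERE id IN (3, 17, 42, 88);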

As noted above, where id in (id1, id2, id3, id4) is the fastest way from the MySQL perspective. However, you will need some logic in the application to generate those ids: all 4 ids must exist, they should be randomly distributed, and you want to avoid duplicates. In the worst case you will be retrieving a list of all existing ids with a huge query, extracting 4 random values, and querying again.
With all that logic required, it can be wise to move the selection into MySQL:
SELECT * FROM foobar
ORDER BY RAND()
LIMIT 4;
You must understand that this is slow in MySQL, but you gain speed in the application logic and can be sure the random values are spread evenly over the whole table.
EDIT:
The comment asks whether PHP is faster at this task than MySQL. The answer is no.
It is not done by simply "using rand". You need an array containing all those ids in PHP. That means a huge query, lots of TCP traffic, a huge array to be built in PHP, and a huge btree to be built by the Zend engine. Then, with the ids, you must fire a second query to get the rows for those ids.

Although the RAND() function may be slow, so far I have not had significant problems with speed. My strategy is to join the table back to a query of itself that returns a list of random ids with a limit.
SELECT *
FROM table AS t1
JOIN (
    SELECT rowID
    FROM table
    ORDER BY RAND()
    LIMIT 4
) AS t2
WHERE t1.rowID = t2.rowID
A more robust solution also exists - try checking out this question (asked in 2010).

Related

How to order results by the order of values in an IN() clause in MySQL? [duplicate]

I am wondering if there is a way (possibly a better way) to order results by the order of the values in an IN() clause.
The problem is that I have 2 queries, one that gets all of the IDs and the second that retrieves all the information. The first creates the order of the IDs which I want the second to order by. The IDs are put in an IN() clause in the correct order.
So it'd be something like (extremely simplified):
SELECT id FROM table1 WHERE ... ORDER BY display_order, name
SELECT name, description, ... WHERE id IN ([id's from first])
The issue is that the second query does not return the results in the same order that the IDs are put into the IN() clause.
One solution I have found is to put all of the IDs into a temp table with an auto incrementing field which is then joined into the second query.
Is there a better option?
Note: As the first query is run "by the user" and the second is run in a background process, there is no way to combine the two into one query using subqueries.
I am using MySQL, but I'm thinking it might be useful to have it noted what options there are for other DBs as well.
Use MySQL's FIELD() function:
SELECT name, description, ...
FROM ...
WHERE id IN([ids, any order])
ORDER BY FIELD(id, [ids in order])
FIELD() will return the index, within the remaining parameters, of the one that is equal to the first parameter.
FIELD('a', 'a', 'b', 'c')
will return 1
FIELD('a', 'c', 'b', 'a')
will return 3
This will do exactly what you want if you paste the ids into the IN() clause and the FIELD() function in the same order.
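As a quick illustration of the pattern with made-up ids, reusing table1 from the question:
SELECT name, description
FROM table1
WHERE id IN (7, 3, 5)
ORDER BY FIELD(id, 7, 3, 5);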
See the following for how to get sorted data.
SELECT ...
FROM ...
WHERE zip IN (91709,92886,92807,...,91356)
  AND user.status = 1
ORDER BY provider.package_id DESC,
         FIELD(zip,91709,92886,92807,...,91356)
LIMIT 10
Two solutions that spring to mind:
order by case id when 123 then 1 when 456 then 2 else null end asc
order by instr(','||id||',',',123,456,') asc
(instr() is from Oracle; maybe you have locate() or charindex() or something like that)
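A fuller sketch of the CASE variant in MySQL syntax, again with table1 and two made-up ids:
SELECT name, description
FROM table1
WHERE id IN (123, 456)
ORDER BY CASE id WHEN 123 THEN 1 WHEN 456 THEN 2 END;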
If you want to do arbitrary sorting on a query using values supplied to the query in MS SQL Server 2008+, it can be done by creating a table on the fly and joining to it like so (using the nomenclature from the OP).
SELECT table1.name, table1.description ...
FROM (VALUES (id1,1), (id2,2), (id3,3) ...) AS orderTbl(orderKey, orderIdx)
LEFT JOIN table1 ON orderTbl.orderKey=table1.id
ORDER BY orderTbl.orderIdx
If you replace the VALUES statement with something else that does the same thing, but in ANSI SQL, then this should work on any SQL database.
Note:
The second column in the created table (orderTbl.orderIdx) is necessary when querying record sets larger than 100 or so. I originally didn't have an orderIdx column, but found that with result sets larger than 100 rows I had to sort by that column explicitly; at least in SQL Server Express 2014.
SELECT ORDER_NO, DELIVERY_ADDRESS
from IFSAPP.PURCHASE_ORDER_TAB
where ORDER_NO in ('52000077','52000079','52000167','52000297','52000204','52000409','52000126')
ORDER BY instr('52000077,52000079,52000167,52000297,52000204,52000409,52000126',ORDER_NO)
This worked really well.
And to get sorted data:
SELECT ...
FROM ...
ORDER BY FIELD(user_id,5,3,2,...,50) LIMIT 10
The IN clause describes a set of values, and sets do not have order.
Your solution with a join and then ordering on the display_order column is the most nearly correct solution; anything else is probably a DBMS-specific hack (or is doing some stuff with the OLAP functions in standard SQL). Certainly, the join is the most nearly portable solution (though generating the data with the display_order values may be problematic). Note that you may need to select the ordering columns; that used to be a requirement in standard SQL, though I believe it was relaxed as a rule a while ago (maybe as long ago as SQL-92).
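A sketch of that temp-table-plus-join approach in MySQL syntax; id_order is a hypothetical helper table, and table1 stands in for the table holding the detail columns:
CREATE TEMPORARY TABLE id_order (
    seq INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- records insertion order
    id  INT NOT NULL
);
INSERT INTO id_order (id) VALUES (7), (3), (5);   -- ids from the first query, in order
SELECT t.name, t.description
FROM id_order o
JOIN table1 t ON t.id = o.id
ORDER BY o.seq;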
Use MySQL's FIND_IN_SET() function. Note that, unlike FIELD(), it takes the list as a single comma-separated string:
SELECT *
FROM table_name
WHERE id IN (..,..,..,..)
ORDER BY FIND_IN_SET(column_name, '..,..,..,..');
For Oracle, John's solution using the instr() function works. Here's a slightly different solution that worked:
SELECT id
FROM table1
WHERE id IN (1, 20, 45, 60)
ORDER BY instr('1, 20, 45, 60', id)
I just tried to do this in MS SQL Server, where we do not have FIELD():
SELECT table1.id
...
INNER JOIN
(VALUES (10,1),(3,2),(4,3),(5,4),(7,5),(8,6),(9,7),(2,8),(6,9),(5,10)
) AS X(id,sortorder)
ON X.id = table1.id
ORDER BY X.sortorder
Note that I am allowing duplication too.
Give this a shot:
SELECT name, description, ...
WHERE id IN
(SELECT id FROM table1 WHERE...)
ORDER BY
(SELECT display_order FROM table1 WHERE...),
(SELECT name FROM table1 WHERE...)
The WHEREs will probably take a little tweaking to get the correlated subqueries working properly, but the basic principle should be sound.
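One way that tweaking might look, with table2 as a hypothetical name for the table holding name and description; each ORDER BY subquery is correlated on the outer row's id (a sketch, not tested against the OP's schema):
SELECT t2.name, t2.description
FROM table2 AS t2
WHERE t2.id IN (SELECT id FROM table1 WHERE ...)
ORDER BY
    (SELECT t1.display_order FROM table1 t1 WHERE t1.id = t2.id),
    (SELECT t1.name FROM table1 t1 WHERE t1.id = t2.id);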
My first thought was to write a single query, but you said that was not possible because one is run by the user and the other is run in the background. How are you storing the list of ids to pass from the user to the background process? Why not put them in a temporary table with a column to signify the order?
So how about this (a sketch of these steps follows):
The user interface bit runs and inserts values into a new table you create. It would insert the id, the position, and some sort of job number identifier.
The job number is passed to the background process (instead of all the ids).
The background process does a select from the table in step 1 and joins to it to get the other information that you require. It uses the job number in the WHERE clause and orders by the position column.
The background process, when finished, deletes from the table based on the job identifier.
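A rough sketch of that flow, with job_ids as a hypothetical table and 42 as a made-up job number:
-- created once
CREATE TABLE job_ids (
    job_no   INT NOT NULL,
    position INT NOT NULL,
    id       INT NOT NULL
);
-- step 1: the UI inserts the ordered ids under a job number
INSERT INTO job_ids (job_no, position, id) VALUES (42, 1, 7), (42, 2, 3), (42, 3, 5);
-- step 3: the background process joins and orders by position
SELECT t.name, t.description
FROM job_ids j
JOIN table1 t ON t.id = j.id
WHERE j.job_no = 42
ORDER BY j.position;
-- step 4: cleanup
DELETE FROM job_ids WHERE job_no = 42;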
I think you should store your data in a way that lets you simply do a join and have it come out right, so no hacks and complicated things going on.
I have, for instance, a "Recently played" list of track ids; on SQLite I simply do:
SELECT * FROM recently NATURAL JOIN tracks;
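For that join to come out in the right order, the recently table has to carry its own ordering column. A minimal sketch with assumed schemas, where playedAt supplies the order:
CREATE TABLE tracks   (trackId INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE recently (trackId INTEGER, playedAt INTEGER);
SELECT * FROM recently NATURAL JOIN tracks ORDER BY playedAt DESC;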

Repeated Insert copies on ID

We have records with a count field on a unique id.
The columns are:
mainId = unique
mainIdCount = 1320 (this 'views' field gets a + 1 when the page is visited)
How can you insert each of these mainIdCounts as separate records in another table IN ANOTHER DATABASE in one query?
Yes, I do mean inserting the same mainId 1320 times! :-)
We actually have records where an id goes over 10,000. It just has to be like this.
This is a weird one, but we really do need copies of all these counts like this, one row per count.
The most straightforward way to do this is with a JOIN operation between your table and another row source that provides a set of integers. We match each row from the original table to as many rows from the set of integers as needed to satisfy the desired result.
As a brief example of the pattern:
INSERT INTO newtable (mainId, n)  -- qualify as otherdb.newtable to target the other database
SELECT t.mainId
     , r.n
  FROM mytable t
  JOIN ( SELECT 1 AS n
         UNION ALL SELECT 2
         UNION ALL SELECT 3
         UNION ALL SELECT 4
         UNION ALL SELECT 5
       ) r
 WHERE r.n <= t.mainIdCount
If mytable contains the row mainId=5, mainIdCount=4, we'd get back the rows (5,1), (5,2), (5,3), (5,4).
Obviously, the row source r needs to be of sufficient size. The inline view demonstrated here returns a maximum of five rows per id. For larger sets, it would be beneficial to use a table rather than an inline view.
This leads to the follow-up question, "How do I generate a set of integers in MySQL?",
e.g. Generating a range of numbers in MySQL
Getting that done is a bit tedious. We look forward to an eventual feature in MySQL that will make it much easier to return a bounded set of integer values; until then, having a pre-populated table is the most efficient approach.
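One common way to pre-populate such a table is to cross join a small set of digits with itself. A sketch, with numbers as a hypothetical table name; this fills it with 1 through 1000:
CREATE TABLE numbers (n INT UNSIGNED NOT NULL PRIMARY KEY);
INSERT INTO numbers (n)
SELECT ones.d + tens.d * 10 + hundreds.d * 100 + 1
  FROM (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
        UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) ones
 CROSS JOIN (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
        UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) tens
 CROSS JOIN (SELECT 0 AS d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
        UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) hundreds;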

Fetching few records from mysql database having million entries

I have a table with a million rows. It has user id as its primary key. I have an array of 500 user ids.
I want to select all the records from the table whose user ids are in the array. One method I know is to join the array into a string and run an IN query by passing that string.
But I don't think that is the most efficient way to do it, so kindly suggest other ways.
I am assuming that your ids are integers. Maybe you are getting this list of ids from some other source, so that a join on the MySQL side is not a desired solution. If so, find the maximum and minimum ids present in your list of 500. You can do this on the PHP side. Once you have the max and min values, query the db with a where clause:
select ...
from table_name
where min_id <= id and id <= max_id
id is the primary key so the advantage is that it is already indexed.
I have done this in the past; I am not sure that my method is the most efficient.
I build a string out of the ids: where id = a or id = b or id = c ...
Then I prepend the select statement to it and do a fetchall.
My guess is that you're getting these user ids from another table and that you are storing them in an array. If this is correct, then you should change the query that fetches these user ids so that it uses a join instead.
Joins will help you there; a huge IN() list is not good practice.
You can learn about joins here : http://mysqljoin.com/
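If a join to the source table isn't possible because the ids really do arrive from outside, a middle ground is to load them into a temporary table and join on that instead of building a 500-item IN() list. A sketch with hypothetical users and wanted_ids tables:
CREATE TEMPORARY TABLE wanted_ids (user_id INT NOT NULL PRIMARY KEY);
INSERT INTO wanted_ids (user_id) VALUES (101), (205), (319);  -- ... all 500 ids
SELECT u.*
FROM users u
JOIN wanted_ids w ON w.user_id = u.user_id;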

Getting random results from large tables

I'm trying to get 4 random results from a table that holds approx 7 million records. Additionally, I also want to get 4 random records from the same table that are filtered by category.
Now, as you would imagine doing random sorting on a table this large causes the queries to take a few seconds, which is not ideal.
One other method I thought of for the non-filtered result set would be to have PHP pick some random numbers between 1 and 7,000,000 or so and then do an IN(...) in the query to grab only those rows - and yes, I know this method has a caveat in that you may get fewer than 4 rows if a record with one of those ids no longer exists.
However, the above method obviously will not work with the category filtering, as PHP doesn't know which record numbers belong to which category and hence cannot pick record numbers from within a category.
Are there any better ways I can do this? The only way I can think of would be to store the record ids for each category in another table, select random results from that, and then select only those record ids from the main table in a secondary query; but I'm sure there is a better way!?
You could of course use the RAND() function in a query with a LIMIT and a WHERE (for the category). That, however, as you pointed out, entails a scan of the table, which takes time, especially in your case due to the volume of data.
Your other alternative, again as you pointed out, of storing id/category_id in another table might prove a bit faster, but that table still needs a LIMIT and WHERE, and it will contain the same number of records as the master table.
A different approach (if applicable) would be to have a table per category and store the IDs in it. If your categories are fixed or do not change often, you should be able to use that approach. In that case you effectively remove the WHERE clause, and a RAND() with a LIMIT on each category table will be faster, since each category table contains only a subset of the records from your main table.
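A sketch of that layout with hypothetical names (category_books_ids as one per-category id table, main_table as the 7-million-row table); the RAND() scan touches only the small id table:
SELECT t.*
FROM (SELECT id FROM category_books_ids ORDER BY RAND() LIMIT 4) r
JOIN main_table t ON t.id = r.id;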
Some other alternatives would be to use a key/value store just for this operation. MongoDB or Google AppEngine can help with that and are really fast.
You could also go with a master/slave setup in MySQL. The slave replicates content in real time, but when you need to perform the expensive query you query the slave instead of the master, thus passing the load to a different machine.
Finally, you could go with Sphinx, which is a lot easier to install and maintain. You can then treat each of those category queries as a document search and let Sphinx randomize the results. This way you offload the expensive operation to a different layer and let MySQL continue with other operations.
Just some issues to consider.
Working off your random number approach (a sketch of these steps follows):
Get the max id in the database.
Create a temp table to store your matches.
Loop n times doing the following
Generate a random number between 1 and maxId
Get the first record with a record Id greater than the random number and insert it into your temp table
Your temp table now contains your random results.
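A rough sketch of those steps, with myTable and tmp_random as hypothetical names; the loop itself would live in the application:
CREATE TEMPORARY TABLE tmp_random LIKE myTable;
-- repeat n times from application code; INSERT IGNORE skips a duplicate pick
INSERT IGNORE INTO tmp_random
SELECT *
FROM myTable
WHERE ID >= FLOOR(1 + RAND() * (SELECT MAX(ID) FROM myTable))
ORDER BY ID
LIMIT 1;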
Or you could dynamically generate sql with a union to do the query in one step.
-- RAND() returns a value in [0,1), so scale it by the maximum id;
-- parentheses let each branch keep its own ORDER BY and LIMIT
(SELECT * FROM myTable WHERE ID >= FLOOR(RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
Note: my SQL may not be perfectly valid, as I'm not a MySQL guy, but the theory should be sound.
First you need to get the number of rows, something like this:
select count(1) from tbl where category = ?
then pick a random offset (LIMIT offsets are 0-based, so use 0 to $rowsNum - 1)
$offset = rand(0, $rowsNum - 1);
and select the row at that offset, keeping the same category filter
select * FROM tbl where category = ? LIMIT $offset, 1
This way you avoid missing ids. The only problem is that you need to run the second query several times. A union may help in that case.
For MySQL you can use RAND():
SELECT column FROM table
ORDER BY RAND()
LIMIT 4

MySQL query optimization

I'm trying to optimize a report query run on an ecommerce site. I'm pretty sure that I'm doing something stupid, since this query shouldn't be taking nearly as long to run as it does.
The query in question is:
SELECT inventories_name, inventories_code, SUM(shop_orders_inventories_qty) AS qty,
SUM(shop_orders_inventories_price) AS tot_price, inventories_categories_name,
inventories_price_list, inventories_id
FROM shop_orders
LEFT JOIN shop_orders_inventories ON (shop_orders_id = join_shop_orders_id)
LEFT JOIN inventories ON (join_inventories_id = inventories_id)
WHERE {$date_type} BETWEEN '{$start_date}' AND '{$end_date}'
AND shop_orders_x_response_code = 1
GROUP BY join_inventories_id, join_shop_categories_id
{$order}
{$limit}
It's basically trying to get total sales per item over a period of time; values in curly brackets are filled in via a form. It works fine for a period of a couple days, but querying a time interval of a week or more can take 30 seconds+.
I feel like it's joining way too many rows in order to calculate the aggregate values and sucking up huge amounts of memory, but I'm not sure how to limit it.
Note - I realize that I'm selecting fields which aren't in the group by, but they correspond 1-1 with inventory ID, which is in the group by.
Any suggestions?
-- Edit --
The current indices are:
inventories:
join_categories - BTREE
inventories_name, inventories_code, inventories_description - FULLTEXT
shop_orders_inventories:
shop_orders_inventories_id - BTREE
shop_orders:
shop_orders_id - BTREE
Two sequential left joins will take quite a long time on a big table. Try to use JOIN instead of LEFT JOIN (unless you have records in shop_orders with no matching records in shop_orders_inventories or inventories), or split this query into a couple of smaller ones. Also, by using SUM and GROUP BY you are forcing MySQL to create temp tables; you might want to increase the MySQL cache so those tables fit into memory (otherwise MySQL will dump them to disk, which will also increase SQL execution time).
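Applied to the query from the question, the inner-join version just swaps the join keywords (a sketch; only correct if every order has matching inventory and item rows):
SELECT ...
FROM shop_orders
JOIN shop_orders_inventories ON (shop_orders_id = join_shop_orders_id)
JOIN inventories ON (join_inventories_id = inventories_id)
WHERE ...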
The first and foremost rule to indexing is... index the columns that you will search on!
For each possible value of {$date_type}, create an index for that date column.
Once you have lots of data in the table (say 2 years or 100 weeks), a single week's data is 1% of the index, so it becomes a good starting point.
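For example (a sketch; shop_orders_date_placed is a stand-in for whichever date columns {$date_type} can take):
ALTER TABLE shop_orders ADD INDEX idx_date_placed (shop_orders_date_placed);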
Even though MySQL allows non-aggregates in the SELECT clause, I personally would sync the two lists:
SELECT inventories_name, inventories_code,
SUM(shop_orders_inventories_qty) AS qty,
SUM(shop_orders_inventories_price) AS tot_price,
inventories_categories_name, inventories_price_list, inventories_id
FROM ...
GROUP BY inventories_id, join_shop_categories_id, inventories_name,
inventories_code, inventories_categories_name, inventories_price_list
...
