Is it not advisable to move calculations from MySQL to PHP? - php

I got stuck with the question/problem I asked earlier – How to optimise a table for AVG query? It turns out that @MichaelT was right about one thing – calculating AVG is faster using PHP than MySQL (roughly 80% faster with 5M records on a 24 GB RAM machine).
Moving the work to PHP isn't always even an option. However, consider this code example (dataset size: 5M records).
The MySQL way.
1) aggregate data (creating a temporary table) (500ms)
CREATE TEMPORARY TABLE `temporary_grouped_data` AS
(
SELECT
`r1`.`id`,
`c1`.`wt`,
`c1`.`cpu`,
`c1`.`mu`,
`c1`.`pmu`
FROM
`requests` `r1`
INNER JOIN
`request_hosts` `rh1`
ON
`rh1`.`id` = `r1`.`request_host_id`
INNER JOIN
`request_uris` `ru1`
ON
`ru1`.`id` = `r1`.`request_uri_id`
INNER JOIN
`calls` `c1`
ON
`c1`.`id` = `r1`.`request_caller_id`
WHERE
1=1 {$sql_query['where']}
);
2) get overall AVG (300ms)
SELECT COUNT(`id`), MIN(`wt`), MAX(`wt`), AVG(`wt`), MIN(`cpu`), MAX(`cpu`), AVG(`cpu`), MIN(`mu`), MAX(`mu`), AVG(`mu`), MIN(`pmu`), MAX(`pmu`), AVG(`pmu`) FROM `temporary_grouped_data`;
3) calculate 95th percentile (200ms)
SELECT `wt` FROM `temporary_grouped_data` ORDER BY `wt` ASC LIMIT 1726, 1;
SELECT `cpu` FROM `temporary_grouped_data` ORDER BY `cpu` ASC LIMIT 1726, 1;
SELECT `mu` FROM `temporary_grouped_data` ORDER BY `mu` ASC LIMIT 1726, 1;
SELECT `pmu` FROM `temporary_grouped_data` ORDER BY `pmu` ASC LIMIT 1726, 1;
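(The fixed offset 1726 was presumably precomputed from the temporary table's row count; a minimal PHP sketch of one common convention, as an assumption rather than anything stated in the post:)
$row_count = 1817;                          // hypothetical COUNT(*) of temporary_grouped_data
$offset = (int) floor(0.95 * $row_count);   // = 1726 for 1817 rows
$sql = "SELECT `wt` FROM `temporary_grouped_data` ORDER BY `wt` ASC LIMIT {$offset}, 1";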
4) calculate mode (200ms)
SELECT `wt`, COUNT(`wt`) `quantity` FROM `temporary_grouped_data` GROUP BY `wt` ORDER BY `quantity` DESC LIMIT 1;
SELECT `cpu`, COUNT(`cpu`) `quantity` FROM `temporary_grouped_data` GROUP BY `cpu` ORDER BY `quantity` DESC LIMIT 1;
SELECT `mu`, COUNT(`mu`) `quantity` FROM `temporary_grouped_data` GROUP BY `mu` ORDER BY `quantity` DESC LIMIT 1;
SELECT `pmu`, COUNT(`pmu`) `quantity` FROM `temporary_grouped_data` GROUP BY `pmu` ORDER BY `quantity` DESC LIMIT 1;
The PHP way.
1) Get all the relevant records into an array (200ms).
SELECT
`r1`.`id`,
`c1`.`wt`,
`c1`.`cpu`,
`c1`.`mu`,
`c1`.`pmu`
FROM
`requests` `r1`
INNER JOIN
`request_hosts` `rh1`
ON
`rh1`.`id` = `r1`.`request_host_id`
INNER JOIN
`request_uris` `ru1`
ON
`ru1`.`id` = `r1`.`request_uri_id`
INNER JOIN
`calls` `c1`
ON
`c1`.`id` = `r1`.`request_caller_id`
2) Perform all the calculations (200ms).
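A minimal sketch of what step 2 might look like (an assumption, not the OP's actual code; it assumes the query result was fetched into $rows as associative arrays and that the metric columns hold integers):
function column_stats(array $rows, string $col): array {
    $values = array_column($rows, $col);
    sort($values);                                   // ascending order for the percentile
    $n = count($values);
    $counts = array_count_values($values);           // frequency table for the mode
    arsort($counts);
    return [
        'count' => $n,
        'min'   => $values[0],
        'max'   => $values[$n - 1],
        'avg'   => array_sum($values) / $n,
        'p95'   => $values[(int) floor(0.95 * $n)],  // same offset convention as the SQL above
        'mode'  => array_key_first($counts),         // PHP 7.3+
    ];
}
$stats = [];
foreach (['wt', 'cpu', 'mu', 'pmu'] as $col) {
    $stats[$col] = column_stats($rows, $col);
}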
The PHP approach is far faster. Is there any reason why I shouldn't use PHP to perform these calculations?

Shifting the work off to PHP means having to transfer the whole result set over the wire, which can be really bad depending on its size: 5M rows of four numeric columns is easily on the order of 100 MB or more per query (a rough estimate that depends on the column types and protocol overhead). Also, I am not a database person by any stretch of the imagination, but these results are unexpected. You should look into the possibility that you are doing things the wrong way in the database version.

Speed isn't everything. I'd put the work where it belongs.
Will it scale in PHP?
Also, with the PHP approach you have to transfer all the data from the DB to PHP. How much RAM does that cost, etc.?
Maybe your database isn't well optimized. Your DB's disks might be bad, etc.
Seriously, I'd leave it in the DB and check why it performs so badly.

Related

Single query that allows alias with its own limit

I would like to better optimize my code. I'd like to have a single query that allows an alias name to have its own limit and also include a result with no limit.
Currently I'm using two queries like this:
// ALL TIME //
$mikep = mysqli_query($link, "SELECT tasks.EID, reports.how_did_gig_go FROM tasks INNER JOIN reports ON tasks.EID=reports.eid WHERE `priority` IS NOT NULL AND `partners_name` IS NOT NULL AND mike IS NOT NULL GROUP BY EID ORDER BY tasks.show_date DESC;");
$num_rows_mikep = mysqli_num_rows($mikep);
$rating_sum_mikep = 0;
while ($row = mysqli_fetch_assoc($mikep)) {
    $rating_mikep = $row['how_did_gig_go'];
    $rating_sum_mikep += $rating_mikep;
}
$average_mikep = $rating_sum_mikep/$num_rows_mikep;
// AND NOW WITH A LIMIT 10 //
$mikep_limit = mysqli_query($link, "SELECT tasks.EID, reports.how_did_gig_go FROM tasks INNER JOIN reports ON tasks.EID=reports.eid WHERE `priority` IS NOT NULL AND `partners_name` IS NOT NULL AND mike IS NOT NULL GROUP BY EID ORDER BY tasks.show_date DESC LIMIT 10;");
$num_rows_mikep_limit = mysqli_num_rows($mikep_limit);
$rating_sum_mikep_limit = 0;
while ($row = mysqli_fetch_assoc($mikep_limit)) {
    $rating_mikep_limit = $row['how_did_gig_go'];
    $rating_sum_mikep_limit += $rating_mikep_limit;
}
$average_mikep_limit = $rating_sum_mikep_limit/$num_rows_mikep_limit;
This allows me to show an all-time average and also an average over the last 10 reviews. Is it really necessary for me to set up two queries?
Also, I understand I could get the sum in the query, but not all the values are numbers, so I've actually converted them in PHP; I left that code out to simplify what is displayed here.
All-time average and average over the last 10 reviews
In the best-case scenario, where your column how_did_gig_go is 100% numeric, a single query could work like so:
SELECT
AVG(how_did_gig_go) AS avg_how_did_gig_go
, SUM(CASE
WHEN rn <= 10 THEN how_did_gig_go
ELSE 0
END) / 10 AS latest10_avg
FROM (
SELECT
@num + 1 AS rn
, tasks.show_date
, reports.how_did_gig_go
FROM tasks
INNER JOIN reports ON tasks.EID = reports.eid
CROSS JOIN ( SELECT @num := 0 AS n ) AS v
WHERE priority IS NOT NULL
AND partners_name IS NOT NULL
AND mike IS NOT NULL
ORDER BY tasks.show_date DESC
) AS d
But unless all the "numbers" are in fact numeric, you are doomed to sending every row back from the server for PHP to process, unless you can clean up the data in MySQL somehow.
You might avoid sending all that data twice if you establish a way for your PHP to use only the top 10 from the whole list. There are probably ways of doing that in PHP.
If you wanted assistance in SQL to do that, then maybe having 2 columns would help; it would reduce the number of table scans.
SELECT
EID
, how_did_gig_go
, CASE
WHEN rn <= 10 THEN how_did_gig_go
ELSE 0
END AS latest10_how_did_gig_go
FROM (
SELECT
@num + 1 AS rn
, tasks.EID
, reports.how_did_gig_go
FROM tasks
INNER JOIN reports ON tasks.EID = reports.eid
CROSS JOIN ( SELECT @num := 0 AS n ) AS v
WHERE priority IS NOT NULL
AND partners_name IS NOT NULL
AND mike IS NOT NULL
ORDER BY tasks.show_date DESC
) AS d
In future (MySQL 8.x), ROW_NUMBER() OVER (ORDER BY tasks.show_date DESC) would be a better method than the "roll your own" row numbering (using @num + 1) shown before; a sketch follows.
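A hedged sketch of that MySQL 8 rewrite (assuming MySQL 8.0+ and the question's $link connection; otherwise the same tables and filters as above):
$sql = "SELECT EID, how_did_gig_go,
               CASE WHEN rn <= 10 THEN how_did_gig_go ELSE 0 END AS latest10_how_did_gig_go
        FROM (
            SELECT tasks.EID, reports.how_did_gig_go,
                   ROW_NUMBER() OVER (ORDER BY tasks.show_date DESC) AS rn
            FROM tasks
            INNER JOIN reports ON tasks.EID = reports.eid
            WHERE priority IS NOT NULL
              AND partners_name IS NOT NULL
              AND mike IS NOT NULL
        ) AS d";
$result = mysqli_query($link, $sql);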

Random content definitive method [duplicate]

How can I best write a query that selects 10 rows randomly from a total of 600k?
A great post handling several cases, from simple, to gaps, to non-uniform with gaps.
http://jan.kneschke.de/projects/mysql/order-by-rand/
For the most general case, here is how you do it:
SELECT name
FROM random AS r1 JOIN
(SELECT CEIL(RAND() *
(SELECT MAX(id)
FROM random)) AS id)
AS r2
WHERE r1.id >= r2.id
ORDER BY r1.id ASC
LIMIT 1
This supposes that the distribution of ids is equal, and that there can be gaps in the id list. See the article for more advanced examples.
SELECT column FROM table
ORDER BY RAND()
LIMIT 10
Not an efficient solution, but it works.
Simple query that has excellent performance and works with gaps:
SELECT * FROM tbl AS t1 JOIN (SELECT id FROM tbl ORDER BY RAND() LIMIT 10) as t2 ON t1.id=t2.id
This query on a 200K table takes 0.08s and the normal version (SELECT * FROM tbl ORDER BY RAND() LIMIT 10) takes 0.35s on my machine.
This is fast because the sort phase only uses the indexed ID column. You can see this behaviour in the EXPLAIN output of the two queries (plans not reproduced here):
SELECT * FROM tbl ORDER BY RAND() LIMIT 10;
SELECT * FROM tbl AS t1 JOIN (SELECT id FROM tbl ORDER BY RAND() LIMIT 10) as t2 ON t1.id=t2.id;
Weighted Version: https://stackoverflow.com/a/41577458/893432
I am getting fast queries (around 0.5 seconds) with a slow CPU, selecting 10 random rows from a non-cached, 2 GB, 400K-row MySQL database. See my code here: Fast selection of random rows in MySQL
$time= microtime_float();
$sql='SELECT COUNT(*) FROM pages';
$rquery= BD_Ejecutar($sql);
list($num_records)=mysql_fetch_row($rquery);
mysql_free_result($rquery);
$sql="SELECT id FROM pages WHERE RAND()*$num_records<20
ORDER BY RAND() LIMIT 0,10";
$rquery= BD_Ejecutar($sql);
$id_in = ''; // initialize so the first iteration's check works cleanly
while (list($id) = mysql_fetch_row($rquery)) {
    if ($id_in) $id_in .= ",$id";
    else $id_in = "$id";
}
mysql_free_result($rquery);
$sql="SELECT id,url FROM pages WHERE id IN($id_in)";
$rquery= BD_Ejecutar($sql);
while (list($id, $url) = mysql_fetch_row($rquery)) {
    logger("$id, $url", 1);
}
mysql_free_result($rquery);
$time= microtime_float()-$time;
logger("num_records=$num_records",1);
logger("$id_in",1);
logger("Time elapsed: <b>$time segundos</b>",1);
From the book:
Choose a Random Row Using an Offset
Still another technique that avoids problems found in the preceding
alternatives is to count the rows in the data set and return a random
number between 0 and the count. Then use this number as an offset
when querying the data set
$rand = "SELECT ROUND(RAND() * (SELECT COUNT(*) FROM Bugs))";
$offset = $pdo->query($rand)->fetch(PDO::FETCH_ASSOC);
$sql = "SELECT * FROM Bugs LIMIT 1 OFFSET :offset";
$stmt = $pdo->prepare($sql);
$stmt->execute( $offset );
$rand_bug = $stmt->fetch();
Use this solution when you can’t assume contiguous key values and
you need to make sure each row has an even chance of being selected.
It's a very simple, single-line query.
SELECT * FROM Table_Name ORDER BY RAND() LIMIT 0,10;
Well, if you have no gaps in your keys and they are all numeric, you can calculate random numbers and select those lines. But this will probably not be the case.
So one solution would be the following:
SELECT * FROM table WHERE key >= FLOOR(RAND() * (SELECT MAX(id) FROM table)) LIMIT 1
which basically ensures that you get a random number in the range of your keys and then select the next best key that is greater.
You have to do this 10 times.
However, this is NOT really random, because your keys will most likely not be distributed evenly.
It's really a big problem and not easy to solve while fulfilling all the requirements; MySQL's RAND() is the best you can get if you really want 10 random rows.
There is however another solution which is fast but also has a trade off when it comes to randomness, but may suit you better. Read about it here: How can i optimize MySQL's ORDER BY RAND() function?
The question is how random you need it to be.
Can you explain a bit more, so I can give you a good solution?
For example, a company I worked with had a solution where they needed absolute randomness extremely fast. They ended up pre-populating the database with random values that were selected descending and set to different random values afterwards.
If you hardly ever update, you could also fill an incrementing id so you have no gaps and can just calculate random keys before selecting... It depends on the use case!
How to select random rows from a table:
From here:
Select random rows in MySQL
A quick improvement over "table scan" is to use the index to pick up random ids.
SELECT *
FROM random, (
SELECT id AS sid
FROM random
ORDER BY RAND( )
LIMIT 10
) tmp
WHERE random.id = tmp.sid;
I improved the answer @Riedsio had. This is the most efficient query I could find on a large, uniformly distributed table with gaps (tested by getting 1000 random rows from a table that has > 2.6B rows).
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max := (SELECT MAX(id) FROM table)) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1)
Let me unpack what's going on.
@max := (SELECT MAX(id) FROM table)
I'm calculating and saving the max. For very large tables, there is a slight overhead in calculating MAX(id) each time you need a row.
SELECT FLOOR(RAND() * @max) + 1 as rand
This gets a random id.
SELECT id FROM table INNER JOIN (...) on id > rand LIMIT 1
This fills in the gaps. Basically if you randomly select a number in the gaps, it will just pick the next id. Assuming the gaps are uniformly distributed, this shouldn't be a problem.
Doing the union helps you fit everything into 1 query so you can avoid doing multiple queries. It also lets you save the overhead of calculating MAX(id). Depending on your application, this might matter a lot or very little.
Note that this gets only the ids and gets them in random order. If you want to do anything more advanced I recommend you do this:
SELECT t.id, t.name -- etc, etc
FROM table t
INNER JOIN (
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max := (SELECT MAX(id) FROM table)) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1) UNION
(SELECT id FROM table INNER JOIN (SELECT FLOOR(RAND() * @max) + 1 as rand) r on id > rand LIMIT 1)
) x ON x.id = t.id
ORDER BY t.id
All the best answers have already been posted (mainly those referencing the link http://jan.kneschke.de/projects/mysql/order-by-rand/).
I want to point out another speed-up possibility: caching. Think about why you need to get random rows. Probably you want to display some random post or random ad on a website. If you are getting 100 req/s, is it really necessary that each visitor gets different random rows? Usually it is completely fine to cache these X random rows for 1 second (or even 10 seconds). It doesn't matter if 100 unique visitors in the same second get the same random posts, because the next second another 100 visitors will get a different set of posts.
When using this caching, you can also use one of the slower solutions for getting the random data, since it will be fetched from MySQL only once per second regardless of your req/s.
I've looked through all of the answers, and I don't think anyone mentions this possibility at all, and I'm not sure why.
If you want utmost simplicity and speed, at a minor cost, then to me it makes sense to store a random number against each row in the DB. Just create an extra column, random_number, and set its default to RAND(). Create an index on this column.
Then when you want to retrieve a row generate a random number in your code (PHP, Perl, whatever) and compare that to the column.
SELECT * FROM tbl WHERE random_number >= :random LIMIT 1
I guess although it's very neat for a single row, for ten rows like the OP asked you'd have to call it ten separate times (or come up with a clever tweak that escapes me immediately).
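A hedged sketch of that setup (the PDO connection $pdo, table tbl, and index name are all illustrative; note that an expression default such as DEFAULT (RAND()) requires MySQL 8.0.13+, so this populates the column explicitly, and an ORDER BY is added so the index picks the nearest row):
$pdo->exec("ALTER TABLE tbl ADD COLUMN random_number DOUBLE");
$pdo->exec("UPDATE tbl SET random_number = RAND()");                  // populate existing rows
$pdo->exec("CREATE INDEX idx_random_number ON tbl (random_number)");

$stmt = $pdo->prepare("SELECT * FROM tbl WHERE random_number >= :random ORDER BY random_number LIMIT 1");
$stmt->execute([':random' => mt_rand() / mt_getrandmax()]);           // random float in [0, 1]
$row = $stmt->fetch(PDO::FETCH_ASSOC);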
I needed a query to return a large number of random rows from a rather large table. This is what I came up with. First get the maximum record id:
SELECT MAX(id) FROM table_name;
Then substitute that value into:
SELECT * FROM table_name WHERE id > FLOOR(RAND() * max) LIMIT n;
Where max is the maximum record id in the table and n is the number of rows you want in your result set. The assumption is that there are no gaps in the record ids, although I doubt it would affect the result if there were (I haven't tried it, though). I also created this stored procedure to be more generic; pass in the table name and the number of rows to be returned. I'm running MySQL 5.5.38 on Windows 2008 (32 GB, dual 3 GHz E5450), and on a table with 17,361,264 rows it's fairly consistent at ~.03 sec / ~11 sec to return 1,000,000 rows. (Times are from MySQL Workbench 6.1; you could also use CEIL instead of FLOOR in the 2nd select statement, depending on your preference.)
DELIMITER $$
USE [schema name] $$
DROP PROCEDURE IF EXISTS `random_rows` $$
CREATE PROCEDURE `random_rows`(IN tab_name VARCHAR(64), IN num_rows INT)
BEGIN
SET @t = CONCAT('SET @max=(SELECT MAX(id) FROM ',tab_name,')');
PREPARE stmt FROM @t;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET @t = CONCAT(
'SELECT * FROM ',
tab_name,
' WHERE id>FLOOR(RAND()*@max) LIMIT ',
num_rows);
PREPARE stmt FROM @t;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END
$$
then
CALL [schema name].random_rows([table name], n);
Here is a game changer that may be helpful for many:
I have a table with 200k rows with sequential ids. I needed to pick N random rows, so I opted to generate random values based on the biggest ID in the table. I created this script to find out which is the fastest operation:
logTime();
query("SELECT COUNT(id) FROM tbl");
logTime();
query("SELECT MAX(id) FROM tbl");
logTime();
query("SELECT id FROM tbl ORDER BY id DESC LIMIT 1");
logTime();
The results are:
Count: 36.8418693542479 ms
Max: 0.241041183472 ms
Order: 0.216960906982 ms
Based on these results, ORDER BY id DESC is the fastest operation to get the max id.
Here is my answer to the question:
SELECT GROUP_CONCAT(n SEPARATOR ',') g FROM (
SELECT FLOOR(RAND() * (
SELECT id FROM tbl ORDER BY id DESC LIMIT 1
)) n FROM tbl LIMIT 10) a
...
SELECT * FROM tbl WHERE id IN ($result);
FYI: To get 10 random rows from a 200k table, it took me 1.78 ms (including all the operations in the php side)
I used this http://jan.kneschke.de/projects/mysql/order-by-rand/ posted by Riedsio (I used the case of a stored procedure that returns one or more random values):
DROP TEMPORARY TABLE IF EXISTS rands;
CREATE TEMPORARY TABLE rands ( rand_id INT );
loop_me: LOOP
IF cnt < 1 THEN
LEAVE loop_me;
END IF;
INSERT INTO rands
SELECT r1.id
FROM random AS r1 JOIN
(SELECT (RAND() *
(SELECT MAX(id)
FROM random)) AS id)
AS r2
WHERE r1.id >= r2.id
ORDER BY r1.id ASC
LIMIT 1;
SET cnt = cnt - 1;
END LOOP loop_me;
In the article he solves the problem of gaps in the ids causing not-so-random results by maintaining a table (using triggers, etc.; see the article).
I'm solving the problem by adding another column to the table, populated with contiguous numbers starting from 1 (edit: this column is added to the temporary table created by the subquery at runtime; it doesn't affect your permanent table):
DROP TEMPORARY TABLE IF EXISTS rands;
CREATE TEMPORARY TABLE rands ( rand_id INT );
loop_me: LOOP
IF cnt < 1 THEN
LEAVE loop_me;
END IF;
SET @no_gaps_id := 0;
INSERT INTO rands
SELECT r1.id
FROM (SELECT id, @no_gaps_id := @no_gaps_id + 1 AS no_gaps_id FROM random) AS r1 JOIN
(SELECT (RAND() *
(SELECT COUNT(*)
FROM random)) AS id)
AS r2
WHERE r1.no_gaps_id >= r2.id
ORDER BY r1.no_gaps_id ASC
LIMIT 1;
SET cnt = cnt - 1;
END LOOP loop_me;
In the article I can see he went to great lengths to optimize the code; I have no idea if/how much my changes impact the performance, but it works very well for me.
You can easily use a random offset with a limit:
PREPARE stm from 'select * from table limit 10 offset ?';
SET @total = (select count(*) from table);
SET @_offset = FLOOR(RAND() * @total);
EXECUTE stm using @_offset;
You can also apply a where clause, like so:
PREPARE stm from 'select * from table where available=true limit 10 offset ?';
SET @total = (select count(*) from table where available=true);
SET @_offset = FLOOR(RAND() * @total);
EXECUTE stm using @_offset;
Tested on a 600,000-row (700 MB) table; query execution took ~0.016 sec on an HDD.
EDIT: The offset might take a value close to the end of the table, which would result in the select statement returning fewer rows (or maybe only 1 row). To avoid this, we can check the offset again after declaring it, like so:
SET @rows_count = 10;
PREPARE stm from "select * from table where available=true limit ? offset ?";
SET @total = (select count(*) from table where available=true);
SET @_offset = FLOOR(RAND() * @total);
SET @_offset = (SELECT IF(@total-@_offset<@rows_count,@_offset-@rows_count,@_offset));
SET @_offset = (SELECT IF(@_offset<0,0,@_offset));
EXECUTE stm using @rows_count,@_offset;
I know it is not what you want, but the answer I will give you is what I use in production on a small website.
Depending on how many times you access the random value, it is not worth using MySQL, just because you won't be able to cache the answer. We have a button there to access a random page, and a user could click it several times per minute if he wants. This causes a massive amount of MySQL usage, and, at least for me, MySQL is the biggest problem to optimize.
I would go another approach, where you can store in cache the answer. Do one call to your MySQL:
SELECT min(id) as min, max(id) as max FROM your_table
With your min and max Id, you can, in your server, calculate a random number. In python:
random.randint(min, max)
Then, with your random number, you can get a random Id in your Table:
SELECT *
FROM your_table
WHERE id >= %s
ORDER BY id ASC
LIMIT 1
With this method you do two calls to your database, but you can cache them and avoid accessing the database for a long period of time, enhancing performance. Note that this is not random if you have holes in your table. Having more than 1 row is easy: you can create the id in Python and do one request for each row, and since they are cached, it's OK.
If you have too many holes in your table, you can try the same approach, but now going for the total number of records:
SELECT COUNT(*) as total FROM your_table
Then in python you go:
random.randint(0, total)
And to fetch a random result you use LIMIT like below:
SELECT *
FROM your_table
ORDER BY id ASC
LIMIT %s, 1
Notice it will get 1 value after X random rows. Even if you have holes in your table, it will be completely random, but it will cost more for your database.
If you want one random record (no matter if there are gaps between ids):
PREPARE stmt FROM 'SELECT * FROM `table_name` LIMIT 1 OFFSET ?';
SET @count = (SELECT
FLOOR(RAND() * COUNT(*))
FROM `table_name`);
EXECUTE stmt USING @count;
Source: https://www.warpconduit.net/2011/03/23/selecting-a-random-record-using-mysql-benchmark-results/#comment-1266
This is super fast and is 100% random even if you have gaps.
Count the number x of rows that you have available: SELECT COUNT(*) AS rows FROM TABLE.
Pick 10 distinct random numbers a_1, a_2, ..., a_10 between 0 and x.
Query your rows like this: SELECT * FROM TABLE LIMIT 1 OFFSET a_i for i = 1, ..., 10.
I found this hack in the book SQL Antipatterns by Bill Karwin; a sketch of the idea in PHP follows.
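A minimal PHP sketch of these three steps (assuming a PDO connection $pdo and an illustrative table tbl with at least 10 rows; the offset is bound as an integer because LIMIT/OFFSET reject quoted values under PDO's emulated prepares):
$total = (int) $pdo->query("SELECT COUNT(*) FROM tbl")->fetchColumn();

$offsets = [];
while (count($offsets) < 10) {
    $offsets[mt_rand(0, $total - 1)] = true;  // array keys keep the 10 offsets distinct
}

$stmt = $pdo->prepare("SELECT * FROM tbl LIMIT 1 OFFSET ?");
$rows = [];
foreach (array_keys($offsets) as $offset) {
    $stmt->bindValue(1, $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows[] = $stmt->fetch(PDO::FETCH_ASSOC);
}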
The following should be fast, unbiased and independent of the id column. However, it does not guarantee that the number of rows returned will match the number of rows requested.
SELECT *
FROM t
WHERE RAND() < (SELECT 10 / COUNT(*) FROM t)
Explanation: assuming you want 10 rows out of 100, each row has a 1/10 probability of getting SELECTed, which could be achieved by WHERE RAND() < 0.1. This approach does not guarantee exactly 10 rows, but if the query is run enough times the average number of rows per execution will be around 10, and each row in the table will be selected with equal probability.
If you have just one read-request
Combine the answer of @Riedsio with a temp table (600K rows is not that much):
DROP TEMPORARY TABLE IF EXISTS tmp_randorder;
CREATE TABLE tmp_randorder (id int(11) not null auto_increment primary key, data_id int(11));
INSERT INTO tmp_randorder (data_id) select id from datatable;
And then take a version of @Riedsio's answer:
SELECT dt.*
FROM
(SELECT (RAND() *
(SELECT MAX(id)
FROM tmp_randorder)) AS id)
AS rnd
INNER JOIN tmp_randorder rndo on rndo.id between rnd.id - 10 and rnd.id + 10
INNER JOIN datatable AS dt on dt.id = rndo.data_id
ORDER BY abs(rndo.id - rnd.id)
LIMIT 1;
If the table is big, you can sieve on the first part:
INSERT INTO tmp_randorder (data_id) select id from datatable where rand() < 0.01;
If you have many read-requests
Version: You could keep the table tmp_randorder persistent, call it datatable_idlist. Recreate that table at certain intervals (daily, hourly), since it will also get holes. If your table gets really big, you could also refill the holes:
select l.data_id as whole
from datatable_idlist l
left join datatable dt on dt.id = l.data_id
where dt.id is null;
Version: Give your dataset a random_sortorder column, either directly in datatable or in a persistent extra table datatable_sortorder. Index that column. Generate a random value in your application (I'll call it $rand):
select l.*
from datatable l
order by abs(random_sortorder - $rand) desc
limit 1;
This solution discriminates against the 'edge rows' with the highest and the lowest random_sortorder, so rearrange them at intervals (once a day).
Another simple solution is ranking the rows and fetching one of them randomly; with this solution you won't need any id-based column in the table.
SELECT d.* FROM (
SELECT t.*, @rownum := @rownum + 1 AS rank
FROM mytable AS t,
(SELECT @rownum := 0) AS r,
(SELECT @cnt := (SELECT RAND() * (SELECT COUNT(*) FROM mytable))) AS n
) d WHERE rank >= @cnt LIMIT 10;
You can change the limit value as per your need to access as many rows as you want but that would mostly be consecutive values.
However, if you don't want consecutive random values, you can fetch a bigger sample and select randomly from it. Something like...
SELECT * FROM (
SELECT d.* FROM (
SELECT c.*, @rownum := @rownum + 1 AS rank
FROM buildbrain.`commits` AS c,
(SELECT @rownum := 0) AS r,
(SELECT @cnt := (SELECT RAND() * (SELECT COUNT(*) FROM buildbrain.`commits`))) AS rnd
) d
WHERE rank >= @cnt LIMIT 10000
) t ORDER BY RAND() LIMIT 10;
One way that I find pretty good, if there's an autogenerated id, is to use the modulo operator '%'. For example, if you need 10,000 random records out of 70,000, you could simplify this by saying you need 1 out of every 7 rows. This can be expressed in this query:
SELECT * FROM
table
WHERE
id %
FLOOR(
(SELECT count(1) FROM table)
/ 10000
) = 0;
If the result of dividing target rows by total available is not an integer, you will get some extra rows beyond what you asked for, so you should add a LIMIT clause to trim the result set, like this:
SELECT * FROM
table
WHERE
id %
FLOOR(
(SELECT count(1) FROM table)
/ 10000
) = 0
LIMIT 10000;
This does require a full scan, but it is faster than ORDER BY RAND, and in my opinion simpler to understand than other options mentioned in this thread. Also, if the system that writes to the DB creates sets of rows in batches, you might not get as random a result as you were expecting.
I think this is a simple and yet faster way. I tested it on a live server in comparison with a few of the answers above, and it was faster.
SELECT * FROM `table_name` WHERE id >= (SELECT FLOOR( MAX(id) * RAND()) FROM `table_name` ) ORDER BY id LIMIT 30;
//Took 0.0014secs against a table of 130 rows
SELECT * FROM `table_name` WHERE 1 ORDER BY RAND() LIMIT 30
//Took 0.0042secs against a table of 130 rows
SELECT name
FROM random AS r1 JOIN
(SELECT CEIL(RAND() *
(SELECT MAX(id)
FROM random)) AS id)
AS r2
WHERE r1.id >= r2.id
ORDER BY r1.id ASC
LIMIT 30
//Took 0.0040secs against a table of 130 rows
SELECT
*
FROM
table_with_600k_rows
WHERE
RAND( )
ORDER BY
id DESC
LIMIT 30;
id is the primary key and the table is sorted by id; running EXPLAIN on table_with_600k_rows shows that this query does not scan the entire table.
I use this query:
select floor(RAND() * (SELECT MAX(key) FROM table)) from table limit 10
Query time: 0.016s
This is how I do it:
select *
from table_with_600k_rows
where rand() < 10/600000
limit 10
I like it because it does not require other tables, it is simple to write, and it is very fast to execute.
Use the below simple query to get random data from a table.
SELECT user_firstname ,
COUNT(DISTINCT usr_fk_id) cnt
FROM userdetails
GROUP BY usr_fk_id
ORDER BY cnt ASC
LIMIT 10
I guess this is the best possible way..
SELECT id, id * RAND( ) AS random_no, first_name, last_name
FROM user
ORDER BY random_no

Efficiency of queries against the database

I'm trying to query my database with 4 queries; each query gives me the number of rows that I need. I noticed that when I run the queries on a strong server, everything works (relatively) fast, but on a relatively simple server I hit the maximum execution time (30 seconds, the PHP execution-time limit).
My while loop works through a list of MokedCcodes; each MokedCcode goes into the queries, and then we move on to the next MokedCcode.
Notes:
1) I know the total time will be proportional to the number of MokedCcodes.
2) Do I need to increase the execution time limit?
3) Is there a more efficient way to write those queries? Maybe I don't use the MySQL features right.
For example, the first query, called $emerg, needs to give me the number of rows between two dates where WCODE has priority 1 and where MokedCcode matches on both tables (t and e).
$emerg = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY='1'
AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
In addition, I've added the 3 other queries below. I would like some advice on how to make this faster, or do I have no choice but to keep it like this?
The 3 other queries:
$regular = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY!='1'
AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
$regHandled = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY!='1'
AND t.EventHandling!='0' AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
$emergHandled = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY='1'
AND t.EventHandling!='0' AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
I didn't exactly understand what you are trying to achieve.
If you just want the count, why are you selecting all the rows? Can't you use the COUNT() MySQL function? It is always slow to pull all the data into your script and then count the rows there.
Even if you do want to select all those columns, try using t.field1, t.field2, ... instead of t.*, as in the sketch below.
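For instance, the first query might look like this with COUNT() (a sketch only, reusing the question's variables; the ORDER BY is dropped because it is irrelevant to a count):
$result = mysql_query("SELECT COUNT(*) FROM $tbl_name AS t
    LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE = e.WCODE
    WHERE t.MokedCcode = '$MokedCcode' AND e.PRIORITY = '1'
      AND t.ndate BETWEEN '$start' AND '$end'");
$emerg = (int) mysql_result($result, 0); // the count is computed in MySQL; no rows are transferred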

How do I sort the following table and get 5 records for top5 and twenty records for top20?

This query is giving a strange result:
SELECT `user_id`,`rankType`
FROM `ranks`
WHERE `user_id` =23
AND (`rankType` = "top5"
OR `rankType` = "top20")
ORDER BY rankType
LIMIT 0 , 30
Here is the SQLFiddle.
What I am trying to achieve is:
1) To get only 5 records of the top5 rank type and 20 records of the top20 rank type.
2) To show the result in ascending order of rank type. (But if you look at the demo fiddle it's showing the opposite; maybe it is only comparing the 2 from 20 and the 5.)
(SELECT `id`,`user_id`,`rankType`
FROM `ranks`
WHERE `user_id` =23
AND `rankType` = "top5"
ORDER BY rankType
LIMIT 0, 5)
union
(SELECT `id`,`user_id`,`rankType`
FROM `ranks`
WHERE `user_id` =23
AND `rankType` = "top20"
ORDER BY rankType
LIMIT 0, 20)
If later on you want to add another set of sorting/filtering columns, wrap it all into something like
select * from ( /* previous query goes here */ ) tt
where id > 100
order by id
Note that rankType is a varchar, so it's sorted lexicographically, and top20 < top5. You'll have to employ natural sorting or some other means to get it right; one option is sketched below.
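One hedged option (assuming every value literally has the form topN; this suggestion is mine, not the answerer's): order by the numeric suffix, e.g.
ORDER BY CAST(SUBSTRING(rankType, 4) AS UNSIGNED)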
SELECT `id`,`user_id`,`rankType`
FROM `ranks`
WHERE `user_id` =23
AND `rankType` = "top5" limit 5
union
SELECT `id`,`user_id`,`rankType`
FROM `ranks`
WHERE `user_id` =23
AND `rankType` = "top20" limit 20
Your result is actually in ascending order: since the rankType column is of varchar type, top20 comes before top5 in string comparison.
If you only need to deal with top5 and top20, a dirty solution could be:
ORDER BY rankType desc
One possibility without doing two queries and UNIONing them:
ORDER BY FIND_IN_SET(rankType,'top5,top20')

Need expert advice on complex nested queries

I have 3 queries. I was told that they were potentially inefficient so I was wondering if anyone who is experienced could suggest anything. The logic is somewhat complex so bear with me.
I have two tables: shoutbox and topic. Topic stores all information on topics that were created, while shoutbox stores all comments pertaining to each topic. Each comment belongs to a group labelled by reply_chunk_id. The earliest timestamp is the first comment, while any rows with the same reply_chunk_id and a later timestamp are replies. I would like to find the latest comment for each group that was started by the user (i.e. the user made the first comment) and, if the latest comment was made this month, display it.
What I have written achieves that, with one problem: all the latest comments are displayed in random order. I would like to order these groups/latest comments. I really appreciate any advice.
Shoutbox
Field Type
-------------------
id int(5)
timestamp int(11)
user varchar(25)
message varchar(2000)
topic_id varchar(35)
reply_chunk_id varchar(35)
Topic
id mediumint(8)
topic_id varchar(35)
subject_id mediumint(8)
file_name varchar(35)
topic_title varchar(255)
creator varchar(25)
topic_host varchar(255)
timestamp int(11)
color varchar(10)
mp3 varchar(75)
custom_background varchar(55)
description mediumtext
content_type tinyint(1)
Query
$sql="SELECT reply_chunk_id FROM shoutbox
GROUP BY reply_chunk_id
HAVING count(*) > 1
ORDER BY timestamp DESC ";
$stmt16 = $conn->prepare($sql);
$result=$stmt16->execute();
while ($row = $stmt16->fetch(PDO::FETCH_ASSOC)) {
    $sql = "SELECT user, reply_chunk_id, MIN(timestamp) AS grp_timestamp
            FROM shoutbox WHERE reply_chunk_id=? AND user=?";
    $stmt17 = $conn->prepare($sql);
    $result = $stmt17->execute(array($row['reply_chunk_id'], $user));
    while ($row2 = $stmt17->fetch(PDO::FETCH_ASSOC)) {
        $sql = "SELECT t.topic_title, t.content_type, t.subject_id,
                t.creator, t.description, t.topic_host,
                c1.message, c1.topic_id, c1.user, c1.timestamp AS max
                FROM shoutbox c1
                JOIN topic t ON (t.topic_id = c1.topic_id)
                WHERE reply_chunk_id = ? AND c1.timestamp > ?
                ORDER BY c1.timestamp DESC, c1.id
                LIMIT 1";
        $stmt18 = $conn->prepare($sql);
        $result = $stmt18->execute(array($row2['reply_chunk_id'], $month));
        while ($row3 = $stmt18->fetch(PDO::FETCH_ASSOC)) {
Make the first query:
SELECT reply_chunk_id FROM shoutbox
GROUP BY reply_chunk_id
HAVING count(*) > 1
ORDER BY timestamp DESC
This does the same, but is faster.
Make sure you have an index on reply_chunk_id.
The second query:
SELECT user,reply_chunk_id, MIN(timestamp) AS grp_timestamp
FROM shoutbox WHERE reply_chunk_id=? AND user=?
The GROUP BY is unneeded, because only one row gets returned, because of the MIN() and the equality tests.
The third query:
SELECT t.topic_title, t.content_type, t.subject_id,
t.creator, t.description, t.topic_host,
c1.message, c1.topic_id, c1.user, c1.timestamp AS max
FROM shoutbox c1
JOIN topic t ON (t.topic_id = c1.topic_id)
WHERE reply_chunk_id = ? AND c1.timestamp > ?
ORDER BY c1.timestamp DESC, c1.id
LIMIT 1
Doing it all in one query:
SELECT
t.user,t.reply_chunk_id, MIN(t.timestamp) AS grp_timestamp,
t.topic_title, t.content_type, t.subject_id,
t.creator, t.description, t.topic_host,
c1.message, c1.topic_id, c1.user, c1.timestamp AS max
FROM shoutbox c1
INNER JOIN topic t ON (t.topic_id = c1.topic_id)
LEFT JOIN shoutbox c2 ON (c1.reply_chunk_id = c2.reply_chunk_id and c1.timestamp < c2.timestamp)
WHERE c2.timestamp IS NULL AND t.user = ?
GROUP BY t.reply_chunk_id
HAVING count(*) > 1
ORDER BY t.reply_chunk_id
or the equivalent
SELECT
t.user,t.reply_chunk_id, MIN(t.timestamp) AS grp_timestamp,
t.topic_title, t.content_type, t.subject_id,
t.creator, t.description, t.topic_host,
c1.message, c1.topic_id, c1.user, c1.timestamp AS max
FROM shoutbox c1
INNER JOIN topic t ON (t.topic_id = c1.topic_id)
WHERE c1.timestamp = (SELECT max(timestamp) FROM shoutbox c2
WHERE c2.reply_chunk_id = c1.reply_chunk_id)
AND t.user = ?
GROUP BY t.reply_chunk_id
HAVING count(*) > 1
ORDER BY t.reply_chunk_id
How does this work?
The group by selects one entry per topic.reply_chunk_id
The left join (c1.`reply_chunk_id` = c2.`reply_chunk_id` and c1.`timestamp` < c2.`timestamp`) + WHERE c2.`timestamp` IS NULL selects only those rows from shoutbox which have the highest timestamp in their group: the row with the maximum c1.timestamp is the only one for which no later c2 row exists, so its c2.timestamp is NULL, and the WHERE clause keeps exactly that row.
If you don't understand point 2, see: http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html
Note that the PDO is autoescaping the fields with backticks
Sounds like most of it should come directly from your ShoutBox table. Pre-query to find all "chunks" the user replied to... and of those chunks (and topic_id, since each chunk is always the same topic), get their respective minimum and maximum. Using HAVING count(*) > 1 forces only those that HAVE a second posting by a given user (what you were looking for).
THEN, re-query the chunks to get the minimum regardless of user. This prevents the need to query ALL chunks. Then join only what a single user is associated with back to the topic.
Additionally, and I could be incorrect and need to adjust (minimally), but it appears that the SHOUTBOX table's ID column would be an auto-increment column that just happens to be time-stamped at creation. That said, for a given chunk, the earliest ID would correspond to the earliest timestamp, as they are stamped at the same time they are created. That also makes subsequent JOINs and the sub-query easier.
Using STRAIGHT_JOIN should force the "PreQuery" FIRST, come up with a very limited set, then qualify the WHERE clause and joins afterwards.
select STRAIGHT_JOIN
T.topic_title,
T.content_type,
T.subject_id,
T.creator,
T.description,
T.topic_host,
sb2.Topic_ID,
sb2.message,
sb2.user,
sb2.TimeStamp
from
( select
sb1.Reply_Chunk_ID,
sb1.Topic_ID,
count(*) as TotalEntries,
min( sb1.id ) as FirstIDByChunkByUser,
min( sbJoin.id ) as FirstIDByChunk,
max( sbJoin.id ) as LastIDByChunk,
max( sbJoin.timestamp ) as LastTimeByChunk
from
ShoutBox sb1
join ShoutBox sbJoin
on sb1.Reply_Chunk_ID = sbJoin.Reply_Chunk_ID
where
sb1.user = CurrentUser
group by
sb1.Reply_Chunk_ID,
sb1.Topic_ID
having
min( sb1.id ) = min( sbJoin.ID ) ) PreQuery
join Topic T on
PreQuery.Topic_ID = T.ID
join ShoutBox sb2
on PreQuery.LastIDByChunk = sb2.ID
where
sb2.TimeStamp >= YourTimeStampCriteria
order by
sb2.TimeStamp desc
EDIT ---- QUERY EXPLANATION -- with Modified query.
I've changed the query after re-reading (it was almost midnight when I answered, after a holiday weekend :)
First, "STRAIGHT_JOIN" is a MySQL clause telling the engine to "do the query in the way / sequence I've stated". Basically, sometimes an engine will try to think for you and optimize in ways that may appear more efficient, but if based on your data, you know what will retrieve the smallest set of data first, and then join to other lookup fields next might in fact be better. Second the "PreQuery". If you have a "SQL-Select" statement (within parens) as Alias "From" clause, The "PreQuery" is just the name of the alias of the resultset... I could have called it anything, just makes sense that this is a stand-alone query of it's own. (Ooops... fixed to ShoutBox :) As for case-sensitivity, typically Column names are NOT case-sensitive... However, table names are... You could have a table name "MyTest" different than "mytest" or "MYTEST". But by supplying "alias", it helps shorten readability (especially with VeryLongTableNamesUsed ).
It should be working after the re-reading and adjustments. Try the first "PreQuery" on its own to see how many records it returns. On its own merits, for a single "CurrentUser" parameter value, it should return every "Reply_Chunk_ID" (which will always have the same topic_id) with the first ID the person entered (min()). By JOINing again to ShoutBox on the chunk id (only chunks the user participated in qualify), we get the minimum and maximum ID per chunk REGARDLESS of who started or responded. By applying the HAVING clause, this should only return chunks where the same person STARTED the topic (hence both having the same min() value).
Finally, once those have been qualified, join directly to the TOPIC and SHOUTBOX tables again on their own merits of topic_id and LastIDByChunk and order the final results by the latest comment response timestamp descending.
I've added a where clause to further limit your "timestamp" criteria where the most recent final timestamp is on/after the given time period you want.
I would be curious how this query's time performance works compared to your already accepted answer too.
