Okay, so I have a table which currently has 40,000 rows and I need to SELECT them all. I have an index on the id and url columns, so if I SELECT a value by id or url it's instant, but a SELECT * is very slow. What I'm trying to do is search my database and output the matches, and I did this with a while loop:
$query = mysqli_query($link, "SELECT * FROM table");
while ($arr = mysqli_fetch_array($query)) {
    # code...
    echo $arr['whatever_i_need']."<br>";
}
In the future I will have hundreds of millions of rows in the database, so I would like to return the search results fast, in about 1 second. If you can give me solutions I would really appreciate it! Thanks!
EDIT:
I don't want to display all of the data but I want to loop through it quickly to find all the matches
If you want speed then you definitely don't want the query to return every row from the table, and then "loop through" every row returned by the query to identify the ones you are interested in returning. That approach might give acceptable performance with small tables, but it definitely doesn't scale.
For performance, you want the database to locate just the rows you want to return, filter out the ones you don't want, and return just the subset.
And that comes down to writing an appropriate SQL query, i.e. executing a SELECT statement that does the filtering for you.
SELECT t.col1
, t.col2
, t.col3
FROM mytable t
WHERE t.col3 LIKE '%foo%'
AND t.col2 >= '2016-03-15'
AND t.col2 < '2016-06-15'
ORDER BY t.col2 DESC, t.col1 DESC
LIMIT 200
Performance is about making sure appropriate indexes are available and that the query execution is making effective use of the available indexes.
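As a rough illustration (the index name and column choice here are assumptions based on the example query above, not a prescription), you could add an index that matches the date-range filter and the sort, and then check with EXPLAIN whether MySQL actually uses it ($link is the mysqli connection from the question):

// Hypothetical index covering the WHERE range and the ORDER BY of the example query.
mysqli_query($link, "CREATE INDEX idx_col2_col1 ON mytable (col2, col1)");

// EXPLAIN shows the chosen execution plan, including which index (if any) is used.
$explain = mysqli_query($link, "EXPLAIN
    SELECT t.col1, t.col2, t.col3
    FROM mytable t
    WHERE t.col3 LIKE '%foo%'
      AND t.col2 >= '2016-03-15'
      AND t.col2 < '2016-06-15'
    ORDER BY t.col2 DESC, t.col1 DESC
    LIMIT 200");
while ($row = mysqli_fetch_assoc($explain)) {
    print_r($row);
}

Note that the leading-wildcard LIKE '%foo%' can never be satisfied from a plain B-tree index; the best the index above can do is narrow the scan to the date range before the LIKE is applied.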
Related
Let's start by saying that I can't use INDEXING, as I need the INSERT, DELETE and UPDATE operations on this table to be super fast, which they are.
I have a page that displays a summary of order units collected in a database table. To populate the table, an order number is created and then individual units associated with that order are scanned into the table to record which units are associated with each order.
For the purposes of this example the table has the following columns.
id, UID, order, originator, receiver, datetime
The individual unit quantities can be in the 1000's per order and the entire table is growing to hundreds of thousands of units.
The summary page displays the number of units per order and the first and last unit number for each order. I limit the number of orders to be displayed to the last 30 order numbers.
For example:
Order 10 has 200 units. first UID 1510 last UID 1756
Order 11 has 300 units. first UID 1922 last UID 2831
..........
..........
Currently the response time for the query is about 3 seconds as the code performs the following:
Look up the last 30 orders by id and sort by order number
While looking at each order number in the array
-- Count the number of database rows that have that order number
-- Select the first UID from all the rows as first
-- Select the last UID from all the rows as last
Display the result
I've determined the majority of the time is taken by the Count of the number of units in each order ~1.8 seconds and then determining the first and last numbers in each order ~1 second.
I am really interested in if there is a way to speed up these queries without INDEXING. Here is the code with the queries.
The first request selects the last 30 orders processed, selected by id and grouped by order number. This gives the last 30 unique order numbers.
$result = mysqli_query($con, "SELECT `order`, ANY_VALUE(receiver) AS receiver, ANY_VALUE(originator) AS originator, ANY_VALUE(id) AS id
                              FROM scandb
                              GROUP BY `order`
                              ORDER BY id DESC
                              LIMIT 30");
While fetching the last 30 order numbers, count the number of units and get the first and last UID for each order.
while ($row = mysqli_fetch_array($result)) {
    $count = mysqli_fetch_array(mysqli_query($con, "SELECT `order`, COUNT(*) AS count FROM scandb WHERE `order` = '".$row['order']."'"));
    $firstLast = mysqli_fetch_array(mysqli_query($con, "SELECT (SELECT UID FROM scandb WHERE `order` = '".$row['order']."' ORDER BY UID LIMIT 1) AS first, (SELECT UID FROM scandb WHERE `order` = '".$row['order']."' ORDER BY UID DESC LIMIT 1) AS last"));
    echo "<td align=center>".$count['count']."</td>";
    echo "<td align=center>".$firstLast['first']."</td>";
    echo "<td align=center>".$firstLast['last']."</td>";
}
With 100K rows in the database this whole process takes about 3 seconds. The majority of the time is spent in the $count and $firstLast queries. I'd like to know if there is a more efficient way to get this same data in a faster time without indexing the table. Any special tricks that anyone has would be greatly appreciated.
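For reference, the per-order count and the first and last UID can usually be collapsed into one aggregate query rather than two extra queries per order; a minimal sketch using the table and column names from the question, still without any added index (receiver/originator are omitted for brevity; add ANY_VALUE() columns as in the first query if needed):

$result = mysqli_query($con, "SELECT `order`,
                                     COUNT(*) AS count,
                                     MIN(UID) AS first,
                                     MAX(UID) AS last,
                                     MAX(id)  AS last_id
                              FROM scandb
                              GROUP BY `order`
                              ORDER BY last_id DESC
                              LIMIT 30");
while ($row = mysqli_fetch_array($result)) {
    echo "<td align=center>".$row['count']."</td>";
    echo "<td align=center>".$row['first']."</td>";
    echo "<td align=center>".$row['last']."</td>";
}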
Design your database with caution
This first tip may seem obvious, but the fact is that most database problems come from badly-designed table structure.
For example, I have seen people storing information such as client info and payment info in the same database column. For both the database system and developers who will have to work on it, this is not a good thing.
When creating a database, always spread information across appropriate tables, use clear naming standards, and make use of primary keys.
Know what you should optimize
If you want to optimize a specific query, it is extremely useful to be able to get an in-depth look at how it is executed. Using the EXPLAIN statement, you will get lots of useful info on how MySQL processes a specific query, as shown in the example below:
EXPLAIN SELECT * FROM ref_table,other_table WHERE ref_table.key_column=other_table.column;
Don’t select what you don’t need
A very common way to get the desired data is to use the * symbol, which will get all fields from the desired table:
SELECT * FROM wp_posts;
Instead, you should definitely select only the desired fields as shown in the example below. On a very small site with, let’s say, one visitor per minute, that wouldn’t make a difference. But on a site such as Cats Who Code, it saves a lot of work for the database.
SELECT title, excerpt, author FROM wp_posts;
Avoid queries in loops
When using SQL along with a programming language such as PHP, it can be tempting to use SQL queries inside a loop. But doing so is like hammering your database with queries.
This example illustrates the whole “queries in loops” problem:
foreach ($display_order as $id => $ordinal) {
    $sql = "UPDATE categories SET display_order = $ordinal WHERE id = $id";
    mysql_query($sql);
}
Here is what you should do instead:
UPDATE categories
SET display_order = CASE id
WHEN 1 THEN 3
WHEN 2 THEN 4
WHEN 3 THEN 5
END
WHERE id IN (1,2,3)
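If the id/ordinal pairs live in a PHP array as in the loop above, the single statement can be built once and sent once; a rough sketch (the mysqli connection $link is an assumption, and the values are cast to int as a minimal safeguard):

$cases = '';
$ids   = array();
foreach ($display_order as $id => $ordinal) {
    $id      = (int) $id;
    $ordinal = (int) $ordinal;
    $cases  .= " WHEN $id THEN $ordinal";
    $ids[]   = $id;
}
if ($ids) {
    $sql = "UPDATE categories SET display_order = CASE id" . $cases .
           " END WHERE id IN (" . implode(',', $ids) . ")";
    mysqli_query($link, $sql);   // one round trip instead of one query per category
}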
Use join instead of subqueries
As a programmer, subqueries are something that you can be tempted to use and abuse. Subqueries, as shown below, can be very useful:
SELECT a.id,
       (SELECT MAX(created)
        FROM posts
        WHERE author_id = a.id) AS latest_post
FROM authors a
Although subqueries are useful, they can often be replaced by a join, which is usually faster to execute.
SELECT a.id, MAX(p.created) AS latest_post
FROM authors a
INNER JOIN posts p
ON (a.id = p.author_id)
GROUP BY a.id
Source: http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
suppose I have a table t and table t has 15000 entries
suppose the query
SELECT * FROM t WHERE t.nid <1000
returns 1000 rows
but then I only want the first 10 rows so I do a LIMIT
SELECT * FROM t WHERE t.nid <1000 LIMIT 10
Is it possible to construct a single query which, in addition to returning the 10 rows of information with the LIMIT clause above, also returns the total count of the rows that satisfy the WHERE clause? In other words, in addition to the 10 rows above it would also return 1000, since there are a total of 1000 rows satisfying the WHERE clause, and both would come back from the same single query.
Preferred solution
First of all, the found_rows() function is not portable (it is a MySQL extension) and is going to be removed. As user #Zveddochka pointed out, it has already been deprecated in MySQL 8.0.17.
But more importantly, it turns out that if you use proper indexing, then running two queries is actually faster. The SQL_CALC_FOUND_ROWS directive is implemented through a "virtual scan" that incurs an additional retrieval cost. When the query is not indexed, this cost is about the same as the cost of a COUNT(), so running two queries would cost roughly double; in that case SQL_CALC_FOUND_ROWS makes things run about 50% faster.
But what happens when the query is properly indexed? The guys at Percona checked it out. It turns out that the COUNT() is blazing fast, since it only accesses metadata and indexes, and the query without SQL_CALC_FOUND_ROWS is also faster because it doesn't incur the additional cost; so the cost of the two queries combined is less than the cost of the single "enhanced" query:
Results with SQL_CALC_FOUND_ROWS are the following: for each b value it takes 20-100 sec to execute uncached and 2-5 sec after warmup. Such difference could be explained by the I/O required for this query – mysql accesses all 10k rows this query could produce without the LIMIT clause.
The results are the following: it takes 0.01-0.11 sec to run this query the first time and 0.00-0.02 sec for all consecutive runs.
So, as we can see, total time for SELECT+COUNT (0.00-0.15 sec) is much less than execution time for the original query (2-100 sec). Let's take a look at EXPLAINs...
So, what to do?
// Run two queries, ensuring they satisfy exactly the same conditions
$field1 = "Field1, Field2, blah blah blah";
$field2 = "COUNT(*) AS `rows`";
$joins  = "Table1 JOIN Table2 ON ...";   // same FROM/JOIN clause for both queries
$where  = "Field5 = 'X' AND Field6 = 'Y' AND blah blah";
$limit  = 10;
$cntQuery = "SELECT {$field2} FROM {$joins} WHERE {$where}";
$rowQuery = "SELECT {$field1} FROM {$joins} WHERE {$where} LIMIT {$limit}";
Now the first query returns the count, the second query returns the actual data.
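For completeness, a rough sketch of actually running those two queries with mysqli ($con and the result handling below are assumptions, not part of the original answer):

$total = mysqli_fetch_assoc(mysqli_query($con, $cntQuery));
echo "Total matching rows: " . $total['rows'];

$result = mysqli_query($con, $rowQuery);
while ($row = mysqli_fetch_assoc($result)) {
    // ... render the current page of rows ...
}
mysqli_free_result($result);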
Old answer (useful just for non-indexed tables)
Don't do this. If you find that this section of the answer works better for you than the section above, it is almost certainly a signal that something else in your setup is not optimal: most likely you're not using the indexes properly, or you need to update your MySQL server, or run an ANALYZE/OPTIMIZE of the database to update its cardinality statistics.
You can, but I think it would be a performance killer.
Your best option would be to use the SQL_CALC_FOUND_ROWS MySQL extension and issue a second query to recover the full number of rows using FOUND_ROWS().
SELECT SQL_CALC_FOUND_ROWS * FROM t WHERE t.nid <1000 LIMIT 10;
SELECT FOUND_ROWS();
See e.g. http://www.arraystudio.com/as-workshop/mysql-get-total-number-of-rows-when-using-limit.html
Or you could simply run the full query without the LIMIT clause, and retrieve only the first ten rows. Then you can use one query as you wanted, and also get the row count through mysql_num_rows(). This is not ideal, but also not so catastrophic for most queries.
If you go this route, though, be very careful to close the query and free its resources: I have found that retrieving less than the full result set and forgetting to free the result handle is one outstanding cause of "metadata locking".
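With mysqli, that clean-up might look roughly like this (a sketch, assuming $link is the connection and reusing the query from the question):

$result = mysqli_query($link, "SELECT * FROM t WHERE t.nid < 1000");

$rows = array();
for ($i = 0; $i < 10 && ($row = mysqli_fetch_assoc($result)); $i++) {
    $rows[] = $row;                      // keep only the first ten rows
}
$totalRows = mysqli_num_rows($result);   // full count of the buffered result set

mysqli_free_result($result);             // free the handle even though it was not fully read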
You can try SQL_CALC_FOUND_ROWS, which can get a count of total records without running the statement again.
SELECT SQL_CALC_FOUND_ROWS * FROM t WHERE t.nid <1000 LIMIT 10; -- get records
SELECT FOUND_ROWS(); -- get count
Reference: http://dev.mysql.com/doc/refman/5.0/en/information-functions.html
"is it possible to construct a single query in which in addition to returning the 10 rows information with the LIMIT clause above, it also returns the total count of the rows that satisfy the conditions set in the WHERE clause"
Yes, it is possible to do both in a single query, by using a window function, i.e. COUNT(*) OVER() (MySQL 8.0+):
SELECT t.*, COUNT(*) OVER() AS cnt
FROM t
WHERE t.nid <1000
LIMIT 10;
Sidenote:
LIMIT without explicit ORDER BY is non-deterministic. It could return different results between multiple runs.
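For example, a deterministic variant might sort on the key used in the WHERE clause (a sketch; $con is an assumed mysqli connection, and nid is assumed to be the order you actually want):

$result = mysqli_query($con, "SELECT t.*, COUNT(*) OVER() AS cnt
                              FROM t
                              WHERE t.nid < 1000
                              ORDER BY t.nid
                              LIMIT 10");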
There are many things that need discussing.
A LIMIT without an ORDER BY is somewhat unpredictable, hence somewhat meaningless.
But if you add an ORDER BY, it may need to find all the rows, sort them then deliver only the 10 you want.
Or, the ORDER BY may be handled adequately by an INDEX.
Your particular query, if turned into 2 queries (as needed after 8.0.17), would be
SELECT * FROM t WHERE t.nid < 1000 LIMIT 10;
SELECT COUNT(*) FROM t WHERE t.nid < 1000;
Note that each of those would benefit from INDEX(nid). The first would pick 10 items from the index's BTree, then look them up in the data's BTree -- only 10 rows touched in each. The second would scan the INDEX until it hits 1000, and not touch the data BTree.
If you add an ORDER BY as advised, then, the first query:
SELECT * FROM t WHERE t.nid < 1000 ORDER BY t.nid LIMIT 10;
will work identically as above. But
SELECT * FROM t WHERE t.nid < 1000 ORDER BY t.abcd LIMIT 10;
will need to scan lots of rows, and be quite slow. And probably use a temp table and filesort. (Check EXPLAIN for details.) INDEX(nid, abcd) would help, but only a little.
And there are other variants, such as when the index can be "covering".
What is the goal of having "one query"?
Speed? -- as discussed above, there are other factors that are more pertinent.
Consistency? -- You may need a transaction to avoid, for example, getting N rows from the first query and a smaller number from the COUNT.
BEGIN;
SELECT * ...
SELECT COUNT(*) ...
COMMIT;
Single command? -- Consider a stored procedure that combines the 2 statements. Or
(SELECT * FROM t WHERE t.nid < 1000 LIMIT 10)
UNION ALL
SELECT COUNT(*) FROM t WHERE t.nid < 1000;
but that gets tricky because the number of columns is different, so some kludge would be needed to make the second query have the same number of columns. Another variant involves GROUP BY WITH ROLLUP. (But it may be even harder to fabricate.)
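One possible version of that kludge pads the count row with NULLs so both branches return the same number of columns (a sketch only; it pretends t has exactly the columns nid, col_a and col_b, which is an assumption, and $con is an assumed mysqli connection):

// The last row of the result carries the total count in the first column, NULLs elsewhere.
$sql = "(SELECT nid, col_a, col_b FROM t WHERE t.nid < 1000 LIMIT 10)
        UNION ALL
        SELECT COUNT(*), NULL, NULL FROM t WHERE t.nid < 1000";
$result = mysqli_query($con, $sql);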
Lukasz's Answer looks promising. However, it gives an extra column (which might be good) and its performance needs to be tested. If you are on 8.0 and their answer works well for you, accept that Answer.
If nid is indexed, the COUNT(*) is cheap, so you can use a subquery
SELECT *, (SELECT COUNT(*) FROM t WHERE t.nid <1000) AS cnt
FROM t
WHERE t.nid <1000
LIMIT 10
Sounds like you want FOUND_ROWS()
SELECT SQL_CALC_FOUND_ROWS * FROM t WHERE t.nid <1000 LIMIT 10;
SELECT FOUND_ROWS();
I'm trying to get 4 random results from a table that holds approx 7 million records. Additionally, I also want to get 4 random records from the same table that are filtered by category.
Now, as you would imagine doing random sorting on a table this large causes the queries to take a few seconds, which is not ideal.
One other method I thought of for the non-filtered result set would be to just get PHP to select some random numbers between 1 and 7,000,000 or so and then do an IN(...) in the query to grab only those rows. And yes, I know that this method has a caveat in that you may get fewer than 4 rows if a record with one of those ids no longer exists.
However, the above method obviously will not work with the category filtering as PHP doesn't know which record numbers belong to which category and hence cannot select the record numbers to select from.
Are there any better ways I can do this? The only way I can think of would be to store the record ids for each category in another table, select random results from that, and then select only those record ids from the main table in a secondary query; but I'm sure there is a better way!?
You could of course use the RAND() function in a query with a LIMIT and a WHERE (for the category). That, however, as you pointed out, entails a scan of the table, which takes time, especially in your case due to the volume of data.
Your other alternative, again as you pointed out, is to store id/category_id pairs in another table. That might prove a bit faster, but again there has to be a LIMIT and WHERE on that table, which will also contain the same number of records as the master table.
A different approach (if applicable) would be to have a table per category and store in that the IDs. If your categories are fixed or do not change that often, then you should be able to use that approach. In that case you will effectively remove the WHERE from the clause and getting a RAND() with a LIMIT on each category table would be faster since each category table will contain a subset of records from your main table.
Some other alternatives would be to use a key/value pair database just for that operation. MongoDb or Google AppEngine can help with that and are really fast.
You could also go towards the approach of a Master/Slave in your MySQL. The slave replicates content in real time but when you need to perform the expensive query you query the slave instead of the master, thus passing the load to a different machine.
Finally you could go with Sphinx which is a lot easier to install and maintain. You can then treat each of those category queries as a document search and let Sphinx randomize the results. This way you offset this expensive operation to a different layer and let MySQL continue with other operations.
Just some issues to consider.
Working off your random number approach
Get the max id in the database.
Create a temp table to store your matches.
Loop n times doing the following
Generate a random number between 1 and maxId
Get the first record with a record Id greater than the random number and insert it into your temp table
Your temp table now contains your random results.
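A rough sketch of that temp-table loop (table and column names follow this answer; $con is an assumed mysqli connection, $n is the number of random rows wanted, and duplicates are not filtered out):

// Highest id currently in the table.
$maxId = (int) mysqli_fetch_row(mysqli_query($con, "SELECT MAX(ID) FROM myTable"))[0];

// Empty copy of the table structure to collect the picks (no keys copied, so duplicates won't error).
mysqli_query($con, "CREATE TEMPORARY TABLE random_picks AS SELECT * FROM myTable LIMIT 0");

for ($i = 0; $i < $n; $i++) {
    $rand = mt_rand(1, $maxId);
    // First record with an ID at or above the random number, optionally filtered by category.
    mysqli_query($con, "INSERT INTO random_picks
                        SELECT * FROM myTable
                        WHERE ID >= $rand AND Category = 'zzz'
                        ORDER BY ID LIMIT 1");
}

$matches = mysqli_query($con, "SELECT * FROM random_picks");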
Or you could dynamically generate SQL with a UNION to do the query in one step.
(SELECT * FROM myTable WHERE ID >= FLOOR(1 + RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(1 + RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(1 + RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
UNION
(SELECT * FROM myTable WHERE ID >= FLOOR(1 + RAND() * (SELECT MAX(ID) FROM myTable)) AND Category = 'zzz' ORDER BY ID LIMIT 1)
Note: my SQL may not be exactly right, as I'm not a MySQL guy, but the theory should be sound
First you need to get the number of rows ... something like this
select count(1) from tbl where category = ?
then select a random offset
$offset = rand(0, $rowsNum - 1);
and select a row at that offset
select * FROM tbl WHERE category = ? LIMIT $offset, 1
In this way you avoid missing ids. The only problem is that you need to run the second query several times. UNION may help in this case.
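Putting those two steps together in PHP might look roughly like this (a sketch; $con, the table name tbl and the $category value are assumptions):

// 1. How many rows match the category?
$stmt = mysqli_prepare($con, "SELECT COUNT(1) FROM tbl WHERE category = ?");
mysqli_stmt_bind_param($stmt, "s", $category);
mysqli_stmt_execute($stmt);
$rowsNum = (int) mysqli_fetch_row(mysqli_stmt_get_result($stmt))[0];

// 2. Pick a random zero-based offset and fetch exactly one row at that position.
$offset = rand(0, max(0, $rowsNum - 1));
$stmt = mysqli_prepare($con, "SELECT * FROM tbl WHERE category = ? LIMIT ?, 1");
mysqli_stmt_bind_param($stmt, "si", $category, $offset);
mysqli_stmt_execute($stmt);
$randomRow = mysqli_fetch_assoc(mysqli_stmt_get_result($stmt));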
For MySQL you can use
RAND()
SELECT column FROM table
ORDER BY RAND()
LIMIT 4
I am building a PHP site using jQuery and the DataTables plugin. My page is laid out just as it needs to be, with pagination working, but in dealing with large datasets I have noticed the server is pulling ALL returned rows as opposed to the 10-row (can be more) limit stated within each 'page'.
Is it possible to limit the results of a query and yet keep say the ID numbers of those results in memory so when page 2 is hit (or the result number is changed) only new data is sought after?
Does it even make sense to do it this way?
I just don't want to query a DB that returns 2000 rows and then have a front-end plugin make it look like the other results are hidden, when they are actually on the page from the start.
The LIMIT clause in SQL has two parts -- the limit and the offset.
To get the first 10 rows:
SELECT ... LIMIT 0,10
To get the next 10 rows:
SELECT ... LIMIT 10,10
To get the next 10 rows:
SELECT ... LIMIT 20,10
As long as you ORDER the result set the same each time, you absolutely don't have to (and don't want to) first ask the database to send you all 2000 rows.
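In PHP the offset for a given page is a small calculation; a sketch (the $_GET parameter name, table and columns are assumptions matching whatever your DataTables request actually sends, and $link is an assumed mysqli connection):

$perPage = 10;                                                        // rows per page
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;   // 1-based page number from the request
$offset  = ($page - 1) * $perPage;

$result = mysqli_query($link, "SELECT id, name
                               FROM mytable
                               ORDER BY id
                               LIMIT $offset, $perPage");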
To display paging in conjunction with this, you still need to know how many total rows match your query. There are two ways to handle that --
1) Ask for a row count with a separate query
SELECT COUNT(*) FROM table WHERE ...
2) Use the SQL_CALC_FOUND_ROWS hint in your query, which will tell MySQL to calculate how many rows match the query before returning only the 10 you asked for. You then issue a SELECT FOUND_ROWS() query to get that result.
SELECT SQL_CALC_FOUND_ROWS column1, column2 ... LIMIT 0,10
Option #2 is preferable since it does not add an extra query to each page load.
To randomly select records from one table, do I always have to set a temporary variable in PHP? I need some help with selecting random rows within a CodeIgniter model, and then displaying three different ones in a view every time my homepage is viewed. Does anyone have any thoughts on how to solve this issue? Thanks in advance!
If you don't have a ton of rows, you can simply:
SELECT * FROM myTable ORDER BY RAND() LIMIT 3;
If you have many rows, this will get slow, but for smaller data sets it will work fine.
As Steve Michel mentions in his answer, this method can get very ugly for large tables. His suggestion is a good place to jump off from. If you know the approximate maximum integer PK on the table, you can do something like generating a random number between one and your max PK value, then grab random rows one at a time like:
$q="SELECT * FROM table WHERE id >= {$myRandomValue}";
$row = $db->fetchOne($q); //or whatever CI's interface to grab a single is like
Of course, if you need 3 random rows, you'll have three queries here, but as they're entirely on the PK, they'll be fast(er than randomizing the whole table).
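For example, something along these lines (a sketch; $db, the table name and $maxId are assumptions, and the table is assumed to have at least three rows):

$randomRows = array();
$seenIds    = array();
while (count($randomRows) < 3) {
    $myRandomValue = mt_rand(1, $maxId);   // $maxId: approximate maximum PK value (assumed known)
    $row = $db->fetchOne("SELECT * FROM table WHERE id >= {$myRandomValue} ORDER BY id LIMIT 1");
    if ($row && !in_array($row['id'], $seenIds)) {
        $seenIds[]    = $row['id'];
        $randomRows[] = $row;              // keep three distinct rows
    }
}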
I would do something like:
SELECT * FROM table ORDER BY RAND() LIMIT 1;
This will put the data in a random order and then return only the first row from that random order.
I have this piece of code in production to get a random quote. Using MySQL's RAND function was super slow. Even with 100 quotes in the database, I was noticing a lag time on the website. With this, there was no lag at all.
$result = mysql_query('SELECT COUNT(*) FROM quotes');
$count = mysql_fetch_row($result);
$id = rand(1, $count[0]);
$result = mysql_query("SELECT author, quote FROM quotes WHERE id=$id");
you need a query like this:
SELECT *
FROM tablename
WHERE somefield='something'
ORDER BY RAND() LIMIT 3
It is taken from the second result of
http://www.google.com/search?q=mysql+random
and it should work ;)
Ordering by rand() can be very expensive if the table is very large, because MySQL will need to build a temporary table and sort it. If you have a primary key and you know how many rows are in the table, use LIMIT x,1 to grab a random row, where x is the (zero-based) number of the row you want to get.