How can I optimise this MySQL query? - php

I am using the following MySQL query in a PHP script on a database that contains over 300,000,000 (yes, three hundred million) rows. I know that it is extremely resource intensive and it takes ages to run this one query. Does anyone know how I can either optimise the query or get the information in another way that's quicker?
I need to be able to use any integer between 1 and 15 in place of the 14 in MID(). I also need to be able to match strings of lengths within the same range in the LIKE clause.
Table Info:
game | longint, unsigned, Primary Key
win | bit(1)
loss | bit(1)
Example Query:
SELECT MID(`game`,14,1) AS `move`,
COUNT(*) AS `games`,
SUM(`win`) AS `wins`,
SUM(`loss`) AS `losses`
FROM `games`
WHERE `game` LIKE '1112223334%'
GROUP BY MID(`game`,1,14)
Thanks in advance for your help!

First, have an index on the game field... :)
The query seems simple and straightforward, but it hides the fact that a database design change is probably required.
In such cases I always prefer to maintain a field that holds aggregated data, either per day, per user, or per any other axis. This way you can have a daily task that aggregates the relevant data and saves it in the database.
If you really do call this query often, you should apply the principle of sacrificing some insertion efficiency to gain retrieval efficiency.
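For example, here is a minimal sketch of that idea, assuming a hypothetical summary table keyed on the 14-character prefix and refreshed by a periodic (e.g. daily) job; all names are made up:

CREATE TABLE game_stats (
  prefix CHAR(14) NOT NULL PRIMARY KEY,
  games  INT UNSIGNED NOT NULL,
  wins   INT UNSIGNED NOT NULL,
  losses INT UNSIGNED NOT NULL
);

-- periodic refresh of the aggregates
REPLACE INTO game_stats (prefix, games, wins, losses)
SELECT MID(`game`, 1, 14), COUNT(*), SUM(`win`), SUM(`loss`)
FROM `games`
GROUP BY MID(`game`, 1, 14);

The original query then becomes a cheap read of game_stats WHERE prefix LIKE '1112223334%'.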

It looks like the game column is storing two (or possibly more) different things that this query is using:
Filtering by the start of game (first 10 characters)
Grouping by and returning MID(game,1,14) (I'm assuming one of the MID expressions is a typo).
I'd split that up so that you don't have to use string operations on the game column, and also put indexes on the new columns so you can filter and group them properly.
This query is doing a lot of conversions (long to string) and string manipulations that wouldn't be necessary if the table were normalized (as in one piece of information per column instead of multiple like it is now).
Leave the game column the way it is, and create a game_filter string column based on it to use in your WHERE clause. Then set up a game_group column and populate it with the MID expression on insert. Set up these two columns as your clustered index, first game_filter, then game_group.
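A rough sketch of that change (a plain composite index is shown here in place of a clustered index; the column and index names are hypothetical, and the application would keep the two derived columns in sync on insert):

ALTER TABLE `games`
  ADD COLUMN game_filter CHAR(10) NOT NULL DEFAULT '',
  ADD COLUMN game_group  CHAR(14) NOT NULL DEFAULT '';

-- one-off backfill of the derived columns
UPDATE `games`
SET game_filter = MID(`game`, 1, 10),
    game_group  = MID(`game`, 1, 14);

CREATE INDEX ix_games_filter_group ON `games` (game_filter, game_group);

The WHERE clause can then use game_filter = '1112223334' and the GROUP BY can use game_group, with no string functions at query time.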

The query is simple and, aside from making sure all the necessary indexes are in place (the "game" field, obviously), there may be no obvious way to make it faster by rewriting the query alone.
Some modification of data structures will probably be necessary.
One way: precalculate the sums. Each of these records will most likely have a create_date or an auto-incremented key field. Precalculate the sums for all records where this field is ≤ some X, put the results in a side table, and then you only need to calculate over the records > X and combine those partial results with your precalculated ones.
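A rough sketch of how the two halves could be combined at query time, assuming a hypothetical auto-increment id column, a watermark @X, and a side table games_presum(move, games, wins, losses) precalculated for all rows with id <= @X:

SELECT move, SUM(games) AS games, SUM(wins) AS wins, SUM(losses) AS losses
FROM (
    SELECT move, games, wins, losses
    FROM games_presum
    WHERE move LIKE '1112223334%'
  UNION ALL
    SELECT MID(`game`, 1, 14), COUNT(*), SUM(`win`), SUM(`loss`)
    FROM `games`
    WHERE id > @X AND `game` LIKE '1112223334%'
    GROUP BY MID(`game`, 1, 14)
) t
GROUP BY move;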

You could precompute the MID(game,14,1) and MID(game,1,14) and store the first ten digits of the game in a separate gameid column which is indexed.
It might also be worth investigating whether you could just maintain an aggregate table of the precomputed values, so that on insert you increment the count and the wins or losses columns instead.
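A sketch of that incremental variant, assuming a hypothetical aggregate table game_stats(prefix CHAR(14) PRIMARY KEY, games INT, wins INT, losses INT) and that @g, @w and @l hold the values of the game row being inserted:

INSERT INTO game_stats (prefix, games, wins, losses)
VALUES (MID(@g, 1, 14), 1, @w, @l)
ON DUPLICATE KEY UPDATE
  games  = games  + 1,
  wins   = wins   + VALUES(wins),
  losses = losses + VALUES(losses);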

SELECT MID(`game`,14,1) AS `move`,
COUNT(*) AS `games`,
SUM(`win`) AS `wins`,
SUM(`loss`) AS `losses`
FROM `games`
WHERE `game` LIKE '1112223334%'
Create an index on game:
CREATE INDEX ix_games_game ON games (game)
and rewrite your query as this:
SELECT move,
       (
       SELECT COUNT(*)
       FROM games
       WHERE game >= move
         AND game < CONCAT(SUBSTRING(move, 1, 13), CHAR(ASCII(SUBSTRING(move, 14, 1)) + 1))
       ),
       (
       SELECT SUM(win)
       FROM games
       WHERE game >= move
         AND game < CONCAT(SUBSTRING(move, 1, 13), CHAR(ASCII(SUBSTRING(move, 14, 1)) + 1))
       ),
       (
       SELECT SUM(loss)
       FROM games
       WHERE game >= move
         AND game < CONCAT(SUBSTRING(move, 1, 13), CHAR(ASCII(SUBSTRING(move, 14, 1)) + 1))
       )
FROM  (
       SELECT DISTINCT SUBSTRING(game, 1, 14) AS move
       FROM games
       WHERE game LIKE '1112223334%'
       ) q
This will help to use the index on game more efficiently.

Can you cache the result set with Memcache or something similar? That would help with repeated hits. Even if you only cache the result set for a few seconds, you might be able to avoid a lot of DB reads.

Related

Count rows until value is reached

I am trying to find a way to count the number of rows until a certain value is reached. Here's roughly how my table is set up.
ID Quantity
1 10
2 30
3 20
4 28
Basically, I want to order the rows by quantity from greatest to least, and then count how many rows it takes to get from the highest quantity down to whatever ID you supply. So for example, if I was looking for ID #4, it would go through the quantities from greatest to least and tell me that it is row #2, because it took only 2 rows to reach it since it contains the 2nd highest quantity.
There is another way I can code this, but I feel it is too resource demanding and involves PHP. I can loop over the rows ordered from greatest to least and add +1 on every iteration, and use an IF statement to determine when it reaches my value. However, when there are thousands of values to go through, I feel like that would be too resource demanding.
Overall, this is a simple sort problem. Any data structure can give you the row of an item, with minor modifications in some cases.
If you are planning on using this operation multiple times, it is possible to beat the theoretical O(n log(n)) running time with an amortized O(log(n)) by maintaining a separate sorted copy of your table sorted by quantity. This reduces the problem to a binary search.
A third alternative is to maintain a virtual linked list of table entries in the new sort order. This would increase the insert time into the table to O(n), but would reduce this problem to O(1).
A fourth solution would be to maintain a virtual balanced tree, however, despite the good theoretical performance, this solution is likely to be extremely hard to implement.
It might not be the answer you are expecting, but: you can't "stop" the execution of a query after you reach a certain value. MySQL always generates the full result set before you can analyse it. This is because, in order to sort the results by Quantity, MySQL needs to have all the rows.
So if you want to do this in pure MySQL, you need to count the row numbers (as explained in MySQL - Get row number on select) in a temporary table and then select your ID from there.
Example:
SET @rank = 0;
SELECT *
FROM (
    SELECT Id, Quantity, @rank := @rank + 1 as rank
    FROM `table`
    ORDER BY Quantity DESC
) as ordered_table
WHERE Id = 4;
If performance is an issue, you could probably speed this up a bit with an index on Quantity (to be tested). Otherwise the best way is to store the "rank" value in a separate table (containing only 2 columns: Id and Rank), possibly with a trigger to refresh the table on insert/update.
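A rough sketch of that separate rank table, with hypothetical names, rebuilt in one pass whenever the data changes (for example from a scheduled job or application code):

CREATE TABLE quantity_rank (
  Id     INT PRIMARY KEY,
  `Rank` INT NOT NULL
);

-- rebuild after the data changes
TRUNCATE quantity_rank;
SET @rank = 0;
INSERT INTO quantity_rank (Id, `Rank`)
SELECT Id, (@rank := @rank + 1)
FROM `table`
ORDER BY Quantity DESC;

-- the lookup is then a primary-key read
SELECT `Rank` FROM quantity_rank WHERE Id = 4;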

Compare table rows, big data amount

I have quite an interesting task, but I don't know how to describe it in one word in order to search for related topics. Even this topic title might not reflect what I need, so if somebody has a better title - welcome.
I'll try to explain my problem.
I have about 100,000 rows in MySQL db table. And I need to "compare" entries from the table.
"compare" doesn't mean just equal. There is an algorithm for calculation comparison level. I have weight coefficient for each table column. Means that if entry#1's column1 equals to entry#2's column2 then I give, say, 5 point to this pair. And so on for each column.
The most straight forward way to do this - apply calculation rules for each couple of entries. Why am I afraid of this? 100,000 entries means about 5 billion "compare" operations. For sure, I can calculate this on demand and store the result somewhere in cache. But I believe that the most obvious way is not the most effective.
So, my first question is: Is there any other better way to achive my goal except of brute force?
My second question is about which tool is better for the calculations:
1. The application language is PHP, so load the whole table into memory and iterate over the data in PHP.
2. Create a stored procedure in MySQL.
3. Use MongoDB's aggregation framework or MapReduce.
I like the first option the least and the last one the most.
I'm looking for any suggestion or advice from people who have experience with this sort of case.
Since I don't know how to ask Google for help, any links will be appreciated.
UPDATE:
The calculation rules are a bit more complicated than I described...
The table has a set of related columns which are to be used together as a group (not one by one).
Let's assume:
the table has fields, say, tag_1, tag_2, ..., tag_n;
row_1 and row_2 are entries in the table.
The rule(pseudo-code):
if(row_1.tag_1==row_2.tag_1)
{
// gives 10 points
}
elseif(row_1.tag_1 is in row_2.tags && row_1.tag_1!=row_2.tag_1)
{
// gives 5 points
}
....
// and so on
Basically, I need to find the intersection of two arrays. If it is not empty, points are given. If the indexes of the tags in the two rows also match, additional points are given.
I'm wondering how this can be accomplished in the stored procedure language, because it can be done pretty easily in any programming language.
If a stored procedure can do this, then it is my choice.
If you have a static table, then it doesn't make a difference which you choose, so long as you store the results somewhere (presumably back in the database).
If your data is changing, then you need to compare each new row to all rows, which is essentially a full-table scan. This is probably best done in a database.
If the data fits into memory (and 500,000 rows should fit into memory), then (2) will probably be faster than (3) on equivalent hardware. "Equivalent hardware" is a very important consideration.
In most cases, I would opt for (2). It sounds like the query is something like:
select t1.id, t2.id,
       ((case when t1.col1 = t2.col1 then 5 else 0 end) +
        (case when t1.col2 = t2.col2 then 7 else 0 end) +
        . . .
       )
from t t1 cross join
     t t2
If you are much more comfortable with map-reduce, then you might find it easier to code there. I know both languages and prefer SQL for something like this.
Can't you do something like this:
UPDATE table SET points = points+5 WHERE column1 = column2
If you have to check for a specific value, you could try something like this:
UPDATE table SET points = points+5 WHERE column1 = 'somevalue' AND column2 = 'somevalue'

Should one use/create as many indices as possible in MySQL?

I realized that the response to a MySQL query becomes much faster when creating an index for the column you use in "ORDER BY", e.g.
SELECT username FROM table ORDER BY registration_date DESC
Now I'm wondering which indices I should create to optimize the request time.
For example I frequently use the following queries:
SELECT username FROM table WHERE
registration_date > ".(time() - 10000)."
SELECT username FROM table WHERE
registration_date > ".(time() - 10000)."
&& status='active'
SELECT username FROM table WHERE
status='active'
SELECT username FROM table ORDER BY registration_date DESC
SELECT username FROM table WHERE
registration_date > ".(time() - 10000)."
&& status='active'
ORDER BY birth_date DESC
Question 1:
Should I set up separate indices for the first three request types? (i.e. one index for the column "registration_date", one index for the column "status", and another index for the combination of both?)
Question 2:
Are different indices used independently for "WHERE" and for "ORDER BY"? Say I have a combined index for the columns "status" and "registration_date", and another index only for the column "birth_date". Should I set up another combined index for the three columns ("status", "registration_date" and "birth_date")?
There are no hard-and-fast rules for indices or query optimization. Each case needs to be considered and examined.
Generally speaking, however, you can and should add indices to columns that you frequently sort by or use in WHERE clauses. (Answer to Question 2 -- no, the same indices are potentially used for both ORDER BY and WHERE.) Whether to use a multi-column index or a single-column one depends on the frequency of the queries. Also, you should note that single-column indices may be combined by MySQL using the Index Merge Optimization:
The Index Merge method is used to retrieve rows with several range
scans and to merge their results into one. The merge can produce
unions, intersections, or unions-of-intersections of its underlying
scans. This access method merges index scans from a single table; it
does not merge scans across multiple tables.
(more reading: http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html)
Multi-column indices also require that you take care to structure your queries in such a way that your use of indexed columns matches the column order in the index:
MySQL cannot use an index if the columns do not form a leftmost
prefix of the index. Suppose that you have the SELECT statements shown
here:
SELECT * FROM tbl_name WHERE col1=val1;
SELECT * FROM tbl_name WHERE col1=val1 AND col2=val2;
SELECT * FROM tbl_name WHERE col2=val2;
SELECT * FROM tbl_name WHERE col2=val2 AND col3=val3;
If an index exists on (col1, col2, col3), only the first two queries
use the index. The third and fourth queries do involve indexed
columns, but (col2) and (col2, col3) are not leftmost prefixes of
(col1, col2, col3).
Bear in mind that indices DO have a performance consideration of their own -- it is possible to "over-index" a table. Each time a record is inserted or an indexed column is modified, the index/indices will have to be rebuilt. This does demand resources, and depending on the size and structure of your table, it may cause a decrease in responsiveness while the index building operations are active.
Use EXPLAIN to find out exactly what is happening in your queries. Analyze, experiment, and don't over-do it. The shotgun approach is not appropriate for database optimization.
Documentation
MySQL EXPLAIN - http://dev.mysql.com/doc/refman/5.0/en/explain.html
How MySQL uses indices - http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
Index Merge Optimization - http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html
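For instance, to check whether one of the queries above can use an index (the literal timestamp is just a placeholder for the PHP-generated value):

EXPLAIN SELECT username FROM `table`
WHERE status = 'active'
  AND registration_date > 1650000000
ORDER BY birth_date DESC;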
To quote this page:
[Indices] will slow down your updates and inserts.
That's the tradeoff you have to calculate. To optimize your table, you should put indices only on the columns you are most likely to apply conditions to - the more indices you have, the slower your data-changing operations become. In that sense, I personally don't see much merit in creating combined indices - if you create all 7 possible combinations of indices for 3 columns, you are most definitely putting more drag on your updates and inserts than just using 3 indices for 3 columns (and even that can be debatable). On the other hand, if the data is being edited much, much less than it is being SELECTed, then indices can really help you speed things up.
Something else to take into consideration (again quoting the above page):
If your table is very small [...] it's worse to use an index than to leave it out and just let it do a table scan. Indexes really only come in handy with tables that have a lot of rows.
Yes, it is a good idea to have indexes on the columns you often use, both in ORDER BY and in your WHERE clauses.
But be aware: UPDATEs, INSERTs and DELETEs slow down if you have indexes.
That is because after such an operation, the indexes must be updated too.
So, as a rule of thumb: if your application is read-intensive, use indexes where you think they will help.
If your application updates the data often, be careful, because the indexes may slow that down.
When in doubt, you simply have to get your hands dirty and study the results of EXPLAIN.
http://dev.mysql.com/doc/refman/5.6/en/explain.html
As for the first two examples, you can satisfy them with one index: {registration_date, status}. Such an index can support filters on the first item (registration_date) or on both.
It does not work for status alone, however. The question with status is how selective it is; that is, what proportion of records have status = 'active'. If this is a high proportion (so that, on average, every database page would contain an active record), then an index may not help very much.
The ORDER BYs are trickier. I don't know if MySQL uses indexes for this purpose. Often, using an index for sorting entire records is less efficient than just sorting the records: using the index causes a random access pattern to the records in the pages, which can cause major performance problems for tables larger than the page cache.
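As a concrete illustration of the composite index suggested above (the index names are made up and `table` is just the asker's placeholder):

CREATE INDEX ix_reg_status ON `table` (registration_date, status);
-- only if status alone turns out to be selective enough:
CREATE INDEX ix_status ON `table` (status);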
Use the explain function on your select statements to determine where your joins are slowing down (the more rows that are referenced, the slower it will be). Then apply your indices to those columns.
EXPLAIN SELECT * FROM table JOIN table2 ON a = b WHERE conditions;

GROUP BY and ORDER BY too slow. How to make faster?

I've been trying to create some stats for my table, but it has over 3 million rows so it is really slow.
I'm trying to find the most popular value for the `name` column and also show how many times it pops up.
I'm using this at the moment, but it doesn't work because it's too slow and I just get errors.
$total = mysql_query("SELECT `name`, COUNT(*) as b FROM `people` GROUP BY `name` ORDER BY `b` DESC LIMIT 0,5;")or die(mysql_error());
As you can see, I'm trying to get all the names and how many times each name has been used, but only show the top 5, to hopefully speed it up.
I would then like to be able to get the values like this:
while ($row = mysql_fetch_array($total)) {
    echo $row['name'].': '.$row['b']."\r\n";
}
And it will show things like this:
Bob: 215
Steve: 120
Sophie: 118
RandomGuy: 50
RandomGirl: 50
I don't care much about the ordering of names with equal counts, like RandomGirl and RandomGuy being the wrong way round.
I think I have provided enough information. :) I would like the names to be case-insensitive if possible, though. Bob should be the same as BoB, bOb, BOB and so on.
Thank-you for your time
Paul
Limiting the results to the top 5 won't give you much of a speed-up; you'll save time in result retrieval, but on the MySQL side the whole table still needs to be scanned (to count).
You will speed up your count query by having an index on the name column, since then only the index will be scanned and not the table.
Now, if you really want to speed up the result and avoid scanning the name index each time you need it (which will still be quite slow if you really have millions of rows), then the only other solution is computing the stats when inserting, deleting or updating rows in this table - that is, using triggers on this table to maintain a statistics table alongside it. Then you will only need a simple select on this statistics table, reading only 5 rows. But you will slow down your insert, delete and update operations (which are already quite slow, especially if you maintain indexes), so if the stats are important you should study this solution.
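A minimal sketch of that trigger approach (the table, trigger and column definitions are hypothetical; a corresponding DELETE trigger would decrement the counter in the same way):

CREATE TABLE name_counts (
  name VARCHAR(255) NOT NULL PRIMARY KEY,
  cnt  INT UNSIGNED NOT NULL
);

CREATE TRIGGER people_after_insert AFTER INSERT ON people
FOR EACH ROW
  INSERT INTO name_counts (name, cnt) VALUES (NEW.name, 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;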
Do you have an index on name? It might help.
Since you are doing the counting/grouping and then sorting, an index on name doesn't help at all: MySQL has to go through all the rows every time, and there is no way to optimize this query as it stands. You need to have a separate stats table like this:
CREATE TABLE name_stats( name VARCHAR(n), cnt INT, UNIQUE( name ), INDEX( cnt ) )
and you should update this table whenever you add a new row to the 'people' table, like this:
INSERT INTO name_stats VALUES( 'Bob', 1 ) ON DUPLICATE KEY UPDATE cnt = cnt + 1;
Querying this table for the list of top names should give you the results instantaneously.
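For example, the top-5 lookup would then be a small indexed read:

SELECT name, cnt FROM name_stats ORDER BY cnt DESC LIMIT 5;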

Running queries on tables with more than 1million rows in

I am indexing all the columns that I use in my WHERE / ORDER BY clauses; is there anything else I can do to speed the queries up?
The queries are very simple, like:
SELECT COUNT(*)
FROM TABLE
WHERE user = id
AND other_column = 'something'
I am using PHP 5, MySQL client version: 4.1.22 and my tables are MyISAM.
Talk to your DBA. Run your local equivalent of showplan. For a query like your sample, I would suspect that a covering index on the columns id and other_column would greatly speed up performance. (I assume user is a variable or niladic function).
A good general rule is the columns in the index should go from left to right in descending order of variance. That is, that column varying most rapidly in value should be the first column in the index and that column varying least rapidly should be the last column in the index. Seems counter intuitive, but there you go. The query optimizer likes narrowing things down as fast as possible.
If all your queries include a user id then you can start with the assumption that userid should be included in each of your indexes, probably as the first field. (Can we assume that the user id is highly selective? i.e. that any single user doesn't have more than several thousand records?)
So your indexes might be:
user + otherfield1
user + otherfield2
etc.
If your user id is really selective, like several dozen records, then just the index on that field should be pretty effective (sub-second return).
What's nice about a "user + otherfield" index is that mysql doesn't even need to look at the data records. The index has a pointer for each record and it can just count the pointers.
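As a sketch, assuming user really is the column being filtered on (the index name is made up and your_table stands for the real table name):

CREATE INDEX ix_user_other ON your_table (user, other_column);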
