MySQL 5.7 InnoDB DELETE query randomly very slow - PHP

I have a table with 5 simple fields. The total number of rows in the table is about 250.
When I run a single DELETE query from phpMyAdmin, it is always processed in 0.05 sec.
The problem is that when my PHP application (PDO connection) runs the same query among other queries, it is extremely slow (about 10 sec.). And another SELECT query, on a table with 5 rows, is slow too (about 1 sec.). It only happens sometimes!
The other queries (about 100 of them) always respond in normal time.
What could the problem be, and how can I find it?
Table:
CREATE TABLE `list_ip` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`type` CHAR(20) NOT NULL DEFAULT '',
`address` CHAR(50) NOT NULL DEFAULT '',
`description` VARCHAR(50) NOT NULL DEFAULT '',
`datetime` DATETIME NOT NULL DEFAULT '1000-01-01 00:00:00',
PRIMARY KEY (`id`),
INDEX `address` (`address`),
INDEX `type` (`type`),
INDEX `datetime` (`datetime`) ) COLLATE='utf8_general_ci' ENGINE=InnoDB;
Query:
DELETE FROM list_ip WHERE address='1.2.3.4' AND type='INT' AND datetime<='2017-12-06 08:04:30';
As I said before, the table has only about 250 rows. The size of the table is 96 KiB.
I also tested with an empty table, and it is slow too.

Wrap your query in EXPLAIN and see whether it is doing a sequential scan instead of using indexes. EXPLAIN would be my first stop in determining whether I have a data-model problem (bad or missing indexes would be one such issue).
About EXPLAIN: https://dev.mysql.com/doc/refman/5.7/en/explain.html
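As a quick sketch using the table and query from the question (MySQL 5.7 can EXPLAIN a DELETE directly):
EXPLAIN DELETE FROM list_ip
WHERE address = '1.2.3.4'
  AND type = 'INT'
  AND datetime <= '2017-12-06 08:04:30';
-- Look at the key and rows columns in the output: key = NULL with a high row
-- estimate would indicate the indexes are not being used for this query.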
Another tool I'd recommend is running 'mytop' and looking at the server activity/load during those times when it's bogging down. http://jeremy.zawodny.com/mysql/mytop/

It turned out to be a network problem. I uninstalled a Docker app along with some of its network interfaces, and it looks much better now.

Related

Doctrine / MySQL Slow query even when using indexes

I cleaned the question a little bit because it was getting very big and unreadable.
Running on my localhost.
The query takes 755.15 ms when selecting from the table Job, which contains 15,000 rows (the WHERE conditions match 6,650 of them).
The table Company contains 1,000 rows.
The table geo__name contains approximately 84,300 rows and is not giving me any problem, so I believe the problem is the database structure or something similar.
The structure of these 2 tables is the following:
Table Job is:
CREATE TABLE `job` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`company_id` int(11) NOT NULL,
`activity_sector_id` int(11) DEFAULT NULL,
`status` int(11) NOT NULL,
`active` datetime NOT NULL,
`contract_type_id` int(11) NOT NULL,
`salary_type_id` int(11) NOT NULL,
`workday_id` int(11) NOT NULL,
`geoname_id` int(11) NOT NULL,
`title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`minimum_experience` int(11) DEFAULT NULL,
`min_salary` decimal(7,2) DEFAULT NULL,
`max_salary` decimal(7,2) DEFAULT NULL,
`zip_code` int(11) DEFAULT NULL,
`vacancies` int(11) DEFAULT NULL,
`show_salary` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `created_at` (`created_at`,`active`,`status`) USING BTREE,
CONSTRAINT `FK_FBD8E0F823F5422B` FOREIGN KEY (`geoname_id`) REFERENCES `geo__name` (`id`),
CONSTRAINT `FK_FBD8E0F8398DEFD0` FOREIGN KEY (`activity_sector_id`) REFERENCES `activity_sector` (`id`),
CONSTRAINT `FK_FBD8E0F85248165F` FOREIGN KEY (`salary_type_id`) REFERENCES `job_salary_type` (`id`),
CONSTRAINT `FK_FBD8E0F8979B1AD6` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
CONSTRAINT `FK_FBD8E0F8AB01D695` FOREIGN KEY (`workday_id`) REFERENCES `workday` (`id`),
CONSTRAINT `FK_FBD8E0F8CD1DF15B` FOREIGN KEY (`contract_type_id`) REFERENCES `job_contract_type` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The table company is:
CREATE TABLE `company` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`logo` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`website` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`user_id` int(11) NOT NULL,
`phone` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`cifnif` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`type` int(11) NOT NULL,
`subscription_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_4FBF094FA76ED395` (`user_id`),
KEY `IDX_4FBF094F9A1887DC` (`subscription_id`),
KEY `name` (`name`(191)),
CONSTRAINT `FK_4FBF094F9A1887DC` FOREIGN KEY (`subscription_id`) REFERENCES `subscription` (`id`),
CONSTRAINT `FK_4FBF094FA76ED395` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The query is the following:
SELECT
j0_.id AS id_0,
j0_.status AS status_1,
j0_.title AS title_2,
j0_.min_salary AS min_salary_3,
j0_.max_salary AS max_salary_4,
c1_.id AS id_5,
c1_.name AS name_6,
c1_.logo AS logo_7,
a2_.id AS id_8,
a2_.name AS name_9,
g3_.id AS id_10,
g3_.name AS name_11,
j4_.id AS id_12,
j4_.name AS name_13,
j5_.id AS id_14,
j5_.name AS name_15,
w6_.id AS id_16,
w6_.name AS name_17
FROM
job j0_
INNER JOIN company c1_ ON j0_.company_id = c1_.id
INNER JOIN activity_sector a2_ ON j0_.activity_sector_id = a2_.id
INNER JOIN geo__name g3_ ON j0_.geoname_id = g3_.id
INNER JOIN job_salary_type j4_ ON j0_.salary_type_id = j4_.id
INNER JOIN job_contract_type j5_ ON j0_.contract_type_id = j5_.id
INNER JOIN workday w6_ ON j0_.workday_id = w6_.id
WHERE
j0_.active >= CURRENT_TIMESTAMP
AND j0_.status = 1
ORDER BY
j0_.created_at DESC
When executing the above query I have these results:
In MySQL Workbench: 0.578 sec / 0.016 sec
In Symfony profiler: 755.15 ms
The question is: is the duration of this query reasonable? If not, how can I improve the speed of the query? It seems too long.
The Symfony debug toolbar, the fetched columns (showing I'm only getting the data I really need), the EXPLAIN output, and the timeline were attached as screenshots, which are not reproduced here.
The MySQL server may simply not be able to handle the load being placed on it. This could be due to resource contention, because it has not been appropriately tuned, or it could even be a problem with your hard drive.
First, I would start by adding the MySQL keyword STRAIGHT_JOIN, which tells MySQL to join the tables in the order I have written them rather than working out the relationships itself. With your dataset being so small, and the query already at about half a second, I don't know how much that will help, but on larger datasets I have seen it SIGNIFICANTLY improve performance.
Next, you appear to be fetching lookup descriptions based on the PK/FK relationships. Not seeing the indexes on those tables, I would suggest creating covering indexes that contain both the key and the description, so the join can get the data directly from the index pages it already uses for the JOIN, instead of using the index page, then finding the actual data pages to get the description, and continuing.
Last, your job table with the index on (created_at, active, status) might perform better if the index were ordered as (status, active, created_at).
With your existing index, think of it this way: each day of data (by created_at) is put into its own box. Within each day's box the rows are grouped by the active timestamp (one smaller box per active day, if you simplify it), and only then by status.
So, for each CREATED day, you open a box. Inside it you look at secondary boxes, one for each active timestamp (say, per day). Only inside each of those can you finally check whether there are any status = 1 records. You open each active box, check for status = 1, close it, close the created-day box, move on to the next created day, and repeat. Consider how labor-intensive that is: opening every box per day, and every active box within each day.
Now, with the suggested index starting with status, you have a very finite number of boxes, one per status. You open only the single box for status = 1; those are the only records you want to consider, and all the others you don't care about. Inside that box, the records are sub-sorted by the active timestamp, so you can jump directly to those at or after the current timestamp. From the first matching record onward, everything in the box qualifies. Done. And since this index also contains created_at, MySQL can use it to optimize the descending sort as well.
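As a minimal sketch of creating that reordered index on the job table (the index name is arbitrary, and dropping the existing index is optional):
ALTER TABLE job
  DROP INDEX created_at,
  ADD INDEX status_active_created (status, active, created_at);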
To ensure "covering indexes" exist for the other lookup tables, if they do not yet, I suggest the following (a sketch of the corresponding ALTER TABLE statements follows the list).
table index
company ( id, name, logo )
activity_sector (id, name )
geo__name ( id, name )
job_salary_type ( id, name )
job_contract_type ( id, name )
workday ( id, name )
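Speaking loosely, those could be created along these lines; the column names for the lookup tables are assumptions based on the query above (each appears to have an id primary key and a name column), and the index names are arbitrary:
ALTER TABLE company           ADD INDEX cover_company (id, name, logo);
ALTER TABLE activity_sector   ADD INDEX cover_activity_sector (id, name);
ALTER TABLE geo__name         ADD INDEX cover_geo_name (id, name);
ALTER TABLE job_salary_type   ADD INDEX cover_job_salary_type (id, name);
ALTER TABLE job_contract_type ADD INDEX cover_job_contract_type (id, name);
ALTER TABLE workday           ADD INDEX cover_workday (id, name);
-- Depending on your InnoDB settings and the utf8mb4 varchar(255) columns,
-- prefix lengths (e.g. name(191)) may be required, as on the existing company index.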
And the MySQL Keyword...
SELECT STRAIGHT_JOIN (rest of query...)
There are several possible reasons why Symfony is slow.
1. Server fault
First, it could be the server's fault. Poor server performance can hurt your query time.
2. Data size and deferred rendering
Then comes the data size. As an example, a query on one of my projects has a 50 MB result size (currently about 20k rows).
Parsing 50 MB into HTML can take some time, mostly because of the loops.
Still, there are solutions for this, like deferred rendering.
Deferred rendering is quite simple: instead of parsing the data in your Twig template, you send all the data to a JavaScript variable and use JavaScript to parse/render the data once the DOM is loaded.
3. Query optimisation
As I wrote in a comment, you can check the following question, in which I explained why custom queries are important:
Are Doctrine relations affecting application performance?
In that question, you will read that order matters... It is in fact the most important thing.
While static data in your database is often inserted in the right order, that is rarely the case for dynamic data (data provided by users during the website's life).
This is why using ORDER BY in your query will often speed up the page rendering, as Doctrine won't be doing extra queries on its own.
As an example, one of my sites has about 700 entries displayed on the index page.
First, here is the query count while using findAll():
It shows 254 queries (253 duplicates) in 144 ms, plus 39 ms of render time.
Next, using the second parameter of findBy(), the ORDER BY clause, I get this result:
You can see the full query here (the screenshot is big).
Much better: only 1 query in 8 ms, and about the same render time.
But here I don't use any fields from the associations.
The moment I do, Doctrine will run some extra queries, and the query count and time will skyrocket.
In the end, it turns back into something like findAll().
And last, this is the custom query:
In this custom query, the query time went from 8 ms to 38 ms.
But, unlike the previous query, I get far more data in my result,
which prevents Doctrine from doing extra queries.
Again, ORDER BY matters in this query. Without it, I skyrocket back to 84 queries.
4. Partials
When you write a custom query, you can load partial objects instead of full entities.
As you said in your question, the description field seems to slow down your loading speed; with partials, you can avoid loading some fields from the table, which will speed up the query.
First, instead of your regular syntax, this is how you create the query builder:
$em=$this->getEntityManager();
$qb=$em->createQueryBuilder();
Just in case, I prefer to keep $em as a separate variable (if I want to fetch some class repository for example).
Then you can start your partial select. Careful: the first select cannot include any association fields:
$qb->select("partial job.{id, status, title, minimum_experience, min_salary, max_salary, zip_code, vacancies}")
->from(Job::class, "job");
Then you can add your associations:
$qb->addSelect("company")
->join("job.company", "company");
Or even add a partial association in case you don't need all the data of the association:
$qb->addSelect("partial activitySector.{id}")
->join("job.activitySector", "activitySector");
$qb->addSelect("partial job.{id, company_id, activity_sector_id, status, active, contract_type_id, salary_type_id, workday_id, geoname_id, title, minimum_experience, min_salary, max_salary, zip_code, vacancies, show_salary}");
5. Caches
You could also use various caches, like Zend OPcache for PHP; you will find some advice in this question: Why Symfony3 so slow?
There is also Varnish, an HTTP cache that can sit in front of your application.
That rounds up about everything I can share to lower your loading time.
I hope it proves useful and that you will be able to solve your problem.
There are a lot of keys on that table; try to minimize the number of keys.

Same MySQL query: long execution time, but short on an archive table with 6 million more records

I am a bit stumped by this weirdness.
I have a gps tracking app that logs gps points into a track_log table.
When I do a basic query on the running log table it takes about 50 seconds to complete:
SELECT * FROM track_log WHERE node_id = '26' ORDER BY time_stamp DESC LIMIT 1
When I run the exact same query on the archive table, to which I copied most of the logs in order to reduce the running table to about 1.2 million records, the result is surprising.
The archive table is 7.5 million records big.
The exact same query on the archive table runs in 0.1 seconds on the same server, even though it is six times bigger!
What's going on?
Here's the full Create Table schema:
CREATE TABLE `track_log` (
`id_track_log` INT(11) NOT NULL AUTO_INCREMENT,
`node_id` INT(11) DEFAULT NULL,
`client_id` INT(11) DEFAULT NULL,
`time_stamp` DATETIME NOT NULL,
`latitude` DOUBLE DEFAULT NULL,
`longitude` DOUBLE DEFAULT NULL,
`altitude` DOUBLE DEFAULT NULL,
`direction` DOUBLE DEFAULT NULL,
`speed` DOUBLE DEFAULT NULL,
`event_code` INT(11) DEFAULT NULL,
`event_description` VARCHAR(255) DEFAULT NULL,
`street_address` VARCHAR(255) DEFAULT NULL,
`mileage` INT(11) DEFAULT NULL,
`run_time` INT(11) DEFAULT NULL,
`satellites` INT(11) DEFAULT NULL,
`gsm_signal_status` DOUBLE DEFAULT NULL,
`hor_pos_accuracy` double DEFAULT NULL,
`positioning_status` char(1) DEFAULT NULL,
`io_port_status` char(16) DEFAULT NULL,
`AD1` decimal(10,2) DEFAULT NULL,
`AD2` decimal(10,2) DEFAULT NULL,
`AD3` decimal(10,2) DEFAULT NULL,
`battery_voltage` decimal(10,2) DEFAULT NULL,
`ext_power_voltage` decimal(10,2) DEFAULT NULL,
`rfid` char(8) DEFAULT NULL,
`pic_name` varchar(255) DEFAULT NULL,
`temp_sensor_no` char(2) DEFAULT NULL,
PRIMARY KEY (`id_track_log`),
UNIQUE KEY `id_track_log_UNIQUE` (`id_track_log`),
KEY `client_id_fk_idx` (`client_id`),
KEY `track_log_node_id_fk_idx` (`node_id`),
KEY `track_log_event_code_fk_idx` (`event_code`),
KEY `track_log_time_stamp_index` (`time_stamp`),
CONSTRAINT `track_log_client_id` FOREIGN KEY (`client_id`) REFERENCES `clients` (`client_id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `track_log_event_code_fk` FOREIGN KEY (`event_code`) REFERENCES `event_codes` (`event_code`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `track_log_node_id_fk` FOREIGN KEY (`node_id`) REFERENCES `nodes` (`id_nodes`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=8632967 DEFAULT CHARSET=utf8
TL;DR
Make sure the indexes are defined on both tables; for this query, node_id and time_stamp are good columns to index.
Defragment your table: https://dev.mysql.com/doc/refman/5.5/en/innodb-file-defragmenting.html (this could help, but should not make this much of a difference).
Make sure your query is not being blocked by other queries. If data is being inserted into the track_log table continuously, those queries might block yours. You can prevent this by changing the transaction isolation level; see https://dev.mysql.com/doc/refman/5.5/en/set-transaction.html for more information. Caution: be careful with this!
Indexes
I'm guessing this has something to do with the indexes you defined on the tables. Could you post the SHOW CREATE TABLE track_log output, and the output for your archive table as well? The query you are executing would require an index on node_id and time_stamp for optimal performance.
Defragmentation
Besides the indexes you defined on the table, this might have something to do with data fragmentation. I'm assuming you are using InnoDB as your table engine. Depending on your settings, either every table in the database is stored in a separate file, or all tables in the database are stored in a single file (the innodb_file_per_table variable). Those files never shrink in size. If your track_log table has grown to 8.7 million records, on disk it still takes up space for all those 8.7 million records.
If you have moved records from your track_log table to your archive table, that data may still be scattered from the beginning to the end of the physical file for track_log. If no index is defined on time_stamp, a full table scan is still required to order by the timestamp. This means reading the complete file from disk. Because the records you deleted still take up space in the file, this could make a difference.
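If fragmentation turns out to be the issue, one hedged option (it rebuilds the table, which takes time and locks on a table this size) is:
OPTIMIZE TABLE track_log;
-- For InnoDB this maps to ALTER TABLE ... FORCE, rebuilding the table and
-- reclaiming the space left behind by the rows moved to the archive table.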
Edit:
Transactions
Other transactions might be blocking your SELECT query. This can happen with the InnoDB engine. If you continuously insert a lot of data into your track_log table, those queries might block your query: it will have to wait until no other transactions are being performed on this table.
There is a way around this, but you should be careful with it. You are able to change the transaction isolation level of your query. By setting the transaction isolation level to READ UNCOMMITTED you will be able to read data while the other inserts are running, but it might not always give you the latest data. Whether you want to make that sacrifice depends on your situation. If you are going to alter the data and update it later, you generally do not want to change the transaction isolation level. But, for example, when showing statistics that do not always have to be accurate and up to date, this could be something that really speeds up your query.
I use this myself sometimes when I need to show statistics from large tables which are updated regularly.
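A minimal sketch of that, scoped to the current session only (the statement itself is described on the linked manual page):
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM track_log WHERE node_id = '26' ORDER BY time_stamp DESC LIMIT 1;
-- Switch back if the same connection is reused for work that needs consistent reads:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;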
This is almost certainly because your archive table has superior indexing to your track_log table.
To satisfy this query efficiently you need a compound index on (node_id, time_stamp). Why does this work? Because InnoDB and MyISAM indexes are so-called BTREE indexes, which means our intuition about searching them in order will work. Your query looks for a specific value of node_id, which means it can jump to that value in the index efficiently. The query then calls for the highest possible value of time_stamp related to that node_id value. Now that's in the same index, and in the right order to access it quickly too. So the row you need can be random-accessed, and MySQL doesn't have to hunt for it by scanning the table row by row. That scanning is almost certainly what's taking the time in your query.
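A sketch of adding that compound index (the index name is arbitrary):
ALTER TABLE track_log ADD INDEX node_id_time_stamp (node_id, time_stamp);
-- With this index, WHERE node_id = ... ORDER BY time_stamp DESC LIMIT 1 can be
-- answered from the end of one index range instead of scanning the table.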
Three things to keep in mind:
One: lots of indexes on single columns can't help a query as much as well-chosen compound indexes. Read this http://use-the-index-luke.com/
Two: SELECT * is usually harmful on a table with as many columns as the one you have shown. Instead, you should enumerate the columns you actually need in your SELECT query. That way MySQL doesn't have to sling as much data.
Three: The DOUBLE datatype is overkill for commercial-grade GPS data. FLOAT is plenty of precision.
Let us analyze your query:
SELECT * FROM track_log WHERE node_id = '26' ORDER BY time_stamp DESC LIMIT 1
The above-mentioned query first sorts all the data present in the table based on time_stamp and then returns the top row.
But when this query is executed on the archived table, the ORDER BY clause might be ignored (depending on compression and system settings), and hence it returns the first row it encounters in the table.
You may verify the output of the archived table by comparing the result with the actual latest row.

Efficiently send multiple mysqli queries from php

I'm currently trying to create a log parser for Call of Duty 4. The parser itself is in PHP and reads through every line of the logfile for a specific server, writing all the statistics to a database with mysqli. The databases are already in place and I'm fairly certain (with my limited experience) that they're well-organized. However, I'm not sure in what way I should send the update/insert queries to the database, or rather, which way is optimal.
My databases are structured as follows
-- --------------------------------------------------------
--
-- Table structure for table `servers`
--
CREATE TABLE IF NOT EXISTS `servers` (
`server_id` tinyint(3) unsigned NOT NULL auto_increment,
`servernr` smallint(1) unsigned NOT NULL default '0',
`name` varchar(30) NOT NULL default '',
`gametype` varchar(8) NOT NULL default '',
PRIMARY KEY (`server_id`),
UNIQUE KEY (`servernr`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- --------------------------------------------------------
--
-- Table structure for table `players`
--
CREATE TABLE IF NOT EXISTS `players` (
`player_id` tinyint(3) unsigned NOT NULL auto_increment,
`guid` varchar(8) NOT NULL default '0',
`fixed_name` varchar(30) NOT NULL default '',
`hide` smallint(1) NOT NULL default '0',
PRIMARY KEY (`player_id`),
UNIQUE KEY (`guid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- --------------------------------------------------------
--
-- Table structure for table `playerstats`
--
CREATE TABLE IF NOT EXISTS `playerstats` (
`pid` mediumint(9) unsigned NOT NULL auto_increment,
`guid` varchar(8) NOT NULL default '0',
`servernr` smallint(1) unsigned NOT NULL default '0',
`kills` mediumint(8) unsigned NOT NULL default '0',
`deaths` mediumint(8) unsigned NOT NULL default '0',
# And more stats...
PRIMARY KEY (`pid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
In short, servers and players contain unique entities, and they are combined in playerstats (i.e. statistics for a player in a specific server). In addition to the stats, they are also given a player id (pid) for use in later databases. Similarly, the database contains the tables weapons (unique weapons) and weaponstats (statistics for a weapon in a server), attachments and attachstats, and maps and mapstats. Once I get all of this working, I would like to implement more relations between these stats (i.e. a player's stats for a specific weapon in a specific server, using pid and wid).
The PHP parser copies the log of each server (there are 6 at the moment) over HTTP and then reads through them every 5 minutes (I'm not too sure on that yet). One can assume that during this parsing, every table has to be queried (either with UPDATE or INSERT) at least once (and probably a lot more). Right now, I have a number of options for how to send queries (that I know of):
1: Use regular queries, i.e.
$statdb = new mysqli($sqlserver,$user,$pw, $db);
foreach( $playerlist as $guid => $data ){
$query = "INSERT INTO `playerstats`
VALUES (NULL, '$guid', $servernr, $data[0], $data[1])";
$statdb->query($query);
}
2: Use multi query
$statdb = new mysqli($sqlserver,$user,$pw, $db);
foreach( $playerlist as $guid => $data ){
$query = "INSERT INTO `playerstats`
VALUES (NULL, '$guid', $servernr, $data[0], $data[1]);";
$totalquery .= $query;
}
$statdb->multi_query($totalquery);
3: Use prepared statements; I haven't actually tried this yet. It seems like a good idea, but then I have to make a prepared statement for every table (I think). Will that even be possible, and if so, will it be efficient?
4: As you might be able to see from the aforementioned code, I initially accumulate all the statistics for each player, weapon, map, etc. into an array. Once the parser has read through the entire file, it sends a query with those accumulated stats to the MySQL server. However, I have also seen (more often than not) in other log parsers that queries are sent whenever a new line of the logfile has been parsed, so something like:
UPDATE playerstats
SET kills = kills+1
WHERE guid = $guid
It doesn't seem very efficient to me, but then again I'm just starting out with both PHP and SQL, so what do I know :>
So, in short: what would be the most efficient way to query the database, considering that the log parser reads through every line one by one? Of course, any other advice or suggestion is always welcome.
5. Create a single multi-row insert query using MySQL's support for queries like
INSERT INTO table (fields) VALUES (data), (data), ...
It seems the most efficient of them all, prepared statements included.
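A hedged sketch against the playerstats table from the question; the column list follows the partial schema shown above, and the GUIDs and numbers are made up for illustration:
INSERT INTO playerstats (guid, servernr, kills, deaths)
VALUES ('1a2b3c4d', 1, 12, 5),
       ('5e6f7a8b', 1, 3, 9),
       ('9c0d1e2f', 2, 7, 7);
-- One statement, one round trip, one parse, instead of one INSERT per player.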
The most efficient way to me would be to scan the server every so often, say once every 5 minutes, then read the list of stats into an array (e.g. in 5 minutes 38 people have been on the server, so you have an array of 38 IDs, each with the accumulated stat changes for those 38 IDs that need to be written to the database). Run one query to check whether a user already has an existing ID in the stats, and then 2 more queries: one to create new users (a multi-row insert) and one to update existing users (a single query with a CASE update). That limits you to 3 queries every 5 minutes.
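As a rough illustration of the "single query with CASE update" part, assuming the per-player deltas were already accumulated in PHP (the GUIDs and numbers here are hypothetical):
UPDATE playerstats
SET kills  = kills  + CASE guid WHEN '1a2b3c4d' THEN 5 WHEN '5e6f7a8b' THEN 2 ELSE 0 END,
    deaths = deaths + CASE guid WHEN '1a2b3c4d' THEN 1 WHEN '5e6f7a8b' THEN 4 ELSE 0 END
WHERE servernr = 1
  AND guid IN ('1a2b3c4d', '5e6f7a8b');
-- One UPDATE covers all players on the server instead of one statement per log line.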

Comparison time for 2 large MySQL database tables

I have imported 2 .csv files that I want to compare into MySQL tables. Now I want to compare them using a join.
However, whenever I include both tables in my queries, I get no response from phpMyAdmin (sometimes it shows 'max execution time exceeded').
The record count in both DB tables is 73k max. I don't think that's a huge amount of data. Even a simple query like
SELECT *
FROM abc456, xyz456
seems to hang. I did an EXPLAIN and got the output below. I don't know what to take from this.
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE abc456 ALL NULL NULL NULL NULL 73017
1 SIMPLE xyz456 ALL NULL NULL NULL NULL 73403 Using join buffer
Can someone please help?
UPDATE: added the structure of the table, with a composite key. There are around 100,000+ records that would be inserted into this table.
CREATE TABLE IF NOT EXISTS `abc456` (
`Col1` varchar(4) DEFAULT NULL,
`Col2` varchar(12) DEFAULT NULL,
`Col3` varchar(9) DEFAULT NULL,
`Col4` varchar(3) DEFAULT NULL,
`Col5` varchar(3) DEFAULT NULL,
`Col6` varchar(40) DEFAULT NULL,
`Col7` varchar(200) DEFAULT NULL,
`Col8` varchar(40) DEFAULT NULL,
`Col9` varchar(40) DEFAULT NULL,
`Col10` varchar(40) DEFAULT NULL,
`Col11` varchar(40) DEFAULT NULL,
`Col12` varchar(40) DEFAULT NULL,
`Col13` varchar(40) DEFAULT NULL,
`Col14` varchar(20) DEFAULT NULL,
KEY `Col1` (`Col1`,`Col2`,`Col3`,`Col4`,`Col5`,`Col6`,`Col7`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
It looks like you are doing a pure Cartesian join in your query.
Shouldn't you be joining the tables on certain fields? If you do that and the query still takes a long time to execute, you should add appropriate indexes to speed it up.
The reason that it is taking so long is that it is trying to join every single row of the first table to every single row of the second table.
You need a join condition, some way of identifying which rows should be matched up:
SELECT * FROM abc456, xyz456 WHERE abc456.id = xyz456.id
Add indexes on joining columns. That should help with performance.
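For instance, assuming the hypothetical id join column from the example above (the posted table only shows Col1 through Col14, so substitute your real join columns):
ALTER TABLE abc456 ADD INDEX idx_join_id (id);
ALTER TABLE xyz456 ADD INDEX idx_join_id (id);
-- With at least one side of the join condition indexed, MySQL can look up matching
-- rows instead of pairing two full scans through a join buffer, as the EXPLAIN shows now.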
Use MySQL Workbench or the MySQL client (console) for long queries. phpMyAdmin is not designed to display queries that return 100k rows :)
If you REALLY have to use phpMyAdmin and you need to run long queries, you can use a Firefox extension that prevents the phpMyAdmin timeout: phpMyAdmin Timeout Preventer (direct link!)
It is a direct link because I couldn't find an English description.

How can I make this MySQL Query faster?

I have a MySQL query that sometimes takes over 1 second to execute. The query is as follows:
SELECT `id`,`totaldistance` FROM `alltrackers` WHERE `deviceid`='FT_99000083426364' AND (`gpsdatetime` BETWEEN 1341100800 AND 1342483200) ORDER BY `id` DESC LIMIT 1
This query is run in a loop to retrieve rows on certain days of the month and such. This causes the page to take over 25 seconds to load sometimes...
The table structure is as follows:
CREATE TABLE IF NOT EXISTS `alltrackers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`deviceid` varchar(50) NOT NULL,
`lat` double NOT NULL,
`long` double NOT NULL,
`gpsdatetime` int(11) NOT NULL,
`version` int(11) DEFAULT NULL,
`totaldistance` int(11) NOT NULL DEFAULT '0',
`distanceprocessed` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `id_deviceid` (`id`,`deviceid`),
UNIQUE KEY `deviceid_id` (`deviceid`,`id`),
KEY `deviceid` (`deviceid`),
KEY `deviceid_gpsdatetime` (`deviceid`,`gpsdatetime`),
KEY `gpsdatetime_deviceid` (`gpsdatetime`,`deviceid`),
KEY `gpsdatetime` (`gpsdatetime`),
KEY `id_deviceid_gpsdatetime` (`id`,`deviceid`,`gpsdatetime`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=677242 ;
I have added all kinds of index combinations (please tell me which to remove) in order to try and get MySQL to use indices for the query, but to no avail.
Here is the EXPLAIN output:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE alltrackers index deviceid_id,deviceid,deviceid_gpsdatetime,gpsdatet... PRIMARY 4 NULL 677238 Using where
The reason I'm using ORDER BY ASC/DESC LIMIT 1 is because I need the first and last rows of the query. Would it be faster to just run the query without LIMIT 1 and use PHP to retrieve the first and last rows?
Many thanks for your help!
I can't say for your exact case, but I have found that querying all rows and ignoring all but the first is faster than a LIMIT statement (this was using SQL Server, though). It's easy to try: remove your LIMIT 1 clause and see. You can then use PHP to read the first and last rows, thus reducing the load on the MySQL instance (i.e. running a single query rather than 2).
Incidentally, why do you have 2 unique keys with the same columns in them, id_deviceid and deviceid_id? Remove all the indexes and then add them back in again; you really want as few indexes as possible for a fast DB.
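A hedged sketch of what a trimmed-down index set could look like, based on these suggestions; which indexes are truly redundant is an assumption here, so verify with EXPLAIN before and after:
ALTER TABLE alltrackers
  DROP INDEX id_deviceid,
  DROP INDEX deviceid_id,
  DROP INDEX deviceid,
  DROP INDEX gpsdatetime_deviceid,
  DROP INDEX id_deviceid_gpsdatetime;
-- This keeps PRIMARY (id), deviceid_gpsdatetime (deviceid, gpsdatetime) for this
-- query's WHERE clause, and gpsdatetime for any date-only lookups.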
A couple of things I can think of off the top of my head:
1.) I'm not a MySQL/DBA guru by any stretch, but the indexing there seems like a bit of overkill. Typically I'll make sure that columns that are either queried on or joined on are indexed, so you'd want one covering deviceid and gpsdatetime. One second per query isn't horrendous, so your returns here might be limited.
2.) Try to eliminate the looping. If you're going back to the database 25 times, you're going to incur overhead simply opening/closing connections and such. It might be faster to go to the database once and then process the results using PHP to get the final data you need.
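As a rough sketch of that "go to the database once" idea (not the answerer's code): one grouped query can return the first and last row id per day for the whole period, assuming id increases with gpsdatetime as it appears to in this table; a second query can then fetch totaldistance for just those ids.
SELECT FROM_UNIXTIME(gpsdatetime, '%Y-%m-%d') AS day,
       MIN(id) AS first_id,
       MAX(id) AS last_id
FROM alltrackers
WHERE deviceid = 'FT_99000083426364'
  AND gpsdatetime BETWEEN 1341100800 AND 1342483200
GROUP BY day;
-- Then: SELECT id, totaldistance FROM alltrackers WHERE id IN (...the collected ids...);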
