MySQL query running COUNT every time the same value is fetched - PHP

I am currently using Laravel's query builder to fetch all users and the number of open tickets they have.
The query looks like this:
return DB::table('users')
    ->leftJoin('tickets', 'users.organisation', '=', 'tickets.organisation')
    ->leftJoin('tickets_statuses', 'tickets.ticket_id', '=', 'tickets_statuses.ticket_id')
    ->select(
        'users.organisation',
        DB::raw('COUNT(tickets.ticket_id) as tickets')
    )
    ->where('tickets_statuses.status_id', 0)
    ->where('users.admin', false)
    ->groupBy('users.organisation')
    ->get();
The issue is that I may have 6 users that belong to the same organisation. So if 80 open tickets belong to that organisation, the query multiplies 80 x 6 and reports 480 open tickets when it should be 80.
/** EDIT **/
-- ----------------------------
-- Table structure for users
-- ----------------------------
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`password` varchar(60) COLLATE utf8_unicode_ci NOT NULL,
`organisation` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`parent_account` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`admin` tinyint(1) NOT NULL,
`remember_token` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
UNIQUE KEY `users_email_unique` (`email`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
+----+--------------------------+--------------------------------+---------------+
| id | name                     | email                          | organisation  |
+----+--------------------------+--------------------------------+---------------+
|  6 | Bruce Mayert MD          | Hodkiewicz.Nicolette#gmail.com | Corkery Group |
| 30 | Mr. Willard Bogisich III | Rowena13#gmail.com             | Corkery Group |
| 31 | Jacinthe Murphy          | Schuyler57#Pfeffer.org         | Corkery Group |
| 32 | Zelda Koss PhD           | iTillman#Spinka.biz            | Corkery Group |
| 33 | Mr. Kevon McCullough MD  | kMarks#Green.org               | Corkery Group |
| 34 | Prof. Cleveland Prohaska | Ibrahim.Schneider#hotmail.com  | Corkery Group |
+----+--------------------------+--------------------------------+---------------+
As you can see, multiple users belong to the same organisation. Now if I run the following query:
SELECT ticket_id FROM tickets WHERE organisation = 'Corkery Group';
I receive the following:
80 rows in set (0.00 sec)
From the query, I want to get the organisation name and a count of all the tickets that belong to that organisation.
When I run the original query, I get 480 back when it should only be 80.

There was a very good answer here, Why do the results of this MySQL query get multiplied by each other?, that pointed me in the right direction.
In my scenario, every ticket row was joined to every user in the same organisation, so with 6 users in one organisation each ticket was counted 6 times - hence 80 x 6 = 480.
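As an aside, counting distinct ticket ids would presumably also avoid the multiplication even with the users join left in place. Roughly, in raw SQL (just a sketch using the same table and column names as above):
-- COUNT(DISTINCT) collapses the duplicate rows produced by joining
-- every ticket to every user of the same organisation.
SELECT users.organisation,
       COUNT(DISTINCT tickets.ticket_id) AS tickets
FROM users
LEFT JOIN tickets ON users.organisation = tickets.organisation
LEFT JOIN tickets_statuses ON tickets.ticket_id = tickets_statuses.ticket_id
WHERE tickets_statuses.status_id = 0
  AND users.admin = 0
GROUP BY users.organisation;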
So going back to my query, I didn't need the users table at all. It was irrelevant, because I was only trying to get organisations that have tickets and the organisation name - all of which are stored in the tickets and tickets_statuses tables. That left me with a working query:
return DB::table('tickets')
    ->leftJoin('tickets_statuses', 'tickets.ticket_id', '=', 'tickets_statuses.ticket_id')
    ->select('tickets.organisation', DB::raw('COUNT(tickets.ticket_id) as tickets'))
    ->where('tickets_statuses.status_id', 0)
    ->groupBy('tickets.organisation')
    ->havingRaw('COUNT(tickets.ticket_id) > 1')
    ->get();
Thanks to everyone who replied!

Related

Update/insert large table records based on another table's reference ID

I have an internal inventory system with the three tables below.
a. Stocks - updated daily from a CSV file.
---------------------------------
| id | MODELNO | Discount | MRP |
---------------------------------
| 1 | MODEL_1 | 40% | 900 |
| 2 | MODEL_A | 20% | 600 |
---------------------------------
Every day this table is truncated and new stock data is imported from a merchant's CSV file (around 6 million records).
b. Cloths Master - The master clothes database
----------------------------------------
| ref_id | MODELNO | Name | MRP |
----------------------------------------
| 80 | MODEL_1 |Some Dress | 900 |
| 81 | MODEL_A |Another Dress| 600 |
----------------------------------------
MODELNO is unique and ref_id is the primary key. This table is part of the internal inventory application (it has around 4.5 million records).
c. Inventory table - part of the internal application
-------------------------------------------------
| id | ref_id | Name | MRP | status |
-------------------------------------------------
| 1 | 80 |Some Dress | 900 | ACTIVE |
| 2 | 81 |Another Dress| 600 | INACTIVE |
--------------------------------------------------
This table stores the available inventory for each product, based on the stocks table: if the discount is 40% or above, the product is ACTIVE; otherwise it is INACTIVE by default.
The required functionality is that every day I need to run a script that loops through the stocks table records and, for each MODELNO, updates the stock in the Inventory table; if the record does not exist in the Inventory table, it needs to be added.
What I have tried so far is a PHP script that would:
a. First, set the status of all records in the Inventory table to INACTIVE.
b. Then, for each record in the stocks table, check whether the MODELNO exists in the Cloths Master table.
c. If it exists, get the ref_id, check whether that ref_id exists in the Inventory table, and update/insert the record accordingly.
The problem is that the script takes more than 8 hours to complete.
Can you suggest an efficient way to implement the above functionality?
Note :
All the inserts and updates to the Inventory table are done using CodeIgniter's batch insert/update functions.
I set all the statuses to INACTIVE because there may be a few products that are not present in the stocks data.
The question that comes to mind in this case is: why not use a trigger?
Create a table so_stocks
CREATE TABLE IF NOT EXISTS `so_stocks` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`MODELNO` varchar(50) COLLATE utf8_general_ci NOT NULL DEFAULT '0',
`Discount` int(10) DEFAULT '0',
`MRP` int(11) DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
INSERT INTO `so_stocks` (`id`, `MODELNO`, `Discount`, `MRP`) VALUES
(1, 'MODEL_1', 40, 900),
(2, 'MODEL_A', 20, 600);
Create a table so_inventory
CREATE TABLE `so_inventory` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`ref_id` INT(11) NOT NULL DEFAULT '0',
`Name` VARCHAR(255) NOT NULL DEFAULT '0' COLLATE 'utf8_general_ci',
`MRP` INT(11) NOT NULL DEFAULT '0',
`status` TINYINT(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
AUTO_INCREMENT=1
;
And finally a table so_cloths
CREATE TABLE `so_cloths` (
`ref_id` INT(11) NOT NULL AUTO_INCREMENT,
`MODELNO` VARCHAR(50) NOT NULL DEFAULT '0' COLLATE 'utf8_general_ci',
`Name` VARCHAR(255) NOT NULL DEFAULT '0' COLLATE 'utf8_general_ci',
`MRP` INT(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`ref_id`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
AUTO_INCREMENT=1
;
And now the trigger
CREATE DEFINER=`root`@`::1` TRIGGER `so_cloths_after_insert` AFTER INSERT ON `so_cloths` FOR EACH ROW BEGIN
INSERT INTO so_inventory (ref_id,Name,MRP,status)
SELECT sc.ref_id, sc.Name, sc.MRP, IF(ss.Discount >= 40, 1, 0) AS active FROM so_cloths AS sc
LEFT JOIN so_stocks AS ss ON (sc.MODELNO = ss.MODELNO)
WHERE sc.ref_id = NEW.ref_id;
END
Every time you insert something into so_cloths, an insert will be made into so_inventory.
Obviously it depends on whether you want to insert the data after inserting into so_stocks or into so_cloths - you have to decide that - but the example should give you some insight.
The DEFINER in the trigger statement has to be changed to match your settings.
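For example, with the sample data above (just an illustration of the trigger firing, not part of the original setup):
-- Inserting a row into so_cloths fires the AFTER INSERT trigger...
INSERT INTO so_cloths (MODELNO, Name, MRP) VALUES ('MODEL_1', 'Some Dress', 900);
-- ...which copies the matching row into so_inventory; status is 1 here
-- because so_stocks lists MODEL_1 with a 40% discount.
SELECT * FROM so_inventory;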

Query execution time issue

I have some trouble with the execution time of a script. The query takes a lot of time. This is the query:
select avg(price) from voiture where duration<30 AND make="Audi" AND model="A4"
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | voiture | ALL  | NULL          | NULL | NULL    | NULL | 1376949 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
select price from voiture where duration<30 AND make="Audi" AND model="A4"
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | voiture | ALL  | NULL          | NULL | NULL    | NULL | 1376949 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
This query takes around 2 seconds to execute in the phpMyAdmin interface. I tried to see what the issue was, and removing the AVG function makes the query last around 0.0080 seconds.
I wondered how long it would take to calculate the average in the PHP script instead, but the query takes around 2 seconds with or without AVG.
So I decided to fetch all the values from my table and do the processing in the script, using this query:
select * from voiture where duration<30 AND make="Audi" AND model="A4"
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | voiture | ALL  | NULL          | NULL | NULL    | NULL | 1376949 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+---------+-------------+
In the phpMyAdmin interface, it takes 0.0112 seconds. But in my PHP script, it takes 25 seconds!
$timestart2=microtime(true);
$querysec='select * from voiture where duration<30 AND make="Audi" AND model="A4"';
$requestsec=mysql_query($querysec) or die(mysql_error());
$timeend2=microtime(true);
$time2=$timeend2-$timestart2;
$page_load_time2 = number_format($time2, 9);
Here is the table structure:
CREATE TABLE `voiture` (
`carid` bigint(9) NOT NULL,
`serviceid` bigint(9) NOT NULL,
`service` varchar(256) NOT NULL,
`model` varchar(256) DEFAULT NULL,
`gearingType` varchar(256) DEFAULT NULL,
`displacement` int(5) DEFAULT NULL,
`cylinders` int(2) DEFAULT NULL,
`fuel` varchar(32) DEFAULT NULL,
`mileage` int(7) DEFAULT NULL,
`existFlag` tinyint(1) DEFAULT NULL,
`lastUpdate` date DEFAULT NULL,
`version` varchar(256) DEFAULT NULL,
`bodyType` varchar(256) DEFAULT NULL,
`firstRegistration` date DEFAULT NULL,
`powerHp` int(4) DEFAULT NULL,
`powerKw` int(4) DEFAULT NULL,
`vat` varchar(256) DEFAULT NULL,
`price` decimal(12,2) DEFAULT NULL,
`duration` int(3) DEFAULT NULL,
`pageUrl` varchar(256) DEFAULT NULL,
`carImg` varchar(256) DEFAULT NULL,
`color` varchar(256) DEFAULT NULL,
`doors` int(1) DEFAULT NULL,
`seats` int(1) DEFAULT NULL,
`prevOwner` int(1) DEFAULT NULL,
`co2` varchar(256) DEFAULT NULL,
`consumption` varchar(256) DEFAULT NULL,
`gears` int(1) DEFAULT NULL,
`equipment` varchar(1024) NOT NULL,
`make` varchar(256) NOT NULL,
`country` varchar(3) NOT NULL
)
There's an index on carid and serviceid
Why does my query take so long to execute? Is there a way it can be improved?
Why is the execution time different between phpMyAdmin and my PHP script?
On the phpMyAdmin interface, it takes 0.0112 seconds. But in my php
script, it takes 25 seconds!
The phpMyAdmin interface adds a LIMIT to each query; by default it's LIMIT 30.
To decrease the time of your aggregate query you need to create indexes for each condition you use (or maybe one composite index).
So, try to create indexes on your model, make and duration fields.
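For example, something like this (a sketch; the prefix lengths are an assumption, needed only because make and model are wide varchar(256) columns):
-- Equality columns first, then the range column (duration):
ALTER TABLE voiture ADD INDEX idx_make_model_duration (make(50), model(50), duration);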
Also, your table is rather denormalized. You could create a pair of lookup tables to normalize it a bit.
For example: Vendors(id, name) and Models(id, name), and modify voiture to have vendor_id/model_id fields instead of the text make/model columns.
Then your initial query will look like:
select avg(t.price) from voiture t
INNER JOIN Models m ON m.id = t.model_id
INNER JOIN Vendors v ON v.id = t.vendor_id
where t.duration<30 AND v.name="Audi" AND m.name="A4"
It will scan the light lookup tables for the text matches and work on your heavy table using indexed ids.
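A rough sketch of that normalization (the lookup-table names follow the example above; backfilling vendor_id/model_id from the existing text columns is left out):
CREATE TABLE Vendors (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(256) NOT NULL,
  PRIMARY KEY (id)
);
CREATE TABLE Models (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(256) NOT NULL,
  PRIMARY KEY (id)
);
-- New id columns plus an index that serves the vendor/model/duration filter:
ALTER TABLE voiture
  ADD COLUMN vendor_id INT NULL,
  ADD COLUMN model_id INT NULL,
  ADD INDEX idx_vendor_model_duration (vendor_id, model_id, duration);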
There are many possible solutions. The first thing you can do is create indexes at the DB level; this can improve your execution time.
Second, check the server: there may be processes occupying your server's resources and making it slow.
You can make it faster using GROUP BY, like:
select AVG(price) from voiture where duration<30 AND make="Audi" AND model="A4" GROUP BY make
Also, you can add an index on the make column:
ALTER TABLE `voiture` ADD INDEX `make` (`make`)

MySQL Requirement to upgrade Rank

I'm creating Rank and Requirement tables for a martial arts school.
Each student holds a rank in the martial arts. The rank name, belt color, and rank requirements are stored. Each rank will have numerous rank requirements. Each requirement is considered a requirement just for the rank at which the requirement is introduced. Every requirement is associated with a particular rank. All ranks except white belt have at least one requirement.
My ER Diagram:
Rank and Requirement ER Diagram
Rank Table:
CREATE TABLE IF NOT EXISTS `rank` (
`rank_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`rank_nme` VARCHAR(45) NOT NULL,
PRIMARY KEY (`rank_id`))
ENGINE = InnoDB;
Output:
+------------+--------------+----------------+
| rank_id | INT(10) | AUTO_INCREMENT |
+------------+--------------+----------------+
| rank_nme | VARCHAR(45) | |
+------------+--------------+----------------+
Requirement Table:
CREATE TABLE IF NOT EXISTS `requirement` (
`req_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`rank_id` INT UNSIGNED NOT NULL,
`req_nme` VARCHAR(45) NOT NULL,
`req_rank_nme` VARCHAR(45) NULL,
PRIMARY KEY (`req_id`),
INDEX `requirement_rank_id_idx` (`rank_id` ASC),
CONSTRAINT `requirement_rank_id_idx`
FOREIGN KEY (`rank_id`)
REFERENCES `rank` (`rank_id`)
ON DELETE RESTRICT
ON UPDATE CASCADE)
ENGINE = InnoDB;
Output:
+---------------+--------------+----------------+
| req_id | INT(10) | AUTO_INCREMENT |
+---------------+--------------+----------------+
| rank_id | INT(10) | |
+---------------+--------------+----------------+
| req_nme | VARCHAR(45) | |
+---------------+--------------+----------------+
| req_rank_nme | VARCHAR(45) | |
+---------------+--------------+----------------+
I need help figuring out whether I'm doing this right or wrong; if you have modifications or any suggestions, please share! Thanks!
First, in the first table you will need an id that is unique and auto-incremented, which is good DB practice; then you have rank_id and rank_nme. If you want to associate the rank with a user, you will need another foreign key for it: user_id referencing users.id.
Right now you have rank_id auto-incrementing, and I don't see the point of doing that. The rest is OK, I think.
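Since each student holds a rank, one hypothetical way to model that association (the student table and its columns are illustrative, not part of the question's schema):
CREATE TABLE IF NOT EXISTS `student` (
`student_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`student_nme` VARCHAR(45) NOT NULL,
`rank_id` INT UNSIGNED NOT NULL,
PRIMARY KEY (`student_id`),
INDEX `student_rank_id_idx` (`rank_id` ASC),
CONSTRAINT `student_rank_id_fk`
FOREIGN KEY (`rank_id`)
REFERENCES `rank` (`rank_id`)
ON DELETE RESTRICT
ON UPDATE CASCADE)
ENGINE = InnoDB;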

Thoughts on improving these queries (MySQL)? My production machine is seeing 10-second times on them (using a sample size of over 1000)

Table definition and queries explained:
CREATE TABLE `item` (
`item_id` int(11) NOT NULL AUTO_INCREMENT,
`item_type_id` int(11) NOT NULL,
`brand_id` int(11) NOT NULL,
`site_id` int(11) NOT NULL,
`seller_id` int(11) NOT NULL,
`title` varchar(175) NOT NULL,
`desc` text NOT NULL,
`url` varchar(767) NOT NULL,
`price` int(11) NOT NULL,
`photo` varchar(255) NOT NULL,
`photo_file` varchar(255) NOT NULL,
`photo_type` varchar(32) NOT NULL,
`has_photo` enum('yes','no','pending') NOT NULL DEFAULT 'pending',
`added_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`updated_at` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`created_at` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`normalized_time` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`location` varchar(128) NOT NULL,
PRIMARY KEY (`item_id`),
KEY `item_type_id` (`item_type_id`),
KEY `brand_id` (`brand_id`),
KEY `site_id` (`site_id`),
KEY `seller_id` (`seller_id`),
KEY `created_at` (`created_at`),
KEY `added_at` (`added_at`),
KEY `normalized_time` (`normalized_time`),
KEY `typephototime` (`item_type_id`,`has_photo`,`normalized_time`),
KEY `brandidphoto` (`brand_id`,`item_type_id`,`has_photo`),
KEY `brandidphoto2` (`brand_id`,`item_type_id`,`has_photo`),
KEY `idphoto` (`item_type_id`,`has_photo`),
KEY `idphototime` (`item_type_id`,`has_photo`,`normalized_time`),
KEY `idphoto2` (`item_type_id`,`has_photo`),
KEY `typepricebrandid` (`item_type_id`,`price`,`brand_id`,`item_id`),
KEY `sellertypephototime` (`seller_id`,`item_type_id`,`has_photo`,`normalized_time`),
KEY `typephoto` (`item_type_id`,`has_photo`)
) ENGINE=MyISAM AUTO_INCREMENT=508885 DEFAULT CHARSET=latin1
mysql> explain SELECT item.* FROM item WHERE item.item_type_id = "1" AND item.has_photo = "yes" ORDER BY normalized_time DESC LIMIT 1;
+----+-------------+-------+------+------------------------------------------------------------------------------------+---------------+---------+-------------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+------------------------------------------------------------------------------------+---------------+---------+-------------+-------+-------------+
| 1 | SIMPLE | item | ref | item_type_id,typephototime,idphoto,idphototime,idphoto2,typepricebrandid,typephoto | typephototime | 5 | const,const | 69528 | Using where |
+----+-------------+-------+------+------------------------------------------------------------------------------------+---------------+---------+-------------+-------+-------------+
1 row in set (0.02 sec)
mysql> explain SELECT * FROM item WHERE item_type_id = "1" AND (price BETWEEN "25" AND "275") AND brand_id = "10" ORDER BY item_id DESC LIMIT 1;
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+---------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+---------+---------+------+------+-------------+
| 1 | SIMPLE | item | index | item_type_id,brand_id,typephototime,brandidphoto,brandidphoto2,idphoto,idphototime,idphoto2,typepricebrandid,typephoto | PRIMARY | 4 | NULL | 203 | Using where |
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+---------+---------+------+------+-------------+
1 row in set (0.01 sec)
mysql> explain SELECT item.* FROM item WHERE item.brand_id = "10" AND item.item_type_id = "1" AND item.has_photo = "yes" ORDER BY normalized_time DESC LIMIT 1;
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+-----------------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+-----------------+---------+------+--------+-------------+
| 1 | SIMPLE | item | index | item_type_id,brand_id,typephototime,brandidphoto,brandidphoto2,idphoto,idphototime,idphoto2,typepricebrandid,typephoto | normalized_time | 8 | NULL | 502397 | Using where |
+----+-------------+-------+-------+------------------------------------------------------------------------------------------------------------------------+-----------------+---------+------+--------+-------------+
1 row in set (2.15 sec)
mysql> explain SELECT COUNT(*) FROM item WHERE item.item_type_id = "1" AND item.has_photo = "yes" ;
+----+-------------+-------+------+------------------------------------------------------------------------------------+-----------+---------+-------------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+------------------------------------------------------------------------------------+-----------+---------+-------------+-------+--------------------------+
| 1 | SIMPLE | item | ref | item_type_id,typephototime,idphoto,idphototime,idphoto2,typepricebrandid,typephoto | typephoto | 5 | const,const | 71135 | Using where; Using index |
+----+-------------+-------+------+------------------------------------------------------------------------------------+-----------+---------+-------------+-------+--------------------------+
1 row in set (0.01 sec)
The following indexes are redundant because they match the left columns of another index. You can almost certainly drop these indexes and save some space and overhead.
KEY `item_type_id` (`item_type_id`), /* redundant */
KEY `brand_id` (`brand_id`), /* redundant */
KEY `seller_id` (`seller_id`), /* redundant */
KEY `idphototime` (`item_type_id`,`has_photo`,`normalized_time`), /* redundant */
KEY `brandidphoto2` (`brand_id`,`item_type_id`,`has_photo`), /* redundant */
KEY `idphoto` (`item_type_id`,`has_photo`), /* redundant */
KEY `idphoto2` (`item_type_id`,`has_photo`), /* redundant */
KEY `typephoto` (`item_type_id`,`has_photo`) /* redundant */
That leaves the following indexes:
KEY `site_id` (`site_id`),
KEY `created_at` (`created_at`),
KEY `added_at` (`added_at`),
KEY `normalized_time` (`normalized_time`),
KEY `brandidphoto` (`brand_id`,`item_type_id`,`has_photo`),
KEY `typephototime` (`item_type_id`,`has_photo`,`normalized_time`),
KEY `typepricebrandid` (`item_type_id`,`price`,`brand_id`,`item_id`),
KEY `sellertypephototime` (`seller_id`,`item_type_id`,`has_photo`,`normalized_time`),
You can also use a tool like pt-duplicate-key-checker to find redundant indexes.
Next consider the storage engine:
) ENGINE=MyISAM AUTO_INCREMENT=508885 DEFAULT CHARSET=latin1;
Almost always, InnoDB is a better choice than MyISAM - not only for performance, but for data integrity and crash safety. InnoDB has been the default storage engine since 2010, and it's the only storage engine that is actively being improved. I'd recommend making a copy of this table, changing the storage engine to InnoDB, and comparing its performance on your queries.
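For instance (a sketch; the copy's name is arbitrary, and the INSERT will take a while on a table of this size):
-- Benchmark copy of the table using InnoDB:
CREATE TABLE item_innodb LIKE item;
ALTER TABLE item_innodb ENGINE=InnoDB;
INSERT INTO item_innodb SELECT * FROM item;
Depending on your sql_mode (strict mode with NO_ZERO_DATE), inserting the '0000-00-00 00:00:00' values may be rejected, so check that first.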
Next let's consider indexes for the queries:
SELECT item.* FROM `item` WHERE item.item_type_id = "1" AND item.has_photo = "yes"
ORDER BY normalized_time DESC LIMIT 1;
I would choose an index on (item_type_id, has_photo, normalized_time) and that's the index it's currently using, which is typephototime.
One way to optimize this further would be to fetch only the columns in the index. That's when you see "Using index" in the EXPLAIN plan; it can be a huge improvement for performance.
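For instance, a variant of the first query that touches only columns in typephototime (a sketch; with MyISAM the primary key is not implicitly part of a secondary index, so item_id is left out on purpose):
SELECT normalized_time
FROM item
WHERE item_type_id = 1 AND has_photo = 'yes'
ORDER BY normalized_time DESC
LIMIT 1;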
Another important factor is to make sure that your index is cached in memory: increase key_buffer_size if you use MyISAM, or innodb_buffer_pool_size if you use InnoDB, so that it is as large as all the indexes you want to keep in memory. You don't want to run a query that needs to scan an index larger than your buffers; it causes a lot of swapping.
SELECT * FROM `item` WHERE item_type_id = "1" AND (price BETWEEN "25" AND "275") AND brand_id = "10"
ORDER BY item_id DESC LIMIT 1;
I would choose an index on (item_type_id, brand_id, price), but this query is currently using the PRIMARY index. You should create a new index.
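Something like this (the index name is arbitrary):
ALTER TABLE item ADD INDEX typebrandprice (item_type_id, brand_id, price);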
SELECT item.* FROM `item` WHERE item.brand_id = "10" AND item.item_type_id = "1" AND item.has_photo = "yes"
ORDER BY normalized_time DESC LIMIT 1;
I would choose an index on (item_type_id, brand_id, has_photo, normalized_time). You should create a new index.
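For example (again, the name is arbitrary):
ALTER TABLE item ADD INDEX typebrandphototime (item_type_id, brand_id, has_photo, normalized_time);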
SELECT COUNT(*) FROM `item` WHERE item.item_type_id = "1" AND item.has_photo = "yes" ;
I would choose an index on (item_type_id, has_photo) and that's the index it's currently using, which is typephoto. It's also getting the "Using index" optimization, so the only other improvement could be to make sure there's enough buffer to hold the index in memory.
It's hard to optimize SELECT COUNT(*) queries because they naturally have to scan a lot of rows.
The other strategy to optimize COUNT(*) is to calculate the counts offline, and store them either in a summary table or in an in-memory cache like memcached so you don't have to recalculate them every time someone loads a page. But that means you have to update the counts every time someone adds or deletes a row in the item table, which could be more costly depending on how frequently that happens.
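A minimal summary-table sketch (the table and column names here are made up for illustration):
CREATE TABLE item_counts (
  item_type_id int(11) NOT NULL,
  has_photo enum('yes','no','pending') NOT NULL,
  cnt int(11) NOT NULL,
  PRIMARY KEY (item_type_id, has_photo)
);
-- Recalculate periodically (e.g. from a cron job) instead of on every page load:
INSERT INTO item_counts (item_type_id, has_photo, cnt)
SELECT item_type_id, has_photo, COUNT(*)
FROM item
GROUP BY item_type_id, has_photo
ON DUPLICATE KEY UPDATE cnt = VALUES(cnt);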
A few things I would suggest changing:
You don't need all those indexes. You really only need indexes on fields that are accessed a lot, like foreign key fields. Remove all the indexes except ones on ID fields.
You should be storing dates as NULL unless there is actual data.
Stay away from the enum data type; use smallint with flags representing each value. For example: 0 = pending, 1 = yes, 2 = no.
Alongside reducing the size of the database, it makes things much cleaner. Your new table structure would look like so:
CREATE TABLE `item` (
`item_id` int(11) NOT NULL AUTO_INCREMENT,
`item_type_id` int(11) NOT NULL,
`brand_id` int(11) NOT NULL,
`site_id` int(11) NOT NULL,
`seller_id` int(11) NOT NULL,
`title` varchar(175) NOT NULL,
`desc` text NOT NULL,
`url` varchar(767) NOT NULL,
`price` int(11) NOT NULL,
`photo` varchar(255) NOT NULL,
`photo_file` varchar(255) NOT NULL,
`photo_type` varchar(32) NOT NULL,
`has_photo` smallint NOT NULL DEFAULT 0,
`added_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`updated_at` datetime NULL DEFAULT NULL,
`created_at` datetime NULL DEFAULT NULL,
`normalized_time` datetime NULL DEFAULT NULL,
`location` varchar(128) NULL,
PRIMARY KEY (`item_id`),
KEY `item_type_id` (`item_type_id`),
KEY `brand_id` (`brand_id`),
KEY `site_id` (`site_id`),
KEY `seller_id` (`seller_id`)
) ENGINE=MyISAM AUTO_INCREMENT=508885 DEFAULT CHARSET=latin1;
I would also suggest using utf8_unicode_ci as the collation, utf8 as the charset, and InnoDB as the engine.
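If you go that route, roughly (a sketch; converting a table this size rewrites it, so try it on a copy first):
ALTER TABLE item ENGINE=InnoDB;
ALTER TABLE item CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;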
But first off, remove all those keys and try again. Also remove the aliasing on the 3rd query.
SELECT * FROM item WHERE brand_id = "10" AND item_type_id = "1" AND has_photo = "yes" ORDER BY normalized_time DESC LIMIT 1;

Using MySQL, what is the best way to not select users that exist in a different table?

My problem is the following:
I have two tables, persons and teams. I want to select all the persons with role_id = 2 that exist in persons but not in teams.
The teams table stores the hash of the team leader, who can only lead one team at a time. When creating teams, I just want to show administrators the people who are not currently leading a team, basically excluding all the ones who are already leaders of a team.
My structure is as follows:
mysql> desc persons;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| firstname | varchar(9)  | YES  |     | NULL    |       |
| lastname  | varchar(10) | YES  |     | NULL    |       |
| role_id   | int(2)      | YES  |     | NULL    |       |
| hash      | varchar(32) | NO   | UNI | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
mysql> desc teams;
+--------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(20) | YES | | NULL | |
| leader | varchar(32) | NO | | NULL | |
+--------+-------------+------+-----+---------+----------------+
3 rows in set (0.00 sec)
My current SQL is as follows:
SELECT CONCAT( `persons`.`firstname` ," ", `persons`.`lastname` ) AS `manager`,
`hash` FROM `persons`
WHERE `persons`.`role_id` =2 AND `persons`.`hash` !=
(SELECT `leader` FROM `teams` );
The latter SQL query works when the teams table only has one record, but as soon as I add another one, MySQL complains about the subquery returning more than one row.
In the WHERE Clause, instead of subqueries I've also tried the following:
WHERE `persons`.`role_id` = 2 AND `persons`.`hash` != `teams`.`leader`
but then it complains that the column teams.leader is unknown (the teams table isn't joined in that query).
I was also thinking about using some kind of inverse LEFT JOIN, but I haven't been able to come up with an optimal solution.
Any help is greatly appreciated!
Thanks
P.S.: Here are the SQL statements, should you want to set up a scenario similar to mine:
DROP TABLE IF EXISTS `teams`;
CREATE TABLE IF NOT EXISTS `teams` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`leader` varchar(32) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3 ;
INSERT INTO `teams` (`id`, `name`, `leader`) VALUES
(1, 'Team 1', '406a3f5892e0fcb22bfc81ae023ce252'),
(2, 'Team 2', 'd0ca479152996c8cabd89151fe844e63');
DROP TABLE IF EXISTS `persons`;
CREATE TABLE IF NOT EXISTS `persons` (
`firstname` varchar(9) DEFAULT NULL,
`lastname` varchar(10) DEFAULT NULL,
`role_id` int(2) DEFAULT NULL,
`hash` varchar(32) NOT NULL,
PRIMARY KEY `hash` (`hash`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `persons` (`firstname`, `lastname`, `role_id`,`hash`) VALUES
('John', 'Doe', 2, '406a3f5892e0fcb22bfc81ae023ce252'),
('Jane', 'Doe', 2, 'd0ca479152996c8cabd89151fe844e63'),
('List', 'Me', 2, 'fbde2c4eeee7f455b655fe4805cfe66a'),
('List', 'Me Too', 2, '6dee2c4efae7f452b655abb805cfe66a');
You don't need a subquery to do that. A LEFT JOIN is enough:
SELECT
CONCAT (p.firstname, " ", p.lastname) AS manager
, p.hash
FROM persons p
LEFT JOIN teams t ON p.hash = t.leader
WHERE
p.role_id = 2
AND t.id IS NULL -- the trick
I think you want a NOT IN clause.
SELECT CONCAT( `persons`.`firstname` ," ", `persons`.`lastname` ) AS `manager`,
`hash` FROM `persons`
WHERE `persons`.`role_id` =2 AND `persons`.`hash` NOT IN
(SELECT `leader` FROM `teams` );
As pointed out, this is not optimal. You may want to do a join instead.
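For completeness, a NOT EXISTS anti-join is another common way to write the same thing (just a sketch; it should return the same rows as the LEFT JOIN version above):
SELECT CONCAT(p.firstname, ' ', p.lastname) AS manager, p.hash
FROM persons p
WHERE p.role_id = 2
  AND NOT EXISTS (SELECT 1 FROM teams t WHERE t.leader = p.hash);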
