improve MySQL query performance from slow query log - php

I turned on the slow query log in my MySQL config.
Below is the query and its timing:
# Time: 160330 20:54:11
# User@Host: user[user] @ [xx.xx.xxx.xxx]
# Query_time: 8.794170  Lock_time: 0.000141  Rows_sent: 3942  Rows_examined: 4742825
SET timestamp=1459371251;
SELECT (SELECT (CASE WHEN ce_type = 'IN' then SUM(payment_amount)
END) as debit
FROM customer_payment_options cpo
WHERE wallet_id=cw.id
AND (cpo.real_account_type='HQ')
AND cpo.source_country_id='40'
GROUP BY cpo.wallet_id)
as debit,
(SELECT SUM(payment_amount)
as credit
FROM customer_payment_options cpo
WHERE wallet_id=cw.id
AND (cpo.real_account_type='HQ')
AND cpo.tran_id IS NOT NULL
AND cpo.source_country_id='40'
GROUP BY cpo.wallet_id)
as credit
FROM customer_wallet cw
WHERE cw.company_id='1'
AND cw.currency='40'
AND cw.is_approved = '1'
AND DATE(cw.date_added) < '2016-03-30';
Indexes on customer_payment_options:
company_id
tran_id
ce_id
wallet_id
What should I do to improve its performance?
EXPLAIN:
http://i.stack.imgur.com/iH8rt.png
SCHEMA
CREATE TABLE `customer_payment_options` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`company_id` int(11) NOT NULL,
`local_branch_id` int(11) NOT NULL,
`tran_id` bigint(11) DEFAULT NULL,
`ce_id` int(11) DEFAULT NULL,
`wallet_id` int(11) DEFAULT NULL,
`reward_credit_id` int(11) DEFAULT NULL,
`ce_invoice_id` varchar(32) DEFAULT NULL,
`ce_type` enum('IN','OUT') DEFAULT NULL,
`payment_type` enum('CASH','DEBIT','CREDIT','CHEQUE','DRAFT','BANK_DEPOSIT','EWIRE','WALLET','LOAN','REWARD_CREDIT') NOT NULL,
`payment_amount` varchar(20) NOT NULL,
`payment_type_number` varchar(100) DEFAULT NULL,
`source_country_id` int(11) NOT NULL,
`real_account_id` int(11) DEFAULT NULL,
`real_account_type` enum('LOCAL','HQ') DEFAULT NULL,
`date_added` datetime NOT NULL,
`event_type` enum('MONEY_TRANSFER','CURRENCY_EXCHANGE','WALLET') DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `company_id` (`company_id`),
KEY `real_account_type` (`real_account_type`),
KEY `tran_id` (`tran_id`),
KEY `ce_id` (`ce_id`),
KEY `wallet_id` (`wallet_id`),
CONSTRAINT `customer_payment_options_ibfk_4` FOREIGN KEY (`wallet_id`) REFERENCES `customer_wallet` (`id`),
CONSTRAINT `customer_payment_options_ibfk_1` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
CONSTRAINT `customer_payment_options_ibfk_2` FOREIGN KEY (`tran_id`) REFERENCES `transaction` (`id`),
CONSTRAINT `customer_payment_options_ibfk_3` FOREIGN KEY (`ce_id`) REFERENCES `currency_exchange` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=412 DEFAULT CHARSET=utf8
CREATE TABLE `customer_wallet` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`wallet_unique_id` varchar(100) DEFAULT NULL,
`company_id` int(11) NOT NULL,
`branch_admin_id` int(11) DEFAULT NULL,
`emp_id` int(11) DEFAULT NULL,
`emp_type` enum('SUPER_ADMIN','ADMIN','AGENT_ADMIN','AGENT','OVER_AGENT_ADMIN','OVER_AGENT') DEFAULT NULL,
`cus_id` bigint(11) NOT NULL,
`tran_id` bigint(11) DEFAULT NULL,
`beehive_id` int(11) DEFAULT NULL,
`type` enum('DEPOSIT','WITHDRAW','TRANSACTION') NOT NULL,
`sub_type` enum('MONEY_TRANSFER','BEEHIVE_DEPOSIT') DEFAULT NULL,
`credit_in` varchar(20) DEFAULT NULL,
`credit_out` varchar(20) DEFAULT NULL,
`currency` varchar(20) NOT NULL,
`date_added` datetime NOT NULL,
`note` varchar(255) DEFAULT NULL,
`location` enum('DIRECT') DEFAULT NULL,
`is_approved` enum('0','1') NOT NULL DEFAULT '1',
`idebit_issconf` varchar(50) DEFAULT NULL,
`idebit_issname` varchar(50) DEFAULT NULL,
`idebit_isstrack2` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `cus_id` (`cus_id`),
KEY `company_id` (`company_id`),
KEY `branch_admin_id` (`branch_admin_id`),
KEY `emp_id` (`emp_id`),
KEY `tran_id` (`tran_id`),
KEY `beehive_id` (`beehive_id`),
CONSTRAINT `customer_wallet_ibfk_1` FOREIGN KEY (`cus_id`) REFERENCES `customers` (`id`),
CONSTRAINT `customer_wallet_ibfk_2` FOREIGN KEY (`company_id`) REFERENCES `company` (`id`),
CONSTRAINT `customer_wallet_ibfk_3` FOREIGN KEY (`tran_id`) REFERENCES `transaction` (`id`),
CONSTRAINT `customer_wallet_ibfk_4` FOREIGN KEY (`emp_id`) REFERENCES `employees` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=152 DEFAULT CHARSET=utf8

What you are doing is a correlated subquery against every wallet ID to get the corresponding debits and credits. It appears you are getting one record per wallet ID, which is very busy. Instead, have a single join to the customer payments table on the criteria common to both subqueries (including the per-wallet join), then simplify each CASE into a SUM( CASE ... END ) for the respective debit / credit.
I don't know the underlying meaning of your columns, but I would even hedge to (and did) add NOT ce_type = 'IN' to the credit side, as 'IN' appears to be the basis of a debit and you would not want to falsely count it as part of a credit too. Again, I don't know how the fields, tran_id, and the types correlate.
Now, as stated, individual indexes on individual fields will not help optimize this query. I would suggest the following composite indexes:
customer_wallet ( company_id, is_approved, currency, id, date_added )
customer_payment_options ( wallet_id, real_account_type, source_country_id )
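As a sketch, they could be added like this (index names here are arbitrary):

ALTER TABLE customer_wallet
  ADD INDEX idx_cw_lookup (company_id, is_approved, currency, id, date_added);
ALTER TABLE customer_payment_options
  ADD INDEX idx_cpo_lookup (wallet_id, real_account_type, source_country_id);

With those in place, the two correlated subqueries flatten into a single join: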
SELECT
cw.id AS wallet_id,
SUM( case when cpo.ce_type = 'IN'
then cpo.payment_amount
ELSE 0 end ) as Debit,
SUM( case when NOT cpo.ce_type = 'IN'
AND cpo.tran_id IS NOT NULL
then cpo.payment_amount
ELSE 0 end ) as Credit
FROM
customer_wallet cw
JOIN customer_payment_options cpo
ON cw.id = cpo.wallet_id
AND cpo.real_account_type = 'HQ'
AND cpo.source_country_id = '40'
WHERE
cw.company_id = '1'
AND cw.currency = '40'
AND cw.is_approved = '1'
AND cw.date_added < '2016-03-30'
GROUP BY
cw.id
One additional comment: if your ID columns, currency flag, country ID, and approved flag are actually numeric in the table structure, remove the quotes and compare directly on the numeric value. Also, for your date_added: you had the filter based on DATE( date_added ), and applying a function to a column prevents full use of an index on it. Since DATE() only strips the time portion of a datetime column, and you are asking for everything added before Mar 30, a date_added of March 29 at 11:59:59pm is still less than Mar 30 at 12:00:00am, so no date conversion is required.
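For illustration, a minimal sketch of that difference on this schema:

-- Not sargable: the function call on the column defeats an index on date_added.
SELECT id FROM customer_wallet WHERE DATE(date_added) < '2016-03-30';

-- Sargable: the bare column can be range-scanned if date_added is indexed.
SELECT id FROM customer_wallet WHERE date_added < '2016-03-30';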
As commented by Ivan (below), if you want ALL Wallet IDs regardless of having any payments (debit or credit), then change from a join to a LEFT JOIN.
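A sketch of that variant (untested): the cpo filters must stay in the ON clause so unmatched wallets survive the join, and the ELSE 0 branches make their sums come out as 0. Quotes are dropped where the schema shows numeric columns but kept for currency, which is a varchar:

SELECT
    cw.id AS wallet_id,
    SUM( CASE WHEN cpo.ce_type = 'IN'
              THEN cpo.payment_amount
              ELSE 0 END ) AS Debit,
    SUM( CASE WHEN NOT cpo.ce_type = 'IN'
              AND cpo.tran_id IS NOT NULL
              THEN cpo.payment_amount
              ELSE 0 END ) AS Credit
FROM
    customer_wallet cw
LEFT JOIN customer_payment_options cpo
    ON cw.id = cpo.wallet_id
    AND cpo.real_account_type = 'HQ'
    AND cpo.source_country_id = 40
WHERE
    cw.company_id = 1
    AND cw.currency = '40'
    AND cw.is_approved = '1'
    AND cw.date_added < '2016-03-30'
GROUP BY
    cw.id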

You need to add indexes, including multi-column indexes, to make it fast.
Please keep in mind that if you have a large table, extra indexes will slow down insertions, since updating the index files takes more time.
If a multiple-column index exists on col1 and col2, the appropriate
rows can be fetched directly. If separate single-column indexes exist
on col1 and col2, the optimizer attempts to use the Index Merge
optimization (see Section 8.2.1.4, “Index Merge Optimization”), or
attempts to find the most restrictive index by deciding which index
excludes more rows and using that index to fetch the rows.
If the table has a multiple-column index, any leftmost prefix of the
index can be used by the optimizer to look up rows. For example, if
you have a three-column index on (col1, col2, col3), you have indexed
search capabilities on (col1), (col1, col2), and (col1, col2, col3).
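For instance, with a hypothetical table t carrying a three-column index:

CREATE INDEX idx_c123 ON t (col1, col2, col3);

-- These can use idx_c123 (leftmost prefixes):
SELECT * FROM t WHERE col1 = 1;
SELECT * FROM t WHERE col1 = 1 AND col2 = 2;
SELECT * FROM t WHERE col1 = 1 AND col2 = 2 AND col3 = 3;

-- This cannot, because col1 is absent from the predicate:
SELECT * FROM t WHERE col2 = 2 AND col3 = 3;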


MySQL query with large data performance

I'm a bit new to MySQL and I would like to know if I'm on the right track with these tables and this query:
tb_anuncio
CREATE TABLE `tb_anuncio` (
`anuncio_id` int(11) NOT NULL auto_increment,
`anuncio_titulo` varchar(120) NOT NULL,
`anuncio_valor` decimal(10,2) NOT NULL,
`anuncio_valorTipo` int(11) default NULL,
`anuncio_telefone` varchar(20) NOT NULL,
`anuncio_descricao` text,
`anuncio_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`bairro_id` int(11) NOT NULL,
`anuncio_status` int(11) default '0',
PRIMARY KEY (`anuncio_id`),
KEY `ta001_ix` (`bairro_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_anuncio`
ADD CONSTRAINT `ta001_ix` FOREIGN KEY (`bairro_id`) REFERENCES `tb_bairro` (`bairro_id`) ON DELETE CASCADE ON UPDATE CASCADE;
tb_estado
CREATE TABLE `tb_estado` (
`estado_id` int(11) NOT NULL auto_increment,
`estado_nome` varchar(2) NOT NULL,
`estado_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`estado_url` varchar(2) NOT NULL,
PRIMARY KEY (`estado_id`),
UNIQUE KEY `estado_url` (`estado_url`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
tb_cidade
CREATE TABLE `tb_cidade` (
`cidade_id` int(11) NOT NULL auto_increment,
`cidade_nome` varchar(100) NOT NULL,
`cidade_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`estado_id` int(11) NOT NULL,
`cidade_url` varchar(150) NOT NULL,
PRIMARY KEY (`cidade_id`),
UNIQUE KEY `cidade_url` (`cidade_url`),
KEY `tc001_ix` (`estado_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_cidade`
ADD CONSTRAINT `tc001_ix` FOREIGN KEY (`estado_id`) REFERENCES `tb_estado` (`estado_id`) ON DELETE CASCADE ON UPDATE CASCADE;
tb_bairro
CREATE TABLE `tb_bairro` (
`bairro_id` int(11) NOT NULL auto_increment,
`bairro_nome` varchar(100) NOT NULL,
`bairro_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`cidade_id` int(11) NOT NULL,
`bairro_url` varchar(150) NOT NULL,
PRIMARY KEY (`bairro_id`),
UNIQUE KEY `bairro_url` (`bairro_url`),
KEY `tb001_ix` (`cidade_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_bairro`
ADD CONSTRAINT `tb001_ix` FOREIGN KEY (`cidade_id`) REFERENCES `tb_cidade` (`cidade_id`) ON DELETE CASCADE ON UPDATE CASCADE;
Well, I'm doing a query to show the ads for a city/state; my query looks like this:
Query
select a.anuncio_id,a.anuncio_titulo,a.anuncio_valor,a.anuncio_valorTipo,a.anuncio_descricao
from tb_anuncio a inner join(
tb_bairro b inner join(
tb_cidade c inner join
tb_estado d on d.estado_id=c.estado_id) on c.cidade_id=b.cidade_id) on b.bairro_id=a.bairro_id
where a.anuncio_status=1 and d.estado_id=:estado_id and c.cidade_id=:cidade_id and b.bairro_id=:bairro_id
group by a.anuncio_id
order by a.anuncio_id desc
limit :limit
I would like to know if I'm going right and it will work well when these tables get about 5k-10k of records.
I'm using PHP PDO MySQL.
Thanks.
Although it doesn't affect performance, the typical way to write a query would not have parentheses in the FROM clause. Also, I doubt the group by is necessary:
select a.*
from tb_anuncio a inner join
tb_bairro b
on b.bairro_id = a.bairro_id inner join
tb_cidade c
on c.cidade_id = b.cidade_id inner join
tb_estado e
on e.estado_id = c.estado_id
where a.anuncio_status = 1 and e.estado_id = :estado_id and
c.cidade_id = :cidade_id and b.bairro_id = :bairro_id
order by a.anuncio_id desc
limit :limit;
You can simplify this, because you do not need all the joins -- the join keys are in referencing tables:
select a.*
from tb_anuncio a inner join
tb_bairro b
on b.bairro_id = a.bairro_id inner join
tb_cidade c
on c.cidade_id = b.cidade_id
where a.anuncio_status = 1 and c.estado_id = :estado_id and
c.cidade_id = :cidade_id and b.bairro_id = :bairro_id
order by a.anuncio_id desc
limit :limit;
I don't know Portuguese, but it seems that one estado contains many cidades, each of which contains many bairros. If this is correct, then the schema is wrong, and fixing the schema will improve the performance.
There should be one bairro in the query, not three such items in the WHERE.
Furthermore, it is usually more practical for tb_bairro to include information about the cidade and estado, not tb_anuncio.
Once you have done those things, the GROUP BY can probably be eliminated, thereby adding more performance.
And add
INDEX(anuncio_status, bairro_id, anuncio_id)
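Putting both suggestions together, a sketch (assuming bairro_id alone identifies the neighbourhood, as the foreign keys imply):

ALTER TABLE `tb_anuncio`
  ADD INDEX `idx_status_bairro` (`anuncio_status`, `bairro_id`, `anuncio_id`);

SELECT a.anuncio_id, a.anuncio_titulo, a.anuncio_valor,
       a.anuncio_valorTipo, a.anuncio_descricao
FROM tb_anuncio a
WHERE a.anuncio_status = 1
  AND a.bairro_id = :bairro_id
ORDER BY a.anuncio_id DESC
LIMIT :limit;

With that index, MySQL can walk the index backwards to satisfy the WHERE, the ORDER BY, and the LIMIT without a filesort.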

MySQL - GROUP BY slow down the page

The GROUP BY clause in the query below slows down the page; please help resolve this issue.
SELECT
`a`.*,
CONCAT(a.`firstname`, " ", a.`lastname`) AS `cont_name`,
CONCAT(a.`position`, " / ", a.`company`) AS `comp_pos`,
CONCAT(f.`firstname`, " ", f.`lastname`) AS `created_by`
FROM
`contacts` AS `a`
LEFT JOIN `users` AS `f` ON f.id = a.user_id
LEFT JOIN `user_centres` AS `b` ON a.user_id = b.user_id
WHERE b.centre_id IN (23, 24, 25, 26, 20, 21, 22, 27, 28)
GROUP BY `a`.`id`
ORDER BY `a`.`created` desc
Here the join with the user_centres table is for centre-wise filtering of the data. EXPLAIN gives this result:
id: 1, select_type: SIMPLE, table: a, type: index,
possible_keys: PRIMARY,user_id,area_id,industry_id,country,
key: PRIMARY, key_len: 4, ref: NULL, rows: 20145,
Extra: Using temporary; Using filesort
Our requirements are as below:
Listing of all contacts in the admin login
Centre-wise listing of contacts in the manager/clerk login
The contacts table has more than 20K records.
There can be multiple entries per user in the user_centres table, i.e. a user may be assigned to more than one centre.
Executing the query on the server without the GROUP BY returns nearly 300k rows, which is what makes the problem.
DB structure
Table structure for table contacts
CREATE TABLE IF NOT EXISTS `contacts` (
`id` int(11) NOT NULL,
`user_id` int(11) DEFAULT NULL,
`imported` tinyint(4) NOT NULL DEFAULT '0',
`situation` char(10) DEFAULT NULL,
`firstname` varchar(150) DEFAULT NULL,
`lastname` varchar(150) DEFAULT NULL,
`position` varchar(150) DEFAULT NULL,
`dob` datetime DEFAULT NULL,
`office_contact` varchar(100) DEFAULT NULL,
`mobile_contact` varchar(100) DEFAULT NULL,
`email` varchar(255) NOT NULL,
`company` varchar(150) DEFAULT NULL,
`industry_id` int(11) DEFAULT NULL,
`address` varchar(255) DEFAULT NULL,
`city` varchar(150) DEFAULT NULL,
`country` int(11) DEFAULT NULL,
`isclient` tinyint(4) NOT NULL DEFAULT '0',
`classification` varchar(100) DEFAULT NULL,
`created` datetime NOT NULL,
`updated` datetime NOT NULL,
`unsubscribe` enum('Y','N') NOT NULL DEFAULT 'N'
) ENGINE=InnoDB AUTO_INCREMENT=25203 DEFAULT CHARSET=latin1;
Indexes for table contacts
ALTER TABLE `contacts`
ADD PRIMARY KEY (`id`), ADD KEY `user_id` (`user_id`),
ADD KEY `industry_id` (`industry_id`), ADD KEY `country` (`country`);
Constraints for table contacts
ALTER TABLE `contacts`
ADD CONSTRAINT `contacts_ibfk_4` FOREIGN KEY (`user_id`)
REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE NO ACTION,
ADD CONSTRAINT `contacts_ibfk_6` FOREIGN KEY (`industry_id`)
REFERENCES `industries` (`id`) ON DELETE SET NULL ON UPDATE NO ACTION,
ADD CONSTRAINT `contacts_ibfk_7` FOREIGN KEY (`country`)
REFERENCES `country` (`id`) ON DELETE SET NULL ON UPDATE NO ACTION;
Table structure for table users
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL,
`role_id` int(11) NOT NULL,
`email` varchar(250) NOT NULL,
`password` varchar(45) NOT NULL,
`salt` varchar(45) DEFAULT NULL,
`status_id` int(11) DEFAULT NULL,
`status` tinyint(1) DEFAULT '1',
`firstname` varchar(255) NOT NULL,
`lastname` varchar(255) NOT NULL,
`created` datetime DEFAULT NULL,
`updated` datetime DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
Indexes for table users
ALTER TABLE `users`
ADD PRIMARY KEY (`id`), ADD UNIQUE KEY `email_UNIQUE` (`email`),
ADD KEY `type_id_idx` (`role_id`), ADD KEY `status_id_idx` (`status_id`);
Constraints for table users
ALTER TABLE `users`
ADD CONSTRAINT `role_id` FOREIGN KEY (`role_id`)
REFERENCES `users_roles` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `status_id` FOREIGN KEY (`status_id`)
REFERENCES `users_status` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
ADD CONSTRAINT `users_ibfk_1` FOREIGN KEY (`area`)
REFERENCES `area` (`id`) ON DELETE SET NULL ON UPDATE NO ACTION;
Table structure for table user_centres
CREATE TABLE IF NOT EXISTS `user_centres` (
`id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`area_id` int(11) NOT NULL,
`centre_id` int(11) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=72 DEFAULT CHARSET=utf8;
Indexes for table user_centres
ALTER TABLE `user_centres`
ADD PRIMARY KEY (`id`), ADD KEY `user_id` (`user_id`),
ADD KEY `centre_id` (`centre_id`), ADD KEY `area_id` (`area_id`);
Constraints for table user_centres
ALTER TABLE `user_centres`
ADD CONSTRAINT `user_centres_ibfk_1` FOREIGN KEY (`user_id`)
REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
ADD CONSTRAINT `user_centres_ibfk_2` FOREIGN KEY (`centre_id`)
REFERENCES `centre` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;
Also please refer EXPLAIN screens - http://prntscr.com/6o5h8s
Indexes were not used because the ORDER BY and GROUP BY clauses differ.
http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
You are spending a lot of time checking user_centres, but not needing anything from it. Remove it from the query.
users can be made into a correlated subquery:
SELECT `a`.*,
CONCAT(a.`firstname`, " ", a.`lastname`) AS `cont_name`,
CONCAT(a.`position`, " / ", a.`company`) AS `comp_pos`,
( SELECT CONCAT(f.`firstname`, " ", f.`lastname`)
FROM `users` AS `f` WHERE f.id = a.user_id
) AS `created_by`
FROM `contacts` AS `a`
GROUP BY `a`.`id`
ORDER BY `a`.`created` desc
Do you really need all 20K rows?? The sheer bulk of the result is part of the sluggishness.
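If the centre-wise filtering for the manager/clerk login still has to happen, a hedged alternative is an EXISTS test: it filters contacts without multiplying rows, so the GROUP BY can go away entirely. A sketch; a composite index on user_centres (user_id, centre_id) is assumed to support it:

SELECT `a`.*,
       CONCAT(a.`firstname`, " ", a.`lastname`) AS `cont_name`,
       CONCAT(a.`position`, " / ", a.`company`) AS `comp_pos`,
       ( SELECT CONCAT(f.`firstname`, " ", f.`lastname`)
         FROM `users` AS `f`
         WHERE f.id = a.user_id
       ) AS `created_by`
FROM `contacts` AS `a`
WHERE EXISTS ( SELECT 1
               FROM `user_centres` AS `b`
               WHERE b.user_id = a.user_id
                 AND b.centre_id IN (23, 24, 25, 26, 20, 21, 22, 27, 28) )
ORDER BY `a`.`created` DESC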
Thanks all. Based on the feedback from all of you, I have now tried the query below, which improved the speed from 30 seconds to 15 seconds:
SELECT `a`.`id`, `a`.`user_id`, `a`.`imported`, `a`.`created`,
`a`.`unsubscribe`, CONCAT(a.firstname, " ", a.lastname) AS `cont_name`,
CONCAT(a.position, " / ", a.company) AS `comp_pos`,
( SELECT COUNT(uc.id)
FROM `user_centres` AS `uc`
WHERE (uc.user_id = a.user_id)
AND (uc.centre_id IN (29))
GROUP BY `uc`.`user_id`
) AS `centre_cnt`,
( SELECT GROUP_CONCAT(DISTINCT g.group_name
ORDER BY g.group_name ASC SEPARATOR ", ")
FROM `groups` AS `g`
INNER JOIN `group_contacts` AS `gc` ON g.id = gc.group_id
WHERE (gc.contact_id = a.id)
GROUP BY `gc`.`contact_id`
) AS `group_name`,
( SELECT CONCAT(u.`firstname`, " ", u.`lastname`)
FROM `users` AS `u`
WHERE (u.id = a.user_id)
) AS `created_by`, `e`.`name` AS `industry_name`
FROM `contacts` AS `a`
LEFT JOIN `industries` AS `e` ON e.id = a.industry_id
WHERE (1)
HAVING (centre_cnt is NOT NULL)
ORDER BY `a`.`id` desc
Is there a way to improve the speed further and get the page loading below 5 seconds?
Please see the interface (noted the filtering and sorting fields) - http://prntscr.com/6q6q70
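A hedged note on the remaining cost: each correlated subquery runs once per contact row, so composite indexes matching their WHERE clauses should pay off. A sketch (group_contacts is only known from the query above, so treat its column list as an assumption):

ALTER TABLE `user_centres` ADD INDEX `idx_user_centre` (`user_id`, `centre_id`);
ALTER TABLE `group_contacts` ADD INDEX `idx_contact_group` (`contact_id`, `group_id`);

Also, HAVING (centre_cnt IS NOT NULL) is applied only after every contact row has been materialized; moving that test into the WHERE clause as an EXISTS, as sketched earlier, filters rows before the other subqueries run.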

Optimizing queries on larger MySQL database

I'm coding a website which will store some offers (ex. job offers). In the end, it could contain more than 1M offers. Now I have problems with some inefficient SQL queries.
Scenario:
Each offer can be assigned into category (ex. IT jobs)
Each category has custom fields (ex. IT jobs can have custom field of type "price" which will represent text box accepting number (price) - in our example, let's say we have price input of expected salary)
Each offer stores meta data with values of these category custom fields
DB fields which will be used for filtering have indexes
Table category (I'm using nested sets to store categories hierarchy):
CREATE TABLE `category` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`parent_id` int(11) DEFAULT NULL,
`lft` int(11) DEFAULT NULL,
`rgt` int(11) DEFAULT NULL,
`depth` int(11) DEFAULT NULL,
`order` int(11) NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `category_parent_id_index` (`parent_id`),
KEY `category_lft_index` (`lft`),
KEY `category_rgt_index` (`rgt`)
) ENGINE=InnoDB AUTO_INCREMENT=44 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table category_field:
CREATE TABLE `category_field` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`category_id` int(10) unsigned NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`optional` tinyint(1) NOT NULL DEFAULT '0',
`type` enum('price','number','date','color') COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `category_field_category_id_index` (`category_id`),
CONSTRAINT `category_field_category_id_foreign` FOREIGN KEY (`category_id`) REFERENCES `category` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table offer:
CREATE TABLE `offer` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`text` text COLLATE utf8_unicode_ci NOT NULL,
`category_id` int(10) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `offer_category_id_index` (`category_id`),
CONSTRAINT `offer_category_id_foreign` FOREIGN KEY (`category_id`) REFERENCES `category` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table offer_meta:
CREATE TABLE `offer_meta` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`offer_id` int(10) unsigned NOT NULL,
`category_field_id` int(10) unsigned NOT NULL,
`price` double NOT NULL,
`number` int(11) NOT NULL,
`date` date NOT NULL,
`color` varchar(7) COLLATE utf8_unicode_ci NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `offer_meta_offer_id_index` (`offer_id`),
KEY `offer_meta_category_field_id_index` (`category_field_id`),
KEY `offer_meta_price_index` (`price`),
KEY `offer_meta_number_index` (`number`),
KEY `offer_meta_date_index` (`date`),
KEY `offer_meta_color_index` (`color`),
CONSTRAINT `offer_meta_category_field_id_foreign` FOREIGN KEY (`category_field_id`) REFERENCES `category_field` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `offer_meta_offer_id_foreign` FOREIGN KEY (`offer_id`) REFERENCES `offer` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=107769 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
When I set up some filters on my page (for example, for our salary custom field) I have to start with a query that returns the MIN and MAX prices among the available offer_meta records (I want to show a range slider to the user in the front-end, so I need MIN/MAX values for its range):
select MIN(`price`) AS min, MAX(`price`) AS max from `offer_meta` where `category_field_id` = ? limit 1
I found out that these queries are the most inefficient of all the queries I'm making (the query above takes over 500ms when the offer_meta table has only a few thousand records).
Other inefficient queries (offer_meta has 107k records):
Obtaining MIN and MAX values for slider to filter numbers
select MIN(`number`) AS min, MAX(`number`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining MIN and MAX prices for slider to filter by prices
select MIN(`price`) AS min, MAX(`price`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining MIN and MAX date for date range restrictions
select MIN(`date`) AS min, MAX(`date`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining colors with counts to show list of colors with numbers
select `color`, count(*) as `count` from `offer_meta` where `category_field_id` = ? group by `color`
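All four of these queries filter on category_field_id and then aggregate a single other column, so composite indexes that lead with category_field_id would let MySQL resolve each MIN/MAX (and the color grouping) from the index alone. A sketch; index names here are arbitrary:

ALTER TABLE `offer_meta`
  ADD INDEX `idx_cf_price` (`category_field_id`, `price`),
  ADD INDEX `idx_cf_number` (`category_field_id`, `number`),
  ADD INDEX `idx_cf_date` (`category_field_id`, `date`),
  ADD INDEX `idx_cf_color` (`category_field_id`, `color`);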
Example of full query to get offers count with multiple filter criteria (0.5 sec)
select count(*) as count from `offer` where id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct om.offer_id
from offer_meta om
join category_field cf on om.category_field_id = cf.id
where
cf.category_id in (2,3,4,41,43,5,6,7,8,37) and
om.category_field_id = 1 and
om.number >= 1 and
om.number <= 50) and
om.category_field_id = 2 and
om.price >= 2 and
om.price <= 4545) and
om.category_field_id = 3 and
om.date >= '0000-00-00' and
om.date <= '2015-04-09') and
category_field_id = 4 and
om.color in ('#0000ff'))
The same query without the aggregate function (COUNT) is a few times faster (just getting the IDs).
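As an aside, one common rewrite for this shape of query (a sketch, untested against this data): join offer_meta to itself once per filtered field, so the optimizer can begin with the most selective condition instead of evaluating the nested IN chain from the inside out. COUNT(DISTINCT o.id) guards against duplicate meta rows:

SELECT COUNT(DISTINCT o.id) AS count
FROM offer o
JOIN offer_meta m1
  ON m1.offer_id = o.id AND m1.category_field_id = 1
     AND m1.number BETWEEN 1 AND 50
JOIN category_field cf
  ON cf.id = m1.category_field_id
     AND cf.category_id IN (2,3,4,41,43,5,6,7,8,37)
JOIN offer_meta m2
  ON m2.offer_id = o.id AND m2.category_field_id = 2
     AND m2.price BETWEEN 2 AND 4545
JOIN offer_meta m3
  ON m3.offer_id = o.id AND m3.category_field_id = 3
     AND m3.date <= '2015-04-09'
JOIN offer_meta m4
  ON m4.offer_id = o.id AND m4.category_field_id = 4
     AND m4.color = '#0000ff';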
Question:
Is it possible to tweak those queries, or do you have any suggestions on how to implement my logic (offers with categories, and custom fields dynamically added to each category in the admin) with a different table schema? I have tried a few other schemas, but with no success.
Question 2:
Do you think this is a problem with my MySQL server, and that if I buy a VPS it will be okay?
To help you understand even better:
I was strongly inspired by the WordPress schema for custom fields, so the logic is similar.
Last notes:
Also, I'm working with the Laravel framework and using the Eloquent ORM.
Sorry for my English, I hope I made my problem clear :-)
Thank you in advance,
Patrik
It is not a MySQL problem; in your scenario we have a huge data collection, and relational databases are naturally not efficient for some such queries (I faced a similar situation with Oracle).
One practice for winning in this kind of situation is to use a graph database, although that seems hard to adopt in the situation you are facing at the moment.
I have also heard that Lucene has some kind of support for indexing large databases for selection purposes, though I don't know exactly how to do it.
http://en.wikipedia.org/wiki/Lucene

best practice db tree sort

I have a table like this:
CREATE TABLE `tree` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`parent_id` int(11) DEFAULT NULL,
`version` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`title` varchar(255) NOT NULL,
`sort` int(11) NOT NULL,
`rating` double NOT NULL DEFAULT '0',
`global_sort` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `IDX_22359CF6727ACA70` (`parent_id`),
CONSTRAINT `FK_22359CF6727ACA70` FOREIGN KEY (`parent_id`) REFERENCES `tree` (`id`) ON DELETE SET NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
At each level, items can be sorted by version, sort, or rating.
I need to return all items in the correct order.
For now I use PHP recursion for each level and update the global_sort field according to the per-level sorting.
Any better ideas?
EDIT1
As a variant: what if the correct order only matters within one level (no need for a global order)?
EDIT2
I mean something like this:
http://joxi.ru/WmYPUxjKTJAfVs5nCE4
when some items must be sorted by rating asc and others by rating desc
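One possible direction, assuming MySQL 8.0+ so recursive CTEs are available (a sketch, not a drop-in answer): return the whole tree in depth-first order by building a zero-padded sort path per row, which avoids both the PHP recursion and maintaining global_sort. The CAST in the anchor member matters, because it fixes the string width used by the concatenations below it:

WITH RECURSIVE ordered_tree AS (
  SELECT id, parent_id, title,
         CAST(LPAD(sort, 10, '0') AS CHAR(200)) AS path  -- assumes depth * 11 <= 200
  FROM tree
  WHERE parent_id IS NULL
  UNION ALL
  SELECT t.id, t.parent_id, t.title,
         CONCAT(ot.path, '.', LPAD(t.sort, 10, '0'))
  FROM tree t
  JOIN ordered_tree ot ON t.parent_id = ot.id
)
SELECT id, title FROM ordered_tree ORDER BY path;

Sorting a level by rating or version instead means padding that column into the path; a descending sort within a level (as in EDIT2) would need the value inverted, e.g. subtracted from a constant, before padding.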

MySql Properly Join Complex Data/Tables

Abstract:
Every client is given a specific XML ad feed (publisher_feed table). Every time there is a query or a click on that feed, it gets recorded (publisher_stats_raw table); each query/click can have multiple rows depending on the subid passed by the client (we can sum the clicks together). The next day, we pull stats from an API to grab the previous day's revenue numbers (rev_stats table); each revenue stat might have multiple rows depending on the country of the click (we can sum the revenue together). I've been having a hard time trying to link these three tables together to find the average RPC for each client for the previous day.
Table Structure:
CREATE TABLE `publisher_feed` (
`publisher_feed_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`alias` varchar(45) DEFAULT NULL,
`user_id` int(10) unsigned DEFAULT NULL,
`remote_feed_id` int(10) unsigned DEFAULT NULL,
`subid` varchar(255) DEFAULT '',
`requirement` enum('tq','tier2','ron','cpv','tos1','tos2','tos3','pv1','pv2','pv3','ar','ht') DEFAULT NULL,
`status` enum('enabled','disabled') DEFAULT 'enabled',
`tq` decimal(4,2) DEFAULT '0.00',
`clicklimit` int(11) DEFAULT '0',
`prev_rpc` decimal(20,10) DEFAULT '0.0000000000',
PRIMARY KEY (`publisher_feed_id`),
UNIQUE KEY `alias_UNIQUE` (`alias`),
KEY `publisher_feed_idx` (`remote_feed_id`),
KEY `publisher_feed_user` (`user_id`),
CONSTRAINT `publisher_feed_feed` FOREIGN KEY (`remote_feed_id`) REFERENCES `remote_feed` (`remote_feed_id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `publisher_feed_user` FOREIGN KEY (`user_id`) REFERENCES `user` (`user_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=124 DEFAULT CHARSET=latin1$$
CREATE TABLE `publisher_stats_raw` (
`publisher_stats_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`unique_data` varchar(350) NOT NULL,
`publisher_feed_id` int(10) unsigned DEFAULT NULL,
`date` date DEFAULT NULL,
`subid` varchar(255) DEFAULT NULL,
`queries` int(10) unsigned DEFAULT '0',
`impressions` int(10) unsigned DEFAULT '0',
`clicks` int(10) unsigned DEFAULT '0',
`filtered` int(10) unsigned DEFAULT '0',
`revenue` decimal(20,10) unsigned DEFAULT '0.0000000000',
PRIMARY KEY (`publisher_stats_id`),
UNIQUE KEY `unique_data_UNIQUE` (`unique_data`),
KEY `publisher_stats_raw_remote_feed_idx` (`publisher_feed_id`)
) ENGINE=InnoDB AUTO_INCREMENT=472 DEFAULT CHARSET=latin1$$
CREATE TABLE `rev_stats` (
`rev_stats_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`date` date DEFAULT NULL,
`remote_feed_id` int(10) unsigned DEFAULT NULL,
`typetag` varchar(255) DEFAULT NULL,
`subid` varchar(255) DEFAULT NULL,
`country` varchar(2) DEFAULT NULL,
`revenue` decimal(20,10) DEFAULT NULL,
`tq` decimal(4,2) DEFAULT NULL,
`finalized` int(11) DEFAULT '0',
PRIMARY KEY (`rev_stats_id`),
KEY `rev_stats_remote_feed_idx` (`remote_feed_id`),
CONSTRAINT `rev_stats_remote_feed` FOREIGN KEY (`remote_feed_id`) REFERENCES `remote_feed` (`remote_feed_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=58 DEFAULT CHARSET=latin1$$
Context:
Each remote_feed has a specific subid/typetag given to it, so we need to match up both the remote_feed_id and subid columns from the publisher_feed table to the remote_feed_id and typetag columns in the rev_stats table.
My current, non-working, implementation:
SELECT
pf.publisher_feed_id, psr.date, sum(clicks), sum(rs.revenue)
FROM
xml_network.publisher_feed pf
JOIN
xml_network.publisher_stats_raw psr
ON
psr.publisher_feed_id = pf.publisher_feed_id
JOIN
xml_network.rev_stats rs
ON
rs.remote_feed_id = pf.remote_feed_id
WHERE
pf.requirement = 'tq'
AND
pf.subid = rs.typetag
AND
psr.date <> date(curdate())
GROUP BY
psr.date
ORDER BY
psr.date DESC
LIMIT 1;
The above keeps pulling the wrong data out of the rev_stats table: it pulls the sum of the correct stats, but repeats it because of the join fan-out. Any help with how to properly pull the correct data would be greatly appreciated. (I could use multiple queries and PHP to get the correct results, but what's the fun in that!)
Figured out a way to get this accomplished. It's definitely not a fast method by any means, needing 4 selects to get it done, but it works flawlessly =)
SELECT
pf.publisher_feed_id,
round(
(
SELECT
SUM(rs.revenue)
FROM
xml_network.rev_stats rs
WHERE
rs.remote_feed_id = pf.remote_feed_id
AND
rs.typetag = pf.subid
AND
rs.date = subdate(current_date, 1)
),10)as revenue,
(
SELECT
MAX(rs.tq)
FROM
xml_network.rev_stats rs
WHERE
rs.remote_feed_id = pf.remote_feed_id
AND
rs.typetag = pf.subid
AND
rs.date = subdate(current_date, 1)
) as tq,
(
SELECT
SUM(psr.clicks)-SUM(psr.filtered)
FROM
xml_network.publisher_stats_raw psr
WHERE
psr.publisher_feed_id = pf.publisher_feed_id
AND
psr.date = subdate(current_date, 1)
) as clicks
FROM
xml_network.publisher_feed pf
WHERE
pf.requirement = 'tq';
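A hedged alternative to the four correlated selects, sketched and untested: pre-aggregate each stats table in a derived table so each side collapses to one row per key before joining. This avoids the fan-out that made the original join repeat revenue rows:

SELECT
    pf.publisher_feed_id,
    ROUND(rs.revenue, 10) AS revenue,
    rs.tq,
    psr.clicks
FROM
    xml_network.publisher_feed pf
LEFT JOIN
    ( SELECT remote_feed_id, typetag,
             SUM(revenue) AS revenue, MAX(tq) AS tq
      FROM xml_network.rev_stats
      WHERE date = SUBDATE(CURRENT_DATE, 1)
      GROUP BY remote_feed_id, typetag
    ) rs
    ON rs.remote_feed_id = pf.remote_feed_id
    AND rs.typetag = pf.subid
LEFT JOIN
    ( SELECT publisher_feed_id,
             SUM(clicks) - SUM(filtered) AS clicks
      FROM xml_network.publisher_stats_raw
      WHERE date = SUBDATE(CURRENT_DATE, 1)
      GROUP BY publisher_feed_id
    ) psr
    ON psr.publisher_feed_id = pf.publisher_feed_id
WHERE
    pf.requirement = 'tq';

The LEFT JOINs keep feeds that had no stats yesterday, mirroring the NULLs the correlated subqueries would return.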
