I'm facing some performance issues with MySQL. The query that selects the comments for a specific url_id takes about 1.5 to 2 seconds to complete.
Comments Table
CREATE TABLE `comments` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`url_id` INT UNSIGNED NOT NULL,
`user_id` INT UNSIGNED NOT NULL,
`published` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`votes_up` SMALLINT UNSIGNED NOT NULL DEFAULT 0,
`votes_down` SMALLINT UNSIGNED NULL DEFAULT 0,
`text` TEXT,
PRIMARY KEY (id),
INDEX (url_id),
INDEX (user_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
I have inserted 100,000 comments and executed this query: SELECT * FROM comments WHERE url_id = 33 ORDER BY published ASC LIMIT 0,5.
Is this normal? A simple query taking almost 2 seconds to complete? Should I create a separate table just for the comment text?
YouTube, Facebook, and so on have millions (or billions) of comments; how do they fetch the comments for an object (video, post, etc.) so fast?
To summarize my question:
Should I stop worrying about performance, stick with this, and only start worrying once the website reaches a certain amount of user activity?
If I do need to worry about this now, what's wrong with my table structure? What do I need to change to reduce that query's completion time?
Update
The explain output:
+----+-------------+----------+------+---------------+----------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+------+---------------+----------+---------+-------+------+-----------------------------+
| 1 | SIMPLE | comments | ref | url_id | url_id | 4 | const | 549 | Using where; Using filesort |
+----+-------------+----------+------+---------------+----------+---------+-------+------+-----------------------------+
The problem here is that MySQL uses only one index per table per query. Your EXPLAIN shows it is using the url_id index to identify which rows to return, which leaves it unable to use an index for the sorting (hence "Using filesort"); a separate index on published alone would not help for the same reason.
What you should do is create a composite index on (url_id, published).
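For example, a minimal sketch of that change (the index name is arbitrary; once the composite index exists, the old single-column url_id index is redundant and can be dropped):
ALTER TABLE comments
  ADD INDEX url_id_published (url_id, published),
  DROP INDEX url_id;
With that index, MySQL can locate the rows for url_id = 33 and read them already sorted by published, so the filesort goes away and the LIMIT can stop after reading five matching index entries.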
Related
How to stop the automatically increasing primary id on MyISAM INSERT IGNORE?
The primary id is increasing rapidly. How can I solve this problem?
MY TABLE STRUCTURE
CREATE TABLE `dates_tbl` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`date_raw` int(8) unsigned NOT NULL DEFAULT '0',
`date` date NOT NULL DEFAULT '0000-00-00',
`day` tinyint(2) unsigned NOT NULL DEFAULT '0',
`week` tinyint(2) unsigned NOT NULL DEFAULT '0',
`month` tinyint(2) unsigned NOT NULL DEFAULT '0',
`year` year(4) NOT NULL DEFAULT '0000',
`created_on` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `date_raw_unique` (`date_raw`),
KEY `date_raw` (`date_raw`),
KEY `month_year` (`month`,`year`),
KEY `year` (`year`)
) ENGINE=InnoDB AUTO_INCREMENT=21628 DEFAULT CHARSET=latin1
INSERT QUERY
$q = "INSERT IGNORE INTO dates_tbl(date_raw,date,day,week,month,year,created_on)
values $dynamic_value";
RESULT
mysql> select id from dates_tbl limit 10;
+-------+
| id |
+-------+
| 19657 |
| 19681 |
| 19729 |
| 19777 |
| 19825 |
| 19873 |
| 19884 |
| 19913 |
| 19960 |
| 20007 |
+-------+
10 rows in set (0.01 sec)
The table is InnoDB, why do you mention MyISAM?
Get rid of id, it probably serves no purpose. Instead, promote date_raw to be the PRIMARY KEY.
Avoid redundant indexes -- a PRIMARY KEY is a UNIQUE KEY is a KEY.
Usually it is better to keep year+month+day in a single DATE column, then pick it apart when needed.
Do you actually use created_on and modified_on? Or is that an artifact of some 3rd party software?
It is more efficient to have an index on the DATE column, then use something like this:
WHERE `date` >= '2010-03-01'
  AND `date` <  '2010-03-01' + INTERVAL 1 MONTH
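A minimal sketch of the table with those suggestions applied (an assumption of the result: created_on/modified_on are kept only if you actually use them, and day/week/month/year are computed from `date` when needed instead of being stored):
CREATE TABLE `dates_tbl` (
  `date_raw`    int(8) unsigned NOT NULL,
  `date`        date NOT NULL,
  `created_on`  datetime NOT NULL,
  `modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`date_raw`),
  KEY `date` (`date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;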
With those changes, you have eliminated half the columns and most of the indexes, and the INSERT IGNORE will stop giving you trouble.
If you keep the AUTO_INCREMENT, then let's see some more of the logic -- we need to see why IGNORE is taking effect. The solution may involve some of the other code in addition to the INSERT.
If this is part of a star schema in data warehousing, then I will rant about how it is bad to normalize "continuous" values such as DATE. At that point the whole table vanishes, and the code will run faster!
I'm using a host that only supports the MyISAM table engine for MySQL. I'm trying to create a CMS using PHP and MySQL; however, I'm having trouble working out how to create relationships between the tables. For example, one of the features within this system is being able to assign tags to an article/blog post, similar to how Stack Overflow has tags on its questions.
My question is: as I cannot change my tables to use InnoDB, how can I form a relationship between the two tables? I am unable to use foreign keys, as they are not supported in MyISAM, or at least not enforced.
So far, all I've found when searching is to keep track of it in PHP by making sure I update multiple tables at a time, but there must be a way of doing this on the MySQL side.
Below are examples of the Article and Tag tables.
+---------------------------+ +---------------------------+
| Article | | Tags |
+---------------------------+ +---------------------------+
| articleID int(11) | | tagID int(11) |
| title varchar(150) | | tagString varchar(15) |
| description varchar(150) | +---------------------------+
| author varchar(30) |
| content text |
| created datetime |
| edited datetime |
+---------------------------+
I've found loads of related questions on this site, but most of them use InnoDB, which I cannot do, as my host does not support it.
I've found a solution (kind of). I've added another table called ArticleTags
+---------------------------+
| ArticleTags |
+---------------------------+
| articleID int(11) |
| tagID int(11) |
+---------------------------+
This query returns the correct result, but I'm not sure if it's a bit of a hack, or if there is a better way to do it.
SELECT `tagString`
FROM `Tags`
WHERE `tagID`
IN (
SELECT `tagID`
FROM `ArticleTags`
WHERE `articleID` = :id
)
ORDER BY `Tags`.`tagString`
Can someone tell me if this is right?
Try TRIGGERs:
Enforcing Foreign Keys Programmatically in MySQL
Emulating Cascading Operations From InnoDB to MyISAM Tables
Example: MyISAM with a foreign key enforced by a trigger:
Create parent table:
CREATE TABLE myisam_parent
(
mparent_id INT NOT NULL,
PRIMARY KEY (mparent_id)
) ENGINE=MYISAM;
Create child table:
CREATE TABLE myisam_child
(
mparent_id INT NOT NULL,
mchild_id INT NOT NULL,
PRIMARY KEY (mparent_id, mchild_id)
) ENGINE = MYISAM;
Create trigger (with DELIMITER):
DELIMITER $$
CREATE TRIGGER insert_myisam_child
BEFORE INSERT ON myisam_child
FOR EACH ROW
BEGIN
IF (SELECT COUNT(*) FROM myisam_parent WHERE mparent_id=new.mparent_id)=0 THEN
INSERT INTO error_msg VALUES ('Foreign Key Constraint Violated!'); -- custom error: the duplicate-key insert aborts the statement
END IF;
END;$$
DELIMITER ;
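The trigger relies on a small helper table, error_msg, that already contains the error message as its primary key; inserting the same value again raises a duplicate-key error, which aborts the offending INSERT. A sketch of that helper table (structure assumed from the error shown below):
CREATE TABLE error_msg
(
msg VARCHAR(100) NOT NULL,
PRIMARY KEY (msg)
) ENGINE=MYISAM;
INSERT INTO error_msg VALUES ('Foreign Key Constraint Violated!');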
Test case:
Insert some valid rows (3 rows in myisam_parent and 6 rows in myisam_child):
INSERT INTO myisam_parent VALUES (1), (2), (3);
INSERT INTO myisam_child VALUES (1,1), (1,2), (2,1), (2,2), (2,3), (3,1);
Now try inserting a child row with no matching parent:
INSERT INTO myisam_child VALUES (7, 1);
Returns this error:
ERROR 1062 (23000): Duplicate entry 'Foreign Key Constraint Violated!' for key 'PRIMARY'
Note:
This example covers INSERT; for DELETE and UPDATE triggers, see the links at the beginning of this answer.
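As a rough sketch of the same idea for deletes (an illustration of mine, not taken from the linked articles), a BEFORE DELETE trigger on the parent can emulate ON DELETE CASCADE:
DELIMITER $$
CREATE TRIGGER delete_myisam_parent
BEFORE DELETE ON myisam_parent
FOR EACH ROW
BEGIN
-- emulate ON DELETE CASCADE: remove child rows that reference the deleted parent
DELETE FROM myisam_child WHERE mparent_id = old.mparent_id;
END;$$
DELIMITER ;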
I'm currently developing a small invoicing system for a company that has many branches. I was wondering how I can generate different invoice autonumbers for each branch.
Here is my table structure for table invoice_header:
id | number | cust | grand_total
Should I create a different table for each branch? For example, invoice_header_1, invoice_header_2, invoice_header_3.
It's nicer and more workable to make a separate table with the branches in it.
Then you add just one column to the invoice_header table, holding the id of the branch from the branch table.
This way you can connect the two and keep it nice and clean, and easier to edit.
So:
table invoice_header
id | number | cust | grand_total | branch_id
table branch
id | branch_name
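A minimal sketch of those two tables (column types are assumptions, and the foreign key is only enforced if you use InnoDB):
CREATE TABLE branch (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  branch_name VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
CREATE TABLE invoice_header (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  number      INT UNSIGNED NOT NULL,
  cust        INT UNSIGNED NOT NULL,
  grand_total DECIMAL(10,2) NOT NULL DEFAULT 0,
  branch_id   INT UNSIGNED NOT NULL,
  PRIMARY KEY (id),
  KEY (branch_id),
  CONSTRAINT fk_invoice_branch FOREIGN KEY (branch_id) REFERENCES branch (id)
) ENGINE=InnoDB;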
If you use ENGINE=MyISAM, MySQL will do this for you:
CREATE TABLE invoice_header (
branch_id INT UNSIGNED NOT NULL,
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
number INT UNSIGNED NOT NULL DEFAULT 1,
cust INT UNSIGNED NOT NULL,
grand_total DECIMAL(10,2) NOT NULL DEFAULT 0,
PRIMARY KEY(branch_id, id)
) ENGINE=MyISAM;
MySQL will generate a unique value for the id column, starting at 1 for each different branch_id. This only works if the engine is MyISAM.
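For example (hypothetical data; the per-branch numbering below is the documented MyISAM behavior for an AUTO_INCREMENT column that is the second part of a composite primary key):
INSERT INTO invoice_header (branch_id, cust) VALUES (1, 100);
INSERT INTO invoice_header (branch_id, cust) VALUES (1, 101);
INSERT INTO invoice_header (branch_id, cust) VALUES (2, 102);
-- branch 1 gets id 1 and 2; branch 2 starts again at id 1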
Search for 'multiple-column index' in http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html for more information.
The table ps_category_product in PrestaShop has the following structure
# Obtained using SHOW CREATE TABLE `ps_category_product`
CREATE TABLE `ps_category_product` (
`id_category` int(10) unsigned NOT NULL,
`id_product` int(10) unsigned NOT NULL,
`position` int(10) unsigned NOT NULL DEFAULT '0',
KEY `category_product_index` (`id_category`,`id_product`),
KEY `id_product` (`id_product`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
It's not entirely clear to me, but it seems that the combination of id_category and id_product should be unique within the table, yet for some reason MySQL allows me to insert duplicates:
mysql> select * from ps_category_product limit 10;
+-------------+------------+----------+
| id_category | id_product | position |
+-------------+------------+----------+
| 11 | 1 | 1 |
| 11 | 2 | 1 |
| 11 | 3 | 1 |
| 11 | 4 | 1 |
| 11 | 5 | 1 |
| 11 | 6 | 1 |
| 11 | 7 | 1 |
| 11 | 8 | 1 |
| 11 | 9 | 1 |
| 11 | 10 | 1 |
+-------------+------------+----------+
10 rows in set (0.00 sec)
mysql> INSERT INTO `ps_category_product` VALUES(11, 1, 1);
Query OK, 1 row affected (0.05 sec)
How can I prevent this from happening?
Later edit
It was a bug in PrestaShop. Take a look at http://forge.prestashop.com/browse/PSCFI-4397
Specifying KEY will not enforce a unique constraint unless you specify UNIQUE KEY or PRIMARY KEY.
Try recreating the table using the following DDL:
CREATE TABLE `ps_category_product` (
`id_category` int(10) unsigned NOT NULL,
`id_product` int(10) unsigned NOT NULL,
`position` int(10) unsigned NOT NULL DEFAULT '0',
UNIQUE KEY `category_product_index` (`id_category`,`id_product`),
KEY `id_product` (`id_product`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
That should do the trick.
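If the table already exists and you would rather not recreate it, the same change can be made in place (this assumes any existing duplicate rows have been removed first; otherwise the ALTER will fail with a duplicate-key error):
ALTER TABLE `ps_category_product`
  DROP KEY `category_product_index`,
  ADD UNIQUE KEY `category_product_index` (`id_category`,`id_product`);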
Have a look at the MySQL CREATE TABLE syntax for more info.
The constraint should be imposed via the admin interface and underlying object code, so you shouldn't ever have a situation where there are duplicates, although it would be easy enough to write a cron job to remove any that did occur.
You could force this to be unique, but that doesn't solve the fundamental problem of why it happens in the first place. I honestly don't see what issue you're trying to solve. If you're importing products yourself, you should use the object interface rather than writing to these tables directly; otherwise, yes, weird things might happen.
I'm trying to create a page that tracks some basic user statistics in a database. I'm starting small, by trying to keep track of how many people visit using each User Agent, but I've come across a stumbling block. How can I check whether the User Agent being added to the table is already there or not?
You can make the column that stores the User Agent string unique, and do INSERT ... ON DUPLICATE KEY UPDATE for your stats insertions.
For the table:
CREATE TABLE IF NOT EXISTS `user_agent_stats` (
`user_agent` varchar(255) collate utf8_bin NOT NULL,
`hits` int(21) NOT NULL default '1',
UNIQUE KEY `user_agent` (`user_agent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| user_agent | varchar(255) | NO | PRI | NULL | |
| hits | int(21) | NO | | NULL | |
+------------+--------------+------+-----+---------+-------+
You could use the following query to insert user agents:
INSERT INTO user_agent_stats( user_agent ) VALUES('user agent string') ON DUPLICATE KEY UPDATE hits = hits+1;
Executing the above query multiple times gives:
+-------------------+------+
| user_agent | hits |
+-------------------+------+
| user agent string | 6 |
+-------------------+------+
Before adding it to the database, SELECT from the table where you're inserting the User Agent string. If mysql_num_rows is greater than 0, the User Agent you're trying to add already exists; if it returns 0, the User Agent is new.
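For example, a check along these lines (the table and column names follow the example above) before the insert:
-- returns a row only if this exact User Agent string is already stored
SELECT user_agent
FROM user_agent_stats
WHERE user_agent = 'user agent string';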