MyISAM race conditions / LOCK TABLES - PHP

My 'invoices' table:
CREATE TABLE `invoices` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`invoice_id` int(11) NOT NULL,
PRIMARY KEY (`id`,`invoice_id`),
UNIQUE KEY `invoice_id` (`invoice_id`),
KEY `order_id` (`order_id`)
) ENGINE=MyISAM AUTO_INCREMENT=115 DEFAULT CHARSET=utf8
When I try this query:
mysqli_query($conn, "LOCK TABLES 'invoices' WRITE");
in a PHP script, it doesn't work: I can still insert a new row into the "locked" table from phpMyAdmin's SQL console while the lock is supposedly held.
Can I be confident that a query like this
INSERT INTO `invoices` (`invoice_id`) SELECT MAX(`invoice_id`)+100 FROM `invoices`
successfully prevents race conditions, so that I can use it instead of a LOCK TABLES query?
NOTES:
I did not create this table.
I may not alter the table.

When you write an SQL query, you should wrap table and column names in backticks, not single quotes.
In your case:
mysqli_query($conn, "LOCK TABLES `invoices` WRITE");
But I would recommend that you stop trying to "resolve" the race condition. Why did you decide that it is a problem in your case?
A race condition can be a big problem for some projects, but I doubt that it is in yours. I would support @Dave's comment: you already have an auto-incremented index, and that is more than enough in many cases.
IMHO you don't need these locks.
INSERT INTO `invoices` (`invoice_id`) SELECT MAX(`invoice_id`)+100 FROM `invoices`
This query makes almost no sense. Could you explain why you are trying to do this unusual insert?

Please note that the lock only lasts for the duration of your database session - in this case the duration of your script call.
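To make both points concrete, here is a minimal sketch, assuming the backtick fix above and a single mysqli connection $conn; the table and column names come from the question, and COALESCE is added so the very first insert also works on an empty table:
<?php
// Take the write lock, read the current maximum, insert, then release the lock.
// All three statements must run on the SAME connection, because the lock
// belongs to this session and disappears when the connection closes.
mysqli_query($conn, "LOCK TABLES `invoices` WRITE")
    or die(mysqli_error($conn)); // a failed LOCK TABLES would otherwise go unnoticed

$res  = mysqli_query($conn, "SELECT COALESCE(MAX(`invoice_id`), 0) + 100 AS next_id FROM `invoices`");
$next = mysqli_fetch_assoc($res)['next_id'];

mysqli_query($conn, "INSERT INTO `invoices` (`invoice_id`) VALUES ($next)");

mysqli_query($conn, "UNLOCK TABLES"); // other connections (e.g. phpMyAdmin) can write again
While the WRITE lock is held, other connections block on reads and writes to `invoices`. In the question's test, the single-quoted name made LOCK TABLES fail with a syntax error, so no lock was ever taken and phpMyAdmin could insert freely.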

Related

MySQL member log query slow - performance problem

I have a table where I log members.
There are 1,486,044 records here.
SELECT * FROM `user_log` WHERE user = '1554143' order by id desc
However, this query takes 5 seconds. What do you recommend?
Table construction below;
CREATE TABLE IF NOT EXISTS `user_log` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user` int(11) NOT NULL,
`operation_detail` varchar(100) NOT NULL,
`ip_adress` varchar(50) NOT NULL,
`l_date` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
COMMIT;
For this query:
SELECT * FROM `user_log` WHERE user = 1554143 order by id desc
You want an index on (user, id desc).
Note that I removed the single quotes around the filtering value for user, since this column is a number. This does not necessarily speed things up, but it is cleaner.
Also: select * is not a good practice, and not good for performance. You should enumerate the columns you want in the resultset (if you don't need them all, do not select them all). If you do want all columns, since your table does not have many columns, you might want to try a covering index on all 5 columns, like: (user, id desc, operation_detail, ip_adress, l_date).
In addition to the option of creating an index on (user, id), which has already been mentioned, a likely better option is to convert the table to InnoDB and create an index only on (user) (with InnoDB, a secondary index implicitly ends with the primary key, so an index on (user) can also serve the ORDER BY id DESC).
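As an illustration, a one-off script like the following adds the composite index suggested above (the index name idx_user_id and the connection details are made up; adjust them to your setup):
<?php
// One-time migration sketch: add a composite index so the WHERE + ORDER BY
// can be resolved from the index instead of scanning ~1.5 million rows.
$conn = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
$conn->query("ALTER TABLE `user_log` ADD INDEX `idx_user_id` (`user`, `id`)")
    or die($conn->error);

// The original query can then walk the index backwards for ORDER BY id DESC.
$userId = 1554143;
$stmt = $conn->prepare("SELECT * FROM `user_log` WHERE `user` = ? ORDER BY `id` DESC");
$stmt->bind_param('i', $userId);
$stmt->execute();
Running EXPLAIN on the SELECT before and after is the quickest way to confirm the index is actually used.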

Unexpected double inserts with MyISAM

I have a likes table for customers to like products on my website.
The problem is, I use this table:
CREATE TABLE IF NOT EXISTS `likes` (
`id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT,
`user` varchar(40) NOT NULL,
`post` int(11) UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 AUTO_INCREMENT=1 ;
`user` is the user who likes, `post` is the product id.
The like button sends an AJAX request to this PHP:
session_start();
$user = $_SESSION["user"];
$pinsid = $_POST['id'];
$stmt = $mysqli_link->prepare("SELECT * FROM likes WHERE post=? AND user=?");
$stmt->bind_param('is', $pinsid, $user);
$stmt->execute();
$result = $stmt->get_result();
$stmt->close();
$chknum = $result->num_rows;
if ($chknum == 0) {
    $stmt = $mysqli_link->prepare("INSERT INTO likes (user, post) VALUES (?,?)");
    $stmt->bind_param('si', $user, $pinsid);
    $stmt->execute();
    $stmt->close();
    $response = 'success';
} else {
    $response = 'duplicate'; // otherwise $response is undefined when the like already exists
}
echo json_encode($response);
My problem is that I get double inserts in likes from the same person, e.g.:
1 josh 5
2 josh 5
but it only happens if MySQL engine is set as InnoDB, if I change it to MyISAM I have only 1 insert.
What is happening? What should I do to make it work properly?
The MyISAM engine uses table-level locking, which means that while one operation is executing on a table, all other operations on that table wait until it has finished.
InnoDB is transactional and uses row-level locking; since you're not using transactions, nothing stops two concurrent requests from both passing the SELECT check and then both inserting.
As mentioned in the comments and answers, the simplest solution is to create a unique constraint on user and post. In your case you could even use both as the primary key, because the auto-increment column adds no value.
To create a unique constraint:
ALTER TABLE likes ADD UNIQUE KEY uk_user_post (user,post);
As for your question:
but it can slow down my inserts?
If we speak solely about the insert operation on the table: yes, it does slow down, because each index has to be updated after an insert, update, or delete operation. How much it slows down depends on the size of the index(es) and the number of rows in the table.
However, in your current table structure you have no indexes at all on user and post, and in your application you perform a select with a lookup on both columns, which results in a full table scan.
With the unique index (user, post) you can skip the select, because when the unique constraint is violated you'll get an SQL error.
Also, user and post are foreign keys, so they should be indexed anyway.
The unique index (user, post) covers the user FK, so you will additionally need a separate index on post.
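A sketch of what the AJAX endpoint could look like once the unique key exists (variable names follow the question's code; the 'duplicate' response value is made up): the SELECT disappears, and a repeated like is detected by checking for MySQL's duplicate-key error, errno 1062, on the INSERT.
<?php
session_start();
$user   = $_SESSION["user"];
$pinsid = $_POST['id'];

// Just insert; the unique key (user, post) guarantees at most one row per pair.
// Note: on PHP >= 8.1 mysqli throws exceptions by default; call
// mysqli_report(MYSQLI_REPORT_OFF) or catch mysqli_sql_exception instead.
$stmt = $mysqli_link->prepare("INSERT INTO likes (user, post) VALUES (?, ?)");
$stmt->bind_param('si', $user, $pinsid);

if ($stmt->execute()) {
    $response = 'success';
} elseif ($stmt->errno == 1062) {      // ER_DUP_ENTRY: this user already liked this post
    $response = 'duplicate';
} else {
    $response = 'error';
}
$stmt->close();
echo json_encode($response);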
One way of doing this would be to set up a unique key for user and post in the likes table (see https://dev.mysql.com/doc/refman/5.0/en/constraint-primary-key.html).
If that were in place, the database would ensure that there are no duplicates of user and post. However, it could be problematic if the table already contains duplicates.
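If duplicates are already present, one common way to clean them up before adding the unique key is a self-join DELETE that keeps the lowest id per (user, post) pair; a sketch, reusing the question's $mysqli_link connection:
<?php
// Delete every like that has an older twin with the same (user, post),
// then the unique key can be added without errors.
$mysqli_link->query(
    "DELETE l1 FROM likes l1
     JOIN likes l2 ON l1.user = l2.user AND l1.post = l2.post AND l1.id > l2.id"
) or die($mysqli_link->error);

$mysqli_link->query("ALTER TABLE likes ADD UNIQUE KEY uk_user_post (user, post)")
    or die($mysqli_link->error);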

Transactions and Order Safety with MySQL (InnoDB)

Scenario
Say I have a list of voucher codes that I am giving away. I need to ensure that if two people place an order at exactly the same time, they do not get the same voucher.
Tables
CREATE TABLE IF NOT EXISTS `order` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`voucher_id` bigint(20) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `voucher_id` (`voucher_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
ALTER TABLE `order` ADD CONSTRAINT `order_fk` FOREIGN KEY (`voucher_id`) REFERENCES `voucher` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;
CREATE TABLE IF NOT EXISTS `voucher` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`code` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Sample data
INSERT INTO `voucher` (`code`) VALUES ('A'), ('B'), ('C');
Sample Query
SELECT @voucher_id := v.id FROM `voucher` v LEFT JOIN `order` o ON o.voucher_id = v.id WHERE o.id IS NULL;
INSERT INTO `order` (`voucher_id`) VALUES (@voucher_id);
Question
I believe the UNIQUE KEY on voucher_id in the order table will prevent two orders having the same voucher_id, giving an error / throwing an exception if the same voucher id is inserted twice. This would require a while loop to retry upon failure.
The alternative is read locking the vouchers table before the SELECT and releasing that lock after the INSERT, ensuring the same voucher isn't picked twice.
My question is therefore:
Which is faster?
A while loop in PHP code.
Read locking the vouchers table.
Is there another way?
Edits
ALTER TABLE `order` CHANGE `voucher_id` `voucher_id` BIGINT(20) UNSIGNED NOT NULL
will cause the INSERT to fail if @voucher_id is NULL (as desired, as there would be no vouchers left).
The "correct" and by that I mean best way to do what you're looking to do is to generate the voucher at the time you place the order. Look at the documentation for the sha1() function in php. You can seed it with unique information to prevent duplicates and use that for your voucher along with an auto_increment field for the unique ID.
When the order is placed, PHP generates a new voucher, saves it to the database, and sends it to the user. This way you're only storing valid vouchers and you're also preventing duplicates from being created.
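A minimal sketch of that idea, assuming a mysqli connection $mysqli and the voucher table from the question; $orderId and $userId are placeholders for whatever per-order values you have at hand:
<?php
// Build a seed that is unique per order, hash it, and keep the first 10
// characters so it fits the varchar(10) `code` column.
$seed = $orderId . '|' . $userId . '|' . microtime(true) . '|' . mt_rand();
$code = strtoupper(substr(sha1($seed), 0, 10));

$stmt = $mysqli->prepare("INSERT INTO `voucher` (`code`) VALUES (?)");
$stmt->bind_param('s', $code);
$stmt->execute();
$voucherId = $mysqli->insert_id; // use this id in the `order` row
If collisions must be strictly impossible, a UNIQUE key on `code` plus a retry on failure would make this airtight.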
You can use START TRANSACTION, COMMIT, and ROLLBACK to prevent race conditions in your SQL. http://dev.mysql.com/doc/refman/4.1/en/commit.html
In your case, I would simply wrap the SELECT and the INSERT in a critical section bounded by those statements.
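As a sketch of that route (the SELECT ... FOR UPDATE row lock is an addition not in the original answer): inside one InnoDB transaction, the locking read prevents a concurrent request from grabbing the same free voucher, and the UNIQUE key on order.voucher_id stays as a safety net.
<?php
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw on errors
$mysqli->begin_transaction();
try {
    // Lock one voucher that has no order yet; a concurrent transaction
    // running the same statement waits or picks a different row.
    $res = $mysqli->query(
        "SELECT v.id FROM `voucher` v
         LEFT JOIN `order` o ON o.voucher_id = v.id
         WHERE o.id IS NULL
         LIMIT 1 FOR UPDATE"
    );
    $row = $res->fetch_assoc();
    if ($row === null) {
        throw new Exception('No vouchers left');
    }

    $stmt = $mysqli->prepare("INSERT INTO `order` (`voucher_id`) VALUES (?)");
    $stmt->bind_param('i', $row['id']);
    $stmt->execute();

    $mysqli->commit();
} catch (Exception $e) {
    $mysqli->rollback(); // a duplicate voucher_id or an empty voucher pool ends up here
}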

MySQL efficient query (select and update)

I have a table whose structure is as follows:
CREATE TABLE `table_name` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ttype` int(1) DEFAULT '19',
`title` mediumtext,
`tcode` char(2) DEFAULT NULL,
`tdate` int(11) DEFAULT NULL,
`visit` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `tcode` (`tcode`),
KEY `ttype` (`ttype`),
KEY `tdate` (`tdate`)
) ENGINE=MyISAM;
I have two queries in x.php, like these:
SELECT * FROM table_name WHERE id='10' LIMIT 1
UPDATE table_name SET visit=visit+1 WHERE id='10' LIMIT 1
My first question is whether updating `visit` in this table causes reindexing and hurts performance. Note that `visit` is not a key.
A second method would be to create a new table that contains `visit`, as follows:
CREATE TABLE `table_name2` (
`newid` int(10) unsigned NOT NULL,
`visit` int(11) DEFAULT '0',
PRIMARY KEY (`newid`)
) ENGINE=MyISAM;
Then I would select with:
SELECT w.*,q.visit FROM table_name w LEFT JOIN table_name2 q
ON (w.id=q.newid) WHERE w.id='10' LIMIT 1
UPDATE table_name2 SET visit=visit+1 WHERE newid='10' LIMIT 1
Is the second method preferable to the first? Which one would have better performance and be quicker?
Note: all SQL queries are run from PHP (using mysql_query). Also, I need the first table's indexes for other queries on other pages.
I'd say your first method is the best, and simplest. Updating visit will be very fast and no updating of indexes needs to be performed.
I'd prefer the first, and have used it for similar things in the past with no problems. You can remove the LIMIT clause; since id is your primary key, you will never have more than one result, although the query optimizer probably does this for you.
There was a question someone asked earlier to which I responded with a solution you may want to consider as well. When you use 'count' columns you lose the ability to mine the data later. With a transaction table you not only get 'views' counts, you can also query for date ranges etc. Sure, you will carry the weight of storing potentially hundreds of thousands of rows, but the table is narrow and the indexes are numeric.
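To make that alternative concrete, a sketch using the same mysql_query style the question mentions (the table and column names and the example date are made up for illustration):
// One narrow row per view instead of a counter column.
mysql_query("CREATE TABLE IF NOT EXISTS `page_views` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `page_id` int(10) unsigned NOT NULL,
    `view_date` datetime NOT NULL,
    PRIMARY KEY (`id`),
    KEY `page_date` (`page_id`, `view_date`)
) ENGINE=MyISAM");

// Recording a visit replaces UPDATE ... SET visit = visit + 1:
mysql_query("INSERT INTO `page_views` (`page_id`, `view_date`) VALUES ('10', NOW())");

// Totals, or any date range, can be mined later:
mysql_query("SELECT COUNT(*) FROM `page_views` WHERE `page_id` = '10'");
mysql_query("SELECT COUNT(*) FROM `page_views` WHERE `page_id` = '10' AND `view_date` >= '2012-01-01'");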
I cannot see a solution on the database side... Perhaps you can do it in PHP: if the user has a PHP session, you could, for example, only update the visit count every 10th time, like this:
<?php
session_start();
// Initialize the per-session counter on the first hit
if (!isset($_SESSION['count'])) {
    $_SESSION['count'] = 0;
}
$_SESSION['count'] += 1;
// Every 10th hit, add 10 to the stored visit count in a single UPDATE
if ($_SESSION['count'] >= 10) {
    do_the_function_that_updates_the_count_plus_10();
    $_SESSION['count'] = 0;
}
Of course you lose some counts this way, but perhaps that is not that important?

Preserve Autoincrement ID Within MySQL Transaction

I have two MySQL database tables that are meant to hold data for eshop orders. They're built as such (extremely simplified version):
CREATE TABLE `orders` (
`id` int(11) NOT NULL auto_increment,
PRIMARY KEY (`id`)
);
CREATE TABLE `order_items` (
`id` int(11) NOT NULL auto_increment,
`orderID` int(11) NOT NULL,
PRIMARY KEY (`id`)
)
The relationship between the two is that orders.id corresponds to order_items.orderID.
I'm using a transaction to place a new order, but I have a problem preserving the above relationship. In order to get the new order id, I have to commit the orders INSERT query, read the auto-incremented id, and then start another transaction for the order items, which pretty much defeats the point of using transactions.
I could insert the new order in the orders table and then try something like
INSERT INTO order_items(orderID) VALUES(LAST_INSERT_ID())
which I assume would work. However, after the first order item is inserted, LAST_INSERT_ID() would stop returning the order id and instead return the order item id, making it impossible to use this query to insert another order item.
Is there a way to make this whole thing work within a single transaction or should I give up and use a procedure instead?
Would this work?
INSERT QUERY;   -- your INSERT INTO `orders` goes here
SET @insertid = LAST_INSERT_ID();
INSERT INTO `order_items` SET `orderID` = @insertid;
All in one batch on the same connection. You will have to double-check the syntax.
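For completeness, a sketch of the same idea from PHP with mysqli (this assumes both tables are InnoDB, which they need to be for transactions to have any effect; $itemRows is a placeholder for your item data): the order id is captured once via insert_id and reused for every item, all inside one transaction.
<?php
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw on errors
$mysqli->begin_transaction();
try {
    $mysqli->query("INSERT INTO `orders` () VALUES ()");
    $orderId = $mysqli->insert_id; // saved now, so later inserts cannot overwrite it

    $stmt = $mysqli->prepare("INSERT INTO `order_items` (`orderID`) VALUES (?)");
    foreach ($itemRows as $item) { // bind the real item columns here in the full schema
        $stmt->bind_param('i', $orderId);
        $stmt->execute();
    }

    $mysqli->commit();   // order and items appear together, or not at all
} catch (Exception $e) {
    $mysqli->rollback();
}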
You can't count on LAST_INSERT_ID() alone, because it changes when you insert rows into order_items, whose id column is also auto_increment.
Maybe you can try this:
INSERT INTO order_items(orderID) VALUES((SELECT id FROM orders ORDER BY id desc LIMIT 1))
