InnoDB locking a row to prevent multiple concurrent sessions from reading it - php

I'm using InnoDB.
I have table A
ID | DATA
1 | Something
2 | something else
table B
user_id | DATA
1 | NULL
My program reads a row from table A, updates table B, and then deletes the row from table A after the update statement.
Is it possible for two users (two concurrent sessions) to read the same row from table A? How can I avoid that?
Here is my program:
$core = Database::getInstance();
$q = $core->dbh->prepare("SELECT * FROM `tableA` LIMIT 1");
$q->execute();
$result = $q->fetch();
$q = $core->dbh->prepare("UPDATE `tableB` SET `data` = ? where `user_id`= ?");
$q->execute(array($result['data'],$ID));
// how to prevent a second user from reading the same row before the next statement gets executed
$q = $core->dbh->prepare("DELETE FROM `tableA` where `ID`= ?");
$q->execute(array($result['ID']));

The MySQL manual's chapter on internal locking should make clear how to achieve this:
https://dev.mysql.com/doc/refman/5.0/en/internal-locking.html

You can use SELECT ... FOR UPDATE, which puts an exclusive lock on the row (provided the transaction isolation level is at least reasonably strict; the default REPEATABLE READ is fine).
This only works if the table's storage engine is InnoDB. You also have to execute all the queries inside the same transaction (issue a BEGIN query at the start and COMMIT at the end).
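Applied to the schema from the question, the flow could look like this (a sketch only; the literal values stand in for whatever row the SELECT returns, and autocommit is assumed to be handled by the transaction):

```sql
START TRANSACTION;
-- Lock the row; a second session running the same SELECT ... FOR UPDATE
-- blocks here until this transaction commits or rolls back.
SELECT `ID`, `DATA` FROM `tableA` LIMIT 1 FOR UPDATE;
UPDATE `tableB` SET `data` = 'Something' WHERE `user_id` = 1;
DELETE FROM `tableA` WHERE `ID` = 1;
COMMIT;  -- releases the row lock
```

Note that in MySQL the FOR UPDATE clause comes after LIMIT.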

Related

How can I prevent two users from accessing MySQL table at the same time?

Let's say I got a website which, when visited, shows what is your lucky word today. The problem is that every word can be lucky for only one person so you need to be fast visiting the website. Below is a sample table with lucky words:
+------------+
| lucky_word |
+------------+
| cat        |
| moon       |
| piano      |
| yellow     |
| money      |
+------------+
My question is: how can I prevent two (or more) users from accessing that table at the same time? I assume every user reads the first lucky_word from the table and the chosen word is deleted immediately, so it won't be the next user's lucky word. For instance, I want to avoid cat being shown to more than one visitor.
Should I solve this using an appropriate MySQL query or some lines in a PHP code or both?
You can use a locking read within a transaction; for example, using PDO:
$pdo = new PDO('mysql:charset=utf8;dbname='.DBNAME, USERNAME, PASSWORD);
$pdo->beginTransaction();
$word = $pdo->query('SELECT lucky_word FROM myTable LIMIT 1 FOR UPDATE')
->fetchColumn();
$pdo->prepare('DELETE FROM myTable WHERE lucky_word = ?')
->execute(array($word));
$pdo->commit();
In MySQL you can lock tables, to prevent other sessions reading and/or writing to the table. In the case of WRITE locks, the first session to request the lock will hold the table until it is released, and then the second session will get it until unlocked, and so forth. That way you can be sure that no two sessions are accessing or manipulating the same data at the same time.
Read all about it in the manual:
https://dev.mysql.com/doc/refman/5.6/en/lock-tables.html
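For the lucky-word table above, the table-lock variant could be sketched like this (the table name and the literal word are stand-ins for whatever your application fetched):

```sql
LOCK TABLES myTable WRITE;    -- other sessions now block on this table
SELECT lucky_word FROM myTable LIMIT 1;
DELETE FROM myTable WHERE lucky_word = 'cat';  -- 'cat' = the word just fetched
UNLOCK TABLES;                -- let the next session in
```

A WRITE lock on the whole table is coarser than the row lock from SELECT ... FOR UPDATE, so it serializes all access to the table, not just to the contested row.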
How about adding a datestamp to the table updated when that particular word is used?
You could then use the following pseudo sql...
select word from words where lastdate <> [today];
update words set lastdate = today where word = [word];
A quick method I used for a similar task:
1) create a table "unique_sequence" with just one field: id -> INT AUTO_INCREMENT
CREATE TABLE `erd`.`unique_sequence` (
`id` INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`));
2) when a user arrives on the site:
INSERT INTO unique_sequence VALUES();
SELECT word FROM lucky_word WHERE id = LAST_INSERT_ID();
Since the ID generated by LAST_INSERT_ID() is maintained in the server on a per-connection basis, it is multi-user safe...
... so we can be sure that every new user gets a unique ID that matches one in the lucky_word table.

SELECT and lock a row and then UPDATE

I have a script that selects a row from MySQL database.
Then updates this row. Like this:
$statement = $db->prepare("SELECT id, link from persons WHERE processing = 0");
$statement->execute();
$row = $statement->fetch();
$statement = $db->prepare("UPDATE persons SET processing = 1 WHERE id = :id");
$success = $statement->execute(array(':id' => $row['id']));
The script calls this PHP code multiple times simultaneously. And sometimes it SELECTs the row even though it should be "processing = 1", because another script call hit it at the exact same time.
How can I avoid this?
What you need to do is add some kind of lock here to prevent race conditions like the one you've created:
UPDATE persons SET processing=1 WHERE id=:id AND processing=0
That avoids double-claiming: check the affected-row count to see whether your UPDATE won the race (1 means you claimed the row, 0 means another session got there first).
To improve this even more, create a lock column you can use for claiming:
UPDATE persons
SET processing=:processing_uuid
WHERE processing IS NULL
LIMIT 1
This requires a VARCHAR, indexed processing column used for claiming, with a default of NULL. If the UPDATE reports one affected row, you've claimed a record and can go and work with it by using:
SELECT * FROM persons WHERE processing=:processing_uuid
Each time you try and claim, generate a new claim UUID key.
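If the table doesn't have such a claim column yet, it could be added like this (column size and index name are illustrative; 36 characters fits a canonical UUID string):

```sql
ALTER TABLE persons
  ADD COLUMN processing VARCHAR(36) DEFAULT NULL,  -- NULL = unclaimed
  ADD INDEX idx_processing (processing);           -- makes the claim lookup cheap
```

The index matters: both the claiming UPDATE and the follow-up SELECT filter on this column.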
Try using transactions for your queries. Read about them at the mysql dev site
You can wrap your code with:
$dbh->beginTransaction();
// ... your transactions here
$dbh->commit();
The PDO manual documents beginTransaction() and commit().
Use SELECT FOR UPDATE
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
eg
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
Wrap this in a transaction and the locks will be released when the transaction ends.
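Putting the two statements from the manual's example inside an explicit transaction, the whole unit becomes:

```sql
START TRANSACTION;
-- Exclusive lock on the row(s) read; held until COMMIT.
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
COMMIT;  -- releases the lock; a waiting session now sees the incremented value
```

Any concurrent session issuing the same SELECT ... FOR UPDATE waits at that statement, so no two sessions can read the same counter value.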

Incorrect data in MyISAM database due to concurrency

Problem
I have a webpage that does the following (the code is much simplified to show only the relevant parts):
mysql_query("insert into table1 (field1) values ('value')");
$last_id = mysql_insert_id();
$result = mysql_query("select * from table1 t inner join ... where id = $last_id");
write_a_file_using_result($result);
It happened, that the file was created using a different data set than what I found in the table row.
The only explanation I have is:
call1: page is called the first time with data set 1.
call1: data set 1 is inserted on connection 1 but is not yet committed to the table.
call2: page is called a second time with data set 2.
call2: data set 2 is inserted on connection 2 and mysql_insert_id returns the same value.
call1: the file is generated with data set 1.
call2: the file cannot be written, because it already exists.
Result: the file is generated with data set 1 while the table row contains data set 2.
Config
mysql 5.0.51b
The table:
CREATE TABLE `table1` (
`id` int(11) NOT NULL auto_increment,
(...)
Question
I know that MyISAM does not support transactions. But I really expected it to be impossible to insert two rows and get the same id back twice, so that one row can overwrite the other.
Is MyISAM unsafe to this point or is there another explanation that I overlook ?
Note
I know the mysql extension for php is outdated, but I did not yet rewrite the application.
Is MyISAM unsafe to this point
No. mysql_insert_id() is guaranteed to return the id generated by the most recent INSERT on the same connection.
or is there another explanation that I overlook ?
Most likely. Check your code.
Haven't heard about id issues in MyISAM.
You can try to set link identifier when calling last_insert_id, for example
$link = mysql_connect(...);
mysql_query("insert into table1 (field1) values ('value')",$link);
$last_id = mysql_insert_id($link);
$result = mysql_query("select * from table1 t inner join ... where id = $last_id",$link);
write_a_file_using_result($result);

Keep only 10 records per user

I run a points system on my site, so I need to log my users' various actions in the database. The problem is that I have too many users, and keeping all the records permanently may overload the server. Is there a way to keep only 10 records per user and automatically delete older entries? Does MySQL have a function for this?
Thanks in advance
You can add a trigger that takes care of removing old entries.
For instance,
DELIMITER //
CREATE DEFINER='root'@'localhost' TRIGGER afterMytableInsert AFTER INSERT ON MyTable
FOR EACH ROW
BEGIN
DELETE FROM MyTable WHERE user_id = NEW.user_id AND id NOT IN
(SELECT id FROM (SELECT id FROM MyTable WHERE user_id = NEW.user_id
ORDER BY action_time DESC LIMIT 10) AS newest);
END//
Just run an hourly cron job that deletes everything beyond the newest 10 records per user.
Before inserting a record you could check how many rows the user has first. If they have >= 10, delete the oldest one. Then insert the new one.
If your goal is to have the database ensure that for a given table there are never more than N rows per a given subkey (user) then the correct way to solve this will be either:
Use stored procedures to manage inserts in the table.
Use a trigger to delete older rows after an insert.
If you're already using stored procedures for data access, then modifying the insert procedure would make the most sense, otherwise a trigger will be the easiest solution.
Alternately if your goal is to periodically remove old data, then using a cron job to start a stored procedure to prune old data would make the most sense.
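A stored-procedure version of insert-and-prune could be sketched like this (the table and column names here are assumptions, not taken from the question):

```sql
DELIMITER //
CREATE PROCEDURE log_action(IN p_user_id INT, IN p_data VARCHAR(255))
BEGIN
  INSERT INTO MyTable (user_id, data, action_time)
  VALUES (p_user_id, p_data, NOW());
  -- Keep only the 10 newest rows for this user; the derived table works
  -- around MySQL's restriction on LIMIT inside an IN subquery.
  DELETE FROM MyTable
   WHERE user_id = p_user_id
     AND id NOT IN (SELECT id
                      FROM (SELECT id FROM MyTable
                             WHERE user_id = p_user_id
                             ORDER BY action_time DESC
                             LIMIT 10) AS newest);
END//
DELIMITER ;
```

Routing all inserts through the procedure keeps the pruning logic in one place and sidesteps the MySQL restriction that a trigger may not modify the table it fires on.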
When inserting a new record for a user, prune the old ones first. Note that MySQL's DELETE does not accept an offset in its LIMIT clause, so keep the newest nine with a subquery instead (don't forget the WHERE condition):
DELETE FROM tablename
WHERE userID = 'currentUserId'
AND id NOT IN (SELECT id FROM (SELECT id FROM tablename
WHERE userID = 'currentUserId' ORDER BY id DESC LIMIT 9) AS keep_rows);
After that you can insert the new data. This keeps the data to ten records for each user at all times.
INSERT INTO tablename VALUES(....)
TOP 10 is SQL Server syntax; in MySQL use ORDER BY ... LIMIT instead, wrapped in a derived table (MySQL rejects a LIMIT directly inside an IN subquery):
DELETE FROM Table
WHERE ID NOT IN
(
SELECT ID FROM (SELECT ID FROM Table WHERE USER_ID = 1
ORDER BY ID DESC LIMIT 10) AS keep_rows
)
AND USER_ID = 1

Is it possible to execute the two update queries in phpmyadmin together?

Is it possible to execute the two update queries in phpmyadmin together?
Like wise
UPDATE jos_menu SET home = 0 WHERE 1;
UPDATE jos_menu SET home = 1 WHERE id = 9;
Now can we copy both these queries together and Run it on phpmyadmin sql query panel?
will it be executed?
Yes, both queries will be executed. The only additional thing you might add is a transaction; with it you can be sure that both queries executed successfully:
START TRANSACTION;
UPDATE jos_menu SET home = 0 WHERE 1;
UPDATE jos_menu SET home = 1 WHERE id = 9;
COMMIT;
update jos_menu set home=case id when 9 then 1 else 0 end
this will update all rows, setting 1 to all that have id=9, and 0 to the rest
If you're not sure if some SQL will break your live site and you don't have a dev server, make a copy of the DB table and test it on that.
CREATE TABLE jos_menu_test LIKE jos_menu;
INSERT jos_menu_test SELECT * FROM jos_menu;
Based on @crozin's answer I ran the following queries (note that user variables in MySQL are prefixed with @, since # starts a comment):
START TRANSACTION;
SELECT id INTO @idTech FROM `team` WHERE abbr = 'D19';
DELETE FROM team_dayoff WHERE team_id = @idTech;
DELETE FROM team_layer_lease WHERE team_id = @idTech;
DELETE FROM team_product_linker WHERE team_id = @idTech;
DELETE FROM team WHERE id = @idTech;
COMMIT;
