Locking an InnoDB row and getting a live version - PHP

I'm writing an online game with a section for sending troops. When two or more users on one account try to send the same movement, the troops get doubled.
I want to get a live version of the row from MySQL and prevent any read, write, or update on that row until I finish.
Is this actually possible? I only see SELECT ... FOR UPDATE and LOCK IN SHARE MODE in the InnoDB reference.
Any help is appreciated.

BEGIN;
SELECT ... FROM t ... WHERE ... FOR UPDATE;
...
UPDATE t ...;
COMMIT;
Others can read the rows from t, but they will either be delayed or deadlocked if they try to modify the row(s) touched by the SELECT.
Do you really need to prevent all reads for a given row? Please explain your scenario further.

See the MySQL documentation below for how to lock a particular row in InnoDB:
https://dev.mysql.com/doc/refman/5.5/en/innodb-locking.html

Related

MySQL/InnoDB transactions with table locks in InnoDB

I did a lot of research and found plenty of information about all the relevant topics. However, I am not confident that I understand how to put all this information together properly.
This application is written in PHP.
For queries I use PDO.
The MySQL database is configured as InnoDB.
What I need
SELECT ... FROM tableA;
// PHP looks at what comes back and does some logic.
INSERT INTO tableA ...;
INSERT INTO tableB ...;
Conditions:
The INSERTs need to be atomic. If one of them fails I want to roll back.
No reads and writes from/to tableA are allowed to happen between the SELECT and the INSERT from/into tableA.
This looks like a very simple problem to me, yet I am not able to figure out how to do it properly. So my question is:
What is the best approach?
This is an outline for my current plan, heavily simplified:
try {
SET autocommit = 0;
LOCK TABLES tableA WRITE, tableB WRITE;
SELECT ... FROM tableA;
INSERT INTO tableA ...;
INSERT INTO tableB ...;
COMMIT;
UNLOCK TABLES;
SET autocommit = 1;
}
catch {
ROLLBACK;
UNLOCK TABLES;
SET autocommit = 1;
}
I feel like there is a lot that could be done better, but I don't know how :/
Why do it like this?
I need some kind of transaction to be able to do a rollback if INSERTs fail.
I need to lock tableA to make sure that no other INSERTs or UPDATEs take place.
Transactions and table locks don't work well together
(https://dev.mysql.com/doc/refman/8.0/en/lock-tables-and-transactions.html)
I want to use autocommit as a standard throughout the rest of my application, which is why I set it back to "1" at the end.
I am really not sure about this, but I picked up somewhere that after locking a table, I can (from within the current connection) only query the tables I have locked, until I unlock them (this does not make sense to me). This is why I locked tableB too, although otherwise I wouldn't need to.
I am open to completely different approaches
I am open to any suggestion within the framework conditions PHP, MySQL, PDO and InnoDB.
Thank You!
Edit 1 (2018-06-01)
I feel like my problem/question needs some more clarification.
Starting point:
I have two tables, t1 and t2.
t1 has multiple columns of non-unique values.
The specifics of t2 are irrelevant for this problem.
What I want to do:
Step by step:
Select multiple columns and rows from t1.
In PHP analyse the retrieved data. Based on the results of this analysis put together a dataset.
INSERT parts of the dataset into t1 and parts of it into t2.
Additional information:
The INSERTs into the 2 tables must be atomic. This can be achieved using transactions.
No INSERTs from a different connection are allowed to occur between steps 1 and 3. This is very important, because every single INSERT into t1 has to happen with full awareness of the current state of the table. I'll describe this in more detail; I will leave t2 out of it for now, to make things easier to understand.
Imagine this sequence of events (connections con1 and con2):
con1: SELECT ... FROM t1 WHERE xyz;
con1: PHP processes the information.
con2: SELECT ... FROM t1 WHERE uvw;
con2: PHP processes the information.
con1: INSERT INTO t1 ...;
con2: INSERT INTO t1 ...;
So both connections see t1 in the same state. However, they select different information. Con1 takes the information gathered, does some logic with it and then INSERTs data into a new row in t1. Con2 does the same, but using different information.
The problem is this: Both connections INSERTed data based on calculations that did not take into account whatever the other connection inserted into t1, because this information wasn't there when they read from t1.
Con2 might have inserted a row into t1 that would have met the WHERE conditions of con1's SELECT statement. In other words: had con2 inserted its row earlier, con1 might have created completely different data to insert into t1. That is to say: the two INSERTs might have completely invalidated each other's inserts.
This is why I want to make sure that only one connection can work with the data in t1 at a time. No other connection is allowed to write, but also no other connection is allowed to read, until the current connection is done.
I hope this clarifies things a bit... :/
Thoughts:
My thoughts were:
I need to make the INSERTs into the 2 tables atomic. --> I will use a transaction for this. Something like this:
try {
$pdo->beginTransaction();
// INSERT INTO t1 ...
// INSERT INTO t2 ...
$pdo->commit();
}
catch (Exception $e) {
$pdo->rollBack();
throw $e;
}
I need to make sure no other connection writes to or reads from t1. This is where I decided that I need LOCK TABLES.
Assuming I had to use LOCK TABLES, I was confronted with the problem that LOCK TABLES is not transaction-aware, which is why I decided to go with the solution proposed here (https://dev.mysql.com/doc/refman/8.0/en/lock-tables-and-transactions.html) and also in multiple answers on Stack Overflow.
But I wasn't happy with how the code looked, which is why I came here to ask this (meanwhile rather lengthy) question.
Edit 2 (2018-06-01)
This process will not run often, so there is no significant need for high performance or efficiency. This, of course, also means that the chances of two of these processes interfering with each other are rather minute. Still, I'd like to make sure nothing can happen.
Case 1:
BEGIN;
INSERT ..
INSERT ..
COMMIT;
Other connections will not see the inserted rows until after the commit. That is, BEGIN...COMMIT made the two inserts "atomic".
If anything fails, you still need the try/catch to deal with it.
Do not use LOCK TABLES on InnoDB tables.
Don't bother with autocommit; BEGIN..COMMIT overrides it.
My statements apply to (probably) all frameworks. (Except that some do not have "try" and "catch".)
Case 2: Lock a row in anticipation of possibly modifying it:
BEGIN;
SELECT ... FROM t1 FOR UPDATE;
... work with the values SELECTed
UPDATE t1 ...;
COMMIT;
This keeps others away from the rows SELECTed until after the COMMIT.
Case 3: Sometimes IODKU is useful to do two things in a single atomic statement:
INSERT ...
ON DUPLICATE KEY UPDATE ...
instead of
BEGIN;
SELECT ... FOR UPDATE;
if no row found
INSERT ...;
else
UPDATE ...;
COMMIT;
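Sketched in PHP with PDO, the IODKU form might look like this. The `troop_counts` table, its unique key on (account_id, target_id), and the connection details are illustrative assumptions, not from the question:

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// One atomic statement: insert a new row, or, if the unique key
// (account_id, target_id) already exists, bump the existing row instead.
$stmt = $pdo->prepare(
    'INSERT INTO troop_counts (account_id, target_id, troops)
     VALUES (:account, :target, :troops)
     ON DUPLICATE KEY UPDATE troops = troops + VALUES(troops)'
);
$stmt->execute([':account' => 1, ':target' => 42, ':troops' => 100]);
```

Because the whole statement is atomic, two connections racing on the same key cannot both take the INSERT branch.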
Case 4: Classic banking example:
BEGIN;
UPDATE accounts SET balance = balance - 1000.00 WHERE id='me';
... What if crash occurs here? ...
UPDATE accounts SET balance = balance + 1000.00 WHERE id='you';
COMMIT;
If the system crashes between the two UPDATEs, the first update will be undone. This keeps the system from losing track of the funds transfer.
Case 5: Perhaps close to what the OP wants. It is mostly a combination of Cases 2 and 1.
BEGIN;
SELECT ... FROM t1 FOR UPDATE; -- see note below
... work with the values SELECTed
INSERT INTO t1 ...;
COMMIT;
Notes on Case 5: The SELECT..FOR UPDATE must include any rows that you don't want the other connection to see. This has the effect of delaying the other connection until this connection COMMITs. (Yes, this feels a lot like LOCK TABLES t1 WRITE.)
Case 6: The "processing" that needs to be inside the BEGIN..COMMIT will take too long. (Example: the typical online shopping cart.)
This needs a locking mechanism outside of InnoDB's transactions. One way (useful for a shopping cart) is to use a row in some extra table, and have everyone check it. Another way (more practical within a single connection) is to use GET_LOCK('foo') and its friends.
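A minimal sketch of the GET_LOCK approach, assuming PDO; the lock name 'raffle_processing' is purely illustrative:

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Try to acquire the named lock, waiting at most 0 seconds.
// GET_LOCK() returns 1 on success and 0 if another connection holds it.
$got = $pdo->query("SELECT GET_LOCK('raffle_processing', 0)")->fetchColumn();

if ($got == 1) {
    try {
        // ... long-running processing, outside any InnoDB transaction ...
    } finally {
        // Always release, even if the processing throws.
        $pdo->query("SELECT RELEASE_LOCK('raffle_processing')");
    }
} else {
    // Another connection is already doing the work; skip quietly.
}
```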
General Discussion
All of the above examples lock only the row(s) involved, not the entire table(s). This makes the action much less invasive, and allows for the system to handle much more activity.
Also, read about MVCC. This is a general technique used under the covers to let one connection see the values of the table(s) at some instant in time, even while other connections are modifying the table(s).
"Prevent inserts" -- With MVCC, if you start a SELECT, it is like getting a snapshot in time of everything you are looking at. You won't see the INSERTs until after you complete the transaction that the SELECT is in. You can have your cake and eat it, too. That is, it appears as if the inserts were blocked, but you get the performance benefit of them happening in parallel. Magic.

Check if transaction on a innoDB row is occurring?

If a database transaction is occurring on one thread, is there a way for other threads to check whether this transaction is already in progress before attempting the transaction? I know InnoDB has row-level locking, but I want the transaction not to be attempted at all if it's already occurring on another thread, instead of waiting for the lock to be released and then attempting it.
To make my question clearer, an explanation of what I am trying to do may help:
I am creating a simple raffle using PHP and an InnoDB table with MySQL. When a user loads the page to view the raffle, it checks the raffle's database row to see if its scheduled end time has passed and whether its "processed" column in the database is true or false.
If the raffle needs to be processed it will begin a database transaction which takes about 5 seconds before being committed and marked as "processed" in the database.
If multiple users load the page at around the same time, I worry that it will process the raffle more than once, which is not what I want. Ideally it would only attempt to process the raffle if no other threads are processing it; otherwise it would do nothing.
How would I go about doing this? Thanks.
You could implement table-level locking and handle any subsequent connections so that they either wait in a queue or fail quietly:
https://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
From the MySQL docs:
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
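As a hedged alternative to the table lock quoted above, the "fail quietly" behaviour can also be had without any locking, by claiming the raffle with one atomic UPDATE and checking the affected-row count. The `raffles` table and its columns are assumptions for illustration:

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$raffleId = 1; // illustrative

// Only one connection can flip processed from 0 to 1; the others match
// zero rows and fall through without doing anything.
$stmt = $pdo->prepare(
    'UPDATE raffles SET processed = 1
     WHERE id = :id AND processed = 0 AND end_time <= NOW()'
);
$stmt->execute([':id' => $raffleId]);

if ($stmt->rowCount() === 1) {
    // This connection won the claim; run the 5-second processing here.
} else {
    // Already claimed (or not due yet); do nothing.
}
```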

Properly locking my database during a script run

I have no knowledge of locking whatsoever. I have been looking through some MySQL documentation and can't fully understand how the whole process works. What I need is for the following events in my script to happen:
step 1) table user gets locked
step 2) my script selects two rows from table user
step 3) my script makes an update to table user
step 4) table user gets unlocked because the script is done
How do I go about this exactly? And what happens when another user runs this same script while the table is locked? Is there a way for the script to know when to proceed (when the table becomes unlocked)? I have looked into START TRANSACTION and SELECT ... FOR UPDATE, but the documentation is very unclear. Any help is appreciated. And yes, the table is InnoDB.
I believe what you are looking for is the SELECT ... FOR UPDATE syntax available for InnoDB tables. This will lock only the records you want to update. You do need to wrap it in a transaction.
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
For example, run your queries like this:
START TRANSACTION;
SELECT ... FOR UPDATE;
UPDATE ...;
COMMIT;
Eliminate step 2 by performing your select as part of your update call; then MySQL takes care of the rest. Only one write query can run at a time; the others will be queued behind it.
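In PDO, the four steps above could be sketched like this (the table and column names are placeholders, not from the question):

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$userId = 1; // illustrative

$pdo->beginTransaction();
try {
    // Locks the selected row(s) until COMMIT or ROLLBACK.
    $stmt = $pdo->prepare('SELECT balance FROM user WHERE id = :id FOR UPDATE');
    $stmt->execute([':id' => $userId]);
    $balance = $stmt->fetchColumn();

    // ... PHP logic based on the locked row ...

    $upd = $pdo->prepare('UPDATE user SET balance = :b WHERE id = :id');
    $upd->execute([':b' => $balance - 100, ':id' => $userId]);

    $pdo->commit();   // releases the row lock
} catch (Exception $e) {
    $pdo->rollBack(); // also releases the row lock
    throw $e;
}
```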

MYSQL table locking with PHP

I have a MySQL table fg_stock. Most of the time, concurrent access is happening on this table. I used this code but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select)) {
    $stock       = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row from the table (WHERE Item='$item')? Chances are you are running the MyISAM storage engine for the table in question; you should look into using the InnoDB engine instead, as one of its strong points is that it supports row-level locking, so you don't need to lock the entire table.
Why do you need to lock your table anyway?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for the unnecessary loop with its set of queries. Just try to avoid SQL injection by using PHP's intval function on $qty_in (if it is an integer, of course), for example.
And the concurrent access probably only happens because of non-optimized work with the database, with an excessive number of queries.
PS: moreover, your example does not make any sense, as MySQL could update the same record over and over in the loop. You did not tell MySQL which record exactly you want to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, as MySQL does not know the difference between already-updated records and those it has not touched yet.
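The same single-statement update with a prepared statement avoids the intval()/escaping concerns entirely. PDO is shown for illustration; the question used the old, long-deprecated mysql_* API:

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$item   = 'widget'; // illustrative values
$qty_in = 5;

// Atomic increment; bound parameters make SQL injection a non-issue here.
$stmt = $pdo->prepare('UPDATE fg_stock SET stock = stock + :qty WHERE Item = :item');
$stmt->execute([':qty' => $qty_in, ':item' => $item]);
```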
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work
with another table that is not locked. If you obtained the lock,
trying to write to a table that was not included in the lock statement
should generate an error.
You may want to consider an alternative solution. Instead of locking,
perform an update that includes the changed elements as part of the
where clause. If the data that you are changing has changed since you
read it, the update will "fail" and return zero rows modified. This
eliminates the table lock, and all the messy horrors that may come
with it, including deadlocks.
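A sketch of that optimistic pattern in PDO (column names assumed): include the value you originally read in the WHERE clause, and treat zero affected rows as "someone changed it since I read it":

```php
<?php
// Illustrative connection; credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$item = 'widget'; $oldStock = 10; $qty_in = 5; // illustrative values

// Succeeds only if nobody changed stock since we read $oldStock.
$stmt = $pdo->prepare(
    'UPDATE fg_stock SET stock = :new WHERE Item = :item AND stock = :old'
);
$stmt->execute([':new' => $oldStock + $qty_in, ':item' => $item, ':old' => $oldStock]);

if ($stmt->rowCount() === 0) {
    // Lost the race (or the row is gone): re-read and retry, or report a conflict.
}
```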
PHP, mysqli, and table locks?

Table [tablename] is not locked

I am writing a MySQL query that locks a table:
"LOCK TABLE table_1 WRITE"
After that I am executing some functions, and in one of them I execute another query on another table that I haven't locked:
"SELECT * FROM completely_different_table_2"
Then I get the following error message as a result:
Table 'completely_different_table_2' was not locked with LOCK TABLES
Indeed, MySQL is right to tell me that the table is not locked. But why does it throw an error? Does anyone have ideas on how I could solve this?
Thanks in advance.
You have to lock every table that you want to use until the LOCK is released. You can give completely_different_table_2 just a READ LOCK, which allows other processes to read the table while it is locked:
LOCK TABLES table_1 WRITE, completely_different_table_2 READ;
PS: MySQL has a reason to do so. If you request a LOCK, you want to freeze a consistent state of your data. If you read data from completely_different_table_2 inside your LOCK, the data written to table_1 will in some way depend on this other table. Therefore you don't want anyone to change this table during your LOCK, and you request a READ LOCK for the second table as well. If the data written to table_1 doesn't depend on the other table, simply don't query it until the LOCK is released.
