MySQL table locking with PHP

I have a MySQL table fg_stock. Most of the time, concurrent access is happening on this table. I used this code, but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select)) {
    $stock       = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?

"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row from the table (WHERE Item='$item')? Chances are you are running the MyISAM storage engine for the table in question. You should look into using the InnoDB engine instead, as one of its strong points is that it supports row-level locking, so you don't need to lock the entire table.

Why do you need to lock your table anyway?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for an unnecessary loop with a set of queries. Just try to avoid SQL injection by using PHP's intval() function on $qty_in (if it is an integer, of course), for example.
And, probably, the constant concurrent access only happens due to non-optimized work with the database, with an excessive number of queries.
PS: Moreover, your example does not make sense, as MySQL could update the same record on every iteration of the loop. You did not tell MySQL exactly which record you want to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, because MySQL does not know the difference between already-updated records and those it has not touched yet.
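As a concrete sketch of that single-statement approach, here is a version using mysqli and a prepared statement instead of the deprecated mysql_* API (connection credentials are placeholders):

// Placeholder credentials; adjust for your environment.
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
// One atomic UPDATE: InnoDB's row-level locking keeps concurrent
// increments consistent, and the bound parameters prevent SQL injection.
$stmt = $mysqli->prepare("UPDATE fg_stock SET stock = stock + ? WHERE Item = ?");
$stmt->bind_param('is', $qty_in, $item);
$stmt->execute();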

http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work
with another table that is not locked. If you obtained the lock,
trying to write to a table that was not included in the lock statement
should generate an error.
You may want to consider an alternative solution. Instead of locking,
perform an update that includes the changed elements as part of the
where clause. If the data that you are changing has changed since you
read it, the update will "fail" and return zero rows modified. This
eliminates the table lock, and all the messy horrors that may come
with it, including deadlocks.
PHP, mysqli, and table locks?
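As an illustrative sketch of that optimistic pattern, reusing the fg_stock table from the question above (the concrete values are made up):

-- Suppose the application previously read stock = 40 for this item.
-- The WHERE clause re-checks that value: if another session changed the
-- row in the meantime, zero rows are updated and the application retries.
UPDATE fg_stock SET stock = 42 WHERE Item = 'widget' AND stock = 40;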


MySQL/InnoDB transactions with table locks in InnoDB

I did a lot of research and I found a lot of information about all the relevant topics. However, I am not confident that I now understand how to put all this information together properly.
This application is written in PHP.
For queries I use PDO.
The MySQL database is configured as InnoDB.
What I need
SELECT ... FROM tableA;
// PHP looks at what comes back and does some logic.
INSERT INTO tableA ...;
INSERT INTO tableB ...;
Conditions:
The INSERTs need to be atomic. If one of them fails I want to roll back.
No reads and writes from/to tableA are allowed to happen between the SELECT and the INSERT from/into tableA.
This looks to me like a very simple problem. Yet I am not able to figure out how to do this properly. So my question is:
What is the best approach?
This is an outline for my current plan, heavily simplified:
try {
    SET autocommit = 0;
    LOCK TABLES tableA WRITE, tableB WRITE;
    SELECT ... FROM tableA;
    INSERT INTO tableA ...;
    INSERT INTO tableB ...;
    COMMIT;
    UNLOCK TABLES;
    SET autocommit = 1;
}
catch {
    ROLLBACK;
    UNLOCK TABLES;
    SET autocommit = 1;
}
I feel like there is a lot that could be done better, but I don't know how :/
Why do it like this?
I need some kind of transaction to be able to do a rollback if INSERTs fail.
I need to lock tableA to make sure that no other INSERTs or UPDATEs take place.
Transactions and table locks don't work well together
(https://dev.mysql.com/doc/refman/8.0/en/lock-tables-and-transactions.html)
I want to use autocommit as a standard throughout the rest of my application, which is why I set it back to "1" at the end.
I am really not sure about this, but I somewhere picked up that after locking a table, I can (from within the current connection) only query this table until I unlock it (this does not make sense to me). This is why I locked tableB too, although otherwise I wouldn't need to.
I am open for completely different approaches
I am open for any suggestion within the framework conditions PHP, MySQL, PDO and InnoDB.
Thank You!
Edit 1 (2018-06-01)
I feel like my problem/question needs some more clarification.
Starting point:
I have two tables, t1 and t2.
t1 has multiple columns of non-unique values.
The specifics of t2 are irrelevant for this problem.
What I want to do:
Step by step:
Select multiple columns and rows from t1.
In PHP analyse the retrieved data. Based on the results of this analysis put together a dataset.
INSERT parts of the dataset into t1 and parts of it into t2.
Additional information:
The INSERTs into the 2 tables must be atomic. This can be achieved using transactions.
No INSERTs from a different connection are allowed to occur between steps 1 and 3. This is very important, because every single INSERT into t1 has to occur with full awareness of the current state of the table. I'll describe this in more detail. I will leave t2 out of this for now, to make things easier to understand.
Imagine this sequence of events (connections con1 and con2):
con1: SELECT ... FROM t1 WHERE xyz;
con1: PHP processes the information.
con2: SELECT ... FROM t1 WHERE uvw;
con2: PHP processes the information.
con1: INSERT INTO t1 ...;
con2: INSERT INTO t1 ...;
So both connections see t1 in the same state. However, they select different information. Con1 takes the information gathered, does some logic with it and then INSERTs data into a new row in t1. Con2 does the same, but using different information.
The problem is this: Both connections INSERTed data based on calculations that did not take into account whatever the other connection inserted into t1, because this information wasn't there when they read from t1.
Con2 might have inserted a row into t1 that would have met the WHERE-conditions of con1's SELECT-statement. In other words: Had con2 inserted its row earlier, con1 might have created completely different data to insert into t1. This is to say: The two INSERTs might have completely invalidated each other's inserts.
This is why I want to make sure that only one connection can work with the data in t1 at a time. No other connection is allowed to write, but no other connection is allowed to read either, until the current connection is done.
I hope this clarifies things a bit... :/
Thoughts:
My thoughts were:
I need to make the INSERTs into the 2 tables atomic. --> I will use a transaction for this. Something like this:
try {
    $pdo->beginTransaction();
    // INSERT INTO t1 ...
    // INSERT INTO t2 ...
    $pdo->commit();
}
catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
I need to make sure no other connection writes to or reads from t1. This is where I decided that I need LOCK TABLES.
Assuming I had to use LOCK TABLES, I was confronted with the problem that LOCK TABLES is not transaction aware. Which is why I decided to go with the solution proposed here (https://dev.mysql.com/doc/refman/8.0/en/lock-tables-and-transactions.html) and also in multiple answers on stackoverflow.
But I wasn't happy with how the code looked, which is why I came here to ask this (meanwhile rather lengthy) question.
Edit 2 (2018-06-01)
This process will not run often, so there is no significant need for high performance and efficiency. This, of course, also means that the chances of two of these processes interfering with each other are rather minute. Still, I'd like to make sure nothing can happen.
Case 1:
BEGIN;
INSERT ..
INSERT ..
COMMIT;
Other connections will not see the inserted rows until after the commit. That is, BEGIN...COMMIT made the two inserts "atomic".
If anything fails, you still need the try/catch to deal with it.
Do not use LOCK TABLES on InnoDB tables.
Don't bother with autocommit; BEGIN..COMMIT overrides it.
My statements apply to (probably) all frameworks. (Except that some do not have "try" and "catch".)
Case 2: Lock a row in anticipation of possibly modifying it:
BEGIN;
SELECT ... FROM t1 FOR UPDATE;
... work with the values SELECTed
UPDATE t1 ...;
COMMIT;
This keeps others away from the rows SELECTed until after the COMMIT.
Case 3: Sometimes IODKU (INSERT ... ON DUPLICATE KEY UPDATE) is useful to do two things in a single atomic statement:
INSERT ...
ON DUPLICATE KEY UPDATE ...
instead of
BEGIN;
SELECT ... FOR UPDATE;
if no row found
INSERT ...;
else
UPDATE ...;
COMMIT;
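A concrete instance of the IODKU form, assuming a hypothetical t1 with a unique key on id:

-- One atomic statement: insert the row, or bump the counter if the key exists.
INSERT INTO t1 (id, counter) VALUES (42, 1)
ON DUPLICATE KEY UPDATE counter = counter + 1;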
Case 4: Classic banking example:
BEGIN;
UPDATE accounts SET balance = balance - 1000.00 WHERE id='me';
... What if crash occurs here? ...
UPDATE accounts SET balance = balance + 1000.00 WHERE id='you';
COMMIT;
If the system crashes between the two UPDATEs, the first update will be undone. This keeps the system from losing track of the funds transfer.
Case 5: Perhaps close to what the OP wants. It is mostly a combination of Cases 2 and 1.
BEGIN;
SELECT ... FROM t1 FOR UPDATE; -- see note below
... work with the values SELECTed
INSERT INTO t1 ...;
COMMIT;
Notes on Case 5: The SELECT..FOR UPDATE must include any rows that you don't want the other connection to see. This has the effect of delaying the other connection until this connection COMMITs. (Yes, this feels a lot like LOCK TABLES t1 WRITE.)
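Since the OP is using PDO, a rough sketch of Case 5 might look like this (table, column, and variable names are placeholders, and $pdo is assumed to be an open connection to an InnoDB schema):

$pdo->beginTransaction();
// Lock every row the analysis depends on; a competing connection running
// the same SELECT ... FOR UPDATE blocks here until we COMMIT.
$stmt = $pdo->prepare("SELECT a, b FROM t1 WHERE xyz = ? FOR UPDATE");
$stmt->execute([$param]);
$rows = $stmt->fetchAll();
// ... PHP analysis of $rows ...
$pdo->prepare("INSERT INTO t1 (a, b) VALUES (?, ?)")->execute([$newA, $newB]);
$pdo->commit(); // releases the row locks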
Case 6: The "processing" that needs to be inside the BEGIN..COMMIT will take too long. (Example: the typical online shopping cart.)
This needs a locking mechanism outside of InnoDB's transactions. One way (useful for shopping carts) is to use a row in some extra table, and have everyone check it. Another way (more practical within a single connection) is to use GET_LOCK('foo') and its friends.
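A minimal GET_LOCK sketch (the lock name and timeout are arbitrary):

SELECT GET_LOCK('cart_12345', 10);   -- returns 1 if acquired within 10 seconds
-- ... long-running work, possibly spanning several transactions ...
SELECT RELEASE_LOCK('cart_12345');   -- returns 1 if we held the lock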
General Discussion
All of the above examples lock only the row(s) involved, not the entire table(s). This makes the action much less invasive, and allows the system to handle much more activity.
Also, read about MVCC. This is a general technique used under the covers to let one connection see the values of the table(s) at some instant in time, even while other connections are modifying the table(s).
"Prevent inserts" -- With MVCC, if you start a SELECT, it is like getting a snapshot in time of everything you are looking at. You won't see the INSERTs until after you complete the transaction that the SELECT is in. You can have your cake and eat it, too. That is, it appears as if the inserts were blocked, but you get the performance benefit of them happening in parallel. Magic.

MySQL Table with frequent TRUNCATE, UPDATE, & SELECT

I am building an application that requires a MySQL table to be emptied and refilled with fresh data every minute. At the same time, it is expected that the table will receive anywhere from 10-15 SELECT statements per second constantly. The SELECT statements should in general be very fast (selecting 10-50 medium length strings every time). A few things I'm worried about:
Is there the potential for a SELECT query to run in between the TRUNCATE and UPDATE queries so as to return 0 rows? Do I need to lock the table when executing the TRUNCATE-UPDATE query pair?
Are there any significant performance issues I should worry about regarding this setup?
There most probably is a better way to achieve your goal, but here's a possible answer to your question anyway: you can encapsulate queries that are meant to be executed together in a transaction. Off the top of my head, something like
START TRANSACTION;
TRUNCATE foo;
INSERT INTO foo ...;
COMMIT;
EDIT: The above part is plain wrong: TRUNCATE causes an implicit commit, so it cannot be rolled back inside a transaction. See Philip Devine's comment. Thanks.
Regarding the performance question: Repeatedly connecting to the server can be costly. If you have a persistent connection, you should be fine. You can save little bits here and there by executing multiple queries in a batch or using Prepared Statements.
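For example, a persistent connection with PDO is a single constructor flag (DSN and credentials are placeholders):

// PDO::ATTR_PERSISTENT reuses an existing connection across requests
// instead of opening a new one every time.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass',
               [PDO::ATTR_PERSISTENT => true]);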
Why do you need to truncate it every minute? Yes, that will result in your users having no rows returned. Just update the rows instead of truncating and inserting.
A second option is to insert the new values into a new table, then rename the two tables like so:
RENAME TABLE tbl_name TO new_tbl_name
[, tbl_name2 TO new_tbl_name2]
Then truncate the old table.
Then your users see zero downtime. The TRUNCATE in the other answer ignores transactions and happens immediately, so don't do that!
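Sketched out (table and column names are hypothetical), the swap pattern looks like this:

CREATE TABLE foo_new LIKE foo;                -- empty copy with the same schema
INSERT INTO foo_new (col) VALUES ('fresh');   -- load the new data off to the side
RENAME TABLE foo TO foo_old, foo_new TO foo;  -- atomic swap: readers never see an empty table
TRUNCATE foo_old;                             -- recycle or DROP the old copy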

Disabling MySQL Concurrency

Does the following code really disable concurrent execution?
LOCK TABLES codes WRITE;
INSERT INTO codes VALUES ();
SELECT mid FROM codes ORDER BY expired DESC LIMIT 0,1;
UNLOCK TABLES;
The PHP will execute the SQL. The PHP file will be requested by many users via HTTP. Would it really execute in isolation for every user?
mid is something which has to be unique for every user, so I think I should use MySQL locks to achieve that.
If you have a table with an auto-incremented key and you use mysql_insert_id just after insertion, the value is guaranteed to be unique, and it won't mix up user threads (it fetches the ID on the connection you give it). No need to lock the table.
http://php.net/manual/en/function.mysql-insert-id.php
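A minimal sketch using the mysqli equivalent (the mysql_* API was removed in PHP 7; credentials are placeholders):

// Each connection has its own insert_id, so concurrent requests
// cannot see each other's values.
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$mysqli->query("INSERT INTO codes () VALUES ()");
$mid = $mysqli->insert_id;  // the AUTO_INCREMENT value just generated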

Locking row for two updates

I need to do two updates to rows, but I need to make sure they are done together and that no other query from another user can interfere with them. I know about SELECT...FOR UPDATE, but I imagine after the first update the row will of course be unlocked, which means someone could interfere with the second update. If someone else updates that row first, the update will work but will mess up the data. Is there any way to ensure that the two updates happen how they are supposed to? I have been told about transactions, but as far as I know they are only good for making sure the two updates actually happen, not whether they happen "together", unless I am mistaken and the rows are locked until the transaction is committed?
Here are the queries:
SELECT z FROM table WHERE id='$id'
UPDATE table SET x=x+2 WHERE x>z
UPDATE table SET y=y+2 WHERE y>z
I made a mistake and didn't give full information. That was my fault. I have updated the queries. The issue I have is that z can be updated as well. If z is updated after the SELECT but before the other two updates, the data can get messed up. Does doing the transaction BEGIN/COMMIT work for that?
Learn about TRANSACTION
http://dev.mysql.com/doc/refman/5.0/en/commit.html
[... connect ...]
mysql_query("BEGIN");
$query1 = mysql_query('UPDATE `table` SET x=x+2 WHERE x>z');
$query2 = mysql_query('UPDATE `table` SET y=y+2 WHERE y>z');
if ($query1 && $query2) {
    mysql_query("COMMIT");
    echo 'Save done. All UPDATEs done.';
} else {
    mysql_query("ROLLBACK");
    echo 'Error saving. All UPDATEs reverted, and not done.';
}
There are various levels of transaction, but basically, as per the ACID properties, you should expect that within a given transaction all reads and updates are performed consistently, meaning the data is kept in a valid state. More importantly, a transaction is isolated: work being done in another transaction (thread) will not interfere with your transaction (your grouping of SELECT and UPDATE statements). This allows you to take the broad assumption that you are the only thread of execution within the system, allowing you to commit that group of work (atomically) or roll it all back.
Each database may handle the semantics differently (some may lock rows or columns, some may re-order, some may serialize), but that's the beauty of a declarative database interface: you worry about the work you want to get done.
As stated, on MySQL InnoDB is transactional and will support what is mentioned above, so ensure your tables are organized with InnoDB. Other engines (e.g. MyISAM) are not transactional and thus will force you to manage those transactional semantics (locking) manually.
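Putting that together for the queries in the question, a hedged sketch in the same legacy mysql_* style as the answer above (it assumes InnoDB and that the selected z value is interpolated into the updates):

mysql_query("BEGIN");
// FOR UPDATE keeps the row holding z locked until COMMIT, so no other
// connection can change z between the SELECT and the two UPDATEs.
$res = mysql_query("SELECT z FROM `table` WHERE id='$id' FOR UPDATE");
$z   = mysql_result($res, 0);
mysql_query("UPDATE `table` SET x=x+2 WHERE x>$z");
mysql_query("UPDATE `table` SET y=y+2 WHERE y>$z");
mysql_query("COMMIT");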
One approach would be to lock the entire table:
LOCK TABLE `table` WRITE;
SELECT z FROM `table` WHERE id='$id';
UPDATE `table` SET x=x+2 WHERE x>z;
UPDATE `table` SET y=y+2 WHERE y>z;
UNLOCK TABLES;
This will prevent other sessions from writing to, and reading from, the `table` table during the SELECT and UPDATEs.
Whether this is an appropriate solution depends on how acceptable it is for sessions to wait to read from or write to the table.

Table [tablename] is not locked

I am writing a MySQL query that locks a table:
"LOCK TABLE table_1 WRITE"
After that I am executing some functions, and in one of those functions I am executing another query, on another table that I haven't locked:
"SELECT * FROM completely_different_table_2"
Then I get the following error message as a result:
Table 'completely_different_table_2' was not locked with LOCK TABLES
Indeed, MySQL is right to tell me that the table is not locked. But why does it throw an error? Does anyone have any ideas how I could solve this?
Thanks in advance.
You have to lock every table that you want to use until the LOCK is released. You can give completely_different_table_2 only a READ LOCK, which allows other processes to read this table while it is locked:
LOCK TABLES table_1 WRITE, completely_different_table_2 READ;
PS: MySQL has a reason to do so. If you request a LOCK, you want to freeze a consistent state of your data. If you read data from completely_different_table_2 inside your LOCK, the data you write to table_1 will in some way depend on this other table. Therefore you don't want anyone to change this table during your LOCK, and you request a READ LOCK for this second table as well. If the data written to table_1 doesn't depend on the other table, simply don't query it until the LOCK is released.
