I have a table where sensitive data is stored, and I need to ensure that only one session at a time can read/write a specific row.
My table has 2 columns:
id (int) primary
amount (int) index
So I want to lock not the whole table, but only one row,
something like
LOCK TABLEROWS `mytable` WRITE WHERE `id` = 1
I'm using PDO, and startTransaction won't prevent other sessions from reading/writing the row in the meantime.
I read the InnoDB documentation but couldn't get it to work.
EDIT:
$_PDO->beginTransaction();
$stmt = $_PDO->prepare('SELECT * FROM `currency` WHERE `id` = ? FOR UPDATE');
$stmt->execute([$userid]);
// maybe do an update here, depending on what we read
$_PDO->commit();
So that's all I need to do?
The example you show will cause other sessions doing SELECT...FOR UPDATE to wait for your COMMIT. The locks requested by SELECT...FOR UPDATE are exclusive locks, so only one session at a time can acquire the lock. Therefore if your session holds the lock, other sessions will wait.
You cannot block non-locking reads. Another session can run SELECT with no locking clause, and still read the data. But they can't update the data, nor can they request a locking read.
You could alternatively make each session request a lock on the table with LOCK TABLES, but you said you want row-level locks.
You can create your own custom locks with the GET_LOCK() function. This allows you to make a distinct lock for each user id. If you do this for all code that accesses the table, you don't need to use FOR UPDATE.
$lockName = 'currency' . (int) $userid;
$_PDO->beginTransaction();
$stmt = $_PDO->prepare("SELECT GET_LOCK(?, -1)");
$stmt->execute([$lockName]);
$stmt = $_PDO->prepare('SELECT * FROM `currency` WHERE `id` = ?');
$stmt->execute([$userid]);
// maybe do an update here, depending on what we read
$_PDO->commit();
$stmt = $_PDO->prepare("SELECT RELEASE_LOCK(?)");
$stmt->execute([$lockName]);
This depends on all client code cooperating. They all need to acquire the lock before they work on a given row. You can either use SELECT...FOR UPDATE or else you can use GET_LOCK().
But you can't block clients that want to do non-locking reads with SELECT.
Related
I have one table that is read at the same time by different threads.
Each thread must select 100 rows, execute some tasks on each row (unrelated to the database), then delete the selected rows from the table.
rows are selected using this query:
SELECT id FROM table_name FOR UPDATE;
My question is: How can I ignore (or skip) rows that were previously locked using a select statement in MySQL ?
I typically create a process_id column that is default NULL and then have each thread use a unique identifier to do the following:
UPDATE table_name SET process_id = #{process.id} WHERE process_id IS NULL LIMIT 100;
SELECT id FROM table_name WHERE process_id = #{process.id} FOR UPDATE;
That ensures that each thread selects a unique set of rows from the table.
Hope this helps.
It is not the best solution, since there is no way I know of to ignore locked rows, but I select a random row and try to obtain a lock.
START TRANSACTION;
SET @v1 = (SELECT myId FROM tests.table WHERE status IS NULL LIMIT 1);
SELECT * FROM tests.table WHERE myId = @v1 FOR UPDATE; # <- lock
I set a small lock wait timeout for the transaction; if the row is locked, the transaction aborts and I try another one. If I obtain the lock, I process the row. With bad luck, the row I picked was locked, then processed and released before my timeout expired, and I end up selecting a row that has already been processed. To handle that, I check a status field that my processes set: if the other transaction ended OK, that field tells me the work is already done and I don't process the row again.
Every other possible solution without transactions (e.g. setting another field if the row has no status, etc.) can easily produce race conditions and missed rows (e.g. one thread dies abruptly and its allocated data stays tagged forever; ref. comment here).
Hope it helps
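Worth noting for newer servers: MySQL 8.0 added SKIP LOCKED (and NOWAIT) for locking reads, which answers the original question directly by returning only rows not locked by other transactions:

```sql
-- MySQL 8.0+ only: rows locked by other transactions are silently skipped
START TRANSACTION;
SELECT id FROM table_name LIMIT 100 FOR UPDATE SKIP LOCKED;
-- ... process the returned rows, DELETE them ...
COMMIT;
```

On older versions, the process_id or status-field approaches above remain the practical options.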
I have a table that needs to be locked against inserts while still allowing updates.
function myfunction() {
$locked = mysql_result(mysql_query("SELECT locked FROM mylock"),0,0);
if ( $locked ) return false;
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=1");
mysql_query("UNLOCK TABLES");
/* I'm checking another table to see if a record doesn't exist already */
/* If it doesn't exist then I'm inserting that record */
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=0");
mysql_query("UNLOCK TABLES");
}
But this isn't enough: the function is called again from another script, inserts happen simultaneously from the 2 calls, and I can't have that because it causes duplicate records.
This is urgent, please help. I thought of using UNIQUE on the fields, but there are 2 fields (player1, player2), and the same pair of player IDs must not appear twice, even with the columns swapped.
Unwanted behavior:
Record A = ( Player1: 123 Player2: 456 )
Record B = ( Player1: 456 Player2: 123 )
I just noticed you suffer from a race condition in your code. Assuming there isn't an error (see my comments), two processes could both check and get a "not locked" result. The LOCK TABLES will serialize their access, but they'll both continue on thinking they hold the lock, and thus insert duplicate records.
You could rewrite it as this:
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=1 WHERE locked=0");
$have_lock = mysql_affected_rows() > 0;
mysql_query("UNLOCK TABLES");
if (!$have_lock ) return false;
I suggest not using locks at all. Instead, when inserting the data, do it like this:
mysql_query("INSERT IGNORE INTO my_table VALUES(<some values here>)");
if(mysql_affected_rows()>0)
{
// the data was inserted without error
$last_id = mysql_insert_id();
// add what you need here
}
else
{
// the data could not be inserted (because it already exists in the table)
// query the table to retrieve the data
mysql_query("SELECT * FROM my_table WHERE <some_condition>");
// add what you need here
}
When you add the IGNORE keyword to an INSERT statement, MySQL attempts the insert; if it fails because a record with the same primary or unique key is already in the table, it fails silently. mysql_affected_rows tells you how many records were inserted, so you can decide what to do.
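For the swapped-pair problem from the question, INSERT IGNORE only helps if the duplicate actually hits a unique key, so normalize the pair before inserting and put a UNIQUE index on (player1, player2). A minimal sketch; the helper name is my own:

```php
<?php
// Hypothetical helper (name is mine): normalize a player pair so the
// smaller ID always comes first. With rows stored in this canonical
// order, a UNIQUE index on (player1, player2) rejects (456, 123)
// as a duplicate of (123, 456).
function canonicalPair(int $a, int $b): array
{
    return $a <= $b ? [$a, $b] : [$b, $a];
}
```

Create the index once with ALTER TABLE my_table ADD UNIQUE (player1, player2), and always insert the result of canonicalPair($p1, $p2).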
You don't need table-level locking here; use row-level locking instead. With row-level locking, only the row being modified is locked. The usual alternatives are to lock the entire table for the duration of the modification, or to lock some subset of it; row-level locking simply shrinks that subset to the smallest number of rows that still ensures integrity.
In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.
If your problem is still not solved, memory size may be the issue. InnoDB stores its lock table in the main buffer pool, which means the number of locks you can hold at the same time is limited by the innodb_buffer_pool_size variable set when MySQL was started. By default MySQL leaves this at 8MB, which is pretty useless if you're doing anything with InnoDB on your server.
Luckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There's simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post's writing).
Before you adjust the variable, make sure your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server-wide variable, not a per-thread one, so it's shared between all connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won't use all of that up front: as MySQL finds more things to put in the buffer, memory usage gradually increases until it reaches 1GB. At that point, the oldest and least-used data gets pruned when new data needs to be cached.
Does the following code really disable concurrent execution?
LOCK TABLES codes WRITE;
INSERT INTO codes VALUES ();
SELECT mid FROM codes ORDER BY expired DESC LIMIT 0,1;
UNLOCK TABLES;
The PHP will execute this SQL. The PHP file will be requested by many users via HTTP. Would it really execute in isolation for every user?
mid is something which has to be unique for every user so I think I should use MySQL locks to achieve that.
If you have a table with an auto incremented key and you use mysql_insert_id just after insertion, it is guaranteed to be unique and it won't mix user threads (it fetches the ID on the connection you give). No need to lock the table.
http://php.net/manual/en/function.mysql-insert-id.php
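A sketch of the auto-increment approach using PDO (mysql_insert_id belongs to the old mysql_* API). SQLite in memory is used here only so the example runs standalone; lastInsertId() behaves the same per-connection way against MySQL:

```php
<?php
// Each connection sees only its own last insert id, so no table lock
// is needed to get a unique `mid` per request.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE codes (mid INTEGER PRIMARY KEY AUTOINCREMENT, expired INTEGER)');

$pdo->exec('INSERT INTO codes (expired) VALUES (0)');
$first = (int) $pdo->lastInsertId();

$pdo->exec('INSERT INTO codes (expired) VALUES (0)');
$second = (int) $pdo->lastInsertId();
```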
I have a MySQL table fg_stock. Most of the time there is concurrent access to this table. I used this code but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select=mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while($res=mysql_fetch_array($select))
{
$stock=$res['stock'];
$close_stock=$stock+$qty_in;
$update=mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are only accessing a specific row (WHERE Item='$item')? Note also that you took a READ lock, which blocks your own UPDATE; you would need a WRITE lock to modify the table. Chances are you are running the MyISAM storage engine for the table in question; look into using the InnoDB engine instead, as one of its strong points is row-level locking, so you don't need to lock the entire table.
Why do you need to lock your table at all?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for the unnecessary loop with its series of queries. Just guard against SQL injection, for example by running PHP's intval function on $qty_in (if it is an integer, of course).
And the concurrent-access problems probably only arise from non-optimized database work with an excessive number of queries.
P.S. Moreover, your example doesn't make sense: with UPDATE ... LIMIT 1 you never told MySQL which record to update, only to update one record with Item='$item'. On the next iteration the SAME record could be updated again, because MySQL doesn't distinguish between records it has already updated and those it hasn't touched yet.
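A runnable sketch of the single atomic UPDATE suggested above, using SQLite via PDO so it works standalone; the UPDATE statement itself is what you would run against MySQL (with a prepared statement instead of string interpolation):

```php
<?php
// One atomic statement replaces the whole SELECT/loop/UPDATE dance:
// the increment happens inside the database, so concurrent callers
// can't lose each other's updates.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE fg_stock (Item TEXT PRIMARY KEY, stock INTEGER)');
$pdo->exec("INSERT INTO fg_stock (Item, stock) VALUES ('widget', 5)");

$qtyIn = 3;
$stmt = $pdo->prepare('UPDATE fg_stock SET stock = stock + ? WHERE Item = ?');
$stmt->execute([$qtyIn, 'widget']);

$stock = (int) $pdo->query("SELECT stock FROM fg_stock WHERE Item = 'widget'")->fetchColumn();
```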
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work
with another table that is not locked. If you obtained the lock,
trying to write to a table that was not included in the lock statement
should generate an error.
You may want to consider an alternative solution. Instead of locking,
perform an update that includes the changed elements as part of the
where clause. If the data that you are changing has changed since you
read it, the update will "fail" and return zero rows modified. This
eliminates the table lock, and all the messy horrors that may come
with it, including deadlocks.
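The "changed elements in the WHERE clause" idea above is optimistic locking. A minimal sketch using SQLite via PDO so it runs standalone; the same statements work unchanged on MySQL:

```php
<?php
// Optimistic update: write back only if the row still holds the value
// we read, and use the affected-row count to detect a lost race.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE currency (id INTEGER PRIMARY KEY, amount INTEGER)');
$pdo->exec('INSERT INTO currency (id, amount) VALUES (1, 100)');

// Read the current value...
$amount = (int) $pdo->query('SELECT amount FROM currency WHERE id = 1')->fetchColumn();

// ...and update only if nothing changed since the read.
$stmt = $pdo->prepare('UPDATE currency SET amount = ? WHERE id = ? AND amount = ?');
$stmt->execute([$amount - 10, 1, $amount]);
$won = $stmt->rowCount() === 1; // 0 rows means someone changed it first: re-read and retry
```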
PHP, mysqli, and table locks?
I have multiple workers SELECTing and UPDATing rows.
id status
10 new
11 new
12 old
13 old
Worker selects a 'new' row and updates its status to 'old'.
What if two workers select same row at the same time?
I mean worker1 selects a new row, and before worker1 updates its status, worker2 selects the same row?
Should I SELECT and UPDATE in one query or is there another way?
You can use LOCK TABLES but sometimes I prefer the following solution (in pseudo-code):
// get 1 new row
$sql = "select * from table where status='new' limit 0, 1";
$row = mysql_query($sql);
// update it to old while making sure no one else has done that
$sql = "update table set status='old' where status='new' and id=row[id]";
mysql_query($sql);
// check
if (mysql_affected_rows() == 1)
// status was changed
else
// failed - someone else did it
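A runnable version of the claim-then-check pattern above, using SQLite via PDO so it works standalone; against MySQL the queries are identical:

```php
<?php
// Claim a 'new' row by flipping its status, with the old status in the
// WHERE clause so only one worker's UPDATE can match.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)');
$pdo->exec("INSERT INTO jobs (id, status) VALUES (10, 'new'), (11, 'new'), (12, 'old')");

// Pick a candidate row.
$id = (int) $pdo->query("SELECT id FROM jobs WHERE status = 'new' LIMIT 1")->fetchColumn();

// Claim it: the status = 'new' condition makes the claim atomic.
$stmt = $pdo->prepare("UPDATE jobs SET status = 'old' WHERE id = ? AND status = 'new'");
$stmt->execute([$id]);
$claimed = $stmt->rowCount() === 1; // false: another worker got there first, pick another row
```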
You could LOCK the table before your read, and unlock it after your write. This would eliminate the chance of two workers updating the same record at the same time.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
Depending on your database storage engine (InnoDB, MyISAM, etc.), you may be able to lock the table while someone is modifying it. That would prevent simultaneous actions on the same table.
You could put conditions in your PHP logic to imply a lock, like setting a status attribute on a row that would prevent a second user from performing an update. This would probably require querying the database before an update to make sure the row is not locked.