I have a table that needs to be locked against inserts, but it also needs to remain updatable while inserts are prevented.
function myfunction() {
    $locked = mysql_result(mysql_query("SELECT locked FROM mylock"), 0, 0);
    if ($locked) return false;

    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=1");
    mysql_query("UNLOCK TABLES");

    /* Here I check another table to see whether a record already exists, */
    /* and insert that record if it does not. */

    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=0");
    mysql_query("UNLOCK TABLES");
}
But this isn't enough: the function is called again from another script, inserts happen simultaneously from the two calls, and I can't have that because it's causing duplicate records.
This is urgent, please help. I thought of using UNIQUE on the fields, but there are 2 fields (player1, player2), and the same pair of player IDs must not appear twice, even with the two columns swapped.
Unwanted behavior:
Record A = ( Player1: 123 Player2: 456 )
Record B = ( Player1: 456 Player2: 123 )
I just noticed you suffer from a race condition in your code. Assuming there isn't an error (see my comments), two processes could both check and get a "not locked" result. The LOCK TABLES will serialize their access, but both will continue on believing they hold the lock, and thus insert duplicate records.
You could rewrite it like this:
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=1 WHERE locked=0");
$have_lock = mysql_affected_rows() > 0;
mysql_query("UNLOCK TABLES");
if (!$have_lock) return false;
I suggest not using locks at all. Instead, when inserting the data, do it like this:
mysql_query("INSERT IGNORE INTO my_table VALUES(<some values here>)");
if(mysql_affected_rows()>0)
{
// the data was inserted without error
$last_id = mysql_insert_id();
// add what you need here
}
else
{
// the data could not be inserted (because it already exists in the table)
// query the table to retrieve the data
mysql_query("SELECT * FROM my_table WHERE <some_condition>");
// add what you need here
}
When you add the IGNORE keyword to an INSERT statement, MySQL attempts the insert; if it fails because a row with the same primary key (or other unique key) already exists, the error is downgraded to a warning and nothing is inserted. mysql_affected_rows() then tells you how many rows were actually inserted, so you can decide what to do.
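For the player1/player2 case from the question, INSERT IGNORE only helps if the table has a unique key that treats (123, 456) and (456, 123) as the same pair. One way to get that is to always store the smaller ID first, sketched below with a made-up table name matches, since the question doesn't give one:

-- Hypothetical schema; the real table may differ.
CREATE TABLE matches (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    player1 INT NOT NULL,
    player2 INT NOT NULL,
    UNIQUE KEY uniq_pair (player1, player2)  -- one row per ordered pair
);

-- Normalize the pair so (456, 123) becomes (123, 456),
-- then let the unique key reject duplicates silently.
INSERT IGNORE INTO matches (player1, player2)
VALUES (LEAST(123, 456), GREATEST(123, 456));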
You don't need table-level locking here; better to use row-level locking. Row-level locking means only the row being modified is locked. The usual alternatives are to lock the entire table for the duration of the modification, or to lock some subset of the table; row-level locking simply reduces that subset to the smallest number of rows that still ensures integrity.
In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.
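In practice that means you can do the duplicate check and the insert inside one transaction and lock only the rows you look at. A minimal sketch in the same old mysql_* style used above, assuming an InnoDB table, the default REPEATABLE READ isolation level, and an index on the checked columns (the placeholders mirror the earlier answer):

mysql_query("BEGIN");
// FOR UPDATE locks only the matching rows (and the gap they would occupy),
// so a concurrent session blocks here instead of inserting a duplicate.
$result = mysql_query("SELECT id FROM my_table WHERE <some_condition> FOR UPDATE");
if (mysql_num_rows($result) == 0) {
    mysql_query("INSERT INTO my_table VALUES (<some values here>)");
}
mysql_query("COMMIT");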
If your problem is still not solved, memory size may be the issue. InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can hold at the same time is limited by the innodb_buffer_pool_size variable set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you're doing anything with InnoDB on your server.
Luckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There's simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post's writing).
Before you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server-wide setting, not a per-thread one, so it's shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won't use all of that up front; as MySQL finds more things to put in the buffer, memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least-used data starts to get pruned when new data needs to be cached.
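For example, in my.cnf (the 1GB figure is just an illustration; size it to your server's RAM):

[mysqld]
# Shared, server-wide buffer pool; changing it requires a mysqld restart.
innodb_buffer_pool_size = 1G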
Related
I have a table where sensitive data is stored and I need to ensure that only one session at a time is able to read/write a specific row.
My table has 2 columns:
id (int) primary
amount (int) index
so I want to lock the table but only one row
something like
LOCK TABLEROWS `mytable` WRITE WHERE `id` = 1
I'm using PDO, and startTransaction won't prevent other sessions from reading/writing during that time.
I read the InnoDB documentation but didn't get it to run.
EDIT:
$_PDO->exec('START TRANSACTION');
$_PDO->query('SELECT * FROM `currency` WHERE `id` = '.$userid.' FOR UPDATE');
// maybe do an update here or not, depending on a check
$_PDO->exec('COMMIT');
So that's all I need to do?
The example you show will cause other sessions doing SELECT...FOR UPDATE to wait for your COMMIT. The locks requested by SELECT...FOR UPDATE are exclusive locks, so only one session at a time can acquire the lock. Therefore if your session holds the lock, other sessions will wait.
You cannot block non-locking reads. Another session can run SELECT with no locking clause, and still read the data. But they can't update the data, nor can they request a locking read.
You could alternatively make each session request a lock on the table with LOCK TABLES, but you said you want locks on a row scale.
You can create your own custom locks with the GET_LOCK() function. This allows you to make a distinct lock for each user id. If you do this for all code that accesses the table, you don't need to use FOR UPDATE.
$lockName = 'currency' . (int) $userid;
$_PDO->beginTransaction();
$stmt = $_PDO->prepare("SELECT GET_LOCK(?, -1)");
$stmt->execute([$lockName]);
$stmt = $_PDO->prepare('SELECT * FROM `currency` WHERE `id` = ?');
$stmt->execute([$userid]);
// maybe do an update here or not, depending on a check
$_PDO->commit();
$stmt = $_PDO->prepare("SELECT RELEASE_LOCK(?)");
$stmt->execute([$lockName]);
This depends on all client code cooperating. They all need to acquire the lock before they work on a given row. You can either use SELECT...FOR UPDATE or else you can use GET_LOCK().
But you can't block clients that want to do non-locking reads with SELECT.
I have a MySQL database (InnoDB engine) where I lock a few tables before doing some work. According to the documentation:
"The correct way to use LOCK TABLES and UNLOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly."
So I'm doing (pseudocode):
mysqli_query("SET autocommit=0");
mysqli_query("LOCK TABLES table1 WRITE, table2 READ ...");
mysqli_query("SOME SELECTS AND INSERTS HERE");
mysqli_query("COMMIT");
mysqli_query("UNLOCK TABLES");
Now, should I also do this:
mysqli_query("SET autocommit=1");
According to the documentation again,
"When you call LOCK TABLES, InnoDB internally takes its own table lock, and MySQL takes its own table lock. InnoDB releases its internal table lock at the next commit, but for MySQL to release its table lock, you have to call UNLOCK TABLES. You should not have autocommit = 1, because then InnoDB releases its internal table lock immediately after the call of LOCK TABLES, and deadlocks can very easily happen. InnoDB does not acquire the internal table lock at all if autocommit = 1, to help old applications avoid unnecessary deadlocks."
I think the documentation is a bit ambiguous on this point. As I interpret it, you shouldn't use SET autocommit=1 in place of UNLOCK TABLES.
However, there shouldn't be any harm in doing it AFTER the tables have been unlocked?
But I'm still unsure whether it's necessary. I have a single SELECT running in the same script after the COMMIT, and it appears to autocommit even if I don't SET autocommit=1. Why?
I have a lot of console applications that perform different tasks. I'm fetching a unique task for each of them from a PHP script:
$mysqli->autocommit(FALSE);
$result = $mysqli->query("SELECT id, task FROM queue WHERE locked = 0 LIMIT 1 FOR UPDATE;");
while ($row = $result->fetch_assoc()) {
    $mysqli->query('UPDATE queue SET locked = 1 WHERE id="'.$row['id'].'";');
    $mysqli->commit();
    $response["response"]["task"] = $row["task"];
}
$mysqli->close();
echo json_encode($response);
Sometimes I get a duplicate task, and sometimes "Deadlock found when trying to get lock; try restarting transaction".
What am I doing wrong?
UPD: adding an index on the "locked" column solved the problem.
From the MySQL documentation How to Minimize and Handle Deadlocks:
Add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks.
Adding an index to the locked column should solve this. Without it, the SELECT query has to scan through the table, looking for a row with locked = 0, and all the rows it steps through must be locked.
If you add an index to the column in the WHERE clause, it can go directly to that record and lock it. No other records are locked, so the possibility of deadlock is reduced.
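With the queue table from the question, that's a one-line fix (the index name idx_locked is arbitrary):

ALTER TABLE queue ADD INDEX idx_locked (locked);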
I need to do two updates to rows, and I need to make sure they are done together, with no query from another user interfering with them. I know about SELECT...FOR UPDATE, but I imagine the row will be unlocked after the first update, which means someone could interfere with the second one. If someone else updates the row in between, the updates will still work but will mess up the data. Is there any way to ensure that the two updates happen the way they are supposed to? I have been told about transactions, but as far as I know they only guarantee that the two updates both happen, not that they happen "together", unless I am mistaken and the rows stay locked until the transaction is committed?
Here are the queries:
SELECT z FROM table WHERE id='$id'
UPDATE table SET x=x+2 WHERE x>z
UPDATE table SET y=y+2 WHERE y>z
I made a mistake and didn't give full information; that was my fault. I have updated the queries. The issue is that z can be updated as well: if z is updated after the SELECT but before the two UPDATEs, the data can get messed up. Does the transaction BEGIN/COMMIT approach handle that?
Learn about TRANSACTION
http://dev.mysql.com/doc/refman/5.0/en/commit.html
[... connect ...]
mysql_query("BEGIN");
$query1 = mysql_query('UPDATE table SET x=x+2 WHERE x>y');
$query2 = mysql_query('UPDATE table SET y=y+2 WHERE y>y');
if($query1 && $query2) {
mysql_query("COMMIT");
echo 'Save Done. All UPDATES done.';
} else {
mysql_query("ROLLBACK");
echo 'Error Save. All UPDATES reverted, and not done.';
}
There are various levels of transaction isolation, but basically, per the ACID properties, you should expect that within a given transaction all reads and updates are performed consistently, meaning the data is kept in a valid state. More importantly, a transaction is isolated: work being done in another transaction (thread) will not interfere with your transaction (your grouping of SELECT and UPDATE statements). This lets you work under the broad assumption that you are the only thread of execution in the system, and then either commit that group of work atomically or roll it all back.
Each database may handle the semantics differently (some may lock rows or columns, some may re-order, some may serialize) but that's the beauty of a declarative database interface: you worry about the work you want to get done.
As stated, on MySQL the InnoDB engine is transactional and supports everything mentioned above, so make sure your tables use InnoDB; non-transactional engines (e.g. MyISAM) will force you to manage those transactional semantics (locking) manually.
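To also cover the edit about z changing between the SELECT and the UPDATEs, lock the row that holds z for the whole transaction. A rough sketch in the same old mysql_* style, assuming z is meant to be read from the row matching $id, as in the question's first query:

mysql_query("BEGIN");
// FOR UPDATE locks the row that holds z, so no other session
// can change z until we commit.
$z = mysql_result(mysql_query("SELECT z FROM `table` WHERE id='$id' FOR UPDATE"), 0, 0);
mysql_query("UPDATE `table` SET x=x+2 WHERE x>'$z'");
mysql_query("UPDATE `table` SET y=y+2 WHERE y>'$z'");
mysql_query("COMMIT");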
One approach would be to lock the entire table:
LOCK TABLE `table` WRITE;
SELECT z FROM `table` WHERE id='$id';
UPDATE `table` SET x=x+2 WHERE x>z;
UPDATE `table` SET y=y+2 WHERE y>z;
UNLOCK TABLES;
This will prevent other sessions from writing to, or reading from, `table` during the SELECT and UPDATEs.
Whether this is an appropriate solution depends on how acceptable it is for other sessions to wait to read or write the table.
I have a MySQL table fg_stock. Most of the time concurrent access is happening in this table. I used this code, but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select))
{
    $stock = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when you are clearly accessing only a specific row (WHERE Item='$item')? Chances are you are using the MyISAM storage engine for the table in question; you should look into the InnoDB engine instead, as one of its strong points is row-level locking, so you don't need to lock the entire table.
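Converting the table is a single statement (it rebuilds the table, so back it up first and expect it to take a while on large tables):

ALTER TABLE fg_stock ENGINE=InnoDB;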
Why do you need to lock your table at all?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for the unnecessary loop with its set of queries. Just take care to avoid SQL injection, for example by applying PHP's intval() function to $qty_in (if it is an integer, of course).
And the concurrent-access trouble probably only happens because of non-optimized work with the database, with an excessive number of queries.
PS: moreover, your example doesn't make sense, because MySQL could update the same record on every iteration of the loop. You never told MySQL exactly which record to update, only to update one record with Item='$item'; on the next iteration the SAME record could be updated again, since MySQL cannot tell the difference between records it has already updated and those it hasn't touched yet.
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also, from another question:

Troubleshooting: You can test for table lock success by trying to work with another table that is not locked. If you obtained the lock, trying to write to a table that was not included in the lock statement should generate an error.

You may want to consider an alternative solution. Instead of locking, perform an update that includes the changed elements as part of the WHERE clause. If the data that you are changing has changed since you read it, the update will "fail" and return zero rows modified. This eliminates the table lock, and all the messy horrors that may come with it, including deadlocks.
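That last suggestion is usually called optimistic locking. A rough sketch against the fg_stock table from this question; $old_stock is a hypothetical variable holding the value you originally read:

// The WHERE clause re-checks the value we read earlier, so the UPDATE
// only succeeds if no one else changed the row in the meantime.
mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' AND stock='$old_stock'");
if (mysql_affected_rows() == 0) {
    // someone else won the race: re-read the row and retry
}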
PHP, mysqli, and table locks?