Concurrently getting tasks from a MySQL table - php

I have a lot of console applications that perform different tasks. Each one gets a unique task from a PHP script:
$mysqli->autocommit(FALSE);
$result = $mysqli->query("SELECT id, task FROM queue WHERE locked = 0 LIMIT 1 FOR UPDATE;");
while ($row = $result->fetch_assoc()) {
    $mysqli->query('UPDATE queue SET locked = 1 WHERE id="'.$row['id'].'";');
    $mysqli->commit();
    $response["response"]["task"] = $row["task"];
}
$mysqli->close();
echo json_encode($response);
Sometimes I get a duplicate task, and sometimes "Deadlock found when trying to get lock; try restarting transaction".
What am I doing wrong?
UPD: setting an index on the "locked" column solved the problem

From the MySQL documentation How to Minimize and Handle Deadlocks:
Add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks.
Adding an index to the locked column should solve this. Without it, the SELECT query has to scan through the table looking for a row with locked = 0, and every row it steps through gets locked.
If you add an index to the column in the WHERE clause, the query can go directly to the matching record and lock only it. No other records are locked, so the possibility of deadlock is reduced.
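A sketch of that fix, assuming the queue table from the question above:

```sql
-- lets SELECT ... FOR UPDATE seek straight to a candidate row
-- instead of scanning (and locking) its way through the table
ALTER TABLE queue ADD INDEX idx_locked (locked);
```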

Related

How to prevent selection of the row while it is being handled?

I have a MySQL (InnoDB) table with the column is_locked, which shows the current state of the record (whether it is being handled by the system now, or not).
On the other hand, I have many nodes that perform SELECT * FROM table_name WHERE is_locked = 0 and then handle the rows they get from this table.
In my code I do this:
The system takes a row from the DB (SELECT * FROM table_name WHERE is_locked = 0)
The system locks the row with UPDATE table_name SET is_locked = 1 WHERE id = <id>
Problem:
The nodes work very fast, so all of them may get the same row before the first of them updates the row and sets is_locked to 1.
I found out about LOCKING tables, but I don't think that is the right way.
Can anybody tell me, how to handle such cases?
I recommend two things:
Limit your select to one row; since you're dealing with concurrency issues, it is better to take smaller "bites" with each iteration.
Use transactions: start the transaction, get the record, lock it, and then commit the transaction. This forces MySQL to enforce your concurrency locks.
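Put together, the claim can look like this in SQL; a sketch only, using the table and column names from the question, where <id from the SELECT> is a placeholder you fill in from your own fetch:

```sql
START TRANSACTION;
-- lock one unhandled row; concurrent transactions block here
SELECT id FROM table_name WHERE is_locked = 0 LIMIT 1 FOR UPDATE;
-- mark it as taken before releasing the lock
UPDATE table_name SET is_locked = 1 WHERE id = <id from the SELECT>;
COMMIT;
```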

PDO Lock Table Row INNODB

I have a table where sensitive data is stored and need to take care, that only one session is able to read/write on a specific row.
My table has 2 columns:
id (int) primary
amount (int) index
so I want to lock not the whole table but only one row
something like
LOCK TABLEROWS `mytable` WRITE WHERE `id` = 1
I'm using PDO, and startTransaction won't prevent other sessions from reading/writing in the meantime.
I read the InnoDB documentation but didn't get it to work.
EDIT:
$_PDO->exec('START TRANSACTION');
$_PDO->query('SELECT * FROM `currency` WHERE `id` = '.$userid.' FOR UPDATE');
//maybe do update or not check if
$_PDO->exec('COMMIT');
so that's all I need to do?
The example you show will cause other sessions doing SELECT...FOR UPDATE to wait for your COMMIT. The locks requested by SELECT...FOR UPDATE are exclusive locks, so only one session at a time can acquire the lock. Therefore if your session holds the lock, other sessions will wait.
You cannot block non-locking reads. Another session can run SELECT with no locking clause, and still read the data. But they can't update the data, nor can they request a locking read.
You could alternatively make each session request a lock on the table with LOCK TABLES, but you said you want locks at the row level.
You can create your own custom locks with the GET_LOCK() function. This allows you to make a distinct lock for each user id. If you do this for all code that accesses the table, you don't need to use FOR UPDATE.
$lockName = 'currency' . (int) $userid;
$_PDO->beginTransaction();
// acquire a named lock for this user id; -1 means wait indefinitely
$stmt = $_PDO->prepare("SELECT GET_LOCK(?, -1)");
$stmt->execute([$lockName]);
$stmt = $_PDO->prepare('SELECT * FROM `currency` WHERE `id` = ?');
$stmt->execute([$userid]);
// maybe do the update or not, check if needed
$_PDO->commit();
// release the named lock so other sessions can proceed
$stmt = $_PDO->prepare("SELECT RELEASE_LOCK(?)");
$stmt->execute([$lockName]);
This depends on all client code cooperating. They all need to acquire the lock before they work on a given row. You can either use SELECT...FOR UPDATE or else you can use GET_LOCK().
But you can't block clients that want to do non-locking reads with SELECT.

PHP + Locking MySQL Table fails

I have a table that needs to be locked against inserts, but it also needs to remain updatable while inserts are prevented.
function myfunction() {
    $locked = mysql_result(mysql_query("SELECT locked FROM mylock"), 0, 0);
    if ($locked) return false;
    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=1");
    mysql_query("UNLOCK TABLES");
    /* I'm checking another table to see if a record doesn't exist already */
    /* If it doesn't exist then I'm inserting that record */
    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=0");
    mysql_query("UNLOCK TABLES");
}
But this isn't enough: the function is called again from another script, and the 2 calls to the function end up inserting simultaneously. I can't have that, because it's causing duplicate records.
This is urgent, please help. I thought of using UNIQUE on the fields, but there are 2 fields (player1, player2), and neither may contain a duplicate of a player ID.
Unwanted behavior:
Record A = ( Player1: 123 Player2: 456 )
Record B = ( Player1: 456 Player2: 123 )
I just noticed you suffer from a race condition in your code. Assuming there isn't an error (see my comments)... two processes could both check and get a "not locked" result. The "LOCK TABLES" will serialize their access, but they'll both continue on thinking they hold the lock and thus insert duplicate records.
You could rewrite it as this:
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=1 WHERE locked=0");
$have_lock = mysql_affected_rows() > 0;
mysql_query("UNLOCK TABLES");
if (!$have_lock ) return false;
I suggest not using locks at all. Instead, when inserting the data, do this:
mysql_query("INSERT IGNORE INTO my_table VALUES(<some values here>)");
if (mysql_affected_rows() > 0)
{
    // the data was inserted without error
    $last_id = mysql_insert_id();
    // add what you need here
}
else
{
    // the data could not be inserted (because it already exists in the table)
    // query the table to retrieve the data
    mysql_query("SELECT * FROM my_table WHERE <some_condition>");
    // add what you need here
}
When the IGNORE keyword is added to an INSERT statement, MySQL attempts to insert the data. If that doesn't work because a record with the same primary key is already in the table, it fails silently. mysql_affected_rows is used to know whether a record was inserted and decide what to do.
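Note that a UNIQUE key alone will not catch the symmetric duplicates from the question above (Player1: 123 / Player2: 456 versus Player1: 456 / Player2: 123) unless the pair is stored in a canonical order. A minimal sketch of that normalization in PHP; the function name is hypothetical:

```php
<?php
// Normalize a player pair so that (456, 123) and (123, 456) map to
// the same ordered pair; store the result in a table with a composite
// UNIQUE KEY (player1, player2) so INSERT IGNORE catches both orderings.
function canonicalPair(int $a, int $b): array
{
    return [min($a, $b), max($a, $b)];
}
```

With this in place, both orderings of the pair become the same row, and the second insert is silently ignored.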
You don't need table-level locking here; better to use row-level locking, which means only the one row being modified is locked. The usual alternatives are either to lock the entire table for the duration of the modification, or to lock some subset of the table. Row-level locking simply reduces that subset to the smallest number of rows that still ensures integrity.
In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.
If your problem is still not solved, then memory size may be the issue. InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can hold at the same time is limited by the innodb_buffer_pool_size variable set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you're doing anything with InnoDB on your server.
Luckily, the fix for this issue is easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix requires a restart of the MySQL daemon; there's no way to adjust this variable on the fly (with the current stable MySQL versions as of this writing).
Before you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server wide variable, not a per-thread variable, so it's shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won't use all of that up front. As MySQL finds more things to put in the buffer, the memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least used data begins to get pruned when new data needs to be present.
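The change goes in the server's my.cnf (or my.ini on Windows) under the [mysqld] section; the 1G figure is just the example value from above, so size it for your own server:

```ini
[mysqld]
# server-wide buffer for InnoDB data, indexes, and the lock table
innodb_buffer_pool_size = 1G
```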

MYSQL table locking with PHP

I have a mysql table fg_stock. Concurrent access happens on this table most of the time. I used this code but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select))
{
    $stock = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row from the table (WHERE Item='$item')? Chances are you are running the MyISAM storage engine for the table in question; you should look into using the InnoDB engine instead, as one of its strong points is that it supports row-level locking, so you don't need to lock the entire table.
Why do you need to lock your table at all?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for an unnecessary loop with a set of queries. Just try to avoid SQL injection by using PHP's intval function on $qty_in (if it is an integer, of course), for example.
And the concurrent-access trouble probably only happens due to non-optimized work with the database, with an excessive number of queries.
ps: moreover, your example does not make sense, as MySQL could update the same record every time through the loop. You did not tell MySQL exactly which record to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, as MySQL does not distinguish records it has already updated from those it has not touched yet.
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work with another table that is not locked. If you obtained the lock, trying to write to a table that was not included in the lock statement should generate an error.

You may want to consider an alternative solution. Instead of locking, perform an update that includes the changed elements as part of the where clause. If the data that you are changing has changed since you read it, the update will "fail" and return zero rows modified. This eliminates the table lock, and all the messy horrors that may come with it, including deadlocks.
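The optimistic, lock-free approach from that quote can be sketched in plain SQL; the table and column names are the fg_stock ones from the question, and the literal values are made up for illustration:

```sql
-- read the current value (suppose it returns stock = 5)
SELECT stock FROM fg_stock WHERE Item = 'widget';

-- write back the new value only if nobody changed it in the meantime
UPDATE fg_stock SET stock = 7 WHERE Item = 'widget' AND stock = 5;
-- zero rows modified means another session got there first: re-read and retry
```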

MYSQL How can I make sure row does not get UPDATED more than once?

I have multiple workers SELECTing and UPDATing rows.
id status
10 new
11 new
12 old
13 old
Worker selects a 'new' row and updates its status to 'old'.
What if two workers select the same row at the same time?
I mean, worker1 selects a new row, and before worker1 updates its status, worker2 selects the same row?
Should I SELECT and UPDATE in one query or is there another way?
You can use LOCK TABLES but sometimes I prefer the following solution (in pseudo-code):
// get 1 new row
$sql = "select * from table where status='new' limit 0, 1";
$row = mysql_query($sql);
// update it to old while making sure no one else has done that
$sql = "update table set status='old' where status='new' and id=row[id]";
mysql_query($sql);
// check
if (mysql_affected_rows() == 1)
// status was changed
else
// failed - someone else did it
You could LOCK the table before your read, and unlock it after your write. This would eliminate the chance of two workers updating the same record at the same time.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
Depending on your database storage engine (InnoDB, MyISAM, etc.), you may be able to lock the table while someone is modifying it. That would prevent simultaneous actions on the same table.
Could you put conditions in your PHP logic to imply a lock? Like setting a status attribute on a row that would prevent a second user from performing an update. This would possibly require querying the database before an update to make sure the row is not locked.
