I have a table which gets lots of calls (about 30/sec); let's call it service.
service is structured as follows:
`id`|`name`|`lastUpdate`|`inUse`|`usedBy`
While I'm changing one of the rows, I don't want it to be visible to any other SELECT/UPDATE statement until the update is finished.
For example:
Session 1:
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1
Session 2:
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1
Even if both queries run at the same moment, I want them to update different rows.
I know there are articles about this question, but each one seems to give a different answer; the recommended approach has probably changed over the years.
I'm working with Doctrine, but I don't mind a plain MySQL solution.
Even if both queries run at the same moment, I want them to update different rows.
No, they won't clash. AFAIK, a DML statement like UPDATE takes implicit locks on the rows it touches, so once one UPDATE is in progress, another UPDATE/INSERT cannot modify those rows, because the first statement has already acquired an exclusive lock on them.
See Locks Set by Different SQL Statements in InnoDB
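Since the UPDATE alone doesn't tell you which row you claimed, each session needs its own usedBy value to read the row back afterwards. A minimal sketch, assuming 'worker-a' is a per-session identifier (a placeholder, not from the question):
-- Each session claims a row with its own identifier; InnoDB's implicit
-- row locks mean two concurrent sessions end up claiming different rows.
UPDATE `service` SET `inUse` = 1, `usedBy` = 'worker-a'
WHERE `inUse` = 0 ORDER BY `lastUpdate` ASC LIMIT 1;
-- Read back the row this session just claimed (assumes a worker holds
-- at most one row at a time; otherwise add ORDER BY ... LIMIT 1).
SELECT `id`, `name` FROM `service`
WHERE `inUse` = 1 AND `usedBy` = 'worker-a';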
I'm not entirely sure if this is best practice, but you could lock the table:
LOCK TABLE `service` WRITE;
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1;
UNLOCK TABLES;
See MySQL Docs.
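A common alternative to locking the whole table is a short transaction with SELECT ... FOR UPDATE. This is a sketch, not a drop-in solution (the id value 42 is illustrative; on MySQL 8.0+ you could append SKIP LOCKED so a second session skips to the next free row instead of waiting):
START TRANSACTION;
-- Find and lock the oldest free row; a concurrent session blocks here
-- until COMMIT (or skips this row entirely with SKIP LOCKED on 8.0+).
SELECT `id` FROM `service`
WHERE `inUse` = 0 ORDER BY `lastUpdate` ASC LIMIT 1
FOR UPDATE;
-- Claim the row using the id fetched above (42 is illustrative).
UPDATE `service` SET `inUse` = 1, `usedBy` = 'xxx' WHERE `id` = 42;
COMMIT;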
Related
I am building a PHP RESTful API for remote "worker" machines to self-assign tasks. The MySQL InnoDB table on the API host holds pending records that the workers can pick up from the API whenever they are ready to work on a record. How do I prevent concurrently requesting workers from ever getting the same record?
My initial plan to prevent this is to UPDATE a single record with a uniquely generated ID in a default NULL field, and then poll for the details of the record where the unique ID field matches.
For example:
UPDATE mytable SET status = 'Assigned', uniqueidfield = '3kj29slsad'
WHERE uniqueidfield IS NULL LIMIT 1
And in the same PHP instance, the next query:
SELECT id, status, etc FROM mytable WHERE uniqueidfield = '3kj29slsad'
The resulting record from the SELECT statement above is then given to the worker. Would this prevent simultaneously requesting workers from being shown the same records? I am not exactly sure how MySQL handles the lookups within an UPDATE query, and whether two UPDATEs could "find" the same record and then update it sequentially. If this works, is there a more elegant or standardized way of doing this (I'm not sure if FOR UPDATE would need to be applied here)? Thanks!
Never mind my previous answer. I believe I understand what you are asking; I'll reword it so it may be clearer to others.
"If I issue two of the above update statements at the same time, what would happen?"
According to http://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html, the second statement would not interfere with the first one.
Normally, you do not need to lock tables, because all single UPDATE
statements are atomic; no other session can interfere with any other
currently executing SQL statement.
A more elegant way is probably opinion-based, but I don't see anything wrong with what you're doing.
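One refinement worth considering: check how many rows the UPDATE matched before SELECTing, so an empty queue is detected immediately. A sketch reusing the question's own statements:
-- Claim one pending record; the single UPDATE is atomic.
UPDATE mytable SET status = 'Assigned', uniqueidfield = '3kj29slsad'
WHERE uniqueidfield IS NULL LIMIT 1;
-- 1 means a record was claimed; 0 means nothing was pending.
SELECT ROW_COUNT();
-- Fetch the claimed record only if ROW_COUNT() returned 1.
SELECT id, status FROM mytable WHERE uniqueidfield = '3kj29slsad';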
Does the following code really disable concurrent execution?
LOCK TABLES codes WRITE;
INSERT INTO codes VALUES ();
SELECT mid FROM codes ORDER BY expired DESC LIMIT 0,1;
UNLOCK TABLES;
The PHP will execute the SQL. The PHP file will be requested by many users via HTTP. Would it really execute in isolation for every user?
mid is something that has to be unique for every user, so I think I should use MySQL locks to achieve that.
If you have a table with an auto-incremented key and you use mysql_insert_id just after insertion, the value is guaranteed to be unique, and it won't mix up user threads (it fetches the ID generated on the connection you give it). No need to lock the table.
http://php.net/manual/en/function.mysql-insert-id.php
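In plain SQL the same idea looks like this (a sketch; it assumes mid is the AUTO_INCREMENT primary key of codes, which the question implies but doesn't show):
-- mid is AUTO_INCREMENT, so every INSERT generates a fresh value.
INSERT INTO codes () VALUES ();
-- LAST_INSERT_ID() is tracked per connection, so concurrent users
-- each get their own value without any table lock.
SELECT LAST_INSERT_ID();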
I have 10 separate PHP cron jobs running that each select 100 records at a time from the same table using
SELECT `username` FROM `data` WHERE `id` <> '' LIMIT 0,100
How do I ensure that each of these record sets is unique? Is there a way of ensuring that each cron job does not select the same 100 records?
username is unique, if that helps.
Thanks
Jonathan
You can either have each job choose a different 100 records:
limit 100,100, limit 200,100 ...
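For instance, if each job knows its own index N (0 through 9), it can take its own slice; note an ORDER BY is needed so the slices are stable and don't overlap (a sketch):
-- Job N reads rows N*100 .. N*100+99; shown here for job 1.
SELECT `username` FROM `data`
WHERE `id` <> ''
ORDER BY `username`
LIMIT 100, 100;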
Or choose 100 randomly:
... FROM `data` WHERE `id` <> '' ORDER BY RAND() LIMIT 0,100
If you want to ensure that a record is never chosen twice, you'll have to mark it ("make it dirty") so the other cron jobs can query only the records that haven't been chosen already. Just add another boolean column called chosen, and set it to true once a given record has been chosen. You'll have to run the cron jobs one by one, or use a locking or mutex mechanism, to ensure they don't run in parallel and race each other.
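A sketch of that boolean mark, which also shows why the jobs need extra coordination: a lone UPDATE ... LIMIT 100 is atomic, but it doesn't tell you which rows this particular job marked.
ALTER TABLE `data` ADD COLUMN `chosen` TINYINT(1) NOT NULL DEFAULT 0;
-- Mark 100 not-yet-chosen rows. Atomic on its own, but afterwards you
-- can't tell your 100 apart from rows marked by another job; hence the
-- job-id scheme in the next answer.
UPDATE `data` SET `chosen` = 1 WHERE `chosen` = 0 LIMIT 100;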
What you could do is 'mark' the records each job is going to use - the trick would be ensuring there's no race condition in marking them. Here's one way to do that.
create table job
(
job_id int not null auto_increment,
#add any other fields for a job you might want
primary key(job_id)
);
# add a job_id column to data
alter table data add column job_id int not null default 0, add index(job_id);
Now, when you want to get 100 data rows to work on, get a unique job_id by inserting a row into job and obtaining the automatically generated id. Here's how you might do this in the mysql command-line client; it's easy to see how to adapt it to code, though:
insert into job (job_id) values(0);
set @myjob=last_insert_id();
Then, mark a hundred rows which are currently unassigned (job_id = 0):
update data set job_id=@myjob where job_id=0 limit 100;
Now, you can take your time and process all rows where job_id=@myjob, safe in the knowledge that no other process will touch them.
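For example, the follow-up read is just:
-- Only rows claimed by this job are returned.
select username from data where job_id=@myjob;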
No doubt you'll need to tailor this to suit your problem, but this illustrates how you can use simple features of MySQL to avoid a race condition among parallel processes competing for access to the same records.
I need to do two updates to rows, and I need to make sure they are done together, with no query from another user able to interfere with them. I know about SELECT...FOR UPDATE, but I imagine that after the first update the row will of course be unlocked, which means someone could interfere with the second update. If someone else updates that row first, the update will still work but will mess up the data. Is there any way to ensure that the two updates happen the way they are supposed to? I have been told about transactions, but as far as I know they are only good for making sure the two updates actually happen, not that they happen "together"; unless I am mistaken and the rows stay locked until the transaction is committed?
Here are the queries:
SELECT z FROM table WHERE id='$id'
UPDATE table SET x=x+2 WHERE x>z
UPDATE table SET y=y+2 WHERE y>z
I made a mistake and didn't give full information; that was my fault. I have updated the queries. The issue I have is that z can be updated as well. If z is updated after the SELECT but before the other two updates, the data can get messed up. Does a BEGIN/COMMIT transaction handle that?
Learn about TRANSACTION
http://dev.mysql.com/doc/refman/5.0/en/commit.html
[... connect ...]
mysql_query("BEGIN");
$query1 = mysql_query('UPDATE table SET x=x+2 WHERE x>z');
$query2 = mysql_query('UPDATE table SET y=y+2 WHERE y>z');
if($query1 && $query2) {
mysql_query("COMMIT");
echo 'Save done: both UPDATEs committed.';
} else {
mysql_query("ROLLBACK");
echo 'Save failed: both UPDATEs rolled back.';
}
There are various isolation levels for transactions, but basically, per the ACID properties, you should expect that within a given transaction all reads and updates are performed consistently, meaning the data is kept in a valid state. More importantly, a transaction is isolated: work being done in another transaction (thread) will not interfere with your transaction (your grouping of SELECT and UPDATE statements). This lets you work under the broad assumption that you are the only thread of execution in the system, and then commit that group of work atomically or roll it all back.
Each database may handle the semantics differently (some may lock rows or columns, some may re-order, some may serialize), but that's the beauty of a declarative database interface: you worry about the work you want to get done.
As stated, on MySQL, InnoDB is transactional and supports what is mentioned above, so make sure your tables use InnoDB; other engines (e.g. MyISAM) are not transactional and thus force you to manage those transactional semantics (locking) manually.
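For instance, here is a sketch of the question's three statements as a single InnoDB transaction, treating z as the one value fetched for $id (an interpretation of the question, not a definitive reading; binding '$id' safely is left aside). SELECT ... FOR UPDATE keeps the row supplying z locked until COMMIT, which addresses the edit about z changing mid-flight:
START TRANSACTION;
-- Read z for this id into a user variable and lock that row so no
-- other session can change z before we COMMIT.
SELECT z INTO @z FROM `table` WHERE id = '$id' FOR UPDATE;
UPDATE `table` SET x = x + 2 WHERE x > @z;
UPDATE `table` SET y = y + 2 WHERE y > @z;
COMMIT;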
One approach would be to lock the entire table:
LOCK TABLE `table` WRITE;
SELECT z FROM `table` WHERE id='$id';
UPDATE `table` SET x=x+2 WHERE x>z;
UPDATE `table` SET y=y+2 WHERE y>z;
UNLOCK TABLES;
This will prevent other sessions from writing to, and reading from, the `table` table while the SELECT and UPDATEs run.
Whether this is an appropriate solution depends on how acceptable it is for other sessions to wait to read or write from the table.
I have a MySQL table fg_stock. Concurrent access to this table happens most of the time. I used this code, but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select=mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while($res=mysql_fetch_array($select))
{
$stock=$res['stock'];
$close_stock=$stock+$qty_in;
$update=mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row (WHERE Item='$item')? Chances are you are running the MyISAM storage engine for the table in question; you should look into using the InnoDB engine instead, as one of its strong points is that it supports row-level locking, so you don't need to lock the entire table.
Why do you need to lock your table anyway?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for the unnecessary loop with its set of queries. Just be sure to avoid SQL injection, for example by using PHP's intval function on $qty_in (if it is an integer, of course).
And the concurrent-access trouble probably only happens because of non-optimized work with the database, with an excessive number of queries.
PS: moreover, your example does not make sense, as MySQL could update the same record on every pass through the loop. You did not tell MySQL exactly which record to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, since MySQL cannot distinguish already-updated records from those it has not touched yet.
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work
with another table that is not locked. If you obtained the lock,
trying to write to a table that was not included in the lock statement
should generate an error.
You may want to consider an alternative solution. Instead of locking,
perform an update that includes the changed elements as part of the
where clause. If the data that you are changing has changed since you
read it, the update will "fail" and return zero rows modified. This
eliminates the table lock, and all the messy horrors that may come
with it, including deadlocks.
PHP, mysqli, and table locks?
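A sketch of that optimistic pattern applied to the fg_stock table from the earlier question (the literal values stand in for $qty_in, $item, and the $stock value read earlier; zero rows modified means another session changed the row first, so re-read and retry):
-- The extra guard on stock makes the update a no-op if the row
-- changed since we read it.
UPDATE fg_stock
SET stock = stock + 5          -- $qty_in (placeholder)
WHERE Item = 'ITEM-1'          -- $item (placeholder)
  AND stock = 17;              -- the $stock value read earlier
-- 1 = success; 0 = lost the race: re-read stock and try again.
SELECT ROW_COUNT();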