Disabling MySQL Concurrency - php

Does the following code really disable concurrent execution?
LOCK TABLES codes WRITE;
INSERT INTO codes VALUES ();
SELECT mid FROM codes ORDER BY expired DESC LIMIT 0,1;
UNLOCK TABLES;
The PHP script executes this SQL, and the PHP file will be requested by many users over HTTP. Would it really execute in isolation for every user?
mid has to be unique for every user, so I think I should use MySQL locks to achieve that.

If you have a table with an auto-incremented key and you call mysql_insert_id just after the insertion, the returned value is guaranteed to be unique, and it won't mix up user threads (it fetches the ID generated on the connection you give it). There is no need to lock the table.
http://php.net/manual/en/function.mysql-insert-id.php
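A minimal sketch of that approach using mysqli (the mysql_* API referenced above was removed in PHP 7; it is assumed here that the `mid` column of `codes` is AUTO_INCREMENT, as the answer implies):
<?php
// Minimal sketch, assuming `codes`.`mid` is AUTO_INCREMENT and a mysqli connection.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Each HTTP request runs on its own connection, so insert_id always returns the ID
// generated by this request's INSERT, even under heavy concurrency.
$db->query("INSERT INTO codes VALUES ()");
$mid = $db->insert_id;   // unique per inserted row; no table lock required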

Related

mysql - lock row while updating

I have a table which gets lots of calls (about 30/sec); let's call it service.
service is built as follows:
`id`|`name`|`lastUpdate`|`inUse`|`usedBy`
While I am changing one of the rows, I don't want it to be visible to any other SELECT/UPDATE statements until the update is finished.
For example:
session 1:
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1
session 2 :
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1
Even though both queries run at the same moment, I want them to update different rows.
I know there are some articles about this question, but each one gives a different answer; it has probably changed over the years.
I'm working with Doctrine, but I don't mind a plain MySQL solution.
Even though both queries run at the same moment, I want them to update different rows.
No, right? AFAIK, a DML statement like UPDATE takes implicit locks on the table/row, so once an UPDATE is happening, another UPDATE/INSERT cannot take place, since the first one has already acquired an exclusive lock.
See Locks Set by Different SQL Statements in InnoDB
I'm not entirely sure if this is best practice, but you could lock the table:
LOCK TABLE `service` WRITE;
UPDATE `service` SET `inUse`=1, `usedBy`='xxx'
WHERE `inUse`=0 ORDER BY `lastUpdate` ASC LIMIT 1;
UNLOCK TABLES;
See MySQL Docs.
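A minimal PHP sketch of that lock-then-claim sequence, assuming a PDO connection; the per-session `usedBy` value and the read-back SELECT are additions for illustration, not part of the original answer:
<?php
// Sketch only: serialize claims on `service` with a table lock (PDO assumed).
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$me  = 'worker-' . getmypid();   // per-session identifier (assumption, instead of 'xxx')

$pdo->exec("LOCK TABLES `service` WRITE");
$pdo->exec("UPDATE `service` SET `inUse` = 1, `usedBy` = " . $pdo->quote($me) . "
            WHERE `inUse` = 0 ORDER BY `lastUpdate` ASC LIMIT 1");
$pdo->exec("UNLOCK TABLES");

// Read back the row this session just claimed (addition for illustration).
$stmt = $pdo->prepare("SELECT * FROM `service` WHERE `usedBy` = ? AND `inUse` = 1");
$stmt->execute([$me]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);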

Locking transaction on mysql table with multi threading [duplicate]

I have one table that is read at the same time by different threads.
Each thread must select 100 rows, execute some tasks on each row (unrelated to the database), and then delete the selected rows from the table.
Rows are selected using this query:
SELECT id FROM table_name FOR UPDATE;
My question is: how can I ignore (or skip) rows that were previously locked by a SELECT statement in MySQL?
I typically create a process_id column that is default NULL and then have each thread use a unique identifier to do the following:
UPDATE table_name SET process_id = #{process.id} WHERE process_id IS NULL LIMIT 100;
SELECT id FROM table_name WHERE process_id = #{process.id} FOR UPDATE;
That ensures that each thread selects a unique set of rows from the table.
Hope this helps.
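A minimal PHP sketch of that claim pattern, assuming PDO and the `process_id` column described above. Each statement runs in autocommit mode, so the FOR UPDATE from the SELECT above is omitted here; the DELETE step follows the question's workflow:
<?php
// Sketch of the process_id claim pattern (PDO assumed).
$pdo       = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$processId = getmypid();   // unique per worker (assumption)

// Claim up to 100 unclaimed rows for this worker.
$claim = $pdo->prepare("UPDATE table_name SET process_id = ? WHERE process_id IS NULL LIMIT 100");
$claim->execute([$processId]);

// Fetch only the rows this worker claimed.
$rows = $pdo->prepare("SELECT id FROM table_name WHERE process_id = ?");
$rows->execute([$processId]);

foreach ($rows->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // ... process $row['id'] outside the database ...
    $pdo->prepare("DELETE FROM table_name WHERE id = ?")->execute([$row['id']]);
}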
Even though it is not the best solution (there is no way I know of to ignore locked rows), I select a random row and try to obtain a lock on it.
START TRANSACTION;
SET @v1 = (SELECT myId FROM tests.`table` WHERE status IS NULL LIMIT 1);
SELECT * FROM tests.`table` WHERE myId = @v1 FOR UPDATE; # <- acquire the lock
I set a small lock-wait timeout for the transaction: if the row is locked, the transaction is aborted and I try another one. If I obtain the lock, I process the row. If (bad luck) the row was locked but is processed and released before my timeout expires, I end up selecting a row that has already been processed. To handle that, I check a field that my processes set (e.g. status): if the other process's transaction ended OK, that field tells me the work has already been done and I do not process that row again.
Every other possible solution without transactions (e.g. setting another field if the row has no status, etc.) can easily lead to race conditions and missed rows (e.g. one thread dies abruptly and the data it claimed stays tagged, whereas a transaction simply expires); ref. comment here
Hope it helps
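A sketch of that try-lock approach with a short lock-wait timeout, assuming PDO and the table from the example above; the 1-second timeout and the 'done' status marker are illustrative assumptions:
<?php
// Sketch: try to lock one unprocessed row, giving up quickly if another thread
// already holds the lock.
$pdo = new PDO('mysql:host=localhost;dbname=tests', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$pdo->exec("SET SESSION innodb_lock_wait_timeout = 1");   // fail fast on locked rows

try {
    $pdo->beginTransaction();
    $pdo->exec("SET @v1 = (SELECT myId FROM tests.`table` WHERE status IS NULL LIMIT 1)");
    $row = $pdo->query("SELECT * FROM tests.`table` WHERE myId = @v1 FOR UPDATE")
               ->fetch(PDO::FETCH_ASSOC);

    if ($row && $row['status'] === null) {   // not processed by anyone else meanwhile
        // ... do the work here ...
        $pdo->exec("UPDATE tests.`table` SET status = 'done' WHERE myId = @v1");
    }
    $pdo->commit();
} catch (PDOException $e) {
    if ($pdo->inTransaction()) {
        $pdo->rollBack();   // lock wait timeout (error 1205): pick another row and retry
    }
}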

PHP MySQL Task API, Prevent Duplicate Records

I am building a PHP RESTful API for remote "worker" machines to self-assign tasks. The MySQL InnoDB table on the API host holds pending records that the workers can pick up from the API whenever they are ready to work on a record. How do I prevent concurrently requesting workers from ever getting the same record?
My initial plan to prevent this is to UPDATE a single record with a uniquely generated ID in a default NULL field, and then poll for the details of the record where the unique ID field matches.
For example:
UPDATE mytable SET status = 'Assigned', uniqueidfield = '3kj29slsad'
WHERE uniqueidfield IS NULL LIMIT 1
And in the same PHP instance, the next query:
SELECT id, status, etc FROM mytable WHERE uniqueidfield = '3kj29slsad'
The resulting record from the SELECT statement above is then given to the worker. Would this prevent simultaneously requesting workers from being shown the same records? I am not exactly sure how MySQL handles the lookup within an UPDATE query, and whether two UPDATEs could "find" the same record and then update it sequentially. If this works, is there a more elegant or standardized way of doing this (I'm not sure whether FOR UPDATE would need to be applied here)? Thanks!
Never mind my previous answer. I believe I understand what you are asking; I'll reword it so it is perhaps clearer to others.
"If I issue two of the above update statements at the same time, what would happen?"
According to http://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html, the second statement would not interfere with the first one.
Normally, you do not need to lock tables, because all single UPDATE
statements are atomic; no other session can interfere with any other
currently executing SQL statement.
A more elegant way is probably opinion based, but I don't see anything wrong with what you're doing.
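A sketch of the claim-then-fetch sequence from the question, using mysqli prepared statements (PHP 7+ with mysqlnd assumed; the token generation is an illustrative choice):
<?php
// Sketch: atomically claim one unassigned record, then fetch it by its token.
$db    = new mysqli('localhost', 'user', 'pass', 'mydb');
$token = bin2hex(random_bytes(8));   // unique per request

// Atomically claim one unassigned record.
$claim = $db->prepare("UPDATE mytable SET status = 'Assigned', uniqueidfield = ?
                       WHERE uniqueidfield IS NULL LIMIT 1");
$claim->bind_param('s', $token);
$claim->execute();

$task = null;
if ($claim->affected_rows === 1) {
    // Fetch the record this request just claimed.
    $select = $db->prepare("SELECT id, status FROM mytable WHERE uniqueidfield = ?");
    $select->bind_param('s', $token);
    $select->execute();
    $task = $select->get_result()->fetch_assoc();
}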

MYSQL table locking with PHP

I have a MySQL table fg_stock. Most of the time, concurrent access is happening on this table. I used this code, but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");
$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select))
{
    $stock       = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}
mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row from the table (WHERE Item='$item')? Chances are you are running a MyISAM storage engine for the table in question, you should look into using the InnoDB engine instead, as one of it's strong points is that it supports row level locking so you don't need to lock the entire table.
Why do you need to lock your table anyway?
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table, and no need for the unnecessary loop with a set of queries. Just avoid SQL injection by using PHP's intval function on $qty_in (if it is an integer, of course), for example.
And the concurrent-access problems probably only happen because of non-optimized work with the database and an excessive number of queries.
PS: moreover, your example does not make sense, as MySQL could update the same record on every iteration of the loop. You did not tell MySQL exactly which record you want to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, since MySQL cannot distinguish already-updated records from those it has not touched yet.
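A sketch of that single atomic UPDATE with a prepared statement instead of string interpolation (mysqli is assumed here; the mysql_* API used in the question was removed in PHP 7):
<?php
// Sketch: one atomic UPDATE per stock adjustment, with a prepared statement.
function addStock(mysqli $db, string $item, int $qtyIn): void
{
    $stmt = $db->prepare("UPDATE fg_stock SET stock = stock + ? WHERE Item = ?");
    $stmt->bind_param('is', $qtyIn, $item);
    $stmt->execute();   // InnoDB's row lock makes the read-modify-write atomic
}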
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: You can test for table lock success by trying to work
with another table that is not locked. If you obtained the lock,
trying to write to a table that was not included in the lock statement
should generate an error.
You may want to consider an alternative solution. Instead of locking,
perform an update that includes the changed elements as part of the
where clause. If the data that you are changing has changed since you
read it, the update will "fail" and return zero rows modified. This
eliminates the table lock, and all the messy horrors that may come
with it, including deadlocks.
PHP, mysqli, and table locks?
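A sketch of the optimistic alternative described in that quote; the table, columns, and version counter here are illustrative, not from the original:
<?php
// Sketch: include the value you read in the WHERE clause and treat zero affected
// rows as "someone else changed the data first". Names are illustrative only.
function reserveItem(mysqli $db, int $id, int $expectedVersion): bool
{
    $stmt = $db->prepare("UPDATE items SET reserved = 1, version = version + 1
                          WHERE id = ? AND version = ? AND reserved = 0");
    $stmt->bind_param('ii', $id, $expectedVersion);
    $stmt->execute();
    return $stmt->affected_rows === 1;   // false => the row changed since it was read
}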

Table [tablename] is not locked

I am writing a MySQL query that locks a table:
"LOCK TABLE table_1 WRITE"
After that I execute some functions, and in one of those functions I execute another query, on another table that I haven't locked:
"SELECT * FROM completely_different_table_2"
Then i get the following error message as result:
Table 'completely_different_table_2' was not locked with LOCK TABLES
Indeed, MySQL is right to tell me that the table is not locked. But why does it throw an error? Does anyone have ideas on how I could solve this?
Thanks in advance.
You have to lock every table that you want to use until the LOCK is released. You can give completely_different_table_2 just a READ lock, which allows other processes to read the table while it is locked:
LOCK TABLES table_1 WRITE, completely_different_table_2 READ;
PS: MySQL has a reason to do this. If you request a LOCK, you want to freeze a consistent state of your data. If you read data from completely_different_table_2 inside your LOCK, the data you write to table_1 will in some way depend on this other table. Therefore you don't want anyone to change that table during your LOCK, so you request a READ lock for the second table as well. If the data written to table_1 doesn't depend on the other table, simply don't query it until the LOCK is released.
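A short PHP sketch of that rule, assuming PDO and the table names from the question:
<?php
// Sketch: lock every table the code will touch for the duration of the lock.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$pdo->exec("LOCK TABLES table_1 WRITE, completely_different_table_2 READ");

// Reading the READ-locked table is allowed while the lock is held.
$rows = $pdo->query("SELECT * FROM completely_different_table_2")->fetchAll();

// ... write to table_1 based on $rows ...

$pdo->exec("UNLOCK TABLES");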
