mysql 'FOR UPDATE' command not working correctly - php

I have two PHP pages, page1.php and page2.php.
page1.php
execute_query('START TRANSACTION');
$res =execute_query('SELECT * FROM table WHERE id = 1 FOR UPDATE');
sleep(20);
print $res->first_name;
execute_query('COMMIT');
print"\n OK";
page2.php
$res =execute_query('SELECT * FROM table WHERE id = 1');
print $res->first_name;
I execute both pages at almost the same time.
According to my understanding of MySQL's FOR UPDATE, the result in page2.php should be displayed only after page1.php finishes executing (i.e., after 'OK' is printed in page1.php), because both pages read the same row.
But what actually happens is that page2.php displays its result immediately, even before page1.php has finished executing.
What is wrong with the FOR UPDATE statement here?

I'm assuming that the table is InnoDB (not MyISAM or MEMORY).
You are using a SELECT within a transaction. I don't know your isolation level, but I guess that your transactions are not blocking each other.
See this page for details: http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html
EDIT:
I'm going to explain this concept better, as requested. The isolation level is a session/global variable that determines how transactions are performed. Some isolation levels block other transactions when they try to modify the same row, but some don't.
For example, READ UNCOMMITTED doesn't block anything, because you read the current version of the rows (which may become obsolete before the transaction ends). The other SELECT (page2) only reads the table, so it doesn't have to wait for the first transaction to end.
SERIALIZABLE is much safer. It is not the default because it is the slowest isolation level. If you are using it, make sure that FOR UPDATE still makes sense for you.
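If the goal is for page2.php to wait until page1.php commits, page2's read must itself be a locking read: under the default REPEATABLE READ level, a plain SELECT is a non-locking consistent read and is never blocked by another session's FOR UPDATE. A minimal sketch, reusing the question's hypothetical execute_query() helper:
page2.php (locking version)
execute_query('START TRANSACTION');
// FOR UPDATE (or LOCK IN SHARE MODE) blocks here until page1.php commits
$res = execute_query('SELECT * FROM table WHERE id = 1 FOR UPDATE');
print $res->first_name;
execute_query('COMMIT');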

I think your SELECT ... FOR UPDATE is inside BEGIN TRANSACTION, so it keeps the record locked until the COMMIT statement is reached, and you delayed that with sleep(20). page2.php still executes because its plain SELECT never asks for that lock.

Mysql unable to update a row, when multiple selects are in process or taking too much time

I have a table called Settings which has only one row. The settings are critical everywhere in my program, and the row is read by 200 to 300 users every second. I haven't used any caching yet. I cannot update the Settings table to change a value like Limit from an API.
Example: change Limit Products from 5 to 10. The update query runs forever.
From Workbench I can update the record, but from the Admin Panel through the API it doesn't update, or takes too much time. The table is InnoDB.
1. Already tried locking with read/write.
2. Tried transactions.
3. Made a view of the table and tried to update through it; the same issue remains.
4. The update query is fine from Workbench, but through the API it runs all day.
Is there any way I can lock the read operations on the table and still update it? I have only one row in the table.
Any help would be highly appreciated. Thanks in advance.
This sounds like a really good use case for the query cache.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries.
To enable the query cache, you can run:
SET GLOBAL query_cache_size = 1000000;
And then edit your mysql config file (typically /etc/my.cnf or /etc/mysql/my.cnf):
query_cache_size=1000000
query_cache_type=2
query_cache_limit=100000
And then for your query you can change it to:
SELECT SQL_CACHE * FROM your_table;
You would need to restart the server for the config file changes to take effect. That should make it so you are able to update the table (as it won't be constantly locked by reads).
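To check that the cache is enabled and actually being hit, the standard status variables can be inspected (the query cache exists up to MySQL 5.7; it was removed in 8.0):
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%'; -- Qcache_hits should grow as clients repeat the SELECT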
As an alternative, you could implement caching in your PHP application. I would use something like memcached, but as a very simplistic solution you could do something like:
$path = "/path/to/settings.json";
// Serve settings from a JSON file cache, refreshing from MySQL at most once per minute.
$settings = is_file($path) ? json_decode(file_get_contents($path), true) : null;
$minute = intval(date('i'));
// '!isset || !==' also refreshes when the cache file is missing or unreadable
if (!isset($settings['minute']) || $settings['minute'] !== $minute) {
    $settings = get_settings_from_mysql();
    $settings['minute'] = $minute;
    file_put_contents($path, json_encode($settings), LOCK_EX);
}
Are the queries being run in the context of transactions, with, say, the REPEATABLE READ transaction isolation level? It sounds like the update isn't able to complete due to a lock on the table, in which case caching isn't likely to help you, as the cache is purged on every write to the table. More information on repeatable reads can be found at https://www.percona.com/blog/2012/08/28/differences-between-read-committed-and-repeatable-read-transaction-isolation-levels/.
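If lock waits are indeed the problem, it helps to look at which transaction is holding them. A hedged sketch, assuming MySQL 5.5-5.7 with InnoDB (in 8.0 these views moved to performance_schema):
-- transactions currently open, with the statement each is running
SELECT trx_id, trx_state, trx_started, trx_query FROM information_schema.INNODB_TRX;
-- which transaction is blocked by which (5.5-5.7)
SELECT * FROM information_schema.INNODB_LOCK_WAITS;
A transaction that started long ago and never committed (for example, one opened by the API code and left hanging) would explain an update that runs all day.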

Understanding pdo mysql transactions

The PHP Documentation says:
If you've never encountered transactions before, they offer 4 major
features: Atomicity, Consistency, Isolation and Durability (ACID). In
layman's terms, any work carried out in a transaction, even if it is
carried out in stages, is guaranteed to be applied to the database
safely, and without interference from other connections, when it is
committed.
QUESTION:
Does this mean that I can have two separate php scripts running transactions simultaneously without them interfering with one another?
ELABORATING ON WHAT I MEAN BY "INTERFERING":
Imagine we have the following employees table:
+------+--------+----------+
| id   | name   | salary   |
+------+--------+----------+
| 1    | ana    | 10000    |
+------+--------+----------+
If I have two scripts with similar/same code and they run at the exact same time:
script1.php and script2.php (both have the same code):
$conn->beginTransaction();
$stmt = $conn->prepare("SELECT * FROM employees WHERE name = ?");
$stmt->execute(['ana']);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
$salary = $row['salary'];
$salary = $salary + 1000; // increasing salary
$stmt = $conn->prepare("UPDATE employees SET salary = ? WHERE name = ?");
$stmt->execute([$salary, 'ana']);
$conn->commit();
and assuming the sequence of events is as follows:
script1.php selects data
script2.php selects data
script1.php updates data
script2.php updates data
script1.php commit() happens
script2.php commit() happens
What would the resulting salary of ana be in this case?
Would it be 11000? And would this then mean that 1 transaction will overlap the other because the information was obtained before either commit happened?
Would it be 12000? And would this then mean that regardless of the order in which data was updated and selected, the commit() function forced these to happen individually?
Please feel free to elaborate as much as you want on how transactions and separate scripts can interfere (or don't interfere) with one another.
You are not going to find the answer in the PHP documentation, because this has nothing to do with PHP or PDO.
The InnoDB engine in MySQL offers the four isolation levels defined by the SQL standard. The isolation levels, in conjunction with blocking vs. non-blocking reads, determine the result of the above example. You need to understand the implications of the various isolation levels and choose the appropriate one for your needs.
To sum up: if you use the SERIALIZABLE isolation level with autocommit turned off, the result will be 12000. In all other isolation levels, and in SERIALIZABLE with autocommit enabled, the result will be 11000. If you start using locking reads, the result can be 12000 under all isolation levels.
Judging by the given conditions (a solitary DML statement), you don't need a transaction here, but a table lock. It's a very common confusion.
You need a transaction if you need to make sure that ALL your DML statements were performed correctly or weren't performed at all.
This means:
you don't need a transaction for any number of SELECT queries
you don't need a transaction if only one DML statement is performed
Although, as noted in the excellent answer from Shadow, you may use a transaction here with an appropriate isolation level, it would be rather confusing. What you need here is locking, and the InnoDB engine lets you lock particular rows instead of locking the entire table, which should be preferred.
In case you want the salary to be 12000, use such locks.
Or - a simpler way - just run an atomic update query:
UPDATE employees SET salary = salary + 1000 WHERE name = ?
In this case both increments will be applied, and the salary will end up at 12000.
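A minimal PDO sketch of that atomic version, reusing the question's $conn (the read and the write happen inside one statement, so two concurrent runs cannot lose an increment):
$stmt = $conn->prepare("UPDATE employees SET salary = salary + 1000 WHERE name = ?");
$stmt->execute(['ana']); // run from both scripts, the salary ends at 12000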
If your goal is different, better express it explicitly.
But again: you have to understand that transactions in general have nothing to do with separate scripts' execution. Regarding your topic of race conditions, you are interested not in transactions but in table/row locking. This is a very common confusion, and you'd better learn the distinction:
a transaction ensures that a set of DML queries within one script either executes successfully or not at all;
table/row locking ensures that other scripts' executions won't interfere.
The only place where transactions and locking meet is a deadlock, and again, only when a transaction uses locking.
Alas, the "without interference" needs some help from the programmer. It needs BEGIN and COMMIT to define the extent of the 'transaction'. And...
Your example is inadequate. The first statement needs SELECT ... FOR UPDATE. This tells the transaction processing that there is likely to be an UPDATE coming for the row(s) that the SELECT fetches. That warning is critical to "preventing interference". Now the timeline reads:
script1.php BEGINs
script2.php BEGINs
script1.php selects data (FOR UPDATE)
script2.php selects data is blocked, so it waits
script1.php updates data
script1.php commit() happens
script2.php selects data (and will get the newly-committed value)
script2.php updates data
script2.php commit() happens
(Note: This is not a 'deadlock', just a 'wait'.)
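A hedged sketch of the corrected script from this timeline, again using the question's $conn:
$conn->beginTransaction();
// the locking read: the second script to reach this line waits here until the first commits
$stmt = $conn->prepare("SELECT salary FROM employees WHERE name = ? FOR UPDATE");
$stmt->execute(['ana']);
$salary = $stmt->fetchColumn() + 1000;
$stmt = $conn->prepare("UPDATE employees SET salary = ? WHERE name = ?");
$stmt->execute([$salary, 'ana']);
$conn->commit(); // releases the row lock; after both scripts finish, the salary is 12000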

Fast refreshing a php get with multiple queries

Things to note before reading:
I am aware the code isn't that brilliant. Please don't comment on my old work ;)
I am aware that mysql_query is deprecated. Updating that at the moment isn't within the scope of this question
Question background
I got an interesting bug report via an old website today, which has caused me a huge amount of concern, as I never expected this bug to occur.
The page is simple. On the original load, a table is displayed after looping through a MySQL query to the database. Each of those rows displays a link with:
url.com/items.php?use=XXX&confirm=0
The XXX relates to the ID of the item in the items table in the database. The confirm=0 case runs the following code:
if(isset($_GET['use'])){
    $id = @mysql_real_escape_string($_GET['use']);
    if(isset($_GET['confirm'])){
        $confirm = @mysql_real_escape_string($_GET['confirm']);
        if($confirm == 0){
            // show a confirm button of YES / NO for them
            // to click which has 1 for confirm
The user can then click on YES which transfers them to:
url.com/items.php?use=XXX&confirm=1
The code then falls through to the else branch of the code above, which performs the following checks:
if($id < 1){
    echo "<p class='error-message'>An error has occurred.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
if(empty($id)){
    echo "<p class='error-message'>An error has occurred.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
$quantity = 0;
$result = @mysql_query("SELECT * FROM inventory WHERE item_id=$id AND u_id=$user_id");
$num_rows = @mysql_num_rows($result);
$r = @mysql_fetch_array($result);
$quantity = $r['quantity'];
if($num_rows == 0){
    echo "<p class='error-message'>You do not own any of these.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
if($quantity < 1){
    echo "<p class='error-message'>You don't have any of these left!</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
$result = @mysql_query("SELECT * FROM items WHERE id=$id");
$r = @mysql_fetch_array($result);
$type = $r['type'];
$item_name = $r['item_name'];
The above performs the relevant checks to make sure the ID exists, then queries the database to get the current quantity from the inventory and checks that it is at least 1. If it isn't, the page stops at that point.
The code after this point removes the quantity of the item from the database and implements the "effect" of the item. Let's just assume an update is performed.
The problem:
The actual problem is that if a user refreshes the page multiple times, they can get the update query to run repeatedly while effectively skipping the quantity check. The update query runs over and over, but the quantity check never fails, as no error messages appear. An example from today: I had 3 items in my inventory and pressed F5 about 100 times. I managed to get the update to run 16 times without any error message displaying. If I then waited a few seconds and pressed F5 again, it displayed an error message saying I didn't have any of those items.
The following solutions are not an option as I don't want to waste time coding:
Create an ajax call to prevent multiple submits before all queries have been processed.
Implementing an MVC structure and redirecting the user to a separate page which prevents multiple submits
If anyone could explain the reason for this bug (with relevant reading material) or even offer a solution to resolve it that would be great! Thanks!
You have a race condition due to the time between querying the database for the stock level, and a subsequent update to reduce it. If you send several requests very quickly then each will receive the same stock level (3 in this case) before the first request has had time to update the stock level.
You need to change your code such that your query & decrement is atomic - i.e. there are no gaps.
One possible solution is to attempt an update, where stock level > 0 and see how many rows are affected.
UPDATE products SET `stockLevel` = `stockLevel` - 1 WHERE `productId` = 'something' AND `stockLevel` > 0
If the number of rows affected is 0, you had no stock. If the number of rows affected is 1 then you had stock. Multiple queries will reduce stock to zero, at which point you should see some error messages.
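Applied to the question's inventory table, a minimal sketch using the same (deprecated) mysql_* API and the $id and $user_id variables from the question:
// decrement and check the quantity in a single atomic statement
$result = @mysql_query("UPDATE inventory SET quantity = quantity - 1 WHERE item_id = $id AND u_id = $user_id AND quantity > 0");
if(@mysql_affected_rows() == 0){
    // no row matched: the user had none left, even under rapid refreshes
    echo "<p class='error-message'>You don't have any of these left!</p>";
    include("inc/ftr.php");
    exit();
}
// safe to apply the item's effect below this point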
The problem is likely due to multiple concurrent threads on your web server responding to requests simultaneously, with non-blocking, non-transactional database operations. Some of the requests may pass the inventory quantity check while other requests are still being processed.
One possible solution would be to use MySQL transactions, but this would probably require migrating to mysqli or PDO which seems to be outside the scope of your desired solution, and require InnoDB tables which you might not have.
Should you ever choose to upgrade to use mysqli, here is some useful information:
http://dev.mysql.com/doc/refman/5.0/en/commit.html
http://coders-view.blogspot.com/2012/03/how-to-use-mysql-transactions-with-php.html
Another solution would be to implement "locking" functionality.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
mysql_query("LOCK TABLES inventory WRITE;");
// all your other PHP/SQL here
mysql_query("UNLOCK TABLES;");
This will prevent other clients from reading the inventory table while the first client is still busy processing your PHP/MySQL code.

Why doesn't LOCK TABLES [table] WRITE prevent table reads?

According to http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html, if I lock a table for writing in MySQL, no one else should have access until it's unlocked. I wrote this script, loaded as either script.php or script.php?l=1 depending on what you want to do:
if ($_GET['l'])
{
    // $link is the mysqli connection; procedural mysqli_query() needs it as the first argument
    mysqli_query($link, "LOCK TABLES mytable WRITE");
    sleep(10);
    mysqli_query($link, "UNLOCK TABLES");
}
else
{
    $res = mysqli_query($link, "SELECT * FROM mytable");
    // Print Result
}
If I load script.php?l=1 in one browser window then, while it's sleeping, I should be able to load script.php in another window and it should wait until script.php?l=1 is finished, right?
Thing is, script.php loads right away, even though script.php?l=1 has a write lock. If I try to insert in script.php then it does wait, but why is the SELECT allowed?
Note: I am not looking for a discussion on whether to use LOCK TABLES or not. In fact I am probably going to go with a transaction, I am investigating that now, right now I just want to understand why the above doesn't work.
This happens because of query caching. There is a cached result available, and serving it doesn't touch the lock, so the results are returned.
This can be avoided by adding the SQL_NO_CACHE keyword to the SELECT:
SELECT SQL_NO_CACHE * FROM mytable
The point of LOCK is so that other sessions do not modify the table while you are using it during your specific session.
The reason that you are able to perform the SELECT query is because that's still considered part of the same MySQL session, even if you open up a new window.
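One way to test the same-session claim (an assumption about how your two windows connect, e.g. via persistent connections) is to print the connection ID from both scripts: the same ID in both windows means one shared session, whose own locks never block it; different IDs mean separate sessions.
$res = mysqli_query($link, "SELECT CONNECTION_ID() AS id");
$row = mysqli_fetch_assoc($res);
print $row['id'];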

PHP, mysqli, and table locks?

I have a database table where I need to pull a row, test user input for a match, then update the row to identify the user that made the match. Should a race condition occur, I need to ensure that the first user's update is not overwritten by another user.
To accomplish this I intend to:
1. Read row
2. Lock table
3. Read row again and compare to original row
4. If rows match, update; otherwise do nothing (another user has already updated the row)
Based on information I found on Google, I expected the LOCK TABLES statement to block until a lock was acquired. I set up a little test script in PHP that would stall for 10 seconds to allow me time to manually create a race condition.
// attempt to increment the victor number
$aData["round_id"] = $DATABASE["round_id"];
// routine to execute a SELECT on the database (omitted for brevity)
$aRound = $oRound->getInfo($aData);
echo "Initial Round Data:";
print_r($aRound);
echo "Locking...";
echo $oRound->lock();
echo "Stalling to allow for conflict...";
sleep(10);
echo "Awake...";
$aLockedRound = $oRound->getInfo($aData);
if($aRound["victor_nation"] == $aLockedRound["victor_nation"]){
    $aData["victor_nation"] = $aRound["victor_nation"] + 1;
    $oRound->update($aData);
    echo "Incremented Victor Nation";
}
where the lock routine is defined as
function lock(){
    global $oDatabase;
    $iReturn = 0;
    // lock the table
    $iReturn = $oDatabase->m_oConnection->query("LOCK TABLES round WRITE");
    return $iReturn;
}
Above, $oDatabase->m_oConnection is a mysqli connection that I use to execute prepared statements on the database.
When I run my test script I kick off the first user and wait for "Stalling to allow for conflict...", then start a second script. On the second script I expected it to block at "Locking..."; however, the second script also continues to "Stalling to allow for conflict...".
Since the LOCK statement doesn't appear to be blocking, nor returning any indicator of acquiring the lock (the return value is echoed and blank), it's unclear to me whether I'm actually acquiring a lock. Even if I am, I'm not sure how to proceed.
Any pointers?
Troubleshooting: You can test for table lock success by trying to work with another table that is not locked. If you obtained the lock, trying to write to a table that was not included in the lock statement should generate an error.
You may want to consider an alternative solution. Instead of locking, perform an update that includes the changed elements as part of the where clause. If the data that you are changing has changed since you read it, the update will "fail" and return zero rows modified. This eliminates the table lock, and all the messy horrors that may come with it, including deadlocks.
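A minimal sketch of that alternative, reusing the question's mysqli connection and the round table's victor_nation column ($roundId, $oldVictorNation, and $newVictorNation are hypothetical placeholders):
// the UPDATE succeeds only if the row still holds the value we originally read (optimistic locking)
$stmt = $oDatabase->m_oConnection->prepare(
    "UPDATE round SET victor_nation = ? WHERE round_id = ? AND victor_nation = ?");
$stmt->bind_param('iii', $newVictorNation, $roundId, $oldVictorNation);
$stmt->execute();
if ($stmt->affected_rows === 0) {
    // zero rows modified: another user updated the row first, so do nothing
}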
