I have a question regarding MySQL commits and transactions. I have a couple of PHP statements that execute MySQL queries. Do I just say the following?
mysql_query("START TRANSACTION");
//more queries here
mysql_query("COMMIT");
What exactly would this do? How does it help? For updates, deletes and insertions, I also found this, which blocks other queries from reading:
mysql_query("LOCK TABLES t1 WRITE, t2 WRITE");
//more queries here
mysql_query("UNLOCK TABLES t1, t2");
Would this block other queries of whatever nature, or only writes/selects?
Another question: say one query is running and blocks other queries, and another query tries to access the blocked data and sees that it is blocked. How does it proceed? Does it wait until the data is unblocked again and then re-execute the query? Does it just fail and need to be repeated? If so, how can I check?
Thanks a lot!
Dennis
In InnoDB, you do not need to explicitly start or end transactions for single queries if you have not changed the default setting of autocommit, which is "on". If autocommit is on, InnoDB automatically encloses every single SQL query in a transaction, which is the equivalent of START TRANSACTION; query; COMMIT;.
If you explicitly use START TRANSACTION in InnoDB with autocommit on, then the queries executed after the START TRANSACTION statement will either all take effect (when you COMMIT) or none of them will (when you roll back, or the connection fails before the COMMIT). This is useful in banking environments, for example: if I am transferring $500 to your bank account, that operation should only succeed if the sum has been subtracted from my bank balance and added to yours. So in this case, you'd run something like
START TRANSACTION;
UPDATE customers SET balance = balance - 500 WHERE customer = 'Daan';
UPDATE customers SET balance = balance + 500 WHERE customer = 'Dennis';
COMMIT;
This ensures that either both queries will run successfully, or none, but not just one.
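From PHP, the same transfer as a minimal sketch with PDO (the connection details and the exception error mode here are assumptions, not part of the question's code):
$pdo = new PDO('mysql:host=localhost;dbname=bank', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // make failed queries throw exceptions
$pdo->beginTransaction();
try {
    $pdo->exec("UPDATE customers SET balance = balance - 500 WHERE customer = 'Daan'");
    $pdo->exec("UPDATE customers SET balance = balance + 500 WHERE customer = 'Dennis'");
    $pdo->commit();   // both updates become visible together
} catch (Exception $e) {
    $pdo->rollBack(); // neither update is applied
    throw $e;
}
If either UPDATE throws, the catch block rolls back and neither balance changes.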
This post has some more on when you should use transactions.
In InnoDB, you will very rarely have to lock entire tables; InnoDB, unlike MyISAM, supports row-level locking. This means clients do not have to lock the entire table, forcing other clients to wait. Clients should only lock the rows they actually need, allowing other clients to continue accessing the rows they need.
You can read more about InnoDB transactions here. Your questions about deadlocking are answered in sections 14.2.8.8 and 14.2.8.9 of the docs. If a query fails, your MySQL driver will return an error message indicating the reason; your app should then reissue the queries if required.
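For example, a minimal reissue-on-deadlock sketch with mysqli, where $dbc is assumed to be a mysqli connection and run_my_queries() is a hypothetical function containing your statements (MySQL reports a deadlock rollback as error code 1213):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw exceptions on errors
for ($attempt = 1; $attempt <= 3; $attempt++) {
    try {
        $dbc->begin_transaction();
        run_my_queries($dbc); // hypothetical: the queries that may hit a deadlock
        $dbc->commit();
        break;                // success, stop retrying
    } catch (mysqli_sql_exception $e) {
        $dbc->rollback();
        if ($e->getCode() != 1213 || $attempt == 3) {
            throw $e;         // not a deadlock, or out of retries
        }
        // otherwise loop and reissue the whole transaction
    }
}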
Finally, in your example code, you used mysql_query. If you are writing new code, please stop using the old, slow, and deprecated mysql_ library for PHP and use mysqli_ or PDO instead :)
Related
Could executing this query cause a deadlock? If yes, please explain how.
$q="UPDATE SET `count` =`count` + 1 WHERE user_id='$uid' FOR UPDATE";
It will not cause a deadlock. Even if many queries try to update at the same time, each will simply wait for the other to finish its update, or MySQL will run them concurrently if they are updating different rows, provided you are using the InnoDB engine (which supports row-level locking). In MyISAM there is only table-level locking, so the queries end up running sequentially even if they are issued at the same time.
I do not see why there will be a deadlock with this query.
I have a system that handles many queries per second. I code my system with mysql and PHP.
My problem is that the mysqli transaction still commits even though the record is deleted by another user at the same time. All my tables are using InnoDB.
This is how I code my transaction with mysqli:
mysqli_autocommit($dbc,FALSE);
$all_query_ok=true;
$q="INSERT INTO Transaction() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
$q="INSERT INTO Statement() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
if($all_query_ok==true){
//all success
mysqli_commit($dbc);
}else{
//one of them failed, roll back everything.
mysqli_rollback($dbc);
}
Below is the query performed at the same time in another script by another user, which ends up messing up the expected system behaviour:
$q="DELETE FROM Transaction...";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
Please advise: did I implement the transaction wrongly? I have read about row-level locking and believe that InnoDB does lock the record during a transaction.
I don't know which kind of transactions you're talking about but with the mysqli extension I use the following methods to work with transactions:
mysqli::begin_transaction
mysqli::commit
mysqli::rollback
Then the process is like:
Starting a new transaction with mysqli::begin_transaction
Execute your SQL queries
On success, use mysqli::commit to confirm the changes made by your queries in step 2, or, if an error occurred during step 2, use mysqli::rollback to revert them.
You can think of transactions as a temporary cache for your queries. It's somewhat similar to output caching in PHP with the ob_* functions: as long as you haven't flushed the cached data, nothing appears on screen. Same with transactions: as long as you haven't committed anything (and autocommit is turned off), nothing happens in the database.
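As a minimal object-oriented sketch of those three steps (connection details are placeholders; the table names are taken from your example):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // let failed queries throw exceptions
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');  // placeholder connection details
$mysqli->begin_transaction();                               // step 1
try {
    $mysqli->query("INSERT INTO Transaction () VALUES ()"); // step 2: run your queries
    $mysqli->query("INSERT INTO Statement () VALUES ()");
    $mysqli->commit();                                      // step 3: confirm the changes
} catch (Exception $e) {
    $mysqli->rollback();                                    // step 3 (on error): revert the changes
    throw $e;
}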
I did some research on row-level locking, which can protect a record from being deleted or updated:
FOR UPDATE
Official Documentation
Right after beginning the transaction, I have to select the records I want to lock, like below:
SELECT * FROM Transaction WHERE id=1 FOR UPDATE
That way, the record will be locked until the transaction ends.
This method does not work on MyISAM tables.
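Putting it together with mysqli, as a rough sketch ($dbc is assumed to be a mysqli connection to InnoDB tables):
$dbc->begin_transaction();
// The FOR UPDATE lock makes a concurrent DELETE or UPDATE of this row wait
// until this transaction commits or rolls back.
$result = $dbc->query("SELECT * FROM Transaction WHERE id=1 FOR UPDATE");
$row = $result->fetch_assoc();
// ... run the INSERT/UPDATE/DELETE statements that depend on $row ...
$dbc->commit(); // the row lock is released here (or on rollback)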
Looks like a typical example of race condition. You execute two concurrent scripts modifying data in parallel. Probably your first script successfully inserts records and commits the transaction, and the second script successfully deletes records afterwards. I'm not sure what you mean by "the query performed at the same time in other script by other user" though.
You will have to do it this way:
mysqli_autocommit($dbc,FALSE);
$dbc->begin_transaction();
$all_query_ok=true;
$q="INSERT INTO Transaction() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
$q="INSERT INTO Statement() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
if($all_query_ok==true){
//all success
mysqli_commit($dbc);
}else{
//one of them failed, roll back everything.
mysqli_rollback($dbc);
}
You can use the object-oriented or the procedural style when calling begin_transaction (I prefer the object-oriented one).
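For reference, a trivial sketch of the two styles ($dbc being the mysqli connection):
// Object-oriented style:
$dbc->begin_transaction();
// Equivalent procedural style:
// mysqli_begin_transaction($dbc);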
I am using InnoDB in MySQL and accessing the table from PHP with PDO.
I need to lock the table, do a select, and then, depending on the result of that, either insert a row or not. Since I want to have the table locked for as short a time as possible, can I do it like this?
prepare select
prepare insert
begin transaction
lock table
execute select
if reservation time is available then execute insert
unlock table
commit
Or do the prepares have to be inside the transaction? Or do they have to be after the lock?
Should the transaction only include the insert, or does that make any difference?
beginTransaction turns off autocommit mode, so it only matters for statements that actually write data. Preparing statements and plain SELECTs are not affected by it, so the prepares do not have to be inside the transaction or after the lock. One caveat on LOCK TABLES: in MySQL it implicitly commits any active transaction, so it does not really combine with transactions. In fact, if you're only doing a single INSERT there's no need to even use a transaction; you would only need one if you wanted to do multiple write queries atomically.
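For illustration, a minimal PDO sketch of that ordering, with made-up table and column names; it uses a SELECT ... FOR UPDATE row lock (as discussed elsewhere on this page) instead of LOCK TABLES:
// Prepares can happen before the transaction starts.
$check  = $pdo->prepare('SELECT COUNT(*) FROM reservations WHERE slot = ? FOR UPDATE');
$insert = $pdo->prepare('INSERT INTO reservations (slot, customer) VALUES (?, ?)');
$pdo->beginTransaction();
$check->execute([$slot]);
if ($check->fetchColumn() == 0) { // the reservation time is still available
    $insert->execute([$slot, $customer]);
}
$pdo->commit();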
I have the following PHP code:
$dbh->beginTransaction();
$dbh->exec("LOCK TABLES
`reservations` WRITE, `settings` WRITE");
$dbh->exec("CREATE TEMPORARY TABLE
temp_reservations
SELECT * FROM reservations");
$dbh->exec("ALTER TABLE
`temp_reservations`
ADD INDEX ( conf_num ) ; ");
// [...Other stuff here with temp_reservations...]
$dbh->exec("DELETE QUICK FROM `reservations`");
$dbh->exec("OPTIMIZE TABLE `reservations`");
$dbh->exec("INSERT INTO `reservations` SELECT * FROM temp_reservations");
var_dump(GlobalContainer::$dbh->inTransaction()); // true
$dbh->exec("UNLOCK TABLES");
$dbh->rollBack();
Transactions are working fine for regular updates/inserts, but the above code for some reason is not. When an error happens above, I'm left with a completely empty reservations table. I read on the PDO::beginTransaction page that "some databases, including MySQL, automatically issue an implicit COMMIT when a database definition language (DDL) statement such as DROP TABLE or CREATE TABLE is issued within a transaction". The MySQL manual has a list of "Data Definition Statements" (which I assume is the same as the DDL mentioned above), and it does list CREATE TABLE, but I am only creating a temporary table. Is there any way around this?
Also, does the fact that I'm left with an empty reservations table show that a commit occurred after the DELETE QUICK FROM reservations query?
Edit: On an additional note, the INSERT INTO reservations line also produces the following error:
Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.
I tried doing $dbh->setAttribute( PDO::MYSQL_ATTR_USE_BUFFERED_QUERY , true); but this doesn't seem to affect it. I'm assuming it would have something to do with the transaction, but I'm not sure. Can anyone pinpoint what exactly is causing this error as well?
Your OPTIMIZE TABLE statement is causing an implicit commit.
I'm not sure exactly what you're trying to do, but it looks like you can shorten your code to:
$dbh->exec("OPTIMIZE TABLE `reservations`");
All the other code is just making the job more complex, for no gain.
I'm also assuming you're using InnoDB tables, because MyISAM tables wouldn't support transactions anyway: every DDL or DML operation on a MyISAM table takes effect immediately, as if it were committed.
By the way, buffered queries have nothing to do with transactions. They have to do with fetching SELECT result sets one row at a time, versus fetching the whole result set into memory in PHP, then iterating through it. See explanation at: http://php.net/manual/en/mysqlinfo.concepts.buffering.php
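For example, OPTIMIZE TABLE returns a small result set, and exec() never fetches it, so that is a likely reason the connection here is left with a pending result. A sketch of the usual fix is to run result-returning statements with query() and consume or close the result before the next statement:
$stmt = $dbh->query("OPTIMIZE TABLE `reservations`");
$stmt->fetchAll();    // consume the status rows that OPTIMIZE TABLE returns
$stmt->closeCursor(); // free the result set before issuing the next statement
$dbh->exec("INSERT INTO `reservations` SELECT * FROM temp_reservations");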
Been scratching my head on this for a while....
I have a PDO object with $pdo->setAttribute(PDO::ATTR_AUTOCOMMIT, 0); as I want to use FOR UPDATE with some InnoDB tables. Reading the MySQL documentation, FOR UPDATE will only lock the rows it reads if:
You are in a transaction
You are not in a transaction and set autocommit=0 has been issued
So, I am using ATTR_AUTOCOMMIT to allow the PDO object to lock rows. In either case, this is causing INSERT and UPDATE statements to not apply. These statements have nothing to do with FOR UPDATE; they are just running through the same PDO object with prepared statements.
My MySQL query log looks like:
xxx Connect user#host
xxx Query set autocommit=0
xxx Query INSERT INTO foo_tbl (bar, baz) VALUES ('hello','world')
xxx Quit
PHP/PDO doesn't complain, but selecting from the table shows that the data hasn't been written.
The queries that I am running have been run thousands of times before; only the ATTR_AUTOCOMMIT change has been made. Removing that option makes everything work again. Transactions are also working fine with the autocommit=0 option.
Are there additional calls that need to be made on the PDO object (commit() rightly complains that it isn't in a transaction) to make the changes stick? Basically, I want a plain PDO object but with the option to lock rows outside of transactions for InnoDB tables (the background to why is too long and boring for here).
I'm sure this is something stupid I am missing *scratches head*
$db = new PDO('mysql:dbname=test');
$db->setAttribute(PDO::ATTR_AUTOCOMMIT, 0);
var_dump($db->query('SELECT @@autocommit')->fetchAll()); // OK, confirms autocommit is 0
$db->query("INSERT INTO foo (bar) VALUES ('a');");
$db->query("COMMIT;"); // commit via SQL rather than via the PDO interface/method
But essentially, you're in a transaction (you just haven't started it through PDO), so a rollback etc. is also still available. It's quite debatable whether this is a bug (not being able to call commit() directly).