This seems like a simple enough question, yet I couldn't find any definitive answers specific to MySQL. Look at this:
$mysqli->autocommit(false); // Start the transaction
$success = true;
/* do a bunch of inserts here, which will be rolled back and
   set $success to false if they fail */
if ($success) {
    if ($mysqli->commit()) {
        /* display success message, possibly redirect to another page */
    } else {
        /* display error message */
        $mysqli->rollback(); // <----------- Do I need this?
    }
}
$mysqli->autocommit(true); // Turns autocommit back on (will be turned off again if needed)
// Keep running regardless, possibly executing more inserts
The thing is, most examples I have seen just end the script if committing failed, either by letting it finish or by explicitly calling exit(), which apparently rolls the transaction back automatically(?). But what if I need to keep running and possibly execute more database-altering operations later? If this commit failed and I didn't have that rollback there, would turning autocommit back on (which, according to this comment on the PHP manual's entry on autocommit, does commit pending operations), or even explicitly calling another $mysqli->commit() later on, attempt to commit the previous inserts again, since the commit failed before and they weren't rolled back?
I hope I've been clear enough and that I can get a definitive answer for this, which has been bugging me quite a lot.
Edit: OK, maybe I phrased my question wrong in that line comment. It's not really a matter of whether or not I need the rollback, which, as was pointed out, depends on my use case, but rather what the effect is of having or not having a rollback on that line. Perhaps a simpler question would be: does a failed commit call discard pending operations, or does it just leave them in their pending state, waiting for a rollback or another commit?
If you are NOT re-using the connection and it is closed immediately after the transaction fails, closing the connection would cause an implicit rollback anyway.
If you are re-using the connection, you should definitely do a rollback to avoid inconsistency with any follow-up statements.
And if you are not really re-using it but it is still in a blocking state (e.g. being left open for a couple of seconds or even minutes, depending on whether you're on a website or in a cronjob), please keep in mind that there can be many concurrent connections going on. If you have a very large transaction, the server needs to hold it in a temporary state, which might consume lots of memory (e.g. if you're doing a major database migration that affects lots of columns or tables), so you should definitely do an explicit rollback, or close the connection for an implicit rollback, after it fails.
Another factor: if you have lots of concurrent connections in different processes, they may or may not already see parts of the transaction, even though it's not committed yet, depending on the transaction isolation level that you are using. See also:
https://dev.mysql.com/doc/refman/8.0/en/innodb-transaction-isolation-levels.html
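To make the re-use case concrete, here is a minimal sketch (the $mysqli connection and the orders table are assumptions for illustration): roll back explicitly when the commit fails, so no pending changes leak into later statements on the same connection.
$mysqli->autocommit(false);
$ok = $mysqli->query("INSERT INTO orders (total) VALUES (9.99)");
if ($ok && $mysqli->commit()) {
    // success: display message, redirect, etc.
} else {
    // explicit rollback so the connection carries no pending
    // changes into whatever statements run next
    $mysqli->rollback();
}
$mysqli->autocommit(true); // keep using the connection safely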
Related
I'm using PHP's mysqli library. Database inserts and updates are always in a try-catch block. Success of each query is checked immediately (if $result === false), and any failure throws an exception. The catch calls mysqli_rollback() and exits with a message for the user.
My question is, should I bother checking the return value of mysqli_rollback()? If so, and rollback fails, what actions should the code take?
I have a hard time understanding how a rollback could fail (barring some atrocious bug in MySQL). And since PHP is going to exit anyway, calling rollback almost seems superfluous. I certainly think it should be in the code for clarity, but when PHP exits it will close the connection to MySQL and uncommitted transactions are rolled back automatically.
If rollback fails (connection failure, for example), the changes will be rolled back after the connection closes anyway, so you don't need to handle the error. When you are in a transaction, unless you commit explicitly (or you are running in autocommit mode, which means there is a commit after each statement), the transaction is rolled back.
If a session that has autocommit disabled ends without explicitly committing the final transaction, MySQL rolls back that transaction.
The only case where you would want to handle a rollback error is if you are not exiting from your script but starting a new transaction later, as starting a transaction will implicitly commit the current one. Check out Statements That Cause an Implicit Commit.
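For illustration, a hedged sketch of that one case ($link, and the reconnect details, are assumptions rather than part of the original answer):
// only worth checking when the script keeps running and will
// open another transaction on the same handle later
if (!mysqli_rollback($link)) {
    // rollback failed (e.g. the connection is gone); do not start
    // a new transaction here, since on a live handle START TRANSACTION
    // would implicitly commit whatever is still pending. Dropping the
    // connection forces an implicit rollback server-side instead.
    mysqli_close($link);
    $link = mysqli_connect('localhost', 'user', 'pass', 'db'); // hypothetical credentials
}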
mysqli_rollback can fail if you're not (and never were) connected to the database. It depends on your error handling beforehand.
I have an application, running on a PHP + MySQL platform, using the Doctrine2 framework. I need to execute 3 db queries during one http request: first INSERT, second SELECT, third UPDATE. The UPDATE is dependent on the result of the SELECT query. There is a high probability of concurrent http requests. If such a situation occurs and the DB queries get interleaved (e.g. INS1, INS2, SEL1, SEL2, UPD1, UPD2), it will result in data inconsistency. How do I assure atomicity of the INS-SEL-UPD operation? Do I need to use some kind of locks, or are transactions sufficient?
The answer from @YaK is actually a good answer. You should know how to deal with locks in general.
Addressing Doctrine2 specifically, your code should look like:
$em->getConnection()->beginTransaction();
try {
    $toUpdate = $em->find('Entity\WhichWillBeUpdated', $id, \Doctrine\DBAL\LockMode::PESSIMISTIC_WRITE);
    // this will append FOR UPDATE http://docs.doctrine-project.org/en/2.0.x/reference/transactions-and-concurrency.html
    $em->persist($anInsertedOne);
    // you can flush here as well, to obtain the ID after insert if needed
    $toUpdate->changeValue('new value');
    $em->persist($toUpdate);
    $em->flush();
    $em->getConnection()->commit();
} catch (\Exception $e) {
    $em->getConnection()->rollback();
    throw $e;
}
Every subsequent request that fetches the same row for update will wait until the transaction of the process that acquired the lock finishes. MySQL releases the lock automatically after the transaction finishes, whether it succeeds or fails. By default, the InnoDB lock timeout is 50 seconds, so if your process does not finish its transaction within 50 seconds, it will roll back and release the lock automatically. You do not need any additional fields on your entity.
A table-wide LOCK is guaranteed to work in all situations. But it is quite blunt, because it essentially prevents concurrency rather than dealing with it.
However, if your script holds the locks for a very short time frame, it might be an acceptable solution.
If your table uses the InnoDB engine (MyISAM does not support transactions), a transaction is the most efficient solution, but also the most complex.
For your very specific need (in the same table: first INSERT, second SELECT, third UPDATE depending on the result of the SELECT query):
Start a transaction
INSERT your records. Other transactions will not see these new rows until your own transaction is committed (unless you use a non-standard isolation level)
SELECT your record(s) with SELECT...LOCK IN SHARE MODE. You now have a READ lock on these rows, no one else may change these rows. (*)
Compute whatever you need to compute to determine whether or not you need to UPDATE something.
UPDATE the rows if required.
Commit
Expect errors at any time. If a deadlock is detected, MySQL may decide to ROLLBACK your transaction to escape the deadlock. If another transaction is updating the rows you are trying to read from, your transaction may be blocked for some time, or even time out.
The atomicity of your transaction is guaranteed if you proceed this way.
(*) in general, rows not returned by this SELECT may still be inserted in a concurrent transaction, that is, the non-existence is not guaranteed throughout the course of the transaction unless proper precautions are taken
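As a rough illustration of those steps in PHP, here is a sketch only: the table and column names are made up, and mysqli_report() is enabled so failed queries throw exceptions.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli->begin_transaction(); // step 1
try {
    // step 2: insert, invisible to other transactions until commit
    $mysqli->query("INSERT INTO items (cart_id, qty) VALUES (42, 1)");
    // step 3: read with a shared lock so no one else may change these rows
    $res = $mysqli->query(
        "SELECT SUM(qty) AS total FROM items WHERE cart_id = 42 LOCK IN SHARE MODE"
    );
    $total = (int) $res->fetch_assoc()['total'];
    // steps 4-5: compute, then update only if required
    if ($total > 10) {
        $mysqli->query("UPDATE carts SET bulk_discount = 1 WHERE id = 42");
    }
    $mysqli->commit(); // step 6
} catch (Exception $e) {
    $mysqli->rollback(); // e.g. a deadlock detected by InnoDB
    throw $e;
}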
Transactions won't prevent thread B from reading the values thread A has not locked, so you must use locks to prevent concurrent access.
@Gediminas explained how you can use locks with Doctrine.
But using locks can result in deadlocks or lock timeouts.
Doctrine renders these SQL errors as RetryableExceptions.
These exceptions are often normal if you are in a high concurrency environment.
They can happen very often and your application should handle them properly.
Each time a RetryableException is thrown by Doctrine, the proper way to handle this is to retry the whole transaction.
As easy as it seems, there is a trap: the Doctrine 2 EntityManager becomes unusable after a RetryableException, and you must create a new one to replay your whole transaction.
I wrote this article illustrated with a full example.
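A minimal sketch of that retry loop, assuming a $makeEm callback that builds a fresh EntityManager (since the old one is closed) and an arbitrary attempt limit:
use Doctrine\DBAL\Exception\RetryableException;

$attempts = 0;
do {
    $em = $makeEm(); // assumed factory, e.g. wrapping EntityManager::create()
    $em->getConnection()->beginTransaction();
    try {
        doAllTheTransactionStuff($em); // assumed application callback
        $em->getConnection()->commit();
        break; // success, leave the loop
    } catch (RetryableException $e) {
        $em->getConnection()->rollBack();
        $em->close(); // the EntityManager is unusable from here on
        if (++$attempts >= 3) {
            throw $e; // give up after a few tries
        }
    }
} while (true);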
When a deadlock situation occurs in MySQL/InnoDB, it returns this familiar error:
'Deadlock found when trying to get lock; try restarting transaction'
So what I did was record all the queries that go into a transaction, so that they can simply be reissued if a statement in the transaction fails. Simple.
Problem: When you have queries that depend on the results of previous queries, this doesn't work so well.
For Example:
START TRANSACTION;
INSERT INTO some_table ...;
-- Application here gets ID of thing inserted: $id = $database->LastInsertedID()
INSERT INTO some_other_table (id,data) VALUES ($id,'foo');
COMMIT;
In this situation, I can't simply reissue the transaction as it was originally created. The ID acquired by the first SQL statement is no longer valid after the transaction fails but is used by the second statement. Meanwhile, many objects have been populated with data from the transaction which then become obsolete when the transaction gets rolled back. The application code itself does not "roll back" with the database of course.
Question is: How can i handle these situations in the application code? (PHP)
I'm assuming two things. Please tell me if you think I'm on the right track:
1) Since the database can't just reissue a transaction verbatim in all situations, my original solution doesn't work and should not be used.
2) The only good way to do this is to wrap any and all transaction-issuing code in its own try/catch block and attempt to reissue the code itself, not just the SQL.
Thanks for your input. You rock.
A transaction can fail. Deadlock is one failure case, and you can get more failures at serializable isolation levels as well. Transaction isolation problems are a nightmare; trying to avoid failures altogether is the wrong approach, I think.
I think any well-written transaction code should be prepared for failing transactions.
As you have seen, recording queries and replaying them is not a solution, because by the time you restart your transaction the database has moved on. If it were a valid solution, the SQL engine would certainly do it for you. For me the rules are:
redo all your reads inside the transaction (any data you read outside may have been altered)
throw away everything from the previous attempt; if you have written things outside of the transaction (logs, LDAP, anything outside the DBMS), it should be cancelled because of the rollback
redo everything, in fact :-)
This mean a retry loop.
So you have your try/catch block with the transaction inside. You need to add a while loop with maybe 3 attempts; you leave the while loop once the commit part of the code succeeds. If the transaction is still failing after 3 retries, throw an Exception to the user, so that you don't end up in an infinite retry loop (you may have a really big problem, in fact). Note that you should handle plain SQL errors and deadlock/serialization failures in different ways. 3 is an arbitrary number; you may try a bigger number of attempts.
This may give something like that:
$retry = 0;
$notdone = TRUE;
while ($notdone && $retry < 3) {
    try {
        $transaction->begin();
        do_all_the_transaction_stuff();
        $transaction->commit();
        $notdone = FALSE;
    } catch (Exception $e) {
        // here we could differentiate basic SQL errors and deadlock/serializable errors
        $transaction->rollback();
        undo_all_non_database_stuff();
        $retry++;
    }
}
if (3 == $retry) {
    throw new Exception("Try later, sorry, too many guys over there, or it's not your day.");
}
And that means all the stuff (reads, writes, functional things) must be enclosed in do_all_the_transaction_stuff(). This implies the transaction-managing code lives in the controllers, the high-level application-functional main code, not split across several low-level database-access model objects.
Can I insert something into a MySQL database using PHP and then immediately make a call to access that, or is the insert asynchronous (in which case the possibility exists that the database has not finished inserting the value before I query it)?
What I think the OP is asking is this:
<?
$id = $db->insert(..);
// in this case, $row will always have the data you just inserted!
$row = $db->select(...where id=$id...)
?>
In this case, if you do an insert, you will always be able to access the last inserted row with a select. That doesn't change even if a transaction is used here.
If the value is inserted in a transaction, it won't be accessible to any other transaction until your original transaction is committed. Other than that it ought to be accessible at least "very soon" after the time you commit it.
There are normally two ways of using MySQL (and most other SQL databases, for that matter):
Transactional. You start a transaction (either implicitly or by issuing something like 'BEGIN'), issue commands, and then either explicitly commit the transaction, or roll it back (failing to take any action before cutting off the database connection will result in automatic rollback).
Auto-commit. Each statement is automatically committed to the database as it's issued.
The default mode may vary, but even if you're in auto-commit mode, you can "switch" to transactional just by issuing a BEGIN.
If you're operating transactionally, any changes you make to the database will be local to your db connection/instance until you issue a commit. Issuing a commit should block until the transaction is fully committed, so once it returns without error, you can assume the data is there.
If you're operating in auto-commit (and your database library isn't doing something really strange), you can rely on data you've just entered to be available as soon as the call that inserts the data returns.
Note that best practice is to always operate transactionally. Even if you're only issuing a single atomic statement, it's good to be in the habit of properly BEGINing and COMMITing a transaction. It also saves you from trouble when a new version of your database library switches to transactional mode by default and suddenly all your one-line SQL statements never get committed. :)
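For instance, a minimal sketch of that habit with PDO (the connection details are placeholders, not a specific recommendation):
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->beginTransaction();
try {
    $pdo->exec("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
    $pdo->commit(); // returns only once the change is durably committed
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}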
Mostly the answer is yes. You would have to do some special work to force a database call to be asynchronous in the way you describe, and as long as you're doing it all in the same thread, you should be fine.
What is the context in which you're asking the question?
I am using Zend_Db to insert some data inside a transaction. My function starts a transaction and then calls another method that also attempts to start a transaction and of course fails (I am using MySQL 5). So, the question is: how do I detect that a transaction has already been started?
Here is a sample bit of code:
try {
    Zend_Registry::get('database')->beginTransaction();
    $totals = self::calculateTotals($Cart);
    $PaymentInstrument = new PaymentInstrument;
    $PaymentInstrument->create();
    $PaymentInstrument->validate();
    $PaymentInstrument->save();
    Zend_Registry::get('database')->commit();
    return true;
} catch (Zend_Exception $e) {
    Bootstrap::$Log->err($e->getMessage());
    Zend_Registry::get('database')->rollBack();
    return false;
}
Inside PaymentInstrument::create there is another beginTransaction statement that produces the exception that says that transaction has already been started.
The framework has no way of knowing if you started a transaction. You can even use $db->query('START TRANSACTION') which the framework would not know about because it doesn't parse SQL statements you execute.
The point is that it's an application responsibility to track whether you've started a transaction or not. It's not something the framework can do.
I know some frameworks try to do it, and do cockamamie things like count how many times you've begun a transaction, only resolving it when you've done commit or rollback a matching number of times. But this is totally bogus because none of your functions can know if commit or rollback will actually do it, or if they're in another layer of nesting.
(Can you tell I've had this discussion a few times? :-)
Update 1: Propel is a PHP database access library that supports the concept of the "inner transaction" that doesn't commit when you tell it to. Beginning a transaction only increments a counter, and commit/rollback decrements the counter. Below is an excerpt from a mailing list thread where I describe a few scenarios where it fails.
Update 2: Doctrine DBAL also has this feature. They call it Transaction Nesting.
Like it or not, transactions are "global" and they do not obey object-oriented encapsulation.
Problem scenario #1
I call commit(), are my changes committed? If I'm running inside an "inner transaction" they are not. The code that manages the outer transaction could choose to roll back, and my changes would be discarded without my knowledge or control.
For example:
Model A: begin transaction
Model A: execute some changes
Model B: begin transaction (silent no-op)
Model B: execute some changes
Model B: commit (silent no-op)
Model A: rollback (discards both model A changes and model B changes)
Model B: WTF!? What happened to my changes?
Problem scenario #2
If an inner transaction rolls back, it could discard legitimate changes made by an outer transaction. When control is returned to the outer code, it believes its transaction is still active and available to be committed. With your patch, it could call commit(), and since the transDepth is now 0, it would silently set $transDepth to -1 and return true, after not committing anything.
Problem scenario #3
If I call commit() or rollback() when there is no transaction active, it sets the $transDepth to -1. The next beginTransaction() increments the level to 0, which means the transaction can neither be rolled back nor committed. Subsequent calls to commit() will just decrement the transaction to -1 or further, and you'll never be able to commit until you do another superfluous beginTransaction() to increment the level again.
Basically, trying to manage transactions in application logic without allowing the database to do the bookkeeping is a doomed idea. If you have a requirement for two models to use explicit transaction control in one application request, then you must open two DB connections, one for each model. Then each model can have its own active transaction, which can be committed or rolled back independently from one another.
Do a try/catch: if the exception is that a transaction has already started (based on error code or the message of the string, whatever), carry on. Otherwise, throw the exception again.
Store the return value of beginTransaction() in Zend_Registry, and check it later.
Looking at the Zend_Db as well as the adapters (both mysqli and PDO versions) I'm not really seeing any nice way to check transaction state. There appears to be a ZF issue regarding this - fortunately with a patch slated to come out soon.
For the time being, if you'd rather not run unofficial ZF code, the mysqli documentation says you can SELECT @@autocommit to find out if you're currently in a transaction (err... not in autocommit mode).
For InnoDB you should be able to use
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX WHERE TRX_MYSQL_THREAD_ID = CONNECTION_ID();
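Wrapped in PHP, that check might look like this sketch (assuming a mysqli handle; note that INNODB_TRX only lists transactions that have actually touched InnoDB):
$res = $mysqli->query(
    "SELECT COUNT(*) AS active FROM INFORMATION_SCHEMA.INNODB_TRX
     WHERE TRX_MYSQL_THREAD_ID = CONNECTION_ID()"
);
$inTransaction = ((int) $res->fetch_assoc()['active']) > 0;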
This discussion is fairly old. As some have pointed out, you can do it in your application. Since PHP 5.3.3, PDO has a method to tell whether you are in the middle of a transaction: PDO::inTransaction() returns true or false. Link: http://php.net/manual/en/pdo.intransaction.php
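A quick demonstration (connection details are placeholders; note that inTransaction() only tracks transactions started via beginTransaction(), not a raw START TRANSACTION sent through exec()):
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
var_dump($pdo->inTransaction()); // bool(false)
$pdo->beginTransaction();
var_dump($pdo->inTransaction()); // bool(true)
$pdo->rollBack();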
You can also write your code as follows:
try {
    Zend_Registry::get('database')->beginTransaction();
} catch (Exception $e) {
}

try {
    $totals = self::calculateTotals($Cart);
    $PaymentInstrument = new PaymentInstrument;
    $PaymentInstrument->create();
    $PaymentInstrument->validate();
    $PaymentInstrument->save();
    Zend_Registry::get('database')->commit();
    return true;
} catch (Zend_Exception $e) {
    Bootstrap::$Log->err($e->getMessage());
    Zend_Registry::get('database')->rollBack();
    return false;
}
In web-facing PHP, scripts are almost always invoked during a single web request. What you would really like to do in that case is start a transaction and commit it right before the script ends. If anything goes wrong, throw an exception and roll back the entire thing. Like this:
wrapper.php:
try {
// start transaction
include("your_script.php");
// commit transaction
} catch (RollbackException $e) {
// roll back transaction
}
The situation gets a little more complex with sharding, where you may be opening several connections. You have to add them to a list of connections whose transactions should be committed or rolled back at the end of the script. However, realize that in the case of sharding, unless you have a global mutex on transactions, you will not easily be able to achieve true isolation or atomicity of concurrent transactions, because another script might be committing its transactions to the shards while you're committing yours. You might also want to check out MySQL's distributed transactions.
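The end-of-request bookkeeping could look like this sketch (the function name and structure are made up; without true XA/distributed transactions this loop is not atomic across shards):
function finishAll(array $connections, $success)
{
    // $connections: PDO handles collected while the request ran
    foreach ($connections as $pdo) {
        if ($success) {
            $pdo->commit();
        } else {
            $pdo->rollBack();
        }
    }
}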
Use the Zend profiler: look for begin as the query text (with query type Zend_Db_Profiler::TRANSACTION) without a commit or rollback query text afterwards. (This assumes there is no ->query("START TRANSACTION") and that the Zend profiler is enabled in your application.)
I disagree with Bill Karwin's assessment that keeping track of transactions started is cockamamie, although I do like that word.
I have a situation where I have event handler functions that might get called by a module not written by me. My event handlers create a lot of records in the db. I definitely need to roll back if something wasn't passed correctly or is missing or something goes, well, cockamamie. I cannot know whether the outside module's code triggering the event handler is handling db transactions, because the code is written by other people. I have not found a way to query the database to see if a transaction is in progress.
So I DO keep count. I'm using CodeIgniter, which seems to do strange things if I ask it to start nested db transactions (e.g. calling its trans_start() method more than once). In other words, I can't just include trans_start() in my event handler, because if an outside function is also using trans_start(), rollbacks and commits don't occur correctly. There is always the possibility that I haven't yet figured out how to manage those functions correctly, but I've run many tests.
All my event handlers need to know is, has a db transaction already been initiated by another module calling in? If so, it does not start another new transaction and does not honor any rollbacks or commits either. It does trust that if some outside function has initiated a db transaction then it will also be handling rollbacks/commits.
I have wrapper functions for CodeIgniter's transaction methods and these functions increment/decrement a counter.
function transBegin(){
    //increment our number of levels
    $this->_transBegin += 1;
    //if we are only one level deep, we can create transaction
    if ($this->_transBegin == 1) {
        $this->db->trans_begin();
    }
}

function transCommit(){
    if ($this->_transBegin == 1) {
        //if we are only one level deep, we can commit transaction
        $this->db->trans_commit();
    }
    //decrement our number of levels
    $this->_transBegin -= 1;
}

function transRollback(){
    if ($this->_transBegin == 1) {
        //if we are only one level deep, we can roll back transaction
        $this->db->trans_rollback();
    }
    //decrement our number of levels
    $this->_transBegin -= 1;
}
In my situation, this is the only way to check for an existing db transaction. And it works. I wouldn't say that "The Application is managing db transactions". That's really untrue in this situation. It is simply checking whether some other part of the application has started any db transactions, so that it can avoid creating nested db transactions.
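To illustrate how the counter behaves when an outer module already owns the transaction (a hypothetical call sequence, not from the original post):
$this->transBegin();  // outer module: counter 0 -> 1, trans_begin() actually runs
$this->transBegin();  // event handler: counter 1 -> 2, no-op
$this->transCommit(); // event handler: counter is 2, no-op, then 2 -> 1
$this->transCommit(); // outer module: counter is 1, trans_commit() runs, then 1 -> 0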
Maybe you can try PDO::inTransaction... it returns TRUE if a transaction is currently active, and FALSE if not.
I have not tested it myself, but it seems not bad!