I'm facing a doubt: what happens if I disable autocommit for 2 different connections to 2 different databases, in a nested way?
Example code:
$conn = new MainDB();           // DB class
$conn_second = new NotMainDB(); // another DB class
try {
    $conn->dbh->beginTransaction();        // disable autocommit
    $conn_second->dbh->beginTransaction(); // disable autocommit on 2nd DB
    $run = $conn->dbh->prepare(/* UPDATE STATEMENT */);
    $run->execute();
    $run = $conn->dbh->prepare(/* ANOTHER UPDATE STATEMENT */);
    $run->execute();
    $ssp = $conn_second->dbh->prepare(/* AN INSERT STATEMENT ON ANOTHER DB */);
    $ssp->execute();
    $conn_second->dbh->commit();
    $conn->dbh->commit();
} catch (Exception $ex) {
    $conn->dbh->rollBack();
    $conn_second->dbh->rollBack();
}
Is there anything I have to take care of? Has anyone already experienced such a case?
Thanks
This should work even if both of the connections refer to the same database.
Think of it this way: when you set up Apache normally and two users visit your site at once, they open transactions simultaneously, and there are no problems whatsoever.
Basically, it's a perfectly normal state of affairs for a database to handle multiple connections at once.
Just be sure not to cause any deadlocks, though. A deadlock happens when connection A waits for B to finish while B waits for A to finish. I imagine this might happen, for example, when you use triggers with circular dependencies. These are rather rare scenarios, though, especially in PHP, and deadlocks usually arise at the application level rather than at the DB level.
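To make that failure mode concrete, here is a minimal sketch (the accounts table and the interleaving are hypothetical) of two requests that deadlock by locking the same two rows in opposite order:
$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Request 1 runs:
$db->beginTransaction();
$db->exec("UPDATE accounts SET balance = balance - 10 WHERE id = 1"); // locks row 1
// ... meanwhile Request 2 has run:
//     UPDATE accounts SET balance = balance - 10 WHERE id = 2;       -- locks row 2
$db->exec("UPDATE accounts SET balance = balance + 10 WHERE id = 2"); // waits for row 2
// ... and Request 2 now runs:
//     UPDATE accounts SET balance = balance + 10 WHERE id = 1;       -- waits for row 1
// Cycle: each waits on the other. InnoDB detects this and rolls one back with
// "Deadlock found when trying to get lock". Locking rows in a fixed order
// (e.g. by ascending id) avoids the cycle.
$db->commit();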
Related
I need to put multiple values into 2 databases. The thing is, if one of those INSERTs fails, I need all the others to roll back.
The question
Is it possible to run two transactions simultaneously, inserting some values into the databases, and then commit or roll back both of them?
The Code
$res = new ResultSet();   // class connecting and letting me query the first database
$res2 = new ResultSet2(); // the other database
$res->query("BEGIN");
$res2->query("BEGIN");
try
{
    $res->query("INSERT xxx~~") or wyjatek('rollback');  // wyjatek() throws an exception if the query fails
    $res2->query("INSERT yyy~~") or wyjatek('rollback');
    // ...
    // if everything goes well
    $res->query("COMMIT");
    $res2->query("COMMIT");
    // SHOW some GREEN text saying it's done
}
catch (Exception $e)
{
    // if wyjatek throws an exception
    $res->query("ROLLBACK");
    $res2->query("ROLLBACK");
    // SHOW some RED text saying it failed
}
Summary
So, is this the proper way, and will it even work?
All tips appreciated.
Theoretically
If you remove
or wyjatek('rollback')
your script will work.
But looking at the documentation:
Transactions are isolated within a single database. If you want transactions that span multiple databases using MySQL, look at XA Transactions.
Support for XA transactions is available for the InnoDB storage engine.
XA supports distributed transactions, that is, the ability to permit multiple separate transactional resources to participate in a global transaction. Transactional resources often are RDBMSs but may be other kinds of resources.
An application performs actions that involve different database servers, such as a MySQL server and an Oracle server (or multiple MySQL servers), where actions that involve multiple servers must happen as part of a global transaction, rather than as separate transactions local to each server.
See also The XA Specification, published by The Open Group and available at http://www.opengroup.org/public/pubs/catalog/c193.htm
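For illustration, a rough sketch of that statement flow driven from PDO (the xid 'gtrid1', the DSNs, and the table names are made up; error handling omitted):
// two PDO connections, one per server
$db1 = new PDO('mysql:host=server1;dbname=db1', 'user', 'pass');
$db2 = new PDO('mysql:host=server2;dbname=db2', 'user', 'pass');

$db1->exec("XA START 'gtrid1'");
$db2->exec("XA START 'gtrid1'");

$db1->exec("UPDATE t1 SET x = x + 1");        // work on the first server
$db2->exec("INSERT INTO t2 (x) VALUES (1)");  // work on the second server

$db1->exec("XA END 'gtrid1'");
$db2->exec("XA END 'gtrid1'");

// phase 1: both servers promise they can commit
$db1->exec("XA PREPARE 'gtrid1'");
$db2->exec("XA PREPARE 'gtrid1'");

// phase 2: only after both prepares succeed, commit both
$db1->exec("XA COMMIT 'gtrid1'");
$db2->exec("XA COMMIT 'gtrid1'");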
What about letting PostgreSQL do the dirty work?
http://www.postgresql.org/docs/9.1/static/warm-standby.html#SYNCHRONOUS-REPLICATION
What you propose will almost always work. But for some uses, 'almost always' is not good enough.
If you have deferred constraints, the commit on $res2 could fail on a constraint violation, and then it is too late to roll back $res.
Or, one of your servers or the network could fail between the first commit and the second. If PHP, database1, and database2 are all on the same hardware, the window for this failure mode is pretty small, but not negligible.
If 'almost always' is not good enough, and you cannot migrate one set of data to live inside the other database, then you might need to resort to "prepared transactions".
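For completeness, a minimal sketch of what prepared transactions look like on PostgreSQL, assuming max_prepared_transactions is enabled on both servers (the name 'tx1' and the tables are arbitrary):
$pg1 = new PDO('pgsql:host=server1;dbname=db1', 'user', 'pass');
$pg2 = new PDO('pgsql:host=server2;dbname=db2', 'user', 'pass');

$pg1->exec("BEGIN");
$pg2->exec("BEGIN");
$pg1->exec("INSERT INTO a (v) VALUES (1)");  // work on the first database
$pg2->exec("INSERT INTO b (v) VALUES (1)");  // work on the second database

// phase 1: persist both transactions; constraint checks happen here
$pg1->exec("PREPARE TRANSACTION 'tx1'");
$pg2->exec("PREPARE TRANSACTION 'tx1'");

// phase 2: once both prepares succeeded, these commits cannot fail
// on a constraint violation
$pg1->exec("COMMIT PREPARED 'tx1'");
$pg2->exec("COMMIT PREPARED 'tx1'");
// on error before phase 2: ROLLBACK PREPARED 'tx1' on whichever side prepared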
I am helping a friend with a web-based form for their business. I am trying to get it ready to handle multiple users. I have set it up so that, just before the record is displayed for editing, I lock the record with the following code.
$query = "START TRANSACTION;";
mysql_query($query);
$query = "SELECT field FROM table WHERE ID = \"$value\" FOR UPDATE;";
mysql_query($query);
(Okay, that is greatly simplified, but that is the essence of the MySQL.)
It does not appear to be working. However, when I go directly to MySQL from the command line, log in as the same user, and execute
START TRANSACTION;
SELECT field FROM table WHERE ID = "40" FOR UPDATE;
I can effectively block the web form from accessing record "40" and get the timeout warning.
I have tried using BEGIN instead of START TRANSACTION. I have tried doing SET AUTOCOMMIT=0 first and starting the transaction after locking, but I cannot seem to lock the row from the PHP code. Since I can lock the row from the command line, I do not think there is a problem with how the database is set up. I am really hoping there is some simple thing I have missed in my reading.
FYI, I am developing on XAMPP version 1.7.3 which has Apache 2.2.14, MySQL 5.1.41 and PHP 5.3.1.
Thanks in advance. This is my first time posting, but I have gleaned a lot of knowledge from this site in the past.
The problem is not the syntax of your code, but the way you are trying to use it.
just before the record is displayed for editing I am locking the record with the following code
From this I am assuming that you select and "lock" the row, then display the edit page to your user, and when they submit the changes it saves and "unlocks" the record. Herein lies the fundamental problem: when your page is done loading, PHP exits and closes the MySQL connection. When this happens, all the locks are immediately released. This is why the console seems to behave differently from your PHP; the equivalent in the console would be exiting the program.
You cannot lock table rows for editing over an extended period; that is not what they are designed for. If you want to lock a record for editing, you need to track these locks in another table. Create a new table called "edit_locks", and store the record id being locked, the user id editing, and the time it was locked. When you want to open a record for editing, lock the entire edit_locks table and query it to see whether the record is locked by someone else. If it is not, insert your lock record; if it is, display a "locked" error. When the user saves or cancels, remove the lock record from edit_locks. If you want to make things easy, just lock this table any time your program wants to use it; this will help you avoid a race condition. A sketch of the acquisition step follows.
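A minimal sketch of that acquisition step, using the old mysql_* API from the question and an assumed edit_locks(record_id, user_id, locked_at) layout:
mysql_query("LOCK TABLES edit_locks WRITE"); // serialize all lock checks

$rs = mysql_query("SELECT user_id FROM edit_locks WHERE record_id = 40");
if (mysql_num_rows($rs) == 0) {
    // nobody holds it: take the lock
    mysql_query("INSERT INTO edit_locks (record_id, user_id, locked_at)
                 VALUES (40, 7, NOW())");
    $gotLock = true;
} else {
    $gotLock = false; // show the "record is locked" error instead
}

mysql_query("UNLOCK TABLES");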
There is one more scenario that can cause a problem. If the user opens a record for editing and then closes the browser without saving or canceling, the edit lock will just stay there forever. This is why I said to store the time it was locked. The editor itself should make an AJAX call every 2 minutes or so to say "I still need the lock!". When the PHP program receives this "relock" request, it should find the lock and update its timestamp to the current time. That way the timestamp on the lock is always up to date to within 2 minutes. You also need another program to remove old, stale locks; it should run in a cron job every few minutes and delete any locks with a timestamp older than 5 minutes or so. If the timestamp is older than that, then clearly the editor was closed somehow, or the timestamp would have been kept current.
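The relock handler and the cron cleanup could look roughly like this (intervals and names are illustrative, same assumed schema as above):
// relock: called by the editor's AJAX ping every ~2 minutes
mysql_query("UPDATE edit_locks SET locked_at = NOW()
             WHERE record_id = 40 AND user_id = 7");

// cron job: every few minutes, drop locks that were not refreshed
mysql_query("DELETE FROM edit_locks
             WHERE locked_at < NOW() - INTERVAL 5 MINUTE");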
Like some of the others have mentioned, you should try to use mysqli. It stands for "MySQL Improved" and is the replacement for the old interface.
This is an old discussion, but perhaps people are still following it. I use a method similar to JMack's but include the locking information in the table I want to row-lock. My new columns are LockTime and LockedBy. To attempt a lock, I do:
UPDATE table
SET LockTime='$now',LockedBy='$Userid'
WHERE Key='$value' AND (LockTime IS NULL OR LockTime<'$past' OR LockedBy='$Userid')
($past is 4 minutes ago)
If this fails, someone else has the lock. I can explicitly unlock as follows or let my lock expire:
UPDATE table
SET LockTime=NULL,LockedBy=''
WHERE Key='$value' AND LockedBy='$Userid'
A cron job could remove old locks, but it's not really necessary.
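To tell from PHP whether the lock attempt succeeded, you can check the affected-row count; a rough sketch with PDO (the table and column names follow the answer above, backticked because table and Key are reserved words):
$stmt = $pdo->prepare("UPDATE `table`
                       SET LockTime = ?, LockedBy = ?
                       WHERE `Key` = ?
                       AND (LockTime IS NULL OR LockTime < ? OR LockedBy = ?)");
$stmt->execute(array($now, $Userid, $value, $past, $Userid));

if ($stmt->rowCount() == 1) {
    // we hold the lock
} else {
    // someone else has it
}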
Use PDO for this (and all database operations):
$value = 40;
try {
    $dbh = new PDO("mysql:host=localhost;dbname=dbname", 'username', 'password');
    // make PDO throw exceptions so the catch blocks below actually fire
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch (PDOException $e) {
    die('Could not connect to database.');
}
$dbh->beginTransaction();
try {
    $stm = $dbh->prepare("SELECT field FROM table WHERE ID = ? FOR UPDATE");
    $stm->execute(array($value));
    $dbh->commit();
} catch (PDOException $e) {
    $dbh->rollBack();
}
If you must use the antiquated mysql_* functions, you can do something like:
mysql_query('SET AUTOCOMMIT=0');
mysql_query('START TRANSACTION');
mysql_query($sql);
mysql_query('COMMIT'); // or ROLLBACK on failure; re-enabling autocommit below also implicitly commits
mysql_query('SET AUTOCOMMIT=1');
You should not use the mysql API, because it makes it easy to write code that is open to SQL injection, and it also lacks some functionality. I suspect transactions are one of the gaps, because, if I'm not wrong, every query is sent "by itself" and not in a larger context.
The solution, however, is to use some other API. I prefer mysqli because it's so similar to mysql and widely supported. You can easily rewrite your code to use mysqli instead.
For the transaction functionality, set auto-commit to false and commit yourself when you want to; this does the same as starting and stopping transactions (see the sketch after the links below).
For reference, look at:
http://www.php.net/manual/en/mysqli.autocommit.php
http://www.php.net/manual/en/mysqli.commit.php
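A minimal sketch of that pattern with mysqli (connection parameters and table names are placeholders):
$db = new mysqli('localhost', 'user', 'pass', 'dbname');

$db->autocommit(false); // disable auto-commit: queries now join one transaction
$ok1 = $db->query("INSERT INTO t (v) VALUES (1)");
$ok2 = $db->query("UPDATE t SET v = v + 1 WHERE id = 1");

if ($ok1 && $ok2) {
    $db->commit();   // make both changes permanent
} else {
    $db->rollback(); // undo both
}
$db->autocommit(true);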
The problem is the mysql_* functions. You could use mysqli for this:
http://nz.php.net/manual/en/class.mysqli.php
or PDO, described here:
PHP + MySQL transactions examples
HTH
What happens when two different clients call the same PHP function that contains PDO::beginTransaction()?
Does one of them fail, or can two instances of PHP execute the contents of a beginTransaction ... commit block at the same time?
I.e.:
try {
    db::beginTransaction();
    // queries here
    // can two separate PHP instances go in here at the same time?
    db::commit();
} catch (Exception $e) {
    db::rollback();
}
Each instance of a PHP script (more accurately, each instance of PDO) opens up a connection to the database (from the DB perspective, a new session). Backend databases (with the exception of a few flat-file ones) support multiple connections, but end up locking their individual resources differently. Depending on the queries executed in your transaction, you may end up causing a deadlock. That said, having multiple connections to the database open at the same time does not necessarily put you in a deadlock scenario.
I've got a transactional application that works like so:
try {
    $db->begin();
    increaseNumber();
    $db->commit();
} catch (Exception $e) {
    $db->rollback();
}
And then inside increaseNumber() I'll have a query like the following (this is the only function that works with this table):
// FOR UPDATE locks this row so no other transaction can lock or change it
// until we commit (plain non-locking SELECTs can still read the old snapshot)
$result = $db->select("SELECT item1
                       FROM units
                       WHERE id = '{$id}'
                       FOR UPDATE");
$result = $db->update("UPDATE units SET item1 = item1 + 1
                       WHERE id = '{$id}'");
Everything is wrapped in a transaction, but lately I've been dealing with some pretty slow queries, and there's a lot of concurrency going on in my application, so I can't really guarantee that queries run in a specific order.
Can deadlocks cause ACID transactions to break? I have one function that adds something and another that removes it, but when deadlocks occur I find the data is completely out of sync, as if the transactions were ignored.
Is this bound to happen, or is something else wrong?
Thanks, Dominic
Well, if a transaction runs into a lock (from another transaction) that doesn't release, it will fail after a timeout; for InnoDB the default is 50 seconds. You should take note of whether anyone is using any third-party applications on the database. I know for a fact that, for example, SQL Manager 2007 does not release locks on InnoDB unless you disconnect from the database (sometimes it takes a Commit Transaction on ... well, everything), which causes a lot of queries to fail after the timeout. Of course, if your transactions are ACID-compliant, each should execute all-or-nothing. Consistency breaks only if you split logically related changes across separate transactions.
You can try extending the timeout, but a lock held that long might imply some deeper problems. The relevant timeout depends, of course, on what storage engine you're using (from the MySQL tag and the use of transactions, I assumed InnoDB).
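For InnoDB the relevant setting is innodb_lock_wait_timeout; a sketch, assuming your $db wrapper has a generic query method (on older MySQL versions this variable is not settable at runtime and has to go in my.cnf instead):
// raise InnoDB's lock wait timeout (default 50s) for this session
$db->query("SET SESSION innodb_lock_wait_timeout = 120");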
You can also try turning on query profiling to see if any queries run for a ridiculous amount of time. Just note that it does, of course, decrease performance, so it may not be a production solution.
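For example, with MySQL's built-in session profiler (available in the 5.x line; again assuming a generic query method on the wrapper):
$db->query("SET profiling = 1");          // start timing queries on this connection
$db->query("SELECT item1 FROM units WHERE id = 1 FOR UPDATE");
$profiles = $db->select("SHOW PROFILES"); // lists each query with its duration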
The A in ACID stands for Atomic, so no, deadlocks cannot make an ACID transaction break; rather, the transaction simply will not happen, as in all-or-nothing.
More likely, if you see inconsistent data, your application is doing multiple "transactions" within what is logically a single transaction. For example: the user creates an account (transaction begin ... commit), then the user sets a password (transaction begin ... deadlock ... rollback); your application ignores the error and continues, and now your database is left with a user created and no password.
Look at what else your application is doing besides the rollback, and at whether building up the consistent data logically involves multiple parts; a sketch of the fix follows.
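For the hypothetical account/password example above, the fix is to keep both steps in one transaction and rethrow instead of swallowing the error; a sketch using the question's $db wrapper (createAccount() and setPassword() are made-up helpers):
try {
    $db->begin();
    createAccount($db, $user);      // step 1
    setPassword($db, $user, $pass); // step 2, same transaction
    $db->commit();                  // both happen, or ...
} catch (Exception $e) {
    $db->rollback();                // ... neither does
    throw $e;                       // don't silently continue with half-done work
}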
In PHP I am using PDO to interact with databases. One procedure that commonly takes place consists of multiple queries (several SELECTs and UPDATEs). This works most of the time, but occasionally the data becomes corrupt when two (or more) instances of the procedure run concurrently.
What is the best way to work around this issue? Ideally I would like a solution which works with the majority of PDO drivers.
Assuming your database back end supports transactions (MySQL with InnoDB, Postgres, etc.), then simply wrapping the operation in question in a transaction will solve the problem. If one instance of the script is in the middle of its transaction when a second script attempts to start one, the second script's database changes will be queued up and not attempted until the first transaction completes. This means the database will always be in a valid state, provided the transaction starting and committing logic is implemented correctly.
if ($inTransaction = $pdo->beginTransaction())
{
    // Do your selects and updates here. Try to keep this section as short as
    // possible though, as you don't want to keep other pending transactions waiting
    if ($condition_for_success_met)
    {
        $pdo->commit();
    }
    else
    {
        $pdo->rollback();
    }
}
else
{
    // Couldn't start a transaction. Handle error here
}