In PHP I am using PDO to interact with databases. One procedure that commonly takes place consists of multiple queries (several SELECTs and UPDATEs). This works most of the time, but occasionally the data becomes corrupt when two (or more) instances of the procedure run concurrently.
What is the best way to work around this issue? Ideally I would like a solution that works with the majority of PDO drivers.
Assuming your database back end supports transactions (MySQL with InnoDB, PostgreSQL, etc.), simply wrapping the operation in question in a transaction will solve the problem. If one instance of the script is in the middle of the transaction when a second instance attempts to touch the same rows, the second script's changes will block until the first transaction completes (note that for a read-then-update sequence you also need a locking read such as SELECT ... FOR UPDATE, otherwise both scripts can read the old value before either writes). This means the database will always be in a valid state, provided the transaction begin/commit logic is implemented correctly.
if ($inTransaction = $pdo->beginTransaction())
{
    // Do your selects and updates here. Try to keep this section as short as
    // possible though, as you don't want to keep other pending transactions waiting
    if ($condition_for_success_met)
    {
        $pdo->commit();
    }
    else
    {
        $pdo->rollBack();
    }
}
else
{
    // Couldn't start a transaction. Handle error here
}
We are trying to run a transaction using the WordPress wpdb object - not sure if that matters.
$wpdb->query('START TRANSACTION'); // 'BEGIN TRANSACTION' is not valid MySQL syntax
// Run transaction-related queries
if ($error) {
    $wpdb->query('ROLLBACK');
} else {
    $wpdb->query('COMMIT');
}
Now it seems like MySQL does this brilliant thing of setting autocommit to on, which causes every query to be committed as soon as it executes. We learnt that we can disable this amazing feature by running SET autocommit = 0.
At the end of our queries we will be running SET autocommit = 1. My question is: would this affect any other queries being run on the DB at the same time?
Unfortunately, not every database supports transactions, so PDO needs to run in what is known as "auto-commit" mode when you first open the connection. Auto-commit mode means that every query you run has its own implicit transaction (if the database supports transactions), or no transaction at all if the database doesn't support them. If you need a transaction, you must use the PDO::beginTransaction() method to initiate one. Also note that in MySQL autocommit is a session-scoped variable: SET autocommit = 0/1 only changes the setting for your own connection and has no effect on queries other clients are running at the same time.
Now guess whether that is good or bad.
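For completeness, a minimal sketch of the wpdb version under those constraints (the table name and data are hypothetical, and it assumes InnoDB tables, since MyISAM silently ignores transaction statements):
<?php
global $wpdb;
// START TRANSACTION implicitly suspends autocommit for this session only;
// queries from other connections are unaffected.
$wpdb->query('START TRANSACTION');
$ok = $wpdb->insert('my_table', array('col' => 'value')); // hypothetical table/data
// ... more transaction-related queries ...
if ($ok === false) {
    $wpdb->query('ROLLBACK');
} else {
    $wpdb->query('COMMIT');
}
// Autocommit behaviour resumes once COMMIT/ROLLBACK ends the transaction.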
So, let's say I'm using two drivers at the same time (specifically mysql and sqlite3).
I have a set of changes that must be commit()ted on both connections only if neither DBMS failed, or rollBack()ed if either of them did fail:
<?php
interface DBList
{
    function addPDO(PDO $connection);

    // calls ->beginTransaction() on all the PDO instances
    function beginTransaction();

    // calls ->commit() on all the PDO instances
    function commit();

    // calls ->rollBack() on all the PDO instances
    function rollBack();
}
Question is: will it actually work? Does it make sense?
"Why not just use mysql?" you might say. I'm not a masochist! I need mysql for normal use by my application, but I also need to keep a copy of one table that is always synchronized, downloadable, and portable!
Thanks a lot in advance!
I suspect you are putting the cart before the horse! If
- the two databases are in sync,
- a transaction commits successfully on one DB, and
- no OS-level error occurs,
then the transaction will also commit successfully on the second DB.
So what you would want to do is:
- Start the transaction on MySQL
- Record all data-changing SQL (see the caveat below)
- Commit the transaction on MySQL
- If the commit works, run the recorded SQL against SQLite
- If not, roll back MySQL
Caveat: The assumption above is only valid if the sequence of transactions is identical on both DBs. So you would want to record the SQL into a MySQL table, which is subject to the same transaction logic as the rest. This does the serialization work for you.
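A hedged sketch of that record-and-replay idea (the sql_log table and the runAndRecord() helper are hypothetical names, and it assumes the recorded SQL is valid in both dialects):
<?php
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
                  array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
$sqlite = new PDO('sqlite:/path/to/copy.db');
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Run a statement on MySQL and record it in a log table inside the same
// transaction, so the log is serialized exactly like the data changes.
function runAndRecord(PDO $mysql, $sql)
{
    $mysql->exec($sql);
    $log = $mysql->prepare('INSERT INTO sql_log (statement) VALUES (?)');
    $log->execute(array($sql));
}

try
{
    $mysql->beginTransaction();
    runAndRecord($mysql, "UPDATE items SET qty = qty - 1 WHERE id = 42"); // hypothetical statement
    $mysql->commit();
}
catch (Exception $e)
{
    $mysql->rollBack();
    throw $e; // nothing was committed, so there is nothing to replay
}

// MySQL committed, so replay the recorded statements against SQLite.
foreach ($mysql->query('SELECT id, statement FROM sql_log ORDER BY id') as $row)
{
    $sqlite->exec($row['statement']);
    $mysql->exec('DELETE FROM sql_log WHERE id = ' . (int) $row['id']);
}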
You are mistaking PDO for a database server. PDO is just an interface, pretty much like the database console. It doesn't perform any data operations of its own. It cannot insert or select data. It cannot perform data locks or transactions. All it can do is send your command to the database server and bring back the results, if any. It's just an interface. It doesn't have transactions of its own.
So, instead of such fictional cross-driver transactions you can use regular ones.
Start two, one for each driver, and then roll them back accordingly. By the way, with PDO you don't have to roll back manually. Just set PDO to exception mode, write your queries, and add a commit at the end. If one of the queries fails, all open transactions will be rolled back automatically when the script terminates.
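A minimal sketch of that approach, with hypothetical DSNs, table and data (note there is still a small window where the first commit has succeeded and the second could fail, so the two stores can diverge there):
<?php
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
                  array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
$sqlite = new PDO('sqlite:/path/to/copy.db');
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One regular transaction per connection.
$mysql->beginTransaction();
$sqlite->beginTransaction();

// In exception mode any failed query throws; if the script then terminates,
// both uncommitted transactions are rolled back by their servers.
$mysql->exec("INSERT INTO items (name) VALUES ('widget')");
$sqlite->exec("INSERT INTO items (name) VALUES ('widget')");

$mysql->commit();
$sqlite->commit();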
I need to put multiple values into 2 databases. The thing is, if one of those INSERTs fails, I need all the others to roll back.
The question
Is it possible to run two transactions simultaneously, inserting some values into both databases, and then commit or roll back both of them?
The Code
$res = new ResultSet();   // class connecting to and querying the first database
$res2 = new ResultSet2(); // the other database

$res->query("BEGIN");
$res2->query("BEGIN");
try
{
    $res->query("INSERT xxx~~") or wyjatek('rollback');  // wyjatek() throws an exception if the query fails
    $res2->query("INSERT yyy~~") or wyjatek('rollback');
    ......
    // if everything goes well
    $res->query("COMMIT");
    $res2->query("COMMIT");
    // SHOW some GREEN text saying it's done
}
catch (Exception $e)
{
    // if wyjatek throws an exception
    $res->query("ROLLBACK");
    $res2->query("ROLLBACK");
    // SHOW some RED text saying it failed
}
Summary
So, is this a proper way, and will it even work?
All tips appreciated.
Theoretically
If you remove
or wyjatek('rollback')
your script will work.
But looking at the documentation:
Transactions are isolated within a single database. If you want transactions that span multiple databases using MySQL, have a look at XA Transactions.
Support for XA transactions is available for the InnoDB storage engine.
XA supports distributed transactions, that is, the ability to permit multiple separate transactional resources to participate in a global transaction. Transactional resources often are RDBMSs but may be other kinds of resources.
An application performs actions that involve different database servers, such as a MySQL server and an Oracle server (or multiple MySQL servers), where actions that involve multiple servers must happen as part of a global transaction, rather than as separate transactions local to each server.
The XA Specification. This document is published by The Open Group and available at http://www.opengroup.org/public/pubs/catalog/c193.htm
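For reference, a minimal sketch of the XA statement flow through PDO (the transaction id 'xatest' and the statement are arbitrary; every participant must be an XA-capable server such as MySQL with InnoDB):
<?php
// Sketch of MySQL's XA two-phase commit flow over PDO. Real code would
// run the same sequence on each participating server and only issue
// XA COMMIT once every branch has prepared successfully.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
               array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
try
{
    $pdo->exec("XA START 'xatest'");
    $pdo->exec("INSERT INTO items (name) VALUES ('widget')"); // hypothetical statement
    $pdo->exec("XA END 'xatest'");
    $pdo->exec("XA PREPARE 'xatest'"); // phase one: make the branch durable
    $pdo->exec("XA COMMIT 'xatest'");  // phase two: commit the prepared branch
}
catch (Exception $e)
{
    // Simplified: XA ROLLBACK is only valid once the branch has been ended
    // or prepared; real code must track which phase actually failed.
    $pdo->exec("XA ROLLBACK 'xatest'");
}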
What about letting PostgreSQL do the dirty work?
http://www.postgresql.org/docs/9.1/static/warm-standby.html#SYNCHRONOUS-REPLICATION
What you propose will almost always work. But for some uses, 'almost always' is not good enough.
If you have deferred constraints, the commit on $res2 could fail on a constraint violation, and then it is too late to roll back $res.
Or, one of your servers or the network could fail between the first commit and the second. If PHP, database1, and database2 are all on the same hardware, the window for this failure mode is pretty small, but not negligible.
If 'almost always' is not good enough, and you cannot migrate one set of data to live inside the other database, then you might need to resort to "prepared transactions".
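"Prepared transactions" here means PostgreSQL's two-phase commit. A hedged sketch of the idea, with arbitrary transaction ids; it assumes both connections are PostgreSQL and that max_prepared_transactions is enabled on both servers:
<?php
// Sketch of two-phase commit with PostgreSQL prepared transactions.
// PREPARE TRANSACTION makes a transaction durable without committing it,
// so a crash between the two commits leaves a recoverable prepared
// transaction instead of divergent databases.
$db1 = new PDO('pgsql:host=host1;dbname=app', 'user', 'pass',
               array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
$db2 = new PDO('pgsql:host=host2;dbname=app', 'user', 'pass',
               array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
$db1->exec("BEGIN");
$db2->exec("BEGIN");
try
{
    $db1->exec("INSERT INTO t (v) VALUES (1)"); // hypothetical statements
    $db2->exec("INSERT INTO t (v) VALUES (1)");
    // Phase one: both sides promise they will be able to commit.
    $db1->exec("PREPARE TRANSACTION 'tx1'");
    $db2->exec("PREPARE TRANSACTION 'tx2'");
    // Phase two: actually commit. A crash here can be finished later
    // with COMMIT PREPARED, so the databases never silently diverge.
    $db1->exec("COMMIT PREPARED 'tx1'");
    $db2->exec("COMMIT PREPARED 'tx2'");
}
catch (Exception $e)
{
    // Simplified: real code must track whether each side is still in an
    // open transaction (ROLLBACK) or already prepared (ROLLBACK PREPARED).
    $db1->exec("ROLLBACK");
    $db2->exec("ROLLBACK");
}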
I'm making a webapp where there'll be multiple users interacting with each other and reading, making decisions on, and modifying shared data.
I've read that transactions are atomic, which is what I need. However, I'm not sure how that works with PHP's PDO::beginTransaction().
I mean atomic as in: if one transaction is editing some data, all other transactions modifying/reading that data will need to wait until the first transaction finishes. For example, I don't want two scripts reading a value, both incrementing the old one, and effectively storing only one increment. The second script should have to wait for the first one to finish.
In almost all the examples I've seen, the queries are executed consecutively (for example, PHP + MySQL transactions examples). A lot of what I'm doing requires
querying and fetching,
checking that data, and acting on it, as part of the same transaction.
Will the transaction still work atomically if there is PHP code between the queries?
I know you should prepare your statements outside the transaction, but is it okay to prepare them inside? Basically, I'm worried about PHP activity ruining the atomicity of a transaction.
Here's an example (this one doesn't require checking a previous value). I have a very basic inbox system which stores mail as a serialized array (if someone has a better recommendation please let me know). So I query for it, append the new message, and store it. Will it work as expected?
$getMail = $con->prepare('SELECT messages FROM inboxes WHERE id=?');
$storeMail = $con->prepare('UPDATE inboxes SET messages=? WHERE id=?');
$con->beginTransaction();
$getMail->execute(array($recipientID));
$result = $getMail->fetch();
$result = unserialize($result[0]);
$result[] = $msg;
$storeMail->execute(array(serialize($result), $recipientID));
$con->commit();
Transactions are atomic only with respect to other database connections trying to use the same data, i.e. other connections will see either no changes made by your transaction or all of them; "atomic" means no other database connection will see an in-between state with some data updated and some not.
PHP code between queries won't break atomicity, and it does not matter where you prepare your statements. Note, however, that atomicity alone does not give you the "second script waits for the first" behaviour you describe: to stop two scripts from both reading the old value, the SELECT has to take a lock, e.g. SELECT ... FOR UPDATE on InnoDB.
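Applied to the inbox example above, that is a one-line change (assuming the inboxes table uses a locking engine such as InnoDB):
$getMail = $con->prepare('SELECT messages FROM inboxes WHERE id=? FOR UPDATE');
// A concurrent transaction issuing the same locking read now blocks until
// this transaction commits, so no appended message can be lost.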
What happens when two different clients call the same PHP function that calls PDO::beginTransaction()?
Does one of them fail, or can two instances of PHP execute the contents of a beginTransaction ... commit block at the same time?
I.e.:
try
{
    $db->beginTransaction();
    // queries here
    // can two separate PHP instances go in here at the same time?
    $db->commit();
}
catch (Exception $e)
{
    $db->rollBack();
}
Each instance of a PHP script (more accurately, each instance of PDO) opens its own connection to the database (from the DB's perspective, a new session). Backend databases (with the exception of a few flat-file ones) support multiple concurrent connections, but lock their individual resources differently. So yes, two separate PHP instances can be inside the same beginTransaction ... commit block at the same time. Depending on the queries executed in your transactions, you may end up causing a deadlock; that said, having multiple connections to the database open at the same time does not necessarily put you in a deadlock scenario.
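As a hypothetical illustration of the deadlock case: if instance A locks row 1 then row 2 while instance B locks row 2 then row 1, each ends up waiting on the other and the server aborts one of them. A common pattern is to catch that and retry:
try
{
    $db->beginTransaction();
    // ... queries; locking rows in a consistent order (e.g. ascending id)
    // is the usual way to avoid the deadlock in the first place
    $db->commit();
}
catch (PDOException $e)
{
    $db->rollBack();
    // A deadlock victim surfaces as SQLSTATE 40001 ("serialization failure");
    // the standard remedy is to retry the whole transaction.
    if ($e->errorInfo[0] === '40001')
    {
        // retry logic would go here
    }
    else
    {
        throw $e;
    }
}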