PDO beginTransaction in two separate scripts - php

What happens when two different clients call the same PHP function that uses PDO::beginTransaction()?
Does one of them fail, or can two instances of PHP execute the contents of a beginTransaction/commit block at the same time?
i.e.:
try {
    db::beginTransaction();
    // queries here
    // can two separate PHP instances go in here at the same time?
    db::commit();
} catch (Exception $e) {
    db::rollback();
}

Each instance of a PHP script (more accurately, each instance of PDO) opens up a connection to the database (from the DB perspective, a new session). Backend databases (with the exception of a few flat-file ones) support multiple connections, but end up locking their individual resources differently. Depending on the queries executed in your transaction, you may end up causing a deadlock. That said, having multiple connections to the database open at the same time does not necessarily put you in a deadlock scenario.
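To make the "each instance opens its own session" point concrete, here is a minimal sketch (the DSN, credentials, and the use of MySQL are assumptions for illustration):

```php
<?php
// Every `new PDO(...)` opens its own connection, i.e. its own database
// session, so each handle carries an independent transaction.
$a = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$b = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Different session IDs confirm the two handles are separate sessions.
echo $a->query('SELECT CONNECTION_ID()')->fetchColumn(), "\n";
echo $b->query('SELECT CONNECTION_ID()')->fetchColumn(), "\n";

// Both can hold an open transaction at the same time; the database
// isolates them and only blocks when they contend for the same rows.
$a->beginTransaction();
$b->beginTransaction();
$a->commit();
$b->commit();
```

Two concurrent requests to the same script behave exactly like `$a` and `$b` above: neither `beginTransaction()` fails just because the other is open.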

Related

PHP | MYSQL | Nested PDO::beginTransaction() and PDO::commit() on different DBs

I'm facing a doubt: what happens if I disable autocommit for two different connections to two different databases, in a nested way?
Example code:
$conn = new MainDB(); // DB class
$conn_second = new NotMainDB(); // another DB class
try {
$conn->dbh->beginTransaction(); // disable autocommit
$conn_second->dbh->beginTransaction(); // disable autocommit on 2nd DB
$run = $conn->dbh->prepare(/* UPDATE STATEMENT */);
$run->execute();
$run = $conn->dbh->prepare(/* ANOTHER UPDATE STATEMENT */);
$run->execute();
$ssp = $conn_second->dbh->prepare(/* AN INSERT STATEMENT ON ANOTHER DB */);
$ssp->execute();
$conn_second->dbh->commit();
$conn->dbh->commit();
} catch (Exception $ex) {
$conn->dbh->rollBack();
$conn_second->dbh->rollBack();
}
Is there anything I have to take care of? Has anyone already experienced such a case?
Thanks
This should work even if both of the connections refer to the same database.
Think of it this way: when you set up your Apache normally and two users visit your site at once, they open transactions simultaneously, and there are no problems whatsoever.
Basically, it's perfectly normal state for a database to handle multiple connections at once.
Just be sure not to cause any deadlocks though. Deadlocks happen when A waits for B to finish, and B waits for A to finish. I imagine this might happen for example when you use triggers with circular dependencies. These are rather rare scenarios though, especially for PHP, and deadlocks happen usually on user application level rather than on DB level.

Two transactions on two databases

I need to put multiple values into 2 databases. The thing is, if one of those INSERTs fails, I need all the others to roll back.
The question
Is it possible to run two transactions simultaneously, inserting some values into both databases, and then commit or roll back both of them?
The Code
$res = new ResultSet(); //class connecting and letting me query first database
$res2 = new ResultSet2(); //the other database
$res->query("BEGIN");
$res2->query("BEGIN");
try
{
$res->query("INSERT xxx~~") or wyjatek('rollback'); //wyjatek is throwing exception if query fails
$res2->query("INSERT yyy~~")or wyjatek('rollback');
......
//if everything goes well
$res->query("COMMIT");
$res2->query("COMMIT");
//SHOW some GREEN text saying its done.
}
catch(Exception $e)
{
//if wyjatek throws an exception
$res->query("ROLLBACK");
$res2->query("ROLLBACK");
//SHOW some RED text, saying it failed
}
Summary
So is it proper way, or will it even work?
All tips appreciated.
Theoretically
If you remove
or wyjatek('rollback')
your script will work.
But looking at the documentation:
Transactions are isolated within a single database. If you want a transaction that spans multiple databases with MySQL, look at XA Transactions.
Support for XA transactions is available for the InnoDB storage
engine.
XA supports distributed transactions, that is, the ability to permit
multiple separate transactional resources to participate in a global
transaction. Transactional resources often are RDBMSs but may be other
kinds of resources.
An application performs actions that involve different database
servers, such as a MySQL server and an Oracle server (or multiple
MySQL servers), where actions that involve multiple servers must
happen as part of a global transaction, rather than as separate
transactions local to each server.
The XA Specification. This
document is published by The Open Group and available at
http://www.opengroup.org/public/pubs/catalog/c193.htm
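As a rough sketch of what the XA flow could look like from PHP (the DSNs, table names, and the xid `'txn1'` are made up; this assumes both servers are MySQL with InnoDB tables):

```php
<?php
// Two-phase commit across two MySQL servers via XA statements.
$db1 = new PDO('mysql:host=host1;dbname=main', 'user', 'pass');
$db2 = new PDO('mysql:host=host2;dbname=other', 'user', 'pass');

foreach ([$db1, $db2] as $db) {
    $db->exec("XA START 'txn1'");
}

$db1->exec("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
$db2->exec("INSERT INTO audit_log (msg) VALUES ('transfer')");

// Phase 1: each server persists the transaction and promises it can commit.
foreach ([$db1, $db2] as $db) {
    $db->exec("XA END 'txn1'");
    $db->exec("XA PREPARE 'txn1'");
}

// Phase 2: only after every PREPARE succeeded do we commit both sides.
// If a PREPARE fails, issue XA ROLLBACK 'txn1' on the others instead.
foreach ([$db1, $db2] as $db) {
    $db->exec("XA COMMIT 'txn1'");
}
```

The key difference from two plain COMMITs is the PREPARE step: once both servers have prepared, a commit can no longer fail on a constraint, which is exactly the failure window the plain approach leaves open.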
What about letting PostgreSQL do the dirty work?
http://www.postgresql.org/docs/9.1/static/warm-standby.html#SYNCHRONOUS-REPLICATION
What you propose will almost always work. But for some uses, 'almost always' is not good enough.
If you have deferred constraints, the commit on $res2 could fail on a constraint violation, and then it is too late to rollback $res.
Or, one of your servers or the network could fail between the first commit and the second. If the php, database1, and database2 are all on the same hardware, the window for this failure mode is pretty small, but not negligible.
If 'almost always' is not good enough, and you cannot migrate one set of data to live inside the other database, then you might need to resort to "prepared transactions".
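In PostgreSQL, "prepared transactions" means PREPARE TRANSACTION / COMMIT PREPARED, and it requires `max_prepared_transactions > 0` in the server config. A hedged sketch with made-up connection details and table names:

```php
<?php
// Two-phase commit across two PostgreSQL databases.
$db1 = new PDO('pgsql:host=host1;dbname=main', 'user', 'pass');
$db2 = new PDO('pgsql:host=host2;dbname=other', 'user', 'pass');

$db1->exec('BEGIN');
$db2->exec('BEGIN');
$db1->exec("INSERT INTO t1 (v) VALUES ('x')");
$db2->exec("INSERT INTO t2 (v) VALUES ('y')");

// Phase 1: both transactions are persisted in a prepared state.
// Deferred constraints are checked here, not at COMMIT PREPARED,
// so the failure mode described above is caught while rollback
// of the other side is still possible.
$db1->exec("PREPARE TRANSACTION 'gtx1'");
$db2->exec("PREPARE TRANSACTION 'gtx1'");

// Phase 2: committing a prepared transaction cannot fail on a constraint.
$db1->exec("COMMIT PREPARED 'gtx1'");
$db2->exec("COMMIT PREPARED 'gtx1'");
// On error after a successful PREPARE, use ROLLBACK PREPARED 'gtx1'.
```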

mysqli multi_query followed by query [duplicate]

This question already has answers here:
"Commands out of sync; you can't run this command now" - Caused by mysqli::multi_query
(3 answers)
Closed 7 months ago.
I am currently doing the following:
$mysqli = new mysqli($server, $username, $password, $database);
$mysqli->multi_query($multiUpdates);
while ($mysqli->next_result()) {;} // Flushing results of multi_queries
$mysqli->query($sqlInserts);
Is there a faster way to dump the results?
I do not need them and just want to run the next query however I get the error:
Commands out of sync; you can't run this command now
Problem is the while ($mysqli->next_result()) {;} takes about 2 seconds which is a waste for something I don't want.
Any better solutions out there?
Found a faster solution which saves about 2-3 seconds when updating 500 records and inserting 500 records.
function newSQL() {
global $server, $username, $password, $database;
$con = new mysqli($server, $username, $password, $database);
return $con;
}
$mysqli = newSQL();
$mysqli->multi_query($multiUpdates);
$mysqli->close();
$mysqli = newSQL();
$mysqli->query($sqlInserts);
$mysqli->close();
Not sure how practical it is but works well for speed.
If closing and reopening the connection works for you, then you might be better off changing the order:
$mysqli = newSQL();
$mysqli->query($sqlInserts);
$mysqli->multi_query($multiUpdates);
$mysqli->close();
If you don't care which runs first, the query runs more predictably. As soon as it finishes, MySQL will return the results of the insert statement to the PHP client (probably mysqlnd). The connection will then be clear and can accept the next request. No need to close and reopen after a query. So you save the time it takes to close and reopen the connection with this order.
The multi_query is more complicated. It returns the results of the first update before the PHP code continues. At this point, we don't know if the later updates have run or not. The database won't accept any more queries on this connection until it has finished with the multi_query, including passing the results back with next_result. So what happens when you close the query?
One possibility is that it blocks until the multi_query is finished but does not require the results. So closing the connection essentially skips the part where the database server returns the results but still has to wait for them to be generated. Another possibility is that the connection closes immediately and the database continues with the query (this is definitely what happens if the connection is simply dropped without formally closing it, as the database server won't know that the connection is broken until it finishes or times out, see here or here).
You'll sometimes see the claim that query and multi_query take the same time. This is not true. Under some circumstances, multi_query can be slower. Note that with a normal query (using the default MYSQLI_STORE_RESULT), the database can simply return the result as soon as it finishes. But with multi_query (or with MYSQLI_USE_RESULT), it has to retain the result on the database server. If the database server stores the result, it may have to page it out of memory or it may deliberately store the result on disk. Either way, it frees up the memory but puts the result in a state where it takes more time to access (because disk is slower than memory).
NOTE for other readers: multi_query is harder to use safely than query. If you don't really know what you are doing, you are probably better off using PDO than mysqli (because PDO does more of the work for you) and you are almost certainly better off doing your queries one at a time. You should only use multi_query if you understand why it increases the risk of SQL injections and are avoiding it. Further, one usually doesn't need it.
The only real advantage to multi_query is it allows you to do your queries in one block. If you already have queries in a block (e.g. from a database backup), this might make sense. But it generally doesn't make sense to aggregate separate queries into a block so as to use multi_query. It might make more sense to use INSERT ON DUPLICATE KEY UPDATE to update multiple rows in one statement. Of course, that trick won't work unless your updates have a unique key. But if you do, you might be able to combine both the inserts and the updates into a single statement that you can run via query.
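For illustration, a sketch of folding several row updates into one statement (the `items` table and its unique key `id` are assumptions; this is MySQL syntax):

```php
<?php
// Instead of 500 separate UPDATE statements sent via multi_query,
// upsert many rows in a single round trip with a plain query().
$mysqli = new mysqli($server, $username, $password, $database);

$sql = "INSERT INTO items (id, qty)
        VALUES (1, 5), (2, 7), (3, 2)
        ON DUPLICATE KEY UPDATE qty = VALUES(qty)";
$mysqli->query($sql);
```

Because this is a single statement, there is no result set to flush and no "commands out of sync" error; the next `query()` can run immediately.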
If you really need more speed, consider using something other than PHP. PHP produces HTML in response to web requests. But if you don't need HTML/web requests and just want to manipulate a database, any shell language will likely be more performant. And certainly multithreaded languages with connection pools will give you more options.

PDO and Multiple Query / Concurrency Issues

In PHP I am using PDO to interact with databases. One procedure that commonly takes place consists of multiple queries (several SELECT and UPDATE). This works most of the time, but occasionally the data becomes corrupt where two (or more) instances of the procedure run concurrently.
What is the best way to work around this issue? Ideally I would like a solution which works with the majority of PDO drivers.
Assuming your database back end supports transactions (mysql with InnoDB, Postgres, etc), then simply wrapping the operation in question in a transaction will solve the problem. If one instance of the script is in the middle of the transaction when the second scripts attempts to start it, then the second script's database changes will be queued up and not be attempted until the first transaction completes. This means the database will always be in a valid state provided the transaction starting and committing logic is implemented correctly.
if ($pdo->beginTransaction()) {
    // Do your selects and updates here. Try to keep this section as short
    // as possible, as you don't want to keep other pending transactions waiting
    if ($condition_for_success_met) {
        $pdo->commit();
    } else {
        $pdo->rollBack();
    }
} else {
    // Couldn't start a transaction. Handle error here
}
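One caveat: a transaction by itself does not serialize a read-modify-write against a concurrent transaction that reads the same row first. If the corruption comes from that pattern, lock the rows at read time with SELECT ... FOR UPDATE (supported by MySQL/InnoDB and PostgreSQL, so still fairly portable across PDO drivers). A sketch with a hypothetical accounts table:

```php
<?php
$pdo->beginTransaction();

// FOR UPDATE locks the selected row until commit/rollback, so a
// concurrent instance of this procedure blocks here instead of
// reading a stale balance.
$stmt = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
$stmt->execute([$accountId]);
$balance = $stmt->fetchColumn();

$upd = $pdo->prepare('UPDATE accounts SET balance = ? WHERE id = ?');
$upd->execute([$balance - 10, $accountId]);

$pdo->commit();
```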

What is the preferred way to get access a transaction to commit or rollback?

I understand how transactions work and everything functions as expected, but I do not like the way I access connections to commit or rollback transactions.
I have 3 service classes that can access the same singleton connection object. I want to wrap these three things in a single transaction, so I do this:
try {
$service1 = new ServiceOne;
$service2 = new ServiceTwo;
$service3 = new ServiceThree;
$service1->insertRec1($data);
$service2->deleteRec2($data);
$service3->updateRec3($data);
$service1->getSingletonConnection()->commit();
}
catch(Exception $ex) {
$service1->getSingletonConnection()->rollback();
}
The connection object returned by getSingletonConnection is just a wrapper around the oci8 connection, and committing is oci_commit; rollback is oci_rollback.
As I said, this works because they are all accessing the same connection, but it feels wrong to access the connection through any arbitrary service object. Also, there are two different databases used in my app so I need to be sure that I retrieve and commit the correct one... not sure if there is any way around that though.
Is there a better way to handle transactions?
it feels wrong to access the connection through any arbitrary service object.
I agree with you 100%.
It seems to me that if each service only makes up part of a database transaction, then the service cannot be directly responsible for determining the database session to use. You should select and manage the connection at the level of code that defines the transaction.
So your current code would be modified to something like:
try {
$conn = getSingletonConnection();
$service1 = new ServiceOne($conn);
$service2 = new ServiceTwo($conn);
$service3 = new ServiceThree($conn);
$service1->insertRec1($data);
$service2->deleteRec2($data);
$service3->updateRec3($data);
$conn->commit();
}
catch(Exception $ex) {
$conn->rollback();
}
It seems like this would simplify dealing with your two-database issue, since there would only be one place to decide which connection to use, and you would hold a direct reference to that connection until you end the transaction.
If you wanted to expand from a singleton connection to a connection pool, this would be the only way I can think of to guarantee that all three service calls used the same connection.
There's nothing intrinsically wrong with a single connection.
If you have multiple connections, then each runs an independent transaction. You basically have two options.
1. Maintain the current single connection object for each of the three services.
2. Maintain separate connections (with related overheads) for each service, and commit/rollback each individual connection separately (not particularly safe, because you can't guarantee ACID consistency then).
As a way round the two separate database instances that you're connecting to: use db links so that you only connect to a single database
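A hedged sketch of the db-link approach (Oracle syntax; the link, service, and table names are made up). Once the link exists, a single session, and therefore a single local transaction, covers writes to both databases:

```php
<?php
// One-time setup, run on the main database by a DBA (SQL, not PHP):
//   CREATE DATABASE LINK other_db
//     CONNECT TO remote_user IDENTIFIED BY remote_pass USING 'OTHERDB_TNS';

$conn = oci_connect('user', 'pass', 'localhost/MAINDB');

// Both statements belong to the same transaction on $conn; the remote
// write travels over the link, and Oracle coordinates the distributed
// commit transparently when oci_commit() is called.
$s1 = oci_parse($conn, "UPDATE local_tab SET col = 'x' WHERE id = 1");
oci_execute($s1, OCI_NO_AUTO_COMMIT);

$s2 = oci_parse($conn, "INSERT INTO remote_tab@other_db (id) VALUES (1)");
oci_execute($s2, OCI_NO_AUTO_COMMIT);

oci_commit($conn);
```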