Side-effects of disabling auto-commits - php

We are trying to run a transaction using the Wordpress wpdb object - not sure if that matters.
$wpdb->query('BEGIN TRANSACTION');
// Run transaction-related queries
if ($error) {
    // ROLLBACK
} else {
    // COMMIT
}
Now it seems like MySQL does this brilliant thing of setting autocommit to true, which causes every query to commit as soon as it executes. We learnt that we can disable this amazing feature by running SET autocommit = 0.
At the end of our queries we will run SET autocommit = 1. My question is: would this affect any other queries being run against the DB at the same time?
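For reference, the pattern being described would look roughly like this (a sketch only; the table and query are made up, and note that SET autocommit without the GLOBAL keyword is session-scoped, so it only applies to the current connection):
// Sketch of the pattern in question; table and column names are illustrative.
global $wpdb;

$wpdb->query('SET autocommit = 0');   // session-scoped: only affects this connection
$wpdb->query('START TRANSACTION');

$ok = $wpdb->query(
    $wpdb->prepare("UPDATE {$wpdb->prefix}orders SET status = %s WHERE id = %d", 'paid', 42)
);

if ($ok === false) {
    $wpdb->query('ROLLBACK');
} else {
    $wpdb->query('COMMIT');
}

$wpdb->query('SET autocommit = 1');   // restore the default for this connection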

Unfortunately, not every database supports transactions, so PDO needs to run in what is known as "auto-commit" mode when you first open the connection. Auto-commit mode means that every query that you run has its own implicit transaction, if the database supports it, or no transaction if the database doesn't support transactions. If you need a transaction, you must use the PDO::beginTransaction() method to initiate one.
Now you can judge for yourself whether that is good or bad.
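In code, that typically looks something like this (a minimal sketch; the DSN, credentials, table and queries are placeholders):
// Minimal sketch of explicit PDO transactions; names and queries are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();   // leaves auto-commit mode for this connection
    $pdo->exec("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
    $pdo->exec("UPDATE accounts SET balance = balance + 10 WHERE id = 2");
    $pdo->commit();             // the connection returns to auto-commit mode afterwards
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}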

Related

Using PDO, is there a way to handle a transaction across two drivers?

So, let's say I'm using two drivers at the same time (specifically mysql and sqlite3).
I have a set of changes that must be commit()ted on both connections only if both dbms didn't fail, or rollBack()ed if one or the other did fail:
<?php
interface DBList
{
    function addPDO(PDO $connection);

    // calls ->rollBack() on all the pdo instances
    function rollBack();

    // calls ->commit() on all the pdo instances
    function commit();

    // calls ->beginTransaction() on all the pdo instances
    function beginTransaction();
}
Question is: will it actually work? Does it make sense?
"Why not use just mysql?" you would say! I'm not a masochist! I need mysql for the classic fruition via my application, but I also need to keep a copy of a table that is always synchronized and that is also downloadable and portable!
Thank you a lot in advance!
I suspect you are putting the cart before the horse! If
- the two databases are in sync,
- a transaction commits successfully on one DB, and
- no OS-level error occurs,
then the transaction will also commit successfully on the second DB.
So what you would want to do is:
- Start the transaction on MySQL
- Record all data-changing SQL (see later)
- Commit the transaction on MySQL
- If the commit works, run the recorded SQL against SQLite
- If not, roll back MySQL
Caveat: The assumption above is only valid, if the sequence of transactions is identical on both DBs. So you would want to record the SQL into a MySQL table, which is subject to the same transaction logic as the rest. This does the serialization work for you.
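A rough sketch of that record-and-replay approach (untested; the connection details, table names and the sql_log table are all made up for illustration):
// Rough sketch of the record-and-replay approach; all names here are illustrative.
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$sqlite = new PDO('sqlite:copy.db');
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$statements = [
    "INSERT INTO users (username, email) VALUES ('test', 'test@example.com')",
    "UPDATE users SET g_id = 1 WHERE username = 'test'",
];

// Run and record everything inside one MySQL transaction, so the log
// commits (or rolls back) together with the data it describes.
$mysql->beginTransaction();
$log = $mysql->prepare("INSERT INTO sql_log (statement) VALUES (?)");
foreach ($statements as $sql) {
    $mysql->exec($sql);
    $log->execute([$sql]);
}

try {
    $mysql->commit();
} catch (Exception $e) {
    // The MySQL commit failed: roll back and replay nothing against SQLite.
    $mysql->rollBack();
    throw $e;
}

// The MySQL commit worked, so replay the recorded SQL against SQLite.
$sqlite->beginTransaction();
foreach ($statements as $sql) {
    $sqlite->exec($sql);
}
$sqlite->commit();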
You are mistaking PDO for a database server. PDO is just an interface, pretty much like the database console. It doesn't perform any data operations of its own. It cannot insert or select data. It cannot perform data locks or transactions. All it can do is send your command to the database server and bring back results if any. It's just an interface. It doesn't have transactions of its own.
So, instead of such fictional cross-driver transactions you can use regular ones.
Start two, one for each driver, and then roll them back accordingly. By the way, with PDO you don't have to roll back manually. Just set PDO to exception mode, write your queries and add a commit at the end. If one of the queries fails, all started transactions will be rolled back automatically due to script termination.
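A sketch of what that looks like in practice (DSNs, table and queries are made up; PDO::ERRMODE_EXCEPTION makes a failed query throw):
// Sketch only: one ordinary transaction per connection, rolled back together on failure.
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$sqlite = new PDO('sqlite:copy.db');
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$mysql->beginTransaction();
$sqlite->beginTransaction();

try {
    $mysql->exec("INSERT INTO users (username) VALUES ('test')");
    $sqlite->exec("INSERT INTO users (username) VALUES ('test')");

    $mysql->commit();
    $sqlite->commit();
} catch (Exception $e) {
    // Roll back whatever is still open on either connection.
    if ($mysql->inTransaction())  { $mysql->rollBack(); }
    if ($sqlite->inTransaction()) { $sqlite->rollBack(); }
    throw $e;
}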

PDO and Multiple Query / Concurrency Issues

In PHP I am using PDO to interact with databases. One procedure that commonly takes place consists of multiple queries (several SELECT and UPDATE). This works most of the time, but occasionally the data becomes corrupt where two (or more) instances of the procedure run concurrently.
What is the best way to work around this issue? Ideally I would like a solution which works with the majority of PDO drivers.
Assuming your database back end supports transactions (mysql with InnoDB, Postgres, etc), then simply wrapping the operation in question in a transaction will solve the problem. If one instance of the script is in the middle of the transaction when the second scripts attempts to start it, then the second script's database changes will be queued up and not be attempted until the first transaction completes. This means the database will always be in a valid state provided the transaction starting and committing logic is implemented correctly.
if ($inTransaction = $pdo->beginTransaction())
{
    // Do your selects and updates here. Try to keep this section as short as
    // possible though, as you don't want to keep other pending transactions waiting.
    if ($condition_for_success_met)
    {
        $pdo->commit();
    }
    else
    {
        $pdo->rollback();
    }
}
else
{
    // Couldn't start a transaction. Handle error here.
}

PHP and PostgreSQL Transactions?

A long time ago I wrote a php class that handles postgresql db connections.
I've added transactions to my insert/update functions and it works just fine for me. But recently I found out about the "pg_prepare" function.
I'm a bit confused about what that function does and if it'll be better to switch to it.
Currently whenever I do an insert/update my sql looks like this:
$transactionSql = "PREPARE TRANSACTION '".md5(time())."';"
    .$theUpdateOrDeleteSQL.";"
    ."COMMIT;";
This will return something like:
PREPARE TRANSACTION '4601a2e4b4aa2632167d3cc62b516e6d';
INSERT INTO users (username,g_id,email,password)
VALUES('test', '1', 'test','1234');
COMMIT;
I've structured my database with relations and I'm using (when it's possible):
ON DELETE CASCADE
ON UPDATE CASCADE
But I want to be 100% sure things are clean in the database and there are no leftovers if/when there is a failure upon updating/deleting or inserting.
It would be nice if you could share your opinion/experience about pg_prepare: do I really need "PREPARE TRANSACTION", and are there any other additional things that might help me? :)
No, you don't need two-phase commit!
For safe PHP database handling, do not use pg_query directly; rather, wrap it in a special function which does the following:
opens the database connection on your first query
if using persistent connections, ensures the connection is in a known state
registers (via register_shutdown_function) a function that issues a ROLLBACK
makes sure autocommit is off, or simply issues a BEGIN before the first query
logs database errors and slow queries
only uses pg_query_params(), which takes care of SQL injection nicely
That way, if your script crashes or whatever, a rollback is issued automatically. You can only commit by explicitly committing.
If you use persistent connections, beware: php's handling of pg_pconnect is a little... buggy.
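A stripped-down sketch of such a wrapper (function names, the connection string and the thresholds are made up; it only illustrates the points above):
// Sketch of a query wrapper along the lines described above; names are illustrative.
function db_query($sql, array $params = [])
{
    static $conn = null;

    if ($conn === null) {
        // Placeholder connection string.
        $conn = pg_connect("host=localhost dbname=app user=app password=secret");

        // If the script dies without committing, roll the open transaction back.
        register_shutdown_function(function () use ($conn) {
            @pg_query($conn, "ROLLBACK");
        });

        // Start a transaction explicitly instead of relying on autocommit.
        pg_query($conn, "BEGIN");
    }

    $start = microtime(true);

    // Data queries go through pg_query_params() so values are never interpolated
    // into the SQL string; statements without parameters fall back to pg_query().
    $result = $params
        ? pg_query_params($conn, $sql, $params)
        : pg_query($conn, $sql);

    $elapsed = microtime(true) - $start;

    if ($result === false) {
        error_log("query failed: $sql (" . pg_last_error($conn) . ")");
    } elseif ($elapsed > 1.0) {
        error_log(sprintf("slow query (%.2fs): %s", $elapsed, $sql));
    }

    return $result;
}

// Nothing is persisted until you commit explicitly:
db_query('INSERT INTO users (username, email) VALUES ($1, $2)', ['test', 'test@example.com']);
db_query("COMMIT");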
No, you don't need PREPARE TRANSACTION (that is intended for distributed transactions across different servers, as Milen has already pointed out).
I'm not sure how the PHP interface handles that, but as long as you can make sure you are not running in autocommit mode, things should be fine.
If you can't control the autocommit mode, simply put your statements into a BEGIN ... COMMIT block.
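For example (a short sketch; $conn is assumed to be an open pg_connect() connection and the statement is made up):
// Explicit BEGIN ... COMMIT around the statements, with a rollback on failure.
pg_query($conn, "BEGIN");

$ok = pg_query_params($conn,
    'INSERT INTO users (username, email) VALUES ($1, $2)',
    array('test', 'test@example.com')
);

if ($ok === false) {
    pg_query($conn, "ROLLBACK");
} else {
    pg_query($conn, "COMMIT");
}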

In PHP, can I get MySQL to rollback a transaction if I disconnect without committing, rather than commit it?

If I run the following PHP, I would expect no value to be inserted into the test table, because I have a transaction that I haven't committed:
$db = mysql_connect("localhost","test","test");
mysql_select_db("test");
mysql_query("begin transaction;");
mysql_query("insert into Test values (1);") or die("insert error: ". mysql_error());
die('Data should not be committed\n');
mysql_query("commit;"); // never occurs because of the die()
But instead it seems to commit anyway. Is there a way to turn off this behaviour without turning off autocommit for the PHP that doesn't use transactions elsewhere on the site?
Use mysql_query('BEGIN'). The SQL "BEGIN TRANSACTION" is not valid (and in fact mysql_query is returning false on that query, which means there is an error). It's not working because you never start a transaction.
The syntax to start a transaction is:
START TRANSACTION
The feature you are talking about is AUTOCOMMIT. If you don't want it, you'll have to disable it:
SET autocommit = 0
The reference can be found at http://dev.mysql.com/doc/refman/5.1/en/commit.html
I also recommend that you test the return value of all mysql_...() functions. You cannot assume that they'll always run successfully.
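Putting that together, a corrected version of the script might look like this (still the old mysql_* API from the question; an untested sketch):
// Sketch: the same script with a transaction that actually starts,
// and with return values checked.
$db = mysql_connect("localhost", "test", "test") or die("connect error: " . mysql_error());
mysql_select_db("test") or die("select_db error: " . mysql_error());

mysql_query("START TRANSACTION") or die("begin error: " . mysql_error());
mysql_query("insert into Test values (1);") or die("insert error: " . mysql_error());

die('Data should not be committed');   // connection closes, the open transaction is rolled back

mysql_query("COMMIT");                 // never reached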
By default, the transaction will not be rolled back. It is the responsibility of your application code to decide how to handle this error, whether that's trying again, or rolling back.
If you want automatic rollback, that is also explained in the manual:
The current transaction is not rolled back. To have the entire transaction roll back, start the server with the `--innodb_rollback_on_timeout` option.

Can I use a database value right after I insert it?

Can I insert something into a MySQL database using PHP and then immediately make a call to access that, or is the insert asynchronous (in which case the possibility exists that the database has not finished inserting the value before I query it)?
What I think the OP is asking is this:
<?
$id = $db->insert(..);
// in this case, $row will always have the data you just inserted!
$row = $db->select(...where id=$id...)
?>
In this case, if you do an insert, you will always be able to access the last inserted row with a select. That doesn't change even if a transaction is used here.
If the value is inserted in a transaction, it won't be accessible to any other transaction until your original transaction is committed. Other than that it ought to be accessible at least "very soon" after the time you commit it.
There are normally two ways of using MySQL (and most other SQL databases, for that matter):
Transactional. You start a transaction (either implicitly or by issuing something like 'BEGIN'), issue commands, and then either explicitly commit the transaction, or roll it back (failing to take any action before cutting off the database connection will result in automatic rollback).
Auto-commit. Each statement is automatically committed to the database as it's issued.
The default mode may vary, but even if you're in auto-commit mode, you can "switch" to transactional just by issuing a BEGIN.
If you're operating transactionally, any changes you make to the database will be local to your db connection/instance until you issue a commit. Issuing a commit should block until the transaction is fully committed, so once it returns without error, you can assume the data is there.
If you're operating in auto-commit (and your database library isn't doing something really strange), you can rely on data you've just entered to be available as soon as the call that inserts the data returns.
Note that best practice is to always operate transactionally. Even if you're only issuing a single atomic statement, it's good to be in the habit of properly BEGINing and COMMITing a transaction. It also saves you from trouble when a new version of your database library switches to transactional mode by default and suddenly all your one-line SQL statements never get committed. :)
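To illustrate with PDO (a sketch; the DSN and the Test table with an auto-increment id are placeholders):
// Within one connection, a row inserted inside an open transaction
// is immediately visible to SELECTs on that same connection.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();

$pdo->exec("INSERT INTO Test (value) VALUES (1)");
$id = $pdo->lastInsertId();   // assumes an auto-increment primary key

$stmt = $pdo->prepare("SELECT * FROM Test WHERE id = ?");
$stmt->execute(array($id));
$row = $stmt->fetch(PDO::FETCH_ASSOC);   // the row is already there for this connection

$pdo->commit();   // only now do other connections see it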
Mostly the answer is yes. You would have to do some special work to force a database call to be asynchronous in the way you describe, and as long as you're doing it all in the same thread, you should be fine.
What is the context in which you're asking the question?
