SQLite transaction doesn't work as expected - php

I prepared 2 files, "1.php" and "2.php".
"1.php" is like this.
<?php
$dbh = new PDO('sqlite:test1');
$dbh->beginTransaction();
print "aaa<br>";
sleep(55);
$dbh->commit();
print "bbb";
?>
and "2.php" is like this.
<?php
$dbh = new PDO('sqlite:test1');
$dbh->beginTransaction();
print "ccc<br>";
$dbh->commit();
print "ddd";
?>
and I execute "1.php". It starts a transaction and waits 55 seconds.
So when I immediately execute "2.php", my expectation is this:
"1.php" is holding a transaction and
"1" holds a database lock, so
"2" cannot begin a transaction,
"2" cannot get the database lock, and therefore
"2" has to wait 55 seconds.
BUT the test went another way. When I execute "2", then
"2" immediately returns its result;
"2" does not wait.
So I have to conclude that "1" could not start a transaction, or could not get a database lock.
Can anyone help?

As I understand it, SQLite transactions do not lock the database unless
a. you make them EXCLUSIVE (they are DEFERRED by default), or
b. you actually access the database.
So either explicitly call
$dbh->exec("BEGIN EXCLUSIVE TRANSACTION");
or perform a write operation (INSERT/UPDATE) on the DB before you start to sleep().
To cite the documentation (emphasis mine):
Transactions can be deferred, immediate, or exclusive. The default transaction behavior is deferred. Deferred means that no locks are acquired on the database until the database is first accessed. Thus with a deferred transaction, the BEGIN statement itself does nothing. Locks are not acquired until the first read or write operation. The first read operation against a database creates a SHARED lock and the first write operation creates a RESERVED lock.
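A minimal sketch of the exclusive variant of "1.php" (the error-mode setting is my addition; everything else follows the scripts above):

```php
<?php
// "1.php", exclusive variant: the lock is taken at BEGIN, not at first access.
$dbh = new PDO('sqlite:test1');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // surface "database is locked" errors
$dbh->exec('BEGIN EXCLUSIVE TRANSACTION');
print "aaa<br>";
sleep(55);
$dbh->exec('COMMIT');
print "bbb";
```

Note that "2.php" as written performs no reads or writes either, so to observe the blocking it must also use BEGIN EXCLUSIVE (or actually touch a table); with deferred transactions on both sides, neither script ever takes a lock.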

Related

In a transaction, do I need to roll back on every failure, or is exiting the script without committing enough?

suppose I have this code
$transactionfailed = false;
$mysqliconn->query("begin;");
//query 1
$result=$mysqliconn->query("insert into table(col1,col2) values('$val1','val2')");
if(!$result){$transactionfailed=true;}
//query 2
$result=$mysqliconn->query("insert into table2(col1,col2) values('$val1','val2')");
if(!$result){$transactionfailed=true;}
if($transactionfailed){$mysqliconn->query("rollback;");}
else{$mysqliconn->query("commit;");}
die();
I want to replace it with this one
$mysqliconn->query("begin;");
//query 1
$result=$mysqliconn->query("insert into table(col1,col2) values('$val1','val2')");
if(!$result){die("error");}
//query 2
$result=$mysqliconn->query("insert into table2(col1,col2) values('$val1','val2')");
if(!$result){die("error");}
$mysqliconn->query("commit;");
die();
I want to end the script if something goes wrong, without rolling back or committing, relying on the MySQL database to roll back the transaction if I never commit it.
I tried it many times, and yes, it does roll back the transaction if I exit without committing. But is this always SAFE? Or is there something I'm missing? Because I don't want query 2 to fail while query 1 gets committed a day later.
I suggest using try/catch blocks:
put the 'begin' query before the try block
put your 'insert' queries in the try block
if no query failed, 'commit' once at the end of the try block
if any query failed, 'rollback' once in the catch block (all queries after the begin query will be rolled back)
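A minimal sketch of that structure, assuming $mysqliconn is an open mysqli connection and that $val1/$val2 hold the values (the question's literal 'val2' looks like a typo). mysqli_report() makes every failed query throw, so one catch block covers both inserts:

```php
<?php
// Assumptions: $mysqliconn is an open mysqli connection; $val1 and $val2 are set.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // failed queries now throw
$mysqliconn->query("BEGIN");
try {
    // prepared statements also avoid interpolating $val1/$val2 into the SQL
    $stmt = $mysqliconn->prepare("INSERT INTO `table` (col1, col2) VALUES (?, ?)");
    $stmt->bind_param('ss', $val1, $val2);
    $stmt->execute();

    $stmt = $mysqliconn->prepare("INSERT INTO `table2` (col1, col2) VALUES (?, ?)");
    $stmt->bind_param('ss', $val1, $val2);
    $stmt->execute();

    $mysqliconn->query("COMMIT");   // both inserts succeeded
} catch (mysqli_sql_exception $e) {
    $mysqliconn->query("ROLLBACK"); // undo everything since BEGIN
    die("error: " . $e->getMessage());
}
```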

MySQL Make concurrent SELECT ... FOR UPDATE queries fail instead of waiting for lock to be released

In my PHP code I'm trying to make an innoDB database transaction be ignored if another thread is already performing the transaction on the row. Here's some code as an example:
$db = connect_db();
try{
$db->beginTransaction();
$query = "SELECT val FROM numbers WHERE id=1 FOR UPDATE"; //I want this to throw an exception if row is already selected for update
make_query($db,$query); //function I use to make queries with PDO
sleep(5); //Added to delay the transaction
$num = rand(1,100);
$query = "UPDATE numbers SET val='$num' WHERE id=1";
make_query($db,$query);
$db->commit();
echo $num;
}
catch (PDOException $e){
echo $e;
}
When it makes the SELECT val FROM numbers WHERE id=1 FOR UPDATE query, I need some way of knowing through PHP whether the thread is waiting for another thread to finish its transaction and release the lock. What ends up happening is that the first thread finishes the transaction and the second thread overwrites its changes immediately afterwards. Instead I want the first transaction to finish and the second transaction to roll back or commit without making any changes.
Consider simulating record locks with GET_LOCK()
Choose a name specific to the rows you want to lock, e.g. 'numbers_1'.
Call SELECT GET_LOCK('numbers_1', 0) to lock the name 'numbers_1'. It will return 1 and set the lock if the name is available, or return 0 if the lock is already set. The second parameter is the timeout; 0 means fail immediately. On a return of 0 you can back out.
Use SELECT RELEASE_LOCK('numbers_1') when you are finished.
Be aware: calling GET_LOCK() again in the same session will release the previously set lock.
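A hedged PDO sketch of those steps (the DSN and credentials are assumptions; the table and the lock name 'numbers_1' come from the question; finally requires PHP 5.5+):

```php
<?php
// Assumed DSN/credentials; the lock name 'numbers_1' guards the row with id=1.
$db = new PDO('mysql:host=localhost;dbname=test', $user, $pass,
              [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$got = $db->query("SELECT GET_LOCK('numbers_1', 0)")->fetchColumn();
if (!$got) {
    exit('row busy, backing out'); // another request holds the lock: do nothing
}
try {
    $db->beginTransaction();
    $num = rand(1, 100);
    $db->exec("UPDATE numbers SET val='$num' WHERE id=1");
    $db->commit();
    echo $num;
} finally {
    $db->query("SELECT RELEASE_LOCK('numbers_1')"); // always free the name
}
```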

How does locking tables work?

I have a PHP script that will be requested several times "at the same time". I also have a field in a table (let's call it persons) used as an active/inactive flag. When the first instance of the script runs, I want it to set that field to inactive so that the remaining instances will die when they check that field. Can someone provide a solution for that? How can I ensure that this script will run only once?
PHP, PDO, MySQL.
Thank you very much in advance.
Your script should fetch the current flag within a transaction using a locking read, such as SELECT ... FOR UPDATE:
$dbh = new PDO("mysql:dbname=$dbname", $username, $password);
$dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, FALSE);
$dbh->beginTransaction();
// using SELECT ... FOR UPDATE, MySQL will hold all other connections
// at this point until the lock is released
$qry = $dbh->query('SELECT persons FROM my_table WHERE ... FOR UPDATE');
if ($qry->fetchColumn() == 'active') {
$dbh->query('UPDATE my_table SET persons = "inactive" WHERE ...');
$dbh->commit(); // releases lock so others can see they are inactive
// we are the only active connection
} else {
$dbh->rollBack();
// we are inactive
}
You can use MySQL's own 'named' locking functions without ever locking a table: http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
For example, try GET_LOCK('got here first', 0) with a 0 timeout. If you get the lock, you're first in the gate, and any subsequent requests will NOT get the lock and will immediately abort.
However, be careful with this stuff. If you don't clean up after yourself and the client that gained the lock terminates abnormally, the lock will not be released, and your "need locks for this" system is dead in the water until you manually clear the lock.
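One way to sketch that cleanup concern (DSN and variable names assumed): registering a shutdown function releases the lock even when the script die()s partway through. It does not help if the whole PHP process is killed outright; there you still rely on MySQL freeing the lock when the connection finally drops.

```php
<?php
// Assumed connection details; the lock name is from the example above.
$dbh = new PDO("mysql:dbname=$dbname", $username, $password);
$got = $dbh->query("SELECT GET_LOCK('got here first', 0)")->fetchColumn();
if (!$got) {
    exit; // another instance is already running
}
// release the lock even if the script exits or die()s later on
register_shutdown_function(function () use ($dbh) {
    $dbh->query("SELECT RELEASE_LOCK('got here first')");
});
// ... the run-once work goes here ...
```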

Postgresql: PREPARE TRANSACTION

I've two DB servers db1 and db2.
db1 has a table called tbl_album
db2 has a table called tbl_user_album
CREATE TABLE tbl_album
(
id PRIMARY KEY,
name varchar(128)
...
);
CREATE TABLE tbl_user_album
(
id PRIMARY KEY,
album_id bigint
...
);
Now if a user wants to create an album, what my PHP code needs to do is:
Create a record in db1 and save its id (primary key)
Create a record in db2 using the id saved in the first step
Is it possible to keep these two statements in a transaction? I'm OK with a PHP solution too. I mean, I'm fine with a solution that needs PHP code to retain DB handles and commit or roll back on those handles.
Any help is much appreciated.
Yes it is possible, but do you really need it?
Think twice before you decide this really must be two separate databases.
You could just keep both connections open and ROLLBACK the first command if the second one fails.
If you'd really need prepared transactions, continue reading.
Regarding your schema - I would use sequence generators and RETURNING clause on database side, just for convenience.
CREATE TABLE tbl_album (
id serial PRIMARY KEY,
name varchar(128) UNIQUE,
...
);
CREATE TABLE tbl_user_album (
id serial PRIMARY KEY,
album_id bigint NOT NULL,
...
);
Now you will need some external glue (a distributed transaction coordinator) to make this work properly.
The trick is to use PREPARE TRANSACTION instead of COMMIT. Then, after both transactions succeed, use COMMIT PREPARED.
A PHP proof of concept is below.
WARNING! This code is missing the critical part, namely error control. Any error in $db2 should be caught, and ROLLBACK PREPARED should be executed on $db1.
If you don't catch errors, you will leave $db1 with frozen transactions, which is really, really bad.
<?php
$db1 = pg_connect( "dbname=db1" );
$db2 = pg_connect( "dbname=db2" );
$transid = uniqid();
pg_query( $db1, 'BEGIN' );
$result = pg_query( $db1, "INSERT INTO tbl_album(name) VALUES('Absolutely Free') RETURNING id" );
$row = pg_fetch_row($result);
$albumid = $row[0];
pg_query( $db1, "PREPARE TRANSACTION '$transid'" );
if ( pg_query( $db2, "INSERT INTO tbl_user_album(album_id) VALUES($albumid)" ) ) {
pg_query( $db1, "COMMIT PREPARED '$transid'" );
}
else {
pg_query( $db1, "ROLLBACK PREPARED '$transid'" );
}
?>
And again - think before you use it. What Erwin proposes might be more sensible.
Oh, and one more note... To use this PostgreSQL feature, you need to set the max_prepared_transactions config variable to a nonzero value.
If you can access db2 from within db1, then you could optimize the process and actually keep it all inside a transaction. Use dblink or SQL MED for that.
If you roll back a transaction on the local server, what has been done via dblink on a remote server will not be rolled back. (That is one way to make changes persistent even if a transaction is rolled back.)
But you can execute code on the remote server that rolls back if not successful, and only execute it, if the operation in the local db has been successful first. If the remote operation fails you can roll back locally, too.
Also, use the RETURNING clause of INSERT to return id from a serial column.
It will be easier with PDO...
The main advantage of PDO is capturing the errors (by PHP error line or returned SQL error messages) of each single SQL statement in the transaction.
See pdo.begintransaction, pdo.commit, pdo.rollback and pdo.error-handling.
Example:
$dbh->beginTransaction();
/* Do SQL */
$sth1 = $dbh->exec("CREATE TABLE tbl_album (..)");
$sth2 = $dbh->exec("CREATE TABLE tbl_user_album(..)");
/* Commit the changes */
$dbh->commit();
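For completeness, a hedged PDO version of the two-phase proof of concept above (the DSNs and credentials are assumptions, and error handling is still only sketched):

```php
<?php
$opts = [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION];
$db1 = new PDO('pgsql:dbname=db1', $user, $pass, $opts);
$db2 = new PDO('pgsql:dbname=db2', $user, $pass, $opts);
$transid = uniqid('tx');

// use exec('BEGIN'), not beginTransaction(): PREPARE TRANSACTION ends the
// transaction on $db1's side, which would confuse PDO's own bookkeeping
$db1->exec('BEGIN');
$albumid = $db1->query(
    "INSERT INTO tbl_album(name) VALUES ('Absolutely Free') RETURNING id"
)->fetchColumn();
$db1->exec("PREPARE TRANSACTION '$transid'");

try {
    $db2->exec("INSERT INTO tbl_user_album(album_id) VALUES ($albumid)");
    $db1->exec("COMMIT PREPARED '$transid'");
} catch (PDOException $e) {
    $db1->exec("ROLLBACK PREPARED '$transid'"); // never leave a frozen transaction
    die('transaction failed: ' . $e->getMessage());
}
```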

Running multiple queries with mysqli_multi_query and transactions

I'm developing an update system for a Web Application written in PHP. In the process of the update I might need to execute a bunch of MySQL scripts.
The basic process to run the scripts is:
Search for the Mysql scripts
Begin a transaction
Execute each script with mysqli_multi_query since a script can contain multiple queries
If everything goes ok COMMIT the transaction, otherwise ROLLBACK.
My code looks something like:
$link = mysqli_connect(...);
mysqli_autocommit($link, false);
// open dir and search for scripts in file.
// $file is an array with all the scripts
foreach ($scripts as $file) {
$script = trim(file_get_contents($file));
if (mysqli_multi_query($link, $script)) {
while (mysqli_next_result($link)) {
if ($resSet = mysqli_store_result($link)) { mysqli_free_result($resSet); }
if (mysqli_more_results($link)) { }
}
}
// check for errors in any query of any script
if (mysqli_error($link)) {
mysqli_rollback($link);
return;
}
}
mysqli_commit($link);
Here is an example of the scripts (for demonstration purposes):
script.1.5.0.0.sql:
update `demo` set `alias` = 'test1' where `id` = 1;
update `users` set `alias` = 'user1' where `id` = 1;
script 1.5.1.0.sql:
insert into `users`(id, key, username) values(3, '100', 'column key does not exist');
insert into `users`(id, key, username) values(3, '1', 'column key exists');
In this case, script 1.5.0.0 would execute without errors and script 1.5.1.0 would generate an error (for demonstration purposes, let's say that column key is unique and there is already a row with key = 1).
In this case I want to roll back every query that was executed. But what happens is that the first insert of 1.5.1.0 is (correctly) not in the database, while the updates from 1.5.0.0 were executed successfully.
Remarks:
My first option was to split every query in every script on ";" and execute the queries independently. This is not an option, since I have to be able to insert HTML code into the database (e.g. if I want to insert something like "& nbsp;").
I've already searched StackOverflow and Google and came across solutions like this one, but I would prefer using mysqli_multi_query rather than a function that splits every query. It's more understandable and easier for debugging purposes.
I haven't tested it, but I believe I could merge all the scripts and execute just one query. However, it would be useful to execute one script at a time so that I can figure out which script has the error.
The tables engine is InnoDB.
Appreciate if you can point some way to make this work.
Edit: mysqli_multi_query() only returns false if the first query fails. If the first query doesn't fail, then your code will run mysqli_store_result(), which, if it succeeds, will leave mysqli_error() empty. You need to check for errors after every mysqli function that can succeed or fail.
OK, after spending another day debugging, I've discovered the problem.
Actually, it has nothing to do with the code itself or with the mysqli functions. I'm used to MS SQL transactions, which support DDL statements. MySQL does not support transactional DDL and commits data implicitly (implicit commit). I had one DROP TABLE in one of the scripts that was auto-committing the data.
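Putting the Edit's advice into the original loop, here is a sketch that checks for errors after every mysqli call ($link and $scripts are assumed from the question; note that any DDL statement such as DROP TABLE will still commit implicitly no matter how carefully you check):

```php
<?php
// Assumes $link is an open mysqli connection with autocommit off,
// and $scripts is the array of script file paths from the question.
foreach ($scripts as $file) {
    $script = trim(file_get_contents($file));
    if (!mysqli_multi_query($link, $script)) { // the first statement failed
        mysqli_rollback($link);
        die("error in $file: " . mysqli_error($link));
    }
    // drain every result set; store_result() returns false both for
    // result-less statements (INSERT/UPDATE) and for failed ones
    do {
        if ($res = mysqli_store_result($link)) {
            mysqli_free_result($res);
        }
    } while (mysqli_more_results($link) && mysqli_next_result($link));
    if (mysqli_errno($link)) { // a later statement in this script failed
        mysqli_rollback($link);
        die("error in $file: " . mysqli_error($link));
    }
}
mysqli_commit($link);
```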
