InnoDB transactions don't work - PHP

I wanted to try out transactions and see how they work in practice, so I wrote two scripts to test the main purpose of transactions (handling simultaneous access to the database).
I already asked a question here on Stack Overflow, and the following was an edit of that question. But after reading through the rules again, I thought it might be wrong to post it under my original question because it's something different, so I'm asking it as a new question:
My Code (and the database table is set to InnoDB):
On page1.php:
$db->query("START TRANSACTION;");
$db->query("SET AUTOCOMMIT = 0;");
try {
$i = 0;
while ($i <= 120000000) {
$i++;
}
var_dump($db->query("INSERT INTO test VALUES (NULL, 'testvalue')"));
$db->query("COMMIT;");
}
catch (Exception $e) {
$db->query("ROLLBACK;");
echo $e->getMessage();
}
The query method works; it simply executes the given SQL string. And the while loop is only there to buy me a few seconds to switch to the other browser tab and load page2.php:
$db->query("START TRANSACTION;");
$db->query("SET AUTOCOMMIT = 0;");
try {
// outputs an array with the data
var_dump($db->query("SELECT * FROM test", "assoc"));
$db->query("COMMIT;");
}
catch (Exception $e) {
$db->query("ROLLBACK");
echo $e->getMessage();
}
With the SELECT I get an array with all of the values inside the database table, which was empty at first.
Now I open page1.php, which will insert new data into the database. But first it runs through the loop, which takes about 3-4 seconds. Meanwhile, I open page2.php.
From my understanding, page2.php should have waited for page1.php to complete its transaction? But it just loads as usual and outputs an empty array.
When I refresh page2.php after page1.php has finished loading, I get the correct output with the new data.
Where is my mistake? I don't quite understand it.
EDIT: Here is another one I tried:
page1.php
$db->query("SET AUTOCOMMIT = 0;");
$db->query("START TRANSACTION;");
try {
//print_r($db->query("DELETE FROM test;", "affected"));
$i = 200;
while ($i <= 700) {
var_dump($db->query("INSERT INTO test VALUES ({$i}, 'testvaluetestvaluetestvaluetestvaluetestvalue')"));
$i++;
}
$db->query("COMMIT;");
}
catch (Exception $e) {
$db->query("ROLLBACK;");
echo $e->getMessage();
}
page2.php
$db->query("SET AUTOCOMMIT = 0;");
$db->query("START TRANSACTION;");
try {
var_dump($db->query("SELECT * FROM test", "assoc"));
$db->query("COMMIT;");
}
catch (Exception $e) {
$db->query("ROLLBACK");
echo $e->getMessage();
}
While page1.php has not yet completed, page2.php should output nothing, but it outputs the first 70 rows (depending on how fast I reloaded).

Transactions try to avoid locking tables/rows wherever possible to improve concurrency, and that is a good thing.
What they are for is to ensure that a set of SQL statements all execute as an atomic unit,
meaning that if an error occurs, all the queries within the transaction are rolled back.
How strict/aggressive the locking is can be controlled with isolation levels; there is more info in the MySQL documentation.
So it sounds like you are misunderstanding their purpose: they are not a semaphore mechanism.
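For illustration, here is a minimal sketch of what an isolation level can change, reusing the question's hypothetical $db wrapper. With autocommit disabled, SERIALIZABLE makes InnoDB treat plain SELECTs as locking reads, so page2.php's SELECT would actually block until page1.php commits:
$db->query("SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;");
$db->query("SET AUTOCOMMIT = 0;");
$db->query("START TRANSACTION;");
try {
    // blocks on page1.php's uncommitted INSERT instead of returning an empty result
    var_dump($db->query("SELECT * FROM test", "assoc"));
    $db->query("COMMIT;");
}
catch (Exception $e) {
    $db->query("ROLLBACK;");
    echo $e->getMessage();
}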

This is the expected behavior of transactions... What you expected is a pessimistic locking mechanism, but relational databases generally use optimistic locking and transaction isolation to make things faster.
You can read more about this in the PostgreSQL manual. I know your question is about MySQL, but it does not really matter, because it's about concurrency control concepts: ACID properties, transaction isolation levels, etc...
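If you do need pessimistic behavior for specific rows, InnoDB also offers explicit locking reads. A minimal sketch, again assuming the question's $db wrapper and the explicit id column from the second test:
$db->query("START TRANSACTION;");
try {
    // FOR UPDATE takes exclusive row locks; another session issuing the same
    // statement waits until this transaction commits or rolls back
    var_dump($db->query("SELECT * FROM test WHERE id = 200 FOR UPDATE", "assoc"));
    $db->query("COMMIT;");
}
catch (Exception $e) {
    $db->query("ROLLBACK;");
    echo $e->getMessage();
}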

Related

Laravel: Which one is better to "reset" a table: ->truncate() or ->delete()?

I'm using ->truncate() to delete the data, but DB::rollback() in a try/catch has no effect when I use it. Meanwhile, ->delete() is rolled back by DB::rollback(), but with it my auto-increment doesn't reset to 1. I tried using DB::statement('ALTER TABLE table_name AUTO_INCREMENT = 1') after ->delete(), but ALTER TABLE isn't covered by DB::rollback() either, so I don't know what I should do.
Code 1:
\DB::beginTransaction();
try {
    \DB::table('table_name')->truncate();
    \DB::table('table_name')->insert($data);
    \DB::commit();
} catch (\Exception $e){
    \DB::rollback();
}
Code 2:
\DB::beginTransaction();
try {
    \DB::table('table_name')->delete();
    \DB::statement('ALTER TABLE table_name AUTO_INCREMENT = 1');
    \DB::table('table_name')->insert($data);
    \DB::commit();
} catch (\Exception $e){
    \DB::rollback();
}
TL;DR: I want to delete all data in a table and then fill it with new data starting from id = 1, but neither of these two snippets works with DB::rollback().
I found out how to do it myself, by doing this:
\DB::beginTransaction();
try {
    // delete it first to avoid an id duplicate when inserting; comment this out if id is auto increment
    \DB::table('table_name')->where('id', $data[0]['id'])->delete();
    // test whether the insertion is successful before doing the destructive part
    $status = \DB::table('table_name')->insert($data[0]);
    if ($status){
        \DB::table('table_name')->delete();
        \DB::statement('ALTER TABLE table_name AUTO_INCREMENT = 1');
        \DB::table('table_name')->insert($data);
    }
    \DB::commit();
} catch (\Exception $e){
    \DB::rollback();
}
For MySQL, see the manual section 13.3.2, Statements That Cannot Be Rolled Back:
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, and those that create, drop, or alter tables or stored routines.
(TRUNCATE TABLE and ALTER TABLE are among these; they cause an implicit commit, which is why DB::rollback() has no effect on them.)
It is a fairly common SQL Server belief that TRUNCATE cannot be rolled back because it is not logged:
TRUNCATE is a logged operation, but SQL Server doesn’t log every single row as it TRUNCATEs the table. SQL Server only logs the fact that the TRUNCATE operation happened. It also logs the information about the pages and extents that were deallocated. However, there’s enough information to roll back, by just re-allocating those pages. A log backup only needs the information that the TRUNCATE TABLE occurred. To restore the TRUNCATE TABLE, the operation is just reapplied. The data involved is not needed during RESTORE (like it would be for a true ‘minimally logged’ operation like a BULK INSERT).
Hopefully that helps you.
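Given the implicit commits, one possible compromise (a sketch, not a drop-in fix) is to keep only rollback-safe DML inside transactions and run the implicit-commit statement between them, accepting that full atomicity is not achievable with TRUNCATE or ALTER TABLE in MySQL:
\DB::beginTransaction();
try {
    // DELETE is DML, so it participates in the transaction and can be rolled back
    \DB::table('table_name')->delete();
    \DB::commit();
} catch (\Exception $e){
    \DB::rollback();
    throw $e; // the old data is untouched at this point
}

// ALTER TABLE commits implicitly, so it must live outside any transaction
\DB::statement('ALTER TABLE table_name AUTO_INCREMENT = 1');

\DB::beginTransaction();
try {
    \DB::table('table_name')->insert($data);
    \DB::commit();
} catch (\Exception $e){
    \DB::rollback(); // note: the table stays empty here; the old rows are gone
}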

MSSQL UPDATE works but INSERT fails

I've been on this for 2 days and can't figure out this case.
I'm working with PHP 5.5 and MSSQL, and it seems I can't insert into a certain table, whereas an UPDATE works on that table and the same INSERT works on another table.
Of course I've checked that my user has the correct rights on this table.
Here's the code, maybe I'm dumb...
// Establish connection
try {
    $pdo = new PDO(DSN, UID, PWD);
} catch (PDOException $e) {
    die("Error! ".$e->getMessage());
}
$pdo->beginTransaction();
// Merge-like event
try {
    $updateStmt->execute();
    $rows = $updateStmt->rowCount();
    if($rows == 0) {
        $insertStmt->execute();
    }
} catch (Exception $e) {
    $pdo->rollBack();
    die("Error! ".$e->getMessage());
} finally {
    $insertHistoryStmt->execute();
    $pdo->commit();
}
All my PDO statement objects are correct, with suitable values.
I get no error on the INSERT; it just seems to never be executed on the DB.
Please ask if you need more code to understand; I don't want to put my whole code here and say "please do my work".
Thanks, I'm really stuck :/
Maybe my question's title isn't suitable now that I've found the source of the error, but I'm writing this answer down because it's a great tutorial that explains how to debug such errors ourselves.
If you run into any PDO errors, follow this link and you will find the way.
Thanks to #Your Common Sense for providing a method to learn how to solve errors rather than a ready-to-use solution.
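The first step that kind of advice usually boils down to is making PDO throw exceptions instead of failing silently. A minimal sketch (the DSN constants are the ones from the question):
try {
    $pdo = new PDO(DSN, UID, PWD);
    // Make every failed query throw a PDOException instead of failing silently,
    // so a broken INSERT surfaces immediately instead of "never executing"
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch (PDOException $e) {
    die("Error! " . $e->getMessage());
}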

MySQL Make concurrent SELECT ... FOR UPDATE queries fail instead of waiting for lock to be released

In my PHP code I'm trying to make an InnoDB database transaction be ignored if another thread is already performing a transaction on the row. Here's some code as an example:
$db = connect_db();
try{
    $db->beginTransaction();
    $query = "SELECT val FROM numbers WHERE id=1 FOR UPDATE"; // I want this to throw an exception if the row is already selected for update
    make_query($db,$query); // function I use to make queries with PDO
    sleep(5); // added to delay the transaction
    $num = rand(1,100);
    $query = "UPDATE numbers SET val='$num' WHERE id=1";
    make_query($db,$query);
    $db->commit();
    echo $num;
}
catch (PDOException $e){
    echo $e;
}
When it makes the SELECT val FROM numbers WHERE id=1 FOR UPDATE query, I need some way of knowing through PHP whether the thread is waiting for another thread to finish its transaction and release the lock. What ends up happening is that the first thread finishes the transaction and the second thread overwrites its changes immediately afterwards. Instead I want the first transaction to finish and the second transaction to roll back or commit without making any changes.
Consider simulating record locks with GET_LOCK().
Choose a name specific to the rows you want locked, e.g. 'numbers_1'.
Call SELECT GET_LOCK('numbers_1', 0) to lock the name 'numbers_1'. It will return 1 and set the lock if the name is available, or return 0 if the lock is already set. The second parameter is the timeout, 0 for immediate. On a return of 0 you can back out.
Use SELECT RELEASE_LOCK('numbers_1') when you are finished.
Be aware: calling GET_LOCK() again in a transaction will release the previously set lock.
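Applied to the question's code, a minimal sketch (reusing the connect_db() and make_query() helpers from above, and assuming connect_db() returns a PDO instance):
$db = connect_db();
// Try to take the named lock immediately (timeout 0): "1" on success, "0" if taken
$got = $db->query("SELECT GET_LOCK('numbers_1', 0)")->fetchColumn();
if ($got != 1) {
    exit("Row is busy, backing out."); // another thread holds the lock
}
try {
    $db->beginTransaction();
    $num = rand(1, 100);
    make_query($db, "UPDATE numbers SET val='$num' WHERE id=1");
    $db->commit();
    echo $num;
}
catch (PDOException $e) {
    $db->rollBack();
    echo $e;
}
finally {
    $db->query("SELECT RELEASE_LOCK('numbers_1')"); // always free the name
}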

PDO: let the database connection stay open, or open and close it when needed?

I have just discovered PDO and I'm very excited about it, but I have read a few tutorials on how to implement it, and they show me different ways of doing it.
So now I'm confused about which way is best.
Example 1: open the database once.
include("host.php"); //including the database connection
//random PDO mysql stuff here
Example 2: open and close the database when needed:
try {
    $dbh = new PDO(mysql stuff);
    $sql = "mysql stuff";
    foreach ($dbh->query($sql) as $row)
    {
        echo $row['something'];
    }
    /*** close the database connection ***/
    $dbh = null;
}
catch(PDOException $e)
{
    echo $e->getMessage();
}
Which is best? I would think example 2, but there is much more code than in example 1.
Usually, significant time is spent (and lost) when connecting, and you want to do it only once. Do not go closing a connection you need later on; it will only slow things down. You may consider closing a connection sooner if you are reaching the maximum connections limit, but that's more a hint that you should scale up than a permanent solution, IMHO.
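A minimal sketch of the connect-once approach (the getConnection() helper and the credentials here are hypothetical, not from the question):
function getConnection() {
    static $dbh = null; // created on the first call, reused by every later call
    if ($dbh === null) {
        $dbh = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
    }
    return $dbh;
}

// every query in the request shares the same connection
foreach (getConnection()->query('SELECT something FROM sometable') as $row) {
    echo $row['something'];
}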

PDO Unbuffered queries

I'm trying to get into PDO details. So I coded this:
$cn = getConnection();
// get table sequence
$comando = "call p_generate_seq('bitacora')";
$id = getValue($cn, $comando);
//$comando = 'INSERT INTO dsa_bitacora (id, estado, fch_creacion) VALUES (?, ?, ?)';
$comando = 'INSERT INTO dsa_bitacora (id, estado, fch_creacion) VALUES (:id, :estado, :fch_creacion)';
$parametros = array (
    ':id' => (int)$id,
    ':estado' => 1,
    ':fch_creacion' => date('Y-m-d H:i:s')
);
execWithParameters($cn, $comando, $parametros);
My getValue function works fine, and I get the next sequence for the table. But when I get into execWithParameters, I get this exception:
PDOException: SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. in D:\Servidor\xampp_1_7_1\htdocs\bitacora\func_db.php on line 77
I tried to modify the connection attributes, but it doesn't work.
These are my core db functions:
function getConnection() {
    try {
        $cn = new PDO("mysql:host=$host;dbname=$bd", $usuario, $clave, array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
        $cn->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);
        return $cn;
    } catch (PDOException $e) {
        print "Error!: " . $e->getMessage() . "<br/>";
        die();
    }
}

function getValue($cn, $comando) {
    $resul = $cn->query($comando);
    if (!$resul) return null;
    while($res = $resul->fetch()) {
        $retorno = $res[0][0];
        break;
    }
    return $retorno;
}

function execWithParameters($cn, $comando, $parametros) {
    $q = $cn->prepare($comando);
    $q->execute($parametros);
    if ($q->errorInfo() != null) {
        $e = $q->errorInfo();
        echo $e[0].':'.$e[1].':'.$e[2];
    }
}
Can somebody shed some light on this? PS: please do not suggest using an auto-increment id, because I am porting from another system.
The issue is that MySQL only allows one outstanding cursor at a given time. By using the fetch() method and not consuming all the pending data, you are leaving a cursor open.
The recommended approach is to consume all the data using the fetchAll() method.
An alternative is to use the closeCursor() method.
If you change this function, I think you will be happier:
<?php
function getValue($cn, $comando) {
    $resul = $cn->query($comando);
    if (!$resul) return null;
    foreach ($resul->fetchAll() as $res) {
        $retorno = $res[0];
        break;
    }
    return $retorno;
}
?>
I don't think PDOStatement::closeCursor() would work if you're not doing a query that returns data (i.e. an UPDATE, INSERT, etc).
A better solution is to simply unset() your PDOStatement object after calling PDOStatement::execute():
$stmt = $pdo->prepare('UPDATE users SET active = 1');
$stmt->execute();
unset($stmt);
The problem seems to be (I'm not too familiar with PDO) that after your getValue call returns, the query is still bound to the connection: you only ever ask for the first value, yet the connection returns several rows, or expects to do so.
Perhaps getValue can be fixed by adding
$resul->closeCursor();
before the return.
Otherwise, if queries to getValue will always return a single value (or few enough), using fetchAll seems preferable.
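A sketch of that closeCursor() variant, keeping the question's getValue() signature:
function getValue($cn, $comando) {
    $resul = $cn->query($comando);
    if (!$resul) return null;
    $res = $resul->fetch();
    // discard the rest of the result (a CALL also returns an extra status
    // result set) so the connection is free for the next query
    $resul->closeCursor();
    return $res !== false ? $res[0] : null;
}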
I just spent 15 minutes googling all around the internet and viewed at least 5 different Stack Overflow questions, some of which claimed my bug arose from the wrong version of PHP, the wrong version of the MySQL library, or other magical black-box stuff...
I changed all my code to use fetchAll, and I even called closeCursor() and unset() on the query object after each and every query. I was honestly getting desperate! I also tried the MYSQL_ATTR_USE_BUFFERED_QUERY flag, but it did not work.
FINALLY I threw everything out the window, looked at the actual PHP error, and tracked down the line of code where it happened:
SELECT AVG((original_bytes-new_bytes)/original_bytes) as saving
FROM (SELECT original_bytes, new_bytes FROM jobs ORDER BY id DESC LIMIT 100) AS t1
Anyway, the problem happened because original_bytes and new_bytes were both unsigned BIGINTs, which meant that if I ever had a job where new_bytes was actually LARGER than original_bytes, I would get a nasty MySQL "out of range" error. And that just happened randomly after running my minification service for a little while.
Why I got this weird unbuffered-queries error instead of the plain MySQL error is beyond me! The real error actually showed up in SQLBuddy (a lightweight phpMyAdmin) when I ran the raw query.
I had PDO exceptions on, so it should have just given me the MySQL error.
Never mind; the bottom line is:
If you ever get this error, be sure to check that your raw MySQL is actually correct and STILL working!!!
A friend of mine had much the same problem with the XAMPP 1.7.1 build. After replacing xampp/php/* with the 5.2.9-2 php.net build and copying all the necessary files to xampp/apache/bin, it worked fine.
If you're using XAMPP 1.7.1, you just need to upgrade to 1.7.2.
