I'm using PHP with PDO and InnoDB tables.
I only want the code to allow one user-submitted operation to complete: the user can either cancel or complete. If the user posts both operations, I want one of the two requests to fail and roll back, but that isn't happening right now; both complete without any exception or error. I thought deleting the row after checking that it exists would be enough.
$pdo = new PDO();
try {
    $pdo->beginTransaction();
    $rowCheck = $pdo->query("SELECT * FROM table WHERE id=99")->rowCount();
    if ($rowCheck == 0)
        throw new RuntimeException("Row isn't there");
    $pdo->exec("DELETE FROM table WHERE id = 99");
    // either cancel, which does one bunch of queries. if (isset($_POST['cancel'])) ...
    // or complete, which does another bunch of queries. if (isset($_POST['complete'])) ...
    // do a bunch of queries on other tables here...
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollback();
    throw $e;
}
How can I make the cancel / complete operations a critical section? The second operation MUST fail.
Another solution just for completeness:
private function getLock() {
    $lock = $this->pdo->query("SELECT GET_LOCK('my_lock_name', 5)")->fetchColumn();
    if ($lock != "1")
        throw new RuntimeException("Lock was not gained: " . $lock);
}

private function releaseLock() {
    $releaseLock = $this->pdo->query("SELECT RELEASE_LOCK('my_lock_name')")->fetchColumn();
    if ($releaseLock != "1")
        throw new RuntimeException("Lock not properly released " . $releaseLock);
}
MySQL GET_LOCK() documentation
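A rough sketch of how these helpers could wrap the critical section (the transaction body is a placeholder, not part of the original code):

// Serialize the critical section behind the named lock and always release it afterwards.
$this->getLock();
try {
    $this->pdo->beginTransaction();
    // ... cancel or complete queries go here ...
    $this->pdo->commit();
} catch (Exception $e) {
    $this->pdo->rollBack();
    throw $e;
} finally {
    $this->releaseLock();
}

Whichever request acquires the lock first runs to completion; the second either times out after 5 seconds, or, once it finally gets the lock, finds the row already deleted and fails its row check.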
The code is fine, with one exception: Add FOR UPDATE to the initial SELECT. That should suffice to block the second button press until the first DELETE has happened, thereby leading to the second one "failing".
https://dev.mysql.com/doc/refman/5.5/en/innodb-locking-reads.html
Note: Locking of rows for update using SELECT FOR UPDATE only applies when autocommit is disabled (either by beginning a transaction with START TRANSACTION or by setting autocommit to 0). If autocommit is enabled, the rows matching the specification are not locked.
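Applied to the code from the question, the change would look roughly like this (a sketch; the table name and id are the ones from the question):

$pdo->beginTransaction();
// FOR UPDATE makes a concurrent request block on this row until the first
// transaction commits; once the DELETE has committed, the row is gone and
// the second request's row check throws.
$rowCheck = $pdo->query("SELECT * FROM table WHERE id = 99 FOR UPDATE")->rowCount();
if ($rowCheck == 0)
    throw new RuntimeException("Row isn't there");
$pdo->exec("DELETE FROM table WHERE id = 99");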
Related
Let's say I need to insert into one table and update another and those two things absolutely need to happen together. Example code:
$insert = query('INSERT INTO first_table');
if ($insert->successful) {
    $update = query('UPDATE second_table');
    if ($update->successful) {
    } else {
        log($update->errorMessage);
        // magically revert the effects from the first query?
        // store the query and try to execute it on the next request?
    }
}
Obviously I would log the error but all of the data would be out of sync/corrupted. What should I do in this case? Or am I doing the entire thing wrong and it shouldn't be in two queries?
You need transactions. Additionally, check the outcome of the START TRANSACTION and COMMIT statements:
//Start your transaction
$start = query('START TRANSACTION');
$insert = query('INSERT INTO first_table');
if ($insert->successful) {
    $update = query('UPDATE second_table');
    if ($update->successful) {
        //Make the changes permanent
        $state = query('COMMIT');
    } else {
        //Undo changes - the ROLLBACK reverts the effects of the first query
        $state = query('ROLLBACK');
        log($update->errorMessage);
    }
} else {
    //Undo changes
    $state = query('ROLLBACK');
}
You need to start the transaction first and commit only if both queries succeed.
https://dev.mysql.com/doc/refman/5.5/en/commit.html
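If your wrapper is built on PDO, the same pattern can be written with exceptions instead of checking each result. This is only a sketch, with placeholder SQL and connection details:

// Sketch: PDO with exceptions enabled; the SQL strings are placeholders.
$pdo = new PDO($dsn, $user, $password);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("INSERT INTO first_table ...");
    $pdo->exec("UPDATE second_table ...");
    $pdo->commit();   // both statements succeeded
} catch (Exception $e) {
    $pdo->rollBack(); // undoes the INSERT as well
    // log $e->getMessage() here
}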
I wonder whether it matters where to start the transaction.
Example 1:
// $object was loaded from the database before the transaction started
$transaction = Yii::app()->db->beginTransaction();
try
{
    $savedSuccessfully = $object->save();
    $transaction->commit();
}
catch (Exception $ex)
{
    $transaction->rollBack();
    $result = $ex->getMessage();
}
Example 2:
$transaction = Yii::app()->db->beginTransaction();
try
{
    $object = Model::model()->findByPk(1); // !!!!!!! - the line that makes the difference
    $savedSuccessfully = $object->save();
    $transaction->commit();
}
catch (Exception $ex)
{
    $transaction->rollBack();
    $result = $ex->getMessage();
}
Should the transaction be started before selecting data from the database, or just before updating/inserting data? Will Yii take care of it for me?
Thanks
Example 2 would be the solution of choice.
By retrieving the model within the transaction, you make sure that it is consistent throughout your changes.
If you retrieve the model, like in example 1, outside the transaction, other threads/users could change the corresponding database entry before you commit your changes. So you could end up with potentially inconsistent data.
Actually, the second one is correct. If you are saving data that is critical, as in a banking or payment system, example 2 is the right way. For example, say you are running code like this:
insert into table 1
select from table 1
insert into table 2
update table 2
select from table 1.
If you start the transaction at the very beginning, all of the queries are rolled back if any one of them fails, which is what you need in, for example, an online payment system.
For relational databases like MySQL, transaction handling in PHP looks like this:
Begin transaction
...
Insert queries
...
Update queries
...
if error in any query then
Rollback transaction
...
at end, if no error in any query then
Commit transaction
How to handle transactions in neo4jphp?
I tried the same approach, but it failed: even after a rollback, the changes were saved.
I was doing it like this:
//$client = Neo4jClient
$transaction = $client->beginTransaction();
...
//Insert queries
...
//Update queries
...
//if error in any query then
$transaction->rollback();
...
// at end, if no error in any query then
$transaction->commit();
Check the following code.
//$client = Neo4jClient
$transaction = $client->beginTransaction();
$dataCypherQuery = new Query($client, $dataQuery, $params);
Instead of getting the result set from the query, we need to add the statement to the transaction.
// $dataResult = $dataCypherQuery->getResultSet(); // don't do this inside a transaction
Important: pass the query object to the transaction's addStatements() method.
$dataResult = $transaction->addStatements($dataCypherQuery);
We can pass true as a second parameter to indicate that the transaction should be committed.
//$dataResult = $transaction->addStatements($dataCypherQuery, true);
If there is an error, the changes are rolled back automatically.
You can check the $dataResult variable for validity; the result should return something.
if (0 == $dataResult->count()) {
$transaction->rollback();
}
At the end, if there was no error in any query:
$transaction->commit();
For more info see Cypher-Transactions
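Putting those pieces together, the flow looks roughly like this. It is only a sketch: the use statement assumes the Everyman\Neo4j classes shipped with neo4jphp, and $dataQuery/$params are the placeholders from the snippets above.

use Everyman\Neo4j\Cypher\Query;

//$client = Neo4jClient
$transaction = $client->beginTransaction();
$dataCypherQuery = new Query($client, $dataQuery, $params);

// Run the statement inside the transaction instead of calling getResultSet().
$dataResult = $transaction->addStatements($dataCypherQuery);

if (0 == $dataResult->count()) {
    // Nothing came back: undo everything done in this transaction.
    $transaction->rollback();
} else {
    // All statements succeeded: make the changes permanent.
    $transaction->commit();
}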
When I try to run the code below:
$conBud = Propel::getConnection(MyTestBudgetPeer::DATABASE_NAME); // DATABASE_NAME = 'Budget'
$conBud->beginTransaction();
$conIn = Propel::getConnection(MyTestInvoicePeer::DATABASE_NAME); // DATABASE_NAME = 'Invoice'
$conIn->beginTransaction();
$idcl = '1235';
try
{
    // Do db updates related to database Budget (here around 15 tables and 500 data rows are updated)
    // budExModel is a table; the primary id from this table is used to update the InvoiceTest table below
    $idtest = $budExModel->save($conBud);
    ...
    // Code to update one table in database Invoice (only one table)
    // Create a Criteria object that will select the correct rows from the database
    $selectCriteria = new Criteria();
    $selectCriteria->add(InvoiceTestPeer::IDCL, $idcl, Criteria::EQUAL);
    $selectCriteria->setDbName(InvoiceTestPeer::DATABASE_NAME);
    // Create a Criteria object that includes the value you want to set
    $updateCriteria = new Criteria();
    $updateCriteria->add(InvoiceTestPeer::IDTEST, $idtest);
    // Execute the query
    BasePeer::doUpdate($selectCriteria, $updateCriteria, $conIn);
    $conBud->commit();
    $conIn->commit();
} catch (Exception $e)
{
    $conBud->rollBack();
    $conIn->rollBack();
}
I get the error: Unable to execute UPDATE statement [UPDATE invoice_test SET IDTEST=:p1 WHERE invoice_test.IDCL=:p2] [wrapped: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction]
The error I am getting is for the table/database that has less data and only touches one table.
Is this not allowed in MySQL?
I have already changed innodb_lock_wait_timeout and tried restarting MySQL, so those are not options.
Edit: the IDTEST column I am trying to update in table invoice_test is a foreign key referencing table Budget_test in database Budget.
It seems that the reason behind the error was the foreign key constraint on idtest.
Here $idtest is the primary key of the newly saved row in table bud_ex, retrieved via last_insert_id, and it is the same id being used in the invoice_test table. The problem is that the transaction on the Budget connection had not been committed yet, so when the Invoice update tried to use this id, the foreign key check ran into the uncommitted row and the statement waited on its lock until the timeout was exceeded.
To get this to work I had to run a query to disable foreign key checks for the invoice database:
set foreign_key_checks = 0;
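In this setup that statement can be issued on the Invoice connection before the update; a small sketch (assuming the Propel connection exposes PDO's exec(), and reusing $conIn from the code above):

// Disable FK checks for this session on the Invoice connection only.
$conIn->exec("SET foreign_key_checks = 0");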
Along with this, I made a few changes to the PHP code to make the try/catch structure more robust:
$con1->beginTransaction();
try
{
    // Do stuff
    $con2->beginTransaction();
    try
    {
        // Do stuff
        $con2->commitTransaction();
    }
    catch (Exception $e)
    {
        $con2->rollbackTransaction();
        throw $e;
    }
    try
    {
        $con1->commitTransaction();
    }
    catch (Exception $e)
    {
        // Oops $con2 was already committed, we need to manually revert operations done with $con2
        throw $e;
    }
}
catch (Exception $e)
{
    $con1->rollbackTransaction();
    throw $e;
}
Simple question, I guess: I want to use PHP to write an update to an existing row in my database, and if it doesn't happen I want to log the failure but continue executing the code. While it would be nice to have a record of failures to track down issues, the fact that the update failed isn't important to my user, nor will it affect any other code; the query simply sets a 'cosmetic' and entirely unnecessary piece of information.
My database class's query function is set to die on failure. Could I modify that, or is there another way of doing it without altering my standard query code?
This is what exceptions are good at.
Tiny example using the old mysql extension:
class Db
{
    function query( $sql )
    {
        $result = @mysql_query( $sql );
        $error = mysql_error();
        if ( !empty( $error ) )
        {
            throw new DbException( $error );
        }
        return $result;
    }
}
class DbException extends Exception{}
And then
try {
    $db = new Db;
    $db->query( 'select * from table' );
}
catch ( DbException $e )
{
    // do nothing - we want silent failure
}
if (mysql_query($sql)) {
    // Database command succeeds
} else {
    // Database command fails
}
get rid of die() then?
I'd suggest using trigger_error() instead of die(). You will be notified of the error via the standard error output.
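Inside the query function, that swap could look something like this (a sketch; the surrounding class and the mysql_* calls are taken from the other answers here):

// Report the failure as a warning and keep executing instead of halting.
$result = mysql_query($sql);
if ($result === false) {
    trigger_error('Query failed: ' . mysql_error(), E_USER_WARNING);
}
return $result;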
Well, I would just take out the die(). That, alone, will keep it from stopping all CGI execution. If you somehow want to log failures, why not add a function that writes to a log file, or maybe sends you an e-mail with the failure (if you're not talking about a high fail rate, and just want to debug).
I would implement a method that logs/mails the error. Your end users won't notice anything at all.
$query = mysql_query($sql) or log_error($sql);
// continue executing code

function log_error($sql) {
    // Code that writes to a log file
    // Notify tech support
    mail("user@example.com", "Error while updating DB", $sql . " at time: " . time());
}