I am running Laravel 5.4 and noticed that rollbacks in transactions do not work. I set my database engine to InnoDB in the settings.php file and tried both DB::rollback(); and DB::rollBack(); (i.e. lower- and upper-case b), but neither rolls back my database.
I wrote the unit test below. It creates a record, commits it, then rolls back. However, the last assertion fails: after the rollback, the record is still found in the database. Is there something I am missing, or is this a bug in Laravel?
public function testRollback()
{
    $this->artisan('migrate:refresh', [
        '--seed' => '1'
    ]);
    DB::beginTransaction();
    Season::create(['start_date' => Carbon::now(), 'end_date' => Carbon::now()]);
    DB::commit();
    $this->assertDatabaseHas('seasons', [
        'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
    ]);
    DB::rollBack();
    // This assertion fails. It still finds the record after calling rollBack
    $this->assertDatabaseMissing('seasons', [
        'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
    ]);
}
A transaction consists of three steps:
You start it with DB::beginTransaction (or the MySQL equivalent, START TRANSACTION), then you execute the commands you need, and then (and here's the important part) you either COMMIT or ROLLBACK.
However, once you've committed, the transaction is done; you can't roll it back anymore.
Change the test to:
public function testRollback()
{
    $this->artisan('migrate:refresh', [
        '--seed' => '1'
    ]);
    DB::beginTransaction();
    Season::create(['start_date' => Carbon::now(), 'end_date' => Carbon::now()]);
    $this->assertDatabaseHas('seasons', [
        'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
    ]);
    DB::rollback();
    $this->assertDatabaseMissing('seasons', [
        'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
    ]);
}
This should work because, until the transaction is rolled back, the database "thinks" the record is in there.
In practice, when using transactions, you want to use what's suggested in the docs, for example:
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);
    DB::table('posts')->delete();
});
This ensures atomicity of the wrapped operations and rolls back if an exception is thrown within the function body (you can also throw one yourself as a means to abort if you need to).
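For instance, a minimal sketch of aborting from inside the closure (the condition and the exception class here are illustrative, not prescribed by the docs):

DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);

    if (DB::table('posts')->count() === 0) {
        // Throwing here rolls back the update above;
        // the exception is re-thrown to the caller.
        throw new \RuntimeException('Nothing to delete, aborting.');
    }

    DB::table('posts')->delete();
});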
You cannot roll back once you commit. As I can see, you have used commit
DB::commit();
before the rollback,
so you can only roll back while the transaction has not yet been committed (or when the commit fails). You can use a try/catch block:
DB::beginTransaction();
try {
    DB::insert(...);
    DB::commit();
} catch (\Exception $e) {
    DB::rollback();
}
Emmm... you have misunderstood how transactions work.
After having begun a transaction, you can either commit it or roll it back. Committing means that all the changes you made to the database during the transaction are "finalized" (i.e. made permanent) in the database. As soon as you have committed, there is nothing left to roll back.
If you want to roll back, you have to do so before you commit. Rolling back will bring the database back to the state it was in before you started the transaction.
This means you have exactly two options:
1) Begin a transaction, then commit all changes made so far.
2) Begin a transaction, then roll back all changes made so far.
Both committing and rolling back are final actions for a transaction, i.e. they end the transaction. Once you have committed or rolled back, the transaction is finished from the database's point of view.
You could also look at this in the following way:
By starting a transaction, you are telling the database that all following changes are preliminary/temporary. After having made your changes, you can either tell the database to make those changes permanent (by committing), or tell the database to throw away (revert) the changes (by rolling back).
After you have rolled back, the changes are lost and thus cannot be committed again. After you have committed, the changes are permanent and thus cannot be rolled back. Committing and rolling back is only possible as long as the changes are in the temporary state.
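As a minimal sketch in Laravel terms (reusing the seasons table from the question), the two possible lifecycles look like this:

// Option 1: begin, change, commit - the insert becomes permanent.
DB::beginTransaction();
DB::table('seasons')->insert(['start_date' => Carbon::now(), 'end_date' => Carbon::now()]);
DB::commit();   // final: rolling back after this point has nothing to undo

// Option 2: begin, change, roll back - the insert is discarded.
DB::beginTransaction();
DB::table('seasons')->insert(['start_date' => Carbon::now(), 'end_date' => Carbon::now()]);
DB::rollBack(); // final: the row never becomes visible outside the transaction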
Related
I am having some trouble with Laravel transactions.
Laravel 9+
PHP 8+
Firebird 2.5
I have two DB connections: MySQL (default) and Firebird. MySQL works fine, like this, and I get the connection data:
DB::transaction(function ($conn) use ($request) {
    dd($conn);
});
When I try to do the same with my other connection ('firebird'), it always throws a "There is already an active transaction" error:
DB::connection('firebird')->transaction(function ($conn) use ($request) {
    dd($conn);
    $conn->table('A')->insert();
    $conn->table('B')->insert();
    $conn->table('C')->insert();
});
I tried this version too, but I get the same error if I use the 'firebird' connection:
DB::connection('firebird')->beginTransaction();
If I leave out the transaction, both work just fine, but I want to be able to roll back if there is any error. Any thoughts on why? I'm stuck on this.
Firebird always uses transactions. The transaction is started as soon as you make a change in the database and remains open for that session until you commit. Using your code, it's simply:
DB::connection('firebird')->insert();
DB::connection('firebird')->commit(); // or ->rollBack();
When you do BEGIN TRAN in SQL Server, it does not mean that you're starting the transaction now. You are already in a transaction, since you are connected to the database! What BEGIN TRAN really does is disable the "auto-commit at each statement" mode, which is the default state in SQL Server (unless otherwise specified).
Respectively, COMMIT TRAN commits and reverts the connection to the "auto-commit at each statement" state.
In any database, when you are connected, you are already in a transaction. This is how databases work. For instance, in Firebird, you can perform a commit or rollback even if you only ran a query.
Some databases and connection libraries, on the other hand, let you use the "auto-commit at each statement" connection state, which is what SQL Server is doing. As useful as that feature might be, it's not very didactic and leads beginners to think they are "not in a transaction".
The solution:
You need to turn off auto-commit (PDO::ATTR_AUTOCOMMIT) when you define the 'firebird' connection in config/database.php.
Example:
'firebird' => [
    'driver' => 'firebird',
    'host' => env('DB_FIREBIRD_HOST', '127.0.0.1'),
    'port' => env('DB_FIREBIRD_PORT', '3050'),
    'database' => env('DB_FIREBIRD_DATABASE', 'path\to\db\DEFAULT.DATABASE'),
    'username' => env('DB_FIREBIRD_USERNAME', 'username'),
    'password' => env('DB_FIREBIRD_PASSWORD', 'password'),
    'charset' => env('DB_FIREBIRD_CHARSET', 'UTF8'),
    'options' => [
        PDO::ATTR_PERSISTENT => false,
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_AUTOCOMMIT => false,
    ],
],
Then you can use Laravel transactions like:
try {
    DB::connection('firebird')->beginTransaction();
    DB::connection('firebird')->insert();
    DB::connection('firebird')->commit();
} catch (Exception $exception) {
    DB::connection('firebird')->rollBack();
    throw $exception;
}
Or you can use this, which does the commit or rollback automatically:
DB::connection('firebird')->transaction(function () use ($request) {
    DB::connection('firebird')->insert($request);
});
But don't forget: if you do this, you must start the transaction every time, even when you are just selecting some data.
DB::connection('firebird')->beginTransaction();
or you will get an SQL error like:
SQLSTATE[HY000]: General error: -901 invalid transaction handle (expecting explicit transaction start)
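So with auto-commit disabled, even a plain read has to be bracketed explicitly, roughly like this (a sketch; the table name 'A' is just an example):

DB::connection('firebird')->beginTransaction();
$rows = DB::connection('firebird')->table('A')->get();
DB::connection('firebird')->commit(); // even read-only work must be committed (or rolled back)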
Thank you everybody!
I am experiencing a strange issue where, on my local machine, my code runs perfectly fine without any errors, but on my live server (production) I am getting these errors when jobs run automatically (which go through the CLI instead of the web PHP process, I believe):
SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry
...this only seems to happen when the function that performs the SQL inserts is run via a php artisan command (or when that command is run automatically by the server/jobs/cron jobs). The inserts are done like this:
$carValues = [
    'name' => $carName,
    'car_id' => $carID,
    'count' => $count
];

if ($car) {
    // Record already exists - update it
    $car->fill($carValues);
    $car->save(); // save changes
} else {
    // Record does not exist - add new record
    $car = Car::create($carValues);
}
In the above example, 'name' has a unique key and is triggering the errors. Basically, if $car was already non-null before the above code segment, we do an update; otherwise we create the record. This works 100% of the time on my local machine. Somehow, on the live server only (when using a php artisan command or letting the command run through the CLI/scheduled jobs), it runs into these duplicate entries, but the error does not point to any specific segment of code; the exceptions are thrown as PDOException from .../vendor/laravel/framework/src/Illuminate/Database/Connection.php
I'm really confused by this one. Is there perhaps some way to have PDOException ignore these? It's stopping my scheduled jobs from running consistently, when ideally they should continue on without throwing these errors. Again, it works on my local machine (running a Homestead/Vagrant box), which matches my online server's setup.
This is a concurrency problem.
You are only experiencing this in your production environment because that is where you have set up queued execution of jobs.
This means that there might be multiple jobs that run simultaneously.
So this happens:
Job A tries to fetch $car (it does not exist).
Job B tries to fetch $car (it does not exist).
Job A then inserts it into the database.
Job B then tries to insert it too, but can't, because it has just been inserted by job A.
So you have to either add retries, or make use of INSERT ... ON DUPLICATE KEY UPDATE, which performs the "update or create" at the database level.
Note that even though Laravel has a built-in updateOrCreate() function, it is not concurrency safe either!
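For example, a sketch of the database-level approach (the table name cars and the exact column list are assumptions based on the question's model):

// Newer Laravel (8+) has a built-in upsert() that issues one atomic statement:
Car::upsert(
    [['name' => $carName, 'car_id' => $carID, 'count' => $count]],
    ['name'],             // the unique key to match on
    ['car_id', 'count']   // columns to update when the row already exists
);

// On older versions, the equivalent raw MySQL statement:
DB::statement(
    'INSERT INTO cars (name, car_id, count) VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE car_id = VALUES(car_id), count = VALUES(count)',
    [$carName, $carID, $count]
);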
An easy fix, and a way to test that this is actually the case, is to wrap your code in:
DB::transaction(function () {
    // ... your code ....
}, 5);
which will retry the transaction up to 5 times when it fails due to a concurrency error such as a deadlock.
Another way to solve this is to ensure that retry_after is longer than the time your longest job takes to execute.
So, if a job takes 120 seconds, keep this value at 180 seconds or more.
You can find the retry_after value in the config/queue.php file.
Here is what mine looks like for the Redis connection:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 240, // the longest job runs for 180 seconds; keep this longer to prevent job duplication
    'block_for' => 2,
    'after_commit' => false,
],
I'm trying to learn MongoDB transactions using the MongoDB PHP library v1.5, but I've run into some problems.
I've tried to start, commit, and abort a transaction using the given methods, but abortTransaction is not working for me:
$session = self::$instance->startSession();
$this->db = self::$instance->{"mydb"};

$session->startTransaction();
$this->db->users->deleteOne([
    '_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')
]);
$session->abortTransaction();
$session->endSession();
The transaction is always committed, even after the abort action!
What am I missing here? Please save my day :(
The transaction is always committed, even after the abort action
This is because the delete operation doesn't use the session object that you have instantiated. You need to pass the session in the $options parameter of MongoDB\Collection::deleteOne(); otherwise it will execute outside of the transaction. For example:
$session->startTransaction();
$this->db->users->deleteOne(
    ['_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')],
    ['session' => $session]
);
See also MongoDB Transactions for more information.
I need to trigger a Laravel job within a transaction.
Since jobs are asynchronous, sometimes they complete before the transaction commits. In such situations, the job cannot fetch the relevant row using the id (because the transaction is not yet committed and the changes are not visible from outside).
Please suggest a method, other than moving this part outside of the transaction, to solve this problem.
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
    "first_name" => $first_name,
    "last_name" => $last_name
]);

$job = (new SendEmailJob([
    'Table' => 'trn_users',
    'Id' => $process
]))->onQueue('email_send_job');

$this->dispatch($job);
...
DB::commit();
For this purpose I've published a package: http://github.com/therezor/laravel-transactional-jobs
The other option is to use events:
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
    "first_name" => $first_name,
    "last_name" => $last_name
]);

$job = (new SendEmailJob([
    'Table' => 'trn_users',
    'Id' => $process
]))->onQueue('email_send_job');

Event::listen(\Illuminate\Database\Events\TransactionCommitted::class, function () use ($job) {
    $this->dispatch($job);
});
...
DB::commit();
I recently solved this problem in a project.
I simply defined a "buffer" facade singleton with a dispatch() method which, instead of dispatching right away, buffers jobs in memory until the transaction commits.
When the buffer class is constructed, it registers event listeners for the commit and rollback events, and either dispatches or forgets the buffered jobs depending on which event fires.
It also does some clever stuff around the actual transaction level, working out whether it needs to buffer or dispatch immediately.
Hopefully you get the idea, but let me know if you want me to go into more detail.
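A rough sketch of the idea, using the same TransactionCommitted/TransactionRolledBack events as the answer above (the class name and structure are illustrative, not the actual code):

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;
use Illuminate\Database\Events\TransactionCommitted;
use Illuminate\Database\Events\TransactionRolledBack;

class JobBuffer // register as a singleton in a service provider
{
    private $jobs = [];

    public function __construct()
    {
        // On commit the rows are visible, so release the buffered jobs.
        Event::listen(TransactionCommitted::class, function () {
            foreach ($this->jobs as $job) {
                dispatch($job);
            }
            $this->jobs = [];
        });

        // On rollback the data never made it in, so forget the jobs.
        Event::listen(TransactionRolledBack::class, function () {
            $this->jobs = [];
        });
    }

    public function dispatch($job)
    {
        if (DB::transactionLevel() > 0) {
            $this->jobs[] = $job; // inside a transaction: hold the job
        } else {
            dispatch($job);       // no transaction: dispatch immediately
        }
    }
}

Note that nested transactions need extra care (the events can fire per transaction level), which is the "clever stuff around the actual transaction level" mentioned above.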
I saw some similar discussions, obviously, but couldn't find a solution (if there is one). I have a project running on a Linux machine.
The problem is that working with the database takes forever. For example, 1000 inserts take approximately 10 seconds.
I tried reducing the time in different ways with little success, so I thought I'd just post part of my code here; maybe there is something critical I'm not doing right.
First of all, in main.php the database is configured like this:
'db' => array(
    'pdoClass' => 'NestedPDO',
    'connectionString' => 'sqlite:/tmp/mydb.db',
    'class' => 'CDbConnection',
    'schemaCachingDuration' => 100,
),
I work with the database in the following way:
$connection = Yii::app()->db;
$transaction = $connection->beginTransaction();
try
{
    // Some Code..
    $transaction->commit();
}
catch (Exception $ex)
{
    $transaction->rollback();
}
Inside "Some Code" there can be calls to different functions to which the connection variable is passed.
Finally, each SQLite command (for example, one of the 1000 inserts) is written like this:
$statement = 'insert into my_tbl (id, name) VALUES(:id, :name)';
$command = $connection->createCommand($statement);
$command->bindParam(":id", $id);
$command->bindParam(":name", $name);
$command->execute();
First, find out where the bottleneck is, for example by timing the commit:

$t_start = microtime(true);
$transaction->commit();
$elapsed = microtime(true) - $t_start; // seconds spent committing
I have had somewhat good success by committing more often.
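For example, you can prepare the command once and commit in batches (a sketch against the question's table; the $rows array and the batch size of 200 are assumptions):

$connection = Yii::app()->db;
$command = $connection->createCommand(
    'insert into my_tbl (id, name) VALUES(:id, :name)'
);

$transaction = $connection->beginTransaction();
try
{
    foreach ($rows as $i => $row)
    {
        $command->bindValue(':id', $row['id']);
        $command->bindValue(':name', $row['name']);
        $command->execute();

        // Commit in chunks so no single transaction grows too large.
        if (($i + 1) % 200 === 0)
        {
            $transaction->commit();
            $transaction = $connection->beginTransaction();
        }
    }
    $transaction->commit();
}
catch (Exception $ex)
{
    $transaction->rollback();
}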