How can you run into DEADLOCKS without LOCKS? - php

I have a class in my Laravel app which goes over a set of users and updates them within a transaction, but without locks.
The code looks roughly like this:
DB::transaction(function () {
    // ...
    foreach ($groupOfUsers as $user) {
        Car::where('user_id', '=', $user->id)->update(['color' => 'red']);
    }
});
Now I use Paratest, which runs multiple processes for my integration tests of the above class. All processes use the same database.
Every time, one of my tests for the above class fails with a DEADLOCK. But I don't understand how this is possible. I thought deadlocks could only occur if I actually lock rows for update or use a shared lock.
How can you create a DEADLOCK with updates only?

Without the foreach, it is just:
DB::transaction(function () {
    // ...
    Car::where('user_id', '=', $user->id)->update(['color' => 'red']);
});
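For context, here is a minimal sketch of how this can deadlock, assuming MySQL/InnoDB, where every UPDATE implicitly takes exclusive row locks that are held until commit (the ordering fix at the end is an illustration, not part of the original post):
// Two parallel transactions that update the same rows in a different
// order each end up waiting on a lock the other already holds:
//
//   Process A                           Process B
//   UPDATE cars ... WHERE user_id = 1   UPDATE cars ... WHERE user_id = 2
//   UPDATE cars ... WHERE user_id = 2   UPDATE cars ... WHERE user_id = 1
//   (waits for B)                       (waits for A)  => deadlock
//
// Iterating the users in a consistent order makes every process acquire
// row locks in the same sequence, which removes the cycle:
DB::transaction(function () use ($groupOfUsers) {
    foreach ($groupOfUsers->sortBy('id') as $user) {
        Car::where('user_id', '=', $user->id)->update(['color' => 'red']);
    }
});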

Related

Laravel Eloquent pessimistic lock to update balance

I have a table that contains balances, which are deducted when a user purchases an item. I'm trying to lock to make sure the balance is valid and deducted properly. I have a loop that takes multiple balances and decreases them one by one in a request. What I've done is wrap my code in a DB::transaction as shown below, but this seems wrong, as the loop might take longer than expected and block every other user trying to update these balances (which are shared between many users that can edit simultaneously).
$balances->map(function ($balance) {
    // Check if balance is valid
    if (!Balances::where('id', $balance->id)->deductBalance($balance->balance)) {
        return response()->json(['message' => 'Insufficient '], 400);
    }
    DB::create([
        ....
    ]);
});
deductBalance:
public function deductBalance($balance) {
    $this->balance -= $balance;
    if ($this->balance >= 0) {
        $this->save();
        return true;
    }
    return false;
}
I added this, as shown in the documentation:
DB::transaction(function () {
    // Check if balance is valid
    if (!Balances::where('id', $balance->id)->deductBalance($balance->balance)) {
        return response()->json(['message' => 'Insufficient '], 400);
    }
}, 5);
Imagine 5 users trying to update 5 balances at the exact same time: is the solution above sufficient to prevent a balance from going negative? I can already see a problem in this loop: if one balance is invalid, the request is rejected, but all previous balances have already been deducted. Should I have another loop within the transaction that checks all balances first and only then updates them?
Thanks.
DB::transaction() doesn't lock anything. It just makes sure to roll back every query inside the transaction if one of them fails.
What you should do if you want to lock a resource is the following:
DB::transaction(function () use ($reserva, $validated) {
    // lock
    $balance = Balances::lockForUpdate()->find($balance->id); // or ->where(...)->get() if it's more than 1 balance.
    ...
});
lockForUpdate() inside a transaction prevents any query from affecting the selected resources until the transaction is over.
In this case, update/delete queries that affect the Balance with id = 1 will be put on hold. OTHER balances (id != 1) are not locked and can still be updated.
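To the asker's follow-up about validating everything before deducting anything, here is a sketch of how that could look (not from the original answer; it reuses the asker's Balances model and deductBalance() method, everything else is illustrative):
DB::transaction(function () use ($balances) {
    // Lock every involved row for the duration of the transaction.
    $locked = Balances::whereIn('id', $balances->pluck('id'))
        ->lockForUpdate()
        ->get()
        ->keyBy('id');

    // First pass: validate all balances before touching any of them.
    foreach ($balances as $balance) {
        if ($locked[$balance->id]->balance < $balance->balance) {
            // Nothing has been written yet; throwing makes
            // DB::transaction() roll back and release the locks.
            throw new \RuntimeException('Insufficient balance');
        }
    }

    // Second pass: every deduction is known to be valid, so apply them.
    foreach ($balances as $balance) {
        $locked[$balance->id]->deductBalance($balance->balance);
    }
});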
EDIT
Here is an easy way to see database locks in action.
Run php artisan tinker in one terminal and run the following code:
DB::transaction(function () {
    SomeTestModel::lockForUpdate()->find(1)->update(['attribute' => 'value']);
    sleep(60); // artificially increase how long the transaction takes by 60 seconds.
});
Do not close the terminal.
On another terminal, run php artisan tinker and run the following code:
// Will run instantly
SomeTestModel::find(2)->update(['attribute' => 'value']);
// Will not run until transaction on previous terminal is done. Might even throw a lock wait timeout, showing the resource is properly locked.
SomeTestModel::find(1)->update(['attribute' => 'value']);

Laravel Transactions Not working when this condition happens

Let me start with my code
In my controller file, this is the code:
namespace Something\Somewhere\Controller {
    class Mobile extends Controller {
        public function saveMobilesIntoDb(Request $request, MediaManager $manager) {
            $requestMobileData = $request->all();
            DB::beginTransaction();
            try {
                /* Do something */
                // ...
                $bigMediaArray = $manager->mobileImagesManager($media, $insertedMobile, $mediaSlug);
                // ...
                DB::commit();
            } catch (\Exception $exception) {
                DB::rollback();
                dump($exception);
            }
        }
    }
}
Notice I am using a service there. The service is nothing but a namespace to organize the code; in the service class, this is what's happening:
namespace Something\Somewhere\Service {
    class MediaManager {
        public function mobileImagesManager($media, $mobileId, $slug) {
            // Do some stuff
            // Create folder
            return $array;
        }
    }
}
Now the issue: when I get some error in the service and I resend the data, then, supposing the last id inserted into the database was 5 before the error came, the insert didn't roll back, so the new row is saved with id 7. I don't want this to happen. I know the rollback is not working when I am not in the scope, but what I tried so far is:
I wrapped the service function in a try-catch and in the catch I used DB::rollback(), but it didn't help.
Please let me know how I can solve this and roll back everything when I am not in the scope.
Thank you for your time.
As Alex said, per the official MySQL docs, the auto-incremented ID will not roll back after a transaction failure:
In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are “lost.” Once a value is generated for an auto-increment column, it cannot be rolled back, whether or not the “INSERT-like” statement is completed, and whether or not the containing transaction is rolled back. Such lost values are not reused. Thus, there may be gaps in the values stored in an AUTO_INCREMENT column of a table.
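On the rollback-scope problem itself, a minimal sketch, assuming the service uses the same database connection (the elided variables stay elided): the closure-based DB::transaction() rolls back every statement issued on the connection, including those made inside the service, so no manual rollback inside the service is needed. Only the auto-increment gaps described in the quote remain.
public function saveMobilesIntoDb(Request $request, MediaManager $manager) {
    DB::transaction(function () use ($request, $manager) {
        /* Do something */
        // ...
        // The service's queries join the same transaction, so an exception
        // thrown anywhere in here rolls back the earlier inserts as well.
        $bigMediaArray = $manager->mobileImagesManager($media, $insertedMobile, $mediaSlug);
        // ...
    });
}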

How to lock database for Laravel's `firstOrCreate`?

We currently encounter a Duplicate entry QueryException when executing the following code:
Slug::firstOrCreate([
    Slug::ENTITY_TYPE => $this->getEntityType(),
    Slug::SLUG => $slug
], [
    Slug::ENTITY_ID => $this->getKey()
]);
Since Laravel's firstOrCreate method first checks whether an entry with the given attributes exists before inserting it, this exception should never occur. However, we have an application with millions of visitors and millions of actions every day, and we therefore use a master DB connection with two slaves for reading. It is therefore possible that race conditions occur.
We tried separating the query and forcing the master connection for reading:
$slugModel = Slug::onWriteConnection()->where([
    Slug::SLUG => $slug,
    Slug::ENTITY_TYPE => $this->getEntityType()
])->first();

if ($slugModel && $slugModel->entity_id !== $this->getKey()) {
    $class = get_class($this);
    throw new \RuntimeException("The slug [{$slug}] already exists for a model of type [{$class}].");
}

if (!$slugModel) {
    return $this->slugs()->create([
        Slug::SLUG => $slug,
        Slug::ENTITY_TYPE => $this->getEntityType()
    ]);
}
However, the exception still occurs sometimes.
Our next approach would be to lock the table before the read check and release the lock after the write, to prevent any inserts with the same slug from other database actions between our read and our write. Does anyone know how to solve this? I don't really understand how Laravel's Pessimistic Locking can help solve the issue. We use MySQL for our database.
I would not recommend locking the table, especially if you have millions of visitors.
Most race conditions can be fixed by locks, but this one is not fixable with locks, because you cannot lock a row that does not exist (there is something like gap locking, but it won't help here).
Laravel does not handle race conditions by itself. If you call firstOrCreate it runs two queries:
SELECT item where slug=X and entity_type=Y
If it does not exist, create it
Now, because we have two queries, a race condition is possible: two users reach step 1 in parallel, then both try to create the entry in step 2, and your system will crash.
Since you already get a Duplicate Key error, it means you already put a unique constraint on the two columns that identify your row, which is good.
What you could do now is catch the duplicate key error, like this:
try {
    $slug = Slug::firstOrCreate([
        Slug::ENTITY_TYPE => $this->getEntityType(),
        Slug::SLUG => $slug
    ], [
        Slug::ENTITY_ID => $this->getKey()
    ]);
} catch (\Illuminate\Database\QueryException $e) {
    $errorCode = $e->errorInfo[1];
    if ($errorCode == 1062) { // 1062 = MySQL duplicate-entry error
        $slug = Slug::where('slug', '=', $slug)->where('entity_type', '=', $this->getEntityType())->first();
    }
}
One solution for this is to use a Laravel queue and make sure that it runs one job at a time; this way you will never have two identical queries at the same time.
Of course, this will not work if you want to return the result in the same request.
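A sketch of that queue idea (the CreateSlug job class, its namespace, and the slugs queue name are hypothetical; Slug and its constants come from the question). With exactly one worker consuming the queue, the jobs, and therefore the firstOrCreate calls, run strictly one after another:
<?php

namespace App\Jobs;

use App\Models\Slug; // assumed model location
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class CreateSlug implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        private string $entityType,
        private string $slug,
        private int $entityId
    ) {}

    public function handle(): void
    {
        Slug::firstOrCreate([
            Slug::ENTITY_TYPE => $this->entityType,
            Slug::SLUG => $this->slug,
        ], [
            Slug::ENTITY_ID => $this->entityId,
        ]);
    }
}

// Dispatch instead of creating the slug inline, then run a single worker:
//   CreateSlug::dispatch($this->getEntityType(), $slug, $this->getKey())->onQueue('slugs');
//   php artisan queue:work --queue=slugs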

laravel migrations leave DB in an invalid state

If a migration fails halfway through for any reason (e.g. a typo), it commits half the migration and leaves the rest out. It doesn't seem to try to roll back what it just did (either by rolling back an encompassing transaction, or by calling down()).
If you try to manually roll back the last migration, e.g. php artisan migrate:rollback --step=1, it rolls back only the migration before last, i.e. the one before the one which failed.
Consider this migration:
public function up()
{
    DB::table('address')->insert(['id' => 1, 'street' => 'Demo', 'country_id' => 83]);
    DB::table('customer')->insert(['id' => 1, 'username' => 'demo', 'address_id' => 1]);
}

public function down()
{
    DB::table('customer')->where('id', 1)->delete();
    DB::table('address')->where('id', 1)->delete();
}
If the insert of the customer fails (e.g. we forgot to set a non-null column, made a typo, or a record exists when it should not), the address record WAS inserted.
migrate:rollback doesn't roll back this migration; it rolls back the one before, and we are left with a spurious orphaned address record. Obviously we can drop and re-create the DB and run the migrations from scratch, but that's not the point: migrations should not leave half the migration done and the DB in an invalid state.
Is there a solution? E.g. can one put transactions in the migration so that it inserts all or nothing?
If we look in the migrations table after the half done migration has failed, it is not there.
NOTE: we use migrations to insert (and modify/delete) static data which the application requires to run. It is not dev data or test data, e.g. country data, currency data, as well as admin operators, etc.
You should run these migrations inside a transaction:
DB::transaction(function () {
    // Your code goes here.
});
or you can use a try/catch block:
try {
    DB::beginTransaction();
    // Your code goes here ...
    DB::commit();
} catch (\Exception $e) {
    DB::rollBack();
}
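Applied to the migration from the question, that could look like this sketch (table and column names are the question's own; this works because the statements are plain inserts, not DDL):
public function up()
{
    DB::transaction(function () {
        DB::table('address')->insert(['id' => 1, 'street' => 'Demo', 'country_id' => 83]);
        // If this insert throws, the address insert above is rolled back too.
        DB::table('customer')->insert(['id' => 1, 'username' => 'demo', 'address_id' => 1]);
    });
}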

Laravel migration transaction

When developing, I'm having many issues with migrations in Laravel.
I create a migration. When I finish creating it, there's a small error in the middle of the migration (say, a foreign key constraint) that makes php artisan migrate fail. It tells me where the error is, indeed, but then migrate gets into an inconsistent state, where all the modifications to the database made before the error are applied, and the later ones are not.
This means that when I fix the error and re-run migrate, the first statement fails, as the column/table is already created/modified. Then the only solution I know is to go to my database and "roll back" everything by hand, which takes far longer.
migrate:rollback tries to roll back the previous migrations, as the current one was not applied successfully.
I also tried to wrap all my code in a DB::transaction(), but it still doesn't work.
Is there any solution for this? Or do I just have to keep rolling things back by hand?
edit, adding an example (not writing Schema builder code, just some kind of pseudo-code):
Migration1:
Create Table users (id, name, last_name, email)
Migration1 executed OK. Some days later we make Migration 2:
Create Table items (id, user_id references users.id)
Alter Table users make_some_error_here
Now what will happen is that migrate will run the first statement and create the table items with its foreign key to users. Then, when it tries to apply the next statement, it will fail.
If we fix the make_some_error_here, we can't run migrate because the table items is already created. We can't rollback (nor refresh, nor reset), because we can't delete the table users, since there's a foreign key constraint from the table items.
Then the only way to continue is to go to the database and delete the table items by hand, to get migrate into a consistent state.
It is not a Laravel limitation. I bet you use MySQL, right?
As the MySQL documentation says here:
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines.
And we have a recommendation from Taylor Otwell himself here, saying:
My best advice is to do a single operation per migration so that your migrations stay very granular.
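A sketch of that advice applied to the pseudo-code example above, using the Schema builder (file names and column details are assumed): each risky change lives in its own migration, so a failure leaves only that one migration to fix and retry.
// database/migrations/xxxx_create_items_table.php
public function up()
{
    Schema::create('items', function (Blueprint $table) {
        $table->id();
        $table->foreignId('user_id')->constrained('users');
    });
}

// database/migrations/xxxx_alter_users_table.php (a separate migration)
public function up()
{
    Schema::table('users', function (Blueprint $table) {
        // the error-prone alteration goes here, isolated from the items
        // migration so it can fail and be retried on its own
    });
}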
-- UPDATE --
Do not worry!
Best practice says: you should never make a breaking change.
It means that in one deployment you create new tables and fields and deploy a new release that uses them. In the next deployment, you delete unused tables and fields.
Now, even if you get a problem in either of these deployments, don't worry if your migration failed; the working release uses the functional data structure anyway. And with a single operation per migration, you'll find the problem in no time.
I'm using MySQL and I'm having this problem.
My solution depends on your down() method doing exactly what you do in up(), but backwards.
This is what I've got:
try {
    Schema::create('table1', function (Blueprint $table) {
        //...
    });
    Schema::create('table2', function (Blueprint $table) {
        //...
    });
} catch (PDOException $ex) {
    $this->down();
    throw $ex;
}
So here, if something fails, it automatically calls the down() method and re-throws the exception.
Instead of wrapping the migration in transaction(), wrap it in this try.
As Yevgeniy Afanasyev highlighted, Taylor Otwell recommends (an approach I had already taken myself): have your migrations only work on specific tables or do a specific operation such as adding/removing a column or key. That way, when you get failed migrations that cause inconsistent states like this, you can just drop the table and attempt the migration again.
I’ve experienced exactly the issue you’ve described, but as of yet haven’t found a way around it.
Just remove the failed code from the migration file and generate a new migration for the failed statement. Now, when it fails again, the creation of the database is still intact, because it lives in another migration file.
Another advantage of this approach is that you have more control and smaller steps while reverting the DB.
Hope that helps :D
I think the best way to do it is as shown in the documentation:
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);
    DB::table('posts')->delete();
});
See: https://laravel.com/docs/5.8/database#database-transactions
I know it's an old topic, but there was activity a month ago, so here are my 2 cents.
This answer is for MySQL 8 and Laravel 5.8.
MySQL 8 introduced atomic DDL: https://dev.mysql.com/doc/refman/8.0/en/atomic-ddl.html
At the start of a migration, Laravel checks whether the schema grammar supports migrations in a transaction and, if it does, runs the migration inside one.
The problem is that the MySQL schema grammar has this set to false. We can extend the Migrator, the MySQL schema grammar, and the MigrationServiceProvider, and register the service provider, like so:
<?php

namespace App\Console;

use Illuminate\Database\Migrations\Migrator as BaseMigrator;
use App\Database\Schema\Grammars\MySqlGrammar;

class Migrator extends BaseMigrator {
    protected function getSchemaGrammar( $connection ) {
        if ( get_class( $connection ) === 'Illuminate\Database\MySqlConnection' ) {
            $connection->setSchemaGrammar( new MySqlGrammar );
        }
        if ( is_null( $grammar = $connection->getSchemaGrammar() ) ) {
            $connection->useDefaultSchemaGrammar();
            $grammar = $connection->getSchemaGrammar();
        }
        return $grammar;
    }
}
<?php

namespace App\Database\Schema\Grammars;

use Illuminate\Database\Schema\Grammars\MySqlGrammar as BaseMySqlGrammar;

class MySqlGrammar extends BaseMySqlGrammar {
    public function __construct() {
        $this->transactions = config( "database.transactions", false );
    }
}
<?php

namespace App\Providers;

use Illuminate\Database\MigrationServiceProvider as BaseMigrationServiceProvider;
use App\Console\Migrator;

class MigrationServiceProvider extends BaseMigrationServiceProvider {
    /**
     * Register the migrator service.
     * @return void
     */
    protected function registerMigrator() {
        $this->app->singleton( 'migrator', function( $app ) {
            return new Migrator( $app[ 'migration.repository' ], $app[ 'db' ], $app[ 'files' ] );
        } );
        $this->app->singleton( \Illuminate\Database\Migrations\Migrator::class, function ( $app ) {
            return $app[ 'migrator' ];
        } );
    }
}
<?php

// config/app.php
return [
    'providers' => [
        /*
         * Laravel Framework Service Providers...
         */
        App\Providers\MigrationServiceProvider::class,
    ],
];
Of course, we have to add transactions to our database config...
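For example, a sketch of that flag; the key name matches the config("database.transactions", false) call in the grammar above, and the rest of the file is elided:
// config/database.php
return [
    // ...
    'transactions' => true, // read by the custom MySqlGrammar above
    // ...
];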
DISCLAIMER - Haven't tested yet, but looking only at the code it should work as advertised :) Update to follow when I test...
Most of the answers overlook a very important fact: there is a very simple way to structure your development to guard against this. If you make all migrations reversible and add as much of the dev testing data as possible through seeders, then when artisan migrate fails on the dev environment you can correct the error and run:
php artisan migrate:fresh --seed
Optionally coupled with a :rollback to test rolling back.
For me personally, artisan migrate:fresh --seed is the second most used artisan command after artisan tinker.
