I have a table that contains balances which are deducted when a user purchases an item, and I'm trying to lock to make sure the balance is valid and deducted properly. I have a loop that takes multiple balances and decreases them one by one within a single request. What I've done is wrap my code in a DB::transaction as shown below, but this seems wrong: the loop might take longer than expected, and every other user trying to update one of these balances (which are shared and can be edited by many users simultaneously) would be held up for that long.
$balances->map(function ($balance) {
    // Check if balance is valid
    if (!Balances::find($balance->id)->deductBalance($balance->balance)) {
        return response()->json(['message' => 'Insufficient balance'], 400);
    }

    DB::create([
        ....
    ]);
});
deductBalance:
public function deductBalance($balance)
{
    $this->balance -= $balance;

    if ($this->balance >= 0) {
        $this->save();
        return true;
    }

    return false;
}
I added this, as shown in the documentation:
DB::transaction(function () use ($balance) {
    // Check if balance is valid
    if (!Balances::find($balance->id)->deductBalance($balance->balance)) {
        return response()->json(['message' => 'Insufficient balance'], 400);
    }
}, 5);
Imagine 5 users try to update 5 balances at exactly the same time: is the solution above enough to prevent a balance from going negative? I can already see a problem in this loop: if one balance is invalid the request is rejected, but all previous balances have already been deducted. Should I have another loop inside the transaction that checks all balances first and only then updates them?
Thanks.
DB::transaction() doesn't lock anything. It just makes sure to roll back every query inside the transaction if one of them fails.
What you should do if you want to lock a resource is the following:
DB::transaction(function () use ($balance) {
    // lock
    $balance = Balances::lockForUpdate()->find($balance->id); // or ->where(...)->get() if it's more than one balance.

    // ...
});
lockForUpdate() inside a transaction prevents other queries from updating, deleting, or lock-reading the selected rows until the transaction is over.
In this case, update/delete queries that have an effect on the Balance with id = 1 will be put on hold. OTHER balances (id != 1) can be updated and are not locked.
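Applied to your balance loop, a minimal sketch could look like this (it keeps your existing deductBalance() method and simply assumes the whole loop runs inside one transaction, so a failure rolls every deduction back; the message and exception type are just placeholders):
DB::transaction(function () use ($balances) {
    foreach ($balances as $balance) {
        // Lock this balance row until the transaction commits or rolls back
        $locked = Balances::lockForUpdate()->findOrFail($balance->id);

        if (!$locked->deductBalance($balance->balance)) {
            // Throwing inside the closure makes DB::transaction() roll back,
            // so deductions already made earlier in this loop are undone too
            throw new \RuntimeException('Insufficient balance');
        }
    }
});
Catch the exception in the controller and turn it into the 400 response there.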
EDIT
Here is an easy way to see database locks in action.
Run php artisan tinker in one terminal and run the following code:
DB::transaction(function () {
    SomeTestModel::lockForUpdate()->find(1)->update(['attribute' => 'value']);

    sleep(60); // artificially increase how long the transaction takes by 60 seconds.
});
Do not close the terminal.
On another terminal, run php artisan tinker and run the following code:
// Will run instantly
SomeTestModel::find(2)->update(['attribute' => 'value']);
// Will not run until transaction on previous terminal is done. Might even throw a lock wait timeout, showing the resource is properly locked.
SomeTestModel::find(1)->update(['attribute' => 'value']);
Related
I tried to make my first query return affected rows: 0 to see if the transaction fails, but it continued executing the second query.
Should I break the transaction manually?
DB::transaction(function () {
    User::where('id', 1002)->update(['name' => 'x']); // id 1002 doesn't exist
    Post::where('user_id', 1)->update(['title' => 'New Title']);
});
There's not a lot of context around your sample code, but a very basic approach would be something like this:
$user = User::findOrFail(1002);
$user->update(['name' => 'x']);

if ($user->wasChanged('name')) {
    Post::where('user_id', 1)->update(['title' => 'New Title']);
}
So the first line will throw an exception if the model isn't found. Then we do an update. You specifically said you were checking for 0 affected rows, so next we use the wasChanged() method. It "determines if any attributes were changed when the model was last saved within the current request cycle." If that's the case, we proceed with the next update.
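If you specifically want to keep both updates inside DB::transaction() and abort it when the first update touches nothing, one hedged option (relying on the fact that update() returns the number of affected rows and that DB::transaction() rolls back when the closure throws) is:
DB::transaction(function () {
    // update() returns the number of affected rows
    $affected = User::where('id', 1002)->update(['name' => 'x']);

    if ($affected === 0) {
        // Throwing inside the closure rolls the whole transaction back
        throw new \RuntimeException('User 1002 was not updated');
    }

    Post::where('user_id', 1)->update(['title' => 'New Title']);
});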
There are other changes that could be made involving, for example, route model binding if there was more of your code shown in the question, but hopefully this is a helpful start.
I have a class in my Laravel app which goes over a set of users and updates them within a transaction, but without any lock.
The code looks roughly like this:
DB::transaction(function () use ($groupOfUsers) {
    // ...
    foreach ($groupOfUsers as $user) {
        Car::where('user_id', '=', $user->id)->update(['color' => 'red']);
    }
});
Now I used Paratest, which runs multiple processes for my integration tests of the above class. All processes use the same database.
Every time, one of my tests for the above class fails with a DEADLOCK. But I don't understand how this is possible. I thought a deadlock can only occur if I actually lock rows for update or use a shared lock.
How can you create a DEADLOCK with updates only?
It also happens without the foreach, with just:
DB::transaction(function () {
    // ...
    Car::where('user_id', '=', $user->id)->update(['color' => 'red']);
});
We currently encounter a Duplicate entry QueryException when executing the following code:
Slug::firstOrCreate([
    Slug::ENTITY_TYPE => $this->getEntityType(),
    Slug::SLUG => $slug
], [
    Slug::ENTITY_ID => $this->getKey()
]);
Since Laravel's firstOrCreate method first checks whether an entry with the given attributes exists before inserting it, this exception should never occur. However, we have an application with millions of visitors and millions of actions every day, and we therefore use a master DB connection with two slaves for reading. Because of that, some race conditions might occur.
We then tried to separate the query and force the master connection for reading:
$slugModel = Slug::onWriteConnection()->where([
    Slug::SLUG => $slug,
    Slug::ENTITY_TYPE => $this->getEntityType()
])->first();

if ($slugModel && $slugModel->entity_id !== $this->getKey()) {
    $class = get_class($this);
    throw new \RuntimeException("The slug [{$slug}] already exists for a model of type [{$class}].");
}

if (!$slugModel) {
    return $this->slugs()->create([
        Slug::SLUG => $slug,
        Slug::ENTITY_TYPE => $this->getEntityType()
    ]);
}
However, the exception still occurs sometimes.
Our next approach would be to lock the table before the read check and release the lock after the write, to prevent any inserts with the same slug from other database actions between our read and our write. Does anyone know how to solve this? I don't really understand how Laravel's pessimistic locking can help solve the issue. We use MySQL for our database.
I would not recommend locking the table, especially if you have millions of visitors.
Most race conditions can be fixed by locks, but this one is not fixable with locks, because you cannot lock a row that does not exist (there is something like gap locking, but it won't help here).
Laravel does not handle race conditions by itself. If you call firstOrCreate it does two queries:
SELECT item where slug=X and entity_type=Y
If it does not exist, create it
Now, because we have two queries, a race condition is possible: two users in parallel reach step 1, then both try to create the entry in step 2, and your system will crash.
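Spelled out in Eloquent terms, firstOrCreate behaves roughly like the following sketch (simplified, not the framework's exact code), which shows where the race window sits:
$slugModel = Slug::where(Slug::ENTITY_TYPE, $this->getEntityType())
    ->where(Slug::SLUG, $slug)
    ->first();                          // step 1: SELECT

if (!$slugModel) {
    // A parallel request can insert the same slug right here,
    // between the SELECT above and the INSERT below.
    $slugModel = Slug::create([         // step 2: INSERT
        Slug::ENTITY_TYPE => $this->getEntityType(),
        Slug::SLUG => $slug,
        Slug::ENTITY_ID => $this->getKey(),
    ]);
}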
Since you already have a Duplicate Key error, it means you already put a unique constraint on the two columns that identify your row, which is good.
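For reference, such a constraint usually comes from a migration along these lines (a sketch only; the slugs table name and column names are assumptions based on your snippet):
Schema::table('slugs', function (Blueprint $table) {
    // Composite unique key: the same slug may exist only once per entity type
    $table->unique(['entity_type', 'slug']);
});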
What you could do now is catch the duplicate key error like this:
try {
    $slug = Slug::firstOrCreate([
        Slug::ENTITY_TYPE => $this->getEntityType(),
        Slug::SLUG => $slug
    ], [
        Slug::ENTITY_ID => $this->getKey()
    ]);
} catch (\Illuminate\Database\QueryException $e) {
    $errorCode = $e->errorInfo[1];

    if ($errorCode == 1062) {
        // 1062 = MySQL duplicate entry: another request created the slug first, so just fetch it
        $slug = Slug::where('slug', '=', $slug)->where('entity_type', '=', $this->getEntityType())->first();
    }
}
One solution for this is to use a Laravel queue and make sure it runs one job at a time; that way you will never have two identical queries running at the same time.
Of course, this will not work if you need to return the result within the same request.
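As a rough sketch of that idea (CreateSlugJob is a hypothetical job class that wraps the firstOrCreate call; the queue name is arbitrary):
// Dispatch the write to a dedicated queue...
CreateSlugJob::dispatch($this->getEntityType(), $slug, $this->getKey())
    ->onQueue('slugs');

// ...and process that queue with exactly one worker, so the check and the
// insert for a slug never run in parallel:
// php artisan queue:work --queue=slugs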
We have an API function which checks a condition on the database with a select query, and if it is true we want to insert something into the database just once (for example, inserting a record that the insertion was done). The problem is that when we call this API function multiple times concurrently, a race condition happens. In other words, assume we call this function twice: the first request checks the condition and it's true, then the second request checks it and it's still true, so both do the insert. But we want that, once we check the condition, no one else can check it again until we have done the insertion.
We use PHP/Laravel and know about some approaches like insert into ... select, or something like replace into ..., and so on.
$order = Order::find($orderId);
$logRefer = $order->user->logrefer;

if (!is_null($logRefer) && is_null($logRefer->user_turnover_id)) {
    $userTurnover = new UserTurnover();
    $userTurnover->user_id = $logRefer->referrer_id;
    $userTurnover->order_id = $order->id;
    $userTurnover->save();

    $logRefer->order_id = $order->id;
    $logRefer->user_turnover_id = $userTurnover->id;
    $logRefer->save();
}
If the logrefer's user-turnover is not set yet, we set it and create the corresponding user-turnover, just once. We expect to see just one user-turnover related to this order, but after running this multiple times concurrently we see that multiple user-turnovers have been inserted.
I usually take advantage of transactions when operations need to be sequential, but I think your case is a bit more complex because the condition itself has to be evaluated while the function is running. The idea I can give you is to have a variable in the database (another table) used as a semaphore, which allows or forbids performing actions on the table (the condition being whether the semaphore value is set or unset). As a programmer, I think semaphores are useful in a lot of concurrency scenarios.
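A rough sketch of that semaphore idea, assuming a semaphores table with name and locked columns (all names here are made up for illustration); the conditional UPDATE is atomic, so only one request can acquire it:
$acquired = DB::table('semaphores')
    ->where('name', 'logrefer')
    ->where('locked', 0)
    ->update(['locked' => 1]);   // returns 1 only for the request that wins the flip

if ($acquired) {
    try {
        // ... check the condition and insert the user-turnover here ...
    } finally {
        // Release the semaphore even if the insert fails
        DB::table('semaphores')->where('name', 'logrefer')->update(['locked' => 0]);
    }
}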
The database should have a unique key on columns expected to be unique, even if some mechanism in the code prevents duplicates.
Wrap the connected queries in a transaction, which will fail and roll back in the event of a race condition:
try {
    DB::transaction(function () use ($orderId) {
        $order = Order::find($orderId);
        // ...
        $logRefer->save();
    });
} catch (\Illuminate\Database\QueryException $ex) {
    Log::error("Failed to write to database");
}
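For the first point, a sketch of such a unique key (the user_turnovers table name is assumed; the invariant enforced is one turnover row per order):
Schema::table('user_turnovers', function (Blueprint $table) {
    // Duplicate inserts for the same order now fail at the database level
    $table->unique('order_id');
});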
If a migration fails halfway through for any reason (e.g. a typo), it commits half the migration and leaves the rest out. It doesn't seem to try to roll back what it just did (either by rolling back an encompassing transaction or by calling down()).
If you try to manually roll back the last migration, e.g. php artisan migrate:rollback --step=1, it rolls back only the migration before last, i.e. the one before the one which failed.
Consider this migration:
public function up()
{
    DB::table('address')->insert(['id' => 1, 'street' => 'Demo', 'country_id' => 83]);
    DB::table('customer')->insert(['id' => 1, 'username' => 'demo', 'address_id' => 1]);
}

public function down()
{
    DB::table('customer')->where('id', 1)->delete();
    DB::table('address')->where('id', 1)->delete();
}
If the insert of the customer fails (e.g. we forgot to set a non-null column, made a typo, or a record exists when it should not), the address record WAS inserted.
migrate:rollback doesn't roll back this migration; it rolls back the one before, and we are left with a spurious orphaned address record. Obviously we can drop and re-create the DB and run the migrations from scratch, but that's not the point: a migration should not leave the job half done and the DB in an invalid state.
Is there a solution? e.g. can one put transactions in the migration so it inserts all or nothing?
If we look in the migrations table after the half done migration has failed, it is not there.
NOTE: we use migrations to insert (and modify/delete) static data which the application requires to run. It is not dev data or test data. E.g. countries data, currencies data, as well as admin operators etc.
You should run these migrations inside a transaction:
DB::transaction(function () {
    // Your code goes here.
});
or you can use a try/catch block:
try {
    DB::beginTransaction();

    // Your code goes here ...

    DB::commit();
} catch (\Exception $e) {
    DB::rollBack();
}
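Applied to the migration from the question, a sketch of the up() method could look like this (the same inserts as above, only wrapped in a transaction so either both rows are written or neither is):
public function up()
{
    DB::transaction(function () {
        DB::table('address')->insert(['id' => 1, 'street' => 'Demo', 'country_id' => 83]);
        DB::table('customer')->insert(['id' => 1, 'username' => 'demo', 'address_id' => 1]);
    });
}
Note this works here because the migration only runs inserts; on MySQL, schema statements such as CREATE TABLE or ALTER TABLE cause an implicit commit and cannot be rolled back by a transaction.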