How to handle multiple concurrent updates in Laravel Eloquent?

Laravel 5.5
I'm wondering how to properly handle the possible case of multiple updates to the same records by separate users or from different pages by the same user.
For example, if an instance of Model_1 is read from the database responding to a request from Page_1, and a copy of the same object is loaded responding to a request from Page_2, how best to implement a mechanism to prevent a second update from clobbering the first update? (Of course, the updates could occur in any order...).
I don't know if it is possible to lock records through Eloquent (I don't want to use DB:: for locking, as you'd have to refer to the underlying tables and row ids), but even if it were possible, locking when loading the page and unlocking when submitting wouldn't be proper either (I'm going to omit the details).
I think detecting that a previous update has been made and failing the subsequent updates gracefully would be the best approach, but do I have to do it manually, for example by testing a timestamp (updated_at) field?
(I'm supposing Eloquent doesn't automatically compare all fields before updating, as this would be somewhat inefficient, if using large fields such as text/binary)

You should take a look at pessimistic locking: it is a feature that prevents any further update until the existing one is done.
The query builder also includes a few functions to help you do "pessimistic locking" on your select statements. To run the statement with a "shared lock", you may use the sharedLock method on a query. A shared lock prevents the selected rows from being modified until your transaction commits:
DB::table('users')->where('votes', '>', 100)->sharedLock()->get();
Alternatively, you may use the lockForUpdate method. A "for update" lock prevents the rows from being modified or from being selected with another shared lock:
DB::table('users')->where('votes', '>', 100)->lockForUpdate()->get();
Reference: Laravel Documentation
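To tie this back to Eloquent: the same lock methods are available on Eloquent queries, since they forward to the query builder. A minimal sketch, using the question's Model_1 (the rest is illustrative):

use Illuminate\Support\Facades\DB;

// Re-read the row with a "for update" lock inside a transaction, so a
// competing request blocks on the same row until this transaction commits.
DB::transaction(function () use ($id) {
    $model = Model_1::lockForUpdate()->findOrFail($id);
    // ... apply changes while the row lock is held ...
    $model->save();
});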

What I came up with was this:
<?php

namespace App\Traits;

use Illuminate\Support\Facades\DB;

trait UpdatableModelsTrait
{
    /**
     * Lock the record, validate the updated_at timestamp,
     * and return true if valid and updatable; throws otherwise.
     *
     * @return bool
     */
    public function update_begin()
    {
        $result = false;
        $updated_at = DB::table($this->getTable())
            ->where($this->primaryKey, $this->getKey())
            ->sharedLock()
            ->value('updated_at');
        $updated_at = \Illuminate\Support\Carbon::createFromFormat('Y-m-d H:i:s', $updated_at);
        if ($this->updated_at->eq($updated_at)) {
            $result = true;
        } else {
            abort(456, 'Concurrency Error: The original record has been altered');
        }
        return $result;
    }

    /**
     * Save object, and return true if successful, false otherwise.
     * Throws on error.
     *
     * @return bool
     */
    public function update_end()
    {
        return parent::save();
    }

    /**
     * Save object after validating the updated_at timestamp,
     * and return true if successful, false otherwise.
     * Throws on error.
     *
     * @return bool
     */
    public function save(array $options = [])
    {
        return $this->update_begin() && parent::save($options);
    }
}
Usage example:
try {
    DB::beginTransaction();
    $test1 = Test::where('label', 'Test 1')->first();
    $test2 = Test::where('label', 'Test 1')->first();
    $test1->label = 'Test 1a';
    $test1->save();
    $test2->label = 'Test 1b';
    $test2->save();
    DB::commit();
} catch (\Exception $x) {
    DB::rollback();
    throw $x;
}
This will abort on the second save, as the timestamp no longer matches.
Notes:
This will only work properly if the storage engine supports row-locks. InnoDB does.
There is a begin and an end because you may need to update multiple (possibly related) models, and wish to see if locks can be acquired on all before trying to save. An alternative is to simply try to save and rollback on failure.
If you prefer, you could use a closure for the transaction (see the sketch after these notes).
I'm aware that the custom HTTP response code (456) may be considered bad practice, but you can change that to a return false, a throw, or a 500...
If you don't like traits, put the implementation in a base model.
I had to alter the original code to make it self-contained; if you find any errors, please comment.
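For reference, the closure-based variant mentioned in the notes might look like this (a sketch; Laravel rolls the transaction back automatically if the closure throws):

DB::transaction(function () {
    $test1 = Test::where('label', 'Test 1')->first();
    $test2 = Test::where('label', 'Test 1')->first();
    $test1->label = 'Test 1a';
    $test1->save();
    $test2->label = 'Test 1b';
    $test2->save(); // aborts here, as in the manual example above
});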

Related

Laravel: how to copy data from one connection to one other?

I have a MySQL connection and a PgSQL (Postgres, set as default) connection.
I must read all rows from a table in MySQL and then save them all into Postgres.
I tried to do this, remembering that my default connection is pgsql:
$mysql_model = new Regioni();
$regioni = $mysql_model->setConnection("mysql")->all();
But it is still using the pgsql connection.
I'm sure of this because I inserted a row into the pgsql table (it was empty), and I can dump this single row as the result of all().
Also (only for debugging, do not hate me), I modified vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php, adding a debug echo into this method:
/**
 * Set the connection associated with the model.
 *
 * @param string|null $name
 * @return $this
 */
public function setConnection($name)
{
    echo "Model calling setConnection($name)" . PHP_EOL;
    $this->connection = $name;
    return $this;
}
As a result I can see
Model calling setConnection(mysql)
Model calling setConnection()
Model calling setConnection(pgsql)
The first one is the result of my explicit setConnection.
But why does calling ->all() reset the connection to the default one?
Main goal/question
What's the right way to dynamically change the connection of a model?
I post my actual solution.
It's ugly, but it works.
The prerequisite for this is that the default connection is pgsql, so I must set the connection explicitly to read all table records from MySQL.
After that, I loop through the results and save them to Postgres, simply doing one insert at a time.
$regioni = (new Regioni)->setConnection('mysql')->get();
(new Regioni)->truncate();
foreach ($regioni as $r) {
    (new Regioni)->insert($r->toArray());
}
Note that this is specifically for my case:
I have an old MySQL database to be migrated to the new PostgreSQL one.
I already ran the migrations on PostgreSQL, so all structures are ready.
I read all data from the source and simply write it into the target Postgres.
This must be done only once.
I truncate the target before inserting (to be repeatable in development).
If you can suggest something more efficient, please, write an answer
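One observation, and a hedged sketch: all() is defined as a static method on Eloquent's Model, so calling it on your configured instance most likely creates a fresh model and discards the instance-level connection, which would explain the setConnection() log entries above. The static on() method sets the connection for the query itself, and chunking keeps memory bounded. A sketch, assuming an id primary key and the connection names from the question:

// Copy in batches instead of row-by-row inserts.
(new Regioni)->truncate();

Regioni::on('mysql')
    ->orderBy('id')
    ->chunk(500, function ($rows) {
        // one multi-row insert per chunk, on the default (pgsql) connection
        (new Regioni)->insert(
            $rows->map(function ($r) {
                return $r->toArray();
            })->all()
        );
    });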

Trait for counting related Laravel Models only hitting the DB if necessary

I'm wondering if anything like the following already exists for Laravel? It's a trait I wrote called CarefulCount.
What it does: It returns a count of related models (using any defined relation), but only hits the DB if absolutely necessary. First, it tries two options to avoid hitting the DB, if the information is already available:
Was the count retrieved using withCount('relation') when the model was retrieved - i.e. does $model->relation_count exist? If so, just return that.
Has the relation been eager-loaded? If so, count the models in the Eloquent collection without hitting the DB, with $model->relation->count().
Only then resort to calling $model->relation()->count() to retrieve the count from the DB.
To enable it for any model class, you simply need to include the trait with use CarefulCount. You can then call $model->carefulCount('relation') for any defined relation.
For example, in my application there is a suburbs table with a has-many relation to both the users and churches tables (i.e. there can be many users and many churches in a single suburb). Simply by adding use CarefulCount to the Suburb model, I can then call both $suburb->carefulCount('users') and $suburb->carefulCount('churches').
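For illustration, the setup on the Suburb model might look like this (a sketch, assuming standard has-many relations as described):

use App\Traits\CarefulCount;
use Illuminate\Database\Eloquent\Model;

class Suburb extends Model
{
    use CarefulCount;

    public function users()
    {
        return $this->hasMany(User::class);
    }

    public function churches()
    {
        return $this->hasMany(Church::class);
    }
}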
My use case: I've come across this a number of times - where I need a count of related models, but it's in a lower-level part of my application that may be called from several places. So I can't know how the model was retrieved and whether the count information is already there.
In those situations, the default would be to call $model->relation()->count(). But this can lead to the N+1 query problem.
In fact, the specific trigger came from adding Marcel Pociot's excellent Laravel N+1 Query Detector package to my project. It turned up a number of N+1 query problems that I hadn't picked up, and most were cases when I had already eager-loaded the related models. But in my Blade templates, I use Policies to enable or disable deleting of records; and the delete($user, $suburb) method of my SuburbPolicy class included this:
return $suburb->users()->count() == 0 && $suburb->churches()->count() == 0;
This introduced the N+1 problem - and obviously I can't assume, in my Policy class (or my Model class itself), that the users and churches are eager-loaded. But with the CarefulCount trait added, that became:
return $suburb->carefulCount('users') == 0 && $suburb->carefulCount('churches') == 0;
Voila! Tinkering with this and checking the query log, it works. For example, with the users count:
If $suburb was retrieved using Suburb::withCount('users'), no extra query is executed.
Similarly, if it was retrieved using Suburb::with('users'), no extra query is executed.
If neither of the above were done, then there is a select count(*) query executed to retrieve the count.
As I said, I'd love to know whether something like this already exists and I haven't found it (either in the core or in a package) - or whether I've plain missed something obvious.
Here's the code for my trait:
use Illuminate\Support\Str;

trait CarefulCount
{
    /**
     * Implements a careful and efficient count algorithm for the given
     * relation, only hitting the DB if necessary.
     *
     * @param string $relation
     *
     * @return integer
     */
    public function carefulCount(string $relation): int
    {
        /*
         * If the count has already been loaded using withCount('relation'),
         * use the 'relation_count' property.
         */
        $prop = Str::snake($relation) . "_count";
        if (isset($this->$prop)) {
            return $this->$prop;
        }

        /*
         * If the related models have already been eager-loaded using
         * with('relation'), count the loaded collection.
         */
        if ($this->relationLoaded($relation)) {
            return $this->$relation->count();
        }

        /*
         * Neither loaded, so hit the database.
         */
        return $this->$relation()->count();
    }
}

phpunit with dbunit: how can i keep data in my db across tests?

I have a phpunit question regarding dbunit and how to keep data created in the database by one test for use in the next. I'm new to phpunit (we've been using an in-house tester for years but are finally trying to get with the modern age), so I apologize if this is a trivial issue.
The desired effect
I have a mysql table that contains a column that is a unique key. If an attempt is made to insert a duplicate of this column, special things happen that I would like to be able to test. I have written a test to insert a value into this column (and test its success) and then written another test immediately afterwards to test how the class fails on attempting a duplicate value. I'd like to be able to catch that exception and test it. I am using dbunit to pre-fill my db with all the pre-filly stuff I need.
The problem
At the commencement of each test it appears as if getDataSet() is called and, as a result, the unique key data I insert in the first test is no longer there to test against. Consequently, I can't test the anticipated failure of inserting duplicate unique keys.
What I'm looking for
Well, obviously some way to persist the database data across tests; avoid calling getDataSet(), perhaps, at the beginning of the second test.
I certainly hope this is possible. I can't imagine why it wouldn't be; it seems like people should want to test duplicate inserts! I am willing to entertain other solutions if they accomplish the task.
Thanks in advance!
Here's my test, stripped down to the relevant bits.
<?php

class UserPOSTTest extends \PHPUnit_Extensions_Database_TestCase
{
    static private $pdo = null;
    private $conn = null;

    /**
     * @return PHPUnit_Extensions_Database_DB_IDatabaseConnection
     */
    public function getConnection()
    {
        if ($this->conn === null) {
            if (self::$pdo == null) {
                self::$pdo = new \PDO('mysql:host=localhost;dbname=thedatabase', 'user', '*********');
            }
            $this->conn = $this->createDefaultDBConnection(self::$pdo, "db");
        }
        return $this->conn;
    }

    /**
     * @return PHPUnit_Extensions_Database_DataSet_IDataSet
     */
    public function getDataSet()
    {
        // this is returned at the beginning of every test
        return $this->createFlatXmlDataSet(dirname(__FILE__) . '/some_data_set.xml');
    }

    /**
     * test the insertion of the value "unique key value" into a column set as UNIQUE KEY in mysql.
     * since getDataSet() has cleared this table, it passes.
     */
    public function uniqueKeyTest_passes()
    {
        $inserter = new Inserter("unique key value");
        $this->assertEquals($inserter->one, 1); // just some bogus assertion
    } // uniqueKeyTest_passes

    /**
     * run the exact same insert as in uniqueKeyTest_passes() above. the purpose of this test is to
     * confirm how the Inserter class fails on the attempt to insert duplicate data into a UNIQUE KEY column.
     * however, the data inserted in uniqueKeyTest_passes() has been scrubbed out by getDataSet().
     * this is the crux of my question.
     */
    public function uniqueKeyTest_should_fail()
    {
        try {
            // exact same insert as above, should fail as duplicate
            $inserter = new Inserter("unique key value");
        } catch (Exception $e) {
            // if an exception is thrown, that's a pass
            return;
        }
        // the insert succeeds when it should not
        $this->fail("there should be an exception for attempting insert of unique key value here");
    } // uniqueKeyTest_should_fail
}
The fact that each test runs independently from others is a feature and a design goal of unit testing.
In your case you can simply use this:
/**
 * Confirm how the Inserter class fails on the attempt to
 * insert duplicate data into a UNIQUE KEY column.
 *
 * @expectedException Exception
 */
public function uniqueKeyTest_should_fail()
{
    $inserter = new Inserter("unique key value");
    // exact same insert as above, should fail as duplicate
    $inserter = new Inserter("unique key value");
}
Please note the usage of @expectedException, which simplifies the test code a lot. However, I would write my code in a way that it throws a DuplicateKeyException; this would make the test more specific. In the current form you would, for example, no longer be able to detect a database connection error (and the test would succeed!).
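A sketch of that more specific variant, assuming a hypothetical DuplicateKeyException that Inserter would be changed to throw on unique-key violations:

/**
 * @expectedException DuplicateKeyException
 */
public function uniqueKeyTest_should_fail()
{
    $inserter = new Inserter("unique key value");
    // the duplicate insert should raise DuplicateKeyException, nothing else
    $inserter = new Inserter("unique key value");
}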

Check table exists

I need to check if a table exists in a database. I currently develop using Yii2.
My case is a bit different from this question because the table to be checked is not (and can not be) a model.
I have tried (new \yii\db\Query())->select('*')->from($mysticTable)->exists();
The above throws a yii\db\Exception because, according to the question linked above, the yii\db\Query() class tries to ->queryScalar() when asked if ->exists(). Invariably, this method checks whether the result set exists.
How do I check if a table exists?
For Yii2 you can use:
$tableSchema = Yii::$app->db->schema->getTableSchema('tableName');
If the table does not exist, it will return null, so you can check returned value for being null:
if ($tableSchema === null) {
// Table does not exist
}
You can find this method in official docs here.
It is good that you get an exception. Simply parse the exception message; you will get a very specific message and SQL error code for a missing table.
That is what I do when checking, e.g., whether an error was due to something recoverable, say a broken connection, versus some other error.
That said, I see that many people have pointed out much more direct ways of getting that information.
A spin-off of @msfoster's answer got me closer to a solution in Yii2:
/**
 * @param $tableName
 * @param $db string as config option of a database connection
 * @return bool table exists in schema
 */
private function tableExists($tableName, $db = null)
{
    if ($db) {
        $dbConnect = \Yii::$app->get($db);
    } else {
        $dbConnect = \Yii::$app->get('db');
    }
    if (!($dbConnect instanceof \yii\db\Connection)) {
        throw new \yii\base\InvalidParamException;
    }
    return in_array($tableName, $dbConnect->schema->getTableNames());
}
This also serves multiple databases.
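Usage might look like this (a sketch, assuming a second connection component named db2 in the application config):

if ($this->tableExists('some_table')) {
    // table exists on the default connection
}
if ($this->tableExists('legacy_table', 'db2')) {
    // table exists on the secondary connection
}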

Doctrine 2 - How to use objects retrieved from cache in relationships

I'm working on a project that uses Doctrine 2 in Symfony 2, and I use Memcache to store Doctrine's results.
I have a problem with objects that are retrieved from Memcache.
I found a similar post, but its approach does not resolve my problem: Doctrine detaching, caching, and merging.
This is the scenario
/**
 * This is in entity ContestRegistry
 * @var Contest
 *
 * @ORM\ManyToOne(targetEntity="Contest", inversedBy="usersRegistered")
 * @ORM\JoinColumn(name="contest_id", referencedColumnName="id", onDelete="CASCADE")
 */
protected $contest;
and in other entity
/**
 * @var usersRegistered
 *
 * @ORM\OneToMany(targetEntity="ContestRegistry", mappedBy="contest")
 */
protected $usersRegistered;
Now imagine that Contest is in cache and I want to save a ContestRegistry entry.
So I retrieve the object contest in cache as follows:
$contest = $cacheDriver->fetch($key);
$contest = $this->getEntityManager()->merge($contest);
return $contest;
And as last operation I do:
$contestRegistry = new ContestRegistry();
$contestRegistry->setContest($contest);
$this->entityManager->persist($contestRegistry);
$this->entityManager->flush();
My problem is that Doctrine saves the new entity correctly, but it also performs an update on the Contest entity, updating its updated column. The real problem is that it issues an update query for every entry; I just want to add a reference to the entity.
How I can make it possible?
Any help would be appreciated.
Why
When an entity is merged back into the EntityManager, it will be marked as dirty. This means that when a flush is performed, the entity will be updated in the database. This seems reasonable to me, because when you make an entity managed, you actually want the EntityManager to manage it ;)
In your case you only need the entity for an association with another entity, so you don't really need it to be managed. I therefore suggest a different approach.
Use a reference
So don't merge $contest back into the EntityManager, but grab a reference to it:
$contest = $cacheDriver->fetch($key);
$contestRef = $em->getReference('Contest', $contest->getId());
$contestRegistry = new ContestRegistry();
$contestRegistry->setContest($contestRef);
$em->persist($contestRegistry);
$em->flush();
That reference will be a Proxy (unless it's already managed), and won't be loaded from the db at all (not even when flushing the EntityManager).
Result Cache
Instead of using your own caching mechanism, you could use Doctrine's result cache. It caches the query results in order to prevent a trip to the database, but (if I'm not mistaken) it still hydrates those results. This prevents a lot of the issues you can get when caching entities themselves.
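A sketch of the result cache in use, assuming a result cache driver is configured on the EntityManager (the DQL, entity namespace, and cache key are illustrative):

$contest = $em->createQuery('SELECT c FROM AppBundle\Entity\Contest c WHERE c.id = :id')
    ->setParameter('id', $contestId)
    ->useResultCache(true, 3600, 'contest_' . $contestId)
    ->getOneOrNullResult();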
What you want to achieve is called a partial update.
You should use something like this instead:
use Symfony\Component\PropertyAccess\PropertyAccess;

/**
 * Partially updates an entity
 *
 * @param Object  $entity  The entity to update
 * @param Request $request
 */
protected function partialUpdate($entity, $request)
{
    $parameters = $request->request->all();
    $accessor = PropertyAccess::createPropertyAccessor();

    foreach ($parameters as $key => $parameter) {
        $accessor->setValue($entity, $key, $parameter);
    }
}
Merge requires the whole entity to be 100% filled with data.
I haven't checked the behavior with child relations (many-to-one, one-to-one, and so on) yet.
Partial update is usually used on PATCH (or PUT) on a Rest API.
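In a controller action it might be used like this (a sketch; the flush is what actually writes the change, and Doctrine's changeset computation limits the UPDATE to the modified columns):

$entity = $em->find('AppBundle\Entity\Contest', $id); // a managed entity
$this->partialUpdate($entity, $request);
$em->flush(); // the UPDATE contains only the changed columns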