Using Symfony 4 with Doctrine, I want to store page views for a Program entity in the database, because I want to give some users the right to view these numbers. What I have done is add a property to the entity like this:
Program.php
/**
 * @ORM\Column(type="integer", nullable=true)
 */
private $pageViews;

/**
 * @return mixed
 */
public function getPageViews()
{
    return $this->pageViews;
}

/**
 * @param mixed $pageViews
 */
public function setPageViews($pageViews)
{
    $this->pageViews = $pageViews;
}
And in my ProgramController.php, in the showProgram function:
//...
$program->setPageViews($program->getPageViews()+1);
$em->persist($program);
$em->flush();
This works and adds 1 to the existing number every time the page is refreshed. My question is, is this an acceptable method or are there faster/better alternatives? And does this slow down performance or is that negligible?
Since you don't really need the entity, you could do this directly with SQL, using Doctrine's connection (here with a generic single-row counter table):
$connection = $this->getDoctrine()->getConnection();
$connection->executeUpdate('UPDATE page_view_counter SET page_view = page_view+1;');
or using a prepared statement:
$connection = $this->getDoctrine()->getConnection();
$statement = $connection->prepare(
'UPDATE programs SET page_views = page_views + 1 WHERE programs.id = :id'
);
$statement->bindValue('id', $id);
$statement->execute();
This would speed things up a bit by skipping some of the more complex ORM features that you don't need for this case.
Another alternative could be to switch technologies and store the counter in a cache such as Redis (see the sketch below). Whether this actually improves performance (especially under heavier load) would need to be verified with a load-testing tool like JMeter.
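For illustration, a minimal sketch of a Redis-based counter, assuming the phpredis extension is available; the key naming is made up:
// Minimal sketch: count page views in Redis instead of the database.
// Assumes the phpredis extension; the key convention is arbitrary and
// $program->getId() is assumed to exist on the entity.
$redis = new \Redis();
$redis->connect('127.0.0.1', 6379);

// INCR is atomic, so concurrent page loads cannot lose increments.
$views = $redis->incr(sprintf('program:%d:page_views', $program->getId()));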
Using Laravel 5.5, I'm wondering how to properly handle the possible case of multiple updates to the same records by separate users, or from different pages by the same user.
For example, if an instance of Model_1 is read from the database in response to a request from Page_1, and a copy of the same object is loaded in response to a request from Page_2, how do I best implement a mechanism to prevent a second update from clobbering the first? (Of course, the updates could occur in any order...)
I don't know if it is possible to lock records through Eloquent (I don't want to use DB:: for locking, as you'd have to refer to the underlying tables and row ids), but even if it were possible, locking when the page is loaded and unlocking when it is submitted wouldn't be proper either (I'll omit the details).
I think detecting that a previous update has been made and failing the subsequent updates gracefully would be the best approach, but do I have to do it manually, for example by testing a timestamp (updated_at) field?
(I'm supposing Eloquent doesn't automatically compare all fields before updating, as this would be somewhat inefficient, if using large fields such as text/binary)
You should take a look at pessimistic locking; it is a feature that prevents further updates until the existing one is done.
The query builder also includes a few functions to help you do "pessimistic locking" on your select statements. To run the statement with a "shared lock", you may use the sharedLock method on a query. A shared lock prevents the selected rows from being modified until your transaction commits:
DB::table('users')->where('votes', '>', 100)->sharedLock()->get();
Alternatively, you may use the lockForUpdate method. A "for update" lock prevents the rows from being modified or from being selected with another shared lock:
DB::table('users')->where('votes', '>', 100)->lockForUpdate()->get();
Reference: Laravel Documentation
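As an illustration, a sketch of how lockForUpdate could be used through an Eloquent model inside a transaction; the Test model and its fields are illustrative, not from your code:
// Sketch: acquire a row lock inside a transaction so a competing request
// blocks until this transaction commits. Model and column names assumed.
DB::transaction(function () use ($id, $newLabel) {
    $test = Test::where('id', $id)->lockForUpdate()->first();

    // ... decide whether the update is still valid, then apply it ...
    $test->label = $newLabel;
    $test->save();
});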
What I came up with was this:
<?php

namespace App\Traits;

use Illuminate\Support\Facades\DB;

trait UpdatableModelsTrait
{
    /**
     * Lock record for update, validate updated_at timestamp,
     * and return true if valid and updatable, throws otherwise.
     * Throws on error.
     *
     * @return bool
     */
    public function update_begin()
    {
        $result = false;

        $updated_at = DB::table($this->getTable())
            ->where($this->primaryKey, $this->getKey())
            ->sharedLock()
            ->value('updated_at');

        $updated_at = \Illuminate\Support\Carbon::createFromFormat('Y-m-d H:i:s', $updated_at);

        if ($this->updated_at->eq($updated_at))
            $result = true;
        else
            abort(456, 'Concurrency Error: The original record has been altered');

        return $result;
    }

    /**
     * Save object, and return true if successful, false otherwise.
     * Throws on error.
     *
     * @return bool
     */
    public function update_end()
    {
        return parent::save();
    }

    /**
     * Save object after validating updated_at timestamp,
     * and return true if successful, false otherwise.
     * Throws on error.
     *
     * @return bool
     */
    public function save(array $options = [])
    {
        return $this->update_begin() && parent::save($options);
    }
}
Usage example:
try {
    DB::beginTransaction();

    $test1 = Test::where('label', 'Test 1')->first();
    $test2 = Test::where('label', 'Test 1')->first();

    $test1->label = 'Test 1a';
    $test1->save();

    $test2->label = 'Test 1b';
    $test2->save();

    DB::commit();
} catch (\Exception $x) {
    DB::rollback();
    throw $x;
}
This will abort on the second save, because the timestamps no longer match.
Notes:
This will only work properly if the storage engine supports row-locks. InnoDB does.
There is a begin and an end because you may need to update multiple (possibly related) models, and wish to see if locks can be acquired on all before trying to save. An alternative is to simply try to save and rollback on failure.
If you prefer, you could use a closure for the transaction (see the sketch after these notes).
I'm aware that the custom HTTP response (456) may be considered bad practice, but you can change that to a return false, a throw, or a 500...
If you don't like traits, put the implementation in a base model.
I had to alter the original code to make it self-contained; if you find any errors, please comment.
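For reference, a sketch of the same usage example wrapped in a transaction closure, which commits on success and rolls back automatically when an exception is thrown:
// Sketch: DB::transaction() replaces the manual begin/commit/rollback.
DB::transaction(function () {
    $test1 = Test::where('label', 'Test 1')->first();
    $test2 = Test::where('label', 'Test 1')->first();

    $test1->label = 'Test 1a';
    $test1->save();

    $test2->label = 'Test 1b';
    $test2->save(); // aborts here: updated_at no longer matches
});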
I'm working on a project that uses Doctrine 2 in Symfony 2, and I use Memcache to store Doctrine's results.
I have a problem with objects that are retrieved from Memcache.
I found this similar post, but its approach does not resolve my problem: Doctrine detaching, caching, and merging
This is the scenario:
/**
 * This is in entity ContestRegistry
 * @var Contest
 *
 * @ORM\ManyToOne(targetEntity="Contest", inversedBy="usersRegistered")
 * @ORM\JoinColumn(name="contest_id", referencedColumnName="id", onDelete="CASCADE")
 *
 */
protected $contest;
And in the other entity:
/**
 * @var usersRegistered
 *
 * @ORM\OneToMany(targetEntity="ContestRegistry", mappedBy="contest")
 *
 */
protected $usersRegistered;
Now imagine that Contest is in cache and I want to save a ContestRegistry entry.
So I retrieve the Contest object from the cache as follows:
$contest = $cacheDriver->fetch($key);
$contest = $this->getEntityManager()->merge($contest);
return $contest;
And as the last operation I do:
$contestRegistry = new ContestRegistry();
$contestRegistry->setContest($contest);
$this->entityManager->persist($contestRegistry);
$this->entityManager->flush();
My problem is that Doctrine saves the new entity correctly, but it also issues an UPDATE on the Contest entity and updates its updated column. The real problem is that it runs an update query for every entry; I just want to add a reference to the entity.
How can I make that possible?
Any help would be appreciated.
Why
When an entity is merged back into the EntityManager, it will be marked as dirty. This means that when a flush is performed, the entity will be updated in the database. This seems reasonable to me, because when you make an entity managed, you actually want the EntityManager to manage it ;)
In your case you only need the entity for an association with another entity, so you don't really need it to be managed. I therefore suggest a different approach.
Use a reference
So don't merge $contest back into the EntityManager, but grab a reference to it:
$contest = $cacheDriver->fetch($key);
$contestRef = $em->getReference('Contest', $contest->getId());
$contestRegistry = new ContestRegistry();
$contestRegistry->setContest($contestRef);
$em->persist($contestRegistry);
$em->flush();
That reference will be a Proxy (unless it's already managed), and won't be loaded from the db at all (not even when flushing the EntityManager).
Result Cache
Instead of using your own caching mechanism, you could use Doctrine's result cache. It caches query results in order to prevent a trip to the database, but (if I'm not mistaken) still hydrates those results. This prevents a lot of the issues you can get when caching entities themselves.
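A sketch of what that could look like; the DQL, entity namespace, lifetime, and cache id are assumptions, not code from the question:
// Sketch: let Doctrine cache the hydrated result instead of caching
// the entity yourself. Lifetime and cache id are arbitrary.
$contest = $this->getEntityManager()
    ->createQuery('SELECT c FROM AppBundle\Entity\Contest c WHERE c.id = :id')
    ->setParameter('id', $contestId)
    ->useResultCache(true, 3600, 'contest_' . $contestId)
    ->getOneOrNullResult();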
What you want to achieve is called partial update.
You should use something like this instead:
use Symfony\Component\PropertyAccess\PropertyAccess;

/**
 * Partially updates an entity
 *
 * @param Object  $entity  The entity to update
 * @param Request $request
 */
protected function partialUpdate($entity, $request)
{
    $parameters = $request->request->all();
    $accessor = PropertyAccess::createPropertyAccessor();

    foreach ($parameters as $key => $parameter) {
        $accessor->setValue($entity, $key, $parameter);
    }
}
Merge requires the whole entity to be 100% populated with data.
I haven't checked the behavior with child relations (many-to-one, one-to-one, and so on) yet.
Partial update is usually used for PATCH (or PUT) in a REST API.
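A hypothetical example of calling that helper from a controller action; the action name, the param converter for Contest, and the Response usage are assumptions:
// Sketch: apply the submitted fields to the managed entity and flush.
// Assumes Request/Response imports and a param-converted Contest.
public function patchContestAction(Request $request, Contest $contest)
{
    $this->partialUpdate($contest, $request);
    $this->getDoctrine()->getManager()->flush();

    return new Response('', 204);
}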
I recently started using Symfony2 with Doctrine2. I don't understand how to save data when using inheritance mapping.
My requirements (as a learning exercise):
I'm making a library application for testing (the requirements might not be practical).
At a high level, the library may contain many different types of items, for example books, articles, and manuals for now.
They have some common fields like name and publish year, and some item-specific details: a book has an ISBN and a publisher; a manual has a company and a product.
To make the problem a little more complex, there is another 'item_content' table that holds descriptions in different languages.
To quickly visualize, I have the following structure: a base item table joined with book, manual, and article tables, plus the item_content table.
I achieved the above structure following the Doctrine docs for inheritance mapping and a bidirectional one-to-many relation.
My question: how do I save data using Symfony2? (I have proper routing/actions running; I just need the code for the controller, or better, the repository.) While saving data (say for a manual) I want to write to the Item, Manual, and ItemContent tables, but I'm getting confused by the discr field in the database. I didn't find code for saving data in the above structure. I don't need full code; a few hints will be sufficient. My Item class is as follows (other classes have the proper inverse side as mentioned in the Doctrine docs):
use Doctrine\Common\Collections\ArrayCollection;

/**
 * Item
 *
 * @ORM\Table(name="item")
 * @ORM\Entity(repositoryClass="Test\LibraryBundle\Entity\ItemRepository")
 * @ORM\InheritanceType("JOINED")
 * @ORM\DiscriminatorColumn(name="discr", type="string")
 * @ORM\DiscriminatorMap({"book" = "Book", "manual" = "Manual", "article" = "Article"})
 */
class Item
{
    //...

    /**
     * For joining with ItemContent
     *
     * @ORM\OneToMany(targetEntity="ItemContent", mappedBy="item")
     **/
    private $itemContents;

    public function __construct()
    {
        $this->itemContents = new ArrayCollection();
    }

    //...
}
The discriminator field will be filled automatically by Doctrine:
$em = $this->getDoctrine()->getManager();
$item = new Manual(); // discr field = "manual"
$itemContent = new ItemContent();
$item->addItemContent($itemContent);
$itemContent->setItem($item);
$em->persist($item);
$em->persist($itemContent);
$em->flush();
Is that the answer you're waiting for?
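For reference, the addItemContent helper used above could be as simple as this sketch; the method name is an assumption based on the mapping shown in the question, and the inverse side is set separately with setItem(), as in the snippet above:
// Sketch: add an ItemContent to the owning Item's collection.
public function addItemContent(ItemContent $itemContent)
{
    $this->itemContents[] = $itemContent;

    return $this;
}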
Is it possible to apply the remember(60) function to something like Service::all()?
This is a data set that will rarely change. I've attempted several variations with no success:
Service::all()->remember(60);
Service::all()->remember(60)->get();
(Service::all())->remember(60);
Of course, I am aware of other available caching methods, but I prefer the cleanliness of this one if it works.
Yes, you should be able to simply swap the two:
Change
Service::get()->remember(60);
to
Service::remember(60)->get();
An odd quirk, I agree, but I ran into this a few weeks back and realized all I had to do was put remember($time_to_remember) in front of the rest of the query builder; it works like a charm.
For your perusing pleasure, see the Laravel 4 Query Builder docs (linked below); the relevant method looks like this:
/**
 * Indicate that the query results should be cached.
 *
 * @param  int     $minutes
 * @param  string  $key
 * @return \Illuminate\Database\Query\Builder
 */
public function remember($minutes, $key = null)
{
    list($this->cacheMinutes, $this->cacheKey) = array($minutes, $key);

    return $this;
}
L4 Docs - Queries
Here is the use case. Some fields on the document are serializable/deserializable, others are not (see @JMS\ReadOnly).
/**
 * @JMS\Groups({"board_list", "board_details"})
 * @JMS\Type("string")
 * @MongoDB\String
 */
protected $slug;

/**
 * @JMS\Groups({"board_list", "board_details"})
 * @JMS\ReadOnly
 * @MongoDB\Increment
 */
protected $views;
When in the controller I do an action to update the document:
/**
 * [PUT] /boards/{slug} "put_board"
 *
 * @ParamConverter("board", converter="fos_rest.request_body")
 * @Rest\Put("/boards/{slug}")
 * @Rest\View(statusCode=204)
 */
public function putBoardAction($slug, Board $board)
{
    $dm = $this->get('doctrine_mongodb')->getManager();
    $board = $dm->merge($board);
    $dm->flush();

    return true;
}
If the views field had some value before the action, it gets reset to 0 afterwards. How can I avoid that? Is there a workaround for merge or persist?
If the $views property is read-only, and not set upon deserialization, it will be 0 at the time the action is invoked. When merging, ODM is first going to try to look up the Board document by its identifier. When it finds it in the database, its $views property will be the current value stored in the database. That document now becomes the managed copy that merge() will ultimately return. From there, we proceed to copy values from the Board document passed to merge(). In doing so, $views is set to 0, over-writing whatever positive number it may have stored. When ODM goes to flush this change, it calculates the differences between the new and original values (likely the original view count multiplied by -1) and uses that for an $inc. That update brings the database value back to zero.
My advice would be to issue a separate update to increment $views, perhaps using the query builder. Even if $views were not read-only for the JMS Serializer service, you could still inadvertently decrement the counter if a Board with $views less than the corresponding database value were sent into the API.
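For illustration, a sketch of such a separate increment with the ODM query builder; the document class path is an assumption, while the field names follow the question:
// Sketch: atomically increment the counter without touching the rest
// of the document, so merge() can no longer clobber it.
$dm->createQueryBuilder('AppBundle\Document\Board')
    ->update()
    ->field('views')->inc(1)
    ->field('slug')->equals($slug)
    ->getQuery()
    ->execute();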