Laravel - Associate vs. Setting the ID for Observer Events - php

I have a Sample model that has a status (string) and a current task (foreign key to the Task model).
field             type
---------------------------------
id                int
status            string  -> I use an Enum to store possible values
current_task_id   int     -> foreign key to the Task model
On my model, I define the relationship as follows:
public function currentTask()
{
    return $this->belongsTo(Task::class, 'current_task_id', 'id');
}
Now, I've created an Observer with the following function:
public function updated(Sample $sample)
{
    // check if the current task is null and if not change the status to in progress
    Log::info('Sample status changed to in progress', ['sample' => $sample->toArray()]);

    if ($sample->currentTask()->exists()) {
        $sample->status = 'in progress';
        $sample->save();
    }
}
I want this to trigger when a sample is updated, check if there is an associated Task, and change the status to in progress if so.
I've encountered two issues:
When updating the current_task_id field manually and running save(), I get a memory leak caused somehow by the observer code.
When running the "associate" method to assign the currentTask, the observer does not trigger.
See the code below that I run in Tinkerwell
$sample = Sample::factory()->create();
echo $sample->currentTask(); // null
echo $sample->status; // not started
$sample->current_task_id = 2;
$sample->save(); // memory leak, additionally, if I check $sample->currentTask it gives me null...
Or with associate:
$sample = Sample::factory()->create();
echo $sample->currentTask(); // null
echo $sample->status; // not started
$sample->currentTask()->associate(2); // does not trigger observer
echo $sample->currentTask(); // Task object
echo $sample->status; // not started
How can I trigger the observer on associate? Or alternatively, why would the save() cause a memory leak?

Here's my recommendation: keep using an observer, but use the saving (or updating) event instead:
public function saving(Sample $sample)
{
    // check if the current task changed and points at an existing task
    if ($sample->isDirty('current_task_id') && Task::where('id', $sample->current_task_id)->exists()) {
        Log::info('Sample status changed to in progress', ['sample' => $sample->toArray()]);
        $sample->status = 'in progress';
    }
}
If you call save() inside an updated handler without an isDirty guard, you end up with an infinite loop: save() fires updated again, which saves again, and so on. That loop is almost certainly what shows up as your "memory leak". In the saving (or updating) handler above you don't need to call save() at all, because the changed attribute is persisted by the save that is already in progress.
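For completeness: associate() only sets the foreign key on the in-memory model, so no model events fire until you actually call save(). With the saving() observer above registered for the Sample model, a flow like this should trigger it (a rough sketch, not tested against your app):
$sample = Sample::factory()->create();

$task = Task::find(2);
$sample->currentTask()->associate($task); // only sets current_task_id in memory, no events yet
$sample->save();                          // fires saving/updating, the observer sets the status

echo $sample->status; // "in progress"
Because the saving handler changes the attribute before the UPDATE runs, no second save() is needed and there is no loop.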


Exceeded maximum time error when overriding the newQuery on Laravel 4.0

So, I was trying to implement this answer for my other question on the same subject... and it keeps giving me the exceeded-time error. Any clues?
This is on my product model. It inherits from Eloquent.
public function newQuery($excludeDeleted = true)
{
    $user_permission = Auth::user()->permissions;

    if ($user_permission->master) {
        return parent::newQuery();
    } else if ($user_permission->web_service) {
        $allowed_ids = array();
        foreach ($user_permission->allowed_products()->get() as $allowed) {
            $allowed_ids[] = $allowed->id;
        }
        return parent::newQuery()->whereIn('id', $allowed_ids);
    }

    return parent::newQuery();
}
If the user is a master there is no need to scope the query. But if it isn't, I need to filter by the logged-in user's permissions.
UPDATE:
I tried the following code in a controller and it works alright:
$user_permission = Auth::user()->permissions;
echo "<PRE>"; print_r($user_permission->allowed_products()->get()); exit;
UPDATE 2:
Guys, I just found out that the problem was in this piece of code:
$allowed = Auth::user()->permissions()->first()->allowed_products()->get()->list('id');
It somehow gives me a "Maximum execution time of 30 seconds exceeded" error. If I put the exact same code in a controller it works like a charm, though! I also tried putting it in a scope, which also worked. This is really grinding my gears!
Eloquent has a function called newQuery(); a controller does not. When you implement this function in a model you are overriding the one in Eloquent. If you then invoke Eloquent methods that need a new query for your model before they can return, like ->allowed_products()->get(), you are calling your own newQuery() method recursively. Since the user permissions have not changed, this results in infinite recursion. The only possible outcome is a timeout, because it will keep trying to determine the filtered product list, which calls your newQuery() method, which tries to determine the filtered product list before returning the query, and so on.
When you put the code into a controller, it is not overriding Eloquent's newQuery() method, so there is no infinite recursion when fetching the allowed_products list.
It would be more efficient to filter the product query with ->whereExists(): build up the same query that allowed_products() uses, but add a condition that the id of the row being filtered equals the product id in the allowed-products query. That way the filtering is done in the database instead of PHP, everything happens in a single query, and there is no recursion. A sketch follows.
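A rough sketch of that whereExists() idea (the pivot table and column names below are assumptions about your schema, so adjust them to match whatever allowed_products() is built on):
public function newQuery($excludeDeleted = true)
{
    $permission = Auth::user()->permissions;

    if ($permission->master) {
        return parent::newQuery();
    }

    if ($permission->web_service) {
        $permissionId = $permission->id;

        return parent::newQuery()->whereExists(function ($query) use ($permissionId) {
            $query->select(DB::raw(1))
                  ->from('allowed_products')                                // assumed pivot table
                  ->where('allowed_products.permission_id', $permissionId)  // assumed FK column
                  ->whereRaw('allowed_products.product_id = products.id');  // assumed column names
        });
    }

    return parent::newQuery();
}
Because the subquery is built with the plain query builder, the product model's newQuery() is never re-entered.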
I don't see how your update code works: Illuminate\Database\Eloquent\Collection does not have magic methods that forward to relation functions, so you should get an undefined-method error trying to do that.
Can you try something like
public function newQuery($excludeDeleted = true)
{
    // Returns `Illuminate\Database\Eloquent\Collection`
    $user_permission = Auth::user()->permissions;

    if ($user_permission->master)
    {
        return parent::newQuery();
    }
    else if ($user_permission->web_service)
    {
        // If you were to call $user_permission->allowed_products()->get() here, not much
        // would happen besides probably an undefined-method error.
        $allowed_ids = Auth::user()->permissions()->allowed_products()->get()->lists('id');
        return parent::newQuery()->whereIn('id', $allowed_ids);
    }

    return parent::newQuery();
}
Update: as per the comments below, I believe the problem is that newQuery() is called multiple times, since the code works just fine when called once in a controller. When the filter is applied to every query there is no need to collect all the IDs over and over again (assuming they don't change between calls). Something like the following stores them and only processes them once per request rather than on every query.
private $allowed_ids_cache = null;

public function newQuery($excludeDeleted = true)
{
    $user_permission = Auth::user()->permissions;

    if ($user_permission->master)
    {
        return parent::newQuery();
    }
    else if ($user_permission->web_service)
    {
        if ($this->allowed_ids_cache === null)
        {
            $this->allowed_ids_cache = Auth::user()->permissions()->allowed_products()->get()->lists('id');
        }
        return parent::newQuery()->whereIn('id', $this->allowed_ids_cache);
    }

    return parent::newQuery();
}
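One caveat: the first time the cache is filled, allowed_products()->get() will still re-enter the product model's newQuery() (the recursion described above), because the cache is only populated after that call returns. A way around it is to fetch the IDs with the plain query builder instead; a sketch, where the pivot table and column names are assumptions about your schema:
if ($this->allowed_ids_cache === null)
{
    // Plain query builder, so the overridden newQuery() is not re-entered.
    $this->allowed_ids_cache = DB::table('allowed_products')          // assumed pivot table
        ->where('permission_id', Auth::user()->permissions->id)       // assumed FK column
        ->lists('product_id');                                        // assumed column name
}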

Yii ActiveRecord - Is there way to update only dirty attributes?

In my application I have 2 processes which run almost at the same time and update the same AR models.
I was facing a bug where it looked like one of these processes had not completed, but debugging them separately gave no errors.
Then I realized that the problem probably happens in the following case:
Process A selects row X
Process B selects row X
Process B updates row X
Process A updates row X
In the described case, process A will overwrite everything process B wrote.
Both B and A update different attributes.
Is there some way to avoid this overwriting? Is there some mechanism to make AR update only 'Dirty' attributes instead of all model attributes?
Please do not explain a solution that avoids AR; I understand that approach. I would like to hear whether there is a solution that lets me do the required updates correctly while still using AR.
Thanks.
// Update only the columns you actually need to change:
YourTable::model()->updateByPk($id, array(
    'field1' => $newVal1,
    'field2' => $newVal2,
    'field3' => $newVal3,
));
and make use of transactions:
$transaction = Yii::app()->db->beginTransaction();
try
{
    // .... SQL executions OR model save()
    $transaction->commit();
}
catch (Exception $e) // an exception is raised if a query fails
{
    $transaction->rollback();
}
I don't know how this will come across, but here is a very dangerous idea for doing it; please read the whole thing.
Create another table for locking, with one row per locked model. I made it generic:
mylocks(object, object_type, lock_type)
e.g. a record would be:
mylocks('post', 'table', 'write')
class Post extends ActiveRecord {

    public static $dirtyData = array();

    protected $semaphore = false;

    // true if the table is locked, false otherwise
    protected function hasSemaphore() {
        $c = new CDbCriteria;
        $c->compare('object', $this->tableName());
        $c->compare('object_type', 'table');
        $lock = MyLocks::model()->find($c);
        return $lock != null;
    }

    public function setSemaphore() {
        if ($this->semaphore == true)
            return true;
        if (!$this->hasSemaphore()) {
            Yii::app()->db->createCommand('LOCK TABLE ' . $this->tableName() . ' WRITE;')->execute();
            // insert a record into mylocks:
            // insert into mylocks(object, object_type, lock_type)
            // values ('post', 'table', 'WRITE');
            $this->semaphore = true;
            return true;
        }
        $this->semaphore = false;
        return false;
    }

    protected function mergeDirtyData() {
        // as I am holding the write lock, I should collect all dirty
        // data from other models and save it ...
    }

    protected function releaseSemaphore() {
        if ($this->semaphore) {
            // delete the matching row from the mylocks table -- sorry, I am lazy
            Yii::app()->db->createCommand('UNLOCK TABLES;')->execute();
            $this->semaphore = false;
            $this->mergeDirtyData();
            return true;
        }
        return false;
    }

    // ...

    public function beforeSave() {
        // if I am holding the lock - release it
        if (!$this->releaseSemaphore()) {
            // probably someone else is holding it
            if ($this->hasSemaphore())
                // store the values as dirty data instead:
                // self::$dirtyData[] = array('attrA' => $valueA, /* ... */);
                return false; // disable saving
        }
        return parent::beforeSave();
    }
}
So here is your flow of operations:
// process A
$postA = Post::model();
// ...
$postA->setSemaphore();
// ... update some fields

// process B
$postB = Post::model();
// ... update some fields of $postB
$postB->update();

$postA->update();
Other possible scenarios are not handled. You will not be able to insert a record while another process holds the lock, so you have to work around that by having the lock holder release it while you insert and then take it back (borrow it and give it back), something like that.
I have not handled the dirtyData. The idea is that only the model that has set the semaphore (the lock) will actually write the data to the db.
Note: this is not production-ready code and it is untested, so it probably has flaws.

Yii on update, detect if a specific AR property has been changed on beforeSave()

I am raising a Yii event on beforeSave of the model, which should only be fired if a specific property of the model is changed.
The only way I can think of how to do this at the moment is by creating a new AR object and querying the DB for the old model using the current PK, but this is not very well optimized.
Here's what I have right now (note that my table doesn't have a PK, that's why I query by all attributes, besides the one I am comparing against - hence the unset function):
public function beforeSave()
{
    if (!$this->isNewRecord) { // only when a record is modified
        $newAttributes = $this->attributes;
        unset($newAttributes['level']);
        $oldModel = self::model()->findByAttributes($newAttributes);

        if ($oldModel->level != $this->level) {
            // Raising event here
        }
    }
    return parent::beforeSave();
}
Is there a better approach? Maybe storing the old properties in a new local property in afterFind()?
You need to store the old attributes in a local property in the AR class so that you can compare the current attributes to those old ones at any time.
Step 1. Add a new property to the AR class:
// Stores old attributes on afterFind() so we can compare
// against them before/after save
protected $oldAttributes;
Step 2. Override Yii's afterFind() and store the original attributes immediately after they are retrieved.
public function afterFind()
{
    $this->oldAttributes = $this->attributes;
    return parent::afterFind();
}
Step 3. Compare the old and new attributes in beforeSave/afterSave or anywhere else you like inside the AR class. In the example below we check whether the property called 'level' has changed.
public function beforeSave()
{
    if (isset($this->oldAttributes['level']) && $this->level != $this->oldAttributes['level']) {
        // The attribute is changed. Do something here...
    }
    return parent::beforeSave();
}
Or, to get all of the changed attributes in one line:
$changedArray = array_diff_assoc($this->attributes, $this->oldAttributes);
foreach ($changedArray as $key => $value) {
    // whatever you want:
    // for the attribute name use $key,
    // for the new value use $value
}
In your case you would check if ($key == 'level') inside the foreach.
Yii 1.1: mod-active-record at yiiframework.com
or Yii Active Record instance with "ifModified then ..." logic and dependencies clearing at gist.github.com
You can store old properties with hidden fields inside update form instead of loading model again.
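A minimal sketch of that hidden-field idea (the field name and attribute are just examples):
// in the update view:
echo CHtml::hiddenField('old_level', $model->level);

// in the controller action, after assigning the $_POST data to the model:
if (isset($_POST['old_level']) && $_POST['old_level'] != $model->level) {
    // 'level' was changed by this form submission
}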

PHP: Get id from database, send to other function to get content?

I hope the title is descriptive enough; I wasn't sure what to call it.
Let's say I have the following code:
class Movie_model {

    public function getMoviesByDate($date) {
        // Connects to db
        // Gets movie IDs for a specific date
        // Loops through the movie IDs
        // For each ID, calls getMovieById() and stores the result in an array
        // When all IDs have been looped over, returns the array of movies returned by getMovieById()
    }

    public function getMovieById($id) {
        // Gets the movie with the specified ID
        // Also gets the movie genres from another method
        // Oh, and it gets the movie from another method as well.
    }
}
I always want to get the same result when getting a movie (I always want the result from getMovieById()).
I hope you get my point. I will have many other functions like getMoviesByDate(); I will also have getMoviesByGenre(), for example, and I want that to return the same movie info as getMovieById() as well.
Is it "ok" to do it this way? I know this puts more load on the server and increases load time, but is there any other, better way that I don't know of?
EDIT: I clarified the code in getMoviesByDate() a bit. Also, getMoviesByDate() is just an example. As I said, I will also be calling methods like getMoviesByGenre().
EDIT: I'm currently running 48 database queries on the frontpage of my project, and the frontpage is still far from finished, so that number will at least triple by the time I'm done. Almost all queries take around 0.0002, but as the database keeps growing that number will rise dramatically, I'm guessing. I need to change something.
I don't think it's good to work like this in this particular case. The function getMoviesByDate() would return some number "n" of movies (or movie IDs) from a single query. For each ID in that result you would then have a separate query to get the movie by its ID.
This means that if the first function returns 200 movies, you would run the getMovieById() function (and the query inside it) 200 times. A better practice (IMO) would be to fetch all the info you require inside the getMoviesByDate() function itself and return it as a collection, as sketched below.
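For example, getMoviesByDate() could run a single query and feed every row through one shared mapper, so each getMoviesBy*() method returns the same movie structure as getMovieById(). A sketch using PDO (the table and column names are assumptions):
class Movie_model {

    /** @var PDO */
    private $db;

    public function getMoviesByDate($date) {
        // One query instead of one per movie; genres could be joined here
        // or fetched with a second IN (...) query.
        $stmt = $this->db->prepare(
            'SELECT id, title, release_date FROM movies WHERE release_date = :date'
        );
        $stmt->execute(array(':date' => $date));

        $movies = array();
        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $movies[] = $this->buildMovie($row);
        }
        return $movies;
    }

    public function getMovieById($id) {
        $stmt = $this->db->prepare('SELECT id, title, release_date FROM movies WHERE id = :id');
        $stmt->execute(array(':id' => $id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $this->buildMovie($row) : null;
    }

    // Single place that defines what "a movie" looks like, so every
    // getMoviesBy*() method returns the same structure.
    private function buildMovie(array $row) {
        return $row; // attach genres etc. here if needed
    }
}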
It doesn't seem very logical to have getMoviesByDate() and getMovieById() methods on a Movie class.
An alternative would be to have some sort of MovieManager class that does all of the retrieving, and returns Movie objects.
class MovieManager {

    public function getMoviesByDate($date) {
        // get movies by date, build an array of Movie objects and return
    }

    public function getMoviesByGenre($genre) {
        // get movies by genre, build an array of Movie objects and return
    }

    public function getMovieById($id) {
        // get movie by id, return Movie object
    }
}
Your Movie class would just have properties and methods specific to a single movie:
class Movie {
    public $id;
    public $name;
    public $releaseDate;
}
It's OK to have separate methods for getting by date, genre, etc., but you must ensure that you are not fetching the same records multiple times; in that case you will want a single query that joins the various tables you need.
Edit - after you have clarified your question:
The idea of getting movie IDs by date, then running them all through getMovieById() is bad! The movie data should be pulled when getting by date, so you don't have to hit the database again.
You could modify your getMovieById() function: pass the date as a parameter and have the function return the movies by their ID, filtered by date.
To keep track of which records you've already loaded into RAM, you can use a base class for your models that saves the IDs of the records already loaded plus a reference to the corresponding model object in RAM.
class ModelBase {

    /* contains the id of the current record, null if new record */
    protected $id;

    // keep track of records already loaded
    static $loaded_records = array();

    public function __construct(array $attr_values) {
        // assign $attr_values to this class's attributes
        // save this instance in the class variable so the object can be reused
        if ($attr_values['id'] != null) {
            self::$loaded_records[get_called_class()][$attr_values['id']] = $this;
        }
    }

    public static function getConcurrentInstance(array $attr_values) {
        $called_class = get_called_class();
        if (isset(self::$loaded_records[$called_class][$attr_values['id']])) {
            // this record was already loaded into RAM
            $record = self::$loaded_records[$called_class][$attr_values['id']];
            // you may need to update certain fields of $record
            // from the data in $attr_values, because the data in RAM may
            // be old.
        } else {
            // create the model with the given values
            $record = new $called_class($attr_values);
        }
        return $record;
    }

    // provides basic methods to update records in RAM to the database etc.
    public function save() {
        // create query to save this record to the database ...
    }
}
Your movie model could look something like this.
class MovieModel extends ModelBase {

    // additional attributes
    protected $title;
    protected $date;
    // more attributes ...

    public static function getMoviesByDate($date) {
        // fetches records from database
        // calls getConcurrentInstance() to return an instance of MovieModel for every record
    }

    public static function getMovieById($id) {
        // fetches record from database
        // calls getConcurrentInstance() to return an instance of MovieModel
    }
}
Other things you could do to decrease the load on the DB:
Only connect to the database once per request. There are also ways to share a database connection between multiple requests.
Index the fields in your database which get searched often.
Only fetch the records you need.
Avoid loading the same record twice (if it didn't change).

Symfony app - how to add calculated fields to Propel objects?

What is the best way of working with calculated fields of Propel objects?
Say I have an object "Customer" that has a corresponding table "customers" and each column corresponds to an attribute of my object. What I would like to do is: add a calculated attribute "Number of completed orders" to my object when using it on View A but not on Views B and C.
The calculated attribute is a COUNT() of "Order" objects linked to my "Customer" object via ID.
What I can do now is to first select all Customer objects, then iteratively count Orders for all of them, but I'd think doing it in a single query would improve performance. But I cannot properly "hydrate" my Propel object since it does not contain the definition of the calculated field(s).
How would you approach it?
There are several choices. The first is to create a view in your DB that will do the counts for you, similar to my answer here. I do this for a current Symfony project I work on where the read-only attributes for a given table are actually much, much wider than the table itself. This is my recommendation, since grouping columns (max(), count(), etc.) are read-only anyway.
The other option is to actually build this functionality into your model. You absolutely CAN do this hydration yourself, but it's a bit complicated. Here are the rough steps:
Add the columns to your Table class as protected data members.
Write the appropriate getters and setters for these columns
Override the hydrate() method and, within it, populate your new columns with data from other queries. Make sure to call parent::hydrate() as the first line.
However, this isn't much better than what you're talking about already: you'll still need N + 1 queries to retrieve a single record set. You can get creative in step #3, though, so that N is the number of calculated columns rather than the number of rows returned.
Another option is to create a custom selection method on your TablePeer class.
Do steps 1 and 2 from above.
Write custom SQL that you will query manually via the Propel::getConnection() process.
Create the dataset manually by iterating over the result set, and handle the custom hydration at that point so as not to break hydration when used by the doSelect() processes.
Here's an example of this approach
<?php

class TablePeer extends BaseTablePeer
{
    public static function selectWithCalculatedColumns()
    {
        // Do our custom selection, still using propel's column data constants
        $sql = "
            SELECT " . implode(', ', self::getFieldNames(BasePeer::TYPE_COLNAME)) . "
                 , COUNT(" . JoinedTablePeer::ID . ") AS calc_col
            FROM " . self::TABLE_NAME . "
            LEFT JOIN " . JoinedTablePeer::TABLE_NAME . "
                ON " . JoinedTablePeer::ID . " = " . self::FKEY_COLUMN . "
            GROUP BY " . self::ID; // group so the count is computed per row of the main table

        // Get the result set
        $conn = Propel::getConnection();
        $stmt = $conn->prepareStatement($sql);
        $rs = $stmt->executeQuery(array(), ResultSet::FETCHMODE_NUM);

        // Create an empty rowset
        $rowset = array();

        // Iterate over the result set
        while ($rs->next())
        {
            // Create each row individually
            $row = new Table();

            // hydrate() returns the index of the first column it did not consume
            $startcol = $row->hydrate($rs);

            // Use our custom setter to populate the new column
            $row->setCalcCol($rs->getInt($startcol));

            $rowset[] = $row;
        }
        return $rowset;
    }
}
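For completeness, using the custom peer method would then look something like this (getCalcCol() being the accessor added in steps 1 and 2):
foreach (TablePeer::selectWithCalculatedColumns() as $row)
{
    echo $row->getCalcCol();
}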
There may be other solutions to your problem, but they are beyond my knowledge. Best of luck!
I am doing this in a project now by overriding hydrate() and Peer::addSelectColumns() for accessing postgis fields:
// in peer
public static function locationAsEWKTColumnIndex()
{
    return GeographyPeer::NUM_COLUMNS - GeographyPeer::NUM_LAZY_LOAD_COLUMNS;
}

public static function polygonAsEWKTColumnIndex()
{
    return GeographyPeer::NUM_COLUMNS - GeographyPeer::NUM_LAZY_LOAD_COLUMNS + 1;
}

public static function addSelectColumns(Criteria $criteria)
{
    parent::addSelectColumns($criteria);
    $criteria->addAsColumn("locationAsEWKT", "AsEWKT(" . GeographyPeer::LOCATION . ")");
    $criteria->addAsColumn("polygonAsEWKT", "AsEWKT(" . GeographyPeer::POLYGON . ")");
}

// in object
public function hydrate($row, $startcol = 0, $rehydrate = false)
{
    $r = parent::hydrate($row, $startcol, $rehydrate);

    // load GIS info from the DB only if the location field is populated.
    // NOTE: these fields are either both NULL or both NOT NULL, so this IF is OK
    if ($row[GeographyPeer::locationAsEWKTColumnIndex()])
    {
        // load gis data from the extra select columns; see GeographyPeer::addSelectColumns()
        $this->location_ = GeoPoint::PointFromEWKT($row[GeographyPeer::locationAsEWKTColumnIndex()]);
        $this->polygon_ = GeoMultiPolygon::MultiPolygonFromEWKT($row[GeographyPeer::polygonAsEWKTColumnIndex()]);
    }
    return $r;
}
There's something goofy with addAsColumn() that I can't remember at the moment, but this does work. You can read more about the addAsColumn() issues.
Here's what I did to solve this without any additional queries:
Problem
I needed to add a custom COUNT field to a typical result set used with the Symfony pager. However, as we know, Propel doesn't support this out of the box. So the easy solution is to just do something like this in the template:
foreach ($pager->getResults() as $project):
    echo $project->getName() . ' and ' . $project->getNumMembers();
endforeach;
Where getNumMembers() runs a separate COUNT query for each $project object. Of course, we know this is grossly inefficient because you can do the COUNT on the fly by adding it as a column to the original SELECT query, saving a query for each result displayed.
I had several different pages displaying this result set, all using different Criteria. So writing my own SQL query string with PDO directly would be way too much hassle as I'd have to get into the Criteria object and mess around trying to form a query string based on whatever was in it!
So, what I did in the end avoids all that, letting Propel's native code work with the Criteria and create the SQL as usual.
1 - First create the [get/set]NumMembers() accessor/mutator methods in the model object that gets returned by doSelect(). Remember, the accessor doesn't run the COUNT query anymore; it just holds the value.
2 - Go into the peer class, override the parent doSelect() method, and copy all of its code exactly as it is.
3 - Remove this bit, because getMixerPreSelectHook() is a private method of the base peer (or copy it into your peer if you need it):
// symfony_behaviors behavior
foreach (sfMixer::getCallables(self::getMixerPreSelectHook(__FUNCTION__)) as $sf_hook)
{
    call_user_func($sf_hook, 'BaseTsProjectPeer', $criteria, $con);
}
4 - Now add your custom COUNT field to the doSelect method in your peer class:
// copied into ProjectPeer - overrides BaseProjectPeer::doSelectJoinUser()
public static function doSelectJoinUser(Criteria $criteria, ...)
{
    // copied from parent method, along with everything else
    ProjectPeer::addSelectColumns($criteria);
    $startcol = (ProjectPeer::NUM_COLUMNS - ProjectPeer::NUM_LAZY_LOAD_COLUMNS);
    UserPeer::addSelectColumns($criteria);

    // now add our custom COUNT column after all other columns have been added
    // so as to not screw up Propel's position matching system when hydrating
    // the Project and User objects.
    $criteria->addSelectColumn('COUNT(' . ProjectMemberPeer::ID . ')');

    // now add the GROUP BY clause to count members by project
    $criteria->addGroupByColumn(self::ID);

    // more parent code
    ...

    // until we get to this bit inside the hydrating loop:
    $obj1 = new $cls();
    $obj1->hydrate($row);

    // AND...hydrate our custom COUNT property (the last column)
    $obj1->setNumMembers($row[count($row) - 1]);

    // more code copied from parent
    ...

    return $results;
}
That's it. Now you have the additional COUNT field added to your object without doing a separate query to get it as you spit out the results. The only drawback to this solution is that you've had to copy all the parent code because you need to add bits right in the middle of it. But in my situation, this seemed like a small compromise to save all those queries and not write my own SQL query string.
Add an attribute "orders_count" to a Customer, and then write something like this:
class Order {
    ...
    public function save($conn = null) {
        // only bump the counter when a new order is created
        if ($this->isNew()) {
            $customer = $this->getCustomer();
            $customer->setOrdersCount($customer->getOrdersCount() + 1);
            $customer->save();
        }
        return parent::save($conn);
    }
    ...
}
You can hook into other methods besides save(); the idea stays the same. Unfortunately, Propel doesn't support any "magic" for such fields.
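In the same spirit, you would presumably also decrement the counter when an order is deleted, so it stays in sync (a sketch, not tested):
public function delete($conn = null) {
    $customer = $this->getCustomer();
    $customer->setOrdersCount($customer->getOrdersCount() - 1);
    $customer->save();
    parent::delete($conn);
}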
Propel actually builds an automatic function based on the name of the linked field. Let's say you have a schema like this:
customer:
  id:
  name:
  ...
order:
  id:
  customer_id:   # links to customer table automagically
  completed:     { type: boolean, default: false }
  ...
When you build your model, your Customer object will have a method getOrders() that will retrieve all orders associated with that customer. You can then simply use count($customer->getOrders()) to get the number of orders for that customer.
The downside is that this will also fetch and hydrate those Order objects. On most RDBMSs, the only performance difference between pulling the records and using COUNT() is the bandwidth used to return the result set. If that bandwidth would be significant for your application, you might want to create a method in the Customer object that builds the COUNT() query manually using Creole:
// in lib/model/Customer.php
class Customer extends BaseCustomer
{
    public function CountOrders()
    {
        $connection = Propel::getConnection();
        // count rows in the order table for this customer
        $query = "SELECT COUNT(*) AS count FROM %s WHERE customer_id = '%s'";
        $statement = $connection->prepareStatement(sprintf($query, OrderPeer::TABLE_NAME, $this->getId()));
        $resultset = $statement->executeQuery();
        $resultset->next();
        return $resultset->getInt('count');
    }
    ...
}
