I have a Propel 1.6 generated class Group that has Inits related to it, and Inits have Resps related to them. Pretty straightforward.
I don't understand the difference between these two pieces of Propel code. Here in the first one, I re-create the $notDeleted criteria on every loop. This code does what I want -- it gets all the Resps into the $data array.
foreach ($group->getInits() as $init) {
    $notDeleted = RespQuery::create()->filterByIsDeleted(false);
    foreach ($init->getResps($notDeleted) as $resp) {
        $data[] = $resp;
    }
}
Here in the second code, I had the $notDeleted criteria pulled out of the loop, for (what I thought were) obvious efficiency reasons. This code does not work the way I want -- it only gets the Resps from one of the Inits.
$notDeleted = RespQuery::create()->filterByIsDeleted(false);
foreach ($group->getInits() as $init) {
    foreach ($init->getResps($notDeleted) as $resp) {
        $data[] = $resp;
    }
}
I thought it must be something to do with how the getResps() method caches its results, but that's not what the docs or the code for that method say. Both say that if the criteria passed to getResps() is not null, it will always fetch the results from the database. Maybe some other Propel cache?
(First off, I'm guessing you meant to use $init versus $initiative in your loops. That or there's some other code we're not seeing here.)
Here's my guess: in your second example you hoist the $notDeleted Criteria object out of the loop, but each time through the inner foreach, the call to getResps($notDeleted) makes Propel apply filterByInit() to that same Criteria instance with the current Init. Each pass adds another WHERE condition to the SQL, but obviously a Resp can only have one Init.Id value, hence the lone result.
I don't think there is a good reason to hoist it, though: under the covers Propel just creates a new Criteria object by cloning the one you pass in, so no real memory is saved.
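If you do want to keep the Criteria outside the loop anyway, a defensive workaround (a sketch, untested against Propel 1.6) is to hand getResps() a fresh copy on each pass so the shared instance is never mutated:

$notDeleted = RespQuery::create()->filterByIsDeleted(false);
foreach ($group->getInits() as $init) {
    // clone so filterByInit() modifies a throwaway copy, not the shared Criteria
    foreach ($init->getResps(clone $notDeleted) as $resp) {
        $data[] = $resp;
    }
}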
I have a strange problem with \Doctrine\ORM\UnitOfWork::getScheduledEntityDeletions used inside an onFlush event:
foreach ($unitOfWork->getScheduledEntityDeletions() as $entity) {
    if ($entity instanceof PollVote) {
        $arr = $entity->getAnswer()->getVotes()->toArray();
        dump($arr);
        dump($entity);
        dump(in_array($entity, $arr, true));
        dump(in_array($entity, $arr));
    }
}
And here is the result:
So we see that the object is pointing to a different instance than the original, and therefore in_array no longer yields the expected result when used with strict comparison (AKA ===). Furthermore, the \DateTime object is pointing to a different instance.
The only possible explanation I found is the following (source):
Whenever you fetch an object from the database Doctrine will keep a copy of all the properties and associations inside the UnitOfWork. Because variables in the PHP language are subject to “copy-on-write” the memory usage of a PHP request that only reads objects from the database is the same as if Doctrine did not keep this variable copy. Only if you start changing variables PHP will create new variables internally that consume new memory.
However, I did not change anything (even the created field is kept as it is). The only operations that were performed on the entity are:
\Doctrine\ORM\EntityRepository::findBy (fetching from DB)
\Doctrine\Common\Persistence\ObjectManager::remove (scheduling for removal)
$em->flush(); (triggering synchronization with DB)
Which leads me to think (I might be wrong) that Doctrine's change tracking method has nothing to do with the issue I'm experiencing. Which leads me to the following questions:
What causes this?
How to reliably check if an entity scheduled for deletion is inside a collection (\Doctrine\Common\Collections\Collection::contains uses in_array with strict comparison) or which items in a collection are scheduled for deletion?
The problem is that when you tell Doctrine to remove an entity, it is removed from the identity map (here):
<?php
public function scheduleForDelete($entity)
{
    $oid = spl_object_hash($entity);
    // ....
    $this->removeFromIdentityMap($entity);
    // ...
    if ( ! isset($this->entityDeletions[$oid])) {
        $this->entityDeletions[$oid] = $entity;
        $this->entityStates[$oid] = self::STATE_REMOVED;
    }
}
And when you do $entity->getAnswer()->getVotes(), Doctrine does the following:
Loads all votes from the database
For every vote, checks whether it is in the identity map; if so, reuses the existing instance
If it is not in the identity map, creates a new object
Try calling $entity->getAnswer()->getVotes() before you delete the entity. If the problem disappears, then I am right. Of course, I would not suggest this hack as a solution; it is just to make sure we understand what is going on under the hood.
UPD: instead of just calling $entity->getAnswer()->getVotes(), you should probably foreach over all the votes, because of lazy loading. If you only call $entity->getAnswer()->getVotes(), Doctrine probably won't do anything, and will load them only when you start to iterate over them.
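Put together, the experiment would look something like this (a sketch based on the entities in the question; $em is assumed to be the entity manager):

// Hydrate the lazy collection while the votes are still in the identity map
foreach ($entity->getAnswer()->getVotes() as $vote) {
    // iterating is enough to force initialization of the collection
}
$em->remove($entity); // now the scheduled entity and the collection share instances
$em->flush();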
From the doc:
If you call the EntityManager and ask for an entity with a specific ID twice, it will return the same instance
So calling findOneBy(['id' => 12]) twice should return the exact same instance.
So it all depends on how both instances are retrieved by Doctrine.
In my opinion, the one you get in $arr comes from a One-to-Many association on $votes in the Answer entity, which results in a separate query (maybe an id IN (12)) by the ORM.
Something you could try is to declare this association as eager (fetch="EAGER"); it may force the ORM to make a specific query and keep the result in its cache, so that the second time you want it, the same instance is returned.
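For reference, that mapping change would look roughly like this on the Answer entity (a sketch; names are assumed from the question):

/**
 * @OneToMany(targetEntity="PollVote", mappedBy="answer", fetch="EAGER")
 */
private $votes;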
Could you have a look at the logs and post them here? They may indicate something interesting, or at least relevant for investigating further.
I am using the Fat-Free Framework with the Cortex ORM plugin. I am trying to get the number of records matching specific criteria. My code:
$group_qry = new \models\system\UserGroup;
$group_qry->load(array('type=?',$name));
echo $group_qry->count(); // always returns 3, i.e. the total number of records in the table
Initially I was thinking that this might be because the filtering wasn't working and it always fetched everything, but that is not the case, because I verified it with:
while (!$group_qry->dry()) {
    echo '<br/>'.$group_qry->type;
    $group_qry->next();
}
So how do I get the number of records actually loaded after filtering?
Yeah, this part is confusing: the count() method actually executes a SELECT COUNT(*) statement. It takes the same arguments as the load() method, so in your case:
$group_qry->count(array('type=?',$name));
It is not exactly what you need, since it will execute a second SELECT, which will reduce performance.
What you need is to count the number of rows in the result array. Since this array is a protected variable, you'll need to create a dedicated function for that in the UserGroup class:
class UserGroup extends \DB\SQL\Mapper {
    function countResults() {
        return count($this->query);
    }
}
If you feel that it's a bit overkill for such a simple need, you can file a feature request asking for the framework to handle it. Sounds like a reasonable demand.
UPDATE:
It's now part of the framework. So calling $group_qry->loaded() will return the number of loaded records.
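With that in place, the original code becomes (a sketch against a current Fat-Free version):

$group_qry = new \models\system\UserGroup;
$group_qry->load(array('type=?', $name));
echo $group_qry->loaded(); // number of records actually loaded by the filtered load()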
xfra35 is right: until now there was no method for just returning the loaded items when using the mappers as active record. Alternatively, you can just count the collection:
$result = $group_qry->find(array('type=?',$name));
echo count($result); // gives you the number of all found results
In addition, I've just added the countResults method, as suggested. Thanks xfra35.
https://github.com/ikkez/F3-Sugar/commit/92fa18130892ab7d2303edc31d2b1f4bba70e881
I'm querying big chunks of data with CakePHP's find. I use recursive = 2. (I really need that much recursion, sadly.) I want to cache the results from the associations, but I don't know where to return them. For example, I have a Card table, and Card belongs to Artist. When I query something from Card, the find method runs against the Card table but not the Artist table; yet I still get the Artist value for the Card's artist_id field, and I see a query like this in the query log:
SELECT `Artist`.`id`, `Artist`.`name` FROM `swords`.`artists` AS `Artist` WHERE `Artist`.`id` = 93
My question is how can I cache this type of queries?
Thanks!
1. Where does Cake "do" this?
CakePHP does this really cool but - as you have discovered yourself - sometimes expensive operation in its different DataSource::read() method implementations, for example in DboSource::read() for the Dbo datasources. As you can see there, you have no direct 'hooks' (= callbacks) at the point where Cake evaluates the $recursive option and may decide to query your associations. BUT we have before and after callbacks.
2. Where to Cache the associated Data?
Such an operation is, in my opinion, best suited to the beforeFind and afterFind callback methods of your Model classes, OR equivalently to Model.beforeFind and Model.afterFind event listeners attached to the model's event manager.
The general idea is to check your Cache in the beforeFind method. If you have some data cached, change the $recursive option to a lower value (e.g. -1, 0 or 1) and do the normal query. In the afterFind method, you merge your cached data with the newly fetched data from your database.
Note that beforeFind is only called on the Model from which you are actually fetching the data, whereas afterFind is also called on every associated Model, thus the $primary parameter.
3. An Example?
// We are in a Model.
protected $cacheKey;

public function beforeFind($query) {
    if (isset($query["recursive"]) && $query["recursive"] == 2) {
        $this->cacheKey = generate_my_unique_query_cache_key($query); // Todo
        if (Cache::read($this->cacheKey) !== false) {
            $query["recursive"] = 0; // 1, -1, ...
            return $query;
        }
    }
    return parent::beforeFind($query);
}

public function afterFind($results, $primary = false) {
    if ($primary && $this->cacheKey) {
        if (($cachedData = Cache::read($this->cacheKey)) !== false) {
            $results = array_merge($results, $cachedData);
            // Maybe use Hash::merge() instead of array_merge,
            // or something completely different.
        } else {
            // Extract your data from $results here;
            // Hash::extract() is your friend!
            // But use debug($results) if you have no clue :)
            $data = ...;
            Cache::write($this->cacheKey, $data);
        }
        $this->cacheKey = null;
    }
    return parent::afterFind($results, $primary);
}
4. What else?
If you are having trouble with deep / high values of $recursive, have a look at Cake's Containable behavior. It allows you to filter even the deepest recursions; see the sketch below.
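A minimal Containable sketch, using the Card/Artist models from the question (the behavior ships with CakePHP 2.x):

// In the Card model:
public $actsAs = array('Containable');

// Then fetch only the associations you actually need:
$cards = $this->Card->find('all', array(
    'contain' => array('Artist'),
));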
As another tip: sometimes such deep recursion can be a sign of a generally bad or suboptimal design (database schema, general software architecture, process and functional flow of the application, and so on). Maybe there is an easier way to achieve your desired result?
The easiest way to do this is to install the CakePHP Autocache Plugin.
I've been using this (with several custom modifications) for the last 6 months, and it works extremely well. It will not only cache the recursive data as you want, but also any other model query. It can bring the number of queries per request to zero, and still be able to invalidate its cache when the data changes. It's the holy grail of caching... ad-hoc solutions aren't anywhere near as good as this plugin.
Write the query result like the following:
Cache::write("cache_name",$result);
When you want to retrieve the data from the cache, read it back like this:
$results = Cache::read("cache_name");
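Put together, a typical read-through pattern looks like this (the cache key name is illustrative):

$results = Cache::read('card_find_all');
if ($results === false) {
    // Cache miss: run the expensive recursive find, then store the result
    $results = $this->Card->find('all', array('recursive' => 2));
    Cache::write('card_find_all', $results);
}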
I know this is more of a PHP problem, because of its loose typing of arrays, but I see this problem all over the place in a project I took over, and I'm not sure of the best way to refactor it. Suppose you have two sets of data, both multidimensional arrays, $results_by_entity and $target_limits, and we want to check what the target is for each result_by_entity so we can set some state:
foreach ($results_by_entity AS $result_by_entity) {
    foreach ($target_limits AS $target_limit) {
        if ($target_limit['activity_id'] == $result_by_entity['activity_id']) {
            $result_by_entity->target = $target_limit->quantity;
            $result_by_entity->progress = $target_limit->score;
        }
    }
}
There are a couple of main problems here:
1. The data is really strongly tied together, so it is really hard to refactor $results_by_entity into its own class and $target_limits into its own class.
2. The time taken to process this grows quadratically with the data size, since every result is compared against every target limit.
I read the Refactoring book by Martin Fowler and it was really helpful, but this style of problem doesn't really show up, I think mostly because his examples are in Java, which is statically typed. The class is one giant run-on, so it's really hard to debug and extend, and all the data is so tied together, primarily because of these types of loops, that I'm not sure how to solve it. Any recommendations would be really appreciated.
What you want is to index your data pre-emptively if possible:
$results_index = array();
foreach ($results_by_entity AS $result_by_entity) {
    // Index this value
    // (add a & in front if it's a scalar value, but it looks like it's an object in your example)
    $results_index[$result_by_entity['activity_id']] = $result_by_entity;
}
foreach ($target_limits AS $target_limit) {
    // Find the corresponding activity id in results
    if (isset($results_index[$target_limit['activity_id']])) {
        $result_by_entity = $results_index[$target_limit['activity_id']];
        $result_by_entity->target = $target_limit->quantity;
        $result_by_entity->progress = $target_limit->score;
    }
}
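This turns the O(n·m) nested loop into O(n + m). The same idea also works in the other direction if that reads more naturally: index the targets once, then walk the results a single time (same assumed array/object shapes as in the question):

$targets_by_activity = array();
foreach ($target_limits as $target_limit) {
    $targets_by_activity[$target_limit['activity_id']] = $target_limit;
}
foreach ($results_by_entity as $result_by_entity) {
    $id = $result_by_entity['activity_id'];
    if (isset($targets_by_activity[$id])) {
        $result_by_entity->target = $targets_by_activity[$id]->quantity;
        $result_by_entity->progress = $targets_by_activity[$id]->score;
    }
}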
I realize the knee-jerk response to this question is "you don't", but hear me out.
Basically, I am running an active-record system on top of SQL, and in order to prevent duplicate objects for the same database row, I keep an 'array' in the factory with each currently loaded object (using the autoincrement 'id' as the key).
The problem is that when I try to process 90,000+ rows through this system on the odd occasion, PHP hits memory issues. This would very easily be solved by running a garbage collection every few hundred rows, but unfortunately, since the factory stores a copy of each object, PHP's garbage collection won't free any of these nodes.
The only solution I can think of is to check whether the reference count of each object stored in the factory is equal to one (i.e. nothing else references it), and if so, free it. This would solve my issue; however, PHP doesn't have a reference-count method (besides debug_zval_dump, but that's barely usable).
Sean's debug_zval_dump function looks like it will do the job of telling you the refcount, but really, the refcount doesn't help you in the long run.
You should consider using a bounded array to act as a cache; something like this:
<?php
class object_cache {
    var $objs = array();
    var $max_objs = 1024; // adjust to fit your use case

    function add($obj) {
        $key = $obj->getKey();
        // remove it from its old position
        unset($this->objs[$key]);
        // If the cache is full, retire the eldest from the front
        if (count($this->objs) > $this->max_objs) {
            $dead = array_shift($this->objs);
            // commit any pending changes to db/disk
            $dead->flushToStorage();
        }
        // (re-)add this item to the end
        $this->objs[$key] = $obj;
    }

    function get($key) {
        if (isset($this->objs[$key])) {
            $obj = $this->objs[$key];
            // promote to most-recently-used
            unset($this->objs[$key]);
            $this->objs[$key] = $obj;
            return $obj;
        }
        // Not cached; go and get it
        $obj = $this->loadFromStorage($key);
        if ($obj) {
            $this->objs[$key] = $obj;
        }
        return $obj;
    }
}
Here, getKey() returns some unique id for the object that you want to store.
This relies on the fact that PHP remembers the order of insertion into its hash tables; each time you add a new element, it is logically appended to the array.
The get() function makes sure that the objects you access are kept at the end of the array, so the front of the array is going to be the least recently used element, which is the one we want to dispose of when we decide that space is low; array_shift() does this for us.
This approach is better known as an LRU (least-recently-used) cache: it evicts the least recently used items first and keeps the most recently used ones. The idea is that you are more likely to access the items that you have accessed most recently, so you keep them around.
What you get here is the ability to control the maximum number of objects that you keep around, and you don't have to poke around at the php implementation details that are deliberately difficult to access.
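For completeness, wiring it up might look like this (row_cache and fetchById are hypothetical; the sketch above leaves getKey() and loadFromStorage() to you):

class row_cache extends object_cache {
    function loadFromStorage($key) {
        // assumed helper that fetches the row and builds the object
        return MyTable::fetchById($key);
    }
}

$cache = new row_cache();
$obj = $cache->get(42); // served from cache if recently used, loaded otherwise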
It seems like the best answer was still getting the reference count, although debug_zval_dump and ob_start were too ugly a hack to include in my application.
Instead I coded up a simple PHP module with a refcount() function, available at: http://github.com/qix/php_refcount
Yes, you can definitely get the refcount from PHP. Unfortunately, the refcount isn't easy to get, because it doesn't have an accessor built into PHP. That's OK, because we have PREG!
<?php
function refcount($var)
{
    ob_start();
    debug_zval_dump($var);
    $dump = ob_get_clean();

    $matches = array();
    preg_match('/refcount\(([0-9]+)/', $dump, $matches);

    $count = $matches[1];

    // 3 references are added, including when calling debug_zval_dump()
    return $count - 3;
}
?>
Source: PHP.net
I know this is a very old issue, but it still came up as a top result in a search so I thought I'd give you the "correct" answer to your problem.
Unfortunately, getting the reference count is, as you've found, a minefield, but in reality you don't need it for 99% of the problems that might want it.
What you really want to use is the WeakRef class, quite simply it holds a weak reference to an object, which will expire if there are no other references to the object, allowing it to be cleaned up by the garbage collector. It needs to be installed via PECL, but it really is something you want in every PHP installation.
You would use it like so (please forgive any typos):
class Cache {
    private $max_size;
    private $cache = [];
    private $expired = 0;

    public function __construct(int $max_size = 1024) { $this->max_size = $max_size; }

    public function add(int $id, object $value) {
        unset($this->cache[$id]);
        $this->cache[$id] = new WeakRef($value);
        if ($this->max_size > 0 && count($this->cache) > $this->max_size) {
            $this->prune();
            if (count($this->cache) > $this->max_size) {
                array_shift($this->cache);
            }
        }
    }

    public function get(int $id) { // ?object
        if (isset($this->cache[$id])) {
            $ref = $this->cache[$id];
            $result = $ref->get();
            if ($result === null) {
                // Prune if the cache gets too empty
                if (++$this->expired > count($this->cache) / 4) {
                    $this->prune();
                }
            } else {
                // Move the weak reference to the end so it is culled last if non-empty
                unset($this->cache[$id]);
                $this->cache[$id] = $ref;
            }
            return $result;
        }
        return null;
    }

    protected function prune() {
        $this->cache = array_filter($this->cache, function ($value) {
            return $value->valid();
        });
        $this->expired = 0; // reset the expiry counter after cleaning up
    }
}
This is the overkill version that uses both weak references and a max size (set the max size to -1 to disable it). Basically, if the cache gets too full or too many results have expired, it prunes any dead references to make space, and only drops live references if it has to, for sanity.
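For illustration, usage would look something like this (the IDs and objects are hypothetical):

$cache = new Cache(1024);
$cache->add(42, $someEntity); // held behind a WeakRef, not a strong reference
$entity = $cache->get(42);    // null if nothing else kept the object alive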
PHP 7.4 now has WeakReference
To know if $obj is referenced by something else or not, you could use:
// 1: create a weak reference to the object
$wr = WeakReference::create($obj);

// 2: unset our own reference
unset($obj);

// 3: test if the weak reference is still valid
$res = $wr->get();
if (!is_null($res)) {
    // a handle to the object is still held somewhere else
    $obj = $res;
    unset($res);
}
I had a similar problem with the Incredibly Flexible Data Storage (IFDS) file format with trying to keep track of references to objects in an in-memory data cache. How I solved it was to create a ref-counting class that wrapped a reference to the underlying array. I generally prefer arrays over objects as PHP has traditionally tended to handle arrays better than objects with regards to unfortunate things like memory leaks.
class IFDS_RefCountObj
{
    public $data;

    public function __construct(&$data)
    {
        $this->data = &$data;
        $this->data["refs"]++;
    }

    public function __destruct()
    {
        $this->data["refs"]--;
    }
}
Since 'refs' is tracked as a regular value in the data, it is possible to know when the last reference to the data has gone away. Regardless of whether multiple variables reference the refcounting object or it is cloned, the refcount will always be non-zero until all references are gone. I don't need to care how many actual references there are internally in PHP as long as the value is correctly zero vs. non-zero. The IFDS implementation also tracks an estimated amount of RAM being used by each object (again, being exact isn't super important as long as it is in the ballpark), allowing it to prioritize writing and releasing unused objects that are occupying system resources first and then writing and releasing portions of still-referenced objects that are caching large quantities of DATA chunk information.
To get back to the topic/question: with this ref-counting, class-based approach it is mostly straightforward to, for example, prune the cache down to ~5,000 records upon hitting 10,000 records. The general strategy is to keep records that are still referenced, plus the most recently requested/used unreferenced records, because they are likely to be referenced again. Upon every new reference, calling unset() and then setting the item again moves it to the end of the array, so the oldest, probably unreferenced items appear first and the newest, probably still-referenced items appear last.
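A rough sketch of that pruning pass, reusing the 'refs' counter that IFDS_RefCountObj maintains (the thresholds and helper name are illustrative; $cache is assumed to map ids to the shared data arrays the wrappers reference):

function prune_cache(array &$cache, $max = 10000, $target = 5000)
{
    if (count($cache) < $max) return;
    // Oldest entries come first thanks to the unset()/re-set trick above
    foreach ($cache as $key => $data) {
        if (count($cache) <= $target) break;
        if ($data["refs"] < 1) {
            unset($cache[$key]); // only evict records nothing references anymore
        }
    }
}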
Weak references, as several people have suggested, won't solve every caching issue. They don't work in caching scenarios where you don't want to remove an item from the cache until the application is done working with it (i.e. deleting an item that the application later attempts to use), but you also want to keep it around as long as RAM overhead permits, even if the application stops referencing it temporarily but might need it again in a moment. Weak references are also incapable of working in scenarios where the item in the cache is holding onto unwritten data that may or may not be fine with staying unwritten even when nothing in the application references it. In short, when there is a balancing act to maintain, weak references cannot be used.