Recently I integrated Doctrine 2 ORM into CodeIgniter 2. I configured Doctrine 2 as a library and autoloaded it in CodeIgniter. Within a page I instantiate the Doctrine entity manager in the following way:
private $em = null;

public function __construct() {
    parent::__construct();
    // Grab the EntityManager from the Doctrine library loaded by CodeIgniter
    $this->em = $this->doctrine->em;
}
I then use the Entity Manager when needed. The issue is that on each page request the Entity Manager takes some time to initialize (approx. 1 second), so the user has to wait until the page loads. Below are some performance results I measured:
BENCHMARKS
Loading Time: Base Classes   0.0166
Doctrine                     0.0486
GetArticle                   1.0441
Functions                    0.0068
Controller Execution Time    1.1770
Total Execution Time         1.1938
The GetArticle function basically makes an EntityManager->find() call:
$currentart = $this->em->find('Entities\Article', $artid);
I have to wait that 1 second even if I use the EntityManager->createQuery() method.
On every page, I lose approximately 1 second to the EntityManager's first request.
Is this common?
Does this 1 second come from the fact that EntityManager needs to establish a connection to the DB? The functions/requests after the first request are quite fast though.
The most time consuming thing that Doctrine does is load metadata for your entities, whether it's annotations, XML, or YAML. Doctrine lazy loads the metadata when possible, so you will not see the performance hit until you start using entities. Since the metadata doesn't change unless you make changes in your code, Doctrine allows you to cache the metadata across requests. DQL queries also need to be parsed into SQL, so Doctrine provides another caching configuration for this.
In a production environment you should set these caches up (it sounds like you have already, but for others reading this):
$cache = new \Doctrine\Common\Cache\ApcCache(); // or MemcacheCache
$configuration->setMetadataCacheImpl($cache); // caches metadata for entities
$configuration->setQueryCacheImpl($cache);    // caches SQL from DQL queries
To prevent the first page load from taking the full metadata hit, you can set up a cache warmer that loads all of the class metadata and saves it to the cache.
$em->getMetadataFactory()->getAllMetadata(); // loads and caches metadata for every mapped entity
Another potential bottleneck is the generation of proxy classes. If this is not configured correctly in a production environment, Doctrine will generate the classes and save them to the file system on every page load. These proxy classes do not change unless the entity's code changes, so it is again unnecessary for this to happen. To speed things up, you should generate the proxies using the command line tool (orm:generate-proxies) and disable auto-generation:
$configuration->setAutoGenerateProxyClasses(false);
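Pulling these pieces together, a production bootstrap might look roughly like the sketch below. This is a minimal sketch, not the asker's actual setup: the metadata driver, paths, and connection options are placeholder assumptions.

// Minimal sketch of a production-tuned Doctrine 2 bootstrap.
// Entity/proxy paths and $connectionOptions are hypothetical.
$cache = new \Doctrine\Common\Cache\ApcCache();

$configuration = new \Doctrine\ORM\Configuration();
$configuration->setMetadataCacheImpl($cache);
$configuration->setQueryCacheImpl($cache);

// Annotation metadata driver; XML/YAML drivers are configured the same way.
$driver = $configuration->newDefaultAnnotationDriver('/path/to/Entities');
$configuration->setMetadataDriverImpl($driver);

// Proxies pre-generated with orm:generate-proxies, never at runtime.
$configuration->setProxyDir('/path/to/Proxies');
$configuration->setProxyNamespace('Proxies');
$configuration->setAutoGenerateProxyClasses(false);

$em = \Doctrine\ORM\EntityManager::create($connectionOptions, $configuration);

// Optional warm-up (e.g. run once per deploy, not per request):
// $em->getMetadataFactory()->getAllMetadata();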
Hopefully this helps you out. Some more information can be found at http://www.doctrine-project.org/docs/orm/2.0/en/reference/improving-performance.html#bytecode-cache
Does Laravel have any caching mechanism to cache something just for the duration of the current script/request? Whether that's using a cache driver like FileCache or DatabaseCache, or just an in-memory cache.
For example, I have some data that is quite volatile and changes often, and my script fetches it in multiple places. So I would like to cache it the first time I fetch it, but forget it after the current script has finished executing (so on the next request it is fetched again).
It would be the equivalent of me having some global variable $cache or similar, where I could store for example $cache['options']. Is there something like that in Laravel already?
Thank you.
One way is simply to follow the singleton pattern in the class that does the repeated work.
You can also just bind an instance of a class to the Service Container and pull in that dependency where you need to use it.
https://laravel.com/docs/5.4/container
Singleton or instance binding would allow your application to share the same instance of a class anywhere during a single execution.
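As a rough sketch of the binding approach (the OptionsRepository class and its all() method are invented for illustration, not part of Laravel):

// In a service provider's register() method: one shared instance per request.
$this->app->singleton(OptionsRepository::class, function ($app) {
    return new OptionsRepository();
});

// The class itself can memoize the data in a plain property, which gives
// you the per-request $cache['options'] behaviour from the question.
class OptionsRepository
{
    protected $options = null;

    public function all()
    {
        if ($this->options === null) {
            // Hypothetical query; runs at most once per request.
            $this->options = \DB::table('options')->get();
        }

        return $this->options;
    }
}

Because PHP tears everything down at the end of the request, the instance (and its memoized data) is forgotten automatically, which is exactly the lifetime the question asks for.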
I'm having a strange problem in my application (Yii Framework 1.1.8).
I called a function as follows:
UserDataModel::model()->cache(3600, $dependency)->getAttributes();
After calling this function I called another model and fetched the data.
ProfileModel::model()->findAll();
To my surprise, ProfileModel was also cached. When I remove the first line (UserDataModel), ProfileModel fetches uncached data. Since the two models are different, why is the first model forcing caching onto the next model's call?
Is there anything wrong with my implementation?
Thanks.
Arfeen
I hope I can help you out. I can see you are not specifying the third parameter of cache(), which indicates the number of queries to be cached. My guess is that while the dependency holds, everything from that line down gets cached in the CFileCache, completely independently of the model. In fact, I have caching that implements a dependency on several tables so I can cache more than one query, and in the third parameter I tell the cache how many queries to save.
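As a sketch of that suggestion (the find() call and $criteria are illustrative; the point is passing the query-count argument explicitly):

// cache($duration, $dependency, $queryCount): limit caching to the next
// query only, so it cannot spill over onto unrelated model calls.
$userData = UserDataModel::model()->cache(3600, $dependency, 1)->find($criteria);

// Not covered by the cache() call above.
$profiles = ProfileModel::model()->findAll();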
In our Symfony2 project, we would like to ensure that modifications across resources are transactional. For example, something like:
namespace ...;

use .../TransactionManager;

class MyService
{
    protected $tm;

    public function __construct(TransactionManager $tm)
    {
        $this->tm = $tm;
    }

    /**
     * @ManagedTransaction
     */
    public function doSomethingAcrossResources()
    {
        ...

        // where tm is the transaction manager
        // tm is exposing a Doctrine EntityManager adapter here
        $this->tm->em->persist($entity);

        ...

        // tm is exposing a redis adapter here
        $this->tm->redis->set('foo', 'bar');

        if ($somethingWentWrong) {
            throw new Exception('Something went terribly wrong');
        }
    }
}
So there are a couple of things to note here:
Every resource will need an adapter exposing its API (e.g. a Doctrine adapter, Redis adapter, Memcache adapter, File adapter, etc.).
In case something goes wrong (an Exception is thrown), nothing should get written to any managed resource (i.e. everything is rolled back).
If nothing goes wrong, all resources get updated as expected.
The doSomethingAcrossResources function does not have to worry about undoing changes it made to non-transactional resources like files and Memcache, for example. This is key, because otherwise this code would likely become a tangled mess of writing to Redis only at the appropriate time, etc.
The @ManagedTransaction annotation takes care of the rest (committing, rolling back, and starting the transactions required, based on the adapters).
In the simplest implementation, the transaction manager can manage a queue and dequeue all items serially; if an exception is thrown, it simply won't dequeue anything. The adapters are thus the transaction manager's knowledge of how to commit each item in the queue.
If an exception occurs during a dequeue, the transaction manager looks to its adapters for how to roll back the already-dequeued items (probably placed on a rollback stack). This might get tricky for resources like the EntityManager, which would need to manage a transaction internally in order to roll back changes easily. A Redis adapter, however, might cache the previous value during an UPDATE, or issue a DELETE during rollback of an ADD.
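A minimal sketch of that queue-based design, assuming nothing beyond plain PHP (every class and method name here is invented for illustration):

// Hypothetical adapter contract: each resource knows how to apply an
// operation and how to undo one that has already been applied.
interface TransactionalAdapter
{
    public function apply($operation);
    public function undo($operation);
}

class TransactionManager
{
    /** @var array Queued [adapter, operation] pairs, applied on commit. */
    private $queue = array();

    public function enqueue(TransactionalAdapter $adapter, $operation)
    {
        $this->queue[] = array($adapter, $operation);
    }

    public function commit()
    {
        $applied = array(); // rollback stack

        try {
            foreach ($this->queue as $item) {
                list($adapter, $operation) = $item;
                $adapter->apply($operation);
                $applied[] = $item;
            }
        } catch (Exception $e) {
            // Undo, in reverse order, whatever was already applied.
            foreach (array_reverse($applied) as $item) {
                list($adapter, $operation) = $item;
                $adapter->undo($operation);
            }
            throw $e;
        }

        $this->queue = array();
    }
}

A Redis adapter's undo() could restore the previously cached value or DELETE a key it added, while a Doctrine adapter could lean on a native database transaction instead of implementing undo() by hand.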
Does a transaction manager like this already exist? Is there a better way of achieving these goals? Are there caveats that I may be overlooking?
Thanks!
It turns out that we ended up not needing to ensure atomicity across our resources. We do want our database interactions to be atomic when multiple rows/tables are involved, but we decided to use an event-driven architecture instead.
If, say, updating Redis fails inside an event listener, we stop propagation, but it's not the end of the world -- we can still inform the user of a successful operation (even if some side effects were not successful).
We can run background jobs to occasionally update Redis as needed. This lets us keep the core business logic in a service method and then dispatch an event upon success, allowing non-critical side effects (updating the cache, sending emails, updating Elasticsearch, etc.) to take place isolated from one another, outside the main business logic.
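For illustration, with Symfony2's EventDispatcher component that flow might look roughly like this (the 'order.placed' event name and $order payload are made up):

use Symfony\Component\EventDispatcher\EventDispatcher;
use Symfony\Component\EventDispatcher\GenericEvent;

$dispatcher = new EventDispatcher();

// Non-critical side effect, isolated from the core business logic.
$dispatcher->addListener('order.placed', function (GenericEvent $event) {
    try {
        // ... update Redis / send email / reindex Elasticsearch ...
    } catch (\Exception $e) {
        // The side effect failed: stop the remaining listeners, but the
        // core database work has already committed successfully.
        $event->stopPropagation();
    }
});

// Core service method: commit the DB transaction, then announce success.
$dispatcher->dispatch('order.placed', new GenericEvent($order));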
I have an application in which I repeatedly use the same (big) class. Since I use AJAX for the app, I always have to create a new object of this class. Someone advised me to cache an instance of this class and reuse it whenever it is required (using APC in a PHP environment).
What are the benefits of this? Does it really save some time?
$this->tickets_persist = unserialize(@apc_fetch("Tickets"));
if (!$this->tickets_persist) {
    $this->tickets_persist = new Tickets_Persistance(); // Takes a long time
    apc_store("Tickets", serialize($this->tickets_persist));
}
The benefits are only really realized if you are dealing with a class that has an expensive instantiation cost. If there are things that take a lot of time, memory, or other resources in the constructor of the class (for example, reading an XML sitemap and building a complex data structure for your navigation), you can dodge that cost by leveraging caching.
It's also worth noting that resources (like database links and such) cannot be cached and would have to be re-established after the object is unserialized (this is where the __sleep and __wakeup magic methods come in).
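A minimal sketch of that pattern, using the question's class name (the $config property and PDO connection are assumptions for illustration):

class Tickets_Persistance
{
    private $db;     // live resource: cannot survive serialization
    private $config; // plain data: serializes fine

    // Keep only the serializable state when the object is cached.
    public function __sleep()
    {
        return array('config');
    }

    // Re-establish the database link after unserialize().
    public function __wakeup()
    {
        $this->db = new \PDO($this->config['dsn'],
                             $this->config['user'],
                             $this->config['password']);
    }
}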
It is only worth it if your object requires a lot of processing during instantiation. Caching will not help you with "big" objects as such; it helps when you want to avoid processing that would otherwise be repeated. In your case, it is only worth it if your constructor requires a lot of processing. Let's take an example of how caching would work in the context of a web page:
On first page load, instantiate and cache the object for x hours
On any subsequent page load for the next x hours, it will directly return the object, without processing the instantiation
After x hours, the cached object expires; the next page load will re-instantiate the object and re-cache it
Your application will behave in the same way; the only difference is that you will reuse the instantiation work that has already been done.
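For instance, the question's snippet could bound the lifetime with apc_store()'s third (TTL) argument; the 3600 here is an arbitrary example value:

$this->tickets_persist = unserialize(@apc_fetch("Tickets"));
if (!$this->tickets_persist) {
    $this->tickets_persist = new Tickets_Persistance();
    // Expire the cached object after one hour (3600 seconds).
    apc_store("Tickets", serialize($this->tickets_persist), 3600);
}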
I moved from an old server running CentOS at a managed host to a new one running Ubuntu in AWS.
After the move I've noticed that the page that loads a list of items now takes ~10-12 seconds to render (sometimes even up to 74 seconds). This was never an issue on the old server. I used New Relic to look at what was taking so long and found that sfPHPView->render() was taking 99% of the time, with approximately ~500 calls to the DB to render the page.
The page is a list of ideas, with each idea a row. I use the $idea->getAccounts()->getSlug() ability of Doctrine 1.2, where Accounts is another table linked to the idea as a foreign relation. This is called several times for each idea row. A partial is not currently used to hold the code for each row element.
1. Is there a performance advantage to using a partial for the row element (ignoring for now the benefit of code maintainability)?
2. What is the best practice for referencing data connected via a foreign relation? I'm surprised that a call is made to the DB every time $idea->getAccounts()->getSlug() is called.
3. Is there anything obvious in Ubuntu that would otherwise make sfPHPView->render() run slower than on CentOS?
I'll give you my thoughts:
1. When using a partial for a row element, it's easier to cache, because you can fine-tune the caching per partial.
2. Because you don't explicitly define the relation when building the query, Doctrine won't hydrate the related elements. You can manually join the relations you want hydrated; then your call to $idea->getAccounts()->getSlug() won't issue a new query every time:
$q = $this->createQuery();
// Join the relation up front so it is hydrated with the main query.
$q->leftJoin('Idea.Account');
3. No idea for point 3.
PS: for point 2, it's very common to see lots of queries in the admin generator when you display information from a relation in the list view. The solution is to define the method used to retrieve the data:
In your generator.yml:
list:
  table_method: retrieveForBackendList
In the IdeaTable:
public function retrieveForBackendList(Doctrine_Query $q)
{
    $rootAlias = $q->getRootAlias();
    // Join Account once here so the list view doesn't query per row.
    $q->leftJoin($rootAlias . '.Account');

    return $q;
}
I thought I would add what else I did to improve the page load speed, in addition to jOk's recommendations.
In addition to the explicit joins I did the following:
Switched to returning a DQL Query object which was then passed to Doctrine's paginator
Changed from using include_partial() to PHP's include() which reduced the object creation time of include_partial()
Hydrated the data from the DB as an array instead of an object (see the sketch below)
Removed some foreach loops by doing more leftJoins in the DB
Used result & query caching to reduce the number of DB calls
Used view caching to reduce PHP template generation time
Interestingly, doing 1 to 4 made 5 and 6 more effective and easier to implement. I think there is something to be said for improving your code before jumping in with caching.
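As a sketch of points 3 and 5 in Doctrine 1.2 terms (the aliases and column names are illustrative, and useResultCache() assumes a result-cache driver has been configured via ATTR_RESULT_CACHE):

$q = Doctrine_Query::create()
    ->from('Idea i')
    ->leftJoin('i.Account a')
    // Point 5: cache the result set, here for one hour.
    ->useResultCache(true, 3600);

// Point 3: hydrate plain arrays instead of heavyweight record objects.
$ideas = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

foreach ($ideas as $idea) {
    echo $idea['Account']['slug'];
}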