Zend Framework 2 - painfully slow?

I have installed the ZendSkeletonApplication on my webserver, which runs PHP-FPM (5.5+, so OPcache is enabled) and Apache.
However, the response time for the out-of-the-box example application is 110 ms, which seems like a lot to me. A "static" PHP file is served in ~30 ms. I am not saying a PHP framework looping through listeners and whatnot should match that, but taking over 100 ms to serve a static controller and template is really slow.
Even after generating class and template maps (http://akrabat.com/zend-framework-2/using-zendloaderautoloader/) and enabling module and configuration caching in application.config.php, I couldn't get below the 100 ms mark.
Are there any other ways to improve ZF2's performance?

ZF2, by its nature, does a lot of file I/O on every request. A single page request that loads a data set from a database with Doctrine and displays the results can open around 200 PHP files. (Run an xdebug cachegrind and you can see how much time is spent checking for, and opening, files. It can be substantial.)
Most of what's being opened is "small" and executes very quickly once it's been read off disk, but the file I/O itself can cause significant delays.
A couple of things you need to do with a ZF2 app in PRODUCTION:
1) Run "composer dump-autoload -o", which caches a full autoload map for the vendor directory. This keeps the autoloader from having to run a "file_exists()" check before including a needed file.
2) Generate an autoload classmap for your project itself and make sure the project is configured to use it.
3) Make sure you've set up a template map in your config so ZF2 doesn't have to guess the location of your templates, which results in disk I/O.
4) Make sure you have an opcode caching solution in place such as Zend Opcache or APC (depending on your PHP version). You will want it set up to have a medium-term cache timeout (an hour or more), and file stat should be disabled in production. You should hard-clear this cache every time you deploy code (can be accomplished via apache restart, a script, etc).
5) If you're using anything that depends on annotations, such as Doctrine, you MUST make sure the annotations are cached. APC is a good solution for this, but even a file cache is much better than no cache at all. Parsing these annotations is very expensive.
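Point 3 above can be sketched as follows. This is a minimal ZF2-era module config fragment; the template names and paths mirror the skeleton application's layout but are illustrative, not taken from the asker's project:

```php
// module/Application/config/module.config.php (illustrative paths)
'view_manager' => array(
    'template_map' => array(
        // explicit name => file mappings: no directory scanning at render time
        'layout/layout'           => __DIR__ . '/../view/layout/layout.phtml',
        'application/index/index' => __DIR__ . '/../view/application/index/index.phtml',
        'error/404'               => __DIR__ . '/../view/error/404.phtml',
    ),
),
```

With a complete template map in place, the resolver never has to stat the template path stack looking for matching files.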
This combination resulted in "instantaneous" page loads for ZF2 for me.
While developing, don't sweat it too much. Install opcode caching if you want, but make sure it stats files to check whether they've changed... otherwise it'll ignore changes you make to the files.

Related

Too many cache files in system/cache folder in CodeIgniter

I have a music site developed on CodeIgniter with the YouTube API and no database. Recently I have noticed that too many files are being generated in the system/cache folder.
How can I stop these cache files from being generated? Note that I am not using any caching method.
First of all, system/cache is NOT the default CodeIgniter cache directory; the default is application/cache. I would not store cache there, as it is the framework's main folder.
By default, CI does NOT cache anything, so your application must have been built with caching.
You said you don't use a database, so I assume it's not DB cache.
Check your app for something like "$this->load->driver('cache'".
Caching can be loaded WITHOUT additional parameters, like
$this->load->driver('cache');
or with parameters, like
$this->load->driver('cache', array('adapter' => 'xxx'));
https://www.codeigniter.com/userguide3/libraries/caching.html
Now, search your app for $this->cache->save OR $this->cache->file->save.
If you find either, it means you are using CI caching.
The problem is, you cannot just remove the cache loading: the app initiates a cache object, and it will fail unless you rewrite every place where caching is used.
Now, you have a few choices:
1. Just clean the cache dir periodically with a cron script.
2. Change the cache folder permissions to non-writable, which will generate warnings in your logs, so logging should be disabled. This is not the right way IMHO, as it can cause fatal errors/blank pages, but it is one possible solution. If file caching is used this should not cause issues, while in other cases it could.
3. Extend the caching library and simply create an empty cache SAVE function. In this case your files will not be saved.
4. Cache to memcached, if you have it on your server. If your caching is written like $this->cache->file->{operation}, you will need to update all of those calls to $this->cache->memcached->{operation}. If caching is written like $this->cache->{operation}, you can just adjust the configuration, something like
$this->load->driver('cache', array('adapter' => 'memcached'));
and set the memcached server info in the config file (config/memcached.php).
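Option 3 (a no-op save) could be sketched like this. This is an illustrative CodeIgniter 3-style extension, not code from the asker's app; the class would live in application/libraries/ and relies on CI's MY_ class-extension convention:

```php
// application/libraries/MY_Cache.php (illustrative sketch)
class MY_Cache extends CI_Cache {

    // Override save() to drop all writes; reads and the rest of the
    // cache API keep working, so existing calls don't break.
    public function save($id, $data, $ttl = 60, $raw = FALSE)
    {
        return TRUE; // pretend the write succeeded, write nothing
    }
}
```

This keeps the rest of the application untouched while guaranteeing no new cache files appear.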
You said you are not using any caching method, so you should not find any of the code above.
The last thing I can think of is
$this->output->cache(xxx);
where xxx is the cache time in minutes. It forces the entire generated page to be cached.
If you find such lines, try commenting them out and see what happens.
https://www.codeigniter.com/user_guide/general/caching.html
There is a good note there: if you change configuration options that might affect your output, you have to manually delete your cache files.
If absolutely none of the examples above is found, you might be dealing with some custom-made caching.
Good luck!
Put this in your common controller:
$this->output->delete_cache();

Laravel cache: how to identify what writes data to it

Recently, in one of our projects that uses Laravel 5.4, we noticed that some data is being cached in /storage/framework/cache/data - we are using the file cache. The contents of the cached files are things like: 1529533237i:1;. Several files with similar content are created throughout the day - so many that we have to clean this cache periodically to avoid disk space issues from running out of inodes.
I know that alternatives to the file cache are things like Redis or Memcached, but the issue is that we're not sure what this cached data is or which component of the project is caching it. We use several external libraries, so it could be one of many, but we don't know which. I've already looked through all the project's configuration files but couldn't identify anything that obviously controls data caching.
Are there any recommendations on trying to identify which piece of code is writing this data so we can better handle the caching of this data, whatever it may be?
Laravel has several events that dispatch during caching.
Create a new listener that listens on the Illuminate\Cache\Events\KeyWritten event. You could log the backtrace to see exactly what leads to specific items being cached.
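A minimal version of that listener could be wired up inline. This is an illustrative sketch for Laravel 5.x (it would typically go in a service provider's boot() method); the log message wording is made up:

```php
// e.g. in app/Providers/EventServiceProvider.php, boot() (illustrative placement)
use Illuminate\Cache\Events\KeyWritten;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

Event::listen(KeyWritten::class, function (KeyWritten $event) {
    // Log the cache key plus a backtrace so the log shows exactly
    // which code path triggered the write.
    Log::debug('Cache KeyWritten: ' . $event->key, [
        'trace' => (new \Exception())->getTraceAsString(),
    ]);
});
```

After a day of traffic, grepping the log for "Cache KeyWritten" should point straight at the library doing the writes.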

Handling Symfony's cache in production

I have a Symfony2 website that I'm testing in production. I went ahead and cleared its cache because I've made (and will probably make more) modifications, however there is a small problem:
While the cache is being cleared (and, say, afterwards I want to warm it up), someone who accesses the website rebuilds the cache. That creates a problem: the cache starts being rebuilt before the deletion has finished, so part of what was just built gets deleted while the deletion is still in progress.
What happens afterwards is that only part of the cache gets built. Symfony thinks the cache is built entirely and runs without trying to build it any more, but it runs on a half-built cache. The deletion process is fairly long (~15 sec), so during this timeframe nobody must try to create the cache by accessing the website.
Either that, or the cache is completely built, it overwrites the old cache, and the system treats these new files as old ones, deleting some of them while others remain. I'm not entirely sure, and I don't know how to check this.
For instance, one of the errors that I'd get is
The directory "D:\xampp\htdocs\med-app\app\app\cache\dev/jms_diextra/metadata" does not exist.
If I weren't using that bundle, I'd get another cache error from Doctrine instead. This appears on every website access until I delete the cache again WITHOUT anyone accessing the website; it completely blocks access to the website and makes it non-functional.
Also, what about the warmup? That takes a while, too. What if someone accesses the website while the cache is being warmed up? Doesn't that create a conflict, too?
How do I handle this problem? Do I need to stop the Apache service, clear and warm the cache, and then restart Apache? How is this handled for a website in production?
EDIT
Something interesting that I have discovered. The bug occurs when I delete the cache/prod folder. If I delete the contents of the folder without deleting the folder itself, it seems the bug does not occur. I wonder why.
Usually it is good practice to lock the website into maintenance mode if you're performing updates or clearing the cache for any other reason in production. Sometimes web hosting services have an option to handle this for you, or there is a nice bundle for handling maintenance easily from the command line.
This way you can safely delete the cache and be sure no-one visits the page and rebuilds the cache incorrectly.
Usually if you have to clear the Symfony cache it means you're updating to a new version - so not only do you have to clear the cache, you probably also have to dump assets and perform other tasks. In this case what I've done in the past, and what has worked very well, is to treat each production release as its own version in its own folder - when you install a new version you do it unconnected from the webserver, and then just point your webserver at the new version when you are done. The added benefit is that if you mess something up and have to roll back, you just immediately link back to the previous version.
For example, say your Apache config's DocumentRoot always points to a specific location:
DocumentRoot /var/www/mysite/web
You would make that root a symlink to your latest version:
/var/www/mysite/web -> /var/www/versions/1.0/web
Now say you have version 1.1 of your site to install. You simply install it to /var/www/versions/1.1 - put the code there, install your assets, update the cache, etc. Then simply change the symlink:
/var/www/mysite/web -> /var/www/versions/1.1/web
Now if the site crashes horribly you can simply point the symlink back. The benefit here is that there is no downtime for your site and it's easy to roll back if you made a mistake. To automate this I use a bash script that installs a new version and updates the symlink with a series of commands connected via && - if one step of the install fails, the whole install fails, and you're not stuck in version limbo.
Granted there are probably better ways to do all of the above or ways to automate it further, but the point is if you're changing production you'll want to perform the Symfony installation/setup without letting users interfere with that.
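The symlink-flip deploy described above can be sketched as a short script. The paths are illustrative; this sketch uses a scratch directory so it can be run anywhere:

```shell
# Sketch of a symlink-based release switch (paths are illustrative).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/versions/1.0/web" "$ROOT/versions/1.1/web"

# current release: 1.0
ln -sfn "$ROOT/versions/1.0/web" "$ROOT/web"

# deploy 1.1: copy code, install assets, warm the cache inside versions/1.1
# (build steps omitted here), then flip the symlink as the very last step,
# so visitors never see a half-built cache
ln -sfn "$ROOT/versions/1.1/web" "$ROOT/web"

# rollback is just pointing the link back at versions/1.0/web
```

Because the web server only ever follows the symlink, the new release's cache is fully built before any request touches it.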

Making Symfony 2 Assetic development comfortable

I'm looking for ways to make Symfony 2 Assetic 1.0.2 development easier. I use Assetic for dumping/publishing my assets.
Currently I keep running this command in the background:
php app/console assetic:dump --watch
It helps a lot, every change I make to JS or CSS files will automatically get dumped to the public directory where the resources are fetched from by the browser.
However, I have issues with this:
If I add a new CSS/JS file, for some reason it does not get dumped. I need to stop the watch, clear the cache and start the watch again.
It is kind of slow, eats 5%-20% CPU time constantly.
Is there an alternative way to develop with Assetic? I already tried serving the resources through a controller (use_controller: true for Assetic), but it was even slower (because, let's face it, PHP is not meant for serving static data).
For me, this is the fastest way to develop with Assetic that I could find. I have tried and tried to find a better workflow to speed up asset generation, but found none.
There is some work in Symfony2's master branch on a ResourceWatcher component which could help with this issue by:
speeding up the watching process by relying on a native resource watcher like inotify
fixing the problem where added/removed resources are not dumped correctly
You can watch progress on the component in this PR.
Hope someone will provide some tricks to speed up development with Assetic, or a completely different workflow.
For the slowness, you can run with --no-debug and --forks=4. Install the Spork dependency through Composer, then run app/console assetic:dump --no-debug --forks=4.
If you have more cores, add more forks. If you want to keep a core free, lower the number. I'm not sure why it isn't 4 times faster - doubtless it is not too intelligent about assigning different Assetic jobs to different cores - but it's a start.
Some things I just tried briefly:
time app/console assetic:dump
real 1m53.511s
user 0m52.874s
sys 0m4.989s
time app/console assetic:dump --forks=4
real 1m14.272s
user 1m12.716s
sys 0m5.752s
time app/console assetic:dump --forks=4 --no-debug
real 1m9.569s
user 1m6.948s
sys 0m5.844s
I'm not sure this will help with --watch, as --watch consumes an entire core on its own, because it is a while (true) loop in PHP.
In development, use this:
php app/console assets:install web --symlink
Configure different filters for development and production. In production you want your JS and CSS minified and uglified, but this is a waste of time during development.
Make sure that assetic.debug is false. This will ensure that your JS and CSS files are concatenated, so that all JS and CSS can be fetched in one HTTP request each.
If you are using the controller (assetic.use_controller is true) and you have your browser's developer tools open, make sure the "Disable cache" checkbox is unchecked (in Chrome, the checkbox is on the Network pane; in Firefox it is in the settings pane). This allows your browser to send If-Modified-Since requests: if the files have not changed on the server, the server returns 304 Not Modified without recompiling your assets, and the browser uses the latest version from its cache.
Do not use Assetic to load files from CDNs. Either download the files to your server (manually, using Bower, or whatever), or load them from the CDN by adding <script src=…> or <link rel=stylesheet href=…> directly into your HTML template.
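The debug/filter settings mentioned above live in the environment configs. An illustrative Symfony 2-era production fragment (file path per the standard edition layout):

```yaml
# app/config/config_prod.yml (illustrative): serve combined, pre-dumped assets
assetic:
    debug: false           # concatenate files instead of emitting one tag per asset
    use_controller: false  # serve dumped files from web/, not through PHP
```

In config_dev.yml you would flip these to true, keeping the minify/uglify filters out of the development pipeline entirely.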

Optimize PHP framework loading

I have a custom built application framework written in PHP which I have been tasked to optimize. This framework is a shared codebase which loads MVC "modules" to provide various functionality. Each module is a directory containing multiple PHP classes for controllers and models and PHP files for views.
The entire framework loads for almost all requests, including images and stylesheets. This is because the modules are designed to be self-contained packages, and they may contain images, stylesheets, javascripts or other static files within them. Because of this, there is overhead in serving what would normally be a very simple request, because the system has to load all the modules just to determine which modules are available to pull static files from.
The general process for handling any given URI is as follows:
All base system classes are included
A global exception handler and some global variables are set
A system-wide configuration file is read. (This is a file filled with PHP statements to set config variables)
A connection to the database is made
The modules folder is scanned via opendir() and each module is verified to be valid and free of syntax errors, and then included.
A second configuration file is loaded which sets up configuration for the modules
A new instance of each module is created (calling its __construct() method and possibly creating other database connections, performing individual startup routines, etc.)
The URI is examined and passed off to the appropriate module(s)
Steps 1 - 7 will almost always be exactly the same. They perform the exact same operations unless new modules are installed or the configuration file is changed. My question is: what could be done to optimize this process? Ideally, I'd like some way of handling multiple requests, similar to the way KeepAlive requests work. All the overhead of initializing every module seems like a waste just to readfile() a single image or CSS file, only to incur that same overhead again for the next request.
Is there any way to reduce the overhead of a framework like this? (I don't even know if anyone can help me without studying all the code, this may be a hopeless question)
It's generally a bad idea to tie up a dynamic web server thread serving static content. Apache, IIS, Nginx, et al. already do everything you need to serve these files. If each static asset is located somewhere within the public docroot and has a unique URL, you shouldn't need PHP to be involved in loading them at all.
Furthermore, if you ensure that your cache-related headers (ETag, Last-Modified, etc.) are generated correctly, each client should only request each file once. Free caching == win!
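The conditional-request flow behind that "free caching" can be sketched in plain PHP. respondWithFile() is a hypothetical helper, not part of any framework; it returns a (status, Last-Modified header, body) triple:

```php
<?php
// Sketch: honour If-Modified-Since so a client re-downloads a static
// file only when it has actually changed on disk.
function respondWithFile($file, $ifModifiedSince = null)
{
    $mtime = filemtime($file);
    $lastModified = gmdate('D, d M Y H:i:s', $mtime) . ' GMT';

    // Client's cached copy is still current: 304, empty body.
    if ($ifModifiedSince !== null && strtotime($ifModifiedSince) >= $mtime) {
        return array(304, $lastModified, null);
    }

    // Otherwise send the full file along with its Last-Modified stamp.
    return array(200, $lastModified, file_get_contents($file));
}
```

A real deployment would let the web server do this natively; the sketch just shows why a correct Last-Modified header eliminates repeat transfers.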
Is there a reason all of the modules need to be loaded for every request? Why not allow controllers to specify which modules they require to be loaded, and only load those which are requested?
Why not move step 8 before step 5? Examine the URI first, then load modules based on the result.
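That reordering can be sketched as: derive the module name from the URI before loading anything, then include only that module. moduleForUri() and the routing convention (first path segment names the module) are illustrative assumptions, not the asker's actual API:

```php
<?php
// Sketch: pick the module from the URI so the dispatcher can load just
// that one module's files instead of scanning and including all of them.
function moduleForUri($uri)
{
    $path = trim(parse_url($uri, PHP_URL_PATH), '/');
    $segments = explode('/', $path);

    // Empty path falls back to a hypothetical default module.
    return $segments[0] !== '' ? $segments[0] : 'home';
}

// The dispatcher would then do something like:
//   require 'modules/' . moduleForUri($_SERVER['REQUEST_URI']) . '/Module.php';
```

For static files the same idea applies: resolve the target module first, then serve the file with no other modules initialized.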
Another one:
each module is verified to be valid and free of syntax errors, and then included.
Are you really syntax-checking files before include()-ing them? If so, why is that necessary?
