Making Symfony 2 Assetic development comfortable - php

I'm looking for ways to make Symfony 2 Assetic 1.0.2 development easier. I use Assetic for dumping/publishing my assets.
Currently I keep running this command in the background:
php app/console assetic:dump --watch
It helps a lot: every change I make to JS or CSS files automatically gets dumped to the public directory, from which the browser fetches the resources.
However, I have issues with this:
If I add a new CSS/JS file, for some reason it does not get dumped. I need to stop the watch, clear the cache, and start the watch again.
It is kind of slow, constantly eating 5%-20% CPU.
Is there an alternative way to develop with Assetic? I already tried serving the resources through a controller (use_controller: true for Assetic), but that was even slower (because, let's face it, PHP is not made for serving static data).

For me, this is the fastest way to develop with Assetic that I could find. I have tried repeatedly to find a better workflow to speed up asset generation, but found none.
There is some work in the master branch of Symfony2 on a ResourceWatcher component which could help with this issue by:
Speeding up the watching process by relying on native resource watchers such as inotify
Fixing problems when resources are added or removed, so that they are dumped correctly.
You can watch progress on the component in this PR.
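Until that lands, here is a rough sketch of the same idea using the PECL inotify extension directly (the watched paths are placeholders, and newly added files may still need a cache clear first, as noted above):

<?php
// watch-assets.php - re-dump assets when something changes, without polling.
$fd = inotify_init();
$dirs = array(
    'src/Acme/DemoBundle/Resources/public/js',
    'src/Acme/DemoBundle/Resources/public/css',
);
foreach ($dirs as $dir) {
    inotify_add_watch($fd, $dir, IN_CREATE | IN_DELETE | IN_MODIFY | IN_MOVE);
}
while (true) {
    inotify_read($fd); // blocks until the kernel reports an event - no busy loop
    passthru('php app/console assetic:dump');
}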
Hope someone will provide some tricks to speed up development with Assetic, or a completely different workflow.
Regards,
Matt

For slowness, you can run with --no-debug and --forks=4. Install the Spork dependency through Composer, and run app/console assetic:dump --no-debug --forks=4.
If you have more cores, add more forks. If you want to keep a core (or cores) free, lower the number. Not sure why it isn't four times faster - doubtless it is not too intelligent about assigning different Assetic jobs to different cores - but it's a start.
Some things I just tried briefly:
time app/console assetic:dump
real 1m53.511s
user 0m52.874s
sys 0m4.989s
time app/console assetic:dump --forks=4
real 1m14.272s
user 1m12.716s
sys 0m5.752s
time app/console assetic:dump --forks=4 --no-debug
real 1m9.569s
user 1m6.948s
sys 0m5.844s
I'm not sure that this will help with --watch, as --watch consumes an entire core on its own, because it is essentially a while (true) loop in PHP.

In development, use this:
php app/console assets:install web --symlink

Configure different filters for development and production. In production you want your JS and CSS minified and uglified, but this is a waste of time during development.
Make sure that assetic.debug is false. This will ensure that your JS and CSS files are concatenated, so that all JS and CSS can be fetched in one HTTP request each.
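For example, a hedged sketch of what the split might look like in the AsseticBundle config (the filter choice and the uglifyjs path are placeholders for whatever you actually use):

# app/config/config_dev.yml - concatenate, but skip minification
assetic:
    debug: false
    use_controller: false

# app/config/config_prod.yml - minify only here
assetic:
    debug: false
    filters:
        cssrewrite: ~
        uglifyjs2:
            bin: /usr/local/bin/uglifyjs
            apply_to: '\.js$'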
If you are using the controller (assetic.use_controller is true) and you have your browser's developer tools open, make sure to uncheck the "Disable cache" checkbox (in Chrome, the checkbox is on the Network pane; in Firefox it is in the settings pane). This allows your browser to send If-Modified-Since requests: if the files have not changed on the server, the server returns 304 Not Modified without recompiling your assets, and the browser uses the latest version from its cache.
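Reduced to a generic PHP sketch, the handshake looks like this (an illustration only, not Assetic's actual controller code; the file name is hypothetical):

<?php
$file = 'web/css/compiled.css';
$lastModified = filemtime($file);
$since = isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
    ? strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) : 0;
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
if ($since >= $lastModified) {
    // Nothing changed: tell the browser to reuse its cached copy.
    header('HTTP/1.1 304 Not Modified');
    exit;
}
// Changed (or no cached copy yet): send the full asset.
header('Content-Type: text/css');
readfile($file);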
Do not use Assetic to load files from CDNs. Either download the files to your server (manually, using Bower, or whatever), or load them from the CDN by adding <script src=…> or <link rel=stylesheet href=…> directly into your HTML template.

Related

Handling Symfony's cache in production

I have a Symfony2 website that I'm testing in production. I went ahead and cleared its cache because I've made (and will probably make more) modifications; however, there is a small problem:
While the cache is being cleared (and, say, afterwards I want to warm it up), someone who accesses the website rebuilds the cache. That creates a small problem: the cache starts being rebuilt while the deletion is still in progress, so half of what was just built gets deleted.
What happens afterwards is that the cache is built, but only partially. Symfony thinks the cache is built entirely and runs without trying to build it any further, but it runs on a half-built cache. The deletion process takes a while (~15 sec), so in this timeframe nobody must try to create the cache by accessing the website.
Either that, or the cache is completely built, it overwrites the old cache, and the system treats these new files as old ones: it deletes part of them and some others remain. I'm not entirely sure; I don't know how to check this.
For instance, one of the errors that I'd get is
The directory "D:\xampp\htdocs\med-app\app\app\cache\dev/jms_diextra/metadata" does not exist.
If I weren't using that bundle I'd get another cache problem from Doctrine. This appears on every website access until I delete the cache again WITHOUT anyone accessing the website. It completely blocks access to the website and makes it non-functional.
Also, what about the warmup? That takes a while, too. What if someone accesses the website while the cache is being warmed up? Doesn't that create a conflict, too?
How do I handle this problem? Do I need to stop the Apache service, clear and warm the cache, and then restart Apache? How is this handled with a website in production?
EDIT
Something interesting that I have discovered. The bug occurs when I delete the cache/prod folder. If I delete the contents of the folder without deleting the folder itself, it seems the bug does not occur. I wonder why.
Usually it is good practice to put the website into maintenance mode if you're performing updates, or clearing the cache for any other reason, in production. Sometimes web hosting services have an option to handle this for you, or there is a nice bundle for handling maintenance easily from the command line.
This way you can safely delete the cache and be sure no-one visits the page and rebuilds the cache incorrectly.
Usually if you have to clear the Symfony cache it means you're updating to a new version - so not only do you have to clear the cache, you probably also have to dump assets and perform other tasks. In this case, what I've done in the past that has worked very well is to treat each production release as its own version in its own folder - when you install a new version you do it disconnected from the webserver, and then just change your webserver to point to the new version when you are done. The added benefit is that if you mess something up and have to perform a rollback, you just immediately link back to the previous version.
For example, say your Apache config always points DocumentRoot to a specific location:
DocumentRoot /var/www/mysite/web
You would make that root a symlink to your latest version:
/var/www/mysite/web -> /var/www/versions/1.0/web
Now say you have version 1.1 of your site to install. You simply install it to /var/www/versions/1.1 - put the code there, install your assets, update the cache, etc. Then simply change the symlink:
/var/www/mysite/web -> /var/www/versions/1.1/web
Now if the site crashes horribly you can simply point the symlink back. The benefit here is that there is no downtime for your site and it's easy to roll back if you made a mistake. To automate this I use a bash script that installs a new version and updates the symlinks with a series of commands connected via &&, so if one step of the install fails, the whole install fails and you're not stuck in version limbo.
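A minimal PHP sketch of the swap itself (paths hypothetical): build the new link under a temporary name, then rename() it over the old one - on Linux that replacement is atomic, so no request ever sees a missing docroot.

<?php
// deploy.php <version> - point the docroot at a new release atomically.
$version = $argv[1];                                // e.g. "1.1"
$target  = "/var/www/versions/$version/web";
if (!is_dir($target)) {
    fwrite(STDERR, "unknown version: $version\n");
    exit(1);
}
@unlink('/var/www/mysite/web.tmp');                 // clean up any stale temp link
symlink($target, '/var/www/mysite/web.tmp');        // create the new link aside
rename('/var/www/mysite/web.tmp', '/var/www/mysite/web'); // atomic swap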
Granted there are probably better ways to do all of the above or ways to automate it further, but the point is if you're changing production you'll want to perform the Symfony installation/setup without letting users interfere with that.

ZendFramework 2 - painfully slow?

I have installed the ZendSkeletonApplication on my webserver, which runs with php-fpm (5.5+, so opcache is enabled) and Apache.
However, the response time for the out-of-the-box example application is 110 ms, which seems like a lot to me. A "static" PHP file is served in ~30 ms. I am not saying that this should be possible with a PHP framework looping through listeners and whatnot, but serving a static controller & template in over 100 ms is really slow.
Even with generated class and template maps (http://akrabat.com/zend-framework-2/using-zendloaderautoloader/) and module and configuration caching enabled in application.config.php, I couldn't get below the 100 ms mark.
Are there any other ways of enhancing performance for ZF2?
ZF2, due to its nature, does a lot of file I/O on every request. A single page request that loads a data set from a database with Doctrine and displays the results can end up opening around 200 PHP files. (Run an Xdebug cachegrind and you can see how much time is spent checking for, and opening, files. It can be substantial.)
Most of what's being opened is "small" and executes very quickly once it's been read off disk, but the file I/O itself can cause significant delays.
A few things you need to do with a ZF2 app in PRODUCTION:
1) Run "composer dump-autoload -o" which will cache a full auto-load map for the vendor directory. This keeps the autoload system from having to run a "file_exists()" before including a needed file.
2) Generate an autoload classmap for your project itself and make sure the project is configured to use it.
3) Make sure you've set up a template map in your config so ZF2 doesn't have to "assume" the location of your templates, which results in disk I/O (see the first sketch after this list).
4) Make sure you have an opcode caching solution in place, such as Zend Opcache or APC (depending on your PHP version). You will want it set up with a medium-term cache timeout (an hour or more), and file stat should be disabled in production. You should hard-clear this cache every time you deploy code (this can be accomplished via an Apache restart, a script, etc.).
5) If you're using anything that depends on annotations, such as Doctrine etc., you MUST make sure the annotations are cached (see the second sketch after this list). APC is a good solution for this, but even a file cache is much better than no cache at all. Parsing these annotations is very expensive.
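To make (3) and (5) concrete, here are two hedged sketches; the paths are placeholders, and the class names come from ZF2 and doctrine/common 2.x:

<?php
// module.config.php - with a template_map the view resolver can go straight
// to the file instead of stat'ing template directories to find the script.
return array(
    'view_manager' => array(
        'template_map' => array(
            'application/index/index' => __DIR__ . '/../view/application/index/index.phtml',
            'layout/layout'           => __DIR__ . '/../view/layout/layout.phtml',
        ),
    ),
);

<?php
// Wrap the annotation reader in a cache so annotations are parsed once
// and then served from APC instead of being re-parsed on every request.
use Doctrine\Common\Annotations\AnnotationReader;
use Doctrine\Common\Annotations\CachedReader;
use Doctrine\Common\Cache\ApcCache;

$reader = new CachedReader(new AnnotationReader(), new ApcCache(), $debug = false);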
This combination resulted in "instantaneous" page loads for ZF2 for me.
While developing, don't sweat it too much. Install opcode caching if you want, but make sure it stats files to check whether they have changed... otherwise it'll ignore changes you make to the files.
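As an illustration of both points, the relevant Zend Opcache directives in php.ini (stock opcache names; APC has its own equivalents):

; production: trust the cache, never stat files - hard-clear it on deploy
opcache.validate_timestamps=0

; development: stat files on every request so edits are picked up
opcache.validate_timestamps=1
opcache.revalidate_freq=0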

Use Sprockets 2.0 on both local and prod (PHP) environments with minimal hassle

While I recognize that the dependency handling of Sprockets is awesome, I have little knowledge of how to use it properly to make it meet my needs.
I'm currently working on a PHP 5.3 application (powered by the Lithium framework, #li3), and I'm beginning development of a public JavaScript file meant to send requests to our servers and build DOM snippets from the results.
Basically, I want to keep my sources organized in modules, each dedicated to one task (AJAX requests, JSON parsing, DOM generation, etc.), and I feel the urge to use Sprockets.
BUT
how could Sprockets be nicely and somewhat transparently integrated into my workflow on my local environment? (I want to avoid CLI tasks every time I modify one of my files.)
I'm sure this is somehow possible, but my knowledge of Sprockets doesn't allow me to figure it out by myself.
Has anyone experimented with the same problem? How could it be solved? Thanks.
Generally, on your local environment, you'll run Sprockets as a web server. That will involve adding a config.ru file to your app with something like:
require 'sprockets'

map '/assets' do
  environment = Sprockets::Environment.new
  environment.append_path 'app/assets/javascripts'
  environment.append_path 'app/assets/stylesheets'
  run environment
end
and run it with rackup config.ru. This should reload your assets every time you change them.

Deploy PHP web system to multiple locations

I am developing (as a solo web developer) a rather large web-based system which needs to run at various different locations. Unfortunately, because some clients have dialup, we have had to do this rather than have one central server for them all. Each client is part of our VPN, and those on dialup/ISDN get dialed on demand by our Cisco router. All clients are accessible within a matter of seconds.
I was wondering what the best way would be to release an update to all these clients at once. Automation would be great, as there are 23+ locations to deploy the system to, each of which is used very regularly. Because of this, when deploying I need to display an 'updating' page so that the clients don't try to access the system while the update is partially complete.
Any thoughts on what would be the best solution?
EDIT: Found FileSyncTask which allows me to rsync with Phing. Going to use that.
There's also a case here for maintaining a "master" code repository (in SVN, CVS or maybe Git). This isn't your standard "keep editions of your code in the repo and allow rollbacks"... this repo holds your current production code (only). Once an update is ready, you check the working updated code into the master repo. All of your servers check the repo on a scheduled basis to see if it has changed, downloading new code if a change is found. That check process could even include turning on the maintenance.php file (that symcbean suggested) before starting the repo download and removing the file once the download is complete.
At the company I work for, we work with huge web-based systems in both Java and PHP. For all systems we have our development environments and production environments.
This company has over 200 developers, so I guess you can imagine the size of the products we develop.
What we have done is use Ant and RPM build archives for creating deployment packages. This is done quite easily. I haven't done this myself, but it might be worth looking into for you.
Because we use Linux systems we can easily deploy RPM packages; the setup scripts within an RPM package can make sure everything gets to the correct place. You also get more proper version handling and a release process.
Hope this helped you.
Br,
Paul
There are two parts to this; let's deal with the simple one first:
I need to display an 'updating' page
If you need to disable the entire site while maintaining transactional integrity, and publish a message to the users from the server being updated, then the only practical way to do this is via an auto-prepend. This needs to be configured in advance (note: I believe this can be done using a .htaccess file, without having to restart the webserver for a new PHP config):
<?php
// If a maintenance page has been dropped into the webroot, serve it
// instead of the requested script and stop.
if (file_exists($_SERVER['DOCUMENT_ROOT'] . '/maintenance.php')) {
    include_once($_SERVER['DOCUMENT_ROOT'] . '/maintenance.php');
    exit;
}
Then just drop maintenance.php into your webroot and that file will be displayed instead of the expected one. Note that it should probably include a session_start() and an auto-refresh to ensure the session does not expire. You might want to extend the above to allow a grace period where POSTs will still be processed, e.g. by adding a second PHP file.
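For example, the .htaccess wiring might look like this (assuming mod_php and an AllowOverride that permits it; the path is a placeholder):

# Run the check above before every PHP script on this vhost
php_value auto_prepend_file /var/www/mysite/prepend.php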
In terms of deploying to remote sites, I'd recommend using rsync over ssh for copying content files, invoked via a controlling script (sketched after this list) which:
Applies the lock file(s) as shown above
runs rsync to replicate files
runs any database deployment script
removes the lock file(s)
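A hypothetical PHP wrapper for those four steps (host, paths and the migration script are placeholders), aborting on the first failure so a broken sync leaves the site locked rather than half-updated:

<?php
// deploy-site.php - deploy to one remote site over the VPN.
$host = 'client1.example.vpn';
$root = '/var/www/system';

passthru("scp maintenance.php $host:$root/maintenance.php", $rc);  // 1. lock
$rc || passthru("rsync -az --delete ./build/ $host:$root/", $rc);  // 2. sync files
$rc || passthru("ssh $host php $root/scripts/migrate.php", $rc);   // 3. db script
$rc || passthru("ssh $host rm $root/maintenance.php", $rc);        // 4. unlock
exit($rc);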
If each site has a different setup then I'd recommend either managing the site-specific stuff via a hierarchy of include paths, or even maintaining a complete image of each site locally.
C.

How do you deploy a website to your webservers?

At my company we have a group of 8 web developers for our business web site (entirely written in PHP, but that shouldn't matter). Everyone in the group is working on different projects at the same time, and whenever they're done with their task they immediately deploy it (because business is moving fast these days).
Currently, development happens on one shared server with all developers working on the same code base (using RCS to "lock" files away from others). When deployment is due, the changed files are copied over to a "staging" server, and then a sync script uploads the files to our main webserver, from where they are distributed to the other 9 servers.
Quite happily, the web dev team asked us for help in order to improve the process (after we had complained for a while), and our idea for setting up their dev environment is now as follows:
A dev server with virtual directories, so that everybody has their own codebase,
SVN (or any other VCS) to keep track of changes
a central server for testing holding the latest checked in code
The question now is: how do we manage to deploy the changed files to the server without accidentally uploading bugs from other projects? My first idea was to simply export the latest revision from the repository, but that would not give full control over the files.
How do you manage such a situation? What kind of deployment scripts do you have in action?
(As a special challenge: the website has grown organically over the last 10 years, so the projects are not split up into small chunks; files for one specific feature are spread all over the directory tree.)
Cassy - you obviously have a long way to go before you'll get your source code management entirely in order, but it sounds like you are on your way!
Having individual sandboxes will definitely help. Next, make sure that the website is ALWAYS just a clean checkout of a particular revision, tag or branch from Subversion.
We use git, but we have a similar setup. We tag a particular version with a version number (in git we also get to add a description to the tag - good for release notes!), and then we have a script that anyone with access to "do a release" can run. It takes two parameters: which system is going to be updated (the datacenter, and whether we're updating the test or the production server) and the version number (the tag).
The script then uses sudo to run the release script under a shared account. It does a checkout of the relevant version, minifies JavaScript and CSS1, pushes the code to the relevant servers for the environment, and then restarts what needs to be restarted. The last line of the release script connects to one of the webservers and tails the error log.
On our websites we include an HTML comment at the bottom of each page with the current server name and the version - it makes it easy to see "What's running right now?"
1 and a bunch of other housekeeping tasks like that...
You should consider using branching and merging for individual projects (on the same codebase) if they make huge changes to the shared codebase.
We usually have a local dev environment for testing (meaning a webserver running locally) to test uncommitted code (you don't want to commit non-functioning code at all), but that dev environment could even be on a separate server using shared folders.
However, committed code should be deployed to a staging server for testing before putting it into production.
You can probably use Capistrano; even though it is more for Ruby, there are some articles that describe how to use it for PHP.
I think Phing can be used with CVS but not with SVN (at least that's what I last read).
There are also some projects around that mimic Capistrano but are written in PHP.
Otherwise there is also a custom-made solution:
tag the files you want to deploy
check out the files using the tag into a specific directory
symlink that directory to the current document root (easy to roll back to the previous version)
Naturally, check out SVN for the repository, Trac to track things, and Apache Ant to deploy.
The basic process is managing code in Subversion, tracking the repository and developers in Trac, and using Ant deployment scripts to push your site out with the settings needed. Ant allows you to easily deploy a project to a specific location (dev/test/prod, etc.).
You need to look at:
Continuous Integration
Running unit tests on check-in of code to check it is bug free
Potentially rejecting code if it contains a bug
Having nightly builds
Releasing only the last build that was bug free
You may not get to a perfect solution, especially not at first, but the more you use your chosen solution, the more comfortable everyone will get and be able to make suggestions on improving it.
We check stability with Ant every night, and use an Ant script to deploy. It is very easy to configure and use.
I gave a similar answer yesterday to another question. Basically you can work in branches and integrate before going live.
The biggest thing you will have to get your head around is that you are dealing with changes to files, rather than with individual files. Once you have branches there isn't really a current version; there are just versions with different changes in them.
