This might be my mistake somewhere, but anyway: I am using the Fat-Free Framework, which has a built-in function to minify multiple CSS/JS files into a single file, and I thought this would be good for optimization, but it turns out to be the opposite. If I keep the JS files separate (and they are at the end of my HTML), their total size adds up to around 364 KB and they seem to load in parallel within 1.5 seconds. If I try to load the combined version, however, the single file is around 343 KB but takes around 10 seconds to load.
My minifying logic is a bit different though. First in the template I call a function to load the files:
<script type="text/javascript" src="{{ #BM->minify('js','js/',array(
'vendor/jQui/jquery-ui-1.10.4.custom.min.js',
'vendor/datatables/jquery.dataTables.min.js',
'vendor/bootstrap.min.js',
'vendor/smartmenus-0.9.5/jquery.smartmenus.min.js',
'vendor/smartmenus-0.9.5/addons/bootstrap/jquery.smartmenus.bootstrap.min.js',
'vendor/smartmenus-0.9.5/addons/keyboard/jquery.smartmenus.keyboard.min.js',
'plugins.js',
'main.js'
)) }}"></script>
The function sets the appropriate session variables and returns a path.
public function minify($type='', $folderpath='css/', $files=array()) {
    $filepaths = implode(",", $files);
    $this->f3->set('SESSION.UI_'.$type, $this->themeRelFolder().'/'.$folderpath);
    $this->f3->set('SESSION.ReplaceThemePath_'.$type, $this->themeRelFolder());
    $this->f3->set('SESSION.m_'.$type, $filepaths);
    return ($this->f3->get('BASE').'/minify/'.$type);
}
The path maps to a controller which calls the minify method and spits out the actual minified content.
public function index($f3, $params) {
    $f3->set('UI', $f3->get('SESSION.UI_'.$params['type']));
    if ($params['type'] == 'css') {
        echo str_replace("<<themePath>>", "../".$f3->get('SESSION.ReplaceThemePath_'.$params['type'])."/", \Web::instance()->minify($f3->get('SESSION.m_'.$params['type'])));
    } else {
        echo \Web::instance()->minify($f3->get('SESSION.m_'.$params['type']));
    }
}
I did it this way so that I can minify as many files as the template needs, and also maintain correct file paths no matter what the folder nesting structure inside a theme looks like.
What am I doing wrong?
PS: I am testing this on my local WAMP setup, not an actual server, so the load times are obviously different from those on an actual web server.
Seems like the engine is re-minifying every time. I'll bet you just need to set up caching - http://fatfreeframework.com/web#minify :
To get maximum performance, you can enable the F3 system caching and F3 will use it to save/retrieve file(s) to minify and to save the combined output as well. You can have a look at the Cache Engine User Guide for more details.
http://fatfreeframework.com/quick-reference#cache :
Cache backend. F3 can handle Memcache module, APC, WinCache, XCache and a filesystem-based cache.
For example: if you'd like to use the memcache module, a configuration string is required, e.g. $f3->set('CACHE','memcache=localhost') (port 11211 by default) or $f3->set('CACHE','memcache=192.168.72.72:11212').
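For instance, a minimal sketch of enabling it in the bootstrap (the cache-folder path in the last commented line is illustrative):

<?php
$f3 = \Base::instance();

// Let F3 autodetect APC/Memcache/WinCache/XCache and fall back to a file-based cache
$f3->set('CACHE', TRUE);

// ...or name a backend explicitly, e.g.:
// $f3->set('CACHE', 'memcache=localhost');
// $f3->set('CACHE', 'folder=tmp/cache/');
?>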
You're making it minify those files on the fly every single time a page is loaded. This, obviously, takes time.
Consider minifying once, then just linking to that one file.
Related
Wondering how to achieve a css file like this one from css-tricks.com:
http://cdn.css-tricks.com/wp-content/themes/CSS-Tricks-9/style.css?v=9.5
Not sure if he is using php to accomplish this or not. I've been reading countless articles with no luck.
Also, is it something automated that spits out the version number after the .css? Been seeing it around and wondered how to achieve a clean css file.
Any help is appreciated! Thanks.
It's simple enough to use an editor with Search/Replace and strip out all the unnecessary spaces. For instance, when I write CSS I only use spaces to separate keywords - I use newlines and tabs to format it legibly. So I could just replace all tabs and newlines with the empty string and the result is "minified" CSS like the one above.
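The same idea as a minimal PHP sketch (the file names are illustrative, and it assumes the stylesheet only uses plain spaces where whitespace actually matters, as described above):

<?php
// Collapse the formatting whitespace and tighten the space around punctuation.
$css = file_get_contents('style.css');
$css = str_replace(array("\r", "\n", "\t"), '', $css);
$css = preg_replace('/\s*([{};:,])\s*/', '$1', $css);
file_put_contents('style.min.css', $css);
?>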
The version number is a fairly common cache trick. It doesn't affect anything server-side, but the browser sees it as a new file, and caches it as such. This makes it easy to purge the cache of all users when an update is made. Personally, though, I use a PHP function to append "?t=".filemtime($file) (in other words, the timestamp that the file was modified) automatically, which saves me the trouble of manually updating version numbers.
Here is the exact code I use to automatically append modification time to JS and CSS files:
<?php
ob_start(function($data) {
    chdir($_SERVER['DOCUMENT_ROOT']);
    return preg_replace_callback(
        "(/(?:js|css)/.*?\.(?:js|css))",
        // all the relevant files are in /js and /css folders of the root
        function($m) {
            if (file_exists(substr($m[0], 1)))
                return $m[0]."?t=".filemtime(substr($m[0], 1));
            else
                return $m[0];
        },
        $data
    );
});
?>
I would avoid doing it manually because you may corrupt your CSS.
There are good tools available which will solve such problems for you without any trickery.
An excellent solution is Assetic, which is an asset manager and allows you to filter (minify, compress) using various tools (YUI Compressor, Google Closure, etc.).
It is currently bundled by default with Symfony2 but may be used standalone in any PHP project.
I've successfully implemented it in a Zend Framework project.
From a tutorial I read on Sitepoint, I learned that I could load JS files through PHP (it was a comment, anyway). The code for this was in this form:
<script src="js.php?script1=jquery.js&scipt2=main.js" />
The purpose of using PHP was to reduce the number of HTTP requests for JS files. But from the markup above, it seems to me that there are still going to be the same number of requests as if I had written two tags for the JS files (I could be wrong, that's why I'm asking).
The question is how is the PHP code supposed to be written and what is/are the advantage(s) of this approach over the 'normal' method?
The original poster was presumably meaning that
<script src="js.php?script1=jquery.js&scipt2=main.js" />
Will cause less http requests than
<script src="jquery.js" />
<script src="main.js" />
That is because js.php will read all script names from the GET parameters and then print them out as a single file. This means that there's only one round trip to the server to get all scripts.
js.php would probably be implemented like this:
<?php
header('Content-Type: text/javascript');

$script1 = $_GET['script1'];
$script2 = $_GET['script2'];

// NOTE: validate these parameters in real code so arbitrary files can't be read (see below)
echo file_get_contents($script1); // Load the content of jquery.js and print it to browser
echo file_get_contents($script2); // Load the content of main.js and print it to browser
Note that this may not be an optimal solution if only a small number of scripts is required. The main issue it works around is that web browsers do not load an unlimited number of scripts in parallel from the same domain.
You will need to implement caching to avoid loading and concatenating all your scripts on every request. Loading and combining all scripts on every request will eat a lot of CPU.
IMO, the best way to do this is to combine and minify all script files into a big one before deploying your website, and then reference that file. This way, the client just makes one roundtrip to the server, and the server does not have any extra load upon each request.
Please note that the PHP solution provided is by no means a good approach, it's just a simple demonstration of the procedure.
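A minimal sketch of that pre-deployment step (the file names are illustrative; run it once as part of your build, not on every request):

<?php
// build_js.php - concatenate the source scripts into one bundle before deploying.
$files = array('js/jquery.js', 'js/plugins.js', 'js/main.js');
$bundle = '';
foreach ($files as $file) {
    $bundle .= file_get_contents($file) . ";\n"; // the ";" guards against scripts missing a trailing semicolon
}
// Run your minifier of choice over $bundle here (JSMin, Closure Compiler, ...).
file_put_contents('js/all.min.js', $bundle);
echo "Wrote js/all.min.js\n";
?>

Then the page just references js/all.min.js with a single script tag.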
The main advantage of this approach is that there is only a single request between the browser and server.
Once the server receives the request, the PHP script combines the javascript files and spits the results out.
Building a PHP script that simply combines JS files is not at all difficult. You simply include the JS files and send the appropriate content-type header.
It gets more difficult when you start worrying about caching.
I recommend you check out minify.
<script src="js.php?script1=jquery.js&scipt2=main.js" />
That's:
invalid (ampersands have to be encoded)
hard to expand (using script[]= would make PHP treat it as an array you can loop over)
not HTML compatible (always use <script></script>, never <script />)
The purpose of using PHP was to reduce the number of HTTP requests for JS files. But from the markup above, it seems to me that there are still going to be the same number of requests as if I had written two tags for the JS files (I could be wrong, that's why I'm asking).
You're wrong. The browser makes a single request. The server makes a single response. It just digs around in multiple files to construct it.
The question is how is the PHP code supposed to be written
The steps are listed in this answer
and what is/are the advantage(s) of this approach over the 'normal' method?
You get a single request and response, so you avoid the overhead of making multiple HTTP requests.
You lose the benefits of the generally sane cache control headers that servers send for static files, so you have to set up suitable headers in your script.
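A minimal sketch of sending those headers from the combining script (the file list and cache lifetime are illustrative):

<?php
$files = array('js/jquery.js', 'js/main.js');
$mtime = max(array_map('filemtime', $files));
$etag  = '"' . md5(implode('|', $files) . $mtime) . '"';

header('Content-Type: text/javascript');
header('Cache-Control: public, max-age=86400');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
header('ETag: ' . $etag);

// Answer 304 if the browser already holds this exact version.
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

foreach ($files as $file) {
    readfile($file);
    echo "\n";
}
?>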
You can do it like this. The concept is quite simple, but you can make it a bit more advanced:
Step 1: merging the file
<?php
header('Content-Type: text/javascript');

$pathto = "/path/to/js"; // directory that holds the source scripts
$scripts = $_GET['script'];
$contents = "";
foreach ($scripts as $script)
{
    // validate the $script here to prevent inclusion of arbitrary files
    $contents .= file_get_contents($pathto . "/" . $script);
}

// post processing here
// eg. jsmin, google closure, etc.

echo $contents;
?>
usage:
<script src="js.php?script[]=jquery.js&script[]=otherfile.js" type="text/javascript"></script>
Step 2: caching
<?php
function cacheScripts($scriptsArray, $outputdir)
{
    $filename = sha1(join("-", $scriptsArray)) . ".js";
    $path = $outputdir . "/" . $filename;
    if (file_exists($path))
    {
        return $filename;
    }

    $pathto = "/path/to/js"; // directory that holds the source scripts
    $contents = "";
    foreach ($scriptsArray as $script)
    {
        // validate the $script here to prevent inclusion of arbitrary files
        $contents .= file_get_contents($pathto . "/" . $script);
    }

    // post processing here
    // eg. jsmin, google closure, etc.

    file_put_contents($path, $contents);
    return $filename;
}
?>
<script src="/js/<?php echo cacheScripts(array('jquery.js', 'myscript.js'),"/path/to/js/dir"); ?>" type="text/javascript"></script>
This makes it a bit more advanced. Please note, this is semi-pseudo code to explain the concepts. In practice you will need to do more error checking and you need to do some cache invalidation.
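For example, the "validate the $script here" step could be a minimal whitelist check like this (the directory and whitelist entries are illustrative):

<?php
function isAllowedScript($script, $jsDir)
{
    $whitelist = array('jquery.js', 'otherfile.js', 'myscript.js');
    if (!in_array($script, $whitelist, true)) {
        return false;
    }
    // Belt and braces: the resolved path must stay inside the JS directory.
    $real = realpath($jsDir . "/" . $script);
    return $real !== false && strpos($real, realpath($jsDir)) === 0;
}
?>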
To do this in a more managed and automated way, there's Assetic (if you can use PHP 5.3):
https://github.com/kriswallsmith/assetic
(Which more or less does this, but much better)
Assetic
Documentation
https://github.com/kriswallsmith/assetic/blob/master/README.md
The workflow will be something along the lines of this:
use Assetic\Asset\AssetCollection;
use Assetic\Asset\FileAsset;
use Assetic\Asset\GlobAsset;

$js = new AssetCollection(array(
    new GlobAsset('/path/to/js/*'),
    new FileAsset('/path/to/another.js'),
));

// the code is merged when the asset is dumped
echo $js->dump();
There is a lot of support for many formats:
js
css
lots of minifiers and optimizers (css, js, png, etc.)
Support for sass, http://sass-lang.com/
Explaining everything is a bit outside the scope of this question. But feel free to open a new question!
PHP will simply concatenate the two script files and send only one script with the contents of both files, so you will only have one request to the server.
Using this method, there will still be the same number of disk IO requests as if you had not used the PHP method. However, in the case of a web application, disk IO on the server is rarely the bottleneck; the network is. What this allows you to do is reduce the overhead associated with requesting the files from the server over the network via HTTP (i.e. reduce the number of messages sent over the network). The PHP script outputs the concatenation of all of the requested files, so you get all of your scripts in one HTTP request operation rather than several.
Looking at the parameters it's passing to js.php it can load two javascript files (or any number for that matter) in one request. It would just look at its parameters (script1, script2, scriptN) and load them all in one go as opposed to loading them one by one with your normal script directive.
The PHP file could also do other things, like minifying before outputting, although it's probably not a good idea to minify on every request on the fly.
The way the PHP code would be written is: it looks at the script parameters and just loads the files from a given directory. However, it's important to note that you should check the file type and/or location before loading. You don't want to give people a backdoor through which they can read every file on your server.
What are some popular ways (and their caveats) of handling dependency path changes in scripted languages that will need to occur when deploying to a different directory structure?
Let the build tool do a textual replace? (such as an Ant replace?)
Say, for example, the src path in an HTML tag would need to change, maybe also an AJAX path in a JavaScript file and an include in a PHP file.
Update
I've pretty much sorted out what I need.
The only problem left is the URL that gets posted to via AJAX. At the moment this URL is in a JS config file amongst many other parameters. It is however the only parameter that changes between development, QA and production.
Unfortunately dynamically generated URLs as #prodigitalson suggested aren't desirable here--I have to support multiple server side technologies and it would get messy.
At the moment I'm leaning towards putting the URL parameter into its own JS file, deploying a different version of it for each of development, QA and production, additionally concatenating to the main config file when deployed.
IMO if you need to do this then you've developed it the wrong way. All of this should be managed in configuration file(s). On the PHP side you should have some kind of helper that outputs the path based on the config value... for example:
<?php echo image_tag('myimage.jpg', array('alt'=>'My Image')); ?>
<?php echo javascript_include('myscript.js'); ?>
In both these cases the function would look up the path to the javascript and image directories based on deployment.
For links you should be able to generate a link based on the local install of the application and a set of parameters, like:
<?php echo link_to('/user/profile', array('user_id' => 7)); // output an <a> tag ?>
<?php echo url('/user/profile', array('user_id'=>7)); // jsut get the url ?>
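A minimal sketch of what such helpers could look like behind the scenes (the config file and keys are illustrative, not from a specific framework):

<?php
// config/paths.ini differs per deployment, e.g.:
//   js_path  = "/assets/js"
//   img_path = "/assets/img"
$GLOBALS['paths'] = parse_ini_file('config/paths.ini');

function javascript_include($file)
{
    return '<script type="text/javascript" src="'
        . $GLOBALS['paths']['js_path'] . '/' . $file . '"></script>';
}

function image_tag($file, $attrs = array())
{
    $alt = isset($attrs['alt']) ? $attrs['alt'] : '';
    return '<img src="' . $GLOBALS['paths']['img_path'] . '/' . $file . '" alt="' . $alt . '" />';
}
?>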
As far as JavaScript goes, you shouldn't have any paths hardcoded in a JS file that need to be changed. You should make your AJAX calls, or anything else that is path dependent, accept parameters, and then send those parameters from the view, where you have the ability to use the same scripted configuration. So you might have something like:
<script type="text/javascript">
    siteNamespace.callAjax(
        '<?php echo url('/user/profile/like', array('user_id' => 7)); ?>',
        {'other': 'option'}
    );
</script>
This way you can change all this in a central location based on any number of variables. Most MVC frameworks are going to do something like this though the code will look a bit different and the configuration options will vary.
I would take a look at Zend Framework MVC - specifically the Config, Router, and View Helpers - or Symfony 1.4 and its Config, Routing, and Asset and Url Helpers for example implementations.
Is it possible with PHP (5) or other Linux tools on an Apache/Debian web server to log the file requests that a single HTTP request triggers?
For performance reasons I would like to compare them with the cached "version" of my Cake app.
Non-cached, that might be over 100 in some views.
Cached, only up to 10 (hopefully).
AFAIK there are 3 main file functions:
file_get_contents(), file() and the manual fopen() etc.
But I cannot override them, so I cannot place a logRequest() function in them.
Is there any other way? Attaching callbacks to functions? Intercepting the file requests?
This suggestion may not seem intuitive, but you can take a look at Xdebug's function trace.
Once you have Xdebug installed and enabled, you can use all sorts of configuration options to save the profiling output to a disk file and retrieve it later, such as saving the profiling results for different URLs into different files.
To monitor filesystem-related functions, you can then parse the text file (the profiling results) and count the matching functions (programmatically or manually).
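As a minimal sketch (assuming the Xdebug extension is loaded and tracing is available; in Xdebug 3 the trace mode must be enabled), you could trace a single request and grep the trace for the file functions:

<?php
xdebug_start_trace('/tmp/xdebug_request'); // writes /tmp/xdebug_request.xt

// ... dispatch the CakePHP request here ...

xdebug_stop_trace();

$trace = file_get_contents('/tmp/xdebug_request.xt');
foreach (array('fopen', 'file_get_contents', 'file') as $fn) {
    echo $fn . ': ' . preg_match_all('/\b' . $fn . '\(/', $trace, $m) . "\n";
}
?>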
The way I would do it would be to create a custom function that wraps around the one you need.
function custom_file_get_contents($filename) {
    $GLOBALS['file_get_contents_count']++;
    return file_get_contents($filename);
}
And just replace all of your calls to file_get_contents with custom_file_get_contents. This is just a rudimentary example, but you get the idea.
Side note, if you want to count how many files your script has included (or required), see get_included_files()
You can use Xdebug to log all function calls
Those so-called "function traces" can be a help for when you are new to an application or when you are trying to figure out what exactly is going on when your application is running. The function traces can optionally also show the values of variables passed to the functions and methods, and also return values. In the default traces those two elements are not available.
http://www.xdebug.org/docs/execution_trace
Interesting question. I'd start with the stream wrappers ... try to replace them with custom classes.
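A minimal, read-only sketch of that idea (the class name is illustrative; a complete wrapper would also need url_stat(), stream_write(), stream_seek() and friends):

<?php
class LoggingFileStream
{
    public $context;
    private $handle;

    public function stream_open($path, $mode, $options, &$opened_path)
    {
        error_log('file access: ' . $path);  // this is where logRequest() would go
        stream_wrapper_restore('file');      // use the real wrapper to open the file
        $this->handle = fopen($path, $mode);
        stream_wrapper_unregister('file');   // then swap ours back in
        stream_wrapper_register('file', __CLASS__);
        return $this->handle !== false;
    }

    public function stream_read($count) { return fread($this->handle, $count); }
    public function stream_eof()        { return feof($this->handle); }
    public function stream_stat()       { return fstat($this->handle); }
    public function stream_close()      { fclose($this->handle); }
}

stream_wrapper_unregister('file');
stream_wrapper_register('file', 'LoggingFileStream');
?>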
There is a PECL extension called APD (Advanced PHP Debugger) that has two functions that would be very useful for a case like this - override_function() and rename_function(). You could do something like this:
rename_function('file_get_contents', 'file_get_contents_orig');
function file_get_contents($filename) {
    logRequest();
    return file_get_contents_orig($filename);
}
It looks like APD is pretty easy to install, too.
See http://www.php.net/manual/en/book.apd.php
I occasionally come across pages where some Javascript is included via a PHP file:
<html>
<head>
<script type="text/javascript" src="fake_js.php"></script>
</head>
<body onload="handleLoad();">
</body>
</html>
where the contents of fake_js.php might look something like this:
<?php header('Content-type: text/javascript') ?>
function handleLoad() {
alert('I loaded');
}
What are the advantages (or disadvantages) to including Javascript like this?
It makes it easy to set javascript variables from the server side.
var foo = <?=$foo?>
I usually have one php/javascript file in my projects that I use to define any variables that need to be used in javascript. That way I can access constants used on the server side (CSS colors, non-sensitive site properties, etc.) easily in javascript.
Edit: For example here's a copy of my config.js.php file from the project I'm currently working on.
<?php
require_once "libs/config.php";
if (!function_exists("json_encode")) {
    require_once "libs/JSON.php";
}
header("Content-type: text/javascript");
echo "var COLORS = ". json_encode($CSS_COLORS) .";\n";
echo "var DEBUG = ". ((DEBUG == true) ? "true" : "false").";";
?>
If you don't need it, don't use it:
The first thing you need to keep in mind is YAGNI. You Ain't Gonna Need It. Until a certain feature, principle, or guideline becomes useful and relevant, don't use it.
Disadvantages:
Added complexity
Slower than static files.
Caching problems (server side)
Scalability issues (load balancers offload static files from the heavy PHP/Apache etc processes)
Advantages:
User specific javascript - Can be achieved by initializing with the right variables / parameters in the <head> </head> section of the HTML
Page specific javascript - JS could also be generalized to use parameters
JSON created from database (usually requested via AJAX)
Unless the JavaScript is truly unique (i.e. JSON, parameters/variables) you don't gain much. But in every case you should minimize the amount of JS generated on the server side and maximize the amount of code in the static files. Don't forget that if it's dynamic, it has to be generated/downloaded again and again, so it's not desirable for it to be a heavy process.
Also:
This could also be used to minimize the amount of server configuration (for example if the web server doesn't serve file.js with the correct content type)
There's no benefit for the example you gave above (beyond peculiar deployment scenarios where you have access to .php files and not .js files, which would be insane but not unheard of).
That said, this approach allows you to pass the JS through the php parser - which means you can generate your JS dynamically based on server variables.
Agree with tj111. Apart from what tj mentioned, I also found PHP-generated JavaScript a great weapon to fight the browser's caching tricks. Not that long ago I was cursing JavaScript for being constantly cached by the browser. Refreshing the page didn't help; I had to clear the whole cache in order to force the browser to reload the JavaScript files. As soon as I built a PHP wall in front of my JavaScript:
fake_js.php:
<?php
header('Content-type: text/javascript');
include('the_real_javascript.js');
?>
A fresh copy of the JavaScript would always show up on the client side. However, this approach is obviously only good in the development phase, when it can save the developer quite some headache by making sure the correct JavaScript is loaded in the browser. Of course, when connecting to localhost, the penalty of repeatedly loading the same file is not as big.
In a live web application/site client-side caching is welcome to reduce network traffic and overall server load.
Advantage (not PHP specific - I used this technique in EmbPerl and JSP) would be the ability to dynamically generate or tweak/customize the JavaScript code on the server side.
An example usage would be population of an array based on the contents of a DB table.
Or application of localization techniques.
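For instance, a minimal sketch of the database-driven case (the DSN, credentials and table are illustrative):

<?php
header('Content-type: text/javascript');

// Pull the rows once on the server and hand them to the client as a JS array.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $pdo->query('SELECT code, label FROM countries')->fetchAll(PDO::FETCH_ASSOC);

echo 'var COUNTRIES = ' . json_encode($rows) . ';';
?>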
If you don't have full server access and can't turn on gzip encoding then it's pretty useful to put the following in your javascript file (note: will need to be renamed to file.js.php or parsed as PHP through .htaccess directive):
<?php
ob_start( 'ob_gzhandler' );
header("Content-type: text/javascript");
?>
// put all your regular javascript below...
You could also use it for better cache control, visitor tracking, etc in lieu of server-controlled solutions.
Absolutely none, IMHO. I use a JS framework that I wrote to handle the setting of whatever server-side variables I need to have access to. It is essentially the same as embedding PHP in JavaScript, but much less ambiguous. Using this method also lets you completely separate server-side logic and HTML from the JavaScript. This results in much cleaner, better organized and more loosely coupled modular code.
You could do something like this in your html:
<script type="text/javascript">
    registry = {
        myString : '<?php echo $somePhpString; ?>',
        myInt : <?php echo $somePhpInteger; ?>
    }
</script>
And then do something like this in your js:
if (registry.myInt === 1) {
    alert(registry.myString);
}