A simple and quick question:
I need the jquery library for a site, so I access it in my header.php file as such:
<head>
<!-- include jQuery library -->
<script type="text/javascript" src="../javascript/jquery.js"></script>
</head>
Now my question is: should I also put some PHP logic in there so that jQuery is only included on pages that actually need it? My gut tells me I'm right to load jQuery only when I need it, yet I constantly see libraries like this loaded with no thought as to whether they're actually needed in the given situation. So perhaps there is no harm in always having it there.
So should it be something like this? Pseudocode:
<head>
<?php
// $jquery is set to true or false in configure.php, based on the situation
if ($jquery) {
    // include the jQuery script!
    echo '<script type="text/javascript" src="../javascript/jquery.js"></script>';
}
// else: jQuery? We don't need that here, so output nothing.
?>
</head>
It's generally better to load only what you need for a page, but if you do include jQuery everywhere, make sure:
1) It's minified (to reduce overhead).
2) You serve from a CDN (e.g. Google's). The likelihood of the user having a cached version (and thus a faster page load) will be much higher. See:
http://encosia.com/3-reasons-why-you-should-let-google-host-jquery-for-you/
It's not that big a deal.
Keep in mind that (most) people's browsers will cache files, at least for a certain duration. If, say, your homepage doesn't require jQuery but the about page does, people with slower internet connections will suffer a longer page load on your about page.
Even so, you should absolutely try to use minified versions of jQuery to reduce the amount of data you have to serve, and your users have to download. Even better is to use a CDN like Google's, as they're optimised for distribution to users, and also because with it in use widely, people are more likely to have it already cached on their system. HTML5 Boilerplate has a nice way to include jQuery from the Google CDN, and use the copy on your server as a fallback:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.js"></script>
<script>window.jQuery || document.write('<script src="path/to/jquery-1.5.1.min.js">\x3C/script>')</script>
The issue of longer page loads can also be somewhat offset by placing your jQuery <script> tag just before the closing </body> tag, as scripts and stylesheets block rendering until they're fully downloaded (in the majority of browsers). The only issue with doing that is that anything depending on jQuery for site interaction of course won't respond until it has loaded.
That said, if you absolutely want to conditionally include jQuery, the code snippet you provided will work fine.
Related
My main goal is to make the loading of several pages as fast as possible. For this I want to take advantage of both the cache and one "special technique" that, as a fallback, relies on standard caching.
Structure
On the backend I have the following structure. There's a main page in the public_html and several subpages, each with specific css rules different from each other. The creation of all the minimized files is done by a script, so no extra complexity there. For simplicity, let's assume that this is the structure, although it's more complex:
/public_html
/index.php
/style.css ~50kb
/min.css ~100kb
/subjects
/index.php
/style.css ~20kb
/min.css ~10kb
/books
/index.php
/style.css ~20kb
/min.css ~10kb
...
First request
So when the user visits a subpage for the first time, they will receive this HTML:
<!DOCTYPE html>
<html>
<head>
<link href="/subjects/min.css" rel="stylesheet" type="text/css">
</head>
<body>
All the body here
<link href="/min.css" rel="stylesheet" type="text/css">
</body>
</html>
As you can see, the user loads all the CSS needed for that page in the header, in a small file. Note that /subjects/min.css is much smaller than /min.css, which makes this first request load faster. Then, after the full HTML and CSS have loaded correctly, /min.css starts loading. This file contains the styles for all the subpages.
Note that it's appropriate to put the <link> within the <body> tag, and even if it didn't work, there's no problem since the page-specific style is already loaded. Why am I loading this here? Keep reading:
Following requests
For the second and subsequent requests on that session, the user will receive this html code:
<!DOCTYPE html>
<html>
<head>
<link href="/min.css" rel="stylesheet" type="text/css">
</head>
<body>
All the body here
</body>
</html>
The /min.css should already be cached from the first request. However, if for any reason it's not, the full minimized stylesheet will simply load now, as on any normal website. This is the fallback case.
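The choice between the two variants above could be sketched as follows. This is only an illustration: the first-visit flag would really be tracked server-side (e.g. in the PHP session), and the paths mirror the example structure.

```javascript
// Sketch of the two-variant scheme described above. `firstVisit` would be
// tracked server-side (e.g. in the PHP session); the paths are illustrative.
function stylesheetLinks(subpagePath, firstVisit) {
  if (firstVisit) {
    // First request: small page-specific CSS in <head>,
    // full site CSS pre-loaded at the end of <body>.
    return {
      head: '<link href="' + subpagePath + '/min.css" rel="stylesheet" type="text/css">',
      bodyEnd: '<link href="/min.css" rel="stylesheet" type="text/css">'
    };
  }
  // Subsequent requests: the full CSS, which should already be cached.
  return {
    head: '<link href="/min.css" rel="stylesheet" type="text/css">',
    bodyEnd: ''
  };
}
```

The server would emit `head` inside the `<head>` tag and `bodyEnd` (if non-empty) just before `</body>`.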
Is this a valid scheme? Why haven't I seen anything like this before? Does it contain any logic error?
These are the main problems I can see, not strong enough in comparison to the benefits:
It adds some extra complexity to the code.
An extra request needs to be made after everything else has loaded. This adds a slight overhead on the server, though it's for a static file.
Notes about the comments:
"The browser will make fewer requests." True; this way the browser does make one extra request. However, it happens after the HTML and CSS have loaded, so it will not greatly affect the page.
Cache. Yes, I'm doing my best to cache the file. A point could be made against caching of a <link> placed inside the <body>, though; I don't know whether it behaves differently with respect to the cache, I simply assumed in the question that it doesn't.
UPDATE:
Please mind that the answer which the questioner marked as accepted cannot be recommended -
don't ever do this!
Any kind of "pre-loading" of CSS files doesn't make any sense, as you should never split up your CSS into several files!
My original answer:
So what is your real question in the end?
In my humble opinion you're doing it all wrong - sorry!
Usually an author intends to
give a site a consistent look/ appearance
keep the maintainability as easy as possible
avoid FOUC (flash of unstyled content)
minimize number of (HTTP) requests
support cache mechanism to reduce bandwidth/ data volume
just to mention some of the most important aspects.
All of them are disregarded by your approach.
As you are using the link element within the body element, I assume you are using HTML5, because in other HTML versions this would be invalid.
But even in HTML5 I would not rely on this. Have a look at the two versions:
http://www.w3.org/html/wg/drafts/html/master/document-metadata.html#the-link-element
http://www.w3.org/html/wg/drafts/html/CR/document-metadata.html#the-link-element
Compare the section (at the top) "Contexts in which this element can be used:".
As the information from the CSS is most needed by the browser to render a page, it should be one of the first things loaded.
Have a look at the article:"How Browsers Work: Behind the scenes of modern web browsers" and especially at the section:"Rendering engines".
So loading another stylesheet will force the browser to redo all that work, besides the additional HTTP request, which particularly on GSM connections may cause "trouble" because of the greater latency.
And if each page of your site really has such an amount of individual style rules then I would say it is a "design flaw".
One of the "design principles" is: As much as necessary - as little as possible!
Another (big) advantage of using just one style sheet is that it is cached by the browser after the first load. And as the CSS of a site normally doesn't change too often this is a great advantage which by far outweighs the disadvantage of some more KB to load on first page visit (btw independent of the entry/ landing page)!
Conclusion:
I really cannot recommend using your approach!
Put all your styles (normalize, basic, media queries, print) in one single file which you load via <link> in the <head> of your document.
That's the best you can do.
Yes, what you are doing is perfectly valid and common
CSS is perhaps a bad example, but the same principle applies (load the later resource in via Ajax, by the way).
Take images, for example.
Say we're on page 1 of our website and we know that 99.999% of the time our visitors will click through to page 2, and that page 2 has some large images to serve. We may then load those images silently after page 1 has loaded - getting them ready, so the site 'feels' fast as visitors navigate. It's a common trick in mobile web applications/sites.
So yes:
It is the same principle for ANY type of file that you may want to 'pre cache' for subsequent requests.
Load the page
while the visitor is 'reading' the loaded page, pre-fetch the files/data you expect they may request next (images, page 2 of result data, JavaScript, CSS). These are loaded via Ajax so as not to hold up the page 'onload' event firing - a key difference from your example
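The pre-fetch step can be sketched like this. The image factory is injectable so the helper can be exercised outside a browser; the URLs are illustrative, not from the original post.

```javascript
// A sketch of pre-emptive loading: request assets the visitor is likely
// to need next so they land in the browser cache. `createImage` is
// injectable so the helper can run outside a browser for testing.
function prefetch(urls, createImage) {
  return urls.map(function (url) {
    var img = createImage();
    img.src = url; // setting src issues the request and warms the cache
    return img;
  });
}

// Browser wiring - run after the 'load' event, so it never delays it:
// window.addEventListener('load', function () {
//   prefetch(['/images/page2-hero.jpg', '/js/page2.js'],
//            function () { return new Image(); });
// });
```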
However, to answer your goal - to allow the loading of the pages to be as fast as possible:
Doing this, or any kind of 'pre-emptive loading' technique, contributes little to 'speed of delivery' if you are not serving static files from a static server, a cookieless domain, and ultimately a content delivery network.
Achieving the goal of loading pages as fast as possible comes from serving static files differently from your dynamic content (PHP-rendered et al.):
1) Create a subdomain for these resources ( css, js, images/media ) - static.yourdomain.com
2) Turn off cookies for this subdomain and tune its cache headers specifically for static content.
3) Look into using a service like http://cdnify.com/ or www.akamai.com.
These are the performance and speed steps for serving static content. (Apologies if this is teaching you to suck eggs - it's directly related to the question, for anyone unfamiliar with it.)
The 'pre-emptive loading' techniques are still great,
but they are now more related to pre-loading data for usability than to raw speed.
Edit/Update:
To clarify 'speed' and 'usability speed'.
Speed is often judged by software as the moment the page 'onload' event fires (that is why it's important to load these pre-emptive resources via Ajax).
Perceived speed (usability) is how quickly a user can see and interact with the content (even though the page 'onload' event may not have fired yet).
Edit/update
In a few areas of the post and in the comments was mentioned the loading of these additional 'pre emptive' resources via javascript/ajax.
The reason is to not delay the page 'onload' event firing.
Many website test speed tools ( yslow, google .. ) use this 'onload' event to judge page speed.
Here we delay the page 'onload' event.
<body>
... page content
<link rel="stylesheet" href="/nextpage.css" />
</body>
Here we load via JavaScript (in some cases Ajax, for page data) without preventing the page 'onload' event:
<body>
.. page content
<script>
window.onload = function () {
var style = document.createElement( 'link' );
style.rel = 'stylesheet';
style.type = 'text/css';
style.href = '/nextpage.css';
document.getElementsByTagName( 'head' )[0].appendChild( style );
};
</script>
( this, as a bonus, also gets around the compatibility problems with having a <link> tag within the <body> as discussed in your other threads )
Since min.css contains all styles properly minimized, just use that
Why ?
1. The browser will make fewer requests.
2. The file will be cached by the browser after being fetched two or three times. Tremendous decrease in page load time!
3. The browser doesn't have to go through each specific page's CSS, which in turn decreases the time needed for a page to render.
4. Easy maintainability of code. If you want to update the CSS, just append a query-string version variable so the browser fetches the updated file.
I think the above reasons are enough for you to use just min.css.
Also, don't forget to set a really long cache expiry date if you do as I've recommended.
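The "query variable" trick from point 4 can be implemented with a small helper like this (the version scheme is illustrative; any value that changes on each deploy works):

```javascript
// Cache-busting for a long-expiry stylesheet: append a version parameter
// so the cached copy is bypassed only when the version changes.
function versionedHref(href, version) {
  // Use '&' if the URL already carries a query string, '?' otherwise.
  var sep = href.indexOf('?') === -1 ? '?' : '&';
  return href + sep + 'v=' + encodeURIComponent(version);
}

// e.g. emit:
// '<link href="' + versionedHref('/min.css', '20130601') + '" rel="stylesheet">'
// → <link href="/min.css?v=20130601" rel="stylesheet">
```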
Edit:
As the OP didn't understand point 2, I'm going to make the point clear.
The browser will not necessarily cache the CSS file on its first encounter; it effectively thinks: 'Hey, let's not cache this immediately. What if it changes? I'll wait until the same CSS has been reloaded at least a couple of times before reaping the benefit of caching.'
If the browser cached everything on first load, there would be a huge amount of cache on the user's system. So browsers are clever enough to prioritize caching files that are frequently loaded and unchanged.
What you're describing is a pre-fetch/lazy-load pattern with resources loaded in anticipation of becoming relevant in the future - for instance, a basic login page with minimal styling that starts loading site css in the background.
This has been done before, among other things, in PageSpeed Module. In fact, it's more aggressive yet requires less development effort! A vanilla landing page (like a login screen) utilizing only a small subset of styles could take advantage of prioritize_critical_css that inlines relevant rules into the html and loads css at the bottom of the page! Unlike your original scenario where two consecutive requests have to be performed, the render-blocking effects of not having stylesheet in the head are being offset. This improvement is well-perceived by first-time visitors using mobile devices, who are subject to higher network latency and smaller number of simultaneous http requests allowed.
A natural progression of this would be to lazy-load sprites, webfonts and other static cacheable content. However, I'm inclined to speculate that the benefits of having well-structured separate css are probably superficial, and you would generally do well with minifying styles into one file. The difference in loading time between a 5 and a 50 kilobyte file is not tenfold, it's negligible, since website performance does not depend on bandwidth anymore. As a side note, you'll never have to worry about dependency management (i.e. remembering to include rules relevant to specific elements on your page), which is not easily automated for html+css and gets quite hairy for big projects.
If you focus on the cardinal rule of static resources - aggressive caching - and remember to fingerprint your assets so that deployments don't get messy, you're doing great! And if you address perceived performance with a well-placed throbber here and there...
between this
<script src="js/script.js"></script>
and that
<?php
echo '<script>';
include 'js/script.js';
echo '</script>';
?>
Which is better?
I'm actually wondering about things like HTTP requests and other stuff...
(The same goes for CSS styles: should I put everything in one file and send it to the user, reducing the number of requests, or should I separate things properly like everyone else does, increasing the number of requests?)
Is there anything else I should be concerned about?
OK, it took me a second to figure out what you were asking. In your first choice you output a script tag that links to your JavaScript; in the second you use PHP to include your JavaScript inline.
Of the two choices, the first is by far the best. Assuming your page content is dynamic, the inline version re-sends the same JavaScript with every page a person downloads from you, since it can't be cached separately. If your JavaScript is 100 kB in size, every page is now an extra 100 kB. Over time this adds up for both your server and your clients.
Including your JavaScript (and CSS) via links allows the browser to cache them and only fetch what is necessary - which in most cases is just the HTML page - similarly reducing the number of requests.
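The cost difference above is easy to put numbers on. A quick back-of-the-envelope helper (sizes and view counts are illustrative):

```javascript
// Inline scripts ride along with every page; linked scripts are (ideally)
// fetched once and then served from the browser cache.
function scriptKbTransferred(scriptKb, pageViews, inline) {
  return inline ? scriptKb * pageViews : scriptKb;
}

// A 100 kB script over 10 page views:
//   inline: scriptKbTransferred(100, 10, true)  -> 1000 kB
//   linked: scriptKbTransferred(100, 10, false) -> 100 kB
```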
edit: What if the script is used on only one page?
Still include the JavaScript via a link rather than inline. Even if your site is 100% static, it's not one page but many, and each request would otherwise get new output with the same replicated JavaScript. Even if your page is pure static HTML, include it via a link, as you never know when you might want to reuse the JavaScript (or CSS) code.
I would consider something like this
<script src="js/js.php"></script>
where js.php includes all the needed JS files. I assume this resolves the caching issue, plus you can make things dynamic by passing GET values, I guess.
By the way, I find it better to use the PHP open and close tags inside HTML whenever possible:
<script src="<?php echo $var ?>" ></script>
As I commented before, <script src="js/script.js"></script>.
This goes in your <head> and it will be executed before anything in your <body>.
Since you are building your front end via JavaScript, PHP functionality will come after everything has been built by JS.
Well, in my view the latter approach is better. If you are designing a PHP page it is always better to write everything in PHP, with the HTML side done inside echo or print statements; you can read a lot about it on w3schools.com. Hope my answer solved your problem.
Is there a line of code in HTML that does the same thing as PHP's require_once? I'm just curious because there are some blocks of code I want to duplicate through multiple sheets without having to type them out on each page.
I know I can do it via PHP, but is there an HTML variant? Is there such a beast, or am I barking up the wrong tree?
That depends on what you want to include. Including a PHP file is not possible; if you want to include a CSS stylesheet, use:
<link rel="stylesheet" type="text/css" href="yourstylefile.css" />
and for a Javascript file
<script type="text/javascript" src="yourscriptfile.js"></script>
Of course you have to put that code between the <head> tags.
No, there is no include mechanism in HTML. Unless you count SSI.
Edit: wait, "sheets"? You mean CSS?
Yeah, SSI is the closest you're going to get. However, there are many non-server-side ways to get around this. Several web development applications have html templating systems that replicate server-side includes on the development side. For example, dreamweaver allows you to insert repeatable regions into HTML templates. When you modify the "included" file, Dreamweaver will modify all HTML files that use that block. As this is not a true include, but rather an HTML-updating system, you do have to then re-upload these files if you use a remote server, but if you have to stick to plain HTML it can make projects much more manageable and is much better than using iframes.
Lastly, there is also the option of having JavaScript build a repeating block of code. You can simply include a common JavaScript library on every page - <script type="text/javascript" src="templater.js"></script> - and have it build the element on the client's side (either with an innerHTML call or by inserting elements into the DOM). This has the obvious downsides that:
It requires JavaScript to work
It might mess with SEO
It may slow down page loads (from the client's side, anyhow)
Using a proper include in a server side language is of course the best approach, but in a pinch these are both potential alternatives.
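The JavaScript-templating workaround can be sketched like this. The function name, link targets, and placeholder id are illustrative, not part of any real library:

```javascript
// A shared script (e.g. templater.js) builds one common block of markup
// that every page injects, approximating a server-side include.
function sharedNavHtml(links) {
  return '<nav>' + links.map(function (l) {
    return '<a href="' + l.href + '">' + l.text + '</a>';
  }).join(' ') + '</nav>';
}

// In the browser, each page would then do something like:
// document.getElementById('nav-placeholder').innerHTML =
//   sharedNavHtml([{ href: '/', text: 'Home' },
//                  { href: '/about.html', text: 'About' }]);
```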
Technically you can create an iframe on your page that loads and handles a separate page, but it does not function like include or require_once. Beyond that, I know of no alternatives.
Does going from
<script type="text/javascript" src="jquery.js"></script>
to
<script type="text/javascript">
<?php echo file_get_contents('jquery.js'); ?>
</script>
really speed things up?
I am thinking it does because php can fetch and embed a file's contents faster than the client's browser can make a full request for the file, because php isn't going over the network.
Is the main difference that the traditional method can be cached?
It may be faster on the first page load, but on every subsequent load it will be much slower. In the first example, the client browser would cache the result. In the second, it can not.
If the client only ever loads one single page from your website in their life, then yes, because you have one HTTP request instead of two.
If you are going to serve multiple sites which all link to the same javascript source file, then you're duplicating all this redundant data and not giving the client a chance to cache the file.
You need to transfer the bytes to the browser in both cases. The only difference is that you save a HTTP request in the latter case.
Also make sure to escape the javascript with CDATA or using htmlspecialchars.
If you include your JS lib in your HTML page, it cannot be cached by the browser. It's generally a good idea to keep the JS separate from the normal HTML code because the browser can cache it and does not need to fetch it on subsequent requests.
So to make it short, it's an optimization that works only if the page is called once by the user and jquery is not used on other pages.
Alternatively, you may want to use the jquery from google apis - with the effect that they are often in the browser's cache anyway, so there is no need to transfer the lib at all.
It does so for that ONE PAGE only.
All subsequent pages using the same library (jquery.js downloaded from the same URL) SUFFER. If you reference the external file, yes, it has to be downloaded in an extra connection (which is relatively cheap with HTTP/1.1 and pipelining); BUT, provided your web server serves it with useful headers (an Expires header far in the future), the browser caches that download, while with the "optimization" it has to retrieve it with every single content page.
Also see pages like this one:
http://www.stevesouders.com/blog/2008/08/23/revving-filenames-dont-use-querystring/
(the keyword here is "revving" in connection with those far-future expiration dates)
The first one is better since the browser can cache the script. With the second version it will have to re-download the script every time it loads the page even if the script didn't change.
The only time the second version is an improvement is for scripts that cannot be cached by the browser.
It depends on how many pages use the same file. However, in most situations this will be slower than your first piece of code, mostly because jquery.js can be cached.
Yes, that is initially a performance optimization in terms of the number of HTTP requests used to serve the page. However, every page load becomes a bit bigger, whereas an externally referenced jquery.js would be cached in the browser after the first download.
It does if your page is static.
But if it's not static, the browser will download the page every time, with jQuery embedded in it even though jQuery hasn't changed. If you use src="jquery.js" and the page changes, the browser will load jQuery from cache rather than downloading it again, so using src="jquery.js" is actually faster.
What is the difference between the following two codes in an HTML file? If I add one more javascript file xyz.js after including the abc.js, is there any priority associated when the scripts are being used?
First code:
<script src="js/abc.js" type="text/javascript" language="javascript"> </script>
Second code:
<script language="javascript">
/*same code of abc.js*/
</script>
The primary difference is that the javascript file can be cached by the browser and network devices so the user doesn't have to download it on every page load.
So if you have 100k of javascript files, your visitor only needs to download them once. Otherwise, they'd have to download those same exact 100k every page load and visit.
This also applies to inline vs. external CSS and images as well!
Granted, this is only the tip of the iceberg of caching and browser performance (Steve Souders' work is one of the web 'bibles'):
http://yuiblog.com/blog/2006/11/28/performance-research-part-1/
http://www.yuiblog.com/blog/2007/01/04/performance-research-part-2/
http://www.stevesouders.com/
What is the difference between the following two codes in an HTML file?
One requires an extra HTTP request, but gets cached. The other doesn't.
If I add one more javascript file xyz.js after including the abc.js, is there any priority associated when the scripts are being used?
Loading of external scripts is blocking. The first one will be loaded first. Then the second one will be loaded.
They both cause the browser to read the javascript and execute it. The first code can leverage caching while the latter DOES NOT cache.
The first use case also requires another HTTP request which could be costly.
There is no priority otherwise.
The first difference between loading a script from a file and running it from an inline script tag is that loading from a file requires an extra HTTP request. This is usually trivial, though you will get a slight speed increase from having the script embedded in the page. However, loading from an external file allows the script to be cached - although you cannot always rely on caching.
Now, I should tell you, having all of your scripts hard-coded on the page is not very manageable. If you want to update one of the scripts but it's tied to a specific html file, it becomes that much harder to update.
As for your second question, scripts are loaded in order. All external loading is blocked while scripts are loading. Therefore, it is advisable to put all of your script includes at the bottom of the <body> tag.
Besides the primary reason of caching, a second and important difference is maintaining Separation of Concerns, which, among other things, says that in web development markup (HTML) should be kept separate from style (CSS) and behavior (JS). These should live in separate files, only linked to from the markup. This is important for project organization, ongoing upkeep, and optimization. Writing a mess of spaghetti code with everything inline makes you a sad panda.