I am working on Magento (EE). I came across the term "full page caching". Can anyone please tell me what "full page caching" is in Magento or in Zend?
Caching the full page?
As in, everything that is generated from a script is written to HTML and served next time, improving performance (by reducing load and not having to generate the page for every visit).
However, this comes with the disadvantage of occasionally serving out-of-date pages.
If your website isn't getting a significant number of hits, enabling full page caching (caching all of the generated HTML) is going to make little difference.
Magento is a shopping website CMS.
It simply means that, to boost the performance of the website, it will cache (store in a buffer) the HTML output of a particular page. For example, take your homepage: every time a user opens it, the PHP behind it has to fetch the information from the database, parse it with the related views, and then display the final HTML output. That's a LOT of processing.
Instead, caching stores the HTML output in its buffer, and when a user comes in, it serves the cached HTML output rather than going to the database and so on. However, the lifetime of the cache has to be defined, although modern cache plugins will check for any changes in the output data and update the cache accordingly.
Simple?
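For illustration, here is a minimal sketch of the idea in plain PHP (not Magento's actual implementation), using output buffering and a hypothetical cache file:

    <?php
    // Minimal full-page cache sketch (hypothetical cache path, not Magento's code).
    $cacheFile = 'cache/homepage.html';
    $lifetime  = 3600; // seconds the cached copy stays valid

    if (is_file($cacheFile) && time() - filemtime($cacheFile) < $lifetime) {
        readfile($cacheFile); // cache hit: no PHP rendering, no database work
        exit;
    }

    ob_start(); // cache miss: render the page as usual...
    // ...expensive database queries and view rendering would happen here...
    echo '<html><body>Homepage</body></html>';

    file_put_contents($cacheFile, ob_get_contents()); // ...then store the HTML
    ob_end_flush(); // and send it to this visitor too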
Is it possible to cache a whole page, but not cache a part of it, in the browser?
For example, I have a page with a date. Only the date changes daily; the rest of the page never changes. How should I cache such a page in the browser?
Can a browser-cached page contain dynamic content?
Actually, I am new to caching and do not understand how it works with dynamic content and browser caching. Is it right that, from the moment a dynamic page is cached, it is always served as it was at the time of caching, and new dynamic content is not displayed?
I am not asking about server-side caching, only about browser-side caching.
There is no specific mechanism for excluding part of a page from caching, but you can use some tricks:
You can cache the whole page and change the part you want with an iframe!
You can cache the whole page and change the part you want with AJAX!
You can cache the whole page and change the part you want with a JavaScript file!
I have not checked the iframe solution and am not sure whether it works.
If the JS files are cached, you can add a version to their file names, like scripts.v.2.3.js, and load them by version name.
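As a minimal sketch of that versioning trick, here is a query-string variant in PHP, assuming a hypothetical scripts.js in the web root; the file's modification time serves as the version, so the browser re-fetches it only after a change:

    <?php
    // Hypothetical helper: version an asset URL by its modification time, so a
    // changed file gets a new URL and the browser's cached copy is bypassed.
    function versioned_url(string $file): string
    {
        return $file . '?v=' . filemtime($file); // e.g. scripts.js?v=1688112000
    }

    echo '<script src="' . versioned_url('scripts.js') . '"></script>';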
You can't really cache "part" of a file dynamically. You can, however, cache separate assets: the more you split your page into separate assets, the more of them you can cache individually.
Your index.html could have a no-cache setting (using the Cache-Control header).
Your logo.png could have a long cache lifetime, say 10 days.
Now, if you want certain elements to change while the core stays the same, I believe this is a better job for JavaScript. You could write a JavaScript function to display the date; then you can fully cache both the HTML page and the JavaScript file, and since the raw content never changes (only the manipulation of the DOM does), you have very few client-to-server requests.
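As a minimal sketch of the header side of this, a PHP script could send the cache policies described above (the directive values here are illustrative):

    <?php
    // The HTML shell: make the browser revalidate it on every visit.
    header('Cache-Control: no-cache');

    // A static asset served through PHP could instead get the 10-day lifetime
    // (10 days = 864000 seconds):
    // header('Cache-Control: max-age=864000');

    // The date itself is left to client-side JavaScript, so the cached HTML
    // never goes stale:
    echo '<p id="date"></p>';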
I'm developing a wiki-like web app and each page has 5, individually-editable content parts to it.
I have a simple caching class that saves the rendered parts to a file.
If a part of the page has not changed, it loads the cache, if it has, it renders it and then saves it to cache.
Because the page has 5 parts that are separately editable, I am saving each part as its own file, so when an edit is made, only that part is re-rendered and cached.
But this also means that on every load, 5 files are read and included in the code.
Is it better to do it this way, or save the entire page in a single cache file?
That depends on multiple factors, I guess...
site load
file sizes of pieces
update frequency
are updates likely to happen on more than one subitem?
...
I would optimize for viewing the site, because it happens a lot more frequently than making a change, I suppose. So I would cache it in one file.
The only way to know is to measure it: with the microtime() function, you can compare the script execution time at different points and across different tryouts.
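For example, a minimal sketch of such a measurement, assuming hypothetical cache file names, might look like this:

    <?php
    // Compare reading five part caches against one combined cache
    // (file names hypothetical).
    $start = microtime(true);
    for ($i = 1; $i <= 5; $i++) {
        $part = file_get_contents("cache/part{$i}.html");
    }
    $fiveFiles = microtime(true) - $start;

    $start = microtime(true);
    $page = file_get_contents('cache/page.html');
    $oneFile = microtime(true) - $start;

    printf("five files: %.6f s, one file: %.6f s\n", $fiveFiles, $oneFile);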
I am currently developing a crawler that crawls links on the web and displays them in the web browser (and saves them, of course).
But after some hours there will be a huge list displayed in the browser, and I want to display only, let's say, 1000 links at a time; then I clear the HTML and display another 1000 links. This is also better for RAM, as otherwise it will eat up all the memory.
How do I clear the web browser screen?
EDIT: I have seen some scripts using buffer-flushing functions. Does this have anything to do with my case?
Pagination: What is it and how to do it? Sounds like you're describing a perfect opportunity to paginate your results.
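As a rough sketch of that idea, assuming the crawled links live in a hypothetical links table, each request would fetch only one slice of 1000:

    <?php
    // Hypothetical schema: crawled URLs live in a `links` table with an `id` key.
    $pdo     = new PDO('mysql:host=localhost;dbname=crawler', 'user', 'pass');
    $perPage = 1000;
    $page    = max(1, (int)($_GET['page'] ?? 1));
    $offset  = ($page - 1) * $perPage;

    // Both values are server-computed integers, so interpolating them is safe.
    $rows = $pdo->query("SELECT url FROM links ORDER BY id LIMIT $perPage OFFSET $offset");

    foreach ($rows as $row) {
        echo htmlspecialchars($row['url']), "<br>\n";
    }
    echo '<a href="?page=' . ($page + 1) . '">next 1000</a>';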
You can use document.write('') to completely delete the contents of the web page without reloading it.
If you want to clear the screen in a way that saves memory on the client's side, you'll have no choice but to redirect to a different page, e.g. using JavaScript: location.href="...". That will trigger the loading of a completely new page, though, so you would have to save your crawler's state and continue from the saved point.
I am currently developing a web site with PHP + MySQL and jQuery. So far I have been doing it on my local machine. I notice that when I view the page, the images on it take some time to load (not long, but it's visible). All the images are small (PNGs of less than 3 KB). Now, when I load the page, some database connections happen in order to get the data that I will display.
I am not sure if this loading-time issue has something to do with the number of images, or with the time that the PHP script + the DB connections take to execute. (I have very little data in my database, so I wouldn't assume the latter.)
My question is: is it a good approach to preload all the images at the beginning of each page? I tried it with jQuery and it works fine. I'm just not sure which disadvantages it brings. For example, to do so, I need to include the jQuery library at the beginning of the page; I thought that was a bad practice.
If these PNGs are stored in the database as BLOBs — not clear from your question — don't do that. Serving images from a DB through PHP is not as efficient as letting the web server serve them straight from the filesystem. If the images are tied to particular records, just name the PNG after the row ID, so you can find it in a directory dedicated to storing those images. The PHP code then just generates the URL that points to the PNG file on disk, so the web server can send them statically.
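A minimal sketch of that naming scheme, assuming a hypothetical images/ directory keyed by row ID:

    <?php
    // Hypothetical convention: the PNG for database row 42 lives at images/42.png.
    function image_url(int $rowId): string
    {
        return "/images/{$rowId}.png"; // the web server sends the file statically
    }

    // PHP only builds the URL; it never pushes the image bytes itself.
    echo '<img src="' . image_url(42) . '" alt="">';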
I don't think preloading the images within the same page is going to buy you anything. If anything, it might slow the apparent overall page load time because the browser can only retrieve a fixed number of resources concurrently, typically 2-4. Loading images at the top of the <body> means there are other things at the top of the page "above the fold" that have to wait for some HTTP connection slots to free up. Better to let the images load in their natural order.
Preloading makes sense in two situations:
The image isn't shown by default, but is expected to be needed as the user interacts with the page. Good examples of this are the hover and click state images for rollovers.
The image isn't used on this page, but will be needed on the next. Good examples of this are any site where there is a clear progression from one page to the next, like in a shopping cart.
Either way, do the preload at the very bottom of the <body>, so everything else loads first.
Having addressed those two issues, run YSlow on your site. It started out as a plugin for Firebug, which in turn is a plugin for Firefox, but it's since been ported to all major browsers except IE.
The beauty of YSlow is that it detects common slowdowns automatically, just by loading the page while the extension is active. It then gives you a clear grade for the page, so you can judge when you're "done" optimizing. If you're below an A, you're not done yet. :) It's not uncommon to see sites rating a D or worse, because the default configuration for web servers is conservative to avoid causing problems. Fixing YSlow warnings is generally pretty easy, but you have to be careful to avoid creating caching and other problems, which is why the default server config doesn't do these things.
Another answer recommended the Google PageSpeed offering. It's available as a plugin for Chrome and Firefox, as a server-side Apache module, and as a Google-hosted service. I have no idea how it compares to YSlow, but it looks interesting.
Also consider using the browser's debugger to get a waterfall graph of resource load times:
In Firebug you get this in the Net tab.
In Safari, you get to it via the Develop menu, which is normally disabled in Preferences. Turn it on if needed, then say Develop > Start Timeline Recording. That puts you into the Network Requests instrument. You can also get to it through Develop > Show Web Inspector.
In Chrome, say View > Developer > Developer Tools, then go to the Network tab.
IE has a very weak form of this, via Tools > Developer Tools > Profiler. It just gives a table of numbers, rather than a waterfall graph, so the information is there, but you can't just visually scan for long bars to find the slowest resources.
You should use the PageSpeed plugin from Google to check which data takes most of the time to load. It will show separate load times for images as well.
If you're using lots of small PNGs, I suggest combining them into one image and manipulating the display via the CSS background property, since they are part of the styling and not the information. That way, instead of a few images, only one will be loaded and reused across all elements. In this case, even preloading the one image is really easy.
Have you considered using CSS Sprites to combine all of your images into a single download? There are a number of tools online to help you do this, and it will significantly reduce the number of HTTP requests required by your page.
Make sure you have set the correct Expires header on your images to allow them to be cached.
Finally, take a look at YSlow, which can provide you with further optimisation tips.
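For the Expires header specifically, a hedged sketch of how an image passed through PHP could send it (path and lifetime are illustrative):

    <?php
    // Hypothetical image served via PHP: a far-future Expires header lets the
    // browser cache it for 10 days without asking again.
    header('Content-Type: image/png');
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 10 * 86400) . ' GMT');
    readfile('images/logo.png');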
I need to write a text file viewer (not the directory tree, but the actual file contents) for use in a browser. It will be used to view large files. I want to give the user the ability to actually, umm, browse the file, i.e. prev-page and next-page buttons, while each page shows only a portion of the file.
Two questions:
Is there any way to pass the file descriptor through POST (or something) so that on each page I can keep reading from an already-open file, rather than starting all over again (again: huge files)?
Is there a way to read the file backwards? It would be very useful for browsing back in a file.
Any other implementation ideas are very welcome. Thanks
Keeping the file open between requests is not a good idea - you don't have to "start all over again" - just maintain an offset and use fseek() to jump to that offset. That way, you can also implement the "backwards jumping".
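A minimal sketch of that offset idea, assuming a hypothetical big.txt and a fixed page size carried from request to request:

    <?php
    // Hypothetical viewer: the byte offset travels with each request instead of
    // keeping a file handle open between them.
    $pageSize = 4096;
    $offset   = max(0, (int)($_GET['offset'] ?? 0));

    $fh = fopen('big.txt', 'rb'); // hypothetical file name
    fseek($fh, $offset);          // jump straight to the saved position
    echo '<pre>' . htmlspecialchars(fread($fh, $pageSize)) . '</pre>';
    fclose($fh);

    // "Backwards" is just a smaller offset on the next request.
    $prev = max(0, $offset - $pageSize);
    $next = $offset + $pageSize;
    echo "<a href=\"?offset=$prev\">prev</a> <a href=\"?offset=$next\">next</a>";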
Cut your huge files into smaller files once, and then serve the small files to the user.
You should consider pagination. If you're concerned about the user being frustrated by needing to click "next" too often, you could make each chunk reasonably large (so a normal reader pages every 20min).
Another option is chunked transfer encoding: see the Wikipedia entry. This would allow your server to respond quickly and give the user something to read while it streams the rest of the file over the network (rather than the server needing to read in the whole file and send it all at once). This could dramatically improve the perceived performance compared to serving the files normally, but it still consumes a lot of bandwidth on your server.
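A minimal sketch of streaming a file in chunks from PHP, flushing each chunk so the browser can start rendering before the whole file has been read (file name hypothetical):

    <?php
    // Stream a large file in 8 KB chunks instead of loading it all into memory.
    $fh = fopen('big.txt', 'rb');
    while (!feof($fh)) {
        echo fread($fh, 8192);
        // If output buffering is enabled, call ob_flush() before flush().
        flush(); // push this chunk to the client immediately
    }
    fclose($fh);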
You might be able to simulate a large document with JavaScript and AJAX, sending only pieces at a time for better performance.
Consider sending a few pages' worth of your document and attaching listeners to the browser's scroll event. Over time, or as the user scrolls down, you AJAX in more chunks. This creates a few annoying UX edge cases, like:
Scroll bar indicates a much smaller document than there actually is
You might be able to avoid this by filling in the bottom of your document with many page breaks, but it'll be difficult to make the length perfect.
Scrolling past the point of currently-available content will show a blank page.
You could detect this using JavaScript and display a "loading" icon to let the user know what's going on.
Built-in "find" feature doesn't work
Hard to avoid this without the user downloading the entire document, but you could provide your own search feature for them to use instead (not as good but perhaps adequate).
Really though, you're probably best off with pagination with medium-sized pages. It's a very well understood design pattern that's relatively easy (compared to the other options, at least) to implement and make fast.
Hope that helps!