Alternative to display:none for mobile - PHP

I'm currently building a practice responsive website. What I am doing is taking an existing website and building it up using Twitter Bootstrap JS and CSS, meaning it will be fully responsive for mobile.
The issue is that there are some large carousels and images on the site. Ideally I would like to just completely remove certain elements, like a carousel for instance, and instead have the options within the carousel as a standard list menu.
It seems the main option is display:none based on media queries, but I am starting to foresee big problems with loading time if the entire desktop site is still loaded on mobile with certain elements merely hidden.
Are there ways to completely exclude HTML based on browser size? If anyone has any good links or articles, that would be great. Or even just opinions on whether there is actually a need to exclude HTML or not.
Thank you

First off, it is really good to see that although you're talking about display:none, you actually still want to display the content without the bells and whistles of the image. Well done you.
The next thing I would look at is this: if you don't want to load the images for mobile, then why are you adding them at the larger sizes? If an image isn't providing a function or assisting in explaining the content better, then why not just drop it for the desktop size as well?
If it does in fact help tell a story, then you can include the images using one of the popular image services like Adaptive Images, HiSRC, or Picturefill, which will serve the mobile version of the image first and replace it with a larger image at higher viewports (but remember, there's no bandwidth test).
Finally, if you do want to serve some different content, then take the advice given in the other answers about including more content with AJAX. The South Street toolbox from Filament Group can help you out; pay particular attention to the AjaxInclude pattern (it also has a link to Picturefill).

You could consider storing heavy data JSON-encoded and then creating elements and loading them on demand, like so:
var heavyImage = new Image();
heavyImage.src = imageList[id];
Then you can append the image element to the desired block. In my experience with mobiles, this is more robust than requesting <img> markup via AJAX, since AJAX can sometimes be pretty slow.
You may also 'prefetch' images with this method (say, the 2-3 adjacent to those currently visible), thus improving the UX.

You could pull in the heavy elements via AJAX so they wouldn't sit on the page initially, making it load faster. You could decide to do the AJAX call only if the screen size is larger than X.

If you want you can use visibility:hidden, or if you use jQuery you can use:
$(element).remove() //to remove completely
$(element).hide() //to hide
$(element).fadeOut(1) //to fadeout

Related

A way to implement Facebook's functionality of link sharing

I am looking for a way to create functionality similar to what happens when you post a link to an existing website on Facebook. If this statement is rather ambiguous, I will try to elaborate.
When you paste your link and submit your post, Facebook shows, together with your link, a small preview of the page you are posting (some text and maybe a small image).
What are the ways to achieve this?
I read a similar post, but the thing is that I do not need the image so much; text will be sufficient.
I am working in PHP, but the language is not important, because I am looking for a high-level idea.
Previously I was thinking about parsing the content of the link with cURL, but the thing is that in a lot of situations the text returned by Facebook is not available on the page.
Are there other ways?
From what I can tell, Facebook pulls from the meta name="description" tag's content attribute on the linked page.
If no meta description tag is available, it seems to pull from the beginning of the first paragraph <p> tag it can find on the page.
Images are pulled from available <img> tags on the page, with a carousel selection available to pick from when posting.
Finally, the link subtext is also user-editable (start a status update, include a link, and then click in the link subtext area that appears).
Personally I would go with a route like this: cURL the page, parse it for a meta description tag (and if that is missing, grab some likely text using a basic algorithm or just the first paragraph tag), and then allow the user to edit whatever was presented (it's friendlier to the user and also sidesteps issues with pages that return different content depending on the user agent). Do the user-facing control as AJAX so that you don't run into issues with however long it takes your site to access the link you want to preview.
I'd recommend using a DOM library (you could even use DOMDocument if you're comfortable with it and know how to handle possibly malformed html pages) instead of regex to parse the page for the <meta>, <p>, and potentially also <img> tags. Building a regex which will properly handle all of the myriad potential different cases you will encounter "in the wild" versus from a known set of sites can get very rough. QueryPath usually comes recommended, and there are stackoverflow threads covering many of the available options.
Most modern sites, especially larger ones, are good about populating the meta description tag, especially for dynamically generated pages.
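As a rough sketch of that approach (the helper name and the length cap are illustrative assumptions, not a definitive implementation), fetching the page with cURL and falling back from the meta description to the first paragraph could look something like this:
<?php
// Hypothetical helper: fetch a page and pull a short description out of it,
// preferring <meta name="description"> and falling back to the first <p>.
function fetchDescription($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $html = curl_exec($ch);
    curl_close($ch);

    if ($html === false) {
        return null;
    }

    $doc = new DOMDocument();
    // Suppress warnings from the malformed HTML you'll meet in the wild.
    @$doc->loadHTML($html);

    // First choice: the meta description, as Facebook appears to do.
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        if (strtolower($meta->getAttribute('name')) === 'description') {
            return trim($meta->getAttribute('content'));
        }
    }

    // Fallback: the first paragraph that actually contains some text.
    foreach ($doc->getElementsByTagName('p') as $p) {
        $text = trim($p->textContent);
        if ($text !== '') {
            return substr($text, 0, 300);
        }
    }

    return null;
}
Whatever this returns would then be dropped into the user-editable preview field via the AJAX call described above.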
You can scrape the page for <img> tags as well, but you'll then want to host the images locally: you can either host all of the images and delete all except the one chosen, or you can host thumbnails (assuming you have an image processing library installed and enabled). Which you choose depends on whether bandwidth and storage are more important, or the one-time processing cost of running imagecopyresampled, imagecopyresized, Gmagick::thumbnailimage, etc. (pick whatever you have at hand / your favorite). You don't want to hotlink to the images on the page, both because of the bandwidth ethics and especially because of the likelihood of ending up with broken images when linking to any site with hotlink prevention (referrer checks and the like) or when the images expire. Personally I would probably go for storing thumbnails.
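If you go the thumbnail route, a minimal GD sketch along those lines (assuming a JPEG source already saved locally; the bounding-box size and quality are arbitrary) might be:
<?php
// Hypothetical helper: write a bounded-size JPEG thumbnail of a local image
// using imagecopyresampled, as mentioned above.
function makeThumbnail($srcPath, $destPath, $maxWidth = 150, $maxHeight = 150)
{
    list($width, $height) = getimagesize($srcPath);

    // Preserve the aspect ratio within the bounding box, never upscaling.
    $scale = min($maxWidth / $width, $maxHeight / $height, 1);
    $newWidth  = (int) round($width * $scale);
    $newHeight = (int) round($height * $scale);

    $src = imagecreatefromjpeg($srcPath);       // assumes a JPEG source
    $dst = imagecreatetruecolor($newWidth, $newHeight);

    imagecopyresampled($dst, $src, 0, 0, 0, 0,
                       $newWidth, $newHeight, $width, $height);

    imagejpeg($dst, $destPath, 85);             // 85 = JPEG quality
    imagedestroy($src);
    imagedestroy($dst);
}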
You can wrap the entire link entity up as an object for handling expiration/etc if you want to eventually delete the image/thumbnail files on your own server. I'll leave particular implementation up to you since you asked for a high level idea.
"...but the thing is that in a lot of situations the text returned by Facebook is not available on the page."
Have you looked at the page's meta tags? I've tested with a few pages so far, and this is generally where content not otherwise visible on the rendered linked page comes from; it seems to be the first choice for Facebook's algorithm.
Full disclosure upfront, I'm a developer at ThumbnailApp.com.
It's a JSON API service with an optional JavaScript SDK which I think does exactly what you're after: it will parse a string to detect any URLs and return the title, description and thumbnail of the asset. If the page has OpenGraph tags, it will use those for the image thumbnail. It's currently in private beta but we're adding more accounts each week.
If you feel that you really need a do-it-yourself solution:
Check out the Python-based webkit2png and the headless browser PhantomJS. They can render webpages to an image (the default size is 800x600); then you'll have to write some code to resize and crop the image like taswyn mentioned. Ideally you would then upload the resized image to Amazon S3 and have it served through a CDN such as CloudFront.
To get the title and description, first get the URL content (with cURL or whatever), and you will need to check the Content-Type header to make sure it's a webpage. If it is, you can then use an HTML parser such as the SimpleHTMLDOM PHP library to grab the title and description metadata. If you want it exactly like Facebook, you will also need to check for any OpenGraph tags, specifically the og:image tag.
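As a loose sketch of the Content-Type check and the OpenGraph lookup (using DOMDocument here instead of SimpleHTMLDOM; $url is assumed to come from your own input handling):
<?php
// Sketch: confirm the URL points at an HTML page, then prefer OpenGraph
// tags (og:title, og:description, og:image) over the plain <title>.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$body = curl_exec($ch);
$contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

if ($body !== false && $contentType && strpos($contentType, 'text/html') === 0) {
    $doc = new DOMDocument();
    @$doc->loadHTML($body);

    $og = array();
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        $property = $meta->getAttribute('property');
        if (strpos($property, 'og:') === 0) {
            $og[$property] = $meta->getAttribute('content');
        }
    }

    $titleNode = $doc->getElementsByTagName('title')->item(0);
    $title = isset($og['og:title'])
        ? $og['og:title']
        : ($titleNode ? trim($titleNode->textContent) : '');
    // $og['og:image'], if present, is the thumbnail Facebook would prefer.
}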
Also, don't forget about caching. The first render and description parse can take a long time; even if your site is fast, the webpage you're rendering could be slow, and the best approach is to render/parse it once, then just save and return the resized image and metadata for subsequent requests. Depending on your requirements you may need to refresh the cached data every hour, or you could get away with refreshing it once a day.
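A very simple file-based cache along those lines (the TTL, file naming and buildPreview() helper are purely illustrative assumptions):
<?php
// Illustrative one-day file cache for the parsed preview data.
function getCachedPreview($url, $ttl = 86400)
{
    $cacheFile = sys_get_temp_dir() . '/preview_' . md5($url) . '.json';

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return json_decode(file_get_contents($cacheFile), true);
    }

    $preview = buildPreview($url);   // hypothetical: your render/parse step from above
    file_put_contents($cacheFile, json_encode($preview));

    return $preview;
}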
To do it yourself takes quite a bit of work and lots of server configuration. I feel using a 3rd party service is a better way to go, but obviously I have a biased opinion :)

How to properly preload images, js and css files?

I'm creating a website from scratch. I was really into this in the late 90's, but the web has changed a lot since then! I'm more of a designer, so when I started putting this site together I basically did a system of PHP includes to make the site more "dynamic".
When you first visit the site, you'll be presented with a logon screen if you're not already logged on (cookies). If you're not logged on, a page called access.php is served.
I thought I'd preload the heaviest images at this point, so that when the user is done logging on the images are already cached. And this is working as I want. But I still notice that the biggest image isn't rendered immediately anyway, so it seems kind of pointless.
All of this has made me rethink how the site is structured and how scripts and CSS files are loaded. Using Firebug and YSlow with Firefox I see a few pointers like expires headers and reducing the size of each script. But is this really the culprit?
For example, would this be really, really stupid in the main index.php? The entire site is basically structured like this:
<?php
require("dbconnect.php");
?>
<?php
include ("head.php");
?>
And below this is basically just the body and the content of the site.
head.php, however, consists of the doctype, the head portion, links to two CSS stylesheets, the jQuery library, the jQuery validation engine, Cufon and a Cufon font file, and then the small Cufon.Replace snippet.
The rest of the body comes with the index.php file, but at the bottom of it there is again an include of a file called "footer.php", which basically consists of loading a couple of jsLoader scripts, a slide panel, and then a JS function.
All of this makes the final page source look like a typical complete webpage, but I'm wondering if any of you can immediately see that "this is really really stupid" and "don't do that, do this instead", etc. :) Are includes a bad way to go?
This site is also pretty image intensive and I can probably do a little more optimization.
But I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space:
doc(1) - 5.8K
js(5) - 198.7K
css(2) - 5.6K
cssimage(8) - 634.7K
image(6) - 110.8K
I know it looks like it's cssimage(8) that weighs the most, but I've already preloaded these images from before and it doesn't really affect the rendering.
To speed things up a little, you could assemble all your images into a single image sprite, so that only one request downloads all of them. But that requires you to fine-tune your CSS to display just the relevant portion of the sprite in each place.
For a better explanation, check out http://css-tricks.com/css-sprites/
Another suggestion that could seem a little simplistic, but I like to keep it in mind when I make a website: just keep it simple. I mean, does all your JS add real value, are all those images fine, could you display less and make a lighter design? I'm not criticizing your work at all, just a suggestion...
I used the following approach on an extranet project:
Using jQuery and an array of file names, I AJAX in all the images, .js and .css files so that they are preloaded in the cache. As I iterate through the array, I update a progress bar on the screen indicating that the site is loading, much like a Flash loader.
It worked well.
What I would do is show the loading page by default with pure CSS and HTML, then wait for jQuery to load and preload the images with ImageLoader. Once you are done, redirect to the normal website; since the images will already be in the cache, they won't be loaded again.
Another optimization you can do is minify all the JS files and combine all of them except jquery.js. Put jquery.js first in your HTML so it loads first. Also put your script tags at the bottom of the HTML.
It sounds like you have pretty much nailed preloading: if you have loaded a resource once and the expiry header is set correctly, you have preloaded it, no matter what kind of content it is.
File combination can be key to a quick website. Each extra file adds load time; in the worst cases of network and server lag you might add up to a second extra for each separate file, and more commonly it will be around 100-200 milliseconds per file.
If the scripts are not already minified, minify them and put them in the same file; just remember to keep the order. I have no idea why Ivo Sabev wouldn't include jQuery.
Same thing with the CSS files.
How much have you done about testing image compression? There can really be a gain from trying out different compression settings and comparing size vs. quality. For PNG images, IrfanView with PNGOUT can often make files 25% smaller than other programs. On top of that, a very big gain in size reduction can be achieved by reducing the image to 8-bit colour; with a lot of graphic elements you simply can't tell the difference. Right here on Stack Overflow there is a great example of well-compressed and stacked images in the editor control buttons: http://sstatic.net/so/Img/wmd-buttons.png

Display a webpage inside another webpage without frames

I'm looking for a way to display a web page inside a div of another web page.
I can fetch the webpage with cURL, but since it has an external stylesheet, when I try to display it, it appears without all of its style properties.
I remember Facebook used this technique with shared links (you used to see the linked page with a Facebook header).
I did some unsuccessful jQuery tests, but I'm pretty much clueless about how to continue.
I know this can be done with frames, but I always hear that it's good practice to avoid frames, so I'm a bit confused.
Any ideas how to work this out?
If you want to display the other website's contents exactly as they are rendered in that site then frames are, in this case, the best (easiest) way to go.
Facebook and Google both use this technique to display pages while maintaining their branding / navigation bar above the other site.
I am going to guess that Facebook still used an iframe, just with no borders and a well-placed header outside of it. The reason I am guessing that is because if the outside page has its own stylesheet, there is a high probability that your styles and their styles will clash and not show things properly.
In order for the styles not to clash, everything on both ends would have to be extremely detailed, not just generic styles applied to all paragraphs, etc.
I agree that using frames would probably be the best solution for your problem.
But if you still want to avoid frames and put the contents into a div with the id externalContent, you could request the stylesheets the same way you get the other contents and prefix every rule in them with "#externalContent ". Save these stylesheets to your server and include them in your page. With a few more customizations, that should work (a rough sketch follows below).
I have to admit this solution does sound quite strange... well, it is.
But it's the only way I see to do what you're asking for.
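For what it's worth, a crude PHP sketch of that stylesheet-prefixing idea (a naive regex pass; it won't cope with @media blocks or other edge cases, and the URL and file names are examples):
<?php
// Naive sketch: fetch an external stylesheet and prefix every selector
// with "#externalContent " so its rules only apply inside that div.
$css = file_get_contents('http://example.com/external.css'); // example URL

$prefixed = preg_replace_callback(
    '/([^{}]+)\{/',
    function ($matches) {
        $selectors = array_map('trim', explode(',', $matches[1]));
        $selectors = array_map(function ($s) {
            return '#externalContent ' . $s;
        }, $selectors);
        return implode(', ', $selectors) . ' {';
    },
    $css
);

file_put_contents('external-scoped.css', $prefixed); // save locally, then link it in your page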
If you are unable to use a frame or iframe, try:
extracting the HTML inside the BODY and inserting it into the destination DIV
extracting the <link> and <style> sections from the header
It's not very clean, but it will definitely work; you can insert a phpBB forum into another dynamic page using this technique. Take a look at http://www.clearerimages.com/forum/ for an example.
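A rough PHP sketch of that body extraction (using DOMDocument rather than regex; the URL and div id are just examples):
<?php
// Rough sketch: fetch a remote page and keep only the inner HTML of its
// <body>, ready to drop into a div on your own page.
$html = file_get_contents('http://example.com/page.html'); // or fetch with cURL

$doc = new DOMDocument();
@$doc->loadHTML($html);

$body = $doc->getElementsByTagName('body')->item(0);

$inner = '';
if ($body) {
    foreach ($body->childNodes as $child) {
        $inner .= $doc->saveHTML($child);
    }
}

// Echo it inside the destination div, alongside whatever stylesheet
// links you extracted from the page's header.
echo '<div id="externalPage">' . $inner . '</div>';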

Programmatically combining images in PHP

I'm a big fan of Yahoo's recommendations for speeding up websites. One of the recommendations is to combine images where possible to cut down on size and the number of requests. However, I've noticed that while it can be easy to use CSS sprites for layouts, other image uses aren't as easily combined. The primary example I'm thinking of is a blog or article list, where each blog or article also has an image associated with it. Those images can greatly affect load time and page size, especially if they aren't optimized. What I'm looking for, in concept or in practice, is a way to dynamically combine those images while running them through lossless compression using PHP.
A few added thoughts or concerns:
Combining the images and generating a dynamic CSS stylesheet to position the backgrounds of the images might be one way to go about it, but I also worry about accessibility and semantics. As far as I understand, CSS images should be used for layout elements and the img tag (with the alt attribute) should be used for images that are meant to convey information. I could set the image as a background to a div element and substitute a title attribute for the alt attribute, but I'm unsure about the accessibility and semantic implications of doing so.
Might the GD library be a good candidate for something like this? Can you recommend other options?
I wouldn't go down this route if I were you. Sure, you may save a few bytes in protocol overhead by reducing the number of requests, but this would more than likely end up being self-defeating.
Imagine this scenario:
A blog site whose front page has 10 articles at a time. Each article has its own image associated with it. To save a byte or two of transfer time, you programmatically create a composite image of all 10 article images. You now have one of two problems:
1. You must update the composite image each time a new post is made, as the most recent 10 images will have a modified set of content.
2. You decide to create a new composite on each request, on the fly.
Obviously, #1 is preferable here, and would not be difficult to implement. However, what if a user searches for all posts tagged with the word "SQL"? You are unlikely to have a composite image of the first 10 results already created for this simple query, let alone a more complex one. Also, what happens if you want to update or delete an image? Once again you'd have to trigger the background creation of the composite.
How about an RSS aggregator, like Google Reader? It wouldn't have the required logic to figure out which portion of a composite image it would need to display, and would probably display the full image. (I mention Google Reader because I very rarely visit blog sites directly, tending to trust an RSS aggregation service like Reader.)
If it were me, I'd leave the single images alone. With modern connection speeds, the tradeoff between additional bandwidth overhead and on-server processing time is unlikely to win you any great gains.
Having said that, if you decide to go down this route anyway, I'd say the GD library is an excellent place to start.
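If you did want to experiment, a bare-bones GD composite (stacking fixed-size article thumbnails vertically onto one canvas; the paths and dimensions are assumptions) would look roughly like this:
<?php
// Bare-bones sketch: stack a list of article images vertically into one
// composite PNG with GD.
$images = array('img/post1.jpg', 'img/post2.jpg', 'img/post3.jpg'); // example paths
$thumbW = 200;
$thumbH = 150;

$composite = imagecreatetruecolor($thumbW, $thumbH * count($images));

foreach ($images as $i => $path) {
    $src = imagecreatefromjpeg($path);
    // Resample each source image into its slot on the composite canvas.
    imagecopyresampled($composite, $src,
                       0, $i * $thumbH, 0, 0,
                       $thumbW, $thumbH,
                       imagesx($src), imagesy($src));
    imagedestroy($src);
}

imagepng($composite, 'composites/front-page.png');
imagedestroy($composite);

// The accompanying CSS would then use background-position offsets of
// 0, -150px, -300px, ... to show each article's slice.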
You'd almost certainly be better off reducing the filesize of the images in articles than combining them. I'd agree that there might be accessibility issues with the method you suggest. Also, I suppose it depends on what you mean by "dynamic": if you're thinking of combining those images and generating CSS on each page load, you might well find that it results in slower page load times for users with average connection speeds.
As to your second point, GD could certainly handle that. A better use of GD for reducing page load times might be reducing the image quality of your article images to reduce filesizes, at article creation time, not at page load.
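For instance, a hedged sketch of re-saving an uploaded article image at a lower JPEG quality at creation time (the path variables come from your own upload handling, and the quality value is just a starting point):
<?php
// Sketch: recompress an uploaded article image once, at creation time,
// so every later page load serves the smaller file.
$img = imagecreatefromjpeg($uploadedPath);   // $uploadedPath: wherever your upload lands (hypothetical)
imagejpeg($img, $articleImagePath, 70);      // 70 = quality; tune against visual results
imagedestroy($img);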

sIFR or FLIR?

I've recently bumped into Facelift (FLIR), an alternative to sIFR, and I was wondering if those who have experience with both sIFR and FLIR could shed some light on their experience with FLIR.
For those of you who haven't yet read about how FLIR does it: FLIR works by taking the text from targeted elements using JavaScript, then making calls to a PHP app that uses PHP's GD to render and return transparent PNG images, which are placed as the background of the element in question; the overflow is then set to hidden and padding equal to the element's dimensions is applied to effectively push the real text out of view.
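To make that mechanism concrete, here is a hedged sketch of the kind of GD call such a service makes on the server (this is not FLIR's actual code; the font path, sizes and colours are assumptions):
<?php
// Sketch of FLIR-style server-side rendering: draw a heading's text into
// a transparent PNG with a TrueType font and send it back to the browser.
$text = $_GET['text'];                 // the element's text, sent by the JS
$font = __DIR__ . '/fonts/MyFont.ttf'; // hypothetical font file
$size = 24;

// Measure the text so the canvas fits it.
$box = imagettfbbox($size, 0, $font, $text);
$width  = abs($box[4] - $box[0]) + 10;
$height = abs($box[5] - $box[1]) + 10;

$img = imagecreatetruecolor($width, $height);
imagesavealpha($img, true);
imagealphablending($img, false);
$transparent = imagecolorallocatealpha($img, 0, 0, 0, 127);
imagefill($img, 0, 0, $transparent);

imagealphablending($img, true);
$color = imagecolorallocate($img, 51, 51, 51);
$baseline = abs($box[5]) + 5;          // small top margin above the text
imagettftext($img, $size, 0, 5, $baseline, $color, $font, $text);

header('Content-Type: image/png');
imagepng($img);
imagedestroy($img);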
This is what I've figured so far:
The good
No flash (+for mobiles)
FLIR won't break the layout
Images range from some 1KB (say one h3 sentence) to 8KB (very very large headline)
Good documentation
Easy to implement
Customizable selectors
Support for jQuery/prototype/scriptaculous/mooTools
FLIR has implemented cache
Browsers cache the images themselves!
The bad
Text can't be selected
Requests are processed from all sources (you need to restrict FLIR yourself to process requests from your domain only)
My main concerns are, first, how well it scales, i.e. how expensive it is to work with the GD library on a shared host (does anyone have experience with that?); and second, how much love search engines give sIFR or FLIR implementations, knowing that a) the text isn't explicitly hidden and b) it only renders on a JavaScript engine.
Over the long term, sIFR should cache better because rendering is done on the client side, from one single Flash movie. Flash text acts more like browser text than an image, and it's easy to style the text within Flash (different colors, font weights, links, etc). You may also prefer the quality of text rendered in Flash, versus that rendered by the server side image library. Another advantage is that you don't need any server side code.
Google has stated that sIFR is OK, since it's replacing HTML text by the same text, but rendered differently. I'd say the same holds true for FLIR.
I know that with sIFR, and I assume with FLIR that you perform your markup in the same way as usual, but with an extra class tag or similar, so it can find the text to replace. Search engines will still read the markup as regular text so that shouldn't be an issue.
Performance-wise: if you're just using this for headings (and they're not headings which will change each page load), then the caching of the images in browsers, and also presumably on the server's disk should remove any worries about performance. Just make sure you set up your HTTP headers correctly!
Since FLIR is images and sIFR is Flash, I would imagine it would be a bit more resource-intensive to use sIFR. I haven't run any tests, but it seems logical.
Search engines handle sIFR better than FLIR because some search engines can read the text inside a Flash document.
I don't know much about sIFR because FLIR worked, and it "felt" better to me than Flash. Just looking at the sIFR 3 beta demo page, I noticed that it doesn't seem to react to the browser's text-resizing preference. That is, if I increase my font size in Firefox (Ctrl-+) and reload the page, the headings stay the same size.
To those who know sIFR, is this an actual limitation of the script or did they just do the demo page wrong?
If it actually doesn't handle this, I'd call that a major advantage for FLIR, which does work this way. People with impaired vision who don't use screen readers probably don't appreciate that the text doesn't resize to their preference.
That said, from a quick glance at sIFR's API, you should be able to make resized text work in sIFR. I'd consider it a bug to be fixed, not an essential disadvantage of the method.
WOFF files are the best solution.
http://www.fontsquirrel.com/tools/webfont-generator
