I believe the recommended way these days is to serve static files from a domain using //domain.com rather than http://domain.com or https://domain.com as needed. In fact, https://developers.google.com/speed/libraries/devguide lists its snippets in this format.
My question is: does this, or any other method, allow caching across http/https? I thought it did (though I can't remember why), but my testing doesn't seem to bear that out. The problem is that I recently changed a number of things about my setup (server, PHP framework), so I can't be sure why I can't cache across http/https (unless, of course, it isn't possible).
I'm wondering which of these methods is best for including static files:
(1) Serve exactly http or https depending on the protocol of the page requested
(2) Use //domain.com (protocol-relative URLs)
(3) Always use https to serve static content, even on http pages, so each file is only downloaded once. The first visit then fetches static content over https, which can be slow, but at least files won't be downloaded twice.
I know there's an issue with stylesheets in IE7 and IE8 when using the //domain.com method, though (the stylesheet gets downloaded twice).
Any help is appreciated. In particular: is it possible to cache across protocols? When a user hits an https page for the first time it's really slow (until everything is cached), and I want to avoid that.
On a server that supports both http and https you should probably reference assets via https if possible. This will maximize caching and avoid any mixed-content errors. Another motivation is that this is about to become the default behaviour with the release of Firefox 23: in an SSL context the browser will not download non-SSL assets.
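As a rough illustration (the static.example.com host and the asset_url() helper below are made up, not from the original post), a small PHP helper can emit every asset reference with an explicit https:// scheme so that http and https pages share the same cached copies:

<?php
// Hypothetical helper: always reference static assets over https so the
// browser keeps a single cached copy regardless of the protocol of the page.
function asset_url($path) {
    return 'https://static.example.com/' . ltrim($path, '/');
}
?>
<link rel="stylesheet" href="<?php echo asset_url('css/site.css'); ?>">
<script src="<?php echo asset_url('js/app.js'); ?>"></script>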
I have a PHP, MySQL, and Apache based website. The hosting server is located in London.
I opened a CloudFlare account, configured my website to route through CloudFlare, and enabled caching for static content.
I ran page load tests from different countries and could not see any improvement.
The test tool, however, detects that I am making effective use of a CDN, but there isn't any performance improvement.
1. My static resources each take around 20ms to download when accessed from London.
2. When accessed from other countries, these resources take roughly 600ms.
Am I missing something?
A couple of things to check:
Are you sending appropriate Cache-Control headers for the static content, and have you confirmed that CloudFlare is caching it (you should see "HIT" in one of the response headers)?
Have you run multiple tests for the resources? Perhaps they just haven't been cached yet, so you were measuring the timing of the CloudFlare->origin fetch.
By default, CloudFlare does not cache pages, only static resources. That may be why your pages are not any faster. If you want your pages to be cached and faster, you have to set up page rules to do so.
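If it helps, here is a minimal PHP sketch for checking the second point (the URL is a placeholder, and this assumes CloudFlare adds its CF-Cache-Status response header for resources it considers cacheable):

<?php
// Illustrative check: request a static resource twice and look at CloudFlare's
// CF-Cache-Status header. MISS on the first attempt and HIT on the second means
// the edge is caching it; repeated MISS usually means the origin is not sending
// a cacheable Cache-Control header.
$url = 'https://www.example.com/static/logo.png';
foreach (array(1, 2) as $attempt) {
    $headers = get_headers($url, 1);
    $status  = isset($headers['CF-Cache-Status']) ? $headers['CF-Cache-Status'] : 'no CF-Cache-Status header';
    echo "Attempt $attempt: $status\n";
}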
I have a very simple question, which I have been unable to find a clear answer to. I have a web page which is generated dynamically (so I absolutely do not want it cached), which loads in several thousand images. Since there are so many images, and they never change, I very definitely want the user's browser to cache these images.
I'm applying headers to the HTML page to prevent caching, by following the advice in this solution: How to control web page caching, across all browsers?
My question is: will this cause the user's browser to also avoid caching any images this page contains, or will it still cache them? Thank you.
TL;DR the answer is not clear because it is complicated.
There is an ongoing struggle between a drive to do the "right" thing (i.e., follow the standards... which themselves have changed) and a drive to "improve" the standards to achieve better performance or a smoother navigation experience for users. So from the application point of view you need to properly use headers such as ETag, If-Modified-Since and Expires, together with cache-hinting pragmas, but the browser - or something in the middle, such as a proxy - might still decide to override what would seem to be the "clear thing to do".
On the latest Firefox, connected directly to Apache 2.4 on a virtual Ubuntu machine, I have tried with a page (test.html) referring to an image (test.jpg).
When the page is cached, on the server side I see a single request for the HTML and nothing for the image. What is probably happening is that the "rendering" part of Firefox does request the image (it has to!), but the request is satisfied entirely from the local cache. This makes sense: if the page has not changed, its content hasn't changed either.
When the page is not cached, I see two requests, one for the page and one for the image, to which the server responds with a 304, but only because I also send the image's Last-Modified header. This also makes sense - if the page has changed, the images might have changed too, so the browser has to find out whether that is the case, and can only do so by asking the server (unless the Expires header is used to "assure" the client that the image will not change).
I have not yet tried with an uncached page that responds with a 304. I expect it to generate a single request (no image request to the server), for the same reasons.
What you might want to consider is that, your way, you will not cache the HTML page but might still perform a thousand image requests (which will yield a thousand 304s, but still). Performance in this scenario depends on whether the requests are sent independently or back-to-back using the HTTP/1.1 Keep-Alive extension (which has to be enabled and advertised server side).
You should then use the Expires header on the images to tell the client that those resources will not go stale anytime soon.
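For instance, if an image is served through a PHP script (the path and the one-year lifetime below are placeholders, not from the question), it could be stamped like this:

<?php
// Hypothetical image endpoint: a long max-age plus a far-future Expires tells
// the browser the image will not go stale, so repeat visits to the dynamic
// page produce no image requests at all (not even the thousand 304s).
$path = '/var/www/images/photo0001.jpg';
header('Content-Type: image/jpeg');
header('Cache-Control: public, max-age=31536000');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 31536000) . ' GMT');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($path)) . ' GMT');
readfile($path);

If the images are served directly by Apache instead, mod_expires can attach the same headers without involving PHP.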
You might perhaps also want to explore a different approach:
the HTML is cached
images are cached too
the HTML also references a (cached?) JavaScript file
variable content is loaded by the JavaScript via AJAX. That request can be made cache-unfriendly by including a timestamp, without involving the server at all.
This way you can configure the server for caching everything everywhere, except where you make sure it can't via a single crafted request.
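A sketch of the server side of that approach (the endpoint name and data are made up): only the small AJAX endpoint refuses to be cached, while everything else stays cacheable; the client-side JavaScript appends a timestamp query string so no intermediate cache reuses a stale copy.

<?php
// Hypothetical AJAX endpoint (e.g. /data.php?ts=1234567890): the ts parameter
// appended by the client-side JavaScript defeats intermediate caches, and these
// headers tell the browser not to store the response either.
function load_items() {
    return array('example');   // placeholder for whatever produces the variable content
}
header('Content-Type: application/json');
header('Cache-Control: no-store, no-cache, must-revalidate, max-age=0');
header('Pragma: no-cache');
echo json_encode(array('generated_at' => date('c'), 'items' => load_items()));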
I have found many posts about checking for SNI browser support, but none that combine it with a system time check.
The default landing for my site is http. Now I have set up SSL. But compatibility is most important to me. Security is least important. I only want to auto-redirect users to https if I'm damn sure they won't see an error (and then falsely think my site's broken).
Since the landing page is HTTP, we can use PHP or .htaccess to detect this. I have seen discussion elsewhere of using PHP redirections, which may cause some "Back" button issues. (Force SSL and handle browsers without SNI support)
Besides SNI support, I also know that if a user has a wrongly configured system time, they may encounter an error on the https site.
So what's the best approach to check both SNI support and the system time? Are there any other scenarios where a user redirected to https may encounter errors?
=====================================================================
Update:
Loading the home page doesn't really require "security". I'm wondering whether I can do a quick check to see if the user can successfully load an object, e.g. https://example.com/test.ico; if yes, then show a "Secure Login" option (kudos to Steffen). The POST action will then be done over https to prevent credentials being submitted insecurely.
If the https test fails, then there's no choice: the user has to log in without https.
Will this work?
The additional test would definitely be a drag on site load speed, wouldn't it?
There is no way to detect this on the server side with PHP or .htaccess, because if a connection fails due to missing SNI or a wrong system time, it fails already during the SSL handshake. PHP or .htaccess only come into play for the HTTP part, i.e. only once the SSL handshake has completed successfully.
What you might try instead is to include an https resource in your landing page and see if it loads successfully. This could be done with an image, CSS, XHR or similar. For example, you could do this:
<!-- src must point to an actual image on the https host (test.png is a placeholder);
     otherwise onerror fires even when the SSL handshake itself succeeds -->
<img src="https://test-ssl.example.com/test.png"
     onload="redirect_to_https();"
     onerror="notify_user_about_problem();" />
If the SSL connection to your site is established successfully, the onload handler is executed and the user is redirected to the https site. If the handshake fails, the user is instead notified about the problem.
Note that you cannot distinguish between different kinds of errors this way, i.e. you don't know whether the problem is caused by a wrong system time or by missing SNI support. If you want to do that, you would need to include a resource from a non-SNI host in the same way and see if it loads successfully.
But compatibility is most important to me. Security is least important.
I consider this a bad but unfortunately very common approach. I would recommend that you not use this mechanism to let users access all functionality of your site over http; the sensitive parts should be restricted to https. You could, however, use this approach to inform the user about the problem in detail and show alternatives, instead of just letting the failed SSL handshake produce some strange connection error.
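As a rough sketch of that split (the host name and file are placeholders), the sensitive pages can simply refuse to run over plain http while the rest of the site stays reachable either way:

<?php
// Hypothetical guard included at the top of login.php (or any other sensitive
// page): if the request arrived over plain http, bounce it to the https
// equivalent. Pages that must stay reachable by old clients simply omit this.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://www.example.com' . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}

Clients that cannot complete the handshake (no SNI, wrong clock) will still fail on that redirect, which is exactly where the client-side test above lets you warn them and offer an alternative first.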
This is not the answer you are looking for, and I was going to leave it as a comment for that reason, but it got too long, so I decided to answer instead:
While it's admirable to try to handle all your users no matter what antiquated tech or bad setup they have, at some point you've got to weigh the effort you're putting in against the reward of a few extra hits.
Managing dual http and https is an administrative nightmare that just puts your https-capable users (the vast majority on nearly all sites) at needless risk due to downgrade attacks, inability to mark cookies as secure, accidentally including insecure content... etc. From an SEO perspective you also basically have two sites, which is difficult to manage.
IMHO, if you feel this strongly about SNI users then pay for a dedicated IP; if not, move on to HTTPS only.
compatibility is most important to me. Security is least important
Then stick to HTTP and don't even consider HTTPS.
Finally, you've got to remember that browsers will force you to move on soon enough. Forcing SHA-2 certs, for example (which are not supported by very old browsers - similar to SNI), means that you will eventually have to call it a day on older browsers. So any answer you come up with here will only be short-lived. And that's assuming you can come up with a solution that works across all browsers (no small ask in itself!).
I was asked this question not too long ago, and didn't have a good answer...
Is there a good reason why a site that has an SSL certificate wouldn't use https:// for its entire site rather than http://?
Are there SEO issues? Performance overhead for the server?
Just in case it matters, we use LAMP stacks.
Thanks!
A few reasons:
Serving content over SSL takes some extra work, so performance on a busy site could be an issue
Most (all?) browsers stop sending referrer info with requests, so tracking users through your site could be more challenging
You might have to be more deliberate in how you serve pages to get browsers to cache them properly
If the page is SSL, all content loaded on the page should be SSL, too, to avoid mixed-content warnings in the browser; serving dependencies like scripts, images, etc. under SSL is not always convenient
Note, however, that a lot of sites do do this. For example, several of the banks I use are always https, even for the parts that don't require it.
For each request your data will be encrypted and decrypted; this puts unnecessary load on the server and would also increase the response time of your site.
Using SSL/TLS no longer adds very much overhead: http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
(As #erickson said in a comment on this page, the most computationally expensive part is the handshake. Good comment in general.)
I think you may get a loss in performance in some cases because browsers tend not to keep content obtained via HTTPS in the file cache after you close them (on the assumption that it's sensitive content that shouldn't be kept on disk), so you wouldn't benefit from the browser's cache and would have to reload the content.
What should I consider when switching a simple (user + password) login form from http to https?
Are there any differences when using https compared to http?
From what I know, the browser won't cache content served over https, so page loading might be slower, but other than that I know nothing about this.
Does anyone have any experience with these things?
Do not mix secure and non-secure content on the same site as browsers will display annoying warnings if you do so.
Additionally, set cookies as secure (https-only) when the user is on https, so they are never sent over an http connection.
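In PHP this is the $secure parameter of setcookie(); a minimal sketch (cookie name, value, lifetime and domain are placeholders):

<?php
// The sixth argument ($secure) restricts the cookie to https connections;
// the seventh (true) additionally makes it HttpOnly, i.e. invisible to JavaScript.
$secure = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
setcookie('auth_token', 'opaque-token-value', time() + 3600, '/', '.example.com', $secure, true);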
When switching over to https, consider that ALL web assets (images, JS, CSS) must come from an https domain, otherwise your users will get warnings about insecure transmission of data. If you've got any hard-coded URLs you'll need to change them to https dynamically.
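One way to do that in PHP (the helper and host name are made up for illustration) is to build asset URLs from the scheme of the current request rather than hard-coding http://:

<?php
// Hypothetical helper: follow the protocol of the current request so the same
// template works on http and https pages without mixed-content warnings.
function current_scheme() {
    return (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
}
echo '<img src="' . current_scheme() . '://www.example.com/images/logo.png" alt="logo">';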
I would add that you should prefer to send your URL parameters via POST instead of GET, otherwise you may be leaving private data all over the place in log files, browser address bars, etc.
The security layer is implemented in the web server (e.g. Apache), while your login is implemented in the business logic (your application).
There's no difference for your business logic between http and https; by the time you receive the request it's going to be the same, because you receive it decrypted. The web server does the dirty work for you.
As you say, it might be a little bit slower because the web server has to encrypt / decrypt the requests.
As Ben says, all the resources have to come from the secure domain, otherwise some browsers (such as IE) get really annoying with the warnings.
From what I know the browser won't cache content served over https
Provided you send caching instructions in the headers, the client should still cache the content (MSIE does have a switch hidden away to disable caching for HTTPS, but it defaults to caching enabled; Firefox probably has something similar).
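For example, caching instructions like these are usually enough (the one-day lifetime is arbitrary); some browsers additionally want the public directive before they will write HTTPS content to the disk cache:

<?php
// Explicitly cacheable response: "public" marks it as safe to cache even
// though it was delivered over HTTPS.
header('Cache-Control: public, max-age=86400');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 86400) . ' GMT');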
The time taken for the page to turn around will be higher, and much more affected by network latency, due to the additional overhead of the SSL handshake (once encryption has been negotiated the overhead isn't that much, but depending on your web server and how it's configured you probably won't be able to use keep-alives with MSIE).
Certainly there will be no difference to your PHP code.
C.