Webfonts not caching on CloudFront - php

We are trying to put all of our file assets on S3 and CloudFront to reduce load on our server. For almost all file types things are working fine and CloudFront is caching files. For font files there always seems to be a miss. What am I doing wrong?
When we first put the fonts online and called them, we got an error that pointed to a CORS protocol issue. This is where we learned about CORS.
We followed this solution: Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading
Here is my CORS setting. We have to have AllowedOrigin as a wildcard because we are supporting many websites.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Content-*</AllowedHeader>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I set up behavior rules in the CloudFront distribution for each font type:
/*.ttf
with Whitelist Headers set to Origin.
I checked with curl: there is always a miss, and access-control-allow-origin is always *.
curl -i -H "Origin: https://www.example.com" https://path/to-file/font-awesome/fonts/fontawesome-webfont.woff
HTTP/2 200
content-type: binary/octet-stream
content-length: 98024
date: Tue, 08 Jan 2019 09:07:03 GMT
access-control-allow-origin: *
access-control-allow-methods: GET
access-control-max-age: 3000
last-modified: Mon, 07 Jan 2019 08:44:46 GMT
etag: "fee66e712a8a08eef5805a46892932ad"
accept-ranges: bytes
server: AmazonS3
vary: Origin
x-cache: Miss from cloudfront
via: 1.1 d76fac2b5a2f460a1cbffb76189f59ef.cloudfront.net (CloudFront)
x-amz-cf-id: 1azzRgw3h33KXW90xyPMXCTUAfZdXjCb2osrSkxxdU5lCoq6VNC7fw==
I should also mention that when I go directly to the file, it downloads instead of opening in the browser (which might be the correct behavior, I'm not sure).
The files are loading today, which is good, but in the end I would like CloudFront to serve the files from its cache instead of always missing.

Your curl dump indicates there is no Cache-Control header in the response. You should have this header set (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html). Best practice is to set Cache-Control to "public, max-age=xxx, s-maxage=yyy" (xxx = time cached in the user's browser, yyy = time cached in the CDN).
Do you have this header for other resources (like a CSS or JS file) but not for the woff files?
Check this: how to add cache control in AWS S3?
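For objects already in the bucket, the header has to be written into the object's metadata. Here is a minimal sketch with the AWS SDK for PHP (aws/aws-sdk-php v3), assuming hypothetical bucket and key names; an existing object needs a copy-in-place with MetadataDirective set to REPLACE for the new header to stick:
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

// Copy the object onto itself, replacing its metadata so CloudFront
// receives a Cache-Control header it can honour on the next fetch.
$s3->copyObject([
    'Bucket'            => 'my-asset-bucket',                          // hypothetical bucket
    'Key'               => 'fonts/fontawesome-webfont.woff',
    'CopySource'        => 'my-asset-bucket/fonts/fontawesome-webfont.woff',
    'MetadataDirective' => 'REPLACE',
    'ContentType'       => 'font/woff',                                // also fixes binary/octet-stream
    'CacheControl'      => 'public, max-age=86400, s-maxage=31536000',
]);
The same CacheControl parameter works on putObject for new uploads. Remember to invalidate the CloudFront cache (or wait for expiry) after changing the metadata.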

Related

Transfer-Encoding: chunked sent twice (chunk size included in response body)

I'm using Apache 2.2 and PHP 7.0.1. I force chunked encoding with flush() like in this example:
<?php
header('HTTP/1.1 200 OK'); // rewrite the status line
echo "hello";
flush();                   // push everything buffered so far to the client as one chunk
echo "world";
die;
And I get unwanted characters at the beginning and end of the response:
HTTP/1.1 200 OK
Date: Fri, 09 Sep 2016 15:58:20 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/7.0.9
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
a
helloworld
0
The first one is the chunk size in hex (0xA = 10 bytes). I'm using Klein as a PHP router, and I have found that the problem only comes up when the HTTP status header is rewritten. I guess there is a problem with my Apache config, but I wasn't able to figure it out.
Edit: my problem had nothing to do with Apache, but with Nginx and its chunked_transfer_encoding directive. Check the answer below.
This is how Transfer-Encoding: chunked works. The extra characters you're seeing are part of the encoding, rather than the body.
A client that understands the encoding will not include them in the result; a client that doesn't understand it doesn't support HTTP/1.1 and should be considered buggy.
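For reference, assuming no intermediary re-chunks the response, the flush() in the example above would normally put the body on the wire as two chunks, each prefixed by its length in hex and closed by a zero-length chunk:
5
hello
5
world
0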
As @Joe pointed out before, that is the normal behavior when chunked transfer encoding is enabled. My tests were not accurate because I was requesting Apache directly on the server. Actually, when I was experiencing the problem in Chrome I was querying an Nginx service acting as a proxy for Apache.
By running tcpdump I realized that Nginx was re-chunking responses, but only when the HTTP status header was rewritten (header('HTTP/1.1 200 OK')) in PHP. The solution to Transfer-Encoding: chunked being sent twice is to set chunked_transfer_encoding off in the location context of my Nginx .php handler.
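Something along these lines; the FastCGI details below are assumptions about a typical PHP-FPM setup, and the relevant line is the first one inside the block:
location ~ \.php$ {
    chunked_transfer_encoding off;            # stop Nginx from re-chunking the body
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;  # hypothetical PHP-FPM socket
}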

Varnish not working for only one website

I have a server with several sites. After installing Varnish I tested whether the cache works, but for one website Varnish does not work (the response has max-age=0). If I put a simple PHP page (unrelated to the main website) in the same folder as this website, the response works.
This is the response header when I try:
HTTP/1.1 200 OK
Server: Apache/2.2.27 (Unix) mod_ssl/2.2.27 OpenSSL/1.0.1e-fips
X-Powered-By: PHP/5.2.17
Set-Cookie: PHPSESSID=ragejao4sm1kckjn1trvap3ft0; path=/
Vary: User-Agent,Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html
Cache-Control: max_age=8600
magicmarker: 1
Content-Length: 11863
Accept-Ranges: bytes
Date: Fri, 12 Jun 2015 12:28:15 GMT
X-Varnish: 1250916100
Age: 0
Via: 1.1 varnish
Connection: keep-alive
Varnish by default doesn't cache responses where cookies are set.
If you want to change this behaviour, you need to consider how the cookie is being used (it looks like a session cookie) and either use the session id as part of the cache hash (i.e. so other users don't get a cached response from someone else's session) or use something like ESI to allow the "common" parts of the page to be cached while the session-specific parts are fetched independently.
http://www.varnish-cache.org/trac/wiki/VCLExampleCacheCookies
https://www.varnish-cache.org/trac/wiki/ESIfeatures
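Incidentally, the dump above also shows Cache-Control: max_age=8600; the valid directive is max-age (with a hyphen), so that value is being ignored anyway. As a minimal PHP sketch of the no-cookie route, assuming the page has no per-user content, you could resume a session only when the browser already has one, so first-time anonymous visitors never receive a Set-Cookie header:
<?php
// Sketch, not a drop-in fix: only resume an existing session; anonymous
// visitors then get no Set-Cookie header and Varnish can cache the page.
if (isset($_COOKIE[session_name()])) {
    session_start();
}
header('Cache-Control: public, max-age=8600'); // note max-age, not max_age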

html/php: Changing image order by swapping filenames doesn't take immediate effect

I have the following problem, which is completely bugging my mind. I have to take over this CMS from someone who doesn't want to maintain it anymore and gives no support whatsoever.
The situation is as follows: the site has several photo albums which are populated by reading a directory in PHP. All is good there; pictures are shown in the order they are read. In the management system, the order of these pictures can be changed with an up or down button. This is done by swapping the images' filenames, and it works: when I change the order of an image, I can see server-side that the filenames have actually been swapped.
This is, however, not the case on the site, at least not immediately: it takes an average of 10 minutes for the swapped images to show up there. Of course, my client can't work like this, and he claims it has always worked before. I have tried turning off caching browser-side; this hasn't helped. I can also note that the changes take effect at the same time in IE and FF. I tried several ways of turning off the cache server-side in PHP too, also to no avail.
Is there any other place where I should be looking, or could there be another reason why these changes don't take effect immediately?
In addition, changes I make to the JavaScript don't get picked up immediately either. I installed Fiddler and this is the request header for that JS file:
GET http://www.nobel-country-gite.be/admin/modules/Photoalbum/js/album.js HTTP/1.1
Accept: application/javascript, */*;q=0.8
Referer: http://www.nobel-country-gite.be/admin/index.php?page=pic&album=24
Accept-Language: nl-BE
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
If-Modified-Since: Wed, 27 May 2015 15:55:12 GMT
If-None-Match: "ba1248f5-138b-5171244a92f66"
DNT: 1
Host: www.nobel-country-gite.be
Pragma: no-cache
Cookie: __utmc=39679548; __utma=39679548.1608184058.1429963360.1432662247.1432664636.7; __utmz=39679548.1429963360.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=1; PHPSESSID=7uge1ltg2rc11q63untthrc5s1; __utma=1.459796341.1429963360.1432662247.1432664636.7; __utmz=1.1429963360.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Response header is as follows:
HTTP/1.1 304 Not Modified
Server: Apache
Last-Modified: Wed, 27 May 2015 15:55:12 GMT
ETag: "ba1248f5-138b-5171244a92f66"
Vary: Accept-Encoding
Content-Type: application/javascript
Date: Wed, 27 May 2015 16:57:55 GMT
X-Varnish: 1826689067 1825041752
Age: 556
Via: 1.1 varnish
Connection: keep-alive
I would expect the response to be different, instead of 'Not Modified'?
Edit - upon waiting a few minutes and refreshing the page again, the response for this file is what is expected:
HTTP/1.1 200 OK
Server: Apache
Last-Modified: Wed, 27 May 2015 16:57:30 GMT
ETag: "ba1248f5-1387-51713237ac28e"
Vary: Accept-Encoding
Content-Type: application/javascript
Transfer-Encoding: chunked
Date: Wed, 27 May 2015 17:03:43 GMT
X-Varnish: 1827728442
Age: 0
Via: 1.1 varnish
Connection: keep-alive
I couldn't help but notice you are using Varnish (indicated by the X-Varnish response header). Varnish is a caching reverse proxy, which means your pages are not just being cached by the browser, but also on the server (by Varnish). Your browser connects to Varnish, and Varnish connects to your Apache backend.
The first response header includes "Age: 556" - that's the cached version's age in seconds (almost 10 minutes). The age then comes across as "0" when the page refreshes - that's because Varnish has updated its cache. You can probably access the page over HTTPS to see your changes immediately reflected (Varnish doesn't handle HTTPS, and most people don't bother setting up an HTTPS cache), or you can generally add garbage GET parameters to your URL (e.g. "?bogus=123") to force Varnish to re-fetch the page (this won't make other users see the new version, since they'll be accessing it via the normal URLs).
Fixes: you can use varnishadm to ban (expire) certain URLs in Varnish when you've made a change. You can modify the "Cache-Control" or "Expires" headers your CMS/Apache produces (via PHP, .htaccess, etc.) to reduce cache time; Varnish fully respects cache-control headers in its caching strategy. You can change Varnish's behavior by editing the relevant VCL (usually "default.vcl"). Or you can accept that caches are generally good (they save a lot of time and resources in generating the response), and that a 10 minute delay may be an acceptable trade-off.
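For the JavaScript file specifically, a small template-side workaround (a sketch, assuming you can edit the CMS template; the path mirrors your request dump) is to version the URL with the file's modification time, so every change yields a URL no cache has seen before:
<?php
// Hypothetical template snippet: append the file's mtime as a version
// parameter so browsers and Varnish treat each change as a brand-new URL.
$js = 'modules/Photoalbum/js/album.js';
printf('<script src="%s?v=%d"></script>', $js, filemtime($js));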

Trying to download a file using curl where file download is blocked by javascript?

I am trying to use curl to download a torrent file. The URL is:
http://torcache.net/torrent/006DDC8C407ACCDAF810BCFF41E77299A373296A.torrent
You will notice that upon getting to the page, the download of the file is blocked for a few seconds via JavaScript. I was wondering if there is any way to bypass this while using curl and PHP?
Thanks
The file is not blocked via JavaScript; that's just an informational message shown when you request that file. The redirect is then done via JavaScript.
You can simulate the request yourself; the important part here is that you add the HTTP Referer request header. Example:
$ curl -I -H 'Referer: http://torcache.net/torrent/006DDC8C407ACCDAF810BCFF41E77299A373296A.torrent' http://torcache.net/torrent/006DDC8C407ACCDAF810BCFF41E77299A373296A.torrent
HTTP/1.1 200 OK
Server: nginx/1.3.0
Date: Sun, 10 Jun 2012 17:13:59 GMT
Content-Type: application/x-bittorrent
Content-Length: 10767
Last-Modified: Sat, 09 Jun 2012 22:17:03 GMT
Connection: keep-alive
Content-Encoding: gzip
Accept-Ranges: bytes
The referrer is one thing to check - mind the typo in the HTTP spec (the header is spelled "Referer", not "Referrer"); see Wikipedia.
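In PHP, the same request looks roughly like this (a sketch; the output filename is arbitrary and the Referer value mirrors the curl example above):
<?php
// Hypothetical PHP equivalent of the curl command above.
$url = 'http://torcache.net/torrent/006DDC8C407ACCDAF810BCFF41E77299A373296A.torrent';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_REFERER, $url);        // the header the site checks
curl_setopt($ch, CURLOPT_ENCODING, '');         // let curl decode the gzipped response
$data = curl_exec($ch);
curl_close($ch);
file_put_contents('file.torrent', $data);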

How do I gzip my HTML source

I would like to gzip the HTML source of my webpages. What's the best way to do it on a lighttpd/php5 server?
I have tried to do it by editing my php.ini file with:
zlib.output_compression = On
zlib.output_handler = On
but it seems to be a transparent compression only.
You'll need to enable mod_compress on lighttpd in addition to the changes you made in your php.ini file.
http://www.cyberciti.biz/tips/lighttpd-mod_compress-gzip-compression-tutorial.html
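If you'd rather compress per script from PHP, a minimal sketch is an output buffer with the ob_gzhandler callback (note that this must not be combined with zlib.output_compression = On):
<?php
// Per-script gzip: ob_gzhandler inspects the Accept-Encoding request header
// and compresses the buffered output only if the client supports it.
ob_start('ob_gzhandler');
echo '<html><body>Hello, compressed world!</body></html>';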
Edit:
I believe you're looking for an HTML minifier then. If you check out the headers that Google sends back, they look like this:
HTTP/1.1 200 OK
Date: Thu, 22 Oct 2009 18:28:47 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
Server: gws
Content-Length: 3519
X-XSS-Protection: 0
The "Content-Encoding: gzip" header is what you're looking for if you want to check whether your webserver is properly compressing your files.
