In a Symfony2 project, one of my routes is /vendor. After a cache warm-up it takes about 4 seconds for this page to load, after which the page comes from the cache. It takes less than 100 ms for the /vendor route to be processed when the page is found in the cache.
Here is the real problem: every subsequent route, say /vendor/1 or /vendor/1/edit, takes a long time the first time it is requested. If I render lots of vendors on the /vendor page, it is a nightmare to click all the links for the first time.
I am using the following cache parameters for the controller.
Question: how do I set up the cache so that all the links on /vendor are found in the cache, so that /vendor/1, /vendor/2, /vendor/3 ... /vendor/{id} can all be processed quickly?
/**
 * Vendor controller.
 *
 * @Route("/vendor")
 * @Cache(expires="tomorrow", public=true, smaxage="36000", maxage="36000")
 */
Age:0
Cache-Control:max-age=3600, public, s-maxage=36000
Connection:Keep-Alive
Content-Length:1854
Content-Type:text/html; charset=UTF-8
Date:Fri, 31 Jul 2015 15:16:15 GMT
Keep-Alive:timeout=5, max=99
Server:Apache/2.4.12 (Win32) OpenSSL/1.0.1m PHP/5.6.11
X-Content-Digest:en11c12a9aefabc2c0cc106e0b632e46f26bdf5f004618f501912a005ef058b7b6
X-Powered-By:PHP/5.6.11
X-Symfony-Cache:GET /vendor/2: **stale, invalid, store**
The cache warm-up is not for the HTTP cache. It is the PHP cache (compiling configuration and so on).
If you want to warm up your HTTP cache you need to write a script that accesses those URLs.
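For example, a rough warm-up sketch (the base URL and the /vendor/{id} range below are placeholders for your own routes) that simply requests each page once with curl so the HTTP cache has a fresh copy:

<?php
// warmup.php - request each vendor page once so the HTTP cache stores a fresh copy.
// The base URL and id range are assumptions; adjust them to your own routes.
$base = 'http://localhost/app.php';

foreach (range(1, 50) as $id) {
    $ch = curl_init($base . '/vendor/' . $id);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // we only care about warming the cache, not the body
    curl_exec($ch);
    printf("/vendor/%d -> HTTP %d\n", $id, curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
}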
What you should be more concerned about is why it takes 4 seconds to serve a page that is not in the cache. Try profiling it with Blackfire and see what's wrong.
I figured out what the problem was. I had to clear my browser cache and cookies, and all the responses are now coming back "fresh".
It looks like changing the cache settings invalidates the whole cache, and I need to start afresh.
I am not sure whether that is the right explanation, but it is working for me for the time being.
Background
Part of my application's responsibility is handling requests for static resources (CSS, JavaScript, images) in a controlled manner. Based on some application logic, it will return one from a selection of different files that might be served on that URL at different times and to different users. These are therefore static files, but delivered in a dynamic way.
The application is based on Symfony Components and the serving of these static-ish files is handled by the BinaryFileResponse class.
The bootstrap code calls the trustXSendfileTypeHeader method:
\Symfony\Component\HttpFoundation\BinaryFileResponse::trustXSendfileTypeHeader();
The application uses some internal logic based on configuration and the detection and use of apache_get_modules() to determine availability. If XSendfile is available and the configuration says to use it, it sets the X-Sendfile-Type header:
if ($useHeader === true) {
    $request->headers->set('X-Sendfile-Type', $header);
}

$response = new BinaryFileResponse($filename);
Problem
When I run this with the configuration set to never use XSendfile, or through the PHP built-in web server, which obviously does not support XSendfile, everything is perfect.
When I utilise XSendfile, it also works -- most of the time.
Every so often, typically if I press the F5 key 3-4 times in quick succession, "something" wigs out and I get a garbled response. For example, this is supposed to be a JavaScript file (copied from the "Response" tab under "Net" in Firebug):
hxYîãx��HTTP/1.1 200 OK Date: Tue, 05 Feb 2013 14:49:10 GMT Server:
Apache/2.2.22 (Ubuntu) X-Powered-By: PHP/5.4.6-1ubuntu1.1
Cache-Control: public Last-Modified: Tue, 29 Jan 2013 13:33:23 GMT
Accept-Ranges: bytes Content-Transfer-Encoding: binary ETag:
"10426f-9f6-0" Vary: Accept-Encoding Content-Encoding: gzip
Content-Length: 1011 Keep-Alive: timeout=5, max=98 Connection:
Keep-Alive Content-Type: application/javascript
������VmoÛ6þ,ÿkÀ²ãIý°~q [Üt]
XÑt¶H¤#Rv¼Àÿ}w(YSÀØ2yïå¹*¾Á>¯¥¥,è) Æ^Ât¸BaÆ\éjgäjí
Î&ð*¸Åí¸tY!³Ç$Óe"jÞ![#,n®®oï®A¨þ¸þù××Þ©¼¼ôÇêÚd¹49mv°ÔrtBÖ^;WÍÓÔg´Y¥´FéôÁR9o°35Îà^º´N=UÐèEµ¢XE¸íÒ%ª°¨Úò7¬KñT¾{;£ÈrTnß³étUè{QÀçÍn·:'üJëQÍÄËZeNjOàyÕÁ:#3wö~4Òét1ù$µeN)RD|
¶FTØJ·ß½¥¨¸õGç >9TyÜxzgl-J:) b«9ûAQ½KXÉ!yÐÓ]
óÆÎ#W¡?¢vún·7j©ÿ¢ðõÖGEÁy\ºp¤÷cKxf?ï*¼Éç0^ïîÌÇ°ñDQ¸mYJ|4t¾ñæËÛ¯Å
¨6:çøp(}þÑò|LÂ;Õ(#v¹* /[¨U|xª
æ]ÍyìjµòÛ¯p?4sI¥"v÷ôp|uQ4ò4&Ï·$eÒc¸ xo%7Ôi´2ñx;TuÙj23 áÊ%ħ¿¹lÌwÀS.&ÏØß7¸}ó
ZXzå k2'Zdùè
�¦ºû-Ù[Ó²ÿU(¯¤¥=pÃjô¾ç]]Øhhô²×ÙãÚÍ4¨[!Õ}'Òþ^Ð�ûxÿ#+ÚVÞ~áÌáy?d
aíD¹·U×ÃÚ] õ5íÃø¨o÷ÂAvUÆmÍaày`¦ä©A?mL[-}®(ÿË
d°öò¬}Ç¢³Çp1À^6%0 hTô^ts´ÞíWô
fO¶ö¢ÎNÜæ·HîUôÔ¶±ÌCµsxh.9åçi Û·_ÈÞØ_ÄãY_Ö}G<ì°ý2wÔ¿aw8/þù\ã±þ"0C
oÂh'tE¶À¤¥7I½éßRt.s?á^d|k/Æ)wRw÷cG¿<Þ
¼´°/^ø*ʤAVZ×y¿zÅΪ¥[²Õ1ò_Vµæï_YXÁÕö ��YXÁÕö ��
Note the presence of the headers in the response body, and the rest of it, which is clearly not JavaScript. There are also some spurious characters at the start, which is possibly what leads to the headers being pushed into the body. I have tried to determine whether this content is the result of gzipping, but I can't confirm that yet. (See also the update below.)
Question
Firstly, is BinaryFileResponse even the correct class to use for serving text (non-binary) files? The documentation for the class only says "BinaryFileResponse represents an HTTP response delivering a file." That isn't very detailed, and it doesn't say anything about the class being exclusively for "binary" files. However, the name has its own implications; why didn't Fabien just call this class FileResponse?
Secondly, and more importantly, what could be causing this? I don't believe it is a browser issue, because it is repeatable in both Firefox and Chrome. Is this a bug in the XSendfile module, or perhaps in the BinaryFileResponse class? (I am inclined to think it is not the former, because I have used it before in a more "raw" way, not via Symfony Components, with no such issues.)
Has anyone else experienced this? Any idea where I should even start looking to track this down? I've looked at the BinaryFileResponse source code, but it doesn't really do much with XSendfile: it just sets the relevant header and prevents content in the response body, from what I can see.
Update
I've just noticed a couple of things about these garbled responses:
There are no actual headers being sent at all; on the "Headers" tab in Firebug, the garbled responses only list the Request headers and don't even show the heading for Response headers.
Even if I set a custom header on the Response in PHP, that header does not appear at all in the garbled responses (either as a header or in the response body), but the custom headers appear correctly for the responses that aren't broken.
First, let me say that I don't have any experience with this Apache module, but I'll try to guide you through some general error deduction:
You should check whether you can reproduce it more reliably. While a web browser might be OK for trying it out, you should go for something like curl and make the request multiple times, for example using a bash for loop:
for i in `seq 1 5`; do curl -v http://localhost/xsendfile-url; done
The fact that the Connection: Keep-Alive header is set, and that there are some weird characters before the actual HTTP headers, leads me to believe that you won't be able to reproduce this problem with separate curl calls, because each one opens a fresh connection. So try this to check whether it gives you the weird behaviour (curl keeps the connection alive across URLs given in a single invocation):
curl -v http://localhost/xsendfile-url http://localhost/xsendfile-url http://localhost/xsendfile-url
Using this, you can go to the project's GitHub issue page and report your findings. Most probably they will be able to tell you why mod_xsendfile is behaving the way it is, or you will have found a bug.
On my server I have Varnish (caching) running on port 80 with Apache on 8080.
Varnish caches very well when I set the headers like below:
$this->getResponse()->setHeader('Expires', '', true);
$this->getResponse()->setHeader('Cache-Control', 'public', true);
$this->getResponse()->setHeader('Cache-Control', 'max-age=2592000');
$this->getResponse()->setHeader('Pragma', '', true);
But this means people cache my website without ever retrieving a new version when it's available.
When I remove the headers, people retrieve a new version on every page reload (so Varnish never caches).
I cannot figure out what goes wrong here.
My ideal situation is that people don't cache the HTML on the client side but leave that up to Varnish.
"My ideal situation is that people don't cache the HTML on the client side but leave that up to Varnish."
What you want is for Varnish to cache the resource and serve it to clients, and only generate a new version if something has changed. The easiest way to do this is to have Varnish cache it for a long time, and invalidate the entry in Varnish (with a PURGE command) when that something changes.
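As a rough illustration (assuming your VCL has already been set up to accept PURGE requests from this host, which is not shown here), the back-end could evict a page from Varnish like this whenever its content changes:

<?php
// Ask Varnish to drop its cached copy of a URL after the underlying data has changed.
// Assumes a vcl_recv rule that allows PURGE from this server; the URL is an example.
function purgeFromVarnish($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return $status; // typically 200 when the purge succeeded
}

purgeFromVarnish('http://127.0.0.1/some/changed/page');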
By default, Varnish bases its cache rules on the headers the back-end supplies. So, if your PHP code generates the headers you described, the default Varnish VCL will adjust its caching strategy accordingly. However, it can only do this in a generalised, safe way (e.g. if you use a cookie, it will never cache). You know how your back-end works, so you should change Varnish's cache behaviour not by sending different headers from the back-end, but by writing a Varnish .vcl file. You should tell Varnish to cache the resource for a long time even though the Cache-Control or Max-Age headers are missing (set the time-to-live, ttl, in your .vcl file). Varnish will then serve the cached entry until the TTL has passed or you purge the entry.
If you've got this working, there's a more advanced option: cache the resource on the client but have the client "revalidate" it every time it wants to use it. A browser does this with an HTTP GET plus an If-Modified-Since header (your response should include a Last-Modified header to provoke this behaviour) or an If-None-Match header (your response should include an ETag header to provoke this behaviour). This saves bandwidth because Varnish can respond with a 304 Not Modified response, without sending the whole resource again.
The simplest approach is just to turn down the max-age to something more reasonable. Currently you have it set to 30 days (2592000 seconds); try setting it to 15 minutes:
$this->getResponse()->setHeader('Cache-Control', 'max-age=900');
Web caching is a somewhat complicated topic, exacerbated by some very different client interpretations. But in general this will lighten the load on your web server while ensuring that new content is available in a reasonable timeframe.
Set your standard HTTP headers for the client cache to whatever you want. Then set a custom header that only Varnish will see, such as X-Varnish-TTL. In your VCL, incorporate the following code in your vcl_fetch sub (a PHP-side sketch follows the VCL snippet):
if (beresp.http.X-Varnish-TTL) {
    C{
        char *ttl;
        /* first char in third param is length of header plus colon in octal */
        ttl = VRT_GetHdr(sp, HDR_BERESP, "\016X-Varnish-TTL:");
        VRT_l_beresp_ttl(sp, atoi(ttl));
    }C
    unset beresp.http.X-Varnish-TTL; // Remove so the client never sees this
}
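On the PHP side that could look roughly like this (the 900 and 86400 values are arbitrary examples; X-Varnish-TTL only has meaning because the vcl_fetch snippet above reads it):

// Client-facing cache header: browsers keep the page for 15 minutes.
$this->getResponse()->setHeader('Cache-Control', 'public, max-age=900', true);

// Varnish-only hint: the VCL above uses this value (in seconds) as the TTL,
// then unsets the header so the client never sees it.
$this->getResponse()->setHeader('X-Varnish-TTL', '86400', true);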
I think my question seems pretty casual, but bear with me as it gets interesting (at least for me :)).
Consider a PHP page whose purpose is to read a requested file from the filesystem and echo it as the response. Now, the question is how to enable caching for this page. The thing to point out is that the files can be pretty huge, and enabling the cache would save clients from downloading the same content again and again.
The ideal strategy would be to use the "If-None-Match" request header and the "ETag" response header to implement a reverse-proxy caching system. Even though I know this much, I'm not sure whether it is possible, or what I should return as the response in order to implement this technique.
Serving huge files, or many auxiliary files, is not exactly what PHP is made for.
Instead, look at X-accel for nginx, X-Sendfile for Lighttpd or mod_xsendfile for Apache.
The initial request gets handled by PHP, but once the file to be downloaded has been determined, it sets a few headers to indicate that the web server should handle the file sending, after which the PHP process is freed up to serve something else.
You can then use the web server to configure the caching for you.
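As a rough sketch of that hand-off with mod_xsendfile on Apache (the file path and cache lifetime are made-up examples):

<?php
// The application decides which file to serve, then hands delivery over to Apache.
// Requires mod_xsendfile to be enabled; the path below is only an example.
$file = '/var/www/protected/downloads/report.pdf';

header('Content-Type: application/pdf');
header('Cache-Control: public, max-age=3600'); // let the web server / client cache it
header('X-Sendfile: ' . $file);                // Apache streams the file; PHP is done
exit;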
Statically generated content
If your content is generated by PHP and is particularly expensive to create, you could write the output to a local file and apply the method above again.
If you can't or don't want to write to a local file, you can use HTTP response headers to control caching:
Expires: <absolute date in the future>
Cache-Control: public, max-age=<relative time in seconds since request>
This will cause clients to cache the page contents until they expire or until the user forces a page reload (e.g. presses F5).
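In PHP that could be as simple as the following (one hour is an arbitrary choice):

// Let clients keep the generated page for one hour.
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
header('Cache-Control: public, max-age=3600');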
Dynamically generated content
For dynamic content you want the browser to ping you every time, but to only send the page contents if there's something new. You can accomplish this by setting a few other response headers:
ETag: <hash of the contents>
Last-Modified: <absolute date of last contents change>
When the browser pings your script again, it will add the following request headers respectively:
If-None-Match: <hash of the contents that you sent last time>
If-Modified-Since: <absolute date of last contents change>
The ETag mostly helps to reduce network traffic rather than server work, because in some cases you have to generate the contents first just to calculate their hash.
The Last-Modified is the easiest to apply if you have local file caches (files have a modification date). A simple condition makes it work:
if (!file_exists('cache.txt') ||
    !isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) ||
    filemtime('cache.txt') > strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    // update cache file and send back contents as usual (+ cache headers)
} else {
    header('HTTP/1.0 304 Not Modified');
}
If you can't use file caches, you can still use an ETag to determine whether the contents have changed in the meantime.
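A minimal ETag sketch along those lines (generatePageContents() is a hypothetical helper standing in for however you produce the output):

<?php
// Compare the client's cached hash against the current contents and
// short-circuit with a 304 when nothing has changed.
$contents = generatePageContents(); // hypothetical helper producing the page output
$etag     = '"' . md5($contents) . '"';

header('ETag: ' . $etag);

if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.0 304 Not Modified');
    exit;
}

echo $contents;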
I am trying to investigate the cause of slowness on my website.
Here I attach a Firebug screenshot:
As you can see, all of the content is loaded in just 2.92 s, but the JavaScript onload event is fired AFTER 17.67 s.
In case you want to see the website itself: http://maylashop.com .
I have tried using YSlow; I get an A grade and it doesn't help.
If anyone has a fix or knows what caused this, please kindly let me know.
Why load http://cf.addthis.com, http://platform.twitter.com, plusone.google.com, and so on? I don't see you using them anywhere. If you are using them, only add them where they are needed.
Follow the YSlow guidelines, gather some metrics, and then check what the bottleneck is.
You will be happy if you follow these rules.
This is not a JavaScript problem. Your PHP script is taking that long to execute (see the screenshot). All the other resources that page is loading (JS, CSS, images, etc.) take less than a second to load. I'm 95% sure this is caused by zlib.output_compression. Try adding the following code to the top of your script to see whether disabling it does anything useful:
ini_set('zlib.output_compression', 0);
If that fixes it, then you could consider not using zlib.output_compression, or figure out what specific thing in your code is causing problems with it (usually output buffering).
Pretty sure this is not related to JavaScript. Just requesting your main page took about 2 seconds. I ran this on a Linux machine:
date ; lynx -source http://maylashop.com/ > /dev/null ; date
Fri Apr 13 22:38:19 CEST 2012
Fri Apr 13 22:38:21 CEST 2012
This is independent confirmation that the host is either generating the index page too slowly, or that there is a network transfer issue.
Doing the same thing with /index.php, /index.html, or even a 404 page I created on the fly results in the same ~2-second delay.
Edit: I checked the image download speed, and that one is under 1 second, close to 0.
Something in your PHP code might be creating the problem (introducing a delay). One of those things could be a delay in connecting to a MySQL server (or whatever you're using). Is the database server on the same machine, or remote? Are you connecting to it on each call, or do you have a caching system in place?
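A quick way to test that theory is to time just the connection; a rough sketch assuming MySQL via mysqli (the credentials are placeholders):

<?php
// Measure how long the database connection alone takes, to see whether it
// accounts for the ~2 second delay. Credentials below are placeholders.
$start = microtime(true);

$db = new mysqli('db-host', 'user', 'password', 'database');

printf("connect took %.3f s\n", microtime(true) - $start);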
I have a website (with ESI) that uses the Symfony2 reverse proxy for caching. The average response time is around 100 ms. I tried installing Varnish on the server to try it out. I followed the guide from the Symfony cookbook step by step and deleted everything in the cache folder, but the http_cache folder was still created when I tried it out. So I figured I could try commenting out $kernel = new AppCache($kernel); in app.php. That worked pretty well: http_cache wasn't created any more, and according to varnishstat, Varnish seemed to be working:
12951 0.00 0.08 cache_hitpass - Cache hits for pass
1153 0.00 0.01 cache_miss - Cache misses
That was out of around 14,000 requests, so I thought everything would be all right. But after running echoping I found that responses had risen to ~2 seconds.
Apache runs on port 9000 and Varnish on 8080, so I ran echoping using echoping -n 10 -h http://servername/ X.X.X.X:8080.
I have no idea what could be wrong. Are there any additional settings needed to use Varnish with Symfony2, or am I simply doing something wrong?
Per requests, here's my default.vcl with modifications I've done so far.
I found two issues with Varnish's default config:
it doesn't cache requests with cookies (and everyone in my app has a session assigned)
it ignores the Cache-Control: no-cache header
So I added conditions for these cases to my config and it performs fairly well now (~175 req/s, up from ~160 with the Symfony2 reverse proxy - but honestly, I expected a bit more). I just have no idea how to check whether it is all set up correctly, so any input is welcome.
Most pages have their cache varied by cookie, with an s-maxage of 1200. Common ESI includes aren't varied by cookie and have quite a low s-maxage (articles, article lists). User profile pages aren't cached at all (no-cache), and I'm not really sure whether the ESI includes on those are even being cached by Varnish. The only ESI that is varied by cookie is the header with user-specific information (which is on 100% of pages).
Everything in this post is Varnish 3.x specific (I'm personally using 3.0.2).
Also, after a few weeks of digging into this, I really have no idea what I'm doing any more, so if you find something odd in the configs, just let me know.
I'm surprised this hasn't had a really full answer in 10 months. This could be a really useful page.
You pointed out yourself that:
Varnish doesn't cache requests with cookies
Varnish ignores Cache-Control: no-cache header
The first thing is, does everyone in your app need a session? If not, don't start the session, or at least delay starting it until it's really necessary (i.e. they log in or whatever).
If you can still cache pages when users are logged in, you need to be really careful you don't serve a user a page which was meant for someone else. But if you're going to do it, edit vcl_recv() to strip the session cookie for the pages that you want to cache.
You can easily get Varnish to process the no-cache directive in vcl_fetch() and in fact you've already done that.
Another problem I found is that Symfony by default sets max-age to 0, which means the responses won't ever get cached by the default logic in vcl_fetch.
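For reference, a Symfony2 controller can opt a response into shared caching explicitly; a minimal sketch (the template name is a placeholder, and 1200 s mirrors the s-maxage mentioned in the question):

use Symfony\Component\HttpFoundation\Response;

public function listAction()
{
    $response = new Response($this->renderView('AcmeBundle:Article:list.html.twig'));

    // Without this, Symfony's default max-age of 0 keeps Varnish from storing the page.
    $response->setPublic();
    $response->setSharedMaxAge(1200); // sent as Cache-Control: public, s-maxage=1200

    return $response;
}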
I also noticed that you had the port set in Varnish to:
backend default {
    .host = "127.0.0.1";
    .port = "80";
}
You yourself said that Apache is running on port 9000, so this doesn't seem to match. You would normally set Varnish to listen on the default port (80) and point Varnish at the backend on port 9000, or whatever it is.
If that's your entire configuration, the vcl_recv is configured twice.
For the pages you want to cache, can you send the caching headers? This would make the most sense, since images probably already have your Apache caching headers and the application logic decides which pages can actually be cached, but you can also force this in Varnish.
You could use a vcl_fetch like this:
# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
    # set minimum timeouts to auto-discard stored objects
    # set beresp.prefetch = -30s;
    set beresp.grace = 120s;

    if (beresp.ttl < 48h) {
        set beresp.ttl = 48h;
    }

    if (!beresp.cacheable) {
        pass;
    }

    if (beresp.http.Set-Cookie) {
        pass;
    }

    # if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
    #     pass;
    # }

    if (req.http.Authorization && !beresp.http.Cache-Control ~ "public") {
        pass;
    }
}
This caches in Varnish only the requests that are marked cacheable. Also, be aware that your configuration doesn't cache requests with cookies.