On my server I have Varnish (caching) running on port 80 with Apache on 8080.
Varnish caches very well when I set the headers like below:
$this->getResponse()->setHeader('Expires', '', true);
$this->getResponse()->setHeader('Cache-Control', 'public', true);
$this->getResponse()->setHeader('Cache-Control', 'max-age=2592000');
$this->getResponse()->setHeader('Pragma', '', true);
But this means people cache my website without ever retrieving a new version when it's available.
When I remove the headers, people retrieve a new version on every page reload (so Varnish never caches).
I cannot figure out what is going wrong here.
My ideal situation is people don't cache the html on the client side but leave that up to Varnish.
What you want is for Varnish to cache the resource and serve it to clients, and only generate a new version if something has changed. The easiest way to do this is to have Varnish cache it for a long time and invalidate the entry in Varnish (with a PURGE command) when that something changes.
By default, Varnish bases its cache rules on the headers the back-end supplies. So, if your PHP code generates the headers you described, the default Varnish VCL will adjust its caching strategy accordingly. However, it can only do this in a generalized, safe way (e.g. if you use a cookie, it will never cache). You know how your back-end works, so you should change Varnish's cache behavior not by sending different headers from the back-end, but by writing a Varnish .vcl file. Tell Varnish to cache the resource for a long time even though the Cache-Control or max-age headers are missing (set the time-to-live, ttl, in your .vcl file). Varnish will then serve the generated entry until the ttl has passed or you purge the entry.
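As a rough illustration only (Varnish 3 syntax assumed; the 30-day TTL and the purgers ACL are example values, not taken from your setup):

acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    # Let the application invalidate an entry when the content changes.
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_fetch {
    # Cache for 30 days even if the backend sends no caching headers.
    set beresp.ttl = 30d;
}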
If you've got this working, there's a more advanced option: cache the resource on the client but have the client 'revalidate' it every time it wants to use it. A browser does this with an HTTP GET plus an If-Modified-Since header (your response should include a Last-Modified header to provoke this behavior) or an If-None-Match header (your response should include an ETag header to provoke this behavior). This saves bandwidth because Varnish can respond with a 304 Not Modified response, without sending the whole resource again.
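As a rough PHP-side sketch of that revalidation handshake (renderPage() is a made-up placeholder for however you build the body):

<?php
// Hypothetical sketch: emit an ETag and answer conditional requests with 304.
function renderPage() {
    return '<html><body>expensive page</body></html>';
}

$content = renderPage();
$etag = '"' . md5($content) . '"';

header('ETag: ' . $etag);
header('Cache-Control: public, max-age=0, must-revalidate');

if (isset($_SERVER['HTTP_IF_NONE_MATCH'])
    && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified');  // no body needed, the client reuses its copy
    exit;
}

echo $content;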
The simplest approach is just to turn down the max-age to something more reasonable. Currently you have it set to 30 days; try setting it to 15 minutes:
$this->getResponse()->setHeader('Cache-Control', 'max-age=900');
Web caching is a somewhat complicated topic, exacerbated by some very different client interpretations. But in general this will lighten the load on your web server while ensuring that new content is available in a reasonable timeframe.
Set your standard HTTP headers for the client cache to whatever you want. Set a custom header that only Varnish will see, such as X-Varnish-TTL. Then in your VCL, incorporate the following code in your vcl_fetch sub:
if (beresp.http.X-Varnish-TTL) {
    C{
        char *ttl;
        /* first char in third param is length of header plus colon in octal */
        ttl = VRT_GetHdr(sp, HDR_BERESP, "\016X-Varnish-TTL:");
        VRT_l_beresp_ttl(sp, atoi(ttl));
    }C
    unset beresp.http.X-Varnish-TTL; // Remove so client never sees this
}
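On the back-end side this could be paired with something like the following PHP (the one-hour value and the client-side Cache-Control are only examples):

<?php
// Ask Varnish (via the VCL above) to keep this response for an hour,
// while the client itself is told not to cache it.
header('X-Varnish-TTL: 3600');
header('Cache-Control: no-cache');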
After recording the script, I added a Cookie Manager.
While running, cookies are not shown in the request headers in JMeter, and the listener shows a 'Connection closed' error.
But browser cookies are passed in the request headers in my application.
So how can I pass cookies in JMeter? Kindly give me a solution.
Please refer to the snapshot.
Thanks,
Vairamuthu.
Just add an HTTP Cookie Manager to the Test Plan before your request.
I don't think you should be sending the cookie. I would rather expect that you need to extract the value of this _xsrf cookie from the /api/login request and add the extracted value as an X-Xsrftoken header.
Add the next line to the user.properties file (located in JMeter's "bin" folder):
CookieManager.save.cookies=true
and just in case this line as well:
CookieManager.check.cookies=false
Restart JMeter to pick the properties up
Add an HTTP Header Manager to the request which is failing
Make sure it contains the following line:
Name: X-Xsrftoken
Value: ${COOKIE__xsrf}
More information:
Configuring JMeter
How to Load Test CSRF-Protected Web Sites
I have the following setup:
Some files are dynamically generated depending on some (only a few) session parameters. Since they do not have great diversity, I allow caching in proxies/browsers. The files get an ETag on their way out, and the reaction of the whole web application at first glance seems correct: files served correctly depending on the session situation, traffic saved.
And then this erroneous behavior:
But on closer inspection, I found that in its response, in the case of a 304 for those dynamically generated files, Apache wrongly sends a "Connection: close" header instead of the normally sent "Connection: Keep-Alive". What it should do is simply not manipulate anything concerning "Connection".
I cannot find any point at which to pinpoint the cause of this behavior: nowhere in the Apache config files is anything written except one single line in one single file where it is instructed to send a keepalive, which it does, as long as it does not send a 304 response for a dynamically generated file. Nowhere in PHP do I instruct it to send anything other than keepalives (and the latter only to try to counter the Connection: close).
Apache does not do this when it serves "normal" (non-dynamic) files (with 304 answers). So in some way I assume that maybe the PHP kernel is the one that interferes here without being asked. But then an added "Header set Connection 'Keep-Alive'" in the Apache config, which I also added to counter the closing of the connection, does not work either. Normally, when you put such a header-set rule (not of the "early" type) in the Apache config, the rule takes effect AFTER the finalization of any subordinate work on the requested document (thus AFTER the finalization of the PHP output). But in my case nothing happens, at least in the case of a 304 response. In all other cases, everything works normally and correctly.
Since some other files also go over the wire with each page request, I would appreciate getting Apache rid of those connection closures.
Does anybody have an idea what to do about this behavior?
P.S.: One day (and a good night's sleep) later, things are becoming clear:
The culprit in this case was an example snippet I had short-sightedly copied, which had "HTTP/1.>>>0<<< 304" (the zero!) in it.
This protocol version number gets (correctly) post-processed by Apache (after everything else, including the work of any Apache modules, has been finalized), in that it decides not to send a "Connection: Keep-Alive" over the wire, since that feature did not exist in HTTP/1.0.
The problem in this case was to realize that everything inside PHP and the Apache modules worked correctly, so something in their outer environment must have been wrong, and then to shift the view to anything in the code that could possibly influence that outer environment (e.g. the protocol version).
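In PHP terms, the fix boiled down to something like this (a simplified reconstruction, not the original snippet):

<?php
// Wrong: announcing HTTP/1.0 makes Apache strip keep-alive from the response.
// header('HTTP/1.0 304 Not Modified');

// Better: reuse the protocol version of the actual request.
header($_SERVER['SERVER_PROTOCOL'] . ' 304 Not Modified');
exit;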
Nginx waits until the request ends before sending cookies to the client. How can I send a cookie as soon as it is generated through PHP, without waiting for the end of the request?
If you're using nginx version 1.5.6+, you can set the fastcgi_buffering directive to 'off', or do something like
<?php
header('X-Accel-Buffering: no');
to achieve the same effect. Please note that PHP won't flush the headers (which contain the cookies) unless there's some output to send in the body (using echo or print). See this answer for more info on headers.
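A hedged sketch of the PHP side (the cookie name and value are made up):

<?php
// Tell nginx not to buffer this FastCGI response.
header('X-Accel-Buffering: no');

// The cookie we want the browser to receive right away.
setcookie('early_cookie', 'value123', time() + 3600, '/');

// Headers only go out together with some body output, so flush a byte.
echo ' ';
if (ob_get_level() > 0) {
    ob_flush();
}
flush();

// ... the long-running rest of the request continues here ...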
I think my question seems pretty casual but bear with me as it gets interesting (at least for me :)).
Consider a PHP page whose purpose is to read a requested file from the filesystem and echo it as the response. Now the question is how to enable caching for this page? The thing to point out is that the files can be pretty huge, and enabling the cache is meant to save the client from downloading the same content again and again.
The ideal strategy would be to use the "If-None-Match" request header and "ETag" response header in order to implement a reverse-proxy cache system. Even though I know this much, I'm not sure whether this is possible or what I should return as the response in order to implement this technique!
Serving huge files, or many auxiliary files, with PHP is not exactly what it's made for.
Instead, look at X-accel for nginx, X-Sendfile for Lighttpd or mod_xsendfile for Apache.
The initial request gets handled by PHP, but once the file to download has been determined, it sets a few headers to indicate that the server should handle the file sending, after which the PHP process is freed up to serve something else.
You can then use the web server to configure the caching for you.
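A hedged example of the Apache/mod_xsendfile variant (the file path and name are placeholders; on nginx you would send X-Accel-Redirect to an internal location instead):

<?php
// PHP only decides *which* file to serve; Apache streams it afterwards.
$file = '/var/www/protected/big-archive.zip';  // placeholder path

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="big-archive.zip"');
header('X-Sendfile: ' . $file);  // picked up by mod_xsendfile
exit;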
Statically generated content
If your content is generated from PHP and particularly expensive to create, you could write the output to a local file and apply the above method again.
If you can't write to a local file or don't want to, you can use HTTP response headers to control caching:
Expires: <absolute date in the future>
Cache-Control: public, max-age=<relative time in seconds since request>
This will cause clients to cache the page contents until it expires or when a user forces a page reload (e.g. press F5).
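In PHP those two headers could be emitted like this (30 days is just an example lifetime):

<?php
$maxAge = 30 * 24 * 3600;  // example: cache for 30 days

header('Cache-Control: public, max-age=' . $maxAge);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');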
Dynamically generated content
For dynamic content you want the browser to ping you every time, but only send the page contents if there's something new. You can accomplish this by setting a few other response headers:
ETag: <hash of the contents>
Last-Modified: <absolute date of last contents change>
When the browser pings your script again, it will add the following request headers respectively:
If-None-Match: <hash of the contents that you sent last time>
If-Modified-Since: <absolute date of last contents change>
The ETag mostly helps reduce network traffic, because in some cases you first have to generate the contents in order to know their hash.
The Last-Modified is the easiest to apply if you have local file caches (files have a modification date). A simple condition makes it work:
if (!isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) ||
    !file_exists('cache.txt') ||
    filemtime('cache.txt') > strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    // update cache file and send back contents as usual (+ cache headers)
} else {
    header('HTTP/1.1 304 Not Modified');
}
If you can't do file caches, you can still use ETag to determine whether the contents have changed meanwhile.
I have a website (with ESI) that uses the Symfony2 reverse proxy for caching. The average response is around 100 ms. I tried to install Varnish on the server to try it out. I followed the guide from the Symfony cookbook step by step and deleted everything in the cache folder, but the http_cache folder was still created when I tried it out. So I figured I could try to comment out $kernel = new AppCache($kernel); in app.php. That worked pretty well: http_cache wasn't created anymore, and according to varnishstat, Varnish seemed to be working:
12951 0.00 0.08 cache_hitpass - Cache hits for pass
1153 0.00 0.01 cache_miss - Cache misses
That was out of around 14,000 requests, so I thought everything would be all right. But after running echoping I found that responses had risen to ~2 seconds.
Apache runs on port 9000 and Varnish on 8080, so I ran echoping using echoping -n 10 -h http://servername/ X.X.X.X:8080.
I have no idea what could be wrong. Are there any additional settings needed to use Varnish with Symfony2? Or am I simply doing something wrong?
Per request, here's my default.vcl with the modifications I've made so far.
I found 2 issues with Varnish's default config:
it doesn't cache requests with cookies (and everyone in my app has a session assigned)
it ignores Cache-Control: no-cache header
So I added conditions for these cases to my config and it performs fairly well now (~175 req/s, up from ~160 with the S2 reverse proxy; but honestly, I expected a bit more). I just have no idea how to check whether it's all set up correctly, so any input is welcome.
Most pages have their cache varied by cookie, with s-maxage 1200. Common ESI includes aren't varied by cookie and have a quite low s-maxage (articles, article lists). User profile pages aren't cached at all (no-cache), and I'm not really sure whether the ESI includes on these are even being cached by Varnish. The only ESI that's varied by cookie is the header with user-specific information (that's on 100% of pages).
Everything in this post is Varnish 3.X specific (I'm personally using 3.0.2).
Also, after a few weeks of digging into this, I really have no idea what I'm doing anymore, so if you find something odd in the configs, just let me know.
I'm surprised this hasn't had a really full answer in 10 months. This could be a really useful page.
You pointed out yourself that:
Varnish doesn't cache requests with cookies
Varnish ignores Cache-Control: no-cache header
The first thing is, does everyone in your app need a session? If not, don't start the session, or at least delay starting it until it's really necessary (i.e. they log in or whatever).
If you can still cache pages when users are logged in, you need to be really careful you don't serve a user a page which was meant for someone else. But if you're going to do it, edit vcl_recv() to strip the session cookie for the pages that you want to cache.
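For instance, a minimal vcl_recv sketch (Varnish 3 syntax; the URL patterns are assumptions about which parts of your app are identical for all users):

sub vcl_recv {
    # Drop the session cookie on pages that are safe to share between users,
    # so Varnish can cache them; everything else keeps its cookie.
    if (req.url ~ "^/(articles|article-lists)/") {
        unset req.http.Cookie;
    }
}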
You can easily get Varnish to process the no-cache directive in vcl_fetch() and in fact you've already done that.
Another problem I found is that Symfony by default sets max-age to 0, which means responses won't ever get cached by the default logic in vcl_fetch.
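For reference, a hedged Symfony2 sketch of opting a response into shared caching (the 1200-second value mirrors the s-maxage mentioned in the question):

<?php
use Symfony\Component\HttpFoundation\Response;

// Explicitly mark a response as shareable so Varnish caches it,
// instead of relying on Symfony's default of max-age=0.
$response = new Response('<html><body>cacheable page</body></html>');
$response->setPublic();            // Cache-Control: public
$response->setSharedMaxAge(1200);  // Cache-Control: s-maxage=1200
$response->send();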
I also noticed that you had the port set in Varnish to:
backend default {
.host = "127.0.0.1";
.port = "80";
}
You yourself said that Apache is running on port 9000, so this doesn't seem to match. You would normally set Varnish to listen on the default port (80) and set Varnish to look up the backend on port 9000 or whatever.
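So, assuming Apache really is listening on 9000, the backend block would look more like this (with Varnish itself bound to the public port via its -a option):

backend default {
    .host = "127.0.0.1";
    .port = "9000";
}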
If that's your entire configuration, the vcl_recv is configured twice.
On the pages you want to cache, can you send the caching headers? This would make the most sense, since images probably already have your Apache caching headers and the app logic decides which pages can actually be cached, but you can also force this in Varnish.
You could use a vcl_fetch like this:
# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
    # set minimum timeouts to auto-discard stored objects
    # set beresp.prefetch = -30s;
    set beresp.grace = 120s;

    # Don't cache what the backend marked uncacheable (Varnish 3: ttl <= 0s).
    if (beresp.ttl <= 0s) {
        return (hit_for_pass);
    }

    if (beresp.http.Set-Cookie) {
        return (hit_for_pass);
    }

    # if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
    #     return (hit_for_pass);
    # }

    if (req.http.Authorization && beresp.http.Cache-Control !~ "public") {
        return (hit_for_pass);
    }

    # Everything else: keep it for at least 48 hours.
    if (beresp.ttl < 48h) {
        set beresp.ttl = 48h;
    }
}
This one caches in Varnish only the requests that are marked cacheable. Also, be aware that your configuration doesn't cache requests with cookies.