How to enable GeoIP on Magento with Varnish page cache - php

I currently have 3 stores online with 3 different domains, running Magento with Apache and Varnish (using the Phoenix page cache extension) on CentOS.
One store is for the UK, another for Ireland and another for the USA.
The trouble is, if (for example) a US user hits the UK store, I would like the user to be notified on the page to go to the correct store (I do not want them automatically redirected).
I was able to use php-pecl-geoip with the MaxMind database to get this to work, but as the number of users on my website increased I had to start using Varnish.
How could I implement this functionality with Varnish, so that I know what country the user is from and can display a message pointing them to their relevant website?

Gunah, I think you missed the point here.
When you put Varnish in front of Apache, the client IP that PHP sees will always be the IP of Varnish (127.0.0.1 if it stays on the same server).
molleman, in this case you need to look at the X-Forwarded-For header set by Varnish to get the real client IP. You can see how Varnish sets it in the default.vcl:
if (req.http.x-forwarded-for) {
    set req.http.X-Forwarded-For =
        req.http.X-Forwarded-For + ", " + client.ip;
} else {
    set req.http.X-Forwarded-For = client.ip;
}
If your web server is behind a load balancer, then you need a bit more work. Please refer here for a solution: Varnish removes Public IP from X-Forwarded-for
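On the PHP side, a minimal sketch of the lookup (assuming the php-pecl-geoip extension is loaded and Varnish is the only proxy in front of Apache; the header handling is illustrative, not production-hardened):

<?php
// Recover the client IP that Varnish appended to X-Forwarded-For.
// (With Varnish as the only proxy, the right-most entry is the address
// Varnish saw; left-most entries could have been sent by the client itself.)
$forwardedFor = isset($_SERVER['HTTP_X_FORWARDED_FOR']) ? $_SERVER['HTTP_X_FORWARDED_FOR'] : '';
$parts = array_values(array_filter(array_map('trim', explode(',', $forwardedFor))));
$clientIp = $parts ? end($parts) : $_SERVER['REMOTE_ADDR'];

// Country lookup via the PECL geoip extension and the MaxMind database.
$countryCode = geoip_country_code_by_name($clientIp); // e.g. "US", "GB", "IE"

if ($countryCode === 'US') {
    // Show a notice instead of redirecting, as the question asks.
    echo 'It looks like you are visiting from the US - you may prefer our US store.';
}
?>

Bear in mind that once Varnish serves a page from its cache this PHP never runs, which is why the AJAX-style controller suggested in the next answer is usually needed for cached pages.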

You can create your own controller with a JSON action result in Magento.
Then you can check the result with JavaScript and output the message.
Do not forget to add your controller to the whitelist in Varnish.
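As a rough sketch of that idea (the module, class and route are hypothetical, assuming a Magento 1 style front controller and the same PECL geoip extension; the usual config.xml router setup is omitted):

<?php
// Hypothetical app/code/local/Mycompany/Geo/controllers/CountryController.php
class Mycompany_Geo_CountryController extends Mage_Core_Controller_Front_Action
{
    public function indexAction()
    {
        // Resolve the real client IP as in the answer above (X-Forwarded-For first).
        $forwardedFor = (string) $this->getRequest()->getServer('HTTP_X_FORWARDED_FOR');
        $parts = array_values(array_filter(array_map('trim', explode(',', $forwardedFor))));
        $ip = $parts ? end($parts) : $this->getRequest()->getServer('REMOTE_ADDR');

        $body = json_encode(array('country' => geoip_country_code_by_name($ip)));

        $this->getResponse()
            ->setHeader('Content-Type', 'application/json', true)
            ->setBody($body);
    }
}

On the storefront, a small piece of JavaScript can then fetch this action via AJAX and show the "please visit your local store" notice when the returned country doesn't match the current store, so the message stays accurate even on pages served straight from the Varnish cache.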

Related

Website to read file on diff raspberry pi and display values based on variable

I'm trying to create a website and have 2 Raspberry Pis that will connect to it.
The goal is:
rpiA's display will be different from rpiB's.
I'll put a file on each rpi containing its name. When I visit the website using rpiA, rpiA's name will be displayed (and vice versa).
What I did is place a file in var/www/html/name.php:
<?php
$xname= 'RPI-001';
echo $xname;
?>
Then on the website I placed:
$device_name = file_get_contents('http://127.0.0.1/name.php');
echo $device_name;
However, the result is always empty. I checked allow_url_fopen and it is On. Maybe the reason is that 127.0.0.1 is too generic and a specific IP is needed?
I also tried curl but the result is Error 404.
Is there another way to do this?
I did not consider a login or session since I'm using a small screen on the rpi and it would be hard to type without VNC.
127.0.0.1 is the IP of the system running the request. If you run that file_get_contents on any central server and access it using a browser, it will always access the server itself.
If you want to access a file on the system that is accessing the website, you need to use the IP address of the client, e.g. file_get_contents('http://' . $_SERVER['REMOTE_ADDR'] . '/name.php') (note the two slashes after http:).
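A minimal sketch of that on the central server (assuming each Pi runs its own web server exposing name.php and is reachable from the server; no error handling beyond a fallback):

<?php
// Ask the web server on the requesting client (the Pi) for its name file.
$url = 'http://' . $_SERVER['REMOTE_ADDR'] . '/name.php';
$deviceName = @file_get_contents($url);

if ($deviceName === false) {
    $deviceName = 'unknown device'; // Pi not reachable or name.php missing
}
echo htmlspecialchars($deviceName);
?>

Note that this only works if the central server can open a connection back to the Pi (same network, no NAT or firewall in between).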

Nginx to cache only for bots

I have a decent-sized website (nginx -> apache -> mod_php/mysql) that I want to tune a bit, and I find the biggest problem is that search bots tend to overload it by sending many requests at once.
There is a cache in the site's core (that is, in PHP), so the site's author said there should be no problem, but in fact the bottleneck is that Apache's response time gets too long because there are too many requests for the page.
What I can imagine is having some nginx-based cache that caches pages only for bots. The TTL may be high enough (there is nothing so dynamic on the page that it can't wait another 5-10 minutes to be refreshed). Let's define a 'bot' as any client that has 'Bot' in its UA string ('BingBot' as an example).
So I tried something like this:
map $http_user_agent $isCache {
    default 0;
    ~*(google|bing|msnbot) 1;
}

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    ...
    location / {
        proxy_cache my_cache;
        proxy_cache_bypass $isCache;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_pass http://my_upstream;
    }

    # location for images goes here
}
Am I right with my approach? It looks like it won't work.
Are there any other approaches to limit the load from bots? Preferably without sending 5xx codes to them (as search engines can lower the positions of sites that return too many 5xx responses).
Thank you!
If your content pages may differ (say a user is logged in and the page contains "welcome John Doe"), then that version of the page may end up cached, because each matching request updates the cached copy (i.e. a logged-in person will update the cached version, including their session cookies, which is bad).
It is best to do something similar to the following:
map $http_user_agent $isNotBot {
    ~*bot    "";
    default  "IAmNotARobot";
}

server {
    ...
    location / {
        ...
        # Bypass the cache for humans
        proxy_cache_bypass $isNotBot;
        # Don't cache copies of requests from humans
        proxy_no_cache $isNotBot;
        ...
    }
    ...
}
This way, only requests by a bot are cached for future bot requests, and only bots are served cached pages.

Limit Page to One Simultaneous Request

Using PHP or Apache (I'd also consider moving to Nginx if necessary), how can I make a page run only once at a time? If three requests come in at the same time, one would complete entirely, then the next, and then the next. At no time should the page be accessed by more than one request at a time.
Think of it like transactions! How can I achieve this? It should be one page per server, not per user or IP.
What do you want to do with the "busy" state on the server? Return an error right away, or keep requests waiting until the previous one finishes?
If you just want the server to refuse the request, you can do it on both nginx and apache:
limit_req module for nginx
using mod_security on apache
The "tricky" part of your request is that you don't want to limit per IP as people usually do, but globally per URI. I know it should be possible with mod_security, but I haven't done that myself; I do have this configuration working for nginx:
http {
    # create a zone keyed by $packageid (my internal variable, similar to a vhost)
    limit_req_zone $packageid zone=heavy-stuff:10m rate=1r/s;
}
then later:
server {
    set $packageid some-id;

    location = /some-heavy-stuff {
        limit_req zone=heavy-stuff burst=1 nodelay;
    }
}
What it does for me is create N limit zones, one for each of my servers. The zone is then used to count requests and allow only 1 per second.
Hope it helps.
If the same user sends the request from the same page, then use session_start(); this will block the other requests until the first request finishes.
Example:
http://codingexplained.com/coding/php/solving-concurrent-request-blocking-in-php
If you want to block requests from different browsers/clients, keep the entries in a database and process them one by one.
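Since session locking only serializes requests coming from the same client, here is a hedged sketch of a per-page, per-server lock using flock(); the lock file path is an assumption, and waiting requests will simply queue until the lock is released:

<?php
// Serialize execution of this page across ALL clients on this server.
$lock = fopen('/tmp/my-page.lock', 'c'); // hypothetical lock file path
if ($lock === false) {
    header('HTTP/1.1 500 Internal Server Error');
    exit('Could not open lock file');
}

// Blocks until no other request holds the lock
// (use LOCK_EX | LOCK_NB instead if you prefer to fail fast).
flock($lock, LOCK_EX);

// ... do the work that must never run concurrently ...

flock($lock, LOCK_UN);
fclose($lock);
?>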

Can't access Session variables on different servers

I have dedicated a server to run Memcached and store sessions, so that all my servers can work on the same session without difficulties.
But somehow I think I may have misunderstood what Memcached makes possible for PHP sessions.
I thought that I would be able to stand on Apache 1 (a.domain.com) and create a session, e.g. $_SESSION['test'] = "This string is saved in the session", then go to Apache 2 (b.domain.com) or c.domain.com, simply continue the session, type echo $_SESSION['test']; and it would output the string.
It doesn't, but I am sure I was told that memcached would be a great tool if you have multiple webservers that need to share the same session.
What have I done wrong?
By the way, we seriously need a fully detailed tutorial or ebook describing how to set up the server, use PHP, build clusters etc. based on Memcached.
In my php.ini file it says:
session.save_path = "192.168.100.228:11211"
Tutorials told me not to define a protocol, and the IP address is the one assigned to Apache 3, the memcached server.
Here is an image of phpinfo()
The domain in session.cookie_domain is not actually called "domain"; it is a .local domain.
It has been changed for this image.
EDIT:
Just for information: when I use simple Memcached-based PHP commands, everything works perfectly. But somehow, when I try to save a session, the memcached server doesn't store the item.
This works:
<?php
$m = new Memcached();
$m->addServer('192.168.100.228', 11211);
$m->set('int', 99);
$m->set('string', 'a simple string');
$m->set('array', array(11, 12));
/* expire 'object' key in 5 minutes */
$m->set('object', new stdclass, time() + 300);
var_dump($m->get('int'));
var_dump($m->get('string'));
var_dump($m->get('array'));
var_dump($m->get('object'));
?>
This doesn't work
<?php
session_start();
$_SESSION['name'] = "This is a simple string.";
?>
EDIT 2: THE SOLUTION
I noticed that after deleting the cache history, including cookies etc., the browser didn't actually finish the job. The problem continued because it hung on to the original individual session id, which kept each subdomain separated from the others.
Everything described here is correct; just make sure your browser really resets its cookies when you ask it to. >.<
By default (session) cookies are domain specific, so set the cookie domain in your php.ini
session.cookie_domain = ".domain.com"
Also see here
Allow php sessions to carry over to subdomains
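For reference, a hedged runtime sketch of the same setup (assuming the memcached PECL extension, not the older memcache one, is installed on every web server; with memcache the handler name and a tcp:// prefix in the save path would differ):

<?php
// Must run before session_start() on every web server in the cluster.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '192.168.100.228:11211');

// Share the session cookie across a.domain.com, b.domain.com and c.domain.com.
session_set_cookie_params(0, '/', '.domain.com');

session_start();
$_SESSION['test'] = 'This string is saved in the session';
?>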
Make sure to restart your webserver and clear all of your browser cookies after making the change. Your browser could get confused if you have cookies with the same name but different subdomains.
Other things to check:
That the sessions work fine on each individual server.
Make sure the session handler is set properly by using phpinfo(). If you are working with a large codebase, especially inherited / 3rd-party stuff, there may be something overriding it (see the quick check after this list).
If you are using 3rd-party code - like phpBB for instance - check that the cookie settings are correct in there too.
(Please note: this answer was tidied to remove brainstorming; all relevant info was kept.)
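A quick runtime check for the handler point above, which can be dropped into any page on each server (purely illustrative):

<?php
// Confirm which handler and save path are actually in effect at runtime,
// in case framework or third-party code overrides php.ini.
echo 'handler: ' . ini_get('session.save_handler') . "\n";
echo 'path:    ' . ini_get('session.save_path') . "\n";
echo 'domain:  ' . ini_get('session.cookie_domain') . "\n";
?>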

How to properly set up Varnish for Symfony2 sites?

I have a website (with ESI) that uses the Symfony2 reverse proxy for caching. The average response time is around 100ms. I tried installing Varnish on the server to try it out. I followed the guide from the Symfony cookbook step by step and deleted everything in the cache folder, but the http_cache folder was still created when I tried it out. So I figured I could try commenting out $kernel = new AppCache($kernel); in app.php. That worked pretty well: http_cache wasn't created anymore and, according to varnishstat, Varnish seemed to be working:
12951 0.00 0.08 cache_hitpass - Cache hits for pass
1153 0.00 0.01 cache_miss - Cache misses
That was out of around 14000 requests, so I thought everything would be alright. But after running echoping I found that responses rose to ~2 seconds.
Apache runs on port 9000 and Varnish on 8080, so I ran echoping using echoping -n 10 -h http://servername/ X.X.X.X:8080.
I have no idea what could be wrong. Are there any additional settings needed to use Varnish with Symfony2? Or am I simply doing something wrong?
As requested, here's my default.vcl with the modifications I've made so far.
I found 2 issues with Varnish's default config:
it doesn't cache requests with cookies (and everyone in my app has a session assigned)
it ignores the Cache-Control: no-cache header
So I added conditions for these cases to my config and it performs fairly well now (~175 req/s, up from ~160 with the S2 reverse proxy - but honestly, I expected a bit more). I just have no idea how to check whether it's all set up correctly, so any input is welcome.
Most pages have their cache varied by cookie, with s-maxage 1200. Common ESI includes aren't varied by cookie and have quite a low s-maxage (articles, article lists). User profile pages aren't cached at all (no-cache) and I'm not really sure if the ESI includes on those pages are even being cached by Varnish. The only ESI that is varied by cookie is the header with user-specific information (which is on 100% of pages).
Everything in this post is Varnish 3.X specific (I'm personally using 3.0.2).
Also, after a few weeks of digging into this, I really have no idea what I'm doing anymore, so if you find something odd in the configs, just let me know.
I'm surprised this hasn't had a really full answer in 10 months. This could be a really useful page.
You pointed out yourself that:
Varnish doesn't cache requests with cookies
Varnish ignores Cache-Control: no-cache header
The first thing is, does everyone in your app need a session? If not, don't start the session, or at least delay starting it until it's really necessary (i.e. they log in or whatever).
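As a sketch of what delaying the session can look like in plain PHP (Symfony2 normally handles this for you when the session is configured to start lazily, so this is only illustrative):

<?php
// Only resume a session if the browser already presents a session cookie;
// anonymous visitors get no cookie, so Varnish can cache their pages.
if (isset($_COOKIE[session_name()])) {
    session_start();
}

// Later, e.g. in the login handler, start the session explicitly:
// session_start();
// $_SESSION['user_id'] = $userId;
?>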
If you can still cache pages when users are logged in, you need to be really careful you don't serve a user a page which was meant for someone else. But if you're going to do it, edit vcl_recv() to strip the session cookie for the pages that you want to cache.
You can easily get Varnish to process the no-cache directive in vcl_fetch() and in fact you've already done that.
Another problem I found is that Symfony by default sets max-age to 0, which means responses won't ever get cached by the default logic in vcl_fetch.
I also noticed that you had the port set in Varnish to:
backend default {
    .host = "127.0.0.1";
    .port = "80";
}
You yourself said that Apache is running on port 9000, so this doesn't match. You would normally set Varnish to listen on the default port (80) and point the backend at port 9000, or whatever port Apache actually uses.
If that's your entire configuration, then vcl_recv is configured twice.
For the pages you want to cache, can you send the caching headers? This would make the most sense, since images probably already carry your Apache caching headers and the application logic decides which pages can actually be cached, but you can force this in Varnish as well.
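For example, here is a minimal, hypothetical Symfony2 controller that sends such headers (the bundle, class and response body are made up; the relevant calls are setPublic() and setSharedMaxAge()):

<?php
// Hypothetical Symfony2 controller showing the cache headers Varnish needs.
namespace Acme\DemoBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class ArticleController extends Controller
{
    public function showAction()
    {
        $response = new Response('...page body...');

        // Mark the response as cacheable by shared caches (Varnish)
        // and keep it for 20 minutes, matching the s-maxage=1200 mentioned above.
        $response->setPublic();
        $response->setSharedMaxAge(1200);

        return $response;
    }
}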
You could use a vcl_fetch like this:
# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
    # set minimum timeouts to auto-discard stored objects
    # set beresp.prefetch = -30s;
    set beresp.grace = 120s;

    # In Varnish 3 there is no beresp.cacheable; a response the built-in
    # logic considers uncacheable arrives here with a ttl of 0 or less.
    if (beresp.ttl <= 0s) {
        return (hit_for_pass);
    }
    if (beresp.http.Set-Cookie) {
        return (hit_for_pass);
    }
    # if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
    #     return (hit_for_pass);
    # }
    if (req.http.Authorization && beresp.http.Cache-Control !~ "public") {
        return (hit_for_pass);
    }

    # keep cacheable objects for at least 48 hours
    if (beresp.ttl < 48h) {
        set beresp.ttl = 48h;
    }
}
This one caches, in Varnish, only the requests that are marked as cacheable. Also, be aware that your configuration doesn't cache requests with cookies.
