I have a server that I want to set up as a load balancer/reverse proxy.
nginx/1.14.2 running on Debian 10.
I do not want any caching at all; when people visit the load-balancing nginx server, it should simply pass the traffic straight through to the backend servers (chosen by nginx's ip_hash algorithm) as if the clients had connected to them directly.
I also want to use Cloudflare on top of this load balancer for its CDN and cache.
Here is my current setup:
upstream backend {
    ip_hash;
    server node1.example.com;
    server node2.example.com;
    keepalive 100;
}

server {
    listen 80;
    listen [::]:80;
    access_log off;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        real_ip_header X-Forwarded-For;
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_request_buffering off;
        proxy_buffering off;
    }
}
All nodes and the load balancer have the following in their conf.d/ (taken straight from Cloudflare's recommended configuration for nginx):
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 104.16.0.0/12;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 131.0.72.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2c0f:f248::/32;
set_real_ip_from 2a06:98c0::/29;
real_ip_header CF-Connecting-IP;
This seems to work fine; CF-Connecting-IP is set to the client's IP.
Issue 1
PHP running on node1.example.com or node2.example.com currently reports the following, where a.b.c.d is the load balancer's IP and w.x.y.z is the connecting client's IP:
["REMOTE_ADDR"]=> "a.b.c.d"
["HTTP_X_FORWARDED_FOR"]=> "w.x.y.z"
I thought real_ip_header X-Forwarded-For; would take the X-Forwarded-For header (which comes from Cloudflare) and store it as the real IP, so that PHP would report REMOTE_ADDR with the same value as HTTP_X_FORWARDED_FOR.
This is what I want:
["REMOTE_ADDR"]=> "w.x.y.z"
["HTTP_X_FORWARDED_FOR"]=> "w.x.y.z"
How can I accomplish this?
Issue 2
The load balancer is adding the request HTTP header CACHE_CONTROL: max-age=0.
Is this correct? If not, how can I have the load balancer pass along whatever CACHE_CONTROL header Cloudflare sends?
Issue 3
The load balancer is setting the request HTTP header CONNECTION: close, but if I access the backend directly I always get CONNECTION: keep-alive. Is this correct? I set keepalive on the load balancer, but the connection always seems to be closed.
Issue 2
The load balancer is adding the request HTTP header CACHE_CONTROL: max-age=0. Is this correct? If not, how can I have the load balancer pass along whatever CACHE_CONTROL header Cloudflare sends?
By default, Nginx's cache does not honour the Cache-Control: no-cache request header, nor the Pragma: no-cache request header. You must explicitly configure Nginx to bypass the cache and pass the request on to the origin server when the user agent sends these request headers.
This will probably help you:
proxy_cache_bypass $http_pragma;
proxy_cache_bypass $http_cache_control;
With this in place, the proxy will honour the cache headers sent by Cloudflare.
Issue 3
The load balancer is setting the request HTTP header CONNECTION: close, but if I access the backend directly I always get CONNECTION: keep-alive. Is this correct? I set keepalive on the load balancer, but the connection always seems to be closed.
According to the docs for HTTP keepalive, you should also set:
proxy_http_version 1.1;
proxy_set_header Connection "";
Please note that the default value for proxy_http_version is 1.0.
Also make sure that the 100 connections you have configured (keepalive 100;) are enough, because setting the value too low can also lead to CONNECTION: close behaviour.
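Putting those pieces together, a minimal load-balancer sketch (reusing the upstream and hostnames from the question above) could look like this:
upstream backend {
    ip_hash;
    server node1.example.com;
    server node2.example.com;
    # idle keepalive connections kept open per worker; tune to your traffic
    keepalive 100;
}
server {
    listen 80;
    location / {
        # HTTP/1.1 plus an empty Connection header is required
        # for upstream keepalive to actually be used
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_pass http://backend;
    }
}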
Issue 1
...snip...
I thought the real_ip_header X-Forwarded-For; would use the X-Forwarded-For (which comes from CloudFlare) and would store it as the real IP, such that PHP would say $_SERVER["REMOTE_ADDR"] is the same as the X-Forwarded-For
You can, but please don't. It's a matter of semantics, really: you could use your car's front-left wheel as its steering wheel, since both are wheels, but people will look at you funny if you do.
$_SERVER["REMOTE_ADDR"] in your PHP script or $remote_addr in nginx refers to the direct client it is accepting requests from: your client if they connected directly to your backend, or your load balancer/proxy if your client connected through that.
The X-Forwarded-For request header from your load balancer (or proxy) server carries your real client's IP address. Because it's a plain request header, any client can spoof it, either accidentally (say, through a client misconfiguration) or on purpose.
This distinction exists for security reasons: if a request is a proxied/load-balanced request (carrying an X-Forwarded-For request header), you can accept it if the remote address ($remote_addr in nginx or $_SERVER["REMOTE_ADDR"] in PHP) is in your list of trusted load balancers/proxies, or reject it as a forged request if it is not.
Issue 2
The load balancer is adding the request HTTP header CACHE_CONTROL: max-age=0
Is this correct?
Cache-Control can be a request or a response header, so you need to confirm who sent this header: your client, Cloudflare, your nginx load balancer, or your PHP script.
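One way to narrow it down is to log the header on both sides on the load balancer. A minimal sketch (the format name and log path are just placeholders):
# in the http {} block:
# $http_cache_control      = Cache-Control on the incoming request
# $sent_http_cache_control = Cache-Control on the response sent back
log_format cachedebug '$remote_addr req="$http_cache_control" resp="$sent_http_cache_control"';
access_log /var/log/nginx/cachedebug.log cachedebug;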
Issue 3
The load balancer is setting the request HTTP header CONNECTION: close, but if I access the backend directly I always get CONNECTION: keep-alive. Is this correct?
nginx by default sends Host: $proxy_host and Connection: close with every proxy_pass backend request. Use the proxy_set_header directive to override that.
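For example, a sketch of overriding both defaults inside the proxying location (upstream name taken from the question):
location / {
    proxy_http_version 1.1;
    # forward the client's Host header instead of the upstream name
    proxy_set_header Host $host;
    # clear the Connection header so upstream keepalive can be used
    proxy_set_header Connection "";
    proxy_pass http://backend;
}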
Issue 1 was resolved by using set_real_ip_from a.b.c.d on both node1 and node2, where a.b.c.d is the load balancer's IP.
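On node1 and node2 that means adding the load balancer to the existing list of trusted addresses, roughly like this (a.b.c.d stands for the load balancer's IP, as above):
# conf.d/ on node1 and node2, alongside the Cloudflare ranges:
set_real_ip_from a.b.c.d;
# real_ip_header CF-Connecting-IP; stays as it is, since the load balancer
# passes Cloudflare's request headers through to the nodes unchanged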
Related
For a website I'm using an nginx configuration that requires a client SSL certificate. I want my Symfony/PHP project to be able to verify that a client SSL certificate is being used (and provide some extra information from the certificate as well). So I was thinking of doing this by adding it to the HTTP request header.
In my nginx site configuration I have set:
ssl_client_certificate /home/user/ssl/ca.crt;
ssl_verify_client on;
This works; the client certificate is obligatory.
But I want my underlying Symfony/PHP project to be able to verify that a client certificate is being used. I was thinking of adding it to the HTTP request header, but I only seem to be able to add it to the HTTP response header (back to the browser), like this (in the same nginx site config):
location / {
    try_files $uri /app.php$is_args$args;
    add_header X-ssl-client-verify $ssl_client_verify;
}
In Firefox I can indeed see this response header, but that is not what I want (and it can be a security hazard). I've also looked into this:
proxy_set_header X-ssl-client-verify $ssl_client_verify;
But this does not work because I'm not using a proxy.
Is there some other way to add an element to the request header? Or is there an alternative way to get client SSL certificate information into my Symfony/PHP project?
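If the app is served through PHP-FPM (which the try_files ... /app.php line suggests), one option is to skip headers entirely and pass the certificate details to PHP as FastCGI parameters. A sketch, assuming a fairly standard PHP location block (the FPM socket path is a placeholder):
location ~ ^/app\.php(/|$) {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock; # adjust to your FPM socket
    # available in PHP as $_SERVER['SSL_CLIENT_VERIFY'] and $_SERVER['SSL_CLIENT_S_DN']
    fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify;
    fastcgi_param SSL_CLIENT_S_DN   $ssl_client_s_dn;
}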
I need to develop a challenge page much like the Cloudflare firewall challenge.
I know how to build the front end and the back end of the challenge app, and I know how to set it up on the server.
The problem is how to integrate it with the website, which is on one server, while the challenge app is on another server. How should I make the two servers communicate? Both servers will run Ubuntu 16.04. Thank you all in advance.
I think it's better to solve this the way Cloudflare does, with an nginx server:
Make an nginx reverse proxy with a rate limit.
If the limit is hit, the user is redirected to an error page.
Integrate your challenge app with that error page.
More about this configuration is here:
https://serverfault.com/questions/645154/how-to-redirect-to-an-other-link-when-excess-request-limit-req-with-nginx
And how to use PHP in error pages is covered here:
Nginx, PHP + FPM Custom Error Pages
You can run this reverse proxy on a third server, or on the challenge app server.
Point your domain at the reverse proxy, then create the nginx config:
server {
    listen 80 default_server;
    server_name _;
    client_body_timeout 5s;
    client_header_timeout 5s;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/; # replace with upstream IP
    }
}
You have to combine this with a custom PHP error page.
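The rate-limit part could look roughly like this (the zone name, rate, and the challenge app address are placeholders):
# in the http {} block: one shared zone keyed by client IP
limit_req_zone $binary_remote_addr zone=challenge:10m rate=10r/s;

server {
    listen 80 default_server;

    location / {
        limit_req zone=challenge burst=20 nodelay;
        # requests over the limit get a 503, which is handed to the challenge app
        limit_req_status 503;
        error_page 503 = @challenge;
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/; # replace with website upstream IP
    }

    location @challenge {
        proxy_set_header Host $host;
        proxy_pass http://11.11.11.11; # replace with challenge app IP (placeholder)
    }
}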
I have a Symfony 3.2 application (running on port 8443) using FOSUserBundle. When anonymous users access 'https://[myurl].com:8443', they are redirected to 'https://[myurl].com:8443/login' for the login process. This redirection works fine when accessing the application directly, but now we want to use a reverse proxy to forward customers' requests to the application. Customers would use the standard HTTPS port 443.
What happens is the following: users access the application at 'https://myurl.com'.
The request is forwarded by the reverse proxy to the web server (IIS) hosting the application on port 8443.
The user making the request is then redirected to 'https://myurl.com:8443/login', which does not work because port 8443 is only open server-side.
I tried different solutions in Symfony but was not able to make it work:
- set up the reverse proxy in Symfony: Request::setTrustedProxies(array('123.456.78.89'));
- set http_port/https_port in config.yml
- set $_SERVER['SERVER_PORT'] = 443;
Any idea how I can solve this?
Thanks
In addition to #Gor's answer, I think you should also configure your proxy to add the X-Forwarded-* headers. In nginx, something like:
location / {
    proxy_pass http://myurl:8443;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port $server_port;
}
Open the following file:
web/app.php
Right after this line:
$request = Request::createFromGlobals();
Insert this block:
// tell Symfony about your reverse proxy
Request::setTrustedProxies(
    // the IP address (or range) of your proxy
    ['192.0.0.1', '10.0.0.0/8'],
    // trust *all* "X-Forwarded-*" headers
    Request::HEADER_X_FORWARDED_ALL
    // or, if your proxy instead uses the "Forwarded" header
    // Request::HEADER_FORWARDED
    // or, if you're using AWS ELB
    // Request::HEADER_X_FORWARDED_AWS_ELB
);
See:
"How to Configure Symfony to Work behind a Load Balancer or a Reverse Proxy"
http://symfony.com/doc/3.4/deployment/proxies.html
I am trying to send the real visitor IP to nginx from PHP.
This is the situation:
server A - example.com/a.php
server B - example.com/file.txt
When accessing example.com/a.php, it downloads file.txt located on server B, but the nginx logs show server A's IP as the one requesting the download. I guess that is correct, since file.txt is downloaded via a.php located on server A.
So how can I send the IP of the visitor instead of the server's to nginx?
I already have this in my nginx config:
proxy_set_header X-Real-IP $remote_addr;
Thank you
Server A: add an X-Real-IP header with the client's IP to the outgoing request. This part depends on your code. For example, with cURL you would add curl_setopt($ch, CURLOPT_HTTPHEADER, [ 'X-Real-IP: '.$_SERVER['REMOTE_ADDR'] ]).
Server B: you need to configure nginx. Add to nginx's server config block:
set_real_ip_from SERVER_A_IP;
real_ip_header X-Real-IP; (not strictly required, since it is the default value)
You would need to add it to your request headers.
$opts['http']['header'] = 'X-Real-IP: ' . $_SERVER['REMOTE_ADDR'] . "\r\n";
You would also need to configure Nginx to accept this, with the set_real_ip_from config directive.
A better option would be to use cURL (see #Terra's answer), which gives you a bit more flexibility than the fopen wrappers.
The best option, however, is just to let Nginx do this itself; that is far more efficient than piping all the data through PHP. Use proxy_pass.
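A sketch of that approach on server A, where nginx fetches the file from server B directly and forwards the visitor's address (the location path and SERVER_B_IP are placeholders):
location /downloads/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://SERVER_B_IP/;
}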
I have three servers: one for load balancing and two for serving the web application. My load balancing works fine when I serve my web page as a static site, but when I log in, the page does not respond correctly because each page load can land on a different server. How can I keep a client on the same server until they log out? My load-balancing server configuration is:
upstream web_backend {
    server 192.168.33.2;
    server 192.168.33.3;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}
You can use the session persistence feature of nginx:
If there is the need to tie a client to a particular application
server — in other words, make the client’s session “sticky” or
“persistent” in terms of always trying to select a particular server —
the ip-hash load balancing mechanism can be used.
With ip-hash, the client’s IP address is used as a hashing key to
determine what server in a server group should be selected for the
client’s requests. This method ensures that the requests from the same
client will always be directed to the same server except when this
server is unavailable.
To configure ip-hash load balancing, just add the ip_hash directive to
the server (upstream) group configuration:
In your case, just add ip_hash to your upstream definition:
upstream web_backend {
    ip_hash;
    server 192.168.33.2;
    server 192.168.33.3;
}