I have a high-load website, and my system runs out of memory at peak times. I want to split the load so that read operations, which happen to be on specific URLs, move to another server.
I am using nginx and php-fpm. How do I route specific URLs to be processed by PHP-FPM on a different server?
This is the blueprint of my requirements:
location /feed/generate {
    # use php-fpm on a different server
}
location / { # all other requests
    # use existing php-fpm
}
Set up php-fpm on the second server listening on an externally accessible IP (not 127.0.0.1), port 9000.
The IP address should be private (not routed to the internet) and/or firewalled to allow connections only from trusted hosts.
upstream feed_php_fpm {
    server <other server ip>:9000;
}
upstream local_fpm {
    server 127.0.0.1:9000;
}
location /feed/generate {
    fastcgi_pass feed_php_fpm;
    include fastcgi.conf;
}
location / {
    fastcgi_pass local_fpm;
    include fastcgi.conf;
}
Please understand what you are doing and the implications of php-fpm listening on a network port versus a file socket.
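As a sketch, the relevant pool settings on the second server might look like this; the file path, pool name, and IP addresses are placeholders and vary by distro and PHP version:

```ini
; e.g. /etc/php/8.2/fpm/pool.d/www.conf (path is a placeholder)
[www]
; Listen on the private network interface instead of a local socket
listen = 10.0.0.2:9000
; Accept FastCGI connections only from the web server's IP
listen.allowed_clients = 10.0.0.1
```

Restricting `listen.allowed_clients` (plus a firewall rule) matters here, because anyone who can reach port 9000 can ask PHP-FPM to execute scripts.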
I have a server that I want to set up as a load balancer/reverse proxy.
nginx/1.14.2 running on debian 10
I do not want caching at all. I simply want the load-balancing nginx server to pass connections straight through to the backend servers (chosen by nginx's ip_hash algorithm) as if the clients had connected to them originally.
I also want to use Cloudflare on top of this load balancer for its CDN and cache.
Here is my current setup:
upstream backend {
    ip_hash;
    server node1.example.com;
    server node2.example.com;
    keepalive 100;
}
server {
    listen 80;
    listen [::]:80;
    access_log off;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        real_ip_header X-Forwarded-For;
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_request_buffering off;
        proxy_buffering off;
    }
}
All nodes and the load balancer have this in their conf.d/ (taken directly from Cloudflare's recommendation for nginx):
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 104.16.0.0/12;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 131.0.72.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2c0f:f248::/32;
set_real_ip_from 2a06:98c0::/29;
real_ip_header CF-Connecting-IP;
This seems to work fine; CF-Connecting-IP is set to the client's IP.
Issue 1
The PHP application running on node1.example.com or node2.example.com currently reports the following, where a.b.c.d is the load balancer's IP and w.x.y.z is the connecting client's IP:
["REMOTE_ADDR"]=> "a.b.c.d"
["HTTP_X_FORWARDED_FOR"]=> "w.x.y.z"
I thought real_ip_header X-Forwarded-For; would take HTTP_X_FORWARDED_FOR (which comes from Cloudflare) and store it as the real IP, so that PHP would report REMOTE_ADDR with the same value as HTTP_X_FORWARDED_FOR.
This is what I want instead:
["REMOTE_ADDR"]=> "w.x.y.z"
["HTTP_X_FORWARDED_FOR"]=> "w.x.y.z"
How can I accomplish this?
Issue 2
The load balancer is adding the request HTTP header Cache-Control: max-age=0.
Is this correct? If not, how can I have the load balancer pass along whatever Cache-Control Cloudflare sends?
Issue 3
The load balancer is sending the request HTTP header Connection: close, but if I access a backend directly I always get Connection: keep-alive. Is this correct? I set keepalive on the load balancer, but the connection always seems to be closed.
Issue 2
The load balancer is adding the request HTTP header CACHE_CONTROL: max-age=0. Is this correct? If not, how can I just have the load balancer use whatever CACHE_CONTROL Cloudflare sends?
By default, nginx's cache honours neither the Cache-Control: no-cache request header nor the Pragma: no-cache request header. You must explicitly configure nginx to bypass the cache and pass the request on to the origin server when the user agent sends these request headers.
This will probably help you:
proxy_cache_bypass $http_pragma;
proxy_cache_bypass $http_cache_control;
In this case, the proxy will honour the cache headers sent by the Cloudflare services.
Issue 3
The load balancer is making the request HTTP header CONNECTION: closed but if
I access the backend I always get CONNECTION: keep-alive is this
correct? I set keepalive on the load balancer but it seems to always
be closed
According to the docs for HTTP keepalive, you should also set:
proxy_http_version 1.1;
proxy_set_header Connection "";
Please note that the default for proxy_http_version is 1.0.
Also make sure that the keepalive 100; you have set is high enough, because setting too low a value can also lead to Connection: close behavior.
Issue 1
...snip...
I thought the real_ip_header X-Forwarded-For; would use the X-Forwarded-For (which comes from CloudFlare) and would store it as the real IP, such that PHP would say $_SERVER["REMOTE_ADDR"] is the same as the X-Forwarded-For
You can, but please don't. It's semantics, really. You could use your car's front-left wheel as its steering wheel, for reasons, and both are wheels, but I'm sure people will look at you funny if you do.
$_SERVER["REMOTE_ADDR"] in your PHP script, or $remote_addr in nginx, refers to the direct client the server accepts requests from: your client if they connected directly to your backend, or your load balancer/proxy if your client connected through it.
The X-Forwarded-For request header from your load balancer (or proxy) carries the real client IP address. Because it's a plain request header, any client can spoof it, either accidentally (say, through a client misconfiguration) or on purpose.
This separation exists for security reasons: when a request is proxied/load-balanced (carrying an X-Forwarded-For request header), you can accept it if the remote address ($remote_addr in nginx or $_SERVER["REMOTE_ADDR"] in PHP) is in your list of trusted load balancers/proxies, or reject it as forged if it is not.
Issue 2
The load balancer is adding the request HTTP header CACHE_CONTROL: max-age=0
Is this correct?
Cache-Control can be a request or a response header, so you need to confirm who sent this header: your client, Cloudflare, your nginx load balancer, or your PHP script.
Issue 3
The load balancer is making the request HTTP header CONNECTION: closed but if I access the backend I always get CONNECTION: keep-alive is this correct?
nginx by default adds Host: $proxy_host and Connection: close to every proxy_pass backend request. Use the proxy_set_header directive to prevent that.
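A minimal sketch of such an override in the load balancer's location block, assuming the upstream named backend from the question's config:

```nginx
location / {
    # keepalive to the upstream requires HTTP/1.1
    proxy_http_version 1.1;
    # Replace the implicit "Connection: close" with an empty header
    # so upstream keepalive connections can actually be reused
    proxy_set_header Connection "";
    # Forward the client's Host header instead of $proxy_host
    proxy_set_header Host $http_host;
    proxy_pass http://backend;
}
```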
Issue 1 was resolved by using set_real_ip_from a.b.c.d on both node1 and node2, where a.b.c.d is the load balancer's IP.
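As a sketch, that resolution amounts to the following on each node, assuming the Cloudflare real_ip snippet from the question is already in place (a.b.c.d stands in for the load balancer's IP):

```nginx
# On node1 and node2: also trust real-IP headers arriving from the load balancer
set_real_ip_from a.b.c.d;
# Already set by the Cloudflare snippet; the client IP is restored from this header
real_ip_header CF-Connecting-IP;
```

Because the load balancer passes the CF-Connecting-IP header through unchanged, trusting its address lets the existing real_ip rule rewrite REMOTE_ADDR to the actual client IP.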
I need to develop a challenge page much like the Cloudflare firewall challenge.
I know how to build the front end and the back end of the challenge app, and I know how to set it up on a server.
The problem is how to integrate it with the website, which is on one server, while the challenge app is on another. How should I set up the communication between the servers? Both servers will run Ubuntu 16.04. Thank you all in advance.
I think it's better to solve this issue the way Cloudflare does, using an nginx server:
Make an nginx reverse proxy with a rate limit.
If the limit is hit, the user is redirected to an error page.
Integrate your challenge app with that error page.
More about this configuration is here:
https://serverfault.com/questions/645154/how-to-redirect-to-an-other-link-when-excess-request-limit-req-with-nginx
And how to use PHP in error pages is here:
Nginx, PHP + FPM Custom Error Pages
You can run this reverse proxy on a third server, or on the challenge app server.
Point your domain at the reverse proxy, then create the nginx config:
server {
    listen 80 default_server;
    server_name _; # catch-all; "*.*" is not a valid wildcard name
    client_body_timeout 5s;
    client_header_timeout 5s;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/; # replace with upstream ip
    }
}
You have to combine this with a custom PHP error page.
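The rate limit itself isn't shown in the config above; a minimal sketch of how it could be wired to the error page follows (the zone name, rate, and page paths are placeholder assumptions):

```nginx
# In the http{} block: track clients by IP, allow roughly 10 requests/second
limit_req_zone $binary_remote_addr zone=challenge:10m rate=10r/s;

server {
    listen 80 default_server;

    location / {
        limit_req zone=challenge burst=20 nodelay;
        # Requests over the limit get a 503 by default;
        # remap it to a page that serves the challenge app
        error_page 503 /challenge.html;
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/; # replace with upstream ip
    }

    location = /challenge.html {
        root /var/www/challenge; # placeholder path
    }
}
```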
I am trying to send the real visitor IP to nginx from PHP.
This is the situation:
server A - example.com/a.php
server B - example.com/file.txt
When you access example.com/a.php, it downloads file.txt, which is located on server B.
But the nginx logs show server A's IP as the requester. I guess that is correct, since file.txt is downloaded via a.php, located on server A.
So how can I send the IP of the visitor, instead of the server's, to nginx?
I already have this in my nginx config:
proxy_set_header X-Real-IP $remote_addr;
Thank you.
Server A: add an X-Real-IP header with the client's IP to the outgoing request. This part depends on your code. For example, with cURL you need to add curl_setopt($ch, CURLOPT_HTTPHEADER, ['X-Real-IP: ' . $_SERVER['REMOTE_ADDR']]);.
Server B: you need to configure nginx. Add to nginx's server config block:
set_real_ip_from SERVER_A_IP;
real_ip_header X-Real-IP; (not required, because it is the default value)
You would need to add it to your request headers:
$opts['http']['header'] = 'X-Real-IP: ' . $_SERVER['REMOTE_ADDR'] . "\r\n";
You would also need to configure nginx to accept this, with the set_real_ip_from config directive.
A better option would be to use cURL (see @Terra's answer), which gives you a bit more flexibility than the fopen wrappers.
The best option, however, is simply to let nginx do this: it is far more efficient than piping all the data through PHP. Use proxy_pass.
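A minimal sketch of that proxy_pass approach on server A (the location path and server B address are placeholders):

```nginx
# On server A: serve file.txt by proxying to server B instead of via PHP
location /file.txt {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://SERVER_B_IP;
}
```

With this, server B's nginx sees the visitor's IP in X-Real-IP without any PHP involved, and the set_real_ip_from / real_ip_header configuration on server B works unchanged.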
I have three servers: one for load balancing and two for serving the web application. My load balancing works fine when I serve my web page as a static site, but when I log in, the page does not respond correctly, because each page load can land on a different server. How can I keep a client on the same server until log-out? My load-balancing server configuration is:
upstream web_backend {
    server 192.168.33.2;
    server 192.168.33.3;
}
server {
    listen 80;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}
You can use the session persistence feature of nginx:
If there is the need to tie a client to a particular application
server — in other words, make the client’s session “sticky” or
“persistent” in terms of always trying to select a particular server —
the ip-hash load balancing mechanism can be used.
With ip-hash, the client’s IP address is used as a hashing key to
determine what server in a server group should be selected for the
client’s requests. This method ensures that the requests from the same
client will always be directed to the same server except when this
server is unavailable.
To configure ip-hash load balancing, just add the ip_hash directive to
the server (upstream) group configuration:
In your case, just add ip_hash to your upstream definition:
upstream web_backend {
    ip_hash;
    server 192.168.33.2;
    server 192.168.33.3;
}
Is there a way to directly connect to Redis using client side (not Node.js) javascript?
I'm already using Node.js + PHP + Redis + Socket.io (for the client) successfully for a few projects. However, I really think this could be further simplified to something like PHP + Redis + Browser javascript - taking out the Node.js server which is just another server I'd rather not use if it isn't necessary. For simple things, I think it would be better to just connect directly to Redis using Javascript.
From what I understand, Redis just serves its request through a port so any language that can make requests to that port would work. In theory, couldn't you just hit the redis server's port using client side javascript?
I'm mostly interested in the publish/subscribe functions, which may or may not be possible.
I'm not sure if you can access a port other than 80 using AJAX, but technically you should be able to forward Redis' port to port 80 using an nginx reverse proxy or something similar.
Any ideas? Just a thought. I'm very happy with my current solution, but it doesn't hurt to wonder if we could do this even better or more efficiently.
You can only make HTTP and WebSocket requests with client-side JavaScript. However, you should look into Webdis. It adds an easy HTTP/JSON layer to Redis and should do exactly what you want.
The real obstacle is overcoming the non-port-80/443 limitation for AJAX requests in the browser; even with the Webdis solution, it runs on port 7379 by default, and would conflict with your Apache or nginx process if run on port 80.
My advice would be to use nginx's proxy_pass to point at the Webdis process. You can route the traffic through port 80 and perform AJAX requests without the annoying security issues.
Below is a sample NGINX configuration that seems to do the trick for me.
upstream WebdisServerPool {
    server 127.0.0.1:7379; # webdis server 1
    server 192.168.1.1:7379; # webdis server 2
}
server {
    listen 80;
    root /path/to/my/php/code/;
    index index.php;
    server_name yourServerName.com;
    location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }
    location / {
        # Check if a file exists, or route it to index.php.
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /path/to/my/php/code/$fastcgi_script_name;
    }
    location /redis {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        rewrite /(.*)/(.*)/(.*)$ /$2/$3 break; # strip the /redis prefix
        proxy_redirect off;
        proxy_pass http://WebdisServerPool; # must match the upstream name
    }
}
On the front-end side, here is an example of getting all the keys. All Redis requests go through /redis, for example:
$.ajax({
    url: "/redis/KEYS/*",
    method: 'GET',
    dataType: 'json',
    success: function(data) {
        $.each(data.KEYS, function(key, value) {
            $('body').append(key + " => " + value + " <br> ");
        });
    }
});
OR
You could use:
http://wiki.nginx.org/HttpRedis and parse the response yourself.
I have found that the direct Redis HTTP interfaces don't work very well with pub/sub, or are difficult to set up (at the time of writing).
Here is my "workaround" for pub/sub based on the predis examples.
http://bradleygoldsmith.tumblr.com/post/35601539836/quick-and-dirty-redis-subscribe-publish-notifications
I have a bunch of predefined Redis accessors in PHP, and I use a 'router'-style function to call them from the client via $.post requests with jQuery. The router is just a big switch:
public function router() {
    $response = array();
    switch ($_POST['method']) {
        case 'get_whole_list': // a convenience function with arg $list_key
            if ($_POST['list_key']) { // provided by the POST request data
                $response = $this->get_whole_list($_POST['list_key']);
                break;
            } else {
                $response = array('error' => 'must be passed with post key "list_key"');
                break;
            }
        // ...and so on, until it's time to send the response:
    }
    return json_encode(array('response' => $response));
}
and then you just echo $myClass->router();
I access it with jQuery like this:
redgets.get_whole_list = function(key, callback) {
    $.post(redgets.router,         // points to my PHP file
        {method: 'get_whole_list', // tells it what to do
         list_key: key},           // provides the required args
        function(data) {
            callback($.parseJSON(data).response); // parses the response
        });
};
This all works fine; maybe it's not ideal, but it does make a Node.js server redundant.
I am surprised that nobody has already made a general-purpose Redis interface in this style.