I've built a cluster (1 manager, 4 workers). Only the manager has a public IP address; the workers and the manager share a private network.
I'm trying to run a webserver (Nginx + PHP-FPM) on Docker Swarm. I placed the Nginx container on the manager so that I can reach it from outside the private network. With that placement, Nginx fails with an upstream timed out (110: Connection timed out) while connecting to upstream error whenever a PHP file is requested. If I run Nginx on a worker node instead, everything works fine, but then Nginx isn't reachable from outside the private network (I can only curl it from the manager or from a worker).
Does anyone have any idea? I really don't understand why running Nginx on the manager makes PHP-FPM time out. Thanks!
Here is the docker-compose.yml file:
version: '3.8'

services:
  nginx:
    image: arm32v5/nginx:latest
    container_name: webserver_nginx
    ports:
      - 80:80
    volumes:
      - /media/storage/webserver/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /media/storage/webserver/nginx/log:/var/log/nginx
      - /media/storage/www:/var/www
    links:
      - php
    networks:
      - webserver
    deploy:
      placement:
        constraints:
          - "node.role==manager"

  php:
    image: arm32v5/php:7.4-fpm
    container_name: webserver_php
    volumes:
      - /media/storage/www:/var/www
      - /media/storage/webserver/nginx/www.conf:/usr/local/etc/php-fpm.d/www.conf
    networks:
      - webserver
    links:
      - nginx
    ports:
      - 9000:9000

networks:
  webserver:
    driver: overlay
    attachable: true
Nginx configuration:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    server_tokens off;

    server {
        listen 80;
        error_page 500 502 503 504 /50x.html;

        location / {
            index index.php index.html index.htm;
            root /var/www/;
        }

        location ~ \.php$ {
            root /var/www/;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }
}
I had a similar kind of issue, and the solution was to use the Docker service name in the upstream.
Here is my setup:
3 EC2 instances with Docker Swarm
1 Swarm manager (Nginx proxy deployed)
2 Swarm workers (frontend + backend deployed)
Flow
Browser ->> Nginx-proxy ->> React-frontend (Nginx) ->> Go-backend
nginx.conf
http {
    upstream allbackend {
        server exam105_frontend:8080; # frontend service name
    }

    server {
        listen 80;

        location / {
            proxy_pass http://allbackend;
        }
    }
}
Docker Swarm Services
AWS Note:
Remember to open port 80 publicly and port 8080 (or whatever port number you are using) for internal communication in the Security Group of your AWS setup; otherwise you won't be able to access the service.
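If you're unsure of the exact name to put in the upstream, you can check what Swarm calls the service from the manager node; a quick sketch (exam105_frontend is the service name used in the config above, following the <stack>_<service> pattern):
# List services in the swarm; the NAME column (<stack>_<service>,
# e.g. exam105_frontend) is the hostname nginx can resolve on the
# shared overlay network.
docker service ls

# Show a human-readable summary, including which networks the service joins.
docker service inspect --pretty exam105_frontend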
This is my first time working with nginx, and I'm using it to access my dockerised Drupal application from a production subdomain.
First of all, I'm currently using docker-compose to create my SQL, app, and web server containers. Here is my docker-compose file:
version: '3'

services:
  app:
    image: osiolabs/drupaldevwithdocker-php:7.4
    volumes:
      - ./docroot:/var/www/html:cached
    depends_on:
      - db
    restart: always
    container_name: intranet

  db:
    image: mysql:5.5
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=linux1354web
      - MYSQL_USER=root
      - MYSQL_PASSWORD=linux1354web
      - MYSQL_DATABASE=intranet
    container_name: intranet-db

  web:
    build: ./web
    ports:
      - 88:80
    depends_on:
      - app
    volumes:
      - ./docroot:/var/www/html:cached
    restart: always
    container_name: webIntranet
I don't think the containers are the problem: when I go into the Drupal container, the site works. My main problem is the link with the nginx container. Here is my nginx.conf:
# stuff for http block
client_max_body_size 1g;

# fix error: upstream sent too big header while reading response header from upstream
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;

server {
    listen 80;
    listen [::]:80 default_server;
    server_name _;

    index index.php index.html;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    root /var/www/html;

    # return the Apache2 landing page
    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php?$query_string; # For Drupal >= 7
    }

    location /intranet {
        # try to serve file directly, fallback to app.php
        try_files $uri $uri/;
    }

    # return the robots.txt file
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location ~ \.php(/|$) {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:80;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
    }
}
And this is my Dockerfile to build the nginx image:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
When I go to localhost:88/ I currently get an Apache2 hub page, but the moment I try any other page I always get a 502 Bad Gateway error, and the logs say:
webIntranet | 2021/03/11 08:44:55 [error] 30#30: *1 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 172.26.0.1, server: _, request: "GET /index HTTP/1.1", upstream: "fastcgi://172.26.0.3:80", host: "localhost:88"
To go into more detail, my Docker folder looks like that; docroot contains the Drupal website.
I have tried solving the problem by changing the ports, as some solutions suggested, but it did nothing. I don't understand what could be wrong; I've tried many things with the conf, but none of them work, and I still can't get a single page of the Drupal site to show up.
The drupaldevwithdocker-php project isn't using php-fpm, hence the unsupported response: it comes from Apache rather than php-fpm. I'd imagine you'd need something more like this:
proxy_pass http://app:80;
See https://gist.github.com/BretFisher/468bca2900b90a4dddb7fe9a52143fc6
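Concretely, the PHP location block would switch from FastCGI to plain HTTP proxying; a minimal sketch, assuming the app container serves HTTP on port 80 as in the compose file above:
location ~ \.php(/|$) {
    # "app" runs Apache + mod_php, so nginx should speak HTTP to it,
    # not the FastCGI protocol.
    proxy_pass http://app:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}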
I have a stack with nginx and PHP running on a Docker Swarm cluster.
At one point in my PHP application, I need to get the remote address ($_SERVER['REMOTE_ADDR']), which should contain the real IP of the client host accessing my webapp.
But the problem is the IP that the Docker Swarm cluster reports to nginx: it shows an internal IP like 10.255.0.2, while the real IP is the external IP of the client host (like 192.168.101.151).
How can I solve that?
My docker-compose file:
version: '3'

services:
  php:
    image: php:5.6
    volumes:
      - /var/www/:/var/www/
      - ./data/log/php:/var/log/php5
    networks:
      - backend
    deploy:
      replicas: 1

  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - /var/www/:/var/www/
      - ./data/log/nginx:/var/log/nginx
    networks:
      - backend

networks:
  backend:
My default.conf (vhost.conf) file:
server {
    listen 80;

    root /var/www;
    index index.html index.htm index.php;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log error;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        try_files $uri $uri/ /index.php;
    }

    location = /50x.html {
        root /var/www;
    }

    # set expiration of assets to MAX for caching
    location ~* \.(js|css|gif|png|jp?g|pdf|xml|oga|ogg|m4a|ogv|mp4|m4v|webm|svg|svgz|eot|ttf|otf|woff|ico|webp|appcache|manifest|htc|crx|oex|xpi|safariextz|vcf)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_read_timeout 300;
    }
}
My nginx config file:
user nginx;
worker_processes 3;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    keepalive_timeout 15;
    client_body_buffer_size 100K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml;

    log_format main '$remote_addr - $remote_user [$time_local] "$request_filename" "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    include /etc/nginx/conf.d/*.conf;
}
For those who don't want to read the whole GitHub thread (https://github.com/moby/moby/issues/25526), the answer that worked for me was to change the config to this:
version: '3.7'

services:
  nginx:
    ports:
      - mode: host
        protocol: tcp
        published: 80
        target: 80
      - mode: host
        protocol: tcp
        published: 443
        target: 81
This still lets the internal overlay network work, but uses some tricks with iptables to forward those ports directly to the container, so the service inside the container sees the correct source IP address of the packets.
There is no facility in iptables to balance a port between multiple containers, so you can only assign one port to one container (which includes multiple replicas of a container).
You can't get this yet through an overlay network. If you scroll up from the bottom of this long-running GitHub issue, you'll see some options for using bridge networks in Swarm with your proxies to work around this for now.
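For reference, the same host-mode publishing can also be expressed with the long --publish syntax when creating a service from the CLI; a sketch, with the service name as a placeholder:
# Publish port 80 in host mode: traffic bypasses the routing mesh, so the
# container sees the real client source IP (at most one task per node).
docker service create \
  --name nginx-proxy \
  --publish mode=host,target=80,published=80 \
  --constraint node.role==manager \
  nginx:latest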
Changing the port binding mode to host worked for me:
ports:
  - mode: host
    protocol: tcp
    published: 8082
    target: 80
However, your web front end must then be pinned to a specific host inside the swarm cluster, i.e.:
deploy:
  placement:
    constraints:
      [node.role == manager]
X-Real-IP will be passed through, and you can use it to read the client IP. You can look at http://dequn.github.io/2019/06/22/docker-web-get-real-client-ip/ for reference.
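On the PHP side, the forwarded header then shows up in $_SERVER as HTTP_X_REAL_IP; a minimal sketch (only trust this header when nginx is the sole way requests can reach PHP):
<?php
// Prefer the header set by nginx (proxy_set_header X-Real-IP $remote_addr),
// falling back to the direct peer address. Written for PHP 5.6, matching
// the php:5.6 image in the compose file above.
$clientIp = isset($_SERVER['HTTP_X_REAL_IP'])
    ? $_SERVER['HTTP_X_REAL_IP']
    : $_SERVER['REMOTE_ADDR'];
echo $clientIp;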
I created a multi-container application built around PHP. When I modify a PHP file, I can see the change in the browser. But when I modify static files such as CSS and JS, nothing changes in the browser. The following is my Dockerfile code:
Dockerfile
FROM nginx:1.8
ADD default.conf /etc/nginx/conf.d/default.conf
ADD nginx.conf /etc/nginx/nginx.conf
WORKDIR /Code/project/
RUN chmod -R 777 /Code/project/
VOLUME /Code/project
default.conf
server {
    listen 80;
    server_name localhost;

    root /Code/project/public;
    index index.html index.htm index.php;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/html;
    #}

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
nginx.conf
user root;
worker_processes 8;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    client_max_body_size 20m;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml
webphp:
  image: index.alauda.cn/chuanty/local_php
  #image: index.alauda.cn/chuanty/local_nginx:snapshot
  ports:
    - "9000:9000"
  volumes:
    - .:/Code/project
  links:
    - cache:cache
    - db:db
    - es:localhost
  extra_hosts:
    - "chaunty.taurus:192.168.99.100"

cache:
  image: redis
  ports:
    - "6379:6379"

db:
  #image: mysql
  image: index.alauda.cn/alauda/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: chuantaiyidev
    MYSQL_USER: cty
    MYSQL_PASSWORD: chuantaiyidev
    MYSQL_DATABASE: taurus

es:
  image: index.alauda.cn/chuanty/local_elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"

server:
  #image: index.alauda.cn/ryugou/nginx
  image: index.alauda.cn/chuanty/local_nginx:1.1
  ports:
    - "80:80"
    - "443:443"
  links:
    - webphp:fpm
  volumes_from:
    - webphp:rw
I guess your problem is sendfile on in your nginx.conf.
For development purposes, try setting it to off in the server directive of your server block:
server {
    ...
    sendfile off;
}
This forces nginx to reload static files such as CSS and JS from disk instead of serving them from memory.
http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile
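After editing the config you don't need to rebuild the image; the running container can validate and reload it in place. A sketch, with <nginx-container> standing in for whatever docker ps reports:
# Validate the edited configuration, then reload nginx without a restart.
docker exec <nginx-container> nginx -t
docker exec <nginx-container> nginx -s reload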
One option is to copy the modified static files directly to the Docker container:
docker cp web/static/main.js container_name:/usr/src/app/static/main.js
web/static is the static directory on your host machine
main.js is the modified file
container_name is the name of the container which can be found with docker ps
/usr/src/app/static is the static directory in your Docker container
If you want to determine exactly where your static files are on your Docker container, you can use docker exec -t -i container_name /bin/bash to explore its directory structure.
I really hope you do not actually copy configuration files into an image!
Docker Best Practice
Docker images are supposed to be immutable, so configuration files (which depend on a lot of environment variables) should instead be passed to/shared with the container at deploy time via the docker run option -v|--volume.
As for your issue
If you want to see changes when static files are modified, you currently need to docker build and then docker-compose up after every modification in order to actually change the web page. You probably do not want that. I suggest you use shared directories (through the -v option), as sketched below.
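For example, a sketch of running the same image with the config and code tree mounted read-only from the host (the paths and container name are illustrative, not taken from the question):
# Mount the nginx config and the code tree from the host instead of baking
# them into the image; edits on the host become visible immediately.
docker run -d --name dev-nginx \
  -p 8080:80 \
  -v "$(pwd)/conf/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "$(pwd)/Code/project:/Code/project:ro" \
  nginx:1.8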
I'm beginning with Docker and nginx, and I'm trying to set up a two-container environment running:
nginx:latest on one side
php:fpm on the other side
I'm having trouble with php-fpm: I always get a 502 Bad Gateway error.
My setup is straightforward ($TEST_DIR is my working directory).
My Docker Compose config $TEST_DIR/docker-compose.yml:
nginx:
  image: nginx
  ports:
    - "8080:80"
  volumes:
    - ./www:/usr/share/nginx/html
    - ./conf/nginx.conf:/nginx.conf
    - ./logs/nginx:/var/log/nginx
  links:
    - php:php
  command: nginx -c /nginx.conf

php:
  image: php:fpm
  ports:
    - "9000:9000"
  volumes:
    - ./www:/var/www/html
The nginx config $TEST_DIR/conf/nginx.conf:
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    server_tokens off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log off;
    error_log off;

    gzip on;
    gzip_disable "msie6";

    open_file_cache max=100;

    upstream php-upstream {
        server php:9000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        # Pass PHP scripts to PHP-FPM
        location ~* \.php$ {
            fastcgi_pass php-upstream;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

        error_log /var/log/nginx/php_error.log;
        access_log /var/log/nginx/php_access.log;
    }
}

daemon off;
Then, I put my PHP content in the same directory as my docker-compose.yml:
$TEST_DIR/www/test.php
<?php phpinfo(); ?>
If I start the infrastructure using docker-compose up and then go to localhost:8080/test.php, I get the 502 Bad Gateway
and the following error from nginx:
[error] 6#6: *1 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.42.1, server: localhost, request: "GET /phpinsfo2.php HTTP/1.1", upstream: "fastcgi://172.17.0.221:9000", host: "localhost:8080"
What is causing the error?
I finally managed to make it work.
The problem was that my host computer (Fedora 21) had a firewall enabled.
So running systemctl stop firewalld solved my problem.
Apparently this is a well-known problem on Red Hat Linux:
Bug 1098281 - Docker interferes with firewall initialisation via firewalld
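If stopping the firewall entirely is too drastic, opening just the FPM port, or trusting the Docker bridge, may be enough; a sketch, assuming php-fpm's port 9000 from the setup above:
# Open the php-fpm port through firewalld instead of disabling it...
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --reload

# ...or trust the docker bridge interface as a whole.
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload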
I am in the process of Dockerising my webserver/php workflow.
But because I am on Windows, I need to use a virtual machine. I chose boot2docker, which is Tiny Core Linux running in VirtualBox, adapted for Docker.
I selected three containers:
nginx: the official nginx container;
jprjr/php-fpm: a php-fpm container;
mysql: for databases.
In boot2docker, /www/ contains my web projects and conf/, which has the following tree:
conf
│
├───fig
│ fig.yml
│
└───nginx
nginx.conf
servers-global.conf
servers.conf
Because docker-compose is not available for boot2docker, I must use fig to automate everything. Here is my fig.yml:
mysql:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=root
  ports:
    - 3306:3306

php:
  image: jprjr/php-fpm
  links:
    - mysql:mysql
  volumes:
    - /www:/srv/http:ro
  ports:
    - 9000:9000

nginx:
  image: nginx
  links:
    - php:php
  volumes:
    - /www:/www:ro
  ports:
    - 80:80
  command: nginx -c /www/conf/nginx/nginx.conf
Here is my nginx.conf:
daemon off;

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile off;
    keepalive_timeout 65;

    index index.php index.html index.htm;

    include /www/conf/nginx/servers.conf;

    autoindex on;
}
And the servers.conf:
server {
    server_name lab.dev;
    root /www/lab/;
    include /www/conf/nginx/servers-global.conf;
}

# Some other servers (vhosts)
And the servers-global.conf:
listen 80;

location ~* \.php$ {
    fastcgi_index index.php;
    fastcgi_pass php:9000;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
}
So here is the problem (sorry for all that configuration, but I believe it was needed to explain it clearly): if I access lab.dev, there is no problem (which shows that the host entry works on Windows), but if I try to access lab.dev/test_autoload/, I get a File not found. I know this comes from php-fpm not being able to access the files, and the nginx logs confirm it:
nginx_1 | 2015/05/28 14:56:02 [error] 5#5: *3 FastCGI sent in stderr:
"Primary script unknown" while reading response header from upstream,
client: 192.168.59.3, server: lab.dev, request: "GET /test_autoload/ HTTP/1.1",
upstream: "fastcgi://172.17.0.120:9000", host: "lab.dev", referrer: "http://lab.dev/"
I know that there is an index.php in lab/test_autoload/ in both containers; I have checked. In nginx it is located at /www/lab/test_autoload/index.php, and in php at /srv/http/lab/test_autoload/index.php.
I believe the problem comes from root and/or fastcgi_param SCRIPT_FILENAME, but I do not know how to solve it.
I have tried many things, such as modifying the roots, using a rewrite rule, and adding/removing some /s, but nothing has changed the outcome.
Again, sorry for all this config, but I think it was needed to describe the environment I am in.
I finally found the answer.
The variable $fastcgi_script_name does not take the root into account (logical, as it would have included www/ otherwise). This means that a single global file cannot work. An example:
# "generated" vhost
server {
server_name lab.dev;
root /www/lab/;
listen 80;
location ~* \.php$ {
fastcgi_index index.php;
fastcgi_pass php:9000;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
# I need to add /lab after /srv/http because it requests PHP to look at the root of the web files, not in the lab/ folder
# fastcgi_param SCRIPT_FILENAME /srv/http/lab$fastcgi_script_name;
}
}
This meant that I couldn't write the lab/ part in just one place; I needed it in two different places (root and fastcgi_param), so I decided to write a small PHP script that turns a .json file into the servers.conf file. If anyone wants to have a look at it, just ask, it will be a pleasure to share it.
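For illustration, a minimal sketch of what such a generator could look like (the vhosts.json layout here is invented, not the author's actual script):
<?php
// generate-servers.php: turn a small JSON description of vhosts into
// servers.conf, so each site's folder is written in one place only.
// Assumed vhosts.json layout: [{"name": "lab.dev", "dir": "lab"}]
$vhosts = json_decode(file_get_contents('vhosts.json'), true);
$out = '';
foreach ($vhosts as $vhost) {
    $out .= "server {\n";
    $out .= "    server_name {$vhost['name']};\n";
    $out .= "    root /www/{$vhost['dir']}/;\n";
    $out .= "    listen 80;\n";
    $out .= "    location ~* \\.php$ {\n";
    $out .= "        fastcgi_index index.php;\n";
    $out .= "        fastcgi_pass php:9000;\n";
    $out .= "        include /etc/nginx/fastcgi_params;\n";
    $out .= "        fastcgi_param SCRIPT_FILENAME /srv/http/{$vhost['dir']}\$fastcgi_script_name;\n";
    $out .= "    }\n";
    $out .= "}\n\n";
}
file_put_contents('servers.conf', $out);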
There's a mistake here:
fastcgi_param SCRIPT_FILENAME /srv/http/$fastcgi_script_name;
The correct line is:
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
This is not the same thing.
Your nginx.conf is fully empty: where are the daemon off; line, the user nginx; line, worker_processes, etc.? nginx needs some configuration before the http section can run.
In the http conf, same thing: the mime types, the default_type, the log configuration, and sendfile on for boot2docker are missing.
Your problem is clearly not a problem with Docker but with the nginx configuration. Have you tested your application by running docker run before using fig? Did it work?
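For reference, a minimal top-level skeleton along the lines this answer describes might look like this (values are common defaults, reusing the include path from the question, not the asker's exact setup):
daemon off;

user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    # server blocks (or an include that pulls them in) go here
    include /www/conf/nginx/servers.conf;
}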