Simple docker-compose with two services: nginx and php

I'm beginning with Docker and nginx, and I'm trying to set up a two-container environment running:
nginx:latest on one side
php:fpm on the other side
I'm having trouble with php-fpm: I always get a 502 Bad Gateway error.
My setup is straightforward ($TEST_DIR is my working directory).
My Docker Compose config, $TEST_DIR/docker-compose.yml:
nginx:
  image: nginx
  ports:
    - "8080:80"
  volumes:
    - ./www:/usr/share/nginx/html
    - ./conf/nginx.conf:/nginx.conf
    - ./logs/nginx:/var/log/nginx
  links:
    - php:php
  command: nginx -c /nginx.conf
php:
  image: php:fpm
  ports:
    - "9000:9000"
  volumes:
    - ./www:/var/www/html
The nginx config $TEST_DIR/conf/nginx.conf:
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log off;
    error_log off;

    gzip on;
    gzip_disable "msie6";
    open_file_cache max=100;

    upstream php-upstream {
        server php:9000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        # Pass PHP scripts to PHP-FPM
        location ~* \.php$ {
            fastcgi_pass php-upstream;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

        error_log /var/log/nginx/php_error.log;
        access_log /var/log/nginx/php_access.log;
    }
}

daemon off;
Then, I put my PHP content in the same directory as my docker-compose.yml:
$TEST_DIR/www/test.php
<?php phpinfo(); ?>
If I start the infrastructure using docker-compose up and then go to localhost:8080/test.php, I get the 502 Bad Gateway
and the following error from nginx:
[error] 6#6: *1 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.42.1, server: localhost, request: "GET /phpinsfo2.php HTTP/1.1", upstream: "fastcgi://172.17.0.221:9000", host: "localhost:8080"
What is causing the error?

I finally managed to make it work.
The problem was that my host computer (Fedora 21) had a firewall enabled.
So running systemctl stop firewalld solved my problem.
Apparently this is a well-known problem on Red Hat Linux:
Bug 1098281 - Docker interferes with firewall initialisation via firewalld
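If you prefer not to disable the firewall entirely, an often-suggested alternative (a sketch, assuming the default docker0 bridge interface) is to let firewalld trust the Docker bridge, reload the rules, and restart the Docker daemon so it re-creates its iptables rules on top of them:

    firewall-cmd --permanent --zone=trusted --add-interface=docker0
    firewall-cmd --reload
    systemctl restart docker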

Related

Docker Swarm - PHP-FPM not working with Nginx running on manager

I've built a cluster (1 manager, 4 workers). Only the manager has a public IP address; the workers and the manager are on a private network.
I'm trying to build a webserver (Nginx + PHP-FPM) with Docker Swarm. I placed the Nginx container on the manager so I can request it from outside the private network. With that setup, the container gets an upstream timed out (110: Connection timed out) while connecting to upstream error when requesting a PHP file. If I run it on a worker node, everything works fine, but then Nginx isn't accessible from outside the private network (I can only reach it with curl from the manager or the workers).
Do you have any idea? I really don't understand why running Nginx on the manager makes PHP-FPM time out. Thanks!
Here the docker-compose.yml file:
version: '3.8'
services:
  nginx:
    image: arm32v5/nginx:latest
    container_name: webserver_nginx
    ports:
      - 80:80
    volumes:
      - /media/storage/webserver/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /media/storage/webserver/nginx/log:/var/log/nginx
      - /media/storage/www:/var/www
    links:
      - php
    networks:
      - webserver
    deploy:
      placement:
        constraints:
          - "node.role==manager"
  php:
    image: arm32v5/php:7.4-fpm
    container_name: webserver_php
    volumes:
      - /media/storage/www:/var/www
      - /media/storage/webserver/nginx/www.conf:/usr/local/etc/php-fpm.d/www.conf
    networks:
      - webserver
    links:
      - nginx
    ports:
      - 9000:9000
networks:
  webserver:
    driver: overlay
    attachable: true
Nginx configuration:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server_tokens off;

    server {
        listen 80;
        error_page 500 502 503 504 /50x.html;

        location / {
            index index.php index.html index.htm;
            root /var/www/;
        }

        location ~ \.php$ {
            root /var/www/;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }
}
I had a similar kind of issue, and the solution was to use the Docker service name in the upstream block.
Here is my setup:
3 EC2 instances with Docker Swarm
1 Swarm manager (Nginx proxy deployed)
2 Swarm workers (frontend + backend deployed)
Flow:
Browser ->> Nginx proxy ->> React frontend (Nginx) ->> Go backend
nginx.conf
http {
    upstream allbackend {
        server exam105_frontend:8080; # FrontEnd service name
    }

    server {
        listen 80;
        location / {
            proxy_pass http://allbackend;
        }
    }
}
Docker Swarm Services
AWS Note:
Remember to open port 80 publicly and port 8080 (or whatever port number you are using) for internal communication in the Security Group of your AWS setup; otherwise you won't be able to access the service.
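Applied to the question above, that means pointing the upstream at the PHP service name as Swarm's DNS resolves it. A minimal sketch, assuming the stack was deployed with docker stack deploy -c docker-compose.yml webserver (stack deploy prefixes service names with the stack name):

    upstream php-upstream {
        # stack-qualified Swarm service name, resolvable on the shared overlay network
        server webserver_php:9000;
    }

    server {
        listen 80;

        location ~ \.php$ {
            fastcgi_pass php-upstream;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }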

Upstream php-fpm sometimes gives 404 file not found when using docker-compose with nginx and php-fpm

In my local development environment, I'm using docker-compose with nginx and php-fpm containers, but sometimes php-fpm returns a 404 file not found error. I only get it consistently when I have multiple Ajax calls happening.
Here is my docker-compose file.
version: '3'
services:
  nginx:
    build:
      context: docker
      dockerfile: nginx.dockerfile
    volumes:
      - ./:/var/www/html
    ports:
      - 80:80
    depends_on:
      - php
  php:
    build:
      context: docker
      dockerfile: php.dockerfile
    volumes:
      - ./:/var/www/html
nginx.conf
user nginx;
worker_processes 4;
daemon off;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    # Switch logging to console out to view via Docker
    #access_log /dev/stdout;
    #error_log /dev/stderr;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*.conf;
}
conf.d/default.conf
upstream php-upstream {
    server php:9000;
}
sites-available/site.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx.dockerfile
FROM nginx
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY nginx/site.conf /etc/nginx/sites-available/site.conf
WORKDIR /var/www/
CMD [ "nginx" ]
EXPOSE 80 443
When I visit a page where two Ajax calls happen, one will give me a 200:
"GET /index.php" 200
and the next will give me a 404:
ERROR: Unable to open primary script: /var/www/html/public/index.php (No such file or directory)
Often when I refresh, the one that failed will now work and the one that did work will now 404.

PHP files are DOWNLOADING instead of EXECUTING on Nginx

I have a simple docker-compose config with php-fpm and nginx.
It looks like Nginx can't pass the PHP file to php-fpm, which results in the PHP files being downloaded instead of executed.
It works with HTML files (localhost:8080/readme.html).
I always get a 403 Forbidden error when I go to the root of localhost (http://localhost/).
Please help.
docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: always
    volumes:
      - './etc/nginx/nginx.conf:/etc/nginx/nginx.conf'
      - './var/log:/var/log'
      - './web:/usr/share/nginx/html'
    ports:
      - 8080:80
      - 443:443
    depends_on:
      - php
  php:
    image: php:fpm-alpine
    container_name: php
    restart: always
    volumes:
      - "./web:/var/www/html"
      - './var/log:/var/log'
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        index index.php index.html index.htm;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename/index.php) {
                rewrite (.*) $1/index.php;
            }
            if (!-f $request_filename) {
                rewrite (.*) /index.php;
            }
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location ~ \.php$ {
            try_files $uri = 404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }
    }
}
The problem is that the root folder for php-fpm (/var/www/html) and for nginx (/usr/share/nginx/html) are different, but you pass nginx's root folder on to php-fpm in this line: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
Because of that, php-fpm looks in the wrong folder and can't execute the PHP file.
Try using /var/www/html as the root for nginx (change it in the nginx config and in the docker-compose file) and php-fpm should be able to find and execute the PHP files.
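A minimal sketch of that change, assuming you keep the ./web bind mount: give the nginx container the same document root that php-fpm already uses.

In docker-compose.yml, mount the content at /var/www/html for the nginx service as well:

    volumes:
      - './etc/nginx/nginx.conf:/etc/nginx/nginx.conf'
      - './var/log:/var/log'
      - './web:/var/www/html'

And in nginx.conf, point the server root at the same path:

    server {
        listen 80 default_server;
        # must match the path the content is mounted to in the php-fpm container
        root /var/www/html;
        index index.php index.html index.htm;
        ...
    }

With both containers sharing the same path, $document_root$fastcgi_script_name resolves to a file that php-fpm can actually open.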

Nginx & php-fpm - host unreachable

I have a Docker swarm containing one nginx and one php-fpm service. My problem is that, from other services in the swarm, I randomly get the error Failed to connect to nginx-fpm port 8081: Host unreachable.
The nginx and fpm images are based on the official Docker images with small config changes.
nginx-fpm dockerfile
FROM nginx:1.13.12-alpine
COPY ./nginx/config/ /etc/nginx/
ADD ./nginx/docker/entrypoint.sh /bin/
EXPOSE 8081
ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
nginx config
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    accept_mutex on;
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8081;
        server_name worker;
        keepalive_timeout 30;
        send_timeout 30s;
        fastcgi_read_timeout 30s;
        client_max_body_size 1024M;
        root /app/www;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php-fpm:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
        }
    }
}
I even tried enabling nginx_status, but I can only see one active connection (or host unreachable).
It looks to me like nginx is only able to handle one connection at a time, but I cannot find the reason why... any help is appreciated.
How are you running your service in Swarm?
Generally, you need to expose the service port externally, something like this:
$ docker service create --name my_service \
    --replicas 3 \
    --publish published=8081,target=8081 \
    nginx-fpm
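If you deploy from a stack file instead, the equivalent (a sketch, using the long port syntax available in compose file format 3.2 and later) would be:

    services:
      nginx-fpm:
        image: nginx-fpm
        deploy:
          replicas: 3
        ports:
          - target: 8081
            published: 8081
            protocol: tcp
            mode: ingress   # default: routed through Swarm's ingress mesh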

Docker Swarm get real IP (client host) in Nginx

I have a stack with nginx and PHP running on a Docker Swarm cluster.
At some point in my PHP application, I need to get remote_addr ($_SERVER['REMOTE_ADDR']), which should contain the real IP of the client host accessing my webapp.
The problem is the IP reported to nginx by the Docker Swarm cluster: it shows an internal IP like 10.255.0.2, but the real IP is the external IP of the client host (something like 192.168.101.151).
How can I solve that?
My docker-compose file:
version: '3'
services:
  php:
    image: php:5.6
    volumes:
      - /var/www/:/var/www/
      - ./data/log/php:/var/log/php5
    networks:
      - backend
    deploy:
      replicas: 1
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - /var/www/:/var/www/
      - ./data/log/nginx:/var/log/nginx
    networks:
      - backend
networks:
  backend:
My default.conf (vhost.conf) file:
server {
    listen 80;
    root /var/www;
    index index.html index.htm index.php;
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log error;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        try_files $uri $uri/ /index.php;
    }

    location = /50x.html {
        root /var/www;
    }

    # set expiration of assets to MAX for caching
    location ~* \.(js|css|gif|png|jp?g|pdf|xml|oga|ogg|m4a|ogv|mp4|m4v|webm|svg|svgz|eot|ttf|otf|woff|ico|webp|appcache|manifest|htc|crx|oex|xpi|safariextz|vcf)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_read_timeout 300;
    }
}
My nginx config file:
user nginx;
worker_processes 3;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    keepalive_timeout 15;
    client_body_buffer_size 100K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml;

    log_format main '$remote_addr - $remote_user [$time_local] "$request_filename" "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    include /etc/nginx/conf.d/*.conf;
}
For those who don't want to read the whole GitHub thread (https://github.com/moby/moby/issues/25526), the answer that worked for me was to change the config to this:
version: '3.7'
services:
  nginx:
    ports:
      - mode: host
        protocol: tcp
        published: 80
        target: 80
      - mode: host
        protocol: tcp
        published: 443
        target: 81
This still lets the internal overlay network work, but uses some iptables tricks to forward those ports directly to the container, so the service inside the container sees the correct source IP address of the packets.
There is no facility in iptables to balance a port across multiple containers, so you can only assign one port to one container (and that includes multiple replicas of a container).
You can't get this yet through an overlay network. If you scroll up from the bottom of this long-running GitHub issue, you'll see some options for using bridge networks in Swarm with your proxies to get around the issue for now.
Changing the port binding mode to host worked for me:
ports:
  - mode: host
    protocol: tcp
    published: 8082
    target: 80
However, your web front end must then be pinned to a specific host inside the swarm cluster, i.e.
deploy:
  placement:
    constraints:
      [node.role == manager]
X-Real-IP will be passed through, and you can use it to access the client IP. You can look at http://dequn.github.io/2019/06/22/docker-web-get-real-client-ip/ for reference.
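On the PHP side, a minimal sketch for reading the client address (assuming nginx either forwards the real REMOTE_ADDR via fastcgi_params, as happens with host-mode published ports, or sets an X-Real-IP header in front of the fastcgi hop) could look like this:

    <?php
    // Prefer X-Real-IP when an upstream proxy sets it; otherwise fall back to
    // REMOTE_ADDR, which is the real client IP once ports are published in host mode.
    $clientIp = isset($_SERVER['HTTP_X_REAL_IP'])
        ? $_SERVER['HTTP_X_REAL_IP']
        : $_SERVER['REMOTE_ADDR'];
    echo $clientIp;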
