Configure nextcloud-fpm docker-compose with bare metal nginx - php

I'm trying to install Nextcloud on my server.
The nginx service is installed directly on bare metal (Ubuntu)
I'm starting from the docker-compose found at https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/postgres/fpm
version: '3.8'

services:
  postgres-nextcloud:
    image: postgres:alpine
    restart: always
    ports:
      - 5435:5432
    volumes:
      - postgres-nextcloud-data:/var/lib/postgresql/data
    env_file:
      - db.env

  redis-nextcloud:
    image: redis:alpine
    restart: always

  nextcloud:
    image: nextcloud:fpm-alpine
    restart: always
    ports:
      - 8083:9000
    volumes:
      - /var/www/cloud.domain.com:/var/www/html
    environment:
      - POSTGRES_HOST=postgres-nextcloud
      - REDIS_HOST=redis-nextcloud
      - POSTGRES_PORT=5432
    env_file:
      - db.env
    depends_on:
      - postgres-nextcloud
      - redis-nextcloud

  web:
    build: ./web
    restart: always
    volumes:
      - /var/www/cloud.domain.com:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=cloud.domain.com
      - LETSENCRYPT_HOST=cloud.domain.com
      - LETSENCRYPT_EMAIL=dev@domain.com
    depends_on:
      - nextcloud
    networks:
      - proxy-tier
      - default

  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - /var/www/cloud.domain.com:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - postgres-nextcloud
      - redis-nextcloud
But with my web server knowledge I haven't found a way to properly configure my "local" nginx.
I have many other websites and apps already working using this nginx instance.
All the different configs are in the sites-available directory.
The config for the Nextcloud project is named cloud.mydomain.com.
With this nginx config I only get a "File not found." page:
server {
    root /var/www/cloud.domain.com;
    server_name cloud.domain.com www.cloud.domain.com;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass localhost:8083;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/cloud.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.cloud.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = cloud.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name cloud.domain.com www.cloud.domain.com;
    listen 80;
    listen [::]:80;
    return 404; # managed by Certbot
}
I understand that the -fpm image needs a web server in front of it, but I don't really understand how to link it to my existing nginx setup, with nginx NOT running in a Docker container.
Thanks for your time!

One way I know of is to run another nginx in a container and put the two containers on one Docker network.
Use your bare-metal nginx as a reverse proxy and forward traffic to the nginx in the container; that will do the trick.
The other way is to use the nextcloud:latest image, which comes with a built-in Apache server. It's really the first way, just with the web server built in.
I've heard there is a way to configure your container to behave like a service installed on bare metal (with its own public IP) by setting network_mode in the docker-compose file, but I think it's easier to just include another nginx server in Docker.
Either way, your existing services on bare metal will not be affected.
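For the reverse-proxy route, here is a minimal sketch of the bare-metal server block, assuming the web service publishes container port 80 as 8080 on the host (e.g. ports: - 8080:80; that mapping is an assumption, adjust it to yours):

server {
    server_name cloud.domain.com;

    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key and friends stay exactly
    # as in the existing Certbot-managed config

    client_max_body_size 512m; # Nextcloud uploads can be large

    location / {
        # hand everything to the nginx container, which in turn talks
        # FastCGI to the nextcloud fpm container on the Docker network
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

With this setup the 8083:9000 mapping on the nextcloud service is no longer needed, since nothing on the host speaks FastCGI directly.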

Related

Connection refused while connecting to upstream to a port other than 9000 in docker project

I've been struggling for a long time with running two PHP Docker projects on the same server. For the first project I map the PHP port as 9000:9000 in docker-compose.yaml. If I use the same port for the other project, Docker logically reports on startup that port 9000 is already in use. So I set the port to 9002, but then I get a 502 Bad Gateway error and Connection refused:
connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.19.0.3:9002", host: "127.0.0.1:4443"
The first project works correctly.
Can someone advise me how to adjust the configuration, or tell me where I'm going wrong?
First docker project:
version: '3.9'

services:
  php:
    container_name: php
    build:
      context: ./docker/php
    ports:
      - '9000:9000'
    volumes:
      - .:/var/www/
      - ~/.ssh:/root/.ssh:ro

  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    ports:
      - '5080:80'
      - '5443:443'
    volumes:
      - .:/var/www/
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - /etc/letsencrypt:/etc/letsencrypt
    depends_on:
      - php
The nginx config (default.conf) for the first project:

upstream php-upstream {
    server php:9000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    index index.php index.html index.htm;
    server_name localhost;
    root /var/www;

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;

    ssl_certificate /etc/letsencrypt/live/domain1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1/privkey.pem;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-upstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        include fastcgi_params;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
Second docker project:
version: '3.9'

services:
  php:
    container_name: hostmagic_php
    build:
      context: ./docker/php
    ports:
      - '9002:9000'
    volumes:
      - .:/var/www/symfony
      - ~/.ssh:/root/.ssh:ro

  nginx:
    container_name: hostmagic_nginx
    image: nginx:stable-alpine
    ports:
      - '8080:80'
      - '4443:443'
    volumes:
      - .:/var/www/symfony
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - /etc/letsencrypt:/etc/letsencrypt
    depends_on:
      - php

  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: hostmagic_rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - 15672:15672
      - 5672:5672
The nginx config (default.conf) for the second project:

upstream php-upstream {
    server php:9002;
}

server {
    listen 80;
    server_name localhost;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    index index.php index.html index.htm;
    server_name localhost;
    root /var/www/symfony/public;

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;

    ssl_certificate /etc/letsencrypt/live/domain2/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain2/privkey.pem;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-upstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param HTTP_HOST $host;
        include fastcgi_params;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
Connections between containers always use the standard port number for the destination service. These connections don't require ports:, and ignore any port remapping that might be specified there.
That means, in the second Nginx proxy, you need to use the standard PHP-FPM port 9000 and not the remapped port:
upstream php-upstream {
    server php:9000;
}
If you're not going to access the FastCGI service directly from the host (and tools to do this are limited) then you can delete the ports: on both php containers, which further avoids this conflict. (You can also delete container_name: and Compose will pick a non-conflicting default.)
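As a sketch of what that advice yields for the second project's php service (everything else unchanged):

  php:
    build:
      context: ./docker/php
    # no ports: needed - the nginx container reaches this service at
    # php:9000 over the Compose network, so nothing conflicts on the host
    # (container_name: dropped too; Compose generates a unique name)
    volumes:
      - .:/var/www/symfony
      - ~/.ssh:/root/.ssh:ro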

nginx server connecting to docker containers running laravel

I need to configure an nginx server to connect to multiple docker networks for different projects. As of now, I am trying to connect nginx to one docker network.
The docker network has php (laravel) and mysql in two different containers.
Here is the nginx configuration:
upstream web {
    server 127.0.0.1:9000;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name tinyurl.abc.org;
    root /home/azureuser/app/url-shorterner/public;

    ssl_certificate /home/azureuser/app/certs/abc.org.crt;
    ssl_certificate_key /home/azureuser/app/certs/abc.org.key;

    error_log /var/log/nginx/error.log error;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass web;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME /tinyurl/public$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name tinyurl.abc.org;
    return 301 https://$server_name$request_uri;
}
The docker-compose.yml file has the following:
version: '3.8'

# Services
services:

  # Web (Application) Service
  web:
    build:
      context: ./.docker/web
      args:
        HOST_UID: $HOST_UID
    container_name: tinyurl-web
    ports:
      - '9000:9000'
    volumes:
      - '.:/var/www/html'
    depends_on:
      mysql:
        condition: service_healthy

  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    container_name: tinyurl-mysql
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_ROOT_HOST: '${DB_HOST}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_HOST: '${DB_HOST}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'no'
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 5s
      retries: 10

  # Scheduler Service
  scheduler:
    image: mcuadros/ofelia:latest
    container_name: tinyurl-scheduler
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./.docker/scheduler/config.ini:/etc/ofelia/config.ini
    depends_on:
      - web

volumes:
  mysqldata:
    driver: local
The problem seems to be in the routing of the requests. The index page within Laravel's public folder shows up, but none of the other static pages do, and requests are not being routed correctly.
It looks like the fastcgi configuration is not right.
Can anyone help with this and point me to what needs to be done in the nginx configuration to route to the web container running Laravel?
With Regards,
Sharat
In your nginx config, replace 127.0.0.1 with the service name from your docker-compose file:
upstream web {
    server web:9000;
}

server {
    ...
    location ~ \.php$ {
        fastcgi_pass web;
        ...
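Note also that SCRIPT_FILENAME must be a path as seen inside the web container, not on the host. A sketch of the full location block under that assumption (the compose file above mounts the project at /var/www/html, so Laravel's front controller lives in /var/www/html/public):

location ~ \.php$ {
    fastcgi_pass web;
    fastcgi_index index.php;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # container-side path, per the .:/var/www/html volume mount
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
    include fastcgi_params;
}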

How to configure an nginx.conf with a dockerised drupal website?

This is my first time working with nginx, and I'm using it to access my dockerised drupal application from a production subdomain.
So before everything: I'm currently using docker-compose to create my sql, app, and web service containers. Here is my docker-compose file:
version: '3'

services:
  app:
    image: osiolabs/drupaldevwithdocker-php:7.4
    volumes:
      - ./docroot:/var/www/html:cached
    depends_on:
      - db
    restart: always
    container_name: intranet

  db:
    image: mysql:5.5
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=linux1354web
      - MYSQL_USER=root
      - MYSQL_PASSWORD=linux1354web
      - MYSQL_DATABASE=intranet
    container_name: intranet-db

  web:
    build: ./web
    ports:
      - 88:80
    depends_on:
      - app
    volumes:
      - ./docroot:/var/www/html:cached
    restart: always
    container_name: webIntranet
I don't think the containers are the problem, as the site works when I go into the drupal container; my main problem is the link with the nginx container. Here is my nginx.conf:
# stuff for http block
client_max_body_size 1g;

# fix error: upstream sent too big header while reading response header from upstream
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;

server {
    listen 80;
    listen [::]:80 default_server;
    server_name _;

    index index.php index.html;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    root /var/www/html;

    # Serve the Apache2 landing page
    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php?$query_string; # For Drupal >= 7
    }

    location /intranet {
        # try to serve file directly, fallback to app.php
        try_files $uri $uri/;
    }

    # Serve the robots.txt file
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location ~ \.php(/|$) {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:80;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
    }
}
And this is my Dockerfile to build the nginx image:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
When I go to localhost:88/ I currently get an Apache2 hub page, but the moment I try another page I always get a 502 Bad Gateway error, and the logs say:
webIntranet | 2021/03/11 08:44:55 [error] 30#30: *1 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 172.26.0.1, server: _, request: "GET /index HTTP/1.1", upstream: "fastcgi://172.26.0.3:80", host: "localhost:88"
To go into more detail, my docker folder looks like this; docroot contains the drupal website.
I have tried solving the problem by changing the ports, as some solutions mentioned, but it did nothing. I don't understand what could be wrong; I've tried many things with the conf, but none of them work, and I still can't get a single page of the drupal site to show up.
The drupaldevwithdocker-php project isn't using php-fpm, so the response is unsupported: it comes from Apache rather than php-fpm. I'd imagine you'd need something more like this:

proxy_pass http://app:80;
See https://gist.github.com/BretFisher/468bca2900b90a4dddb7fe9a52143fc6
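In other words, the \.php location (which assumes the FastCGI protocol) would give way to a plain HTTP proxy. A minimal sketch (the proxy_set_header lines are common-practice additions, not from the original config):

location / {
    # Apache + mod_php in the app container speaks HTTP, not FastCGI
    proxy_pass http://app:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}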

how to setup a single nginx server with multiple php-fpm docker containers

Nginx is running on my server (not a docker image) to proxy the subdomain requests to my docker containers.
Having an additional nginx container for each image works well as follows:
docker-compose.yml
version: '2'

services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - .:/app
      - ./site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - code-network

  php:
    image: php:fpm
    volumes:
      - .:/app
    networks:
      - code-network

networks:
  code-network:
    driver: bridge
site.conf
server {
    listen 80;
    index index.php index.html;
    server_name localhost;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    root /app;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
/etc/nginx/sites-available/dev.domain.co.uk
server {
    listen 80;
    listen [::]:80;
    server_name dev.domain.co.uk;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
But it seems wasteful to have an additional instance of nginx for each PHP site.
How can I route the server instance of nginx directly to each php:fpm container?
Note: the Docker network 'code-network' gets renamed to appname-code-network upon docker-compose up -d.
Figured this out.
In the nginx server conf, the root is the location of the code on the server host, i.e. /home/user/CODE/site.
The fastcgi_param is the location of the code in the Docker container (/app), as defined in docker-compose.yml:

fastcgi_param SCRIPT_FILENAME /app$fastcgi_script_name;

You also need to pass to the Docker container's IP:

fastcgi_pass 172.21.0.2:9000;

as discovered by running hostname -I inside the container.
I've yet to discover how to use the container hostname instead.
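For what it's worth, the container's IP can also be read from the host without a shell in the container; docker inspect supports a Go-template format string (the container name here is a placeholder, use whatever docker ps shows):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' appname_php_1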
To proxy to multiple php-fpm Docker instances, you need to give each php instance a unique alias, which is then referenced in your server configuration.
For example, in the docker-compose.yml file (see php1 in the 'aliases' section):

php:
  image: php:fpm
  volumes:
    - .:/app
  networks:
    code-network:
      aliases:
        - php1

and then in site.conf use fastcgi_pass php1:9000; (see the php1?).
Giving each php container instance a unique alias allows you to run different php setups (versions, etc...) on the same host with a single nginx container.
Hope it makes sense.
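To make that concrete, a sketch with two separate compose projects joined to a shared network (the image tags, paths, and shared external code-network are illustrative assumptions):

# project A's docker-compose.yml
php:
  image: php:fpm
  volumes:
    - .:/app
  networks:
    code-network:
      aliases:
        - php1

# project B's docker-compose.yml
php:
  image: php:8.2-fpm   # e.g. a different PHP version per project
  volumes:
    - .:/app
  networks:
    code-network:
      aliases:
        - php2

Each site's server block then targets its own instance: fastcgi_pass php1:9000; for the first project and fastcgi_pass php2:9000; for the second, with no DNS clash even though both services are named php.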

Changes to css, js, and static files not reflected on nginx docker container

I created a multi-container PHP application. When I modify a PHP file, I can see the change in the browser; but when I modify static files such as css and js, nothing changes in the browser. The following is my Dockerfile code:
Dockerfile
FROM nginx:1.8
ADD default.conf /etc/nginx/conf.d/default.conf
ADD nginx.conf /etc/nginx/nginx.conf
WORKDIR /Code/project/
RUN chmod -R 777 /Code/project/
VOLUME /Code/project
default.conf
server {
    listen 80;
    server_name localhost;

    root /Code/project/public;
    index index.html index.htm index.php;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/html;
    #}

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
nginx.conf
user root;
worker_processes 8;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    client_max_body_size 20m;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml
webphp:
  image: index.alauda.cn/chuanty/local_php
  #image: index.alauda.cn/chuanty/local_nginx:snapshot
  ports:
    - "9000:9000"
  volumes:
    - .:/Code/project
  links:
    - cache:cache
    - db:db
    - es:localhost
  extra_hosts:
    - "chaunty.taurus:192.168.99.100"

cache:
  image: redis
  ports:
    - "6379:6379"

db:
  #image: mysql
  image: index.alauda.cn/alauda/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: chuantaiyidev
    MYSQL_USER: cty
    MYSQL_PASSWORD: chuantaiyidev
    MYSQL_DATABASE: taurus

es:
  image: index.alauda.cn/chuanty/local_elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"

server:
  #image: index.alauda.cn/ryugou/nginx
  image: index.alauda.cn/chuanty/local_nginx:1.1
  ports:
    - "80:80"
    - "443:443"
  links:
    - webphp:fpm
  volumes_from:
    - webphp:rw
I guess your problem is "sendfile on" in your nginx.conf.
For development purposes, try setting it off in the server directive of your server block:

server {
    ...
    sendfile off;
}

This forces the static files such as css and js to be reloaded; nginx will no longer serve them from memory.
http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile
One option is to copy the modified static files directly into the Docker container:

docker cp web/static/main.js container_name:/usr/src/app/static/main.js

Here web/static is the static directory on your host machine, main.js is the modified file, container_name is the name of the container (which can be found with docker ps), and /usr/src/app/static is the static directory in your Docker container.
If you want to determine exactly where your static files are in your Docker container, you can use docker exec -t -i container_name /bin/bash to explore its directory structure.
I really hope you don't actually copy configuration files into an image!
Docker Best Practice
Docker images are supposed to be immutable, so at deploy time the configuration files (which depend on a lot of [environment] variables) should instead be passed to / shared with the container via the docker run option -v|--volume.
As for your issue
If you want to see changes when static files are modified, you have to docker build and then docker-compose up on every modification in order to actually change the web page. You (clearly) may not want that. I suggest you use shared directories (through the -v option).
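A minimal sketch of that approach for the nginx container above (the host paths are illustrative assumptions):

# share the config and the code instead of baking them into the image
docker run -d \
  -p 80:80 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "$PWD/Code/project:/Code/project" \
  nginx:1.8

Edits to files under ./Code/project then show up in the container immediately, with no rebuild.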
