nginx and php-fpm in Docker

I am in the process of Dockerising my webserver/PHP workflow.
Because I am on Windows, I need to use a virtual machine. I chose boot2docker, a Tiny Core Linux image that runs in VirtualBox and is adapted for Docker.
I selected three containers:
nginx: the official nginx container;
jprjr/php-fpm: a php-fpm container;
mysql: for databases.
In boot2docker, /www/ contains my web projects and conf/, which has the following tree:
conf
│
├───fig
│       fig.yml
│
└───nginx
        nginx.conf
        servers-global.conf
        servers.conf
Because docker-compose is not available for boot2docker, I must use fig to automate everything. Here is my fig.yml:
mysql:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=root
  ports:
    - 3306:3306
php:
  image: jprjr/php-fpm
  links:
    - mysql:mysql
  volumes:
    - /www:/srv/http:ro
  ports:
    - 9000:9000
nginx:
  image: nginx
  links:
    - php:php
  volumes:
    - /www:/www:ro
  ports:
    - 80:80
  command: nginx -c /www/conf/nginx/nginx.conf
Here is my nginx.conf:
daemon off;
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile off;
    keepalive_timeout 65;
    index index.php index.html index.htm;

    include /www/conf/nginx/servers.conf;
    autoindex on;
}
And the servers.conf:
server {
    server_name lab.dev;
    root /www/lab/;
    include /www/conf/nginx/servers-global.conf;
}
# Some other servers (vhosts)
And the servers-global.conf:
listen 80;
location ~* \.php$ {
    fastcgi_index index.php;
    fastcgi_pass php:9000;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
}
So here is the problem (sorry for all that configuration, but I believe it was needed to explain it clearly): if I access lab.dev, everything is fine (which shows that the host entry is set up in Windows), but if I try to access lab.dev/test_autoload/, I get a "File not found." I know this comes from php-fpm not being able to access the files, and the nginx logs confirm it:
nginx_1 | 2015/05/28 14:56:02 [error] 5#5: *3 FastCGI sent in stderr:
"Primary script unknown" while reading response header from upstream,
client: 192.168.59.3, server: lab.dev, request: "GET /test_autoload/ HTTP/1.1",
upstream: "fastcgi://172.17.0.120:9000", host: "lab.dev", referrer: "http://lab.dev/"
I know that there is an index.php in lab/test_autoload/ in both containers; I have checked. In the nginx container it is located at /www/lab/test_autoload/index.php, and in the php container at /srv/http/lab/test_autoload/index.php.
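(As a sanity check, the two views can be compared from the boot2docker host; the container names below are illustrative, docker ps shows the real ones:)

    # the path as nginx sees it
    docker exec fig_nginx_1 ls /www/lab/test_autoload/
    # the path as php-fpm sees it - this is what SCRIPT_FILENAME must point to
    docker exec fig_php_1 ls /srv/http/lab/test_autoload/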
I believe the problem comes from root and/or fastcgi_param SCRIPT_FILENAME, but I do not know how to solve it.
I have tried many things, such as modifying the roots, using a rewrite rule, adding/removing some /s, etc., but nothing changed the result.
Again, sorry for all this config, but I think it was needed to describe the environment I am in.

I finally found the answer.
The variable $fastcgi_script_name did not take the root into account (logical, as it would have included www/ otherwise). This means that a single global file cannot work. An example:
# "generated" vhost
server {
server_name lab.dev;
root /www/lab/;
listen 80;
location ~* \.php$ {
fastcgi_index index.php;
fastcgi_pass php:9000;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
# I need to add /lab after /srv/http because it requests PHP to look at the root of the web files, not in the lab/ folder
# fastcgi_param SCRIPT_FILENAME /srv/http/lab$fastcgi_script_name;
}
}
This meant that I couldn't write the lab/ part in only one place; I needed to write it in two different places (root and fastcgi_param). So I decided to write myself a small PHP script that reads a .json file and turns it into the servers.conf file. If anyone wants to have a look at it, just ask, it will be a pleasure to share it.
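(That script was never posted here; a minimal sketch of the idea, assuming a hypothetical vhosts.json that maps server names to project folders, e.g. {"lab.dev": "lab"}:)

    <?php
    // generate_servers.php - minimal sketch; file names are hypothetical.
    // vhosts.json maps server names to project folders: {"lab.dev": "lab"}
    $vhosts = json_decode(file_get_contents('vhosts.json'), true);
    $out = '';
    foreach ($vhosts as $server => $folder) {
        $out .= "server {\n"
              . "    server_name $server;\n"
              . "    listen 80;\n"
              . "    root /www/$folder/;\n"
              . "    location ~* \\.php$ {\n"
              . "        fastcgi_index index.php;\n"
              . "        fastcgi_pass php:9000;\n"
              . "        include /etc/nginx/fastcgi_params;\n"
              // the folder name is repeated here - the very duplication that
              // makes generating the file worthwhile
              . "        fastcgi_param SCRIPT_FILENAME /srv/http/$folder\$fastcgi_script_name;\n"
              . "    }\n"
              . "}\n\n";
    }
    file_put_contents('servers.conf', $out);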

There's a mistake here:
fastcgi_param SCRIPT_FILENAME /srv/http/$fastcgi_script_name;
The correct line is:
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
This is not the same thing.
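($fastcgi_script_name already begins with a slash, so for a request to /test_autoload/index.php the two variants expand to different paths:)

    /srv/http/$fastcgi_script_name  ->  /srv/http//test_autoload/index.php
    /srv/http$fastcgi_script_name   ->  /srv/http/test_autoload/index.php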

Your nginx.conf is missing essential parts: where are daemon off;, the user nginx; line, worker_processes, etc.? nginx needs some configuration before it can run the http block.
In the http block, same thing: the mime types, the default_type, the log configuration, and sendfile on for boot2docker are missing.
Your problem is clearly not a problem with Docker but with the nginx configuration. Have you tested your application by running docker run before using fig? Did it work?
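(For reference, a manual test with docker run could look like this; a sketch based on the question's fig.yml, where the php container must be started first so that the php:9000 upstream resolves:)

    # start php-fpm by hand with the same mount as in fig.yml
    docker run -d --name php -v /www:/srv/http:ro jprjr/php-fpm
    # then run nginx with the same mount, link and config
    docker run --rm --name nginx --link php:php -v /www:/www:ro -p 80:80 \
        nginx nginx -c /www/conf/nginx/nginx.conf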

Related

How to configure an nginx.conf with a dockerised drupal website?

This is my first time working with nginx, and I'm using it to access my dockerised drupal application from a production subdomain.
So before everything: I'm currently using docker-compose to create my sql, app, and webservice containers. Here is my docker-compose file:
version: '3'
services:
  app:
    image: osiolabs/drupaldevwithdocker-php:7.4
    volumes:
      - ./docroot:/var/www/html:cached
    depends_on:
      - db
    restart: always
    container_name: intranet
  db:
    image: mysql:5.5
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=linux1354web
      - MYSQL_USER=root
      - MYSQL_PASSWORD=linux1354web
      - MYSQL_DATABASE=intranet
    container_name: intranet-db
  web:
    build: ./web
    ports:
      - 88:80
    depends_on:
      - app
    volumes:
      - ./docroot:/var/www/html:cached
    restart: always
    container_name: webIntranet
I don't think the containers are the problem: when I go into the drupal container, the site works. My main problem is the link with the nginx container. Here is my nginx.conf:
# stuff for the http block
client_max_body_size 1g;
# fix error: upstream sent too big header while reading response header from upstream
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;

server {
    listen 80;
    listen [::]:80 default_server;
    server_name _;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html;

    # return the Apache2 landing page
    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php?$query_string; # For Drupal >= 7
    }

    location /intranet {
        # try to serve file directly, fallback to app.php
        try_files $uri $uri/;
    }

    # return the robots.txt file
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location ~ \.php(/|$) {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:80;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
    }
}
And this is my Dockerfile to build the nginx image:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
When I go to localhost:88/ I currently get an Apache2 default page, but the moment I try any other page I always get a 502 Bad Gateway error, and the logs say:
webIntranet | 2021/03/11 08:44:55 [error] 30#30: *1 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 172.26.0.1, server: _, request: "GET /index HTTP/1.1", upstream: "fastcgi://172.26.0.3:80", host: "localhost:88"
To go into more detail about the docker folder layout: docroot contains the drupal website, web contains the nginx Dockerfile and conf, and mysql holds the database files.
I have tried solving the problem by changing the ports, as some solutions mentioned, but it did nothing. I don't understand what could be wrong; I've tried many things with the conf, but none of them works, and I still can't get a single page of the drupal site to show up.
The drupaldevwithdocker-php project isn't using php-fpm, so the response is "unsupported" because it comes from Apache rather than php-fpm (the 72 in the error is most likely the ASCII code of "H", the first byte of an HTTP response, read by nginx as a FastCGI version byte). I'd imagine you'd need something more like this?
proxy_pass http://app:80;
See https://gist.github.com/BretFisher/468bca2900b90a4dddb7fe9a52143fc6
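(In the configuration above, that would mean replacing the location ~ \.php(/|$) block and its fastcgi_* directives with a plain HTTP proxy; a rough, untested sketch:)

    location / {
        # the app container answers with HTTP (Apache), not FastCGI,
        # so proxy_pass replaces the fastcgi_* directives
        proxy_pass http://app:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }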

Docker Swarm - PHP-FPM not working with Nginx running on manager

I've built a cluster (1 manager, 4 workers). Only the manager has a public IP address; the workers and the manager are on a private network.
I tried to build a webserver (Nginx + PHP-FPM) with Docker Swarm, and I placed the Nginx container on the manager so I can reach it from outside the private network. If I do that, the container gets an upstream timed out (110: Connection timed out) while connecting to upstream error when requesting a PHP file. If I run it on a worker node, everything works fine, but then Nginx isn't accessible from outside the private network (I can only request it with curl on the manager or on a worker).
Do you guys have any idea? I really don't understand why running Nginx on the manager makes PHP-FPM time out. Thanks!
Here is the docker-compose.yml file:
version: '3.8'
services:
  nginx:
    image: arm32v5/nginx:latest
    container_name: webserver_nginx
    ports:
      - 80:80
    volumes:
      - /media/storage/webserver/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /media/storage/webserver/nginx/log:/var/log/nginx
      - /media/storage/www:/var/www
    links:
      - php
    networks:
      - webserver
    deploy:
      placement:
        constraints:
          - "node.role==manager"
  php:
    image: arm32v5/php:7.4-fpm
    container_name: webserver_php
    volumes:
      - /media/storage/www:/var/www
      - /media/storage/webserver/nginx/www.conf:/usr/local/etc/php-fpm.d/www.conf
    networks:
      - webserver
    links:
      - nginx
    ports:
      - 9000:9000

networks:
  webserver:
    driver: overlay
    attachable: true
Nginx configuration:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server_tokens off;

    server {
        listen 80;
        error_page 500 502 503 504 /50x.html;

        location / {
            index index.php index.html index.htm;
            root /var/www/;
        }

        location ~ \.php$ {
            root /var/www/;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }
}
I had a similar kind of issue, and the solution was to use the docker service name in the upstream block.
Here is my setup:
- 3 EC2 instances with Docker Swarm
- 1 Swarm manager (Nginx proxy deployed)
- 2 Swarm workers (frontend + backend deployed)
Flow:
Browser ->> Nginx-proxy ->> React-frontend (Nginx) ->> Go-backend
nginx.conf
http {
    upstream allbackend {
        server exam105_frontend:8080; #FrontEnd Service Name
    }

    server {
        listen 80;

        location / {
            proxy_pass http://allbackend;
        }
    }
}
Docker Swarm Services
AWS note:
Remember to open port 80 publicly, and port 8080 (or whatever port number you are using) for internal communication, in the Security Group of your AWS setup; otherwise you won't be able to access the service.
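(For reference, the service names referenced in the upstream block come from the stack name; a hypothetical deploy and check:)

    # deploy the stack; service names are prefixed with the stack name,
    # e.g. the frontend service becomes exam105_frontend
    docker stack deploy -c docker-compose.yml exam105
    # list the services to confirm the exact names nginx should use
    docker service ls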

NGINX configuration when NGINX does not have access to the application files, along with php-fpm and docker

So my docker setup is the following: I have an Nginx container that accepts HTTP requests, and another (custom) container where I have php-fpm and my application code. The application code is not on the host, only in the web container.
I want to configure Nginx as a proxy, to receive requests and route them to php-fpm.
My nginx configuration is the following (I've removed some parts that are not important here):
upstream phpserver {
    server web:9000;
}

server {
    listen 443 ssl http2;
    server_name app;
    root /app/web;
    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;

    location ~ ^/index\.php(/|$) {
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_read_timeout 160;
        internal;
        http2_push_preload on;
    }
}
And my docker configuration (again, I've removed some unimportant parts):
nginx:
  ports:
    - 443:443/tcp
    - 80:80/tcp
  image: nginx
  links:
    - web:web
web:
  image: custom_image
  container_name: web
With this configuration I get the following Nginx error: "open() "/app/web" failed (2: No such file or directory)", because Nginx does not have access to that folder (the folder lives in the web container where php-fpm is).
Is there a way I can configure Nginx to route the HTTP requests, even if it does not have access to the application code?
I understand that one way to fix this issue is to mount the application code into the Nginx container, but I would like to avoid that if possible: in swarm mode, that wouldn't work if the two containers don't share a host.
I managed to solve the issue, so I'm posting my own solution below for people with a similar problem.
The solution was to use the alias directive instead of the root directive in the nginx configuration (I've removed some parts that are not important here):
upstream phpserver {
    server web:9000;
}

server {
    listen 443 http2;
    ssl on;
    server_name app;
    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;

    location ~ ^/index\.php(/|$) {
        alias /app/web;
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        internal;
        http2_push_preload on;
    }
}
Now the request is properly routed to the phpserver upstream on port 9000 and handled there by php-fpm, which knows which script to execute by looking at the alias directive.
The remaining problem was how to serve static files. One solution was to serve them via php-fpm as well, but from what I read online that's not recommended, as the overhead would be bigger. So my solution was to share all the static files with the nginx docker container, so that nginx has access to them and can serve them directly, as sketched below. If somebody has a better solution for serving static files in this scenario, please let me know.
# Cache control for static files
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    #access_log on;
    #log_not_found off;
    expires 360d;
}
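(Sharing only the static assets, not the code, could look roughly like this in the compose file; the static/ path and the volume name are assumptions about the project layout. Note that a plain named volume is local to one node, so across swarm hosts it would need a shared volume driver such as NFS:)

    web:
      image: custom_image
      volumes:
        # the application populates the shared volume with its static files
        - static-assets:/app/web/static
    nginx:
      image: nginx
      volumes:
        # nginx sees only the static files, never the PHP code
        - static-assets:/app/web/static:ro
    volumes:
      static-assets: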

Docker machine users have no access to files on host

I have a problem with setting up my docker environment on a remote machine.
I prepared local docker machines. The problem is with nginx + php-fpm.
Nginx runs as the nginx user, php-fpm runs as the www-data user. The files on the host machine (the application files) are owned by user1, and the chmods are the defaults for a symfony2 application.
When I access my webserver, it returns a 404 error or just a plain "file not found".
For a while the exact same configuration worked on my local Ubuntu 16.04 but failed on Debian Jessie on the server. Right now it doesn't work on either. I tried everything, asked on sysops groups, and googled for hours. Do you have any idea?
Here is my vhost configuration
server {
    listen 80;
    server_name dev.xxxxx.co xxxxx.dev;
    root /usr/share/www/co.xxxxx.dev/web;
    index app_dev.php;
    client_max_body_size 100M;
    fastcgi_read_timeout 1800;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri $uri/ /app.php$is_args$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # When you are using symlinks to link the document root to the
        # current version of your application, you should pass the real
        # application path instead of the path to the symlink to PHP
        # FPM.
        # Otherwise, PHP's OPcache may not properly detect changes to
        # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
        # for more information).
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~ ^/app\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # When you are using symlinks to link the document root to the
        # current version of your application, you should pass the real
        # application path instead of the path to the symlink to PHP
        # FPM.
        # Otherwise, PHP's OPcache may not properly detect changes to
        # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
        # for more information).
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        # Prevents URIs that include the front controller. This will 404:
        # http://domain.tld/app.php/some-path
        # Remove the internal directive to allow URIs like this
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
nginx configuration
user root;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
And my docker-compose file:
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - 8082:80
    volumes:
      - /home/konrad/Workspace:/usr/share/www:ro
      - ./conf/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./sites:/etc/nginx/conf.d:ro
  php-fpm:
    image: php:fpm
    ports:
      - 9000:9000
    volumes:
      - /home/konrad/Workspace:/usr/share/www
      - ./conf/www.conf:/etc/php/7.0/fpm/pool.d/www.conf
      - ./conf/php.ini:/usr/local/etc/php/conf.d/90-php.ini:ro
On the remote server the files are accessible inside the containers, shown as owned by 1001:1001.
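(One way to confirm whether this is a permissions problem is to compare that owner with the UID php-fpm actually runs as; the container names below are placeholders, docker ps shows the real ones:)

    # numeric ownership of the mounted files, as the php-fpm container sees them
    docker exec <php_fpm_container> ls -ln /usr/share/www
    # the UID/GID that www-data maps to inside the container (usually 33:33)
    docker exec <php_fpm_container> id www-data

If the files are owned by 1001:1001 and php-fpm runs as UID 33, the workers can only read them through the world permission bits.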

Changes to css, js, and static files not reflected on nginx docker container

I created a multi-container PHP application. When I modify a php file, I can see the changes in the browser. But when I modify static files such as css and js, there are no changes in the browser. The following is my Dockerfile code:
Dockerfile
FROM nginx:1.8
ADD default.conf /etc/nginx/conf.d/default.conf
ADD nginx.conf /etc/nginx/nginx.conf
WORKDIR /Code/project/
RUN chmod -R 777 /Code/project/
VOLUME /Code/project
default.conf
server {
    listen 80;
    server_name localhost;
    root /Code/project/public;
    index index.html index.htm index.php;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/html;
    #}

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
nginx.conf
user root;
worker_processes 8;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    client_max_body_size 20m;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml
webphp:
  image: index.alauda.cn/chuanty/local_php
  #image: index.alauda.cn/chuanty/local_nginx:snapshot
  ports:
    - "9000:9000"
  volumes:
    - .:/Code/project
  links:
    - cache:cache
    - db:db
    - es:localhost
  extra_hosts:
    - "chaunty.taurus:192.168.99.100"
cache:
  image: redis
  ports:
    - "6379:6379"
db:
  #image: mysql
  image: index.alauda.cn/alauda/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: chuantaiyidev
    MYSQL_USER: cty
    MYSQL_PASSWORD: chuantaiyidev
    MYSQL_DATABASE: taurus
es:
  image: index.alauda.cn/chuanty/local_elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
server:
  #image: index.alauda.cn/ryugou/nginx
  image: index.alauda.cn/chuanty/local_nginx:1.1
  ports:
    - "80:80"
    - "443:443"
  links:
    - webphp:fpm
  volumes_from:
    - webphp:rw
I guess your problem is "sendfile on" in your nginx.conf.
For development purposes, try setting it to off in the server directive of your server block:
server {
    ...
    sendfile off;
}
This will force static files such as css and js to be reloaded; nginx won't serve them from memory anymore.
http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile
One option is to copy the modified static files directly into the Docker container:

    docker cp web/static/main.js container_name:/usr/src/app/static/main.js

Here, web/static is the static directory on your host machine, main.js is the modified file, container_name is the name of the container (which can be found with docker ps), and /usr/src/app/static is the static directory in your Docker container.
If you want to determine exactly where your static files are in the container, you can use docker exec -t -i container_name /bin/bash to explore its directory structure.
I really hope you do not actually copy configuration files into an image!
Docker best practice
Docker images are supposed to be immutable, hence at deploy time the configuration files (which depend on a lot of [environment] variables) should instead be passed to/shared with the container via the docker run option -v|--volume.
As for your issue
If you want to see changes when static files are modified, you currently need to docker build and then docker-compose up on every modification in order to actually change the web page. You probably do not want that (clearly). I suggest you use shared directories (through the -v option), as in the sketch below.
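(For illustration, with the Dockerfile from the question this could look something like the following; the paths are assumed from that Dockerfile:)

    # mount the configs and the project at run time instead of baking them in
    docker run -d --name web \
        -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
        -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
        -v $(pwd):/Code/project \
        -p 80:80 nginx:1.8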
