Nginx and php-fpm optimal setup configuration with Docker [duplicate] - php

I am trying to link 2 separate containers:
nginx:latest
php:fpm
The problem is that php scripts do not work. Perhaps the php-fpm configuration is incorrect.
Here is the source code, which is in my repository. Here is the file docker-compose.yml:
nginx:
  build: .
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./:/var/www/test/
  links:
    - fpm
fpm:
  image: php:fpm
  ports:
    - "9000:9000"
and Dockerfile which I used to build a custom image based on the nginx one:
FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./default.conf /etc/nginx/conf.d/
Lastly, here is my custom Nginx virtual host config:
server {
    listen 80;
    server_name localhost;
    root /var/www/test;

    error_log /var/log/nginx/localhost.error.log;
    access_log /var/log/nginx/localhost.access.log;

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass 192.168.59.103:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Could anybody help me configure these containers correctly to execute php scripts?
P.S.
I run the containers via docker-compose like this:
docker-compose up
from the project root directory.

I know it is kind of an old post, but I've had the same problem and couldn't understand why your code didn't work.
After a LOT of tests I've found out why.
It seems like fpm receives the full path from nginx and tries to find the files in the fpm container, so the fpm volume path must be exactly the same as the server root in the nginx config, even if that path doesn't exist in the nginx container.
To demonstrate:
docker-compose.yml
nginx:
  build: .
  ports:
    - "80:80"
  links:
    - fpm
fpm:
  image: php:fpm
  ports:
    - ":9000"
  # seems like fpm receives the full path from nginx
  # and tries to find the files in this dock, so it must
  # be the same as nginx.root
  volumes:
    - ./:/complex/path/to/files/
/etc/nginx/conf.d/default.conf
server {
    listen 80;

    # this path MUST be exactly as docker-compose.fpm.volumes,
    # even if it doesn't exist in this dock.
    root /complex/path/to/files;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass fpm:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Dockerfile
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/

Don't hardcode container IPs in the nginx config; docker's link feature adds the hostname of the linked container to the container's hosts file, so you should be able to ping it by hostname.
EDIT: Docker 1.9 networking no longer requires you to link containers; when multiple containers are connected to the same network, their hosts files are updated so they can reach each other by hostname.
Every time a docker container spins up from an image (even when stop/start-ing an existing container), it gets a new IP assigned by the docker host. These IPs are not in the same subnet as your actual machines.
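To see this for yourself, you can print a container's current address (a hedged example; the container name here is illustrative, use whatever docker ps shows for yours):
docker inspect -f '{{ .NetworkSettings.IPAddress }}' nginx
# on newer Engines with user-defined networks, the address lives under
# .NetworkSettings.Networks instead:
docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }} {{ end }}' nginx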
see docker linking docs (this is what compose uses in the background)
but more clearly explained in the docker-compose docs on links & expose
links
links:
  - db
  - db:database
  - redis
An entry with the alias' name will be created in /etc/hosts inside containers for this service, e.g:
172.17.2.186 db
172.17.2.186 database
172.17.2.187 redis
expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
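For example, a minimal sketch of exposing the php-fpm port only to linked services (rather than publishing it on the host with ports):
fpm:
  image: php:fpm
  expose:
    - "9000"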
and if you set up your project to get the ports + other credentials through environment variables, links automatically set a bunch of system variables:
To see what environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT: full URL, e.g. DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol: full URL, e.g. DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR: container's IP address, e.g. DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT: exposed port number, e.g. DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO: protocol (tcp or udp), e.g. DB_PORT_5432_TCP_PROTO=tcp
name_NAME: fully qualified container name, e.g. DB_1_NAME=/myapp_web_1/myapp_db_1
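As a hedged illustration only (the db service, database name, and credentials here are hypothetical, not part of this question), reading those variables from PHP might look like:
<?php
// hypothetical: connect to a linked "db" (postgres) service using the
// environment variables that docker-compose links inject
$host = getenv('DB_PORT_5432_TCP_ADDR') ?: 'db';
$port = getenv('DB_PORT_5432_TCP_PORT') ?: '5432';
$pdo  = new PDO("pgsql:host=$host;port=$port;dbname=myapp", 'user', 'secret');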

As pointed out before, the problem was that the files were not visible to the fpm container. However, to share data among containers the recommended pattern is to use data-only containers (as explained in this article).
Long story short: create a container that just holds your data, share it with a volume, and link this volume in your apps with volumes_from.
Using compose (1.6.2 on my machine), the docker-compose.yml file would read:
version: "2"
services:
nginx:
build:
context: .
dockerfile: nginx/Dockerfile
ports:
- "80:80"
links:
- fpm
volumes_from:
- data
fpm:
image: php:fpm
volumes_from:
- data
data:
build:
context: .
dockerfile: data/Dockerfile
volumes:
- /var/www/html
Note that data publishes a volume that is linked to the nginx and fpm services. Then the Dockerfile for the data service, that contains your source code:
FROM busybox
# content
ADD path/to/source /var/www/html
And the Dockerfile for nginx, that just replaces the default config:
FROM nginx
# config
ADD config/default.conf /etc/nginx/conf.d
For the sake of completeness, here's the config file required for the example to work:
server {
    listen 0.0.0.0:80;
    root /var/www/html;

    location / {
        index index.php index.html;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }
}
which just tells nginx to use the shared volume as document root, and sets the right config for nginx to be able to communicate with the fpm container (i.e.: the right HOST:PORT, which is fpm:9000 thanks to the hostnames defined by compose, and the SCRIPT_FILENAME).

New Answer
Docker Compose has been updated. They now have a version 2 file format.
Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.
They now support Docker's networking feature, which, when a project is run, sets up a default network called myapp_default.
From their documentation your file would look something like the below:
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  fpm:
    image: php:fpm
  nginx:
    image: nginx
As these containers are automatically added to the default myapp_default network, they are able to talk to each other. You would then have in the Nginx config:
fastcgi_pass fpm:9000;
Also, as mentioned by @treeface in the comments, remember to ensure PHP-FPM is listening on port 9000. This can be done by editing /etc/php5/fpm/pool.d/www.conf, where you will need listen = 9000.
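A minimal sketch of the relevant pool setting (the path above applies to distro php5 packages; the official php:fpm image keeps its pool files under /usr/local/etc/php-fpm.d/ instead):
; www.conf (pool configuration)
[www]
; listen on a TCP port reachable from the nginx container,
; not on a unix socket local to the php container
listen = 9000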
Old Answer
I have kept the below here for those using older versions of Docker/Docker Compose who would like the information.
I kept stumbling upon this question on Google when trying to find an answer, but it was not quite what I was looking for due to the Q/A emphasis on docker-compose (which at the time of writing only had experimental support for docker networking features). So here is my take on what I have learnt.
Docker has recently deprecated its link feature in favour of its networks feature
Therefore, using the Docker networks feature, you can link containers by following these steps. For full explanations of the options, read up on the docs linked previously.
First, create your network:
docker network create --driver bridge mynetwork
Next, run your PHP-FPM container, ensuring you open up port 9000 and assign it to your new network (mynetwork).
docker run -d -p 9000 --net mynetwork --name php-fpm php:fpm
The important bit here is --name php-fpm at the end of the command, which sets the container name; we will need it later.
Next, run your Nginx container, again assigning it to the network you created.
docker run --net mynetwork --name nginx -d -p 80:80 nginx:latest
For the PHP and Nginx containers you can also add --volumes-from options etc. as required.
Now comes the Nginx configuration, which should look something like this:
server {
    listen 80;
    server_name localhost;
    root /path/to/my/webroot;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Notice the fastcgi_pass php-fpm:9000; in the location block. That tells nginx to contact the container php-fpm on port 9000. When you add containers to a Docker bridge network, they all automatically get a hosts file update which maps their container name to their IP address. So when Nginx sees that, it knows to contact the PHP-FPM container you named php-fpm earlier and assigned to your mynetwork Docker network.
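A quick, hedged way to confirm that name resolution works from the nginx container (getent is available in the stock debian-based nginx image; ping may not be installed):
docker exec nginx getent hosts php-fpm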
You can add that Nginx config either during the build process of your Docker container or afterwards; it's up to you.

As previous answers have solved for, but it should be stated very explicitly: the php code needs to live in the php-fpm container, while the static files need to live in the nginx container. For simplicity, most people have just attached all the code to both, as I have also done below. In the future, I will likely separate out these different parts of the code in my own projects so as to minimize which containers have access to which parts.
Updated my example files below with this latest revelation (thank you @alkaline).
This seems to be the minimal setup for the compose version 2 file format onward
(because things got a lot easier with version 2)
docker-compose.yml:
version: '2'
services:
  php:
    container_name: test-php
    image: php:fpm
    volumes:
      - ./code:/var/www/html/site
  nginx:
    container_name: test-nginx
    image: nginx:latest
    volumes:
      - ./code:/var/www/html/site
      - ./site.conf:/etc/nginx/conf.d/site.conf:ro
    ports:
      - 80:80
(UPDATED the docker-compose.yml above: for sites that have css, javascript, static files, etc., you will need those files accessible to the nginx container, while still having all the php code accessible to the fpm container. Again, because my base code is a messy mix of css, js, and php, this example just attaches all the code to both containers.)
In the same folder:
site.conf:
server {
    listen 80;
    server_name site.local.[YOUR URL].com;
    root /var/www/html/site;
    index index.php;

    location / {
        try_files $uri =404;
    }

    location ~ \.php$ {
        fastcgi_pass test-php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
In folder code:
./code/index.php:
<?php
phpinfo();
and don't forget to update your hosts file:
127.0.0.1 site.local.[YOUR URL].com
and run docker-compose:
$ docker-compose up -d
and try the URL in your favorite browser:
site.local.[YOUR URL].com/index.php

I think we also need to give the fpm container the volume, don't we? So:
fpm:
  image: php:fpm
  volumes:
    - ./:/var/www/test/
If I don't do this, I run into this error when firing a request, as fpm cannot find the requested file:
[error] 6#6: *4 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.42.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.81:9000", host: "localhost"

For anyone else getting
Nginx 403 error: directory index of [folder] is forbidden
when using index.php (while index.html works perfectly), even though index.php is listed in the index directive in the server block of your site config in sites-enabled:
server {
    listen 80;
    # this path MUST be exactly as docker-compose php volumes
    root /usr/share/nginx/html;
    index index.php;
    ...
}
Make sure your nginx.conf file at /etc/nginx/nginx.conf actually loads your site config in the http block...
http {
    ...
    include /etc/nginx/conf.d/*.conf;
    # Load our websites config
    include /etc/nginx/sites-enabled/*;
}
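To verify the site config is actually being picked up, a quick check (the container name is taken from the compose example earlier; adjust to yours) is to test the configuration inside the container:
docker exec test-nginx nginx -t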

Related

Nginx on docker, curl get Connection refused

I am working in a local environment with docker.
I have an nginx web container and a php container which are in the same network.
I build the php container from my own dockerfile (with php-fpm and php-cli), and I compose the nginx container in docker-compose from the nginx:stable hub image.
I have 2 projects running in it: a symfony one (http://i-r4y.kaiza.lh/) and a drupal one (http://i-z4r4.kaiza.lh/). The symfony project exposes an api which has to be consumed by the drupal one. The problem is that I get an error when I call the symfony api from drupal: cURL error 7: Failed to connect to i-r4y.kaiza.lh port 80: Connection refused
I thought it was a configuration issue on the symfony side api route, like it must be public or accept CORS etc...
But when I open a bash shell in the php container and curl either the symfony or drupal url, I get the same error.
app#kz-php74:/var/www$ curl http://i-r4y.kaiza.lh
curl: (7) Failed to connect to i-r4y.kaiza.lh port 80: Connection refused
app#kz-php74:/var/www$ curl http://i-z4r4.kaiza.lh
curl: (7) Failed to connect to i-z4r4.kaiza.lh port 80: Connection refused
I checked in the php container that the hosts are present in /etc/hosts
app#kz-php74:/var/www$ cat /etc/hosts | grep i-
127.0.0.1 i-r4y.kaiza.lh
127.0.0.1 i-z4r4.kaiza.lh
Here is the docker-compose.yml:
version: '2.4'
services:
  php7.4:
    build:
      context: ../../../dockerfile
      dockerfile: Dockerfile.php
      args:
        PHP_VERSION: 7.4
    container_name: "kz-php74"
    hostname: "kz-php74"
    user: 1000:1000
    working_dir: /var/www
    volumes:
      - "${LOCAL_PATH}/../www:/var/www"
    extra_hosts:
      - "i-r4y.kaiza.lh:127.0.0.1"
      - "i-z4r4.kaiza.lh:127.0.0.1"
    networks:
      - kz_local
  mysql:
    container_name: kz-mysql
    image: mariadb:10.4.0
    volumes:
      - ${LOCAL_PATH}/.data/mariadb:/var/lib/mysql
      - ${LOCAL_PATH}/config/mariadb/conf.d/custom.cnf:/etc/mysql/conf.d/custom.cnf
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - ${MYSQL_PORT:-3306}:3306
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      - kz_local
  web:
    image: nginx:stable
    container_name: kz-web
    volumes:
      - ${LOCAL_PATH}/config/nginx/conf.d:/etc/nginx/conf.d
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - 80:80
    networks:
      - kz_local
networks:
  kz_local:
    external: true
The nginx config of drupal:
server {
    listen 80;
    listen [::]:80;
    server_name i-z4r4.kaiza.lh;
    root /var/www/i-z4r4/web;
    resolver 127.0.0.11 ipv6=off;

    location @rewrite {
        rewrite ^/(.*)$ /index.php?q=$1;
    }

    # In Drupal 8, we must also match new paths where the '.php' appears in
    # the middle, such as update.php/selection. The rule we use is strict,
    # and only allows this pattern with the update.php front controller.
    # This allows legacy path aliases in the form of
    # blog/index.php/legacy-path to continue to route to Drupal nodes. If
    # you do not have any paths like that, then you might prefer to use a
    # laxer rule, such as:
    #   location ~ \.php(/|$) {
    # The laxer rule will continue to work if Drupal uses this new URL
    # pattern with front controllers other than update.php in a future
    # release.
    location ~ '\.php$|^/update.php' {
        set $fastcgi_pass "kz-php74:9000";
        fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
        # Security note: If you're running a version of PHP older than the
        # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
        # See http://serverfault.com/q/627903/94922 for details.
        include fastcgi_params;
        # Block httpoxy attacks. See https://httpoxy.org/.
        fastcgi_param HTTP_PROXY "";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_intercept_errors on;
        fastcgi_pass $fastcgi_pass;
    }

    ...

    upstream php {
        server kz-php74:8080;
    }
}
For symfony:
server {
    listen 80;
    listen [::]:80;
    server_name i-r4y.kaiza.lh;
    root /var/www/i-r4y/public;
    resolver 127.0.0.11 ipv6=off;

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        set $fastcgi_pass "kz-php74:9000";
        fastcgi_pass $fastcgi_pass;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    ...

    upstream php {
        server kz-php74:8080;
    }

    ...
}
Would anyone have any idea why this is not working?
Thanks
You need to expose port 80 to your docker host.
It looks like you are trying to curl from your docker host (your real machine running docker) to nginx running in a docker container.
You can do that in docker-compose with the following:
web:
  image: nginx:stable
  …
  ports:
    - 80:80
This will get you to nginx. However, your next obstacle will probably be nginx reaching your php service.
Like @AmyDev mentioned, you'll be better served using docker's name resolution for that.
In your nginx config, you need the following line to point nginx at docker's internal DNS:
resolver 127.0.0.11 ipv6=off;
Then you can declare your upstream with the following:
upstream php {
    # use the docker service name. I removed the . b/c I don't know if that works in a docker service name
    # This assumes the docker service has been renamed from php7.4 to php74 in docker-compose.yml
    server php74:8080; # or whatever port to which php is listening
}
I solved the problem by adding aliases to the web container's network, through which I can reach it from the php container.
...
web:
  image: nginx:stable
  ....
  networks:
    kz_local:
      # To allow fetch (call, curl) from php container.
      aliases:
        - api.i-r4y.kaiza.lh
        - api.i-z4r4.kaiza.lh
...
and of course, I needed to add the url aliases to the nginx config:
...
server_name i-z4r4.kaiza.lh api.i-z4r4.kaiza.lh;
...
When you run curl http://i-r4y.kaiza.lh, you make a request to the same container (php7.4), not to web (nginx), because the extra_hosts entries point those names at 127.0.0.1. If you want to make a request to another container, you can use the container's service name as the domain.
Try running curl http://web in the php container.
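Since nginx selects the server block by the Host header, a hedged variant of that check from inside the php container (the service name web comes from the compose file above) would be:
curl -H "Host: i-r4y.kaiza.lh" http://web/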

Convert docker-compose to Kubernetes for nginx & php-fpm containers

I have a dockerized symfony project and I'm trying to deploy it on a Kubernetes cluster on GCP.
In development I use docker-compose and I have two separate containers for php-fpm and nginx.
When I run docker-compose up --build, it all works fine, but when I try to create a Kubernetes cluster I get this error after I run kubectl apply -f nginx.deployment.yaml:
nginx: [emerg] host not found in upstream "php-fpm" in /etc/nginx/conf.d/default.conf:11
This is the nginx default.conf file:
server {
    listen 80;
    server_name localhost;
    root /app/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
Please note
fastcgi_pass php-fpm:9000;
which references the php-fpm container.
Dockerfile for nginx:
ARG VERSION
# Dev image
FROM nginx:${VERSION}-alpine as dev
# Copy nginx config
COPY ./docker/nginx/default.conf /etc/nginx/conf.d/default.conf
# Prod image
FROM dev as prod
# Copy assets
COPY ./assets /app/public
Here is the nginx.deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f ../docker-compose.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f ../docker-compose.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: nginx
    spec:
      containers:
        - image: myDockerHubRegistry/symfony-nginx:0.2
          name: nginx
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /app/public
              name: nginx-claim
      restartPolicy: Always
      volumes:
        - name: nginx-claim
          persistentVolumeClaim:
            claimName: nginx-claim
status: {}
I also tried to put nginx and php-fpm on the same deployment, but I still get the same error.
What am I missing?
There are 2 issues here:
1. The hostname php-fpm doesn't resolve to the service's IP via DNS in k8s.
2. nginx/openresty refuses to start if a hostname used in a proxy pass can't be resolved:
nginx: [emerg] host not found in upstream "hostname"
SOLUTION:
Use a hostname that resolves correctly inside kubernetes.
The DNS scheme used by Core DNS in GKE is <service>.<namespace>.svc.cluster.local
Store the correct hostname for the php-fpm service in a variable first.
Then use this variable as the target of the (fastcgi) proxy pass.
This will make nginx start regardless of being able to resolve the target hostname.
Example:
set $upstream php-fpm.your-namespace.svc.cluster.local:9000;
fastcgi_pass $upstream;
Since nginx refuses to start if any proxy_pass service is not already available, you need to start the php-fpm service first. You haven't shared the deployment yaml for php-fpm. Along with the kubernetes deployment for php-fpm, you need to create a kubernetes service object for php-fpm with the name php-fpm within the same namespace as nginx.
You could also use the variable hack suggested by Nicolai so that you can start nginx without any dependency on php-fpm. But in either case, you need to create the Kubernetes deployment and service objects for php-fpm for your application to actually work.
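A minimal sketch of such a Service object (the selector label here is an assumption; it must match whatever labels your php-fpm Deployment's pods actually carry, e.g. the kompose-generated ones):
apiVersion: v1
kind: Service
metadata:
  name: php-fpm
spec:
  selector:
    io.kompose.service: php-fpm   # assumed pod label; adjust to your Deployment
  ports:
    - port: 9000
      targetPort: 9000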

Nginx + PHP-FPM: Connection refused on port 5000 of PHP-FPM container while port 9000 is no problem

I am training myself in Docker and I am trying to set up an Nginx + PHP-FPM environment that I eventually want to host on ECS (just for training purposes). The PHP environment has a basic Symfony 4 service running (it just returns some JSON, nothing special). The issue, however, is with my Nginx container.
Something really strange happens: I have exposed port 5000 on the PHP container in its Dockerfile, but my Nginx container gives me bad gateway errors when trying to reach the PHP container on this port. If I change the port the Nginx container uses for fastcgi_pass to 9000, while not changing the exposed port of the PHP container (leaving it at 5000 in the Dockerfile), everything is fine and the setup just works.
Is there anyone who could give me some hints as to why this is?
I have checked docker ps for the ports, and indeed both ports 5000 and 9000 are open on the PHP container, but only port 9000 seems to be usable.
Nginx conf file
server {
    listen 80;
    server_name localhost;
    root /var/www/symfony/public;
    index index.php;

    access_log /var/log/access.log;
    error_log /var/log/error.log;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param HTTPS off;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass symfony:9000;
        fastcgi_index index.php;
    }
}
Symfony dockerfile (shortened)
FROM php:7.2-fpm
WORKDIR /var/www/symfony
... (installation things)
COPY . /var/www/symfony
EXPOSE 5000
Docker-compose
version: "3.7"
services:
symfony:
container_name: symfony
build: ./symfony
volumes:
- ./symfony:/var/www/symfony
- ./logs/symfony:/var/www/symfony/var/logs
networks:
- api
nginx:
container_name: nginx
build: ./nginx
ports:
- 80:80
volumes:
- ./symfony:/var/www/symfony
- ./logs/nginx:/var/log/nginx
networks:
- api
depends_on:
- symfony
volumes:
symfony:
networks:
api:
driver: "bridge"
Docker ps result
PORTS                NAMES
0.0.0.0:80->80/tcp   nginx
5000/tcp, 9000/tcp   symfony
I expect to be able to change symfony:9000 to symfony:5000 and be able to get the result from the PHP container.
In the official docker php image there is a configuration file, /usr/local/etc/php-fpm.d/zz-docker.conf, which overrides whatever 'listen = ...' value you set in your pool config with 'listen = 9000'.
Therefore, you need to change (or delete) zz-docker.conf.
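For reference, the override in that file is roughly the following, and one hedged way to deal with it is to rewrite the port in your own Dockerfile (the sed line is my own suggestion, not from the image docs):
; /usr/local/etc/php-fpm.d/zz-docker.conf (loaded last, so its listen value wins)
[www]
listen = 9000

# e.g. in your php Dockerfile, force the pool to listen on 5000 instead:
RUN sed -i 's/listen = 9000/listen = 5000/' /usr/local/etc/php-fpm.d/zz-docker.conf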
Related comment: https://github.com/docker-library/php/issues/241
I hope this solves your problem.
Thank you.

I am setting up an nginx docker container as a reverse proxy to php-fpm. Is it even possible?

I am working on my Raspberry Pi (arm architecture) and I am using docker to run containers on a very lightweight OS (HyperiotOS). I have successfully set up the nginx + php-fpm containers so that nginx serves php files, as long as the code volume is mounted on both containers.
Now my problem is: I am trying to set up nginx pretty much as a reverse proxy, meaning I don't want nginx to access any code on disk; I would like to forward all requests to the php container and display the results. I tried proxy_pass to it, but it did not work. Is there a way to do that? Example code below.
My docker compose:
version: '3'
services:
  nginx:
    image: arm32v7/nginx:latest
    container_name: nginx
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./logs:/var/log/nginx/
      - ./code:/code # this is what I want to get rid of
    networks:
      - webserver
  php:
    image: arm32v7/php:7.3-fpm
    expose:
      - "9000"
    restart: unless-stopped
    container_name: php-fpm
    volumes:
      - ./code:/code
    networks:
      - webserver
networks:
  webserver:
    driver: bridge
My nginx config:
server {
    listen 80;
    index index.php index.html;
    server_name raspberry.local;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri /dev/null =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Now, inside the /code folder, I have an index.php with phpinfo().
This configuration works well, and after accessing raspberry.local:8080 I can see the phpinfo page.
However, in order to access it, I have /code mounted in the nginx container:
- ./code:/code # this is what I want to get rid of
Now I want to eliminate that mount so I can put nginx, acting as a balancer, somewhere else without it needing access to the code. I would like to proxy the requests straight to the php container. I tried proxy_pass http://php:9000 but that broke things. What is the best way to "detach" nginx from the code and have it act as a standalone proxy? There are several reasons to eliminate the mount if possible; one is a production-type build, where I cannot share the folder and the code needs to be baked in. I don't want two containers carrying the code. Otherwise, I could just put apache into the php container, drop fpm, and have nginx proxy to it.
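One commonly used approach (a sketch on my part, not from the original post) is to drop root/try_files on the nginx side and hard-code the document root that php-fpm should use in SCRIPT_FILENAME, so nginx never needs the files; note that any non-php static files would then have to be served some other way:
server {
    listen 80;
    server_name raspberry.local;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        include fastcgi_params;
        # /code is where the code lives inside the php container only
        fastcgi_param SCRIPT_FILENAME /code$fastcgi_script_name;
    }

    location = / {
        # map the site root to index.php without needing the file on disk
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /code/index.php;
    }
}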
