I have a simple docker-compose setup with php-fpm and nginx, and I can't get any PHP file to be served. When I go to localhost, it shows "File not found".
I have tried everything I could find on the net, but everything has failed.
It works fine for HTML files, but not for PHP. It seems to be a path issue, or something like that.
I see this error when I run docker-compose logs:
project3-php_1 | 172.17.0.5 - 29/Mar/2016:13:29:12 +0000 "GET /index.php" 404
project3-front_1 | 172.17.0.1 - - [29/Mar/2016:13:29:12 +0000] "GET / HTTP/1.1" 404 27 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36"
project3-front_1 | 2016/03/29 13:29:12 [error] 8#8: *3 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.2:9000", host: "localhost"
Here's my docker-compose:
project3-front:
  image: nginx
  ports:
    - "80:80"
  links:
    - "project3-php:project3-php"
  volumes:
    - ".:/home/docker"
    - "./nginxdir/default.conf:/etc/nginx/conf.d/default.conf"
    - "./html:/usr/share/nginx/html"

project3-php:
  build: phpdir
  volumes:
    - ".:/home/docker:rw"
    - "./html:/var/www/html"
  ports:
    - "9000:9000"
  working_dir: "/home/docker"
Then my dockerfile for php:
FROM php:5.6-fpm
EXPOSE 9000
my default.conf for nginx:
server {
listen 80;
server_name localhost;
index index.php index.html;
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log;
root /usr/share/nginx/html;
location ~ \.php$ {
fastcgi_pass project3-php:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
pwd of the main folder is:
/media/revenge/share/PROJECTS/docker_images/php7-nginx
The file hierarchy is:
├── docker-compose.yml
├── html
│ ├── index.php
├── nginxdir
│ ├── default.conf
├── phpdir
│ ├── dockerfile
│ └── php.ini
The whole folder is chmod 777
Any idea would be greatly appreciated. I'm sure there is something I didn't get. Thanks in advance.
Here is how you can find the problem:
Switch on nginx debug mode with a custom command, e.g.:
docker-compose.yml
web:
  image: nginx
  volumes:
    - "~/www/project:/var/www"
    - "~/www/project/vhost.conf:/etc/nginx/conf.d/site.conf"
  # Add this row:
  command: [nginx-debug, '-g', 'daemon off;']
Edit your "site.conf" file (in this example it is the ~/www/project/vhost.conf file) Switch on the debug mode by the error_log (add "debug" word at the end):
error_log "/var/log/nginx/error.log" debug;
(Re)start the "web" container:
docker-compose stop web
docker-compose up -d web
Check that all containers are running:
docker-compose ps
Test in the browser.
"Connect" to the container, or copy the log out (see http://blog.dcycle.com/blog/ae67284c/docker-compose-cp), and view the /var/log/nginx/error.log file.
The most common cause of the problem
The web and php-fpm containers are using different directory structures (or the path hasn't been set at all). If you want to use different structures, you have to set the php-fpm container's path in the fastcgi_param SCRIPT_FILENAME directive, like this:
If your docker-compose.yml contains something like this:
phpfpm:
  image: php:fpm
  volumes:
    - "~/www/project:/var/www/html/user/project"
    #               ^^^^^^^^^^^^^^^^^^^^^^^^^^ the path in the php-fpm container
you have to set this in the "site.conf" of the nginx container:
fastcgi_param SCRIPT_FILENAME /var/www/html/user/project$fastcgi_script_name;
# ^^^^^^^^^^^^^^^^^^^^^^^^^^
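A quick sanity check of that value is to list the path from inside the php-fpm container; it must contain the script nginx is asking for. A sketch, assuming the service name phpfpm from the snippet above:

# The script nginx passes in SCRIPT_FILENAME must exist at this path inside the php-fpm container
docker-compose exec phpfpm ls -la /var/www/html/user/project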
Finally found it:
I was missing this line in the volumes of the PHP section of docker-compose:
"./html:/usr/share/nginx/html"
Here's what the docker-compose should look like:
project3-front:
  image: nginx
  ports:
    - "80:80"
  links:
    - "project3-php:project3-php"
  volumes:
    - ".:/home/docker"
    - "./nginxdir/default.conf:/etc/nginx/conf.d/default.conf"
    - "./html:/usr/share/nginx/html"

project3-php:
  build: phpdir
  volumes:
    - ".:/home/docker:rw"
    - "./html:/var/www/html"
    - "./html:/usr/share/nginx/html"
  ports:
    - "9000:9000"
  working_dir: "/home/docker"
The document root declared in the nginx default.conf (here /usr/share/nginx/html) had to be mounted in the php part of docker-compose as well (it was only mounted under nginx before).
That's a relief ;)
In my case the volumes: of php and nginx were pointing at the correct (and hence the same) directory, but in my nginx config there was an NGINX_SERVER_ROOT pointing the wrong way.
So make sure you double-check all volumes and root directory settings. Some are easy to overlook.
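One way to double-check what each container actually sees is to ask Docker for the mounts; a sketch, where <container> is the name of your nginx or php container:

# Print source -> destination for every mount of a container
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' <container>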
While replacing $document_root$fastcgi_script_name; with /var/www/html/user/project$fastcgi_script_name; might fix the issue, the more elegant approach in my opinion would be to simply change the nginx root directory in default.conf:
server {
listen 80;
server_name localhost;
index index.php index.html;
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log;
root /var/www/html;
location ~ \.php$ {
fastcgi_pass project3-php:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
And in docker-compose.yml:
project3-front:
  image: nginx
  ports:
    - "80:80"
  links:
    - "project3-php:project3-php"
  volumes:
    - ".:/home/docker"
    - "./nginxdir/default.conf:/etc/nginx/conf.d/default.conf"
    - "./html:/var/www/html"
I had the same problem, but it wasn't solved by any of the answers here:
Yesterday a Docker update came in and was installed. The problem caused by this update is that the (vEthernet (DockerNAT)) network changed, and with that my firewall (in my case Kaspersky) reset the network from "trusted" to "public".
I figured this out by opening "Docker -> Settings -> Shared Drives". At first glance everything seemed fine: there was a tick on each drive I needed to be shared. Next I tried disabling and re-enabling my shared drives. On re-enabling them, the familiar error "A firewall is blocking file sharing between Windows and the containers. See documentation for more info." came up. Perfect! The shared drive was no longer able to access the files, yet Docker had been insisting that everything was fine.
So I could fix it by:
Setting the (vEthernet (DockerNAT)) network as trusted again in my Kaspersky "Firewall - Network" settings.
Restarting my docker containers with docker-compose up; everything runs like a charm and my files can be accessed again.
Another possible case: you are running the docker container and then ran composer install/update. Run the following to restart the container:
docker-compose down
docker-compose up -d
Related
I am trying to run nginx and php in docker-compose. I am able to build the containers and when I browse to an html file in my browser nginx serves it. But when I try to browse a php file I just get 502 Bad Gateway.
I'm aware that it may be something to do with the server name in my nginx configuration file, since the tutorial I got it from mentions this needs to be correct. But I've tried changing it to 'localhost' and 'ip-of-my-dockerhost' with the same result.
I'm also aware I should look at the log files. However, I am a bit of a Linux novice. When I open Portainer, go to the container called Web, execute a shell, and run:
cd /var/log/nginx ; ls
I see two files called access.log and error.log
However, if I type
cat error.log
nothing happens! I get a new blank line with a blinking cursor that never outputs anything. The shell is then 'frozen' until I press CTRL-C to kill the task.
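(A side note on the empty log: in the official nginx image, /var/log/nginx/error.log is normally a symlink to the container's stderr, which would explain why cat just blocks. If that is the case here, the log can be read from the host instead, e.g.:)

# nginx's access/error logs are redirected to the container output in the official image
docker logs Web
# or, via compose:
docker-compose logs -f web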
Contents of docker-compose.yml
version: "3.7"

#############################
#
# NETWORKS
#
#############################
networks:
  main:
    external:
      name: main
  default:
    driver: bridge

###########################
#
# SERVICES
#
###########################
services:
  # All services / apps go below this line
  portainer:
    container_name: Portainer
    image: portainer/portainer:latest
    restart: unless-stopped
    command: -H unix:///var/run/docker.sock
    networks:
      - main
    ports:
      - "$PORTAINER_PORT:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $DATADIR/portainer:/data # Change to local directory if you want to save/transfer config locally
    environment:
      - TZ=$TZ

  <<snip as don't believe relevant - a bunch of other containers here such as influxdb, radarr etc>>

  web:
    container_name: Web
    image: nginx:latest
    networks:
      - main
    ports:
      - 81:80
    environment:
      - PUID=$PUID
      - PGID=$PGID
    volumes:
      - $CONFIGDIR/nginx/site.conf:/etc/nginx/conf.d/default.conf
      - $DATADIR/nginx/code:/code

  php:
    container_name: php
    image: php:7-fpm
    networks:
      - main
    ports:
      - 9090:9000
    environment:
      - PUID=$PUID
      - PGID=$PGID
    volumes:
      - $CONFIGDIR/nginx/site.conf:/etc/nginx/conf.d/default.conf
      - $DATADIR/nginx/code:/code
contents of $CONFIGDIR/nginx/site.conf
server {
index index.php index.html;
server_name 192.168.1.7; ## This is the IP address of my docker host but I've also tried 'localhost' and 'php-docker.local'
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
root /code;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php:9090;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
9090 is the published host port. That means you can reach the PHP container at localhost:9090 from the host, but it is not used for container-to-container communication; on the Docker network the container still listens on 9000.
Change your site.conf:
fastcgi_pass php:9090; => fastcgi_pass php:9000;
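If in doubt, you can confirm which port php-fpm is actually listening on inside the container; a sketch, assuming the official php:fpm image layout (pool configs under /usr/local/etc/php-fpm.d/) and the container name php used above:

# Show the listen directive(s) of the php-fpm pools
docker exec -it php sh -c 'grep -Rn "^listen" /usr/local/etc/php-fpm.d/'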
I am working in a local environment with docker.
I have an nginx web container and a php container which are on the same network.
I build the php container from my own Dockerfile (with php-fpm and php-cli); the nginx container is defined in a docker-compose file from the nginx:stable hub image.
I have 2 projects running in it: a Symfony one (http://i-r4y.kaiza.lh/) and a Drupal one (http://i-z4r4.kaiza.lh/). The Symfony app exposes an API which has to be consumed by the Drupal app. The problem is that I get an error when I call the Symfony app from the Drupal one: cURL error 7: Failed to connect to i-r4y.kaiza.lh port 80: Connection refused
I thought it was a configuration issue on the Symfony side of the API route, like it must be public or accept CORS, etc.
But when I open a bash shell in the php container and curl either the Symfony or the Drupal URL, I get the same error.
app#kz-php74:/var/www$ curl http://i-r4y.kaiza.lh
curl: (7) Failed to connect to i-r4y.kaiza.lh port 80: Connection refused
app#kz-php74:/var/www$ curl http://i-z4r4.kaiza.lh
curl: (7) Failed to connect to i-z4r4.kaiza.lh port 80: Connection refused
I checked in the php container that the hosts are present in /etc/hosts
app#kz-php74:/var/www$ cat /etc/hosts | grep i-
127.0.0.1 i-r4y.kaiza.lh
127.0.0.1 i-z4r4.kaiza.lh
Here is the docker-compose.yml :
version: '2.4'
services:
  php7.4:
    build:
      context: ../../../dockerfile
      dockerfile: Dockerfile.php
      args:
        PHP_VERSION: 7.4
    container_name: "kz-php74"
    hostname: "kz-php74"
    user: 1000:1000
    working_dir: /var/www
    volumes:
      - "${LOCAL_PATH}/../www:/var/www"
    extra_hosts:
      - "i-r4y.kaiza.lh:127.0.0.1"
      - "i-z4r4.kaiza.lh:127.0.0.1"
    networks:
      - kz_local

  mysql:
    container_name: kz-mysql
    image: mariadb:10.4.0
    volumes:
      - ${LOCAL_PATH}/.data/mariadb:/var/lib/mysql
      - ${LOCAL_PATH}/config/mariadb/conf.d/custom.cnf:/etc/mysql/conf.d/custom.cnf
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - ${MYSQL_PORT:-3306}:3306
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      - kz_local

  web:
    image: nginx:stable
    container_name: kz-web
    volumes:
      - ${LOCAL_PATH}/config/nginx/conf.d:/etc/nginx/conf.d
      - ${LOCAL_PATH}/../www:/var/www
    ports:
      - 80:80
    networks:
      - kz_local

networks:
  kz_local:
    external: true
The nginx config of drupal:
server {
listen 80;
listen [::]:80;
server_name i-z4r4.kaiza.lh;
root /var/www/i-z4r4/web;
resolver 127.0.0.11 ipv6=off;
location @rewrite {
rewrite ^/(.*)$ /index.php?q=$1;
}
# In Drupal 8, we must also match new paths where the '.php' appears in
# the middle, such as update.php/selection. The rule we use is strict,
# and only allows this pattern with the update.php front controller.
# This allows legacy path aliases in the form of
# blog/index.php/legacy-path to continue to route to Drupal nodes. If
# you do not have any paths like that, then you might prefer to use a
# laxer rule, such as:
# location ~ \.php(/|$) {
# The laxer rule will continue to work if Drupal uses this new URL
# pattern with front controllers other than update.php in a future
# release.
location ~ '\.php$|^/update.php' {
set $fastcgi_pass "kz-php74:9000";
fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
# Security note: If you're running a version of PHP older than the
# latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
# See http://serverfault.com/q/627903/94922 for details.
include fastcgi_params;
# Block httpoxy attacks. See https://httpoxy.org/.
fastcgi_param HTTP_PROXY "";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_intercept_errors on;
fastcgi_pass $fastcgi_pass;
}
...
upstream php {
server kz-php74:8080;
}
}
For symfony:
server {
listen 80;
listen [::]:80;
server_name i-r4y.kaiza.lh;
root /var/www/i-r4y/public;
resolver 127.0.0.11 ipv6=off;
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
set $fastcgi_pass "kz-php74:9000";
fastcgi_pass $fastcgi_pass;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS on;
}
location @rewriteapp {
rewrite ^(.*)$ /app.php/$1 last;
}
...
upstream php {
server kz-php74:8080;
}
...
}
Does anyone have any idea why this is not working?
Thanks
You need to expose port 80 to your docker host.
It looks like you are trying to curl from your docker host (your real machine running docker) to nginx running in a docker container.
You can do that in docker-compose with the following:
web:
  image: nginx:stable
  …
  ports:
    - 80:80
This will get you to nginx. However, your next obstacle will probably be nginx reaching your php service.
Like @AmyDev mentioned, you'll be better served using docker's name resolution for that.
In your nginx config, you need the following line to point nginx at docker's internal DNS:
resolver 127.0.0.11 ipv6=off;
Then you can declare your upstream with the following:
upstream php {
# use the docker service name. I removed the . b/c I don't know if that works in a docker service name
# This assumes the docker service has been renamed from php7.4 to php74 in docker-compose.yml
server php74:8080; # or whatever port to which php is listening
}
I solved the problem by adding aliases to the network of the web container, through which it can be reached from the php container.
...
web:
  image: nginx:stable
  ....
  networks:
    kz_local:
      # To allow fetch (call, curl) from php container.
      aliases:
        - api.i-r4y.kaiza.lh
        - api.i-z4r4.kaiza.lh
...
And of course, I needed to add the URL aliases in the nginx config:
...
server_name i-z4r4.kaiza.lh api.i-z4r4.kaiza.lh;
...
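With the aliases in place, the call can be verified from inside the php container; a sketch, assuming the service name php7.4 from the compose file above:

# Should now resolve to the nginx container instead of 127.0.0.1
docker-compose exec php7.4 curl -I http://api.i-z4r4.kaiza.lh/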
When you run curl http://i-r4y.kaiza.lh you make a request to the same container (php7.4), not to web (nginx). If you want to make a request to another container, you can use the container's service name as the domain.
Try running curl http://web in the php container.
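Since that nginx instance serves several vhosts, passing the expected Host header along makes sure the right server block answers; a sketch, assuming the service names web and php7.4 from the question:

# Reach nginx by its service name, but ask for the Symfony vhost
docker-compose exec php7.4 curl -I -H "Host: i-r4y.kaiza.lh" http://web/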
I am trying to link 2 separate containers:
nginx:latest
php:fpm
The problem is that php scripts do not work. Perhaps the php-fpm configuration is incorrect.
Here is the source code, which is in my repository. Here is the file docker-compose.yml:
nginx:
  build: .
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./:/var/www/test/
  links:
    - fpm

fpm:
  image: php:fpm
  ports:
    - "9000:9000"
and Dockerfile which I used to build a custom image based on the nginx one:
FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./default.conf /etc/nginx/conf.d/
Lastly, here is my custom Nginx virtual host config:
server {
listen 80;
server_name localhost;
root /var/www/test;
error_log /var/log/nginx/localhost.error.log;
access_log /var/log/nginx/localhost.access.log;
location / {
# try to serve file directly, fallback to app.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass 192.168.59.103:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
}
Could anybody help me configure these containers correctly to execute php scripts?
P.S.
I run containers via docker-composer like this:
docker-compose up
from the project root directory.
I know it is kind of an old post, but I've had the same problem and couldn't understand why your code didn't work.
After a LOT of tests I've found out why.
It seems that fpm receives the full path from nginx and tries to find the files in the fpm container, so it must be exactly the same as the root in the nginx config, even if that path doesn't exist in the nginx container.
To demonstrate:
docker-compose.yml
nginx:
build: .
ports:
- "80:80"
links:
- fpm
fpm:
image: php:fpm
ports:
- ":9000"
# seems like fpm receives the full path from nginx
# and tries to find the files in this dock, so it must
# be the same as nginx.root
volumes:
- ./:/complex/path/to/files/
/etc/nginx/conf.d/default.conf
server {
listen 80;
# this path MUST be exactly as docker-compose.fpm.volumes,
# even if it doesn't exist in this dock.
root /complex/path/to/files;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass fpm:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Dockerfile
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/
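To verify that the two sides agree, you can dump the config nginx actually loaded and list the same path inside the fpm container; a sketch, assuming the service names nginx and fpm from the compose file above:

# The root nginx uses...
docker-compose exec nginx nginx -T | grep "root"
# ...must exist (with the code in it) inside the fpm container
docker-compose exec fpm ls /complex/path/to/files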
Don't hardcode the IP of containers in the nginx config; docker link adds the hostname of the linked container to the container's hosts file, and you should be able to ping it by hostname.
EDIT: Docker 1.9 networking no longer requires you to link containers; when multiple containers are connected to the same network, their hosts files are updated so they can reach each other by hostname.
Every time a docker container spins up from an image (even when stopping/starting an existing container), it gets a new IP assigned by the docker host. These IPs are not in the same subnet as your actual machines.
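You can see this for yourself by printing a container's current address after a restart; a sketch, where <container> is any container name:

# Shows the IP(s) Docker assigned on each attached network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container>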
see docker linking docs (this is what compose uses in the background)
but more clearly explained in the docker-compose docs on links & expose
links
links:
  - db
  - db:database
  - redis
An entry with the alias' name will be created in /etc/hosts inside containers for this service, e.g:
172.17.2.186 db
172.17.2.186 database
172.17.2.187 redis
expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
And if you set up your project to get the ports and other credentials through environment variables, links automatically set a bunch of environment variables:
To see what environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT
Full URL, e.g. DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol
Full URL, e.g. DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR
Container's IP address, e.g. DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT
Exposed port number, e.g. DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO
Protocol (tcp or udp), e.g. DB_PORT_5432_TCP_PROTO=tcp
name_NAME
Fully qualified container name, e.g. DB_1_NAME=/myapp_web_1/myapp_db_1
As pointed out before, the problem was that the files were not visible to the fpm container. However, to share data among containers the recommended pattern is to use data-only containers (as explained in this article).
Long story short: create a container that just holds your data, share it with a volume, and link this volume into your apps with volumes_from.
Using compose (1.6.2 in my machine), the docker-compose.yml file would read:
version: "2"
services:
  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    ports:
      - "80:80"
    links:
      - fpm
    volumes_from:
      - data

  fpm:
    image: php:fpm
    volumes_from:
      - data

  data:
    build:
      context: .
      dockerfile: data/Dockerfile
    volumes:
      - /var/www/html
Note that data publishes a volume that is linked to the nginx and fpm services. Then the Dockerfile for the data service, which contains your source code:
FROM busybox
# content
ADD path/to/source /var/www/html
And the Dockerfile for nginx, that just replaces the default config:
FROM nginx
# config
ADD config/default.conf /etc/nginx/conf.d
For the sake of completion, here's the config file required for the example to work:
server {
listen 0.0.0.0:80;
root /var/www/html;
location / {
index index.php index.html;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass fpm:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
}
}
which just tells nginx to use the shared volume as document root, and sets the right config for nginx to be able to communicate with the fpm container (i.e.: the right HOST:PORT, which is fpm:9000 thanks to the hostnames defined by compose, and the SCRIPT_FILENAME).
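Bringing the example up and hitting it once from the host is enough to confirm the wiring; a sketch, assuming the files above sit in the current directory:

# Build the data and nginx images and start everything
docker-compose up -d --build
# index.php should now be rendered by php-fpm through nginx
curl -I http://localhost/index.php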
New Answer
Docker Compose has been updated. They now have a version 2 file format.
Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.
They now support the networking feature of Docker which, when run, sets up a default network called myapp_default.
From their documentation your file would look something like the below:
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  fpm:
    image: phpfpm
  nginx:
    image: nginx
fastcgi_pass fpm:9000;
Also, as mentioned by @treeface in the comments, remember to ensure PHP-FPM is listening on port 9000. This can be done by editing /etc/php5/fpm/pool.d/www.conf, where you will need listen = 9000.
Old Answer
I have kept the below here for those using older version of Docker/Docker compose and would like the information.
I kept stumbling upon this question on Google when trying to find an answer, but it was not quite what I was looking for due to the Q&A's emphasis on docker-compose (which at the time of writing only had experimental support for docker networking features). So here is my take on what I have learnt.
Docker has recently deprecated its link feature in favour of its networks feature
Therefore using the Docker Networks feature you can link containers by following these steps. For full explanations on options read up on the docs linked previously.
First create your network
docker network create --driver bridge mynetwork
Next run your PHP-FPM container ensuring you open up port 9000 and assign to your new network (mynetwork).
docker run -d -p 9000 --net mynetwork --name php-fpm php:fpm
The important bit here is the --name php-fpm at the end of the command, which is the container name; we will need this later.
Next run your Nginx container, again assigned to the network you created.
docker run --net mynetwork --name nginx -d -p 80:80 nginx:latest
For the PHP and Nginx containers you can also add in --volumes-from commands etc as required.
Now comes the Nginx configuration. Which should look something similar to this:
server {
listen 80;
server_name localhost;
root /path/to/my/webroot;
index index.html index.htm index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php-fpm:9000;
fastcgi_index index.php;
include fastcgi_params;
}
}
Notice the fastcgi_pass php-fpm:9000; in the location block. That's saying: contact the container php-fpm on port 9000. When you add containers to a Docker bridge network they all automatically get a hosts file update which maps their container name to their IP address. So when Nginx sees that, it knows to contact the PHP-FPM container you named php-fpm earlier and assigned to your mynetwork Docker network.
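A quick way to confirm that name resolution is in place is to look the name up from inside the nginx container; a sketch, assuming the Debian-based official image (which ships getent) and the container names used above:

# List the containers attached to the network
docker network inspect mynetwork
# Resolve the php-fpm name from inside the nginx container
docker exec -it nginx getent hosts php-fpm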
You can add that Nginx config either during the build process of your Docker container or afterwards; it's up to you.
As previous answers have solved for, but it should be stated very explicitly: the php code needs to live in the php-fpm container, while the static files need to live in the nginx container. For simplicity, most people have just attached all the code to both, as I have also done below. In the future, I will likely separate out these different parts of the code in my own projects, so as to minimize which containers have access to which parts.
Updated my example files below with this latest revelation (thank you @alkaline).
This seems to be the minimum setup for compose file format 2 and onward
(because things got a lot easier with version 2).
docker-compose.yml:
version: '2'
services:
php:
container_name: test-php
image: php:fpm
volumes:
- ./code:/var/www/html/site
nginx:
container_name: test-nginx
image: nginx:latest
volumes:
- ./code:/var/www/html/site
- ./site.conf:/etc/nginx/conf.d/site.conf:ro
ports:
- 80:80
(UPDATED the docker-compose.yml above: for sites that have css, javascript, static files, etc., you will need those files to be accessible to the nginx container, while still having all the php code accessible to the fpm container. Again, because my base code is a messy mix of css, js, and php, this example just attaches all the code to both containers.)
In the same folder:
site.conf:
server
{
listen 80;
server_name site.local.[YOUR URL].com;
root /var/www/html/site;
index index.php;
location /
{
try_files $uri =404;
}
location ~ \.php$ {
fastcgi_pass test-php:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
In folder code:
./code/index.php:
<?php
phpinfo();
and don't forget to update your hosts file:
127.0.0.1 site.local.[YOUR URL].com
and run docker-compose:
$ docker-compose up -d
and try the URL from your favorite browser
site.local.[YOUR URL].com/index.php
I think we also need to give the fpm container the volume, don't we? Like so:
fpm:
  image: php:fpm
  volumes:
    - ./:/var/www/test/
If I don't do this, I run into this error when firing a request, as fpm cannot find the requested file:
[error] 6#6: *4 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.42.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.81:9000", host: "localhost"
For anyone else getting
Nginx 403 error: directory index of [folder] is forbidden
when using index.php (while index.html works perfectly), even though index.php is included in the index directive of the server block of their site config in sites-enabled:
server {
listen 80;
# this path MUST be exactly as docker-compose php volumes
root /usr/share/nginx/html;
index index.php;
...
}
Make sure your nginx.conf file at /etc/nginx/nginx.conf actually loads your site config in the http block...
http {
...
include /etc/nginx/conf.d/*.conf;
# Load our websites config
include /etc/nginx/sites-enabled/*;
}
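After adding the include, it's worth checking that nginx actually picks the site up; a sketch, where <nginx-container> is your nginx container name:

# Validate the configuration and print what was effectively loaded
docker exec -it <nginx-container> nginx -t
docker exec -it <nginx-container> nginx -T | grep -n "server_name"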
I'm trying to set up Symfony 3 in Docker on a Windows machine with NGINX and PHP-FPM. At the moment, I get a 502 bad gateway error. I changed the FPM port from 9000 to 8000 because on my host, port 9000 is already in use by a hyper-v service vmms.exe. I don't know if it's related.
docker-compose.yml
version: "3"
services:
  nginx:
    build: ./nginx
    volumes:
      - ./symfony:/usr/shared/nginx/html
    ports:
      - "80:80"
      - "443:443"
    environment:
      - NGINX_HOST=free-energy.org
    depends_on:
      - fpm

  fpm:
    image: php:fpm
    ports:
      - "8000:8000"
    # It seems like FPM receives the full path from NGINX
    # and tries to find the files in this dock, so it must
    # be the same as nginx.root
    volumes:
      - ./symfony:/usr/shared/nginx/html
Dockerfile NGINX:
FROM nginx:1.13.7-alpine
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./default.conf /etc/nginx/conf.d/
EXPOSE 80
EXPOSE 443
default.conf override NGINX:
server {
listen 80;
server_name free-energy.org;
# this path MUST be exactly as docker-compose.fpm.volumes,
# even if it doesn't exist in this dock.
root /usr/share/nginx/html;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass fpm:8000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Could you please check the php-fpm container? As I remember, the php-fpm default port is 9000, not 8000. Change the container-side port of the mapping from 8000 to 9000:
ports:
  - "8000:9000"
Or, if the port is already in use on your host, you can expose it only between containers:
expose:
  - "9000"
With the answer of 0TshEL_n1ck the 502 error was solved. I still got 'File not found' in the browser and a related error in the NGINX log: 'FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream'.
Here is the configuration that turned out to work. The key things that needed to change:
the root directive in the NGINX configuration file did not contain /public, which is needed in a Symfony project because the front controller index.php is located there
I changed the volume destination in docker-compose.yml so that the Symfony project directory is now mapped to /var/www/symfony instead of /usr/shared/nginx/html (not sure why the previous one didn't work)
These points came to light after using the NGINX configuration from the Symfony docs at https://symfony.com/doc/current/setup/web_server_configuration.html
Here's the code that worked.
docker-compose.yml:
version: "3"
services:
  nginx:
    build: ./nginx
    volumes:
      - ./symfony:/var/www/symfony
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - fpm

  fpm:
    image: php:fpm
    ports:
      - "9000"
    volumes:
      - ./symfony:/var/www/symfony
NGINX configuration default.conf:
server {
listen 80;
server_name free-energy.org, www.free-energy.org;
root /var/www/symfony/public;
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/index.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
}
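If 'File not found' ever comes back, the quickest signal is the 'Primary script unknown' line in the logs while reloading the page; a sketch, assuming the service names nginx and fpm from the compose file above:

# Watch both containers while requesting the page in a browser
docker-compose logs -f nginx fpm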
We could keep guessing for another week; instead, try to use this:
/docker-compose.yml
version: "2"
services:
  nginx:
    build: "./docker/nginx/"
    container_name: "nginx-free-energy"
    volumes_from:
      - php-fpm
    restart: always
    ports:
      - "80:80"

  php-fpm:
    image: php:7.1-fpm
    container_name: "php-fpm-free-energy"
    volumes:
      - ./src/free-energy.org:/var/www/free-energy.org
    restart: always
    expose:
      - "9000"
/docker/nginx/Dockerfile
FROM nginx
COPY config/free-energy.conf /etc/nginx/conf.d/free-energy.conf
RUN rm /etc/nginx/conf.d/default.conf
RUN usermod -u 1000 www-data
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
/docker/nginx/config/free-energy.conf (for symfony3 app)
server {
listen 80;
server_name www.free-energy.org free-energy.org;
root /var/www/free-energy.org/web;
location / {
# try to serve file directly, fallback to app.php
try_files $uri /app.php$is_args$args;
}
# PROD
location ~ ^/app\.php(/|$) {
fastcgi_pass php-fpm-free-energy:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/app.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/free-energy.org_error.log;
access_log /var/log/nginx/free-energy.org_access.log;
}
After starting, you need to add 127.0.0.1 free-energy.org to your /etc/hosts file to resolve the host; your app will live in /src/free-energy.org (the entry point is /src/free-energy.org/web/app.php).
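A minimal smoke test of the finished setup, assuming the compose file above is running and you have sudo rights on the host:

# Make the hostname resolve to the local nginx container
echo "127.0.0.1 free-energy.org www.free-energy.org" | sudo tee -a /etc/hosts
# The front controller should now answer through php-fpm
curl -I http://free-energy.org/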