I have a simple docker-compose setup with php-fpm and nginx.
It looks like nginx can't pass the PHP file to php-fpm, which results in the browser downloading .php files instead of executing them.
It works with HTML files (localhost:8080/readme.html).
I also always get a 403 Forbidden error when I go to the root of localhost (http://localhost/).
Please help.
docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: always
    volumes:
      - './etc/nginx/nginx.conf:/etc/nginx/nginx.conf'
      - './var/log:/var/log'
      - './web:/usr/share/nginx/html'
    ports:
      - 8080:80
      - 443:443
    depends_on:
      - php
  php:
    image: php:fpm-alpine
    container_name: php
    restart: always
    volumes:
      - "./web:/var/www/html"
      - './var/log:/var/log'
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        index index.php index.html index.htm;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename/index.php) {
                rewrite (.*) $1/index.php;
            }
            if (!-f $request_filename) {
                rewrite (.*) /index.php;
            }
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location ~ \.php$ {
            try_files $uri = 404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }
    }
}
The problem is that the root folder for php-fpm (/var/www/html) and nginx (/usr/share/nginx/html) are different, but you pass nginx's root folder on to php-fpm in this line: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
Because of that, php-fpm looks in the wrong folder and can't find the PHP file to execute.
Try using /var/www/html as the root for nginx as well (change it in the nginx config and in the docker-compose file), and php-fpm should be able to find and execute the PHP files.
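For instance (a sketch only, reusing the paths from the question's configs), point nginx at the same path php-fpm uses and mount the same host directory there in both services:

```nginx
# nginx.conf, inside the server block: use the same document root that
# php-fpm sees, so $document_root$fastcgi_script_name also resolves to a
# real path inside the php container.
root /var/www/html;
```

and in docker-compose.yml give the nginx service the volume './web:/var/www/html', matching the php service. Alternatively, keep the two roots different and hard-code php-fpm's side of the path, e.g. fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;.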
Related
In my local development environment I'm using docker-compose with nginx and php-fpm containers, but php-fpm sometimes returns a 404 "file not found" error. I can only reproduce it consistently when multiple AJAX calls happen at once.
Here is my docker-compose file.
version: '3'
services:
  nginx:
    build:
      context: docker
      dockerfile: nginx.dockerfile
    volumes:
      - ./:/var/www/html
    ports:
      - 80:80
    depends_on:
      - php
  php:
    build:
      context: docker
      dockerfile: php.dockerfile
    volumes:
      - ./:/var/www/html
nginx.conf
user nginx;
worker_processes 4;
daemon off;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    # Switch logging to console out to view via Docker
    #access_log /dev/stdout;
    #error_log /dev/stderr;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*.conf;
}
conf.d/default.conf
upstream php-upstream {
    server php:9000;
}
sites-available/site.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx.dockerfile
FROM nginx
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY nginx/site.conf /etc/nginx/sites-available/site.conf
WORKDIR /var/www/
CMD [ "nginx" ]
EXPOSE 80 443
When I visit a page where two AJAX calls happen, one will give me a 200:
"GET /index.php" 200
and the next will give me a 404:
ERROR: Unable to open primary script: /var/www/html/public/index.php (No such file or directory)
Often when I refresh, the call that failed will now work and the one that worked will now 404.
I'm trying to set up a Symfony 3.x application with Docker.
I configured 3 docker containers through a docker-compose.yml file:
Nginx
Php-fpm
MySQL
When I navigate to my-project.dev:8080/, I see a simple 404 Not Found page.
I can't load my-project.dev:8080/app_dev.php or my-project.dev:8080/config.php (I get a "file not found" error).
I don't see any entries in the /var/log/nginx/access.log either.
docker-compose.yml:
web:
  image: nginx:latest
  ports:
    - "8080:80"
  volumes:
    - .:/var/www
    - ./docker/vhost.conf:/etc/nginx/sites-enabled/vhost.conf
    - ./docker/nginx.conf:/etc/nginx/nginx.conf
  links:
    - php
php:
  image: php:5.6-fpm
  volumes:
    - .:/var/www
  links:
    - db
db:
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=my-password
nginx.conf file:
user www-data;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
vhost.conf file:
server {
    listen *:80;
    server_name my-project.dev;
    root /var/www/web;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /app.php$is_args$args;
    }

    # DEV
    # This rule should only be placed on your development environment
    # In production, don't include this and don't deploy app_dev.php or config.php
    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}
I had to override the WORKDIR of the php-fpm image by creating an image like this:
Dockerfile:
FROM php:5.6-fpm
MAINTAINER Firstname Lastname <firstname.lastname@domain.com>
WORKDIR /var/www
Build image:
docker build -t companyx/php-5.6-fpm .
Update docker-compose file:
web:
  image: nginx:latest
  ports:
    - "8080:80"
  volumes:
    - .:/var/www
    - ./docker/vhost.conf:/etc/nginx/sites-enabled/vhost.conf
    - ./docker/nginx.conf:/etc/nginx/nginx.conf
  links:
    - php
php:
  image: companyx/php-5.6-fpm
  volumes:
    - .:/var/www
  links:
    - db
db:
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=my-password
The problem is that, with this definition of yours:

location ~ ^/(app_dev|config)\.php(/|$) {
    fastcgi_pass php:9000;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_ROOT $realpath_root;
}
location ~ \.php$ {
    return 404;
}

your fallback is a PHP page (app.php) that is only matched by the last location block, which returns 404.
You should change the line
try_files $uri /app.php$is_args$args;
to
try_files $uri /app_dev.php$is_args$args;
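Put together (a sketch only, based on the vhost above), the development fallback would be:

```nginx
# Dev-only sketch: fall back to app_dev.php, which IS matched by the
# location block that passes scripts on to php-fpm. app.php is not in
# that block, so requests for it fall through to the catch-all
# "location ~ \.php$ { return 404; }".
location / {
    try_files $uri /app_dev.php$is_args$args;
}
```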
I had the same issue and tried the different proposed solutions, but my case was different: I was missing index.php inside the public/ folder.
Make sure this file exists in the public folder, either by adding it yourself or by creating a Symfony project in your root directory.
I have a problem setting up my Docker environment on a remote machine.
I prepared local Docker machines. The problem is with nginx + php-fpm.
Nginx acts as the nginx user, php-fpm acts as the www-data user. The files on the host machine (the application files) are owned by user1; chmods are the defaults for a Symfony2 application.
When I access my webserver it returns a 404 error or just a plain "file not found".
For a while the exact same configuration worked on my local Ubuntu 16.04 but failed on Debian Jessie on the server. Right now it doesn't work on either. I've tried everything, asked on sysops groups, and googled for hours. Do you have any idea?
Here is my vhost configuration
server {
    listen 80;
    server_name dev.xxxxx.co xxxxx.dev;
    root /usr/share/www/co.xxxxx.dev/web;
    index app_dev.php;

    client_max_body_size 100M;
    fastcgi_read_timeout 1800;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri $uri/ /app.php$is_args$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # When you are using symlinks to link the document root to the
        # current version of your application, you should pass the real
        # application path instead of the path to the symlink to PHP
        # FPM.
        # Otherwise, PHP's OPcache may not properly detect changes to
        # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
        # for more information).
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~ ^/app\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # When you are using symlinks to link the document root to the
        # current version of your application, you should pass the real
        # application path instead of the path to the symlink to PHP
        # FPM.
        # Otherwise, PHP's OPcache may not properly detect changes to
        # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
        # for more information).
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        # Prevents URIs that include the front controller. This will 404:
        # http://domain.tld/app.php/some-path
        # Remove the internal directive to allow URIs like this
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
nginx configuration
user root;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
And my docker compose
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - 8082:80
    volumes:
      - /home/konrad/Workspace:/usr/share/www:ro
      - ./conf/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./sites:/etc/nginx/conf.d:ro
  php-fpm:
    image: php:fpm
    ports:
      - 9000:9000
    volumes:
      - /home/konrad/Workspace:/usr/share/www
      - ./conf/www.conf:/etc/php/7.0/fpm/pool.d/www.conf
      - ./conf/php.ini:/usr/local/etc/php/conf.d/90-php.ini:ro
On the remote server the files are accessible and show up as owned by 1001:1001.
I created a multi-container PHP application. When I modify a PHP file, I can see the changes in the browser. But when I modify static files such as CSS and JS, there are no changes in the browser. The following is my Dockerfile code:
Dockerfile
FROM nginx:1.8
ADD default.conf /etc/nginx/conf.d/default.conf
ADD nginx.conf /etc/nginx/nginx.conf
WORKDIR /Code/project/
RUN chmod -R 777 /Code/project/
VOLUME /Code/project
default.conf
server {
    listen 80;
    server_name localhost;
    root /Code/project/public;
    index index.html index.htm index.php;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/html;
    #}

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
nginx.conf
user root;
worker_processes 8;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    client_max_body_size 20m;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml
webphp:
  image: index.alauda.cn/chuanty/local_php
  #image: index.alauda.cn/chuanty/local_nginx:snapshot
  ports:
    - "9000:9000"
  volumes:
    - .:/Code/project
  links:
    - cache:cache
    - db:db
    - es:localhost
  extra_hosts:
    - "chaunty.taurus:192.168.99.100"
cache:
  image: redis
  ports:
    - "6379:6379"
db:
  #image: mysql
  image: index.alauda.cn/alauda/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: chuantaiyidev
    MYSQL_USER: cty
    MYSQL_PASSWORD: chuantaiyidev
    MYSQL_DATABASE: taurus
es:
  image: index.alauda.cn/chuanty/local_elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
server:
  #image: index.alauda.cn/ryugou/nginx
  image: index.alauda.cn/chuanty/local_nginx:1.1
  ports:
    - "80:80"
    - "443:443"
  links:
    - webphp:fpm
  volumes_from:
    - webphp:rw
I guess your problem is "sendfile on" in your nginx.conf.
For development purposes, try setting it to off in the server directive of your server block:
server {
...
sendfile off;
}
This forces nginx to re-read static files such as CSS and JS from disk on every request instead of serving possibly stale content from memory.
http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile
One option is to copy the modified static files directly to the Docker container:
docker cp web/static/main.js container_name:/usr/src/app/static/main.js
where web/static is the static directory on your host machine, main.js is the modified file, container_name is the name of the container (find it with docker ps), and /usr/src/app/static is the static directory inside your Docker container.
If you want to determine exactly where your static files are on your Docker container, you can use docker exec -t -i container_name /bin/bash to explore its directory structure.
I really hope you don't actually copy configuration files into an image!
Docker best practice
Docker images are supposed to be immutable, so configuration files (which depend on a lot of environment variables) should be passed to / shared with the container at deploy time via the docker run option -v|--volume instead.
As for your issue
If the static files are baked into the image, you need to docker build and then docker-compose up after every modification in order to actually change the web page. You clearly may not want that. I suggest you use shared directories (through the -v option) instead.
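As a sketch (service name and paths borrowed from the question's docker-compose.yml), a bind mount keeps the files inside the container in sync with the host, so no rebuild is needed when CSS/JS change:

```yaml
# Sketch only: share the project directory with the nginx container
# instead of baking the files into the image via ADD/COPY.
server:
  image: index.alauda.cn/chuanty/local_nginx:1.1
  volumes:
    - .:/Code/project   # host project dir; matches the root in default.conf
```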
I'm beginning with Docker and nginx, and I'm trying to set up a two-container environment running:
nginx:latest on one side
php:fpm on the other side
I'm having trouble with php-fpm: I always get a 502 Bad Gateway error.
My setup is straightforward ($TEST_DIR is my working directory).
My Docker compose config TEST_DIR/docker-compose.yml:
nginx:
  image: nginx
  ports:
    - "8080:80"
  volumes:
    - ./www:/usr/share/nginx/html
    - ./conf/nginx.conf:/nginx.conf
    - ./logs/nginx:/var/log/nginx
  links:
    - php:php
  command: nginx -c /nginx.conf
php:
  image: php:fpm
  ports:
    - "9000:9000"
  volumes:
    - ./www:/var/www/html
The nginx config $TEST_DIR/conf/nginx.conf:
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    server_tokens off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 15;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log off;
    error_log off;

    gzip on;
    gzip_disable "msie6";

    open_file_cache max=100;

    upstream php-upstream {
        server php:9000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        # Pass PHP scripts to PHP-FPM
        location ~* \.php$ {
            fastcgi_pass php-upstream;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

        error_log /var/log/nginx/php_error.log;
        access_log /var/log/nginx/php_access.log;
    }
}

daemon off;
Then, I put my PHP content in the same directory as my docker-compose.yml:
$TEST_DIR/www/test.php
<?php phpinfo(); ?>
If I start the infrastructure using docker-compose up and then go to localhost:8080/test.php, I get the 502 Bad Gateway
and the following error from nginx:
[error] 6#6: *1 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.42.1, server: localhost, request: "GET /phpinsfo2.php HTTP/1.1", upstream: "fastcgi://172.17.0.221:9000", host: "localhost:8080"
What is causing the error?
I finally managed to make it work.
The problem was that my host computer (Fedora 21) had a firewall enabled, which blocked the nginx container from reaching php-fpm (hence the "No route to host" error).
So doing systemctl stop firewalld solved my problem.
Apparently this is a well-known problem at Red Hat:
Bug 1098281 - Docker interferes with firewall initialisation via firewalld
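If you'd rather not disable the firewall entirely, a less drastic sketch (assuming firewalld is in use and Docker uses the default docker0 bridge) is to trust the Docker bridge interface instead:

```
# Assumes firewalld + the default docker0 bridge; adjust the interface
# name if your Docker network setup differs.
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
```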