I'm using Traefik:
version: '3'
services:
traefik:
image: traefik:v2.9
# command: --api.insecure=true --providers.docker
ports:
- "80:80"
- "8080:8080"
network_mode: "host"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./acme.json:/acme.json
command:
# We are going to use the docker provider
- --api.insecure=true
- "--providers.docker"
# Only enabled containers should be exposed
#- "--providers.docker.exposedByDefault=false"
# We want to use the dashboard
- "--api.dashboard=true"
# The entrypoints we want to expose
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
#- "--entrypoints.web.http.redirections.entryPoint.to=websecure"
# - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
# - "--entrypoints.web.http.redirections.entrypoint.permanent=true"
- "--certificatesresolvers.letsencrypt.acme.email=$EMAIL"
- "--certificatesresolvers.letsencrypt.acme.storage=acme.json"
# used during the challenge
And, separately, another docker-compose.yml with the database, mailserver, and nginx + php-fpm:
version: "3"
services:
database:
build:
context: ./database
environment:
MYSQL_DATABASE: '${MYSQL_DATABASE}'
MYSQL_USER: '${MYSQL_USER}'
MYSQL_PASSWORD: '${MYSQL_PASSWORD}'
MYSQL_ROOT_PASSWORD: '${MYSQL_ROOT_PASSWORD}'
volumes:
- ./database/data:/var/lib/mysql
restart: always
php-http:
build:
context: ../
dockerfile: ./docker/php-nginx/Dockerfile
args:
DOMAIN: '${DOMAIN}'
depends_on:
- database
- mailserver
volumes:
- ./nginxlogs/:/var/log/nginx/
- './symfonylogs:/var/www/html/mystuff/var/log/'
#labels:
# - traefik.http.routers.php-http.tls=true
# - traefik.http.routers.php-http.tls.certresolver=letsencrypt
# - traefik.http.services.php-http.loadbalancer.server.port=8080
# - traefik.enable=true
# - traefik.http.routers.php-http.rule=Host(`mystuff.com`, `mystuff2.com`)
# - 'traefik.http.routers.php-http.tls.domains[0].main=mystuff.com'
# - 'traefik.http.routers.php-http.tls.domains[1].main=mystuff2.com'
restart: always
mailserver:
[stuffs]
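A note on the architecture: Traefik's docker provider routes to the container IPs it discovers, so Traefik must be able to reach those IPs, and the usual pattern is to put Traefik and its backends on a shared Docker network rather than running Traefik with network_mode: "host" (which also makes the ports: mappings on the traefik service meaningless). A minimal sketch of that pattern across two compose files; the network name "proxy" is an assumption, not something taken from the files above:

# created once on the host: docker network create proxy

# traefik compose file (sketch)
services:
  traefik:
    image: traefik:v2.9
    ports:
      - "80:80"
      - "443:443"
    networks:
      - proxy
networks:
  proxy:
    external: true

# application compose file (sketch)
services:
  php-http:
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy
networks:
  proxy:
    external: true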
The Dockerfile for nginx/php-fpm is based on https://github.com/TrafeX/docker-php-nginx/tree/1.10.0, but I made a few changes to the config.
The nginx.conf:
worker_processes auto;
error_log stderr warn;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
# Define custom log format to include response times
log_format main_timed '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'$request_time $upstream_response_time $pipe $upstream_cache_status';
access_log /dev/stdout main_timed;
error_log /dev/stderr notice;
keepalive_timeout 65;
# Write temporary files to /tmp so they can be created as a non-privileged user
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
# Default server definition
server {
listen [::]:8080 default_server;
listen 8080 default_server;
server_name _;
sendfile off;
root /var/www/html;
index index.php index.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to index.php
try_files $uri $uri/ /index.php?q=$uri&$args;
}
# Redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/lib/nginx/html;
}
# Pass the PHP scripts to PHP-FPM listening on 127.0.0.1:9000
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
}
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
expires 5d;
}
# Deny access to . files, for security
location ~ /\. {
log_not_found off;
deny all;
}
# Allow fpm ping and status from localhost
location ~ ^/(fpm-status|fpm-ping)$ {
access_log off;
allow 127.0.0.1;
# deny all;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
}
}
gzip on;
gzip_proxied any;
gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
gzip_vary on;
gzip_disable "msie6";
# Include other server configs
include /etc/nginx/conf.d/*.conf;
}
My website config:
server {
listen [::]:8080 default_server;
listen 8080 default_server;
server_name mystuff.com mystuff2.com;
root /var/www/html/mystuff/public;
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
# optionally disable falling back to PHP script for the asset directories;
# nginx will return a 404 error when files are not found instead of passing the
# request to Symfony (improves performance but Symfony's 404 page is not displayed)
# location /bundles {
# try_files $uri =404;
# }
location ~ ^/index\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# optionally set the value of the environment variables used in the application
# fastcgi_param APP_ENV prod;
# fastcgi_param APP_SECRET <app-secret-id>;
# fastcgi_param DATABASE_URL "mysql://db_user:db_pass#host:3306/db_name";
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
# Caveat: When PHP-FPM is hosted on a different machine from nginx
# $realpath_root may not resolve as you expect! In this case try using
# $document_root instead.
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/index.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
location ~ ^/(fpm-status|fpm-ping)$ {
access_log off;
allow 127.0.0.1;
deny all;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
}
www.conf:
[global]
; Log to stderr
error_log = /dev/stderr
[www]
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on
; a specific port;
; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses
; (IPv6 and IPv4-mapped) on a specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /var/run/php/php7.4-fpm.sock
listen.owner = nobody
listen.group = nobody
listen.mode = 0660
user = nobody
group = nobody
; Enable status page
pm.status_path = /fpm-status
; Ondemand process manager
pm = ondemand
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 100
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 10s;
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 1000
; Make sure the FPM workers can reach the environment variables for configuration
clear_env = no
; Catch output from PHP
catch_workers_output = yes
; Remove the 'child 10 said into stderr' prefix in the log and only show the actual message
decorate_workers_output = no
; Enable ping page to use in healthcheck
ping.path = /fpm-ping
Among other things, I made PHP-FPM listen on /var/run/php/php7.4-fpm.sock instead of localhost (127.0.0.1:9000).
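The two sides of that change have to agree: the listen directive in www.conf and every fastcgi_pass that is supposed to reach this pool. In the configs above, the website config was switched to the socket, but the default server in nginx.conf (and its /fpm-status|/fpm-ping location) still passes to 127.0.0.1:9000, where nothing is listening any more. The pair that has to match, as a minimal reference:

; php-fpm www.conf
listen = /var/run/php/php7.4-fpm.sock

# nginx, in every location that passes to PHP-FPM
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;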
Result: if I try to reach my website, I get a "404 page not found". Without Traefik it seems to work (over plain HTTP).
On the Traefik dashboard, I can see that Traefik detects the database and mailserver containers, but not php-nginx; it's not in the list.
Update:
php-http_1 | 2023-01-22 16:27:56,736 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
php-http_1 | 2023-01-22 16:27:56,739 INFO supervisord started with pid 1
php-http_1 | 2023-01-22 16:27:57,742 INFO spawned: 'nginx' with pid 7
php-http_1 | 2023-01-22 16:27:57,744 INFO spawned: 'php-fpm' with pid 8
php-http_1 | nginx: [emerg] a duplicate default server for [::]:8080 in /etc/nginx/conf.d/symfony-nginx.conf:2
php-http_1 | 2023-01-22 16:27:57,754 INFO exited: nginx (exit status 1; not expected)
php-http_1 | 2023-01-22 16:27:57,755 INFO gave up: nginx entered FATAL state, too many start retries too quickly
php-http_1 | [22-Jan-2023 16:27:57] NOTICE: PHP message: PHP Warning: PHP Startup: Unable to load dynamic library 'pdo_mysql' (tried: /usr/lib/php7/modules/pdo_mysql (Error loading shared library /usr/lib/php7/modules/pdo_mysql: No such file or directory), /usr/lib/php7/modules/pdo_mysql.so (Error relocating /usr/lib/php7/modules/pdo_mysql.so: pdo_throw_exception: symbol not found)) in Unknown on line 0
php-http_1 | [22-Jan-2023 16:27:57] NOTICE: fpm is running, pid 8
php-http_1 | [22-Jan-2023 16:27:57] NOTICE: ready to handle connections
php-http_1 | 2023-01-22 16:27:58,792 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
docker ps:
713bd86c922d docker_php-http "/usr/bin/supervisor…" 2 minutes ago Up 2 minutes (unhealthy) 8080/tcp docker_php-http_1
f80676dda336 docker_database "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3306/tcp, 33060/tcp docker_database_1
a4a50d91722d docker_mailserver "/usr/bin/entrypoint…" 2 minutes ago Up 2 minutes (healthy) 25/tcp, 110/tcp, 143/tcp, 465/tcp, 587/tcp, 993/tcp, 995/tcp, 4190/tcp mailserver
I have tried countless small changes and nothing works; I'm getting quite desperate. I had made another build with Apache instead of nginx and it works perfectly. I have no idea what's going on. Does anyone have any idea?
Thank you
Update: as we can see in the docker ps output, the container is marked as unhealthy. I assume it's related.
In the original Docker image, there is this:
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
I assume it's no longer correct after my changes.
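The nginx [emerg] in the log above is probably the more immediate problem: both the catch-all server in nginx.conf and the site in symfony-nginx.conf declare default_server on port 8080, and nginx refuses to start when two server blocks claim it for the same listen address. With nginx down, the curl against http://127.0.0.1:8080/fpm-ping can never succeed, which matches the unhealthy status, and, as far as I know, Traefik's docker provider filters out containers reported as unhealthy, which would also explain why php-nginx never appears in the dashboard. A minimal sketch of the site config's listen lines with the flag dropped, assuming the catch-all in nginx.conf should stay the default:

# symfony-nginx.conf (sketch)
server {
    listen [::]:8080;
    listen 8080;
    server_name mystuff.com mystuff2.com;
    # ... rest of the site config unchanged
}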
Related
I am running an Alpine 3.12 container with a few custom configurations, like so:
FROM alpine:3.12
[some irrelevant stuff]
# Switch to use a non-root user from here on
USER nobody
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html && \
chown -R nobody.nobody /run && \
chown -R nobody.nobody /var/lib/nginx && \
chown -R nobody.nobody /var/log/nginx
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --fail http://127.0.0.1:8080/fpm-ping
I have my nginx.conf with the following data:
worker_processes 1;
error_log stderr warn;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
# Define custom log format to include response times
log_format main_timed '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'$request_time $upstream_response_time $pipe $upstream_cache_status';
access_log /dev/stdout main_timed;
error_log /dev/stderr notice;
keepalive_timeout 65;
# Write temporary files to /tmp so they can be created as a non-privileged user
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
# Default server definition
server {
listen [::]:8080 default_server;
listen 8080 default_server;
server_name _;
sendfile off;
root /var/www/html/public;
index index.php index.html;
location / {
# try_files $uri $uri/ /index.php?q=$uri&$args;
try_files $uri $uri/ /index.php$is_args$args;
}
# Redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/lib/nginx/html/public;
}
# Pass the PHP scripts to PHP-FPM listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
}
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
expires 5d;
}
# Deny access to . files, for security
location ~ /\. {
log_not_found off;
deny all;
}
# Allow fpm ping and status from localhost
location ~ ^/(fpm-status|fpm-ping)$ {
access_log off;
allow 127.0.0.1;
deny all;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
}
}
gzip on;
gzip_proxied any;
gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
gzip_vary on;
gzip_disable "msie6";
# Include other server configs
include /etc/nginx/conf.d/*.conf;
}
and my supervisord config is as follows:
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/run/supervisord.pid
[program:php-fpm]
command=php-fpm7 -F
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=false
startretries=0
[program:nginx]
command=nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=false
startretries=0
and in my php-fpm/www.conf:
[global]
; Log to stderr
error_log = /dev/stderr
[www]
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on
; a specific port;
; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses
; (IPv6 and IPv4-mapped) on a specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000
; Enable status page
pm.status_path = /fpm-status
; Ondemand process manager
pm = ondemand
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 100
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 10s;
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 1000
; Make sure the FPM workers can reach the environment variables for configuration
clear_env = no
; Catch output from PHP
catch_workers_output = yes
; Remove the 'child 10 said into stderr' prefix in the log and only show the actual message
decorate_workers_output = no
; Enable ping page to use in healthcheck
ping.path = /fpm-ping
So my question: where is this error coming from?
[crit] 9#9: *2 connect() to unix:/var/run/php7.3-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 172.17.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7.3-fpm.sock:", host: "127.0.0.1:8080"
It really should not be triggered, since I never configured php-fpm to listen on this socket. Where does it come from?
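One place worth checking, offered as a guess rather than a diagnosis: the nginx.conf above ends with include /etc/nginx/conf.d/*.conf;, so any stray file in that directory (or an older nginx.conf still baked into the image) is loaded too, and a leftover server block that still points at the old socket would produce exactly this upstream error even though www.conf says 127.0.0.1:9000. A purely hypothetical example of such a leftover; the filename and contents are illustrative, not taken from any real image:

# /etc/nginx/conf.d/old-default.conf (hypothetical)
server {
    listen 8080;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php7.3-fpm.sock;
    }
}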
I was able to set up nginx server blocks as per the tutorials, but when I try to access the sites through their respective domain names I am directed to the same site.
I have been trying to serve a subsite at /site1 under localhost on Windows.
nginx.conf
#user nobody;
# worker_processes 1;
worker_processes auto;
# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root H:\www\html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root H:\www\html;
}
# this is the default server
location = /site1 {
return 301 /site1/;
}
location ^~ /site1/ {
root H:\www\html\drupal-8.1.10;
index index.php;
}
location ~ /site1/\.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9123;
fastcgi_index index.php;
include fastcgi_params;
}
}
}
www directory:
H:\www\html>tree /f
Folder PATH listing for volume 975
Volume serial number is 0000-043C
H:.
│ 50x.html
│ index.html
│ drupal.tar.gz
│
└───drupal-8.1.10
index.php
The expected URLs are:
localhost
localhost/site1
Thanks
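One detail in the config above: the ^~ modifier on location ^~ /site1/ tells nginx to skip regex locations for URIs under /site1/, and the regex location ~ /site1/\.php$ would in any case only match a URI ending in the literal /site1/.php, so a request like /site1/index.php never reaches FastCGI. A sketch of one way to wire up the subsite instead, assuming the Drupal files live directly in H:\www\html\drupal-8.1.10 as in the tree above; the capture-based SCRIPT_FILENAME is just one option:

location /site1/ {
    # no ^~ here, so the PHP regex below is still consulted
    alias "H:/www/html/drupal-8.1.10/";
    index index.php;
}
location ~ ^/site1/(.+\.php)$ {
    fastcgi_pass 127.0.0.1:9123;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME "H:/www/html/drupal-8.1.10/$1";
    include fastcgi_params;
}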
On Windows use:
an nginx path on the same drive, e.g.:
H:/nginx
a full absolute path for the pid, e.g.:
pid H:/nginx/logs/nginx.pid;
error logs enabled, e.g. (uncomment):
error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;
set/enable the output log format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
the correct root inside the server block (in double quotes):
server{
location /{
root "H:/nginx/www/html";
}
}
correct PHP FastCGI params, e.g.:
server{
location ~ /site1/\.php$ {
root www/html/site1;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME C:/nginx/www/html/site1$fastcgi_script_name;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
include fastcgi_params;
}
}
Well, I think that's it.
All root paths must be enclosed in double quotes and use forward slashes (/); don't forget to check the error log ... it says a lot :)
My sample configuration:
init.bat (to start/stop the services) [may not be useful to you]
#ECHO OFF
TITLE LOCALHOST SERVICE
REM GO TO NGINX DIR
CD C:/nginx
TASKLIST /FI "IMAGENAME eq nginx.exe" | FIND /I "nginx.exe" > NUL && (GOTO STOP) || (GOTO START)
:START
ECHO.
ECHO ---------------------------- STARTING NGINX SERVER ----------------------------
ECHO.
REM START NGINX SERVICE
START/MIN nginx.exe
ECHO.
ECHO ----------------------------- STARTER PHP SERVICE -----------------------------
ECHO.
REM START PHP SERVICE (FOR NGINX)
php/php-cgi.exe -b 127.0.0.1:9000 -c C:/nginx/php/php.ini
REM GO TO THE "END" BLOCK SO THE "STOP" BLOCK IS NOT EXECUTED
GOTO END
:STOP
REM QUIT|STOP NGINX SERVICE
REM OLD-COMMAND: START nginx.exe -s quit
TASKKILL /F /IM nginx.exe > NUL
REM STOP PHP SERVICE
TASKKILL /F /IM php-cgi.exe > NUL
GOTO END
:END
OK, the init.bat file lets you start or stop nginx and PHP with a simple double-click.
You can give it an icon and place it on your desktop.
Assuming PHP runs in a subdirectory of nginx, you would have the following structure:
// System hard drive (in my case)
---C:
| // nginx path
|--------nginx
|
|---nginx.exe //executable
|
|---conf // configurations path
|
|---logs // logs path
|
|---pid // path to your process pid
|
|---html // path to your server (or blocks)
|
|---mime.types // file listing the mime types
|
|---init.bat // optional
A good practice is to use server blocks even if you do not use subdomains.
To do this, create a folder in "C:/nginx/conf" called "sites-enabled" and make a backup of your nginx configuration file "C:/nginx/conf/nginx.conf", e.g. as "nginx.conf.bk".
The new configuration file would look like this:
nginx.conf (modified)
# Configuration File - Nginx Server Configs
# http://nginx.org/en/docs/dirindex.html
# Run as a unique, less privileged user for security reasons.
# user www www;
# Sets the worker threads to the number of CPU cores available in the system for best performance.
# Should be > the number of CPU cores.
# Maximum number of connections = worker_processes * worker_connections
worker_processes auto;
# Maximum number of open files per worker process.
# Should be > worker_connections.
worker_rlimit_nofile 8192;
events {
# If you need more connections than this, you start optimizing your OS.
# That's probably the point at which you hire people who are smarter than you as this is *a lot* of requests.
# Should be < worker_rlimit_nofile.
worker_connections 8000;
}
# Log errors and warnings to this file
# This is only used when you don't override it on a server{} level
error_log logs/error.log warn;
# The file storing the process ID of the main process
pid C:/nginx/pids/nginx.pid;
http {
# Hide nginx version information.
server_tokens off;
# Specify MIME types for files.
include mime.types;
default_type application/octet-stream;
# Update charset_types to match updated mime.types.
# text/html is always included by charset module.
charset_types text/css text/plain text/vnd.wap.wml application/javascript application/json application/rss+xml application/xml;
# Include $http_x_forwarded_for within default format used in log files
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Log access to this file
# This is only used when you don't override it on a server{} level
access_log logs/access.log main;
# How long to allow each connection to stay idle.
# Longer values are better for each individual client, particularly for SSL,
# but means that worker connections are tied up longer.
keepalive_timeout 20s;
# Speed up file transfers by using sendfile() to copy directly
# between descriptors rather than using read()/write().
# For performance reasons, on FreeBSD systems w/ ZFS
# this option should be disabled as ZFS's ARC caches
# frequently used files in RAM by default.
sendfile on;
# Don't send out partial frames; this increases throughput
# since TCP frames are filled up before being sent out.
tcp_nopush on;
# Enable gzip compression.
gzip on;
# Compression level (1-9).
# 5 is a perfect compromise between size and CPU usage, offering about
# 75% reduction for most ASCII files (almost identical to level 9).
gzip_comp_level 5;
# Don't compress anything that's already small and unlikely to shrink much
# if at all (the default is 20 bytes, which is bad as that usually leads to
# larger files after gzipping).
gzip_min_length 256;
# Compress data even for clients that are connecting to us via proxies,
# identified by the "Via" header (required for CloudFront).
gzip_proxied any;
# Tell proxies to cache both the gzipped and regular version of a resource
# whenever the client's Accept-Encoding capabilities header varies;
# Avoids the issue where a non-gzip capable client (which is extremely rare
# today) would display gibberish if their proxy gave them the gzipped version.
gzip_vary on;
# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
# text/html is always compressed by gzip module
# This should be turned on if you are going to have pre-compressed copies (.gz) of
# static files available. If not it should be left off as it will cause extra I/O
# for the check. It is best if you enable this in a location{} block for
# a specific directory, or on an individual server{} level.
# gzip_static on;
# Include files in the sites-enabled folder. server{} configuration files should be
# placed in the sites-available folder, and then the configuration should be enabled
# by creating a symlink to it in the sites-enabled folder.
# See doc/sites-enabled.md for more info.
include C:/nginx/conf/sites-enabled/*.conf;
}
Note that at the end of this example we include all ".conf" files from the "sites-enabled" folder.
If you do not use server blocks you can simply create a file "default.conf" that holds your server settings.
Something like this:
default.conf (example)
server {
listen 80;
keepalive_timeout 300s;
# define path to this project
root "C:/nginx/html/your_path_here";
# Specify a charset
charset utf-8;
# define your server name
server_name localhost;
index index.php index.html;
autoindex off;
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 ------------------------------
#
location ~ \.php$ {
# root for PHP FASTCGI MAPING
root html/your_path_here;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME C:/nginx/html/your_path_here$fastcgi_script_name;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
include fastcgi_params;
}
# Prevent clients from accessing hidden files (starting with a dot) -------------------
# This is particularly important if you store .htpasswd files in the site hierarchy
# Access to `/.well-known/` is allowed.
# https://www.mnot.net/blog/2010/04/07/well-known
# https://tools.ietf.org/html/rfc5785
location ~* /\.(?!well-known\/) {
deny all;
}
# Prevent clients from accessing to backup/config/source files ------------------------
location ~* (?:\.(?:bak|conf|dist|fla|in[ci]|log|psd|sh|sql|sw[op])|~)$ {
deny all;
}
# Expire rules for static content -----------------------------------------------------
# No default expire rule. This config mirrors that of apache as outlined in the
# html5-boilerplate .htaccess file. However, nginx applies rules by location,
# the apache rules are defined by type. A consequence of this difference is that
# if you use no file extension in the url and serve html, with apache you get an
# expire time of 0s, with nginx you'd get an expire header of one month in the
# future (if the default expire rule is 1 month). Therefore, do not use a
# default expire rule with nginx unless your site is completely static
# cache.appcache, your document html and data -----------------------------------------
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
access_log logs/static.log;
}
# Feed --------------------------------------------------------------------------------
location ~* \.(?:rss|atom)$ {
expires 1h;
add_header Cache-Control "public";
}
# Media: images, icons, video, audio, HTC ---------------------------------------------
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
# CSS and Javascript ------------------------------------------------------------------
location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
# WebFonts ----------------------------------------------------------------------------
# If you are NOT using cross-domain-fonts.conf, uncomment the following directive
# location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
# expires 1M;
# access_log off;
# add_header Cache-Control "public";
# }
}
For local development it is a good choice to set a negative cache (-1) so the page is always refreshed on load.
Note that the configuration shown here is just an example and you may (or may not) use it.
Also note that where I defined a root directory I put "your_path_here"; replace it with your real directory name.
This directory must be inside the "html" folder, in "C:/nginx/html/".
To create a server block for "site1", create a new configuration file in "sites-enabled" with any name and point it to the corresponding root directory. This assumes your hosts file ("C:/Windows/System32/drivers/etc/hosts") maps "site1" to "127.0.0.1", or that a subdomain is set to localhost (127.0.0.1 site1.localhost).
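A minimal sketch of such a site1 file, following the same pattern as the default.conf above; the folder name "site1" under "C:/nginx/html" and the host name "site1.localhost" are assumptions:

# C:/nginx/conf/sites-enabled/site1.conf (sketch)
server {
    listen 80;
    server_name site1.localhost;
    root "C:/nginx/html/site1";
    index index.php index.html;
    location ~ \.php$ {
        root html/site1;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME C:/nginx/html/site1$fastcgi_script_name;
        include fastcgi_params;
    }
}

# hosts file entry in "C:/Windows/System32/drivers/etc/hosts"
127.0.0.1 site1.localhost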
I use:
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME C:/nginx-0.7.60/html$fastcgi_script_name; # this is the one line to edit
include fastcgi_params;
}
C:\PHP5\php-cgi.exe -b 127.0.0.1:9000
I have a problem setting up my Docker environment on a remote machine.
I prepared local Docker machines. The problem is with nginx + php-fpm.
Nginx runs as the nginx user, php-fpm runs as the www-data user. Files on the host machine (the application files) are owned by user1. The chmods are the defaults for a Symfony2 application.
When I access my webserver it returns a 404 error or just a plain "file not found".
For a while the exact same configuration worked on my local Ubuntu 16.04 but failed on Debian Jessie on the server. Right now it doesn't work on either. I have tried everything, asked in sysops groups and googled for hours. Do you have any idea?
Here is my vhost configuration
server {
listen 80;
server_name dev.xxxxx.co xxxxx.dev;
root /usr/share/www/co.xxxxx.dev/web;
index app_dev.php;
client_max_body_size 100M;
fastcgi_read_timeout 1800;
location / {
# try to serve file directly, fallback to app.php
try_files $uri $uri/ /app.php$is_args$args;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
access_log off;
}
location ~ ^/(app_dev|config)\.php(/|$) {
fastcgi_pass php-fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
}
location ~ ^/app\.php(/|$) {
fastcgi_pass php-fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/app.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
location ~ \.php$ {
return 404;
}
}
nginx configuration
user root;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
And my docker compose
version: '2'
services:
nginx:
image: nginx
ports:
- 8082:80
volumes:
- /home/konrad/Workspace:/usr/share/www:ro
- ./conf/nginx.conf:/etc/nginx/nginx.conf:ro
- ./sites:/etc/nginx/conf.d:ro
php-fpm:
image: php:fpm
ports:
- 9000:9000
volumes:
- /home/konrad/Workspace:/usr/share/www
- ./conf/www.conf:/etc/php/7.0/fpm/pool.d/www.conf
- ./conf/php.ini:/usr/local/etc/php/conf.d/90-php.ini:ro
On the remote server the files are accessible, shown as owned by 1001:1001.
I have one local HAProxy server (10.10.1.18) that is used to load-balance two local nginx webservers (web1 = 10.10.1.21, web2 = 10.10.1.22).
I can reach the local IPs of the web servers and get the index.php file successfully, e.g. http://10.10.1.21/ and http://10.10.1.22/.
However, when I point at the local IP of HAProxy, http://10.10.1.18/, it only serves the index.html file instead of index.php. We also have a domain name whose public IP points to the HAProxy, but http://example.uni.edu again serves the index.html file and not index.php.
So I don't think it's about public vs. local IP, but rather about the HAProxy or nginx configuration.
/etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
#use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 10000
#---------------------------------------------------------------------
#HAProxy statistics backend
#---------------------------------------------------------------------
listen haproxy3-monitoring *:80
mode http
option forwardfor except 127.0.0.1
option httpclose
stats enable
stats show-legends
stats refresh 5s
stats uri /stats
stats realm Haproxy\ Statistics
stats auth username:password
stats admin if TRUE
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
bind *:80
default_backend webapp-main
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend webapp-main
balance roundrobin
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.uni.edu
server web1 10.10.1.21:80 check
server web2 10.10.1.22:80 check
web1 nginx - /etc/nginx/conf.d/default.conf
server {
listen 80;
server_name 10.10.1.21;
# note that these lines are originally from the "location /" block
root /usr/share/nginx/html;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
include fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location /dataroot/ {
internal;
alias /var/moodledata/; # ensure the path ends with /
}
location /cachedir/ {
internal;
alias /var/moodledata/cache/; # ensure the path ends with /
}
location /localcachedir/ {
internal;
alias /var/moodledata/localcache/; # ensure the path ends with /
}
location /tempdir/ {
internal;
alias /var/moodledata/temp/; # ensure the path ends with /
}
location /filedir/ {
internal;
alias /var/moodledata/filedir/; # ensure the path ends with /
}
}
web2 has the same config as web1, with its own local IP.
When I point directly at index.php, http://10.10.1.18/index.php, it downloads the index.php file and gives
503 Service Unavailable
Has anybody experienced similar issues?
Finally it worked out; please follow these steps:
Do not use config files under /etc/nginx/conf.d/; use only one config file, /etc/nginx/nginx.conf, like this:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 8192;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
tcp_nopush on;
sendfile on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 512m;
large_client_header_buffers 2 1k;
client_body_timeout 1200;
client_header_timeout 1200;
send_timeout 100;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80;
server_name example.uni.edu;
# note that these lines are originally from the "location /" block
root /usr/share/nginx/html;
index index.php index.html index.htm;
location / {
root /usr/share/nginx/html;
try_files $uri $uri/ =404;
index index.php;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ [^/]\.php(/|$) {
root /usr/share/nginx/html;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
###################### For Moodle Application ##################
location /dataroot/ {
internal;
alias /var/moodledata/; # ensure the path ends with /
}
location /cachedir/ {
internal;
alias /var/moodledata/cache/; # ensure the path ends with /
}
location /localcachedir/ {
internal;
alias /var/moodledata/localcache/; # ensure the path ends with /
}
location /tempdir/ {
internal;
alias /var/moodledata/temp/; # ensure the path ends with /
}
location /filedir/ {
internal;
alias /var/moodledata/filedir/; # ensure the path ends with /
}
###################### For Moodle Application ##################
}
}
Make sure you use a valid HAProxy config with two different ports: 80 is for the frontend/backend and 8080 is for monitoring the stats.
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
#use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 10000
#---------------------------------------------------------------------
#HAProxy statistics backend
#---------------------------------------------------------------------
listen haproxy3-monitoring *:8080
mode http
option forwardfor
option httpclose
stats enable
stats show-legends
stats refresh 5s
stats uri /stats
stats realm Haproxy\ Statistics
stats auth username:password
stats admin if TRUE
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
bind *:80
option http-server-close
option forwardfor
default_backend webapp-main
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend webapp-main
balance source
option httpchk HEAD / HTTP/1.1\r\nHost:\ example.uni.edu
server web1 10.10.1.21:80 check
server web2 10.10.1.22:80 check
Now you can look up your stats at http://10.10.1.18:8080/ or http://example.uni.edu:8080/
You can also browse your application at http://example.uni.edu
Note: make sure your public IP points to your HAProxy server!
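For comparison with the config in the question, the decisive change is the stats listener's port: it was bound to *:80, the same port the main frontend binds, and now lives on *:8080, so the stats block and the web frontend no longer compete for port 80 (the backend balancing also changed from roundrobin to source, but the port move is the key part). The relevant stanza, before and after:

# before
listen haproxy3-monitoring *:80

# after
listen haproxy3-monitoring *:8080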
I am in the process of Dockerising my webserver/php workflow.
But because I am on Windows, I need to use a virtual machine. I chose boot2docker, which is Tiny Core Linux running in VirtualBox and adapted for Docker.
I selected three containers:
nginx: the official nginx container;
jprjr/php-fpm: a php-fpm container;
mysql: for databases.
In boot2docker, /www/ contains my web projects and conf/, which has the following tree:
conf
│
├───fig
│ fig.yml
│
└───nginx
nginx.conf
servers-global.conf
servers.conf
Because docker-compose is not available for boot2docker, I must use fig to automate everything. Here is my fig.yml:
mysql:
image: mysql
environment:
- MYSQL_ROOT_PASSWORD=root
ports:
- 3306:3306
php:
image: jprjr/php-fpm
links:
- mysql:mysql
volumes:
- /www:/srv/http:ro
ports:
- 9000:9000
nginx:
image: nginx
links:
- php:php
volumes:
- /www:/www:ro
ports:
- 80:80
command: nginx -c /www/conf/nginx/nginx.conf
Here is my nginx.conf:
daemon off;
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile off;
keepalive_timeout 65;
index index.php index.html index.htm;
include /www/conf/nginx/servers.conf;
autoindex on;
}
And the servers.conf:
server {
server_name lab.dev;
root /www/lab/;
include /www/conf/nginx/servers-global.conf;
}
# Some other servers (vhosts)
And the servers-global.conf:
listen 80;
location ~* \.php$ {
fastcgi_index index.php;
fastcgi_pass php:9000;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
}
So, the problem (sorry for all that configuration, but I believe it was needed to clearly explain it): if I access lab.dev, no problem (which shows that the host entry is set up in Windows), but if I try to access lab.dev/test_autoload/, I get "File not found.". I know this comes from php-fpm not being able to find the files, and the nginx logs confirm it:
nginx_1 | 2015/05/28 14:56:02 [error] 5#5: *3 FastCGI sent in stderr:
"Primary script unknown" while reading response header from upstream,
client: 192.168.59.3, server: lab.dev, request: "GET /test_autoload/ HTTP/1.1",
upstream: "fastcgi://172.17.0.120:9000", host: "lab.dev", referrer: "http://lab.dev/"
I know that there is an index.php in lab/test_autoload/ in both containers; I have checked. In nginx it is located at /www/lab/test_autoload/index.php, and in php at /srv/http/lab/test_autoload/index.php.
I believe the problem comes from root and/or fastcgi_param SCRIPT_FILENAME, but I do not know how to solve it.
I have tried many things, such as modifying the roots, using a rewrite rule, adding/removing some slashes, etc., but nothing has changed the result.
Again, sorry for all this config, but I think it was needed to describe the environment I am in.
I finally found the answer.
The variable $fastcgi_script_name does not take the root into account (logical, as it would have included www/ otherwise). This means that a single global include cannot work. An example:
# "generated" vhost
server {
server_name lab.dev;
root /www/lab/;
listen 80;
location ~* \.php$ {
fastcgi_index index.php;
fastcgi_pass php:9000;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
# I need to add /lab after /srv/http, otherwise PHP is asked to look at the root of the web files instead of inside the lab/ folder
# fastcgi_param SCRIPT_FILENAME /srv/http/lab$fastcgi_script_name;
}
}
This meant that I couldn't write the lab/ part in only one place; I needed to write it in two different places (root and fastcgi_param), so I wrote myself a small PHP script that takes a .json file and turns it into the servers.conf file. If anyone wants to have a look at it, just ask; it will be a pleasure to share it.
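For completeness, the duplication exists because the two containers see the same files at different paths (/www inside nginx, /srv/http inside php). A sketch of an alternative, not what was done here: mount the host folder at the same path in both containers, after which the usual $document_root$fastcgi_script_name form works and lab/ only has to appear once, in root:

# fig.yml (sketch): same mount path in both containers
php:
  image: jprjr/php-fpm
  volumes:
    - /www:/www:ro
nginx:
  image: nginx
  links:
    - php:php
  volumes:
    - /www:/www:ro

# servers.conf (sketch)
server {
    server_name lab.dev;
    root /www/lab/;
    location ~* \.php$ {
        fastcgi_index index.php;
        fastcgi_pass php:9000;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}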
There's a mistake here:
fastcgi_param SCRIPT_FILENAME /srv/http/$fastcgi_script_name;
The correct line is:
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
This is not the same thing.
Your nginx.conf is completely empty: where are daemon off;, the user nginx line, worker_processes, etc.? nginx needs some configuration before running the http block.
In the http conf, same thing: the mime types, the default_type, the log configuration and sendfile on for boot2docker are missing.
Your problem is clearly not a problem with Docker but with the nginx configuration. Have you tested your application by running docker run before using fig? Did it work?