Nginx on Docker serves only welcome pages - php

I'm using macOS and working on a Debian image created through a Dockerfile. Nginx and php-fpm are installed in the Debian image. I then copied my server file to /etc/nginx/sites-available/server and created a symbolic link to it in /etc/nginx/sites-enabled/. I also copied srcs/info.php to /var/www/server/info.php.
After starting the nginx service, I can access my private IP address 192.168.0.46 and see nginx's welcome page. However, when I access 192.168.0.46/info.php, the page fails to open. The same happens with any other HTML file I add.
I looked it up on Google and found suggestions that it was an include problem in /etc/nginx/nginx.conf, but there was no problem there.
My first suspect was the firewall, but if that were the problem, shouldn't the nginx welcome page be unreachable as well?
I've been thinking and searching all day, but I couldn't get an answer. Please help me!
Here are my files:
Dockerfile:
FROM debian:buster
ARG DEBIAN_FRONTEND=noninteractive
ENV USER=root
COPY srcs/server.sites-available /etc/nginx/sites-available/server
COPY srcs/info.php /var/www/server/info.php
COPY srcs/login_form.html /var/www/server/login_form.html
# Installed mariadb-server instead of mysql-server
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y --no-install-recommends --no-install-suggests \
        nginx \
        openssl \
        mariadb-server \
        php-fpm \
        php-mysql && \
    ln -s /etc/nginx/sites-available/server /etc/nginx/sites-enabled/
CMD service mysql start; \
    service nginx start; \
    bash;
# EXPOSE 80 for HTTP and 443 for HTTPS
EXPOSE 80 443
server.sites-available:
server {
    listen 80;
    listen [::]:80;

    root /var/www/server;
    index index.php index.html index.htm;
    server_name server;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    }
}
info.php:
<?php
phpinfo();
?>
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here are my commands:
$ docker build -t server_image .
$ docker run -it -P --rm --name server_container server_image

The server_name server directive means that only requests for http://server will be served.
You should add 192.168.0.46 server to your hosts file.
I'd also recommend changing the server_name to something else, like myserver.local (and adding that to the hosts file).
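For example, something along these lines should work (a sketch; myserver.local is just an illustrative name):
# On the machine you browse from, map the chosen name to the Docker host's private IP
echo "192.168.0.46 myserver.local" | sudo tee -a /etc/hosts
# In srcs/server.sites-available, use the same name before rebuilding the image:
#     server_name myserver.local;
# Then request the PHP file through that name
curl http://myserver.local/info.php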

Use docker run -it -p 80:80 --rm --name server_container server_image to fix it.
But I don't know why...
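A hedged explanation: with -P, Docker publishes every EXPOSEd port on a randomly chosen high host port, so the container's port 80 is not reachable on host port 80, while -p 80:80 binds host port 80 to container port 80 explicitly. The difference is visible with docker port:
# Started as in the question (in one terminal):
docker run -it -P --rm --name server_container server_image
# In another terminal, -P's random host-port mappings can be inspected:
docker port server_container
#   80/tcp  -> 0.0.0.0:32768   (example output, the port number varies)
#   443/tcp -> 0.0.0.0:32769
# With an explicit mapping, host port 80 goes straight to the container:
docker run -it -p 80:80 --rm --name server_container server_image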

Related

Multiple process for user

I'm looking for a way to allow multiple processes to run for one user in Symfony.
I see this behavior with Symfony / Nginx / PHP-FPM,
but I also reproduced it in a clean Symfony project served by the built-in PHP web server:
composer create-project symfony/website-skeleton my-project
This is the docker config:
version: "3.1"
services:
  webserver:
    image: nginx:alpine
    container_name: viz-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./docker/nginx/nginx_all.conf:/etc/nginx/nginx.conf
    networks:
      itb_net:
        ipv4_address: 10.2.0.6
  php-fpm:
    build: itb_docker/php-fpm
    container_name: php-fpm
    working_dir: /application
    networks:
      itb_net:
        ipv4_address: 10.2.0.8
    volumes:
      - ~/.ssh/:/root/.ssh/
      - .:/application
      - ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
      - ./.aws/credentials:/var/www/.aws/credentials
php-ini-overrides.ini
upload_max_filesize = 200M
post_max_size = 208M
memory_limit = 4048M
php-fpm Dockerfile
FROM phpdockerio/php72-fpm:latest
WORKDIR "/application"
# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get -y --no-install-recommends install php-zip php-xdebug php-gd php-xml php7.2-mysql php7.2-intl php7.2-tidy php7.2-xmlrpc php7.2-soap php7.2-bcmath php-mbstring php-curl \
&& apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
# Install git
RUN apt-get update \
&& apt-get -y install git acl wkhtmltopdf xvfb \
&& apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
nginx
user nginx;
worker_processes 2;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stdout main;

    #sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;

    server {
        # limit_conn conn_limit_per_ip 10;
        # limit_req zone=req_limit_per_ip burst=10 nodelay;
        fastcgi_keep_conn on;
        #proxy_buffering off;
        gzip off;
        listen 80 default;
        client_max_body_size 208M;
        access_log /var/log/nginx/application.access.log;
        root /application/public;

        rewrite ^/index\.php/?(.*)$ /$1 permanent;
        try_files $uri @rewriteapp;

        location @rewriteapp {
            rewrite ^(.*)$ /index.php/$1 last;
        }

        # Deny all . files
        location ~ /\. {
            deny all;
        }

        location ~ ^/(index)\.php(/|$) {
            fastcgi_pass php-fpm:9000;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_index app_dev.php;
            send_timeout 1800;
            fastcgi_read_timeout 1800;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
            include fastcgi_params;
        }

        # Statics
        location /(bundles|media) {
            access_log off;
            expires 30d;
            try_files $uri @rewriteapp;
        }
    }
}
test controller:
class TestController extends Controller
{
    /**
     * @Route("/test1")
     */
    public function aAction(){
        $i = 0;
        while ($i < 50) {
            echo "$i<BR>";
            $i++;
            sleep(2);
        }
    }

    /**
     * @Route("/test2")
     */
    public function bAction(){
        return new Response('OK');
    }
}
And the result is the same: I go to /test1, then open /test2 in a second tab, and the second tab waits for the first to finish.
The built-in PHP server is single-threaded. If a request runs long or hangs, it cannot serve the next request until the thread is free again.
As per http://php.net/manual/en/features.commandline.webserver.php
The web server runs only one single-threaded process, so PHP applications will stall if a request is blocked.
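A quick way to see this limitation outside Symfony, as a sketch (host, port and paths are arbitrary, and it assumes the project is served with the built-in server):
# Serve the Symfony front controller with the built-in, single-threaded server
php -S 127.0.0.1:8000 -t public &
sleep 1
# The first request holds the only worker for ~100 seconds (50 x sleep(2))...
curl http://127.0.0.1:8000/index.php/test1 &
# ...so this one does not get a response until the first request finishes
curl http://127.0.0.1:8000/index.php/test2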

Can't make nginx and php-fpm dockers communicate

I have created two Docker images to match my needs. Here is the nginx one:
FROM alpine:3.3
RUN apk add --update nginx
EXPOSE 80 443
CMD nginx -c /www/dev/nginx/conf/nginx.conf -g 'daemon off;'
Then the php-fpm one:
FROM php:7.0-fpm
RUN apt-get update
RUN apt-get install -y apt-transport-https ca-certificates dcmtk libgdcm-tools wkhtmltopdf libdbd-freetds libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd
RUN apt-get install -y imagemagick --fix-missing
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /www/public
CMD rm /etc/freetds.conf
ADD freetds.conf /etc/freetds.conf
CMD rm /var/cache/apk/*
Then I run the commands in the following order:
docker create -p 9000:9000 --name php -v /www:/www:rw php
docker start php
docker create --privileged=true -p 80:80 -p 443:443 --name nginx -v /www:/www --link php:php nginx
docker start nginx
So far everything works: the containers are running and I can even get static content from nginx. However, any PHP script fails with
[error] 6#0: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: platform.v2.vetology.net, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://172.17.0.2:9000", host: "localhost.localdomain"
doing
cgi-fcgi -bind -connect 127.0.0.1:9000
from the host machine works and PHP responds, so I know both containers are working and responsive; they just don't communicate.
I read in a different thread that this issue can be caused by php and nginx using different root directories, but that is not the case here, as both have the /www directory mounted and /www/public contains the index.php file I am trying to open.
And here is the nginx.conf
user nobody;
error_log /www/dev/nginx/log/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
worker_processes 1;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /www/dev/nginx/log/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    default_type application/octet-stream;

    gzip on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    include /www/dev/nginx/conf/upstream.conf;

    server {
        listen 80;
        listen 443 ssl;
        listen [::]:443 default ipv6only=on;
        listen [::]:80 default_server ipv6only=on;

        ssl_certificate /www/dev/ssl/certificate.crt;
        ssl_certificate_key /www/dev/ssl/key.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        root /www/public;
        index index.php index.html index.htm;
        server_name hostname.net;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        client_body_buffer_size 128k;
        client_body_temp_path /www/dev/nginx/tmp/client_body_temp;
        proxy_temp_path /www/dev/nginx/tmp/proxy_temp_path;
        fastcgi_temp_path /www/dev/nginx/tmp/fastcgi_temp_path;

        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param HTTP_PROXY "";
            fastcgi_pass vetology;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
}
Any ideas of what I may be doing wrong?
First off, I'm not sure you want to start the nginx container with privileged. There's also no need to mount the php directory with the :rw switch; rw is the default mode.
Second, take a look at where you're telling nginx to send the PHP request, specifically which host, which is defined in nginx's fastcgi_pass directive. You have it sending the PHP request to a machine named vetology. But Docker is linking the containers together with --link php:php, i.e. it takes the container named php and gives it the alias php, so it will be exposed to your nginx container as a machine named php.
You can try this out:
docker run -it --rm --link php:some_php nginx sh -c 'ping some_php'
Change
fastcgi_pass vetology;
to:
fastcgi_pass php:9000;
And your container should work.
For complex container assemblies like this one, have a look at docker-compose next.
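For reference, a hedged sketch of the same wiring without --link, using a user-defined bridge network (the image, volume and container names are taken from the question; docker-compose would generate something equivalent):
# Containers on the same user-defined network can reach each other by container name
docker network create appnet
docker run -d --name php --network appnet -v /www:/www php
docker run -d --name nginx --network appnet -p 80:80 -p 443:443 -v /www:/www nginx
# nginx's "fastcgi_pass php:9000;" then resolves through Docker's embedded DNS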

Nginx and php-fpm docker compose

I'm trying to create a development environment for a PHP project with Docker Compose.
I just created a docker-compose file that defines 3 containers:
Application: that mounts my code volume
Php: that runs php5-fpm
Nginx: that runs the nginx web server.
I can get an HTML file through my web server, but when I try with a PHP file I get a 502 Bad Gateway error.
Error.log:
2016/09/19 16:51:18 [error] 8#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: xxx, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://172.17.0.3:9000", host: "xxx"
I think that nginx can't connect to php.
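A couple of checks that could confirm this guess (a sketch; the service names come from the compose file below, and it assumes the standard Debian php5-fpm layout):
# Does the nginx container resolve the linked "php" hostname?
docker-compose exec nginx getent hosts php
# What is php5-fpm configured to listen on? On Debian the default pool listens
# on a unix socket, not on TCP port 9000:
docker-compose exec php grep -r '^listen' /etc/php5/fpm/pool.d/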
Here is my docker-compose file:
# import the code and bind it on /var/www
application:
  build: code
  volumes:
    - ./sites:/var/www/
  tty: true

# php container
php:
  build: php-fpm
  volumes_from:
    - application
  expose:
    - 9000:9000
  volumes:
    - ./logs/php5-fpm/:/var/log

# nginx container
nginx:
  build: nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx:/var/log/nginx
php-fpm/Dockerfile:
FROM debian:jessie
RUN apt-get update && apt-get install -y php5-common php5-cli php5-fpm php5-mcrypt php5-mysql php5-gd php5-imagick php5-curl php5-intl php5-memcache php5-imap
RUN usermod -u 1000 www-data
CMD ["php5-fpm", "-F"]
EXPOSE 9000
nginx/Dockerfile:
FROM debian:jessie
RUN apt-get update && apt-get install -y nginx
ADD nginx.conf /etc/nginx
ADD test.conf /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/test.conf /etc/nginx/sites-enabled/test.conf
RUN rm /etc/nginx/sites-enabled/default
RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf
RUN usermod -u 1000 www-data
CMD ["nginx"]
EXPOSE 80
nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    error_log off;
    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
daemon off;
nginx/test.conf:
server {
    listen 80;
    server_name xxx;
    root /var/www/test;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    error_log /var/log/nginx/test_error.log;
    access_log /var/log/nginx/test_access.log;
}
There are no errors when launching docker-compose.
Can someone help me?
Thanks

How to set up PHP 7 with NGINX

I installed PHP 7 Nightly and Nginx 1.9.7 (Mainline) on my development Debian Stable:
$ curl http://nginx.org/keys/nginx_signing.key | apt-key add -
$ echo -e 'deb http://nginx.org/packages/mainline/debian/ jessie nginx\ndeb-src http://nginx.org/packages/mainline/debian/ jessie nginx' > /etc/apt/sources.list.d/nginx.list
$ echo -e 'deb http://repos.zend.com/zend-server/early-access/php7/repos ubuntu/' > /etc/apt/sources.list.d/php.list
$ apt-get -y update && time apt-get -y dist-upgrade
$ apt-get -y --force-yes install --fix-missing nginx php7-nightly
$ service nginx restart
I have this configuration file in /etc/nginx/conf.d/php.conf (default.conf uncommented). It's the original default.conf; I just uncommented the PHP FastCGI lines:
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
I haven't found any tutorial on the net yet on how to set up Nginx Mainline with PHP 7. :(
Thank You!
After taking a quick look at the packages offered by zend (the packages you are installing), you'll have to write a script for your init system (systemd) to start and manage the php-fpm process.
The php-fpm binary is installed at /usr/local/php7/sbin/php-fpm. You'll also have to modify the configuration files in /usr/local/php7/etc. The main change will be to ensure that FPM has a pool listening on address 127.0.0.1 at port 9000.
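As a rough sketch of such a unit file (the binary path comes from the answer above; the exact name of the FPM config file under /usr/local/php7/etc is an assumption):
# /etc/systemd/system/php7-fpm.service - minimal unit for the nightly build
cat > /etc/systemd/system/php7-fpm.service <<'EOF'
[Unit]
Description=PHP 7 FastCGI Process Manager (nightly)
After=network.target

[Service]
# --fpm-config points at the assumed config file name; adjust to what the package ships
ExecStart=/usr/local/php7/sbin/php-fpm --nodaemonize --fpm-config /usr/local/php7/etc/php-fpm.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable php7-fpm
systemctl start php7-fpm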

Nginx RTMP not recording

I have already set up Nginx RTMP on Ubuntu Linux hosted by DigitalOcean, and I'm currently running my Laravel web application in localhost mode on my desktop. Live streaming seems to work fine: I'm testing with JWPlayer on localhost and Open Broadcaster Software (OBS), and it works. But whenever I try to record the streamed video to a Linux directory (/var/www), nothing seems to happen and there is no error at all after I hit the stop-streaming button in OBS.
I don't know how the recording works. I tried recording manually and it does produce a file: when I click start record, /var/rec/{mystream}.flv comes out.
This is the manual recording setup; the record link is embedded in the Laravel website:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;

            recorder rec1 {
                record all manual;
                record_suffix all.flv;
                record_path /var/rec;
                record_unique on;
            }
        }
    }
}
Start Recording:
Start rec1
nginx config for http:
access_log logs/rtmp_access.log;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
    }

    location /stat {
        rtmp_stat all;
        rtmp_stat_stylesheet stat.xsl;
    }

    location /stat.xsl {
        root /var/www/;
    }

    location /control {
        rtmp_control all;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
By the way,
Plan B: I plan to store my recorded stream files on Amazon AWS S3. Does anyone know how to do this with Nginx RTMP instead of using Wowza on Amazon?
You can run a shell script from the nginx conf. Check the permissions first:
chown -R nobody:nogroup foldername
chmod -R 700 foldername
Shell script:
ffmpeg -v error -y -i "$1" -vcodec libx264 -acodec aac -f mp4 -movflags +faststart "/tmp/recordings/$2.mp4"
aws s3 cp "/tmp/recordings/$basname.mp4" "s3://bucketname/"
Then, in the nginx RTMP configuration, call the script once a recording is finished:
exec_record_done bash -c "/home/ubuntu/script/record.sh $path $basname";
You should check the permissions on the directory you're trying to record to (/var/rec in your case). Nginx, even though started up with sudo, spawns worker processes as user "nobody" by default. You can also try changing the user that the worker processes spawn as: https://serverfault.com/a/534512/102045
When I did this with my partner, I used
record_path /tmp/rec;
Then I set up a crontab that periodically sends the new files (videos) to his NextCloud FTP (in this case it could be your Amazon AWS).
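As a hedged sketch of that approach adapted to S3 (the bucket name, paths and schedule are placeholders, and it assumes the AWS CLI is installed and configured):
#!/bin/sh
# sync-recordings.sh - push finished recordings to S3
aws s3 sync /var/rec/ s3://your-bucket/recordings/
Scheduled with a crontab entry such as: */5 * * * * /usr/local/bin/sync-recordings.sh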
It seems like bhh1998's and akash jakhad's answers are correct, although nowadays the nginx.conf file comes with the nginx user as the default, so instead of nobody and nogroup, simply use nginx. The command mentioned in the previous answers becomes:
chown -R nginx:nginx foldername
To be sure of the correct username, check your configuration file and see which user is being specified.
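Something like this should confirm which user to use before changing ownership (a sketch; /var/rec is the record_path from the question):
# Show the user each running nginx process runs as (master vs. workers)
ps -o user,comm -C nginx
# Check the user directive in the nginx configuration
grep -E '^\s*user' /etc/nginx/nginx.conf
# Then give that user ownership of the recording directory
chown -R nginx:nginx /var/rec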
In addition to the user permission settings mentioned in other answers, I also had to change the path to end with a trailing slash i.e. /var/rec/ instead of /var/rec.
