Docker, Nginx, PHP-FPM: problems with connection

I've taken on a project built on Docker containers and I'm having trouble getting it to run smoothly.
My containers build successfully, but when I try to access the website, nginx gives me a 502 with this error in the logs:
connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.6:9000", host: "localhost:2000"
From what I've read, this would be a problem with the link between my two containers.
I've tried changing php-fpm's listen parameter directly to 0.0.0.0:9000, as suggested in Nginx+PHP-FPM: connection refused while connecting to upstream (502), but this gave rise to a new error I don't fully understand either:
*11 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.6:9000", host: "localhost:2000"
Does anyone have an idea of what is failing and how to fix it?
The docker-compose part for these two services is:
elinoi-webserver:
  build: .
  dockerfile: docker/Dockerfile.nginx.conf
  container_name: elinoi-webserver
  volumes:
    - .:/var/www/elinoi.com
  ports:
    - "2000:80"
  links:
    - elinoi-php-fpm
elinoi-php-fpm:
  build: .
  dockerfile: docker/Dockerfile.php-fpm.conf
  container_name: elinoi-php-fpm
  volumes:
    - .:/var/www/elinoi.com
    - /var/docker_volumes/elinoi.com/shared:/var/www/elinoi.com/shared
  ports:
    - "22001:22"
  links:
    - elinoi-mailhog
    - elinoi-memcached
    - elinoi-mysql
    - elinoi-redis
The nginx conf file is:
server {
    listen 80 default;
    root /var/www/elinoi.com/current/web;
    rewrite ^/app\.php/?(.*)$ /$1 permanent;
    try_files $uri @rewriteapp;
    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }
    # Deny all . files
    location ~ /\. {
        deny all;
    }
    location ~ ^/(app|app_dev)\.php(/|$) {
        fastcgi_pass elinoi-php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index app.php;
        send_timeout 1800;
        fastcgi_read_timeout 1800;
    }
    # Statics
    location /(bundles|media) {
        access_log off;
        expires 30d;
        try_files $uri @rewriteapp;
    }
}
The Dockerfile for the elinoi-php-fpm service is:
FROM phpdockerio/php7-fpm:latest
# Install selected extensions
RUN apt-get update \
&& apt-get -y --no-install-recommends install php7.0-memcached php7.0-mysql php7.0-redis php7.0-gd php7.0-imagick php7.0-intl php7.0-xdebug php7.0-mbstring \
&& apt-get -y --no-install-recommends install nodejs npm nodejs-legacy vim ruby-full git build-essential libffi-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN npm install -g bower
RUN npm install -g less
RUN gem install sass
# If you're using symfony and the vagranted environment, I strongly recommend you change your AppKernel to use the following temporary folders
# for cache, logs and sessions, otherwise application performance may suffer due to these being shared over NFS back to the host
RUN mkdir -p "/tmp/elinoi/cache" \
&& mkdir -p "/tmp/elinoi/logs" \
&& mkdir -p "/tmp/elinoi/sessions" \
&& chown www-data:www-data -R "/tmp/elinoi"
RUN apt-get update \
&& apt-get -y --no-install-recommends install openssh-server \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
ADD docker/.ssh /root/.ssh
RUN chmod 700 /root/.ssh/authorized_keys
CMD ["/usr/sbin/sshd", "-D"]
WORKDIR "/var/www/elinoi.com"
The Dockerfile for the elinoi-webserver is:
FROM smebberson/alpine-nginx:latest
COPY /docker/nginx.conf /etc/nginx/conf.d/default.conf
WORKDIR "/var/www/elinoi.com"

There can only be one CMD instruction in a Dockerfile. If you list more than one CMD, only the last CMD takes effect.
The original Dockerfile ends with:
CMD /usr/bin/php-fpm
but the Dockerfile of the elinoi-php-fpm service ends with the following CMD layer:
CMD ["/usr/sbin/sshd", "-D"]
So only sshd is started after the container is created; php-fpm is never started.
That's why nginx constantly returns a 502 error: the PHP backend is not running at all.
You can fix your issue in one of the following ways:
1. Run both programs, as in Docker Alpine linux running 2 programs
2. Simply delete the sshd part from the elinoi-php-fpm service.
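As a hedged sketch of the first option: a process supervisor such as supervisord can run both php-fpm and sshd under a single CMD. The program names and paths below are illustrative, not taken from the project; they would need to match what the phpdockerio/php7-fpm image actually ships.

```ini
; /etc/supervisord.conf -- illustrative sketch, paths may differ in the real image
[supervisord]
nodaemon=true

[program:php-fpm]
command=/usr/bin/php-fpm --nodaemonize
autorestart=true

[program:sshd]
command=/usr/sbin/sshd -D
autorestart=true
```

The Dockerfile would then end with a single CMD, e.g. CMD ["supervisord", "-c", "/etc/supervisord.conf"], so both processes run under one PID 1 and nginx can reach the FPM backend on port 9000.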

Related

Laravel laradock Unable to connect localhost

After installing Docker on Ubuntu and adding laradock to an existing project, I ran the command below to start using laradock:
docker-compose up -d nginx mysql phpmyadmin workspace
result:
Creating laradock_mysql_1 ... done
Creating laradock_docker-in-docker_1 ... done
Creating laradock_workspace_1 ... done
Creating laradock_phpmyadmin_1 ... done
Creating laradock_php-fpm_1 ... done
Creating laradock_nginx_1 ... done
Then, after running a docker-compose exec command, I served Laravel inside the workspace container:
docker-compose exec workspace bash
root@b3c88be3e389:/var/www# php artisan serve
INFO Server running on [http://127.0.0.1:8000].
Press Ctrl+C to stop the server
When I click on the IP address, I get this message in the browser:
Unable to connect
Firefox can’t establish a connection to the server at 127.0.0.1:8000.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.
docker-compose ps command result:
Name Command State Ports
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
laradock_docker-in-docker_1 dockerd-entrypoint.sh Up 2375/tcp, 2376/tcp
laradock_mysql_1 docker-entrypoint.sh mysqld Up 0.0.0.0:3306->3306/tcp,:::3306->3306/tcp, 33060/tcp
laradock_nginx_1 /docker-entrypoint.sh /bin ... Up 0.0.0.0:443->443/tcp,:::443->443/tcp, 0.0.0.0:80->80/tcp,:::80->80/tcp, 0.0.0.0:81->81/tcp,:::81->81/tcp
laradock_php-fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
laradock_phpmyadmin_1 /docker-entrypoint.sh apac ... Up 0.0.0.0:8081->80/tcp,:::8081->80/tcp
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp,:::2222->22/tcp, 0.0.0.0:3000->3000/tcp,:::3000->3000/tcp, 0.0.0.0:3001->3001/tcp,:::3001->3001/tcp,
0.0.0.0:4200->4200/tcp,:::4200->4200/tcp, 0.0.0.0:5173->5173/tcp,:::5173->5173/tcp, 0.0.0.0:8001->8000/tcp,:::8001->8000/tcp,
0.0.0.0:8080->8080/tcp,:::8080->8080/tcp
The nginx definition in docker-compose.yml:
nginx:
  build:
    context: ./nginx
    args:
      - CHANGE_SOURCE=${CHANGE_SOURCE}
      - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
      - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      - http_proxy
      - https_proxy
      - no_proxy
  volumes:
    - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
    - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
    - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
    - ${NGINX_SSL_PATH}:/etc/nginx/ssl
  ports:
    - "${NGINX_HOST_HTTP_PORT}:80"
    - "${NGINX_HOST_HTTPS_PORT}:443"
    - "${VARNISH_BACKEND_PORT}:81"
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
and the nginx Dockerfile:
FROM nginx:alpine
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
COPY nginx.conf /etc/nginx/
# If you're in China, or you need to change sources, will be set CHANGE_SOURCE to true in .env.
ARG CHANGE_SOURCE=false
RUN if [ ${CHANGE_SOURCE} = true ]; then \
# Change application source from dl-cdn.alpinelinux.org to aliyun source
sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/' /etc/apk/repositories \
;fi
RUN apk update \
&& apk upgrade \
&& apk --update add logrotate \
&& apk add --no-cache openssl \
&& apk add --no-cache bash
RUN apk add --no-cache curl
RUN set -x ; \
addgroup -g 82 -S www-data ; \
adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
ARG PHP_UPSTREAM_CONTAINER=php-fpm
ARG PHP_UPSTREAM_PORT=9000
# Create 'messages' file used from 'logrotate'
RUN touch /var/log/messages
# Copy 'logrotate' config file
COPY logrotate/nginx /etc/logrotate.d/
# Set upstream conf and remove the default conf
RUN echo "upstream php-upstream { server ${PHP_UPSTREAM_CONTAINER}:${PHP_UPSTREAM_PORT}; }" > /etc/nginx/conf.d/upstream.conf \
&& rm /etc/nginx/conf.d/default.conf
ADD ./startup.sh /opt/startup.sh
RUN sed -i 's/\r//g' /opt/startup.sh
CMD ["/bin/bash", "/opt/startup.sh"]
EXPOSE 80 81 443
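For reference, with the default build args shown above (PHP_UPSTREAM_CONTAINER=php-fpm, PHP_UPSTREAM_PORT=9000), the RUN echo layer generates an upstream.conf like the following, which the site configs can then target via fastcgi_pass php-upstream;:

```nginx
# /etc/nginx/conf.d/upstream.conf, as generated by the RUN echo above
upstream php-upstream { server php-fpm:9000; }
```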
laradock env:
# All volumes driver
VOLUMES_DRIVER=local
# All Networks driver
NETWORKS_DRIVER=bridge

Laravel is suspiciously slow (fresh app, everything is by default)

I work with Windows 10 (WSL 2).
My hardware is:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
RAM 8.00GB
SSD
Actually, this is a gaming laptop (MSI GL 65 95CK), if you are interested.
I decided to install Laravel, went to the documentation and followed the described steps:
In WSL terminal (I use Ubuntu) curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app && ./vendor/bin/sail up
I went to the browser and realized that the main page took almost 1 second to render! Sometimes even two seconds!
I thought, "ok, maybe the framework is in a non-optimized mode: debug and so on", and decided to set APP_DEBUG in .env to false. I also removed all routes and put this instead:
Route::get('/', [\App\Http\Controllers\TestController::class, 'test']);
Before this, I created the TestController:
class TestController extends Controller
{
    public function test() {
        return response()->json([
            'name' => 'Abigail',
            'state' => 'CA',
        ]);
    }
}
Then I ran php artisan optimize and opened http://localhost/api in the browser,
and the result is a big sorrow:
Why 800ms? I did not do anything.
Ok, I decided to just rename the index.php file in the public folder to index2, for example, and put in a new index.php that just prints an array, to test whether this is a Laravel problem or just an infrastructure issue.
New index.php:
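The actual replacement file isn't shown here (it was an image in the original post), but a hypothetical stand-in along these lines would exercise only nginx and PHP-FPM, bypassing the framework entirely:

```php
<?php
// Hypothetical stand-in for the test index.php (the original is not shown):
// print an array and exit, so no framework code runs at all.
print_r(['status' => 'ok', 'framework' => 'none']);
```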
Much better!
Then I thought, "let's compare with another framework, for example with .NET Core". And I made a very simple Web Api project.
Controller:
namespace MockWebApi.Controllers
{
    [ApiController]
    [Route("")]
    public class MainController : ControllerBase
    {
        [Route("test")]
        public IActionResult Test()
        {
            return Ok(new
            {
                Test = "hello world!!!"
            });
        }
    }
}
The result is:
Ok, you can argue that this is a compiled language. I decided to check with Node.js and Express:
Code:
router.get('/', function(req, res, next) {
    res.json({"test": "123"})
});
Result:
As you can see, Node is as fast as C# in this case.
So, what is wrong with Laravel? Did I miss something in installation?
UPDATE
I raised Laravel without Sail. My docker-compose file:
version: '3'
services:
  php-fpm:
    build:
      context: docker/php-fpm
    volumes:
      - ./:/var/www
    networks:
      - internal
  nginx:
    build:
      context: docker/nginx
    volumes:
      - ./:/var/www
    ports:
      - "80:80"
    depends_on:
      - php-fpm
    networks:
      - internal
networks:
  internal:
    driver: bridge
Nginx Dockerfile:
FROM nginx
ADD ./default.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
Nginx config:
server {
    listen 80;
    index index.php;
    server_name 127.0.0.1 localhost;
    root /var/www/public;
    location / {
        try_files $uri /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_read_timeout 1000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
php-fpm Dockerfile:
FROM php:7.4-fpm
RUN apt-get update && apt-get install -y wget git unzip \
&& apt-get install libpq-dev -y
RUN wget https://getcomposer.org/installer -O - -q \
| php -- --install-dir=/bin --filename=composer --quiet
RUN groupadd -r -g 1000 developer && useradd -r -u 1000 -g developer developer
USER developer
WORKDIR /var/www
I did not get any performance improvement :(

Docker swarm is not correctly copying volumes across other worker nodes

I'm trying to build a Laravel application (LEMP stack) in Docker Swarm. I'm using HAProxy, as described in Docker's networking documentation on ingress networks, to load balance from the manager node's public IPv4 across all worker nodes.
My current configuration looks like so:
frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

frontend http_front_ssl
    bind *:443
    stats uri /haproxy?stats
    default_backend http_back_ssl

backend http_back
    balance roundrobin
    server vps1 xx.xx.xxx.xxx:8080 check
    server vps2 xx.xx.xxx.xxx:8080 check
    server vps3 xx.xx.xxx.xxx:8080 check

backend http_back_ssl
    balance roundrobin
    server vps1 xx.xx.xxx.xxx:4043 check
    server vps2 xx.xx.xxx.xxx:4043 check
    server vps3 xx.xx.xxx.xxx:4043 check
I have then a custom image that provides and exposes a PHP-FPM on port 9000:
docker service create \
--name php-fpm \
--replicas=3 \
--network app-net \
--secret source=proxy.conf,target=/etc/nginx/conf.d/site.conf \
--secret site.key \
--secret site.crt \
--mount type=volume,src=web,dst=/var/www \
jaquarh/php-fpm:alpha
The keys are for self-signed SSL (testing Https). The Dockerfile to build this is:
FROM php:7.4-fpm
COPY laravel/composer.lock laravel/composer.json /var/www/
WORKDIR /var/www
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install gd
RUN pecl install memcached; \
docker-php-ext-install intl; \
docker-php-ext-install exif; \
echo "extension=memcached.so" >> /usr/local/etc/php/conf.d/memcached.ini; \
docker-php-ext-enable exif;
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
COPY . /var/www
COPY --chown=www:www . /var/www
USER www
EXPOSE 9000
CMD ["php-fpm"]
I am then building an nginx service as the proxy for the php-fpm service.
docker service create \
--name nginx \
--secret site.key \
--secret site.crt \
--secret source=site.conf,target=/etc/nginx/conf.d/site.conf \
-p 8080:80 \
-p 4043:443 \
--replicas=3 \
--network app-net \
--mount type=volume,src=web,dst=/var/www \
nginx:latest \
sh -c "exec nginx -g 'daemon off;'"
Inside of my web volume, I hold all the laravel files. I am only testing Http at the moment so my nginx configuration looks like so:
server {
    listen 80;
    listen [::]:80;
    server_name example.co.uk; # this is my actual domain for testing
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location = /robots.txt {
        access_log off;
        log_not_found off;
    }
    error_page 404 /index.php;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        include fastcgi_params;
    }
    # ready for when letsencrypt is introduced
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Finally, I have a pre-built SQL database volume and service ready to use which the laravel .env uses.
docker service create \
--name mysql \
--replicas=3 \
--network app-net \
--secret site.key \
--secret site.crt \
--mount type=volume,src=dbdata,target=/var/lib/mysql/ \
--env MYSQL_DATABASE=xxxxxx \
--env MYSQL_ROOT_PASSWORD=xxxxxx \
--env MYSQL_USER=xxxxx \
--env MYSQL_PASSWORD=xxxxx \
-p 3306:3306 \
mysql:latest
The issue I am facing is that VPS1 (in this case the manager) gets these files perfectly fine, because the volumes exist on the manager VPS. VPS2 and VPS3 both show the volumes as mounted correctly, but the data is not copied over.
As seen in the images, on my development PC (which is also a worker, for testing) the mount has been created successfully and the nginx container is assigned to it; however, when I inspect the directories in bash, the directory housing the laravel application only exists on the manager node.
What am I doing wrong? How can I copy this volume across my worker nodes? If I inspect the volume on the manager, it looks like this:
ubuntu#vps-438f3f94:~$ docker volume inspect web
[
    {
        "CreatedAt": "2021-04-23T09:29:23Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/web/_data",
        "Name": "web",
        "Options": {},
        "Scope": "local"
    }
]
Update: if I look at the HAProxy stats, I can see my development PC is DOWN because it's not port forwarded (it's just for me to debug containers). I'm assuming this has no real effect on volumes, but it gives you better insight into my environment.
http_back
Server    Bytes In   Bytes Out   Status       LastChk
vps1      14191      23441       25m30s UP    L4OK in 0ms
vps2      10055      16896       25m28s UP    L4OK in 0ms
vps3      0          0           13h5m DOWN   L4TOUT in 2001ms
Backend   43399      52050       25m30s UP    -

Docker, Nginx, localhost does not respond

I hope you're all doing well during this particular time.
I'm stuck on a problem with my nginx+php image. It's the first image I'm building from scratch (with Google's help), so I'm not sure everything is correct.
This image will serve me in multiple home projects; I'm using it via a GitLab Container Registry.
I'm using this Dockerfile:
FROM alpine:latest
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
RUN adduser -S www-data -u 1000
RUN apk upgrade -U
RUN apk add --update --no-cache \
git \
bash \
vim
RUN apk --update --no-cache add php7 php7-fpm php7-mysqli php7-json php7-openssl php7-curl \
php7-zlib php7-xml php7-phar php7-intl php7-dom php7-xmlreader php7-ctype php7-session \
php7-mbstring php7-gd nginx curl
RUN apk add --update --no-cache nginx supervisor
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
RUN composer global require "hirak/prestissimo:^0.3" --prefer-dist --no-progress --no-suggest --classmap-authoritative \
&& composer clear-cache
ENV PATH="${PATH}:/root/.composer/vendor/bin"
COPY config/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p var/cache var/log var/sessions \
&& chown -R www-data var
VOLUME /srv/api/var
VOLUME /var/www/html
VOLUME /var/www/src
COPY index.php /var/www/html/
COPY src/run.sh /run.sh
RUN chmod u+rwx /run.sh
EXPOSE 80
ENTRYPOINT [ "/run.sh" ]
CMD ["init"]
It calls this bash script, used for Symfony migrations:
#!/bin/bash

INIT=false
MIGRATION=false
CREATEDB=true

for DOCKER_OPTION in $@
do
    case "$DOCKER_OPTION" in
        init )
            INIT=true
            shift
            ;;
        migration )
            MIGRATION=true
            shift
            ;;
        with-existing-db )
            CREATEDB=false
            shift
            ;;
        * )
            break
            ;;
    esac
done

APP_ENV=${APP_ENV:-dev}

init_project () {
    composer install --prefer-dist --no-progress --no-suggest --no-interaction
    chmod +x bin/console && sync
    php bin/console assets:install
    if $CREATEDB
    then
        php bin/console doctrine:schema:update -f
        php bin/console doctrine:fixtures:load -n
    fi
}

if $MIGRATION;
then
    php bin/console doctrine:migrations:migrate --no-interaction -vvv
    chown -R www-data:www-data /srv/api/var/
fi

if $INIT;
then
    mkdir -p /var/nginx/client_body_temp
    chown www-data:www-data /var/nginx/client_body_temp
    mkdir -p /var/run/php/
    chown www-data:www-data /var/run/php/
    touch /var/log/php-fpm.log
    chown www-data:www-data /var/log/php-fpm.log
    if [ "$APP_ENV" != 'prod' ];
    then
        init_project
    fi
    exec supervisord --nodaemon --configuration="/etc/supervisord.conf" --loglevel=info
fi

exec "$@";
It is launched by this docker-compose file:
version: "3.7"
services:
  lamp:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 80:80
    volumes:
      - .:/var/www/html/
      - ./src:/var/www/src/
With this Nginx configuration:
user www-data;
worker_processes 4;
error_log /dev/stdout;
pid /tmp/nginx.pid;
events {
    worker_connections 1024;
}
http {
    types {
        text/html html htm shtml;
        text/css css;
        text/xml xml;
        image/gif gif;
        image/jpeg jpeg jpg;
        application/javascript js;
        application/atom+xml atom;
        application/rss+xml rss;
        image/svg+xml svg;
    }
    client_body_temp_path /tmp/client_body;
    fastcgi_temp_path /tmp/fastcgi_temp;
    proxy_temp_path /tmp/proxy_temp;
    scgi_temp_path /tmp/scgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    # mime types
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        access_log /dev/stdout;
        error_log /dev/stdout;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
    }
}
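One observation on the config above, hedged as an aside: unlike the other nginx configs in this thread, this server block only serves static files from /usr/share/nginx/html and never hands anything to php-fpm. Passing .php requests to FPM usually takes a location block along these lines; the socket path below is an assumption (run.sh creates /var/run/php/, but the actual socket name isn't shown):

```nginx
# Sketch only: root and fastcgi_pass must match where index.php and php-fpm actually live
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/php7-fpm.sock;  # assumed socket path
}
```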
Everything seems to go well.
Unfortunately, I can't display my index.php (phpinfo()).
When I do
curl -vvv http://localhost:80
* Trying ::1:80...
* Connected to localhost (::1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.69.1
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
And when I do
curl -vvv http://localhost:80 --trace-ascii dump.txt
== Info: Trying ::1:80...
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 73 bytes (0x49)
0000: GET / HTTP/1.1
0010: Host: localhost
0021: User-Agent: curl/7.69.1
003a: Accept: */*
0047:
== Info: Empty reply from server
== Info: Connection #0 to host localhost left intact
I've searched the net, but my skills in nginx configuration are quite limited.
Do you have any ideas for me?
Thanks for your replies.
As for vim, it's already added here:
RUN apk add --update --no-cache \
git \
bash \
vim
So I tried with this Dockerfile, without PHP/Composer (it's pretty minimalist):
FROM alpine:latest
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
RUN adduser -S www-data -u 1000
RUN apk upgrade -U
RUN apk add --update --no-cache \
git \
bash \
vim
RUN apk --update --no-cache add nginx curl
RUN apk add --update --no-cache nginx supervisor
COPY config/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p var/cache var/log var/sessions \
&& chown -R www-data var
VOLUME /srv/api/var
VOLUME /var/www/html
VOLUME /var/www/src
COPY index.php /var/www/html/
COPY src/run.sh /run.sh
RUN chmod u+rwx /run.sh
EXPOSE 80
ENTRYPOINT [ "/run.sh" ]
CMD ["init"]
For
curl -vv http://localhost:80
* Trying ::1:80...
* Connected to localhost (::1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.69.1
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
And for
curl -vv http://localhost:80 --trace-ascii dump.txt
== Info: Trying ::1:80...
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 73 bytes (0x49)
0000: GET / HTTP/1.1
0010: Host: localhost
0021: User-Agent: curl/7.69.1
003a: Accept: */*
0047:
== Info: Recv failure: Connection reset by peer
== Info: Closing connection 0
What do you think?

Nginx pointed to wrong directory with Docker on Windows

I'm setting up a Laravel application with Docker, using a Docker image configuration I found here: https://blog.pusher.com/docker-for-development-laravel-php/
Now, this works fine on my Ubuntu machine (16.04), but on Windows (10 Pro) I get a weird error. It first complains about not finding a composer.json file. Then, with each request I make to localhost:8000, I get the following error:
15#15: *1 open() "/var/www/public404" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8000"
I am very new to this, but it seems that nginx points to /var/www/public404, and I have no idea how that "404" got there. I have a feeling it has to do with the line try_files $uri = 404; in the site.conf file; however, I don't really know how that works and I don't want to break it. The weird thing is that this works on Ubuntu, but not on Windows (or maybe that's not weird at all?).
I use docker build . -t my-image to build the image and docker run -p 8000:80 --name="my-container" my-image to run a container using the image.
The EOL of all the config files is set to line feed. Does anybody have any idea how I might fix this?
Dockerfile
FROM nginx:mainline-alpine
LABEL maintainer="John Doe <john@doe>"
COPY start.sh /start.sh
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord.conf /etc/supervisord.conf
COPY site.conf /etc/nginx/sites-available/default.conf
RUN apk add --update \
php7 \
php7-fpm \
php7-pdo \
php7-pdo_mysql \
php7-mcrypt \
php7-mbstring \
php7-xml \
php7-openssl \
php7-json \
php7-phar \
php7-zip \
php7-dom \
php7-session \
php7-tokenizer \
php7-zlib && \
php7 -r "copy('http://getcomposer.org/installer', 'composer-setup.php');" && \
php7 composer-setup.php --install-dir=/usr/bin --filename=composer && \
php7 -r "unlink('composer-setup.php');" && \
ln -s /etc/php7/php.ini /etc/php7/conf.d/php.ini
RUN apk add --update \
bash \
openssh-client \
supervisor
RUN mkdir -p /etc/nginx && \
mkdir -p /etc/nginx/sites-available && \
mkdir -p /etc/nginx/sites-enabled && \
mkdir -p /run/nginx && \
ln -s /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf && \
mkdir -p /var/log/supervisor && \
rm -Rf /var/www/* && \
chmod 755 /start.sh
RUN sed -i -e "s/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g" \
-e "s/variables_order = \"GPCS\"/variables_order = \"EGPCS\"/g" \
/etc/php7/php.ini && \
sed -i -e "s/;daemonize\s*=\s*yes/daemonize = no/g" \
-e "s/;catch_workers_output\s*=\s*yes/catch_workers_output = yes/g" \
-e "s/user = nobody/user = nginx/g" \
-e "s/group = nobody/group = nginx/g" \
-e "s/;listen.mode = 0660/listen.mode = 0666/g" \
-e "s/;listen.owner = nobody/listen.owner = nginx/g" \
-e "s/;listen.group = nobody/listen.group = nginx/g" \
-e "s/listen = 127.0.0.1:9000/listen = \/var\/run\/php-fpm.sock/g" \
-e "s/^;clear_env = no$/clear_env = no/" \
/etc/php7/php-fpm.d/www.conf
EXPOSE 443 80
WORKDIR /var/www
CMD ["/start.sh"]
start.sh
#!/bin/bash
# ----------------------------------------------------------------------
# Create the .env file if it does not exist.
# ----------------------------------------------------------------------
if [[ ! -f "/var/www/.env" ]] && [[ -f "/var/www/.env.example" ]];
then
cp /var/www/.env.example /var/www/.env
fi
# ----------------------------------------------------------------------
# Run Composer
# ----------------------------------------------------------------------
if [[ ! -d "/var/www/vendor" ]];
then
cd /var/www
composer update
composer dump-autoload -o
fi
# ----------------------------------------------------------------------
# Start supervisord
# ----------------------------------------------------------------------
exec /usr/bin/supervisord -n -c /etc/supervisord.conf
site.conf
server {
    listen 80;
    root /var/www/public;
    index index.php index.html;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ /\. {
        deny all;
    }
    location ~ \.php$ {
        try_files $uri = 404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/sites-enabled/*.conf;
}
supervisord.conf
[unix_http_server]
file=/dev/shm/supervisor.sock
[supervisord]
logfile=/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=warn
pidfile=/tmp/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
user=root
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///dev/shm/supervisor.sock
[program:php-fpm7]
command = /usr/sbin/php-fpm7 --nodaemonize --fpm-config /etc/php7/php-fpm.d/www.conf
autostart=true
autorestart=true
priority=5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
priority=10
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
As mentioned in the comments above, I just forgot to add the -v parameter in the docker run command, like so:
docker run -p 8000:80 -v $PWD/src:/var/www --name="my-container" my-image
... with $PWD/src being the full path to the src directory.
