Laravel is suspiciously slow (fresh app, everything by default)

I work with Windows 10 (WSL 2).
My hardware is:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
RAM 8.00GB
SSD
Actually, this is a gaming laptop (MSI GL 65 95CK), if you are interested.
I decided to install Laravel, went to the documentation, and followed the described steps in the WSL terminal (I use Ubuntu):
curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app && ./vendor/bin/sail up
I went to the browser and realized that the main page took almost one second to render! Sometimes even two seconds!
I thought, "OK, maybe the framework is running in a non-optimized mode, debug and so on," and decided to set APP_DEBUG in .env to false. I also removed all routes and put this instead:
Route::get('/', [\App\Http\Controllers\TestController::class, 'test']);
Before this, I created the TestController:
namespace App\Http\Controllers;

class TestController extends Controller
{
    public function test()
    {
        return response()->json([
            'name' => 'Abigail',
            'state' => 'CA',
        ]);
    }
}
Then I ran php artisan optimize and opened http://localhost/api in the browser,
and the result was a big sorrow:
Why 800ms? I did not do anything.
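For reference, on recent Laravel versions php artisan optimize is roughly a shortcut for priming the framework caches, something like the following (the exact set of caches varies by version):

php artisan config:cache   # compile all config files into one cached file
php artisan route:cache    # serialize the route table so routes aren't rebuilt per request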
OK, I decided to just rename the index.php file in the public folder to, for example, index2, and put in a new index.php that just prints an array, to test whether this is a Laravel problem or just an infrastructure issue.
New index.php:
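A sketch of what that replacement file amounts to (plain PHP printing an array, no framework bootstrap; the exact array is illustrative):

<?php
// raw PHP response, bypassing Laravel entirely
print_r([
    'name' => 'Abigail',
    'state' => 'CA',
]);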
Much better!
Then I thought, "let's compare with another framework, for example with .NET Core". And I made a very simple Web Api project.
Controller:
namespace MockWebApi.Controllers
{
    [ApiController]
    [Route("")]
    public class MainController : ControllerBase
    {
        [Route("test")]
        public IActionResult Test()
        {
            return Ok(new
            {
                Test = "hello world!!!"
            });
        }
    }
}
The result is:
OK, you can argue that this is a compiled language, so I decided to check with Node.js and Express:
Code:
router.get('/', function (req, res, next) {
    res.json({"test": "123"});
});
Result:
As you can see, Node is as fast as C# in this case.
So, what is wrong with Laravel? Did I miss something in the installation?
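If you want to reproduce the comparison consistently rather than eyeballing browser timings, here is a sketch using curl's built-in timing output (the URL is the Laravel endpoint from above; swap it for the other stacks):

# print the total request time, in seconds, for five consecutive requests
for i in 1 2 3 4 5; do
    curl -o /dev/null -s -w '%{time_total}\n' http://localhost/api
done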
UPDATE
I brought Laravel up without Sail. My docker-compose file:
version: '3'
services:
  php-fpm:
    build:
      context: docker/php-fpm
    volumes:
      - ./:/var/www
    networks:
      - internal
  nginx:
    build:
      context: docker/nginx
    volumes:
      - ./:/var/www
    ports:
      - "80:80"
    depends_on:
      - php-fpm
    networks:
      - internal
networks:
  internal:
    driver: bridge
Nginx Dockerfile:
FROM nginx
ADD ./default.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
Nginx config:
server {
    listen 80;
    index index.php;
    server_name 127.0.0.1 localhost;
    root /var/www/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_read_timeout 1000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
php-fpm Dockerfile:
FROM php:7.4-fpm

RUN apt-get update && apt-get install -y wget git unzip \
    && apt-get install libpq-dev -y

RUN wget https://getcomposer.org/installer -O - -q \
    | php -- --install-dir=/bin --filename=composer --quiet

RUN groupadd -r -g 1000 developer && useradd -r -u 1000 -g developer developer
USER developer

WORKDIR /var/www
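One observation about this image (my addition, not something from the original post): the stock php:7.4-fpm image ships with opcache disabled, and a Laravel request compiles hundreds of PHP files, so that alone can cost a lot of milliseconds. A hedged sketch of enabling it; the ini values are illustrative, not tuned:

# in docker/php-fpm/Dockerfile, before the USER developer line (build steps need root)
RUN docker-php-ext-install opcache \
    && { \
        echo 'opcache.enable=1'; \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.max_accelerated_files=10000'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini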
Sadly, I did not get any performance improvement.

Related

How to connect php and nginx containers together

I'm trying to create a simple docker project and connect PHP and Nginx containers for a test, but I got this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
- docker
  - nginx
    - default.conf
    - Dockerfile
  - php
    - Dockerfile
- src
  - index.php
- docker-compose.yml
and these are all the files and their contents I use:
# docker/nginx/default.conf
server {
    listen 80;
    index index.php index.htm index.html;
    root /var/www/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"
services:
nginx:
container_name: nginx
build: ./docker/nginx
command: nginx -g "daemon off;"
links:
- php
ports:
- "80:80"
volumes:
- ./src:/var/www/html
php:
container_name: php
build: ./docker/php
ports:
- "9000:9000"
volumes:
- ./src:/var/www/html
working_dir: /var/www/html
The main problem occurs when I add the PHP container to the project; without PHP, Nginx works correctly.
You can try adding depends_on: php to your nginx service, to at least try to make sure the nginx service doesn't start until the php service is running. Probably the dependency is starting after the main service that requires it. This is a race condition problem, I think.
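A minimal sketch of that change, reusing the service names from the question:

services:
  nginx:
    build: ./docker/nginx
    depends_on:
      - php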
I had 3 nodes, where nginx and php containers lived on different nodes.
After trying various methods, such as:
defining a dedicated network for the services inside docker-compose,
using an upstream definition in the nginx config instead of the service name directly,
explicitly adding docker's 127.0.0.11 resolver to nginx,
none of them worked...
And the reason actually was closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened the ports for inter-node communication, service discovery began to work as expected.
Docker 20.10
Several issues:
It seems you have containers that can't see each other.
It seems containers exit/fail, and that is certainly not because of the first issue; nginx would still work if the php-fpm socket were unavailable. It might return errors, but it should handle such unavailability very well.
Make sure php-fpm is really configured to open a socket on port 9000.
Your index.php script is not closed with "?>" [but that does not matter here].
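One concrete thing to check on that third point (an inference from the log above, where the php container printed "Interactive shell" and exited with code 0): php:latest is the CLI image and has no long-running server process, so the php service needs an FPM base that actually listens on port 9000. A sketch:

# docker/php/Dockerfile
FROM php:fpm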
To summarize, you were advised:
to consider docker swarm networking configuration [but it seems you are not using docker swarm];
to use depends_on, which helps docker decide what to start first, but that should not be an issue in your case: nginx can wait, since it only uses the socket upon web user requests.
So it seems the internal docker name resolution is your issue, and defining the network manually seems to be best practice. In my case I wandered too long before just giving the docker-compose file a specific network name and attaching the containers to that network.
If containers are in the same docker-compose file, they should be in the same yourserver_default network that is auto-generated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, where they actually define that network manually.
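A sketch of what that manual network definition could look like in the question's compose file (the network name app-net is illustrative):

version: "3.8"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - "80:80"
    networks:
      - app-net
  php:
    build: ./docker/php
    networks:
      - app-net
networks:
  app-net:
    driver: bridge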
And if you haven't solved this yet, eventually just redo everything from scratch. Otherwise, all the best to you!

Docker, Symfony: nginx/php-fpm initializes very slowly

Using this project/Docker setup:
https://gitlab.com/martinpham/symfony-5-docker
When I do docker-compose up -d, I have to wait about 2-3 minutes before it actually works.
Before it loads, it gives me "502 Bad Gateway" and logs this error:
2020/05/10 09:22:23 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.28.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.28.0.3:9000", host: "localhost"
Why is nginx, or php-fpm, or something else, loading so slowly?
It's my first time using nginx and Symfony. Is this normal? I expected it to load in one or two seconds at most, not 2-3 minutes.
Yes, I have seen similar issues, but no solutions appropriate for my case.
Some nginx/php-fpm/docker-compose configuration should probably be changed; I tried, but no luck.
I modified nginx/sites/default.conf a little bit (just added xdebug stuff):
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 4 256k;
        fastcgi_buffer_size 128k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # !!!! fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
        fastcgi_param PHP_VALUE "xdebug.remote_autostart=1
            xdebug.idekey=PHPSTORM
            xdebug.remote_enable=1
            xdebug.remote_port=9001
            xdebug.remote_host=192.168.0.12";
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx/conf.d/default.conf:
upstream php-upstream {
    server php-fpm:9000;
}
docker-compose.yml:
version: '3'
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - TIMEZONE=Europe/Tallinn
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ../src:/var/www
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
EDIT:
I think I now know why your project is taking ages to start. I had a closer look at the Dockerfile in the php-fpm folder, and you have this command:
CMD composer install ; wait-for-it database:3306 -- bin/console doctrine:migrations:migrate ; php-fpm
As you can see, that command will install all composer dependencies and then wait until it can connect to the database container defined in the docker-compose.yml configuration:
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
Once the database is up and running, it will run the migration files in src/src/Migrations to update the database and then start php-fpm.
Until all of this is done, your project won't be ready, and you will get the nice "502 Bad Gateway" error.
You can verify this and see what's happening by running docker-compose up, omitting the -d argument this time so that you don't run in detached mode; this will display all your container logs in real time.
You will see a bunch of logs, including the ones related to what composer is doing in the background, ex:
api-app | - Installing ocramius/package-versions (1.8.0): Downloading (100%)
api-app | - Installing symfony/flex (v1.6.3): Downloading (100%)
api-app |
api-app | Prefetching 141 packages
api-app | - Downloading (100%)
api-app |
api-app | - Installing symfony/polyfill-php73 (v1.16.0): Loading from cache
api-app | - Installing symfony/polyfill-mbstring (v1.16.0): Loading from cache
Composer install might take more or less time depending on whether you have all the repositories cached or not.
The solution here, if you want to speed things up during development, would be to remove the composer install command from the Dockerfile and run it manually, only when you want to update or install new dependencies. This way, you avoid composer install being run every time you run docker-compose up -d.
To do it manually, you would need to connect to your container and run composer install there, or, if you have composer installed directly in your OS, you could simply navigate to the src folder and run the same command.
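For example, a sketch of both options (the service name php-fpm is taken from the compose file above; the exec variant assumes the container's working directory is the app root):

# run composer inside the already-running php-fpm container
docker-compose exec php-fpm composer install

# or, with composer installed on the host
cd src && composer install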
This, plus the trick below, should give you a nice and fast enough project locally.
I have a similar configuration and everything works well. The docker-compose command should take some time the first time you run it, as your images need to be built, but after that it shouldn't even take a second to run.
From what I see, however, you have a lot of mounted volumes that could affect performance. When I ran tests with nginx and Symfony on a Mac, I had really bad performance at the beginning, with pages taking at least 30 seconds to load.
One solution to speed this up in my case was to use the :delegated option on some of my volumes to speed up their access.
Try adding that option to your volumes and see if it changes anything for you:
[...]
volumes:
  - ../src:/var/www:delegated
[...]
If delegated is not a good option for you, read more about the other options, consistent and cached, here to see what best fits your needs.

Is it possible to have PHP + MongoDB + Nginx in the same Dockerfile?

I need to have the 3 services running in only one docker container. I have this "system" working with docker-compose, but due to some limitations I need to have it in only one Dockerfile. The application that I want to run is PHP (with Symfony) + MongoDB.
Right now, I'm executing this command:
sudo docker run -p 9091:80 -p 27017:27017 myapp
And the most I get is a 502 error when I browse to localhost:9091.
Thanks
Dockerfile:
FROM php:7.3-fpm-stretch
COPY --from=composer /usr/bin/composer /usr/bin/composer
RUN apt update
RUN apt install -y \
    nginx \
    mongodb
COPY nginx.conf /etc/nginx/nginx.conf
RUN pecl install mongodb
RUN docker-php-ext-enable mongodb
RUN service nginx start
RUN service mongodb start
EXPOSE 9091
EXPOSE 27017
COPY entrypoint.sh /
WORKDIR /usr/www
ENTRYPOINT ["/entrypoint.sh"]
COPY . /usr/www
entrypoint.sh
#!/bin/bash
set -e -u
service nginx start
service mongodb start
tail -f /dev/null
nginx.conf
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;
        root /usr/www/public;

        location / {
            try_files $uri /index.php$is_args$args;
        }

        location ~ ^/index\.php(/|$) {
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT $realpath_root;
        }

        error_log /dev/stderr debug;
        access_log /dev/stdout;
    }
}
Yes, it is possible, but it is not the recommended approach in the context of docker.
Limiting each container to one process is a good rule of thumb.
Decouple applications
Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
So it is better to use docker-compose, launch three containers, and use the docker network for communication.
It would look more or less like this, one container per service, wired together with docker-compose.
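A sketch of that layout (service names and images are illustrative, not from the original answer):

version: "3.8"
services:
  php:
    build: .                 # the php-fpm image from the question, minus nginx/mongodb
    volumes:
      - ./:/usr/www
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./:/usr/www
    ports:
      - "9091:80"
    depends_on:
      - php
  mongo:
    image: mongo
    ports:
      - "27017:27017"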
As for the issue in your current setup: as mentioned in the comment, you are not starting PHP, so it is better to check the logs of the container.
The problem with your Dockerfile is that you are not starting php-fpm. You can use supervisord to manage multiple services in a single container; this document explains that use case very clearly.
You can also refer to this example for a complete supervisord conf for nginx, php-fpm and mysql.
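For orientation, a minimal supervisord.conf sketch covering the nginx and php-fpm part (mongodb would get a third program block; command paths assume the Debian-based image above):

[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm --nodaemonize
autorestart=true

[program:nginx]
command=nginx -g 'daemon off;'
autorestart=true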
As a matter of opinion, it is inadvisable to run multiple services in a single container.

Connect to MariaDB with localhost from Docker container

First off, I did read those links:
Connect to Docker MySQL container from localhost?
Connect to Mysql on localhost from docker container
From inside of a Docker container, how do I connect to the localhost of the machine?
But as a beginner with docker, they did not help me.
What you need to know:
Yes, I need localhost. I'm working on an app that interacts directly with the database. It creates/removes user privileges and allows some users to connect with limited privileges from remote access. When initialized, the app will drop the default remote access for the root and forge users and grant them full privileges on localhost.
I'm using a docker-compose.yml generated by https://phpdocker.io
Ubuntu 18.10
Docker version 18.09.3, build 774a1f4
docker-compose version 1.21.0, build unknown
I'm using docker only for development purpose. On production I use forge
./docker-compose.yml
###############################################################################
# Generated on phpdocker.io                                                   #
###############################################################################
version: "3.1"
services:
  mailhog:
    image: mailhog/mailhog:latest
    container_name: myapp-mailhog
    ports:
      - "8081:8025"
  redis:
    image: redis:alpine
    container_name: myapp-redis
  mariadb:
    image: mariadb:10.4
    container_name: myapp-mariadb
    working_dir: /application
    volumes:
      - .:/application
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=forge
      - MYSQL_PASSWORD=forge
    ports:
      - "8083:3306"
  webserver:
    image: nginx:alpine
    container_name: myapp-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
  php-fpm:
    build: phpdocker/php-fpm
    container_name: myapp-php-fpm
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
./phpdocker/nginx/nginx.conf
server {
    listen 80 default;
    client_max_body_size 108M;
    access_log /var/log/nginx/application.access.log;
    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
./phpdocker/php-fpm/Dockerfile (slightly modified to add mysql-client and to avoid installing git in a second RUN command)
FROM phpdockerio/php73-fpm:latest
WORKDIR "/application"

# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive

# Install selected extensions and other stuff
RUN apt-get update \
    && apt-get -y --no-install-recommends install \
        php7.3-mysql php-redis php7.3-sqlite3 php-xdebug php7.3-bcmath php7.3-bz2 php7.3-gd \
        git \
        mysql-client \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
./php-ini-overrides.ini
upload_max_filesize = 100M
post_max_size = 108M
I tried to use network_mode: host, but it makes the webserver stop with Exit 1.
OK, but as I remember it, localhost in mysql/mariadb means access through the local unix socket. There are ways of sharing these between containers.
Have a look here: Connection between docker containers via UNIX sockets
@F.Maden gave me the right direction. I accepted his answer, but here's how I did it in detail.
Basically, as he said, I need to share mysqld.sock between my mariadb and php-fpm services.
The first step is to share a folder between both services. Since I already have /application, which contains the docker config /application/phpdocker, I will reuse it.
I had to create a custom my.cnf file to override the default mariadb configuration and add a custom socket path:
./phpdocker/mariadb/my.cnf
[mysql]
socket = /application/phpdocker/shared/mysqld.sock

[mysqld]
socket = /application/phpdocker/shared/mysqld.sock
Then I had to share the config file with my mariadb container
./docker-compose.yml
mariadb:
  image: mariadb:10.4
  container_name: myapp-mariadb
  working_dir: /application
  volumes:
    - .:/application
    - ./phpdocker/mariadb/my.cnf:/etc/mysql/my.cnf # notice this line
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=myapp
    - MYSQL_USER=forge
    - MYSQL_PASSWORD=forge
  ports:
    - "8083:3306"
I created a folder ./phpdocker/shared with permissions 777, where mariadb will be able to create mysqld.sock (I couldn't start mariadb with 755; in my case this is only used locally, not in production, so it's fine).
From the terminal
$ mkdir ./phpdocker/shared && chmod 777 ./phpdocker/shared
And now test it!
From the terminal
$ docker-compose up -d --force-recreate --build
$ docker exec -it -u $(id -u):$(id -g) myapp-php-fpm /bin/bash
Once in the container
$ mysql -u root -p -h localhost --socket=/application/phpdocker/shared/mysqld.sock
$ mysql > select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
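To have the application itself connect through that socket, the PHP side would look something like this sketch (the DSN path and credentials are taken from the setup above):

<?php
// connect via the shared unix socket instead of TCP, so MariaDB sees "localhost"
$pdo = new PDO(
    'mysql:unix_socket=/application/phpdocker/shared/mysqld.sock;dbname=myapp',
    'root',
    'root'
);
echo $pdo->query('SELECT USER()')->fetchColumn(); // root@localhost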
If the problem with the connection to the DB persists:
We can ask docker what IP the DB container has:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
We'll receive the container IP (e.g. 172.21.0.3);
after this, we can place this IP into the "host" section of the connection.
Enjoy!
Ref: How can I access my docker maria db?

Docker, Nginx, PHP-FPM: problems with connection

I've taken on a project built on Docker containers, and I'm having trouble getting it to run smoothly.
My containers build successfully, but when I try to get to the website, nginx gives me a 502 with this error in the logs:
connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.6:9000", host: "localhost:2000"
Which, from what I've read, would be a problem with the link between my two containers.
I've tried changing the listen parameter of php-fpm directly to 0.0.0.0:9000, as seen in Nginx+PHP-FPM: connection refused while connecting to upstream (502), but this gave rise to a new error I don't fully understand either:
*11 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.17.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://172.17.0.6:9000", host: "localhost:2000"
Does anyone have an idea of what is failing and how to fix it?
The docker-compose part regarding these two services is:
elinoi-webserver:
  build: .
  dockerfile: docker/Dockerfile.nginx.conf
  container_name: elinoi-webserver
  volumes:
    - .:/var/www/elinoi.com
  ports:
    - "2000:80"
  links:
    - elinoi-php-fpm
elinoi-php-fpm:
  build: .
  dockerfile: docker/Dockerfile.php-fpm.conf
  container_name: elinoi-php-fpm
  volumes:
    - .:/var/www/elinoi.com
    - /var/docker_volumes/elinoi.com/shared:/var/www/elinoi.com/shared
  ports:
    - "22001:22"
  links:
    - elinoi-mailhog
    - elinoi-memcached
    - elinoi-mysql
    - elinoi-redis
The nginx conf file is:
server {
    listen 80 default;
    root /var/www/elinoi.com/current/web;

    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    try_files $uri @rewriteapp;

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    # Deny all . files
    location ~ /\. {
        deny all;
    }

    location ~ ^/(app|app_dev)\.php(/|$) {
        fastcgi_pass elinoi-php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index app.php;
        send_timeout 1800;
        fastcgi_read_timeout 1800;
    }

    # Statics
    location /(bundles|media) {
        access_log off;
        expires 30d;
        try_files $uri @rewriteapp;
    }
}
The Dockerfile for the elinoi-php-fpm service is:
FROM phpdockerio/php7-fpm:latest

# Install selected extensions
RUN apt-get update \
    && apt-get -y --no-install-recommends install php7.0-memcached php7.0-mysql php7.0-redis php7.0-gd php7.0-imagick php7.0-intl php7.0-xdebug php7.0-mbstring \
    && apt-get -y --no-install-recommends install nodejs npm nodejs-legacy vim ruby-full git build-essential libffi-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN npm install -g bower
RUN npm install -g less
RUN gem install sass

# If you're using symfony and the vagranted environment, I strongly recommend you change your AppKernel to use the following temporary folders
# for cache, logs and sessions, otherwise application performance may suffer due to these being shared over NFS back to the host
RUN mkdir -p "/tmp/elinoi/cache" \
    && mkdir -p "/tmp/elinoi/logs" \
    && mkdir -p "/tmp/elinoi/sessions" \
    && chown www-data:www-data -R "/tmp/elinoi"

RUN apt-get update \
    && apt-get -y --no-install-recommends install openssh-server \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22

ADD docker/.ssh /root/.ssh
RUN chmod 700 /root/.ssh/authorized_keys

CMD ["/usr/sbin/sshd", "-D"]
WORKDIR "/var/www/elinoi.com"
The Dockerfile for the elinoi-webserver is:
FROM smebberson/alpine-nginx:latest
COPY /docker/nginx.conf /etc/nginx/conf.d/default.conf
WORKDIR "/var/www/elinoi.com"
There can only be one CMD instruction in a Dockerfile. If you list
more than one CMD then only the last CMD will take effect.
The original Dockerfile ends with:
CMD /usr/bin/php-fpm
and the Dockerfile of the elinoi-php-fpm service ends with the following CMD layer:
CMD ["/usr/sbin/sshd", "-D"]
So, only sshd is started after the container is created; php-fpm is not started there.
That's why nginx constantly returns a 502 error: the php backend is not running at all.
You can fix your issue in the following ways:
1. Docker Alpine linux running 2 programs
2. Simply delete the sshd part from the elinoi-php-fpm service.
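If you do want to keep both sshd and php-fpm in that container, a minimal entrypoint sketch (the php-fpm path is the one from the original CMD; point the image's CMD at this script):

#!/bin/bash
set -e

# start the ssh daemon in the background
/usr/sbin/sshd

# keep php-fpm in the foreground as the container's main process
exec /usr/bin/php-fpm --nodaemonize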
