Docker: use multiple nginx servers with a single php instance - php

I have several websites using PHP, and each website runs in a different Docker container.
Normally, outside of Docker, all the websites would use the same PHP installation.
How can that be achieved with Docker?
This is my current setup; my single nginx container uses this php-fpm service (php-fpm.dockerfile only installs php-mysql):
php:
  env_file: ".env"
  build:
    context: .
    dockerfile: php-fpm.dockerfile
  volumes:
    - "./server/www/:/var/www/:cached"
I want to add another nginx server using this php instance; is that possible?
This is my entire relevant yml file. The nginx_server instance is using php; how can I tell the php instance to process requests for the nginx_site instance as well? (See the sketch at the end of this question.)
version: "3.5"
nginx_server:
image: nginx
env_file: ".env"
links:
- php
- mysql
ports:
- "8081:80"
restart:
always
volumes :
- "./server/nginx/default.conf.php.template:/etc/nginx/conf.d/default.conf.template"
- "./server/logs/:/var/log/nginx/:cached"
- "./server/www/:/var/www/:cached"
command : /bin/bash -c "envsubst '$$NGINX_HOST,$$NGINX_ROOT,$$NGINX_INDEX_FILE,$$NGINX_PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
environment:
- NGINX_HOST=$NGINX_SERVER_HOST
- NGINX_ROOT=$NGINX_SERVER_ROOT/public
- NGINX_INDEX_FILE=index.php
- NGINX_PORT=8081
nginx_site:
image: nginx
env_file: ".env"
links:
- php
- mysql
ports:
- "8082:80"
restart:
always
volumes :
- "./site/nginx/default.conf.php.template:/etc/nginx/conf.d/default.conf.template"
- "./site/logs/:/var/log/nginx/:cached"
- "./site/www/:/var/www/:cached"
command : /bin/bash -c "envsubst '$$NGINX_HOST,$$NGINX_ROOT,$$NGINX_INDEX_FILE,$$NGINX_PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
environment:
- NGINX_HOST=$NGINX_SITE_HOST
- NGINX_ROOT=/var/www/
- NGINX_INDEX_FILE=index.php
- NGINX_PORT=8082
php:
env_file: ".env"
build :
context : .
dockerfile : php-fpm.dockerfile
volumes:
- "./server/www/:/var/www/:cached"
thanks
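For reference, a minimal sketch of one way the sharing could work; everything beyond the names already used above is an assumption. Both nginx templates would point their FastCGI upstream at the same service, e.g. fastcgi_pass php:9000; (php-fpm's default port), and the php container would additionally mount the second site's files so it can find the scripts nginx asks it to run:

php:
  env_file: ".env"
  build:
    context: .
    dockerfile: php-fpm.dockerfile
  volumes:
    # files served by nginx_server (as in the current setup)
    - "./server/www/:/var/www/:cached"
    # files served by nginx_site, at a hypothetical second path
    - "./site/www/:/var/www/site/:cached"

The usual convention is to mount each site's code at the same path in the nginx container and in the php container, so the root nginx passes in SCRIPT_FILENAME resolves identically in both; with the sketch above, nginx_site's NGINX_ROOT would have to be /var/www/site/ rather than /var/www/.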

Related

Can't connect to MYSQL container from Drupal container through drush

I have been trying to figure out this problem for a long time now and I have to admit that I'm out of ideas.
Here is the case:
docker-compose.yml
version: "3"
services:
mysql:
image: mysql:8.0
container_name: mysql
hostname: mysql
command: --default-authentication-plugin=mysql_native_password
restart: unless-stopped
env_file: .env
volumes:
- db-data:/var/lib/mysql
networks:
- internal
drupal:
build:
context: .
dockerfile: Dockerfile
container_name: drupal
depends_on:
- mysql
restart: unless-stopped
volumes:
- ./web:/var/www/html/web:rw
- ./vendor:/var/www/html/vendor:rw
- ./drush:/var/www/html/drush
- drupal-data:/var/www/html
- ./php-conf/php.ini:/usr/local/etc/php/php.ini
networks:
- internal
- external
webserver:
image: nginx:1.19.1-alpine
container_name: webserver
depends_on:
- drupal
restart: unless-stopped
ports:
- 80:80
- 443:443
volumes:
- drupal-data:/var/www/html
- ./web:/var/www/html/web:rw
- ./nginx-conf/snippets/self-signed.conf:/etc/nginx/conf.d/snippets/self-signed.conf
- ./nginx-conf/snippets/ssl-params.conf:/etc/nginx/conf.d/snippets/ssl-params.conf
- ./nginx-conf/sites-available/default.conf:/etc/nginx/conf.d/default.conf
- ./ca-certificates/qiminfo-docker.dev+4.pem:/etc/ssl/certs/qiminfo-docker.dev+4.pem
- ./ca-certificates/qiminfo-docker.dev+4-key.pem:/etc/ssl/private/qiminfo-docker.dev+4-key.pem
networks:
- external
networks:
external:
driver: bridge
internal:
driver: bridge
volumes:
drupal-data:
db-data:
Dockerfile (only for drupal container)
FROM drupal:8.9.2-fpm-alpine
RUN apk add mysql-client && apk add openssh
I manage all dependencies via the mounted files in volumes (it works nicely, by the way), but when I run drush from my host machine or from inside the container, it doesn't see the Drupal instance (neither root nor database) and I get the classic error message for this case ...
In BootstrapHook.php line 32:
[Exception]
Bootstrap failed. Run your command with -vvv for more information.
However, I use the Dockerfile to get mysql-client installed in the drupal container and to allow connections from the drupal container to the mysql container. And when I try to connect to the database in the mysql container (which has the hostname 'mysql'), it works!
➜ docker exec -it drupal sh
/var/www/html # mysql -u drupal -p drupal -h mysql
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 8.0.21 MySQL Community Server - GPL
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [drupal]>
The question is:
Why doesn't drush have access to the mysql server container while the mysql client does the job fine?!
Please help T-T
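One detail worth showing, since drush bootstraps Drupal through the site's settings.php rather than through the mysql client: the database host configured there has to be reachable from wherever drush runs. A minimal sketch of the relevant block, with placeholder credentials and the mysql service name from the compose file above:

<?php
// sites/default/settings.php (sketch; credentials are placeholders)
$databases['default']['default'] = [
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'drupal',
  // the compose service name; 'localhost' or '127.0.0.1' points at the
  // drupal container itself, not at the mysql container
  'host' => 'mysql',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];

Note also that the hostname mysql only exists on the compose network, so drush run on the host machine cannot resolve it at all; running it inside the container (e.g. docker exec -it drupal vendor/bin/drush status -vvv, assuming drush is installed via Composer) is the closer test.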

Why is Docker not syncing files inside the container on Windows 10?

I have an issue that appeared after the last Docker update (so it seems) on Windows 10 (local development). When I change files in PhpStorm (and in other editors - Sublime, Notepad++), after a while the files inside the container stop receiving the changes.
Steps that help for a while:
If I completely shut down all containers and then bring them up again: docker-compose down && docker-compose up
If I get into the php-fpm container and run touch file.php on a file that was not updated (that file is immediately updated).
What I tried that didn't help:
I restarted the php-fpm and nginx containers with docker-compose restart php-fpm nginx (yes, it's strange, because down/up for all containers helps)
I changed the PhpStorm setting "Use safe write (save changes to a temporary file first)"
I also checked the inode of the file inside the container with ls -lai file.php. Before the changes worked and after they broke I had the same inode number. There is no fixed number of changes needed to break the syncing; it's random, sometimes 2 changes are enough.
I have:
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.2, build 698e2846
docker-compose.yml
version: '3'
services:
  nginx:
    container_name: pr_kpi-nginx
    build:
      context: ./
      dockerfile: docker/nginx.docker
    volumes:
      - ./:/var/www/kpi
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/fastcgi.conf:/etc/nginx/fastcgi.conf
    ports:
      - "8081:80"
    links:
      - php-fpm
    networks:
      - internal
  php-fpm:
    container_name: pr_kpi-php-fpm
    build:
      context: ./
      dockerfile: docker/php-fpm.docker
    volumes:
      - ./:/var/www/kpi
    links:
      - kpi-mysql
    environment:
      # 192.168.221.1 -> host.docker.internal for Mac and Windows
      XDEBUG_CONFIG: "remote_host=host.docker.internal remote_enable=1"
      PHP_IDE_CONFIG: "serverName=Docker"
    networks:
      - internal
  mailhog:
    container_name: pr_kpi-mailhog
    image: mailhog/mailhog
    restart: always
    ports:
      # smtp
      - "1025:1025"
      # http
      - "8025:8025"
    networks:
      - internal
  kpi-mysql:
    container_name: pr_kpi-kpi-mysql
    image: mysql:5.7
    command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    volumes:
      - ./docker/storage/kpi-mysql:/var/lib/mysql
    environment:
      # We must change prod secrets, this is not good approach
      - "MYSQL_ROOT_PASSWORD=pass"
      - "MYSQL_USER=user"
      - "MYSQL_PASSWORD=user_pass"
      - "MYSQL_DATABASE=kpi_db"
    ports:
      - "33061:3306"
    networks:
      - internal
  kpi-npm:
    container_name: pr_kpi-npm
    build:
      context: ./
      dockerfile: docker/npm.docker
    volumes:
      - ./:/var/www/kpi
      - /var/www/kpi/admin/node_modules
    ports:
      - "4200:4200"
    networks:
      - internal
    tty: true
# For xdebug
networks:
  internal:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.221.0/28
P.S. There is an open issue:
https://github.com/docker/for-win/issues/5530
P.P.S. We needed to update Docker from 2.2.0.0 to 2.2.0.3; it seems that fixed it.
I have a separate container for syncing my folder:
app:
  image: httpd:2.4.38
  volumes:
    - ./:/var/www/html
  command: "echo true"
I just use the basic Apache image, though you could use anything really. Then in my actual containers, I use the following volumes_from key:
awesome.scot:
  build: ./build/httpd
  links:
    - php
  ports:
    - 80:80
    - 443:443
  volumes_from:
    - app
php:
  build: ./build/php
  ports:
    - 9000
    - 9001
  volumes_from:
    - app
  links:
    - mariadb
    - mail
  environment:
    APPLICATION_ENV: 'development'
I've never had an issue using this set up, files always sync fast, and I have tested both on Mac OSX and MS Windows.
If you're interested, here is my full LAMP stack on Github https://github.com/delboy1978uk/lamp
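One caveat to add, since the question's file declares version: '3': volumes_from was removed from the version 3 Compose file format, so the helper-container pattern above only applies to version 2 (or version-less) files. Under version 3 the rough equivalent is simply to repeat the same bind mount (or a named volume) on every service that needs it, for example:

version: '3'
services:
  webserver:
    image: httpd:2.4.38
    volumes:
      - ./:/var/www/html
  php:
    # hypothetical image, just for illustration
    image: php:7.3-fpm
    volumes:
      - ./:/var/www/html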
I have had the same issue on Windows 10 since 31st Jan.
I commented out a line in PhpStorm and checked the file in the container using vim.
The changes were not there.
If I run docker-compose down and up, the changes appear in the container.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
Nothing changed in my docker-compose.yml since 2018.

NodeJS Docker-compose, Cannot link to a non-running container

I have this docker stack running nginx, php, and mariadb.
Now I want to add NodeJS in order to migrate my platform to services, one at a time.
I'm trying to add node in the same way I added php, but when I run "docker-compose up", nodejs shows "Exited with code 254".
It appears to be an ENOENT error related to my package.json file, but I'm already copying my file into the container. Also, if I run everything except nodejs and access my nginx container, the package.json appears in the path where it is meant to be, so I don't know why I'm seeing this error.
Here is my docker-compose.yml
nginx:
  image: tutum/nginx
  ports:
    - "80:80"
    - "8080:8080"
    # - "8081:8081"
    # - "8082:8082"
    # - "8083:8083"
    # - "8090:8090"
  links:
    - nodejs
    - phpfpm
    - mariadb
  volumes:
    - ./public/leal-api:/var/www
    - ./nginx/default:/etc/nginx/sites-available/default
    - ./nginx/default:/etc/nginx/sites-enabled/default
    - ./public:/usr/share/nginx/html
nodejs:
  command: npm start
  build: ./node
  ports:
    - "3000:3000"
  links:
    - mariadb
  volumes:
    - ./public/leal-api:/var/www
phpfpm:
  build: ./php
  ports:
    - "9000:9000"
  links:
    - mariadb
  volumes:
    - ./public:/usr/share/nginx/html
mariadb:
  image: mariadb
  environment:
    MYSQL_DATABASE: leal
    MYSQL_USER: lealadm
    MYSQL_PASSWORD: leal2015*
    MYSQL_ROOT_PASSWORD: LealColombia2017!
  ports:
    - "3306:3306"
And here is the Dockerfile for the nodejs service:
FROM node:7.10
VOLUME ["/var/www"]
# WORKDIR /src
WORKDIR /var/www
# COPY . /src
# RUN npm install
# RUN npm install -g nodemon #hmm idk
EXPOSE 3000
CMD ["npm", "start"]
Thanks in advance for any help that you can provide.
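Since the npm start script isn't shown, this is only a guess, but one thing worth ruling out is the bind mount: ./public/leal-api:/var/www is mounted over the container's /var/www, so at runtime the container only sees what is in that host folder, and anything the image would have provided there (for example node_modules from a RUN npm install) is hidden. A sketch of the alternative, baking the app into the image and dropping that volume entry, assuming the build context is changed to the folder that actually contains package.json:

FROM node:7.10
WORKDIR /var/www
# copy the manifest first so `npm install` is cached between builds
COPY package.json /var/www/
RUN npm install
# copy the rest of the application code into the image
COPY . /var/www
EXPOSE 3000
CMD ["npm", "start"]

With this layout the nodejs service would keep only build and ports; leaving the ./public/leal-api:/var/www volume in place would shadow everything copied above.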

How to run linux daemon in container linked to another container with docker-compose?

I have the following docker-compose.yml file which runs nginx with PHP support:
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
The PHP application inside /var/www/my-app needs to communicate with a linux daemon (let's call it myappd).
The way I see it, I need to either:
Copy the myappd into the nginx container to /usr/local/bin, make it executable with chmod +x and run it in the background.
Create a different container, copy myappd to /usr/local/bin, make it executable with chmod +x and run it in the foreground.
Now, I'm new to Docker and I'm still researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one. Given my limited knowledge of Docker, I'd have to guess that this container would require some sort of Linux-based image (like Ubuntu or something) to run this binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or even without such an image?
Either way, I have no idea how I would implement that in the compose file. Especially for option 2, how would the PHP application communicate with the daemon in a different container? Would just "sharing" a volume (where the binary is located), like I did for the nginx/php services, suffice? Or is something else required?
The simple answer is to add a command entry to the php service in docker-compose.yml.
Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ inside the container, the updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    command: ["/bin/sh", "-c", "/var/www/my-app/myappd && php-fpm"]
The better answer is to create a third container which runs the linux daemon.
The new Dockerfile is something like the following.
FROM debian:jessie
COPY ./myappd /usr/src/app/
EXPOSE 44444
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/src/app/myappd"]
Build the image and name it myapp/myappd.
The updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - daemon
  daemon:
    container_name: my-app-daemon
    image: myapp/myappd
    restart: always
    networks:
      - network1
networks:
  network1:
You can send requests to the hostname daemon from inside php. A Docker container can resolve the hostname of another container on the same network.
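To make that last point concrete, here is a minimal sketch of what a call from PHP to the daemon could look like, assuming myappd listens for plain TCP on the port 44444 exposed in the Dockerfile above (the daemon's actual protocol is unknown, so the payload is purely illustrative):

<?php
// hypothetical client call from the php container to the daemon container;
// "daemon" resolves via Docker's embedded DNS because both services are on network1
$socket = fsockopen('daemon', 44444, $errno, $errstr, 5);
if ($socket === false) {
    die("Could not reach myappd: $errstr ($errno)\n");
}
fwrite($socket, "ping\n");  // whatever request myappd actually expects
echo fgets($socket);        // read one line of response
fclose($socket);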

How to use php from another docker container

In my app I have separate Docker containers for nginx, mysql, php and supervisor. But now I need to set up a supervisor program which runs a PHP script. Is it possible to call php from another container?
EDIT
Example:
When I run the supervisor program test, I see the error: INFO spawnerr: can't find command 'php'. I know that php is not installed in the supervisor container, but how do I call php from that container? And I need the same PHP as the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
nginx:
build: ./docker/nginx
ports:
- 8080:80
volumes:
- ./app:/var/www/html
links:
- php
- mysql
php:
build: ./docker/php
volumes:
- ./app:/var/www/html
ports:
- 9001:9001
mysql:
build: ./docker/mysql
ports:
- 3306:3306
volumes:
- ./data/mysql:/var/lib/mysql
supervisor:
build: ./docker/supervisor
volumes:
- ./app:/var/www/html
ports:
- 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
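One way to read this situation: supervisord can only spawn commands that exist inside its own container, so either supervisord has to run inside the php container, or the supervisor image needs a PHP CLI of its own. A sketch of the second option, assuming ./docker/supervisor/Dockerfile and an official php base image (the tag here is a placeholder; it should match whatever ./docker/php is built from):

# hypothetical ./docker/supervisor/Dockerfile
FROM php:7.2-cli
RUN apt-get update \
    && apt-get install -y --no-install-recommends supervisor \
    && rm -rf /var/lib/apt/lists/*
# the [program:test] config shown above
COPY supervisor.conf /etc/supervisor/conf.d/test.conf
CMD ["supervisord", "-n"]

The trade-off is two PHP installations to keep in sync; the usual alternative is to install supervisord in the existing php image and drop the separate supervisor service.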
Please check this repo on GitHub.
I used Angular, Laravel and Mongo,
with 3 containers: one each for mongo, php-fpm and nginx, the latter acting as a proxy to the API and to Angular.
Angular does not use a nodejs container because I build Angular with ng build, which puts the build output in the angular-dist folder.
The angular-src folder is the Angular source code.
Inside the laravel folder, run the command composer install; if you use Linux, use sudo chmod 777 -R laravel.
You can then see the result at the routes http://localhost:8000/api/ and http://localhost:8000/api/v1.0
