I am trying to set up a local environment for web development (LAMP stack) using Docker.
All websites live in a folder called Sites with this structure:
/Sites
-- site1.local
---- www
-- site2.local
---- www
Every website needs its own version of PHP and MySQL.
So far I have been able to run one local website with this docker-compose.yml (it uses php:7.1-apache):
version: "3"
services:
webserver:
build:
context: ./bin/webserver
container_name: 'sp-webserver'
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mysql
volumes:
- ${DOCUMENT_ROOT-./www}:/var/www/html
- ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
- ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
- ${LOG_DIR-./logs/apache2}:/var/log/apache2
mysql:
build: ./bin/mysql
container_name: 'sp-mysql'
restart: 'always'
ports:
- "3306:3306"
volumes:
- ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
- ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: sp-demo
MYSQL_USER: sp-demo
MYSQL_PASSWORD: sp-demo
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: 'rb-phpmyadmin'
links:
- mysql
environment:
PMA_HOST: mysql
PMA_PORT: 3306
ports:
- '8080:80'
volumes:
- /sessions
redis:
container_name: 'rb-redis'
image: redis:latest
ports:
- "6379:6379"
The website is available at http://localhost:80
Questions:
1) How do I make it accessible by a domain name, for example http://site1.local instead of http://localhost:80? I heard I need some Nginx proxy for this (https://github.com/jwilder/nginx-proxy), but I can't understand how to set it up with Apache in my case (a rough sketch of what I have in mind is below, after question 2).
2) How do I set up the second website (http://site2.local) the same way, so both run simultaneously? As far as I understand, I would need to change all the ports (80, 443 and 3306), otherwise I will get a conflict when I run docker-compose up -d. Is this possible without changing the ports?
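Something like this is roughly what I imagine for question 1, but I'm not sure it's correct (the service names and build paths here are made up, and only the proxy publishes port 80):

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # the proxy watches containers and routes requests by VIRTUAL_HOST
  site1-webserver:
    build: ./site1.local/bin/webserver             # hypothetical path
    environment:
      VIRTUAL_HOST: site1.local
  site2-webserver:
    build: ./site2.local/bin/webserver             # hypothetical path
    environment:
      VIRTUAL_HOST: site2.local

If I understand it correctly, site1.local and site2.local would also have to point at 127.0.0.1 in /etc/hosts, and the per-site MySQL containers would not need to publish port 3306 on the host at all, because each webserver reaches its own database by service name over the Compose network.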
Thanks for the answers!
How can a Docker container communicate with a local application that is not in Docker?
I have a Symfony (PHP) container and I want it to communicate with a Node.js application over a socket (so I need input and output on port 1337).
How can I make these applications communicate?
My docker-compose file:
version: "3.7"
volumes:
db-data:
networks:
dev:
driver: bridge
services:
mariadb:
container_name: symfony_mariadb
image: mariadb:10.9.3-jammy
restart: always
environment:
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_DATABASE: symfony
MYSQL_ROOT_PASSWORD: root
volumes:
- db-data:/var/lib/mysql
expose:
- 3306
ports:
- "3306:3306"
phpmyadmin:
container_name: phpmyadmin
depends_on:
- mariadb
restart: always
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: root
ports:
- "${PHPMYADMIN_PORT:-8081}:80"
redis:
container_name: redis
image: redis:7.0.5-alpine3.16
ports:
- "6379:6379"
volumes:
- ./data/redis:/data/redis
nginx:
build:
context: .docker/nginx
restart: on-failure
volumes:
- ./:/var/www/server:cached
- ./.docker/nginx/server.conf:/etc/nginx/conf.d/server.conf:cached
ports:
- "${NGINX_PORT:-8000}:80"
depends_on:
- php
- mariadb
- redis
php:
build:
context: .docker/php
restart: on-failure
ports:
- 5000:8000
volumes:
- ./:/var/www/server:cached
- ./:/var/www/symfony
- ./logs/symfony:/var/www/symfony/app/logs
- ./.docker/php/php.ini:/usr/local/etc/php/php.ini:ro
depends_on:
- mariadb
- redis
user: "${ID_USER:-1001}:${ID_USER:-1001}"
Normally you can reach a port on the host machine from inside Docker; have you tried using the host's IP?
Note: use the host IP that is on a different network segment from Docker's internal IPs, e.g. 192.168.1.28.
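For example, something along these lines might work (a rough sketch; host.docker.internal via host-gateway needs Docker Engine 20.10+ on Linux, while Docker Desktop provides the name out of the box):

services:
  php:
    build:
      context: .docker/php
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps a hostname to the host's gateway IP

The Symfony code could then reach the Node.js socket on the host at host.docker.internal:1337 instead of localhost:1337.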
I am working on a Laravel project where I successfully configured docker-compose with Laravel using official images (nginx, mysql, php, etc.).
The containers are working fine and even data persistence is working correctly. But now I want to connect the Compose setup to a remote database rather than using the MySQL container's database.
It could be the localhost database on my local system (for example in XAMPP), or it could be a remote AWS database. In simple words, Docker should pick up a database outside the container.
I have tried different solutions using the IP address and making changes to .env and docker-compose.yml, but I didn't find one that works.
Here are my default configurations for docker-compose.yml:
version: '3'
networks:
  laravel:
services:
  site:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    ports:
      - 81:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - php
      - mysql
      - phpmyadmin
    networks:
      - laravel
  mysql:
    image: mysql:5.7.29
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - 3307:3306
    environment:
      MYSQL_DATABASE: test_db
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
    volumes:
      - ./mysql:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    container_name: phpmyadmin
    depends_on:
      - mysql
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: secret
      UPLOAD_LIMIT: 1G
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html:delegated
    networks:
      - laravel
volumes:
  mysql:
And this is what my .env looks like:
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=test_db
DB_USERNAME=root
DB_PASSWORD=secret
As mentioned above it is working fine, but I am confused about how to integrate/configure my local database or a remote database (such as AWS) with docker-compose in Laravel. I don't want to ship my data with the MySQL image.
I would appreciate it if someone could help me understand what changes are required and where to make them.
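To make it concrete, this is roughly the kind of change I imagine (untested; host.docker.internal here is just a placeholder for my XAMPP database, and an AWS RDS endpoint would go in the same place):

# .env
DB_CONNECTION=mysql
DB_HOST=host.docker.internal   # or e.g. mydb.xxxxxx.us-east-1.rds.amazonaws.com (placeholder)
DB_PORT=3306
DB_DATABASE=test_db
DB_USERNAME=root
DB_PASSWORD=secret

# docker-compose.yml (php service only)
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets the container reach a database running on the host machine
    networks:
      - laravel

with the mysql service (and the depends_on: mysql entries) removed, since the database would no longer live in a container. I assume the XAMPP MySQL would also need to accept connections from outside localhost.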
Thanks
I am building an app wrapped in Docker that consists of a PHP backend ("API") and a Node frontend, united by NGINX: the PHP app is served via php-fpm and my Node app is served through the reverse proxy. NGINX exposes the phpMyAdmin app (phpmyadmin.test) and the "API" (api.php.test) for dev purposes, as well as the Node app (nodeapp.test).
The Node app's SSR ("Server-Side Rendering") needs to fetch some data from the API within the Docker network, and because domains such as api.php.test can't be resolved from within Docker, I have to make calls to the NGINX container, which serves the 3 domains mentioned above. So I either need to fake the 'Host' header to get the appropriate response from the API via NGINX, which leads to problems such as: Refused to set unsafe header "Host", Error: unable to verify the first certificate in Node.js, etc.
Do I have to spin up an Nginx container for each endpoint to avoid these issues? Or is there a better way around this? (One idea I had is sketched after my compose file below.)
Here is an example of my docker-compose.yml to give you a better idea of what happens in my app.
version: "3.7"
services:
workspace:
build:
context: workspace
args:
WORKSPACE_USER: ${WORKSPACE_USER}
volumes:
- api:/var/www/api
- site:/var/www/site
ports:
- "2222:22"
environment:
S3_KEY: ${S3_KEY}
S3_SECRET: ${S3_SECRET}
S3_BUCKET: ${S3_BUCKET}
DB_CONNECTION: ${DB_CONNECTION}
MYSQL_HOST: ${MYSQL_HOST}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MEDIA_LIBRARY_ENDPOINT_TYPE: ${MEDIA_LIBRARY_ENDPOINT_TYPE}
MEDIA_LIBRARY_IMAGE_SERVICE: ${MEDIA_LIBRARY_IMAGE_SERVICE}
tty: true
php-fpm:
build:
context: ./php-fpm
depends_on:
- nodejs
volumes:
- api:/var/www/api
- ./certs:/certs
environment:
S3_KEY: ${S3_KEY}
S3_SECRET: ${S3_SECRET}
S3_BUCKET: ${S3_BUCKET}
DB_CONNECTION: ${DB_CONNECTION}
MYSQL_HOST: ${MYSQL_HOST}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MEDIA_LIBRARY_ENDPOINT_TYPE: ${MEDIA_LIBRARY_ENDPOINT_TYPE}
MEDIA_LIBRARY_IMAGE_SERVICE: ${MEDIA_LIBRARY_IMAGE_SERVICE}
nodejs:
build:
context: ./nodejs
args:
NODEJS_SITE_PATH: ${NODEJS_SITE_PATH}
NODEJS_VER: ${NODEJS_VER}
volumes:
- site:${NODEJS_SITE_PATH}
- ./certs:/certs
environment:
NODEJS_ENV: ${NODEJS_ENV}
ports:
- 3000:3000
- 3001:3001
nginx:
build:
context: nginx
depends_on:
- php-fpm
- mariadb
restart: always
volumes:
- api:/var/www/api
- site:/var/www/site
- ./nginx/global:/etc/nginx/global
- ./nginx/sites:/etc/nginx/sites-available
- ./nginx/logs:/var/log/nginx
- ./certs:/certs
ports:
- 80:80
- 443:443
mariadb:
image: mariadb
restart: always
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
volumes:
- db:/var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
depends_on:
- mariadb
restart: always
environment:
PMA_HOST: ${MYSQL_HOST}
PMA_USER: root
PMA_PASSWORD: ${MYSQL_ROOT_PASSWORD}
UPLOAD_LIMIT: 2048M
volumes:
phpmyadmin:
db:
site:
external: true
api:
external: true
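One idea I had (untested) is to give the nginx service network aliases for the public hostnames, so that the Node SSR could resolve api.php.test inside the Docker network and the Host header would be correct without faking anything:

  nginx:
    networks:
      default:
        aliases:
          - api.php.test        # other containers now resolve these names to the nginx container
          - phpmyadmin.test
          - nodeapp.test

I'm not sure this is the right approach though, and it wouldn't by itself fix the "unable to verify the first certificate" error for my self-signed certificates.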
Maybe this question has been asked a few times before, but I didn't get a valid answer that solves my problem.
I am trying to run phpMyAdmin in a different Docker container using docker-compose, but it always throws the following error:
#2002 - Connection refused — The server is not responding (or the local server's socket is not correctly configured).
My docker compose file contains the following code:
version: "2"
services:
web:
build: .
ports:
- "80:80"
networks:
- web
volumes:
- .:/code
restart: always
db:
image: "mysql:5"
volumes:
- ./mysql:/etc/mysql/conf.d
environment:
MYSQL_ROOT_PASSWORD: toor
MYSQL_DATABASE: phpapp
networks:
- web
restart: always
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
PMA_PORT: 3306
PMA_HOST: db
PMA_USER: root
PMA_PASSWORD: toor
ports:
- "8000:80"
restart: always
networks:
- web
networks:
web:
driver: bridge
From the web container I am trying to connect to the database and it works fine, but the problem occurs with the phpmyadmin connection.
Any help would be appreciated. :)
Interestingly enough, I have your compose file running and phpmyadmin is accessible from the host.
Had to change port 8000 to 8004 though (port 8000 is occupied on my host).
In case your db container does not start fast enough for phpmyadmin to connect, I suggest adding depends_on to the phpmyadmin service. It makes sure db starts before phpmyadmin.
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_PORT: 3306
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: toor
    ports:
      - "8004:80"
    restart: always
    depends_on:
      - db
    networks:
      - web
Please show the logs from docker-compose up if the problem persists.
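Also note that depends_on only controls start order; it does not wait until MySQL is actually ready to accept connections. If timing is still a problem, a healthcheck plus a condition can help (a sketch; this needs compose file version 2.1 or newer):

  db:
    image: "mysql:5"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-ptoor"]
      interval: 5s
      timeout: 3s
      retries: 10
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck to pass before starting phpmyadmin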
Now you need to add a command to the mysql service so that phpmyadmin can connect to it.
command: --default-authentication-plugin=mysql_native_password
version: "2"
services:
db:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: drupal
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
volumes:
- ./dump:/docker-entrypoint-initdb.d
- /var/lib/mysql
networks:
- default
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- db:db
ports:
- 8000:80
environment:
PMA_HOST: db
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
So I am having trouble mounting volumes with the new format (version 2) docker-compose file.
I have the code in a folder called wordpress; in the main folder, where the docker-compose file is located, I also have a folder called code containing a Dockerfile like this:
FROM debian:jessie
VOLUME /var/www/wordpress
When I used the old format like this:
application:
  build: code
  volumes:
    - ./wordpress:/var/www/wordpress
    - ./logs/wordpress:/var/www/wordpress/app/logs
  tty: true
db:
  image: mysql
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: somename
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  ports:
    - 9001:9001
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - 8080:80
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - 81:80
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
When I started using version '2' with the same code as in the previous version I got an error, so I reworked my compose file and moved the Dockerfile from the code folder into the main folder where the docker-compose file is located. My new docker-compose now looks like this:
version: '2'
services:
  web:
    build: .
    volumes:
      - /wordpress:/var/www/wordpress
      - /logs/wordpress:/var/www/wordpress/app/logs
    tty: true
  db:
    image: mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: somename
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  php:
    build: php-fpm
    ports:
      - 9001:9001
    volumes_from:
      - web
    links:
      - db
  nginx:
    build: nginx
    ports:
      - 82:82
    links:
      - php
    volumes_from:
      - web
    volumes:
      - /logs/nginx/:/var/log/nginx
  elk:
    image: willdurand/elk
    ports:
      - 81:80
    volumes:
      - /elk/logstash:/etc/logstash
      - /elk/logstash/patterns:/opt/logstash/patterns
    volumes_from:
      - web
      - php
      - nginx
Finally, after this rework, docker-compose built and started successfully, but when I open my php and nginx containers, inside both of them /var/www/wordpress contains only an empty app folder, not my WordPress project.
Where did I make a mistake in the settings for mounting the project volume?
Thanks in advance.
The problem is with the way you are defining the local directories to be used for the volumes. In your previous version, you were using ./wordpress, while in the new one, you're using just /wordpress.
When referencing local directories for volume mappings, relative paths always have to start with ./ (otherwise the path is treated as an absolute host path or a named volume) - please try this:
version: '2'
services:
  web:
    build: .
    volumes:
      - ./wordpress:/var/www/wordpress
      - ./logs/wordpress:/var/www/wordpress/app/logs
One more thing: I recommend always enclosing the volume mappings in double quotes to avoid issues with space characters, e.g.:
version: '2'
services:
  web:
    build: .
    volumes:
      - "./wordpress:/var/www/wordpress"
      - "./logs/wordpress:/var/www/wordpress/app/logs"