I have several PHP services; let's call them api1 and api2. I've set up a docker-compose file which successfully runs them behind nginx, so they are accessible from the host machine and work fine.
Here is an example of my docker-compose.yml:
version: "3.5"
services:
nginx:
user: "root:root"
image: nginx:stable
container_name: nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./cert:/etc/nginx/ssl/
- ./vhosts/:/etc/nginx/conf.d/:cached
- ./logs/nginx:/var/log/nginx:cached
- ${ACCOUNT_PATH}:/var/www/api1:cached
- ${ACCOUNT_API_PATH}:/var/www/api2:cached
api1:
image: php-fpm-image
build:
context: "."
dockerfile: "./images/api1/Dockerfile"
user: "root:root"
container_name: account
working_dir: /var/www/api1/
volumes:
- api1path/:/var/www/api1
api2:
image: php-fpm-image
build:
context: "."
dockerfile: "./images/api2/Dockerfile"
user: "root:root"
container_name: api2
working_dir: /var/www/api2/
volumes:
- api2:/var/www/api2:cached
networks:
default:
name: domain.com
driver: bridge
It is used in combination with this docker-compose file for the functional tests:
version: '3.5'
services:
  apitests:
    image: apitests:latest
    container_name: api_tests
    volumes:
      - ./config/:/opt/project/config/
      - ./test_reports/screenshots:/root/.selene/screenshots/
networks:
  default:
    driver: bridge
    external:
      name: domain.com
The following domains are accessible from the host machine:
api1.domain.com 127.0.0.1
api2.domain.com 127.0.0.1
My problem is how to connect the services directly inside Docker: I need to make requests from api1 to api2, and from apitests to both api1 and api2.
When I make such a request, the domains resolve directly to the PHP containers, so I receive the following error
can't connect to remote host (172.21.0.8): Connection refused
from any container.
As you can see, I need the domains to resolve to the nginx container instead, so that the request is handled correctly and php-fpm can return the result back to me via nginx.
How can I achieve this?
It seems that your problem is that the api1/api2 names resolve to the PHP containers, while your requirement is to have those names point at the nginx container.
You can assign the api1/api2 aliases to the nginx container on the network:
services:
  nginx:
    networks:
      default:
        aliases:
          - api1
          - api2
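If your code addresses the services by their full domain names (api1.domain.com, api2.domain.com), the same mechanism works with the fully qualified names; a sketch, assuming those exact hostnames are what the containers request:
services:
  nginx:
    networks:
      default:
        aliases:
          - api1.domain.com
          - api2.domain.com
Any container on the domain.com network that connects to api1.domain.com will then reach nginx, which proxies to the matching php-fpm upstream.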
If I got you right, you have several docker-compose.yml files and need the services from them to interact with each other. The logical suggestion is to have a globally defined network, let's say
docker network create testapis
and have all services linked to it:
docker-compose.yml
...
networks:
  default:
    external:
      name: testapis
In this case, all services from all docker-compose files will see each other under their hostnames (api1, api2, etc.), and no port publishing is needed (unless you want to use the services from outside this network).
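For example, the functional-test compose file would reference the same external network; a minimal sketch using the testapis name from above:
version: '3.5'
services:
  apitests:
    image: apitests:latest
networks:
  default:
    external:
      name: testapis
Every service in this file can then reach api1 and api2 by those hostnames.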
MySQL is in its own docker-compose.yml, as I want a MySQL server up and running that any other PHP application can connect to, so I do not have php and mysql in the same docker-compose.yml. From the PHP application, I can connect to MySQL if I look up the mysql container's gateway IP address (docker inspect mysql-db) and hard-code it into the PHP application. But Docker changes that 172... IP address each time MySQL restarts, so that is not ideal for development.
I can connect to MySQL via mysql -h 127.0.0.1 with no problem, but from the PHP application, if I try to use 127.0.0.1, I get connection refused. I can only connect if I use the 172... gateway IP address.
How do I get the mysql container to listen for connections from the host on 127.0.0.1?
docker-compose.yml for mysql
version: "3"
services:
mysql:
container_name: mysql-db
image: mysql
build:
dockerfile: Dockerfile
context: ./server/mysql
environment:
- MYSQL_ROOT_PASSWORD=admin
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- 3306:3306
docker-compose.yml for php
version: "3"
services:
nginx:
container_name: nginx_myapp
image: nginx
build:
dockerfile: Dockerfile
context: ./server/nginx
ports:
- 80:80
- 443:443
volumes:
- ./app:/var/www/html
networks:
- myapp
php:
container_name: php_myapp
image: php:7.3-fpm
build:
dockerfile: Dockerfile
context: ./server/php-fpm
environment:
CI_ENV: development
volumes:
- ./app:/var/www/html
networks:
- myapp
networks:
myapp:
127.0.0.1 is the loopback address. It points to localhost. In the context of Docker, localhost is the container itself. There is no DB running in your php container, so the connection will never succeed.
What you need to do is configure the default network in your mysql compose file so that you predictably control its name for later convenience (otherwise it is derived from your compose project name, which could change if you rename the containing folder...):
Important note: for the below to work, you need to use compose file version >= 3.5
---
version: '3.7'
# ...
networks:
  default:
    name: shared_mysql
You can now use that shared_mysql network as external from any other compose project.
version: '3.7'
services:
  nginx:
    # ...
    networks:
      - myapp
  php:
    # ...
    networks:
      - myapp
      - database
networks:
  myapp:
  database:
    external: true
    name: shared_mysql
You can then connect to mysql from your php container using the service name mysql (e.g. mysql -h mysql -u user -p)
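In a compose setup, the usual way to wire this in is through environment variables on the php service; a sketch, where the DB_HOST/DB_PORT variable names are hypothetical and depend on how your application reads its config:
services:
  php:
    # ...
    networks:
      - myapp
      - database
    environment:
      DB_HOST: mysql    # the compose service name, resolved over the shared_mysql network
      DB_PORT: "3306"   # the container port, not a published host port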
Reference: https://docs.docker.com/compose/networking/
A few solutions for you:
1) You can duplicate the mysql section in each file, using the same volume path; in that case, whichever project you start will have the same databases:
project1
version: "3.2"
services:
mysql:
volumes:
- /var/mysql:/var/lib/mysql
php:
build:
context: './php/'
project2
version: "3.2"
services:
mysql:
volumes:
- /var/mysql:/var/lib/mysql
php:
build:
context: './php/'
2) You can connect directly to your host machine using host.docker.internal (or docker.for.mac.localhost on older Docker for Mac releases); more information here: From inside of a Docker container, how do I connect to the localhost of the machine?
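On Linux, host.docker.internal is not defined by default; on Docker Engine 20.10+ you can map it yourself with extra_hosts. A sketch, assuming such a version:
services:
  php:
    build:
      context: './php/'
    extra_hosts:
      # host-gateway resolves to the host's IP on the Docker bridge
      - "host.docker.internal:host-gateway"
The PHP application can then reach MySQL at host.docker.internal using the published port 3306.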
Not sure if my title is accurate, but here's my issue. I am running a basic Laravel site on Docker and cannot get the site itself to connect to the PostgreSQL service. I will post my docker-compose.yml below. When I run php artisan migrate I get no errors and it all works. I can even use my Postico PostgreSQL client to connect to the DB and run queries. But when I try to connect to the DB from the site, it errors out saying this:
SQLSTATE[08006] [7] could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?
Here are my PostgreSQL client settings (which DO work):
Host: 127.0.0.1
Port: 5433
User: homestead
Password: homestead
Database: homestead
I have been messing around with different settings and things, so here is my docker-compose.yml, although I'm sure there are things in there I don't need:
version: '2'
services:
  php:
    image: jguyomard/laravel-php:7.2
    build:
      context: .
      dockerfile: infrastructure/php/Dockerfile
    volumes:
      - ./:/var/www/
      - $HOME/.composer/:$HOME/.composer/
    networks:
      - default
    links:
      - postgresql
      - redis
  nginx:
    image: jguyomard/laravel-nginx:1.13
    build:
      context: .
      dockerfile: infrastructure/nginx/Dockerfile
    ports:
      - 81:80
    networks:
      - default
    links:
      - postgresql
      - redis
  postgresql:
    image: postgres:9.6-alpine
    volumes:
      - pgsqldata:/var/lib/postgresql/data
    environment:
      - "POSTGRES_DB=homestead"
      - "POSTGRES_USER=homestead"
      - "POSTGRES_PASSWORD=homestead"
    ports:
      - "5433:5432"
    networks:
      - default
  redis:
    image: redis:4.0-alpine
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
    networks:
      - default
  # elastic:
  #   image: elasticsearch:5.5-alpine
  #   ports:
  #     - "9200:9200"
volumes:
  pgsqldata:
networks:
  default:
Any thoughts on why the site can't connect to the DB?
My docker network ls output:
NETWORK ID     NAME             DRIVER    SCOPE
2bf85424f466   bridge           bridge    local
c29d413f768e   host             host      local
0bdf9db30cd8   none             null      local
f3d9cb028ae3   my-app_default   bridge    local
The error message asks Is the server running on host "127.0.0.1", but in your case PostgreSQL is running in a different Docker container, which is not 127.0.0.1 from the php app's point of view. So change the server host to postgresql inside your PHP application.
As for the port: you used 5433 inside the PHP application, but that is the published port on the host machine, intended for use from outside the Docker network (which is why your Postico PostgreSQL client worked). The port to use inside the Docker network is 5432, so change the server port to 5432 inside your PHP application.
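For a Laravel app, that typically means pointing the standard database settings at the service; a sketch passing them via the compose file (these keys follow Laravel's usual .env names):
services:
  php:
    # ...
    environment:
      DB_CONNECTION: pgsql
      DB_HOST: postgresql   # the compose service name
      DB_PORT: "5432"       # the container port, not the published 5433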
You have also made the compose file more complex than necessary by attaching every service to the default network explicitly. (You can follow this link for more details.) Unless you have a specific requirement, you don't need to do that: docker-compose deploys all containers in the file to a single network automatically.
You also don't need links; they are deprecated. When multiple containers are defined in one docker-compose.yml file, they are automatically deployed on the same network and can reach each other by service name.
So this simplified compose file is recommended:
version: '2'
services:
  php:
    image: jguyomard/laravel-php:7.2
    build:
      context: .
      dockerfile: infrastructure/php/Dockerfile
    volumes:
      - ./:/var/www/
      - $HOME/.composer/:$HOME/.composer/
  nginx:
    image: jguyomard/laravel-nginx:1.13
    build:
      context: .
      dockerfile: infrastructure/nginx/Dockerfile
    ports:
      - 81:80
  postgresql:
    image: postgres:9.6-alpine
    volumes:
      - pgsqldata:/var/lib/postgresql/data
    environment:
      - "POSTGRES_DB=homestead"
      - "POSTGRES_USER=homestead"
      - "POSTGRES_PASSWORD=homestead"
    ports:
      - "5433:5432"
  redis:
    image: redis:4.0-alpine
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
  # elastic:
  #   image: elasticsearch:5.5-alpine
  #   ports:
  #     - "9200:9200"
volumes:
  pgsqldata:
I want to have remote access to my containers, which are running on a Debian server; I am using Traefik to do that.
I have access to my server via its IP (192.168.12.28).
My docker-compose file for the Traefik container:
version: "2"
services:
traefik:
image: traefik
command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
ports:
- "80:80"
- "8080:8080"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /dev/null:/traefik.toml
networks:
- webgateway
networks:
webgateway:
driver: bridge
My docker-compose file for my two test containers:
version: "2"
services:
app1:
image: php:7.0-apache
labels:
- "traefik.port=80"
- "traefik.backend=app1"
- "traefik.frontend.rule=Host:app1"
volumes:
- ./app1:/var/www/html
networks:
- web
app2:
image: php:7.0-apache
labels:
- "traefik.port=80"
- "traefik.backend=app2"
- "traefik.frontend.rule=Host:app2"
volumes:
- ./app2:/var/www/html
networks:
- web
networks:
web:
external:
name: traefik_webgateway
For now, I have to add entries to the /etc/hosts file of the visitor (the computer from which I access my containers) to reach them. The entries:
192.168.12.28 app1
192.168.12.28 app2
And then I have remote access via the URLs http://app1 or http://app2.
What I want is to be able to access my containers with URLs like:
http://192.168.12.28/app1 or http://192.168.12.28/app2
without any modification to the /etc/hosts file of the visitor. (I cannot oblige my clients to change their /etc/hosts files to access my applications...)
Is it the domain name configuration that is not appropriate?
Do I have to change the default Traefik config?
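For reference, Traefik 1.x also supports path-based frontend rules, so labels along these lines could route by URL prefix instead of hostname; an untested sketch, where PathPrefixStrip removes the /app1 prefix before the request reaches Apache:
services:
  app1:
    image: php:7.0-apache
    labels:
      - "traefik.port=80"
      - "traefik.backend=app1"
      - "traefik.frontend.rule=PathPrefixStrip:/app1"
With such a rule, http://192.168.12.28/app1 would reach app1 without any /etc/hosts entries, though the application must tolerate being served from a sub-path (links, redirects, asset URLs).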
Since volumes_from disappeared when Docker Compose changed its compose file format version, I am a bit lost as to how to share a volume between different containers.
See the example below, where a PHP application lives in a PHP-FPM container and Nginx lives in a second one.
version: '3.3'
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      - shared-volume:/var/www
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      - shared-volume:/var/www
volumes:
  shared-volume:
    driver_opts:
      type: none
      device: ~/sources/websocket
      o: bind
In order to make the application work, Nginx of course has to access the PHP files somehow, and that is where volumes_from used to help a lot. Now that option is gone.
When I try the command docker-compose up it ends with the following message:
ERROR: for websocket_php_1 Cannot create container for service php:
error while mounting volume with options: type='none'
device='~/sources/websocket' o='bind': no such file or directory
How do I properly share the same host volume between the two containers?
Why would you not use a bind mount? This is just source code that each container needs to see, correct? I added the :ro (read-only) option, which assumes no code generation is happening.
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
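Incidentally, the original error occurs because the local volume driver does not expand ~ in device paths. If you would rather keep the named volume, an absolute path should work; a sketch using ${HOME}, which Compose substitutes before Docker sees it:
volumes:
  shared-volume:
    driver_opts:
      type: none
      device: ${HOME}/sources/websocket  # must resolve to an absolute path
      o: bind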
I want to write a docker-compose.yml for nginx + mariadb + php + redis.
I read the documentation about the compose file: https://docs.docker.com/compose/compose-file/#versioning
The format is like this:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier
volumes:
  redis-data:
    driver: local
networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge
But I don't know how to write the compose file for nginx + mariadb + php + redis, and I would like some examples to reference. I use the official images from Docker Hub: https://hub.docker.com/explore/
PS: software versions:
OS:centos7.2
nginx:latest
php:latest
mariadb:latest
redis:latest
I would go with something along these lines:
version: '2'
services:
  web:
    container_name: my_app
    build: .
    links:
      - redis
      - mariadb
  nginx:
    container_name: nginx
    image: nginx
    links:
      - web:my_app  # link the web service under the my_app alias
    ports:
      - 80:80
      - 443:443
  redis:
    container_name: redis
    image: redis
  mariadb:
    container_name: mariadb
    image: mariadb
So create a Dockerfile for your project, and extend the official PHP image by adding your files as described in its README.
This docker-compose.yml will start your container and link it to the nginx container. This means it will be available under the my_app hostname, and you will need to add your own nginx config to pass requests to that container.
Redis and mariadb will also be started by docker-compose and will be available inside your app container under the hostnames redis and mariadb.
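One caveat: the official mariadb image will not start without a root password (or an equivalent startup option), so you will likely need something along these lines, where the value is a placeholder:
services:
  mariadb:
    container_name: mariadb
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: change-me  # placeholder, use a real secret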
Nginx should be the only container with ports exposed on the host.
The compose file above is not a complete solution: you will need to add an nginx config and probably provide some environment variables here and there.
I hope this helps.