I'm currently trying to dockerize my app for local development. For a bit of context, it is using Magento.
I have a configuration file where I usually set 127.0.0.1 as the MySQL hostname, since the web app runs on the same host as MariaDB.
Initially, I tried to link my containers in my docker-compose file using links (below is an extract of my docker-compose configuration at that point):
mariadb:
  image: mariadb:5.5

php-fpm:
  build: docker/php-fpm
  links:
    - "mariadb:mysql"
At this point, MariaDB was reachable by setting mysql as the hostname in my configuration file instead of 127.0.0.1. However, I wanted to keep 127.0.0.1.
After a bit of digging, I found this blog post, where it is explained how to set up containers so that they can be reached through localhost or 127.0.0.1.
This works as I expect, but it has a flaw.
Without Docker, I'm able to run PHP scripts that load Magento core modules. But with Docker and the blog post's configuration, I can't do that anymore, as Magento now expects a DB hostname called "mysql".
Is there any way, through docker-compose, to make a container reachable via both localhost and a hostname?
Without Docker, if I install MariaDB on my host machine, I can connect to it through 127.0.0.1:3306 or mysql://. I want similar behaviour.
As @Sai Kumar said, you can connect the Docker containers to the host network and then use localhost to access services. But the problem is that the port is then reserved by that container and will not be available until the container is removed.
However, the following sentence from your question caught my attention:
Without Docker, I'm able to run PHP scripts that load Magento core modules. But with Docker and the blog post's configuration, I can't do that anymore, as Magento now expects a DB hostname called "mysql".
So if I understand properly, Magento expects to connect to MySQL with mysql as the hostname instead of localhost. If so, this can be easily solved.
How?
In Docker there is a concept called service discovery; I've explained it in several of my answers. Basically, Docker resolves a container's IP address from its hostname or alias. So instead of connecting between containers by IP address, you connect by hostname, and even if a container restarts (which changes its IP), Docker takes care of resolving the name to the right container.
This works only with user-defined networks. So what you can do is create a bridge network and connect both Magento and MySQL to it, then either set container_name: mysql for the MySQL service or use an alias as mentioned here. Putting it all together, a sample docker-compose file would be:
version: '3'
services:
  mariadb:
    image: mariadb:5.5
    container_name: mysql # This can also be used, but I always prefer aliases
    networks:
      test:
        aliases:
          - mysql # Any container connected to the test network can reach this one simply by using mysql as the hostname
  php-fpm:
    build: docker/php-fpm
    networks:
      test:
        aliases:
          - php-fpm
networks:
  test:
    external: true
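Note that because the test network is declared external, Compose expects it to exist before docker-compose up is run; create it once with docker network create test, or drop external: true and let Compose create the network itself.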
More references
1. Networking in Compose.
2. Docker Networking.
Yes, you can connect to your DB through localhost or 127.0.0.1, but this is only possible when you run your docker-compose services in host network mode.
When you set your Docker network to host mode, the network isolation that containerisation gives you is lost, so you have to choose between host and bridge network mode.
In docker-compose, this networking is configured with:
network_mode: "host"
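A minimal sketch of what that could look like, reusing the service names from the question (host networking is fully supported only on Linux; with it, ports: mappings are ignored and MariaDB listens directly on the host's 3306, so php-fpm can reach it at 127.0.0.1:3306):

version: '3'
services:
  mariadb:
    image: mariadb:5.5
    network_mode: "host"
  php-fpm:
    build: docker/php-fpm
    network_mode: "host"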
Related
A brief summary of my question:
What characteristic of my docker-compose setup is colliding with PDO, such that Host -> mysql-docker connections fail via PDO but work from all other tools?
My App's config file:
database:
host: mysql
port: 3306
name: <name>
username: <username>
password: <pass>
outsideContainerConnections:
host: 127.0.0.1
port: 3307
Everything is running fine within the container, and I am able to access the database from the host using PhpStorm or the mysql command-line client.
If I run this command from the host, it connects
mysql --port=3307 -h 127.0.0.1 -u <username> -p
However, if I try to connect from php on the host, using PDO, it fails with
PDOException: SQLSTATE[HY000] [2002] Connection refused
PDO DSNs I've tried:
mysql:host=127.0.0.1:3307;port=;dbname=<name>;&charset=utf8;
or
mysql:host=127.0.0.1;port=3307;dbname=<name>;&charset=utf8;
or
mysql:host=127.0.0.1:3307;dbname=<name>;&charset=utf8;
I have read about using localhost vs 127.0.0.1 to force TCP, and I am doing that here.
Here also is the relevant section of my docker-compose file. Again, the command line and other tools on the host work fine; it is only PDO that seems to have an issue. For what it's worth, PDO in another container on the docker-compose network is behaving.
mysql:
  build: './mysql_docker'
  command: --lower_case_table_names=0
  ports:
    - '3307:3306'
  volumes:
    - ./volumes/mysql:/var/lib/mysql
    - ./volumes/my.cnf:/etc/my.cnf
  networks:
    - app-tier
Thank you for reading.
I've solved it: it's simple, but hopefully a good lesson to pass on.
In one case the PHP file was being run by cron, so it was executing locally on the host, outside of the Docker container. I also had a small syntax or other error in the file, so cron was having trouble with it.
To debug this, I was opening the file in my web browser, and that's the kicker: the connection errors I saw in the browser were not the same problems cron was having, because that page was being served from inside the Docker container, where the connection details have to be different. The same connection settings cannot work both from the host and from a neighbouring container.
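For reference, given the compose file above, the two contexts need different DSNs. A sketch (with <name> as a placeholder, as in the question):
From the host, via the published port: mysql:host=127.0.0.1;port=3307;dbname=<name>;charset=utf8
From another container on the app-tier network: mysql:host=mysql;port=3306;dbname=<name>;charset=utf8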
What I learned is this: what seems like a really heady technical problem that requires lots of manual reading and research COULD be a simple bad assumption. Sometimes it's worth going back to the drawing board and sketching an outline of the situation from the ground up.
In a way, that is what I did by asking this question, so thank you for the space to do so.
try "mysql --port=3307 -h 127.0.0.1 -u -p" from client if no
Sorry, I am brand new to Docker and web development in general, but I have made a basic docker-compose server that hosts my local PHP file and displays some text. It works fine on localhost, but I bought some domains and was wondering how to switch from connecting to localhost to a domain so that anyone can connect to it. My IP is already set up for outside connections and works for my SSH server, so I do not need to do that part. I just can't seem to find any results when I try to look it up. So I just need to know what to change in my docker-compose files or settings to make it go to a domain instead.
Here is my docker-compose file:
services:
  product-service:
    build: ./product
    volumes:
      - ./product:/usr/src/app
    ports:
      - 5001:80
  website:
    image: php:apache
    volumes:
      - ./website:/var/www/html
    ports:
      - 5000:80
    depends_on:
      - product-service
Without more knowledge of your code logic, I'm not sure I can fully make it run with this answer, but I can give an abstract checklist.
With your docker-compose config, ports 5001 and 5000 are already published on localhost. If these ports are also open to the external network, you can already type yourdomain.com:5000 to access the app.
If you want to access it just by typing yourdomain.com (without a port), I assume your website service will serve it:
Publish port 80 and connect it to port 80 of the website service:
website:
  image: php:apache
  volumes:
    - ./website:/var/www/html
  ports:
    - 80:80
  depends_on:
    - product-service
Make sure any AJAX call to the product-service API targets yourdomain.com:5001:
Server-side code in the website service can reach the product-service API over the compose network (at product-service:80), but an AJAX call is made from the user's browser, so it counts as an external call and must be configured as yourdomain.com:5001. A combined compose file is sketched below.
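Putting both points together, a sketch of the full docker-compose.yml (the services from the question, with only the website port mapping changed):

services:
  product-service:
    build: ./product
    volumes:
      - ./product:/usr/src/app
    ports:
      - 5001:80 # stays published so browser AJAX calls to yourdomain.com:5001 keep working
  website:
    image: php:apache
    volumes:
      - ./website:/var/www/html
    ports:
      - 80:80 # serve the site on the default HTTP port, so yourdomain.com works without a port
    depends_on:
      - product-service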
I am trying to run docker-compose up -d with the following docker-compose.yml file:
version: '2'
services:
  php7-cli:
    build: php7-cli
    image: php7-cli
    tty: true
    volumes:
      - ../:/var/www/app
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
The docker-compose build builds successfully, but if I try docker-compose up, I get the following error message:
ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
When I tried docker network ls, I got:
NETWORK ID NAME DRIVER SCOPE
7c8a6c131c1b bridge bridge local
9f16d3a33b4e host host local
24c54a4323ed none null local
I tried to remove orphan networks with docker network prune, but none of those networks were removed.
Then I tried to remove the bridge network manually with docker network rm 7c8a6c131c1b,
but I got this error:
Error response from daemon: bridge is a pre-defined network and cannot be removed
Here is my docker version:
docker version
Client:
Version: 18.03.1-ce
API version: 1.37
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:17:20 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.1-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:15:30 2018
OS/Arch: linux/amd64
Experimental: false
The same issue went away after renaming the network interface from ens33 to eth0.
One guide I used was https://www.itzgeek.com/how-tos/mini-howtos/change-default-network-name-ens33-to-old-eth0-on-ubuntu-16-04.html
Try ifconfig.
If it lists too many network interfaces, you may not have been stopping Docker stacks correctly before. Once your machine reaches its maximum number of network interfaces, docker-compose up fails because it cannot create a new network.
Take docker-compose for example: each time we run docker-compose up, a network interface is created, and it should be deleted when we run docker-compose down, which stops containers and removes the containers, networks, volumes, and images created by up.
Solution: reboot
YOU MUST DELETE THE bridge NETWORKS YOU DON'T USE!
List the docker networks and look for the bridge ones associated with projects you don't use anymore; those were created while running docker-compose apps in multiple directories.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
070b2051c2d6 altacv_default bridge local
90269e56ef9c arquitecture_default bridge local
f04c85b27605 awesome-cv-docker_default bridge local
cfdc24506a32 aws-s3-bucket-state_default bridge local
51b7b53697cc bridge bridge local
e4f4800f11bf certs-resource_default bridge local
7d760fc8a8c3 cloner_default bridge local
4a161fb223cc coverletter_default bridge local
d16029047f65 distance-matrix-service_default bridge local
9e944ddd5da0 docker-compose-deployer_default bridge local
fb25db546e14 docker-compose-deployment_default bridge local
150cd02ad55c docker-neo4j_default bridge local
c76d6167e7b1 docker-pycobertura_default bridge local
Delete the ones you don't use
Just delete each individual ID that you don't want anymore...
$ docker network rm 655b6657b925 644fecac3549 6fac6fd59fe8 3d03a7df183d 40d112093fc7 32de1ea6549d 5cdd12695a4d 1e30562c7f63 58b1f6343780 e7bbf7af6dbb 4a161fb223cc 070b2051c2d6 90269e56ef9c
655b6657b925
Error response from daemon: error while removing network: network tools_supercash-data id 644fecac3549735341a482ddf330918a2050bc6ee2bcd4123d208051c836f9eb has active endpoints
6fac6fd59fe8
3d03a7df183d
40d112093fc7
...
...
...
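If you hit the "has active endpoints" error shown above, a container is still attached to that network. Run docker-compose down in the corresponding project (or docker network disconnect the container from the network) first, then retry docker network rm.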
Profit
Docker-compose now works as expected...
docker-compose up
...
...
I had the same issue.
In my case I was using a VPN client named FortiClient VPN. Disconnecting the VPN gave docker-compose back the ability to create the default network.
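If disconnecting the VPN is not an option, a common workaround is to pin the compose network to a subnet that does not collide with the VPN's routes. A sketch against the compose file from the question (the subnet value is only an example; pick a private range that is free on your machine):

networks:
  my-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16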
I am new to Docker.
I have read that it is better to keep an app per container.
I need to run a web app (LAMP stack), so I have found a container image for running PHP + Apache.
Now I need to set up a MySQL container, but I am not sure what the correct way to do that is.
I read that it is possible to connect multiple containers together.
The idea is to make it transparent to the container running PHP + Apache when it tries to connect to the MySQL database locally, but to redirect all those local connections to another container.
Another idea I have is to provide an environment variable with the host that all connections should go to. In that case I would need a publicly accessible MySQL server, yet I need to keep it private and accessible only locally.
Could you please suggest the better option for my case?
Thank you
Use docker-compose:
For example, start from this docker-compose.yml. Put it in the same directory as your PHP Dockerfile:
version: "3"
services:
web:
build: .
ports:
- 8000:80
depends_on:
- db
db:
image: mysql
environment:
- MYSQL_ROOT_PASSWORD=something
volumes:
- ./mysql-data:/var/lib/mysql
Then:
docker-compose up
Thanks to the Docker network that Compose creates, your PHP app can reach the database as db:3306.
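If you prefer the environment-variable approach from your question, you can inject the DB host into the web container via Compose. A sketch (DB_HOST and DB_PORT are hypothetical names; your PHP config would need to read them, e.g. via getenv()):

services:
  web:
    build: .
    ports:
      - 8000:80
    depends_on:
      - db
    environment:
      - DB_HOST=db # resolves to the db container on the compose network
      - DB_PORT=3306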
I'm trying to dockerize a project that runs with PHP + the Apache HTTP server. I learned that I need one container for the Apache HTTP server and another container for the PHP scripts. I've searched a lot but still don't understand how that works. What I know so far is that I should use Docker networking: as long as the containers are on the same network, they should be able to communicate with each other.
The closest info I got is this, but it uses nginx:
https://www.codementor.io/patrickfohjnr/developing-laravel-applications-with-docker-4pwiwqmh4
quote from original article:
vhost.conf
The vhost.conf file contains standard Nginx configuration that will handle http requests and proxy traffic to our app container on port 9000. Remember from earlier, we named our container app in the Docker Compose file and linked it to the web container; here, we can just reference that container by its name and Docker will route traffic to that app container.
My question is: what configuration do I need so that the communication between the PHP container and the web container happens using the Apache HTTP server, like above? And what is the rationale behind this? I'm really confused; any information will be much appreciated.
The example that you linked to utilizes two containers:
a container that runs nginx
a container that runs php-fpm
The two containers are then able to connect to each other thanks to the links directive on the web service in the article's example docker-compose.yml. With this, the two containers can resolve the names web and app to the corresponding Docker containers. This means that the nginx service in the web container is able to forward any request it receives to the php-fpm container simply by forwarding to app:9000, which is <hostname>:<port>.
If you are looking to stay with PHP + Apache, there is an official image, php:7-apache, that will do what you're looking for within a single container. Assuming the following project structure:
/ Project root
- /www/ Your PHP files
You can generate a docker-compose.yml as follows within your project root directory:
web:
  image: php:7-apache
  ports:
    - "8080:80"
  volumes:
    - ./www/:/var/www/html
Then, from your project root, run docker-compose up and you will be able to visit your app at localhost:8080.
The docker-compose.yml above mounts the www directory of your project as a volume at /var/www/html within the container, which is where Apache serves files from.
The configuration in this case is Docker Compose. They are using Docker Compose to set up the DNS entries in the containers that let names like app resolve to IP addresses. In the example you linked, the web service links to the app service, so the name app can be resolved via DNS to one of the app service's containers.
In the article, the nginx configuration of the web service uses the host and port pair app:9000. The app service listens inside its container on port 9000, and nginx resolves app to one of the IP addresses of the app service's containers.
The equivalent of this in plain Docker commands would be something like the following (note that docker run -v requires an absolute host path, hence "$PWD"):
App container:
docker run --name app -v "$PWD":/var/www appimage
Web container:
docker run --name web --link app:app -v "$PWD":/var/www webimage
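Note that --link is a legacy feature; the modern equivalent is a user-defined network, which gives you DNS resolution by container name automatically. A sketch (network name and published port are illustrative):

docker network create mynet
docker run --name app --network mynet -v "$PWD":/var/www appimage
docker run --name web --network mynet -p 8080:80 -v "$PWD":/var/www webimage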