I have three Docker containers that are supposed to talk to each other: a very classic stack with nginx, php-fpm and MariaDB.
My stack used to work, but now my containers can't seem to talk to each other anymore, and I don't know what changed. nginx can't reach php-fpm, and php-fpm can't reach MariaDB.
Here is the docker-compose.yml I use:
version: '3'
services:
  web-nginx:
    image: nginx:stable-alpine
    container_name: web-nginx
    restart: always
    ports:
      - "80:80"
  web-php:
    restart: always
    build:
      context: ./php
      dockerfile: Dockerfile
    container_name: web-php
  web-mariadb:
    build:
      context: ./mariadb
      dockerfile: Dockerfile
    restart: always
    container_name: web-mariadb
    ports:
      - "3306:3306"
The output of docker-compose ps:
docker-compose ps
Name          Command                        State   Ports
---------------------------------------------------------------------
web-mariadb   docker-entrypoint.sh mysqld    Up      0.0.0.0:3306->3306/tcp
web-nginx     /docker-entrypoint.sh ngin...  Up      0.0.0.0:80->80/tcp
web-php       docker-php-entrypoint php-fpm  Up      9000/tcp
My Dockerfiles. First, the php image:
FROM php:fpm-alpine
ENV USER=docker
ENV UID=1000
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "$(pwd)" \
    --no-create-home \
    --uid "$UID" \
    "$USER"
[...]
USER docker
And the mariadb image:
FROM mariadb
RUN echo max_allowed_packet=512M >> /etc/mysql/my.cnf
The nginx.conf file:
[...]
http {
    [...]
    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.php index.html index.htm;
        }

        location ~ \.php$ {
            root /usr/share/nginx/html;
            include fastcgi_params;
            fastcgi_pass web-php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /script$fastcgi_script_name;
        }
    }
}
So my php-fpm is supposed to be reached on port 9000, while mariadb listens on the classic 3306.
Between containers, ping commands succeed, but nc connections do not (I use nc because telnet is not bundled in Alpine images):
docker exec -u 0 -it web-nginx sh   # -u 0 to be root; otherwise ping is not allowed in the container
/ # ping -c1 web-php
PING web-php (172.22.0.3): 56 data bytes
64 bytes from 172.22.0.3: seq=0 ttl=64 time=0.156 ms
--- web-php ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.156/0.156/0.156 ms
/ # nc -v web-php 9000
nc: web-php (172.22.0.3:9000): Host is unreachable
docker exec -u 0 -it web-php sh
ping -c1 web-mariadb
PING web-mariadb (172.22.0.4): 56 data bytes
64 bytes from 172.22.0.4: seq=0 ttl=64 time=0.199 ms
--- web-mariadb ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.199/0.199/0.199 ms
/var/www/html # nc -v web-mariadb 3306
nc: web-mariadb (172.22.0.4:3306): Host is unreachable
Here is the output of docker network inspect web_default:
[
    {
        "Name": "web_default",
        "Id": "e458e63d89e3f6e2c35644aa03f377dc9fba6975b4b8838727b85c57c3b183ba",
        "Created": "2021-01-28T15:00:06.814105722+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "af441d802fd26173ea98ca81b85eb48cbea36b65b867df46b6ddb2f5ac484816": {
                "Name": "web-nginx",
                "EndpointID": "6b69e2033b3de50c86161cbec103843de5f83ab53ecebd557ff66ae57adaae04",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            },
            "be99788a48d03fb51288aab6373596c41ed8adcbcf8ceaa8ae2ece1e0aa10843": {
                "Name": "web-mariadb",
                "EndpointID": "8f52e8fe97f24161934b9ab3dcdde9da69aeb80388deb3f6442dd089554e30f0",
                "MacAddress": "02:42:ac:16:00:04",
                "IPv4Address": "172.22.0.4/16",
                "IPv6Address": ""
            },
            "f1d2aa6a6c2e156416da3ca0424336380b01950d6f0990203a2d0484ad0d04a5": {
                "Name": "web-php",
                "EndpointID": "7e6b95ccb7da5bcdd3754d1965820e429bdf5370cecee0bceb29c79138d36e0e",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "web",
            "com.docker.compose.version": "1.27.4"
        }
    }
]
Since ping works between containers, I assume the network itself is correct, but the ports are not reachable.
Also, I noticed that my containers can't reach the internet: any ping to google.com or just "outside" is rejected.
I tried restarting the docker service and even rebooting my server, but no luck.
I don't know how to troubleshoot this.
Server running CentOS 8:
Linux myserver 4.18.0-240.10.1.el8_3.x86_64
Docker version 20.10.2, build 2291f61
Any help much appreciated.
Thanks!
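This symptom pattern (ICMP passes, every TCP connection between containers fails, and containers have no internet access) usually points at the host firewall rather than Docker's own networking. On CentOS 8, firewalld runs on the nftables backend, which is known to interfere with Docker's forwarded traffic. A hedged troubleshooting sketch; the bridge interface name is derived from the network ID in the inspect output above and should be verified locally:

```shell
# Is firewalld active, and which zones cover the Docker bridges?
firewall-cmd --state
firewall-cmd --get-active-zones

# Compose networks use host bridges named br-<first 12 chars of network id>
docker network ls --filter name=web_default

# One common workaround: trust the Docker bridge interfaces so that
# container-to-container and outbound traffic is no longer dropped
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --permanent --zone=trusted --add-interface=br-e458e63d89e3   # assumed from the inspect output
firewall-cmd --reload
systemctl restart docker
```

If firewalld turns out not to be running, the same check applies to any other packet filter on the host (e.g. raw nftables or iptables rules in the FORWARD chain).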
I had my Docker setup running yesterday: Apache + PHP with a single index.php "hello world" file, and it was working properly. But today, out of nowhere, it has started to misbehave.
Here is my very simple docker-compose.yml:
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: my_secret_pw_shh
      MYSQL_DATABASE: test_db
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
    ports:
      - "9906:3306"
  web:
    image: php:7.2.2-apache
    container_name: php_web
    depends_on:
      - db
    volumes:
      - ./php/:/var/www/html/
    ports:
      - "8100:80"
    stdin_open: true
    tty: true
The issue I'm facing is that when I browse localhost:8100, the browser reports that the connection was reset. No other application is using port 8100. After a lot of failed research, I found out that Docker has its own IP address inside the container and maps it to the outside. I also tried network_mode: host, but no luck.
I can share the logs, but they are of no use: there are no errors, and everything is as simple as it should be. There is no breakage in the Apache config; the issue seems to be in the IP mapping.
I've also looked at docker network inspect {{network_name}}, but no luck. I noticed that each Compose project creates its own network on the first build. Here it is:
docker network inspect test_default
[
    {
        "Name": "test_default",
        "Id": "018e7845456beaabee1cd729da9c1d14440f13f93f4f606f700ff0013804246d",
        "Created": "2022-02-03T23:51:25.740391332+05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "test",
            "com.docker.compose.version": "1.29.2"
        }
    }
]
This is the network output for my container.
I tried creating a custom network following the Docker tutorial, but still no luck.
After a bit of research, I also learned that this can happen if the IP ranges assigned to docker0 collide with one of my other gateways, but I didn't notice any colliding address ranges.
In case you ask for my bridge network, here it is:
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "12b99dd622ac5d09e45cf935cc2db907b4249140bbd2fb397db5dc81a1d70ab0",
        "Created": "2022-02-03T22:03:26.564919187+05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
And netstat -nr on my system shows this:
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 wlp0s20f3
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 br-4c91e6855bcb
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-4c91e6855bcb
172.19.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-9711f2bb8edf
172.22.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-2211a05bcc3d
172.23.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-018e7845456b
172.28.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-55c64ea21f40
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 wlp0s20f3
I used the following commands: netstat -nr for everything above, and docker network inspect bridge for my bridge network.
Everything seems fine, so what happened all of a sudden that it has stopped working? Apache is running with no issues; here are the logs:
8aa94fcb082a_php_web | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.23.0.3. Set the 'ServerName' directive globally to suppress this message
8aa94fcb082a_php_web | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.23.0.3. Set the 'ServerName' directive globally to suppress this message
8aa94fcb082a_php_web | [Thu Feb 03 20:20:54.830485 2022] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.2 configured -- resuming normal operations
8aa94fcb082a_php_web | [Thu Feb 03 20:20:54.830543 2022] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
I read somewhere to curl the IP from inside the container, but the responses from inside the container weren't up to the mark either: they were refusing the connection.
When I do docker-compose up and the port gets bound, I see this:
sudo netstat -pna | grep 8100
tcp 0 0 0.0.0.0:8100 0.0.0.0:* LISTEN 1123608/docker-prox
tcp6 0 0 :::8100 :::* LISTEN 1123617/docker-prox
otherwise I don't.
P.S.: I tried reinstalling docker-compose and the Docker engine twice, rebuilding the image, removing containers, etc., but it didn't help.
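For anyone triaging the same "connection reset" symptom, two checks narrow it down: whether Apache answers inside the container at all, and whether the host-side port forwarding works. A rough sketch (the container name php_web comes from the compose file above; the firewall check is an assumption to adapt):

```shell
# 1) Does Apache answer inside the container itself?
docker exec php_web curl -sI http://localhost/

# 2) Does the published port answer on the host?
curl -sI http://127.0.0.1:8100/

# 3) Is a host firewall rule dropping forwarded packets?
sudo iptables -L DOCKER-USER -n -v
```

If (1) fails, the problem is inside the container (Apache or the mounted ./php volume); if (1) works but (2) fails, it is the docker-proxy/NAT path or a host firewall.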
First of all, I did read these links:
Connect to Docker MySQL container from localhost?
Connect to Mysql on localhost from docker container
From inside of a Docker container, how do I connect to the localhost of the machine?
But as a beginner with Docker, they did not help me.
What you need to know:
Yes, I need localhost. I'm working on an app that interacts directly with the database: it creates/removes user privileges and allows some users to connect with limited privileges from remote hosts. When initialized, the app drops the default remote access for the root and forge users and grants them full privileges on localhost only.
I'm using a docker-compose.yml generated by https://phpdocker.io
Ubuntu 18.10
Docker version 18.09.3, build 774a1f4
docker-compose version 1.21.0, build unknown
I'm using Docker only for development; in production I use Forge.
./docker-compose.yml
###############################################################################
# Generated on phpdocker.io #
###############################################################################
version: "3.1"
services:
  mailhog:
    image: mailhog/mailhog:latest
    container_name: myapp-mailhog
    ports:
      - "8081:8025"
  redis:
    image: redis:alpine
    container_name: myapp-redis
  mariadb:
    image: mariadb:10.4
    container_name: myapp-mariadb
    working_dir: /application
    volumes:
      - .:/application
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=forge
      - MYSQL_PASSWORD=forge
    ports:
      - "8083:3306"
  webserver:
    image: nginx:alpine
    container_name: myapp-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
  php-fpm:
    build: phpdocker/php-fpm
    container_name: myapp-php-fpm
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
./phpdocker/nginx/nginx.conf
server {
    listen 80 default;
    client_max_body_size 108M;
    access_log /var/log/nginx/application.access.log;
    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
./phpdocker/php-fpm/Dockerfile (slightly modified to add mysql-client, and to avoid installing git in a second RUN command)
FROM phpdockerio/php73-fpm:latest
WORKDIR "/application"

# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive

# Install selected extensions and other stuff
RUN apt-get update \
    && apt-get -y --no-install-recommends install \
        php7.3-mysql php-redis php7.3-sqlite3 php-xdebug php7.3-bcmath php7.3-bz2 php7.3-gd \
        git \
        mysql-client \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
./php-ini-overrides.ini
upload_max_filesize = 100M
post_max_size = 108M
I tried to use network_mode: host, but it makes the webserver container exit with code 1.
OK, but as I remember it, localhost in mysql/mariadb means access through the local Unix socket. There are ways of sharing these between containers.
Have a look here: Connection between docker containers via UNIX sockets
@F.Maden gave me the right direction. I accepted his answer, but here's how I made it work in detail.
Basically, as he said, I need to share mysqld.sock between my mariadb and php-fpm services.
The first step is to share a folder between both services. Since I already have /application, which contains the docker config in /application/phpdocker, I will reuse it.
I had to create a custom my.cnf file to extend the default mariadb configuration and add a custom socket path:
./phpdocker/mariadb/my.cnf
[mysql]
socket = /application/phpdocker/shared/mysqld.sock

[mysqld]
socket = /application/phpdocker/shared/mysqld.sock
Then I had to share the config file with my mariadb container
./docker-compose.yml
mariadb:
  image: mariadb:10.4
  container_name: myapp-mariadb
  working_dir: /application
  volumes:
    - .:/application
    - ./phpdocker/mariadb/my.cnf:/etc/mysql/my.cnf # notice this line
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=myapp
    - MYSQL_USER=forge
    - MYSQL_PASSWORD=forge
  ports:
    - "8083:3306"
I created a folder ./phpdocker/shared with permissions 777, where mariadb will be able to create mysqld.sock. (I couldn't start mariadb with 755. In my case this is only used locally, not in production, so it's fine.)
From the terminal
$ mkdir ./phpdocker/shared && chmod 777 ./phpdocker/shared
And now test it!
From the terminal
$ docker-compose up -d --force-recreate --build
$ docker exec -it -u $(id -u):$(id -g) myapp-php-fpm /bin/bash
Once in the container
$ mysql -u root -p -h localhost --socket=/application/phpdocker/shared/mysqld.sock
mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
If the problem with the connection to the DB persists, we can ask Docker which IP the DB container has:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
We'll receive the container IP (e.g. 172.21.0.3). After this, we can put this IP into the "host" section of the connection settings.
Enjoy!
Ref: How can I access my docker maria db?
I have a Docker setup with a php-fpm container, a node container and an nginx container which serves as a proxy. In the browser (http://project.dev), the php container responds with JSON like I expect. All good. However, when I make a request from the node container to this php container (view code), the request fails with ECONNRESET. So apparently the node container cannot communicate with the php container. The nginx error log does not seem to get a new entry.
Error: read ECONNRESET at _errnoException (util.js:1031:13) at TCP.onread (net.js:619:25)
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
Any ideas?
I've made a github repo: https://github.com/thomastilkema/docker-nginx-php-fpm-node
Trimmed version of docker-compose.yml (view file)
nginx:
  depends_on:
    - php-fpm
    - node
  networks:
    - app
  ports:
    - 80:80
php-fpm:
  networks:
    - app
node:
  networks:
    - app
networks:
  app:
    driver: overlay
Trimmed version of nginx.conf (view file)
http {
    upstream php-fpm {
        server php-fpm:9000;
    }
    upstream node {
        server node:4000;
    }
    server {
        listen 80 reuseport;
        server_name api.project.dev;
        location ~ \.php$ {
            fastcgi_pass php-fpm;
            ...
        }
    }
    server {
        listen 80;
        server_name project.dev;
        location / {
            proxy_pass http://node;
        }
    }
}
php-fpm/Dockerfile (view file)
FROM php:7.1-fpm-alpine
WORKDIR /var/www
EXPOSE 9000
CMD ["php-fpm"]
Request which gives an error
const response = await axios.get('http://php-fpm:9000');
How to reproduce:
Create a swarm manager (and a worker) node.
Find out the IP address of your swarm manager node (usually 192.168.99.100): docker-machine ip manager or docker-machine ls. Edit your hosts file (on a Mac, sudo vi /private/etc/hosts) by adding 192.168.99.100 project.dev and 192.168.99.100 api.project.dev.
git clone https://github.com/thomastilkema/docker-nginx-php-fpm-node project
cd project && ./scripts/up.sh
Have a look at the logs of the container: docker logs <container-id> -f
ECONNRESET means the other end closed the connection, which can usually be attributed to a protocol error.
PHP-FPM (the FastCGI Process Manager) speaks the FastCGI protocol, not HTTP, so an HTTP client cannot talk to it directly on port 9000.
Go via the nginx container instead, which translates the HTTP request to FastCGI:
axios.get('http://nginx/whatever.php')
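To see the protocol mismatch directly, you can speak FastCGI to the pool yourself with the cgi-fcgi utility (from the libfcgi tools, which are not installed by default; the script paths here are placeholders to adapt):

```shell
# Talk FastCGI (not HTTP) to the php-fpm pool from a container on the
# same network; a plain HTTP request to :9000 can never succeed
SCRIPT_NAME=/index.php \
SCRIPT_FILENAME=/var/www/index.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect php-fpm:9000
```

If this returns a response while the HTTP request gets ECONNRESET, the container networking is fine and only the protocol was wrong.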
I am trying to set up ECS in order to run my php/nginx docker application.
It works locally using this docker-compose.yml file:
version: '2'
services:
  nginx:
    image: NGINX-IMAGE
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - php
    environment:
      APP_SERVER_NAME: <ip>
  php:
    image: PHP-IMAGE
    ports:
      - 9000:9000
    volumes:
      - /var/www/html
The problem is that I can't get this working on ECS.
I don't know how to create the web-data volume and let nginx grab it using volumes_from.
I am trying to create the volume using this JSON:
volumes='[
    {
        "name": "webdata",
        "host": {
            "sourcePath": "/var/www/html"
        }
    }
]'
And then in my container-definitions to the php-container I add:
"mountPoints":
[
{
"sourceVolume": "webdata",
"containerPath": "/var/www/html",
"readOnly": false
}
]
However, when I do this, it mounts the content of the host's /var/www/html folder into the containers' /var/www/html folders.
My question is: how do I configure the volume to use the data from the php container's /var/www/html and let nginx access that data?
I managed to find a solution suited to ECS.
I simply created a VOLUME in my php Dockerfile referencing /var/www/html.
This means I no longer need to reference the volume in the volumes section of the php container, and nginx will still be able to access it with volumes_from.
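For reference, a minimal sketch of what that php Dockerfile change looks like (the base image and the src/ path are placeholders, not the actual project layout):

```dockerfile
FROM php:7.1-fpm
# Bake the application into the image first...
COPY src/ /var/www/html/
# ...then declare the path as a volume, so containers started from this
# image expose it and other containers can mount it via volumes_from
VOLUME ["/var/www/html"]
```

The anonymous volume is seeded from the image content at container start, which is why nginx then sees the php container's files rather than the host's.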
Update
This is my working task definition for ECS:
task_template='[
    {
        "name": "nginx",
        "image": "NGINX_IMAGE",
        "essential": true,
        "cpu": 10,
        "memoryReservation": 1000,
        "portMappings": [
            {
                "containerPort": 80,
                "hostPort": 80
            }
        ],
        "environment": [
            { "name": "APP_SERVER_NAME", "value": "%s" }
        ],
        "links": [
            "app"
        ],
        "volumesFrom": [
            { "sourceContainer": "app" }
        ]
    },
    {
        "name": "app",
        "image": "IMAGE",
        "essential": true,
        "cpu": 10,
        "memoryReservation": 1000,
        "portMappings": [
            {
                "containerPort": 9000,
                "hostPort": 9000
            }
        ]
    }
]'
And then I added VOLUME ["/var/www/html"] to my app Dockerfile. Now nginx can access the data with the volumes_from argument in the task definition.
While setting up a PHP dev environment with Docker, I ran into an issue with remote debugging (Xdebug) through a dbgp proxy.
Connecting my host machine to the proxy doesn't seem to be a problem, but the proxy container cannot reach my host machine over the configured port (in this case 9003).
I'm using docker compose on windows.
Successfully connecting my dev machine to the proxy:
INFO: dbgp.proxy: Server:onConnect ('172.18.0.1', 36558) [proxyinit -p 9003 -k XDEBUG_IDEA -m 1 ]
When executing a request containing the right IDE key (e.g. http://localhost/?XDEBUG_SESSION_START=XDEBUG_IDEA), the proxy reacts correctly but is unable to contact the gateway over the port that was registered by my dev machine:
INFO: dbgp.proxy: connection from 172.18.0.2:40902 [<__main__.sessionProxy instance at 0x7fcff1998998>]
ERROR: dbgp.proxy: Unable to connect to the server listener 172.18.0.1:9003 [<__main__.sessionProxy instance at 0x7fcff1998998>]
Traceback (most recent call last):
  File "/usr/local/bin/pydbgpproxy", line 223, in startServer
    self._server.connect((self._serverAddr[0], self._serverAddr[1]))
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
WARNING: dbgp.proxy: Unable to connect to server with key [XDEBUG_IDEA], stopping request [<__main__.sessionProxy instance at 0x7fcff1998998>]
INFO: dbgp.proxy: session stopped
Any ideas on what is going wrong here?
Firewall issues can be excluded, since I basically just turned it off.
I did log into the dbgpproxy container and was able to send requests to the gateway IP over ports 80 and 8080, but got the same "connection refused" when trying port 9003.
Any pointers would be greatly appreciated!
The docker compose file:
version: '2'
volumes:
  mysqldata:
    driver: local
services:
  app:
    restart: "always"
    image: php:7.0-fpm
    command: "true"
    volumes:
      - .:/var/www/html
  nginx:
    restart: "always"
    build: ./docker/nginx/
    ports:
      - "80:80"
    depends_on:
      - php
  php:
    restart: "always"
    build: ./docker/php/
    environment:
      XDEBUG_CONFIG: remote_host=dbgpproxy
    expose:
      - "9000"
    depends_on:
      - mysql
    volumes_from:
      - app
  composer:
    restart: "no"
    image: composer/composer:php7
    command: install
    volumes:
      - .:/app
  dbgpproxy:
    restart: "always"
    image: christianbladescb/dbgpproxy
    expose:
      - "9000"
    ports:
      - "9001:9001"
    environment:
      DOCKER_HOST: 10.0.75.1
  mysql:
    image: mysql:latest
    volumes:
      - mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    depends_on:
      - mysql
    environment:
      PMA_HOST: mysql
  redis:
    image: redis
    ports:
      - "6379:6379"
  mailcatcher:
    image: schickling/mailcatcher
    restart: "always"
Docker network info:
[
    {
        "Name": "test_default",
        "Id": "8f5b2e1188d65948d6a46977467b181e7fdb4b112a688ff87691b35c29da8970",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "05725540eca07666de250f2bb9ae856da69c0c325c4476150f214ba32a9b8714": {
                "Name": "test_nginx_1",
                "EndpointID": "723a820ea07e77cf976712293a911be3245e862477af6e0ecdcc1462536de6f5",
                "MacAddress": "02:42:ac:12:00:08",
                "IPv4Address": "172.18.0.8/16",
                "IPv6Address": ""
            },
            "78085ebed911e767a9c006d909cb245e0392055d37550c6cfa3a618969bef821": {
                "Name": "test_dbgpproxy_1",
                "EndpointID": "2332e1a01a8c0ec7262d96829d7d8f3cb4c711b6e9033ab85a8dfdb57ae01382",
                "MacAddress": "02:42:ac:12:00:0a",
                "IPv4Address": "172.18.0.10/16",
                "IPv6Address": ""
            },
            "7e12ea0a3a9b90360be6c15222fd052fbf02065aa18b8a3b12d19779bef4b41b": {
                "Name": "test_phpmyadmin_1",
                "EndpointID": "456a6508b6a507e01584beaf54eec9605db449261749065a562a6fb62111bb9c",
                "MacAddress": "02:42:ac:12:00:05",
                "IPv4Address": "172.18.0.5/16",
                "IPv6Address": ""
            },
            "81043a642cd9932e16bc51ba4604f6057d82e2c05f6e7378a85adfaa2de87f28": {
                "Name": "test_app_1",
                "EndpointID": "cfa41a5f210d4907747dcf7d516c6bdaecb817c993867a1e5f8e0250d33c927b",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "8b0cd7dc33fb783ae811f7ba15decd0165199da66242a10a33d8ee86c41bd664": {
                "Name": "test_mailcatcher_1",
                "EndpointID": "f2ed38e42dffd9565822a7ac248dcb022a47c8a78b05e93793b62d7188d0823c",
                "MacAddress": "02:42:ac:12:00:06",
                "IPv4Address": "172.18.0.6/16",
                "IPv6Address": ""
            },
            "d552bf1ab3914220b8fbf9961cc3801acbe180c6e945bd0b4c3bcf8588352a5d": {
                "Name": "test_mysql_1",
                "EndpointID": "6188cbeb49cf8afc2a7622bd6ef7fc7076ea91b909ec3efc1d9a1ed1d35d5790",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "ecc941fc337d727e3c118bf9112dee1552ef5db7c94b24706c7d03bc42ea6c0a": {
                "Name": "test_redis_1",
                "EndpointID": "3f4254982ed1be8354f514dd717993e02b4afdfad8d022f5f8daf0b919a852e1",
                "MacAddress": "02:42:ac:12:00:07",
                "IPv4Address": "172.18.0.7/16",
                "IPv6Address": ""
            },
            "f15f53405205db7263013fbb1ef1272764ca16850a46097b23d3619cd3d37b20": {
                "Name": "test_php_1",
                "EndpointID": "5fe30610823cd5660bf62e7612007ff4eef0316cbdfd15dbc0e56cafa6a3aca7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
That's because pydbgpproxy works similarly to Xdebug: it is trying to connect to the wrong IP address. The correct address pydbgpproxy should connect to is host.docker.internal.
Situation:
xDebug ---> pydbgpproxy -X-> Host
This is because pydbgpproxy received the wrong IP from Docker in the first place.
So I guess you have to point pydbgpproxy at the host.docker.internal IP.
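Since the compose file above hardcodes DOCKER_HOST: 10.0.75.1, one hedged alternative is to use that name instead (assuming Docker Desktop, where host.docker.internal resolves inside containers out of the box; on Linux engines this additionally requires Docker 20.10+ and the host-gateway mapping):

```yaml
dbgpproxy:
  restart: "always"
  image: christianbladescb/dbgpproxy
  environment:
    DOCKER_HOST: host.docker.internal   # instead of a hardcoded 10.0.75.1
  extra_hosts:
    - "host.docker.internal:host-gateway"   # only needed on Linux engines
  ports:
    - "9001:9001"
```

This keeps the proxy pointed at the real host address even when the host's Docker-facing IP changes.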