Locally I have the following docker-compose configuration:
nginx:
  build:
    context: ./nginx
  ports:
    - "80:80"
  volumes:
    - ./../logs:/home/web/logs/
    - ./../:/home/web/my-website.com/
  depends_on:
    - php
php:
  build:
    context: ./php
  volumes:
    - ./../:/home/web/my-website.com/
  working_dir: /home/web/my-website.com/
  expose:
    - "8123"
The php container has Xdebug installed, and I can easily connect to it from PhpStorm.
I have a remote ClickHouse database that is reached via an SSH tunnel. When I start my container, I just go into it and execute:
ssh -4 login@host.com -p 2211 -L 8123:localhost:8123 -oStrictHostKeyChecking=no -Nf
After this, my site is able to use the connection, but when I execute the console command
./yii analysis/start-charts 003b56fe-db47-11e8-bcc0-52540010e5bc 205
from PhpStorm, I'm getting an exception:
Failed to connect to 127.0.0.1 port 8123: Connection refused
If I jump into the container and launch the same command, everything works fine.
What's wrong? Why doesn't PhpStorm see my SSH tunnel?
I got an answer on the Superuser site: https://superuser.com/questions/1374463/phpstorm-docker-xdebug-db-ssh-tunnel/1375961#1375961
In addition, I've added a ports node to my php container definition, so now it is the following:
php:
  build:
    context: ./php
  volumes:
    - ./../:/home/web/my-website.com/
  working_dir: /home/web/my-website.com/
  expose:
    - "8123"
  ports:
    - "8123:8123"
  depends_on:
    - redis
    - mysql
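For context (this is the usual explanation, not something verified in the linked answer): PhpStorm's remote interpreter typically runs console commands in a fresh container via docker-compose run, and a tunnel created with -L 8123:localhost:8123 binds only to 127.0.0.1 inside the container that ran ssh, so a freshly started container never sees it. A hedged sketch of one workaround, reusing the tunnel command from the question:

# Bind the forwarded port to all interfaces inside the php container,
# not just its loopback, so other containers on the same Compose network
# can reach the tunnel through the php service's hostname:
ssh -4 login@host.com -p 2211 -L 0.0.0.0:8123:localhost:8123 -oStrictHostKeyChecking=no -Nf
# Then point the app at php:8123 instead of 127.0.0.1:8123 whenever the
# command runs in a different container.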
MySQL is in its own docker-compose.yml because I want a MySQL server up and running that any other PHP application can connect to, so I do not have php and mysql in the same docker-compose.yml. From the PHP application, I can connect to MySQL if I look up the mysql container's gateway IP address (docker inspect mysql-db) and then hard-code it into the application. But Docker changes that 172... IP address each time MySQL restarts, so that is not ideal for development.
I can connect to MySQL from the host via mysql -h 127.0.0.1 with no problem, but from the PHP application, if I try to use 127.0.0.1, I get connection refused. I can only connect if I use the 172... gateway IP address.
How do I get the mysql container to listen for connections on 127.0.0.1?
docker-compose.yml for mysql
version: "3"
services:
mysql:
container_name: mysql-db
image: mysql
build:
dockerfile: Dockerfile
context: ./server/mysql
environment:
- MYSQL_ROOT_PASSWORD=admin
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- 3306:3306
docker-compose.yml for php
version: "3"
services:
nginx:
container_name: nginx_myapp
image: nginx
build:
dockerfile: Dockerfile
context: ./server/nginx
ports:
- 80:80
- 443:443
volumes:
- ./app:/var/www/html
networks:
- myapp
php:
container_name: php_myapp
image: php:7.3-fpm
build:
dockerfile: Dockerfile
context: ./server/php-fpm
environment:
CI_ENV: development
volumes:
- ./app:/var/www/html
networks:
- myapp
networks:
myapp:
127.0.0.1 is the loopback address. It points to localhost. In the context of Docker, localhost is the container itself. There is no DB running in your php container, so the connection will never succeed.
What you need to do is configure the default network in your mysql compose file so that you predictably control its name for later convenience (otherwise it is derived from your Compose project name, which could change if you rename the containing folder...):
Important note: for the below to work, you need to use compose file version >= 3.5
---
version: '3.7'
#...
networks:
  default:
    name: shared_mysql
You can now use that shared_mysql network as an external network from any other Compose project.
version: '3.7'
services:
  nginx:
    #...
    networks:
      - myapp
  php:
    #...
    networks:
      - myapp
      - database
networks:
  myapp:
  database:
    external: true
    name: shared_mysql
You can then connect to mysql from your php container using the service name mysql (e.g. mysql -h mysql -u user -p)
Reference: https://docs.docker.com/compose/networking/
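As a quick sanity check of this setup (service names are taken from the compose files above; the last command assumes a mysql client is installed in the php image):

# The named network should exist once the mysql project is up:
docker network ls --filter name=shared_mysql
# From the php project, the service name must resolve inside the php container:
docker-compose exec php getent hosts mysql
# And the server should answer on the container-internal port 3306:
docker-compose exec php mysql -h mysql -u user -p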
A few solutions for you:
1) You can duplicate the mysql section in each file using the same volume path; in that case, whichever project you start will have the same databases.
project1
version: "3.2"
services:
mysql:
volumes:
- /var/mysql:/var/lib/mysql
php:
build:
context: './php/'
project2
version: "3.2"
services:
mysql:
volumes:
- /var/mysql:/var/lib/mysql
php:
build:
context: './php/'
2) You can connect directly to your host machine using host.docker.internal (or docker.for.mac.localhost on older Docker for Mac versions); more information here: From inside of a Docker container, how do I connect to the localhost of the machine?
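A minimal check of option 2 from inside the php container (assumes Docker for Mac/Windows, where host.docker.internal exists, and that the mysql port 3306 is published to the host as in the compose file above):

# The special name should resolve to the host's address:
docker-compose exec php getent hosts host.docker.internal
# If a mysql client is installed in the image, connect through the host:
docker-compose exec php mysql -h host.docker.internal -P 3306 -u root -p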
I have an issue, seemingly since the last Docker update, on Windows 10 (local development). When I change files in PhpStorm (and in other editors: Sublime, Notepad+), after a while the files inside the container stop receiving the changes.
Steps that can help for a while:
If I completely shut down all containers and then bring them up again: docker-compose down && docker-compose up
If I get into the php-fpm container and run touch file.php on a file that didn't change (the file is immediately updated).
What I tried that didn't help:
I restarted the php-fpm and nginx containers with docker-compose restart php-fpm nginx (yes, it's strange, because down/up for all containers helped).
I changed the PhpStorm setting Use safe write (save changes to a temporary file first).
I also checked the inode of the file inside the container with ls -lai file.php: the inode number was the same before changes worked and after they broke. There is no fixed number of changes required to break syncing; it's random, and sometimes 2 changes are enough (a quick way to check for divergence is sketched below).
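Comparing checksums of the same bind-mounted file on both sides shows whether the mount has diverged (file.php and the paths here are placeholders for your own project):

# On the Windows host (Git Bash or WSL):
md5sum file.php
# Inside the container, against the same bind-mounted path:
docker-compose exec php-fpm md5sum /var/www/kpi/file.php
# Matching hashes mean the mount is in sync; different hashes reproduce the bug.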
I have:
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.2, build 698e2846
docker-compose.yml
version: '3'
services:
  nginx:
    container_name: pr_kpi-nginx
    build:
      context: ./
      dockerfile: docker/nginx.docker
    volumes:
      - ./:/var/www/kpi
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/fastcgi.conf:/etc/nginx/fastcgi.conf
    ports:
      - "8081:80"
    links:
      - php-fpm
    networks:
      - internal
  php-fpm:
    container_name: pr_kpi-php-fpm
    build:
      context: ./
      dockerfile: docker/php-fpm.docker
    volumes:
      - ./:/var/www/kpi
    links:
      - kpi-mysql
    environment:
      # 192.168.221.1 -> host.docker.internal for Mac and Windows
      XDEBUG_CONFIG: "remote_host=host.docker.internal remote_enable=1"
      PHP_IDE_CONFIG: "serverName=Docker"
    networks:
      - internal
  mailhog:
    container_name: pr_kpi-mailhog
    image: mailhog/mailhog
    restart: always
    ports:
      # smtp
      - "1025:1025"
      # http
      - "8025:8025"
    networks:
      - internal
  kpi-mysql:
    container_name: pr_kpi-kpi-mysql
    image: mysql:5.7
    command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    volumes:
      - ./docker/storage/kpi-mysql:/var/lib/mysql
    environment:
      # We must change prod secrets, this is not good approach
      - "MYSQL_ROOT_PASSWORD=pass"
      - "MYSQL_USER=user"
      - "MYSQL_PASSWORD=user_pass"
      - "MYSQL_DATABASE=kpi_db"
    ports:
      - "33061:3306"
    networks:
      - internal
  kpi-npm:
    container_name: pr_kpi-npm
    build:
      context: ./
      dockerfile: docker/npm.docker
    volumes:
      - ./:/var/www/kpi
      - /var/www/kpi/admin/node_modules
    ports:
      - "4200:4200"
    networks:
      - internal
    tty: true
# For xdebug
networks:
  internal:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.221.0/28
P.S. There is an open issue:
https://github.com/docker/for-win/issues/5530
P.P.S. We needed to update Docker from 2.2.0.0 to 2.2.0.3; it seems that fixed it.
I have a separate container for syncing my folder:
app:
  image: httpd:2.4.38
  volumes:
    - ./:/var/www/html
  command: "echo true"
I just use the basic Apache image, though you could really use anything. Then, in my actual containers, I use the following volumes_from key:
awesome.scot:
  build: ./build/httpd
  links:
    - php
  ports:
    - 80:80
    - 443:443
  volumes_from:
    - app
php:
  build: ./build/php
  ports:
    - 9000
    - 9001
  volumes_from:
    - app
  links:
    - mariadb
    - mail
  environment:
    APPLICATION_ENV: 'development'
I've never had an issue using this setup; files always sync fast, and I have tested on both Mac OS X and MS Windows.
If you're interested, here is my full LAMP stack on GitHub: https://github.com/delboy1978uk/lamp
I have had the same issue on Windows 10 since 31st Jan.
I commented out a line in PhpStorm and checked it in the container using vim.
The changes were not there.
If I run docker-compose down and up, the changes appear in the container.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
Nothing has changed in my docker-compose.yml since 2018.
Not sure if my title is accurate, but here's my issue. I am running a basic Laravel site on Docker and cannot get the site itself to connect to the PostgreSQL service. I will post my docker-compose.yml below. When I run php artisan migrate I get no errors and it all works. I can even use my Postico PostgreSQL client to connect to the DB and run queries. But when I try to connect to the DB from the site, it errors out saying this:
SQLSTATE[08006] [7] could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?
Here are my PostgreSQL client settings (which DO work):
Host: 127.0.0.1
Port: 5433
User: homestead
Password: homestead
Database: homestead
I have been messing around with different settings and things, so here is my docker-compose.yml, although I'm sure there are things in there I don't need:
version: '2'
services:
  php:
    image: jguyomard/laravel-php:7.2
    build:
      context: .
      dockerfile: infrastructure/php/Dockerfile
    volumes:
      - ./:/var/www/
      - $HOME/.composer/:$HOME/.composer/
    networks:
      - default
    links:
      - postgresql
      - redis
  nginx:
    image: jguyomard/laravel-nginx:1.13
    build:
      context: .
      dockerfile: infrastructure/nginx/Dockerfile
    ports:
      - 81:80
    networks:
      - default
    links:
      - postgresql
      - redis
  postgresql:
    image: postgres:9.6-alpine
    volumes:
      - pgsqldata:/var/lib/postgresql/data
    environment:
      - "POSTGRES_DB=homestead"
      - "POSTGRES_USER=homestead"
      - "POSTGRES_PASSWORD=homestead"
    ports:
      - "5433:5432"
    networks:
      - default
  redis:
    image: redis:4.0-alpine
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
    networks:
      - default
  # elastic:
  #   image: elasticsearch:5.5-alpine
  #   ports:
  #     - "9200:9200"
volumes:
  pgsqldata:
networks:
  default:
Any thoughts on why the site can't connect to the DB?
My docker network ls output:
NETWORK ID     NAME             DRIVER    SCOPE
2bf85424f466   bridge           bridge    local
c29d413f768e   host             host      local
0bdf9db30cd8   none             null      local
f3d9cb028ae3   my-app_default   bridge    local
The error message asks Is the server running on host "127.0.0.1", but in your case PostgreSQL is running in a different Docker container, which is not 127.0.0.1 relative to the php app. So change the server host to postgresql inside your php application.
As for the modified error, it occurs because you used port 5433 inside the php application. That is the host machine's port, intended for use outside the Docker network (from the host machine, which is why your Postico PostgreSQL client worked). The port to use inside the Docker network is 5432, so change the server port to 5432 inside your php application.
You have also made the compose file more complex than necessary by attaching each service to the default network explicitly. (You can follow this link for more details.) Unless you have a requirement for that, you don't need it, as docker-compose deploys all containers in a single network by default.
You also don't need to use links; they are deprecated. When multiple containers are defined in one docker-compose.yml file, they are automatically deployed in the same network.
So this simplified compose file is recommended:
version: '2'
services:
  php:
    image: jguyomard/laravel-php:7.2
    build:
      context: .
      dockerfile: infrastructure/php/Dockerfile
    volumes:
      - ./:/var/www/
      - $HOME/.composer/:$HOME/.composer/
  nginx:
    image: jguyomard/laravel-nginx:1.13
    build:
      context: .
      dockerfile: infrastructure/nginx/Dockerfile
    ports:
      - 81:80
  postgresql:
    image: postgres:9.6-alpine
    volumes:
      - pgsqldata:/var/lib/postgresql/data
    environment:
      - "POSTGRES_DB=homestead"
      - "POSTGRES_USER=homestead"
      - "POSTGRES_PASSWORD=homestead"
    ports:
      - "5433:5432"
  redis:
    image: redis:4.0-alpine
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
  # elastic:
  #   image: elasticsearch:5.5-alpine
  #   ports:
  #     - "9200:9200"
volumes:
  pgsqldata:
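With this compose file, the Laravel side should point at the service name and the container-internal port. A hedged sketch of the relevant .env values (standard Laravel variable names; credentials taken from the compose file above):

DB_CONNECTION=pgsql
# service name, not 127.0.0.1:
DB_HOST=postgresql
# container-internal port, not the published 5433:
DB_PORT=5432
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=homestead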
So, I've spent about two days and can't make Xdebug work in the container (docker-compose config) on the remote server.
And I always get E: Time-out connecting to the client. :-(
Dev machine:
Win10 with dynamic IP
All the code syncs to the remote server in real time (via the "deployment" feature)
PhpStorm is configured to listen on port 9001, "Listen for PHP Debug Connections" is on.
No servers etc. configured; just a zero-config debugging session.
Remote server:
xdebug config:
xdebug.remote_enable=1
xdebug.remote_port=9001
xdebug.remote_log="/var/www/xdebug.log"
xdebug.remote_connect_back=1
php-fpm, jwilder/nginx-proxy, letsencrypt-companion, nginx, and jeroenpeeters/docker-ssh are running via docker-compose.
Here is a simplified version of docker-compose.yml:
version: "2"
networks:
default:
external:
name: nginx-proxy
services:
nginx-proxy:
container_name: nginx-proxy
image: jwilder/nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./secured/certs_letsencrypt:/etc/nginx/certs:ro
- /etc/nginx/vhost.d
- /usr/share/nginx/html
restart: always
letsencrypt-companion:
container_name: letsencrypt-companion
image: jrcs/letsencrypt-nginx-proxy-companion
volumes:
- ./secured/certs_letsencrypt:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes_from:
- nginx-proxy
restart: always
my_project_nginx:
container_name: my_project_nginx
build:
context: ./containers/nginx_ssl
args:
appcontainer: my_project_app
domain: example.com
depends_on:
- nginx-proxy
links:
- my_project_app
- nginx-proxy
environment:
- VIRTUAL_HOST=example.com
- LETSENCRYPT_HOST=example.com
- LETSENCRYPT_EMAIL=stepan#example.com
restart: always
my_project_app:
container_name: my_project_app
build:
context: .
dockerfile: ./containers/php7_1/Dockerfile
args:
idrsafile: ./secured/id_rsa_shared
php_memory_limit: 6G
depends_on:
- nginx-proxy
restart: always
my_project_ssh:
container_name: my_project_ssh
image: jeroenpeeters/docker-ssh
depends_on:
- my_project_app
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./secured/authorized_keys:/authorized_keys
ports:
- "2227:22"
environment:
- FILTERS={"name":["my_project_app"]}
- AUTH_MECHANISM=publicKey
- AUTHORIZED_KEYS=/authorized_keys
restart: always
The problem
And here is the xdebug log:
...
I: Checking remote connect back address.
I: Remote address found, connecting to 5.18.238.83:9001.
E: Time-out connecting to the client. :-(
I tried a lot of variants (specifying xdebug.remote_host, establishing an SSH tunnel via PuTTY, etc.) and still got no results.
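One low-level check that narrows this down: the log shows the container dialing back to the dev machine's public IP (5.18.238.83), so test whether that address and port are reachable from inside the app container at all (container name from the compose file above; assumes nc is available in the image):

docker exec -it my_project_app sh -c 'nc -zv -w 5 5.18.238.83 9001'
# A timeout here means the connect-back path is blocked (NAT/firewall in
# front of the dynamic-IP dev machine), which matches the xdebug error.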
I set up 2 projects (Admin and API) and am trying to move to Docker locally.
I can access the running web instances on both without any problems, but when the Admin tries to make a cURL request to the API, I get a cURL error:
cURL error 7: Failed to connect to localhost port 8080
This is my docker-compose.yml file:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: project-admin-memcached
redis:
image: redis:alpine
container_name: project-admin-redis
mariadb:
image: mariadb:10.1
container_name: project-admin-mariadb
working_dir: /application
volumes:
- ./Projects:/application
environment:
- MYSQL_ROOT_PASSWORD=docker
- MYSQL_DATABASE=db_test
- MYSQL_USER=test
- MYSQL_PASSWORD=test
ports:
- "8083:3306"
# docker-compose exec webserver sh
# docker exec -it project-admin-webserver nginx -s reload
webserver:
image: nginx:alpine
container_name: project-admin-webserver
working_dir: /application
volumes:
- ./Projects/Api:/application/api
- ./Projects/Admin:/application/admin
- ./Docker/nginx:/etc/nginx/conf.d
ports:
- "8080:8080"
- "8090:8090"
# docker-compose exec php-fpm bash
php-fpm:
build: Docker/php-fpm
container_name: project-admin-php-fpm
working_dir: /application
volumes:
- ./Projects:/application
- ./Docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
I can access both projects from my browser with:
http://localhost:8080/ <= API
http://localhost:8090/ <= Admin
How can I fix this?
Inside your Docker network (created by default by Compose), you have to use the container name.
So inside a container you have to use http://webserver:8080.
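A quick way to verify, using the service names from the compose file above (assumes curl is present in the php-fpm image):

# From the container running the Admin code, address the API by service name:
docker-compose exec php-fpm curl -s -o /dev/null -w '%{http_code}\n' http://webserver:8080/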