I am new to Docker and I am trying to connect an existing MySQL container to my Laravel application through docker-compose.
The docker-compose.yml file for MySQL looks like this:
version: '3'
services:
  web:
    container_name: ${APP_NAME}_web
    build:
      context: ./docker/web
    ports:
      - "9000:80"
    volumes:
      - ./:/var/www/app
    depends_on:
      - db
  db:
    container_name: ${APP_NAME}_db
    image: mysql:5.7
    ports:
      - "3307:3306"
    restart: always
    volumes:
      - dbdata:/var/lib/mysql/
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel_docker_db
volumes:
  dbdata:
    driver: local
    driver_opts:
      type: none
      device: /storage/docker/laravel_mysql_data
      o: bind
This compose file pulls a MySQL image and creates a brand-new container, but I just want to connect my application to the MySQL container I already have.
My existing MySQL container was created with a specific IP on a bridge network and is running on docker-machine.
Can I point the db service definition at that existing MySQL container's configuration instead?
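Since the goal is to reuse a container that already exists rather than create a new one, one possible approach (a sketch only, not from the answers below) is to drop the db service entirely and attach the web service to the existing bridge network as an external network. Here mysql_bridge is a placeholder for the actual network name:
version: '3'
services:
  web:
    container_name: ${APP_NAME}_web
    build:
      context: ./docker/web
    ports:
      - "9000:80"
    volumes:
      - ./:/var/www/app
    networks:
      - mysql_bridge           # join the pre-existing bridge network
networks:
  mysql_bridge:
    external: true             # reuse the existing network; do not create a new one
With this, DB_HOST in Laravel's .env can be set to the existing container's name (or its fixed IP) and DB_PORT to 3306, since the connection stays inside the Docker network. The compose project must of course run against the same docker-machine host as the existing container.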
Try this docker-compose file:
version: '3.5'
services:
  db:
    container_name: myapp_db
    image: mysql:5.7
    ports:
      - "3307:3306"
    restart: always
    volumes:
      - dbdata:/var/lib/mysql/
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel_docker_db
    networks:
      - myapp-network
networks:
  myapp-network:
    driver: bridge
volumes:
  dbdata:          # named volumes must be declared at the top level
    driver: local
There are two cases:
1. If your Laravel app also runs in Docker (on the same Compose network), just add these rows to the .env file:
DB_HOST=myapp_db
DB_PORT=3306
DB_DATABASE=laravel_docker_db
DB_USERNAME=root
DB_PASSWORD=root
2. If you don't run Laravel in Docker, change DB_HOST to localhost (or 127.0.0.1) and connect through the published host port, as shown below.
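A sketch of the .env for that second case, assuming the "3307:3306" mapping from the compose file above (the host-side port is 3307, not 3306):
DB_HOST=127.0.0.1
DB_PORT=3307
DB_DATABASE=laravel_docker_db
DB_USERNAME=root
DB_PASSWORD=root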
In the .env file:
DB_HOST=db
DB_PORT=3306
DB_DATABASE=<your database name>
DB_USERNAME=root
DB_PASSWORD=root
I was trying to create a database for my app using the command:
php bin/console doctrine:database:create
After that I got an error:
An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
My docker-compose file looks like this:
version: '3'
services:
  php:
    container_name: symfony_php
    build:
      context: ./php
    volumes:
      - ./symfony/:/var/www/symfony/
    depends_on:
      - database
    networks:
      - symfony
  database:
    container_name: symfony_postgres
    image: postgres
    restart: always
    hostname: symfony_postgres
    environment:
      POSTGRES_DB: symfony_db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
    ports:
      - "5432:5432"
    volumes:
      - ./postgres:/var/lib/postgres
    networks:
      - symfony
  pgadmin:
    container_name: symfony_pgadmin
    image: dpage/pgadmin4
    restart: always
    ports:
      - "5555:80"
    depends_on:
      - database
    links:
      - database
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: root
    networks:
      - symfony
  nginx:
    container_name: symfony_nginx
    image: nginx:stable-alpine
    build:
      context: ./nginx
      dockerfile: Dockerfile-nginx
    volumes:
      - ./symfony:/var/www/symfony
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - database
    networks:
      - symfony
networks:
  symfony:
I'm not using any Postgres config file; following the Symfony documentation, the connection is configured with this line in .env:
DATABASE_URL="postgresql://root:root@127.0.0.1:5432/symfony_db?serverVersion=11&charset=utf8"
I ran netstat and got:
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
I think the problem is in how the Postgres container is bound to the localhost address my app's server uses. Does anyone know how I can fix it?
In docker-compose, all services can be reached by their service names.
You need to change the DB host to database, the service name from your docker-compose file:
DATABASE_URL="postgresql://root:root@database:5432/symfony_db?serverVersion=11&charset=utf8"
I have set up my Laravel application using Laravel Sail (Docker based). Everything works fine except MySQL: the server behaves differently for the web (local.mysite.com:8080) and for the CLI (e.g. php artisan migrate).
Configuration (1)
If I use the following configuration in my .env file,
...
APP_DOMAIN=local.mysite.com
...
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=mydb
DB_USERNAME=root
DB_PASSWORD=root
FORWARD_DB_PORT=3307
the web works but the CLI does not. When I run the php artisan migrate command I get SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
Configuration (2)
And if I use the following configuration,
...
APP_DOMAIN=local.mysite.com
...
DB_CONNECTION=mysql
DB_HOST=local.mysite.com
DB_PORT=3307
DB_DATABASE=mydb
DB_USERNAME=root
DB_PASSWORD=root
FORWARD_DB_PORT=3307
the CLI works fine but the web does not: MySQL refuses the connection (Connection refused) on the web side.
I've been banging my head against this with different attempts, with no luck.
What am I doing wrong?
Here's my docker-compose.yml for reference (phpMyAdmin works fine, by the way):
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  phpmyadmin:
    depends_on:
      - mysql
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - '8081:80'
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: password
    networks:
      - sail
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    extra_hosts:
      - '${APP_DOMAIN}:127.0.0.1'
    hostname: '${APP_DOMAIN}'
    domainname: '${APP_DOMAIN}'
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
  mysql:
    image: 'mysql:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sailmysql:/var/lib/mysql'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sailmysql:
    driver: local
I realized my mistake.
I should be using sail artisan migrate instead of php artisan migrate as clearly explained in the official Laravel Sail documentation.
The correct configuration therefore is:
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=mydb
DB_USERNAME=root
DB_PASSWORD=root
FORWARD_DB_PORT=3307
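The difference between the two configurations: sail artisan runs inside the application container, where the mysql service name resolves, while php artisan on the host can only reach the database through the forwarded port (DB_HOST=127.0.0.1, DB_PORT=3307), which is why configuration (2) worked only for the CLI. With the configuration above the migration is run as:
./vendor/bin/sail artisan migrate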
I am working on a Laravel project where I successfully configured docker-compose with Laravel using standard images (nginx, MySQL, PHP, etc.).
The containers work fine and data persistence works correctly. Now, however, I want to connect docker-compose to a remote database rather than to the MySQL container's database.
That could be the local database on my host system (for example in XAMPP), or a remote AWS database. In simple words, Docker should use a database outside of the containers.
I have tried different solutions using the IP address and made changes to .env and docker-compose.yml, but I haven't found one that works.
Here is my current docker-compose.yml:
version: '3'
networks:
  laravel:
services:
  site:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    ports:
      - 81:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - php
      - mysql
      - phpmyadmin
    networks:
      - laravel
  mysql:
    image: mysql:5.7.29
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - 3307:3306
    environment:
      MYSQL_DATABASE: test_db
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
    volumes:
      - ./mysql:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    container_name: phpmyadmin
    depends_on:
      - mysql
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: secret
      UPLOAD_LIMIT: 1G
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html:delegated
    networks:
      - laravel
volumes:
  mysql:
And this is what my .env looks like:
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=test_db
DB_USERNAME=root
DB_PASSWORD=secret
As mentioned above, this works fine, but I am confused about how to configure a local or remote database (such as AWS) with docker-compose in Laravel. I don't want my data to live only inside the MySQL container's volume.
I would appreciate it if someone could point out what changes are required and where to make them.
Thanks
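A hedged sketch of one common approach: remove the mysql service from the compose file and point DB_HOST at the external server instead. From inside a container, host.docker.internal reaches the host machine (on Linux it needs an extra_hosts entry), while an AWS RDS instance is reached through its endpoint name; all hostnames below are placeholders:
# .env, pointing Laravel at a database outside the Compose project
DB_CONNECTION=mysql
DB_HOST=host.docker.internal   # or e.g. mydb.xxxxxx.eu-west-1.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=test_db
DB_USERNAME=root
DB_PASSWORD=secret
# docker-compose.yml, Linux only: make host.docker.internal resolve inside the php container
php:
  extra_hosts:
    - "host.docker.internal:host-gateway"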
This is my docker-compose:
version: '3'
services:
  web:
    build: .
    image: citadel/php7.3
    ports:
      - "80"
    volumes:
      - ./src:/home/app/src:cached
    container_name: 'studentlaptops'
    restart: unless-stopped
    tty: true
    environment:
      - XDEBUG_CONFIG='remote_host=<my-host-name>'
      - VIRTUAL_HOST=studentlaptops.docker
  #MySQL Service
  db:
    image: mysql:8.0.1
    container_name: sl-db
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: studentlaptops
      MYSQL_ROOT_PASSWORD: <password>
    volumes:
      - dbdata:/var/lib/mysql/
      - ./mysql/my.cnf:/etc/mysql/my.cnf
#Volumes
volumes:
  dbdata:
    driver: local
When I run docker-compose up, everything comes up fine. When I connect via a GUI client, I can access the sl-db database in the container. When I run php artisan migrate, it migrates successfully.
However, when I hit the application in the web browser, I get SQLSTATE[2002] Connection refused...
The DB section of my .env is:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=studentlaptops
DB_USERNAME=root
DB_PASSWORD=<password>
In your case I would change
DB_HOST=127.0.0.1
to
DB_HOST=db
which is the name of your MySQL service in docker-compose.
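The reason the two behaviours differ: php artisan migrate runs on the host, where the published 3306:3306 port makes 127.0.0.1 work, but the web request is handled inside the web container, where 127.0.0.1 points at the container itself. Inside the Compose network the service name is what resolves, so the .env would look roughly like this:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=studentlaptops
DB_USERNAME=root
DB_PASSWORD=<password>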
I am using Docker Compose to run an nginx+php+mysql environment; the OS is CentOS 7.2. The question is about running multiple sub-websites on one host.
For example:
There is one host and two projects will run on it,
named project-a and project-b,
with two different docker-compose.yml files, one in project-a and one in project-b.
Question:
When I execute docker-compose up in both project-a and project-b, do I end up with one nginx+php+mysql environment or two? If two, a lot of disk space is wasted; how can I avoid that?
Addendum: docker-compose.yml
version: '2'
services:
  nginx:
    container_name: nginx
    image: nginx
    ports:
      - 80:80
      - 443:443
    links:
      - php
    env_file:
      - ./.env
    working_dir: /usr/share/nginx/html # should be `/usr/share/nginx/html/project-a` and `/usr/share/nginx/html/project-b` here?
    volumes:
      - ~/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ~/docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ~/docker/www:/usr/share/nginx/html
  php:
    container_name: php
    image: fpm
    links:
      - mariadb
      - redis
    env_file:
      - ./.env
    volumes:
      - ~/docker/www:/usr/share/nginx/html
  mariadb:
    container_name: mariadb
    image: mariadb
    env_file:
      - ./.env
    volumes:
      - ~/opt/data/mysql:/var/lib/mysql
  redis:
    container_name: redis
    image: redis
The .env file:
project-a and project-b have different DB_DATABASE and APP_KEY values; the other items are the same.
APP_ENV=local
APP_DEBUG=true
APP_KEY=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa # Here different.
DB_HOST=laravel.dev
DB_DATABASE=project-a # Here different.
DB_USERNAME=root
DB_PASSWORD=
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
REDIS_HOST=laravel.dev
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
Project files:
project-a and project-b have the same directory layout.
URLs:
project-a: aaa.xxxxxx.com
project-b: bbb.xxxxxx.com
Project folders:
project-a: ~/docker/www/project-a
project-b: ~/docker/www/project-b
Subsidiary questions:
1. Should working_dir be /usr/share/nginx/html/project-name or /usr/share/nginx/html in the docker-compose.yml file?
2. If working_dir is /usr/share/nginx/html, I think the docker-compose.yml files would have the same content in project-a and project-b, right? Are there any other items that need to be modified?
3. How can the compose files be merged into one?
Addendum 2:
project-common: docker-compose.yml
version: '2'
services:
  nginx:
    container_name: nginx
    image: nginx
  php:
    container_name: php
    image: fpm
  mariadb:
    container_name: mariadb
    image: mariadb
  redis:
    container_name: redis
    image: redis
project-a: docker-compose.yml (project-b is the same except for the project name):
version: '2'
services:
  nginx:
    external_links:
      - project-common:nginx
    ports:
      - 80:80
      - 443:443
    links:
      - php
    env_file:
      - ./.env
    working_dir: /usr/share/nginx/html/project-a
    volumes:
      - ~/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ~/docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ~/docker/www/project-a:/usr/share/nginx/html/project-a
  php:
    external_links:
      - project-common:php
    links:
      - mariadb
      - redis
    env_file:
      - ./.env
    volumes:
      - ~/docker/www:/usr/share/nginx/html/project-a
  mariadb:
    external_links:
      - project-common:mariadb
    env_file:
      - ./.env
    volumes:
      - ~/opt/data/mysql:/var/lib/mysql
  redis:
    external_links:
      - project-common:redis
You don't need to run nginx and php-fpm in different containers (if your projects share the same PHP version).
Run php-fpm as a service and, in the same container, run nginx with daemon mode off.
Then go to the nginx config and set up multiple subdomains.
Then set up data volumes / shared folders according to your subdomain setup.
That way you have a single nginx/fpm instance which serves multiple projects, plus some other database/service containers.
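For illustration, a shared stack along these lines might look roughly like the sketch below. It keeps nginx and php-fpm as separate services, as in the question's compose file, and the conf.d mount is an assumed location for the per-subdomain server blocks:
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    links:
      - php
    volumes:
      - ~/docker/www:/usr/share/nginx/html          # holds both project-a and project-b
      - ~/docker/nginx/conf.d:/etc/nginx/conf.d     # one server block per subdomain
  php:
    image: fpm
    links:
      - mariadb
      - redis
    volumes:
      - ~/docker/www:/usr/share/nginx/html
  mariadb:
    image: mariadb
    volumes:
      - ~/opt/data/mysql:/var/lib/mysql             # one MariaDB server, one database per project
  redis:
    image: redis
Each project's .env then only differs in DB_DATABASE and APP_KEY, exactly as described above.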
I agree with strangeqargo: you only need one container, on which you can mount two volumes.
If the two projects use the same PHP and MariaDB versions, just configure them to use different databases on the same container.
Generally, I agree with both strangeqargo and intuix.
However, if you require isolation between the sites, it is best to set up separate php-fpm pools and separate users for each site, with separate definitions in /etc/php5/fpm/pool.d/ using different Unix sockets.
Each site definition in nginx should then use the corresponding php-fpm pool's socket.
This gives quite a good explanation of how to do that:
https://www.digitalocean.com/community/tutorials/how-to-host-multiple-websites-securely-with-nginx-and-php-fpm-on-ubuntu-14-04