I have a docker-compose setup with two containers: one is the PHP/Apache service and the other is the database (MySQL).
Here is my docker-compose.yml
version: '2'
services:
  app:
    depends_on:
      - db
    build: .
    image: app
    ports:
      - "80:80"
    restart: always
    links:
      - db:mysql
      - db:db
    volumes:
      - ../:/var/www/html/
  db:
    image: mysql:latest
    restart: unless-stopped
    volumes:
      - ./db_data:/var/lib/mysql
      - ./databaseDumps:/tmp/databaseDumps
    environment:
      MYSQL_USER: "myApp"
      MYSQL_PASSWORD: "root"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_DATABASE: "myAppDatabase"
      MYSQL_ROOT_HOST: "%"
And here is my app Dockerfile:
FROM php:7-apache
COPY prefilled_files/000-default.conf /etc/apache2/sites-available/000-default.conf
RUN apt-get -qq update
RUN apt-get -qq -y install libpng-dev curl git nano vim zip unzip mysql-client libmysqlclient-dev
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
RUN npm install -g bower
RUN npm install -g gulp
RUN docker-php-ext-install pdo pdo_mysql gd mysqli
EXPOSE 8080 80
The main problem is: the MySQL database and the app container are working well, and I can connect to the MySQL database from the app container via
root@app:~# mysql -h db -u myApp -p
BUT
if I try to execute composer install on my Symfony project, the following error message appears:
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[HY000] [2002] Connection refused
[PDOException]
SQLSTATE[HY000] [2002] Connection refused
Here are the parameters of my app:
parameters:
    database_driver: pdo_mysql
    database_host: db
    database_port: 3306
    database_name: myAppDatabase
    database_user: myApp
    database_password: root
Why is this happening?
I've read through several forums and sites, but nothing helped.
I tried the following solutions and nothing helped:
clearing the Symfony cache ;)
exposing 3306 on the mysql container
linking the app and mysql containers together
removing all images and containers from my computer and reinstalling everything
I tried on Windows and on Ubuntu 17.04. Same behavior.
connecting to a docker-compose mysql container denies access but docker running same image does not
docker-compose wordpress mysql connection refused
UPDATE:
I tried to access my database with a little PHP script from https://gist.github.com/chales/11359952 . The script can actually connect to the database, so the problem has to be with my composer install, with Doctrine, or with my Symfony configuration.
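A quicker variant of the same check is a one-off PDO test run inside the app container. This is only a sketch, reusing the credentials from the compose file above; adjust the container name to whatever docker ps shows for the app service:
docker exec -it app php -r '
    // same DSN Doctrine builds from parameters.yml: host "db", port 3306
    $pdo = new PDO("mysql:host=db;port=3306;dbname=myAppDatabase", "myApp", "root");
    echo "connected\n";
'
If this prints "connected" while composer install still fails, the problem is in how Symfony/Doctrine reads the parameters rather than in the container networking.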
tl;dr
2 Docker containers via docker-compose.
I can access the database via the mysql command on the app container, but not via composer install. Why?
I think it's the way it's trying to find the container via the host name 'db'. I've found that the machine running Docker doesn't seem to pick up the names of the guest containers (there may be some DNS config you could change), but the way I've worked around it is to find the IP address of the MySQL container. I have docker_db_1 as the container name for MySQL, so I run (assuming *nix):
docker inspect docker_db_1 | grep IPAddress
Which in my case gives me
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.18.0.2",
And I use this IP address (172.18.0.2) to connect to, rather than db.
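If grepping the raw JSON feels brittle, the same address can be pulled out with an inspect format string (a sketch, assuming the container is still named docker_db_1):
# print only the IP address(es) of the networks the container is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' docker_db_1
Keep in mind that this address can change whenever the container is recreated, so the service name (db) is still preferable whenever container DNS works.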
Related
I created a Docker container from the mysql:5.7 image.
sudo docker run --name mysqltest -e MYSQL_ROOT_PASSWORD=password -v mysql:/var/lib/mysql -d mysql:5.7
And I created a PHP container that includes phpunit:
sudo docker run --rm -v $(pwd):/app -w /app phpunit/phpunit:8 phpunit --testdox file.php
In file.php I'm trying to connect to the MySQL container, using the container's IP (obtained via the command below) as the host:
sudo docker inspect mysqltest
but I still get "Connection refused", although I can connect to the MySQL container directly via:
sudo docker exec -it mysqltest mysql -ppassword
Please help me, I'm really confused!
The connection is refused because you are not exposing the MySQL ports, so it is not seen by the host or by the other containers. The proper way of handling these cases is by using docker-compose and custom Docker networks; however, the following changes can act as a quick fix:
sudo docker run --name mysqltest -p 3306:3306 --network=host -e MYSQL_ROOT_PASSWORD=password -v mysql:/var/lib/mysql -d mysql:5.7
Followed by:
sudo docker run --network=host --rm -v $(pwd):/app -w /app phpunit/phpunit:8 phpunit --testdox file.php
-p 3306:3306 tells Docker to map the default MySQL port inside the container to port 3306 of the host. --network=host directs Docker to use your local machine's network stack. You can verify that MySQL is accessible by trying to connect to it from your machine on port 3306 with any of its client applications.
Note that you need to update your application configurations to use the MySQL database on localhost.
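For example, a quick check from the host could look like this (a sketch, reusing the root password from the run command above):
# the database should now answer on the host's loopback interface
mysql -h 127.0.0.1 -P 3306 -uroot -ppassword -e "SELECT VERSION();"
As an alternative to host networking, a user-defined bridge network keeps name-based resolution between the containers; another sketch, where testnet is an arbitrary name:
docker network create testnet
docker run --name mysqltest --network testnet -e MYSQL_ROOT_PASSWORD=password -v mysql:/var/lib/mysql -d mysql:5.7
docker run --network testnet --rm -v $(pwd):/app -w /app phpunit/phpunit:8 phpunit --testdox file.php
# in file.php the host would then simply be "mysqltest" instead of an IP address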
Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single container environment since it just connects to the database which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we had a specific RUN composer install line in the Dockerfile. We had to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that if we overrode the command via docker-compose, the application would run seamlessly when deployed to our server, as it should, but when we changed the default command in the Dockerfile it suffered roughly a minute of downtime every time we deployed.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via Composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why running docker-compose would not install the PHP dependencies but including the composer install in the default command would, and why adding the composer install to the docker-compose.yml is better. Furthermore, how do volumes come into all this, and are they the real reason for all the hassle?
Our current docker file looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your first composer install within the Dockerfile works fine, and your resulting image has vendor/ etc.
But later you create a container from that image, and that container is executed with the whole directory replaced by a host dir mount:
volumes:
  - .:/srv/app
So, your Docker image has both your files and the installed vendor files, but then you replace the project directory with one on your host which does not have the vendor files, and the final result looks like the build was never done.
My advice would be:
don't add a second build command to the Dockerfile
mount individual folders in your container, i.e. not .:/srv/app, but ./src:/srv/app/src, etc.
or map the whole folder, but copy the vendor files from the image/container to your host (see the sketch after this list)
or use some 3rd-party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
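For the "copy the vendor files out" option, a rough sketch; the image tag image-name and the /srv/app path are the ones from the Dockerfile and docker-compose.yml above:
# build the image so that composer install runs during the build
docker-compose build app
# create (but don't start) a throwaway container from the built image, without the bind mount
docker create --name vendor-tmp image-name
# copy the baked-in vendor/ directory out of it into the host project dir, then clean up
docker cp vendor-tmp:/srv/app/vendor ./vendor
docker rm vendor-tmp
With vendor/ present on the host, the .:/srv/app bind mount no longer hides it inside the container.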
There are a few tutorials on the internet; some use docker-compose and therefore combine e.g. PHP, MariaDB, and phpMyAdmin, all from the original projects on hub.docker.com. This method is pretty fast and easy to configure. With one yml file, the whole LAMP server basically runs as required.
version: '3'
services:
  php-apache:
    image: php:7.3.2-apache-stretch
    ports:
      - 80:80
    volumes:
      - D:\test\src:/var/www/html
    links:
      - 'mariadb'
  mariadb:
    image: mariadb:10.1
    volumes:
      - mariadb:/var/lib/mysql
    environment:
      TZ: "Europe/Rome"
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_ROOT_PASSWORD: "rootpwd"
      MYSQL_USER: 'testuser'
      MYSQL_PASSWORD: 'testpassword'
      MYSQL_DATABASE: 'testdb'
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      PMA_HOST: "mariadb"
    restart: always
    ports:
      - 8181:80
    volumes:
      - /sessions
    links:
      - 'mariadb'
volumes:
  mariadb:
Source (edited)
Others create a single Dockerfile and put all apt-get commands within this file, like this one from fauria/docker-lamp.
FROM ubuntu:16.04
MAINTAINER Fer Uria <fauria@gmail.com>
LABEL Description="Cutting-edge LAMP stack, based on Ubuntu 16.04 LTS. Includes .htaccess support and popular PHP7 features, including composer and mail() function." \
License="Apache License 2.0" \
Usage="docker run -d -p [HOST WWW PORT NUMBER]:80 -p [HOST DB PORT NUMBER]:3306 -v [HOST WWW DOCUMENT ROOT]:/var/www/html -v [HOST DB DOCUMENT ROOT]:/var/lib/mysql fauria/lamp" \
Version="1.0"
RUN apt-get update
RUN apt-get upgrade -y
COPY debconf.selections /tmp/
RUN debconf-set-selections /tmp/debconf.selections
RUN apt-get install -y zip unzip
RUN apt-get install -y \
php7.0 \ ...
While the first one seems to be a lot simpler, it has a few redundancies (Debian for PHP, Ubuntu for MariaDB, Alpine-based PHP for phpMyAdmin).
So does Docker now run three servers? One for PHP, one for the database, and one for phpMyAdmin? That feels like a waste of resources, doesn't it?
Which method is the typical convention?
According to the official documentation, "It is generally recommended that you separate areas of concern by using one service per container", which makes each service easier to maintain, scale, or update without affecting the others.
In Docker these instances are called services, so docker-compose runs each component as a service.
You can also read more about running multiple services in a container if you need to know more about it.
Regarding resource usage, it won't waste as much as you think. This is one of the advantages of a Docker container compared to a virtual machine: the container uses the host's kernel and does not dedicate specific resources the way VMs do, since VMs run a whole separate operating system.
I have three Docker containers running on macOS Sierra, namely web, mysql and mongo, and have linked both mongo and mysql into web, which is essentially an Ubuntu Xenial base with Apache and PHP added.
I am currently mounting my local Symfony project into the web container, and that seems to be working fine, but when I try to interact with the DB in any way, I get:
An exception occured in driver: SQLSTATE[HY000] [2002] Connection
refused
I've tried almost every combination of parameter values, but keep getting the same result.
I suspect it might have something to do with the way that I am linking the containers?
I'm in the process of learning Docker, so please excuse my limited knowledge.
Thanks!
Web dockerfile:
FROM ubuntu:xenial
MAINTAINER Some Guy <someguy@domain.com>
RUN apt-get update && apt-get install -y \
apache2 \
vim \
php \
php-common \
php-cli \
php-curl \
php-mysql \
php-mongodb \
libapache2-mod-php \
php-gd
RUN mkdir -p /var/www/symfony.local/public_html
RUN chown -R $USER:$USER /var/www/symfony.local/public_html
RUN chmod -R 755 /var/www
COPY config/php/php.ini /usr/local/etc/php/
COPY config/apache/sites-available/*.conf /etc/apache2/sites-available/
RUN a2enmod rewrite
RUN a2dissite 000-default.conf
RUN a2ensite symfony.local.conf
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Mysql dockerfile:
FROM mysql:5.7
MAINTAINER Some Guy <someguy@domain.com>
# Set the root users password
ENV MYSQL_ROOT_PASSWORD password
# Copy over the DB dump to be run upon creation
COPY sql/ /docker-entrypoint-initdb.d
# Copy over the custom mysql config file
COPY config/ /etc/mysql/conf.d
EXPOSE 3306
Run commands:
docker run --name mongo -d mongo #Im making use of the official Mongo image
docker run --name mysql -v /usr/local/var/mysql:/var/lib/mysql -d someguy/local:mysql
docker run --name web -d -p 80:80 --link mysql:mysql --link mongo:mongo -v ~/Sites/symfony.local/:/var/www/symfony.local/public_html/ someguy/local:web
Symfony parameters.yml file:
parameters:
    database_host: mysql
    database_port: 3306
    database_name: gorilla
    database_user: root
    database_password: password
UPDATE:
So I've moved over to using docker-compose, but am still receiving the same error.
docker-compose.yml file
version: "2"
services:
web:
build: ./web
ports:
- "80:80"
volumes:
- ~/Sites/symfony.local/:/var/www/symfony.local/public_html/
depends_on:
- db
- mongo
mongo:
image: mongo:latest
mysql:
image: mysql:latest
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: password
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
This means it has nothing to do with your network per se - the links are just fine.
What you are missing is how the user has been created, or whether it has been created at all (see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L122 ); the user is created there without a host limitation per se.
The question in your case is what is inside your "sql/" folder - those scripts are executed during the entrypoint.
Be sure to never call exit in those scripts; doing so will interrupt the main script, see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L151
Check the docker logs of your mysql container to ensure the scripts did not print any warnings, using https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh as a reference.
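A quick way to check both points from the host; just a sketch, assuming the MySQL container shows up as mysql in docker ps and uses the root password from your setup:
# look for errors or warnings emitted while the init scripts in /docker-entrypoint-initdb.d ran
docker logs mysql 2>&1 | grep -iE "error|warning"
# verify that the expected users exist and which hosts they may connect from
docker exec -it mysql mysql -uroot -ppassword -e "SELECT user, host FROM mysql.user;"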
And last but not least, please use docker-compose. If you have issues with timing (MySQL starting too slowly, so your web container freaks out), use a "wait for mysql" entrypoint in web:
#!/bin/bash
# this script only exists to wait for the database before we fire up tomcat / standalone
RET=1
echo "Waiting for database"
while [[ RET -ne 0 ]]; do
    sleep 1
    if [ -z "${db_password}" ]; then
        mysql -h "$db_host" -u "$db_user" -e "select 1" > /dev/null 2>&1; RET=$?
    else
        mysql -h "$db_host" -u "$db_user" -p"$db_password" -e "select 1" > /dev/null 2>&1; RET=$?
    fi
done
Set $db_host, $db_user and $db_password accordingly, using ENV or whatever suits you.
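One hypothetical way to wire this up, assuming the wait loop above is saved as /usr/local/bin/wait-for-db.sh inside the web image (the file names and the Apache command here are illustrative, not part of the original answer):
#!/bin/bash
# docker-entrypoint.sh for the web image: wait for MySQL, then start Apache in the foreground
set -e
# db_host, db_user and db_password are expected to come from the environment section of docker-compose.yml
/usr/local/bin/wait-for-db.sh
exec /usr/sbin/apache2ctl -D FOREGROUND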
I need to install cURL compiled with OpenSSL and zlib via a Dockerfile for a Debian image with Apache and PHP 5.6. I tried many approaches, but due to the fact that I don't have a strong understanding of Linux, I failed. I use docker-compose to bring up my container. My docker-compose.yaml looks like:
version: '2'
services:
  web:
    build: .
    command: php -S 0.0.0.0:80 -t /var/www/html/
    ports:
      - "80:80"
    depends_on:
      - db
    volumes:
      - $PWD/www/project:/var/www/html
    container_name: "project-web-server"
  db:
    image: mysql:latest
    ports:
      - "192.168.99.100:3306:3306"
    container_name: "project-db"
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbpass
As the build script I use this Dockerfile:
FROM php:5-fpm
RUN apt-get update && apt-get install -y \
apt-utils \
curl libcurl3 libcurl3-dev php5-curl php5-mcrypt
RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . (which is successful) I don't get an image with cURL+SSL+zlib. After $ docker-compose up I have a working container with Apache+MySQL and can run my project, but the libraries I need are not there.
Could you explain how to properly add these extensions to my Apache in the container? I even tried to create my own Dockerfile and build Apache + PHP + the needed libs there, but with no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer the source code you want to execute from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, finishing by launching a process (typically a server).
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
As per the URL you reference, a more complete Dockerfile would look like this:
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files/dirs (typically the location of your source code, plus things like data and/or config files) from the $PWD where you execute the command into the specified location inside the container. In Unix a period (.) indicates the current directory, so the command above
COPY . /usr/src/myapp
will copy all files and directories in the current directory of the host computer (the one you are using when typing the docker build command) into the container directory called /usr/src/myapp.
The WORKDIR changes the working directory inside the container to the supplied path.
Finally, the CMD launches the server, which hums along once you launch the container.
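For completeness, an image built from that Dockerfile would typically be built and run along these lines (the tag my-php-app is just an illustrative name):
# build the image from the Dockerfile in the current directory
docker build -t my-php-app .
# run the script defined by CMD; --rm removes the container when it exits
docker run --rm --name my-running-app my-php-app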