How to "import" xdebug into existing php docker container - php

I have a docker-compose.yml with a php container that already exists inside it:
version: "2"
services:
php:
image: wodby/drupal-php:5.6-3.3.1
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
PHP_FPM_CLEAR_ENV: "no"
DB_HOST: mariadb
DB_USER: drupal
DB_PASSWORD: drupal
DB_NAME: drupal
DB_DRIVER: mysql
volumes:
- docker-sync:/var/www/html
I need to install Xdebug in the container so that I can then use it with PhpStorm.
A lot of tutorials say a Dockerfile is needed to create a PHP image with Xdebug on it. I have not used Docker for more than hosting my project locally, so I am confused about how the Dockerfile comes into play and whether you can use a Dockerfile together with a docker-compose file.
Does anyone know the steps I need to take in order to add Xdebug to this existing container?
I am using the following stack: https://github.com/wodby/docker4drupal
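For reference, a minimal sketch of how the two files fit together: docker-compose builds a local Dockerfile that extends the image you already use. Treat the details as assumptions: the pecl and docker-php-ext-enable helpers must exist in the wodby image, and xdebug 2.5.5 is pinned because it is the last Xdebug release that supports PHP 5.6.

# Dockerfile (sketch) -- assumes pecl and docker-php-ext-enable are available
FROM wodby/drupal-php:5.6-3.3.1
RUN pecl install xdebug-2.5.5 \
    && docker-php-ext-enable xdebug

In docker-compose.yml, the php service then points at this Dockerfile via a build section instead of (or in addition to) image, and docker-compose up --build rebuilds it:

php:
  build:
    context: .
  # environment, volumes, etc. stay as before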


docker-compose overrides directories in the container

Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single container environment since it just connects to the database which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we had a dedicated RUN composer install line in the Dockerfile. We had to run composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that if we overrode the command via docker-compose, the application would deploy to our server and run seamlessly, as it should, but if we changed the default command in the Dockerfile, it would suffer roughly a minute of downtime every time we deployed.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via Composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in the docker-compose.yml which overrode (shadowed) the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why running docker-compose up --build does not install the PHP dependencies, while including composer install in the default command does, and why overriding the command via docker-compose.yml is the better option. Furthermore, how do volumes come into all this, and are they the real reason for all the hassle?
Our current docker file looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your composer install inside the Dockerfile works fine, and the resulting image has vendor/ etc.
But later you create a container from that image, and that container runs with the whole directory replaced by a host-directory mount:
volumes:
  - .:/srv/app
So your Docker image has both your files and the installed vendor files, but then you replace the project directory with one from your host which does not have the vendor files, and the final result looks as if the build was never done.
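A quick way to see the shadowing, using the image name from the compose file above:

# vendor/ is baked into the image
docker run --rm image-name ls /srv/app/vendor
# but with the host directory mounted over /srv/app, it is hidden
docker run --rm -v "$PWD":/srv/app image-name ls /srv/app/vendor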
My advice would be:
don't run build steps such as composer install as the container's command; keep them in the Dockerfile
mount individual folders in your container, i.e. not .:/srv/app but ./src:/srv/app/src, etc. (see the sketch below)
or map the whole folder, but copy the vendor files from the image/container to your host
or use some third-party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
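A sketch of the individual-folder option; ./src and ./public are hypothetical paths, so list only the directories you actually edit on the host:

app:
  build: .
  volumes:
    - ./src:/srv/app/src
    - ./public:/srv/app/public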

Docker LAMP with default hub images or one custom single Stack

There are a few tutorials on the internet; some use docker-compose and therefore combine e.g. PHP, MariaDB, and phpMyAdmin, all from the original projects on hub.docker.com. This method is pretty fast and easy to configure. With one YAML file, the whole LAMP server basically runs as required.
version: '3'
services:
  php-apache:
    image: php:7.3.2-apache-stretch
    ports:
      - 80:80
    volumes:
      - D:\test\src:/var/www/html
    links:
      - 'mariadb'
  mariadb:
    image: mariadb:10.1
    volumes:
      - mariadb:/var/lib/mysql
    environment:
      TZ: "Europe/Rome"
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_ROOT_PASSWORD: "rootpwd"
      MYSQL_USER: 'testuser'
      MYSQL_PASSWORD: 'testpassword'
      MYSQL_DATABASE: 'testdb'
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      PMA_HOST: "mariadb"
    restart: always
    ports:
      - 8181:80
    volumes:
      - /sessions
    links:
      - 'mariadb'
volumes:
  mariadb:
Source (edited)
Others create one Dockerfile and put all the apt-get commands in that single file, like this one from fauria/docker-lamp.
FROM ubuntu:16.04
MAINTAINER Fer Uria <fauria@gmail.com>
LABEL Description="Cutting-edge LAMP stack, based on Ubuntu 16.04 LTS. Includes .htaccess support and popular PHP7 features, including composer and mail() function." \
License="Apache License 2.0" \
Usage="docker run -d -p [HOST WWW PORT NUMBER]:80 -p [HOST DB PORT NUMBER]:3306 -v [HOST WWW DOCUMENT ROOT]:/var/www/html -v [HOST DB DOCUMENT ROOT]:/var/lib/mysql fauria/lamp" \
Version="1.0"
RUN apt-get update
RUN apt-get upgrade -y
COPY debconf.selections /tmp/
RUN debconf-set-selections /tmp/debconf.selections
RUN apt-get install -y zip unzip
RUN apt-get install -y \
php7.0 \ ...
While the first one seems to be a lot simpler, it also carries a few redundancies (Debian for PHP, Ubuntu for MariaDB, Alpine for phpMyAdmin).
So does Docker now run three servers, one for PHP, one for the database, and one for phpMyAdmin? It feels like a waste of resources, doesn't it?
Which method is the typical convention?
According to the official documentation: "It is generally recommended that you separate areas of concern by using one service per container", which makes each service easier to maintain, scale, or update without affecting the others.
In Docker these instances are called services, so Docker Compose runs each component as a service.
You can also read the docs on running multiple services in a container if you need to know more about that approach.
Regarding resource usage, it won't waste as much as you think. This is one of the advantages of a Docker container over a virtual machine: containers share the host's kernel and do not dedicate specific resources the way VMs do, since VMs each run a whole separate operating system.
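For illustration, a minimal sketch of the multi-service pattern those docs describe, with supervisord as the init process (the program names and paths are assumptions and vary by distribution):

[supervisord]
nodaemon=true

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:mysqld]
command=/usr/sbin/mysqld --user=mysql

The image's CMD then starts supervisord, which keeps both processes running in the foreground of one container.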

update to php7.2 in docker wp-cli container

I'm new to Docker. I have a WordPress stack for local dev that provides wp-cli via a separate container. The WP container has PHP 7.2.4, but the wp-cli container appears to have PHP 5.6.27.
What's the best approach to updating php for wp-cli?
remove wp-cli container, install wp-cli, save as a new container
use a different container for wp-cli
update php inside the existing container
?
snippets from my docker-compose file:
wordpress:
  container_name: wordpress
  depends_on:
    - db
  image: jburger/wordpress-xdebug
  volumes:
    - "./public:/var/www/html"
wpcli:
  command: "--info"
  container_name: wpcli
  entrypoint: wp
  image: tatemz/wp-cli
  links:
    - db:mysql
  volumes:
You're pulling in an image which hasn't been freshly built/pushed in a year.
The Dockerfile of that image is exactly what you need: if you clone the original repo into a folder, point the build parameter in your docker-compose file at that folder (see the sketch below), and then run docker-compose build, you'll have a fresh image.
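A sketch of that change, assuming you cloned the image's repo into a wp-cli folder next to your docker-compose.yml:

wpcli:
  build: ./wp-cli
  container_name: wpcli
  entrypoint: wp
  command: "--info"

Then docker-compose build wpcli produces an image from the current state of that Dockerfile instead of the stale one on Docker Hub.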
The ideal setup is to have a 'workspace' container that contains all of the tools needed to interact with your project; for a reference of what that looks like, see laradock (it can be a bit overwhelming).

Docker with Symfony and MongoDB

I want to make a project in PHP (Symfony) and MongoDB.
I created the file docker-compose.yml:
web_server:
  build: .
  ports:
    - 5000:5000
  links:
    - mongo
mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
When I try to run Docker Compose in PhpStorm, I receive:
Removing old containers...
(Re)building services...
mongo uses an image, skipping
Building web_server
Cannot locate specified Dockerfile: Dockerfile
Starting...
Building web_server
Cannot locate specified Dockerfile: Dockerfile
No containers created for service: web_server
No containers created for service: mongo
Failed to deploy 'Compose: docker-compose.yml': Some services/containers not started
I don't know what I should do, what the Dockerfile should contain, or how the containers get created.
Thanks!
Done!
I used the Dockerfile from https://github.com/lepiaf/docker-symfony2 (with all its files) together with the docker-compose.yml above.
Thanks!
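For anyone hitting the same error: build: . tells Compose to look for a file named Dockerfile next to docker-compose.yml, and the message means that file was missing. A minimal sketch that would satisfy it (this is not the lepiaf setup; PHP's built-in server and the web/ document root are assumptions):

FROM php:5.6-cli
WORKDIR /app
COPY . /app
EXPOSE 5000
CMD ["php", "-S", "0.0.0.0:5000", "-t", "web"]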

File changes not reflected in Docker image after rebuild

I'm trying to set up two Docker images for my PHP web application (php-fpm), reverse proxied by NGINX. Ideally I would like all the files of the web application to be copied into the php-fpm based image and exposed as a volume. This way both containers (web and app) can access the files, with NGINX serving the static files and php-fpm interpreting the PHP files.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - app
    links:
      - app
  app:
    build: .
    volumes:
      - /app
Dockerfile:
FROM php:fpm
COPY . /app
WORKDIR /app
The above setup works as expected. However, when I make any change to the files and then do
docker-compose up --build
the new files are not picked up in the resulting images. This is despite the following message indicating that the image is indeed being rebuilt:
Building app
Step 1 : FROM php:fpm
---> cb4faea80358
Step 2 : COPY . /app
---> Using cache
---> 660ab4731bec
Step 3 : WORKDIR /app
---> Using cache
---> d5b2e4fa97f2
Successfully built d5b2e4fa97f2
Only removing all the old images does the trick.
Any idea what could cause this?
$ docker --version
Docker version 1.11.2, build b9f10c9
$ docker-compose --version
docker-compose version 1.7.1, build 0a9ab35
The 'volumes_from' option mounts volumes from one container to another. The important word there is container, not image. So when you rebuild an image, the previous container is still running. If you stop and restart that container, or even just stop it, the other containers are still using those old mount points. If you stop, remove the old app container, and start a new one, the old volume mounts will still persist to the now deleted container.
The better way to solve this in your situation is to switch to named volumes and set up a utility container to update this volume.
version: '2'
volumes:
  app-data:
    driver: local
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
      - app-data:/app
  app:
    build: .
    volumes:
      - app-data:/app
A utility container to update your app-data volume could look something like:
docker run --rm -it \
-v `pwd`/new-app:/source -v app-data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
As BMitch points out, image updates don't automatically filter down into containers, so your workflow for updates needs to be revisited. I've just gone through the process of building a container which includes NGINX and PHP-FPM. I found that, for me, the best way was to include NGINX and PHP in a single container, both managed by supervisord.
I then have scripts in the image that allow you to update your code from a git repo. This makes the whole process really easy.
#Create new container from image
docker run -d --name=your_website -p 80:80 -p 443:443 camw/centos-nginx-php
#git clone to get website code from git
docker exec -ti your_website get https://www.github.com/user/your_repo.git
#restart container so that nginx config changes take effect
docker restart your_website
#Then to update, after committing changes to git, you'll call
docker exec -ti your_website update
#restart container if there are nginx config changes
docker restart your_website
My container can be found at https://hub.docker.com/r/camw/centos-nginx-php/
The dockerfile and associated build files are available at https://github.com/CamW/centos-nginx-php
If you want to give it a try, just fork https://github.com/CamW/centos-nginx-php-demo, change the conf/nginx.conf file as indicated in the readme and include your code.
Doing it this way, you don't need to deal with volumes at all; everything is in your container, which I like.
