My problem is the following: I have Docker on OSX with containers running Redis, NginX, PHP 7, and Unison. Mapped into the PHP container I have a volume with Symfony 3.1.7.
Everything works, but Symfony's "Welcome" page takes ~1.5 seconds to load on average, while the same setup without Docker gives me a 0.2-second load time. I see the same difference for Symfony's console commands, so I guess the problem isn't with NginX, and Unison should have negated any issues related to Docker's file-sync problems on OSX.
Right now I've run out of ideas for speeding things up and for figuring out what creates that ~1.5 s delay.
I have the same issue on my second MBP, but it does not happen on a colleague's laptop, which is similar to mine; we were unable to find any difference between the two setups.
Everything is running on my MBP with a 2.5 GHz i5, 8 GB RAM, and an SSD.
Docker 1.12.3, OSX 10.12.1 (Sierra)
docker-compose.yml:
mydockerbox-redis:
  image: phpdockerio/redis:latest
  container_name: mydockerbox-redis

mydockerbox-webserver:
  image: phpdockerio/nginx:latest
  container_name: mydockerbox-webserver
  volumes:
    - ..:/var/www/mydockerbox
    - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  ports:
    - "80:80"
  links:
    - mydockerbox-php-fpm

unison:
  image: leighmcculloch/unison:latest
  environment:
    - UNISON_WORKING_DIR=/unison
  volumes:
    - ../mydockerbox:/var/www/mydockerbox
  ports:
    - "5000:5000"

mydockerbox-php-fpm:
  build: .
  dockerfile: php-fpm/Dockerfile
  container_name: mydockerbox-php-fpm
  volumes_from:
    - unison
  volumes:
    - ./php-fpm/php-ini-overrides.ini:/etc/php/7.0/fpm/conf.d/99-overrides.ini
  links:
    - mydockerbox-redis
UPD: And here is the Dockerfile for the php-fpm container:
FROM phpdockerio/php7-fpm:latest
# Install selected extensions and other stuff
RUN apt-get update \
    && apt-get -y --no-install-recommends install php7.0-mongodb php7.0-redis php7.0-igbinary \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
WORKDIR "/var/www/mydockerbox"
I suggest using docker-machine-driver-xhyve:
docker-machine/libmachine driver plugin for xhyve/hyperkit (native macOS hypervisor.framework)
You can simply install it with brew (I hope you've already installed Docker & co. with brew as well; otherwise, unlink them and reinstall with brew!):
brew install docker-machine-driver-xhyve
sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
Then you can create a Docker machine as follows:
docker-machine create --driver xhyve --xhyve-experimental-nfs-share my-xhyve-docker-machine
and use it to run your containers.
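Once the machine is created, point your shell at it before bringing the stack up (the machine name matches the create command above):
eval $(docker-machine env my-xhyve-docker-machine)
docker-compose up -d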
First, benchmark PHP performance inside your php-fpm container (using this, for example) and compare it with your colleague's container.
If you find that performance is the same or comparable, then use PHP profiling tools to find out what Symfony does in every bit of those ~1.5 seconds while generating the "Welcome" page. That will likely identify the bottleneck (it could be the filesystem, network communication with the Redis container, DNS lookups, etc.).
If the benchmark shows that PHP itself runs slower in your container (which I think is unlikely), then run the benchmark on the host machine. If there is a big difference between the host machine's and the php-fpm container's results, that would mean the Docker engine is throttling resources and needs a deep tweak or a reinstall.
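For a quick sanity check before full profiling, a crude CPU timing can be compared between the two machines (the container name comes from the question's compose file, the php CLI is assumed to be present in the image, and the empty loop is only an illustration, not a real benchmark):
docker exec mydockerbox-php-fpm php -r '$s = microtime(true); for ($i = 0; $i < 10000000; $i++); echo (microtime(true) - $s) . " s\n";'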
I made the following script in docker-compose.yml, which tries to run an official PHP + Apache image from Docker Hub:
services:
  apache:
    image: 'php:5.6-apache'
    container_name: apache
    restart: always
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /mnt/data/apps/html:/var/www/html
      - /mnt/data/apps/ssl:/etc/ssl
      - /mnt/data/apps/apache:/etc/apache2
But when I run it with docker compose up, the container does come up, but the files that were supposed to be created on container launch are not being created... (This also happens when using a docker run script.)
If I remove the volumes, run it again, and access the container with "docker exec -it apache bash", I see that the files are generated accordingly... It only happens when binding volumes. Weren't the files supposed to be created automatically in the local volumes?
Please, what am I doing wrong? Is there something missing in the script? Am I being dumb?
Sorry if this is a really obvious question; I have nowhere else to go and am just starting out with Docker.
Thank you
The solution was to run the container without any volumes and use docker cp to copy the config files to my machine, then run it again with volumes pointing to the copied config files...
Nginx Example
docker pull nginx:1.23.1-alpine
docker run --name tmp-nginx-container -d nginx:1.23.1-alpine
docker cp tmp-nginx-container:/etc/nginx/ D:\Docker\Config
docker rm -f tmp-nginx-container
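The same approach should work for the php:5.6-apache image from the question; the host paths below simply mirror the question's intended mounts and are assumptions:
docker run --name tmp-apache-container -d php:5.6-apache
docker cp tmp-apache-container:/etc/apache2/. /mnt/data/apps/apache
docker rm -f tmp-apache-container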
Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single-container environment, since the app just connects to the database, which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we had a specific RUN composer install line in the Dockerfile. We had to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that if we override the command via docker-compose, the application deploys to our server seamlessly, as it should; but with the changed default command in the Dockerfile it suffers roughly a minute of downtime every time we deploy.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via Composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why running docker-compose would not install the PHP dependencies while including composer install in the default command would, why adding composer install to the docker-compose.yml is better, and how volumes come into all this and whether they are the real reason for the hassle.
Our current docker file looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your first composer install within the Dockerfile works fine, and the resulting image has vendor/ etc.
But later you create a container from that image, and that container is executed with the whole directory being replaced by a host-directory mount:
volumes:
  - .:/srv/app
So your Docker image has both your files and the installed vendor files, but then you replace the project directory with the one on your host, which does not have the vendor files, and the final result looks as if the build was never done.
My advice would be:
don't duplicate the build step (composer install) in the container's startup command
mount individual folders in your container, i.e. not .:/srv/app but ./src:/srv/app/src, etc. (see the sketch after this list)
or map the whole folder, but copy the vendor files from the image/container to your host
or use some third-party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
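A rough sketch of the individual-folder option, assuming the project keeps its editable code under src/ and public/ (hypothetical paths; adjust to your layout):
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:80
    volumes:
      # mount only the directories you edit, so the image's vendor/ stays visible
      - ./src:/srv/app/src
      - ./public:/srv/app/public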
There are a few tutorials on the internet; some use docker-compose and therefore combine e.g. PHP, MariaDB, and phpMyAdmin, all from the original projects on hub.docker.com. This method is pretty fast and easy to configure: with one YML file, the whole LAMP server basically runs as required.
version: '3'
services:
  php-apache:
    image: php:7.3.2-apache-stretch
    ports:
      - 80:80
    volumes:
      - D:\test\src:/var/www/html
    links:
      - 'mariadb'
  mariadb:
    image: mariadb:10.1
    volumes:
      - mariadb:/var/lib/mysql
    environment:
      TZ: "Europe/Rome"
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_ROOT_PASSWORD: "rootpwd"
      MYSQL_USER: 'testuser'
      MYSQL_PASSWORD: 'testpassword'
      MYSQL_DATABASE: 'testdb'
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      PMA_HOST: "mariadb"
    restart: always
    ports:
      - 8181:80
    volumes:
      - /sessions
    links:
      - 'mariadb'
volumes:
  mariadb:
Source (edited)
Others create one Dockerfile and put all the apt-get commands in this file, like this one from fauria/docker-lamp:
FROM ubuntu:16.04
MAINTAINER Fer Uria <fauria@gmail.com>
LABEL Description="Cutting-edge LAMP stack, based on Ubuntu 16.04 LTS. Includes .htaccess support and popular PHP7 features, including composer and mail() function." \
      License="Apache License 2.0" \
      Usage="docker run -d -p [HOST WWW PORT NUMBER]:80 -p [HOST DB PORT NUMBER]:3306 -v [HOST WWW DOCUMENT ROOT]:/var/www/html -v [HOST DB DOCUMENT ROOT]:/var/lib/mysql fauria/lamp" \
      Version="1.0"
RUN apt-get update
RUN apt-get upgrade -y
COPY debconf.selections /tmp/
RUN debconf-set-selections /tmp/debconf.selections
RUN apt-get install -y zip unzip
RUN apt-get install -y \
    php7.0 \ ...
While the first one seems to be a lot simpler, it carries a few redundancies (Debian for PHP, Ubuntu for MariaDB, Alpine for phpMyAdmin).
So does Docker now run three servers: one for PHP, one for the database, and one for phpMyAdmin? It feels like a waste of resources, doesn't it?
Which method is the typical convention?
According to the official documentation, "it is generally recommended that you separate areas of concern by using one service per container"; that way each service is easier to maintain, scale, or update without affecting the others.
In Docker these instances are called services, so docker-compose runs each component as a service.
You can also read more about running multiple services in a container if you need to know more about it.
Regarding resource usage, it won't waste as much as you think. This is one of the advantages of containers over virtual machines: a container uses the host's kernel and does not dedicate specific resources the way VMs do, since VMs run a whole separate operating system.
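If you want to check the actual overhead yourself, a one-off snapshot of what each of the three running services consumes makes the point:
docker stats --no-stream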
I am trying to set up a CI pipeline with docker-compose and am struggling to understand how named volumes work...
As part of my Dockerfile, I copy in the application files and then run composer install to install the application dependencies. There are some elements of the application files and the dependencies that I want to share with the other containers that are running or are set up to run utility processes (such as database migrations). See the example below:
Dockerfile:
FROM php:5.6-apache
# Install dependencies
COPY composer.* /app/
RUN composer install --no-dev
# Copy application files
COPY bin bin
COPY environment.json environment.json
VOLUME /app
docker-compose.yml
web:
  build:
    context: .
    dockerfile: docker/web/Dockerfile
  volumes:
    - app:/app
    - ~/.cache/composer:/composer/cache
migrations:
  image: my-image
  depends_on:
    - web
  environment:
    - DB_DRIVER=pdo_mysql
    - AUTOLOADER=../../../vendor/autoload.php
  volumes:
    - app:/app
  working_dir: /app/vendor/me/my-lib
volumes:
  app:
In the example above (irrelevant information omitted), I have a "migrations" service that pulls the migrations from the application dependencies installed with Composer. My idea is that when I perform docker-compose build followed by docker-compose up, it will bring up the latest version of the software with the latest dependencies and run the latest migrations at the same time.
This works fine the first time. Unfortunately, on subsequent runs I cannot get docker-compose to use the new versions. If I run docker-compose build, I can see composer install run and install all the latest libraries, but when I then go into the container with docker-compose run web /bin/bash, the old dependencies are in there! If I run the image directly with docker run web_1, I can see all the latest files, no problem. So it's definitely a Compose-specific problem.
I assume I need to do something like clearing out the volume cache, but whatever I have tried doesn't seem to work. I can only assume I am misunderstanding the idea of volumes.
Any help would be hugely appreciated. Thanks!
What I understand from your question is that you want to run composer install every time you run your container. In that case you have to use the CMD instruction to execute that command.
CMD composer install --no-dev
RUN and CMD are both Dockerfile instructions.
RUN lets you execute commands inside your Docker image. These commands are executed once, at build time, and get written into your Docker image as a new layer.
If you want to install a package or create a directory inside your Docker image, RUN is what you'll want to use, for example RUN mkdir -p /path/to/folder.
CMD lets you define a default command to run when your container starts.
You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
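A minimal Dockerfile sketch of the difference, using the question's base image (it assumes composer and a composer.json are already present in the image, which php:5.6-apache alone does not provide):
FROM php:5.6-apache
WORKDIR /app
# RUN executes once at build time; its result is baked into an image layer
RUN mkdir -p /var/log/myapp
# CMD only records the default startup command; it executes each time a container starts
# (assumes composer and your composer.json are available under /app)
CMD ["bash", "-c", "composer install --no-dev && apache2-foreground"]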
The problem here has to do with mounting a volume over a location defined in the build. The first build of the image has Composer put its output into /app, and the first run of that build mounts the app named volume to /app. This clobbers the image's version of /app with a new write layer on top. Mounting this named volume on the second build of the image will keep the original contents of /app.
Instead of using a named volume, use volumes_from to load the exported /app volume from web into the migration container.
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - ~/.cache/composer:/composer/cache
  migrations:
    image: docker-registry.efficio.digital:5043/doctrine-migrator:1.1
    depends_on:
      - web
    environment:
      - DB_DRIVER=pdo_mysql
      - AUTOLOADER=../../../vendor/autoload.php
    volumes_from:
      - web:ro
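If stale volumes from earlier runs are still masking the image's files, clearing them before the next run helps; note that down -v removes the compose project's named and anonymous volumes:
docker-compose down -v
docker-compose up --build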
I'm trying to set up two Docker images for my PHP web application (php-fpm) reverse-proxied by NGINX. Ideally I would like all the files of the web application to be copied into the php-fpm-based image and exposed as a volume. This way both containers (web and app) can access the files, with NGINX serving the static files and php-fpm interpreting the PHP files.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - app
    links:
      - app
  app:
    build: .
    volumes:
      - /app
Dockerfile:
FROM php:fpm
COPY . /app
WORKDIR /app
The above setup works as expected. However, when I make any change to the files and then do
docker-compose up --build
the new files are not picked up in the resulting images. This is despite the following message indicating that the image is indeed being rebuilt:
Building app
Step 1 : FROM php:fpm
---> cb4faea80358
Step 2 : COPY . /app
---> Using cache
---> 660ab4731bec
Step 3 : WORKDIR /app
---> Using cache
---> d5b2e4fa97f2
Successfully built d5b2e4fa97f2
Only removing all the old images does the trick.
Any idea what could cause this?
$ docker --version
Docker version 1.11.2, build b9f10c9
$ docker-compose --version
docker-compose version 1.7.1, build 0a9ab35
The volumes_from option mounts volumes from one container into another. The important word there is container, not image. So when you rebuild an image, the previous container is still running. If you stop and restart that container, or even just stop it, the other containers are still using those old mount points. And if you stop and remove the old app container and start a new one, the old volume mounts will still point at the now-deleted container's volumes.
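In practice that means recreating the containers after every rebuild instead of reusing the old ones, for example:
docker-compose build app
docker-compose up -d --force-recreate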
The better way to solve this in your situation is to switch to named volumes and set up a utility container to update this volume.
version: '2'
volumes:
  app-data:
    driver: local
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
      - app-data:/app
  app:
    build: .
    volumes:
      - app-data:/app
A utility container to update your app-data volume could look something like:
docker run --rm -it \
-v `pwd`/new-app:/source -v app-data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
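Rerun that copy after each rebuild so the shared volume picks up the new files; new-app above stands for wherever your updated application files live.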
As BMitch points out, image updates don't automatically filter down into containers, so your workflow for updates needs to be revisited. I've just gone through the process of building a container which includes NGINX and PHP-FPM. I found, for me, that the best way was to include nginx and php in a single container, both managed by supervisord.
I then have scripts in the image that allow you to update your code from a git repo. This makes the whole process really easy.
# Create a new container from the image
docker run -d --name=your_website -p 80:80 -p 443:443 camw/centos-nginx-php
# git clone to get the website code from git
docker exec -ti your_website get https://www.github.com/user/your_repo.git
# Restart the container so that nginx config changes take effect
docker restart your_website
# Then to update, after committing changes to git, you'll call
docker exec -ti your_website update
# Restart the container if there are nginx config changes
docker restart your_website
My container can be found at https://hub.docker.com/r/camw/centos-nginx-php/
The dockerfile and associated build files are available at https://github.com/CamW/centos-nginx-php
If you want to give it a try, just fork https://github.com/CamW/centos-nginx-php-demo, change the conf/nginx.conf file as indicated in the readme and include your code.
Doing it this way, you don't need to deal with volumes at all; everything is in your container, which I like.