For a PHP/Symfony project, I am currently setting up GitLab CI (self-hosted CE) to build the Docker images and run tests and code-style checks.
To run things in parallel, I have one build job that builds the Docker image, and two jobs that depend on it: the first runs the PHPUnit tests, the other runs the code-style checks (PHPStan and CodeSniffer).
The project has some Composer dependencies, which are installed with the command docker-compose run --entrypoint="composer install -n" php. The project folder is a volume configured in the docker-compose.yml file:
php:
  image: 'git.cd.de:5050/sf/sf-software:dev_latest'
  depends_on:
    - database
  environment:
    TIMEZONE: Europe/Berlin
    XDEBUG_MODE: 'off'
    XDEBUG_CONFIG: >-
      client_host=host.docker.internal
      client_port=9003
      idekey=PHPSTORM
    PHP_IDE_CONFIG: serverName=sf
  volumes:
    - './:/var/www/html'
    - './docker/php/php.ini:/usr/local/etc/php/php.ini:ro'
This works on my local machine, and it also works in CI - but only when the job runs on "the one" GitLab runner, which is installed on a second virtual machine. The "second runner" is installed on the same machine that GitLab itself runs on, and that runner fails with the following message:
$ docker-compose run --entrypoint="composer install -n" php
Creating cd-software_php_run ...
Creating cd-software_php_run ... done
Composer could not find a composer.json file in /var/www/html
To initialize a project, please create a composer.json file. See https://getcomposer.org/basic-usage
The composer.json file does not exist in the php Docker image.
It makes no difference whether the "second runner" performs the test job or the check job - whichever job runs on that runner always fails with this error.
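One way to narrow this down (a hedged diagnostic sketch, reusing the container name from the log above) is to check which mounts docker-compose actually configured and what the php container sees in the project volume - with a docker:dind service, as used in the .gitlab-ci.yml below, the bind source path is resolved on the Docker daemon's filesystem, which is not necessarily where the runner checked out the code:

# Show the mounts of the run container created above
docker inspect --format '{{ json .Mounts }}' cd-software_php_run

# List what the php container actually sees in the project volume
docker-compose run --rm --entrypoint="ls -la /var/www/html" php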
My .gitlab-ci.yml file:
variables:
  DOCKER_DRIVER: overlay

before_script:
  - apk add --no-cache docker-compose
  - docker info
  - docker-compose --version

build_dev:
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:dev_latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:dev_latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:dev_latest .
    - docker push $CI_REGISTRY_IMAGE:dev_latest

tests:
  services:
    - docker:dind
  needs:
    - build_dev
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker-compose pull
    - docker-compose run --entrypoint="composer install -n" php
    - docker-compose run --entrypoint="bin/console doctrine:migrations:migrate -n" php
    - docker-compose run --entrypoint="bin/console doctrine:schema:validate" php
    - docker-compose run --entrypoint="bin/console doctrine:fixtures:load -n" php
    - docker-compose run --entrypoint="vendor/bin/simple-phpunit -c phpunit.xml.dist" php

checks:
  services:
    - docker:dind
  needs:
    - build_dev
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker-compose pull
    - ls -la
    - docker-compose run --entrypoint="ls -la" php
    - docker-compose run --entrypoint="composer install -n" php
    - docker-compose run --entrypoint="composer run check-style" php
    - docker-compose run --entrypoint="composer run phpstan" php
I have created a simple Dockerfile that installs Apache with PHP and then installs the packages from composer.json.
FROM php:7-apache
WORKDIR /var/www/html
COPY ./src/html/ .
COPY composer.json .
RUN apt-get update
RUN apt-get install -y unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer update
When I run docker build -t my-web-server . followed by docker run -p 8080:80 my-web-server, everything works fine and the packages install.
But when I use a docker-compose file:
version: "3.9"
services:
ecp:
build: .
ports:
- "8080:80"
volumes:
- ./src:/var/www
and run docker-compose build followed by docker-compose up, the packages do not install, and only index.php is carried across to the container.
My current file structure:
src
|-- html
|   |-- index.php
composer.json
docker-compose.yaml
Dockerfile
When docker-compose builds the image, all the console output is identical to that of docker build.
Your two approaches are not identical: you are using volumes in your docker-compose setup but not in your docker run call. That is where your problem lies.
More specifically, notice that in your docker-compose file you are mounting your host's ./src onto your container's /var/www - which does not give you the correct structure, since you "shadow" the container folder that contains your composer.json (which was copied into the container at build time).
To avoid such confusion, I suggest that if you want to mount a volume with your compose file (which is a good idea for development), your docker-compose.yml should mount the exact same paths that the COPY commands in your Dockerfile write to. For example:
volumes:
  - ./src/html:/var/www/html
  - ./composer.json:/var/www/html/composer.json
Alternatively, remove the volumes directive from your docker-compose.yml.
Note that copying a file (in your case composer.json) into a folder in the container, while also copying that same folder into the container as-is, can cause additional problems and confusion. It is best to have the structure in the container mimic the one on the host as closely as possible.
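A quick way to observe the shadowing in action (a sketch reusing the my-web-server tag and the ecp service name from the question):

# At build time, composer.json and vendor/ end up in the image
docker build -t my-web-server .
docker run --rm my-web-server ls -la /var/www/html

# With the ./src:/var/www bind mount, the same path only shows host files
docker-compose up -d
docker-compose exec ecp ls -la /var/www/html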
I'm new to GitHub Actions, and I'm trying to set up some continuous integration with functional tests.
I use Codeception, and my workflow runs perfectly, but when some tests fail the step is still reported as successful. GitHub doesn't stop the action and continues to run the next steps.
Here is my workflow YAML file:
name: Run codeception tests

on:
  push:
    branches: [ feature/functional-tests/codeception ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # ── Setup GitHub Actions ───────────────────────────────
      - name: Checkout
        uses: actions/checkout@v2
      # ── Setup PHP version 7.3 ──────────────────────────────
      - name: Setup PHP environment
        uses: shivammathur/setup-php@master
        with:
          php-version: '7.3'
      # ── Setup Docker environment ───────────────────────────
      - name: Build containers
        run: docker-compose build
      - name: Start all containers
        run: docker-compose up -d
      - name: Execute www container
        run: docker exec -t my_container developer
      - name: Create parameter file
        run: cp app/config/parameters.yml.dist app/config/parameters.yml
      # ── Composer ───────────────────────────────────────────
      - name: Install dependencies
        run: composer install
      # ── Check requirements ─────────────────────────────────
      - name: Check PHP version
        run: php --version
      # ── Setup database ─────────────────────────────────────
      - name: Create database
        run: docker exec -t mysql_container mysql -P 3306 --protocol=tcp -u root --password=**** -e "CREATE DATABASE functional_tests"
      - name: Copy database
        run: cat tests/_data/test.sql | docker exec -i mysql_container mysql -u root --password=**** functional_tests
      - name: Switch database
        run: docker exec -t php /var/www/bin/console app:dev:marketplace:switch functional_tests
      - name: Execute migrations
        run: docker exec -t php /var/www/bin/console --no-interaction doctrine:migrations:migrate
      - name: Populate database
        run: docker exec -t my_container php /var/www/bin/console fos:elastica:populate
      # ── Generate assets ────────────────────────────────────
      - name: Install assets
        run: |
          docker exec -t my_container php /var/www/bin/console assets:install
          docker exec -t my_container php /var/www/bin/console assetic:dump
      # ── Tests ──────────────────────────────────────────────
      - name: Run functional tests
        run: docker exec -t my_container php codeception:functional
      - name: Run Unit Tests
        run: php vendor/phpunit/phpunit/phpunit -c app/
And here are the logs of the action step:
[screenshot: GitHub Actions log]
Does anyone know why the step doesn't fail, and how to make it throw an error?
Probably codeception:functional sets an exit code of 0 even though an error occurred. docker exec passes the exit code of the inner process through, and GitHub Actions fails a step only when a command returns an exit code != 0.
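A quick way to verify and fix this (a hedged sketch: codeception:functional looks like a project-specific wrapper, and the vendor/bin/codecept path is an assumption on my part) is to check what exit code the wrapper really returns, then call Codeception's runner directly, since it exits non-zero when tests fail and docker exec will pass that through:

# Verify what exit code the wrapper actually returns
docker exec -t my_container php codeception:functional; echo "exit code: $?"

# Calling the Codeception runner directly propagates test failures
docker exec -t my_container php vendor/bin/codecept run functional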
Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single-container environment, since the application just connects to the database, which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we have a specific RUN composer install line in the Dockerfile. We had to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
- change the default command of our Docker image to the following:
  bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
- or simply override the container's default command to the above via docker-compose's command directive.
The difference is that if we override the command via docker-compose, deployments to our server run seamlessly, as they should, but when changing the default command in the Dockerfile the application suffers roughly a minute of downtime every time we deploy.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that the minute of downtime was due to the container having to install all the dependencies via Composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in the docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why would running docker-compose not install the PHP dependencies, while including composer install in the default command does? And why is adding composer install to the docker-compose.yml better? Furthermore, how do volumes come into all this, and are they the real reason for the hassle?
Our current Dockerfile looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your composer install within the Dockerfile works fine, and the resulting image has vendor/ etc.
But later you create a container from that image, and that container runs with the whole directory replaced by a host directory mount:
volumes:
  - .:/srv/app
So your Docker image has both your files and the installed vendor files, but then you replace the project directory with the one on your host, which does not have the vendor files, and the final result looks as if the build was never done.
My advice would be:
- don't run build steps such as composer install as the container's command; keep the build in the Dockerfile
- mount individual folders in your container, i.e. not .:/srv/app, but ./src:/srv/app/src, etc. (see the sketch after this list)
- or map the whole folder, but copy the vendor files from the image/container to your host
- or use some 3rd-party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
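A minimal sketch of the second suggestion, plus a common variant that keeps the image's vendor/ visible by masking it with an anonymous volume (that variant is my own addition, not part of the advice above):

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      # mount only the source folders you actually edit,
      # so the rest of the image's /srv/app stays intact
      - ./src:/srv/app/src
      # variant: mount everything, but let an anonymous volume
      # preserve the image's /srv/app/vendor instead of the
      # (empty) host copy
      # - .:/srv/app
      # - /srv/app/vendor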
I am trying to set up a CI pipeline with docker-compose and am struggling to understand how named volumes work...
As part of my Dockerfile, I copy in the application files and then run composer install to install the application dependencies. There are some elements of the application files and the dependencies that I want to share with the other containers that are running, or that are set up to run utility processes (such as database migrations). See the example below:
Dockerfile:
FROM php:5.6-apache
# Install dependencies
COPY composer.* /app/
RUN composer install --no-dev
# Copy application files
COPY bin bin
COPY environment.json environment.json
VOLUME /app
docker-compose.yml
web:
  build:
    context: .
    dockerfile: docker/web/Dockerfile
  volumes:
    - app:/app
    - ~/.cache/composer:/composer/cache

migrations:
  image: my-image
  depends_on:
    - web
  environment:
    - DB_DRIVER=pdo_mysql
    - AUTOLOADER=../../../vendor/autoload.php
  volumes:
    - app:/app
  working_dir: /app/vendor/me/my-lib

volumes:
  app:
In the example above (irrelevant information omitted), I have a "migrations" service that pulls the migrations from the application dependencies installed with Composer. My idea is that when I perform docker-compose build followed by docker-compose up, it will bring up the latest version of the software with the latest dependencies and run the latest migrations at the same time.
This works fine the first time. Unfortunately, on subsequent runs I cannot get docker-compose to use the new versions. If I run docker-compose build, I can see composer install run and install all the latest libraries, but then when I go into the container with docker-compose run web /bin/bash, the old dependencies are in there! If I run the image directly with docker run web_1, I can see all the latest files, no problem. So it's definitely a compose-specific problem.
I assume I need to do something like clear out the volume cache, but whatever I have tried doesn't seem to work. I can only assume I am misunderstanding the idea of volumes.
Any help would be hugely appreciated. Thanks!
What I understand from your question is that you want to run composer install every time you run your container. In that case, you have to use the CMD instruction to execute that command.
CMD composer install --no-dev
RUN and CMD are both Dockerfile instructions.
RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer.
For example, if you wanted to install a package or create a directory inside of your Docker image, then RUN is what you'll want to use, e.g. RUN mkdir -p /path/to/folder.
CMD lets you define a default command to run when your container starts.
You could say that CMD is a Docker run-time operation, meaning it's not something that gets executed at build time. It happens when you run an image. A running image is called a container.
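To make the distinction concrete, here is a minimal sketch based on the Dockerfile from the question (assuming Composer is available in the image, as that Dockerfile does):

FROM php:5.6-apache
WORKDIR /app

# RUN executes at build time: vendor/ gets baked into an image layer
COPY composer.* /app/
RUN composer install --no-dev

# CMD only defines the default process to start when a container runs
CMD ["apache2-foreground"]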
The problem here has to do with mounting a volume over a location defined in the build. The first build of the image has Composer put its output into /app, and the first run of that build mounts the app named volume onto /app. This clobbers the image's version of /app with a new write layer on top. Mounting this named volume after a second build of the image will keep showing the volume's original contents rather than the rebuilt /app.
Instead of using a named volume, use volumes_from to load the exported /app volume from web into the migrations container.
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - ~/.cache/composer:/composer/cache
  migrations:
    image: docker-registry.efficio.digital:5043/doctrine-migrator:1.1
    depends_on:
      - web
    environment:
      - DB_DRIVER=pdo_mysql
      - AUTOLOADER=../../../vendor/autoload.php
    volumes_from:
      - web:ro
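If you do stick with a named volume instead, note that it is only seeded from the image while it is empty, so a rebuild only becomes visible after the volume is removed (an aside of mine, not part of the answer above):

# Stop the containers and delete named volumes declared in the compose file,
# so the next `up` re-seeds /app from the freshly built image
docker-compose down -v
docker-compose build
docker-compose up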
I need to install cURL compiled with OpenSSL and zlib via a Dockerfile for a Debian image with Apache and PHP 5.6. I tried many approaches, but since I don't have a strong understanding of Linux, I failed. I use docker-compose to bring up my container. My docker-compose.yaml looks like this:
version: '2'
services:
  web:
    build: .
    command: php -S 0.0.0.0:80 -t /var/www/html/
    ports:
      - "80:80"
    depends_on:
      - db
    volumes:
      - $PWD/www/project:/var/www/html
    container_name: "project-web-server"
  db:
    image: mysql:latest
    ports:
      - "192.168.99.100:3306:3306"
    container_name: "project-db"
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbpass
As a build script, I use this Dockerfile:
FROM php:5-fpm

RUN apt-get update && apt-get install -y \
    apt-utils \
    curl libcurl3 libcurl3-dev php5-curl php5-mcrypt

RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . (which succeeds), I don't get an image with cURL + OpenSSL + zlib. After $ docker-compose up, I have a working container with Apache + MySQL and can run my project, but the libraries I need are not there.
Could you explain how to properly add these extensions to the Apache setup in my container? I even tried to create my own Dockerfile and build Apache + PHP + the needed libs there, but had no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer the source code you want to execute from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, finishing by launching a process (typically a server).
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
As per the URL you reference, a more complete Dockerfile would look like this:
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files and directories (typically your source code, plus things like data and/or config files) from the $PWD where you execute the command to the specified location inside the container. In Unix, a period (.) indicates the current directory, so the command
COPY . /usr/src/myapp
will copy all files and directories in the current directory of the host computer (the one you are using when typing the docker build command) into the container directory /usr/src/myapp.
The WORKDIR instruction changes the working directory inside the container to the supplied path.
Finally, the CMD launches the server, which hums along once you launch the container.
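For completeness, a typical build-and-run sequence for such an image (the my-php-app tag is just an example name):

# Build the image from the Dockerfile in the current directory
docker build -t my-php-app .

# Run it; --rm removes the container once the process exits
docker run -it --rm my-php-app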