Caching dependencies in gitlab-ci / docker - php

I don't know how to cache dependencies in a GitLab CI -> Docker pipeline.
My project has 82 Composer dependencies and installing them is very slow (vendor/ is in .gitignore).
Full process:
change local file -> commit and push to remote repo -> run GitLab CI -> build Docker image -> push image to other server -> publish image
My example project:
app -> my files (html, img, php, css, anything)
gitlab-ci.yml
composer.json
composer.lock
Makefile
Dockerfile
Dockerfile:
FROM hub.myserver.test/image:latest
ADD . /var/www
# WORKDIR (not CMD cd ...) sets the directory for the following RUN steps
WORKDIR /var/www
RUN composer install --no-interaction
RUN echo "#done" >> /etc/sysctl.conf
gitlab-ci:
build:
  script:
    - make build
  only:
    - master
Makefile:
all: build

build:
	docker build -t hub.myserver.test/new_image .
How can I cache the dependencies (composer.json)? I do not want to download the libraries from scratch every time.

Usually it's not a good idea to run composer install inside your image. I assume you eventually need to run your PHP app, not Composer itself, so you can avoid having it in production.
One possible solution is to split the app image creation into 2 steps:
Install everything outside the image
Copy the ready-made files into the image
.gitlab-ci.yml:
stages:
  - compose
  - build

compose:
  stage: compose
  image: composer # or you can use your hub.myserver.test/image:latest
  script:
    - composer install # install packages
  artifacts:
    paths:
      - vendor/ # save them for the next job

build:
  stage: build
  script:
    - docker build -t hub.myserver.test/new_image .
    - docker push hub.myserver.test/new_image
In the Dockerfile you then just copy the artifact files from the first stage into the image workdir:
# you can build from your own image
FROM php
COPY . /var/www
WORKDIR /var/www
# optional, if you want to replace CMD of base image
CMD [ "php", "./index.php" ]
Another benefit is that you can test your code before building an image with it. Just add a test job between compose and build, as sketched below.
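A minimal sketch of such a test job (illustrative only: it assumes PHPUnit is among your dev dependencies, and you would also add test to the stages list):

stages:
  - compose
  - test
  - build

test:
  stage: test
  image: composer # any image with a PHP runtime works
  dependencies:
    - compose # fetch the vendor/ artifact from the compose job
  script:
    - vendor/bin/phpunit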
A live example is available on gitlab.com.

Related

Docker project with php composer, installs composer packages in root directory instead of vendor

I have a PHP/WordPress project which requires Composer. The project setup is simple and minimal.
docker-compose.yaml
version: "3.9"
services:
# Database
clearlaw-mysql1:
image: mysql:8
volumes:
- database:/var/lib/mysql
restart: on-failure
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
networks:
- clearlaw
# Wordpress
clearlaw-wp1:
container_name: clearlaw-wp1
build:
context: .
depends_on:
- clearlaw-mysql
image: wordpress:latest
ports:
- 10002:80
restart: unless-stopped
CLI_MULTISITE_DEBUG: 1
CLI_MULTISITE_DEBUG_DISPLAY: 1
CLI_MULTISITE_DB_HOST: clarlaw-mysql:3306
CLI_MULTISITE_DB_NAME: wordpress
CLI_MULTISITE_DB_USER: wordpress
CLI_MULTISITE_DB_PASSWORD: wordpress
networks:
- clearlaw
clearlaw-adminer1:
image: adminer
ports:
- 10003:8080
restart: unless-stopped
networks:
- clearlaw
networks:
clearlaw:
volumes:
database:
Dockerfile
FROM wordpress:latest
# INSTALL AND UPDATE COMPOSER
COPY --from=composer /usr/bin/composer /usr/bin/composer
RUN composer self-update
COPY composer.json .
RUN composer install --prefer-dist
RUN composer dump-autoload
COPY . .
EXPOSE 80
composer.json
{
  "require": {
    "vlucas/phpdotenv": "^v2.6.7",
    "dompdf/dompdf": "^1.0"
  }
}
When I run this setup I get a fatal error: the autoload.php file is not where it should be (vendor/autoload.php). Instead it is in the root directory along with all the installed packages. The vendor directory exists, however it is empty.
Example directory structure:
-- autoload.php
-- vendor/ # empty
-- composer/
-- wp-content/
-- wp-admin/
-- wp-includes/
# all other files
What I've tried
I have tried setting the vendor directory explicitly in composer.json, but it didn't help:
{
  "config": {
    "vendor-dir": "vendor"
  },
  "require": {
    "vlucas/phpdotenv": "^v2.6.7",
    "dompdf/dompdf": "^1.0"
  }
}
Update
I have created this repository for quick testing: https://github.com/prionkor/wp-composer-test
The wordpress:latest image has a volume declared at /var/www/html, which is also the container's working directory.
When you bring your docker-compose setup up, the (anonymous) volume is created and the container's entrypoint script (/usr/local/bin/docker-entrypoint.sh) copies the WordPress sources into the new volume.
In general, only after that point would a composer install with the vendor dir at /var/www/html/vendor not be discarded.
As you have the composer install within the Dockerfile building FROM wordpress:latest, the vendor folder is created but then discarded. It is either too early or too late, depending on where you look from.
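You can confirm the declared volume yourself; this is just a quick check, not part of the fix:

docker image inspect wordpress:latest --format '{{ json .Config.Volumes }}'
# prints: {"/var/www/html":{}}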
Therefore, you can drop the RUN composer ... instructions from your Dockerfile; they effectively only lengthen the build time.
FROM wordpress:latest
# INSTALL AND UPDATE COMPOSER
COPY --from=composer /usr/bin/composer /usr/bin/composer
RUN composer self-update
COPY . .
EXPOSE 80
Then do the composer install (dump-autoload is not necessary after it) and exec the base container's entrypoint when starting the container:
# WordPress
build:
  context: .
command: /bin/bash -c "set -ex; composer update --prefer-dist; exec docker-entrypoint.sh apache2-foreground"
Then Composer installs into the volume, effectively at /var/www/html/vendor.
The original WordPress container setup will give a warning that the folder is not empty any longer, but it does not prevent the initialization of WordPress, so you can ignore it:
WARNING: /var/www/html is not empty! (copying anyhow)
Composer then installs from the build context's composer.json/composer.lock when you bring the project up initially (if the composer.lock file is authoritative in the project outside the container, use composer install instead of composer update).
You then can run
docker-compose up -d --build
to recreate the WordPress service container from scratch and Composer populates the vendor folder again.
This is commonly the point where you can greatly benefit from a command runner, for example a Makefile: you then have the standard operations of your project at your fingertips.
Here is a composer-update goal:
wordpress := clearlaw-wp1

cu: composer-update;

composer-update:
	tar -c -f- composer.json \
		| docker cp - $(wordpress):/var/www/html
	docker-compose exec $(wordpress) composer update
Then alias the make command for that file in your shell (e.g. alias dc="make -C $(pwd) -f Makefile"), and you only need to type dc cu to perform the update.
You can extend it further for other operations, and you can also experiment with the volumes, which you have commented out in your example. E.g. you can mount things read-only inside the existing volume /var/www/html, just not directly at that place.
You can also mount single files. E.g., to extend the Composer example, you could as well mount the composer.json (and composer.lock) file into the container; then you can spare the tar-pipe to docker cp. This also serves as an example of how you can get files out of the container, as the mount works in both directions.
command: /bin/bash -c "set -x; composer update --prefer-dist; exec docker-entrypoint.sh apache2-foreground"
volumes: ["./composer.json:/var/www/html/composer.json:ro"]
You then can run
docker-compose up -d --build
again to recreate the container and trigger the Composer update.
And a side note: take a bit of care with the vlucas/phpdotenv package; it can easily become incompatible with docker / docker-compose if you don't follow the standard syntax rules for dot-env files (shell, docker, etc.). Normally you don't need it for a container setup anyway.
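For illustration, a minimal .env that sticks to the subset both parsers handle identically (the keys and values here are hypothetical):

# plain KEY=value pairs; avoid quotes, ${...} interpolation and inline comments,
# which phpdotenv and docker-compose have historically parsed differently
DB_HOST=clearlaw-mysql1
DB_NAME=wordpress
DB_USER=wordpress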

Optimizing Laravel docker image

Updated
I updated the Dockerfile below for anyone who wants a good Dockerfile for their Laravel application.
I'm trying to build a Docker image from my Laravel application. My application plus all its dependencies is about 380 MB, but the image turns out to be 840 MB. I used a multi-stage build as Ivan suggested (which halved the size of the image; it was 1.2 GB at first). But I'm still wondering: why is my Docker image this big, and how can I reduce its size?
Here is my Dockerfile:
# Instruction adapted from https://laravel-news.com/multi-stage-docker-builds-for-laravel
# PHP Dependencies
FROM composer:latest as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
--no-dev \
--ignore-platform-reqs \
--no-interaction \
--no-plugins \
--no-scripts \
--prefer-dist
# Frontend
FROM node:16.13.1 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js tailwind.config.js /app/
COPY resources/ /app/resources/
COPY public/ /app/public/
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci && npm run production
# Application
FROM php:7.4-apache
COPY . /var/www/html
COPY --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --from=frontend /app/public/ /var/www/html/public/
Your image is big because it contains all the applications you installed via apt-get, together with their dependencies.
There are multiple ways to solve the problem:
use a multi-stage build
use a suitable base image
use Alpine Linux
Multi-stage build
Use one base image to get/build/test your app, and copy only the needed results into the next stage:
FROM ubuntu:18.04 AS build
# ... build your app here ...

FROM php:7.4.27-fpm-alpine AS final
# copy only what the final stage needs (paths are illustrative)
COPY --from=build /app /app
Suitable base image
Use an image that already contains the environment you need to run your application, so there is no need to install all that extra baggage.
Use Alpine Linux
Use images based on Alpine or a similar distro optimized for Docker/cloud use, and build your app on top of them; see the sketch below.
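As a rough sketch, the final stage of the Laravel Dockerfile above could be rebased on Alpine like this (illustrative only; php:7.4-fpm-alpine ships PHP-FPM rather than Apache, so a web server such as nginx would need to sit in front of it):

# Application (Alpine base keeps the final image small)
FROM php:7.4-fpm-alpine
WORKDIR /var/www/html
COPY . .
# copy only the results of the earlier build stages
COPY --from=vendor /app/vendor/ ./vendor/
COPY --from=frontend /app/public/ ./public/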

docker-compose build and docker build giving different results

I have created a simple Dockerfile to install Apache with PHP and then install packages from composer.json.
FROM php:7-apache
WORKDIR /var/www/html
COPY ./src/html/ .
COPY composer.json .
RUN apt-get update
RUN apt-get install -y unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer update
When I run docker build -t my-web-server . followed by docker run -p 8080:80 my-web-server, everything works fine and the packages install.
But when I use a docker-compose file:
version: "3.9"
services:
ecp:
build: .
ports:
- "8080:80"
volumes:
- ./src:/var/www
and perform docker-compose build followed by docker-compose up, the packages do not install and only index.php is taken across to the container.
My current file structure:
src
|-- html
|   |-- index.php
composer.json
docker-compose.yaml
Dockerfile
When docker-compose builds the image, all the console output is identical to that of docker build.
Your two approaches are not identical. You are using volumes in your docker-compose setup and not in your docker call; your problem lies there.
More specifically, notice that in your docker-compose file you are mounting your host's ./src to your container's /var/www, which is not giving you the correct structure, since you "shadow" the container's folder that contains your composer.json (which was copied into the container at build time).
To avoid such confusion, I suggest that if you want to mount a volume with your compose file (which is a good idea for development), your docker-compose.yml should mount the exact same paths as the COPY commands in your Dockerfile. For example:
volumes:
  - ./src/html:/var/www/html
  - ./composer.json:/var/www/html/composer.json
Alternatively, remove the volumes directive from your docker-compose.yml.
Note that it can cause additional problems and confusion to have a file (in your case composer.json) copied into a folder in the container while also mounting over that same folder. It is best to have the structure in the container mimic the one on the host as closely as possible.
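To see the shadowing for yourself (a quick check; ecp is the service name from the compose file above):

# the image itself contains the vendor/ directory created at build time
docker build -t my-web-server .
docker run --rm my-web-server ls /var/www/html

# run via compose: the ./src bind mount shadows /var/www,
# so the vendor/ directory baked into the image is hidden
docker-compose run --rm ecp ls /var/www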

docker-compose using old volumes

I am trying to set up a CI pipeline with docker-compose and am struggling to understand how named volumes work...
As part of my Dockerfile, I copy in the application files and then run composer install to install the application dependencies. There are some elements of the application files and the dependencies that I want to share with the other containers that are running / are set up to run utility processes (such as database migrations). See the example below:
Dockerfile:
FROM php:5.6-apache
# Install dependencies
COPY composer.* /app/
RUN composer install --no-dev
# Copy application files
COPY bin bin
COPY environment.json environment.json
VOLUME /app
docker-compose.yml
web:
  build:
    context: .
    dockerfile: docker/web/Dockerfile
  volumes:
    - app:/app
    - ~/.cache/composer:/composer/cache
migrations:
  image: my-image
  depends_on:
    - web
  environment:
    - DB_DRIVER=pdo_mysql
    - AUTOLOADER=../../../vendor/autoload.php
  volumes:
    - app:/app
  working_dir: /app/vendor/me/my-lib
volumes:
  app:
In the example above (irrelevant information omitted), I have a "migrations" service that pulls the migrations from the application dependencies installed with Composer. My idea is that when I perform docker-compose build followed by docker-compose up, it will bring up the latest version of the software with the latest dependencies and run the latest migrations at the same time.
This works fine the first time. Unfortunately, on subsequent runs I cannot get docker-compose to use the new versions. If I run docker-compose build, I can see composer install run and install all the latest libraries, but when I then go into the container with docker-compose run web /bin/bash, the old dependencies are in there! If I run the image directly with docker run web_1, I can see all the latest files no problem. So it's definitely a compose-specific problem.
I assume I need to do something like clearing out the volume cache, but whatever I have tried doesn't seem to work. I can only assume I am misunderstanding the idea of volumes.
Any help would be hugely appreciated. Thanks!
What I understand from your question is that you want to run composer install every time you run your container. In that case you have to use the CMD instruction to execute that command:
CMD composer install --no-dev
RUN and CMD are both Dockerfile instructions.
RUN lets you execute commands inside your Docker image. These commands are executed once, at build time, and get written into your Docker image as a new layer.
For example, if you wanted to install a package or create a directory inside your Docker image, RUN is what you'll want to use, e.g. RUN mkdir -p /path/to/folder.
CMD lets you define a default command to run when your container starts.
You could say that CMD is a Docker run-time operation: it is not executed at build time, but when you run an image. A running image is called a container. See the sketch below.
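A minimal illustration of the difference, assuming a base image where Composer is available:

FROM php:5.6-apache
# RUN executes once at build time; its result is baked into an image layer
RUN mkdir -p /app
# CMD only records the default start-up command in the image metadata;
# it runs each time a container is started from this image
CMD composer install --no-dev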
The problem here has to do with mounting a volume over a location defined in the build. The first build of the image has Composer put its output into /app, and the first run of that build mounts the app named volume at /app. This clobbers the image's version of /app with a new write layer on top. Mounting this named volume on the second build of the image will keep the original contents of /app.
Instead of using a named volume, use volumes_from to load the exported /app volume from web into the migrations container:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - ~/.cache/composer:/composer/cache
  migrations:
    image: docker-registry.efficio.digital:5043/doctrine-migrator:1.1
    depends_on:
      - web
    environment:
      - DB_DRIVER=pdo_mysql
      - AUTOLOADER=../../../vendor/autoload.php
    volumes_from:
      - web:ro

File changes not reflected in Docker image after rebuild

I'm trying to set up two Docker images for my PHP web application (php-fpm) reverse-proxied by NGINX. Ideally I would like all the files of the web application to be copied into the php-fpm based image and exposed as a volume. This way both containers (web and app) can access the files, with NGINX serving the static files and php-fpm interpreting the PHP files.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - app
    links:
      - app
  app:
    build: .
    volumes:
      - /app
Dockerfile:
FROM php:fpm
COPY . /app
WORKDIR /app
The above setup works as expected. However, when I make any change to the files and then do
docker-compose up --build
the new files are not picked up in the resulting images. This is despite the following message indicating that the image is indeed being rebuilt:
Building app
Step 1 : FROM php:fpm
---> cb4faea80358
Step 2 : COPY . /app
---> Using cache
---> 660ab4731bec
Step 3 : WORKDIR /app
---> Using cache
---> d5b2e4fa97f2
Successfully built d5b2e4fa97f2
Only removing all the old images does the trick.
Any idea what could cause this?
$ docker --version
Docker version 1.11.2, build b9f10c9
$ docker-compose --version
docker-compose version 1.7.1, build 0a9ab35
The volumes_from option mounts volumes from one container into another. The important word there is container, not image. So when you rebuild an image, the previous container is still running. If you stop and restart that container, or even just stop it, the other containers are still using the old mount points. Even if you stop and remove the old app container and start a new one, the other containers' mounts will still point at the volume of the now-deleted container.
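One blunt workaround (my addition, assuming you can afford to throw away the volume contents) is to remove the containers together with their volumes before recreating them:

# remove containers and the volumes they declared
docker-compose down -v
# rebuild the image and start fresh containers with fresh volumes
docker-compose up --build -d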
The better way to solve this in your situation is to switch to named volumes and set up a utility container to update this volume:
version: '2'
volumes:
  app-data:
    driver: local
services:
  web:
    image: nginx:latest
    depends_on:
      - app
    volumes:
      - ./site.conf:/etc/nginx/conf.d/default.conf
      - app-data:/app
  app:
    build: .
    volumes:
      - app-data:/app
A utility container to update your app-data volume could look something like:
docker run --rm -it \
  -v `pwd`/new-app:/source -v app-data:/target \
  busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
As BMitch points out, image updates don't automatically filter down into containers, so your workflow for updates needs to be revisited. I've just gone through the process of building a container which includes NGINX and PHP-FPM. I found that, for me, the best way was to include nginx and PHP in a single container, both managed by supervisord.
I then have scripts in the image that allow you to update your code from a git repo. This makes the whole process really easy.
#Create new container from image
docker run -d --name=your_website -p 80:80 -p 443:443 camw/centos-nginx-php
#git clone to get website code from git
docker exec -ti your_website get https://www.github.com/user/your_repo.git
#restart container so that nginx config changes take effect
docker restart your_website
#Then to update, after committing changes to git, you'll call
docker exec -ti your_website update
#restart container if there are nginx config changes
docker restart your_website
My container can be found at https://hub.docker.com/r/camw/centos-nginx-php/
The dockerfile and associated build files are available at https://github.com/CamW/centos-nginx-php
If you want to give it a try, just fork https://github.com/CamW/centos-nginx-php-demo, change the conf/nginx.conf file as indicated in the readme and include your code.
Doing it this way, you don't need to deal with volumes at all; everything is in your container, which I like.
