Optimizing Laravel docker image - php

Updated
I updated the Dockerfile for anyone who wants a good Dockerfile for their Laravel application.
I'm trying to build a Docker image from my Laravel application. My application plus all its dependencies is about 380 MB, but the image turns out to be 840 MB. I used a multi-stage build as Ivan suggested (which halved the size of the image; it was 1.2 GB at first). But I'm still wondering: why is my Docker image this big, and how can I reduce its size?
Here is my Dockerfile:
# Instruction adapted from https://laravel-news.com/multi-stage-docker-builds-for-laravel
# PHP Dependencies
FROM composer:latest as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
    --no-dev \
    --ignore-platform-reqs \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist
# Frontend
FROM node:16.13.1 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js tailwind.config.js /app/
COPY resources/ /app/resources/
COPY public/ /app/public/
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci && npm run production
# Application
FROM php:7.4-apache
COPY . /var/www/html
COPY --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --from=frontend /app/public/ /var/www/html/public/

Your image is big because it contains all the applications you installed via apt-get, along with their dependencies.
There are multiple ways to solve the problem:
use multistage build
use suitable base image
use Alpine linux
Multistage build
Use one base image to fetch/build/test your app, then copy only the needed results into the next stage.
FROM ubuntu:18.04 AS build
# build your app here
FROM php:7.4.27-fpm-alpine AS final
COPY --from=build ...
Suitable base image
Use an image that already contains the environment you need to run the application, so there is no need to install all that extra software.
Use Alpine linux
Use images based on Alpine or a similar distro optimized for Docker/cloud use, and build your app on top of them.
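As a rough sketch of the Alpine approach (the extension and package names here are illustrative; which ones you need depends on your application):
# Minimal sketch of an Alpine-based PHP image.
# icu-dev and the chosen extensions are examples only.
FROM php:7.4-fpm-alpine
RUN apk add --no-cache icu-dev \
    && docker-php-ext-install intl pdo_mysql
COPY . /var/www/html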

Related

How to copy from one image to another in a docker multistaged build?

I want to copy the vendor folder from the composer image to another php image during a multistaged build.
My Dockerfile looks like this:
FROM composer
WORKDIR /tmp/composer-vendors/
COPY composer.lock composer.json ./
RUN composer install --ignore-platform-reqs
RUN pwd && ls
FROM php:7.3-fpm-alpine
WORKDIR /var/www/html
RUN pwd && ls
COPY --from=composer /tmp/composer-vendors/vendor ./vendor
CMD ["php-fpm"]
The RUN pwd && ls is only there to show that the files are indeed there.
Yet the COPY --from=composer fails, stating:
Step 9/10 : COPY --from=composer /tmp/composer-vendors/vendor ./vendor
COPY failed: stat /var/lib/docker/overlay2/c0cece8b4ffcc3ef3f6ed26c3131ae94813acffd5034b359c2ea6aed922f56ee/merged/tmp/composer-vendors/vendor: no such file or directory
What am I doing wrong?
My example composer.json:
{
    "name": "kopernikus/multistage-copy-issue",
    "require": {
        "nesbot/carbon": "^2.36"
    }
}
You have to alias the build stage; otherwise Docker uses the base image provided by composer, not the stage you built.
This means you need to change your FROM statement:
FROM composer as BUILDER
and reference your image:
COPY --from=BUILDER /tmp/composer-vendors/vendor ./vendor
The alias could be anything; I used BUILDER as an example. In fact, you could even reuse the name:
FROM composer AS composer
though that might lead to unexpected behavior if people are not expecting a modified image.
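Putting it together, a corrected version of the question's Dockerfile would be (only the stage alias and the --from reference change):
FROM composer AS builder
WORKDIR /tmp/composer-vendors/
COPY composer.lock composer.json ./
RUN composer install --ignore-platform-reqs

FROM php:7.3-fpm-alpine
WORKDIR /var/www/html
# reference the aliased stage, not the composer base image
COPY --from=builder /tmp/composer-vendors/vendor ./vendor
CMD ["php-fpm"]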

Bitbucket pipeline deploy ignores vendor folder with ftp upload

I am trying to deploy a PHP project using a Bitbucket pipeline, with this code:
init: # -- First time init
  - step:
      name: build
      image: php:7.1.1
      caches:
        - composer
      script:
        - apt-get update && apt-get install -y unzip
        - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
        - composer install
        - vendor/bin/phpunit
      artifacts: # defining vendor/ as an artifact
        - vendor/**
  - step:
      image: samueldebruyn/debian-git
      name: deployment
      script:
        - apt-get update
        - apt-get -qq install git-ftp
        - git ftp init -u "$FTP_DEV_USERNAME" -p "$FTP_DEV_PASSWORD" ftp://$FTP_DEV_HOST/$FTP_DEV_FOLDER
But it ignores the vendor folder. I had assumed that the artifacts would be included in the deploy too.
What is wrong, or what can I do better?
This happens because you probably have a .gitignore which includes the vendor directory. The artifacts are in fact passed to the next step by Bitbucket, but they are ignored by git-ftp. In order to upload these files with git-ftp, you need to create a file called .git-ftp-include and add the following line to it: !vendor/. The ! is required, as stated in the docs:
The .git-ftp-include file specifies intentionally untracked files that Git-ftp should
upload. If you have a file that should always be uploaded, add a line beginning with !
followed by the file's name.
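For example, a minimal sketch (run from the repository root):

```shell
# Tell git-ftp to always upload the untracked vendor/ directory;
# the leading "!" marks an always-upload entry.
echo '!vendor/' > .git-ftp-include
cat .git-ftp-include
```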

How can I make this docker image smaller?

I have the following dockerfile:
FROM php:7.2-apache
LABEL name "medico-app"
COPY composer.json composer.lock ./
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN apt-get update && apt-get install -y --no-install-recommends git zip && composer install
COPY . /var/www/html
EXPOSE 80
When this image is built, it has a size of ~500 MB. I'm trying to compress this image to < 100 MB so that I can use it on Zeit Now. According to what I'm reading in the Docker documentation, multi-stage builds sometimes help in making images smaller. My current idea is to split the Dockerfile into two stages: one where I install the dependencies with composer, and another with just PHP and Apache. I can't seem to get it right, though. Any suggestions?
This is what I have so far:
# first stage
FROM composer:latest
COPY composer.json composer.lock ./
RUN composer install
For the second stage, I tried this
FROM httpd:2.4-alpine
LABEL name "medico-app"
COPY --from=0 /app/vendor ./vendor
COPY . /usr/local/apache2/htdocs/
EXPOSE 80
However, when I run the container now, the PHP files aren't served; I just see them as text. I'm probably missing something here with PHP/Apache.
EDIT:
I also tried this for the second stage but I can't get it to work:
FROM php:7.2-alpine
LABEL name "medico-app"
RUN apk --no-cache update && apk --no-cache add apache2 openrc
COPY --from=0 /app/vendor ./vendor
COPY . /var/www/
EXPOSE 80
Now when I open localhost I don't see the PHP files that I should see. I just see the default "It works" page.
General tips for making Docker images smaller:
Use a minimal base image such as the Alpine variants. In this case you can use something like php:7.2-alpine and install Apache using apk.
When using apt-get, follow the best practices. In particular, append && rm -rf /var/lib/apt/lists/* to your install command.
Try minifying the code being added to the image using something like gulp minify.
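Combining these tips with the multi-stage idea, a two-stage sketch based on the question's original php:7.2-apache image might look like this (flags and paths are illustrative):
# Stage 1: install PHP dependencies only
FROM composer:latest AS deps
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --prefer-dist

# Stage 2: runtime image without composer, git or zip
FROM php:7.2-apache
COPY . /var/www/html
COPY --from=deps /app/vendor /var/www/html/vendor
EXPOSE 80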

Cache dependency in gitlab-ci / docker

I don't know how to cache dependencies in GitLab CI with Docker.
My project has 82 dependencies, and builds get very slow (vendor is in .gitignore).
Full process:
change local file -> comit and push to repo remote -> run gitlab-ci -> build docker image -> push image to other server -> publish image
My example project:
app -> my files (html, img, php, css, anything)
gitlab-ci.yml
composer.json
composer.lock
Makefile
Dockerfile
Dockerfile:
FROM hub.myserver.test/image:latest
ADD . /var/www
CMD cd /var/www
RUN composer install --no-interaction
RUN echo "#done" >> /etc/sysctl.conf
gitlab-ci:
build:
  script:
    - make build
  only:
    - master
Makefile:
all: build

build:
	docker build -t hub.myserver.test/new_image .
How can I cache dependencies (composer.json)? I don't want to download the libraries from scratch every time.
Usually it's not a good idea to run composer install inside your image build. I assume you ultimately need to run your PHP app, not composer itself, so you can avoid shipping it in production.
One possible solution is to split app image creation into two steps:
Install everything outside the image
Copy the ready-made files into the image
.gitlab-ci.yml
stages:
  - compose
  - build

compose:
  stage: compose
  image: composer # or you can use your hub.myserver.test/image:latest
  script:
    - composer install # install packages
  artifacts:
    paths:
      - vendor/ # save them for the next job

build:
  stage: build
  script:
    - docker build -t hub.myserver.test/new_image .
    - docker push hub.myserver.test/new_image
So in the Dockerfile you just copy the files (including the vendor directory from the first job's artifacts) into the image workdir:
# you can build from your own image
FROM php
COPY . /var/www
WORKDIR /var/www
# optional, if you want to replace CMD of base image
CMD [ "php", "./index.php" ]
Another nice property of this setup is that you can test your code before building an image with it. Just add a test job between compose and build.
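Such a test job might look like this (a sketch; it assumes phpunit was installed into vendor/ by the compose job, and that "test" is added to the stages list):
test:
  stage: test
  image: php # or your hub.myserver.test/image:latest
  script:
    - vendor/bin/phpunit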
Live example # gitlab.com

php composer inside container lost vendor directory

I have the following Dockerfile
FROM bitgandtter/sf:php7
# basic env fix
ENV TERM xterm
# install packages
ADD . /var/www
# update dependencies
RUN cd Helpers && SYMFONY_ENV=prod composer update -o --no-dev
ENV SYMFONY_ENV prod
After building the image, the Helpers directory does not contain the vendor directory.
I really don't know why, since the preceding composer update executed successfully and the image was created just fine.
Any help, please?
NOTE: the image bitgandtter/sf:php7 use a VOLUME declaration on /var/www
In fact I discovered that the VOLUME declaration in the base image was the main issue.
As explained in the official docs, once a VOLUME is declared in a Dockerfile, any file changes made to that volume by subsequent build steps are discarded.
So the solution is to not declare VOLUMEs in base images.
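The effect can be illustrated with a minimal sketch (the base image name here is hypothetical):
# Hypothetical base image whose Dockerfile contains: VOLUME /var/www
FROM some/base-with-volume
ADD . /var/www
# Because /var/www is already declared a volume, files written to it
# by later build steps are discarded from the resulting image:
RUN cd /var/www/Helpers && composer update # vendor/ is lost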
