php composer inside container lost vendor directory

I have the following Dockerfile
FROM bitgandtter/sf:php7
# basic env fix
ENV TERM xterm
# install packages
ADD . /var/www
# update dependencies
RUN cd Helpers && SYMFONY_ENV=prod composer update -o --no-dev
ENV SYMFONY_ENV prod
After building the image, the Helpers directory does not contain the vendor directory.
I really don't know why, since the composer update executed successfully and the image was built just fine.
Any help, please?
NOTE: the base image bitgandtter/sf:php7 uses a VOLUME declaration on /var/www

In fact, I discovered that the VOLUME declaration in the base image was the main issue.
As explained in the official docs, once a VOLUME has been declared in a Dockerfile, any changes made to files inside that volume by later build steps are discarded.
So the solution is not to declare VOLUMEs in base images.
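For illustration, a minimal sketch of the workaround, under the assumption that you build your own base image without the VOLUME declaration (the image name here is a placeholder, and composer is assumed to already be available in the base):
FROM bitgandtter/sf-no-volume:php7
ENV TERM xterm
ADD . /var/www
# /var/www is a plain directory at this point, so vendor/ ends up in the image layer
RUN cd Helpers && SYMFONY_ENV=prod composer update -o --no-dev
ENV SYMFONY_ENV prod
# if a volume is really needed, declare it only after the files are in place;
# content added before the VOLUME instruction is kept as the volume's initial data
VOLUME /var/www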


Optimizing Laravel docker image

Updated
I updated the Dockerfile below for anyone who wants a good Dockerfile for their Laravel application.
I'm trying to build a Docker image from my Laravel application. My application plus all its dependencies is about 380 MB, but the image turns out to be 840 MB. I used a multi-stage build as Ivan suggested (which halved the size of the image; it was 1.2 GB at first). But I'm still wondering why my Docker image is this big, and how I can reduce its size.
Here is my Dockerfile:
# Instruction adapted from https://laravel-news.com/multi-stage-docker-builds-for-laravel
# PHP Dependencies
FROM composer:latest as vendor
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
--no-dev \
--ignore-platform-reqs \
--no-interaction \
--no-plugins \
--no-scripts \
--prefer-dist
# Frontend
FROM node:16.13.1 as frontend
RUN mkdir -p /app/public
COPY package.json webpack.mix.js tailwind.config.js /app/
COPY resources/ /app/resources/
COPY public/ /app/public/
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci && npm run production
# Application
FROM php:7.4-apache
COPY . /var/www/html
COPY --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --from=frontend /app/public/ /var/www/html/public/
Your image is big because it contains every application you installed via apt-get, along with their dependencies.
There are multiple ways to solve the problem:
use a multi-stage build
use a suitable base image
use Alpine Linux
Multistage build
Use one base image to fetch, build and test your app, then copy only the results you need into the next stage.
FROM ubuntu:18.04 AS build
# build the application here
FROM php:7.4.27-fpm-alpine AS final
COPY --from=build /app /app
Suitable base image
Use an image that already contains the environment you need to run the application, so there is no need to install all that extra baggage.
Use Alpine linux
Use images based on Alpine or a similar distro that is optimized for Docker/cloud use, and build your app on top of them.
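For illustration, a minimal sketch of what the final stage of the Dockerfile above could look like on an Alpine base (php:7.4-fpm-alpine and the pdo_mysql extension are assumptions, and switching from Apache to FPM means a separate web server such as nginx has to serve the app):
# Application, Alpine variant; the vendor and frontend stages stay unchanged
FROM php:7.4-fpm-alpine
# install only the extensions the app actually needs
RUN docker-php-ext-install pdo_mysql
COPY . /var/www/html
COPY --from=vendor /app/vendor/ /var/www/html/vendor/
COPY --from=frontend /app/public/ /var/www/html/public/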

Symfony 4 is painfully slow in DEV

I'm trying to run a simple Symfony 4 project in a Docker container.
I have tested regular PHP scripts, and they work very well. But with the Symfony project, execution gets ridiculously slow. For example, a page without any significant content takes 5-6 seconds.
I have attached screenshots from Symfony's performance profiler.
Do you have any idea how to reduce this execution time to an acceptable level?
It seems that changing the consistency level greatly improves Symfony performance (see the Docker docs).
Here is my new docker-compose.yml file. Note the ":cached" after the volume.
version: '3'
services:
  web:
    image: apache-php7
    ports:
      - "80:80"
    volumes:
      - .:/app:cached
    tty: true
Note from manual:
For directories mounted with cached, the host’s view of the file
system is authoritative; writes performed by containers are
immediately visible to the host, but there may be a delay before
writes performed on the host are visible within containers.
Since the provided answer only works for macOS, while the performance issues exist with Docker for Windows as well, the accepted answer didn't help in my case. I followed a different approach, partially described in answers to similar questions here on SO.
According to the Performance Best Practices, folders with heavy I/O such as vendor and var in a Symfony application shouldn't be part of a shared mount. If you need to persist those folders, you should use volumes instead.
To prevent interference with the shared volume at /app, I relocated those two folders to a separate folder /symfony in the container. In the Dockerfile, the folders /symfony/var and /symfony/vendor are created in addition.
The script run on container start sets up symbolic links from /app/var to /symfony/var and from /app/vendor to /symfony/vendor. These two new folders are then mounted as volumes, e.g. in a docker-compose.yml file.
Here is what I was adding to my Dockerfile:
RUN mkdir -p /app /symfony/var /symfony/vendor
COPY setup-symfony.sh /setup-symfony.sh
VOLUME /symfony/var
VOLUME /symfony/vendor
Here is what I was adding to my startup script right before invoking composer update or any task via bin/console:
[ -e /app/var ] || ln -s /symfony/var /app/var
[ -e /app/vendor ] || ln -s /symfony/vendor /app/vendor
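For completeness, a minimal sketch of what such a setup-symfony.sh could look like (the script name comes from the Dockerfile above; everything besides the two symlink lines is an assumption):
#!/bin/sh
# link the relocated folders back into the shared /app mount
[ -e /app/var ] || ln -s /symfony/var /app/var
[ -e /app/vendor ] || ln -s /symfony/vendor /app/vendor
# then hand over to whatever command the container was started with
exec "$@"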
This is what my composition looks like eventually:
version: "3.5"
services:
database:
build:
context: docker/mysql
volumes:
- "dbdata:/var/lib/mysql"
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: 1
application:
depends_on:
- database
build:
context: docker/lamps
ports:
- "8000:8000"
volumes:
- ".:/app:cached"
- "var:/symfony/var"
- "vendor:/symfony/vendor"
environment:
DATABASE_URL: mysql://dbuser:dbuser#database/dbname
volumes:
dbdata:
var:
vendor:
Using this setup Symfony is responding within 500ms rather than taking 4000ms and more.
UPDATE: When using an IDE such as PhpStorm for developing a Symfony-based application, you might need the files in vendor/ for code assist and similar features. In my case I was able to take a snapshot of those files and put them into a different folder which is shared with the host as well but isn't actively used by Symfony/PSR autoloading, e.g. vendor.dis/. This snapshot is taken manually once per install/upgrade, e.g. by entering the running container with a shell like so:
docker exec -it IDofContainer /bin/sh
Then in shell invoke
cp -Lr vendor vendor.dis
You may have to adjust the pathnames, or make sure to switch into the folder containing your app first.
In my case, using PhpStorm, vendor.dis/ was picked up by background indexing and used by code inspection and code assist. Visual Studio Code had issues with the large number of untracked changes with regard to git, so I had to explicitly make git ignore this snapshot by adding its name to the .gitignore file.
UPDATE 2020: More recent setups may have issues accessing folders like /symfony/templates or /symfony/public, e.g. on warming up the cache. This is apparently due to relative folders used by the autoloading code, which now lives in /symfony/vendor because of the relocation described above. As an option, you can mount the extra volumes directly at /app/var and /app/vendor instead of /symfony/var and /symfony/vendor, as sketched below. Creating deep copies of those folders in /app/var.dis and /app/vendor.dis keeps code assist and inspections working in the host filesystem.
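A minimal sketch of that variant, assuming the same volume names as in the composition above (only the mount targets change):
  application:
    volumes:
      - ".:/app:cached"
      - "var:/app/var"
      - "vendor:/app/vendor"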
do not sync the vendor folder
In your docker-compose file, you can prevent the vendor folder from being synced into the container. This has the biggest impact on performance, because that folder gets very large:
# docker-compose.yml:
volumes:
  - /local/app:/var/www/html/app
  - /var/www/html/app/vendor # ignore vendor folder
The consequence is that you need to copy the vendor folder into the container manually, once after the build and whenever you update your composer dependencies:
docker cp /local/app/vendor <CONTAINER_ID>:/var/www/html/app/
do not sync the cache folder
in your src/Kernel.php:
public function getCacheDir()
{
    // for docker performance
    if ($this->getEnvironment() === 'test' || $this->getEnvironment() === 'dev') {
        return '/tmp/'.$this->environment;
    } else {
        return $this->getProjectDir().'/var/cache/'.$this->environment;
    }
}
sync the app folders in cached mode
use cached mode for volume mounts in development environments: http://docs.docker.oeynet.com/docker-for-mac/osxfs-caching/#delegated
The cached configuration provides all the guarantees of the delegated
configuration, and some additional guarantees around the visibility of
writes performed by containers. As such, cached typically improves the
performance of read-heavy workloads, at the cost of some temporary
inconsistency between the host and the container.
For directories mounted with cached, the host’s view of the file
system is authoritative; writes performed by containers are
immediately visible to the host, but there may be a delay before
writes performed on the host are visible within containers.
This makes sense for dev environments, because normally you change your code with your IDE on the host, not in the container, and the changes are synced into the container.
# docker-compose.yml:
volumes:
  - /local/app:/var/www/html/app:cached
disable Docker debug mode
check if Docker is NOT in debug mode:
docker info
# it should display: Debug Mode: false
Disable it in the Docker daemon config (daemon.json):
{
  "debug": false
}
do not use a file cache
this is extra slow in a Docker box; use, for example, an SQLite cache: Symfony SQLite Cache
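For illustration, a minimal sketch of a SQLite-backed cache pool using Symfony's PdoAdapter (the file path is a placeholder, and wiring the pool into the framework configuration is left out):
use Symfony\Component\Cache\Adapter\PdoAdapter;
// keep the cache in a local SQLite file instead of var/cache on the shared mount
$cache = new PdoAdapter('sqlite:/tmp/app-cache.db');
$cache->createTable(); // create the cache table once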
for Windows 10 users: Use Docker Desktop with WSL 2 support
Use Docker Desktop with WSL 2 support, which incredibly boosts performance in general:
https://docs.docker.com/docker-for-windows/wsl/
Prevent syncing the vendor directory with the container:
# docker-compose.yml:
volumes:
  - ./app:/var/www
  - /var/www/vendor # ignore vendor folder
When building, copy the vendor folder to its location in the container in your Dockerfile:
# Dockerfile
COPY app/vendor /var/www/vendor
Sebastian Viereck's answer helped me solve this. Loading went from 14000 ms to 500 ms on average on Symfony 5.3.
The only downside is that you have to rebuild after you add or update something via composer. But that's not too bad.
One more very important thing for container performance:
it's essential to check whether a Dockerfile builds unnecessary layers.
Bad practice -> multiple unnecessary RUN instructions
Best practice -> chain commands with the shell's && as often as possible
For example, we might write in our Dockerfile:
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
&& apt-get update && apt-get install -y --no-install-recommends \
locales apt-utils git \
\
&& echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
&& echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen \
Instead of:
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN apt-get update && apt-get install -y --no-install-recommends \
locales apt-utils git
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
&& echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen
RUN locale-gen
More layers make the container slower... Check your server Dockerfiles, friends!
I hope this helps someone somewhere!
You can avoid using bind mounts, which are extremely slow on Mac or Windows when they contain a large number of files.
Instead, you can sync files between the host and the container volumes by using Mutagen; it's almost as fast as native Linux. A benchmark is available here.
Here is a basic configuration of Mutagen:
sync:
  defaults:
    ignore:
      vcs: true
    permissions:
      defaultFileMode: 644
      defaultDirectoryMode: 755
  codebase:
    alpha: "./app" # dir of your app
    beta: "docker://project_container_1/var/www" # targets an absolute path in the container named project_container_1
    mode: "two-way-resolved"
This repository shows a full configuration with a simple PHP project (Symfony 5) but it can be used for any type of project in any language.
I would recommend using docker-sync. I have used it myself, and it reduced the load time of my Laravel-based app.
Developing with Docker under OSX/Windows is a huge pain, since sharing your code into containers slows down code execution by a factor of about 60 (depending on the solution). Testing and working with a lot of the alternatives made us pick the best of those for each platform, and combine them in one single tool: docker-sync.
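For reference, a minimal sketch of a docker-sync.yml (the sync name, source path and excludes are placeholders; the matching external volume then has to be mounted in docker-compose.yml):
version: "2"
syncs:
  app-sync: # this name is also the name of the external volume to mount in docker-compose.yml
    src: './app'
    sync_excludes: ['.git', 'vendor', 'var']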

Installing composer in a docker image

So I tried many ways, but this one seems the most legitimate way of installing Composer in a Docker image:
https://hub.docker.com/r/composer/composer/
Now when I try to do the same, I'm getting:
latest: Pulling from composer/composer
Status: Image is up to date for composer/composer:latest
Composer could not find a composer.json file in /app
When I run it:
docker run --rm -v $(pwd):/app composer/composer install
Even if I change the /app path to /var/www, I still get the same error that it can't find the composer.json in /app.
How do I make it look in the directory where I need Composer? I don't understand the $(pwd) part, but I have tried it several ways. I don't have any volumes created, as I'm using PHP from tutum/lamp.
I am also not using docker-compose, so if possible, tell me a way of doing it without docker-compose.
I can provide more details if needed.
EDIT
Could it be something regarding the volumes? Currently I have the composer.json file in /var/protobuf, and I use the command docker run --rm -v $(pwd):/var/protobuf composer/composer install to tell it about the directory of the JSON file, but it still keeps looking in /app. Maybe I need to create volumes?
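For context on the $(pwd) part: it is shell command substitution that expands to the directory you run the command from, and that directory is bind-mounted onto the container path given after the colon. Since the error says the image looks for composer.json in /app, a minimal sketch (assuming composer.json lives in /var/protobuf) would be:
cd /var/protobuf
# $(pwd) now expands to /var/protobuf, which is mounted at /app inside the container
docker run --rm -v $(pwd):/app composer/composer install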

composer vendor/bin files not being detected on Ubuntu Terminal

Yesterday I installed Laravel with Behat on my Ubuntu 15.10 VM.
Everything worked fine; running the command $ vendor/bin/behat --init successfully created the features/ folder.
But today something is weird: when running $ vendor/bin/behat, it says vendor/bin/behat: line 1: ../behat/behat/bin/behat: No such file or directory
What's inside the vendor/bin/behat file?
Just this single line: ../behat/behat/bin/behat
Accessing the actual location works, $ vendor/behat/behat/bin/behat, which basically means the file DOES exist.
Please note that the issue is the same for the other files in vendor/bin, like doctrine, phpspec, etc.
You're having relative path problems. If your current directory contains vendor/ and you execute vendor/bin/behat, then ../behat/behat/bin/behat doesn't exist, because it resolves one directory up from your current directory, not from vendor/bin/. For example:
$ cd $HOME/project
$ vendor/bin/behat
vendor/bin/behat: line 1: ../behat/behat/bin/behat: No such file or directory
That relative path becomes $HOME/project/behat/behat/bin/behat and not $HOME/project/vendor/behat/behat/bin/behat (note vendor present in the second path)
You need to be inside vendor/bin/ when executing behat:
$ cd $HOME/project/vendor/bin
$ behat
...
However, I don't see this being an issue with the latest Behat install; line #1 should be a well-formed shebang. I think you might want to destroy your vendor install, update Composer, etc., and reinstall Behat. Those files should not start with relative paths.
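A minimal sketch of that clean reinstall (the exact commands depend on how Composer is installed on your machine):
rm -rf vendor/
composer self-update
composer install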
EDIT:
According to the composer docs, it creates symlinks to package binaries, as seen in the source code. You can verify this by running ls -l vendor/bin (all symlinks will have a -> pointing to their destination path). It would seem your original php composer.phar require ... was corrupt from the beginning.

GitLab-CI Multi Runner php composer cache

I'm using gitlab-ci-multi-runner with Docker containers. Everything is going fine, but the Docker containers don't keep the Composer cache, so on every run Composer downloads the dependencies again and again, which takes a lot of time. Is there any way to configure the gitlab-ci-runner Docker container to keep the Composer cache, or to mount a volume on each run where the Composer cache is kept?
You can change the composer cache path by exporting the COMPOSER_CACHE_DIR environment variable in your runner configuration file, and then add a volume in the [runners.docker] section to match it.
If you run gitlab-runner as root or with sudo, then your configuration file is located at /etc/gitlab-runner/config.toml. Otherwise it's located at $HOME/.gitlab-runner/config.toml.
# config.toml
[[runners]]
  name = "Generic Docker Runner"
  ...
  environment = ["COMPOSER_CACHE_DIR=/cache"]
  executor = "docker"
  [runners.docker]
    ...
    volumes = ["/var/cache:/cache:rw"]
    cache_dir = "/cache"
You could modify the Composer cache path and write it to a Docker volume.
That storage is persistent and can be shared across containers.
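As an alternative sketch, if you prefer GitLab CI's built-in cache over a runner-level Docker volume, you can point Composer's cache into the project directory and cache that path (the directory name is just a placeholder):
# .gitlab-ci.yml
variables:
  COMPOSER_CACHE_DIR: "$CI_PROJECT_DIR/.composer-cache"
cache:
  paths:
    - .composer-cache/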
Referencing:
https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/advanced-configuration.md#volumes-in-the-runnersdocker-section
https://docs.docker.com/engine/admin/volumes/volumes/
