Symfony 4 is painfully slow in DEV - php

I'm trying to run a simple Symfony 4 project in a docker container.
I have tested regular PHP scripts, and they work very well. But with a Symfony project, execution gets ridiculously slow. For example, a page without any significant content takes 5-6 seconds.
I have attached the screenshots from Symfony's performance profiler.
Do you have any idea how to reduce this execution time to an acceptable level?

It seems that changing the volume mount's consistency level greatly improves Symfony performance (see the Docker docs).
Here is my new docker-compose.yml file. Note the ":cached" after the volume.
version: '3'
services:
  web:
    image: apache-php7
    ports:
      - "80:80"
    volumes:
      - .:/app:cached
    tty: true
Note from manual:
For directories mounted with cached, the host’s view of the file
system is authoritative; writes performed by containers are
immediately visible to the host, but there may be a delay before
writes performed on the host are visible within containers.
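To roughly verify the improvement, you can time a request from the host before and after adding ":cached" (a quick check of my own, not part of the original answer; the URL assumes the port mapping above):
time curl -s -o /dev/null http://localhost/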

Since the answer above works for macOS only, while performance issues exist with Docker for Windows as well, it didn't help in my case. So I followed a different approach, partially described in answers to similar questions here on SO.
According to the Performance Best Practices, folders with heavy load such as vendor and var in a Symfony application shouldn't be part of a shared mount. If you need to persist those folders, you should use volumes instead.
To prevent interference with the shared volume in /app, I relocated those two folders to a separate folder /symfony in the container. In the Dockerfile, the folders /symfony/var and /symfony/vendor are created in addition.
The script run on container start sets symbolic links from /app/var to /symfony/var and from /app/vendor to /symfony/vendor. These two new folders are then mounted as volumes, e.g. in a docker-compose.yml file.
Here is what I was adding to my Dockerfile:
RUN mkdir -p /app /symfony/var /symfony/vendor
COPY setup-symfony.sh /setup-symfony.sh
VOLUME /symfony/var
VOLUME /symfony/vendor
Here is what I was adding to my startup script right before invoking composer update or any task via bin/console:
[ -e /app/var ] || ln -s /symfony/var /app/var
[ -e /app/vendor ] || ln -s /symfony/vendor /app/vendor
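For context, here is a minimal setup-symfony.sh sketch; only the two symlink lines come from the original answer, the shebang and the composer call are illustrative:
#!/bin/sh
# Link the bind-mounted /app paths onto the volume-backed folders (idempotent).
[ -e /app/var ] || ln -s /symfony/var /app/var
[ -e /app/vendor ] || ln -s /symfony/vendor /app/vendor
# Dependencies can now be installed into the fast volume through the symlink.
composer install --working-dir=/app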
This is what my composition looks like eventually:
version: "3.5"
services:
database:
build:
context: docker/mysql
volumes:
- "dbdata:/var/lib/mysql"
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: 1
application:
depends_on:
- database
build:
context: docker/lamps
ports:
- "8000:8000"
volumes:
- ".:/app:cached"
- "var:/symfony/var"
- "vendor:/symfony/vendor"
environment:
DATABASE_URL: mysql://dbuser:dbuser#database/dbname
volumes:
dbdata:
var:
vendor:
Using this setup, Symfony responds within 500 ms rather than taking 4000 ms and more.
UPDATE: When using an IDE for developing a Symfony-based application, like PhpStorm, you might need the files in vendor/ for code assist or similar. In my case I was able to take a snapshot of those files and put them into a different folder which is shared with the host as well, but isn't actively used by Symfony/PSR, e.g. vendor.dis/. This snapshot is taken manually once per install/upgrade, e.g. by entering the running container with a shell like so:
docker exec -it IDofContainer /bin/sh
Then in shell invoke
cp -Lr vendor vendor.dis
You may have to fix the pathnames or make sure to switch into the folder containing your app first.
In my case, using PhpStorm, the vendor.dis/ folder was picked up by background indexing and obeyed by code inspection and code assist. Visual Studio Code had issues with the large number of untracked changes with regard to git, so I had to explicitly make git ignore this snapshot by adding its name to the .gitignore file.
UPDATE 2020: More recent setups may have issues accessing folders like /symfony/templates or /symfony/public, e.g. on warming up the cache. This is obviously due to relative folders used by autoloading code, which now lives in /symfony/vendor after the relocation described above. As an option, you could directly mount the extra volumes at /app/var and /app/vendor instead of /symfony/var and /symfony/vendor. Creating deep copies of those folders in /app/var.dis and /app/vendor.dis keeps enabling code assist and inspections in the host filesystem.
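A sketch of that alternative mounting, adapting the application service from the compose file above (volume names unchanged):
volumes:
  - ".:/app:cached"
  - "var:/app/var"
  - "vendor:/app/vendor"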

do not sync the vendor folder
In your docker-compose.yml, you can prevent the vendor folder from syncing with the container. This has the biggest impact on performance, because the folder gets very large:
# docker-compose.yml:
volumes:
  - /local/app:/var/www/html/app
  - /var/www/html/app/vendor # ignore vendor folder
This has the effect that you will need to copy the vendor folder to the container manually, once after the build and whenever you update your composer dependencies:
docker cp /local/app/vendor <CONTAINER_ID>:/var/www/html/app/
do not sync the cache folder
in your src/Kernel.php:
public function getCacheDir()
{
    // for docker performance
    if ($this->getEnvironment() === 'test' || $this->getEnvironment() === 'dev') {
        return '/tmp/'.$this->environment;
    } else {
        return $this->getProjectDir().'/var/cache/'.$this->environment;
    }
}
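The same trick can be applied to the log directory if writes to var/log are also slow; this is my addition, not part of the original answer, but Symfony's Kernel exposes getLogDir() analogously:
public function getLogDir()
{
    // for docker performance
    if ($this->getEnvironment() === 'test' || $this->getEnvironment() === 'dev') {
        return '/tmp/log/'.$this->environment;
    }

    return $this->getProjectDir().'/var/log';
}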
sync the app folders in cached mode
use cached mode for volume mounts on development environments: http://docs.docker.oeynet.com/docker-for-mac/osxfs-caching/#delegated
The cached configuration provides all the guarantees of the delegated
configuration, and some additional guarantees around the visibility of
writes performed by containers. As such, cached typically improves the
performance of read-heavy workloads, at the cost of some temporary
inconsistency between the host and the container.
For directories mounted with cached, the host’s view of the file
system is authoritative; writes performed by containers are
immediately visible to the host, but there may be a delay before
writes performed on the host are visible within containers.
This makes sense for dev environments, because normally you change your code with your IDE on the host, not in the container, and it is synced into the container.
# docker-compose.yml:
volumes:
  - /local/app:/var/www/html/app:cached
disable Docker debug mode
check if Docker is NOT in debug mode:
docker info
# It should display: Debug Mode: false
Disable it in the Docker daemon configuration (daemon.json, or the engine settings in Docker Desktop):
{
  "debug": false
}
do not use a file cache
this is extra slow in a docker box; use for example an SQLite cache: Symfony Sqlite Cache
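For illustration, a minimal sketch of pointing Symfony's app cache at an SQLite database through the PDO adapter; the keys are from FrameworkBundle's cache configuration of the Symfony 4/5 era, the file path is an assumption, so check the cache reference for your version:
# config/packages/cache.yaml
framework:
    cache:
        app: cache.adapter.pdo
        default_pdo_provider: 'sqlite:///%kernel.project_dir%/var/app-cache.db'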
for Windows 10 users: use Docker Desktop with WSL 2 support
Use Docker Desktop with WSL 2 support, which incredibly boosts performance in general:
https://docs.docker.com/docker-for-windows/wsl/
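One caveat worth adding: the boost mainly applies when the project lives inside the WSL 2 filesystem; bind mounts from the Windows drive (/mnt/c/...) remain slow. For example (the repository URL is a placeholder):
# Clone into the Linux filesystem, not /mnt/c, then run compose from there
git clone https://example.com/your/repo.git ~/projects/app
cd ~/projects/app
docker-compose up -d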

Prevent syncing the vendor directory with the container:
# docker-compose.yml:
volumes:
  - ./app:/var/www
  - /var/www/vendor # ignore vendor folder
When building, copy the vendor folder to its container location in your Dockerfile:
# Dockerfile
COPY app/vendor /var/www/vendor
Sebastian Viereck's answer helped me solve this. Loading went from 14000 ms to 500 ms on average on Symfony 5.3.
The only downside is that you have to rebuild after you add/update something via composer. But that's not too bad.
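That rebuild is a one-liner; the service name app is an assumption from my compose file:
docker-compose build app && docker-compose up -d app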

One more very important thing for container performance:
it's essential to check whether a Dockerfile builds unnecessary layers.
For example,
Bad practice -> using multiple unnecessary RUN instructions
Best practice -> chaining commands with the shell's && in a single RUN as often as possible
For example, we might write in our Dockerfile:
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
&& apt-get update && apt-get install -y --no-install-recommends \
locales apt-utils git \
\
&& echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
&& echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen \
Instead of:
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN apt-get update && apt-get install -y --no-install-recommends \
        locales apt-utils git
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
    && echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen
RUN locale-gen
More layers add to a container's slowness... check your server Dockerfiles, friends!
I hope this comment helps someone somewhere!

You can avoid using bind mounts, which are extremely slow on Mac or Windows when they contain a large number of files.
Instead, you can sync files between the host and the container volumes by using Mutagen; it's almost as fast as native Linux. A benchmark is available here.
Here is a basic configuration of Mutagen:
sync:
  defaults:
    ignore:
      vcs: true
    permissions:
      defaultFileMode: 644
      defaultDirectoryMode: 755
  codebase:
    alpha: "./app" # dir of your app
    beta: "docker://project_container_1/var/www" # targets an absolute path in the container named project_container_1
    mode: "two-way-resolved"
This repository shows a full configuration with a simple PHP project (Symfony 5) but it can be used for any type of project in any language.

I would recommend using docker-sync. I have used it myself, and it reduced the load time of my Laravel-based app.
Developing with docker under OSX/Windows is a huge pain, since sharing your code into containers will slow down the code execution about 60 times (depending on the solution). Testing and working with a lot of the alternatives made us pick the best of those for each platform, and combine this in one single tool: docker-sync.
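A minimal docker-sync.yml sketch; the sync name and path are placeholders, and strategy options vary by platform, so treat this as a starting point rather than the tool's canonical config:
version: "2"
syncs:
  app-sync:      # named volume to reference from your docker-compose.yml
    src: './app' # host folder to sync
Start syncing with docker-sync start, or use docker-sync-stack start to bring up the compose services as well.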

Related

docker-compose overrides directories in the container

Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single container environment since it just connects to the database which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we had a specific RUN composer install line in the Dockerfile. We would have to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that if we overrode the command via docker-compose, the application would run seamlessly when deployed to our server, as it should; but when changing the default command in the Dockerfile, it would suffer about a minute of downtime every time we deployed.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the composer dependencies was that we had a volume specified in the docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping for somebody to shed some light on all this, since I don't fully understand what's going on: why running docker-compose does not install the PHP dependencies, but including composer install in the default command does, and why adding composer install to the docker-compose.yml is better. Furthermore, how do volumes come into all this, and are they the real reason for all the hassle?
Our current docker file looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml looks like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your first composer install within the Dockerfile works fine, and your resulting image has vendor/ etc.
But later you create a container from that image, and that container is executed with the whole directory replaced by a host dir mount:
volumes:
  - .:/srv/app
So your docker image has both your files and the installed vendor files, but then you replace the project directory with the one on your host, which does not have the vendor files, and the final result looks like the build was never done.
My advice would be:
don't run the build step (composer install) a second time as the container's command; keep it in the Dockerfile
mount individual folders in your container, i.e. not .:/srv/app but ./src:/srv/app/src, etc. (see the sketch after this list)
or map the whole folder, but copy the vendor files from the image/container to your host
or use some 3rd-party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
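For instance, the second option could look like this in docker-compose.yml (the folder names are illustrative, not from the question):
volumes:
  - ./src:/srv/app/src
  - ./public:/srv/app/public
  # vendor/ is intentionally not mounted, so the image's copy stays visible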

docker-compose using old volumes

I am trying to set up a CI pipeline with docker-compose and am struggling to understand how named volumes work.
As part of my Dockerfile, I copy in the application files and then run composer install to install the application dependencies. There are some elements of the application files and of the dependencies that I want to share with the other containers that are running / are set up to run utility processes (such as database migrations). See the example below:
Dockerfile:
FROM php:5.6-apache
# Install dependencies
COPY composer.* /app/
RUN composer install --no-dev
# Copy application files
COPY bin bin
COPY environment.json environment.json
VOLUME /app
docker-compose.yml
web:
  build:
    context: .
    dockerfile: docker/web/Dockerfile
  volumes:
    - app:/app
    - ~/.cache/composer:/composer/cache
migrations:
  image: my-image
  depends_on:
    - web
  environment:
    - DB_DRIVER=pdo_mysql
    - AUTOLOADER=../../../vendor/autoload.php
  volumes:
    - app:/app
  working_dir: /app/vendor/me/my-lib
volumes:
  app:
In the example above (irrelevant information omitted), I have a "migrations" service that pulls the migrations from the application dependencies installed with composer. My idea is that when I perform docker-compose build followed by docker-compose up, it will bring up the latest version of software with the latest dependencies and run the latest migrations at the same time.
This works fine the first time. Unfortunately on subsequent runs I cannot get docker-compose to use the new versions. If I run docker-compose build, I can see the composer install run and install all the latest libraries, but then when I go into the container with docker-compose run web /bin/bash, the old dependencies are in there! If I run the image directly with docker run web_1, I can see all the latest files no problem. So it's definitely a compose-specific problem.
I assume I need to do something like clear out the volume cache, but whatever I have tried doesn't seem to work. I can only assume I am misunderstanding the idea of volumes.
Any help would be hugely appreciated. Thanks!
What I understand from your question is that you want to run composer install every time you run your container. In that case you have to use the CMD instruction to execute that command:
CMD composer install --no-dev
RUN and CMD are both Dockerfile instructions.
RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer.
For example if you wanted to install a package or create a directory inside of your Docker image then RUN will be what you’ll want to use. For example, RUN mkdir -p /path/to/folder.
CMD lets you define a default command to run when your container starts.
You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
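To make the distinction concrete, a small sketch (the base image and commands are illustrative):
FROM php:5.6-apache
# Build time: executed once, baked into the image as a layer.
RUN mkdir -p /path/to/folder
# Run time: the default process started with each container.
CMD ["apache2-foreground"]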
The problem here has to do with mounting a volume over a location defined in the build. The first build of the image has composer put its output into /app, and the first run of that build mounts the app named volume onto /app. This clobbers the image's version of /app with a new write layer on top. Mounting this named volume on the second build of the image will keep the original contents of /app.
Instead of using a named volume, use volumes_from to load the exported /app volume from web into the migration container.
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    volumes:
      - ~/.cache/composer:/composer/cache
  migrations:
    image: docker-registry.efficio.digital:5043/doctrine-migrator:1.1
    depends_on:
      - web
    environment:
      - DB_DRIVER=pdo_mysql
      - AUTOLOADER=../../../vendor/autoload.php
    volumes_from:
      - web:ro

Could not debug a PHP application with PhpStorm when it runs in Docker?

My PHP application runs in docker. My IDE is PhpStorm. I configured it like this:
My Docker configuration contains:
RUN yes | pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_enable=on" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_autostart=off" >> /usr/local/etc/php/conf.d/xdebug.ini
and docker-compose.yml contains:
environment:
  XDEBUG_CONFIG: "remote_host=192.168.0.111 idekey=phpstorm"
  PHP_XDEBUG_ENABLED: 1 # Set 1 to enable.
And I run my docker with:
docker-compose up
When I access a page, it is not hitting my breakpoints.
What setting am I missing here?
Debug setting in PhpStorm:
It's really hard to get this working. The short answer is that Docker is not for debugging; use Vagrant.
Docker is meant to run tiny applications called containers. You want to run the smallest possible process, like a database, and then run your HTTP server in another container. Because of this, the standard containers are all bare-bones. They're not meant to solve complicated issues. Docker is for production.
Vagrant, on the other hand, is well suited to developers. It has a lot of niceties that support the developer and make life easier. It works on Mac, Windows, and Linux, and it runs the same way on all of them, so you can easily use it in a team setting by sharing just the Vagrantfile; you get "cloning". It even mounts local folders, and thus gives you real-time updates with your HTTP server. You can also destroy the Vagrant image over and over again, which is really nice. A good tip is to record all your setup steps in the Vagrantfile. When you have a good Vagrant setup, destroy the Vagrant image, recreate it, and never touch what's inside of it again. This also really helps when you put a project aside for 6 months and can't remember what you did back then.

How to handle permissions inside a volume from docker?

I have a container running a PHP application, and the php-fpm service can't write the cache files inside a folder provided by a docker volume.
I gave 777 permissions on the folder I need to write to, from the host machine, but it works only for a while. Files created by php-fpm don't have the necessary permissions.
Furthermore, it's not possible to change the owner and group with the chown command.
This is my docker-compose.yml file:
web:
  image: eduardoleal/nginx
  external_links:
    - proxy
  links:
    - php:php
  container_name: "app-web"
  environment:
    VIRTUAL_HOST: web.app
  volumes_from:
    - data
  volumes:
    - ./src/nginx-vhost.conf:/etc/nginx/sites-enabled/default
php:
  image: eduardoleal/php56
  container_name: "app-web-php"
  volumes_from:
    - data
data:
  container_name: "app-web-data"
  image: phusion/baseimage
  volumes:
    - /Users/eduardo.leal/Code/vidaclass/web:/var/www/public
I'm running docker on OSX with VirtualBox.
macOS has some mounting problems because of differences between the user and group owning a file and the user and group modifying/reading it. As a workaround, do the following (preferably using the latest version of Docker):
$ brew install docker-machine-nfs
$ docker-machine start yourdockermachine
$ docker-machine-nfs yourdockermachine --shared-folder=/Users --nfs-config="-alldirs -maproot=0"
You can change the name yourdockermachine as you like. You also have the ability to change the shared folder you want to map. The above option is the best bet and works in all cases; I would suggest not changing it so that you don't mess around with system files.
After the above setup, make sure you give appropriate read, write, and execute permissions to your files and folders.
NOTE: the dependencies for the above procedure are brew and docker-machine (or the complete Docker Toolbox, for simplicity).
UPDATE 1: Docker for Mac Beta is in a private invite phase. It runs Docker natively on Mac on top of the xhyve hypervisor, which means no more permission errors and improved performance.
UPDATE 2: Docker for Mac is now in public beta. The underlying technology remains the same, and the VM is completely managed by the Docker service. The version as of this writing is 1.12.0-rc2, which works seamlessly with OS X without any intervention of docker-machine.
I had this problem too. Docker Machine uses the docker user and staff group on mounted volumes, which have UID=1000 and GID=50 respectively. You need to modify your php-fpm config and replace the default user (I suppose it's nobody) with a username that has UID=1000 inside the container. In case you don't have such a user, you'll need to create one. Do the same trick for a group with GID=50. It's a very, very dirty hack, but I haven't found a better solution yet.
Add RUN usermod -u 1000 www-data somewhere before EXPOSE in the Dockerfile for the eduardoleal/php56 image; it will fix the problem with permissions.
If you are using macOS, you have to change the permissions for the staff group (this group is used by docker), which by default has read-only permission, so the web server cannot write into your cache folder in the docker container. Run this on your Mac:
chmod -R 777 cache_folder
Hope this answer is useful for you.
Alternatively, you can also use docker-machine-nfs to fix this problem.

Project layout with vagrant, docker and git

So I recently discovered docker and vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant, in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the vagrant machine so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
    /machine1-web_server
        /Dockerfile
    /machine2-db_server
        /Dockerfile
    /machineX
        /Dockerfile
/src
    /app
    /public
    /vendors
/vagrant
    /Vagrantfile
So the vagrant machine, on provision, runs all docker "machines" and sets up databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes; at least it has worked for me for a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs docker, pip and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  if ! type docker >/dev/null; then
    echo -e "\n\n========= installing docker..."
    curl -sL https://get.docker.io/ | sh
    echo -e "\n\n========= installing docker bash completion..."
    curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
    adduser vagrant docker
  fi
  if ! type pip >/dev/null; then
    echo -e "\n\n========= installing pip..."
    curl -sk https://bootstrap.pypa.io/get-pip.py | python
  fi
  if ! type docker-compose >/dev/null; then
    echo -e "\n\n========= installing docker-compose..."
    pip install -U docker-compose
    echo -e "\n\n========= installing docker-compose command completion..."
    curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
  fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  cd /vagrant
  docker-compose up -d
SCRIPT
There are other ways to build and start docker containers from vagrant, but using docker-compose allows me to externalize any docker specifics out of my Vagrantfile. As a result, this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is where humans and tools (some IDEs) expect to find it. PyCharm does; PhpStorm probably does.
I also put my docker-compose.yml file at the root of my projects.
In the end, for developing I just go to my project directory and fire up vagrant, which tells docker-compose to (eventually build and then) run the docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your docker images to the ops team by publishing them in a private docker registry. You can either host such a registry on your own infrastructure or use online services that provide one, such as Docker Hub.
Also provide the ops team with a docker-compose.yml file that defines how to run the containers and link them. Note that this file should not make use of the build: instruction, but should rely on the image: instruction instead. Who wants to build/compile stuff while deploying to production?
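For example, publishing an image could look like this (the registry host and tag are placeholders):
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0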
This Docker blog article can help figuring out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
  image: my_database_image
  ports: ...
machinex:
  image: my_machine_x_image
web:
  build: .
  volumes:
    - '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies needed to run your PHP code.
Also, I recommend keeping my_database_image and my_machine_x_image as separate projects with their own Dockerfiles, because they can perfectly well be used with other projects.
If you are using a Mac, you are already using a VM called boot2docker.
I hope this helps.
