I'm building a Docker environment for a Symfony application. I have a container per application, with an attached data-only container for the web root that is linked to the application server. As part of the security hardening for the infrastructure, these data containers are set to read-only, to prevent any remote code exploits. Each application also has a sidecar container that logs can be written to.
Symfony currently writes the cache to the default cache_dir location of
${web_root}/app/cache/${env}
which is in the read-only data container.
When trying to boot the application I get this error:
Unable to write in the cache directory
Obviously this will happen, as the cache dir is inside the read-only container.
My log_path is set in parameters to a path outside the read-only container, in the read-write logging sidecar container:
/data/logs/symfony
which works fine.
I've read the Symfony cookbook entry on how to override the directory structure, but it only advises on how to do this in AppKernel.php, which I don't want to do, as the paths may change depending on whether it's a local/uat/prod environment.
We feed Symfony different parameters from our build server depending on the environment we are deploying to, so it makes sense to put this config there.
Does anyone know if it's possible to override the cache dir in config rather than editing AppKernel.php?
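For reference, the cookbook-style override I want to avoid hard-coding looks roughly like this; if I had to go this route, I'd want the path to come from the environment rather than the code. This is only a sketch: SYMFONY_CACHE_DIR is a made-up variable name that a build server could inject, not an existing Symfony setting.

// AppKernel.php (sketch) - read the cache dir from an env var instead
// of hard-coding a path. SYMFONY_CACHE_DIR is an assumed variable name.
public function getCacheDir()
{
    if ($dir = getenv('SYMFONY_CACHE_DIR')) {
        return $dir.'/'.$this->environment;
    }

    return parent::getCacheDir(); // default: app/cache/{env}
}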
I'm creating the cache folder outside the container and using -v to mount the directory into the container. $DIR is the current location and htdocs is where the web files live (the image name at the end of the command below is a placeholder):
docker run -d \
  -v $DIR/htdocs:/var/www/html \
  -v $DIR/cache_folder:/var/www/html/app/cache \
  <your-image>
Then make sure the container is allowed to write into cache_folder. The advantage is that you're not losing any data if you recreate the container. The second mount will also shadow the existing /var/www/html/app/cache folder from the first one.
Another way you can do this is inside every container, but you lose the setting with every restart:
chmod -R 777 ${web_root}/app/cache/${env}
Here's a simplified example of a docker-compose YAML file I'm using, with a read-only parent data container and two sidecar containers for caching and logging with :rw access, each of which overrides a path contained within the read-only parent path.
docker-compose-base.yml
version: '2.0'

# maintainer james.kirkby#sonyatv.com
# #big narstie said "dont f*** up the #base"

services:

  # web server
  pitchapp-web:
    hostname: pitchapp-web
    depends_on:
      - pitchapp-dc
      - pitchapp-log-sc
      - pitchapp-cache-sc
      - pitchapp-fpm
    volumes_from:
      - pitchapp-dc
      - pitchapp-log-sc:rw
      - pitchapp-cache-sc:rw
    links:
      - pitchapp-fpm
    build:
      args:
        - APP_NAME=pitchapp
        - FPM_POOL=pitchapp-fpm
        - FPM_PORT=9001
        - PROJECT=pitch
        - APP_VOL_DIR=/data/www
        - CONFIG_FOLDER=app/config
        - ENVIRONMENT=dev
        - ENV_PATH=dev
      context: ./pitch
      dockerfile: Dockerfile
    ports:
      - "8181:80"
    extends:
      file: "shared/dev-common.yml"
      service: dev-common-env
    env_file:
      - env/dev.env

  # web data-container
  pitchapp-dc:
    volumes:
      - /data/tmp:/data/tmp:rw
      - /Sites/pitch/pitchapp:/data/www/dev/pitch/pitchapp/current:ro
    hostname: pitchapp-dc
    container_name: pitchapp-dc
    extends:
      file: "shared/data-container-common.yml"
      service: data-container-common-env
    read_only: true
    working_dir: /data/www

  # web cache sidecar
  pitchapp-cache-sc:
    volumes:
      - /data/cache/pitchapp:/data/www/dev/pitch/pitchapp/current/app/cache/dev:rw
    hostname: pitchapp-cache-sc
    container_name: pitchapp-cache-sc
    extends:
      file: "shared/data-container-common.yml"
      service: data-container-common-env
    read_only: false
    working_dir: /data/cache

  # web log sidecar
  pitchapp-log-sc:
    volumes:
      - /data/log/pitchapp:/data/log:rw
      - /data/log/pitchapp/symfony:/data/www/dev/pitch/pitchapp/current/app/logs:rw
    build:
      args:
        - APP_NAME=pitchapp
        - TARGET_SERVICE=pitchapp
    hostname: pitchapp-log-sc
    container_name: pitchapp-log-sc
    extends:
      file: "shared/logging-common.yml"
      service: logging-common-env
data-container-common.yml
version: '2.0'
services:
  data-container-common-env:
    build:
      context: ./docker-data-container
      dockerfile: Dockerfile
    image: jkirkby91/docker-data-container
    env_file:
      - env/data.env
    restart: always
    privileged: false
    tty: false
    shm_size: 64M
    stdin_open: true
logging-common.yml
version: '2.0'
services:
  logging-common-env:
    build:
      context: ./logging
      dockerfile: Dockerfile
    image: jkirkby91/docker-data-container
    env_file:
      - env/logging.env
    restart: always
    working_dir: /data/log
    privileged: false
    tty: false
    shm_size: 64M
    stdin_open: true
    read_only: false
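As a quick sanity check that the sidecar mount really makes the nested path writable, you can probe both paths from inside the web container (a sketch; the container name and paths are taken from the compose file above, and the outcomes in the comments are what I'd expect, not verified output):

# a write inside the read-only parent mount should be refused
docker exec pitchapp-web sh -c 'touch /data/www/dev/pitch/pitchapp/current/probe'

# a write inside the cache sidecar's path should succeed
docker exec pitchapp-web sh -c 'touch /data/www/dev/pitch/pitchapp/current/app/cache/dev/probe'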
I have a dockerized PHP app.
My issue is how to capture errors from the php service into a dedicated file on the host.
My docker-compose file looks like this:
version: "3.9"
services:
web:
image: nginx:latest
ports:
- "3000:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./public:/public
php:
build:
context: .
dockerfile: PHP.Dockerfile
environment:
APP_MODE: 'development'
env_file:
- 'dev.env'
volumes:
- ./app:/app
- ./public:/public
- ./php.conf:/usr/local/etc/php-fpm.d/zz-log.conf
mysql:
image: mariadb:latest
environment:
MYSQL_ROOT_PASSWORD: <pass here>
env_file:
- 'dev.env'
volumes:
- mysqldata:/var/lib/mysql
- ./app:/app
ports:
- '3306:3306'
volumes:
mysqldata: {}
My php.conf, which is mapped as /usr/local/etc/php-fpm.d/zz-log.conf inside the php service, looks like this:
php_admin_value[error_log] = /app/php-error.log
php_admin_flag[log_errors] = on
catch_workers_output = yes
My intention is to use PHP's error_log() function and have all the logs recorded in php-error.log, which is a file inside the app volume.
Right now, all logs from the containers are shown in the terminal only.
I have been struggling with this for several hours and have no idea how to continue. Thank you.
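To illustrate what I mean, a minimal sketch of the call I want to land in that file:

<?php
// With log_errors on and error_log pointed at /app/php-error.log
// (per the pool config above), this should append to that file
// instead of going to the container's stdout.
error_log('something went wrong in the app');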
I don't know what your source image is. I assume some official Docker image for PHP like https://hub.docker.com/_/php.
Containerized applications are usually configured to log to stdout, so you must override that behaviour. This is really PHP-specific and I'm no PHP expert. From what you've told us, it looks like you already know how to override it (using the error_log() function and the php_admin_value[error_log] = /app/php-error.log property).
Once the behaviour is overridden, ensure the file /app/php-error.log actually exists inside the PHP container: get inside the container with something like docker exec -it my-container-id /bin/bash, then run ls /app/php-error.log and cat /app/php-error.log to see whether the file is created.
Because you're mounting the ./app directory from the host to the /app directory in the container, you already have them mirrored: whatever is inside the container's /app you will also find in your /path/to/docker/compose/app directory on the host. Check whether the file exists and has content in it. If not, you have failed to override the default PHP logging behaviour.
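For example, both checks in one go (the container id is illustrative; substitute the one docker ps shows for your php service):

docker exec -it my-container-id /bin/sh -c 'ls -l /app/php-error.log && cat /app/php-error.log'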
I have an issue after the last Docker update (or so it seems) on Windows 10 (local development). When I change files in PhpStorm (and in other editors too - Sublime, Notepad++), after a while the files inside the container stop receiving the changes.
Steps that help for a while:

Completely shutting down all the containers and bringing them up again: docker-compose down && docker-compose up
Getting into the php-fpm container and running touch file.php on the file that didn't change (that file is updated immediately).
What I tried that didn't help:

Restarting the php-fpm and nginx containers with docker-compose restart php-fpm nginx (yes, it's strange, because down and up for all containers helps)
Changing the PhpStorm setting Use safe write (save changes to a temporary file first)

I also checked the file's inode inside the container with ls -lai file.php. Before the changes worked and after they broke, the inode number was the same. There is no fixed number of changes I must make to break the syncing; it's random, sometimes 2 changes are enough.
I have:
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.2, build 698e2846
docker-compose.yml
version: '3'
services:
  nginx:
    container_name: pr_kpi-nginx
    build:
      context: ./
      dockerfile: docker/nginx.docker
    volumes:
      - ./:/var/www/kpi
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/fastcgi.conf:/etc/nginx/fastcgi.conf
    ports:
      - "8081:80"
    links:
      - php-fpm
    networks:
      - internal
  php-fpm:
    container_name: pr_kpi-php-fpm
    build:
      context: ./
      dockerfile: docker/php-fpm.docker
    volumes:
      - ./:/var/www/kpi
    links:
      - kpi-mysql
    environment:
      # 192.168.221.1 -> host.docker.internal for Mac and Windows
      XDEBUG_CONFIG: "remote_host=host.docker.internal remote_enable=1"
      PHP_IDE_CONFIG: "serverName=Docker"
    networks:
      - internal
  mailhog:
    container_name: pr_kpi-mailhog
    image: mailhog/mailhog
    restart: always
    ports:
      # smtp
      - "1025:1025"
      # http
      - "8025:8025"
    networks:
      - internal
  kpi-mysql:
    container_name: pr_kpi-kpi-mysql
    image: mysql:5.7
    command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    volumes:
      - ./docker/storage/kpi-mysql:/var/lib/mysql
    environment:
      # We must change prod secrets, this is not good approach
      - "MYSQL_ROOT_PASSWORD=pass"
      - "MYSQL_USER=user"
      - "MYSQL_PASSWORD=user_pass"
      - "MYSQL_DATABASE=kpi_db"
    ports:
      - "33061:3306"
    networks:
      - internal
  kpi-npm:
    container_name: pr_kpi-npm
    build:
      context: ./
      dockerfile: docker/npm.docker
    volumes:
      - ./:/var/www/kpi
      - /var/www/kpi/admin/node_modules
    ports:
      - "4200:4200"
    networks:
      - internal
    tty: true

# For xdebug
networks:
  internal:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.221.0/28
P.S. There is an open issue: https://github.com/docker/for-win/issues/5530
P.P.S. We needed to update Docker from 2.2.0.0 to 2.2.0.3; that seems to have fixed it.
I have a separate container for syncing my folder:
app:
  image: httpd:2.4.38
  volumes:
    - ./:/var/www/html
  command: "echo true"
I just use the basic Apache image; you could use anything, really. Then in my actual containers, I use the following volumes_from key:
awesome.scot:
  build: ./build/httpd
  links:
    - php
  ports:
    - 80:80
    - 443:443
  volumes_from:
    - app
php:
  build: ./build/php
  ports:
    - 9000
    - 9001
  volumes_from:
    - app
  links:
    - mariadb
    - mail
  environment:
    APPLICATION_ENV: 'development'
I've never had an issue using this setup; files always sync fast, and I have tested it on both Mac OSX and MS Windows.
If you're interested, here is my full LAMP stack on GitHub: https://github.com/delboy1978uk/lamp
I have had the same issue on Windows 10 since 31st Jan.
I commented out a line in PhpStorm and checked the file in the container using vim.
The changes were not there.
If I run docker-compose down and up, the changes appear in the container.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
Nothing has changed in my docker-compose.yml since 2018.
My project is defined in a docker-compose file, but I'm not too familiar with docker-compose definitions.
When I try docker-compose up -d in a fresh setup, the following error occurs during the build of a Docker image.
It happens after composer install, under post-autoload-dump, when Laravel tries to auto-discover packages (php artisan package:discover):
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
RedisException : php_network_getaddresses: getaddrinfo failed: Name or service not known
at [internal]:0
1|
Exception trace:
1 ErrorException::("Redis::connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
2 Redis::connect("my_redis", "6379")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
Please use the argument -v to see more details.
Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1
ERROR: Service 'my_app' failed to build: The command '/bin/sh -c composer global require hirak/prestissimo && composer install' returned a non-zero code: 1
The reason it cannot connect to my_redis:6379 is that my_redis is another service in the same docker-compose.yml file, so I assume the hostname is not resolvable yet: docker-compose wants to build my images before starting any containers.
EDIT: I just found this GitHub issue describing my problem: https://github.com/laravel/telescope/issues/620. It seems the problem is related to Telescope trying to use the cache driver. The difference is that I'm not using Docker just for CI/CD, but for my local development.
How can I resolve this problem? Is there a way to force the Redis container to be up before building my_app? Or is there a Laravel way to prevent this discovery step from touching Redis? Or is there a way to specify that building an image depends on another service being available?
If you want to see my docker-compose.yml:
version: '3.6'
services:

  # Redis Service
  my_redis:
    image: redis:5.0-alpine
    container_name: my_redis
    restart: unless-stopped
    tty: true
    ports:
      - "6379:6379"
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
      - redisdata:/data
    networks:
      - app-network

  # Postgres Service
  my_db:
    image: postgres:12-alpine
    container_name: my_db
    restart: unless-stopped
    tty: true
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: my
      POSTGRES_PASSWORD: admin
      SERVICE_TAGS: dev
      SERVICE_NAME: postgres
    volumes:
      - dbdata:/var/lib/postgresql
      - ./postgres/init:/docker-entrypoint-initdb.d
    networks:
      - app-network

  # PHP Service
  my_app:
    build:
      context: .
      dockerfile: Dockerfile
    image: my/php
    container_name: my_app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: my_app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - /tmp:/tmp # For CS Fixer
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - fsdata:/my
    networks:
      - app-network

  # Nginx Service
  my_webserver:
    image: nginx:alpine
    container_name: my_webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network

# Docker Networks
networks:
  app-network:
    driver: bridge

# Volumes
volumes:
  dbdata:
    driver: local
  redisdata:
    driver: local
  fsdata:
    driver: local
There is a way to force a service to wait for another service in docker-compose, depends_on, but it only waits until the container is up, not the service inside it. To fix that, you have to customize the Redis image, using command to execute a bash script that checks for both the Redis container's and the Redis daemon's availability; check startup-order for how to set it up.
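A minimal sketch of such a check, assuming redis-cli is available inside the image (it usually is not in a plain PHP image, so treat this as illustrative); the host and port come from the question's compose file:

#!/usr/bin/env sh
# block until the Redis daemon (not just the container) answers PING,
# then hand off to the real command
until redis-cli -h my_redis -p 6379 ping > /dev/null 2>&1; do
  echo "waiting for redis..."
  sleep 1
done
exec "$@"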
I currently mitigated this by adding --no-scripts to the Dockerfile and adding a start.sh, since it is Laravel's package discovery script, bound to post-autoload-dump, that wants to access Redis.
Dockerfile excerpt
#...
# Change current user to www
USER www
# Install packages
RUN composer global require hirak/prestissimo && composer install --no-scripts
RUN chmod +x /var/www/scripts/start.sh
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/var/www/scripts/start.sh"]
start.sh
#!/usr/bin/env sh
composer dumpautoload
php-fpm
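For context, the hook that --no-scripts skips is the standard scripts block in a Laravel composer.json, which is where package:discover gets triggered:

"scripts": {
    "post-autoload-dump": [
        "Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
        "@php artisan package:discover --ansi"
    ]
}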
I'm sure you've resolved this yourself by now, but for anyone else coming across this question later, there are two solutions I have found:
1. Ensure Redis is up and running before your app
In your redis service in docker-compose.yml add this...

healthcheck:
  test: ["CMD", "redis-cli", "ping"]

...then in your my_app service in docker-compose.yml add...

depends_on:
  redis:
    condition: service_healthy
2. Use separate docker-compose setups for local development and CI/CD pipelines
Even better, in my opinion, is to create a new docker-compose.test.yml. In here you can omit the redis service entirely and just use CACHE_DRIVER=array. You could set this either directly in the environment property of your my_app service or create a .env.testing (make sure to set APP_ENV=testing too).
I like this approach because, as your application grows, there may be more and more packages which you want to enable/disable or configure differently in your testing environment, and using .env.testing in conjunction with a docker-compose.testing.yml is a great way to manage that.
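For example, a minimal docker-compose.test.yml along those lines might look like this (a sketch; my_app and the version string come from the question's compose file):

version: '3.6'
services:
  my_app:
    environment:
      APP_ENV: testing
      CACHE_DRIVER: array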
I am configuring the docker-compose.yml file and I want to run a PHP stack that contains Elastic, Redis, Symfony, and Composer.
The problem I have is that I don't know how to use Composer with Docker, because some features of Composer need PHP and certain PHP extensions. I don't want to build a new image and install Nginx, PHP, Composer, and the PHP extensions all in it; I want to have them in separate images.
What I have tried so far is this:
version: '2'
services:
  nginx:
    image: tutum/nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/default:/etc/nginx/sites-available/default
      - ./nginx/default:/etc/nginx/sites-enabled/default
      - ./logs/nginx-error.log:/var/log/nginx/error.log
      - ./logs/nginx-access.log:/var/log/nginx/access.log
      - ./app:/usr/share/nginx/html
  phpfpm:
    image: php:fpm
    ports:
      - 9000:9000
    volumes:
      - ./app:/usr/share/nginx/html
  composer:
    image: composer/composer:php7
    command: install
    volumes:
      - ./app:/app
  elastic2.4.4:
    image: elasticsearch:2.4.4
    ports:
      - 9200:9200
    volumes:
      - ./esdata1:/usr/share/elasticsearch/data
  redis:
    image: redis:3.2
    ports:
      - 6379:6379
but this won't install dependencies.
I set up my docker-compose.yml file so one Docker instance would use the composer/composer image and execute composer install within a shared directory. All of the other images would then be able to access the vendor directory that Composer created. The tricky part was realizing that the composer/composer image assumes the composer.json file will be in an /app directory. I had to override this behavior by specifying my shared directory as the working_dir instead:
version: '3'

services:

  #=====================#
  # nginx proxy service #
  #=====================#
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # self-signed testing wildcard ssl certificate
      - "./certs:/certs"
      # proxy needs access to static files
      - "./site1/public:/site1/public"
      - "./site2/public:/site2/public"
      # proxy needs nginx configuration files
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy

  #===============#
  # composer.test #
  #===============#
  composer.test:
    image: composer/composer
    networks:
      - test_network
    ports:
      - "9001:9000"
    volumes:
      - "./composer:/composer"
    container_name: composer.test
    working_dir: /composer
    command: install

  #============#
  # site1.test #
  #============#
  site1.test:
    build: ./site1
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./composer:/composer"
      - "./site1:/site1"
    container_name: site1.test

  #============#
  # site2.test #
  #============#
  site2.test:
    build: ./site2
    networks:
      - test_network
    ports:
      - "9003:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./composer:/composer"
      - "./site2:/site2"
    container_name: site2.test

# networks
networks:
  test_network:
Here is how the directory structure looks:
certs
    test.crt
    test.key
composer
    composer.json
site1
    app
    public
    Dockerfile
    site1.test.conf
site2
    app
    public
    Dockerfile
    site2.test.conf
docker-compose.yml
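With this layout, the vendor directory ends up inside the shared ./composer mount, so each site container can load the autoloader from there. A hypothetical bootstrap line (the /composer path is the shared mount from the compose file above):

require '/composer/vendor/autoload.php';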
If you look at the composer/composer:php7 Dockerfile, you will see that it is based on php:7.0-alpine, and it doesn't seem like FPM is included. So you could use composer/composer:php7 as a base image and install php-fpm on top of it.
Since you map your project into all three containers, running composer install in one container should make the changes visible in all three.
Personally, I do not see the point in segregating PHP and Nginx into two different containers, because one depends so heavily on the other, and mapping your app into both containers is also a perfect example of nonsense. That's why I maintain my own public build of an nginx+php Docker image. You can check it out here. There are more builds with more flavors, and they all come with Composer inside.
Since volumes_from disappeared when Docker Compose changed its compose file version, I am a bit lost as to how to share a volume between different containers.
See the example below, where a PHP application lives in a PHP-FPM container and Nginx lives in a second one.
version: '3.3'
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      - shared-volume:/var/www
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      - shared-volume:/var/www
volumes:
  shared-volume:
    driver_opts:
      type: none
      device: ~/sources/websocket
      o: bind
To make the application work, Nginx of course has to access the PHP files somehow, and that is where volumes_from used to help us a lot. Now that option is gone.
When I try docker-compose up it ends with the following message:
ERROR: for websocket_php_1 Cannot create container for service php:
error while mounting volume with options: type='none'
device='~/sources/websocket' o='bind': no such file or directory
How do I properly share the same host volume between the two containers?
Why not use a bind mount? This is just source code that each container needs to see, correct? I added the :ro (read-only) option, which assumes no code generation is happening:
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
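If you would rather keep the named volume, note that the local driver's bind option does not expand ~, and the device path must already exist on the host; that is what the "no such file or directory" error is complaining about. A sketch with an illustrative absolute path:

volumes:
  shared-volume:
    driver_opts:
      type: none
      # must be an absolute, existing path; '~' is not expanded here
      device: /home/youruser/sources/websocket
      o: bind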