Retain environment variable values for Symfony2 app development in Docker - php

I have a Symfony2 app running inside a Docker container. The parameters.yml file is set up to pick up environment variables as follows:
framework:
  secret: "%env.secret%"
  ...
The Docker compose file contents are:
services:
  my-website:
    env_file: my-website.env
    build: .
    expose:
      - "80"
    volumes:
      - .:/app
The environment variables file is:
SYMFONY__ENV__SECRET=1234567890
...
Everything works fine when accessed via app.php. However, when accessed in dev mode (via app_dev.php), it fails to pick up the environment variables. Is there any way to use the environment variables in dev mode as well? I do not want to create another parameters.yml file with hardcoded values. Thank you!

Related

Why separate docker containers for nginx and php?

Most of the time I see two separate services for nginx and php in the docker-compose.yml, like this:
version: '3.3'
services:
  php:
    image: php
    ports:
      - "127.0.0.1:8000:8000"
    volumes:
      - ".:/code_folder"
    networks:
      - default
  nginx:
    image: nginx
    ports:
      - "127.0.0.1:80:80"
    volumes:
      - ".:/code_folder"
    networks:
      - default
I would assume there must be a single image with nginx and php together, but that is not a commonly used approach.
Another question is:
how does it work, since these will be separate containers, both mounting the same code base?
You're missing the mapping of your Nginx configuration file, which is what makes the whole difference.
The configuration file specifies that all requests for PHP files should be passed on to the PHP container. All non-PHP files are served directly by the Nginx container.
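As a rough illustration, assuming the php service runs php-fpm listening on port 9000 and the code is mounted at /code_folder in both containers (assumptions based on the compose file above), the nginx configuration you map into the nginx container could look something like this:
server {
    listen 80;
    root /code_folder;
    index index.php index.html;

    # Non-PHP files are served directly by this (nginx) container
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP requests are handed to the php-fpm container over the compose network
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;  # "php" resolves to the php service's container
    }
}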
Because each container should run a single service. The design pattern is that a container runs a service and stops itself once that service completes. It just happens that nginx and php-fpm need to listen continuously and therefore don't stop naturally.

How to log to a file from a dockerized php app

I have a dockerized php app.
My issue is how to capture errors from the php service in a dedicated file on the host.
The docker-compose file looks like this:
version: "3.9"
services:
web:
image: nginx:latest
ports:
- "3000:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./public:/public
php:
build:
context: .
dockerfile: PHP.Dockerfile
environment:
APP_MODE: 'development'
env_file:
- 'dev.env'
volumes:
- ./app:/app
- ./public:/public
- ./php.conf:/usr/local/etc/php-fpm.d/zz-log.conf
mysql:
image: mariadb:latest
environment:
MYSQL_ROOT_PASSWORD: <pass here>
env_file:
- 'dev.env'
volumes:
- mysqldata:/var/lib/mysql
- ./app:/app
ports:
- '3306:3306'
volumes:
mysqldata: {}
My php.conf, which is mounted as /usr/local/etc/php-fpm.d/zz-log.conf inside the php service, looks like this:
php_admin_value[error_log] = /app/php-error.log
php_admin_flag[log_errors] = on
catch_workers_output = yes
My intention is to use the php error_log() function and have all the logs recorded in php-error.log, which is a file inside the app volume.
Right now, all logs from the containers are shown in the terminal only.
I have been struggling with this for several hours and have no idea how to continue. Thank you
I don't know what your source image is. I assume some official docker image for PHP like https://hub.docker.com/_/php
All containerized applications are usually configured to log to stdout, so you must override that behaviour. This is really PHP specific and I'm no PHP expert. From what you let us know, it looks like you know how to override that behaviour (by using the error_log() function and the php_admin_value[error_log] = /app/php-error.log property).
If the behaviour is overridden, you should ensure the file /app/php-error.log exists inside the PHP container (e.g. get inside the container with something like docker exec -it my-container-id /bin/bash, then run ls /app/php-error.log and cat /app/php-error.log to see if the file is created).
Because you're mounting the ./app directory from the host to the /app directory in the container, you already have them mirrored. Whatever is inside the container's /app you will also find in your /path/to/docker/compose/app directory. You can check whether the file exists and has some content. If not, you have failed to override PHP's default logging destination.
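As a quick sanity check (a minimal sketch, assuming the php.conf shown above is actually applied by php-fpm), you could drop a small script into the app directory and request it once; the message should then show up in ./app/php-error.log on the host thanks to the bind mount:
<?php
// log-test.php - hypothetical test script placed in ./app
// With log_errors = on and error_log = /app/php-error.log in effect,
// this entry should land in /app/php-error.log inside the container
// and therefore in ./app/php-error.log on the host.
error_log('log-test entry from ' . gethostname() . ' at ' . date('c'));
echo 'wrote a test entry to the error log';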

How to use an environment variable that is set in docker-compose

I am using a docker container that runs a codeigniter application, and I have set an environment variable for the base url in docker-compose.yml like this:
version: '3.4'
services:
  app:
    image: WEBPORTAL_VERSION
    ports:
      - port_key:port_num
    environment:
      - BASE_URL=http://example.com
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
Now, I want to access the environment key, i.e. the base url, in the codeigniter application.
I am using:
$config['base_url'] = getenv('BASE_URL');
where BASE_URL is the key initialized in the docker-compose file above.
The problem is that getenv() does not fetch the value from the environment set in docker-compose.
Finally solved the problem! By default the php-fpm config file had
clear_env = yes
which is the default value. I changed it to
clear_env = no
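For reference, clear_env is a pool directive, so it needs to sit under a pool section such as [www]. A minimal sketch of how it could be baked into an image, assuming one of the official php:*-fpm base images (the config path may differ for other images):
FROM php:8.1-fpm
# Add an extra pool config so php-fpm keeps the environment variables
# passed in by docker-compose (e.g. BASE_URL) instead of clearing them.
RUN { echo '[www]'; echo 'clear_env = no'; } > /usr/local/etc/php-fpm.d/zz-env.conf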

Docker - deliver the code to nginx and php-fpm

How do I deliver the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I use version 3 of the docker-compose file format.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
#the application's code
#the volume is currently mounted from the host machine, but the code will be copied over into the image statically for production
app:
image: app
volumes:
- ../../code/cms/storage:/storage
networks:
- backend
#webserver
web:
image: web
depends_on:
- app
- php
networks:
- frontend
- backend
ports:
- '8080:80'
- '8081:443'
#php
php:
image: php:7-fpm
depends_on:
- app
networks:
- backend
networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"
The solutions I thought of, none of them appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers, but compose v3 doesn't allow it (the volumes_from directive). Can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way I can't containerize the code. Can't use it. (I'll also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into images based on NGINX and PHP-FPM. Bad idea, I'll have to keep them in sync.
I got stuck on this. Any other options? I might have misunderstood something, as I'm only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development you can bind mount the project folder to both the nginx and php containers. As per Bret Fisher's guide for good defaults for php: php-docker-good-defaults
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using docker-compose (this is for a very simplistic development environment):
You will have to build two separate containers from the nginx and php-fpm images, and then simply serve the app folder from php-fpm through a web folder on nginx.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
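The two dockerfiles referenced above are not shown in the answer; as an assumption, they could be as simple as the following (app.dev.dockerfile for php-fpm, web.dev.dockerfile for nginx, with default.conf being your own nginx site configuration):
# app.dev.dockerfile (assumed contents)
FROM php:7-fpm
WORKDIR /var/www

# web.dev.dockerfile (assumed contents)
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www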
(2) Use a single Dockerfile to build everything in it:
Start with some flavor of linux or php image
Install nginx
Build your custom image
Serve the multi-service container using supervisord (see the sketch below)
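A minimal supervisord.conf sketch for that single-container setup, assuming nginx and php-fpm are already installed in the image (program commands and paths are illustrative and may differ per distribution):
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm --nodaemonize
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
The Dockerfile would then typically end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"].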

Docker - Pass an environment setting via docker-compose

I am new to Docker and have a working docker-compose file, apart from one part. What I want to achieve is to set an environment variable so that in my PHP application I can use some variables to determine which resources I load in.
In MAMP PRO, for instance, you can access environment settings on a dedicated settings page.
In my docker-compose file I have the following:
services:
  webserver:
    build: ./docker/webserver
    image: perch
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - C:/websites/sitename/www:/var/www/html
    links:
      - db
    environment:
      - DEVELOPER_ENV=development
At the moment the variable, from what I can tell, isn't being set, since my php checks that detect the environment fail. Any pointers would be appreciated.
Apache by default removes most environment variables for security reasons.
But you can whitelist variables in the /etc/apache2/conf-enabled/expose-env.conf file.
So I added these commands to my Dockerfile:
RUN echo 'PassEnv DB_PW' >> /etc/apache2/conf-enabled/expose-env.conf \
    && echo 'PassEnv DB_USER' >> /etc/apache2/conf-enabled/expose-env.conf
Alternatively you can copy or mount the expose-env.conf.
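Once the variable is passed through (via PassEnv as above, or a hardcoded SetEnv), it should be visible to PHP running under mod_php; a minimal check, assuming the DB_USER variable from the snippet above:
<?php
// After Apache passes the variable through, both of these should
// return the value set in docker-compose.yml / the container environment.
var_dump(getenv('DB_USER'));
var_dump($_SERVER['DB_USER'] ?? null);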
