Environment variables not found symfony - php

I have a docker container :
services:
  php-fpm:
    build:
      context: ./docker/php-fpm
    volumes:
      - ./symfony:/home/home
    container_name: php
    ports:
      - "9004:9001"
    networks:
      - local
    working_dir: /home/home
    environment:
      - DATABASE_HOST=test
      - DATABASE_PORT=
      - DATABASE_NAME=test
I ran:
docker-compose build --no-cache
docker-compose up
and cleared the cache.
When I refresh the page I get: Environment variable not found: "DATABASE_HOST". I don't understand what the problem is; I've spent a lot of time analyzing this issue. Do you have any idea? Thanks in advance. By the way, when I run docker inspect I see all of these environment variables assigned.

I've run into this problem myself. You have to explicitly map the environment variables you want to make accessible to php/symfony in php-fpm.conf like:
[www]
env[MY_ENV_VAR_1] = 'value1'
env[MY_ENV_VAR_2] = 'value2'
However, that doesn't seem to work with actual environment variables from the host.
There is a long discussion of that here (along with several, what seem to me, laborious workarounds to the problem): https://groups.google.com/forum/#!topic/docker-user/FCzUbjTIp_0
I've successfully done it in the pool.d configuration file like so:
env[DATABASE_HOST] = $DATABASE_HOST
env[DATABASE_PORT] = $DATABASE_PORT
env[DATABASE_NAME] = $DATABASE_NAME
I just add this in as part of the Dockerfile:
ADD fpm/app.pool.conf /etc/php5/fpm/pool.d/

If you extend from the official PHP Docker image, it sets clear_env = no for you, or you can set it yourself in your image's pool config.
Here's the line adding clear_env = no to the config in the official image. You just need to add this to your FPM pool config, or you can add the variables one by one if you prefer, as shown by Robert.
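A minimal sketch of such a pool override, assuming the default www pool and the official image's config directory (the file name zz-custom.conf is arbitrary):

```ini
; e.g. /usr/local/etc/php-fpm.d/zz-custom.conf
; (the official php-fpm image loads every file in php-fpm.d/)
[www]
; keep the container's environment variables instead of clearing them
clear_env = no
```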

Related

Docker-compose up works, but using visual studio code dev container doesn't

I am new to Docker and containers in general. I played around and got to a point where I do not get any further. I searched through other questions, but I couldn't find the right question (or understand the answers). So I hope you can help me.
I have two containers, running php:7.4-apache and mariadb. They are working fine, and if I use a docker compose file, I can start everything with docker-compose up -d.
Here is the docker compose file (remember, this is my first file, so I don't have much knowledge about it):
version: '3.1'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=mydb
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=docker
    ports:
      - "3306:3306"
    volumes:
      - ./database/storage
      - ./database/src:/usr/src
    restart: always
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./php/src:/var/www/html/
    restart: always
My "project structure" looks like this:
If I now start the dev container, I can choose to use the docker-compose file. I did this, and the first thing I didn't understand is that I have to choose between those two services (php/mariadb).
So I tried php. It starts running, and I can see the container with docker ps.
However, if I want to connect to the php website via localhost:80 and see my "website", I don't get any connection.
I expected the same behavior as with docker-compose up, but this doesn't happen. Here is the devcontainer.json; it might help:
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
    "name": "Existing Docker Compose (Extend)",
    // Update the 'dockerComposeFile' list if you have more compose files or use different names.
    // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
    "dockerComposeFile": [
        "..\\Docker-compose.yml",
        "docker-compose.yml"
    ],
    // The 'service' property is the name of the service for the container that VS Code should
    // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
    "service": "php",
    // The optional 'workspaceFolder' property is the path VS Code should open by default when
    // connected. This is typically a file mount in .devcontainer/docker-compose.yml
    "workspaceFolder": "/workspace",
    // Set *default* container specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": null
    },
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [80],
    // Uncomment the next line if you want start specific services in your Docker Compose config.
    // "runServices": [],
    // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
    // "shutdownAction": "none",
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    // "remoteUser": "vscode"
}
As you can see, I tried forwarding port 80, but this didn't work either. It is also confusing that the docker-compose.yml within the .devcontainer folder is not the same yml as my original Docker-compose.yml.
I have no idea what to do next. I hope that I can use Visual Studio Code for writing simple php scripts within this container and, later on, connect to the mariadb. Both mariadb and php should run in separate containers. Well, at least that was my hope.
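For reference, the commented-out runServices key near the bottom of that devcontainer.json takes an array of compose service names that VS Code should start; a hypothetical sketch pinning it to both services defined above:

```jsonc
// in .devcontainer/devcontainer.json
"service": "php",
// start the database alongside the service VS Code attaches to
"runServices": ["php", "mariadb"],
```

(This shows the shape of the setting, not a guaranteed fix for the port issue.)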
---Additional information---
To show what Visual Studio Code is showing me, here are some screenshots:
I chose to open the folder in a container.
Then I chose the Docker-compose.yml (which works fine if I use docker-compose up).
As you can see, it is now asking me which service I want to select. Which is funny, because I would like both services to run... But in this scenario, it would be fine if I could change the php scripts via VS Code.
Thanks for the help
-GreNait

Do Nginx and PHP container both need same php files?

Looking at a common docker-compose setup for an Nginx / PHP-FPM combo like:
version: '3'
services:
  nginx-example:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/www
      - ./config/site.conf:/etc/nginx/conf.d/default.conf
  php-example:
    image: php-fpm
    volumes:
      - ./www:/www
You find many examples like that, which make sure that if you change something in your local www folder, it is immediately picked up by a running container.
But what if I don't want that, and instead copy the php files/content into the container:
Is it enough to create a volume of the same name for both containers and copy my files into that folder, e.g. in a Dockerfile?
Or is it even possible to have no volume at all, but create a directory in the container and copy the files there... and in that case: do I have to do that for both nginx and php-fpm with the same files?
Perhaps my misunderstanding is around how the php-fpm container works in that combination (of course, the fastcgi... setting in the conf points to the standard php-example:9000).
My ideal solution would be to copy the files once and be sure that file permissions are handled.
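One way to sketch the copy-into-the-image approach, mirroring the compose file above (image tags and paths here are illustrative): each image gets its own copy, since nginx only needs to read the static assets while php-fpm needs the same sources to execute the .php files forwarded to it.

```dockerfile
# Dockerfile for the nginx image: bake in the document root and site config
FROM nginx:1.13-alpine
COPY ./www /www
COPY ./config/site.conf /etc/nginx/conf.d/default.conf

# A second Dockerfile (in practice a separate file) would do the same
# for the php-fpm image with the same ./www sources:
#   FROM php:fpm
#   COPY ./www /www
```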

Docker: Creating a Fully Installed PHP Application

Can I use docker-compose to create a one-step setup for a completely installed PHP application? Since that's a pretty vague question, I will use WordPress as an example.
If I look at the official wordpress Docker repositories, I see there's already a super-useful yml file for docker-compose:
version: '3.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
This is great and gets me a running frontend app server with WordPress already there. It also gets me a database server, and it sets everything up to talk to one another. That's all great.
What this doesn't get me is a fully installed WordPress. Like a lot of PHP applications, in order to be fully installed there are a few additional configuration fields that need to be set, and a few additional database fields as well.
This means I can't fully install the application until both the wordpress and db containers are fully up. I've thought about hacking up a workaround where I have the CMD or ENTRYPOINT wait around for a DB connection to be established, but the base WordPress Dockerfile(s) already use ENTRYPOINT and CMD, so that's not an option. (Or is it?)
Is there an elegant, Docker-ish way to do what I want? Or am I stuck telling my users to run docker-compose up and then a second command to finish the installation?
If there are extra steps that you do manually, or through scripts, after running docker-compose up for the compose file posted in your question, you can replace the original wordpress image with your own image to make it work as expected for your needs. All you have to do is write a new Dockerfile that produces a new, customized wordpress image. For example:
FROM wordpress:5.1.0-php7.1-apache
COPY custom_docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["custom_docker-entrypoint.sh"]
CMD ["apache2-foreground"]
custom_docker-entrypoint.sh contains the extra steps that need to be done. You may also introduce new environment variables inside the entrypoint script to make the process as dynamic as possible for different clients, without needing to build a custom image for each client.
The generated image can then be used in your docker-compose file instead of the official one.
What you will probably have to do is create your own Dockerfile based on the WordPress image, and then add your own stuff to run.
I believe something like this should work:
docker-compose.yml
version: '3.1'
services:
  wordpress:
    build:
      context: ./docker/wordpress
    restart: always
    ports:
      - 8080:80
    depends_on:
      - "db"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
./docker/wordpress/Dockerfile
FROM wordpress
RUN your-own-commands
I only put this together briefly, so it is not really tested, but it should give you a general idea and direction.
If there are commands that you want to run after the containers are up, then there is a way. I don't know if it's the standard way to do it in Docker, but it works.
As you already know, the wordpress image has its own ENTRYPOINT, which is the script docker-entrypoint.sh.
So what you can do is put your custom commands inside this script; they will be executed when the wordpress container starts.
You can do it as follows:
1. Start your container and copy the contents of the existing docker-entrypoint.sh.
2. Create a new docker-entrypoint.sh outside the container and edit that script to add your chmod command at the appropriate location.
3. In your Dockerfile, add a line to copy this new entrypoint script to the location /usr/local/bin/docker-entrypoint.sh.
NOTE:
Do not put your custom command at the end of the docker-entrypoint.sh script. You can put it anywhere before the line exec "$@".
As far as I know, the MySQL image has its own startup script which may still be running after the container is up. This startup script can be used to import data into the database; MySQL connections will still not be accepted, but docker will think that your db is ready.
What this means for you is that if, for any reason, the db init script runs longer, it will stop the php install commands from working.
You might need to implement a polling loop which waits for the database to start, and only then run the install scripts on php.
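A minimal sketch of such a polling helper (the mysqladmin call in the trailing comment is only an illustration; adapt it to your setup):

```shell
#!/bin/sh
# wait_for: retry a command up to N times, sleeping between attempts,
# and fail (non-zero exit) if it never succeeds
wait_for() {
  retries=$1; shift
  until "$@"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      return 1
    fi
    sleep 1
  done
}

# In an entrypoint you might then call something like:
#   wait_for 30 mysqladmin ping -h "$WORDPRESS_DB_HOST" --silent
```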
Here’s an article I found that details a similar problem.
https://cweiske.de/tagebuch/docker-mysql-available.htm
There might be some other tips that might be appropriate for your use case, but I would need to know a bit more context.
EDIT:
Check out Docker healthchecks; these might fit your use case.
https://docs.docker.com/compose/compose-file/#healthcheck
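As a sketch, a healthcheck combined with a condition on depends_on could look like this. Note this is an assumption about your setup, and the condition form requires a Compose file version that supports it (e.g. 2.1; it was removed in the 3.x file format):

```yaml
version: '2.1'
services:
  wordpress:
    image: wordpress
    depends_on:
      db:
        # wait until db's healthcheck passes before starting wordpress
        condition: service_healthy
  db:
    image: mysql:5.7
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    healthcheck:
      # mysqladmin ping succeeds once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "--silent"]
      interval: 5s
      timeout: 3s
      retries: 10
```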

How to deal with permissions using docker - nginx / php-fpm

I'm trying to deploy a very simple Symfony application using nginx & php-fpm via Docker.
Two docker services :
1. web: running nginx
2. php: running php-fpm and containing the application source.
I want to build images that can be deployed without any external dependency.
That's why I'm copying the source code into the php container.
During development, I'm overriding the /var/www/html volume with a local path.
# file: php-fpm/Dockerfile
FROM php:7.1-fpm-alpine
COPY ./vendor /var/www/html
COPY . /var/www/html
VOLUME /var/www/html
Now the docker-compose configuration file.
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
The problem is about permissions.
When accessing localhost, Symfony is booting up, but the cache / logs / sessions folders are not writable.
nginx uses /var/www/html to serve static files.
php-fpm uses /var/www/html to execute php files.
I'm not sure about the problem, but how can I be sure of the following:
Does /var/www/html have to be readable for nginx?
Does /var/www/html have to be writable for php-fpm?
Note: I'm building the images from a MacBook Pro; cache / logs / sessions are 777.
docker-compose.yml supports a user directive under services. The docs only mention it for docker-compose run, but it works the same way.
I have a similar setup and this is how I do it:
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
    user: "$UID"
I have to run export UID before running docker-compose; that sets the default user to my current user. This allows logging / caching etc. to work as expected.
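Since Compose also interpolates variables from a .env file in the project directory, a sketch that avoids the manual export (the 1000 is just an example UID; an exported shell variable with the same name takes precedence over the .env value):

```yaml
# .env (next to the compose file, read automatically by docker-compose)
#   UID=1000

# docker-compose-prod.yml fragment, unchanged from above:
php:
  image: "private/php"
  user: "$UID"
```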
I am using this solution, "Docker for Symfony": https://github.com/StaffNowa/docker-symfony
New features:
./d4d start
./d4d stop
./d4d help
I've found a solution, but if someone can explain the best practice, it would be appreciated!
The cache / logs / sessions folders from the docker context were not empty (on the host).
Now that those folders have been flushed, Symfony creates them with the right permissions.
I've found people using usermod to change the UID, e.g. to 1000, for www-data / nginx...
But it seems to be an ugly hack. What do you think about it?
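For reference, the usermod approach mentioned above could be sketched like this (the 1000 is a hypothetical host UID; on Alpine-based images usermod comes from the shadow package):

```dockerfile
FROM php:7.1-fpm-alpine
# align www-data's UID with the host user so bind-mounted files
# stay writable from both sides (1000 is just an example)
RUN apk add --no-cache shadow \
 && usermod -u 1000 www-data
```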

Docker - Pass an environment setting via docker-compose

I am new to Docker and have a working docker-compose file, apart from one part. What I want to achieve is to set an environment setting so that in my PHP application I can use some variables to determine which resources I load.
In MAMP PRO, for instance, you can access environment settings on a dedicated page (screenshot omitted).
In my docker-compose file I have the following:
In my docker-compose file I have the following:
services:
  webserver:
    build: ./docker/webserver
    image: perch
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - C:/websites/sitename/www:/var/www/html
    links:
      - db
    environment:
      - DEVELOPER_ENV=development
At the moment the variable, from what I can tell, isn't being set, as the php checks that detect the environment fail. Any pointers would be appreciated.
Apache by default removes most environment variables for security reasons, but you can whitelist variables in the /etc/apache2/conf-enabled/expose-env.conf file.
So I added these commands to my Dockerfile:
RUN echo 'PassEnv DB_PW' >> /etc/apache2/conf-enabled/expose-env.conf \
 && echo 'PassEnv DB_USER' >> /etc/apache2/conf-enabled/expose-env.conf
Alternatively you can copy or mount the expose-env.conf.
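Applied to the DEVELOPER_ENV variable from the question, the resulting conf file would contain something like this sketch:

```apacheconf
# /etc/apache2/conf-enabled/expose-env.conf
# let Apache pass this container environment variable through to PHP
PassEnv DEVELOPER_ENV
```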
