Use env variables in Dockerfile - php

I have a Dockerfile in a php project where I need to pass a user and a password to download a library during the execution.
The user and password must be hidden in production or kept in the local .env files. At the moment I'm just trying the local option, and the user and password come through empty.
I have used "${USER}" and ${USER}, but not only does the login fail, when I print the variables they come out empty. I've also tried hardcoding the variables and it works fine, so the problem is that the variables are not retrieved from the .env file.
The docker-compose starts as follows
version: '3'
services:
server:
build:
context: .
dockerfile: docker/Server/Dockerfile
container_name: "server"
ports:
- 80:80
- 8888:8888
networks:
- network
env_file:
- .env
command: sh /start-workers.sh
And the Dockerfile:
FROM php:7.3-cli-alpine3.10
RUN apk add --update
#
# Dependencies
#
RUN apk add --no-cache --no-progress \
libzip-dev zip php7-openssl pkgconfig \
php-pear php7-dev openssl-dev bash \
build-base composer
#
# Enable PHP extensions
#
RUN docker-php-ext-install bcmath sockets pcntl
#
# Server Dependencies
#
RUN echo '' | pecl install swoole \
&& echo "extension=swoole.so" >> /usr/local/etc/php/conf.d/swoole.ini
#
# installation
#
WORKDIR /var/www/service
COPY . .
RUN echo "START"
RUN echo "${USER}"
RUN echo "${PASSWORD}"
RUN echo "END"
RUN composer config http.libraries.com "${USER}" "${PASSWORD}" --global \
&& composer install -n -o --prefer-dist --no-dev --no-progress --no-suggest \
&& composer clear-cache \
&& mv docker/Server/start-workers.sh /
EXPOSE 80
The .env starts and ends as follows:
APP_ENV=dev
APP_SECRET=666666666666666
.
.
.
USER=user
PASSWORD=password
At the moment, if I execute docker-compose up --build, the output is as follows:
Step 10/15 : RUN echo "START"
---> Running in 329b1707c2ab
START
Removing intermediate container 329b1707c2ab
---> 362b915ef616
Step 11/15 : RUN echo "${USER}"
---> Running in e052e7ee686a
Removing intermediate container e052e7ee686a
---> 3c9cfd43a4df
Step 12/15 : RUN echo "${PASSWORD}"
---> Running in b568e7b8d9b4
Removing intermediate container b568e7b8d9b4
---> 26a727ba6842
Step 13/15 : RUN echo "END"
---> Running in 726898b3eb42
END
I'd like the user and the password to be printed, so I know I'm receiving the .env data and I can use it.

You could use args to meet your requirement.
And one note here: you should not use USER as a key in .env, as it will be overridden by the shell's default USER environment variable, so your Dockerfile will not get the correct value.
A full workable minimal example as follows, FYI:
docker/Server/Dockerfile:
FROM php:7.3-cli-alpine3.10
ARG USER
ARG PASSWORD
RUN echo ${USER}
RUN echo ${PASSWORD}
.env (NOTE: you have to use USR, not USER, here):
USR=user
PASSWORD=password
docker-compose.yaml:
version: '3'
services:
server:
build:
context: .
dockerfile: docker/Server/Dockerfile
args:
- USER=${USR}
- PASSWORD=${PASSWORD}
Execute:
$ docker-compose build --no-cache
Building server
Step 1/5 : FROM php:7.3-cli-alpine3.10
---> 84d7ac5a44d4
Step 2/5 : ARG USER
---> Running in 86b35f6903e2
Removing intermediate container 86b35f6903e2
---> ee6a0e84c76a
Step 3/5 : ARG PASSWORD
---> Running in 92480327a820
Removing intermediate container 92480327a820
---> 1f886e8f6fbb
Step 4/5 : RUN echo ${USER}
---> Running in 8c207c7e6080
user
Removing intermediate container 8c207c7e6080
---> cf97b2cc0317
Step 5/5 : RUN echo ${PASSWORD}
---> Running in 7cbdd909826d
password
Removing intermediate container 7cbdd909826d
---> 6ab7987e080a
Successfully built 6ab7987e080a
Successfully tagged 987_server:latest
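The USER shadowing is easy to reproduce outside Docker: most shells already export USER, and docker-compose's variable substitution prefers the calling shell's environment over .env. A minimal sketch in plain POSIX shell (the `env_file_user` variable is just a stand-in for the value .env would provide):

```shell
# USER is typically already exported by the login shell, so it shadows
# any USER=... line in .env during docker-compose variable substitution.
env_file_user="user"                 # value that .env would provide
resolved="${USER:-$env_file_user}"   # the shell environment wins when set
echo "resolved USER: $resolved"

# A differently named key (USR) has no shell default, so the .env value
# is the one that gets used.
unset USR
resolved_usr="${USR:-user}"
echo "resolved USR: $resolved_usr"
```

This is why renaming the key to USR in .env makes the build arg come through.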

The problem here is that variables from env_file are only available at run time, not at build time.
I suggest you use build args, for example:
build:
context: .
args:
buildno: 1
gitcommithash: cdc3b19
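The build-time/run-time distinction can be sketched in plain shell: each RUN step executes in a fresh environment that only sees variables declared with ARG (or ENV) in the Dockerfile, while env_file injects variables into the container process at run time. A rough analogy, with PASSWORD standing in for any .env key:

```shell
# Build time: the RUN step runs in a fresh shell with no .env injected,
# so the variable expands to empty, exactly like the question's output.
unset PASSWORD
build_output=$(sh -c 'echo "build sees: [${PASSWORD}]"')
echo "$build_output"

# Run time: env_file injects the variable into the container process.
run_output=$(PASSWORD=password sh -c 'echo "run sees: [${PASSWORD}]"')
echo "$run_output"
```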

Related

Docker with Doctrine generate-proxies

I'm trying to create a Docker container (using docker-compose) for an application with Doctrine. The problem is: if I just run the application, it works, but if I try to use the application before running the command ./vendor/bin/doctrine orm:generate-proxies, I get the error:
PHP Warning: require(/tmp/__CG__DomainEntitiesAnyEntity.php): failed to open stream: No such file or directory in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
PHP Fatal error: require(): Failed opening required '/tmp/__CG__DomainEntitiesAnyEntity.php' (include_path='.:/usr/local/lib/php') in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
OK, so just run the command in docker-compose.yml:
version: '3'
services:
apache_server:
build: .
working_dir: /var/www/html
ports:
- "80:80"
volumes:
- ./:/var/www/html
- ../uploads:/var/www/uploads
- ./.docker/apache2.conf:/etc/apache2/apache2.conf
- ./.docker/000-default.conf:/etc/apache2/sites-available/000-default.conf
- ./.docker/php.ini:/etc/php/7.4/apache2/php.ini
depends_on:
- postgres_database
command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
networks:
- some-network
Yes, it works as expected and generates the proxies to the /tmp folder, but after the command runs and the proxies are generated, I get the message exited with code 0. That happens because Docker finishes the container execution after receiving status code 0. So I tried two more things:
Add tail to something:
command: sh -c "./vendor/bin/doctrine orm:generate-proxies && tail -f /var/www/html/log.txt"
but when I do this, the server doesn't respond to requests (http://localhost/) anymore.
Add tty before running the command:
tty: true
# restart: unless-stopped <--- also tried this
and that doesn't work either. Is there another way to solve this without having to manually run the command inside the container every time?
PS: my dockerfile is this one:
FROM php:7.4-apache
WORKDIR /var/www/html
RUN a2enmod rewrite
RUN a2enmod headers
RUN mkdir /var/www/uploads
RUN mkdir /var/www/uploads/foo-upload-folder
RUN mkdir /var/www/uploads/bar-upload-folder
RUN chmod 777 -R /var/www/uploads
RUN apt-get update \
&& apt-get install -y \
libpq-dev \
zlib1g-dev \
libzip-dev \
unzip \
&& docker-php-ext-install \
pgsql \
pdo \
pdo_pgsql \
zip
RUN service apache2 restart
Cause of your issue
Your Docker Compose configuration of command
command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
in docker-compose.yml overwrites the Cmd in the Docker image php:7.4-apache that normally would start the Apache server, see
docker inspect php:7.4-apache
or more specific
docker inspect --format="{{ .Config.Cmd }}" php:7.4-apache
which gives you
[apache2-foreground]
Solution in general
If you like to run a command before the original command of a Docker image, use Entrypoint and make sure you call the original entrypoint, see
$ docker inspect --format="{{ .Config.Entrypoint }}" php:7.4-apache
[docker-php-entrypoint]
For example, instead of command define
entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && docker-php-entrypoint"
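That chaining pattern ("run a one-off setup command, then hand off to the image's original entrypoint") can be sketched in plain shell; here both commands are simulated by functions, since docker-php-entrypoint itself only exists inside the image:

```shell
# Stand-in for the image's original entrypoint (docker-php-entrypoint),
# which would normally end up starting apache2-foreground.
docker_php_entrypoint() { echo "apache2-foreground starts here"; }

# Stand-in for the one-off setup step (orm:generate-proxies).
generate_proxies() { echo "proxies generated"; }

# Setup runs first; the server only starts if setup succeeded.
out=$(generate_proxies && docker_php_entrypoint)
echo "$out"
```

The `&&` is the important part: if proxy generation fails, the server never starts and the container exits with a non-zero status.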
Solution in your case
However, in your case, I would configure Doctrine like this (see Advanced Doctrine Configuration)
$config = new Doctrine\ORM\Configuration;
// ...
if ($applicationMode == "development") {
$config->setAutoGenerateProxyClasses(true);
} else {
$config->setAutoGenerateProxyClasses(false);
}
In development your code changes (mounted as a volume) and proxies may have to be updated/generated. In production your code does not change anymore (copy the code into the Docker image). Hence, you should generate the proxies in your Dockerfile (after you have copied the source code), e.g.
FROM php:7.4-apache
WORKDIR /var/www/html
# ...
COPY . /var/www/html
RUN ./vendor/bin/doctrine orm:generate-proxies

How to stop Github Actions step when functional tests failed (using Codeception)

I'm new with Github Actions and I try to make some continuous integration with functional tests.
I use Codeception and my workflow runs perfectly, but when some tests fail the step is still reported as a success. GitHub doesn't stop the action and continues to run the next steps.
Here is my workflow yml file :
name: Run codeception tests
on:
push:
branches: [ feature/functional-tests/codeception ]
jobs:
build:
runs-on: ubuntu-latest
steps:
# —— Setup Github Actions 🐙 —————————————————————————————————————————————
- name: Checkout
uses: actions/checkout@v2
# —— Setup PHP Version 7.3 🐘 —————————————————————————————————————————————
- name: Setup PHP environment
uses: shivammathur/setup-php@master
with:
php-version: '7.3'
# —— Setup Docker Environment 🐋 —————————————————————————————————————————————
- name: Build containers
run: docker-compose build
- name: Start all container
run: docker-compose up -d
- name: Execute www container
run: docker exec -t my_container developer
- name: Create parameter file
run: cp app/config/parameters.yml.dist app/config/parameters.yml
# —— Composer 🧙‍️ —————————————————————————————————————————————————————————
- name: Install dependencies
run: composer install
# —— Check Requirements 👌 —————————————————————————————————————————————
- name: Check PHP version
run: php --version
# —— Setup Database 💾 —————————————————————————————————————————————
- name: Create database
run: docker exec -t mysql_container mysql -P 3306 --protocol=tcp -u root --password=**** -e "CREATE DATABASE functional_tests"
- name: Copy database
run: cat tests/_data/test.sql | docker exec -i mysql_container mysql -u root --password=**** functional_tests
- name: Switch database
run: docker exec -t php /var/www/bin/console app:dev:marketplace:switch functional_tests
- name: Execute migrations
run: docker exec -t php /var/www/bin/console --no-interaction doctrine:migrations:migrate
- name: Populate database
run: docker exec -t my_container php /var/www/bin/console fos:elastica:populate
# —— Generate Assets 🔥 ———————————————————————————————————————————————————————————
- name: Install assets
run: |
docker exec -t my_container php /var/www/bin/console assets:install
docker exec -t my_container php /var/www/bin/console assetic:dump
# —— Tests ✅ ———————————————————————————————————————————————————————————
- name: Run functional tests
run: docker exec -t my_container php codeception:functional
- name: Run Unit Tests
run: php vendor/phpunit/phpunit/phpunit -c app/
And here is the logs of the action step :
Github Action log
Does anyone know why the step doesn't fail, and how to make it throw an error?
Probably codeception:functional sets an exit code of 0 even though an error occurred. docker exec passes the exit code of the process through, and GitHub Actions fails a step only if a command returns an exit code != 0.
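This is easy to verify locally: the step's fate depends only on the last command's exit status, which docker exec forwards unchanged. A minimal sketch:

```shell
# A command that logs an error but exits 0 would be treated as success:
# GitHub Actions looks at the status, not at the log text.
sh -c 'echo "1 test failed"; exit 0'
ok_status=$?
echo "status: $ok_status"

# Only a non-zero exit status fails the step.
fail_status=0
sh -c 'exit 1' || fail_status=$?
echo "status: $fail_status"
```

So the fix is to make the test runner inside the container propagate a non-zero exit code when tests fail.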

Docker compose volume overrides COPY in multi-stage build

My goal is to get php dependencies in one stage of a docker file then copy those dependencies to the next stage (the vendor/ dir). However, once a volume is specified in docker-compose.yml that overrides the COPY statement as if it never happened.
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: docker/app/Dockerfile
volumes:
- .:/var/www/html
docker/app/Dockerfile
FROM composer AS php_builder
COPY . /app
RUN composer install --no-dev
FROM php:7.1-fpm
COPY --from=php_builder /app/vendor /var/www/html/vendor
The result of building and running this is a /var/www/html directory that doesn't have the vendor/ directory as I'd expect.
My guess is that the volume specified in the docker-compose.yml service definition is mounted after the COPY --from statement has run, which seems logical. But how do I get around this? I'd still like to use a volume here instead of an ADD or COPY command.
You can combine bind mounts and named volumes to achieve this; a minimal example for your reference:
docker-compose.yaml:
version: "3"
services:
app:
build:
context: .
dockerfile: docker/app/Dockerfile
volumes:
- my_folder:/var/www/html/vendor
- .:/var/www/html
volumes:
my_folder:
docker/app/Dockerfile:
FROM composer AS php_builder
COPY . /app
#RUN composer install --no-dev
RUN mkdir -p vendor && echo "I'm dependency!" > vendor/dependencies.txt
FROM php:7.1-fpm
COPY --from=php_builder /app/vendor /var/www/html/vendor
Results:
shubuntu1@shubuntu1:~/test$ ls
docker  docker-compose.yaml  index.php
shubuntu1@shubuntu1:~/test$ docker-compose up -d
Creating network "test_default" with the default driver
Creating test_app_1 ... done
shubuntu1@shubuntu1:~/test$ docker exec -it test_app_1 /bin/bash
root@bf59d8684581:/var/www/html# ls
docker  docker-compose.yaml  index.php  vendor
root@bf59d8684581:/var/www/html# cat vendor/dependencies.txt
I'm dependency!
From the above execution, you can see that the dependencies.txt generated in the first stage of the Dockerfile is still visible in the container: the named volume holds data that Docker manages itself, while the bind mount lets you manage the data yourself.
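The reason this works is that an empty named volume is initialized from whatever the image already has at its mount path, while a bind mount simply shows the host directory and hides the image's contents. A rough simulation of that seeding step, using plain directories as stand-ins for the image filesystem and the volume:

```shell
# Simulate the image filesystem after COPY --from in the Dockerfile.
rm -rf /tmp/image_html /tmp/my_folder
mkdir -p /tmp/image_html/vendor
echo "I'm dependency!" > /tmp/image_html/vendor/dependencies.txt

# First use of the empty named volume: Docker seeds it from the
# directory it covers in the image (here mimicked with cp).
mkdir -p /tmp/my_folder
cp -r /tmp/image_html/vendor/. /tmp/my_folder/

cat /tmp/my_folder/dependencies.txt
```

Note this seeding only happens when the volume is empty; a populated named volume keeps its old contents across rebuilds.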

Trying to build a local Docker image for BlackFire from official blackfire/blackfire image on docker hub

I am trying to run Blackfire using Docker, but I am getting an x509 error:
$ docker run -it --rm \
-e BLACKFIRE_CLIENT_ID=$BLACKFIRE_CLIENT_ID \
-e BLACKFIRE_CLIENT_TOKEN=$BLACKFIRE_CLIENT_TOKEN \
blackfire/blackfire blackfire \
--slot=7 --samples=10 \
curl http://symfony.com/
Error:
Unable to verify certificate because of Unknown Authority, fallbacking on embedded Certificate Authorities.
Error sending request to Blackfire API: Get https://blackfire.io/api/v1/collab-tokens: x509: certificate signed by unknown authority
Error retrieving reference profiles: Cannot send an HTTP request to the Blackfire API.
Could one solution be to build a local image of blackfire/blackfire using the following Dockerfile?
FROM alpine:latest
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
COPY BCPSG.pem /etc/ssl/certs
RUN update-ca-certificates 2>/dev/null
FROM blackfire/blackfire
ENV BLACKFIRE_CONFIG /dev/null
ENV BLACKFIRE_LOG_LEVEL 1
ENV BLACKFIRE_SOCKET tcp://0.0.0.0:8707
RUN mkdir -p /var/run/blackfire
EXPOSE 8707
RUN apk add --no-cache curl
ADD blackfire blackfire-agent /usr/bin
CMD ["blackfire-agent"]
Notice that both FROM statements are in the same Dockerfile.
But
$ docker build -t blackfire/blackfire .
gives me the error:
Step 12/13 : ADD blackfire blackfire-agent /usr/bin
ADD failed: stat /var/lib/docker/tmp/docker-builder392805777/blackfire: no such file or directory
The reason I am building a local Blackfire image is to add the certificate, so that the blackfire-agent will be able to communicate with the Blackfire SaaS service and not fail with the x509 error.
I tried @mihal's solution:
$ docker build -t blackfire/blackfire .
Sending build context to Docker daemon 7.68kB
Step 1/12 : FROM blackfire/blackfire
latest: Pulling from blackfire/blackfire
8e402f1a9c57: Pull complete
8244547729ec: Pull complete
ddd7f503c29b: Pull complete
Digest: sha256:efb4966f8d23759119fcd74040a16b5197a4ff1ba52b87a540b2eb765d3cc72b
Status: Downloaded newer image for blackfire/blackfire:latest
---> 7a7965c92939
Step 2/12 : RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
---> Running in b9907f44f2a5
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
v3.9.2-48-g471cf80f4f [http://dl-cdn.alpinelinux.org/alpine/v3.9/main]
v3.9.2-49-g87ea954c9b [http://dl-cdn.alpinelinux.org/alpine/v3.9/community]
OK: 9759 distinct packages available
OK: 7 MiB in 19 packages
Removing intermediate container b9907f44f2a5
---> 812e166dbda7
Step 3/12 : COPY BCPSG.pem /etc/ssl/certs
---> fe8246febc72
Step 4/12 : RUN update-ca-certificates 2>/dev/null
---> Running in 58eb7726b653
Removing intermediate container 58eb7726b653
---> 0dafb5dea0be
Step 5/12 : ENV BLACKFIRE_CONFIG /dev/null
---> Running in d7fd082168c0
Removing intermediate container d7fd082168c0
---> b5a6c49e0856
Step 6/12 : ENV BLACKFIRE_LOG_LEVEL 1
---> Running in 82d8017afad6
Removing intermediate container 82d8017afad6
---> 897d2d602633
Step 7/12 : ENV BLACKFIRE_SOCKET tcp://0.0.0.0:8707
---> Running in 0ca41e881a1a
Removing intermediate container 0ca41e881a1a
---> 65a43d10ea8c
Step 8/12 : RUN mkdir -p /var/run/blackfire
---> Running in 7a7f0fe60538
Removing intermediate container 7a7f0fe60538
---> af5e1266099c
Step 9/12 : EXPOSE 8707
---> Running in 2336bab0f174
Removing intermediate container 2336bab0f174
---> 2ae826054fca
Step 10/12 : RUN apk add --no-cache curl
---> Running in 044e9a932299
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
OK: 7 MiB in 19 packages
Removing intermediate container 044e9a932299
---> 9c5a9bcef470
Step 11/12 : ADD blackfire blackfire-agent /usr/bin
ADD failed: stat /var/lib/docker/tmp/docker-builder620641237/blackfire: no such file or directory
You can have two FROM statements if you have a multi-stage build, but then you should name the first stage and reference it from the second one (e.g. with COPY --from), which doesn't seem to be your case.
Can you try this version:
FROM blackfire/blackfire
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
COPY BCPSG.pem /etc/ssl/certs
RUN update-ca-certificates 2>/dev/null
ENV BLACKFIRE_CONFIG /dev/null
ENV BLACKFIRE_LOG_LEVEL 1
ENV BLACKFIRE_SOCKET tcp://0.0.0.0:8707
RUN mkdir -p /var/run/blackfire
EXPOSE 8707
RUN apk add --no-cache curl
ADD blackfire blackfire-agent /usr/bin
CMD ["blackfire-agent"]

Install cURL compiled with OpenSSL and zlib in Dockerfile

I need to install cURL compiled with OpenSSL and zlib via a Dockerfile, for a Debian image with Apache and PHP 5.6. I tried many approaches, but due to the fact that I don't have a strong understanding of Linux, I failed. I use docker-compose to bring up my container. docker-compose.yaml looks like:
version: '2'
services:
web:
build: .
command: php -S 0.0.0.0:80 -t /var/www/html/
ports:
- "80:80"
depends_on:
- db
volumes:
- $PWD/www/project:/var/www/html
container_name: "project-web-server"
db:
image: mysql:latest
ports:
- "192.168.99.100:3306:3306"
container_name: "project-db"
environment:
MYSQL_DATABASE: dbname
MYSQL_USER: dbuser
MYSQL_PASSWORD: dbpass
MYSQL_ROOT_PASSWORD: dbpass
As a build script I use Dockerfile:
FROM php:5-fpm
RUN apt-get update && apt-get install -y \
apt-utils \
curl libcurl3 libcurl3-dev php5-curl php5-mcrypt
RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . , which is successful, I don't get an image with cURL+SSL+zlib. After $ docker-compose up I have a working container with Apache+MySQL and can run my project, but the libraries I need are not there.
Could you explain how to add these extensions to the Apache in my container properly? I even tried to create my own Dockerfile and build apache+php+needed libs there, but got no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer the source code you want to execute from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, finishing by launching a process (typically a server).
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
... as per the URL you reference a more complete Dockerfile would appear like this
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files/dirs (typically the location of your source code, plus data and/or config files) from the directory where you execute the command into the specified location inside the container. In Unix a period (.) indicates the current directory, so the command
COPY . /usr/src/myapp
will copy all files and directories in the current directory on the host computer (the one you are using when typing the docker build command) into the container directory called /usr/src/myapp.
The WORKDIR changes directories into the container directory supplied.
Finally, the CMD launches the server, which hums along once you launch the container.
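As a mental model, COPY behaves much like a recursive copy on the host (with .dockerignore filtering layered on top). A sketch with made-up paths under /tmp:

```shell
# Mimic "COPY . /usr/src/myapp" with a plain recursive copy.
# /tmp/context stands in for the build context (the host's current dir),
# /tmp/container_fs stands in for the container's filesystem.
rm -rf /tmp/context /tmp/container_fs
mkdir -p /tmp/context/config /tmp/container_fs/usr/src/myapp
echo '<?php echo "hello";' > /tmp/context/your-script.php
echo 'key=value' > /tmp/context/config/app.ini

# The trailing "/." copies the context's contents, including subdirs.
cp -R /tmp/context/. /tmp/container_fs/usr/src/myapp/
ls /tmp/container_fs/usr/src/myapp
```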
