The question relates to a Laravel project run in a Docker container, but it is applicable to any kind of project and any kind of container, since .env files are common on all platforms.
After changing a variable's value in an .env file, the project running inside a Docker container doesn't see the change and still runs with the previous values. In Laravel there are commands to clear the cached config, but they don't seem to change the values inside the running container. Even running docker-compose with the --force-recreate flag doesn't update the container's .env values. So basically, none of the following work:
docker exec -it container_name php artisan config:clear
docker exec -it container_name php artisan cache:clear
docker-compose up --build --force-recreate
What is the correct way to update an environment variable's value in a running Docker container?
If you do the following you should see changes in the container:
Create a .env file with some content.
Then create this docker-compose.yml:
version: '3.7'
services:
tmp:
image: alpine
container_name: SomeContainer
volumes:
- ./.env:/root/.env:rw
command: ["sleep", "300d"]
After starting it with
docker-compose up -d
and changing the .env file, the changes should immediately be reflected in the container.
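You can verify this from the host (using the container name from the compose file above):

docker exec SomeContainer cat /root/.env

Edit the .env file on the host and run the command again; the new contents should show up immediately. Note that this only applies to the mounted file itself: variables injected via environment: or env_file: are set at container creation and won't change in a running container.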
Related
I made the following script in docker-compose.yml, which tries to run an official PHP + Apache image from Docker Hub:
services:
apache:
image: 'php:5.6-apache'
container_name: apache
restart: always
ports:
- '80:80'
- '443:443'
volumes:
- /mnt/data/apps/html:/var/www/html
- /mnt/data/apps/ssl:/etc/ssl
- /mnt/data/apps/apache:/etc/apache2
But when I run it with docker compose up, the container does start, but the files that were supposed to be created on container launch are not being created in it... (This also happens when using a docker run script.)
If I remove the volumes, run it again, and access the container with "docker exec -it apache bash", I see that the files are generated accordingly... It only happens when binding volumes. Weren't the files supposed to be created automatically in the local volumes?
Please, what am I doing wrong? Is there something missing in the script? Am I being dumb?
Sorry if this is a really obvious question; I have nowhere else to go and am only just starting out with Docker.
Thank you
The solution was to run the container without any volumes and use docker cp to copy the config files to my machine, then run it again with the volumes pointing to the copied config files...
Nginx Example
docker pull nginx:1.23.1-alpine
docker run --name tmp-nginx-container -d nginx:1.23.1-alpine
docker cp tmp-nginx-container:/etc/nginx/ D:\Docker\Config
docker rm -f tmp-nginx-container
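The final step described above is to run the container again with the copied config mounted, something like this (a sketch; Windows path syntax can vary between Docker Desktop and Docker Toolbox):

docker run --name nginx -d -p 80:80 -v D:\Docker\Config\nginx:/etc/nginx nginx:1.23.1-alpine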
I have created a command line application using Symfony 3.4 which doesn't need to display any web page.
I generally run the commands like the following:
php bin/console MY_COMMAND_NAME
I want to dockerize the application and share it with others, so inside the root directory of my project I created a docker-compose.yml file, which looks like the following:
version: "3.3"
services:
web:
image: php:7.3-cli
Then I ran docker-compose up; after that, I checked the PHP version with the following command and it showed me the correct version:
docker run php:7.3-cli php -v
However, when I ran docker ps, it didn't show any container running.
My question is how to run the commands inside my project root directory. FYI, I am using Docker Toolbox, on windows 10 Home Edition and my project location is:
C:\Users\{my_user_name}\Desktop\folder_1\folder_2
The Docker container needs a long-running process defined in CMD to stay running; php-cli is not that. If you run docker-compose up, you'll see something like this:
$ docker-compose up
Creating network "tempphpdocker_default" with the default driver
Pulling web (php:7.3-cli)...
7.3-cli: Pulling from library/php
b8f262c62ec6: Pull complete
a98660e7def6: Pull complete
4d75689ceb37: Pull complete
639eb0368afa: Pull complete
2cdbfdb779b1: Pull complete
e0b637fa9606: Pull complete
da7333b0ef25: Pull complete
01d65ff46009: Pull complete
673e50bed3b9: Pull complete
bf6c6e34305d: Pull complete
Digest: sha256:1453f5ef0d4d1d424ed8114dd90a775bdec06cc6fb3bbae9521dcb4ca0c8ca90
Status: Downloaded newer image for php:7.3-cli
Creating tempphpdocker_web_1 ...
Creating tempphpdocker_web_1 ... done
Attaching to tempphpdocker_web_1
web_1 | Interactive shell
web_1 |
tempphpdocker_web_1 exited with code 0
The exit code is 0. This means your command in the docker image php:7.3-cli has successfully run and finished.
To properly dockerize your application, you should override this by writing your own Dockerfile with proper COPY calls that bundle your CLI program into it. Your Dockerfile should probably look something like this:
FROM php:7.3-cli
RUN mkdir -p /opt/workdir/bin
RUN mkdir -p /opt/workdir/vendor
COPY bin/ /opt/workdir/bin
COPY vendor/ /opt/workdir/vendor
WORKDIR /opt/workdir
CMD php ./bin/console COMMAND
You can simply build and run this Dockerfile, or, if you prefer docker-compose, you can define a docker-compose.yml in the same folder as the Dockerfile:
version: "3.3"
services:
web:
image: php-custom
build: ./
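Building and running could then look like this (a sketch; the php-custom tag matches the compose file above):

# plain docker
docker build -t php-custom .
docker run --rm php-custom

# or via docker-compose
docker-compose up --build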
Please note that a dockerized application can only access files and folders inside the Docker image. You need to bind-mount directories from your local file system into the container before it can actually work on your filesystem.
A quick and dirty fix to keep your container running: just override the container command in docker-compose.
version: "3.3"
services:
web:
image: php:7.3-cli
command: tail -f /dev/null
When you run docker-compose up, it will keep the container alive, but the container will do nothing; it just gives you a way to run commands inside it, e.g.:
docker exec -it php-cli_web_1 bash
My question is how to run the commands inside my project root
directory.
As mentioned by @David, you need to mount your host project into the container in docker-compose.
For instance, if your project is placed at /home/myproject on the host, mount it within docker-compose and it will be available inside the container. Then you can update the command of your docker-compose to run the script.
Keep in mind
The life of the container is the life of its command: when execution completes, the container dies. So your container will run only until php /app/your_script.php finishes.
version: "3.3"
services:
web:
image: php:7.3-cli
command: php:7.3-cli /app/your_script.php
volumes:
- /home/myporject:/app
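Alternatively, the console command from the question can be run directly with a bind mount and without compose (a sketch; adjust the host path to your project):

docker run --rm -it -v /home/myproject:/app -w /app php:7.3-cli php bin/console MY_COMMAND_NAME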
Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production, we're using a single container environment since it just connects to the database which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we have a specific RUN composer install line in the Dockerfile. We had to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that if we overrode the command via docker-compose, when deploying the application to our server it would run seamlessly, as it should, but when changing the default command in the Dockerfile it would suffer roughly a minute of downtime every time we deployed.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via composer before running the Apache server, vs simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in the docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why running docker-compose would not install the PHP dependencies, but including composer install in the default command would, and why adding composer install to the docker-compose.yml is better. Furthermore, how do volumes come into all this, and are they the real reason for all the hassle?
Our current Dockerfile looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml looks like this:
version: '3'
services:
database:
image: mysql:5.7
container_name: container-cool-name
command: mysqld --user=root --sql_mode=""
ports:
- "3306:3306"
volumes:
- ./db_backup.sql:/tmp/db_backup.sql
- ./.docker/import.sh:/tmp/import.sh
environment:
MYSQL_DATABASE: my_db
MYSQL_USER: my_user
MYSQL_PASSWORD: password
MYSQL_ROOT_PASSWORD: test
app:
build:
context: .
dockerfile: Dockerfile
image: image-name
command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
ports:
- 8080:80
volumes:
- .:/srv/app
links:
- database:db
depends_on:
- database
environment:
DB_HOST: db
DB_PORT: 3306
DB_DATABASE: my_db
DB_USER: my_user
DB_PASSWORD: password
Your first composer install within Dockerfile works fine, and your resulting image has vendor/ etc.
But later you create a container from that image, and that container is executed with the whole directory replaced by a host-dir mount:
volumes:
- .:/srv/app
So, your Docker image has both your files and the installed vendor files, but then you replace the project directory with one from your host which does not have the vendor files, and the final result looks as if the build was never done.
My advice would be:
don't run the build step (composer install) a second time as the container command
mount individual folders in your container, i.e. not .:/srv/app, but ./src:/srv/app/src, etc.
or map the whole folder, but copy the vendor files from the image/container back to your host (see the sketch after this list)
or use some 3rd party utility to solve exactly this problem, e.g. http://docker-sync.io or many others
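For the copy-out option, something like this could work (a sketch; image-name matches the image: key in the compose file above):

docker-compose build app
docker create --name tmp-app image-name
docker cp tmp-app:/srv/app/vendor ./vendor
docker rm tmp-app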
I know this is rather a stupid question, but I have the following problem. I have been using Docker for over a year, with an editor to change my program, which is hosted as a volume.
I don't have PHP installed, because it only runs inside the containers, like almost all of my other server programs (like SQL, Apache). Now I have installed Visual Studio Code and it cannot find the path to PHP to use IntelliSense.
I know that I can set an environment path inside my docker-compose or Dockerfile to set an environment for my container. But a running container is isolated from the outside, except for commands like docker cp.
Is it possible to set a path from my host machine into the container, so that Visual Studio Code can find PHP inside the container and use it for IntelliSense? Or do I have to install PHP on my host machine? That would defeat the purpose of the Docker containers, in my opinion.
For example, in Visual Studio Code's config settings.json:
"php.validate.executablePath": DOCKERCONTAINER/usr/bin/php
The trick is to create a Bash file that calls into our PHP container.
First, start a PHP 7 container and keep it running by using this docker-compose.yml:
version: "3"
services:
python:
image: php:7.2
container_name: php7-vscode
restart: always #this option will keep your container always running, auto start after turn on your host machine
stdin_open: true
networks:
- proxy
networks:
proxy:
external: true
Create a file named php in /usr/local/bin
Chmod to make it executable
sudo chmod +x php
This file will contain this script, which uses our running container to process PHP:
#!/bin/bash
docker exec -i --user=1000:1000 php7-vscode php "$@"
1000:1000 is our user ID and group on the host machine. We have to run as our current host user so that the container won't change the ownership of our files.
That's it. Now you can type
php -v
to see the result.
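With the wrapper in place, VS Code can point straight at it in settings.json (a sketch; not every feature of a PHP extension is guaranteed to work through a wrapper script):

{
  "php.validate.executablePath": "/usr/local/bin/php"
}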
I'm trying to set up two Docker images for my PHP web application (php-fpm) reverse-proxied by NGINX. Ideally, I would like all the files of the web application to be copied into the php-fpm based image and exposed as a volume. This way both containers (web and app) can access the files, with NGINX serving the static files and php-fpm interpreting the PHP files.
docker-compose.yml
version: '2'
services:
web:
image: nginx:latest
depends_on:
- app
volumes:
- ./site.conf:/etc/nginx/conf.d/default.conf
volumes_from:
- app
links:
- app
app:
build: .
volumes:
- /app
Dockerfile:
FROM php:fpm
COPY . /app
WORKDIR /app
The above setup works as expected. However, when I make any change to the files and then do
docker-compose up --build
the new files are not picked up in the resulting images. This is despite the following message indicating that the image is indeed being rebuilt:
Building app
Step 1 : FROM php:fpm
---> cb4faea80358
Step 2 : COPY . /app
---> Using cache
---> 660ab4731bec
Step 3 : WORKDIR /app
---> Using cache
---> d5b2e4fa97f2
Successfully built d5b2e4fa97f2
Only removing all the old images does the trick.
Any idea what could cause this?
$ docker --version
Docker version 1.11.2, build b9f10c9
$ docker-compose --version
docker-compose version 1.7.1, build 0a9ab35
The 'volumes_from' option mounts volumes from one container to another. The important word there is container, not image. So when you rebuild an image, the previous container is still running. If you stop and restart that container, or even just stop it, the other containers are still using those old mount points. If you stop, remove the old app container, and start a new one, the old volume mounts will still persist to the now deleted container.
The better way to solve this in your situation is to switch to named volumes and setup a utility container to update this volume.
version: '2'
volumes:
app-data:
driver: local
services:
web:
image: nginx:latest
depends_on:
- app
volumes:
- ./site.conf:/etc/nginx/conf.d/default.conf
- app-data:/app
app:
build: .
volumes:
- app-data:/app
A utility container to update your app-data volume could look something like:
docker run --rm -it \
-v `pwd`/new-app:/source -v app-data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
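One caveat: docker-compose prefixes named volumes with the project name by default, so the volume created above may actually be called something like myproject_app-data rather than app-data. You can check the real name with:

docker volume ls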
As BMitch points out, image updates don't automatically filter down into containers. Your workflow for updates needs to be revisited. I've just gone through the process of building a container which includes NGINX and PHP-FPM. I've found, for me, that the best way was to include NGINX and PHP in a single container, both managed by supervisord.
I then have scripts in the image that allow you to update your code from a git repo. This makes the whole process really easy.
#Create new container from image
docker run -d --name=your_website -p 80:80 -p 443:443 camw/centos-nginx-php
#git clone to get website code from git
docker exec -ti your_website get https://www.github.com/user/your_repo.git
#restart container so that nginx config changes take effect
docker restart your_website
#Then to update, after committing changes to git, you'll call
docker exec -ti your_website update
#restart container if there are nginx config changes
docker restart your_website
My container can be found at https://hub.docker.com/r/camw/centos-nginx-php/
The dockerfile and associated build files are available at https://github.com/CamW/centos-nginx-php
If you want to give it a try, just fork https://github.com/CamW/centos-nginx-php-demo, change the conf/nginx.conf file as indicated in the readme and include your code.
Doing it this way, you don't need to deal with volumes at all; everything is in your container, which I like.