I have created a Docker container which is running on a virtual machine (with Docker Toolbox). My problem now is that I don't know in which Windows path I can store my files for development. Also, I'm not sure how I can open this container in the browser (docker-machine ip)?
It seems that you need to define a data volume. In short, you declare a volume in your Dockerfile, thus declaring that this path in your container will essentially be bound to a path on your host (that'd be your VM, if I understand the setup correctly). E.g., say that you want your shared path to live in /var/www in your container; then you add something like the next command to your Dockerfile:
VOLUME ["/var/www"]
Then, upon spinning up your container, you bind it to your host's path:
E.g. (say your code lives in /src/webapp in your VM):
docker run -d -v /src/webapp:/var/www <your-image>
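For completeness, here is a rough sketch of the whole flow with Docker Toolbox, assuming a hypothetical image name my-php-app that serves HTTP on port 80 inside the container. Docker Toolbox shares C:\Users with its VM by default (visible as /c/Users inside the VM), so keeping your code somewhere under your Windows user folder is the simplest choice:

# Keep your code under C:\Users so the Toolbox VM can see it as /c/Users/<you>/...
# (my-php-app and the paths below are placeholders - adjust them to your project)
docker run -d --name webapp \
  -v /c/Users/<you>/src/webapp:/var/www \
  -p 8080:80 \
  my-php-app

# Print the VM's IP, then open http://<that-ip>:8080 in your Windows browser
docker-machine ip default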
While you are at it you may want to consider fitting your setup to the 'data volume' pattern (in short having another container playing the part of a shareable data volume) which is generally considered best practice for working with persistent data in Docker.
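A minimal sketch of that pattern, using placeholder names (webapp-data and my-php-app are not from your setup):

# A container whose only job is to own the /var/www volume
docker create -v /var/www --name webapp-data my-php-app /bin/true

# Every container started with --volumes-from sees the same /var/www data
docker run -d --name webapp --volumes-from webapp-data my-php-app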
See docker documentation for details:
https://docs.docker.com/engine/tutorials/dockervolumes/
and this thread: How to deal with persistent storage (e.g. databases) in Docker, for more on the 'data volume' pattern.
I have two different Docker containers, each of which runs a PHP application. The problem I have to solve is to copy a list of files (using the PHP copy command) from container 1 to container 2.
E.g.:
copy('var/www/html/uploads/test.jpg', 'var/www/html/site/uploads/test.jpg');
Now, container 1 doesn't have access to container 2, which is the site.
What is the best way to fix this?
Use a shared volume to transfer data. So mount
-v filetransfer:/var/www/html/transfer
or
--mount type=volume,source=filetransfer,destination=/var/www/html/transfer
to both containers. If both containers running different non-root users you have to ensure file permissions are set accordingly.
If you want to avoid file corruption use :ro (read-only) for all but one containers or ensure that by code.
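A minimal sketch with the docker CLI, using placeholder container and image names (app1/app2, php-app-1/php-app-2):

# Named volume shared by both containers (also created automatically on first use)
docker volume create filetransfer

# Container 1 writes into the transfer directory...
docker run -d --name app1 -v filetransfer:/var/www/html/transfer php-app-1

# ...container 2 mounts the same volume read-only
docker run -d --name app2 -v filetransfer:/var/www/html/transfer:ro php-app-2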
Other comments:
docker cp is used to copy files from host to container or vice versa.
Building a REST API for copying a single file is, in my opinion, a bit over-engineered, as long as you're using the same host.
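For example, docker cp can pull a file out of a running container onto the host (app1 is a placeholder container name):

# Copy a file from the container's filesystem to the host; swap the arguments for the other direction
docker cp app1:/var/www/html/uploads/test.jpg ./test.jpg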
I've got a Laravel application (based on laradock) that has been amended to productionize it. The code is on GitHub.
I'm trying to build a solution where:
files are copied from the host into the PHP app and then to a volume. As you can see in the Dockerfile in the repo, the files are copied into the image, and composer update runs as expected.
the PHP app is built in Docker (the composer update command needs to run on the volume)
when docker-compose up is called, multiple containers start, and nginx and php-fpm share the same content. The nginx container can therefore serve the PHP application.
When I run the code in the repo above, I am seeing a 404 in the browser. The reason is (I think) that in the docker-compose file, on line 97, the statement:
- app-data:/var/www/
(mounting the volume) has erased the files that were added, as it mounts over them. (These files are being correctly added to the image as part of the docker build of php-fpm.)
So, the question is: how can I mount a volume at run time without erasing the files that were added as part of the image build? The volume needs to be added at that path so that both containers can see the files (AFAIK).
A Docker volume persists its data; only the first time it is created will it use your container's data to initialize the volume. I guess that you created the volume incorrectly before. So you just need to manually delete the app-data volume and re-deploy your docker-compose file.
Stop your docker compose
docker volume rm <project_name_or_stack_name>_app-data
Start your docker compose again
Besides, you will need to make sure that your php-fpm service, which contains your source code, is started before any other services sharing the app-data volume.
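A sketch of those steps with the CLI, assuming the compose project is called myproject (substitute your own project or stack name):

# Stop the stack and remove the stale volume
docker-compose down
docker volume rm myproject_app-data

# Start the service that owns the source code first, then the rest
docker-compose up -d php-fpm
docker-compose up -d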
I have the following containers:
nginx:latest
myapp container (derived from php-fpm:alpine)
Currently I have a dummy project with a CI pipeline in place which, at build time, compiles production variants of the resources (images/js/css, ...). The build files end up in /public/build. At the very end of the CI pipeline, I package everything into Docker images and upload them to Docker Hub.
Both nginx and myapp have a volume (not a bind mount) set up, pointing to /opt/ci-test/public/build.
This works, for the first time.
But let's say that I add a new file new.css: my new version of the Docker image will contain a built variant of new.css.
Running a new container with the pre-existing volume does not reveal the new file, and I understand that it should not. I can create a new volume my_app_v2.
At this point nginx does not see this new volume, and the nginx container must be removed and re-run (with the new volume) for the change to take effect.
Is there an easy way to overcome this?
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
EDIT:
One workaround I have managed to dig out is to remove all files from attached volume and start new myapp container. This mirrors all the latest files to the volume. But this feels dirty...
EDIT2:
Related issue (case 3): https://github.com/moby/moby/issues/18670#issuecomment-165059630
EDIT3:
Dockerfile
FROM php:7.2.30-fpm-alpine3.11
COPY . /opt/ci-test
WORKDIR /opt/ci-test
VOLUME /opt/ci-test/public/build
So far, I do not have a docker-compose file; I run the containers manually with these commands:
docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -it -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest
Simply do not use a volume for this.
You should treat Docker images as "monolithic packages" that contain your dependencies (nginx) and your app's files (images, js, css, ...). There's no need to treat your app's files any differently from nginx itself; it's all part of the single Docker image.
Without a volume, you run v1 of your image, nginx sees the v1 files. You run v2 of your image, nginx sees the v2 files.
Volumes are intended to be used when you actually want to keep files between container versions (such as databases, file uploads...). Not for your site's static assets.
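A minimal sketch of what the nginx side of that could look like, assuming the compiled assets end up in public/build and a hypothetical nginx.conf in the build context (the paths are placeholders taken from the question):

# Hypothetical nginx image that ships the compiled assets itself,
# so v2 of the image automatically serves the v2 files - no volume involved
FROM nginx:latest
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY public/build /var/www/citest/public/build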
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
Yes, this is bad design. If you want to run multiple apps, you should run one Docker container per app. That way, when you release a new version of one app, you only need to restart that container. Containers aren't supposed to be treated like traditional virtual machines that you SSH into and manually configure. Containers are "throw-away": new version of the app? Just replace the container with a new one based on the newer image.
First option: don't use a volume. If you want to have the files accessible from the image build, and don't need persistence, then the volume isn't helping with your workflow.
Second option: delete the previous volume between runs and use a named volume, which docker will initialize with the image contents.
Third option: modify the image build and container entrypoint to save the directory off to a different location during the build, and restore that location into the volume on container startup in the entrypoint. I've got an implementation of this in the save-volume and load-volume scripts in my base image. It gets more complicated when you want to merge the contents of the volume with the contents of the host, and you'll need to decide how to handle files getting deleted and what changes to save from the previous runs.
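The general idea of that third option, sketched out (this is a simplified, hypothetical version of the trick, not the linked save-volume/load-volume scripts). In the Dockerfile, stash a copy of the build output outside the volume path and install an entrypoint:

RUN cp -a /opt/ci-test/public/build /opt/build-backup
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]

And docker-entrypoint.sh restores the saved copy into the mounted volume on every start:

#!/bin/sh
set -e
# Refresh the volume from the copy baked into the image.
# This simple version overwrites files but never deletes stale ones.
cp -a /opt/build-backup/. /opt/ci-test/public/build/
exec "$@"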
I need to somehow run my PHP application in Swarm (maybe we will consider Kubernetes if that turns out to be easier). We want to keep the nginx and php containers separate so we can scale them independently. But there is a problem: nginx must somehow have access to those static files.
How would you solve this situation?
Our first idea was that, in CI, the versioned compiled assets would be included in the Nginx image. But what do I do when I want to update my application containers? I would need the old as well as the new assets. Or should I use some kind of persisted volume and update it with CI? But I'm not sure how I can do that...
The persisted volume is probably the best way to accomplish this. Docker containers can mount NFS volumes. Create a container to act as an NFS server for the shared files. Here is one of the many containers available on Docker Hub: https://hub.docker.com/r/itsthenetwork/nfs-server-alpine/
Here is an example of how to set up NFS volumes for use with containers. https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Keep in mind that the server address will need to be that of the NFS container.
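A rough example of consuming such an NFS export as a named volume, assuming the NFS server is reachable at 10.0.0.5 and the export path is / (both are placeholders that depend on your NFS container's configuration):

# Create a named volume backed by the NFS export
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/ \
  shared-assets

# Mount the same volume into both the php and nginx containers (image names are placeholders)
docker run -d --name php -v shared-assets:/var/www/html/public/build my-php-app
docker run -d --name nginx -v shared-assets:/var/www/html/public/build nginx:latest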
I'm trying to build a couple of Docker images for different PHP-based web apps, and for now I've been deploying them to Elastic Beanstalk using the Sample Application provided by AWS as a template. This application contains two Docker container definitions: one for PHP itself, and the other for Nginx (to act as a reverse-proxy).
However, it seems a little odd to me that the source code for my PHP application is effectively deployed outside of the Docker image. As you can see from the GitHub sample project linked above, there's a folder called php-app which contains all the PHP source files, but these aren't part of the container definitions. The two containers are just the stock images from Docker Hub. So in order to deploy this, it's not sufficient to merely upload the Dockerrun.aws.json file by itself; you need to ZIP this file together with the PHP source files in order for things to run. As I see it, it can (roughly) be represented by this visual tree:
*
|- PHP Docker container
|- Linked Nginx container
\- Volume that Beanstalk auto-magically creates alongside these containers
Since the model here involves using two Docker images, plus a volume/file system independent of those Docker images, I'm not sure how that works. In my head I keep thinking that it would be better/easier to roll my PHP source files and PHP into one common Docker container, instead of doing whatever magic that Beanstalk is doing to tie everything together.
And I know that Elastic Beanstalk is really only acting as a facade for ECS in this case, where Task definitions and the like are being created. I have very limited knowledge of ECS but I'd like to keep my options open, in case I wanted to manually create an ECS task (using Fargate, for instance), instead of relying on Beanstalk to do it for me. And I'm worried that Beanstalk is doing something magical with that volume that would make things difficult to manually write a Task definition, if I wanted to go down that route.
What is the best practice for packaging PHP applications in a Docker environment, when the reverse proxy (be it Nginx or Apache or whatever) is in a separate container? Can anyone provide a better explanation (or correct any misunderstandings) of how this works? And how would I do the equivalent of what Beanstalk is doing here, in ECS, for a PHP application?
You have several options for building this.
The easiest is to have one ECS service per web app, with two containers each (one for the PHP app and one for Nginx).
The only exposed port in each service is Nginx's port 80, using an ECS dynamic host port.
Each exposed service should have an ALB that routes traffic to the Nginx port.
In this case Nginx is not used as a load balancer, only as the front web server.
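For illustration, a stripped-down ECS task definition (EC2 launch type, bridge networking) along those lines might look roughly like this; the family, image names and memory values are placeholders, and a real definition needs more settings:

{
  "family": "my-php-webapp",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "myaccount/my-php-app:latest",
      "memory": 256,
      "essential": true
    },
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "memory": 128,
      "essential": true,
      "links": ["php-app"],
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}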
Edit:
Your Dockerfile for the PHP app should look something like this:
...
# Add Project files.
COPY . /home/usr/src
...
And for dev mode, your docker-compose file:
version: '3.0'
services:
  php:
    build: .
    depends_on:
      ...
    environment:
      ...
    ports:
      ...
    tty: true
    working_dir: /home/usr/src
    volumes:
      - .:/home/usr/src
Then, for local development, use docker-compose and live-edit your files in the container.
And in production mode, the files are copied into the container at build time.
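One common way to express that split is the standard docker-compose override convention (a hedged sketch; the file names follow the default convention): keep the bind mount out of docker-compose.yml and put it in docker-compose.override.yml, which docker-compose picks up automatically during development:

# docker-compose.override.yml (dev only - adds the live bind mount)
version: '3.0'
services:
  php:
    volumes:
      - .:/home/usr/src

In production, run only the base file so the files copied at build time are used: docker-compose -f docker-compose.yml up -d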
Is that clearer?