Where does the data live in a Multi-Container Dockerized application? - php

I'm trying to build a couple of Docker images for different PHP-based web apps, and for now I've been deploying them to Elastic Beanstalk using the Sample Application provided by AWS as a template. This application contains two Docker container definitions: one for PHP itself, and the other for Nginx (to act as a reverse-proxy).
However, it seems a little odd to me that the source code for my PHP application is effectively deployed outside of the Docker image. As you can see from the GitHub sample project linked above, there's a folder called php-app which contains all the PHP source files, but these aren't part of the container definitions. The two containers are just the stock images from Docker Hub. So in order to deploy this, it's not sufficient to upload the Dockerrun.aws.json file by itself; you need to ZIP that file together with the PHP source files for things to run. As I see it, the setup can (roughly) be represented by this visual tree:
*
|- PHP Docker Container
|- Linked Nginx Container
\- Volume that Beanstalk auto-magically creates alongside these containers
Since the model here involves using two Docker images, plus a volume/file system independent of those Docker images, I'm not sure how that works. In my head I keep thinking that it would be better/easier to roll my PHP source files and PHP into one common Docker container, instead of doing whatever magic that Beanstalk is doing to tie everything together.
And I know that Elastic Beanstalk is really only acting as a facade for ECS in this case, where Task definitions and the like are being created. I have very limited knowledge of ECS but I'd like to keep my options open, in case I wanted to manually create an ECS task (using Fargate, for instance), instead of relying on Beanstalk to do it for me. And I'm worried that Beanstalk is doing something magical with that volume that would make things difficult to manually write a Task definition, if I wanted to go down that route.
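For reference, the relevant part of the sample's Dockerrun.aws.json looks roughly like this (abridged and from memory, so names and paths are approximate): a named volume backed by a host path on the instance, mounted into both containers:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "php-app",
      "host": { "sourcePath": "/var/app/current/php-app" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "php:fpm",
      "essential": true,
      "memory": 128,
      "mountPoints": [
        { "sourceVolume": "php-app", "containerPath": "/var/www/html" }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "links": ["php-app"],
      "mountPoints": [
        { "sourceVolume": "php-app", "containerPath": "/var/www/html", "readOnly": true }
      ]
    }
  ]
}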
What is the best practice for packaging PHP applications in a Docker environment, when the reverse proxy (be it Nginx or Apache or whatever) is in a separate container? Can anyone provide a better explanation (or correct any misunderstandings) of how this works? And how would I do the equivalent of what Beanstalk is doing here, in ECS, for a PHP application?

You have several options for building it.
The easiest is to have one ECS service per web app, with two containers in it (one container for the PHP app and one for Nginx).
The only exposed port in each service is Nginx's port 80, using ECS dynamic port mapping.
Each exposed service should sit behind an ALB that routes traffic to the Nginx port.
In this case Nginx is not used as a load balancer, only as the front web server.
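A minimal sketch of the equivalent ECS task definition for the EC2 launch type (the family, image names and memory values are placeholders; hostPort 0 requests dynamic port mapping, and Fargate would instead require awsvpc networking with fixed ports and no links):
{
  "family": "php-web-app",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "myaccount/php-app:latest",
      "essential": true,
      "memory": 256
    },
    {
      "name": "nginx",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "links": ["php-app"],
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}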
Edit:
Your Dockerfile for the PHP app should look something like this:
...
# Add Project files.
COPY . /home/usr/src
...
And for dev mode, your docker-compose.yml:
version: '3.0'
services:
  php:
    build: .
    depends_on:
      ...
    environment:
      ...
    ports:
      ...
    tty: true
    working_dir: /home/usr/src
    volumes:
      - .:/home/usr/src
Then, locally, use docker-compose and live-edit your files inside the container.
In production mode, the files are copied into the container at build time.
Is it clearer now?
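To make the production side concrete, a fuller but still minimal sketch of that Dockerfile, assuming a php-fpm base image (the exact tag is a placeholder) and the same /home/usr/src path as above:
# Base image is an assumption; use whatever PHP variant your app needs
FROM php:7.4-fpm
WORKDIR /home/usr/src
# Add project files: in production the code is baked into the image at build time
COPY . /home/usr/src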

Related

Serving new static files via nginx stored in php-fpm container after update

I have the following containers:
nginx:latest
myapp container (derived from php-fpm:alpine)
Currently I have a dummy project with a CI pipeline in place which, at build time, compiles the production variant of the resources (images/js/css, ...). Build files end up in /public/build. At the very end of the CI pipeline, I package everything into Docker images and upload them to the Hub.
Both nginx and myapp have a volume (not a bind mount) set up and pointing to /opt/ci-test/public/build.
This works, at first.
But let's say that I add a new file, new.css; my new version of the Docker image will contain a built variant of new.css.
Running a new container with the pre-existing volume does not reveal the new file, and I understand that it should not. I can create a new volume, my_app_v2.
At this point the nginx container does not see this new volume, and it must be removed and re-run (with the new volume) for the change to take effect.
Is there an easy way to overcome this?
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
EDIT:
One workaround I have managed to dig out is to remove all files from the attached volume and start a new myapp container. This mirrors all the latest files to the volume. But it feels dirty...
EDIT2:
Related issue (case 3): https://github.com/moby/moby/issues/18670#issuecomment-165059630
EDIT3:
Dockerfile
FROM php:7.2.30-fpm-alpine3.11
COPY . /opt/ci-test
WORKDIR /opt/ci-test
VOLUME /opt/ci-test/public/build
So far I do not have docker-compose, and I run the containers manually via these commands:
docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -it -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest
Simply do not use a volume for this.
You should treat Docker images as "monolithic packages" that contain your dependencies (nginx) and your app's files (images, js, css, ...). There's no need to treat your app's files any differently from nginx itself; it's all part of a single Docker image.
Without a volume, you run v1 of your image, nginx sees the v1 files. You run v2 of your image, nginx sees the v2 files.
Volumes are intended to be used when you actually want to keep files between container versions (such as databases, file uploads...). Not for your site's static assets.
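As a sketch, the nginx image itself could carry the built assets instead of reading them from a volume (the destination path is illustrative; it should match what your nginx config and the run command in the question use):
# nginx image that bakes the built assets in, so each image version ships its own files
FROM nginx:latest
COPY public/build /var/www/citest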
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
Yes, this is bad design. If you want to run multiple apps, you should run one Docker container per app. That way, when you release a new version of one app, you only need to restart that container. Containers aren't supposed to be treated like traditional virtual machines that you SSH into and configure by hand; containers are throw-away. New version of the app? Just replace the container with a new one built from a newer image.
First option: don't use a volume. If you want to have the files accessible from the image build, and don't need persistence, then the volume isn't helping with your workflow.
Second option: delete the previous volume between runs and use a named volume, which docker will initialize with the image contents.
Third option: modify the image build and container entrypoint to save the directory off to a different location during the build, and restore that location into the volume on container startup in the entrypoint. I've got an implementation of this in the save-volume and load-volume scripts in my base image. It gets more complicated when you want to merge the contents of the volume with the contents of the host, and you'll need to decide how to handle files getting deleted and what changes to save from the previous runs.
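A minimal sketch of the third option, reusing the image from the question; the backup path and entrypoint script are illustrative and are not the actual save-volume/load-volume scripts from the linked base image:
Dockerfile:
FROM php:7.2.30-fpm-alpine3.11
COPY . /opt/ci-test
WORKDIR /opt/ci-test
# Keep a pristine copy of the build output outside the volume path
RUN cp -a /opt/ci-test/public/build /opt/build-backup
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
VOLUME /opt/ci-test/public/build
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["php-fpm"]
docker-entrypoint.sh:
#!/bin/sh
set -e
# Refresh the (possibly stale) volume from the copy baked into this image version
rm -rf /opt/ci-test/public/build/*
cp -a /opt/build-backup/. /opt/ci-test/public/build/
exec "$@"
Note that this simply overwrites the volume contents on every start; merging changes or handling files deleted between versions needs more care, as mentioned above.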

What is the best Docker architecture with php, AngularJS, Nginx, Mysql

I have an AngularJS/PHP7/MySQL application and I want to improve its architecture. Currently it runs in separate Docker containers:
One container for both the front end and the back end, built with AngularJS and PHP respectively
One container for the database
I want to improve this setup and achieve something like this:
One container with NGINX: ports 80 and 443
One container with Node and my AngularJS front end: port 4200
One container with Apache (or PHP-FPM?) and my PHP back end: port 81
One container with MySQL: port 3306
And more broadly, is it a good idea to separate the front end and the back end, with later scaling in mind? And what kind of tools would I use for scaling: Docker Swarm, Kubernetes?
I don't know if this is a good approach. Could you help me choose the right path for this application? (Sorry for my English, I'm not a native speaker.) Thanks!
I think you need to start by looking into Docker Compose basics rather than diving straight into deployment. This will help you understand how to deploy multi-container applications and the basic idea behind them. For your scenario, docker-compose lets you define your NGINX container, your AngularJS container (the front end), your PHP container (the back end) and your MySQL DB container, the ports they run on, and the networks that keep them separated. Please see the resources below; the third link is a Docker Compose file that defines much the same configuration as your target setup, so refer to it once you have the basics down. A minimal compose sketch follows the links.
Docker Compose Official Documentation
How to use Docker Compose
Sample Docker Compose File
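A minimal docker-compose.yml sketch for the setup described above (build contexts, image tags and credentials are placeholders):
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - frontend
      - backend
  frontend:
    build: ./frontend          # AngularJS app served by Node
    expose:
      - "4200"
  backend:
    build: ./backend           # PHP back end (Apache or PHP-FPM)
    expose:
      - "81"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
Here NGINX is the only service publishing ports to the host; the front end and back end are reached over the Compose network.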
Docker Swarm and Kubernetes are container orchestration platforms. Both of them let you run containers in clusters, which means you can replicate the number of containers running each service (a service here being one of your containers, e.g. the NGINX container) to make them highly available. Imagine one of your NGINX containers fails: if you have defined three NGINX replicas on your Docker Swarm or Kubernetes cluster, your users won't be affected, because two other containers are still running. There are some differences between Docker Swarm and Kubernetes, and it's up to you to decide which tool to use once you have these basics right. If you are starting out with docker-compose, moving on to Docker Swarm will be a bit easier.
Difference between Docker Swarm and Kubernetes
I have also added an answer explaining the use of Docker Swarm in a scenario like yours; read it to get an idea of how to deploy your app with Docker Swarm.
My Answer on Docker Swarm Use case

Dockerizing existing zend web app project

I have a web-based application using the Zend Framework, running on a LAMPP server and MySQL, which I would like to Dockerize. Essentially, I would like to do the following:
Create a Dockerfile that defines the Docker container for the web application
Be able to pass a configuration to this Dockerfile so that I can build the Docker image either from my local code base or from the master branch
Any suggestions on how to get started?
As a general best practice you should have only one service (daemon) in each container. So in your case, with Apache and MySQL, you should think about two containers. To run them at the same time you can either put together a simple bash script or have a look at docker-compose.
The Dockerfile for the Apache container would either have a volume mapped to your "live" code on disk, or contain the code inside the image. For the former, you would use the VOLUME declaration in the Dockerfile and then the -v parameter when running it (or the volumes declaration in docker-compose.yml). For the latter, you would use the ADD instruction in the Dockerfile.
Now if you want some kind of switch between these two, I would suggest creating three Dockerfiles, such as Dockerfile.parent, Dockerfile.app and Dockerfile.live (see the sketch after this list), where:
the parent contains all your dependencies
the other two use "FROM parent"
app uses the ADD instruction and therefore contains the code inside
live uses the volume
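A rough sketch of those three files (the image names and tags are illustrative); build the parent first, e.g. docker build -f Dockerfile.parent -t myapp-parent .:
# Dockerfile.parent - dependencies only
FROM php:7.4-apache
RUN docker-php-ext-install pdo_mysql

# Dockerfile.app - code baked into the image
FROM myapp-parent
ADD . /var/www/html

# Dockerfile.live - code mounted from the host at run time
# run with: docker run -v "$PWD":/var/www/html myapp-live
FROM myapp-parent
VOLUME /var/www/html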

How to Dockerize a PHP app

I am trying to understand the general process that goes into deploying a PHP web app through Docker. I have a web app developed in LAMP.
So far I understood that first of all I have to download and install Docker itself. Afterwards I have to install Docker Compose. Then, using Compose, I have to create a container that will contain the image of my server (Apache).
And this is where I get confused. Do I then have to create a container for my database and another one for the application itself (the directory containing the code), or do I have one container for the server, the database and the app?
I don't need a detailed explanation, just the general idea behind the process; then I can figure out the rest on my own.
Thank you to anyone who can provide any help.
There can be many ways, but the simple way is to install Docker on a Linux machine, write a Dockerfile that installs and configures all the necessary components such as Apache, PHP, MySQL etc., and then either put your application code inside the container or attach it as an external volume from the host.
After writing the Dockerfile, you can build the Docker image with the docker build command, and after the image is built you can use it locally, push it to Docker Hub, or push it to your private Docker registry if you want.
The other option, if you just want to test, is to pull an existing image from Docker Hub that contains the LAMP stack; you then just need to docker run the image, attaching your PHP application as an external volume.
Of course, to access the application on port 80 or 443 outside Docker, you have to expose those ports either in the Dockerfile or at docker run time.
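A rough sketch of those commands (the image name and mount path are placeholders):
# Build the image from the Dockerfile in the current directory
docker build -t myaccount/my-lamp-app .
# Optionally publish it to Docker Hub or a private registry
docker push myaccount/my-lamp-app
# Run it, exposing the web ports and attaching the application code as an external volume
docker run -d -p 80:80 -p 443:443 -v "$PWD":/var/www/html myaccount/my-lamp-app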
For a test environment , you can run all the services in just one container.
For larger deployments you can consider a container orchestration service such as Docker Swarm or Kubernetes. You can also try DC/OS from Mesosphere, by grabbing a Vagrantfile from their GitHub repo that will set up DC/OS on a single machine for you. Then you can just spin up as many services as you want on Mesos. They provide out-of-the-box support for service installation, container management and scaling.
Best practice recommends one Docker container per process/service (one container for Apache + PHP, another container for MySQL and so on), but it's just a guideline; it doesn't mean that you cannot have a single container with everything you need inside it.
If you decide to go with only one container to run all the services, you'll be fine just using Docker (Engine). You can still use Docker Compose in this case but there's no real need for it.
Docker Compose is more helpful when you have to run multiple containers. With just one command you can get all your containers up and running. Here too you could use only Docker Engine, but you would need to run each container manually.
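For the multi-container route, a minimal docker-compose.yml sketch for a LAMP app (image tags, paths and credentials are placeholders):
version: '3'
services:
  web:
    build: .                   # Dockerfile installing Apache + PHP and copying in your code
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: app
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
With this in place, docker-compose up -d brings up both containers with one command.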

Docker and microservices

I am developing a system using microservices, in order to learn new technology. One service is PHP (Laravel) + Postgres, another is Node.js (Express) + Mongo, and another is PHP (Symfony) + a separate Postgres server. I want to wrap all of these services in Docker. I looked at the solution https://github.com/LaraDock/laradock, but there is only one workspace container and one Postgres container. How do I set up Docker correctly for this?
If you look at the docker-compose.yml in the link you provided, you can see that they have split up everything into separate docker containers.
If you want more than one of any of the containers listed you use docker-compose scale to create duplicates.
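For example (the service name is illustrative; newer Compose releases replace the standalone scale command with an --scale flag on up):
# older syntax
docker-compose scale php-fpm=3
# newer syntax
docker-compose up -d --scale php-fpm=3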
