How to run a PHP application with Nginx in Docker Swarm/Kubernetes

I need to somehow run my PHP application in Swarm (we may consider Kubernetes if it turns out to be easier). We want to keep the nginx and PHP containers separate so we can scale them independently. But there is a problem: nginx must somehow have access to the static files.
How would you solve this situation?
Our first idea was that in CI, versioned compiled assets would be included in the Nginx image. But what do I do when I want to update my application containers? I would need both the old and the new assets. Or should I use some kind of persistent volume and update it from CI? But I'm not sure how I can do that...

A persistent volume is probably the best way to accomplish this. Docker containers can mount NFS volumes. Create a container to act as an NFS server for the shared files. Here is one of the many images available on Docker Hub: https://hub.docker.com/r/itsthenetwork/nfs-server-alpine/
Here is an example of how to set up NFS volumes for use with containers. https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Keep in mind that the server address will need to be that of the NFS container.
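As a minimal sketch of what the linked gist covers (the server address, export path, and volume name here are assumptions for illustration, not values from your setup), Docker's local volume driver can mount an NFS export directly:

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw \
  --opt device=:/exports/assets \
  assets

# mount the NFS-backed volume into a service (in Swarm, the volume
# definition must exist on every node, or be declared in the stack file)
docker service create --name nginx \
  --mount type=volume,source=assets,destination=/var/www/assets \
  nginx:alpine

This way nginx and php-fpm replicas on any node see the same asset directory, and CI can update the assets on the NFS share without rebuilding images.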

Related

How to copy a file between docker containers?

I have two different docker containers, each of them running a PHP application. The problem I have to solve is to copy a list of files (using the PHP copy command) from container 1 to container 2.
Eg:
copy('/var/www/html/uploads/test.jpg', '/var/www/html/site/uploads/test.jpg');
Now, container 1 doesn't have access to container 2, which hosts the site.
What is the best way to fix this?
Use a shared volume to transfer data. So mount
-v filetransfer:/var/www/html/transfer
or
--mount type=volume,source=filetransfer,destination=/var/www/html/transfer
to both containers. If the containers run as different non-root users you have to ensure file permissions are set accordingly.
If you want to avoid file corruption, use :ro (read-only) for all but one container, or ensure that in code, as in the sketch below.
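A minimal docker-compose sketch of this setup (the service names, images, and paths are hypothetical):

version: "3"
services:
  writer:
    image: php:8-fpm                  # container 1, allowed to write
    volumes:
      - filetransfer:/var/www/html/transfer
  reader:
    image: php:8-fpm                  # container 2, read-only to avoid corruption
    volumes:
      - filetransfer:/var/www/html/transfer:ro
volumes:
  filetransfer:

With the volume in place, container 1 can write with copy('/var/www/html/uploads/test.jpg', '/var/www/html/transfer/test.jpg'); and container 2 reads the file from the same path.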
Other comments:
docker cp is used to copy files from host to container or vice versa.
Building a REST API just to copy a single file is, in my opinion, a little over-engineered, as long as you're using the same host.

How do I maintain work inside drupal containers for working in teams?

I am new to Drupal and just looking for some help getting my dev environment going and using best practices.
So I have a working container with Drupal, MariaDB, Drush, etc. After installing Drupal with the installer, I install themes and such; however, it seems that if I drop the container I lose all my work. How could I ever work in a team then? How do I keep that work? Do I use git inside the container and pull and push from within?
As far as I'm aware, work done inside the container is not necessarily reflected in my local working directory.
Any help would be much appreciated.
I don't know about Drupal, but generally in docker you would mount a folder from the local filesystem where docker is running when you start the container. The data in "/your/local/folder" will be accessible both in the container and on your local filesystem, and it will survive a restart of the container.
# mount a local folder into the container (:z relabels it for SELinux hosts)
docker run -d \
  -v </your/local/folder>:</folder in container>:z \
  <your image>
The trick will be to identify the data in the container you want on your local filesystem.
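For Drupal specifically, a hedged example: the official drupal image on Docker Hub keeps user-modifiable content under sites, modules, themes, and profiles, so mounting those paths (the host paths here are assumptions) keeps your work outside the container:

docker run -d --name mydrupal -p 8080:80 \
  -v $(pwd)/sites:/var/www/html/sites \
  -v $(pwd)/modules:/var/www/html/modules \
  -v $(pwd)/themes:/var/www/html/themes \
  drupal

Those host folders can then go into git, which is how the work gets shared with a team.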
Look here for different alternative ways to handle persistent data in docker:
https://docs.docker.com/storage/volumes/
I can highly recommend Lando for Drupal 8.
SEE: https://docs.devwithlando.io/tutorials/drupal8.html
It's a free, open-source, cross-platform, local development environment and DevOps tool built on Docker container technology.

What is the best Docker architecture with PHP, AngularJS, Nginx, MySQL?

I have an AngularJS/PHP7/MySQL application and I want to improve its architecture. It currently runs in two separate docker containers:
One container for both the front end and the back end, created with AngularJS and PHP respectively.
One container for the database
I want to improve this setup and achieve something like this:
One container with NGINX : port 80, 443
One container with node and my AngularJS front : port 4200
One container with Apache (or php-fpm?) for my PHP backend : port 81
One container with MySQL : port 3306
And more broadly, is it a good idea to separate the front and back ends, for scaling later? And what kind of tools would I use for that scaling? Docker Swarm, Kubernetes?
I don't know if this is a good approach. Could you help me choose the right path for this application? (Sorry for my English, I'm not a native speaker.) Thanks!
I think you need to start by looking into Docker Compose basics rather than diving straight into deployment. This will help you understand how to deploy multi-container applications and the basic idea behind them. For your scenario, docker-compose lets you define your NGINX container, your AngularJS container (the front end), your PHP container (the back end) and your MySQL DB container, the ports they run on, and overlay networks to separate the containers. Please see the resources below; the third link contains a Docker Compose file that defines exactly the same configuration as your current setup, so refer to it once you have the basics down. A rough sketch follows the links.
Docker Compose Official Documentation
How to use Docker Compose
Sample Docker Compose File
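To make that concrete, here is a hedged sketch of a compose file matching the four-container layout above (the image names, ports, and environment values are illustrative assumptions, not a tested configuration):

version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
  frontend:
    image: node:lts            # serves the AngularJS app
    ports:
      - "4200:4200"
  backend:
    image: php:8-fpm           # PHP back end (php-fpm listens on 9000)
    ports:
      - "81:9000"
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports:
      - "3306:3306"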
Docker Swarm and Kubernetes are container orchestration platforms. Both allow you to run containers in clusters, which means you can replicate the number of containers running each service (a service here being one of your containers, e.g. the NGINX container) to make them highly available. Imagine one of your NGINX containers fails while you have defined three NGINX replicas on your Docker Swarm or Kubernetes cluster: your users won't be affected, because two other containers are still running. There are some differences between Docker Swarm and Kubernetes, and it's up to you to decide which tool to use once you have these basics right. If you are starting out with docker-compose, moving on to Docker Swarm will be fairly easy.
Difference between Docker Swarm and Kubernetes
Also I have added an answer explaining the use of Docker Swarm on to scenario like yours, read it so you can get an idea on how to deploy your app with Docker Swarm.
My Answer on Docker Swarm Use case
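To illustrate the replication idea with real Docker Swarm commands (the service name and replica counts are arbitrary examples):

docker swarm init                                        # make this host a swarm manager
docker service create --name web --replicas 3 --publish 80:80 nginx
docker service scale web=5                               # scale out later as load grows

If one of the three replicas dies, Swarm reschedules it automatically while the remaining replicas keep serving traffic.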

Docker image container - where can I store my files?

I have created a docker container (from an image) which is running on a virtual machine (with Docker Toolbox). My problem now is that I don't know which Windows path I can store my files in for development. Also, I'm not sure how I can open this container in the browser (docker-machine ip)?
It seems that you need to define a data volume. In short, you declare a volume in your Dockerfile, stating that this path in your container will essentially be bound to a path on your host (that'd be your VM, if I understand the setup correctly). E.g., say you want your shared path to live in /var/www in your container; then you add something like the following to your Dockerfile:
VOLUME ["/var/www"]
Then upon spinning up your container you bind it to your host's path.
E.g. (say your code lives in /src/webapp on your VM):
docker run -v /src/webapp:/var/www <your image>
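One thing worth checking on Docker Toolbox specifically (this is an assumption about your setup, so verify it): the boot2docker VM normally auto-mounts C:\Users into the VM as /c/Users, so keeping the project under your Windows user folder lets you mount it directly, and the published port is then reached via the VM's IP:

docker run -d -p 8080:80 -v /c/Users/you/project:/var/www <your image>
docker-machine ip default    # open http://<that ip>:8080 in the browser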
While you are at it, you may want to consider fitting your setup to the 'data volume' pattern (in short, having another container play the part of a shareable data volume), which is generally considered best practice for working with persistent data in Docker.
See docker documentation for details:
https://docs.docker.com/engine/tutorials/dockervolumes/
and this thread for more on the 'data volume' pattern:
How to deal with persistent storage (e.g. databases) in docker

Dockerized PHP Application Architecture Best Practices

I'm pretty new to Docker. I've played a lot with Docker in my development environment, but I've tried to deploy a real app only once.
I've read tons of documentation and watched dozens of videos but still have a lot of questions.
I do understand that Docker is just a tool that can be used in so many different ways, but now I'm trying to find the best way to develop and deploy web apps.
I'll use real PHP App case to make my question more concrete and practical.
To keep it simple let's assume I'm building a very simple PHP App so I'll need:
Web Server (nginx)
PHP Interpreter (php-fpm or hhvm)
Persistent storage for SESSIONs
The best example/tutorial I could find was this one-year-old post. Dylan proposes this kind of structure:
He uses a data-only container for the whole PHP project's files and logs, and docker-compose to run all these images with proper links. In the development environment I'd mount a host directory as the data volume, and for production I'd copy the files directly into the data-only image and deploy.
This is understandable. I do want to share data across nginx and php-fpm. nginx needs access to static files (.img, .css, .js...) and php-fpm need access to PHP files. And both services are separated so can be updated/changed independently.
The data-only container shares a data volume that is linked into nginx and php-fpm via the --volumes-from option, as in the sketch below.
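For reference, a minimal sketch of that data-only pattern (the image names and path are illustrative assumptions):

# data-only container: exists only to own the /var/www volume
docker create -v /var/www --name appdata php:7-fpm /bin/true
# nginx and php-fpm both mount the same files from it
docker run -d --volumes-from appdata nginx
docker run -d --volumes-from appdata php:7-fpm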
But as I understand it, there's a problem with data-only containers and the -v flag.
The official Docker documentation says that a data volume is a specially-designated directory meant to persist data. It is said that
Data volumes persist even if the container itself is deleted.
So this solution is great for data I do not want to lose, like session files, DB storage, logs, etc. But not for my code files, right? I do want to change my code files. I want to deploy changes without rebuilding the nginx and php-fpm images.
Another problem: when I tried this approach, I could not deploy code changes until I stopped all running containers, removed them and their images, and rebuilt everything. Just rebuilding and redeploying the data-only images did nothing!
I've seen some other implementations where the data is stored directly in the interpreter container, but that's not an option because nginx also needs access to these files.
The question is: what is the best practice for where to put my project code files, and how should I deploy changes for this kind of app?
Thanks.
Right, don't use a data volume for your code. docker-compose makes a point of re-using old volumes (so you don't lose data), so you'd always be stuck with old code.
Use a COPY directive to add the static resources in the nginx Dockerfile, and a COPY in the application (php-fpm) Dockerfile to add the code. In dev you can use a host volume so that you don't have to restart containers to see your code changes (assuming the web server picks up changes).
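A hedged sketch of what that looks like (the public/ and src/ directory names are assumptions about the project layout):

# nginx Dockerfile: bake the static assets into the image
FROM nginx:alpine
COPY public/ /usr/share/nginx/html/

# php-fpm Dockerfile (a separate file): bake the PHP code into the image
FROM php:8-fpm
COPY src/ /var/www/html/

And for development, a compose override can mount the source over the baked-in copy, so edits show up without a rebuild:

services:
  php:
    volumes:
      - ./src:/var/www/html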
