How to quickly switch Docker containers in production? - php

Imagine that I have created a service for uploading kitten pictures and use a Docker container in production.
To do this, I created a Docker image with a PHP 5.5 service, mounted the "upload" folder of my app from the host OS, and also mounted the folder with the source code.
After some time I decided to improve my app; I changed the source code, and now it requires a different environment from the one in the existing Docker image.
For example, now I need PHP 5.6 instead of PHP 5.5.
So when I want to change the source code of my app, can I do it by switching the mounted source code folder with symlinks? (Or can I not, because Docker will keep the socket open? If so, how do I switch the source code? Should I do it right in the container, without mounting?)
But how can I quickly switch the Docker container after switching the source code?

The fastest way would be to exec a shell session in the container, update the environment, and restart the PHP service. Since you have mounted the source code, there is no need to switch it.
The best way would be to build a Docker image with the required environment, stop the previous container, and then run the new image, mounting the appropriate directories.
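A rough sketch of that second approach, assuming the image is tagged kittens, the container is named kittens-app, and the host paths below (all of these names are illustrative, not from the question):

docker build -t kittens:php5.6 .                 # image with the new PHP 5.6 environment
docker stop kittens-app && docker rm kittens-app
docker run -d --name kittens-app \
  -v /srv/kittens/src:/var/www/html \
  -v /srv/kittens/upload:/var/www/upload \
  -p 80:80 \
  kittens:php5.6

The downtime is just the gap between stop and run; if even that is too long, start the new container on another port and switch a reverse proxy over to it.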

Related

Use docker-compose correctly so that two containers can share files in the same volume

I've got a Laravel application (based on Laradock) that has been amended to productionize it (repo on GitHub).
I'm trying to build a solution where:
- files are copied from the host into the PHP app image and then into a volume. As you can see in the Dockerfile, the files are copied into the image, and composer update runs as expected.
- the PHP app is built in Docker (the composer update command needs to run on the volume)
- when docker-compose up is called, multiple containers start, and nginx and php-fpm share the same content; the nginx container can therefore serve the PHP application.
When I run the code in the repo above, I am seeing a 404 in the browser. The reason is (I think) that in the docker-compose file, on line 97, the statement:
- app-data:/var/www/
(mounting the volume) has erased the files that were added, as it mounts over them. (Those files are being correctly added to the image as part of docker build php-fpm.)
So the question is: how can I mount a volume at run time without erasing the files that were added as part of the image build? The volume needs to be mounted at that path so that both containers can see the files (AFAIK).
A Docker volume persists its data, except that the very first time it is created it is initialized with the data from the container that mounts it. I guess you created the volume earlier, before the files existed in the image, so you just need to manually delete the app-data volume and re-deploy your docker-compose file:
Stop your docker compose
docker volume rm <project_name_or_stack_name>_app-data
Start your docker compose again
Besides that, you will need to make sure that the php-fpm container, which contains your source code, is started before any other service sharing the app-data volume.
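In docker-compose, that start-up ordering can be hinted with depends_on. A minimal sketch, assuming the service names nginx and php-fpm (adjust to your compose file):

services:
  php-fpm:
    build: ./php-fpm
    volumes:
      - app-data:/var/www/
  nginx:
    depends_on:
      - php-fpm
    volumes:
      - app-data:/var/www/
volumes:
  app-data:

Note that depends_on only orders container start-up; it does not wait for php-fpm to be fully ready.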

Laradock in remote containers with VS code and PHP IntelliSense working

How do I set up "PHP IntelliSense" in Visual Studio Code with Laradock so that it uses the PHP binary in the laradock_workspace_1 container?
I have tried to start Remote-Containers: Attach to Running Container..., but the problem is I can't access my git repo, since it's mounted on Windows.
In Windows I can't access the PHP binary in the Docker container. Is it possible for VS Code to access PHP in some remote way (without opening a new VS Code inside the container), so that it has all libraries and modules loaded? This is what I need to get PHP IntelliSense working correctly. Right now some of the autocompletion is not working, for example all functions related to Eloquent.
I have found this, but unfortunately I don't understand how to get it to work:
https://github.com/laradock/laradock/issues/2248
Any other suggestions on how to get autocomplete to work without installing the same PHP version in Windows (I don't want to pollute my system)?
Start by connecting to the Laradock workspace container (Remote-Containers) and open the folder:
/var/www/
This will allow you to access the files from outside the container.
Then, for PHP IntelliSense, add this setting to your settings file:
{
"php.executablePath": "/usr/local/bin/php"
}
It might be possible to expose the php-fpm port outside the container, but that is not something I know how to do. You can also attach to the php-fpm container instead, but I think the workspace container is more practical to connect to.

Docker image on CentOS with php, mysql and apache

Right now we have a website running on PHP 5.6 at Azure, on a CentOS 7 based operating system.
Every time we want to deploy new code, we have to FTP to our server and manually transfer files and folders. This is very error prone and costs us hours of deploying and debugging afterwards, every single time.
We develop on our local Windows machines using PHP with WAMP, so there's already a discrepancy between our local environments and the production environment.
I started reading a lot about Docker lately and how it integrates with BitBucket Pipelines, so I wanted to make our deployment flow more streamlined and automatic with BitBucket Pipelines.
Before I get to the technical stuff I have already tried, I want to make sure that I have the general picture of the required steps correct.
What I want to achieve is a way for me and my colleague to write our code and push it to our BitBucket repository; from there the pipeline picks it up, creates a Docker container, and deploys it (automatically) to our website. (Is this a good idea? What about active users during a new container deploy?)
These are the steps I think need to be done; please correct me where I am wrong:
1) I create a CentOS virtual machine using VirtualBox.
2) On this VM I install Docker.
3) I create a Dockerfile where I use the php:7.3-apache base image, and I will install MySQL on top of it as well.
4) ?? Do I need to do extra stuff here, like copying folders with code, or is that done by BitBucket??
Now, the problem I encounter is in creating this "Docker container" for my situation. I realize this is probably a very common use case for Docker, but I've read through thousands of tutorials and watched tons of videos, yet I cannot find answers to my most basic questions, and I end up being stuck and frustrated for days/weeks.
I've got a fully working website created in CodeIgniter, but for the sake of the question I just want to have a working version of the Docker container containing PHP, MySQL and Apache.
I've logged into the CentOS VM and performed the following commands:
mkdir dockertest
touch index.html (and I placed some text in here)
touch index.php (and I placed a basic echo "hello world" in here)
touch docker-compose.yml
mkdir .docker
cd .docker
touch Dockerfile
touch vhost.conf
Dockerfile looks like this:
FROM php:7.3.0-apache-stretch
MAINTAINER Dennis
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /srv/app && a2enmod rewrite
Then I'm able to build the image using
docker build --file .docker/Dockerfile -t docker-test .
Right now I can run the container with the following command:
docker run --rm -p 8080:80 docker-test
At this point I go to my CentOS VM and I try to do a curl localhost:8080 and I get the following HTML:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access on this server. <br />
</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at localhost Port 8080</address>
</body></html>
So I guess this means that the Apache server is running, but it does not see my index files anywhere.
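For what it's worth, a 403 from Apache 2.4 usually means the DocumentRoot resolves but access to it is denied, rather than the files being missing. A minimal vhost.conf sketch for this layout (assuming the code is meant to be served from /srv/app, matching the COPY step above):

<VirtualHost *:80>
    DocumentRoot /srv/app
    <Directory /srv/app>
        # Apache 2.4 denies access by default; without this you get 403
        Require all granted
        AllowOverride All
    </Directory>
</VirtualHost>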
I am massively overwhelmed by the amount of documentation and tutorials available for Docker, but they all seem to be too high-level for me, and none of them target CentOS 7, PHP, MySQL and Apache combined.
A question that's also bugging me: the advantage of Docker is that it can be deployed anywhere and the environment is exactly the same, so there are no "it works on my localhost" problems. But how exactly does this work? Do my colleague and I need to develop our code INSIDE the Docker container? How does this even work?
The process should be:
develop: you and your colleagues develop code and push it to a version control system (git on BitBucket/GitHub) -> the code is in one trusted repository
build: you take this code and create one (or multiple) Docker image(s) with it: for the Apache server, you need the HTML and JavaScript code. Build a Docker image starting from an Apache base image, with a step that pulls the code from the git repository into the container. That's your front-end server.
For the DB part, you probably want another container, or even a managed service that handles the migrations/updates for you, so you only need to worry about the data in the database. If you want your own container, make sure the data is in a VOLUME that is mounted in the container but otherwise stored on a local or network drive, i.e. NOT inside the container, where it would be destroyed on any update (see the sketch after this list).
deploy: pull the images from the registry of your choice, and make sure the containers are connected as needed (i.e. either on the same host and linked, or on different nodes that have access to each other through a private network)
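A hedged example of that DB setup, keeping MySQL's data in a named volume rather than inside the container (the names and password are illustrative):

docker volume create db-data
docker run -d --name db \
  -v db-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7

The container can now be removed and replaced on every update while db-data survives.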
Notes:
Use Docker for Windows rather than creating a virtual machine and installing Docker inside it.
The host doesn't matter; it's the base image in the container that counts. Whether you deploy on an Ubuntu, CentOS or CoreOS host, the Docker base image is what determines how you install dependencies and make your code run.
In the build phase, you probably don't want to pull from git inside the image if your project is a private repository, because you would need to have credentials inside the image to do that. Rather, either pull the code from git outside the image and ADD it to the image, or use another (private) container that has the git credentials to pull the code, do the build, and dump a build artifact that you can then ADD to a shippable container.
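Sketching the pipeline side, a minimal bitbucket-pipelines.yml that builds and pushes the image on every commit might look like this (the registry name and the DOCKER_USER/DOCKER_PASS repository variables are assumptions; BITBUCKET_COMMIT is a built-in variable):

image: atlassian/default-image:3
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # Build the image from the Dockerfile in the repo root
          - docker build -t myregistry/mysite:$BITBUCKET_COMMIT .
          # Push it to a registry the production host can pull from
          - docker login -u $DOCKER_USER -p $DOCKER_PASS myregistry
          - docker push myregistry/mysite:$BITBUCKET_COMMIT

The deploy step (pulling the image and restarting the container on the Azure host) would then be a separate step, or a script triggered on that host.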

How to Dockerize a PHP app

I am trying to understand the general process that goes into deploying a PHP web app through Docker. I have a web app developed on a LAMP stack.
So far I have understood that first of all I have to download and install Docker itself. Afterwards I have to install Docker Compose. Then, using Compose, I have to create a container that will contain the image of my server (Apache).
And this is where I get confused. Do I then have to create a container for my database and another one for the application itself (the directory containing the code), or do I have one container for the server, the database and the app?
I don't need a detailed explanation, just the general idea behind the process; then I can figure out the rest on my own.
Thank you to anyone who can provide any help.
There can be many ways, but the simple way is to install Docker on a Linux machine, write a Dockerfile that installs and configures all the necessary components such as Apache, PHP and MySQL, and then get your application code either inside the container or attached as an external volume from the host.
After writing the Dockerfile, you can build the Docker image using the docker build command; once the image is built, you can use it locally, push it to Docker Hub, or push it to your private Docker registry if you want.
The other option, if you just want to test, is to pull an already existing image from Docker Hub that contains the LAMP stack; then you just need to docker run the image, attaching your PHP application as an external volume.
Of course, to access the application on ports 80 or 443 from outside Docker, you have to expose those ports, either in the Dockerfile or at docker run time.
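A minimal sketch of that test workflow, assuming the official php:7.3-apache image from Docker Hub and the app code in ./src (both names are illustrative):

# run the app from a host volume, publishing port 80
docker pull php:7.3-apache
docker run -d --name myapp \
  -v "$PWD/src":/var/www/html \
  -p 80:80 \
  php:7.3-apache

Note this image covers Apache + PHP only; MySQL would be a second container or a local install.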
For a test environment, you can run all the services in just one container.
For larger deployments you can consider a container orchestration service such as Docker Swarm or Kubernetes. You can also try DC/OS from Mesosphere by grabbing a Vagrantfile from their GitHub repo that will set up DC/OS on a single machine for you; then you can spin up as many services as you want on Mesos. They provide out-of-the-box support for service installation, container management and scaling.
Best practice recommends one Docker container per process/service (one container for Apache + PHP, another container for MySQL, and so on), but it's just a guideline; it doesn't mean you cannot have a single container with everything you need inside it.
If you decide to go with only one container to run all the services, you'll be fine just using Docker (Engine). You can still use Docker Compose in this case, but there's no real need for it.
Docker Compose is more helpful when you have to run multiple containers: with just one command you can get all your containers up and running. You can use only Docker Engine here too, but then you'll need to run each container manually.
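For the multi-container layout, a minimal docker-compose.yml sketch (the image tags, paths and password are illustrative):

services:
  web:
    image: php:7.3-apache
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # illustrative only
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:

docker-compose up then starts both containers; docker-compose down stops them.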

Dockerized PHP Application Architecture Best Practices

I'm pretty new to Docker. I've played a lot with Docker in my development environment, but I've tried to deploy a real app only once.
I've read tons of documentation and watched dozens of videos, but I still have a lot of questions.
I do understand that Docker is just a tool that can be used in many different ways, but now I'm trying to find the best way to develop and deploy web apps.
I'll use a real PHP app case to make my question more concrete and practical.
To keep it simple, let's assume I'm building a very simple PHP app, so I'll need:
Web Server (nginx)
PHP Interpreter (php-fpm or hhvm)
Persistent storage for SESSIONs
The best example/tutorial I could find was this one-year-old post. Dylan proposes this kind of structure:
He uses a data-only container for the whole PHP project's files and logs, and docker-compose to run all these images with proper links. In the development environment I'd mount a host directory as a data volume, and for production I'd copy the files directly into the data-only image and deploy.
This is understandable. I do want to share data between nginx and php-fpm: nginx needs access to the static files (.img, .css, .js...) and php-fpm needs access to the PHP files. And both services are separated, so they can be updated/changed independently.
The data-only container shares a data volume that is linked into nginx and php-fpm by the --volumes-from option.
But as I understand it, there's a problem with data-only containers and the -v flag.
The official Docker documentation says that a data volume is a specially-designated directory meant to persist data! It is said that
Data volumes persist even if the container itself is deleted.
So this solution is great for data I do not want to lose, like session files, DB storage, logs etc., but not for my code files, right? I do want to change my code files. I want to deploy changes without rebuilding the nginx and php-fpm images.
Another problem: when I tried this approach, I could not deploy code changes until I stopped all running containers, removed them and their images, and rebuilt everything from scratch. Just rebuilding and redeploying the data-only image did nothing!
I've seen some other implementations where the data is stored directly in the interpreter container, but that's not an option because I need nginx to have access to those files as well.
The question is: what are the best practices on where to put my project code files, and how do I deploy changes for this kind of app?
Thanks.
Right, don't use a data volume for your code. docker-compose makes a point of re-using old volumes (so you don't lose data), so you'd always be stuck with old code.
Use a COPY directive to add the static resources in the nginx Dockerfile, and a COPY in the application (php-fpm) Dockerfile to add the code. In dev you can use a host volume so that you don't have to restart containers to see your code changes (assuming the web server supports picking up changes).
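A minimal sketch of those two Dockerfiles under the stated assumptions (the public/ and src/ paths are illustrative):

# nginx/Dockerfile - bake the static assets into the web server image
FROM nginx:stable
COPY public/ /usr/share/nginx/html/

# php-fpm/Dockerfile - bake the PHP code into the interpreter image
FROM php:7-fpm
COPY src/ /var/www/html/

Deploying a code change then means rebuilding these images and recreating the containers, which is exactly what keeps the code out of the volumes.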
