Right now we have a website running on PHP 5.6 on Azure, on a CentOS 7-based operating system.
Every time we want to deploy new code, we have to FTP into our server and manually transfer files and folders. This is very error-prone and costs us hours of deploying and debugging afterwards every single time.
We develop locally on Windows machines using PHP with WAMP, so there's already a discrepancy between our local environments and the production environment.
I've been reading a lot about Docker lately and how it integrates with Bitbucket Pipelines, so I want to make our deployment flow more streamlined and automatic with Bitbucket Pipelines.
Before I get to the technical stuff I have already tried, I want to make sure I have the general picture of the required steps correct.
What I want to achieve is a way for me and my colleague to write our code and push it to our Bitbucket repository; from there the pipeline picks it up, creates a Docker container and deploys it automatically to our website (is this a good idea, and what about active users during a new container deploy?).
These are the steps I think need to be done; please correct me where I am wrong:
1) I create a CentOS virtual machine using VirtualBox.
2) On this VM I install Docker.
3) I create a Dockerfile in which I use the php:7.3-apache base image and install MySQL on top of it as well.
?? Do I need to do extra stuff here, like copying folders with code, or is that done by Bitbucket ??
Now the problem I encounter is in creating this "Docker container" for my situation. I realize this is probably a very common use case for Docker, but I've read through thousands of tutorials and watched tons of videos, yet I cannot find answers to my most basic questions and I end up stuck and frustrated for days/weeks.
I've got a fully working website created in CodeIgniter, but for the sake of the question I just want to have a working version of the Docker container containing PHP, MySQL and Apache.
I've logged into the CentOS VM and performed the following commands:
mkdir dockertest
touch index.html (and I placed some text in here)
touch index.php (and I placed a basic echo "hello world" in here)
touch docker-compose.yml
mkdir .docker
cd .docker
touch Dockerfile
touch vhost.conf
Dockerfile looks like this:
FROM php:7.3.0-apache-stretch
MAINTAINER Dennis
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /srv/app && a2enmod rewrite
Then I'm able to build the image using
docker build --file .docker/Dockerfile -t docker-test .
Right now I can run the container with the following command:
docker run --rm -p 8080:80 docker-test
At this point, on the CentOS VM, I do a curl localhost:8080 and get the following HTML:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access on this server. <br />
</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at localhost Port 8080</address>
</body></html>
So I guess this means that the Apache server is running, but it does not see my index files anywhere.
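(For reference, the kind of vhost.conf I'm aiming for is roughly the sketch below, not necessarily the exact file used above, with DocumentRoot pointing at /srv/app, the same path the Dockerfile copies the code to.)

<VirtualHost *:80>
    # Serve the application code that the Dockerfile copies to /srv/app
    DocumentRoot /srv/app

    <Directory /srv/app>
        # Let Apache serve files from this directory and honour .htaccess rules
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>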
I am massively overwhelmed by the amount of documentation and tutorials available for Docker, but they all seem to be either too high-level for me or not aimed at CentOS 7, PHP, MySQL and Apache combined.
A question that's also bugging me: the advantage of Docker is that it can be deployed anywhere and the environment is exactly the same, so there are no "it works on my localhost" problems. But how does this work exactly? Do my colleague and I need to develop our code INSIDE the Docker container? How does this even work?
The process should be:
develop: you and your colleagues write code and push it to a version control system (git on Bitbucket/GitHub) -> the code is in one trusted repository
build: you take this code and create one (or multiple) Docker image(s) with it. On the Apache server you need the HTML and JavaScript code, so build a Docker image starting from an Apache base image, with a step that pulls the code from the git repository into the container. That's your front-end server.
For the DB part, you probably want another container, or even a managed service that handles migrations/updates for you, so you only need to worry about the data in the database. If you want to have your own container, make sure the data is in a VOLUME that is mounted in the container but is otherwise stored on a local or network drive (i.e. NOT inside the container, which would get destroyed on any update).
deploy: pull the images from the registry of your choice, and make sure the containers are connected as needed (i.e. either on the same host and linked, or on different nodes that can reach each other through a private network).
Notes:
Use Docker for Windows rather than creating a virtual machine and installing Docker inside it.
The host doesn't matter; it's the base image in the container that counts. Whether you deploy on an Ubuntu, CentOS or CoreOS host, the Docker base image is what determines how you install dependencies and make your code run.
In the build phase, you probably don't want to pull from git inside the image if your project is a private repository, because you would need to put credentials inside the image to do that. Instead, either pull the code from git outside the image and ADD/COPY it into the image, or use another (private) container that has the git credentials to pull the code, run the build, and produce a build artifact that you can then ADD to a shippable container.
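To make the build step concrete, here is a hedged sketch of a bitbucket-pipelines.yml that builds the image from the checked-out repository code and pushes it to a registry. The image name myaccount/mysite, the DOCKER_HUB_USER/DOCKER_HUB_PASSWORD repository variables and the deploy step are placeholders you would replace with your own; $BITBUCKET_COMMIT is a variable Pipelines provides.

# bitbucket-pipelines.yml (sketch only; adjust image names and credentials)
image: atlassian/default-image:2

pipelines:
  branches:
    master:
      - step:
          name: Build and push Docker image
          services:
            - docker                      # enables the Docker daemon inside the pipeline
          script:
            - docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_PASSWORD"
            - docker build -f .docker/Dockerfile -t myaccount/mysite:$BITBUCKET_COMMIT .
            - docker push myaccount/mysite:$BITBUCKET_COMMIT
      - step:
          name: Deploy
          script:
            # Placeholder: e.g. SSH to the host, docker pull the new tag and restart the container
            - echo "deploy step goes here"

On the server, deploying a new version then amounts to pulling the new tag and replacing the running container.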
Related
I've got a Laravel application (based on Laradock) that has been amended to productionize it; the code is in the linked GitHub repo.
I'm trying to build a solution where:
files are copied from the host into the PHP app image and onto a volume; as you can see in the Dockerfile in the repo, the files are copied into the image, and composer update runs as expected
the PHP app is built in Docker (the composer update command needs to run on the volume)
when docker-compose up is called, multiple containers start, and nginx and php-fpm share the same content, so the nginx container can serve the PHP application
When I run the code in the repo above, I see a 404 in the browser. The reason is (I think) that in the docker-compose file, on line 97, the statement:
- app-data:/var/www/
(which mounts the volume) has hidden the files that were added to the image, as it mounts over them. (These files are correctly added to the image as part of the docker build for php-fpm.)
So the question is: how can I mount a volume at run time without erasing the files that were added as part of the image build? The volume needs to be mounted at that path so that both containers can see the files (AFAIK).
A Docker named volume persists its data; only the first time it is created does it use your container's data to initialize the volume. I guess the volume was created incorrectly before, so you just need to manually delete the app-data volume and re-deploy your docker-compose file:
Stop your docker-compose stack
docker volume rm <project_name_or_stack_name>_app-data
Start your docker-compose stack again
Besides, you will need to make sure that your php-fpm service, which contains your source code, is started before any other services sharing the app-data volume.
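A hedged docker-compose sketch of that ordering (the service and volume names are illustrative, not taken from the repo in the question):

# Illustrative compose file; adapt names and paths to your existing setup.
version: "3.7"
services:
  php-fpm:
    image: myaccount/my-php-app     # image whose /var/www contents seed the volume
    volumes:
      - app-data:/var/www
  nginx:
    image: nginx:latest
    depends_on:
      - php-fpm                     # start php-fpm first so it populates the fresh volume
    volumes:
      - app-data:/var/www
    ports:
      - "80:80"
volumes:
  app-data:

Note that depends_on only controls start order; the volume is seeded from the image the first time a container with content at that path mounts it while the volume is still empty.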
I have the following containers:
nginx:latest
myapp container (derived from php-fpm:alpine)
Currently I have a dummy project with a CI pipeline in place which, at build time, compiles production variants of the resources (images/js/css, ...). Build files end up in /public/build. At the very end of the CI pipeline, I package everything into Docker images and upload them to Docker Hub.
Both nginx and myapp have a volume (not a bind mount) set up, pointing to /opt/ci-test/public/build.
This works the first time.
But let's say I add a new file, new.css; my new version of the Docker image will contain a built variant of new.css.
Running a new container with the pre-existing volume does not reveal the new file, and I understand that it should not. I can create a new volume, my_app_v2.
At this point nginx does not see this new volume, and the nginx container must be removed and re-run (with the new volume) for the change to take effect.
Is there an easy way to overcome this?
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
EDIT:
One workaround I have managed to dig out is to remove all files from the attached volume and start a new myapp container. This mirrors all the latest files into the volume, but it feels dirty...
EDIT2:
Related issue (case 3): https://github.com/moby/moby/issues/18670#issuecomment-165059630
EDIT3:
Dockerfile
FROM php:7.2.30-fpm-alpine3.11
COPY . /opt/ci-test
WORKDIR /opt/ci-test
VOLUME /opt/ci-test/public/build
So far I do not have docker-compose, and I run the containers manually via these commands:
docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -it -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest
Simply do not use a volume for this.
You should treat Docker images as "monolithic packages" that contain your dependencies (nginx) and your app's files (images, js, css...). There's no need to treat your app's files any differently from nginx itself; it's all part of a single Docker image.
Without a volume, you run v1 of your image, nginx sees the v1 files. You run v2 of your image, nginx sees the v2 files.
Volumes are intended to be used when you actually want to keep files between container versions (such as databases or file uploads), not for your site's static assets.
My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?
Yes, this is bad design. If you want to run multiple apps, you should run one Docker container per app. That way, when you release a new version of one app, you only need to restart that container. Containers aren't supposed to be treated like traditional virtual machines that you "SSH into" and manually configure. Containers are "throw-away": new version of the app? Just replace the container with a new one built from the newer image.
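As a sketch of the no-volume approach (image names are placeholders, and it assumes your nginx config already serves /var/www/citest as in your docker run command): build a second image for nginx that bakes in the same static assets, so each release ships matching files in both images.

# Dockerfile.nginx (sketch): copy the built assets out of the app image instead of sharing a volume
FROM <myaccount>/citest:v2 AS assets

FROM nginx:latest
# Bake the compiled assets into the nginx image at the path nginx already serves
COPY --from=assets /opt/ci-test/public/build /var/www/citest

Releasing a new version then means rebuilding both images and replacing both containers; there is no volume to clean up.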
First option: don't use a volume. If you want to have the files accessible from the image build, and don't need persistence, then the volume isn't helping with your workflow.
Second option: delete the previous volume between runs and use a named volume, which docker will initialize with the image contents.
Third option: modify the image build and container entrypoint to save the directory off to a different location during the build, and restore that location into the volume on container startup in the entrypoint. I've got an implementation of this in the save-volume and load-volume scripts in my base image. It gets more complicated when you want to merge the contents of the volume with the contents of the host, and you'll need to decide how to handle files getting deleted and what changes to save from the previous runs.
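A rough sketch of that third option (a generic illustration, not the save-volume/load-volume scripts mentioned above; paths match the Dockerfile in the question):

# Dockerfile additions (sketch): stash the built files where the volume won't shadow them,
# and install a small entrypoint that restores them at startup.
RUN cp -a /opt/ci-test/public/build /opt/build-backup
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]

docker-entrypoint.sh would then be roughly:

#!/bin/sh
# Repopulate the (empty or stale) volume from the stashed copy, then run the main process.
set -e
cp -a /opt/build-backup/. /opt/ci-test/public/build/
exec "$@"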
I am new to Drupal and just looking for some help getting my dev environment going and using best practices.
So I have a working container with Drupal, MariaDB, Drush, etc. After installing Drupal with the installer, I install themes and such; however, it seems that if I drop the container I lose all my work. How could I ever work in a team then? How do I keep that work? Do I use git inside the container and pull and push from within?
As far as I'm aware, work inside the container does not necessarily reflect into my local working directory.
Any help would be much appreciated.
I don't know about Drupal, but generally in Docker you would mount a folder from the local filesystem where Docker is running when you start the container. The data in /your/local/folder will be accessible both inside the container and on your local filesystem, and it will also survive a restart of the container.
docker run -d \
-v </your/local/folder>:</folder in container>:z \
<your image>
The trick will be to identify the data in the container you want on your local filesystem.
Look here for alternative ways to handle persistent data in Docker:
https://docs.docker.com/storage/volumes/
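For example, with the official drupal image (which keeps its code under /var/www/html) you might mount just the directories you actually edit; this is a sketch, so adjust the local paths to your project:

# Mount the code you work on so it lives on the host and can go into git
docker run -d --name drupal \
  -p 8080:80 \
  -v /your/local/modules:/var/www/html/modules:z \
  -v /your/local/themes:/var/www/html/themes:z \
  -v /your/local/sites:/var/www/html/sites:z \
  drupal:latest

The code under /your/local/... can then live in a normal git repository on the host, which is how you share it with your team; the database still needs its own volume (or a separate mariadb container with one) to survive the container being removed.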
I can highly recommend Lando for Drupal 8.
SEE: https://docs.devwithlando.io/tutorials/drupal8.html
It's a free, open-source, cross-platform, local development environment and DevOps tool built on Docker container technology.
I am trying to understand the general process that goes into deploying a PHP web app through Docker. I have a web app developed in LAMP.
So far I understand that first of all I have to download and install Docker itself. Afterwards I have to install Docker Compose. Then, using Compose, I have to create a container that will run the image of my server (Apache).
And this is where I get confused. Do I then have to create a container for my database and another one for the application itself (the directory containing the code), or do I have one container for the server, the database and the app?
I don't need a detailed explanation, just the general idea behind the process; then I can figure out the rest on my own.
Thank you to anyone who can provide any help.
There can be many ways, but the simple way is to install Docker on some Linux machine, write a Dockerfile that installs and configures all the necessary components such as Apache, PHP, MySQL, etc., and then either copy your application code into the container or attach it as an external volume from the host.
After writing the Dockerfile, you can build the Docker image using the docker build command, and after the image is built you can use it locally, push it to Docker Hub, or push it to your private Docker registry if you want.
The other option, if you just want to test, is to pull an existing image from Docker Hub that contains the LAMP stack; then you just need to docker run the image, attaching your PHP application as an external volume.
Of course, to access the application on port 80 or 443 from outside Docker, you have to expose those ports either in the Dockerfile or when running docker run.
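For example, a quick test run with the official PHP + Apache image and your code attached as an external volume might look like this (a sketch; the image tag and paths are placeholders, and this container does not include MySQL):

# Serve the code in the current directory with Apache + PHP, published on port 80
docker run -d --name myapp \
  -p 80:80 \
  -v "$PWD":/var/www/html \
  php:7.4-apache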
For a test environment, you can run all the services in just one container.
For larger deployments you can consider a container orchestration service such as Docker Swarm or Kubernetes. You can also try DC/OS from Mesosphere by grabbing a Vagrantfile from their GitHub repo that will set up DC/OS on a single machine for you; then you can spin up as many services as you want on Mesos. They provide out-of-the-box support for service installation, container management and scaling.
Best practice recommends having one Docker container per process/service (one container for Apache + PHP, another container for MySQL, and so on), but it's just a guideline; it doesn't mean you cannot have a single container with everything you need inside it.
If you decide to go with only one container to run all the services, you'll be fine just using Docker (Engine). You can still use Docker Compose in this case but there's no real need for it.
Docker Compose is more helpful when you have to run multiple containers: with just one command you can get all your containers up and running. You could also use only Docker Engine here, but then you'd need to run each container manually.
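For instance, a minimal docker-compose.yml for the one-container-per-service layout could look roughly like this (service names, credentials and paths are illustrative, not a definitive setup):

# docker-compose.yml (sketch)
version: "3.7"
services:
  web:
    image: php:7.4-apache          # Apache + PHP in one container
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html        # your application code
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example # placeholder credential
      MYSQL_DATABASE: app
    volumes:
      - db-data:/var/lib/mysql     # keep the data outside the container's writable layer
volumes:
  db-data:

docker-compose up then starts both containers, and the web container can reach MySQL at the hostname db. In practice you would usually build your own image FROM php:7.4-apache that runs docker-php-ext-install mysqli (or pdo_mysql) so PHP can actually talk to MySQL.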
I deployed a local project using the gcloud command and everything seemed OK there. I'm getting a 500 error in the browser, but I still have hundreds of questions. Where's the code? What is gcloud doing behind the scenes when I deploy? Why do I see 3 instances when I deployed just one project?
I SSHed into each of the three compute instances I see and I couldn't find the code. I want to do something very silly and easy: just go to the index.php file and add echo '1';die; to check that that's the code I can play with to make my project work on Google Cloud Platform.
Because I'm a noob at this, I won't be able to tweak my project perfectly to work on Google Cloud at first, so it's probably silly but a must!
My current and only config file:
runtime: php
vm: true
runtime_config:
document_root: public
You are using the AppEngine Flexible Environment (what used to be called Managed VMs). This environment uses Docker to build an image out of your application code and run it in a container.
See the Additional Debugging part of the Managed VMs PHP tutorial for more info on how to debug on the machine. After SSHing into an instance, you are on the host machine, but you still need to run additional commands to access the container, which is where your application code is running. The following command will get you into the container:
sudo docker exec -t -i gaeapp /bin/bash
Once there, you can edit your running application by running the following commands
apt-get update
apt-get install vim # or your editor of choice
vi /app/public/index.php # I am assuming this is where your file is
Yes, you have to install vim on the container because it will not be installed by default, as this is your production image.
Also be sure to check the Logging page in Developer Console, as that is where the 500 error message will be logged, and it's a lot easier than going through these steps!