I have a web server with multiple PHP websites. I push all my updates through FTP.
I intend to move to a more containerized environment without changing too much of my current basic workflow.
I would like to deploy each of my websites in a Docker container. The database for all the websites would be in another container.
I will have another Docker container as a reverse proxy.
To update my websites, I have two ideas:
Set up FTP access in the container so I can update it directly.
Set up a shared directory with the host through a volume, so I can set up FTP access from the host.
What do you think of it?
Thanks for your help
Changing the code inside a running container, or at all, is against Docker best practices, as containers are designed to be ephemeral.
A better idea would be to rebuild the image every time you update the code, allowing the containers to stay ephemeral, and making it easier to scale. You could implement this through CI/CD, but that is out of the scope of this question.
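For example, a minimal sketch of that rebuild workflow (the php:8.2-apache base image, the my/site1 tag and the ./src path are only placeholders, not something from your setup):
# Dockerfile: bake the site's code into the image at build time
FROM php:8.2-apache
COPY ./src/ /var/www/html/
# rebuild and replace the running container whenever the code changes
docker build -t my/site1:latest .
docker rm -f site1
docker run -d --name site1 -p 8080:80 my/site1:latest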
If you really want to stick with the FTP idea, it's best to have one container with an FTP service in it and another with the web server, as each container should have only one concern.
If your FTP server image is my/ftp-image and your web server image is my/web-server-image, then you can start your containers like this:
docker run -itd --name my-web-server -p 80:80 -v files_volume_name_here:/path/to/files/in/container my/web-server-image
docker run -itd --name my-ftp-server [ports for ftp server here] -v files_volume_name_here:/path/to/files/in/container my/ftp-image
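Both containers mount the same named volume, so files uploaded over FTP are immediately visible to the web server. Docker creates the named volume on first use, but you can also create and inspect it explicitly beforehand:
docker volume create files_volume_name_here
docker volume inspect files_volume_name_here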
Related
I'm pretty new to Docker, and I have this task. I need to write a configuration for creating a container infrastructure to run a PHP application using nginx, PHP-FPM & MySQL. The code for the application is in a tarball on a remote server.
What I have done so far:
Created fully functioning nginx, PHP and MySQL containers.
Downloaded the code manually, extracted it to a host directory, and mounted it into both the nginx and PHP-FPM containers with a bind mount.
This setup works, but I don't want to keep the code locally. What I want is to download it during the build step of one of the containers and use it from there. My first idea was to use a shared volume to store the downloaded code and mount this volume into both the nginx and PHP-FPM containers. However, if I do it from within one of the Dockerfiles, I don't have access to the mounted volume (volumes are mounted when the container runs, not during the image build). I could download it to the host filesystem, but that doesn't seem right. What is the right way to deal with this?
Actually, I found that the data is copied from the container into the volume, so after the image is built and a container is started with the volume mounted, the files can then be reused from another container.
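A rough sketch of that approach (the URL, image names and paths below are placeholders): bake the code into a small helper image, let Docker copy its contents into an empty named volume on the first run, and then mount the same volume into the nginx and PHP-FPM containers:
# Dockerfile for a helper image that only holds the application code
FROM alpine:3.19
ADD https://example.com/app.tar.gz /tmp/app.tar.gz
RUN mkdir -p /var/www/app && tar -xzf /tmp/app.tar.gz -C /var/www/app
# on the first start with an empty named volume, Docker copies the image's
# files at /var/www/app into the volume, making them reusable by other containers
docker build -t my/app-code .
docker run --rm -v app_code:/var/www/app my/app-code true
docker run -d --name php -v app_code:/var/www/app my/php-fpm-image
docker run -d --name web -p 80:80 -v app_code:/var/www/app my/nginx-image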
There's a lot of info on the internet about Docker's basic operations: "how to pull an image", "how to run/start a container", but almost nothing about what to do next with it. How do you develop?
For example, I pulled linode/lamp. A simple project is lying in /var/www/example.com/public_html/.
I launch docker run -p 80:80 -t -i linode/lamp /bin/bash, then service apache2 start. Now, at http://localhost in my browser, I see the index page of the project.
Now I want to edit/add/delete files in the project. Doing this with bash and nano is insane, obviously. Therefore I want to use PhpStorm, and I cannot understand what to do.
What option should I choose to create a project?
Web server is installed locally, source files are located under its document root.
Web server is installed locally, source files are located elsewhere locally.
Web server is on a remote host, files are accessible via network share or mounted drive.
Web server is on a remote host, files are accessible via FTP/SFTP/FTPS.
Source files are in a local directory, no Web server is yet configured.
If it's the first one, then where should I get the files from? If "via FTP/SFTP/FTPS", how do I set that up? I don't get it.
I know that PhpStorm has Deployment - Docker settings and I can configure it. That's how it looks on my machine:
(screenshot: Docker settings)
(screenshot: Debug/Run configuration)
But it only gives me the ability to start containers and connect to them via the console. Should I use it somehow?
(screenshot: Docker)
Please explain to me what I should do. I would like to see answers for both Windows and Linux (if there is a difference, of course).
P.S. I use Docker on Windows, but in the settings it's switched to Linux containers.
I have a noob question. If I'm using a Docker image that uses a folder located on the host to do something, where should that folder be located in the Kubernetes cluster? I'm fine doing this with Docker since I know where my host filesystem is, but I get lost when I'm on a Kubernetes cluster.
Actually, I don't know if this is the best approach, but what I'm trying to do is build a development environment for a PHP backend. Since I want every person to be able to run a container environment with their own files (which are on their computers), I'm trying to build a sidecar container so that when launching the containers I can pass the files to the PHP container.
The problem is that I'm running Kubernetes to build a development environment for my company using a Vagrant (CoreOS + Kubernetes) solution, and since we don't have a cloud service right now I can't use a persistent disk. I tried NFS, but it seems to be too much for what I want (just passing some information to the pod regardless of the PC where I am). I also tried to use hostPath in Kubernetes, but the problem is that the machines I want to connect to the containers from are located outside the Kubernetes cluster (Vagrant + CoreOS + Kubernetes), so I'm trying to expose some containers on public IPs, but I cannot figure out how to pass the files (located on the machines outside the cluster) to the containers.
Thanks for your help, I appreciate your comments.
Not so hard, actually. Checking my gist may give you some tips:
https://gist.github.com/resouer/378bcdaef1d9601ed6aa
See, do not try to consume files from outside; just package them in a Docker image and consume them in sidecar mode.
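A rough sketch of what that can look like (the base image, paths and command are assumptions for illustration): build an image that contains only the code and have it copy that code into a volume shared with the PHP container when it starts, so nothing has to be pulled from outside the cluster:
# Dockerfile for a code-only sidecar image
FROM alpine:3.19
COPY ./src/ /code/
# on start-up, copy the code into whatever volume is mounted at /shared,
# then stay alive so the pod keeps running
CMD ["sh", "-c", "cp -r /code/. /shared/ && tail -f /dev/null"]
In the pod spec, this sidecar and the PHP container then share an emptyDir volume, mounted at /shared in the sidecar and at the PHP document root in the PHP container.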
Is it possible to run two web apps at the same time, one using PHP and the other using Ruby, each one in a Docker container?
Should be no problem. Normally you have one app per container.
You could create a Docker container for your PHP server and a container for your Ruby server.
You need to choose different host ports, because by default both will run on port 80 or 443; then it should work.
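For example (the image names are placeholders), you could publish each app on its own host port:
docker run -d --name php-app -p 8080:80 my/php-app
docker run -d --name ruby-app -p 8081:80 my/ruby-app
# the apps are then reachable at http://localhost:8080 and http://localhost:8081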
Docker is designed to run one piece of software per container; if you want to run more than one, you need a tool like supervisor, s6, or daemontools. Check the docs for supervisord:
https://docs.docker.com/articles/using_supervisord/
I have a PHP web application and I want to convert it to Docker.
I have these containers:
mysql
php
nginx
I have the source code on my host in /var/www/site1.
Now when I launch nginx, I can mount site1 into the nginx container at /usr/nginx/share/html.
But I am not sure how to link it with the PHP container. Can't I have a standalone PHP container with only PHP installed, or do I need to have some web server along with PHP?
My view on Docker containers is that each container typically represents one process, e.g. mysql or nginx as in your example. Containers typically communicate with each other using networking or via shared files in volumes.
Each container runs its own operating system (typically specified in the FROM section of your Dockerfile). In your case, you are suggesting that the nginx container runs as one process with one operating system and that the PHP libraries run in a different process (in a different OS). I'm not sure if this is doable, but it seems like a strange way of doing things.
My suggestion is that you create two containers:
nginx+php - this container holds the PHP installation as well as the Nginx stuff
mysql - this container contains the database
The containers can communicate via classic networking or as linked containers.
However, the PHP files that you wish to execute (i.e. your website) should be dynamically mounted as a data volume on the nginx+php container or provided via a data volume container.
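A minimal sketch of that layout (my/nginx-php is a hypothetical image bundling Nginx and PHP; the mount path depends on the image you actually build):
# database container
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:8
# nginx+php container, linked to the database, with the site mounted as a volume
docker run -d --name web --link db:mysql -p 80:80 -v /var/www/site1:/usr/share/nginx/html my/nginx-php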