How to make nginx and PHP containers communicate - php

I have a PHP web application and I want to move it into Docker.
I have these containers:
mysql
php
nginx
I have the source code in a host folder, /var/www/site1.
Now when I launch nginx, I can mount site1 into the nginx container as /usr/nginx/share/html.
But I am not sure how to link it with the PHP container. Can't I have a standalone PHP container with only PHP installed, or do I need some web server along with PHP?

My view on Docker containers is that each container typically represents one process, e.g. mysql or nginx as in your example. Containers typically communicate with each other over the network or via shared files in volumes.
Each container runs its own operating system (typically specified in the FROM section of your Dockerfile). In your case, you are suggesting that the nginx container runs as one process with one operating system and that the PHP libraries run in a different process (with a different OS). I'm not sure if this is doable, but it seems like a strange way of doing things.
My suggestion is that you create two containers:
nginx+php - this container holds the PHP installation as well as the nginx setup
mysql - this container contains the database
The containers can communicate via classic networking or as linked containers.
However, the PHP files that you wish to execute (i.e. your website) should be dynamically mounted as a data volume on the nginx+php container, or via a data volume container.
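As a rough sketch of that layout (the image name my/nginx-php, the network name, and the container paths below are illustrative assumptions, not taken from the question), the two containers could be started like this:

# Sketch only: image name, network name and paths are placeholders
docker network create site1-net

# database container
docker run -d --name mysql --network site1-net \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8

# nginx + PHP in one container, with the website mounted as a data volume
docker run -d --name web --network site1-net -p 80:80 \
  -v /var/www/site1:/var/www/html my/nginx-php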

Related

How to share code between Docker containers?

I'm pretty new to Docker, and I have this task. I need to write a configuration for creating a container infrastructure to run a PHP application using nginx, PHP-FPM & MySQL. The code for the application is in a tarball on a remote server.
What I did so far:
Created the fully functioning nginx, php and mysql containers.
Downloaded the code manually, extracted it to a host directory, and mounted it into both the nginx and PHP-FPM containers with a bind mount.
This setup works, but I don't want to keep the code locally. What I want is to download it during the build step of one of the containers and use it from there. My first idea was to use a shared volume to store the downloaded code and mount this volume into both the nginx and PHP-FPM containers. However, if I do the download from within one of the Dockerfiles, I don't have access to the mounted volume (volumes are mounted when the container runs, not during the build). I could download it to the host filesystem, but that doesn't seem right. What is the right way to deal with this?
Actually, I found that the data is copied from the container into the volume (when an empty named volume is mounted over a path that already has data in the image), so after the image is built, the code can be reused from another container.
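A minimal sketch of that approach, assuming a hypothetical tarball URL and placeholder image/volume names: the code is downloaded during the build of a small code image, and the first run against an empty named volume populates the volume so both nginx and PHP-FPM can mount it.

# Sketch only: URL, image and volume names are hypothetical
cat > Dockerfile.code <<'EOF'
FROM alpine
# download and unpack the application tarball during the build
ADD https://example.com/app.tar.gz /tmp/app.tar.gz
RUN mkdir -p /var/www/html && tar -xzf /tmp/app.tar.gz -C /var/www/html
EOF
docker build -f Dockerfile.code -t my/app-code .

# mounting an empty named volume over /var/www/html copies the image's files into it
docker volume create app_code
docker run --rm -v app_code:/var/www/html my/app-code true

# both containers can now mount the populated volume
docker run -d --name php-fpm -v app_code:/var/www/html php:fpm
docker run -d --name nginx -p 80:80 -v app_code:/var/www/html:ro nginx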

Deploying Laravel with Docker containers

I plan to deploy my Laravel application with docker containers.
I need the following components for my application:
MySQL server
nginx server
certbot for SSL activation
Queue worker for Laravel
Since the application is still under development (and probably will always be), it should be very easy to update (I will automate this with GitLab CI/CD) and it should have as little downtime as possible during the update.
Also, I want to be able to host multiple instances of the application, whereby only the .env file for Laravel is different. In addition to the live application, I want to host a staging application.
My current approach is to create a container for the MySQL server, one for the nginx server, and one for the queue worker. The application code would be a layer in the nginx server container and in the queue worker container. When updating the application, I would rebuild the nginx container and the queue worker container. Is this a good approach? Or are there better ways to achieve this? And what would be a good approach to keep my MySQL server, nginx server, PHP version, etc. up to date without downtime for the application?
The main idea of Docker is to divide your app into containers, so yes, it is good to have one container per service. In your example, I suggest keeping MySQL in one container, the queue worker in another, and so on. As a result, you will have a container for each service. Then I suggest creating an internal Docker network and connecting the containers to it. I also suggest using Docker volumes to store all your application data. To make this much easier to configure, I suggest using Docker Compose.
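A rough sketch of what that Compose setup could look like (service names, image references, and the env file handling are assumptions, not taken from the question):

# Sketch only: images, registry and paths are placeholders
cat > docker-compose.yml <<'EOF'
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db_data:/var/lib/mysql        # application data in a named volume
  nginx:
    image: nginx
    ports: ["80:80", "443:443"]
    depends_on: [app]
  app:
    image: registry.example.com/laravel-app:latest   # code baked into the image
    env_file: .env                    # only this differs between instances
    depends_on: [mysql]
  queue:
    image: registry.example.com/laravel-app:latest
    command: php artisan queue:work
    env_file: .env
    depends_on: [mysql]
volumes:
  db_data:
EOF
docker compose up -d    # all services share Compose's default internal network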

FTP access inside Docker for a PHP environment

I have a web server with multiple PHP websites. I push all my updates through FTP.
I intend to move to a more containerized environment without changing too much of my current basic workflow.
I would like to deploy each of my websites in a Docker container. The database for all the websites would be in another container.
I will have a Docker container as a reverse proxy.
To update my websites, I have two ideas:
Set up FTP access in the container so I can update it directly
Set up a shared directory with the host through a volume, so I can set up FTP access from the host.
What do you think?
Thanks for your help.
Changing the code inside a running container, or at all, is against Docker best practices, as containers are designed to be ephemeral.
A better idea would be to rebuild the image every time you update the code, allowing the containers to stay ephemeral and making it easier to scale. You could implement this through CI/CD, but that is out of the scope of this question.
If you really still want to go with the FTP idea, it's a good idea to have one container with an FTP service in it and another with the web server, since containers should have only one concern.
If your FTP server image is my/ftp-image and your web server is my/web-server-image, then you can start your containers like this:
docker run -itd --name my-web-server -p 80:80 -v files_volume_name_here:/path/to/files/in/container my/web-server-image
docker run -itd --name my-ftp-server [ports for ftp server here] -v files_volume_name_here:/path/to/files/in/container my/ftp-image
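The named volume is created automatically on its first use, but you can also create it explicitly beforehand (the volume name is the same placeholder as above):

docker volume create files_volume_name_here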

How to build a sidecar container for passing files from a machine outside of the Kubernetes cluster?

I have a noob question. If I'm using a Docker image that uses a folder located on the host to do something, where should that folder be located in a Kubernetes cluster? I'm OK doing this with Docker since I know where my host filesystem is, but I get lost when I'm on a Kubernetes cluster.
Actually, I don't know if this is the best approach, but what I'm trying to do is build a development environment for a PHP backend. Since I want every person to be able to run a container environment with their own files (which are on their computers), I'm trying to build a sidecar container so that when launching the container I can pass the files to the PHP container.
The problem is that I'm running Kubernetes to build development environments for my company using a Vagrant (CoreOS + Kubernetes) solution, and since we don't have a cloud service right now I can't use a persistent disk. I tried NFS, but it seems to be too much for what I want (just passing some information to the pod regardless of the PC I'm on). I also tried hostPath in Kubernetes, but the machines from which I want to connect to the containers are located outside of the Kubernetes cluster (Vagrant + CoreOS + Kubernetes), so I'm trying to expose some containers on public IPs, but I cannot figure out how to pass the files (located on machines outside of the cluster) to the containers.
Thanks for your help, I appreciate your comments.
It's not so hard, actually. Check my gists; they may give you some tips:
https://gist.github.com/resouer/378bcdaef1d9601ed6aa
See, do not try to consume files from outside; just package them into a Docker image and consume them in sidecar mode.
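A minimal sketch of that idea (the image name my/php-code and the paths are hypothetical, and the linked gist may differ in detail): the files are baked into a small code image, and a helper container copies them into a shared emptyDir volume that the PHP container mounts. Here the one-shot copy is done with an init container, which is a slight variant of a long-running sidecar.

# Sketch only: my/php-code and the paths are placeholders
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: php-dev
spec:
  volumes:
    - name: code
      emptyDir: {}
  initContainers:
    - name: code-loader            # ships the files baked into its image
      image: my/php-code
      command: ["sh", "-c", "cp -r /app/. /shared/"]
      volumeMounts:
        - name: code
          mountPath: /shared
  containers:
    - name: php
      image: php:8-apache
      volumeMounts:
        - name: code
          mountPath: /var/www/html
EOF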

PHP and Ruby with Docker

Is it possible to run two web apps at the same time, one using PHP and the other using Ruby, each one in its own Docker container?
That should be no problem. Normally you have one app per container.
You could create a Docker container for your PHP server and a container for your Ruby server.
You need to choose different host ports, because by default both will run on port 80 or 443; with different ports it should work.
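For example (image names are placeholders), each app could be published on its own host port:

# Placeholder image names; each app gets its own host port
docker run -d --name php-app -p 8080:80 my/php-app
docker run -d --name ruby-app -p 8081:80 my/ruby-app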
Docker is designed to run one piece of software per container. If you want to run more than one process in a container, you need a tool like supervisor, s6, or daemontools; check the docs for supervisor:
https://docs.docker.com/articles/using_supervisord/
