Docker: communication between web container and php container

I'm trying to dockerize a project that runs with PHP + the Apache HTTP server. I learned that I need one container for the Apache HTTP server and another container for the PHP scripts. I have searched a lot but still don't understand how that works. What I know so far is that I should use Docker networking: as long as the containers are on the same network, they should be able to communicate with each other.
The closest info I got is this, but it uses nginx:
https://www.codementor.io/patrickfohjnr/developing-laravel-applications-with-docker-4pwiwqmh4
quote from original article:
vhost.conf
The vhost.conf file contains standard Nginx configuration that will handle http requests and proxy traffic to our app container on port 9000. Remember from earlier, we named our container app in the Docker Compose file and linked it to the web container; here, we can just reference that container by its name and Docker will route traffic to that app container.
My question is: what configuration do I need in order to make the communication between the PHP container and the web container happen with the Apache HTTP server, like in the example above? And what is the rationale behind this setup? I'm really confused; any information will be much appreciated.

The example that you linked to utilizes two containers:
a container that runs nginx
a container that runs php-fpm
The two containers are able to connect to each other because of the links directive on the web service in the article's example docker-compose.yml. With this, the two containers can resolve the names web and app to the corresponding Docker containers. This means the nginx service in the web container can forward any request it receives to the php-fpm container simply by forwarding to app:9000, which is <hostname>:<port>.
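In the nginx site configuration, that hand-off is typically a fastcgi_pass directive pointing at the linked container's name. A minimal sketch, assuming the app name and port 9000 from the linked article:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass app:9000;   # "app" resolves to the php-fpm container via the link
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}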
If you are looking to stay with PHP + Apache, there is an official image, php:7-apache, that will do what you're looking for within a single container. Assuming the following project structure:
/ Project root
- /www/ Your PHP files
You can generate a docker-compose.yml as follows within your project root directory:
web:
  image: php:7-apache
  ports:
    - "8080:80"
  volumes:
    - ./www/:/var/www/html
Then, from your project root, run docker-compose up and you will be able to visit your app at localhost:8080.
The above docker-compose.yml will mount the www directory of your project as a volume at /var/www/html inside the container, which is where Apache serves files from.
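If you would rather keep the Apache HTTP server and PHP in separate containers, as the question asks, one common pattern is an httpd container that hands PHP requests to a php-fpm container through mod_proxy_fcgi. The following is only a sketch under assumptions (the service names web and app, the mounted httpd.conf, and the paths are not from the linked article):
web:
  image: httpd:2.4
  ports:
    - "8080:80"
  volumes:
    - ./www/:/usr/local/apache2/htdocs
    - ./httpd.conf:/usr/local/apache2/conf/httpd.conf
  links:
    - app
app:
  image: php:7-fpm
  volumes:
    # php-fpm must see the files at the same path Apache passes in SCRIPT_FILENAME
    - ./www/:/usr/local/apache2/htdocs
The httpd.conf would be a copy of the image's default configuration with roughly these lines added or uncommented:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
DirectoryIndex index.php index.html
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://app:9000"
</FilesMatch>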

The configuration in this case is Docker Compose. They are using Docker Compose to facilitate the DNS changes in the containers that allow them to resolve names like app to IP addresses. In the example you linked, the web service links to the app service. The name app can now be resolved via DNS to one of the app service containers.
In the article, the web service nginx configuration they use has a host and port pair of app:9000. The app service is listening inside the container on port 9000 and nginx will resolve app to one of the IP addresses for the app service containers.
The equivalent of this in just Docker commands would be something like:
App container:
docker run --name app -v "$(pwd)":/var/www appimage
Web container:
docker run --name web --link app:app -v "$(pwd)":/var/www webimage
(docker run expects an absolute host path for bind mounts, hence "$(pwd)" rather than ./)
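Note that --link is a legacy feature; on current Docker versions the same name resolution is more commonly achieved with a user-defined bridge network. A rough equivalent (the network name mynet is just an example):
docker network create mynet
docker run --name app --network mynet -v "$(pwd)":/var/www appimage
docker run --name web --network mynet -v "$(pwd)":/var/www webimage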

Related

Why do we need to map the project files in both PHP-FPM and web server container?

I am pretty new to all of this Docker stuff and I have this docker-compose.yml file:
fpm:
  build:
    context: "./php-fpm"
    dockerfile: "my-fpm-dockerfile"
  restart: "always"
  ports:
    - "9002:9000"
  volumes:
    - ./src:/var/www/src
  depends_on:
    - "db"
nginx:
  build:
    context: "./nginx"
    dockerfile: "my-nginx-dockerfile"
  restart: "always"
  ports:
    - "8084:80"
  volumes:
    - ./docker/logs/nginx/:/var/log/nginx:cached
    - ./src:/var/www/src
  depends_on:
    - "fpm"
I am curious why I need to add my project files to the fpm container as well as to the nginx one.
Why isn't it enough to add them only to my web server? A web server is the thing that holds the files and handles the requests...
I believe that this information would be useful to other docker newbies as well.
Thanks in advance.
In your Nginx container you only need the static files, and in your PHP-FPM container you only need the PHP files. If you are able to split the files that way, you don't need any file to be present in both places.
Why isn't it enough to add them only to my web server? A web server is the thing that holds the files and handles the requests...
Nginx handles requests from users. If a request is for a static file (as configured in the Nginx site), it sends the contents back to the user. If the request is for a PHP file (and Nginx is correctly configured to use FPM for that location), it forwards the request to the FPM server (via a socket or a TCP connection), which knows how to execute PHP files (Nginx doesn't). You can use PHP-FPM or any other interpreter you prefer, but this one works great when configured correctly.
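As a rough sketch of what that split looks like in an Nginx site configuration (the fpm hostname and port come from the compose file above; the root path and the rest are assumptions):
server {
    listen 80;
    root /var/www/src;
    index index.php index.html;

    # static files are answered by nginx directly
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # PHP files are handed off to the php-fpm container on its internal port 9000
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass fpm:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}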
If you just want an explanation of why both need access to the same files under /var/www/src, I cannot provide a reliable answer, since I'm familiar with neither nginx nor fpm.
But I can explain the purpose of doing it this way.
First, to learn about docker, I highly recommend the official documentation, since it provides a great explanation: docs.docker.com
For learning the syntax of a docker-compose file, see docs.docker.com: Compose file reference
Your specific question
Let me break down what you have here:
You got two different images, fpm and nginx.
fpm:
...
nginx:
...
In principle, these containers (or services, as they are called) run completely independently from each other. This basically means that they don't know the other one exists.
Note: depends_on just expresses a dependency between services
Conclusion: Your webserver knows nothing about your second container.
As said: while I don't know the purpose of fpm, I assume that a common folder is the connection between these two containers. By using a common folder ./src on your host, they both have access to this ./src folder and can therefore write to and read from it.
The syntax ./src:/var/www/src means that the ./src folder on your host is mapped (inside your container) to /var/www/src.
If a container writes to /var/www/src, it actually writes to ./src on your host, and vice versa.
Conclusion: They share a common directory where both containers can access the very same files.
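A quick way to see that sharing in action (a sketch; the file name is only an example):
docker-compose exec fpm touch /var/www/src/hello-from-fpm.txt
ls ./src/                                    # the file appears on the host...
docker-compose exec nginx ls /var/www/src/   # ...and inside the nginx container as well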
Hope my explanation helps you understand your docker-compose better.

Docker-compose: Connect to database using localhost & mysql as hostname

I'm currently trying to dockerize my app for local development. For a bit of context, it is using Magento.
I have a configuration file where I usually set 127.0.0.1 as the MySQL hostname, since the web app runs on the same host as MariaDB.
Initially, I tried to link my containers within my docker-compose file using 'links' (below is an extract of my docker-compose setup at this point):
mariadb:
  image: mariadb:5.5
php-fpm:
  build: docker/php-fpm
  links:
    - "mariadb:mysql"
At this point, MariaDB was reachable by setting mysql as the hostname in my configuration file instead of 127.0.0.1. However, I wanted to keep 127.0.0.1.
After a bit of digging, I found this blog post which explains how to set up containers so that they can be reached through localhost or 127.0.0.1.
This is working as I expect, but it has a flaw.
Without Docker, I'm able to run PHP scripts that leverage Magento core modules by loading them. But with Docker and the blog post's configuration, I'm not able to do that anymore, as Magento is oddly expecting a DB hostname called "mysql".
Is there any way, through docker-compose, to have a container be reachable via both localhost and a hostname?
Without Docker, if I install MariaDB on my host machine, I am able to connect to its instance through 127.0.0.1:3306 or mysql://. I want to get a similar behaviour.
As said by @Sai Kumar, you can connect the Docker containers to the host network and then use localhost to access services. But the problem is that the port will be reserved by that container and will not be available until it is deleted.
But from your question, the following sentence caught my attention
Without Docker, I'm able to run PHP scripts that leverage Magento core modules by loading them. But with Docker and the blog post's configuration, I'm not able to do that anymore, as Magento is oddly expecting a DB hostname called "mysql".
So if I understand properly, Magento is expecting to connect to MySQL with mysql as the hostname instead of localhost. If so, this can be easily solved.
How?
In Docker there is a concept called service discovery; I've explained it in many of my answers. Basically, it resolves the IPs of containers by their hostnames/aliases, so instead of connecting between containers using IP addresses you can connect using their hostnames. Even if a container restarts (which changes its IP), Docker takes care of resolving the name to the respective container.
This works only with user-defined networks. So what you can do is create a bridge network and connect both Magento and MySQL to it, then either set container_name to mysql for the MySQL container or use an alias as mentioned here. Putting it all together, a sample docker-compose would be:
version: '3'
services:
  mariadb:
    image: mariadb:5.5
    container_name: mysql # This can also be used but I always prefer using aliases
    networks:
      test:
        aliases:
          - mysql # Any container connected to the test network can access this simply by using mysql as the hostname
  php-fpm:
    build: docker/php-fpm
    networks:
      test:
        aliases:
          - php-fpm
networks:
  test:
    external: true
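Because the test network is declared with external: true, Compose expects it to already exist; create it once before running docker-compose up:
docker network create test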
More references
1. Networking in Compose.
2. Docker Networking.
Yes, you can connect to your DB through localhost or 127.0.0.1, but this is only possible when you run the docker-compose network in host mode.
When you set your Docker network to host mode, however, the network isolation part of containerization is lost, so you have to choose between host and bridge network mode.
You can find this under networking in docker-compose:
network_mode: "host"
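A minimal sketch of what that looks like in a compose file, reusing the mariadb and php-fpm services from the question (note that with host networking, published ports are ignored and MariaDB becomes reachable at 127.0.0.1:3306 on the host):
mariadb:
  image: mariadb:5.5
  network_mode: "host"
php-fpm:
  build: docker/php-fpm
  network_mode: "host"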

Cannot use NGINX subdomain inside the Docker container

This question is kinda stupid; it's about using Docker's service names as hostnames, so here's the context:
I am running the following Docker containers: base, php-fpm and nginx. I also have a Laravel project located in the /api folder of the project root. I also run haproxy on port 5000 to load-balance requests over the php-fpm containers.
The base container contains the Linux environment from which I can run commands like phpunit and npm, and it literally has access to the other containers' files, which are shared via volumes in docker-compose.
The php-fpm container contains the environment for PHP to run.
The nginx container contains the NGINX server, which is configured to serve two websites: the root website (localhost) and the api subdomain (api.localhost). The api. subdomain points to the /api folder within the root project, and the root website (localhost) points to the /frontend folder within the root project.
The problem is that from within the base service container, I cannot run a curl command to reach the api.localhost website. I tried to use curl against nginx using its service name from docker-compose (which is nginx):
$ curl http://nginx
and it works perfectly, but it answers with code from the frontend folder. I have no idea how to use the service name to access api.localhost from within the container.
I have tried
$ curl http://api.nginx
$ curl http://api.localhost
Not even localhost answers to the curl command:
$ curl http://localhost
Is there any way I can access the subdomain from the NGINX container using the service name as the hostname?
I have found out that subdomains do not work well when using NGINX with a Docker service name as the hostname.
Instead, I had to change the structure of my project so that I don't use subdomains when accessing URLs via service names as hostnames.
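For completeness, an alternative that is sometimes used instead of restructuring (not part of the original setup): since Nginx selects the site by the Host header, you can keep the service name in the URL and set the header explicitly from the base container:
$ curl -H "Host: api.localhost" http://nginx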

How can I access a second Laravel app from another PC

On this link:
how can i access my laravel app from another pc?
It is perfectly described how to access a Laravel app from another PC on the same network.
Now, my question is:
How do I access another app served on the same PC?
I have a virtual machine serving two apps, app.dev and demo.dev.
Both are accessible inside the VM through an internet browser:
app.dev is accessible on http://localhost and http://app.dev
demo.dev is accessible only on http://demo.dev
Outside the VM, only app.dev is accessible, on the IP address 192.168.0.60.
I have used this command inside the VM:
sudo php artisan serve --host 192.168.0.60 --port 80
Should I use
sudo php artisan serve
again? But how? Can anybody help?
Laravel's artisan serve command uses the PHP built-in web server. Because that is not a full-featured web server, it has no concept of virtual hosts, so it can only run one instance of the server bound to a single IP and port pair.
Normally to serve two hosts from the same IP address you'd add in your VM's /etc/hosts file the following mappings:
192.168.0.60 app.dev
192.168.0.60 demo.dev
Now you can run app.dev by running:
php artisan serve --host app.dev --port 80
And it will be available on your host machine at http://app.dev. However, if you try to spin up a second server instance for demo.dev using this:
php artisan serve --host demo.dev --port 80
It won't work and will complain that:
Address already in use
You could get around that by using a different port for the demo.dev app, for example:
php artisan serve --host demo.dev --port 8080
And now you'd be able to access http://demo.dev:8080 for your second app on your host machine.
That being said, I suggest you install a full-featured web server such as Apache or nginx and then set up a virtual host for each application (just make sure to keep the mappings from the /etc/hosts file I showed above).
Setting up virtual hosts can be really easy for both server solutions. Below are links to two articles from the Laravel Recipes website that showcase how to do that specifically for Laravel:
Creating an Apache VirtualHost
Creating a Nginx VirtualHost
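As a rough idea of what such a virtual host looks like, here is a sketch of an nginx server block for a Laravel app (the project path and the PHP-FPM address are assumptions, not taken from the linked recipes):
server {
    listen 80;
    server_name demo.dev;
    root /var/www/demo/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;   # or a unix socket, depending on your PHP-FPM setup
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}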

Deploying with Docker into production: Zero downtime

I'm failing to see how it is possible to achieve zero-downtime deployments with Docker.
Let's say I have a PHP container running MyWebApp, served by an Nginx container on the same server. I then change some code; since Docker containers are immutable, I have to build and deploy the MyWebApp container again with the code changes. During the time it takes to do this, MyWebApp is down for the count...
Previously I would use Ansible or similar to deploy my code, then symlink the new release directory to the web dir... zero downtime!
Is it possible to achieve zero downtime deployments with Docker and a single server app?
You could do some kind of blue-green deployment with your containers, using nginx upstreams:
upstream containers {
    server 127.0.0.1:9990; # blue
    server 127.0.0.1:9991; # green
}

location ~ \.php$ {
    fastcgi_pass containers;
    ...
}
Then, when deploying your containers, you'll have to alternate between port mappings:
# assuming php-fpm runs on port 9000 inside the container
# current state: green container running, need to deploy blue
# get last app version
docker pull my_app
# remove previous container (was already stopped)
docker rm blue
# start new container
docker run -p 9990:9000 --name blue my_app
# at this point both containers are running and serve traffic
docker stop green
# nginx will detect failure on green and stop trying to send traffic to it
To deploy green, change color name and port mapping.
You might want to fiddle with upstream server entry parameters to make the switchover faster, or use haproxy in your stack and manually (or automatically via management socket) manage backends.
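For example, these upstream server parameters (a sketch; the exact values are a matter of taste) make nginx mark a stopped backend as failed after a single error and re-check it every few seconds:
upstream containers {
    server 127.0.0.1:9990 max_fails=1 fail_timeout=5s; # blue
    server 127.0.0.1:9991 max_fails=1 fail_timeout=5s; # green
}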
If things go wrong, just docker start the_previous_color and docker stop the_latest_color.
Since you use Ansible, you could use it to orchestrate this process, and even add smoke tests to the mix so a rollback is automatically triggered if something goes wrong.
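A smoke test can be as simple as checking that the new container stays up and that the site still answers through nginx before the old color is stopped. A sketch, assuming nginx listens on port 80 on the host and using the blue/green names from above:
# start the new color in the background
docker run -d -p 9990:9000 --name blue my_app
sleep 5
if [ "$(docker inspect -f '{{.State.Running}}' blue)" = "true" ] \
   && curl -fs http://127.0.0.1/ > /dev/null; then
    docker stop green                    # hand traffic over to blue
else
    docker stop blue && docker rm blue   # roll back; green keeps serving
fi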
