How to link N PHP containers with 1 nginx container - php

I'm moving my WordPress farm (10 installs) to a Docker architecture.
I want to have one nginx container and run 10 php-fpm containers (MySQL is on an external server).
The php containers are named php_domainname and also hold persistent storage.
I want to know how to do this:
a) How do I pass the domain name and container name to the vhost conf file?
b) When I start a php-fpm container:
1) add a vhost.conf file to nginx's conf folder
2) add a volume (persistent storage) to the nginx instance
3) restart the nginx instance
All the nginx+php Docker images I have found run both processes in one instance, but I think that running 10+1 nginx instances would overload the machine and defeat the advantages of Docker.
Thanks
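For question (a), one simple approach is to template the vhost files with plain shell substitution before (re)starting nginx. A minimal sketch, assuming the naming scheme php_domainname from the question; the domains, the conf.d output directory, and the webroot path are illustrative assumptions:

```shell
#!/bin/sh
# Generate one nginx vhost file per WordPress domain.
# The php-fpm container name (php_$DOMAIN) becomes the fastcgi backend,
# which works when nginx and the php containers share a Docker network.
mkdir -p conf.d
for DOMAIN in example.com example.org; do
  CONTAINER="php_${DOMAIN}"
  cat > "conf.d/${DOMAIN}.conf" <<EOF
server {
    listen 80;
    server_name ${DOMAIN};
    root /var/www/${DOMAIN};
    location ~ \.php\$ {
        fastcgi_pass ${CONTAINER}:9000;
        include fastcgi_params;
    }
}
EOF
done
```

Each generated file references its php container by name, so nginx only needs a reload (not a rebuild) when a site is added.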

No need to reinvent the wheel; this has already been solved by docker-proxy, which is also available on Docker Hub.

You can also use Consul (or similar) with service auto-discovery. This means:
you add a Consul server to your stack
you register all FPM servers as nodes
you register every FPM daemon as a service "fpm" in Consul
For your nginx vhost conf, let's say located at /etc/nginx/conf.d/mywpfarm.conf, you use consul-template (https://github.com/hashicorp/consul-template) to generate the config from a Go template in which you use:
upstream fpm {
{{range service "fpm"}}
    server {{.Address}}:{{.Port}};
{{end}}
}
In the location block where you forward .php requests to FPM, you now use the upstream above. This way nginx will load-balance across all available servers. If you shut down one FPM host, the config changes automatically and the fpm upstream gets adjusted (that's what consul-template is for: it watches for changes), so you can add new FPM services at any time and scale horizontally very easily.
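Wiring this up means pointing consul-template at the template, the rendered destination, and a reload command. A sketch of the invocation; the template path is an assumption for illustration:

```
consul-template \
  -template "/etc/nginx/templates/mywpfarm.ctmpl:/etc/nginx/conf.d/mywpfarm.conf:nginx -s reload"
```

consul-template then keeps watching Consul, rewrites the file whenever the "fpm" service list changes, and reloads nginx.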

Related

How does a php-fpm container communicate with nginx on the host?

I have installed nginx, PHP, and php-fpm on a server and my website works fine. I am trying to containerize only the PHP files, so that nginx stops communicating with the PHP on my host and instead connects to a php-fpm container, with the website continuing to work. I need them to communicate via TCP.
I am using php:7.1-fpm as base image and copying all php files in Dockerfile.
My question is:
What should the "listen" value be in the php-fpm pool configuration for the container?
If both were on the same server, the listen value would be 127.0.0.1:9000, but that's not the case here.
I know that the "listen" value in the php-fpm pool configuration and "fastcgi_pass" in the nginx configuration should match.
Here nginx is on the host and php-fpm is in a container. I tried using X.X.X.X:9000 (where X.X.X.X is the IP of the host), but I am getting errors like:
ERROR: failed to post process the configuration
ERROR: FPM initialization failed
Can anyone help me?
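A sketch of a setup that typically works here, with the key point being that inside the container php-fpm listens on a port (not a host IP), the port is published to the host, and nginx on the host targets the published port. Paths and the port number are assumptions for illustration:

```
; inside the container: /usr/local/etc/php-fpm.d/www.conf
; listen on a bare port so the published port is reachable
; from outside the container
listen = 9000

# on the host: run the container, publishing the port
# to the loopback interface only
docker run -d -p 127.0.0.1:9000:9000 my-php-fpm-image

# on the host: nginx vhost
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```

Setting listen to the host's IP inside the container fails because that address does not exist inside the container's network namespace, which is one common cause of "FPM initialization failed".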

Docker: communication between web container and php container

I'm trying to dockerize a project that runs with PHP + the Apache HTTP server. I learned that I need one container for the Apache HTTP server and another container for the PHP script. I have searched a lot but still don't understand how that works. What I know now is that I should use Docker networking; as long as the containers are on the same network, they should be able to communicate with each other.
The closest info I got is this, but it uses nginx:
https://www.codementor.io/patrickfohjnr/developing-laravel-applications-with-docker-4pwiwqmh4
quote from original article:
vhost.conf
The vhost.conf file contains standard Nginx configuration that will handle http
requests and proxy traffic to our app container on port 9000. Remember from
earlier, we named our container app in the Docker Compose file and linked it to the web container; here, we can just reference that container by its name and Docker will route traffic to that app container.
My question is: what configuration should I do to make the communication between the PHP container and the web container happen using the Apache HTTP server, like above? What is the rationale behind this? I'm really confused; any information will be much appreciated.
The example that you linked to utilizes two containers:
a container that runs nginx
a container that runs php-fpm
The two containers are then able to connect to each other due to the links directive in the web service in the article's example docker-compose.yml. With this, the two containers can resolve the name web and app to the corresponding docker container. This means that the nginx service in the web container is able to forward any requests it receives to the php-fpm container by simply forwarding to app:9000 which is <hostname>:<port>.
If you are looking to stay with PHP + Apache there is a core container php:7-apache that will do what you're looking for within a single container. Assuming the following project structure
/ Project root
- /www/ Your PHP files
You can generate a docker-compose.yml as follows within your project root directory:
web:
  image: php:7-apache
  ports:
    - "8080:80"
  volumes:
    - ./www/:/var/www/html
Then, from your project root, run docker-compose up and you will be able to visit your app at localhost:8080.
The above docker-compose.yml will mount the www directory in your project as a volume at /var/www/html within the container which is where Apache will serve files from.
The configuration in this case is Docker Compose. They are using Docker Compose to facilitate the DNS changes in the containers that allow them to resolve names like app to IP addresses. In the example you linked, the web service links to the app service. The name app can now be resolved via DNS to one of the app service containers.
In the article, the web service nginx configuration they use has a host and port pair of app:9000. The app service is listening inside the container on port 9000 and nginx will resolve app to one of the IP addresses for the app service containers.
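Put together, the two-service setup the article describes looks roughly like this. A sketch in the same single-file Compose style as the example above; image tags and mount paths are assumptions for illustration:

```yaml
# docker-compose.yml - minimal nginx + php-fpm pairing
web:
  image: nginx
  ports:
    - "8080:80"
  links:
    - app                 # lets nginx resolve the hostname "app"
  volumes:
    - ./vhost.conf:/etc/nginx/conf.d/default.conf
app:
  image: php:7.1-fpm
  volumes:
    - ./www:/var/www/html
```

With this, the vhost.conf directive fastcgi_pass app:9000; resolves because of the links entry.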
The equivalent of this in just Docker commands would be something like:
App container:
docker run --name app -v "$(pwd)":/var/www appimage
Web container:
docker run --name web --link app:app -v "$(pwd)":/var/www webimage
(Note that docker run -v requires an absolute host path, hence "$(pwd)" rather than ./.)

PHP-FPM sockets temporarily unavailable in NGINX under high load, even when nginx hits a static file

I set up a Docker container (Alpine) with the following configuration:
Nginx
PHP7
PHPFPM
Wordpress with WP-Super-Cache
Nginx was configured (or so I believe) to serve the static html pages generated by wp-super-cache.
Most connections in the docker container are done through unix sockets (mysql db in wp, phpfpm in nginx).
Problem:
The initial and subsequent requests to the site are really fast, but when I stress-test the server I get strange php-fpm errors:
*144 connect() to unix:/var/run/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.0.102, server: www.local.dev, request: "GET /hello-world/ HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "www.local.dev"
My question is why php-fpm is used at all if nginx takes care of serving those files under high-stress situations, and, even if php-fpm is used, why the unix socket fails.
And of course any tips for solving this?
I discovered that if I let the stress-testing tool run for a long time, php-fpm creates new processes to handle the load, but I'm looking to deploy on an AWS EC2 t2.micro instance and I don't think it can support all the processes that it spawns on my 8-core machine.
Configuration:
Nginx:
https://gist.github.com/taosx/c1ffc7294b5ca64d11a6607d36d5b49e
I have tried switching from the php-fpm unix socket to TCP/IP (127.0.0.1:9000), but I still get the same error and the initial request gets slower by 20%.
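Regarding the process-count concern above: php-fpm's process-manager settings cap how many workers it will spawn, which is the usual way to keep a small instance from being overwhelmed. A sketch of the relevant pool settings; the path and values are guesses sized for a t2.micro, not tested numbers:

```
; /etc/php7/php-fpm.d/www.conf - illustrative values only
pm = dynamic
pm.max_children = 5        ; hard cap on workers (bounds memory use)
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

Note that a low pm.max_children together with a socket backlog that fills up is also a classic source of the "Resource temporarily unavailable" error, so cap it with the instance's memory in mind.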
I solved my problem.
I had the wrong path for my wp-super-cache generated html files.
Instead of /wp-content/cache/supercache/$http_host/$cache_uri/index.html I had /wp-content/cache/$http_host/$cache_uri/index.html.
Note the missing supercache subfolder.
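For reference, the corrected path sits in an nginx try_files chain roughly like the following sketch; $cache_uri is assumed to be set earlier in the config (the full configuration is in the gist above):

```
# Serve the WP Super Cache page if it exists; fall back to PHP otherwise.
location / {
    try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
              $uri $uri/ /index.php;
}
```

With the supercache segment missing, try_files never matched a cached file, so every request fell through to php-fpm, which is why the socket saturated under load.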

I lost my php-fpm.sock file from /var/run/php-fpm/

I installed PHP 7 on a Red Hat Linux server, but apparently, due to running a few commands on the server to configure PHP, I have lost the php-fpm.sock file.
Could anyone please assist me with contents of the file?
Yes, that file should be auto-generated; do not create the file manually! Ensure that the service is running: service php-fpm start. If it still fails, check the permissions. Look at /etc/php-fpm.d/www.conf: this is your main php-fpm config file. Make sure user, group, listen.owner, and listen.group are set to either your nginx or apache user, depending on which web server you use. Also note that listen points to the actual socket file.
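The settings mentioned above look roughly like this. A sketch of the relevant lines of /etc/php-fpm.d/www.conf, assuming nginx; substitute apache if that is your web server:

```
; /etc/php-fpm.d/www.conf - relevant lines only
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
user = nginx
group = nginx
```

Once these are set, restarting the service recreates the socket file at the listen path.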

Deploying with Docker into production: Zero downtime

I'm failing to see how it is possible to achieve zero-downtime deployments with Docker.
Let's say I have a PHP container running MyWebApp, served by an nginx container on the same server. I then change some code; since Docker containers are immutable, I have to build/deploy the MyWebApp container again with the code changes. During the time it takes to do this, MyWebApp is down for the count...
Previously I would use Ansible or similar to deploy my code, then symlink the new release directory to the web dir... zero downtime!
Is it possible to achieve zero downtime deployments with Docker and a single server app?
You could do a kind of blue-green deployment with your containers, using nginx upstreams:
upstream containers {
server 127.0.0.1:9990; # blue
server 127.0.0.1:9991; # green
}
location ~ \.php$ {
fastcgi_pass containers;
...
}
Then, when deploying your containers, you'll have to alternate between port mappings:
# assuming php-fpm runs on port 9000 inside the container
# current state: green container running, need to deploy blue
# get last app version
docker pull my_app
# remove previous container (was already stopped)
docker rm blue
# start new container
docker run -p 9990:9000 --name blue my_app
# at this point both containers are running and serve traffic
docker stop green
# nginx will detect failure on green and stop trying to send traffic to it
To deploy green, change color name and port mapping.
You might want to fiddle with the upstream server entry parameters to make the switchover faster, or use HAProxy in your stack and manage backends manually (or automatically via the management socket).
If things go wrong, just docker start the_previous_color and docker stop the_latest_color.
Since you use Ansible, you could use it to orchestrate this process, and even add smoke tests to the mix so a rollback is automatically triggered if something goes wrong.
