How does a php-fpm container communicate with nginx on the host?

I have installed nginx, php, and php-fpm on a server and my website is working fine. I am trying to containerise only the php side: nginx should stop talking to php-fpm on the host and instead connect to a php-fpm container, with the website continuing to work as before. I need them to communicate via TCP.
I am using php:7.1-fpm as base image and copying all php files in Dockerfile.
My question is:
What should the "listen" value be in the php-fpm pool configuration for the container?
If both were on the same server, the listen value would be 127.0.0.1:9000, but that's not the case here.
I know that the "listen" value in the php-fpm pool configuration and the "fastcgi_pass" in the nginx configuration should match.
Here nginx is on the host and php-fpm is in a container. I tried using X.X.X.X:9000 (X.X.X.X is the IP of the host), but I am getting errors like
ERROR: failed to post process the configuration
ERROR: FPM initialization failed
Can anyone help me?
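For context, this is roughly the wiring I'm aiming for (a sketch; the listen value, the published port, and the my-php-app image name are my guesses):

; pool config inside the container (e.g. /usr/local/etc/php-fpm.d/zz-docker.conf)
; "listen = 9000" binds all interfaces (0.0.0.0:9000) inside the container
listen = 9000

# start the container with port 9000 published to the host
docker run -d -p 9000:9000 my-php-app

# nginx on the host then points at the published port
fastcgi_pass 127.0.0.1:9000;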

PHP-FPM sockets temporarily unavailable in NGINX under high load, even when nginx serves static files

I set up a docker container (alpine) with the following configuration:
Nginx
PHP7
PHP-FPM
Wordpress with WP-Super-Cache
Nginx was configured (or so I believe) to serve the static html pages generated by wp-super-cache.
Most connections in the docker container are done through unix sockets (mysql db in wp, phpfpm in nginx).
Problem:
The initial and subsequent requests to the site are really fast, but when I stress-test the server I get strange php-fpm errors:
*144 connect() to unix:/var/run/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.0.102, server: www.local.dev, request: "GET /hello-world/ HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "www.local.dev"
My question is: why is php-fpm being hit at all if nginx is supposed to serve those files under high-stress situations, and even if php-fpm is used, why does the unix socket fail?
And of course any tips for solving this?
I discovered that if I let the stress-testing tool run for a long time, php-fpm creates new processes to handle the load, but I'm looking to deploy on an AWS EC2 t2.micro instance and I don't think it can support all the processes that php-fpm spawns on my 8-core machine.
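If it helps, my understanding is that the pool's pm directives are what cap this; something like the following (the values are guesses for a small instance):

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3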
Configuration:
Nginx:
https://gist.github.com/taosx/c1ffc7294b5ca64d11a6607d36d5b49e
I have tried switching the php-fpm unix socket to TCP/IP (127.0.0.1:9000), but I still get the same error and the initial requests get about 20% slower.
I solved my problem.
I had the wrong path for my wp-super-cache generated html files.
Instead of /wp-content/cache/supercache/$http_host/$cache_uri/index.html I had /wp-content/cache/$http_host/$cache_uri/index.html.
Note the missing supercache subfolder.
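For anyone else hitting this, the relevant nginx bit ends up looking roughly like this (a sketch based on the common wp-super-cache examples; $cache_uri is built elsewhere in the config, see the gist above):

location / {
    try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ /index.php;
}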

I lost my php-fpm.sock file from /var/run/php-fpm/

I installed PHP 7 on a Red Hat Linux server, but apparently, due to running a few commands on the server to configure PHP, I have lost the php-fpm.sock file.
Could anyone please assist me with the contents of the file?
Yes, that file should be auto-generated; do not create it manually! Ensure that the service is running: service php-fpm start
If it still fails, check the permissions. Look at /etc/php-fpm.d/www.conf, which is your main php-fpm pool configuration file. Make sure user, group, listen.owner, and listen.group are set to either your nginx or apache user, depending on which web server you use. Also note that listen points to the actual socket file.
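For reference, a minimal sketch of the directives in question, assuming nginx as the web server and the Red Hat default socket path:

; /etc/php-fpm.d/www.conf
user = nginx
group = nginx
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660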

How to link N php containers with 1 nginx container

I'm moving my wordpress farm (10 installs) to a docker architecture.
I want to have one nginx container and run 10 php-fpm containers (mysql is on an external server).
The php containers are named php_domainname and also contain persistent storage.
I want to know how to do this:
a) How to pass the domain name and container name to the vhost conf file?
b) When I start a php-fpm container:
1) add a vhost.conf file into the nginx confs folder
2) add the volume (persistent storage) to the nginx instance
3) restart the nginx instance
All the nginx-php docker images that I found run both processes per instance, but I think that having 10+1 nginx instances would overload the machine and defeat the advantages of docker.
Thanks
No need to reinvent the wheel; this has already been solved by docker-proxy, which is also available on Docker Hub.
You can also use Consul or the like with service auto-discovery. This means:
you add a consul server to your stack
you register all FPM servers as nodes
you register every FPM daemon as a service "fpm" in consul (a sketch of such a service definition follows this list)
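A minimal sketch of such a Consul service definition (the port is an assumption; register one of these per FPM node):

{
  "service": {
    "name": "fpm",
    "port": 9000
  }
}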
For your nginx vhost conf, let's say located at /etc/nginx/conf.d/mywpfarm.conf, you use consul-template (https://github.com/hashicorp/consul-template) to generate the config from a Go template in which you use:
upstream fpm {
{{range service "fpm"}}
    server {{.Address}}:{{.Port}}; # {{.Name}}
{{end}}
}
In the location block where you forward .php requests to the FPM upstream, you now use the upstream above. This way nginx will load-balance across all available servers. If you shut down one FPM host, the config changes automatically and the FPM upstream gets adjusted (that's what consul-template is for: it watches for changes), so you can add new FPM services at any time and scale horizontally very easily.
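A minimal sketch of that location block (the fastcgi_param line is the usual default; adjust paths to your layout):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass fpm;
}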

nginx - php-fpm cluster

I have three php-fpm servers and one nginx server, and I want to load-balance php-fpm using the nginx server.
php-fpm server1 - 192.168.10.31
php-fpm server2 - 192.168.10.32
php-fpm server3 - 192.168.10.33
nginx - server - 192.168.10.12
My configuration on the nginx server was:
upstream php_backend {
    server 192.168.10.31:9000;
    server 192.168.10.32:9000;
    server 192.168.10.33:9000;
}
location ~ \.php$ {
    fastcgi_pass php_backend;
}
But my problem is: where should I define the webroot [ root /path/to/webfiles ]?
Because on the nginx server [ 192.168.10.12 ], the access log says file not found - 404. Where should I keep the website's php files? On the nginx server, on the php servers, or on both?
This is kind of an old question, but I'll give my answer here for anyone googling this.
Robbie Averill's comment is correct: you should host your files on both the Nginx and the PHP servers. You can do this with an NFS share, but that might slow things down.
To work around this, you could update your code on the nginx server and then rsync it to the php servers.
You could easily build a bash script that does something like:
rsync -avzp -e ssh /srv/www/ svc_internal@php.insypro.com:/srv/www/
rsync -avzp -e ssh /srv/www/ svc_internal@php2.insypro.com:/srv/www/
rsync -avzp -e ssh /srv/www/ svc_internal@php3.insypro.com:/srv/www/
Of course, you'd want to include this in one bash script that does the updating of your code, and synchronises the php machines.
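A sketch of such a script, using the hosts from above (the code-update step is left as a placeholder):

#!/usr/bin/env bash
set -euo pipefail

SRC=/srv/www/

# update your code here first (git pull, composer install, ...)

# push the updated tree to every php server
for host in php.insypro.com php2.insypro.com php3.insypro.com; do
    rsync -avzp -e ssh "$SRC" "svc_internal@${host}:/srv/www/"
done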

How can I avoid getting a 502 Gateway Error while restarting php-fpm?

When restarting the php-fpm service on my Linux system, the PHP CGI processes take a while to shut down completely. Until they do, trying to start a new PHP CGI instance fails because port 9000 is still held by the terminating process. Accessing the site during this time results in a 502 Gateway Error, which I'd like to avoid.
How can I restart php-fpm smoothly without getting this error?
Run two instances of php-fpm and describe them in one upstream section.
upstream fast_cgi {
    server localhost:9000;
    server localhost:9001 backup;
}
Change nginx.conf to use fastcgi_pass fast_cgi;.
After that, if you restart one instance, nginx will send requests through the second php-fpm instance.
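A sketch of how the two instances could be run (the config file paths are assumptions; each config's pool listens on its own port):

# first master, its pool listens on 127.0.0.1:9000
php-fpm --fpm-config /etc/php-fpm-9000.conf

# second master, its pool listens on 127.0.0.1:9001
php-fpm --fpm-config /etc/php-fpm-9001.conf

# restart them one at a time so the backup keeps serving requests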
