I'm struggling to understand where my error is. I've looked at various answers and tried the remedies, only to find that their solutions did not rectify my problem. I've stripped everything down to the VERY basics to see if I can just get a basic PHP index.php to present itself.
Here is what I'm trying to accomplish at the core:
I have docker-compose standing up one network and two services attached to it: one running PHP-FPM, and the other running nginx in front of it. Every time I stand this up, no matter how I configure it, I get a 502 Bad Gateway, and the nginx container logs show [error] 29#29: *1 connect() failed (113: Host is unreachable) while connecting to upstream.
./docker-compose.yml
version: "3.7"
networks:
app:
driver: bridge
services:
php:
image: php:7.4-fpm
container_name: php
volumes:
- /home/admin/dev/test/php/www.conf:/usr/local/etc/php-fpm.d/www.conf
- /home/admin/dev/test/src/:/var/www/html
networks:
- app
nginx:
image: nginx:alpine
container_name: nginx
depends_on:
- php
ports:
- "80:80"
- "443:443"
volumes:
- /home/admin/dev/test/src/:/usr/share/nginx/html
- /home/admin/dev/test/nginx/conf.d/app.conf:/etc/nginx/conf.d/app.conf
networks:
- app
./php/www.conf -> /usr/local/etc/php-fpm.d/www.conf
[www]
user = www-data
group = www-data
listen = 0.0.0.0:9000
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
./nginx/conf.d/app.conf -> /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
./src/index.php -> mounted at /var/www/html (php) and /usr/share/nginx/html (nginx), just for reference
<?php
phpinfo();
Docker: Docker version 19.03.12, build 48a66213fe
Docker-compose: docker-compose version 1.25.4, build unknown
Environment: Linux localhost.localdomain 5.7.14-200.fc32.x86_64 #1 SMP Fri Aug 7 23:16:37 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux (Fedora 32 Workstation)
I believe I just have a major misunderstanding of PHP-FPM, but maybe there is something else.
Update During Troubleshooting
The thought occurred that my overall environment (i.e. Fedora 32) was messing things up. Fedora 32 is not supported out of the box by Docker (I had to point the repo settings in /etc/yum.repos.d at Fedora 31's repo to get it to install). So I decided to spin up an Ubuntu 20.04 VM and test there. Now PHP-FPM and nginx are talking; I get responses from the PHP-FPM container! However, even with just the basic script, I'm now getting 404 errors. Still, that is MUCH closer to where I need to be... now to fix the 404.
The exact error is: [error] 30#30: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
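For what it's worth, "Primary script unknown" usually means the SCRIPT_FILENAME that nginx sends does not exist inside the PHP-FPM container. In the setup above, nginx's root is /usr/share/nginx/html while the php container has the code at /var/www/html, so $document_root produces a path PHP-FPM cannot see. A minimal sketch of one fix, assuming the code stays mounted at /var/www/html in the php container:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    # send the path as PHP-FPM sees it, not nginx's own $document_root
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    include fastcgi_params;
}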
Final Update (Answer)
For anyone coming across this: as of this writing, Docker did not work on a fresh Fedora 32 install (some parts did), at least not within the time I had available to troubleshoot and patch. It was a fresh Fedora 32 with no previous docker/docker-compose or anything else installed.
I stood up a fresh Fedora 31 and a fresh Ubuntu 20.04 just to verify my "conclusion". Both worked right out of the box with no extra tweaks.
Can you check whether your php-fpm service is running?
The issue could be that php-fpm is not running, and hence nginx cannot connect to it.
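For example, with the service and container names used above:

docker-compose ps    # is the php container Up?
docker logs php      # php-fpm should report "ready to handle connections"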
Related
I'm trying to create a simple Docker project and connect the PHP and nginx containers for a test, but I got this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
- docker
-- nginx
--- default.conf
--- Dockerfile
-- php
--- Dockerfile
- src
-- index.php
- docker-compose.yml
and these are all the files I use, with their contents:
# docker/nginx/default.conf
server {
    listen 80;
    index index.php index.htm index.html;
    root /var/www/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"
services:
nginx:
container_name: nginx
build: ./docker/nginx
command: nginx -g "daemon off;"
links:
- php
ports:
- "80:80"
volumes:
- ./src:/var/www/html
php:
container_name: php
build: ./docker/php
ports:
- "9000:9000"
volumes:
- ./src:/var/www/html
working_dir: /var/www/html
The main problem occurs when I try to connect the PHP container to the project; without PHP, nginx works correctly.
You can try adding depends_on: php to your nginx service, to at least make sure the nginx service doesn't start until the php service is running. The dependency is probably starting after the main service that requires it; this looks like a race condition problem to me.
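A minimal sketch of that suggestion:

services:
  nginx:
    depends_on:
      - php

Note, though, that the logs above also show php exited with code 0: php:latest runs the interactive CLI, which quits immediately when no terminal is attached, so there is no running php container for nginx to resolve. Basing docker/php/Dockerfile on php:fpm instead would keep a long-running FPM process listening on port 9000.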
I had 3 nodes, where nginx and php containers lived on different nodes.
After trying various methods, such as:
defining a dedicated network for the services inside docker-compose
using an upstream definition in the nginx config instead of the service name directly
explicitly adding Docker's 127.0.0.11 resolver to nginx
none of them worked...
The actual reason was closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened the ports for inter-node communication, service discovery began to work as expected.
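For example, on a firewalld-based host (an assumption; use whatever firewall tool your nodes actually run):

firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload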
Docker 20.10
Several issues:
It seems you have containers that can't see each other.
It seems containers exit/fail, and that is certainly not because of the first issue; nginx still starts when the php-fpm socket is unavailable and handles that unavailability very well.
Make sure the FPM pool configuration is really opening a socket on port 9000.
Your index.php script is not closed with "?>" [but that does not matter here].
To summarize, you were suggested:
to consider Docker Swarm networking configuration [but it seems you are not using Docker Swarm]
to use depends_on, which helps Docker decide what to start first; but that should not be an issue in your case, since nginx can wait and only uses the socket upon web user requests.
So it seems internal Docker name resolution is your issue, and defining the network manually appears to be best practice. In my case I wandered too long before simply giving the docker-compose file a specific network name and attaching the containers to that network.
If containers are in the same docker-compose file, they should be in the same yourserver_default network that is autogenerated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, they actually define that network manually.
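For reference, a minimal sketch of what defining the network explicitly looks like (service names are illustrative):

networks:
  app:
    driver: bridge
services:
  nginx:
    networks:
      - app
  php:
    networks:
      - app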
And if you haven't solved this yet, consider redoing everything from scratch. Otherwise, all the best to you!
I have the following docker-compose.yaml file for local development that works without issue:
Nginx container just runs the webserver with an upstream pointing to php
Php runs just php-fpm + my extensions
I have an external docker-sync volume containing my code base, which is shared with both nginx and php.
The entire content of my application is pure PHP returning JSON API data. No static assets get served.
version: '3.9'
networks:
  backend:
    driver: bridge
services:
  site:
    container_name: nginx
    depends_on: [php]
    image: my-nginx:latest
    networks: [backend]
    ports: ['8080:80', '8081:443']
    restart: always
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html
  php:
    container_name: php
    image: my-php-fpm:latest
    networks: [backend]
    ports: ['9000:9000']
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html
volumes:
  code:
    external: true
I'm playing around with ways to deploy this in my production infrastructure and am liking AWS ECS for it. I can create a single task definition, that launches a single service with both containers defined (and both sharing a code volume that I add in during my build process) and the application works.
This solution seems odd to me because now the only way my application can scale out is by giving me a {php + nginx} container set each time. My PHP needs are going to scale faster than my nginx ones, so this strikes me as a bit wasteful.
I've tried experimenting with the following setup:
1 ECS service for just nginx
1 different ECS service for just php
Both are load balanced, but by virtue of using Fargate and them being on different services, I don't have a way to add a volumesFrom block on the nginx container that would give it access to my code (which I package on the PHP container during my build process). There is no reference to the PHP docker container that I can make that allows this to happen.
My configuration "works" in that the load balanced Nginx service can now scale independent of the load balanced PHP service. They're able to both talk to each other. But Nginx not having my code means it can't help but return a 404 on anything that I want my php upstream to handle.
server {
    listen 80;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location /health {
        access_log off;
        return 200 'PASS';
        add_header Content-Type text/plain;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app-upstream;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        proxy_http_version 1.1;
        proxy_set_header "Connection" "";
    }
}
Is there any nginx configuration I can write that would make this setup (without nginx having access to my code) work?
It feels like my only options are either copying the same code onto both containers (which feels weird), combining them both into the same container (which violates the 1 service/1 container rule), or accepting that I can't scale them as independently as I would like (which is not the end of the world).
I've never seen a setup where Nginx and PHP were running in separate ECS tasks that scaled independently. I've always seen it where they are both running in the same ECS task, with a shared folder.
I wouldn't worry too much about this being "wasteful". You're adding a tiny amount of CPU usage to each Fargate ECS task by adding Nginx to each task. I would focus more on the fact that you are keeping latency as low as possible by running them both in the same task, so Nginx can pass requests to PHP over localhost.
It's not required to share the volume between those two containers. The PHP scripts are needed only by the PHP container; nginx only needs network access to the PHP container so that it can proxy the requests.
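A sketch of what that can look like for the config above: with no try_files check (which needs local files), nginx never opens the script itself, and SCRIPT_FILENAME is hard-coded to the path inside the PHP container (the /var/www/html/public path is an assumption based on the root above):

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass app-upstream;
    fastcgi_index index.php;
    include fastcgi_params;
    # the path as it exists inside the PHP container; nginx never reads this file
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
}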
To run your application on AWS ECS, you need to pack nginx + PHP into the same container, so the load balancer proxies the request to the container, nginx accepts the connection and proxies it to PHP, and the response is returned.
Using one nginx container as a proxy to multiple PHP containers is not possible with Fargate; it would require running the containers on the same network and somehow making the nginx container proxy and balance the incoming connections. Besides that, whenever a new PHP container was deployed, it would have to be registered with nginx before it could start receiving connections.
I had the same struggle for a long time until I moved all my PHP apps to NGINX Unit.
https://unit.nginx.org/howto/cakephp/
This is an example of how easy it is to have a single-container setup that handles static files (HTML, CSS, JS) as well as all the PHP code. To learn more about Unit in Docker, check out https://unit.nginx.org/howto/docker/
Let me know if you have any issues with the tutorials.
I am attempting to get nginx-proxy to work with the php-fpm variant of the official php image via fastcgi. Unfortunately, I seem to be unable to do so. I'm sure the problem is just something simple that I don't know about.
I have followed the instructions for nginx-proxy to the best of my ability and have boiled it down to a very simple way to re-create the issue. Here's my docker-compose.yml file:
version: "3"
services:
proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- DEFAULT_HOST=test.local
fpm:
image: php:fpm
environment:
- VIRTUAL_HOST=test.local
- VIRTUAL_PROTO=fastcgi
I then drop in a simple index.php file by running:
docker container exec -it web_fpm_1 /bin/bash -c 'echo "<?php phpinfo(); ?>" > /var/www/html/index.php'
(It puts web_ in front because this project is in a directory named web/.)
I also modify my hosts file to point test.local to 127.0.0.1, so I can test it.
However, every attempt to browse to test.local results in a blank white page.
The logs for the web_proxy_1 container don't indicate anything out of the ordinary, as far as I know:
❯ docker container logs web_proxy_1
WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2020/07/20 19:24:54 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
dockergen.1 | 2020/07/20 19:24:54 Watching docker events
dockergen.1 | 2020/07/20 19:24:54 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1 | test.local 172.18.0.1 - - [20/Jul/2020:19:25:12 +0000] "GET / HTTP/1.1" 200 5 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
nginx.1 | test.local 172.18.0.1 - - [20/Jul/2020:19:25:13 +0000] "GET /favicon.ico HTTP/1.1" 200 5 "http://test.local/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
The logs for the web_fpm_1 container show that nothing gets sent except a 200 response:
❯ docker container logs web_fpm_1
[20-Jul-2020 19:24:54] NOTICE: fpm is running, pid 1
[20-Jul-2020 19:24:54] NOTICE: ready to handle connections
172.18.0.3 - 20/Jul/2020:19:25:12 +0000 "- " 200
172.18.0.3 - 20/Jul/2020:19:25:13 +0000 "- " 200
What am I doing wrong?
Incidentally, I have asked this question on the nginx-proxy repo, the nginx-proxy Google Group, and the php repo. I either get no response or they pass the buck.
The default generated config of nginx-proxy does not fully work.
I think something is messed up with the VIRTUAL_ROOT environment variable, because the root of the problem is PHP receiving the wrong path via SCRIPT_FILENAME (that's why you see no PHP output), and there is no try_files with a =404 fallback (that's why you get a 200 for everything).
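A sketch of the two missing pieces (the fpm:9000 endpoint and document root here are assumptions):

location ~ \.php$ {
    include fastcgi_params;
    # without this, PHP-FPM receives no usable script path and returns an empty page
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    fastcgi_pass fpm:9000;
}

A try_files $uri =404; guard would additionally turn the blanket 200s into proper 404s, but it requires nginx to be able to see the files (e.g. via a shared volume).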
I have prepared a working docker-compose setup on GitHub to demonstrate that it works once a correct SCRIPT_FILENAME is present in the nginx config.
I changed test.local to test.localhost.
I think that to get it working properly, you would have to use a custom nginx template for nginx-proxy, so that the generated default.conf works with PHP-FPM and includes the missing fastcgi param.
Another, different approach would be to pack PHP and a manually configured webserver (nginx) into one project, and keep the automated nginx reverse proxy as a standalone project.
This costs you one additional running process but gives you more control and easier deployment.
Alternatively, you might want to have a look at Traefik, which does essentially the same thing as nginx-proxy.
Daniel's answer is definitely on the right track. I use the php-fpm image with nginx as my main stack for php sites. Having said that, I don't use the nginx-proxy docker image. Instead, I use plain nginx on the host machine, and configure ports to point to backend php-fpm docker images.
I'm not using docker-compose either. Since it's just docker containers running single sites, I don't need it. Here's an example docker run command:
docker rm -f www.example.com || true
docker run -itd -p 9001:9000 -P \
--name www.example.com \
--volume /var/www/html/www.example.com:/var/www/html/www.example.com \
--link mariadb:database.example.com \
--restart="always" \
--hostname="example.com" \
--log-opt max-size=2m \
--log-opt max-file=5 \
mck7/php-fpm:7.4.x-wordpress
And here is an example nginx config:
server {
    server_name example.com www.example.com;

    location ~ /.well-known {
        allow all;
    }

    location ~ /\.ht {
        deny all;
    }

    root /var/www/html/www.example.com/src;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index index.php;
    }
}
A few things about this setup are key. The first is the port re-mapping for the docker container: in this example I map host port 9001 to container port 9000. The other "gotcha" is that the root for the container must be an actual location on the host. I have no idea why this is the case, but for whatever reason the path docker thinks it's using has to actually be the path on the host as well. (Most likely, nginx on the host sends SCRIPT_FILENAME as a host path, and PHP-FPM inside the container resolves that same string against its own filesystem, which is why the --volume flag above maps the host path to the identical container path.)
Using this project/Docker setup:
https://gitlab.com/martinpham/symfony-5-docker
When I do docker-compose up -d I have to wait about 2-3 minutes to actually get it working.
Before it loads, it gives me "502 Bad Gateway" and logs error:
2020/05/10 09:22:23 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.28.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.28.0.3:9000", host: "localhost"
Why is nginx or php-fpm or something else loading so slowly?
It's my first time using nginx and Symfony. Is this normal? I expect it to load in 1-2 seconds at most, not 2-3 minutes.
Yes, I have seen similar issues, but no solutions that were appropriate for me.
Supposedly some nginx/php-fpm/docker-compose configuration should be changed; I tried, but had no luck.
I modified nginx/sites/default.conf a little bit (just added xdebug stuff):
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 4 256k;
        fastcgi_buffer_size 128k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #!!!!fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
        fastcgi_param PHP_VALUE "xdebug.remote_autostart=1
            xdebug.idekey=PHPSTORM
            xdebug.remote_enable=1
            xdebug.remote_port=9001
            xdebug.remote_host=192.168.0.12";
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx/conf.d/default.conf:
upstream php-upstream {
    server php-fpm:9000;
}
docker-compose.yml:
version: '3'
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - TIMEZONE=Europe/Tallinn
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ../src:/var/www
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
EDIT:
I think I now know why your project is taking ages to start. I had a closer look at the Dockerfile in the php-fpm folder, and you have this command:
CMD composer install ; wait-for-it database:3306 -- bin/console doctrine:migrations:migrate ; php-fpm
As you can see, that command installs all composer dependencies and then waits until it can connect to the database container defined in the docker-compose.yml configuration:
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
Once the database is up and running, it will run the migration files in src/src/Migrations to update the database and then start php-fpm.
Until all of this is done, your project won't be ready and you will get the nice '502 Bad Gateway' error.
You can verify this and see what's happening by running docker-compose up, omitting the -d argument this time so that you don't run in detached mode; this will display all your container logs in real time.
You will see a bunch of logs, including the ones related to what composer is doing in the background, ex:
api-app | - Installing ocramius/package-versions (1.8.0): Downloading (100%)
api-app | - Installing symfony/flex (v1.6.3): Downloading (100%)
api-app |
api-app | Prefetching 141 packages
api-app | - Downloading (100%)
api-app |
api-app | - Installing symfony/polyfill-php73 (v1.16.0): Loading from cache
api-app | - Installing symfony/polyfill-mbstring (v1.16.0): Loading from cache
Composer install might take more or less time depending on whether you have all the repositories cached or not.
The solution here, if you want to speed things up during development, would be to remove the composer install command from the Dockerfile and run it manually only when you want to update or install new dependencies. This way, you avoid composer install being run every time you run docker-compose up -d.
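For example, the trimmed command might look like this (a sketch based on the CMD quoted above):

CMD wait-for-it database:3306 -- bin/console doctrine:migrations:migrate ; php-fpm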
To do it manually, you would connect to your container and run composer install there, or, if you have composer installed directly in your OS, simply navigate to the src folder and run the same command.
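Using the php-fpm service name from the compose file above, that would be:

docker-compose exec php-fpm composer install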
This, plus the trick below, should help you get a nice and fast enough project locally.
I have a similar configuration and everything works well. The docker-compose command should take some time the first time you run it, as your images need to be built, but after that it shouldn't even take a second.
From what I see, however, you have a lot of mounted volumes that could affect your performance. When I ran tests with nginx and Symfony on a Mac, I had really bad performance at the beginning, with pages taking at least 30 seconds to load.
One solution to speed this up in my case was to use the :delegated option on some of my volumes to speed up their access.
Try adding that options to your volumes and see if it changes anything for you:
[...]
volumes:
  - ../src:/var/www:delegated
[...]
If delegated is not a good option for you, read more about the other options, consistent and cached, here to see what best fits your needs.
I'm trying to take advantage of nginx upstream using socket but receiving errors in my log:
connect() to unix:/var/run/user_fpm2.sock failed (2: No such file or directory) while connecting to upstream
I might be going about this wrong, and I'm looking for some advice/input.
Here's the nginx conf block:
upstream backend {
    server unix:/var/run/user_fpm1.sock;
    server unix:/var/run/user_fpm2.sock;
    server unix:/var/run/user_fpm3.sock;
}
And:
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    fastcgi_pass backend;
    fastcgi_index index.php;
    include fastcgi_params;
}
Then, I have 3 PHP pools at /etc/php/7.0/fpm/pool.d/ that look pretty much the same as below. The only difference between the pools is _fpm1, _fpm2, and _fpm3 to match the upstream block.
[user]
listen = /var/run/user_fpm1.sock
listen.owner = user
listen.group = user
listen.mode = 0660
user = user
group = user
pm = ondemand
pm.max_children = 200
pm.process_idle_timeout = 30s
pm.max_requests = 500
request_terminate_timeout = 120s
chdir = /
php_admin_value[session.save_path] = "/home/user/_sessions"
php_admin_value[open_basedir] = "/home/user:/usr/share/pear:/usr/share/php:/tmp:/usr/local/lib/php"
I've noticed that /var/run only ever has the user_fpm3.sock file.
Am I going about this wrong? Is it possible to make this upstream config work? All advice and critique welcome.
I'm running PHP7 on Debian Jessie with nginx 1.10.3 - Server has 6 CPU's and 12GB RAM.
Thanks in advance.
UPDATE: I figured out the answer myself, but I'm leaving the question up in case someone else is trying to do the same thing, or there's a way to optimize this further.
All I had to do was change my pool names to [user_one], [user_two], and [user_three]
Changing the name of each PHP pool fixed the problem. (All three pools had been named [user], so each later definition overrode the previous one, which is why only the last socket, user_fpm3.sock, was ever created.) Like so:
[user_one]
[user_two]
[user_three]
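i.e., each pool now has a unique name paired with its own socket (a sketch; the remaining pool settings stay as shown above):

[user_one]
listen = /var/run/user_fpm1.sock

[user_two]
listen = /var/run/user_fpm2.sock

[user_three]
listen = /var/run/user_fpm3.sock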