PHP: Guzzle 6 - cURL error 7 Connection Refused

I've searched and searched, and read the documentation at http://docs.guzzlephp.org/en/stable/request-options.html and confirmed the error at https://curl.haxx.se/libcurl/c/libcurl-errors.html and for the life of me, I cannot figure out what's going on. I have the URLs for both app-one and app-two in my /etc/hosts file, and I know they're correct as I can access them in my browser and with cURL via terminal just fine.
My setup:
Docker containers configured as:
App 1 = php-fpm - responding app
App 2 = php-fpm - requesting app, using Guzzle 6.3.2
Nginx Reverse Proxy
nginx configurations:
App 1:
upstream php-app-one {
server php-app-one:9000;
}
server {
listen 80;
listen [::]:80;
server_name app-one.local;
return 301 https://$server_name$request_uri;
}
server {
# SSL configuration
listen 443 ssl;
listen [::]:443 ssl;
ssl on;
ssl_certificate /etc/nginx/certs/app-one.crt;
ssl_certificate_key /etc/nginx/certs/app-one.key;
ssl_dhparam /etc/nginx/certs/dhparam.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
server_name app-one.local;
root /var/www/app-one;
index index.php index.html;
gzip_types text/plain text/css application/json application/x-javascript
text/xml application/xml application/xml+rss text/javascript;
# Add headers to serve security related headers
#
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
# add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header Pragma "no-cache";
add_header Cache-Control "no-cache";
add_header X-uri "$uri";
location ~* \.(eot|otf|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}
location / {
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
try_files $uri $uri/ /index.php?$args;
}
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ [^/]\.php(/|$) {
add_header X-debug-message "A php file was used" always;
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# This is a robust solution for path info security issue and
# works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
# if (!-f $document_root$fastcgi_script_name) {
# return 404;
# }
# Check that the PHP script exists before passing it
# try_files $fastcgi_script_name =404;
# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_intercept_errors on;
fastcgi_pass php-app-one;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
add_header X-debug-message "A static file was served" always;
expires max;
# log_not_found off;
}
location ~ /\. {
deny all;
}
}
App 2:
upstream php-app-two {
server php-app-two:9000;
}
server {
listen 80;
listen [::]:80;
server_name app-two.local;
return 301 https://$server_name$request_uri;
}
server {
# SSL configuration
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/nginx/certs/app-two.crt;
ssl_certificate_key /etc/nginx/certs/app-two.key;
ssl_dhparam /etc/nginx/certs/dhparam.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
server_name app-two.local;
root /var/www/app-two;
index index.php index.html;
gzip_types text/plain text/css application/json application/x-javascript
text/xml application/xml application/xml+rss text/javascript;
# Add headers to serve security related headers
#
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
# add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header Pragma "no-cache";
add_header Cache-Control "no-cache";
add_header X-uri "$uri";
location ~* \.(eot|otf|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}
location / {
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
try_files $uri $uri/ /index.php;
}
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ [^/]\.php(/|$) {
add_header X-debug-message "A php file was used" always;
# add_header Location "$uri" always;
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# This is a robust solution for path info security issue and
# works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;
# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_intercept_errors on;
fastcgi_pass php-app-two;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
}
Nginx Reverse Proxy:
worker_processes 1;
daemon off;
events {
worker_connections 1024;
}
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
http {
default_type application/octet-stream;
include /etc/nginx/conf/mime.types;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/x-font-ttf application/x-font-otf application/font-woff application/font-woff2 application/vnd.ms-fontobject;
include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml:
version: '3.3'
services:
# configured to act as a proxy for wp and member portal
nginx:
image: evild/alpine-nginx:1.9.15-openssl
container_name: nginx
# volumes offer persistent storage
volumes:
- ./app_one:/var/www/app_one/:ro
- ./app_two:/var/www/app_two/:ro
- ./nginx/conf/nginx.conf:/etc/nginx/conf/default.conf:ro
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- ./certs:/etc/nginx/certs
# ports to bind to
ports:
- 80:80
- 443:443
# allows service to be accessible by other docker containers
expose:
- "80"
- "443"
depends_on:
- php-app_one
- php-app_two
environment:
TZ: "America/Los_Angeles"
# app-two php container
php-app_two:
environment:
TZ: "America/Los_Angeles"
image: joebubna/php
container_name: app_two_php
restart: always
volumes:
- ./app_two:/var/www/app_two
ports:
- 9000:9000
php-app_one:
environment:
TZ: "America/Los_Angeles"
image: joebubna/php
container_name: app_one_php
restart: always
volumes:
- ./app-one:/var/www/app-one
ports:
- 9001:9000
db:
image: mysql:5.6
container_name: app_two_mysql
volumes:
- db-data:/var/lib/mysql
- ./mysql/my.cnf:/etc/mysql/conf.d/ZZ-app-one.cnf:ro
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_DATABASE: cora
TZ: "America/Los_Angeles"
ports:
- 3306:3306
expose:
- "3306"
volumes:
db-data:
App 1 and App 2 have SSL enabled with self-signed certificates that are imported on creation by docker-compose.
App 1 has several API endpoints App 2 needs to access. When I try to access via Guzzle, I receive:
Fatal error: Uncaught GuzzleHttp\Exception\ConnectException: cURL error 7: Failed to connect to app-one.local port 443: Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/app/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 185
GuzzleHttp\Exception\ConnectException: cURL error 7: Failed to connect to app-one.local port 443: Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/app/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 185
Call Stack:
0.0026 366656 1. {main}() /var/www/app/index.php:0
0.2229 3355944 2. Cora\Route->routeProcess() /var/www/app/index.php:45
0.2230 3357208 3. Cora\Route->routeFind() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:89
0.2240 3357912 4. Cora\Route->routeFind() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:474
0.2245 3358576 5. Cora\Route->getController() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:441
0.2364 3477872 6. Controllers\Api\Dashboard->__construct() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:501
0.2984 4086336 7. GuzzleHttp\Client->get() /var/www/app/controllers/api/controller.Dashboard.php:36
0.2984 4086712 8. GuzzleHttp\Client->__call() /var/www/app/controllers/api/controller.Dashboard.php:36
0.2984 4086712 9. GuzzleHttp\Client->request() /var/www/app/vendor/guzzlehttp/guzzle/src/Client.php:89
0.3521 4321000 10. GuzzleHttp\Promise\RejectedPromise->wait() /var/www/app/vendor/guzzlehttp/guzzle/src/Client.php:131
This is how I'm currently implementing the client (including some of the code I've added in my attempts to remedy this):
<?php
namespace Controllers\Api;
use \GuzzleHttp\Client;
// use \GuzzleHttp\Psr7\Uri;
define('URL', 'https://app-one.local/api/');
class Dashboard extends ApiController
{
private $http;
public function __construct($container)
{
// We're using guzzle for our requests to help keep opportunity
// for cURL errors to a minimum
$this->http = new Client([
'base_uri' => URL,
'timeout' => 30.0,
'allow_redirects' => true,
'verify' => false,
'curl' => [
CURLOPT_SSL_VERIFYPEER => false
],
'headers' => [
'User-Agent' => 'curl/7.38.0',
],
]);
$response = $this->http->get('member/sales/hasalestest');
var_dump($response);
exit;
}
}
As I mentioned, I can access this endpoint via browser just fine, and can access it directly with cURL in the terminal so long as I use the -k flag for "insecure". I'm not sure what else I can do, as Guzzle's documentation isn't very clear on the syntax differences between 5 and 6, and the Drupal and Laravel crowds tend to have unrelated issues.
This SO post seemed similar (minus the hard-coded port number and Guzzle v5) but doesn't mention anything I haven't tried: PHP Guzzle 5: Cannot handle URL with PORT number in it.
This question is also of interest, but based on other apps that interact with App 1, I know it does allow other apps to consume certain API endpoints: cURL error 7: Failed to connect to maps.googleapis.com port 443
All I can think of at this point is that maybe it's an nginx configuration issue? A push in the right direction is all I need to get moving forward and consume the rest of the endpoints.
Thanks for any guidance!

The issue is that the hosts file on your local machine has no effect on how the Docker containers map a hostname to an IP.
Try accessing the endpoints via the container name...
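To confirm this, check name resolution from inside the requesting container (e.g. via `docker exec`), not from the host. A minimal sketch of such a check in Python, using only the standard library; run it inside the app-two container, where the host's /etc/hosts is not visible:

```python
import socket

def can_resolve(host, port=443):
    """Return True if `host` resolves from this process's point of view."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Inside app-two, "app-one.local" would have returned False before the
# networking fix, even though it resolved fine on the host machine.
print(can_resolve("localhost"))
```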

This turned out to be a relatively simple fix. The problem was the two fpm containers weren't aware of each other, and by referring to app-one.local in app-two's request, app-two was basically sending the request into the void. The fix for this was as follows:
version: '3.3'
services:
nginx:
image: evild/alpine-nginx:1.9.15-openssl
container_name: nginx
volumes:
- ./app-one:/var/www/app-one/:ro
- ./app-two:/var/www/app-two/:ro
- ./nginx/conf/nginx.conf:/etc/nginx/conf/default.conf:ro
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- ./certs:/etc/nginx/certs
ports:
- 80:80
- 443:443
expose:
- "80"
- "443"
depends_on:
- app-one
- app-two
environment:
TZ: "America/Los_Angeles"
# This is the fix
networks:
default:
aliases:
- app-one.local
- app-two.local
app-one:
environment:
TZ: "America/Los_Angeles"
image: joebubna/php
container_name: app-one
restart: always
volumes:
- ./app-one:/var/www/app-one
ports:
- 9000:9000
# This is the fix
networks:
- default
app-two:
environment:
TZ: "America/Los_Angeles"
image: joebubna/php
container_name: app-two
restart: always
volumes:
- ./app-two:/var/www/app-two
ports:
- 9001:9000
# This is the fix
networks:
- default
db:
image: mysql:5.6
container_name: mysql
volumes:
- db-data:/var/lib/mysql
- ./mysql/my.cnf:/etc/mysql/conf.d/ZZ-mysql.cnf:ro
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_DATABASE: cora
TZ: "America/Los_Angeles"
ports:
- 3306:3306
expose:
- "3306"
# This is the fix
networks:
- default
volumes:
db-data:
# This is the fix
networks:
default:
driver: bridge
What I ended up doing is defining a user network (the default bridge network above) and giving the nginx container network aliases for each app's domain name. This allows the two containers to send requests back and forth via FQDN, as opposed to IP or container ID/name. A simple thing to overlook in hindsight.

In my case the URL was not valid; I was missing the "https://" at the start of the URL. Once I added it, everything was fine.

Related

Connection refused while connecting to upstream to a port other than 9000 in docker project

I've been struggling with running two PHP Docker projects on the same server for a long time. For the first project, I map the PHP port as 9000:9000 in docker-compose.yaml. If I use the same port for the other project, Docker logically reports on startup that port 9000 is already in use. Therefore, I set the port to 9002, but I get a 502 Bad Gateway error and Connection refused.
connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.19.0.3:9002", host: "127.0.0.1:4443"
The first project works correctly.
Can someone advise me how to adjust the configuration, or tell me where I am going wrong?
First docker project:
version: '3.9'
services:
php:
container_name: php
build:
context: ./docker/php
ports:
- '9000:9000'
volumes:
- .:/var/www/
- ~/.ssh:/root/.ssh:ro
nginx:
container_name: nginx
image: nginx:stable-alpine
ports:
- '5080:80'
- '5443:443'
volumes:
- .:/var/www/
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
- /etc/letsencrypt:/etc/letsencrypt
depends_on:
- php
upstream php-upstream {
server php:9000;
}
server {
listen 80;
server_name localhost;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
index index.php index.html index.htm;
server_name localhost;
root /var/www;
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
ssl_certificate /etc/letsencrypt/live/domain1/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain1/privkey.pem;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
include fastcgi_params;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
internal;
}
location ~ \.php$ {
return 404;
}
}
Second docker project:
version: '3.9'
services:
php:
container_name: hostmagic_php
build:
context: ./docker/php
ports:
- '9002:9000'
volumes:
- .:/var/www/symfony
- ~/.ssh:/root/.ssh:ro
nginx:
container_name: hostmagic_nginx
image: nginx:stable-alpine
ports:
- '8080:80'
- '4443:443'
volumes:
- .:/var/www/symfony
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
- /etc/letsencrypt:/etc/letsencrypt
depends_on:
- php
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: hostmagic_rabbitmq
environment:
RABBITMQ_DEFAULT_USER: admin
RABBITMQ_DEFAULT_PASS: password
RABBITMQ_DEFAULT_VHOST: "/"
ports:
- 15672:15672
- 5672:5672
upstream php-upstream {
server php:9002;
}
server {
listen 80;
server_name localhost;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
index index.php index.html index.htm;
server_name localhost;
root /var/www/symfony/public;
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
ssl_certificate /etc/letsencrypt/live/domain2/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain2/privkey.pem;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_param HTTP_HOST $host;
include fastcgi_params;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
internal;
}
location ~ \.php$ {
return 404;
}
}
Connections between containers always use the standard port number for the destination service. These connections don't require ports:, and ignore any port remapping that might be specified there.
That means, in the second Nginx proxy, you need to use the standard PHP-FPM port 9000 and not the remapped port:
upstream php-upstream {
server php:9000;
}
If you're not going to access the FastCGI service directly from the host (and tools to do this are limited) then you can delete the ports: on both php containers, which further avoids this conflict. (You can also delete container_name: and Compose will pick a non-conflicting default.)
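Applied to the second project, the PHP service might then look like this (a sketch; the build context and volumes are as in the original compose file):

```yaml
services:
  php:
    build:
      context: ./docker/php
    # No ports: needed -- nginx reaches php-fpm at php:9000 over the
    # Compose network, regardless of any host port mappings.
    volumes:
      - .:/var/www/symfony
      - ~/.ssh:/root/.ssh:ro
```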

nginx server connecting to docker containers running laravel

I need to configure an nginx server to connect to multiple docker networks for different projects. As of now, I am trying to connect nginx to one docker network.
The docker network has php (laravel) and mysql in two different containers.
Here is the nginx configuration:
upstream web {
server 127.0.0.1:9000;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name tinyurl.abc.org;
root /home/azureuser/app/url-shorterner/public;
ssl_certificate /home/azureuser/app/certs/abc.org.crt;
ssl_certificate_key /home/azureuser/app/certs/abc.org.key;
error_log /var/log/nginx/error.log error;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass web;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME /tinyurl/public$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
server {
listen 80;
listen [::]:80;
server_name tinyurl.abc.org;
return 301 https://$server_name$request_uri;
}
The docker-compose.yml file has the following:
version: '3.8'
# Services
services:
# Web (Application) Service
web:
build:
context: ./.docker/web
args:
HOST_UID: $HOST_UID
container_name: tinyurl-web
ports:
- '9000:9000'
volumes:
- '.:/var/www/html'
depends_on:
mysql:
condition: service_healthy
# MySQL Service
mysql:
image: mysql/mysql-server:8.0
container_name: tinyurl-mysql
environment:
MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
MYSQL_ROOT_HOST: '${DB_HOST}'
MYSQL_DATABASE: '${DB_DATABASE}'
MYSQL_USER: '${DB_USERNAME}'
MYSQL_HOST: '${DB_HOST}'
MYSQL_PASSWORD: '${DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 'no'
volumes:
- ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
- mysqldata:/var/lib/mysql
healthcheck:
test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
interval: 5s
retries: 10
# Scheduler Service
scheduler:
image: mcuadros/ofelia:latest
container_name: tinyurl-scheduler
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./.docker/scheduler/config.ini:/etc/ofelia/config.ini
depends_on:
- web
volumes:
mysqldata:
driver: local
The problem seems to be in the routing of the request. The index page within Laravel's public folder shows up, but none of the other static pages do, and requests are not being routed correctly.
It looks like the fastcgi configuration is not right.
Can anyone help with this and point me to what needs to change in the nginx configuration to route to the web container running Laravel?
With Regards,
Sharat
In your nginx config, replace 127.0.0.1 with the service name from your docker-compose file:
upstream web {
server web:9000;
}
server {
...
location ~ \.php$ {
fastcgi_pass web;
...

Configure nextcloud-fpm docker-compose with bare metal nginx

I'm trying to install Nextcloud on my server.
The nginx service is installed directly on bare metal (Ubuntu)
I starting from the docker-compose found at https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/postgres/fpm
version: '3.8'
services:
postgres-nextcloud:
image: postgres:alpine
restart: always
ports:
- 5435:5432
volumes:
- postgres-nextcloud-data:/var/lib/postgresql/data
env_file:
- db.env
redis-nextcloud:
image: redis:alpine
restart: always
nextcloud:
image: nextcloud:fpm-alpine
restart: always
ports:
- 8083:9000
volumes:
- /var/www/cloud.domain.com:/var/www/html
environment:
- POSTGRES_HOST=postgres-nextcloud
- REDIS_HOST=redis-nextcloud
- POSTGRES_PORT=5432
env_file:
- db.env
depends_on:
- postgres-nextcloud
- redis-nextcloud
web:
build: ./web
restart: always
volumes:
- /var/www/cloud.domain.com:/var/www/html:ro
environment:
- VIRTUAL_HOST=cloud.domain.com
- LETSENCRYPT_HOST=cloud.domain.com
- LETSENCRYPT_EMAIL=dev@domain.com
depends_on:
- nextcloud
networks:
- proxy-tier
- default
cron:
image: nextcloud:fpm-alpine
restart: always
volumes:
- /var/www/cloud.domain.com:/var/www/html
entrypoint: /cron.sh
depends_on:
- postgres-nextcloud
- redis-nextcloud
But with my limited web-server knowledge, I haven't found a way to properly configure my "local" nginx.
I have many other websites and apps already working with this nginx instance.
All the different configs are in the sites-available directory.
The config for the Nextcloud project is named cloud.mydomain.com.
With this nginx config I only get a "File not found." page:
server {
root /var/www/cloud.domain.com;
server_name cloud.domain.com www.cloud.domain.com;
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass localhost:8083;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/cloud.domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/cloud.domain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = www.cloud.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = cloud.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name cloud.domain.com www.cloud.domain.com;
listen 80;
listen [::]:80;
return 404; # managed by Certbot
}
I understand that an -fpm app needs a proxy, but I don't really understand how to link it to my existing nginx setup, with nginx NOT running in a Docker container.
Thanks for your time!
One way I know is to install another nginx in another container and put those two containers in one Docker network.
Use your bare-metal nginx as a reverse proxy and forward traffic to the nginx in the container; that will do the trick.
The other way is to use the image nextcloud:latest, which comes with a built-in Apache server. It's actually the first way, just with a built-in web server.
I've heard there is some way to configure your Docker image to behave like a service installed on bare metal (with its own public IP) by setting the network mode in the docker-compose file, but I think it's easier to just include another nginx server in Docker.
Either way, your existing services on bare metal will not be affected.
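As a sketch of the first approach, the bare-metal nginx block could simply proxy to a port published by the containerized "web" service (the port 8084 below is hypothetical; publish whichever port you choose in docker-compose):

```nginx
server {
    listen 443 ssl;
    server_name cloud.domain.com;
    # ssl_certificate / ssl_certificate_key as in the Certbot-managed block

    location / {
        proxy_pass http://127.0.0.1:8084;  # hypothetical published port of the container nginx
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```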

Docker php 8 nginx configuration

I tried to add PHP 8 to my Docker project, but I can't run it. My error is probably in the nginx config file.
File docker-compose.yml:
version: "3.7"
services:
phpfpm:
image: php:8.0.2-fpm-alpine3.13
container_name: phpfpm
volumes:
- ./php/src:/var/www/html
- ./php/config/php.ini:/usr/local/etc/php/conf.d/php.ini
networks:
- Project
nginx:
image: nginx:latest
container_name: proxy_nginx
restart: always
ports:
- "8888:8888"
- "9999:9999"
- "5555:5555"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/logs:/var/log/nginx/
- ./php/src:/var/www/html
depends_on:
- grafana
- clickhouse
- phpfpm
networks:
- Project
clickhouse:
container_name: clickhouse
image: yandex/clickhouse-server
volumes:
- ./clickhouse/data:/var/lib/clickhouse:rw
- /var/log/clickhouse-server
- ./clickhouse/init_schema.sql:/docker-entrypoint-initdb.d/init_schema.sql
# - ./clickhouse/config.xml:/etc/clickhouse-server/config.xml
networks:
- Project
grafana:
container_name: grafana
image: grafana/grafana
volumes:
- ./grafana:/var/lib/grafana:rw
- ./grafana/grafana-clickhouse-datasource.yaml:/etc/grafana/provisioning/datasources/grafana-clickhouse-datasource.yaml
- ./grafana/grafana-dashboards.yaml:/etc/grafana/provisioning/dashboards/grafana-dashboards.yaml
- ./grafana/dashboards/:/var/lib/grafana/dashboards/
environment:
- GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource
- GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource
depends_on:
- clickhouse
networks:
- Project
networks:
Project:
driver: bridge
Nginx config file:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
error_log syslog:server=127.0.0.1:8000;
# error_log /var/log/nginx/error.log warn;
# access_log /var/log/nginx/access.log;
access_log syslog:server=127.0.0.1:8000;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
types_hash_max_size 2048;
keepalive_requests 1000;
keepalive_timeout 5;
server_names_hash_max_size 512;
server_names_hash_bucket_size 64;
client_max_body_size 100m;
client_body_buffer_size 256k;
reset_timedout_connection on;
client_body_timeout 10;
send_timeout 2;
gzip on;
gzip_static on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_http_version 1.1;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_disable "msie6";
proxy_max_temp_file_size 0;
upstream proj {
server clickhouse:8123;
}
upstream grafana {
server grafana:3000;
}
server {
listen 8888;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://proj;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
server {
listen 9999;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://grafana;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
server {
listen 5555;
server_name 127.0.0.1;
index index.php index.html;
root /var/www/html;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass phpfpm:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
}
In my case grafana and clickhouse work, but phpfpm doesn't. How can I fix the nginx config in this case? Maybe I must also use an upstream for phpfpm?
It seems that the nginx file does not contain any reference to the yaml one. Check whether the yaml extension is working in PHP 8, and also check which file parses the yaml document.
Try running this code:
if (function_exists("yaml_parse")) echo "yaml extension is enabled";
else echo "yaml extension is not enabled";
Make sure your php-fpm is running, and also replace fastcgi_pass phpfpm:9000 with fastcgi_pass 127.0.0.1:9000.
See the article: How to setup PHP 8, NGINX, PHP-FPM and Alpine with Docker
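On the upstream question: an upstream block is optional here, since `fastcgi_pass phpfpm:9000;` is already equivalent to a single-server upstream. If you prefer the upstream form for symmetry with the proj and grafana blocks, a sketch would be (the name php-upstream is arbitrary):

```nginx
upstream php-upstream {
    server phpfpm:9000;  # Compose service name, resolved on the "Project" network
}
server {
    listen 5555;
    root /var/www/html;
    location ~ \.php$ {
        fastcgi_pass php-upstream;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```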

Docker Swarm get real IP (client host) in Nginx

I have a stack with nginx and PHP to run on Docker Swarm Cluster.
At one point in my PHP application, I need to get the remote address ($_SERVER['REMOTE_ADDR']), which contains the real IP of the client host accessing my webapp.
The problem is the IP reported to nginx by the Docker Swarm cluster: it shows an internal IP like 10.255.0.2, but the real IP is the external IP of the client host (like 192.168.101.151).
How can I solve that?
My docker-compose file:
version: '3'
services:
php:
image: php:5.6
volumes:
- /var/www/:/var/www/
- ./data/log/php:/var/log/php5
networks:
- backend
deploy:
replicas: 1
web:
image: nginx:latest
ports:
- "80:80"
volumes:
- /var/www/:/var/www/
- ./data/log/nginx:/var/log/nginx
networks:
- backend
networks:
backend:
My default.conf (vhost.conf) file:
server {
listen 80;
root /var/www;
index index.html index.htm index.php;
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log error;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
try_files $uri $uri/ /index.php;
}
location = /50x.html {
root /var/www;
}
# set expiration of assets to MAX for caching
location ~* \.(js|css|gif|png|jp?g|pdf|xml|oga|ogg|m4a|ogv|mp4|m4v|webm|svg|svgz|eot|ttf|otf|woff|ico|webp|appcache|manifest|htc|crx|oex|xpi|safariextz|vcf)(\?[0-9]+)?$ {
expires max;
log_not_found off;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_read_timeout 300;
}
}
My nginx config file:
user nginx;
worker_processes 3;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
keepalive_timeout 15;
client_body_buffer_size 100K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/x-javascript text/xml text/css application/xml;
log_format main '$remote_addr - $remote_user [$time_local] "$request_filename" "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
include /etc/nginx/conf.d/*.conf;
}
For those who don't want to read the whole GitHub thread (https://github.com/moby/moby/issues/25526), the answer that worked for me was to change the config to this:
version: '3.7'
services:
nginx:
ports:
- mode: host
protocol: tcp
published: 80
target: 80
- mode: host
protocol: tcp
published: 443
target: 443
This still lets the internal overlay network work, but uses some tricks with iptables to forward those ports directly to the container, so the service inside the container sees the correct source IP address of the packets.
There is no facility in iptables to allow balancing of ports between multiple containers, so you can only assign one port to one container (which includes multiple replicas of a container).
You can't get this yet through an overlay network. If you scroll up from bottom on this long-running GitHub issue, you'll see some options for using bridge networks in Swarm with your proxies to get around this issue for now.
Changing the port binding mode to host worked for me:
ports:
- mode: host
protocol: tcp
published: 8082
target: 80
however, your web front end must run on a specific host inside the swarm cluster, i.e.:
deploy:
placement:
constraints:
[node.role == manager]
X-Real-IP will be passed through, and you can use it to access the client IP. You can look at http://dequn.github.io/2019/06/22/docker-web-get-real-client-ip/ for reference.
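Once a proxy in front of the app sets X-Real-IP / X-Forwarded-For, the application should read those headers instead of the raw socket address. A minimal sketch of the usual selection logic, shown here in Python (the left-most X-Forwarded-For entry is the original client; adapt the same idea to PHP's $_SERVER as needed):

```python
def client_ip(remote_addr, forwarded_for=None):
    """Pick the originating client IP.

    `remote_addr` is the peer socket address (e.g. 10.255.0.2 inside Swarm);
    `forwarded_for` is the X-Forwarded-For header set by a trusted proxy,
    whose left-most entry is the original client.
    """
    if forwarded_for:
        first = forwarded_for.split(",")[0].strip()
        if first:
            return first
    return remote_addr

print(client_ip("10.255.0.2", "192.168.101.151, 10.0.1.7"))  # 192.168.101.151
```

Only trust the header when the request actually came through your own proxy; otherwise a client can spoof it.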