Docker PHP 8 nginx configuration

I tried to add PHP 8 to my Docker project, but I can't run it. The error may be in my nginx config file.
My docker-compose.yml:
version: "3.7"
services:
phpfpm:
image: php:8.0.2-fpm-alpine3.13
container_name: phpfpm
volumes:
- ./php/src:/var/www/html
- ./php/config/php.ini:/usr/local/etc/php/conf.d/php.ini
networks:
- Project
nginx:
image: nginx:latest
container_name: proxy_nginx
restart: always
ports:
- "8888:8888"
- "9999:9999"
- "5555:5555"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/logs:/var/log/nginx/
- ./php/src:/var/www/html
depends_on:
- grafana
- clickhouse
- phpfpm
networks:
- Project
clickhouse:
container_name: clickhouse
image: yandex/clickhouse-server
volumes:
- ./clickhouse/data:/var/lib/clickhouse:rw
- /var/log/clickhouse-server
- ./clickhouse/init_schema.sql:/docker-entrypoint-initdb.d/init_schema.sql
# - ./clickhouse/config.xml:/etc/clickhouse-server/config.xml
networks:
- Project
grafana:
container_name: grafana
image: grafana/grafana
volumes:
- ./grafana:/var/lib/grafana:rw
- ./grafana/grafana-clickhouse-datasource.yaml:/etc/grafana/provisioning/datasources/grafana-clickhouse-datasource.yaml
- ./grafana/grafana-dashboards.yaml:/etc/grafana/provisioning/dashboards/grafana-dashboards.yaml
- ./grafana/dashboards/:/var/lib/grafana/dashboards/
environment:
- GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource
- GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource
depends_on:
- clickhouse
networks:
- Project
networks:
Project:
driver: bridge
Nginx config file:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;

events {
    multi_accept on;
    use epoll;
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    error_log syslog:server=127.0.0.1:8000;
    # error_log /var/log/nginx/error.log warn;
    # access_log /var/log/nginx/access.log;
    access_log syslog:server=127.0.0.1:8000;

    open_file_cache max=5000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    types_hash_max_size 2048;
    keepalive_requests 1000;
    keepalive_timeout 5;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 64;
    client_max_body_size 100m;
    client_body_buffer_size 256k;
    reset_timedout_connection on;
    client_body_timeout 10;
    send_timeout 2;

    gzip on;
    gzip_static on;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_disable "msie6";

    proxy_max_temp_file_size 0;

    upstream proj {
        server clickhouse:8123;
    }

    upstream grafana {
        server grafana:3000;
    }

    server {
        listen 8888;
        server_name 127.0.0.1;
        root /var/www;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://proj;
            proxy_set_header Host $host;
            add_header Cache-Control "no-cache" always;
        }
    }

    server {
        listen 9999;
        server_name 127.0.0.1;
        root /var/www;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://grafana;
            proxy_set_header Host $host;
            add_header Cache-Control "no-cache" always;
        }
    }

    server {
        listen 5555;
        server_name 127.0.0.1;
        index index.php index.html;
        root /var/www/html;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass phpfpm:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }
}
In my case, Grafana and ClickHouse work, but phpfpm doesn't. How can I fix the nginx config in this case? Should I also define an upstream for phpfpm?

It seems that the nginx file does not contain any reference to the YAML one. Check whether the yaml extension is working in PHP 8, and also check which file parses the YAML document.
Try running this code:
if (function_exists("yaml_parse")) {
    echo "yaml extension is enabled";
} else {
    echo "yaml extension is not enabled";
}
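If it turns out the extension is missing, it can be compiled into the official image via PECL. A sketch for the alpine variant used above (the apk package names are assumptions; adjust to your base image):

FROM php:8.0.2-fpm-alpine3.13
# yaml-dev provides the libyaml headers; $PHPIZE_DEPS is predefined in the
# official php images and pulls in the toolchain that pecl needs to build
RUN apk add --no-cache yaml-dev $PHPIZE_DEPS \
    && pecl install yaml \
    && docker-php-ext-enable yaml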

Make sure your php-fpm container is actually running. Also note that fastcgi_pass phpfpm:9000; is the correct form when nginx and php-fpm run in separate containers on a shared Docker network; fastcgi_pass 127.0.0.1:9000; would only work if nginx and php-fpm shared the same network namespace (for example, the same container).
See the article: How to setup PHP 8, NGINX, PHP-FPM and Alpine with Docker
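To the question about an upstream: it is not required, but it keeps the style consistent with the existing proj and grafana upstreams. A minimal sketch, assuming the phpfpm service name from the compose file above (php_backend is just an illustrative name):

upstream php_backend {
    server phpfpm:9000;  # compose service name, resolved by Docker's embedded DNS
}

server {
    listen 5555;
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass php_backend;  # reference the upstream instead of host:port
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}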

Related

Upstream php-fpm sometimes gives 404 file not found when using docker-compose with nginx and php-fpm

In my local development environment, I'm using docker-compose with nginx and php-fpm containers, but sometimes php-fpm returns a 404 File Not Found error. I only get it consistently when multiple AJAX calls happen at once.
Here is my docker-compose file.
version: '3'
services:
  nginx:
    build:
      context: docker
      dockerfile: nginx.dockerfile
    volumes:
      - ./:/var/www/html
    ports:
      - 80:80
    depends_on:
      - php
  php:
    build:
      context: docker
      dockerfile: php.dockerfile
    volumes:
      - ./:/var/www/html
nginx.conf
user nginx;
worker_processes 4;
daemon off;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    # Switch logging to console out to view via Docker
    #access_log /dev/stdout;
    #error_log /dev/stderr;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*.conf;
}
conf.d/default.conf
upstream php-upstream {
    server php:9000;
}
sites-available/site.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx.dockerfile
FROM nginx
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY nginx/site.conf /etc/nginx/sites-available/site.conf
WORKDIR /var/www/
CMD [ "nginx" ]
EXPOSE 80 443
When I visit a page that triggers two AJAX calls, one will give me a 200:
"GET /index.php" 200
and the next will give me a 404:
ERROR: Unable to open primary script: /var/www/html/public/index.php (No such file or directory)
Often when I refresh, the call that failed will now work and the one that worked will now 404.

Using nginX and HTTPS downloads the index.php file instead of rendering it

I am using Docker with nginx.
When I go to http://www.mypage.com, everything is fine; however, when I add HTTPS, it literally just downloads index.php.
There is nothing written in the logs, so I am not sure where to even start fixing this.
Here is my relevant configuration:
Docker-compose.yml:
php:
  image: myImage
  ports:
    - "9000:9001"
  volumes:
    - /home/me/my-site/:/var/www/symfony:cached
    - ./logs/symfony:/var/www/symfony/var/log:cached
  extra_hosts:
    - "docker-host.localhost:127.0.0.1"
    - "otherhost:10.5.221.132"
nginx:
  build: ./nginx
  ports:
    - "80:80"
    - "443:443"
  links:
    - php
  volumes:
    - ./logs/nginx:/var/log/nginx:cached
    - /home/me/my-site/:/var/www/symfony:cached
    - ./nginx/my-site.com.crt:/etc/nginx/my-site.com.crt
    - ./nginx/my-site.com.key:/etc/nginx/my-site.com.key
My dockerfile:
FROM alpine:3.8
RUN apk add --update nginx
RUN rm -rf /var/cache/apk/* && rm -rf /tmp/*
ADD nginx.conf /etc/nginx/
ADD symfony.conf /etc/nginx/conf.d/
ADD fastcgi_params /etc/nginx/
ADD my-site.com.crt /etc/nginx/
ADD my-site.com.key /etc/nginx/
RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf
RUN adduser -D -g '' -G www-data www-data
CMD ["nginx"]
EXPOSE 80
EXPOSE 443
My nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
open_file_cache max=100;
client_body_temp_path /tmp 1 2;
client_body_buffer_size 256k;
client_body_in_file_only off;
server {
listen 443 ssl;
server_name symfony.localhost;
ssl_certificate /etc/nginx/my-site.com.crt;
ssl_certificate_key /etc/nginx/my-site.com.key;
root /var/www/symfony/public/;
index index.php;
}
}
daemon off;
You don't have any PHP processing in your nginx config for the HTTPS server; you need something like the block below to handle .php files:
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # php runs in a separate container here, so pass to the php-upstream
    # defined in /etc/nginx/conf.d/upstream.conf (server php:9000),
    # not to 127.0.0.1
    fastcgi_pass php-upstream;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
}
More info https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/#connecting-nginx-to-php-fpm

PHP files are DOWNLOADING instead of EXECUTING on Nginx

I have a simple docker-compose config with php-fpm and nginx.
It looks like nginx can't pass the PHP file to php-fpm, which results in downloading PHP files instead of executing them.
It works with HTML files (localhost:8080/readme.html).
I always get a 403 Forbidden error when I go to the root of localhost (http://localhost/).
Please help.
docker-compose.yml
version: '3'
services:
nginx:
image: nginx:alpine
container_name: nginx
restart: always
volumes:
- './etc/nginx/nginx.conf:/etc/nginx/nginx.conf'
- './var/log:/var/log'
- './web:/usr/share/nginx/html'
ports:
- 8080:80
- 443:443
depends_on:
- php
php:
image: php:fpm-alpine
container_name: php
restart: always
volumes:
- "./web:/var/www/html"
- './var/log:/var/log'
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        index index.php index.html index.htm;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename/index.php) {
                rewrite (.*) $1/index.php;
            }
            if (!-f $request_filename) {
                rewrite (.*) /index.php;
            }
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location ~ \.php$ {
            try_files $uri = 404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }
    }
}
The problem is that the root folders for php-fpm (/var/www/html) and nginx (/usr/share/nginx/html) are different, but you pass nginx's root folder to php-fpm in this line: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
Because of that, php-fpm looks in the wrong folder and can't execute the PHP file.
Try using /var/www/html as the root for nginx (change it in the nginx config and the docker-compose file), and php-fpm should be able to find and execute the PHP files.
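For example, a minimal sketch of the aligned setup (it assumes the nginx volume in docker-compose.yml is changed to ./web:/var/www/html so both containers mount the code at the same path):

server {
    listen 80 default_server;
    # same path inside both the nginx and the php-fpm containers
    root /var/www/html;
    index index.php index.html index.htm;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        # $document_root now resolves to a path php-fpm can actually see
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}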

PHP: Guzzle 6 - cURL error 7 Connection Refused

I've searched and searched, and read the documentation at http://docs.guzzlephp.org/en/stable/request-options.html and confirmed the error at https://curl.haxx.se/libcurl/c/libcurl-errors.html and for the life of me, I cannot figure out what's going on. I have the URLs for both app-one and app-two in my /etc/hosts file, and I know they're correct as I can access them in my browser and with cURL via terminal just fine.
My setup:
Docker containers configured as:
App 1 = php-fpm - responding app
App 2 = php-fpm - requesting app, using Guzzle 6.3.2
Nginx Reverse Proxy
nginx configurations:
App 1:
upstream php-app-one {
    server php-app-one:9000;
}

server {
    listen 80;
    listen [::]:80;
    server_name app-one.local;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/certs/app-one.crt;
    ssl_certificate_key /etc/nginx/certs/app-one.key;
    ssl_dhparam /etc/nginx/certs/dhparam.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    server_name app-one.local;
    root /var/www/app-one;
    index index.php index.html;

    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/xml+rss text/javascript;

    # Add headers to serve security related headers
    #
    # Disable preloading HSTS for now. You can use the commented out header line that includes
    # the "preload" directive if you understand the implications.
    # add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header Pragma "no-cache";
    add_header Cache-Control "no-cache";
    add_header X-uri "$uri";

    location ~* \.(eot|otf|ttf|woff|woff2)$ {
        add_header Access-Control-Allow-Origin *;
    }

    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        try_files $uri $uri/ /index.php?$args;
    }

    # Pass all .php files onto a php-fpm/php-fcgi server.
    location ~ [^/]\.php(/|$) {
        add_header X-debug-message "A php file was used" always;
        # regex to split $uri to $fastcgi_script_name and $fastcgi_path
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # This is a robust solution for path info security issue and
        # works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
        # if (!-f $document_root$fastcgi_script_name) {
        #     return 404;
        # }
        # Check that the PHP script exists before passing it
        # try_files $fastcgi_script_name =404;
        # Bypass the fact that try_files resets $fastcgi_path_info
        # see: http://trac.nginx.org/nginx/ticket/321
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_intercept_errors on;
        fastcgi_pass php-app-one;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        add_header X-debug-message "A static file was served" always;
        expires max;
        # log_not_found off;
    }

    location ~ /\. {
        deny all;
    }
}
App 2:
upstream php-app-two {
    server php-app-two:9000;
}

server {
    listen 80;
    listen [::]:80;
    server_name app-two.local;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/nginx/certs/app-two.crt;
    ssl_certificate_key /etc/nginx/certs/app-two.key;
    ssl_dhparam /etc/nginx/certs/dhparam.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    server_name app-two.local;
    root /var/www/app-two;
    index index.php index.html;

    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/xml+rss text/javascript;

    # Add headers to serve security related headers
    #
    # Disable preloading HSTS for now. You can use the commented out header line that includes
    # the "preload" directive if you understand the implications.
    # add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header Pragma "no-cache";
    add_header Cache-Control "no-cache";
    add_header X-uri "$uri";

    location ~* \.(eot|otf|ttf|woff|woff2)$ {
        add_header Access-Control-Allow-Origin *;
    }

    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        try_files $uri $uri/ /index.php;
    }

    # Pass all .php files onto a php-fpm/php-fcgi server.
    location ~ [^/]\.php(/|$) {
        add_header X-debug-message "A php file was used" always;
        # add_header Location "$uri" always;
        # regex to split $uri to $fastcgi_script_name and $fastcgi_path
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # This is a robust solution for path info security issue and
        # works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        # Check that the PHP script exists before passing it
        try_files $fastcgi_script_name =404;
        # Bypass the fact that try_files resets $fastcgi_path_info
        # see: http://trac.nginx.org/nginx/ticket/321
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_intercept_errors on;
        fastcgi_pass php-app-two;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
Nginx Reverse Proxy:
worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

http {
    default_type application/octet-stream;
    include /etc/nginx/conf/mime.types;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    application/x-font-ttf ttc ttf;
    application/x-font-otf otf;
    application/font-woff woff;
    application/font-woff2 woff2;
    application/vnd.ms-fontobject eot;

    include /etc/nginx/conf.d/*.conf;
}
docker-compose.yml:
version: '3.3'
services:
  # configured to act as a proxy for wp and member portal
  nginx:
    image: evild/alpine-nginx:1.9.15-openssl
    container_name: nginx
    # volumes offer persistent storage
    volumes:
      - ./app_one:/var/www/app_one/:ro
      - ./app_two:/var/www/app_two/:ro
      - ./nginx/conf/nginx.conf:/etc/nginx/conf/default.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certs:/etc/nginx/certs
    # ports to bind to
    ports:
      - 80:80
      - 443:443
    # allows service to be accessible by other docker containers
    expose:
      - "80"
      - "443"
    depends_on:
      - php-app_one
      - php-app_two
    environment:
      TZ: "America/Los_Angeles"
  # app-two php container
  php-app_two:
    environment:
      TZ: "America/Los_Angeles"
    image: joebubna/php
    container_name: app_two_php
    restart: always
    volumes:
      - ./app_two:/var/www/app_two
    ports:
      - 9000:9000
  php-app_one:
    environment:
      TZ: "America/Los_Angeles"
    image: joebubna/php
    container_name: app_one_php
    restart: always
    volumes:
      - ./app-one:/var/www/app-one
    ports:
      - 9001:9000
  db:
    image: mysql:5.6
    container_name: app_two_mysql
    volumes:
      - db-data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/conf.d/ZZ-app-one.cnf:ro
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: cora
      TZ: "America/Los_Angeles"
    ports:
      - 3306:3306
    expose:
      - "3306"
volumes:
  db-data:
App 1 and App 2 have SSL enabled with self-signed certificates that are imported on creation by docker-compose.
App 1 has several API endpoints that App 2 needs to access. When I try to access them via Guzzle, I receive:
Fatal error: Uncaught GuzzleHttp\Exception\ConnectException: cURL error 7: Failed to connect to app-one.local port 443: Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/app/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 185
GuzzleHttp\Exception\ConnectException: cURL error 7: Failed to connect to app-one.local port 443: Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/app/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 185
Call Stack:
0.0026 366656 1. {main}() /var/www/app/index.php:0
0.2229 3355944 2. Cora\Route->routeProcess() /var/www/app/index.php:45
0.2230 3357208 3. Cora\Route->routeFind() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:89
0.2240 3357912 4. Cora\Route->routeFind() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:474
0.2245 3358576 5. Cora\Route->getController() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:441
0.2364 3477872 6. Controllers\Api\Dashboard->__construct() /var/www/app/vendor/cora/cora-framework/system/classes/Route.php:501
0.2984 4086336 7. GuzzleHttp\Client->get() /var/www/app/controllers/api/controller.Dashboard.php:36
0.2984 4086712 8. GuzzleHttp\Client->__call() /var/www/app/controllers/api/controller.Dashboard.php:36
0.2984 4086712 9. GuzzleHttp\Client->request() /var/www/app/vendor/guzzlehttp/guzzle/src/Client.php:89
0.3521 4321000 10. GuzzleHttp\Promise\RejectedPromise->wait() /var/www/app/vendor/guzzlehttp/guzzle/src/Client.php:131
This is how I'm currently implementing the client (including some of the code I've added in my attempts to remedy this):
<?php
namespace Controllers\Api;

use \GuzzleHttp\Client;
// use \GuzzleHttp\Psr7\Uri;

define('URL', 'https://app-one.local/api/');

class Dashboard extends ApiController
{
    private $http;

    public function __construct($container)
    {
        // We're using guzzle for our requests to help keep opportunity
        // for cURL errors to a minimum
        $this->http = new Client([
            'base_uri' => URL,
            'timeout' => 30.0,
            'allow_redirects' => true,
            'verify' => false,
            'curl' => [
                CURLOPT_VERIFYPEER => false
            ],
            'headers' => [
                'User-Agent' => 'curl/7.38.0',
            ],
        ]);

        $response = $this->http->get('member/sales/hasalestest');
        var_dump($response);
        exit;
    }
}
As I mentioned, I can access this endpoint via the browser just fine, and I can access it directly with cURL in the terminal as long as I use the -k flag for "insecure". I'm not sure what else I can do, as Guzzle's documentation isn't very clear on the syntax differences between 5 and 6, and the Drupal and Laravel crowds tend to have unrelated issues.
This SO post seemed similar (minus the hard-coded port number and Guzzle v.5), but it doesn't mention anything I haven't tried: PHP Guzzle 5: Cannot handle URL with PORT number in it.
This question is also of interest, but judging from other apps that interact with App 1, it does allow other apps to consume certain API endpoints: cURL error 7: Failed to connect to maps.googleapis.com port 443
All I can think of at this point is that maybe it's an nginx configuration issue? A push in the right direction is all I need to get moving forward and consume the rest of the endpoints.
Thanks for any guidance!
The issue is that your hosts file on your local machine will not impact how the docker instances map an IP to a host.
Try accessing the endpoints via the container name...
This turned out to be a relatively simple fix. The problem was that the two fpm containers weren't aware of each other, and by referring to app-one.local in app-two's request, app-two was basically sending the request into the void. The fix was as follows:
version: '3.3'
services:
  nginx:
    image: evild/alpine-nginx:1.9.15-openssl
    container_name: nginx
    volumes:
      - ./app-one:/var/www/app-one/:ro
      - ./app-two:/var/www/app-two/:ro
      - ./nginx/conf/nginx.conf:/etc/nginx/conf/default.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certs:/etc/nginx/certs
    ports:
      - 80:80
      - 443:443
    expose:
      - "80"
      - "443"
    depends_on:
      - app-one
      - app-two
    environment:
      TZ: "America/Los_Angeles"
    # This is the fix
    networks:
      default:
        aliases:
          - app-one.local
          - app-two.local
  app-one:
    environment:
      TZ: "America/Los_Angeles"
    image: joebubna/php
    container_name: app-one
    restart: always
    volumes:
      - ./app-one:/var/www/app-one
    ports:
      - 9000:9000
    # This is the fix
    networks:
      - default
  app-two:
    environment:
      TZ: "America/Los_Angeles"
    image: joebubna/php
    container_name: app-two
    restart: always
    volumes:
      - ./app-two:/var/www/app-two
    ports:
      - 9001:9000
    # This is the fix
    networks:
      - default
  db:
    image: mysql:5.6
    container_name: mysql
    volumes:
      - db-data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/conf.d/ZZ-mysql.cnf:ro
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: cora
      TZ: "America/Los_Angeles"
    ports:
      - 3306:3306
    expose:
      - "3306"
    # This is the fix
    networks:
      - default
volumes:
  db-data:
# This is the fix
networks:
  default:
    driver: bridge
What I ended up doing was creating an overlay network and making the nginx container aware of each fpm container's domain name. This allows the two containers to send requests back and forth via FQDN rather than by IP or container ID/name. A simple thing to overlook in hindsight.
In my case the URL was not valid: I was missing the "https://" at the start. Once I added it, everything was fine.

Docker Swarm get real IP (client host) in Nginx

I have a stack with nginx and PHP running on a Docker Swarm cluster.
At one point in my PHP application, I need to get the remote address ($_SERVER['REMOTE_ADDR']), which should contain the real IP of the client host accessing my webapp.
The problem is the IP that the Docker Swarm cluster reports to nginx: it shows an internal IP like 10.255.0.2, but the real IP is the external IP of the client host (e.g., 192.168.101.151).
How can I solve that?
My docker-compose file:
version: '3'
services:
  php:
    image: php:5.6
    volumes:
      - /var/www/:/var/www/
      - ./data/log/php:/var/log/php5
    networks:
      - backend
    deploy:
      replicas: 1
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - /var/www/:/var/www/
      - ./data/log/nginx:/var/log/nginx
    networks:
      - backend
networks:
  backend:
My default.conf (vhost.conf) file:
server {
    listen 80;
    root /var/www;
    index index.html index.htm index.php;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log error;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        try_files $uri $uri/ /index.php;
    }

    location = /50x.html {
        root /var/www;
    }

    # set expiration of assets to MAX for caching
    location ~* \.(js|css|gif|png|jp?g|pdf|xml|oga|ogg|m4a|ogv|mp4|m4v|webm|svg|svgz|eot|ttf|otf|woff|ico|webp|appcache|manifest|htc|crx|oex|xpi|safariextz|vcf)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_read_timeout 300;
    }
}
My nginx config file:
user nginx;
worker_processes 3;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    keepalive_timeout 15;
    client_body_buffer_size 100K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml;

    log_format main '$remote_addr - $remote_user [$time_local] "$request_filename" "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;

    include /etc/nginx/conf.d/*.conf;
}
For those who don't want to read the whole GitHub thread (https://github.com/moby/moby/issues/25526), the answer that worked for me was to change the config to this:
version: '3.7'
services:
  nginx:
    ports:
      - mode: host
        protocol: tcp
        published: 80
        target: 80
      - mode: host
        protocol: tcp
        published: 443
        target: 81
This still lets the internal overlay network work, but it uses some iptables tricks to forward those ports directly to the container, so the service inside the container sees the correct source IP address of the packets.
There is no facility in iptables to balance a port across multiple containers, so you can only assign one port to one container (which also rules out multiple replicas of a container).
You can't get this yet through an overlay network. If you scroll up from the bottom of this long-running GitHub issue, you'll see some options for using bridge networks in Swarm with your proxies to work around this for now.
Changing the port binding mode to host worked for me:
ports:
  - mode: host
    protocol: tcp
    published: 8082
    target: 80
However, your web front end must then be pinned to a specific host inside the swarm cluster, i.e.:
deploy:
  placement:
    constraints:
      [node.role == manager]
X-Real-IP will be passed through, and you can use it to access the client IP. See http://dequn.github.io/2019/06/22/docker-web-get-real-client-ip/ for reference.
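For completeness, a sketch of how the backend nginx can consume that header with the realip module so PHP sees the client address in $_SERVER['REMOTE_ADDR'] (the 10.0.0.0/8 range is an assumption; adjust set_real_ip_from to your proxy/overlay network):

server {
    listen 80;

    # Trust the X-Real-IP header set by the fronting proxy and use it
    # to rewrite $remote_addr (ngx_http_realip_module).
    set_real_ip_from 10.0.0.0/8;  # assumed proxy/ingress address range
    real_ip_header X-Real-IP;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        # the stock fastcgi_params maps REMOTE_ADDR from $remote_addr,
        # so $_SERVER['REMOTE_ADDR'] now holds the client IP
        include fastcgi_params;
    }
}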
