We are running WordPress on nginx + PHP-FPM and keep getting this kind of warning message in our error logs:
[warn] 25518#25518: *34774 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/5/01/0000000015 while reading upstream, client: 80.94.93.51, server: domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:"
After reading several other threads about similar issues, we modified our configuration by adding proxy_buffers as shown below, but this doesn't solve the issue.
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    proxy_buffers 16 16k;
    proxy_buffer_size 16k;
}
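Note that this warning comes from the FastCGI module, and the proxy_* buffer directives only apply to proxy_pass upstreams; for a fastcgi_pass upstream the corresponding knobs are fastcgi_buffer_size, fastcgi_buffers and fastcgi_max_temp_file_size (or fastcgi_buffering off to stop buffering the response altogether). The message itself is only a notice that the response did not fit into the in-memory buffers. A minimal sketch, with sizes that are assumptions to tune rather than recommended values:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    # fastcgi_* buffers are the ones consulted for fastcgi_pass
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    # optional: a value of 0 disables spooling responses to disk entirely
    #fastcgi_max_temp_file_size 0;
}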
Related
I'm trying to run a local project (PHP Phalcon) using nginx on macOS, but it seems nginx can't load the project correctly.
Here is my nginx log:
2023/01/12 11:38:04 [error] 6170#0: *1 kevent() reported about an closed connection (54: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: bahana.front, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9074", host: "127.0.0.1:7002"
2023/01/12 11:38:05 [error] 6170#0: *1 kevent() reported about an closed connection (54: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: bahana.front, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9074", host: "127.0.0.1:7002", referrer: "http://127.0.0.1:7002/"
And here is my site's nginx config (for loading the PHP Phalcon project):
server {
    listen 7002;
    root /Applications/XAMPP/xamppfiles/htdocs/bahana-front/bahana/public;
    index index.php;
    server_name bahana.front;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9074;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
PHP-FPM is also running fine in the background (port 9074).
So, what did I miss in the configuration? I'm a little stuck and keep getting "closed connection" in the nginx log and an HTTP Error 502 (Bad Gateway) when accessing the site at http://127.0.0.1:7002. I could use your help, thanks.
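"Connection reset by peer while reading response header from upstream" means the FPM worker dropped the TCP connection before sending any FastCGI output, so the cause is usually on the PHP side (a crashing worker, or whatever listens on 127.0.0.1:9074 not actually speaking FastCGI) rather than in the nginx config above. As a quick way to rule the TCP hop in or out, here is a hedged variant of the PHP location that talks to FPM over a unix socket instead; the socket path is an assumption and must match the "listen" line of your php-fpm pool:

location ~ \.php$ {
    try_files $uri =404;
    # assumption: point this at whatever your php-fpm pool's "listen" is set to,
    # e.g. a unix socket rather than TCP port 9074
    fastcgi_pass unix:/usr/local/var/run/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}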
Unfortunately, I can't completely download any file bigger than 1 GB (~1054 MB). The download is generated by PHP, and everything below 1 GB works. I have already done some research and edited the config files, without luck.
NGINX server config:
server {
    listen 80;
    listen [::]:80;
    server_name example.de;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name example.de;
    server_tokens off;
    root /var/www/lychee/;
    index index.php index.html index.htm;
    access_log /var/log/nginx/lychee.access.log combined_ssl;
    error_log /var/log/nginx/lychee.error.log info;
    location ~ \.html$ { perl Minify::html_handler; }
    location ~ \.css$ { perl Minify::css_handler; }
    location ~ \.js$ { perl Minify::js_handler; }
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 100m;
    large_client_header_buffers 2 1k;
    location / {
        try_files $uri $uri/ =404;
        client_body_buffer_size 10K;
        client_max_body_size 100m;
        proxy_request_buffering off;
        proxy_max_temp_file_size 2048m;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param PHP_VALUE open_basedir="/var/www/:/tmp/";
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        include fastcgi_params;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        access_log off;
        fastcgi_read_timeout 600;
    }
    location ~* \.(png|jpg|jpeg|gif|ico|woff|otf|ttf|css|js|svg|txt|pdf|docx?|xlsx?)$ {
        access_log off;
        log_not_found off;
    }
}
FPM php.ini: (options I explicitly edited)
[PHP]
output_buffering = 4096
open_basedir = "/var/www/:/tmp/:/usr/share/php/"
max_execution_time = 600
max_input_time = 600
memory_limit = 1024M
post_max_size = 100M
NGINX error.log:
2017/12/15 13:22:15 [warn] 22198#22198: *7 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/1/00/0000000001 while reading upstream, client: xxx.xxx.xxx.xxx, server: example.de, request: "GET /php/index.php?function=Album::getArchive&albumID=15015961245901&password= HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "example.de", referrer: "https://example.de/"
2017/12/15 13:27:59 [error] 22198#22198: *7 readv() failed (104: Connection reset by peer) while reading upstream, client: xxx.xxx.xxx.xxx, server: example.de, request: "GET /php/index.php?function=Album::getArchive&albumID=15015961245901&password= HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "example.de", referrer: "example.de"
FPM error.log:
WARNING: [pool www] child 22223, script '/var/www/lychee/php/index.php' (request: "GET /php/index.php?function=Album::getArchive&albumID=150145645645901&password=") execution timed out (148.539892 sec), terminating
[15-Dec-2017 13:23:23] WARNING: [pool www] child 22223 exited on signal 15 (SIGTERM) after 160.008470 seconds from start
[15-Dec-2017 13:23:23] NOTICE: [pool www] child 22270 started
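Reading the two logs together: nginx spools the generated archive to a temporary file while PHP is still producing it, and FPM then terminates the worker mid-request ("execution timed out ... terminating", followed by SIGTERM), which matches downloads breaking somewhere around the 1 GB mark. A hedged sketch of the nginx side of a fix: stream the response to the client instead of buffering it to disk, and give the long-running script more time. On the FPM side, the pool's request_terminate_timeout (not shown above) is usually the setting behind that "execution timed out" warning and would need raising as well. The values below are assumptions to tune:

location ~ \.php$ {
    # ... keep the existing try_files / fastcgi_pass / fastcgi_param lines ...
    # stream large generated downloads instead of spooling them to
    # /var/cache/nginx/fastcgi_temp (fastcgi_buffering exists since nginx 1.5.6)
    fastcgi_buffering off;
    # alternatively, keep buffering but raise the temp-file cap; the default is
    # 1024m, which lines up with the ~1 GB point where downloads break
    #fastcgi_max_temp_file_size 4096m;
    fastcgi_read_timeout 1800;
}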
Running Shopware 5 on a Debian Jessie machine with nginx and php5-fpm, we very often get a 502 Bad Gateway. This happens mostly in the backend when longer operations are running, such as thumbnail creation, even though these are done in small chunks of single AJAX requests.
The server, with 64 GB of RAM and 16 cores, is essentially idle, because there is no real traffic on it. We currently use it as a staging system until we have fixed all errors like this one.
Error log:
The following lines can then be found in the nginx error log:
[error] 20524#0: *175 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "POST /backend/MediaManager/createThumbnails HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com", referrer: "http://www.domain.com/backend/"
[error] 20524#0: *175 no live upstreams while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "POST /backend/Log/createLog HTTP/1.1", upstream: "fastcgi://php-fpm", host: "www.domain.com", referrer: "http://www.domain.com/backend/"
[error] 20524#0: *175 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /backend/login/getLoginStatus?_dc=1457014588680 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com", referrer: "http://www.domain.com/backend/"
[error] 20522#0: *209 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /backend/login/getLoginStatus?_dc=1457014618682 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com", referrer: "http://www.domain.com/backend/"
Maybe it is notable that at first a lot of "*175 connect" errors occur and then finally a "*209 connect".
Config files:
I'll try to post only the significant lines related to this topic and leave out everything that is commented out.
php-fpm:
/etc/php5-fpm/pool.d/www.conf:
[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
nginx:
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
    worker_connections 768;
    multi_accept on;
}
http {
    ## MIME types.
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ## Default log and error files.
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ## Use sendfile() syscall to speed up I/O operations and speed up
    ## static file serving.
    sendfile on;
    ## Handling of IPs in proxied and load balancing situations.
    # set_real_ip_from 192.168.1.0/24; # set to your proxies ip or range
    # real_ip_header X-Forwarded-For;
    ## Timeouts.
    client_body_timeout 60;
    client_header_timeout 60;
    keepalive_timeout 10 10;
    send_timeout 60;
    ## Reset lingering timed out connections. Deflect DDoS.
    reset_timedout_connection on;
    ## Body size.
    client_max_body_size 10m;
    ## TCP options.
    tcp_nodelay on;
    ## Optimization of socket handling when using sendfile.
    tcp_nopush on;
    ## Compression.
    gzip on;
    gzip_buffers 16 8k;
    gzip_comp_level 1;
    gzip_http_version 1.1;
    gzip_min_length 10;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon application/vnd.ms-fontobject font/opentype application/x-font-ttf;
    gzip_vary on;
    gzip_proxied any; # Compression for all requests.
    gzip_disable "msie6";
    ## Hide the Nginx version number.
    server_tokens off;
    ## Upstream to abstract backend connection(s) for PHP.
    upstream php-fpm {
        server unix:/var/run/php5-fpm.sock;
        # server 127.0.0.1:9000;
        ## Create a backend connection cache.
        keepalive 32;
    }
    ## Include additional configs
    include /etc/nginx/conf.d/*.conf;
    ## Include all vhosts.
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-available/site.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name xxxxxxxx.com;
    root /var/www/shopware;
    ## Access and error logs.
    access_log /var/log/nginx/xxxxxxxx.com.access.log;
    error_log /var/log/nginx/xxxxxxxx.com.error.log;
    ## leaving out lots of shopware/mediafiles-related settings
    ## ....
    ## continue:
    location ~ \.php$ {
        try_files $uri $uri/ =404;
        ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        ## required for upstream keepalive
        # disabled due to failed connections
        #fastcgi_keep_conn on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SHOPWARE_ENV $shopware_env if_not_empty;
        fastcgi_param ENV $shopware_env if_not_empty; # BC for older SW versions
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        client_max_body_size 24M;
        client_body_buffer_size 128k;
        ## upstream "php-fpm" must be configured in http context
        fastcgi_pass php-fpm;
    }
}
What to do now? Please let me know if I should provide further information on this question.
Update
After applying the nginx and FPM settings from peixotorms's answer, the errors in the nginx logs changed to:
30 upstream timed out (110: Connection timed out) while reading response header from upstream
But the issue itself isn't solved; it has just taken on another face...
It might sound strange to you, but your problem is most probably due to the fact that you're running PHP on a socket instead of a TCP port. You will start seeing 502 errors (and others) when you have around 300 concurrent requests (sometimes fewer) to PHP on a socket configuration.
Also, your pm.max_children is way too low, unless you want to limit your server to a maximum of around 5 simultaneous PHP requests: http://php.net/manual/en/install.fpm.configuration.php
Configure it this way, and those errors should go away:
For your nginx.conf change the following values:
worker_processes 4;
worker_rlimit_nofile 750000;
# handles connection stuff
events {
    worker_connections 50000;
    multi_accept on;
    use epoll;
}
upstream php-fpm {
    keepalive 30;
    server 127.0.0.1:9001;
}
Your /etc/php5-fpm/pool.d/www.conf
(Use these settings because you have plenty of RAM and CPU.)
[www]
user = www-data
group = www-data
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
listen.backlog = 65000
pm = dynamic
pm.max_children = 1024
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
pm.max_requests = 10000
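As a rough sanity check on pm.max_children (a heuristic added here, not part of the original answer): the whole pool has to fit in memory, so pm.max_children multiplied by the average memory of one PHP worker should stay below the RAM you can spare for PHP. At roughly 50 MB per worker, 1024 children can consume on the order of 50 GB, which a 64 GB machine only absorbs if little else runs on it; measure your own workers and scale the value accordingly.

; hypothetical sizing sketch -- measure the real average worker size first
; (64 GB total - ~16 GB for OS, MySQL, nginx and caches) / ~50 MB per worker
; gives roughly 960 workers
pm.max_children = 960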
Also add this to your location ~ \.php$ block:
location ~ \.php$ {
    try_files $uri $uri/ =404;
    ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SHOPWARE_ENV $shopware_env if_not_empty;
    fastcgi_param ENV $shopware_env if_not_empty; # BC for older SW versions
    ## fastcgi_keep_conn is required for the upstream "keepalive" above
    fastcgi_keep_conn on;
    fastcgi_connect_timeout 20s;
    fastcgi_send_timeout 60s;
    fastcgi_read_timeout 60s;
    fastcgi_pass php-fpm;
}
EDIT:
Change the values below in your /etc/php5/fpm/php.ini file and restart:
safe_mode = Off
output_buffering = Off
zlib.output_compression = Off
max_execution_time = 900
max_input_time = 900
memory_limit = 2048M
post_max_size = 120M
file_uploads = On
upload_max_filesize = 120M
Try binding to 0.0.0.0:9000:
listen = 0.0.0.0:9000
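One caveat to that suggestion (an editorial addition, not from the answer itself): 0.0.0.0 binds the FPM port on every interface, so on a machine with a public address port 9000 has to be firewalled. If nginx runs on the same host, a loopback bind plus a client restriction is the safer variant:

listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1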
When the login form requests the login_check action, nginx throws a 502 error.
I cannot figure out why this happens.
2015/11/07 18:49:37 [error] 26038#0: *2 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: test.local, request: "POST /admin/login/check HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "test.local", referrer: "http://test.local/admin/login"
site nginx config:
server {
    server_name test.local;
    root /var/www/test.local/web;
    location / {
        # try to serve file directly, fallback to app_dev.php
        try_files $uri /app_dev.php$is_args$args;
    }
    location ~ \.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }
    error_log /var/log/nginx/test.local_error.log;
    access_log /var/log/nginx/test.local_access.log;
}
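"recv() failed (104: Connection reset by peer) while reading response header from upstream" means the FPM child died before returning anything; on a POST to login_check that is typically a PHP-level crash (a segfaulting extension, an out-of-memory kill, and so on) rather than an nginx problem, so the php5-fpm log is the first place to look. A hedged pool sketch that makes such failures visible; the directives are standard FPM pool options, the paths are assumptions:

; in the [www] pool: forward the workers' stderr to the FPM error log so
; fatal errors and crash output are not silently discarded
catch_workers_output = yes
; optional: dump a backtrace for requests that hang, to see what login_check does
slowlog = /var/log/php5-fpm.slow.log
request_slowlog_timeout = 10s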
It seems that NGINX won't pass the correct HTTP protocol to the HHVM backend.
Here are the tests I've done:
If the client sends an HTTP/1.1 request (non-chunked), NGINX passes it to the HHVM backend and the correct response comes back to the client:
(client)                             (server backend)
HTTP/1.1 (non chunked) -> NGINX  ->  HHVM
                            |
                            v
(HTTP/1.1 200)         <-  NGINX <-  (non chunked response)
If the client sends an HTTP/1.1 request (chunked), the server produces a chunked response, but NGINX fires this log message:
upstream prematurely closed connection while reading response header from upstream, request: "POST /soap HTTP/1.1", upstream: "fastcgi://unix:/var/run/hhvm/sock:"
and returns a 502 (Bad Gateway) response:
(client)                         (server backend)
HTTP/1.1 (chunked) -> NGINX  ->  HHVM
                        |
                        v
(HTTP/1.1 502)     <-  NGINX <-  (chunked response)
Why do I get a 502 from NGINX when the response is chunked?
Is there something wrong with the hhvm/fastcgi/nginx config?
I double-checked that the backend gives a correct HTTP/1.1 response :(
I have a setup with:
NGINX:nginx/1.4.6 (Ubuntu)
HHVM: HipHop VM 3.6.1 (rel)
The NGINX config is
server {
    listen 80;
    server_name xxxxx;
    root /usr/share/nginx/web;
    include hhvm.conf;
    location / {
        # try to serve file directly, fallback to rewrite
        try_files $uri @rewriteapp;
    }
    location @rewriteapp {
        # rewrite all to app.php
        rewrite ^(.*)$ /app.php/$1 last;
    }
    location ~ ^/(app|app_dev|app_benchmark|app_logall|config)\.php(/|$) {
        fastcgi_keep_conn on;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/hhvm/sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_read_timeout 240;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}
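For context (an editorial note, not part of the original post): the FastCGI protocol has no notion of chunked transfer coding; nginx buffers the request body and hands HHVM a plain FastCGI request, and it likewise assembles the HTTP response itself from whatever the backend returns. So the chunked/non-chunked difference usually surfaces through how the persistent FastCGI connection is handled. A hedged diagnostic variant of the PHP location: drop fastcgi_keep_conn so every request gets a fresh connection to HHVM, which at least shows whether connection reuse is what makes the chunked case fail; everything else stays as above:

location ~ ^/(app|app_dev|app_benchmark|app_logall|config)\.php(/|$) {
    # diagnostic: no connection reuse towards HHVM
    #fastcgi_keep_conn on;
    fastcgi_pass unix:/var/run/hhvm/sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}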
}