Script times out after 240 seconds - php

I'm running an Nginx + PHP-FPM server and I have a script that needs to run for over 30 minutes. After 240 seconds it stops and Nginx returns a 502 Bad Gateway error.
PHP-FPM Log for the execution:
[03-May-2013 19:52:02] WARNING: [pool www] child 2949, script '/var/www/shell/import_db.php' (request: "GET /shell/import_db.php") execution timed out (239.971196 sec), terminating
[03-May-2013 19:52:02] WARNING: [pool www] child 2949 exited on signal 15 (SIGTERM) after 540.054237 seconds from start
[03-May-2013 19:52:02] NOTICE: [pool www] child 3049 started
Nginx log for the execution:
2013/05/03 19:52:02 [error] 2970#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 98.172.80.203, server: www.example.net, request: "GET /shell/import_db.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.example.net"
I've run this script on an Apache + suPHP server and it executes flawlessly. Nonetheless, I am including set_time_limit(0); at the top of my script.
The max_execution_time according to phpinfo() is 1800 seconds (this was originally 300, I bumped it up to this in an attempt to get it working).
FastCGI Nginx Configuration:
## Fcgi Settings
include fastcgi_params;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 600s;
fastcgi_buffer_size 4k;
fastcgi_buffers 512 4k;
fastcgi_busy_buffers_size 8k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors off;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
# nginx will buffer objects to disk that are too large for the buffers above
fastcgi_temp_path /tmpfs/nginx/tmp 1 2;
#fastcgi_keep_conn on; # NGINX 1.1.14
expires off; ## Do not cache dynamic content
I've restarted php-fpm and nginx to no avail. What setting or configuration am I missing, or what else could I check?

When you run nginx with php-fpm, nginx tries to stay connected to php-fpm's children. You should set the following directive to change that:
fastcgi_keep_conn off;
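For reference, a sketch of where that directive could go. The question doesn't show the PHP location block itself, so the block below is only an assumption, assembled from the FastCGI settings above and the fastcgi://127.0.0.1:9000 upstream seen in the error log:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_read_timeout 600s;
    fastcgi_keep_conn off;   # do not hold the connection to the php-fpm child open
    # ... remaining fastcgi_* settings from above ...
}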


nginx 502 error in POST request when uploading file in API request

Environment:
Laravel Version: 5.8.29
PHP Version $ php --version: PHP 7.2.24 (cli)
NGINX Version $ nginx -v: nginx version: nginx/1.14.0 (Ubuntu)
Problem Statement:
I'm getting a 502 Bad Gateway error from NGINX on one particular endpoint when sending a POST request that includes a file.
However, it works properly on other endpoints where I don't need to attach a file to the request.
What could possibly be wrong here?
Error
NGINX 502 Bad Gateway
Logs
$ sudo tail -30 /var/log/nginx/error.log
[error] 4713#4713: *705118 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 111.11.11.111, server: domain.com, request: "POST /action/api/path HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "domain.com", referrer: "domain.com/path"
$ sudo tail /var/log/php7.2-fpm.log
WARNING: [pool www] child 28524 exited on signal 11 (SIGSEGV - core dumped) after
NOTICE: [pool www] child 8033 started
Files & Configuration:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
# max post size
client_max_body_size 100M;
}
/etc/nginx/sites-available/domain.com
server {
listen 443;
server_name domain.com;
root /path/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
index index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
ssl on;
ssl_certificate /etc/nginx/ssl/domain.com.chained.crt;
ssl_certificate_key /etc/nginx/ssl/domain.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
}
server {
listen 80;
server_name domain.com;
rewrite ^/(.*) https://domain.com/$1 permanent;
}
/etc/php/7.2/fpm/pool.d/www.conf (Pool directives)
[www]
user = www-data
group = www-data
listen = /run/php/php7.2-fpm.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
/etc/php/7.2/fpm/php-fpm.conf (Global directives)
[global]
pid = /run/php/php7.2-fpm.pid
error_log = /var/log/php7.2-fpm.log
include=/etc/php/7.2/fpm/pool.d/*.conf
PHP cURL request
private $headers = [
    'Accept: application/json',
    'Content-Type: application/json',
];

private $baseURL = 'http://otherdomain.in/api/v1/';

private function postRequest($data, $endpoint) {
    if ( !is_null($this->apiToken) ) {
        $authorization = "Authorization: Bearer {$this->apiToken}";
        array_push($this->headers, $authorization);
    }

    $url = "{$this->baseURL}/{$endpoint}";

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $this->headers);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);

    $responseJSON = curl_exec($ch);
    $response = json_decode($responseJSON, TRUE);

    return $response;
}
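For context, a hypothetical call into this helper for the file-upload case; the endpoint name, field name, and local path below are illustrative assumptions, not taken from the original code:
// Hypothetical caller: attach the upload as a CURLFile so cURL can build
// a multipart/form-data body (the field name and path are made up).
$file = new CURLFile('/tmp/report.pdf', 'application/pdf', 'report.pdf');

$data = [
    'title'      => 'Monthly report',
    'attachment' => $file,
];

$response = $this->postRequest($data, 'documents/upload');
Note that when $data is an array containing a CURLFile, cURL normally sends multipart/form-data, which would conflict with the hard-coded Content-Type: application/json header above.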
EDIT 1:
I've restarted the FastCGI process using the following command:
$ sudo service php7.2-fpm restart
$ sudo tail /var/log/php7.2-fpm.log
NOTICE: systemd monitor interval set to 10000ms
WARNING: [pool www] child 28870 exited on signal 11 (SIGSEGV - core dumped) after 53.229996 seconds from start
NOTICE: [pool www] child 28879 started
NOTICE: Terminating ...
NOTICE: exiting, bye-bye!
NOTICE: fpm is running, pid 29564
NOTICE: ready to handle connections
NOTICE: systemd monitor interval set to 10000ms
WARNING: [pool www] child 29592 exited on signal 11 (SIGSEGV - core dumped) after 17.115362 seconds from start
NOTICE: [pool www] child 29596 started
EDIT 2:
I've found that my OPcache settings are already commented out, so there is no point in disabling it or increasing its memory limit as suggested in the following answer.
/etc/php/7.2/fpm/php.ini
[opcache]
; Determines if Zend OPCache is enabled
;opcache.enable=0
; Determines if Zend OPCache is enabled for the CLI version of PHP
;opcache.enable_cli=0
; The OPcache shared memory storage size.
;opcache.memory_consumption=196
EDIT 3:
When I changed the listen directive as follows:
/etc/php/7.2/fpm/pool.d/www.conf
listen = /run/php/php7.2-fpm.sock
to
listen = 127.0.0.1:9000
$ sudo service php7.2-fpm restart
Job for php7.2-fpm.service failed because the control process exited with error code.
See "systemctl status php7.2-fpm.service" and "journalctl -xe" for details.
$ systemctl status php7.2-fpm.service
● php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-11-13 06:13:51 UTC; 1min 21s ago
Docs: man:php-fpm7.2(8)
Process: 30120 ExecStart=/usr/sbin/php-fpm7.2 --nodaemonize --fpm-config /etc/php/7.2/fpm/php-fpm.conf (code=exited, status=78)
Main PID: 30120 (code=exited, status=78)
systemd[1]: Starting The PHP 7.2 FastCGI Process Manager...
php-fpm7.2[30120]: [13-Nov-2021 06:13:51] ERROR: unable to bind listening socket for address '127.0.0.1
php-fpm7.2[30120]: [13-Nov-2021 06:13:51] ERROR: FPM initialization failed
systemd[1]: php7.2-fpm.service: Main process exited, code=exited, status=78/n/a
systemd[1]: php7.2-fpm.service: Failed with result 'exit-code'.
systemd[1]: Failed to start The PHP 7.2 FastCGI Process Manager.
EDIT 4:
As pointed out by @valery-viktorovsky in his answer, I've changed the PHP version referenced in the nginx config from 7.4 to 7.2, since I only have PHP 7.2 installed.
It looks like you use 2 versions of PHP, 7.2 and 7.4
/etc/nginx/sites-available/domain.com
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
..
}
Still getting the same logs:
$ sudo tail -f /var/log/nginx/error.log
[error] 23250#23250: *74 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 111.11.11.111, server: domain.com, request: "POST /action/api/path HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "domain.com", referrer: "domain.com/path"
EDIT 5:
I tried adding the following as per these answers [1] [2] [3]:
http {
fastcgi_read_timeout 400s;
# max post size
client_max_body_size 100M;
send_timeout 10m;
client_header_timeout 10m;
client_body_timeout 10m;
large_client_header_buffers 8 1024k;
server {
location / {
proxy_buffer_size 1024k;
proxy_buffers 4 1024k;
proxy_busy_buffers_size 1024k;
}
}
}
It looks like you use 2 versions of PHP, 7.2 and 7.4
Environment:
PHP Version $ php --version: PHP 7.2.24 (cli)
but in the /etc/nginx/sites-available/domain.com config:
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
..
}
You can get rid of 7.2 and use 7.4 only and then check the php logs again.
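As a hedged follow-up on "check the php logs again": assuming a standard Laravel layout where the project root is /path (as implied by root /path/public in the nginx config above), the usual logs to watch are the FPM log for whichever version serves the socket and Laravel's own application log:
$ sudo tail -f /var/log/php7.4-fpm.log      # or php7.2-fpm.log, matching the fastcgi_pass socket
$ tail -f /path/storage/logs/laravel.log    # Laravel's default application log under the project root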

Nginx + Php-FPM - upstream response is buffered

Running WordPress on Nginx + PHP-FPM. We are getting this kind of warning message in our error logs:
[warn] 25518#25518: *34774 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/5/01/0000000015 while reading upstream, client: 80.94.93.51, server: domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:"
After reading several other threads about similar issues, we have modified our configuration by adding proxy_buffers as seen below, but this doesn't solve the issue.
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_intercept_errors on;
fastcgi_pass unix:/run/php/php7.4-fpm.sock;
proxy_buffers 16 16k;
proxy_buffer_size 16k;
}
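For what it's worth, the proxy_* buffer directives only apply to proxy_pass upstreams; for a fastcgi_pass upstream like this one, the equivalent knobs are the fastcgi_* ones. A hedged sketch, reusing the location block from the question, with placeholder sizes rather than recommendations:
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    # fastcgi_* (not proxy_*) buffers apply to a fastcgi_pass upstream; sizes are placeholders
    fastcgi_buffer_size 32k;
    fastcgi_buffers 16 32k;
    fastcgi_busy_buffers_size 64k;
    # fastcgi_max_temp_file_size 0;   # would stop nginx from buffering the response to disk at all
}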

NGINX cuts download after 1GB

Unfortunately I can't completely download any file bigger than 1 GB (~1054 MB). The downloaded file is generated by PHP, and everything below 1 GB works. I already did some research and edited the config files, without luck.
NGINX server config:
server {
listen 80;
listen [::]:80;
server_name example.de;
return 301 https://$server_name$request_uri; }
server { listen 443 http2 ssl;
listen [::]:433 http2 ssl;
server_name example.de;
server_tokens off;
root /var/www/lychee/;
index index.php index.html index.htm;
access_log /var/log/nginx/lychee.access.log combined_ssl;
error_log /var/log/nginx/lychee.error.log info;
location ~ \.html$ { perl Minify::html_handler; }
location ~ \.css$ { perl Minify::css_handler; }
location ~ \.js$ { perl Minify::js_handler; }
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 100m;
large_client_header_buffers 2 1k;
location / {
try_files $uri $uri/ =404;
client_body_buffer_size 10K;
client_max_body_size 100m;
proxy_request_buffering off;
proxy_max_temp_file_size 2048m;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param PHP_VALUE open_basedir="/var/www/:/tmp/";
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
include fastcgi_params;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
access_log off;
fastcgi_read_timeout 600;
}
location ~* \.(png|jpg|jpeg|gif|ico|woff|otf|ttf|css|js|svg|txt|pdf|docx?|xlsx?)$ {
access_log off;
log_not_found off;
}
}
FPM php.ini: (options I explicitly edited)
[PHP]
output_buffering = 4096
open_basedir = "/var/www/:/tmp/:/usr/share/php/"
max_execution_time = 600
max_input_time = 600
memory_limit = 1024M
post_max_size = 100M
NGINX error.log:
2017/12/15 13:22:15 [warn] 22198#22198: *7 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/1/00/0000000001 while reading upstream, client: xxx.xxx.xxx.xxx, server: example.de, request: "GET /php/index.php?function=Album::getArchive&albumID=15015961245901&password= HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "example.de", referrer: "https://example.de/"
2017/12/15 13:27:59 [error] 22198#22198: *7 readv() failed (104: Connection reset by peer) while reading upstream, client: xxx.xxx.xxx.xxx, server: example.de, request: "GET /php/index.php?function=Album::getArchive&albumID=15015961245901&password= HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "example.de", referrer: "example.de"
FPM error.log:
WARNING: [pool www] child 22223, script '/var/www/lychee/php/index.php' (request: "GET /php/index.php?function=Album::getArchive&albumID=150145645645901&password=") execution timed out (148.539892 sec), terminating
[15-Dec-2017 13:23:23] WARNING: [pool www] child 22223 exited on signal 15 (SIGTERM) after 160.008470 seconds from start
[15-Dec-2017 13:23:23] NOTICE: [pool www] child 22270 started
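Two details in these logs may be worth flagging, as a hedged guess rather than a confirmed diagnosis: nginx's fastcgi_max_temp_file_size defaults to 1024m, which matches the ~1 GB cutoff once the response is being buffered to a temporary file, and the "execution timed out ..., terminating" warning is what PHP-FPM logs when a pool-level request_terminate_timeout fires (that setting is separate from max_execution_time and is not shown in the php.ini above). A sketch of the nginx side only, with an illustrative value:
location ~ \.php$ {
    # ... existing fastcgi_* settings ...
    fastcgi_max_temp_file_size 0;   # 0 disables buffering to disk; the default is 1024m
}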

nginx: connection timeout to php5-fpm

I've moved my server from apache2 + fcgi to nginx + fpm because I wanted a lighter environment and Apache's memory footprint was high. The server is a dual core (I know, not very much) with 8 GB of RAM. It also runs a rather busy FreeRADIUS server and the related MySQL. CPU load average is ~1, with some obvious peaks.
One of those peaks happens every 30 minutes when I get web pings from some controlled devices. With Apache the server load spiked a lot, slowing everything down. Now with nginx the process is much faster (I also did some optimization in the code), though now I miss some of these connections. I configured both nginx and fpm to what I believe should be enough, but I must be missing something, because in these moments PHP isn't (apparently) able to reply to nginx. This is a recap of the config:
nginx/1.8.1
user www-data;
worker_processes auto;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 20m;
large_client_header_buffers 2 1k;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
set $fsn /$yii_bootstrap;
if (-f $document_root$fastcgi_script_name){
set $fsn $fastcgi_script_name;
}
fastcgi_pass 127.0.0.1:9011;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fsn;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fsn;
fastcgi_read_timeout 150s;
}
php5-fpm 5.4.45-1~dotdeb+6.1
[pool01]
listen = 127.0.0.1:9011
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 2000
pm.process_idle_timeout = 10s
When the peak arrives I start seeing this in fpm logs:
[18-Feb-2016 11:30:04] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 13 total children
[18-Feb-2016 11:30:05] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 15 total children
[18-Feb-2016 11:30:06] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 17 total children
[18-Feb-2016 11:30:07] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 19 total children
and worse in nginx's error.log
2016/02/18 11:30:22 [error] 23400#23400: *209920 connect() failed (110: Connection timed out) while connecting to upstream, client: 79.1.1.9, server: host.domain.com, request: "GET /ping/?whoami=abc02 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209923 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.1.9.71, server: host.domain.com, request: "GET /utilz/pingme.php?whoami=abc01 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209925 connect() failed (110: Connection timed out) while connecting to upstream, client: 3.7.0.4, server: host.domain.com, request: "GET /ping/?whoami=abc03 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209926 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.7.2.1, server: host.domain.com, request: "GET /ping/?whoami=abc04 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
Those connections are lost!
First question: why does nginx return a timeout within 22 seconds (the pings are made at 00 and 30 minutes of every hour) if fastcgi_read_timeout is set to 150s?
Second question: why do I get so many fpm warnings? The total number of children displayed never reaches pm.max_children. I know warnings are not errors, but I get warned... Is there a relation between those messages and nginx's timeouts?
Given that the server handles regular traffic perfectly fine, and has no problem with RAM or swap even during these peak times (it always has ~1.5 GB or more free), is there a better tuning to handle those ping connections (one that doesn't involve changing the schedule)? Should I raise pm.start_servers and/or pm.min_spare_servers?
You need some changes, and I would also recommend upgrading your PHP to 5.6.
Nginx tuning: /etc/nginx/nginx.conf
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log crit;
# NOTE: Max simultaneous requests = worker_processes*worker_connections/(keepalive_timeout*2)
worker_processes 1;
worker_rlimit_nofile 750000;
# handles connection stuff
events {
worker_connections 50000;
multi_accept on;
use epoll;
}
# http request stuff
http {
access_log off;
log_format main '$remote_addr $host $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_cipher $request_time';
types_hash_max_size 2048;
server_tokens off;
fastcgi_read_timeout 180;
keepalive_timeout 20;
keepalive_requests 1000;
reset_timedout_connection on;
client_body_timeout 20;
client_header_timeout 10;
send_timeout 10;
tcp_nodelay on;
tcp_nopush on;
sendfile on;
directio 100m;
client_max_body_size 100m;
server_names_hash_bucket_size 100;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# index default files
index index.html index.htm index.php;
# use hhvm with php-fpm as backup
upstream php {
keepalive 30;
server 127.0.0.1:9001; # php5-fpm (check your port)
}
# Virtual Host Configs
include /etc/nginx/sites-available/*;
}
For a default server, create /etc/nginx/sites-available/default.conf and add to it:
# default virtual host
server {
listen 80;
server_name localhost;
root /path/to/your/files;
access_log off;
log_not_found off;
# handle static files first
location / {
index index.html index.htm index.php;
}
# serve static content directly by nginx without logs
location ~* \.(jpg|jpeg|gif|png|bmp|css|js|ico|txt|pdf|swf|flv|mp4|mp3)$ {
access_log off;
log_not_found off;
expires 7d;
# Enable gzip for some static content only
gzip on;
gzip_comp_level 6;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
}
# no cache for xml files
location ~* \.(xml)$ {
access_log off;
log_not_found off;
expires 0s;
add_header Pragma no-cache;
add_header Cache-Control "no-cache, no-store, must-revalidate, post-check=0, pre-check=0";
gzip on;
gzip_comp_level 6;
gzip_vary on;
gzip_types text/plain text/xml application/xml application/rss+xml;
}
# run php only when needed
location ~ \.php$ {
# basic php params
fastcgi_pass php;
fastcgi_index index.php;
fastcgi_keep_conn on;
fastcgi_connect_timeout 20s;
fastcgi_send_timeout 30s;
fastcgi_read_timeout 30s;
# fast cgi params
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
}
}
Ideally, you want php5-fpm to restart automatically if it starts failing, so you can add this to /etc/php5/fpm/php-fpm.conf:
emergency_restart_threshold = 60
emergency_restart_interval = 1m
process_control_timeout = 10s
Change /etc/php5/fpm/pool.d/www.conf
[www]
user = www-data
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
listen.backlog = 65000
pm = dynamic
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
; max number of simultaneous requests that will be served (if each PHP page needs 32 MB, then 128 x 32 MB = 4 GB RAM)
pm.max_children = 128
; We want to keep this high (10k to 50k) to prevent constant respawning; however, if there is a memory leak in the PHP code we will have a problem.
pm.max_requests = 10000
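As a rough way to sanity-check that "children x memory per child" arithmetic on a live box, here is a hedged sketch assuming the workers are named php5-fpm and roughly 4 GB of RAM is set aside for PHP, as in the comment above:
$ ps -o rss= -C php5-fpm | awk '{ sum += $1; n++ } END { if (n) printf "children: %d, avg RSS: %.1f MB\n", n, sum/n/1024 }'
# if the average is ~32 MB and ~4 GB is reserved for PHP:
#   pm.max_children ~ 4096 MB / 32 MB = 128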

upstream sent too big header while reading response header from upstream

I am getting these kind of errors:
2014/05/24 11:49:06 [error] 8376#0: *54031 upstream sent too big header while reading response header from upstream, client: 107.21.193.210, server: aamjanata.com, request: "GET /the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https://aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20ht
It is always the same: a URL repeated over and over, separated by commas. I can't figure out what is causing this. Anyone have an idea?
Update: Another error:
http request count is zero while sending response to client
Here is the config. There are other irrelevant things, but this part was added/edited
fastcgi_cache_path /var/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Upstream to abstract backend connection(s) for PHP.
upstream php {
#this should match value of "listen" directive in php-fpm pool
server unix:/var/run/php5-fpm.sock;
}
And then in the server block:
set $skip_cache 0;
# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
# Don't cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
set $skip_cache 1;
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
set $skip_cache 1;
}
location / {
# This is cool because no php is touched for static content.
# include the "?$args" part so non-default permalinks doesn't break when using query string
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass php;
fastcgi_read_timeout 3000;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache WORDPRESS;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}
Add the following to your conf file
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
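If it helps, a hedged sketch of where those two lines usually end up: either in the http block or directly inside the PHP location that does the fastcgi_pass (the location shown is the one from the question's config):
location ~ \.php$ {
    try_files $uri /index.php;
    include fastcgi_params;
    fastcgi_pass php;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    # ... existing fastcgi_cache_* settings ...
}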
If nginx is running as a proxy / reverse proxy
(that is, for users of ngx_http_proxy_module)
In addition to the fastcgi buffers, the proxy module also buffers the upstream response header.
So you may also need to increase proxy_buffer_size and proxy_buffers, or disable proxy buffering entirely (please read the nginx documentation).
Example of proxy buffering configuration
http {
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}
Example of disabling your proxy buffer (recommended for long polling servers)
http {
proxy_buffering off;
}
For more information: Nginx proxy module documentation
Plesk instructions
I combined the top two answers here
In Plesk 12, I had nginx running as a reverse proxy (which I think is the default). So the current top answer doesn't work as nginx is also being run as a proxy.
I went to Subscriptions | [subscription domain] | Websites & Domains (tab) | [Virtual Host domain] | Web Server Settings.
Then at the bottom of that page you can set the Additional nginx directives which I set to be a combination of the top two answers here:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
upstream sent too big header while reading response header from upstream is nginx's generic way of saying "I don't like what I'm seeing". Possible causes:
1. Your upstream server thread crashed
2. The upstream server sent an invalid header back
3. The Notices/Warnings sent back from STDERR overflowed their buffer and both it and STDOUT were closed
3: Look at the error logs above the message: is it streaming with logged lines preceding the message, like PHP message: PHP Notice: Undefined index:?
Example snippet from a loop in my log file:
2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "PHP message: PHP Notice: Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
... // 20 lines of same
PHP message: PHP Notice: Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice: Undef
2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "ta_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Firstname
You can see in the 3rd line from the bottom that the buffer limit was hit and broken, and the next thread wrote over it. Nginx then closed the connection and returned 502 to the client.
2: Log all the headers sent per request, review them, and make sure they conform to standards (nginx does not permit anything older than 24 hours to delete/expire a cookie, or an invalid Content-Length caused by error messages being buffered before the content was counted...). The getallheaders() function call can usually help out in abstracted code situations.
Examples include:
<?php
//expire cookie
setcookie ( 'bookmark', '', strtotime('2012-01-01 00:00:00') );
// nginx will refuse this header response, too far past to accept
....
?>
and this:
<?php
header('Content-type: image/jpg');
?>
<?php //a space was injected into the output above this line
header('Content-length: ' . filesize('image.jpg') );
echo file_get_contents('image.jpg');
// error! the response is now 1-byte longer than header!!
?>
1: Verify, or make the script log, that your thread is reaching the correct end point and not exiting before completion.
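A minimal sketch of such a "did we reach the end" log, using a shutdown handler; the log path is just an example:
<?php
// Record whether the script reached its end point, and surface any fatal error on shutdown.
register_shutdown_function(function () {
    $err = error_get_last();
    error_log('shutdown: ' . ($err ? json_encode($err) : 'clean exit') . "\n", 3, '/tmp/endpoint-trace.log');
});

// ... the actual endpoint code ...

error_log("reached end of script\n", 3, '/tmp/endpoint-trace.log');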
I have a Django application deployed to EBS, and I am using Python 3.8 running on 64-bit Amazon Linux 2. The following method worked for me (note the folder structure might be DIFFERENT if you're using previous Linux versions; for more, see the official documentation here).
Make the .platform folder and its sub-directory as shown below:
|-- .ebextensions          # Don't put nginx config here
|   |-- django.config
|-- .platform              # Make ".platform" folder and its subfolders
    |-- nginx
        |-- conf.d
            |-- proxy.conf
Note that the proxy.conf file should be placed inside the .platform folder (under nginx/conf.d), NOT the .ebextensions folder or the .elasticbeanstalk folder. The file extension should be .conf, NOT .config.
Inside the proxy.conf file, copy & paste these lines directly:
client_max_body_size 50M;
large_client_header_buffers 4 32k;
fastcgi_buffers 16 32k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
There is no need to issue a command to restart nginx (on Amazon Linux 2).
Deploy the source code to Elastic Beanstalk again.
Add:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
to the server {} block in nginx.conf.
Works for me.
We ended up realising that the one server experiencing this had a busted FPM config: PHP errors/warnings/notices that would normally be logged to disk were instead being sent over the FastCGI socket. It looks like there's a parsing bug when part of the header gets split across the buffer chunks.
So setting php_admin_value[error_log] to something actually writeable and restarting php-fpm was enough to fix the problem.
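A hedged sketch of that pool-level fix; the path is only an example, and php_admin_flag[log_errors] is added for completeness rather than taken from the original setup:
; /etc/php/7.x/fpm/pool.d/www.conf (example path)
php_admin_value[error_log] = /var/log/fpm-php.www.log   ; must be writeable by the pool user
php_admin_flag[log_errors] = on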
We could reproduce the problem with a smaller script:
<?php
for ($i = 0; $i < $_GET['iterations']; $i++)
    error_log(str_pad("a", $_GET['size'], "a"));
echo "got here\n";
Raising the buffers made the 502s harder to hit, but not impossible. E.g. with the native (default) buffer sizes:
bash-4.1# for it in {30..200..3}; do for size in {100..250..3}; do echo "size=$size iterations=$it $(curl -sv "http://localhost/debug.php?size=$size&iterations=$it" 2>&1 | egrep '^< HTTP')"; done; done | grep 502 | head
size=121 iterations=30 < HTTP/1.1 502 Bad Gateway
size=109 iterations=33 < HTTP/1.1 502 Bad Gateway
size=232 iterations=33 < HTTP/1.1 502 Bad Gateway
size=241 iterations=48 < HTTP/1.1 502 Bad Gateway
size=145 iterations=51 < HTTP/1.1 502 Bad Gateway
size=226 iterations=51 < HTTP/1.1 502 Bad Gateway
size=190 iterations=60 < HTTP/1.1 502 Bad Gateway
size=115 iterations=63 < HTTP/1.1 502 Bad Gateway
size=109 iterations=66 < HTTP/1.1 502 Bad Gateway
size=163 iterations=69 < HTTP/1.1 502 Bad Gateway
[... there would be more here, but I piped through head ...]
fastcgi_buffers 16 16k; fastcgi_buffer_size 32k;:
bash-4.1# for it in {30..200..3}; do for size in {100..250..3}; do echo "size=$size iterations=$it $(curl -sv "http://localhost/debug.php?size=$size&iterations=$it" 2>&1 | egrep '^< HTTP')"; done; done | grep 502 | head
size=223 iterations=69 < HTTP/1.1 502 Bad Gateway
size=184 iterations=165 < HTTP/1.1 502 Bad Gateway
size=151 iterations=198 < HTTP/1.1 502 Bad Gateway
So I believe the correct answer is: fix your fpm config so it logs errors to disk.
Faced the same problem when running a Symfony app with php-fpm and nginx in Docker containers.
After some research I found that it was caused by php-fpm's stderr being written to the nginx logs, i.e. PHP warnings (which are generated intensively in Symfony debug mode) were duplicated in docker logs php-fpm:
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\DisallowRobotsIndexingListener::onResponse"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\StreamedResponseListener::onKernelResponse"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\RouterListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleAwareListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.terminate" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelTerminate"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
and docker logs nginx:
2021/07/09 12:25:46 [error] 30#30: *2 FastCGI sent in stderr: "ller" to listener "OblgazAPI\API\Common\Infrastructure\EventSubscriber\LegalAuthenticationChecker::checkAuthentication".
PHP message: [debug] Notified event "kernel.controller_arguments" to listener "Symfony\Component\HttpKernel\EventListener\ErrorListener::onControllerArguments".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ResponseListener::onKernelResponse".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\DataCollector\RequestDataCollector::onKernelResponse".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelResponse".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener::onKernelResponse".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\DisallowRobotsIndexingListener::onResponse".
PHP message: [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\StreamedResponseListener::onKernelResponse".
PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleListener::onKernelFinishRequest".
PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\RouterListener::onKernelFinishRequest".
PHP message: [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleAwareListener::onKernelFinishRequest".
PHP message: [debug] Notified event "kernel.exception" to listener "Symfony\Component\HttpKernel\EventListener\ErrorListener::logKernelException".
PHP message: [debug] Notified event "kernel.exception" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelException".
and then nginx logs ended with
2021/07/09 12:25:46 [error] 30#30: *2 upstream sent too big header while reading response header from upstream ...
and I got a 502 error.
Increasing fastcgi_buffer_size in the nginx config helped, but that seems more like suppressing the problem than treating it.
A better solution is to stop php-fpm from sending logs over FastCGI. I found it can be done by setting fastcgi.logging=0 in php.ini (by default it is 1); see the PHP docs.
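A minimal sketch of that change; the ini path varies by distro/image, so it is only an example:
; php.ini used by the FPM SAPI (e.g. /usr/local/etc/php/php.ini in the official Docker image)
fastcgi.logging = 0   ; stop mirroring worker stderr over the FastCGI connection to nginx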
After changing it to 0, the problem goes away and the nginx logs look much cleaner (docker logs nginx):
172.18.0.1 - - [09/Jul/2021:12:36:02 +0300] "GET /my/symfony/app HTTP/1.1" 401 73 "-" "PostmanRuntime/7.26.8"
172.18.0.1 - - [09/Jul/2021:12:36:04 +0300] "GET /my/symfony/app HTTP/1.1" 401 73 "-" "PostmanRuntime/7.26.8"
and all the php-fpm logs are still in their place in the php-fpm log.
fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;
It worked for me when I increased the numbers to the values above.
If you're using Symfony framework:
Before messing with Nginx config, try to disable ChromePHP first.
1 - Open app/config/config_dev.yml
2 - Comment these lines:
#chromephp:
#    type: chromephp
#    level: info
ChromePHP packs the debug info, JSON-encoded, into the X-ChromePhp-Data header, which is too big for the default config of nginx with fastcgi.
Source: https://github.com/symfony/symfony/issues/8413#issuecomment-20412848
This is still the highest SO-question on Google when searching for this error, so let's bump it.
When getting this error and not wanting to deep-dive into the NGINX settings immediately, you might want to check your outputs to the debug console.
In my case I was outputting loads of text to the FirePHP / Chromelogger console, and since this is all sent as a header, it was causing the overflow.
It might not be needed to change the webserver settings if this error is caused by just sending insane amounts of log messages.
I am not sure that the issue is related to the headers PHP is sending.
Make sure that buffering is enabled. The simple way is to create a proxy.conf file:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 100m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
And a fastcgi.conf file:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_buffers 128 4096k;
fastcgi_buffer_size 4096k;
fastcgi_index index.php;
fastcgi_param REDIRECT_STATUS 200;
Next, you need to include them in your default server config this way:
http {
include /etc/nginx/mime.types;
include /etc/nginx/proxy.conf;
include /etc/nginx/fastcgi.conf;
index index.html index.htm index.php;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
#access_log /logs/access.log main;
sendfile on;
tcp_nopush on;
# ........
}
In our case we got this nginx error because our backend generated a redirect response with a very long URL:
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/?manyParamsHere...
Just out of curiosity, we saved that big URL to a file, and its size was 4.4 KB.
Adding two lines to the config file /etc/nginx/conf.d/some_site.conf helped us to fix this error:
server {
# ...
location ~ ^/index\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# Add these lines:
fastcgi_buffer_size 32k;
fastcgi_buffers 4 32k;
}
}
Read more about these params at the official nginx documentation.
For Symfony projects, try adding this line to your .env file:
SHELL_VERBOSITY=0
https://symfony.com/doc/current/console/verbosity.html
I will just leave this here, because I spent a crazy amount of time debugging this in my project, only this particular solution worked 100% for me (for the right reasons), and I have never seen this answer given for this topic. Maybe someone will find it helpful.
I had this error and I found 3 ways to fix it:
1. Set SHELL_VERBOSITY=0 or another value < 3 in .env: https://stackoverflow.com/a/69321273/13653732. In this case you disable your PHP logs, but they can be useful for development and debugging.
2. Set fastcgi.logging=0 in php.ini. It gives the same result as the above.
3. Update Symfony from 5.2 to 5.3. I think the old version has a problem with this.
All PHP Symfony logs were perceived as errors by Nginx, but PHP itself worked correctly.
I had Nginx 1.17, PHP 8.0.2, PHP-FPM, Symfony 5.2, Xdebug, Docker.
I tried newer versions (Nginx 1.21, PHP 8.0.14), but without any result. The problem didn't occur with Apache.
I changed the Nginx configuration, but also without any result.
I am using Symfony; it has a nice exception page (when the ErrorHandler is used), and this page puts the message of your exception into a header on the created response.
vendor/symfony/error-handler/ErrorRenderer/SerializerErrorRenderer.php
the header is called: X-Debug-Exception
So be careful: if you constructed a VERY large exception message, neither nginx nor Chrome (limit 256k) nor curl (~128 KB) can display your page, which makes it really hard to debug what is outputting those big headers.
My suggestion would be not to blindly copy and paste increased buffer sizes into your nginx config; they treat the symptom, not the cause.
