Request body truncated at ~8k - PHP

I have two apps: A, running in Docker in the cloud, and B, on a dedicated server. Both are written in PHP and served by nginx. App B sends PUT requests with larger payloads (think ~1 MB) to app A. The problem is that the body that reaches PHP in app A is truncated to approximately 8k (8209 bytes to be exact). This causes json_decode to fail on the request body, and the whole request fails.
I have been googling and checking configs for quite a while already but cannot find the issue.
This is my nginx.conf for app A (running in Docker in the cloud):
worker_processes auto;
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx.error.log notice;
error_log /var/log/nginx.error.log info;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 0;
client_header_timeout 60;
client_body_timeout 60;
client_body_buffer_size 10m;
client_max_body_size 100m;
server_tokens off;
reset_timedout_connection on;
send_timeout 60;
include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;
large_client_header_buffers 4 16k;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
# cache informations about file descriptors, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=65000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent $request_time "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log timed;
upstream backend {
server 127.0.0.1:9000;
}
include conf.d/*;
}
This is my site's conf:
server {
listen 80 default_server;
root /var/www/appB/public;
index index.php;
location = /.well-known/schema-discovery {
add_header Content-Type application/json;
return 200 '{}';
}
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~* \.(gif|jpg|png|js)$ {
expires 30d;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
include /etc/nginx/fastcgi_params;
}
}
This is the www.conf (for php-fpm):
[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 30
pm.min_spare_servers = 10
pm.max_spare_servers = 50
pm.process_idle_timeout = 60s
pm.max_requests = 5000
pm.status_path = /status
ping.path = /ping
slowlog = /var/log/fpm/slow.log
request_slowlog_timeout = 60s
request_terminate_timeout = 300s
catch_workers_output = yes
access.log = /var/log/fpm/access.log
php_flag[display_errors] = off
php_flag[html_errors] = off
php_admin_value[error_log] = /var/log/fpm/php_error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 1024M
php_admin_value[upload_max_filesize] = 100M
php_admin_value[post_max_size] = 100M
According to the logs, neither nginx nor php-fpm complains about anything (no errors in the logs).
Does anybody have an idea what might be wrong?
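In case it helps, this is roughly the probe I dropped into app A to measure what actually arrives (a throwaway test script, not part of the real app):
<?php
// probe.php - logs how much of the request body actually reaches PHP
$raw = file_get_contents('php://input');   // raw PUT body
json_decode($raw);
error_log(sprintf(
    'CONTENT_LENGTH=%s bytes_read=%d json=%s',
    $_SERVER['CONTENT_LENGTH'] ?? 'n/a',
    strlen($raw),
    json_last_error_msg()
));
http_response_code(204);
If CONTENT_LENGTH reports the full ~1 MB while strlen($raw) stops at 8209, the body is being lost between nginx and PHP; if CONTENT_LENGTH is already small, it never reached nginx intact. Adding $request_length to the nginx log_format would confirm the nginx side.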
Thanks a lot in advance!

Related

504 Gateway Time-out (nginx) in WordPress php-fpm

I want to set up WordPress with PHP-FPM in Kubernetes. The deployment itself works, but there is a problem I am currently facing with the Nginx proxy: when I try to install the WooCommerce plugin, it fails with
Installation failed: 504 Gateway Time-out (nginx), followed by the usual "<!-- a padding to disable MSIE and Chrome friendly error page -->" filler.
I don't know what's going wrong on the proxy. I already set proxy_read_timeout to 100 and tried many other timeout values, but it still doesn't work. Here is my Nginx proxy config:
wordpress.conf
server {
listen 80;
server_name localhost;
root /var/www/html;
index index.php;
#access_log /var/log/nginx/hakase-access.log;
#error_log /var/log/nginx/hakase-error.log;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_cache phpcache;
fastcgi_cache_valid 200 301 302 60m;
fastcgi_cache_min_uses 1;
fastcgi_cache_lock on;
add_header X-FastCGI-Cache $upstream_cache_status;
fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=phpcache:100m max_size=10g inactive=60m use_temp_path=off;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
The one working for me (add these directives inside the existing location ~ \.php$ block, keeping the fastcgi_pass and params that are already there):
location ~ \.php$ {
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 60;
fastcgi_read_timeout 60;
}
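One thing worth adding (my own note, not from the answer above): fastcgi_read_timeout only buys time on the nginx side; PHP-FPM must also be allowed to run that long. In the pool config that would look something like this (values are illustrative, not taken from the setup above):
request_terminate_timeout = 300s
php_admin_value[max_execution_time] = 300
Otherwise PHP may be terminated before the plugin install finishes, regardless of nginx's timeouts.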
You must be running the two containers inside a single pod; you can check the logs of the ingress and of the WordPress nginx container for more details.
You can use this GitHub repo as a reference: https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx
Also check this blog post to understand more: https://medium.com/@harsh.manvar111/kubernetes-wordpress-php-fpm-nginx-73cb4f9aef02

AntiDDOS protection slowing nginx server [closed]

Hello,
I just migrated from Apache 2.4 to nginx. The OS on the server is FreeBSD 10.3 (amd64) with a custom kernel. I have one strange problem: when I uncomment this line in the nginx config:
limit_req zone=antiddosphp burst=5;
the WordPress dashboard takes >2s longer to load than with this option disabled. Where could the problem be? Almost every page takes more time to load with this option (or I don't know how to set it right)...
My second question is about the right performance settings in the config file. My VPS is:
1x 2 GHz CPU + 2 GB RAM + 15 GB SSD
Free memory after startup is about 1700 MB. Do I have the right settings for nginx? I also have Postfix, Dovecot, MariaDB, and PHP-FPM installed. MariaDB takes about 200 MB of RAM and the MTA about 150 MB, so I have about 1300 MB free for the webserver.
My nginx conf file:
load_module /usr/local/libexec/nginx/ngx_mail_module.so;
load_module /usr/local/libexec/nginx/ngx_stream_module.so;
user www;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=antiddos:10m rate=10r/s;
limit_req zone=antiddosphp burst=5;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
sendfile on;
fastcgi_connect_timeout 100;
fastcgi_send_timeout 100;
fastcgi_read_timeout 100;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_intercept_errors on;
gzip on;
gzip_min_length 1k;
gzip_comp_level 9;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
open_file_cache max=2000 inactive=60s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
client_body_buffer_size 8k;
client_header_buffer_size 16k;
client_max_body_size 20m;
client_body_timeout 10;
client_header_timeout 10;
large_client_header_buffers 4 32k;
keepalive_timeout 15;
send_timeout 10;
keepalive_requests 1000;
server {
listen 80;
server_name localhost;
location / {
root /usr/local/www/nginx;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/local/www/nginx-dist;
}
}
server {
server_name test.test.cz;
#limit_req zone=antiddos burst=60;
#limit_req zone=antiddosphp burst=2;
access_log /var/log/example.com.access.log;
error_log /var/log/example.com.error.log;
root /usr/local/www/domains/test-cz/webserver/test;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ /\.ht {
deny all;
}
}
}
PHP-FPM configuration:
user = www
group = www
pm = dynamic
pm.start_servers = 3
pm.max_children = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 3
pm.max_requests = 200
request_terminate_timeout = 10
request_slowlog_timeout = 0
slowlog = log/$pool.log.slow
catch_workers_output = yes
Thank you all for any reply.
If you are loading a website, you are not loading only the page itself but its assets as well, and nginx treats each of those as an independent request. You have 10 r/s defined and a burst size of 5. Therefore, beyond 10 requests per second, further requests are delayed for rate-limiting purposes, and if the burst size (5) is exceeded, the following requests receive a 503 error.
If the requests rate exceeds the rate configured for a zone, their processing is delayed such that requests are processed at a defined rate. Excessive requests are delayed until their number exceeds the maximum burst size, in which case the request is terminated with an error 503 (Service Temporarily Unavailable).
(Source: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html)
Answer: Up the r/s a bit and the pages should flow out a bit faster.
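For example (a sketch only; the numbers are a starting point, not tuned for this VPS):
limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=30r/s;
limit_req zone=antiddosphp burst=20 nodelay;
The nodelay flag serves burst requests immediately instead of queueing them, which avoids exactly the kind of dashboard delay described above; requests beyond the burst are still rejected with 503.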
For the second question: I can't give a definite answer; look at your load and determine whether you have to tweak these values.
Usually the error log is a good start :)

Optimize Nginx + PHP-FPM for 5 million daily pageviews

We run a few high-volume websites which together generate around 5 million pageviews per day. We have fairly overkill servers as we anticipate growth, but a few active users report that the site is sometimes slow on the first pageview. I've seen this myself every once in a while: the first pageview takes 3-5 seconds, then it's instant for the rest of the day. This has happened to me maybe twice in the last 24 hours, so not often enough to figure out what's happening. Every page on our site uses PHP, but one of the times it happened it was on a PHP page with no database calls, which makes me think the issue is limited to NGINX, PHP-FPM, or network settings.
We have 3 NGINX servers running behind a load balancer. Our database is on a separate cluster. I've included our configuration files for nginx and php-fpm, as well as our current RAM usage and PHP-FPM status, captured in the middle of the day (average traffic for us). Please take a look and let me know if you see any red flags in my setup or have suggestions to optimize further.
Specs for each NGINX Server:
OS: CentOS 7
RAM: 128GB
CPU: 32 cores (2.4Ghz each)
Drives: 2xSSD on RAID 1
RAM Usage (free -g)
total used free shared buff/cache available
Mem: 125 15 10 3 100 103
Swap: 15 0 15
PHP-FPM status (IE: http://server1_ip/status)
pool: www
process manager: dynamic
start time: 03/Mar/2016:03:42:49 -0800
start since: 1171262
accepted conn: 69827961
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 1670
active processes: 1
total processes: 1671
max active processes: 440
max children reached: 0
slow requests: 0
php-fpm config file:
[www]
user = nginx
group = nginx
listen = /var/opt/remi/php70/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 6000
pm.start_servers = 1600
pm.min_spare_servers = 1500
pm.max_spare_servers = 2000
pm.max_requests = 1000
pm.status_path = /status
slowlog = /var/opt/remi/php70/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/opt/remi/php70/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/opt/remi/php70/lib/php/session
php_value[soap.wsdl_cache_dir] = /var/opt/remi/php70/lib/php/wsdlcache
nginx config file:
user nginx;
worker_processes 32;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1000;
multi_accept on;
use epoll;
}
http {
log_format main '$remote_addr - $remote_user [$time_iso8601] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10 10;
send_timeout 60;
types_hash_max_size 2048;
client_max_body_size 50M;
client_body_buffer_size 5m;
client_body_timeout 60;
client_header_timeout 60;
fastcgi_buffers 256 16k;
fastcgi_buffer_size 128k;
fastcgi_connect_timeout 60s;
fastcgi_send_timeout 60s;
fastcgi_read_timeout 60s;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
reset_timedout_connection on;
server_names_hash_bucket_size 100;
#compression
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/javascript application/xml;
gzip_disable "MSIE [1-6]\.";
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name domain1.com;
root /folderpath;
location / {
index index.php;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
#server status
location /server-status {
stub_status on;
access_log off;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
location = /status {
access_log off;
allow 127.0.0.1;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
UPDATE:
I installed opcache as per the suggestion below. Not sure yet whether it fixes the issue. Here are my settings:
opcache.enable=1
opcache.memory_consumption=1024
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=32531
opcache.max_wasted_percentage=10
Two minor tips:
if you use opcache, monitor it to check that its configuration (especially memory size) is OK and to avoid OOM restarts; you can use https://github.com/rlerdorf/opcache-status (a single PHP page), or a quick probe like the sketch below
increase pm.max_requests to keep reusing the same processes
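If you don't want to deploy a full status page, PHP's own opcache_get_status() does the job (a minimal sketch; run it on the server itself with opcache loaded):
<?php
// opcache-check.php - print opcache memory usage and hit stats
$status = opcache_get_status(false);   // false = omit per-script details
$mem    = $status['memory_usage'];
$stats  = $status['opcache_statistics'];
printf("used %.1f MB / free %.1f MB / wasted %.1f%%\n",
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['current_wasted_percentage']);
printf("cached scripts: %d, hit rate: %.2f%%\n",
    $stats['num_cached_scripts'],
    $stats['opcache_hit_rate']);
If wasted memory approaches opcache.max_wasted_percentage or the cache fills up, opcache restarts and recompiles everything, which can produce exactly the kind of one-off slow first pageview described in the question.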

How to create a virtual host in nginx? And an AJAX call issue

I am using the WT-NMP package, a combination of PHP, MySQL, and nginx on Windows.
worker_processes 1;
events {
worker_connections 1024;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
ssi off;
#Timeouts
client_body_timeout 5;
client_header_timeout 5;
keepalive_timeout 25 25;
send_timeout 15s;
resolver_timeout 3s;
#Directive sets timeout period for connection with FastCGI-server. It should be noted that this value can't exceed 75 seconds.
fastcgi_connect_timeout 5s;
#Directive sets the amount of time for upstream to wait for a fastcgi process to send data. Change this directive if you have long running fastcgi processes that do not produce output until they have finished processing. If you are seeing an upstream timed out error in the error log, then increase this parameter to something more appropriate.
fastcgi_read_timeout 40s;
#Directive specifies request timeout to the server. The timeout is calculated between two write operations, not for the whole request. If no data have been written during this period then serve closes the connection.
fastcgi_send_timeout 15s;
fastcgi_buffers 8 32k;
fastcgi_buffer_size 32k;
#fastcgi_busy_buffers_size 256k;
#fastcgi_temp_file_write_size 256k;
open_file_cache off;
#php max upload limit cannot be larger than this
client_max_body_size 8m;
####client_body_buffer_size 1K;
client_header_buffer_size 1k;
large_client_header_buffers 2 1k;
types_hash_max_size 2048;
include nginx.mimetypes.conf;
default_type text/html;
##
# Logging Settings
##
access_log "c:/wt-nmp/log/nginx_access.log";
error_log "c:/wt-nmp/log/nginx_error.log" warn; #debug or warn
log_not_found on; #enables or disables messages in error_log about files not found on disk.
rewrite_log off;
#Leave this off
fastcgi_intercept_errors off;
gzip off;
index index.php index.htm index.html;
server {
listen 127.0.0.1:80 default_server;
listen 127.0.0.1:8080;
#listen [::1]:80 ipv6only=on;
server_name mylocalhost;
root "c:/wt-nmp/www/projectname";
autoindex on;
error_log "c:/wt-nmp/log/nginx_error.log";
allow 127.0.0.1;
#allow ::1;
deny all;
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
#tools are now served from wt-nmp/include/tools/
location ~ ^/tools/.*\.php$ {
root "c:/wt-nmp/include";
try_files $uri =404;
include nginx.fastcgi.conf;
fastcgi_pass php_farm;
}
location ~ ^/tools/ {
root "c:/wt-nmp/include";
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass php_farm;
include nginx.fastcgi.conf;
}
}
include domains.d/*.conf;
include nginx.phpfarm.conf;
}
When I access the site as "mylocalhost" it works fine, but when I fire an event that makes an AJAX call, I get a "page not found" message.
The README.md of WT-NMP (the portable Nginx MySQL PHP development stack for Windows) states:
Starting only one PHP-CGI server with wt-nmp.exe --phpCgiServers=1 will result in slow ajax requests since Nginx will not be able to process PHP scripts simultaneous.
So, make sure you use the latest version of WT-NMP and choose at least 3 PHP-CGI servers.
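For context, the location blocks above pass to fastcgi_pass php_farm, which nginx.phpfarm.conf defines as an upstream pool of PHP-CGI listeners. With three servers it presumably looks roughly like this (the ports are an assumption, not copied from the real WT-NMP file):
upstream php_farm {
    # one entry per PHP-CGI process started by WT-NMP (ports assumed)
    server 127.0.0.1:9100;
    server 127.0.0.1:9101;
    server 127.0.0.1:9102;
}
With only one entry in the pool, every AJAX request has to wait for the single PHP-CGI process to become free, which matches the symptom described.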

Nginx + PHP-FPM Using CPU and Not RAM

I am doing load testing on an nginx server and I am having an issue where my CPU hits 100% but only 50% of my RAM is being utilized. The server is this:
2 vCPU
2 GB of RAM
40GB SSD Drive
Rackspace High Performance Server
This is my Nginx Config
worker_processes 2;
error_log /var/log/nginx/error.log crit;
pid /var/run/nginx.pid;
events {
worker_connections 1524;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
access_log off;
# Sendfile copies data between one FD and other from within the kernel.
# More efficient than read() + write(), since that requires transferring data to and from user space.
sendfile on;
# Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
# instead of using partial frames. This is useful for prepending headers before calling sendfile,
# or for throughput optimization.
tcp_nopush on;
# don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
tcp_nodelay on;
# allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
reset_timedout_connection on;
#keepalive_timeout 0;
keepalive_timeout 65;
# send the client a "request timed out" if the body is not loaded by this time. Default 60.
client_body_timeout 10;
# If the client stops reading data, free up the stale client connection after this much time. Default 60.
send_timeout 2;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
gzip on;
server_tokens off;
client_max_body_size 20m;
client_body_buffer_size 128k;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
# Load config files from the /etc/nginx/conf.d directory
# The default server is in conf.d/default.conf
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Kernel additions in /etc/sysctl.conf
# Increase system IP port limits to allow for more connections
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_window_scaling = 1
# number of packets to keep in backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000
# increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000
# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic
Example VHost Config with PHP-FPM
server {
listen 80;
server_name www.example.com;
location / {
root /data/sites/example.com/public_html;
index index.php index.html index.htm;
try_files $uri $uri/ /index.php?rt=$uri&$args;
}
location ~ \.php {
root /data/sites/example.com/public_html;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param ENV production;
include fastcgi_params;
}
}
The server can handle about 60 active SBU connections clicking around, or about 300 requests per second. Is the fact that it is maxing out CPU while not fully utilizing RAM a bad thing? Can I optimize this further?
