I have two apps: A, running in Docker in the cloud, and B, on a dedicated server. Both are written in PHP and served by nginx. App B sends PUT requests with fairly large payloads (around 1 MB). The problem is that what reaches PHP in app A is truncated to approximately 8 KB (8209 bytes, to be exact). This causes json_decode to fail on the request body, and the whole request fails.
I have been googling and checking configs for quite a while now but cannot find the issue.
This is my nginx.conf for app A (running in docker in cloud):
worker_processes auto;
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx.error.log notice;
error_log /var/log/nginx.error.log info;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 0;
    client_header_timeout 60;
    client_body_timeout 60;
    client_body_buffer_size 10m;
    client_max_body_size 100m;
    server_tokens off;
    reset_timedout_connection on;
    send_timeout 60;
    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;
    large_client_header_buffers 4 16k;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;

    # cache information about file descriptors, frequently accessed files;
    # can boost performance, but you need to test those values
    open_file_cache max=65000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent $request_time "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log timed;

    upstream backend {
        server 127.0.0.1:9000;
    }

    include conf.d/*;
}
This is my site's conf:
server {
    listen 80 default_server;
    root /var/www/appB/public;
    index index.php;

    location = /.well-known/schema-discovery {
        add_header Content-Type application/json;
        return 200 '{}';
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~* \.(gif|jpg|png|js)$ {
        expires 30d;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        include /etc/nginx/fastcgi_params;
    }
}
This is the www.conf (for php-fpm):
[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 30
pm.min_spare_servers = 10
pm.max_spare_servers = 50
pm.process_idle_timeout = 60s
pm.max_requests = 5000
pm.status_path = /status
ping.path = /ping
slowlog = /var/log/fpm/slow.log
request_slowlog_timeout = 60s
request_terminate_timeout = 300s
catch_workers_output = yes
access.log = /var/log/fpm/access.log
php_flag[display_errors] = off
php_flag[html_errors] = off
php_admin_value[error_log] = /var/log/fpm/php_error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 1024M
php_admin_value[upload_max_filesize] = 100M
php_admin_value[post_max_size] = 100M
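To rule out app B as the culprit, the request can be replayed with plain curl (the payload file and endpoint path below are placeholders, not our real ones):

# send a ~1 MB JSON body via PUT, exactly as app B would
curl -X PUT \
     -H "Content-Type: application/json" \
     --data-binary @payload-1mb.json \
     http://app-a.example.com/api/import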
According to the logs, neither nginx nor php-fpm complains about anything (no errors in either log).
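One more data point: 8209 bytes is suspiciously close to 8192, PHP's default stream chunk size, and a single fread() on php://input can return as little as one chunk. A minimal sketch of the difference (hypothetical code, not necessarily what app A does):

<?php
// Fragile: one fread() on a network-backed stream may return
// as little as one ~8 KB chunk, even if more data is pending.
$fp = fopen('php://input', 'r');
$body = fread($fp, 1048576); // can stop well short of the full payload
fclose($fp);

// Robust: let PHP read the stream to EOF.
$body = file_get_contents('php://input');
$data = json_decode($body, true);
if ($data === null && json_last_error() !== JSON_ERROR_NONE) {
    error_log('json_decode failed: ' . json_last_error_msg());
}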
Does anybody have an idea what might be wrong?
Thanks a lot in advance!
I'm new to all of this, but I can't keep my newly spun-up EC2 micro server (running WordPress) up and running. With logging set to debug, the PHP-FPM log only has this:
[17-Oct-2016 15:46:38] NOTICE: configuration file /etc/php5/fpm/php-fpm.conf test is successful
My nginx log is continuously filling with errors about connecting to php5-fpm.sock (hundreds of entries per minute, even though no one else is accessing the site):
2016/10/17 16:32:16 [error] 26389#0: *7298 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 191.96.249.80, server: mysiteredacted.com, request: "POST /xmlrpc.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "removed"
After restarting nginx and PHP-FPM the site works for a few minutes before throwing 502 Bad Gateway errors until I restart them both again.
I don't know where to begin with this. Here is my nginx config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    port_in_redirect off;

    gzip on;
    gzip_types text/css text/xml text/javascript application/x-javascript;
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}
It also includes this file from the conf.d folder:
server {
    ## Your website name goes here.
    server_name mysiteredacted.com www.mysiteredacted.com;
    ## Your only path reference.
    root /var/www/;
    listen 80;
    ## This should be in your http block and if it is, it's not needed here.
    index index.html index.htm index.php;
    include conf.d/drop;

    location / {
        # This is cool because no php is touched for static content
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_buffers 8 256k;
        fastcgi_buffer_size 128k;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }
}
The second file has this line:
fastcgi_pass unix:/var/run/php5-fpm.sock;
If that file does not exist, nginx will throw this error.
Check this previous question: How to find my php-fpm.sock?
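A quick way to see which socket (or TCP address) PHP-FPM is actually configured to listen on is to grep the pool configs (the path assumes the stock php5 layout):

grep -r "^listen" /etc/php5/fpm/pool.d/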
After hours of searching I finally figured it out. It turns out it was some sort of brute-force attack on /xmlrpc.php, as indicated by the thousands of "POST /xmlrpc.php HTTP/1.0" requests.
It's a common WordPress attack. Thanks all.
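If anyone else runs into this: a common mitigation is to block xmlrpc.php at the nginx level, inside the server block (a sketch; skip it if you rely on pingbacks, Jetpack, or the WordPress mobile apps):

# drop the brute-force target before it ever reaches PHP
location = /xmlrpc.php {
    deny all;
    access_log off;
    log_not_found off;
}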
AntiDDOS slowing server
Hello,
I just migrated from Apache 2.4 to nginx. The server OS is FreeBSD 10.3 (amd64) with a custom kernel. I have one strange problem: when I uncomment this line in my nginx config:
limit_req zone=antiddosphp burst=5;
the WordPress dashboard takes more than 2 s longer to load than with the option disabled. Where could the problem be? Almost every page takes more time to load with this option (or I don't know how to set it correctly)...
My second question is about the right performance settings in the config file. My VPS is:
1x 2 GHz CPU + 2 GB RAM + 15 GB SSD
Free memory after startup is about 1700 MB. Do I have the right settings for nginx? I also have Postfix, Dovecot, MariaDB, and PHP-FPM installed. MariaDB takes about 200 MB of RAM and the MTA about 150 MB, so I have about 1300 MB free for the web server.
My nginx conf file:
load_module /usr/local/libexec/nginx/ngx_mail_module.so;
load_module /usr/local/libexec/nginx/ngx_stream_module.so;

user www;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=antiddos:10m rate=10r/s;
    limit_req zone=antiddosphp burst=5;

    server_tokens off;
    tcp_nopush on;
    tcp_nodelay on;
    sendfile on;

    fastcgi_connect_timeout 100;
    fastcgi_send_timeout 100;
    fastcgi_read_timeout 100;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_intercept_errors on;

    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 9;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    open_file_cache max=2000 inactive=60s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    client_body_buffer_size 8k;
    client_header_buffer_size 16k;
    client_max_body_size 20m;
    client_body_timeout 10;
    client_header_timeout 10;
    large_client_header_buffers 4 32k;

    keepalive_timeout 15;
    send_timeout 10;
    keepalive_requests 1000;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/local/www/nginx;
            index index.html index.htm;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/local/www/nginx-dist;
        }
    }

    server {
        server_name test.test.cz;
        #limit_req zone=antiddos burst=60;
        #limit_req zone=antiddosphp burst=2;
        access_log /var/log/example.com.access.log;
        error_log /var/log/example.com.error.log;
        root /usr/local/www/domains/test-cz/webserver/test;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

        location ~ /\.ht {
            deny all;
        }
    }
}
PHP-FPM configuration:
user = www
group = www
pm = dynamic
pm.start_servers = 3
pm.max_children = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 3
pm.max_requests = 200
request_terminate_timeout = 10
request_slowlog_timeout = 0
slowlog = log/$pool.log.slow
catch_workers_output = yes
Thank you all for any reply.
If you are loading a website, you are not loading only the page itself but its assets as well, and nginx treats each of those requests independently. You have 10 r/s defined and a burst size of 5. Therefore, beyond 10 requests per second the next requests are delayed for rate-limiting purposes, and once the burst size (5) is exceeded the following requests receive a 503 error.
If the requests rate exceeds the rate configured for a zone, their processing is delayed such that requests are processed at a defined rate. Excessive requests are delayed until their number exceeds the maximum burst size in which case the request is terminated with an error 503 (Service Temporarily Unavailable).
(Source: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html)
Answer: raise the r/s a bit and the pages should come out a bit faster.
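For example, a sketch reusing your zone name (the numbers are starting points to tune, not recommendations):

limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=30r/s;

location ~ \.php$ {
    # nodelay serves bursts immediately instead of queuing them,
    # which removes the artificial latency on pages with many subrequests
    limit_req zone=antiddosphp burst=20 nodelay;
    ...
}

Applying the limit only inside the PHP location (rather than globally in the http block) also keeps static assets from eating into the rate.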
For the second question: I can't give a definitive answer; look at your load and determine whether you have to tweak these values.
Usually the error log is a good start :)
We run a few high-volume websites which together generate around 5 million pageviews per day. We have the most overkill servers, as we anticipate growth, but we are getting reports from a few active users that the site is sometimes slow on the first pageview. I've seen this myself every once in a while: the first pageview takes 3-5 seconds, then it's instant for the rest of the day. This has happened to me maybe twice in the last 24 hours, so not often enough to figure out what's happening. Every page on our site uses PHP, but one of the times it happened to me it was on a PHP page that doesn't make any database calls, which makes me think the issue is limited to nginx, PHP-FPM, or network settings.
We have 3 nginx servers running behind a load balancer. Our database is on a separate cluster. I've included our configuration files for nginx and php-fpm, as well as our current RAM usage and PHP-FPM status, captured in the middle of the day (average traffic for us). Please take a look and let me know if you see any red flags in my setup or have any suggestions for further optimization.
Specs for each NGINX Server:
OS: CentOS 7
RAM: 128GB
CPU: 32 cores (2.4 GHz each)
Drives: 2xSSD on RAID 1
RAM Usage (free -g)
               total        used        free      shared  buff/cache   available
Mem:             125          15          10           3         100         103
Swap:             15           0          15
PHP-FPM status (i.e. http://server1_ip/status):
pool: www
process manager: dynamic
start time: 03/Mar/2016:03:42:49 -0800
start since: 1171262
accepted conn: 69827961
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 1670
active processes: 1
total processes: 1671
max active processes: 440
max children reached: 0
slow requests: 0
php-fpm config file:
[www]
user = nginx
group = nginx
listen = /var/opt/remi/php70/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 6000
pm.start_servers = 1600
pm.min_spare_servers = 1500
pm.max_spare_servers = 2000
pm.max_requests = 1000
pm.status_path = /status
slowlog = /var/opt/remi/php70/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/opt/remi/php70/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/opt/remi/php70/lib/php/session
php_value[soap.wsdl_cache_dir] = /var/opt/remi/php70/lib/php/wsdlcache
nginx config file:
user nginx;
worker_processes 32;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1000;
    multi_accept on;
    use epoll;
}

http {
    log_format main '$remote_addr - $remote_user [$time_iso8601] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10 10;
    send_timeout 60;
    types_hash_max_size 2048;

    client_max_body_size 50M;
    client_body_buffer_size 5m;
    client_body_timeout 60;
    client_header_timeout 60;

    fastcgi_buffers 256 16k;
    fastcgi_buffer_size 128k;
    fastcgi_connect_timeout 60s;
    fastcgi_send_timeout 60s;
    fastcgi_read_timeout 60s;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    reset_timedout_connection on;
    server_names_hash_bucket_size 100;

    # compression
    gzip on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name domain1.com;
        root /folderpath;

        location / {
            index index.php;
        }

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt { access_log off; log_not_found off; }

        # server status
        location /server-status {
            stub_status on;
            access_log off;
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }

        location = /status {
            access_log off;
            allow 127.0.0.1;
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
            fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
UPDATE:
I installed opcache as per the suggestion below. Not sure yet whether it fixes the issue. Here are my settings:
opcache.enable=1
opcache.memory_consumption=1024
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=32531
opcache.max_wasted_percentage=10
2 minor tips:
if you use opcache, monitor it to check that its configuration (especially memory size) is OK and to avoid OOM resets; you can use https://github.com/rlerdorf/opcache-status (a single PHP page), or the zero-install sketch after these tips
increase pm.max_requests to keep reusing the same processes
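For a quick check without installing anything, a small page like this (a sketch built on the standard opcache_get_status() API) shows whether the opcache.memory_consumption setting above is holding up:

<?php
// opcache-check.php - request it through the web server,
// since opcache is typically disabled for the CLI SAPI.
$s = opcache_get_status(false); // false = skip per-script details
printf("memory: %.1f MB used, %.1f%% wasted\n",
    $s['memory_usage']['used_memory'] / 1048576,
    $s['memory_usage']['current_wasted_percentage']);
printf("hit rate: %.2f%%\n",
    $s['opcache_statistics']['opcache_hit_rate']);
printf("cached scripts: %d (keys: %d of %d)\n",
    $s['opcache_statistics']['num_cached_scripts'],
    $s['opcache_statistics']['num_cached_keys'],
    $s['opcache_statistics']['max_cached_keys']);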
I've moved my server from apache2+fcgi to nginx+fpm because I wanted a lighter environment and Apache's memory footprint was high. The server is a dual core (I know, not very much) with 8 GB of RAM. It also runs a rather busy FreeRADIUS server and its related MySQL. The CPU load average is ~1, with some obvious peaks.
One of those peaks happens every 30 minutes, when I get web pings from some monitored devices. With Apache, the server load would spike a lot, slowing everything down. Now with nginx the process is much faster (I also did some optimization in the code), though now I lose some of these connections. I configured both nginx and fpm to what I believe should be enough, but I must be missing something, because in these moments PHP is (apparently) unable to reply to nginx. This is a recap of the config:
nginx/1.8.1
user www-data;
worker_processes auto;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 20m;
large_client_header_buffers 2 1k;

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    set $fsn /$yii_bootstrap;
    if (-f $document_root$fastcgi_script_name){
        set $fsn $fastcgi_script_name;
    }
    fastcgi_pass 127.0.0.1:9011;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fsn;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param PATH_TRANSLATED $document_root$fsn;
    fastcgi_read_timeout 150s;
}
php5-fpm 5.4.45-1~dotdeb+6.1
[pool01]
listen = 127.0.0.1:9011
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 2000
pm.process_idle_timeout = 10s
When the peak arrives, I start seeing this in the fpm logs:
[18-Feb-2016 11:30:04] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 13 total children
[18-Feb-2016 11:30:05] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 15 total children
[18-Feb-2016 11:30:06] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 17 total children
[18-Feb-2016 11:30:07] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 19 total children
and, worse, in nginx's error.log:
2016/02/18 11:30:22 [error] 23400#23400: *209920 connect() failed (110: Connection timed out) while connecting to upstream, client: 79.1.1.9, server: host.domain.com, request: "GET /ping/?whoami=abc02 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209923 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.1.9.71, server: host.domain.com, request: "GET /utilz/pingme.php?whoami=abc01 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209925 connect() failed (110: Connection timed out) while connecting to upstream, client: 3.7.0.4, server: host.domain.com, request: "GET /ping/?whoami=abc03 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209926 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.7.2.1, server: host.domain.com, request: "GET /ping/?whoami=abc04 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
Those connections are lost!
First question: why does nginx return a timeout within 22 seconds (the pings are made at minutes 00 and 30 of every hour) if fastcgi_read_timeout is set to 150 s?
Second question: why do I get so many fpm warnings? The total number of children displayed never reaches pm.max_children. I know warnings are not errors, but I am being warned... Is there a relation between those messages and nginx's timeouts?
Given that the server handles the regular traffic perfectly fine, and has no problem with RAM or swap during these peaks either (it always has ~1.5 GB or more free), is there better tuning to handle those ping connections (that doesn't involve changing the schedule)? Should I raise pm.start_servers and/or pm.min_spare_servers?
You need some changes, and I would recommend upgrading your PHP to 5.6.
nginx tuning: /etc/nginx/nginx.conf
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log crit;

# NOTE: Max simultaneous requests = worker_processes*worker_connections/(keepalive_timeout*2)
worker_processes 1;
worker_rlimit_nofile 750000;

# handles connection stuff
events {
    worker_connections 50000;
    multi_accept on;
    use epoll;
}

# http request stuff
http {
    access_log off;
    log_format main '$remote_addr $host $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_cipher $request_time';

    types_hash_max_size 2048;
    server_tokens off;

    fastcgi_read_timeout 180;
    keepalive_timeout 20;
    keepalive_requests 1000;
    reset_timedout_connection on;
    client_body_timeout 20;
    client_header_timeout 10;
    send_timeout 10;

    tcp_nodelay on;
    tcp_nopush on;
    sendfile on;
    directio 100m;
    client_max_body_size 100m;
    server_names_hash_bucket_size 100;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # index default files
    index index.html index.htm index.php;

    # use hhvm with php-fpm as backup
    upstream php {
        keepalive 30;
        server 127.0.0.1:9001; # php5-fpm (check your port)
    }

    # Virtual Host Configs
    include /etc/nginx/sites-available/*;
}
For a default server, create /etc/nginx/sites-available/default.conf and add:
# default virtual host
server {
    listen 80;
    server_name localhost;
    root /path/to/your/files;
    access_log off;
    log_not_found off;

    # handle static files first
    location / {
        index index.html index.htm index.php;
    }

    # serve static content directly by nginx without logs
    location ~* \.(jpg|jpeg|gif|png|bmp|css|js|ico|txt|pdf|swf|flv|mp4|mp3)$ {
        access_log off;
        log_not_found off;
        expires 7d;
        # Enable gzip for some static content only
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
    }

    # no cache for xml files
    location ~* \.(xml)$ {
        access_log off;
        log_not_found off;
        expires 0s;
        add_header Pragma no-cache;
        add_header Cache-Control "no-cache, no-store, must-revalidate, post-check=0, pre-check=0";
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_types text/plain text/xml application/xml application/rss+xml;
    }

    # run php only when needed
    location ~ \.php$ {
        # basic php params
        fastcgi_pass php;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
        fastcgi_connect_timeout 20s;
        fastcgi_send_timeout 30s;
        fastcgi_read_timeout 30s;

        # fastcgi params
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
    }
}
Ideally, you want php5-fpm to restart automatically if it starts failing, so you can add this to /etc/php5/fpm/php-fpm.conf:
emergency_restart_threshold = 60
emergency_restart_interval = 1m
process_control_timeout = 10s
Change /etc/php5/fpm/pool.d/www.conf:
[www]
user = www-data
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
listen.backlog = 65000
pm = dynamic
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
; max number of simultaneous requests that will be served (if each PHP page needs 32 MB, then 128 x 32 = 4 GB RAM)
pm.max_children = 128
; keep this high (10k to 50k) to avoid constant respawning; however, if the PHP code leaks memory, we will have a problem
pm.max_requests = 10000
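As a sanity check for pm.max_children, you can measure the real average per-worker memory instead of assuming 32 MB per page (a sketch; assumes Linux procps and worker processes named php5-fpm):

# average resident memory per php5-fpm worker, in MB
ps -o rss= -C php5-fpm | awk '{ s += $1; n++ } END { if (n) printf "%d workers, avg %.1f MB\n", n, s/n/1024 }'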