We are performing a load test with Locust (1,000 users) against one page of our application.
Instance type: t3a.medium
The instance runs behind a load balancer, and we use an RDS Aurora database that peaks at around 70% CPU utilization. EC2 instance metrics are healthy. EDIT: instance memory consumption stays within 800 MB of the available 4 GB.
We are seeing multiple 502 Bad Gateway errors, and sometimes 500 and 520 errors as well.
Error 1:
2020/10/08 16:58:21 [error] 4344#4344: *41841 connect() to unix:/var/run/php/php7.2-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: <PublicIP>, server: <Domain name>, request: "GET <webpage> HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "<Domain name>"
Error 2 (alert):
2020/10/08 19:15:11 [alert] 9109#9109: *105735 socket() failed (24: Too many open files) while connecting to upstream, client: <PublicIP>, server: <Domain name>, request: "GET <webpage> HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "<Domain name>"
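For reference: "(11: Resource temporarily unavailable)" on a unix socket usually means php-fpm's accept queue (the listen backlog) is full, while "(24: Too many open files)" means the nginx worker hit its file-descriptor limit. The knobs involved look roughly like this (illustrative values, not taken from the post):

# /etc/nginx/nginx.conf — raise the per-worker file-descriptor cap
worker_rlimit_nofile 30000;

; /etc/php/7.2/fpm/pool.d/www.conf — deepen the socket accept queue
; (the effective value is capped by the kernel's net.core.somaxconn)
listen.backlog = 4096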
The relevant configuration files are listed below.
Nginx Configuration
server {
    listen 80;
    listen [::]:80;

    root /var/www/####;
    index index.php;

    access_log /var/log/nginx/###access.log;
    error_log /var/log/nginx/####error.log;

    server_name #####;
    client_max_body_size 100M;
    autoindex off;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }
}
/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 8096;
    multi_accept on;
    use epoll;
    epoll_events 512;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_types text/xml text/css;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_disable "MSIE [4-6] \.";

    include /etc/nginx/conf.d/*.conf;
}
/etc/php/7.2/fpm/php-fpm.conf
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
Important php-fpm pool parameters:
user = www-data
group = www-data
listen = /run/php/php7.2-fpm.sock
listen.owner = www-data
listen.group = www-data
;listen.mode = 0660
pm = static
pm.max_children = 300
/etc/security/limits.conf
nginx soft nofile 30000
nginx hard nofile 50000
/etc/sysctl.conf
net.nf_conntrack_max = 131072
net.core.somaxconn = 131072
net.core.netdev_max_backlog = 65535
kernel.msgmnb = 131072
kernel.msgmax = 131072
fs.file-max = 131072
What are we missing? Can anyone point us in the right direction?
So we were able to resolve this issue. The problem was that php-fpm was not allowed enough system resources (open-file limits). You may need to change the values below according to your hardware specifications.
So, our final configuration looks like this:
In /etc/security/limits.conf, add the following lines:
nginx soft nofile 10000
nginx hard nofile 30000
root soft nofile 10000
root hard nofile 30000
www-data soft nofile 10000
www-data hard nofile 30000
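One caveat (an assumption about the setup, since the post doesn't say how the services are launched): /etc/security/limits.conf is applied through PAM at login, so daemons started by systemd may ignore it. For those, an equivalent unit override would look roughly like this:

# /etc/systemd/system/php7.2-fpm.service.d/limits.conf (illustrative path)
[Service]
LimitNOFILE=30000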
In /etc/sysctl.conf, add the following values:
net.nf_conntrack_max = 231072
net.core.somaxconn = 231072
net.core.netdev_max_backlog = 65535
kernel.msgmnb = 231072
kernel.msgmax = 231072
fs.file-max = 70000
In /etc/nginx/nginx.conf, change or add the following values (adjust them according to your use case and server capacity):
worker_processes auto;
worker_rlimit_nofile 30000;
events {
    worker_connections 8096;
    multi_accept on;
    use epoll;
    epoll_events 512;
}
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_types text/xml text/css;
gzip_http_version 1.1;
gzip_vary on;
gzip_disable "MSIE [4-6] \.";
In /etc/php/7.2/fpm/php-fpm.conf, change the values to look like this:
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
rlimit_files = 10000
In /etc/php/7.2/fpm/pool.d/www.conf, change the values to look like this:
user = www-data
group = www-data
listen.backlog = 4096
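; note: the effective backlog is capped by the kernel's net.core.somaxconn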
listen.owner = www-data
listen.group = www-data
;listen.mode = 0660
pm = static
pm.max_children = 1000
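Whatever value you choose for pm.max_children, it is worth sanity-checking against available memory. A rough sketch (all numbers are assumptions; measure your own worker RSS under load):

; rough capacity math for pm = static
;   average php-fpm worker RSS ≈ 30 MB     (assumption — measure it)
;   RAM available to PHP       ≈ 3000 MB   (4 GB instance minus OS + nginx)
;   resident-worker ceiling    ≈ 3000 / 30 = 100
; a pm.max_children far above this ceiling is only safe if workers are much lighter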
I have two apps, both written in PHP and served by nginx: app A runs in Docker in the cloud, and app B runs on a dedicated server. App B sends PUT requests with fairly large payloads (think 1 MB) to app A. The problem is that the body that reaches PHP in app A is truncated to approximately 8 KB (8,209 bytes to be exact). This causes json_decode to fail on the request body, and the whole request fails.
I have been googling and checking configs for quite a while now but cannot find the issue.
This is my nginx.conf for app A (running in Docker in the cloud):
worker_processes auto;
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx.error.log notice;
error_log /var/log/nginx.error.log info;
events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 0;
    client_header_timeout 60;
    client_body_timeout 60;
    client_body_buffer_size 10m;
    client_max_body_size 100m;

    server_tokens off;
    reset_timedout_connection on;
    send_timeout 60;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    large_client_header_buffers 4 16k;

    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
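
    # note: the fastcgi_buffer* directives above size nginx's buffers for the
    # response coming back from the FastCGI upstream; they do not limit the
    # request body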
    # cache information about file descriptors and frequently accessed files;
    # can boost performance, but you need to test these values
    open_file_cache max=65000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent $request_time "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log timed;

    upstream backend {
        server 127.0.0.1:9000;
    }

    include conf.d/*;
}
This is my site's conf:
server {
    listen 80 default_server;
    root /var/www/appB/public;
    index index.php;

    location = /.well-known/schema-discovery {
        add_header Content-Type application/json;
        return 200 '{}';
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~* \.(gif|jpg|png|js)$ {
        expires 30d;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        include /etc/nginx/fastcgi_params;
    }
}
This is the www.conf (for php-fpm):
[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 30
pm.min_spare_servers = 10
pm.max_spare_servers = 50
pm.process_idle_timeout = 60s
pm.max_requests = 5000
pm.status_path = /status
ping.path = /ping
slowlog = /var/log/fpm/slow.log
request_slowlog_timeout = 60s
request_terminate_timeout = 300s
catch_workers_output = yes
access.log = /var/log/fpm/access.log
php_flag[display_errors] = off
php_flag[html_errors] = off
php_admin_value[error_log] = /var/log/fpm/php_error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 1024M
php_admin_value[upload_max_filesize] = 100M
php_admin_value[post_max_size] = 100M
According to the logs, neither nginx nor php-fpm complains about anything (no errors in the logs).
Does anybody have an idea what might be wrong?
Thanks a lot in advance!
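One way to localize the truncation (a diagnostic sketch, not part of the original setup) is to log what PHP actually received against the declared Content-Length, early in app A's front controller:

<?php
// Diagnostic sketch: compare the declared body size with what PHP received.
// If CONTENT_LENGTH reports ~1 MB while strlen() reports ~8 KB, the body is
// being lost between the client and PHP's input stream; if CONTENT_LENGTH
// itself is ~8 KB, the sender is truncating before the request leaves app B.
$body = file_get_contents('php://input');
error_log(sprintf(
    'PUT body check: CONTENT_LENGTH=%s, received=%d bytes',
    isset($_SERVER['CONTENT_LENGTH']) ? $_SERVER['CONTENT_LENGTH'] : 'n/a',
    strlen($body)
));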
AntiDDOS slowing server
Hello,
I just migrated from Apache 2.4 to nginx. The server OS is FreeBSD 10.3 (amd64) with a custom kernel. I have one strange problem: when I uncomment this line in the nginx config:
limit_req zone=antiddosphp burst=5;
the WordPress dashboard takes more than 2 s longer to load than with this option disabled. Where could the problem be? Almost every page takes more time to load with this option enabled (or I don't know how to set it correctly)...
My second question is about the right performance settings in the config file. My VPS is:
1x 2 GHz CPU + 2 GB RAM + 15 GB SSD
Free memory after startup is about 1700 MB. Do I have the right settings for nginx? I also run Postfix, Dovecot, MariaDB and php-fpm. MariaDB takes about 200 MB of RAM and the MTA about 150 MB, so I have about 1300 MB free for the web server.
My nginx conf file:
load_module /usr/local/libexec/nginx/ngx_mail_module.so;
load_module /usr/local/libexec/nginx/ngx_stream_module.so;

user www;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=antiddos:10m rate=10r/s;
    limit_req zone=antiddosphp burst=5;

    server_tokens off;
    tcp_nopush on;
    tcp_nodelay on;
    sendfile on;

    fastcgi_connect_timeout 100;
    fastcgi_send_timeout 100;
    fastcgi_read_timeout 100;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_intercept_errors on;

    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 9;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    open_file_cache max=2000 inactive=60s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    client_body_buffer_size 8k;
    client_header_buffer_size 16k;
    client_max_body_size 20m;
    client_body_timeout 10;
    client_header_timeout 10;
    large_client_header_buffers 4 32k;

    keepalive_timeout 15;
    send_timeout 10;
    keepalive_requests 1000;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/local/www/nginx;
            index index.html index.htm;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/local/www/nginx-dist;
        }
    }

    server {
        server_name test.test.cz;
        #limit_req zone=antiddos burst=60;
        #limit_req zone=antiddosphp burst=2;

        access_log /var/log/example.com.access.log;
        error_log /var/log/example.com.error.log;

        root /usr/local/www/domains/test-cz/webserver/test;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

        location ~ /\.ht {
            deny all;
        }
    }
}
PHP-FPM configuration:
user = www
group = www
pm = dynamic
pm.start_servers = 3
pm.max_children = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 3
pm.max_requests = 200
request_terminate_timeout = 10
request_slowlog_timeout = 0
slowlog = log/$pool.log.slow
catch_workers_output = yes
Thank you all for any reply.
If you are loading a website, you are not loading only that page but its assets as well, and nginx treats each of those requests as independent. You have 10 r/s defined and a burst size of 5: beyond 10 requests per second, subsequent requests are delayed for rate-limiting purposes, and once the burst size (5) is exceeded, the following requests receive a 503 error.
If the requests rate exceeds the rate configured for a zone, their processing is delayed such that requests are processed at a defined rate. Excessive requests are delayed until their number exceeds the maximum burst size in which case the request is terminated with an error 503 (Service Temporarily Unavailable).
(Source: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html)
Answer: Up the r/s a bit and the pages should flow out a bit faster.
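Concretely, something along these lines (rate and burst are illustrative values to tune against your traffic; nodelay lets queued requests through without the artificial pacing delay):

limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=30r/s;

location ~ \.php$ {
    limit_req zone=antiddosphp burst=20 nodelay;
    # existing try_files / fastcgi_pass configuration stays as-is
}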
For the second question: I can't give a definite answer; look at your load and determine whether you have to tweak these values.
Usually the error log is a good start :)
I've moved my server from apache2+fcgi to nginx+fpm because I wanted a lighter environment; Apache's memory footprint was high. The server is a dual core (I know, not very much) with 8 GB of RAM. It also runs a rather busy FreeRADIUS server and its MySQL database. The CPU load average is ~1, with some obvious peaks.
One of those peaks happens every 30 minutes, when I receive web pings from some controlled devices. With Apache the server load spiked a lot, slowing everything down. Now with nginx the processing is much faster (I also did some optimization in the code), though now I miss some of these connections. I configured both nginx and fpm to what I believe should be enough, but I must be missing something, because at these moments PHP apparently isn't able to reply to nginx. This is a recap of the config:
nginx/1.8.1
user www-data;
worker_processes auto;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
    # multi_accept on;
}
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 20m;
large_client_header_buffers 2 1k;
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    set $fsn /$yii_bootstrap;
    if (-f $document_root$fastcgi_script_name) {
        set $fsn $fastcgi_script_name;
    }
    fastcgi_pass 127.0.0.1:9011;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fsn;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param PATH_TRANSLATED $document_root$fsn;
    fastcgi_read_timeout 150s;
}
php5-fpm 5.4.45-1~dotdeb+6.1
[pool01]
listen = 127.0.0.1:9011
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 150
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 2000
pm.process_idle_timeout = 10s
When the peak arrives I start seeing this in fpm logs:
[18-Feb-2016 11:30:04] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 13 total children
[18-Feb-2016 11:30:05] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 15 total children
[18-Feb-2016 11:30:06] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 17 total children
[18-Feb-2016 11:30:07] WARNING: [pool pool01] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 19 total children
and worse in nginx's error.log
2016/02/18 11:30:22 [error] 23400#23400: *209920 connect() failed (110: Connection timed out) while connecting to upstream, client: 79.1.1.9, server: host.domain.com, request: "GET /ping/?whoami=abc02 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209923 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.1.9.71, server: host.domain.com, request: "GET /utilz/pingme.php?whoami=abc01 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209925 connect() failed (110: Connection timed out) while connecting to upstream, client: 3.7.0.4, server: host.domain.com, request: "GET /ping/?whoami=abc03 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
2016/02/18 11:30:22 [error] 23400#23400: *209926 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.7.2.1, server: host.domain.com, request: "GET /ping/?whoami=abc04 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9011", host: "host.domain.com"
Those connections are lost!
First question: why does nginx return a timeout within 22 s (the pings are made at minutes 00 and 30 of every hour) if fastcgi_read_timeout is set to 150 s?
Second question: why do I get so many fpm warnings? The total number of children displayed never reaches pm.max_children. I know warnings are not errors, but I am being warned... Is there a relation between those messages and nginx's timeouts?
Given that the server handles regular traffic perfectly well and has no problem with RAM or swap even during these peaks (it always has ~1.5 GB or more free), is there a better tuning to handle those ping connections (one that doesn't involve changing the schedule)? Should I raise pm.start_servers and/or pm.min_spare_servers?
You need some changes, and I would recommend upgrading your PHP to 5.6.
Nginx tuning: /etc/nginx/nginx.conf
user www-data;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log crit;

# NOTE: max simultaneous requests = worker_processes*worker_connections/(keepalive_timeout*2)
worker_processes 1;
worker_rlimit_nofile 750000;

# handles connection stuff
events {
    worker_connections 50000;
    multi_accept on;
    use epoll;
}
# http request stuff
http {
    access_log off;
    log_format main '$remote_addr $host $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_cipher $request_time';

    types_hash_max_size 2048;
    server_tokens off;

    fastcgi_read_timeout 180;

    keepalive_timeout 20;
    keepalive_requests 1000;
    reset_timedout_connection on;
    client_body_timeout 20;
    client_header_timeout 10;
    send_timeout 10;

    tcp_nodelay on;
    tcp_nopush on;
    sendfile on;
    directio 100m;

    client_max_body_size 100m;
    server_names_hash_bucket_size 100;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # index default files
    index index.html index.htm index.php;

    # upstream to php5-fpm; keepalive here requires fastcgi_keep_conn on in the PHP location
    upstream php {
        keepalive 30;
        server 127.0.0.1:9001; # php5-fpm (check your port)
    }

    # Virtual Host Configs
    include /etc/nginx/sites-available/*;
}
For a default server, create /etc/nginx/sites-available/default.conf and add:
# default virtual host
server {
    listen 80;
    server_name localhost;
    root /path/to/your/files;

    access_log off;
    log_not_found off;

    # handle static files first
    location / {
        index index.html index.htm index.php;
    }

    # serve static content directly by nginx without logs
    location ~* \.(jpg|jpeg|gif|png|bmp|css|js|ico|txt|pdf|swf|flv|mp4|mp3)$ {
        access_log off;
        log_not_found off;
        expires 7d;

        # enable gzip for some static content only
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
    }

    # no cache for xml files
    location ~* \.(xml)$ {
        access_log off;
        log_not_found off;
        expires 0s;
        add_header Pragma no-cache;
        add_header Cache-Control "no-cache, no-store, must-revalidate, post-check=0, pre-check=0";

        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_types text/plain text/xml application/xml application/rss+xml;
    }

    # run php only when needed
    location ~ \.php$ {
        # basic php params
        fastcgi_pass php;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
        fastcgi_connect_timeout 20s;
        fastcgi_send_timeout 30s;
        fastcgi_read_timeout 30s;

        # fastcgi params
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
    }
}
Ideally, you want php5-fpm to restart itself automatically if it starts failing, so you can add this to /etc/php5/fpm/php-fpm.conf:
emergency_restart_threshold = 60
emergency_restart_interval = 1m
process_control_timeout = 10s
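; i.e. if 60 children exit with SIGSEGV or SIGBUS within one minute,
; the fpm master will restart itself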
Change /etc/php5/fpm/pool.d/www.conf
[www]
user = www-data
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
listen.backlog = 65000
pm = dynamic
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
; max number of simultaneous requests that will be served (if each PHP worker needs 32 MB, then 128 x 32 MB = 4 GB of RAM)
pm.max_children = 128
; keep this high (10k to 50k) to avoid constant worker respawning; however, if the PHP code leaks memory, a high value will be a problem
pm.max_requests = 10000
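If the half-hourly bursts still overwhelm the pool, the "seems busy ... spawning" warnings from the question suggest fpm is forking workers under pressure; pre-spawning more of them is one option (values below are assumptions to balance against the ~1.5 GB of free RAM mentioned):

pm = dynamic
; keep enough warm workers that the :00/:30 ping spike doesn't trigger a fork storm
pm.start_servers = 16
pm.min_spare_servers = 16
pm.max_spare_servers = 32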
I am doing load testing on an nginx server and I am running into an issue where the CPU hits 100% but only 50% of the RAM is utilized. The server is:
2 vCPU
2 GB of RAM
40GB SSD Drive
Rackspace High Performance Server
This is my Nginx Config
worker_processes 2;
error_log /var/log/nginx/error.log crit;
pid /var/run/nginx.pid;
events {
    worker_connections 1524;
    use epoll;
    multi_accept on;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /var/log/nginx/access.log main;
    access_log off;

    # sendfile copies data between one FD and another from within the kernel.
    # More efficient than read() + write(), since that requires transferring data to and from user space.
    sendfile on;

    # tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. This is useful for prepending headers before calling sendfile,
    # or for throughput optimization.
    tcp_nopush on;

    # don't buffer data sends (disable Nagle's algorithm). Good for sending frequent small bursts of data in real time.
    tcp_nodelay on;

    # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
    reset_timedout_connection on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    # send the client a "request timed out" if the body is not loaded by this time. Default 60.
    client_body_timeout 10;

    # if the client stops reading data, free up the stale client connection after this much time. Default 60.
    send_timeout 2;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    gzip on;
    server_tokens off;

    client_max_body_size 20m;
    client_body_buffer_size 128k;

    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    # load config files from the /etc/nginx/conf.d directory;
    # the default server is in conf.d/default.conf
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Kernel additions in /etc/sysctl.conf
# Increase system IP port limits to allow for more connections
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_window_scaling = 1
# number of packets to keep in backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000
# increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000
# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic
Example VHost Config with PHP-FPM
server {
    listen 80;
    server_name www.example.com;

    location / {
        root /data/sites/example.com/public_html;
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php?rt=$uri&$args;
    }

    location ~ \.php {
        root /data/sites/example.com/public_html;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param ENV production;
        include fastcgi_params;
    }
}
The server can handle about 60 active SBU connections clicking around, or about 300 requests per second. Is the fact that it is not fully utilizing RAM while maxing out the CPU a bad thing? Can I optimize this further?
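One observation on the config above: nginx.conf declares a microcache zone (fastcgi_cache_path ... keys_zone=microcache:10m) but the vhost never references it, so PHP responses are not actually being cached. Since CPU is the bottleneck, enabling it for the PHP location could trade a little RAM for a lot of CPU; a sketch (cache key and validity are assumptions to tune):

location ~ \.php {
    fastcgi_cache microcache;
    fastcgi_cache_key $scheme$host$request_uri$request_method;
    fastcgi_cache_valid 200 301 302 1s;
    fastcgi_cache_use_stale updating error timeout;
    # existing root / fastcgi_pass / fastcgi_param lines stay as-is
}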