I am running some siege tests on my nginx server. The bottleneck doesn't seem to be CPU or memory, so what is it?
I run the following from my MacBook:
sudo siege -t 10s -c 500 server_ip/test.php
The response time climbs to 10 seconds, I get errors, and siege aborts before completing.
But if I run the above on the server itself:
siege -t 10s -c 500 localhost/test.php
I get:
Transactions: 6555 hits
Availability: 95.14 %
Elapsed time: 9.51 secs
Data transferred: 117.30 MB
Response time: 0.18 secs
Transaction rate: 689.27 trans/sec
Throughput: 12.33 MB/sec
Concurrency: 127.11
Successful transactions: 6555
Failed transactions: 335
Longest transaction: 1.31
Shortest transaction: 0.00
I also noticed that at lower concurrency levels I get a vastly better transaction rate on localhost than externally.
But while the above runs on localhost, CPU and memory usage both stay low in htop, so I'm confused about how to boost performance when I can't see a bottleneck.
ulimit returns 50000 because I've increased it. There are 4 nginx worker processes, which is 2x my CPU cores. Here are my other settings:
worker_rlimit_nofile 40000;
events {
worker_connections 20000;
# multi_accept on;
}
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
The test.php is just an echo phpinfo() script, nothing else. No database connections.
The machine is an AWS m3.large: 2 CPU cores and about 7GB of RAM, I believe.
Here is the contents of my server block:
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /var/www/sitename;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
try_files $uri $uri.html $uri/ @extensionless-php;
}
location @extensionless-php {
rewrite ^(.*)$ $1.php last;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Also this was in my error log:
connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, cli$
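That connect() error usually points at PHP-FPM rather than nginx: once every FPM worker is busy, the socket's accept backlog fills up and further connect() attempts fail with EAGAIN. A hedged sketch of the usual pool knobs (the values are illustrative assumptions, not tuned for this box), typically in /etc/php5/fpm/pool.d/www.conf:
; Larger accept backlog so bursts queue instead of failing with EAGAIN.
; The effective value is capped by the kernel's net.core.somaxconn sysctl,
; so that may need raising to match.
listen.backlog = 4096
; More concurrent workers; a phpinfo() page is cheap, so two cores can
; drive far more than a small default pool.
pm.max_children = 50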
I've been dealing with this for a long time now and I've narrowed down where the problem lies.
Initially I thought the problem was either my medium EC2 instance or my medium RDS instance, but after connecting to the RDS from the EC2 via the MySQL command line and seeing no delay in query response time, the only things left that could be causing this issue are PHP or Nginx.
Here is the code I'm executing:
$connection = mysqli_connect("database-3...us-east-2.rds.amazonaws.com", "admin", "...", "prod_db", 3306);
$start_time = microtime(true);
mysqli_query($connection, "SELECT * FROM `product_categories` ORDER BY `list_order` ASC");
$end_time = microtime(true);
echo "execution time: ".(($end_time-$start_time)*1000)."ms<br><br>";
Most of the time this code reports an execution time of around 10ms. However, a good chunk of the time, it takes 10x as long. That wouldn't be such a big deal if it were the worst of it, but my website is suffering immensely because of this delay: page loads sometimes take 2-3 seconds because 10ms query responses are taking 100ms each.
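One hedged way to narrow this down is to time the connection separately from the query, since a fresh TCP and auth handshake to RDS on every page load can dwarf a 10ms query (a sketch reusing the snippet above, credentials elided the same way):
$t0 = microtime(true);
$connection = mysqli_connect("database-3...us-east-2.rds.amazonaws.com", "admin", "...", "prod_db", 3306);
$t1 = microtime(true);
mysqli_query($connection, "SELECT * FROM `product_categories` ORDER BY `list_order` ASC");
$t2 = microtime(true);
// If the first number dominates, the delay is the connection, not MySQL.
echo "connect: ".(($t1-$t0)*1000)."ms, query: ".(($t2-$t1)*1000)."ms<br>";
If connect time dominates, persistent connections (prefixing the hostname with p:) are worth testing.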
Here is my nginx config file for my site:
server {
fastcgi_read_timeout 300;
proxy_read_timeout 300;
set $no_cache 1;
if ($arg_trycache = 1) {
set $no_cache 0;
}
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.php;
server_name server.com www.server.com;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ ^\.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
include snippets/fastcgi-php.conf;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
# With php-cgi (or other tcp sockets):
#fastcgi_pass 127.0.0.1:9000;
fastcgi_cache ZONE_1;
fastcgi_cache_valid 200 1m;
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
fastcgi_param PHP_VALUE "memory_limit = 1024M";
#fastcgi_buffer_size 16k;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
#limit_req zone=two;
#limit_req zone=one burst=40 nodelay;
}
location ~ \.php$ { # will be used for any PHP request except 'index.php'
return 404;
}
location ~ /api/v1/app-content/search-all.php {
#include snippets/fastcgi-php.conf;
#fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
fastcgi_cache ZONE_1;
fastcgi_cache_valid 200 1m;
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
# With php-cgi (or other tcp sockets):
#fastcgi_pass 127.0.0.1:9000;
fastcgi_cache ZONE_1;
fastcgi_cache_valid 200 1m;
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
fastcgi_param PHP_VALUE "memory_limit = 1024M";
#fastcgi_buffer_size 16k;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
}
location @rewrite {
rewrite ^ $uri.php last;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /home/ubuntu/chain.crt; # managed by Certbot
ssl_certificate_key /home/ubuntu/server.key; # managed by Certbot
}
I'm truly at a loss at this point. I thought for days that it was somehow the RDS instance I have running but after eliminating that possibility by connecting via command line, I'm back at square one.
So essentially my question is this: is it possible that PHP itself or Nginx is the culprit here? Keep in mind that my PHP code is as simple as possible: no extra layers on top, just straight mysqli_ functions.
I use Laravel Forge for spinning up my EC2 environments; it builds a LEMP stack for me. I recently started getting 504 timeouts on requests.
I'm no sysadmin (hence the subscription to Forge), but I looked through the logs and narrowed the issue down to these 2 repeated entries:
in: /var/log/nginx/default-error.log
2017/09/15 09:32:17 [error] 2308#2308: *1 upstream timed out (110: Connection timed out) while sending request to upstream, client: x.x.x.x, server: xxxx.com, request: "POST /upload HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock", host: "xxxx.com", referrer: "https://xxxx.com/rest/of/the/path"
in: /var/log/php7.1-fpm-log
[15-Sep-2017 09:35:09] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 14 total children
It seems like fpm opens connections that never die, and from my RDS load logs I can see that the RAM is constantly maxed out.
I've tried:
Rolling back to a known-stable version of my app (from 2 months ago)
Reinstalling my EC2 with 5.6, 7.0, and 7.1 (with their respective fpm)
Doing all the above on 14.04 and 16.04
Creating a bigger RDS
Right now the only thing that works is a beefy RDS (8gb RAM) + killing fpm pooled connections every 300 requests. But obviously throwing resources at this problem is not the solution.
Here is my config for /etc/php/7.1/fpm/pool.d/www.conf
user = forge
group = forge
listen = /run/php/php7.1-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm = dynamic
pm.max_children = 30
pm.start_servers = 7
pm.min_spare_servers = 6
pm.max_spare_servers = 10
pm.process_idle_timeout = 7s;
pm.max_requests = 300
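Against that "seems busy" warning, a hedged retune of this pool might look like the following (the numbers are assumptions for a box with RAM to spare, not Forge defaults):
; Keep more workers warm so FPM is not forced to fork under load.
pm = dynamic
pm.max_children = 30
pm.start_servers = 15
pm.min_spare_servers = 10
pm.max_spare_servers = 20
; Recycle less aggressively than every 300 requests.
pm.max_requests = 500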
And here is my config for nginx.conf
listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xxxx.com;
root /home/forge/xxxx.com/public;
# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/xxxx.com/111111/server.crt;
ssl_certificate_key /etc/nginx/ssl/xxxx.com/111111/server.key;
ssl_protocols xxxx;
ssl_ciphers ...;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/xxxx.com/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log /var/log/nginx/xxxx.com-access.log;
error_log /var/log/nginx/xxxx.com-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
fastcgi_read_timeout 60;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
}
OK, after a LOT of debugging and testing, I've identified these causes.
Primary cause for me: the AWS RDS instance I was using for MySQL had 500MB of memory. Looking back, all these issues started once the DB size surpassed 400MB.
Solution: make sure you have at least 2x your DB size in RAM at all times. Otherwise the entire B+tree doesn't fit in memory, so the engine has to do constant swaps to disk, which can push query times upwards of 15 seconds.
Another primary cause of problems like these: unoptimized SQL queries.
Solution: on your localhost, maintain data of a size similar to your data on the server, so slow queries surface during development.
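A quick sanity check for that 2x rule (a hedged sketch; prod_db is the schema name from the question above):
-- Total data + index size for one schema, in MB.
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS db_size_mb
FROM information_schema.tables
WHERE table_schema = 'prod_db';
-- Compare against the configured InnoDB buffer pool (reported in bytes).
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
If the working set no longer fits in the buffer pool, reads start hitting disk and query times degrade exactly as described above.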
Is it possible to run multiple NGINX on a single Dedicated server?
I have a dedicated server with 256GB of RAM, and I am running multiple PHP scripts on it, but they hang, apparently because of the memory PHP uses.
When I check
free -m
it's not even using 1% of the memory. So I am guessing it has something to do with NGINX.
Can I install multiple NGINX on this server and use them like
5.5.5.5:8080, 5.5.5.5:8081, 5.5.5.5:8082
I have already allocated 20GB of memory to PHP, but it is still not working properly.
The symptom: NGINX gives a 504 Gateway Time-out, so either PHP or NGINX is misconfigured.
You may run multiple instances of nginx on the same server provided that some conditions are met, but that is not the solution you should be looking for (and it may not solve your problem at all).
I have my Ubuntu / PHP / Nginx server set up this way (it actually also runs some Node.js servers in parallel). Here is a configuration example which works fine on an AWS EC2 m3.medium instance.
upstream xxx {
# server unix:/var/run/php5-fpm.sock;
server 127.0.0.1:9000 max_fails=0 fail_timeout=10s weight=1;
ip_hash;
keepalive 512;
}
server {
listen 80;
listen 8080;
listen 443 ssl;
#listen [::]:80 ipv6only=on;
server_name xxx.mydomain.io yyy.mydomain.io;
if ( $http_x_forwarded_proto = 'http' ) {
return 301 https://$server_name$request_uri;
}
root /home/ubuntu/www/xxxroot;
index index.php;
location / {
try_files $uri $uri/ /index.php;
}
location ~ ^/(status|ping)$ {
access_log off;
allow 127.0.0.1;
#allow 1.2.3.4#your-ip;
#deny all;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass xxx; # the upstream defined above
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
#fastcgi_param SCRIPT_FILENAME /xxxroot/$fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $request_filename;
#fastcgi_param DOCUMENT_ROOT /home/ubuntu/www/xxxroot;
# send bad requests to 404
#fastcgi_intercept_errors on;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
Hope it helps,
I think you are running into a timeout; your PHP scripts seem to run too long. Check the following (a sketch follows the list):
max_execution_time in your php.ini
request_terminate_timeout in www.conf of your PHP-FPM configuration
fastcgi_read_timeout in http section or location section of your nginx configuration.
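The three settings live in different files; a hedged sketch with matching values (60s is an assumption; the point is that nginx should not give up before PHP does):
php.ini, the hard cap on script runtime:
max_execution_time = 60
www.conf (PHP-FPM pool), terminates a worker stuck past this limit:
request_terminate_timeout = 60s
nginx, in the http or PHP location block, how long nginx waits on FastCGI:
fastcgi_read_timeout 60s;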
Nginx is designed more to be used as a reverse proxy or load balancer than to control application logic and run php scripts. Running multiple instances of nginx that each execute php isn't really playing to the server application's strengths. As an alternative, I'd recommend using nginx to proxy between one or more apache instances, which are better suited to executing heavy php scripts. http://kbeezie.com/apache-with-nginx/ contains information on getting apache and nginx to play nicely together.
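A minimal sketch of that layout (the port is an assumption): nginx keeps serving static files itself and forwards PHP requests to an Apache instance bound to a local port.
location ~ \.php$ {
    proxy_pass       http://127.0.0.1:8080;  # local Apache with mod_php, assumed port
    proxy_set_header Host             $host;
    proxy_set_header X-Real-IP        $remote_addr;
    proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
}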
AntiDDOS slowing server
Hello,
I just migrated from Apache 2.4 to Nginx. The OS on the server is FreeBSD 10.3 (amd64) with a custom kernel. I have one strange problem: when I uncomment this line in the nginx config:
limit_req zone=antiddosphp burst=5;
the WordPress dashboard takes >2s longer to load than with this option disabled. Where could the problem be? Almost every page takes more time to load with this option (or I don't know how to set it correctly)...
My second question is about the right performance settings in the config file. My VPS is:
1x 2GHz CPU + 2GB RAM + 15GB SSD
Free memory after startup is about 1700MB. Do I have the right settings for nginx? I also have postfix, dovecot, mariadb, and php-fpm installed. MariaDB takes about 200MB of RAM and the MTA about 150MB, so I have about 1300MB free for the webserver.
My nginx conf file:
load_module /usr/local/libexec/nginx/ngx_mail_module.so;
load_module /usr/local/libexec/nginx/ngx_stream_module.so;
user www;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=antiddos:10m rate=10r/s;
limit_req zone=antiddosphp burst=5;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
sendfile on;
fastcgi_connect_timeout 100;
fastcgi_send_timeout 100;
fastcgi_read_timeout 100;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_intercept_errors on;
gzip on;
gzip_min_length 1k;
gzip_comp_level 9;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
open_file_cache max=2000 inactive=60s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
client_body_buffer_size 8k;
client_header_buffer_size 16k;
client_max_body_size 20m;
client_body_timeout 10;
client_header_timeout 10;
large_client_header_buffers 4 32k;
keepalive_timeout 15;
send_timeout 10;
keepalive_requests 1000;
server {
listen 80;
server_name localhost;
location / {
root /usr/local/www/nginx;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/local/www/nginx-dist;
}
}
server {
server_name test.test.cz;
#limit_req zone=antiddos burst=60;
#limit_req zone=antiddosphp burst=2;
access_log /var/log/example.com.access.log;
error_log /var/log/example.com.error.log;
root /usr/local/www/domains/test-cz/webserver/test;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ /\.ht {
deny all;
}
}
}
PHP-FPM configuration:
user = www
group = www
pm = dynamic
pm.start_servers = 3
pm.max_children = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 3
pm.max_requests = 200
request_terminate_timeout = 10
request_slowlog_timeout = 0
slowlog = log/$pool.log.slow
catch_workers_output = yes
Thank you all for any reply.
If you are loading a website, you are not loading only the page itself but its assets as well, and nginx counts each of them as an independent request. You have 10r/s defined and a burst size of 5, so beyond 10 requests per second the next requests are delayed for rate-limiting purposes, and once the burst of 5 is exceeded the following requests receive a 503 error.
If the requests rate exceeds the rate configured for a zone, their
processing is delayed such that requests are processed at a defined
rate. Excessive requests are delayed until their number exceeds the
maximum burst size in which case the request is terminated with an
error 503 (Service Temporarily Unavailable).
(Source: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html)
Answer: Up the r/s a bit and the pages should flow out a bit faster.
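For example, a hedged sketch (the numbers are illustrative, not a recommendation): a higher rate plus a burst with nodelay lets one page and its assets through at full speed, while sustained excess traffic still receives a 503.
limit_req_zone $binary_remote_addr zone=antiddosphp:10m rate=30r/s;
location ~ \.php$ {
    # Serve up to 20 queued requests immediately instead of pacing them.
    limit_req     zone=antiddosphp burst=20 nodelay;
    include       fastcgi_params;
    fastcgi_pass  127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}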
For the second question: I can't give a definite answer; look at your load and determine whether you have to tweak these values. Usually the error log is a good place to start :)
We run a few high volume websites which together generate around 5 million pageviews per day. We have the most overkill servers as we anticipate growth but we are having reports of a few active users saying the site is sometimes slow on the first pageview. I've seen this myself every once in a while where the first pageview will take 3-5 seconds then it's instant after that for the rest of the day. This has happened to me maybe twice in the last 24 hours so not enough to figure out what's happening. Every page on our site uses PHP but one of the times it happened to me it was on a PHP page that doesn't have any database calls which makes me think the issue is limited to NGINX, PHP-FPM or network settings.
We have 3 NGINX servers running behind a load balancer. Our database is separate on a cluster. I included our configuration files for nginx and php-fpm as well as our current RAM usage and PHP-FPM status. This is based on middle of the day (average traffic for us). Please take a look and let me know if you see any red flags in my setup or have any suggestions to optimize further.
Specs for each NGINX Server:
OS: CentOS 7
RAM: 128GB
CPU: 32 cores (2.4Ghz each)
Drives: 2xSSD on RAID 1
RAM Usage (free -g)
              total   used   free   shared   buff/cache   available
Mem:            125     15     10        3          100          103
Swap:            15      0     15
PHP-FPM status (IE: http://server1_ip/status)
pool: www
process manager: dynamic
start time: 03/Mar/2016:03:42:49 -0800
start since: 1171262
accepted conn: 69827961
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 1670
active processes: 1
total processes: 1671
max active processes: 440
max children reached: 0
slow requests: 0
php-fpm config file:
[www]
user = nginx
group = nginx
listen = /var/opt/remi/php70/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 6000
pm.start_servers = 1600
pm.min_spare_servers = 1500
pm.max_spare_servers = 2000
pm.max_requests = 1000
pm.status_path = /status
slowlog = /var/opt/remi/php70/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/opt/remi/php70/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/opt/remi/php70/lib/php/session
php_value[soap.wsdl_cache_dir] = /var/opt/remi/php70/lib/php/wsdlcache
nginx config file:
user nginx;
worker_processes 32;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1000;
multi_accept on;
use epoll;
}
http {
log_format main '$remote_addr - $remote_user [$time_iso8601] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10 10;
send_timeout 60;
types_hash_max_size 2048;
client_max_body_size 50M;
client_body_buffer_size 5m;
client_body_timeout 60;
client_header_timeout 60;
fastcgi_buffers 256 16k;
fastcgi_buffer_size 128k;
fastcgi_connect_timeout 60s;
fastcgi_send_timeout 60s;
fastcgi_read_timeout 60s;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
reset_timedout_connection on;
server_names_hash_bucket_size 100;
#compression
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/javascript application/xml;
gzip_disable "MSIE [1-6]\.";
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name domain1.com;
root /folderpath;
location / {
index index.php;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
#server status
location /server-status {
stub_status on;
access_log off;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
location = /status {
access_log off;
allow 127.0.0.1;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/opt/remi/php70/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
UPDATE:
I installed opcache as per the suggestion below. Not sure yet whether it fixes the issue. Here are my settings:
opcache.enable=1
opcache.memory_consumption=1024
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=32531
opcache.max_wasted_percentage=10
2 minor tips:
If you use opcache, monitor it to check that its configuration (especially memory size) is OK and to avoid an OOM reset; you can use https://github.com/rlerdorf/opcache-status (a single PHP page).
Increase pm.max_requests to keep reusing the same processes (a sketch follows).
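For the second tip, a hedged sketch for www.conf (the value is an assumption): recycling workers less often avoids fork overhead and keeps per-process caches, such as the realpath cache, warm.
; Recycle a worker only after many requests; 0 disables recycling entirely,
; which is only safe if the application does not leak memory.
pm.max_requests = 10000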