I have a load balancer and two EC2 instances with php-fpm + nginx serving my website, and I have configured Redis to store PHP sessions. By running "keys *" in redis-cli, I realized that PHP is creating a lot of empty sessions beyond the legitimate ones. Even if I close the browser, clear all cookies, and do not run any PHP command or open any URLs, it keeps creating empty sessions. The problem is that the session expiry time is 15 hours, so it creates more sessions than it removes in that window, since it's creating about 30 empty sessions per hour. The only way I could stop it from creating new sessions was to stop php-fpm on my instances.
My guess is that it's something to do with the load balancer health check. I added my nginx.conf and php.ini below so you can see how I'm handling the health checks and my PHP session settings.
keys *
1) "PHPREDIS_SESSION:22u4tot1ilj2jn2pegsvsa9455"
2) "PHPREDIS_SESSION:u9c530pk3h0kr0moigf9a030c7"
...
316) "PHPREDIS_SESSION:d3t36ou13ljuj5ntt2l2b6sne0"
317) "PHPREDIS_SESSION:5kbn03dn01qdn405pg43bbd1i3"
Only 1 session has actual data; the other 316 are empty.
Running "ttl <key>" shows the expiry time is the same as what I've set in php.ini.
My PHP code is just a session_start(); for testing purposes.
My php.ini:
session.use_strict_mode = 0
session.use_cookies = 1
session.cookie_secure = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.cookie_lifetime = 54000
session.cookie_path = /
session.cookie_domain = .domain.xxx
session.cookie_httponly = 1
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 50
session.gc_maxlifetime = 54000
session.cache_limiter = nocache
session.cache_expire = 900
session.use_trans_sid = 0
I've checked phpinfo() and nothing is overriding these settings.
Nginx.conf:
# This is the block that responds to load balancer requests and serves the website
upstream php-fpm {
server 127.0.0.1:9000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /var/www/html;
location /nginx-health {
access_log off;
return 200 "healthy\n";
}
try_files $uri $uri/ @rewrite;
location @rewrite {
rewrite ^/(.*)$ /index.php?param=$1;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_intercept_errors on;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php-fpm;
}
}
# This is the block that serves a websocket listener. It handles connections directly to the node, without passing through the load balancer.
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name sub.domain.xxx;
root /var/www/html;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 720m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
ssl_prefer_server_ciphers on;
location / {
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:4555;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
}
ssl_certificate /etc/letsencrypt/live/sub.domain.xxx/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/sub.domain.xxx/privkey.pem;
}
I tried to create a cronjob that fetches the Redis keys, tests their values, and deletes the empty ones, but I've read that running the "keys" command is really bad for production environments. Does anyone have an idea how to fix this issue?
I can't tell you exactly what's going on, but I do have a few suggestions:
Yes, running keys is an O(n) operation, but if your instance is small, then it is trivial. Keep an eye on your slow log and see if any of your keys operations really are taking too long, but my guess is that they are not.
If you think that the extra sessions are being created by nginx health checks, take a peek at the access logs; you should see all accesses to your site.
I also see that you are using HTTP/2. I don't know that much about how HTTP/2 interacts with PHP, but consider reverting to HTTP/1.1 and seeing if you get the same behavior.
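To act on the first two suggestions, something along these lines may help (the access-log path and the ELB User-Agent string are assumptions; adjust them to your setup):

```shell
# Enumerate session keys without blocking Redis: redis-cli --scan drives the
# incremental SCAN command instead of the O(n) blocking KEYS.
redis-cli --scan --pattern 'PHPREDIS_SESSION:*' | wc -l

# Count load-balancer health-check hits in the nginx access log; AWS ELB
# identifies itself in the User-Agent header.
grep -c 'ELB-HealthChecker' /var/log/nginx/access.log
```

If the health-check count grows at roughly the same rate as the empty sessions, that's strong evidence the checks are hitting a PHP page that calls session_start().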
Related
So I am running an Express.js/Passport.js web server with Nginx on CentOS 7.
Here's the problem: I cannot display any endpoint that starts with /w. Trying to do so results in a "Cannot GET /wiki/Main_Page" message. However, changing that same route from e.g. /welcome to /selcome works just fine.
Most likely culprit:
I installed MediaWiki but removed it not long after. IIRC there was some kind of setting that prettied up URLs starting with /w after the domain. So I'm guessing a rewrite rule is persisting... I just have no idea where.
Here are my config files:
/srv/node/example.com/app/routes/auth.js
module.exports = function(app, passport) {
//Changing the URLs below from welcome to selcome works fine
app.get('/activate', passport.authenticate('user-activate-account', {
successRedirect: '/welcome',
failureRedirect: '/404',
failureFlash: true
}));
...
app.get('/welcome', isLoggedIn, function(req, res) {
res.render('inside', {
page_title: 'Welcome!',
inc_style: true,
style_sheet: 'style/dashboard.css',
portal: function() {
return 'welcome';
}
});
});
};
/etc/nginx.conf
user nginx nginx;
error_log /var/log/nginx/error.log info; # [ debug | info | notice | warn | error | crit ]
events {
worker_connections 1024;
}
http {
include mime.types;
include /etc/nginx/sites_enabled/*.conf;
include /etc/letsencrypt/options-ssl-nginx.conf;
server_names_hash_bucket_size 64;
# Compression - requires gzip and gzip static modules.
gzip on;
gzip_static on;
gzip_vary on;
gzip_http_version 1.1;
gzip_min_length 700;
# Compression levels over 6 do not give an appreciable improvement
# in compression ratio, but take more resources.
gzip_comp_level 6;
# IE 6 and lower do not support gzip with Vary correctly.
gzip_disable "msie6";
# Before nginx 0.7.63:
#gzip_disable "MSIE [1-6]\.";
# Redirect http traffic to https
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
# Catch-all server for requests to invalid hosts.
# Also catches vulnerability scanners probing IP addresses.
server {
listen 443 ssl;
server_name bogus;
root /var/empty;
return 444;
location / {
try_files $uri $uri/ =404;
}
}
# If running php as fastcgi, specify php upstream.
upstream php {
server unix:/var/run/php7.2-fpm.socket;
}
}
/etc/nginx/sites_available/example.com.conf
#sub.example.com
server {
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
listen 443 ssl;
server_name sub.example.com;
access_log /var/log/nginx/sub.example.com.access.log;
error_log /var/log/nginx/sub.example.com.error.log;
location /something\.js {
alias /var/www/html/example.com/sub.example.com/design/;
location ~* \.(gif|jpe?g|png|svg|webm)$ {
try_files $uri =404;
expires 30d;
}
}
location / {
proxy_pass http://localhost:3847;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
So far I have
Purged Nginx config files, and every config file in the include tree, of any legacy rules.
Cleared Nginx cache
My node.js project folder does not even contain the string wiki anywhere except for NPM links in the node_modules folder.
All the PHP files in /etc, /etc/php.d, /etc/php-fpm.d do not contain the string wiki
Deleted mediawiki folder
Restarted nginx
Restarted php-fpm
Restarted entire machine
I'm genuinely baffled at where this problem could be. Any ideas?
Turns out it was the browser. Despite clearing history multiple times and using incognito mode, nothing changed. But using another browser worked flawlessly. I feel like an idiot, but at least I can finally put this rage-inducing mystery behind me.
I was using Firefox (I guess private mode isn't enough to rule the browser out).
Trying Chrome confirmed it was a browser-side error.
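When a browser misbehaves like this, reproducing the request with curl takes the client cache and extensions out of the picture entirely (the URL below is a placeholder for your route):

```shell
# Fetch only the response headers; a 200 here while the browser shows
# "Cannot GET" points at the client, not the server.
curl -sI https://sub.example.com/welcome
```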
I use Laravel Forge for spinning up my EC2 environments; it builds a LEMP stack for me. I recently started getting 504 timeouts on requests.
I'm no sysadmin (hence the Forge subscription), but I looked through the logs and narrowed the issue down to these two repeated entries:
in: /var/log/nginx/default-error.log
2017/09/15 09:32:17 [error] 2308#2308: *1 upstream timed out (110: Connection timed out) while sending request to upstream, client: x.x.x.x, server: xxxx.com, request: "POST /upload HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock", host: "xxxx.com", referrer: "https://xxxx.com/rest/of/the/path"
in: /var/log/php7.1-fpm-log
[15-Sep-2017 09:35:09] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 14 total children
It seems like fpm opens connections that never die, and from my RDS load logs I can see that the RAM is constantly maxed out.
I've tried:
Rolling back to a definite stable version of my app (2months ago)
Reinstalling my EC2 with 5.6, 7.0, and 7.1 (with their respective fpm)
Doing all the above on 14.04 and 16.04
Creating a bigger RDS
Right now the only thing that works is a beefy RDS (8 GB RAM) plus recycling FPM workers every 300 requests. But obviously throwing resources at the problem is not the solution.
Here is my config for /etc/php/7.1/fpm/pool.d/www.conf
user = forge
group = forge
listen = /run/php/php7.1-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm = dynamic
pm.max_children = 30
pm.start_servers = 7
pm.min_spare_servers = 6
pm.max_spare_servers = 10
pm.process_idle_timeout = 7s;
pm.max_requests = 300
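A rough sanity check for the pool above: average worker RSS times pm.max_children should fit comfortably in available RAM, or the box will swap under load. A sketch (the php-fpm7.1 process name is an assumption for this setup):

```shell
# Average resident memory per FPM worker, in KiB; multiply by
# pm.max_children (30 here) to estimate worst-case pool memory.
ps --no-headers -o rss -C php-fpm7.1 \
  | awk '{s += $1; n++} END { if (n) printf "avg %.0f KiB across %d workers\n", s/n, n }'
```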
And here is my config for nginx.conf
listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xxxx.com;
root /home/forge/xxxx.com/public;
# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/xxxx.com/111111/server.crt;
ssl_certificate_key /etc/nginx/ssl/xxxx.com/111111/server.key;
ssl_protocols xxxx;
ssl_ciphers ...;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/xxxx.com/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt  { access_log off; log_not_found off; }
access_log /var/log/nginx/xxxx.com-access.log;
error_log /var/log/nginx/xxxx.com-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
fastcgi_read_timeout 60;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
}
OK, after a LOT of debugging and testing, I've narrowed it down to these causes.
Primary cause for me: the AWS RDS instance I was using for MySQL had 500 MB of memory. Looking back, all these issues started once the DB size surpassed 400 MB.
Solution: make sure you have at least 2x your DB size in RAM at all times. Otherwise the entire B+tree doesn't fit in memory, so the server swaps constantly. This can push query times upwards of 15 seconds.
Primary cause for problems like these in general: unoptimized SQL queries.
Solution: keep your local database at a size similar to the data on the server, so slow queries show up during development.
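To actually find those unoptimized queries, enabling the MySQL slow query log is a low-effort first step. On a self-managed server the settings below go in my.cnf; on RDS the equivalents live in the DB parameter group. The thresholds are examples, not recommendations:

```
# my.cnf / mysqld.cnf (on RDS: set these in the DB parameter group)
slow_query_log  = 1
long_query_time = 1      # log queries slower than 1 second
log_output      = FILE
```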
Is it possible to run multiple NGINX instances on a single dedicated server?
I have a dedicated server with 256 GB of RAM, and I am running multiple PHP scripts on it, but it hangs because of the memory used by PHP.
when I check
free -m
it's not even using 1% of memory.
So I am guessing it has something to do with NGINX.
Can I install multiple NGINX instances on this server and use them like
5.5.5.5:8080, 5.5.5.5:8081, 5.5.5.5:8082
I have already allocated 20 GB of memory to PHP, but it is still not working properly.
Reason: NGINX gives a 504 Gateway Time-out.
Either PHP or NGINX is misconfigured.
You may run multiple instances of nginx on the same server, provided that some conditions are met. But this is not the solution you should be looking for (and it may not solve your problem at all).
I have my Ubuntu / PHP / Nginx server set up this way (it also runs some Node.js servers in parallel). Here is a configuration example which works fine on an AWS EC2 medium instance (m3).
upstream xxx {
# server unix:/var/run/php5-fpm.sock;
server 127.0.0.1:9000 max_fails=0 fail_timeout=10s weight=1;
ip_hash;
keepalive 512;
}
server {
listen 80;
listen 8080;
listen 443 ssl;
#listen [::]:80 ipv6only=on;
server_name xxx.mydomain.io yyy.mydomain.io;
if ( $http_x_forwarded_proto = 'http' ) {
return 301 https://$server_name$request_uri;
}
root /home/ubuntu/www/xxxroot;
index index.php;
location / {
try_files $uri $uri/ /index.php;
}
location ~ ^/(status|ping)$ {
access_log off;
allow 127.0.0.1;
#allow 1.2.3.4#your-ip;
#deny all;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass xxx;
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
#fastcgi_param SCRIPT_FILENAME /xxxroot/$fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $request_filename;
#fastcgi_param DOCUMENT_ROOT /home/ubuntu/www/xxxroot;
# send bad requests to 404
#fastcgi_intercept_errors on;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
Hope it helps.
I think you are running into a timeout; your PHP scripts seem to run too long.
Check the following:
max_execution_time in your php.ini
request_terminate_timeout in www.conf of your PHP-FPM configuration
fastcgi_read_timeout in the http or location section of your nginx configuration.
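For reference, those three knobs live in three different files, and they should be kept roughly consistent with each other so one layer doesn't cut a request off before another gets a chance to. The values below are placeholders:

```
; php.ini
max_execution_time = 60

; PHP-FPM pool configuration (www.conf)
request_terminate_timeout = 60s

# nginx, inside the `location ~ \.php$` block
fastcgi_read_timeout 60s;
```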
Nginx is designed more as a reverse proxy and load balancer than as an application server for running PHP scripts. Running multiple instances of nginx that each serve PHP isn't really playing to its strengths. As an alternative, I'd recommend using nginx to proxy to one or more Apache instances, which are better suited to executing heavy PHP scripts. http://kbeezie.com/apache-with-nginx/ contains information on getting Apache and nginx to play nicely together.
I have this vhost:
server {
server_name admin.ex.com ;
listen 80 ;
listen [::]:80 ;
##SSL
#listen 443 ssl ;
listen *:443 ssl http2 ;
listen [::]:443 ssl http2 ;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES128-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA128:DHE-RSA-AES128-GCM-SHA384$
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_certificate /etc/nginx/ssl/admin.crt;
ssl_certificate_key /etc/nginx/ssl/admin.key;
root /var/www/admin/public/;
index index.php index.html index.htm;
access_log /var/www/admin/admin.log;
auth_basic "Top Secret";
auth_basic_user_file /var/www/admin/.htpasswd;
location / {
try_files $uri $uri/ =404;
allow 192.168.1.1;
#deny all;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.0-fpm.admin.sock;
fastcgi_intercept_errors on;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
}
It's SO SLOW over HTTPS. I tried to visit phpMyAdmin and my own PHP code: over HTTP it loads in 2 seconds, but with HTTPS it takes 2-3 minutes. It loads the HTML itself quickly; downloading the resources (CSS, images) is what takes so long. I'm using Chrome, Nginx 1.9, and a self-signed certificate.
I even tried curl -i against both the HTTP and HTTPS URLs; again, there is huge latency between the two. I don't understand what's going on!
UPDATE:
OK, after some research I figured out that if I take an image of exactly the same VPS and apply it to one in a data center closer to me (Frankfurt instead of NY), it gets way faster. Is it a distance problem, then?
What makes me wonder is why, when I use HTTP, it's so fast no matter where the server is.
Any ideas?
It turned out to be a location-related problem. I moved the server from New York to Amsterdam, which solved it.
I understand that a distant server reduces connection speed, but I don't understand why it slows down only HTTPS requests and not HTTP. Kind of weird!
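The asymmetry is consistent with how TLS works: on top of the TCP handshake, the TLS handshake needs one to two extra round trips per new connection, so high latency to the server is multiplied under HTTPS before the first byte arrives. curl can break the timing down (the domain below is a placeholder):

```shell
# time_appconnect minus time_connect is roughly the TLS handshake cost;
# compare it against the plain-HTTP total for the same URL.
curl -o /dev/null -s -w 'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' https://admin.ex.com/
```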
I have a challenge that I can't figure out how to solve.
I have a DigitalOcean VPS running CentOS 7. On it I host a domain, e.g. www.example.com.
I'm running nginx on it right now. It used to be Apache, but I could not figure out how to make the websockets reverse proxy for Meteor work with Apache.
In Nginx I created a vhost configuration that serves the main domain, www.example.com and example.com, from a Meteor server that runs with pm2-meteor:
server_tokens off; # for security-by-obscurity: stop displaying nginx version
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl spdy; # we enable SPDY here
server_name www.example.com; # this domain must match Common Name (CN) in the SSL certificate
root /var/www/html/example/bundle; # irrelevant
index index.html; # irrelevant
ssl_certificate /etc/httpd/ssl/example/example.crt; # full path to SSL certificate and CA certificate concatenated together
ssl_certificate_key /etc/httpd/ssl/example/example.key; # full path to SSL key
# performance enhancement for SSL
ssl_stapling off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
# safety enhancement to SSL: make sure we actually use a safe cipher
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';
# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
# to avoid ssl stripping https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
add_header Strict-Transport-Security "max-age=31536000;";
# If your application is not compatible with IE <= 10, this will redirect visitors to a page advising a browser update
# This works because IE 11 does not present itself as MSIE anymore
#if ($http_user_agent ~ "MSIE" ) {
# return 303 https://browser-update.org/update.html;
#}
# pass all requests to Meteor
location / {
proxy_pass http://127.0.0.1:4001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; # allow websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr; # preserve client IP
# this setting allows the browser to cache the application in a way compatible with Meteor
}
}
I'm running the meteor server on 4001 port.
OK, this works perfectly for Meteor. My problem now is that I want to create a vhost for a subdomain, payment.example.com, which will load Laravel 5.
Normally I want this subdomain to load over HTTPS also.
How on earth can I do this without loading the default nginx vhost settings or the domain vhost which loads meteor?
I can't figure out a way to make it work. This is the payment subdomain virtual host I'm trying to make and it's not working:
server {
listen 431 ssl spdy; # we enable SPDY here
listen [::]:431 ipv6only=on default_server;
server_name payment.example.ro; # this domain must match Common Name (CN) in the SSL certificate
root /var/www/html/example/payment/;
access_log /var/log/nginx/nginx_access.log;
error_log /var/log/nginx/nginx_error.log;
ssl_certificate /etc/httpd/ssl/example/example.crt; # full path to SSL certificate and CA certificate concatenated together
ssl_certificate_key /etc/httpd/ssl/example/example.key; # full path to SSL key
# performance enhancement for SSL
ssl_stapling off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
# safety enhancement to SSL: make sure we actually use a safe cipher
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';
# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
# to avoid ssl stripping https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
add_header Strict-Transport-Security "max-age=31536000;";
location / {
root /var/www/html/example/payment/;
index index.php index.html index.htm;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
Right now, when I load payment.example.com, I get the Meteor reverse proxy instead.
Any suggestions or ideas on how to make this work? Is this even possible?
Do I have to put it in a subfolder rather than a subdomain? I'd prefer the subdomain...
Thank you, guys!