504 Gateway Time-out (nginx) with WordPress and PHP-FPM

I want to set up WordPress with PHP-FPM in Kubernetes. I have it set up already, but I am currently facing a problem with the Nginx proxy: when I try to install the WooCommerce plugin, it fails with the error
Installation failed: 504 Gateway Time-out
504 Gateway Time-out (nginx)
<!-- padding to disable MSIE and Chrome friendly error page -->
<!-- padding to disable MSIE and Chrome friendly error page -->
I don't know what is going wrong on the proxy side. I already set proxy_read_timeout to 100, but it still did not work, and I have tried many other proxy time-out values without success. Here is my Nginx proxy config:
wordpress.conf
server {
    listen 80;
    server_name localhost;
    root /var/www/html;
    index index.php;

    #access_log /var/log/nginx/hakase-access.log;
    #error_log /var/log/nginx/hakase-error.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_cache phpcache;
        fastcgi_cache_valid 200 301 302 60m;
        fastcgi_cache_min_uses 1;
        fastcgi_cache_lock on;
        add_header X-FastCGI-Cache $upstream_cache_status;
        fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=phpcache:100m max_size=10g inactive=60m use_temp_path=off;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

The configuration that worked for me:
location ~ \.php$ {
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    fastcgi_connect_timeout 300s;
    fastcgi_send_timeout 60;
    fastcgi_read_timeout 60;
}
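For clarity, here is roughly how those directives would slot into the wordpress.conf location block from the question (a sketch that simply merges the two snippets above; the buffer and timeout values are the ones from this answer and may need further tuning if the WooCommerce install takes longer than 60 seconds):
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass wordpress:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;

    # buffers and timeouts from this answer
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    fastcgi_connect_timeout 300s;
    fastcgi_send_timeout 60;
    fastcgi_read_timeout 60;

    # the cache directives from the original config can stay as they were
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 301 302 60m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_lock on;
    fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
    add_header X-FastCGI-Cache $upstream_cache_status;
}
If the plugin installation needs more than 60 seconds, raising fastcgi_read_timeout together with PHP's max_execution_time and request_terminate_timeout (as described in the PHP-FPM answer further down this page) is the usual next step.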
You must be running the two containers inside a single pod; you can check the logs of the ingress and of the WordPress Nginx container for more details.
You can use this GitHub repo as a reference: https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx
Also, check this blog post for more details: https://medium.com/@harsh.manvar111/kubernetes-wordpress-php-fpm-nginx-73cb4f9aef02

Related

Server performance issue while running cakephp web application

Our application server goes down or becomes slow at random times throughout the day. A CakePHP 2 application with MySQL is running on this server. We have some cron jobs set up and all of them are working fine.
The performance issue mostly occurs during business hours (daytime).
Server configuration: AWS t2.large instance, FreeBSD 10.3-RELEASE-p11, disk space 20% free (30 GB)
I went through many server logs as well as application logs, for example:
Nginx error log (a few lines from the log)
2020/04/30 23:04:57 [info] 66440#101049: *71645 client closed connection while waiting for request, client: XX.XX.XX.XX, server: 0.0.0.0:80
2020/04/30 23:05:01 [info] 66440#101049: *71820 kevent() reported that client XX.XX.XX.XX closed keepalive connection
2020/04/30 23:05:42 [info] 66440#101049: *72494 peer closed connection in SSL handshake while SSL handshaking, client: XX.XX.XX.XX, server: 0.0.0.0:443
dmesg.today Log (Few lines)
sonewconn: pcb 0xfffff800a70cf7a8: Listen queue overflow: 193 already in queue awaiting acceptance (62 occurrences)
sonewconn: pcb 0xfffff800a70cf7a8: Listen queue overflow: 193 already in queue awaiting acceptance (57 occurrences)
sonewconn: pcb 0xfffff80115d9e7a8: Listen queue overflow: 193 already in queue awaiting acceptance (63 occurrences)
sonewconn: pcb 0xfffff80115d9e7a8: Listen queue overflow: 193 already in queue awaiting acceptance (126 occurrences)
HTOP outcome
PHP-fpm: pool www, sometimes consumes 100% of CPU and memory
NodePing alerts (I receive continuous notifications during the daytime)
failed the HTTP check. It is down as of Thu Apr 30 2020 12:29:09 GMT-0700.Timeout.
HTTP is back up after being down for 4 minutes as of Thu Apr 30 2020 23:28:19 GMT-0700.
Nginx.conf file
user www;
worker_processes 2;
#error_log logs/error.log;
#error_log logs/error.log notice;
error_log /var/log/nginx/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
client_header_timeout 3000;
client_body_timeout 3000;
fastcgi_read_timeout 3000;
client_max_body_size 32m;
fastcgi_buffers 8 128k;
fastcgi_buffer_size 128k;
server_name_in_redirect on;
server_names_hash_bucket_size 64;
server_names_hash_max_size 8192;
#access_log logs/access.log main;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
include /etc/nginx/ssl/*.conf;
server {
listen 80;
autoindex off;
server_name localhost;
add_header X-Frame-Options "SAMEORIGIN";
root /usr/local/www/html/webroot;
index index.html index.php;
# redirect server error pages to the static page /50x.html
location / {
# try_files $uri $uri/ /index.php?$uri&$args;
# set $new_uri $uri;
# try_files $uri $uri/ /index.php?$args;
try_files $uri $uri?$args $uri/ /index.php?$uri&$args /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
include /etc/nginx/fastcgi_params;
#fastcgi_param PATH_INFO $new_uri;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#fastcgi_pass /var/run/php5-fpm.sock;
fastcgi_read_timeout 300;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
log_not_found off;
access_log off;
}
error_page 500 502 503 504 /50x.html;
location = /50.html {
root /etc/nginx/html;
}
location ~ /(\.ht|\.user.ini|\.git|\.hg|\.bzr|\.svn) {
deny all;
}
}
# HTTPS server
#
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name *.XXXX.com;
ssl on;
ssl_certificate /XXXXX.crt;
ssl_certificate_key /XXXXX.key;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.1 TLSv1.2; # Dropping
root /usr/local/www/html/webroot;
index index.html index.php;
# redirect server error pages to the static page /50x.html
location / {
# try_files $uri $uri/ /index.php?$uri&$args;
# set $new_uri $uri;
# try_files $uri $uri/ /index.php?$args;
try_files $uri $uri?$args $uri/ /index.php?$uri&$args /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_read_timeout 3000;
}
location = /favicon.ico { log_not_found off; access_log off;}
location = /robots.txt { log_not_found off; access_log off;}
location ~ /.well-known { allow all; }
}
##
# Cache Proxy
##
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=512m;
proxy_temp_path /var/tmp;
}
Below are some CPU utilization screenshots from the AWS console.
Above are some of my findings while trying to track down the issue, but I don't know what is causing the poor server performance. Please advise.
UPDATE
I observed that during business hours the TTFB is very high (20-25 seconds).
I checked the MySQL queries running on that page; they took a total time of 1441 ms.
So something else is taking up the time when loading the page.
HTOP outcome at that time

Images not found using Laravel with nginx and docker-compose

This is basically an extension of my question here. I have now set the permissions on the storage folder and I no longer get the 500 error, but now I get a 404 instead.
The image URL I'm trying is: http://localhost:8888/images/content/test/myimage.jpg
The server path to the image is /var/www/public/images/content/test/myimage.jpg
My nginx-config is the following:
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 100000;
events {
worker_connections 2048;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
gzip on;
server {
listen 8888;
root /var/www/public;
index index.php index.html index.htm;
client_max_body_size 100m;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass app:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
}
My guess is that something is wrong with the config above, but I have no idea what it might be. I have tried the solution suggested here, but it did not make any difference at all.
Does anyone have any idea what to do?
It might be the path in this line: location ~ ^/.+\.php(/|$)
Try it like this:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
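Folded into the server block from the question, the suggested location might look like this (a sketch; the fastcgi_pass, include and SCRIPT_FILENAME lines are reused unchanged from the question's config):
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass app:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
With \.php$ the location only matches requests ending in .php; an image URL such as /images/content/test/myimage.jpg is handled by the try_files rule in location /, which serves the file directly when it exists under /var/www/public and otherwise falls back to /index.php.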

Nginx + PHP-FPM 7.1 - 504 Gateway Time-out

I'm running nginx 1.12 and PHP-FPM 7.1 as separate Docker containers on a Synology NAS, and I get a 504 Gateway error if the PHP script runs longer than 60s. I've already tried several nginx configuration parameters, but the error still occurs.
Here is my current nginx config:
#user www-data;
#group http
worker_processes 1;
error_log /opt/data/logs/nginx_error.log notice;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#keepalive_timeout 30s;
sendfile on;
#tcp_nopush off;
tcp_nodelay on;
#gzip off;
send_timeout 300;
server {
listen 80;
server_name "";
root /opt/php;
index index.php;
location /data/ {
sendfile on;
root /opt;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass php:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_read_timeout 300;
#fastcgi_buffering off;
#fastcgi_keep_conn on;
#fastcgi_intercept_errors on;
#fastcgi_cache off;
#fastcgi_ignore_client_abort on;
}
location ~ ^/(status|ping)$ {
access_log off;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php:9000;
}
}
}
The PHP test script:
<?php
sleep(65);
echo "done!";
file_put_contents("/opt/data/timetest.txt", "\nEnd", FILE_APPEND);
After 60s the browser shows the 504 Gateway Time-out. The PHP script keeps running, though, and still writes the text to the file.
Nginx error log:
2017/07/22 08:16:32 [error] 8#8: *10 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.17.0.1, server: , request: "GET /timetest.php HTTP/1.1", upstream: "fastcgi://172.17.0.3:9000", host: "192.168.0.100:8081"
Does anyone have an idea?
The real question is probably why your backend takes so long to respond. I'm not sure about your use case, but normally it is not user-friendly to make users wait that long for a response.
To answer your question:
I found this link: https://easyengine.io/tutorials/php/increase-script-execution-time/
Add in /etc/php5/fpm/php.ini
max_execution_time = 300
Set in /etc/php5/fpm/pool.d/www.conf
request_terminate_timeout = 300
Set in /etc/nginx/nginx.conf
http {
#...
fastcgi_read_timeout 300;
#...
}
And in your config:
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_read_timeout 300;
}
And reload services
service php5-fpm reload
service nginx reload
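One note on adapting this to the setup in the question: the paths above are the PHP 5 defaults; for PHP-FPM 7.1 they are typically /etc/php/7.1/fpm/php.ini and /etc/php/7.1/fpm/pool.d/www.conf (and in a Docker setup you would usually restart the PHP-FPM container rather than call service). Also, the question's containers talk to PHP-FPM over TCP rather than a unix socket, so the location block would keep its fastcgi_pass as-is, roughly like this (a sketch based on the question's own config):
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php:9000;
    fastcgi_read_timeout 300;
}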

PHP5-FPM keeps crashing on new micro instance, log is blank

I'm new to all of this, but I can't keep my newly spun-up micro EC2 server (running WordPress) up and running. The PHP-FPM log only has this, with logging set to debug:
[17-Oct-2016 15:46:38] NOTICE: configuration file /etc/php5/fpm/php-fpm.conf test is successful
My nginx log is continuously filling with errors about connecting to php5-fpm.sock (hundreds of entries per minute, even though no one else is accessing the site).
2016/10/17 16:32:16 [error] 26389#0: *7298 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 191.96.249.80, server: mysiteredacted.com, request: "POST /xmlrpc.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "removed"
After restarting nginx and PHP-FPM the site works for a few minutes before throwing 502 Bad Gateway errors until I restart them both again.
I don't know where to begin with this. Here is my nginx config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
port_in_redirect off;
gzip on;
gzip_types text/css text/xml text/javascript application/x-javascript;
gzip_vary on;
include /etc/nginx/conf.d/*.conf;
}
Which also include this file in the /conf.d folder:
server {
## Your website name goes here.
server_name mysiteredacted.com www.mysiteredacted.com;
## Your only path reference.
root /var/www/;
listen 80;
## This should be in your http block and if it is, it's not needed here.
index index.html index.htm index.php;
include conf.d/drop;
location / {
# This is cool because no php is touched for static content
try_files $uri $uri/ /index.php?q=$uri&$args;
}
location ~ \.php$ {
fastcgi_buffers 8 256k;
fastcgi_buffer_size 128k;
fastcgi_intercept_errors on;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
expires 1d;
}
}
The second file has this line:
fastcgi_pass unix:/var/run/php5-fpm.sock;
If that socket file does not exist, nginx will throw this error.
Check this previous question: How to find my php-fpm.sock?
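In other words, the socket nginx passes requests to has to be the one PHP-FPM actually listens on. A minimal sketch of lining the two sides up (the www.conf path is the usual Debian/Ubuntu default and is an assumption here):
# The PHP-FPM pool config (e.g. /etc/php5/fpm/pool.d/www.conf) should contain:
#     listen = /var/run/php5-fpm.sock
# and nginx must point at exactly the same path:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}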
After hours of searching I finally figured it out. It turns out to be some sort of brute-force attack on /xmlrpc.php, as indicated by the thousands of "POST /xmlrpc.php HTTP/1.0" requests.
It's a common WordPress attack. Thanks all.
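If XML-RPC is not needed (no Jetpack, pingbacks or remote publishing), one common mitigation is to block the endpoint in the server block so those requests never reach PHP-FPM; a minimal sketch:
location = /xmlrpc.php {
    deny all;
    access_log off;
    log_not_found off;
}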

Nginx php files loading slow

EDIT: I have noticed that the first time you visit it, it loads fast, and it also loads fast if you close the browser tab and re-visit it. But if you simply reload, or visit it while you already have a tab of it open, it loads slowly. It is really confusing.
Today I come with a problem about PHP CGI. I am brand new to nginx and have just installed it, and I noticed that I also need to start PHP CGI alongside it, because IIS used to start it for me. So I start PHP with the batch file below, but the problem is slow PHP files: they load really slowly, even if there is only HTML in them.
#ECHO off
echo Starting PHP, please wait!
C:\nginx\php7\php-cgi.exe -b 127.0.0.1:9054 -c C:\nginx\php7\php.ini
ping 127.0.0.1 -n 1>NUL
ping 127.0.0.1 >NUL
EXIT
Am I doing anything wrong with my batch file or the nginx configs below? (I have 2 configs.) The example.com one is the website with a .php file, and the nginx (localhost) one just has index.html.
localhost loads super fast, but the example.com one loads really slowly because of PHP.
nginx.conf
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root C:\Users\Administrator\Dropbox\websites\local_website;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
location ~ \.php$ {
if (!-e $document_root$document_uri){return 404;}
fastcgi_pass localhost:9054;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
include vhosts/*.conf;
}
example.com.conf
server {
listen ***.***.**.***:80;
server_name example.com www.example.com;
root C:\Users\Administrator\Dropbox\websites\php_website;
index index.php index.html;
log_not_found off;
charset utf-8;
#access_log logs/example.com-access.log main;
location ~ /\. {allow all;}
location / {
rewrite ^/(|/)$ /index.php?url=$1;
rewrite ^/([a-zA-Z0-9_-]+)(|/)$ /index.php?url=$1;
rewrite ^/(.*)\.htm$ /$1.php;
}
location = /favicon.ico {
}
location = /robots.txt {
}
location ~ \.php$ {
if (!-e $document_root$document_uri){return 404;}
fastcgi_pass localhost:9054;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
I had exactly the same problem as you do. Try changing your fastcgi_pass
from this:
fastcgi_pass localhost:9054;
to this:
fastcgi_pass 127.0.0.1:9054;
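Applied to the question's config, the PHP location becomes (a sketch; everything except the address is unchanged):
location ~ \.php$ {
    if (!-e $document_root$document_uri){return 404;}
    fastcgi_pass 127.0.0.1:9054;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
The likely reason this helps: on Windows, "localhost" can resolve to the IPv6 address ::1 first, while php-cgi.exe is bound only to 127.0.0.1, so each request may wait for the IPv6 connection attempt to fail before falling back, which fits the intermittent slowness described above.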
