I have an OpenCart store and there is an error on the front page. I can still access the backend.
The page loads with the error below:
504 Gateway Time-out
The server didn't respond in time.
I'm including my php.ini configuration; please help me solve this.
Please suggest php.ini changes that I can apply by editing .htaccess.
504 Gateway Timeout error on Nginx + FastCGI (php-fpm)
For Nginx + FastCGI (php-fpm), you should try tweaking the configuration in this way:
Try raising the max_execution_time setting in the php.ini file (the CentOS path is /etc/php.ini):
max_execution_time = 300
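If you are not sure which php.ini file is in use, php --ini prints the loaded configuration file (note that PHP-FPM may load a different file than the CLI):
php --ini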
You should also set the request_terminate_timeout parameter (commented out by default) in PHP-FPM's www.conf file:
pico -w /etc/php-fpm.d/www.conf
Then set the variable to the same value as max_execution_time:
request_terminate_timeout = 300
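After editing www.conf, reload PHP-FPM so the change takes effect (the service name varies by distribution; on CentOS it is usually php-fpm):
service php-fpm restart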
Now let's add the fastcgi_read_timeout directive inside our Nginx virtual host configuration:
location ~ \.php$ {
root /var/www/sites/nginxtips.com;
try_files $uri =404;
fastcgi_pass unix:/tmp/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_read_timeout 300;
}
Then reload nginx:
service nginx reload
504 Gateway Timeout error using Nginx as Proxy
When Nginx is used as a proxy for an Apache web server, this is what you should try in order to fix the 504 Gateway Timeout error:
Add these directives to the nginx.conf file:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
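These directives go in the http block (or in a specific server block); a minimal sketch, assuming the stock /etc/nginx/nginx.conf layout:
http {
...
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
}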
Then reload nginx:
service nginx reload
I have been trying to upload a CSV file through the backend dashboard of my Laravel site, but it's throwing an error: 504 Gateway Time-out (nginx). (This works perfectly in staging.)
While looking for a solution, I have updated the following parameters.
In php.ini --->
post_max_size
upload_max_filesize
max_execution_time
max_input_time
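Roughly, those php.ini changes were of this form (the values here are illustrative, not the exact ones used):
post_max_size = 100M
upload_max_filesize = 100M
max_execution_time = 600
max_input_time = 600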
In nginx config --->
fastcgi_read_timeout
client_max_body_size
send_timeout
keepalive_timeout
When checking the nginx/php logs, there are no errors; I have also tried enabling slow logs, but they still don't show any errors.
Can anyone help me with this issue?
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_read_timeout 1000s;
client_max_body_size 100M;
send_timeout 1000;
keepalive_timeout 1000;
}
I am having issues with a long-running PHP script:
<?php
sleep(70); # sleep longer than the 60-second timeout
phpinfo();
Which gets terminated every time after 60 seconds with a 504 Gateway Time-out response from Nginx.
When I inspect the Nginx errors I can see that the request times out:
... [error] 1312#1312: *2023 upstream timed out (110: Connection timed out) while reading response header from upstream, ... , upstream: "fastcgi://unix:/run/php/php7.0-fpm.sock", ...
I went through the related questions and tried increasing the timeouts by creating a /etc/nginx/conf.d/timeout.conf file with the following content:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
I also read through the Nginx documentation for both fastcgi and core modules, searching for any configurations with defaults set to 60 seconds.
I ruled out the client_* timeouts because they return HTTP 408 instead of HTTP 504 responses.
This is the FastCGI portion of my Nginx server config:
location ~ \.php$ {
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
include fastcgi_params;
}
From what I have read so far, this doesn't seem to be an issue with PHP; rather, Nginx is to blame for the timeout. Nonetheless, I tried modifying the limits in PHP as well:
My values from the phpinfo():
default_socket_timeout=600
max_execution_time=300
max_input_time=-1
memory_limit=512M
The php-fpm pool config also has the following enabled:
catch_workers_output = yes
request_terminate_timeout = 600
There is nothing in the php-fpm logs.
I am also using Amazon's Load Balancer to route to the server, but its timeout configuration has also been increased from the default 60 seconds.
I don't know where else to look; during all the changes I restarted both php-fpm and nginx.
Thank you
As often happens in these cases, I was actually editing the wrong configuration file, one that didn't get loaded by Nginx.
Adding the following to the right file did the trick:
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
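A quick way to confirm which files Nginx actually loads (and whether a directive is picked up) is to dump the effective configuration, for example:
nginx -T | grep fastcgi_read_timeout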
I have a domain on a Plesk server (version 17.8) with CentOS 7. Prestashop is installed on this domain and the products are imported via a self-programmed module.
When I start the import, I get this message:
Service Temporarily Unavailable
The server is unable to service your request due to downtime or capacity problems. Please try again later.
Web Server at sportsams.ch
In the log I get this message: (70007) The timeout specified has expired: AH01075: Error dispatching request to:
PHP settings for the domain:
PHP version: 7.2.18 with FPM
memory_limit: 256M
max_execution_time: 1000
max_input_time: 1000
post_max_size: 16M
upload_max_filesize: 16M
Plesk support told me to make these adjustments:
Plesk > Domains > sportsams.ch > Apache & nginx Settings.
Additional directives for HTTP and Additional directives for HTTPS:
FcgidIdleTimeout 1200
FcgidProcessLifeTime 1200
FcgidConnectTimeout 1200
FcgidIOTimeout 1200
Timeout 1200
ProxyTimeout 120
Click the OK button to apply the changes.
Unfortunately, these settings did not help.
I hope someone else can give me an idea.
If you need more information, let me know.
CentOS 7 server with Plesk 17.8.
PHP version 7.2.18 with FPM
Not sure if this applies to your Plesk configuration, but I've been using the following configuration to set the timeout correctly with PrestaShop & Nginx:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_keep_conn on;
include /etc/nginx/fastcgi_params;
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_read_timeout 3600;
fastcgi_param PHP_VALUE open_basedir="/var/www/myshop.com/:/tmp/";
}
If fastcgi_read_timeout does not work for you, it might be related to a hosting provider limitation that detects you are consuming too many resources.
I hope this helps!
Is it possible to run multiple NGINX instances on a single dedicated server?
I have a dedicated server with 256 GB of RAM, and I am running multiple PHP scripts on it, but it hangs because of the memory used by PHP.
When I check
free -m
it's not even using 1% of memory.
So I am guessing it has something to do with NGINX.
Can I install multiple NGINX instances on this server and use them like
5.5.5.5:8080, 5.5.5.5:8081, 5.5.5.5:8082
I have already allocated 20 GB of memory to PHP, but it is still not working properly.
Reason: NGINX gives a 504 Gateway Time-out.
Either PHP or NGINX is misconfigured.
You may run multiple instances of nginx on the same server provided that some conditions are met, but this is not the solution you should be looking for (and it may not solve your problem at all).
I have my Ubuntu / PHP / Nginx server set up this way (it actually also runs some Node.js servers in parallel). Here is a configuration example which works fine on an AWS EC2 medium instance (m3).
upstream xxx {
# server unix:/var/run/php5-fpm.sock;
server 127.0.0.1:9000 max_fails=0 fail_timeout=10s weight=1;
ip_hash;
keepalive 512;
}
server {
listen 80;
listen 8080;
listen 443 ssl;
#listen [::]:80 ipv6only=on;
server_name xxx.mydomain.io yyy.mydomain.io;
if ( $http_x_forwarded_proto = 'http' ) {
return 301 https://$server_name$request_uri;
}
root /home/ubuntu/www/xxxroot;
index index.php;
location / {
try_files $uri $uri/ /index.php;
}
location ~ ^/(status|ping)$ {
access_log off;
allow 127.0.0.1;
#allow 1.2.3.4#your-ip;
#deny all;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass xxx;
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
#fastcgi_param SCRIPT_FILENAME /xxxroot/$fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $request_filename;
#fastcgi_param DOCUMENT_ROOT /home/ubuntu/www/xxxroot;
# send bad requests to 404
#fastcgi_intercept_errors on;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
Hope it helps,
I think you are running into a timeout. Your PHP scripts seem to run too long.
Check the following:
max_execution_time in your php.ini
request_terminate_timeout in www.conf of your PHP-FPM configuration
fastcgi_read_timeout in the http or location section of your nginx configuration.
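A consistent set of values across the three files might look like this (300 seconds is only an example; pick whatever your scripts actually need):
; php.ini
max_execution_time = 300
; PHP-FPM www.conf
request_terminate_timeout = 300
# nginx location block
fastcgi_read_timeout 300;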
Nginx is designed more to be used as a reverse proxy or load balancer than to control application logic and run php scripts. Running multiple instances of nginx that each execute php isn't really playing to the server application's strengths. As an alternative, I'd recommend using nginx to proxy between one or more apache instances, which are better suited to executing heavy php scripts. http://kbeezie.com/apache-with-nginx/ contains information on getting apache and nginx to play nicely together.
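A minimal sketch of that setup, assuming Apache listens on 127.0.0.1:8080 (the port and host name are illustrative):
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 600;
}
}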
One can set error_reporting in nginx.conf like so:
fastcgi_param PHP_VALUE error_reporting=E_ALL;
But if I do this in one server block, will it affect all the others as well? Should I change php settings in all server blocks simultaneously?
You can set PHP_VALUE per server, and this will affect that server only.
If you need the same PHP_VALUE for all your servers running PHP, use include files.
For example (Debian), create /etc/nginx/conf.d/php_settings.cnf:
fastcgi_param PHP_VALUE "upload_max_filesize=5M;\n error_reporting=E_ALL;";
Then include this file into any server or location config you need:
server {
...
location ~ \.php$ {
...
include /etc/nginx/conf.d/php_settings.cnf;
}
...
}
If every host on your server runs in its own PHP-FPM pool, then adding fastcgi_param PHP_VALUE ... to one nginx host will not affect the other ones.
If, on the other hand, all nginx hosts use one PHP-FPM pool, you should specify PHP_VALUE for every host you have (error_reporting=E_ALL for one of them, an empty value for the others). This is because fastcgi_param passes PHP_VALUE only when it is specified and passes nothing otherwise, so a value set by one request sticks to the worker; after a while all workers will have PHP_VALUE=error_reporting=E_ALL unless you explicitly set PHP_VALUE in the other hosts.
Additionally, fastcgi_param PHP_VALUE ... declarations override one another (the last one takes effect).
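For example (a sketch; per the rule above, only the last PHP_VALUE declaration in the block is the one PHP ends up seeing):
fastcgi_param PHP_VALUE error_reporting=E_ALL;
fastcgi_param PHP_VALUE upload_max_filesize=5M;
To set both options, combine them into a single declaration as in the php_settings.cnf example above:
fastcgi_param PHP_VALUE "error_reporting=E_ALL;\n upload_max_filesize=5M;";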
Steps to reproduce:
apt install nginx php5-fpm
/etc/nginx/sites-enabled/hosts.conf:
server {
server_name s1;
root /srv/www/s1;
location = / {
include fastcgi.conf;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param PHP_VALUE error_reporting=E_ERROR;
}
}
server {
server_name s2;
root /srv/www/s1;
location = / {
include fastcgi.conf;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
}
Add s1, s2 to /etc/hosts
Change pm to static, pm.max_children to 1 in /etc/php5/fpm/pool.d/www.conf
cat /srv/www/s1/index.php:
<?php var_dump(error_reporting());
systemctl restart php5-fpm && systemctl restart nginx
curl s2 && curl s1 && curl s2
int(22527)
int(1)
int(1)
The first request to s2 returns the pool's default error_reporting (22527); the request to s1 sets PHP_VALUE to E_ERROR (1), and the final request to s2 is served by the same worker, which has kept that value.