I am having some trouble with an eCommerce site that uses SagePay as the payment gateway. Some payments complete, others do not, and the error users see is either an Internal Server Error or a 502 Bad Gateway.
I have looked into the server logs (specifically proxy_error_log) and found that each failing transaction shows an error like the following:
2014/12/02 04:24:11 [error] 9179#0: *70668 upstream sent too big header while reading response header from upstream...
After a bit of digging, I found that increasing the proxy buffer sizes is supposed to fix this. I have added the following to /etc/nginx/nginx.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
The second step is to add another block of directives to the location ~ \.php$ { } block in the vhost file:
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
However, the vhost file contains the following warning:
ATTENTION!
DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
Any idea why it says this, and is there a way to get around it?!
If you're using Plesk 11, you can add extra nginx directives per vhost through the Plesk panel.
Go to Domains > example.co.uk > Web Server Settings.
At the bottom of this page is a textarea labelled "Additional nginx directives" where you can drop in your directives. Click OK; Plesk will restart the web server and the directives will take effect.
To add the fastcgi directives within the PHP location block, you'd need to add something like this to the additional nginx directives textarea:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
location ~ \.php$ {
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 256k;
    fastcgi_busy_buffers_size 256k;
}
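To double-check that Plesk picked the directives up, you can validate and dump the merged configuration from a shell (a quick sanity check; nginx -T needs nginx 1.9.2 or later):
# Validate the merged configuration, then dump it and look for the new directives
nginx -t
nginx -T | grep fastcgi_buffer_size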
Related
I am really struggling to get an Azure App Service running Linux and PHP 8 or 8.1 to parse HTML files through PHP. The 8.0 and 8.1 versions run on nginx rather than Apache.
I have started by updating the nginx server block used by the App Service, located at /etc/nginx/sites-enabled/default, to read as below. This is the default, except for the location block now matching HTML alongside PHP:
server {
    listen 8080;
    listen [::]:8080;
    root /home/site/wwwroot;
    index index.php index.html index.htm;
    server_name example.com www.example.com;
    port_in_redirect off;

    location / {
        index index.php index.html index.htm hostingstart.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /html/;
    }

    location ~ [^/]\.(php|html)(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
        fastcgi_param HTTP_PROXY "";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_intercept_errors on;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 3600;
        fastcgi_read_timeout 3600;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
    }
}
Nginx then correctly attempts to parse HTML files through PHP; however, they all display the following error:
NOTICE: Access to the script '/home/site/wwwroot/login.html' has been denied (see security.limit_extensions)
I have then gone to /usr/local/etc/php-fpm.d/www.conf and added the following line:
security.limit_extensions = .php .php3 .php4 .php5 .php7 .html
Normally, this would work. Under an App Service, however, I still receive the security.limit_extensions error message. Running php-fpm -tt appears to show that the correct settings are in place and applied.
Other posts reference SELinux, which doesn't appear to be running, and cgi.fix_pathinfo, which is set to 1.
I am aware that changes to the above files do not persist on App Services after a restart, but they are currently in place.
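In the meantime, this is the sort of startup script I'd point the App Service Startup Command at, so the edits are reapplied on boot (a sketch; the /home/site/config paths are placeholders I chose, since only /home persists across restarts):
#!/bin/bash
# Keep master copies under /home (which persists) and copy them into place on boot.
cp /home/site/config/default /etc/nginx/sites-enabled/default
cp /home/site/config/www.conf /usr/local/etc/php-fpm.d/www.conf
# Reload so the restored config takes effect (exact service commands may differ per image).
service nginx reload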
Has anyone (please!!) got this to work and successfully parsed HTML as PHP on PHP 8.1 on an Azure App Service?
When developing locally on this project, I'm running into an issue where, when my PHP Laravel application throws a 500 error, I see a 502 Bad Gateway instead of an error page rendered by PHP. I do have the following env vars set:
APP_ENV=local
APP_DEBUG=true
APP_LOG_LEVEL=debug
In prod, I see Laravel resolve the 500.blade.php error page as expected, but locally nothing is shown.
For example, a bad method call can trigger this:
2022/09/04 22:19:45 [error] 867#867: *103 FastCGI sent in stderr: "PHP message: [2022-09-04 22:19:45] local.ERROR: Call to undefined method....
I haven't been able to identify any configuration setting that I can tweak within nginx that'll enable it to show errors rather than a Bad Gateway.
Any suggestions on what configuration might need to be changed here?
Nginx configuration:
server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    server_name app;
    access_log off;
    error_log /dev/stdout;
    root /var/www/html/public;
    index index.php;
    charset utf-8;

    # this causes issues with Docker
    sendfile off;

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    # look for local files on the container before sending the request to fpm
    location / {
        try_files $uri /index.php?$query_string;
    }

    # nothing local, let fpm handle it
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass localhost:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        # Httpoxy exploit (https://httpoxy.org/) fix
        fastcgi_param HTTP_PROXY "";
        # allow larger POSTs for handling oauth tokens
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

    # Deny .htaccess file access
    location ~ /\.ht {
        deny all;
    }
}
I created my nginx virtual host this way and it's working for me. I'm using port 8000, but you can use port 80. However, it's preferable not to map port 80 to any single project in local development; since we usually work on multiple projects, it's better to give each project its own port.
server {
    listen 8000;
    root /var/www/html/<project-path>/public;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    # pass the PHP scripts to the FastCGI server listening on /var/run/php/php7.4-fpm.sock
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I hope that will help you.
I believe the behavior is caused by the PHP configuration, not by nginx.
Try setting
display_errors = On
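For example, in php.ini (a sketch; note that in ini syntax a trailing ; starts a comment, so don't terminate the line with one):
; php.ini -- or php_admin_flag[display_errors] = on in the PHP-FPM pool config
display_errors = On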
Unless otherwise instructed, nginx passes through the exact error code it receives.
There is one other alternative I can think of: perhaps the script is timing out on error for some reason, causing the 502.
You simply need to route the error codes to your index.php file so Laravel can deal with them.
This has been covered in multiple other questions on Stack Overflow. Here's one:
Allow Laravel to respond to 403 instead of nginx
Just use 500 (502) instead of 403.
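Adapted to this case, a sketch of what that looks like in the nginx server block:
# Hand errors back to the front controller so Laravel can render them.
# For status codes returned by PHP-FPM itself, fastcgi_intercept_errors on
# is needed as well; nginx-generated 502s are routed by error_page alone.
error_page 500 502 = /index.php;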
What is the actual response header you are getting from your request?
I'd suggest you run some tests to isolate whether this is a problem with the nginx config, the PHP config, or your Laravel environment and error handling.
You can create a test.php file in your public folder, e.g.:
<?php
http_response_code(500);
TEST; // undefined constant -- deliberately triggers a PHP error
Now, if you open site/test.php, do you get a 500 error or a 502 error? And does it display something like "undefined constant TEST"?
Or edit public/index.php and just break the code (e.g. add a stray . somewhere); do you also get a 502 response in your Laravel app?
502 errors usually happen when nginx acts as a proxy, or when it does not get a valid response from upstream. You can enable debug mode to see what happens with your request.
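For instance (the debug log level requires an nginx binary built with --with-debug):
error_log /var/log/nginx/error.log debug;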
Also post your nginx.conf.
And maybe try adding fastcgi_intercept_errors off; to your PHP or main location block.
EDIT
Another possible cause is an upstream response that is too big, larger than your configured nginx buffer size.
You can try increasing the buffer sizes.
Add these inside the http block in whatever-environment-you-have/nginx/nginx.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Then, in your ~ \.php$ location block, add the following:
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
If it still doesn't work, try increasing them all to 4096k.
I have an Ubuntu 18.04 nginx VPS running 4 WordPress sites (HTTPS enabled). These sites use the Cloudflare plugin. When I publish new posts or purge the Cloudflare cache using the WordPress plugin, it gives the following error:
PHP message: [Cloudflare] ERROR: Bad Request" while reading response header from upstream, client
In my Nginx configuration, I have increased the fastcgi_buffer_size to 32k.
fastcgi_buffer_size 32k;
Moreover, I have increased the following nginx header-related settings to handle large headers:
client_header_buffer_size 64k;
http2_max_header_size 32k;
large_client_header_buffers 4 64k;
proxy_buffer_size 256k;
proxy_buffers 8 256k;
proxy_busy_buffers_size 512k;
But I'm still getting this error. How do I determine how far these values should be increased? Or is there a better way to fix this error?
This is the full error log: https://pastebin.com/aPfZ5V7a
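For reference, a rough way I've been checking how big the response headers actually get (the URL is a placeholder), to compare against the buffer sizes above:
# Dump only the response headers and count the bytes
curl -s -D - -o /dev/null https://example.com/ | wc -c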
I configured an AWS server for both my Magento and WordPress sites (2 of each). My base config was:
nginx
MySQL
PHP
The sites and server worked over the last 2 years without any major issues. Recently I had to migrate to a new EC2 instance, and that's where the issues started: I realised the websites were available for less than a minute after restarting nginx before they started throwing "502 Bad Gateway".
My nginx host files have this PHP configuration:
location ~ \.php$ {
    if (!-e $request_filename) { rewrite / /index.php last; }
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param MAGE_RUN_CODE default;
    fastcgi_param MAGE_RUN_TYPE store;
    include fastcgi_params; ## See /etc/nginx/fastcgi_params;
    ## Tweak fastcgi buffers, just in case.
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 4k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
}
Then I dug a bit deeper and found that the cause of the error was not nginx itself: the php5-fpm service was collapsing every 30 or 45 seconds, even with very low traffic on the websites.
For now I am running a cron job that restarts the php5-fpm service every 30 seconds. I know this is killing a fly with a bazooka, but for now it keeps the sites up and running.
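The crontab looks something like this (cron fires at most once per minute, hence the sleep to get the second restart within the same minute):
# root crontab: restart php5-fpm at :00 and :30 of every minute
* * * * * service php5-fpm restart
* * * * * sleep 30 && service php5-fpm restart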
Any help would be very, very welcome!
I get 502 Bad Gateway for some requests on my server. I get it for some particular AJAX requests but if I replay the failed request in the console, it works (wtf). In nginx/error.log it says
[error] 13867#0: *74180 recv() failed (104: Connection reset by peer) while reading response header from upstream
My website is in PHP. Thanks!
I had a similar problem on a WordPress site. Add these lines inside the http block of the /etc/nginx/nginx.conf file:
fastcgi_temp_file_write_size 10m;
fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;
If it is still not working, also add this line:
client_max_body_size 50M;
I had a similar problem with my GitLab setup on nginx. What helped solve it was raising the maximum client body size with the client_max_body_size 50m directive inside the http block of the /etc/nginx/nginx.conf file.
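i.e. a minimal sketch:
http {
    # raise the limit from nginx's 1m default
    client_max_body_size 50m;
}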