I am aware that there are many similar questions posted on Stack Overflow and I have been through them, but they still don't resolve my issue. Please read on before marking this as a duplicate.
I have hosted my Laravel-based web application with Nginx. The web application is accessible just fine both locally and on the server. However, there is one particular URL where, when too much data is being returned, the server fails the request and returns a 404 error.
The nginx error log shows the following message:
Nginx Upstream prematurely closed FastCGI stdout while reading response header from upstream
Attempted Solution
I have tried adjusting settings in both the PHP ini file and the nginx conf files, to no avail. I also restarted the services using:
systemctl restart nginx
systemctl restart php-fpm
PHP.ini
upload_max_filesize = 256M
post_max_size = 1000M
nginx conf
client_max_body_size 300M;
client_body_timeout 2024;
client_header_timeout 2024;
fastcgi_buffers 16 512k;
fastcgi_buffer_size 512k;
fastcgi_read_timeout 500;
fastcgi_send_timeout 500;
Can someone kindly tell me what I am missing?
This error usually occurs when a web page produces more output than the server can handle. So, the first steps to resolve this error are the following:
1. Reduce the amount of content on the page
2. Remove any errors, warnings and notices
3. Make sure your code is clean
Also modify the configuration of Nginx, such as what you wrote above.
If there are too many warning messages and they cannot all be fixed, you can raise the error level so that only errors are reported: error_reporting(E_ERROR);
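For illustration, a minimal sketch of what raising the error level can look like in application bootstrap code (the log path below is only an example, not something from the question):

<?php
// Report only fatal errors so notices/warnings don't flood the FastCGI response.
error_reporting(E_ERROR);

// Optionally keep errors out of the response body entirely and send them to a
// log file instead (example path, adjust to your setup).
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php-app-errors.log');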
Related
I know this is a massive repost but I couldn't figure this out. The server is Ubuntu running nginx.
Doing phpinfo() I see the configuration file I am using is /etc/php/7.0/fpm/php.ini.
These are the properties I set:
upload_max_filesize = 256M
post_max_size = 256M
I restarted nginx, as well as the php7.0-fpm process and the max upload size is still not changing.
I am using WordPress, so as a last resort I even tried a plugin that increases the max upload size, and even that didn't work.
I also tried setting it in my .htaccess as well and still nothing:
php_value post_max_size 256M
php_value upload_max_filesize 256M
By default NGINX has a limit of 1MB on file uploads. To change this you will need to set the client_max_body_size directive. You can do this in the http block of your nginx.conf:
http {
#...
client_max_body_size 100m;
client_body_timeout 120s; # Default is 60, May need to be increased for very large uploads
#...
}
If you are expecting uploads of very large files, where upload times will exceed 60 seconds, you will also need to set the client_body_timeout directive to a larger value.
After updating your NGINX configuration, don't forget to restart NGINX.
You need to restart nginx and php to reload the configs. This can be done using the following commands:
sudo service nginx restart
sudo service php7.0-fpm restart
Note: if you don't host multiple websites, just add it to the server block:
server {
client_max_body_size 8M;
}
The answer I found here:
Have you tried to put your php.ini under /etc/php5/fpm/php.ini? This is normally the default location that php reads from, if I understand php5-fpm correctly.
A couple of things.
When you mention that your server uses nginx, it is unnecessary to use an .htaccess file, since those are for Apache servers.
That being said, I would try a couple of things.
Do you know which ini file your PHP instance is using?
You mention the one for PHP 7, but you could also have PHP 5 installed.
If you go to your console and type "php --ini", what is the loaded configuration file?
Once you know that, using vi / vim or your editor of choice you can set:
upload_max_filesize = 100M
post_max_size = 100M
Now, take into account that you have to restart your services, both php and nginx:
for php 5:
service php5-fpm reload
for php 7:
service php7-fpm reload
for nginx:
service nginx reload
Try printing the current values as well:
$uploadMaxFilesize = ini_get('upload_max_filesize');
$postMaxSize = ini_get('post_max_size');
Also, since this is for WordPress, did you try setting it up in the WordPress admin settings?
Admin Dashboard > Settings > Upload Settings
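To make the verification step above concrete, here is a small throwaway script (an assumption on my part: you can drop a file such as check-limits.php into the web root and remove it afterwards). Run it through the web server rather than the CLI, because php-fpm and the CLI often load different ini files:

<?php
// check-limits.php - confirm which ini file PHP-FPM actually loaded and what
// the effective upload limits are. Delete this file when done.
echo 'Loaded php.ini: ' . php_ini_loaded_file() . PHP_EOL;
echo 'Extra ini files: ' . php_ini_scanned_files() . PHP_EOL;
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . PHP_EOL;
echo 'post_max_size: ' . ini_get('post_max_size') . PHP_EOL;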
There are a couple of things I had to change to get it working for me.
Firstly, from user NID's answer here, add this to your /etc/nginx/nginx.conf file (inside the http block):
http {
#...
client_max_body_size 100m;
client_body_timeout 120s; # Default is 60, May need to be increased for very large uploads
#...
}
Then I also had to follow something similar to user Just Rudy here.
Edit your php.ini file - contrary to many guides, for me it was not located in the WordPress root folder, but at /etc/php/7.2/fpm/php.ini.
There should be some predefined values for upload_max_filesize and post_max_size. Change these to what you want.
Restart nginx and php-fpm:
sudo systemctl reload nginx
sudo systemctl restart php7.2-fpm
I suddenly get a 502 Bad Gateway error and I don't understand why this error appears. Moreover, the error appears only on one single page!
The exact error in my Nginx log is:
Upstream prematurely closed FastCGI stdout while reading response header from upstream [..] upstream: "fastcgi://unix:/var/run/php5-fpm.sock:"
I tried:
service nginx restart: NOTHING CHANGED
service php5-fpm restart: NOTHING CHANGED
rebooting the server: NOTHING CHANGED
even restarting the mysql service: NOTHING CHANGED
My /var/log/upstart/php5-fpm.log (only a lot of NOTICEs):
Terminating...
exiting, bye-bye!
fpm is running, pid 9887
ready to handle connections
systemd monitor interval set to 10000ms
That is driving me crazy, any idea?
Strangely enough, this is what worked for me after chasing down a mysterious 502 Bad Gateway error:
Change error_reporting setting in php-fpm ini like this:
error_reporting = ~E_ALL
Then restart php-fpm and nginx
This worked for me after trying the suggested nginx settings above to no avail. I don't have a verified explanation for this, but it would appear that excessive error reporting can overload the php-fpm process.
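As a rough runtime equivalent of that ini change (just a sketch, not part of the original answer): ~E_ALL clears all the standard error-reporting bits, so it effectively silences reporting altogether; treat it as a diagnostic step rather than a permanent fix.

<?php
// Runtime equivalent of "error_reporting = ~E_ALL": standard error levels all off.
error_reporting(~E_ALL);

// A gentler alternative keeps fatal errors but silences notices, warnings and
// deprecations:
// error_reporting(E_ALL & ~E_NOTICE & ~E_WARNING & ~E_DEPRECATED);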
Try to include <fcgi_stdio.h> in your source file.
This error happened to me just now; then I added #include <fcgi_stdio.h> in my C file. Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcgi_config.h>
#include <fcgi_stdio.h>   /* overrides stdio so printf() writes to the FastCGI stream */

int main()
{
    int count = 0;
    /* FCGI_Accept() blocks until the web server sends the next request */
    while (FCGI_Accept() >= 0)
    {
        printf("content-type:text/html\r\n");
        printf("\r\n");
        printf("<title>Fast CGI Hello</title>");
        printf("<h1>fast CGI hello</h1>");
        printf("Request number %d running on host <i>%s</i>\n", ++count, getenv("SERVER_NAME"));
    }
    return 0;
}
I had the error only for one type of request, which had 12 items in the body. Maybe it was related to the body size, but it worked when there were more items (13 and above) or fewer (11 and below).
upstream prematurely closed FastCGI stdout while reading response header from upstream
2021/12/24 07:14:44 [error] 9#9: *49 upstream prematurely closed FastCGI stdout while reading response header from upstream, client: 172.21.0.1, server: , request: "PUT /v1/MY_PAGE/a976d2e5-afc0-4f6d-8d05-17af7dc73f46 HTTP/1.1", upstream: "fastcgi://172.21.0.3:9000", host: "localhost"
I used Docker containers with Nginx and PHP-FPM.
Nginx returned 502 Bad Gateway.
PHP-FPM was using XDebug, and that caused the error.
When I turned on debugging the error went away, and when I turned off debugging the error returned.
Solution
To solve the error I disabled XDebug. It also worked when I removed XDebug entirely.
Options to solve the error (one of these):
Disable XDebug (one of these):
Set the XDEBUG_MODE='off' environment variable.
Set xdebug.mode=off in xdebug.ini or another PHP config.
https://xdebug.org/docs/all_settings#mode
Remove XDebug from your PHP server and restart PHP-FPM (one of these):
Remove through PECL (works with Docker): pecl uninstall xdebug.
Remove through Ubuntu APT: sudo apt-get purge php-xdebug.
Restart your PHP-FPM or your service/server.
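If you are not sure whether Xdebug is what your PHP-FPM container is actually loading, a quick check like the following can save some guessing (my own sketch; run it through the web server, not the CLI, since they may use different ini files):

<?php
// Quick Xdebug check - run via PHP-FPM, not the CLI, since their configs differ.
if (extension_loaded('xdebug')) {
    echo 'Xdebug is loaded, mode: ' . (ini_get('xdebug.mode') ?: '(default)') . PHP_EOL;
} else {
    echo 'Xdebug is not loaded' . PHP_EOL;
}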
I did the following in the server's nginx config:
client_body_timeout 1200;
client_header_timeout 600;
and added zend_extension to php fpm php.ini:
zend_extension = xdebug.so
I use nginx/1.6 and Laravel. When I post data to the server I get the error 413 Request Entity Too Large. I tried many solutions, as below:
1- set client_max_body_size 100m; in the server, location and http blocks in nginx.conf.
2- set upload_max_filesize = 100m in php.ini
3- set post_max_size = 100m in php.ini
After restarting php5-fpm and nginx, the problem was still not solved.
Add ‘client_max_body_size xxM’ inside the http section in /etc/nginx/nginx.conf, where xx is the size (in megabytes) that you want to allow.
http {
client_max_body_size 20M;
}
I had the same issue, but in Docker. When I faced this issue, I added client_max_body_size 120M; to my Nginx server configuration.
The nginx default configuration file path is /etc/nginx/conf.d/default.conf:
server {
client_max_body_size 120M;
...
This sets the max body size to 120 megabytes. Pay attention to where you put client_max_body_size, because its placement determines its scope: for example, if you put client_max_body_size in a location block, only that location will be affected.
After that, I added these three lines to my PHP Dockerfile:
RUN echo "max_file_uploads=100" >> /usr/local/etc/php/conf.d/docker-php-ext-max_file_uploads.ini
RUN echo "post_max_size=120M" >> /usr/local/etc/php/conf.d/docker-php-ext-post_max_size.ini
RUN echo "upload_max_filesize=120M" >> /usr/local/etc/php/conf.d/docker-php-ext-upload_max_filesize.ini
Since the Docker PHP image automatically includes all settings files from /usr/local/etc/php/conf.d/ into php.ini, the PHP configuration will be changed by these three lines and the issue should disappear.
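To confirm those conf.d overrides actually reached PHP-FPM inside the container, a small check script like this can help (purely illustrative, not from the answer); it reports the effective limit, which is the smaller of upload_max_filesize and post_max_size:

<?php
// Convert shorthand ini sizes like "120M" or "2G" to bytes for comparison.
function iniToBytes($value)
{
    $value = trim($value);
    $unit  = strtolower(substr($value, -1));
    $num   = (int) $value;
    switch ($unit) {
        case 'g': return $num * 1024 * 1024 * 1024;
        case 'm': return $num * 1024 * 1024;
        case 'k': return $num * 1024;
        default:  return $num;
    }
}

$upload = ini_get('upload_max_filesize');
$post   = ini_get('post_max_size');

echo "upload_max_filesize: $upload" . PHP_EOL;
echo "post_max_size: $post" . PHP_EOL;
echo 'Effective upload limit (bytes): '
    . min(iniToBytes($upload), iniToBytes($post)) . PHP_EOL;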
I am using Docker and PHP 7.4 and I faced this issue too.
I just added a line to the *.conf file inside the docker/php74/ directory:
server {
client_max_body_size 100M;
...
}
I have a php script which does a few curl requests through proxies.
I keep getting a 500 error after about 30s. I have no idea why as there is nothing in the nginx error log and nothing in the httpd error log.
I have set php timeout to 600s.
I also tried to add this to my script:
ini_set('error_log','/var/www/php_errors.txt');
so I could get info but nothing there either.
I also added these directives to the nginx.conf file:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
How can I test to see if my nginx timeout setting is functioning?
Is there an equivalent of php_info for nginx?
Wow after 3 days I finally worked this out.
The 500 timeout issue is due to the FastCGI timeout setting.
Answered here: PHP curl put 500 error
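On the PHP side, it can also help to make sure the script itself stays inside whatever FastCGI window you configure. A hedged sketch (the 600-second value mirrors the timeouts quoted in the question; the URL is a placeholder):

<?php
// Allow the script to run as long as the FastCGI/proxy timeouts permit.
set_time_limit(600);

$ch = curl_init('https://example.com/slow-endpoint'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);  // give up on connects after 30s
curl_setopt($ch, CURLOPT_TIMEOUT, 540);        // keep the transfer inside the 600s window
$response = curl_exec($ch);

if ($response === false) {
    error_log('curl failed: ' . curl_error($ch));
}
curl_close($ch);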
I've just installed an nginx + php-fpm server. Everything seems fine except that PHP-FPM never writes errors to its log.
fpm.conf
[default]
listen = /var/run/php-fpm/default.sock
listen.allowed_clients = 127.0.0.1
listen.owner = webusr
listen.group = webusr
listen.mode = 0666
user = webusr
group = webusr
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.status_path = /php/fpm/status
ping.path = /php/fpm/ping
request_terminate_timeout = 30s
request_slowlog_timeout = 10s
slowlog = /var/log/php-fpm/default/slow.log
chroot = /var/www/sites/webusr
catch_workers_output = yes
env[HOSTNAME] = mapsvr.mapking.com
php_flag[display_errors] = on
php_admin_value[error_log] = /var/log/php-fpm/default/error.log
php_admin_flag[log_errors] = on
nginx.conf
server
{
listen 80 default_server;
server_name _;
charset utf-8;
access_log /var/log/nginx/access.log rest;
include conf.d/drops.conf.inc;
location /
{
root /var/www/sites/webusr/htdocs;
index index.html index.htm index.php;
}
# pass the PHP scripts to FastCGI server listening on socket
#
location ~ \.php$
{
root /var/www/sites/webusr/htdocs;
include /etc/nginx/fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /htdocs/$fastcgi_script_name;
if (-f $request_filename)
{
fastcgi_pass unix:/var/run/php-fpm/default.sock;
}
}
location = /php/fpm/status
{
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php-fpm/default.sock;
}
location = /php/fpm/ping
{
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php-fpm/default.sock;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html
{
root /usr/share/nginx/html;
}
}
I've made an erroneous php script and run it, and I see the error output in the web browser. The nginx error log also records the stderr output from fpm with the same message. I've checked that the user has write permission (I've even tried 777) on the appointed log folder. The appointed error.log file has even been created successfully by php-fpm. However, the log file is always empty, no matter what outrageous error is made in the php script.
What's going on?
[Found the reason quite a while later]
It was permissions. Changing the owner to the site's user solved the problem.
This worked for me:
; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Default Value: no
catch_workers_output = yes
Edit:
The file to edit is the one that configures your desired pool.
By default it's: /etc/php-fpm.d/www.conf
I struggled with this for a long time before finding my php-fpm logs were being written to /var/log/upstart/php5-fpm.log. It appears to be a bug between how upstart and php-fpm interact. See more here: https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1319595
I had a similar issue and had to do the following to the pool.d/www.conf file
php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_flag[log_errors] = on
It still wasn't writing the log file, so I actually had to create it with touch /var/log/fpm-php.www.log and then set the correct owner with sudo chown www-data:www-data /var/log/fpm-php.www.log.
Once this was done, and php5-fpm restarted, logging was resumed.
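If you want to confirm from inside a request that the worker can actually write to that file, a tiny check such as this works (my own sketch, assuming the same /var/log/fpm-php.www.log path as above):

<?php
// Verify the FPM worker can write to the configured error log.
$log = ini_get('error_log') ?: '/var/log/fpm-php.www.log';

if (is_writable($log)) {
    error_log('log write test at ' . date('c'));   // should appear in $log
    echo "error_log is writable: $log" . PHP_EOL;
} else {
    echo "error_log is NOT writable by this worker: $log" . PHP_EOL;
}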
There are multiple php config files, but THIS is the one you need to edit:
/etc/php(version)?/fpm/pool.d/www.conf
uncomment the line that says:
catch_workers_output
That will allow PHP's stderr to go to php-fpm's error log instead of /dev/null.
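A small way to see the effect of that setting (an illustrative sketch, not from the answer): anything a worker writes to stderr only shows up in php-fpm's error log when catch_workers_output is on.

<?php
// With catch_workers_output = yes this line lands in php-fpm's error log;
// without it, worker stderr is discarded (redirected to /dev/null).
file_put_contents('php://stderr', "stderr test from PHP-FPM worker\n");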
I gathered insights from a bunch of answers here and I present a comprehensive solution:
So, if you set up nginx with php5-fpm and log a message using error_log(), you can see it in /var/log/nginx/error.log by default.
A problem can arise if you want to log a lot of data (say an array) using error_log(print_r($myArr, true));. If an array is large enough, it seems that nginx will truncate your log entry.
To get around this you can configure fpm (php.net fpm config) to manage logs. Here are the steps to do so.
Open /etc/php5/fpm/pool.d/www.conf:
$ sudo nano /etc/php5/fpm/pool.d/www.conf
Uncomment the following two lines by removing ; at the beginning of the line: (error_log is defined here: php.net)
;php_admin_value[error_log] = /var/log/fpm-php.www.log
;php_admin_flag[log_errors] = on
Create /var/log/fpm-php.www.log:
$ sudo touch /var/log/fpm-php.www.log;
Change ownership of /var/log/fpm-php.www.log so that php5-fpm can edit it:
$ sudo chown vagrant /var/log/fpm-php.www.log
Note: vagrant is the user that I need to give ownership to. You can see what user this should be for you by running $ ps aux | grep php.*www and looking at the first column.
Restart php5-fpm:
$ sudo service php5-fpm restart
Now your logs will be in /var/log/fpm-php.www.log.
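Once those steps are done, a one-off request like this hypothetical test script confirms end to end that large error_log() entries land intact in /var/log/fpm-php.www.log instead of being truncated in the nginx log:

<?php
// log-test.php - write a large entry and check it arrives intact.
$bigArray = array_fill(0, 500, str_repeat('x', 64));
error_log('log-test start: ' . date('c'));
error_log(print_r($bigArray, true)); // this is the kind of entry nginx would truncate
error_log('log-test end');
echo 'Wrote test entries, now check /var/log/fpm-php.www.log' . PHP_EOL;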
There is a bug (https://bugs.php.net/bug.php?id=61045) in php-fpm from v5.3.9 up to the current releases (5.3.14 and 5.4.4). The developer promised the fix will go live in the next release. If you don't want to wait, use the patch on that page and rebuild, or roll back to 5.3.8.
In your fpm.conf file you haven't set two variables which are specifically for error logging.
The variables are error_log (file path of your error log file) and log_level (error logging level).
; Error log file
; Note: the default prefix is /usr/local/php/var
; Default Value: log/php-fpm.log
error_log = log/php-fpm.log
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
log_level = notice
I'd like to add another tip to the existing answers because they did not solve my problem.
Watch out for the following nginx directive in your php location block:
fastcgi_intercept_errors on;
Removing this line has brought an end to many hours of struggling and pulling hair.
It could be hidden in some included conf directory, like /etc/nginx/default.d/php.conf on my Fedora.
In my case I saw that the error log was going to /var/log/php-fpm/www-error.log, so I commented out this line in /etc/php-fpm.d/www.conf:
php_flag[display_errors] = on
(with php_flag[display_errors] = on, the log will be at /var/log/php-fpm/www-error.log)
and as said above I also uncommented this line
catch_workers_output = yes
Now I can see logs in the file specified by nginx.
In my case php-fpm output a 500 error without any logging because of a missing php-mysql module. I had moved a Joomla installation to another server and forgot about it. So apt-get install php-mysql and a service restart solved it.
I started by trying to fix the broken logging, without success. Finally, with strace, I found the failure message after DB-related system calls. Though my case is not directly related to the OP's question, I hope it can be useful.
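A quick way to catch that kind of missing-extension problem before an application such as Joomla falls over is to check for the database extensions explicitly; a minimal sketch of my own:

<?php
// Check that the MySQL-related extensions the application needs are present.
foreach (array('mysqli', 'pdo_mysql') as $ext) {
    echo $ext . ': ' . (extension_loaded($ext) ? 'loaded' : 'MISSING') . PHP_EOL;
}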
On Alpine 3.15 with php8 I found the log at /var/log/php8/error.log:
/var/log/php8 # cat error.log
16:10:52] NOTICE: fpm is running, pid 14
16:10:52] NOTICE: ready to handle connections
I also have this:
catch_workers_output = yes
Check the owner of the PHP-FPM log directory.
You can do:
ls -lah /var/log/php-fpm/
chown -R webusr:webusr /var/log/php-fpm/
chmod -R 777 /var/log/php-fpm/