I have a multipart file upload in a form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180 and confirmed on the upload page that these values are in effect, and I've set Timeout 180 in Apache. I've also set
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
When I upload a 250MB file on a fast connection it works fine. When I'm on a slower connection, or use a network link conditioner to artificially slow it down, the same file times out, and Chrome gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds) reliably. I've also tried other browsers with the same outcome, just different error messages.
There is no indication of an error in any log, and I've tried both HTTP and HTTPS.
What would cause the upload connection to be reset after 1 minute?
EDIT
I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at 1 minute.
I've also made a sleep script that finishes after two and a half minutes, and that works: the page takes around 2.5 minutes to load, so I can't see how it's browser or header related.
I've also used a server with more RAM to ensure it's not related to that. I've tested on 3 different servers with different specs but all from the same CentOS 7 base.
I've now also upgraded to PHP 7.2 and updated the relevant settings again, with no change in the problem.
EDIT 2
The tech stack for this isolated instance is
Apache 2.4.6
PHP 5.6 / 7.2 (tried both), has OPCache
Redis 3.2.6 for session information and key / value storage (ElastiCache)
PostgreSQL 10.2 (RDS)
Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.
EDIT 3
Here are some logs from the Chrome network debugger (net_error -101 corresponds to ERR_CONNECTION_RESET):
{"params":{"net_error":-101,"os_error":32},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":69},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
I went through a similar problem; in my case it was related to mod_reqtimeout. Adding:
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
to httpd.conf did the trick!
You can check the mod_reqtimeout documentation.
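If you just want to rule the module out while testing, the documentation notes that a value of 0 disables the corresponding timeout; a quick sketch of that (not something to leave enabled in production):
RequestReadTimeout header=0 body=0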
Hope it helps!
Original source here
ERR_CONNECTION_RESET usually means that the connection to the server has ceased without sending any response to the client. This means that the entire PHP process has died without being able to shut down properly.
This is usually not caused by something like an exceeded memory_limit. It could be some sort of Segmentation Fault or something like that. If you have access to error logs, check them. Otherwise, you might get support from your hosting company.
I would recommend trying some of these things:
Try cleaning the browser's cache. If you have already visited the page, it is possible for the cache to contain information that doesn’t match the current version of the website and so blocks the connection setup, making the ERR_CONNECTION_RESET message appear.
Add the following to your php.ini settings:
memory_limit = 1024M
max_input_vars = 2000
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 990
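To confirm the web server is actually picking these values up (the CLI and the Apache/FPM SAPIs often load different php.ini files), a quick hypothetical check script could dump what the web SAPI sees:
<?php
// check.php (hypothetical name): dump the limits the web SAPI actually uses.
var_dump(
    ini_get('upload_max_filesize'),
    ini_get('post_max_size'),
    ini_get('max_execution_time'),
    ini_get('max_input_time')
);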
Try setting the following input in your form:
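Presumably this refers to PHP's MAX_FILE_SIZE hidden field; a sketch, with the value assumed to match the 300M limits above:
<!-- hypothetical value: 300000000 bytes, roughly 300 MB; must appear before the file input -->
<input type="hidden" name="MAX_FILE_SIZE" value="300000000" />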
In your processing script, increase the execution time limit:
set_time_limit(200);
You might need to increase the SSL renegotiation buffer size in your Apache config file.
SSLRenegBufferSize 10486000
The name and location of the conf file is different depending on distributions.
In Debian you find the conf file in /etc/apache2/sites-available/default-ssl.conf
Sometimes it is the mod_security module that blocks POSTs of data larger than approximately 171 KB. Try adding/modifying the following in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
I hope something might work out!
In case anybody else runs into this: there is also a problem with this relating to PHP-FPM. If you don't set ProxyTimeout in your httpd.conf, the proxy connection to PHP-FPM uses a default timeout of one minute. It took me several hours to figure out the problem, as I was initially thinking of all the normal settings like everyone else.
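For reference, a minimal sketch of the relevant httpd.conf lines, assuming PHP-FPM is reached through mod_proxy_fcgi over the distribution's default socket path (adjust the socket and the value to your setup):
# Hand .php requests to PHP-FPM via mod_proxy_fcgi (socket path is the CentOS default; adjust as needed)
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>
# Allow proxied requests (including the FPM connection) to run for 5 minutes
ProxyTimeout 300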
I had the same problem. I used a resumable (chunked) file upload approach, where if the connection drops and comes back, the upload resumes from where it left off.
Check out the library https://packagist.org/packages/pion/laravel-chunk-upload
Installation
composer require pion/laravel-chunk-upload
Add service provider
\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class
Publish the config
php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
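A rough controller sketch, loosely following the library's README (class, method, and path names here are illustrative and may differ between versions):
<?php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

class UploadController extends Controller
{
    public function upload(Request $request)
    {
        // Build a receiver for the "file" field; the chunk handler is detected from the request.
        $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));

        if ($receiver->isUploaded() === false) {
            throw new UploadMissingFileException();
        }

        $save = $receiver->receive();

        if ($save->isFinished()) {
            // All chunks received: move the assembled file to its final location (illustrative path).
            $file = $save->getFile();
            $file->move(storage_path('app/uploads'), $file->getClientOriginalName());
            return response()->json(['done' => 100]);
        }

        // Still receiving chunks: report progress to the client-side uploader.
        return response()->json(['done' => $save->handler()->getPercentageDone()]);
    }
}
The front end then needs a chunking uploader (the library's README covers resumable.js and DropzoneJS, among others) pointed at a route for this method.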
In my opinion it may be related to one of these:
About the Apache config (/etc/httpd2/conf or /etc/apache2/conf):
Timeout 300
About php config ('php.ini'):
upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300
About the PostgreSQL config (execute this statement):
SET statement_timeout TO 0;
About the proxy (or Apache mod_proxy): it may also be due to the proxy timeout configuration.
In case anyone has the same issue: the problem I encountered is that the HTTP request has to go through a proxy server and a WAF. Small file uploads are fine, but with large files the TCP connection is automatically closed. How to validate:
Simply change your hosts file to point the domain at the web server's IP address (or use Firefox with no proxy if there is no WAF). If the problem goes away, it is caused by the proxy or the WAF between your web server and the browser.
A connection reset occurs when the PHP process dies without a proper error message.
Changing the Oracle client version from 19 to 12c, and then configuring it appropriately in php.ini, solved the connection reset issue for our team.
I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the Timeout value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
Then you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one. There are two: one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run:
<?php
phpinfo();
?>
and look for "Loaded Configuration File".
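For comparison, the command line reports its own loaded file, which is exactly why the two can disagree (assuming php is on your PATH):
php -i | grep "Loaded Configuration File"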
I finally worked out the reason the request times out. The problem lies with having virtual server hosting.
The request from the web browser is sent to the hosting server which then directs the request to the virtual server (acts like a separate server). Because the hosting server doesn't get a response back from the virtual server after 60 seconds, it times out and sends a response back to the web browser saying exactly this. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem most likely comes from the PHP file and not the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set ('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't hit a timeout when running the script on the command line. The phpinfo() you gave me shows a max_execution_time of 6000.
See the set_time_limit() documentation.
For CentOS8, the below settings worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting apache the usual way isn't good enough anymore. You have to do this now:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The "gateway" here refers to the PHP worker, meaning the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered on the internet (I have not verified this myself), it keeps executables (workers) running in the background, waiting for you to give it PHP scripts to execute. The time saving is that the workers are always running, so you suffer no start-up penalty.
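If the worker itself is still killing long requests after the changes above, the pool's request_terminate_timeout may also need raising; a sketch, assuming the default CentOS 8 pool file:
; /etc/php-fpm.d/www.conf (default pool file on CentOS 8)
request_terminate_timeout = 600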
I need help resetting a server timeout. At present, the PHP scripts on my Linode servers time out after 50 seconds. I have tried all the options I know of to increase the timeout, but nothing helps.
Environment -
Linode VPS
Server OS - Ubuntu 12
Nginx
PHP / FastCGI
Apache
My development is being done with CodeIgniter
I have a test script (test.php), with the following code -
<?php
sleep(70);
die( 'Hello World' );
?>
When I call this script through a browser, I get a 504 Gateway Timeout error.
I have setup the following -
PHP - set max_execution_time to 300, max_input_time to 300
updated the Nginx configuration file with all the timeout variables (proxy, fastcgi, etc.)
set FastCGI to time out after 600 seconds
Still my script times out at 50 seconds; I have reloaded Nginx and rebooted the VPS, but the changes are not taking effect.
Please help me; this issue is driving me mad.
Sincerely,
At the end of last week I noticed a problem on one of my medium AWS instances where Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything I can find on the internet, including several entries here on Stack Overflow; nothing works.
I've tried modifying the PHP settings, PHP-FPM settings and Nginx settings. You can see a question I raised on the Nginx forums on Friday (http://forum.nginx.org/read.php?9,237692), though that has received no response, so I am hoping that I might be able to find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end
http://pastebin.com/WQX4AqEU
Once that's set up, you just need to create the sample PHP file which I am testing with, which is:
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP I'll create several config files so that they can be easily disabled
Update the PHP FPM Config to include external config files
sudo echo '
include=/usr/local/php/php-fpm.d/*.conf
' >> /usr/local/php/etc/php-fpm.conf
Create a new PHP-FPM config to override the request timeout
sudo echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log' > /usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks
sudo echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' > /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP Errors
sudo echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' > /usr/local/php/conf.d/errors.ini
sudo echo '[PHP]
post_max_size=32M
upload_max_filesize=32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' > /usr/local/php/conf.d/filesize.ini
As you can see, this increases the socket timeout to 6 minutes and will help log errors.
Finally, I'll edit some of the Nginx settings to increase the timeouts on that side.
First I edit the file /etc/nginx/nginx.conf and add this to the http directive
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (See the pastebin entry) and add the following settings into the server directive
client_max_body_size 200M;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally I add the following into the location ~ \.php$ section of the server directive:
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, restart both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry within the Nginx example.error.log.
So, where am I going wrong? This just works on Apache when I set PHP's max_execution_time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think too much has been increased, as it should only need PHP's max_execution_time and default_socket_timeout changed, as well as Nginx's fastcgi_read_timeout within just the server->location directive.
Update 1
Having performed some further tests to show that the problem is not the client dying, I have modified the test file to be:
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page, I can see the content of the file set to the first string. 60 seconds later the error appears in the Nginx log. 10 seconds later the contents of the file change to the second string, proving that PHP is completing the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from an HTTP 499 to an HTTP 200, though still nothing is returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) straight onto the box (using apt) and then increased the execution time, the problem does appear to happen on Apache as well. The symptoms are the same as with Nginx now: an HTTP 200 response, but the actual client connection times out beforehand.
I've also started to notice in the Nginx logs that if I test using Firefox, it makes a double request (the PHP script executes twice when it runs longer than 60 seconds), though that does appear to be the client re-requesting when the script fails.
The cause of the problem is the Elastic Load Balancers on AWS. They, by default, time out after 60 seconds of inactivity, which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
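If you prefer the command line, the same change can be made with the AWS CLI (a sketch, assuming a Classic Load Balancer named my-load-balancer and a 5-minute idle timeout):
aws elb modify-load-balancer-attributes --load-balancer-name my-load-balancer --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"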
Actually I faced the same issue on one server, and I figured out that after the Nginx configuration changes I hadn't restarted the Nginx server, so with every hit of the Nginx URL I was getting a 499 HTTP response. After an Nginx restart it started working properly with HTTP 200 responses.
I thought I would leave my two cents. First, the problem is not related to PHP (well, it still could be; PHP always surprises me :P), that's for sure. It's mainly caused by a server being proxied to itself, more specifically a hostname/alias issue. In your case it could be that the load balancer is requesting Nginx, Nginx is calling back to the load balancer, and it keeps going that way.
I have experienced a similar issue with Nginx as the load balancer and Apache as the web server/proxy.
In my case - nginx was sending a request to an AWS ALB and getting a timeout with a 499 status code.
The solution was to add this line:
proxy_next_upstream off;
The default value for this in current versions of Nginx is proxy_next_upstream error timeout;, which means that on a timeout it tries the next 'server', which in the case of an ALB is the next IP in the list of resolved IPs.
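A minimal sketch of where this sits, assuming a hypothetical upstream pointing at the ALB's DNS name (names and timeouts are illustrative):
upstream alb_backend {
    # The ALB DNS name typically resolves to several IPs (hostname below is made up).
    server my-alb-123456.eu-west-1.elb.amazonaws.com:443;
}

server {
    listen 80;

    location / {
        proxy_pass https://alb_backend;
        # Do not retry the request against another resolved IP after a timeout.
        proxy_next_upstream off;
        proxy_read_timeout 300;
    }
}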
You need to find where the problem lives. I don't know the exact answer, but let's try to find it.
We have three elements here: nginx, php-fpm, and PHP. As you said, the same PHP settings under Apache are OK. Is it the same setup or not? Did you try Apache instead of nginx on the same OS/host/etc.?
If we see that PHP is not the suspect, then we have two suspects: nginx and php-fpm.
To exclude nginx: try to set up the same "system" with Ruby. See https://github.com/garex/puppet-module-nginx to get an idea of the simplest Ruby setup, or use Google (that may be even better).
My main suspect here is php-fpm.
Try to play with these settings:
php-fpm's request_terminate_timeout
nginx's fastcgi_ignore_client_abort
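A minimal sketch of both, consistent with the config paths used earlier in the thread (values are illustrative):
; PHP-FPM pool config (e.g. the timeouts.conf created above)
request_terminate_timeout = 300
# nginx, inside the location ~ \.php$ block
fastcgi_ignore_client_abort on;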