I am trying to upload a file using Uppy. On my server I am using PHP 8.0 and Apache 2.
I am uploading a file which is about 156 MB in size, but the server returns a response with a 413 status code and no message.
As per the instructions given all over the internet, I tried to configure my php.ini file, and here are the updated settings:
post_max_size = 20480M
upload_max_filesize = 20480M
max_execution_time = 24000
max_input_time = 24000
memory_limit = 800M
Unfortunately, the above settings didn't help me. I have confirmed the php.ini file location with the following command:
php -i | grep Conf
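For completeness, a quick sanity check of the values the web SAPI actually loads (this uses only standard PHP functions and assumes nothing about my setup; php -i reports the CLI configuration, which can differ from the one Apache loads):
<?php
// check.php (hypothetical name) - request this through the browser to see the
// php.ini and limits that Apache's PHP actually uses; the CLI may load a different file.
header('Content-Type: text/plain');
echo 'Loaded php.ini: ', php_ini_loaded_file(), "\n";
foreach (['post_max_size', 'upload_max_filesize', 'memory_limit', 'max_execution_time', 'max_input_time'] as $key) {
    echo $key, ' = ', ini_get($key), "\n";
}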
Apart from this, I came across an answer that suggested setting the SecRequestBodyLimit value in modsecurity.conf. ModSecurity was not even installed on my system, but I still installed it and set the SecRequestBodyNoFilesLimit value the same as SecRequestBodyLimit, to 1000000000, but no luck.
I strongly suspect this is coming from the server and that Uppy has no role in this issue, but I cannot pinpoint the exact problem.
A 413 response is a typical error when you use ModSecurity and its limit is set incorrectly. You should review the relevant documentation. If the size of your file is 156 MB, you should calculate the base64-encoded size: multiply it by 4 and divide it by 3, so the approximate value is 208 MB. I would set SecRequestBodyLimit to 250 MB, but not SecRequestBodyNoFilesLimit - please keep that one low. 250 MB is 262144000 bytes, so try setting this:
SecRequestBodyLimit 262144000
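For reference, a minimal sketch of that size arithmetic (plain numbers only, nothing server-specific is assumed):
<?php
// base64 encoding inflates a payload by roughly 4/3.
$fileBytes    = 156 * 1024 * 1024;              // ~156 MB file
$encodedBytes = (int) ceil($fileBytes * 4 / 3); // ~208 MB on the wire
$limitBytes   = 250 * 1024 * 1024;              // 262144000, a comfortable SecRequestBodyLimit
echo $encodedBytes, "\n", $limitBytes, "\n";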
Also, please check your Apache error.log; you should find all the relevant information there.
I'm having some issues with jQuery/PHP file uploading on my Amazon Linux AMI running on an EC2 instance.
I tried using this and a couple of other plugins (just to ensure the problem wasn't with the plugin itself), with the same results.
When trying to upload a 14 MB PDF, the progress bar reaches 75% (or more, up to 99%), then it restarts from 0%, reaches 75% (or more, up to 99%) again, and then it just stops with no error (not even saying Request Timeout).
Over several attempts, only a couple of times did the progress bar reach 99% without restarting, and then an error popped up saying Request Timeout.
This is what I found in Apache's access_log:
12.34.56.78 - - [02/Jul/2019:15:50:09 +0000] "POST /uploader/demo/backend/upload.php HTTP/1.1" 408 221
12.34.56.78 - - [02/Jul/2019:15:50:31 +0000] "POST /uploader/demo/backend/upload.php HTTP/1.1" 408 221
So it prints 408 Request Timeout on two lines (in fact, the upload restarts once). The upload takes 22 or 23 seconds (as can be seen in the logs).
This is how I set my php.ini (I'm using PHP 7.1 FPM):
max_execution_time = 360
max_input_time = 360
memory_limit = 256M
post_max_size = 100M
upload_max_filesize = 100M
phpinfo() shows that those values are properly applied.
I also tried to use set_time_limit(0) and to set all of the above values in the upload.php file with ini_set(), but nothing changed.
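For reference, that attempt looked roughly like this; note that upload_max_filesize and post_max_size are PHP_INI_PERDIR settings, so ini_set() cannot change them from a script anyway - they have to be raised in php.ini, .htaccess or the FPM pool configuration.
<?php
// Top of upload.php - what was attempted. The last two calls are silently
// ineffective because those directives cannot be changed at runtime.
set_time_limit(0);
ini_set('max_execution_time', '360');
ini_set('max_input_time', '360');
ini_set('memory_limit', '256M');
ini_set('upload_max_filesize', '100M'); // PHP_INI_PERDIR - no effect from a script
ini_set('post_max_size', '100M');       // PHP_INI_PERDIR - no effect from a script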
The upload directory has proper permissions; in fact, a 9.3 MB PDF is uploaded correctly with no errors.
In one of my attempts I also tried to set these Apache directives:
KeepAlive On
KeepAliveTimeout 360
TimeOut 360
The only result was that the upload progress kept reaching 99% and restarting several times, instead of restarting only once.
Now I have run out of ideas, and most of the proposed solutions are related to php.ini settings, which in my case are properly applied.
After hours of attempts, as soon as I posted my question, I found the answer. It might be useful to anyone having the same issue. If everything else listed in my question doesn't work, it might be due to mod_reqtimeout (as it was for me).
I simply created a file named: /etc/httpd/conf.d/mod_reqtimeout.conf
and put this inside of it:
<IfModule reqtimeout_module>
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
as suggested by the Apache documentation linked above. Then I restarted Apache and the upload reached 100% with no problems.
mod_security also played a part in this when I tried with a 60 MB PDF. I had to set the following directive, to match the 100 MB limit set in php.ini, inside /etc/httpd/conf.d/mod_security.conf:
SecRequestBodyLimit 100000000
I have a multipart file upload in a form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180, confirmed on the file upload that these values are set, and set TimeOut 180 in Apache. I've also set:
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
When I upload a 250 MB file on a fast connection it works fine. When I'm on a slower connection, or use a network link conditioner to artificially slow it down, the same file times out, and Chrome reliably gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds). I've also tried other browsers with the same outcome, just different error messages.
There is no indication of an error in any log, and I've tried both over http and https.
What would cause the upload connection to be reset after 1 minute?
EDIT
I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at 1 minute.
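For reference, the no-framework test was essentially a single plain PHP page like this (the field name and upload path are arbitrary, and an existing writable uploads/ directory is assumed):
<?php
// minimal-upload.php - no framework, just a form and move_uploaded_file().
if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_FILES['bigfile'])) {
    $dest = __DIR__ . '/uploads/' . basename($_FILES['bigfile']['name']);
    $ok   = move_uploaded_file($_FILES['bigfile']['tmp_name'], $dest);
    echo $ok ? 'uploaded' : 'failed, error code ' . $_FILES['bigfile']['error'];
    exit;
}
?>
<form method="post" enctype="multipart/form-data">
    <input type="file" name="bigfile">
    <button type="submit">Upload</button>
</form>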
I've also made a simple sleep script that finishes after two and a half minutes, and that works: the page takes around 2.5 minutes to load, so I can't see how it's browser or header related.
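The sleep test was roughly this (the duration is the only thing that matters here):
<?php
// sleep.php - hold the request open for 150 seconds to show that
// long-running requests in general are not the problem, only uploads are.
set_time_limit(0);
sleep(150);
echo 'still alive after 150 seconds';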
I've also used a server with more RAM to ensure it's not related to that. I've tested on 3 different servers with different specs, but all from the same CentOS 7 base.
I've now also upgraded to PHP 7.2 and updated the relevant settings again, with no change in the problem.
EDIT 2
The tech stack for this isolated instance is
Apache 2.4.6
PHP 5.6 / 7.2 (tried both), has OPCache
Redis 3.2.6 for session information and key / value storage (ElastiCache)
PostgreSQL 10.2 (RDS)
Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.
EDIT 3
Here are some logs from the Chrome network debugger:
{"params":{"net_error":-101,"os_error":32},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":69},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
I went through a similar problem; in my case it was related to mod_reqtimeout. Adding:
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
to httpd.conf did the trick!
You can check the documentation here.
Hope it helps!
Original source here
ERR_CONNECTION_RESET usually means that the connection to the server was closed without any response being sent to the client. This means that the entire PHP process has died without being able to shut down properly.
This is usually not caused by something like an exceeded memory_limit. It could be some sort of segmentation fault or something like that. If you have access to the error logs, check them. Otherwise, you might get support from your hosting company.
I would recommend you try some of these things:
Try clearing the browser's cache. If you have already visited the page, it is possible for the cache to contain information that doesn't match the current version of the website and so blocks the connection setup, making the ERR_CONNECTION_RESET message appear.
Add the following to your settings:
memory_limit = 1024M
max_input_vars = 2000
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 990
Try setting the following input in your form:
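One commonly suggested input is PHP's hidden MAX_FILE_SIZE field; the field name is standard PHP, but the byte value below is only an example matching the 300M limits above, and it must appear before the file input:
<input type="hidden" name="MAX_FILE_SIZE" value="314572800">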
In your processing script, increase the execution time limit:
set_time_limit(200);
You might need to tune the SSL buffer size in your Apache config file.
SSLRenegBufferSize 10486000
The name and location of the conf file differ depending on the distribution.
On Debian you find the conf file at /etc/apache2/sites-available/default-ssl.conf
Sometimes it is the mod_security module which prevents POSTs of data larger than approximately 171 KB. Try adding/modifying the following in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
I hope something might work out!
In case anybody else runs into this - there is also a problem with this relating to PHP-FPM. If you don't set ProxyTimeout in your httpd.conf, PHP-FPM uses a default timeout of one minute. It took me several hours to figure out the problem, as I was initially thinking of all the normal settings like everyone else.
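For example (the value is only an illustration; align it with your PHP and Apache timeouts):
ProxyTimeout 300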
I had the same problem. I used the resumable file upload method, where if the internet connection drops and then reconnects, the upload resumes from the same progress.
Check out the library https://packagist.org/packages/pion/laravel-chunk-upload
Installation
composer require pion/laravel-chunk-upload
Add service provider
\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class
Publish the config
php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
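A rough controller sketch based on the package's documented usage follows; verify the class and method names against the package README for your version:
<?php
// UploadController.php - hedged sketch using pion/laravel-chunk-upload.
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

class UploadController extends Controller
{
    public function upload(Request $request)
    {
        // Build a receiver for the "file" field and save the incoming chunk.
        $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));
        $save = $receiver->receive();

        if ($save->isFinished()) {
            // Every chunk has arrived: store the assembled file.
            $file = $save->getFile();
            $path = $file->storeAs('uploads', $file->getClientOriginalName());
            return response()->json(['path' => $path]);
        }

        // Still in chunk mode: report progress so the client can continue or resume.
        return response()->json(['done' => $save->handler()->getPercentageDone()]);
    }
}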
In my opinion it may be related to one of these:
About the Apache config (/etc/httpd2/conf or /etc/apache2/conf):
Timeout 300
max_execution_time = 300
About the PHP config (php.ini):
upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300
About the PostgreSQL config (execute this statement):
SET statement_timeout TO 0;
About proxies (or Apache mod_proxy): it may also be due to a proxy timeout configuration.
In case anyone has the same issue: the problem I encountered was that the HTTP request has to go through a proxy server and a WAF. Small file uploads were OK, but with large files the TCP connection was automatically closed. How to validate:
Simply change your hosts file to point the domain to the web server's IP address (or you may use Firefox with no proxy if there is no WAF). If the problem goes away, then it is caused by the proxy or the WAF sitting between your web server and the browser.
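For example (the IP address and domain below are placeholders for your own origin server and site):
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.10   upload.example.com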
A connection reset occurs when the PHP process dies without a proper error message.
Changing the Oracle client version from 19 to 12c, and then configuring it appropriately in php.ini, solved the connection reset issue for our team.
I would like to upload large files of up to 50 GB.
I edited my php.ini:
max_execution_time = 18000
max_input_time = 18000
post_max_size = 50G
upload_max_filesize = 50G
I increased the mod_fcgid values in my vhost:
IdleTimeout 18000
ProcessLifeTime 18000
FcgidMaxRequestLen 64424509440
FcgidIOTimeout 18000
I can upload files of around 2-3 GB maximum, but beyond that there are two cases:
No error in apache logs for files around 10GB
An error for files around 5GB : (70008)Partial results are valid but processing is incomplete: mod_fcgid: can't get data from http client
The site (if it can help you): http://filetransfer.fr
Thank you in advance to anyone who can help me!
Debian 7, apache 2.2.22, PHP 5.4.45
We got that only with Internet Explorer / Edge.
As long as we use something different, everything's OK.
It seems compressed content in IE / Edge is somehow buggy.
You may try to disable compression and check if that goes away.
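For example, with mod_deflate one option is to switch compression off just for those browsers while testing (the BrowserMatch pattern below is only a suggestion; mod_deflate honours the no-gzip environment variable):
# Disable gzip for IE 11 (Trident) and legacy Edge while testing
BrowserMatch "Trident|Edge" no-gzip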
I have a Debian Squeeze install on an Amazon EC2 instance running Apache 2 and PHP 5.3.3-7. I would like it to be able to accept uploads from a standard point-and-shoot camera (about 5 MB). Accordingly, I've edited php.ini in /etc/php5/apache2/ to allow for uploads of up to 18 MB, and I've increased the time PHP will allow a script to run.
Despite restarting Apache, and even the machine itself, it absolutely refuses to upload any file larger than 2 MB. Is this an EC2 problem, or is it still a PHP issue? I'm fairly sure I've ruled out PHP, but I've been staring at the same 4 lines of code for the last week and searching like a mad person for what this could possibly be.
/etc/php5/apache2/php.ini:
max_execution_time = 120
...
max_input_time = 120
...
upload_max_filesize = 18M
...
post_max_size = 18M
I have just double-checked with phpinfo(); these settings are in effect, but it still does not work.
Check the upload_max_filesize and post_max_size settings in your php.ini file.
The problem is likely to be Suhosin. The default PHP packages from APT on Debian have Suhosin built in.
This also affects your upload size limit. Take a look at this link for a fix and an explanation:
http://www.cyberciti.biz/faq/linux-unix-apache-increase-php-upload-limit/
I don't know if this will be of any help with your setup:
In Apache:
TimeOut
Amount of time the server will wait for certain events before failing a request
LimitRequestBody
Restricts the total size of the HTTP request body sent from the client
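For example (the values are illustrative; LimitRequestBody is in bytes, and 0 means unlimited):
TimeOut 300
LimitRequestBody 104857600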
Also, on some server setups you can't change php.ini via scripts.
Try this:
max_execution_time = 120
max_input_time = 120
upload_max_filesize = 40M
post_max_size = 40M
Save, then run:
sudo service apache2 restart