Failed POST request to Laravel containing files larger than 1MB - php

When sending files larger than 1MB in a POST request, I get a (failed) response in Chrome. Files smaller than 1MB work fine, so I suspect something is enforcing a 1MB limit.
Running phpinfo() to check the values from my php.ini file shows upload_max_filesize=100M and post_max_size=100M. I have also checked the Laravel logs, but there are no errors regarding this.
I am using Laravel 6.18.13 running on a Homestead box with PHP 7.4.5. My front-end application is using Angular 9. This is the code in Angular that sends the request:
const formData = new FormData();
params.files.forEach((file: File) => {
  formData.append(`audio_files[]`, file, file.name);
});
return this.http.post<APIResponse>(url, formData);
Any ideas as to what might be setting this 1MB limit are appreciated.

Most web sites have multiple layers to consider:
web server / reverse proxy, e.g. nginx
php process
database
browser security
Nginx limits
For nginx, make sure the client_max_body_size directive is set high enough. It must be set in both the http and server contexts.
(an exceeded limit yields a 413 Request Entity Too Large HTTP status code)
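A minimal sketch of the relevant directives (the 100M value mirrors the PHP settings above; adjust as needed):
# nginx.conf - illustrative values
http {
    client_max_body_size 100M;
    server {
        client_max_body_size 100M;
        # ...
    }
}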
PHP limits
You are correct to adjust the following:
upload_max_filesize
post_max_size
file_uploads
max_file_uploads
Check also your PHP error and access log for more information.
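For example, in php.ini (illustrative values consistent with the question above):
; php.ini - illustrative values
file_uploads = On
upload_max_filesize = 100M
post_max_size = 100M
max_file_uploads = 20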
Database limits
If the data is passed on to a database, that could have some limits too. For MySQL, consider the following:
max_allowed_packet
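It can be raised in the server configuration, for example (value illustrative):
# my.cnf - illustrative value
[mysqld]
max_allowed_packet = 128M
Or at runtime: SET GLOBAL max_allowed_packet = 134217728; (the same 128M in bytes).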
Browser
The browser or JavaScript could also cancel the request. Check your console for warnings, and make sure you are not sending incorrect headers and that the request is not being cancelled by browser add-ons or similar.

Related

net::ERR_CONNECTION_RESET when large file takes longer than a minute

I have a multipart file upload in a form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180, confirmed that these values are in effect for the upload, and set TimeOut 180 in Apache. I've also set
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
When I upload a 250MB file on a fast connection it works fine. When I'm on a slower connection, or use a network link conditioner to artificially slow it down, the same file times out, and Chrome reliably gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds). I've also tried other browsers with the same outcome, just different error messages.
There is no indication of an error in any log, and I've tried both over HTTP and HTTPS.
What would cause the upload connection to be reset after 1 minute?
EDIT
I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at one minute.
I've also made a sleep script that runs for two and a half minutes, and that works: the page takes around 2.5 minutes to load, so I can't see how this is browser- or header-related.
I've also used a server with more RAM to ensure it's not related to that. I've tested on 3 different servers with different specs but all from the same CentOS 7 base.
I've now also upgraded to PHP 7.2 and updated the relevant fields again with no change in the problem.
EDIT 2
The tech stack for this isolated instance is
Apache 2.4.6
PHP 5.6 / 7.2 (tried both), has OPCache
Redis 3.2.6 for session information and key / value storage (ElastiCache)
PostgreSQL 10.2 (RDS)
Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.
EDIT 3
Here are some logs from the Chrome network debugger (net_error -101 corresponds to ERR_CONNECTION_RESET):
{"params":{"net_error":-101,"os_error":32},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":69},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
I went through a similar problem; in my case it was related to mod_reqtimeout. Adding the following to httpd.conf did the trick:
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
You can check the mod_reqtimeout documentation for details.
Hope it helps!
ERR_CONNECTION_RESET usually means that the connection to the server has ceased without sending any response to the client. This means that the entire PHP process has died without being able to shut down properly.
This is usually not caused by something like an exceeded memory_limit. It could be some sort of segmentation fault or similar. If you have access to error logs, check them. Otherwise, you might get support from your hosting company.
I would recommend you to try some of these things:
Try clearing the browser's cache. If you have already visited the page, it is possible for the cache to contain information that doesn't match the current version of the website and so blocks the connection setup, making the ERR_CONNECTION_RESET message appear.
Add the following to your php.ini settings:
memory_limit = 1024M
max_input_vars = 2000
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 990
Try setting the following input in your form:
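PHP's standard mechanism here is the MAX_FILE_SIZE hidden field, which must come before the file input; presumably something like this (value in bytes, illustrative, 300M to match the ini settings above):
<input type="hidden" name="MAX_FILE_SIZE" value="314572800">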
In your processing script, increase the script execution time limit:
set_time_limit(200);
You might need to tune up the SSL buffer size in your apache config file.
SSLRenegBufferSize 10486000
The name and location of the conf file is different depending on distributions.
In Debian you find the conf file in /etc/apache2/sites-available/default-ssl.conf
Sometimes it is the mod_security module that prevents POSTs of large data, approximately 171 KB and above. Try adding/modifying the following in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
I hope something might work out!
In case anybody else runs into this: there is also a problem relating to PHP-FPM. If you don't set ProxyTimeout in your httpd.conf, the proxied connection to PHP-FPM uses a default timeout of one minute. It took me several hours to figure out the problem, as I initially was thinking of all the normal settings like everyone else.
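For example (value illustrative):
# httpd.conf - raise the timeout for the proxied PHP-FPM connection
ProxyTimeout 300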
I had the same problem. I used the resumable file upload method, where if the internet disconnects and reconnects, the upload resumes from the same progress.
Check out the library https://packagist.org/packages/pion/laravel-chunk-upload
Installation
composer require pion/laravel-chunk-upload
Add service provider
\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class
Publish the config
php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
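A minimal controller sketch, adapted from the package README (saveFile() is a hypothetical helper for persisting the finished upload; adjust the names to your app):
use Illuminate\Http\Request;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

public function upload(Request $request)
{
    // Build the receiver; the handler class is resolved from the request.
    $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));

    if ($receiver->isUploaded() === false) {
        throw new UploadMissingFileException();
    }

    // Receive the current chunk.
    $save = $receiver->receive();

    if ($save->isFinished()) {
        // All chunks received: persist the assembled file.
        return $this->saveFile($save->getFile());
    }

    // Still mid-upload: report progress so the client keeps sending chunks.
    return response()->json(['done' => $save->handler()->getPercentageDone()]);
}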
In my opinion it may relate to one of these:
About the Apache config (/etc/httpd2/conf or /etc/apache2/conf):
Timeout 300
max_execution_time = 300
About php config ('php.ini'):
upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300
About PostgreSQL config (execute this request):
SET statement_timeout TO 0;
About a proxy (or Apache mod_proxy): it may also be due to the proxy timeout configuration.
In case anyone has the same issue: the problem I encountered is that the HTTP request has to go through a proxy server and a WAF. Small file uploads are fine, but with large files the TCP connection gets closed automatically. How to validate:
simply change your hosts file to point the domain at the web server's IP address (or use Firefox with no proxy if there is no WAF). If your problem goes away, it was caused by the proxy or the WAF sitting between your web server and the browser.
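For example (both the IP address and the domain are illustrative):
# /etc/hosts - send requests for the domain straight to the origin server
203.0.113.10  example.com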
A connection reset occurs when the PHP process dies without a proper error message. Changing the Oracle client version from 19 to 12c, and then configuring it appropriately in php.ini, solved the connection reset issue for our team.

Does the PHP 7 post_max_size ini setting work for limiting raw POST data?

I am using Apache Thrift to move small base64 encoded files to a PHP backend (with Apache web server). It is essentially just an HTTP POST request with large amounts of raw body data. I want to limit how much data can be POSTed so that I don't even attempt to process files larger than my target (I will also have a small memory_limit). To test this I set in php.ini:
post_max_size=1K
I then confirmed that my setting was correctly picked up by running phpinfo().
However, when POSTing roughly 12K of raw textual data to my server I can still get the full contents using
file_get_contents("php://input");
My understanding is that PHP will simply strip out the data if post_max_size is exceeded instead of throwing an error or exception. When searching for how post_max_size works, the information always seems to relate to file uploads rather than a raw POST body. Does the post_max_size ini setting not actually look at the raw size of POST requests in addition to posted file upload data? Why, when my post_max_size is exceeded, do I still get everything that was posted? How can I prevent serving a request or handling large data if the raw POST body size exceeds my limit? Any help is greatly appreciated.
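For reference, the kind of guard I'm after would look roughly like this minimal sketch (the limit value is illustrative; it checks the declared Content-Length before reading the body):
// Hypothetical guard: refuse oversized raw POST bodies up front.
$limit = 1024; // bytes; mirrors post_max_size=1K above
$declared = (int) ($_SERVER['CONTENT_LENGTH'] ?? 0);
if ($declared > $limit) {
    http_response_code(413); // Request Entity Too Large
    exit;
}
$body = file_get_contents('php://input');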

The POST request is not executed by server properly if textarea has long content

I have a Laravel 5 web application with a regular form containing a textarea.
When I make a POST request, if the length of the request exceeds some threshold (the threshold seems to be something like 200-300 characters), I wait a very long time for the response from the server and then get "The connection to the server was reset".
Execution of the PHP script is cut short at the controller level (I also tried at routes.php, and also before the authorization middleware);
the only thing I ask of the controller is to return the raw POST data (a simple 'Hello World!' was also tried).
The cut-down controller method looks like:
public function handle_referencies(Request $request)
{
    return '<pre>'.print_r(file_get_contents("php://input"), true).'</pre>';
}
file_get_contents("php://input") - already in dispair, tried $_POST, in order to be sure that the request is really POST
With shorter textarea content everything works properly, including the long-running script in the background. On my local machine everything was good, and in the server logs I see nothing suspicious.
Apache has been installed and tuned (with virtual hosts) by myself, so maybe not entirely properly.
php.ini variables whose settings I tried to change:
max_execution_time 300
max_file_uploads 50
max_input_time 300
max_input_vars 100000
memory_limit 256M
post_max_size 64M
In .htaccess at the app root, the following is set (0 was also tried):
LimitRequestBody 2147483647
Help, please... Sorry if something is wrong with my question; it is my very first time asking on this site.

Uploading Images with PHP Script Failing

I'm fairly new to PHP, but I'm having a recurring issue via multiple different scripts and servers when uploading images via ShareX to my server with a custom script, specifically this one.
I've migrated servers (I was on a shared host, now I'm on a VPS), and have since changed to using this script, but I'm still having the issue and I don't know what exactly the problem is.
The issue (it does not occur 100% of the time, but it does most of the time; sometimes it works after retrying) is that uploading images over a certain size, about 250-500 KB, times out or fails. After 60 seconds, I get a 502 (Bad Gateway) error in ShareX.
I've looked up common solutions to similar problems ("large" files timing out in PHP) and have checked the following variables in my php.ini file.
max_execution_time = 60
max_input_time = 60
memory_limit = 128M
post_max_size = 8M
When uploads are successful, it takes a few seconds in total to upload and get back the link of the uploaded image, but when it fails, it's always 60 seconds and then the error. There is no middle ground: either it succeeds almost instantly or it times out after 60 seconds.
I don't know exactly how to go about finding what exactly the error (if any) is. When it happens, ShareX reports a (502) Bad Gateway error, the 'Response:' is just the source code of the page (the script is set up to redirect you to this page if it detects you aren't uploading anything or it fails), and the 'Stack Trace' is the following:
StackTrace:
at System.Net.HttpWebRequest.GetResponse()
at ShareX.UploadersLib.Uploader.UploadData(Stream dataStream, String url, String fileName, String fileFormName, Dictionary`2 arguments, NameValueCollection headers, CookieCollection cookies, ResponseType responseType, HttpMethod method, String requestContentType, String metadata)
Edit: My server is behind Cloudflare, and I read that Cloudflare might cause problems. However, I've checked the settings and the maximum upload size is set at 100MB on Cloudflare, and pausing it doesn't seem to help.
Edit: I removed the limit on post_max_size, which was 8M, and it seems to have partly fixed the issue. I can now upload files up to about 3MB, but beyond that it always fails with a custom error message from the script.
When increasing file POST limits, you may need to change at least 2 settings:
upload_max_filesize = 30M
post_max_size = 32M
I don't think it has anything to do with Cloudflare. See if you can check the Apache error log if the above settings don't work.

php loses form POST parameters

I have a form which sends data with the POST method, about 3000 array keys to be inserted in MySQL like this:
client_add[]=1
client_add[]=3
client_add[]=47
...
The problem is that on my localhost development setup it works just fine. On production I only get about 1000 rows; the rest seems to get lost. We compared the php.ini files, and the production server has everything set to higher limits than my localhost.
I've run out of ideas.
The size of the POST body will be somewhere around 50 KB, which is OK as long as the server and/or PHP doesn't enforce a limit. It seems like your production environment enforces such a limit. You should check the entire web server configuration, and if that is identical as well, compare compile-time defaults. Maybe the phpinfo() output shows more about the actual limits.
PHP has an ini setting that dictates the maximum size of a POST request; you can find it in your php.ini under the name post_max_size.
Also, if you've got the Suhosin patch installed, it will enforce a limit on the number of POST variables you can submit in each request. I think this is around 2000 by default.
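If Suhosin turns out to be the culprit, the limits can be raised in php.ini; an illustrative sketch (values chosen to cover the ~3000 keys above):
; Suhosin request limits - illustrative values
suhosin.post.max_vars = 5000
suhosin.request.max_vars = 5000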
