Serving large files (>1GB) via PHP on Plesk / Nginx / Apache

I'm trying to serve arbitrarily large files via PHP.
Since I need to check permissions first, I cannot let the web server handle the file downloads directly.
The server is running Plesk 17.0. As far as I know, Plesk uses Nginx as a proxy in front of Apache by default, but this can be turned off so that Apache serves everything directly.
My problem:
On downloads I get a network error a few MB past 1 GB if I use the default configuration (i.e. with Nginx running).
I've read many suggestions on how to handle large file downloads in PHP. Currently I'm using essentially this:
if ( isset( $_SERVER['MOD_X_SENDFILE_ENABLED'] ) &&
     $_SERVER['MOD_X_SENDFILE_ENABLED'] == 1 ) {
    // Let Apache's mod_xsendfile stream the file itself
    header( "X-Sendfile: " . $this->options['base_path'] . $file );
} else {
    // Fall back to streaming the file through PHP
    readfile( $file );
}
As you can see, I've tried offloading the file download to Apache, and it works fine if I turn Nginx off.
With Nginx turned on, PHP's max_execution_time marks a point at which Nginx produces a network error, though at least 1 GB is served first. To me it seems there is some kind of buffer size limit between Apache and Nginx that is set to 1 GB, but I could not find such an option. For example, setting max_execution_time to 5 seconds still delivers 1 GB, even if the download takes 10 minutes.
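One option that might match (an assumption on my side, unverified on this Plesk setup): Nginx buffers a proxied response to a temporary file whose size is capped by proxy_max_temp_file_size, which defaults to 1024m, i.e. exactly the 1 GB observed here. A sketch of the directive for the domain's additional Nginx directives:

# default is 1024m; raise the cap, or use 0 to disable temp-file buffering
proxy_max_temp_file_size 10240m;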
This error is logged in the proxy_error_log when 1 GB has been served and the max_execution_time has been exceeded:
[error] 3524#0: *796853 upstream prematurely closed connection while reading upstream
With Apache serving directly and mod_xsendfile active, the max_execution_time does not matter. Using PHP's readfile, the max_execution_time matters. This also makes sense to me.
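If PHP itself has to stream the file, a minimal sketch of the readfile path with the time limit lifted and chunked output (my own sketch; the 8 KB chunk size is arbitrary):

set_time_limit(0);                                // lift max_execution_time for this request
$path = $this->options['base_path'] . $file;      // same path as in the snippet above
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
$fh = fopen($path, 'rb');
while (!feof($fh)) {
    echo fread($fh, 8192);                        // stream in 8 KB chunks to keep memory flat
    flush();                                      // push each chunk out to the client
}
fclose($fh);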
But according to the Plesk documentation it is beneficial to use Nginx for serving.
So I'm looking for a way to keep Nginx and Apache running without being limited by max_execution_time when serving multi-GB files.
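One direction that might work (unverified; the /protected/ location name and storage path are placeholders): Nginx's native counterpart to X-Sendfile is X-Accel-Redirect, which lets PHP do the permission check and then hand the actual transfer to Nginx:

# Nginx: internal location that only X-Accel-Redirect can reach
location /protected/ {
    internal;
    alias /var/www/vhosts/example.com/storage/;
}

// PHP: after the permission check, delegate the transfer to Nginx
header( 'X-Accel-Redirect: /protected/' . $file );

Since Nginx then streams the file itself, PHP's max_execution_time would no longer apply to the transfer.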

Related

Apache resetting connection (?) on large file uploads [duplicate]

This question is a duplicate of net::ERR_CONNECTION_RESET when large file takes longer than a minute, which appears below.
I have a site that used to be able to upload large files (large being > 10 or 20 MB) but no longer can. I've been debugging this for hours at this point.
All PHP values are set ludicrously high:
post_max_size = 512M
upload_max_filesize = 512M
memory_limit = 1024M
max_execution_time = 600
max_input_time = 600
I've also set TimeOut 600 in httpd.conf.
Essentially, if I add a large file to an upload field, it never uploads. I can see the "Uploading (1%)..." indicator in the lower left in Chrome as the file starts uploading. It will count up, sometimes even reaching 100%, then start over again at 0 and count up again, eventually failing with an ERR_CONNECTION_RESET message.
The eventual failure seems to happen after a random amount of time, sometimes 24 seconds, sometimes 3 minutes.
I tried a 170 MB file and it will always get to 16 or 17% before it restarts. That always takes something like 22 seconds. Then it will restart at 0 and count up to 16 or 17% again, then restart again. It ultimately fails with the ERR_CONNECTION_RESET message, sometimes after restarting once, sometimes after restarting 4 or 5 times.
I also tried a 30 MB file. This one will always reach right around 100% before restarting.
df -h shows plenty of disk space remaining, and I was able to upload files fine via SFTP, confirming that there is indeed sufficient disk space.
Files also upload fine using the exact same application on my development server, so I can rule out any application issues.
Smaller files also upload fine on the production server; I've tried files as large as 3 or 5 MB with no issue.
I'm able to execute code like:
echo "start";
sleep(60);
echo "stop";
without any hiccup on production, so it isn't timing out all requests, only the uploads.
I've tried multiple browsers, and this is happening from multiple client locations.
There is never an error in any log I can find in /var/log/httpd.
I'm not running mod_security. Nowhere in my application are any of the PHP settings overridden. It's a pretty standard installation of Apache and PHP.
The production server is Amazon Linux running Apache 2.4.39, and I've tried PHP 7.1 and PHP 7.2 with the same result, both using mod_php.
I am well into the "banging head against wall" stage of this issue. Does anyone have any ideas what I can do to debug this?
Finally got this to work, thanks to net::ERR_CONNECTION_RESET when large file takes longer than a minute.
I had to add RequestReadTimeout header=0 body=0 to my httpd.conf file
It couldn't go within a vhost definition; at least, I tried that hours ago with no luck. But I circled back, tried it again at the top level of httpd.conf, and it worked.
TG.
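For reference, a sketch of how that fragment might sit at the top level of httpd.conf (the IfModule guard is my addition, assuming mod_reqtimeout is loaded):

<IfModule reqtimeout_module>
    # Disable header/body read timeouts so slow uploads aren't cut off
    RequestReadTimeout header=0 body=0
</IfModule>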

net::ERR_CONNECTION_RESET when large file takes longer than a minute

I have a multipart file upload in a form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180, confirmed during the file upload that these values are active, and set TimeOut 180 in Apache. I've also set:
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
When I upload a 250MB file on a fast connection it works fine. When I'm on a slower connection or a network link conditioner to artificially slow it down, the same file times out and on Chrome gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds) reliably. I've also tried other browsers with the same outcome, just different error messages.
There is no indication to an error in any log and I've tried both on http and https.
What would cause the upload connection to be reset after 1 minute?
EDIT
I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at 1 minute.
I've also made a sleep script that runs for two and a half minutes, and that works: the page takes around 2.5 minutes to load, so I can't see how it's browser or header related.
I've also used a server with more RAM to ensure it's not related to that. I've tested on 3 different servers with different specs but all from the same CentOS 7 base.
I've now also upgraded to PHP 7.2 and updated the relevant fields again with no change in the problem.
EDIT 2
The tech stack for this isolated instance is
Apache 2.4.6
PHP 5.6 / 7.2 (tried both), has OPCache
Redis 3.2.6 for session information and key / value storage (ElastiCache)
PostgreSQL 10.2 (RDS)
Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.
EDIT 3
Here are some logs from the Chrome network debugger:
{"params":{"net_error":-101,"os_error":32},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":69},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
I went through a similar problem; in my case it was related to mod_reqtimeout. Adding:
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
to httpd.conf did the trick!
You can check the mod_reqtimeout documentation for details.
Hope it helps!
ERR_CONNECTION_RESET usually means that the connection to the server has ceased without sending any response to the client. This means that the entire PHP process has died without being able to shut down properly.
This is usually not caused by something like an exceeded memory_limit. It could be some sort of Segmentation Fault or something like that. If you have access to error logs, check them. Otherwise, you might get support from your hosting company.
I would recommend trying some of these things:
Try clearing the browser's cache. If you have already visited the page, it is possible for the cache to contain information that doesn't match the current version of the website and so block the connection setup, making the ERR_CONNECTION_RESET message appear.
Add the following to your settings:
memory_limit = 1024M
max_input_vars = 2000
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 990
Try setting a MAX_FILE_SIZE hidden input in your form (the value, in bytes, is an example matching the 300M limits above):
<input type="hidden" name="MAX_FILE_SIZE" value="300000000" />
In your processing script, increase the script execution time limit:
set_time_limit(200);
You might need to tune up the SSL renegotiation buffer size in your Apache config file:
SSLRenegBufferSize 10486000
The name and location of the conf file differ between distributions.
On Debian you will find the conf file at /etc/apache2/sites-available/default-ssl.conf.
Sometimes it is the mod_security module that prevents POSTs of large data (approximately 171 KB). Try adding/modifying the following in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
I hope something might work out!
In case anybody else runs into this: there is also a problem with this relating to PHP-FPM. If you don't set ProxyTimeout in your httpd.conf, requests proxied to PHP-FPM time out after one minute by default. It took me several hours to figure out the problem, as I was initially looking at all the normal settings like everyone else.
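A sketch of the directive (the 600-second value is my own arbitrary choice; set it above your longest expected request):

# httpd.conf: allow requests proxied to PHP-FPM to run up to 10 minutes
ProxyTimeout 600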
I had the same problem. I switched to a resumable file upload method, where if the connection drops and comes back, the upload resumes from the same point.
Check out the library https://packagist.org/packages/pion/laravel-chunk-upload
Installation
composer require pion/laravel-chunk-upload
Add service provider
\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class
Publish the config
php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
In my opinion it may be related to one of these:
Apache config (/etc/httpd/conf or /etc/apache2/conf):
Timeout 300
PHP config (php.ini):
upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300
PostgreSQL config (execute this statement):
SET statement_timeout TO 0;
A proxy (or Apache's mod_proxy) may also be involved; the problem may be due to its timeout configuration.
In case anyone has the same issue: the problem I encountered was that the HTTP request had to go through a proxy server and a WAF. Small file uploads were fine, but with large files the TCP connection was automatically closed. How to validate:
Simply change your hosts file to point the domain at the web server's IP address (or use Firefox with no proxy if there is no WAF). If the problem goes away, it was caused by the proxy or the WAF sitting between your web server and the browser.
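A sketch of such a hosts entry (the IP address and domain are placeholders):

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
203.0.113.10    www.example.com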
A connection reset occurs when the PHP process dies without a proper error message.
Changing the Oracle client version from 19 to 12c, and then configuring it appropriately in php.ini, solved the connection reset issue for our team.

PHP scripting timing out after 60 seconds

I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the TimeOut value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
That way you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one; there are two, one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run:
<?php
phpinfo();
?>
and look for "Loaded Configuration File".
I finally worked out the reason the request times out. The problem lies with having virtual server hosting.
The request from the web browser is sent to the hosting server, which then directs it to the virtual server (which acts like a separate server). Because the hosting server doesn't get a response back from the virtual server within 60 seconds, it times out and sends a response back to the web browser saying exactly that. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem should come from the PHP file and not the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't get a timeout when running the script on the command line. The phpinfo() you posted shows max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS 8, the settings below worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't good enough anymore. You now have to restart PHP-FPM as well:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The "gateway" here refers to the PHP worker, meaning the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered on the internet (I have not verified this myself), it basically keeps executables (workers) running in the background, waiting for you to give them PHP scripts to execute. The time saving is that the executables are always running, so you suffer no start-up penalty.
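Related to that worker timeout, a sketch of the pool option that caps a single request (the file path assumes the stock CentOS 8 pool file; this was not part of the original answer):

; /etc/php-fpm.d/www.conf
; terminate a worker whose single request exceeds this many seconds (0 = off)
request_terminate_timeout = 600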

How to address the "The FastCGI process exceeded configured request timeout" error on IIS 7.5

My PHP script executes a program on my IIS 7.5 server.
It takes about 10 minutes to run, but I get the error below in the browser.
How can I resolve this?
Error:
HTTP Error 500.0 - Internal Server Error
C:\php\php-cgi.exe - The FastCGI process exceeded configured request timeout
Module: FastCgiModule
Notification: ExecuteRequestHandler
Handler: FastCGI
Error Code: 0x80070102
php.ini settings:
fastcgi.impersonate = 1
fastcgi.logging = 0
cgi.fix_pathinfo=1
cgi.force_redirect = 0
max_execution_time = 0
upload_max_filesize = 20M
memory_limit = 128M
post_max_size = 30M
C:\Windows\System32\inetsrv\config\applicationHost.config file settings for FastCGI:
<fastCgi>
    <application fullPath="C:\php\php-cgi.exe" activityTimeout="3600" requestTimeout="300" />
</fastCgi>
This is sort of a quick explanation of what is going on. When you are using a CGI/FastCGI configuration for PHP, the web server (in this case IIS) routes requests that require PHP processing to the PHP process, which runs separately from the web server.
Generally, to prevent connections from getting stuck open and waiting (if the PHP process happens to crash), the web server will only wait a set amount of time for the PHP process to return a result (usually 30-60 seconds).
In your configuration you have this:
requestTimeout = "300"
300 seconds = 5 minutes. IIS cancels the request because your request takes 10 minutes to complete. Simple fix: increase the timeout to 600 or greater.
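A sketch of the adjusted block (700 is my own arbitrary value, chosen to sit above the roughly 600-second runtime):

<fastCgi>
    <application fullPath="C:\php\php-cgi.exe" activityTimeout="3600" requestTimeout="700" />
</fastCgi>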
Now, running a script for 10 minutes with an http request is not a good design pattern. Generally, http works best with short lived requests. The reason is that timeouts can exist in any part of the process (server, proxy, or client) and the script could be accidentally interrupted.
So, when you have a web application that has a long running job like this, the best way to run it is via console or job queue.
There is one more setting I found that helped me with the same problem I was having.
Copied and pasted:
Open Server Manager.
At the server level (not the default web site), double-click FastCGI Settings.
Open the PHP.EXE entry listed there (it monitors changes to the php.ini file).
The activity timeout defaults to 60 s; change it to 600 s or whatever you need.
This can be solved by adjusting the FastCGI configuration.
Go to "C:\Windows\System32\inetsrv\" and edit the "fcgiext.ini" file:
[PHP]
ExePath=C:\xampp\php\php-cgi.exe
MonitorChangesTo=C:\xampp\php\php.ini
ActivityTimeout=3600
IdleTimeout=3600
RequestTimeout=3600
Make sure to place ActivityTimeout, IdleTimeout and RequestTimeout inside the [PHP] section as shown above.

Internal Server Error (500) and PHP max_execution_time on Linux server

I have a PHP script that needs to run for one to five hours (sending newsletters to our customers). I tried both set_time_limit(2000); and ini_set('max_execution_time', 360000); but neither works. They work perfectly on the local XAMPP server, but they do not work on our dedicated server (Unix & Apache). I also changed the Apache Timeout to 300 (it was 50), yet after 30 seconds of the script running, it returns this:
Internal Server Error Page (Error 500)
I have no idea if there is any other place for a timeout and/or why the server does not honor the ini_set() or set_time_limit() functions. We are using CentOS 6 and Plesk 11.9 on the server. When I change the default max_execution_time in php.ini instead, it works...
I read many articles and forums, yet I don't know why this happens. I appreciate your help.
// add this at the top of your PHP file or in your config
ini_set('max_execution_time', '256'); // use '0' for unlimited time
ini_set('memory_limit', '512M');
Good work!
A better way would be using ini_set() or set_time_limit() at the top of the script which sends the newsletters; you should not have to modify the main config files. Also, as someone suggested above, cron jobs are a good fit for such situations.
I appreciate your answers and comments. I set up the cron job, and it works perfectly. I also tried the chunked approach (150 emails per chunk), and that works too.
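For anyone curious, a minimal sketch of what the cron-driven, chunked sender could look like (the table and column names, DSN, and credentials are hypothetical):

// newsletter_worker.php - run from cron, e.g.: */5 * * * * php /path/to/newsletter_worker.php
set_time_limit(0); // the CLI has no limit by default, but be explicit

$batchSize = 150; // matches the 150-emails-per-chunk approach above
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Fetch one unsent chunk; the newsletter_queue table is hypothetical
$rows = $pdo->query(sprintf(
    'SELECT id, email FROM newsletter_queue WHERE sent = 0 LIMIT %d',
    $batchSize
))->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    if (mail($row['email'], 'Newsletter', 'Hello!')) {
        // Mark as sent so the next cron run picks up the next chunk
        $pdo->prepare('UPDATE newsletter_queue SET sent = 1 WHERE id = ?')
            ->execute([$row['id']]);
    }
}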
If you are using a VPS, edit your php.ini file:
max_execution_time = 256
memory_limit = 512M
Then restart Apache from the command line:
service httpd restart
Or set it at the top of your PHP file:
ini_set('max_execution_time', '256');
ini_set('memory_limit', '512M');
Good luck!
