I have a multipart file upload form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180, confirmed via the upload script that these values are in effect, and set TimeOut 180 in Apache. I've also set
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
When I upload a 250MB file on a fast connection it works fine. When I'm on a slower connection, or use a network link conditioner to artificially slow it down, the same file times out, and Chrome reliably gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds). I've also tried other browsers with the same outcome, just different error messages.
There is no indication of an error in any log, and I've tried both HTTP and HTTPS.
What would cause the upload connection to be reset after 1 minute?
EDIT
I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at 1 minute.
I've also made a simple script that sleeps for two and a half minutes before responding, and that works: the page takes around 2.5 minutes to load, so I can't see how it's browser or header related.
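For reference, a minimal sketch of that sleep test (the exact durations are assumptions matching the 2.5-minute figure above):
<?php
// Sleep test: the response arrives after ~150 s, well past the 1-minute
// mark at which uploads die, so a generic response timeout is ruled out.
set_time_limit(300);
sleep(150);
echo "done";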
I've also used a server with more RAM to rule that out. I've tested on 3 different servers with different specs, but all from the same CentOS 7 base.
I've now also upgraded to PHP 7.2 and updated the relevant settings again, with no change in the problem.
EDIT 2
The tech stack for this isolated instance is
Apache 2.4.6
PHP 5.6 / 7.2 (tried both), has OPCache
Redis 3.2.6 for session information and key / value storage (ElastiCache)
PostgreSQL 10.2 (RDS)
Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.
EDIT 3
Here are some logs from the Chrome network debugger:
{"params":{"net_error":-101,"os_error":32},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":69},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params": {"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source": {"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source": {"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
I went through a similar problem; in my case it was related to mod_reqtimeout. Adding:
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
to httpd.conf did the trick!
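For context, a fuller sketch of where that directive lives (assuming a stock Apache 2.4 build, where mod_reqtimeout is enabled by default):
<IfModule reqtimeout_module>
    # Allow 20 s to receive the headers, extended up to 40 s as long as the
    # client sends at least 500 bytes/s; allow 20 s for the body, extended
    # without an upper cap while data keeps arriving at 500 bytes/s, so
    # slow-but-steady uploads are not reset.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>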
You can check the documentation here.
Hope it helps!
Original source here
ERR_CONNECTION_RESET usually means that the connection to the server has ceased without sending any response to the client. This means that the entire PHP process has died without being able to shut down properly.
This is usually not caused by something like an exceeded memory_limit; it is more likely some sort of segmentation fault or similar. If you have access to error logs, check them. Otherwise, you might get support from your hosting company.
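On CentOS 7, as in the question, the relevant logs would typically be here (the paths are an assumption; they vary by distribution and vhost configuration):
tail -f /var/log/httpd/error_log /var/log/php-fpm/error.log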
I would recommend you try some of these things:
Try clearing the browser's cache. If you have already visited the page, the cache may contain information that doesn't match the current version of the website and so blocks the connection setup, making the ERR_CONNECTION_RESET message appear.
Add the following to your php.ini settings:
memory_limit = 1024M
max_input_vars = 2000
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 990
Try setting the following input in your form:
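Presumably this meant PHP's hidden MAX_FILE_SIZE field (an assumption, since the snippet is not shown here; the value is in bytes and must precede the file input):
<input type="hidden" name="MAX_FILE_SIZE" value="314572800" />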
In your processing script, increase the script execution time limit:
set_time_limit(200);
You might need to increase the SSL buffer size in your Apache config file:
SSLRenegBufferSize 10486000
The name and location of the conf file differ between distributions.
In Debian you find the conf file in /etc/apache2/sites-available/default-ssl.conf
Sometimes it is the mod_security module that prevents POSTs of large data (approximately 171 KB). Try adding/modifying the following in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
I hope something might work out!
In case anybody else runs into this: there is also a problem relating to PHP-FPM. If you don't set "ProxyTimeout" in your httpd.conf, the proxy connection to PHP-FPM uses a default timeout of one minute. It took me several hours to figure out the problem, as I was initially thinking of all the normal settings like everyone else.
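A minimal sketch of the fix, assuming Apache reaches PHP-FPM via mod_proxy_fcgi (300 is an arbitrary example value in seconds):
# httpd.conf
ProxyTimeout 300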
I had the same problem. I switched to a resumable (chunked) file upload method, where if the connection drops and comes back, the upload resumes from the same progress.
Check out the library https://packagist.org/packages/pion/laravel-chunk-upload
Installation
composer require pion/laravel-chunk-upload
Add service provider
\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class
Publish the config
php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
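A minimal controller sketch, adapted from the library's README (the saveFile() helper and the 'file' field name are placeholders):
use Illuminate\Http\Request;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

public function upload(Request $request)
{
    // Build a receiver for the "file" field, choosing the handler that
    // matches the JS uploader's chunk protocol.
    $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));
    if (!$receiver->isUploaded()) {
        throw new UploadMissingFileException();
    }
    $save = $receiver->receive();
    if ($save->isFinished()) {
        // All chunks have arrived - move the assembled file to storage.
        return $this->saveFile($save->getFile());
    }
    // Not finished yet - report progress so the client sends the next chunk.
    return response()->json(['done' => $save->handler()->getPercentageDone()]);
}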
In my opinion, it may be related to one of these:
About the Apache config (/etc/httpd/conf or /etc/apache2/conf):
Timeout 300
About the PHP config (php.ini):
upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300
About the PostgreSQL config (execute this statement):
SET statement_timeout TO 0;
About a proxy (or Apache mod_proxy): it may also be due to the proxy timeout configuration.
In case anyone has the same issue: in my setup the HTTP request had to go through a proxy server and a WAF. Small file uploads were fine, but with large files the TCP connection was automatically closed. How to validate:
Simply change your hosts file to point the domain at the web server's IP address (or use Firefox with no proxy if there is no WAF). If the problem goes away, it was caused by the proxy or the WAF sitting between your web server and the browser.
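A sketch of the hosts-file check (203.0.113.10 is a placeholder for your web server's real IP; the file is /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows):
203.0.113.10    www.example.com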
Connection reset occurs when the PHP process dies without a proper error message.
Changing the Oracle client version from 19 to 12c and then configuring it appropriately in php.ini solved the connection reset issue for our team.
Lately my site (with 260,000 posts, 12,000 images, 2,360,987 MySQL rows and 450.7 MiB in size) has been running slow and at times not loading for many minutes.
I installed this Debug bar plugin https://wordpress.org/plugins/debug-bar/
Memory usage
on server is: 174,319,288 bytes
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 16 GB
(PHP: 5.5.23, MySQL: 5.6.23, Apache 2.4)
I even tried disabling all plugins; it doesn't help much... it comes down to 160-163,xxx,xxx bytes.
on WAMP is: 37,834,920 bytes
(PHP: 5.5.12, MySQL: 5.6.17)
Why is the difference so huge? How do I detect the problem?
Been using the following plugins
Acunetix WP Security
Akismet
Antispam Bee
CloudFlare
Contact Form 7
Custom Post Type UI
Debug Bar
Login LockDown
Redirection
Theme Test Drive
W3 Total Cache
WordPress SEO
WP-Optimize
WP Missed Schedule
my.cnf values for the above server are
[mysqld]
slow-query-log=1
long-query-time=1
slow-query-log-file="/var/log/mysql-slow.log"
default-storage-engine = MyISAM
local-infile = 0
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
innodb_file_per_table=1
innodb_stats_on_metadata=0
max_connections=360
wait_timeout=60
connect_timeout = 15
thread_cache_size=20
thread_concurrency=8
key_buffer_size = 1024M
join_buffer_size = 2M
sort_buffer_size=1M
query_cache_limit=64M
query_cache_size=128M
query_cache_type=1
max_heap_table_size=32M
tmp_table_size=32M
table_open_cache=1000
table_definition_cache=1024
open_files_limit=10000
max_allowed_packet=268435456
low_priority_updates=1
concurrent_insert=2
#port = 8881
#innodb_force_recovery=0
#innodb_purge_threads=0
The "server" has Apache; that accounts for some (all?) of the difference.
Windows and Unix handle memory differently, and measure it differently. So, the difference may be irrelevant.
The numbers you have quoted are not big; what is the problem?
"Tried restarting the server and checked it in initial moments" -- That's mostly irrelevant. Programs tend to grow over time, up to some limit. Let's see the values in "steady state" with a typical load.
You have enough RAM to cache the entire dataset in RAM. But, due to inactivity, probably most of the data is not touched, hence has not been read into cache.
"High" memory usage is when you are swapping. Actually, that is probably "too high". So, say, 90% is "high". Your numbers are nowhere near that.
innodb_buffer_pool_size=200M -- is not enough to hold the entire 450.7MB dataset, but, as I say, most of the data is probably not actively used.
Edit (after posting of settings)
table_cache=10M
That is terrible! You won't be opening 10 million tables. Set it to 1000.
max_heap_table_size=512M
tmp_table_size=512MB
Those are dangerous. If you have multiple connections, each needing a tmp table (because of a complex query), you could run out of memory fast. Set them down to 32M.
innodb_force_recovery=3
Comment out that line -- It is to be used once, then removed.
The rest of the settings look harmless for this discussion.
After installing APC and viewing the apc.php script, I see the uptime restart every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0
APC caches live as long as their hosting process; it could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered anywhere php -b is used)
php-fpm: pm.max_requests individually for every pool.
You could try setting the option you are using to its "doesn't matter" value (usually 0) and test the setup with a simple hello-world PHP script and ApacheBench: ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed) to see whether the worker PIDs are changing or not.
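A sketch of such a probe script (hello.php is a placeholder name):
<?php
// Prints the PID of the worker that served this request; if the PIDs keep
// changing during the ApacheBench run, workers are being recycled.
echo "Hello from PID " . getmypid() . "\n";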
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every 2 hours.
TTL must never be set to 0
Just read the manual to understand how TTL is used : http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC, copy it to your webserver to check memory usage.
You must allow enough memory so APC has 20% free after some hours of running. Check this on a regular basis.
If you don't have enough memory available, use the filters option to prevent rarely accessed files from being cached.
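A sketch of such a filter in php.ini (the path regex is a placeholder; apc.filters takes comma-separated regexes, with a leading - meaning exclude):
apc.filters = "-/var/www/html/rarely-used/.*"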
Check my answer there
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today, found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go to Access WHM > Apache Configuration > Piped Log Configuration and enable Piped Apache Logs.
I have a bunch of client point of sale (POS) systems that periodically send new sales data to one centralized database, which stores the data into one big database for report generation.
The client POS is based on PHPPOS, and I have implemented a module that uses the standard XML-RPC library to send sales data to the service. The server system is built on CodeIgniter, and uses the XML-RPC and XML-RPCS libraries for the webservice component. Whenever I send a lot of sales data (as little as 50 rows from the sales table, and individual rows from sales_items pertaining to each item within the sale) I get the following error:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 54 bytes)
128M is the default value in php.ini, but I assume that is a huge amount to exhaust. In fact, I have even tried setting this value to 1024M, and all it does is take longer to error out.
As for steps I've taken, I've tried disabling all processing on the server-side, and have rigged it to return a canned response regardless of the input. However, I believe the problem lies in the actual sending of the data. I've even tried disabling the maximum script execution time for PHP, and it still errors out.
Changing the memory_limit by ini_set('memory_limit', '-1'); is not a proper solution. Please don't do that.
Your PHP code may have a memory leak somewhere and you are telling the server to just use all the memory that it wants. You wouldn't have fixed the problem at all. If you monitor your server, you will see that it is now probably using up most of the RAM and even swapping to disk.
You should probably try to track down the offending code in your code and fix it.
ini_set('memory_limit', '-1'); overrides the default PHP memory limit.
The correct way is to edit your php.ini file.
Edit memory_limit to your desired value.
As from your question, 128M (which is the default limit) has been exceeded, so there is something seriously wrong with your code, as it should not take that much.
If you know why it takes that much and you want to allow it, set memory_limit = 512M or higher and you should be good.
The memory allocation for PHP can be adjusted permanently, or temporarily.
Permanently
You can permanently change the PHP memory allocation two ways.
If you have access to your php.ini file, you can edit the value for memory_limit to your desired value.
If you do not have access to your php.ini file (and your webhost allows it), you can override the memory allocation through your .htaccess file. Add php_value memory_limit 128M (or whatever your desired allocation is).
Temporarily
You can adjust the memory allocation on the fly from within a PHP file. You simply have the code ini_set('memory_limit', '128M'); (or whatever your desired allocation is). You can remove the memory limit (although machine or instance limits may still apply) by setting the value to "-1".
It's very easy to get memory leaks in a PHP script - especially if you use abstraction, such as an ORM. Try using Xdebug to profile your script and find out where all that memory went.
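A sketch of enabling the profiler, assuming Xdebug 3 (Xdebug 2 used different setting names, e.g. xdebug.profiler_enable):
; php.ini
xdebug.mode = profile
xdebug.output_dir = /tmp/xdebug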
When adding 22.5 million records into an array with array_push, I kept getting "memory exhausted" fatal errors at around 20 million records, using 4G as the memory limit in php.ini. To fix this, I added the statement
$old = ini_set('memory_limit', '8192M');
at the top of the file. Now everything is working fine. I do not know if PHP has a memory leak. That is not my job, nor do I care. I just have to get my job done, and this worked.
The program is very simple:
$file = [];                 // initialize the target array
$fh = fopen($myfile, 'r');  // fopen() needs an explicit mode
while (!feof($fh)) {
    array_push($file, stripslashes(fgets($fh)));
}
fclose($fh);
The fatal error pointed to the array_push() line until I boosted the memory limit, which eliminated the error.
I kept getting this error, even with memory_limit set in php.ini, and the value reading out correctly with phpinfo().
By changing it from this:
memory_limit=4G
To this:
memory_limit=4096M
This rectified the problem in PHP 7.
You can properly fix this by changing memory_limit on fastcgi/fpm:
$ vim /etc/php5/fpm/php.ini
Change the memory limit, e.g. from 128M to 512M, as below.
; Maximum amount of memory a script may consume (128 MB)
; http://php.net/memory-limit
memory_limit = 128M
to
; Maximum amount of memory a script may consume (128 MB)
; http://php.net/memory-limit
memory_limit = 512M
When you see the above error - especially if the (tried to allocate __ bytes) is a low value, that could be an indicator of an infinite loop, like a function that calls itself with no way out:
function exhaustYourBytes()
{
    return exhaustYourBytes();
}
In your site's root directory:
ini_set('memory_limit', '1024M');
After enabling these two lines, it started working:
; Determines the size of the realpath cache to be used by PHP. This value should
; be increased on systems where PHP opens many files to reflect the quantity of
; the file operations performed.
; http://php.net/realpath-cache-size
realpath_cache_size = 16k
; Duration of time, in seconds for which to cache realpath information for a given
; file or directory. For systems with rarely changing files, consider increasing this
; value.
; http://php.net/realpath-cache-ttl
realpath_cache_ttl = 120
Rather than changing the memory_limit value in your php.ini file, if there's a part of your code that could use a lot of memory, you could remove the memory_limit before that section runs, and then replace it after.
$limit = ini_get('memory_limit');
ini_set('memory_limit', -1);
// ... do heavy stuff
ini_set('memory_limit', $limit);
In Drupal 7, you can modify the memory limit in the settings.php file located in your sites/default folder. Around line 260, you'll see this:
ini_set('memory_limit', '128M');
Even if your php.ini settings are high enough, you won't be able to consume more than 128 MB if this isn't set in your Drupal settings.php file.
Change the memory limit in the php.ini file and restart Apache. After the restart, run the phpinfo(); function from any PHP file for a memory_limit change confirmation.
memory_limit = -1
A memory limit of -1 means no limit is set; PHP may use as much memory as the system allows.
Just add an ini_set('memory_limit', '-1'); line at the top of your web page.
And you can set the memory to whatever you need in place of -1: 16M, etc.
For Drupal users, this Chris Lane's answer of:
ini_set('memory_limit', '-1');
works, but we need to put it just after the opening <?php tag in the index.php file in your site's root directory.
PHP 5.3+ allows you to change the memory limit by placing a .user.ini file in the public_html folder.
Simply create the above file and type the following line in it:
memory_limit = 64M
Some cPanel hosts only accept this method.
Crash page?
(It happens when MySQL has to query large rows. By default, memory_limit is set small, which is safer for the hardware.)
You can check your system's existing memory status before increasing it in php.ini:
# free -m
                    total       used       free     shared    buffers     cached
Mem:                64457      63791        666          0       1118      18273
-/+ buffers/cache:             44398      20058
Swap:                1021          0       1021
Here I have increased it as follows, and then ran service httpd restart to fix the crash-page issue:
# grep memory_limit /etc/php.ini
memory_limit = 512M
For those who are scratching their heads to find out why on earth this little function should cause a memory leak: sometimes, by a little mistake, a function starts recursively calling itself forever.
For example, a proxy class that has a method with the same name as a method of the object it proxies.
class Proxy {
    private $actualObject;
    public function doSomething() {
        // Note the typo below: "actualObjec" (missing "t") is the bug
        return $this->actualObjec->doSomething();
    }
}
Sometimes you may forget to bring in that little actualObject member, and because the proxy itself has that doSomething method, PHP wouldn't give you any error. For a large class, it could stay hidden from the eyes for a couple of minutes while you work out why it is leaking memory.
I had the error below while running on a dataset smaller than had worked previously.
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes) in C:\workspace\image_management.php on line 173
As the search for the fault brought me here, I thought I'd mention that it's not always the technical solutions in previous answers, but something more simple. In my case it was Firefox. Before I ran the program it was already using 1,157 MB.
It turns out that I'd been watching a 50 minute video a bit at a time over a period of days and that messed things up. It's the sort of fix that experts correct without even thinking about it, but for the likes of me it's worth bearing in mind.
In my case on a Mac (Catalina - XAMPP) there was no loaded php.ini file, so I had to do this first:
sudo cp /etc/php.ini.default /etc/php.ini
sudo nano /etc/php.ini
Then set memory_limit = 512M.
Then restart Apache and check whether the file is loaded:
php -i | grep php.ini
The result was:
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /etc/php.ini
Finally, check:
php -r "echo ini_get('memory_limit').PHP_EOL;"
Using yield might be a solution as well. See Generator syntax.
Instead of raising the memory limit in php.ini, implementing a yield inside a loop might fix the issue. Rather than loading all the data at once, a generator reads it one item at a time, saving a lot of memory.
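A minimal sketch, assuming the task is walking a big file without building a huge array (the file name is a placeholder):
function readLines($path)
{
    $fh = fopen($path, 'r');
    while (($line = fgets($fh)) !== false) {
        yield $line; // hand back one line at a time
    }
    fclose($fh);
}

foreach (readLines('big.log') as $line) {
    // process $line; only one line is held in memory at a time
}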
The reason for this error is that your server configuration has a very low memory limit. Try adding this to wp-config.php (put it after <?php in this file):
define('WP_MEMORY_LIMIT', '96M');
Please note that this limit is OK for the theme and the plugins that come with the theme. If you want to enable other plugins you may need to increase the limit further.
define('WP_MEMORY_LIMIT', '256M');
Running the script like this (a cron job, for example): php5 /pathToScript/info.php produces the same error.
The correct way: php5 -cli /pathToScript/info.php
If you're running a WHM-powered VPS (virtual private server), you may find that you do not have permission to edit php.ini directly; the system must do it. In the WHM host control panel, go to Service Configuration → PHP Configuration Editor and modify memory_limit.
I find it useful to include or require dbconnection.php and functions.php in the files that actually use them, rather than in the header, which is itself included everywhere.
So if your header and footer are included, simply include all your functional files before the header is included.
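A sketch of that ordering, using the file names mentioned above:
require_once 'dbconnection.php'; // heavy functional files first ...
require_once 'functions.php';
require_once 'header.php';       // ... then the shared header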
Greetings. This is a very common problem, because if you have very little memory allocated to PHP and your website grows, it will require more resources.
I found myself with a site that returned error 500 when modifying only certain products; the problem was that very heavy images had been used for those specific products. The solution:
1.- Increase "memory_limit" in php.ini
2.- Lower the weight of the images.
3.- Adjust "memory_limit" back to an acceptable value; "512M" is, at least for me, more than enough.
Now it is important to verify that the changes are taking effect, because PHP has several versions and several types of installation on a server; you may modify one php.ini, see no change, and that is because you are not modifying the correct php.ini file.
How do you verify that you are modifying the correct file?
In the PrestaShop dashboard, go to Advanced Settings / Information; there you can see "Memory limit".
Always remember that after making a change in the php.ini file it is advisable to restart Apache or Nginx.
Ubuntu: sudo service apache2 restart
IMPORTANT NOTE: Never set memory_limit = -1, as many people suggest here. The problem is that if you have a bug in a file or module, you could end up in a continuous loop consuming all of the server's memory and CPU. A simple example: a module has an error and keeps calling a function until it gets a positive result; this creates an infinite loop that will never stop, because PHP has no limit.
I hope it helps colleagues who have this problem.
The most common cause of this error message for me is omitting the "++" operator from a PHP "for" statement. This causes the loop to continue forever, no matter how much memory you allow to be used. It is a simple syntax error, yet it is difficult for the compiler or runtime system to detect. It is easy for us to correct if we think to look for it!
But suppose you want a general procedure for stopping such a loop early and reporting the error? You can simply instrument each of your loops (or at least the innermost loops) as discussed below.
In some cases such as recursion inside exceptions, set_time_limit fails, and the browser keeps trying to load the PHP output, either with an infinite loop or with the fatal error message which is the topic of this question.
By reducing the allowed allocation size near the beginning of your code you might be able to prevent the fatal error, as discussed in the other answers.
Then you may be left with a program that terminates, but is still difficult to debug.
Whether or not your program terminates, instrument your code by inserting BreakLoop() calls inside your program to gain control and find out what loop or recursion in your program is causing the problem.
The definition of BreakLoop is as follows:
function BreakLoop($MaxRepetitions = 500, $LoopSite = "unspecified")
{
    static $Sites = [];
    if (!@$Sites[$LoopSite] || !$MaxRepetitions)
        $Sites[$LoopSite] = ['n' => 0, 'if' => 0];
    if (!$MaxRepetitions)
        return;
    if (++$Sites[$LoopSite]['n'] >= $MaxRepetitions)
    {
        $S = debug_backtrace(); // array_reverse
        $info = $S[0];
        $File = $info['file'];
        $Line = $info['line'];
        exit("*** Loop for site $LoopSite was interrupted after $MaxRepetitions repetitions. In file $File at line $Line.");
    }
} // BreakLoop
The $LoopSite argument can be the name of a function in your code. It isn't really necessary, since the error message you will get will point you to the line containing the BreakLoop() call.
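A usage sketch (fetchNextRow() and process() are placeholders for your own code):
while ($row = fetchNextRow()) {
    // Abort with a diagnostic if this loop ever runs 10,000 times
    BreakLoop(10000, 'main-fetch-loop');
    process($row);
}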
In my case it was a subtle issue with the way a function was written. A memory leak can be caused by assigning a new value to a function's input variable, e.g.:
/**
 * Memory leak function that illustrates unintentional bad code
 * @param $variable - input variable that will be assigned a new value
 * @return null
 **/
function doSomething($variable) {
    $variable = 'set value';
    // or
    $variable .= 'set value';
}
Increasing the memory_limit fixed the problem. However, I had trouble finding where the memory limit was set. I am working on my project directly on a live server, so if you're doing the same, in cPanel you can find the memory_limit under Software → MultiPHP INI Editor, then select the location. I increased mine from 256M to 512M. You can also find instructions here.