Laravel 3 - POST Content-Length Exception - php

Running Laravel 3.
I am trying to upload files with the laravel framework. If the file is larger than the php setting for upload_max_filesize it throws the exception below.
I have tried this in my controller and in my routes with no success (the if statement runs and sets a session, but the exception is still thrown and the error page is shown):
if ($_SERVER['CONTENT_LENGTH'] > 8380000) {
    // do stuff here because it's too big
    // set a session and exit()
}
How can I prevent this exception from being thrown without upping the php memory limits?
Error:
Unhandled Exception
Message:
POST Content-Length of 9306598 bytes exceeds the limit of 8388608 bytes
Location:
Unknown on line 0
As a side note, this question has been asked at least twice in the Laravel forum with no good answer given except for 'increase your PHP memory limits'.
EDIT: the problem seems to be that Laravel is loading all the _POST inputs before I can even check them in the routes or controllers. Seems like a bug to me.

This looks like PHP's maximum POST size, which defaults to 8M on many systems (8M is exactly 8388608 bytes, matching the limit in your error). There is nothing you can do in Laravel to get around this, as it is handled at the PHP level. Read Increasing the maximum post size to see how to change it.
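The size check from the question can still be useful for showing a friendly error page, as long as it runs before the framework touches $_POST. A minimal plain-PHP sketch (placement in a front controller before Laravel boots is an assumption, as is the 8M fallback):

```php
<?php
// Convert a php.ini shorthand value like "8M" into bytes so the raw
// Content-Length header can be compared against post_max_size.
function iniSizeToBytes(string $value): int
{
    $value = trim($value);
    $unit  = strtoupper(substr($value, -1));
    $bytes = (int) $value;

    switch ($unit) {
        case 'G': $bytes *= 1024; // fall through
        case 'M': $bytes *= 1024; // fall through
        case 'K': $bytes *= 1024;
    }

    return $bytes;
}

$limit = iniSizeToBytes(ini_get('post_max_size') ?: '8M');

if (isset($_SERVER['CONTENT_LENGTH']) && (int) $_SERVER['CONTENT_LENGTH'] > $limit) {
    // Too big: redirect or render a friendly message here,
    // instead of letting PHP's warning become a fatal error.
}
```

This avoids hard-coding 8380000: the guard always tracks whatever post_max_size is actually set to.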

PHP is raising this warning and Laravel is treating it as a fatal error.
This is done in Error::shutdown: any PHP error raised will result in an application shutdown due to this handler.
A solution I've found is to filter which error types are allowed to end up in Error::shutdown.
The downsides are:
It requires modifying a core file, laravel/laravel.php, which is not a good idea if you plan to update Laravel to new versions (something that will hardly happen now that version 4.1 is out there).
I could not fully test whether skipping the shutdown on warnings causes side effects elsewhere in Laravel's behaviour.
This is the modification I made in file laravel/laravel.php line 46:
register_shutdown_function(function()
{
    require_once path('sys').'error'.EXT;

    $error = error_get_last();

    if ($error !== null)
    {
        if ($error['type'] != E_WARNING)
        {
            Error::shutdown();
        }
    }
});

Related

Sudden PHP Memory Leak in Codeigniter

Something very strange started happening yesterday while coding.
I was testing a new function; all was going fine, no issues. I was building a JSON object and using print_r on screen each time to check that the object was built successfully in a test method.
As I was implementing it into the codebase, it was still working fine. I then went off to change a different method, updated the code to work with that new method, tested its related screens, and everything still worked.
Then all of a sudden, on a page reload (after seeing everything work fine), I started getting a PHP memory error:
Fatal error: Allowed memory size of 1342177280 bytes exhausted (tried to allocate 65488 bytes) in D:\public_html\genesis\system\core\Common.php on line 901
This happens no matter what I isolate.
I've even converted the index page to:
public function index() {
echo 'Hello World';
//$this->buildPage("login");
}
and it still throws the error.
I currently have this for my memory limit:
memory_limit=2480M
It was at 1280M; then I added another 1200M and it still made no difference.
My other sites are loading fine, just not this one. But I can't troubleshoot it at all because I can't get ANY methods to load.
Has anyone else had this issue?
Any ideas on how to figure it out?
OK, so I figured it out. Here is what I did and what was happening.
1) First I had to get Xdebug installed (https://xdebug.org/wizard.php).
2) Then I could see the errors when trying to load the page.
I had hit the maximum allowed nesting level in CodeIgniter. This was due to loading models within models and back again; I didn't realise circular model usage was not allowed.
So I moved my constructor-based loading of primary models to the autoload.php file.
That got things loading again.
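The circular-loading problem above can be illustrated outside CodeIgniter. A minimal plain-PHP sketch (the class names are invented): moving cross-model access out of constructors and behind a small shared registry breaks the A-loads-B-loads-A recursion:

```php
<?php
// Plain-PHP sketch (not CodeIgniter): a tiny registry that hands back
// one shared instance per class, so models can reference each other
// lazily instead of constructing one another recursively.
class Registry
{
    private static array $instances = [];

    public static function get(string $class): object
    {
        // Reuse an existing instance if we have one; this is what
        // stops the A -> B -> A nesting blowup.
        return self::$instances[$class] ??= new $class();
    }
}

class UserModel
{
    // Lazy access instead of `new OrderModel()` in the constructor.
    public function orders(): OrderModel
    {
        return Registry::get(OrderModel::class);
    }
}

class OrderModel
{
    public function user(): UserModel
    {
        return Registry::get(UserModel::class);
    }
}
```

CodeIgniter's autoload.php achieves a similar effect: models are loaded once up front, so no model's constructor needs to load another.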

Laravel memory exception 'Database/Connection.php:301'

I have a strange bug going on here: sometimes select queries die with a memory exception. The big problem is that I've never been able to see the error myself. I only know of its existence from users and from the laravel.log file, which contains entries like:
[2015-03-05 11:46:07] production.ERROR: exception 'Symfony\Component\Debug\Exception\FatalErrorException' with message 'Allowed memory size of 134217728 bytes exhausted (tried to allocate 196605 bytes)' in [...]/vendor/laravel/framework/src/Illuminate/Database/Connection.php:301
Stack trace:
#0 [internal function]: Illuminate\Exception\Handler->handleShutdown()
#1 {main} [] []
My questions are:
How do I debug the issue (find out which query blows everything up)?
Or, if there's a known workaround for this, what is it?
I have already tried DB::disableQueryLog(); within my start/artisan.php.
The main issue is:
The PHP process is running out of memory, i.e. hitting the memory_limit set in your php.ini. The cause may vary (infinite loops, large select statements, etc.); in short, anything that requires PHP to hold a lot of data in memory during processing.
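One common culprit behind hitting the limit inside the query layer is buffering an entire result set at once. A sketch with plain PDO and an in-memory SQLite table (illustrative only, not the asker's schema) of streaming rows instead of calling fetchAll():

```php
<?php
// Illustrative sketch with SQLite: iterate the statement to process
// rows one at a time instead of buffering them all with fetchAll().
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)');

$insert = $pdo->prepare('INSERT INTO items (body) VALUES (?)');
for ($i = 0; $i < 1000; $i++) {
    $insert->execute([str_repeat('x', 100)]);
}

// fetchAll() would hold all 1000 rows in memory at once; iterating
// the statement hands us one row at a time.
$count = 0;
foreach ($pdo->query('SELECT body FROM items') as $row) {
    $count++; // per-row work goes here
}
```

Note that with the MySQL driver, true streaming additionally requires disabling buffered queries (the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute); with buffering on, the whole result set still lands in memory.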
My specific issue & solution:
I had this issue due to long-running PHP processes, i.e. queue workers.
I solved it by setting up a cron job that runs php artisan queue:restart every 20 minutes.
As the error message said, the error was about a certain database query. But I also asked how I could trace back the faulty query. I used an error-tracking service called Rollbar to find out which API call died. I'm pretty sure it isn't the best way to figure this out, but it worked...

Wordpress Dies When trying to edit the homepage

The problem is very, very weird. Let me explain a little better: we have a multisite built for a client. Until recently we could edit the homepage with no issues. In the meantime we upgraded the core (it was still working with the new core). Just recently, whenever I try to edit the homepage I get this error:
Fatal error: Out of memory (allocated 42467328) (tried to allocate 64 bytes) in /home/officete/public_html/wp-includes/wp-db.php on line 1557
OK, so the apparent solution was to change the PHP memory allowance... Well, I have increased it on the server via WHM, and increased it in .htaccess, wp-config.php and php.ini to over 1.2GB (I never really expected to have to increase it so much), just for testing. Every time I try to edit the page I get the same error; the 42467328 allocation figure doesn't change at all, although the "64 bytes" part does, and it has been between 32 and 128 bytes so far.
I am stumped. And have no idea what else I can do. I did contact the server provider they say it looks ok from their end.
I am assuming it's the amount of data being collected; the page does contain a few ACF repeater fields (15 of them... I know... but I didn't build it). I disabled all the plugins and the error persists (I know that disabling them doesn't really change what is being pulled from the db).
BTW, line 1557 is the return of the function that returns the query result as an array.
Your page needs more memory (WP_MEMORY_LIMIT controls WordPress's memory cap, not execution time). Add the code below to your wp-config.php file and try again:
define('WP_MEMORY_LIMIT', '128M');

DOMDocument->load() I/O warning : failed to load external entity

On PHP 5.3.3-7, I'm using DOMDocument->load() to load a local file, and recently I've started encountering situations where I start getting E_WARNINGs
DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "/path/to/local/file/that/does/exist"
Once this starts happening the only way I've found to make the errors stop is to restart apache, and then it's fine for a while again.
I haven't changed any related code recently, but it occurred to me that this started happening after I installed a Debian patch for CVE-2013-1643, which seems to possibly disable entity loading... if there's a single instance of an event that would trigger disabling, could that disable it permanently for all future PHP requests until a restart? That seems aggressive. By contrast, libxml_disable_entity_loader() seems to operate on the current request only.
I have no code that I know of that should ever load remote XML and that would ever trigger disabling, but if this is what's happening, I would have expected something to show up in my php error log, but I don't see anything. What other avenues should I investigate?
EDIT: Finally, I've been able to predictably repeat the problem. If I intentionally exceed the allowed memory limit in a single session...
mod_fcgid: stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes)
...then I start getting the I/O warning on all subsequent calls to DOMDocument->load(). This time, I was able to get it working again without restarting apache... just by calling libxml_disable_entity_loader(false). This is truly funky behavior--it's starting to smell like a bug in php?
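Based on that finding, a defensive sketch: libxml_disable_entity_loader() returns the previous state of the process-wide flag, so code can probe it, force it off for a local load, and restore whatever it found afterwards. (The function is deprecated as of PHP 8.0; it is relevant to the PHP 5.x behavior described above. The temp-file XML here is only for demonstration.)

```php
<?php
// Probe and reset the process-wide entity-loader flag before a local
// load; the function returns the *previous* state, so we can put it
// back afterwards instead of masking the underlying problem.
$wasDisabled = libxml_disable_entity_loader(false);

$file = tempnam(sys_get_temp_dir(), 'xml');
file_put_contents($file, '<?xml version="1.0"?><root><item>ok</item></root>');

$doc = new DOMDocument();
$loaded = $doc->load($file);

libxml_disable_entity_loader($wasDisabled); // restore what we found
unlink($file);
```

Logging the value of $wasDisabled would also confirm whether some earlier request really did leave the loader disabled for the whole process.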

Zend session_start gives Fatal error: Exception thrown without a stack frame in Unknown on line 0

When running a Zend application locally I get Fatal error: Exception thrown without a stack frame in Unknown on line 0. I traced that error to the line $startedCleanly = session_start();
I can't get past it. When I restart the server and reload the page I do not get the error, but on every other reload I do. I looked into the php/tmp dir to see if there are any session files, and as far as I can see they aren't there. I think the session isn't being written, but when I try a simple test.php file containing just a session_start(); line, without Zend Framework, I see that a file is created in that dir.
I really don't know where to go next.
This happens when your destructor or error handler throws an exception. That can happen for multiple reasons depending on your exact setup and the session storage method you're using: for example, the session directory is not writable or does not exist, the database is not accessible or its fields are invalid, Redis does not respond, etc.
So, check your settings and look for something that would prevent saving the session data.
More elaborate description can be found here.
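The underlying condition is easy to reproduce outside Zend Framework. A minimal sketch (the SessionWriter class is invented for illustration): an exception escaping a destructor is the class of failure involved. During script shutdown there is no stack frame for it, which yields exactly "Exception thrown without a stack frame in Unknown on line 0"; here the destructor is triggered mid-script so the exception can be caught and inspected instead:

```php
<?php
// Minimal reproduction of the failure class: an exception thrown
// from a destructor. Triggered via unset() inside a try block so it
// can be caught; at shutdown time the same throw would be uncatchable.
class SessionWriter
{
    public function __destruct()
    {
        // Stand-in for a session save handler failing to write.
        throw new RuntimeException('could not write session data');
    }
}

$caught = null;
try {
    $writer = new SessionWriter();
    unset($writer); // destructor runs here, inside the try block
} catch (RuntimeException $e) {
    $caught = $e->getMessage();
}
```

With Zend sessions, the save handler runs at shutdown, which is why the real error surfaces with no file or line information.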
I know this post is old, but I've just figured out that I was getting "Fatal error: Exception thrown without a stack frame in Unknown on line 0" because my 'modified' and 'lifetime' columns were of type 'timestamp without time zone' when they should have been 'integer' (I'm using Postgres 9, BTW).
Hope this helps someone.
The problem could also be a full disk!
