Something very strange started happening yesterday while coding.
I was testing a new function and everything was fine. I was building a JSON object and running print_r on it each time in a testing method to confirm the object was being built correctly.
As I implemented it into the codebase, it was still working fine. I then went off to change a different method, updated the code to work with that new method, tested its related screens, and everything still worked.
Then suddenly, on a page reload (after seeing everything work fine), I started getting a PHP memory exhaustion error.
Fatal error: Allowed memory size of 1342177280 bytes exhausted (tried to allocate 65488 bytes) in D:\public_html\genesis\system\core\Common.php on line 901
This happens no matter what I isolate.
I've even converted the index page to:
public function index() {
    echo 'Hello World';
    //$this->buildPage("login");
}
and it still throws the error.
I currently have this for my memory limit:
memory_limit=2480M
It was at 1280M; then I added another 1200M, and still no difference.
My other sites are loading fine, just this one. But I can't seem to troubleshoot it at all because I can't get ANY methods to load.
Has anyone else had this issue?
Any ideas on how to figure it out?
OK, so I figured it out; here is what I did and what was happening.
1) First I had to get Xdebug installed. (https://xdebug.org/wizard.php)
2) Then I could see the real errors when trying to load the page.
I had hit the maximum allowed nesting level in my CodeIgniter app. This was due to loading models within models and back again; I hadn't realised that this kind of circular cross-model loading just recurses until it blows up.
So I moved the constructor-based loading of my primary models to the autoload.php file.
This got things loading again.
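For anyone hitting the same wall, here is a minimal sketch of the pattern (the model names are hypothetical, not from my codebase): two models that load each other in their constructors recurse until the nesting limit trips, and moving the loads into application/config/autoload.php breaks the cycle.

```php
<?php
// Hypothetical pair of CodeIgniter models that load each other in their
// constructors: User_model -> Order_model -> User_model -> ... until the
// nesting limit is exhausted.
class User_model extends CI_Model {
    public function __construct() {
        parent::__construct();
        $this->load->model('order_model'); // runs Order_model's constructor
    }
}

class Order_model extends CI_Model {
    public function __construct() {
        parent::__construct();
        $this->load->model('user_model'); // ...which loads User_model again
    }
}

// The fix: remove the constructor loads above and autoload both models
// once, in application/config/autoload.php:
// $autoload['model'] = array('user_model', 'order_model');
```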
Related
Today, when I tried to edit my site locally, I got a strange error like this:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried
to allocate 6874029536211531203 bytes) in
D:\wamp64ario\www\owjgraph\wp-includes\functions.php on line 5231
Why did it try to allocate 6874029536211531203 bytes?
Sometimes I get this when I try to reach the login page, other times in different situations, like the dashboard or updating posts.
I tried many things, but with no success:
My other local website (with no plugins, or with various plugins) gets the same error.
Increased memory_limit to 256M, 512M, or 1GB in WAMP, with no success.
I heard version 4.9.7 has a memory leak bug, so I downgraded to older versions; the problem still exists.
Uninstalled and reinstalled WAMP.
Installed other local tools like MAMP.
Cleared cache and cookies and used different browsers.
Installed fresh WordPress 4.9.7, 4.9.5 and 4.9.1.
None of them solved my problem and I'm really confused.
Is this something wrong with my Windows or registry? How can I debug or trace where this problem is coming from?
Try: Look in your error logs; you may get a trace that will zero in on the problem for you (like an infinite loop).
Locate the line that is throwing the error and determine whether it can be refactored to reduce memory usage.
You can override PHP's memory allocation in your wp-config.php, like so:
define( 'WP_MAX_MEMORY_LIMIT', '512M' );
but this is just a band-aid in my opinion. There is likely something genuinely wrong, which you'll need to fix for the long-term health of your application.
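If you're not sure where that trace will show up, WordPress can write errors to wp-content/debug.log; a minimal wp-config.php fragment (these are standard WordPress constants, placed above the "stop editing" line):

```php
// Log errors to wp-content/debug.log instead of printing them on the page.
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
define( 'WP_DEBUG_DISPLAY', false );
```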
The same thing is happening to me. If I var_dump stream_get_wrappers(), I get weird results with very long lines of gibberish text. I think it's WAMP's fault, because I also get this error in plain PHP.
Temporary solution for WordPress:
Open wordpress/wp-includes/functions.php and modify the wp_is_stream function to look like this:
function wp_is_stream( $path ) {
    $wrappers = stream_get_wrappers();

    // Workaround: drop the corrupted wrapper names (the gibberish entries
    // are abnormally long) so the regex below stays sane.
    foreach ( $wrappers as $i => $wrapper ) {
        if ( strlen( $wrapper ) > 100 ) {
            unset( $wrappers[ $i ] );
        }
    }

    $wrappers_re = '(' . join( '|', $wrappers ) . ')';

    return preg_match( "!^$wrappers_re://!", $path ) === 1;
}
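To check whether your install is affected before patching anything, you can dump the registered wrappers from plain PHP (no WordPress needed) and look for abnormally long entries; a quick diagnostic sketch:

```php
<?php
// Healthy wrapper names are short ("file", "http", "php", "zip", ...).
// Corrupted entries show up here as very long strings of gibberish.
foreach ( stream_get_wrappers() as $wrapper ) {
    printf( "%5d  %s\n", strlen( $wrapper ), substr( $wrapper, 0, 60 ) );
}
```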
I use WampServer 3.0.6 64bit
PrestaShop suddenly gives an HTTP 500 error. I turned on the error log and got this:
"Fatal error: Out of memory (allocated 709623808) (tried to allocate
130968 bytes) in
/var/www/vhosts/44/252639/webspace/httpdocs/shop.mywebsite.com/classes/Configuration.php
on line 206".
I double-checked Configuration.php line 206, and it's just a standard PrestaShop file; there's nothing weird in it. After all, it's an "Out of memory" error, so maybe I should increase memory.
phpinfo() shows memory_limit 1024M, which is already quite a lot, but maybe I should try 2048M. I tried to create a new custom php.ini, but that did not work, because (according to phpinfo) the loaded ini file is read from this directory: /opt/alt/php56/etc.
My hosting provider does not allow me to edit it; it's read-only.
What could I do to solve the problem ?
I did not make a backup yet.
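For what it's worth, on shared hosts where the global php.ini is read-only, there are usually two other ways to raise the limit, if the host permits overrides at all; both of these are untested sketches for that setup:

```php
<?php
// Option 1: raise the limit at runtime, e.g. at the top of index.php.
ini_set('memory_limit', '2048M');

// Option 2 (CGI/FastCGI only): a .user.ini file in the document root,
// containing the single line:
//     memory_limit = 2048M
```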
I fixed it. It had nothing to do with not having enough memory. Someone had created a product, and that product caused the errors; I don't know what exactly, but for everyone in the future:
Create a backup.
Turn off third-party modules.
Delete all products.
Delete the cache.
...and so on, until your webpage loads correctly again.
After that, restore your backup and delete only what caused the error.
Hope it will work.
The problem is very, very weird. Let me explain a little better. We have a multisite built for a client. Until recently we could edit the homepage with no issues. In the meantime we upgraded the core (it was still working with the new core). Just recently, whenever I try to edit the homepage, I get this error:
Fatal error: Out of memory (allocated 42467328) (tried to allocate 64 bytes) in /home/officete/public_html/wp-includes/wp-db.php on line 1557
OK, so the apparent solution was to change the PHP memory allowance... Well, I have increased it on the server via WHM, and in .htaccess, wp-config.php and php.ini, to over 1.2GB (I never expected to have to raise it that far), just for testing. Every time I try to edit the page I get the same error, and the 42467328 allocation figure never changes; the "64 bytes" part does, though, ranging between 32 and 128 bytes so far.
I am stumped and have no idea what else I can do. I did contact the server provider, and they say it looks OK from their end.
I am assuming it's the amount of data being collected; the page does contain a few ACF repeater fields (15 of them... I know... but I didn't build it). I disabled all the plugins and the error persists (I know disabling them doesn't really change what is pulled from the db).
BTW, line 1557 is in the function that returns the query result as an array.
Your page needs more memory, not execution time. Add the line below to your wp-config.php and try:
define('WP_MEMORY_LIMIT', '128M');
Running Laravel 3.
I am trying to upload files with the Laravel framework. If the file is larger than the PHP upload_max_filesize setting, it throws the exception below.
I have tried this in my controller and routes with no success (the if statement runs, it sets a session, but the exception is still thrown, showing the error page):
if ($_SERVER['CONTENT_LENGTH'] > 8380000) {
    // do stuff here because it's too big
    // set a session and exit()
}
How can I prevent this exception from being thrown without upping the php memory limits?
Error:
Unhandled Exception
Message:
POST Content-Length of 9306598 bytes exceeds the limit of 8388608 bytes
Location:
Unknown on line 0
As a side note, this question has been asked at least twice in the Laravel forum with no good answer given, except for 'increase your PHP memory limits'.
EDIT: the problem seems to be that Laravel is reading all the $_POST input before I can even check it in the routes or controllers. Seems like a bug to me.
This looks like PHP's maximum POST size, which defaults to 8M (8388608 bytes) on many systems. There is nothing you can do in Laravel to get around it, as it is handled and configured at the PHP level. Read Increasing the maximum post size to see how to change this.
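One way to catch this before Laravel boots, since nothing inside the framework runs early enough: when post_max_size is exceeded, PHP discards the body, so $_POST and $_FILES arrive empty even though CONTENT_LENGTH is still set. A sketch for the top of the front controller (public/index.php), before the framework is loaded:

```php
<?php
// Detect an oversized POST before the framework sees the request.
$length = isset($_SERVER['CONTENT_LENGTH']) ? (int) $_SERVER['CONTENT_LENGTH'] : 0;

if ($_SERVER['REQUEST_METHOD'] === 'POST' && $length > 0
    && empty($_POST) && empty($_FILES)) {
    // PHP dropped the body because it exceeded post_max_size.
    header('HTTP/1.1 413 Request Entity Too Large');
    exit('Upload too large: the limit is ' . ini_get('post_max_size'));
}
```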
PHP is raising this as a warning, and Laravel is treating it as a fatal error.
This is done in Error::shutdown: any PHP error raised results in an application shutdown via this handler.
A solution I've found is to filter which error types are allowed to end in Error::shutdown.
The downsides are:
It needs a modification to a core file, laravel/laravel.php, which is not a good idea if you plan to update Laravel to new versions (something that will hardly happen now that version 4.1 is out there).
I could not fully test whether skipping the shutdown for warnings has other side effects on Laravel's behaviour.
This is the modification I made in file laravel/laravel.php line 46:
register_shutdown_function(function()
{
    require_once path('sys').'error'.EXT;

    $error = error_get_last();

    // Only shut down for error types other than warnings, so an oversized
    // POST (raised as a warning) no longer aborts the whole request.
    if ($error !== null && $error['type'] !== E_WARNING)
    {
        Error::shutdown();
    }
});
I'm experiencing a strange situation.
My application writes a lot of trace logs to a file. (I don't know exactly how; I use my framework's logger. I can check this, though.)
The problem is, when the application is terminated by a fatal error (only a fatal one), e.g. "Fatal error: Call to a member function someFunction() on a non-object", I end up with no logs at all, even logs that should have been recorded much earlier during the execution of the script.
(Yes, I tried flushing the logs; that doesn't help either.) It looks like termination by a fatal error somehow cancels the writes to files made at earlier points in the application.
Any ideas what goes on here?
Thank you
A fatal error is... well... fatal: it stops the execution of the script, which will then not do anything else it would normally have done.
In your case, I suppose your logging framework logs into memory, and that this in-memory log is only written to the file when the processing of the request is done.
Some logging mechanisms do that to avoid writing to the file several times, at different points during the generation of the response (which would mean either keeping the file locked, to avoid concurrency problems, or opening and closing it over and over).
As you got a fatal error, the normal operation that runs at the end of the response's generation was never called, and so the in-memory log was never written to the file.
Now, the only way to know for sure would be to take a look at the logging mechanisms of your framework ;-)
Apparently, the fatality of the fatal error that Pascal mentioned is not 100% fatal.
The below allowed me to have my logs even on fatal errors:
function correctShutdown()
{
    global $logger; // assuming the framework exposes its logger instance
    $logger->flush();
}

register_shutdown_function('correctShutdown');