At my workplace, we are running a Magento 1.3 storefront, and we are having a problem with a job that is being run by Magento's internal cron service. Troubleshooting is nearly at a halt because we can't identify which job is causing the problem. The only feedback that we're getting is that every night at 00:05, cron coughs up the following, notoriously unhelpful PHP error while executing /usr/bin/php-cgi -f /path/to/app/magento/html/cron.php.
PHP Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 50 bytes) in /chroot/magento/html/lib/Zend/Db/Statement/Pdo.php on line 294
Increasing PHP's memory limit is clearly not the answer - at 512MB, the problem is almost certainly that an algorithm is doing something grievously wrong, not that we've underestimated the requirements of the problem. Our database is fairly modest in size - a plaintext dump of the entire thing is less than 512MB, so a query would have to be pretty pathological to eat more than that. Our best guess is that something is probably using Zend's fetchAll() incorrectly, but that method isn't being called directly in anything we can find.
How can we get Magento to give us a stack trace or some other indication of its internal state at the time of the problem? Is there a way to get more transparency into exactly what PHP is trying to execute when it hits the memory wall?
Ideally, we would like to do this without modifying third-party code - sometimes plugin developers use bullfeathers measures like Zend Guard, or ship licenses that do not permit us to modify their broken code, or take other measures that basically make me want to go find Mr. Stallman and give him a warm, grateful hug.
Please note that the problem is not "how do we solve the out-of-memory error?" That's been asked many times before, and answered with varying excellence. The problem is "how can we tell which PHP file is causing the out-of-memory error?" It's a question about Magento internals, not about PHP qua PHP.
I would suggest using Mage::log() to log the beginning and end of each of your cron jobs so you can narrow the problem down to a single task. After that, just create a controller action that executes the job manually so you can start debugging it and nail down the problem.
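For jobs you control, this can be as simple as bracketing the cron method with log calls. A minimal sketch - the method name and log file are placeholders; Mage::log()'s optional second and third arguments select a log level and a dedicated file:

public function myNightlyJob()
{
    Mage::log('myNightlyJob: start, memory ' . memory_get_usage(true), Zend_Log::INFO, 'cron_trace.log');
    // ... the job's actual work ...
    Mage::log('myNightlyJob: end, memory ' . memory_get_usage(true), Zend_Log::INFO, 'cron_trace.log');
}

The last 'start' entry in var/log/cron_trace.log without a matching 'end' identifies the job that died.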
If possible, I would run the cron job with the Xdebug module loaded, using either an Xdebug function trace or an Xdebug stack trace (if Xdebug is enabled, it should display/log the stack trace automatically on a fatal error).
For the former, you should probably configure the following (either in php.ini, or on the command line with php -d xdebug.auto_trace=1 ... cron.php):
xdebug.auto_trace=1
xdebug.trace_output_dir=/some/temp/path/
Also check other interesting settings, like xdebug.collect_params.
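Since the failure here is memory exhaustion, the memory-delta option should also help:

xdebug.show_mem_delta=1

With it enabled, each line of the resulting .xt trace shows how much memory the call added, so the last few lines before the fatal error point straight at the allocation that hit the wall.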
Good luck!
NB: be careful where you output the traces since they probably contain sensitive data.
I am using the MysqliDb class from here:
https://github.com/ajillion/PHP-MySQLi-Database-Class/blob/master/MysqliDb.php
When I used it on my local PC, I didn't have any problems. But I bought hosting yesterday, uploaded my files about 5 minutes ago, and it doesn't work. I checked my host's error_log file and found this:
PHP Fatal error: Allowed memory size of 75497472 bytes exhausted (tried to allocate 4294967296 bytes) in /home/(..)/MysqliDb.php on line 417
What is causing this problem?
I added this line to my config file, but it didn't help either:
ini_set('memory_limit', '192M');
I think the problem is that you are using a LONGTEXT column in your database.
Please read this message: https://bugs.php.net/bug.php?id=51386
So try calling mysqli_stmt::store_result() before bind_result().
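A minimal sketch of what that looks like (the table and column names are made up; the point is the store_result() call between execute() and bind_result()):

$stmt = $mysqli->prepare('SELECT body FROM articles WHERE id = ?');
$stmt->bind_param('i', $id);
$stmt->execute();
$stmt->store_result();     // buffer the result set client-side first
$stmt->bind_result($body); // now the bound buffer matches the actual data size
$stmt->fetch();

Without store_result(), bind_result() allocates a buffer as large as the column's maximum possible length - for LONGTEXT that is 4GB, which matches the 4294967296 bytes in the error message.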
ini_set() calls that raise server limits tend to be blocked on shared hosts (so you may only be able to use the ones related to showing errors, and the like).
Bear in mind that you are trying to allocate 4GB, which is a fairly big amount of data for a website.
Recommendations:
Check whether you are creating an infinite loop that tries to load that much data (not only in that class, but also in your own code prior to that call).
Use a lighter mysqli wrapper, or the mysqli extension that comes with PHP 5 by default (see the sketch after this list).
Check whether your problem can be solved in another, lighter way (maybe asynchronously? that amount of memory is insane for a website), possibly in another language like C or C++ to take the load off Apache.
Talk to your hosting provider and try to convince them to let you load 4GB of data.
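On the second recommendation, a minimal sketch of using the stock mysqli extension directly (credentials, table, and column names are placeholders):

$db = new mysqli('localhost', 'user', 'pass', 'dbname');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}
$res = $db->query('SELECT id, title FROM articles LIMIT 10');
while ($row = $res->fetch_assoc()) {
    echo $row['title'], "\n";
}
$db->close();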
I have been tasked with setting up a website on various environments for different stages of evaluation (dev/test/staging/etc).
On our staging environment, however, it seems some difference is preventing the PHP script from finishing, so the page is never delivered to the browser.
I'm wondering if there is a way I can output to log some sort of stack trace or backtrace upon cutting the connection, or is there some other method to find out what exactly PHP is doing at any given point in the script's life cycle?
It's a Drupal site, so it involves a lot of code I'm not familiar with, and it could take hours to sprinkle die; statements throughout to see how far the script gets.
I understand I should probably be looking at the differences between environments; however, they should all have very similar configuration (Ubuntu 11.04), and the staging environment seems entirely happy to serve other PHP sites while this particular site refuses to finish. If anything, the staging site has more resources available than the environments that are not having problems.
UPDATE: Sorry all, found the problem in the end. The staging environment was on a VLAN that was not permitted to access itself via public IP, and for whatever reason (still confused about this) it was trying to access itself as part of the page load and never completing the request. Setting a hosts file entry for 127.0.0.1 fixed the issue.
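In other words, an /etc/hosts entry along these lines, with the site's actual hostname in place of the placeholder:

127.0.0.1   staging.example.com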
Debugging an issue like this step-by-step using a tool like Xdebug is an option, but it will probably take a long time - finding where to put the breakpoints is on about the same level as working out where to put die statements in the code. The debugger is the better way of doing it, but it won't save much by comparison when the problem is an unknown blocker somewhere in a large amount of unfamiliar code.
But Xdebug also has a profiler, which can show you which functions were called during the program run, how long they took, and where the bottlenecks are. This is probably a better place to start. Just configure Xdebug to generate a profiler trace, then use KCachegrind to view the trace in a graphical environment.
If your program is getting stuck in a loop or something specific is taking a long time to complete, this will pinpoint the problem almost straight away; you'll be able to see exactly which function is taking the time, and what the call chain looks like to get to it.
It's quite possible that once you've seen that, you'll be able to find the problem just by looking at the relevant code. But if you can't, you can then use Xdebug's step-through debugger to analyse the function as it runs and see what the variables are set to, in order to work out why it's looping.
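A minimal php.ini sketch for generating those profiler traces (Xdebug 2 setting names; the output directory is a placeholder):

xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp/xdebug

Each request then writes a cachegrind.out.* file to that directory, which KCachegrind opens directly.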
Xdebug can be found here: http://www.xdebug.org/
Use Xdebug.
It's very easy to install and use.
It has options like breakpoints and step-by-step execution to track the status of a PHP script before it finishes loading.
You can download Xdebug from here: http://www.xdebug.org/
A step-by-step tutorial for setting up Xdebug is available at sachithsays.blogspot.com/
I would like Xdebug to trace only "jumps" of over X ms or over Y KB of RAM - for instance, every execution that took longer than 100ms or increased memory use by more than 100KB. This would let me ignore thousands of trace lines I don't need to see and would make optimisation much easier. As it is, in ZF2 the bloated framework takes 1 second just to start with the Composer autoloader on our enterprise project, which produces thousands of lines I really have no use for. Somewhere along the line I do find the bigger jumps in execution time, but only after a long bout of scrolling.
Is there a default option to disable logging of "faster than X" executions, or if not, do you know of a helper shell/python script that could grep just the slower ones out?
For clarification, I am talking about the contents of the .xt file I get by running xdebug_start_trace() in my app.
I don't know of such options, but what I can suggest is using the profiler instead of the trace.
Here is an article on how you can use it.
In short, place these lines in your php.ini file:
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir="c:\Projects"
and when you want to start the profiler, request your URL with the query parameter ?XDEBUG_PROFILE=1
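For example, with a hypothetical URL:

http://example.com/index.php?XDEBUG_PROFILE=1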
This will produce a file with a name like cachegrind.out.* and place it in profiler_output_dir.
That file can be viewed with a cachegrind viewer for your OS. The link above has a list of apps for viewing those files on different platforms. I was using WinCacheGrind (for Windows) to profile a Zend Framework app - a very useful tool, in my opinion. The interface lets you see the call tree, execution times, number of calls, etc.
That said, I don't see an option to measure memory usage with it.
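If filtering the raw .xt trace is still wanted, here is a rough sketch of a filter in PHP. It assumes the default human-readable trace format, where each call line begins with two numeric columns - cumulative time in seconds and memory in bytes; the thresholds and file name are placeholders:

$timeJump = 0.100;      // only show steps that took at least 100 ms
$memJump  = 100 * 1024; // or that grew memory by at least 100 KB
$prevTime = 0.0;
$prevMem  = 0;
foreach (file('trace.1234.xt') as $line) {
    // Call lines start with the time and memory columns;
    // skip everything else (headers, footers, return-value lines).
    if (!preg_match('/^\s*(\d+\.\d+)\s+(\d+)/', $line, $m)) {
        continue;
    }
    $time = (float) $m[1];
    $mem  = (int) $m[2];
    if ($time - $prevTime >= $timeJump || $mem - $prevMem >= $memJump) {
        echo $line;
    }
    $prevTime = $time;
    $prevMem  = $mem;
}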
I have a CakePHP 2.2.3 application that's running perfectly fine on our dev server, a Debian Squeeze LAMP box from Turnkey Linux. We're using InMotion Hosting for our production server, and moving our code over to this server has been DISASTROUS.
While testing out AJAX functionality on one page, we were getting the terribly unhelpful:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 389245600 bytes) in Unknown on line 0
tl;dr: I am looking for suggestions on how we can debug this issue
My first course of action was to strip the code within the controller functions down to the bare minimum. The index() action of one of my controllers contains ONE line of code, and still somehow manages to exceed 256MB of memory per execution:
$this->autoRender = false;
To take the above point to the extreme, I commented out EVERY line of the model & controller that is generating this error. It still runs out of memory. Several other pages that make MySQL database requests also display this "memory exhausted" error despite loading completely. On other pages, the memory error is more of a show-stopper and completely prevents execution.
I have tried raising the memory limit from 256MB to 512MB or even 1024MB; all this does is suppress the error message itself. The page does not route, render, or do anything - it just silently fails.
At the suggestion of another SO post, I tried turning Debug from 2 down to 0, which does not help the issue at all either.
We do not have Xdebug installed on our production server, so I am at a loss as to how to track down the issue for our web host to fix.
The VPS we are using is a CentOS 5.8 server running Apache 2.2.23, MySQL 5.3.18, and CakePHP 2.2.3
Our webhost can't or won't provide any further information on the subject. They suggested we "ask the Cake devs if they've seen anything like this before", which I feel is a very cowardly way to kick the can down the road. I'm hoping that someone here on SO has seen something like this issue before and might be able to help.
I've seen this problem before, and it may be because you're not using the Containable behavior.
It happened to me many times before I learned to set $recursive = -1 on AppModel (or on whatever model you're using).
Unless you're knowingly managing tons of info per page, you should restrict the data retrieved. It's important to keep the retrieval of models to a minimum, using a combination of the Containable behavior and $recursive.
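A minimal sketch of that setup for CakePHP 2.x (the model names Post and Author are made up for illustration):

// app/Model/AppModel.php
class AppModel extends Model {
    public $recursive = -1;                // never fetch related models implicitly
    public $actsAs = array('Containable'); // opt back in per query
}

// In a controller, pull in only the related data you actually need:
$posts = $this->Post->find('all', array(
    'contain' => array('Author' => array('fields' => array('id', 'name'))),
));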
Just a tip: it can be a session problem. If you store too much in $_SESSION, session_start() can do such a thing for it has to read all the shit you've stored. Just try this:
$_SESSION = array();
If this helps, you'll find out the rest.
Has anyone had a problem with PHP 5.2.12 throwing a lot of "Maximum execution time" errors when trying to include() files?
I can't seem to find a matching bug on php.net, but it's consistently giving us that error in numerous scripts.
Can anyone recommend solutions?
The same script runs on a few other servers with PHP 5.2 without any problems, so just to let you guys know: it isn't a script problem.
This is much, much more likely to be a problem with your code rather than with a specific version of PHP. PHP by default has a maximum execution time of 30 seconds, which you can modify by calling set_time_limit() or adjusting your php.ini settings.
If you're not doing something that you expect to take a long time, then usually the cause of this error is an infinite loop somewhere in your code. I'd throw a debug_print_backtrace() and a couple of exit() calls into some key locations and try to figure out which file is giving you grief, and then take a closer look in there. Perhaps you're stuck in an infinite include() hierarchy, in which case you should be using include_once() for all your class and function library files.
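A minimal sketch of that checkpointing idea (the checkpoint label is a placeholder):

echo '--- checkpoint A ---', PHP_EOL;
debug_print_backtrace(); // show the call chain that led here
exit;                    // stop before the timeout obscures the output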
I would check to make sure the same include isn't somehow getting requested time and time again. You might try include_once() just to see if it changes things for you. That isn't a solution so much as a potential temporary fix; you should find out what is causing this, if indeed something is getting called over and over again.
If you have Xdebug set up and an IDE that supports debugging, this would be a great way to dig into the code.
Otherwise, can you try putting some output statements on the first line of the included file and on the line PRIOR to calling the include? See what's going on...