My Apache gets killed sometimes because it runs out of memory. This did not use to happen. I have several hundred PHP files in this project.
I suspect that there was an unintentional recursion created -- not just a function calling itself, but function A calling B which then calls A again.
I've tried just reading the code and finding such recursion, but with no luck.
Is there a way I can tell PHP to keep track of ALL recursions, or throw a warning when its internal stack is over a certain size?
My Apache gets killed sometimes because it runs out of memory.
Well, that's at least a point to start with. From your question I can read that you suspect this is caused by a PHP script.
To find out which one, you need to look further. One way I can think of is to enable PHP error logging and then set memory_limit to a low value so that you can provoke errors wherever high memory consumption happens. The error log will point you to the line where this occurs (see as well Protocol of some PHP Memory Stretching Fun).
This should give you some potential places.
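For example, a minimal php.ini sketch for this approach (the log path and the exact limit are placeholders you would adjust for your setup):

; log errors to a file instead of displaying them
log_errors = On
error_log = /var/log/php_errors.log
; deliberately low limit so "Allowed memory size exhausted" errors
; (including file and line number) show up in the log
memory_limit = 16M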
I'm not sure that recursion must be the cause here, and you wrote yourself that you only assume it is. To detect recursion, you can use Xdebug and limit the nesting depth:
xdebug.max_nesting_level=<your preferred value>
This should give you as well more useful information in the error logs.
If neither memory nor recursion is the cause, it might either not be related to PHP at all, or PHP segfaults and your webserver can't handle that situation. I have no idea how mod_php deals with segfaults, for example, but you haven't specified which SAPI you're using anyway.
A typical source of PHP segfaults, however, is regular expressions that kick out PHP before the PCRE recursion limit is triggered. If you're making use of regular expressions in your code, you can reduce pcre.recursion_limit to a lower value and test whether that helps. However, from your question it's not clear whether that is a cause at all, so you first need to find out more.
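As a sketch, lowering the PCRE limits in php.ini could look like this (the values are only illustrative, not recommendations):

; make problematic patterns fail with an error instead of crashing PHP
pcre.recursion_limit = 1000
pcre.backtrack_limit = 100000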
Related
I'm struggling here, probably because I'm not a PHP expert (yet) and definitely a little lacking in Linux administration, but I'm learning. I've looked for documentation on how to find the memory leak that my code definitely has, and a lot of articles suggested Xdebug. I got it installed, turned on logging briefly, and used tracefile-analyser.php to dump out the following report. The problem I now have is finding proper documentation on the tracefile script to explain what each column means. Can someone either point me to straightforward "if you see X, this means it is a memory leak" documentation, or explain how I would find the function calls leaking memory in the following output?
http://hully.net/ML.jpg
Based on the documentation, if xdebug is set to capture trace information in machine-readable format, it records the amount of memory in use when the program begins a function, and then how much memory is being used when the program exits the function.
I'm not sure what program you are using to parse the trace file, but I would guess that it displays the total memory usage of your program in the first column and how much memory the individual function uses in the second column.
If you are already pulling the traces in machine-readable format (xdebug.trace_format = 1 in your php.ini) you may want to try Xdebug Trace Tree to view the results. It shows a column specifically for the change in memory usage.
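If tracing isn't set up yet, a minimal php.ini sketch for machine-readable traces might look like this (the output directory is a placeholder; these are Xdebug 2 style settings):

; write a trace file for every request, in machine-readable format
xdebug.auto_trace = 1
xdebug.trace_format = 1
xdebug.trace_output_dir = /tmp/xdebug
; for human-readable traces (trace_format = 0) this adds a memory-delta column
xdebug.show_mem_delta = 1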
I have a PHP program that will run forever (not a webpage, a socket server). After processing over 1000 requests, the program eventually crashes due to an out-of-memory error.
I am not sure why this happens. I have tried using garbage collection functions in the function that processes requests (onMessage), but it does not result in any changes. Any suggestions would be appreciated.
Investing huge amounts of effort, you may be able to mitigate this for a while. But in the end you will have trouble running a non-terminating PHP application.
Check out PHP is meant to die. This article discusses PHP's memory handling (among other things) and specifically focuses on why all long-running PHP processes eventually fail. Some excerpts:
There’s several issues that just make PHP the wrong tool for this. Remember, PHP will die, no matter how hard you try. First and foremost, there’s the issue of memory leaks. PHP never cared to free memory once it’s not used anymore, because everything will be freed at the end — by dying. In a continually-running process, that will slowly keep increasing the allocated memory (which is, in fact, wasted memory), until reaching PHP’s memory_limit value and killing your process without a warning. You did nothing wrong, except expecting the process to live forever. Under load, replace the “slowly” part for "pretty quickly".
There’s been improvements in the “don’t waste memory” front. Sadly, they’re not enough. As things get complex or the load increases, it’ll crash.
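If you do have to keep the process alive for now, one common mitigation (a sketch only; the soft limit and the supervisor that restarts the worker are assumptions about your setup) is to watch memory usage inside the request loop and exit cleanly before memory_limit is reached, letting something like systemd or supervisord start a fresh process:

// inside the long-running request loop, e.g. the onMessage handler
gc_collect_cycles(); // reclaim cyclic references PHP won't free on its own

$softLimit = 128 * 1024 * 1024; // hypothetical 128 MB soft limit
if (memory_get_usage(true) > $softLimit) {
    error_log('Memory soft limit reached, shutting down so the supervisor restarts us');
    exit(0);
}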
This may be better suited to server fault but I thought I'd ask here first.
We have a file that is prepended to every PHP file on our servers using auto-prepend that contains a class called Bootstrap that we use for autoloading, environment detection, etc. It's all working fine.
However, when there is an "OUT OF MEMORY" error directly preceding (i.e., less than a second before, or even at the same time as) a request to another file on the same server, one of three things happens:
Our check if (class_exists('Bootstrap')), which we used to wrap the class definition when we first got this error, returns true, meaning the class has already been declared despite this being the auto-prepend file.
We get a "cannot redeclare class Bootstrap" error from our auto-prepended file, meaning that class_exists('Bootstrap') returned false but it somehow was still declared.
The file is not prepended at all, leading to a one-time fatal error for files that depend on it.
We could, of course, try to fix the out of memory issues since those seem to be causing the other errors, but for various reasons, they are unfixable in our setup or very difficult to fix. But that's beside the point - it seems to me that this is a bug in PHP with some sort of memory leak causing issues with the auto-prepend directive.
This is more curiosity than anything since this rarely happens (maybe once a week on our high-traffic servers). But I'd like to know - why is this happening, and what can we do to fix it?
We're running FreeBSD 9.2 with PHP 5.4.19.
EDIT: A few things we've noticed while trying to fix this over the past few months:
It seems to only happen on our secure servers. The out of memory issues are predominantly on our secure servers (they're usually from our own employees trying to download too much data), so it could just be a coincidence, but it deserves pointing out
The dump of get_declared_classes when we have this issue contains classes that are not used on the page that is triggering the error. For example, the output of $_SERVER says the person is on xyz.com, but one of the declared classes is only used in abc.com, which is where the out of memory issues usually originate from.
All of this leads me to believe that PHP is not doing proper end-of-cycle garbage collection after getting an out of memory error, which causes the Bootstrap class to either be entirely or partly in memory on the next page request if it's soon enough after the error. I'm not familiar enough with PHP garbage collection to actually act on this, but I think this is most likely the issue.
You might not be able to "fix" the problem without fixing the out of memory issue. Without knowing the framework you're using, I'll just go down the list of areas that come to mind.
You stated "they're usually from our own employees trying to download too much data". I would start there, as it could be the biggest/loudest opportunity for optimization; a few ideas come to mind.
if the data being downloaded is files, perhaps you could use streams to chunk the reads to a constant size, so memory is not gobbled up on big downloads (see the sketch after this list).
could you queue or throttle downloads?
if the data is coming from a database, besides optimizing your queries, you could rate-limit them, reduce result set sizes, and ideally move such workloads to a dedicated environment with mirrored data.
ensure your code releases file pointers and database connections responsibly; leaving that to PHP's teardown can result in delayed garbage collection and a sort of cascading effect in high-traffic situations.
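As a sketch of the chunked-download idea from the first bullet (the file path, chunk size, and headers are illustrative, not taken from your setup):

<?php
// stream a large file in fixed-size chunks instead of loading it all into memory
$path = '/data/export.zip'; // hypothetical file
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));

$fh = fopen($path, 'rb');
while (!feof($fh)) {
    echo fread($fh, 8192); // constant 8 KB chunks keep memory usage flat
    flush();               // push each chunk out to the client
}
fclose($fh);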
Other low-hanging fruit when it comes to memory limits:
you are running PHP 5.4.19; if your software permits it, consider updating to a more recent version (PHP 5.4 has not been patched since 2015). Besides, PHP 7 comes with a whole slew of performance improvements.
if you have a client-side application involved, monitor its XHR and overall network activity; look for excessive polling and hanging connections.
as for your autoloader, based on your comment ("The dump of get_declared_classes when we have this issue contains classes that are not used on the page that is triggering the error"), you may want to check the implementation to make sure it's not loading some sort of bundled class cache; if you are using Composer, composer dump-autoload might be helpful.
sessions: I've seen some applications load files based on cookies and sessions; if you have such a setup, I would audit that logic and ensure there are no sticky sessions loading unneeded resources.
It's clear from your question that you are running a multi-tenancy server. Without proper stats it's hard to be more specific, but I would think it's clear the issue is not a PHP bug, as it seems to be somewhat isolated, based on your description.
Proper Debugging and Profiling
I would suggest installing a PHP profiler, even for a short time; New Relic is pretty good. You will be able to see exactly what is going on and have the data to fix the right problem. I think they have a free trial, which should get you pointed in the right direction. There are others too, but their names escape me at the moment.
class_exists() checks only classes, so it can return false even when an interface of the same name exists. However, you cannot declare a class with the same name as an existing interface, so the declaration would still fail.
Try guarding the declaration with !class_exists('Bootstrap') && !interface_exists('Bootstrap') to make sure you do not redeclare.
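A sketch of what that guard could look like in the auto-prepended file (the class body is just a placeholder for your real Bootstrap):

<?php
// only declare Bootstrap if neither a class nor an interface of that name exists yet
if (!class_exists('Bootstrap') && !interface_exists('Bootstrap')) {
    class Bootstrap
    {
        // ... autoloading, environment detection, etc.
    }
}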
Did you have a look at the __autoload function?
I believe you could work around this issue by creating a function like this in your code:
function __autoload($className)
{
    // load the class file if it exists next to the script
    if (\file_exists($className . '.php')) {
        include_once($className . '.php');
    } else {
        // otherwise declare a "ghost" class that silently swallows any method call
        eval('class ' . $className . ' { function __call($method, $args) { return false; } }');
    }
}
If you have a file called Bootstrap.php with the class Bootstrap declared inside it, PHP will automatically load that file; otherwise it declares a ghost class that can handle any method call, avoiding any error messages. Note that for the ghost class I used the __call magic method.
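As a side note, on current PHP versions spl_autoload_register() is the preferred way to hook an autoloader (__autoload was deprecated in PHP 7.2); a minimal equivalent of the loading part would be a sketch like this, leaving the ghost-class fallback aside:

spl_autoload_register(function ($className) {
    // same idea: load ClassName.php if it exists next to the script
    if (file_exists($className . '.php')) {
        include_once $className . '.php';
    }
});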
zend_mm_heap corrupted is coming up as an error message on a PHP program I wrote to pre-render a large environment.
I suspect it's being caused by having too many variable assignments in the script, although I'm uncertain of this since I wrote the script to only have about 20 variables at any given time, of which one is an array that may hold up to 500 elements. That said, the number of iterations in total is on the order of a few billion.
Am I correct in my suspicion, and if so is there anything that can be done about it? Would it be better, for instance, to run the script for a while, then dump out important variables to a file and restart the script, making it pick up those variables and continuing?
I've seen this problem and can reproduce it using Phalcon, but it seems to originate from the APC cache. I fixed it by switching from APC to Zend OPcache. You can try disabling APC to see if it goes away (see the sketch below).
The best I can reason from my investigations is that APC is doing something to memory that the Zend engine is using. PS: it doesn't have anything to do with Zend Framework; it's an error related to the parts of Zend that were merged into PHP.
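If you want to try the quick check of disabling APC first, a php.ini sketch (these directive names come from the APC extension):

; disable the APC opcode/user cache entirely
apc.enabled = 0
; if the problem also shows up in CLI scripts, check this one too
apc.enable_cli = 0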
The solution to your problem is to download the latest version of APC compatible with your PHP version.
You'll have to force-install it so that it overwrites the old version of APC. This will in many cases fix the issue you're having.
I've written a daemon in PHP and want to make sure it doesn't leak memory, as it'll be running 24/7.
Even in its simplest form, memory_get_peak_usage for the daemon will report that the script consumes more memory on each cycle. memory_get_usage, on the other hand, will not grow.
The question is: should I worry? I've stripped the daemon down to the bare basics, but this is still happening. Any thoughts?
#!/usr/bin/php -q
<?php
require_once "System/Daemon.php";

System_Daemon::setOption("appName", "smsd");
System_Daemon::start();

while (!System_Daemon::isDying()) {
    System_Daemon::info("debug: memory_get_peak_usage: " . memory_get_peak_usage());
    System_Daemon::info("debug: memory_get_usage: " . memory_get_usage());
    System_Daemon::iterate(2);
}
FINAL NOTE + CONCLUSION: I ended up writing my own daemon wrapper instead of using PEAR's System_Daemon. Regardless of how I tweaked this library, I could not stop it from leaking memory. Hope this helps someone else.
FINAL NOTE + CONCLUSION 2: My script has been in production for over a week and still isn't leaking a single byte of memory. So writing a daemon in PHP actually seems to be OK, as long as you're very careful about its memory consumption.
I got the same problem. Maybe the best idea is to report a new bug at PEAR.
BTW, code like this doesn't show that memory leak:
#!/usr/bin/php -q
<?php
require_once "System/Daemon.php";

System_Daemon::setOption("appName", "smsd");
System_Daemon::start();

while (!System_Daemon::isDying()) {
    print("debug: memory_get_peak_usage: " . memory_get_peak_usage() . "\n");
    print("debug: memory_get_usage: " . memory_get_usage() . "\n\n");
    System_Daemon::iterate(2);
}
Looks like System_Daemon::info() is the problem.
It turns out file_get_contents was leaking memory. Whenever I disabled that one line, peak memory usage was stable. When I commented it back in, peak memory usage would increase by 32 bytes every iteration.
Replacing the file_get_contents call (used to retrieve the number inside the pid file in /var/run) with fread solved this problem.
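A sketch of the kind of replacement that means (the pid-file path is only an example; this is not the actual System_Daemon patch):

// before: this call kept nudging peak memory usage up each iteration
// $pid = (int) file_get_contents('/var/run/smsd.pid');

// after: read the pid through an explicit file handle
$fh = fopen('/var/run/smsd.pid', 'r');
$pid = (int) fread($fh, 64); // pid files are tiny, 64 bytes is plenty
fclose($fh);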
This patch will be part of the next System_Daemon release.
Thanks to whoever (can't find a matching nick) also reported this bug (#18036); otherwise I'd probably never have known.
Thanks again!
You can try using the new garbage collector in PHP 5.3 to prevent issues with circular references.
gc_enable()
gc_collect_cycles()
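A sketch of how that might look in the daemon loop above (whether it helps depends on whether the leak really comes from circular references):

gc_enable(); // turn on the circular-reference collector (PHP >= 5.3)

while (!System_Daemon::isDying()) {
    // ... do the actual work ...
    gc_collect_cycles(); // force a collection cycle each iteration
    System_Daemon::iterate(2);
}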
You should not use PHP to write a daemon. Why? Because PHP is not a language that is sufficiently mature to run for hours, days, weeks or months. PHP is written in C, and all of the magic that it provides has to be handled somewhere. Garbage collection might or might not work, depending on your version and on what extensions you have compiled and used. Yes, if they ship with official releases they should 'play nice', but do you check which release you are using? Are you sure all loaded extensions realize that they might run for more than 10-30 seconds? Given that typical execution times are too short to ever expose leaks, are you sure it even works?
I am quite close to going off on a 'don't use regex to parse HTML' rant regarding this, as I see the question creeping up more and more. Twice today that I'm aware of.
Would you use a crowbar as a toothpick? Neither Zend, nor Roadsend, nor PHC is sufficiently mature to handle running for any period of time that could be considered protracted, given the expected life of a PHP process when rendering a web page. Yes, even with the GC facilities provided by a C++-based PHP compiler, it is unwise to write a daemon in PHP.
I hate answers that say 'you can't do that with that', but in this case it's true, at least for now.