How do I use xdebug to find a PHP memory leak?

I'm struggling here, probably because I'm not a PHP expert (yet) and definitely a little lacking in Linux administration, but learning. I've looked for documentation on how to find the memory leak that my code definitely has, and a lot of articles suggested xdebug. I got it installed, turned on tracing briefly, and used tracefile-analyser.php to dump out the following report. The problem I now have is finding proper documentation on the tracefile script that explains what each column means. Can someone either point me to straightforward "if you see X, this means it is a memory leak" type documentation, or explain how I would find the function calls leaking memory in the following output?
http://hully.net/ML.jpg

Based on the documentation, if xdebug is set to capture trace information in machine-readable format, it records the amount of memory in use when the program begins a function, and then how much memory is being used when the program exits the function.
I'm not sure what program you are using to parse the trace file, but I would guess that it displays your program's total memory usage in the first column, and how much memory the individual function uses in the second column.
If you are already pulling the traces in machine-readable format (xdebug.trace_format = 1 in your php.ini) you may want to try Xdebug Trace Tree to view the results. It shows a column specifically for the change in memory usage.
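For reference, a minimal sketch of the php.ini settings involved, assuming Xdebug 2.x (these directives were renamed in Xdebug 3); paths are examples:
xdebug.auto_trace = 1 ; start a function trace for every script
xdebug.trace_format = 1 ; 0 = human-readable, 1 = machine-readable, 2 = HTML
xdebug.trace_output_dir = /tmp ; where the trace files are written
xdebug.show_mem_delta = 1 ; human-readable traces only: show per-call memory change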

Related

PHP 7.2.9 out of memory at random times

We use Apache (32-bit) on Windows in combination with PHP 7.2.9. It works most of the time, but after a lot of refreshing (how much is random; the count differs each time Apache gets restarted) we get this error: Fatal error: Out of memory (allocated 27262976) (tried to allocate 4096 bytes) in [random file, always a different one] on line x.
Weird thing is that it keeps giving the exact same error until we restart Apache, then it works for a couple of hours.
Also weird is that we set 512M as memory limit in the php.ini, but it says allocated 27262976 which is (exactly) 26MB. We have 2GB+ RAM free, so that isn't the problem.
It would be great if anyone knows how to solve this.
Thanks,
Lars
Most probably the memory just gets fragmented. (I had similar issues before.) You have to let garbage collection work more while your code is running.
One way to do that
You have to identify the part of the whole process where you create the biggest arrays or objects, and split it into multiple smaller steps. I don't know what your code does, but the important part is that PHP does garbage collection at certain steps, for example when a function returns and frees up its own environment. So if you, let's say, process 10000 files in a loop, it would be helpful to implement a queue system where you put in 100 files, call a function to deal with them, then go on processing the queue. Sounds silly, I know, but makes sense if you think about it.
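As an illustration of that queueing idea, here is a minimal sketch; process_file() and the glob pattern are hypothetical placeholders for your actual work:
<?php
// Hypothetical: process files in batches of 100 so memory allocated
// inside process_batch() is freed each time the function returns.
function process_batch(array $files) {
    foreach ($files as $file) {
        process_file($file); // your actual per-file work goes here
    }
    // Locals go out of scope here, letting PHP reclaim their memory.
}

$allFiles = glob('/path/to/import/*.json');
foreach (array_chunk($allFiles, 100) as $batch) {
    process_batch($batch);
    gc_collect_cycles(); // optionally nudge the collector between batches
}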
Another way
You can allocate same-size structures for variable-length data, like 50-100k bricks that you only partially use; this way the memory won't get as fragmented. But garbage collection is a lot better at this, and it would typically be its job.
Last resort
When your memory is about halfway exhausted - which you can check by calling memory_get_usage(true) - serialize the big structure you're using, unset the variable, then unserialize it back. This should sort out the allocation problems.
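A rough sketch of that last-resort trick, assuming $bigData is the large structure in question and a 512M limit:
<?php
// Sketch: when usage passes half the limit, repack the big structure
// into a fresh, contiguous allocation.
$limit = 512 * 1024 * 1024;
if (memory_get_usage(true) > $limit / 2) {
    $packed = serialize($bigData); // flatten into one string
    unset($bigData);               // release the fragmented copy
    $bigData = unserialize($packed);
    unset($packed);
}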
Hope some of the above helps.

debugging long running PHP script

I have a PHP script running as a cron job, extensively using third-party code. The script itself has a few thousand LOC. Basically it's a data import / processing script (JSON to MySQL, but it also makes a lot of HTTP calls and some SOAP).
Now, performance degrades over time. When testing with a few records (around 100), performance is OK; it finishes in 10-20 minutes. When running the whole import (about 1600 records), the mean import time per record grows steadily, and the whole thing takes more than 24 hours, so at least 5 times longer than expected.
Memory seems not to be a problem, usage growing as it should, without unexpected peaks.
So, I need to debug it to find the bottleneck. It could be a problem with the script, the underlying code base, PHP itself, the database, the OS, or the network. For now I suspect some kind of caching somewhere that is misbehaving, with a near-100% miss ratio.
I cannot use XDebug, profile file grows too fast to be treatable.
So question is: how can I debug this kind of script?
PHP version: 5.4.41
OS: Debian 7.8
I can have root privileges if necessary, and install the tools. But it's the production server and ideally debugging should not be too disrupting.
Yes, it's possible, and you can use Kint (a PHP debugging script).
What is it?
Kint for PHP is a tool designed to present your debugging data in the absolutely best way possible.
In other words, it's var_dump() and debug_backtrace() on steroids. Easy to use, but powerful and customizable. An essential addition to your development toolbox.
Still lost? You use it to see what's inside variables.
It acts as a debug_backtrace() replacement, too.
You can download it from the project page.
Full documentation and help are available there as well.
Plus, it supports almost all PHP frameworks:
CodeIgniter
Drupal
Symfony
Symfony 2
WordPress
Yii
Zend Framework
All the Best.... :)
There are three things that come to mind:
Set up an IDE so you can debug the PHP script line by line
Add some logging to the script
Look for long running queries in MySQL
Debug option #2 is the easiest. Since this is running as a cron job, you can add a bunch of echo statements to your script:
<?php
function log_message($type, $message) {
    // Note: double-quoted strings can't interpolate function calls,
    // so concatenate the strtoupper() and date() results instead.
    echo '[' . strtoupper($type) . ', ' . date('d-m-Y H:i:s') . '] ' . $message . "\n";
}

log_message('info', 'Import script started');
// ... the rest of your script
log_message('info', 'Import script finished');
Then pipe stdout to a log file in the cron job command.
01 04 * * * php /path/to/script.php >> /path/to/script.log
Now you can add log_message('info|warn|debug|error', 'Message here') all over the script and at least get an idea of where the performance issue lies.
Debug option #3 is just straight investigation work in MySQL. One of your queries might be taking a long time, and it might show up in a long running query utility for MySQL.
Profiling tool:
There is a PHP profiling tool called Blackfire which is currently in public beta. There is specific documentation on how to profile CLI applications. Once you have collected a profile, you can analyze the application's control flow with time measurements in a nice UI.
Memory consumption is suspicious:
Memory seems not to be a problem, usage growing as it should, without unexpected peaks.
A growing memory usage actually sounds suspicious! If the current dataset does not depend on all previous datasets of the import, then growing memory most probably means that all imported datasets are kept in memory, which is bad. PHP may also frequently try to garbage collect, only to find that there is nothing to remove from memory. Long-running CLI tasks are especially affected, so be sure to read the blog post that discovered this behavior.
Use strace to see what the program is basically doing from the system perspective. Is it hanging in IO operations etc.? strace should be the first thing you try when encountering performance problems with whatever kind of Linux application. Nobody can hide from it! ;)
If you find out that the program hangs in network-related calls like connect, recvfrom and friends, meaning the network communication hangs at some point while connecting or waiting for responses, then you can use tcpdump to analyze this.
Using the above methods you should be able to find out most common performance problems. Note that you can even attach to a running task with strace using -p PID.
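For example, two useful invocations (a sketch; substitute your actual process ID for 12345):
# attach, timestamp every call, and show the time spent in each syscall
strace -f -tt -T -p 12345 -o /tmp/strace.out
# attach and print a summary table of syscall counts and times on detach
strace -c -p 12345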
If the above methods don't help, I would profile the script using xdebug. You can analyse the profiler output using tools like KCachegrind.
Although it is not stipulated, if my guess is correct you seem to be dealing with records one at a time, but all inside one big cron job.
i.e. Grab a record#1, munge it somehow, add value to it, reformat it then save it, then move to record#2
I would consider breaking the big cron job down, i.e.:
Cron #1: Grab all the records and cache all the salient data locally (to that server). Set a flag when this stage is achieved.
Cron #2: Now that you have the data you need, munge it and add value, then cache that output. Set a flag when this stage is achieved.
Cron #3: Reformat that data and store it. Delete all the files.
This kind of "divide and conquer" will ease your debugging woes, and lead to a better understanding of what is actually going on, and as a bonus give you the opportunity to rerun say, cron 2.
I've had to do this many times, and for me logging is the key to identifying weaknesses in your code and poor assumptions about data quality, and it can hint at where latency is causing a problem.
I've run into strange slowdowns when doing network heavy efforts in the past. Basically, what I found was that during manual testing the system was very fast but when left to run unattended it would not get as much done as I had hoped.
In my case the issue I found was that I had default network timeouts in place and many web requests would simply time out.
In general, though not an external tool, you can use the difference between two microtime(TRUE) calls to time sections of code. To keep the logging small, set a flag with a firing limit and only log the time while the flag has not yet been decremented to zero. You can have individual flags for individual code segments, or even different time limits within a code segment.
$flag['name'] = 10;  // How many times to fire before going quiet
$slow['name'] = 0.5; // How long in seconds before it's a problem?

$start = microtime(TRUE);
do_something($parameters);
$used = microtime(TRUE) - $start;

if ( $flag['name'] && $used >= $slow['name'] )
{
    logit($parameters);  // record what was slow
    $flag['name']--;     // count down toward silence
}
If you output which URL or other data/event took too long to process, you can dig into that particular item later to see how it is causing trouble in your code.
Of course, this assumes that individual items are causing your problem and not simply a general slowdown over time.
EDIT:
I (now) see it's a production server, which makes editing the code less enjoyable. You'd probably want to keep the integration minimal, with the testing logic, and possibly the supported tags/flags and their quantities, in an external file.
setStart('flagname');
// Do stuff to be checked for speed here
setStop('flagname',$moredata);
For maximum robustness the methods/functions would have to ensure they handled unknown tags, missing parameters, and so forth.
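A minimal sketch of what those two hypothetical helpers could look like (setStart/setStop are not standard functions, and error_log here is just a stand-in logger):
<?php
// Hypothetical timing helpers; $GLOBALS keeps the sketch short.
function setStart($tag) {
    $GLOBALS['timer'][$tag] = microtime(TRUE);
}

function setStop($tag, $moredata = '', $slow = 0.5) {
    if (!isset($GLOBALS['timer'][$tag])) {
        return; // unknown tag: fail quietly, as suggested above
    }
    $used = microtime(TRUE) - $GLOBALS['timer'][$tag];
    if ($used >= $slow) {
        error_log(sprintf('[SLOW] %s took %.3fs %s', $tag, $used, $moredata));
    }
}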
xdebug_print_function_stack is an option, but what you can also do is create a "function trace". There are three output formats: one is meant as a human-readable trace, another is more suited for computer programs as it is easier to parse, and the last one uses HTML for formatting the trace.
http://www.xdebug.org/docs/execution_trace
Okay, basically you have two possibilities: either inefficient PHP code or inefficient MySQL usage. Judging by what you say, it's probably inserting a lot of records into an indexed table one at a time, which causes the insertion time to skyrocket. You should either disable the indexes and rebuild them after the insertion, or optimize the insertion code.
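As a sketch of what optimizing the insertion code could mean here (table and column names are invented; wrapping multi-row inserts in one transaction is a common approach):
<?php
// Hypothetical: insert records in one transaction using multi-row
// statements instead of one indexed INSERT per record.
$pdo = new PDO('mysql:host=localhost;dbname=import', 'user', 'pass');
$pdo->beginTransaction();
foreach (array_chunk($records, 500) as $chunk) {
    $placeholders = implode(',', array_fill(0, count($chunk), '(?, ?)'));
    $stmt = $pdo->prepare("INSERT INTO items (name, value) VALUES $placeholders");
    $params = array();
    foreach ($chunk as $r) {
        $params[] = $r['name'];
        $params[] = $r['value'];
    }
    $stmt->execute($params);
}
$pdo->commit();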
But, about the tools.
You can configure the system to automatically log slow MySQL queries:
https://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
You can also do the same with PHP scripts, but you need a PHP-FPM environment (and you probably have Apache).
https://rtcamp.com/tutorials/php/fpm-slow-log/
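For reference, a sketch of the relevant settings from those two docs (paths and thresholds are examples):
# my.cnf (MySQL 5.1+)
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1

; PHP-FPM pool configuration
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 5s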
These tools are very powerful and versatile.
P.S. 10-20 minutes for 100 records seems like A LOT.
You can use https://github.com/jmartin82/phplapse to record the application activity for a set period of time.
For example, start recording after n iterations with:
phplapse_start();
And stop it in next iteration with:
phplapse_stop();
With this process you create a snapshot of the execution at the point where everything seems slow.
(I'm the author of the project; don't hesitate to contact me to improve the functionality.)
I have a similar thing running each night (a cron job to update my database). I have found the most reliable way to debug is to set up a log table in the database and regularly insert / update a JSON string containing a multi-dimensional array with info about each record and whatever else you want to know about it. This way, if your cron job does not finish, you still have detailed information about where it got to and what happened along the way. Then you can write a simple page to pull out the JSON string, turn it back into an array, and print useful data onto the page, including timing and passed tests etc. When you see an issue, you can concentrate on putting more info from that area into the JSON string.
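A minimal sketch of that approach, assuming a hypothetical import_log table with a unique key on run_id (all names here are invented):
<?php
// Hypothetical progress logger: keep one row per run, overwritten
// with the latest JSON snapshot of the import state.
function log_progress(PDO $pdo, $runId, array $state) {
    $stmt = $pdo->prepare(
        'INSERT INTO import_log (run_id, progress) VALUES (?, ?)
         ON DUPLICATE KEY UPDATE progress = VALUES(progress)'
    );
    $stmt->execute(array($runId, json_encode($state)));
}

// Inside the import loop, something like:
// $state['records'][$id] = array('status' => 'ok', 'took' => $seconds);
// log_progress($pdo, $runId, $state);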
Regular "top" command can show you, if CPU usage by php or mysql is bottleneck. If not, then delays may be caused by http calls.
If CPU usage by mysqld is low, but constant, then it may be disk usage bottleneck.
Also, you can check your bandwidth usage by installing and using "speedometer", or other tools.

How to catch recursive runaway script in PHP?

My Apache gets killed sometimes because it runs out of memory. This did not use to happen. I have several hundred PHP files in this project.
I suspect that there was an unintentional recursion created -- not just a function calling itself, but function A calling B which then calls A again.
I've tried just reading the code and finding such recursion, but with no luck.
Is there a way I can tell PHP to keep track of ALL recursions, or throw a warning when its internal stack is over a certain size?
My Apache gets killed sometimes because it runs out of memory.
Well, that's at least a point to start with. From your question I can read that you suspect it is caused by a PHP script.
To find out which one, you need to look further. One way I can think of is to enable PHP error logging, then set memory_limit to a low value so you can provoke errors wherever heavy memory consumption happens. You will find the line where this happens in the error log (see as well Protocol of some PHP Memory Stretching Fun).
This should give you some potential places.
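A sketch of the php.ini settings involved (the path and the limit value are examples):
log_errors = On
error_log = /var/log/php_errors.log
memory_limit = 32M ; deliberately low, to provoke errors at the hungry spots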
I'm not specifically sure that recursion must cause this, and you wrote yourself that you only assume it does. To detect recursion, you can use xdebug and limit the recursion depth:
xdebug.max_nesting_level=<your preferred value>
This should give you as well more useful information in the error logs.
If neither memory nor recursion is the cause, it might either not be related to PHP at all, or PHP segfaults and your webserver can't handle that situation. I have no idea how mod_php deals with segfaults, for example, but you haven't specified the SAPI you're using anyway.
A typical source of PHP segfaults, however, is regular expressions that kick PHP out before triggering the pcre recursion limit. If you're making use of regular expressions in your code, you can reduce the pcre recursion limit to a lower value and test if that helps. However, from your question it's not clear whether that is a cause at all, so you first need to find out more.
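The relevant php.ini directives, for reference (the values shown are only illustrative, not recommendations):
pcre.recursion_limit = 10000 ; lower than the default, so matches fail instead of segfaulting
pcre.backtrack_limit = 100000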

php daemon possible memory leak

I've written a daemon in PHP and want to make sure it doesn't leak memory, as it'll be running 24/7.
Even in its simplest form, memory_get_peak_usage for the daemon will report that the script consumes more memory each cycle. memory_get_usage, on the other hand, will not grow.
The question is: should I worry? I've stripped the daemon down to the bare basics, but this is still happening. Any thoughts?
#!/usr/bin/php -q
<?php
require_once "System/Daemon.php";
System_Daemon::setOption("appName", "smsd");
System_Daemon::start();
while (!System_Daemon::isDying()) {
    System_Daemon::info("debug: memory_get_peak_usage: ".memory_get_peak_usage());
    System_Daemon::info("debug: memory_get_usage: ".memory_get_usage());
    System_Daemon::iterate(2);
}
FINAL NOTE + CONCLUSION: I ended up writing my own daemon wrapper instead of using PEAR's System_Daemon. No matter how I tweaked that library, I could not stop it from leaking memory. Hope this helps someone else.
FINAL NOTE + CONCLUSION 2: my script has been in production for over a week and still hasn't leaked a single byte of memory. So writing a daemon in PHP actually seems to be fine, as long as you're very careful about its memory consumption.
I got the same problem. Maybe the best idea is to report a new bug at PEAR.
BTW, code like this doesn't show the memory leak:
#!/usr/bin/php -q
<?php
require_once "System/Daemon.php";
System_Daemon::setOption("appName", "smsd");
System_Daemon::start();
while (!System_Daemon::isDying()) {
    print("debug: memory_get_peak_usage: ".memory_get_peak_usage()."\n");
    print("debug: memory_get_usage: ".memory_get_usage()."\n\n");
    System_Daemon::iterate(2);
}
Looks like System_Daemon::info() is the problem.
It turns out file_get_contents was leaking memory. Whenever I disabled that one line, peak memory usage was stable. When I commented it back in, peak memory usage would increase by 32 bytes every iteration.
Replaced the file_get_contents call (used to retrieve the number inside the pid-file in /var/run) with fread, and solved this problem.
This patch will be part of the next System_Daemon release.
Thanks whoever (can't find matching nick) also reported this bug (#18036) otherwise I'd probably never known.
Thanks again!
You can try using the new garbage collector in PHP 5.3 to prevent issues with circular references.
gc_enable()
gc_collect_cycles()
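A minimal sketch of how those calls might slot into the daemon loop from the question (the work itself is a placeholder):
<?php
gc_enable(); // enable the circular-reference collector (PHP 5.3+)

while (!System_Daemon::isDying()) {
    // ... do one unit of work here ...
    $freed = gc_collect_cycles(); // force a collection cycle now
    System_Daemon::info("debug: gc freed $freed cycles");
    System_Daemon::iterate(2);
}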
You should not use PHP to write a daemon. Why? Because PHP is not a language that is sufficiently mature to run for hours, days, weeks or months. PHP is written in C, and all of the magic it provides has to be implemented somewhere. Garbage collection, depending on your version, may or may not work, depending on which extensions you have compiled and used. Yes, if they ship with official releases they should 'play nice', but do you check which release you are using? Are you sure all loaded extensions realize that they might run for more than 10-30 seconds? Given that most execution times never expose leaks, are you sure it even works?
I am quite close to going off on a 'don't use regex to parse HTML rant' regarding this, as I see the question creeping up more and more. Twice today that I'm aware of.
Would you use a crowbar as a toothpick? Neither Zend, nor Roadsend, nor PHC is sufficiently mature to handle running for any period of time that could be considered protracted, given the expected life of a PHP process when rendering a web page. Yes, even with the GC facilities provided by a C++-based PHP compiler, it is unwise to write a daemon in PHP.
I hate answers that say "you can't do that", but in this case it's true, at least for now.

Zend php memory memory_limit

All,
I am working on a Zend Framework based web application. We keep encountering out of memory errors on our dev server:
Allowed memory size of XXXX bytes exhausted (tried YYYY...
We keep increasing memory_limit in php.ini, but it is now up over 1000 megs. What is a normal memory_limit value? What are the usual suspects in php/Zend for running out of memory? We are using the Propel ORM.
Thanks for all of the help!
Update
I cannot reproduce this error in my windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable. For example:
(tried to allocate 13344 bytes)
If I set the memory limit very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out-of-memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that sheds any more light on what is going on. We are using Propel 1.5. It sounds like the actual release will come out later this month, but it doesn't look like anyone else is having this problem with it anyway. I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally.
Any more ideas? I have a ticket out to get Xdebug installed on the Linux box.
Thanks,
-rep
Generally speaking, with PHP 5.2 and/or PHP 5.3, I tend to consider that more than 32M for memory_limit is "too much":
Using Frameworks / ORM and stuff like this, 16M is often not enough
Using 32M is generally enough for the kind of web-applications I'm working on (typical websites)
Using more than 64M means the server will not be able to handle as many users as we'd like.
When it comes to a script reaching memory_limit, the usual problem is trying to load too much data into memory; a couple of examples:
Loading a big file in memory, with functions such as file or file_get_contents, or XML-related functions/classes
Creating a too big array of data
Creating too many objects
Considering you are using an ORM, you might be in a situation where :
You are doing some SQL query that returns a lot of rows
Your ORM is converting each row in objects, putting those in an array
In which case a solution would be to load less data
using pagination, for instance
or trying to load data as arrays instead of objects (I don't know if this is possible with Propel, but it is with Doctrine, so maybe Propel has some way of doing that too?); see the sketch after this list
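As a sketch of that pagination idea, independent of any ORM (table and column names invented for illustration):
<?php
// Hypothetical: fetch and process rows one page at a time with plain
// PDO, so only $pageSize rows are hydrated in memory at once.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pageSize = 500;
for ($offset = 0; ; $offset += $pageSize) {
    $stmt = $pdo->prepare('SELECT id, payload FROM big_table ORDER BY id LIMIT ? OFFSET ?');
    $stmt->bindValue(1, $pageSize, PDO::PARAM_INT);
    $stmt->bindValue(2, $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC); // plain arrays, not objects
    if (!$rows) {
        break; // no more pages
    }
    foreach ($rows as $row) {
        // ... process one row, then let it go out of scope ...
    }
}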
What exactly is your application doing at the time it runs out of memory? There can be a lot of causes for this; I'd say the most common is allocating too much data to an array. Is your application doing anything along those lines?
You have one of two things happening, perhaps both:
You have a runaway process somewhere that isn't ending when it should be.
You have algorithms that throw lots of data around, such as huge strings or arrays or objects, and are making needless copies instead of processing just what they need and discarding what they don't.
I think this has something to do with deployment from Cruise Control. I only get the very high (on the order of gigs) memory error when someone is deploying new code (or just after new code has been deployed). This makes a little bit of sense too, since the error always points to a line that is a "require_once". Each time, I get an error like:
Fatal error: Out of memory (allocated 4456448) (tried to allocate 3949907977 bytes) in /directory/file.php on line 2
I have replaced the "require_once" line with:
class_exists('Ingrain_Security_Auth') || require('Ingrain/Security/Auth.php');
I have replaced that line in 3 files so far, and have not had any more memory issues. Can anyone shed some light into what might be going on? I am using Cruise Control to deploy.
