I have a ZF2 PHP application which is executed by a bash script every minute. This is running inside an EC2 instance.
here's my code
while :
do
php public/index.php start-processor &
wait
sleep 60
done
Metrics Reading
Based on the metrics, memory usage keeps climbing until it reaches 100%, then drops. Is this normal, or is there really a leak in my application?
I've also tried using htop, and the process looks fine; it does not consume that much memory.
Hope someone could explain what is happening here. Should I worry about this?
Thanks and more power.
It does not look like a memory leak to me; with a leak, the used amount would just rise and never go back down, causing your app to eventually crash.
This graph looks very similar to garbage collection as it happens in the JVM. Does your PHP use such a thing under the hood? I searched the web, and it looks like PHP 5.3+ has GC built in: https://secure.php.net/manual/en/features.gc.php
When I upload my project to the online hosting server, the entry processes usage goes to 100%, which takes the website down.
Using
$this->output->enable_profiler(TRUE);
I got this result.
MEMORY USAGE
1,263,856 bytes
Even a single page takes 1.2 MB
How can I solve this problem? Please help.
I don't believe so. Maybe just make sure you aren't loading things where they aren't actually needed (for example, autoload loads things for each request whether you utilize them or not). CI actually has one of the smallest memory footprints of any framework. All frameworks take more overhead than straight PHP. Try benchmarking the same thing in another framework and it will most likely use more memory than CI does.
I have a PHP script running as a cron job, extensively using third-party code. The script itself has a few thousand LOC. Basically it's a data import/treatment script (JSON to MySQL, but it also makes a lot of HTTP calls and some SOAP).
Now, performance degrades over time. When testing with a few records (around 100), performance is OK; it finishes in 10-20 minutes. When running the whole import (about 1600 records), the mean time to import one record grows steadily, and the whole thing takes more than 24 hours, so at least 5 times longer than expected.
Memory seems not to be a problem, usage growing as it should, without unexpected peaks.
So I need to debug it to find the bottleneck. It could be a problem with the script, the underlying code base, PHP itself, the database, the OS, or the network. For now I suspect some kind of caching somewhere that is not behaving well, with a near 100% miss ratio.
I cannot use Xdebug; the profile file grows too fast to be usable.
So question is: how can I debug this kind of script?
PHP version: 5.4.41
OS: Debian 7.8
I can have root privileges if necessary and install tools. But it's a production server, and ideally debugging should not be too disruptive.
Yes, it's possible. You can use Kint (a PHP debugging script).
What is it?
Kint for PHP is a tool designed to present your debugging data in the absolutely best way possible.
In other words, it's var_dump() and debug_backtrace() on steroids. Easy to use, but powerful and customizable. An essential addition to your development toolbox.
Still lost? You use it to see what's inside variables.
It acts as a debug_backtrace replacement, too.
You can download it here or here.
Full documentation and help are here.
Plus, it also supports almost all PHP frameworks:
CodeIgniter
Drupal
Symfony
Symfony 2
WordPress
Yii framework
Zend Framework
All the Best.... :)
There are three things that come to mind:
Set up an IDE so you can debug the PHP script line by line
Add some logging to the script
Look for long running queries in MySQL
Debug option #2 is the easiest. Since this is running as a cron job, you can add a bunch of echoes to your script:
<?php
function log_message($type, $message) {
    // Note: function calls are not interpolated inside double-quoted strings,
    // so build the prefix with concatenation.
    echo '[' . strtoupper($type) . ', ' . date('d-m-Y H:i:s') . '] ' . $message . "\n";
}
log_message('info', 'Import script started');
// ... the rest of your script
log_message('info', 'Import script finished');
Then pipe stdout to a log file in the cron job command.
01 04 * * * php /path/to/script.php >> /path/to/script.log
Now you can add log_message('info|warn|debug|error', 'Message here') calls all over the script and at least get an idea of where the performance issue lies.
Debug option #3 is just straight investigation work in MySQL. One of your queries might be taking a long time, and it might show up in a long running query utility for MySQL.
Profiling tool:
There is a PHP profiling tool called Blackfire, which is currently in public beta. There is specific documentation on how to profile CLI applications. Once you have collected a profile, you can analyze the application's control flow, with time measurements, in a nice UI.
Memory consumption suspicious:
Memory seems not to be a problem, usage growing as it should, without unexpected peaks.
A growing memory usage actually sounds suspicious! If the current dataset does not depend on all previous datasets of the import, then growing memory most probably means that all imported datasets are kept in memory, which is bad. PHP may also frequently try to garbage collect, only to find that there is nothing to remove from memory. Long-running CLI tasks are especially affected, so be sure to read the blog post that discovered the behavior.
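If each record really can be processed independently, a minimal sketch of a batch loop that releases its data after every chunk might look like this (all function names here are hypothetical):

```php
<?php
// Hypothetical batch import loop: process records in fixed-size chunks and
// release each chunk before fetching the next, so memory use stays flat.
function importAll(array $ids, callable $fetch, callable $store, $batchSize = 100)
{
    $done = 0;
    foreach (array_chunk($ids, $batchSize) as $batch) {
        $rows = $fetch($batch);   // load only this batch into memory
        $store($rows);            // persist it immediately
        unset($rows);             // drop the reference so the memory can be freed
        gc_collect_cycles();      // collect any leftover reference cycles
        $done += count($batch);
    }
    return $done;
}
```

The key point is that nothing from a previous batch survives into the next iteration; if your per-record times still grow with a structure like this, the accumulation is happening elsewhere (e.g. in the third-party code or the database).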
Use strace to see what the program is doing from the system's perspective. Is it hanging in I/O operations, etc.? strace should be the first thing you try when encountering performance problems with any kind of Linux application. Nobody can hide from it! ;)
If you find out that the program hangs in network-related calls like connect, recvfrom, and friends, meaning the network communication hangs at some point while connecting or waiting for responses, then you can use tcpdump to analyze this.
Using the above methods you should be able to track down the most common performance problems. Note that you can even attach to a running task with strace using -p PID.
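As a sketch, the invocations might look like this (1234 is a placeholder PID):

```shell
# Attach to the running process and summarize where syscall time goes
# (-c aggregates counts/time per syscall, -f follows forked children).
strace -c -f -p 1234

# Or watch only network-related syscalls with timestamps, to spot slow
# connects or long waits for responses.
strace -tt -e trace=network -p 1234
```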
If the above methods don't help, I would profile the script using Xdebug. You can analyze the profiler output using tools like KCachegrind.
Although it is not stipulated, if my guess is correct you seem to be dealing with records one at a time, all within one big cron job.
i.e. grab record #1, munge it somehow, add value to it, reformat it, then save it, then move on to record #2.
I would consider breaking the big cron job down, i.e.:
Cron#1: grab all the records, and cache all the salient data locally (to that server). Set a flag if this stage is achieved.
Cron #2: Now you have the data you need, munge and add value, cache that output. Set a flag if this stage is achieved.
Cron #3: Reformat that data and store it. Delete all the files.
This kind of "divide and conquer" will ease your debugging woes, lead to a better understanding of what is actually going on, and as a bonus give you the opportunity to rerun, say, cron #2.
I've had to do this many times, and for me logging is the key to identifying weaknesses in your code and poor assumptions about data quality, and it can hint at where latency is causing a problem.
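The three-stage split above might be scheduled along these lines (paths, times, and flag-file names are purely illustrative):

```shell
# Hypothetical crontab: each stage runs only if the previous stage's flag exists.
0 1 * * * /usr/bin/php /path/to/cron1_fetch.php  && touch /tmp/stage1.done
0 2 * * * [ -f /tmp/stage1.done ] && /usr/bin/php /path/to/cron2_munge.php && touch /tmp/stage2.done
0 3 * * * [ -f /tmp/stage2.done ] && /usr/bin/php /path/to/cron3_store.php && rm -f /tmp/stage1.done /tmp/stage2.done
```

Because each stage leaves its output cached on disk, rerunning one stage in isolation becomes trivial.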
I've run into strange slowdowns when doing network-heavy work in the past. Basically, what I found was that during manual testing the system was very fast, but when left to run unattended it would not get as much done as I had hoped.
In my case the issue I found was that I had default network timeouts in place and many web requests would simply time out.
In general, though not an external tool, you can use the difference between two microtime(TRUE) calls to time sections of code. To keep the log small, set a counter and stop logging once it has been decremented to zero, reducing it each time a slow event fires. You can have individual counters for individual code segments, or even different time limits within a code segment.
$flag['name'] = 10;  // How many times to fire
$slow['name'] = 0.5; // How long in seconds before it's a problem?
$start = microtime(TRUE);
do_something($parameters);
$used = microtime(TRUE) - $start;
if ( $flag['name'] && $used >= $slow['name'] )
{
    logit($parameters);
    $flag['name']--;
}
If you log which URL or other data/event took too long to process, you can dig into that particular item later to see how it is causing trouble in your code.
Of course, this assumes that individual items are causing your problem and not simply a general slowdown over time.
EDIT:
I now see it's a production server, which makes editing the code less enjoyable. You'd probably want to keep the integration with your code minimal, with the testing logic, and possibly the supported tags/flags and limits, in an external file.
setStart('flagname');
// Do stuff to be checked for speed here
setStop('flagname',$moredata);
For maximum robustness the methods/functions would have to ensure they handled unknown tags, missing parameters, and so forth.
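A minimal sketch of what such helpers could look like (all names are hypothetical; on a production box the limits table could be loaded from an external config file):

```php
<?php
// Hypothetical timing helpers keyed by tag name.
$GLOBALS['timers'] = array();
$GLOBALS['limits'] = array('flagname' => array('fires' => 10, 'slow' => 0.5));

function setStart($tag)
{
    $GLOBALS['timers'][$tag] = microtime(true);
}

function setStop($tag, $moredata = null)
{
    if (!isset($GLOBALS['timers'][$tag])) {
        return 0.0; // unknown tag: nothing was started, ignore gracefully
    }
    $used = microtime(true) - $GLOBALS['timers'][$tag];
    if (isset($GLOBALS['limits'][$tag])) {
        $limit = &$GLOBALS['limits'][$tag];
        if ($limit['fires'] > 0 && $used >= $limit['slow']) {
            error_log(sprintf('[SLOW] %s took %.3fs %s',
                $tag, $used, json_encode($moredata)));
            $limit['fires']--; // stop logging this tag once the budget is spent
        }
    }
    return $used;
}
```

Unknown tags and missing limits are simply ignored, which keeps a stray call from breaking the production run.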
xdebug_print_function_stack is an option, but you can also create a "function trace". There are three output formats: one is meant as a human-readable trace, another is more suited for computer programs as it is easier to parse, and the last one uses HTML for formatting the trace.
http://www.xdebug.org/docs/execution_trace
Okay, basically you have two possibilities: either ineffective PHP code or ineffective MySQL code. Judging by what you say, it's probably inserting a lot of records into an indexed table one at a time, which causes the insertion time to skyrocket. You should either disable the indexes and rebuild them after insertion, or optimize the insertion code.
But, about the tools.
You can configure the system to automatically log slow MySQL queries:
https://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
You can also do the same with PHP scripts, but you need a PHP-FPM environment (and you probably have Apache).
https://rtcamp.com/tutorials/php/fpm-slow-log/
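For reference, enabling the slow query log takes only a few lines in my.cnf (the path and threshold below are examples):

```ini
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1   # log statements running longer than 1 second
log_queries_not_using_indexes = 1
```

Restart mysqld (or set the corresponding global variables at runtime) and then watch the log while the import runs.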
These tools are very powerful and versatile.
P.S. 10-20 minutes for 100 records seems like A LOT.
You can use https://github.com/jmartin82/phplapse to record the application activity for a set amount of time.
For example start recording after n iterations with:
phplapse_start();
And stop it in next iteration with:
phplapse_stop();
With this process you will have created a snapshot of the execution at the point where everything seems slow.
(I'm the author of the project; don't hesitate to contact me about improving the functionality.)
I have a similar thing running each night (a cron job to update my database). I have found that the most reliable way to debug is to set up a log table in the database and regularly insert/update a JSON string containing a multi-dimensional array with info about each record and whatever else you want to know about it. That way, if your cron job does not finish, you still have detailed information about where it got up to and what happened along the way. You can then write a simple page that pulls out the JSON string, turns it back into an array, and prints useful data, including timing and passed tests, onto the page. When you spot an issue, you can concentrate on putting more info from that area into the JSON string.
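A sketch of that pattern (the table and all names are hypothetical):

```php
<?php
// Hypothetical per-record progress state, serialized to JSON and upserted
// into a log table after each record, so a crashed run still leaves a trail.
function recordState(array $state, $id, $startedAt, $ok)
{
    $state[$id] = array(
        'started'  => $startedAt,
        'finished' => microtime(true),
        'ok'       => $ok,
    );
    return $state;
}

// In the cron loop, after each record (assuming a PDO connection $db and a
// cron_log table keyed by run_id):
//   $state = recordState($state, $recordId, $t0, $success);
//   $stmt  = $db->prepare('UPDATE cron_log SET state = :s WHERE run_id = :r');
//   $stmt->execute(array(':s' => json_encode($state), ':r' => $runId));
```

The report page then just json_decode()s the stored string and renders the timings.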
The regular "top" command can show you whether CPU usage by PHP or MySQL is the bottleneck. If not, the delays may be caused by the HTTP calls.
If CPU usage by mysqld is low but constant, the bottleneck may be disk usage.
Also, you can check your bandwidth usage by installing and using "speedometer" or other such tools.
I have a problem with a possible memory leak in my console PHP application. It was created with the Laravel framework, using Artisan, and runs on PHP 5.6.2. It's a huge loop that gathers data from a web service and inserts it into my own database, probably around 300k rows.
For each loop iteration I print out memory usage in the console. The weird thing is that memory_get_usage() and memory_get_usage(true) report roughly 13 MB of memory used, but the PHP process keeps using more and more. If I let it run for a few hours it uses almost 1 GB, and the loop keeps getting slower and slower.
It will not terminate due to the PHP memory limit, even though it exceeds it by far.
I am trying to figure out why this happens and how this actually works. As I understand it, memory_get_usage reports the memory used by MY script, what I have written. So unsetting variables, cleaning up, etc. should not be the problem, right? I have also tried forcing garbage collection every ~300 entries, with no luck.
Does anyone have some general tips on how I can troubleshoot this? And maybe explain why the memory used by the process is so much more than what the memory_get_usage function shows :-)
Any help is greatly appreciated.
I have a PHP program that runs forever (not a webpage, but a socket server). After processing over 1000 requests, the program eventually crashes due to an out-of-memory error.
Here is a link to my project.
Here is a link to my program.
I am not sure why this happens. I have tried using garbage collection functions in the function that processes requests (onMessage), but it does not result in any changes. Any suggestions would be appreciated.
Investing huge amounts of effort, you may be able to mitigate this for a while. But in the end you will have trouble running a non-terminating PHP application.
Check out PHP is meant to die. This article discusses PHP's memory handling (among other things) and specifically focuses on why all long-running PHP processes eventually fail. Some excerpts:
There’s several issues that just make PHP the wrong tool for this. Remember, PHP will die, no matter how hard you try. First and foremost, there’s the issue of memory leaks. PHP never cared to free memory once it’s not used anymore, because everything will be freed at the end — by dying. In a continually-running process, that will slowly keep increasing the allocated memory (which is, in fact, wasted memory), until reaching PHP’s memory_limit value and killing your process without a warning. You did nothing wrong, except expecting the process to live forever. Under load, replace the “slowly” part for "pretty quickly".
There’s been improvements in the “don’t waste memory” front. Sadly, they’re not enough. As things get complex or the load increases, it’ll crash.