Finding memory and CPU time bottlenecks in CakePHP - php

I read this, but it doesn't fit my case.
I need to find the memory and CPU time bottlenecks in my CakePHP 2 application.
Using microtime() and memory_get_usage() in controller actions I found some clues and fixed a few issues that way. But it is tedious to diagnose every controller action one by one.
I need to log CPU time and memory load for each of my actions. I'm planning to add properties to my AppController, calculate them inside beforeFilter() and afterFilter(), and log them for later inspection.
Is this proper way or can you recommend another solution?
class AppController extends Controller {

    public $requestStartTime = 0;
    public $requestDifTime = 0;
    public $memoryBefore = 0;
    public $memoryAfter = 0;

    public function beforeFilter() {
        parent::beforeFilter();
        // These must be object properties ($this->...), not local
        // variables, or afterFilter() will never see the values.
        $this->requestStartTime = microtime(true);
        $this->memoryBefore = memory_get_usage(true);
    }

    public function afterFilter() {
        parent::afterFilter();
        $this->requestDifTime = microtime(true) - $this->requestStartTime;
        $this->memoryAfter = memory_get_usage(true);

        $myFile = TMP . 'logs' . DS . 'mylog.txt';
        $fh = fopen($myFile, 'a');
        $string = "start time: " . $this->requestStartTime .
            " dif time: " . $this->requestDifTime .
            " memory usage: " . $this->memoryBefore . " and " . $this->memoryAfter .
            "\n";
        fwrite($fh, $string);
        fclose($fh);
    }
}

The best tool I have found when working with PHP, and with any PHP framework including CakePHP, is Xdebug. Xdebug is a PHP extension that can be enabled to produce profiling output files, which can be analysed with tools like Webgrind (or KCacheGrind, and others).
Webgrind will take the xdebug trace files and provide you with a visual tree of time spent and resource allocation. This enables you to selectively drill down into the method and function calls made during the execution of a system, and find where time is being lost, and where resources are being allocated.
In addition, Xdebug lets you debug your application: you can set breakpoints, pause execution, modify values and step through your code, giving you a more flexible approach to debugging during development.
This has been a valuable tool while building things with CakePHP, and also while building on the core for CakePHP itself.
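For reference, getting started is mostly a php.ini change. Here is a minimal sketch for Xdebug 2 (the extension path is an assumption; adjust it for your install):

; load the extension (the exact path varies per install)
zend_extension = /usr/lib/php5/xdebug.so
; write a callgrind-format profile file for every request
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp

Point Webgrind (or KCacheGrind) at the output directory and open the cachegrind.out.* file for the request you want to inspect.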

Related

Is pg_free_result() necessary, even if the result goes out of scope?

The PHP docs have this to say about pg_free_result():
This function need only be called if memory consumption during script
execution is a problem. Otherwise, all result memory will be
automatically freed when the script ends.
http://www.php.net/manual/en/function.pg-free-result.php
I would (perhaps naively) have expected the resource returned by a call to pg_query() to be garbage collected when it goes out of scope.
In a hypothetical function like this:
function selectSomething()
{
    $res = pg_query("SELECT blah FROM sometable");
    // do something with $res
    pg_free_result($res); // required or not?
}
Is it really necessary to call pg_free_result() at the end?
In other words, if I call this function 1000 times, will it eat up memory to store all 1000 results?
EDIT: I'm talking about the typical case, i.e. pg_connect() instead of pg_pconnect().
As Elias Van Ootegem rightly indicates, you are almost certainly using a persistent connection. With persistent connections, the result must stay in memory after the query, because you may wish to gather more data from it (for example, the last error).
So it comes down to good practice. If you are operating in an environment where you have 2M of available memory and your script can, at times, hit up to 0.1M of memory then the upper limit is 20 concurrent connections calling that script. After that further web requests are going to queue or drop. It doesn't take a genius to realise how vulnerable this might be to DDoS attack.
Best practice, then, is to empty memory just as soon as you are done with it. This goes for just about any programming or scripting ever. When the system is being stressed and demand is high the more requests that can be serviced inside the total scope of memory the better. If you can lower the maximum memory footprint of a script you can increase the number of concurrent connections that can reasonably try to call it and thus increase the load the script can handle.
The ideal way to do things is to release resources as soon as you can. Just because we don't and everything seems to work anyway during testing is no reason not to.
Here are my test results (memory before / after):

with pg_free_result():    631288 / 631384
without:                  631288 / 631640
Feel free to run the test yourself with this code:
<?php
class test {

    private static $tests = 5000;

    public function __construct() {
        pg_connect("host=### dbname=### user=### password=###")
            or die('Could not connect: ' . pg_last_error());
        $this->test(self::$tests, true);  // with pg_free_result()
        $this->test(self::$tests, false); // without
    }

    private function test($times, $with) {
        echo ($with ? "with:<br />\n" : "without:<br />\n") . memory_get_usage() . "<br />\n";
        for ($i = 0; $i < $times; $i++) {
            $res = pg_query("SELECT * FROM chowder");
            if ($with) {
                pg_free_result($res);
            }
        }
        echo memory_get_usage() . "<br /><br />\n\n";
    }
}

$test = new test();

Eclipse PHP Profiler - How Do You Get Parameters?

Using Eclipse's PHP Profiler, I have discovered a bottleneck in my code on a method that is called many times. The problem is that I cannot tell what parameters were passed in to the method to determine how to reproduce the symptom.
I have tried surrounding the code that the profiler is reporting as taking a full second to complete with the following:
$startTime = microtime(true);
$safe_text = wp_check_invalid_utf8( $text );
$endTime = microtime(true);
$time = $endTime - $startTime;
if ($time > .05) {
    error_log('Time: ' . $time . ' text [' . $text . ']');
}
I never get a single hit in the error log for this, yet the profiler continues to report the call as taking a full second to complete. Refreshing the page in the browser does indicate that there is significant slowness.
I have this same problem in 3 different areas of my code and knowing what was being passed in to the methods at the time they run slowly may be of assistance in fixing the problem. Is there any way to determine what is being passed in to the intermittently slow method when it is running slowly?
It is possible to set this up using Xdebug, which is what Eclipse uses for profiling; however, it's disabled by default, because recording this data in a project that makes a lot of calls or passes large data structures would quickly exhaust your available memory.
I recommend you continue with manual logging as you are doing, although I'd start by measuring the total time of all calls to wp_check_invalid_utf8() to make sure you do in fact have a problem in just that part, and that it isn't a problem caused by the Eclipse profiler itself. Once you've established that the total time is more than you would like, start logging individual calls and their parameters.
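For example, here is a minimal sketch of that first step. The wrapper name is made up, and it assumes you can route the calls you care about through it; it logs one aggregate line per request instead of one per call:

function timed_wp_check_invalid_utf8( $text ) {
    static $total = 0.0, $calls = 0;
    if ( $calls === 0 ) {
        // log the aggregate once, at the end of the request
        register_shutdown_function( function () use ( &$total, &$calls ) {
            error_log( sprintf( 'wp_check_invalid_utf8: %d calls, %.4f s total', $calls, $total ) );
        } );
    }
    $start  = microtime( true );
    $result = wp_check_invalid_utf8( $text );
    $total += microtime( true ) - $start;
    $calls++;
    return $result;
}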

best way to measure (and refine) performance with PHP?

A site I am working with is starting to get a little sluggish, and I would like to refine it. I think the problem is with the PHP, but I can't be sure. How can I see how long functions are taking to perform?
If you want to test the execution time:
<?php
$startTime = microtime(true);
// Your content to test
$endTime = microtime(true);
$elapsed = $endTime - $startTime;
echo "Execution time : $elapsed seconds";
?>
Try the profiler feature in XDebug or Zend Debugger?
Two things you can do.
The first is to place microtime() calls everywhere, although that's not convenient if you want to test more than one function. A simpler and better solution, if you want to time many functions (which I assume you do), is to use a timing class rather than scattering microtime() calls around; the tutorial below walks through one:
http://codeaid.net/php/calculate-script-execution-time-%28php-class%29
The second thing you can do to optimize your script is to look at its memory usage. By observing how much memory your script uses, you may be able to optimize your code better.
PHP has a garbage collector and a pretty complex memory manager, so the amount of memory used by your script can go up and down during execution. To get the current memory usage, use the memory_get_usage() function; to get the highest amount of memory used at any point, use memory_get_peak_usage().
echo "Initial: ".memory_get_usage()." bytes \n";
/* prints
Initial: 361400 bytes
*/
// let's use up some memory
for ($i = 0; $i < 100000; $i++) {
$array []= md5($i);
}
// let's remove half of the array
for ($i = 0; $i < 100000; $i++) {
unset($array[$i]);
}
echo "Final: ".memory_get_usage()." bytes \n";
/* prints
Final: 885912 bytes
*/
echo "Peak: ".memory_get_peak_usage()." bytes \n";
/* prints
Peak: 13687072 bytes
*/
http://net.tutsplus.com/tutorials/php/9-useful-php-functions-and-features-you-need-to-know/
PK
You can also do it manually, by recording microtime() values in various places, like this:
<?php
$TIMER['start'] = microtime(TRUE);

// some code

$query = "SELECT ...";
$TIMER['before q'] = microtime(TRUE);
$res = mysql_query($query);
$TIMER['after q'] = microtime(TRUE);
while ($row = mysql_fetch_array($res)) {
    // some code
}
$TIMER['array filled'] = microtime(TRUE);

// some code

$TIMER['pagination'] = microtime(TRUE);
// and so on
?>
and then visualize it
<?php
if ('127.0.0.1' === $_SERVER['REMOTE_ADDR']) {
    echo "<table border=1><tr><td>name</td><td>so far</td><td>delta</td><td>per cent</td></tr>";
    reset($TIMER);
    $start = $prev = current($TIMER);
    $total = end($TIMER) - $start;
    foreach ($TIMER as $name => $value) {
        $sofar   = round($value - $start, 3);
        $delta   = round($value - $prev, 3);
        $percent = round($delta / $total * 100);
        echo "<tr><td>$name</td><td>$sofar</td><td>$delta</td><td>$percent</td></tr>";
        $prev = $value;
    }
    echo "</table>";
}
?>
The IP address check means the timing table is shown only to you, so this profiling can stay in place on the live site.
Though I doubt it's PHP itself; most likely it's the database. So, pay most attention to query execution timing.
However, "site" is a very broad term; it also covers JS, CSS, images and so on. So I'd suggest starting from Firebug's Net panel to see which part of the whole page takes the most time.
Of course, refining can only be done after analysing the profiling results, and cannot be advised here without them.
Your best bet is Xdebug. I'm happy, as it comes bundled with my PhpED IDE; I can get profiler data at the click of a button.
So maybe you could consider that.
I had similar issues, so I created two new tables in the database and two new functions: one was audit_sql and the other audit_code. Because I used an SQL abstraction class, it was easy to time every single SQL call (I used PHP's microtime, as others have suggested): I called microtime before and after each SQL call and stored the results in the database.
Similarly with pages: I called microtime at the start and end of each page and, if necessary, at the start and end of functions, divs - whatever I thought might be a culprit.
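For illustration, a minimal sketch of that audit_sql idea, assuming a mysqli connection and a hypothetical audit_sql(query_text, seconds, created_at) table (the original poster used an SQL abstraction class instead):

function audited_query(mysqli $db, $sql) {
    $start = microtime(true);
    $result = $db->query($sql);            // run the real query
    $elapsed = microtime(true) - $start;

    // record the timing in the (hypothetical) audit table
    $stmt = $db->prepare('INSERT INTO audit_sql (query_text, seconds, created_at) VALUES (?, ?, NOW())');
    $stmt->bind_param('sd', $sql, $elapsed);
    $stmt->execute();

    return $result;
}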
The general results were:
SQL calls to MySQL were almost instantaneous and were not a problem at all. The only thing I would say is that even I was surprised at the number being executed! The site is generated from the database - even the menus, permissions etc. To produce the home page, the SQL calls were measured in the hundreds.
PHP was not the culprit. It was even more instantaneous than MySQL.
The culprit was.... (big build-up!) calls to YouTube, Picasa and other sites like that. I host videos and photo albums on the site (well, I don't actually store them - they are stored on YouTube etc.) and on the home page are thumbnails extracted from YouTube and the like via the YouTube PHP API/Zend Framework. Because these are all HTTP calls to other sites, each one was taking 1, 2 or 3 seconds. The divs containing them took between 6 and 12 seconds, and the home page up to 17 seconds.
The solution: store all thumbnails on my server. The first time, one has to be fetched from the remote site (YouTube, Picasa etc.), so do that and then store it on your own site. On future requests, check whether you have it, and if so always serve it from your server. That cuts the page load time down to 2-3 seconds tops. Granted, the first person to view the home page after someone has added more videos/images will get a slow load, but not thereafter. People will put a one-off long page load down to their connection or the internet in general; too many slow loads of your site and they will stop visiting!
I hope that helps somewhat.

Patterns for PHP multi processes?

Which design pattern exist to realize the execution of some PHP processes and the collection of the results in one PHP process?
Background:
I have many large trees (> 10000 entries) in PHP and have to run recursive checks on them. I want to reduce the elapsed execution time.
If your goal is minimal time - the solution is simple to describe, but not that simple to implement.
You need to find a pattern to divide the work (You don't provide much information in the question in this regard).
Then use one master process that forks children to do the work. As a rule the total number of processes you use should be between n and 2n, where n is the number of cores the machine has.
Assuming this data is stored in files, you might consider using non-blocking IO to maximize throughput. Not doing so will make most of your processes spend their time waiting for the disk. PHP has stream_select(), which might help you. Note that using it is not trivial.
If you decide not to use select - increasing the number of processes might help.
In regards to the pcntl functions: I've written a daemon with them (a proper one with forking, changing the session id, the running user, etc.) and it's one of the most reliable pieces of software I've written. Because it spawns workers for every task, even if there is a bug in one of the tasks, it does not affect the others.
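For illustration, a minimal sketch of that master/worker division of labour. It assumes the trees are already in an array $trees and that check_tree() stands in for your recursive check (both names are made up here):

// split the work into one chunk per worker (assume 4 cores)
$chunks = array_chunk($trees, (int) ceil(count($trees) / 4));
$pids = array();
foreach ($chunks as $chunk) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('could not fork');
    } elseif ($pid === 0) {
        // child: process one chunk, then exit so it doesn't
        // fall through into the parent's forking loop
        foreach ($chunk as $tree) {
            check_tree($tree); // hypothetical recursive check
        }
        exit(0);
    }
    $pids[] = $pid; // parent: remember the child, keep forking
}
// parent: reap all children (avoids zombies)
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}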
From your PHP script, you could launch another script (using exec) to do the processing. Save status updates in a text file, which can then be read periodically by the parent process.
Note: to avoid PHP waiting for the exec'd script to complete, redirect its output to a file and send the process to the background:
exec('php /path/to/file.php > output.log 2>&1 &');
Alternatively, you can fork a script using the PCNTL functions. This uses one php script, which when forked can detect whether it is the parent or the child and operate accordingly. There are functions to send/receive signals for the purpose of communicating between parent/child, or you have the child log to a file and the parent read from that file.
From the pcntl_fork manual page:
$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // we are the parent
    pcntl_wait($status); // protect against zombie children
} else {
    // we are the child
}
This might be a good time to consider using a message queue, even if you run it all on one machine.
The question seems to be a bit confused.
I want to reduce the absolute execution time.
Do you mean elapsed time? Certainly use of the right data structure will improve throughput, but for a given data structure, the minimum order of the algorithm is absolute and has nothing to do with how you implement the algorithm.
Which design pattern exist to realize....?
Design patterns are something that code is, not a template for writing programs; they are useful tools for communication and curriculum design. To start with a pattern and make your code fit it is in itself an anti-pattern.
Nobody can answer this question without knowing a lot more about your data and how it's structured; however, the key driver for efficiency will be the data structure you use to implement your tree. If elapsed time is important, then certainly look at parallel execution. It may also be worth performing the operation in a different tool - databases are highly optimized for dealing with large sets of data - but note that the obvious way of describing a tree in a relational database is very inefficient when it comes to isolating sub-trees and walking the tree.
In response to Adam's suggesting of forking you replied:
I "heard" that pcntl isnt a good solution. Any experiences?
Where did you hear that? Certainly forking from a CGI or mod_php invoked script is a bad idea, but nothing wrong with doing it from the command line. Do have a google for long running PHP processes (be warned there is a lot of bad information out there). What code you write will vary depending on the underlying OS - which you've not stated.
I suspect that you could solve a large part of your performance issues by identifying which parts of the tree need to be checked and only checking those parts AND triggering the checks when the tree is updated, or at least marking the nodes as 'dirty'.
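For illustration, a sketch of that dirty-flag idea (the Node class and validate() are assumptions, not code from the question):

class Node {
    public $dirty = true;        // new nodes always need checking
    public $children = array();

    public function markDirty() {
        // call on every mutation; in a real tree you would also mark
        // the ancestors dirty so check() can reach this node from the root
        $this->dirty = true;
    }

    public function check() {
        if (!$this->dirty) {
            return;              // skip subtrees that have not changed
        }
        $this->validate();       // hypothetical expensive per-node check
        foreach ($this->children as $child) {
            $child->check();
        }
        $this->dirty = false;
    }

    private function validate() {
        // ... the actual recursive check goes here ...
    }
}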
You might find these helpful:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
http://en.wikipedia.org/wiki/Threaded_binary_tree
C.
You could use a more efficient data structure, such as a B-tree. I used one once in Java, but not in PHP. You can try this script: http://www.phpclasses.org/browse/file/708.html; it is an implementation of a B-tree.
If that is not enough, you can use Hadoop to implement a Map/Reduce pattern, as Michael said. I would not fork the PHP process; it does not seem to help performance.
Personally, I would use PHP as the client and put everything in Hadoop. This tutorial might help: http://www.lunchpauze.com/2007/10/writing-hadoop-mapreduce-program-in-php.html.
Another solution can be to use a Java implementation of a B-tree: http://jdbm.sourceforge.net/. JDBM is an object database using a B+tree data structure. You can then search it from PHP by exposing the data with a web service, or by accessing it directly with Quercus.
Web or CLI?
If you use the web, you could integrate that part in Quercus, and then use the advantages of Java multithreading.
I don't actually know how reliable Quercus is, though. I'd also suggest using a kind of message queue and refactoring the code so it doesn't depend on shared scope.
Maybe you could rebuild the code around a Map/Reduce pattern. You could then run the PHP code in Hadoop and cluster the processing across a couple of machines.
I don't know if it's useful, but I came across another project, called Gearman. It's also used to cluster PHP processes. I guess you can combine that with a reduce script as well, if Hadoop is not the way you want to go; a minimal sketch follows below.
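A minimal Gearman sketch, assuming a job server on localhost and a hypothetical check_subtree() function (the function name and the serialization scheme are illustrative, not prescribed by Gearman):

// worker.php - run as many of these as you like, on any machine
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('check_subtree', function (GearmanJob $job) {
    $subtree = unserialize($job->workload());
    return serialize(check_subtree($subtree)); // hypothetical check
});
while ($worker->work());

// client.php - dispatches one job per subtree and collects the result
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$result = unserialize($client->doNormal('check_subtree', serialize($subtree)));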
pthreads
There is a rather new (since 2012) PHP extension available: pthreads. It can be installed via PECL.
A simple implementation in PHP: extend the Thread class, add a run() method, and call the start() method.
<?php
// Example from http://www.phpgangsta.de/richtige-threads-in-php-einfach-erstellen-mit-pthreads
class AsyncOperation extends Thread
{
    public function __construct($threadId)
    {
        $this->threadId = $threadId;
    }

    public function run()
    {
        printf("T %s: Sleeping 3sec\n", $this->threadId);
        sleep(3);
        printf("T %s: Hello World\n", $this->threadId);
    }
}

$start = microtime(true);
for ($i = 1; $i <= 5; $i++) {
    $t[$i] = new AsyncOperation($i);
    $t[$i]->start();
}
echo microtime(true) - $start . "\n";
echo "end\n";
Outputs
>php pthreads.php
0.041301012039185
end
T 1: Sleeping 3sec
T 2: Sleeping 3sec
T 3: Sleeping 3sec
T 4: Sleeping 3sec
T 5: Sleeping 3sec
T 1: Hello World
T 2: Hello World
T 3: Hello World
T 4: Hello World
T 5: Hello World
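A small follow-up: the timing line above only shows that start() returns immediately. If the parent needs to wait for the workers to finish (for example, to read results they stored on their own properties), join them; this assumes the $t array from the example above:

foreach ($t as $thread) {
    $thread->join(); // blocks until that thread's run() has finished
}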
Try this: PHPThreads
Code Example:
function threadproc($thread, $param) {
    echo "\tI'm a PHPThread. In this example, I was given only one parameter: \"" . print_r($param, true) . "\" to work with, but I can accept as many as you'd like!\n";
    for ($i = 0; $i < 10; $i++) {
        usleep(1000000);
        echo "\tPHPThread working, very busy...\n";
    }
    return "I'm a return value!";
}

$thread_id = phpthread_create($thread, array(), "threadproc", null, array("123456"));

echo "I'm the main thread doing very important work!\n";
for ($n = 0; $n < 5; $n++) {
    usleep(1000000);
    echo "Main thread...working!\n";
}

echo "\nMain thread done working. Waiting on our PHPThread...\n";
phpthread_join($thread_id, $retval);
echo "\n\nOur PHPThread returned: " . print_r($retval, true) . "!\n";
Requires PHP extensions:
posix
pcntl
sockets

How can I measure the speed of code written in PHP? [closed]

How can I tell which of many classes (which all do the same job) executes faster? Is there software to measure that?
You have (at least) two solutions:
The quite "naïve" one is using microtime(true) before and after a portion of code, to get how much time has passed during its execution; other answers have already said that and given examples, so I won't say much more.
This is a nice solution if you want to benchmark a couple of instructions, like comparing two types of functions, for instance. It's better if done thousands of times, to make sure any "perturbing element" is averaged out.
Something like this, then, if you want to know how long it takes to serialize an array:
$before = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    serialize($list);
}
$after = microtime(true);
echo ($after - $before) / $i . " sec/serialize\n";
Not perfect, but useful, and it doesn't take much time to set up.
The other solution, which works quite nicely if you want to identify which functions take the most time in an entire script, is to use:
The Xdebug extension, to generate profiling data for the script
Software that read the profiling data, and presents you something readable. I know three of those :
Webgrind: web interface; should work on any Apache+PHP server
WinCacheGrind: Windows only
KCacheGrind: probably Linux and Linux-like only; that's the one I prefer, by the way
To get profiling files, you have to install and configure Xdebug; take a look at the Profiling PHP Scripts page of the documentation.
What I generally do is not enable the profiler by default (it generates quite big files and slows things down), but use the option to send a parameter called XDEBUG_PROFILE as GET data to activate profiling just for the page I need.
The profiling-related part of my php.ini looks like this:
xdebug.profiler_enable = 0 ; Profiling not activated by default
xdebug.profiler_enable_trigger = 1 ; Profiling activated when requested by the GET parameter
xdebug.profiler_output_dir = /tmp/ouput_directory
xdebug.profiler_output_name = files_names
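With that configuration in place, profiling a single page is just a matter of adding the trigger to the URL, e.g. (hypothetical address):

http://www.example.com/index.php?XDEBUG_PROFILE=1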
(Read the documentation for more information.)
[Screenshot: a KCacheGrind call graph for a C++ program (source: sourceforge.net)]
You'll get exactly the same kind of thing with PHP scripts ;-)
(With KCacheGrind, I mean; WinCacheGrind is not as good as KCacheGrind...)
This gives you a nice view of what takes time in your application, and it sometimes definitely helps to locate the function that is slowing everything down.
Note that Xdebug counts the CPU time spent by PHP; when PHP is waiting for an answer from a database (for instance), it is not working, only waiting. So Xdebug will think the DB request doesn't take much time!
That part should be profiled on the SQL server, not in PHP, so...
Hope this is helpful :-)
Have fun!
For quick stuff I do this (in PHP):
$startTime = microtime(true);
doTask(); // whatever you want to time
echo "Time: " . number_format(( microtime(true) - $startTime), 4) . " Seconds\n";
You can also use a profiler like http://xdebug.org/.
I've made a simple timing class, maybe it's useful to someone:
class TimingHelper {

    private $start;

    public function __construct() {
        $this->start = microtime(true);
    }

    public function start() {
        $this->start = microtime(true);
    }

    public function segs() {
        return microtime(true) - $this->start;
    }

    public function time() {
        $segs = $this->segs();
        $days = floor($segs / 86400);
        $segs -= $days * 86400;
        $hours = floor($segs / 3600);
        $segs -= $hours * 3600;
        $mins = floor($segs / 60);
        $segs -= $mins * 60;
        $millis = ($segs - floor($segs)) * 1000; // fractional seconds, as ms
        $segs = floor($segs);
        return
            (empty($days) ? "" : $days . "d ") .
            (empty($hours) ? "" : $hours . "h ") .
            (empty($mins) ? "" : $mins . "m ") .
            $segs . "s " .
            $millis . "ms";
    }
}
Use:
$th = new TimingHelper();
<..code being measured..>
echo $th->time();

$th->start(); // reset the timer if needed
<..code being measured..>
echo $th->time();

// result: 4d 17h 34m 57s 0.00095367431640625ms
2020 Update
It's been many years since I last answered this question, so I thought it deserved an update on the APM landscape.
AppDynamics has been bought by Cisco, and the free-forever account they used to offer has been removed from their website.
NewRelic has dropped their pricing from $149/month/host to $25/month/host to compete with the newcomer to the APM market, Datadog, which offers $31/month/host.
Datadog's APM features are still light and leave much to be desired, but I see them enhancing and improving these throughout the next year.
Ruxit has been bought by Dynatrace. No shocker there, as Ruxit was built by ex-Dynatrace employees. This allowed Dynatrace to transform into a true SaaS model, for the better. Say goodbye to that bulky Java client if you'd like.
There are free/open-source options now as well. Check out Apache SkyWalking, which is very popular in China among their top tech companies, and PinPoint, which offers a demo you can try before installing. Both of these require you to manage the hosting, so get ready to spin up a few virtual machines and spend some time on installation and configuration.
I haven't tried either of these open-source APM solutions, so I'm in no position to recommend them; however, I've personally managed deployments of all of these APM solutions for multiple organizations, either on-premise or in the cloud, for hundreds of applications/microservices. So I can say with confidence: you can't go wrong with any of the vendors if they fit your bill.
Originally Answered on October 2015
Here is a direct answer to your question
is there a software to measure that?
Yes, there is. I'm wondering why no one has mentioned it yet. Although the answers suggested above seem fine for a quick check, they aren't scalable in the long run or for a bigger project.
Why not use an Application Performance Monitoring (APM) tool, which is built exactly for that and much more? Check out NewRelic, AppDynamics or Ruxit (all have free versions) to monitor execution time, resource usage and throughput of every application down to the method level.
If you want to quickly test the performance of a framework, you can put this in the index.php file:
//at beginning
$milliseconds = round(microtime(true) * 1000);
//and at the end
echo round(microtime(true) * 1000) - $milliseconds;
Each time, you will get the execution time in milliseconds; microsecond precision is not that useful when testing a whole framework.
I've been using XHProf lately http://pecl.php.net/package/xhprof. It was originally developed by Facebook and it comes with a decent web interface.
I'd like to share with you a self-made function I use to measure the speed of any existing function, with any number of arguments:
function fdump($f_name = '', $f_args = array()) {
    $f_dump = array();
    $f_result = null;
    $f_success = false;

    $f_start = microtime(true);
    if (function_exists($f_name)) {
        // call_user_func_array() spreads the arguments, so there is
        // no need for one branch per number of arguments
        $f_result = call_user_func_array($f_name, $f_args);
        // only an explicit false counts as failure, so results
        // like 0 or "" are still reported as success
        $f_success = ($f_result !== false);
    }
    $f_time = round(microtime(true) - $f_start, 4);

    $f_dump['f_success'] = $f_success;
    $f_dump['f_time'] = $f_time;
    $f_dump['f_result'] = $f_result;
    var_dump($f_dump);
    exit;
    //return $f_result;
}
Example
function do_stuff($arg1 = '', $arg2 = '') {
    return $arg1 . ' ' . $arg2;
}
fdump('do_stuff',array('hello', 'world'));
Returns
array(3) {
  ["f_success"]=>
  bool(true)
  ["f_time"]=>
  float(0) // too fast...
  ["f_result"]=>
  string(11) "hello world"
}
If it's something that can be tested outside the Web context, I just use the Unix time command.
Zend Studio has built in support for profiling using XDebug or ZendDebugger. It will profile your code, telling you exactly how long every function took. It's a fantastic tool for figuring out where your bottlenecks are.
You can use basic stuff like storing timestamps or microtime() before and after an operation to calculate the time needed. That's easy to do, but not very accurate. Maybe a better solution is Xdebug; I've never worked with it, but it seems to be the best-known PHP debugger/profiler I can find.
