How do we know a script's execution time? OOP - PHP

I would love to know if this script is a good way to measure the execution time of a PHP script, in milliseconds:
<?php
$timestart = microtime(true);
/* Blah Blah here ... */
$timeend = microtime(true);
echo 'Execution Time : '.round(($timeend - $timestart) * 1000, 2);
?>
I have no idea how to use OOP (object-oriented programming) with it.
Also, I'll be writing a script that parses text files (.txt) of maybe 120 - 700 lines; which way is better for measuring the data processing?
Does the time depend on the number of lines?

I use this Timer class I wrote some time ago. An advantage is that it's "incremental": you can start a timer inside a loop, and it will accumulate the time between each start and stop.
Please note that if you do that, it will add quite some time to the execution.
Basic usage:
$myTimer = new Timer();
$myTimer->start('hello'); // $myTimer = new Timer('hello'); is a shorthand for this.
for ($i=0; $i<100000; $i++)
{
$myTimer->start('slow suspect 1');
// stuff
$myTimer->stop('slow suspect 1');
$myTimer->start('slow suspect 2');
// moar stuff
$myTimer->stop('slow suspect 2');
}
$myTimer->stop('hello');
$myTimer->print_all();
Please note it's limited and far from the fastest way to do it. Creating and destroying objects takes time, and when done inside tight loops it can add quite some time. But to trace a complex program with multiple nested loops or recursive function calls, this stuff is precious.
<?php
class Timer_
{
public $start;
public $stop;
public function __construct()
{
$this->start = microtime(true);
}
}
class Timer
{
private $timers = array();
public function __construct($firstTimer=null)
{
if ( $firstTimer != null) $this->timers[$firstTimer][] = new Timer_();
}
public function start($name)
{
$this->timers[$name][] = new Timer_();
}
public function stop($name)
{
$pos = count($this->timers[$name]) -1 ;
$this->timers[$name][$pos]->stop = microtime(true);
}
public function print_all($html=true)
{
if ( $html ) echo '<pre>';
foreach ($this->timers as $name => $timerArray)
{
$this->print_($name, $html);
}
if ( $html ) echo '</pre>';
}
public function print_($name, $html=true)
{
$nl = ( $html ) ? '<br>' : PHP_EOL;
$timerTotal = 0;
foreach ($this->timers[$name] as $key => $timer)
{
if ( $timer->stop != null )
{
$timerTotal += $timer->stop - $timer->start;
}
}
echo $name, ': ', $timerTotal, $nl;
}
}
?>
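To tie this back to the question, here is a minimal sketch of timing the text-file parsing with the Timer class above; the file name data.txt and the per-line treatment are assumptions for illustration only.
<?php
$timer = new Timer('whole script');

$lines = file('data.txt');               // the 120 - 700 line .txt file from the question
$timer->start('parse');
foreach ($lines as $line) {
    $fields = explode(';', trim($line)); // hypothetical treatment of one line
}
$timer->stop('parse');

$timer->stop('whole script');
$timer->print_all(false);                // plain-text output (no <pre>/<br>), e.g. on the CLI
?>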

If you want to do it in an OO manner, you can have a class where you start and stop named timers like this:
$timer->start('piece1');
//code1
$timer->stop('piece1');
echo 'Script piece1 took '.$timer->get('piece1').' ms';
I believe it is done like that in the CodeIgniter framework.
The point of the names ('piece1') is that you can have multiple timers running at the same time (for example, one inside another). The code for implementing this is fairly simple, about 10 lines; a sketch follows.
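Here is a rough sketch of such a class (my own naming, not taken from CodeIgniter), just to show the idea behind the usage above:
class SimpleTimer
{
    private $starts = array();
    private $stops = array();

    public function start($name) { $this->starts[$name] = microtime(true); }
    public function stop($name)  { $this->stops[$name] = microtime(true); }

    // elapsed time in milliseconds between start() and stop() for one name
    public function get($name)
    {
        return round(($this->stops[$name] - $this->starts[$name]) * 1000, 2);
    }
}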

Related

Why is $GLOBALS significantly slower than the global keyword?

I've made a few tests on global vs function parameter and the difference is negligible.
But as I was testing things, I found that $GLOBALS is about 10% slower than using a function parameter or the global keyword. Can anyone explain why?
I want to understand the mechanism behind PHP a little further, so that I can make better trade-offs in future development. Not that I ever use $GLOBALS except in some exceptional cases.
$md5 = md5(1000000);
function byGlobal() {
global $md5;
$c = 0;
while( md5($c) != $md5 ){
$c++;
}
}
function superGlobal() {
$c = 0;
while( md5($c) != $GLOBALS[ 'md5' ] ){
$c++;
}
}
function asParam($md5) {
$c = 0;
while( md5($c) != $md5 ){
$c++;
}
}
$time3 = microtime(true);
asParam($md5);
echo (microtime(true) - $time3);
echo "<br/>";
$time1 = microtime(true);
byGlobal();
echo (microtime(true) - $time1);
echo "<br/>";
$time2 = microtime(true);
superGlobal();
echo (microtime(true) - $time2);
echo "<br/>";
I'm not arguing about function vs global or good practice. I really just wonder why there is so much difference. I ran the test 20 times and the results are pretty consistent.
I moved the calls up and down in the code to avoid any caching influence.
I'm doing a million md5 iterations in each call so the server works a load.
The time results are consistent with other hashing functions (I tested sha1 and crc32); they are all in the same +10% slower range.
Ran on a VM with PHP 5.4.16 on CentOS 7.
1st run
as param : 0.8601s
as global : 0.8262s
as $GLOBALS : 0.9463s (more than 10% slower)
2nd run
as param : 0.8100s
as global : 0.8058s
as $GLOBALS : 0.9624s (more than 10% slower again)
Related studies (mainly debates about global best practices; nothing about $GLOBALS vs global performance):
The advantage / disadvantage between global variables and function parameters in PHP?
php global variable overhead in a framework
Does using global create any overhead?
EDIT: New version, I made a mistake in the first one.
I've rewritten your tests so they test the things you want:
<?php
$value = 6235;
function byGlobal() {
global $value;
return $value++;
}
function superGlobal() {
return $GLOBALS['value']++;
}
function asParam($parameter) {
return $parameter++;
}
$time = microtime(true);
for ($i = 0;$i < 10000000;$i++) $value = asParam($value);
echo 'asParam: '.(microtime(true)-$time).'<br/>';
$time = microtime(true);
for ($i = 0;$i < 10000000;$i++) $value = byGlobal();
echo 'byGlobal: '.(microtime(true)-$time).'<br/>';
$time = microtime(true);
for ($i = 0;$i < 10000000;$i++) $value = superGlobal();
echo 'superGlobal: '.(microtime(true)-$time).'<br/>';
Example results for PHP 7.0.17 on CentOS 7:
asParam: 0.43703699111938
byGlobal: 0.55213189125061
superGlobal: 0.70462608337402
and
asParam: 0.4569981098175
byGlobal: 0.55681920051575
superGlobal: 0.76146912574768
So, the superGlobal is the slowest, but not by that much. I guess the reason is that it is an array, so every access involves an array lookup.
What I take away from this is that PHP is fast! I would not worry about the small differences in these tiny time slices. Having readable code is far more important. In my experience the slowest things in a website are the database queries.

PHP - Slow performance on function return assignment

As part of a project I came across a situation where, inside a loop, I store the return value of a function.
This happened to be a bottleneck for the application, where big arrays would take forever to be processed.
To me, the assignment should be no reason for the incredibly slow performance.
On the other hand, the same function call, without assigning the return value, gives much better performance.
Can you explain to me why the first loop is so much slower?
Output:
First took 1.750 sec.
Second took 0.003 sec.
class one {
private $var;
public function __construct() {
$this->var = array();
}
public function saveIteration($a) {
$this->var[] = $a;
return $this->var;
}
public function getVar() {
return $this->var;
}
}
$one = new one();
$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
$res = $one->saveIteration($i);
}
echo "First took " . number_format(microtime(true) - $time_start, 3) . " sec.";
$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
$one->saveIteration($i);
}
$res = $one->getVar();
echo "<br>Second took " . number_format(microtime(true) - $time_start, 3) . " sec.";
According to http://php.net/manual/en/functions.returning-values.php#99660, the array return value is not passed by reference but by value, which means that a copy of the array is created (at the very least, when you change it again), which in turn takes time (allocating memory, copying the data).
It probably has something to do with the fact that you're creating 10,000 arrays, each time incrementing the number of elements of the new array by 1.
My guess is that while you're inside the loop the local variable isn't freed on its own; therefore I went ahead and tried freeing it using unset, which brought the results very close together.
I know this is not a real-world example, but if you have something similar in your code you could get away with it by just freeing (unsetting) the local variable once you're finished with it.
here's your test code again:
class one {
private $var;
public function __construct() {
$this->var = array();
}
public function saveIteration($a) {
$this->var[] = $a;
return $this->var;
}
public function getVar() {
return $this->var;
}
}
$one = new one();
$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
$res = $one->saveIteration($i);
unset($res);
}
echo "First took " . number_format(microtime(true) - $time_start, 3) . " sec.".PHP_EOL;
$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
$one->saveIteration($i);
}
$res = $one->getVar();
echo "Second took " . number_format(microtime(true) - $time_start, 3) . " sec.".PHP_EOL;
Note: the only thing I've modified is adding unset in the first example.
Result:
First took 0.068 sec.
Second took 0.062 sec.
@Jakumi made an excellent point. Since the values must be copied when you assign, 10,000 extra copy operations and a lot more memory are required in the first loop.
The difference between the two loops is actually much greater than your testing shows. Your comparison would be fairer if, between the two tests, you reset with:
unset($one); $one = new one();
In your current test, the second loop is executed while the large array from the first loop is still held in memory, so your results are not independent. See the sketch of this modification below.
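A sketch of that modification (using the same class one as defined in the question), so both loops start from a fresh object:
$one = new one();
$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
    $res = $one->saveIteration($i);
}
echo "First took " . number_format(microtime(true) - $time_start, 3) . " sec." . PHP_EOL;

unset($one, $res);  // drop the object and the large copied array from the first test
$one = new one();   // the second test starts from an empty state

$time_start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
    $one->saveIteration($i);
}
$res = $one->getVar();
echo "Second took " . number_format(microtime(true) - $time_start, 3) . " sec." . PHP_EOL;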

PHP generator: yield the first value, then iterate over the rest

I have this code:
<?php
function generator() {
yield 'First value';
for ($i = 1; $i <= 3; $i++) {
yield $i;
}
}
$gen = generator();
$first = $gen->current();
echo $first . '<br/>';
//$gen->next();
foreach ($gen as $value) {
echo $value . '<br/>';
}
This outputs:
First value
First value
1
2
3
I need 'First value' to be yielded only once. If I uncomment the $gen->next() line, a fatal error occurs:
Fatal error: Uncaught exception 'Exception' with message 'Cannot rewind a generator that was already run'
How can I solve this?
The problem is that the foreach tries to reset (rewind) the generator, but rewind() throws an exception if the generator is already past the first yield.
So you should avoid the foreach and use a while instead:
$gen = generator();
$first = $gen->current();
echo $first . '<br/>';
$gen->next();
while ($gen->valid()) {
echo $gen->current() . '<br/>';
$gen->next();
}
chumkiu's answer is correct. Some additional ideas.
Proposal 0: remaining() decorator.
(This is the latest version I am adding here, but possibly the best)
PHP 7+:
function remaining(\Generator $generator) {
yield from $generator;
}
PHP 5.5+ < 7:
function remaining(\Generator $generator) {
for (; $generator->valid(); $generator->next()) {
yield $generator->current();
}
}
Usage (all PHP versions):
function foo() {
for ($i = 0; $i < 5; ++$i) {
yield $i;
}
}
$gen = foo();
if (!$gen->valid()) {
// Not even the first item exists.
return;
}
$first = $gen->current();
$gen->next();
$values = [];
foreach (remaining($gen) as $value) {
$values[] = $value;
}
There might be some indirection overhead. But semantically this is quite elegant I think.
Proposal 1: for() instead of while().
As a nice syntactic alternative, I propose using for() instead of while() to reduce clutter from the ->next() call and the initialization.
Simple version, without your initial value:
for ($gen = generator(); $gen->valid(); $gen->next()) {
echo $gen->current();
}
With the initial value:
$gen = generator();
if (!$gen->valid()) {
echo "Not even the first value exists.<br/>";
return;
}
$first = $gen->current();
echo $first . '<br/>';
$gen->next();
for (; $gen->valid(); $gen->next()) {
echo $gen->current() . '<br/>';
}
You could put the first $gen->next() into the for() statement, but I don't think this would add much readability.
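For completeness, that variant would look roughly like this (using the same generator() as above):
$gen = generator();
$first = $gen->current();
echo $first . '<br/>';

// the first ->next() folded into the for() initializer
for ($gen->next(); $gen->valid(); $gen->next()) {
    echo $gen->current() . '<br/>';
}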
A little benchmark I did locally (with PHP 5.6) showed that this version with for() or while(), with explicit calls to ->next(), current(), etc., is slower than the implicit version with foreach(generator() as $value).
Proposal 2: Offset parameter in the generator() function
This only works if you have control over the generator function.
function generator($offset = 0) {
if ($offset <= 0) {
yield 'First value';
$offset = 1;
}
for ($i = $offset; $i <= 3; $i++) {
yield $i;
}
}
foreach (generator() as $firstValue) {
print "First: " . $firstValue . "\n";
break;
}
foreach (generator(1) as $value) {
print $value . "\n";
}
This would mean that any initialization would run twice. Maybe not desirable.
Also it allows calls like generator(9999) with really high skip numbers. E.g. someone could use this to process the generator sequence in chunks. But starting from 0 each time and then skipping a huge number of items seems really a bad idea performance-wise. E.g. if the data is coming from a file, and skipping means to read + ignore the first 9999 lines of the file.
The solutions provided here do not work if you need to iterate more than once,
so I used the iterator_to_array() function to convert it to an array:
$items = iterator_to_array($items);
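For example, with the generator() from the question (note that this buffers every value in memory, which you may not want for very large sequences):
$items = iterator_to_array(generator());

$first = array_shift($items);   // 'First value'
echo $first . '<br/>';

foreach ($items as $value) {    // 1, 2, 3 - and this loop can be run again
    echo $value . '<br/>';
}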

mysqli select function with array parameters

I want to create a function where the input is two arrays: one for the fields and one for the tables.
public function s($fields = array(), $tables = array()) { }
but I have no idea how to go on from here. I had the idea of looping through the two arrays and saving each value to a string like this (but this does not seem like the best way to do it to me):
$length = count($fields);
$fieldsString = "";
for ($i = 0; $i < $length; $i++) {
$fieldsString .= $fields[$i];
}
then do the same thing with $tables and use $fieldsString and $tablesString in an SQL query.
My question: can this be done in a more effective way?
EDIT
I know how to do this with a for loop and get the output I want. But I'm looking for a more "professional" way to deal with this problem, so I can learn from it.
Instead of looping through the whole array, you could also use implode() to create a string containing all the elements of the array:
$fieldsString = implode(',', $fields);
http://php.net/manual/de/function.implode.php
The OOP approach:
First I'll just say that OOP is much more than just one class in the code.
It's a concept that allows us to think on a bigger scale and build much more functionality into our code.
I created a DBHandler class which will handle all of my DB requests whatsoever, so it's better to keep it in a separate file, even in a higher-level directory.
This is how the class looks:
<?php
class DBHandler{
public $dbCon;
function __construct(mysqli $dbCon){
$this->dbCon = $dbCon;
}
private function createQuery($parameters, $tables){
$params = implode(',', $parameters);
$tParam = implode(',',$tables);
return "SELECT $params FROM $tParam";
}
public function query($parameters, $tables){
$query = $this->createQuery($parameters, $tables);
$result =$this->dbCon->query($query);
return $result;
}
}
?>
And this is how your main should look like:
<?php
$myownDB = new DBHandler($dbCon);
$myownDB->query($parameters, $tables);
?>
This way is much more maintainable, and I really suggest using it.
It's easier to separate your database handler from your actual code, so it won't get messy.
Please note that the call $myownDB->query($parameters, $tables); will run the query immediately and won't echo it; a usage sketch follows.
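A usage sketch of the class above; the connection credentials, table and column names are placeholders, not taken from the question:
<?php
$dbCon = new mysqli('localhost', 'user', 'password', 'mydb');   // placeholder credentials

$myownDB = new DBHandler($dbCon);
$result = $myownDB->query(array('id', 'name'), array('users')); // SELECT id,name FROM users

while ($row = $result->fetch_assoc()) {
    echo $row['id'] . ': ' . $row['name'] . PHP_EOL;
}
?>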
The procedural approach:
<?php
function s($parameters){
$i = 0;
$addtoQuery ='';
for($i =0; $i<count($parameters); $i++){
$addtoQuery .= $parameters[$i].',';
}
return substr($addtoQuery, 0, -1);
}
function t($tables){
$i = 0;
$addtoQuery ='';
for($i =0; $i<count($tables); $i++){
$addtoQuery .= $tables[$i].',';
}
return substr($addtoQuery, 0, -1);
}
$parameters = array("range", "distance", "which");
$tables = array("north", "south");
$s = s($parameters);
$t = t($tables);
function createQuery($s, $t){
$query = "SELECT $s FROM $t";
return $query;
}
echo createQuery($s,$t);
?>
Result of echo => SELECT range,distance,which FROM north,south
You can simply pass this string to a mysqli query.
If it's not what you were looking for, I'm sorry.
Good luck!

Why Does This Perform Better?

So I'm trying to implement an Aspect-Oriented Design into my architecture using debug_backtrace and PHP reflection. The design works, but I decided to see just how badly it impacts performance, so I wrote up the following profiling test. The interesting thing is that when the Advisable and NonAdvisable methods do nothing, calling an advisable method costs about 5 times as much as calling a non-advisable method; but when I increase the complexity of each method (here by increasing the number of iterations to 30 or more), advisable methods begin to perform better, and the gap keeps growing as the complexity increases.
Base class:
abstract class Advisable {
private static $reflections = array();
protected static $executions = 25;
protected static function advise()
{
$backtrace = debug_backtrace();
$method_trace = $backtrace[1];
$object = $method_trace['object'];
$function = $method_trace['function'];
$args = $method_trace['args'];
$class = get_called_class();
// We'll introduce this later
$before = array();
$around = array();
$after = array();
$method_info = array(
'args' => $args,
'object' => $object,
'class' => $class,
'method' => $function,
'around_queue' => $around
);
array_unshift($args, $method_info);
foreach ($before as $advice)
{
call_user_func_array($advice, $args);
}
$result = self::get_advice($method_info);
foreach ($after as $advice)
{
call_user_func_array($advice, $args);
}
return $result;
}
public static function get_advice($calling_info)
{
if ($calling_info['around_queue'])
{
$around = array_shift($calling_info['around_queue']);
if ($around)
{
// a method exists in the queue
return call_user_func_array($around, array_merge(array($calling_info), $calling_info['args']));
}
}
$object = $calling_info['object'];
$method = $calling_info['method'];
$class = $calling_info['class'];
if ($object)
{
return null; // THIS IS THE OFFENDING LINE
// this is a class method
if (isset(self::$reflections[$class][$method]))
{
$parent = self::$reflections[$class][$method];
}
else
{
$parent = new ReflectionMethod('_'.$class, $method);
if (!isset(self::$reflections[$class]))
{
self::$reflections[$class] = array();
}
self::$reflections[$class][$method] = $parent;
}
return $parent->invokeArgs($object, $calling_info['args']);
}
// this is a static method
return call_user_func_array(get_parent_class($class).'::'.$method, $calling_info['args']);
}
}
An implemented class:
abstract class _A extends Advisable
{
public function Advisable()
{
$doing_stuff = '';
for ($i = 0; $i < self::$executions; $i++)
{
$doing_stuff .= '.';
}
return $doing_stuff;
}
public function NonAdvisable()
{
$doing_stuff = '';
for ($i = 0; $i < self::$executions; $i++)
{
$doing_stuff .= '.';
}
return $doing_stuff;
}
}
class A extends _A
{
public function Advisable()
{
return self::advise();
}
}
And profile the methods:
$a = new A();
$start_time = microtime(true);
$executions = 1000000;
for ($i = 0; $i < $executions; $i++)
{
$a->Advisable();
}
$advisable_execution_time = microtime(true) - $start_time;
$start_time = microtime(true);
for ($i = 0; $i < $executions; $i++)
{
$a->NonAdvisable();
}
$non_advisable_execution_time = microtime(true) - $start_time;
echo 'Ratio: '.$advisable_execution_time/$non_advisable_execution_time.'<br />';
echo 'Advisable: '.$advisable_execution_time.'<br />';
echo 'Non-Advisable: '.$non_advisable_execution_time.'<br />';
echo 'Overhead: '.($advisable_execution_time - $non_advisable_execution_time);
If I run this test with the complexity at 100 (A::executions = 100), I get the following:
Ratio: 0.289029437803
Advisable: 7.08797502518
Non-Advisable: 24.5233671665
Overhead: -17.4353921413
Any ideas?
You're skipping all of the iterations when you call A's Advisable method... you're overwriting it with one single call to the inherited advise() method. So, when you add iterations, you're only adding them to the NonAdvisable() call.
Method overhead should apply to PHP as well as to Java, I guess - "the actual method that is called is determined at run time" => the overhead for the shadowed Advisable method is bigger.
But that would be O(1), not NonAdvisable's O(n).
Sorry for the bother; I just found the line return null; left in the get_advice method before the parent method gets called. I hate to answer my own question, but it's not really worth someone else searching for it. The corrected branch is sketched below.
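For reference, this is how that branch of get_advice() reads with the stray line removed; the rest of the method is unchanged:
if ($object)
{
    // this is a class method (the stray "return null;" removed)
    if (isset(self::$reflections[$class][$method]))
    {
        $parent = self::$reflections[$class][$method];
    }
    else
    {
        $parent = new ReflectionMethod('_'.$class, $method);
        if (!isset(self::$reflections[$class]))
        {
            self::$reflections[$class] = array();
        }
        self::$reflections[$class][$method] = $parent;
    }
    return $parent->invokeArgs($object, $calling_info['args']);
}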
