PHP function efficiencies

Is there a table of how much "work" it takes to execute a given function in PHP? I'm not a compsci major, so I maybe don't have the formal background to know that "oh yeah, strings take longer to work with than integers" or anything like that. Are all steps/lines in a program created equal? I just don't even know where to start researching this.
I'm currently doing some Project Euler questions where I'm very sure my answer will work, but I'm timing out my local Apache server at a minute with my requests (and PE has said that all problems can be solved < 1 minute). I don't know how/where to start optimizing, so knowing more about PHP and how it uses memory would be useful. For what it's worth, here's my code for question 206:
<?php
$start = time();
for ($i = 1010374999; $i < 1421374999; $i++) {
    $a = number_format(pow($i, 2), 0, ".", "");
    $c = preg_split('//', $a, -1, PREG_SPLIT_NO_EMPTY);
    if ($c[0] == 1) {
        if ($c[2] == 2) {
            if ($c[4] == 3) {
                if ($c[6] == 4) {
                    if ($c[8] == 5) {
                        if ($c[10] == 6) {
                            if ($c[12] == 7) {
                                if ($c[14] == 8) {
                                    if ($c[16] == 9) {
                                        if ($c[18] == 0) {
                                            echo $i;
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
$end = time();
$elapsed = ($end - $start);
echo "<br />The time to calculate was $elapsed seconds";
?>
If this is a wiki question about optimization, just let me know and I'll move it. Again, not looking for an answer, just help on where to learn about being efficient in my coding (although cursory hints wouldn't be flat out rejected, and I realize there are probably more elegant mathematical ways to set up the problem)

There's no such table that's going to tell you how long each PHP function takes to execute, since the time of execution will vary wildly depending on the input.
Take a look at what your code is doing. You've created a loop that's going to run 411,000,000 times. Given the code needs to complete in less than 60 seconds (a minute), in order to solve the problem you're assuming each trip through the loop will take less than (approximately) 0.000000145 seconds. That's unreasonable, and no amount of using the "right" function will solve your problem. Try your loop with nothing in it:
for ($i=1010374999; $i < 1421374999; $i++) {
}
Unless you have access to science fiction computers, this probably isn't going to complete execution in less than 60 seconds. So you know this approach will never work.
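As a sanity check, here's one way to time that claim without waiting out the full run: time a scaled-down empty loop and extrapolate (the 41.1 multiplier is just 411 million divided by the 10 million iterations actually timed):

```php
<?php
// Time 10 million empty iterations, then extrapolate to the original
// 411-million-iteration range.
$start = microtime(true);
for ($i = 0; $i < 10000000; $i++) {
    // intentionally empty
}
$elapsed = microtime(true) - $start;
printf("10M empty iterations: %.2fs; extrapolated full range: ~%.0fs\n",
       $elapsed, $elapsed * 41.1);
```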
This is known as a brute-force solution to a problem. The point of Project Euler is to get you thinking creatively, both from a math and a programming point of view, about problems. You want to reduce the number of trips you need to take through that loop. The obvious solution will never be the answer here.
I don't want to tell you the solution, because the point of these things is to think your way through them and become a better algorithm programmer. Examine the problem, think about its restrictions, and think about ways you can reduce the total number of numbers you'd need to check.

A good tool for taking a look at execution times for your code is xDebug: http://xdebug.org/docs/profiler
It's an installable PHP extension which can be configured to output a complete breakdown of function calls and execution times for your script. Using this, you'll be able to see what in your code is taking longest to execute and try some different approaches.
EDIT: now that I'm actually looking at your code, you're running 400 million+ regex calls! I don't know anything about Project Euler, but I have a hard time believing this code can be executed in under a minute on commodity hardware.

preg_split is likely to be slow because it's using a regex. Is there not a better way to do that line?
Hint: You can access chars in a string like this:
$str = 'This is a test.';
echo $str[0];

Try switching preg_split() to explode() or str_split(), which are faster.
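For illustration, here's a sketch of the digit check with no regex at all, since PHP strings can be indexed directly (the sample value below is just the first number from the search range):

```php
<?php
// Regex-free digit check: cast the square to a string and index into it.
// Assumes 64-bit PHP so the 19-digit square fits in an integer.
$i = 1010101010;
$a = (string)($i * $i);                     // "1020304050403020100"
$ok = $a[0] == '1' && $a[2] == '2' && $a[4] == '3';
var_dump($ok);                              // bool(true) for this value
```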

First, here's a slightly cleaner version of your function, with debug output
<?php
$start = time();
$min = (int)floor(sqrt(1020304050607080900));
$max = (int)ceil(sqrt(1929394959697989990));
for ($i = $min; $i < $max; $i++) {
    $c = (string)($i * $i); // cast to string so individual digits can be indexed
    echo $i, ' => ', $c, "\n";
    if ($c[0] == 1
        && $c[2] == 2
        && $c[4] == 3
        && $c[6] == 4
        && $c[8] == 5
        && $c[10] == 6
        && $c[12] == 7
        && $c[14] == 8
        && $c[16] == 9
        && $c[18] == 0)
    {
        echo $i;
        break;
    }
}
$end = time();
$elapsed = ($end - $start);
echo "<br />The time to calculate was $elapsed seconds";
And here's the first 10 lines of output:
1010101010 => 1020304050403020100
1010101011 => 1020304052423222121
1010101012 => 1020304054443424144
1010101013 => 1020304056463626169
1010101014 => 1020304058483828196
1010101015 => 1020304060504030225
1010101016 => 1020304062524232256
1010101017 => 1020304064544434289
1010101018 => 1020304066564636324
1010101019 => 1020304068584838361
That, right there, seems like it oughta inspire a possible optimization of your algorithm. Note that we're not even close, as of the 6th entry (1020304060504030225) -- we've got a 6 in a position where we need a 5!
In fact, many of the next entries will be worthless, until we're back at a point where we have a 5 in that position. Why bother calculating the intervening values? If we can figure out how, we should jump ahead to 1010101060, where that digit becomes a 5 again... If we can keep skipping dozens of iterations at a time like this, we'll save well over 90% of our run time!
Note that this may not be a practical approach at all (in fact, I'm fairly confident it's not), but this is the way you should be thinking. What mathematical tricks can you use to reduce the number of iterations you execute?
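As a small, hedged illustration of that kind of thinking (not the full solution): the target square ends in 0, and a perfect square ends in 0 only if its root ends in 0, so stepping $i by 10 discards 90% of the candidates before any digit check runs:

```php
<?php
// Sketch: step by 10 instead of 1, since only multiples of 10 can have
// a square ending in 0. Counting candidates in a small sub-range shows
// the 10x reduction.
$candidates = 0;
for ($i = 1010101010; $i < 1010111010; $i += 10) {
    $candidates++;   // the real digit check would go here
}
echo $candidates;    // 1000 candidates instead of 10000
```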

Related

PHP - Checking Character Type and Length and Trimming Leading Zeros

I have this code that works well, just inquiring to see if there is a better, shorter way to do it:
$sString = $_GET["s"];
if (ctype_digit($sString) == true) {
    if (strlen($sString) == 5) {
        $sString = ltrim($sString, '0');
    }
}
The purpose of this code is to remove the leading zeros from a search query before passing it to a search engine if it is 5 digits. The exact scenario is A) All digits, B) Length equals 5, and C) It has leading zeros.
Better/Shorter means in execution time...
$sString = $_GET["s"];
if (preg_match('/^\d{5}$/', $sString)) {
    $sString = ltrim($sString, '0');
}
This is shorter (and makes some assumptions you're not clear about in your question). However, shorter is not always better. Readability is sacrificed here, as well as some performance I guess (which will be negligible in a single execution, though it could be "costly" when run in tight loops) etc. You ask for a "better, shorter way to do it"; then you need to define "better" and "shorter". Shorter in terms of lines/characters of code? Shorter in execution time? Better as in more readable? Better as in "most best practices" applied?
Edit:
So you've added the following:
The purpose of this code is to remove the leading zeros from a search
query before passing it to a search engine if it is 5 digits and. The
exact scenario is A) All digits, B) Length equals 5, and C) It has
leading zeros.
A) Ok, that clears things up (You sure characters like '+' or '-' won't be required? I assume these 'inputs' will be things like (5-digit) invoice numbers, product numbers etc.?)
B) What to do on inputs of length 1, 2, 3, 4, 6, 7, ... 28? 97? Nothing? Keep input intact?
Better/Shorter means in execution time...
Could you explain why you'd want to "optimize" this tiny piece of code? Unless you run this thousands of times in a tight loop, the effects of optimizing this will be negligible. (Mumbles something about premature optimization being the root of all evil.) What is it that you're hoping to "gain" by "optimizing" this piece of code?
I haven't profiled/measured it yet, but my (educated) guess is that my preg_match is more 'expensive' in terms of execution time than your original code. It is "shorter" in terms of code though (but see above about sacrificing readability etc.).
Long story short: this code is not worth optimizing*; any I/O, database queries etc. will "cost" a hundred, maybe even thousands of times more. Go optimize that first.
* Unless you have "optimized" everything else as far as possible (and even then, a database query might be more efficient when written in another way so the execution plan is more efficient, or I/O might be saved using (more) caching etc. etc.). The fraction of a millisecond (at best) you're about to save optimizing this code better be worth it! And even then you'd probably need to consider switching to better hardware, another platform or another programming language, for example.
Edit 2:
I have quick'n'dirty "profiled" your code:
$start = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
    $sString = '01234';
    if (ctype_digit($sString) == true) {
        if (strlen($sString) == 5) {
            $sString = ltrim($sString, '0');
        }
    }
}
echo microtime(true) - $start;
Output: 0.806390047073
So, that's 0.8 seconds for 1 million(!) iterations. My code, "profiled" the same way:
$start = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
    $sString = '01234';
    if (preg_match('/^\d{5}$/', $sString)) {
        $sString = ltrim($sString, '0');
    }
}
echo microtime(true) - $start;
Output: 1.09024000168
So, it's slower. As I guessed/predicted.
Note my explicitly mentioning "quick'n'dirty": for good/accurate profiling you need better tools and if you do use a poor-man's profiling method like I demonstrated above then at least make sure you average your numbers over a few dozen runs etc. to make sure your numbers are consistent and reliable.
But, either solution takes, worst case, less than 0.0000011 seconds to run. If this code runs "Tens of Thousands of times a day" (assuming 100,000 times) you'll save exactly 0.11 seconds over an entire day IF you got it down to 0! If this 0.11 seconds is a problem you have a bigger problem at hand, not these few lines of code. It's just not worth "optimizing" (hell, it's not even worth discussing; the time you and I have taken going back-and-forth over this will not be "earned back" in at least 50 years).
Lesson learned here:
Measure / Profile before optimizing!
A little shorter:
if (ctype_digit($sString) && strlen($sString) == 5) {
    $sString = ltrim($sString, '0');
}
It also matters if you want "digits" 0-9 or a "numeric value", which may be -1, +0123.45e6, 0xf4c3b00c, 0b10100111001 etc., in which case use is_numeric.
I guess you could also just do this to remove 0s:
$sString = (int)$sString;
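Note the cast behaves differently from the original checks: it strips leading zeros for any length, returns an integer rather than a string, and silently truncates non-digit input instead of leaving it intact. A quick sketch of the differences:

```php
<?php
// (int) strips leading zeros regardless of length, but changes the type
// and discards non-numeric tails.
var_dump((int)'01234');           // int(1234)
var_dump(ltrim('01234', '0'));    // string(4) "1234"
var_dump((int)'0012abc');         // int(12), the 'abc' is discarded
```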

How can I speed my php loop to finish before server times out?

Please bear with me, as I am a bit new to this.
I have a page that calls another page after a form is submitted.
The new page contains a lengthy php loop which runs before the page is displayed.
The problem is, the server sends a 500 error before the loop has time to finish.
I have economy shared linux hosting with GoDaddy, and they don't allow me access to
error logs. I am pretty sure it is not a memory issue. I suspect that apache is simply timing out. I have shortened the loop and it works fine, so I am sure that there is nothing wrong with the code, but I would like to know if there are any tricks to code the loop better, or trick apache into giving the loop more time, or anything to make this work.
Thanks in advance, and here is the code:
Some variables before the loop starts:
$usernum1 = $_POST['num1'];
$usernum2 = $_POST['num2'];
$usernum3 = $_POST['num3'];
$usernum4 = $_POST['num4'];
$usernum5 = $_POST['num5'];
$usernum6 = $_POST['num6'];
$usernum7 = $_POST['num7'];
$usernumbers = array($usernum1, $usernum2,
                     $usernum3, $usernum4,
                     $usernum5, $usernum6,
                     $usernum7);
sort($usernumbers, SORT_NUMERIC);
$i = 0;
$counter = 0;
set_time_limit(0);
$input = range(1, 49);
Here is the loop:
do {
    $rand_keys = array_rand($input, 7);
    sort($rand_keys, SORT_NUMERIC);
    if ($input[$rand_keys[0]] == $usernumbers[0] &&
        $input[$rand_keys[1]] == $usernumbers[1] &&
        $input[$rand_keys[2]] == $usernumbers[2] &&
        $input[$rand_keys[3]] == $usernumbers[3] &&
        $input[$rand_keys[4]] == $usernumbers[4] &&
        $input[$rand_keys[5]] == $usernumbers[5] &&
        $input[$rand_keys[6]] == $usernumbers[6])
    {
        $i = 1;
    }
    $counter = $counter + 1;
} while ($i == 0);
You can see the that the loop ends when $i == 1, and $i is assigned 1 when all numbers match. I should also mention that I added max_execution_time = 1000 to my php5.ini file on the server, and when I check phpinfo, it seems to have taken effect.
And That's IT! Not very complex. If anyone can help me figure out some trick, or better way, please, please help, as I have busted my brain for 2 days on this.
I just need the loop to have more time to finish. Thanks again in advance :-)
EDIT: For those who want to see the script working, I have it for a lottery that only has SIX numbers, is easier to win, and doesn't timeout. Go here and MAKE SURE TO select the top option (Lotto 649). http://diablogosse.com/test/lottosims.php
EDIT2: Sorry if this comes across badly, but I just wanted to clarify something after quite a few posts. My question is not "Why in God's name would I WANT to do this??". The question is more like: "How can I make what I WANT to do work.". ;-)
Chris
Just a thought:
App Goals:
user enters six favourite numbers between 1 and 49 i.e. [1..49]
app generates six random numbers between 1 and 49 i.e. [1..49]
if both sets of numbers same, user WINSssss..
Simulation Goals:
Let the users know how many days it'll take to win with lucky numbers
Implementing it in JavaScript will save you a lot of server resources. Instead of looping so many times, return after every check. If there is a match, the user wins. If the user didn't win, respond with taunting messages to provoke the user to play again.
You were so close, 4 matching numbers! wanna play again?
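For a sense of scale (a side calculation, not part of the original answer): the loop's expected iteration count is the number of 7-from-49 combinations, which can be computed directly:

```php
<?php
// Expected iterations of the random-draw loop = C(49, 7), the number of
// distinct 7-number draws from 1..49.
function choose(int $n, int $k): int {
    $result = 1;
    for ($i = 1; $i <= $k; $i++) {
        // Each intermediate value C($n-$k+$i, $i) is an exact integer,
        // so the division never produces a fraction.
        $result = $result * ($n - $k + $i) / $i;
    }
    return $result;
}
echo choose(49, 7);   // 85900584, roughly 86 million expected iterations
```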
you could use
set_time_limit ( int $seconds )
at the start of each loop iteration; every call restarts the script's timeout counter from zero, so the script gets another $seconds to run.
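A minimal sketch of that idea (each set_time_limit() call restarts PHP's timeout counter from zero, so the limit effectively applies per iteration rather than to the whole run):

```php
<?php
// Sketch: reset the execution-time limit on every pass so a long loop
// never trips max_execution_time (this has no effect when PHP runs in
// safe mode).
for ($n = 0; $n < 3; $n++) {
    set_time_limit(30);
    // ... one expensive iteration of the real loop would go here ...
}
echo "done";
```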
set_time_limit doesn't work when PHP is in safe mode, as it seems to be the case on GoDaddy.
See here: ini_set, set_time_limit, (max_execution_time) - not working.
May I ask you what you are attempting to achieve with that loop?

PHP: Which is faster - array_sum or a foreach?

I'm trying to work out which method would be faster (if either would be?). I've designed a test here: http://codepad.org/odyUN0xg
When I run that test my results are very inconsistent. Both vary wildly.
Is there a problem with the test?
otherwise, can anyone suggest which one would or wouldn't be faster??
Edit
Okay guys, thanks I've edited the codepad here: http://codepad.org/n1Xrt98J
With all the comments and discussion I've decided to go with array_sum. As soon as I used that microtime(true) thing it started to look a lot faster (array_sum).
Cheers for the advice, oh and I've added a "for" loop so that results are more even, but as noted on the results there is little time saving if any over a foreach.
The problem is that you took a very low limit, 1000. The overhead of the PHP interpreter is MUCH larger. I would take 100000000 or something like that.
However I think array_sum is faster since it's more specialized and probably implemented in fast C.
Oh, and as Michael McTiernan said you must change every instance of microtime() to microtime(true). http://php.net/manual/en/function.microtime.php
And finally, I wouldn't use codepad as testing environment since you have no control over it. You have no idea what happens and whether your process is paused or not.
To be honest, there's little value in using an artificial test and either way this sounds like fairly pointless micro-optimisation unless you've specifically identified this as a problem area after profiling the necessary code.
As such, it probably makes sense to use which ever feels more appropriate. I'd personally plump for array_sum - that's what it's there for after all.
Change any instance of microtime() to microtime(true).
Also, after testing this, the results aren't that wildly different.
$s = microtime();
A call to microtime() with no arguments will return a string like this: 0.35250000 1300737802. You probably want this:
$s = microtime(TRUE);
array sum took 0.125188 seconds
sum numbers took 0.166603 seconds
These kind of tests need to be run a few thousand times so you can get large execution times that are not affected by tiny external factors.
You need much bigger runs, and need to average several of them. You should also separate the two tests into two files. The server has other things going on that will disturb test timing, hence the need to average many runs.
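A sketch of that harness idea; the benchmark() helper below is made up for illustration, averaging wall-clock time over several runs so one-off scheduler hiccups don't dominate the result:

```php
<?php
// Run a micro-benchmark several times and average the wall-clock time.
function benchmark(callable $fn, int $runs = 10): float {
    $total = 0.0;
    for ($r = 0; $r < $runs; $r++) {
        $start = microtime(true);
        $fn();
        $total += microtime(true) - $start;
    }
    return $total / $runs;   // mean seconds per run
}

$array = range(1, 10000);
printf("array_sum: %.6fs average\n",
       benchmark(function () use ($array) { return array_sum($array); }));
```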
array_sum() should be the faster of the two as there is no extra script parsing associated with it, but it's worth checking.
Almost always array_sum is faster. It depends more on your server php.ini configuration than actual usage between array_sum and foreach.
Coming to the point of this question, for a sample setup like following:
<?php
set_time_limit(0);
$s = microtime(TRUE);
$array = range(1, 10000);
$sum = 0;
for ($j = 0; $j < 1000; $j++) {
    $sum += array_sum($array);
}
$s1 = microtime(TRUE);
$diff = $s1 - $s;
echo "for 1000 pass, array_sum took {$diff} seconds. Result = {$sum}<br/>";
$sum = 0;
$s2 = microtime(TRUE);
for ($j = 0; $j < 1000; $j++) {
    foreach ($array as $val) {
        $sum += $val;
    }
}
$s3 = microtime(TRUE);
$diff = $s3 - $s2;
echo "for 1000 pass, foreach took {$diff} seconds. Result = {$sum}<br/>";
I got results where foreach was always slower. So that should answer your question. Sample:
for 1000 pass, array_sum took 0.2720000743866 seconds. Result = 50005000000
for 1000 pass, foreach took 1.7239999771118 seconds. Result = 50005000000

Is any solution the correct solution?

I always think to myself after solving a programming challenge that I have been tied up with for some time, "It works, that's good enough".
I don't think this is really the correct mindset, in my opinion, and I think I should always be trying to code with the greatest performance.
Anyway, with this said, I just tried a ProjectEuler question. Specifically question #2.
How could I have improved this solution? I feel like it's really verbose, like I'm passing the previous number in recursion.
<?php
/* Each new term in the Fibonacci sequence is generated by adding the previous two
   terms. By starting with 1 and 2, the first 10 terms will be:
   1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
   Find the sum of all the even-valued terms in the sequence which do not exceed
   four million.
*/
function fibonacci($number, $previous = 1) {
    global $answer;
    $fibonacci = $number + $previous;
    if ($fibonacci > 4000000) return;
    if ($fibonacci % 2 == 0) {
        $answer = is_numeric($answer) ? $answer + $fibonacci : $fibonacci;
    }
    return fibonacci($fibonacci, $number);
}
fibonacci(1);
echo $answer;
?>
Note this isn't homework. I left school hundreds of years ago. I am just feeling bored and going through the Project Euler questions
I always think to myself after solving a programming challenge that I have been tied up with for some time, "It works, thats good enough".
I don't think this is really the correct mindset, in my opinion and I think I should always be trying to code with the greatest performance.
One of the classic things presented in Code Complete is that programmers, given a goal, can create an "optimum" computer program using one of many metrics, but it's impossible to optimize for all of the parameters at once. Parameters such as:
Code Readabilty
Understandability of Code Output
Length of Code (lines)
Speed of Code Execution (performance)
Speed of writing code
Feel free to optimize for any one of these parameters, but keep in mind that optimizing for all of them at the same time can be an exercise in frustration, or result in an overdesigned system.
You should ask yourself: what are your goals? What is "good enough" in this situation? If you're just learning and want to make things more optimized, by all means go for it, just be aware that a perfect program takes infinite time to build, and time is valuable in and of itself.
You can avoid the mod 2 section by doing the operation three times (every third element is even), so that it reads:
$fibonacci = 3*$number + 2*$previous;
and the new input to fibonacci is ($fibonacci, 2*$number+$previous)
I'm not familiar with php, so this is just general algorithm advice, I don't know if it's the right syntax. It's practically the same operation, it just substitutes a few multiplications for moduluses and additions.
Also, make sure that you start with $number as even and the $previous as the odd one that precedes it in the sequence (you could start with $number as 2, $previous as 1, and have the sum also start at 2).
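In PHP, the suggested recurrence might look something like this sketch (starting from $number = 2, $previous = 1, as described above):

```php
<?php
// Every third Fibonacci term is even. Given the current even term
// ($number) and the odd term just before it ($previous), the next even
// term is 3*$number + 2*$previous, skipping the two odd terms between.
$number = 2;     // first even term
$previous = 1;   // odd term preceding it
$sum = 0;
while ($number <= 4000000) {
    $sum += $number;
    $next = 3 * $number + 2 * $previous;
    $previous = 2 * $number + $previous;   // odd term preceding $next
    $number = $next;
}
echo $sum;   // 4613732
```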
Forget about Fibonacci (Problem 2), i say just advance in Euler. Don't waste time finding the optimal code for every question.
If your answer achieves the One minute rule then you are good to try the next one. After few problems, things will get harder and you will be optimizing the code while you write to achieve that goal
Others on here have said it as well "This is part of the problem with example questions vs real business problems"
The answer to that question is very difficult to answer for a number of reasons:
Language plays a huge role. Some languages are much more suited to some problems, and so if you are faced with a mismatch you are going to find your solution "less than elegant"
It depends on how much time you have to solve the problem; the more time to solve the problem, the more likely it is you will come to a solution you like (though the reverse is occasionally true as well: too much time makes you overthink)
It depends on your level of satisfaction overall. I have worked on several projects where I thought parts were great and coded beautifully, and other parts were utter garbage, but they were outside of what I had time to address.
I guess the bottom line is if you think it's a good solution, and your customer/purchaser/team/etc. agree, then it's a good solution for the time. You might change your mind in the future but for now it's a good solution.
Use the guideline that the code to solve the problem shouldn't take more than about a minute to execute. That's the most important thing for Euler problems, IMO.
Beyond that, just make sure it's readable - make sure that you can easily see how the code works. This way, you can more easily see how things worked if you ever get a problem like one of the Euler problems you solved, which in turn lets you solve that problem more quickly - because you already know how you should solve it.
You can set other criteria for yourself, but I think that's going above and beyond the intention of Euler problems - to me, the context of the problems seem far more suitable for focusing on efficiency and readability than anything else
I didn't actually test this ... but there was something I personally would have attempted to address in this solution before calling it "done".
Avoiding globals as much as possible by implementing recursion with a sum argument
EDIT: Update according to nnythm's algorithm recommendation (cool!)
function fibonacci($number, $previous, $sum) {
    $fibonacci = 3 * $number + 2 * $previous;   // compute the next even term first
    if ($fibonacci > 4000000) { return $sum; }
    return fibonacci($fibonacci, 2 * $number + $previous, $sum + $fibonacci);
}
echo fibonacci(2, 1, 2);
[shrug]
A solution should be evaluated by the requirements. If all requirements are satisfied, then everything else is moxy. If all requirements are met, and you are personally dissatisfied with the solution, then perhaps the requirements need re-evaluation. That's about as far as you can take this meta-physical question, because we start getting into things like project management and business :S
Ahem, regarding your Euler-Project question, just my two-pence:
Consider refactoring to iterative, as opposed to recursive
Notice every third term in the series is even? No need to modulo once you are given your starting term
For example
public const ulong TermLimit = 4000000;

public static ulong CalculateSumOfEvenTermsTo (ulong termLimit)
{
    // sum!
    ulong sum = 0;
    // initial conditions
    ulong prevTerm = 1;
    ulong currTerm = 1;
    ulong swapTerm = 0;
    // unroll first even term, [odd + odd = even]
    swapTerm = currTerm + prevTerm;
    prevTerm = currTerm;
    currTerm = swapTerm;
    // begin iterative sum,
    for (; currTerm < termLimit;)
    {
        // we have ensured currTerm is even,
        // and loop condition ensures it is
        // less than limit
        sum += currTerm;
        // next odd term, [odd + even = odd]
        swapTerm = currTerm + prevTerm;
        prevTerm = currTerm;
        currTerm = swapTerm;
        // next odd term, [even + odd = odd]
        swapTerm = currTerm + prevTerm;
        prevTerm = currTerm;
        currTerm = swapTerm;
        // next even term, [odd + odd = even]
        swapTerm = currTerm + prevTerm;
        prevTerm = currTerm;
        currTerm = swapTerm;
    }
    return sum;
}
So, perhaps more lines of code, but [practically] guaranteed to be faster. An iterative approach is not as "elegant", but saves recursive method calls and saves stack space. Second, unrolling term generation [that is, explicitly expanding a loop] reduces the number of times you would have had to perform modulus operation and test "is even" conditional. Expanding also reduces the number of times your end conditional [if current term is less than limit] is evaluated.
Is it "better", no, it's just "another" solution.
Apologies for the C#, not familiar with php, but I am sure you could translate it fairly well.
Hope this helps, :)
Cheers
It is completely your choice, whether you are happy with a solution or whether you want to improve it further. There are many project Euler problems where a brute force solution would take too long, and where you will have to look for a clever algorithm.
Problem 2 doesn't require any optimisation. Your solution is already more than fast enough.
Still let me explain what kind of optimisation is possible. Often it helps to do some research on the subject. E.g. the wiki page on Fibonacci numbers contains this formula
fib(n) = (phi^n - (1-phi)^n)/sqrt(5)
where phi is the golden ratio. I.e.
phi = (sqrt(5)+1)/2.
If you use that fib(n) is approximately phi^n/sqrt(5) then you can find the index of the largest Fibonacci number smaller than M by
n = floor(log(M * sqrt(5)) / log(phi)).
E.g. for M=4000000, we get n=33, hence fib(33) is the largest Fibonacci number smaller than 4000000. It can be observed that fib(n) is even if n is a multiple of 3. Hence the sum of the even Fibonacci numbers is
fib(0) + fib(3) + fib(6) + ... + fib(3k)
To find a closed form we use the formula above from the wikipedia page and notice that
the sum is essentially just two geometric series. The math isn't completely trivial, but using these ideas it can be shown that
fib(0) + fib(3) + fib(6) + ... + fib(3k) = (fib(3k+2) - 1)/2.
Since fib(n) has size O(n), the straightforward solution has a complexity of O(n^2). Using the closed formula above together with a fast method to evaluate Fibonacci numbers has a complexity of O(n log(n)^(1+epsilon)). For small numbers either solution is of course fine.
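For completeness, a sketch of that closed form in PHP, using exact integer Fibonacci values rather than the floating-point phi formula (the fib() helper here is illustrative):

```php
<?php
// Sum of even Fibonacci terms up to fib(3k) equals (fib(3k+2) - 1)/2.
// With M = 4000000 we found n = 33, so k = 11 and the sum is
// (fib(35) - 1) / 2.
function fib(int $n): int {
    $a = 0; $b = 1;              // fib(0), fib(1)
    for ($i = 0; $i < $n; $i++) {
        [$a, $b] = [$b, $a + $b];
    }
    return $a;
}
echo (fib(35) - 1) / 2;          // 4613732
```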

Is wrapping code into a function that doesn't need to be, bad in PHP?

Many, many times on a page I will have to set POST and GET values in PHP like this.
I just want to know if it is better to just continue doing it the way I have above, or if performance would not be touched by moving it into a function like in the code below?
This would make it much easier to write code, but at the expense of making extra function calls on the page.
I have all the time in the world, so making the code as fast as possible is more important to me than making it "easier to write or faster to develop".
Appreciate any advice, and please nothing about whichever makes it easier to develop; I am talking pure performance here =)
<?php
function arg_p($name, $default = null) {
    return (isset($_GET[$name])) ? $_GET[$name] : $default;
}
$pagesize = arg_p('pagesize', 10);

$pagesize = (isset($_GET['pagesize'])) ? $_GET['pagesize'] : 10;
?>
If you have all the time in the world, why don't you just test it?
<?php
// How many iterations?
$iterations = 100000;

// Inline
$timer_start = microtime(TRUE);
for ($i = 0; $i < $iterations; $i++) {
    $pagesize = (isset($_GET['pagesize'])) ? $_GET['pagesize'] : 10;
}
$time_spent = microtime(TRUE) - $timer_start;
printf("Inline: %.3fs\n", $time_spent);

// By function call
function arg_p($name, $default = null) {
    return (isset($_GET[$name])) ? $_GET[$name] : $default;
}
$timer_start = microtime(TRUE);
for ($i = 0; $i < $iterations; $i++) {
    $pagesize = arg_p('pagesize', 10);
}
$time_spent = microtime(TRUE) - $timer_start;
printf("By function call: %.3fs\n", $time_spent);
?>
On my machine, this gives pretty clear results in favor of inline execution by a factor of almost 10. But you need a lot of iterations to really notice it.
(I would still use a function though, even if me answering this shows that I have time to waste ;)
Sure you'll probably get a performance benefit from not wrapping it into a function. But would it be noticeable? Not really.
Your time is worth more than the small amount of CPU resources you'd save.
I doubt the difference in speed would be noticeable unless you are doing it many hundreds of times.
A function call is a performance hit, but you should also think about maintainability: wrapping it in the function could ease future changes (and copy-paste is bad for that).
Whilst performance wouldn't really be affected, anything that takes code out of the HTML stream is a plus.
Even with a thousand calls to your arg_p() you wouldn't be able to measure, let alone notice, the difference in performance. The time you will spend typing the extra "inline" code, plus the time you will spend whenever you have to duplicate a change to every inlined copy, plus the added complexity and higher probability of a typo or random error, will cost you more than the unmeasurable performance improvement. In fact, that time could be spent on optimizing what really counts, such as improving your database design, profiling your code to find the areas that really affect the generation time, etc...
You'll be better off keeping your code clean. It will save you time that you can in turn invest into optimizing what really counts.
