I am writing some PHP code to create PDFs using the FPDF library, and I basically use the same 4 lines of code to print every line of the document. I was wondering which is more efficient: repeating these 4 lines over and over, or making them into a function? I'm curious because it feels like a function would have a larger overhead, since the function would only be 4 lines long.
The code I am questioning looks like this:
$pdf->checkIfPageBreakNeeded($lineheight * 2, true);
$text = ' label';
$pdf->MultiCell(0, $lineheight, $text, 1, 'L', 1);
$text = $valueFromForm;
$pdf->MultiCell(0, $lineheight, $text, 1, 'L');
$pdf->Ln();
This should answer it:
http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
and
http://www.codinghorror.com/blog/2007/03/curlys-law-do-one-thing.html
Curly's Law, Do One Thing, is reflected in several core principles of modern software development:

Don't Repeat Yourself
If you have more than one way to express the same thing, at some point the two or three different representations will most likely fall out of step with each other. Even if they don't, you're guaranteeing yourself the headache of maintaining them in parallel whenever a change occurs. And change will occur. Don't repeat yourself is important if you want flexible and maintainable software.

Once and Only Once
Each and every declaration of behavior should occur once, and only once. This is one of the main goals, if not the main goal, when refactoring code. The design goal is to eliminate duplicated declarations of behavior, typically by merging them or replacing multiple similar implementations with a unifying abstraction.

Single Point of Truth
Repetition leads to inconsistency and code that is subtly broken, because you changed only some repetitions when you needed to change all of them. Often, it also means that you haven't properly thought through the organization of your code. Any time you see duplicate code, that's a danger sign. Complexity is a cost; don't pay it twice.
Rather than asking yourself which is more efficient, you should instead ask yourself which is more maintainable.
Writing a function is far more maintainable.
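For instance, something along these lines would do. This is only a sketch; printLabelledRow() is an illustrative name, and checkIfPageBreakNeeded() is assumed to be your own addition to FPDF, as in the question:
function printLabelledRow($pdf, $lineheight, $label, $value)
{
    // Wraps the four repeated lines from the question in one place.
    $pdf->checkIfPageBreakNeeded($lineheight * 2, true);
    $pdf->MultiCell(0, $lineheight, $label, 1, 'L', 1);
    $pdf->MultiCell(0, $lineheight, $value, 1, 'L');
    $pdf->Ln();
}

// Each document line then becomes a single call:
printLabelledRow($pdf, $lineheight, ' label', $valueFromForm);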
I'm curious because it feels like a function would have a larger overhead because the function would only be 4 lines long.
This is where spaghetti comes from.
Definitely encapsulate it into a function and call it. The overhead that you fear is the worst kind of premature optimization.
DRY - Don't Repeat Yourself.
Make it a function. Function call overhead is pretty small these days. In general you'll be able to save far more time by finding better high-level algorithms than fiddling with such low-level details. And making and keeping it correct is far easier with such a function. For what shall it profit a man, if he shall gain a little speed, and lose his program's correctness?
A function is certainly preferable, especially if you have to go back later to make a change.
Don't worry about overhead; worry about yourself, a year in the future, trying to debug this.
In the light of the above, Don't Repeat Yourself and make a tiny function.
In addition to all the valuable answers about the far more important topic of maintainability, I'd like to add a little something on the question of overhead.
I don't understand why you fear that a four line function would have a greater overhead.
In a compiled language, a good compiler would probably be able to inline it anyway, if appropriate.
In an interpreted language (such as PHP) the interpreter has to parse all of this repeated code each time it is encountered, at runtime. To me, that suggests that repetition might carry an even greater overhead than a function call.
Worrying about function call overhead here is ghastly premature optimisation. In matters like this, the only way to really know which is faster is to profile it.
Make it work, make it right, make it fast. In that order.
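If you do decide to profile it, a rough timing harness like this is enough; this is only a sketch, and doWork() stands in for whichever variant you want to measure:
// microtime(true) returns the current Unix timestamp as a float, in seconds.
$start = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    doWork(); // the function-call version, or paste the inlined lines here instead
}
printf("%.6f seconds\n", microtime(true) - $start);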
The overhead is actually very small and won't make a noticeable difference in your application.
Would you rather accept that small overhead and have a program that is easier to maintain, or save a mere millisecond but take hours to correct small changes that are repeated all over the place?
If you ask me or any other developer out there, we'd definitely want the first option.
So go with the function. You may not be maintaining the code today, but when you do, you will hate yourself for trying to save those mere milliseconds.
I mean, things like doing:
$length = count($someArray);
for ($i = 0; $i < $length; $i++) {
    // Stuff
}
rather than:
for ($i = 0; $i < count($someArray); $i++) {
    // Stuff
}
so that it doesn't have to calculate the length of the array every time it loops.
Does anyone else have any tips like these that are pretty simple concepts but improve performance?
Consider having a look here:
http://tipsandtricks.runicsoft.com/General/Performance.html
I think that covers almost all of the generic improvement tips.
After that, pay special attention to point 5: know your language, as many optimizations can be achieved in a language-dependent manner.
The best tip I can think of:
Don't
At least not until you understand the actual performance characteristics of your program!
The thing is, 99% of compilers are going to make that optimization for you anyway. And even if they didn't, the performance gain from that example alone is going to be completely unnoticeable on most platforms in most situations.
Write code that makes the most sense, expresses what is going on, and uses good algorithms first. If and only if you have performance issues should you go back and investigate why afterwards.
Three simple things:
1.) Do less, less often
2.) Allocate less memory, less often
3.) Think hard about your algorithms and database requests and layout
Regarding 1.): Just like what you did here with calling count() less often, this can be done on a larger scale too. For instance, why request more than 20 rows from a database when you can only show 15 at a time if the user does not scroll?
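As a hypothetical sketch of that idea with PDO (the table and column names are made up):
// Fetch only the 15 rows the page can actually display, instead of the whole table.
$stmt = $pdo->prepare('SELECT id, title FROM articles ORDER BY id DESC LIMIT 15');
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);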
Regarding 2.):
// Allocate here what you will need, eventually reusing or reinitializing it
$length = count($someArray);
for ($i = 0; $i < $length; $i++) {
    // Only allocate here what cannot be avoided by reusing
}
Regarding 3.):
There are slow and fast algorithms, if you handle something in a loop, make sure you understand the implications. For a database, make sure you have the right structure for your specific needs (i.e. relational)
But there is one major rule in professional development:
Premature optimization is the root of all evil -- Donald Knuth
Yes, programming cleanly and avoiding the worst performance issues up front is good. But understandability, readability, and maintainability always come first, along with the "you ain't gonna need it" principle. If your code is readable and well structured, there is no lag, and neither users, server admins, nor a profiler tool report that weird things are going on, do not "optimize". Without profiling and a known cause, you never know whether you are really fixing the bottleneck or wasting time and money in the wrong place, and in the worst case you mess up perfectly understandable code for nothing.
Here is a nice discussion on that topic: http://c2.com/cgi/wiki?PrematureOptimization
I know this question has been asked before but I haven't been able to find a definitive answer.
Does overuse of the echo statement slow down end-user load times?
Having more echo statements in the file increases the file size, so I assume this would be a factor. Correct me if I'm wrong.
I know after some research that using PHP's ob_start() function along with raising Apache's SendBufferSize can help decrease load times, but from what I understand this is more of a decrease in PHP execution time by allowing PHP to finish/exit sooner, which in turn allows Apache to exit sooner.
With that being said, PHP does exit sooner, but does that mean PHP actually took less time to execute and in turn speeds things up on the end-user side?
To be clear: if I had two files with the same content, and one used an echo statement for every HTML tag while the other used the standard method of breaking in and out of PHP, then aside from the difference in file size from the "overuse" of echo (within reason, I'm guessing?), which one would be faster? Or would there really not be any difference?
Maybe I'm going about this or looking at this wrong?
Edit: I have done a bit of checking around and found a way to create a stopwatch to check the execution time of a script, and it seems to work quite well. If anybody is interested in doing the same, here is the link to the method I have chosen to use for now.
http://www.phpjabbers.com/measuring-php-page-load-time-php17.html
Does overuse of the echo statement slow down end-user load times?
No.
Having more echo statements in the file increases the file size, so I assume this would be a factor. Correct me if I'm wrong.
You are wrong.
does that mean PHP actually took less time to execute and in turn speeds things up on the end-user side?
No.
Or would there really not be any difference?
Yes.
Maybe I'm going about this or looking at this wrong?
Definitely.
There is a common problem with performance-related questions: most of them come up not from real needs but out of imagination, while one should only solve real problems, not imaginary ones.
This is not an issue.
You are overthinking things.
This is an old question, but the problem with the logic presented here is that it assumes "more commands equals slower performance", when in terms of modern programming and modern systems this is an utterly irrelevant issue. These concerns only matter to someone who, for some reason, programs at an extremely low level in something like assembler.
The reason is that there might be a slowdown, but nothing anyone would ever be able to perceive: a slowdown of such a small fraction of a second that any effort you make to optimize that code would not result in anything worth anything.
That said, speed and performance should always be a concern when programming, but not in terms of how many times you use a given command.
As someone who uses PHP with echo statements, I would recommend that you organize your code for readability. A pile of echo statements is simply hard to read and edit. Depending on your needs you should concatenate the contents of those echo statements into a string that you then echo later on.
Or, a nice technique I use is to create an array of values I need to echo and then run echo implode('', $some_array); instead.
The benefit of an array over string concatenation is that it's naturally easier to understand that $some_array[] = 'Hello!'; adds a new element to that array, whereas something like $some_string .= 'Hello!'; might seem simple but can be confusing to debug when you have tons of concatenation happening.
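A minimal sketch of that approach; $items and the markup are made up for illustration:
$parts = array();
foreach ($items as $item) {
    $parts[] = '<li>' . htmlspecialchars($item) . '</li>';
}
// One echo at the end instead of one per item.
echo '<ul>' . implode('', $parts) . '</ul>';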
But at the end of the day, clean code that is easy to read is more important to everyone involved than shaving fractions of a second off a process. If you are a modern programmer, write with an eye towards readability as a first draft and then, if necessary, think about optimizing that code.
Do not worry about having 10 or 100 calls to echo; when optimizing, they shouldn't even be taken into consideration.
Bear in mind that on a normal server a simple echo call runs in less than 1/100,000 of a second.
Always worry more about code readability and maintainability than about those X extra echo calls.
I didn't run any benchmarks. All I can say is that when you echo strings (HTML or not) using double quotes (") it's slower than single quotes (').
With double-quoted strings PHP has to parse the string, because you can put variables inside it simply by writing them into the string:
echo "you're $age years old!";
PHP has to parse your string to look up those variables and replace them automatically. When you're sure you don't have any variables inside your string, use single quotes.
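If you do need the variable, concatenation with single-quoted strings is the usual alternative:
// Same output as the double-quoted version, without string interpolation
echo 'you\'re ' . $age . ' years old!';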
Hope this helps.
Even if you use a bunch of echo calls, I don't think it would slow down your loading time. Loading time depends on your server's response time and on execution time. If your loading time is too high for the given task, check the whole code, not only the possibility that echoes are slowing down your server; more likely something else is wrong inside your code.
I'm creating a web application that does some very heavy floating point arithmetic calculations, and lots of them! I've been reading a lot and have read that you can write C (and C++) functions and call them from within PHP, and I was wondering if I'd notice a speed increase by doing so?
I would like to do it this way even if it's only a second difference, unless it's actually slower.
It all depends on the actual number of calculations you are doing. If you have thousands of calculations to do then it will certainly be worthwhile to write an extension to handle it for you. In particular, if you have a lot of data this is where PHP really fails: its memory manager can't handle a lot of objects or large arrays (based on experience working with such data).
If the algorithm isn't too difficult you may wish to write it in PHP first anyway. This gives you a good reference speed but more importantly it'll help define exactly what API you need to implement in a module.
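As a sketch of what such a plain-PHP reference version might look like (the function name and the formula are placeholders for the real calculation):
function computeScore(array $inputs)
{
    // Placeholder arithmetic standing in for the real heavy floating-point work.
    $sum = 0.0;
    foreach ($inputs as $x) {
        $sum += sqrt($x) * log($x + 1.0);
    }
    return $sum;
}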
Update regarding "75-100 calculations with 6 numbers":
If you are doing this only once per page load I suspect it won't be a significant part of the overall load time (it depends on what else you do, of course). If you are calling this function many times then yes, even 75 ops might be slow; however, since you use only 6 variables, perhaps the optimizer will do a good job (whereas with 100 variables it's pretty much guaranteed not to).
Check out SWIG.
SWIG is a way to make PHP (and other language) modules from your C sources rather easily.
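Once the module is built and loaded, the PHP side is just an ordinary function call; heavy_calc() below is a made-up name standing in for whatever the extension would export:
if (function_exists('heavy_calc')) {
    $result = heavy_calc($inputs);   // compiled C implementation from the extension
} else {
    $result = array_sum($inputs);    // plain-PHP fallback, purely illustrative
}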
So you have the option to structure an array as you please, knowing that a few lines further down in your code you are going to need to check for the existence of a value in that array. The way I see it, you have at least two options:
$values_array = array(
    'my_val',
    'my_val2',
    'and_so_on',
);

if (in_array('my_val', $values_array)) {
    var_dump('Its there!');
}
Or you could use an associative array and use the keys to contain your value:
$values_array = array(
    'my_val' => '',
    'my_val2' => '',
    'and_so_on' => '',
);

if (isset($values_array['my_val'])) {
    var_dump('Its there!');
}
Which method would you pick and why? Would you be solely aiming to reduce process time or also minimise the amount of memory used?
Perhaps you wouldn't use my two puny methods and have another awesome way to solve this simple problem.
This is a theoretical question with no real world application in mind, but there could be thousands of options in the array. It is a speculative question really to see which method is considered better by everyone. Whether it be considered so for readability, speed or memory usage reasons.
Really? Use the variant that better fits your and your application's needs. Especially with so few elements, it is far below the threshold of measurement.
But there is a real semantic difference between the two. The first one defines a list, the second one defines a map. If $array should represent a list, use the first one; if it should represent a map, use the second one (obvious, huh? ;)).
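If you start from the list form but need map-style lookups later, one common way to get the second form is array_flip(); a small sketch:
$values_array = array('my_val', 'my_val2', 'and_so_on');

// Turn the list into a map: values become keys, so isset() lookups are cheap.
$lookup = array_flip($values_array);

if (isset($lookup['my_val'])) {
    var_dump('Its there!');
}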
In any case: never let such micro-optimization approaches influence your application design.
Code readability and maintainability always trump optimisation.
The developer time wasted trying to untangle code that is deliberately obtuse just to save a few microseconds generally outweighs the value of those saved microseconds.
If you're writing something where execution speed really makes enough of a difference to care about this sort of thing, then PHP (or indeed any interpreted language) is probably the wrong language. And if you are trying to optimise PHP code, there's almost certainly better places to start than this.
Well, I tend towards the second one, as I have a feeling it is more optimal, and I use it quite often.
However, if it became a bottleneck, I'd measure the alternatives.
Anyway, I would never think of these things while my arrays stay relatively small, up to several hundred items.
And I would try to avoid such searches on heavy arrays at any cost anyway.
I've always been curious about this. Between the two code examples, which is more processor-efficient? I understand that various languages may differ, and I need to know the answer for PHP, but I am also curious about JavaScript, CFScript, and ActionScript if the answer would be different.
Thanks
PS: The examples may not be exact for a particular language, but I'm sure you will get the point.
Example 1:
if (myVar < 1) {
    return; // returned with nothing
} else {
    // do something
}
Example 2:
if (myVar < 1) {
    // left blank
} else {
    // do something
}
EDIT:
Guys, you're right, this would probably be completely unnecessary; I am asking this out of curiosity more than anything. I've been doing web development for more than a decade and it's all pretty much been self-taught. I've seen a lot of code using both methods from "trained professionals" and was wondering if this was a personal preference or whether one group of people knew something the other didn't.
Also, these are pseudo-code examples. I'm asking about multiple languages where the details would be slightly different, and the test is a simple one just to make sure you know what I'm asking about. It should be assumed that these examples would be inside a function.
If you're looking for a performance problem in your code, this is not it. If you haven't benchmarked your performance, then you don't have a performance problem (yet).
Use the method that best expresses your intentions. Anything else is baseless micro-optimisation.
Update: Assume that the people writing the compiler are smarter than you are (this isn't always true, but it works most of the time). The compiler will look at the code you've written, and figure out the best way to represent that in object code (whether that is machine code or bytecode or some other kind of intermediate representation). So if you've got two chunks of code that mean exactly the same thing, chances are good that the compiler will generate the same output in both cases.
As long as there are no statements after the end of the if..else block, I believe they are identical in most languages.
That being said, I tend to favor using as few return statements as possible as I find it makes my code more readable and easier to debug. I doubt it's worth your time to concern yourself with efficiency in this case - focus on keeping your code as clean as possible.
Leaving the return out is faster.
Test This
Setup
function a(){if(true) return; else ;}
Code Under Test
a();
Tear Down
Output
Ran in 0.042321920394897 seconds 0.00018978118896484 over
And This
Setup
function b(){if(true) ; else ;}
Code Under Test
b();
Tear Down
Output
Ran in 0.042132139205933 seconds
Tested for 100,000 repetitions.
I would suggest using "return null;", but not for speed reasons; simply because you can be sure the function stops running when you expect it to. For example, it's possible you or somebody else will modify this function, add code outside the if statement, and not realize the code shouldn't have run under certain conditions.
It's generally a good idea to force things to happen the way you expect; don't let it work by coincidence. Mind you, I'm leaning a bit more toward theory than practicality here, but it's always better to be safe than sorry.
It depends on what is happening and what happens most often in that part of the code, in order to get the most efficiency from it.
In the first example, you don't need the else.
They should compile equivalently, if the compiler is decent.
Sometimes a deeply nested set of ifs can be converted to an unnested sequence of ifs, or ifs connected with ANDs can be converted into a straight sequence of ifs, each of which does a return in the positive case. This can improve readability.
if (a and b and c and d and e and f)
    do_something()
can be converted to:
if (!a)
    return;
if (!b)
    return;
if (!c)
    return;
if (!d)
    return;
if (!e)
    return;
if (!f)
    return;
do_something();
In my trivial example, the first way is probably easier to read, but when a-f are each fairly complicated expressions, the 2nd way can be easier to read, result in less indentation, etc.
http://jsperf.com/empty-statement-vs-return
But really, by the time you're getting to tens of millions of executions per second, it's really premature optimization. It would be a lot more effective just to reduce things like long loops and excessive DOM operations.