If I were to say
echo $arr[some_index];
as opposed to saying
echo $arr['some_index'];
will there be a significant amount of processor time/power lost to the error notice? I am aware that it is not proper syntax, but there is a huge amount of code already written like this on a project I am working on.
Well, it's simple enough to check. You can measure the execution time of any statement(s) like so:
$start = microtime(true);
//Do your code. Try an echo of one kind here.
$end = microtime(true);
echo($end - $start); //The elapsed time, in seconds. Precise up to a microsecond.
Run one of those for each variant you'd like to test; whichever is consistently fastest is your answer.
You can also use memory_get_usage to determine how much memory has been used, before and after each call.
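For example, here is a minimal sketch combining both measurements; the array, key, and loop count are invented for illustration:
$arr = array('some_index' => 'value');

$memBefore = memory_get_usage();
$start = microtime(true);

for ($i = 0; $i < 100000; $i++) {
    $x = $arr['some_index']; // swap in $arr[some_index] to time the unquoted version
}

$end = microtime(true);

echo ($end - $start) . " seconds, " . (memory_get_usage() - $memBefore) . " bytes\n";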
Now, you should also be getting a large number of notices. If a constant isn't defined, PHP treats the bare word as a string instead, but throws a notice. Another problem is that if your key ever conflicts with a defined constant, you'll be checking the wrong value. It's really just not good practice. I'd go through and replace everything.
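To illustrate the collision risk, here is a contrived example (the constant and key names are invented, and this relies on pre-PHP 8 behavior, where a bare word still falls back to a string):
$arr = array('some_index' => 'expected', 'other_key' => 'surprise');

echo $arr[some_index]; // notice thrown; the bare word falls back to 'some_index' and prints 'expected'

define('some_index', 'other_key');

echo $arr[some_index]; // no notice now, but the constant resolves to 'other_key' and prints 'surprise'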
I think the performance impact would be negligible, however the purist in me would want to see consistent use of quotes/no quotes.
Related
Let's say I have $variable holding more than 500 KB of info.
while ($row = mysqli_fetch_assoc($selectFromTable))
{
    $variable .= "<p>$row[info]</p>";
}
or
while ($row = mysqli_fetch_assoc($selectFromTable))
{
    echo "<p>$row[info]</p>";
}
Optimization-wise, is it better to echo the info right away than to save it to a variable?
I can't decide, because I can't see the difference in performance and I don't know what tool to use to monitor the response time. Any suggestions?
Even if there isn't much difference in performance, I would still like to learn how to optimize my code.
There is no significant difference in speed or memory usage between the two pieces of code you listed. They both build a new string that contains the value of $row['info'] enclosed in a <p> HTML element.
You can pass each string as an individual argument to echo:
echo "<p>", $row['info'], "</p>";
This avoids the creation of a new string and uses less memory, and it runs slightly faster (the speed improvement is not significant unless you do it thousands of times in a loop).
Read about the echo language construct.
Also please note that $row[info] is not correct outside a double-quoted string; the correct way is $row['info']. The documentation explains why.
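For reference, the quoting rules differ inside and outside double-quoted strings; a quick sketch with an invented array:
$row = array('info' => 'hello');

echo $row['info'];              // correct outside a string
echo "<p>$row[info]</p>";       // inside double quotes, the unquoted key is the valid simple syntax
echo "<p>{$row['info']}</p>";   // the curly-brace syntax lets you keep the quotes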
You need to actually do something with the variable, instead of just saving data into it, for the first loop to be a fair comparison.
With your current setup, the loop that only stores into the variable will always be faster, because operations on IO (input/output) devices are slow, and echo, which prints output to the screen, is exactly that.
But if you add an echo after building the variable, then the loop with the single echo will obviously be faster.
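A rough way to measure this yourself, using the microtime() technique from earlier; the simulated rows are invented so the test can run without a database:
// simulate a result set so the comparison is self-contained
$rows = array_fill(0, 10000, array('info' => str_repeat('x', 50)));

$start = microtime(true);

$variable = '';
foreach ($rows as $row) {
    $variable .= "<p>$row[info]</p>"; // variant 1: buffer, then echo once
}
echo $variable;

echo PHP_EOL . (microtime(true) - $start) . ' seconds';
// for variant 2, move the echo inside the loop and drop $variable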
I got to wondering about the efficiency of this:
I have a CSV file with about 200 rows in it. I use a class to filter/break up the CSV and get the bits I want. It is cached daily.
I found that many descriptions (can be up to ~500 chars each) have a hanging word "Apply" and it needs chopping off.
Thinking that calling toString() on my object more than once would be bad practice, I created a temp var, $UJM_desc (this code is inside a loop):
// hanging 'Apply' at the end of `description` very often; cut it off
$UJM_desc = $description->toString();
$hanging = substr($UJM_desc, -5);
if ($hanging == "Apply")
    $UJM_desc = substr($UJM_desc, 0, -5);
$html .= '<p>' . $UJM_desc;
But I could have just called $description->toString() a couple of times. I am aware there is room to simplify this, maybe with a ternary, but still, I froze the moment and thought I'd ask:
Call a method twice or use a temp var? Which is best?
I'd just use regex to strip off the end:
$html .= '<p>' . preg_replace('/Apply$/', '', $description->toString());
That said, if $description->toString() gives the same output no matter where you use it, there's absolutely no reason to call it multiple times, and a temporary variable will be the most efficient.
There's also no reason to save $hanging to a variable, as you only use it once.
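If you would rather keep the explicit suffix check than reach for a regex, the temp-var version tidies up to this (closing the <p> here as well, assuming it isn't closed elsewhere):
$UJM_desc = $description->toString();
if (substr($UJM_desc, -5) == "Apply") {
    $UJM_desc = substr($UJM_desc, 0, -5);
}
$html .= '<p>' . $UJM_desc . '</p>';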
In general, it depends, and it's a tradeoff.
Keeping a calculated value in a variable takes up memory, and runs the risk of containing stale data.
Calculating the value anew might be slow, or expensive in some other way.
So it's a matter of deciding which resource is most important to you.
In this case, however, the temporary variable is so short-lived, it's definitely worth using.
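When the calculation is expensive and you want it to run at most once, a common middle ground is lazy caching; this sketch uses an invented class to show the shape of it, including the staleness risk:
class Description
{
    private $cached = null;

    public function toString()
    {
        if ($this->cached === null) {
            $this->cached = $this->buildString(); // computed once, reused afterwards
        }
        return $this->cached; // risk: stale if the underlying data changes later
    }

    private function buildString()
    {
        return 'stand-in for the expensive rendering work';
    }
}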
What is the benefit of using multiple steps to test variables:
$VarLength = strlen($message);
if ($VarLength > 10)
    echo "Over Ten";
...versus just pushing the whole process into one if statement:
if ( strlen($message) > 10 )
    echo "Over Ten";
I'm wondering if the benefits go beyond code style and the ability to re-use the result of (in the example above) strlen.
Your question is not really possible to answer technically, so this is more a comment than an answer.
Benefits beyond code-style and re-use of the result is when you change the code.
You might want to replace the strlen() function with some other function without having to edit the line containing the if clause while you do so, e.g. to prevent errors or side effects. That could be a benefit, though it still depends on code style somehow. Since you exclude coding style from your question, that makes it hard to answer, as that domain touches a lot of how you can/should/would/want/must write code.
If the result of a function will be used multiple times, it should be cached in a variable so as to obviate the need to waste resources to re-calculate its result.
If the function result won't be re-used, it can simply be a matter of code readability to clearly delineate what's happening by storing the function return value in a variable before using it in an if condition.
Also, in terms of readability, you should always use curly braces even when not mandated by PHP syntax rules, as @AlexHowansky mentions.
Most of it is code style. In terms of speed, it doesn't change much. If you are using $VarLength more than once, then you are saving the call to the function that obtains the length. But even then, the time difference is extremely minimal (I would even say unnoticeable).
But: when developing any application, you have to keep in mind that you might not be the only one making changes to it down the road, or you might not be as fresh and up to date with the exact program you are writing. Therefore, the cleaner the code, the easier it is to maintain, and THAT'S where you save a lot of time down the road.
Best Practice dictates that functions be called minimally. In your case the practice doesn't violate the rule, but it is not uncommon to find code like:
if ( strlen($message) > 100 )
    echo "Over Hundred";
else if ( strlen($message) > 20 )
    echo "Over Twenty";
else if ( strlen($message) > 10 )
    echo "Over Ten";
...
A common prevention is to always assign function results to a variable for consistency.
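Applying that to the chain above, strlen() runs once and the branches stay readable (keeping $message from the question):
$length = strlen($message);
if ($length > 100) {
    echo "Over Hundred";
} else if ($length > 20) {
    echo "Over Twenty";
} else if ($length > 10) {
    echo "Over Ten";
}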
I wouldn't say there is any benefit apart from the re-use case you've already mentioned. Your latter case is more readable, probably faster, and probably less memory-intensive. I would however strongly recommend always using braces, even when your conditional is only one line:
if (condition) {
statement;
}
I often use the function sizeof($var) in my web application, and I'd like to know whether it is better (in terms of resources) to store this value in a new variable and use that, or to call the function every time; or maybe it makes no difference. :)
TLDR: it's better to set a variable, calling sizeof() only once. (IMO)
I ran some tests on the looping aspect of this small array:
$myArray = array("bill", "dave", "alex", "tom", "fred", "smith", "etc", "etc", "etc");
// A)
for ($i = 0; $i < 10000; $i++) {
    echo sizeof($myArray);
}
// B)
$sizeof = sizeof($myArray);
for ($i = 0; $i < 10000; $i++) {
    echo $sizeof;
}
With an array of 9 items:
A) took 0.0085 seconds
B) took 0.0049 seconds
With an array of 180 items:
A) took 0.0078 seconds
B) took 0.0043 seconds
With an array of 3600 items:
A) took 0.5-0.6 seconds
B) took 0.35-0.5 seconds
Although there isn't much of a difference, you can see that as the array grows, so does the gap. I think this has made me re-think my opinion, and say that from now on, I'll be setting the variable pre-loop.
Storing a PHP integer takes 68 bytes of memory. This is a small enough amount, that I think I'd rather worry about processing time than memory space.
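For reference, the timings above can be taken with the same microtime() pattern used earlier in this thread; this harness is a reconstruction, not the original test script:
$myArray = array_fill(0, 180, "name"); // vary the size to reproduce each test

$start = microtime(true);

$sizeof = sizeof($myArray);   // variant B; for variant A, remove this line
for ($i = 0; $i < 10000; $i++) {
    echo $sizeof;             // variant A uses: echo sizeof($myArray);
}

echo PHP_EOL . (microtime(true) - $start) . ' seconds';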
In general, it is preferable to assign the result of a function you are likely to repeat to a variable.
In the example you suggested, the difference in processing cost between this approach and the alternative (repeatedly calling the function) would be insignificant. However, where the function in question is more complex, it would be better to avoid executing it repeatedly.
For example:
for ($i = 0; $i < 10000; $i++) {
    echo date('Y-m-d');
}
Executes in 0.225273 seconds on my server, while:
$date = date('Y-m-d');
for ($i = 0; $i < 10000; $i++) {
    echo $date;
}
executes in 0.134742 seconds. I know these snippets aren't quite equivalent, but you get the idea. Over many page loads by many users over many months or years, even a difference of this size can be significant. If we were to use some complex function, serious scalability issues could be introduced.
A main advantage of not assigning a return value to a variable is that you need one less line of code. In PHP, we can commonly do our assignment at the same time as invoking our function:
$sql = "SELECT...";
if(!$query = mysql_query($sql))...
...although this is sometimes discouraged for readability reasons.
In my view, for the sake of consistency, assigning return values to variables is broadly the better approach, even when performing simple functions.
If you are calling the function over and over, it is probably best to keep this info in a variable. That way the server doesn't have to keep processing the answer, it just looks it up. If the result is likely to change, however, it will be best to keep running the function.
Since you allocate a new variable, this will take a tiny bit more memory, but it might make your code a tiny bit faster.
The trouble it brings, though, could be big. For example, if you include another file that applies the same trick, and both store the size in a variable named $sizeof, bad things might happen: strange bugs that appear when you don't expect them, or forgetting to add global $sizeof in your function.
There are so many possible bugs you would introduce, and for what? Since the speed gain is likely not measurable, I don't think it's worth it.
Unless you are calling this function a million times your "performance boost" will be negligible.
I do not think that it really matters. In a sense, you do not want to perform the same thing over and over again, but considering that it is sizeof(), unless it is an enormous array you should be fine either way.
I think, you should avoid constructs like:
for ($i = 0; $i < sizeof($array); $i += 1) {
    // do stuff
}
Here, sizeof() will be executed on every iteration, even though the size is often unlikely to change.
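Hoisting the call out of the loop condition avoids the repeated work; a minimal sketch:
$count = sizeof($array);
for ($i = 0; $i < $count; $i += 1) {
    // do stuff
}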
Whereas in constructs like this:
while(sizeof($array) > 0) {
if ($someCondition) {
$entry = array_pop($array);
}
}
You often have no choice but to calculate it every iteration.
In a PHP program, I sequentially read a bunch of files (with file_get_contents), gzdecode them, json_decode the result, analyze the contents, throw most of it away, and store about 1% in an array.
Unfortunately, with each iteration (I traverse over an array containing the filenames), there seems to be some memory lost (according to memory_get_peak_usage, about 2-10 MB each time). I have double- and triple-checked my code; I am not storing unneeded data in the loop (and the needed data hardly exceeds about 10 MB overall), but I am frequently overwriting data (specifically, strings in an array). Apparently, PHP does not free the memory correctly, and thus uses more and more RAM until it hits the limit.
Is there any way to do a forced garbage collection? Or, at least, to find out where the memory is used?
It has to do with memory fragmentation.
Consider two strings, concatenated to one string. Each original must remain until the output is created. The output is longer than either input.
Therefore, a new allocation must be made to store the result of such a concatenation. The original strings are freed, but they are small blocks of memory.
In the case of 'str1' . 'str2' . 'str3' . 'str4', several temporaries are created at each `.`, and none of them fit in the space that's been freed up. The strings are likely not laid out in contiguous memory (that is, each string is, but the various strings are not laid end to end) due to other uses of the memory. So freeing a string creates a problem, because the space can't be reused effectively. You grow with each temporary you create, and you never re-use anything.
Using the array-based implode(), you create only one output, exactly the length you require, performing only one additional allocation. So it's much more memory-efficient, and it doesn't suffer from concatenation fragmentation. The same is true of Python. If you need to concatenate strings, more than one concatenation should always be array-based:
''.join(['str1','str2','str3'])
in Python
implode('', array('str1', 'str2', 'str3'))
in PHP
sprintf equivalents are also fine.
The memory reported by memory_get_peak_usage() is basically always the "last" bit of memory in the virtual map that it had to use. Since that is always growing, it reports rapid growth, as each allocation falls "at the end" of the currently used memory block.
In PHP >= 5.3.0, you can call gc_collect_cycles() to force a GC pass.
Note: you need to have zend.enable_gc enabled in your php.ini, or call gc_enable() to activate the circular reference collector.
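A minimal sketch of forcing a pass; the echo is just for inspection:
gc_enable(); // make sure the circular reference collector is active

// ... process one file here ...

$collected = gc_collect_cycles();
echo "collected $collected cycles; peak usage: " . memory_get_peak_usage() . " bytes\n";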
Found the solution: it was string concatenation. I was generating the input line by line by concatenating some variables (the output is a CSV file). However, PHP seems not to free the memory used for the old copy of the string, effectively clobbering RAM with unused data. Switching to an array-based approach (and imploding the array with commas just before fputs-ing it to the outfile) circumvented this behavior.
For some reason, not obvious to me, PHP reported the increased memory usage during json_decode calls, which misled me into assuming that the json_decode function was the problem.
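In code, the switch looks roughly like this; the column variables and the $outfile handle are invented, since the original isn't shown:
// before: repeated concatenation leaves fragmented temporaries behind
// $line = $col1 . ',' . $col2 . ',' . $col3 . "\n";

// after: collect the fields in an array and join them once
$fields = array($col1, $col2, $col3);
fputs($outfile, implode(',', $fields) . "\n");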
There's a way.
I had this problem one day. I was writing from a DB query into CSV files: always allocating one $row, then reassigning it in the next step. I kept running out of memory. Unsetting $row didn't help; putting a 5 MB string into $row first (to avoid fragmentation) didn't help; creating an array of $row-s (loading many rows into it, plus unsetting the whole thing on every 5000th step) didn't help. But it was not the end, to quote a classic.
When I made a separate function that opened the file, transferred 100,000 lines (just enough not to eat up the whole memory) and closed the file, then made subsequent calls to this function (appending to the existing file), I found that on every function exit, PHP removed the garbage. It was a local-variable-space thing.
TL;DR
When a function exits, it frees all local variables.
If you do the job in smaller portions, like 0 to 1000 in the first function call, then 1001 to 2000 and so on, then every time the function returns, your memory will be regained. Garbage collection is very likely to happen on return from a function. (If it's a relatively slow function eating a lot of memory, we can safely assume it always happens.)
Side note: for reference-passed variables it will obviously not work; a function can only free its inside variables that would be lost anyway on return.
I hope this saves your day as it saved mine!
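A sketch of that pattern; the file name and chunk size are placeholders, and $result is assumed to be a mysqli result:
function export_chunk($result, $path, $limit)
{
    $fh = fopen($path, 'a'); // append to the existing file
    $written = 0;
    while ($written < $limit && ($row = mysqli_fetch_assoc($result))) {
        fputs($fh, implode(',', $row) . "\n");
        $written++;
    }
    fclose($fh);
    return $written; // all locals are freed when we return
}

// keep calling until a chunk comes back short of the limit
while (export_chunk($result, 'out.csv', 100000) === 100000) {
    // memory is reclaimed after each call returns
}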
I've found that PHP's internal memory manager is most likely to be invoked upon completion of a function. Knowing that, I've refactored code in a loop like so:
while (condition) {
// do
// cool
// stuff
}
to
while (condition) {
do_cool_stuff();
}
function do_cool_stuff() {
// do
// cool
// stuff
}
EDIT
I ran this quick benchmark and did not see an increase in memory usage. This leads me to believe the leak is not in json_decode()
for ($x = 0; $x < 10000000; $x++)
{
    do_something_cool();
}

function do_something_cool() {
    $json = '{"a":1,"b":2,"c":3,"d":4,"e":5}';
    $result = json_decode($json);
    echo memory_get_peak_usage() . PHP_EOL;
}
I was going to say that I wouldn't necessarily expect gc_collect_cycles() to solve the problem, since presumably the files are no longer mapped to zvals. But did you check that gc_enable() was called before loading any files?
I've noticed that PHP seems to gobble up memory when doing includes, much more than is required for the source and the tokenized file, so this may be a similar problem. I'm not saying that this is a bug though.
I believe one workaround would be not to use file_get_contents(), but rather fopen(), fgets() and fclose(), rather than mapping the whole file into memory in one go. But you'd need to try it to confirm.
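Something like this sketch; error handling omitted:
$fh = fopen($filename, 'rb');
while (($line = fgets($fh)) !== false) {
    // handle one line at a time instead of holding the whole file in memory
}
fclose($fh);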
HTH
C.
Call memory_get_peak_usage() after each statement, and ensure you unset() everything you can. If you are iterating with foreach(), use a reference variable to avoid making a copy of the original array:
foreach ($x as &$y)
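Fleshed out, with the post-loop unset() that the manual recommends (keeping the variable names from above):
foreach ($x as &$y) {
    // work on $y in place; no per-element copy is made
}
unset($y); // break the reference still pointing at the last element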
If PHP is actually leaking memory, a forced garbage collection won't make any difference.
There's a good article on PHP memory leaks and their detection at IBM
There was recently a similar issue with System_Daemon. Today I isolated my problem to file_get_contents.
Could you try using fread() instead? I think this may solve your problem.
If it does, it's probably time to file a bug report over at PHP.