I don't believe this is a duplicate; I've searched for one, but I really had no idea what to call this.
I want to know why a loop that is ten times larger than another loop doesn't take ten times longer to run.
I was doing some testing to try to figure out how to make my website faster and more responsive, so I was calling microtime() before and after functions. On my website, I'm not sure how to pull out lists of table rows with certain attributes without going through the entire table, and I wanted to know whether this was what was slowing me down.
So using the following loop:
echo microtime(), "<br>";
echo microtime(), "<br>";
session_start();
$connection = mysqli_connect("localhost", "root", "", "") or die(mysqli_connection_error());;
echo microtime(), "<br>";
echo microtime(), "<br>";
$x=1000;
$messagequery = mysqli_query($connection, "SELECT * FROM users WHERE ID='$x'");
while (!$messagequery or mysqli_num_rows($messagequery) == 0) {
    echo('a');
    $x--;
    $messagequery = mysqli_query($connection, "SELECT * FROM users WHERE ID='$x'");
}
echo "<br>";
echo microtime(), "<br>";
echo microtime(), "<br>";
I got the following output and similar outputs:
0.14463300 1376367329
0.14464400 1376367329
0.15548900 1376367330
0.15550000 1376367330 < these two
[a's omitted, for readability]
0.33229800 1376367330 < these two
0.33230700 1376367330
~18-20 microseconds, which is not that bad; nobody will notice that. So I wondered what would happen as my website grew. What would happen if I had 10 times as many (10,000) table rows to search through?
0.11086600 1376367692
0.11087600 1376367692
0.11582100 1376367693
0.11583600 1376367693
[lots of a's]
0.96294500 1376367694
0.96295500 1376367694
~83-88 microseconds. Why isn't it 180-200 microseconds? Does it take time to start and stop a loop or something?
UPDATE: To see whether it was the MySQL queries that were adding the time, I tested it without MySQL in the loop:
echo microtime(), "<br>";
echo microtime(), "<br>";
session_start();
$connection = mysqli_connect("localhost", "root", "W2072a", "triiline1") or die(mysqli_connection_error());;
echo microtime(), "<br>";
echo microtime(), "<br>";
$x=1000000;
while ($x > 10) {
    echo('a');
    $x--;
}
echo "<br>";
echo microtime(), "<br>";
echo microtime(), "<br>";
Now it appears that at one million iterations it takes ~100 milliseconds (right?) and at ten million it takes ~480 milliseconds. So my question still stands: why do larger loops move more quickly? It's not important, and I'm not planning my entire website design around this, but I am interested.
Normally, loops will scale linearly.
Possible bug: If you haven't already done so, consider what might happen if there was no record with id 900.
I would strongly recommend using MySQL to do your filtering for you via WHERE clauses rather than sifting through the data this way; looping over queries like this isn't really scalable (see the sketch below).
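As a rough sketch of what that could look like here (assuming the loop's intent is to find the row with the highest ID at or below $x, and assuming the mysqlnd driver so mysqli_stmt_get_result() is available):

$stmt = mysqli_prepare($connection, "SELECT * FROM users WHERE ID <= ? ORDER BY ID DESC LIMIT 1");
mysqli_stmt_bind_param($stmt, "i", $x);  // bind $x as an integer instead of interpolating it into the SQL
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);
$row = mysqli_fetch_assoc($result);      // the nearest existing row, or NULL if no ID <= $x exists

One query replaces up to a thousand, and the database does the searching with its index instead of PHP.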
Frankly, the line
while(!$messagequery or mysqli_num_rows($messagequery) == 0) {
doesn't make sense to me. $messagequery will be false if a failure occurs, and you want the loop to run as long as mysqli_num_rows($messagequery) is NOT equal to zero, I think. However, that's not what the above code does.
If mysqli_num_rows($messagequery) is equal to zero, the loop will continue.
If mysqli_num_rows($messagequery) is NOT equal to zero, the loop will stop.
See operator precedence: http://php.net/manual/en/language.operators.precedence.php
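As a tiny illustration of why that precedence page matters (not this exact condition, just the classic `or` vs `||` trap once an assignment is involved):

$a = false || true;  // || binds tighter than =, so $a is true
$b = false or true;  // = binds tighter than or, so $b is false and the "or true" part is discarded
var_dump($a, $b);    // bool(true) bool(false)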
Does that help answer your question?
If you are really interested in this, you might take a look at the op codes that PHP creates. The Vulcan Logic Disassembler (VLD) might help you with this.
However, this shouldn't be your problem if you are only interested in your site's speed. You won't get speed benefits or drawbacks from the loops themselves, but from the things they actually loop over (MySQL queries, arrays, ...).
Compare this small test script:
<pre>
<?php
$small_loop = 3000;
$big_loop = $small_loop*$small_loop;
$start = microtime(true);
// Big loop
for ($i = 0; $i < $big_loop; $i++) {
    ; // do nothing
}
echo "Big loop took " . (microtime(true) - $start) . " seconds\n";
$start = microtime(true);
// Small loops
for ($i = 0; $i < $small_loop; $i++) {
    for ($j = 0; $j < $small_loop; $j++) {
        ;
    }
}
echo "Small loops took " . (microtime(true) - $start) . " seconds\n";
?>
</pre>
The output for me was:
Big loop took 0.59838700294495 seconds
Small loops took 0.592453956604 seconds
As you can see, the difference between the 1 big loop and the 3000 small loops isn't really significant.
Related
I'm trying to modify a script that appends random numbers to filenames so that it instead uses a nice counter that increases by 1 for each new file.
The original function is very simple; it looks like this:
$name .= generateRandomString(5);
What I've come up with, with my mediocre skills, is:
$name .= $count = 1; while ($count <= 10) { echo "$count "; ++$count; }
However, when I run the code it just keeps on looping. I was looking for a function similar to generateRandomString but for increasing numbers; is there one?
Any ideas?
Use the current timestamp in place of $count; it will always keep increasing.
$name .= time();
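If you really want a counter that goes up by exactly 1 per file rather than a timestamp, a minimal sketch could look like this (the counter.txt file name and the nextFileNumber() helper are made up for the example, not something from your script):

// Hypothetical helper: read, increment and store a counter in a small text file.
function nextFileNumber($counterFile = 'counter.txt')
{
    $current = is_file($counterFile) ? (int) file_get_contents($counterFile) : 0;
    $current++;
    file_put_contents($counterFile, (string) $current, LOCK_EX);
    return $current;
}

$name .= nextFileNumber(); // e.g. "photo_1", "photo_2", ...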
This question already has answers here:
Performance of FOR vs FOREACH in PHP
(5 answers)
Closed 9 years ago.
Intro
If I loop in PHP, and know how many times I want to iterate, I usually use the for-loop like this:
for($y=0; $y<10; $y++) {
// ...
}
But lately I have seen someone use the foreach-loop:
foreach (range(1, 10) as $y) {
// ...
}
Now, I find the foreach-loop much more readable and thought about adopting this foreach construct. But on the other hand, the for-loop is faster, as you can see in the following.
Speed Test
I did then some speed tests with the following results.
Foreach:
$latimes = [];
for ($x = 0; $x < 100; $x++) {
    $start = microtime(true);
    $lcc = 0;
    foreach (range(1, 10) as $y) {
        $lcc++;
    }
    $latimes[$x] = microtime(true) - $start;
}
echo "Test 'foreach':\n";
echo (float) array_sum($latimes)/count($latimes);
Results after I ran it five times:
Test 'foreach': 2.2873878479004E-5
Test 'foreach': 2.2327899932861E-5
Test 'foreach': 2.9709339141846E-5
Test 'foreach': 2.5603771209717E-5
Test 'foreach': 2.2120475769043E-5
For:
$latimes = [];
for ($x = 0; $x < 100; $x++) {
    $start = microtime(true);
    $lcc = 0;
    for ($y = 0; $y < 10; $y++) {
        $lcc++;
    }
    $latimes[$x] = microtime(true) - $start;
}
echo "Test 'for':\n";
echo (float) array_sum($latimes)/count($latimes);
Results after I ran it five times:
Test 'for': 1.3396739959717E-5
Test 'for': 1.0268688201904E-5
Test 'for': 1.0945796966553E-5
Test 'for': 1.3313293457031E-5
Test 'for': 1.9807815551758E-5
Question
What I'd like to know is: which would you prefer, and why? Which one is more readable for you, and would you prefer readability over speed?
The following code samples are written in PHP using the CodeIgniter framework's Benchmark library (just to save time, since that's what I'm currently using :D). If you are using another language, treat this as pseudocode and implement it in your language's own way; there shouldn't be any problem porting it to any programming language. If you have experience with PHP/CodeIgniter, you're lucky: just copy, paste and test this code :).
$data = array();
for ($i = 0; $i < 500000; $i++) {
    $data[$i] = rand();
}

$this->benchmark->mark('code_start');
for ($i = 0; $i < 500000; $i++) {
    ;
}
$this->benchmark->mark('code_end');
echo $this->benchmark->elapsed_time('code_start', 'code_end');
echo "<br/>";

$this->benchmark->mark('code_start');
foreach ($data as $row) {
    ;
}
$this->benchmark->mark('code_end');
echo $this->benchmark->elapsed_time('code_start', 'code_end');
I got about a 2 second difference between these two loops (the latter ran in around 3 seconds while the first ran in around 5 seconds). So the foreach loop won the 'battle of for vs foreach' here. You might be thinking that we won't need such a big loop; maybe not in all cases, but there are cases where long loops are needed, such as updating a big product database from a web service/XML/CSV, etc. This example is only meant to make you aware of the performance difference between them.
But yes, they both have exclusive uses where one or the other is the natural choice because of ease of use or optimization. For instance, if you are working in a loop that can be terminated early on a certain condition, a for loop will do the work with the most flexibility. On the other hand, if you are taking every object/item from a list or array and processing it, foreach will serve you best (a small illustration follows).
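A quick sketch of that split; the $products array and the reserve()/updatePrice() helpers are made up purely for illustration:

// for: we control the counter, so we can bail out early on a condition.
for ($i = 0; $i < count($products); $i++) {
    if ($products[$i]['stock'] == 0) {
        break; // stop scanning as soon as the first sold-out item appears
    }
    reserve($products[$i]); // hypothetical helper
}

// foreach: process every item in the list, with no counter bookkeeping.
foreach ($products as $product) {
    updatePrice($product); // hypothetical helper
}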
I need help creating PHP code to echo and run a function only 30% of the time.
Currently I have code below but it doesn't seem to work.
if (mt_rand(1, 3) == 2)
{
    echo '';
    theFunctionIWantCalled();
}
Are you trying to echo what the function returns? That would be
if (mt_rand(1, 100) <= 30)
{
    echo theFunctionIWantCalled();
}
What you currently have echoes a blank statement, then executes a function. I also changed the random statement. Since this is only pseudo-random and not true randomness, more options will give you a better chance of hitting it 30% of the time.
If you intended to echo a blank statement, then execute a function,
if (mt_rand(1, 100) <= 30)
{
    echo '';
    theFunctionIWantCalled();
}
would be correct. Once again, I've changed the if-statement to make it more evenly distributed. To help ensure a more even distribution, you could even do
if(mt_rand(1,10000) <= 3000)
since we aren't dealing with true randomness here. It's entirely possible that the algorithm is choosing one number more than others. As was mentioned in the comments of this question, since the algorithm is random, it could be choosing the same number over, and over, and over again. However, in practice, having more numbers to choose from will most likely result in an even distribution. Having only 3 numbers to choose from can skew the results.
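If you want to check the distribution for yourself, a quick empirical test is easy to run (the trial count here is arbitrary):

// Count how often the 30% condition fires over many trials.
$hits = 0;
$trials = 100000;
for ($i = 0; $i < $trials; $i++) {
    if (mt_rand(1, 100) <= 30) {
        $hits++;
    }
}
printf("Hit rate: %.2f%%\n", 100 * $hits / $trials); // should hover around 30.00%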
Since you are using rand, you can't guarantee it will be called 30% of the time. You could instead use the modulus operator, which will effectively give you 1 call out of every 3; not sure how important that is for you, but...
$max = 27;
for ($i = 1; $i < $max; $i++) {
    if ($i % 3 == 0) {
        call_function_here();
    }
}
Since the modulus operator does not work with floats, you can use fmod. This code should be fairly close; you can substitute your own total iterations and percentage...
$total = 50;
$percent = 0.50;
$calls = $total * $percent;
$interval = $total / $calls;
$called = 0;
$notcalled = 0;

for ($i = 0; $i <= $total; $i++) {
    if (fmod($i, $interval) < 1) {
        $called++;
        echo "Called" . "\n";
    } else {
        $notcalled++;
        echo "Not Called" . "\n";
    }
}

echo "Called: " . $called . "\n";
echo "Not Called: " . $notcalled . "\n";
I have to compare two very large sets of values. I put them in arrays for that, but it didn't work. Below is the code I use; is this the most efficient way? I have set the time and memory limits to unlimited as well. Chrome just shows "Error 101 (connection reset): unknown error".
for ($k = 0; $k < sizeof($pid); $k++) {
    $out = 0;
    // $pid has about 300000 entries and $oid about 500000 entries
    for ($m = 0; $m < sizeof($oid); $m++) {
        if ($pid[$k] == $oid[$m]) {
            $out++;
        }
    }
    if ($out) {
        echo "OID for ID " . $pid[$k] . " = " . $out;
        echo "<br>";
    }
}
Doesn't work how? Won't give you an answer? You're comparing every possible pair; with roughly 300,000 × 500,000 values that's on the order of 1.5 × 10^11 comparisons, which will take something like an hour on a modern machine, if you don't run out of memory first. A more efficient way would be to sort them first: N log N + M log M + N + M time instead of N*M time. Sorting a list of size x with a comparison sort takes x log(x) time. Then you can walk through each list from the front once, confident that if there are any matches you will find them; that pass takes linear time (sketched below).
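A minimal sketch of that sort-then-walk idea, assuming $pid and $oid are plain integer arrays (duplicates in $oid would need extra handling to reproduce your original $out count exactly):

// Sort both arrays, then advance two indexes in lockstep (a merge-style walk).
// Cost: N log N + M log M for the sorts, then one linear pass.
sort($pid);
sort($oid);

$k = 0;
$m = 0;
while ($k < count($pid) && $m < count($oid)) {
    if ($pid[$k] == $oid[$m]) {
        echo "OID for ID " . $pid[$k] . "<br>";
        $k++;
        $m++;
    } elseif ($pid[$k] < $oid[$m]) {
        $k++;
    } else {
        $m++;
    }
}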
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
In this situation, is it better to use a loop or not?
echo "0";
echo "1";
echo "2";
echo "3";
echo "4";
echo "5";
echo "6";
echo "7";
echo "8";
echo "9";
echo "10";
echo "11";
echo "12";
echo "13";
or
$number = 0;
while ($number != 13)
{
    echo $number;
    $number = $number + 1;
}
The former may be a little faster. The latter is a lot more readable. Choice is yours ;)
Even better would be
$totalWhatevers = 14;
for ($number = 0; $number < $totalWhatevers; $number++)
{
    echo $number;
}
Where 'totalWhatevers' is something descriptive to tell us what you are actually counting.
It's clearer to use a loop; as a rule of thumb, if you use fewer lines of code to loop, do so.
Your loop could be written more succinctly:
foreach (range(0,13) as $n)
    echo $n, "\n";
I'd use a for loop, just to keep everything out in the open (your original loop seems to only print up to 12):
for ($number = 0; $number <= 13; $number++)
{
    echo $number;
}
It's a lot cleaner than writing out a million 'echo' statements, and the code is fairly self-explanatory.
$string = "<?php \n";
$repeatNum = 20;
for($i = 0; $i < $repeatNum; $i++)
{
$string .= "echo \"" . $number . "\"; \n";
}
$string = "?>";
Now you can either
eval($string);
or
file_put_contents("newfile.text", $string);
and you will get a file with all the echos!
Note: This is not really a 'serious' answer. If you really want to create a PHP file with a number of echos, it works, but evaling the statement at the end is probably not the best idea for regular programming practice.
Depending on what you are trying to do, a loop can help keep your code clearer and simpler to update.
For example, when the number of items to display could vary, you could use a config variable.
$repeat = 20;
for ($i = 0; $i < $repeat; ++$i) {
    echo $i;
}
The mistake is in thinking that a loop generates code. This is not the case at all.
echo "0";
echo "1";
echo "2";
echo "3";
PHP will come along and execute each of these statements one by one.
for ($i = 0; $i <= 3; ++$i) {
    echo $i;
}
In this case, PHP executes the same echo statement four times. Each time the end of the block is reached (a block being code between curly braces) execution jumps back to the condition. If the condition is still true, the block executes again.
Which method offers the best performance? It is very, very hard to tell.
Slowdown of method 1) The greater the range of numbers to echo, the more echo statements that have to be parsed in the script.
Slowdown of method 2) The greater the range of numbers to echo, the more times execution has to jump, the more times a condition has to be tested, and the more times a counter has to be incremented.
I do not know which scenario leads to more CPU instructions, or which leads to more memory usage. But as someone who has programmed before, I know a secret: it doesn't matter. The difference in execution time will probably be on the order of a few ten-millionths of a second!
Therefore, the only way to make the effects noticeable at all is if you are echoing ten million numbers, in which case one method might take a second more than the other. Is that a worthy gain, though? No! By the time we are echoing ten million numbers, we are going to be spending minutes on either method; a one-second difference between them is completely negligible. Not to mention, who knows which one is better anyhow?
As a programmer, the best thing you can do is make your code readable and maintainable. A loop is much easier to understand than a page of echo lines. With a loop I know you haven't missed any intermediate numbers, but with echoes I have to check every single line.
Technical jargon version: both algorithms have complexity O(n). There can only be a constant difference in their performance, and if that constant is not particularly large, it is negligible.
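If you want to measure it anyway, here is a rough sketch of one way to do it; the unrolled variant is simulated by generating the echo statements as a string and eval-ing them, and the numbers will vary per machine:

$n = 100000;

// Method 2: a loop.
$start = microtime(true);
ob_start();                       // swallow the output so printing doesn't dominate the timing
for ($i = 0; $i < $n; $i++) {
    echo $i;
}
ob_end_clean();
$loopTime = microtime(true) - $start;

// Method 1: $n literal echo statements, generated and eval'd to mimic a long hand-written script.
$code = '';
for ($i = 0; $i < $n; $i++) {
    $code .= "echo $i;";
}
$start = microtime(true);
ob_start();
eval($code);
ob_end_clean();
$unrolledTime = microtime(true) - $start;

printf("loop: %.4f s, unrolled: %.4f s\n", $loopTime, $unrolledTime);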
What do you mean by better? Better for the parser? Better for the computer? Faster execution time? I can tell you that making it a loop is always better, both for the people working with your code and for the system.