Branch prediction in PHP

I just read a great post about branch prediction, and I was trying to reproduce it in PHP.
<?php
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

$time_start = microtime_float();
$count = 300000;
$sum = 0;
for ($i = 0; $i <= $count; $i++) {
    $array[] = rand(0, $count);
}
sort($array);
for ($i = 0; $i <= $count; $i++) {
    if ($array[$i] <= 150000) {
        $sum += $array[$i];
    }
}
$time_end = microtime_float();
$time = $time_end - $time_start;
echo $sum . '<br />';
echo 'End:' . $time;
?>
But I always get the same results with sorting and without it. Maybe I'm doing something wrong? Or does PHP have some built-in optimization that hides the branch predictor?
UPD:
I modified the code according to the comments and measured the time on my local machine.
Not sorted array: 1.108197927475
Sorted array: 1.6477839946747
Difference: 0.539586067.
I think this difference is spent on the sorting itself. So it does look like the branch predictor has no impact on speed here.

You won't replicate this in PHP. End of story. The reason is that the Java RTS uses JIT compilation techniques to compile the Java intermediate code down to the underlying x86 machine code, and this underlying machine code will expose these branch-prediction artefacts.
The PHP runtime system compiles PHP down to a bytecode, which is a pseudo machine code that is interpreted. This interpreter executes on the order of 0.5M opcodes per second on a typical single core -- that is, each PHP opcode takes perhaps 2-6K native instructions. Any subtleties of branching will be lost in this.
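As an aside, if anyone wants to re-run the experiment with the obvious measurement flaw removed: the UPD timings above include the array fill and the sort() call inside the timed region, which by itself explains the difference. A minimal sketch (assuming PHP 5.4+ for the short array syntax) that times only the conditional-sum loop:

<?php
// Build the data outside the timed region.
$count = 300000;
$array = [];
for ($i = 0; $i < $count; $i++) {
    $array[] = rand(0, $count);
}

// Time only the branchy loop, once on unsorted and once on sorted data,
// so sort() itself never falls inside the measurement.
foreach (['unsorted' => false, 'sorted' => true] as $label => $doSort) {
    $data = $array; // copy, so both runs see the same values
    if ($doSort) {
        sort($data);
    }
    $sum = 0;
    $start = microtime(true);
    for ($i = 0; $i < $count; $i++) {
        if ($data[$i] <= 150000) {
            $sum += $data[$i];
        }
    }
    printf("%s: %.6f s (sum = %d)\n", $label, microtime(true) - $start, $sum);
}

Per the answer above, you should still expect the two timings to be close in PHP; this just makes the comparison fair.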

Related

Timing attack with PHP

I'm trying to produce a timing attack in PHP and am using PHP 7.1 with the following script:
<?php
$find = "hello";
$length = array_combine(range(1, 10), array_fill(1, 10, 0));
for ($i = 0; $i < 1000000; $i++) {
    for ($j = 1; $j <= 10; $j++) {
        $testValue = str_repeat('a', $j);
        $start = microtime(true);
        if ($find === $testValue) {
            // Do nothing
        }
        $end = microtime(true);
        $length[$j] += $end - $start;
    }
}
arsort($length);
$length = key($length);
var_dump($length . " found");

$found = '';
$alphabet = array_combine(range('a', 'z'), array_fill(1, 26, 0));
for ($len = 0; $len < $length; $len++) {
    $currentIteration = $alphabet;
    $filler = str_repeat('a', $length - $len - 1);
    for ($i = 0; $i < 1000000; $i++) {
        foreach ($currentIteration as $letter => $time) {
            $testValue = $found . $letter . $filler;
            $start = microtime(true);
            if ($find === $testValue) {
                // Do nothing
            }
            $end = microtime(true);
            $currentIteration[$letter] += $end - $start;
        }
    }
    arsort($currentIteration);
    $found .= key($currentIteration);
}
var_dump($found);
This is searching for a word with the following constraints:
- a-z only
- up to 10 characters
The script finds the length of the word without any issue, but the value of the word never comes back as expected with a timing attack.
Is there something I am doing wrong?
The script loops through lengths and correctly identifies the length. It then loops through each letter (a-z) and checks the speed of the comparison. In theory, 'haaaa' should be slightly slower than 'aaaaa' due to the first letter being an 'h'. It then carries on for each of the five letters.
Running gives something like 'brhas' which is clearly wrong (it's different each time, but always wrong).
Is there something I am doing wrong?
I don't think so. I tried your code and I too, like you and the other people who tried in the comments, get completely random results for the second loop. The first one (the length) is mostly reliable, though not 100% of the time. By the way, the $argv[1] trick suggested didn't really improve the consistency of the results, and honestly I don't really see why it should.
Since I was curious I had a look at the PHP 7.1 source code. The string identity function (zend_is_identical) looks like this:
case IS_STRING:
    return (Z_STR_P(op1) == Z_STR_P(op2) ||
        (Z_STRLEN_P(op1) == Z_STRLEN_P(op2) &&
         memcmp(Z_STRVAL_P(op1), Z_STRVAL_P(op2), Z_STRLEN_P(op1)) == 0));
Now it's easy to see why the first timing attack on the length works great. If the length is different then memcmp is never called and therefore it returns a lot faster. The difference is easily noticeable, even without too many iterations.
Once you have the length figured out, in your second loop you are basically trying to attack the underlying memcmp. The problem is that the difference in timing highly depends on:
- the implementation of memcmp
- the current load and interfering processes
- the architecture of the machine.
I recommend this article titled "Benchmarking memcmp for timing attacks" for more detailed explanations. They did a much more precise benchmark and still were not able to get a clear noticeable difference in timing. I'm simply going to quote the conclusion of the article:
In conclusion, it highly depends on the circumstances if a memcmp() is subject to a timing attack.
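One more hedged observation: calling microtime(true) around every single === adds timer overhead and jitter that can dwarf a one-byte memcmp difference. A common mitigation, sketched below (it reduces noise, but does not defeat the problems listed above), is to time a whole batch of identical comparisons between a single pair of timestamps:

<?php
// Sketch: amortize timer overhead by timing many identical comparisons
// inside one timed region. timeComparison is a hypothetical helper.
function timeComparison(string $find, string $testValue, int $batch = 100000): float
{
    $start = microtime(true);
    for ($i = 0; $i < $batch; $i++) {
        if ($find === $testValue) {
            // Do nothing, same as in the original script.
        }
    }
    return microtime(true) - $start;
}

// Compare per-candidate batch times instead of per-call times.
$find = 'hello';
foreach (['aaaaa', 'haaaa'] as $candidate) {
    printf("%s: %.6f s\n", $candidate, timeComparison($find, $candidate));
}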

Why is self-implemented quicksort faster than internal quicksort?

Being a mostly-PHP developer (and self-taught), I've never really had a reason to know or understand the algorithms behind things like sorting, except that quicksort is on average the quickest and is usually the algorithm behind PHP's sort functions.
But I have an interview coming up soon, and they recommend understanding basic algorithms like this one. So I broke open http://www.geeksforgeeks.org/quick-sort/ and implemented my own QuickSort and Partition functions (for practice, of course) for sorting an array by one of its values. I came up with this (I'm using PHP 7.1, so a fair bit of the syntax is relatively new):
function Partition(array &$Array, $Column, int $Low, int $High): int {
    $Pivot = $Array[$High][$Column];
    $i = $Low - 1;
    for ($j = $Low; $j <= $High - 1; $j++) {
        if ($Array[$j][$Column] > $Pivot) {
            $i++;
            [$Array[$i], $Array[$j]] = [$Array[$j], $Array[$i]];
        }
    }
    [$Array[$i + 1], $Array[$High]] = [$Array[$High], $Array[$i + 1]];
    return $i + 1;
}

function QuickSort(array &$Array, $Column, int $Low = 0, ?int $High = null): void {
    $High = $High ?? (count($Array) - 1);
    if ($Low < $High) {
        $PartitionIndex = Partition($Array, $Column, $Low, $High);
        QuickSort($Array, $Column, $Low, $PartitionIndex - 1);
        QuickSort($Array, $Column, $PartitionIndex + 1, $High);
    }
}
And it works! Awesome! And so I thought, there's no real point in using it, since there's no way the PHP-interpreted version of this algorithm is faster than the compiled C version (like what would be used in usort). But for the heck of it, I decided to benchmark the two approaches.
And very much to my surprise, mine is faster!
$Tries = 1000;
$_Actions = $Actions;

$Start = microtime(true);
for ($i = 0; $i < $Tries; $i++) {
    $Actions = $_Actions;
    usort($Actions, function($a, $b) {
        return $b['Timestamp'] <=> $a['Timestamp'];
    });
}
echo microtime(true) - $Start, "\n";

$Start = microtime(true);
for ($i = 0; $i < $Tries; $i++) {
    $Actions = $_Actions;
    QuickSort($Actions, 'Timestamp');
}
echo microtime(true) - $Start, "\n";
This gives me consistent numbers around 1.274071931839 for the first one and 0.87327885627747 for the second.
Is there something silly that I'm missing that would cause this? Does usort not really use an implementation of quicksort? Is it because I'm not taking into account the array keys (in my case I don't need the key/value pairs to stay the same)?
Just in case anyone wants the finished QuickSort function in PHP, this is what I ended up with, which sorts arrays by column, descending, in about half the time of the native usort. (Iterative, not recursive, and with the partition function inlined.)
function array_column_sort_QuickSort_desc(array &$Array, $Column, int $Start = 0, ?int $End = null): void {
    $End = $End ?? (count($Array) - 1);
    $Stack = [];
    $Top = 0;
    $Stack[$Top++] = $Start;
    $Stack[$Top++] = $End;
    while ($Top > 0) {
        $End = $Stack[--$Top];
        $Start = $Stack[--$Top];
        if ($Start < $End) {
            $Pivot = $Array[$End][$Column];
            $PartitionIndex = $Start;
            for ($i = $Start; $i < $End; $i++) {
                if ($Array[$i][$Column] >= $Pivot) {
                    [$Array[$i], $Array[$PartitionIndex]] = [$Array[$PartitionIndex], $Array[$i]];
                    $PartitionIndex++;
                }
            }
            [$Array[$End], $Array[$PartitionIndex]] = [$Array[$PartitionIndex], $Array[$End]];
            $Stack[$Top++] = $Start;
            $Stack[$Top++] = $PartitionIndex - 1;
            $Stack[$Top++] = $PartitionIndex + 1;
            $Stack[$Top++] = $End;
        }
    }
}
Consider the difference between the arguments you pass to your QuickSort and those you pass to usort(). usort() has a much more generic interface, which operates in terms of a comparison function. Your QuickSort is specialized for your particular kind of data, and for performing comparisons via the > operator.
Very likely, then, the difference in performance is attributable to the much higher cost of evaluating function calls relative to evaluating individual > operations. That difference could easily swamp any inherent efficiency advantage that usort() might have. Consider, moreover, that because it relies on a comparison function written in PHP, usort()'s operation involves running a lot of PHP, not just compiled C code.
If you want to explore this further then consider modifying your implementation to present the same interface that usort() does. I'd be inclined to guess that usort() would win an apples-to-apples comparison with such a hand-rolled variation, but performance is notoriously hard to predict. This is why we test.
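For the curious, a minimal sketch of such an apples-to-apples variation (quickSortCallable is a hypothetical name; it accepts the same style of comparator that usort() takes):

// Sketch: the same iterative quicksort, but generic over a user-supplied
// comparator, like usort(). Not the author's code; an illustration only.
function quickSortCallable(array &$array, callable $cmp): void
{
    $stack = [[0, count($array) - 1]];
    while ($stack) {
        [$start, $end] = array_pop($stack);
        if ($start >= $end) {
            continue;
        }
        $pivot = $array[$end];
        $p = $start;
        for ($i = $start; $i < $end; $i++) {
            if ($cmp($array[$i], $pivot) < 0) {
                [$array[$i], $array[$p]] = [$array[$p], $array[$i]];
                $p++;
            }
        }
        [$array[$end], $array[$p]] = [$array[$p], $array[$end]];
        $stack[] = [$start, $p - 1];
        $stack[] = [$p + 1, $end];
    }
}

// Usage, mirroring the usort() call from the question:
// quickSortCallable($Actions, function($a, $b) { return $b['Timestamp'] <=> $a['Timestamp']; });

Running the benchmark again with this version should show how much of the difference was per-comparison function-call overhead.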

Fastest way of getting a character inside a string given the index (PHP)

I know of several ways to get a character off a string given the index.
<?php
$string = 'abcd';
echo $string[2];
echo $string{2};
echo substr($string, 2, 1);
?>
I don't know if there are any more ways; if you know of any, please don't hesitate to add them. The question is: if I were to choose a method above and repeat it a couple of million times, possibly using mt_rand to get the index value, which method would be the most efficient in terms of memory consumption and speed?
To arrive at an answer, you'll need to set up a benchmark test rig. Compare all methods over several (hundreds of thousands or millions of) iterations on an idle box. Try the built-in microtime function to measure the difference between start and finish. That's your elapsed time.
The test should take you all of 2 minutes to write.
To save you some effort, I wrote a test. My own test shows that the functional solution (substr) is MUCH slower (expected). The idiomatic PHP ({}) solution is as fast as the index method; they are interchangeable. The [] syntax is preferred, as this is the direction PHP is going regarding string offsets.
<?php
$string = 'abcd';
$limit = 1000000;
$r = array(); // results

// PHP idiomatic string index method (curly braces)
$s = microtime(true);
for ($i = 0; $i < $limit; ++$i) {
    $c = $string{2};
}
$r[] = microtime(true) - $s;
echo "\n";

// PHP functional solution
$s = microtime(true);
for ($i = 0; $i < $limit; ++$i) {
    $c = substr($string, 2, 1);
}
$r[] = microtime(true) - $s;
echo "\n";

// index method (square brackets)
$s = microtime(true);
for ($i = 0; $i < $limit; ++$i) {
    $c = $string[2];
}
$r[] = microtime(true) - $s;
echo "\n";

// RESULTS
foreach ($r as $i => $v) {
    echo "RESULT ($i): $v \n";
}
?>
Results:
RESULT (PHP 4 & 5 idiomatic braces syntax): 0.19106006622314
RESULT (string slice function): 0.50699090957642
RESULT (index syntax, the future, as the braces are being deprecated): 0.19102001190186
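A caveat worth adding with hindsight: the curly-brace offset syntax was deprecated in PHP 7.4 and removed in PHP 8.0, so on current PHP versions only two of the three candidates remain:

<?php
$string = 'abcd';
echo $string[2];            // works on all supported versions, and is the fastest
echo substr($string, 2, 1); // works everywhere, but slower per the benchmark above
// echo $string{2};         // fatal parse error as of PHP 8.0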

php array_intersect() efficiency

Consider the script below: two arrays with only three values each. When I compare these two arrays using array_intersect(), the result is fast.
<?php
$arrayOne = array('3', '4', '5');
$arrayTwo = array('4', '5', '6');
$intersect = array_intersect($arrayOne, $arrayTwo);
print_r($intersect);
?>
My question is: what is the efficiency of array_intersect()? If we compare two arrays with 1000 values each, would it still produce good results, or do we need to use some hash function to find the common values quickly? I'd like your suggestions on this.
I am building an application: when a person logs in with Facebook Login, the application gets his friends list and finds whether any of those friends have commented in my app before, and shows that to him. Roughly, a person may have 200 to 300 friends on Facebook, and the DB has more than 1000 records. I need to find the common entries efficiently. How can I do that?
Intersection can be implemented by constructing a set of the searched values in the second array, and looking up in a set can be made so fast that it takes essentially constant time on average. Therefore, the runtime of the whole algorithm can be in O(n).
Alternatively, one can sort the second array (in O(n log n)). Since looking up in a sorted array has a runtime in O(log n), the whole algorithm should then have a runtime in O(n log n).
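To make that second strategy concrete, here is a minimal sketch (intersectBinarySearch is a hypothetical helper, assuming PHP 7+ for intdiv() and comparable scalar values with no duplicates):

// Sketch of the O(n log n) variant: sort the second array once, then
// binary-search it for every value of the first array.
function intersectBinarySearch(array $a, array $b): array
{
    sort($b);
    $result = [];
    foreach ($a as $value) {
        $lo = 0;
        $hi = count($b) - 1;
        while ($lo <= $hi) {
            $mid = intdiv($lo + $hi, 2);
            if ($b[$mid] < $value) {
                $lo = $mid + 1;
            } elseif ($b[$mid] > $value) {
                $hi = $mid - 1;
            } else {
                $result[] = $value; // found in both arrays
                break;
            }
        }
    }
    return $result;
}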
According to a (short, unscientific) test I just ran, this seems to be the case for PHP's array_intersect:
Here's the code I used to test it. As you can see, for an input size as small as 1000, you don't need to worry.
array_intersect sorts the arrays before comparing their values in parallel (see the use of zend_qsort in the source file array.c). This alone takes O(n·log n) for each array. Then the actual intersection takes only linear time.
Depending on the values in your arrays (they must be usable as array keys, i.e. integers or strings, for array_flip to work), you could implement this intersection in linear time without the sorting, for example:
$index = array_flip($arrayOne); // map each value of $arrayOne to its original key
foreach ($arrayTwo as $value) {
    // drop from the index every value that also occurs in $arrayTwo ...
    if (isset($index[$value])) unset($index[$value]);
}
// ... so $index now holds only the values NOT in $arrayTwo; remove those from $arrayOne
foreach ($index as $value => $key) {
    unset($arrayOne[$key]);
}
var_dump($arrayOne); // what remains is the intersection
The fastest solution I found:
function arrayIntersect($arrayOne, $arrayTwo) {
    $index = array_flip($arrayOne);
    $second = array_flip($arrayTwo);
    $x = array_intersect_key($index, $second);
    return array_flip($x);
}
The tests I made look like this:
function intersect($arrayOne, $arrayTwo)
{
    $index = array_flip($arrayOne);
    foreach ($arrayTwo as $value) {
        if (isset($index[$value])) unset($index[$value]);
    }
    foreach ($index as $value => $key) {
        unset($arrayOne[$key]);
    }
    return $arrayOne;
}

function intersect2($arrayOne, $arrayTwo)
{
    $index = array_flip($arrayOne);
    $second = array_flip($arrayTwo);
    $x = array_intersect_key($index, $second);
    return array_flip($x);
}

for ($i = 0; $i < 1000000; $i++) {
    $one[] = rand(0, 1000000);
    $two[] = rand(0, 100000);
    $two[] = rand(0, 10000);
}
$one = array_unique($one);
$two = array_unique($two);

$time_start = microtime(true);
$res = intersect($one, $two);
$time = microtime(true) - $time_start;
echo "Sort time $time seconds 'intersect' \n";

$time_start = microtime(true);
$res2 = array_intersect($one, $two);
$time = microtime(true) - $time_start;
echo "Sort time $time seconds 'array_intersect' \n";

$time_start = microtime(true);
$res3 = intersect2($one, $two);
$time = microtime(true) - $time_start;
echo "Sort time $time seconds 'intersect2' \n";
Results from PHP 5.6:
Sort time 0.77021193504333 seconds 'intersect'
Sort time 6.9765028953552 seconds 'array_intersect'
Sort time 0.4631941318512 seconds 'intersect2'
From what you state above, I would recommend implementing a caching mechanism. That way you would offload the DB and speed up your application. I would also recommend profiling the speed of array_intersect with an increasing amount of data to see how performance scales. You could do this by simply wrapping the call in calls for the system time and calculating the difference, but I would recommend using a real profiler to get good data.
I implemented this simple code comparing array_intersect and array_intersect_key:
$array = array();
for ($i = 0; $i < 130000; $i++)
    $array[$i] = $i;
for ($i = 200000; $i < 230000; $i++)
    $array[$i] = $i;
for ($i = 300000; $i < 340000; $i++)
    $array[$i] = $i;

$array2 = array();
for ($i = 100000; $i < 110000; $i++)
    $array2[$i] = $i;
for ($i = 90000; $i < 100000; $i++)
    $array2[$i] = $i;
for ($i = 110000; $i < 290000; $i++)
    $array2[$i] = $i;

echo 'Intersect two arrays -> array1[' . count($array) . '] : array2[' . count($array2) . ']' . '<br>';
echo date('Y-m-d H:i:s') . '<br>';

$time = time();
$array_r2 = array_intersect_key($array, $array2);
echo 'Intersect key: ' . (time() - $time) . ' secs<br>';

$time = time();
$array_r = array_intersect($array, $array2);
echo 'Intersect: ' . (time() - $time) . ' secs<br>';
The result:
Intersect two arrays -> array1[200000] : array2[200000]
2014-10-30 08:52:52
Intersect key: 0 secs
Intersect: 4 secs
In this comparison of the efficiency of array_intersect and array_intersect_key, we can see that the intersection by keys is much faster.

PHP Performance : Copy vs. Reference

Hey there. Today I wrote a small benchmark script to compare the performance of copying variables vs. creating references to them. I was expecting that creating references to large arrays, for example, would be significantly slower than copying the whole array. Here is my benchmark code:
<?php
$array = array();
for ($i = 0; $i < 100000; $i++) {
    $array[] = mt_rand();
}

function recursiveCopy($array, $count) {
    if ($count === 1000)
        return;
    $foo = $array;
    recursiveCopy($array, $count + 1);
}

function recursiveReference($array, $count) {
    if ($count === 1000)
        return;
    $foo = &$array;
    recursiveReference($array, $count + 1);
}

$time = microtime(1);
recursiveCopy($array, 0);
$copyTime = (microtime(1) - $time);
echo "Took " . $copyTime . "s \n";

$time = microtime(1);
recursiveReference($array, 0);
$referenceTime = (microtime(1) - $time);
echo "Took " . $referenceTime . "s \n";

echo "Reference / Copy: " . ($referenceTime / $copyTime);
The actual result I got was that recursiveReference took about 20 times (!) as long as recursiveCopy.
Can somebody explain this PHP behaviour?
PHP will very likely implement copy-on-write for its arrays, meaning when you "copy" an array, PHP doesn't do all the work of physically copying the memory until you modify one of the copies and your variables can no longer reference the same internal representation.
Your benchmarking is therefore fundamentally flawed, as your recursiveCopy function doesn't actually copy the object; if it did, you would run out of memory very quickly.
Try this: By assigning to an element of the array you force PHP to actually make a copy. You'll find you run out of memory pretty quickly as none of the copies go out of scope (and aren't garbage collected) until the recursive function reaches its maximum depth.
function recursiveCopy($array, $count) {
    if ($count === 1000)
        return;
    $foo = $array;
    $foo[9492] = 3; // Force PHP to copy the array
    recursiveCopy($array, $count + 1);
}
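To see copy-on-write directly, here is a short sketch using memory_get_usage(); the exact byte counts will vary by PHP version, but the pattern (no jump until the write) should not:

<?php
// Watch memory usage to see when PHP physically copies the array.
$array = range(1, 100000);
echo memory_get_usage(), " bytes after building \$array\n";

$copy = $array; // no physical copy yet: both names share one internal array
echo memory_get_usage(), " bytes after \$copy = \$array\n";

$copy[0] = -1;  // the first write triggers the actual copy
echo memory_get_usage(), " bytes after writing to \$copy\n";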
In recursiveReference you're calling recursiveCopy... this doesn't make any sense; in this case you're calling recursiveReference just once. Correct your code, run the benchmark again and come back with your new results.
In addition, I don't think it's useful for a benchmark to do this recursively. A better solution would be to call a function 1000 times in a loop - once with the array directly and once with a reference to that array.
You don't need to (and thus shouldn't) assign or pass variables by reference just for performance reasons. PHP does such optimizations automatically.
The test you ran is flawed because of these automatic optimizations. I ran the following test instead:
<?php
for ($i = 0; $i < 100000; $i++) {
    $array[] = mt_rand();
}

$time = microtime(1);
for ($i = 0; $i < 1000; $i++) {
    $copy = $array;
    unset($copy);
}
$duration = microtime(1) - $time;
echo "Normal Assignment and don't write: $duration<br />\n";

$time = microtime(1);
for ($i = 0; $i < 1000; $i++) {
    $copy =& $array;
    unset($copy);
}
$duration = microtime(1) - $time;
echo "Assignment by Reference and don't write: $duration<br />\n";

$time = microtime(1);
for ($i = 0; $i < 1000; $i++) {
    $copy = $array;
    $copy[0] = 0;
    unset($copy);
}
$duration = microtime(1) - $time;
echo "Normal Assignment and write: $duration<br />\n";

$time = microtime(1);
for ($i = 0; $i < 1000; $i++) {
    $copy =& $array;
    $copy[0] = 0;
    unset($copy);
}
$duration = microtime(1) - $time;
echo "Assignment by Reference and write: $duration<br />\n";
?>
This was the output:
Normal Assignment and don't write: 0.00023698806762695
Assignment by Reference and don't write: 0.00023508071899414
Normal Assignment and write: 21.302103042603
Assignment by Reference and write: 0.00030708312988281
As you can see there is no significant performance difference in assigning by reference until you actually write to the copy, i.e. when there is also a functional difference.
Generally speaking in PHP, calling by reference is not something you'd do for performance reasons; it's something you'd do for functional reasons - ie because you actually want the referenced variable to be updated.
If you don't have a functional reason for calling by reference then you should stick with regular parameter passing, because PHP handles things perfectly efficiently that way.
(that said, as others have pointed out, your example code isn't exactly doing what you think it is anyway ;))
In the recursiveReference() function you call the recursiveCopy() function. Is that what you really intended to do?
You do nothing with the $foo variable - probably it was supposed to be used in a further method call?
Passing a variable by reference should generally save stack memory in the case of passing large objects.
recursiveReference is calling recursiveCopy.
Not that that would necessarily harm performance, but that's probably not what you're trying to do.
Not sure why performance is slower, but it doesn't reflect the measurement you're trying to make.
