Comparing large numbers of values using PHP arrays

I have to compare two very large sets of values. I put them in arrays, but it didn't work. Below is the code I use; is this the most efficient way? I have already set the time limit and memory limit to unlimited. Chrome eventually shows "error 101 (connection reset) unknown error".
for ($k = 0; $k < sizeof($pid); $k++) {
    $out = 0;
    for ($m = 0; $m < sizeof($oid); $m++) {
        if ($pid[$k] == $oid[$m]) { // $pid have 300000 indexes
                                    // and $oid have about 500000 indexes
            $out++;
        }
    }
    if ($out) {
        echo "OID for ID ".$pid[$k]." = ".$out;
        echo "<br>";
    }
}

Doesn't work how? Won't give you an answer? You're comparing every possible pair. How many comparisons is that? With roughly 300,000 values in $pid and 500,000 in $oid, more than 10^11. That will take something like an hour on a modern machine, if you don't run out of memory first.

A more efficient way is to sort both arrays first: that costs N log N + M log M instead of N*M, since sorting a list of size x with a comparison sort takes x log x time. Then you can walk forward through each sorted list once, confident that if there are any matches you will find them; that pass takes linear time (N + M).
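As a rough illustration of that sorted walk (my own sketch, not from the answer; it assumes $pid and $oid are the two ID arrays from the question, and reports each distinct $pid value once):

sort($pid);
sort($oid);

$k = 0;
$m = 0;
$npid = count($pid);
$noid = count($oid);

while ($k < $npid && $m < $noid) {
    if ($pid[$k] < $oid[$m]) {
        $k++;
    } elseif ($pid[$k] > $oid[$m]) {
        $m++;
    } else {
        // Matched: count how many entries in $oid share this value.
        $value = $pid[$k];
        $out = 0;
        while ($m < $noid && $oid[$m] == $value) {
            $out++;
            $m++;
        }
        echo "OID for ID ".$value." = ".$out;
        echo "<br>";
        // Skip any duplicates of the same value in $pid.
        while ($k < $npid && $pid[$k] == $value) {
            $k++;
        }
    }
}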

Related

PHP: Optimizing array iteration

I am working on an algorithm for building teams with the highest total score. Teams are generated from a list of players. The conditions for creating a team are:
It should have 6 players.
The collective salary of the 6 players must be less than or equal to 50K.
Teams are to be ranked by highest collective projection.
What I did to get this result is generate every possible team, then run checks to exclude teams whose salary exceeds 50K, and finally sort the remainder by projection. But generating all the possibilities takes a lot of time and sometimes consumes all the memory. For a list of 160 players it takes around 90 seconds. Here is the code
$base_array = array();
$query1 = mysqli_query($conn, "SELECT * FROM temp_players ORDER BY projection DESC");
while ($row1 = mysqli_fetch_array($query1)) {
    $player = array();
    $mma_id = $row1['mma_player_id'];
    $salary = $row1['salary'];
    $projection = $row1['projection'];
    $wclass = $row1['wclass'];
    array_push($player, $mma_id);
    array_push($player, $salary);
    array_push($player, $projection);
    array_push($player, $wclass);
    array_push($base_array, $player);
}
$result_base_array = array();
$totalsalary = 0;
for ($i = 0; $i < count($base_array) - 5; $i++) {
    for ($j = $i + 1; $j < count($base_array) - 4; $j++) {
        for ($k = $j + 1; $k < count($base_array) - 3; $k++) {
            for ($l = $k + 1; $l < count($base_array) - 2; $l++) {
                for ($m = $l + 1; $m < count($base_array) - 1; $m++) {
                    for ($n = $m + 1; $n < count($base_array) - 0; $n++) {
                        $totalsalary = $base_array[$i][1] + $base_array[$j][1] + $base_array[$k][1] + $base_array[$l][1] + $base_array[$m][1] + $base_array[$n][1];
                        $totalprojection = $base_array[$i][2] + $base_array[$j][2] + $base_array[$k][2] + $base_array[$l][2] + $base_array[$m][2] + $base_array[$n][2];
                        if ($totalsalary <= 50000) {
                            array_push($result_base_array,
                                array($base_array[$i], $base_array[$j], $base_array[$k], $base_array[$l], $base_array[$m], $base_array[$n],
                                    $totalprojection, $totalsalary)
                            );
                        }
                    }
                }
            }
        }
    }
}
usort($result_base_array, "cmp");
And the cmp function
function cmp($a, $b) {
    if ($a[6] == $b[6]) {
        return 0;
    }
    return ($a[6] < $b[6]) ? 1 : -1;
}
Is there any way to reduce the time it takes to do this task, or any other workaround for getting the desired number of teams?
Regards
Because the number of elements can be very big (for example, 100 players can generate about 1.2*10^9 teams), you can't hold them all in memory. Try saving the resulting array to a file in parts (truncating the array after each save), then use an external file sort.
It will be slow, but at least it will not fail because of memory.
If you only need the top n teams (like the 10 teams with the highest projection), then you should convert the code that generates result_base_array into a Generator, so it yields the next team instead of pushing it into an array. Then iterate over this generator: on each iteration, add the new item to a sorted result array and cut off the redundant elements.
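To illustrate that pattern (this is only a toy sketch of mine, not working team-generation code; rand_teams() just fabricates data in place of the real nested loops):

<?php
// Toy generator standing in for the real team-producing code.
function rand_teams(int $how_many): Generator {
    for ($i = 0; $i < $how_many; $i++) {
        yield ['projection' => mt_rand(0, 1000), 'salary' => mt_rand(20000, 60000)];
    }
}

$keep = 10;   // number of best teams to retain
$top = [];
foreach (rand_teams(100000) as $team) {
    if ($team['salary'] > 50000) continue;    // salary filter
    $top[] = $team;
    usort($top, function ($a, $b) { return $b['projection'] - $a['projection']; });
    if (count($top) > $keep) array_pop($top); // cut the redundant element
}
print_r($top); // the 10 highest-projection teams seen so far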
Depending on how often the salaries are the cause of exclusion, you could perform that test in the other loops as well. If after 4 player selections their summed salaries already exceed 50K, there is no point in selecting the remaining 2 players. This could save you some iterations.
This can be further improved by remembering the lowest 6 salaries in the pool, and then checking, after selecting 4 members, whether you could still stay under 50K by adding the 2 lowest existing salaries. If that is not possible, then again there is no point in trying the two remaining players. Of course, this check can be made at each stage of the selection (after selecting 1 player, 2 players, ...).
Another related improvement comes into play when you sort your data by ascending salary. If after selecting the 4th player the above logic leads you to conclude you cannot stay under 50K by adding 2 more players, then there is no use in replacing the 4th player with the next one in the series either: that player has a greater salary, so it would also yield a total above 50K. That means you can backtrack immediately and work on the 3rd player selection.
As others pointed out, the number of potential solutions is enormous. For 160 players and a team size of 6, the number of combinations is:
160 . 159 . 158 . 157 . 156 . 155
--------------------------------- = 21 193 254 160
6 . 5 . 4 . 3 . 2
21 billion entries is a stretch for memory, and probably not useful to you either: will you really be interested in the team at the 4 432 456 911th place?
You'll probably be interested in something like the top-10 of those teams (in terms of projection). This you can achieve by keeping a list of 10 best teams, and then, when you get a new team with an acceptable salary, you add it to that list, keeping it sorted (via a binary search), and ejecting the entry with the lowest projection from that top-10.
Here is the code you could use:
$base_array = array();
// Order by salary, ascending, and only select what you need
$query1 = mysqli_query($conn, "
    SELECT mma_player_id, salary, projection, wclass
    FROM temp_players
    ORDER BY salary ASC");
// Specify with option argument that you only need the associative keys:
while ($row1 = mysqli_fetch_array($query1, MYSQLI_ASSOC)) {
    // Keep the named keys, it makes interpreting the data easier:
    $base_array[] = $row1;
}
function combinations($base_array, $salary_limit, $team_size) {
    // Get lowest salaries, so we know the least value that still needs to
    // be added when composing a team. This will allow an early exit when
    // the cumulative salary is already too great to stay under the limit.
    $remaining_salary = [];
    foreach ($base_array as $i => $row) {
        if ($i == $team_size) break;
        array_unshift($remaining_salary, $salary_limit);
        $salary_limit -= $row['salary'];
    }
    $result = [];
    $stack = [0];
    $sum_salary = [0];
    $sum_projection = [0];
    $index = 0;
    while (true) {
        $player = $base_array[$stack[$index]];
        if ($sum_salary[$index] + $player['salary'] <= $remaining_salary[$index]) {
            $result[$index] = $player;
            if ($index == $team_size - 1) {
                // Use yield so we don't need to build an enormous result array:
                yield [
                    "total_salary" => $sum_salary[$index] + $player['salary'],
                    "total_projection" => $sum_projection[$index] + $player['projection'],
                    "members" => $result
                ];
            } else {
                $index++;
                $sum_salary[$index] = $sum_salary[$index - 1] + $player['salary'];
                $sum_projection[$index] = $sum_projection[$index - 1] + $player['projection'];
                $stack[$index] = $stack[$index - 1];
            }
        } else {
            $index--;
        }
        while (true) {
            if ($index < 0) {
                return; // all done
            }
            $stack[$index]++;
            if ($stack[$index] <= count($base_array) - $team_size + $index) break;
            $index--;
        }
    }
}
// Helper function to quickly find where to insert a value in an ordered list
function binary_search($needle, $haystack) {
    $high = count($haystack) - 1;
    $low = 0;
    while ($high >= $low) {
        $mid = (int)floor(($high + $low) / 2);
        $val = $haystack[$mid];
        if ($needle < $val) {
            $high = $mid - 1;
        } elseif ($needle > $val) {
            $low = $mid + 1;
        } else {
            return $mid;
        }
    }
    return $low;
}
$top_team_count = 10; // set this to the desired size of the output
$top_teams = [];      // this will be the output
$top_projections = [];
foreach (combinations($base_array, 50000, 6) as $team) {
    $j = binary_search($team['total_projection'], $top_projections);
    array_splice($top_teams, $j, 0, [$team]);
    array_splice($top_projections, $j, 0, [$team['total_projection']]);
    if (count($top_teams) > $top_team_count) {
        // forget about lowest projection, to keep memory usage low
        array_shift($top_teams);
        array_shift($top_projections);
    }
}
$top_teams = array_reverse($top_teams); // Put highest projection first
print_r($top_teams);
Have a look at the demo on eval.in, which just generates 12 players with random salary and projection data.
Final remarks
Even with the above-mentioned optimisations, doing this for 160 players might still require a lot of iterations. The more often the salaries sum to more than 50K, the better the performance will be. If that never happens, the algorithm cannot escape looking at each of the 21 billion combinations. If you know beforehand that the 50K limit plays no role, you should of course order the data by descending projection, like you originally did.
Another optimisation would be to feed back into the combinations function the 10th-highest team projection you have so far. The function could then eliminate combinations that would lead to a lower total projection. You could first take the 6 highest player projection values and use those to determine how much a partial team's projection can still grow by adding the missing players. It might turn out that beating the current top 10 becomes impossible after having selected a few players, and then you can skip some iterations, much as is done on the basis of salaries.
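A hedged sketch of that projection-based cut-off (my own helper, not part of the answer above): given the highest projections still available in the pool, decide whether a partial team can still beat the current 10th-best total.

<?php
// Returns true if a partial team can still end up above $tenth_best, assuming
// the best case where the $players_missing highest remaining projections are added.
function can_still_beat(array $pool_projections, float $partial_projection,
                        int $players_missing, float $tenth_best): bool {
    rsort($pool_projections); // highest projections first
    $best_possible_addition = array_sum(array_slice($pool_projections, 0, $players_missing));
    return $partial_projection + $best_possible_addition > $tenth_best;
}

// Example: partial projection 180.0, 2 slots left, best remaining projections
// 40.0 and 35.0, current 10th-best total 260.0 -> 255.0 cannot beat it, so prune.
var_dump(can_still_beat([40.0, 35.0, 30.0], 180.0, 2, 260.0)); // bool(false)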

Why is a sorted array slower than a non sorted array in PHP

I have the following script, and I know about the principle of branch prediction, but that doesn't seem to be what is happening here.
Why is it faster to process a sorted array than an unsorted array?
In PHP it seems to work the other way around.
When I run the following script without the sort($data) call, it takes 193.23883700371 seconds to complete. When I enable the sort($data) line, the script takes 300.26129794121 seconds to complete.
Why is it so much slower in PHP? I used PHP 5.5 and 5.6.
In PHP 7 the script is faster when the sort() is not commented out.
<?php
$size = 32768;
$data = array_fill(0, $size, null);
for ($i = 0; $i < $size; $i++) {
    $data[$i] = rand(0, 255);
}
// Improved performance when disabled
//sort($data);
$total = 0;
$start = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    for ($x = 0; $x < $size; $x++) {
        if ($data[$x] >= 127) {
            $total += $data[$x];
        }
    }
}
$end = microtime(true);
echo($end - $start);
Based on my comments above, the solution is either to find or implement a sort function that moves the values so that memory remains contiguous (which gives you the speedup), or to push the values from the sorted array into a second array so that the new array has contiguous memory.
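A minimal sketch of that second suggestion, assuming the $data array from the script above: copy the sorted values into a fresh array so they are laid out in insertion order again.

sort($data);
$contiguous = array();
foreach ($data as $value) {
    $contiguous[] = $value; // re-inserting rebuilds the array in order
}
// ...then run the timing loops over $contiguous instead of $data.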
Assuming you meant not to time the actual sort (your code doesn't time that action), it's difficult to assess any true performance difference because you've filled the array with random data. One pass might have many more values greater than or equal to 127 (and thus run an additional statement) than another pass. To really compare the two, fill your array with an identical, fixed data set. Otherwise, you'll never know whether the random fill is causing the time differences you're seeing.
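One simple way to follow that advice (my assumption, not from the answer) is to seed the random number generator so every run works on an identical data set:

$size = 32768;
mt_srand(42); // fixed seed -> reproducible "random" data
$data = array_fill(0, $size, null);
for ($i = 0; $i < $size; $i++) {
    $data[$i] = mt_rand(0, 255);
}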

What is better to use: in_array or array_unique?

I am in doubt what to use:
foreach(){
    // .....
    if (!in_array($view, $this->_views[$condition]))
        array_push($this->_views[$condition], $view);
    // ....
}
OR
foreach(){
    // .....
    array_push($this->_views[$condition], $view);
    // ....
}
$this->_views[$condition] = array_unique($this->_views[$condition]);
UPDATE
The goal is to get an array of unique values. This can be done either by checking every time whether the value already exists with in_array, or by adding all values and calling array_unique at the end. So is there any major difference between these two ways?
I think the second approach would be more efficient. In fact, array_unique sorts the array then scans it.
Sorting is done in N log N steps, then scanning takes N steps.
The first approach takes N^2 steps (for each element it scans all N previous elements). On big arrays, that is a very big difference.
Honestly, if you're using a small dataset it does not matter which one you use. If your dataset is in the 10000s, you'll most definitely want to use a hash map for this sort of thing.
This assumes the views are strings or something similar, which it looks like they are.
Using the values as array keys is typically O(n) and possibly the fastest way to track unique values.
$unique_views[$condition] = array();
foreach ($views as $view) {
    // The array keys act as a hash set: the lookup is effectively constant time.
    if (!array_key_exists($view, $unique_views[$condition])) {
        $unique_views[$condition][$view] = true;
    }
}
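If the plain list of unique values is needed afterwards (an extra step, not shown above), the keys can be pulled back out with array_keys($unique_views[$condition]).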
TL;DR: foreach combined with if (!in_array()) is better.
Truthfully, you should not really worry about which performs better; in most cases the difference is so small it's negligible (unless you're really doing some big-data work). I would suggest going with whatever seems more readable.
If you're interested, check out this script I wrote. It loops each case 100,000 times, and both take between 50 and 200 ms.
https://3v4l.org/lkTCF
Note that array_unique() keeps the original keys, so to counter that we also have to wrap the result with array_values().
In case the link ever dies:
<?php
$loops = 100000;

$start = microtime(true);
for ($l = 0; $l < $loops; $l++) {
    $x = [1,2,3,4,6,7,8,9];
    for ($i = 0; $i <= 10; $i++) {
        if (!in_array($i, $x)) {
            $x[] = $i;
        }
    }
}
$duration = microtime(true) - $start;
echo "in_array took $duration<br>".PHP_EOL;

$start = microtime(true);
for ($l = 0; $l < $loops; $l++) {
    $x = [1,2,3,4,6,7,8,9];
    $x = array_values(array_unique(array_merge($x, [0,1,2,3,4,5,6,7,8,9,10])));
}
$duration = microtime(true) - $start;
echo "array_unique took $duration<br>".PHP_EOL;

Project Euler || Question 10

I'm attempting to solve Project Euler problem 10 in PHP and am running into a problem with the for-loop conditions inside the while loop. Could someone point me in the right direction? Am I on the right track here?
The problem, by the way, is to find the sum of all prime numbers below 2,000,000.
Other note: the problem I'm encountering is that this seems to be a memory hog, and besides implementing the sieve I'm not sure how else to approach it. So I'm wondering if I did something wrong in the implementation.
<?php
// The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
// Additional information:
// Sum below 100: 1060
// 1000: 76127
// (for testing)
// Find the sum of all the primes below 2,000,000.

// First, let's set n = 2 mill or the number we wish to find
// the primes under.
$n = 2000000;

// Then, let's set p = 2, the first prime number.
$p = 2;

// Now, let's create a list of all numbers from p to n.
$list = range($p, $n);

// Now the loop for Sieve of Eratosthenes.
// Also, let $i = 0 for a counter.
$i = 0;
while ($p * $p < $n) {
    // Strike off all multiples of p less than or equal to n
    for ($k = 0; $k < $n; $k++) {
        if ($list[$k] % $p == 0) {
            unset($list[$k]);
        }
    }
    // Re-initialize array
    sort($list);
    // Find first number on list after p. Let that equal p.
    $i = $i + 1;
    $p = $list[$i];
}
echo array_sum($list);
?>
You can make a major optimization to your middle loop.
for ($k = 0; $k < $n; $k++) {
    if ($list[$k] % $p == 0) {
        unset($list[$k]);
    }
}
Begin at 2*$p and increment by $p instead of by 1. This eliminates the need for the divisibility check and reduces the total number of iterations.
for ($k = 2 * $p; $k < $n; $k += $p) {
    if (isset($list[$k])) unset($list[$k]); // thanks matchu!
}
The suggestion above to check only odd numbers to begin with (other than 2) is a good idea as well, although since the inner loop never gets off the ground for those cases I don't think it's that critical. I also can't help thinking the unsets are inefficient, though I'm not 100% sure about that.
Here's my solution, using a 'boolean' array for the primes rather than actually removing elements. I like using map, filter, reduce and the like, but I figured I'd stick close to what you've done, and this might be more efficient (although longer) anyway.
$top = 2000000; // primes below two million, per the problem statement
// 'Boolean' map of candidate => 1 for 2 .. 1,999,999; composites get set to 0.
$plist = array_fill(2, $top - 2, 1);
for ($a = 2; $a <= sqrt($top) + 1; $a++) {
    if ($plist[$a] == 1) {
        for ($b = $a + $a; $b < $top; $b += $a) {
            $plist[$b] = 0;
        }
    }
}
$sum = 0;
foreach ($plist as $k => $v) {
    $sum += $k * $v;
}
echo $sum;
When I did this for Project Euler I used Python, as I did for most of the problems. But someone who used PHP along the same lines as my solution claimed it ran in 7 seconds (page 2's SekaiAi, for those who can look). I don't really care for his style (putting the body of a for loop into its increment clause!), or the use of globals and the function he has, but the main points are all there. My convenient means of testing PHP runs through a server on a VMWareFusion local machine, so it's well slower; I can't really comment from experience.
I've got the code to the point where it runs and passes on small examples (17, for instance). However, it's been 8 or so minutes and it's still running on my machine. I suspect that this algorithm, though simple, may not be the most effective, since it has to run through a lot of numbers a lot of times. (2 million tests on your first pass, 1 million on your next, and the passes remove less and less at a time as you go.) It also uses a lot of memory, since you're, y'know, storing a list of millions of integers.
Regardless, here's my final copy of your code, with a list of the changes I made and why. I'm not sure that it works for 2,000,000 yet, but we'll see.
EDIT: It hit the right answer! Yay!
Set memory_limit to -1 to allow PHP to take as much memory as it wants for this very special case (very, very bad idea in production scripts!)
In PHP, use % instead of mod
The inner and outer loops can't use the same variable; PHP considers them to have the same scope. Use, maybe, $j for the inner loop.
To avoid having the prime strike itself off in the inner loop, start $j at $i + 1
On the unset, you used $arr instead of $list ;)
You missed a $ on the unset, so PHP interprets $list[j] as $list['j']. Just a typo.
I think that's all I did. I ran it with some progress output, and the highest prime it's reached by now is 599, so I'll let you know how it goes :)
My strategy in Ruby for this problem was just to check whether each number under n was prime, testing divisors from 2 up to floor(sqrt(n)). It's also probably not an optimal solution and takes a while to execute, but only about a minute or two. That could be the algorithm, or it could just be Ruby being better at this sort of job than PHP :/
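A rough PHP equivalent of that trial-division strategy (my sketch; the poster's code was Ruby) would be:

<?php
// Test each candidate against divisors up to floor(sqrt(n)),
// then sum the primes found below two million.
function is_prime(int $n): bool {
    if ($n < 2) return false;
    for ($d = 2; $d * $d <= $n; $d++) {
        if ($n % $d == 0) return false;
    }
    return true;
}

$sum = 0;
for ($i = 2; $i < 2000000; $i++) {
    if (is_prime($i)) $sum += $i;
}
echo $sum; // sum of all primes below 2,000,000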
Final code:
<?php
ini_set('memory_limit', -1);
// The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
// Additional information:
// Sum below 100: 1060
// 1000: 76127
// (for testing)
// Find the sum of all the primes below 2,000,000.

// First, let's set n = 2 mill or the number we wish to find
// the primes under.
$n = 2000000;

// Then, let's set p = 2, the first prime number.
$p = 2;

// Now, let's create a list of all numbers from p to n.
$list = range($p, $n);

// Now the loop for Sieve of Eratosthenes.
// Also, let $i = 0 for a counter.
$i = 0;
while ($p * $p < $n) {
    // Strike off all multiples of p less than or equal to n
    for ($j = $i + 1; $j < $n; $j++) {
        if ($list[$j] % $p == 0) {
            unset($list[$j]);
        }
    }
    // Re-initialize array
    sort($list);
    // Find first number on list after p. Let that equal p.
    $i = $i + 1;
    $p = $list[$i];
    echo "$i: $p\n";
}
echo array_sum($list);
?>

Calculate greatest number of days between two consecutive dates

If you have an array of ISO dates, how would you calculate the most days between two sequential dates from the array?
$array = array('2009-03-11', '2009-03-12', '2009-04-12', '2009-05-03', '2009-10-30');
I think I need a loop, some sort of iterating variable and a sort. I can't quite figure it out.
This is actually being output from MySQL.
Here is how you can do it in PHP:
<?php
$array = array('2009-03-11', '2009-03-12', '2009-04-12', '2009-05-03', '2009-10-30');

# PHP was throwing errors until I set this.
# It may be unnecessary depending on where you
# are using your code:
date_default_timezone_set("GMT");

$max = 0;
if (count($array) > 1) {
    for ($i = 0; $i < count($array) - 1; $i++) {
        $start = strtotime($array[$i]);
        $end = strtotime($array[$i + 1]);
        $diff = $end - $start;
        if ($diff > $max) $max = $diff;
    }
}
$max = $max / (60 * 60 * 24);
?>
It loops through your items (it executes one less time than there are items) and compares each consecutive pair. If a difference is larger than the current maximum, it updates $max. The difference is in seconds, so after the loop is over we convert seconds into days.
This PHP script will give you the largest interval (the code was mangled in the original post; reconstructed along the same lines as the answer above):
<?php
$array = array('2009-03-11', '2009-03-12', '2009-04-12', '2009-05-03', '2009-10-30');
date_default_timezone_set("GMT");
$maxinterval = 0;
if (count($array) > 1) {
    for ($i = 0; $i < count($array) - 1; $i++) {
        $days = (strtotime($array[$i + 1]) - strtotime($array[$i])) / (60 * 60 * 24);
        if ($days > $maxinterval) $maxinterval = $days;
    }
}
echo $maxinterval;
?>
EDIT:
As [originally] worded, the question can be understood in [at least ;-)] two ways:
A) The array contains a list of dates in ascending order. The task is to find the longest period (expressed in number of days) between two consecutive dates in the array.
B) The array is not necessarily sorted. The task is to find the longest period (expressed in number of days) between any two dates in the array.
The following provides an answer to the "B" understanding of the question. For a response to "A", see dcneiner's solution.
No sorting needed!...
If the data comes from MySQL, you may have the DBMS return the MIN and MAX values directly for the considered list.
EDIT: As indicated by Darkerstar, the way the data is structured [and also the existing SQL query, which returns the complete list as indicated in the question] generally dictates how the query producing the MIN and MAX values should be structured.
Maybe something like this:
SELECT MIN(the_date_field), MAX(the_date_field)
FROM the_table
WHERE -- whatever where conditions if any
--Note: no GROUP BY needed
If, somehow, you cannot use SQL, a single pass through the list will allow you to obtain the MIN and MAX values (in O(n) time, that is).
The algorithm is trivial:
Set the Min and Max values to the first item in the [unsorted] list.
Iterate through each following item in the list, comparing it with the Min value and replacing the Min if the item is smaller, and doing likewise for the Max value.
With the Min and Max values in hand, a simple difference gives the maximum number of days.
In PHP, it looks like the following:
<?php
$array = array('2009-03-11', '2009-03-12', '2009-04-12', '2009-05-03', '2009-10-30');

# may need this as suggested by dcneiner
date_default_timezone_set("GMT");

$max = $array[0];
$min = $max;
for ($i = 1; $i < count($array); $i++) {
    // Note that since the strings in the array are in the format YYYY-MM-DD,
    // they can be compared as-is without requiring, say, strtotime conversion.
    if ($array[$i] < $min)
        $min = $array[$i];
    if ($array[$i] > $max)
        $max = $array[$i];
}
$day_count = (strtotime($max) - strtotime($min)) / (60 * 60 * 24);
?>
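For the sample array, $min ends up as '2009-03-11' and $max as '2009-10-30', so $day_count comes out to 233.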
