Background;
I want to create a dropdown menu, within a form, for a fun gambling game (students can 'bet' on how sure they are that they are right).
Variables;
$balance
Students begin with £3 and play on the £10 table
$table (there is a:
£10 table, with a range of 1, 2, 3 etc. to 10;
a £100 table, with a range of 10, 20, 30 etc. to 100;
and a £1,000 table, with a range of 100, 200, 300, 400 etc. to 1000.)
I have assigned $table to equal the number of zeros in the table's maximum value,
e.g. $table = 2; for the £100 table.
Limitations;
I only want the dropdown menu to offer the highest 12 possible values (this could include values from the table below - important!).
Students are NOT automatically allowed to play on the 'next' table.
Resources;
an array of possible values;
$a = array(1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100,200,300,400,500,600,700,800,900,1000);
I can write a way to restrict the array by table;
(the maximum key for any table is (9*$table)) // hence why I use the zero count above (the real game goes to £1 billion!)
$arrayMaxPos = (9*$table);
$maxbyTable = array_slice($a, 0, $arrayMaxPos);
Now I need a way to make sure no VALUE in $maxbyTable is greater than $balance,
to create a $maxBet array of all allowed bets.
THIS IS WHERE I'M STUCK!
(I would then perform "array_slice($maxBet, -12);" to present only the highest 12 in the dropdown)
EDIT - I'd prefer NOT to have to use array_walk, because it seems unnecessary when I know where I want the array to end.
SECOND EDIT - Apologies, I realised that there is a way to mathematically ascertain which KEY maps to the highest possible bet.
It would be as follows
$integerLength = strlen($balance);//number of digits in $balance
$firstDigit = substr($balance, 0, 1);
then with some trickery because of this particular pattern
$maxKeyValue = (($integerLength*9) - 10 + $firstDigit);
So for example;
$balance = 792;
$maxKeyValue = ((3*9) - 10 + 7);// (key[24] = 700)
This works for this particular pattern, though, and does not solve my general programming problem.
Optional!
First of all, assuming the same rule applies, you don't need the $a array to know what prices are allowed on table $n
$table = $n; //$n being an integer
for ($i = 1; $i <= 10; $i++) {
$a[] = $i * pow(10, $n - 1); // $n is the number of zeros on the table's maximum value
}
This will generate a perfectly valid array (where table #1 is 1-10, table #2 is 10-100, etc.).
As for slicing it according to value, use a foreach loop and generate a new array, then stop when you hit the limit.
foreach ($a as $value) {
if ($value > $balance) { break; }
$allowedByTable[] = $value;
}
This will leave you with an array $allowedByTable that only contains the possible bets that do not exceed the user's current balance.
Important note
Even though you set what you think are the right options, never trust user input, and always validate the input on the server side. It's fairly trivial for someone to change the value in the combobox using DOM manipulation and bet sums they're not supposed to have. Always check that the input you're getting is what you expect it to be!
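For instance, a minimal server-side check might look like the sketch below. It assumes the bet arrives in a hypothetical $_POST['bet'] field and that $allowedByTable has been rebuilt on the server from the stored $balance and $table, exactly as above.
// Rebuild the allowed bets server-side, then verify the submitted value against them.
// $_POST['bet'] is an assumed field name, used here only for illustration.
$bet = (int) ($_POST['bet'] ?? 0);
if (!in_array($bet, $allowedByTable, true)) {
    // The dropdown was tampered with, or the bet exceeds the balance/table rules.
    die('Invalid bet.');
}
$balance -= $bet; // safe to continue with the wager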
Related
I am attempting to find the Cartesian product and apply specific criteria to it.
I have four pools of 25 people each. Each person has a score and a price. Each person in each pool looks like this:
[0] => array(
"name" => "jacob",
"price" => 15,
"score" => 100
),
[1] => array(
"name" => "daniel",
"price" => 22,
"score" => 200
)
I want to find the best combination of people, with one person being picked from each pool. However, there is a ceiling price where no grouping can exceed a certain price.
I have been messing with cartesians and permutation functions and cannot seem to figure out how to do this. The only way I know how to code it is to have nested foreach loops, but that is incredibly taxing.
This code below, as you can see, is incredibly inefficient. Especially if the pools increase!
foreach($poolA as $vA) {
foreach($poolB as $vB) {
foreach($poolC as $vC) {
foreach($poolD as $vD) {
// calculate total price and check if valid
// calculate total score and check if greatest
// if so, add to $greatest array
}
}
}
}
I also thought I could find a way to calculate the total price/score ratio and use that to my advantage, but I don't know what I'm missing.
As pointed out by Barmar, sorting the people in each pool allows you to halt the loops early when the total price exceeds the limit, and hence reduces the number of cases you need to check. However, the asymptotic complexity of applying this improvement is still O(n^4) (where n is the number of people in a pool).
I will outline an alternative approach with better asymptotic complexity as follows:
Construct a pool X that contains all pairs of people with one from pool A and the other from pool B.
Construct a pool Y that contains all pairs of people with one from pool C and the other from pool D.
Sort the pairs in pool X by total price. Then for any pairs with the same price, retain the one with the highest score and discard the remaining pairs.
Sort the pairs in pool Y by total price. Then for any pairs with the same price, retain the one with the highest score and discard the remaining pairs.
Do a loop with two pointers to check over all possible combinations that satisfy the price constraint, where the head pointer starts at the first item in pool X, and the tail pointer starts at the last item in pool Y. Sample code is given below to illustrate how this loop works:
==========================================================================
$head = 0;
$tail = sizeof($poolY) - 1;
while ($head < sizeof($poolX) && $tail >= 0) {
$total_price = $poolX[$head]['price'] + $poolY[$tail]['price'];
// Your logic goes here...
if ($total_price > $price_limit) {
$tail--;
} else if ($total_price < $price_limit) {
$head++;
} else {
$head++;
$tail--;
}
}
for ($i = $head; $i < sizeof($poolX); $i++) {
// Your logic goes here...
}
for ($i = $tail; $i >= 0; $i--) {
// Your logic goes here...
}
==========================================================================
The complexity of steps 1 and 2 is O(n^2), and steps 3 and 4 can be done in O(n^2 log(n)) using a balanced binary tree. Step 5 is essentially a linear scan over n^2 items, so its complexity is also O(n^2). Therefore the overall complexity of this approach is O(n^2 log(n)).
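A rough sketch of steps 1 and 3 (building pool X from pools A and B, sorting by total price, and keeping only the best score per price) might look like the following; the 'price' and 'score' keys follow the structure from the question, and $poolA/$poolB are assumed to be arrays of such people.
// Step 1: all pairs (one person from A, one from B) with combined price and score.
$poolX = [];
foreach ($poolA as $a) {
    foreach ($poolB as $b) {
        $poolX[] = [
            'price' => $a['price'] + $b['price'],
            'score' => $a['score'] + $b['score'],
        ];
    }
}
// Step 3: sort by total price, then keep only the highest score for each distinct price.
usort($poolX, function ($p, $q) { return $p['price'] - $q['price']; });
$bestPerPrice = [];
foreach ($poolX as $pair) {
    $price = $pair['price'];
    if (!isset($bestPerPrice[$price]) || $pair['score'] > $bestPerPrice[$price]['score']) {
        $bestPerPrice[$price] = $pair;
    }
}
$poolX = array_values($bestPerPrice); // still ordered by ascending price
Pool Y would be built the same way from pools C and D (steps 2 and 4).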
A couple of things to note about your approach here. Speaking strictly from a mathematics perspective, you're calculating way more permutations than is actually necessary to arrive at a definitive answer.
In combinatorics, there are two important questions to ask in order to arrive at the exact number of permutations necessary to yield all possible combinations.
Does order matter? (for your case, it does not)
Is repetition allowed? (for your case, it is not necessary to repeat)
Since the answer to both of these questions is no, you need only a fraction of the iterations you're currently doing with your nested loop. Currently you are doing pow(25, 4) permutations, which is 390625. You actually only need n! / (r! (n-r)!), or gmp_fact(25) / (gmp_fact(4) * gmp_fact(25 - 4)), which is only 12650 total combinations.
Here's a simple example of a function that produces combinations without repetition (and where order does not matter), using a generator in PHP (taken from this SO answer).
function comb($m, $a) {
if (!$m) {
yield [];
return;
}
if (!$a) {
return;
}
$h = $a[0];
$t = array_slice($a, 1);
foreach(comb($m - 1, $t) as $c)
yield array_merge([$h], $c);
foreach(comb($m, $t) as $c)
yield $c;
}
$a = range(1,25); // 25 people in each pool
$n = 4; // 4 pools
foreach(comb($n, $a) as $i => $c) {
echo $i, ": ", array_sum($c), "\n";
}
It would be pretty easy to modify the generator function to check whether the sum of prices meets/exceeds the desired threshold and only return valid results from there (i.e. abandoning early where needed).
The reason repetition and order are not important here for your use case, is because it doesn't matter whether you add $price1 + $price2 or $price2 + $price1, the result will undoubtedly be the same in both permutations. So you only need to add up each unique set once to ascertain all possible sums.
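As a rough sketch (not the exact modification, and with assumed example data), the filtering can also be done around the comb() generator above: skip any combination whose total price exceeds a ceiling and remember the best-scoring one that remains.
// Assumed example data; the 'price'/'score' keys follow the question's structure.
$people = [
    ["name" => "a", "price" => 15, "score" => 100],
    ["name" => "b", "price" => 22, "score" => 200],
    ["name" => "c", "price" => 9,  "score" => 80],
    ["name" => "d", "price" => 30, "score" => 300],
    ["name" => "e", "price" => 12, "score" => 150],
];
$ceiling = 70; // assumed price ceiling

$best = null;
foreach (comb(4, $people) as $group) {
    $price = array_sum(array_column($group, 'price'));
    if ($price > $ceiling) {
        continue; // abandon over-budget combinations
    }
    $score = array_sum(array_column($group, 'score'));
    if ($best === null || $score > $best['score']) {
        $best = ['group' => $group, 'price' => $price, 'score' => $score];
    }
}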
Similar to chiwang's solution, you may eliminate up front every group member for whom another member of the same group exists with the same or a higher score at a lower price.
Maybe you can eliminate many members in each group with this approach.
You may then either use this technique to build two pairs and repeat the filtering (eliminate pairs where another pair exists with a higher score for the same or lower cost) and then combine the pairs the same way, or add a member step by step (one pair, a triple, a quartet).
If some members exceed the allowed total price on their own, they can be eliminated up front.
If you order the 4 groups by score descending and you find a solution abcd where the total price is legal, you have found the optimal solution for that given abc.
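A sketch of that per-pool dominance elimination (keep a member only if no other member of the same pool matches or beats their score at a strictly lower price); the 'price' and 'score' keys are assumed to match the question's structure:
// Drop dominated members: anyone matched or beaten on score by a strictly
// cheaper member of the same pool can never be part of an optimal grouping.
function removeDominated(array $pool): array
{
    $kept = [];
    foreach ($pool as $candidate) {
        foreach ($pool as $other) {
            if ($other['price'] < $candidate['price']
                && $other['score'] >= $candidate['score']) {
                continue 2; // $candidate is dominated, skip it
            }
        }
        $kept[] = $candidate;
    }
    return $kept;
}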
The responses here helped me figure out the best way for me to do this.
I haven't optimized the function yet, but essentially I looped through the pools two at a time to find the combined salaries/scores for each combination across the two pools.
I stored the combined salary -> score combination in a new array, and if the salary already existed, I'd compare scores and remove the lower one.
$results = array();
foreach($poolA as $A) {
foreach($poolB as $B) {
$total_salary = $A['Salary'] + $B['Salary'];
$total_score = $A['Score'] + $B['Score'];
$pids = array($A['pid'], $B['pid']);
if(isset($results[$total_salary])) {
if($total_score > $results[$total_salary]['Score']) {
$results[$total_salary]['Score'] = $total_score;
$results[$total_salary]['pid'] = $pids;
}
} else {
$results[$total_salary]['Score'] = $total_score;
$results[$total_salary]['pid'] = $pids;
}
}
}
After this loop, I have another one that is identical, except my foreach loops are between $results and $poolC.
foreach($results as $R) {
foreach($poolC as $C) {
and finally, I do it one last time for $poolD.
I am working on optimizing the code by putting all four foreach loops into one.
Thank you everyone for your help, I was able to loop through 9 lists with 25+ people in each and find the best result in an incredibly quick processing time!
I have a set of items. I need to randomly pick one. The problem is that they each have a weight of 1-10. A weight of 2 means that the item is twice as likely to be picked than a weight of 1. A weight of 3 is three times as likely.
I currently fill an array with each item. If the weight is 3, I put three copies of the item in the array. Then, I pick a random item.
My method is fast, but uses a lot of memory. I am trying to think of a faster method, but nothing comes to mind. Anyone have a trick for this problem?
EDIT: My Code...
Apparently, I wasn't clear. I do not want to use (or improve) my code. This is what I did.
//Given an array $a where $a[0] is an item name and $a[1] is the weight from 1 to 100.
$b = array();
foreach($a as $t)
$b = array_merge($b, array_fill(0,$t[1],$t));
$item = $b[array_rand($b)];
This required me to check every item in $a and uses max_weight/2*size of $a memory for the array. I wanted a COMPLETELY DIFFERENT algorithm.
Further, I asked this question in the middle of the night using a phone. Typing code on a phone is nearly impossible because those silly virtual keyboards simply suck. It auto-corrects everything, ruining any code I type.
And further still, I woke up this morning with an entirely new algorithm that uses virtually no extra memory at all and does not require checking every item in the array. I posted it as an answer below.
This one's your huckleberry.
$arr = array(
array("val" => "one", "weight" => 1),
array("val" => "two", "weight" => 2),
array("val" => "three", "weight" => 3),
array("val" => "four", "weight" => 4)
);
$weight_sum = 0;
foreach($arr as $val)
{
$weight_sum += $val['weight'];
}
$r = rand(1, $weight_sum);
print "random value is $r\n";
for($i = 0; $i < count($arr); $i++)
{
if($r <= $arr[$i]['weight'])
{
print "$r <= {$arr[$i]['weight']}, this is our match\n";
print $arr[$i]['val'] . "\n";
break;
}
else
{
print "$r > {$arr[$i]['weight']}, subtracting weight\n";
$r -= $arr[$i]['weight'];
print "new \$r is $r\n";
}
}
No need to generate arrays containing an item for every weight, no need to fill an array with n elements for a weight of n. Just generate a random number between 1 and the total weight, then loop through the array until the random number is less than or equal to the current element's weight. If it isn't, subtract that weight from the random number and continue.
Sample output:
# php wr.php
random value is 8
8 > 1, subtracting weight
new $r is 7
7 > 2, subtracting weight
new $r is 5
5 > 3, subtracting weight
new $r is 2
2 <= 4, this is our match
four
This should also support fractional weights.
Modified version using an array keyed by weight, rather than by item:
$arr2 = array();
for($i = 0; $i <= 500000; $i++)
{
$weight = rand(1, 10);
$num = rand(1, 1000);
$arr2[$weight][] = $num;
}
$start = microtime(true);
$weight_sum = 0;
foreach($arr2 as $weight => $vals) {
$weight_sum += $weight * count($vals);
}
print "weighted sum is $weight_sum\n";
$r = rand(1, $weight_sum);
print "random value is $r\n";
$found = false;
$elem = null;
foreach($arr2 as $weight => $vals)
{
if($found) break;
for($j = 0; $j < count($vals); $j ++)
{
if($r <= $weight)
{
$elem = $vals[$j];
$found = true;
break;
}
else
{
$r -= $weight;
}
}
}
$end = microtime(true);
print "random element is: $elem\n";
print "total time is " . ($end - $start) . "\n";
With sample output:
# php wr2.php
weighted sum is 2751550
random value is 345713
random element is: 681
total time is 0.017189025878906
The measurement is hardly scientific - it fluctuates depending on where in the array the element falls (obviously) - but it seems fast enough for huge datasets.
This approach requires two random calculations, but they should be faster and require about 1/4 of the memory, with some reduced accuracy if weights have disproportionate counts. (See the Update for increased accuracy at the cost of some memory and processing.)
Store a multidimensional array where each item is placed in an inner array keyed by its weight:
$array[$weight][] = $item;
// example: Item with a weight of 5 would be $array[5][] = 'Item'
Generate a new array with the weights (1-10) appearing n times for n weight:
foreach($array as $n=>$null) {
for ($i=1;$i<=$n;$i++) {
$weights[] = $n;
}
}
The above array would be something like: [ 1, 2, 2, 3, 3, 3, 4, 4, 4, 4 ... ]
First calculation: Get a random weight from the weighted array we just created
$weight = $weights[mt_rand(0, count($weights)-1)];
Second calculation: Get a random key from that weight array
$value = $array[$weight][mt_rand(0, count($array[$weight])-1)];
Why this works: You solve the weighted issue by using the weighted array of integers we created. Then you select randomly from that weighted group.
Update: Because of the possibility of disproportionate counts of items per weight, you could add another loop and array for the counts to increase accuracy.
foreach($array as $n=>$null) {
$counts[$n] = count($array[$n]);
}
foreach($array as $n=>$null) {
// Calculate proportionate weight (number of items in this weight opposed to minimum counted weight)
$proportion = $n * ($counts[$n] / min($counts));
for ($i=1; $i<=$proportion; $i++) {
$weights[] = $n;
}
}
What this does: if you have 2000 10's and 100 1's, it'll add 200 10's (20 * 10 - 20 because it has 20x the count, and 10 because it is weighted 10) instead of 10 10's, to make it proportionate to how many are in there as opposed to the minimum weight count. So to be accurate, instead of adding one entry for EVERY possible key, you are just being proportionate based on the MINIMUM count of weights.
I greatly appreciate the answers above. Please consider this answer, which does not require checking every item in the original array.
// Given $a as an array of items
// where $a[0] is the item name and $a[1] is the item weight.
// It is known that weights are integers from 1 to 100.
for($i=0; $i<sizeof($a); $i++) // Safeguard described below
{
$item = $a[array_rand($a)];
if(rand(1,100)<=$item[1]) break;
}
This algorithm only requires storage for two variables ($i and $item) as $a was already created before the algorithm kicked in. It does not require a massive array of duplicate items or an array of intervals.
In a best-case scenario, this algorithm will touch one item in the original array and be done. In a worst-case scenario, it will touch n items in an array of n items (not necessarily every item in the array as some may be touched more than once).
If there was no safeguard, this could run forever. The safeguard is there to stop the algorithm if it simply never picks an item. When the safeguard is triggered, the last item touched is the one selected. However, in millions of tests using random data sets of 100,000 items with random weights of 1 to 10 (changing rand(1,100) to rand(1,10) in my code), the safeguard was never hit.
I made histograms comparing the frequency of items selected among my original algorithm, the ones from answers above, and the one in this answer. The differences in frequencies are trivial - easy to attribute to variances in the random numbers.
EDIT... It is apparent to me that my algorithm may be combined with the algorithm pala_ posted, removing the need for a safeguard.
In pala_'s algorithm, a list is required, which I call an interval list. To simplify, you begin with a random_weight that is rather high. You step down the list of items and subtract the weight of each one until your random_weight falls to zero (or less). Then, the item you ended on is your item to return. There are variations on this interval algorithm that I've tested and pala_'s is a very good one. But, I wanted to avoid making a list. I wanted to use only the given weighted list and never touch all the items. The following algorithm merges my use of random jumping with pala_'s interval list. Instead of a list, I randomly jump around the list. I am guaranteed to get to zero eventually, so no safeguard is needed.
// Given $a as the weighted array (described above)
$weight = rand(1,100); // The bigger this is, the slower the algorithm runs.
while($weight>0)
{
$item = $a[array_rand($a)];
$weight-= $item[1];
}
// $item is the random item you want.
I wish I could select both pala_ and this answer as the correct answers.
I'm not sure if this is "faster", but I think it may be more "balanced" between memory usage and speed.
The thought is to transform your current implementation (an array of roughly 500,000 entries) into an array with one entry per item (100,000 entries), with each item's lowest "origin" position as the key and its original index as the value:
<?php
$set=[["a",3],["b",5]];
$current_implementation=["a","a","a","b","b","b","b","b"];
// 0=>0 means the lowest "position" 0
// points to 0 in the set;
// 3=>1 means the lowest "position" 3
// points to 1 in the set;
$my_implementation=[0=>0,3=>1];
And then randomly pick a number between 0 and (the highest "origin" position + that element's weight - 1):
// 3 is the lowest position of the last element ("b")
// and 5 the weight of that last element
$my_implementation_pick=mt_rand(0,3+5-1);
Full code:
<?php
function randomPickByWeight(array $set)
{
$low=0;
$high=0;
$candidates=[];
foreach($set as $key=>$item)
{
$candidates[$high]=$key;
$high+=$item["weight"];
}
$pick=mt_rand($low,$high-1);
while(!array_key_exists($pick,$candidates))
{
$pick--;
}
return $set[$candidates[$pick]];
}
$cache=[];
for($i=0;$i<100000;$i++)
{
$cache[]=["item"=>"item {$i}","weight"=>mt_rand(1,10)];
}
$time=time();
for($i=0;$i<100;$i++)
{
print_r(randomPickByWeight($cache));
}
$time=time()-$time;
var_dump($time);
3v4l.org demo
3v4l.org has some time limits on code, so the demo didn't finish there. On my laptop the above demo finished in 10 seconds (i7-4700HQ).
Here is my offer, in case I've understood you right. Take a look, and if there are any questions I'll explain.
Some words in advance:
- My sample uses only 3 weight levels, to keep it clear.
- With the outer while I'm simulating your main loop - I count only to 100.
- The array must be initialised with one set of initial numbers, as shown in my sample.
- In every pass of the main loop I get only one random value while keeping the weighting overall.
<?php
$array=array(
0=>array('item' => 'A', 'weight' => 1),
1=>array('item' => 'B', 'weight' => 2),
2=>array('item' => 'C', 'weight' => 3),
);
$etalon_weights=array(1,2,3);
$current_weights=array(0,0,0);
$ii=0;
while($ii<100){ // Simulates your main loop
// Randomisation cycle
if($current_weights==$etalon_weights){
$current_weights=array(0,0,0);
}
$ft=true;
while($ft){
$curindex=rand(0,(count($array)-1));
$cur=$array[$curindex];
if($current_weights[$cur['weight']-1]<$etalon_weights[$cur['weight']-1]){
echo $cur['item'];
$array[]=$cur;
$current_weights[$cur['weight']-1]++;
$ft=false;
}
}
$ii++;
}
?>
I'll use this input array for my explanation:
$values_and_weights=array(
"one"=>1,
"two"=>8,
"three"=>10,
"four"=>4,
"five"=>3,
"six"=>10
);
The simple version isn't going to work for you because your array is so large. It requires no array modification but may need to iterate the entire array, and that's a deal breaker.
/*$pick=mt_rand(1,array_sum($values_and_weights));
$x=0;
foreach($values_and_weights as $val=>$wgt){
if(($x+=$wgt)>=$pick){
echo "$val";
break;
}
}*/
For your case, re-structuring the array will offer great benefits.
The cost in memory for generating a new array will be increasingly justified as:
array size increases and
number of selections increases.
The new array replaces each value's "weight" with a "limit", produced by adding the previous element's limit (the running total) to the current element's weight.
Then flip the array so that the limits are the array keys and the values are the array values.
The selection logic is: the selected value will have the lowest limit that is >= $pick.
// Declare new array using array_walk one-liner:
array_walk($values_and_weights,function($v,$k)use(&$limits_and_values,&$x){$limits_and_values[$x+=$v]=$k;});
//Alternative declaration method - 4-liner, foreach() loop:
/*$x=0;
foreach($values_and_weights as $val=>$wgt){
$limits_and_values[$x+=$wgt]=$val;
}*/
var_export($limits_and_values);
$limits_and_values looks like this:
array (
1 => 'one',
9 => 'two',
19 => 'three',
23 => 'four',
26 => 'five',
36 => 'six',
)
Now to generate the random $pick and select the value:
// $x (from walk/loop) is the same as writing: end($limits_and_values); $x=key($limits_and_values);
$pick=mt_rand(1,$x); // pull random integer between 1 and highest limit/key
while(!isset($limits_and_values[$pick])){++$pick;} // smallest possible loop to find key
echo $limits_and_values[$pick]; // this is your random (weighted) value
This approach is brilliant because isset() is very fast and the maximum number of isset() calls in the while loop can only be as many as the largest weight (not to be confused with limit) in the array.
FOR YOUR CASE, THIS APPROACH WILL FIND THE VALUE IN 10 ITERATIONS OR LESS!
Here is my Demo that will accept a weighted array (like $values_and_weights), then in just four lines:
Restructure the array,
Generate a random number,
Find the correct value, and
Display it.
I have a fully-populated array of values, and I would like to arbitrarily remove elements from this array with more removed towards the far end.
For example, given input ( where a . signifies a populated index )
............................................
I would like something like
....... . ... .. . . .. . .
My first thought was to count the elements, then iterate over the array generating a random number somewhere between the current index and the total size of the array, eg:
if ( mt_rand( 0, $total ) > $total - $current_index )
//remove this element
however, as this entails generating a random number on every iteration of the loop, it becomes very arduous.
Is there a better way of doing this?
One easy way is to flip a weighted coin for each entry with coin flips more weighted towards the end. For example, if the array is size n, for each entry you could choose a random number from 0 to n-1 and only keep the value if the index is less than or equal to the random number. (That is, keep each entry with probability 1 - index/total.) This has the nice advantage that if you're going to be compacting your array anyways, and you're using a good enough but efficient random number generator (could be a simple integer hash over a nonce), it's going to be rather fast for memory access.
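A quick sketch of that coin-flip idea, assuming a plain 0-indexed $array (keeping each entry with probability 1 - index/total):
// Later indices are increasingly likely to be dropped.
$total = count($array);
$kept = [];
foreach ($array as $index => $value) {
    if ($index <= mt_rand(0, $total - 1)) {
        $kept[] = $value; // survives the weighted coin flip
    }
}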
On the other hand, if you're only blanking out a few items and aren't rearranging the array, you can go with some sort of weighted random number generator that more often chooses numbers toward the end of the index range. For example, if you have a random number generator that produces floats in the range [0,1] (whether the bounds are open or closed likely doesn't matter much), consider obtaining such a random float r and squaring it. This will tend to prefer lower values. You can fix that by flipping it around: 1 - r^2. Of course, you need this to land in your index range of 0 to n-1, so take floor(n * (1 - r^2)) and clamp the result to at most n-1.
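And a sketch of that second idea, removing a fixed number of entries at indices drawn via the 1 - r^2 transform; $removals is an assumed count, and repeated hits on an already-removed index are simply ignored:
$n = count($array);
$removals = 10; // assumed number of entries to blank out
for ($i = 0; $i < $removals; $i++) {
    $r = mt_rand() / mt_getrandmax();                       // float in [0, 1]
    $index = min($n - 1, (int) floor($n * (1 - $r * $r)));  // biased toward the end
    unset($array[$index]);
}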
There's practically an infinite number of variations on both of these techniques.
This is quite probably not the best/most efficient way to do this, but it is the best I can come up with and it does work.
N.B. the codepad example takes a long time to execute, but this is because of the pretty-print loop I added to the end so you can see it visibly working. If you remove the inner loop, execution time drops to acceptable levels.
<?php
$array = range(0, 99);
for ($i = 0, $count = count($array); $i < $count; $i++) {
// Get array keys
$keys = array_keys($array);
// Get a random number between 0 and count($keys) - 1
$rand = mt_rand(0, count($keys) - 1);
// Cut $rand elements off the beginning of the keys
$keys = array_slice($keys, $rand);
// Unset a random key from the remaining keys
unset($array[$keys[array_rand($keys)]]);
}
This method isn't random - it works by you defining a function and its inverse. Different functions, with different constant coefficients, will have different distribution characteristics.
The results are very pattern like, as expected when mapping a continuous function to a discrete structure like an array.
Here's an example using a quadratic function. You could try varying the constant.
demo: http://codepad.org/ojU3s9xM
#as in y = x^2 / 7;
function y($x) {
return $x * $x / 7;
}
function x($y) {
return 7 * sqrt($y);
}
$theArray = range(0,100);
$size = count($theArray);
//use the inverse function to find the max value we can feed into y() without going out of array bounds
$maximumX = x($size);
for ($i=0; $i<$maximumX; $i++) {
$index = (int) y($i);
//unset the index if it still exists, else, the next greatest index
while (!isset($theArray[$index]) && $index < $size) {
$index++;
}
unset($theArray[$index]);
}
for ($i=0; $i<$size; $i++) {
printf("[%-3s]", isset($theArray[$i]) ? $theArray[$i] : '');
}
Is there any way to avoid duplication in random number generation?
I want to create a random number for a special purpose, but it should be a unique value. I don't know how to avoid duplicate random numbers.
i.e., first I got a random number like 1892990070 and created a folder named after that random number (1892990070). My purpose is to never get that number again in the future; if I do, I will have a duplicate folder name.
A random series of numbers can always contain repeated values. You have to keep a record of which numbers have already been used so you can regenerate the number if it's already taken. Like this:
$used = array(); //Initialize the record array. This should only be done once.
//Do like this to draw a number:
do {
$random = rand(0, 2000);
}while(in_array($random, $used));
$used[] = $random; //Save $random into to $used array
My example above will of course only work across a single page load. If it should persist across page loads you'll have to use either sessions (for a single user) or some sort of database (if it should be unique across all users), but the logic is the same.
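A minimal sketch of the session-backed variant (the $_SESSION key name is assumed):
session_start();
if (!isset($_SESSION['used_randoms'])) {
    $_SESSION['used_randoms'] = array();
}
do {
    $random = rand(0, 2000);
} while (in_array($random, $_SESSION['used_randoms']));
$_SESSION['used_randoms'][] = $random; // persists across page loads for this user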
You can write a wrapper for mt_rand which remembers all the random numbers generated before.
function my_rand() {
static $seen = array();
do{
$rand = mt_rand();
}while(isset($seen[$rand]));
$seen[$rand] = 1;
return $rand;
}
The idea of remembering previously generated numbers and drawing new ones is a useful general solution when duplicates are a problem.
But are you sure an eventual duplicate is really a problem? Consider rolling dice. Sometimes they repeat the same value, even in two sequential throws. No one considers that a problem.
If you have a controlled need for choosing random numbers, say shuffling a deck of cards, there are several approaches. (I see there are several recently posted answers to that.)
Another approach is to use the numbers 0, 1, 2, ..., n and modify them in some way, like a Gray Code encoding or exclusive ORing by a constant bit pattern.
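As a small illustration of that second idea (not from the original answer): both a Gray-code encoding and XOR with a fixed mask map distinct counter values to distinct outputs, so a plain incrementing counter yields non-repeating, scrambled-looking numbers.
// Gray code of $n: $n XOR ($n >> 1). Distinct inputs always give distinct outputs.
function grayCode($n) {
    return $n ^ ($n >> 1);
}

$mask = 0x5A5A5A5A; // arbitrary constant bit pattern for the XOR variant
for ($counter = 0; $counter < 5; $counter++) {
    echo grayCode($counter), " ", ($counter ^ $mask), "\n";
}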
For what purpose are you generating the random number? If you are doing something that generates random "picks" of a finite set, like shuffling a deck of cards using a random-number function, then it's easiest to put the set into an array:
$set = array('one', 'two', 'three');
$random_set = array();
while (count($set)) {
# generate a random index into $set
$picked_idx = mt_rand(0, count($set) - 1);
# copy the value out
$random_set []= $set[$picked_idx];
# remove the value from the original set
array_splice($set, $picked_idx, 1);
}
If you are generating unique keys for things, you may need to hash them:
# hold onto the random values we've picked
$already_picked = array();
do {
$new_pick = rand();
# loop until we know we don't have a value that's been picked before
} while (array_key_exists($new_pick, $already_picked));
$already_picked[$new_pick] = 1;
This will generate a string with one occurrence of each digit:
$randomcharacters = '0123456789';
$length = 5;
$newcharacters = str_shuffle($randomcharacters);
$randomstring = substr($newcharacters, 0, $length);
I'm writing an algorithm in PHP to solve a given Sudoku puzzle. I've set up a somewhat object-oriented implementation with two classes: a Square class for each individual tile on the 9x9 board, and a Sudoku class, which has a matrix of Squares to represent the board.
The implementation of the algorithm I'm using is a sort of triple-tier approach. The first step, which will solve only the most basic puzzles (but is the most efficient), is to fill in any squares which can only take a single value based on the board's initial setup, and to adjust the constraints accordingly on the rest of the unsolved squares.
Usually, this process of "constant propagation" doesn't solve the board entirely, but it does solve a sizable chunk. The second tier will then kick in. This parses each unit (or 9 squares which must all have unique number assignments, e.g. a row or column) for the "possible" values of each unsolved square. This list of possible values is represented as a string in the Square class:
class Square {
private $name; // 00, 01, 02, ... , 86, 87, 88
private $peers; // All squares in same row, col, and box
private $number; // Assigned value (0 if not assigned)
private $possibles; // String of possible numbers (1-9)
public function __construct($name, $p = 0) {
$this->name = $name;
$this->setNumber($p);
if ($p == 0) {
$this->possibles = "123456789";
}
}
// ... other functions
}
Given a whole array of unsolved squares in a unit (as described in the second tier above), the second tier will concatenate all the strings of "possibles" into a single string. It will then search through that single string for any unique character values - values which do not repeat themselves. This will indicate that, within the unit of squares, there is only one square that can take on that particular value.
My question is: for implementing this second tier, how can I parse this string of all the possible values in a unit and easily detect the unique value(s)? I know I could create an array where each index is represented by the numbers 1-9, and I could increment the value at the corresponding index by 1 for each possible-value of that number that I find, then scan the array again for any values of 1, but this seems extremely inefficient, requiring two linear scans of an array for each unit, and in a Sudoku puzzle there are 27 units.
This is somewhat like what you have already ruled out as "extremely inefficient", but it uses built-in functions, so it might be quite efficient:
$all_possibilities = "1234567891234";
$unique = array();
foreach (count_chars($all_possibilities, 1) as $c => $occurrences) {
if ($occurrences == 1)
$unique[] = chr($c);
}
print join("", $unique) . "\n";
Prints: "56789"
Consider using a binary number to represent your "possibles" instead, because binary operations like AND, OR, XOR tend to be much faster than string operations.
E.g. if "2" and "3" are possible for a square, use the binary number 000000110 to represent the possibilities for that square.
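For instance, converting a "possibles" string like the one in the question into such a bitmask could look like this (digit d maps to bit d-1; the helper name is mine):
// "23" -> binary 000000110
function possiblesToMask($possibles) {
    $mask = 0;
    if ($possibles === "") {
        return $mask;
    }
    foreach (str_split($possibles) as $digit) {
        $mask |= 1 << ((int) $digit - 1);
    }
    return $mask;
}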
Here's how you could find uniques:
$seenonce = 0;
$seenmore = 0;
foreach($all_possibles_for_this_unit as $possibles) { // bitmasks for each unsolved square in the unit
$seenmore |= ($possibles & $seenonce);
$seenonce |= $possibles;
}
$seenonce ^= $seenmore;
if ($seenonce) {
//something was seen once - now it must be located
}
I'm not sure if this method will actually work faster but it's worth looking into.
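Locating the digits afterwards is then just a matter of reading off which bits of $seenonce are set; a small helper (name assumed) could do it:
// Translate a bitmask such as $seenonce back into the digits 1-9 it represents.
function maskToDigits($mask) {
    $digits = array();
    for ($d = 1; $d <= 9; $d++) {
        if ($mask & (1 << ($d - 1))) {
            $digits[] = $d;
        }
    }
    return $digits;
}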
function singletonsInString($instring) {
$results = array();
for($i = 1; $i < 10; $i++) {
$first_pos = strpos($instring, strval($i));
$last_pos = strrpos($instring, strval($i));
if ( $first_pos !== FALSE and $first_pos == $last_pos )
$results[] = $i;
}
return $results;
}
That'll give you every singleton. Get the first and last positions of a number in the string, and if they match and aren't FALSE (strict comparison in case there's a singleton right at the start), then there's only one occurrence of that number in the string.
If you're super super worried about speed here, you can probably replace the interior of that loop with
$istr = strval($i);
if ( ($first = strpos($instring, $istr)) !== FALSE
and $first == strrpos($instring, $istr) ) $results[] = $i;
for a minimum number of computations. Well, assuming PHP's native strpos is the best way to go about these things, which as far as I know is not unreasonable.
The last time I fooled with Sudoku solving, I had a third class called "Run". A Run instance is created for each row, col and 3x3 square. Every square has three runs associated with it. The Run class contains the set of numbers not yet placed within the run. Solving the board then involves intersecting the sets at each square iteratively. This takes care of 80% of most medium boards and 60% of most hard boards. Once you've gone through the whole board with no changes, you can move on to higher level logic. Each time your higher level logic fills a square, you run through your squares again.
The nice thing about this setup is you can easily add variants to the solver. Say you use the variant where the two diagonals are also unique. You just add a 4th run to those 18 squares.
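A rough sketch of that per-square intersection step (the function name, and the idea of passing in each run's remaining numbers as plain arrays, are assumptions rather than the poster's actual code):
// The candidates for a square are the numbers still unplaced in all three of its runs.
function candidatesFor(array $rowNeeds, array $colNeeds, array $boxNeeds)
{
    return array_values(array_intersect($rowNeeds, $colNeeds, $boxNeeds));
}

// e.g. candidatesFor([1, 4, 7], [2, 4, 9], [4, 5, 7]) yields [4]:
// only one candidate remains, so that square must be 4.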
What I would do is actually use binary bits for storing the values, as another answer suggested. That allows efficient checks and in general might lend Sudoku itself to a more mathematical (i.e. more efficient and shorter) solution (just my impression, I have not researched this).
Basically, you represent the numbers in squares not with digits, but with powers of 2:
"1" = 2^0 = 1 = 000000001
"2" = 2^1 = 2 = 000000010
"3" = 2^2 = 4 = 000000100
"4" = 2^3 = 8 = 000001000
... etc up to
"9" = 2^8 = 256= 100000000
This way, you can simply add the contents of the single squares to find out what numbers are missing in a 3x3 box, a row, or any other subset of the Sudoku, like this:
// shows the possibles for 3x3 square number 1 (00-22)
$sum=0;
for ($i=0; $i< 3; $i++)
for ($j=0; $j < 3; $j++)
$sum += $square["${i}${j}"]->number;
$possibles = $sum ^ 511; // ^ stands for bitwise XOR and 511 is binary 111111111
Now $possibles contains a "1" in the bit positions of the digits that are still possible in this square group, and you can do bitwise operations on the results for other groups to match them together, like this:
eg. let's say:
$possibles1 = 293; // is binary 100100101,
// indicating that this row or 3x3 square has room for "9", "6", "3" and "1"
$possibles2 = 7; // is binary 000000111, indicating it has room for "3", "2" and "1".
// so:
$possibles1 & $possibles2
// bitwise AND, will show binary 101 saying that "3" and "1" is unfilled in both bloces
$possibles1 | $possibles2
// bitwise OR will give that in total it is possible to use "9", "6", "3", "2" and "1" in those two squares together
Here is a way using only PHP built-in functions which should be pretty fast.
function getUniques($sNumbers)
{
return join(array_keys(array_count_values(str_split($sNumbers)),1));
}
echo getUniques("1234567891234"); // returns 56789