I'm reading "PHP 7 Data Structures and Algorithms" chapter "Shortest path using the Floyd-Warshall algorithm"
The author is generating a graph with this code:
$totalVertices = 5;
$graph = [];
for ($i = 0; $i < $totalVertices; $i++) {
    for ($j = 0; $j < $totalVertices; $j++) {
        $graph[$i][$j] = $i == $j ? 0 : PHP_INT_MAX;
    }
}
I don't understand this line:
$graph[$i][$j] = $i == $j ? 0 : PHP_INT_MAX;
It looks like a one-line if statement. Is it the same as this?
if ($i == $j) {
    $graph[$i][$j] = 0;
} else {
    $graph[$i][$j] = PHP_INT_MAX;
}
What is the point of using PHP_INT_MAX?
And at the end, what does the graph look like?
You've correctly understood the ternary (? :) operator.
To answer the other part of your question, see whether the following makes sense to you.
First:
The author initializes the $graph array using the following code:
<?php
$totalVertices = 5; // total nodes (use 0, 1, 2, 3, and 4 instead of A, B, C, D, and E, respectively)
$graph = [];
for ($i = 0; $i < $totalVertices; $i++) {
    for ($j = 0; $j < $totalVertices; $j++) {
        $graph[$i][$j] = $i == $j ? 0 : PHP_INT_MAX;
    }
}
which results in the following matrix:
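(Shown here as plain text, with M standing in for PHP_INT_MAX:)

     0  1  2  3  4
0  [ 0, M, M, M, M ]
1  [ M, 0, M, M, M ]
2  [ M, M, 0, M, M ]
3  [ M, M, M, 0, M ]
4  [ M, M, M, M, 0 ]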
All the entries on the main diagonal are set to 0, as a node's distance to itself equals 0.
All the remaining entries in the 'matrix' are set to PHP_INT_MAX (the largest integer PHP supports) - we'll see why in a minute.
Second:
The author then sets the distances between the nodes that have a direct connection (the edges), writing them manually into the $graph array, as follows:
$graph[0][1] = $graph[1][0] = 10;
$graph[2][1] = $graph[1][2] = 5;
$graph[0][3] = $graph[3][0] = 5;
$graph[3][1] = $graph[1][3] = 5;
$graph[4][1] = $graph[1][4] = 10;
$graph[3][4] = $graph[4][3] = 20;
This results in the following 'matrix' stored in array $graph, with the edge distances filled in:
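(Again as plain text, with M standing in for PHP_INT_MAX:)

      0   1   2   3   4
0  [  0, 10,  M,  5,  M ]
1  [ 10,  0,  5,  5, 10 ]
2  [  M,  5,  0,  M,  M ]
3  [  5,  5,  M,  0, 20 ]
4  [  M, 10,  M, 20,  0 ]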
So why does the author use PHP_INT_MAX for the pairs of nodes that are not directly connected (the non-edges)?
It is a stand-in for 'infinity': the value has to be larger than any real path distance the algorithm could ever compute, otherwise a 'missing' edge would look cheaper than an actual route - and PHP_INT_MAX gives plenty of headroom.
In this particular example, the longest shortest path (A to E) is 20, so any number smaller than 20 in the ternary instead of PHP_INT_MAX would warp the outcome - the algorithm would spit out wrong results.
Or, to look at it another way: in this particular example the author could have used any number bigger than 20 instead of PHP_INT_MAX and still gotten satisfactory results from the algorithm; use a number smaller than 20 and the results come out wrong.
You can give it a try and test:
$graph[$i][$j] = $i == $j ? 0 : 19;
The algorithm will now tell us that the shortest distance from A to E - i.e. $graph[0][4] - equals 19... WRONG.
So using PHP_INT_MAX here gives 'leeway': it lets the algorithm handle edge distances far beyond anything in this example - on the order of 9223372036854775807 (the largest int that can be stored on a 64-bit system) or 2147483647 (on a 32-bit system).
You have two questions here.
The first is regarding the syntax condition ? val_if_true : val_if_false. This is called the "ternary operator". Your assessment regarding the behavior is correct.
The second is regarding the use of PHP_INT_MAX. Every distance is initialized to one of two values: 0 if $i and $j refer to the same node (the diagonal entries), and PHP_INT_MAX otherwise (the potential edges). That is, a node's distance to itself is 0 and its distance to every other node starts out as the largest integer value PHP recognizes. The reason for this is that the Floyd-Warshall algorithm uses the concept of "infinity" to represent minimum distances that have not yet been calculated; since PHP's integers have no "infinity" value, PHP_INT_MAX is used as a stand-in for it.
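For context, here is a rough sketch of how the Floyd-Warshall relaxation loop then uses those values, continuing from the initialization above. This is not the book's exact code, just an illustration of the idea; the explicit PHP_INT_MAX check is an extra precaution of mine so that two "infinities" are never added together.

for ($k = 0; $k < $totalVertices; $k++) {
    for ($i = 0; $i < $totalVertices; $i++) {
        for ($j = 0; $j < $totalVertices; $j++) {
            // skip pairs with no known path through $k (also avoids adding two 'infinities')
            if ($graph[$i][$k] == PHP_INT_MAX || $graph[$k][$j] == PHP_INT_MAX) {
                continue;
            }
            // relax: going via $k is shorter than the best distance known so far
            if ($graph[$i][$k] + $graph[$k][$j] < $graph[$i][$j]) {
                $graph[$i][$j] = $graph[$i][$k] + $graph[$k][$j];
            }
        }
    }
}
echo $graph[0][4]; // shortest distance from A (0) to E (4): 20 in this example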
How to solve nth degree equations in PHP
Example:
1/(1+i) + 1/(1+i)^2 + ... + 1/(1+i)^n = k
Here k is a constant; I'd like to find the value of i.
How can I achieve this in PHP?
First of all, your expression on the left is a geometric sum, so you can rewrite it as (using x=1+i)
1/x * (1 + 1/x + ... + 1/x^(n-1)) = 1/x * (1 - 1/x^n)/(1 - 1/x) = (1 - x^(-n))/(x - 1)
and consequently the equation can be rewritten as
(1 - pow( 1+i, -n))/i = k
Now, from the original expression one knows that the left side, being a sum of convex, monotonically decreasing functions of i, is itself convex and monotonically decreasing, so bisection, any of the regula falsi variants, or the secant method will work sufficiently well.
Use the expansion
(1+i)^(-n) = 1 - n*i + (n*(n+1))/2*i^2 - ...
to get an approximate equation and a first approximation:
1 - (n+1)/2*i = k/n  <=>  i = (1 - k/n)*2/(n+1)
so that you can start a bracketing method on the interval from 0 to twice this i.
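A minimal PHP sketch of that bisection, assuming 0 < k < n so a root exists. The function name solveRate and the bracket-widening step are mine, not part of the original suggestion:

<?php
// Solve (1 - (1+i)^(-n))/i = k for i by bisection, starting from the
// first approximation i0 = (1 - k/n) * 2/(n+1) derived above.
function solveRate(int $n, float $k, float $tol = 1e-12): float
{
    // f is convex and monotonically decreasing in $i; f(0+) tends to $n - $k.
    $f = function (float $i) use ($n, $k): float {
        return (1 - pow(1 + $i, -$n)) / $i - $k;
    };

    $i0 = (1 - $k / $n) * 2 / ($n + 1); // first approximation
    $lo = 1e-12;                        // avoid dividing by zero at i = 0
    $hi = 2 * $i0;
    while ($f($hi) > 0) {               // widen the bracket if it misses the root
        $hi *= 2;
    }

    while ($hi - $lo > $tol) {          // plain bisection
        $mid = ($lo + $hi) / 2;
        if ($f($mid) > 0) {
            $lo = $mid;                 // root lies to the right of $mid
        } else {
            $hi = $mid;
        }
    }
    return ($lo + $hi) / 2;
}

// With n = 5 and i = 0.1 the original sum is about 3.79079, so we should
// get back a value close to 0.1:
echo solveRate(5, 3.79079);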
Try something like this (it evaluates the sum for a given $n and $i):
$n = 5;
$i = 2;
$k = 0;
for ($x = 1; $x <= $n; $x++) {
    $k += 1 / pow((1 + $i), $x);
}
echo $k; // Answer --> 0.49794238683128
I'm attempting to solve a Project Euler problem in PHP and I'm running into trouble with my for loop conditions inside the while loop. Could someone point me in the right direction? Am I on the right track here?
The problem, btw, is to find the sum of all prime numbers below 2,000,000.
Other note: The problem I'm encountering is that it seems to be a memory hog and besides implementing the sieve, I'm not sure how else to approach this. So, I'm wondering if I did something wrong in the implementation.
<?php
// The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
// Additional information:
// Sum below 100: 1060
// 1000: 76127
// (for testing)
// Find the sum of all the primes below 2,000,000.
// First, let's set n = 2 mill or the number we wish to find
// the primes under.
$n = 2000000;
// Then, let's set p = 2, the first prime number.
$p = 2;
// Now, let's create a list of all numbers from p to n.
$list = range($p, $n);
// Now the loop for Sieve of Eratosthenes.
// Also, let $i = 0 for a counter.
$i = 0;
while ($p * $p < $n)
{
    // Strike off all multiples of p less than or equal to n
    for ($k = 0; $k < $n; $k++)
    {
        if ($list[$k] % $p == 0)
        {
            unset($list[$k]);
        }
    }
    // Re-initialize array
    sort($list);
    // Find first number on list after p. Let that equal p.
    $i = $i + 1;
    $p = $list[$i];
}
echo array_sum($list);
?>
You can make a major optimization to your middle loop.
for ($k = 0; $k < $n; $k++)
{
    if ($list[$k] % $p == 0)
    {
        unset($list[$k]);
    }
}
Begin at 2*$p and increment by $p instead of by 1. This eliminates the need for the divisibility check and reduces the total number of iterations:
for ($k = 2 * $p; $k < $n; $k += $p)
{
    if (isset($list[$k])) unset($list[$k]); // thanks matchu!
}
The suggestion above to check only the odd numbers to begin with (other than 2) is a good idea as well, although since the inner loop never gets off the ground for those cases I don't think it's that critical. I also can't help thinking the unsets are inefficient, though I'm not 100% sure about that.
Here's my solution, using a 'boolean' array for the primes rather than actually removing the elements. I like using map, filter, reduce and the like, but I figured I'd stick close to what you've done, and this might be more efficient (although longer) anyway.
$top = 2000000; // the problem asks for primes below 2,000,000
$plist = array_fill(2, $top - 1, 1); // keys 2 .. $top, all initially marked prime
for ($a = 2; $a <= sqrt($top) + 1; $a++)
{
    if ($plist[$a] == 1)
    {
        for ($b = ($a + $a); $b <= $top; $b += $a)
        {
            $plist[$b] = 0; // mark multiples of $a as composite
        }
    }
}
$sum = 0;
foreach ($plist as $k => $v)
{
    $sum += $k * $v;
}
echo $sum;
When I did this for Project Euler I used Python, as I did for most of them, but someone who used PHP along the same lines as I did claimed it ran in 7 seconds (page 2's SekaiAi, for those who can look). I don't really care for his form (putting the body of a for loop into its increment clause!), or the use of globals and the function he has, but the main points are all there. My convenient means of testing PHP runs through a server on a VMWareFusion local machine, so it's well slower; I can't really comment from experience.
I've got the code to the point where it runs, and passes on small examples (17, for instance). However, it's been 8 or so minutes, and it's still running on my machine. I suspect that this algorithm, though simple, may not be the most effective, since it has to run through a lot of numbers a lot of times. (2 million tests on your first run, 1 million on your next, and they start removing less and less at a time as you go.) It also uses a lot of memory since you're, ya know, storing a list of millions of integers.
Regardless, here's my final copy of your code, with a list of the changes I made and why. I'm not sure that it works for 2,000,000 yet, but we'll see.
EDIT: It hit the right answer! Yay!
Set memory_limit to -1 to allow PHP to take as much memory as it wants for this very special case (very, very bad idea in production scripts!)
In PHP, use % instead of mod
The inner and outer loops can't use the same variable; PHP considers them to have the same scope. Use, maybe, $j for the inner loop.
To avoid having the prime strike itself off in the inner loop, start $j at $i + 1
On the unset, you used $arr instead of $list ;)
You missed a $ on the unset, so PHP interprets $list[j] as $list['j']. Just a typo.
I think that's all I did. I ran it with some progress output, and the highest prime it's reached by now is 599, so I'll let you know how it goes :)
My strategy in Ruby on this problem was just to check whether every number under n was prime, testing divisors from 2 up to floor(sqrt(n)). It's also probably not an optimal solution and takes a while to execute, but only about a minute or two. That could be the algorithm, or that could just be Ruby being better at this sort of job than PHP :/
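For comparison, that brute-force approach would look roughly like the sketch below in PHP. It's just an illustration of the trial-division idea, not the sieve; the actual final code follows after it.

<?php
// Trial division: test each number against divisors up to floor(sqrt($x)).
function isPrime(int $x): bool
{
    if ($x < 2) {
        return false;
    }
    for ($d = 2; $d * $d <= $x; $d++) {
        if ($x % $d == 0) {
            return false;
        }
    }
    return true;
}

$sum = 0;
for ($x = 2; $x < 2000000; $x++) {
    if (isPrime($x)) {
        $sum += $x;
    }
}
echo $sum; // slow, but it gets there eventually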
Final code:
<?php
ini_set('memory_limit', -1);
// The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
// Additional information:
// Sum below 100: 1060
// 1000: 76127
// (for testing)
// Find the sum of all the primes below 2,000,000.
// First, let's set n = 2 mill or the number we wish to find
// the primes under.
$n = 2000000;
// Then, let's set p = 2, the first prime number.
$p = 2;
// Now, let's create a list of all numbers from p to n.
$list = range($p, $n);
// Now the loop for Sieve of Eratosthenes.
// Also, let $i = 0 for a counter.
$i = 0;
while ($p * $p < $n)
{
    // Strike off all multiples of p less than or equal to n
    for ($j = $i + 1; $j < $n; $j++)
    {
        if ($list[$j] % $p == 0)
        {
            unset($list[$j]);
        }
    }
    // Re-initialize array
    sort($list);
    // Find first number on list after p. Let that equal p.
    $i = $i + 1;
    $p = $list[$i];
    echo "$i: $p\n";
}
echo array_sum($list);
?>
I'm generating an array of random numbers, between 0 and 2 with this code:
for ($j = 0; $j < 60; $j++) {
    for ($i = 0; $i < 100; $i++) {
        $value = rand(0, 2);
        $DBH->query("INSERT INTO map (x, y, value) VALUES($i, $j, $value);");
    }
}
And I found an oddity; as you may see here, the rows are random, but they repeat:
22121000210211220022122200120200122000122121
22121000210211220022122200120200122000122121
22121000210211220022122200120200122000122121
22121000210211220022122200120200122000122121
22121000210211220022122200120200122000122121
How can I avoid that?
You might want to explicitly seed your generator using srand, e.g. srand(time()) (note that the srand manual page has a better example of seeding than just using time; it depends on how random you need it, I suppose).
Failing that
You could try using mt_rand with mt_srand
You could always use MySQL's rand function to generate the numbers as a workaround.
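A rough sketch of those two workarounds, reusing the $DBH connection and the map table from the question (not tested against your setup):

<?php
mt_srand();                       // seed the Mersenne Twister explicitly
for ($j = 0; $j < 60; $j++) {
    for ($i = 0; $i < 100; $i++) {
        $value = mt_rand(0, 2);   // option 1: mt_rand() instead of rand()
        $DBH->query("INSERT INTO map (x, y, value) VALUES ($i, $j, $value)");

        // option 2: let MySQL pick the value; FLOOR(RAND() * 3) yields 0, 1 or 2
        // $DBH->query("INSERT INTO map (x, y, value) VALUES ($i, $j, FLOOR(RAND() * 3))");
    }
}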
I want to calculate the Frequency (Monobits) test in PHP:
Description: The focus of the test is the proportion of zeroes and ones for the entire sequence. The purpose of this test is to determine whether the number of ones and zeros in a sequence are approximately the same as would be expected for a truly random sequence. The test assesses the closeness of the fraction of ones to ½, that is, the number of ones and zeroes in a sequence should be about the same.
I am wondering: do I really need to count the 0's and 1's (the bits), or is the following adequate:
$value = 0;
// Loop through all the bytes and sum them up.
for ($a = 0, $length = strlen((binary) $data); $a < $length; $a++)
    $value += ord($data[$a]);
// The average should be 127.5.
return (float) $value/$length;
If the above is not the same thing, then how exactly do I count the 0's and 1's?
No, you really need to count all the zeroes and ones. For example, take the following binary input:
01111111 01111101 01111110 01111010
It is clearly (literally) one-sided (8 zeroes, 24 ones, so the correct result is 24/32 = 3/4 = 0.75) and therefore not random. However, your test would compute an average byte value of 125.0 - that is, 125.0/255 ≈ 0.49, which is close to ½.
Instead, count like this:
function one_proportion($binary) {
    $oneCount = 0;
    $len = strlen($binary);
    for ($i = 0; $i < $len; $i++) {
        $intv = ord($binary[$i]);
        // check all 8 bits of this byte
        for ($bitp = 0; $bitp < 8; $bitp++) {
            $oneCount += ($intv >> $bitp) & 0x1;
        }
    }
    return $oneCount / (8 * $len);
}
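For example, feeding it the four bytes from the counter-example above (0x7F 0x7D 0x7E 0x7A) gives the expected proportion of ones:

echo one_proportion("\x7F\x7D\x7E\x7A"); // 0.75 - far from the ½ a truly random sequence should give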
I want a random number generator with a non-uniform distribution, i.e.:
// prints 0 with 0.1 probability, and 1 with 0.9 probability
echo probRandom(array(10, 90));
This is what I have right now:
/**
 * Method to generate a *not uniformly* distributed random index
 *
 * @param array $probs int array with weights
 * @return int a random index in $probs
 */
function probRandom($probs) {
    $size = count($probs);
    // construct probability vector
    $prob_vector = array();
    $ptr = 0;
    for ($i = 0; $i < $size; $i++) {
        $ptr += $probs[$i];
        $prob_vector[$i] = $ptr;
    }
    // get a random number
    $rand = rand(0, $ptr);
    for ($i = 0, $ret = false; $ret === false; $i++) {
        if ($rand <= $prob_vector[$i])
            return $i;
    }
}
Can anyone think of a better way? Possibly one that doesn't require me to do pre-processing?
If you know the sum of all elements in $probs, you can do this without preprocessing.
Like so:
$max = array_sum($probs);
$r = rand(0, $max - 1);
$tot = 0;
for ($i = 0; $i < count($probs); $i++) {
    $tot += $probs[$i];
    if ($r < $tot) {
        return $i;
    }
}
This will do what you want in O(N) time, where N is the length of the array. This is a firm lower bound on the algorithmic runtime of such an algorithm, as each element in the input must be considered.
The probability that a given index $i is selected is $probs[$i]/array_sum($probs), given that rand() returns independent, uniformly distributed integers in the given range.
In your solution you generate an accumulated probability vector, which is very useful.
I have two suggestions for improvement:
if $probs are static, i.e. it's the same vector every time you want to generate a random number, you can preprocess $prob_vector just once and keep it.
you can use binary search for the $i (essentially a bisection over the accumulated vector - see the sketch below)
EDIT: I now see that you ask for a solution without preprocessing.
Without preprocessing, you will end up with worst case linear runtime (i.e., double the length of the vector, and your running time will double as well).
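A minimal sketch of that binary-search lookup, assuming $prob_vector has already been built once (the preprocessing step) exactly as in the question, and that $total holds the sum of all weights ($ptr). The function name pickIndex is made up for this illustration, and it draws rand(1, $total) so the cut-points line up exactly with the weights:

<?php
function pickIndex(array $prob_vector, int $total): int
{
    $rand = rand(1, $total);             // a point in 1..$total
    $lo = 0;
    $hi = count($prob_vector) - 1;
    while ($lo < $hi) {
        $mid = intdiv($lo + $hi, 2);
        if ($prob_vector[$mid] < $rand) {
            $lo = $mid + 1;              // answer lies to the right of $mid
        } else {
            $hi = $mid;                  // first entry >= $rand is at $mid or left of it
        }
    }
    return $lo;                          // index whose cumulative weight first reaches $rand
}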
Here is a method that doesn't require preprocessing. It does, however, require you to know a maximum limit of the elements in $probs:
Rejection method
Pick a random index $i, and a random number X (uniformly) between 0 and max($probs)-1, inclusive.
If X is less than $probs[$i], you're done - $i is your random number
Otherwise reject $i (hence the name of the method) and restart.
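A sketch of that rejection method, assuming the weights in $probs are non-negative integers and that an upper bound such as max($probs) is known. The name probRandomRejection is made up for this illustration:

<?php
function probRandomRejection(array $probs, int $maxProb): int
{
    while (true) {
        $i = rand(0, count($probs) - 1); // pick a candidate index uniformly
        $x = rand(0, $maxProb - 1);      // pick a threshold uniformly
        if ($x < $probs[$i]) {
            return $i;                   // accepted with probability $probs[$i] / $maxProb
        }
        // otherwise reject $i and start over
    }
}

// Example: prints 0 about 10% of the time and 1 about 90% of the time.
echo probRandomRejection(array(10, 90), 90);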