I have a very large number:
char *big_numbr_str = "4325242066733762057192832260226492766115114212292022277";
I want to keep square-rooting this number until it's < 1000. In PHP, I can do this relatively easily:
while ($num > 1000):
    $num = sqrt($num);
endwhile;
$num = floor($num);
I'm now trying to achieve the same in C, ending with the same result. For reference, with the starting number above, the PHP snippet finishes after 5 passes through the loop and gives 50; if you square-root this number 5 times anywhere else you should get a similar result, rounded down.
How would I achieve the same in plain C? It seems storing a number of this size is more complex in C than I expected.
You'll need a big number library to handle numbers like this. On Linux, you can try GMP.
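If you go the GMP route, the loop is a direct translation of the PHP version. Since the asker's working code is PHP, here is a sketch using PHP's gmp extension, which wraps the same library (the C equivalents are mpz_init_set_str, mpz_cmp_ui and mpz_sqrt):
<?php
// gmp_sqrt() returns the floor of the square root, so it truncates on
// every pass, unlike the float sqrt() + final floor() in the PHP snippet;
// for this starting number the result is the same.
$num = gmp_init("4325242066733762057192832260226492766115114212292022277");
while (gmp_cmp($num, 1000) > 0) {
    $num = gmp_sqrt($num);
}
echo gmp_strval($num); // prints 50 for this starting number
?>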
Alternatively, you can write your own bigint routines and implement the square root manually. This will take some time to implement properly, as you basically have to do all the math by hand, one digit at a time. It can be done (I've done it), but it won't be "simple".
In order to store such a large integer (4325242066733762057192832260226492766115114212292022277) you can use an array of integers, storing a single digit per element. The array acts like a single integer, and you write the routines to do the computation the way you would on a piece of paper.
Alternatively, google "bignum c" and see if you can find a library that implements large integer arithmetic. Take a look at bignum.c by Steven Skiena, or libraries like MPFR and MPIR.
I will be happy to get some help. I have the following problem:
I'm given a list of numbers and a target number.
subset_sum([11.96,1,15.04,7.8,20,10,11.13,9,11,1.07,8.04,9], 20)
I need to find an algorithm that will find all numbers that, combined, sum to the target number, e.g. 20.
First, find all single numbers equal to 20.
And next for example the best combinations here are:
11.96 + 8.04
1 + 10 + 9
11.13 + 7.8 + 1.07
9 + 11
Remaining value: 15.04.
I need an algorithm that uses each value only once and can use from 1 to n values to sum to the target number.
I tried some recursion in PHP but it runs out of memory really fast (50k values), so a solution in Python would help (time/memory wise).
I'd be glad for some guidance here.
One possible solution is this: Finding all possible combinations of numbers to reach a given sum
The only difference is that I need to put a flag on elements already used, so they won't be used twice and I can reduce the number of possible combinations.
Thanks for anyone willing to help.
There are many ways to think about this problem.
If you use recursion, make sure to identify your base cases first, then proceed with the rest of the program.
This is the first thing that comes to mind.
<?php
subset_sum([11.96, 1, 15.04, 7.8, 20, 10, 11.13, 9, 11, 1.07, 8.04, 9], 20);

function subset_sum($a, $s, $c = array())
{
    // compare floats with a tolerance; exact ==/!= fails for values like 11.96
    if ($s < -1e-9)
        return;                          // overshot the target
    if (abs($s) > 1e-9 && count($a) == 0)
        return;                          // target not reached and no items left
    if (abs($s) > 1e-9) {
        foreach ($a as $xd => $xdd) {
            unset($a[$xd]);              // $a is passed by value, so this only
                                         // hides the item from deeper calls
            subset_sum($a, $s - $xdd, array_merge($c, array($xdd)));
        }
    } else {
        print_r($c);                     // found a subset summing to the target
    }
}
?>
This is a possible solution, but it's not pretty:
import itertools
import operator
from functools import reduce

def subset_num(array, num):
    # build every combination of every length, keep those summing to num
    subsets = reduce(operator.add, [list(itertools.combinations(array, r)) for r in range(1, 1 + len(array))])
    return [subset for subset in subsets if sum(subset) == num]

print(subset_num([11.96, 1, 15.04, 7.8, 20, 10, 11.13, 9, 11, 1.07, 8.04, 9], 20))
Output:
[(20,), (11.96, 8.04), (9, 11), (11, 9), (1, 10, 9), (1, 10, 9), (7.8, 11.13, 1.07)]
DISCLAIMER: this is not a full solution, it is a way to just help you build the possible subsets. It does not help you to pick which ones go together (without using the same item more than once and getting the lowest remainder).
Using dynamic programming you can build all the subsets that add up to the given sum, then you will need to go through them and find which combination of subsets is best for you.
To build this archive you can (assuming we're dealing with non-negative numbers only) put the items in a column and go from top to bottom; for each item, compute all the subsets that add up to the sum, or to a number lower than it, using only items from the column at the current position or above. In each subset's node you store both the sum of the subset (which may be the given sum or smaller) and the items included in it. So in order to compute the subsets for item [i] you only need to look at the subsets you've created for item [i-1]. For each of them there are 3 options:
1) The subset's sum equals the given sum ---> keep the subset as it is and move on to the next one.
2) The subset's sum is smaller than the given sum, but adding item [i] would push it over ---> keep the subset as it is and move on to the next one.
3) The subset's sum is smaller than the given sum and would still be smaller than or equal to it with item [i] added ---> keep one copy of the subset as it is and create another one with item [i] added to it (both as a member and added to the subset's sum).
When you're done with the last item (item [n]), look at the subsets you've created: each one carries its sum in its node, so you can see which ones equal the given sum (and which ones are smaller - you don't need those anymore). A sketch of this pass follows.
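For illustration, here is a minimal sketch of that subset-building pass in PHP (the name build_subsets is mine; non-negative numbers and a float tolerance are assumed, and the second phase of combining disjoint subsets is not covered):
<?php
function build_subsets(array $items, $target)
{
    $subsets = array(array(0, array()));          // start from the empty subset
    foreach ($items as $item) {                   // item [i] in the text
        $next = $subsets;                         // options 1 and 2: keep as-is
        foreach ($subsets as $pair) {
            list($sum, $members) = $pair;
            if ($sum + $item <= $target + 1e-9) { // option 3: a copy with item [i]
                $members[] = $item;
                $next[] = array($sum + $item, $members);
            }
        }
        $subsets = $next;
    }
    // keep only the subsets whose sum equals the target (within float tolerance)
    return array_filter($subsets, function ($pair) use ($target) {
        return abs($pair[0] - $target) < 1e-9;
    });
}
?>
Note that this enumerates exponentially many subsets in the worst case, in line with the disclaimer above.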
As I wrote at the beginning - now you need to figure out how to take the best combination of subsets that do not have a shared member between any of them.
Basically you're left with a problem that resembles the classic knapsack problem but with another limitation (not every stone can be taken with every other stone). Maybe the limitation actually helps, I'm not sure.
A bit more about the advantage of dynamic programming in this case
The basic idea of dynamic programming, as opposed to recursion, is to trade redundant computation for memory. By that I mean that recursion on a complex problem (typically a backtracking, knapsack-like problem, as we have here) usually ends up calculating the same thing a fair number of times, because the different branches of the calculation have no knowledge of each other's operations and results. Dynamic programming saves the intermediate results and uses them along the way to build "bigger" results from the previous/"smaller" ones. Because its use of the stack is much more straightforward than in recursion, you don't get recursion's memory problem of maintaining each function call's state, but you do need to handle a great deal of stored data (sometimes you can optimise that).
So, in our problem, when trying to combine a subset that adds up to the required sum, the branch that starts with item A and the branch that starts with item B know nothing of each other's work. Let's assume items C and D together add up to the sum, that either of them added alone to A or B would not exceed the sum, and that A doesn't go with B in any solution (e.g. sum=10, A=B=4, C=D=5, and there is no subset that sums to 2, so A and B can't be in the same group). The branch working out A's group would (after trying and rejecting B) add C (A+C=9) and then add D, at which point it would reject the group and backtrack (A+C+D=14 > sum=10). The same then happens for B (since A=B), because the branch working out B's group has no information about what just happened in A's branch. So we've in fact calculated C+D twice without even using it yet (and we're about to calculate it a third time to realise C and D belong in a group of their own).
NOTE:
Looking around while writing this answer I came across a technique I was not familiar with and might be a better solution for you: memoization. Taken from wikipedia:
memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.
So I have a possible solution:
from collections import Counter

# compute the difference between two lists but keep duplicates
def list_difference(a, b):
    count = Counter(a)  # count items in a
    count.subtract(b)   # subtract items that are in b
    diff = []
    for x in a:
        if count[x] > 0:
            count[x] -= 1
            diff.append(x)
    return diff

# return one combination of numbers that matches the target
def subset_sum(numbers, target, partial=[]):
    s = sum(partial)
    # compare with a tolerance: exact == is fragile for floats like 11.96
    if abs(s - target) < 1e-9:
        print("--------------------------------------------sum_is(%s)=%s" % (partial, target))
        return partial
    if s >= target:
        return None  # overshot the target, no point in continuing
    for i in range(len(numbers)):
        n = numbers[i]
        remaining = numbers[i + 1:]
        rest = subset_sum(remaining, target, partial + [n])
        if type(rest) is list:
            return rest  # bubble the first match up to the caller
    return None

# repeat until the rest is <= target or no further subset can be removed
def repeatUntil(subset, target):
    currSubset = []
    while sum(subset) > target and currSubset != subset:
        diff = subset_sum(subset, target)
        if diff is None:
            break  # no subset reaches the target anymore
        currSubset = subset
        subset = list_difference(subset, diff)
    return subset
Output:
--------------------------------------------sum_is([11.96, 8.04])=20
--------------------------------------------sum_is([1, 10, 9])=20
--------------------------------------------sum_is([7.8, 11.13, 1.07])=20
--------------------------------------------sum_is([20])=20
--------------------------------------------sum_is([9, 11])=20
[15.04]
Unfortunately this solution only works for a small list. For a big list I'm still trying to break the list into small chunks and calculate, but the answer is not quite correct. You can see it in a new thread here:
Finding unique combinations of numbers to reach a given sum
I have a tricky question that I've looked into a couple of times without figuring it out.
Some backstory: I am making a text-based RPG game where players fight against animals/monsters etc. It works like any other game where you deal a number of hitpoints to each other every round.
The problem: I am using PHP's rand() function to generate the final value of the hit, depending on levels, armor and such. But I'd like the higher values (like the max hit) to appear less often than the lower values.
This is an example graph:
How can I reproduce something like this using PHP and the rand-function? When typing rand(1,100) every number has an equal chance of being picked.
My idea is this: make a 2nd-degree (quadratic) function and use the random number (x) in the calculation.
Would this work like I want?
The question is a bit tricky, please let me know if you'd like more information and details.
Please look at this beautiful article:
http://www.redblobgames.com/articles/probability/damage-rolls.html
It has interactive diagrams showing dice rolls and the percentage of each result.
It should be very useful for you.
Pay attention to this way of rolling a random number:
roll1 = rollDice(2, 12);
roll2 = rollDice(2, 12);
damage = min(roll1, roll2);
This should give you what you look for.
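Assuming rollDice($count, $sides) means "roll $count dice with $sides sides each and sum them" (the article's convention), a minimal PHP version might look like:
<?php
function rollDice($count, $sides)
{
    $total = 0;
    for ($i = 0; $i < $count; $i++) {
        $total += mt_rand(1, $sides);
    }
    return $total;
}

$roll1 = rollDice(2, 12);
$roll2 = rollDice(2, 12);
$damage = min($roll1, $roll2); // min() pushes the distribution toward low values
?>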
OK, here's my idea:
Let's say you've got an array of elements (a,b,c,d) and you want to randomly pick one of them. Doing a rand(1,4) to get the random element index would mean that all elements have an equal chance of appearing (25%).
Now, let's say we take this array : (a,b,c,d,d).
Here we still have 4 elements, but not every one of them has equal chances to appear.
a,b,c : 20%
d : 40%
Or, let's take this array :
(1,2,3,...,97,97,97,98,98,98,99,99,99,100,100,100,100)
Hint: this way you don't just bias the random number generation algorithm, you actually set the desired probability of appearance of each value (or of a range of values).
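For example, a quick sketch of the weighted-array idea in PHP, with made-up weights matching the asker's goal (low hits common, high hits rare):
<?php
$weighted = array(1, 1, 1, 1, 2, 2, 2, 3, 3, 4); // 1 → 40%, 2 → 30%, 3 → 20%, 4 → 10%
$hit = $weighted[mt_rand(0, count($weighted) - 1)];
?>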
So, that's how I would go about it:
If you want numbers from 1 to 100 (with higher numbers appearing more frequently), get a random number from 1 to 1000 and associate it with a wider range. E.g.
rand = 800-1000 => rand/10 (80->100)
rand = 600-800 => rand/9 (66->88)
...
Or something like that. (You could use any math operation you imagine, modulo or whatever... and play with your algorithm). I hope you get my idea.
Good luck! :-)
I need to work out a way to create 10,000 non-repeating random numbers in PHP, and then put them into a database table. Each number will be 12 digits long.
What is the best way to do this?
At 12 digits long, I don't think the probability of getting repeats is very large. I would probably just generate the numbers, try to insert them into the table, and if one already exists (assuming you have a unique constraint on that column) just generate another one.
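A sketch of that approach with PDO (the `numbers` table, its UNIQUE `value` column, and the credentials are hypothetical; INSERT IGNORE is MySQL-specific):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$stmt = $pdo->prepare('INSERT IGNORE INTO numbers (value) VALUES (?)');
$inserted = 0;
while ($inserted < 10000) {
    // mt_getrandmax() is typically 2^31 - 1, so build 12 digits from two draws
    $number = mt_rand(0, 999999) * 1000000 + mt_rand(0, 999999);
    $stmt->execute(array($number));
    $inserted += $stmt->rowCount(); // 0 when the UNIQUE constraint rejected a duplicate
}
?>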
Read e.g. PHP_INT_SIZE * 10000 bytes (40000 on a 32-bit build) from /dev/random at once, then split them up and reduce each chunk with the modulo operator (%), and there you have it.
Then filter it, and repeat the procedure.
That avoids too many syscalls/context switches (between the php runtime, the zend engine, and the operating system itself - I'm not going to dive into details here).
That should be the most performant way of doing it.
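A rough sketch of that batching idea, with /dev/urandom swapped in for /dev/random to avoid blocking (assumes a 64-bit PHP build, and PHP 5.6.3+ for the 'J' unpack format):
<?php
$bytes = file_get_contents('/dev/urandom', false, null, 0, 8 * 10000);
$numbers = array();
foreach (str_split($bytes, 8) as $chunk) {
    $parts = unpack('J', $chunk);                           // unsigned 64-bit integer
    $numbers[] = ($parts[1] & PHP_INT_MAX) % 1000000000000; // clear sign bit, mod 10^12
}
$numbers = array_unique($numbers); // the "filter, then repeat" step
?>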
Generate 10,000 random numbers and place them in an array. Run the array through array_unique(). Check the length; if less than 10,000, add a bunch more and run it through array_unique() again. If greater than 10,000, run it through array_slice() to get exactly 10,000. Otherwise, lather, rinse, repeat.
This assumes that you can generate a 12-digit random number without problems (use getrandmax() to see how big you can get; according to php.net, on some systems 32767 is as large as you can get).
$array = array();
while (sizeof($array) < 10000) {
    // mt_getrandmax() is typically 2^31 - 1, far below 999999999999,
    // so build the 12-digit number from two smaller draws
    $number = mt_rand(0, 999999) * 1000000 + mt_rand(0, 999999);
    if (!array_key_exists($number, $array)) {
        $array[$number] = null;
    }
}
foreach ($array as $key => $val) {
    //write array records to db.
}
You could use either rand() or mt_rand(). mt_rand() is supposed to be faster however.
This is more of a maths/general programming question, but I am programming in PHP, if that makes a difference.
I think the easiest way to explain is with an example.
Say the range is between 1 and 10.
I want to generate a number that is between 1 and 10, but is more likely to be low than high.
The only way I can think of is to generate an array with 10 elements equal to 1, 9 elements equal to 2, 8 elements equal to 3, ..., 1 element equal to 10, and then pick a random element from it.
The trouble is I am potentially dealing with 1 - 100000 and that array would be ridiculously big.
So how best to do it?
Generate a random number between 0 and a random number!
Generate a number between 1 and foo(n), where foo runs an algorithm over n (e.g. a logarithmic function). Then reverse foo() on the result.
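One way to read this answer, taking foo(n) = log(n): exponentiate a uniform draw, so the result is log-uniform over [1, n] and biased toward low values (a sketch; the range starts at 0 rather than 1 so the output covers all of [1, n], and the function name is mine):
<?php
function low_biased($n)
{
    $u = mt_rand() / mt_getrandmax();      // uniform in [0, 1]
    return (int) floor(exp($u * log($n))); // "reverse foo()" on the result
}
?>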
Generate a number n with 0 <= n < 1, multiply it by itself, then multiply by x, run floor on it and add 1. Sorry, I used PHP too long ago to write code in it.
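In case it helps, a sketch of exactly that recipe in PHP (the function name is mine):
<?php
function squared_rand($x)
{
    $n = mt_rand(0, mt_getrandmax() - 1) / mt_getrandmax(); // 0 <= $n < 1
    return (int) floor($n * $n * $x) + 1;                   // integer in [1, $x], low-biased
}
?>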
You could do
$rand = floor(100000 * (mt_rand() / mt_getrandmax()) * (mt_rand() / mt_getrandmax()));
or something along these lines. (Note that rand(0, 1) in PHP returns only the integers 0 or 1, so the uniform draws have to be built by dividing by the generator's maximum.)
There are basically two (or more?) ways to map uniform density to any distribution function: Inverse transformation sampling and Rejection sampling. I think in your case you should use the former.
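For example, a sketch of inverse transform sampling for a linearly decreasing (triangular) density on [1, n]: the density f(x) = 2(1 - x) on [0, 1) has CDF F(x) = 2x - x², whose inverse is x = 1 - sqrt(1 - u) (the function name is mine):
<?php
function triangular_low($n)
{
    $u = mt_rand(0, mt_getrandmax() - 1) / mt_getrandmax(); // uniform in [0, 1)
    $x = 1.0 - sqrt(1.0 - $u);                              // inverse CDF, biased toward 0
    return 1 + (int) floor($x * $n);                        // integer in [1, $n]
}
?>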
Quick and simple:
rand(1, rand(1, n))
What you need to do is generate a random number over a greater interval (preferably floating point), and map that into [1,10] in a nonuniform way. Exactly what way depends on how much more likely you want a 1 to be than a 9 or 10.
For C language solutions, see these libraries. You may find use for this in PHP.
Generally speaking, it looks like you want to draw a random number from a Poisson distribution rather than the uniform distribution (http://en.wikipedia.org/wiki/Uniform_distribution_(continuous)). The Wikipedia page on the Poisson distribution has a section which specifically states how you can use the continuous distribution to generate a pseudo-Poisson distribution... check it out. Note that you may want to test different values of λ to ensure the distribution works as you want it to.
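The recipe that Wikipedia section gives is essentially Knuth's method; a PHP sketch (the function name is mine, $lambda is the desired mean):
<?php
function poisson_rand($lambda)
{
    $limit = exp(-$lambda);
    $k = 0;
    $p = 1.0;
    do {
        $k++;
        $p *= mt_rand() / mt_getrandmax(); // multiply in uniform [0, 1] draws
    } while ($p > $limit);
    return $k - 1;
}
?>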
It depends on what distribution you want to have exactly, i.e., what number should appear with what probability.
For instance, for even n you could do the following: generate one integer random number x between 1 and n/2, and a second one y between 1 and n+1. If y > x, output x; otherwise output n-x+1. This should give you the distribution in your example.
I think this should give the requested distribution:
Generate a random number in the range 1 .. x. Generate another one in the range 1 .. x+1.
Return the minimum of the two.
Let's think about how your array idea changes the probabilities. Normally every element from 1 to n has a probability of 1/n and is thus equally likely.
Since you have n entries for 1, n-1 entries for 2, ..., 1 entry for n, the total number of entries is an arithmetic series. The sum of an arithmetic series counting from 1 to n is n(1+n)/2, so that is the denominator of every element's probability.
Element 1 has n entries, so its probability is n/(n(1+n)/2). Element 2's is (n-1)/(n(1+n)/2), ..., element n's is 1/(n(1+n)/2). That gives a general formula of n+1-i for the numerator, where i is the number you are checking, so the probability of any element i is (n-i+1)/(n(1+n)/2). All the probabilities are between 0 and 1 and sum to 1 by definition, and that is key to the next step.
How can we use this to skew how often each element appears? It's easier with continuous distributions (i.e. doubles instead of ints), but we can do it. First let's make an array of our probabilities, call it c, make a running sum of them (cumsum), and store it back in c. If that doesn't make sense, it's just a loop like
for (j = 0; j < n; j++)
    if (j) c[j] += c[j-1];
Now that we have this cumulative distribution, generate a number i from 0 to 1 (a double, not an int). If i is between 0 and c[0], return 1; if i is between c[0] and c[1], return 2; ... all the way up to n. E.g.
for (j = 0; j < n; j++)
    if (i <= c[j]) return j + 1;
This will distribute the integers according to the probabilities you have calculated.
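Since this thread is about PHP, here is the same procedure sketched in PHP for weights n, n-1, ..., 1 over the values 1..n (element 1 is the most likely; the function name is mine):
<?php
function weighted_pick($n)
{
    $total = $n * ($n + 1) / 2;             // sum of the arithmetic series
    $c = array();
    $running = 0.0;
    for ($j = 1; $j <= $n; $j++) {
        $running += ($n - $j + 1) / $total; // probability of value $j
        $c[] = $running;                    // cumulative distribution
    }
    $u = mt_rand() / mt_getrandmax();       // uniform double in [0, 1]
    foreach ($c as $idx => $cum) {
        if ($u <= $cum) {
            return $idx + 1;                // values are 1-based
        }
    }
    return $n;                              // guard against float rounding
}
?>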
<?php
//get random number between 1 and 10,000
$random = mt_rand(1, 10000);
?>
My primary question is:
Is this a lot of loops?
while ($decimals < 50000 and $remainder != 0) {
    $number = floor($remainder / $currentdivider); // always round down: 10/3 = 3, 10/7 = 1
    $remainder = $remainder % $currentdivider;     // 10 % 3 = 1
    $thisnumber .= $number;                        // append the next decimal digit
    $remainder .= "0";                             // long division: bring down a zero
    $decimals += 1;
}
Or could I fit more into it, without the server crashing or lagging?
I'm just wondering.
Also, is there a more efficient way of doing the above? (e.g. finding 1/3 = 0.333... to 50,000 decimals.)
Finally:
I'm doing this for a pi formula, the Leibniz series (1 - 1/3 + 1/5 - 1/7 etc.),
and I'm wondering if there is a better one (in PHP).
I have found one that finds pi to 2000 digits in 4 seconds,
but that's not what I want. I want an infinite series that converges closer and closer to pi,
so on every refresh users can view it getting closer...
But obviously, converging using the above formula takes a LONG time.
Are there any other loop-like pi formulas (workable in PHP) that converge faster?
Thanks a lot...
Here you have several formulas for calculating Pi:
http://mathworld.wolfram.com/PiFormulas.html
All of them are "workable" in PHP, as in any other programming language. A different question is how fast they are or how difficult they are to implement.
Whether a formula converges faster or slower is a maths question, not a programming one, so I can't help you there. I can tell you that, as a rule of thumb, the fewer nested loops you use, the faster your algorithm will be (a general rule, not absolute truth!).
Anyway, since the digits of pi are known up to a certain point, why don't you copy them into a file and then just index into it? That would be extremely fast :)
You can check previous answers to similar questions:
How can pi be calculated to a set number of digits in PHP?
https://stackoverflow.com/questions/3045020/which-is-the-best-formulae-to-find-pi
Check http://mathworld.wolfram.com/PiIterations.html (taken from the last answer). Those formulas use iterations and can therefore be implemented with a loop.
You should use Google and search for "php implementation xxxxxxx" (where xxxxxxx stands for the name of the algorithm you want).
EDIT: Viète's formula, for instance, can be implemented with a while loop in PHP; a sketch follows.
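A sketch of such a while-loop implementation (mine, not necessarily the original): Viète's formula says 2/π equals the infinite product of a_k/2, where a_1 = sqrt(2) and a_{k+1} = sqrt(2 + a_k).
<?php
$term = 0.0;
$product = 1.0;
$k = 0;
while ($k < 30) {              // ~30 terms exhaust double precision
    $term = sqrt(2.0 + $term); // sqrt(2), sqrt(2 + sqrt(2)), ...
    $product *= $term / 2.0;
    $k++;
}
echo 2.0 / $product;           // 3.1415926535898...
?>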