I have an application in which I have to select one number out of many according to their weights, and every time I select one I send the result to Flash. I found an algorithm in Python and implemented it in PHP, then tested the results. Running the algorithm in Python gives good results, but in PHP it does not. For example, with weights (1=>30, 2=>40, 3=>30), after many runs the first number always occurs more often than its weight suggests in PHP, whereas in Python the distribution matches the weights. I have attached the PHP code.
define("MAX", 100000);
$reelfrequencies = array(30, 40, 30);
echo weightedselect($reelfrequencies);

/*
function weightedselect($frequency)
{
    $arr = cumWghtArray($frequency); // array(30, 70, 100)
    print_r($arr);
    $len = sizeof($frequency);
    $count = array();
    echo $r = mt_rand(0, $arr[$len - 1]);
    $index = binarysearch($arr, $r, 0, $len - 1);
    return $index;
}
*/

function cumWghtArray($arr)
{
    $cumArr = array();
    $cum = 0;
    $size = sizeof($arr);
    for ($i = 0; $i < $size; $i++) {
        $cum += $arr[$i];
        array_push($cumArr, $cum);
    }
    return $cumArr;
}

function weightedselect($frequency)
{
    $arr = cumWghtArray($frequency); // array(30, 70, 100)
    $len = sizeof($frequency);
    $count = array();
    $count[0] = $count[1] = $count[2] = 0;
    for ($i = 0; $i < MAX; $i++) {
        $r = mt_rand(0, $arr[$len - 1]);
        $index = binarySearch($arr, $r, 0, $len - 1);
        $count[$index]++;
    }
    for ($i = 0; $i < 3; $i++) {
        $count[$i] /= MAX;
        echo $i . " " . $count[$i] . "\n";
    }
}

function binarySearch($ar, $value, $first, $last)
{
    if ($last < $first)
        return -1;
    $mid = intval(($first + $last) / 2);
    $a = $ar[$mid];
    if ($a === $value)
        return $mid;
    if ($a > $value && (($mid - 1 >= 0 && $ar[$mid - 1] < $value) || $mid == 0))
        return $mid;
    else if ($a > $value)
        $last = $mid - 1;
    else if ($a < $value)
        $first = $mid + 1;
    return binarySearch($ar, $value, $first, $last);
}
Here is the Python code, which I took from this forum.
import random
import bisect
import collections

def cdf(weights):
    total = sum(weights)
    result = []
    cumsum = 0
    for w in weights:
        cumsum += w
        result.append(cumsum / total)
    return result

def choice(population, weights):
    assert len(population) == len(weights)
    cdf_vals = cdf(weights)
    x = random.random()
    idx = bisect.bisect(cdf_vals, x)
    return population[idx]

weights = [0.30, 0.40, 0.30]
population = "ABC"
counts = {"A": 0.0, "B": 0.0, "C": 0.0}
max = 10000
for i in range(max):
    c = choice(population, weights)
    counts[c] = counts[c] + 1
print(counts)
for k, v in counts.iteritems():
    counts[k] = v / max
print(counts)
I think the problem is the mt_rand() function, which is not uniform, while Python's random.random() is very uniform. Which random function should I implement in PHP, and how do I provide a proper seed each time it runs? I was thinking of using Wichmann-Hill (which I believe Python's random.random uses), but how would I provide the seed?
Both rand and mt_rand should be more than sufficiently random for your task here. If you needed to seed mt_rand you could use mt_srand, but there's no need since PHP 4.2, as this is done for you.
I suspect the issue is with your code rather than the generator. In particular, mt_rand(0, $arr[$len-1]) draws from 101 equally likely integers (0 through 100), and 31 of them (0 through 30) map to the first index, so its observed frequency will be about 31/101 ≈ 0.307 instead of 0.30. Drawing mt_rand(1, $arr[$len-1]) instead gives exactly 30, 40 and 30 outcomes per index. More generally, the code seems unnecessarily involved given what I believe you're trying to do, which is just to pick a random number with weighted probabilities.
This may help: Generating random results by weight in PHP?
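For reference, the cumulative-weight approach is easy to get right when the random draw is taken from a half-open range, so that each index receives exactly its weight in outcomes. A small Python sketch of the same idea:

```python
import bisect
import random
from itertools import accumulate

def weighted_choice(weights):
    """Pick an index i with probability weights[i] / sum(weights)."""
    cumulative = list(accumulate(weights))     # e.g. [30, 70, 100]
    r = random.randrange(cumulative[-1])       # uniform on 0 .. total-1
    return bisect.bisect_right(cumulative, r)  # first index with cumulative > r

counts = [0, 0, 0]
for _ in range(100_000):
    counts[weighted_choice([30, 40, 30])] += 1
print([c / 100_000 for c in counts])  # close to [0.30, 0.40, 0.30]
```

Here r takes exactly `sum(weights)` equally likely values, so no index is over-represented.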
Related
I'm trying to calculate the variance of an array in PHP and I can't use the built-in "stats_variance" function.
I've searched a lot and found a few PHP functions to calculate variance, but the problem is that the number Excel's "VAR.P" function calculates is way different from what I get in PHP.
This is my array of data:
1153680
1118118
1118912
1116187
1068844
1028538
1028538
988121
970411
973495
938557
938550
926133
959840
959841
986759
986759
1002125
995177
968461
987475
1017662
1021445
1047763
1043253
1020218
977923
977923
980571
979106
956817
917406
878101
878094
845952
820505
797824
769385
741932
741902
726137
708995
690126
668906
645763
645723
626389
624527
623260
607855
597254
597069
583344
569674
573903
567532
547658
545661
532522
521267
520108
508879
512900
512903
502077
505943
494479
502694
520211
520210
520059
535684
549185
555167
555165
543926
529170
515452
505676
523850
524394
517911
503306
498856
480875
478755
478754
472119
476812
472749
461988
459079
447151
454618
452842
445894
440199
438880
428216
426184
427139
420668
418312
417001
411211
406008
409807
410694
409962
399445
395912
392038
375650
359807
353807
358843
365815
367334
384099
377012
375547
369729
367632
361088
362259
359365
356721
353996
350322
346870
344487
344115
343291
339386
339434
335792
332567
327624
322527
320814
320194
318670
314587
311230
309643
308477
305774
305622
304997
304688
302726
302340
306885
307094
306655
303944
302795
305858
305333
304676
308960
308730
309661
305969
303449
303453
305807
308314
302173
301391
309640
308978
319871
326190
322059
313049
314469
320909
320208
327305
326117
326098
324705
318250
320023
314409
312206
311471
297712
294166
302082
305917
302395
304460
299930
296731
294601
290178
285573
283062
The variance calculated in Excel using the VAR.P function is "61222801119".
This is one of the functions I have tried:
function getVariance1($arr){
    $arr_size = count($arr);
    $mu = array_sum($arr) / $arr_size;
    $ans = 0;
    foreach ($arr as $elem) {
        $ans += pow(($elem - $mu), 2);
    }
    return sqrt($ans / $arr_size);
}
which returns "1633416980.1583", which is way off.
What's the problem?
Thanks
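As a point of reference, Excel's VAR.P is the population variance (the mean of the squared deviations, with no square root), whereas getVariance1 ends with sqrt(...) and therefore returns a population standard deviation instead. A minimal Python sketch of the distinction, using sample data of my own:

```python
import math

def variance_p(data):
    """Population variance: mean squared deviation (what Excel's VAR.P computes)."""
    n = len(data)
    mu = sum(data) / n
    return sum((x - mu) ** 2 for x in data) / n

sample = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance_p(sample))             # 4.0  -- VAR.P
print(math.sqrt(variance_p(sample)))  # 2.0  -- what the sqrt() version returns
```

Whether dropping the sqrt fully explains the exact number reported above also depends on the data actually passed in, but the formula mismatch alone guarantees a different result from VAR.P.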
I've been thinking about ways to program for extreme distances. In the game Hellion, for example, orbits could be near to scale, in the range of millions of kilometers. However, there was a common glitch where movement would become very choppy the further you were from the object you were orbiting. I might be wrong in my speculation as to why, but my best guess is that it came down to loss of floating-point precision at that distance.
As a little exercise I've been thinking about ways to solve that problem, and what I currently have is a pretty basic unit-staged distance system.
class Distance
{
    public const MAX_AU = 63018.867924528;
    public const MAX_MM = 149598000000000;

    private $ly = 0;
    private $au = 0;
    private $mm = 0;

    public function add(Distance $add): Distance
    {
        $distance = new Distance();
        $distance->mm = $this->mm + $add->mm;
        if ($distance->mm > self::MAX_MM) {
            $distance->mm -= self::MAX_MM;
            $distance->au++;
        }
        $distance->au += $this->au + $add->au;
        if ($distance->au > self::MAX_AU) {
            $distance->au -= self::MAX_AU;
            $distance->ly++;
        }
        $distance->ly += $this->ly + $add->ly;
        return $distance;
    }
}
I put in the addition method, which is written as though by hand. I didn't want to use arbitrary precision because it would have been overkill for the smaller distances a player would normally interact with.
My question is: is this how something like this is normally done? And if not, could someone please explain what is wrong with it (inefficiency, for example) and how it could be done better?
Thanks
PS. I am aware that in the context of a game this is normally handled with sub-grids, but this is meant to simulate how objects in orbit drift apart.
You can use the BCMath functions. BCMath supports numbers of any size and precision.
Example:
$mm = "149598000000000";
$au = "63018.867924528";
$sum = bcadd($mm, $au, 10);
echo $sum;
// 149598000063018.8679245280
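For comparison, the same point sketched in Python: at the scale of one AU expressed in millimetres (the MAX_MM constant from the question), a 64-bit double can no longer represent sub-centimetre increments, while arbitrary-precision decimal arithmetic (roughly what BCMath gives you in PHP) keeps every digit:

```python
from decimal import Decimal

MM_PER_AU = 149598000000000  # one AU in millimetres, from the question

# As a float, values this large are spaced 0.03125 apart, so a
# sub-centimetre increment is silently rounded away:
assert float(MM_PER_AU) + 0.01 == float(MM_PER_AU)

# Arbitrary-precision decimal arithmetic keeps the increment:
total = Decimal(MM_PER_AU) + Decimal("0.01")
assert str(total) == "149598000000000.01"
```

This is essentially the choppiness mechanism the question speculates about: position deltas smaller than the float spacing at a given magnitude are lost entirely.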
I'm making a new project in Zend 3 that requires a unique ID or hash that I can use in several places later.
I looked at many examples on Google and could not find a function that satisfies my requirements: it needs to be 99% unique all the time, and it needs to be able to generate hundreds of millions of "hashes", all unique.
The following function caught my attention:
function uniqidReal($length = 13) {
    // uniqid gives 13 chars, but you could adjust it to your needs.
    if (function_exists("random_bytes")) {
        $bytes = random_bytes(ceil($length / 2));
    } elseif (function_exists("openssl_random_pseudo_bytes")) {
        $bytes = openssl_random_pseudo_bytes(ceil($length / 2));
    } else {
        throw new Exception("no cryptographically secure random function available");
    }
    return substr(bin2hex($bytes), 0, $length);
}
A simple test:
echo "<pre>";
for ($i = 0; $i < 100; $i++) {
    echo $this->uniqidReal(25) . PHP_EOL;
}
The result:
a8ba1942ad99d09f496d3d564
5b24746d09cada4b2dc9816bd
c6630c35bc9b4ed0907c803e0
48e04958b633e8a5ead137bb1
643a4ce1bcbca66cea397e85e
d2cd4c6f8dc7054dd0636075f
d9c78bae38720b7e0cc6361f2
54e5f852862adad2ad7bc3349
16c4e42e4f63f62bf9653f96e
c63d64af261e601e4b124e38f
29a3efa07a4d77406349e3020
107d78fdfca13571c152441f2
591b25ebdb695c8259ccc7fe9
105c4f2cc5266bb82222480ba
84e9ad8fd76226f86c89c1ac1
39381d31f494d320abc538a8e
7f8141db50a41b15a85599548
7b15055f6d9fb1228b7438d2a
659182c7bcd5b050befd3fc4c
06f70d134a3839677caa0d246
600b15c9dc53ef7a4551b8a90
a9c8af631c5361e8e1e1b8d9d
4b4b0aca3bbf15d35dd7d1050
f77024a07ee0dcee358dc1f5e
408c007b9d771718263b536e1
2de08e01684805a189224db75
c3838c034ae22d21f27e5d040
b15e9b0bab6ef6a56225a5983
251809396beb9d24b384f5fe8
cec6d262803311152db31b723
95d271ffdfe9df5861eefbaa4
7c11f3401530790b9ef510e55
e363390e2829097e7762bddc4
7ef34c69d9b8e38d72c6db29f
309a84490a7e387aaff1817ca
c214af2927c683954894365df
9f70859880b7ffa4b28265dbb
608e2f2f9e38025d92a1a4f03
c457a54d2da30a4a517edf14c
8670acbded737b1d2febdd954
99899b74b6469e366122b658c
3066408f5b4e86ef84bdb3fb9
010715f4955f66da3402bfa7b
fa01675690435b914631b46e1
2c5e234c5868799f31a6c983c
8345da31809ab2d9714a01d05
7b4e0e507dd0a8b6d7170a265
5aa71aded9fe7afa9a93a98c5
3714fb9f061398d4bb6af909d
165dd0af233cce64cefec12ed
849dda54070b868b50f356068
fe5f6e408eda6e9d429fa34ed
cd13f8da95c5b92b16d9d2781
65d0f69b41ea996ae2f8783a5
5742caf7a922eb3aaa270df30
f381ac4b84f3315e9163f169e
8c2afa1ab32b6fe402bf97ba3
a9f431efe6fc98aa64dbecbc2
8f0746e4e9529326d087f828b
bfc3cbea4d7f5c4495a14fc49
e4bf2d1468c6482570612360e
f1c7238766acdb7f199049487
60ae8a1ffd6784f7bbbc7b437
30afd67f207de6e893f7c9f42
dfa151daccb0e8d64d100f719
07be6a7d4aab21ccd9942401b
73ca1a54fcc40f7a46f46afbd
94ed2888fb93cb65d819d9d52
b7317773c6a15aa0bdf25fa01
edbb7f20f7523d9d941f3ebce
99a3c204b9f2036d3c38342bb
a0585424b8ab2ffcabee299d5
64e669fe2490522451cf10f85
18b8be34d4c560cda5280a103
9524d1f024b3c9864a3fccf75
0e7e94e7974894c98442241bc
4a17cc5e3d2baabaa338f592e
b070eaf38f390516f5cf61aa7
cc7832ea327b7426d8d2b8c2b
0df0a1d4833ebbb5d463c56bf
1bb610a8bb4e241996c9c756a
34ac2fdeb4b88fe6321a1d9c3
f0b20f8e79090dcb65195524c
307252efdd2b833228e0c301f
3908e63b405501782e629ac0b
29e66717adf14fb30c626103d
c8abd48af5f9332b322dffad0
80cd4e162bc7e8fb3a756b48c
825c00cec2294061eb328dd97
106205a2e24609652d149bc17
f1f896657fbc6f6287e7dee20
0fbd16ade658e24d69f76a225
4ab3b5eeeda86fa81afba796a
11d34f3d2ffb61d55da560ddb
013d6151bad187906fcc579a4
4509279a28f34bcf5327dd4c0
3c0eb47b3f9dc5a2f794bb9ad
1e6506906f23542c889330836
e7b1c5012390f3c7c48def9f3
d86caa695cb5fa1e0a2ead4cc
But I cannot confirm that this guarantees a 99% success rate in my production environment.
If someone can advise me or provide an example, I would much appreciate it!
The random_bytes function generates cryptographically secure random bytes.
For openssl_random_pseudo_bytes, add the crypto_strong parameter to ensure the algorithm used is cryptographically strong.
Since your requirement is only 99% uniqueness, cryptographically secure random bytes will meet it.
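To put a number on that: for uniformly random tokens, the collision risk follows the birthday bound, which is easy to estimate. A sketch, taking the question's 25 hex characters as roughly 100 bits of entropy:

```python
import math

def collision_probability(n, d):
    """Approximate birthday bound: probability of any collision among
    n tokens drawn uniformly from d possible values."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * d))

# 25 hex chars = 100 bits of entropy, i.e. d = 2**100. Even after a
# billion tokens the collision probability is negligible:
p = collision_probability(10**9, 2**100)
print(p)  # on the order of 1e-13
```

So random tokens of this length are comfortably inside a "99% unique" requirement at any realistic volume.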
This should be a comment, but it's a bit long.
There is some confusion over your use of "unique" and "all the time". A token is either unique or it is not. Using a random number generator alone is not sufficient to guarantee uniqueness: the whole point of a random number generator is that you don't know what the next value will be, which means you also don't know that it won't repeat a previous value. On the other hand, using random_bytes() or openssl_random_pseudo_bytes() to generate a token which is "99% unique all the time" seems like massive overkill.
To work out how unique this is likely to be, we would need to know how many tokens will be considered within the population at any one time (or be able to calculate this from the expected rate of creation and the TTL).
That you are using large numbers rather implies you have a very good reason for not using the simplest and most obvious unique identifier, i.e. an incrementing integer. Hence resistance to guessing an existing identifier is presumably critical to the implementation, but again you've told us nothing about that.
Pasting the title of your post into Google turns up your post as the top result - with PHP's uniqid() function immediately after it - yet for some reason you've either not found uniqid() or have rejected it for some reason.
The title of your post is also an oxymoron - In order to define an infinite set of identifiers, the identifiers would need to be of infinite length.
it needs to be able to generate hundreds, millions of "hashes"
....and you want it all to run within the Zend Framework? - LOL.
But I cannot confirm that this does guarantee me a 99% success rate for my production environment.
Why not? You have sufficient information here to confirm that the bitwise entropy is evenly distributed, and you should know the planned capacity of the production environment. The rest is basic arithmetic.
We are about 8×10⁹ people. Imagine all of us accessed your site once each second, each needing a unique identifier, for a whole year: you would need about 2.52×10¹⁷ identifiers (8×10⁹ people × ~3.15×10⁷ seconds). If you think your site will be in production for about 1000 years, and the population grows by a factor of 1000, you need about 2.5×10²³ identifiers; so a 32-byte auto-incremental string is far more than enough. Add a pseudo-random 32-byte string as a suffix to get a secure 64-byte identifier. With a bit more work you can hash the identifiers to create tokens.
Then it is easy to write a function to get them.
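Working through the capacity estimate in Python (8×10⁹ users, one identifier per second, figures approximate):

```python
people = 8 * 10**9
seconds_per_year = 365 * 24 * 3600        # ~3.15e7

ids_per_year = people * seconds_per_year  # ~2.5e17 identifiers per year
print(f"{ids_per_year:.3e}")

# 1000 years of operation and a 1000x larger population:
total = ids_per_year * 1000 * 1000        # ~2.5e23 identifiers

# A 32-digit decimal counter can hold up to 1e32 - 1 distinct values,
# leaving enormous headroom:
print(total < 10**32)
```

The point stands regardless of the exact population figure: a fixed-width counter prefix removes any collision risk, and the random suffix provides the unguessability.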
Edited 2017/04/13
A small sample:
The first thing you need is a pseudo-random strong keys generator. I'll post the function I'm using currently:
<?php
function pseudoRandomBytes($count = 32) {
    static $random_state, $bytes, $has_openssl, $has_hash;
    $missing_bytes = $count - strlen($bytes);
    if ($missing_bytes > 0) {
        // If you are using a PHP version before 5.3.4, avoid using
        // openssl_random_pseudo_bytes()
        if (!isset($has_openssl)) {
            $has_openssl = version_compare(PHP_VERSION, '5.3.4', '>=')
                && function_exists('openssl_random_pseudo_bytes');
        }
        // to get entropy
        if ($has_openssl) {
            $bytes .= openssl_random_pseudo_bytes($missing_bytes);
        } elseif ($fh = @fopen('/dev/urandom', 'rb')) {
            // avoiding openssl_random_pseudo_bytes(), you find entropy
            // at /dev/urandom, usually available on most *nix systems
            $bytes .= fread($fh, max(4096, $missing_bytes));
            fclose($fh);
        }
        // If that fails you must create enough entropy yourself
        if (strlen($bytes) < $count) {
            // Initialize on the first call. The contents of $_SERVER
            // include a mix of user-specific and system information
            // that varies a little with each page.
            if (!isset($random_state)) {
                $random_state = print_r($_SERVER, TRUE);
                if (function_exists('getmypid')) {
                    // Further initialize with the somewhat random PHP process ID.
                    $random_state .= getmypid();
                }
                // hash() is only available in PHP 5.1.2+ or via PECL.
                $has_hash = function_exists('hash')
                    && in_array('sha256', hash_algos());
                $bytes = '';
            }
            if ($has_hash) {
                do {
                    $random_state = hash('sha256', microtime() . mt_rand() . $random_state);
                    $bytes .= hash('sha256', mt_rand() . $random_state, TRUE);
                } while (strlen($bytes) < $count);
            } else {
                do {
                    $random_state = md5(microtime() . mt_rand() . $random_state);
                    $bytes .= pack("H*", md5(mt_rand() . $random_state));
                } while (strlen($bytes) < $count);
            }
        }
    }
    $output = substr($bytes, 0, $count);
    $bytes = substr($bytes, $count);
    return $output;
}
Once you have that function you need a function to create your random keys:
<?php
function pseudo_random_key($byte_count = 32) {
    return base64_encode(pseudoRandomBytes($byte_count));
}
As random does not mean unique, you need to merge in a unique 32-byte prefix, as I suggested. Because big-number functions are time-expensive, I'll use a chunk-math function instead, with an 18-byte prefix that I assume is regenerated from time to time by a cron job and stored in an environment DB variable, plus an auto-incrementing index, also DB-stored.
<?php
function uniqueChunkMathKeysPrefix() {
    // a call to read your db for the prefix
    // I suppose you have an environment string-keyed table
    // and a couple of db functions to read and write data to it
    $last18bytesPrefix = dbReadEnvVariable('unique_prefix');
    // You also store your current index, which wraps back to 0 once
    // it reaches 99999999999999
    $lastuniqueindex = dbReadEnvVariable('last_unique_keys_index');
    if ($lastuniqueindex < 99999999999999) {
        $currentuniqueindex = $lastuniqueindex + 1;
        $current18bytesPrefix = $last18bytesPrefix;
    } else {
        $currentuniqueindex = 0;
        $current18bytesPrefix = dbReadEnvVariable('next_unique_prefix');
        // flag your db variables to notify cron to create a new next prefix
        dbStoreEnvVariable('next_unique_prefix', 0);
        dbStoreEnvVariable('unique_prefix', $current18bytesPrefix);
        // the time needed to serve enough visits to consume another
        // 99999999999999 keys gives your cron job a window to prepare
        // the next prefix
    }
    // store your current index
    dbStoreEnvVariable('last_unique_keys_index', $currentuniqueindex);
    // Finally you create the unique index part of the prefix
    $uniqueindexchunk = substr('00000000000000' . $currentuniqueindex, -14);
    // return the output
    return $current18bytesPrefix . $uniqueindexchunk;
}
Now you can write a function for unique pseudo-random 64-byte keys:
<?php
function createUniquePseudoRandomKey() {
    $newkey = uniqueChunkMathKeysPrefix() . pseudo_random_key(32);
    // to beautify the output, make a dummy hashing call
    // that masks the runs of 0s
    return md5($newkey);
}
I wrote this PHP function:
<?php
// Windows CPU temperature
function win_cpu_temp() {
    $wmi = new COM("winmgmts://./root\WMI");
    $cpus = $wmi->execquery("SELECT * FROM MSAcpi_ThermalZoneTemperature");
    foreach ($cpus as $cpu) {
        $cpupre = $cpu->CurrentTemperature;
    }
    // value is in tenths of a kelvin; parenthesized so the
    // concatenation does not bind before the subtraction
    $cpu_temp = (($cpupre / 10) - 273.15) . ' C';
    return $cpu_temp;
}
echo win_cpu_temp();
?>
My problem is that the script displays 59.55 C, which I had thought was correct. I checked this value several hours later, and it was exactly the same. I then put the CPU to work at 90% load compressing video for ten minutes, and the value still didn't change.
Can anyone help me find the "true" value for this function?
I've read (to no avail):
MSAcpi_ThermalZoneTemperature class not showing actual temperature
How is, say, "Core Temp" getting its values? On the same computer, it reports between 49 and 53 Celsius.
With a little digging around, I found that the common issue with MSAcpi_ThermalZoneTemperature is that it depends on being implemented by your system's ACPI firmware.
You could try querying Win32_TemperatureProbe and see if you have any luck there.
Neither MSAcpi_ThermalZoneTemperature nor Win32_TemperatureProbe worked on my system, but if you have admin access you can use http://openhardwaremonitor.org/, which provides a WMI interface for all available sensor data.
This worked great for me and I was able to accurately report CPU Core temp from a PHP script:
function report_cpu_temp() {
    $wmi = new COM('winmgmts://./root/OpenHardwareMonitor');
    $result = $wmi->ExecQuery("SELECT * FROM Sensor");
    foreach ($result as $obj) {
        if ($obj->SensorType == 'Temperature' && strpos($obj->Parent, 'cpu') !== false) {
            echo "$obj->Name ($obj->Value C)"; // output cpu core temp
        } else {
            echo 'skipping ' . $obj->Identifier;
        }
        echo '<br />';
    }
}
Hope this helps.
I have a web application that runs SELECT and INSERT queries against a MySQL database and instantiates a PHP class with the new operator more than a thousand times within a loop. There may be alternatives to my present logic, but my question is: is there any harm in carrying on with it? I'm not worried about the time complexity of the algorithm right now, but rather about whether anything could go wrong during the transaction, and about memory usage. Here is the piece of code for reference:
$stm_const = "select ce.TIMETAKEN, qm.QMATTER as STRING1, ce.SMATTER as STRING2 from w_clkexam ce, clkmst cm, qsmst qm where ce.QID=qm.QID and cm.ROLLNO=ce.ROLLNO";
for ($c = 0; $c < count($rollnos); $c++) {
    $stm3 = $stm_const . " " . "and ce.ROLLNO='$rollnos[$c]'";
    $qry3 = mysql_query($stm3) or die("ERROR 3:" . mysql_error());
    while ($row1 = mysql_fetch_array($qry3)) {
        echo $string1 = $row1['STRING1'];
        echo $string2 = $row1['STRING2'];
        $phpCompareStrings = new PhpCompareStrings($string2, $string1);
        $percent = $phpCompareStrings->getSimilarityPercentage();
        $percent2 = $phpCompareStrings->getDifferencePercentage();
        echo "$string1 and $string2 are $percent% similar and $percent2% different<br/>";
    } // end while
} // end for
Please help; I'm waiting for your opinions so that I can move forward. Thanks in advance.
I don't see any problem there. You just fetch all the rows from the database and compare the strings for each row. Because you assign the object to the same variable each time, the old object is destroyed before the new one is created, so you only ever have one instance in memory. The question is what you want to do with the results: only print them, as in your example, or store them for further processing?
Anyway, I don't think your code can be optimized without modifying the class. If you can modify it, make it accept new strings after construction; then you create only one instance and avoid destroying and creating an object for every row. This will save you some CPU time, but not memory (since only one instance of the class is alive at any time either way).
Untested modification below:
Modify this function inside the class:
function __construct($str1, $str2) {
    $str1 = trim($str1);
    $str2 = trim($str2);
    if ($str1 == "") {
        trigger_error("First parameter can not be left blank", E_USER_ERROR);
    } elseif ($str2 == "") {
        trigger_error("Second parameter can not be left blank", E_USER_ERROR);
    } else {
        $this->str1 = $str1;
        $this->str2 = $str2;
        $this->arr1 = explode(" ", $str1);
        $this->arr2 = explode(" ", $str2);
    }
}
To these 2 functions:
function init($str1, $str2) {
    $str1 = trim($str1);
    $str2 = trim($str2);
    if ($str1 == "") {
        trigger_error("First parameter can not be left blank", E_USER_ERROR);
    } elseif ($str2 == "") {
        trigger_error("Second parameter can not be left blank", E_USER_ERROR);
    } else {
        $this->str1 = $str1;
        $this->str2 = $str2;
        $this->arr1 = explode(" ", $str1);
        $this->arr2 = explode(" ", $str2);
    }
}

function __construct($str1, $str2) { $this->init($str1, $str2); }
Then create the object outside the loop and call $phpCompareStrings->init($string2, $string1) inside it.
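For what it's worth, the same reuse pattern appears elsewhere: Python's difflib.SequenceMatcher, for instance, provides set_seqs() precisely so that one matcher object can serve many comparisons. A sketch of the pattern (illustrative only, not the PhpCompareStrings API):

```python
import difflib

pairs = [
    ("the quick brown fox", "the quick brown dog"),
    ("hello world", "hello there"),
]

sm = difflib.SequenceMatcher()  # one instance, reused for every pair
for a, b in pairs:
    sm.set_seqs(a, b)
    similar = sm.ratio() * 100   # percent similar
    different = 100 - similar    # percent different
    print(f"{a!r} vs {b!r}: {similar:.1f}% similar, {different:.1f}% different")
```

SequenceMatcher caches preprocessing of its second sequence, so reusing one instance also avoids redundant work when one side of the comparison stays fixed.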