I have two collections. The first one, $bluescan, has 345 documents, some of which are missing their ['company'] value. The second one, $maclistResults, has 20285 documents.
I'm running $bluescan against $maclistResults to fill missing ['company'] values.
I created three indexes for $maclistResults using the following PHP code:
$db->$jsonCollectionName->ensureIndex(array('mac' => 1));
$db->$jsonCollectionName->ensureIndex(array('company' => 1));
$db->$jsonCollectionName->ensureIndex(array('mac'=>1, 'company'=>1));
I'm using a MongoLab free account for the DB and running my PHP application locally.
The code below demonstrates the process. It works and does what I need, but it takes almost 64 seconds to perform the task.
Is there anything I can do to improve this execution time?
Code:
else
{
    $maclist = 'maclist';
    $time_start = microtime(true);

    foreach ($bluescan as $key => $value)
    {
        // One round trip to the server per $bluescan element.
        $maclistResults = $db->$maclist->find(
            array('mac' => substr($value['mac'], 0, 8)),
            array('mac' => 1, 'company' => 1)
        );
        foreach ($maclistResults as $value2)
        {
            if (empty($value['company']))
            {
                $bluescan[$key]['company'] = $value2['company'];
            }
        }
    }
}

$time_end = microtime(true);
$time = $time_end - $time_start;
echo "<pre>";
echo "maclistResults execution time: " . $time . " seconds";
echo "</pre>";
Echo output from $time:
maclistResults execution time: 63.7298750877 seconds
Additional info: PHP version 5.6.2 (MAMP PRO on OS X Yosemite)
As stated by SolarBear, you're actually executing find once for every element in $bluescan.
You should first fetch the list from the server and then iterate over that result set instead, basically the exact opposite order of your current code.
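For example, a minimal sketch of that approach (assuming the same $db handle and $bluescan array as above, and that the maclist documents store the 8-character MAC prefix in their mac field): fetch the whole list once, build an in-memory lookup, and resolve each entry locally.
<?php
// Sketch only: one query instead of one per $bluescan element.
$lookup = array();
$cursor = $db->maclist->find(array(), array('mac' => 1, 'company' => 1));
foreach ($cursor as $doc) {
    // Map each MAC prefix to its company name.
    $lookup[$doc['mac']] = $doc['company'];
}

foreach ($bluescan as $key => $value) {
    if (empty($value['company'])) {
        $prefix = substr($value['mac'], 0, 8);
        if (isset($lookup[$prefix])) {
            $bluescan[$key]['company'] = $lookup[$prefix];
        }
    }
}
This trades roughly 345 round trips to the remote MongoLab instance for a single one, which is where most of the 64 seconds are going.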
Related
On a Linux server with Nginx as the web server, the response to an HTTP request from an Android application to PHP code is too slow.
The point is that the response from the server is produced as follows:
echo "{$mResponse}"
When $mResponse is small (say, 500 bytes), there is no problem and the response is quick.
When $mResponse is big (say, 200 KB), there is a problem and the response is too slow.
I suspected echo, searched a lot, and found that echo can indeed be slow for large strings.
There were suggested solutions like chunking the big data into smaller chunks (4096 bytes) and then echoing,
or using ob_start() and the like.
I tested them and finally found out the problem is somewhere else, because when I used the following code, the echo time was fine:
$time_start = microtime(true);
$this->echobig("###{$mCommand}70{$mUserID}{$mResponse}{$mCheckSum}###");
echo "\nThe first script took: " . (microtime(true) - $time_start) . " sec";

$time_start = microtime(true);
ob_start();
$this->echobig("###{$mCommand}70{$mUserID}{$mResponse}{$mCheckSum}###");
ob_end_flush();
echo "\nThe second script took: " . (microtime(true) - $time_start) . " sec";

public function echobig($string)
{
    // Send the string in 4096-byte chunks instead of one big echo.
    $splitString = str_split($string, 4096);
    foreach ($splitString as $chunk)
    {
        echo $chunk;
    }
}
In both scripts above (the first and the second) the time was near
0.0005 sec
which is fine.
But the Android application receives the response in 13 seconds.
As I said, when the response is small, the Android application receives it quickly (in 2 seconds).
Now I suspect the Nginx settings or the PHP settings (maybe buffer limits somewhere),
but I don't know which parameter is the problem.
Sorry, it has nothing to do with Nginx or PHP.
The problem is the following code, which calculates the checksum of the big response mentioned above:
$cs = 0;
$content_length = strlen($string); // byte count, not character count
for ($i = 0; $i < $content_length; $i++)
{
    // mb_substr() rescans the string from the start on every call,
    // so this loop is O(n^2) on a 200 KB response.
    $K = mb_substr($string, $i, 1, 'UTF-8');
    $A = mb_convert_encoding(mb_substr($string, $i, 1, 'UTF-8'), "UTF-8");
    $B = $this->mb_ord_WorkAround($A);
    $cs = $cs + $B;
}
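For reference, a linear-time version of the same loop (a sketch, assuming the same mb_ord_WorkAround() helper as above): split the string into UTF-8 characters once instead of calling mb_substr() per index. The mb_convert_encoding() call converted UTF-8 to UTF-8 and is dropped as a no-op.
$cs = 0;
// One linear pass: split into UTF-8 characters up front.
$chars = preg_split('//u', $string, -1, PREG_SPLIT_NO_EMPTY);
foreach ($chars as $char) {
    $cs = $cs + $this->mb_ord_WorkAround($char);
}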
I am currently working on a webserver that will provide a quick diagnostic dashboard for other servers.
My requirement is to display the time on these remote servers; it seems NTP creates some issues and I would like to see that.
I currently have a bat file on my desktop that simply sends:
net time \\SRV*******
I have also tried:
echo exec('net time \\\\SRV****');
=> the result is '0'
But I would like to find a better solution in PHP so that anybody on the team can read it on a webpage.
Any idea what I could do?
Note: this is not related to How to get date and time from server, as I want to get the time of a REMOTE server, not the local server.
You can use the Time protocol (RFC 868, port 37, a precursor of NTP) to retrieve the date and time from a remote server.
Try this code:
error_reporting(E_ALL ^ E_NOTICE);
ini_set("display_errors", 1);
date_default_timezone_set("Europe/...");

function query_time_server($timeserver, $socket)
{
    # parameters: server, socket, error code, error text, timeout
    $fp = fsockopen($timeserver, $socket, $err, $errstr, 5);
    if ($fp) {
        fputs($fp, "\n");
        $timevalue = fread($fp, 49);
        fclose($fp); # close the connection
    } else {
        $timevalue = " ";
    }
    $ret = array();
    $ret[] = $timevalue;
    $ret[] = $err;    # error code
    $ret[] = $errstr; # error text
    return $ret;
}

$timeserver = "10.10.10.10"; # server IP or host
$timercvd = query_time_server($timeserver, 37);

// if no error from query_time_server
if (!$timercvd[1]) {
    $timevalue = bin2hex($timercvd[0]);
    $timevalue = abs(HexDec('7fffffff') - HexDec($timevalue) - HexDec('7fffffff'));
    $tmestamp = $timevalue - 2208988800; # convert to UNIX epoch time stamp
    $datum = date("Y-m-d (D) H:i:s", $tmestamp - date("Z", $tmestamp)); /* incl time zone offset */
    $doy = (date("z", $tmestamp) + 1);
    echo "Time check from time server ", $timeserver, " : [<font color=\"red\">", $timevalue, "</font>]";
    echo " (seconds since 1900-01-01 00:00.00).<br>\n";
    echo "The current date and universal time is ", $datum, " UTC. ";
    echo "It is day ", $doy, " of this year.<br>\n";
    echo "The unix epoch time stamp is $tmestamp.<br>\n";
    echo date("d/m/Y H:i:s", $tmestamp);
} else {
    echo "Unfortunately, the time server $timeserver could not be reached at this time. ";
    echo "$timercvd[1] $timercvd[2].<br>\n";
}
You can use a PowerShell script along with WMI (assuming your servers are Windows machines).
This will be far faster than the old net time style.
The script needs to be executed as a domain admin (or any other user that has WMI-query permissions):
$servers = 'dc1', 'dc2', 'dhcp1', 'dhcp2', 'wsus', 'web', 'file', 'hyperv1', 'hyperv2'
ForEach ($server in $servers) {
    $time = "---"
    $time = ([WMI]'').ConvertToDateTime((gwmi win32_operatingsystem -computername $server).LocalDateTime)
    $server + ', ' + $time
}
You can run this as a scheduled task every 5 minutes.
If you want to display the time down to seconds or minutes (I wouldn't query each server too often, as time does not change THAT fast), you could add a little maths, as sketched below:
When executing the script, don't store the actual time, just each server's offset compared to your webserver's time (i.e. +2.2112, -15.213).
When your users visit the "dashboard", load these offsets and display the webserver's current time +/- the offset per server. (It could be "outdated" by some microseconds then, but do you care?)
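A sketch of the display side, assuming the scheduled task writes the offsets to a hypothetical offsets.json file (server name mapped to seconds ahead of or behind the webserver):
<?php
// Sketch only: offsets.json is a hypothetical file produced by the scheduled task,
// e.g. {"dc1": 2.2112, "dhcp1": -15.213}
$offsets = json_decode(file_get_contents('offsets.json'), true);

$now = microtime(true);
foreach ($offsets as $server => $offset) {
    // Show the webserver's current time shifted by each server's stored offset.
    echo $server . ': ' . date('H:i:s', (int) round($now + $offset)) . "<br>\n";
}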
Why so complicated?
I just found that this works:
echo exec('net time \\\\SRV****', $output);
echo '<pre>';
print_r($output);
echo '</pre>';
I have 6 nested loops in a PHP program; the calculation time for the script is extremely slow. I would like to ask if there is a better way of implementing these 6 loops to reduce the computation time, even if it means switching to another language. The nature of the algorithm I'm implementing requires iteration, so I don't know how I can implement it better.
Here's the code.
<?php
$time1 = microtime(true);

$res = 16;
$imageres = 128;

// Fill a 128x128 "image" with constant pixels.
for ($x = 0; $x < $imageres; ++$x) {
    for ($y = 0; $y < $imageres; ++$y) {
        $pixels[$x][$y] = 1;
    }
}

$quantizermatrix = 1;
$scalingcoefficient = 1 / ($res / 2);

// Walk the image in 16x16 blocks and compute a DCT per block.
for ($currentimagex = 0; $currentimagex < ($res * ($imageres / $res - 1) + 1); $currentimagex += $res) {
    for ($currentimagey = 0; $currentimagey < ($res * ($imageres / $res - 1) + 1); $currentimagey += $res) {
        for ($u = 0; $u < $res; ++$u) {
            for ($v = 0; $v < $res; ++$v) {
                for ($x = 0; $x < $res; ++$x) {
                    for ($y = 0; $y < $res; ++$y) {
                        // Normalization factors for the DC terms.
                        if ($u == 0) { $a = 1 / sqrt(2); } else { $a = 1; }
                        if ($v == 0) { $b = 1 / sqrt(2); } else { $b = 1; }
                        $xes[$y] = $pixels[$x + $currentimagex][$y + $currentimagey]
                            * cos((M_PI / $res) * ($x + 0.5) * $u)
                            * cos((M_PI / $res) * ($y + 0.5) * $v);
                    }
                    $xes1[$x] = array_sum($xes);
                }
                $xes2 = array_sum($xes1) * $scalingcoefficient * $a * $b;
                $dctarray[$u + $currentimagex][$v + $currentimagey] = round($xes2 / $quantizermatrix) * $quantizermatrix;
            }
        }
    }
}

foreach ($dctarray as $dct) {
    foreach ($dct as $dc) {
        echo $dc . " ";
    }
    echo "<br>";
}

$time2 = microtime(true);
echo 'script execution time: ' . ($time2 - $time1);
?>
I've removed a large portion of the code that's irrelevant, since this is the section that's problematic.
Essentially, the code iterates through every pixel of a PNG image and outputs a computed matrix (2D array). It takes around 2 seconds for a 128x128 image, which makes the program impractical for normal images larger than 128x128.
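(As an aside, a standard DCT optimization that is not part of the answer below: the cosine factors depend only on ($x, $u) and ($y, $v), so they can be precomputed once per block size instead of being recalculated in the innermost loop. A sketch:)
// Sketch only: build the cosine table once, outside the six loops.
$costable = array();
for ($u = 0; $u < $res; ++$u) {
    for ($x = 0; $x < $res; ++$x) {
        $costable[$u][$x] = cos((M_PI / $res) * ($x + 0.5) * $u);
    }
}
// The innermost product then becomes $costable[$u][$x] * $costable[$v][$y].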
There is a function available in the Imagick library, Imagick::exportImagePixels. See the link below; it might help you out:
http://www.php.net/manual/en/imagick.exportimagepixels.php
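For example, a minimal sketch (assuming a grayscale PNG named input.png) that pulls every pixel intensity in one call instead of looping per pixel:
<?php
// Sketch only: exportImagePixels() returns a flat array of channel values.
$image  = new Imagick('input.png');
$width  = $image->getImageWidth();
$height = $image->getImageHeight();
// Red channel of every pixel, one value (0-255) each.
$pixels = $image->exportImagePixels(0, 0, $width, $height, 'R', Imagick::PIXEL_CHAR);
// The intensity at (x, y) is $pixels[$y * $width + $x].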
Ok, I know this is not the first question about microtime and number_format, but it seems no one has been able to come up with a dummy-proof answer. So, as the dummy, here is my question:
How can I output the difference between two microtime(true) readings in milliseconds? By that I mean 1 being 1 millisecond and 1000 being 1 second.
So if it took less than a millisecond, I get 1. If it took half a second, I get 500.
I currently do this:
number_format((microtime(true)-$timetrace), 4)
And I get this:
1,391,490,671.8339
=)
$timetrace was set earlier in the code as $timetrace = microtime(true). Thanks for the help, and if this was answered before in a one-line command, I am truly sorry for asking again.
This has nothing to do with number_format.
My bet is that you did not put what you intended into your $timetrace variable.
See this example:
<?php
$timetrace = microtime(true);
sleep (1); // wait about 1 second
$elapsed_time = microtime(true) - $timetrace;
echo "$elapsed_time<br>";
echo number_format($elapsed_time, 4) . "<br>";
?>
output:
1.0120580196381
1.0121
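To get the milliseconds the question asks for, a minimal follow-on sketch: multiply the elapsed seconds by 1000 and round.
<?php
$timetrace = microtime(true);
usleep(500000); // stand-in for the work being measured (~0.5 s)
$elapsed_ms = (int) round((microtime(true) - $timetrace) * 1000);
echo max($elapsed_ms, 1) . " ms"; // prints roughly "500 ms"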
Based on kuroi neko's suggestion, I built a little function and it works perfectly! Thank you a lot!!
function ms($overall = 0)
{
    global $timetrace;
    global $starttrace;

    if ($overall == 1) {
        // Total time since the top of the page.
        $result = microtime(true) - $starttrace;
    } else {
        // Time since the previous call to ms().
        $result = microtime(true) - $timetrace;
    }
    $result = number_format($result, 4);
    $result = $result * 1000; // seconds -> milliseconds
    if ($result < 1) $result = 1;

    // Reset timetrace for the next measurement
    $timetrace = microtime(true);

    return $result . " ms<br>";
}
In the code I just have to put:
if (isset($timetrace)) echo "CPU$==# initiation time = ".ms();
That gives me the time spent since my last echo. (The isset check uses a variable I configure at the beginning to see whether I am in 'time debugging mode'.)
If I pass 1 -- echo "CPU$==# initiation time = ".ms(1); -- I get the total duration of the script since the top of the page instead of the time since the last echo.
Here is a sample of the output:
CPU$==# INSERT = 12.4 ms
CPU$==# DATE SET FOR GOOGLE QUERY = 530.4 ms
CPU$==# GOOGLE QUERY EXECUTED = 308.8 ms
CPU$==# INSERT 606 ROW = 12290.3 ms
CPU$==# CLOSE LOG WITH SUCCESS = 25.4 ms
Success
And yeah, performance is really terrible. The first 530 ms just to prepare the Google query is way too high, and 12 seconds to insert 600 rows is insane. Something is badly broken somewhere! =) But at least now I will be able to benchmark my progress.
Thank you!!!
PHPBench.com runs quick benchmark scripts on each pageload. On the foreach test, when I load it, foreach takes anywhere from 4 to 10 times as long to run as the third example.
Why is it that a native language construct is apparently slower than performing the logic oneself?
Maybe it has to do with the fact that foreach works on a copy of the array?
Or maybe it has to do with the fact that, when looping with foreach, the internal array pointer is changed on each iteration to point to the next element?
Quoting the relevant portion of foreach's manual page :
Note: Unless the array is referenced, foreach operates on a copy of the specified array and not the array itself. foreach has some side effects on the array pointer.
As far as I can tell, the third test you linked to doesn't do either of those two things -- which means the two tests don't do the same thing -- which means you are not comparing two ways of writing the same code.
(I would also say that this kind of micro-optimization will not matter at all in a real application -- but I guess you already know that, and just asked out of curiosity)
There is also one thing that doesn't feel right in this test: it only runs each test once. For a "better" test, it might be useful to run all of them more than once -- with timings on the order of 100 microseconds, not much is required to make a huge difference.
(Considering the first test varies between 300% and 500% on a few refreshes...)
For those who don't want to click, here's the first test (I've gotten 3xx%, 443%, and 529%) :
foreach ($aHash as $key => $val) {
    $aHash[$key] .= "a";
}
And the third one (100%) :
$key = array_keys($aHash);
$size = sizeOf($key);
for ($i = 0; $i < $size; $i++) {
    $aHash[$key[$i]] .= "a";
}
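For comparison (not one of the site's tests, just a sketch): iterating by reference performs the same mutation without copying values or re-indexing by key.
foreach ($aHash as &$val) {
    $val .= "a";
}
unset($val); // break the reference the loop leaves behind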
I'm sorry, but the website got it wrong. Here's my own script, which shows the two are almost the same in speed -- and in fact, foreach is faster!
<?php
function start() {
    global $aHash;
    // Initial Configuration
    $i = 0;
    $tmp = '';
    while ($i < 10000) {
        $tmp .= 'a';
        ++$i;
    }
    $aHash = array_fill(100000000000000000000000, 100, $tmp);
    unset($i, $tmp);
    reset($aHash);
}

/* The Test */
$t = microtime(true);
for ($x = 0; $x < 500; $x++) {
    start();
    $key = array_keys($aHash);
    $size = sizeOf($key);
    for ($i = 0; $i < $size; $i++) $aHash[$key[$i]] .= "a";
}
print (microtime(true) - $t);
print ('<br/>');

$t = microtime(true);
for ($x = 0; $x < 500; $x++) {
    start();
    foreach ($aHash as $key => $val) $aHash[$key] .= "a";
}
print (microtime(true) - $t);
?>
If you look at the source code of the tests: http://www.phpbench.com/source/test2/1/ and http://www.phpbench.com/source/test2/3/ , you can see that $aHash isn't repopulated with the initial data after each iteration. It is created once at the beginning, then each test is run X times. In this sense, you are working with an ever-growing $aHash on each iteration... in pseudocode:
iteration 1: $aHash[10000000000000]=='aaaaaa....10000 times...a';
iteration 2: $aHash[10000000000000]=='aaaaaa....10001 times...a';
iteration 3: $aHash[10000000000000]=='aaaaaa....10002 times...a';
Over time, the data for all the tests gets larger with each iteration, so of course by iteration 100 the array_keys method is faster, because it always works with the same keys, whereas the foreach loop has to contend with an ever-growing data set and store the values in arrays!
If you run my code provided above on your server, you'll see clearly that foreach is faster AND neater AND clearer.
If the author of the site intended his test to do what it does, then it certainly is not clear, and otherwise, it's an invalid test.
Benchmark results for such micro measurements, coming from a live, busy webserver that is subject to extreme amounts of varying load and other influences, should be disregarded. This is not an environment to benchmark in.