Inconsistent loading time for JS and PHP

I have a PHP script that is called from JavaScript through jQuery's $.ajax.
I measured the execution time of the PHP script using:
$start = microtime(); // topmost part of the script
// all other processing, including AES decryption
$end = microtime(); // bottom of the script
file_put_contents('LOG.TXT', 'TIME IT TOOK: ' . ($end - $start) . "\n", FILE_APPEND);
It measured somewhere under 1 second. There are no auto-prepend/append PHP scripts configured.
In the JS $.ajax code, I measured the execution time with:
success: function(response) {
    console.log(new Date().toLocaleTimeString() + ' time received');
    // all other processing, including AES decryption
    console.log(new Date().toLocaleTimeString() + ' time processed');
}
The logged times for "time received" and "time processed" are the same.
However, when I check Chrome Developer Tools, it claims the PHP script took about 8 seconds to load.
What could be wrong in how I measured these things?
I'm certain that the PHP side is fast, so how come Chrome reports that it took more than 8 seconds?
I'm running on localhost, my web server is fast, and this is the only time I've encountered this problem. All other AJAX calls are fast.

In the PHP section, make sure you use microtime(true) so that you're working with floating-point numbers instead of strings. Without the true argument, microtime() returns a string of the form "msec sec", and subtracting two such strings only converts their leading fractional parts, which yields a meaningless result.
Example: http://ideone.com/FWkjF2
<?php
// Wrong: microtime() returns the string "msec sec"
$start = microtime();
sleep(3);
$stop = microtime();
echo ($stop - $start) . PHP_EOL; // prints 8.000000000008E-5

// Correct: microtime(true) returns a float
$start = microtime(true);
sleep(3);
$stop = microtime(true);
echo ($stop - $start) . PHP_EOL; // prints 3.0000791549683
?>
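As a side note, on PHP 7.3 and newer, hrtime() offers a monotonic clock that sidesteps float precision concerns entirely; a minimal sketch:
<?php
// hrtime(true) returns the monotonic clock reading in nanoseconds (PHP 7.3+)
$start = hrtime(true);
sleep(3);
$stop = hrtime(true);
echo (($stop - $start) / 1e9) . ' seconds' . PHP_EOL; // ~3.0
?>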

Related

Performance degrades when php getmxrr() is called from inside shell for loop

I noticed a big performance difference when I tried to fetch the MX records of gmail.com 100,000 times using PHP versus a shell script.
The PHP script takes around 1.5 minutes:
<?php
$time = time();
for ($i = 1; $i <= 100000; $i++) {
    getmxrr('gmail.com', $hosts, $mxweights);
    unset($hosts, $mxweights);
}
$runtime = time() - $time;
echo "Time Taken : $runtime Sec.";
?>
But the same thing done inside a shell for loop is almost 10 times slower:
time for i in {1..100000}; do (php -r 'getmxrr("gmail.com", $mxhosts, $mxweight);');done
I am curious to know why the shell script takes so much more time to complete exactly the same thing that the PHP script does very fast.
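Most likely the dominant cost in the shell version is interpreter startup: each iteration spawns a fresh php process, while the PHP script performs all 100,000 lookups inside one long-lived process. A minimal sketch to make the overhead visible (assuming php is on the PATH and exec() is enabled):
<?php
// Compare one lookup inside a running interpreter with the cost of
// starting a brand-new PHP process (pure startup/teardown overhead).
$t = microtime(true);
getmxrr('gmail.com', $hosts, $mxweights);
printf("one getmxrr() call: %.4f s\n", microtime(true) - $t);

$t = microtime(true);
exec('php -r ";"'); // a no-op PHP process
printf("one process spawn:  %.4f s\n", microtime(true) - $t);
?>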

Microtime inaccurate time measurement?

I just found out about microtime() in PHP.
I tried to check how long a basic image load takes.
Here is the code:
<?php
$start = microtime(true);
echo("<img src='http://example.com/public/images/new.png'/>");
$time_elapsed_secs = microtime(true) - $start;
echo($time_elapsed_secs);
?>
On average it returns "8.8214874267578E-6", which I assume means 8.82 seconds?
Did I do something wrong? I'm sure the image loads faster than 8 seconds; I would definitely notice 8 seconds.
The E-6 at the end of that string means you move the decimal point six places to the left.
By the way, the echo statement executes almost instantly; it only writes that HTML to the output stream. It does not mean the image loaded that fast in some remote browser reading the HTML stream and trying to load the image.
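For illustration, printing the value in plain decimal form makes the magnitude obvious:
<?php
// 8.8214874267578E-6 seconds, written out in full:
printf("%.10f\n", 8.8214874267578E-6); // 0.0000088215, i.e. about 8.8 microseconds
?>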

executing script before 60 seconds

I am creating a simple script to test that I can echo inside my while loop before 60 seconds are up, but nothing is echoed inside the loop. I don't know if the loop body is really executed; then my browser crashes.
$timelimit = 60; // seconds
set_time_limit($timelimit);
$start_time = time(); // record the start time
while (((time() - $start_time) < $timelimit) || !$timelimit) {
    echo "executing..<br/>";
}
Thank you in advance.
This is a very tight loop. It runs very fast and creates a very large output (hundreds of thousands of lines), which will eventually kill the browser. You can add some delay to your loop:
while (((time() - $start_time) < $timelimit) || !$timelimit) {
    sleep(1); // pause for 1 second
    echo "executing..<br/>";
}
In this case the output will be only 60 lines, and the browser should render it after a minute of waiting.
CPU execution is very fast (on the order of 10^-9 s per instruction), and your loop runs for 60 seconds, so hundreds of millions of iterations can occur. If you try to print something on every one of them, your browser will be killed.
If you're expecting to see the output as the script generates it, you'll want to add a flush(); after your echo. However, if I recall correctly, PHP may still hold the output until a certain number of bytes has accumulated (1024, maybe?).
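A minimal sketch combining both fixes (the 4096-byte padding is my guess; the real buffering threshold depends on the server configuration):
$timelimit = 60; // seconds
set_time_limit($timelimit);
$start_time = time();

while ((time() - $start_time) < $timelimit) {
    echo "executing..<br/>\n";
    echo str_repeat(' ', 4096); // padding, in case the server buffers small writes
    flush();                    // push PHP's output toward the client
    sleep(1);                   // one line per second instead of a tight loop
}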

Is there a way to put the number of seconds a script takes to execute (or "page load time") in the middle of the page?

I want to display the number of seconds a PHP search script takes to run, but I don't want to put that number at the bottom of the page. I put this at the top of the PHP file:
$time_start = microtime(true);
And then I put this where I am echoing out the number:
$time_end = microtime(true);
echo number_format($time_end - $time_start, 5);
The problem is that this doesn't cover the whole script, since a lot of other code still runs below that point. Is there a way to measure how long the whole script takes to execute but echo the result somewhere other than the bottom of the page?
I can think of three possibilities:
Echo the value inside a JavaScript snippet at the bottom of the page. The script can modify the DOM as soon as the page is loaded, so the value can be inserted anywhere in the document. Disadvantage: won't work for clients that don't support or ignore JavaScript.
Cheat and stop the timer early. You'll have to do all the time-intensive work before the point in the document where you want the time (i.e. preload all results into memory and echo them afterwards), so that the part you're not measuring is negligible. Disadvantage: it's not completely accurate, and the preloading can be a hassle or memory-inefficient.
Buffer all output with the output-control functions and perform a search-and-replace on the buffer contents at the end of the script (a minimal sketch of this follows below). Disadvantage: could be inefficient depending on your situation.
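A rough sketch of that third option, using an arbitrary placeholder token ({{ELAPSED}} is my own invention, not from the answer) that is swapped for the real number once the script finishes:
<?php
$time_start = microtime(true);
ob_start();

echo "Search took {{ELAPSED}} seconds."; // printed mid-page as a placeholder
// ... the rest of the page, including the slow search ...

$html = ob_get_clean();
echo str_replace(
    '{{ELAPSED}}',
    number_format(microtime(true) - $time_start, 5),
    $html
);
?>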
Not precisely, but you can get a very close approximation by using output buffering:
<?php
$time_start = microtime(true);
ob_start();

// Do stuff here

$page = ob_get_contents();
ob_end_clean(); // ob_get_clean() would combine these two calls
$time_end = microtime(true);
$elapsed = number_format($time_end - $time_start, 5);
var_dump($page, $elapsed);
?>

PHP Output Buffer Benchmark (microtime inaccurate when used with usleep?)

I'm posting a strange behavior that can be reproduced (at least on apache2 + php5).
I don't know if I'm doing something wrong, but let me explain what I'm trying to achieve.
I need to send chunks of binary data (let's say 30 of them) and analyze the average Kbit/s at the end:
I sum each chunk's output time and each chunk's size, and perform the Kbit/s calculation at the end.
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--) {
    $var .= pack('N', 85985);
}

// get the size, prepare the counters
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;

// send the chunk 30 times
for ($i = 0; $i < 30; $i++) {
    // start time
    $t = microtime(true);
    echo $var . "\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time

    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?
    // add this chunk's benchmark to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;
}

// total result
echo "\n total: " . (($tt_sent * 8) / ($tt_time) / 1024) . "\n";
?>
In the example above it works so far (on localhost it oscillates between 7000 and 10000 Kbit/s across different runs).
Now, let's say I want to shape the transmission, because I know the client will have enough data in one chunk to process for a second.
I decided to use usleep(1000000) to mark a pause between chunk transmissions:
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--) {
    $var .= pack('N', 85985);
}

// get the size, prepare the counters
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;

// send the chunk 30 times
for ($i = 0; $i < 30; $i++) {
    // start time
    $t = microtime(true);
    echo $var . "\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time

    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?
    // add this chunk's benchmark to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;

    usleep(1000000); // pause one second between chunks
}

// total result
echo "\n total: " . (($tt_sent * 8) / ($tt_time) / 1024) . "\n";
?>
In this last example, I don't know why, but the calculated bandwidth can jump from 72,000 Kbit/s to 1,200,000; it is totally inaccurate/irrelevant.
Part of the problem is that the time measured to output a chunk is ridiculously low each time a chunk is sent (after the first usleep).
Am I doing something wrong? Is the output buffering not synchronous?
I'm not sure how definitive these tests are, but I found them interesting. On my box I average around 170,000 kb/s; from a networked box that number goes up to around 280,000 kb/s. I guess we have to assume microtime(true) is fairly accurate, even though I've read it is operating-system dependent. Are you on a Linux-based system? The real question is how we calculate the kilobits transferred in a one-second period. I try to project how many chunks can be sent in 1 second, then store the calculated kb/s to be averaged at the end. Adding a sleep(1) before flush() results in a negative kb/s, as is to be expected.
Something doesn't feel right, and I would be interested in knowing whether you have improved your testing method. Good luck!
<?php
// build my binary chunk
$var = '';
$o = 10000;

// alternative way to get the actual bytes
$m1 = memory_get_usage();
while ($o--) {
    $var .= pack('N', 85985);
}
$m2 = memory_get_usage();

// your size estimate
$size = strlen($var);
// calculate the alternative byte count
$bytes = ($m2 - $m1); // 40108

// convert to kilobytes: 1 kilobyte = 1024 bytes
$kilobytes = $size / 1024;
// convert to kilobits: 1 byte = 8 bits
$kilobits = $kilobytes * 8;

// display our data for the record
echo "<pre>size: $size</pre>";
echo "<pre>bytes: $bytes</pre>";
echo "<pre>kilobytes: $kilobytes</pre>";
echo "<pre>kilobits: $kilobits</pre>";
echo "<hr />";

// the test count
$count = 100;
// initialize the total kb/s variable
$total = 0;

for ($i = 0; $i < $count; $i++) {
    // start time
    $start = microtime(true);
    // use an HTML comment to prevent the browser from parsing the data
    echo "<!-- $var -->";
    // end time
    $end = microtime(true);

    // seconds it took to output the binary chunk
    $seconds = $end - $start;
    // calculate how many chunks we could send in 1 second
    $chunks = (1 / $seconds);
    // calculate the kilobits per second for this iteration
    $kbs = $chunks * $kilobits;
    // store the kb/s; we'll average them all outside the loop
    $total += $kbs;
}

// process the average (data-generation) kilobits per second
$average = $total / $count;
echo "<h4>Average kbit/s: $average</h4>";
Analysis
Even though I arrive at a somewhat arbitrary value in this test, it is still a value that can be measured. Using a networked computer adds insight into what is really going on. I would have thought the localhost machine would show a higher value than the networked box, but the tests prove otherwise in a big way. On localhost we have to both send the raw binary data and receive it, so two processes share CPU cycles, and the supposed kb/s value is in fact lower when testing in a browser on the same machine. We are therefore really measuring CPU cycles, and we obtain higher values when the server is allowed to just be a server.
Some interesting things start to show up when you increase the test count to 1000. First, don't make the browser parse the data: it takes a lot of CPU to attempt to render raw data at such high test counts. We can manually watch what is going on with, say, a system monitor and a task manager. In my case the local box is a Linux server and the networked box is XP. You can obtain some real kb/s speeds this way, and it makes it obvious that we are dynamically serving data using mainly the CPU and the network interfaces. The server doesn't replicate the data, so no matter how high we set the test count, we only need 40 kilobytes of memory. Those 40 kilobytes can generate 40 megabytes dynamically at 1000 test cases and 400 MB at 10000.
I crashed Firefox on XP with a "virtual memory too low" error after running the 10000-case test several times. The system monitor on the Linux server showed some understandable spikes in CPU and network, but overall it pushed out a large amount of data very quickly and had plenty of room to spare. Running 10000 cases on Linux a few times actually spun up the swap drive and pegged the server's CPU. The most interesting fact, though, is that the values I obtained above only changed when I was both receiving in Firefox and transmitting in Apache on the same machine. I practically locked up the XP box, yet my network value of ~280,000 kb/s did not change in the printout.
Conclusion:
The kb/s value we arrive at above is virtually useless, other than to prove that it is useless. The test itself, however, shows some interesting things. At high test counts I believe I could actually see some physical buffering going on in both the server and the client. Our test script dumps the data to Apache and is then released to continue its work, while Apache handles the details of the transfer. This is nice to know, but it proves we can't measure the actual server-to-browser transmission rate this way. We can measure the server's data-generation rate, I suppose, if that's meaningful in some way. And note: flushing actually slowed down the speeds; there is a penalty for flushing. In this case there is no reason for it, and removing flush() actually speeds up our data-generation rate. Since we are not dealing with networking, the value above would actually be more meaningful kept in kilobytes. It's useless anyway, so I'm not changing it.
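For what it's worth, one likely reason the per-chunk timings in the question collapse after the first usleep() is that flush() returns as soon as the data has been handed to the web server / OS socket buffers; during the one-second pause those buffers drain, so the next echo + flush is little more than a memory copy. A rough alternative sketch (my own, under that assumption) times the whole transfer with a single clock and subtracts the deliberate sleeps:
<?php
// Sketch only: wall-clock timing of the whole transfer, minus the intentional
// sleeps. Still approximate, because flush() returns once data reaches the
// server/OS buffers, not when the client has actually received it.
$chunk = str_repeat(pack('N', 85985), 10000); // same ~40 KB payload as above
$size  = strlen($chunk);
$count = 30;

$start = microtime(true);
for ($i = 0; $i < $count; $i++) {
    echo $chunk . "\n";
    if (ob_get_level() > 0) {
        ob_flush(); // flush PHP's own buffer first, if one is active
    }
    flush();
    usleep(1000000); // deliberate 1 s pause per chunk
}
$elapsed = (microtime(true) - $start) - $count; // subtract the ~30 s of sleeping

echo "\napprox: " . (($size * $count * 8) / $elapsed / 1024) . " Kbit/s\n";
?>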
