How can I get the real CPU time used in PHP?

How can I calculate the CPU time actually used by my php script?
Note this is NOT what I'm looking for:
<?php
$begin_time=microtime(true);
//..
//end of the script:
$total_time=microtime(true)-$begin_time;
because that would give me the time elapsed. That may include a lot of time used by unrelated processes running at the same time, as well as time spent waiting for i/o.
I've seen there is getrusage(), but a user comment in the documentation page says:
getrusage() reports kernel counters that are updated only once application loses context and a switch to kernel space happens. For example on modern Linux server kernels that would mean that getrusage() calls would return information rounded at 10ms, desktop kernels - at 1ms.
getrusage() isn't usable for micro-measurements at all - and getmicrotime(true) might be much more valuable resource.
so that doesn't seem to be an option, is it?
What alternatives do I have?

Define "your" I/O. Technically a move of memory to cpu registers is I/O. Are you saying you want to remove that time from your calculation? (I'm guessing no)
What I'm getting at is that you are most likely looking to profile your code in some way. If you want to measure time not spent reading/writing to files or network sockets, just use microtime and put extra calls around the portions doing I/O. Then you will also get an idea of how much time your I/O is taking. More likely you will find you have some loop taking more time than you expect.
When I profile like this I either use profiling tools in eclipse, or I use a time logger and do some kind of binary search-ish insertion of the time logging into the code. Usually I find some small area of code that is taking 85% of the measured time and do optimization there.
Also as a side note, don't let the perfect become the enemy of the practical. 90% of the time during your process your calls won't be interrupted by some other process and your microtime counts will be close enough.
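To make that concrete, here is a minimal sketch of subtracting I/O time from the wall-clock total by wrapping the I/O portions in their own microtime() calls; the file name and the surrounding work are placeholders:
<?php
$begin = microtime(true);
$io_time = 0;

// ... computation you care about ...

$io_start = microtime(true);
$data = file_get_contents('somefile.txt'); // placeholder I/O call
$io_time += microtime(true) - $io_start;

// ... more computation ...

$total = microtime(true) - $begin;
echo "total: ", $total, " s, I/O: ", $io_time, " s, non-I/O: ", $total - $io_time, " s", PHP_EOL;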

You can use getrusage()
Full example:
<?php
function rutime($ru, $rus, $index) {
    return ($ru["ru_{$index}.tv_sec"] * 1000 + intval($ru["ru_{$index}.tv_usec"] / 1000))
         - ($rus["ru_{$index}.tv_sec"] * 1000 + intval($rus["ru_{$index}.tv_usec"] / 1000));
}

$cpu_before = getrusage();
$ms = microtime(true) * 1000;

sleep(3);                          // idle time: elapsed, but no CPU used

$tab = [];
for ($i = 0; $i < 500000; $i++) {  // busy loop: actually consumes CPU
    $tab[] = $i;
}

$cpu_after = getrusage();

echo "Took " . rutime($cpu_after, $cpu_before, "utime") . " ms CPU usage" . PHP_EOL;
echo "Took " . ((microtime(true) * 1000) - $ms) . " ms total" . PHP_EOL;
Source: https://helpdesk.nodehost.ca/en/article/how-to-calculate-real-cpu-usage-in-a-php-script-o3ceu8/
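The same helper also works for system CPU time: on platforms where getrusage() exposes the ru_stime fields (e.g. Linux), pass "stime" instead of "utime":
echo "Took " . rutime($cpu_after, $cpu_before, "stime") . " ms system CPU" . PHP_EOL;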

Related

Timing attack in PHP

strcmp - what does "binary safe string comparison" mean? Is this comparison safe against timing attacks?
If not, how can I compare two strings in a way that prevents a timing attack? Is comparing hashes of the strings enough? Or must I use some library (or my own code) that compares in constant time?
I've read that timing attacks can be carried out over the web. But does this type of attack exist in the real world? Or can it only be used by a narrow class of attackers (like governments), so that protecting against it over the web is overkill?
"binary safe" means that any bytes can be safely compared with strcmp, not just valid characters in some character set. A quick test confirms that strcmp is not safe against timing attacks:
$nchars = 1000;
$s1 = str_repeat('a', $nchars + 1);
$s2 = str_repeat('a', $nchars) . 'b';
$s3 = 'b' . str_repeat('a', $nchars);
$times = 100000;

$start = microtime(true);
for ($i = 0; $i < $times; $i++) {
    strcmp($s1, $s2);
}
$timeForSameAtStart = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < $times; $i++) {
    strcmp($s1, $s3);
}
$timeForSameAtEnd = microtime(true) - $start;

printf("'b' at the end: %.4f\n'b' at the front: %.4f\n", $timeForSameAtStart, $timeForSameAtEnd);
For me this prints something like 'b' at the end: 0.0634 'b' at the front: 0.0287.
Many other string-based functions in PHP likely suffer from similar issues. Working around this is tricky, especially in PHP where you don't actually know what a lot of functions are really doing at the physical level.
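As a side note not covered by this answer: PHP 5.6 and later ship hash_equals(), which is intended for exactly this case and compares two strings in time that does not depend on where they differ. A minimal sketch, with placeholder values:
$expected = 'known-secret-token';                      // placeholder secret
$given = isset($_GET['token']) ? $_GET['token'] : '';  // placeholder user input
if (hash_equals($expected, (string)$given)) {
    // strings match; the comparison time does not leak where the first difference is
}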
One possible tactic is just sticking a random wait time in your code before you return the answer to the caller/potential attacker. Even better, measure how long it took to check the input data (e.g., with microtime), and then wait a random time minus that amount of time. This is not 100% secure, but it makes attacking the system MUCH harder because, at a minimum, an attacker will have to try each input many times in order to filter out the randomness.
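A minimal sketch of that tactic; check_credentials() and the 5-15 ms padding window are placeholders, not something prescribed by the answer:
function check_with_random_padding($input) {
    $start = microtime(true);
    $ok = check_credentials($input);           // hypothetical check being protected
    $elapsed = microtime(true) - $start;
    // pad to a random total duration so the caller cannot observe the real check time
    $target = mt_rand(5000, 15000) / 1000000;  // 5-15 ms, arbitrary range
    if ($elapsed < $target) {
        usleep((int)(($target - $elapsed) * 1000000));
    }
    return $ok;
}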
The problem with strcmp is that it depends on the implementation. If it compares the strings byte by byte until it reaches a difference or the end of either string, then it is vulnerable to a timing attack.
Now how about hashing?
I have found this Security Stack Exchange question and I believe it has the correct answer for you:
https://security.stackexchange.com/a/46215
Timing attack is a myth.
Let me explain.
The difference in time it takes to validate a text that is almost right versus one that is completely different is a fraction of a second, let's say +/- 0.1 seconds (exaggerated!).
However, the time an attacker actually measures is:
network delay + 0.1 seconds + system delay (it may be busy doing some other task) + other delays.
So no, it's not possible. Even on a local system (zero lag), the measured interval is always unclear.
In a test, let's say the difference between one method and another is 1µs.
So, if we test it and the difference is 1µs, then we could guess part of the secret.
But what if there is another factor, for example the network, the CPU usage at that moment, the CPU's clock speed at that moment, and so on?
Even if we exclude the network, most operating systems are multi-tasking, so the test would have to be done on a single-tasking operating system or a system running a single task, and that is not something you see in the wild. Even embedded systems run multiple threads at the same time.
But let's say we run locally (no network) and do a dry run on a computer that runs only a single task, ours. There is still another problem: modern CPUs don't run at a constant clock speed; it varies depending on load, temperature and other factors.
So, it is only possible if:
it is executed locally and there is no other factor.
it runs as a single task and no other task is running on the server.
the CPU runs at a constant speed.
In other words, it is ABSURD.
Here is the test.
<?php
$text     = '123456789012345678901234567890123456789012345678901234567890123456789012345678901234';
$compare1 = '12345678901234567890123456789012345678901234567890123456789012345678901234567890123x';
$compare2 = '2222222222222222222222222222222222222222222222222222222222222222222222222222222222222';

$a1 = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    if ($compare1 === $text) {
        // do something
    }
}
$a2 = microtime(true);
var_dump($a2 - $a1);

$a1 = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    if ($compare2 === $text) {
        // do something
    }
}
$a2 = microtime(true);
var_dump($a2 - $a1);
It took me 5 minutes to invalidate this hypothesis.
What is tested:
It compares a 512-bit text against two candidates and compares the times.
The test is set up in favor of the hypothesis, so it forces an unrealistic situation where the first candidate is almost identical to the target text (differing only in the last character).
It also excludes latencies and other operations.
(Why 512 bits? Most passwords are hashed to 128 or 256 bits; 512 bits is what we could call safe.)
And here are the results.
one round:
0.021588087081909
0.021672010421753 (long time)
another run:
0.021767854690552
0.022729873657227 (long time)
and another run:
0.021697998046875 (long time)
0.021611213684082
and again
0.021565914154053 (long time)
0.020948171615601
and again
0.021995067596436
0.0224769115448 (long time)
So, even when the test is rigged to prove the point, it fails.
In other words:
you can't find a trend when one of the variables is unknown, and that factor compromises the whole test. I could run it a million times and the result would be the same. And this test, in particular, already avoids variables such as latency, other processes, database access, etc.

Is memcached supposed to take this long?

I've compared these two pieces of code:
Test 1:
$time = microtime(true);
$memcached = new Memcached();
$memcached->addServer('localhost', 11211);
for ($i = 1; $i <= 1000; $i++) {
    $result = $memcached->get('test');
}
echo (microtime(true) - $time) * 1000;
Resulting time: 50.509929656982
Test 2:
$time = microtime(true);
$memcached = new Memcached();
$memcached->addServer('localhost', 11211);
for ($i = 1; $i <= 1000; $i++) {
    $result = 'just me';
}
echo (microtime(true) - $time) * 1000;
Resulting time: 0.3209114074707
Is memcached supposed to take this long?
You did kind of a lot of confusing math there. I don't think your question is clear at all.
It looks like you got 50.509929656982 ms after multiplying the raw microtime() difference by 1,000, which is about 50,510µs. Now, that was for 1,000 requests, so the average request returned in about 50µs.
Now, if you actually want to grab 1,000 items, you'd do it in a single multiget request, which will reduce that per-item average considerably.
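For example, with the same Memcached client, one getMulti() round trip could look roughly like this (the key names are made up):
$keys = array();
for ($i = 1; $i <= 1000; $i++) {
    $keys[] = "item_$i";                    // hypothetical key names
}
$time = microtime(true);
$results = $memcached->getMulti($keys);     // one request/response for all 1000 keys
echo (microtime(true) - $time) * 1000;      // total ms for the whole batch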
If you're asking whether the time to form and transmit a network request, get the network response, parse it and return it is expected to be slower than a no-op assignment in a tight loop, then yes.
If you're asking if 50µs is fast or slow... that's really up to you.
On a typical memcached setup I would expect almost every operation to finish in 1ms or less. I would recommend checking to see what your network latency is and then profiling the client to see how much time the client spends actually processing the operations. If you can rule out those being slow then you might have some kind of server side issue.

Execution time and memory used by the code

I am a PHP coder. I can't find a way to get how much time is taken during execution of the code, i.e. the execution time, and how much memory is used by the code during execution.
I know the PHP INI settings, but they don't give me what I need.
How could I get the execution time and the memory usage from within the code?
I found this one in an example on the generators documentation page:
$start_time = microtime(true);
// do something you want to measure
$end_time = microtime(true);
echo "time: ", bcsub($end_time, $start_time, 4), "\n";
echo "memory (byte): ", memory_get_peak_usage(true), "\n";
http://php.net/manual/en/language.generators.overview.php#114136
I always use microtime() for timing and memory_get_usage() for memory usage; together with my IDE (PHPStorm) and XDEBUG this gives quite a lot of useful info.
http://www.php.net/manual/en/function.microtime.php
and
http://php.net/manual/en/function.memory-get-usage.php
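A minimal sketch combining the two calls; the range() call is just a stand-in for whatever you want to measure:
$t0 = microtime(true);
$m0 = memory_get_usage();                   // bytes currently allocated to the script
$data = range(1, 100000);                   // placeholder workload
echo "time: ", microtime(true) - $t0, " s\n";
echo "memory delta: ", memory_get_usage() - $m0, " bytes\n";
echo "peak: ", memory_get_peak_usage(true), " bytes\n";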
You can use microtime() at the beginning of your script and at the end. The difference will be the execution time.
microtime() returns the timestamp with microseconds, which is enough for your needs.
http://www.php.net/manual/en/function.microtime.php

php time() and microtime() sometimes do not agree

While logging some data using microtime() (using PHP 5), I encountered some values that seemed slightly out of phase with respect to the timestamp of my log file, so I just tried to compare the output of time() and microtime() with a simple script (usleep is just there to limit the data output):
<?php
for ($i = 0; $i < 500; $i++) {
    $microtime = microtime();
    $time = time();
    list($usec, $sec) = explode(" ", $microtime);
    if ((int)$sec > $time) {
        echo $time . ' : ' . $microtime . '<br>';
    }
    usleep(50000);
}
?>
Now, as $microtime is declared before $time, I expect it to be smaller, and nothing should ever be output; however, this obviously is not the case, and every now and then, $time is smaller than the seconds returned from microtime(), as in this example (truncated) output:
1344536674 : 0.15545100 1344536675
1344536675 : 0.15553900 1344536676
1344536676 : 0.15961000 1344536677
1344536677 : 0.16758900 1344536678
Now, this is just a small gap; however, I have observed some series where the difference is (quite) more than a second... so, how is this possible?
If you look at the implementations of time and microtime, you see they're radically different:
time just calls the C time function.
microtime has two implementations: if the C function gettimeofday is available (which it should be on a Linux system), it is called directly. Otherwise PHP pulls off some acrobatics using rusage to calculate the time.
Since the C time call is only precise up to a second, it may also intentionally use a low-fidelity time source.
Furthermore, on modern x86_64 systems, both C functions can be implemented without a system call by looking into certain CPU registers. If you're on a multi-core system, these registers may not exactly match across cores, and that could be a reason.
Another potential reason for the discrepancies is that NTPd (a time-keeping daemon) or some other user-space process is changing the clock. Normally, these kinds of effects should be avoided by adjtime.
All of these rationales are pretty esoteric. To further debug the problem, you should:
Determine OS and CPU architecture (hint: and tell us!)
Try to force the process on one core
Stop any programs that are (or may be) adjusting the system time.
This could be due to the fact that microtime() uses floating point numbers, and therefore rounding errors may occur.
You can specify floating-point precision in php.ini (the precision directive).
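For reference, that directive controls how many significant digits PHP prints for floats; a small sketch, assuming the float form microtime(true):
ini_set('precision', 17);        // default is 14 significant digits
echo microtime(true), PHP_EOL;   // full float timestamp, e.g. 1344536674.1554511 (illustrative)
echo time(), PHP_EOL;            // whole seconds only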

Comparing execution times in PHP

I would like to compare different PHP code to know which one would be executed faster. I am currently using the following code:
<?php
$load_time_1 = 0;
$load_time_2 = 0;
$load_time_3 = 0;

for ($x = 1; $x <= 20000; $x++)
{
    //code 1
    $start_time = microtime(true);
    $i = 1;
    $i++;
    $load_time_1 += (microtime(true) - $start_time);

    //code 2
    $start_time = microtime(true);
    $i = 1;
    $i++;
    $load_time_2 += (microtime(true) - $start_time);

    //code 3
    $start_time = microtime(true);
    $i = 1;
    $i++;
    $load_time_3 += (microtime(true) - $start_time);
}

echo $load_time_1;
echo '<br />';
echo $load_time_2;
echo '<br />';
echo $load_time_3;
?>
I have executed the script several times.
The first result is
0.44057559967041
0.43392467498779
0.43600964546204
The second result is
0.50447297096252
0.48595094680786
0.49943733215332
The third result is
0.5283739566803
0.55247902870178
0.55091571807861
The results look okay, but the problem is that every time I execute this code the result is different. Also, I am comparing the same code three times, on the same machine.
Why would there be a difference in speed while comparing? And is there a way to compare execution times and see the real difference?
There is a thing called observational error.
As long as your differences do not exceed it, all your measurements are just a waste of time.
The only proper way of doing measurements is called profiling, and it means measuring significant parts of the code, not meaningless ones.
Why would there be a difference in speed while comparing?
There are two reasons for this, which are both related to how things that are out of your control are handled by PHP and the operating system.
Firstly, the processor can only do a certain number of operations at any given time. The operating system is responsible for multitasking, i.e. dividing the available cycles among your applications. Since these cycles aren't given at a constant rate, small speed variations are to be expected even with identical PHP commands, because of how processor cycles are allocated.
Secondly, a bigger cause of time variations is PHP's background work. There are many things that are completely hidden from the user, like memory allocation, garbage collection and handling the various namespaces for variables and the like. These operations also take CPU cycles and can run at unexpected times during your script. If garbage collection is performed during the first incrementation but not the second, the first operation takes longer than the second. Sometimes, because of garbage collection, the order in which the tests are performed can also affect the execution time.
Speed testing can be a bit tricky, because unrelated factors (like other applications running in the background) can skew the results of your test. Generally, small speed differences between scripts are hard to tell apart, but when a speed test is run a sufficient number of times, the real results can be seen. For example, if one script is consistently faster than another, that usually indicates it is more efficient in terms of processing speed.
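If garbage-collection noise is a concern for a micro-benchmark like this, one option is to disable the cycle collector around the measured region; a minimal sketch using gc_disable()/gc_enable() (this only suppresses cycle collection, not ordinary memory management):
gc_collect_cycles();   // flush any pending cycles before measuring
gc_disable();          // keep the cycle collector from running mid-benchmark
$start = microtime(true);
for ($x = 1; $x <= 20000; $x++) {
    $i = 1;
    $i++;
}
$elapsed = microtime(true) - $start;
gc_enable();
echo $elapsed;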
The reason the results vary is that there are other things going on at the same time, such as Windows or Linux background tasks and other processes. You will never get an exact result; you're best off running the code over 100 iterations and then dividing the result to find the average time taken, and using that as your figure.
Also, it would be beneficial to create a class that can handle this for you; this way you can use it all the time without having to write the code every time:
try something like this (untested):
class CodeBench
{
    private $benches = array();

    public function __construct() {}

    public function begin($name)
    {
        if (!isset($this->benches[$name])) {
            $this->benches[$name] = array();
        }
        $this->benches[$name]['start'] = array(
            'microtime' => microtime(true)
            /* Other information */
        );
    }

    public function end($name)
    {
        if (!isset($this->benches[$name])) {
            throw new Exception("You must first declare a benchmark for " . $name);
        }
        $this->benches[$name]['end'] = array(
            'microtime' => microtime(true) // float form here too, so the values can be subtracted
            /* Other information */
        );
    }

    public function calculate($name)
    {
        if (!isset($this->benches[$name])) {
            throw new Exception("You must first declare a benchmark for " . $name);
        }
        if (!isset($this->benches[$name]['end'])) {
            throw new Exception("You must first call end() for " . $name);
        }
        // subtract the stored timestamps (not the arrays) and report milliseconds
        $elapsed = $this->benches[$name]['end']['microtime'] - $this->benches[$name]['start']['microtime'];
        return ($elapsed * 1000) . ' ms';
    }
}
And then use like so:
$CB = new CodeBench();

$CB->begin("bench_1");
//Do work:
$CB->end("bench_1");

$CB->begin("bench_2");
//Do work:
$CB->end("bench_2");

echo "First benchmark took: " . $CB->calculate("bench_1");
echo "Second benchmark took: " . $CB->calculate("bench_2");
Computing speeds are never 100% set in stone. PHP is a server-side script, and thus depending on the computing power available to the server, it can take a varying amount of time.
Since you're subtracting from the start time with each step, it is expected that load time 3 will be greater than 2 which will be greater than 1.
