I tried using it like this:
$now = microtime(true);
// cpu expensive code here
echo microtime(true) - $now;
but regardless of what code I enter between these statements, I always get almost the same result, something like 3.0994415283203E-6
What am I doing wrong?
Better solution. Run the code multiple times to average out the operation:
$runs = 500;
$start = microtime(true);
for ($i = 0; $i < $runs; $i++) {
//cpu expensive code here
}
$end = microtime(true);
$elapsed = number_format($end - $start, 4);
$one = number_format(($end - $start) / $runs, 7);
echo "$runs runs in $elapsed seconds, average of $one seconds per call";
3.0994415283203E-6 equates to 0.0000030994415283203.
The E-6 tells you to move the decimal point left six places. E+6 would mean the opposite. As @deceze mentioned, this is called scientific notation.
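If you would rather see the value written out in plain decimal form, you can format the float yourself, for example:
echo number_format(3.0994415283203E-6, 13); // prints 0.0000030994415
// or
printf("%.10f\n", 3.0994415283203E-6);      // prints 0.0000030994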
If you're doing a performance test, it's a good idea to put the code into a 100000 or so iteration loop, and then divide the resulting time by 100000. That way you get a more accurate average.
You're not doing anything wrong, it's just that the code you're timing really only takes a fraction of a second to run.
If you want to prove it, sleep for a few seconds.
From the result it looks like you might be calling microtime() without the optional true argument, but your snippet shows that you are passing it, so I am not 100% sure.
What is the output of this:
$now = microtime(true);
sleep(1);
echo microtime(true) - $now;
PHP always amazes me with how fast it is. Your code seems to be right. Maybe your code really is only taking around 3 microseconds.
You could try making a long loop, something like this:
$x=0;
while ($x<1000000)
{
$x++;
}
Add this code inside your timer. For me, looping 1 million times usually takes about half a second. See if this changes your time.
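For example, wrapping that loop in the microtime() timer from your question might look like this:
$now = microtime(true);
$x = 0;
while ($x < 1000000) {
    $x++; // busy work, just to burn CPU time
}
echo microtime(true) - $now; // should now print a noticeably larger number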
Related
I am trying to start several processes as close to the same time as possible. A cronjob selects all items to be processed from the database a minute prior to the actual time these processes need to start, and then starts a child process for each record found.
As it can take several seconds to run through the database and spawn the child processes, I am trying to create a sleep or usleep delay so that each spawned child process starts as close as possible to the top of the next minute.
In testing, I have noticed some differences between the child start times returned by time() and microtime().
Here's the code I have to create the delay:
$unixtime_now = time();
$unixMicrotime_now = microtime(true);
$timeToStart = $unixtime_now + 2;
$timeToSleep = $timeToStart - $unixMicrotime_now;
$timeToSleep = number_format($timeToSleep, 6);     // keep 6 decimal places
$timeToSleep = str_replace('.', '', $timeToSleep); // strip the decimal point so the value is in whole microseconds
usleep($timeToSleep);
$startedAtMicrotimeFloat = microtime(true);
$startedAtTime = time();
$date1 = date("H:i:s", $startedAtMicrotimeFloat);
$date2 = date("H:i:s", $startedAtTime);
echo "Unixtime Now = $unixtime_now<br />UnixMicroTime Now = $unixMicrotime_now<br />Time to Start = $timeToStart<br />Time to Sleep = $timeToSleep<br />Started at Time = $startedAtTime - $date2<br />Started at MicroTimeFloat = $startedAtMicrotimeFloat - $date1";
Note that I used 2 seconds rather than 60 seconds for the offset to facilitate speed in testing.
2 questions:
1) No matter how many times I run this, the times returned by time() are ALWAYS 1 second before the times returned by microtime(). For example:
Started at Time = 1555765444 - 09:04:04
Started at MicroTimeFloat = 1555765445.0002 - 09:04:05
This is probably a forest for the trees issue, but does anyone have an idea why this is or how I can resolve it?
2) Are there any drawbacks to having a usleep() delay that can be as long as 59 seconds? Would it be better to use sleep() and accept that the child processes might start at 1555765445.9999 - 09:04:05 rather than 1555765445.0002 - 09:04:05, i.e. up to almost a full second later?
Thanks in advance.
Alan
Per Dave's suggestion, I edited the code as follows:
$date1 = date("H:i:s", microtime(true));
$date2 = date("H:i:s", time());
I still get the same result:
Started at Time = 1555876342 - 15:52:22
Started at MicroTimeFloat = 1555876343.0003 - 15:52:23
The microtime() result, which I suspect to be accurate, is later than the time() result by 1.0003 seconds.
The question is simple: I would like to check a database to serve customised content to a site visitor, but fail over and serve a generic page if this lookup takes more than 800 ms to execute. (The target time for the server response is 1000 ms.)
I've seen the set_time_limit function; however, it takes an integer number of seconds as its argument.
My question: is there something similar that can be used with values of less than 1 second?
I'm looking for something like:
void set_time_limit_ms ( int $milliseconds )
set_time_limit_ms (800)
It doesn't exist, but you could emulate it with a tick function:
declare(ticks=1); // or a higher tick count if checking after every statement costs too much
$start = microtime(true);
register_tick_function(function () use ($start) {
    (microtime(true) - $start < 0.8) or die(); // abort the script once 800 ms have elapsed
});
You will not be able to use this function to prevent a query that is running longer than you expected. It only measures the actual script execution time. Here is a bit from the manual:
The set_time_limit() function and the configuration directive
max_execution_time only affect the execution time of the script
itself. Any time spent on activity that happens outside the execution
of the script such as system calls using system(), stream operations,
database queries, etc. is not included when determining the maximum
time that the script has been running. This is not true on Windows
where the measured time is real.
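Since the limit will not interrupt a slow query, one alternative is to time the lookup yourself with microtime() and fall back once it has exceeded the budget. A rough sketch, where fetch_custom_content(), render_generic_page() and render_custom_page() are hypothetical placeholders for your own code; note that this only detects the overrun after the query returns, it does not abort the query itself:
$budget = 0.8; // seconds allowed for the personalised lookup

$start   = microtime(true);
$content = fetch_custom_content();   // hypothetical: runs the database query
$elapsed = microtime(true) - $start;

if ($elapsed > $budget || $content === null) {
    render_generic_page();           // hypothetical: the generic failover page
} else {
    render_custom_page($content);    // hypothetical: the personalised page
}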
Yes there is; I often use microtime(). See:
http://php.net/manual/en/function.microtime.php
<?php
$time_start = microtime(true);
// Sleep for a while
usleep(100);
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Did nothing in $time seconds\n";
?>
What is the best way to measure how long a function takes to run to completion, and then store that number in a variable, in PHP?
The way I would think about doing this is to get the time right before the function is executed and right after, and then take the difference, but I don't know how to get the time in PHP.
Also, I am trying to get the result in tenths and hundredths of a second (e.g. .42 seconds); the function should take less than a second to complete, so if anyone can help me convert it to those units, I'd appreciate that.
You can do that using microtime().
$start = microtime(true);
for ($x=0;$x<10000;$x++) {}
$end = microtime(true);
echo 'It took ' . ($end-$start) . ' seconds!';
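Since you want tenths and hundredths of a second, you can round or format the difference from the example above, for instance:
echo 'It took ' . round($end - $start, 2) . ' seconds!';        // e.g. 0.42
// or, always showing two decimal places:
echo 'It took ' . number_format($end - $start, 2) . ' seconds!';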
See "Example #1 Timing script execution with microtime()" in the documentation.
http://docs.php.net/manual/en/function.microtime.php
$time = time();          // note: time() only has whole-second resolution
call_your_function();
$time = time() - $time;
echo "The function took $time seconds to run\n";
I am currently writing an online game. I have to check whether an event has happened (by checking a timestamp in the database) and, depending on that, execute some actions. I need to check for an event every second.
I wanted to use a cron job, but cron can only run a script once per minute.
My idea was to use cron and loop 60 times in my PHP script, but I don't think this is the best solution.
So what's the best way to run a script every second?
I searched for a better solution, but it seems that my first idea, written cleanly, is the best one.
This is my current code:
<?php
set_time_limit(60);
$start = time();
for ($i = 0; $i < 59; ++$i) {
// Do whatever you want here
time_sleep_until($start + $i + 1);
}
?>
Why not modify the script so that it just repeats the code every second? This will reduce the parsing overhead and be less complicated.
You should probably run the script once, and use a loop with delay to accomplish your desired timing.
The side benefit is that this is more efficient, and you would only have to open resources (ie, databases) once.
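A minimal sketch of that idea, assuming a hypothetical check_events() function that holds the per-second work, and a PDO connection as the resource you only want to open once (the DSN and credentials are just examples):
<?php
$db = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass'); // opened once, outside the loop

$start = time();
for ($i = 0; $i < 59; $i++) {
    check_events($db);                 // hypothetical: your per-second processing
    time_sleep_until($start + $i + 1); // wake up at the next whole second
}
?>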
You shouldn't want this :P. No host will be happy with a cron job that effectively runs every second.
You can save the time it last ran in the database, and on the next run calculate the time between the two runs and do whatever calculations you need. Every second is a very bad idea.
$total_time = 0;
$start_time = microtime(true);
while($total_time < 60)
{
//DoSomethingHere;
echo $total_time."\n";
//sleep(5);
$total_time = microtime(true) - $start_time;
}
Add this to crontab to run every minute.
I'm trying to perform a simple benchmark on a script I have. First I tried to just add something like:
echo 'After Checklist: '. date('h:i:s:u A') ."<br />";
but it just prints out the same time for many of the calls; it isn't until an included script has run that it returns a different time. Is there any way to do this, or something similar? I basically just want to see where the bottleneck is so I can improve performance.
You want to use the microtime function, which returns the current Unix timestamp with microseconds.
From there you can simply subtract the start time from the end time to get the desired result. The trick, however, is to pass true into the function so it returns a float value rather than a string.
An example, as posted on php.net is this:
<?php
$time_start = microtime(true);
// Sleep for a while
usleep(100);
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Did nothing in $time seconds\n";
?>
I think you need microtime:
http://php.net/manual/en/function.microtime.php