I've really been wanting to test the speed of regexes and the like, and php.net has this example:
$time_start = microtime(true);
// Sleep for a while
usleep(100); // Or anything for that matter..
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Did nothing in $time seconds\n";
EDIT: What I meant was to run a large loop of function calls in place of usleep(). It always shows a seemingly random number that is always under 1. That's nothing worth calling a benchmark!
Timers are like that. Wrap a loop around the function. Run it 10^6 times if you want to measure microseconds, 10^3 times if you want milliseconds. Don't expect to get more than 2 decimal digits of precision.
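The advice above can be sketched like this (the regex and the iteration count are placeholders; adjust both for your own code under test):

```php
<?php
// Benchmark a function by running it many times and dividing the total
// elapsed time by the number of iterations.
$iterations = 100000; // 10^5 runs; raise this if the per-call time is tiny

$time_start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    preg_match('/\d+/', 'abc123'); // the code under test (placeholder regex)
}
$time_end = microtime(true);

$perCall = ($time_end - $time_start) / $iterations;
printf("%.4f microseconds per call\n", $perCall * 1e6);
```

With enough iterations the total elapsed time is well above the timer's resolution, so the per-call average becomes meaningful.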
The usleep function takes its time in microseconds; you are specifying a very low value, so try increasing it:
usleep(2000000); // wait for two seconds
The usleep function is also known to be unreliable on Windows systems.
Or you can simply use the sleep function, with the number of seconds as its parameter, e.g.:
sleep(10); // wait for 10 seconds
Cannot reproduce, and no one has a faster computer than me. 16 threads of execution, eight cores, i7 architecture. Your PHP build or OS must be suboptimal. This is running your code exactly.
`--> php tmp.php
Did nothing in 0.00011587142944336 seconds
.-(/tmp)---------------------------------------------------(pestilence#pandemic)-
This is with usleep(1000000)... (1 second)
`--> php tmp.php
Did nothing in 1.0000479221344 seconds
.-(/tmp)---------------------------------------------------(pestilence#pandemic)-
FYI, I was using PHP 5.3.0, not that it makes any difference (in this case) past 5.0.
Is your code taking less than a second to execute? If so, that'd explain your benchmarks. When I profile something for an article, I run my routine tens of thousands of times... sometimes millions. Computers are fast these days.
As far as the "random" part goes, this is why I take several samples of large iterations and average them together. It doesn't hurt to throw the standard deviation in there too. Also, remember not to stream MP3s or play video while testing. :p
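The sampling approach described above can be sketched as follows (the sample count, iteration count, and workload are all arbitrary placeholders):

```php
<?php
// Take several samples of a large iteration count, then report the
// mean and standard deviation across samples.
function benchmark(callable $fn, int $iterations): float {
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $fn();
    }
    return microtime(true) - $start;
}

$samples = [];
for ($s = 0; $s < 5; $s++) {
    $samples[] = benchmark(function () {
        strrev('benchmark me'); // placeholder workload
    }, 100000);
}

$mean = array_sum($samples) / count($samples);
$variance = 0.0;
foreach ($samples as $t) {
    $variance += ($t - $mean) ** 2;
}
$stddev = sqrt($variance / count($samples));
printf("mean %.4fs, stddev %.4fs\n", $mean, $stddev);
```

A large standard deviation relative to the mean is a sign that something else on the machine is interfering with the measurement.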
I'm using the DigitalOcean API to create droplets when my web application needs the extra resources. Because of the way DigitalOcean charges (min. one hour increments), I'd like to keep a created server on for an hour so it's available for new tasks after the initial task is completed.
I'm thinking about formatting my script this way:
<?php
createDroplet($dropletid);
$time = time();
// run resource heavy task
sleep($time + 3599);
deleteDroplet($dropletid);
Is this the best way to achieve this?
It doesn't look like a good idea, but the code is so simple, nothing can compete with that. You would need to make sure your script can run, at least, for that long.
Note that sleep() should not have $time as an argument. It sleeps for the given number of seconds. $time contains many, many seconds.
I am worried that the script might get interrupted, and you will never delete the droplet. Not much you can do about that, given this script.
Also, the sleep() itself might get interrupted, causing it to sleep much shorter than you want. A better sleep would be:
$remainingTime = 3590;
do {
    $remainingTime = sleep($remainingTime);
} while ($remainingTime > 0);
This will catch an interrupt of sleep(). This is under the assumption that FALSE is -1. See the manual on sleep(): http://php.net/manual/en/function.sleep.php
Then there's the problem that you want to sleep for exactly 3599 seconds, so that you're only charged one hour. I wouldn't make it so close to one hour. You have to leave some time for DigitalOcean to execute stuff and log the time. I would start with 3590 seconds and make sure that always works.
Finally: What are the alternatives? Clearly this could be a cron job. How would that work? Suppose you execute a PHP script every minute, and you have a database entry that tells you which resource to allocate at a certain start time and deallocate at a certain expire time. Then that script could do this for you with an accuracy of about a minute, which should be enough. Even if the server crashes and restarts, as long as the database is intact and the script runs again, everything should go as planned. I know, this is far more work to implement, but it is the better way to do it.
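A minimal sketch of that cron-based alternative: a leases table records when each droplet expires, and a script run every minute deletes the expired ones. The schema, the deleteDroplet() stub, and the in-memory SQLite database are all made up for illustration; in reality you'd point PDO at your application's database and call the real DigitalOcean helper.

```php
<?php
// Set up a stand-in database with one expired and one live lease.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE droplet_leases (droplet_id INTEGER, expires_at INTEGER)');
$pdo->exec('INSERT INTO droplet_leases VALUES (1, ' . (time() - 10) . ')');   // expired
$pdo->exec('INSERT INTO droplet_leases VALUES (2, ' . (time() + 3590) . ')'); // still live

$deleted = [];
function deleteDroplet($id) { // stand-in for the question's helper
    global $deleted;
    $deleted[] = $id;
}

// This part is what the cron job would run once a minute:
$stmt = $pdo->prepare('SELECT droplet_id FROM droplet_leases WHERE expires_at <= ?');
$stmt->execute([time()]);
foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $id) {
    deleteDroplet($id);
    $pdo->prepare('DELETE FROM droplet_leases WHERE droplet_id = ?')->execute([$id]);
}
```

Because the state lives in the database rather than in a sleeping process, a crash between runs loses nothing: the next cron invocation picks up the same expired leases.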
I am interested to know if there is a better way to time operations. Relying on a timer, as microtime seems to, or on any other method that reads the OS clock, will clearly not give as good an estimate.
I am interested to know in general what is the precision of timing operations nowadays in a linux based operating system, say Red Hat Linux.
Now to be more specific: I recently wrote an algorithm and tried to measure the time it took. My PHP code worked like this:
$start = microtime(true);
$result = myTimeConsumingProcess();
$end= microtime(true);
$timeInMilliSec = 1000.0 * ($end - $start);
It turns out that sometimes the process took 0 ms, and on a different execution with precisely the same data it took 9.9xxx milliseconds.
The explanation for this as I can imagine is that time is measured using timer interrupts. If the execution starts and finishes before the next timer interrupt updates the time of the OS, we are bound to get 0 sec as the difference.
Back in my early DOS days, the timer interrupt fired about 18.2 times per second (roughly every 55 ms). That no longer seems to be the case; with better machines and upgraded OSes we are now running timers faster than that.
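One way around wall-clock granularity, assuming PHP 7.3 or later, is hrtime(), which reads a monotonic high-resolution clock in nanoseconds instead of the OS wall-clock time that microtime() reports:

```php
<?php
// hrtime(true) returns a monotonic timestamp in nanoseconds (PHP 7.3+),
// so it is unaffected by NTP adjustments or coarse timer ticks.
$start = hrtime(true);
usleep(10000); // do some work; here, sleep ~10 ms
$elapsedNs = hrtime(true) - $start;
printf("elapsed: %.3f ms\n", $elapsedNs / 1e6);
```

Even with a high-resolution clock, very short operations should still be measured in a loop and averaged, since scheduler noise dwarfs a single sub-millisecond run.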
I wanted to know if there is a way to prevent a PHP timeout from occurring once a part of the code has started being processed.
Let me explain:
I have a script that takes way too long to run for it to be reasonable even to use:
ini_set('max_execution_time', 0);
set_time_limit(0);
The code is built to allow it to time out and restart where it left off, but I have 2 lines of code that need to be executed together for that to happen:
$email->save();
$this->_setDoneWebsiteId($user['id'], $websiteId);
Is there a way in PHP to tell it that it has to finish executing them both, even if the timeout fires?
I got an idea as I was writing this: I could use a timeout of 120 seconds, start a timer, and stop if there are less than 20 seconds left before the timeout. I just wanted to know if I was missing something.
Thank you for your inputs.
If your code is not asynchronous and some task takes more than 100 seconds, you won't be able to check the execution time in between.
I see only one true HACK (be careful; test it with php -f in a console so you're able to kill the process):
<?php
// Any preparations here
register_shutdown_function(function () {
    if (error_get_last()) { // there was a "maximum execution time exceeded" error
        // Call the rest of your system
        // But note: you have no stack, no context, no valid previous frame - nothing!
    }
});
One thing you could do is use the date/time features to monitor your average execution time (it kind of builds up with each execution, assuming you have a loop).
If the average time is longer than however much time you have left (counting how much time has been used already against your maximum execution time), you trigger a restart and let it pick up from where it left off.
However, if you are experiencing timeouts, you might want to look at ways to make your code more efficient.
No, you can't abort the timeout handler, but I'd say that 20 seconds is quite a long time if you're not parsing something huge. However, you can do the following:
Get time of the execution start ($start = $_SERVER['REQUEST_TIME'] or just $start = microtime(true); in the beginning of your controller).
Assert that the execution time is less than 100 seconds before running $email->save(), or halt/skip code if necessary. This is as easy as if (microtime(true) - $start < 100) { $email->save()... }. You would want more abstraction on this, however.
(if necessary) check the execution time after both methods have run and halt execution if it has timed out.
This will require time_limit set to a big value, or even turned off, and tedious work to prevent overly long execution. In most cases the timeout is your friend that just tells you your work is taking too much time and you should rethink your architecture and inner processes; if you're running out of 120 seconds, you'll probably want to put that work in a daemon.
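The elapsed-time check in the steps above can be wrapped in a tiny helper so critical sections ask for budget explicitly. The class and method names here are made up for illustration; 100 seconds is the assumed overall budget and 20 seconds the margin reserved for the critical pair of calls:

```php
<?php
// Minimal sketch of a time-budget guard: record the start time once,
// then ask before each critical step whether enough budget remains.
class TimeBudget {
    private float $start;

    public function __construct(private float $limitSeconds) {
        $this->start = microtime(true);
    }

    public function remaining(): float {
        return $this->limitSeconds - (microtime(true) - $this->start);
    }

    public function hasAtLeast(float $seconds): bool {
        return $this->remaining() >= $seconds;
    }
}

$budget = new TimeBudget(100.0);
if ($budget->hasAtLeast(20.0)) {
    // Safe to run the critical pair here, e.g.
    // $email->save(); $this->_setDoneWebsiteId($user['id'], $websiteId);
}
```

Centralizing the check in one object keeps the margin in one place instead of scattering magic numbers through the loop body.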
Thank you for your input; as I thought, the timer solution is the best way.
What I ended up doing was the following. This is not the actual code, as it's too long to make a good answer, but it's the general idea:
ini_set('max_execution_time', 180);
set_time_limit(180);
$startTime = date('U'); // gives me the current timestamp
while (true) {
    // gather all the data I need for the email
    // I know it's overkill to break the loop with 1 min remaining,
    // but I really don't want to take chances
    if (date('U') < ($startTime + 120)) {
        $email->save();
        $this->_setDoneWebsiteId($user['id'], $websiteId);
    } else {
        return false;
    }
}
I could not use the idea of measuring the average time of each cycle, as it varies too much.
I could have made the code more efficient, but the number of cycles is based on the number of users and websites in the framework. It should grow big enough to need multiple runs to complete anyway.
I'll have to do some research to understand register_shutdown_function, but I will look into it.
Again Thank you!
Is the only difference between the sleep function and the usleep one that the parameter of the first is in seconds and the other's is in microseconds? Is there any other difference?
Another thing, please: I'll be using these functions in loops. Will I run into any problems doing that?
From a review of the PHP documentation on usleep and that of sleep, you'll notice that there are 2 differences:
The argument for usleep is an integer representing micro seconds (a micro second is one millionth of a second), while the argument for sleep is an integer representing seconds.
sleep returns "zero on success, or FALSE on error." usleep has no return value. Here are more details on the return value for sleep:
If the call was interrupted by a signal, sleep() returns a non-zero
value. On Windows, this value will always be 192 (the value of the
WAIT_IO_COMPLETION constant within the Windows API). On other
platforms, the return value will be the number of seconds left to
sleep.
In general, you should be ok using these functions within loops. That being said, though, the more important questions to ask are: Why do you need a solution that depends on halting execution for a certain amount of time? Is this really a good solution to your problem, or is it a hack-fix for some strange bug or edge case that you just want to have go away?
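For what it's worth, a typical legitimate use of sleeping in a loop is polling with a back-off so the loop doesn't spin at 100% CPU. The readiness check below is a placeholder standing in for whatever condition you'd actually wait on:

```php
<?php
// Poll for a condition, pausing 250 ms between checks so the loop
// doesn't busy-wait. The readiness check is a placeholder.
$attempts = 0;
$ready = false;
while (!$ready && $attempts < 5) {
    $ready = ($attempts >= 2); // placeholder: pretend it's ready on try 3
    if (!$ready) {
        usleep(250000); // 250,000 microseconds = 0.25 s
    }
    $attempts++;
}
```

Capping the attempt count matters: an unbounded poll loop combined with a sleep is exactly the kind of halting-based design the answer above asks you to question.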
usleep — Delay execution in microseconds.
Halt time in micro seconds. A micro second is one millionth of a second.
`sleep` — Delay execution
Delays the program execution for the given number of seconds.
I don't see any other difference between these two functions.
sleep() allows your code to sleep in seconds.
sleep(5); // sleeps for 5 seconds
usleep() does the same, but with respect to microseconds.
usleep(2500000); // sleeps for 2.5 seconds
Other than that, I think they're identical.
sleep($n) == usleep($n * 1000000)
usleep(25000) only sleeps for 0.025 seconds.
I have a script that, when run, consumes 17 MB of memory (logged using memory_get_peak_usage()).
The script is run 1 million times per day.
Total daily memory consumption: 17 million MB
86400 seconds in a day.
17000000 / 86400 = 196.76
Assumption: running this script 1 million times each day will require at least 196.76 MB of dedicated memory.
Is my assumption correct?
If the script ran 1,000,000 copies at the same time, then you would get 17 million MB; but since each run releases its memory when it completes, you don't add its usage to a running total.
You need to know how many copies run at the same time and multiply that number by 17 MB. That would be the max memory usage.
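A back-of-envelope version of that calculation, using the question's numbers plus one assumption (how long a single run takes, which the question doesn't state):

```php
<?php
// Peak memory is per-script usage times peak concurrency, not a daily sum.
$perScriptMb   = 17;      // from memory_get_peak_usage(), per the question
$runsPerDay    = 1000000;
$avgRuntimeSec = 0.5;     // assumed: how long one run takes

// Average concurrent runs = arrival rate * runtime (Little's law).
$arrivalRate   = $runsPerDay / 86400;      // runs per second
$avgConcurrent = $arrivalRate * $avgRuntimeSec;
$avgMemoryMb   = $avgConcurrent * $perScriptMb;
printf("~%.1f concurrent runs, ~%.0f MB on average\n", $avgConcurrent, $avgMemoryMb);
```

Note this gives an average; if the million runs arrive in bursts rather than spread evenly, peak concurrency (and peak memory) will be far higher.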
Not entirely correct; the first hundred times your script is executed, it'll probably all fit into memory fine; so, the first two minutes or so might go as expected. But, once you push your computer into swap, your computer will spend so much time handling swap, that the next 999,800 executions might go significantly slower than you'd expect. And, as they all start competing for disk bandwidth, it will get much worse the longer it runs.
I'm also not sure about the use of the php memory_get_peak_usage() function; it is an 'internal' view of the memory the program requires, not the view from the operating system's perspective. It might be significantly worse. (Perhaps the interpreter requires 20 megs RSS just to run a hello-world. Perhaps not.)
I'm not sure what would be the best way forward for your application: maybe it could be a single long-lived process, that handles events as they are posted, and returns results. That might be able to run in significantly less memory space. Maybe the results don't actually change every 0.0005 seconds, and you could cache the results for one second, and only run it 86400 times per day. Or maybe you need to buy a few more machines. :)
Yes, but what if it gets called multiple times at the same time? You have multiple threads running simultaneously.
EDIT: Also, why is that script running a million times / day? (unless you got a huge website)
I don't think this calculation will give correct results, because you also need to consider other factors, such as:
How long the script runs.
The number of scripts running at a time.
How the script invocations are distributed over time.
And in your calculation, what's the point of dividing by 86,400 seconds? Why not hours or milliseconds? To me, the calculation seems pretty meaningless.