php loop iterations taking too long to complete - php

The following loop is taking 13 seconds to run on a Windows i7 @ 3.4GHz with 16GB RAM.
The script is run from the command line: php loop.php
$start = microtime(true);
for ($i = 0; $i <= 150000; $i++) {
    $running_time = date('i:s', microtime(true) - $start);
    echo "$i - $running_time\n";
}
If I take out the echo, it takes less than a second. Why?

This has to do with the lack of buffering of your output. If you run this in a Windows console, you'll find that the console itself is your bottleneck.
To prove it, hold the scroll bar and watch your program hang until you release it.
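One way to confirm this on the same loop is to batch the writes so the console is hit far less often; the sketch below (the 1000-iteration batch size is an arbitrary choice, and redirecting output to a file with php loop.php > out.txt has a similar effect) keeps the original logic but buffers the output in a string:
$start = microtime(true);
$buffer = '';
for ($i = 0; $i <= 150000; $i++) {
    $running_time = date('i:s', (int) (microtime(true) - $start));
    $buffer .= "$i - $running_time\n";
    // Hit the console only once every 1000 iterations instead of on every pass
    if ($i % 1000 === 0) {
        echo $buffer;
        $buffer = '';
    }
}
echo $buffer; // write out whatever is left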

Related

Show an indicator on CLI while query runs

I have a PHP script running via CLI that's working well, but it runs a couple long queries (2-5 minutes) that would ideally give you some idea that something is still happening. When iterating through results, I do have a function running that updates the progress, but when PHP is waiting for a query to return, silence ensues.
I don't need to know anything about when the query will complete, but some sort of indication on the CLI that it's doing something would be a huge gain (a blinking "...", or something). Possible?
I've found that using carriage returns (\r) without newlines is extremely helpful. They reset the output to the beginning of the line but do not move down a line, allowing you to overwrite the current text.
Please note that you'll need to pad the line to the full length, otherwise previous characters will still linger. For example:
$iteration = 0;
while (/* wait condition */) {
    printf("Process still running%-5s\r", str_repeat('.', $iteration % 5));
    sleep(1);
    $iteration++;
}
echo "\n";
echo "Task completed!";
If you're using a for loop for processing, something like this would be much more useful:
// Display progress every X iterations
$update_interval = 1000000;
for ($i = 0; $i < $massive_number; $i++) {
    // Do processing
    if ($i % $update_interval == 0) {
        printf("Progress: %.2f%%\r", (100 * $i / $massive_number));
    }
}

Inconsistent loading time for JS and PHP

I have a PHP script being loaded by JS through jQuery's $.ajax.
I measured the execution time of the PHP script using:
$start = microtime(); // top most part of code
// all other processes that includes AES decryption
$end = microtime(); // bottom part of code
file_put_contents('LOG.TXT','TIME IT TOOK: '.($end-$start)."\n",FILE_APPEND);
The measurement came out at less than 1 second. There are no prepend/append PHP scripts.
In the JS $.ajax code, I have measured the execution time by:
success: function(response) {
    console.log(date('g:i:s a') + ' time received\n');
    // all other processes including AES decryption
    console.log(date('g:i:s a') + ' time processed\n');
}
The time is the same for the time received and the time processed.
However, when I check the Chrome Developer Tools, it reports that the PHP script took about 8 seconds to load.
What could be wrong in how I measured these things?
I'm certain that PHP is loading fast but how come Chrome reports that it took more than 8 seconds?
I'm using localhost and my web server is fast and this is the only time I encountered this problem. All other AJAX calls are fast.
In the PHP section, make sure you're using microtime(true) so that you're working with floating-point numbers instead of strings. Without the true argument, microtime() returns a string such as "0.12345600 1234567890", so subtracting two of those values yields a meaningless result.
Example: http://ideone.com/FWkjF2
<?php
// Wrong
$start = microtime();
sleep(3);
$stop = microtime();
echo ($stop - $start) . PHP_EOL; // Prints 8.000000000008E-5
// Correct
$start = microtime(true);
sleep(3);
$stop = microtime(true);
echo ($stop - $start) . PHP_EOL; // Prints 3.0000791549683
?>

php exec how to calculate time?

I'm trying to implement some kind of 'multiprocessing' in PHP for my task. The task is to check the status of every device in our network.
For that I decided to use exec in a loop, and it works. But I don't know whether it works fine or not:
$command = "php scan_part.php $i > null &";
exec($command);
This calls scan_part.php as many times as I need, but the question is: how can I calculate the time needed for all of my scan_part.php's to be executed?
Please help me, I'm stuck!
Use proc_open to launch your script instead of exec. proc_open lets you wait until a process is done through proc_close, which blocks until the program terminates.
$starttime = microtime(true);
$processes = array();

// stdin, stdout, stderr - take no input, save no output
$descriptors = array(
    0 => array("file", "/dev/null", 'r'),
    1 => array("file", "/dev/null", 'w'),
    2 => array("file", "/dev/null", 'w'));

while ($your_condition)
{
    $command = "php scan_part.php $i"; // no pipe redirection, no background mark
    $processes[] = proc_open($command, $descriptors, $pipes);
}

// proc_close will block until the program terminates, or will return immediately
// if the program has already terminated
foreach ($processes as $process) {
    proc_close($process);
}

$endtime = microtime(true);
$delta = $endtime - $starttime;
Consider using the microtime() function. Example #2 on this page is useful:
http://php.net/manual/en/function.microtime.php
EDIT: This approach will not work because, as pointed out in the comments, the process is launched in the background. I am unable to suggest a better approach.
Does the scan_part.php script output anything that the calling script expects/accepts as input? If not, you can use /usr/bin/time to calculate the run time of the scan_part script:
exec('/usr/bin/time /usr/bin/php scan_part.php &');
That returns something like this:
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 4192maxresident)k
0inputs+0outputs (0major+318minor)pagefaults 0swaps
which you can parse for the times. This would let you get the execution time without having to modify scan_parts.
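If you go that route, a rough sketch of capturing and parsing the elapsed field could look like the following. Two assumptions worth flagging: the command has to run in the foreground (no trailing &) so exec() can collect the output, and /usr/bin/time writes its report to stderr in the default GNU format shown above:
// Capture only /usr/bin/time's stderr report; discard scan_part's own stdout
$output = array();
exec('/usr/bin/time /usr/bin/php scan_part.php 2>&1 >/dev/null', $output);
$report = implode(' ', $output);

// Pull the "M:SS.ss elapsed" field out of the report
if (preg_match('/(\d+):([\d.]+)elapsed/', $report, $m)) {
    $elapsed = (int)$m[1] * 60 + (float)$m[2];
    echo "scan_part.php took {$elapsed} seconds\n";
}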
On the other hand, if scan_parts doesn't normally produce output, and you can/will modify it, then wrapping the contents of scan_parts with a couple microtime(true) calls and outputting the difference would be simpler.
$start = microtime(true);
// ... do the core of scan_parts here ...
echo microtime(true) - $start; // output the runtime here
exit();
This gives you a SINGLE time value to return, saving you the parsing overhead of the /usr/bin/time approach.
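As a rough illustration (assuming scan_part.php is run in the foreground and echoes nothing but that runtime; the $i placeholder comes from the original question), the caller can read the value straight back, since exec() returns the last line of output:
$runtime = (float) exec("php scan_part.php $i");
echo "scan_part.php #$i ran for {$runtime} seconds\n";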
You can't request the execution time on a normal operating system (it's a system call and handled by the kernel); for that you would need a real-time OS.
But you can approximate it by recording, for each instance, how much time it took to run, then calculating the average over that dataset.
microtime can give you the current time. Do this like so:
while (/** list proc */)
{
    $starttime = microtime(true);
    proc_open( /** some syntax here */ );
    $endtime = microtime(true) - $starttime;
    // store time here
}
You may use microtime(), and if you want to benchmark a whole application you may also look into Xdebug; it will give you detailed reports on the performance of your application.
On Windows, the latest version of WampServer comes packaged with Xdebug and webgrind. On Linux the installation is also quite simple.
See: http://code.google.com/p/webgrind/
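If you do go the Xdebug route, enabling its profiler is just a couple of php.ini settings (these are the Xdebug 2 setting names, which is what shipped with WampServer at the time; the output directory here is only an example), and webgrind then reads the cachegrind files written to that directory:
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp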

PHP Output Buffer Benchmark (microtime inaccurate when used with usleep?)

I'm posting a strange behavior that can be reproduced (at least on Apache 2 + PHP 5).
I don't know if I'm doing something wrong, but let me explain what I'm trying to achieve.
I need to send chunks of binary data (let's say 30 of them) and analyze the average Kbit/s at the end:
I sum each chunk's output time and each chunk's size, and perform my Kbit/s calculation at the end.
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--)
{
    $var .= pack('N', 85985);
}

// get the size, prepare the memory.
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;

// I send my chunk 30 times
for ($i = 0; $i < 30; $i++)
{
    // start time
    $t = microtime(true);
    echo $var."\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time

    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?

    // add this chunk's bench to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;
}

// total result
echo "\n total: ".(($tt_sent * 8) / ($tt_time) / 1024)."\n";
?>
In the example above, it works so far (on localhost, it oscillates between 7,000 and 10,000 Kbit/s across different tests).
Now, let's say I want to shape the transmission, because I know that the client will have enough data in one chunk to keep it busy for a second.
I decide to use usleep(1000000) to mark a pause between chunk transmissions.
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--)
{
    $var .= pack('N', 85985);
}

// get the size, prepare the memory.
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;

// I send my chunk 30 times
for ($i = 0; $i < 30; $i++)
{
    // start time
    $t = microtime(true);
    echo $var."\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time

    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?

    // add this chunk's bench to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;

    usleep(1000000);
}

// total result
echo "\n total: ".(($tt_sent * 8) / ($tt_time) / 1024)."\n";
?>
In this last example, I don't know why, but the calculated bandwidth can jump from 72,000 Kbit/s to 1,200,000; it is totally inaccurate / irrelevant.
Part of the problem is that the time measured to output my chunk is ridiculously low each time a chunk is sent (after the first usleep).
Am I doing something wrong? Is the output buffer not synchronous?
I'm not sure how definitive these tests are, but I found it interesting. On my box I'm averaging around 170,000 kb/s. From a networked box this number goes up to around 280,000 kb/s. I guess we have to assume microtime(true) is fairly accurate, even though I read it's operating-system dependent. Are you on a Linux-based system?
The real question is: how do we calculate the kilobits transferred in a one-second period? I try to project how many chunks can be sent in one second and then store the calculated kb/s to be averaged at the end. I added a sleep(1) before flush(), and this results in a negative kb/s, as is to be expected.
Something doesn't feel right, and I would be interested in knowing if you have improved your testing method. Good luck!
<?php
// build my binary chunk
$var = '';
$o = 10000;

// Alternative to get actual bytes
$m1 = memory_get_usage();
while ($o--)
{
    $var .= pack('N', 85985);
}
$m2 = memory_get_usage();

// Your size estimate
$size = strlen($var);

// Calculate alternative bytes
$bytes = ($m2 - $m1); // 40108

// Convert to kilobytes: 1 kilobyte = 1024 bytes
$kilobytes = $size / 1024;

// Convert to kilobits: 1 byte = 8 bits
$kilobits = $kilobytes * 8;

// Display our data for the record
echo "<pre>size: $size</pre>";
echo "<pre>bytes: $bytes</pre>";
echo "<pre>kilobytes: $kilobytes</pre>";
echo "<pre>kilobits: $kilobits</pre>";
echo "<hr />";

// The test count
$count = 100;

// Initialize the total kb/s variable
$total = 0;

for ($i = 0; $i < $count; $i++)
{
    // Start time
    $start = microtime(true);
    // Use an HTML comment to prevent the browser from parsing the data
    echo "<!-- $var -->";
    // End time
    $end = microtime(true);
    // Seconds it took to output the binary chunk
    $seconds = $end - $start;
    // Calculate how many chunks we could send in 1 second
    $chunks = (1 / $seconds);
    // Calculate the kilobits per second
    $kbs = $chunks * $kilobits;
    // Store the kb/s; we'll average them all outside the loop
    $total += $kbs;
}

// Process the average (data generation) kilobits per second
$average = $total / $count;
echo "<h4>Average kbit/s: $average</h4>";
Analysis
Even though I arrive at some arbitrary value in the test, it is still a value that can be measured. Using a networked computer adds insight into what is really going on. I would have thought the localhost machine would have a higher value than the networked box, but tests prove otherwise in a big way. When on localhost we have to both send the raw binary data and receive it. This of course shows that two threads are sharing CPU cycles, and therefore the supposed kb/s value is in fact lower when testing in a browser on the same machine. We are therefore really measuring CPU cycles, and we obtain higher values when the server is allowed to just be a server.
Some interesting things start to show up when you increase the test count to 1000. First, don't make the browser parse the data; it takes a lot of CPU to attempt to render raw data at such high test counts. We can manually watch what is going on with, say, System Monitor and Task Manager. In my case the local box is a Linux server and the networked box is XP. You can obtain some real kb/s speeds this way, and it makes it obvious that we are dynamically serving up data using mainly the CPU and the network interfaces. The server doesn't replicate the data, so no matter how high we set the test count, we only need 40 kilobytes of memory space. So 40 kilobytes can generate 40 megabytes dynamically at 1000 test cases and 400 MB at 10000 cases.
I crashed Firefox on XP with a "virtual memory too low" error after running the 10000-case test several times. System Monitor on the Linux server showed some understandable spikes in CPU and network, but overall it pushed out a large amount of data really quickly and had plenty of room to spare. Running 10000 cases on Linux a few times actually spun up the swap drive and pegged the server's CPU. The most interesting fact, though, is that the values I obtained above only changed when I was both receiving in Firefox and transmitting in Apache while testing locally. I practically locked up the XP box, yet my network value of ~280,000 kb/s did not change on the printout.
Conclusion:
The kb/s value we arrive at above is virtually useless, other than to prove that it's useless. The test itself, however, shows some interesting things. At high test counts I believe I could actually see some physical buffering going on in both the server and the client. Our test script actually dumps the data to Apache and then gets released to continue its work; Apache handles the details of the transfer, of course. This is nice to know, but it proves we can't measure the actual transmission rate from our server to the browser this way. We can measure our server's data-generation rate, I guess, if that's meaningful in some way. Guess what! Flushing actually slowed down the speeds. There's a penalty for flushing. In this case there is no reason for it, and removing flush() actually speeds up our data-generation rate. Since we are not dealing with networking, our value above is actually more meaningful kept as kilobytes. It's useless anyway, so I'm not changing it.

Run a PHP script every second using CLI

I have a dedicated server running CentOS with a Parallels Plesk panel. I need to run a PHP script every second to update my database. There is no alternative, time-wise: it needs to be updated every second.
I can find my script using the URL http://www.somesite.com/phpfile.php?key=123.
Can the file be executed locally every second? Like phpfile.php?
Update:
It has been a few months since I added this question. I ended up using the following code:
#!/user/bin/php
<?php
$start = microtime(true);
set_time_limit(60);
for ($i = 0; $i < 59; ++$i) {
    doMyThings();
    time_sleep_until($start + $i + 1);
}
?>
My cronjob is set to every minute. I have been running this for some time now in a test environment, and it has worked great. It is really super fast, and I see no increase in CPU or memory usage.
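For reference, the cron side of that setup is just a once-a-minute entry pointing at the wrapper script (the path here is hypothetical):
* * * * * /usr/bin/php /path/to/phpfile.php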
You could actually do it in PHP. Write one program which will run for 59 seconds, doing your checks every second, and then terminates. Combine this with a cron job which runs that process every minute and hey presto.
One approach is this:
set_time_limit(60);
for ($i = 0; $i < 59; ++$i) {
    doMyThings();
    sleep(1);
}
The only thing you'd probably have to watch out for is the running time of your doMyThings() function. Even if that's a fraction of a second, then over 60 iterations it could add up and cause problems. If you're running PHP >= 5.1 (or >= 5.3 on Windows), you could use time_sleep_until():
$start = microtime(true);
set_time_limit(60);
for ($i = 0; $i < 59; ++$i) {
    doMyThings();
    time_sleep_until($start + $i + 1);
}
Have you thought about using "watch"?
watch -n 1 /path/to/phpfile.php
Just start it once and it will keep going. This way it is immune to PHP crashing (not that it happens, but you never know). You can even add this to inittab to make it completely bullet-proof.
Why not run a cron job to do this, and in the PHP file loop 60 times with a short sleep? That is the way I have overcome this to run a PHP script 5 times a minute.
To set up your file to be run as a script, add the path to your PHP interpreter on the first line, as you would for a Perl script:
#!/user/bin/php
<?php
$i = 0;
while ($i < 60) {
    sleep(1);
    // do stuff
    $i++;
}
?>
This is a simple upgraded version of nickf's second solution, which allows you to specify the desired interval (in seconds) between executions, including the execution time itself.
$duration = 60; // Duration of the loop in seconds
$sleep = 5;     // Interval between executions (including the work itself)

for ($i = 0; $i < floor($duration / $sleep); ++$i) {
    $start = microtime(true);
    // Do your stuff here
    time_sleep_until($start + $sleep);
}
I noticed that the OP edited the question to give his solution. That solution did not work on my box (the path to PHP is incorrect, and the PHP syntax is not correct).
This version worked (save it as whatever.sh and chmod +x whatever.sh so it can execute):
#!/usr/bin/php
<?php
$start = microtime(true);
set_time_limit(60);
for ($i = 0; $i < 59; ++$i) {
    echo $i;
    time_sleep_until($start + $i + 1);
}
?>
You can run your infinite-loop script with the nohup command on your server, which keeps working even after you log out of the system. Only a restart or a physical shutdown can destroy this process. Don't forget to add sleep(1) in your PHP script.
nohup php /path/to/you/script.php
In case you don't need to use the console while it's working, it will write its output to a nohup.out file in your working directory (use the pwd command to find it).
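If you later want to watch that output without stopping the process, you can simply tail the file:
tail -f nohup.out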
