I'm trying to implement some kind of 'multiprocessing' in PHP for my task. The task is to check the status of every device in our network.
For that I decided to use a looped exec, and it works, but I don't know whether it works correctly or not:
$command = "php scan_part.php $i > null &";
exec($command);
This calls scan_part.php as many times as I need, but the question is: how can I calculate the time needed for all of my scan_part.php instances to finish executing?
Please help me, I'm stuck!
Use proc_open to launch your script instead of exec. proc_open lets you wait until a process is done through proc_close, which blocks until the program terminates.
$starttime = microtime(true);
$processes = array();

// stdin, stdout, stderr - take no input, save no output
$descriptors = array(
    0 => array("file", "/dev/null", "r"),
    1 => array("file", "/dev/null", "w"),
    2 => array("file", "/dev/null", "w"),
);

while ($your_condition) {
    $command = "php scan_part.php $i"; // no pipe redirection, no background mark
    $processes[] = proc_open($command, $descriptors, $pipes);
}

// proc_close will block until the program terminates, or will return immediately
// if the program has already terminated
foreach ($processes as $process) {
    proc_close($process);
}

$endtime = microtime(true);
$delta = $endtime - $starttime;
Consider using the microtime() function. Example #2 on this page is useful:
http://php.net/manual/en/function.microtime.php
EDIT: This approach will not work because, as pointed out in the comments, the process is launched in the background. I am unable to suggest a better approach.
Does the scan_part.php script output anything that the calling script expects/accepts as input? If not, you can use /usr/bin/time to calculate the run time of the scan_part script:
exec('/usr/bin/time /usr/bin/php scan_part.php &');
That returns something like this:
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 4192maxresident)k
0inputs+0outputs (0major+318minor)pagefaults 0swaps
which you can parse for the times. This would let you get execution time without having to modify scan_parts.
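For illustration, a rough sketch of that parsing (my own, not from the original answer): it assumes GNU time, which accepts a -f format string, and it drops the trailing & so the command runs in the foreground and its output can actually be captured.

// /usr/bin/time writes to stderr, so redirect stderr into the captured output
$output = array();
exec('/usr/bin/time -f "%e" /usr/bin/php scan_part.php 2>&1', $output);
$elapsed = (float) end($output);   // the last captured line is the elapsed seconds
echo "scan_part.php took {$elapsed} seconds\n";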
On the other hand, if scan_parts doesn't normally produce output, and you can/will modify it, then wrapping the contents of scan_parts with a couple microtime(true) calls and outputting the difference would be simpler.
$start = microtime(true);
// ... do the core of scan_parts here ...
echo microtime(true) - $start; // output the runtime here
exit();
This gives you a SINGLE time value to return, saving you the parsing overhead of the /usr/bin/time output.
You can't request the time to execute on a normal operating system (it's a system call and handled by the kernel); for that you would need a real-time OS.
But you can calculate it by recording, for each instance, how much time it took to launch, then calculating the average over that dataset.
microtime(true) can give you the current time as a float. Do this like so:
while (/** list proc */) {
    $starttime = microtime(true);
    proc_open( /** some syntax here */ );
    $endtime = microtime(true) - $starttime;
    // store time here
}
You may use microtime(); if you want to benchmark a whole application, you may also look into XDebug, which will give you detailed reports on the performance of your application.
For Windows, the latest version of WampServer comes packaged with XDebug and webgrind. For Linux, the installation is also quite simple.
See: http://code.google.com/p/webgrind/
I noticed a big performance difference when I tried to fetch the MX records of gmail.com 100,000 times using PHP and a shell script.
The PHP script takes around 1.5 minutes:
<?php
$time = time();
for ($i = 1; $i <= 100000; $i++) {
    getmxrr('gmail.com', $hosts, $mxweights);
    unset($hosts, $mxweights);
}
$runtime = time() - $time;
echo "Time Taken : $runtime Sec.";
?>
But the same thing done inside a shell for loop is almost 10 times slower:
time for i in {1..100000}; do (php -r 'getmxrr("gmail.com", $mxhosts, $mxweight);');done
I am curious to know the reasons why the shell script takes so much more time to complete exactly the same thing that the PHP script does very quickly.
I am a PHP beginner. I want to invoke an external Unix command, pipe some stuff into it (e.g., a string and files), and have the result appear in my output buffer (the browser).
Consider the following:
echo '<h1>stuff for my browser output window</h1>';
$fnmphp= '/tmp/someinputfile';
$sendtoprogram = "myheader: $fnmphp\n\n" . file_get_contents($fnmphp);
popen2outputbuf("unixprogram < $sendtoprogram"); // popen2outputbuf is the function I wish existed
echo '<p>done</p>';
An even better solution would let PHP write myheader (into Unix program), then pipe the file $fnmphp (into Unix program); and the output of unixprogram would immediately go to my browser output buffer.
I don't think PHP uses stdout, so my Unix program's STDOUT output would not make it into the browser; otherwise, this would happen by default if I used system(). I can only think of solutions that require writing tempfiles.
I think I am standing on the hose here (a German idiom; wires crossed): this probably has an obvious solution.
update:
Here is the entirely inelegant but pretty precise solution that I want to replace:
function pipe2eqb($text) {
    $whatever = '/tmp/whatever-' . time() . '-' . $_SESSION['uid'];
    $inf  = "$whatever.in";
    $outf = "$whatever.out";
    assert(!file_exists($inf));
    assert(!file_exists($outf));
    file_put_contents($inf, $text);
    assert(file_exists($inf));
    system("unixprog < $inf > $outf");
    $fo = file_get_contents($outf);
    unlink($inf);
    unlink($outf);
    return $fo;
}
It is easy to replace either the input or the output, but I want to replace both. I will post a solution when I figure it out.
The best way to do this is with the proc_open family of functions:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$cwd = NULL; // '/tmp';
$env = NULL; // array();
$cmd = 'unixprog ';
$process = proc_open($cmd, $descriptorspec, $pipes, $cwd, $env);
assert(false !== $process);
Now, to give arguments to unixprog, do something like:
$cmd = 'unixprog --arg1=' . escapeshellarg($arg1) . ' --arg2=' . escapeshellarg($arg2);
To talk to the program's stdin, do something like:
assert(strlen($stdinmessage) === fwrite($pipes[0], $stdinmessage));
To read from the process's stdout, do something like:
$stdout = stream_get_contents($pipes[1]);
To read from the process's stderr, do something like:
$stderr = stream_get_contents($pipes[2]);
To check if the program has finished, do something like:
$status = proc_get_status($process);
if ($status['running']) { echo 'child process is still running.'; }
To check the return code of the process when it has finished:
echo 'return code from child process: ' . $status['exitcode'];
To wait for the child process to finish, you CAN do
while (proc_get_status($process)['running']) { sleep(1); }
This is a quick and easy way to do it, but it is not optimal. tl;dr: it may be slow or waste CPU. Long version:
There is some nigh-optimal, event-driven way to do this, but I'm not sure how to do it. Imagine having to run a program 10 times, but the program executes in 100 milliseconds: this code would use 10 seconds, while optimal code would use only 1 second. You can use usleep() for microseconds, but it's still not optimal: imagine you're checking every 100 microseconds but the program takes 10 seconds to execute; you would waste CPU checking the status 100,000 times, while optimal code would only check it once. I'm sure there is a fancy way to let PHP sleep until the process finishes with some callback/signal, perhaps with stream_select, but I've yet to solve it. (If anybody has the solution, please let me know!)
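One possible sketch of that idea (my own attempt, not something confirmed by the original answer): block on the child's stdout pipe with stream_select() instead of polling, reusing the $descriptorspec pipes and the placeholder unixprog command from above.

$process = proc_open('unixprog', $descriptorspec, $pipes);
fclose($pipes[0]); // nothing to send on stdin in this sketch
while (!feof($pipes[1])) {
    $read = array($pipes[1]);
    $write = $except = null;
    // blocks until output (or EOF) is available, so no CPU is wasted polling
    if (stream_select($read, $write, $except, null) > 0) {
        echo fread($pipes[1], 8192);
    }
}
fclose($pipes[1]);
fclose($pipes[2]);
echo 'return code: ' . proc_close($process);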
-- read more at http://php.net/manual/en/book.exec.php
I'm thinking about making a PHP script which opens the Stockfish chess engine CLI, sends a few commands, and gets back the output.
I think I can achieve this by using proc_open with a pipes array, but I can't figure out how to wait for the whole output... If there's another, better solution, it's appreciated!
Here's my code:
// ok, define the pipes
$descr = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$pipes = array();

// open the process with those pipes
$process = proc_open("./stockfish", $descr, $pipes);

// check if it's running
if (is_resource($process)) {
    // send first universal chess interface command
    fwrite($pipes[0], "uci");
    // send analysis (5 seconds) command
    fwrite($pipes[0], "go movetime 5000");
    // close the child's stdin pipe or STDOUT can't be read
    fclose($pipes[0]);
    // read and print all output that comes from the pipe
    while (!feof($pipes[1])) {
        echo fgets($pipes[1]);
    }
    // close the last opened pipe
    fclose($pipes[1]);
    // at the end, close the process
    proc_close($process);
}
The process seems to start, but the second command I send to its stdin doesn't behave as expected: "go movetime 5000" should produce analysis lines for about 5 seconds, yet the output it prints is immediate.
How can I get this to work?
CLI link, CLI documentation link
Please, ask me for more information about this engine if you need.
Thank you!
fwrite($pipes[0], "uci/n");
fwrite($pipes[0], "go movetime 5000/n");
Without /n Stockfish see this as one command (ucigo movetime 5000) and don't recognise it.
Actually, your code works. $pipes[1] contained all the output from stockfish...
You might need to change the line
position startpos moves 5000
to a different number, as 5000 means 5000 ms = 5 seconds, i.e. the time at which the engine stops. Try 10000 and the engine stops after 10 seconds, etc.
You need to remove the early fclose($pipes[0]) and check for "bestmove" in the while loop; if "bestmove" is found, break out of the loop, and only after that call fclose($pipes[0]). That worked for me. Also add a \n separator at the end of the commands.
Thanks for the code!
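Putting those suggestions together, here is a minimal sketch (my own illustration, assuming the same ./stockfish binary and pipe setup as in the question):

$descr = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$process = proc_open("./stockfish", $descr, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "uci\n");
    fwrite($pipes[0], "go movetime 5000\n");
    // keep stdin open while the engine is thinking; stop at its final line
    while (!feof($pipes[1])) {
        $line = fgets($pipes[1]);
        if ($line === false) {
            break;
        }
        echo $line;
        // "bestmove ..." is the last line the engine prints for a "go" command
        if (strpos($line, "bestmove") === 0) {
            break;
        }
    }
    fclose($pipes[0]); // only now close the engine's stdin
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
}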
I have developed a metasearch engine and one of the optimisations I would like to make is to process the search APIs in parallel. Imagine that results are retrieved from Search Engine A in 0.24 seconds, from SE B in 0.45 seconds, and from SE C in 0.5 seconds. With other overheads, the metasearch engine can return aggregated results in about 1.5 seconds, which is viable. Now what I would like to do is to send those requests in parallel rather than in series, as at present, and get that time down to under a second. I have investigated exec, forking, and threading, and all of them, for various reasons, have failed. Now, I have only spent a day or two on this, so I may have missed something. Ideally I would like to implement this on a WAMP stack on my development machine (localhost) and then see about implementing it on a Linux web server thereafter. Any help appreciated.
Let's take a simple example: say we have two files we want to run simultaneously. File 1:
<?php
// file1.php
echo 'File 1 - Test 1'.PHP_EOL;
$sleep = mt_rand(1, 5);
echo 'Start Time: '.date("g:i:sa").PHP_EOL;
echo 'Sleep Time: '.$sleep.' seconds.'.PHP_EOL;
sleep($sleep);
echo 'Finish Time: '.date("g:i:sa").PHP_EOL;
?>
Now, imagine file two is the same... the idea is that if run in parallel the command line output for the times should be the same, for example:
File 1 - Test 1
Start Time: 9:30:43am
Sleep Time: 4 seconds.
Finish Time: 9:30:47am
But whether I use exec, popen or whatever, I just cannot get this to work in PHP!
I would use socket_select(). Doing so, only the connection time would be cumulative, as you can read from the sockets in parallel. This will give you a big performance boost.
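A rough sketch of that idea (entirely illustrative: the host names, port, and query path are made up), here using PHP's stream wrappers and stream_select() around the same select() mechanism:

$hosts = array('engine-a.example.com', 'engine-b.example.com'); // hypothetical engines
$sockets = array();
$results = array();

foreach ($hosts as $i => $host) {
    // connections are still made one after another; only the slow reads run in parallel
    $s = stream_socket_client("tcp://$host:80", $errno, $errstr, 2);
    if ($s === false) {
        continue;
    }
    fwrite($s, "GET /search?q=test HTTP/1.0\r\nHost: $host\r\n\r\n");
    stream_set_blocking($s, false);
    $sockets[$i] = $s;
    $results[$i] = '';
}

// read from whichever socket has data until all of them have closed
while ($sockets) {
    $read = array_values($sockets);
    $write = $except = null;
    if (stream_select($read, $write, $except, 5) === false) {
        break;
    }
    foreach ($read as $s) {
        $i = array_search($s, $sockets, true); // map the ready stream back to its engine
        $chunk = fread($s, 8192);
        if ($chunk !== false && $chunk !== '') {
            $results[$i] .= $chunk;
        }
        if (feof($s)) { // this engine has finished responding
            fclose($s);
            unset($sockets[$i]);
        }
    }
}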
There is one viable approach: make a CLI PHP file that receives in its arguments what it has to do and returns whatever result is produced, serialized.
In your main app you may popen() as many of these workers as you need and then collect the outputs in a simple loop:
[edit] I used your worker example, just had to chmod +x and add a #!/usr/bin/php line on top:
#!/usr/bin/php
<?php
echo 'File 1 - Test 1'.PHP_EOL;
$sleep = mt_rand(1, 5);
echo 'Start Time: '.date("g:i:sa").PHP_EOL;
echo 'Sleep Time: '.$sleep.' seconds.'.PHP_EOL;
sleep($sleep);
echo 'Finish Time: '.date("g:i:sa").PHP_EOL;
?>
I also modified the run script a little bit - ex.php:
#!/usr/bin/php
<?php
$pha = array();
$res = array();
$pha[1] = popen("./file1.php", "r");
$res[1] = '';
$pha[2] = popen("./file2.php", "r");
$res[2] = '';
foreach ($pha as $id => $ph) {
    while (!feof($ph)) {
        $res[$id] .= fread($ph, 8192);
    }
    pclose($ph);
}
echo $res[1] . $res[2];
Here is the result when tested in the CLI (it's the same when ex.php is called from the web, but the paths to file1.php and file2.php should be fixed):
$ time ./ex.php
File 1 - Test 1
Start Time: 11:00:33am
Sleep Time: 3 seconds.
Finish Time: 11:00:36am
File 2 - Test 1
Start Time: 11:00:33am
Sleep Time: 4 seconds.
Finish Time: 11:00:37am
real 0m4.062s
user 0m0.040s
sys 0m0.036s
As seen in the result, one script takes 3 seconds to execute and the other takes 4. Run together in parallel, both finish within 4 seconds.
[end edit]
In this way the slow operations run in parallel; you only collect the results serially.
Finally, it will take (the slowest worker time) + (time for collecting) to execute. Since the time for collecting the results, unserializing, etc. may be ignored, you get all the data in the time of the slowest request.
As a side note, you may try to use the igbinary serialiser, which is much faster than the built-in one.
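For illustration only (this assumes the igbinary extension is installed; the payload here is made up):

// in the worker: emit the result in igbinary format instead of serialize()
$result = array('engine' => 'A', 'hits' => 42); // hypothetical payload
echo igbinary_serialize($result);

// in the collector: decode what was read from that worker's pipe
$data = igbinary_unserialize($res[1]);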
As noted in the comments:
The worker script is executed outside of the web request, so you have to pass all of its state via arguments. Handling the escaping, security, etc. of those arguments can also be a problem, so a not-very-efficient but simple way is to use base64.
A major drawback of this approach is that it is not easy to debug.
It can be further improved by using stream_select instead of fread and also collecting data in parallel.
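A rough sketch of that improvement (my own illustration, not from the original answer), reading both workers' pipes as data becomes available:

$pha = array(1 => popen("./file1.php", "r"), 2 => popen("./file2.php", "r"));
$res = array(1 => '', 2 => '');
foreach ($pha as $ph) {
    stream_set_blocking($ph, false); // popen() returns a stream, so this works
}
while ($pha) {
    $read = array_values($pha);
    $write = $except = null;
    if (stream_select($read, $write, $except, 5) === false) {
        break;
    }
    foreach ($read as $ph) {
        $id = array_search($ph, $pha, true); // map the ready pipe back to its worker
        $chunk = fread($ph, 8192);
        if ($chunk !== false && $chunk !== '') {
            $res[$id] .= $chunk;
        }
        if (feof($ph)) { // this worker is done
            pclose($ph);
            unset($pha[$id]);
        }
    }
}
echo $res[1] . $res[2];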
I'm developing a long-running PHP script that compiles scraped information from multiple sources, organizes it, and caches it in a database.
As this script has a very long runtime, I'd like to print out runtime status reports in order to track the progress. Ideally, this loop:
for ($i = 1; $i <= 10; $i++) {
    echo "Starting iteration #$i\n";
    set_time_limit(40);
    echo "Time Limit set, starting 10 second sleep.\n";
    sleep(10);
    echo "Finishing iteration #$i\n\n";
}
echo "Iterations Finished.";
would output:
Starting iteration #1
Time Limit set, starting 10 second sleep
then wait 10 seconds and output:
Finishing iteration #1
Starting iteration #2
Time Limit set, starting 10 second sleep
then, right before the PHP script finishes, it would output:
Iterations Finished.
What is the best way to achieve this?
If you are running PHP from the CLI, you can output directly to stdout, without having to wait for the script to end, using:
$handle = fopen('php://stdout', 'w');
fwrite($handle, $output);
If run from CGI (which would be really bad for a script like this), you would have to change how the output buffer behaves, using flush().
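For example, a rough sketch (my own, under the assumption that output buffering and server-side compression are not getting in the way):

for ($i = 1; $i <= 10; $i++) {
    echo "Starting iteration #$i<br>\n";
    if (ob_get_level() > 0) {
        ob_flush(); // flush PHP's own output buffer, if one is active...
    }
    flush();        // ...and ask the SAPI/web server to push it to the browser
    sleep(10);
}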
Try writing the runtime status reports to a file, then viewing live updates of the file using ajax, per this question: Live feed of an updating file
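A tiny sketch of the writing side (the progress.log path is just an assumption; the AJAX page would poll this file):

$logfile = '/tmp/progress.log'; // hypothetical location read by the AJAX page
for ($i = 1; $i <= 10; $i++) {
    file_put_contents($logfile, "Starting iteration #$i\n", FILE_APPEND);
    // ... do the real work of the iteration here ...
    file_put_contents($logfile, "Finishing iteration #$i\n", FILE_APPEND);
}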