I need to perform a series of tests to pick the fastest branch of code for a set of functions I designed. As these functions output some text/HTML content, I would like to measure the speed without filling the browser with garbage data.
Is there an equivalent to /dev/null in PHP? The closest equivalents for writing temporary data I've found are php://temp and php://memory, but those two I/O streams store the garbage data, and I want every piece of data to be written in a 'fake' fashion.
I could always write all the garbage data to a variable, à la $tmp .= <function return value goes here>, but I'm sure there must be a more elegant or better way to accomplish this WITHOUT resorting to functions like shell_exec(), exec(), proc_open() and similar approaches (the production server where I'm going to test the final code won't have any of those functions available).
Is there an equivalent?
// For what it's worth, this works on CentOS 6.5, PHP 5.3.3.
$fname = "/dev/null";
if (file_exists($fname))  print "*** /dev/null exists ***\n";
if (is_readable($fname))  print "*** /dev/null readable ***\n";
if (is_writable($fname))  print "*** /dev/null writable ***\n";

if (($fileDesc = fopen($fname, "r")) !== false) {
    print "*** I opened /dev/null for reading ***\n";
    $x = fgetc($fileDesc);
    fclose($fileDesc);
}

if (($fileDesc = fopen($fname, "w")) !== false) {
    print "*** I opened /dev/null for writing ***\n";
    $x = fwrite($fileDesc, 'X');
    fclose($fileDesc);
}

if (($fileDesc = fopen($fname, "a")) !== false) {
    print "*** I opened /dev/null for append ***\n";
    $x = fwrite($fileDesc, 'X');
    fclose($fileDesc);
}
I think your best bet would be a streamWrapper that profiles your output on write with microtime(), which you can then register with stream_wrapper_register(). The example in the manual is pretty good.
If your code is not that complicated, or you feel this would be overkill, you can just use the ob_start() callback handler.
Hope this helps.
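For the /dev/null-style question above, a tiny custom wrapper registered with stream_wrapper_register() can discard everything. Here is a minimal sketch (the wrapper name null:// and the class name are my own, and I've left out the microtime() profiling for brevity):

// A stream wrapper that throws away everything written to it (sketch).
class NullStream {
    public $context;
    public function stream_open($path, $mode, $options, &$opened_path) { return true; }
    public function stream_write($data) { return strlen($data); } // claim it was all written
    public function stream_read($count) { return ''; }
    public function stream_eof()   { return true; }
    public function stream_flush() { return true; }
    public function stream_stat()  { return array(); }
    public function stream_close() { }
}

stream_wrapper_register('null', 'NullStream');

$devnull = fopen('null://', 'w');
fwrite($devnull, $htmlFromBenchmarkedFunction); // $htmlFromBenchmarkedFunction is hypothetical
fclose($devnull);

The ob_start() route is even shorter: ob_start(function ($buffer) { return ''; }); swallows everything echoed once the buffer is flushed.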
PHP's proc_open manual states:
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
What happens: I call a Perl script from a PHP-based web application and pass parameters in the call. I have no future need to send data to the script. Through stdout [1] I receive json_encoded data from the Perl script that I use in my PHP application.
What I would like to add: The Perl script progresses through a website collecting information depending on the parameters passed in its initial call. I would like to send a text string back to the PHP application that I could use to display as a sort of progress bar.
How I think I should do it: I would expect to poll (every 1-2 seconds) the channel that has been set up for that "progression" update. I would use JavaScript / jQuery to write into an HTML div container for the user to view. I do not think I should mix the "progress" channel with the more critical "json_encode(data)" channel, as I would then need to decipher the stdout stream. (Is this thought logical, practical?)
My main question: How do you use additional "file descriptors"? I would imagine the setup of additional channels to be straightforward, such as the 3 => ... below:
$tunnels = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
    3 => array('pipe', 'w')
);

$io = array();
$resource = proc_open("perl file/tomy/perl/code.pl $param1 $param2 $param3", $tunnels, $io);
if (!is_resource($resource)) {
    $error = "No Resource";
}

fclose($io[0]);

$perlOutput = stream_get_contents($io[1]);
$output = json_decode($perlOutput);
$errors = stream_get_contents($io[2]);
print "$errors<p>";

fclose($io[1]);
fclose($io[2]);

$result = proc_close($resource);
if ($result != 0) {
    echo "you returned a $result result on proc_close";
}
But in the Perl script I simply write to stdout, like:
my $json_terms = encode_json(\@terms);
print $json_terms;
If my understanding of setting up an additional channel is correct (above, the 3 => ...), then how would I write to it within the Perl script?
Thanks
Say you want to monitor the progress of a hello-world program, where each step is a dot written to the designated file descriptor.
#! /usr/bin/env perl

use warnings;
use strict;

die "Usage: $0 progress-fd\n" unless @ARGV == 1;
my $fd = shift;
open my $progress, ">&=", $fd or die "$0: dup $fd: $!";

# disable buffering on both handles
for ($progress, *STDOUT) {
    select $_;
    $| = 1;
}

my $output = "Hello, world!\n";
while ($output =~ s/^(.)(.*)\z/$2/s) {
    my $next = $1;
    print $next;
    print $progress ".";
    sleep 1;
}
Using bash syntax, opening fd 3 on /tmp/progress and connecting it to the program looks like this:
$ (exec 3>/tmp/progress; ./hello-world 3)
Hello, world!
$ cat /tmp/progress
..............
(It’s more amusing to watch live.)
To also see the dots on your terminal as they emerge, you could open your progress descriptor and effectively dup2 it onto standard error, again using bash syntax (and more fun in real time):
$ (exec 17>/dev/null; exec 17>&2; ./hello-world 17)
H.e.l.l.o.,. .w.o.r.l.d.!.
.
You could of course skip the extra step with
$ (exec 17>&2; ./hello-world 17)
to get the same effect.
If your Perl program dies with an error such as
$ ./hello-world 333
./hello-world: dup 333: Bad file descriptor at ./hello-world line 9.
then the write end of your pipe on the PHP side probably has its close-on-exec flag set.
You open a new filehandle and dup it to file descriptor 3:
open STD3, '>&3';
print STDERR "foo\n";
print STD3 "bar\n";
$ perl script.pl 2> file2 3> file3
$ cat file2
foo
$ cat file3
bar
Edit: per Greg Bacon's comment, open STD3, '>&=', 3 or open STD3, '>&=3' opens the file descriptor directly, like C's fdopen call, avoiding a dup call and saving you a file descriptor.
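Back on the PHP side of the original question, polling that fourth channel could look something like this sketch (the non-blocking read loop and the half-second interval are my own illustration, building on the $tunnels/$io arrays from the question):

// Poll the progress channel (pipe 3) while the Perl script runs.
stream_set_blocking($io[3], false);

while (($status = proc_get_status($resource)) && $status['running']) {
    $progress = fread($io[3], 8192); // whatever the Perl script wrote to fd 3 so far
    if ($progress !== '' && $progress !== false) {
        // hand $progress to the browser, e.g. store it for the next AJAX poll
    }
    usleep(500000);
}

$perlOutput = stream_get_contents($io[1]); // the critical JSON channel, read once at the end

Note that a real implementation should also drain $io[1] and $io[2] as it goes (stream_select() helps here), otherwise the child can block once a pipe buffer fills up.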
I'm trying to use popen to run a php script in the background. However, I need to pass a (fairly large) serialized object.
$cmd = "php background_test.php >log/output.log &";
$fh = popen($cmd, 'w');
fwrite($fh, $data);
fclose($fh);
//pclose($fh);
Without the ampersand, this code executes fine, but the parent script waits until the child is finished running. With the ampersand, STDIN gets no data.
Any ideas?
You can try forking: let the child process write the data while the main script continues as normal.
Something like this:
// Fork a child process
$pid = pcntl_fork();

// Unable to fork
if ($pid == -1) {
    die('error');
}
// We are the parent
elseif ($pid) {
    // do nothing
}
// We are the child
else {
    $cmd = "php background_test.php >log/output.log";
    $fh  = popen($cmd, 'w');
    fwrite($fh, $data);
    fclose($fh);
    exit();
}

// parent will continue here
// child will exit above
Read more about it here: https://sites.google.com/a/van-steenbeek.net/archive/php_pcntl_fork
Also check the function pcntl_waitpid() (zombies be gone) in the PHP documentation.
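A minimal reaping sketch (the WNOHANG loop is my illustration, not from the linked page):

// In the parent, reap any finished children without blocking, so no zombies linger.
while (($pid = pcntl_waitpid(-1, $status, WNOHANG)) > 0) {
    // child $pid has exited; inspect it with pcntl_wexitstatus($status) if needed
}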
As far as I know there is no way in PHP to send a process to the background and continue to feed its STDIN (but maybe I'm wrong). You have two other choices here:
Refactor your background_test.php to get its input from the command line, and transform your command line into php background_test.php arg1 arg2 ... >log/output.log &
If your input is pretty long, write it to a temporary file and then feed the background_test.php script with that file, as in the following code.
Example for point 2:
<?php
$tmp_file = tempnam(sys_get_temp_dir(), 'bg');
file_put_contents($tmp_file, $data);
$cmd = "php background_test.php < $tmp_file > log/output.log &";
exec($cmd);
Make a background process listen on a socket file. Then open the socket file from PHP and send your serialized data there. When your background daemon receives a connection through the socket, make it fork, read the data, then process it.
You would need to do some reading, but I think that's the best way to achieve this. By socket I mean a Unix socket file, but you can also use this over the network.
http://gearman.org/ is also a good alternative, as mentioned by @Joshua.
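A rough sketch of the socket-file idea described above (the socket path /tmp/bg.sock and the process() function are hypothetical):

// Daemon side: listen on a Unix socket file, fork a worker per connection.
$server = stream_socket_server('unix:///tmp/bg.sock', $errno, $errstr);
if ($server === false) {
    die("socket: $errstr");
}
while ($conn = stream_socket_accept($server, -1)) {
    if (pcntl_fork() === 0) {                  // child handles this job
        $payload = stream_get_contents($conn); // read the serialized object
        fclose($conn);
        process(unserialize($payload));        // process() is a placeholder
        exit(0);
    }
    fclose($conn); // parent keeps accepting
}

// Client side (your web script): hand over the data and return immediately.
$sock = stream_socket_client('unix:///tmp/bg.sock', $errno, $errstr);
fwrite($sock, serialize($data));
fclose($sock);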
I am running multiple shell_exec calls; the process.php scripts run in the background.
On the shell/SSH, I execute the code like this: username [~/public_html/curl]# php index.php
Example....
index.php
<?php
shell_exec("php process.php > /dev/null 2>&1 &");
shell_exec("php process.php > /dev/null 2>&1 &");
shell_exec("php process.php > /dev/null 2>&1 &");
shell_exec("php process.php > /dev/null 2>&1 &");
?>
process.php
<?php
$section = rand(999, 999999);
$z = 1;
print "STARTED .... \n";
while ($z <= 10) {
    print "---------------------------------\n";
    print $section . ": " . $z . "\n";
    $z++;
    sleep(2);
}
print "LOOP FINISH at " . time();
?>
While the process.php instances are running, I am having two problems:
I can't see the output from the process.php scripts (I need to know what they are doing).
I need to know which process has finished and which has started.
What's the best way to log the output in real time? Saving it to a text file? Or how could it be written to a MySQL database (a logs table)?
If your processes will be alive only within the lifetime of another php script, you could use popen instead of shell_exec:
http://us.php.net/popen
This gives you a very convenient way to get data from the other processes into your PHP script, using the same interface as file handles. To know when a process is done, you could make sure it sends an EOF (end-of-file) when it finishes, and use PHP's feof function to detect it.
On the other hand, if your processes may live longer than any of the PHP scripts that talk to them, then a text file may be a very practical solution. Keep in mind, though, that disk access is always much slower than memory access, so if you use text files for communication, it will not be optimally fast.
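For the question's setup, the popen()/feof() approach might look like this sketch (the timestamped log format is my own; note it drains one worker at a time, so truly interleaved output would need stream_select()):

// Launch the workers through popen() and log their output with timestamps.
$procs = array();
for ($i = 0; $i < 4; $i++) {
    $procs[$i] = popen('php process.php', 'r');
}

$log = fopen('workers.log', 'a');
foreach ($procs as $i => $fh) {
    while (($line = fgets($fh)) !== false) { // fgets() returns false at EOF
        fwrite($log, date('H:i:s') . " [worker $i] $line");
    }
    fwrite($log, date('H:i:s') . " [worker $i] finished\n");
    pclose($fh);
}
fclose($log);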
I am trying to run a process on a web page that will return its output in real time. For example, if I run the 'ping' process, the page should update every time it returns a new line (right now, when I use exec(command, output), I am forced to use the -c option and wait until the process finishes to see the output on my web page). Is it possible to do this in PHP?
I am also wondering what the correct way is to kill this kind of process when someone leaves the page. In the case of the 'ping' process, I am still able to see the process running in the system monitor (which makes sense).
This worked for me:
$cmd = "ping 127.0.0.1";
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, realpath('./'), array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print $s;
flush();
}
}
echo "</pre>";
This is a nice way to show real time output of your shell commands:
<?php
header("Content-type: text/plain");
// tell php to automatically flush after every output
// including lines of output produced by shell commands
disable_ob();
$command = 'rsync -avz /your/directory1 /your/directory2';
system($command);
You will need this function to prevent output buffering:
function disable_ob() {
    // Turn off output buffering
    ini_set('output_buffering', 'off');
    // Turn off PHP output compression
    ini_set('zlib.output_compression', false);
    // Implicitly flush the buffer(s)
    ini_set('implicit_flush', true);
    ob_implicit_flush(true);
    // Clear, and turn off output buffering
    while (ob_get_level() > 0) {
        // Get the current level
        $level = ob_get_level();
        // End the buffering
        ob_end_clean();
        // If the current level has not changed, abort
        if (ob_get_level() == $level) break;
    }
    // Disable Apache output buffering/compression
    if (function_exists('apache_setenv')) {
        apache_setenv('no-gzip', '1');
        apache_setenv('dont-vary', '1');
    }
}
It doesn't work on every server I have tried it on, though. I wish I could offer advice on what to look for in your PHP configuration to determine whether or not you should pull your hair out trying to get this type of behavior to work on your server! Does anyone else know?
Here's a dummy example in plain PHP:
<?php
header("Content-type: text/plain");
disable_ob();
for ($i = 0; $i < 10; $i++) {
    echo $i . "\n";
    usleep(300000);
}
I hope this helps others who have googled their way here.
I checked all the answers, but nothing worked... then I found the solution here.
It works on Windows (I think this answer is helpful for users searching over there):
<?php
$a = popen('ping www.google.com', 'r');
while ($b = fgets($a, 2048)) {
    echo $b . "<br>\n";
    ob_flush();
    flush();
}
pclose($a);
?>
A better solution to this old problem using modern HTML5 Server Side Events is described here:
http://www.w3schools.com/html/html5_serversentevents.asp
Example:
http://sink.agiletoolkit.org/realtime/console
Code: https://github.com/atk4/sink/blob/master/admin/page/realtime/console.php#L40
(Implemented as a module in Agile Toolkit framework)
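For comparison, a bare-bones SSE endpoint in plain PHP could look like this (a sketch, not the Agile Toolkit code linked above; the ping command is a placeholder):

<?php
// Stream a command's output as Server-Sent Events.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

$handle = popen('ping -c 5 127.0.0.1', 'r');
while (($line = fgets($handle)) !== false) {
    echo 'data: ' . rtrim($line) . "\n\n"; // each SSE frame ends with a blank line
    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
}
pclose($handle);

On the browser side, new EventSource('console.php') (a hypothetical URL) then receives each line as a message event.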
For command-line usage:
function execute($cmd) {
    $proc = proc_open($cmd, [['pipe', 'r'], ['pipe', 'w'], ['pipe', 'w']], $pipes);
    while (($line = fgets($pipes[1])) !== false) {
        fwrite(STDOUT, $line);
    }
    while (($line = fgets($pipes[2])) !== false) {
        fwrite(STDERR, $line);
    }
    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    return proc_close($proc);
}
If you're trying to run a file, you may need to give it execute permissions first:
chmod('/path/to/script',0755);
Try this (tested on a Windows machine + WAMP server):
header('Content-Encoding: none;');
set_time_limit(0);
$handle = popen("<<< Your Shell Command >>>", "r");
if (ob_get_level() == 0) {
    ob_start();
}
while (!feof($handle)) {
    $buffer = fgets($handle);
    $buffer = trim(htmlspecialchars($buffer));
    echo $buffer . "<br />";
    echo str_pad('', 4096);
    ob_flush();
    flush();
    sleep(1);
}
pclose($handle);
ob_end_flush();
I've tried various PHP execution commands on Windows and found that they differ quite a lot.
Don't work for streaming: shell_exec, exec, passthru.
Kind of work: proc_open, popen -- "kind of" because you cannot pass arguments to your command (i.e. it won't work with my.exe --something, but will work with _my_something.bat).
The best (easiest) approach is:
Make sure your exe is flushing its output (see the printf flushing problem). Without this you will most likely receive batches of about 4096 bytes of text whatever you do.
If you can, use header('Content-Type: text/event-stream'); (instead of header('Content-Type: text/plain; charset=...');). This will not work in all browsers/clients though! Streaming will work without it, but at least the first lines will be buffered by the browser.
You also might want to disable caching: header('Cache-Control: no-cache');.
Turn off output buffering (either in php.ini or with ini_set('output_buffering', 'off');). This might also have to be done in Apache/Nginx/whatever server you use in front.
Turn off compression (either in php.ini or with ini_set('zlib.output_compression', false);). This might also have to be done in Apache/Nginx/whatever server you use in front.
So in your C++ program you do something like this (again, for other solutions see the printf flushing problem):
void Logger::log(const char *text) {
    printf("%s", text);
    fflush(stdout);
}
In PHP you do something like:
function setupStreaming() {
    // Turn off output buffering
    ini_set('output_buffering', 'off');
    // Turn off PHP output compression
    ini_set('zlib.output_compression', false);
    // Disable Apache output buffering/compression
    if (function_exists('apache_setenv')) {
        apache_setenv('no-gzip', '1');
        apache_setenv('dont-vary', '1');
    }
}

function runStreamingCommand($cmd) {
    echo "\nrunning $cmd\n";
    system($cmd);
}

// ...
setupStreaming();
runStreamingCommand($cmd);
First check whether flush() works for you. If it does, good, if it doesn't it probably means the web server is buffering for some reason, for example mod_gzip is enabled.
For something like ping, the easiest technique is to loop within PHP, running "ping -c 1" multiple times, and calling flush() after each output. Assuming PHP is configured to abort when the HTTP connection is closed by the user (which is usually the default, or you can call ignore_user_abort(false) to make sure), then you don't need to worry about run-away ping processes either.
If it's really necessary that you only run the child process once and display its output continuously, that may be more difficult -- you'd probably have to run it in the background, redirect output to a stream, and then have PHP echo that stream back to the user, interspersed with regular flush() calls.
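The "ping -c 1" looping technique mentioned above might look like this sketch (host, count and interval are placeholders):

// Run single-shot pings in a loop and flush each result to the browser.
ignore_user_abort(false); // stop the loop when the visitor leaves the page
echo '<pre>';
for ($i = 0; $i < 10; $i++) {
    echo shell_exec('ping -c 1 example.com');
    flush();
    sleep(1);
}
echo '</pre>';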
If you're looking to run system commands via PHP, look into the exec documentation.
I wouldn't recommend doing this on a high-traffic site, though; forking a process for each request is quite a hefty operation. Some programs provide the option of writing their process ID to a file so that you could check for, and terminate, the process at will, but for commands like ping, I'm not sure that's possible; check the man pages.
You may be better served by simply opening a socket on the port you expect to be listening (i.e. port 80 for HTTP) on the remote host; that way you know everything is going well in userland, as well as on the network.
If you're attempting to output binary data, look into PHP's header function and ensure you set the proper Content-Type and Content-Disposition. Review the documentation for more information on using/disabling the output buffer.
Try changing the php.ini setting output_buffering = Off. You should then be able to get real-time output on the page.
Use the system command instead of exec: system will flush the output.
Why not just pipe the output into a log file and then use that file to return content to the client? Not quite real time, but perhaps good enough?
I had the same problem and could only solve it using the Symfony Process component (https://symfony.com/doc/current/components/process.html).
Quick example:
<?php
use Symfony\Component\Process\Process;

$process = new Process(['ls', '-lsa']);
$process->run(function ($type, $buffer) {
    if (Process::ERR === $type) {
        echo 'ERR > ' . $buffer;
    } else {
        echo 'OUT > ' . $buffer;
    }
});
?>
The code below almost works, but it's not what I really meant:
ob_start();
echo 'xxx';
$contents = ob_get_contents();
ob_end_clean();
file_put_contents($file,$contents);
Is there a more natural way?
It is possible to write STDOUT directly to a file in PHP, which is much easier and more straightforward than using output buffering.
Do this in the very beginning of your script:
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen('application.log', 'wb');
$STDERR = fopen('error.log', 'wb');
Why at the very beginning you may ask? No file descriptors should be opened yet, because when you close the standard input, output and error file descriptors, the first three new descriptors will become the NEW standard input, output and error file descriptors.
In my example here I redirected standard input to /dev/null and the output and error file descriptors to log files. This is common practice when making a daemon script in PHP.
To write to the application.log file, this would suffice:
echo "Hello world\n";
To write to the error.log, one would have to do:
fwrite($STDERR, "Something went wrong\n");
Please note that when you change the input, output and error descriptors, the built-in PHP constants STDIN, STDOUT and STDERR will be rendered unusable. PHP will not update these constants to the new descriptors, and it is not allowed to redefine these constants (they are called constants for a reason, after all).
Here's a way to divert OUTPUT, which appears to be the original problem:
$ob_file = fopen('test.txt', 'w');

function ob_file_callback($buffer)
{
    global $ob_file;
    fwrite($ob_file, $buffer);
}

ob_start('ob_file_callback');
more info here:
http://my.opera.com/zomg/blog/2007/10/03/how-to-easily-redirect-php-output-to-a-file
None of the answers worked for my particular case, where I needed a cross-platform way of redirecting the output as soon as it was echoed, so that I could follow the logs with tail -f log.txt or another log-viewing app.
I came up with the following solution:
$logFp = fopen('log.txt', 'w');

ob_start(function ($buffer) use ($logFp) {
    fwrite($logFp, $buffer);
}, 1); // notice the use of chunk_size == 1

echo "first output\n";
sleep(10);
echo "second output\n";

ob_end_clean();
I haven't noticed any performance issues but if you do, you can change chunk_size to greater values.
Now just tail -f the log file:
tail -f log.txt
No, output buffering is as good as it gets. Though it's slightly nicer to just do:
ob_start();
echo 'xxx';
$contents = ob_get_clean();
file_put_contents($file, $contents);
Using the eio PECL module is very easy; you can also capture PHP internal errors, var_dump, echo, etc. In this code you can find examples of several different situations.
$fdout = fopen('/tmp/stdout.log', 'wb');
$fderr = fopen('/tmp/stderr.log', 'wb');
eio_dup2($fdout, STDOUT);
eio_dup2($fderr, STDERR);
eio_event_loop();
fclose($fdout);
fclose($fderr);
// output examples
echo "message to stdout\n";
$v2dump = array(10, "graphinux");
var_dump($v2dump);
// php internal error/warning
$div0 = 10/0;
// user errors messages
fwrite(STDERR, "user controlled error\n");
The call to eio_event_loop() is used to make sure previous eio requests have been processed. If you need to append to the log, use mode 'ab' instead of 'wb' in the fopen call.
Installing the eio module is very easy (http://php.net/manual/es/eio.installation.php). I tested this example with version 1.2.6 of the eio module.
You can install Eio extension
pecl install eio
and duplicate a file descriptor
$temp = fopen('/tmp/my_stdout', 'a');
$my_data = 'my something';
$foo = eio_dup2($temp, STDOUT, EIO_PRI_MAX, function ($data, $result, $request) {
    var_dump($data, $result, $request);
    var_dump(eio_get_last_error($request));
}, $my_data);
eio_event_loop();
echo "something to stdout\n";
fclose($temp);
This creates a new file descriptor and rewrites the target stream of STDOUT. The same can be done with STDERR, and the constants STD[OUT|ERR] remain usable.
I understand that this question is ancient, but people trying to do what this question asks will likely end up here... Both of you.
If you are running under a particular environment...
Running under Linux (probably most other Unix-like operating systems, untested)
Running via CLI (untested on web servers)
...you can actually close all of your file descriptors (yes, all of them, which means it's probably best to do this at the very beginning of execution... for example just after a pcntl_fork() call to background the process as a daemon, which seems like the most common need for something like this):
fclose( STDIN );  // fd 0
fclose( STDERR ); // fd 2
fclose( STDOUT ); // fd 1
And then re-open the file descriptors, assigning them to variables that will not fall out of scope and thus be garbage collected. Linux will predictably reuse the lowest free descriptor numbers in the order you open them.
$kept_in_scope_variable_fd0 = fopen( '/dev/null', ... ); // becomes fd 0 (STDIN)
$kept_in_scope_variable_fd1 = fopen(...); // becomes fd 1 (STDOUT)
$kept_in_scope_variable_fd2 = fopen(...); // becomes fd 2 (STDERR)
You can use whatever files or devices you want for this. I gave /dev/null as the example for STDIN (fd 0) because that's probably the most common case for this kind of code.
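Putting it together, a minimal daemonizing sketch might look like this (the log paths and append mode are my own assumptions, not from the original answer):

<?php
// Sketch: swap the standard descriptors for our own targets (hypothetical paths).
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// fopen() reuses the lowest free descriptors, so open in order 0, 1, 2.
$stdin  = fopen('/dev/null', 'r');       // becomes fd 0 (STDIN)
$stdout = fopen('/tmp/app.log', 'ab');   // becomes fd 1 (STDOUT)
$stderr = fopen('/tmp/error.log', 'ab'); // becomes fd 2 (STDERR)

echo "this line lands in /tmp/app.log\n"; // echo now writes to the new fd 1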
Once this is done you should be able to do normal things like echo, print_r, var_dump, etc without specifically needing to write to a file with a function. Which is useful when you're trying to background code that you do not want to, or aren't able to, rewrite to be file-pointer-output-friendly.
YMMV for other environments and things like having other FDs open, etc. My advice is to start with a small test script to prove that it works, or doesn't, in your environment, and then move on to integration from there.
Good luck.
Here is an ugly solution that was useful for a problem I had (I needed to debug):
if (file_get_contents("out.txt") != "in progress")
{
    file_put_contents("out.txt", "in progress");
    $content = file_get_contents('http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    file_put_contents("out.txt", $content);
}
The main drawback of that is that you'd better not use the $_POST variables.
But you don't have to put it at the very beginning.