I'm trying to use popen to run a PHP script in the background. However, I need to pass a (fairly large) serialized object.
$cmd = "php background_test.php >log/output.log &";
$fh = popen($cmd, 'w');
fwrite($fh, $data);
fclose($fh);
//pclose($fh);
Without the ampersand this code executes fine, but the parent script waits until the child has finished running. With the ampersand, the child's STDIN gets no data.
Any ideas?
You can try forking: let the child process write the data while the main script continues as normal. Something like this:
// Fork a child process
$pid = pcntl_fork();

// Unable to fork
if ($pid == -1) {
    die('error');
}
// We are the parent
elseif ($pid) {
    // do nothing
}
// We are the child
else {
    $cmd = "php background_test.php >log/output.log";
    $fh = popen($cmd, 'w');
    fwrite($fh, $data);
    pclose($fh); // pclose(), not fclose(), for a popen() handle
    exit();
}
// parent will continue here
// child will exit above
Read more about it here: https://sites.google.com/a/van-steenbeek.net/archive/php_pcntl_fork
Also check the pcntl_waitpid() function (zombies be gone) in the PHP documentation.
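For example, a minimal sketch of reaping the child from the parent branch (assuming the fork code above):

// In the parent branch, reap the child so it does not linger as a zombie.
// WNOHANG makes the call return immediately if the child is still running.
$status = 0;
pcntl_waitpid($pid, $status, WNOHANG);
// Or block until the child exits, e.g. just before the parent finishes:
// pcntl_waitpid($pid, $status);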
As far as I know there is no way in PHP to send a process to the background and keep feeding its STDIN (but maybe I'm wrong). You have two other choices here:
Refactor your background_test.php to take its input from the command line, and change your command to php background_test.php arg1 arg2 ... >log/output.log &
If your input is pretty long, write it to a temporary file and then feed background_test.php that file, as in the following code.
Example for point 2:
<?php
$tmp_file = tempnam(sys_get_temp_dir(), 'bg'); // tempnam() needs a directory and a prefix
file_put_contents($tmp_file, $data);
$cmd = "php background_test.php < $tmp_file > log/output.log &";
exec($cmd);
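For point 1, a rough sketch (escapeshellarg() matters if the data can contain shell metacharacters, and OS limits on argument length make this suitable only for reasonably small inputs):

// Hypothetical: pass the serialized object as a single shell-escaped argument.
$arg = escapeshellarg(serialize($object));
exec("php background_test.php $arg >log/output.log &");
// background_test.php would then unserialize($argv[1]).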
Make a background process listen to a socket file. Then open the socket from PHP and send your serialized data there. When the background daemon receives a connection through the socket, have it fork, read the data, then process it.
You would need to do some reading, but I think that's the best way to achieve this. By socket I mean a Unix socket file, but you can also use this over the network.
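A minimal sketch of the client side, assuming a daemon is already listening on a Unix socket (the path is illustrative):

// Connect to the daemon's Unix socket and hand over the serialized object.
$sock = stream_socket_client('unix:///tmp/background.sock', $errno, $errstr);
if ($sock === false) {
    die("connect failed: $errstr ($errno)");
}
fwrite($sock, serialize($object));
fclose($sock);
// This returns immediately; the daemon forks and processes the data.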
http://gearman.org/ is also a good alternative, as mentioned by @Joshua.
Related
We have a command-line PHP application that maintains special permissions and want to use it to relay piped data into a shell script.
I know that we can read STDIN with:
while (!feof(STDIN)) {
    $line = fgets(STDIN);
}
But how can I redirect that STDIN into a shell script?
The STDIN is far too large to load into memory, so I can't do something like:
shell_exec("echo ".STDIN." | script.sh");
Using xenon's answer with popen seems to do the trick.
// Open the process handle
$ph = popen("./script.sh", "w");

// This feeds STDIN into the script line by line.
while (($line = fgets(STDIN)) !== false) {
    // fgets() keeps the trailing newline, so there is no need to append one.
    fputs($ph, $line);
}
pclose($ph);
As @Devon said, popen()/pclose() are very useful here.
$scriptHandle = popen("./script.sh", "w");

while (($line = fgets(STDIN)) !== false) {
    fputs($scriptHandle, $line);
}

pclose($scriptHandle);
Alternatively, something along the lines of fputs($scriptHandle, file_get_contents("php://stdin")); might work in place of the line-by-line approach for smaller input.
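For large input, stream_copy_to_stream() should also work without loading everything into memory (untested sketch):

$scriptHandle = popen("./script.sh", "w");
// Copies STDIN into the process in chunks instead of line by line.
stream_copy_to_stream(STDIN, $scriptHandle);
pclose($scriptHandle);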
I am testing my code using a small database stored in text files. The most important problem I have found is when users write to a single file at the same time. To solve this I am using flock.
My computer runs Windows with XAMPP installed (I mention this because I understand flock works reliably on Linux, not on Windows). However, I need to run this test on a Linux server.
I have tested my code by loading the same script in 20 browser windows at the same time. The first results work fine, but after the test the database file appears empty.
My code:
$file_db=file("test.db");
$fd=fopen("".$db_name."","w");
if (flock($fd, LOCK_EX))
{
ftruncate($fd,0);
for ($i=0;$i<sizeof($file_db);$i++)
{
fputs($fd,"$file_db[$i]"."\n");
}
fflush($fd);
flock($fd, LOCK_UN);
fclose($fd);
}
else
{
print "Db Busy";
}
How is it possible that the script deletes the database file content? What is the proper way: keep flock and fix the existing code, or use some alternative technique?
I have rewritten the script using @lolka_bolka's answer and it works. So, in answer to your question: the file $db_name could be empty if the file test.db is empty.
Calling ftruncate() after fopen() with "w" is useless: mode "w" already truncates the file.
file function
Returns the file in an array. Each element of the array corresponds to a line in the file, with the newline still attached. Upon failure, file() returns FALSE.
You do not have to add an additional end-of-line symbol.
flock function
PHP supports a portable way of locking complete files in an advisory way (which means all accessing programs have to use the same way of locking or it will not work).
This means the file() function is not affected by the lock: $file_db = file("test.db"); could read the file while another process is somewhere between ftruncate($fd, 0); and fflush($fd);. So you need to read the file contents inside the lock.
$db_name = "file.db";
$fd = fopen($db_name, "r+"); // w changed to r+ for getting file resource but not truncate it
if (flock($fd, LOCK_EX))
{
$file_db = file($db_name); // read file contents while lock obtained
ftruncate($fd, 0);
for ($i = 0; $i < sizeof($file_db); $i++)
{
fputs($fd, "$file_db[$i]");
}
fflush($fd);
flock($fd, LOCK_UN);
}
else
{
print "Db Busy";
}
fclose($fd); // fclose should be called anyway
P.S. You can test this script from the console:
$ for i in {1..20}; do php 'file.php' >> file.log 2>&1 & done
Is it possible to store exec()'s output in a session variable while it is running, to watch its progress?
example:
index.php
<?php exec("very large command to execute", $array, $_SESSION['output']); ?>
follow.php
<?php echo $_SESSION['output']; ?>
So, when I run index.php I could close the page, navigate to follow.php, and follow the output of the command live every time I refresh the page.
No, because exec() waits for the spawned process to terminate before it returns. But it should be possible with proc_open, because that function provides the spawned process's output as streams and does not wait for it to terminate. So in broad terms you could do this (a rough sketch follows the list):
Use proc_open to spawn a process and redirect its output to pipes.
Use stream_select inside some kind of loop to see if there is output to read; read it with the appropriate stream functions when there is.
Whenever output is read, call session_start, write it to a session variable and call session_write_close. This is the standard "session lock dance" that allows your script to update session data without holding a lock on them the whole time.
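A rough, untested sketch of those steps (the command is illustrative):

// Start with an empty session value, then release the session lock.
session_start();
$_SESSION['output'] = '';
session_write_close();

$process = proc_open('very large command to execute',
    [1 => ['pipe', 'w']], $pipes);

while (!feof($pipes[1])) {
    $read = [$pipes[1]];
    $write = $except = null;
    // Wait up to one second for output to become available.
    if (stream_select($read, $write, $except, 1) > 0) {
        $chunk = fread($pipes[1], 8192);
        if ($chunk !== false && $chunk !== '') {
            // The "session lock dance": reopen, append, close again.
            session_start();
            $_SESSION['output'] .= $chunk;
            session_write_close();
        }
    }
}
proc_close($process);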
No, exec() will run to completion and only then store the result in the session.
You should run a child process writing directly to a file and then read that file in your browser:
$path = tempnam(sys_get_temp_dir(), 'myscript');
$_SESSION['work_path'] = $path;

// Release session lock
session_write_close();

$process = proc_open(
    'my shell command',
    [
        0 => ['pipe', 'r'],
        1 => ['file', $path, 'w'], // a 'file' descriptor needs a mode as well
        2 => ['pipe', 'w'],
    ],
    $pipes
);

if (!is_resource($process)) {
    throw new Exception('Failed to start');
}

fclose($pipes[0]);
fclose($pipes[2]);

$return_value = proc_close($process);
In your follow.php you can then just print the output so far:
echo file_get_contents($_SESSION['work_path']);
No, you can't implement watching this way.
I advise you to write the status to a file from index.php and read the status from that file in follow.php.
As an alternative to a file you can use Memcache.
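A minimal sketch of the file-based variant (paths and command are illustrative):

// index.php: append output to a status file as it arrives.
$fh = popen('very large command to execute 2>&1', 'r');
while (($line = fgets($fh)) !== false) {
    file_put_contents('/tmp/progress.log', $line, FILE_APPEND);
}
pclose($fh);

// follow.php: show whatever has been produced so far.
echo file_get_contents('/tmp/progress.log');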
I need to perform a series of tests to pick the fastest branch of code for a set of functions I designed. As these functions output some text/HTML content, I would like to measure their speed without filling the browser with garbage data.
Is there an equivalent to /dev/null in PHP? The closest equivalents for writing temporary data I've found are php://temp and php://memory, but those two I/O streams store the garbage data, and I want every piece of data to be written in a 'fake' fashion.
I could always write all the garbage data to a variable, à la $tmp .= <function return value goes here>, but I'm sure there must be a more elegant or better way to accomplish this WITHOUT resorting to functions like shell_exec(), exec(), proc_open() and similar approaches (the production server where I'm going to test the final code won't have any of those enabled).
Is there an equivalent?
// For what it's worth, this works on CentOS 6.5, PHP 5.3.3.
$fname = "/dev/null";
if (file_exists($fname)) print "*** /dev/null exists ***\n";
if (is_readable($fname)) print "*** /dev/null readable ***\n";
if (is_writable($fname)) print "*** /dev/null writable ***\n";

if (($fileDesc = fopen($fname, "r")) !== false) {
    print "*** I opened /dev/null for reading ***\n";
    $x = fgetc($fileDesc);
    fclose($fileDesc);
}

if (($fileDesc = fopen($fname, "w")) !== false) {
    print "*** I opened /dev/null for writing ***\n";
    $x = fwrite($fileDesc, 'X');
    fclose($fileDesc);
}

if (($fileDesc = fopen($fname, "a")) !== false) {
    print "*** I opened /dev/null for append ***\n";
    $x = fwrite($fileDesc, 'X');
    fclose($fileDesc);
}
I think your best bet would be a streamWrapper that profiles your output on write with microtime(), which you can then register with stream_wrapper_register(). The example in the manual is pretty good.
If your code is not that complicated, or you feel this would be overkill, you can just use an ob_start() callback handler.
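A rough sketch of the wrapper idea (names are illustrative; writes are discarded, only counted and timestamped):

class NullProfilerStream
{
    public static $bytes = 0;
    public static $writes = [];
    public $context; // property expected on stream wrapper classes

    public function stream_open($path, $mode, $options, &$opened_path)
    {
        return true; // nothing to open; everything written will be discarded
    }

    public function stream_write($data)
    {
        self::$writes[] = microtime(true); // timestamp each write
        self::$bytes   += strlen($data);
        return strlen($data);              // pretend everything was written
    }
}

stream_wrapper_register('nullprof', 'NullProfilerStream');
$sink = fopen('nullprof://discard', 'w');
fwrite($sink, some_function_under_test()); // hypothetical function under test
fclose($sink);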
Hope this helps.
The code below almost works, but it's not what I really meant:
ob_start();
echo 'xxx';
$contents = ob_get_contents();
ob_end_clean();
file_put_contents($file,$contents);
Is there a more natural way?
It is possible to write STDOUT directly to a file in PHP, which is much easier and more straightforward than using output buffering.
Do this at the very beginning of your script:
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen('application.log', 'wb');
$STDERR = fopen('error.log', 'wb');
Why at the very beginning, you may ask? No file descriptors should be open yet, because when you close the standard input, output and error file descriptors, the first three new descriptors you open will become the NEW standard input, output and error file descriptors.
In my example here I redirected standard input to /dev/null and the output and error file descriptors to log files. This is common practice when making a daemon script in PHP.
To write to the application.log file, this would suffice:
echo "Hello world\n";
To write to the error.log, one would have to do:
fwrite($STDERR, "Something went wrong\n");
Please note that when you change the input, output and error descriptors, the built-in PHP constants STDIN, STDOUT and STDERR are rendered unusable. PHP will not update these constants to the new descriptors, and it is not allowed to redefine them (they are called constants for a reason, after all).
Here's a way to divert OUTPUT, which appears to be the original problem:
$ob_file = fopen('test.txt', 'w');

function ob_file_callback($buffer)
{
    global $ob_file;
    fwrite($ob_file, $buffer);
}

ob_start('ob_file_callback');
more info here:
http://my.opera.com/zomg/blog/2007/10/03/how-to-easily-redirect-php-output-to-a-file
None of the answers worked for my particular case, where I needed a cross-platform way of redirecting the output as soon as it was echoed, so that I could follow the logs with tail -f log.txt or another log-viewing app.
I came up with the following solution:
$logFp = fopen('log.txt', 'w');
ob_start(function ($buffer) use ($logFp) {
    fwrite($logFp, $buffer);
}, 1); // notice the use of chunk_size == 1

echo "first output\n";
sleep(10);
echo "second output\n";

ob_end_clean();
I haven't noticed any performance issues but if you do, you can change chunk_size to greater values.
Now just tail -f the log file:
tail -f log.txt
No, output buffering is as good as it gets. Though it's slightly nicer to just do:
ob_start();
echo 'xxx';
$contents = ob_get_clean(); // get the buffer and discard it without sending output
file_put_contents($file,$contents);
Using the eio PECL module is very easy, and you can also capture PHP internal errors, var_dump, echo, etc. In this code you can find examples of different situations.
$fdout = fopen('/tmp/stdout.log', 'wb');
$fderr = fopen('/tmp/stderr.log', 'wb');
eio_dup2($fdout, STDOUT);
eio_dup2($fderr, STDERR);
eio_event_loop();
fclose($fdout);
fclose($fderr);
// output examples
echo "message to stdout\n";
$v2dump = array(10, "graphinux");
var_dump($v2dump);
// php internal error/warning
$div0 = 10/0;
// user errors messages
fwrite(STDERR, "user controlled error\n");
The call to eio_event_loop() makes sure that previous eio requests have been processed. If you need to append to the log, use mode 'ab' instead of 'wb' in the fopen() call.
Installing the eio module is very easy (http://php.net/manual/es/eio.installation.php). I tested this example with version 1.2.6 of the eio module.
You can install the eio extension:
pecl install eio
and duplicate a file descriptor:
$temp = fopen('/tmp/my_stdout', 'a');
$my_data = 'my something';
$foo = eio_dup2($temp, STDOUT, EIO_PRI_MAX, function ($data, $result, $request) {
    var_dump($data, $result, $request);
    var_dump(eio_get_last_error($request));
}, $my_data);
eio_event_loop();
echo "something to stdout\n";
fclose($temp);
This creates a new file descriptor and rewrites the target stream of STDOUT.
This can be done with STDERR as well, and the STDOUT and STDERR constants remain usable.
I understand that this question is ancient, but people trying to do what this question asks will likely end up here... Both of you.
If you are running under a particular environment...
Running under Linux (probably most other Unix-like operating systems, untested)
Running via CLI (untested on web servers)
You can actually close all of your file descriptors (yes, all of them, which means it's probably best to do this at the very beginning of execution), for example just after a pcntl_fork() call used to background the process in a daemon (which seems like the most common need for something like this):
fclose( STDIN );  // fd 0
fclose( STDOUT ); // fd 1
fclose( STDERR ); // fd 2
And then re-open the file descriptors, assigning them to a variable that will not fall out of scope and thus be garbage collected, because Linux will predictably re-assign them in ascending order.
$kept_in_scope_variable_fd0 = fopen( '/dev/null', ... ); // fd 0, the new STDIN
$kept_in_scope_variable_fd1 = fopen( ... );              // fd 1, the new STDOUT
$kept_in_scope_variable_fd2 = fopen( ... );              // fd 2, the new STDERR
You can use whatever files or devices you want for this. I gave /dev/null as the example for STDIN (fd 0) because that's probably the most common case for this kind of code.
Once this is done you should be able to do normal things like echo, print_r, var_dump, etc without specifically needing to write to a file with a function. Which is useful when you're trying to background code that you do not want to, or aren't able to, rewrite to be file-pointer-output-friendly.
YMMV for other environments and things like having other FDs open, etc. My advice is to start with a small test script to prove that it works, or doesn't, in your environment, and then move on to integration from there.
Good luck.
Here is an ugly solution that was useful for a problem I had (I needed to debug).
if (file_get_contents("out.txt") != "in progress")
{
    file_put_contents("out.txt", "in progress");
    $content = file_get_contents('http://'.$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']);
    file_put_contents("out.txt", $content);
}
The main drawback is that you'd better not use $_POST variables with it. But you don't have to put it at the very beginning.