Store php exec in a session variable - php

Is it possible to store an exec call's output in a session variable while it is running, to see its current progress?
example:
index.php
<?php exec("very large command to execute", $array, $_SESSION['output']); ?>
follow.php
<?php echo $_SESSION['output']; ?>
So, when I run index.php I could close the page, navigate to follow.php, and follow the output of the command live every time I refresh the page.

No, because exec waits for the spawned process to terminate before it returns. But it should be possible with proc_open, because that function exposes the spawned process's output as streams and does not wait for it to terminate. So in broad terms you could do this:
Use proc_open to spawn a process and redirect its output to pipes.
Use stream_select inside some kind of loop to see if there is output to read; read it with the appropriate stream functions when there is.
Whenever output is read, call session_start, write it to a session variable and call session_write_close. This is the standard "session lock dance" that lets your script update session data without holding a lock on it the whole time. A rough sketch of this follows.
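A minimal sketch of that approach, reusing the command from the question and omitting error handling (the chunk size and one-second select timeout are arbitrary choices):
<?php
// index.php - spawn the command and stream its output into $_SESSION['output']
session_start();
$_SESSION['output'] = '';
session_write_close(); // don't hold the session lock during the long run

$descriptors = [
    0 => ['pipe', 'r'], // child stdin
    1 => ['pipe', 'w'], // child stdout
    2 => ['pipe', 'w'], // child stderr
];
$proc = proc_open('very large command to execute', $descriptors, $pipes);

fclose($pipes[0]);
stream_set_blocking($pipes[1], false);

while (!feof($pipes[1])) {
    $read = [$pipes[1]];
    $write = $except = null;
    if (stream_select($read, $write, $except, 1) > 0) {
        $chunk = fread($pipes[1], 8192);
        if ($chunk !== false && $chunk !== '') {
            // the "session lock dance": reopen, append, close again
            session_start();
            $_SESSION['output'] .= $chunk;
            session_write_close();
        }
    }
}

fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);

// follow.php then only needs: session_start(); echo $_SESSION['output'];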

No, exec will run to completion and only then will store the result in session.
You should run a child process writing directly to a file and then read that file in your browser:
$path = tempnam(sys_get_temp_dir(), 'myscript');
$_SESSION['work_path'] = $path;

// Release session lock
session_write_close();

$process = proc_open(
    'my shell command',
    [
        0 => ['pipe', 'r'],
        1 => ['file', $path, 'w'], // redirect stdout to the temp file
        2 => ['pipe', 'w'],
    ],
    $pipes
);

if (!is_resource($process)) {
    throw new Exception('Failed to start');
}

fclose($pipes[0]);
fclose($pipes[2]);

$return_value = proc_close($process);
In your follow.php you can then just output the current output:
echo file_get_contents($_SESSION['work_path']);

No, you can't implement watching in this way.
I advise you to use a file: write the status to it from index.php and read the status from it in follow.php.
As an alternative to a file you can use Memcache.
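For example, with the Memcached extension (the key name and server address below are only illustrative):
<?php
// index.php side - publish the current status under a per-session key
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('job_status_' . session_id(), 'started');
// ... update the key as the command progresses ...

// follow.php side - read whatever the current status is
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
echo $mc->get('job_status_' . session_id());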

Related

Is it necessary to fclose pipes and proc_close processes

I am using a script to open (proc_open) from 5 to 50 processes one after another. Each of them does a cURL request and posts the results to a DB. I do not want to wait for their execution results, I just want them to run while the main script is executing and after it. I use set_time_limit(10); in each process file.
proc_close waits for the process to terminate, and returns its exit code.
On the web I found that PHP automatically closes all pipes and each process once the main file has finished executing, without proc_close and fclose being called, but I did not find any documented proof.
So the question is: Is it necessary to fclose pipes and proc_close processes?
And can it be a problem if 100-200 users run this script at the same time, so that it opens from 5 to 50 processes for each user, and proc_close and fclose are not called?
If you have a more elegant way of doing this task please tell me, but first of all I need the information regarding my current approach. Thanks a lot.
The code I use to call each process is (part of a function):
$params = addcslashes(serialize($params), '"');
$command = $this->phpPath.' -q '.$filename.' --params "'.$params.'"';
++$this->lastId;
$this->handles[$this->lastId] = proc_open($command, $this->descriptorSpec, $pipes);
$this->streams[$this->lastId] = $pipes[1];
$this->pipes[$this->lastId] = $pipes;
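For reference, explicitly cleaning up with this structure would look roughly like the following (just a sketch against the arrays above, not tested):
// when the processes are no longer needed (or at the end of the script):
foreach ($this->handles as $id => $handle) {
    foreach ($this->pipes[$id] as $pipe) {
        if (is_resource($pipe)) {
            fclose($pipe); // close any pipes kept open for this process
        }
    }
    // proc_close() blocks until the child exits and returns its exit code
    $exitCode = proc_close($handle);
}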

PHP run parallel class instances

$a=new FileProcessing($path1);
$b=new FileProcessing($path2);
$a->doProcessFile();
$b->doProcessFile();
This code runs doProcessFile() for $a and then runs doProcessFile() for $b, in sequential order.
I want to run doProcessFile() for $a in parallel with doProcessFile() for $b so I can process different files in parallel.
Can I do that in PHP?
As far as I can see, PHP processes a script in sequential order only. Now I wonder if I can run multiple instances of the same script in parallel, the difference between the running scripts being a parameter I pass when I call each one. For example: php myscript.php [path1], then I call the script a second time with a different parameter.
You can use pcntl_fork, but only when running PHP from the command line, not as a web server (a rough sketch follows below).
Alternatively, you can use gearman.
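A minimal pcntl_fork sketch of that idea (CLI only, requires the pcntl extension; it assumes the FileProcessing class from the question is already loaded):
<?php
$paths = [$path1, $path2];
$children = [];

foreach ($paths as $path) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('could not fork');
    } elseif ($pid === 0) {
        // child: process one file, then exit
        $p = new FileProcessing($path);
        $p->doProcessFile();
        exit(0);
    }
    // parent: remember the child and start the next one
    $children[] = $pid;
}

// parent: wait for all children so none are left as zombies
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}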
Make 2 files:
1. processor.php
<?php
// include the FileProcessing class (and anything it needs) here,
// since this script runs in a fresh PHP environment
if ($argc < 2) user_error('needs an argument');
$a = new FileProcessing($argv[1]);
$a->doProcessFile();
2. caller.php
<?php
function process($path, $name) {
    $pipes = array();
    $descriptorspec = array(
        1 => array("file", "/tmp/$name-output.txt", "a"), // stdout is appended to a file the child writes to
        2 => array("file", "/tmp/$name-error.txt", "a")   // stderr is appended to a file
    );
    return proc_open(PHP_BINARY.' processor.php '.escapeshellarg($path), $descriptorspec, $pipes);
}
$proc1 = process($path1, 'path1');
$proc2 = process($path2, 'path2');
// now wait for them to complete
if (proc_close($proc1)) user_error('proc1 returned non 0');
if (proc_close($proc2)) user_error('proc2 returned non 0');
echo 'All done!!';
Be sure to read the documentation of proc_open for more information; also note that you need to include the class (and anything it needs) in the processor, because it's a new PHP environment.
Never tried that myself, but in theory it should be possible:
Convert your doProcessFile method to a standalone PHP script process_file.php that takes the file to process as input.
Put the files for processing in a folder created for that sole purpose.
Create a shell script parallelizer that will list the folder with files to process and send the files for parallel processing (note that it accepts the number of files to process as an argument - this bit could maybe be done more nicely as part of the one-liner below):
#!/bin/bash
ls special_folder | xargs -P $1 -n 1 php process_file.php
Call parallelizer from PHP and make sure process_file returns a processing result status:
$files_to_process = Array($path1, $path2);
exec('parallelizer '.count($files_to_process), $output);
// check output
// access processed files
Disclaimer: just a rough, ugly sketch of the idea. I'm sure it can be improved.

How to handle an infinite loop process when using proc_open

I use proc_open to execute a program written in C.
I was using a file for "stdout".
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("file", "/tmp/example.output", "w"),
    2 => array("file", "/tmp/example.error", "a")
);
Everything was fine when I executed well-behaved programs, but a problem occurred when I executed an infinite-loop program like the code below:
#include <stdio.h>

int main() {
    while (1) {
        printf("Example");
    }
    return 0;
}
The file example.output will make my hard disk full, so I need to delete the file and restart my computer. My question is: how do I handle something like this?
Thanks :)
The only thing you can do is slaughter the offending process without prejudice using proc_terminate (but you can be polite and allow it to run for a while first, in effect imposing a time limit for it to complete).
For example:
$proc = proc_open(...);
sleep(20); // give the process some time to run
$status = proc_get_status($proc); // see what it's doing
if ($status['running']) {
    proc_terminate($proc); // kill it forcefully
}
Don't forget to clean up any handles you still have in your hands afterwards.
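For example, with the descriptorspec from the question (where only stdin is a pipe and stdout/stderr go to files), the cleanup could look something like this:
// after proc_terminate(): close what is still open, then reap the process
if (isset($pipes[0]) && is_resource($pipes[0])) {
    fclose($pipes[0]);
}
$exitCode = proc_close($proc); // frees the handle and returns the exit status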

PHP Pipe into a background process

I'm trying to use popen to run a php script in the background. However, I need to pass a (fairly large) serialized object.
$cmd = "php background_test.php >log/output.log &";
$fh = popen($cmd, 'w');
fwrite($fh, $data);
fclose($fh);
//pclose($fh);
Without the ampersand this code executes fine but the parent script will wait until the child is finished running. With the ampersand STDIN gets no data.
Any ideas?
You can try forking, letting the child process write the data while the main script continues as normal.
Something like this:
// Fork a child process
$pid = pcntl_fork();

// Unable to fork
if ($pid == -1) {
    die('error');
}
// We are the parent
elseif ($pid) {
    // do nothing
}
// We are the child
else {
    $cmd = "php background_test.php >log/output.log";
    $fh = popen($cmd, 'w');
    fwrite($fh, $data);
    pclose($fh); // close the process pointer opened by popen
    exit();
}

// parent will continue here
// child will exit above
Read more about it here: https://sites.google.com/a/van-steenbeek.net/archive/php_pcntl_fork
Also check the function pcntl_waitpid() (zombies be gone) in the PHP documentation.
As far as I know there is no way in PHP to send a process to the background and keep feeding its STDIN (but maybe I'm wrong). You have two other choices here:
Refactor your background_test.php to take its input from the command line and turn your command line into php background_test.php arg1 arg2 ... >log/output.log & (a sketch for this option follows the example for point 2 below)
If your input is pretty long, write it to a temporary file and then feed the background_test.php script with that file, as in the following code
Example for point 2:
<?php
$tmp_file = tempnam(sys_get_temp_dir(), 'bg');
file_put_contents($tmp_file, $data);
$cmd = "php background_test.php < $tmp_file > log/output.log &";
exec($cmd);
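For completeness, a sketch of point 1 (note that OS limits on argument length make this unsuitable for very large serialized objects):
<?php
$arg = escapeshellarg(serialize($data));
$cmd = "php background_test.php $arg > log/output.log 2>&1 &";
exec($cmd);
// background_test.php would then start with: $data = unserialize($argv[1]);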
Make a background process listen on a socket file. Then open the socket file from PHP and send your serialized data there. When your background daemon receives a connection through the socket, make it fork, read the data, then process it.
You would need to do some reading, but I think that's the best way to achieve this. By socket I mean a Unix socket file, but you can also use this over the network.
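A rough sketch of the Unix-socket idea (the socket path is made up, zombie reaping is omitted, and the fork requires the pcntl extension):
<?php
// daemon.php - listen on a socket file and fork a worker per connection
$server = stream_socket_server('unix:///tmp/background_test.sock', $errno, $errstr);
if ($server === false) {
    die("$errstr ($errno)");
}
while ($conn = stream_socket_accept($server, -1)) { // -1: wait indefinitely
    if (pcntl_fork() === 0) {
        // child: read the serialized payload and process it
        $data = unserialize(stream_get_contents($conn));
        fclose($conn);
        // ... do the actual background work with $data ...
        exit(0);
    }
    fclose($conn); // parent: drop its copy of the connection and keep listening
}
The web request then just connects, writes the object and returns right away:
$sock = stream_socket_client('unix:///tmp/background_test.sock', $errno, $errstr);
fwrite($sock, serialize($data));
fclose($sock);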
http://gearman.org/ is also a good alternative, as mentioned by @Joshua

PHP - Blocking File Read

I have a file that is getting appended to remotely (file.txt). From SSH, I can call tail -f file.txt which will display the updated contents of the file. I'd like to be able to do a blocking call to this file that will return the last appended line. A polling loop simply isn't an option. Here's what I'd like:
$cmd = "tail -f file.txt";
$str = exec($cmd);
The problem with this code is that tail will never return. Is there any kind of wrapper function for tail that will kill it once it has returned content? Is there a better way to do this in a low-overhead way?
The only solution I've found is somewhat dirty:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$process = proc_open('tail -f -n 0 /tmp/file.txt', $descriptorspec, $pipes);
fclose($pipes[0]);
stream_set_blocking($pipes[1], 1);
$read = fgets($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
// If I try to call proc_close($process); here, it fails / hangs until a second line is
// passed to the file. Hence an inelegant kill in the next 2 lines:
$status = proc_get_status($process);
exec('kill '.$status['pid']);
proc_close($process);
echo $read;
tail -n 1 file.txt will always return you the last line in the file, but I'm almost sure what you want instead is for PHP to know when file.txt has a new line, and display it, all without polling in a loop.
You will need a long running process anyway if it will check for new content, be it with a polling loop that checks for file modification time and compares to the last modification time saved somewhere else, or any other way.
You can even have PHP run via cron to do the check if you don't want it running in a PHP loop (probably best), or via a shell script that does the loop and calls the PHP file if you need runs more frequent than cron's one-minute limit.
Another idea, though I haven't tried it, would be to open the file in a non-blocking stream and then use the quite efficient stream_select on it to have the system poll for changes.
