My script, let's call it execute.php, needs to start a shell script that lives in the Scripts subfolder. The script has to be executed so that its working directory is Scripts. How do I accomplish this simple task in PHP?
Directory structure looks like this:
execute.php
Scripts/
script.sh
Either you change to that directory within the exec command (exec("cd Scripts && ./script.sh")) or you change the working directory of the PHP process using chdir().
The spawned command's working directory is the same as the PHP process's current working directory.
Simply use chdir() to change the working directory before you exec().
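For example, a minimal sketch, assuming execute.php sits next to the Scripts folder:
<?php
// Run script.sh with Scripts as its working directory,
// then restore the original directory for the rest of the script.
$old = getcwd();
chdir(__DIR__ . '/Scripts');
exec('./script.sh', $output, $exitCode);
chdir($old);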
For greater control over how the child process will be executed, you can use the proc_open() function:
$cmd = 'Scripts/script.sh';
$cwd = 'Scripts';
$spec = array(
    // can something more portable be passed here instead of /dev/null?
    0 => array('file', '/dev/null', 'r'),
    1 => array('file', '/dev/null', 'w'),
    2 => array('file', '/dev/null', 'w'),
);
$ph = proc_open($cmd, $spec, $pipes, $cwd);
if ($ph === FALSE) {
    // open error
}
// If we are not passing /dev/null like above, we should close
// our ends of any pipes to signal that we're done. Otherwise
// the call to proc_close below may block indefinitely.
foreach ($pipes as $pipe) {
    fclose($pipe);
}
// will wait for the process to terminate
$exit_code = proc_close($ph);
if ($exit_code !== 0) {
    // child error
}
If you really need your working directory to be Scripts, try:
exec('cd /path/to/scripts; ./script.sh');
Otherwise,
exec('/path/to/scripts/script.sh');
should suffice.
This is NOT the best way:
exec('cd /path/to/scripts; ./script.sh');
Passing this to the exec function will always attempt to execute ./script.sh, even if the cd command fails, which could lead to the script running in the wrong working directory.
Do this instead:
exec('cd /path/to/scripts && ./script.sh');
&& is the logical AND operator. With this operator the script will only be executed if the cd command succeeds.
This is a trick that relies on the shell's short-circuit evaluation: since this is an AND operation, if the left part does not evaluate to TRUE there is no way the whole expression can evaluate to TRUE, so the shell won't even process the right part of the expression.
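For example, assuming /nonexistent does not exist and script.sh is not in the current directory:
$ cd /nonexistent ; ./script.sh
bash: cd: /nonexistent: No such file or directory
bash: ./script.sh: No such file or directory
$ cd /nonexistent && ./script.sh
bash: cd: /nonexistent: No such file or directory
With ;, the shell still attempts ./script.sh after the failed cd; with &&, it never does.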
Related
Is it possible to store an exec's output in a session variable while it's running, to see its current progress?
example:
index.php
<?php exec("very large command to execute", $array, $_SESSION['output']); ?>
follow.php
<?php echo $_SESSION['output']; ?>
So, when I run index.php, I could close the page, navigate to follow.php, and follow the live output of the command every time I refresh the page.
No, because exec waits for the spawned process to terminate before it returns. But it should be possible with proc_open, because that function provides the spawned process's output as streams and does not wait for it to terminate. So in broad terms you could do this:
Use proc_open to spawn a process and redirect its output to pipes.
Use stream_select inside some kind of loop to see if there is output to read; read it with the appropriate stream functions when there is.
Whenever output is read, call session_start, write it to a session variable and call session_write_close. This is the standard "session lock dance" that allows your script to update session data without holding a lock on them the whole time.
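A rough sketch of those three steps (untested; the command is a placeholder and the session is keyed on 'output'):
<?php
// Step 1: spawn the process with its stdout redirected to a pipe.
$spec = array(1 => array('pipe', 'w'));
$proc = proc_open('very large command to execute', $spec, $pipes);

session_start();
$_SESSION['output'] = '';
session_write_close(); // release the lock so follow.php can read

// Step 2: loop, waiting up to a second at a time for new output.
while (!feof($pipes[1])) {
    $read = array($pipes[1]);
    $write = $except = null;
    if (stream_select($read, $write, $except, 1) > 0) {
        $chunk = fread($pipes[1], 8192);
        if ($chunk !== false && $chunk !== '') {
            // Step 3: the "session lock dance": reopen, append, release.
            session_start();
            $_SESSION['output'] .= $chunk;
            session_write_close();
        }
    }
}
fclose($pipes[1]);
proc_close($proc);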
No, exec will run to completion and only then will store the result in session.
You should run a child process writing directly to a file and then read that file in your browser:
$path = tempnam(sys_get_temp_dir(), 'myscript');
$_SESSION['work_path'] = $path;
// Release session lock
session_write_close();
$process = proc_open(
    'my shell command',
    [
        0 => ['pipe', 'r'],
        1 => ['file', $path, 'w'], // a file descriptor spec needs a mode
        2 => ['pipe', 'w'],
    ],
    $pipes
);
if (!is_resource($process)) {
    throw new Exception('Failed to start');
}
fclose($pipes[0]);
fclose($pipes[2]);
$return_value = proc_close($process);
In your follow.php you can then just output the current output:
echo file_get_contents($_SESSION['work_path']);
No, you can't implement watching this way.
I advise you to use a file to publish the status from index.php, and to read the status from that file in follow.php.
As an alternative to a file, you can use Memcache.
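A hypothetical sketch of the Memcache variant (the server address and the 'job_progress' key are assumptions):
<?php
// index.php: publish the current status as the command progresses.
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);
$mc->set('job_progress', $currentStatus); // $currentStatus: whatever progress text you have

// follow.php: read the latest status on each refresh.
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);
echo $mc->get('job_progress');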
$a=new FileProcessing($path1);
$b=new FileProcessing($path2);
$a->doProcessFile();
$b->doProcessFile();
This code runs doProcessFile() for $a, then runs doProcessFile() for $b, in sequential order.
I want to run doProcessFile() for $a in parallel with doProcessFile() for $b so I can process different files in parallel.
Can I do that in PHP?
As far as I can see, PHP processes a script in sequential order only. Now I wonder if I can run multiple instances of the same script in parallel, where the difference between the running scripts is a parameter I pass to each one. For example: php myscript.php [path1], and then I call the script a second time with a different parameter.
You can use pcntl_fork, but only when running PHP from the command line, not under a web server.
Alternatively, you can use Gearman.
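A minimal pcntl_fork sketch (CLI only; it assumes the FileProcessing class is already loaded and that doProcessFile() is safe to run in a child process):
<?php
$paths = array($path1, $path2);
$pids = array();
foreach ($paths as $path) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    } elseif ($pid === 0) {
        // Child: process one file, then exit so it doesn't continue the loop.
        $p = new FileProcessing($path);
        $p->doProcessFile();
        exit(0);
    }
    $pids[] = $pid; // parent: remember the child's PID
}
// Parent: wait for every child to finish.
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}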
make 2 files
1. processor.php
<?php
if ($argc < 2) user_error('needs an argument');
$a = new FileProcessing($argv[1]);
$a->doProcessFile();
2. caller.php
<?php
function process($path, $name) {
    $a = array();
    $descriptorspec = array(
        1 => array("file", "/tmp/$name-output.txt", "a"), // stdout is a file the child will append to
        2 => array("file", "/tmp/$name-error.txt", "a")   // stderr is a file the child will append to
    );
    return proc_open(PHP_BINARY.' processor.php '.escapeshellarg($path), $descriptorspec, $a);
}
$proc1 = process($path1,'path1');
$proc2 = process($path2,'path2');
// now wait for them to complete
if (proc_close($proc1)) user_error('proc1 returned non 0');
if (proc_close($proc2)) user_error('proc2 returned non 0');
echo 'All done!!';
Be sure to read the documentation of proc_open for more information. Also note that you need to include the class (and anything it needs) in the processor script, because it runs in a new PHP environment.
Never tried that myself, but in theory it should be possible:
Convert your doProcessFile method to a standalone PHP script process_file.php that takes the file to process as input.
Put the files for processing in a folder created for that sole purpose.
Create a shell script parallelizer that lists the folder with the files to process and sends them off for parallel processing (note that it takes the number of parallel processes as an argument; this bit could maybe be done more nicely as part of the one-liner below):
#!/bin/bash
ls special_folder | xargs -P $1 -n 1 php process_file.php
Call parallelizer from PHP and make sure process_file.php returns the processing result status:
$files_to_process = Array($path1, $path2);
exec('parallelizer '.count($files_to_process), $output);
// check output
// access processed files
disclaimer: just a rough ugly sketch of idea. I'm sure it can be improved
PHP's proc_open manual states:
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
What Happens: I call a Perl script in a PHP-based web application and pass parameters in the call. I have no future need to send data to the script. Through stdout [1] I receive from the Perl script json_encoded data that I use in my PHP application.
What I would like to add: The Perl script is progressing through a website collecting information depending on the parameters passed in its initial call. I would like to send back to the PHP application a text string that I could use to display a sort of progress bar.
How I think I should do it: I would expect to poll (every 1-2 seconds) the channel that has been setup for that "progression" update. I would use Javascript / jQuery to write into an html div container for the user to view. I do not think I should mix the "progress" channel with the more critical "json_encode(data)" channel as I would then need to decipher the stdout stream. (Is this thought logical, practical?)
My Main Question: How do you use additional "file descriptors"? I would imagine the setup of additional channels to be straightforward, such as the 3 => ... below:
$tunnels = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
    3 => array('pipe', 'w')
);
$io = array();
$resource = proc_open("perl file/tomy/perl/code.pl $param1 $param2 $param3", $tunnels, $io);
if (!is_resource($resource)) {
    $error = "No Resource";
}
fclose($io[0]);
$perlOutput = stream_get_contents($io[1]);
$output = json_decode($perlOutput);
$errors = stream_get_contents($io[2]);
print "$errors<p>";
fclose($io[1]);
fclose($io[2]);
$result = proc_close($resource);
if ($result != 0) {
    echo "you returned a $result result on proc_close";
}
But in the Perl script I simply write to stdout, like:
my $json_terms = encode_json(\@terms);
print $json_terms;
If my understanding of setting up an additional channel is correct (above, the 3 => ...), then how would I write to it within the Perl script?
Thanks
Say you want to monitor the progress of a hello-world program, where each step is a dot written to the designated file descriptor.
#! /usr/bin/env perl
use warnings;
use strict;
die "Usage: $0 progress-fd\n" unless #ARGV == 1;
my $fd = shift;
open my $progress, ">&=", $fd or die "$0: dup $fd: $!";
# disable buffering on both handles
for ($progress, *STDOUT) {
    select $_;
    $| = 1;
}
my $output = "Hello, world!\n";
while ($output =~ s/^(.)(.*)\z/$2/s) {
    my $next = $1;
    print $next;
    print $progress ".";
    sleep 1;
}
Using bash syntax to open fd 3 on /tmp/progress and connect it to the program is
$ (exec 3>/tmp/progress; ./hello-world 3)
Hello, world!
$ cat /tmp/progress
..............
(It’s more amusing to watch live.)
To also see the dots on your terminal as they emerge, you could open your progress descriptor and effectively dup2 it onto standard error, again using bash syntax and more fun in real time.
$ (exec 17>/dev/null; exec 17>&2; ./hello-world 17)
H.e.l.l.o.,. .w.o.r.l.d.!.
.
You could of course skip the extra step with
$ (exec 17>&2; ./hello-world 17)
to get the same effect.
If your Perl program dies with an error such as
$ ./hello-world 333
./hello-world: dup 333: Bad file descriptor at ./hello-world line 9.
then the write end of your pipe on the PHP side probably has its close-on-exec flag set.
You open a new filehandle and dup it to file descriptor 3:
open STD3, '>&3';
print STDERR "foo\n";
print STD3 "bar\n";
$ perl script.pl 2> file2 3> file3
$ cat file2
foo
$ cat file3
bar
Edit: per Greg Bacon's comment, open STD3, '>&=', 3 or open STD3, '>&=3' opens the file descriptor directly, like C's fdopen call, avoiding a dup call and saving you a file descriptor.
I am uploading a video, which is supposed to generate three screenshot thumbnails. I have the same upload code running in both the admin and front end, but for some odd reason the thumbs are only being generated when I upload from the front end, not from the back end...
My directory structure
root/convert.php (this is the file running through exec call)
(the following two files are the upload files running in user-end and admin-end respectively)
root/upload.php
root/siteadmin/modules/videos/edit.php
I believe convert.php is not being run from admin-side for some reason. The command is something like:
$cmd = $cgi . $config['phppath'] . ' ' . $config['BASE_DIR'] . '/convert.php ' . $vdoname . ' ' . $vid . ' ' . $ff;
echo $cmd; die;
exec($cmd. '>/dev/null &');
And echoing out the exec $cmd, I get this:
/usr/bin/php /home/testsite/public_html/dev/convert.php 1272.mp4 1272 /home/testsite/public_html/dev/video/1272.mp4
How do I make sure convert.php is being run?
EDIT: OK, now I am sure it is not being executed from the admin side. Any ideas why?
http://php.net/manual/en/function.exec.php
"return_var" - If the return_var argument is present along with the output argument, then the return status of the executed command will be written to this variable.
Another way to determine whether exec actually runs the convert.php file is to add some debugging info to convert.php (e.g. write something to a file when the convert.php script starts).
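For example (the marker-file path is just an assumption):
<?php
// At the top of convert.php: leave a trace so you can tell whether it started.
file_put_contents('/tmp/convert-debug.log', date('c') . " convert.php started\n", FILE_APPEND);
And on the calling side, temporarily drop the >/dev/null & so exec can report the exit status:
exec($cmd, $output, $return_var);
if ($return_var !== 0) {
    echo "convert.php exited with status $return_var";
}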
Just an Idea
you could print "TRUE" in the convert script when it runs successfully.
don't add >/dev/null &
check the return value of exec
$value = exec($cmd);
if ($value == 'TRUE') {
    // ran successfully
}
chmod 755 convert.php
Also make sure the first line of convert.php is:
#!/usr/bin/php
Check the full path of the PHP CLI executable.
Also make sure convert.php has Unix line endings ("\n").
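To verify the shebang and the line endings in one go (GNU cat's -A flag makes line endings visible; a DOS-formatted file would show ^M$ instead of $):
$ head -n 1 convert.php | cat -A
#!/usr/bin/php$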
I have a file that is being appended to remotely (file.txt). From SSH, I can call tail -f file.txt, which will display the updated contents of the file. I'd like to be able to do a blocking call to this file that will return the last appended line. A polling loop simply isn't an option. Here's what I'd like:
$cmd = "tail -f file.txt";
$str = exec($cmd);
The problem with this code is that tail will never return. Is there any kind of wrapper function for tail that will kill it once it has returned content? Is there a better way to do this with low overhead?
The only solution I've found is somewhat dirty:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$process = proc_open('tail -f -n 0 /tmp/file.txt', $descriptorspec, $pipes);
fclose($pipes[0]);
stream_set_blocking($pipes[1],1);
$read = fgets($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
// If I try to call proc_close($process); here, it fails / hangs until a second
// line is written to the file. Hence the inelegant kill in the next two lines:
$status = proc_get_status($process);
exec('kill '.$status['pid']);
proc_close($process);
echo $read;
tail -n 1 file.txt will always return you the last line in the file, but I'm almost sure what you want instead is for PHP to know when file.txt has a new line, and display it, all without polling in a loop.
You will need a long-running process anyway if it is to check for new content, be it a polling loop that checks the file's modification time and compares it to the last modification time saved somewhere else, or any other way.
You can even have PHP run via cron to do the check if you don't want it running in a PHP loop (probably best), or via a shell script that does the loop and calls the PHP file if you need runs more frequent than cron's one-minute limit.
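A sketch of such a shell wrapper, assuming check_file.php is the hypothetical PHP checker it calls:
#!/bin/bash
# Re-run the PHP check every 5 seconds, since cron cannot schedule below one minute.
while true; do
    php check_file.php
    sleep 5
done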
Another idea, though I haven't tried it, would be to open the file in a non-blocking stream and then use the quite efficient stream_select on it to have the system poll for changes.