How would I execute a shell script from PHP while giving constant/live feedback to the browser?
I understand from the system function documentation:
The system() call also tries to automatically flush the web server's
output buffer after each line of output if PHP is running as a server
module.
I'm not clear on what they mean by running it as a 'server module'.
Example PHP code:
<?php
system('/var/lib/script_test.sh');
Example shell code:
#!/bin/bash
echo "Start..."
for i in {1..10}
do
echo "$i..."
sleep 1
done
echo "Done."
What this does: it waits about 10 seconds and then flushes all of the output at once.
What I want it to do: flush the output to the browser after each line the script produces.
This can be done with popen(), which gives you a handle to the stdout of whatever process you open. Chunks of data can be sent to the client using ob_flush(), and the data can be displayed using an XHR.
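A minimal sketch of that approach, assuming the script path from the first example and that your SAPI allows flushing:
<?php
// Read the script's stdout line by line (2>&1 merges stderr into it).
$handle = popen('/var/lib/script_test.sh 2>&1', 'r');
if ($handle === false) {
    die('failed to start process');
}
while (($line = fgets($handle)) !== false) {
    echo $line . '<br>';
    // Push each line to the browser as soon as it arrives.
    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
}
pclose($handle);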
One option is to have the shell script write its progress to a file at each step, saying where it's up to. On your web page, make an AJAX call every X seconds/minutes; the AJAX call invokes a PHP script that reads the status file and returns the status or the completed steps (see the sketch below).
The advantage of this approach is that the live information will be available to multiple visitors, rather than just the one who actually initiated the shell script. Obviously that may or may not be desirable depending on your needs.
The disadvantage, of course, is that the longer the AJAX interval, the more out of date the update will be.
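A minimal sketch of such a status endpoint, with the file name and format as assumptions (the shell script would append one line per completed step):
<?php
// status.php: hypothetical endpoint the page polls via AJAX.
$statusFile = '/tmp/script_status.txt';
header('Content-Type: text/plain');
if (is_readable($statusFile)) {
    readfile($statusFile); // return all completed steps so far
} else {
    echo 'not started';
}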
Related
I have a problem displaying the results of a Perl script that I am calling from my PHP webpage. The Perl script constantly monitors a socket; when run from the command line, it displays its output and also saves it to a file. I know the Perl script is being called and running successfully, as the text file is being updated, but I do not get the output on the webpage as I was hoping.
I have tried system(), exec(), and passthru(); they all let the Perl script run, but still no output appears on the webpage, so I am obviously missing something. Am I using the correct functions? Is there a parameter I need to add to one of the above to push the output back to the webpage that calls the Perl script?
One example of what I have tried from the PHP manual pages:
<?php
exec('perl sql.pl', $output, $retval);
echo "Returned with status $retval and output:\n";
print_r($output);
?>
Edited to include output example as text instead of image as requested.
# perl sql.pl
Connecting to the PBX 192.168.99.200 on port 1752
04/07 10:04:50 4788 4788 3256739 T912 200 2004788 A2003827 A
I'm no PHP expert, but I guess that exec() waits for the external program to finish executing before populating the $output and $retval variables and returning.
You say that your sql.pl program "constantly monitors a socket". That sounds like it doesn't actually exit until the user closes it (perhaps with a Ctrl-C or a Ctrl-Z). So, presumably, your PHP code sits there waiting for your Perl program to exit - but it never does.
So I think there are a few approaches I'd investigate.
Does sql.pl have a command-line option that tells it to run once and then quit?
Does PHP have a way to send a Ctrl-C or Ctrl-Z to sql.pl a second or so after you've started it?
Does PHP have a way to deal with external programs that never end? Can you open a pipe to the external process and read output from it a line at a time?
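On that last point, a rough sketch of the pipe idea, assuming sql.pl writes to stdout unbuffered; since the program never exits, you would still need your own stop condition:
<?php
// Open a read pipe to the long-running Perl program (stderr merged in).
$pipe = popen('perl sql.pl 2>&1', 'r');
while (($line = fgets($pipe)) !== false) {
    echo htmlspecialchars($line) . "<br>\n";
    flush(); // send each line to the browser as it arrives
    // break out here on a condition of your own, or the loop never ends
}
pclose($pipe);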
I have a PHP file that takes a couple of parameters from the URL and then runs an exec command, whose results I want to wait for and display. This exec command takes about 20-30 seconds to finish. It never completes, because the webpage just gets an nginx 502 Bad Gateway error (it times out). Instead of extending the nginx timeout, since it's bad practice to have a connection hang that long, how can I run PHP's exec in the background and then have its results returned on the page after it's complete?
Or is there a better way to accomplish this without using PHP?
Have PHP trigger an exec of a script that is forked so it runs in the background (the command ends with &). The page should then return some JS that periodically polls the server via AJAX requests to check the original script's status. The script should write its STDOUT and STDERR to unique file(s) so that the polling script can check the status.
Edit: If you need to know the script's exit code, wrap it in another script:
#!/bin/bash
# wrapper.sh: run the real script in the background and record its PID
# and exit code in files named after the unique id passed in.
uniqId=$1
yourscript $uniqId &
myPid=$!
echo $myPid > $uniqId'.pid'       # lets pollers find (or kill) the process
wait $myPid
echo $? > $uniqId'.returned'      # the exit code signals completion
Call like ./wrapper.sh someIdUniqueToClient &. Untested, but you get the gist.
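A sketch of the PHP side, with the endpoint names and token scheme as assumptions:
<?php
// start.php: launch the wrapper without blocking; the redirect and the
// trailing & are what let exec() return immediately.
$uniqId = uniqid('job_', true); // hypothetical per-request token
exec('./wrapper.sh ' . escapeshellarg($uniqId) . ' > /dev/null 2>&1 &');
echo json_encode(array('id' => $uniqId));

<?php
// status.php: polled via AJAX; the job is done once '.returned' exists.
$id = basename($_GET['id']); // basename() guards against path tricks
if (file_exists($id . '.returned')) {
    $code = (int) file_get_contents($id . '.returned');
    echo json_encode(array('done' => true, 'code' => $code));
} else {
    echo json_encode(array('done' => false));
}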
I have a daemon program that prints to the terminal when a new device is plugged in or removed, and I want that printed by PHP the same way it is printed in Linux, like a realtime output: when a new device is plugged in, it alerts PHP without you clicking any button; it just prints to the screen. Whatever my daemon program prints in Linux, PHP prints too.
I also have another program which scans devices but is not a daemon; I can get its output without a problem and print it in PHP.
How am I supposed to get realtime output from my daemon program in PHP?
Thanks,
The comments were getting long, so I'm adding an answer here.
First off, redirect stderr and stdout to a file with my-daemon >> my_logfile 2>&1, unless your daemon has a log-file option.
Then you could perhaps use inotifywait with the -m flag on modify events (if you want to parse/do something outside PHP, e.g. in bash).
Inotify can give you notifications on various changes. As an example, here are a few lines of a bash script I use to check for new files in a specific directory:
notify()
{
    ...
    inotifywait -m -e moved_to --format "%f" "$path_mon" 2>&- |
    awk ' { print $0; fflush() }' |
    while read buf; do
        printf "NEW:[file]: %s\n" "$buf" >> "$file_noti_log"
        blah blah blah
        ...
    done
}
What this does is: each time a file gets moved to $path_mon, the script enters the while loop and performs the various actions defined by the script.
Haven't used inotify from PHP, but this looks like what you may want:
inotify_init (a separate PECL module for PHP).
Inotify can watch for various events in one or several directories, or you can target a specific file; check man inotifywait or man inotify. You would most likely want the "modify" flag, IN_MODIFY, under PHP: Inotify Constants.
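A minimal sketch with the PECL inotify extension, assuming the daemon appends to the my_logfile set up above (path hypothetical):
<?php
$inotify = inotify_init();
inotify_add_watch($inotify, '/var/log/my_logfile', IN_MODIFY);
$log = fopen('/var/log/my_logfile', 'r');
fseek($log, 0, SEEK_END); // only report lines written from now on
while (true) {
    inotify_read($inotify); // blocks until the file is modified
    while (($line = fgets($log)) !== false) {
        echo $line;
        flush();
    }
    fseek($log, ftell($log)); // clear the EOF flag for the next round
}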
You could also write your own in C. I haven't read this page, but IBM's pages used to be quite good: Monitor file system activity with inotify
Another option could be to use PCNTL or similar under PHP.
it will alert php without you clicking any button
So you're talking about the client side.
The big problem is alerting the client browser.
For short lengths of time you could ignore the problem and just disable all buffering and send the daemon output to the browser. It's neither elegant nor workable in the long run, and it has... aesthetic issues. Moreover, you can't really manipulate the output client side at all, not easily or cleanly at least.
So you need to have a program running on the client, which means Javascript. The JS and the PHP programs must communicate, and PHP must also talk to the daemon, or at least monitor what it's doing.
There are ways of doing the first using Web Sockets, or maybe multipart-x-mixed-replace, but they're not very portable yet.
You could refresh the Web page but that's wasteful, and slow.
The problem of getting the notification to the client browser is then, in my opinion, best solved with an AJAX poll. You don't get an immediate alert, but you do get alerted within seconds.
You would send a query to PHP from AJAX every, say, 10 seconds (10000 ms)
function poll_devices() {
    $.ajax({
        url: '/json/poll-devices.php',
        dataType: 'json',
        error: function(data) {
            // ...
        },
        success: function(packet) {
            setTimeout(function() { poll_devices(); }, 10000);
            // Display packet information
        },
        contentType: 'application/json'
    });
}
and the PHP would check the accumulating log and send the situation.
Another possibility is to have the PHP script block up to 20 seconds, not enough to make AJAX time out and give up, and immediately return in case of changes. You would then employ an asynchronous AJAX function to drive the poll back-to-back.
This way, the asynchronous function starts and immediately goes to sleep while the PHP script is sleeping too. After 20 seconds, the call returns and is immediately re-issued, sleeping again.
The net effect is to keep one connection constantly open, and changes being echoed back to client side Javascript immediately. You have to manage connection interruptions, though. But this way, every 20 seconds you only issue one call, and still manage to be alerted almost instantly.
Server side, PHP can check the log file's size at the start (the last read position being saved in the session), keep the file open read-only in shared mode, and block on fgets() reads, if the daemon allows it.
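A rough sketch of that blocking endpoint, with the log path and timings as assumptions:
<?php
// poll-devices.php: long-poll endpoint that blocks up to ~20 seconds.
session_start();
$logfile = '/var/log/devices.log'; // hypothetical file the daemon appends to
$pos = isset($_SESSION['log_pos']) ? $_SESSION['log_pos'] : filesize($logfile);
$fh = fopen($logfile, 'r');
fseek($fh, $pos);
$deadline = time() + 20;
$lines = array();
while (time() < $deadline) {
    $line = fgets($fh);
    if ($line !== false) {
        $lines[] = rtrim($line);
        continue;               // drain everything already there
    }
    if (!empty($lines)) {
        break;                  // we have news: return immediately
    }
    fseek($fh, ftell($fh));     // clear EOF so fresh writes are seen
    usleep(250000);             // nothing yet: wait a bit and retry
}
$_SESSION['log_pos'] = ftell($fh);
header('Content-Type: application/json');
echo json_encode($lines);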
Or you could pipe the daemon to logger and get its messages into syslog. Configure syslog to send those messages to a specific unbuffered file readable by PHP. Then PHP can do everything with fopen(), ftell() and fgets(), without requiring additional notification systems.
I am coding a PHP-scripted web page that is intended to accept the filename of a JFFS2 image which was previously uploaded to the server. The script is to then re-flash a partition on the server with the image, and output the results. I had been using this:
$tmp = shell_exec("update_flash -v " . $filename . " 4 2>&1");
echo '<h3>' . $tmp . '</h3>';
echo verifyResults($tmp);
(The verifyResults function will return some HTML that indicates to the user whether the update command completed successfully. I.e., in the case that the update completes successfully, display a button to restart the device, etc.)
The problem with this is that the update command takes several minutes to complete, and the PHP script blocks until the shell command is complete before it returns any of the output. This typically means that the update command will continue running, while the user will see an HTTP 504 error (at worst) or wait for the page to load for several minutes.
I was thinking about doing something like this instead:
shell_exec("rm /tmp/output.txt");
shell_exec("update_flash -v " . $filename . " 4 2>&1 >> /tmp/output.txt &");
echo '<div id="output"></div>';
echo '<div id="results"></div>';
This would theoretically put the command in the background and append all output to /tmp/output.txt.
And then, in a Javascript function, I would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt and stick it into the "output" div. Once the command is completely done, another Javascript function would process the output and display a result in the "results" div.
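A minimal sketch of that getOutput.php, assuming the output path above:
<?php
// getOutput.php: dump whatever the background command has written so far.
header('Content-Type: text/plain');
readfile('/tmp/output.txt');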
But the problem I see here is that getOutput.php will eventually become inaccessible during the process of updating the device's flash memory, because it lives on the very partition that is targeted for the update. That could leave me in the same position as before, albeit without the 504 or a seemingly eternally-loading page.
I could move getOutput.php to another partition on the device, but then I think I would still have to do some funky stuff with the webserver configuration to be able to access it there (a symlink to it from the webroot would, like any other file, eventually be overwritten during the re-flash).
Is there any other way of displaying the output of the command as it runs, or should I just make do with the solution I have?
Edit 1: I'm currently testing some solutions. I'll update my question with results later.
Edit 2: It seems that the filesystem does not get overwritten as I had originally thought. Instead, the system seems to mount the existing filesystem in read-only mode, so I can still access getOutput.php even after the filesystem is re-flashed.
The second solution I described in my question does seem to work, in combination with using popen (as mentioned in an answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.
However, it seems that output.txt does not reflect the output from the re-flash command in real time; it displays nothing until the update command returns from execution. I will need to do further testing to see what's going on here.
Edit 3: Never mind, it looks like the file is current as I access it. I was just hitting a delay while the kernel did some JFFS2-related tasks triggered by my use of the partition on which the source JFFS2 image is stored. I don't know why, but this apparently causes all PHP scripts to block until it's done.
To work around that, I'm going to put the update command invocation in a separate script and request it via Ajax-- that way, the user will at least receive some prepackaged feedback while technically still waiting on the system.
Take a look at popen(): http://it.php.net/manual/en/function.popen.php
Interesting scenario.
My first thought was to do something regarding proc_* and $_SESSION, but I'm not sure if that will work or not. Give it a try, but if not...
If you're worried about the file being flashed during the process, you could always instantiate a MySQL database in the secondary process and write to that. The database can exist on another partition, and you can address it by local IP; the system will take care of the routing.
Edit
When I mentioned proc_* with sessions, I meant something similar to this, where $descriptorspec would become:
$_SESSION = array(
    1 => array("pipe", "w"),
);
However, I rather doubt that will work. The process would end up writing to the $_SESSION in memory, which no longer exists once the first script is killed.
Edit 2
ACTUALLY, on that note, you could install memcache and have your secondary process write directly to memory, which can then be re-read by your web-interfaced process.
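A sketch of that idea using the classic pecl/memcache extension; the key name and server location are assumptions:
<?php
// In the secondary (background) process: write progress straight to memory.
$mc = new Memcache;
$mc->connect('127.0.0.1', 11211);
$mc->set('flash_status', 'step 3 of 7 complete', 0, 300); // expires in 5 min

<?php
// In the web-facing script: read it back on each AJAX poll.
$mc = new Memcache;
$mc->connect('127.0.0.1', 11211);
$status = $mc->get('flash_status');
echo $status !== false ? $status : 'no status yet';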
If you wipe the DocRoot, there is no resource/script left that can respond to requests from the user during that time. Therefore you have to send updates to the user in the same request that does the wipe. This requires you to start the shell process and immediately return to PHP, which can be accomplished with pcntl_fork() and pcntl_exec(). Your PHP script should then continuously send the output of the shell script to the client. If the shell script appends to a file in /tmp, you could fpassthru() that file and clear it until the shell script ends.
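A sketch of that approach; note that the pcntl extension is usually only enabled for CLI builds of PHP, so treat this as an illustration of the idea (the command and paths are the ones from the question):
<?php
file_put_contents('/tmp/output.txt', ''); // make sure the log exists and is empty
$pid = pcntl_fork();
if ($pid === 0) {
    // Child: replace this process with the flash command, appending all
    // output to the log file via a shell so the redirection works.
    $cmd = 'update_flash -v ' . escapeshellarg($filename) . ' 4 >> /tmp/output.txt 2>&1';
    pcntl_exec('/bin/sh', array('-c', $cmd));
    exit(1); // only reached if exec fails
}
// Parent: stream the growing log to the client while the child runs.
$fh = fopen('/tmp/output.txt', 'r');
while (pcntl_waitpid($pid, $status, WNOHANG) === 0) {
    while (($line = fgets($fh)) !== false) {
        echo $line;
    }
    flush();
    fseek($fh, ftell($fh)); // clear EOF so fresh output is picked up
    usleep(500000);
}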
Regarding your However:
My guess is you are trying to use the file as a stream. I haven't done any production tests, but I believe that the file will only be written back to disk on fclose().
If you are writing to the file continually in script #2, those writes are actually going directly into memory until the file is closed.
Again - I cannot verify this, but if you want to test it, try re-opening and closing the file for every write. This will confirm or deny my theory and you can modify your approach accordingly.
I'm currently running an Apache server (2.2) on my local machine (Windows) which I'm using to run some PHP scripts to take care of some tedious work. One of the scripts involves a ton of moving, resizing, and download / uploading files to another server. I would very much like the script to run constantly so that I don't have to baby the script by starting it up again every time it times out.
set_time_limit(0);
ignore_user_abort(1);
Both are set in my script, but after about 30 minutes to an hour the script stops and I get a 504 Gateway Time-out message in my browser. Is there something I'm missing in Apache or PHP to prevent the timeout? Or should I be running the script a different way?
Or should I be running the script a different way?
Definitely. You should run your script from the command line (CLI).
If I had to implement something like this, I would use two different scripts:
A. process_controller.php
B. process.php
The workflow should be:
the user calls script A from a browser
script A starts script B using system() or exec(), passing it a "process token" on the command line
script B writes its execution status into a shared space: a file named after the token, a database table, or in general anything that script A can also read using the token as a reference
the page served by script A makes a polling AJAX call that asks script A for the status of the process with a given token
Ajax polling:
<script>
var $myToken;
function ajaxPolling()
{
    $.get('process_controller.php?action=getStatus&token=' + $myToken, function(data) {
        $('.result').html(data);
    });
}
setInterval(ajaxPolling, 60 * 1000); // every minute
</script>
There are some considerations about the communication between the two processes, depending on how many instances of script B you need to be able to run in parallel:
Just one: you don't need a random/unique token
One per user: session_start(); $token = session_id();
More than one per user: session_start(); $token = session_id().microtime();
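A bare-bones sketch of the two scripts, with the file layout and parameter names as assumptions:
<?php
// process_controller.php (script A): starts the worker or reports status.
$action = isset($_GET['action']) ? $_GET['action'] : 'start';
if ($action === 'start') {
    session_start();
    $token = session_id(); // the "one per user" variant from above
    // Background the worker so system() returns immediately.
    system('php process.php ' . escapeshellarg($token) . ' > /dev/null 2>&1 &');
    echo $token;
} elseif ($action === 'getStatus') {
    $token = basename($_GET['token']); // basename() guards against path tricks
    $status = @file_get_contents('/tmp/' . $token . '.status');
    echo $status !== false ? $status : 'unknown token';
}

<?php
// process.php (script B): does the slow work, reporting progress as it goes.
$token = $argv[1];
for ($step = 1; $step <= 10; $step++) {
    file_put_contents('/tmp/' . $token . '.status', "step $step of 10");
    sleep(5); // stand-in for the real long-running work
}
file_put_contents('/tmp/' . $token . '.status', 'done');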
If you need to run it from your browser, you should make sure there is no PHP execution limit set in php.ini, and also that no limit is set in mod_php (or whatever you are using) under Apache.
Use php's system() to call a shell script which starts a service/background task.
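For example (a sketch; the script path is hypothetical), with the output redirected and the task backgrounded so system() returns immediately:
<?php
system('/usr/local/bin/long_task.sh > /dev/null 2>&1 &');
echo 'Task started.';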