get output from shell_exec command as command runs - php

I am coding a PHP-scripted web page that is intended to accept the filename of a JFFS2 image which was previously uploaded to the server. The script is to then re-flash a partition on the server with the image, and output the results. I had been using this:
$tmp = shell_exec("update_flash -v " . $filename . " 4 2>&1");
echo '<h3>' . $tmp . '</h3>';
echo verifyResults($tmp);
(The verifyResults function will return some HTML that indicates to the user whether the update command completed successfully. I.e., in the case that the update completes successfully, display a button to restart the device, etc.)
The problem with this is that the update command takes several minutes to complete, and the PHP script blocks until the shell command is complete before it returns any of the output. This typically means that the update command will continue running, while the user will see an HTTP 504 error (at worst) or wait for the page to load for several minutes.
I was thinking about doing something like this instead:
shell_exec("rm /tmp/output.txt");
shell_exec("update_flash -v " . $filename . " 4 2>&1 >> /tmp/output.txt &");
echo '<div id="output"></div>';
echo '<div id="results"></div>';
This would theoretically put the command in the background and append all output to /tmp/output.txt.
And then, in a Javascript function, I would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt and stick it into the "output" div. Once the command is completely done, another Javascript function would process the output and display a result in the "results" div.
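(For reference, getOutput.php itself would be trivial; a minimal sketch, using the same path as above:)

<?php
// getOutput.php - return whatever the background command has written so far
header('Content-Type: text/plain');
if (is_readable('/tmp/output.txt')) {
    readfile('/tmp/output.txt');   // the Ajax poller drops this into the "output" div
}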
But the problem I see here is that getOutput.php will eventually become inaccessible during the process of updating the device's flash memory, because it's on the partition that is targeted for the update. So that could leave me in the same position as before, albeit without the 504 or a seemingly eternally-loading page.
I could move getOutput.php to another partition in the device, but then I think I would still have to do some funky stuff with the webserver configuration to be able to access it there (a symlink to it from the webroot would, like any other file, eventually be overwritten during the re-flash).
Is there any other way of displaying the output of the command as it runs, or should I just make do with the solution I have?
Edit 1: I'm currently testing some solutions. I'll update my question with results later.
Edit 2: It seems that the filesystem does not get overwritten as I had originally thought. Instead, the system seems to mount the existing filesystem in read-only mode, so I can still access getOutput.php even after the filesystem is re-flashed.
The second solution I described in my question does seem to work, in combination with using popen (as mentioned in an answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.
However, it seems that output.txt does not reflect the output from the re-flash command in real time-- it seems to display nothing until the update command returns from execution. I will need to do further testing to see what's going on here.
Edit 3: Never mind, it looks like the file is current as I access it. I was just hitting a delay while the kernel did some JFFS2-related tasks triggered by my use of the partition on which the source JFFS2 image is stored. I don't know why, but this apparently causes all PHP scripts to block until it's done.
To work around that, I'm going to put the update command invocation in a separate script and request it via Ajax-- that way, the user will at least receive some prepackaged feedback while technically still waiting on the system.

Look at popen: http://it.php.net/manual/en/function.popen.php
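For example, a minimal sketch of streaming the command's output with popen (the command is the one from the question; the escaping and buffering details are assumptions):

$handle = popen("update_flash -v " . escapeshellarg($filename) . " 4 2>&1", "r");
while (($line = fgets($handle)) !== false) {
    echo htmlspecialchars($line) . "<br>\n";
    flush();                 // try to push each line to the browser (output buffering or gzip may still delay it)
}
$status = pclose($handle);   // exit status of the command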

Interesting scenario.
My first thought was to do something regarding proc_* and $_SESSION, but I'm not sure if that will work or not. Give it a try, but if not...
If you're worried about the file being flashed during the process, you could always have the secondary process write to a MySQL database instead. The database can exist on another partition, and you can address it by local IP; the system will take care of the routing.
Edit
When I mentioned proc_* with sessions, I meant something similar to this where $descriptorspec would become:
$_SESSION = array(
    1 => array("pipe", "w"),
);
However, I kind of doubt that will work. The process will end up writing to the $_SESSION in memory, which no longer exists once the first script is killed.
Edit 2
ACTUALLY, on that note, you could install memcache and have your secondary process write directly to memory, which can then be re-read by your web-interfaced process.
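A rough sketch of that idea, assuming the Memcached PECL extension and a memcached server on localhost (the key name and progress lines are made up):

// secondary (background) process: append progress lines to a key in memcached
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->setOption(Memcached::OPT_COMPRESSION, false);          // append() requires compression off
$m->set('flash_progress', '');
$progressLines = array("erasing...\n", "writing...\n");    // stand-in for real command output
foreach ($progressLines as $line) {
    $m->append('flash_progress', $line);
}

// web-facing script polled via Ajax: read the progress back out
$m2 = new Memcached();
$m2->addServer('127.0.0.1', 11211);
echo $m2->get('flash_progress');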

If you wipe the DocRoot there is no resource/script that can respond to requests from the user during this time. Therefore you have to send updates to the user in the same request that does the wipe. This requires you to start the shell process and immediately return to PHP. This can be accomplished with pcntl_fork() and pcntl_exec(). Your PHP script should now continuously send the output of the shell script to the client. If the shell script appends to a file in /tmp, you could fpassthru() that file and clear it until the shell script ends.
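A rough sketch of that flow (pcntl_* is normally only available in CLI builds of PHP, so treat this purely as an illustration; the image name and log path are made up, and it polls with file_get_contents() offsets rather than fpassthru() to keep the loop simple):

$logFile = '/tmp/output.txt';
@unlink($logFile);
touch($logFile);

$pid = pcntl_fork();
if ($pid === 0) {
    // child: replace this process with the shell command, appending its output to the log
    pcntl_exec('/bin/sh', array('-c', 'update_flash -v image.jffs2 4 >> ' . $logFile . ' 2>&1'));
    exit(1);   // only reached if pcntl_exec() fails
}

// parent: keep sending whatever the child has written so far
$offset = 0;
while (pcntl_waitpid($pid, $status, WNOHANG) === 0) {
    $chunk = file_get_contents($logFile, false, null, $offset);
    if ($chunk !== false && $chunk !== '') {
        echo $chunk;
        flush();
        $offset += strlen($chunk);
    }
    sleep(1);
}
echo file_get_contents($logFile, false, null, $offset);   // anything written after the last poll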

Regarding your However:
My guess is you are trying to use the file as a stream. I haven't done any production tests, but I believe that the file will only be written back to disk on fclose().
If you are writing to the file continually in script #2, those writes are actually going directly into memory until the file is closed.
Again - I cannot verify this, but if you want to test it, try re-opening and closing the file for every write. This will confirm or deny my theory and you can modify your approach accordingly.
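If you want to test that theory, a sketch of the per-write open/close approach for script #2 (fflush() is the other obvious thing to try):

function append_output($line)
{
    $fp = fopen('/tmp/output.txt', 'a');   // reopen in append mode for every write
    fwrite($fp, $line);
    fflush($fp);                           // flush PHP's buffer explicitly as well
    fclose($fp);                           // closing should push the data out to the file
}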

Related

Make output of cli based PHP script viewable from web without piping to a file?

I have a command line PHP script that runs constantly (infinite loop) on my server in a 'screen' session. The PHP script outputs various lines of data using echo.
What I would like to do is create a PHP web script to interface the command line script so that I can view the echo output without having to SSH into the server.
I had considered writing/piping all of the echo statements to a text file, and then having the web script read the text file. The problem here is that the text file will grow to several megabytes in the space of only a few minutes.
Does anyone know of a more elegant solution?
I think expect_popen will work for you, if you have it available.
Another option is to use named pipes - no disk usage, and the reading end has output available as it comes.
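A minimal named-pipe sketch, assuming the POSIX extension and a path of your choosing (note that each end of a FIFO blocks until the other end is opened):

// CLI script (writer side)
$fifo = '/tmp/cli-output.fifo';
if (!file_exists($fifo)) {
    posix_mkfifo($fifo, 0644);       // create the FIFO once
}
$out = fopen($fifo, 'w');            // blocks until a reader opens the other end
fwrite($out, date('H:i:s') . " some status line\n");
fclose($out);

// web script (reader side)
$in = fopen('/tmp/cli-output.fifo', 'r');    // blocks until the writer opens its end
while (($line = fgets($in)) !== false) {
    echo htmlspecialchars($line) . "<br>\n";
    flush();
}
fclose($in);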
The CLI script can write to a file like so:
file_put_contents( '/var/log/cli-log-'.date('YmdHi').'.log', $data );
Thereby a new log file being created every minute to keep the file size down. You can then clean up the directory at that point, deleting previous log files or moving them or whatever you want to do.
Then the web script can read from the current log file like so:
$log = file_get_contents( '/var/log/cli-log-'.date('YmdHi').'.log' );
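The cleanup step could be as simple as this sketch (same naming scheme as above):

// remove every log file except the one for the current minute
$current = '/var/log/cli-log-' . date('YmdHi') . '.log';
foreach (glob('/var/log/cli-log-*.log') as $old) {
    if ($old !== $current) {
        unlink($old);
    }
}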
As Elias Van Ootegem suggested, I would definitely recommend a cron instead of a constantly running script.
If you want to view the data from a web script you can do a few things. One is to write the data to a log file or a database so you can pull it out later. I would also consider limiting what you output, if there is that much data (if that is a possibility).
I have a lot of crons email me data, not sure if that would work for you but I figured I would mention it.
The most elegant suggestion I can think of is to run the commands using exec in a web script, which will output directly to the browser if you use flush: http://php.net/manual/en/function.flush.php
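As a sketch of that last idea (using passthru() rather than exec(), since exec() buffers all output into an array while passthru() streams it raw; the command name is a placeholder):

ob_implicit_flush(true);         // flush after every piece of output
while (ob_get_level() > 0) {
    ob_end_flush();              // remove any output-buffering layers in the way
}
passthru('php /path/to/cli-script.php 2>&1');   // streams the command's output as it is produced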

How to catch the result of a background PHP script launched from inside PHP?

I've got some PHP code that I want to run as a background process. That code checks a database to see if it should do anything, and either does it or sleeps for awhile before checking again. When it does something, it prints some stuff to stdout, so, when I run the code from the command line, I typically redirect the output of the PHP process to a file in the obvious way: php code.php > code.log &.
The code itself works fine when it's run from the shell; I'm now trying to get it to run when launched from a web process -- I have a page that determines if the PHP process is running, and lets me start or stop it, depending. I can get the process started through something like:
$the_command = "/bin/php code.php > /tmp/code.out &";
$the_result = exec($the_command, $output, $retval);
but (and here's the problem!) the output file-- /tmp/code.out -- isn't getting created. I've tried all the variants of exec, shell_exec, and system, and none of them will create the file. (For now, I'm putting the file into /tmp to avoid ownership/permission problems, btw.) Am I missing something? Will redirection just not work in this case?
Seems like permission issues. One way to resolve this would be to:
rename your echo($data) statements to a function like fecho($data)
create a function fecho() like so
function fecho($data)
{
    $fp = fopen('/tmp/code.out', 'a+');   // open (or create) the log in append mode
    fwrite($fp, $data);
    fclose($fp);                          // close right away so each write lands in the file
}
Blurgh. After a day's hacking, this issue is finally resolved:
The scheme I originally proposed (exec of a statement with redirection) works fine...
...EXCEPT it refuses to work in /tmp. I created another directory outside of the server's webspace, opened it up to Apache, and everything works.
Why this is, I have no idea. But a few notes for future visitors:
I'm running a quite vanilla Fedora 17, Apache 2.2.23, and PHP 5.4.13.
There's nothing unusual about my /tmp configuration, as far as I know (translation: I've never modified whatever got set up with the basic OS installation).
My /tmp has a large number of directories of the form /tmp/systemd-private-Pf0qG9/, where the latter part is a different set of random characters. I found a few obsolete versions of my log files in a couple of those directories. This appears to be systemd's PrivateTmp feature: on Fedora, the Apache service gets its own private /tmp namespace, so files that Apache-spawned processes write to /tmp end up inside one of these per-service directories rather than in the real /tmp.
exec(), shell_exec(), system(), and passthru() all seemed to work, once I got over the hump.
Bottom line: What should have worked does in fact work, as long as you do it in the right place. I will now excuse myself to take care of a large bottle of wine that has my name on it, and think about how my day might otherwise have been spent...

calling Windows program from PHP file (through command-prompt)

I have tried calling a windows program several ways and I have gotten the same result each time.
The program opens up on my machine (without a GUI) but never closes, which means that the browser loads forever. Yet when I execute the same command manually from the command prompt, the program closes. Not only that, but when launched from PHP the program doesn't actually do its work (it is just launched, i.e. there aren't any results).
I just want to know the proper way of starting a program with switches through PHP.
Here is the command that works (closes the program after executing):
"C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe" /S
"C:\Users\Farzad\Desktop\upload\test.ppt" /T "C:\Users\Farzad\Desktop\upload\test.png" /C 18
If the program never closes, then PHP can't return a value from exec(). The program must close. Chances are there is a problem accessing your files on your desktop in this manner. It will be executed with whatever permissions the webserver has defined.
http://php.net/manual/en/function.exec.php
You might consider the advanced functionality of proc_open(). It will give you access to all the necessary pipes, but I don't think that will help you in this situation.
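For completeness, a proc_open() sketch that captures both stdout and stderr of the converter (paths taken from the question):

$descriptorspec = array(
    1 => array("pipe", "w"),   // child's stdout
    2 => array("pipe", "w"),   // child's stderr
);
$cmd = '"C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe"'
     . ' /S "C:\Users\Farzad\Desktop\upload\test.ppt"'
     . ' /T "C:\Users\Farzad\Desktop\upload\test.png" /C 18';
$process = proc_open($cmd, $descriptorspec, $pipes);
if (is_resource($process)) {
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($process);
}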
If the target path on your Windows machine is C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe, you need to double-quote the directory names that have space characters in them.
To translate it into php terms, it should be like this:
$directory = 'C:\"Program Files (x86)"\"Softinterface, Inc"\"Convert PowerPoint"\ConvertPPT.exe';
$command = $directory . ' enter your arguments here';
exec($command, $output, $return_var);
// if $return_var == 0, you hit the jackpot.
The physical directory where your Windows desktop is stored belongs to your user profile folder. That means that other users (including the one Apache runs as, which is typically "Local System") won't have the appropriate permissions to read and write files in it. While you can adjust your Apache set-up to make it run with your own user, Farzad, it's more common to put web applications in an entirely different directory tree. It may happen that ConvertPPT.exe just stalls because it's trying to write a file at a location where it's not allowed. I suggest you create a separate top-level directory and make sure it's world-writable (once finished, you can tighten those permissions if you like).
Once you discard (or confirm) that the issue is caused by lack of appropriate credentials, make sure you are escaping your command and arguments properly. See this link:
http://es2.php.net/manual/en/function.exec.php#101579
One more thing you can try is to close PHP sessions before issuing the call to exec():
http://es2.php.net/manual/en/function.exec.php#99781
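That last suggestion amounts to something like this sketch (the command is the one from the question):

session_write_close();   // release the session lock so other requests aren't blocked while the converter runs
exec('"C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe"'
   . ' /S "C:\Users\Farzad\Desktop\upload\test.ppt"'
   . ' /T "C:\Users\Farzad\Desktop\upload\test.png" /C 18', $output, $return_var);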
You have probably run into this bug: http://bugs.php.net/bug.php?id=44994
which has been bothering me for ages, even today, on PHP 5.3.5.
It seems like there is some kind of deadlock between the error output of the program and the Apache error log file handle into which the program's stderr is redirected, leaving the program stuck forever until the Apache processes are killed.
Also, when using passthru, or system, or the backtick operator, there's an intermediary "cmd.exe" process that is used to run the program in an invisible console, and I have seen this cmd process getting stuck without even running the program.
I don't really have a solution as of now, and it seems the bug, even though reproduced by many people, hasn't been resolved.

How can I troubleshoot why my PHP script won't work in cron when it does from the command line?

I've got a script that calls two functions, A and B, from the same class. A creates an Amazon virtual server and B destroys one, both via shell_exec() calls to Amazon's command-line tools. The script, doActions.php, pulls actions from a queue. If the action is "create" it creates an instance; when the action is "destroy" it kills one.
The script works fine to execute both A and B when I execute it from the command line: php script.php.
When I put it on a cron, it runs but only successfully runs the B function. It destroys instances but won't create them.
The point of failure is clearly function A. It chokes at the first and most important shell_exec, returning and echoing nothing.
echo $string = shell_exec('/home/user/public_html/domain.com/private/ec2-api-tools/bin/ec2-run-instances ami-23b6534a -k gsg-keypair -z us-east-1a');
Unless you know something specific about the way Amazon's command line tools work, please suggest to me reasons why a shell_exec might work in one case and not the other.
Another shell_exec in the same place behaves as expected:
echo $string = shell_exec ('echo overflow');
My guess is that it has to do with permissions. But when I have it run shell_exec('whoami') it returns "root", and when I su and run the command it works fine. I'm having a hard time thinking of creative ways to troubleshoot why my PHP script won't work in cron when it does from the command line. Can you suggest some?
When something runs from the command line but refuses to do so within cron, it's often an environment issue (path or some other environment variable that's needed by the code you're running).
For a start you should modify the script to output the current environment (shell_exec('env')?) at the very top and examine the output from the command line and cron.
Hopefully, there will be something obvious such as AMAZON_EC2_VITAL_VAR but, if not, you should move the cron environment towards your command line one, one variable at a time, until it starts working.
A quick test to ascertain this. From your command line, do:
env >/tmp/pax_env.sh
Then run your PHP script from a shell script which first executes:
. /tmp/pax_env.sh
so that the environments are identical.
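You can also capture the cron environment the same way and diff the two dumps (paths are just examples):

# temporary crontab entry to capture cron's environment once a minute:
* * * * * env > /tmp/pax_cron_env.sh
# then, from the command line:
diff /tmp/pax_env.sh /tmp/pax_cron_env.sh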
And keep in mind that su on its own doesn't give you the same environment as you'd get from logging in directly as a specific user (su - does, I think). You may want to check the behaviour for when you log in as root directly.
Re your comment:
Yes, I do believe you've got it. I'm likely going to mark your answer as correct but need you to suffer through a few addendums about your clever solution. First of all, what's the best way to execute the pax_env.sh script? Does shell_exec() work?
Never let it be said I didn't work for my money :-) No. The shell_exec will almost certainly be running a sub-shell so the variables would be set in that sub-shell but would not affect the PHP parent process.
My advice, if you wanted all those variables set, would be to create a shell-script consisting of all the commands in /tmp/pax_env.sh (probably prefixing each with export) followed by the command you currently have running in cron, something along the lines of:
export PATH=.:/usr/bin
export PS1=Urk:
export PS2=MoreUrk:
/home/user/pax/scriptB.php
Then run that script from cron rather than /home/user/pax/scriptB.php directly. That will ensure the environment is set up before your PHP code is called.
Astute readers will have noticed the phrase "if you wanted all those variables set" above. I don't personally think it's a good idea to dump all your command line variables into the shell script for the cron job. I'd prefer to actually find out which ones are needed and only include those. That lessens the pollution your cron job has to run under. For example, it's unlikely that the PS1/PS2 prompt variables will be required for your PHP script.
If it works, you can set all the environment variables - I just prefer the absolute minimum so I don't have to worry too much when things change.
A way of finding out what's needed is to comment out one export at a time until your script breaks again. Then you know that variable is needed. Once it works with the maximum amount of export statements commented out, you can just delete those commented export statements altogether and what remains, however improbable, must be okay (with apologies to Sir Arthur Conan Doyle).

PHP loop acting as cronjob [ensure only one instance running]

I have a multi-part question about a PHP script file. I am creating this file to update the database every second. There is no other modeling method; it has to be done every second.
Now, I am running CentOS and I am new to it. The first noob question is:
How do I run a PHP file via SSH? I read that it is just # php path-to/myfile.php. But I tried to echo something, and I don't see it in the output.
Now I don't think that starting the file is going to be a problem. One problem, I guess, will be the following; I don't know if it is even possible, but here goes:
Is it possible for me to be a hundred percent sure that the file is only run once? What happens if I accidentally run the file again?
I was wondering further: if I implement a write to a log every second, I can know if everything is running OK. If there is an error or something wrong, the log file will stop.
Is writing to a log file done with fopen, write, and close? Isn't this going to take a lot of time, and isn't there an easier method in CentOS?
OK, another big point: what happens when I run the file? Is it run from memory, or does it use the file on disk? Does it respond to changes made in the file, for example to stop the execution of the script?
Can I implement some kind of stop mechanism in the file itself? Or is there a command I can use to stop the file?
Another option I know of is implementing a cronjob that runs every minute, where the cronjob executes the PHP file and the PHP file loops for one minute, updating everything needed, then terminates. I implemented this method, but just used a browser: I browsed to my file and opened it. I saw the browser was busy for a minute, but it didn't update anything in the database. Does anyone have an idea what the reason for this could be?
Another question I have: with the cronjob method, what is the command I fill in in the Plesk panel? Is it the same as the above command, just php and the file name? Or are there special flags like -f, -q, or something?
Sorry for all the noob questions.
If someone can help me i really appreciate it.
Ciao!
The simplest way to ensure only one copy of your script is running is to use flock() to obtain a file lock. For example:
<?php
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // do an exclusive lock
    ftruncate($fp, 0);     // truncate file
    fwrite($fp, "Write something here\n");
    flock($fp, LOCK_UN);   // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);
?>
So basically you'd have a dummy file set up where your script, upon starting, tries to acquire a lock. If it succeeds, it runs. If not, it exits. That way only one copy of your script can be running at a time.
Note: flock() is what is called an advisory locking method, meaning it only works if you use it. So this will stop your own script from being run multiple times but won't do anything about any other scripts, which sounds fine in your situation.
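For the "exit immediately if another copy is running" behaviour, a non-blocking variant of the same idea (a sketch; the lock file path is arbitrary):

$fp = fopen("/tmp/lock.txt", "c");        // "c" creates the file if it doesn't exist
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit("Already running\n");            // another instance holds the lock, so bail out
}
// ... run the every-second update loop here, holding the lock for the life of the process ...
flock($fp, LOCK_UN);
fclose($fp);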
You can't always rely on the lock within the script itself, as stated in the comment to the previous answer. This might be a solution:
#Mins Hours Days Months Day of week
* * * * * lockfile -r 0 /tmp/the.lock; php parse_tweets.php; rm -f /tmp/the.lock
* * * * * lockfile -r 0 /tmp/the.lock; php get_tweets.php; rm -f /tmp/the.lock
This way, even if the script crashes, the lockfile will be released. Taken from here: https://unix.stackexchange.com/a/158459
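If util-linux's flock is available, a similar effect can be had without a separate lockfile utility; a sketch:

#Mins Hours Days Months Day of week
* * * * * flock -n /tmp/the.lock php parse_tweets.php
* * * * * flock -n /tmp/the.lock php get_tweets.php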
