I am developing a CLI PHP script that can either be executed in the foreground or the background. If running in the foreground I want to be able to interact with the user getting input information. Obviously if the script is launched in the background using the '&' parameter any user interaction should be skipped...
So is there a way for a PHP script to detect that it has been launched in the background?
It's not possible to reliably detect whether it's running in the background; I still haven't found a way to do it.
One way could be to traverse the process list and check the status of /usr/bin/php.
The best way is to use a parameter (say --daemon). When this parameter is passed, the script runs in the background; otherwise it prints useful information on the front end.
You can create a daemon using the System_Daemon PEAR package.
There's a Unix frequently asked question (which I found via this Stack Overflow post) that basically claims it cannot reliably be done. One suggestion the article gives is to check whether or not stdin is a terminal:
sh: if [ -t 0 ]; then ... fi
C: if(isatty(0)) { ... }
I agree with Pekka's comment that you should simply provide a parameter to the script. The -i (for interactive) option is sometimes used for this express purpose. If that parameter isn't passed, you can assume you're in "automated" mode.
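As a sketch combining both ideas in PHP: `stream_isatty()` exists since PHP 7.2 and `posix_isatty()` needs ext-posix; the `--daemon` flag name follows the suggestion above and is otherwise arbitrary.

```php
<?php
// Decide whether to prompt the user: stdin must be a TTY, and the
// explicit --daemon flag must not have been passed.
function isInteractive(bool $stdinIsTty, array $options): bool
{
    return $stdinIsTty && !isset($options['daemon']);
}

// stream_isatty() exists since PHP 7.2; posix_isatty() needs ext-posix.
$stdinIsTty = function_exists('stream_isatty')
    ? stream_isatty(STDIN)
    : (function_exists('posix_isatty') && posix_isatty(STDIN));

$options = getopt('', ['daemon']) ?: [];

if (isInteractive($stdinIsTty, $options)) {
    echo 'Enter your name: ';
    $name = trim(fgets(STDIN));
    echo "Hello, $name\n";
} else {
    echo "Running non-interactively, skipping prompts\n";
}
```

Note that launching with a trailing `&` alone still leaves stdin attached to the terminal, which is exactly why the FAQ calls the TTY check unreliable; the explicit flag is the robust part.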
This must be a solved problem, but Googling hasn't found me an answer.
I have an existing program with an interactive, text-only, command-line interface. I'd like to run this program on a web server so that each session gets a unique session of the CLI program, and each time a command is typed into the web page the corresponding response gets sent back.
I sorta-kinda know how to do this if the program were run fresh each time; what I'm looking for is how to do this so that the program keeps running (presumably in its own process), and the web page gets updates.
It'll be on a Linux box. Security hints would be helpful -- I'm hoping that I can run the program itself as a user that's tailored to have just enough privileges to actually run, but as little as possible otherwise to keep a hacker from getting in.
Try https://seashells.io/
Seashells lets you pipe output from command-line programs to the web in real-time, even without installing any new software on your machine. You can use it to monitor long-running processes like experiments that print progress to the console. You can also use Seashells to share output with friends!
$ echo 'Hello, Seashells!' | nc seashells.io 1337
serving at https://seashells.io/v/{random url}
Here's another article explaining Seashells.
Another alternative, if you only want to see the result of the command, is websocketd.
You could use this:
https://www.tecmint.com/shell-in-a-box-a-web-based-ssh-terminal-to-access-remote-linux-servers/
Then, you'd need to configure a user with limited rights that automatically launches the desired application. Make sure you understand Linux security before you do this. :)
Here is some information on restricting SSH sessions that you may find useful for this objective:
http://www.linuxjournal.com/article/8257
It looks like at least a partial answer is just basic Linux-ness: I stumbled across named pipes (which I knew about, dangit!). After testing, I can set up a pair of pipes, and then run my command as
tail -f in_pipe.fifo | my_command > out_pipe.fifo
Then the transient commands will just need to open the relevant pipes, talk to them, close, and exit. I'll still need to manage the lifetime of the above command (no doubt timers will be involved, and magic), but that's a question for another posting.
At this point I can open three terminal windows: one with the above command running, one with tail -f out_pipe.fifo running, and one in which I type things of the form echo line of input > in_pipe.fifo, and it all works. I'm sure that (A) I can work this out from here, and (B) it'll be ugly.
I did have to get into the source code for the program and make sure that the output buffer was flushed (using fflush(stdout)) in the C-code. (I'm working from some vintage source code; at this point I'm trying to do the minimal amount of work to make it compile and run -- every line changed is an opportunity for a bug, and I'm not yet familiar with this code base).
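A sketch of what the transient, per-request side could look like in PHP. The fifo paths are the ones from the pipeline above, which is assumed to be already running; the "one output line per command" assumption is mine and a real protocol would need framing.

```php
<?php
// Transient, per-request side: push one command in, read one reply out.
// Wrapped in a function so the stream source is swappable.
function sendCommand($in, $out, string $cmd): string
{
    fwrite($in, $cmd . "\n");
    fflush($in);
    return (string) fgets($out);   // naive: assumes one output line per command
}

// With the tail/my_command pipeline from above already running:
// $in  = fopen('in_pipe.fifo', 'w');   // blocks until the reader side exists
// $out = fopen('out_pipe.fifo', 'r');
// echo sendCommand($in, $out, 'line of input');
// fclose($in);
// fclose($out);
```

Caveat: concurrent sessions would interleave on a single fifo pair, so a real deployment needs one pair per session, plus the lifetime management mentioned above.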
I'm currently launching an asynchronous job with PHP to perform some tests.
To make it work, I found some tips on SO, like the use of popen and start:
$commande = "testu.bat";
$pid = popen('start /B ' . $commande, 'r');
$status = pclose($pid);
The testu.bat's folder is in my user PATH.
This script performs some tasks, and to verify its execution it should generate a log file, but I never get the file.
Whereas if I just remove the /B option, it works fine and I get my log file.
Did I miss something about background execution? How can I catch error information when the script runs in the background?
It appears you are operating under the assumption that the /B switch to the start command means "background". It does not. From the start usage:
B Start application without creating a new window. The
application has ^C handling ignored. Unless the application
enables ^C processing, ^Break is the only way to interrupt
the application.
Processes launched by start are asynchronous by default. Since that appears to be what you want, just run the command without the /B switch.
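A sketch of that, assuming PHP 7.2+ for PHP_OS_FAMILY (testu.bat is the asker's script; the POSIX branch is my addition for completeness):

```php
<?php
// Build a fire-and-forget command line for the current platform.
// On Windows, start is asynchronous on its own; the empty "" is the
// window-title argument, so the command isn't mistaken for a title.
// On POSIX, detach with & and redirect output so popen() returns at once.
function backgroundCommand(string $cmd): string
{
    return PHP_OS_FAMILY === 'Windows'
        ? 'start "" ' . $cmd
        : $cmd . ' > /dev/null 2>&1 &';
}

// pclose(popen(backgroundCommand('testu.bat'), 'r'));
```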
Interesting one... Ok, here's what I think is going on:
Because you run the task in the background, the PHP script just carries on; it is not waiting for testu.bat to return anything.
Put another way, popen does what it was instructed to do, which is starting the task in the background. Control is then handed straight back to PHP, while the log file is still being created in the background and the PHP script carries on at the same time.
What I would do in this case is let testu.bat call the PHP script (or another PHP script) in a callback fashion once it has finished its processing, similar to how you would use callbacks for asynchronous Ajax calls in JavaScript.
Maybe provide the callback script command as a parameter to testu.bat?
Hope this is of any help...
I'm not quite sure about your goal here, but here is some info you might use:
for figuring out background errors, you may find these functions useful:
set_exception_handler();
set_error_handler();
register_shutdown_function();
Of course write out the errors they catch into some file.
If you do not need any data back from your requests, you can simply use:
fsockopen()
curl
and give them a short timeout (say 10 ms). The scripts will run in the background.
Alternatively if you do need the data back, you can either put it into a database and set up a loop that checks if the data has already been inserted, or simply output it into a file and check for its existence.
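A sketch of the fire-and-forget fsockopen() variant; the host, port, and worker path are placeholders, not from the question:

```php
<?php
// Fire-and-forget HTTP request: connect, send the headers, and hang up
// without waiting for the response body.
function buildAsyncRequest(string $host, string $path): string
{
    return "GET {$path} HTTP/1.1\r\n"
         . "Host: {$host}\r\n"
         . "Connection: Close\r\n\r\n";
}

$fp = @fsockopen('localhost', 80, $errno, $errstr, 0.05); // ~50 ms timeout
if ($fp !== false) {
    fwrite($fp, buildAsyncRequest('localhost', '/worker.php'));
    fclose($fp); // the server keeps executing worker.php
}
```

The target script should call ignore_user_abort(true) so the early disconnect doesn't kill it.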
In my opinion, start launches the specified command by creating a new process in the "background". The execution of start itself therefore "just" starts the second process and exits immediately.
However, with the /B switch, the command to be executed runs in the context of the start process, so the execution of the start process takes longer. What I suspect is that executing pclose terminates the start process, and as a result you don't get your log file.
Maybe one solution (not tested though) could be executing something like
start /B cmd "/C testu.bat", where start just tries to execute cmd, and cmd gets /C testu.bat as the parameter, which is the "command" it shall execute.
Another thought:
What happens if you don't call $status = pclose($pid);?
Just for people seeking to make this trick work: in my case it simply needed the PHP directive ignore_user_abort activated, either in php.ini or via the PHP function of the same name.
Without this activated, the process is killed by pclose() without finishing the job.
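Concretely, that looks like this at the top of the worker script (a sketch):

```php
<?php
// At the top of the script that does the real work:
ignore_user_abort(true); // keep running after the caller disconnects/pcloses
set_time_limit(0);       // lift the execution time limit for the long job

// ... long-running work that writes the log file goes here ...
```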
Your problem is most likely properly solved by using a queue system. You insert a job into a queue that a background process picks up and works on. In this way the background task is completely independent of the HTTP request that initiated the task - but you can still monitor its progress.
The two most popular software packages that can help you in your scenario:
Gearman
Check out this gist and this tutorial for installation on Windows.
RabbitMQ
Check out this tutorial for installation on Windows.
Note that implementing a queuing solution is not a 5 minute prospect, but it is technically the right approach for this sort of situation. For a company product this is the only viable approach; for a personal project where you just want something to work, there is a level of commitment required to see it through for the first time. If you're looking to expand your development horizons, I strongly suggest you give the queue a shot.
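To give a taste of the Gearman flavor of this, here is a sketch of both sides; it requires the PECL gearman extension and a gearmand server, and the function name, server address, and payload are illustrative, not from the question:

```php
<?php
// Producer side: runs inside the HTTP request, returns immediately.
// Requires the PECL gearman extension and gearmand on 127.0.0.1:4730.
if (!class_exists('GearmanClient')) {
    fwrite(STDERR, "gearman extension not installed; skipping\n");
} else {
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);
    // doBackground() queues the job and returns a handle without waiting.
    $handle = $client->doBackground('run_tests', json_encode(['suite' => 'all']));
}

// Worker side (a separate, long-running CLI process):
// $worker = new GearmanWorker();
// $worker->addServer('127.0.0.1', 4730);
// $worker->addFunction('run_tests', function (GearmanJob $job) {
//     $args = json_decode($job->workload(), true);
//     // ... slow work here; write progress to a log or database ...
// });
// while ($worker->work());
```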
I have a script written in PHP. My PHP server is running on Windows Server.
How can I schedule a task that will open a URL and, after the page finishes executing, close the browser?
Thanks
The answer of Julian Knight has some drawbacks:
It will leave IE instances open, which you have to kill manually. While searching for a one-liner to achieve the same goal, I found PowerShell to be very helpful:
Task Scheduler, target: Powershell.exe, arguments:
-Command "(New-Object Net.WebClient).DownloadString('http://myhost/cron/cron.php')"
This will download the whole response body (without rendering it in a browser) and then quit the PowerShell instance.
As others have said, you use the Windows Task Scheduler which allows similar functionality to CRON but also a lot more (startup/down, events, etc.)
You can call Internet Explorer from a command with the url as a parameter:
iexplore.exe "http://etc.etc.etc"
You may need to explicitly add the path to iexplore.exe
Of course, as you say, you would then need to kill the browser process afterwards.
Better would be to add a simpler command to a folder on your path, such as CURL for Windows or WGET for Windows. Direct the output of the command to null; the command will exit with a testable return code should you need to further check whether it worked.
A third option would be to use PowerShell though I think that would be overkill in this case. I'd use it if I needed to test the return code for logging or to execute some other task on failure (e.g. write an error to the Windows Event Log and have a second task set up to run when that event occurs).
A 4th option would be to call PHP as this is supported if you have PHP installed on the server. Again though, the overheads of startup wouldn't make this worthwhile.
Personally, I would use WGET.
I finished a program with ZMQ: a PHP socket server built to accept client requests. I must make sure this server program runs on Linux all the time.
I run this program like this: php /app/server.php.
The terminal then shows my output statement, like waiting for client connecting..., and at this point I can't use my terminal for anything else unless I press Ctrl+C to exit the program.
I want it to run automatically in the background on Linux, like a daemon process. Also, the program may die on a PHP error, and then I have to restart it manually.
I'd like it to restart itself when an error happens.
How do I do that? Thank you first :)
Take a look at the System_Daemon PEAR package. I've used it several times and find that it works well.
Try using the "screen" program. You can use the following parameters to detach it from the current terminal and have your program running inside it:
screen -d -m php /app/server.php
also to have it autostart at boot time, add the above line to
/etc/rc.local
Use the & symbol at the end of the command to run the program in the background, like so:
php /app/server.php &
You can put the above command in /etc/rc.local so that it starts automatically when the system boots.
If your program produces output, you might want to send it to a log file instead of STDOUT.
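Putting those pieces together, including the restart-on-error the asker wanted, one option is a small PHP supervisor script that is itself started from /etc/rc.local; the paths are the asker's, and the supervisor pattern is my suggestion, not the only way (systemd or supervisord would do the same job).

```php
<?php
// supervisor.php: rerun the server whenever it exits with an error,
// which covers the "restart itself when an error happens" requirement.
function supervise(callable $runOnce, int $maxRestarts = PHP_INT_MAX): int
{
    $restarts = 0;
    while (true) {
        if ($runOnce() === 0) {
            return $restarts;          // clean shutdown: stop supervising
        }
        if (++$restarts >= $maxRestarts) {
            return $restarts;          // give up after too many crashes
        }
        sleep(1);                      // brief pause before restarting
    }
}

// Real usage (paths are the asker's):
// supervise(function (): int {
//     passthru('php /app/server.php >> /var/log/server.log 2>&1', $code);
//     return $code;
// });
```

You would then put `nohup php /app/supervisor.php &` in /etc/rc.local instead of the server itself.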
I start a Linux console app from my PHP 5 script. It starts OK but then terminates. I've tried using system(), shell_exec(), and starting it as a background process, but to no avail: it starts and then quits.
What I am trying to achieve: from a remote browser, start a console app using a PHP 5 script, and have it remain running (just as it would if I started it from a bash shell). I then want to send commands (from a bash shell these would be keystrokes) to the console app from another set of PHP 5 scripts. I hope it's clear what I am trying to do.
If anyone could give some info on the best way of doing this, that would be great, as I think I may have something fundamentally wrong.
I have a Debian Lenny box running Apache. The console app is just a simple program that prints to stdout and reads from stdin.
How do you expect to send input to this app? Where is it listening for input?
It may simply only support interactive use, and exit as a result of that. Or, even simpler, it may terminate because it sees that it has no input (nothing piped in, and nothing from a file), and since it's not connected to an interactive shell, it has nothing to do. There's no point in waiting for input from a user that doesn't have a way to interact with the application.
On every request, PHP starts up, compiles your script, and executes it. When the script exits, all of the resources it was using, including file handles, database handles, and pipes to other programs, are terminated.
You're going to need to find another way to keep your program open and have PHP communicate with it. Otherwise, every request to your script is going to open a new copy of the program, and then both will exit when the PHP script is complete.
Unfortunately without knowing what the program is, it will be hard to offer suggestions on how to go about doing this.
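To make the per-request lifetime concrete, here is a sketch using proc_open(), with `cat` standing in for the unknown program since it simply echoes stdin back: the pipes work fine for the duration of the request, and everything is torn down when the script ends, exactly as described above.

```php
<?php
// Per-request interaction with a child process via pipes. 'cat' stands in
// for the real console app. The child still dies when this script finishes.
$spec = [
    0 => ['pipe', 'r'],   // child's stdin  (we write to it)
    1 => ['pipe', 'w'],   // child's stdout (we read from it)
    2 => ['pipe', 'w'],   // child's stderr
];
$proc = proc_open('cat', $spec, $pipes);
$reply = '';
if (is_resource($proc)) {
    fwrite($pipes[0], "hello\n");
    fclose($pipes[0]);                       // EOF lets 'cat' finish
    $reply = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
}
echo $reply;
```

For interaction that survives across requests, something external to PHP has to hold the process open, such as the named-pipe arrangement described earlier.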