This must be a solved problem, but Googling hasn't found me an answer.
I have an existing program with an interactive, text-only, command-line interface. I'd like to run this program on a web server so that each browser session gets its own instance of the CLI program, and each time a command is typed into the web page the corresponding response gets sent back.
I sorta-kinda know how to do this if the program were run fresh each time; what I'm looking for is how to do this so that the program keeps running (presumably in its own process), and the web page gets updates.
It'll be on a Linux box. Security hints would be helpful -- I'm hoping that I can run the program itself as a user that's tailored to have just enough privileges to actually run, but as little as possible otherwise to keep a hacker from getting in.
Try https://seashells.io/
Seashells lets you pipe output from command-line programs to the web in real-time, even without installing any new software on your machine. You can use it to monitor long-running processes like experiments that print progress to the console. You can also use Seashells to share output with friends!
$ echo 'Hello, Seashells!' | nc seashells.io 1337
serving at https://seashells.io/v/{random url}
Here is another article explaining Seashells.
As another alternative, if you only want to see the output of the command, you can use websocketd.
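A minimal sketch of how it is typically invoked (my_command stands in for your own program; websocketd starts one instance of the program per WebSocket connection and bridges its stdin/stdout to WebSocket messages):

websocketd --port=8080 ./my_command

A small JavaScript WebSocket client on the page can then display each message as it arrives.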
You could use this:
https://www.tecmint.com/shell-in-a-box-a-web-based-ssh-terminal-to-access-remote-linux-servers/
Then, you'd need to configure a user with limited rights that automatically launches the desired application. Make sure you understand Linux security before you do this. :)
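One possible shape for that, sketched with an assumed account name and program path, is a dedicated account whose login shell is the CLI program itself, so a login session can run the program and nothing else:

# user name and program path are assumptions; the program must be executable
sudo useradd --create-home --shell /usr/local/bin/my_command appuser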
Here is some information on restricting SSH sessions that you may find useful for this objective:
http://www.linuxjournal.com/article/8257
It looks like at least a partial answer is just basic Linux-ness: I stumbled across named pipes (which I knew about, dangit!). After testing, I can set up a pair of pipes, and then run my command as
tail -f in_pipe.fifo | my_command > out_pipe.fifo
Then the transient commands will just need to open the relevant pipes, talk to them, close, and exit. I'll still need to manage the lifetime of the above command (no doubt timers will be involved, and magic), but that's a question for another posting.
At this point I can open three terminal windows: one with the above command running, one with tail -f out_pipe.fifo running, and one in which I type things of the form echo line of input > in_pipe.fifo, and it all works. I'm sure that (A) I can work this out from here, and (B) it'll be ugly.
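For the record, the whole sketch looks something like this (assuming one line of input produces one line of output):

mkfifo in_pipe.fifo out_pipe.fifo                      # one-time setup
tail -f in_pipe.fifo | my_command > out_pipe.fifo &    # the long-running session
echo 'line of input' > in_pipe.fifo                    # a transient write, as a web request would do
head -n 1 out_pipe.fifo                                # read one line of reply (blocks until ready)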
I did have to get into the source code for the program and make sure that the output buffer was flushed (using fflush(stdout)) in the C-code. (I'm working from some vintage source code; at this point I'm trying to do the minimal amount of work to make it compile and run -- every line changed is an opportunity for a bug, and I'm not yet familiar with this code base).
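(If patching the source ever becomes too invasive, GNU coreutils' stdbuf may be an alternative worth trying; for most dynamically linked stdio programs it can force line-buffered output without any source changes:

tail -f in_pipe.fifo | stdbuf -oL my_command > out_pipe.fifo

No guarantees for a statically linked or non-stdio binary, though.)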
We are working on a project where we use Node.js in the background, with a socket for continuous responses to a web application. Sometimes, somehow, one of the processes stops on its own.
We would like to know how we can check all the processes running under forever.
We are using sudo forever list to list all processes. Is there any way to use this command (forever list) in a .sh shell script to check whether a specific process, such as responsclient, is running or not? If that particular process is not running, we need to start it.
There are several solutions that will ensure that your service is always running.
One of them is even called forever. Here is an overview prepared by the Express team.
However, for production services I recommend Passenger. The result is almost the same, but with much greater scalability; for example, you can configure it so that another instance is added automatically.
Almost the same, because Passenger is designed to ensure the availability of HTTP services, not the constant operation of the application.
BTW: your service stops because you have an uncaught exception.
Update
If you insist on forever, then (we are talking about the same forever, right?):
Make sure that forever is always run by the same user; forever keeps a separate process manager for each user.
Make sure you save your data in the same place (an automatic run, e.g. by cron, differs from a manual startup: the variables in env are different).
forever has --pidFile, which makes it very easy to check whether the process is running.
Also, ps aux | grep node should be your big friend.
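Putting those together, a check along the lines the question asks for might look like this sketch (the script name, path, and grep pattern are assumptions to adjust):

#!/bin/sh
# restart responsclient if forever no longer lists it
if ! sudo forever list | grep -q 'responsclient'; then
    sudo forever start /path/to/responsclient.js
fi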
No, I do not have all of this combined. When I started to have problems, I switched to Passenger. In the end that worked out well, because I got professional monitoring running in less time than it would have taken to figure out how to combine the points above.
I often use PHP to script more complex maintenance tasks, usually run in the CLI or as cron tasks. I've had decent success with having prompts passed through, and things work with exec() and passthru().
I never really thought about having web access to these scripts, until something in the documentation made me think it may be possible/useful.
However, almost all of the documentation covers the output of commands; there is slim-to-none information about [CLI] prompts.
It's probably a bad idea in the first place, because it would probably inadvertently boost the privileges of the web server user (www-data), right?
I'm learning PHP and I'd like to write a simple forum monitor, but I've come up against a problem. How do I write a script that downloads a file regularly? When the page is loaded, the PHP is executed just once, and if I put it into a loop, it would all have to run before the page finished loading. But I want to, say, download a file every minute and make a notification on the page when the file changes. How do I do this?
Typically, you'll act in two steps:
First, you'll have a PHP script that runs every minute, using the crontab (see the sample entry below).
This script will do the heavy lifting: downloading and parsing the page,
and storing some information in a shared location -- a database, typically.
Then, your webpages will only have to check that shared location (the database) to see if the information is there.
This way, your webpages will always work:
Even if there are many users, only the cron job will download the page.
And even if the cron job doesn't run for a while, the webpage will still work; the worst possible outcome is some information being outdated.
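The crontab side of this might look like the following (the script and log paths are assumptions; install the line with crontab -e):

* * * * * /usr/bin/php /path/to/fetch_and_store.php >> /var/log/fetch.log 2>&1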
Others have already suggested using a periodic cron script, which I'd say is probably the better option, though as Paul mentions, it depends upon your use case.
However, I just wanted to address your question directly, which is to say: how does a daemon in PHP work? The answer is that it works the same way as a daemon in any other language: you start a process which doesn't end immediately, and put it into the background. That process then polls files, or accepts socket connections, or some such, and in so doing accepts some work to do.
(This is obviously a somewhat simplified overview, and of course you'd typically need to have mechanisms in place for process management, signalling the service to shut down gracefully, and perhaps integration into the operating system's daemon management, etc. but the basics are pretty much the same.)
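As a rough illustration of "putting it into the background" (the script and file paths here are assumptions, not a recipe):

nohup php /path/to/monitor.php >> /tmp/monitor.log 2>&1 &   # detach from the terminal, log output
echo $! > /tmp/monitor.pid                                  # keep the PID for signalling or stopping later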
"How do I write a script that downloads a file regularly?"
There are schedulers to do that, like cron on Linux (or Unix).
"When the page is loaded, the php is executed just once"
Just once, just like the index.php of your site...
If you want to update a page which is shown in a browser, then you should use some form of AJAX;
if you want something else, then your question is not clear to me...
I start a Linux console app from my PHP 5 script; it starts OK but then terminates. I've tried using system(), shell_exec(), and starting it as a background process, but to no avail: it starts and then quits.
What I am trying to achieve: from a remote browser, start a console app using a PHP 5 script, and have it keep running (just as it would if I started it from a bash shell). I then want to send commands to the console app (from a bash shell these would be keystrokes) from another set of PHP 5 scripts. I hope it's clear what I am trying to do.
If anyone could give some info on the best way to go about this, that would help; I think I may have something fundamentally wrong.
I have a Debian Lenny box running Apache. The console app is just a simple program that prints to stdout and reads from stdin.
How do you expect to send input to this app? Where is it listening for input?
It may simply support only interactive use, and exit as a result of that. Or, even simpler, it may terminate because it sees that it has no input (nothing piped in, nothing from some file), and since it's not connected to an interactive shell, it has nothing to do. There's no point in waiting for input from a user that doesn't have a way to interact with the application.
On every request, PHP starts up, compiles your script, and executes it. After execution, the script exits. When the script exits, all of the resources it was using, including file handles, database handles, and pipes to other programs, are closed.
You're going to need to find another way to keep your program open and have PHP communicate with it. Otherwise, every request to your script is going to open a new copy of the program, and then both will exit when the PHP script is complete.
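One such arrangement, sketched under the assumption that your app reads commands line by line, is the named-pipe trick described near the top of this page:

mkfifo /tmp/app_in.fifo
tail -f /tmp/app_in.fifo | ./console_app >> /tmp/app_out.log 2>&1 &
# a transient PHP request can then send a command with, e.g.:
#   file_put_contents('/tmp/app_in.fifo', "some command\n", FILE_APPEND);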
Unfortunately without knowing what the program is, it will be hard to offer suggestions on how to go about doing this.
The Setup:
I have a LAMP server (Ubuntu 9.10) which is also hooked up to my HDTV. Upon boot, one user is automatically logged in so as to have a desktop environment on screen.
What I want:
I've been developing a remote, web interface in PHP for various things (play music, for example). However, I've hit a snag in that I would like to run a windowed program and have it display on the TV. Obviously, since PHP/Apache is running under the user www-data, this isn't going to happen just by running my command via exec().
Is there a Linux command that can run it in the currently logged-in session of my other user, or a program that can?
The X program being started uses the DISPLAY variable to determine which X session to hook up to. You'll need to figure out which X session the user currently has; if this is a one-user box, it's most likely :0.
Then you could write a simple bash script to:
1. Set DISPLAY (and other variables as needed).
2. Execute the program.
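For example, a minimal sketch (":0" and the program name are assumptions):

#!/bin/bash
export DISPLAY=:0           # point at the logged-in user's X session
exec my_windowed_program    # replace with the actual program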
--
Another solution would be to write the necessary information to a flat file and then have a cron job checking for updates (note that cron's granularity is one minute; for checks every second or three you'd need a polling loop inside the script it starts). The cron job can be configured to run as a specified user. Ugly, but should work.
As mataisf says, you need to set the DISPLAY variable so the program knows where to create its window. However, there is an authentication system which prevents unauthorized programs from accessing an X server (the place where the keyboard, screen and mouse are). One way around this is to let any program connecting from the local machine have access:
xhost +localhost
...but a better solution would be to run the program as the logged-in user. There are lots of different ways to do this, but probably the most practical is via sudo, e.g.
sudo -u console_user program
Note that before you do this you might want to set the HOME variable so that xauth works properly (you can do this with the -H flag for sudo).
Note that you need to configure the program, the console_user, and the web server user in the /etc/sudoers file.
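A hypothetical sudoers rule to that effect (user and program names are assumptions; edit the file with visudo):

# allow the web server user to run exactly this program as console_user,
# with no password prompt
www-data ALL = (console_user) NOPASSWD: /usr/local/bin/program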
C.