I am writing an automated build system to do nightly builds of our code.
Under normal conditions everything works fine, but some of the processes in the build can require user input, and if a developer commits a change that triggers one of these, the automated builds grind to a halt.
Does anyone know of a way to cause reads from STDIN to fail, so that any process that tries this will (hopefully) fail with an error? Right now I only need a solution for Linux (Ubuntu), but the system also has to run on Windows.
FYI: The automated build system is written in PHP, and in the case where this is (currently) a problem it is using buildroot to do the compilation.
Redirect STDIN from /dev/null; reads from it always return EOF. How you achieve this depends on how your build system is set up. A command-line app can use < /dev/null, of course.
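For example, if the PHP build driver launches the compile step itself, a hedged sketch using proc_open() could redirect stdin at the source (the build command and working directory are placeholders; on Windows, 'NUL' plays the role of /dev/null):

<?php
// Run a build step with STDIN redirected from /dev/null so that any
// read of standard input sees EOF immediately instead of blocking.
$descriptors = [
    0 => ['file', '/dev/null', 'r'], // stdin: always EOF ('NUL' on Windows)
    1 => ['pipe', 'w'],              // capture stdout
    2 => ['pipe', 'w'],              // capture stderr
];

$proc = proc_open('make', $descriptors, $pipes, '/path/to/buildroot');
if (is_resource($proc)) {
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($proc);
}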
This must be a solved problem, but Googling hasn't found me an answer.
I have an existing program with an interactive, text-only, command-line interface. I'd like to run this program on a web server so that each session gets a unique session of the CLI program, and each time a command is typed into the web page the corresponding response gets sent back.
I sorta-kinda know how to do this if the program were run fresh each time; what I'm looking for is how to do this so that the program keeps running (presumably in its own process), and the web page gets updates.
It'll be on a Linux box. Security hints would be helpful -- I'm hoping that I can run the program itself as a user that's tailored to have just enough privileges to actually run, but as little as possible otherwise to keep a hacker from getting in.
Try https://seashells.io/
Seashells lets you pipe output from command-line programs to the web in real-time, even without installing any new software on your machine. You can use it to monitor long-running processes like experiments that print progress to the console. You can also use Seashells to share output with friends!
$ echo 'Hello, Seashells!' | nc seashells.io 1337
serving at https://seashells.io/v/{random url}
Another alternative, if you only want to see the output of the command, is websocketd.
You could use this:
https://www.tecmint.com/shell-in-a-box-a-web-based-ssh-terminal-to-access-remote-linux-servers/
Then, you'd need to configure a user with limited rights that automatically launches the desired application. Make sure you understand Linux security before you do this. :)
Here is some information on restricting SSH sessions that you may find useful for this objective:
http://www.linuxjournal.com/article/8257
It looks like at least a partial answer is just basic Linux-ness: I stumbled across named pipes (which I knew about, dangit!). After testing, I can set up a pair of pipes, and then run my command as
tail -f in_pipe.fifo | my_command > out_pipe.fifo
Then the transient commands will just need to open the relevant pipes, talk to them, close, and exit. I'll still need to manage the lifetime of the above command (no doubt timers will be involved, and magic), but that's a question for another posting.
At this point I can open three terminal windows: one with the above command running, one with tail -f out_pipe.fifo running, and one in which I type things of the form echo line of input > in_pipe.fifo, and it all works. I'm sure that (A) I can work this out from here, and (B) it'll be ugly.
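For the web side, here is a hedged sketch of what one of those transient commands could look like in PHP, assuming the two FIFOs above already exist (created with mkfifo), the long-running pipeline is up, and the program answers one line per line of input (all of which are assumptions):

<?php
// Send one line of input; opening the FIFO for writing blocks until a
// reader (the tail -f in the pipeline) is attached.
$in = fopen('in_pipe.fifo', 'w');
fwrite($in, "line of input\n");
fclose($in);

// Read one line of the program's reply from the output FIFO.
$out = fopen('out_pipe.fifo', 'r');
$response = fgets($out);
fclose($out);

echo $response;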
I did have to get into the source code for the program and make sure that the output buffer was flushed (using fflush(stdout)) in the C-code. (I'm working from some vintage source code; at this point I'm trying to do the minimal amount of work to make it compile and run -- every line changed is an opportunity for a bug, and I'm not yet familiar with this code base).
I'm fully aware that PHP has a range of functions for issuing commands to the Windows command-line back-end, but in my experience these run in a completely separate environment.
I've been researching ways of issuing commands to an already running command prompt and printing out the results. My current setup is as follows:
Windows Server 2008 R2 (IIS, PHP 5.5, MSSQL & MySQL server)
An already running command prompt window, launched with the following:
C:\Datalog\sys\dedi.exe -logfile=C:\inetpub\wwwroot\Syslog\
The problem is that the functions I'm aware of, such as exec(), system() and passthru(), only run commands in a separate environment.
Why don't I just start the executable with PHP?
This could be done with either PHP and/or an AJAX solution, but the problem that will be encountered is that when navigating away from the page the executable will close, and when navigating to the page again, it might cause duplicate running environments.
So, my overall question: is it possible to use PHP to issue commands to an already running command prompt session that is kept alive by the operating system?
The short answer is no, this is not possible. The web server will launch new processes separate from any other shell. You could write a command line app that runs continuously in a command prompt and takes IPC messages from the web app to get instructions, but this is probably too convoluted given your main concern:
the problem that will be encountered is that when navigating away from
the page the executable will close, and when navigating to the page again,
it might cause duplicate running environments
These are concerns that can be resolved in other ways. Processes can be launched asynchronously to run apart from the web application and continue after the connection is closed. To avoid "duplicating the running environment", the launched processes or the web app can use semaphores, lock files, or similar techniques to guard against duplicate runs.
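As a hedged illustration of the asynchronous launch (the program path is a placeholder): PHP can hand the process off to the shell so the request returns immediately, and capture the new process ID for later duplicate checks:

<?php
// Launch the program detached from the web request: output is discarded,
// the shell backgrounds it, and `echo $!` reports the new process ID.
exec('nohup /usr/bin/myapp > /dev/null 2>&1 & echo $!', $output);
$pid = (int) $output[0]; // e.g. store this to guard against duplicate runs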
Part of my web application is a background script that polls a beanstalkd server and processes data.
This script needs to run continuously (like a daemon). If it crashes, it needs to be started again. It also can't be started twice (more precisely, run twice).
As I want to ease the deployment and development process, I want to avoid using pcntl_fork. It's not available on Windows, and it necessitates recompiling PHP on Mac, sometimes on Linux too...
Can I do this simply by using a bash script to launch the PHP script in the background?
# verify that the script is not already running
...
/usr/bin/php myScript.php &
If I execute this script with crontab every hour or so, my process should run continuously, and be restarted within at most an hour if it crashes?
Assuming you control the server(s) on which your scripts run, Supervisor is probably a good solution for you.
It's a process control daemon, written in Python. You can configure it to start your PHP script and keep it running. The PHP script itself doesn't need to do anything special. No forking, no manual process control, nothing.
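A minimal program section for supervisord might look like this (the program name and paths are assumptions):

[program:beanstalk-worker]
command=/usr/bin/php /path/to/myScript.php
autostart=true
autorestart=true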
On the other hand, you've also expressed concern about pcntl_fork not being available on Windows. If you're really running this thing on Windows, Supervisor isn't going to work out for you, as it isn't Windows friendly. Keep in mind that Windows isn't really friendly to Unix-style daemonization either, as it would want to control the daemon as a Service. While that's possible, it's not exactly an easy or elegant solution.
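If you do stay with the cron-plus-shell-script approach, the "verify that the script is not already running" step can also live inside the PHP script itself. A hedged sketch using an advisory lock (the lock-file path is a placeholder):

<?php
// myScript.php: take an exclusive, non-blocking lock before doing anything.
// If another instance already holds it, exit immediately.
$lock = fopen('/tmp/myScript.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit(0); // already running
}

// ... poll beanstalkd and process data ...

// The lock is released automatically when the process exits or crashes,
// so the hourly cron relaunch can take over.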
I am writing a WordPress plugin in PHP, and the next step is some kind of add-on to this plugin.
The add-on would scrape data from the web, send forms, etc. I have this part almost ready from before I had any thoughts about a WordPress plugin; it's coded in Ruby using Mechanize. I haven't found anything similar to Mechanize in PHP anyway.
But I do not know the best way to call my Ruby script from WordPress. Some tasks will be managed by cron. What about the ones based on user requests?
The PHP script only triggers the Ruby script; it won't wait for or require anything from Ruby's output.
The WordPress plugin is fully portable and functional without the Ruby script; Ruby adds something more, if somebody requires it.
Everything will be running on my Linux server, where I have root access.
A WordPress plugin that depends on Ruby isn't going to be portable. That's OK if you're the only one who will be using it, though.
If the Ruby script needs to return a result that will be used immediately by the PHP script that's calling it, then something like exec() is the only way. Make sure you escape any arguments you pass to the Ruby script; otherwise you'll be vulnerable to injection attacks.
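A hedged sketch of that pattern (the interpreter path, script path, and argument are placeholders):

<?php
// Call the Ruby script synchronously, escaping the user-supplied argument
// so it cannot break out of the shell command.
$arg = escapeshellarg($userSuppliedValue);
exec('/usr/bin/ruby /path/to/scraper.rb ' . $arg, $output, $exitCode);
// $output now holds the lines the Ruby script printed to stdout.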
If the Ruby script doesn't need to return a result immediately (e.g. some background processing, such as thumbnail generation) then I think the best way would be for the PHP script to insert a row into a MySQL database or something similar. The Ruby script can work in the background or run from cron, check the database periodically for new jobs, and do whatever processing it needs to do. This approach avoids the performance overhead and security issues of exec(), and it's arguably also more scalable. (A similar approach would have the Ruby script listen on a socket, and your PHP scripts would connect to the socket. But this requires more work to get it right.)
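As a sketch of that hand-off, assuming a hypothetical jobs table with task and status columns (the Ruby side would poll for rows with status = 'pending'):

<?php
// Queue a background job for the Ruby worker to pick up later.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
$stmt = $pdo->prepare("INSERT INTO jobs (task, status) VALUES (?, 'pending')");
$stmt->execute(['generate_thumbnail:42']);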
If I were you I would handle all the Ruby stuff from cron. Make a queue in the DB to hold user requests, then have the script (in Ruby?) invoked by cron grab all the unprocessed jobs from the queue and start running them, then remove each job from the queue (or set some kind of flag marking it done). This way you don't have to call exec(), which in most cases is going to be off limits unless the user is running on a VPS/dedicated server where they have root access.
You could also make this a separate job and have it poll the DB for unprocessed jobs more regularly than the primary job, if necessary.
Still, this begs the question... why use Ruby in a PHP blog/CMS app?
Use exec() to run the Ruby interpreter, giving it the path to your Ruby script.
http://php.net/manual/en/function.exec.php
I start a Linux console app from my PHP 5 script; it starts OK but then terminates. I've tried using system(), shell_exec(), and starting it as a background process, but to no avail: it starts and then quits.
What I am trying to achieve: from a remote browser, start a console app using a PHP 5 script, which should then remain running (just as it would if I started it from a bash shell). I then want to send commands (which from a bash shell would be keystrokes) to the console app from another set of PHP 5 scripts. I hope it's clear what I am trying to do.
If anyone could give some info on the best way of doing this, I'd appreciate it, as I think I may have something fundamentally wrong.
I have a Debian Lenny box running Apache. The console app is just a simple program that prints to stdout and reads from stdin.
How do you expect to send input to this app? Where is it listening for input?
It may simply only support interactive use, and exit as a result of that. Or, even simpler, it may terminate because it sees that it has no input (nothing piped in, and nothing from a file) and, since it's not connected to an interactive shell, it has nothing to do. There's no point in waiting for input from a user that doesn't have a way to interact with the application.
On every request, PHP starts up, compiles your script, and executes it. After execution, the script exits. When the script exits, all of the resources it was using, including file handles, database handles, and pipes to other programs, are closed.
You're going to need to find another way to keep your program open and have PHP communicate with it. Otherwise, every request to your script is going to open a new copy of the program, and then both will exit when the PHP script is complete.
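To make the limitation concrete, here is a hedged sketch of what a single request can do with proc_open() (the program path is a placeholder); everything below dies when the script finishes:

<?php
// Open the program with pipes attached to its stdin and stdout.
$proc = proc_open('/usr/bin/myapp', [
    0 => ['pipe', 'r'], // we write to the app's stdin
    1 => ['pipe', 'w'], // we read from the app's stdout
], $pipes);

fwrite($pipes[0], "some command\n");
echo fgets($pipes[1]); // one line of the app's response

fclose($pipes[0]);
fclose($pipes[1]);
proc_close($proc);
// When this script ends, the pipes close and the app terminates; the next
// HTTP request would start a brand-new copy of the program.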
Unfortunately without knowing what the program is, it will be hard to offer suggestions on how to go about doing this.