I'm considering the idea of a browser-based PHP IDE and am curious about the possibility of emulating the command line through the browser, but I'm not familiar enough with developing tools for the CLI to know if it's something that could be done easily or at all. I'd like to do some more investigation, but so far haven't been able to find very many resources on it.
From a high level, my first instinct is to set up a text input which would feed commands to a PHP script via AJAX and return any output onto the page. I'm just not familiar enough with the CLI to know how to interface with it in that context.
I don't need actual code (though that would be useful too); I'm mostly looking for which functions, classes, or APIs I should investigate further. Ideally, I would prefer something baked into PHP (assume PHP 5.3) rather than a third-party library. How would you tackle this? Are there any resources or projects I should know about?
Edit: The use case for this would be a localhost or development server, not a public facing site.
Call this function through an RPC or a direct POST from JavaScript; it does things in this order:
Write the PHP code to a file (with a random name) in a folder (with a random name), where it will sit alone, execute, and then be deleted at the end of execution.
The current PHP process will not run the code in that file itself. Instead, it must have exec permissions (safe_mode off) so it can launch a separate PHP process with a locked-down configuration: exec('php -c /path/to/security_tight/php.ini /path/to/the/temp/script.php') (see php --help for the available switches).
Catch any output and send it back to the browser; that way you are protected from any weird errors. Instead of exec() I recommend popen(), so you can kill the process and manually control the timeout while waiting for it to finish (in case you kill that process, you can easily send back an error to the browser).
You need lax/normal security (same as the entire IDE backend) for the normal PHP process which runs when called through the browser.
You need strict and paranoid security for the php.ini and php process which runs the temporary script (go ahead and even separate it on another machine which has no network/internet access and has its state reverted to factory every hour just to be sure).
Don't use eval(); it is not suitable for this scenario. An attacker can jump out into your application and use your current permissions and variable state against you.
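A minimal sketch of the flow above (PHP 5.3-era code; the hardened php.ini path, the folder location and the timeout value are assumptions, and a full implementation would use proc_open()/proc_terminate() to actually kill a runaway child instead of just abandoning it):
<?php
// Sketch only: write the submitted code to an isolated temp file and run it
// with a separate PHP process using a locked-down php.ini. Paths are placeholders.
function run_sandboxed($code, $timeout = 5)
{
    // 1. Write the code to a file with a random name in its own folder.
    $dir = sys_get_temp_dir() . '/sandbox_' . md5(uniqid(mt_rand(), true));
    mkdir($dir, 0700);
    $file = $dir . '/script.php';
    file_put_contents($file, $code);

    // 2. Run it in a separate PHP process with the strict php.ini.
    $cmd  = 'php -c /path/to/security_tight/php.ini ' . escapeshellarg($file) . ' 2>&1';
    $proc = popen($cmd, 'r');
    stream_set_blocking($proc, false);

    // 3. Collect output, enforcing a manual timeout.
    $output = '';
    $start  = time();
    while (!feof($proc)) {
        $output .= fread($proc, 8192);
        if (time() - $start > $timeout) {
            $output .= PHP_EOL . '[terminated: timeout]';
            break;  // pclose() below still waits; proc_terminate() would kill it
        }
        usleep(50000);
    }
    pclose($proc);

    // 4. Clean up the temporary script and folder.
    unlink($file);
    rmdir($dir);

    return $output;
}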
The basic version would be:
Your script outputs a form with a line input.
The form action points back to your script.
The script takes the input from the form and passes it to eval().
Any output from eval() is passed to the browser.
The form is output again.
The problem is that defined functions and variables are lost between requests.
What you could do is add each line that is entered to your session. Let's say:
session_start();
$inputline = $_GET['line'];
$_SESSION['script'] .= $inputline . PHP_EOL;
eval($_SESSION['script']);
This way, on each request the full accumulated PHP script is executed (and of course you will get the full output).
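Put together, a minimal single-file sketch of that basic version might look like this (eval() is of course just as dangerous here, see the warnings above):
<?php
// Minimal REPL sketch: each submitted line is appended to the session and
// the whole accumulated script is re-run on every request.
session_start();
if (!isset($_SESSION['script'])) {
    $_SESSION['script'] = '';
}

if (isset($_GET['line']) && $_GET['line'] !== '') {
    $_SESSION['script'] .= $_GET['line'] . PHP_EOL;

    ob_start();
    eval($_SESSION['script']);          // runs the full script so far
    echo '<pre>' . htmlspecialchars(ob_get_clean()) . '</pre>';
}

// Output the form again for the next line.
echo '<form method="get" action="">'
   . '<input type="text" name="line" size="80">'
   . '<input type="submit" value="Run">'
   . '</form>';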
Another option would be to create some kind of daemon (basically an instance of a php -a call) that runs on the server in the background, gets your input from the browser, and passes back the output.
You could connect this daemon to two FIFO devices (one for the input and one for the output) and communicate via simple fopen.
For each user that is using your script, a new daemon process has to be spawned.
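The browser-facing side of such a setup could talk to the daemon over the two FIFOs roughly like this (the FIFO paths are assumptions, created elsewhere with posix_mkfifo() when the daemon is spawned for this user, and the daemon is assumed to close its output FIFO after each reply):
<?php
// Sketch: pass one line of input to the per-user daemon and read its reply.
session_start();
$in  = '/var/run/php-repl/' . session_id() . '.in';   // daemon reads this
$out = '/var/run/php-repl/' . session_id() . '.out';  // daemon writes this

$w = fopen($in, 'w');                 // blocks until the daemon opens its end
fwrite($w, $_POST['line'] . PHP_EOL);
fclose($w);

$r = fopen($out, 'r');
$reply = stream_get_contents($r);     // reads until the daemon closes the FIFO
fclose($r);

echo $reply;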
Needless to say, it is important to secure your script against abuse.
Recently I read about php.js, a PHP interpreter written in JavaScript, which would let you write and execute PHP code using your browser only. I'm not sure if this is what you need in the end, but it sounds interesting.
We've tested some products at my university for SSH access to our lab servers and used some web SSH tools -- they basically do exactly what you want. The Shell In A Box project can be bound to any interpreter you like and can be used with an interactive PHP interpreter if desired (on the demo page they used a BASIC interpreter). The project may serve as a basis for a true PHP IDE. It has the advantage of being able to interact with any console-based editor (e.g. vi, emacs or nano) as well as to issue administrative commands (e.g. creating folders, changing ownership or ACLs, or restarting a service).
Mozilla also has a full-featured web-based IDE called Bespin, which is also highly extensible and configurable.
As you stated that the page is not for the public, you of course still have to protect it with authentication and SSL to combat session hijacking.
I'm developing an application in PHP and JavaScript and need to set disk quotas for a given user. I'm using an FTP daemon (ProFTPd in this case) to let users have their own document manager, so that elFinder (the document manager I'm currently thinking of using) can run 'freely', instead of my having to write a PHP function to control how much space is actually being used.
The idea is to run a single command to adjust the disk quota on the server side, but... is it safe to let PHP run system commands (even if I'm not going to accept parameters or allow any kind of user interaction with the system)?
Usually it is not safe. It doesn't matter whether you let users send commands or allow any other kind of interactivity. Even if your script runs alone, exploits can be invented to make use of it in one form or another and perhaps alter its actions.
But this only applies if you want insanely strict security rules on your server. In the real world, the chance of your server's security being compromised this way is minimal.
I still have some suggestions for you:
Make sure your script does not accept any input from outside and does not read from a database or a file; everything must be enclosed inside the script.
Try to put the script somewhere outside the document root so it won't be accessible to users.
Put special permissions on the script so that its actions are limited to the user it runs as. Even if someone breaks it somehow, the OS will not let them do anything other than run that particular command in that particular environment.
This can of course be completed with more rules, but this is just what comes to mind now. Hope it helps.
It is unsafe if you let the user enter info (POST or GET values) without filtering it.
If you have to use a GET value the user enters, you should use escapeshellarg() or escapeshellcmd().
http://www.php.net/manual/en/function.escapeshellarg.php
http://php.net/manual/en/function.escapeshellcmd.php
As long as you are not using any user input for the command to be run, it is just as safe to use exec() as it is to run the command from the command line yourself. If you are going to use user input in the commands, use escapeshellarg() or escapeshellcmd() and it should still be safe.
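For example, a sketch of the quota case with everything escaped or cast (the setquota invocation, the limits, and the filesystem are assumptions, not ProFTPd-specific advice):
<?php
// Sketch: adjust a user's disk quota. Values are illustrative placeholders.
$username  = 'someuser';      // would come from your own application, not $_GET
$softLimit = 1048576;         // KB
$hardLimit = 1153434;         // KB

$cmd = sprintf(
    'setquota -u %s %d %d 0 0 /home',
    escapeshellarg($username),          // escape anything string-like
    (int) $softLimit,                   // cast numbers instead of escaping
    (int) $hardLimit
);

exec($cmd, $output, $status);
if ($status !== 0) {
    error_log('setquota failed: ' . implode(PHP_EOL, $output));
}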
I'm trying to determine the best approach to providing an Ajax-based terminal using PHP. I haven't made an attempt at writing it yet, but having rolled the idea around, the only way I can see it being possible would be with two scripts:
Script 1: handles Ajax communication between server and client browser. When a request is made to use the terminal, it connects to (or starts as a service and then connects to) Script 2 via a socket.
Script 2: performs the system calls, passing output back to the Ajax script via the socket for output to the browser.
There are multiple holes I can see in this though, and I'm wondering if anyone has created/seen a set of scripts that can perform these tasks? Any insight would be greatly appreciated!
Thanks :)
Edit: I think I was unclear about a few things. I've found a few scripts that imitate terminals, providing nearly the functionality that I'm looking for, such as AjaxPHPTerm (http://sourceforge.net/projects/ajaxphpterm/)
The problem is that I'm trying to find a method that permits interaction with shell scripts. If a script prompts "Press any key to continue" or "Select option [x]", AjaxPHPTerm just hangs or drops out of the shell script.
That's why I started thinking sockets, or streams; some way of forming a direct I/O stream to the system calls.
HTTP is stateless, and AJAX, sockets, or any other technology based on pages generated by the server will not change that magically. Whatever tricks you use, it will not be efficient and is simply not worth the effort (in my opinion, at least).
The problem seems to be that AjaxPHPTerm is actually closer to a shell than a terminal (glancing at the code, it seems to do its own CWD handling, and has a simple read-eval-print loop).
Assuming a Posix-compatible OS on the server, the proper way to implement this would probably be to use the pseudo-terminal facility, so that your web terminal appears like a virtual terminal on the system, that running programs can interactively access.
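In PHP terms that usually means proc_open(), which lets you keep an interactive child process open, feed its stdin and read its stdout. A rough sketch follows (the script name and the answers are hypothetical; note that plain pipes are not a real pty, so programs that check isatty() may still behave differently, and a common workaround is to wrap the command in something like `script -q /dev/null ...`):
<?php
// Sketch: keep an interactive shell script running and answer its prompts.
// "install.sh" and the responses are placeholders.
$descriptors = array(
    0 => array('pipe', 'r'),   // child stdin
    1 => array('pipe', 'w'),   // child stdout
    2 => array('pipe', 'w'),   // child stderr
);

$proc = proc_open('./install.sh', $descriptors, $pipes);
if (!is_resource($proc)) {
    die('could not start process');
}

stream_set_blocking($pipes[1], false);

fwrite($pipes[0], "\n");       // answer "Press any key to continue"
fwrite($pipes[0], "2\n");      // answer "Select option [x]"

usleep(500000);                // crude: give the script time to respond
echo stream_get_contents($pipes[1]);

fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);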
I need to run some JavaScript code on the server side using IE8
(the JavaScript works with ActiveX objects).
But I need to run it from the command line, from PHP.
So in short, I will install Apache + PHP on a Windows Server 2003 machine, and PHP will use system() to execute iexplore running a page of JavaScript.
I would like to know if this is logically possible, as I can see a number of pitfalls:
PHP might not be able to execute iexplore without a user logged in.
iexplore might not run the javascript correctly to interact with ActiveX objects
iexplore might not quit when JS finished running.
I will attempt to make a little test case as soon as I can, but any pointers about this approach will be appreciated.
Edit:
Now, I realise that this is a roundabout way of doing things (read: wrong). The goal was to make a Dymo label printer print from a central location rather than from client machines (this is where the JS comes from). The Dymo SDK provides several ways of interacting with their printers, but I'm still looking for a way to use pure PHP. I think it might be possible to use one of their example CLI binaries.
Does the Dymo have a way of interacting with it from the command line? If so, you can easily send commands to it via shell_exec(): http://www.php.net/manual/en/function.shell-exec.php
This is generally the easiest option when you are able to control something via command-line. Sometimes you need a bit more control, however (interactive command-line programs, for instance) and sometimes the program you want to run isn't even command-line based. In these cases you may need proc_open() (http://www.php.net/manual/en/function.proc-open.php) or exec() (http://www.php.net/manual/en/function.exec.php)
Just make sure that if you use exec() you redirect the output! Failing to do this can cause PHP to hang indefinitely.
From the PHP manual:
Note:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
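A short sketch of what that looks like for the printing case (the binary, the label file, and the log path are all placeholders):
<?php
// Sketch: run the print command with stdout/stderr redirected to a log file,
// so that, per the manual note above, PHP is not left hanging if the program
// keeps running. All paths are placeholders.
exec('C:\\DymoTools\\print_label.exe C:\\labels\\shipping.label > C:\\DymoTools\\print.log 2>&1', $out, $status);

if ($status !== 0) {
    // something went wrong; check print.log for details
    error_log('Label print failed with status ' . $status);
}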
Make sure to update your Service Packs and AntiVirus definitions. I can foresee many many many potential security issues here.
Keep in mind that JavaScript in IE runs with a webpage context. When you refresh/navigate pages, the old JavaScript execution state is wiped and a new one begins.
Was there a specific question here?
I want to run a php script from the command line that is always running and constantly updating a variable.
I then want any php script that is run in the meantime (probably but not necessarily from the web) to be able to read that variable at any time.
Anyone know how I can do this?
Thanks.
Here, you want some kind of inter-process communication mechanism.
You cannot use a PHP variable for that: these are local to the script they're in.
Which means you'll have to use some "external" tool to store your data, such as (to name only a few):
a file
a database (SQLite, MySQL, ...)
some shared-memory segment
In each case, you'll have:
One script that writes to the data-storage space -- i.e. your first, always-running script
One or many other scripts that will read from the data-store
You should write the variable to a file with the CLI script and read from that with the other script.
Be sure to use flock to prevent race conditions.
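A minimal sketch of both sides, assuming a shared path such as /tmp/current_value.txt:
<?php
// writer.php -- the always-running CLI script (path is an assumption)
$file = '/tmp/current_value.txt';
while (true) {
    $value = date('c');                  // whatever you are constantly updating
    $fp = fopen($file, 'c');
    if ($fp) {
        if (flock($fp, LOCK_EX)) {       // exclusive lock while writing
            ftruncate($fp, 0);
            fwrite($fp, $value);
            fflush($fp);
            flock($fp, LOCK_UN);
        }
        fclose($fp);
    }
    sleep(1);
}

<?php
// reader.php -- any other (web or CLI) script that wants the current value
$fp = fopen('/tmp/current_value.txt', 'r');
if ($fp) {
    if (flock($fp, LOCK_SH)) {           // shared lock while reading
        echo stream_get_contents($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
}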
You can write a PHP socket-based server script that will listen on a desired port.
Then your client PHP script can connect to it, either locally or from the web, and retrieve any data, including variables.
You can use any simple protocol, one designed by you or a well-known one like XML, to transfer the variables.
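A bare-bones sketch of such a server (the port is an assumption; a real one would need error handling and a proper message format):
<?php
// Sketch: tiny server that keeps $value updated and hands the current value
// to any client that connects. Port 9000 is an arbitrary assumption.
$server = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);
if (!$server) {
    die("$errstr ($errno)");
}

$value = 0;
while (true) {
    $value++;                                      // the constantly updated variable
    $client = @stream_socket_accept($server, 1);   // wait up to 1 second for a client
    if ($client) {
        fwrite($client, (string) $value);
        fclose($client);
    }
}
The client side is then just $fp = fsockopen('127.0.0.1', 9000); echo fgets($fp); from whatever script needs the value.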
Lots of ideas:
At set intervals it appends/writes to a file.
You use SQLite and write your data to it.
You use a small memcached service as your intermediary.
You go somewhat crazy and write a socket class, listen on a set port, then make non-blocking calls to check.
1-2 are probably the simplest
3 would work great if you need to query the value a lot
4 would be fun, but might not be worth the effort.
I'm using some PHP scripts from FeedForAll to join together RSS feeds (RSSmesh) and display them as HTML (RSS2HTML).
Because I intend to run these scripts fairly intensively and don't want the resulting HTTP requests and bandwidth to count towards my hosting quota, I am in the process of moving to running them on the web host's server in an umbrella PHP "batch" script, called via cron (this is a Linux server, by the way).
Here's a (working) sample request over HTTP:
http://www.mydomain.com/a/rss2htmlcore/rss2html2.php?XMLFILE=http://www.mydomain.com/a/myapp/xmlcache/feed.xml&TEMPLATE=template.html
This will produce the desired HTML output. An example of how I want this to work on the command line:
/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html
This is with the correct shebang line added to "rss2html2-cli.php". I could just as well specify the executable ("/usr/local/bin/php") in the request; I doubt it makes a difference, because I am able to run another script (that I wrote myself) either way without problems.
Now, RSS2HTML and RSSmesh are different in that, for starters, they include secondary files -- for example, both include an XML parser script -- and I suspect that this is where I am getting a bit in over my head.
Right now I'm calling exec() from the "umbrella" batch script, like so:
exec("/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html", $output)
But no output is being produced. What's the best way to go about this and what "gotchas" should I keep in mind? Is exec() the right way to approach this? It works fine for the other (simple) script but that writes its own output. For this I want to get the output and write it to a file from within the umbrella script if possible. I've also tried output buffering but to no avail.
Do I need to pay attention to anything specific with regard to the includes? Right now they're specified in the scripts as include_once("FeedForAll_XMLParser.inc.php"); and the specified files are indeed in the same folder.
Further info:
-This is a Linux server.
-I have no direct access to the shell, so I can't test things directly on a command line, everything is via crontab.
-I will admit that support for the FeedForAll scripts leaves a lot to be desired, but I'd like to keep using their scripts if at all possible, if only because I know them and have been using them for a while. I have looked into Simplepie, but the FFA scripts do some things that I've seen no obvious solutions for with Simplepie, like limiting the number of items per individual feed (RSSmesh) or limiting the description length (RSS2HTML).
-Yahoo! Pipes is out, they cache their data for too long for my application.
Should you want to take a look at the code, here are the scripts as txt files. RSS2HTML2 and RSSmesh are the FeedForAll scripts, FeedForAll_XMLParser... is the included parser. Note that I have not yet amended these to handle $argv etc. I have however in "scraper-universal-rss-cli", which works fine with CLI.
If anyone has any thoughts to share on this it would be very much appreciated. Thank you in advance.
I think the $hideErrors = 0; line in rss2html is not helping. Since isset() is used to check whether errors should be displayed, you should comment this line out rather than set it to zero; a variable set to 0 still counts as set for isset().
Re-run and see if it throws up some errors for you.
Use wget or curl to issue the request against the local web server. Don't use CLI.
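For example, from the umbrella cron script (the output path is a placeholder; the URL is the same one that already works over HTTP):
<?php
// Sketch: let the web server run rss2html2.php exactly as a browser would,
// and save the generated HTML to a file. The output path is a placeholder.
$url = 'http://www.mydomain.com/a/rss2htmlcore/rss2html2.php'
     . '?XMLFILE=' . urlencode('http://www.mydomain.com/a/myapp/xmlcache/feed.xml')
     . '&TEMPLATE=template.html';

exec('wget -q -O ' . escapeshellarg('/path/to/output/feed.html') . ' ' . escapeshellarg($url));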