I need to run JavaScript code on the server side using IE8
(the JavaScript works with ActiveX objects)
But I need to run it from the command line, from PHP.
So in short, I will install Apache + PHP on a Windows Server 2003 machine, and PHP will use system() to execute iexplore, which will load a page containing the JavaScript.
I would like to know if this is even feasible, as I can see a number of pitfalls:
PHP might not be able to execute iexplore without a user logged in.
iexplore might not run the JavaScript correctly enough to interact with the ActiveX objects.
iexplore might not quit when the JS finishes running.
I will attempt to put together a little test case as soon as I can, but any pointers about this approach will be appreciated.
Edit:
I realise now that this is a roundabout way of doing things (read: wrong). The goal was to make a Dymo label printer print from a central location rather than from client machines (which is where the JS comes from). The Dymo SDK provides several ways of interacting with their printers, but I'm still looking for a way to use pure PHP. I think it might be possible to use one of their example CLI binaries.
Does the Dymo have a way of interacting with it from the command line? If so, you can easily send commands to it via shell_exec(): http://www.php.net/manual/en/function.shell-exec.php
This is generally the easiest option when you are able to control something via the command line. Sometimes you need a bit more control, however (for interactive command-line programs, for instance), and sometimes the program you want to run isn't even command-line based. In these cases you may need proc_open() (http://www.php.net/manual/en/function.proc-open.php) or exec() (http://www.php.net/manual/en/function.exec.php).
Just make sure that if you use exec() you redirect the output! Failing to do so can cause the program to hang indefinitely.
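As a minimal sketch (the Dymo binary name, label file and log path here are hypothetical placeholders, not real SDK paths):

<?php
// Hypothetical Dymo CLI utility and label file -- substitute the real ones.
$cmd = '"C:\DymoSDK\Examples\PrintLabel.exe" "C:\labels\shipping.label"';

// Redirect stdout/stderr to a log file so the command's output can't
// leave PHP hanging on the pipe (see the manual note quoted below).
exec($cmd . ' > C:\logs\dymo.log 2>&1', $output, $exitCode);

echo $exitCode === 0 ? "Label sent\n" : "Print command failed (exit $exitCode)\n";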
From the PHP manual:
Note:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
Make sure to keep your service packs and antivirus definitions up to date. I can foresee many, many potential security issues here.
Keep in mind that JavaScript in IE runs within a webpage context. When you refresh or navigate between pages, the old JavaScript execution state is wiped and a new one begins.
Was there a specific question here?
Related
I am programming PHP applications where I need to move processing routines from the client (browser) to the server. We have our own dedicated Windows servers.
Let us say that a shopper buys something and the system has to generate a nice and complex invoice PDF (and do a lot of other things that take a few seconds), then send it to the client as quickly as possible. Right now I run these time-consuming routines in a hidden iframe and hope that the client doesn't break the routine by navigating to another page. It is not a good solution.
A much better solution would be to trigger some kind of software on the Windows server that does the processing instead of the browser (and does it instantly).
I could use "Scheduled Tasks" in Windows but the quickest it can run is each minute. I need something that can run instantly. Do you know what can do this on a Windows server? Like some kind callback server (software).
To make sure user navigation doesn't stop your script, you can use ignore_user_abort(true);
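A minimal sketch of the top of such a long-running script (the sleep() just stands in for the slow invoice work):

<?php
// Keep running even if the user navigates away or closes the browser.
ignore_user_abort(true);
set_time_limit(0); // also lift PHP's execution time limit for long jobs

// Stand-in for the slow invoice generation described above.
sleep(10);
file_put_contents('invoice.log', 'Invoice generated at ' . date('c') . PHP_EOL, FILE_APPEND);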
You can also use exec() to start a background process that runs your PHP script (on Windows, start /B launches it without opening a new console window).
Example
exec("start /B php Notify.php");
To learn more about the exec() function, see http://www.php.net/manual/en/function.exec.php
You can find more information by searching online.
I have been looking at Node.js vs PHP, and I have found many comparisons of the two server technologies. Most people suggest that the JavaScript V8 engine is much faster than PHP in terms of the raw execution speed of a single script.
I have worked on some JavaScript code for Node.js, and now I have an idea that I'm not sure is correct.
In my opinion, Node.js runs a JavaScript application and listens on a port, so this JavaScript application is a program running on the server computer. Therefore, the application code is all kept in the computer's memory: things like global variables are declared and stored when Node.js first executes the program, so any new request that comes in can use those variables very efficiently.
In PHP, however, the engine executes a *.php file per request. So if a request is for www.xx.com/index.php, the program will execute index.php, which may contain something like
require("globalVariables.php");
and php.exe would then load that file and declare those variables all over again. The same goes for functions and other objects...
So am I correct in thinking that PHP may not be a good fit when many libraries need to be included on every request?
I have searched for comparisons, but nobody seems to have discussed this.
Thanks
You are comparing different things. PHP depends on a web server such as Apache or nginx to serve scripts, while Node.js is a complete server itself.
That's a big difference, because when you load a PHP page, Apache will spawn a thread and run the script there, whereas in Node all requests are served by Node.js's single thread.
So PHP and Node.js are different things, but regarding your concern: yes, you can maintain a global context in Node that stays loaded in memory all the time, while PHP loads, runs and exits on every request. But that's not the typical bottleneck: Node.js web applications also have templates that must be loaded and parsed, database calls, file access... The real difference is the way Node.js handles heavy tasks: a single thread for JavaScript, an event queue, and external threads for the filesystem, the network and all that slow stuff. Traditional servers spawn a thread per connection, which is a very different approach.
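To make the PHP side concrete, here is a tiny sketch of how per-request state behaves (assuming no shared-memory cache such as APCu is in play):

<?php
// counter.php -- PHP re-executes this file from scratch on every request,
// so $counter is re-declared each time, just like the require() example above.
$counter = 0;

$counter++;
echo "counter = $counter"; // prints "counter = 1" on every request

// A Node.js app, by contrast, keeps module-level state in memory for as
// long as the process lives, so an equivalent counter would keep growing.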
I'm considering the idea of a browser-based PHP IDE and am curious about the possibility of emulating the command line through the browser, but I'm not familiar enough with developing tools for the CLI to know if it's something that could be done easily or at all. I'd like to do some more investigation, but so far haven't been able to find very many resources on it.
From a high level, my first instinct is to set up a text input which would feed commands to a PHP script via AJAX and return any output onto the page. I'm just not familiar enough with the CLI to know how to interface with it in that context.
I don't need actual code, though that would be useful too; I'm mostly looking for which functions, classes, or APIs I should investigate further. Ideally, I would prefer something baked into PHP (assume PHP 5.3) and not a third-party library. How would you tackle this? Are there any resources or projects I should know about?
Edit: The use case for this would be a localhost or development server, not a public facing site.
Call this function through RPC or a direct POST from JavaScript, and have it do things in this order (a rough sketch follows after these steps):
Write the PHP code to a file (with a random name) in a folder (with a random name), where it will sit alone, execute, and then be deleted at the end of execution.
The current PHP process should not run the code in that file itself. Instead it must have exec permissions (safe_mode off) and launch a separate interpreter: exec('php -c /path/to/security_tight/php.ini /path/to/that/file.php') (see php -? for the options).
Catch any output and send it back to the browser; that way you are protected from any weird errors. Instead of exec() I recommend popen(), so you can kill the process and manually control the timeout while waiting for it to finish (if you kill the process, you can easily send an error back to the browser).
You need lax/normal security (same as the entire IDE backend) for the normal PHP process which runs when called through the browser.
You need strict and paranoid security for the php.ini and php process which runs the temporary script (go ahead and even separate it on another machine which has no network/internet access and has its state reverted to factory every hour just to be sure).
Don't use eval(); it is not suitable for this scenario. An attacker could break out into your application and use your current permissions and variable state against you.
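A rough sketch of the steps above (the hardened php.ini path is the hypothetical one from step 2; proc_open() is used here rather than popen() because it exposes the process handle for proc_terminate()):

<?php
// Random folder and file that will hold the snippet, then be deleted.
$dir  = sys_get_temp_dir() . '/sandbox_' . uniqid();
mkdir($dir, 0700);
$file = $dir . '/snippet.php';
file_put_contents($file, $_POST['code']); // code submitted by the IDE frontend

// Run the snippet under the locked-down php.ini in a separate process.
$cmd  = 'php -c /path/to/security_tight/php.ini ' . escapeshellarg($file) . ' 2>&1';
$proc = proc_open($cmd, array(1 => array('pipe', 'w')), $pipes);

stream_set_blocking($pipes[1], false);
$output = '';
$start  = time();
$status = proc_get_status($proc);
while ($status['running']) {
    $output .= stream_get_contents($pipes[1]);
    if (time() - $start > 5) {     // manual timeout control
        proc_terminate($proc);     // kill the runaway snippet
        $output .= "\n[killed: 5 second timeout]";
        break;
    }
    usleep(100000);
    $status = proc_get_status($proc);
}
$output .= stream_get_contents($pipes[1]); // drain whatever is left

proc_close($proc);
unlink($file);
rmdir($dir);

echo $output; // back to the browser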
The basic version would be
Your script outputs a form with a line input.
The form action points to your script.
The script takes the input from the form and passes it to eval().
Any output from eval() is passed back to the browser.
The form is output again.
The problem is that defined functions and variables are lost between requests.
What you could do is append each entered line to your session. Let's say:
$inputline = $_GET['line'];
$_SESSION['script'] .= $inputline . PHP_EOL;
eval($_SESSION['script']);
This way, on each request the full accumulated PHP script is executed (and of course you will get the full output).
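Putting the pieces together, a minimal (and deliberately unprotected) sketch of such a page could look like this:

<?php
// Toy REPL: accumulates every submitted line and re-runs the whole script.
// No security whatsoever -- suitable for a localhost development box only.
session_start();

if (!isset($_SESSION['script'])) {
    $_SESSION['script'] = '';
}

if (isset($_GET['line'])) {
    $_SESSION['script'] .= $_GET['line'] . PHP_EOL;
    ob_start();
    eval($_SESSION['script']); // re-executes everything entered so far
    echo '<pre>' . htmlspecialchars(ob_get_clean()) . '</pre>';
}
?>
<form method="get">
    <input type="text" name="line" size="80">
    <input type="submit" value="Run">
</form>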
Another option would be to create some kind of daemon (basically an instance of a php -a call) that runs in the background on the server, takes your input from the browser, and passes back the output.
You could connect this daemon to two FIFO devices (one for input and one for output) and communicate via simple fopen() calls.
For each user that is using your script, a new daemon process has to be spawned.
Needless to say, that it is important to secure your script against abuse.
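A sketch of the FIFO plumbing on the web-facing side, assuming the daemon was started elsewhere and is already reading /tmp/php_repl_in and writing /tmp/php_repl_out (both hypothetical paths; note that fopen() on a FIFO blocks until the other end has opened it):

<?php
// Create the two FIFOs once (e.g. when the daemon is spawned).
// posix_mkfifo() is available on POSIX systems only.
foreach (array('/tmp/php_repl_in', '/tmp/php_repl_out') as $fifo) {
    if (!file_exists($fifo)) {
        posix_mkfifo($fifo, 0600);
    }
}

// Web-request side: hand one line of input to the daemon...
$in = fopen('/tmp/php_repl_in', 'w');
fwrite($in, $_GET['line'] . PHP_EOL);
fclose($in);

// ...then read one line of the daemon's output back.
$out = fopen('/tmp/php_repl_out', 'r');
echo fgets($out);
fclose($out);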
Recently I read about php.js, a PHP interpreter written in JavaScript, so you could write and execute PHP code using only your browser. I'm not sure if this is what you need in the end, but it sounds interesting.
We've tested some products at my university for SSH access to our lab servers and used some web-based SSH tools - they basically do exactly what you want. The Shell In A Box project can be bound to any interpreter you like and could be used with an interactive PHP interpreter (on the demo page they used a BASIC interpreter). The project could serve as the basis for a true PHP IDE. Tools like these have the advantage of working with any console-based editor (e.g. vi, emacs or nano) and of accepting administrative commands (e.g. creating folders, changing ownership or ACLs, or restarting a service).
Mozilla also has a full-featured web-based IDE called Bespin, which is highly extensible and configurable.
As you stated that the page is not for the public, you will of course have to protect it with authentication and SSL to combat session hijacking.
I'm trying to determine the best approach to providing an Ajax-based terminal using PHP. I haven't made an attempt at writing it yet, but having rolled the idea around, the only way I can see it working would be with 2 scripts:
Script 1 handles Ajax communication between the server and the client browser. When a request is made to use the terminal, it connects to (or starts as a service and then connects to) Script 2 via a socket.
Script 2 performs the system calls, passing output back to the Ajax script via the socket.
There are multiple holes I can see in this though, and I'm wondering if anyone has created/seen a set of scripts that can perform these tasks? Any insight would be greatly appreciated!
Thanks :)
Edit: I think I was unclear about a few things. I've found a few scripts that imitate terminals and provide nearly the functionality I'm looking for, such as AjaxPHPTerm (http://sourceforge.net/projects/ajaxphpterm/)
The problem is that I'm trying to find a method that permits interaction with shell scripts. If a script prompts Press any key to continue or Select option [x], AjaxPHPTerm just hangs or drops out of the shell script.
That's why I started thinking about sockets or streams: some way of forming a direct I/O stream to the system calls.
HTTP is stateless, and AJAX, sockets, or any other technology based on server-generated pages will not magically change that. Whatever tricks you use, it will not be efficient and is simply not worth the effort (in my opinion, at least).
The problem seems to be that AjaxPHPTerm is actually closer to a shell than a terminal (glancing at the code, it seems to do its own CWD handling and has a simple read-eval-print loop).
Assuming a POSIX-compatible OS on the server, the proper way to implement this would probably be to use the pseudo-terminal (pty) facility, so that your web terminal appears to the system as a virtual terminal that running programs can interactively access.
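PHP does not expose a pty API out of the box, but as a rough sketch of the direction, proc_open() at least gives you separate stdin/stdout handles to an interactive child process (programs that check isatty() will still not see a real terminal, and the script path below is a hypothetical placeholder):

<?php
// Sketch: drive an interactive child process through plain pipes.
$spec = array(
    0 => array('pipe', 'r'), // child's stdin
    1 => array('pipe', 'w'), // child's stdout
    2 => array('pipe', 'w'), // child's stderr
);
$proc = proc_open('/bin/sh /path/to/some_script.sh', $spec, $pipes);

fwrite($pipes[0], "x\n");  // answer a "Select option [x]" prompt
fclose($pipes[0]);         // signal end of input

echo stream_get_contents($pipes[1]); // relay the output to the browser
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);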
I'm using some PHP scripts from FeedForAll to join together RSS feeds (RSSmesh) and display them as HTML (RSS2HTML).
Because I intend to run these scripts fairly intensively and don't want the resulting HTTP requests and bandwidth to count towards my hosting quota, I am in the process of moving them to run on the web host's server in an umbrella PHP "batch" script, called via cron (this is a Linux server, by the way).
Here's a (working) sample request over HTTP:
http://www.mydomain.com/a/rss2htmlcore/rss2html2.php?XMLFILE=http://www.mydomain.com/a/myapp/xmlcache/feed.xml&TEMPLATE=template.html
This will produce the desired HTML output. An example of how I want this to work on the command line:
/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html
This is with the correct shebang line added to "rss2html2-cli.php". I could just as well specify the executable ("/usr/local/bin/php") in the call; I doubt it makes a difference, because I am able to run another script (that I wrote myself) either way without problems.
Now, RSS2HTML and RSSmesh are different in that, for starters, they include secondary files -- for example, both include an XML parser script -- and I suspect that this is where I am getting a bit in over my head.
Right now I'm calling exec() from the "umbrella" batch script, like so:
exec("/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html", $output)
But no output is being produced. What's the best way to go about this and what "gotchas" should I keep in mind? Is exec() the right way to approach this? It works fine for the other (simple) script but that writes its own output. For this I want to get the output and write it to a file from within the umbrella script if possible. I've also tried output buffering but to no avail.
Do I need to pay attention to anything specific with regard to the includes? Right now they're specified in the scripts as include_once("FeedForAll_XMLParser.inc.php"); and the specified files are indeed in the same folder.
Further info:
-This is a Linux server.
-I have no direct access to the shell, so I can't test things directly on a command line, everything is via crontab.
-I will admit that support for the FeedForAll scripts leaves a lot to be desired, but I'd like to keep using them if at all possible, if only because I know them and have been using them for a while. I have looked into SimplePie, but the FFA scripts do some things for which I've seen no obvious SimplePie equivalent, like limiting the number of items per individual feed (RSSmesh) or limiting the description length (RSS2HTML).
-Yahoo! Pipes is out, they cache their data for too long for my application.
Should you want to take a look at the code, here are the scripts as txt files. RSS2HTML2 and RSSmesh are the FeedForAll scripts, FeedForAll_XMLParser... is the included parser. Note that I have not yet amended these to handle $argv etc. I have however in "scraper-universal-rss-cli", which works fine with CLI.
If anyone has any thoughts to share on this it would be very much appreciated. Thank you in advance.
I think the $hideErrors = 0; line in rss2html is not helping. Since isset() is used to check whether errors should be displayed, you should comment that line out: setting it to zero does nothing, because a variable set to 0 still counts as set for isset().
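To illustrate the isset() point:

<?php
$hideErrors = 0;
var_dump(isset($hideErrors)); // bool(true) -- a variable set to 0 is still "set"

unset($hideErrors);           // commenting the line out has this effect
var_dump(isset($hideErrors)); // bool(false)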
Re-run and see if it throws up some errors for you.
Use wget or curl to issue the request against the local web server; don't use the CLI.
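For example, the umbrella cron script could fetch the page over HTTP and save the result (the URL is the working one from the question; the output file name is a hypothetical choice):

<?php
// Issue the request through the web server, exactly as a browser would,
// so the scripts' include_once() paths resolve the way they already do.
$url = 'http://www.mydomain.com/a/rss2htmlcore/rss2html2.php'
     . '?XMLFILE=' . urlencode('http://www.mydomain.com/a/myapp/xmlcache/feed.xml')
     . '&TEMPLATE=template.html';

$html = shell_exec('wget -q -O - ' . escapeshellarg($url));
// or, if allow_url_fopen is enabled, skip the shell entirely:
// $html = file_get_contents($url);

file_put_contents('/srv/customers/mycustomer#/mydomain.com/www/a/myapp/output.html', $html);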