I often use PHP to script more complex maintenance tasks, usually run from the CLI or as cron jobs. I've had decent success with prompts being passed through, and it works with exec() and passthru().
I never really thought about having web access to these scripts, until something in the documentation made me think it may be possible/useful.
However, almost all of the documentation talks about the output of commands; there is slim-to-no information about [CLI] prompts.
It's probably a bad idea in the first place, because it would likely end up boosting the privileges of the web server user (www-data) inadvertently, right?
This must be a solved problem, but Googling hasn't found me an answer.
I have an existing program with an interactive, text-only, command-line interface. I'd like to run this program on a web server so that each session gets a unique session of the CLI program, and each time a command is typed into the web page the corresponding response gets sent back.
I sorta-kinda know how to do this if the program were run fresh each time; what I'm looking for is how to do this so that the program keeps running (presumably in its own process), and the web page gets updates.
It'll be on a Linux box. Security hints would be helpful -- I'm hoping that I can run the program itself as a user that's tailored to have just enough privileges to actually run, but as little as possible otherwise to keep a hacker from getting in.
Try https://seashells.io/
Seashells lets you pipe output from command-line programs to the web in real-time, even without installing any new software on your machine. You can use it to monitor long-running processes like experiments that print progress to the console. You can also use Seashells to share output with friends!
$ echo 'Hello, Seashells!' | nc seashells.io 1337
serving at https://seashells.io/v/{random url}
Here is another article explaining Seashells.
As another alternative, if you only want to see the result of the command, you can use websocketd.
You could use this:
https://www.tecmint.com/shell-in-a-box-a-web-based-ssh-terminal-to-access-remote-linux-servers/
Then, you'd need to configure a user with limited rights that automatically launches the desired application. Make sure you understand Linux security before you do this. :)
Here is some information on restricting SSH sessions that you may find useful for this objective:
http://www.linuxjournal.com/article/8257
It looks like at least a partial answer is just basic Linux-ness: I stumbled across named pipes (which I knew about, dangit!). After testing, I can set up a pair of pipes, and then run my command as
tail -f in_pipe.fifo | my_command > out_pipe.fifo
Then the transient commands will just need to open the relevant pipes, talk to them, close, and exit. I'll still need to manage the lifetime of the above command (no doubt timers will be involved, and magic), but that's a question for another posting.
At this point I can open three terminal windows: one with the above command running, one with tail -f out_pipe.fifo running, and one in which I type things of the form echo line of input > in_pipe.fifo, and it all works. I'm sure that (A) I can work this out from here, and (B) it'll be ugly.
I did have to get into the source code for the program and make sure that the output buffer was flushed (using fflush(stdout)) in the C-code. (I'm working from some vintage source code; at this point I'm trying to do the minimal amount of work to make it compile and run -- every line changed is an opportunity for a bug, and I'm not yet familiar with this code base).
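For the web side of this, each transient request only needs to open the two pipes, send a line, and read back the reply. A minimal PHP sketch, assuming the same in_pipe.fifo / out_pipe.fifo names as above and that the long-running command is already attached to them; a real version would need timeouts and a proper end-of-response convention:
<?php
// Hypothetical transient handler: one line of input in, one line of output back.
// Assumes the FIFOs already exist and the command above is attached to both ends.
$in  = fopen('in_pipe.fifo', 'w');   // blocks until the tail -f reader is attached
$out = fopen('out_pipe.fifo', 'r');  // blocks until the command's writer is attached

fwrite($in, $_POST['line'] . "\n");  // $_POST['line'] is an assumed input field
fclose($in);

// fgets() blocks until the program prints something; a real handler needs a timeout.
$response = fgets($out);
fclose($out);

header('Content-Type: text/plain');
echo $response;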
We need to distribute software that contains a PHP script which will run for some minutes. Therefore I am looking for the best-practice way to do this in 2017.
It has to be invoked by an HTTP request, but no HTTP request should be left waiting for minutes, so the script has to keep running AFTER the visitor has received the HTTP response.
It also has to run periodically (every night), and it should do so by default, like a cron job. Note: since the software is going to be distributed to clients, there is no way for us to add a cron job manually (we have no access to our clients' servers). Everything should be accomplished within PHP code.
(Please note that I have read existing blog posts and Stack Overflow questions myself, but I could not find a satisfying answer.)
Maybe someone knows how frameworks like Symfony and Laravel, or webshops like Magento, accomplish such tasks? Still, I want to know how to do it myself in plain PHP without using frameworks or libraries.
Many solutions exist:
using exec() (rather insecure) to trigger a background job (recommended in the comments; I would probably prefer symfony/process, but it is insecure nevertheless).
using cron to trigger a symfony process every so often; this does not go over HTTP, so it is far more secure.
using PHP-FPM, where you can send a response without stopping the process by calling fastcgi_finish_request().
using a queue system (SQS, RabbitMQ, Kafka and so on).
using a cron manager in PHP
using a daemon and something like supervisord to make sure it runs continuously (a minimal worker loop is sketched after this list).
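To make the daemon option concrete: in plain PHP that is essentially an endless worker loop that supervisord keeps alive. A minimal sketch, assuming a simple spool directory as the "queue"; swap in SQS/RabbitMQ/a database table as needed:
<?php
// worker.php - run this under supervisord (or similar) so it is restarted if it dies.
// The spool directory of *.job files is only an illustrative stand-in for a real queue.
$spool = __DIR__ . '/spool';

while (true) {
    $jobs = glob($spool . '/*.job');
    if (empty($jobs)) {
        sleep(5);                       // nothing queued, back off briefly
        continue;
    }
    foreach ($jobs as $file) {
        $payload = file_get_contents($file);
        // ... do the long-running work for $payload here ...
        unlink($file);                  // remove the job once it is done
    }
}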
The best solutions are definitely queues and cron, then PHP-FPM; the rest of it is just rubbish.
There is absolutely no way for you to run this on someone else's server without doing something that won't work at some point.
Side note: you said you did not want libraries because you want to know how to do it yourself. I added links to libraries anyway, as reading them may give you a deeper knowledge of the technology; these libraries are really high quality.
Magento only runs its cron jobs if you set up a regular cron job for Magento. It ships a cron.sh that runs every minute and executes the jobs in Magento's queue.
Any solution for executing long-running tasks via HTTP involves web-server configuration.
Finally, I think there are two ways to start a long-running process in PHP via an HTTP request (without making the user wait for a long time):
Use FPM and send the response to the user with fastcgi_finish_request(). After the response is sent you can do whatever you want, for example long-running tasks. (Here you don't have to start a new process; just continue in PHP.)
fastcgi_finish_request();
longrunningtask();
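A slightly fuller sketch of the same FPM idea; ignore_user_abort() and set_time_limit() are additions not shown above, but without them the remaining work can still be cut short by the client disconnecting or by max_execution_time:
ignore_user_abort(true);   // keep going even though the client has its response
set_time_limit(0);         // lift the execution time limit for the remaining work

echo "Job started, check back later.";
fastcgi_finish_request();  // flush the response to the client (PHP-FPM only)

longrunningtask();         // continues in the same PHP process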
Create a new process using one of the following functions. Redirect STDOUT and STDERR to /dev/null and put the process in the background (you have to do both). To still get output from the new process, it can write to a log file.
exec('php longrunningtask.php >/dev/null 2>/dev/null &');
shell_exec('php longrunningtask.php >/dev/null 2>/dev/null &');
system('php longrunningtask.php >/dev/null 2>/dev/null &');
passthru('php longrunningtask.php >/dev/null 2>/dev/null &');
Or use proc_open() like symfony/process does.
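For completeness, a rough proc_open() sketch of the same fire-and-forget pattern (this is not what symfony/process does internally, just a plain-PHP illustration using the same hypothetical longrunningtask.php):
// Point the child's STDIN/STDOUT/STDERR at /dev/null so nothing can block on I/O.
$descriptors = [
    0 => ['file', '/dev/null', 'r'],
    1 => ['file', '/dev/null', 'w'],
    2 => ['file', '/dev/null', 'w'],
];

$process = proc_open('php longrunningtask.php &', $descriptors, $pipes);

if (is_resource($process)) {
    // The trailing & backgrounds the task in the shell, so this returns right away.
    proc_close($process);
}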
Notes:
symfony/process: in the docs you can read that you can use FPM and fastcgi_finish_request() for long-running tasks.
Security: I can see no built-in security risk; you just have to do things right and then everything is fine (you could use password protection, validate possible inputs to your commands, etc.).
For the cron-job part of the question there is no answer; I don't think it's possible.
What are the downsides of executing a Python script from PHP?
Also, how is it different from executing Python through the CGI method?
I found this interesting method in Running python script from php and I thought it would be great to just use the
exec("python ../cgi-bin/form.py");
and closely-related methods.
Please explain properly and tell me what we have to keep in mind when using this method.
Your problem is very common, and in general it's not about executing Python scripts but about executing external commands. To do that, some conditions need to be fulfilled:
Normally, PHP is run by the web server. So to execute some script, the web server must be able to do that. That means the OS user under which the web server was launched must have enough permissions to execute the external command.
In many cases, external execution functions like exec() or system() are treated as unsafe and are therefore commonly disabled (I'm speaking about shared hosting). Relying on those functions will make your application's technical requirements stricter, and you won't be able to use such hosts.
Besides the above, the PHP script will "hang" until all data has been passed back from exec() to the script. That means slow execution and a poorly predictable response time. Moreover, on Windows systems the execution of external scripts is counted towards the total script execution time (unlike on *nix systems), so you have a good chance of hitting a time-limit error if your external script takes too long to respond.
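If you do go the exec() route, at the very least escape anything user-supplied before it reaches the shell and check the exit code. A minimal sketch, reusing form.py from the question; $_GET['name'] is just an assumed input:
// Escape user-supplied data before it is interpolated into a shell command.
$userInput = $_GET['name'] ?? '';
$cmd = 'python ' . escapeshellarg('../cgi-bin/form.py') . ' ' . escapeshellarg($userInput);

// exec() blocks until the script finishes (see the point about "hanging" above).
exec($cmd . ' 2>&1', $output, $exitCode);

if ($exitCode !== 0) {
    // Non-zero exit code: the Python script failed; $output holds stdout and stderr.
    error_log('form.py failed: ' . implode("\n", $output));
}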
If you want some "comparison" with launching the Python script via CGI, then you should choose CGI, if only because it is intended to serve that purpose. Launching the Python script via CGI will definitely win in terms of speed, because there is no overhead of launching a PHP script (and you may, for example, disable PHP support entirely if you only want to use Python). The permission level will generally not be a problem, since in the end the executable is launched as the web-server user, so it is the same in both cases. And, as mentioned above, launching via CGI will not bind you to PHP's time limits; you won't care about what's happening in PHP.
So the main idea is: yes, it is a way to launch the script. But no, it's not something you should do if you can do it natively via a CGI launch.
I have a script that updates some columns in my database. It is written in PHP, and I execute it via URL (e.g. http://foo.com/xyz/yzx/dbupt8r). I did this using crontab -e and then curl on the script URL because, in my mind, it is similar to what I am already doing: accessing it via URL. Is there any advisable or better way of doing this? Am I exposed to any security threats or flaws?
There are two ways to do this: the way you're already doing it (curling a publicly accessible URL), or executing the PHP script directly from your crontab.
Cron Curling
As you mentioned, this is often very convenient and comfortable since you're already developing a web application in PHP and so it's the way you're already working. There are a few risks:
Most importantly, security: You'll want to ensure that input is properly filtered, there are no risks of SQL injection, etc., in case someone discovers the URL and tries to take advantage of it. Hopefully you're covering most of this anyway since you're developing a web application.
Frequency & concurrency: You're scheduling its execution from cron, so are there any issues if someone else starts calling the URL and making it fire more frequently, or at the same time as a scheduled run is occurring? (A simple lock-file guard is sketched after this list.)
Relying on curl: It also means you're relying on curl to execute your script, so you're opening yourself up to many points of failure (curl itself, DNS, etc.).
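For the frequency & concurrency risk above, the usual plain-PHP guard is an exclusive lock at the top of the dbupt8r script, so a second invocation (scheduled or not) exits instead of running in parallel. A minimal sketch; the lock-file path is only an example:
<?php
// Take an exclusive, non-blocking lock; if another run already holds it, bail out.
$lock = fopen('/tmp/dbupt8r.lock', 'c');   // example path, pick something app-specific
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another run is already in progress.\n");
}

// ... perform the database updates here ...

flock($lock, LOCK_UN);
fclose($lock);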
Running PHP from Cron
Alternatively, you may be able to run the script directly from your crontab. There are two ways of doing this:
Passing the PHP script to the PHP interpreter binary, which would look something like this (note the path to your PHP binary varies by platform, but should be specified as an absolute path as cron doesn't have access to many environment variables):
*/15 * * * * /usr/bin/php -f /path/to/php/script.php
Alternatively, you can add a hashbang/shebang line to the first line of the PHP script as follows:
#!/usr/bin/php
Make it executable, for example:
chmod 755 /path/to/php/script.php
And add it directly to your crontab:
*/15 * * * * /path/to/php/script.php
The advantages of this method are that you can put the script in a location that's not publicly accessible so you can ensure tighter control over its access & execution. It may also mean you can write lighter code if you don't have to handle the web side of things. That said, if you're using a PHP framework, you may find it difficult to develop a stand-alone script such as this.
You can always run it using the php command. Have your crontab run a "/path/to/script.sh" that contains:
#!/bin/bash
cat "/path/to/phpscript.php" | php -e
You can have it save the output if you want. You could also have cron run "php -f /path/to/script.php".
It depends on what you have access to. Personally, I wouldn't like to depend on an external curl script for required periodic jobs. One of the downsides to this approach is that you risk giving the whole world permission to run your dbupt8r script. Please bear in mind that you can run PHP scripts outside the context of a web server, so you could create a cron job on the web server that does
php /my/folder/dbupt8r.php
In this case, your periodic job will run regardless of whether the web server is available and without any risk of exposing it to the outside world.
Calling a URL exposes you to timeout problems, which could lead to transaction errors in your database. I suggest you use the command-line interface (CLI) for this kind of process.
I am writing an automated build system to do nightly builds of our code.
Under normal conditions everything works fine, but some of the processes in the build can require user input; if a developer commits a change that opens up one of these, the automated builds grind to a halt.
Does anyone know of a way of causing reads from STDIN to fail, so any process that tries this will (hopefully) fail with an error? Right now I only need a solution for Linux (Ubuntu), but the system also has to run on Windows.
FYI: The automated build system is written in PHP, and in the case where this is (currently) a problem it is using buildroot to do the compilation.
Read from /dev/null. It will always return EOF. How you achieve this depends on how your build system is set up. A command-line app can use < /dev/null, of course.
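Since the build system is PHP, the redirect can go straight into the command string that launches each build step. A minimal sketch; the make invocation is only a stand-in for whatever buildroot step currently blocks on input:
// Hypothetical build step: with stdin redirected from /dev/null, any read from
// STDIN sees EOF immediately instead of waiting for a user at the keyboard.
exec('make -C /path/to/buildroot < /dev/null 2>&1', $output, $exitCode);

if ($exitCode !== 0) {
    echo "Build step failed (exit code $exitCode):\n" . implode("\n", $output) . "\n";
}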