Background script spawned by a PHP command-line script that prints to the screen - php

I need to make a background script that is spawned by a PHP command-line script and that echoes to the SSH session. Essentially, I need to run the equivalent of this Linux command:
script/path 2>&1 &
If I just run this command in Linux, it works great. The output is still displayed to the screen, but I can still use the same session for other commands. However, when I do this in PHP, it doesn't work the same way.
I've tried:
`script/path 2>&1 &`;
exec("script/path 2>&1 &");
system("script/path 2>&1 &");
...And none of these work. I need PHP to spawn the process and then exit so that the session is freed up, but I still want the output from the child process to print to the screen.
(please comment if this is unclear... I had a hard time putting this into words :P)

I came up with a solution that works in this case.
I created a wrapper bash script that starts the PHP script, which in turn spawns a child script whose output is redirected to a file; the bash wrapper then tails that file.
Here is the bash script I came up with:
#!/bin/bash
# Run the PHP script, forwarding any arguments given to this wrapper.
php php_script.php "$@"
# Clean up leftover "tail" processes from previous runs of this wrapper.
ps -ef | grep php_script.log | grep -v grep | awk '{print $2}' | xargs kill > /dev/null 2>&1
if [ "$1" != "-stop" ]
then
    # Follow new output appended to the child's log (skip existing content).
    tail -n 0 -f php_script.log &
fi
(it also cleans up "tail" processes that are still running so that you don't get a gazillion processes when you run this bash script multiple times)
And then, in your PHP script, you spawn the long-running child like this (its output is redirected to the log file that the wrapper tails; script/path is the child script from the question):
exec("script/path >> php_script.log 2>&1 &");
This way the parent PHP script exits without killing the child script, you still get the output from the child script, and your command prompt is still available for other commands.
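For reference, here is a minimal sketch of what the parent PHP script might look like under this scheme (the child path script/path is taken from the question; the log file name matches the wrapper above):
<?php
// php_script.php (parent): spawn the long-running child detached, then exit.
// The child's output goes to the log that the bash wrapper tails.
exec("script/path >> php_script.log 2>&1 &");
// Nothing else to do; when this script returns, the wrapper starts tailing.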

Related

How to run a PHP script file in the background forever

My issue seems to have been asked before, but hold on, this one is a bit different.
I have 2 PHP files, and I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 PHP processes.
My problem is that sometimes one of the files, or even both of them, is killed by the server for an unknown reason, and I have to enter the commands all over again. I tried 'forever', but it doesn't help.
If the server is rebooted, I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure if it would launch the scripts twice, which would create more confusion.
My question is: how do I automatically restart the files if one or both of them get killed? What is the best way to check that file_1.php or file_2.php is indeed running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks whether your script is running and, if not, starts it. Here's one I use.
#!/bin/bash
FIND_PROC=`ps -ef | grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}'`
# if FIND_PROC is empty, the process has died; restart it
if [ -z "${FIND_PROC}" ]; then
    # resolve the directory this watchdog script lives in
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    echo queue_agent.php failed at `date`
    cd ${DIR}
    nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
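To run this watchdog every minute, a crontab entry along these lines works (the watchdog path is hypothetical):
* * * * * /bin/bash /path/to/watchdog.sh > /dev/null 2>&1
Alternatively, if you go the supervisord route, a minimal program block (program name, paths, and log locations are hypothetical) that starts the script at boot and restarts it whenever it dies could look like this:
[program:file_1]
command=php /path/to/file_1.php
autostart=true
autorestart=true
stdout_logfile=/var/log/file_1.out.log
stderr_logfile=/var/log/file_1.err.log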
As a first step, I think you can try to figure out why these two PHP scripts shut down. To investigate, you can use this PHP function:
void register_shutdown_function(callable $callback [, mixed $parameter [, mixed $... ]])
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the PHP files shut down, like this:
function myLogFunction() {
    // log some info here; error_get_last() reports the last error, if any
    error_log("script shut down: " . print_r(error_get_last(), true));
}
register_shutdown_function('myLogFunction');
Instead of sending the standard output and error output to /dev/null, you can send them to a log file (since the output may contain some helpful info). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try (with a separate log file per script, so the two don't overwrite each other):
nohup php file_1.php 2> file_1.log &
nohup php file_2.php 2> file_2.log &
If you want to autorun these two PHP files when the server boots, try editing /etc/rc.local (which runs automatically when the OS starts up). Add your PHP CLI command lines to this file.
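For example, a sketch of the relevant rc.local lines (paths and log names are hypothetical; rc.local typically needs to keep its final exit 0 line):
# inside /etc/rc.local, before the final "exit 0"
nohup php /path/to/file_1.php > /path/to/file_1.log 2>&1 &
nohup php /path/to/file_2.php > /path/to/file_2.log 2>&1 &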
If you can't figure out why the PHP processes get shut down, try supervisord, as @Chris Haas mentioned.

Create Linux Screen session and get its PID

I'm trying to create a PHP wrapper class around the Linux Screen utility.
I need to be able to start a screen session by name and get back the PID of the screen that was created.
I cannot assume the session name is unique, so I can't parse the screen -ls output to find the PID.
Based on examples and suggestions around the internet, I have tried the following approach using PHP's exec() function:
screen -D -m -S 'screen_name' 2>&1 &
The -D parameter tells screen not to fork the process, so it has the same PID as the backgrounded command.
So then I can parse the PID from the output of the & background shell operator, which is in this format:
[<number>] <pid>
This command works from the Linux shell (terminates immediately), but when I run it from PHP it freezes and the browser never loads the page.
I have tried executing it with both PHP's exec() and shell_exec() and it's the same result.
I have also tried lots of different combinations with the last part of the command such as the ones described HERE.
Edit:
If I use this command:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & 2>&1
Then the PHP exec() works. It starts the screen session but then I don't get any output from the & background operator. I'm assuming it's because I'm redirecting the stdout and stderr to /dev/null but as far as I know it should only be doing that for the screen command, not the & operator.
ALMOST SOLUTION:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & echo $!
I realized that the text showing in the terminal from the & background operator wasn't actually coming from the command stdout or stderr. It's the shell displaying it. So I added echo $! after the & operator which will print the PID of the process created by the last command.
This works in the terminal, it prints the correct PID. But when it's executed in PHP, the actual screen PID value is 4 more than the one returned by the echo.
It's like the echo is being executed before the & operator. Is there any way to make it wait?
SOLUTION:
I was wrapping the command in sudo, and the background operator was acting on the sudo command instead of the screen command. Make sure you escape your command arguments! :)
– Bradley Odell
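Putting the pieces together, a hedged sketch of the final approach from PHP (not verbatim from the original post): escapeshellarg() guards the session name as the solution advises, and because -D -m keeps screen from forking, the shell's $! is the PID of the session itself:
<?php
// Sketch: start a detached screen session and capture its PID via $!.
$name = escapeshellarg('screen_name');
$pid  = (int) shell_exec("screen -D -m -S $name > /dev/null 2>&1 & echo \$!");
echo "screen PID: $pid\n";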

Running a PHP exec command in the background

I am attempting to launch sar and have it run forever via a PHP script, but for whatever reason it never actually launches. I have tried the following:
exec('sar -u 1 > /home/foo/foo.txt &');
exec('sar -o /home/foo/foo -u 1 > /dev/null 2>&1 &');
However, it never launches sar. If I just use:
exec('sar -u 1');
it works, but it hangs the PHP script. My understanding is that if a program is started with the exec function, then in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream.
I will assume you're running this on a *nix platform. To get PHP to run something in the background and not wait for the process to finish, I would recommend two things: first, use nohup, and also redirect the output of the command to /dev/null (the trash).
Example:
<?php
exec('nohup sar -u 1 > /dev/null 2>/dev/null &');
nohup makes the command immune to the "hang up" signal (which would otherwise kill the process) when the terminal running the command closes.
> /dev/null 2>/dev/null & redirects the "normal" and "error" outputs to the black hole /dev/null. This means PHP does not have to wait for the output of the command being called.
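If you still want sar's data, as the question does, the same pattern can append the output to a file instead of discarding it (the file path here is the one from the question's first attempt):
<?php
// Keep sar's output by appending it to a file instead of /dev/null.
exec('nohup sar -u 1 >> /home/foo/foo.txt 2>&1 &');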
On another note, if you are using PHP just to call a shell command, you may want to consider other options, like Ubuntu's Upstart with no PHP component (if you are using Ubuntu, that is).

Unix Cron Not Executing

I'm executing a shell script as a cron job every minute like so:
* * * * * /bin/bash /var/www/html/stream.sh
The script contains the following code:
#!/bin/sh
if ps -ef | grep -v grep | grep get_tweets.php ; then
    exit 0
else
    nohup php /var/www/html/streaming/db/get_tweets.php > /dev/null &
    exit 0
fi
I'm also running another shell script as a cron job; the only difference between the two is that "get_tweets" is replaced with "parse_tweets_keyword", and the cron job is set up like so:
* * * * * /bin/bash /var/www/html/process-check.sh
The problem is that, whilst the latter cron job works perfectly fine, the first doesn't seem to run the script successfully. However, when I run the command:
nohup php /var/www/html/streaming/db/get_tweets.php > /dev/null &
The script runs perfectly, so I'm not entirely sure what the problem is. The permissions for all files are correct, the files are executable, I'm running the cron jobs as root under crontab, and both of the PHP scripts I'm attempting to execute run as background processes.
If anyone can help or knows the issue, it would be greatly appreciated. I'm also open to better ways of running the PHP script, perhaps as a single line in crontab rather than using a shell script to run the PHP script via cron.
So it seems this was the answer to my question:
#!/bin/sh
cd /var/www/html/streaming/db
if ps ax | grep -v grep | grep get_tweets.php ;
then
    exit 0
else
    nohup php ./get_tweets.php > /dev/null &
    exit 0
fi
It would appear that, even running as root, get_tweets.php would not execute without first changing into its directory before running the command, as per the second line:
cd /var/www/html/streaming/db
Although I'm not entirely sure why it wouldn't run, since parse_tweets_keyword.php ran perfectly using the first bash script posted in my question.
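As for the single-line crontab alternative the question asked about, one common approach (not from the original answer) is to let flock handle the "already running" check instead of grepping ps; the cd accounts for the working-directory issue found above, and the lock file path is hypothetical:
* * * * * cd /var/www/html/streaming/db && flock -n /tmp/get_tweets.lock php get_tweets.php > /dev/null 2>&1
flock -n exits immediately if the lock is already held, so at most one copy of the script runs at a time.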

Run multiple php scripts in sequence from a single php script or batch file in windows

I have 5 PHP scripts that need to be executed one after the other. How can I create a single PHP/batch script which executes all the PHP scripts in sequence?
I want to schedule a cron job which runs that PHP file.
#!/path/to/your/php
<?php
include("script1.php");
include("script2.php");
include("script3.php");
//...
?>
Alternative
#!/bin/bash
/path/to/your/php /path/to/your/script1.php
/path/to/your/php /path/to/your/script2.php
/path/to/your/php /path/to/your/script3.php
# ...
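If the sequence should stop as soon as one script fails, a small variant of the same script (same assumed paths) can rely on the shell's exit-status handling:
#!/bin/bash
set -e  # abort the whole sequence if any script exits non-zero
/path/to/your/php /path/to/your/script1.php
/path/to/your/php /path/to/your/script2.php
/path/to/your/php /path/to/your/script3.php
# ...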
If your scripts need to be accessed via http:
#!/bin/bash
# -O /dev/null discards the fetched pages instead of saving script1.php etc. to disk
wget --quiet -O /dev/null http://path/to/your/script1.php
wget --quiet -O /dev/null http://path/to/your/script2.php
wget --quiet -O /dev/null http://path/to/your/script3.php
# ...
I did some testing using the "wait" command; it seems I can just put wait between the calls to each script. As a little test, I created databases in PHP scripts, returned some records in a bash script, updated them with another PHP script, then used another bash script to return the updated results, and it seemed to work fine...
From what I have read, as long as a subscript doesn't call another subscript, the master script will wait if the "wait" command is used between script calls.
Code is as below:
#!/bin/sh
/usr/bin/php test.php
wait
/bin/bash test.sh
wait
/usr/bin/php test2.php
wait
/bin/bash test2.sh
wait
echo all done
Hopefully this executes all the PHP scripts in sequence.
