I'm trying to create a PHP wrapper class around the Linux Screen utility.
I need to be able to start a screen session by name and get back the PID of the screen that was created.
I cannot assume the session name is unique, so I can't parse the screen -ls output to find the PID.
Based on examples and suggestions around the internet, I have tried the following approach using PHP's exec() function:
screen -D -m -S 'screen_name' 2>&1 &
The -D parameter tells screen not to fork, so the screen session has the same PID as the background process.
Then I can parse the PID from the job-control line the shell prints for the & background operator, which has this format:
[<number>] <pid>
This command works from the Linux shell (terminates immediately), but when I run it from PHP it freezes and the browser never loads the page.
I have tried executing it with both PHP's exec() and shell_exec() and it's the same result.
I have also tried lots of different combinations with the last part of the command such as the ones described HERE.
Edit:
If I use this command:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & 2>&1
Then the PHP exec() works. It starts the screen session, but then I don't get any output from the & background operator. I assume that's because I'm redirecting stdout and stderr to /dev/null, but as far as I know that should only apply to the screen command, not to the & operator.
ALMOST SOLUTION:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & echo $!
I realized that the text showing in the terminal from the & background operator wasn't actually coming from the command's stdout or stderr; the shell itself prints it. So I added echo $! after the & operator, which prints the PID of the most recently backgrounded process.
This works in the terminal and prints the correct PID. But when it's executed in PHP, the actual screen PID is 4 more than the one returned by the echo.
It's as if the echo is being executed before the backgrounded command starts. Is there any way to make it wait?
SOLUTION:
I was wrapping the command in sudo, and the background operator was acting on the sudo command instead of the screen command. Make sure you escape your command arguments! :)
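For reference, here is a minimal PHP sketch of the final approach. The function name and the integer cast are mine; the shell command is the one from the solution above.

<?php
// Start a named screen session and return its PID.
function startScreen($name)
{
    // Escape the session name so the shell sees a single argument and
    // the & operator applies to screen itself (see the sudo note above).
    $arg = escapeshellarg($name);
    // -D -m: screen does not fork, so $! is the PID of the screen session.
    $out = shell_exec('screen -D -m -S ' . $arg . ' 1> /dev/null 2> /dev/null & echo $!');
    return (int) trim($out);
}

$pid = startScreen('screen_name');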
Related
I'm running a PHP socket program. I start it through nohup, and it runs properly when started as root over SSH. My problem is running the program via the exec() function in PHP: the program runs correctly, but its output is not printed to nohup.out.
my command over ssh:
nohup php my_path/example.php & # is working
my command from PHP:
exec('nohup php my_path/example.php >/dev/null 2>&1 &', $output); # does not update nohup.out
please guide me...
From PHP docs on exec:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
From man nohup:
If standard input is a terminal, redirect it from /dev/null. If standard output is a terminal, append output to 'nohup.out' if possible, '$HOME/nohup.out' otherwise. If standard error is a terminal, redirect it to standard output. To save output to FILE, use 'nohup COMMAND > FILE'.
To satisfy both, redirect the output manually to nohup.out:
exec('nohup php my_path/example.php >>nohup.out 2>&1 &', $output);
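And if, as in the first question on this page, you also need the PID of the backgrounded process, the echo $! trick can be combined with the same redirection. A sketch, not part of the original answer:

<?php
// Run in the background, keep appending to nohup.out,
// and capture the PID of the backgrounded php process.
exec('nohup php my_path/example.php >>nohup.out 2>&1 & echo $!', $output);
$pid = (int) $output[0];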
My issue seems to have been asked before, but hold on, this one is a bit different.
I have 2 php files, I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 php processes.
My problem is that sometimes one of the processes, or even both of them, is killed by the server for an unknown reason and I have to re-enter the commands all over again. I tried 'forever' but it doesn't help.
If the server is rebooted I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure whether it would launch the scripts twice, which would create more confusion.
My question is: how do I automatically restart the scripts if one or both of them get killed? And what is the best way to check that file_1.php and file_2.php are actually running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks whether your script is running and, if not, starts it. Here's one I use.
#!/bin/bash
FIND_PROC=`ps -ef | grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}'`
# if FIND_PROC is empty, the process has died; restart it
if [ -z "${FIND_PROC}" ]; then
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    echo queue_agent.php failed at `date`
    cd ${DIR}
    nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
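To have cron run the watchdog at regular intervals, a crontab entry along these lines works (every minute in this sketch; the script path is hypothetical):

* * * * * /path/to/watchdog.sh >/dev/null 2>&1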
I think you can try to figure out why these two php scripts shut down as the first step. To solve this problem, you can use this php function:
void register_shutdown_function(callable $callback [, mixed $parameter [, mixed $... ]]);
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the php files get shut down, like this:
function myLogFunction() {
    // log some info here, e.g. the last error before shutdown
    file_put_contents('shutdown.log', date('c') . ' ' . json_encode(error_get_last()) . PHP_EOL, FILE_APPEND);
}
register_shutdown_function('myLogFunction');
Instead of sending the standard output and error output to /dev/null, you can put them into a log file (since maybe we can get some helpful info from the output). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try:
nohup php file_1.php >file_1.log 2>&1 &
nohup php file_2.php >file_2.log 2>&1 &
(Separate log files, so the two scripts don't write over each other.)
If you want to run these two php files automatically when the server boots, try editing /etc/rc.local (which runs automatically when the OS starts up). Add your php CLI command lines to this file.
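For example, /etc/rc.local could end up looking something like this (the paths and log names are placeholders):

#!/bin/sh -e
# Commands placed before the final exit 0 run once at boot.
nohup php /path/to/file_1.php >/path/to/file_1.log 2>&1 &
nohup php /path/to/file_2.php >/path/to/file_2.log 2>&1 &
exit 0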
If you can't figure out why the php processes get shut down, try supervisord as @Chris Haas mentioned.
I need to make a background script that is spawned by a PHP command-line script and echoes to the SSH session. Essentially, I need to do this linux command:
script/path 2>&1 &
If I just run this command in linux, it works great. Output is still displayed to the screen, but I can still use the same session for other commands. However, when I do this in PHP, it doesn't work the same way.
I've tried:
`script/path 2>&1 &`;
exec("script/path 2>&1 &");
system("script/path 2>&1 &");
...And none of these work. I need it to spawn the process, and then kill itself so that I can free up the session, but I still want the output from the child process to print to the screen.
(please comment if this is unclear... I had a hard time putting this into words :P)
I came up with a solution that works in this case.
I created a wrapper bash script that starts up the PHP script, which in turn spawns a child script that has its output redirected to a file, which the bash script wrapper tails.
Here is the bash script I came up with:
php php_script.php "$@"
ps -ef | grep php_script.log | grep -v grep | awk '{print $2}' | xargs kill > /dev/null 2>&1
if [ "$1" != "-stop" ]
then
tail -f php_script.log -n 0 &
fi
(it also cleans up "tail" processes that are still running so that you don't get a gazillion processes when you run this bash script multiple times)
And then in your PHP script, you spawn the external child script like this:
exec("php php_script.php >> php_script.log &");
This way the parent PHP script exits without killing the child script, you still get the output from the child script, and your command prompt is still available for other commands.
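Assuming the wrapper is saved as run_php_script.sh (a name I'm making up), usage looks like this:

./run_php_script.sh          # start the child and follow php_script.log
./run_php_script.sh -stop    # pass -stop through and skip the tail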
I have some PHP files. Each of them starts a socket listener or runs an infinite loop. The scripts block when executed via the php command:
php sock_listener.php ...blocks there
php listener2.php ...blocks there
...
Currently I use screen command to start all the listener PHP files every time the machine is rebooted. Is there a way I can start all the listener PHP files in single shell line so that I can write a shell script to make it easier to use?
Using screen
Create a detached screen session for the first script:
session='php-test'
screen -S "$session" -d -m -t A php a.php
where -d -m combination causes screen to create a detached session.
Run the rest of the scripts in the same session in separate windows:
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php
where
-X sends the built-in screen command to the running session;
-t sets the window title.
The session will be available in the output of the screen -ls command:
There is a screen on:
8951.php-test (Detached)
Connect to the session using -r option, e.g.:
screen -r 8951.php-test
List the windows within the screen session with Ctrl-a " shortcut, or windowlist -b command.
Forking Processes to Background
A less convenient way is to send the commands to background by appending an ampersand at the end of each command:
nohup php a.php 2>a.php.err >a.php.out &
nohup php b.php 2>b.php.err >b.php.out &
nohup php c.php 2>c.php.err >c.php.out &
where
nohup prevents termination of the commands if the user logs out of the shell. Read this tutorial for more information;
2>a.php.err redirects the standard error to a.php.err file;
>a.php.out redirects the standard output to a.php.out file.
Is there a way I can start all the listener PHP files in single shell line so that I can write a shell script to make it easier to use?
You can put the above-mentioned commands into a shell script file, e.g.:
#!/bin/bash -
# Put the commands here
make it executable:
chmod +x /path/to/script
and call it when you need it:
/path/to/script
Modify the shebang as appropriate.
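Putting the screen commands from above together, the whole script would look like this:

#!/bin/bash -
session='php-test'
screen -S "$session" -d -m -t A php a.php
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php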
Just run them under circus. Circus will let you define a number of processes and how many instances you want to run, and just keep them running.
https://circus.readthedocs.io/en/latest/
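A minimal circus configuration for the two listeners from the question might look like this (the watcher names are illustrative; see the docs above for the full set of options):

[watcher:sock_listener]
cmd = php sock_listener.php
numprocesses = 1

[watcher:listener2]
cmd = php listener2.php
numprocesses = 1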
I am attempting to launch sar and have it run forever via a PHP script, but for whatever reason it never actually launches. I have tried the following:
exec('sar -u 1 > /home/foo/foo.txt &');
exec('sar -o /home/foo/foo -u 1 > /dev/null 2>&1 &');
However it never launches sar. If I just use:
exec('sar -u 1');
It works, but it just hangs the PHP script. My understanding is that if a program is started with the exec function, then in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream.
I will assume you're running this on a *nix platform. To get PHP to run something in the background and not wait for the process to finish, I would recommend two things: use nohup, and also redirect the output of the command to /dev/null (the trash).
Example:
<?php
exec('nohup sar -u 1 > /dev/null 2>/dev/null &');
nohup makes the command immune to the "hang up" signal (SIGHUP), which would otherwise kill the process when the terminal that started it closes.
> /dev/null 2>/dev/null & redirects the "normal" (stdout) and "error" (stderr) outputs to the black hole /dev/null and backgrounds the command. This means PHP does not have to wait for the output of the command being called.
On another note, if you are using PHP just to call a shell command, you may want to consider other options like Ubuntu's Upstart, with no PHP component at all (if you are using Ubuntu, that is).
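For instance, a minimal Upstart job for the sar command from this question, saved as something like /etc/init/sar-logger.conf (the file name and job are a sketch, not tested), could be:

description "continuous sar sampling"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec sar -o /home/foo/foo -u 1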