Ending a shell_exec process in PHP

I've been trying to figure this one out for a couple of hours now, with no luck.
I am trying to build a system that can run reports (running scripts in the background) using shell_exec.
The following code starts the script that runs the report:
shell_exec("php /var/www/html/lab-40/test/invoice_reminder.php");
Now how would I go about ending that script execution using PHP?
I've tried things like PIDs, but I have no clue how to go about this. Thanks for the help in advance!
EDIT: I am not trying to end the process when, e.g., the browser tab is closed.

Based on this comment on the shell_exec manual page (& sends the process to the background, and echo $! prints the process PID):
<?php

function shell_exec_background(string $command): int
{
    // nohup detaches the command from the terminal, & backgrounds it,
    // and echo $! prints the PID of the backgrounded process.
    return (int) shell_exec(
        sprintf(
            'nohup %s 1> /dev/null 2> /dev/null & echo $!',
            $command
        )
    );
}

function is_pid_running(int $pid): bool
{
    // kill -0 sends no signal; it only checks that the process exists.
    exec(
        sprintf(
            'kill -0 %d 1> /dev/null 2> /dev/null',
            $pid
        ),
        $output,
        $exit_code
    );

    return $exit_code === 0;
}

function kill_pid(int $pid): bool
{
    exec(
        sprintf(
            'kill -KILL %d 1> /dev/null 2> /dev/null',
            $pid
        ),
        $output,
        $exit_code
    );

    return $exit_code === 0;
}

$pid = shell_exec_background('php /var/www/html/lab-40/test/invoice_reminder.php');
var_dump($pid);
var_dump(is_pid_running($pid));
var_dump(kill_pid($pid));
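
A caveat worth adding to this approach (my note, not part of the original answer): SIGKILL cannot be caught, so kill -KILL gives the report script no chance to clean up temp files or flush output. A gentler variant, following the same pattern as kill_pid() above, sends SIGTERM first:

function terminate_pid(int $pid): bool
{
    // SIGTERM can be trapped, so the script may shut down gracefully;
    // fall back to kill_pid() only if the process ignores it.
    exec(
        sprintf('kill -TERM %d 1> /dev/null 2> /dev/null', $pid),
        $output,
        $exit_code
    );

    return $exit_code === 0;
}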

Related

PHP Fibers & Node via PHP Exec

I'm testing out PHP Fibers and I'm having a problem; it's probably something silly :)
I have a fiber running a loop, and inside that loop I call PHP's exec() with sudo node ...; that works nicely, but the commands run one by one. Can I have the loop start each node script and have PHP finish once all of them are running?
self::$fiber = new Fiber(function ($pages, $servers): void {
    foreach ($pages as $pageId => $pageUrl) {
        $siteUrl = parse_url($pageUrl);
        exec('sudo node /var/www/puppeteer.js --url="'.$pageUrl.'" --proxy="'.$proxyIP.'" > /dev/null 2>&1');
    }
});
So the code works and runs, but instead of running the five node exec calls at once, it runs them one by one...
Thanks in advance!
I found the problem: I didn't have an & at the end of the command.
self::$fiber = new Fiber(function ($pages, $servers): void {
    foreach ($pages as $pageId => $pageUrl) {
        $siteUrl = parse_url($pageUrl);
        // The trailing & backgrounds each node process so exec() returns immediately.
        exec('sudo node /var/www/puppeteer.js --url="'.$pageUrl.'" --proxy="'.$proxyIP.'" > /dev/null 2>&1 &');
    }
});
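
A side note (my addition, not part of the original answer): interpolating $pageUrl and $proxyIP straight into the shell string breaks on URLs containing shell metacharacters and is a command-injection risk. escapeshellarg() makes the call safe; a minimal sketch:

// Quote each dynamic value before it reaches the shell.
exec(sprintf(
    'sudo node /var/www/puppeteer.js --url=%s --proxy=%s > /dev/null 2>&1 &',
    escapeshellarg($pageUrl),
    escapeshellarg($proxyIP)
));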

exec() not returning process ID

I'm using the PHP exec() function to execute the Canu assembler programs, and I want to get its process ID within the same script.
The problem is that exec() does not return any PID, even though the process runs successfully.
The processes are started like this:
$gnuplot_path = '/usr/bin/gnuplot';
$command = 'nohup canu -d . -p E.coli gnuplot='.$gnuplot_path.' genomeSize=4.8m useGrid=false maxThreads=30 -pacbio-raw /path/to/p6.25x.fastq > /path/to/process.err 2>&1 &';
Currently, I try to determine if the process is still running by:
$pid = exec($command, $output);
var_dump($pid);
and also this:
exec($command, $pid, $return_var);
print_r($pid);
echo "$return_var\n";
However, I got string(0) "" and Array ( ) 0, respectively.
Please let me know how to solve this. Thanks much.
This one is tricky. What I would do:
$gnuplot_path = '/usr/bin/gnuplot';
$command = 'nohup canu -d . -p E.coli gnuplot='.$gnuplot_path.' genomeSize=4.8m useGrid=false maxThreads=30 -pacbio-raw /path/to/p6.25x.fastq > /path/to/process.err 2>&1';
$command .= ' & echo $!'; // background the job and print its PID
$pid = exec($command, $output, $exit_code);
var_dump($output[0]); // the PID printed by echo $!
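
Worth noting (my addition): exec() returns the last line of output, and since echo $! prints last, the $pid variable above already holds the PID as a string; $output[0] is the same value. Assuming the posix extension is loaded, you can verify the process right away:

$pid = (int) exec($command, $output, $exit_code);
// Signal 0 sends nothing; it only checks that the process exists.
var_dump($pid > 0 && posix_kill($pid, 0));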

Can I call a rake task from PHP?

Can I run a rake task from PHP?
I tried shell_exec but nothing happens.
exec('/bin/bash -l -c \'cd /Users/username/www/rails_app && [[ -s "/Users/username/.rvm/scripts/rvm" ]] && source "/Users/username/.rvm/scripts/rvm/.rvm/scripts/rvm" && rvm use 2.1.2 && RAILS_ENV=development /usr/bin/rake "ko:complete_order[2]" --silent\'', $out, $err);
The $out is an empty array.
The $err is 1.
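
One thing stands out in that command (an observation, not a confirmed fix): the path handed to source is duplicated ("/Users/username/.rvm/scripts/rvm/.rvm/scripts/rvm"), so the [[ -s ... ]] guard passes but the source call fails, which by itself would make bash exit with status 1. A sketch with the path corrected and stderr merged into stdout so the real error message lands in $out:

exec('/bin/bash -l -c \'cd /Users/username/www/rails_app && source "/Users/username/.rvm/scripts/rvm" && rvm use 2.1.2 && RAILS_ENV=development /usr/bin/rake "ko:complete_order[2]" --silent\' 2>&1', $out, $err);
print_r($out); // any rvm/rake error is now visible here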

PHP script to execute a bash script

I have 3 scripts (I removed the help_page function from networkstats.sh when pasting it here, to save some space):
api3.php
<?php
$output = shell_exec('/bin/bash /usr/share/nginx/status/getnetworkstatsin.sh');
echo $output;
?>
getnetworkstatsin.sh
#!/bin/bash
ssh -i /tmp/id_rsa1 root@centos7clone bash -s -- -I < ./networkstats.sh
networkstats.sh
#!/bin/bash
interface=enp0s3
read -r inbytesold outbytesold < <(awk -v dev="^$interface:" '$1 ~ dev {
    sub(/[^:]*:/,""); print $1, $9; exit }' /proc/net/dev)
sleep 1
read -r inbytesnew outbytesnew < <(awk -v dev="^$interface:" '$1 ~ dev {
    sub(/[^:]*:/,""); print $1, $9; exit }' /proc/net/dev)
kilobitsin=$(( ( ( inbytesnew - inbytesold ) * 8 ) / 1024 ))
kilobitsout=$(( ( ( outbytesnew - outbytesold ) * 8 ) / 1024 ))

# show_incoming was missing from the paste; restored here to mirror show_outgoing.
show_incoming() {
    echo $kilobitsin
}

show_outgoing() {
    echo $kilobitsout
}

show_all() {
    echo "kilobits in: $kilobitsin"
    echo "kilobits out: $kilobitsout"
}

if [[ $# -eq 0 ]]; then
    help_page
    exit 1
fi

for arg in "$@"    # "$@" iterates the arguments; "$#" would expand to their count
do
    case $arg in
        -h|--help)
            help_page
            ;;
        -I)
            show_incoming
            ;;
        -O)
            show_outgoing
            ;;
        -A|--all)
            show_all
            ;;
    esac
done
The problem I have is that when I execute api3.php from the console, it runs and returns a value.
However, when I load it via the webpage by navigating to localhost/api3.php, it fails to return anything; I believe it is not even executing. Can someone help me understand the reason behind this? I have added
nginx ALL=NOPASSWD: /usr/share/nginx/status/getnetworkstatsin.sh
to my sudoers file via visudo, and I have tried changing the permissions of all files involved to 777 (temporarily) without success.
EDIT: I should also mention that all these scripts are located inside /usr/share/nginx/status, which nginx has access to.
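
A debugging sketch (my addition, not from the original question): merging stderr into stdout makes whatever ssh prints visible in the browser when the script runs under the nginx worker user. Common culprits are the worker being unable to read /tmp/id_rsa1, or having no known_hosts entry for centos7clone, though neither is confirmed here:

<?php
// Capture the exit code and any error output from the wrapper script.
exec('/bin/bash /usr/share/nginx/status/getnetworkstatsin.sh 2>&1', $output, $exit_code);
echo "exit code: $exit_code\n";
echo implode("\n", $output), "\n";
?>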

mysqldump via PHP

I have a PHP script that gets passed the MySQL connection details of a remote server, and I want it to execute a mysqldump command. To do this I'm using the PHP exec() function:
<?php
exec("/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name > /path-to-export/file.sql", $output);
?>
When the right login details are passed to it, it'll work absolutely fine.
However, I'm having trouble checking whether it executes as expected and, if it doesn't, finding out why not.
The $output array comes back empty, whereas if I run the command directly on the command line, a message is printed telling me the login failed. I want to capture such error messages and display them. Any ideas on how to do that?
You should check the third parameter of the exec() function: &$return_var.
$return_var = NULL;
$output = NULL;
$command = "/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name > /path-to-export/file.sql";
exec($command, $output, $return_var);
By Unix convention, a process returns a non-zero exit status when something goes wrong, so you can do:
if($return_var) { /* there was an error code: $return_var, see the $output */ }
The solution I found is to run the command in a subshell and redirect its stderr to stdout (2>&1). This way, the $output variable is populated with the error message (if any).
For example:
exec("(mysqldump -uroot -p123456 my_database table_name > /path/to/dump.sql) 2>&1", $output, $exit_status);
var_dump($exit_status); // (int) The exit status of the command (0 for success, > 0 for errors)
echo "<br />";
var_dump($output); // (array) If the exit status != 0, this will contain the error message.
Results :
int(6)
array(1) { [0]=> string(46) "mysqldump: Couldn't find table: "table_name"" }
Hope it helps!
That happens because > /path-to-export/file.sql redirects stdout to the file, so exec() captures nothing.
Try this instead:
<?php
exec("/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name", $output);
/* $output holds the SQL backup as an array of lines; write it to a file */
$h = fopen("/path-to-export/file.sql", "w+");
fwrite($h, implode(PHP_EOL, $output));
fclose($h);
?>
I was looking for the exact same solution, and I remembered I'd already solved this a couple of years ago but had forgotten about it.
As this page is high in Google for the question, here's how I did it:
<?php
define("BACKUP_PATH", "/full/path/to/backup/folder/with/trailing/slash/");
$server_name = "your.server.here";
$username = "your_username";
$password = "your_password";
$database_name = "your_database_name";
$date_string = date("Ymd");
$cmd = "mysqldump --hex-blob --routines --skip-lock-tables --log-error=mysqldump_error.log -h {$server_name} -u {$username} -p{$password} {$database_name} > " . BACKUP_PATH . "{$date_string}_{$database_name}.sql";
$arr_out = array();
unset($return);
exec($cmd, $arr_out, $return);
if($return !== 0) {
echo "mysqldump for {$server_name} : {$database_name} failed with a return code of {$return}\n\n";
echo "Error message was:\n";
$file = escapeshellarg("mysqldump_error.log");
$message = `tail -n 1 $file`;
echo "- $message\n\n";
}
?>
It's the --log-error=[/path/to/error/log/file] part of mysqldump that I always forget about!
Since exec() fetches only stdout, which here is redirected to the file, we get a partial or missing result in the file and no idea why. We have to get the message from stderr, and exec() alone can't do that. There are several solutions, all already given above, so this is just a summary.
Solution from Jon: have mysqldump log its errors (--log-error) and handle them separately (this can't be applied to every command).
Redirect the outputs to separate files, i.e. mysqldump ... 2> error.log 1> dump.sql, and read the error log separately, as in the previous solution.
Solution from JazZ: run the dump in a subshell and redirect the subshell's stderr to stdout, which PHP's exec() can then put in the $output variable.
Solution from Pascal: better to use proc_open() instead of exec(), because we can get stdout and stderr separately (directly from pipes); see the sketch below.
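
A minimal proc_open() sketch of Pascal's suggestion (my reconstruction; the credentials, database name, and output path are placeholders from the question): stdout and stderr arrive through separate pipes, so the dump and any error message never mix. For very large dumps you would stream stdout to the file instead of buffering it in memory, but the shape is the same:

<?php
$descriptors = [
    1 => ['pipe', 'w'], // stdout: the SQL dump itself
    2 => ['pipe', 'w'], // stderr: mysqldump's error messages
];
$process = proc_open('mysqldump -u mysql-user -pmysql-pass database_name', $descriptors, $pipes);
if (is_resource($process)) {
    $dump  = stream_get_contents($pipes[1]);
    $error = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exit_code = proc_close($process);
    if ($exit_code !== 0) {
        echo "mysqldump failed ($exit_code): $error";
    } else {
        file_put_contents('/path-to-export/file.sql', $dump);
    }
}
?>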
Write the code below to export the database to a .sql file:
<?php exec('mysqldump --user=name_user --password=password_enter --host=localhost database_name > filenameofsql.sql'); ?>
