New to Kubernetes and PHP, so I'm having some issues. Any and all help is greatly appreciated!
<?php
$postgres = 'kubectl get pods -n migrationnamespace | grep postgres | cut -d " " -f1 2>&1';
$postgres_pod = shell_exec($postgres);
echo $postgres_pod;
$list2 = 'kubectl exec -it -n migrationnamespace ' . $postgres_pod . ' -- psql -U postgres -c \'SELECT * FROM mywhales\'; 2>&1';
echo "<pre>";
echo shell_exec($list2);
echo "<pre>";
?>
Running this prints the pod name but then errors out:
postgres-7957478b7d-tmw6m
error: you must specify at least one command for the container
sh: line 1: --: command not found
When I swap ' . $postgres_pod . ' for the literal postgres-7957478b7d-tmw6m, as below, it executes fully:
$list2 = 'kubectl exec -it -n migrationnamespace postgres-7957478b7d-tmw6m -- psql -U postgres -c \'SELECT * FROM mywhales\';';
postgres-7957478b7d-tmw6m
whale
---------
16:117
......
561:539
(17 rows)
Thanks - Mike
Command output often carries extra whitespace before or after the string, especially return characters, which don't show up when you echo the result. Here shell_exec() returns the pod name with a trailing newline, so everything after it in your kubectl command lands on a second line; that is why the shell reports -- as a command not found.
Using trim($postgres_pod) will ensure the whitespace is removed.
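A minimal sketch of the fix, reusing the commands from the question with the pod name trimmed before it is interpolated:
<?php
$postgres = 'kubectl get pods -n migrationnamespace | grep postgres | cut -d " " -f1 2>&1';
// trim() strips the trailing newline that shell_exec() leaves on the result
$postgres_pod = trim(shell_exec($postgres));
$list2 = 'kubectl exec -it -n migrationnamespace ' . $postgres_pod . ' -- psql -U postgres -c \'SELECT * FROM mywhales\' 2>&1';
echo "<pre>";
echo shell_exec($list2);
echo "</pre>";
?>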
I'm using the PHP exec() function to run the Canu assembler, and I want to get its process ID within the same script.
The problem is that exec() doesn't return any PID, even though the process is running successfully.
The processes are started like this:
$gnuplot_path = '/usr/bin/gnuplot';
$command = 'nohup canu -d . -p E.coli gnuplot='.$gnuplot_path.' genomeSize=4.8m useGrid=false maxThreads=30 -pacbio-raw /path/to/p6.25x.fastq > /path/to/process.err 2>&1 &';
Currently, I try to get the PID (so I can check whether the process is still running) like this:
$pid = exec($command, $output);
var_dump($pid);
and also this:
exec($command, $pid, $return_var);
print_r($pid);
echo "$return_var\n";
However, I get string(0) "" and Array ( ) 0, respectively.
Please let me know how to solve this. Thanks much.
This one is tricky. exec() returns the last line of the command's output, but a backgrounded nohup command whose output is redirected prints nothing, so there is nothing to return. What I would do: append echo $!, which expands to the PID of the most recently backgrounded job:
$gnuplot_path = '/usr/bin/gnuplot';
$command = 'nohup canu -d . -p E.coli gnuplot='.$gnuplot_path.' genomeSize=4.8m useGrid=false maxThreads=30 -pacbio-raw /path/to/p6.25x.fastq > /path/to/process.err 2>&1';
$command .= ' & echo $!';
$pid = exec($command, $output, $a);
var_dump($output[0]);
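Because echo $! runs right after canu is backgrounded, its PID is the last (and only) line of output, so both $pid and $output[0] hold it. From there you can check whether canu is still alive. A minimal sketch, assuming a Linux host (the /proc lookup is Linux-specific):
$pid = (int) $output[0];
// On Linux, /proc/<pid> exists for every running process
if (file_exists("/proc/$pid")) {
    echo "canu (PID $pid) is still running\n";
} else {
    echo "canu (PID $pid) has finished\n";
}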
I have an HTML form that asks the user for a domain name. It is posted to the PHP 7.0 page below, which runs a shell script, pipes the output through aha to produce an HTML page, and then displays that page.
The problem I have is how can I prevent users from injecting commands like:
domain.com | rm * -rf
I thought this could be done using safe_mode and restricting the directory from which commands can be run, but it seems this feature is now deprecated.
$domain_arg = escapeshellarg( $_POST['domain'] );
$today = date("Y-m-d-H:i:s");
$cmd = "/home/ubuntu/dtest/dtest.sh $domain_arg | aha -b -t 'Domain test of $domain_arg' > /var/www/website/results/$domain_arg.$today.html";
$output = shell_exec($cmd);
header("Location: http://the.web.com/results/$result.$today.html");
You do use escapeshellarg() on the argument before running the command; the result is a single-quoted string such as 'domain.com | rm * -rf'. Your problem is that you're not using it as a whole shell argument, but as a portion of one:
aha -b -t 'Domain test of $domain_arg'
This will absolutely cause the problems you're describing. Try something like this instead:
<?php
$domain_arg = escapeshellarg($_POST["domain"]);
$log_msg = escapeshellarg("Domain test of $_POST[domain]");
$today = date("Y-m-d-H:i:s");
$log_file = preg_replace(
    "/[^\w:\\/.-]/i",
    "_",
    "/var/www/website/results/$_POST[domain].$today.html"
);
$cmd = "/home/ubuntu/dtest/dtest.sh $domain_arg | aha -b -t $log_msg > $log_file";
$output = shell_exec($cmd);
$result = rawurlencode($output); // just guessing that's where it comes from
$today = rawurlencode($today);
header("Location: http://the.web.com/results/$result.$today.html");
Note the difference between the two functions: escapeshellcmd() backslash-escapes the metacharacters in an entire command string, while escapeshellarg() quotes a single argument. For user-supplied values passed as arguments, as here, escapeshellarg() is the right tool; escapeshellcmd() still lets an attacker smuggle in extra arguments through spaces.
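A quick illustration of the difference (output in the comments is approximate; exact escaping varies by platform):
<?php
$input = 'domain.com | rm * -rf';
echo escapeshellarg($input); // 'domain.com | rm * -rf'   (one safely quoted argument)
echo escapeshellcmd($input); // domain.com \| rm \* -rf   (metacharacters escaped, spaces kept)
?>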
I have 3 scripts (I removed the help_page function from networkstats.sh when pasting it here, to save some space):
api3.php
<?php
$output = shell_exec('/bin/bash /usr/share/nginx/status/getnetworkstatsin.sh');
echo $output;
?>
getnetworkstatsin.sh
#!/bin/bash
ssh -i /tmp/id_rsa1 root@centos7clone bash -s -- -I < ./networkstats.sh
networkstats.sh
#!/bin/bash
interface=enp0s3
read -r inbytesold outbytesold < <(awk -v dev="^$interface:" '$1 ~ dev {
sub(/[^:]*:/,""); print $1, $9; exit }' /proc/net/dev)
sleep 1
read -r inbytesnew outbytesnew < <(awk -v dev="^$interface:" '$1 ~ dev {
sub(/[^:]*:/,""); print $1, $9; exit }' /proc/net/dev)
kilobitsin=$(( ( ( inbytesnew - inbytesold ) * 8 ) / 1024 ))
kilobitsout=$(( ( ( outbytesnew - outbytesold ) * 8 ) / 1024 ))
show_incoming() {
echo $kilobitsin
}
show_outgoing() {
echo $kilobitsout
}
show_all() {
echo "kilobits in: $kilobitsin"
echo "kilobits out: $kilobitsout"
}
if [[ $# -eq 0 ]];
then
help_page
exit 1
fi
for arg in "$@"
do
case $arg in
-h|--help)
help_page
;;
-I)
show_incoming
;;
-O)
show_outgoing
;;
-A|--all)
show_all
;;
esac
done
The problem I have is that when I execute the api3.php script from the console, it runs and returns a value.
However, when I try to execute it from a webpage, it fails to return anything.
I believe it is not even executing when I load it via the webpage by navigating to localhost/api3.php. Can someone help me understand the reason behind this? I have added
nginx ALL=NOPASSWD: /usr/share/nginx/status/getnetworkstatsin.sh
To my visudo section, and I have tried changing the permissions of all the files involved to 777 (temporarily), without success.
EDIT: I should also mention that all these scripts are located inside /usr/share/nginx/status which nginx has access to.
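A debugging sketch, not a fix: fold stderr into the captured output so the page shows why the script fails under nginx (common culprits to verify, stated here as assumptions: ssh key permissions, or no HOME/known_hosts for the nginx user):
<?php
// 2>&1 sends stderr to stdout so shell_exec() captures error messages too
$output = shell_exec('/bin/bash /usr/share/nginx/status/getnetworkstatsin.sh 2>&1');
echo "<pre>" . htmlspecialchars((string) $output) . "</pre>";
?>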
I have a PHP script that gets passed the MySQL connection details of a remote server, and I want it to execute a mysqldump command. To do this I'm using the PHP exec() function:
<?php
exec("/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name > /path-to-export/file.sql", $output);
?>
When the right login details are passed to it, it'll work absolutely fine.
However, I'm having trouble checking whether it executes as expected and, if it doesn't, finding out why not.
The $output array returns as empty, whereas if I run the command directly on the command line a message is printed out telling me the login failed. I want to capture such error messages and display them. Any ideas on how to do that?
You should check the third parameter of the exec() function, &$return_var.
$return_var = NULL;
$output = NULL;
$command = "/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name > /path-to-export/file.sql";
exec($command, $output, $return_var);
By Unix convention, a process returns a status other than 0 when something goes wrong.
And so you can:
if($return_var) { /* there was an error code: $return_var, see the $output */ }
The solution I found is to run the command in a subshell and redirect the subshell's stderr to stdout (2>&1). This way, the $output variable is populated with the error message, if any.
For example:
exec("(mysqldump -uroot -p123456 my_database table_name > /path/to/dump.sql) 2>&1", $output, $exit_status);
var_dump($exit_status); // (int) The exit status of the command (0 for success, > 0 for errors)
echo "<br />";
var_dump($output); // (array) If exit status != 0 this will handle the error message.
Results :
int(6)
array(1) { [0]=> string(46) "mysqldump: Couldn't find table: "table_name"" }
Hope it helps!
That happens because > /path-to-export/file.sql redirects stdout into the file, leaving nothing for exec() to capture.
Try this instead:
<?php
exec("/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name", $output);
/* $output holds the SQL backup as an array of lines; save it to a file */
$h = fopen("/path-to-export/file.sql", "w+");
// exec() strips the trailing newline from each line, so rejoin with PHP_EOL
fputs($h, implode(PHP_EOL, $output));
fclose($h);
?>
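Equivalently, the fopen/fputs/fclose sequence can be collapsed into a single call:
file_put_contents("/path-to-export/file.sql", implode(PHP_EOL, $output));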
I was looking for the exact same solution, and I remembered I'd already solved this a couple of years ago, but forgotten about it.
As this page is high in Google for the question, here's how I did it:
<?php
define("BACKUP_PATH", "/full/path/to/backup/folder/with/trailing/slash/");
$server_name = "your.server.here";
$username = "your_username";
$password = "your_password";
$database_name = "your_database_name";
$date_string = date("Ymd");
$cmd = "mysqldump --hex-blob --routines --skip-lock-tables --log-error=mysqldump_error.log -h {$server_name} -u {$username} -p{$password} {$database_name} > " . BACKUP_PATH . "{$date_string}_{$database_name}.sql";
$arr_out = array();
unset($return);
exec($cmd, $arr_out, $return);
if($return !== 0) {
echo "mysqldump for {$server_name} : {$database_name} failed with a return code of {$return}\n\n";
echo "Error message was:\n";
$file = escapeshellarg("mysqldump_error.log");
$message = `tail -n 1 $file`;
echo "- $message\n\n";
}
?>
It's the --log-error=[/path/to/error/log/file] part of mysqldump that I always forget about!
Since exec() captures only stdout, and stdout is redirected to the file, a failed dump leaves a partial or missing result in the file with no indication of why. The error message goes to stderr, which exec() alone can't read. There are several solutions, all of them already given above, so this is just a summary.
Solution from Jon: have mysqldump log its errors and handle them separately (this doesn't apply to every command).
Redirect the outputs to separate files, e.g. mysqldump ... 2> error.log 1> dump.sql, and read the error log separately, as in the previous solution.
Solution from JazZ: run the dump in a subshell and redirect the subshell's stderr to stdout, which PHP's exec() can then capture in the $output variable.
Solution from Pascal: use proc_open() instead of exec(), because it exposes stdout and stderr separately (directly from pipes); see the sketch below.
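A minimal sketch of that proc_open() approach, using the same placeholder credentials and paths as the question. Writing stdout straight to the dump file keeps large dumps out of PHP's memory:
<?php
$cmd = "/usr/bin/mysqldump -u mysql-user -h 123.145.167.189 -pmysql-pass database_name";
$descriptors = [
    1 => ["file", "/path-to-export/file.sql", "w"], // stdout goes straight to the dump file
    2 => ["pipe", "w"],                             // stderr comes back to PHP
];
$process = proc_open($cmd, $descriptors, $pipes);
if (is_resource($process)) {
    $errors = stream_get_contents($pipes[2]); // read the error message, if any
    fclose($pipes[2]);
    $exit_status = proc_close($process);
    if ($exit_status !== 0) {
        echo "mysqldump failed ($exit_status): $errors";
    }
}
?>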
Use the code below to export the database to a .sql file:
<?php exec('mysqldump --user=name_user --password=password_enter --host=localhost database_name > filenameofsql.sql'); ?>