Need to call a PHP script from a bash script through eval - php

I need to run
eval "php /srv/www/scripts/mage/install-invoke-app.php"
It finds the file, but ends up choking on the <?php right off. Why? How can that be fixed? Googling so far has not produced the right answer.
Update
Here is the script in short: it's a function to which I pass a callback. Tons have been stripped out, so this is just the relevant area.
In the including base.sh script:
cd /srv/www/
. scripts/install-functions.sh
#tons of other stuff
cd /srv/www/
. scripts/mage-install.sh
In install-functions.sh:
install_repo(){
    if [ $2 ]
    then
        echo "just 1"
        git clone $1 -q
    else
        echo "just 1 and 2"
        git clone $1 $2 -q
    fi
    success=$?
    if [[ $success -eq 0 ]];
    then
        echo "Repository successfully cloned."
        echo "cleaning"
        cd $r/
        rm -rf LICENSE.txt STATUS.txt README.md RELEASE_NOTES.txt modman
        cd ../
        cp -af $r/* .
        rm -rf $r/
        if [ -z "$3" ]
        then
            echo "no callback"
        else
            eval $3
        fi
    else
        echo "Something went wrong!"
    fi
    sleep 1 # slow it down to ensure that we have the items put in place.
}
#declare -A list = ( [repo]=gitUser )
install_repolist(){
    gitRepos=$1
    for r in "${!gitRepos[@]}" #loop with key as the var
    do
        giturl="git://github.com/${gitRepos[$r]}/$r.git"
        echo "Adding $r From $giturl"
        if [ -z "$r" ];
        then
            echo
        else
            install_repo $giturl $2 $3
        fi
        echo
    done
    return 1
}
In scripts/mage-install.sh:
declare -A gitRepos
#[repo]=gitUser
gitRepos=(
    [wsu_admin_base]=jeremyBass
    [wsu_base_theme]=jeremyBass
    [Storeutilities]=jeremyBass
    [StructuredData]=jeremyBass
)
cd /srv/www/mage/
install_repolist $gitRepos 0 "php /srv/www/scripts/mage/install-invoke-app.php"
unset gitRepos #unset and re-declare to clear associative arrays
declare -A gitRepos
And that is the basic loop here. I need to call back to a function, but that install_repolist is used in other areas too, so I can't hard-code it. If there is a better way than eval, cool.

There are still some parts of your script that I don't understand, or am not sure really work, but about your question: I think your callback could only work if you place it in a function like:
function mycallback {
    php /srv/www/scripts/mage/install-invoke-app.php
}
And call your install_repolist function as
install_repolist $gitRepos 0 mycallback
That should make your php command call with the file argument work, but there is one thing: I don't think the values of gitRepos can actually be passed like that.
Most of the variables in your code actually need to be quoted with double quotes "". One problem with leaving them unquoted is that, due to word splitting, your php command ends up at the place where it is finally executed as the single argument php, no longer followed by the file.
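Here is a minimal sketch of that approach (hedged: the stubbed install_repo, the trimmed repo list, and the bash 4.3+ nameref used to pass the array by name are my own illustration, not the original script):
#!/usr/bin/env bash

mycallback() {
    php /srv/www/scripts/mage/install-invoke-app.php
}

install_repo() {
    # Stub standing in for the original clone/cleanup logic.
    echo "would clone: $1 (flag=$2)"
    if [ -n "$3" ]; then
        "$3"    # invoke the callback as a function; no eval, no word splitting
    fi
}

install_repolist() {
    declare -n repos=$1    # nameref to the caller's associative array (bash 4.3+)
    local flag=$2 callback=$3
    local r giturl
    for r in "${!repos[@]}"; do
        giturl="git://github.com/${repos[$r]}/$r.git"
        echo "Adding $r from $giturl"
        install_repo "$giturl" "$flag" "$callback"    # quoted arguments
    done
}

declare -A gitRepos=( [wsu_admin_base]=jeremyBass [wsu_base_theme]=jeremyBass )
install_repolist gitRepos 0 mycallback    # pass the array *name* and the function name
Because the callback is a shell function, install_repolist never has to re-parse a command string, which is exactly what eval plus unquoted expansion was tripping over.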

Related

How to execute two CMD queries in PHP simultaneously

$command1 = "interfacename -S ipaddress -N nms -P company ";
$command2 = "list search clientclass hardwareaddress Mac address ";
if ( exec( $command1 . "&&" . $command2 ) ) {
    echo "successfully executed";
} else {
    echo "Not successfully executed";
}
If command 1 (cmd query) successfully executed, I want command 2 (which also contains some cmd queries) to be executed next. In the above script, only command 1 is executed. It doesn’t show any result for command 2.
I have wasted two days on this without finding any solution.
You can use either a ; or a && to separate the commands. The ; runs both commands unconditionally. If the first one fails, the second one still runs. Using && makes the second command depend on the first. If the first command fails, the second will NOT run. Reference
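For illustration, here is the difference at the shell level (dummy commands, just a sketch):
false ; echo "runs regardless of the previous exit status"
false && echo "never printed, because false exited with a non-zero status"
true  && echo "printed, because true exited with status 0"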
You can use the shell_exec() PHP function to run a shell command directly from your script.
Syntax: string shell_exec ( string $cmd )
Example:
$output = shell_exec('ls -lart');
var_dump($output); # Show the output
You can chain multiple commands, with conditions, in a single command line.
Example:
$data = "rm a.txt && echo \"Deleted\"";
$output = exec($data);
var_dump($output);
if ($output == "Deleted") {
    # Successful
}
In the above example the string "Deleted" is assigned to $output when the file is deleted successfully; otherwise an error/warning/empty string is assigned to $output. You should base your condition on that $output string.
Here is the documentation of shell_exec()
Note: the output of shell_exec() ends with a newline character.
If I understand your question correctly, you want to execute $command1 and then execute $command2 only if $command1 succeeds.
The way you tried, by joining the commands with && is the correct way in a shell script (and it works even with the PHP function exec()). But, because your script is written in PHP, let's do it in the PHP way (in fact, it's the same way but we let PHP do the logical AND operation).
Use the PHP function exec() to run each command and pass three arguments to it. The second argument ($output, passed by reference) is an array variable. exec() appends to it the output of the command. The third argument ($return_var, also passed by reference) is a variable that is set by exec() with the exit code of the executed command.
The convention on Linux/Unix programs is to return 0 exit code for success and a (one byte) positive value (1..255) for errors. Also, the && operator on the Linux shell knows that 0 is success and a non-zero value is an error.
Now, the PHP code:
$command1 = "ipcli -S 192.168.4.2 -N nms -P nmsworldcall ";
$command2 = "list search clientclassentry hardwareaddress 00:0E:09:00:00:01";
// Run the first command
$out1 = array();
$code1 = 0;
exec($command1, $out1, $code1);

// Run the second command only if the first command succeeded
$out2 = array();
$code2 = 0;
if ($code1 == 0) {
    exec($command2, $out2, $code2);
}

// Output the outcome
if ($code1 == 0) {
    if ($code2 == 0) {
        echo("Both commands succeeded.\n");
    } else {
        echo("The first command succeeded, the second command failed.\n");
    }
} else {
    echo("The first command failed, the second command was skipped.\n");
}
After the code ends, $code1 and $code2 contain the exit codes of the two commands; if $code1 is not zero, the first command failed, and $code2 is still zero only because the second command was never executed.
$out1 and $out2 are arrays that contain the output of the two commands, split on lines.
I'm not sure about simultaneous execution, but I am sure about making one command depend on another command's execution. Here I run a single command first to clear any previously set path, second I change to my project path, and third I run the Angular command npm install:
$path = "D:/xampp/htdocs/tests/omni-files-upload/aa-test/src";
$command_one = "cd /";
$command_two = "cd ".$path;
$command_three = "npm install";
#exec($command_one."&& ".$command_two."&& ".$command_three);

Why is bash inserting the output of "ls /" in output?

I've come across a rather mystifying bug in bash, which I suspect has to do with the shell expansion rules.
Here's the story: at work, I've been tasked with documenting a massive internal website for coordinating company resources. Unfortunately, the code is quite ugly, as it has outgrown its original purpose and "evolved" into the main resource for coordinating company efforts.
Most of the code is PHP. I wrote a few helper scripts to help me write the documentation; for example, one script extracts all the global php variables used in a php function.
At the center of all these scripts lies the "extract_function.sh" script. Basically, given a single php function name and a php source file, it extracts and outputs that php function.
Now here's the problem: somehow, as the script is extracting the function, it is basically inserting the output of ls / randomly within the output.
For example:
$ ./extract_function my_function my_php_file.php
function my_function {
// php code
/etc
/bin
/proc
...
// more php code
}
Even more confusingly, I've only gotten this to occur for one specific function from one specific file! Now, since the function is quite huge (500+ lines, I mean it when I say the code is ugly!), I haven't been able for the life of me to figure out what is causing this, or to come up with a simpler ad-hoc function to produce this behavior. Also, company policy prevents me from sharing the actual code.
However, here is my code:
#!/usr/bin/env bash

program_name=$(basename $0);
function_name=$1;
file_name=$2;

if [[ -z "$function_name" ]]; then
    (>&2 echo "Usage: $program_name function_name [file]")
    exit 1
fi

if [[ -z "$file_name" ]] || [ "$file_name" = "-" ]; then
    file_name="/dev/stdin";
fi

php_lexer_file=$(mktemp)
trap "rm -f $php_lexer_file" EXIT

read -r -d '' php_lexer_text << 'EOF'
<?php
$file = file_get_contents("php://stdin");
$tokens = token_get_all($file);
foreach ($tokens as $token)
    if ($token === '{')
        echo PHP_EOL, "PHP_BRACKET_OPEN", PHP_EOL;
    else if ($token == '}')
        echo PHP_EOL, "PHP_BRACKET_CLOSE", PHP_EOL;
    else if (is_array($token))
        echo $token[1];
    else
        echo $token;
?>
EOF

echo "$php_lexer_text" > $php_lexer_file;

# Get all output from beginning of function declaration
extracted_function_start=$(sed -n -e "/function $function_name(/,$ p" < $file_name);

# Prepend <?php so that php will parse the file as php
extracted_function_file=$(mktemp)
trap "rm -f $extracted_function_file" EXIT
echo '<?php' > $extracted_function_file;
echo "$extracted_function_start" >> $extracted_function_file;

tokens=$(php $php_lexer_file < $extracted_function_file);
# I've checked, and at this point $tokens does not contain "/bin", "/lib", etc...

IFS=$'\n';
open_count=0;
close_count=0;
for token in $tokens; do # But here the output of "ls /" magically appears in $tokens!
    if [ $token = "PHP_BRACKET_OPEN" ]; then
        open_count=$((open_count+1))
        token='{';
    elif [ $token == "PHP_BRACKET_CLOSE" ] ; then
        close_count=$((close_count+1))
        token='}';
    fi
    echo $token;
    if [ $open_count -ne 0 ] && [ $open_count -eq $close_count ]; then
        break;
    fi
done
Yes, I know that I shouldn't be using bash to manipulate php code, but I basically have two questions:
1) Why is bash doing this?
2) And, how can I fix it?
One of the tokens in $tokens is a * (or a glob pattern which can match several files). If you cannot arrange for the token list to not contain shell metacharacters, you will need to jump through some hoops to avoid expansion. One possible technique is to use read -ra to read the tokens into an array, which will make it easier to quote them.
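As a hedged sketch of that idea (using mapfile rather than read -ra, with variable names of my own choosing), the unquoted for loop over $tokens can be replaced with an array plus quoted expansion so that a token such as * stays literal:
# $tokens, open_count and close_count come from the original script above.
mapfile -t token_list <<< "$tokens"        # one token per array element, split on newlines only
for token in "${token_list[@]}"; do        # quoted, so '*' is never glob-expanded
    if [ "$token" = "PHP_BRACKET_OPEN" ]; then
        open_count=$((open_count+1))
        token='{'
    elif [ "$token" = "PHP_BRACKET_CLOSE" ]; then
        close_count=$((close_count+1))
        token='}'
    fi
    printf '%s\n' "$token"                 # printf with a quoted argument keeps the token literal too
    if [ $open_count -ne 0 ] && [ $open_count -eq $close_count ]; then
        break
    fi
done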

How to run a shell command with a sub shell with PHP's exec?

Running this command with PHP's exec gives me syntax errors, no matter if I run it directly or put it in an extra file and run that.
time (convert -layers merge input1.png $(for i in directory/*; do echo -ne $i" "; done) output.png)
I think the problem is that it creates sub shells, which exec doesn't seem to be able to handle.
Syntax error: word unexpected (expecting ")")
Try to simplify the command: remove the outer ( ) which you don't need.
You could replace $(for i in directory/*; do echo -ne $i" "; done) with just directory/*, or, if you are worried about empty dirs, with $(shopt -s nullglob; echo directory/*):
time convert -layers merge input1.png directory/* output.png
or
time convert -layers merge input1.png $(shopt -s nullglob; echo directory/*) output.png
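For context, a small sketch of what nullglob changes (emptydir is an illustrative name, not from the question):
echo emptydir/*    # without nullglob an unmatched pattern stays literal: emptydir/*
shopt -s nullglob
echo emptydir/*    # with nullglob it expands to nothing, so convert gets no bogus argument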

Exit shell script using PHP exit code while looping?

#!/bin/bash
cd /maintenance;
for (( i=1;i<1000;i++)); do
    php -q dostuff.php $i
done
I use this shell script to call the dostuff.php script and pass the $i as an argv value to the script. The script connects to a webservice that returns results 50 items at a time. The $i value is the page number... I have no way to know how many times it needs to be called (how many pages) until I get a response code back from CURL inside that script that I test for. I need to pass my own response code back to the shell script to have it stop looping... it will never get to 1000 iterations... it was just a quick loop I made.
If I use exec("php -q dostuff.php $i", $output, $return_var) how do I tell the script to keep executing and passing the incremented $i value until my php script exits with a response code of 0?
There has got to be a better way. Maybe a while? Just not that good with this syntax.
I have to start at page 1 and repeat until page XXX incrementing by 1 each iteration. When there are no more results I can test for this in the dostuff.php and exit(0). What is the best way to implement this in the shell script?
Thanks!
You can check for the return value of the script, and break the loop if it isn't what is expected.
Usually a script returns 0 when it ran successfully, and something else otherwise, so assuming your script respects this convention you could do:
#!/bin/bash
cd /maintenance;
for (( i=1;i<1000;i++)); do
    php -q dostuff.php $i
    if [ $? -ne 0 ]; then break; fi
done
On the other hand, if you want your script to return 0 if the loop shouldn't continue then you should do:
if [ $? -eq 0 ]; then break; fi
Edit, to take the comment into account and simplify the script: if your script returns 0 when it shouldn't be called again, you can instead do:
#!/bin/bash
cd /maintenance;
for (( i=1;i<1000;i++)); do
    if php -q dostuff.php $i; then break; fi
done
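If you would rather not hard-code an upper bound at all, a while/until loop works too. A hedged sketch, using the same convention as the edit above (dostuff.php exits with status 0 once there are no more pages):
#!/bin/bash
cd /maintenance;
i=1
until php -q dostuff.php "$i"; do   # keep looping while the script exits non-zero
    i=$((i + 1))
done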
As already suggested in the comments, you might get far better control if you don't wrap the PHP script inside a bash script but instead use the PHP CLI itself as the shell script (PHP works as a scripting shell too):
#!/usr/bin/php
<?php
for ($i = 0; $i < 1000; $i++) {
    // contents of dostuff.php integrated
}
You might also be interested in using STDOUT, STDIN and STDERR:
http://php.net/manual/en/features.commandline.io-streams.php

Check to determine if userA has access to fileB (PHP)

PHP has an is_readable function which checks whether a file is readable by the user the script runs as. Is there a corresponding function to check whether a file is readable by a specified user, for example
is_readable('Gavrilo Princip', 'black_hand.srj')
Not built in. I don't even think there is a command line utility to check if a certain user has read permissions to a file.
You can write your own function to do the checking though. Look into the fileperms(), fileowner(), filegroup(), and posix_getpwuid() functions.
Check this question
Check file permissions
PHP fileperms http://php.net/manual/en/function.fileperms.php
PHP stat http://www.php.net/manual/en/function.stat.php
The examples in there are for *nix systems. I don't know if it will operate the same on Windows hosts. With these you could get the GID and UID of the file.
I don't know if there is a PHP equivalent that would let you get the UID and/or GID of the particular system user. You may need to get that manually and search against those values. You can find the value typically in the /etc/passwd file
Thanks to the help of Chris and AndrewR I have come up with an as-yet-untested solution. It is implemented in shell and waits for input on standard in (designed to work with Apache RewriteMap), but it can easily be modified to be called from the command line or from a PHP script. It is a little more complicated than it has to be because we pipe the output of a command (getfacl) into a while loop; doing so starts a new subprocess, so any variables declared or updated inside that loop (i.e. result) are not visible to the outside world. I used getfacl so that I can later expand this to work with ACL permissions as well. Finally, for implementation reasons, I already know the owner of the file (user) before calling this script; if that is not the case, it can easily be found from the getfacl output.
#!/bin/bash
# USAGE: STDIN viewer:user:file
while read line
do
    viewer=`echo $line | cut -d ':' -f 1`
    user=`echo $line | cut -d ':' -f 2`
    file=`echo $line | cut -d ':' -f 3`
    result=$(
        getfacl $file 2>/dev/null | while read line
        do
            if [[ $user == $viewer ]] && [[ $line =~ ^user: ]]
            then
                permissions=`echo $line | cut -d ':' -f 3`
                if [[ $permissions =~ r ]]
                then
                    echo true
                    break
                fi
            elif [[ $user == $viewer ]] && [[ $line =~ ^group: ]]
            then
                # NOTE: I take advantage of the fact that each user has one single group and that group has the same name as the user's name
                permissions=`echo $line | cut -d ':' -f 3`
                if [[ $permissions =~ r ]]
                then
                    echo true
                    break
                fi
            elif [[ $line =~ ^other: ]]
            then
                permissions=`echo $line | cut -d ':' -f 3`
                if [[ $permissions =~ r ]]
                then
                    echo true
                    break
                fi
            fi
        done
    )
    if [[ $result == "true" ]]
    then
        echo true
    else
        echo false
    fi
done
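As an aside, here is a tiny sketch of the pipe-into-while subshell pitfall mentioned above, and of the command-substitution workaround the script relies on (dummy data, illustration only):
found=false
printf 'a\nb\n' | while read -r line; do
    found=true                     # set inside the pipeline's subshell
done
echo "$found"                      # still prints "false": the assignment was lost

result=$(printf 'a\nb\n' | while read -r line; do echo true; break; done)
echo "$result"                     # prints "true": the subshell's output was captured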
