I have many files containing PHP serialized data in which I have to replace some strings with others. The Linux host doesn't have PHP installed. The problem is adjusting the serialized string lengths to the correct size after the replacement.
I tried something like this to replace the /share path with /opt:
sed -re 's~s:([0-9]+):"/share([^"]*)~s:int(\1-2):/opt\2~g' file
but the resulting file is bad: the lengths are the literal expression int(size - 2).
Any idea?
This solution isn't ideal, but you could use perl:
my $line;
while ($line = <STDIN>) {
    # "/opt" is 2 characters shorter than "/share", so subtract 2 from the recorded length
    $line =~ s~s:([0-9]+):"/share([^"]*)~"s:".($1-2).":\"/opt$2"~ge;
    print $line;
}
Hopefully I've understood your requirements correctly. Here's an example:
php -r 'echo serialize(array("/share/foo")) . "\n";'
a:1:{i:0;s:10:"/share/foo";}
php -r 'echo serialize(array("/share/foo")) . "\n";' | perl replace.pl
a:1:{i:0;s:8:"/opt/foo";}
EDIT: Here's a modified script to edit the file in-place with variable search and replace strings.
I just cannot fathom how to get the PHP exec() or shell_exec() functions to treat a '*' character as a wildcard. Is there some way to properly encode / escape this character so it makes it through to the shell?
This is on Windows (via a CLI shell script, if that matters; Terminal or Git Bash yields the same results).
Take the following scenario:
C:\temp\ contains a bunch of png images.
echo exec('ls C:\temp\*');
// output: ls: cannot access 'C:\temp\*': No such file or directory
Permissions are not the problem:
echo exec('ls C:\temp\example.png');
// output: C:\temp\example.png
Therefore the * character is the problem and is being treated as a literal filename rather than a wildcard. The file named * does not exist, so from that point of view, it's not wrong...
It also does not matter if I use double quotes to encase the command:
echo exec("ls C:\temp\*");
// output: ls: cannot access 'C:\temp\*': No such file or directory
I have also tried other things like:
exec(escapeshellcmd('ls C:\temp\*'));
exec('ls C:\temp\\\*');
exec('ls "C:\temp\*"');
exec('ls "C:\temp\"*');
And nothing works...
I'm pretty confused that I cannot find any other posts discussing this, but maybe I'm just missing them. At this point I have already worked around the issue by writing a glob() loop and using the internal copy() function on each file individually, but it's really bugging me that I do not understand how to make the wildcard work via a shell command.
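For reference, a minimal sketch of that kind of glob()-and-copy() workaround (the source and destination paths are made-up examples, not the actual ones):
<?php
// Copy every PNG from one directory to another without shelling out.
$source = 'C:/temp';        // example path
$dest   = 'C:/temp-backup'; // example path
foreach (glob($source . '/*.png') as $file) {
    if (!copy($file, $dest . '/' . basename($file))) {
        echo "Failed to copy $file\n";
    }
}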
EDIT:
Thanks to @0stone0 - the answer provided did not directly answer my initial question, but I had not tried using forward slashes in the path, and when I do:
exec('ls C:/temp/*')
It works correctly, and as 0stone0 said, it only returns the last line of the output, which is fine since this was just a proof of concept and I was not actually trying to parse the output.
Also, on a side note: since posting this question my system has been updated to Windows 11 22H2, and now for some reason the original test code (with the backslashes) no longer returns the "cannot access / No such file" error message. Instead it just returns an empty string, with nothing set in the &$output parameter either. That said, I'm not sure whether the forward slashes would have worked on my system prior to the 22H2 update.
exec() only returns the last output line by default.
The wildcard probably works, but the output is just truncated.
Pass a variable by reference to exec() and log that:
<?php
$output = [];
exec('ls -lta /tmp/*', $output);
var_dump($output);
Without any additional changes, this returns the same output as when I run ls -lta /tmp/* in my Bash terminal.
That said, glob() is still the preferred way of getting data like this, especially since you shouldn't parse the output of ls.
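A rough sketch of doing that natively (the /tmp path just mirrors the example above; sorting by modification time approximates what ls -lta shows):
<?php
// List /tmp/* newest-first without shelling out to ls.
$files = glob('/tmp/*');
usort($files, function ($a, $b) {
    return filemtime($b) <=> filemtime($a); // most recently modified first
});
foreach ($files as $file) {
    printf("%s  %8d  %s\n", date('Y-m-d H:i', filemtime($file)), filesize($file), $file);
}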
I'm writing a PHP CLI script that accepts, among other arguments, a path.
So an example is:
php myscript.php -p=/Volumes/Macintosh HD/Users/andrea/samples
The script has its own way to read the arguments, and it properly gets the value for -p, setting it in a variable called $project_path.
However, when I test the folder with is_dir($project_path), it returns false.
I've tried to pass the path in different ways:
/Volumes/Macintosh\ HD/Users/andrea/samples
'/Volumes/Macintosh HD/Users/andrea/samples'
"/Volumes/Macintosh HD/Users/andrea/samples"
'/Volumes/Macintosh\ HD/Users/andrea/samples'
"/Volumes/Macintosh\ HD/Users/andrea/samples"
None of them works.
What's the format I must use to make it work?
Please consider that the script must also work on other operating systems (e.g. Windows).
The problem is that the path argument is automatically escaped; I need to unescape it.
The returned string is:
\'/Volumes/Macintosh\ HD/Users/andrea/samples\'
Short answer: Use escapeshellarg()
Long answer:
First, make the script executable:
chmod +x yourscript.php
Then invoke it with the path passed through escapeshellarg():
$path = '/Volumes/Macintosh HD/Users/andrea/samples';
$cmdline = sprintf('/home/user/yourscript.php -p=%s 2>&1', escapeshellarg($path));
$output = shell_exec($cmdline);
Example CLI script:
#!/usr/bin/php
<?php
fwrite(STDOUT, print_r($_SERVER, TRUE));
exit(0); // exit with exit code 0
?>
I eventually used getopt() to get the arguments unescaped (I don't know why there is this difference), and str_replace( array( "'", '"'), '', $file_path ); to remove the wrapping quotes.
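A rough sketch of that approach (variable names are just examples; depending on your PHP version you may need to pass the value as -p <path> rather than -p=<path> for a short option):
<?php
// Parse the -p option; getopt() hands back the raw, unescaped value.
$options   = getopt('p:');
$file_path = $options['p'] ?? '';
// Strip any wrapping quotes, as described above.
$project_path = str_replace(array("'", '"'), '', $file_path);
var_dump(is_dir($project_path));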
I have a PHP script that retrieves 200 lines from a file by executing a command in Bash using backtick operators. Here's what the code looks like:
$endline = `(shell execution that returns a number here)`;
$line = $endline - "200";
$lines = "sed -n '".$line.", ".$endline." p' log.txt";
echo $lines;
$file = `$lines`;
echo $file;
This code returns $lines as sed -n '1800, 2000 p' log.txt, but $file doesn't return any results. When directly using sed -n '1800, 2000 p' log.txt in a Bash terminal, I get the expected results.
What am I doing incorrectly here? Do the ' characters have to be escaped?
Edit: the shell command added a space after the number, so sed misread the range.
My guess is that it's $eof or that your path (log.txt) is not appropriate.
I copied and pasted your code, and it works with the following tweaks:
syntax error fixed (add ; to echo $lines)
change $eof to $endline (though you may not need to if $eof is valid)
ensure that log.txt was a valid path (this is most likely your error)
otherwise, it ran as expected.
The reason it would work in Bash but not in PHP is that their "working directory" is not necessarily the same.
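Putting those two points together, a more defensive version of the snippet might look like this (the command that produces the line number and the log path are placeholders, not the originals):
<?php
// Placeholder command: however you obtain the last line number.
$endline = trim(`wc -l < /var/tmp/log.txt`); // trim() strips the stray space/newline
$line = $endline - 200;
// Use an absolute path so PHP's working directory doesn't matter.
$lines = "sed -n '" . $line . "," . $endline . " p' /var/tmp/log.txt";
$file = shell_exec($lines);
echo $file;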
Is it possible to pipe data using Unix pipes into a command-line PHP script? I've tried
$> data | php script.php
But the expected data did not show up in $argv. Is there a way to do this?
PHP can read from standard input, and also provides a nice shortcut for it: STDIN.
With it, you can use things like stream_get_contents and others to do things like:
$data = stream_get_contents(STDIN);
This will just dump all the piped data into $data.
If you want to start processing before all data is read, or the input size is too big to fit into a variable, you can use:
while (!feof(STDIN)) {
    $line = fgets(STDIN);
    // process $line here
}
STDIN is just a shortcut for $fh = fopen("php://stdin", "r");.
The same methods can be applied to reading and writing files and TCP streams.
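For example, the same fgets() loop works unchanged on an ordinary file handle (the filename here is just an example):
<?php
// Same pattern as reading STDIN, applied to a regular file.
$fh = fopen("/tmp/example.txt", "r");
while (!feof($fh)) {
    $line = fgets($fh);
    // process $line here
}
fclose($fh);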
As I understand it, $argv will show the arguments of the program, in other words:
php script.php arg1 arg2 arg3
But if you pipe data into PHP, you will have to read it from standard input. I've never tried this, but I think it's something like this:
$fp = fopen("php://stdin", "r");
// read from $fp as if it were a regular file handle
If your data is on one line, you can also use either the -F or -R flag (-F reads and executes the file following it, -R executes the code literally). If you use these flags, the string that has been piped in will appear in the (regular) global variable $argn.
Simple example:
echo "hello world" | php -R 'echo str_replace("world","stackoverflow", $argn);'
You can pipe data in, yes. But it won't appear in $argv. It'll go to stdin. You can read this several ways, including fopen('php://stdin','r')
There are good examples in the manual
This worked for me:
stream_get_contents(fopen("php://stdin", "r"));
Came upon this post looking to make a script that behaves like a shell script, executing another command for each line of the input... ex:
ls -ln | awk '{print $9}'
If you're looking to make a php script that behaves in a similar way, this worked for me:
#!/usr/bin/php
<?php
// Read everything piped in on STDIN, then run another command once per line.
$input = stream_get_contents(fopen("php://stdin", "r"));
$lines = explode("\n", $input);
foreach ($lines as $line) {
    if ($line === '') {
        continue; // skip the trailing empty line
    }
    // escapeshellarg() keeps spaces or quotes in $line from breaking the command
    $command = "php next_script.php " . escapeshellarg($line);
    $output = shell_exec($command);
    echo $output;
}
If you want it to show up in $argv, try this:
echo "Whatever you want" | xargs php script.php
That would convert whatever goes into standard input into command-line arguments.
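As a quick illustration (script.php is just a stand-in name), a script that dumps $argv shows the piped words arriving as arguments:
<?php
// script.php - print each command-line argument on its own line
foreach ($argv as $i => $arg) {
    echo "$i: $arg\n";
}
Running echo "Whatever you want" | xargs php script.php then prints "Whatever", "you" and "want" as separate arguments, because xargs splits the piped input on whitespace by default.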
The best option is to use the -r option and take the data from stdin. For example, I use it to easily decode JSON using PHP.
This way you don't have to create a physical script file.
It goes like this:
docker inspect $1|php -r '$a=json_decode(stream_get_contents(STDIN),true);echo str_replace(["Array",":"],["Shares"," --> "],print_r($a[0]["HostConfig"]["Binds"],true));'
This piece of code will display the shared folders between the host and a container.
Please replace $1 with the container name, or put the whole thing in a Bash function, e.g. displayshares() { ... }
I needed to take a CSV file and convert it to a TSV file. Sure, I could import the file into Excel and then re-export it, but where's the fun in that when piping the data through a converter means I can stay in the commandline and get the job done easily!
So, my script (called csv2tsv) is
#!/usr/bin/php
<?php
while(!feof(STDIN)){
echo implode("\t", str_getcsv(fgets(STDIN))), PHP_EOL;
}
I chmod +x csv2tsv.
I can then run cat data.csv | csv2tsv > data.tsv, and I now have my data as a TSV!
OK. No error checking (is the data an actual CSV file?), etc. but the principle works well.
And of course, you can chain as many commands as you need.
If you want to expand on this idea, then how about the ability to pass additional options to your command?
Simple!
#!/usr/bin/php
<?php
$separator = $argv[1] ?? "\t";
while(!feof(STDIN)){
echo implode($separator, str_getcsv(fgets(STDIN))), PHP_EOL;
}
Now I can override the default separator, changing it from a tab to something else. A | maybe!
cat data.csv | csv2tsv '|' > data.psv
Hope this helps and allows you to see how much more you can do!
echo "sed -i 's/NULL/\\N/g' ".$_REQUEST['para'].".sql";
The above statement works, but it fails when I use it in exec() like this:
exec("sed -i 's/NULL//\/\/\N/g' ".$_REQUEST['para'].".sql");
You should escape backslashes with backslashes, not with forward slashes, like this:
exec("sed -i 's/NULL/\\\\N/g' ".$_REQUEST['para'].".sql");
EDIT: I wrote the answer without looking at what the code actually does. Don't do this, because $_REQUEST['para'] can be whatever the user wants, which can be used for code injection. Use the PHP functions as the other answer suggests.
Although it's entirely up to you, my advice is not to call system commands unnecessarily. In PHP, you can use preg_replace() to do what sed does here.
preg_replace("/NULL/", "\\N", file_get_contents($_REQUEST['para'] . ".sql"))
Building on ghostdog's idea, here's code that will actually do what you want (the snippet above reads the file but never writes the result back):
//basename protects against directory traversal
//ideally we should also do a is_writable() check
$file = basename($_REQUEST['para'].".sql");
$text = file_get_contents($file);
$text = str_replace('NULL', '\\N', $text); //no need for a regex
file_put_contents($file, $text);
Admittedly, however, if the file in question is more than a few meg, this is inadvisable as the whole file will be read into memory. You could read it in chunks, but that'd get a bit more complicated:
$file = basename($_REQUEST['para'].".sql");
$tmpFile = tempnam("/tmp", "FOO");
$in = fopen($file, 'r');
$tmp = fopen($tmpFile, 'w');
while($line = fgets($in)) {
$line = str_replace('NULL', '\\N', $line);
fputs($tmp, $line);
}
fclose($tmp);
fclose($in);
rename($tmpFile, $file);
If the file is 100+ meg, honestly, calling sed directly like you are will be faster. When it comes to large files, the overhead of trying to reproduce a tool like sed/grep with its PHP equivalent just isn't worth it. However, you need to at least take some steps to protect yourself if you're going to do so:
Taking some basic steps to secure amnom's code:
$file = basename($_REQUEST['para'].".sql");
if(!is_writable($file))
throw new Exception('bad filename');
exec("sed -i 's/NULL/\\\\N/g' ".escapeshellarg($file));
First, we call basename(), which strips any path from our filename (e.g., if an attacker submitted the string '/etc/passwd', we'd at least now be limiting them to the file 'passwd' in the current working directory).
Next, we ensure that the file is, in fact, writable. If not, we shouldn't continue.
Finally, we escapeshellarg() on the file. Failure to do so allows arbitrary command execution. e.g., if the attacker submitted the string /etc/passwd; rm -rf /; #, you'd end up with the command sed 's/blah/blah/' /etc/passwd; rm -rf /; #.sql. It should be clear that while that exact command may not work, finding one that actually would is trivial.
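To see what escapeshellarg() buys you here, a quick sketch using the malicious string from the example above:
<?php
$malicious = "/etc/passwd; rm -rf /; #";
// escapeshellarg() wraps the value in single quotes (and escapes any embedded ones),
// so the shell sees a single literal argument instead of extra commands.
echo "sed -i 's/NULL/\\\\N/g' " . escapeshellarg($malicious . ".sql") . "\n";
// Prints: sed -i 's/NULL/\\N/g' '/etc/passwd; rm -rf /; #.sql'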