tar file preserves full path. How do I stop it? - php

I am trying to create a tar archive on my server through PHP as follows:
exec('tar -cvf myfile.tar tmp_folder/innerfolder/');
It works fine, but the archive preserves the full path, including tmp_folder/innerfolder/.
I am creating these archives on the fly for users, so it is awkward for them to get this path when extracting. I have reviewed this topic - How to strip path while archiving with TAR - but the explanation there doesn't give an example, and I don't quite understand what to do.
Please show me, with an example, how to add files to a tar so that it does not preserve the 'tmp_folder/innerfolder/' part in the archive.
Thanks in advance.

Use the -C option to tar:
tar -C tmp_folder/innerfolder -cvf myfile.tar .
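In PHP that could look something like the following (a minimal sketch using the paths from your question; escapeshellarg is only there as a precaution):
<?php
// Change into tmp_folder/innerfolder before archiving, so the stored
// paths are relative to that folder rather than to the full path.
$src  = escapeshellarg('tmp_folder/innerfolder');
$dest = escapeshellarg('myfile.tar');
exec("tar -C $src -cvf $dest .");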

You can cheat:
exec('cd /path/to/tmp_folder/ && tar -cvf /path/to/myfile.tar innerfolder/');
This would give your users just the innerfolder when they extract the tarball.

You can use --transform
tar -cf files.tar --transform='s,/your/path/,,' /your/path/file1 /your/path/file2
tar -tf files.tar
file1
file2
More info: http://www.gnu.org/software/tar/manual/html_section/transform.html

tar czf ~/backup.tgz --directory=/path filetotar

If you want to preserve the current directory name but not the full path to it, try something like this (executed from within the directory that you want to tar; assumes bash/zsh):
ORIGDIR=${PWD##*/}
tar -C `dirname $PWD` -cvf ../archive.tar $ORIGDIR
Here's some detail; first:
ORIGDIR=${PWD##*/}
.. stores the current directory name (i.e. the name of the directory you're in). Then, in the tar command:
-C `dirname $PWD`
.. switches tar's "working directory" from the current directory to the parent of the folder you want to archive. Note that the -C switch only affects the paths used when building the archive, not the location where the archive file itself is written. Hence you still have to prefix the archive name with "../", or tar will place it inside the folder you started the command in. Finally, $ORIGDIR is relative to the parent directory, so it and its contents are archived recursively into the tar, without the leading path.

Related

pass php variables in a bash script

I want to run this bash script:
#!/bin/bash
# file name: myscript.sh
PROJECT_DIR=$1
mkdir $PROJECT_DIR
mkdir $PROJECT_DIR/PolySkills
mkdir $PROJECT_DIR/PolySkills/cr
mkdir $PROJECT_DIR/PolySkills/cr/corrections
mkdir $PROJECT_DIR/PolySkills/cr/corrections/jpg
mkdir $PROJECT_DIR/PolySkills/cr/corrections/pdf
mkdir $PROJECT_DIR/PolySkills/cr/diagnostic
mkdir $PROJECT_DIR/PolySkills/cr/zooms
mkdir $PROJECT_DIR/PolySkills/data
mkdir $PROJECT_DIR/PolySkills/exports
mkdir $PROJECT_DIR/PolySkills/scans
mkdir $PROJECT_DIR/PolySkills/copies
cd $PROJECT_DIR/PolySkills
cp ~/file.tex $PROJECT_DIR/PolySkills
For the variable "PROJECT_DIR" that represents a path in which I will create folders, I want to retrieve its value from a php variable.
I looked at some examples on the internet and I tried one but it does not work. This is what i used :
chdir('~/');
$pathtofile = "~/ExportEval/".$NomUE."/".$NomOcc."/".$Numtudiant;
$directory=$pathtofile."/AMC_Project";
$output = exec("./myscript $directory");
Note that the script file "myscript" exists in my home directory "~/".
Thank you for your help :)
Update:
I found the problem, but I don't see the solution: the variable $NomUE contains a sentence with spaces in it, so bash treats $1 as only the first word of that sentence (and if I change $1 to $2 it takes the second word of the same sentence!). I don't understand why it does not take $pathtofile as the path.
You need to always escape variables before sending them to the command line. Also, ~ has no meaning here and there's no reason to use chdir(). Just use a fully qualified pathname instead:
<?php
$pathtofile = "/home/someuser/ExportEval/$NomUE/$NomOcc/$Numtudiant";
$directory = escapeshellarg("$pathtofile/AMC_Project");
$output = exec("/home/someuser/myscript $directory");
You may have problems running a script in a home directory, and reading from that directory. It is best to put the script in a proper location such as /usr/local/bin/.
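Note that the script itself also needs quotes around $1 and $PROJECT_DIR; otherwise bash will split the path on spaces again even after escaping it in PHP. A minimal sketch of the quoted version (same directories as in your script, with mkdir -p creating the parents):
#!/bin/bash
# file name: myscript.sh
# Quote the first argument so a path containing spaces stays in one piece.
PROJECT_DIR="$1"
mkdir -p "$PROJECT_DIR/PolySkills/cr/corrections/jpg" \
         "$PROJECT_DIR/PolySkills/cr/corrections/pdf" \
         "$PROJECT_DIR/PolySkills/cr/diagnostic" \
         "$PROJECT_DIR/PolySkills/cr/zooms" \
         "$PROJECT_DIR/PolySkills/data" \
         "$PROJECT_DIR/PolySkills/exports" \
         "$PROJECT_DIR/PolySkills/scans" \
         "$PROJECT_DIR/PolySkills/copies"
cp ~/file.tex "$PROJECT_DIR/PolySkills"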

replace multiple files across various subfolders with a new version of a file in unix

I can find several examples of how to replace a string in multiple files using grep or sed, but I want to replace an old version of a file with a new version.
For example, I have a class file new1.class.php in 10 different sub-folders and I want to replace all of these with a new new1.class.php.
How can I do that?
You can use find and just copy the new file over the top thusly...
find . -name new1.class.php -exec cp /some/place/new1.class.php {} \;
(Assuming you run this as root) cp will preserve the ownership and permissions of the target file (the one being overwritten). If you want to keep the permissions of the source file instead, you can use cp -p.
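If you want to see exactly what would be replaced before running it for real, a simple dry run is to put echo in front of cp so the commands are printed instead of executed:
# Print the copy commands that would run, without actually copying anything.
find . -name new1.class.php -exec echo cp /some/place/new1.class.php {} \;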

unix mv --backup=numbered

I'm trying, in PHP, to move a folder but keep both files in the destination folder if there are duplicates.
I tried to do that with recursion but it's too complicated; so many things can go wrong, for example file permissions and duplicate files/folders.
I'm trying to work with the system() command and I can't figure out how to move files but keep a backup of duplicates without destroying the extension.
$last_line = system('mv --backup=t websites/test/ websites/test2/', $retval);
gives the following if file exist in both dirs:
ajax.html~
ajax.html~1
ajax.html~2
What I'm looking for is:
ajax~.html
ajax~1.html
ajax~2.html
or anything similar like (1), (2), ... but without ruining the extension of the file.
Any ideas, please?
P.S. I must use the system() command.
For this problem, I use sed to swap the extension and the backup number after the fact, in the function below (passing my target directory as the argument):
swap_file_extension_and_backup_number ()
{
    IFS=$'\n'
    # Loop over every entry in the target directory ($1)
    for y in $(ls $1)
    do
        # Escape spaces in the source name, then swap the ".ext" and ".~N~"
        # groups at the end of the name so the extension comes last again.
        mv $1/`echo $y | sed 's/ /\\ /g'` $1/`echo "$y" | sed 's/\(\.[^~]\{3\}\)\(\.~[0-9]\{1,2\}~\)$/\2\1/g'`
    done
}
The function assumes that your file extensions will be the normal 3 characters long, and this will find backups up to two digits long i.e. .~99~
Explanation:
This part:
$1/`echo $y | sed 's/ /\\ /g'`
represents the first argument (the original file) of mv, and protects you from space characters by adding an escape.
The last part:
$1/`echo "$y" | sed 's/\(\.[^~]\{3\}\)\(\.~[0-9]\{1,2\}~\)$/\2\1/g'`
is of course the target file name, where the two parenthesised groups are swapped, i.e. /\2\1/.
If you want to keep the original files and just create a copy, then use cp, not mv.
If you want to create a backup archive then do a tar gzip of the folder like this
tar -pczf name_of_your_archive.tar.gz /path/to/directory/to/backup
rsync --ignore-existing --remove-source-files /path/to/source /path/to/dest
Use rsync with the --backup and --backup-dir options, e.g.:
rsync -a --backup --backup-dir /usr/local/backup/2013/03/20/ /path/to/source /path/to/dest
Every time a file might be overwritten, it is copied to the folder given, plus the path to that item, e.g. /path/to/dest/path/to/source/file.txt.
From the looks of things, there doesn't seem to be any built-in method to back up files while keeping the extension in the correct place. I could be wrong, but I was not able to find one that doesn't do what your original question already points out.
Since you said that it's complicated to copy the files over using PHP, perhaps you can do it the same way you are doing it right now, getting the files in the format:
ajax.html~
ajax.html~1
ajax.html~2
Then use PHP to parse through the files and rename them to the format you want. This way you won't have to deal with permissions or duplicate files, which are complications you mentioned. You just have to look for files in this format and rename them.
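A rough sketch of that renaming step (assuming the backups land in websites/test2 and look like "name.ext~" or "name.ext~N", as in your output above):
<?php
$dir = 'websites/test2';
foreach (scandir($dir) as $file) {
    // Match "name.ext~" or "name.ext~N" and move the "~N" part
    // in front of the extension, e.g. "ajax.html~2" -> "ajax~2.html".
    if (preg_match('/^(.*)\.([^.~]+)(~\d*)$/', $file, $m)) {
        rename("$dir/$file", "$dir/{$m[1]}{$m[3]}.{$m[2]}");
    }
}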
I am not responding strictly to your question, but the case I am presenting here is very common and therefore valid!
Here's my hack!
TO USE WITH FILES:
#!/bin/bash
# It will find all the files according to the arguments in
# "<YOUR_ARGUMENT_TO_FIND_FILES>" ("find" command) and move them to the
# "<DEST_FOLDER>" folder. Files with the same name will follow the pattern:
# "same_name.ext", "same_name (1).ext", "same_name (2).ext",
# "same_name (3).ext"...
cd <YOUR_TARGET_FOLDER>
mkdir ./<DEST_FOLDER>
find ./ -iname "<YOUR_ARGUMENT_TO_FIND_FILES>" -type f -print0 | xargs -0 -I "{}" sh -c 'cp --backup=numbered "{}" "./<DEST_FOLDER>/" && rm -f "{}"'
cd ./<DEST_FOLDER>
for f_name in *.~*~; do
    # "file.ext.~N~": peel off the backup number, then the base name and extension.
    f_bak_ext="${f_name##*.}"
    f_bak_num="${f_bak_ext//[^0-9]/}"
    f_orig_name="${f_name%.*}"
    f_only_name="${f_orig_name%.*}"
    f_only_ext="${f_orig_name##*.}"
    # Rebuild the name as "file (N).ext".
    mv "$f_name" "$f_only_name ($f_bak_num).$f_only_ext"
done
cd ..
TO USE WITH FOLDERS:
#!/bin/bash
# It will find all the folders according to the arguments in
# "<YOUR_ARGUMENT_TO_FIND_FOLDERS>" ("find" command) and move them to the
# "<DEST_FOLDER>" folder. Folders with the same name will have their contents
# merged, however files with the same name WILL NOT HAVE DUPLICATES (example:
# "same_name.ext", "same_name (1).ext", "same_name (2).ext",
# "same_name (3).ext"...).
cd <YOUR_TARGET_FOLDER>
find ./ -path "./<DEST_FOLDER>" -prune -o -iname "<YOUR_ARGUMENT_TO_FIND_FOLDERS>" -type d -print0 | xargs -0 -I "{}" sh -c 'rsync -a "{}" "./<DEST_FOLDER>/" && rm -rf "{}"'
This solution might work in this case
cp --backup=simple src dst
Or
cp --backup=numbered src dst
You can also specify a suffix
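For example (a minimal sketch; the .orig suffix is just an arbitrary choice, and note that the suffix is still appended after the whole file name, so this alone does not move the extension back to the end):
# Keep the overwritten file under a custom suffix instead of the default "~".
cp --backup=simple --suffix='.orig' src dst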

How to build a shared library using CMake, starting from this (non-working) Makefile example

I am writing a Makefile by hand to create a PHP extension lib using SWIG. I have the following directory structure:
wrappers/ # SWIG generated C++ wrappers and header
objects/ # I want to place my object files here
bin/ # I want to place my executable (shared lib) here
This is what my Makefile looks like:
CC=g++
CFLAGS=-fPIC -c -Wall
INCLUDES=`php-config --includes` -Iwrappers
LDFLAGS=-shared
SOURCES=foo_wrap.cpp \
foobar_wrap.cpp \
foofoobar_wrap.cpp \
foobarbar_wrap.cpp
OBJECTS=$(SOURCES:.cpp=.o)
EXECUTABLE=php_foobarlib.so
all: wrappers/$(SOURCES) bin/$(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
	$(CC) $(LDFLAGS) $(OBJECTS) -o objects/$(input)

.cpp.o:
	$(CC) $(CFLAGS) $< -o objects/$(input)

clean:
	rm -rf *o $(EXECUTABLE)
When I run make at the command line, I get the following error:
make: *** No rule to make target `foobar_wrap.cpp', needed by `all'.  Stop.
I want to build the shared library using CMake instead. Could someone please post an outline of the CMakeLists.txt file I need to create to build the shared library, taking into account the directory structure of the project, i.e. where I want the built objects and binaries to go?
You can specify the following in the Makefile:
SOURCES=wrappers/foo_wrap.cpp ... and so on,
and then remove wrappers/ from the all: target.
As you observed, this will create the object files in the wrappers directory. To avoid that, look at the answers to gcc/g++ option to place all object files into separate directory for a solution to this problem.
wrappers/$(SOURCES) expands to nonsense, prefixing only the first filename with the path.
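As for the CMake side: a minimal CMakeLists.txt sketch along these lines should work (this assumes the SWIG-generated sources already exist under wrappers/, that php-config is on the PATH, and that letting object files go into CMake's own build tree is acceptable; the target name is taken from your Makefile):
cmake_minimum_required(VERSION 3.10)
project(php_foobarlib CXX)

# Ask php-config for the PHP include flags and split them into a CMake list.
execute_process(COMMAND php-config --includes
                OUTPUT_VARIABLE PHP_INCLUDES
                OUTPUT_STRIP_TRAILING_WHITESPACE)
separate_arguments(PHP_INCLUDE_FLAGS UNIX_COMMAND "${PHP_INCLUDES}")

add_library(php_foobarlib SHARED
    wrappers/foo_wrap.cpp
    wrappers/foobar_wrap.cpp
    wrappers/foofoobar_wrap.cpp
    wrappers/foobarbar_wrap.cpp)

target_include_directories(php_foobarlib PRIVATE wrappers)
target_compile_options(php_foobarlib PRIVATE ${PHP_INCLUDE_FLAGS} -Wall)

# Put the finished .so in bin/ (objects go into CMake's build directory);
# an empty PREFIX drops the usual "lib" prefix from the file name.
set_target_properties(php_foobarlib PROPERTIES
    PREFIX ""
    LIBRARY_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)
Then configure and build out of source, e.g. mkdir build && cd build && cmake .. && make.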

After tar extract, Changing Permissions

Just a question regarding Unix and PHP today.
In my PHP I am using the Unix system to untar a tarred file:
exec("tar -xzf foo.tar.gz");
Generally everything works fine until I run into this particular foo.tar.gz, which has a directory structure as follows:
Applications/
Library/
Systems/
After running the tar command, it seems that the file permissions get changed to 644 (instead of 755).
This causes Permission denied (errno 13) errors and therefore breaks most of my code (I'm guessing from lack of privileges).
Is there any way I can stop this tar command from completely ruining my permissions?
Thanks.
Oh, and this seems to happen only when I have a foo.tar.gz file that has this particular directory structure. Anything else and I'm good.
If you want to keep the permissions on the files, then you have to add the -p (or --preserve-permissions or --same-permissions) switch when extracting the tarball. From the tar man page:
--preserve-permissions
--same-permissions
-p
When `tar' is extracting an archive, it normally subtracts the
users' umask from the permissions specified in the archive and
uses that number as the permissions to create the destination
file. Specifying this option instructs `tar' that it should use
the permissions directly from the archive.
So the PHP code should be (note that f has to be the last option in the cluster, since it takes the archive name as its argument):
exec("tar -xzpf foo.tar.gz");
Edit: --delay-directory-restore solved the problem below about being unable to untar a file. The permissions of pwd are still altered, so the problem of the original poster might not be solved.
Not really an answer, but a way to reproduce the error.
First create some files and directories. Remove write access to the directories:
mkdir hello
mkdir hello/world
echo "bar" > hello/world/foo.txt
chmod -w hello/world
chmod -w hello
Next, create the tar file from within the directory, preserving permissions.
cd hello
tar -cpf ../hw.tar --no-recursion ./ world world/foo.txt
cd ..
Listing the archive:
tar -tvf hw.tar
# dr-xr-xr-x ./
# dr-xr-xr-x world/
# -rw-r--r-- world/foo.txt
So far, I've been unable to untar the archive as a normal user due to the "Permission denied" error; the archive can't be untarred naively. The permissions of the local directory change as well.
mkdir untar
cd untar
ls -ld .
# drwxr-xr-x ./
tar -xvf ../hw.tar
# ./
# world/
# tar: world: Cannot mkdir: Permission denied
# world/foo.txt
# tar: world/foo.txt: Cannot open: No such file or directory
# tar: Exiting with failure status due to previous errors
ls -ld .
# dr-xr-xr-x ./
Experimenting with umask and/or -p did not help. However, adding --delay-directory-restore does help untarring:
tar -xv --delay-directory-restore -f ../hw.tar
# ./
# world/
# world/foo.txt
ls -ld .
# dr-xr-xr-x ./
chmod +w .
It is also possible to untar the file as root. What surprised me most is that tar apparently can change the permissions of the working directory, which is still unsolved.
By the way, I originally got into this problem by creating a tarball for / with
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system /
as root (pwd=/) and untarring it as a normal user to create a linux container.
