I am creating an automatic backup system. I plan on running a cron job that will, once a week, automatically create a backup and email it to a secure email address. (This way, even if the server explodes into a million pieces, I will have a full, recent backup of my files that any administrator can access.)
I found this simple method: system('tar -cvzpf backup.tar.gz /path/to/folder');
It works nearly perfectly for what I need. The only problem is that there is one directory I do not want included in the backup. On this website, users upload their own avatars, and the directory the images are held in is inside the directory I want backed up. Because I'm sending this via email I have to keep the archive relatively small, and a couple of thousand images add up. Is there any way I could tell the command to ignore this directory and just compress everything else?
find /path/to/folder -not -ipath '*baddirectory*' -print | xargs tar -cvzpf backup.tar.gz
Though you might consider passing PHP the full path to all of the binaries you use (in the above command: find, xargs, and tar).
From the tar man page:
tar --exclude='/path/to/folder/bad'
So you would get:
system("tar --exclude='/path/to/folder/bad' -czpf backup.tar.gz /path/to/folder");
You can leave the v (verbose) flag out, since you will not be watching the output while it runs.
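If it helps, here is a minimal PHP sketch of the whole call (the paths are placeholders for your own); escapeshellarg() is used so spaces or odd characters in the paths can't break the command:

// Hypothetical paths - adjust to your setup.
$source  = '/path/to/folder';
$exclude = '/path/to/folder/bad';
$archive = '/path/to/backup.tar.gz';

$cmd = sprintf('tar --exclude=%s -czpf %s %s',
    escapeshellarg($exclude),
    escapeshellarg($archive),
    escapeshellarg($source));

system($cmd, $status);   // $status is 0 when tar succeeded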
You can exclude something from being included with the --exclude-tag-all=PATTERN long option, as described in the manual.
Unfortunately I did not find a good example of the pattern.
I guess the following exclude will work:
--exclude-tag-all=/path/foldernotinclude/.
since it should match the directory's tag file.
With luck another user will leave a comment about the pattern to use.
I have a REALLY strange thing happening! When I view a file (within the "Program Files (x86)" folder tree) in my file manager it has one content, but when I retrieve it through PHP CLI script using file_get_contents() it has different content (with some additional lines I added through the script earlier) - except if I run the CLI script in a prompt with admin rights, then I see the same content. How on earth is it possible that the same file can have different content based on the permissions of the user accessing the file? Is that really possible, and if so where can I find more information on how it works? I've never heard of such a thing in my 25+ years of computing and programming experience...
I have quadruple-checked that the path is the same and checked in all kinds of ways that there isn't something else playing a trick on me - but I just can't find any possible explanation!
I'm running Windows 10.
32-bit applications that do not have a requestedExecutionLevel node in their manifest are assumed to be UAC-unaware and if they try to write to a privileged location in the file system or registry (when the process is not elevated) the write operation is virtualized. Virtualized files are stored in %LocalAppData%\VirtualStore.
Manually delete the file in the virtual store and then edit the ACL/security of the file if you need to write to it from your script as a standard user...
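As a quick way to confirm this is what is happening, a small PHP sketch (the application path below is a made-up example) can check whether a virtualized copy of the file exists:

// Hypothetical file inside a protected location.
$path = 'C:\\Program Files (x86)\\SomeApp\\settings.ini';

// The VirtualStore mirrors the path minus the drive letter ("C:\").
$virtual = getenv('LOCALAPPDATA') . '\\VirtualStore\\' . substr($path, 3);

if (file_exists($virtual)) {
    echo "A virtualized copy exists: $virtual\n";
}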
I'm brand new to shell scripting and have been searching for examples of how to create a backup script for my website, but I'm unable to find anything, or at least anything I understand.
I have a Synology Diskstation server that I'd like to use to automatically (through its scheduler) take backups of my website.
I currently am doing this via Automator on my Mac in conjunction with the Transmit FTP program, but making this a command line process is where I struggle.
This is what I'm looking to do in a script:
1) Open a URL without a browser (this URL creates a MySQL dump of the databases on the server to be downloaded later). An example URL would be http://mywebsite.com/dump.php
2) Use FTP to download all files from the server. (Currently Transmit FTP handles this as a sync function and only downloads files where the remote file date is newer than the local file. It also will remove any local files that don't exist on the remote server).
3) Create a compressed archive of the files from step 2, named as website_CURRENT-DATE
4) Move archive from step 3 to a specific folder and delete any file in this specific folder that's older than 120 Days.
Right now I don't know how to do step 1, or the synchronization in step 2 (I see how I can use wget to download the whole site, but that seems as though it will download everything each time it runs, even if it hasn't changed).
Steps 3 and 4 are probably easy to find via searching, but I haven't searched for that yet since I can't get past step 1.
Thanks!
Also, FYI, my web host doesn't do these types of backups, so that's why I'd like to do my own.
Answering each of your questions in order, then:
Several options, the most common of which would be one of wget http://mywebsite.com/dump.php or curl http://mywebsite.com/dump.php.
Since you have ssh access to the server, you can very easily use rsync to grab a snapshot of the files on disk with e.g. rsync -essh --delete --stats -zav username@mywebsite.com:/path/to/files/ /path/to/local/backup.
Once you have the snapshot from rsync, you can make a compressed, dated copy with cd /path/to/local/backup; tar czvf /path/to/archives/website-$(date +%Y-%m-%d).tgz *
find /path/to/archives -mtime +120 -type f -exec rm -f '{}' \; will remove all backups older than 120 days.
I was reading about PHP functions and I came across symlink, but I couldn't really grasp it, especially its usage in real-world application development. Can anybody please explain it to me with a real-world example?
Thanks
Let's assume you have a src folder in your $HOME directory where your sources are stored. When you open a new shell, you usually start in your $HOME directory. It might be a common step that whenever you open up a new shell, you want to enter the directory ~/src/very_long_project_name afterwards.
This is where symlinks come into play: you could create a symlink in your $HOME directory (for example called vlpn) that points directly to ~/src/very_long_project_name.
When you open your console next time, you could simply type cd vlpn instead of cd src/very_long_project_name. That's it. Nothing PHP specific. Like giraff and gnur already said.
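In PHP the very same link could be created like this (the home directory path is just an assumption):

// Create ~/vlpn pointing at the long project directory.
symlink('/home/user/src/very_long_project_name', '/home/user/vlpn');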
An administrator may create symlinks to arrange storage without messing up filesystems; e.g. a web mirror may have thousands of sites hosted, and dozens of disks mounted:
/mnt/disk1/
/mnt/disk2/
...
and want to store data in their /var/www/htdocs/ without users caring about which disk holds their data.
/var/www/htdocs/debian -> /mnt/disk1/debian
/var/www/htdocs/ubuntu -> /mnt/disk2/ubuntu
/var/www/htdocs/centos -> /mnt/disk9/centos
Second, you may have a 'latest upload': your users are uploading photos, or software packages, and you want http://example.com/HOT_STUFF to always be the most recent uploaded photo. You could call symlink($new_upload, $HOT_STUFF); and users will never need more than the one URL to see the newest thing.
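A small sketch of that idea (both paths are made up): creating the new link under a temporary name and then rename()-ing it over the old one keeps the swap atomic, so visitors never hit a missing link.

$new_upload = '/var/www/htdocs/uploads/photo_1234.jpg';   // hypothetical newest file
$hot_stuff  = '/var/www/htdocs/HOT_STUFF';

$tmp = $hot_stuff . '.tmp';
@unlink($tmp);                // clear any leftover from a failed run
symlink($new_upload, $tmp);
rename($tmp, $hot_stuff);     // atomically replaces the old link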
Third, Debian and Ubuntu use the update-alternatives mechanism to allow multiple versions of a tool to be installed at once and yet still allow the administrator to say which one is the default. e.g.,
$ ls -l /usr/bin/vi
lrwxrwxrwx 1 root root 20 2011-01-11 01:07 /usr/bin/vi -> /etc/alternatives/vi
$ ls -l /etc/alternatives/vi
lrwxrwxrwx 1 root root 18 2011-01-11 01:07 /etc/alternatives/vi -> /usr/bin/vim.basic
$ ls -l /usr/bin/vim.basic
-rwxr-xr-x 1 root root 1866904 2010-09-28 04:06 /usr/bin/vim.basic
It's a little circuitous, but the configuration is maintained in a per-system /etc/ directory, and the usual /usr/bin/vi path will execute something that is very much like vi, when there are many choices available (nvi, elvis, vim, AT&T vi, etc.)
Symlinks are something that is used on the host OS, not so much by PHP itself.
It creates a shortcut to a file. It can be useful for accessing a much-used file with a long path: by creating a symlink in the public_html folder to the long path, you can include the file without using the full path.
http://en.wikipedia.org/wiki/Symbolic_link
//edit:
This is better than just copying the file because you actually use the original file; if the original changes, the symlink will always point to the current file, so it is not a copy!
The PHP function actually only delegates to the operating system's functionality, so it is only as useful as a symbolic link in general is:
Symbolic links operate transparently for most operations: programs which read or write to files named by a symbolic link will behave as if operating directly on the target file.
(from Wikipedia)
I've seen it used by Typo3:
Each Typo3 site has folders that link to the main installation - so several sites can use the same code base, and Typo3 can be updated by extracting the new version, then changing the symlinks (reducing the time the site is offline).
Here is a pretty good example of usage, with an explanation: http://dogmatic69.com/blog/development/12-symlink-cakephp-plugin-assets-for-faster-page-loads
Are there differences between these functions (copy(), rename() and move_uploaded_file())? Why should I use one instead of another?
copy() copies the file - you then have two files, and for large files this can take a very long time.
rename() changes the file's name, which can mean moving it between directories.
move_uploaded_file() is basically the same as rename(), but it will only work on files that have been uploaded via PHP's upload mechanism. This is a security feature that prevents users from tricking your script into showing them security-relevant data.
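For illustration, a minimal upload-handler sketch (the form field name 'userfile' and the target directory are placeholders):

if (isset($_FILES['userfile']) && $_FILES['userfile']['error'] === UPLOAD_ERR_OK) {
    $target = '/var/www/uploads/' . basename($_FILES['userfile']['name']);

    // Succeeds only for files PHP itself received through the upload mechanism,
    // so a forged tmp_name pointing at e.g. /etc/passwd is rejected.
    if (move_uploaded_file($_FILES['userfile']['tmp_name'], $target)) {
        echo "Stored as $target";
    }
}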
In the future, I suggest looking up such information in the PHP Manual yourself.
I found this in the manual of move_uploaded_file():
Florian S. in H. an der E. [.de] at 17-Aug-2008 09:02
move_uploaded_file (on my setup) always makes files 0600 (rw- --- ---) and owned by the user running the webserver (owner AND group).
Even though the directory has a sticky bit set to the group permissions!
I couldn't find any settings to change this via php.ini or even using umask().
I want my regular user on the server to be able to tar cjf the directory .. which would fail on files totally owned by the webserver-process-user;
the copy(from, to) function obeys the sticky-bit though!
So it seems that copy() and rename() do slightly different work.
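If the 0600 permissions described above are a problem, a common workaround (just a sketch, continuing the upload example, and not guaranteed to suit every setup) is to chmod() the file right after moving it:

move_uploaded_file($_FILES['userfile']['tmp_name'], $target);
chmod($target, 0644);   // make the file readable to other users, e.g. for tar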
I'm working on a web backup service part of which allows the user to download a zip file containing the files they have selected from a list of what they have backed up.
The zipping process is done via a shell_exec command. For this to work I have given the apache user no-password sudo privileges to the zip command.
My problem is that, since I can't do "sudo cd /path/to/user/files" to get into the folder containing the user's backed-up files, I need to specify the full absolute path to the files to be zipped; those paths then end up inside the zip file, which I don't want.
Here's what I mean:
The user, 'test1' has their files backed up which are stored as follows
/home/backups/test1/my pics/pic1.jpg
/home/backups/test1/my pics/pic2.jpg
/home/backups/test1/my pics/pic3.jpg
/home/backups/test1/hello.txt
/home/backups/test1/music/band/song.mp3
When these files are zipped up they keep the full paths but I just want it to show:
my pics/pic1.jpg
my pics/pic2.jpg
my pics/pic3.jpg
hello.txt
music/band/song.mp3
Is there some way to tell zip to truncate the paths, or specify a custom path for each file?
Or is there some way I can change the working directory to the user's root backup folder so I can simply specify relative file paths?
Thanks for reading, hope you got this far and it all makes sense!
I have to slightly question why you are making a potentially dangerous system shell call when there are a number of good PHP zipping classes around. The tutorial here shows how to easily create a class that will output a zip file to the browser. There are also a number of classes on phpclasses.org; this seems to be the best one.
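For example, PHP's bundled ZipArchive extension lets you choose the path stored for each entry, which sidesteps the absolute-path problem entirely; a rough sketch using the directories from your question:

$base = '/home/backups/test1';

$zip = new ZipArchive();
$zip->open('/tmp/test1.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($base, FilesystemIterator::SKIP_DOTS)
);
foreach ($files as $file) {
    $abs = $file->getPathname();
    // Store each entry relative to $base, e.g. "my pics/pic1.jpg".
    $zip->addFile($abs, substr($abs, strlen($base) + 1));
}
$zip->close();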
If you have to do it with a system call, my suggestions are:
To truncate the path for the file you could use symbolic links.
Can you not increase the permissions of the zip executable so that Apache can use it without using sudo?
Have you tried using chdir() to change the working directory?
chdir('/home/backups/test1/');
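A quick sketch of how that would look together (the archive path is a placeholder):

chdir('/home/backups/test1');
// zip now runs from inside the user's folder, so only relative paths are stored.
shell_exec('zip -r /tmp/test1.zip .');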
A better idea may be to make a shell script, and grant the webserver sudo access to that shell script. This may be more secure, too.
#!/bin/bash
#
# Archive (tar + gzip) the files for a given user
#
# usage: backupToZipFile.sh <username>
cd /home/backups/"$1" || exit 1
tar -czf /tmp/"$1".tgz .
Also have you considered running sudo as the user you're trying to zip files as, instead of as root? This would be more secure as well.
It seems like a simple option, but according to the man page it's just not there. You could of course symlink the stuff you want to archive in the location where you'll create the zip file (I assume you can actually cd to that directory). With the proper options, zip will not archive the symlinks themselves but the symlinked files instead, using the names of the symlinks.
E.g. create symlinks /tmp/$PID/my pics/pic1.jpg etc, and then zip everything in /tmp/$PID.
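In PHP that approach might look roughly like this (the staging directory and file list are illustrative):

$stage = '/tmp/' . getmypid();            // per-run staging area
mkdir($stage . '/my pics', 0700, true);

// One symlink per selected file, keeping the relative layout.
symlink('/home/backups/test1/my pics/pic1.jpg', $stage . '/my pics/pic1.jpg');
symlink('/home/backups/test1/hello.txt', $stage . '/hello.txt');

chdir($stage);
shell_exec('zip -r /tmp/test1.zip .');    // zip follows the links by default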