I'm brand new to shell scripting and have been searching for examples of how to create a backup script for my website, but I'm unable to find anything, or at least anything I understand.
I have a Synology Diskstation server that I'd like to use to automatically (through its scheduler) take backups of my website.
I currently am doing this via Automator on my Mac in conjunction with the Transmit FTP program, but making this a command line process is where I struggle.
This is what I'm looking to do in a script:
1) Open a URL without a browser (this URL creates a MySQL dump of the databases on the server to be downloaded later). An example URL would be http://mywebsite.com/dump.php
2) Use FTP to download all files from the server. (Currently Transmit FTP handles this as a sync function and only downloads files where the remote file date is newer than the local file. It will also remove any local files that don't exist on the remote server.)
3) Create a compressed archive of the files from step 2, named as website_CURRENT-DATE
4) Move the archive from step 3 to a specific folder and delete any file in that folder that's older than 120 days.
Right now I don't know how to do step 1, or the synchronization in step 2 (I see how I can use wget to download the whole site, but it seems as though that would download everything each time it runs, even if it hasn't changed).
Steps 3 and 4 are probably easy to find via searching, but I haven't searched for that yet since I can't get past step 1.
Thanks!
Also, FYI, my web host doesn't do these types of backups, which is why I'd like to do my own.
Answering each of your questions in order, then:
Several options, the most common of which would be wget http://mywebsite.com/dump.php or curl http://mywebsite.com/dump.php.
Since you have SSH access to the server, you can very easily use rsync to grab a snapshot of the files on disk with e.g. rsync -e ssh --delete --stats -zav username@mywebsite.com:/path/to/files/ /path/to/local/backup.
Once you have the snapshot from rsync, you can make a compressed, dated copy with cd /path/to/local/backup; tar zcvf /path/to/archives/website-$(date +%Y-%m-%d).tgz *
find /path/to/archives -mtime +120 -type f -exec rm -f '{}' \; will remove all backups older than 120 days.
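Putting those together, a minimal sketch of the whole job (the paths, username, and URL are the same placeholders used above; adjust them for your Diskstation before relying on it):
#!/bin/sh
# Backup sketch - placeholder paths and credentials throughout.
BACKUP=/path/to/local/backup
ARCHIVES=/path/to/archives
# 1) Hit the dump URL without a browser so the server writes a fresh MySQL dump.
curl -s http://mywebsite.com/dump.php > /dev/null
# 2) Sync the site down; --delete removes local files that no longer exist remotely.
rsync -e ssh --delete -zav username@mywebsite.com:/path/to/files/ "$BACKUP"
# 3) Compress the snapshot into a dated archive.
tar zcvf "$ARCHIVES/website-$(date +%Y-%m-%d).tgz" -C "$BACKUP" .
# 4) Drop archives older than 120 days.
find "$ARCHIVES" -mtime +120 -type f -exec rm -f '{}' \;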
I've automated the deployment of my site, and I have a script that runs framework/sake dev/build "flush=1". This works, however it clears the cache directory of the user who runs it, which is different from the Apache user (which I can't run the script as).
I've read a few bug reports and seen people discussing it on the SS forum, however either there is no answer or it doesn't work, for example:
define('MANIFEST_FILE', TEMP_FOLDER . "/manifest-main");
I thought about just deleting the cache directory, however its name is a randomised string, so it's not easy to script.
What's the best way to clear the cache via the command line?
To get this to work you first need to move the cache from the default directory to within the web directory by creating a folder silverstripe-cache at the web root. Also make sure the path is read/write (the SS default config blocks it from being readable by the public).
Then you can script:
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"
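For example, a deploy step along these lines would cover both parts (the web-root path and the apache user/group here are assumptions, adjust them to your setup):
# Sketch only - /path/to/web/root and the 'apache' user/group are assumptions.
mkdir -p /path/to/web/root/silverstripe-cache
chown apache:apache /path/to/web/root/silverstripe-cache
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"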
How can I copy files every day, via cron, from a source server to a destination server (to create a backup) and then delete those files from the source server?
I need to copy just the newest files (but it's not a problem if the copied files are deleted afterwards).
I've found these solutions:
https://serverfault.com/questions/259938/cron-job-to-copy-file-from-one-location-to-another-for-new-files-daily
https://unix.stackexchange.com/questions/166542/transferring-data-between-servers
But I don't know how to be sure all the files have been transferred correctly so that I can safely delete them from the source server.
There are two options for how to do that - via SSH or some combination with PHP.
Can you show me the correct way? A concrete solution would be best because I'm not confident with these things (SSH, scp, etc.).
My working solution:
50 3 * * * sudo rsync --remove-source-files /SOURCE_PATH/* SSH_LOGIN:/DESTINATION_PATH
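Note that --remove-source-files only deletes source files that rsync has successfully transferred, which covers the "be sure before deleting" concern. A slightly fuller sketch (the paths, SSH login, and log file are placeholders) that cron could call instead of the one-liner:
#!/bin/bash
# move-backup.sh - sketch only; paths, SSH login, and log file are placeholders.
# -a preserves permissions and timestamps, -z compresses in transit,
# --remove-source-files removes only the files that were transferred successfully.
rsync -az -e ssh --remove-source-files \
    /SOURCE_PATH/ SSH_LOGIN:/DESTINATION_PATH/ >> /var/log/move-backup.log 2>&1
It could then be scheduled with a crontab line such as 50 3 * * * /path/to/move-backup.sh.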
I am creating an automatic backup system. I plan on running a cron job that will, once a week, automatically create a backup and email it to a secure email address. (This way, even if the server explodes into a million pieces, I will have a full, recent backup of my files that any administrator can access.)
I found this simple method: system('tar -cvzpf backup.tar.gz /path/to/folder');
It works nearly perfectly for what I need. The only problem is that there is one directory that I do not want included in the backup. On this website, users upload their own avatars, and the directory in which the images are held is inside the directory I want backed up. Because I'm sending this via email I have to keep the backup relatively small, and a couple of thousand images add up. Is there any way I could tell the function to ignore this directory and just compress everything else?
find /path/to/folder -not -ipath '*baddirectory*' -print | xargs tar -cvzpf backup.tar.gz, though you might consider passing PHP the full path to all the binaries you use (in the above command: find, xargs, and tar).
From the tar man page:
tar --exclude='/path/to/folder/bad'
So you would get:
system("tar -czpf backup.tar.gz --exclude='/path/to/folder/bad' /path/to/folder");
You can leave the v (verbose) out, since you are not watching your code being executed.
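A quick sketch of the same idea using relative paths (the directory names are placeholders), plus a way to double-check that the excluded directory really stayed out of the archive:
# Build the archive from the parent directory; 'folder/bad' stands in for the avatar directory.
cd /path/to
tar -czpf backup.tar.gz --exclude='folder/bad' folder
# List the archive contents; this should print nothing if the exclude worked.
tar -tzf backup.tar.gz | grep 'folder/bad'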
You can exclude something from being included with the --exclude-tag-all=PATTERN long option, as described in the manual.
Unfortunately, I did not find a good example of the pattern.
I guess the following exclude will work:
--exclude-tag-all=/path/foldernotinclude/.
since it should match the "directory" file tag.
With luck, another user will comment on the correct pattern to use.
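For what it's worth, in GNU tar --exclude-tag-all=FILE takes the name of a tag file rather than a path pattern: any directory that contains a file with that name is skipped entirely. A minimal sketch (the tag-file name .no-backup and the paths are just examples):
# Put an empty tag file in the directory to be skipped (the name is arbitrary).
touch /path/to/folder/bad/.no-backup
# tar then skips every directory that contains a file named .no-backup.
tar -czpf backup.tar.gz --exclude-tag-all=.no-backup /path/to/folder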
I am working on Ubuntu and trying to get a PHP script working that will allow the user to input a Youtube video URL, and the script will download the flv and convert it using youtube2mp3 (which I found here: http://hubpages.com/hub/Youtube-to-MP3-on-Ubuntu-Linux ). I have been getting errors which I'm sure are permissions based, and I would like to know the best and most secure way to correct them. Right now I'm calling
echo system("youtube-dl --output=testfile.flv --format=18 $url");
just to try and get the downloading portion working. What shows up on the following page is
[youtube] Setting language
[youtube] xOMEi2g_oEU: Downloading video webpage
[youtube] xOMEi2g_oEU: Downloading video info webpage
[youtube] xOMEi2g_oEU: Extracting video information
[youtube] xOMEi2g_oEU: Extracting video information
before showing the rest of my (irrelevant) output. In the apache error log, I'm getting
ERROR: unable to open for writing: [Errno 13] Permission denied: u'testfile.flv.part'
which is obviously indicative of a permissions issue. Do I have to chown the directory in question to www-user? Is that secure? Or should I chmod the directory instead? Eventually I will be putting this on a public facing server and I don't want any vulnerabilities in my implementation. Any and all advice and answers are greatly appreciated!
This is running as the user running the PHP process, so two things:
Make sure this user has access to the directory you are writing your test file out to. I would specify a path that is isolated and not part of the web server directory structure, which is where it appears to be writing now.
Is $url coming from user input? If it is, I would then use escapeshellcmd on the entire string to ensure there isn't a rogue rm -rf * command in there.
chown can be used only by a superuser, so if that's convenient you can use it, but servers don't normally run as superusers, so I would go for chmod.
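Combining the two answers above (and assuming you do have root via sudo), a minimal sketch of the directory setup, where Apache is assumed to run as www-data (the Debian/Ubuntu default) and the download directory is a made-up path outside the web root:
# Isolated download directory outside the web root (the path is made up).
sudo mkdir -p /var/lib/ytdl-downloads
# Let the Apache user write there...
sudo chown www-data:www-data /var/lib/ytdl-downloads
# ...while keeping it closed to other local users.
sudo chmod 750 /var/lib/ytdl-downloads
You would then point youtube-dl's --output at a file inside that directory.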
Both of @Wes's suggestions are worth following; you don't want some goofball to supply a URL like ||nc -l 8888 | sh & and log in to your system a second later.
I strongly recommend confining your configuration with a tool such as AppArmor, SElinux, TOMOYO, or SMACK. Any of these mandatory access control tools can prevent an application from writing in specific locations, executing arbitrary commands, reading private data, etc.
As I've worked on the AppArmor system for a decade, it's the one I'm most familiar with; I believe you could have a profile for your deployment put together in half a day or so. (It'd take me about ten or fifteen minutes, but like I said, I've been working on AppArmor for a decade. :)
When accessing http://www.example.net, a CSV file is downloaded with the most current data regarding that site. I want to have my site, http://www.example.com, access http://www.example.net on an hourly basis in order to get updated information.
I want to then use the updated information stored in the CSV file to compare changes from data in previous CSV files. I obviously have no idea what the best plan of attack would be so any help would be appreciated. I am just looking for a general outline of how I should proceed, but the more information the better.
By the way, I'm using a LAMP bundle, so PHP and MySQL solutions are preferred.
I think the easiest way for you to handle this would be to have a cron job running every hour (or a scheduled task if you are on Windows), downloading the CSV with curl or file_get_contents (manual). When you have downloaded the CSV you can import the new data into your MySQL database.
The CSV should have some kind of timestamp on every row so you can easily separate new and old data.
Also, handling XML would be better than plain CSV.
A better way to set that up would be to create a webservice on http://www.example.com and update it in real time from http://www.example.net, but that requires you to have access to both websites.
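For the download half of that, a minimal sketch of an hourly fetch (the output directory is a placeholder; the URL is taken from the question, which says it serves the CSV directly):
#!/bin/bash
# fetch_csv.sh - sketch only; the output directory is a placeholder.
# Keeps a timestamped copy of each download so later runs can compare against it.
OUT_DIR=/path/to/csv-archive
mkdir -p "$OUT_DIR"
curl -s -o "$OUT_DIR/data-$(date +%Y%m%d-%H%M).csv" http://www.example.net/
It could be scheduled from cron with something like 0 * * * * /path/to/fetch_csv.sh.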
Depending on the OS you're using, you're looking at a scheduled task (Windows) or a cron job (*nix) to kick up a service/app that would pull the new CSV and compare it to an older copy.
You'll definitely want to go the route of a cron job. I'm not exactly sure what you want to do with the differences; however, if you just want an email, here is one potential (and simplified) option:
wget http://uri.com/file.txt && diff file.txt file_previous.txt | mail -s "Differences" your@email.com && mv file.txt file_previous.txt
Try this command by itself from your command line (I'm guessing you are using a *nix box) to see if you can get it working. From there, I would save this to a shell file in the directory where you want to save your CSV files.
cd /path/to/directory
vi process_csv.sh
And add the following:
#!/bin/bash
cd /path/to/directory
wget http://uri.com/file.txt
diff file.txt file_previous.txt | mail -s "Differences" your@email.com
mv file.txt file_previous.txt
Save and close the file. Make the new shell script executable:
chmod +x process_csv.sh
From there, start investigating the cronjob route. It could be as easy as checking to see if you can edit your crontab file:
crontab -e
With luck, you'll be able to enter your cronjob and save/close the file. It will look something like the following:
01 * * * * /path/to/directory/process_csv.sh
I hope you find this helpful.