I have a backup of my MySQL database (vBulletin forum v3.8) created every day. It's about 360 MB in size and is stored as a single text file in a secure folder.
I'm thinking of getting another server, through a different host, and somehow automatically transferring the backup to my second server every day.
Any ideas on how I could automate this process? I'm thinking PHP and a cron job.
Cron, definitely. PHP if you like it, but a bash script using mysqldump combined with gzip works wonders.
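As a minimal sketch of that approach, a nightly dump could look like the following (database name, credentials, and the backup path are placeholders, not taken from the question):

#!/bin/bash
# Sketch: dump the forum database and compress it with gzip.
# Database name, user, password, and directory are placeholders.
backup_dir=/var/backups/mysql
stamp=$(date +%Y-%m-%d)

mysqldump --user=backupuser --password=secret vbulletin \
  | gzip > "$backup_dir/vbulletin-$stamp.sql.gz"

A crontab entry such as 20 3 * * * /usr/local/bin/forum-backup.sh would then run it every night.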
Schedule rsync to transfer the files (over ssh) with cron (if you're on Linux).
Cron + rsync may be your best bet. If the file is text as you say and the changes are "diff"able then rsync can be used to transfer only the updates to that file. For example, the crontab could look something like this:
20 4 * * * rsync -a --delete source/ username@remotemachine.com:/path/to/destination/
This will sync the remote machine once a day deleting any files in the remote copy that no longer exist on the source machine.
As a note, I just read again and noticed this is a MySQL backup, so the output of the dump could eventually contain binary data; in that case you probably want to just use a replication server or copy the whole file each day. rsync could be used for that copy as well...
A slightly different approach, and technically not a backup solution, but you might want to consider running MySQL in replication mode and replicating changes live. That way, if the worst happens, you will be up to date with the data and not a day behind.
I'm writing a PHP app, and our team has our servers set up so that I have to copy code onto a second server every time I edit a file. Naturally this is against the DRY concept, and simply a monotonous chore.
When I make a change to the file, and go to refresh the webpage to see the changes, both of these servers have to be updated, or else there is a chance that I will be seeing the old version on the server that hasn't been updated yet. So I'm copying/pasting/saving hundreds of times a day while I develop this app!
I'm using Eclipse Luna 4.4.1. In the Remote Systems Explorer, the system type is "SSH Only." This is the same for server 2.
I would like some kind of functionality where, each time I save sample.php on server 1, the same file is automatically updated and saved with the same code on server 2. I don't want to do anything other than save the file on server 1; after doing that, the file on server 2 should be updated automatically.
Both servers are running nginx.
I've been reading a lot about this here on SE, but many of the question/answers are 5+ years old, and I wonder if there is an optimal way to do this today. I don't have much experience with Java, but I'm happy to do this any way that will work. Any plugin or other way would be great.
Take a look at running rsync over ssh, and put it on a cron to run the update on a periodic interval as you desire.
Here is an example of an rsync process I run to copy database backups (type / contents of the file are mostly irrelevant here):
rsync -ar -e "ssh -l <username>" <local_path_of_files_to_watch> hostname:<target_path_where_files_land>
This is what my command looks like to rsync databases from a remote system that monitors radio traffic and records decoded messages to a daily log file:
rsync -av --remove-source-files --progress --stats -e "ssh -l stratdb" 192.168.0.23:/var/log/stratux.sqlite.*.gz /Users/stratdb/sqlite-dbs/.
If you want this process to run without having to supply a password, you'll want to set up SSH with shared keys to allow passwordless login from server 1 to server 2 (ideally from a user that has limited capabilities on both servers).
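A rough outline of that key setup, assuming OpenSSH on both machines; the user and hostname are placeholders:

# On server 1, as the user that will run rsync, generate a key with no passphrase
# (an empty passphrase is what allows unattended use):
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Copy the public key to server 2 (prompts for the password one last time):
ssh-copy-id deploy@server2.example.com

# Verify that login now works without a password:
ssh deploy@server2.example.com 'echo ok'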
There are continuous deployment tools you could use that would do builds, copies, and the like (e.g. Jenkins) if you need a really robust and highly configurable solution... but if you just need to brute-force copy files from one server to another with a single easy command (ad hoc) or automatically, I'd investigate rsync.
So I have a PHP application running on a Linux machine which uses a MySQL database. I have managed to back up my MySQL database every day by adding an entry to the crontab. In my application, clients are able to upload documents, which are saved in a directory in the application folder, i.e. /myapp/uploaded_documents/. I am looking at backing up this directory.
My question is: how do I back up a directory to a certain remote location at a certain time every day? Is it also possible to password protect this directory in my app folder?
Thank you
As mentioned in the previous answer, to back up periodically to a remote machine you can use rsync + ssh + crontab. First set up ssh to access the remote machine without a password, following (for an Ubuntu distro) https://help.ubuntu.com/community/SSH/OpenSSH/Keys. Then add an rsync job to the crontab for the time and days you want (check man crontab to understand how to do this), telling rsync to back up over ssh to the remote machine, something like 0 2 * * * rsync -ae ssh dir_to_bkp name@host:dir_where_bkp to back up "dir_to_bkp" each day at 02:00 am to the "host" machine, using the "name" user and "dir_where_bkp" as the destination. The -e ssh option tells rsync to use ssh.
The best way is to use rsync, as you would be uploading (most likely) changes only.
http://linux.die.net/man/1/rsync
Additionally, you can create incremental backups:
http://www.mikerubel.org/computers/rsync_snapshots/
So my suggested solution would be rsync + crontab
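As a minimal sketch of that rsync + crontab combination, using the incremental (snapshot-style) approach from the article above; it assumes passwordless SSH is already set up and GNU date on Linux, and the user, host, and paths are placeholders:

#!/bin/bash
# Sketch: daily snapshot of the uploads directory to a remote host.
# Unchanged files are hard-linked against yesterday's snapshot via --link-dest.
today=$(date +%Y-%m-%d)
yesterday=$(date -d yesterday +%Y-%m-%d)

rsync -a -e ssh \
  --link-dest="../$yesterday" \
  /myapp/uploaded_documents/ \
  backupuser@backuphost.example.com:/backups/uploads/$today/

A crontab line like 0 2 * * * /usr/local/bin/backup-uploads.sh then runs it at 02:00 every day.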
I am trying to make a complete file & MySQL backup of my site each night.
I was thinking the best way of doing it would be to have a cronjob run each night that would login to a remote server and replicate all of the local files.
Then, I need to figure out a way to take backups of all the mysql databases (currently there are three) and upload them all to the remote server as well.
This sounds like a huge project and I don't know whether to reinvent the wheel here, or if there is some script out there which basically does the same thing already.
Use a cron job to run a bash script that will:
mysqldump the databases
tar -cvf the files
wput it all to your remote server
You can also set a variable like now=$(date +"%Y_%m_%d") to use in your file names.
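Putting those steps together, a rough sketch might look like this (database names, credentials, paths, and the FTP URL for wput are all placeholders; check wput's man page for its exact invocation on your system):

#!/bin/bash
# Sketch of a nightly backup: dump each database, tar the site files
# together with the dumps, and push the archive to the remote server.
now=$(date +"%Y_%m_%d")
work=/var/backups/site

# 1. mysqldump each of the three databases to its own file.
for db in db_one db_two db_three; do
    mysqldump --user=backupuser --password=secret "$db" > "$work/${db}_$now.sql"
done

# 2. tar the site files and the dumps into one archive.
tar -cvf "$work/backup_$now.tar" /var/www/mysite "$work/"*"_$now.sql"

# 3. wput the archive to the remote server over FTP.
wput "$work/backup_$now.tar" "ftp://ftpuser:ftppass@backup.example.com/backups/"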
You can use the mysqldump command to back up the database to a file and then upload it to a different server:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Have you thought about MySQL replication? Maybe that fits your needs better, and you don't need PHP to do it:
http://dev.mysql.com/doc/refman/5.5/en/replication.html
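Very roughly, setting up replication on MySQL 5.5 boils down to something like the following; the server IDs, host, credentials, and log file/position are placeholders, so treat this only as a sketch and follow the manual above for the real procedure:

# On the master: enable binary logging in my.cnf, e.g.
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# then create a replication user and note the current log coordinates:
mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"
mysql -u root -p -e "SHOW MASTER STATUS;"   # note the File and Position values

# On the slave: set server-id = 2 in my.cnf, then point it at the master:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"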
I'm thinking about building a PHP script that flushes, locks, and copies a MySQL data folder. Because I need to lock the tables and a typical dump takes 5 minutes plus, I was thinking that if I do a flush, lock, and file copy of the data folder, it should be quicker. Does anyone have any experience with this and know if it is a viable solution?
Also look at XtraBackup if you are planning to do non-stop backups of your data.
The MySQL devs are way ahead of you. They rolled your exact method, with some syntactic sugar and proper error checking, into a command called mysqlhotcopy.
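For example, something along these lines works for MyISAM/ARCHIVE tables (the database name, credentials, and target directory here are placeholders):

# Copy the live table files for one database into a backup directory.
mysqlhotcopy --user=root --password=secret my_database /var/backups/mysql/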
It might be too late now, but which phase does all this time break down into? If your server is spending most of the five minutes copying the files rather than on the actual flushing, then your problem is simply a slow disk.
I think the best answer to the question is the following Windows command:
set bin=C:\Program Files\MySQL\MySQL Server 5.6\bin
"%bin%/mysql" -e "FLUSH TABLES WITH READ LOCK; UNLOCK TABLES;" --user=UserName --password=Password DatabaseName
This quickly forces all MySQL data for a database out to its three or four files in the MySQL data folder, from which they can be copied to some other folder. You'll have to customize this command for your particular version of MySQL and for your database and admin user.
I couldn't get the other answers to work, but this BAT/CMD command works fast and well in my experience.
The only other suggestion I can make is to use the MySQL Workbench (which comes with MySQL) to Stop the MySQL server. When it is stopped it is flushed to disk. Don't forget to Start your MySQL server when you are finished using the files directly (at which time MySQL reads the database files from disk).
Note: if you simply copy the data files for a database into the data folder of another MySQL instance that is already running, you won't see the data in MySQL applications! You would think that MySQL would check the date/time modified of the disk files to detect updates, but it doesn't. And it doesn't keep its files locked to show Windows that they are in use, which they are. So strange.
I'm running a file host that's grown beyond the capacity of a single server, and I need to implement multi-server storage of files. I'd like to do this as cheaply as possible, so fancy mass storage methods are out of the question. I simply want to move a file that's uploaded by the user to the "gateway" server, which hosts all the HTTP and MySQL, to one of the media servers. It can be done either at the end of the user's request or via cron every couple of minutes.
At this point the only method I'm truly familiar with is using PHP's ftp_put function and simply FTPing the file to a different server, but I've had problems with this method in the past, especially for larger files, and a lot of the files that will be transferred will be over 100 MB.
Can anyone suggest a good solution for this? Preferably I'm looking for a purely software solution... hopefully nothing more than a PHP/bash script.
One method that requires no programmatic setup, and is quite secure, is to mount a folder from the other server and use PHP to save to that directory as you normally would.
You don't have to use sshfs; there are a bunch of other ways to achieve the same thing. I would use sshfs in this situation as it works over ssh and is therefore secure and relatively easy to set up.
To achieve this (using sshfs):
Install sshfs
Mount an sshfs filesystem on a folder accessible to PHP
Use a PHP script to store the files within the sshfs mount, and therefore on the other server.
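Roughly, on a Debian/Ubuntu-style gateway server that could look like the sketch below; the package manager, mount point, remote user, and host are assumptions:

# Install sshfs (Debian/Ubuntu).
sudo apt-get install sshfs

# Mount a directory from the media server somewhere PHP can reach it.
sudo mkdir -p /mnt/media1
sudo sshfs -o allow_other mediauser@media1.example.com:/var/media /mnt/media1

# PHP can now write uploads to /mnt/media1 as if it were a local folder,
# e.g. move_uploaded_file($tmp_name, '/mnt/media1/' . $filename);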
Another method is to set up an rsync job between the two servers using crontab.
You can write a bash script that runs from cron
and use a command-line tool like scp, sftp, or rsync.
Example:
[bash]# scp filename alice@galaxy.example.com:/home/alice
Copy "filename" into alice's home directory on the remote galaxy.example.com server
Use cron, a bash script, and rsync.
cron: "Cron is a time-based job scheduler in Unix-like computer operating systems. 'cron' is short for 'chronograph'."
bash: "Bash is a free software Unix shell written for the GNU Project."
rsync: "rsync is a software application for Unix systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate."
In summary, place the rsync command in a bash script and get cron to run it at regular intervals.
See here for some examples:
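As one rough illustration for the file-host scenario above (the host, user, and paths are placeholders, not from any answer here):

#!/bin/bash
# Sketch: push newly uploaded files from the gateway to a media server,
# removing them from the gateway once they have transferred.
rsync -a --remove-source-files -e ssh \
    /var/www/uploads/ mediauser@media1.example.com:/var/media/uploads/

# A crontab entry to run it every five minutes could look like:
# */5 * * * * /usr/local/bin/push-uploads.sh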