How to move a MySQL database the easiest & fastest way? [closed] - php

Hi, I have to move a MySQL database to another server. It's nearly 5 GB, and I have root access on both servers.

Usually you run mysqldump to create a database copy and backups as follows:
$ mysqldump -u user -p db-name > db-name.out
Copy db-name.out file using sftp/ssh to remote MySQL server:
$ scp db-name.out user@remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.out
OR
$ mysql -u user -p'password' db-name < db-name.out
How do I copy a MySQL database from one computer/server to another?
The short answer is that you can copy a database from one computer/server to another using ssh or the mysql client.
You can run all three of the above steps in one pass using the mysqldump and mysql commands (insecure method; use only if you are on a VPN or trust your network):
$ mysqldump db-name | mysql -h remote.box.com db-name
Use ssh if you don't have direct access to the remote MySQL server (secure method):
$ mysqldump db-name | ssh user@remote.box.com mysql db-name
OR
$ mysqldump -u username -p'password' db-name | ssh user@remote.box.com mysql -u username -p'password' db-name
You can copy just the table called foo to a remote database (on the remote MySQL server remote.box.com) called bar using the same syntax:
$ mysqldump db-name foo | ssh user@remote.box.com mysql bar
OR
$ mysqldump -u user -p'password' db-name foo | ssh user@remote.box.com mysql -u user -p'password' bar
Almost all of these commands can be combined with pipes on UNIX/Linux OSes.
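If bandwidth is the bottleneck, the same pipe can be compressed in flight. A minimal sketch of my own (assuming gzip is available on both machines and the remote database already exists):
$ mysqldump db-name | gzip | ssh user@remote.box.com 'gunzip | mysql db-name'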
Regards,

If you have root, you might find it quicker to avoid mysqldump: you can create the DB on the destination server and copy the database files directly. Assuming your user has access to the destination server's MySQL directory:
[root@server-A]# /etc/init.d/mysqld stop
[root@server-A]# cd /var/lib/mysql/[databasename]
[root@server-A]# scp * user@otherhost:/var/lib/mysql/[databasename]
[root@server-A]# /etc/init.d/mysqld start
The important things here are: stop mysqld on both servers before copying the DB files, and make sure the file ownership and permissions are correct on the destination before starting mysqld there.
[root@server-B]# chown mysql:mysql /var/lib/mysql/[databasename]/*
[root@server-B]# chmod 660 /var/lib/mysql/[databasename]/*
[root@server-B]# /etc/init.d/mysqld start
With time being your priority here, whether to use compression depends on whether the time spent compressing and decompressing (with something like gzip) outweighs the time wasted transmitting uncompressed data; in other words, on the speed of your connection.
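If your connection is slow enough that gzip wins, here is a sketch of the same copy streamed through tar and ssh instead of scp (my variant, assuming mysqld is stopped on both ends as described above):
[root@server-A]# tar -C /var/lib/mysql -czf - [databasename] | ssh user@otherhost 'tar -C /var/lib/mysql -xzf -'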

For an automated way to back up your MySQL database:
The prerequisite to doing any form of backup is finding the ideal time of day to complete the task without hindering the performance of running systems or interfering with users. On that note, the size of the database must be taken into consideration along with the I/O speed of the drives; this becomes exponentially more important as the database grows. (Another factor that comes to mind is the number of drives: putting the backup on a drive other than the one holding the database speeds things up, since the heads aren't performing both the reads and the writes.) For the sake of keeping this readable, I'm going to assume the database is of a manageable size (100 MB) and the work environment is a 9am-5pm job with no real stress or other systems running during off hours.
The first step would be to log into your local machine with root privileges. Once at the root shell, a MySQL user will need to be created with read-only privileges. To do this, enter the MySQL shell using the command:
mysql -uroot -ppassword
Next, a user will need to be created with read-only privileges to the database that needs to be backed up. In this case, no specific database is assigned to the user, so the script or process can be reused later. To create a user with full read privileges, enter these commands in the MySQL shell:
GRANT SELECT ON *.* TO 'backupdbuser'@'localhost' IDENTIFIED BY 'backuppassword';
FLUSH PRIVILEGES;
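As a quick sanity check of my own (not part of the original steps), confirm the new account authenticates and carries only SELECT before building the script around it:
mysql -ubackupdbuser -pbackuppassword -e "SHOW GRANTS FOR CURRENT_USER();"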
With the MySQL user created, it's safe to exit the MySQL shell and drop back into the root shell using exit. From here, we'll need to create the script that runs our backup commands; this is easily accomplished in bash. The script can be stored anywhere, since we'll be using a cron job to run it nightly. For the sake of this example we'll place the script in a new directory called "backupscripts". To create this directory, use this command at the root shell:
mkdir /backupscripts
We'll also need to create a directory to store our backups locally. We'll name this directory "backuplogs." Issue this command at the root shell to create the directory:
mkdir /backuplogs
The next step would be to log into your remote machine with root credentials and create a "backup user". Note that useradd's -p flag expects an already-encrypted hash, so it's safer to set the password separately:
useradd -c "backup user" backupuser
passwd backupuser
Create a directory for your backups:
mkdir /backuplogs/
Before you log out, grab the IP address of the remote host for future reference using the command:
ifconfig -a
eth0 is the standard interface for a wired network connection. Note this IP_ADDR.
Finally, log out of the remote server and return to your original host.
Next we'll create the script on our local machine, not the remote. We'll use VIM (don't hold it against me if you're a nano or emacs fan, but I'm not going to cover how to use VIM here). First create the file and, using mysqldump, back up your database. After the dump has been created, compress the file for storage. Read the file to STDOUT to satisfy the instructions. Then check for files older than 7 days and remove them. Finally, we'll use scp to copy the backup directory to the remote server. The script will look like this:
vim /backupscripts/mysqldbbackup.sh
#!/bin/sh
# locations and a date-stamped archive name
BACKUPDIR=/backuplogs/
TMPFILE=tmpout.sql
CURRTIME=$(date +%Y%m%d).tgz
# back up your database
mysqldump -ubackupdbuser -pbackuppassword databasename > $BACKUPDIR$TMPFILE
# compress this file and store it locally with the current date
tar -zvcf /backuplogs/backupdb-$CURRTIME $BACKUPDIR$TMPFILE
# per instructions - cat the contents of the SQL file to STDOUT
cat $BACKUPDIR$TMPFILE
# cleanup: remove local archives modified more than 7 days ago
# (the name pattern matches the backupdb-* archives created above)
find $BACKUPDIR -mtime +7 -name 'backupdb-*.tgz' -exec rm {} \;
# remove the old backups from the remote server
# (quoted so the remote shell, not the local one, sees the \;)
ssh backupuser@remotehostip "find /backuplogs/ -name 'backupdb-*.tgz' -exec rm {} \;"
# copy the current backup directory to the remote server using scp
scp -r /backuplogs/ backupuser@remotehostip:/backuplogs/
#################
# End script
#################
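Before scheduling the script, it's worth a manual dry run (my suggestion, using the paths from the script above):
chmod 700 /backupscripts/mysqldbbackup.sh
sh /backupscripts/mysqldbbackup.sh
ls -lh /backuplogs/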
With this script in place, we'll need to set up SSH keys so that we're not prompted for a password every time the script runs. We'll do this with ssh-keygen and the command:
ssh-keygen -t rsa
Leave the passphrase empty at the prompt (an empty passphrase is what lets the script run unattended from cron); this creates your private key. Do not share it.
The file you need to share is your public key; it is stored in ~/.ssh/id_rsa.pub. The next step is to transfer this public key to your remote host. To see the key, use the command:
cat ~/.ssh/id_rsa.pub
Copy the string in the file. Next, ssh into your remote server using the command:
ssh backupuser@remotehostip
Enter your password and then edit the file ~/.ssh/authorized_keys. Paste the string obtained from your id_rsa.pub file into the authorized_keys file, write the changes, and exit the editor. Log out of the remote server and test that the RSA key works by attempting to log into the remote server again with the previous ssh command. If no password is asked for, it is working properly. Log out of the remote server again.
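As an aside not in the original answer, most distributions ship an ssh-copy-id helper that appends the key and fixes the remote permissions in one step:
ssh-copy-id backupuser@remotehostip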
The final thing we'll need to do is create a cron job to run this every night after users have logged off. Using crontab, we'll edit the current user's (root's) file to avoid permission issues. Note: this can have serious implications if there are errors in your script, including deletion of data, security vulnerabilities, etc.; double-check all of your work and make sure you trust your own system. If that is not possible, set up an alternate user and permissions on the current server. To edit your current crontab, issue the command:
crontab -e
While in the crontab editor (it'll open in your default text editor), we're going to set our script to run every night at 12:30am. Enter a new line into the editor:
30 0 * * * bash /backupscripts/mysqldbbackup.sh
Save this file and exit the editor. Changes made via crontab -e normally take effect immediately, but if your distribution requires it you can restart the crond service:
/etc/init.d/crond restart

Related

PHP exec() and filesystem permissions

So I'm new to exec() and recently wanted to automate MySQL dumping via a PHP script.
My PHP file lives at
/var/www/html/tool/script.php
The folder shows root/www-data when I do ls -l.
It is doing
exec('mysqldump --user=user --password=pass db > selfDir/dump.sql')
Now what I would expect is for the script to write to dump.sql inside the script's folder.
This is not happening though.
Only once I did touch dump.sql and chmod 777 dump.sql was the dump actually written.
I don't understand why. How can I alter my exec() so that it can CREATE the dump file instead of only writing to it once it already exists?
thanks
If you only want to automate the database backup, it's better to call mysqldump from another system service, like cron.
Relying on PHP exec() can generate too many errors:
PHP's execution time limit
If executed from a request, the exec is killed when the request is killed, or the request has to wait out the whole mysqldump run
Trickier permissions
Memory limits in PHP and on the server
With cron, the backup would be (example from The Java Guy):
0 0 * * * mysqldump -u 'username' -p'password' DBNAME > /home/eric/db_backup/liveDB_`date +\%Y\%m\%d_\%H\%M`.sql
Another link for a crontab backup with MySQL: K1773R's answer to mysql-dump-cronjob

Unable to execute mysqldump from PHP script

I am trying to create periodic backups (poor man's cron) of my database using mysqldump with the exec() function. I am using XAMPP/PHP7 on macOS.
$command = "$mysqldump_location -u$db_user -h$db_host -p$db_password $db_name > $backup_file_location";
exec($command);
When I run the PHP script, I get no SQL dump in the path mentioned in $backup_file_location but if I execute the same $command string on the terminal directly I get the desired SQL file in the desired location.
I am unable to understand what could be the problem here. Also open to suggestions on better ways to dump the entire DB.
Edit 1:
The value of $mysqldump_location is /Applications/XAMPP/xamppfiles/bin/mysqldump
The value of $backup_file_location is /Applications/XAMPP/xamppfiles/htdocs/app5/data/sqldumps/sql_data.sql
/app5/ is the folder in which I am developing my app.
Edit 2:
The possible-duplicate suggestion does not apply, since the issue here was not how to dump SQL backups. The key issue was that the backup using mysqldump worked through the terminal, but not through PHP's exec() function.
The resolution, from the comments above, was that the PHP request executes in XAMPP as a user with limited privileges, and the mysqldump process inherits those privileges.
Checking the exit status of the process run by exec() confirmed that mysqldump exited with a nonzero status, indicating it failed for some reason.
Opening write privileges to 777 on the directory where mysqldump tries to write resolved the error.
It should also be adequate to figure out the specific uid and gid of the Apache processes (check the User and Group values in the Apache config file, e.g. xampp-home/apache/conf/httpd.conf) and make the output directory writable by that uid or gid.
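A sketch of that last suggestion, reusing the paths above; the daemon user is only an assumption (XAMPP's usual default), so verify it against the User line in your own httpd.conf first:
grep -E '^(User|Group)' xampp-home/apache/conf/httpd.conf
chown -R daemon /Applications/XAMPP/xamppfiles/htdocs/app5/data/sqldumps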

file_get_contents for file owned by root

I am in the process of setting up a remote backup for my MariaDB server (running on 64 bit Ubuntu 12.04).
The first step in this process was easy - I am using automysqlbackup as explained here.
The next step is mostly pretty easy - I am uploading the latest backup in the folder /var/lib/automysqlbackup/daily/dbname to a remote backup service (EVBackup in the present case) via SFTP. The only issue I have run into is this:
The files created in that folder by automysqlbackup bear names in the format dbname_2014-04-25_06h47m.Friday.sql.gz and are owned by root. Establishing the name of the latest backup file is not an issue. However, once I have it, I am unable to use file_get_contents, since the file is owned by the root user and has its permissions set to 600. Perhaps there is a way to run a shell script that fetches those contents for me? I am a novice when it comes to shell scripts. I'd be much obliged to anyone who might suggest a way to get at the backup data from PHP.
For the benefit of anyone running into this thread, I give below the fully functional script I created. It assumes that you have created and shared your public ssh key with the remote server (EVBackup in my case) as described here.
In my case I had to deal with one additional complication - I am in the process of setting up the first of my servers but will have several others. Their domain names bear the form svN.example.com, where N is 1, 2, 3, etc.
On my EVBackup account I have created folders bearing the same names: sv1, sv2, etc. I wanted to create a script that would safely run from any of my servers. Here is that script, with some comments:
#!/bin/bash
# Backup to EVBackup Server
local="/var/lib/automysqlbackup/daily/dbname"
# replace dbname as appropriate
svr="$(uname -n)"
remote="${svr/.example.com/}"
# strip out the .example.com bit to get the destination folder on the remote server
remote+="/"
evb="user@user.evbackup.com:"
# user will have to be replaced with your username
evb+=$remote
cmd='rsync -avz -e "ssh -i /backup/ssh_key" '
# your ssh key location may well be different
cmd+="$local "
# note the trailing space above, so the source and destination don't run together
cmd+=$evb
# at this point cmd will be something like
# rsync -avz -e "ssh -i /backup/ssh_key" /home bob@bob.evbackup.com:home-data
eval $cmd
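For what it's worth, the eval can be dropped entirely by passing the same variables as ordinary arguments; a minimal equivalent sketch:
rsync -avz -e "ssh -i /backup/ssh_key" "$local" "$evb"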

Is it possible to run PHP exec() but hide params from the process list?

I'd like to have a properly protected PHP web-based tool to run a mysqlcheck for general database table health, but I don't want the password to be visible in the process list. I'd like to run something like this:
$output = shell_exec('mysqlcheck -Ac -uroot -pxxxxx -hhostname');
// strip lines that checked out OK
echo '<pre>'.preg_replace('/^.+\\sOK$\\n?/m', '', $output).'</pre>';
Unfortunately, with a shell_exec(), I have to include the password in the command line, but am concerned that the password will show up in the process list (ps -A | grep mysqlcheck).
Using MariaDB 5.5 on my test machine, mysqlcheck doesn't show the password in the process list, but my production machine isn't running MariaDB and is on a different OS, so I'd like to be on the safe side and not run these tests on the production side.
Do all versions of mysql also hide the password in the process list? Are my concerns a non-issue?
Yes, since at least MySQL 5.1, the client obscures the password on the command-line.
I found this blog by former MySQL Community Manager Lenz Grimmer from 2009, in which he linked to the relevant code in the MySQL 5.1 source. http://www.lenzg.net/archives/256-Basic-MySQL-Security-Providing-passwords-on-the-command-line.html
You could alternatively not pass the password on the command-line at all, and instead store the user/password credentials in a file which PHP has privileges to read, and then execute the client as:
mysqlcheck --defaults-extra-file=/etc/php.d/mysql-client.cnf
The filename is an example; you can specify any path you want. The point is that most MySQL client tools accept that --defaults-extra-file option. See http://dev.mysql.com/doc/refman/5.6/en/option-file-options.html for more information.
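The option file uses MySQL's INI-style [client] section. A minimal sketch of what /etc/php.d/mysql-client.cnf might contain (values are illustrative; keep the file readable only by the PHP user, e.g. mode 640):
[client]
user=root
password=yourpassword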
It is a real concern, and your OS will be showing it, just maybe not in the default view.
You could use proc_open instead, which lets you read from and write to the streams of the process you open.
mysqlcheck -Ac -uroot -p -hhostname
will prompt for the password, which you can then supply through the pipes from proc_open.

How can I transfer a set of files/folders from one Linux server to another via a script?

I'm creating a PHP script to back up my website every day. The backup goes to another Linux server of mine, but how can I compress all the files and send them to the other Linux server via a script?
One possible solution (in bash):
BACKUP_SERVER_PATH=remote_user@remote_server:/remote/backup/path/
SITE_ROOT=/path/to/your/site/
cd "$SITE_ROOT"
now=$(date +%Y%m%d%H%M)
tar -cvzf /tmp/yoursite.$now.tar.gz .
scp /tmp/yoursite.$now.tar.gz "$BACKUP_SERVER_PATH"
There is some extra stuff to take into account regarding permissions (read access to the docroot) and ssh access to the remote server (for scp).
Note that there are really many ways to do this. Another one, if you don't mind storing an uncompressed version of your site, is to use rsync.
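For instance, a minimal rsync sketch reusing the variables above (it maintains an uncompressed mirror rather than dated archives; --delete makes the mirror exact, so use it with care):
rsync -az --delete "$SITE_ROOT" "$BACKUP_SERVER_PATH"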
The answer provided by Timothée Groleau should work, but I prefer doing it the other way around: from my backup machine (a server with a lot of storage available), I launch a process to back up all the other servers.
In my environment, I use a configuration file that lists all servers, a valid user for each server, and the path to back up:
/usr/local/etc/backupservers.conf
192.168.44.34 userfoo /var/www/foosites
192.168.44.35 userbar /var/www/barsites
/usr/local/sbin/backupservers
#!/bin/sh
CONFFILE=/usr/local/etc/backupservers.conf
SSH=`which ssh`
if [ ! -r "$CONFFILE" ]
then
echo "$CONFFILE not readable" >&2
exit 1
fi
if [ ! -w "/var/backups" ]
then
echo "/var/backups is not writable" >&2
exit 1
fi
# read host/user/path triplets from the config file (note the
# redirection on "done", without which the loop has no input)
while read host user path
do
file="/var/backups/`date +%Y-%m-%d`.$host.$user.tar.bz2"
touch $file
chmod 600 $file
# write the archive explicitly to stdout (f -) so the local
# redirection captures it
$SSH $user@$host tar jcf - $path > $file
done < "$CONFFILE"
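A quick way to sanity-check a finished archive (the filename is hypothetical, following the date.host.user pattern the script builds):
tar tjf /var/backups/2014-04-25.192.168.44.34.userfoo.tar.bz2 | head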
For this to work correctly, and to avoid entering a password for every server being backed up, you need to exchange SSH keys (there are lots of questions/answers on Stack Overflow on how to do this).
And the last step would be to add this to cron to launch the process each night:
/etc/crontab
0 2 * * * backupuser /usr/local/sbin/backupservers
