shell_exec('mysqldump --opt -Q -h ' . cDatabaseHost . ' -u ' . cDatabaseUser . ' --password=' . cDatabasePassword . ' ' . cDatabaseDatabase . ' | gzip > ' . $DatabaseFileName);
My code is shown above. When I run it to back up my data, the backup comes out a different size every time: sometimes 50 MB, then 10 MB on the next run, when the actual backup should be around 500 MB. Kindly help me.
Thanks.
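A common cause of truncated, randomly sized dumps is that mysqldump dies partway through (lost connection, max_allowed_packet, bad credentials) and its error message disappears down the gzip pipe. A minimal sketch, with placeholder credentials and paths, that builds the same pipeline but keeps mysqldump's stderr in a separate log so failures become visible:

```php
<?php
// Build the backup pipeline, redirecting mysqldump's stderr to a log file
// instead of letting errors vanish into gzip. All values are placeholders.
function buildBackupCommand($host, $user, $pass, $db, $outfile, $errlog) {
    return 'mysqldump --opt -Q'
         . ' -h ' . escapeshellarg($host)
         . ' -u ' . escapeshellarg($user)
         . ' --password=' . escapeshellarg($pass)
         . ' ' . escapeshellarg($db)
         . ' 2> ' . escapeshellarg($errlog)
         . ' | gzip > ' . escapeshellarg($outfile);
}

$cmd = buildBackupCommand('localhost', 'backup', 'secret', 'mydb',
                          '/backups/mydb.sql.gz', '/tmp/mysqldump.err');
echo $cmd;
```

After running the command with shell_exec(), check whether the error log is non-empty: the exit code of a pipeline only reflects gzip, so the log file is the reliable place to look for a failed dump.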
Here is my code
<?php
$dbuser = "root";
$dbpass = "";
$dbname = "test";
$backupfile = $dbname .time()."-".date("Y-M-D h:i:s");
//$query= "C:\\wamp\\bin\\mysql\\mysql5.5.16\\bin\\mysqldump.exe " ."-u ". $dbuser ." -p ". $dbname.">". $backupfile.".sql";
$query= "C:\wamp\bin\mysql\mysql5.5.16\bin\mysqldump.exe " ."-u ". $dbuser ." -p ". $dbname.">". $backupfile.".sql";
echo $query;
system($query);
?>
But I always get a blank file. I am using WAMP server on Windows 7. I have tried double slashes and single slashes, but the result is the same. Please help me with this issue.
N.B. One thing I have not mentioned yet: when I open this in the browser, the empty backup file is created, but the page never stops loading; it just shows loading...
$query= "C:\\wamp\\bin\\mysql\\mysql5.5.16\\bin\\mysqldump.exe --user root test > upload/". time()."_backup.sql";
Finally it works. I used --user instead of -u, and since my WAMP server has no root password, I removed the --password field. Now it's working fine.
The command is asking for a password; you must remove the --password parameter (or the -p flag) if your MySQL root user's password is blank.
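Putting that together, here is a sketch of a corrected version of the script from the question. Note the date format also changes, since Windows filenames cannot contain colons:

```php
<?php
$dbuser     = "root";
$dbname     = "test";
// Windows filenames cannot contain ':', so avoid the h:i:s date format.
$backupfile = $dbname . "-" . date("Y-m-d_H-i-s") . ".sql";

// No -p / --password at all: the WAMP root password is blank, and
// "-p " with a trailing space makes mysqldump prompt for a password,
// which is why the page never finished loading.
$query = "C:\\wamp\\bin\\mysql\\mysql5.5.16\\bin\\mysqldump.exe"
       . " --user " . $dbuser . " " . $dbname . " > " . $backupfile;
echo $query;
// system($query); // uncomment on the WAMP machine to run the dump
```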
First of all, I searched all around and could not find a valid answer.
I want to import database files into a database using PHP.
The following only works if the import.php file resides in the same folder (temp) as the database files:
$command = "gunzip --to-stdout $backupfile | mysql -u$database_user -p$database_pass $database";
If I place the import.php in another location and add a path to the $backupfile, it does not work as shown below:
$database_path = '/home/site/public_html/temp/';
$command = "gunzip --to-stdout $database_path.$backupfile | mysql -u$database_user -p$database_pass $database";
What is the correct way to add the path to the database file ($backupfile) so that I can run the PHP script from any location?
Thanks in advance.
Try this:
$command = "gunzip --to-stdout {$database_path}{$backupfile} | mysql -u$database_user -p$database_pass $database";
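The reason the original version failed: inside a double-quoted string, PHP interpolates $database_path and $backupfile but leaves the . between them as a literal character, so the shell gets a path with a stray dot in it. A quick demonstration:

```php
<?php
$database_path = '/home/site/public_html/temp/';
$backupfile    = 'db.sql.gz';

// The dot is NOT concatenation inside double quotes; it stays in the string.
$broken = "gunzip --to-stdout $database_path.$backupfile";
// Curly braces delimit the variable names, so no stray dot appears.
$fixed  = "gunzip --to-stdout {$database_path}{$backupfile}";

echo $broken . "\n";
echo $fixed . "\n";
```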
I'd go with:
$database_path = '/home/site/public_html/temp/';
$command = 'gunzip --to-stdout ' . $database_path . $backupfile . ' | mysql -u' . $database_user . ' -p' . $database_pass . ' ' . $database;
or first execute the command $c2 = 'cd ' . $database_path; (note that a cd issued in one exec()/system() call does not persist into the next, so the two commands would have to be combined in a single call, e.g. with &&)
I'm trying to dump some of my tables if their prefix matches a given substring, using PHP. Using PHP's system() did not produce any result, in the sense that the dump file was not created. I then tried the command-line function exec() instead, as follows:
exec('E:/xampp/mysql/bin/mysqldump '. $dbname .' -h '. $this->host .' -u ' .$this->user . ' $(E:/xampp/mysql/bin/mysql -u '. $this->user . ' -p ' . $dbname .' -Bse "show tables like \'wp_dev%\'")> mydb.sql 2>&1', $output);
but the subquery that should filter out the matching tables returns the following error:
mysqldump: unknown option '-s'
It seems like I'm missing something in the syntax.
Use it like this; it will dump only the listed tables.
exec('E:/xampp/mysql/bin/mysqldump -h '. $this->host .' -u ' .$this->user . ' -p'. $this->password .' '. $dbname .' table1 table2 > /path_to_file/dump_file.sql');
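If you still want the table list to come from the wp_dev% prefix rather than hard-coding it, here is a sketch that does the filtering in PHP instead of relying on $( ) substitution, which Windows cmd.exe does not support. The mysqli lookup is shown commented since it needs a live connection, and the table names are hypothetical:

```php
<?php
// Build a mysqldump command for an explicit list of tables.
function buildPrefixDumpCommand($binDir, $host, $user, $dbname, array $tables, $outFile) {
    return $binDir . 'mysqldump'
         . ' -h ' . $host
         . ' -u ' . $user
         . ' ' . $dbname
         . ' ' . implode(' ', $tables)
         . ' > ' . $outFile;
}

// In real use, fetch the matching names first, e.g.:
//   $res = $mysqli->query("SHOW TABLES LIKE 'wp_dev%'");
//   while ($row = $res->fetch_row()) { $tables[] = $row[0]; }
$tables = array('wp_dev_posts', 'wp_dev_options'); // hypothetical names

echo buildPrefixDumpCommand('E:/xampp/mysql/bin/', 'localhost', 'root',
                            'mydb', $tables, 'mydb.sql');
```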
I have a LIVE version of a MySQL database with 5 tables and a TEST version.
I am continually using phpMyAdmin to make a copy of each table in the LIVE version to the TEST version.
Does anyone have the mysql query statement to make a complete copy of a database? The query string would need to account for structure, data, auto increment values, and any other things associated with the tables that need to be copied.
Thanks.
Ok, after a lot of research, googling, and reading through everyone's comments herein, I produced the following script -- which I now run from the browser address bar. Tested it and it does exactly what I needed it to do. Thanks for everyone's help.
<?php
function duplicateTables($sourceDB=NULL, $targetDB=NULL) {
    $link = mysql_connect('{server}', '{username}', '{password}') or die(mysql_error()); // connect to database
    $result = mysql_query('SHOW TABLES FROM ' . $sourceDB) or die(mysql_error());
    while ($row = mysql_fetch_row($result)) {
        mysql_query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
    }
    mysql_free_result($result);
    mysql_close($link);
} // end duplicateTables()
duplicateTables('liveDB', 'testDB');
?>
Depending on your access to the server, I suggest using straight mysql and mysqldump commands; that's all phpMyAdmin is doing under the hood.
Reference material for Mysqldump.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
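For the live-to-test case, the whole copy can be a single piped command built from PHP. A sketch, where credentials and database names are placeholders and the target database must already exist:

```php
<?php
// Sketch: clone one database into another by piping mysqldump into mysql.
function buildCloneCommand($user, $pass, $sourceDB, $targetDB) {
    return 'mysqldump -u' . $user . ' -p' . $pass . ' ' . $sourceDB
         . ' | mysql -u' . $user . ' -p' . $pass . ' ' . $targetDB;
}

$cmd = buildCloneCommand('root', 'secret', 'liveDB', 'testDB');
echo $cmd;
// system($cmd); // uncomment to run on a machine with both tools in PATH
```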
There is a PHP class for that; I haven't tested it yet.
From its description:
This class can be used to back up a MySQL database.
It queries a database and generates a list of SQL statements that can be used later to restore the database **tables structure** and their contents.
I guess this is what you need.
Hi, here you can use a simple bash script to back up the whole database.
######### SNIP BEGIN ##########
## Copy from here #############
#!/bin/bash
# to use the script do following:
# sh backup.sh DBNAME | sh
# where DBNAME is database name from alma016
# e.g. backing up mydb data:
# sh backup.sh mydb hostname username pass| sh
echo "#sh backup.sh mydb hostname username pass| sh"
DB=$1
host=$2
user=$3
pass=$4
NOW=$(date +"%m-%d-%Y")
FILE="$DB.backup.$NOW.gz"
# rest of script
#dump command:
cmd="mysqldump -h $host -u$user -p$pass $DB | gzip -9 > $FILE"
echo $cmd
############ END SNIP ###########
EDIT
If you'd like to clone the backed-up database, edit the dump to change the database name, then load it back in (note the script above produces a gzip file, not a tar archive):
gunzip -c yourdump.gz | mysql -uusername -ppass
cheers Arman.
Well, in script form, you could try using
CREATE TABLE ... LIKE syntax, iterating through a list of tables, which you can get from SHOW TABLES.
The only problem is that it does not natively recreate foreign keys (CREATE TABLE ... LIKE does copy column definitions and indexes). So you would have to list and recreate those as well, then a few INSERT ... SELECT calls to get the data in.
If your schema never changes, only the data, then create a script that replicates the table structure and just do the INSERT ... SELECT business in a transaction.
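The statement pairs described above can be generated with a small helper. A sketch, where the table names are illustrative and in practice the list would come from SHOW TABLES:

```php
<?php
// For each table, emit the pair of statements described above:
// recreate the structure with CREATE TABLE ... LIKE, then copy the rows.
function cloneStatements($sourceDB, $targetDB, array $tables) {
    $sql = array();
    foreach ($tables as $t) {
        $sql[] = "CREATE TABLE `$targetDB`.`$t` LIKE `$sourceDB`.`$t`";
        $sql[] = "INSERT INTO `$targetDB`.`$t` SELECT * FROM `$sourceDB`.`$t`";
    }
    return $sql;
}

foreach (cloneStatements('liveDB', 'testDB', array('users', 'orders')) as $stmt) {
    echo $stmt . "\n";
}
```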
Failing that, mysqldump, as the others say, is pretty easy to get working from a script. I have a daily cron job that dumps all manner of databases from my datacenter servers, connects via FTPS to my location, and sends all the dumps across. It can be done quite effectively. Obviously you have to make sure such facilities are locked down, but again, it's not overly hard.
As per code request
The code is proprietary, but I'll show you the critical section you need. This is from the middle of a foreach loop, hence the continue statements and the $c-prefixed variables (I use that prefix to indicate current-loop variables). The echo commands could be whatever you want; this is a cron script, so echoing the current status was appropriate. The flush() lines are helpful when you run the script from a browser: the output is sent up to that point, so the page fills in as the script runs rather than everything turning up at the end. The ftp_fput() line reflects my situation of uploading the dump somewhere, reading directly from the pipe; you could instead open another process and pipe the output into a mysql process to replicate the database, provided suitable amendments are made.
$cDumpCmd = $mysqlDumpPath . ' -h' . $dbServer . ' -u' . escapeshellarg($cDBUser) . ' -p' . escapeshellarg($cDBPassword) . ' ' . $cDatabase . (!empty($dumpCommandOptions) ? ' ' . $dumpCommandOptions : '');
$cPipeDesc = array(0 => array('pipe', 'r'),
                   1 => array('pipe', 'w'),
                   2 => array('pipe', 'w'));
$cPipes = array();
$cStartTime = microtime(true);
$cDumpProc = proc_open($cDumpCmd, $cPipeDesc, $cPipes, '/tmp', array());
if (!is_resource($cDumpProc)) {
    echo "failed.\n";
    continue;
} else {
    echo "success.\n";
}
echo "DB: " . $cDatabase . " - Uploading Database...";
flush();
$cUploadResult = ftp_fput($ftpConn, $dbFileName, $cPipes[1], FTP_BINARY);
$cStopTime = microtime(true);
if ($cUploadResult) {
    echo "success (" . round($cStopTime - $cStartTime, 3) . " seconds).\n";
    $databaseCount++;
} else {
    echo "failed.\n";
    continue;
}
$cErrorOutput = stream_get_contents($cPipes[2]);
foreach ($cPipes as $cFHandle) {
    fclose($cFHandle);
}
$cDumpStatus = proc_close($cDumpProc);
if ($cDumpStatus != 0) {
    echo "DB: " . $cDatabase . " - Dump process caused an error:\n";
    echo $cErrorOutput . "\n";
    continue;
}
flush();
If you're using Linux or macOS, here is a single line to clone a database.
mysqldump -uUSER -pPASSWORD -hsample.host --single-transaction --quick test | mysql -uUSER -pPASSWORD -hqa.sample.host --database=test
The 'advantage' here is that --single-transaction takes a consistent snapshot of InnoDB tables while the copy is made.
That means you end up with a consistent copy.
For non-transactional tables, the dump can lock them for the duration of the copy, which generally isn't a good thing on production.
Without locks or transactions, if something writes to the database while you're making a copy, you could end up with orphaned data in your copy.
To get a good copy without impacting production, you should create a slave on another server.
The slave is updated in real time, and you can run the same command on it without touching production.