mysqldump - cron job - php

I'm trying to get an automatic backup of our MySQL database via a cron job that runs daily.
We have:
$database_user = 'VALUE';
$database_pass = 'VALUE';
$database_server = 'localhost';
// Name of the database to backup
$database_target = 'VALUE';
// Get Database dump
$sql_backup_file = $temp_dir . '/sql_backup.sql';
$backup_command = "mysqldump -u" . $database_user . " -p" . $database_pass . " -h " . $database_server . " " . $database_target . " > " . $sql_backup_file;
system($backup_command);
// End Database dump
Problem is we get a message back from the Cron Daemon with:
Usage: mysqldump [OPTIONS] database [tables]
OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
sh: -h: command not found
So it looks like it has something to do with the -h
Anyone have any thoughts on how to fix?

First, I recommend you upgrade to MySQL 5.6+ so that you can keep your database passwords more secure. Then follow the instructions from this Stack Overflow answer to set up the more secure MySQL login method for command-line scripts.
You should probably write a bash script instead of using PHP. Here's a full backup script; it's very simple. db_name is the name of your database, and /path/to/backup_folder/ is obviously where you want to store backups. The --login-path=local switch will look in the home directory of whoever runs this bash script and see if there is a login file there (it must be readable by the current user and accessible by no one else).
#!/bin/bash
#$NOW will provide you with a timestamp in the filename for the backup
NOW=$(date +"%Y%m%d-%H%M%S")
DB=/path/to/backup_folder/"$NOW"_db_name.sql.gz
mysqldump --login-path=local db_name | gzip > "$DB"
#You could change permissions on the created file if you want
chmod 640 "$DB"
I save that file as db_backup.sh inside of the /usr/local/bin/ folder and make sure it is readable/executable by the user who is going to be doing the db backups. Now I can run # db_backup.sh from anywhere on the system and it will work.
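As a quick sanity check of the same pipeline without touching MySQL, here is a dry run with printf standing in for mysqldump (the /tmp path and dump contents are placeholders):

```shell
#!/bin/bash
# Dry run of the backup pipeline; printf stands in for mysqldump here.
NOW=$(date +"%Y%m%d-%H%M%S")
DB="/tmp/${NOW}_db_name.sql.gz"
printf 'SQL DUMP CONTENTS\n' | gzip > "$DB"
chmod 640 "$DB"             # same permissions as the real script
gunzip --to-stdout "$DB"    # prints: SQL DUMP CONTENTS
```

If this round-trips correctly, swapping the printf for the real mysqldump call is the only change needed.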
To make it a cron job, I put a file called 'db_backup' (the name doesn't really matter) inside my /etc/cron.d/ folder that looks like this (user_name is the user who is supposed to run the backup script):
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=root
HOME=/
# "user_name" will try to run "db_backup.sh" every day at 3AM
0 3 * * * user_name db_backup.sh

I think you have some strange character in your password string, maybe a "`" character, and this is why you are getting both errors.
The first is from mysqldump, because no database was specified (the string was broken), and the second is from the shell, saying it cannot find an -h command (the next token after the password).
Try escaping $backup_command before the system() call (http://www.php.net/escapeshellcmd):
system(escapeshellcmd($backup_command));
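To see why a backtick in the password breaks things, here is a minimal shell reproduction of what PHP's system() effectively does (the password value is made up for illustration):

```shell
# A made-up password containing backticks:
pass='se`echo BROKEN`cret'
# system() hands the raw string to sh -c, so the backticks execute
# inside the child shell and mangle the command line:
sh -c "echo mysqldump -uroot -p$pass mydb"
# prints: mysqldump -uroot -pseBROKENcret mydb
# Single-quoting the interpolated value keeps the backticks literal:
sh -c "echo mysqldump -uroot -p'$pass' mydb"
# prints: mysqldump -uroot -pse`echo BROKEN`cret mydb
```

This is exactly the symptom in the question: the mangled string no longer contains a valid database argument, so mysqldump prints its usage text and the shell chokes on the leftover tokens.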

You should not use the -p option anyway, as the password can be read by anybody on the system via ps aux.
It is much better to specify your credentials for mysqldump in an option file:
[client]
host = localhost
user = root
password = "password here"
Then you call mysqldump like this:
mysqldump --defaults-file=/etc/my-option-file.cnf ...other-args
This way, there is no password exposure on your system to other users. It might even fix your problem with unescaped complex passwords.
I would also recommend you to look at this project to get some more ideas:
mysqldump-secure
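A minimal sketch of setting up such an option file from a script (the path, credentials, and database name are placeholders; the mysqldump command is only echoed here, not run):

```shell
# Write a private option file (placeholder credentials):
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[client]
host = localhost
user = root
password = "password here"
EOF
chmod 600 "$cnf"   # owner-only: other users cannot read the password
# Note: --defaults-file must be the FIRST option on the command line.
cmd="mysqldump --defaults-file=$cnf mydb"
echo "$cmd"
```

The chmod 600 step is the whole point: credentials live in a file only the backup user can read, instead of appearing in the process list.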

Do not keep a space between -p and $database_pass. The password must come
immediately after -p, with no space.
$backup_command = "mysqldump -u" . $database_user . " -p" . $database_pass . " -h " .
$database_server . " " . $database_target . " > " . $sql_backup_file;
hope will work for you

You need to supply the database name. If you want a backup of all databases, add --all-databases instead of the single database name, and also remove any space between -p and the password:
"mysqldump -u" . $database_user . " -p" . $database_pass . " -h " .
$database_server . " --all-databases > " .
$sql_backup_file;

Try adding a space between -u and the username:
$backup_command = "mysqldump -u " . $database_user . " -p" . $database_pass . " -h " . $database_server . " " . $database_target . " > " . $sql_backup_file;
If the database server is located at localhost, you can remove the host parameter:
$backup_command = "mysqldump -u " . $database_user . " -p" . $database_pass . " " . $database_target . " > " . $sql_backup_file;

Related

Getting undesired file size when backing up MySQL database using mysqldump

shell_exec('mysqldump --opt -Q -h ' . cDatabaseHost . ' -u ' . cDatabaseUser . ' --password=' . cDatabasePassword . ' ' . cDatabaseDatabase . ' | gzip > ' . $DatabaseFileName);
Above I have given my code. When I back up my data, it creates backups of random sizes: sometimes 50 MB, and when I run it again, 10 MB; a different size every time, while the actual backup size should be around 500 MB. Kindly help me.
Thanks.

mysqldump creates an empty file in Win7 WAMP

Here is my code
<?php
$dbuser = "root";
$dbpass = "";
$dbname = "test";
$backupfile = $dbname .time()."-".date("Y-M-D h:i:s");
//$query= "C:\\wamp\\bin\\mysql\\mysql5.5.16\\bin\\mysqldump.exe " ."-u ". $dbuser ." -p ". $dbname.">". $backupfile.".sql";
$query= "C:\wamp\bin\mysql\mysql5.5.16\bin\mysqldump.exe " ."-u ". $dbuser ." -p ". $dbname.">". $backupfile.".sql";
echo $query;
system($query);
?>
But I am always getting a blank file. I am using WAMP server on Windows 7. I have tried double slashes and single slashes, but get the same result. Please help me with this issue.
N.B. One thing I have not mentioned yet: when I open this in the browser, the empty backup file is created, but the page never stops loading; it just shows loading....
$query= "C:\\wamp\\bin\\mysql\\mysql5.5.16\\bin\\mysqldump.exe --user root test > upload/". time()."_backup.sql";
Finally it works. I used --user instead of -u, and as I don't have any password for the WAMP server, I removed the --password field. Now it's working fine.
The command was asking for a password; you must remove the --password parameter or -p flag if your MySQL root user password is blank.

How to add a path to a database import in php mysql

First off, I searched all around and could not find a valid answer.
I want to import database files into a database using PHP.
The following only works if the import.php file resides in the same folder (temp) as the database files:
$command = "gunzip --to-stdout $backupfile | mysql -u$database_user -p$database_pass $database";
If I place the import.php in another location and add a path to the $backupfile, it does not work as shown below:
$database_path = '/home/site/public_html/temp/';
$command = "gunzip --to-stdout $database_path.$backupfile | mysql -u$database_user -p$database_pass $database";
What is the correct way to add the path to the database file ($backupfile) so that I can run the PHP script from any location?
Thanks in advance.
Try this :
$command = "gunzip --to-stdout {$database_path}{$backupfile} | mysql -u$database_user -p$database_pass $database";
I'd go with:
$database_path = '/home/site/public_html/temp/';
$command = 'gunzip --to-stdout ' . $database_path . $backupfile . ' | mysql -u' . $database_user . ' -p' . $database_pass . ' ' . $database;
or prepend a directory change in the same command string (a separate exec() call would not persist the working directory): $command = 'cd ' . $database_path . ' && ' . $command;
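The path concatenation itself can be checked without a MySQL server; in this sketch a temp directory stands in for /home/site/public_html/temp/ and cat stands in for the mysql client:

```shell
# Build a gzipped "dump" in a stand-in directory:
database_path=$(mktemp -d)/
backupfile=backup.sql.gz
printf 'CREATE TABLE t (id INT);\n' | gzip > "${database_path}${backupfile}"
# Same shape as the PHP command, with cat in place of mysql:
gunzip --to-stdout "${database_path}${backupfile}" | cat
# prints: CREATE TABLE t (id INT);
```

If this prints the SQL, the {$database_path}{$backupfile} concatenation is correct, and any remaining failure is in the mysql half of the pipe (credentials, database name) rather than the path.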

dump matching tables php

I'm trying to dump some of my tables when their prefix matches a given substring, using PHP. Using PHP's system() did not produce any result, in the sense that the dump file was not created. I thought I would use exec() instead, and tried the following:
exec('E:/xampp/mysql/bin/mysqldump '. $dbname .' -h '. $this->host .' -u ' .$this->user . ' $(E:/xampp/mysql/bin/mysql -u '. $this->user . ' -p ' . $dbname .' -Bse "show tables like \'wp_dev%\'")> mydb.sql 2>&1', $output);
but the subquery that should filter out the matching tables returns the following error:
mysqldump: unknown option '-s'
It seems like I'm missing something in the syntax.
Use it like this; it will dump only the listed tables:
exec('E:/xampp/mysql/bin/mysqldump -h '. $this->host .' -u ' .$this->user . ' -p'. $this->password .' '. $dbname .' table1 table2 > /path_to_file/dump_file.sql');
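The two-step shape (query the table list, then pass it to mysqldump) can be checked without a server; here printf stands in for the mysql -Bse "SHOW TABLES LIKE ..." call, and the table names are made up:

```shell
# Stand-in for: mysql -N -B -e "SHOW TABLES LIKE 'wp_dev%'"
tables=$(printf 'wp_dev_posts\nwp_dev_users\n')
# Unquoted expansion word-splits the list into separate arguments:
echo mysqldump -h localhost -u root mydb $tables
# prints: mysqldump -h localhost -u root mydb wp_dev_posts wp_dev_users
```

Note that in the original one-liner the inner mysql call uses "-p $dbname" with a space, so mysql treats $dbname as the password and the next token as an option, which is one way to end up with errors like "unknown option '-s'".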

Can you use a MySQL Query to Completely Create a Copy of the Database

I have a LIVE version of a MySQL database with 5 tables and a TEST version.
I am continually using phpMyAdmin to make a copy of each table in the LIVE version to the TEST version.
Does anyone have the mysql query statement to make a complete copy of a database? The query string would need to account for structure, data, auto increment values, and any other things associated with the tables that need to be copied.
Thanks.
Ok, after a lot of research, googling, and reading through everyone's comments herein, I produced the following script, which I now run from the browser address bar. I tested it and it does exactly what I needed it to do. Thanks for everyone's help.
<?php
function duplicateTables($sourceDB=NULL, $targetDB=NULL) {
    $link = mysql_connect('{server}', '{username}', '{password}') or die(mysql_error()); // connect to database
    $result = mysql_query('SHOW TABLES FROM ' . $sourceDB) or die(mysql_error());
    while($row = mysql_fetch_row($result)) {
        mysql_query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
    }
    mysql_free_result($result);
    mysql_close($link);
} // end duplicateTables()
duplicateTables('liveDB', 'testDB');
?>
Depending on your access to the server, I suggest using straight mysql and mysqldump commands; that's all phpMyAdmin is doing under the hood.
Reference material for mysqldump:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
There is a PHP class for that, though I haven't tested it yet.
From its description:
This class can be used to backup a MySQL database.
It queries a database and generates a list of SQL statements that can be used later to restore the database **tables structure** and their contents.
I guess this is what you need.
Hi, here you can use a simple bash script to back up the whole database.
######### SNIP BEGIN ##########
## Copy from here #############
#!/bin/bash
# to use the script do following:
# sh backup.sh DBNAME | sh
# where DBNAME is database name from alma016
# ex Backuping mydb data:
# sh backup.sh mydb hostname username pass| sh
echo "#sh backup.sh mydb hostname username pass| sh"
DB=$1
host=$2
user=$3
pass=$4
NOW=$(date +"%m-%d-%Y")
FILE="$DB.backup.$NOW.gz"
# rest of script
#dump command:
cmd="mysqldump -h $host -u$user -p$pass $DB | gzip -9 > $FILE"
echo $cmd
############ END SNIP ###########
EDIT
If you'd like to clone the backed-up database, edit the dump to change the database name, then load it (the dump is gzip-compressed, not a tar archive):
gunzip -c yourdump.gz | mysql -uusername -ppass
cheers Arman.
Well in script form, you could try using
CREATE TABLE ... LIKE syntax, iterating through a list of tables, which you can get from SHOW TABLES.
Only problem is that it does not recreate foreign keys natively (CREATE TABLE ... LIKE does preserve column definitions and indexes). So you would have to list them and create them as well. Then a few INSERT ... SELECT calls to get the data in.
If your schema never changes, only the data, then create a script that replicates the table structure and just do the INSERT ... SELECT business in a transaction.
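The per-table statements can be generated in a loop; here is a sketch with made-up table and schema names, where the SQL is only printed, not executed (you would pipe it into mysql):

```shell
# Generate copy statements for each table (names are placeholders):
for t in users orders; do
  cat <<SQL
CREATE TABLE testDB.$t LIKE liveDB.$t;
INSERT INTO testDB.$t SELECT * FROM liveDB.$t;
SQL
done
```

In practice you would replace the hard-coded list with the output of SHOW TABLES and pipe the generated statements into the mysql client, ideally inside a transaction as described above.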
Failing that, mysqldump as the others say is pretty easy to get working from a script. I have a daily firing cron job that dumps all manner of databases from my datacenter servers, connects via FTPS to my location and sends all the dumps across. It can be done, quite effectively. Obviously you have to make sure such facilities are locked down, but again, not overly hard.
As per code request
The code is proprietary, but I'll show you the critical section you need. This is from the middle of a foreach loop, hence the continue statements and the $c-prefixed variables (I use that prefix to indicate current-loop variables). The echo commands could be whatever you want; this is a cron script, so echoing the current status was appropriate. The flush() lines are helpful when you run the script from the browser, as the output is sent up to that point, so the browser results fill in as it runs rather than all turning up at the end. The ftp_fput() line reflects my situation of uploading the dump somewhere, straight from the pipe; you could instead open another process and pipe the output into a mysql process to replicate the database, provided suitable amendments were made.
$cDumpCmd = $mysqlDumpPath . ' -h' . $dbServer . ' -u' . escapeshellarg($cDBUser) . ' -p' . escapeshellarg($cDBPassword) . ' ' . $cDatabase . (!empty($dumpCommandOptions) ? ' ' . $dumpCommandOptions : '');
$cPipeDesc = array(0 => array('pipe', 'r'),
                   1 => array('pipe', 'w'),
                   2 => array('pipe', 'w'));
$cPipes = array();
$cStartTime = microtime(true);
$cDumpProc = proc_open($cDumpCmd, $cPipeDesc, $cPipes, '/tmp', array());
if (!is_resource($cDumpProc)) {
    echo "failed.\n";
    continue;
} else {
    echo "success.\n";
}
echo "DB: " . $cDatabase . " - Uploading Database...";
flush();
$cUploadResult = ftp_fput($ftpConn, $dbFileName, $cPipes[1], FTP_BINARY);
$cStopTime = microtime(true);
if ($cUploadResult) {
    echo "success (" . round($cStopTime - $cStartTime, 3) . " seconds).\n";
    $databaseCount++;
} else {
    echo "failed.\n";
    continue;
}
$cErrorOutput = stream_get_contents($cPipes[2]);
foreach ($cPipes as $cFHandle) {
    fclose($cFHandle);
}
$cDumpStatus = proc_close($cDumpProc);
if ($cDumpStatus != 0) {
    echo "DB: " . $cDatabase . " - Dump process caused an error:\n";
    echo $cErrorOutput . "\n";
    continue;
}
flush();
If you're using linux or mac, here is a single line to clone a database.
mysqldump -uUSER -pPASSWORD -hsample.host --single-transaction --quick test | mysql -uUSER -pPASSWORD -hqa.sample.host --database=test
The 'advantage' here is that it will lock the database while it's making a copy.
That means you end up with a consistent copy.
It also means your production database will be tied up for the duration of the copy which generally isn't a good thing.
Without locks or transactions, if something is writing to the database while you're making a copy, you could end up with orphaned data in your copy.
To get a good copy without impacting production, you should create a slave on another server.
The slave is updated in real time.
You can run the same command on the slave without impacting production.
