Automated or regular backup of MySQL data - PHP

I want to take regular backups of some tables in my mysql database using <insert favorite PHP framework here> / plain php / my second favorite language. I want it to be automated so that the backup can be restored later on in case something goes wrong.
I tried executing a query and saving the results to a file. Ended up with code that looks somewhat like this.
$sql = 'SELECT * FROM my_table ORDER BY id DESC';
$result = mysqli_query( $connect, $sql );
if( mysqli_num_rows( $result ) > 0 ) {
    $output = fopen( '/tmp/dumpfile.csv', 'w+' );
    /* loop through the result set and write each row to the file */
    while( $row = mysqli_fetch_array( $result ) ) {
        fputcsv( $output, $row, ',', '"' );
    }
    fclose( $output );
}
I set up a cron job on my local machine to hit the web page with this code. I also tried writing a cron job on the server to run the script from the CLI. But it's causing all sorts of problems. These include:
Sometimes the data is not consistent
The file appears to be truncated
The output cannot be imported into another database
Sometimes the script times out
I have also heard about mysqldump. I tried to run it with exec but it produces an error.
How can I solve this?

CSV and SELECT INTO OUTFILE
http://dev.mysql.com/doc/refman/5.7/en/select-into.html
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and
line terminators can be specified to produce a specific output format.
Here is a complete example:
SELECT * INTO OUTFILE '/tmp/my_table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM my_table;
The file is saved on the server and the path chosen needs to be writable. Though this query can be executed through PHP and a web request, it is best executed through the mysql console.
The data exported in this manner can be imported into another database using LOAD DATA INFILE.
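If you want to drive that import from PHP rather than the mysql console, something like the following should work. This is only a minimal sketch: the connection details, the table name my_table and the file path are placeholders, and it assumes the CSV sits on the database server and the MySQL account has the FILE privilege.
<?php
// Minimal sketch: import the CSV produced by SELECT ... INTO OUTFILE
// into another database with LOAD DATA INFILE.
// Assumes the file lives on the database server and the MySQL user
// has the FILE privilege; names and paths are illustrative.
$connect = mysqli_connect('localhost', 'db_user', 'db_password', 'other_db');

$sql = "LOAD DATA INFILE '/tmp/my_table.csv'
        INTO TABLE my_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
        LINES TERMINATED BY '\\n'";

if (!mysqli_query($connect, $sql)) {
    // Surface the error instead of failing silently
    die('Import failed: ' . mysqli_error($connect));
}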
While SELECT ... INTO OUTFILE is superior to iterating through a result set and saving it to a file row by row, it's not as good as using....
mysqldump
mysqldump is superior to SELECT INTO OUTFILE in many ways; producing CSV is just one of the many things this command can do.
The mysqldump client utility performs logical backups, producing a set
of SQL statements that can be executed to reproduce the original
database object definitions and table data. It dumps one or more MySQL
databases for backup or transfer to another SQL server. The mysqldump
command can also generate output in CSV, other delimited text, or XML
format.
Ideally mysqldump should be invoked from your shell. It is possible to use exec in PHP to run it, but since producing the dump might take a long time depending on the amount of data, and PHP scripts are usually limited to around 30 seconds of execution time, you would need to run it as a background process.
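As a rough illustration, launching it in the background from PHP might look like this. The credentials, database name and paths are placeholders, and error handling is reduced to checking the log file afterwards.
<?php
// Rough sketch: fire off mysqldump as a background process from PHP.
// Credentials, database name and paths are placeholders.
$dumpFile = '/var/backups/my_db-' . date('Y-m-d') . '.sql';
$logFile  = '/var/backups/mysqldump.log';

// Redirecting all output and appending '&' detaches the process, so the
// PHP script returns immediately instead of hitting max_execution_time.
$cmd = sprintf(
    'mysqldump --user=%s --password=%s %s > %s 2> %s &',
    escapeshellarg('db_user'),
    escapeshellarg('db_password'),
    escapeshellarg('my_db'),
    escapeshellarg($dumpFile),
    escapeshellarg($logFile)
);
exec($cmd);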
mysqldump isn't without its fair share of problems.
It is not intended as a fast or scalable solution for backing up
substantial amounts of data. With large data sizes, even if the backup
step takes a reasonable time, restoring the data can be very slow
because replaying the SQL statements involves disk I/O for insertion,
index creation, and so on.
For a classic example, see this question: Server crash on MySQL backup using python, where one mysqldump seems to have started before the earlier one finished, rendering the website completely unresponsive.
MySQL replication
Replication enables data from one MySQL database server (the master)
to be copied to one or more MySQL database servers (the slaves).
Replication is asynchronous by default; slaves do not need to be
connected permanently to receive updates from the master. Depending on
the configuration, you can replicate all databases, selected
databases, or even selected tables within a database.
Thus replication operates differently from SELECT INTO OUTFILE or mysqldump. It's ideal for keeping the local copy almost up to date (I would have said perfectly in sync, but there is something called slave lag). On the other hand, if you use a scheduled task to run mysqldump once every 24 hours, imagine what can happen if the server crashes after 23 hours.
Each time you run mysqldump you are producing a large amount of data. Keep doing it regularly and you will find your hard disk filling up or your file storage bills hitting the roof. With replication, only the changes are passed on to the slave (via the so-called binlog).
XtraBackup
An alternative to replication is to use Percona XtraBackup.
Percona XtraBackup is an open-source hot backup utility for MySQL-based
servers that doesn't lock your database during the backup.
Though made by Percona, it's compatible with MySQL and MariaDB. It has the ability to do incremental backups, the lack of which is the biggest limitation of mysqldump.
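As a sketch of what an incremental workflow could look like when driven from a PHP CLI script: the --backup, --target-dir and --incremental-basedir options follow Percona's documentation, but verify them against your XtraBackup version; directories are placeholders and credentials are assumed to come from ~/.my.cnf.
<?php
// Sketch only: a full base backup followed by an incremental backup with
// Percona XtraBackup, driven from a PHP CLI/cron script via exec().
// Option names are taken from Percona's docs; verify for your version.
// Assumes MySQL credentials are picked up from ~/.my.cnf.
$baseDir = '/data/backups/base';
$incDir  = '/data/backups/inc1';

// Full backup first...
exec('xtrabackup --backup --target-dir=' . escapeshellarg($baseDir));

// ...then an incremental backup that only records pages changed since the base.
exec('xtrabackup --backup --target-dir=' . escapeshellarg($incDir)
   . ' --incremental-basedir=' . escapeshellarg($baseDir));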

I suggest taking the database backup with a command-line utility driven by a script file, instead of a PHP script.
Make a my.ini file to store the configuration
Create a my.ini file containing the default DB username, password and host in the user's home directory, so the script will pick them up from there. (On Linux, the per-user option file that MySQL clients read automatically is ~/.my.cnf.)
[client]
user = <db_user_name>
password = <db_password>
host = <db_host>
Create a shell script called backup.sh
#!/bin/sh
#
# Script to take a backup every day
# change to your backup directory
cd /path_of_your_directory
# dump the application database
mysqldump <your_database_name> > tmp_db.sql
# compress it into a zip file
zip app_database-$(date +%Y-%m-%d).sql.zip tmp_db.sql
# remove the plain sql file
rm -f tmp_db.sql
Give executable permission to the .sh file
chmod +x backup.sh
Set up a cron job (crontab -e), for example to run the backup daily at 2 AM:
0 2 * * * sh /<script_path>/backup.sh >/dev/null 2>&1
That's all.
Good luck!

Related

Using mysql INTO OUTFILE in web application context

I want to export data from MySQL quickly to an output file. As it turns out, the INTO OUTFILE syntax seems miles ahead of any kind of processing I can do in PHP performance-wise. However, this approach seems to be riddled with problems:
The output file can only be created in /tmp or /var/lib/mysql/ (mysqld user needs write permissions)
The output file's owner and group will be set to the mysqld user
The tmp dir is pretty much a dumpster fire because of settings like "private tmp" (e.g. systemd's PrivateTmp, which gives each service its own view of /tmp).
How would I manage this in a way that isn't a nightmare in terms of managing the user accounts / file permissions?
I need to access the output file from my php script and I would also like to output this file to the application directory if possible. Of course, if there is another way to export my query results in a performance effective way, I would like to know of it.
Currently I am thinking of the following approaches:
Add the mysqld user to the "www-data" group so it can access application files and write to the application dir; other www-data users will hopefully be able to access the output files.
I could not get the access rights working for the mysql user. Having scripts add the user to the www-data group, or other such measures, would also increase the application deployment overhead.
I decided to go with the program piping method, using the Symfony Process component.
mysql -u <username> -p<password> <database> -e "<query>" | sed 's/\t/","/g;s/^/"/;s/$/"/;' > /output/path/here.csv
Note that the CSV formatting might break if your columns contain reserved characters like \, " or \n. You will also need to escape these characters (" to \" for example) and possibly do something about MySQL outputting null values as the string NULL.
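For reference, a rough sketch of how that pipeline can be run through the Symfony Process component. The credentials, query and output path are placeholders, and it assumes a Symfony Process version that provides fromShellCommandline() (4.2+).
<?php
// Rough sketch: run the mysql | sed pipeline via Symfony Process.
// Credentials, query and output path are placeholders.
use Symfony\Component\Process\Process;

require __DIR__ . '/vendor/autoload.php';

$cmd = 'mysql -u db_user -pdb_password my_database -e "SELECT * FROM my_table" '
     . '| sed \'s/\t/","/g;s/^/"/;s/$/"/;\' > /var/www/app/var/export.csv';

// fromShellCommandline() is needed because the command uses a pipe and redirection.
$process = Process::fromShellCommandline($cmd);
$process->setTimeout(300);   // allow up to 5 minutes for big exports
$process->run();

if (!$process->isSuccessful()) {
    throw new RuntimeException('Export failed: ' . $process->getErrorOutput());
}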

Unable to execute mysqldump from PHP script

I am trying to create periodic backups (poor man's cron) of my database using mysqldump with exec() function. I am using XAMPP/PHP7 on macOS.
$command = "$mysqldump_location -u$db_user -h$db_host -p$db_password $db_name > $backup_file_location";
exec($command);
When I run the PHP script, I get no SQL dump in the path mentioned in $backup_file_location but if I execute the same $command string on the terminal directly I get the desired SQL file in the desired location.
I am unable to understand what could be the problem here. Also open to suggestions on better ways to dump the entire DB.
Edit 1:
The value of $mysqldump_location is /Applications/XAMPP/xamppfiles/bin/mysqldump
The value of $backup_file_location is /Applications/XAMPP/xamppfiles/htdocs/app5/data/sqldumps/sql_data.sql
/app5/ is the folder in which I am developing my app.
Edit 2:
The suggested duplicate does not apply, since the issue here was not how to dump SQL backups. The key issue was that the backup using mysqldump worked through the terminal, but not through PHP's exec() function.
The resolution of the issue, from above comments, was that the PHP request executes in XAMPP as a user that has limited privileges, and the mysqldump process inherits those privileges.
Checking the exit status of the process run by exec() confirmed that mysqldump exited with a nonzero exit status, indicating it failed for some reason.
Opening write privileges to 777 on the directory where the mysqldump process tries to write resolved the error.
It should also be adequate to figure out the specific uid and gid of the Apache processes (check the User and Group config values in the Apache config file, e.g. xampp-home/apache/conf/httpd.conf) and make the output directory writable by that uid or gid.
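As a general debugging aid (not specific to XAMPP), capturing the exit status and stderr of the command makes this kind of failure much easier to diagnose. A minimal sketch, with placeholder paths and credentials:
<?php
// Minimal sketch: run mysqldump via exec() and surface its exit code and
// error output instead of failing silently. Paths and credentials are placeholders.
$command = '/Applications/XAMPP/xamppfiles/bin/mysqldump'
         . ' -u db_user -h localhost -pdb_password my_db'
         . ' > /path/to/backup.sql 2> /tmp/mysqldump.err';

exec($command, $output, $exitCode);

if ($exitCode !== 0) {
    // A nonzero exit code means mysqldump failed; the reason is in stderr.
    error_log('mysqldump failed with exit code ' . $exitCode . ': '
        . file_get_contents('/tmp/mysqldump.err'));
}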

How to fetch & store over 100k records to DB from another DB

I have a school database with more than 80,000 records, and I want to update and insert them into my newSchool database using PHP. Whenever I try to run the update or insert queries, it processes almost 2,000 records and after some time stops automatically. Please help.
You could (should) do a full dump and import that dump later. I'm not sure how to do it with PHP, and I think you'd be better off doing this with these commands on the CLI:
mysqldump -u <username> -p -A -R -E --triggers --single-transaction > backup.sql
And on your localhost:
mysql -u <username> -p < backup.sql
The meanings of the backup command's flags, from the docs:
-u
DB_USERNAME
-p
DB_PASSWORD
Don't paste your password here, but enter it after mysql asks for it. Using a password on the command line interface can be insecure.
-A
Dump all tables in all databases. This is the same as using the --databases option and naming all the databases on the command line.
-E
Include Event Scheduler events for the dumped databases in the output.
This option requires the EVENT privileges for those databases.
The output generated by using --events contains CREATE EVENT
statements to create the events. However, these statements do not
include attributes such as the event creation and modification
timestamps, so when the events are reloaded, they are created with
timestamps equal to the reload time.
If you require events to be created with their original timestamp
attributes, do not use --events. Instead, dump and reload the contents
of the mysql.event table directly, using a MySQL account that has
appropriate privileges for the mysql database.
-R
Include stored routines (procedures and functions) for the dumped
databases in the output. Use of this option requires the SELECT
privilege for the mysql.proc table.
The output generated by using --routines contains CREATE PROCEDURE and
CREATE FUNCTION statements to create the routines. However, these
statements do not include attributes such as the routine creation and
modification timestamps, so when the routines are reloaded, they are
created with timestamps equal to the reload time.
If you require routines to be created with their original timestamp
attributes, do not use --routines. Instead, dump and reload the
contents of the mysql.proc table directly, using a MySQL account that
has appropriate privileges for the mysql database.
--single-transaction
This option sets the transaction isolation mode to REPEATABLE READ and
sends a START TRANSACTION SQL statement to the server before dumping
data. It is useful only with transactional tables such as InnoDB,
because then it dumps the consistent state of the database at the time
when START TRANSACTION was issued without blocking any applications.
If you only need the data and don't need routines or events, just skip those flags.
Be sure to commit after every few commands, for example after 500 rows. That saves memory, but has the drawback that a rollback will only go back to the last commit.
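If you do end up copying the rows with PHP instead of a dump, batching the inserts inside transactions is what keeps it from stalling. A rough sketch using PDO; the DSNs, credentials, and the students table and its columns are hypothetical placeholders.
<?php
// Rough sketch: copy rows from the old school DB to the new one in
// batches of 500, committing after each batch. DSNs, credentials and
// table/column names are placeholders.
$source = new PDO('mysql:host=localhost;dbname=school', 'db_user', 'db_password');
$target = new PDO('mysql:host=localhost;dbname=newSchool', 'db_user', 'db_password');
$target->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$insert = $target->prepare(
    'INSERT INTO students (id, name, grade) VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE name = VALUES(name), grade = VALUES(grade)'
);

$rows  = $source->query('SELECT id, name, grade FROM students');
$count = 0;

$target->beginTransaction();
foreach ($rows as $row) {
    $insert->execute([$row['id'], $row['name'], $row['grade']]);
    if (++$count % 500 === 0) {
        $target->commit();           // flush every 500 rows
        $target->beginTransaction();
    }
}
$target->commit();                   // commit the final partial batch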

Restore a postgresql dump (.sql file) without the command line?

Scenario:
I have built a PHP framework that uses a postgresql database. The framework comes shipped with a .sql file which is a dump of the default tables and data that the framework requires.
I want to be able to run the sql file from the client (PHP), rather than the command line, in order to import the data. This is because I have come across some server setups where accessing the command line is not always a possibility, and/or running certain commands isn't possible (pg_restore may not be accessible to the PHP user for example).
I have tried simply splitting up the .sql file and running it as a query using the pgsql PHP extension; however, because the dump file uses COPY commands to create the data, this doesn't seem to work. It seems that because COPY is used, the .sql file expects to be imported using the pg_restore command (unless I am missing something?).
Question:
So the question is, how can I restore the .sql dump, or create the .sql dump in a way that it can be restored via the client (PHP) rather than the command line?
For example:
<?php pg_query(file_get_contents($sqlFile)); ?>
Rather than:
$ pg_restore -d dbname filename
Example of the error:
I am using pgAdmin III to generate the .sql dump, using the "plain" setting. In the .sql file, the data that will be inserted into a table looks like this:
COPY core_classes_models_api (id, label, class, namespace, description, "extensionName", "readAccess") FROM stdin;
1 data Data \\Core\\Components\\Apis\\Data The data api Core 310
\.
If I then run the above sql within a pgAdmin III query window, I get the following error:
ERROR: syntax error at or near "1"
LINE 708: 1 data Data \\Core\\Components\\Apis\\Data The data api Core...
This was a bit tricky to find, but after some investigation, it appears that pg_dump's "plain" format (which generates a plain-text SQL file) generates COPY commands rather than INSERT commands by default.
Looking at the documentation for pg_dump, I found the --inserts option. Configuring this option makes the dump emit INSERT commands where it would normally emit COPY commands.
The documentation does state:
This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
This works for my purposes however, and hopefully will help others with the same problem!
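For the record, a rough sketch of the whole round trip under these assumptions: the dump is regenerated with --inserts, and the connection details and the file name default_tables.sql are placeholders.
<?php
// Rough sketch: restore a plain-text dump generated with --inserts
// entirely from PHP. Connection details and file name are placeholders.
// Generate the dump once, wherever pg_dump is available:
//   pg_dump --inserts --file=default_tables.sql my_framework_db

$conn = pg_connect('host=localhost dbname=my_framework_db user=db_user password=db_password');

$sql = file_get_contents(__DIR__ . '/default_tables.sql');

// pg_query() can execute multiple semicolon-separated statements in one call,
// which is enough for CREATE TABLE / INSERT style dumps (no COPY ... FROM stdin).
if (pg_query($conn, $sql) === false) {
    die('Restore failed: ' . pg_last_error($conn));
}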

MySQL reload privilege for mysqldump during PHP cron job: Use MySQL admin account or create unique user? Security?

I'm running a cron job that executes mysqldump via a PHP script, the dump requires the RELOAD privilege. Using the MySQL admin account doesn't feel right, but neither does creating a user with admin privileges.
My main concern is the security aspect, I'm loading the db attributes (username, password, etc.) in a protected array "property" of the class I'm using.
I'm wondering which approach makes more sense or if there's another way to achieve the same results.
Overview:
LAMP Server: CENTOS 5.8, Apache 2.2.3, MySQL 5.0.95, PHP 5.3.3
Cron job outline:
Dump raw stats data from two InnoDB tables in the website db; they have a foreign key relationship.
Load the data into tables in the stats db.
Get the last value of the auto-incrementing primary key that was transferred.
Use the primary key value in a query that deletes the copied data from the website db.
Process the transferred data in the stats db to populate the reports tables.
When processing completes, delete the raw stats data from the stats db.
The website database is configured as a master with binary logging, and the replicated server will be set up once the stats data is no longer stored and processed in the website database (replicating the website database was the impetus for moving the stats to their own database).
All files accessed during the cron job are located outside the DocumentRoot directory.
The nitty gritty:
The mysqldump performed in the first step requires the RELOAD privilege, here's the command:
<?php
$SQL1 = "--no-create-info --routines --triggers --master-data ";
$SQL1 .= "--single-transaction --quick --add-locks --default-character-set=utf8 ";
$SQL1 .= "--compress --tables stats_event stats_event_attributes";
$OUTPUT_FILENAME = "/var/stats/daily/daily-stats-18.tar.gz";
$cmd1 = "/usr/bin/mysqldump -u website_user -pXXXXXX website_db $SQL1 | gzip -9 > $OUTPUT_FILENAME";
exec( $cmd1 );
?>
The error message:
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need the RELOAD privilege for this operation (1227)
Works fine if I use the mysql admin credentials.
I'm wondering which approach makes more sense or if there's another way to achieve the same results.
The bottom line is that you need a user with certain privileges to run that mysqldump command. While it may seem silly to create a new user just for this one cron job, it's the most straightforward and simple approach you can take that at least gives the outward appearance of lolsecurity.
Given that this is a stopgap measure until you can get replication up and running, there's no real harm being done here. Doing this by replication is totally the way to go, and the stopgap measure seems sane.
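As a sketch of what that dedicated user could look like: the privilege list below is an assumption based on the mysqldump options shown in the question (--single-transaction --master-data --routines --triggers), and the backup_user name and password are placeholders. Adjust the grants if mysqldump still reports missing privileges.
<?php
// Sketch: create a dedicated backup user instead of reusing the admin account.
// The privilege list is an assumption derived from the mysqldump options in
// the question; verify it against your MySQL version.
$admin = mysqli_connect('localhost', 'root', 'admin_password');

$statements = [
    "CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'strong_password'",
    // RELOAD covers FLUSH TABLES; REPLICATION CLIENT covers the binlog
    // coordinates needed by --master-data.
    "GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER, EVENT,
           RELOAD, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost'",
    "FLUSH PRIVILEGES",
];

foreach ($statements as $sql) {
    if (!mysqli_query($admin, $sql)) {
        die('Failed: ' . mysqli_error($admin));
    }
}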
Also, when it comes time to get replication going, xtrabackup is your friend. It includes binary log naming and position information with the snapshot it takes, which makes setting up new slaves a breeze.
I just ran across this same error (probably on the same site you were working on :) ), even when running as the MySQL root user. I managed to get around it by not specifying --skip-add-locks, e.g. this worked:
/usr/bin/mysqldump -u USERNAME -pPW DATABASE_NAME --skip-lock-tables --single-transaction --flush-logs --hex-blob
