Generate MySQL data dump in SQL from PHP

I'm writing a PHP script to generate SQL dumps from my database for version control purposes. It already dumps the data structure by means of running the appropriate SHOW CREATE .... query. Now I want to dump data itself but I'm unsure about the best method. My requirements are:
I need a record per row
Rows must be sorted by primary key
SQL must be valid and exact no matter the data type (integers, strings, binary data...)
Dumps should be identical when data has not changed
I can detect and run mysqldump as an external command, but that adds an extra system requirement, and I need to parse the output to remove headers and footers with dump information I don't need (such as the server version or dump date). I'd love to keep my script as simple as I can so it can be held in a standalone file.
What are my alternatives?

I think parsing the mysqldump output is still the easiest. There is really little metadata you have to exclude, so dropping it should only take a few lines of code.
You could also look at the following mysqldump options: --tab, --compact.
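For example, a minimal sketch of shelling out and filtering from PHP. It assumes mysqldump is on the PATH, credentials come from a my.cnf file, and the database name mydb is a placeholder; --skip-extended-insert gives one INSERT per row, --order-by-primary sorts rows by primary key, and --hex-blob keeps binary columns reproducible:

<?php
// Run mysqldump with options that suppress the variable header/footer
// and make the output deterministic.
$cmd = 'mysqldump --compact --skip-extended-insert'
     . ' --order-by-primary --hex-blob mydb';
$dump = shell_exec($cmd);

// Drop any remaining comment or conditional-directive lines.
$lines = array_filter(explode("\n", $dump), function ($line) {
    return strpos($line, '--') !== 0 && strpos($line, '/*!') !== 0;
});
file_put_contents('dump.sql', implode("\n", $lines));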

You could always try the system() function. Hope you are on Linux :)

"Select * from tablename order by primary key"
you can get the table names from the information_schema database or the "show tables" command. Also can get the primary key from "show indexes from tablename"
and then loop and create insert statements strings
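A rough sketch of that loop, assuming a mysqli connection and that escaping with real_escape_string is exact enough for your data (binary columns may warrant dumping via HEX() instead); connection details are placeholders:

<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

foreach ($db->query('SHOW TABLES')->fetch_all() as $t) {
    $table = $t[0];

    // Collect the primary key columns for a stable row order.
    $pk = [];
    foreach ($db->query("SHOW INDEXES FROM `$table`") as $idx) {
        if ($idx['Key_name'] === 'PRIMARY') {
            $pk[] = '`' . $idx['Column_name'] . '`';
        }
    }
    $order = $pk ? ' ORDER BY ' . implode(', ', $pk) : '';

    foreach ($db->query("SELECT * FROM `$table`$order") as $r) {
        // mysqli returns every value as a string; quoting numbers as
        // strings is still valid SQL and keeps the dump deterministic.
        $vals = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL'
                               : "'" . $db->real_escape_string($v) . "'";
        }, array_values($r));
        echo "INSERT INTO `$table` VALUES (" . implode(', ', $vals) . ");\n";
    }
}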

Related

PHP create a copy command like phpmyadmin

I am new to PHP development and just wondering if there's an existing function in PHP that duplicates the copy command in phpMyAdmin. I know the query sequence is below, but this would be a long query/code since the table has a lot of columns. If phpMyAdmin has this feature, maybe it's calling a built-in function?
SELECT * FROM table WHERE id = X
INSERT INTO table (XXX) VALUES (XXX)
where the inserted values are based on the SELECT query.
Note: the id is a primary key and auto_increment.
Here is the copy command in phpMyAdmin (screenshot not shown).
There is no built-in functionality in MySQL to duplicate a row other than an INSERT statement of the form: INSERT INTO tableName ( columns-specification ) SELECT columns-specification FROM tableName WHERE primaryKeyColumns = primaryKeyValue.
The problem is you need to know the names of the columns beforehand; you also need to exclude auto_increment columns as well as primary-key columns, and know how to come up with "smart defaults" for non-auto_increment primary key columns, especially composite keys. You'll also need to consider whether any triggers should be executed, and how to handle any constraints and indexes designed to prevent the duplicate values that a "copy" operation might introduce.
You can still do it in PHP, or even pure-MySQL (inside a sproc, using Dynamic SQL) but you'll need to query information_schema to get metadata about your database - which may be more trouble than it's worth.
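Still, for a simple single-table copy it boils down to a few lines. A sketch, assuming a mysqli connection in $db and one auto_increment primary key named id (all names are illustrative):

<?php
$table = 'mytable';
$id    = 42; // the row to copy

// Fetch every column name except the auto_increment one.
$cols = [];
$res = $db->query(
    "SELECT COLUMN_NAME FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = '$table'
        AND EXTRA NOT LIKE '%auto_increment%'"
);
foreach ($res as $row) {
    $cols[] = '`' . $row['COLUMN_NAME'] . '`';
}
$list = implode(', ', $cols);

// MySQL assigns a fresh id to the copied row.
$db->query("INSERT INTO `$table` ($list) SELECT $list FROM `$table` WHERE id = $id");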

php multiple table changes

I need to generate a php script that will carry out a sequential backup and update/renaming a number of MySql tables. Can I do this in a single query or will I need to generate a query for each action?
I need the script to do the following in order
DROP TABLE backup2
RENAME TABLE backup1 TO backup2
RENAME TABLE main TO backup1
COPY TABLE incomingmain TO main
TRUNCATE TABLE incomingmain
In practice the TABLE incomingmain will be populated from an external import before the TABLE update sequence above is carried out.
Can anyone please advise how I should structure this after connecting to the database?
You are better off using mysqli::multi_query().
It also depends on the return values: are you going to check for errors, or blindly run them all at once? If I were you, I would code it sequentially, just because it will look much cleaner from a coding point of view.
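For example, a sequential version with error checking might look like the sketch below. Note that MySQL has no COPY TABLE statement, so that step is written as CREATE TABLE ... LIKE plus INSERT ... SELECT; $db is an assumed mysqli connection:

<?php
$steps = [
    'DROP TABLE IF EXISTS backup2',
    'RENAME TABLE backup1 TO backup2',
    'RENAME TABLE main TO backup1',
    'CREATE TABLE main LIKE incomingmain',
    'INSERT INTO main SELECT * FROM incomingmain',
    'TRUNCATE TABLE incomingmain',
];
foreach ($steps as $sql) {
    if ($db->query($sql) === false) {
        die("Failed at \"$sql\": " . $db->error);
    }
}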

On the fly anonymisation of a MySQL dump

I am using mysqldump to create DB dumps of the live application to be used by developers.
This data contains customer data. I want to anonymize this data, i.e. remove customer names / credit card data.
An option would be:
create copy of database (create dump and import dump)
fire SQL queries that anonymize the data
dump the new database
But this has too much overhead.
A better solution would be, to do the anonymization during dump creation.
I guess I would end up parsing all the mysqldump output? Are there any smarter solutions?
You can try Myanon: https://myanon.io
Anonymization is done on the fly during dump:
mysqldump | myanon -f db.conf | gzip > anon.sql.gz
Why are you selecting from your tables if you want to randomize the data?
Do a mysqldump of the tables that are safe to dump (configuration tables, etc) with data, and a mysqldump of your sensitive tables with structure only.
Then, in your application, you can construct the INSERT statements for the sensitive tables based on your randomly created data.
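A minimal sketch of that last step, writing INSERT statements with made-up values for a structure-only customers table (table and column names are examples):

<?php
// Generate fake rows for the sensitive table instead of dumping real ones.
$out = fopen('anon-customers.sql', 'w');
for ($i = 1; $i <= 1000; $i++) {
    fwrite($out, sprintf(
        "INSERT INTO customers (id, name, email) VALUES (%d, 'Customer %d', 'user%d@example.com');\n",
        $i, $i, $i
    ));
}
fclose($out);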
I had to develop something similar a few days ago. I couldn't do INTO OUTFILE because the DB is AWS RDS. I ended up with this approach:
Dump data in tabular text form from some table:
mysql -B -e 'SELECT `address`.`id`, "address1", "address2", "address3", "town", "00000000000" AS `contact_number`, "example@example.com" AS `email` FROM `address`' some_db > addresses.txt
And then to import it:
mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE 'addresses.txt' INTO TABLE \`address\` FIELDS TERMINATED BY '\t' ENCLOSED BY '\"' IGNORE 1 LINES" some_db
Only the mysql command is required to do this.
While the export is pretty quick (a couple of seconds for ~30,000 rows), the import process is a bit slower, but still fine. I had to join a few tables along the way and there were some foreign keys, so it will surely be faster if you don't need that. Disabling foreign key checks while importing will also speed things up.
You could do a SELECT on each table (and not a SELECT *), specifying the columns you want and omitting or blanking the ones you don't, and then use the export option of phpMyAdmin for each query.
You can also use the SELECT ... INTO OUTFILE syntax from a SELECT query to make a dump with a column filter.
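A sketch of that, with the usual INTO OUTFILE caveats: the file is written by mysqld on the database server host, must not already exist, and is subject to secure_file_priv (names and the redaction are examples):

<?php
// The column list acts as the filter; sensitive columns are replaced
// with a constant before the data ever leaves the server.
$db->query("
    SELECT id, name, 'REDACTED' AS credit_card
    INTO OUTFILE '/tmp/customers.txt'
    FIELDS TERMINATED BY '\\t' ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
    FROM customers
");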
I found two similar questions, but it looks like there is no easy solution for what you want. You will have to write a custom export yourself:
MySQL dump by query
MySQL: Dump a database from a SQL query
phpMyAdmin provides an export option to the SQL format based on SQL queries. It might be an option to extract this code from phpMyAdmin (which is probably well tested) and use it in this application.
Refer to the phpMyAdmin export plugin - exportData method for the code.

How to run multiple sql queries using php without giving load on mysql server?

I have a script that reads an Excel sheet containing a list of almost 10,000 products. The script compares them with the products in the MySQL database and checks:
if the product is not available, then ADD IT (so I have put insert query for that)
if the product is already available, then UPDATE IT (so I have put update query for that)
Now the problem is, it creates a very heavy load on the MySQL server, which eventually fails with the message "MySQL server has gone away".
I want to know: is there a better method to do this Excel sheet import without straining the MySQL server?
I am not sure if this is the case, but judging from your post, I assume that for every check you initialize a new connection to the MySQL server. If that is indeed the case, you can simply connect once before you do the checks and run all subsequent queries through that connection.
Next to that, a good optimization would be to introduce indexes in MySQL to significantly speed up the product search: index the product table columns that you reference most in your PHP search function.
You could also increase the MySQL buffer size to something above 256 MB in order to cache most of the results, and use InnoDB so you do not need to lock the whole table for every check and insert.
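A sketch of the first two suggestions combined: one connection reused for every lookup, via a prepared statement, with an index on the lookup column (the products table and sku column are assumptions):

<?php
// One-time setup, not per import: ALTER TABLE products ADD INDEX idx_sku (sku);
$db = new mysqli('localhost', 'user', 'pass', 'shop');

$check = $db->prepare('SELECT id FROM products WHERE sku = ?');
foreach ($sheetRows as $r) {       // $sheetRows: the parsed Excel rows
    $check->bind_param('s', $r['sku']);
    $check->execute();
    $exists = $check->get_result()->num_rows > 0;
    // ... issue the UPDATE or INSERT accordingly, reusing $db
}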
I'm not sure why PHP has come into the mix. Excel can connect directly to a MySQL database, and you should be able to do a WHERE NOT IN query to add items and UPDATE statements for the ones that have changed, using Excel VBA.
http://helpdeskgeek.com/office-tips/excel-to-mysql/
You could try and condense your code somewhat (you might have already done this though) but if you think it can be whittled down more, post it and we can have a look.
Cache data you know exists already: if a product's attributes don't change regularly, you might not need to check them so often. You can cache the data for quick retrieval/changes later (see Memcached; other caching alternatives are available). You could end up reducing your workload dramatically.
Have you separated your MySQL server? Try running the product checks on a different sub-system, and merge the databases into your main one hourly, daily, or whatever.
OK, here is a quick thought.
Instead of running a query after every check of whether the product is present or not, keep appending to your SQL string until you reach the end, and then execute it all at once.
Example
$query = ""; // create a query container
if ($present) {
    $query .= "UPDATE ....;"; // remember the ";" delimiter between statements
} else {
    $query .= "INSERT ....;";
}
// Now, finally run it all in one round trip. Note that mysql_query()
// cannot execute multiple statements, so use mysqli::multi_query():
$result = $mysqli->multi_query($query);
This way, you only make one query call at the end.
Update: Approach this another way.
Let a single query handle it:
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
-- when a row with that unique key already exists, the above is equivalent to:
UPDATE table SET c=c+1 WHERE a=1;
Reference

Archive MySQL data using PHP every week

I have a MySQL DB that receives a lot of data from a source once every week, on a certain day at a given time (about 1.2 million rows), and stores it in, let's call it, the "live" table.
I want to copy all the data from "live" table into an archive and truncate the live table to make space for the next "current data" that will come in the following week.
Can anyone suggest an efficient way of doing this? I am really trying to avoid a plain "INSERT INTO archive_table SELECT * FROM live". I would like the ability to run this archiver using PHP, so I can't use Maatkit. Any suggestions?
EDIT: Also, the archived data needs to be readily accessible. Since every insert is timestamped, if I want to look for the data from last month, I can just search for it in the archives
The sneaky way:
Don't copy records over. That takes too long.
Instead, just rename the live table out of the way, and recreate:
RENAME TABLE live_table TO archive_table;
CREATE TABLE live_table (...);
It should be quite fast and painless.
EDIT: The method I described works best if you want an archive table per rotation period. If you want to maintain a single archive table, you might need to get trickier. However, if you just want to run ad-hoc queries on historical data, you can probably get by with UNION.
If you only want to keep a few periods' worth of data, you could do the rename thing a few times, in a manner similar to log rotation. You could then define a view that UNIONs the archive tables into one big honkin' table, as below.
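For instance, after a couple of rotations (archive table names are examples):

<?php
// One view over the rotated archives, so they query like a single table.
$db->query('CREATE OR REPLACE VIEW archive_all AS
            SELECT * FROM archive1 UNION ALL SELECT * FROM archive2');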
EDIT2: If you want to maintain auto-increment stuff, you might hope to try:
RENAME TABLE live TO archive1;
CREATE TABLE live (...);
ALTER TABLE LIVE AUTO_INCREMENT = (SELECT MAX(id) FROM archive1);
but sadly, that won't work: AUTO_INCREMENT has to be set to a literal value, not a subquery. However, if you're driving the process with PHP, that's pretty easy to work around.
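A sketch of that workaround: fetch the value first, then splice the literal into the ALTER TABLE (again assuming a mysqli connection in $db):

<?php
$max = $db->query('SELECT MAX(id) FROM archive1')->fetch_row()[0];
$db->query('ALTER TABLE live AUTO_INCREMENT = ' . ((int) $max + 1));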
Write a script to run as a cron job to:
Dump the archive data from the "live" table (this is probably more efficient using mysqldump from a shell script)
Truncate the live table
Modify the INSERT statements in the dump file so that the table name references the archive table instead of the live table
Append the archive data to the archive table (again, could just import from dump file via shell script, e.g. mysql dbname < dumpfile.sql)
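Put together, a rough sketch of that script; dbname, the table names, and the paths are placeholders, credentials are assumed to come from a my.cnf file, and $db is an assumed mysqli connection:

<?php
// 1. Dump the data (not the schema) from the live table.
exec('mysqldump --no-create-info dbname live > /tmp/live.sql');

// 2. Truncate the live table.
$db->query('TRUNCATE TABLE live');

// 3. Point the INSERT statements at the archive table instead.
$sql = str_replace('INSERT INTO `live`', 'INSERT INTO `archive`',
                   file_get_contents('/tmp/live.sql'));
file_put_contents('/tmp/archive.sql', $sql);

// 4. Append the rows to the archive table.
exec('mysql dbname < /tmp/archive.sql');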
This would depend on what you're doing with the data once you've archived it, but have you considered using MySQL replication?
You could set up another server as a replication slave, and once all the data gets replicated, run your delete or truncate with SET sql_log_bin = 0 in that session first, to avoid those statements also being replicated.
