How to synchronize a postgresql database with data from mysql database? - php

I have an application at Location A (LA-MySQL) that uses a MySQL database, and another application at Location B (LB-PSQL) that uses a PostgreSQL database. (By location I mean physically distant places on different networks, if that matters.)
I need to update one table at LB-PSQL so that it stays synchronized with LA-MySQL, but I don't know what the best practices in this area are.
Also, the table I need to update at LB-PSQL does not necessarily have the same structure as the one at LA-MySQL. (But I think that isn't a problem, since the fields I need to update on LB-PSQL can accommodate the data from the LA-MySQL fields.)
Given this, what are the best practices, usual methods, or references for doing this kind of thing?
Thanks in advance for any feedback!

If both servers are on different networks, the only option I see is to export the data from MySQL into a flat file.
Then transfer the file (e.g. via FTP or something similar) to the PostgreSQL server and import it there using COPY.
I would recommend importing the flat file into a staging table. From there you can use SQL to move the data to the appropriate target table. That gives you the chance to do data conversion or update existing rows.
If that transformation is more complicated, you might want to think about using an ETL tool (e.g. Kettle) to do the migration on the target server.
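For illustration only, here is a minimal sketch of that flat-file/staging-table flow. It assumes a source table mytable in a MySQL database la_db, a staging table mytable_staging and a target table lb_target in a PostgreSQL database lb_db, and three columns id, name and price; all of these names are placeholders.
# On LA: export the table from MySQL as tab-separated text (no column headers)
mysql -B -N -e "SELECT id, name, price FROM mytable" la_db > mytable.tsv
# Transfer mytable.tsv to the PostgreSQL server (FTP, scp, ...), then on LB:
psql lb_db <<'SQL'
TRUNCATE mytable_staging;
\copy mytable_staging (id, name, price) FROM 'mytable.tsv'
-- move/convert the staged rows into the real target table
INSERT INTO lb_target (id, name, price)
SELECT id, name, price FROM mytable_staging;
SQL
With real data you would also have to take care of NULL representation and escaping, which is where an ETL tool earns its keep.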

Just create a script on LA that will do something like this (bash sample):
TMPFILE=`mktemp` || (echo "mktemp failed" 1>&2; exit 1)
pg_dump --column-inserts --data-only --no-password \
--host="LB_hostname" --username="username" \
--table="tablename" "databasename" \
| awk '/^INSERT/ {i=1} {if(i) print} # ignore everything up to the first INSERT' \
> "$TMPFILE" \
|| (echo "pg_dump failed" 1>&2; exit 1)
(echo "begin; truncate tablename;"; cat "$TMPFILE"; echo 'commit;' ) \
| mysql "databasename" \
|| (echo "mysql failed" 1>&2; exit 1)
rm "$TMPFILE"
And set it to run, for example, once a day from cron. You'd need a .pgpass file for the PostgreSQL password and a MySQL option file for the MySQL password.
This should be fast enough for less than a million rows.
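For example (a sketch only; the script path, host names and credentials are placeholders), the cron entry and the two credential files mentioned above could look like this:
# crontab entry: run the sync script every day at 02:30
30 2 * * * /usr/local/bin/sync_table.sh >> /var/log/sync_table.log 2>&1
~/.pgpass (chmod 600) lets pg_dump connect without prompting, one line per server in the form hostname:port:database:username:password:
LB_hostname:5432:databasename:username:secret
~/.my.cnf lets the mysql client pick up its credentials:
[client]
user=username
password=secret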

Not a turnkey solution, but here is some code to help with this task using triggers. For brevity, the following assumes no deletes or updates. Needs PostgreSQL >= 9.1.
1) Prepare two new tables, mytable_a and mytable_b, with the same columns as the source table to be replicated:
CREATE TABLE mytable_a AS TABLE mytable WITH NO DATA;
CREATE TABLE mytable_b AS TABLE mytable WITH NO DATA;
-- trigger function which copies data from mytable to mytable_a on each insert
CREATE OR REPLACE FUNCTION data_copy_a() RETURNS trigger AS $data_copy_a$
BEGIN
INSERT INTO mytable_a SELECT NEW.*;
RETURN NEW;
END;
$data_copy_a$ LANGUAGE plpgsql;
-- start trigger
CREATE TRIGGER data_copy_a AFTER INSERT ON mytable FOR EACH ROW EXECUTE PROCEDURE data_copy_a();
Then when you need to export:
-- move data from mytable_a -> mytable_b without stopping trigger
WITH d_rows AS (DELETE FROM mytable_a RETURNING * ) INSERT INTO mytable_b SELECT * FROM d_rows;
-- export data from mytable_b -> file
\copy mytable_b to '/tmp/data.csv' WITH DELIMITER ',' csv;
-- empty table
TRUNCATE mytable_b;
Then you can import data.csv into MySQL.
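A possible import command for the MySQL side (a sketch; it assumes the target table is also called mytable in a database called databasename, and that the file was produced by the \copy above, which writes no header row, so no IGNORE 1 LINES is needed):
mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n'" databasename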

Related

mysql select query not getting executed without limit

I copied the contents of a large data table from one table into another table that has 2 additional columns.
table1 (the original data) can be queried with
select * from cc2;
But the same data in the table with the 2 additional columns (which hold NULL values throughout) does not get returned normally. I have to add a LIMIT clause to make the query execute, like
select * from cc limit 0,68000;
The database is the same, and the table content is the same. The question is WHY this weird behavior happens. Also, to get this data into a foreach() loop I am having to run a for() loop instead, and it is hurting performance.
Any suggestions will be tried and tested asap.
Thanks in advance, geniuses.
Instead of using PHP to import a lot of data, try executing it directly from the command line.
First dump your table:
mysqldump -u yourusername -pyourpassword yourdatabase tableName > text_file.sql
Then change the table name at the top of that file (and make sure the extra columns have DEFAULT NULL; see the sketch below), and import it with
mysql -u yourusername -pyourpassword yourdatabase < text_file.sql
Note that there must be no space between -p and the password. Using a text file containing the queries is always preferable for large data sets, so you don't run into problems with PHP or the webserver.
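A possible concrete version of that, assuming (as in the question) the source table is cc2 and the target table with the two extra, nullable columns is cc:
# dump only the data, with explicit column lists, so it fits a table that has extra columns
mysqldump -u yourusername -pyourpassword --no-create-info --complete-insert yourdatabase cc2 > text_file.sql
# point the INSERT statements at the target table instead of the source
sed -i 's/`cc2`/`cc`/g' text_file.sql
mysql -u yourusername -pyourpassword yourdatabase < text_file.sql
This only works if the two extra columns on cc allow NULL or have a default value.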

How to export database with Adminer?

It's the first time I've used Adminer.
I want to export the database and I'm not sure how to set the parameters correctly.
The database is in production and I don't want to make any mistakes.
See screenshot:
What are the correct parameters for:
1) Database: Use, Drop + create, Create, Create + alter
2) Table: Drop + create, Create, Create + alter
3) Data: Truncate + insert, Insert, Insert + update
Per your comment on the original question:
I want to export the database and import it in phpMyAdmin in my local environment to test and modify my client's website.
You want to recreate the database and data in a new environment and you are exporting SQL. Therefore, you will want to create tables where none exist, or discard and overwrite data if it does exist.
To accomplish this, you want to select the following options:
Database: Drop + Create - this will cause DROP statements to appear in the exported SQL before CREATE statements. This means that any existing databases with the same name will be dropped and all tables discarded. This is what you want to do if you want a clean test environment that matches production.
Tables: Drop + Create - for the same reason as above
Data: Insert - this will insert all data from your production database into your test copy database.

On the fly anonymisation of a MySQL dump

I am using mysqldump to create DB dumps of the live application to be used by developers.
This data contains customer data. I want to anonymize this data, i.e. remove customer names / credit card data.
An option would be:
create copy of database (create dump and import dump)
fire SQL queries that anonymize the data
dump the new database
But this has too much overhead.
A better solution would be to do the anonymization during dump creation.
I guess I would end up parsing all the mysqldump output? Are there any smarter solutions?
You can try Myanon: https://myanon.io
Anonymization is done on the fly during dump:
mysqldump | myanon -f db.conf | gzip > anon.sql.gz
Why are you selecting from your tables if you want to randomize the data?
Do a mysqldump of the tables that are safe to dump (configuration tables, etc) with data, and a mysqldump of your sensitive tables with structure only.
Then, in your application, you can construct the INSERT statements for the sensitive tables based on your randomly created data.
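A sketch of the two dump calls (the table names are placeholders; pick your own split):
# structure + data for tables that contain nothing sensitive
mysqldump -u user -p mydb config settings lookup_values > safe_tables.sql
# structure only (--no-data) for the sensitive tables
mysqldump -u user -p --no-data mydb customers payments > sensitive_structure.sql
Your application (or a small script) then generates INSERT statements with random or fake values for customers and payments.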
I had to develop something similar a few days ago. I couldn't use INTO OUTFILE because the DB is on AWS RDS. I ended up with this approach:
Dump data in tabular text form from some table:
mysql -B -e 'SELECT `address`.`id`, "address1", "address2", "address3", "town", "00000000000" as `contact_number`, "example@example.com" as `email` FROM `address`' some_db > addresses.txt
And then to import it:
mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE 'addresses.txt' INTO TABLE \`address\` FIELDS TERMINATED BY '\t' ENCLOSED BY '\"' IGNORE 1 LINES" some_db
Only the mysql command-line client is required to do this.
While the export is pretty quick (a couple of seconds for ~30,000 rows), the import is a bit slower, but still fine. I had to join a few tables along the way and there were some foreign keys, so it will surely be faster if you don't need that. Disabling foreign key checks while importing will also speed things up.
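For example, the same import with foreign key checks switched off for the session (same placeholder names as above):
mysql --local-infile=1 -e "SET foreign_key_checks = 0; LOAD DATA LOCAL INFILE 'addresses.txt' INTO TABLE \`address\` FIELDS TERMINATED BY '\t' ENCLOSED BY '\"' IGNORE 1 LINES; SET foreign_key_checks = 1" some_db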
You could do a SELECT on each table (not a SELECT *), specifying the columns you want and omitting or blanking out those you don't, and then use phpMyAdmin's export option for each query.
You can also use the SELECT ... INTO OUTFILE syntax to create a dump with a column filter.
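For example (a hypothetical customers table with the columns shown; note that INTO OUTFILE writes the file on the database server, so it will not work on hosted setups such as RDS, as mentioned above):
mysql -u user -p -e "SELECT id, country, '' AS name, '' AS email FROM customers INTO OUTFILE '/tmp/customers_anon.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'" mydb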
I found two similar questions, but it looks like there is no easy solution for what you want. You will have to write a custom export yourself.
MySQL dump by query
MySQL: Dump a database from a SQL query
phpMyAdmin provides an export option to the SQL format based on SQL queries. It might be an option to extract this code from phpMyAdmin (which is probably well tested) and use it in your application.
Refer to the phpMyAdmin export plugin - exportData method for the code.

Reset the database after 3 hrs & make it behave as a new database through php script

How to reset the database after 3 hrs & make it behave as a new database through php script
Possibly the easiest way would be to have a cron job that executes every three hours and calls mysql with a 'clean' database script. The crontab entry would be something along the lines of:
0 */3 * * * mysql -u XXX -pXXX < clean_database.sql
The clean_database.sql file would need to use DROP TABLE IF EXISTS ... for each of the tables you want to reset. That said, you can simply use mysqldump against a 'known good' version of the database to create this file. (You'll need to add a 'use <database name>;' statement at the top.)
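A sketch of how you might produce that file from a known-good copy of the database (all names are placeholders):
# dump a known-good copy; --add-drop-table puts DROP TABLE IF EXISTS before each CREATE
mysqldump -u XXX -pXXX --add-drop-table known_good_db > clean_database.sql
# prepend the database to restore into, so cron can replay the file blindly
sed -i '1i use yourdatabase;' clean_database.sql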
The easiest way is to drop the database and recreate it using your create scripts. If you don't have create scripts you can get them by making a dump of your database.
To delete the data in each table without dropping the tables you can use the TRUNCATE TABLE tablename command on each table.
If you don't have permission to use TRUNCATE you can use DELETE FROM tablename without a WHERE clause.
Note that if you have foreign key constraints you may have to run the statements in a specific order to avoid violating these constraints.
To get a list of all tables you can use SHOW TABLES.
Steps to do: connect to the database server, select the database, run SHOW TABLES, read the result into an array, and then TRUNCATE each table in a loop:
$link = mysql_connect('localhost', 'user', 'password');
mysql_select_db('yourdatabase', $link);
$result = mysql_query("SHOW TABLES");
while ($row = mysql_fetch_row($result)) {
    // empties the table but keeps its structure
    mysql_query("TRUNCATE TABLE `" . $row[0] . "`");
}
I hope the principle is clear to you ;-)
mysql_query('DROP DATABASE yourdatabase');
mysql_query('CREATE DATABASE yourdatabase');
mysql_query('CREATE TABLE yourdatabase.sometable ...'); // etc.
This will drop the database and create it anew. You can then use the CREATE TABLE syntax to recreate the tables. Note that as this script has significant powers, you should consider creating a special MySQL user for it, one that's not used during normal operations.

Copying non existing rows from one database table to another database table?

I have one DB table in Database 1 and one DB table in Database 2, and the structure of both tables is exactly the same. Table 1 (DB 1) gets new rows added daily, and I need to copy those new rows from table 1 (DB 1) into table 1 (DB 2) so that the two tables stay identical. A cron job will trigger a PHP script at midnight to do this task. What is the best way to do this, and how, using PHP/MySQL?
You might care to have a look at replication (see http://dev.mysql.com/doc/refman/5.4/en/replication-configuration.html). That's the 'proper' way to do it; it isn't to be trifled with, though, and for small tables the above solutions are probably better (and certainly easier).
This might help you out; it's what I do on my database for a similar kind of thing:
$dropSQL = "DROP TABLE IF EXISTS `$targetTable`";
$createSQL = "CREATE TABLE `$targetTable` SELECT * FROM `$activeTable`";
$primaryKeySQL = "ALTER TABLE `$targetTable` ADD PRIMARY KEY(`id`)";
$autoIncSQL = "ALTER TABLE `$targetTable` CHANGE `id` `id` INT( 60 ) NOT NULL AUTO_INCREMENT";
mysql_query($dropSQL);
mysql_query($createSQL);
mysql_query($primaryKeySQL);
mysql_query($autoIncSQL);
Obviously you will have to modify the target and active table variables. Dropping the table loses the primary key, but it's easy enough to add back in.
I would recommend replication as has already been suggested. However, another option is to use mysqldump to grab the rows you need and send them to the other table.
mysqldump -uUSER -pPASSWORD -hHOST --compact -t --where="date = CURRENT_DATE" DB1 TABLE | mysql -uUSER -pPASSWORD -hHOST -D DB2
Replace USER, HOST, and PASSWORD with login info for your database. You can use different information for each part of the command if DB1 and DB2 have different access information. DB1 and DB2 are the names of your databases, and TABLE is the name of the table.
You can also modify the --where option to grab only the rows which need to be updated. Hopefully you have some query you can use for that. As mentioned previously, if the table has a primary key, you could grab the last key which DB2 has using a command something like
KEY=`echo "SELECT MAX(KEY_COLUMN) FROM TABLE;" | mysql -N -uUSER -pPASSWORD -hHOST -D DB2`
for a bash shell script (then use this value in the WHERE clause above); the -N option suppresses the column header so only the value is returned. Depending on how your primary key is generated, this may be a bad idea, since rows may be added into holes in the keyspace if any exist.
This solution will also work if rows are changed, as long as you have a query which can select those rows; just add the --replace option to the mysqldump command. In your situation, it would be best to add some kind of value, such as a last-updated date, which you can compare against.
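Putting it together, an incremental version might look like this (a sketch; it assumes an auto-increment primary key column id and that DB2 already contains at least one row, otherwise MAX(id) is NULL):
KEY=`echo "SELECT MAX(id) FROM TABLE;" | mysql -N -uUSER -pPASSWORD -hHOST -D DB2`
mysqldump -uUSER -pPASSWORD -hHOST --compact -t --replace --where="id > $KEY" DB1 TABLE | mysql -uUSER -pPASSWORD -hHOST -D DB2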
