MySQL large DB migration - PHP

Hi, I'm building an enterprise management system (PHP based) for a medium-sized company. I'm trying to migrate their existing customer records (about 9,000 records) into my DB. Our DB schemas are different.
Here are the steps I'm planning to take:
1.) Get the .csv file for each table and clean it up (get rid of unnecessary columns, remove the blank rows which seem to be littered throughout the tables)
2.) Import the tables into my database via phpMyAdmin
3.) Write a PHP script that loops through these old tables, processes the records, and inserts them into MY db tables (a rough sketch is below)
I was wondering whether this plan I outlined above makes sense, or if there is a better way to do it?
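For step 3, here is a rough sketch of what such a script might look like, using PDO; the table and column names (old_customers, customers, cust_name and so on) are placeholders, not the real schema:

<?php
// Sketch only: map rows from the staging table (imported from the CSV) into the
// new schema. All table and column names below are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=my_db;charset=utf8', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$insert = $pdo->prepare(
    'INSERT INTO customers (name, email, phone) VALUES (:name, :email, :phone)'
);

$pdo->beginTransaction();
try {
    // Stream the old records rather than loading all 9,000 into memory at once.
    foreach ($pdo->query('SELECT * FROM old_customers') as $row) {
        $insert->execute([
            ':name'  => trim($row['cust_name']),          // example clean-up/transformation
            ':email' => strtolower(trim($row['email'])),
            ':phone' => $row['phone_no'],
        ]);
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}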
Thanks

Data migration is possible with MySQL Workbench 6.0. I have migrated more than a million records this way, so 9,000 is no big deal.
Try
http://www.mysql.com/products/workbench/migrate/

Related

Import data from multiple CSV files into one master table

I have a MySQL database into which I want to import data from multiple CSV files. For this data I set up a single table into which I want to merge (join) the several files. Unfortunately my data is too big, so it takes a very long time until everything is stored in that table. Hence the question: what is the best way to deal with a huge amount of data?
I took the liberty of creating a temporary table for each CSV file and loading the data into it. Then I joined all the tables and wanted to insert the result of my query into the big table, and that is where I ran into the long waiting time. I would like to limit the solutions to the following languages: MySQL, PHP. So far I have used the DataGrip GUI and the SQL console for importing these files.
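For reference, here is a rough sketch of that temp-table-plus-join approach driven from PHP; the file names, table names and join key are placeholders, and it assumes local_infile is enabled on both client and server:

<?php
// Sketch only: bulk-load each CSV into its own staging table with LOAD DATA,
// then fill the master table with one INSERT ... SELECT join.
$pdo = new PDO('mysql:host=localhost;dbname=my_db;charset=utf8', 'user', 'pass', [
    PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
]);

$csvFiles = [
    'customers.csv' => 'staging_customers',
    'orders.csv'    => 'staging_orders',
];

foreach ($csvFiles as $file => $table) {
    // LOAD DATA is far faster than inserting row by row from PHP.
    $pdo->exec(sprintf(
        "LOAD DATA LOCAL INFILE %s INTO TABLE %s
         FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
         LINES TERMINATED BY '\\n' IGNORE 1 LINES",
        $pdo->quote($file),
        $table
    ));
}

// One set-based statement instead of looping over the rows in PHP.
$pdo->exec(
    'INSERT INTO master_table (id, name, order_total)
     SELECT c.id, c.name, o.total
     FROM staging_customers c
     JOIN staging_orders o ON o.customer_id = c.id'
);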
Use any data integration tool like Pentaho, then follow the steps below:
Pentaho has a CSV input step
You can join the multiple CSV files using a join step
Select all the columns from the merged output
Then push the result to MySQL using a table output (DB connector) step
There is a pretty neat library that does exactly this. It helps you migrate data from one source to another, and it does so pretty quickly.
https://github.com/DivineOmega/uxdm
You could use a shell script to loop through the files (this one assumes they're in the current directory):
#!/bin/bash
# Load every CSV in the current directory into my_table.
# LOCAL makes the client send the file, so a path relative to the current
# directory works; plain LOAD DATA INFILE would look for it on the server.
for f in *.csv
do
    mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE '$f' INTO TABLE my_table" -u username --password=your_password my_database
done
You can achieve this easily with Pentaho Data Integration (an ETL tool).
It provides a CSV input step in which you specify your CSV file; you then link it to a table output step that uses a JDBC or JNDI connection to your MySQL database.

Selective syncing of MySQL database with SQLite

I'm trying to develop an Android app that provides information on a chosen topic. All the information is stored in a MySQL database, with one table per topic. What I want to achieve is that when the user chooses a topic, the corresponding table is downloaded to SQLite so that it can be used offline. Also, any changes to that particular table in MySQL should be synced to the SQLite db automatically the next time the phone connects to the Internet.
I have understood how to achieve the connection using PHP and HTTP requests. What I want to know is the best logic for syncing the entries of a particular table in the MySQL database to the one in SQLite. I read about various sync services but I don't understand how to use them. All my tables have exactly the same schema, so is there an efficient way to achieve the sync?
I have a decent knowledge of SQL but I'm kinda new to Android.
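One common pattern (not the only one) is timestamp-based delta sync: give every MySQL table an updated_at column and have the app request only rows changed since its last successful sync. A minimal sketch of the server-side PHP endpoint, with placeholder table and column names, might look like this:

<?php
// Sketch only: return rows changed since the client's last sync as JSON.
$pdo = new PDO('mysql:host=localhost;dbname=topics_db;charset=utf8', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$table = isset($_GET['topic']) ? $_GET['topic'] : '';
$since = isset($_GET['since']) ? $_GET['since'] : '1970-01-01 00:00:00'; // last sync time sent by the app

// Whitelist the table name; never interpolate raw user input into SQL.
$allowed = ['topic_history', 'topic_science'];
if (!in_array($table, $allowed, true)) {
    http_response_code(400);
    exit;
}

$stmt = $pdo->prepare("SELECT * FROM $table WHERE updated_at > :since");
$stmt->execute([':since' => $since]);

// The app inserts/replaces these rows in its SQLite copy, then stores the new sync time.
header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));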

Optimize large MySQL database

I am developing an application. For the DB I am using MySQL. Some tables hold a huge amount of data and have become overloaded. At the moment I optimize each such table individually with MySQL's OPTIMIZE TABLE option.
But I want to know: is there any MySQL query with which I can optimize the full database from my code?
I think you need to read the following article on how to optimize your database and perform table maintenance:
http://dev.mysql.com/doc/refman/5.0/en/mysqlcheck.html
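If you want to do it from code rather than from the shell (the article covers mysqlcheck -o for the latter), a minimal sketch is to loop over SHOW TABLES and issue OPTIMIZE TABLE for each one; the connection details below are placeholders:

<?php
// Minimal sketch: run OPTIMIZE TABLE over every table in the schema from code.
// OPTIMIZE TABLE locks each table while it runs, so schedule this for a quiet period.
$pdo = new PDO('mysql:host=localhost;dbname=my_db', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$tables = $pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);

foreach ($tables as $table) {
    // OPTIMIZE TABLE returns status rows, so fetch them to complete the statement.
    $pdo->query("OPTIMIZE TABLE `$table`")->fetchAll();
}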

Importing large amounts of data using MySQL/PHP

We created an import script which imports about 120GB of data into a MySQL database. The data is spread over a few hundred directories (each one is a separate database). Each directory contains files with table structures and table data.
The issue is that it works on my local machine with a subset of the actual data, but when the import is run on the server (which takes a few days), not all the tables are created (even tables that were tested locally). The odd thing is that the script, when run on the server, does not show any errors during the creation of the tables.
Here is on a high level how the script works:
Find all directories that represent a database
Create all databases
Per database loop through the tables: create table, fill table
Added the code on gist: https://gist.github.com/3349872
Add more logging to see which steps succeeded, since you might be having problems with memory usage or execution times.
Why don't you create SQL files from the given CSV files and then just do a normal import in bash?
mysql -u root -ppassword db_something < db_user.sql
Ouch, the problem was in the code. Amazingly stupid mistake.
When testing the code on a subset of all the files, all the table structure and table content files were available. When a table could not be created, the function logged a message and then returned. On the real data this was the mistake: there are files with no data and no structure, so after creating a few tables, the creation of one table of a certain database went wrong, hit that return, and the remaining tables were never created.
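For anyone hitting the same thing, here is a simplified sketch of the fix; the real code is in the gist above, and the function and variable names here are made up:

<?php
// Sketch only: when a table's files are missing or empty, log it and move on to
// the next table instead of returning, which abandoned every remaining table.
function importTables(PDO $pdo, $dir, array $tableFiles)
{
    foreach ($tableFiles as $file) {
        $path = $dir . '/' . $file;

        if (!is_file($path) || filesize($path) === 0) {
            error_log("Skipping $path: no structure/data");
            continue;   // was a `return` before; that aborted all remaining tables
        }

        $pdo->exec(file_get_contents($path));
        error_log("Imported $path");
    }
}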

Dump selected data from one db to another in MySQL

Here's the situation:
I have a MySQL db on a remote server. I need data from 4 of its tables. On occasion, the schema of these tables is changed (new fields are added, but not removed). At the moment, the tables have > 300,000 records.
This data needs to be imported into the localhost MySQL instance. These same 4 tables exist (with the same names), but the fields needed are a subset of the fields in the remote db tables. The data in these local tables is considered read-only and is never written to. Everything needs to be run in a transaction so there is always some data in the local tables, even if it is a day old. The localhost tables are used by an active website, so this entire process needs to complete as quickly as possible to minimize downtime.
This process runs once per day.
The options as I see them:
Get a mysqldump of the structure/data of the remote tables and save to file. Drop the localhost tables, and run the dumped sql script. Then recreate the needed indexes on the 4 tables.
Truncate the localhost tables. Run SELECT queries on the remote db in PHP and retrieve only the fields needed instead of the entire row. Then loop through the results and create INSERT statements from this data (a rough sketch of this option follows my questions below).
My questions:
Performance wise, which is my best option?
Which one will complete the fastest?
Will either one put a heavier load on the server?
Would indexing the tables take the same amount of time in both options?
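For what it's worth, here is a rough sketch of option 2, with placeholder hosts, credentials and field names. Note that TRUNCATE causes an implicit commit in MySQL and cannot be rolled back, so the sketch uses DELETE to keep the whole refresh inside one transaction:

<?php
// Sketch only: refresh the local read-only copy from the remote db in one transaction.
$remote = new PDO('mysql:host=remote.example.com;dbname=remote_db', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$local = new PDO('mysql:host=localhost;dbname=local_db', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$local->beginTransaction();
try {
    $local->exec('DELETE FROM products');   // stays rollback-able, unlike TRUNCATE

    $insert = $local->prepare(
        'INSERT INTO products (id, name, price) VALUES (:id, :name, :price)'
    );

    // Pull only the columns the local schema actually needs.
    foreach ($remote->query('SELECT id, name, price FROM products') as $row) {
        $insert->execute([
            ':id'    => $row['id'],
            ':name'  => $row['name'],
            ':price' => $row['price'],
        ]);
    }

    // Readers keep seeing the old rows until the commit, so the site is never
    // left with an empty table.
    $local->commit();
} catch (Exception $e) {
    $local->rollBack();
    throw $e;
}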
If there is no good reason for having the local DB be a subset of the remote, make the structure the same and enable database replication on the needed tables. Replication works by the master tracking all changes made and managing each slave DB's pointer into the change log; each slave effectively says "give me all changes since my last request". For a sizeable database, this is far more efficient than either alternative you have described, and it comes at only modest cost.
As for schema changes, I think the ALTER statements are logged by the master, so the slave(s) can replicate those as well. The mechanism definitely replicates DROP TABLE ... IF EXISTS and CREATE TABLE ... SELECT, so ALTER logically should follow, but I have not tried it.
Here it is: confirmation that ALTER is properly replicated.
