How to migrate data from one table to another in MySQL - php

I have two identical tables (same columns and primary key) in two different databases. I want to add the rows from the second table that do not already exist in the first table (according to the primary key).
What is the best method to do that?
I can export the second table's data as a CSV file, a PHP array, or an SQL file.
Thanks

There are lots of ways to do this.
The simplest is probably this one:
INSERT IGNORE
INTO table_1
SELECT *
FROM table_2
;
which allows those rows in table_1 to supersede those in table_2 that
have a matching primary key, while still inserting rows with new
primary keys.
Alternatively, you can use a subquery to find the rows that are not shared by both tables and insert only those. If you have a lot of records, you may want to consider using a temporary table to speed up the process.
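A minimal sketch of that subquery variant, assuming the shared primary key column is named id and the two databases are db1 and db2 (all of these names are placeholders for your actual schema):
INSERT INTO db1.table_1
SELECT t2.*
FROM db2.table_2 AS t2
WHERE NOT EXISTS (
    SELECT 1
    FROM db1.table_1 AS t1
    WHERE t1.id = t2.id
);
Unlike INSERT IGNORE, this variant only touches rows whose key is genuinely absent, so it will not silently swallow other errors (such as rows that violate a different unique index).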

Related

MySQL single auto increment for two tables without duplication: a better solution

I have two tables (InnoDB) in a MySQL database, and both share a similar column: the account_no column. I want to keep both columns as integers and still keep both free from collisions when inserting data.
There are 13 instances of this same question on Stack Overflow and I have read them all, but in all of them the recommended solutions were:
1) using GUIDs: this is good, but I am trying to keep the numbers short and easy for the users to remember.
2) using a sequence: I do not fully understand how to do this, but I am thinking it involves making a third table that has an auto_increment and getting the values for the two major tables from it.
3) using IDENTITY(1, 10) [1, 11, 21, ...] for the first table and IDENTITY(2, 10) [2, 12, 22, ...] for the second: this works fine but in the long term might not be such a good idea.
4) using the PHP function uniqid('', true): not going to work; it is not completely collision-free, and the columns in my case have to be integers.
5) using the PHP function mt_rand(0, 10): might work, but I would still have to check for collisions before inserting data.
If there is no smarter way to achieve my goal, I will stick with the adjusted IDENTITY(1, 10) and IDENTITY(2, 10).
I know this question seems a bit redundant given all the options available, but the most recent answer on a similar topic was from 2012, and there may have been improvements in MySQL since then that I do not know about yet.
Also, I am using PHP to insert the data. Thanks.
Basically, you are saying that you have two flavors of an entity. My first recommendation is to try to put them in a single table. There are three methods:
If most columns overlap, just put all the columns in a single table (accounts).
If one entity has more columns, put the common columns in one table and have a second table for the wider entity.
If only some columns overlap, put those in a single table and have a separate table for each subentity.
Let me assume the third situation for the moment.
You want to define something like:
create table accounts (
AccountId int auto_increment primary key,
. . . -- you can still have common columns here
);
create table subaccount_1 (
AccountId int primary key,
constraint foreign key (AccountId) references accounts(AccountId),
. . .
);
create table subaccount_2 (
AccountId int primary key,
constraint foreign key (AccountId) references accounts(AccountId),
. . .
);
Then, you want an insert trigger on each sub-account table. This trigger does the following on insert:
inserts a row into accounts
captures the new accountId
uses that for the insert into the subaccount table
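A minimal sketch of such a trigger for subaccount_1, written as a BEFORE INSERT trigger so the generated id can be assigned to the row being inserted (it assumes every other column of accounts is nullable or has a default):
DELIMITER $$
CREATE TRIGGER subaccount_1_before_insert
BEFORE INSERT ON subaccount_1
FOR EACH ROW
BEGIN
    -- reserve a new id by inserting a stub row into accounts
    INSERT INTO accounts () VALUES ();
    -- use the freshly generated id for the subaccount row
    SET NEW.AccountId = LAST_INSERT_ID();
END$$
DELIMITER ;
The same trigger, with a different name, would go on subaccount_2.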
You probably also want something on accounts that prevents inserts into that table, except through the subaccount tables.
A big thank you to Gordon Linoff for his answer. I want to fully explain how I solved the problem using his answer, to help others understand better.
Original tables:
Table A (account_no, first_name, last_name)
Table B (account_no, likes, dislikes)
Problem: account_no needs to auto-increment across both tables, be unique across both tables, and remain a medium positive integer (see the original question).
I had to make an extra table, Table_C, which holds all the inserted data at first, auto-increments account_no, and checks for collisions through the primary key:
CREATE TABLE Table_C (
account_no int NOT NULL AUTO_INCREMENT,
first_name varchar(50),
last_name varchar(50),
likes varchar(50),
dislikes varchar(50),
which_table varchar(1),
PRIMARY KEY (account_no)
);
Then I changed my MySQL INSERT statement to insert into Table_C, adding an extra column, which_table, that says which table the inserted data belongs to. On insert, Table_C performs the auto-increment and the collision check, then reinserts the data into the desired table through a trigger, like so:
DELIMITER $$
CREATE TRIGGER `sort_tables` AFTER INSERT ON `Table_C` FOR EACH ROW
BEGIN
IF new.which_table = 'A' THEN
INSERT INTO Table_A (account_no, first_name, last_name)
VALUES (new.account_no, new.first_name, new.last_name);
ELSEIF new.which_table = 'B' THEN
INSERT INTO Table_B (account_no, likes, dislikes)
VALUES (new.account_no, new.likes, new.dislikes);
END IF;
END$$
DELIMITER ;
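For completeness, an example of the rerouted insert (the values here are hypothetical):
-- account_no is omitted so Table_C's AUTO_INCREMENT assigns it
INSERT INTO Table_C (first_name, last_name, which_table)
VALUES ('Jane', 'Doe', 'A');
The trigger then copies the row, with its newly generated account_no, into Table_A.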

How to speed up my mysql JOIN query?

I am developing a social chat application. My app has 5,000 users. I want to fetch the usernames of users who were online within the last hour.
I have two tables, users and messages. My database is very heavy: the users table has 4,983 records and the messages table has approximately 15 million records. I want to show 20 users who sent a message within the last hour.
My Query -
SELECT a.username, a.id
FROM users a
JOIN messages b ON a.id = b.user_id
WHERE a.id != ".$getUser['id']." AND
      a.is_active = 1 AND
      a.is_online = 1 AND
      b.created > DATE_SUB(NOW(), INTERVAL 1 HOUR)
GROUP BY b.user_id
ORDER BY b.id DESC LIMIT 20
The above query works fine, but it has become very slow, and sometimes the page hangs. I want to fetch the records faster.
Note: $getUser['id'] is the logged-in user's id.
Any idea?
You can use indexes.
A database index is a data structure that improves the speed of
operations on a table. Indexes can be created using one or more
columns, providing the basis both for rapid random lookups and
for efficient ordering of access to records.
When creating an index, you should consider which columns will be
used in your SQL queries, and create one or more indexes on those
columns.
Practically speaking, indexes are also a type of table: they keep
the primary key or index field and a pointer to each record in the
actual table.
Users cannot see the indexes; they are just used to speed up
queries and will be used by the database search engine to locate
records very fast.
INSERT and UPDATE statements take more time on tables that have
indexes, whereas SELECT statements become faster on those tables.
The reason is that while doing an insert or update, the database
needs to insert or update the index values as well.
Simple and Unique Index:
You can create a unique index on a table. A unique index means that two rows cannot have the same index value. Here is the syntax to create an index on a table:
CREATE UNIQUE INDEX index_name
ON table_name ( column1, column2,...);
You can use one or more columns to create an index. For example, we can create an index on tutorials_tbl using tutorial_author.
CREATE UNIQUE INDEX AUTHOR_INDEX
ON tutorials_tbl (tutorial_author)
You can create a simple index on a table. Just omit the UNIQUE keyword from the query to create a simple index. A simple index allows duplicate values in a table.
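For example, the simple-index form of the statement above would be:
CREATE INDEX AUTHOR_INDEX
ON tutorials_tbl (tutorial_author);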
If you want to index the values in a column in descending order, you can add the reserved word DESC after the column name.
mysql> CREATE UNIQUE INDEX AUTHOR_INDEX
ON tutorials_tbl (tutorial_author DESC)
ALTER command to add and drop INDEX:
There are four types of statements for adding indexes to a table:
ALTER TABLE tbl_name ADD PRIMARY KEY (column_list):
This statement adds a PRIMARY KEY, which means that indexed values must be unique and cannot be NULL.
ALTER TABLE tbl_name ADD UNIQUE index_name (column_list):
This statement creates an index for which values must be unique (with the exception of NULL values, which may appear multiple times).
ALTER TABLE tbl_name ADD INDEX index_name (column_list):
This adds an ordinary index in which any value may appear more than once.
ALTER TABLE tbl_name ADD FULLTEXT index_name (column_list):
This creates a special FULLTEXT index that is used for text-searching purposes.
Here is the example to add index in an existing table.
mysql> ALTER TABLE testalter_tbl ADD INDEX (c);
You can drop any INDEX by using the DROP clause along with the ALTER command. Try out the following example to drop the above-created index (note that an index is dropped by name, without parentheses):
mysql> ALTER TABLE testalter_tbl DROP INDEX c;
ALTER Command to add and drop PRIMARY KEY:
You can add a primary key in the same way. But make sure the primary key is on columns that are NOT NULL.
Here is an example of adding a primary key to an existing table. This will make the column NOT NULL first and then add it as a primary key:
mysql> ALTER TABLE testalter_tbl MODIFY i INT NOT NULL;
mysql> ALTER TABLE testalter_tbl ADD PRIMARY KEY (i);
You can use ALTER command to drop a primary key as follows:
mysql> ALTER TABLE testalter_tbl DROP PRIMARY KEY;
To drop an index that is not a PRIMARY KEY, you must specify the index name.
Displaying INDEX Information:
You can use the SHOW INDEX command to list all the indexes associated with a table. Vertical-format output (specified by \G) is often useful with this statement, to avoid long line wraparound:
Try out the following example:
mysql> SHOW INDEX FROM table_name\G
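Applied to the query in this question, a minimal sketch of indexes that could help (assuming the column names shown in the query; the index names are hypothetical):
ALTER TABLE messages ADD INDEX idx_user_created (user_id, created);
ALTER TABLE users ADD INDEX idx_active_online (is_active, is_online);
The composite (user_id, created) index lets MySQL satisfy both the join and the one-hour range filter from the index, which matters most on the 15-million-row messages table.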
You can also optimize your query by removing MySQL functions like DATE_SUB: compute the value in PHP instead and pass it into the query (see the PHP equivalent of DATE_SUB).

MySQL having 1 primary key with auto increment for 2 tables

I have 2 tables, 'table1' and 'table2'. Both have a column named 'id' (primary key) and both have auto increment. The problem is how to make the primary key sequence shared between both tables.
Example:
I enter id '100' in 'table1', and after that, if I enter a new record in 'table2', I would get '101' in 'table2'.
I thought a 'foreign key' would do the job, but it did exactly the opposite of what I need.
Strictly speaking, there's no way to do what you describe without resorting to extra queries or extra locking.
You can simulate it by creating a third table, into which you insert rows to generate new ids, then using LAST_INSERT_ID() to insert into either table1 or table2. See the example in How to have Unique IDs across two or more tables in MySQL?
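A minimal sketch of that simulation (the sequence table name and the 'name' column are hypothetical):
CREATE TABLE id_sequence (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
);
-- reserve the next shared id
INSERT INTO id_sequence () VALUES ();
-- then use it in whichever table needs a row
INSERT INTO table1 (id, name) VALUES (LAST_INSERT_ID(), 'example');
LAST_INSERT_ID() is per-connection, so concurrent clients will not see each other's values.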
Also see some discussion of this problem and solutions here at the popular MySQL Performance Blog: Sharing an auto_increment value across multiple MySQL tables
But I agree with the comment from @Barmar: this might be an indication of a bad database design. If you have a requirement to make the auto-increment PK unique across multiple tables, this may mean that the two tables should really be one table.

How to import "a lot" of data to MySQL with PHP and foreign keys?

I have these tables:
create table person (
person_id int unsigned auto_increment,
person_key varchar(40) not null,
primary key (person_id),
constraint uc_person_key unique (person_key)
)
-- person_key is a varchar(40) that identifies an individual, unique
-- person in the initial data that is imported from a CSV file to this table
create table marathon (
marathon_id int unsigned auto_increment,
marathon_name varchar(60) not null,
primary key (marathon_id)
)
create table person_marathon (
person_marathon_id int unsigned auto_increment,
person_id int unsigned,
marathon_id int unsigned,
primary key (person_marathon_id),
foreign key (person_id) references person (person_id),
foreign key (marathon_id) references marathon (marathon_id),
constraint uc_marathon_person unique (person_id, marathon_id)
)
The person table is populated from a CSV that contains about 130,000 rows. The CSV contains a unique varchar(40) for each person and some other person data. There is no ID in the CSV.
For each marathon, I get a CSV that contains a list of 1k - 30k persons. The CSV contains essentially just a list of person_key values that show which people participated in that specific marathon.
What is the best way to import the data into the person_marathon table to maintain the FK relationship?
These are the ideas I can currently think of:
Pull the person_id + person_key information out of MySQL and merge the person_marathon data in PHP to get the person_id in there before inserting into the person_marathon table
Use a temporary table for insert... but this is for work and I have been asked to never use temporary tables in this specific database
Don't use a person_id at all and just use the person_key field but then I would have to join on a varchar(40) and that's usually not a good thing
Or, for the insert, make it look something like this:
insert into person_marathon
select p.person_id, m.marathon_id
from ( select 'person_a' as p_name, 'marathon_a' as m_name union
select 'person_b' as p_name, 'marathon_a' as m_name )
as imported_marathon_person_list
join person p
on p.person_key = imported_marathon_person_list.p_name
join marathon m
on m.marathon_name = imported_marathon_person_list.m_name
The problem with that insert is that to build it in PHP, the imported_marathon_person_list would be huge because it could easily be 30,000 select union items. I'm not sure how else to do it, though.
I've dealt with similar data conversion problems, though at a smaller scale. If I'm understanding your problem correctly (which I'm not sure of), it sounds like the detail that makes your situation challenging is this: you're trying to do two things in the same step:
import a large number of rows from CSV into mysql, and
do a transformation such that the person-marathon associations work through person_id and marathon_id, rather than the (unwieldy and undesirable) varchar personkey column.
In a nutshell, I would do everything possible to avoid doing both of these things in the same step. Break it into those two steps - import all the data first, in tolerable form, and optimize it later. Mysql is a good environment to do this sort of transformation, because as you import the data into the persons and marathons tables, the IDs are set up for you.
Step 1: Importing the data
I find data conversions easier to perform in a mysql environment than outside of it. So get the data into mysql, in a form that preserves the person-marathon associations even if it isn't optimal, and worry about changing the association approach afterwards.
You mention temp tables, but I don't think you need any. Set up a temporary column "personkey", on the persons_marathons table. When you import all the associations, you'll leave person_id blank for now, and just import personkey. Importantly, ensure that personkey is an indexed column both on the associations table and on the persons table. Then you can go through later and fill in the correct person_id for each personkey, without worrying about mysql being inefficient.
I'm not clear on the nature of the marathons table data. Do you have thousands of marathons to enter? If so, I don't envy you the work of handling 1 spreadsheet per marathon. But if it's fewer, then you can perhaps set up the marathons table by hand. Let mysql generate marathon IDs for you. Then as you import the person_marathon CSV for each marathon, be sure to specify that marathon ID in each association relevant to that marathon.
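A hedged sketch of that per-marathon import, assuming one CSV of person_key values per marathon and a temporary personkey column on persons_marathons (the file name and marathon id are placeholders):
LOAD DATA LOCAL INFILE 'boston_2014.csv'
INTO TABLE persons_marathons
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(personkey)
SET marathon_id = 42;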
Once you're done importing the data, you have three tables:
* persons - you have the ugly personkey, as well as a newly generated person_id, plus any other fields
* marathons - you should have a marathon_id at this point, right? either newly generated, or a number you've carried over from some older system.
* persons_marathons - this table should have marathon_id filled in & pointing to the correct row in the marathons table, right? You also have personkey (ugly but present) and person_id (which is still null).
Step 2: Use personkey to fill in person_id for each row in the association table
Then you either use straight Mysql, or write a simple PHP script, to fill in person_id for each row in the persons_marathons table. If I'm having trouble getting mysql to do this directly, I'll often write a php script to deal with a single row at a time. The steps in this would be simple:
look up any 1 row where person_id is null but personkey is not null
look up that personkey's person_id
write that person_id in the associations table for that row
You can tell PHP to repeat this 100 times and then end the script, or 1,000 times, if you keep getting timeout problems or anything like that.
This transformation involves a huge number of lookups, but each lookup only needs to be for a single row. That's appealing because at no point do you need to ask mysql (or PHP) to "hold the whole dataset in its head".
At this point, your associations table should have person_id filled in for every row. It's now safe to delete the personkey column, and voila, you have your efficient foreign keys.
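If MySQL will do it directly, the same fill-in can also be a single set-based statement instead of a row-at-a-time loop; a sketch, assuming the table and column names used above:
UPDATE persons_marathons pm
JOIN persons p ON p.personkey = pm.personkey
SET pm.person_id = p.person_id
WHERE pm.person_id IS NULL;
With personkey indexed on both tables, this avoids the per-row round trips entirely.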

How to remove duplicate entries from MySql database table

I had a heavy SQL dump of a table. I used the bigdump library to import it into a MySQL database on my server.
Although it worked fine, I now have duplicate entries in that table.
The same table on my local server has 8 × 10⁵ records, but on the server it has 15 × 10⁵ records.
Can you suggest me a query to delete duplicate entries from this table?
Here is my table structure (shown as a screenshot). The table name is techdata_products.
P.S. This table does not have any primary key.
SQL is not my strong point, but I think you can export the result of this query:
SELECT DISTINCT * FROM table;
And then, create a new table and import your results.
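A minimal sketch of that approach done entirely inside MySQL (the _dedup and _old names are hypothetical; note that without a primary key, rows that differ in any column are kept):
CREATE TABLE techdata_products_dedup AS
SELECT DISTINCT * FROM techdata_products;
RENAME TABLE techdata_products TO techdata_products_old,
             techdata_products_dedup TO techdata_products;
Once you have verified the new table, you can drop techdata_products_old and add a primary key to prevent this from happening again.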
First off, why do you have no primary key? You could simply have made that auto-incrementing id field a primary key to prevent duplicates. My suggestion would be to create a new table, do a
SELECT DISTINCT * FROM table
and put the results into a new table that has a primary key.
