I have 2 tables, 'table1' and 'table2'. Both have a column named 'id' (the primary key) and both use auto-increment. The problem is: how do I make the primary key unique across both tables?
Example:
I enter id '100' in 'table1'; after that, if I enter a new record in 'table2', it should get id '101'.
I thought a foreign key would do the job, but it does exactly the opposite of what I need.
Strictly speaking, there's no way to do what you describe without resorting to extra queries or extra locking.
You can simulate it by creating a third table, inserting into it to generate new IDs, and then using LAST_INSERT_ID() to insert into either table1 or table2. See the example in How to have Unique IDs across two or more tables in MySQL?
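A minimal sketch of that approach (the sequence table name and the extra column in table1 are illustrative, not from the question):

CREATE TABLE id_sequence (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
);

-- claim a new id, then use it in whichever table needs the row
INSERT INTO id_sequence () VALUES ();
INSERT INTO table1 (id, name) VALUES (LAST_INSERT_ID(), 'example');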
Also see some discussion of this problem and solutions here at the popular MySQL Performance Blog: Sharing an auto_increment value across multiple MySQL tables
But I agree with the comment from @Barmar: this may be an indication of a bad database design. If you have a requirement that the auto-increment PK be unique across multiple tables, it may mean that the two tables should really be one table.
Related
I have two tables (InnoDB) in a MySQL database that share a similar column, account_no. I want to keep both columns as integers and still keep both free from collisions when inserting data.
There are 13 instances of this same question on Stack Overflow; I have read them all. In all of them, the recommended solutions were:
1) Using a GUID: this is good, but I am trying to keep the numbers short and easy for users to remember.
2) Using a sequence: I do not fully understand how to do this, but I think it involves making a third table that has an auto_increment and getting the values for the two main tables from it.
3) Using IDENTITY(1, 10) [1, 11, 21...] for the first table and IDENTITY(2, 10) [2, 12, 22...] for the second: this works fine, but in the long term it might not be such a good idea.
4) Using the PHP function uniqid('', true): not going to work; it is not completely collision-free, and the columns in my case have to be integers.
5) Using the PHP function mt_rand(0, 10): might work, but I would still have to check for collisions before inserting data.
If there is no smarter way to achieve my goal, I will stick with the adjusted IDENTITY(1, 10) and (2, 10).
I know this question seems a bit redundant given all the options available, but the most recent answer on a similar topic was from 2012, and there may have been improvements in MySQL since then that I do not know about yet.
Also, I am using PHP to insert the data. Thanks.
Basically, you are saying that you have two flavors of an entity. My first recommendation is to try to put them in a single table. There are three methods:
If most columns overlap, just put all the columns in a single table (accounts).
If one entity has more columns, put the common columns in one table and have a second table for the wider entity.
If only some columns overlap, put those in a single table and have a separate table for each subentity.
Let me assume the third situation for the moment.
You want to define something like:
create table accounts (
    AccountId int auto_increment primary key,
    . . .  -- you can still have common columns here
);

create table subaccount_1 (
    AccountId int primary key,
    constraint foreign key (AccountId) references accounts(AccountId),
    . . .
);

create table subaccount_2 (
    AccountId int primary key,
    constraint foreign key (AccountId) references accounts(AccountId),
    . . .
);
Then, you want an insert trigger on each sub-account table. This trigger does the following on insert:
inserts a row into accounts
captures the new accountId
uses that for the insert into the subaccount table
You probably also want something on accounts that prevents inserts into that table, except through the subaccount tables.
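A sketch of what such a BEFORE INSERT trigger might look like for subaccount_1 (my illustration of the steps above, not code from the original answer):

DELIMITER //
CREATE TRIGGER subaccount_1_before_insert
BEFORE INSERT ON subaccount_1
FOR EACH ROW
BEGIN
    -- generate the shared id by inserting into the parent table first
    INSERT INTO accounts () VALUES ();
    -- use the captured id for the subaccount row being inserted
    SET NEW.AccountId = LAST_INSERT_ID();
END//
DELIMITER ;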
A big thank you to Gordon Linoff for his answer. I want to fully explain how I solved the problem using his answer, to help others understand better.
Original tables:
Table_A (account_no, first_name, last_name)
Table_B (account_no, likes, dislikes)
Problem: account_no needs to auto-increment across both tables, be unique across both tables, and remain a medium positive integer (see original question).
I had to make an extra table, Table_C, which holds all the inserted data at first, auto-increments it, and checks for collisions through the use of the primary key:
CREATE TABLE Table_C (
    account_no int NOT NULL AUTO_INCREMENT,
    first_name varchar(50),
    last_name varchar(50),
    likes varchar(50),
    dislikes varchar(50),
    which_table varchar(1),
    PRIMARY KEY (account_no)
);
Then I changed the MySQL INSERT statement to insert into Table_C, with an extra column, which_table, to say which table the inserted data belongs to. On insert, Table_C performs the auto-increment and collision check, then re-inserts the data into the desired table through a trigger, like so:
DELIMITER //
CREATE TRIGGER `sort_tables` AFTER INSERT ON `Table_C` FOR EACH ROW
BEGIN
    IF new.which_table = 'A' THEN
        INSERT INTO Table_A (account_no, first_name, last_name)
        VALUES (new.account_no, new.first_name, new.last_name);
    ELSEIF new.which_table = 'B' THEN
        INSERT INTO Table_B (account_no, likes, dislikes)
        VALUES (new.account_no, new.likes, new.dislikes);
    END IF;
END//
DELIMITER ;
I have two identical tables (same columns and primary key) in two different databases. I want to add the second table's data to the first table, but only the rows that do not already exist in the first table (according to the primary key).
What is the best method to do that?
I can export the second table's data as a CSV, a PHP array, or an SQL file.
Thanks
There are lots of ways to do this.
The simplest is probably this one:
INSERT IGNORE INTO table_1
SELECT * FROM table_2;
which allows those rows in table_1 to supersede those in table_2 that have a matching primary key, while still inserting rows with new primary keys.
Alternatively, you can use a subquery to find out the rows that are not shared by both tables and insert them. If you've got a lot of records, you may want to consider using a temporary table to speed up the process.
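A sketch of the subquery variant, assuming the shared primary key column is named id:

INSERT INTO table_1
SELECT t2.*
FROM table_2 AS t2
LEFT JOIN table_1 AS t1 ON t1.id = t2.id
WHERE t1.id IS NULL;  -- only rows whose key is absent from table_1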
I have these tables:
create table person (
    person_id int unsigned auto_increment,
    person_key varchar(40) not null,
    primary key (person_id),
    constraint uc_person_key unique (person_key)
)
-- person_key is a varchar(40) that identifies an individual, unique
-- person in the initial data that is imported from a CSV file to this table
create table marathon (
    marathon_id int unsigned auto_increment,
    marathon_name varchar(60) not null,
    primary key (marathon_id)
)
create table person_marathon (
    person_marathon_id int unsigned auto_increment,
    person_id int unsigned,
    marathon_id int unsigned,
    primary key (person_marathon_id),
    foreign key (person_id) references person (person_id),
    foreign key (marathon_id) references marathon (marathon_id),
    constraint uc_marathon_person unique (person_id, marathon_id)
)
Person table is populated by a CSV that contains about 130,000 rows. This CSV contains a unique varchar(40) for each person and some other person data. There is no ID in the CSV.
For each marathon, I get a CSV that contains a list of 1k - 30k persons. The CSV contains essentially just a list of person_key values that show which people participated in that specific marathon.
What is the best way to import the data into the person_marathon table to maintain the FK relationship?
These are the ideas I can currently think of:
Pull the person_id + person_key information out of MySQL and merge the person_marathon data in PHP to get the person_id in there before inserting into the person_marathon table
Use a temporary table for insert... but this is for work and I have been asked to never use temporary tables in this specific database
Don't use a person_id at all and just use the person_key field but then I would have to join on a varchar(40) and that's usually not a good thing
Or, for the insert, make it look something like this:
insert into person_marathon (person_id, marathon_id)
select p.person_id, m.marathon_id
from ( select 'person_a' as p_key, 'marathon_a' as m_name union
       select 'person_b' as p_key, 'marathon_a' as m_name )
     as imported_marathon_person_list
join person p
    on p.person_key = imported_marathon_person_list.p_key
join marathon m
    on m.marathon_name = imported_marathon_person_list.m_name
The problem with that insert is that, to build it in PHP, the imported_marathon_person_list could easily grow to 30,000 unioned selects. I'm not sure how else to do it, though.
I've dealt with similar data conversion problems, though at a smaller scale. If I'm understanding your problem correctly (which I'm not sure of), it sounds like the detail that makes your situation challenging is this: you're trying to do two things in the same step:
import a large number of rows from CSV into mysql, and
do a transformation such that the person-marathon associations work through person_id and marathon_id, rather than the (unwieldy and undesirable) varchar personkey column.
In a nutshell, I would do everything possible to avoid doing both of these things in the same step. Break it into those two steps - import all the data first, in tolerable form, and optimize it later. Mysql is a good environment to do this sort of transformation, because as you import the data into the persons and marathons tables, the IDs are set up for you.
Step 1: Importing the data
I find data conversions easier to perform in a mysql environment than outside of it. So get the data into mysql, in a form that preserves the person-marathon associations even if it isn't optimal, and worry about changing the association approach afterwards.
You mention temp tables, but I don't think you need any. Set up a temporary column, personkey, on the persons_marathons table. When you import all the associations, leave person_id blank for now and just import personkey. Importantly, ensure that personkey is an indexed column both on the associations table and on the persons table. Then you can go through later and fill in the correct person_id for each personkey, without worrying about MySQL being inefficient.
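For example, something like this (a sketch; the column size is assumed to match person_key in the persons table):

ALTER TABLE persons_marathons
    ADD COLUMN personkey VARCHAR(40),
    ADD INDEX idx_personkey (personkey);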
I'm not clear on the nature of the marathons table data. Do you have thousands of marathons to enter? If so, I don't envy you the work of handling 1 spreadsheet per marathon. But if it's fewer, then you can perhaps set up the marathons table by hand. Let mysql generate marathon IDs for you. Then as you import the person_marathon CSV for each marathon, be sure to specify that marathon ID in each association relevant to that marathon.
Once you're done importing the data, you have three tables:
* persons - you have the ugly personkey, as well as a newly generated person_id, plus any other fields
* marathons - you should have a marathon_id at this point, right? either newly generated, or a number you've carried over from some older system.
* persons_marathons - this table should have marathon_id filled in & pointing to the correct row in the marathons table, right? You also have personkey (ugly but present) and person_id (which is still null).
Step 2: Use personkey to fill in person_id for each row in the association table
Then you either use straight MySQL (a single-statement sketch follows below), or write a simple PHP script, to fill in person_id for each row in the persons_marathons table. If I'm having trouble getting MySQL to do this directly, I'll often write a PHP script to deal with a single row at a time. The steps would be simple:
look up any 1 row where person_id is null but personkey is not null
look up that personkey's person_id
write that person_id in the associations table for that row
You can tell PHP to repeat this 100 times and then end the script, or 1,000 times, if you keep getting timeout problems or anything like that.
This transformation involves a huge number of lookups, but each lookup only needs to be for a single row. That's appealing because at no point do you need to ask mysql (or PHP) to "hold the whole dataset in its head".
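For reference, the straight-MySQL version mentioned above could be a single statement; a sketch, assuming the table and column names used in this answer:

UPDATE persons_marathons pm
JOIN persons p ON p.personkey = pm.personkey
SET pm.person_id = p.person_id
WHERE pm.person_id IS NULL;  -- only fill in rows not yet converted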
At this point, your associations table should have person_id filled in for every row. It's now safe to delete the personkey column, and voila, you have your efficient foreign keys.
In MySQL, is it possible to have a column in two different tables that auto-increments across both? Example: table1 has a column 'secondaryid' and table2 also has a column 'secondaryid'. Is it possible for table1.secondaryid and table2.secondaryid to draw from the same pool of values? For example, table1.secondaryid could hold the values 1, 2, 4, 6, 7, 8, etc., and table2.secondaryid could hold the values 3, 5, 9, 10. The reason for this is twofold: 1) the two tables will be referenced in a separate table of 'likes' (similar to users liking a page on Facebook), and 2) the data in table2 is a subset of table1 via a primary key, so the information housed in table2 is dependent on table1; they are the topics of different categories (categories being table1 and topics being table2). Is it possible to do something like what is described above, or is there some other structural workaround that I'm not aware of?
It seems you want to differentiate categories and topics in two separate tables, but have the ids of both of them be referenced in another table likes to facilitate users liking either a category or a topic.
What you can do is create a super-entity table with subtypes categories and topics. The auto-incremented key would be generated in the super-entity table and inserted into only one of the two subtype tables (based on whether it's a category or a topic).
The subtype tables reference this super-entity via the auto-incremented field in a 1:1 relationship.
This way, you can simply link the super-entity table to the likes table just based on one column (which can represent either a category or a topic), and no id in the subtype tables will be present in both.
Here is a simplified example of how you can model this out:
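A rough DDL sketch of one way this model could look (table and column names here are illustrative, not from the original answer):

CREATE TABLE superentity (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    title VARCHAR(100),
    url VARCHAR(255),
    PRIMARY KEY (id)
);

CREATE TABLE categories (
    id INT UNSIGNED NOT NULL,  -- same value as superentity.id
    PRIMARY KEY (id),
    FOREIGN KEY (id) REFERENCES superentity (id)
);

CREATE TABLE topics (
    id INT UNSIGNED NOT NULL,            -- same value as superentity.id
    category_id INT UNSIGNED NOT NULL,   -- a topic belongs to a category
    PRIMARY KEY (id),
    FOREIGN KEY (id) REFERENCES superentity (id),
    FOREIGN KEY (category_id) REFERENCES categories (id)
);

CREATE TABLE likes (
    user_id INT UNSIGNED NOT NULL,
    entity_id INT UNSIGNED NOT NULL,  -- can point at a category or a topic
    PRIMARY KEY (user_id, entity_id),
    FOREIGN KEY (entity_id) REFERENCES superentity (id)
);

Creating a new category or topic then means inserting into superentity first and reusing LAST_INSERT_ID() as the id of the subtype row.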
This model would allow you to maintain the relationship between categories and topics, while having both entities generalized in the superentity table.
Another advantage to this model is you can abstract out common fields in the subtype tables into the superentity table. Say for example that categories and topics both contained the fields title and url: you could put these fields in the superentity table because they are common attributes of its subtypes. Only put fields which are specific to the subtype tables IN the subtype tables.
If you just want the IDs in the two tables to be different, you can initially set table2's AUTO_INCREMENT to some big number:
ALTER TABLE `table2` AUTO_INCREMENT=1000000000;
You can't have an auto_increment value shared between tables, but you can make it appear that it is:
set @@auto_increment_increment = 2;  -- change auto-increment to increase by 2
create table evens (
id int auto_increment primary key
);
alter table evens auto_increment = 0;
create table odds (
id int auto_increment primary key
);
alter table odds auto_increment = 1;
The downside to this is that you're changing a global setting, so ALL auto_inc fields will now be growing by 2 instead of 1.
It sounds like you want a MySQL equivalent of sequences, which can be found in DBMSs like PostgreSQL. There are a few known recipes for this, most of which involve creating table(s) that track the name of the sequence and an integer field that holds its current value. This approach lets you query the sequence table and use the value for one or more tables, as necessary.
There's a post here that takes an interesting approach to this problem. I have also seen this approach used in the now-obsolete PEAR DB module.
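The usual recipe looks something like this (a sketch; the table and sequence names are illustrative):

CREATE TABLE sequences (
    name VARCHAR(50) NOT NULL PRIMARY KEY,
    current_value INT UNSIGNED NOT NULL
);
INSERT INTO sequences VALUES ('account_no', 0);

-- atomically claim the next value, then read it back in the same session
UPDATE sequences
SET current_value = LAST_INSERT_ID(current_value + 1)
WHERE name = 'account_no';
SELECT LAST_INSERT_ID();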
You need to set the other table's AUTO_INCREMENT value manually, either from the client or inside MySQL via an SQL statement:
ALTER TABLE users AUTO_INCREMENT = 3
So after inserting into table1, you get back the last auto-increment value and then set the other table's AUTO_INCREMENT from it.
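Since ALTER TABLE does not accept a variable directly, doing this inside MySQL requires a prepared statement; a sketch, assuming the two tables from the question:

INSERT INTO table1 (id) VALUES (NULL);  -- normal auto-increment insert
SET @sql = CONCAT('ALTER TABLE table2 AUTO_INCREMENT = ', LAST_INSERT_ID() + 1);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;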
I'm confused by your question. If table2 is a subset of table1, why would you have it share the primary key values? Do you mean that the categories are split between table1 and table2?
If so, I would question the design choice of putting them into separate tables. It sounds like you have one of two different situations. The first is that you have a "category" entity that comes in two flavors. In this case, you should have a single category table, perhaps with a type column that specifies the type of category.
The second is that your users can "like" things that are different. In this case, the "user likes" table should have a separate foreign key for each object. You could pull off a trick using a composite foreign key, where you have the type of object and a regular numeric id afterwards. So, the like table would have "type" and "id". The person table would have a column filled with "PERSON" and another with the numeric id. And the join would say "on a.type = b.type and a.id = b.id". (Or the part on the "type" could be implicit, in the choice of the table).
You could do it with triggers:
-- see http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id
CREATE TABLE sequence (id INT NOT NULL);
INSERT INTO sequence VALUES (0);
CREATE TABLE table1 (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    secondaryid INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (id)
);

CREATE TABLE table2 (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    secondaryid INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (id)
);
DROP TRIGGER IF EXISTS table1_before_insert;
DROP TRIGGER IF EXISTS table2_before_insert;
DELIMITER //
CREATE TRIGGER table1_before_insert
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
    -- bump the shared counter and expose it via LAST_INSERT_ID()
    UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
    SET NEW.secondaryid = LAST_INSERT_ID();
END;
//
CREATE TRIGGER table2_before_insert
BEFORE INSERT ON table2
FOR EACH ROW
BEGIN
    UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
    SET NEW.secondaryid = LAST_INSERT_ID();
END;
//
DELIMITER ;
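With the triggers in place, both tables draw secondaryid from the shared counter; for example (my illustration, not from the original answer):

INSERT INTO table1 () VALUES ();  -- row gets secondaryid 1
INSERT INTO table2 () VALUES ();  -- row gets secondaryid 2
INSERT INTO table1 () VALUES ();  -- row gets secondaryid 3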
I have 2 tables that I wish to update: articles and articles_entities. articles has a PK of id, and articles_entities has the FK article_id; both fields are currently char(36) UUIDs. I am looking to convert these fields to int(10).
Is there a way I can update the 2 tables with 1 query and keep the keys matching? Or do I have to write a script to loop through each article and update all references?
I am using InnoDB, if that helps.
Two steps:
Ensure your foreign key is set to ON UPDATE CASCADE, then update the mother table's id field so it contains numbers. The ON UPDATE CASCADE constraint will have InnoDB update the child table as it updates the mother. If you have a lot of rows, be prepared for this to be extremely slow.
Change the type of both columns to INT. You may need to drop the foreign key before you do so and re-create it afterwards.
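A sketch of the two steps (the FK constraint name here is an assumption; look up the real one in your schema first):

-- 1) re-create the FK with ON UPDATE CASCADE
ALTER TABLE articles_entities DROP FOREIGN KEY articles_entities_ibfk_1;
ALTER TABLE articles_entities
    ADD CONSTRAINT fk_articles_entities_article
        FOREIGN KEY (article_id) REFERENCES articles (id)
        ON UPDATE CASCADE;

-- renumber the parent rows; the cascade rewrites article_id in the child table
SET @n := 0;
UPDATE articles SET id = (@n := @n + 1);

-- 2) then drop the FK, change both columns to INT, and re-create the FK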