I'm working on the first giant update to my website. It has an existing table for posts. If I add two columns to the table, will I be able to import the old data into the updated table?
EDIT: I meant columns. Whoops.
You mean two columns? If they are nullable then you should be able to import your old data which doesn't contain these columns. You'll just need to write your insert statement carefully by making sure to specify the exact columns you are inserting into. This is a good practice and should be done anyway (thanks Bruno for the tip).
Example:
INSERT INTO table1 (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM sourceTable
Instead of
INSERT INTO table1
SELECT *
FROM sourceTable
If the two new columns are not nullable, then you can try appending some dummy data using literals just to get the insert to work.
Example:
INSERT INTO table1 (Col1, Col2, Col3, NewCol1, NewCol2)
SELECT Col1, Col2, Col3, '', ''
FROM oldDataTable
Or you could create a default constraint which will work just as well.
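Example (a sketch only, assuming a hypothetical posts table; the column names and types are illustrative, not from the question):
-- Old rows imported without values for these columns will pick up the defaults.
ALTER TABLE posts
    ADD COLUMN new_col1 VARCHAR(100) NOT NULL DEFAULT '',
    ADD COLUMN new_col2 INT NOT NULL DEFAULT 0;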
First import the data, and then add the new columns.
Or you can add the two columns first, making sure they can be NULL (or have a default), and then import the old data.
I assume you mean if you add two columns to the table?
Yes, you should be able to insert the existing data using INSERT statements as long as the new columns are nullable or you set a default value for those columns.
I'm looking for the best solution to a large data problem. I've been thinking about it for a while now and it would be nice to hear your opinions.
I have a MySQL database with a table of about 5,000,000 records that is loaded and changed daily (new records and changed records).
There are some duplicated records in that table that I want to mark daily.
There are around 20 columns in the table. I want to find duplicated records that have the same data in 4 of the columns.
After I have found the duplicates, I need to loop through each duplicate record to update my search function and mark the record in the table as a duplicate of the other product.
I want to use as few MySQL resources as possible and make the script as fast as possible.
Now I have the following query, but it is really slow:
SELECT GROUP_CONCAT(id SEPARATOR '|') as ids,
GROUP_CONCAT(stock SEPARATOR '|') as stock
FROM table
GROUP BY column1, column2, column3, column4
HAVING count(id) > 1;
I could put indexes on the four columns, but I think this query will still be slow to run.
I'm curious to hear your thoughts.
It sounds like you want a query like this:
select col1, col2, col3, col4,
group_concat(id separator '|') as ids,
group_concat(stock separator '|') as stocks
from stock s
group by col1, col2, col3, col4
having count(*) > 1;
(This is essentially your query. It is where I would start, though.)
Alternatively, it might be faster to get each duplicated row. You can do this by using:
select s.*
from stock s
where exists (select 1
from stock s2
where s2.col1 = s.col1 and s2.col2 = s.col2 and
s2.col3 = s.col3 and s2.col4 = s.col4 and
s2.id <> s.id
);
For this to have any hope of working, you need an index on stock(col1, col2, col3, col4, id). And this formulation assumes the values in those columns are not NULL.
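A sketch of that index (the index name is just illustrative):
CREATE INDEX idx_stock_col1_col2_col3_col4_id
    ON stock (col1, col2, col3, col4, id);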
Note: If this is faster but you still need the original format, you can put this condition into the group by query.
To be honest, though, I think the right approach is to have a unique index on the four columns:
create unique index unq_stock_col1_col2_col3_col4 on stock(col1, col2, col3, col4);
Then handle the duplicate problems when updates or inserts modify the data. It is best to do data integrity checks in the database and to not let the data problems get out-of-hand.
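For example, once the unique index is in place, duplicates could be handled at write time with something like this sketch (overwriting the stock value is just one possible policy, not something from the question):
INSERT INTO stock (col1, col2, col3, col4, stock)
VALUES ('a', 'b', 'c', 'd', 10)
ON DUPLICATE KEY UPDATE stock = VALUES(stock);  -- update the existing row instead of adding a duplicate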
I couldn't manage to summarise my question in the title...
I have a database, full of data.
I have another database, full of data.
The two databases have the exact same structure.
I need to export data from one and put it all into the other.
I have exported 'data only' but of course that's not enough.
I need the id columns of the imported data to be reassigned so the rows still correspond correctly as they are added to tables with existing data.
I couldn't see one, but I am hoping there is a setting in the export settings for this? Or in the import settings at the other end?
If not, what is the easiest way to go about it, please?
Thanks
Assuming your id column is an AUTO_INCREMENT PRIMARY KEY, simply do a SELECT without the key and insert the result into the new table:
INSERT INTO table2 (col2, col3, col4)
SELECT col2, col3, col4 FROM table1
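If both databases live on the same MySQL server, the same statement can reference them directly; a sketch, where source_db and target_db are placeholder names:
INSERT INTO target_db.table2 (col2, col3, col4)
SELECT col2, col3, col4 FROM source_db.table1;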
I need help with two tables in a MySQL DB.
Both tables deal with ZipCodes in the USA.
The problem I have found is, not all zipcodes are contained in each table.
One table has zipcodes that the other does not and vice versa.
So I can solve this in either of two ways: 1) find someone who has a DB of all the USA zipcodes and get a copy from them, or 2) find someone who can help me write a script to copy data from one table and insert it into the other table, so that I end up with one table containing all the zipcodes. What I envision is searching one table for zipcodes that are NOT found in the other table, then inserting each missing zipcode into the other table.
Keep in mind that there is more than just a field for zipcode. We also have county, state, longitude, latitude, city....etc.
My final goal is to have one table with all the USA zipcodes along with the correct longitude and latitude for each zipcode.
Anybody out there have either of these solutions?
INSERT IGNORE INTO table1 (SELECT zipcode FROM table2)
You'll have to add the extra fields yourself; since you didn't post them, I don't know what they are.
INSERT IGNORE just skips any records that are duplicates.
(You do have a unique index on the zipcode column, right?)
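If not, one way to add it first (assuming the column is named zipcode, as above):
ALTER TABLE table1 ADD UNIQUE INDEX uq_zipcode (zipcode);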
There are several ways to do this; here are two of them.
Way 1:
INSERT INTO table1 (column1, column2 ... zip_column)
SELECT column1, column2 ... zip_column FROM table2
WHERE table2.zip_column NOT IN (SELECT zip_column FROM table1)
Way 2 (The better one):
Create a unique index on the zip code column in table1 and:
INSERT IGNORE INTO table1 (column1, column2 ... zip_column)
SELECT column1, column2 ... zip_column FROM table2
Try this:
INSERT INTO table1 (county, state, longitude, latitude, city .... add more here)
SELECT table2.county, table2.state, table2.longitude, table2.latitude, table2.city .... add more here FROM table2
LEFT JOIN table1 ON
(
table2.county=table1.county AND
table2.state=table1.state AND
table2.longitude=table1.longitude AND
table2.latitude=table1.latitude AND
.... add more here
)
WHERE
table1.county IS NULL OR
table1.state IS NULL OR
table1.longitude IS NULL OR
table1.latitude IS NULL OR
.... add more here
I have two tables that have the same column names. Is there a way of getting the contents from one table and inserting them into the other? I could export them into a spreadsheet and then import into the other table, but I would prefer to do it quickly with MySQL. My experience with PHP and MySQL is mainly selecting from and inserting into one table.
Use INSERT ... SELECT:
INSERT INTO table1 SELECT * FROM table2
Note that this will only work if the tables are defined with columns in the same order; otherwise one will have to explicitly name the columns in one (or both) of the INSERT and SELECT parts of the command:
INSERT INTO table1 (colA, colB, colC) SELECT colA, colB, colC FROM table2
I am getting the list of students from one table; it has 5 records. Now I have added a checkbox at the end of each row.
What I want to do is insert all the data into a new table, including the checkbox value as 1 if checked and 0 if unchecked.
The new table will have a new column date_id which will be the same for all 5 entries, and the other columns will remain the same as in table 1.
How can I do this? Please help.
Do you mean inserting multiple rows in MySQL with a single query? Then:
INSERT INTO table_name VALUES (date_id, values_of_a), (date_id, values_of_b), (date_id, values_of_c), ... n number of records;
The values for columns a, b, c you already have from the SELECT query on table 1.
You can try this:
<input type="checkbox" name="check[]" value="<?= (isset($table1Id)) ? $table1Id : '0' ?>" />
<?php
foreach ($_POST['check'] as $each) {
    if ($each != "0") {
        $table1Id = $each;
        ### retrieve the details from table 1 here based on pk "$table1Id" and insert into table 2
    }
}
?>
Execute a for/foreach loop over the entire list and execute an insert query for every record in that list array.
You need to do something like:
INSERT INTO new_table (col1, col2, col3) SELECT col1, col2, custom_value FROM old_table;
Note:
The column count needs to match.
You can specify any custom values in the select query.
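Applied to the question's scenario, a rough sketch might look like this (all table and column names are assumptions, and the IN list stands in for the ids that were ticked in the form):
-- Copy the 5 student rows into the new table, adding a shared date_id
-- and a checked flag: 1 for ticked rows, 0 otherwise.
INSERT INTO attendance (student_id, student_name, date_id, checked)
SELECT id, name, 20240115, IF(id IN (2, 4), 1, 0)
FROM students;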
If you want to use a single query, then:
INSERT INTO your_table_name VALUES (date_id, values1), (date_id, values2), (date_id, values3), (date_id, values4), (date_id, values5);
or
Store all values in a list and execute a for loop for every record in that list.