mysql paste a long list of values - php

I have a large database with many tables, each containing several thousand records.
I have done some work in Excel to identify the records/rows I want to delete from one of my large tables, because when I tried to run the query within phpMyAdmin, the table kept locking as the query was too big.
Anyway.... Long story short.
I now have a list of 1500 records I need to delete from one of my tables.
Is there a way to "paste" these 1500 values into a query, so I can bring back the matching records, select them all at once and delete them?
Obviously, I don't want to do this manually one at a time.
So the query I have in mind is something like this:
Find any records which match these IDs (WHERE ID = )
Paste in the list of IDs from MS Excel
Results returned
Can select all rows and delete
Any tips?
Thanks

Just use the keyword IN in your query with your list of values, like:
SELECT Name
FROM Users
WHERE ID IN (1, 2, 3, 4, ...);

There is more than one way to do this.
First:
You can paste the list of comma-separated IDs directly into the WHERE clause:
DELETE FROM tablename WHERE ID IN (1,2,3,4);
If you get the error 'Packet Too Large', you can increase max_allowed_packet.
The largest possible packet that can be transmitted to or from a MySQL
5.0 server or client is 1GB.
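If you do hit that limit, the variable can be checked and raised at runtime, for example (the 256 MB value is only illustrative, and the change can also be made permanent in my.cnf):
-- Current limit, in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it for new connections (needs the appropriate privilege); 256 MB here
SET GLOBAL max_allowed_packet = 268435456;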
Second:
You can export your Excel file to a CSV file, load the data into a temp table, and then delete from your table using the temp table as a reference:
LOAD DATA INFILE 'X:\filename.csv'
INTO TABLE tmptable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
DELETE t1
FROM tablename t1
JOIN tmptable t2 ON t1.ID = t2.ID;
Reference: MySQL LOAD DATA INFILE Syntax
Don't forget to remove your tmp table.
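For example:
DROP TABLE IF EXISTS tmptable;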

Related

MySQL INSERT ... SELECT, obtain last insert ID

I have this query in PHP. It's an INSERT ... SELECT copying from table2, but I need to get the IDs of the newly created rows and store them in an array. Here is my code:
$sql = "INSERT INTO table1 SELECT distinct * from table2";
$db->query($sql);
I could reverse the flow, starting with a SELECT on table2 and doing single inserts, but that would slow down the script on a big table. Ideas?
You could lock the table, insert the rows, and get the ID of the last item inserted, and then unlock; that way you know that the IDs will be contiguous as no other concurrent user could have changed them. Locking and unlocking is something you want to use with caution though.
An alternative approach could be to use one of the columns in the table: either an 'updated' datetime column, or an insert-id column (a column into which you put a value that will be the same across all of your rows).
That way you can do a subsequent SELECT of the IDs back out of the database matching either the updated time or your chosen insert ID.
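A rough sketch of that second idea, assuming table1 from the question has (or can be given) an extra marker column; the batch_id column and the value 42 are made up for illustration:
-- Tag every copied row with the same marker value (hypothetical batch_id column)
INSERT INTO table1 (col1, col2, batch_id)
SELECT DISTINCT col1, col2, 42
FROM table2;
-- The auto-increment IDs of exactly the rows just inserted
SELECT id
FROM table1
WHERE batch_id = 42;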

Don't add the exact same 3 column values in MySQL

I need to load a CSV file into a database. Some weeks after loading one file, I need to load an updated version of it. The problem is that duplicate rows get added.
I don't have an ID for the rows, so I have to check whether they are the same using 'City', 'Address' and 'Location Name'. Only if all 3 match should the new row be left out of the database.
I tried IGNORE, but it seems to only work with an ID as primary key (and I don't have a primary key).
I also read a 'multiple primary key' thread but did not succeed in creating one.
My current code is (CodeIgniter):
$query = $this->db->query('
LOAD DATA INFILE "'.$path.'fichier/'.$fichier.'"
INTO TABLE location FIELDS TERMINATED BY ";"
LINES TERMINATED BY "'.$os2.'"
IGNORE 1 LINES ('.$name[1].','.$name[2].','.$name[3].','.$name[4].','.$name[5].','.$name[6].','.$name[7].','.$name[8].','.$name[9].','.$name[10].','.$name[11].','.$name[12].','.$name[13].','.$name[14].','.$name[15].','.$name[16].','.$name[17].','.$name[18].','.$name[19].')');
I would say the best approach is to load your updated CSV into a staging table. Once all the data is loaded, either do a LEFT JOIN against your actual table to find all the new records and insert only those into your main table, OR replace all the data in the main table with the contents of the staging table.
Per your comment:
Yes, once you have loaded the data into the new table, perform a LEFT JOIN with your main table (something like below):
select staging_table.id
from staging_table
left join main_table on staging_table.id = main_table.id
where main_table.id is null;
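To then actually copy only the new rows, the same anti-join can drive an INSERT ... SELECT; a rough sketch using the three columns from the question as the matching key (the column names are assumed, not taken from your table):
-- Insert only the staging rows that have no match on the three-column key
INSERT INTO main_table (City, Address, Location_Name)
SELECT s.City, s.Address, s.Location_Name
FROM staging_table s
LEFT JOIN main_table m
  ON m.City = s.City
 AND m.Address = s.Address
 AND m.Location_Name = s.Location_Name
WHERE m.City IS NULL;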
In case someone wants to know how I finally succeeded:
I created a unique constraint in phpMyAdmin and then used IGNORE in my query.
ALTER TABLE location
ADD CONSTRAINT iu_location UNIQUE( col1, col2, col3 );
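For reference, combining that unique key with the IGNORE keyword in the LOAD DATA statement from the question would look roughly like this (the file name and column list are placeholders for the dynamic values in the PHP code):
-- IGNORE (before INTO TABLE) skips rows that would violate iu_location;
-- IGNORE 1 LINES still skips the CSV header row as in the question
LOAD DATA INFILE 'fichier.csv'
IGNORE
INTO TABLE location
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(col1, col2, col3);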

MySQL: INSERT all columns from one table into another table based on column in third table

Okay, I'm sure I'm missing something simple here, but what I'm after is this:
We have tables for terminated employees (employees_term) and training history (traininghistory) and training history for terminated employees for archiving (traininghistory_term).
I need to INSERT all of the columns from traininghistory into traininghistory_term based on the employee number in the employees_term table (so it only copies over the terminated employees and not the active employees). I need to run this one time to copy over all of our terminated employees training info and don't want to build an entire function for something that I'll use once and I know MySQL can do with a couple lines.
traininghistory_term is an EXACT copy of traininghistory; I literally copied the table structure and all.
Here's what I've got so far:
INSERT INTO traininghistory_term SELECT * FROM traininghistory
JOIN employees_term
ON employees_term.employee_id = traininghistory.EMPID
And here's what MySQL is telling me:
#1136 - Column count doesn't match value count at row 1
Any thoughts? Thanks in advance!
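In case it helps: the usual cause of that error is that SELECT * after a JOIN returns the columns of both joined tables, so the column count no longer matches traininghistory_term. Qualifying the star so that only the traininghistory columns are selected should line the counts up again; an untested sketch, reusing the names from the question:
INSERT INTO traininghistory_term
SELECT traininghistory.*
FROM traininghistory
JOIN employees_term
  ON employees_term.employee_id = traininghistory.EMPID;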

mysql query to update duplicate entries

I am stuck on a simple update query.
I have a table, say table1, containing 'name' and 'phone_no' columns. When I upload a CSV file containing a list of names and contact numbers, I want to update the name of any duplicate number with the previous one. For example, I have a row containing 'max', '8569589652'. Now when I upload the same number with another name, say 'stela', '8569589652', then 'stela' should get updated to 'max'.
For this purpose I created another table, say table2. Then I collected all duplicate entries from table1 into table2, and after that updated the new entry with the previous name.
The following are my queries.
To collect all duplicate entries:
INSERT INTO table2 SELECT phone_no,name FROM table1
GROUP BY phone_no HAVING COUNT(*)>1;
To update duplicate entries in table1:
UPDATE table1, table2 SET table1.name = table2.name
WHERE table1.phone_no = table2.phone_no;
My problem is that when I run these two queries they take too much time. It takes more than half an hour to upload a CSV file of 1000 numbers.
Please suggest how to optimize the queries so the CSV uploads in less time.
Does the speed of uploading depend on the size of the database?
Please help. Thanks in advance.
Here are the steps from the link I suggested.
1) Create a new temporary table.
CREATE TEMPORARY TABLE temporary_table LIKE target_table;
2) Optionally, drop all indices from the temporary table to speed things up.
SHOW INDEX FROM temporary_table;
DROP INDEX `PRIMARY` ON temporary_table;
DROP INDEX `some_other_index` ON temporary_table;
3) Load the CSV into the temporary table
LOAD DATA INFILE 'your_file.csv'
INTO TABLE temporary_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(field1, field2);
4) Copy the data using ON DUPLICATE KEY UPDATE
SHOW COLUMNS FROM target_table;
INSERT INTO target_table
SELECT * FROM temporary_table
ON DUPLICATE KEY UPDATE field1 = VALUES(field1), field2 = VALUES(field2);
5) Remove the temporary table
DROP TEMPORARY TABLE temporary_table;
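Note that step 4 only works if target_table has a PRIMARY or UNIQUE key on the column being deduplicated (phone_no in this question); without one, ON DUPLICATE KEY UPDATE never fires. If it is missing, something like this would be needed first (index name assumed):
-- Without a unique index on the deduplication column, ON DUPLICATE KEY UPDATE never fires
ALTER TABLE target_table ADD UNIQUE KEY uq_phone_no (phone_no);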
You can update duplicate entries of phone_no like below:
INSERT INTO table2 (phone_no, name)
VALUES
('11111', 'aaa'),
('22222', 'bbb'),
('33333', 'cccc')
ON DUPLICATE KEY UPDATE
phone_no = VALUES(phone_no);
Dump your CSV file into a temp table.
Then simply apply a MERGE statement:
MERGE main_table AS MAIN
USING temp_table AS TEMP
ON MAIN.CONTACT_NO = TEMP.CONTACT_NO
WHEN MATCHED THEN UPDATE
SET MAIN.NAME = TEMP.NAME;
If you want to insert non-matching records, use this:
WHEN NOT MATCHED
THEN INSERT
(NAME, CONTACT_NO)
VALUES
(TEMP.NAME, TEMP.CONTACT_NO);
Please note that the MERGE command must end with ';'.
I have used ';' after the UPDATE clause above; remove that, append the WHEN NOT MATCHED part, and end the whole MERGE with ';'.
Hope this helps.
Please update if any more help is needed.
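As a side note, MySQL itself has no MERGE statement (that syntax comes from SQL Server/Oracle). Under the same temp-table assumption, the closest MySQL equivalent of the matched/not-matched logic above would be INSERT ... ON DUPLICATE KEY UPDATE, roughly like this (table names assumed, and table1.phone_no must have a UNIQUE key):
-- Matched rows get the new name; unmatched rows are inserted
INSERT INTO table1 (phone_no, name)
SELECT phone_no, name
FROM temp_table
ON DUPLICATE KEY UPDATE name = VALUES(name);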

how to check whether value exists or not in mysql database column while inserting

I have a contactnumber column in a MySQL database. The contactnumber column has more than 20,000 entries. Now, when I upload new numbers through a .csv file, I don't want duplicate numbers in the database.
How can I avoid duplicate numbers while inserting into the database?
I initially implemented logic that checks each number in the .csv file against each of the numbers in the database.
This works, but it takes a lot of time to upload a .csv file containing 1000 numbers.
Please suggest how to minimize the time required to upload the .csv file while not uploading duplicate values.
Simply add a UNIQUE constraint to the contactnumber column:
ALTER TABLE `mytable` ADD UNIQUE (`contactnumber`);
From there you can use the IGNORE option to ignore the error you'd usually be shown when inserting a duplicate:
INSERT IGNORE INTO `mytable` VALUES ('0123456789');
Alternatively, you could use ON DUPLICATE KEY UPDATE to do something with the dupe, as detailed in this question: MySQL - ignore insert error: duplicate entry
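For completeness, the ON DUPLICATE KEY UPDATE variant mentioned above could, for example, count how often a number was re-uploaded; the dupe_count column here is hypothetical:
-- Assumes contactnumber is UNIQUE and a hypothetical dupe_count column exists
INSERT INTO `mytable` (`contactnumber`, `dupe_count`)
VALUES ('0123456789', 0)
ON DUPLICATE KEY UPDATE `dupe_count` = `dupe_count` + 1;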
If your contactnumber should not be repeated, then make it a PRIMARY or at least a UNIQUE key. That way, when a duplicate value is inserted, the insert will fail automatically and you won't have to check beforehand.
The way I would do it is to create a temporary table.
create table my_dateasyyyymmddhhiiss as select * from mytable where 1=0;
Do your inserts into that table.
Then query out the orphans between mytable and the temp table, based on contactnumber.
Then run an inner join query between the two tables and fetch out the duplicates for your telecaller tracking.
Finally, drop the temporary table. A rough sketch of these steps follows below.
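A rough SQL sketch of those steps, with a made-up temp table name standing in for the my_<yyyymmddhhiiss> table created above:
-- Duplicates (checked before inserting the new numbers), e.g. for telecaller tracking
SELECT t.contactnumber
FROM my_20150101120000 t
INNER JOIN mytable m ON m.contactnumber = t.contactnumber;
-- Orphans: numbers in the temp table that are not in mytable yet
INSERT INTO mytable (contactnumber)
SELECT t.contactnumber
FROM my_20150101120000 t
LEFT JOIN mytable m ON m.contactnumber = t.contactnumber
WHERE m.contactnumber IS NULL;
-- Finally drop the temp table
DROP TABLE my_20150101120000;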
One thing this does not address is duplicates within the supplied file (I don't know whether that would be an issue in this problem).
Hope this helps.
If you don't want to insert duplicate values into the table, but would rather keep those values in a different table, you can create a trigger on the table, like this:
DELIMITER $$
CREATE TRIGGER unique_key BEFORE INSERT ON table1
FOR EACH ROW BEGIN
DECLARE c INT;
SELECT COUNT(*) INTO c FROM table1 WHERE itemid = NEW.itemid;
IF (c > 0) THEN
insert into table2 (column_name) values (NEW.itemid);
END IF;
END$$
DELIMITER ;
I would recommend this approach:
Alter the contactnumber column to be a UNIQUE KEY.
Using phpMyAdmin, import the .csv file and check the option 'Do not abort on INSERT error' under Format-Specific Options before submitting.
