I have a table with the following columns:
id
name
mail
There is a lot of data in this table, and the chance of duplicate data is very high.
I want to display each original row and its duplicate row(s) one after the other, so that the user can delete a duplicate by clicking a delete button.
If you select the data and simply order it by email address, each original and its duplicates will come together. The rest of the logic can be handled in PHP, too.
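A minimal sketch of that, assuming the table is named users:

-- table name is assumed; columns are from the question
SELECT id, name, mail
FROM users
ORDER BY mail;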
First of all, there is no built-in way to tell original from duplicate records in MySQL. GROUP BY can, however, show you which records have duplicate entries, if there are any.
You can use GROUP BY mail HAVING COUNT(id) > 1 in your SELECT query to find the records that are duplicated. Then you can use this result as a subquery to fetch all duplicated records and order them by mail.
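For example, a sketch of that subquery approach (the table name users is assumed here):

-- table name is assumed; columns are from the question
SELECT t.id, t.name, t.mail
FROM users t
INNER JOIN (
    SELECT mail
    FROM users
    GROUP BY mail
    HAVING COUNT(id) > 1
) dup ON dup.mail = t.mail
ORDER BY t.mail, t.id;

Ordering by mail groups each duplicate set together; the secondary ORDER BY id puts the oldest (presumably original) row ahead of its duplicates.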
Apparently this...
$lastid = $wpdb->insert_id
...will give me the ID of the last inserted row (as noted here: How to get last inserted row ID from wordpress database?).
But how can I target a specific table, and get information from the latest inserted row after a form was just submitted?
For example, I have a table called 'license' and each row contains the columns 'email' and 'name' (among others).
The idea is that after I insert a row into that table with a form, I need to display its email and name on screen.
Any help would be awesome.
You can use the value of an auto-increment column.
You probably already have the id column doing that.
In a new SELECT statement, order by the auto-increment column using ORDER BY myCol DESC, and use LIMIT 1 to get only the row with the highest value (which is the most recent one).
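A sketch against the asker's license table:

-- assumes id is the AUTO_INCREMENT primary key of license
SELECT email, name
FROM license
ORDER BY id DESC
LIMIT 1;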
Given what you already have, you need to use the last inserted ID in a new query:
$query = "select * from license where id = ".$lastid;
Using the result of this query you can print out the other information of the last inserted row.
If you need more help with the PHP code, post your code so we can edit it accordingly.
LAST_INSERT_ID in general (in your example, from WordPress) can only fetch, as the name says, the ID (AUTO_INCREMENT column value) generated by the most recently executed INSERT statement (see the MySQL docs).
There is no direct way of saying "get me the last ID of table XYZ" for an arbitrary point back in time; it only works as described above.
Even with proper usage in PHP, you need to be careful that no INSERT statements on other tables (ones you don't want) run before you fetch LAST_INSERT_ID, so you don't end up with IDs from unwanted tables.
Your possible workarounds:
do a SELECT from your desired table ordered by id descending, e.g.:
SELECT id FROM xyz ORDER BY id DESC LIMIT 1;
(this presumes id is an AUTO_INCREMENT column or similar)
another option, if possible, is to locate in your code the "position" of the INSERT statements for the table you want (e.g. xyz) and fetch the ID right there with:
$wpdb->insert_id
I am implementing a request mechanism where a user has to approve a request. For that I have implemented a temporary table and a main table. When a request is first added, the data is inserted into the temporary table; on approval, it is copied to the main table.
The issue is that there will be more than 5k rows to move to the main table after approval, plus another 3-5 rows per row in the detail table (which stores the details).
My current implementation is like this:
//Get the rows from the temporary table (batch_temp)
//Loop through the data
//    Insert the data into the main table (batch_main) and return the id
//    Get the detail rows from the temporary detail table (batch_temp_detail) using detail_tempid
//    Loop through the data
//        Insert the details into the detail table (batch_main_detail) with the main table id amount_id
//    End Loop
//End Loop
But this implementation would take at least 20k queries. Is there a better way to implement this?
I tried to create an sqlfiddle but was unable to, so I have pasted the queries on pgsql.privatepaste.com.
I'm sorry that I'm not familiar with PostgreSQL. My solution is in MySQL; I hope it still helps, since the two are similar here.
First, we add one more field to your batch_main table to track the originating batch_temp record for each batch_main record:
ALTER TABLE `batch_main`
ADD COLUMN tempid bigint;
Then, on approval, we insert all 5k rows with one query:
INSERT INTO batch_main
(batchid, userid, amount, tempid)
SELECT batchid, userid, amount, amount_id FROM batch_temp;
Now, with each new batch_main record, we have the ID of its originating batch_temp record. Then insert the detail records:
INSERT INTO `batch_main_detail`
(detail_amount, detail_mainid)
SELECT
btd.detail_amount, bm.amount_id
FROM
batch_temp_detail `btd`
INNER JOIN batch_main `bm` ON btd.detail_tempid = bm.tempid
Done!
P.S.: I'm a bit confused by the way you name your fields, and since I don't know PostgreSQL and am just reading your syntax: can you use the same sequence for the primary key of both batch_temp and batch_main? If you can, there is no need to add the extra field.
Hope this helps.
Simply update your schema. Instead of having two tables, one main and one temporary, keep all the data in the main table but add a flag that indicates whether a given record is approved or not. Initially it is set to false; once approved, it is simply set to true, and the data can then be displayed on your website, etc. That way you never write the data twice or have to move it from one table to another.
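A minimal sketch of that schema change (the flag name approved and the example batchid value are assumptions):

-- add the approval flag once
ALTER TABLE batch_main ADD COLUMN approved BOOLEAN NOT NULL DEFAULT FALSE;

-- on approval, flip the flag instead of copying rows
UPDATE batch_main SET approved = TRUE WHERE batchid = 42;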
You haven't specified which RDBMS you are using, but good old INSERT with a SELECT in it should do the trick in one command:
INSERT INTO main (field1, ..., fieldN) SELECT field1, ..., fieldN FROM temporary;
I'm trying to store the terms of a search. I've created a table "Searches" which stores each of the search terms in its own field (bedrooms, baths, etc.), so each row will contain one search.
On the advanced search form, users can select multiple search terms for a single field using an option select. I thought it would be wise to store each of these terms in a unique row of a related table for easy statistics reporting. I thought this way I could quickly report how many times a term is searched for. I also need to have the ability to save and regenerate the search query.
However, if none of the searched terms are in the main table, I still need to generate a unique ID to link it to the related table. So I would have to insert a blank row just to generate the foreign key, which I'm reluctant to do.
Is there a better way? I could store the multiple search terms comma-separated in the primary table, but it seems it would be harder to pull them back out and count them for statistics, etc.
Why do you need to insert a blank row? You don't need to persist any of the records until the time comes to persist all of the records, right?
So as I understand it, your table layout is something like:
Table1
--------
ID
etc.
Table2
--------
ID
Table1ID
etc.
If that's the case, then the order of operations for inserting the data would look like this:
Begin Transaction
Insert into Table1
Get the last inserted ID
Insert into Table2
Commit Transaction
Assuming I understand your UX correctly, this would all happen when the user submits the form.
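In MySQL, that sequence might look like the sketch below (the column names bedrooms, baths, and term are placeholders taken loosely from the question):

START TRANSACTION;

-- insert the parent search row (placeholder columns)
INSERT INTO Table1 (bedrooms, baths) VALUES (3, 2);

-- LAST_INSERT_ID() returns the auto-increment ID generated above,
-- scoped to the current connection
INSERT INTO Table2 (Table1ID, term) VALUES (LAST_INSERT_ID(), 'pool');

COMMIT;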
If I understand you, it seems like you should have two tables:
search_term
-----------------
term_id
term
and
search
-----------------
search_id
term_id
Then you can query search for all the terms with a single SELECT statement.
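For example, to count how often each term has been searched (a sketch against the schema above):

-- joins the two suggested tables and tallies per term
SELECT t.term, COUNT(*) AS times_searched
FROM search s
INNER JOIN search_term t ON t.term_id = s.term_id
GROUP BY t.term
ORDER BY times_searched DESC;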
This is my db structure:
ID NAME SOMEVAL API_ID
1 TEST 123456 A123
2 TEST2 223232 A123
3 TEST3 918922 A999
4 TEST4 118922 A999
I'm filling it using a function that calls an API and gets some data from an external service.
On the first run, I want to insert all the data I get back from the API. After that, each time I run the function, I just want to update the existing rows and insert any rows that came back from the API call but are not yet in the db.
So my initial thought regarding the update process is to go through each row I get from the API and SELECT to see if it already exists.
I'm just wondering if this is the most efficient way to do it, or whether it's better to DELETE the relevant rows from the db and just re-insert them all.
NOTE: each batch of rows I get from the API has an API_ID, so when I say delete the rows, I mean something like DELETE FROM table WHERE API_ID = 'A999', for example.
If you're retrieving all of the rows from the service, I recommend you drop all indexes, truncate the table, insert all the data, and then recreate the indexes.
If you're retrieving only some of the data from the service, I would drop all indexes, delete the relevant rows, insert the new rows, then recreate the indexes.
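A rough sketch of the full-reload variant (the table name api_data and index name idx_api_id are assumptions; values are taken from the question's sample data):

-- drop the secondary index so the bulk insert is not slowed down
ALTER TABLE api_data DROP INDEX idx_api_id;

TRUNCATE TABLE api_data;

INSERT INTO api_data (NAME, SOMEVAL, API_ID) VALUES
    ('TEST', 123456, 'A123'),
    ('TEST2', 223232, 'A123');

-- rebuild the index once, after all rows are in place
ALTER TABLE api_data ADD INDEX idx_api_id (API_ID);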
In such scenarios I'm usually going with:
start transaction
get row from external source
select from the local store to check whether it's there
if it's there: update its values, remember local row id in list
if it's not there: insert it, remember local row id in list
at the end, delete all local rows whose IDs are not in the remembered list (a NOT IN clause if the number of IDs allows it, or other means if many rows may need deleting)
commit transaction
Why? Because usually I have local rows referenced by other tables, and deleting them all would break those references (not to mention DELETE CASCADE).
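The final cleanup step of that procedure might look like this (the table name api_data is an assumption; the ID list would be collected in application code during the loop):

-- remove rows for this batch that the external source no longer returned
DELETE FROM api_data
WHERE API_ID = 'A999'
  AND ID NOT IN (3, 4);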
I don't see any problem with performing a SELECT and then deciding between an INSERT or an UPDATE. However, MySQL can perform a so-called "upsert": it inserts a row if it does not exist, or updates the existing row otherwise.
This SO answer shows how to do that.
I would recommend using INSERT...ON DUPLICATE KEY UPDATE.
If you use INSERT IGNORE, then the row won't actually be inserted if it results in a duplicate key on API_ID.
Add a unique key index on the API_ID column.
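A sketch of the upsert, assuming the unique key on API_ID suggested above (the table name api_data is an assumption; values come from the question's sample data):

-- inserts the row, or updates it if API_ID already exists
INSERT INTO api_data (NAME, SOMEVAL, API_ID)
VALUES ('TEST3', 918922, 'A999')
ON DUPLICATE KEY UPDATE
    NAME = VALUES(NAME),
    SOMEVAL = VALUES(SOMEVAL);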
If you have all of the data returned from the API that you need to completely reconstruct the rows after you delete them, then go ahead and delete them, and insert afterwards.
Be sure, though, that you do this in a transaction, and that you are using an engine that supports transactions properly, such as InnoDB, so that other clients of the database don't see rows missing from the table just because they are going to be updated.
For efficiency, you should insert as many rows as you can in a single query. Much faster that way.
BEGIN;
DELETE FROM table WHERE API_ID = 'A987';
INSERT INTO table (NAME, SOMEVAL, API_ID) VALUES
('TEST5', 12345, 'A987'),
('TEST6', 23456, 'A987'),
('TEST7', 34567, 'A987'),
...
('TEST123', 123321, 'A987');
COMMIT;
I want to update only one field in a mysql table.
I have an "ad_id" which is unique.
The field "mod_date" is a TIMESTAMPS field, which is the one I need to update.
UPDATE main_table
SET main_table.mod_date = NOW()
WHERE classified.ad_id = $ad_id";
I haven't tested this yet because I am afraid it might update all rows.
So I have two questions:
Is there any way to prevent MySQL from updating more than one row?
Is this sql code correct for updating one row only?
Thanks
If ad_id is unique, it will update at most one row (one if $ad_id matches a row, zero otherwise).
If you're worried about an update like this, rewrite it as a SELECT first to confirm which rows it will operate on before running it.
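For example, a dry run of the update in question (assuming ad_id lives on main_table, with a literal ID in place of $ad_id):

-- shows exactly which rows the UPDATE would touch
SELECT * FROM main_table WHERE ad_id = 123;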
Your query doesn't look like it would work as is, because it checks the field ad_id in the table classified, which hasn't been defined anywhere in the statement. If this is just a partial query and you're joining the classified table somewhere else in it, there isn't enough info here to tell how many rows will be modified.
You can add LIMIT 1 to the end of the query to make it update only the first row the query finds, but if you're not sure what the query does, the first row might not be the one you want to modify.
As a side note I do have to say that if you're afraid to try and see what the query does, it means that either you don't have a backup of the database or you're working directly with a production database, and both of those options sound pretty scary.
What's the relation between main_table and classified?
For example...
UPDATE header h
INNER JOIN detail d ON d.id_header = h.id_header
SET h.name = 'New name'
WHERE d.id_detail = 10
will update the name in the header table for a specific id_detail.
In your case, if ad_id is unique, then you can be sure that MySQL will update only one row.