I have a table in a MySQL database with 4 columns, and I would like to prevent inserting any row whose values match those 4 columns of an existing row. I am trying to show that below.
My table:
product_name | product_sku | product_quantity | product_price
-------------+-------------+------------------+--------------
Computer     | comp_007    | 5                | 500
I would like to prevent inserting the same row again. How can I do that with a MySQL query?
UPDATE
I would not like to insert this again:
Computer | comp_007 | 5 | 500
But I would like to insert the rows below:
mouse | comp_007 | 5 | 500
Computer | comp_008 | 5 | 500
Computer | comp_007 | 50 | 500
Computer | comp_007 | 5 | 100
mouse | mou_007 | 5 | 500
Create a combined unique key / composite key on the columns in question:
ALTER TABLE `table` ADD UNIQUE (
    `product_name`,
    `product_sku`,
    `product_quantity`,
    `product_price`
);
Any attempts to insert duplicate rows will result in a MySQL error.
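For example, with the composite key in place, repeating an identical insert fails (a sketch; MySQL derives the key name from the first indexed column unless you name the index yourself):
INSERT INTO `table` (product_name, product_sku, product_quantity, product_price)
VALUES ('Computer', 'comp_007', 5, 500);
-- running the identical statement a second time fails with something like:
-- ERROR 1062 (23000): Duplicate entry 'Computer-comp_007-5-500' for key 'product_name'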
If possible you should add a Unique Key to your columns:
ALTER TABLE `table_name`
ADD UNIQUE INDEX `ix_name` (`product_name`, `product_sku`, `product_quantity`, `product_price`);
and then use INSERT IGNORE:
INSERT IGNORE INTO table_name (product_name, product_sku, product_quantity, product_price)
VALUES (value1, value2, value3, value4);
If the record is unique, MySQL inserts it as usual; if the record is a duplicate, the IGNORE keyword discards the insert without generating an error.
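If you would rather update the existing row than silently discard the new one, INSERT ... ON DUPLICATE KEY UPDATE is the other standard option. A sketch reusing the same columns; last_seen is a hypothetical extra column, not part of the question's table:
INSERT INTO table_name (product_name, product_sku, product_quantity, product_price)
VALUES (value1, value2, value3, value4)
ON DUPLICATE KEY UPDATE last_seen = NOW();  -- last_seen is assumed, not from the question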
The simplest way would be to make your columns unique. On undesired inserts, your MySQL driver will then throw an exception, which you can handle.
From a domain logic point of view, this is a bad practice, since exceptions should never handle expected behaviour. By the way, your DB table could use a primary key.
So, to have a better solution, your approach could be:
- Define a unique field (the SKU seems suitable); think about using this as the primary key as well
- With MySQL, use a REPLACE statement:
REPLACE INTO your_tablename
SET product_sku = 'whatever_sku', -- the primary/unique key value REPLACE matches on
    product_name = 'the name';   -- and so on for other columns
REPLACE does the job of trying to INSERT; if a row with the same primary or unique key already exists, it deletes that row and inserts the new one. Note that REPLACE takes no WHERE clause: it matches on the key values you supply in the SET list.
This is my table structure:
CREATE TABLE manage_files_log (
    account_sid uuid,
    file_type text,
    file_sid timeuuid,
    date_created timestamp,
    file_description text,
    file_name text,
    status int,
    url text,
    PRIMARY KEY ((account_sid, file_type), file_sid)
) WITH CLUSTERING ORDER BY (file_sid DESC);
In this table I want to update my records with this query:
UPDATE manage_files_log SET url = '$url'
WHERE account_sid = e40daea7-b1ec-088a-fc23-26f67f2052b9
  AND file_type = 'json'
  AND file_sid = 961883e0-208f-11e6-9c41-474a6606bc87;
but it is inserting a new record instead of updating the existing record.
Please help me.
Below is an example of the data in the table where I want to update the url column value:
 account_sid                          | file_type | file_sid                             | date_created             | file_description | file_name | status | url
--------------------------------------+-----------+--------------------------------------+--------------------------+------------------+-----------+--------+-----
 e40daea7-b1ec-088a-fc23-26f67f2052b9 | json      | e15e02f0-20ab-11e6-9c41-474a6606bc87 | 2016-05-22 00:00:00+0000 | descripton       | testUrl1  | 1      |
Okay, there's one thing you have to know about Cassandra: a strict distinction between update and insert doesn't exist. I know, you write UPDATE or INSERT in your query, but both do the same thing. It's called an upsert. You might think: Whaaat? But there's a reason for this: Cassandra is a masterless distributed database system, and you generally have no transactions. If you insert a value on node1 and want to update it after 10ms via node2, it can happen that your first value hasn't reached node2 yet. With a strict UPDATE, your second operation would fail. But Cassandra ignores this fact and writes the values to node2 anyway. After a while node1 and node2 synchronize their values, and at that stage node2 gets the right values from node1. (Cassandra uses an internal column timestamp for synchronizing.) That is exactly what happened here: the file_sid in your UPDATE (961883e0-...) doesn't match the file_sid of the existing row (e15e02f0-...), so Cassandra upserted a brand-new row with that key instead of touching the old one.
But you can also make UPDATE behave like a real update: simply add IF EXISTS to your query. But remember one thing: it's a big performance killer, because Cassandra has to check the row across the replicas first (a lightweight transaction)!
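Applied to the query from the question, that looks like this; with IF EXISTS, nothing is written unless a row with that exact primary key already exists (the result tells you whether the update was applied):
UPDATE manage_files_log SET url = '$url'
WHERE account_sid = e40daea7-b1ec-088a-fc23-26f67f2052b9
  AND file_type = 'json'
  AND file_sid = 961883e0-208f-11e6-9c41-474a6606bc87
IF EXISTS;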
I have 2 tables like this:
1. private_messages table:
messageId | message
--------------------
1 | text1
4 | text4
2. public_messages table:
messageId | message
----------------------
2 | text2
3 | text3
5 | text5
In both tables, the messageId column is the primary key.
Now I want these messageId columns to be auto-increment, with IDs that are unique across both tables, as shown above.
Currently, when I want to insert a row into one of the tables, I have to find the max ID of each table, compare them to find the larger one, then increment that and insert the new row.
I want to know: is there a better or automatic way, so that when I insert a new row the database does this for me?
thanks
You can obtain unique numbers in MySQL with a programming pattern like the following.
First create a table for the sequence. It has an auto-increment field and nothing else.
CREATE TABLE sequence (
    `sequence_id` INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`sequence_id`)
);
Then when you need to insert a unique number into one of your tables, use something like these queries:
INSERT INTO sequence () VALUES ();
DELETE FROM sequence WHERE sequence_id < LAST_INSERT_ID();
INSERT INTO private_messages (messageID, message)
VALUES (LAST_INSERT_ID(), 'the message');
The second INSERT is guaranteed to use a unique sequence number. This guarantee holds even if you have dozens of different client programs connected to your database. That's the beauty of AUTO_INCREMENT.
The second query (DELETE) keeps the table from getting big and wasting space. We don't care about any rows in the table except for the most recent one.
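The same three steps work for the other table as well; for example:
INSERT INTO sequence () VALUES ();
DELETE FROM sequence WHERE sequence_id < LAST_INSERT_ID();
INSERT INTO public_messages (messageId, message)
VALUES (LAST_INSERT_ID(), 'another message');
Because both tables draw their IDs from the same sequence table, a messageId handed out to private_messages can never be reused by public_messages. (LAST_INSERT_ID() is tracked per connection, so concurrent clients don't interfere with each other.)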
Edit: If you're using PHP, simply issue the three queries one after the other using three calls to mysqli_query() or the equivalent method in the MySQL interface you have chosen for your program.
All that being said, beware of false economy. Don't forget that storage on Amazon S3 costs USD 0.36 per year per gigabyte. And that's the most expensive storage. The "wasted" storage cost for putting your two kinds of tables into a single table will likely amount to a few dollars. Troubleshooting a broken database app in production will cost thousands of dollars. Keep it simple!
Use a flag in a single table, like 1 for private messages and 0 for public, so it is easy to insert and easy to fetch and compare:
messageId | message | flag
----------+---------+-----
1         | text1   | 1
2         | text2   | 0
3         | text3   | 0
4         | text4   | 1
5         | text5   | 0
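A minimal sketch of that combined table (the column types are assumptions, since the question doesn't give them):
CREATE TABLE messages (
    messageId INT NOT NULL AUTO_INCREMENT,
    message   TEXT,
    flag      TINYINT(1) NOT NULL,  -- 1 = private, 0 = public
    PRIMARY KEY (messageId)
);
With a single table, a plain AUTO_INCREMENT primary key already guarantees IDs that are unique across both kinds of messages, and fetching one kind is just a matter of WHERE flag = 1 or WHERE flag = 0.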
There is no way to do this automatically that I'm aware of.
You might be able to write a function in the DB to make it happen, but I don't recommend it.
Mark Baker's suggestion, to have a single messages table and a public/private flag sounds like the best way to go if you absolutely need IDs to be unique across both types of messages.
I am trying to prevent duplicate entries like this:
INSERT IGNORE INTO myTable( `val_1`, `val_2`, `val_3`, `date` )
VALUES ( '$var_1', '$var_2', '$var_3', now() )
The values I would like to check are the 3 val_x columns, but because now() always produces a new value, INSERT IGNORE does not work.
How can I exclude that last column from the uniqueness check?
Note: this is kind of like a cart, so I cannot simply make the first 3 values unique. There is a session variable that allows each user to see a unique collection.
From this diagram, the first 2 rows are duplicates since they belong to the same user session. The 3rd row is not a duplicate because it belongs to a different user session:
+---------+-------+--------+
| session | var 1 | var 2 |
+---------+-------+--------+
| abc1234 | aaaaa | bbbbb |
+---------+-------+--------+
| abc1234 | aaaaa | bbbbb |
+---------+-------+--------+
| 5678def | aaaaa | bbbbb |
+---------+-------+--------+
| 5678def | aaaaa | ccccc |
+---------+-------+--------+
As paqogomez suggested, I removed now() from the query and altered the table, but it looks like I need a unique key for INSERT IGNORE to work, and for my scenario I can't make these 3 columns individually unique:
ERROR 1062: Duplicate entry 'aaaaa' for key 'var 1'
I would suggest moving the date into the default value of the column.
ALTER TABLE mytable CHANGE `date` `date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
This way, you can still deal with the duplicates in the data in PHP. The alternative, as others have suggested, would result in a duplicate-key error from the database if you attempted to insert a duplicate.
Then this SQL would work and give the same result:
INSERT IGNORE INTO myTable( `val_1`, `val_2`, `val_3` )
VALUES ( '$var_1', '$var_2', '$var_3' )
EDIT:
You still need a unique index to make it work. See @Gordon's answer below.
Create a unique index on the first three columns:
create unique index myTable_session_val1_val2 on myTable(session, val_1, val_2);
This will guarantee that combinations of these three are unique, without taking into account any other columns.
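To see the effect, here is a sketch using the values from the diagram (the column names session, val_1, val_2 are assumed from the question):
INSERT IGNORE INTO myTable (`session`, `val_1`, `val_2`)
VALUES ('abc1234', 'aaaaa', 'bbbbb');  -- inserted
INSERT IGNORE INTO myTable (`session`, `val_1`, `val_2`)
VALUES ('abc1234', 'aaaaa', 'bbbbb');  -- ignored: duplicate within the same session
INSERT IGNORE INTO myTable (`session`, `val_1`, `val_2`)
VALUES ('5678def', 'aaaaa', 'bbbbb');  -- inserted: different session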
You should define the UNIQUE key over the combination of columns you don't want duplicated, and leave the date column out of it. Did you verify that? Define the remaining three columns as one composite unique key and it should work.
I have a table that records tickets that are separated by a column that denotes the "database". I have a unique key on the database and cid columns so that it increments each database uniquely (cid has the AUTO_INCREMENT attribute to accomplish this). I increment id manually since I cannot make two AUTO_INCREMENT columns (and I'd rather the AUTO_INCREMENT take care of the more complicated task of the uniqueness).
This makes my data look like this basically:
-----------------------------
| id | cid | database |
-----------------------------
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 2 | 2 |
-----------------------------
This works perfectly well.
I am trying to make a feature that will allow a ticket to be "moved" to another database; frequently a user may enter the ticket in the wrong database. Instead of having to close the ticket and completely create a new one (copy/pasting all the data over), I'd like to make it easier for the user of course.
I want to be able to change the database and cid fields uniquely without having to tamper with the id field. I want to do an UPDATE (or the like) since there are foreign key constraints on other tables that link to the id field; this is why I don't simply do a REPLACE or DELETE then INSERT, as I don't want to delete all of the other table data and then have to recreate it (log entries, transactions, appointments, etc.).
How can I get the next unique AUTO_INCREMENT value (based on the new database value), then use that to update the desired row?
For example, in the above dataset, I want to change the first record to go to "database #2". Whatever query I make needs to make the data change to this:
-----------------------------
| id | cid | database |
-----------------------------
| 1 | 3 | 2 |
| 2 | 1 | 2 |
| 3 | 2 | 2 |
-----------------------------
I'm not sure if the AUTO_INCREMENT needs to be incremented, as my understanding is that the unique key makes it just calculate the next appropriate value on the fly.
I actually ended up making it work once I re-read an excerpt on using AUTO_INCREMENT on multiple columns.
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a
secondary column in a multiple-column index. In this case, the
generated value for the AUTO_INCREMENT column is calculated as
MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is
useful when you want to put data into ordered groups.
This was the clue I needed. I simply mimicked the query MySQL runs internally according to that quote, and joined it into my UPDATE query as follows. Assume $new_database is the database to move to, and $id is the current ticket id.
UPDATE `tickets` AS t1,
(
    -- IFNULL covers the case where the target database has no tickets yet;
    -- `database` is a reserved word in MySQL, so it must be backtick-quoted
    SELECT IFNULL(MAX(cid), 0) + 1 AS new_cid
    FROM `tickets`
    WHERE `database` = {$new_database}
) AS t2
SET t1.cid = t2.new_cid,
    t1.`database` = {$new_database}
WHERE t1.id = {$id}
I have a reporting table where I store a description:
tableA
sno  | Project | name  | description        | mins
-----+---------+-------+--------------------+-----
1    | prjA    | nameA | ABC -10% task done | 30
...
3000 | prjA    | nameB | ABC -70% task done | 70
I want to query the description field and save the results in another table:
tableB
id | valueStr | total_mins | last_sno
---+----------+------------+---------
1  | ABC      | 100        | 3000
If there is no entry in the second table, I create an entry with default values.
If there is an entry in the second table, I update it with the new total_mins and move last_sno up to the latest sno, say 3300, so that the next time I run this process I can read the running totals from the second table and scan the first table only from last_sno onwards.
Query
SELECT last_sno FROM tableB WHERE valueStr = 'ABC'
where 'ABC' is the first 3 characters of the description field, then:
SELECT MAX(sno), SUM(mins) FROM tableA
WHERE sno > last_sno AND description LIKE 'ABC%'
Since the first table has millions of rows, I search it with sno > last_sno; that should help performance, right?
But EXPLAIN shows that it scans the same number of rows as when I query the first table from the first sno.
The use of the index may not help you, because MySQL still has to scan the index from the last_sno to the end of the data. You would be better off with an index on TableA(description), because such an index can be used for description like 'ABC%'.
In fact, this might be a case where the index can hurt you. Instead of sequentially reading the pages in the table, the index reads them randomly -- which is less efficient.
EDIT: (too long for comment)
Try running the query with an IGNORE INDEX hint to see whether the query does better without the index. It is possible that the index is actually making things worse.
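For instance (a sketch; ix_sno stands in for whatever your index on sno is actually called):
SELECT MAX(sno), SUM(mins)
FROM tableA IGNORE INDEX (ix_sno)
WHERE sno > 3000 AND description LIKE 'ABC%';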
However, the "real" solution is to store the prefix you are interested in as a separate column. You can then add an index on this column and the query should work efficiently using basic SQL. You won't have to spend your time trying to optimize a simple process, because the data will be stored correctly for it.
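A sketch of that approach; the description_prefix column and index name are illustrative, not from the question:
-- store the 3-character prefix in its own indexed column
ALTER TABLE tableA ADD COLUMN description_prefix VARCHAR(3);
UPDATE tableA SET description_prefix = LEFT(description, 3);
CREATE INDEX ix_prefix_sno ON tableA (description_prefix, sno);

-- the incremental query can now seek directly to the 'ABC' group
SELECT MAX(sno), SUM(mins)
FROM tableA
WHERE description_prefix = 'ABC'
  AND sno > 3000;
With the composite (description_prefix, sno) index, MySQL jumps straight to the 'ABC' rows and range-scans only those beyond the stored last_sno, instead of scanning every row after last_sno.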