Can't insert new row into postgres database table? - php

I have an issue: I'm trying to insert a new row into a Postgres database table and I get the following error:
ERROR: duplicate key violates unique constraint "n_clients_pkey"
Here's my query:
insert into n_clients(client_name) values( 'value');
I'm using postgres 8.1.11
PostgreSQL 8.1.11 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
Here's the structure for my table
Table "public.n_clients"
Column | Type | Modifiers
-------------+--------------------------+-----------------------------------------------------------------------
id | integer | not null default nextval(('public.n_clients_id_seq'::text)::regclass)
client_name | character varying(200) | not null
moddate | timestamp with time zone | default now()
createdate | timestamp with time zone | default now()
Indexes:
"n_clients_pkey" PRIMARY KEY, btree (id)
and the sequence
Sequence "public.n_clients_id_seq"
Column | Type
---------------+---------
sequence_name | name
last_value | bigint
increment_by | bigint
max_value | bigint
min_value | bigint
cache_value | bigint
log_cnt | bigint
is_cycled | boolean
is_called | boolean

A row with that key already exists, therefore you cannot insert another one. What is the primary key of your relation? Is it backed by a sequence? If so, maybe the sequence got stuck (for example after importing data). You should reset it manually to the next free ID (e.g., if the maximum ID is 41, run: SELECT setval('your_seq', 42);) and then try again.
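For the table in the question, that would look something like this (a sketch; the sequence name comes from the id column's default shown above, adjust if yours differs):
SELECT last_value FROM public.n_clients_id_seq;  -- where the sequence currently is
SELECT MAX(id) FROM n_clients;                   -- highest id actually used in the table
-- If last_value <= MAX(id), move the sequence past it so the next nextval() returns a free id:
SELECT setval('public.n_clients_id_seq', (SELECT MAX(id) FROM n_clients));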

You must have a UNIQUE constraint on your table that your insert is violating -- i.e., considering the name of your table and index, you are probably trying to insert a client that already exists in your table.

Typically one gets into this situation by manually adding a record with an id field that matches the current value for the sequence. It's easy to introduce this by some common dump/reload operations for example. I wrote an article about correcting for this sort of error across the entire database at Fixing Sequences.

A PostgreSQL table should have its primary key defined when it is created; you cannot add a row whose key value already exists, so any data you insert manually has to use a key that is still free.

The 8.1 version is dated.
8.4 displays a much better error message:
ERROR: duplicate key value violates unique constraint "master_pkey"
DETAIL: Key (id)=(1) already exists.

Related

MYSQL UPDATE on KEY - duplicates [duplicate]

This problem seems easy at first sight, but I simply have not found a solution that is reasonable time-wise.
Consider a table with the following characteristics:
ID INTEGER PRIMARY KEY AUTOINCREMENT
name INTEGER
values1 INTEGER
values2 INTEGER
dates DATE
Every day, N new rows are generated for dates in the future, with 'name' coming from a finite list. I would like to insert a new row when there is new data, but if there is already a row with that 'name' and 'dates', simply update it.
Please note that the currently proposed solution of a stored procedure that checks the condition is not feasible, as this data is being pushed from another language.
That is what INSERT ... ON DUPLICATE KEY UPDATE is for.
The Manual page for it is here.
The trick is that the table needs to have a unique key (it can be composite) so that the clash on insert can be detected. When a clash occurs, the update is applied to that row; otherwise a new row is inserted. The key can be the primary key, of course.
In your case, you could have a composite key such as
unique key(theName,theDate)
If the row is already there, the clash is detected, and the update happens.
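If the table already exists, that key can be added after the fact; a sketch, with yourTable standing in for your actual table and using the hypothetical column names from the line above:
ALTER TABLE yourTable ADD UNIQUE KEY uq_name_date (theName, theDate);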
Here is a complete example
create table myThing
( id int auto_increment primary key,
name int not null,
values1 int not null,
values2 int not null,
dates date not null,
unique key(name,dates) -- <---- this line here is darn important
);
insert myThing(name,values1,values2,dates) values (777,1,1,'2015-07-11') on duplicate key update values2=values2+1;
insert myThing(name,values1,values2,dates) values (778,1,1,'2015-07-11') on duplicate key update values2=values2+1;
-- do the 1st one a few more times:
insert myThing(name,values1,values2,dates) values (777,1,1,'2015-07-11') on duplicate key update values2=values2+1;
insert myThing(name,values1,values2,dates) values (777,1,1,'2015-07-11') on duplicate key update values2=values2+1;
insert myThing(name,values1,values2,dates) values (777,1,1,'2015-07-11') on duplicate key update values2=values2+1;
show results
select * from myThing;
+----+------+---------+---------+------------+
| id | name | values1 | values2 | dates      |
+----+------+---------+---------+------------+
|  1 |  777 |       1 |       4 | 2015-07-11 |
|  2 |  778 |       1 |       1 | 2015-07-11 |
+----+------+---------+---------+------------+
As expected, insert on duplicate key update works, just 2 rows.
This is easy:
Create a unique key on the columns to check
Use the INSERT .. ON DUPLICATE KEY UPDATE construct

Update query in Cassandra 2.3

This is my table structure:
CREATE TABLE manage_files_log (
account_sid uuid,
file_type text,
file_sid timeuuid,
date_created timestamp,
file_description text,
file_name text,
status int,
url text,
PRIMARY KEY ((account_sid, file_type), file_sid)
) WITH CLUSTERING ORDER BY (file_sid DESC)
In this table I want to update records with this query:
UPDATE manage_files_log SET url='$url' WHERE account_sid =e40daea7-b1ec-088a-fc23-26f67f2052b9 AND file_type ='json' AND file_sid=961883e0-208f-11e6-9c41-474a6606bc87;
but it is inserting a new record instead of updating the existing record.
Please help me.
Below is an example of the data in the table where I want to update the url column value:
account_sid                          | file_type | file_sid                             | date_created             | file_description | file_name | status | url
-------------------------------------+-----------+--------------------------------------+--------------------------+------------------+-----------+--------+-----
e40daea7-b1ec-088a-fc23-26f67f2052b9 | json      | e15e02f0-20ab-11e6-9c41-474a6606bc87 | 2016-05-22 00:00:00+0000 | descripton       | testUrl1  |      1 |
Okay, there's one thing you have to know about Cassandra: separate update and insert operations don't exist. I know, you write UPDATE or INSERT in your query, but it's the same thing. It's called an upsert. You might think: Whaaat? But there's a reason for this: Cassandra is a masterless distributed database system, and you generally have no transactions. If you insert a value on node1 and want to update it 10 ms later via node2, it can happen that your first value hasn't reached node2 yet. With a strict UPDATE, your second operation would fail. Cassandra ignores this and writes the value to node2 anyway. After a while node1 and node2 synchronize their values, and at that point node2 gets the right value from node1. (Cassandra uses an internal column timestamp for this synchronization.)
But you can also make an update behave like a real update: simply add IF EXISTS to your query. But remember one thing: it's a big performance killer, because Cassandra has to read the value from the other replicas first!
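A sketch of the same statement as a conditional update (values taken from the question; '$url' is the application-supplied value):
UPDATE manage_files_log
SET url = '$url'
WHERE account_sid = e40daea7-b1ec-088a-fc23-26f67f2052b9
  AND file_type = 'json'
  AND file_sid = 961883e0-208f-11e6-9c41-474a6606bc87
IF EXISTS;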

MyISAM Engine table relations (MySQL)

I'm using a host which only supports the MyISAM table engine for MySQL. I'm trying to create a CMS using PHP and MySQL; however, I'm having issues working out how to create relationships between the tables. For example, one of the features within this system is being able to assign tags to an article/blog post, similar to how Stack Overflow has tags on its questions.
My question is: as I cannot change my tables to use InnoDB, how can I form a relationship between the two tables? I am unable to use foreign keys as they are not supported in MyISAM, or at least not enforced.
So far, all I've found when searching is keeping track of it through PHP by ensuring that I update multiple tables at a time, but there must be a way of doing this on the MySQL side.
Below are examples of the Article and Tag tables.
+---------------------------+ +---------------------------+
| Article | | Tags |
+---------------------------+ +---------------------------+
| articleID int(11) | | tagID int(11) |
| title varchar(150) | | tagString varchar(15) |
| description varchar(150) | +---------------------------+
| author varchar(30) |
| content text |
| created datetime |
| edited datetime |
+---------------------------+
I've found loads of related questions on this site, but most of them use InnoDB, which I cannot use as my host does not support it.
I've found a solution (kind of). I've added another table called ArticleTags
+---------------------------+
| ArticleTags |
+---------------------------+
| articleID int(11) |
| tagID int(11) |
+---------------------------+
This query returns the correct result, but I'm not sure if it's a bit of a hack, or if there is a better way to do it.
SELECT `tagString`
FROM `Tags`
WHERE `tagID`
IN (
SELECT `tagID`
FROM `ArticleTags`
WHERE `articleID` = :id
)
ORDER BY `Tags`.`tagString`
Can someone tell me if this is right?
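The subquery form works; an equivalent way to write it is a JOIN over the junction table (a sketch, using the same table and column names as above):
SELECT `Tags`.`tagString`
FROM `Tags`
JOIN `ArticleTags` ON `ArticleTags`.`tagID` = `Tags`.`tagID`
WHERE `ArticleTags`.`articleID` = :id
ORDER BY `Tags`.`tagString`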
Try TRIGGERs:
Enforcing Foreign Keys Programmatically in MySQL
Emulating Cascading Operations From InnoDB to MyISAM Tables
Example: MyISAM with an emulated foreign key:
Create parent table:
CREATE TABLE myisam_parent
(
mparent_id INT NOT NULL,
PRIMARY KEY (mparent_id)
) ENGINE=MYISAM;
Create child table:
CREATE TABLE myisam_child
(
mparent_id INT NOT NULL,
mchild_id INT NOT NULL,
PRIMARY KEY (mparent_id, mchild_id)
) ENGINE = MYISAM;
Create trigger (with DELIMITER):
DELIMITER $$
CREATE TRIGGER insert_myisam_child
BEFORE INSERT ON myisam_child
FOR EACH ROW
BEGIN
IF (SELECT COUNT(*) FROM myisam_parent WHERE mparent_id=new.mparent_id)=0 THEN
INSERT error_msg VALUES ('Foreign Key Constraint Violated!'); -- custom error raised via a duplicate-key insert
END IF;
END;$$
DELIMITER ;
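The trigger above relies on a small helper table, error_msg, that already contains the message row, so the INSERT inside the trigger hits a duplicate-key error and aborts the offending statement. A minimal sketch of that helper (the table name comes from the trigger; its exact layout here is an assumption):
CREATE TABLE error_msg
(
msg VARCHAR(64) NOT NULL,
PRIMARY KEY (msg)
) ENGINE=MYISAM;
-- Seed the row the trigger will collide with:
INSERT INTO error_msg VALUES ('Foreign Key Constraint Violated!');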
Test case:
Insert valid rows (3 rows into myisam_parent and 6 rows into myisam_child):
INSERT INTO myisam_parent VALUES (1), (2), (3);
INSERT INTO myisam_child VALUES (1,1), (1,2), (2,1), (2,2), (2,3), (3,1);
Then try an insert that violates the emulated constraint:
INSERT INTO myisam_child VALUES (7, 1);
Returns this error:
ERROR 1062 (23000): Duplicate entry 'Foreign Key Constraint Violated!' for key 'PRIMARY'
Note:
This example is for INSERT; for DELETE and UPDATE triggers, see the links at the beginning of this answer.
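As an illustration only, a restricting DELETE could be emulated with the same duplicate-key trick (a sketch; the linked articles cover the cascading variants properly):
DELIMITER $$
CREATE TRIGGER delete_myisam_parent
BEFORE DELETE ON myisam_parent
FOR EACH ROW
BEGIN
IF (SELECT COUNT(*) FROM myisam_child WHERE mparent_id = old.mparent_id) > 0 THEN
INSERT error_msg VALUES ('Foreign Key Constraint Violated!'); -- same duplicate-key trick as above
END IF;
END;$$
DELIMITER ;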

Table with 2 relations/foreign keys is not made/ignored [duplicate]

I have a production database where I have renamed several columns that are foreign keys. Obviously MySQL makes this a real pain to do, in my experience.
My solution was to drop all the indexes and foreign keys, rename the id columns, and then re-add the indexes and foreign keys.
This works great on mysql 5.1 on windows for the development database.
I went to run my migration script on my debian server, which is also using mysql 5.1, and it gives the following error:
mysql> ALTER TABLE `company_to_module`
-> ADD CONSTRAINT `FK82977604FE40A062` FOREIGN KEY (`company_id`) REFERENCES `company` (`company_id`) ON DELETE RESTRICT ON UPDATE RESTRICT;
ERROR 1005 (HY000): Can't create table 'jobprep_production.#sql-44a5_76' (errno: 150)
There are no values in this table that would conflict with the foreign key I am trying to add. The database hasn't changed. The foreign key DID exist before... so the data is fine. Let's not mention that I took the SAME database that I have on the server and it migrates fine on Windows. But these same foreign key migrations are not taking on Debian.
The columns are using the same type - BIGINT (20)
The names do in fact exist in their respective tables.
The tables are innodb. They already have foreign keys in other columns as it is. This is not a new database.
I cannot drop tables because this is a production database.
The tables "as is" in my database:
CREATE TABLE `company_to_module` (
`company_id` bigint(20) NOT NULL,
`module_id` bigint(20) NOT NULL,
KEY `FK8297760442C8F876` (`module_id`),
KEY `FK82977604FE40A062` (`company_id`) USING BTREE,
CONSTRAINT `FK8297760442C8F876` FOREIGN KEY (`module_id`) REFERENCES `module` (`module_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
And
Create Table: CREATE TABLE `company` (
`company_id` bigint(20) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`address` varchar(255) DEFAULT NULL,
`postal_code` varchar(255) DEFAULT NULL,
`province_id` bigint(20) DEFAULT NULL,
`phone_number` varchar(255) DEFAULT NULL,
`is_enabled` bit(1) DEFAULT NULL,
`director_id` bigint(20) DEFAULT NULL,
`homepage_viewable` bit(1) NOT NULL DEFAULT b'1',
`courses_created` int(10) NOT NULL DEFAULT '0',
`header_background` varchar(25) DEFAULT '#172636',
`display_name` varchar(25) DEFAULT '#ffffff',
`tab_background` varchar(25) DEFAULT '#284767',
`tab_text` varchar(25) DEFAULT '#ffffff',
`hover_tab_background` varchar(25) DEFAULT '#284767',
`hover_tab_text` varchar(25) DEFAULT '#f2e0bd',
`selected_tab_background` varchar(25) DEFAULT '#f5f5f5',
`selected_tab_text` varchar(25) DEFAULT '#172636',
`hover_table_row_background` varchar(25) DEFAULT '#c0d2e4',
`link` varchar(25) DEFAULT '#4e6c92',
PRIMARY KEY (`company_id`),
KEY `FK61AE555A71DF3E03` (`province_id`),
KEY `FK61AE555AAC50C977` (`director_id`),
CONSTRAINT `company_ibfk_1` FOREIGN KEY (`director_id`) REFERENCES `user_account` (`user_account_id`),
CONSTRAINT `FK61AE555A71DF3E03` FOREIGN KEY (`province_id`) REFERENCES `province` (`province_id`)
) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8
Here is the innodb status:
------------------------
LATEST FOREIGN KEY ERROR
------------------------
110415 3:14:34 Error in foreign key constraint of table jobprep_production/#sql-44a5_1bc:
FOREIGN KEY (`company_id`) REFERENCES `company` (`company_id`) ON DELETE RESTRICT ON UPDATE RESTRICT:
Cannot resolve column name close to:
) ON DELETE RESTRICT ON UPDATE RESTRICT
If I try and drop the index from 'company_to_module', I get this error:
#1025 - Error on rename of './jobprep_production/#sql-44a5_23a' to './jobprep_production/company_to_module' (errno: 150)
Here are my innodb variables:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_size | 8388608 |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_io_threads | 4 |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | |
| innodb_force_recovery | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 1048576 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 90 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_open_files | 300 |
| innodb_rollback_on_timeout | OFF |
| innodb_stats_on_metadata | ON |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 20 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 8 |
| innodb_thread_sleep_delay | 10000 |
+---------------------------------+------------------------+
I also want to add that while I was playing with adding the foreign keys, mysql corrupted my database and destroyed it. I had to reload from a backup to try again.
Help? :/
Are both tables InnoDB type?
Does the company table have an index on company_id ?
I guess that your table is MyISAM (the default if you haven't changed the config) and you can't create foreign key constraints in MyISAM. See the description of the CREATE TABLE for your two tables.
If both tables are empty, drop them and re-create them, choosing InnoDB as engine. You could also add the FOREIGN KEY constraints in the tables creation script(s).
From MySQL Reference Manual:
Foreign keys definitions are subject to the following conditions:
Both tables must be InnoDB tables and they must not be TEMPORARY tables.
Corresponding columns in the foreign key and the referenced key must have similar internal data types inside InnoDB so that they can be compared without a type conversion. The size and sign of integer types must be the same. The length of string types need not be the same. For nonbinary (character) string columns, the character set and collation must be the same.
InnoDB requires indexes on foreign keys and referenced keys so that foreign key checks can be fast and not require a table scan. In the referencing table, there must be an index where the foreign key columns are listed as the first columns in the same order. Such an index is created on the referencing table automatically if it does not exist. (This is in contrast to some older versions, in which indexes had to be created explicitly or the creation of foreign key constraints would fail.) index_name, if given, is used as described previously.
InnoDB permits a foreign key to reference any index column or group of columns. However, in the referenced table, there must be an index where the referenced columns are listed as the first columns in the same order.
Index prefixes on foreign key columns are not supported. One consequence of this is that BLOB and TEXT columns cannot be included in a foreign key because indexes on those columns must always include a prefix length.
If the CONSTRAINT symbol clause is given, the symbol value must be unique in the database. If the clause is not given, InnoDB creates the name automatically.
#egervari: What happens if you run this:
CREATE TABLE `test` (
`company_id` bigint(20) NOT NULL,
`module_id` bigint(20) NOT NULL,
KEY (`module_id`),
KEY (`company_id`),
CONSTRAINT `test_fk_module`
FOREIGN KEY (`module_id`)
REFERENCES `module` (`module_id`),
CONSTRAINT `test_fk_company`
FOREIGN KEY (`company_id`)
REFERENCES `company` (`company_id`)
ON DELETE RESTRICT
ON UPDATE RESTRICT
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
And if you run:
ALTER TABLE `company_to_module`
ADD CONSTRAINT `company_to_module_fk_company`
FOREIGN KEY (`company_id`)
REFERENCES `company` (`company_id`)
ON DELETE RESTRICT
ON UPDATE RESTRICT;
Ensure that company_to_module.company_id and company.company_id are the EXACT same datatype. I had this happen when the primary key was setup as an UNSIGNED INT but the foreign key field was just an INT. Adding UNSIGNED to the datatype fixed the problem.
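A sketch of how to check and, if needed, align the two definitions (the UNSIGNED here only illustrates the kind of mismatch described; in the schema posted above both columns are already plain BIGINT(20)):
-- Compare the exact column definitions first:
SHOW CREATE TABLE company;
SHOW CREATE TABLE company_to_module;
-- If the parent column is UNSIGNED and the child is not, make the child match:
ALTER TABLE company_to_module MODIFY company_id BIGINT(20) UNSIGNED NOT NULL;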
I have simply applied the refactorings using Windows and then reimported the database into Debian - it works.
I think it's safe to say that something was messed up on the Debian server, or with the Linux version of MySQL - perhaps a bug in the 5.1 build?
Anyway, I have also upgraded the RAM on the server from 1 GB to 2 GB, and these problems have gone away.
I think MySQL maybe just didn't have enough RAM to complete the operation. If that was the case (and it seems to be), I think MySQL should have simply said so rather than spitting out these errors - making me and everyone here think it was a syntax or schema-related problem.
Anyway, thanks to those who tried to help. At least it helped me to isolate all the things it couldn't have been.
Since it doesn't seem to be anything syntax-related, my best guess would be that you're running out of space for creating InnoDB tables.
EDIT: Can you paste your InnoDB configuration:
SHOW VARIABLES LIKE "inno%";
Since trying to create a copy of company_to_module manually gives you the same error, you should carefully check the fk constraint already present in company_to_module. Is it still valid, or did you modify the table module?
From the MySQL-Docs:
1005 (ER_CANT_CREATE_TABLE) Cannot create table. If the error message refers to error 150, table creation failed because a foreign key constraint was not correctly formed.
#egervari You wrote - My solution was to drop all the indexes and foreign keys, rename the id columns, and then re-add the indexes and foreign keys.
Agree with you. But it might be that something went wrong. I reproduced the error, and (in my case) fixed it.
I'd suggest you run the OPTIMIZE TABLE command on the table where the column was renamed. The documentation says: for InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE, which rebuilds the table to update index statistics and free unused space in the clustered index.
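For example, against the tables from the question (run it on whichever table had its column renamed):
OPTIMIZE TABLE company;
OPTIMIZE TABLE company_to_module;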
One more solution:
Drop the unique key in the referenced table (the key that is used by the foreign key; in your case it is the primary key). Then add the new foreign key and recreate the dropped unique key.
One more solution:
Try adding and then dropping a new column on the referenced table, then try to create your foreign key:
ALTER TABLE company ADD COLUMN column1 VARCHAR(255) DEFAULT NULL;
ALTER TABLE company DROP COLUMN column1;

Searching MySQL data

I am trying to search a MySQL database with a search key entered by the user. My data contains both upper case and lower case. My question is how to make my search function case-insensitive. For example, the data in MySQL is BOOK, but if the user enters book in the search input, the result is not found. Thanks.
My search code:
$searchKey=$_POST['searchKey'];
$searchKey=mysql_real_escape_string($searchKey);
$result=mysql_query("SELECT *
FROM product
WHERE product_name like '%$searchKey%' ORDER BY product_id
",$connection);
Just uppercase the search string and compare it to the uppercase field.
$searchKey= strtoupper($_POST['searchKey']);
$searchKey=mysql_real_escape_string($searchKey);
$result=mysql_query("SELECT * FROM product
WHERE UPPER(product_name) like '%$searchKey%' ORDER BY product_id
",$connection);
If possible, you should avoid using UPPER as a solution to this problem, as it incurs both the overhead of converting the value in each row to upper case, and the overhead of MySQL being unable to use any index that might be on that column.
If your data does not need to be stored in case-sensitive columns, then you should select the appropriate collation for the table or column. See my answer to how i can ignore the difference upper and lower case in search with mysql for an example of how collation affects case sensitivity.
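For example, a sketch of switching the column to a case-insensitive collation and then searching with a plain LIKE (the VARCHAR length and character set here are assumptions; match them to your actual schema):
ALTER TABLE product
  MODIFY product_name VARCHAR(255)
  CHARACTER SET utf8 COLLATE utf8_general_ci;
-- With a case-insensitive (_ci) collation, LIKE matches 'BOOK', 'book', 'Book', etc.:
SELECT product_id, product_name
FROM product
WHERE product_name LIKE '%book%'
ORDER BY product_id;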
The following shows the EXPLAIN SELECT results from two queries. One uses UPPER, one doesn't:
DROP TABLE IF EXISTS `table_a`;
CREATE TABLE `table_a` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`value` varchar(255) DEFAULT NULL,
INDEX `value` (`value`),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
INSERT INTO table_a (value) VALUES
('AAA'), ('BBB'), ('CCC'), ('DDD'),
('aaa'), ('bbb'), ('ccc'), ('ddd');
EXPLAIN SELECT id, value FROM table_a WHERE UPPER(value) = 'AAA';
+----+-------------+---------+-------+---------------+-------+---------+------+------+--------------------------+
| id | select_type | table   | type  | possible_keys | key   | key_len | ref  | rows | Extra                    |
+----+-------------+---------+-------+---------------+-------+---------+------+------+--------------------------+
|  1 | SIMPLE      | table_a | index | NULL          | value | 258     | NULL |    8 | Using where; Using index |
+----+-------------+---------+-------+---------------+-------+---------+------+------+--------------------------+
EXPLAIN SELECT id, value FROM table_a WHERE value = 'AAA';
+----+-------------+---------+------+---------------+-------+---------+-------+------+--------------------------+
| id | select_type | table   | type | possible_keys | key   | key_len | ref   | rows | Extra                    |
+----+-------------+---------+------+---------------+-------+---------+-------+------+--------------------------+
|  1 | SIMPLE      | table_a | ref  | value         | value | 258     | const |    2 | Using where; Using index |
+----+-------------+---------+------+---------------+-------+---------+-------+------+--------------------------+
Notice that the first SELECT which uses UPPER has to scan all the rows, whereas the second only needs to scan two - the two that match. On a table this size, the difference is obviously imperceptible, but with a large table, a full table scan can seriously impact the speed of your query.
This is an easy way to do it:
$searchKey=strtoupper($searchKey);
SELECT *
FROM product
WHERE UPPER(product_name) like '%$searchKey%' ORDER BY product_id
First of all, try to avoid using * as much as possible. It is generally considered a bad idea. Select the columns using column names.
Now, your solution would be -
$searchKey=strtoupper($_POST['searchKey']);
$searchKey=mysql_real_escape_string($searchKey);
$result=mysql_query("SELECT product_name /* , your other columns */
FROM product
WHERE UPPER(product_name) like '%$searchKey%' ORDER BY product_id
",$connection);
EDIT
I will try to explain why it is a bad idea to use *. Suppose you need to change the schema of the product table (adding/deleting columns). Then the columns that are being selected through this query will change, which may cause unintended side effects that are hard to detect.
According to the MySQL manual, case-sensitivity in searches depends on the collation used, and should be case-insensitive by default for non-binary fields.
Make sure you have the field types and the query right (maybe there's an extra space or something). If that doesn't work, you can convert the string to upper case in PHP (i.e. $str = strtoupper($str)) and do the same on the MySQL side (#despart)
EDIT: I posted the article above (^), and I just tested it. Searches on CHAR, VARCHAR, and TEXT fields are case-insensitive (collation = latin1).
