I have a person table and a score table. The person table stores a person's information, while the score table stores the scores a person has. I set the FK constraint in the score table to ON DELETE CASCADE.
person
- id
- name
- scored_id (FK)
score
- id (PK)
- bmi
- weight
So, in this setup, score.id is linked to person's scored_id. As a result, when I delete a record in score, the corresponding person gets deleted as well. But why, when I delete a record in person, is that person's record in score not deleted?
Just an idea of how you might structure the tables and use a foreign key that deletes records from the score table if/when a user in the person table is deleted. The score table should hold the reference to the user (pid), which is used as the foreign key dependency. It makes sense to me that the score is dependent upon the user: no user, no score.
create table `person` (
`id` int(10) unsigned not null auto_increment,
`name` varchar(50) null default null,
primary key (`id`)
)
collate='latin1_swedish_ci'
engine=innodb
auto_increment=4;
mysql> describe person;
+-------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(50) | YES | | NULL | |
+-------+------------------+------+-----+---------+----------------+
create table `score` (
`id` int(10) unsigned not null auto_increment,
`bmi` int(10) unsigned not null default '0',
`weight` int(10) unsigned not null default '0',
`pid` int(10) unsigned not null default '0',
primary key (`id`),
index `pid` (`pid`),
constraint `fk_sc_pid` foreign key (`pid`) references `person` (`id`) on update cascade on delete cascade
)
collate='latin1_swedish_ci'
engine=innodb
auto_increment=4;
mysql> describe score;
+--------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| bmi | int(10) unsigned | NO | | 0 | |
| weight | int(10) unsigned | NO | | 0 | |
| pid | int(10) unsigned | NO | MUL | 0 | |
+--------+------------------+------+-----+---------+----------------+
mysql> select * from person;
+----+------+
| id | name |
+----+------+
| 1 | bob |
| 2 | rita |
| 3 | sue |
+----+------+
mysql> select * from score;
+----+-----+--------+-----+
| id | bmi | weight | pid |
+----+-----+--------+-----+
| 1 | 34 | 34 | 1 |
| 2 | 56 | 41 | 2 |
| 3 | 56 | 77 | 3 |
+----+-----+--------+-----+
mysql> delete from person where id=3;
Query OK, 1 row affected (0.00 sec)
/* delete a user, the score disappears too which makes sense */
mysql> select * from score;
+----+-----+--------+-----+
| id | bmi | weight | pid |
+----+-----+--------+-----+
| 1 | 34 | 34 | 1 |
| 2 | 56 | 41 | 2 |
+----+-----+--------+-----+
Your issue is with the semantic understanding of the task, rather than syntax. Intuitively, your relation looks wrong. It is unlikely that a particular score, say 75 kg and a BMI of 20, needs to link to the many people who happen to share that score; that would be arbitrary. More likely, you want a person to have different scores over time, and when you delete a person, you want their associated values deleted. So the table relation should be:
person
- id (Primary Key)
- name
score
- id (Primary Key)
- bmi
- weight
- scoreDate
- personID (Foreign Key to person)
A score date would be a helpful addition.
This structure will allow a person to have a history of many scores, and to see the fluctuation of their weight and body mass index over time. That is a semantically meaningful model which resonates with reality, and it follows the notion that entity analysis and table structure should mirror the real-world application.
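A minimal DDL sketch of that shape, untested, using the scoreDate and personID columns suggested above (names and types are illustrative):
create table `person` (
`id` int unsigned not null auto_increment,
`name` varchar(50) null default null,
primary key (`id`)
) engine=innodb;
create table `score` (
`id` int unsigned not null auto_increment,
`bmi` int unsigned not null default '0',
`weight` int unsigned not null default '0',
`scoreDate` date not null,
`personID` int unsigned not null,
primary key (`id`),
key `personID` (`personID`),
constraint `fk_score_personID` foreign key (`personID`) references `person` (`id`) on update cascade on delete cascade
) engine=innodb;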
Helpful discussion of ERD and table structure levels and relations
In your tables, the "person" table holds the reference (FK) to the "score" table, so when you delete a record in the "score" table, MySQL looks for the related record in the "person" table to delete.
But the "score" table does not have any reference (FK) to the "person" table.
You can try the table structure below if you want the score record to be deleted when a person record is deleted, while the person record stays safe if a score record is deleted:
person
- id (PK)
- name
score
- id (PK)
- person_id (FK)
- bmi
- weight
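A rough sketch of the cascading constraint for that layout (after dropping the old scored_id column/FK from person; the constraint name is illustrative):
alter table `score`
add constraint `fk_score_person`
foreign key (`person_id`) references `person` (`id`)
on delete cascade;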
Related
I've two tables, the first table contains information on the ideas submitted by user and the second table contains information on the file attachments that are part of the idea. An idea submitted by the user can have 0 or any number of attachments.
Table 1:
-------------------------------------
Id Title Content Originator
-------------------------------------
1 aaa bbb John
2 ccc ddd Peter
--------------------------------------
Table 2:
---------------------------------------------
Id Idea_id Attachment_name
---------------------------------------------
1 1 file1.doc
2 1 file2.doc
3 1 file3.doc
4 2 user2.doc
---------------------------------------------
Table 1 primary key is Id and table 2 primary key is Id as well. Idea_id is the foreign key in table 2 mapping to table 1 Id.
I'm trying to display all the ideas, along with their attachments, on an HTML page. So what I've been doing is: get all the ideas from Table 1, and then for each idea record, retrieve the attachment records from Table 2. It seems extremely inefficient. Could this be optimized so that I can retrieve idea records and their corresponding attachment records in one query?
I tried a left outer join (Table 1 left outer join Table 2), but that gives me three records for Id = 1 in Table 1. I'm looking for a SQL query that combines the idea details and attachment names into one row to make the HTML page processing efficient. Otherwise, what would be the best solution for this?
If you want to get all attachments along with all ideas, you can use GROUP_CONCAT, such as:
SELECT *, (SELECT GROUP_CONCAT(attachment_name separator ', ') FROM TABLE2 WHERE idea_id = TABLE1.id) attachments FROM TABLE1
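An equivalent way to write it with a join and GROUP BY (a sketch, assuming the tables are literally named table1 and table2; the LEFT JOIN also keeps ideas that have no attachments):
SELECT t1.id, t1.title, t1.content, t1.originator,
       GROUP_CONCAT(t2.attachment_name SEPARATOR ', ') AS attachments
FROM table1 t1
LEFT JOIN table2 t2 ON t2.idea_id = t1.id
GROUP BY t1.id, t1.title, t1.content, t1.originator;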
I probably missed the point, but a left join should bring back all the records:
create table `ideas` (
`id` int(10) unsigned not null auto_increment,
`title` varchar(50) not null,
`content` varchar(50) not null,
`originator` varchar(50) not null,
primary key (`id`)
)
engine=innodb
auto_increment=3;
create table `attachments` (
`id` int(10) unsigned not null auto_increment,
`idea_id` int(10) unsigned not null default '0',
`attachment` varchar(50) not null default '0',
primary key (`id`),
index `idea_id` (`idea_id`),
constraint `fk_ideas` foreign key (`idea_id`) references `ideas` (`id`) on update cascade on delete cascade
)
engine=innodb
auto_increment=5;
mysql> select * from ideas;
+----+----------------+-----------+-----------------+
| id | title | content | originator |
+----+----------------+-----------+-----------------+
| 1 | Flux capacitor | Rubbish | Doc |
| 2 | Star Drive | Plutonium | Professor Frink |
+----+----------------+-----------+-----------------+
mysql> select * from attachments;
+----+---------+------------------------------+
| id | idea_id | attachment |
+----+---------+------------------------------+
| 1 | 1 | Flux capacitor schematic.jpg |
| 2 | 1 | Sensors.docx |
| 3 | 1 | fuel.docx |
| 4 | 2 | plans.jpg |
+----+---------+------------------------------+
mysql> select * from ideas i
-> left outer join attachments a on a.idea_id=i.id;
+----+----------------+-----------+-----------------+------+---------+------------------------------+
| id | title | content | originator | id | idea_id | attachment |
+----+----------------+-----------+-----------------+------+---------+------------------------------+
| 1 | Flux capacitor | Rubbish | Doc | 1 | 1 | Flux capacitor schematic.jpg |
| 1 | Flux capacitor | Rubbish | Doc | 2 | 1 | Sensors.docx |
| 1 | Flux capacitor | Rubbish | Doc | 3 | 1 | fuel.docx |
| 2 | Star Drive | Plutonium | Professor Frink | 4 | 2 | plans.jpg |
+----+----------------+-----------+-----------------+------+---------+------------------------------+
I am currently working on a user system and would like to set up multiple usergroups/ranks per member. I see other posts on here explaining it using foreign keys, denormalized tables, etc., but they're all from 2010-2012, so I wasn't sure whether there are easier/better standard ways of doing it now.
CREATE TABLE IF NOT EXISTS `users` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`username` varchar(50) NOT NULL,
`password` varchar(80) NOT NULL,
`email` varchar(80) NOT NULL,
`character_name` varchar(50) NOT NULL,
`verify` int(5) NOT NULL,
`rank` varchar(50) NOT NULL,
`position` int(10) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX (`email`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;
CREATE TABLE IF NOT EXISTS `rank` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`rank` varchar(50) NOT NULL,
`position` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=26 ;
The above are my two tables. Here are the two table structures with three examples in each:
users
id | username | password | email | character_name | verify | rank | position
1 | userA | passA | A#A | characterA | yes | mem | 1
2 | userB | passB | B#B | characterB | yes | mod | 1
3 | userC | passC | C#C | characterC | yes | adm | 3
rank
id | rank | position
1 | mem | 1
2 | mod | 1
3 | adm | 3
Users should be able to be in multiple ranks.
userA should only be a mem.
userB should be a mem and mod.
userC should be a mem, mod, and adm.
If I were to join both tables:
SELECT * FROM rank INNER JOIN users ON rank.position = users.position;
Would that cause userA to be mem and mod as well, since the position for both is 1?
Would it make more sense to remove rank and position from users, position from rank, and join based on user id?
For example (question 2):
users
id | username | password | email | character_name | verify
1 | userA | passA | A#A | characterA | yes
2 | userB | passB | B#B | characterB | yes
3 | userC | passC | C#C | characterC | yes
rank
id | rank
1 | mem
2 | mem
2 | mod
3 | mem
3 | mod
3 | adm
SELECT * FROM rank INNER JOIN users ON rank.id = users.id;
Once selected, I want certain ranks to be able to do certain things.
$sql = "SELECT * FROM rank INNER JOIN users ON rank.id = users.id";
$ranks = $conn->query($sql);
$ranks->execute();
foreach ($ranks as $row) {
    if ($row['rank'] == "mem") {
        // Do something.
    }
    if ($row['rank'] == "mod") {
        // Do something else.
    }
    if ($row['rank'] == "adm") {
        // Do another something else.
    }
}
Is the above correct?
Edited, because my first posting apparently wasn't clear enough.
Can anyone help?
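For reference, a hedged sketch of the usual many-to-many layout this is circling around: a separate link table keyed by user id and rank id. The table and column names below are illustrative and not part of the original schema; the data mirrors the userA/userB/userC example above.
CREATE TABLE `user_rank` (
  `user_id` int(10) NOT NULL,
  `rank_id` int(10) NOT NULL,
  PRIMARY KEY (`user_id`, `rank_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- userA (id 1) is only mem; userB is mem and mod; userC is mem, mod and adm
INSERT INTO user_rank (user_id, rank_id) VALUES (1,1),(2,1),(2,2),(3,1),(3,2),(3,3);
-- all ranks for one user
SELECT r.rank
FROM user_rank ur
INNER JOIN `rank` r ON r.id = ur.rank_id
WHERE ur.user_id = 3;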
Database example:
| country | animal | size | x_id* |
|---------+--------+--------+-------|
| 777 | 1001 | small | 1 |
| 777 | 2002 | medium | 2 |
| 777 | 7007 | medium | 3 |
| 777 | 7007 | large | 4 |
| 42 | 1001 | small | 1 |
| 42 | 2002 | medium | 2 |
| 42 | 7007 | large | 4 |
I need to generate x_id sequentially based on the (animal, size) combination, and if an x_id for that combination already exists, reuse it.
Currently I use the following PHP script for this, but on a large table it is very slow.
query("UPDATE myTable SET x_id = -1");
$i = $j;
$c = array();
$q = query("
SELECT animal, size
FROM myTable
WHERE x_id = -1
GROUP BY animal, size");
while($r = fetch_array($q)) {
$hk = $r['animal'] . '-' . $r['size'];
if( !isset( $c[$hk] ) ) $c[$hk] = $i++;
query("
UPDATE myTable
SET x_id = {$c[$hk]}
WHERE animal = '".$r['animal']."'
AND size = '".$r['size']."'
AND x_id = -1");
}
Is there a way to convert the PHP script to one or two mysql commands?
edit:
CREATE TABLE `myTable` (
`country` int(10) unsigned NOT NULL DEFAULT '1', -- country
`animal` int(3) NOT NULL,
`size` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`lang_id` tinyint(4) NOT NULL DEFAULT '1',
`x_id` int(10) NOT NULL,
KEY `country` (`country`),
KEY `x_id` (`x_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
UPDATE myTable m
JOIN (
SELECT animal, size, @newid := @newid + 1 AS x_id
FROM myTable a
CROSS JOIN (SELECT @newid := 0) b
WHERE x_id = -1
GROUP BY animal, size
) t ON m.animal = t.animal AND m.size = t.size
SET m.x_id = t.x_id
;
http://sqlfiddle.com/#!9/5525ba/1
The group by in the subquery is not needed. It generates useless overhead. If it's fast enough, leave it like this, otherwise we can use distinct+another subquery instead.
User variables are awkward but should do the trick; tested on my machine.
CREATE TABLE t
( animal VARCHAR(20),
size VARCHAR(20),
x_id INT);
INSERT INTO t(animal,size) VALUES('crocodile','small'),
('elephant','medium'),
('giraffe','medium'),
('giraffe','large'),
('crocodile','small'),
('elephant','medium'),
('giraffe','large');
UPDATE t RIGHT JOIN
(SELECT animal,size,
MIN(CASE WHEN @var:=CONCAT(animal,size) THEN @id ELSE @id:=@id+1 END) id
FROM t,
(SELECT @var:=CONCAT(animal,size) FROM t) x,
(SELECT @id:=0) y
GROUP BY animal,size)q
ON t.animal=q.animal AND t.size=q.size
SET x_id=q.id
Results
"animal" "size" "x_id"
"crocodile" "small" "1"
"elephant" "medium" "2"
"giraffe" "medium" "3"
"giraffe" "large" "4"
"crocodile" "small" "1"
"elephant" "medium" "2"
"giraffe" "large" "4"
You want these indexes added for (a lot) faster access
ALTER TABLE `yourtable` ADD INDEX `as_idx` (`animal`,`size`);
ALTER TABLE `yourtable` ADD INDEX `id_idx` (`x_id`);
This is conceptual. Worm it into your world if useful.
Schema
create table AnimalSize
( id int auto_increment primary key,
animal varchar(100) not null,
size varchar(100) not null,
unique key(animal,size) -- this is critical, no dupes
);
create table CountryAnimalSize
( id int auto_increment primary key,
country varchar(100) not null,
animal varchar(100) not null,
size varchar(100) not null,
xid int not null -- USE THE id achieved thru use of AnimalSize table
);
Some queries
-- truncate table animalsize; -- clobber and reset auto_increment back to 1
insert ignore AnimalSize(animal,size) values ('snake','small'); -- id=1
select last_insert_id(); -- 1
insert ignore AnimalSize(animal,size) values ('snake','small'); -- no real insert but creates id GAP (ie blows slot 2)
select last_insert_id(); -- 1
insert ignore AnimalSize(animal,size) values ('snake','small'); -- no real insert but creates id GAP (ie blows slot 3)
select last_insert_id(); -- 1
insert ignore AnimalSize(animal,size) values ('frog','medium'); -- id=4
select last_insert_id(); -- 4
insert ignore AnimalSize(animal,size) values ('snake','small'); -- no real insert but creates id GAP (ie blows slot 5)
select last_insert_id(); -- 4
Note: insert ignore says do it, and ignore the fact that it may die. In our case, it would fail due to unique key (which is fine). In general, do not use insert ignore unless you know what you are doing.
It is often thought of in connection with an insert on duplicate key update (IODKU) call. Or should I say thought about, as in "how can I solve this current predicament?" But IODKU would be a stretch in this case. Still, keep both in your toolchest as possible solutions.
After insert ignore fires off, you know, one way or the other, that the row is there.
Forgetting the INNODB GAP aspect, what the above suggests is that if the row already exists prior to the insert ignore, then:
You cannot rely on last_insert_id() for the id.
So after firing off insert ignore, go and fetch the id that you know has to be there, and use that in subsequent calls against CountryAnimalSize (a sketch follows below).
Continue along this line of reasoning for your CountryAnimalSize table inserts, where the row may or may not already be there.
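A minimal sketch of that fetch-then-use flow, against the AnimalSize and CountryAnimalSize tables above (the country value is just an example):
insert ignore AnimalSize(animal,size) values ('snake','small');
-- whether or not that actually inserted, the row exists now,
-- so fetch its id rather than trusting last_insert_id()
select id into @asid from AnimalSize where animal='snake' and size='small';
insert CountryAnimalSize(country,animal,size,xid) values ('777','snake','small',@asid);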
There is no reason to formalize the solution here because, as you say, these aren't even your tables anyway in the Question.
Also, back to INNODB GAP. Google that. Figure out whether or not you can live with gaps created.
Most people have bigger fish to fry than keeping ids tight and gapless.
Other people (read: OCD) are so consumed by the perceived gap problem that they blow days on it.
So, these are general comments meant more to help a broader audience than to answer your question, which, as you say, isn't even your schema.
You can construct x_id like this:
CONCAT(`animal`, '_', `size`) AS `x_id`
and then compare rows on that x_id, so that you will get something like:
+---------+-----------+--------+------------------+
| country | animal | size | x_id* |
+---------+-----------+--------+------------------+
| africa | crocodile | small | crocodile_small |
| africa | elephant | medium | elephant_medium |
| africa | giraffe | medium | giraffe_medium |
| africa | giraffe | large | giraffe_large |
| europe | crocodile | small | crocodile_small |
| europe | elephant | medium | elephant_medium |
| europe | giraffe | large | giraffe_large |
+---------+-----------+--------+------------------+
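In full, that is roughly the following (a sketch; on the question's myTable, where animal is numeric, the values would come out as 1001_small rather than crocodile_small):
SELECT country, animal, size, CONCAT(animal, '_', size) AS x_id
FROM myTable;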
As I see, you are already using the MyISAM engine. You can just define the country and x_id fields jointly as the PRIMARY KEY and set AUTO_INCREMENT on the x_id field. Now MySQL will do the rest for you! BINGO!
Here is the SQL Fiddle for you!
CREATE TABLE `myTable` (
`country` int(10) unsigned NOT NULL DEFAULT '1', -- country
`animal` int(4) NOT NULL,
`size` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`lang_id` tinyint(4) NOT NULL DEFAULT '1',
`x_id` int(10) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (country,x_id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `myTable` (`country`, `animal`, `size`) VALUES
(777, 1001, 'small'),
(777, 2002, 'medium'),
(777, 7007, 'medium'),
(777, 7007, 'large'),
(42, 1001, 'small'),
(42, 2002, 'medium'),
(42, 7007, 'large')
The result will be like this:
| country | animal | size |lang_id | x_id |
|---------+--------+--------+--------+-------|
| 777 | 1001 | small | 1 | 1 |
| 777 | 2002 | medium | 1 | 2 |
| 777 | 7007 | medium | 1 | 3 |
| 777 | 7007 | large | 1 | 4 |
| 42 | 1001 | small | 1 | 1 |
| 42 | 2002 | medium | 1 | 2 |
| 42 | 7007 | large | 1 | 4 |
NOTE: This only works for MyISAM and BDB tables; for other engine types you will get an error saying "Incorrect table definition; there can be only one auto column and it must be defined as a key". See this answer for more on this: https://stackoverflow.com/a/5416667/5645769.
The following queries use 80% or more CPU and can take more than 1 minute to complete.
My question: Is there anything wrong with my queries that would cause CPU usage like that? Can I decrease CPU usage and query time by optimizing the MySQL server conf?
Query 1 (loan_history contains 2.6 million records)
SELECT officer, SUM(balance) as balance
FROM loan_history
WHERE bank_id = '1'
AND date ='2013-07-04'
AND officer IS NOT NULL
AND officer <> ''
GROUP BY officer
ORDER BY officer;
Query 2 (loan_history contains 2.6 million records)
SELECT SUM(weighted_interest_rate) as total
FROM (SELECT balance, tmp1.balance_sum,
(balance / tmp1.balance_sum * interest_rate) as weighted_interest_rate
FROM loan_history,
(SELECT SUM(balance) balance_sum FROM loan_history
WHERE date = '2013-07-04'
AND bank_id = '1') as tmp1
WHERE date = '2013-07-04'
AND bank_id = '1') tmp2
Table information:
CREATE TABLE `loan_history` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`bank_id` int(11) DEFAULT NULL,
`loan_purpose_id` int(11) DEFAULT NULL,
`date` date NOT NULL,
`credit_grade` varchar(5) COLLATE utf8_unicode_ci DEFAULT NULL,
`interest_rate` decimal(5,2) NOT NULL,
`officer` varchar(5) COLLATE utf8_unicode_ci DEFAULT NULL,
`balance` decimal(10,2) NOT NULL,
`start_date` date DEFAULT NULL,
`days_delinquent` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `IDX_9F5FE3F11C8FB41` (`bank_id`),
KEY `IDX_9F5FE3F6F593857` (`loan_purpose_id`),
KEY `date` (`date`),
KEY `credit_grade` (`credit_grade`),
KEY `officer` (`officer`),
KEY `start_date` (`start_date`),
KEY `days_delinquent` (`days_delinquent`),
KEY `interest_rate` (`interest_rate`),
KEY `balance` (`balance`),
CONSTRAINT `FK_9F5FE3F11C8FB41` FOREIGN KEY (`bank_id`) REFERENCES `bank` (`id`),
CONSTRAINT `FK_9F5FE3F6F593857` FOREIGN KEY (`loan_purpose_id`) REFERENCES `loan_purpose` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2630634 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Query 1 EXPLAIN:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | loan_history | index_merge | IDX_9F5FE3F11C8FB41,date,officer | date,IDX_9F5FE3F11C8FB41 | 3,5 | NULL | 4829 | Using intersect(date,IDX_9F5FE3F11C8FB41); Using where; Using temporary; Using filesort |
Query 2 EXPLAIN:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 8236 |
| 2 | DERIVED | <derived3> | system | NULL | NULL | NULL | NULL | 1 |
| 2 | DERIVED | loan_history | index_merge | IDX_9F5FE3F11C8FB41,date | date,IDX_9F5FE3F11C8FB41 | 3,5 | NULL | 4829 | Using intersect(date,IDX_9F5FE3F11C8FB41); Using where; Using index |
| 3 | DERIVED | loan_history | index_merge | IDX_9F5FE3F11C8FB41,date | date,IDX_9F5FE3F11C8FB41 | 3,5 | NULL | 4829 | Using intersect(date,IDX_9F5FE3F11C8FB41); Using where; Using index |
My.cnf file:
default-storage-engine=MyISAM
interactive_timeout=300
key_buffer_size=256M
key_cache_block_size=4096
max_heap_table_size=128M
max_join_size=1000000000
max_allowed_packet=32M
open_files_limit=4096
query_cache_size=256M
query_cache_limit=10240M
query_cache_type=1
table_cache=256
thread_cache_size=100
tmp_table_size=128M
wait_timeout=7800
max_user_connections=50
join_buffer_size=256K
sort_buffer_size=4M
read_rnd_buffer_size=1M
innodb_open_files=300
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_file_per_table=1
innodb_additional_mem_pool_size=20M
innodb_flush_log_at_trx_commit=0
innodb_flush_method=O_DIRECT
innodb_support_xa=0
innodb_thread_concurrency=0
innodb_buffer_pool_size=3000M
The sum() and GROUP BY from the first query could be taking some time but I don't think there is much you can do there.
In the second query, your FROM (SELECT ... subquery is probably hitting the system pretty hard. I would recommend turning
(SELECT balance, tmp1.balance_sum,
(balance / tmp1.balance_sum * interest_rate) as weighted_interest_rate
FROM loan_history,
(SELECT SUM(balance) balance_sum FROM loan_history
WHERE date = '2013-07-04'
AND bank_id = '1') as tmp1
WHERE date = '2013-07-04'
AND bank_id = '1')
into a view or figuring out how to do it with JOINs
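One possible JOIN-style rewrite, as an untested sketch (it computes the total balance once and avoids building the large per-row derived table):
SELECT SUM(lh.balance / t.balance_sum * lh.interest_rate) AS total
FROM loan_history lh
CROSS JOIN (SELECT SUM(balance) AS balance_sum
            FROM loan_history
            WHERE date = '2013-07-04'
              AND bank_id = '1') t
WHERE lh.date = '2013-07-04'
  AND lh.bank_id = '1';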
Please tell us exactly how long each one takes; "more than 1 minute" by itself isn't very indicative.
Anyhow, from the MySQL manual:
When tuning a MySQL server, the two most important variables to configure are key_buffer_size and table_cache. You should first feel confident that you have these set appropriately before trying to change any other variables.
Also, take a look here.
As for optimization, first try a composite index.
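For example, something along these lines should let both queries resolve the WHERE clause from one index instead of the index_merge intersect shown in the EXPLAIN (an untested suggestion):
ALTER TABLE loan_history ADD INDEX idx_bank_date (bank_id, date);
-- or, to cover query 1 entirely:
-- ALTER TABLE loan_history ADD INDEX idx_bank_date_officer (bank_id, date, officer, balance);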
I defined the column id as my primary key, but how do I make it automatically one larger than the last one?
You are looking for AUTO_INCREMENT; you can check the documentation here.
You will need to set the id column as AUTO_INCREMENT.
Example from the documentation:
CREATE TABLE animals (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
You must set AUTO_INCREMENT:
CREATE TABLE animals (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
INSERT INTO animals (name) VALUES
('dog'),('cat'),('penguin'),
('lax'),('whale'),('ostrich');
SELECT * FROM animals;
Which returns:
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
| 6 | ostrich |
+----+---------+
MySQL reference: http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
Ok didn't notice the tags.
Hit the A_I checkbox in phpMyAdmin for the id column.
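Equivalently in SQL, for an existing table, roughly (your_table is a placeholder; this assumes id is an INT and already the primary key):
ALTER TABLE your_table MODIFY id INT NOT NULL AUTO_INCREMENT;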
---old---
The Oracle way, triggered sequence:
CREATE sequence aic increment BY 1 start WITH 1;
CREATE TABLE blarg (
id NUMBER(15,0) PRIMARY KEY,
foobar VARCHAR2(255)
);
CREATE TRIGGER ait BEFORE INSERT ON blarg
REFERENCING NEW AS NEW OLD AS OLD FOR EACH ROW
BEGIN
  SELECT aic.NEXTVAL INTO :NEW.id FROM DUAL;
END;