I have a MySQL table named i_visited, structured like: userid, tid, dateline
And I run this conditional in my view_thread.php page:
if (db('count','SELECT userid FROM i_visited
WHERE tid = '.intval($_GET['id']).'
AND userid = '.$user['id']))
mysql_query('UPDATE i_visited
SET dateline = unix_timestamp(now())
WHERE userid = '.$user['id'].'
AND tid = '.intval($_GET['id']));
else
mysql_query('INSERT INTO i_visited (userid,tid,dateline) VALUES
('.$user['id'].','.intval($_GET['id']).',unix_timestamp(now()))');
The problem is that it executes in 80-100 ms (on Windows) and 40-60 ms (on Linux):
1 row affected. (query executed in 0.0707 sec)
The mysql_num_rows() call (aka db('count', sql)) takes 2-3 ms, so the problem is in the UPDATE and the INSERT.
P.S. i_visited is utf8_unicode_ci (InnoDB); has anyone seen this problem?
Other queries run normally (2-3 milliseconds).
CREATE TABLE i_visited (
userid int(10) NOT NULL,
tid int(10) unsigned NOT NULL,
dateline int(10) NOT NULL,
KEY userid (userid,tid),
KEY userid_2 (userid),
KEY tid (tid) )
ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
You do not need to do a SELECT to check for existence and then choose either UPDATE or INSERT.
You can use MySQL's ON DUPLICATE KEY UPDATE feature, like this:
$query = 'INSERT INTO
i_visited (userid,tid,dateline)
VALUES (' .
$user['id'] . ',' .
intval($_GET['id']) . ',
unix_timestamp(now()))
ON DUPLICATE KEY UPDATE
dateline = unix_timestamp(now())';
mysql_query($query);
This query will insert a new row when there is no key conflict; when a duplicate key would be inserted, it executes the UPDATE part instead.
And as you have a KEY userid (userid,tid) in your CREATE statement, the above query is equivalent to your if...else block (note that ON DUPLICATE KEY UPDATE only fires on a UNIQUE or PRIMARY key, so that index needs to be declared as one).
Try this and see if there are any gains.
You can also use REPLACE INTO, as there are only the three specified columns, like this:
$query = 'REPLACE INTO
i_visited (userid,tid,dateline)
VALUES (' .
$user['id'] . ',' .
intval($_GET['id']) . ',
unix_timestamp(now()))';
mysql_query($query);
But I would suggest looking at ON DUPLICATE KEY UPDATE, as it is more flexible: it can be used on a table with any number of columns, whereas REPLACE INTO only works in limited cases, because the other column values would also need to be filled in the REPLACE INTO statement unnecessarily.
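To make that difference concrete, here is a hedged sketch using a hypothetical extra visit_count column (not in your table), and assuming (userid, tid) is a UNIQUE or PRIMARY key:
-- Hypothetical: i_visited with an extra visit_count column.
-- ON DUPLICATE KEY UPDATE touches only the columns you name:
INSERT INTO i_visited (userid, tid, dateline)
VALUES (42, 7, UNIX_TIMESTAMP(NOW()))
ON DUPLICATE KEY UPDATE dateline = UNIX_TIMESTAMP(NOW()),
                        visit_count = visit_count + 1;
-- REPLACE INTO deletes the old row and inserts a new one, so any column
-- you leave out (here visit_count) falls back to its default value.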
I think (part of) the problem is that your table does not have an explicit primary key.
You've only declared secondary keys.
Change the definition to:
CREATE TABLE i_visited (
userid int(10) NOT NULL,
tid int(10) unsigned NOT NULL,
dateline int(10) NOT NULL,
PRIMARY KEY (userid, tid), <<----------
KEY userid_2 (userid),
KEY tid (tid) )
ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
InnoDB does not work well without an explicit primary key defined.
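If you would rather not recreate the table, a minimal sketch of changing it in place (assuming there are no duplicate (userid, tid) pairs yet, or the ALTER will fail):
ALTER TABLE i_visited
    ADD PRIMARY KEY (userid, tid),
    DROP INDEX userid;  -- the old secondary index on (userid, tid) becomes redundant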
Related
I have the following table:
CREATE TABLE `my_table` (
composite_pk1 INT NOT NULL ,
composite_pk2 INT NOT NULL ,
data VARCHAR(255) NOT NULL ,
primary key (composite_pk1, composite_pk2)
) ENGINE=InnoDB;
For a given composite_pk1, I wish composite_pk2 to act as an autoincrement primary key. I don't wish to lock the table, and as such plan on using a trigger such as the following:
DELIMITER $$
CREATE TRIGGER my_trigger BEFORE INSERT ON my_table
FOR EACH ROW BEGIN
SET NEW.composite_pk2 = (
SELECT IFNULL(MAX(composite_pk2), 0) + 1
FROM my_table
WHERE composite_pk1 = NEW.composite_pk1
);
END $$
I can now insert a record:
$stmt=$myDB->prepare('INSERT INTO my_table(composite_pk1, data) VALUES (?,?)');
$stmt->execute([123,'hello']);
How do I get the last inserted composite_pk2? PDO::lastInsertId only works with native auto-increment tables (i.e. not the trigger approach). I "could" later do a SELECT query to get the max value; however, there is no guarantee that another record hasn't snuck in in the meantime.
You can make composite_pk2 a unique key with auto_increment:
CREATE TABLE `my_table` (
composite_pk1 INT NOT NULL ,
composite_pk2 INT NOT NULL unique auto_increment,
data VARCHAR(255) NOT NULL ,
primary key (composite_pk1, composite_pk2)
) ENGINE=InnoDB;
Now last_insert_id() will return the recently created id for composite_pk2.
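With that change (and the trigger from the question dropped, so the native AUTO_INCREMENT assigns the value), the insert stays the same and PDO can report the generated key; a minimal sketch, assuming $myDB is the PDO connection from the question:
// composite_pk2 is filled in by AUTO_INCREMENT, so it is omitted from the column list
$stmt = $myDB->prepare('INSERT INTO my_table (composite_pk1, data) VALUES (?, ?)');
$stmt->execute([123, 'hello']);

// Works now, because the value comes from a native AUTO_INCREMENT column
$composite_pk2 = $myDB->lastInsertId();
Note that the generated values are then sequential across the whole table rather than restarting per composite_pk1.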
I'm building a small report in a PHP while loop.
The query I'm running inside the while() loop is this:
INSERT IGNORE INTO `tbl_reporting` SET datesubmitted = '2015-05-26', submissiontype = 'email', outcome = 0, totalcount = totalcount+1
I'm expecting the totalcount column to increment every time the query is run.
But the number stays at 1.
The UNIQUE index is composed of the first 3 columns.
Here's the Table Schema:
CREATE TABLE `tbl_reporting` (
`datesubmitted` date NOT NULL,
`submissiontype` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
`outcome` tinyint(1) unsigned NOT NULL DEFAULT '0',
`totalcount` mediumint(5) unsigned NOT NULL DEFAULT '0',
UNIQUE KEY `datesubmitted` (`datesubmitted`,`submissiontype`,`outcome`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
When I modify the query into a regular UPDATE statement:
UPDATE `tbl_reporting` SET totalcount = totalcount+1 WHERE datesubmitted = '2015-05-26' AND submissiontype = 'email' AND outcome = 1
...it works.
Does INSERT IGNORE not allow adding numbers? Or is my original query malformed?
I'd like to use INSERT IGNORE; otherwise I'll have to query for the original record first, then insert, and then eventually update.
Think of what you're doing:
INSERT .... totalcount=totalcount+1
To calculate totalcount+1, the DB has to read the current value of totalcount... which doesn't exist yet, because you're CREATING a new record and there is NO existing row to take the "old" value from.
It's as if you're trying to eat your cake before you've even gone to the store to buy the ingredients, let alone mixed and baked them.
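What you're after is usually done in a single statement with INSERT ... ON DUPLICATE KEY UPDATE, which your existing UNIQUE key already supports; a sketch against the table from the question:
INSERT INTO `tbl_reporting` (datesubmitted, submissiontype, outcome, totalcount)
VALUES ('2015-05-26', 'email', 0, 1)
ON DUPLICATE KEY UPDATE totalcount = totalcount + 1;
The first run inserts the row with totalcount = 1; every later run for the same (datesubmitted, submissiontype, outcome) increments the existing counter.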
Everything I have searched for and found has yet to work, because I am accessing the table through a PHP script, differently from everything I see. Anyway:
I am importing feeds from a website into a MySQL table. My table was created like this:
$query2 = <<<EOQ
CREATE TABLE IF NOT EXISTS `Entries` (
`feed_id` int(11) NOT NULL,
`item_title` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
`item_link` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
`item_date` varchar(40) COLLATE utf8_unicode_ci NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
EOQ;
$result = $db_obj->query($query2);
I enter the data like so:
foreach($rss->channel->item as $Item){
$query5 = <<<EOQ
INSERT INTO Entries (feed_id, item_title, item_link, item_date)
VALUES ('$get_id','$Item->title','$Item->link','$Item->pubDate')
EOQ;
$result = $db_obj->query($query5);
}
Now, every time I import new feeds from the site, I want to make sure I delete any duplicates that might already be there. Everything I have tried, especially DISTINCT, has not worked for me. Does anyone know what kind of query I could use to create a temp table, copy over any distinct rows (ENTIRE ROWS: if a title is the same but the date is different I want to keep it), drop the old table, and then rename the temp table to what I want... or something similar?
Avoid creating the duplicate rows in the first place. Make any unique values into keys. When adding new values to your database, use:
$query5 = <<<EOQ
REPLACE INTO Entries (feed_id, item_title, item_link, item_date)
VALUES ('$get_id','$Item->title','$Item->link','$Item->pubDate')
EOQ;
The duplicates will be automatically overwritten. REPLACE is handy because it works like an INSERT when there is no conflict in the keys; when there is one, it deletes the old record and inserts the new one (bumping any auto-incrementing keys along the way).
EDIT
I've been mulling this over for a while. Here's what I came up with.
The problem with making a multi-column key on (feed_id, item_title, item_link, item_date) is that it would exceed MySQL's 1000-byte limit on key length. So instead, alter your schema like so:
CREATE TABLE IF NOT EXISTS `Entries` (
`hash` varchar(32),
`feed_id` int(11) NOT NULL,
`item_title` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
`item_link` varchar(200) COLLATE utf8_unicode_ci NOT NULL,
`item_date` varchar(40) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (hash)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Now when you store a new value, get a hash of the values together:
$hash = md5($get_id . $Item->title . $Item->link . $Item->pubDate);
And for your insert statements use the following:
$query5 = <<<EOQ
REPLACE INTO Entries (hash, feed_id, item_title, item_link, item_date)
VALUES ('$hash', '$get_id','$Item->title','$Item->link','$Item->pubDate')
EOQ;
The hash will be a unique representation of the record in its entirety, and will be easy to compare in order to avoid duplicates. Now when you attempt to add the same record more than once, it will just replace the existing entry, and your query will not fail. As an alternative, you could continue to use INSERT, and the query will return an error, which you could handle however you want.
The fastest and easiest way to delete duplicate records is by issuing a very simple command.
ALTER IGNORE TABLE [TABLENAME] ADD UNIQUE INDEX UNIQUE_INDEX ([FIELDNAME])
What this does is create a unique index on the field that you do not want to contain any duplicates. The IGNORE syntax instructs MySQL not to stop and display an error when it hits a duplicate. This is much easier than dumping and reloading a table. It also leaves the unique index in place so that no new duplicates can be added. Just change your INSERT to INSERT IGNORE.
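Applied to the Entries table from the question, that could look like the sketch below; the column prefixes are an assumption to keep the key under the length limit (rows differing only after the first 100 characters would be treated as duplicates), and note that ALTER IGNORE was removed in MySQL 5.7, so this only works on older servers:
ALTER IGNORE TABLE Entries
    ADD UNIQUE INDEX unique_entry (feed_id, item_title(100), item_link(100));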
This also will work, but is not as elegant:
delete from [tablename] where fieldname in (select a.[fieldname] from
(select [fieldname] from [tablename] group by [fieldname] having count(*) > 1 ) a )
Perhaps do something like this:
$query2 = 'CREATE TABLE entries_new LIKE entries';
$result = $db_obj->query($query2);
$query5 = 'INSERT INTO entries_new (feed_id, item_title, item_link, item_date) VALUES ';
foreach($rss->channel->item as $Item){
$query5 .= "('$get_id','$Item->title','$Item->link','$Item->pubDate'),";
}
$query5 = rtrim($query5, ',');
$result = $db_obj->query($query5);
$query6 = "RENAME TABLE entries TO entries_backup, entries_new TO entries";
$result = $db_obj->query($query6);
This will create a table called entries_new like your entries table. Make a single insert of data into entries_new and then rename the old table to entries_backup and the new table to entries.
You might also want to consider wrapping this whole sequence up in a transaction (though note that your Entries table is MyISAM, which does not support transactions, so that would mean switching to InnoDB).
I want to automatically delete rows when the table (shown below) gets a new insert, if certain conditions are met.
When:
There are rows referring to the same 'field' with the same 'user_id'
Their 'field', 'display' and 'search' columns are the same
Simply put: when the rows would become duplicates (apart from the 'group_id' column), the row with the non-null 'group_id' should be deleted; otherwise a row should be updated or inserted.
Is there a way to set this up in MySQL (in the spirit of "ON DUPLICATE do stuff" combined with unique keys, etc.), or do I have to check for it explicitly in PHP (with multiple queries)?
Additional info:
There should always be a row with NULL 'group_id' for every possible 'field' (there's a limited set, defined elsewhere). On the other hand there might not be one with a non null 'group_id'.
CREATE TABLE `Views` (
`user_id` SMALLINT(5) UNSIGNED NOT NULL,
`db` ENUM('db_a','db_b') NOT NULL COLLATE 'utf8_swedish_ci',
`field` VARCHAR(40) NOT NULL COLLATE 'utf8_swedish_ci',
`display` TINYINT(1) UNSIGNED NOT NULL,
`search` TINYINT(1) UNSIGNED NOT NULL,
`group_id` SMALLINT(6) UNSIGNED NULL DEFAULT NULL,
UNIQUE INDEX `user_id` (`field`, `db`, `user_id`),
INDEX `Views_ibfk_1` (`user_id`),
INDEX `group_id` (`group_id`),
CONSTRAINT `Views_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `User` (`id`) ON
UPDATE CASCADE ON DELETE CASCADE
)
COLLATE='utf8_swedish_ci'
ENGINE=InnoDB;
I think you need to revise your logic. It makes no sense to Insert a row only to delete another row. Why not just update the Group_ID field in the duplicate row to what is being inserted? Below is a rough idea of how I would go about it.
N.b. I haven't done much work with MySQL and cannot get the below to run on SQLFiddle, but based on the MySQL docs I can't work out why. Perhaps someone more versed in MySQL can correct me?
-- Note: IF ... THEN ... END IF is only valid inside a stored program
-- (procedure, function, or trigger), which is likely why this won't run
-- as a standalone script on SQLFiddle.
SET @User_ID = 1;
SET @DB = 'db_a';
SET @Field = 'Field';
SET @Display = 1;
SET @Search = 1;
SET @Group_ID = 1;
IF EXISTS
( SELECT 1
  FROM Views
  WHERE User_ID = @User_ID
  AND DB = @DB
  AND Field = @Field
  AND Group_ID IS NOT NULL
)
THEN
  UPDATE Views
  SET Group_ID = @Group_ID,
      Display = @Display,
      Search = @Search
  WHERE User_ID = @User_ID
  AND DB = @DB
  AND Field = @Field
  AND Group_ID IS NOT NULL;
ELSE
  INSERT INTO Views (User_ID, DB, Field, Display, Search, Group_ID)
  VALUES (@User_ID, @DB, @Field, @Display, @Search, @Group_ID);
END IF;
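For what it's worth, if it is acceptable to update the existing row regardless of whether its Group_ID is currently NULL, the whole IF ... ELSE above collapses into one statement driven by the table's UNIQUE INDEX (field, db, user_id); a sketch with the same example values:
INSERT INTO Views (user_id, db, field, display, search, group_id)
VALUES (1, 'db_a', 'Field', 1, 1, 1)
ON DUPLICATE KEY UPDATE
    display = VALUES(display),
    search = VALUES(search),
    group_id = VALUES(group_id);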
Alternatively (and my preferred solution), add a timestamp field (CreatedDate in the query below) to your table and create a view as follows:
SELECT v.User_ID, v.DB, v.Field, v.Display, v.Search, v.Group_ID
FROM Views v
INNER JOIN
( SELECT User_ID, DB, Field, MAX(CreatedDate) AS CreatedDate
FROM Views
WHERE Group_ID IS NOT NULL
GROUP BY User_ID, DB, Field
) MaxView
ON MaxView.User_ID = v.User_ID
AND MaxView.DB = v.DB
AND MaxView.Field = v.Field
AND MaxView.CreatedDate = v.CreatedDate
WHERE v.Group_ID IS NOT NULL
UNION ALL
SELECT v.User_ID, v.DB, v.Field, v.Display, v.Search, v.Group_ID
FROM Views v
WHERE v.Group_ID IS NULL
This would allow you to track changes to your data properly, without compromising the need to be able to view unique records.
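The view query assumes the timestamp column is called CreatedDate; a sketch of adding it (the column name and default are my assumptions):
ALTER TABLE `Views`
    ADD COLUMN `CreatedDate` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;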
DELETE FROM Views WHERE group_id IS NOT NULL
Your question is not very easy to understand, so I'm not sure this is what you want:
DELETE FROM Views WHERE # delete from the table views
group_id IS NOT NULL AND # first condition delete only rows with not null group_id
(SELECT count(*) as tot FROM Views GROUP BY group_id) = 1 # second condition count the difference in group id
If that's not what you want, please update your question with more details...
My MySQL table (migration_terms) has the following fields:
oldterm, count, newterm, seed
I used the following CREATE TABLE statement:
CREATE TABLE `migration_terms`
(
`oldterm` varchar(255) DEFAULT NULL,
`count` smallint(6) DEFAULT '0',
`newterm` varchar(255) DEFAULT NULL,
`seed` int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`seed`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
And it works, no problems there.
But then when I used the following INSERT INTO statement to populate it:
"INSERT INTO migration_terms
SELECT looseterm as oldterm,
COUNT(seed) AS count
FROM looseterms
GROUP BY looseterm
ORDER BY count DESC "
I get this error:
Column count doesn't match value count at row 1
I cannot figure out why.
If you need the table structure of the looseterms table, it was created with the following CREATE TABLE statement:
CREATE TABLE looseterms
(
`seed` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`looseterm` varchar(255)
)
You need to specify the columns if your SELECT statement has fewer columns than the table:
"INSERT INTO migration_terms
(oldterm,
count)
SELECT looseterm AS oldterm,
Count(seed) AS count
FROM looseterms
GROUP BY looseterm
ORDER BY count DESC "
From the MySQL docs on INSERT syntax:
If you do not specify a list of column names for INSERT ... VALUES or
INSERT ... SELECT, values for every column in the table must be
provided by the VALUES list or the SELECT statement. If you do not
know the order of the columns in the table, use DESCRIBE tbl_name to
find out.
Your INSERT is adding 2 columns of data, whereas your table's definition has 4 columns.
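For completeness, the other way to satisfy that rule is to select a value for every column of migration_terms, in table order; a sketch (NULL lets AUTO_INCREMENT generate seed, and newterm simply stays NULL):
INSERT INTO migration_terms
SELECT looseterm   AS oldterm,
       COUNT(seed) AS count,
       NULL        AS newterm,
       NULL        AS seed
FROM looseterms
GROUP BY looseterm
ORDER BY count DESC;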