I have a "likes" table that records which customers like which products on my website.
The problem is with this table:
CREATE TABLE IF NOT EXISTS `likes` (
`id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT,
`user` varchar(40) NOT NULL,
`post` int(11) UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 AUTO_INCREMENT=1 ;
user is the user who liked; post is the product id.
The like button sends an AJAX request to this PHP script:
session_start();
$user = $_SESSION["user"];
$pinsid = $_POST['id'];
$stmt = $mysqli_link->prepare("SELECT * FROM likes WHERE post=? AND user=?");
$stmt->bind_param('is', $pinsid, $user);
$stmt->execute();
$result = $stmt->get_result();
$stmt->close();
$chknum = $result->num_rows;
if($chknum==0){
    $stmt = $mysqli_link->prepare("INSERT INTO likes (user, post) VALUES (?,?)");
    $stmt->bind_param('si', $user, $pinsid);
    $stmt->execute();
    $stmt->close();
    $response = 'success';
}
echo json_encode($response);
My problem is that I get duplicate like rows from the same person, e.g.:
1 josh 5
2 josh 5
But it only happens if the MySQL engine is set to InnoDB; if I change it to MyISAM, I get only one insert.
What is happening? What should I do to make it work properly?
The MyISAM engine uses table-level locking, which means that while one operation is executing on a table, all other operations wait until that operation is finished.
InnoDB is transactional and uses row-level locking; since you're not using transactions, nothing is locked, so two concurrent requests can both pass the SELECT check and then both insert.
As mentioned in the comments and answers, the simplest solution is to create a unique constraint on user and post. In your case you could even use the pair as the primary key, because the auto-increment column adds no value.
To create a unique constraint:
ALTER TABLE likes ADD UNIQUE KEY uk_user_post (user,post);
As for your question:
but it can slow down my inserts?
If we speak solely about insert operations on the table, then yes, it slows them down, because each index has to be updated after an insert, update, or delete operation. How much it slows down depends on the size of the index(es) and the number of rows in the table.
However, in your current table structure you have no indexes at all on user and post, and your application performs a select with a lookup on both columns, which results in a full table scan.
With the unique index (user, post) you can skip the select entirely, because when the unique constraint is violated you'll get an SQL error.
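For illustration, here is a minimal sketch of the AJAX handler once the unique key is in place (same $mysqli_link connection as in the question; INSERT IGNORE is one option, executing a plain INSERT and checking for error 1062 is another):

session_start();
$user = $_SESSION["user"];
$pinsid = (int) $_POST['id'];

// No SELECT needed: the unique key on (user, post) rejects duplicates.
$stmt = $mysqli_link->prepare("INSERT IGNORE INTO likes (user, post) VALUES (?, ?)");
$stmt->bind_param('si', $user, $pinsid);
$stmt->execute();

// affected_rows is 1 for a new like, 0 when the like already existed.
$response = ($stmt->affected_rows === 1) ? 'success' : 'duplicate';
$stmt->close();

echo json_encode($response);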
Also, user and post are effectively foreign keys, so they should be indexed anyway.
The unique index (user, post) covers lookups on user, so you will also need a separate index on post.
One way of doing this would be to set up a unique key on user and post in the likes table (see https://dev.mysql.com/doc/refman/5.0/en/constraint-primary-key.html).
With that in place, the database ensures there are no duplicates of (user, post). However, it could be problematic for data already in the table if duplicates already exist.
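If duplicates already exist, a sketch like this removes them first (keeping the row with the lowest id; column names as in the question), after which the constraint can be added:

DELETE l1 FROM likes l1
JOIN likes l2
  ON l1.user = l2.user AND l1.post = l2.post AND l1.id > l2.id;

ALTER TABLE likes ADD UNIQUE KEY uk_user_post (user, post);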
I'm using PHP and I have a table with two varchar columns: one is used for user identification, and the other for a page-name entry.
They both must be varchar.
I want to insert (ignoring duplicates) when a user enters a page, so I know whether he visited it or not, and I want to fetch all the rows for pages the user has been to:
fetch all rows by the first varchar column;
insert only if the pair of values doesn't already exist.
I'm hoping to do this in the most efficient way.
What is the best way to insert without first checking existence with another query?
And what is the best way, other than:
SELECT * FROM table WHERE id = id
to fetch rows when the lookup column is varchar?
You should consider a normalized table structure like this:
CREATE TABLE user (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100)
);
CREATE TABLE page (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100)
);
CREATE TABLE pages_visted (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
user_id INT UNSIGNED,
page_id INT UNSIGNED,
UNIQUE KEY (user_id, page_id)
);
INSERT IGNORE INTO pages_visted (user_id, page_id) VALUES (:userId, :pageId);
SELECT page_id FROM pages_visted WHERE user_id = :userId;
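The named placeholders above suggest PDO; a rough sketch of the round trip (the DSN, credentials, and the $userId/$pageId variables are assumptions):

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Record the visit; the UNIQUE KEY turns a repeat visit into a no-op.
$ins = $pdo->prepare('INSERT IGNORE INTO pages_visted (user_id, page_id) VALUES (:userId, :pageId)');
$ins->execute([':userId' => $userId, ':pageId' => $pageId]);

// Fetch the ids of every page this user has visited.
$sel = $pdo->prepare('SELECT page_id FROM pages_visted WHERE user_id = :userId');
$sel->execute([':userId' => $userId]);
$pageIds = $sel->fetchAll(PDO::FETCH_COLUMN);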
I think you want to implement a composite primary key.
A composite primary key tells MySQL that you want your primary key to be a combination of fields.
More info here: Why use multiple columns as primary keys (composite primary key)
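Applied to the original two-varchar table, that would look something like the following (table and column names are hypothetical):

CREATE TABLE page_visits (
    user_name VARCHAR(50) NOT NULL,
    page_name VARCHAR(100) NOT NULL,
    PRIMARY KEY (user_name, page_name)
);

INSERT IGNORE then works here exactly as with the UNIQUE KEY variant above.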
I don't know of a better option for your query, although I can advise, if possible:
Define columns to be NOT NULL. This gives you faster processing and requires less storage. It will also simplify queries sometimes because you don't need to check for NULL as a special case.
And with variable-length rows, you get more fragmentation in tables where you perform many deletes or updates due to the differing sizes of the records. You'll need to run OPTIMIZE TABLE periodically to maintain performance.
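For example, with the hypothetical table above:

OPTIMIZE TABLE page_visits;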
I am new to SQL and looking for some help. I have three tables: author, study, and casestudy (a linking table). What I want to achieve is that when data is inserted into the author and study tables (from a web form), their auto-increment IDs get inserted into the casestudy table, if that is possible. I guess I will need to create triggers. AuthorId and StudyId in the casestudy table form a composite key. The table structure is as follows:
CREATE TABLE `test`.`author` (
`AuthorId` INT(11) NOT NULL AUTO_INCREMENT,
`AuthorTitle` VARCHAR(45) NOT NULL,
PRIMARY KEY (`AuthorId`),
UNIQUE INDEX `AuthorId_UNIQUE` (`AuthorId` ASC));
CREATE TABLE `test`.`study` (
`StudyId` INT(11) NOT NULL AUTO_INCREMENT,
`Title` VARCHAR(45) NOT NULL,
PRIMARY KEY (`StudyId`),
UNIQUE INDEX `StudyId_UNIQUE` (`StudyId` ASC));
CREATE TABLE `test`.`casestudy` (
`AuthorId` INT(11) NOT NULL,
`StudyId` INT(11) NOT NULL,
PRIMARY KEY (`AuthorId`, `StudyId`),
INDEX `StudyId_idx` (`StudyId` ASC),
CONSTRAINT `AuthorId`
FOREIGN KEY (`AuthorId`)
REFERENCES `test`.`author` (`AuthorId`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `StudyId`
FOREIGN KEY (`StudyId`)
REFERENCES `test`.`study` (`StudyId`)
ON DELETE NO ACTION
ON UPDATE NO ACTION);
Any advice will be appreciated. Thank you.
As far as I can see, there is no data link between the author and study tables, and there is no such functionality as one trigger firing on inserts into two tables at a time. The best place to implement casestudy maintenance is the procedure that populates the author and study tables during web-form processing.
In PHP that can be done by collecting the ID after each INSERT using mysql_insert_id() (see Inserting data into multiple tables using php via a web form) and then using both IDs to update the linking table.
Using mysqli (and assuming $mysqli is the connection), that would be something like:
// The form field names here are assumptions; substitute your actual input names.
$AuthorTitle = $mysqli->real_escape_string($_POST['AuthorTitle']);
$mysqli->query("INSERT INTO test.author(AuthorTitle) VALUES('$AuthorTitle')");
$AuthorId = $mysqli->insert_id;
$StudyTitle = $mysqli->real_escape_string($_POST['StudyTitle']);
$mysqli->query("INSERT INTO test.study(Title) VALUES('$StudyTitle')");
$StudyId = $mysqli->insert_id;
$mysqli->query("INSERT INTO test.casestudy(AuthorId, StudyId) VALUES($AuthorId, $StudyId)");
Alternatively, all three tables can be consistently populated through a stored procedure (or just an SQL script) that takes care of all the relevant web-form fields in one go. Then, after each INSERT, the auto-generated ID can be collected using LAST_INSERT_ID() (see LAST_INSERT_ID() MySQL):
// Form field names assumed, as above.
$AuthorTitle = $mysqli->real_escape_string($_POST['AuthorTitle']);
$StudyTitle = $mysqli->real_escape_string($_POST['StudyTitle']);
$mysqli->multi_query("
START TRANSACTION;
INSERT INTO test.author(AuthorTitle) VALUES('$AuthorTitle');
SET @AuthorId = LAST_INSERT_ID();
INSERT INTO test.study(Title) VALUES('$StudyTitle');
SET @StudyId = LAST_INSERT_ID();
INSERT INTO test.casestudy(AuthorId, StudyId) VALUES(@AuthorId, @StudyId);
COMMIT;
");
Finally, this scenario could in principle be implemented through an updatable and insertable view across all three tables; note, however, that MySQL only allows an INSERT through a join view to modify one of the underlying tables, so this is the most restrictive option.
Hi, I have two tables:
users_table and orders_table.
users_id is in both tables:
as the primary key in users_table and as a foreign key in orders_table (referencing users_id in users_table).
When a user places an order for the first time it works, but if the user has already placed an order, the data is not saved to the database on the second attempt. Any idea why, or any solutions?
I apologise for the bad English.
MY PHP CODE:
$query = "INSERT INTO orders_table(users_id, orders_postDate, orders_category, orders_categoryId, orders_name, orders_description, orders_deliveryDate) VALUES('$users_id', '$orders_postDate', '$orders_category', '$orders_categoryId', '$orders_name', '$orders_description', '$orders_deliveryDate')";
$result = mysqli_query($connection, $query);
if($result){
    echo "SUCCESS";
}else{
    echo "fail";
}
So let's say that I created an account, signed in, and placed an order; the data is successfully in the database. If I try to place a new order with the same user, I get the fail message. So for some reason
mysqli_query($connection, $query);
fails the second time. I am assuming that it is because my foreign key is also a primary key? How can I fix this?
Without code or an error message it is difficult to say what the issue is. However, you mentioned that users_id is the foreign key for orders_table; if it is also used as the primary key of orders_table, that would cause a duplicate-key violation when creating any subsequent order, hence the insertion of a new order would fail.
These questions might help us get a better understanding of what's going on:
How is your orders_table created?
What is the primary key for Orders_table?
How do you assign the primary key (if you do) upon insertion of a new record?
Since you are able to insert the first record, you clearly have a working connection. So instead of echoing "success" and "fail", extract the mysqli error instead.
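For example, a small change to the question's code (procedural mysqli, same $connection handle) would reveal the real cause:

if ($result) {
    echo "SUCCESS";
} else {
    // Likely prints something like: Duplicate entry '...' for key 'PRIMARY'
    echo "fail: " . mysqli_error($connection);
}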
Also, since you are dealing with MySQL and PHP you might want to take a look into using PDO objects instead, if you haven't already. Being able to use prepared statements is a huge plus.
Yes, you are right: if orders_table.users_id is the primary key, it is also unique, so MySQL will not allow inserting the same users_id twice.
Use the ALTER TABLE command to remove the primary key on orders_table.users_id and add a new auto-increment column as the primary key.
Here is the code you need:
-- users_id is a foreign key, and InnoDB requires an index on it,
-- so add one before dropping the primary key that currently provides it.
ALTER TABLE orders_table ADD KEY idx_users_id (users_id);
ALTER TABLE orders_table DROP PRIMARY KEY;
-- An AUTO_INCREMENT column must be part of a key, so add column and key together.
ALTER TABLE orders_table ADD COLUMN `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
First, your insert query is not selecting data from users_table, so it may not be inserting the right user data into orders_table; and second, orders_table is created with users_id as both the primary key and a foreign key, which causes duplicate-key failures. Try altering the table to drop the primary key and create another column for it (a new primary key).
An example:
ALTER TABLE orders_table ADD KEY idx_users_id (users_id); -- keep an index for the foreign key
ALTER TABLE orders_table DROP PRIMARY KEY;
ALTER TABLE orders_table ADD COLUMN `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
That should make your code work now.
I have a MySQL (5.6.26) database with a large amount of data, and I have a problem with a COUNT select on a table join.
This query takes about 23 seconds to execute:
SELECT COUNT(0) FROM user
LEFT JOIN blog_user ON blog_user.id_user = user.id
WHERE email IS NOT NULL
AND blog_user.id_blog = 1
Table user is MyISAM and contains user data like id, email, name, etc...
CREATE TABLE `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(50) DEFAULT NULL,
`email` varchar(100) DEFAULT '',
`hash` varchar(100) DEFAULT NULL,
`last_login` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`created` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`) USING BTREE,
UNIQUE KEY `email` (`email`) USING BTREE,
UNIQUE KEY `hash` (`hash`) USING BTREE,
FULLTEXT KEY `email_full_text` (`email`)
) ENGINE=MyISAM AUTO_INCREMENT=5728203 DEFAULT CHARSET=utf8
Table blog_user is InnoDB and contains only id, id_user and id_blog (user can have access to more than one blog). id is PRIMARY KEY and there are indexes on id_blog, id_user and id_blog-id_user.
CREATE TABLE `blog_user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`id_blog` int(11) NOT NULL DEFAULT '0',
`id_user` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `id_blog_user` (`id_blog`,`id_user`) USING BTREE,
KEY `id_user` (`id_user`) USING BTREE,
KEY `id_blog` (`id_blog`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=5250695 DEFAULT CHARSET=utf8
I deleted all other tables and there is no other connection to the MySQL server (testing environment).
What I've found so far:
When I delete some columns from the user table, the query gets shorter (roughly 2 seconds per deleted column).
When I delete all columns from the user table (except id and email), the query takes 0.6 seconds.
When I also change the blog_user table to MyISAM, the query takes 46 seconds.
When I change the user table to InnoDB, the query takes 0.1 seconds.
The question is why is MyISAM so slow executing the command?
First, some comments on your query (after fixing it up a bit):
SELECT COUNT(*)
FROM user u LEFT JOIN
blog_user bu
ON bu.id_user = u.id
WHERE u.email IS NOT NULL AND bu.id_blog = 1;
Table aliases make a query easier both to write and to read. More importantly, you have a LEFT JOIN, but your WHERE clause turns it into an INNER JOIN. So, write it that way:
SELECT COUNT(*)
FROM user u INNER JOIN
blog_user bu
ON bu.id_user = u.id
WHERE u.email IS NOT NULL AND bu.id_blog = 1;
The difference is important because it affects choices that the optimizer can make.
Next, indexes will help this query. I am guessing that blog_user(id_blog, id_user) and user(id, email) are the best indexes.
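blog_user already has the UNIQUE KEY id_blog_user (id_blog, id_user), so only the user side would need adding; a sketch:

ALTER TABLE `user` ADD INDEX idx_id_email (id, email);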
The reason the number of columns affects your original query is that the query is doing a lot of I/O. The fewer the columns, the fewer pages are needed to store the records, and the faster the query runs. Proper indexes should work better and more consistently.
To answer the real question (why is MyISAM slower than InnoDB here), I can't give an authoritative answer.
But it is certainly related to one of the more important differences between the two storage engines: InnoDB supports foreign keys and MyISAM doesn't, and the indexes that back foreign keys matter when joining tables.
I don't know whether defining a foreign-key constraint will improve speed further, but it will certainly guarantee data consistency.
Another note: you observe that the time decreases as you delete columns. This indicates that the query requires a full table scan. That can be avoided by creating an index on the email column. user.id and blog_user.id_user hopefully already have indexes; if they don't, that is an error. Columns that participate in a foreign key, explicit or not, should always have an index.
This is a long time after the event to be of much use to the OP, and all the foregoing suggestions for speeding up the query are entirely appropriate, but I wonder why no one has remarked on the output of EXPLAIN: specifically, why the index on email was chosen, and how that relates to the definition of the email column in the user table.
The optimizer selected the index on email, presumably because it's included in the WHERE clause. key_len for this index is comparatively long, and it's a reasonably large table given the AUTO_INCREMENT value, so the memory requirements for this index would be considerably greater than if it had chosen the id column (303 bytes against 4). The email column is nullable but has a default of the empty string, so unless the application explicitly sets a NULL, you are not going to find any NULLs in this column anyway. Neither will you find more than one record with the default, given the UNIQUE constraint. The column DEFAULT and the UNIQUE constraint appear to be completely at odds with each other.
Given the above, and the fact that we only want the count, I'd wonder whether the email part of the WHERE clause serves any purpose other than slowing the query down as each value is compared to NULL. Without it, the optimizer would probably pick the primary key and do a much better job. Better yet would be a query that ignored the user table entirely and took the count from the covering index on blog_user that Gordon Linoff highlighted.
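That is, assuming every blog_user row points at a real user, something like:

SELECT COUNT(*) FROM blog_user WHERE id_blog = 1;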
There's another indexing issue here worth mentioning:
On the user table
UNIQUE KEY `id` (`id`) USING BTREE,
is redundant, since id is the PRIMARY KEY and therefore unique by definition.
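It can simply be dropped:

ALTER TABLE `user` DROP INDEX `id`;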
To answer your last question,
The question is why is MyISAM so slow executing the command?
MyISAM caches only index blocks (in the key buffer), so reading row data depends on the speed of your hard drive.
InnoDB caches both data and indexes in its buffer pool, so once the data has been read it operates at RAM speed. The first time the query runs it may be loading data from disk; subsequent runs avoid the hard drive until the pages age out of RAM.
I have the following schema with the following attributes:
USER(TABLE_NAME)
USER_ID|USERNAME|PASSWORD|TOPIC_NAME|FLAG1|FLAG2
I have two questions, basically:
1. How can I make the attribute USER_ID a primary key that automatically increments its value each time I insert a row? It shouldn't be under my control.
2. How can I retrieve a record from the database based on the latest time it was updated? (For example, if I updated a record at 2pm and the same record at 3pm, and I retrieve it now at 4pm, I should get the record as updated at 3pm, i.e. the latest version.)
Please help.
I'm assuming that question one is in the context of MySQL. You can use the ALTER TABLE statement to mark a field as the PRIMARY KEY and to mark it AUTO_INCREMENT:
ALTER TABLE User
ADD PRIMARY KEY (USER_ID);
ALTER TABLE User
MODIFY COLUMN USER_ID INT(4) AUTO_INCREMENT; -- of course, set the type appropriately
For the second question, I'm not sure I understand correctly, so I'm going to give you some basic information before an answer that may otherwise confuse you.
When you update the same record multiple times, only the most recent update is persisted; once you update a record, its previous values are not kept. So if you update a record at 2pm and then update the same record at 3pm, when you query for the record you automatically receive the most recent values.
Now, if by updating you mean that you would insert new rows for the same user multiple times and want to retrieve the most recent, then you would need a field in the table that stores a timestamp of when each record was created or updated. Then you can query for the most recent value based on that timestamp.
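A sketch of that approach (the updated_at column name is made up; ON UPDATE CURRENT_TIMESTAMP keeps it current automatically):

ALTER TABLE USER
    ADD COLUMN updated_at TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- most recently updated record
SELECT * FROM USER ORDER BY updated_at DESC LIMIT 1;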
I assume you're talking about Oracle since you tagged it as Oracle. You also tagged the question as MySQL where the approach will be different.
You can make the USER_ID column a primary key
ALTER TABLE <<table_name>>
ADD CONSTRAINT pk_user_id PRIMARY KEY( user_id );
If you want the value to increment automatically, you'd need to create a sequence
CREATE SEQUENCE user_id_seq
START WITH 1
INCREMENT BY 1
CACHE 20;
and then create a trigger on the table that uses the sequence
CREATE OR REPLACE TRIGGER trg_assign_user_id
BEFORE INSERT ON <<table name>>
FOR EACH ROW
BEGIN
:new.user_id := user_id_seq.nextval;
END;
As for your second question, I'm not sure that I understand. If you update a row and then commit that change, all subsequent queries are going to read the updated data (barring exceptionally unlikely cases where you've set a serializable transaction isolation level and you've got transactions that run for multiple hours and you're running the query in that transaction). You don't need to do anything to see the current data.
(Answer based on MySQL; conceptually similar answer if using Oracle, but the SQL will probably be different.)
If USER_ID was not defined as a primary key or automatically incrementing at the time of table creation, then you can use:
ALTER TABLE tablename MODIFY USER_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
To issue queries based on record dates, you have to have a field defined to hold a date-related datatype. The date and time of record modifications would be something you manage (e.g. add or change) based on the way you access the records (some PHP-related way? it's unclear what scripts you have in play, based on your question). Once you have dates in your records, you can ORDER BY the date field in your SELECT query.
Check this out:
For your AUTO_INCREMENT, it's a question already asked here.
For your PRIMARY KEY, use this:
ALTER TABLE USER ADD PRIMARY KEY (USER_ID);
Can you provide more information? If the value gets updated, you definitely do NOT have the old value that you entered at 2pm present in the DB, so querying for it will be fine.
You can use something like this:
CREATE TABLE IF NOT EXISTS user (
    user_id INT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(25) NOT NULL,
    password VARCHAR(25) NOT NULL,
    topic_name VARCHAR(100) NOT NULL,
    flag1 SMALLINT(1) NOT NULL DEFAULT 0,
    flag2 SMALLINT(1) NOT NULL DEFAULT 0,
    update_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
For selection, use this query:
SELECT * from user ORDER BY update_time DESC