Let's say we have a table called users:
CREATE TABLE IF NOT EXISTS users (
  UID int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  fname varchar(100) DEFAULT NULL,
  lname varchar(100) DEFAULT NULL,
  username varchar(20) DEFAULT NULL UNIQUE,
  password blob
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Let's assume there are a few rows in the table.
I know there is a statement that returns the table definition: SHOW CREATE TABLE users.
But I'm looking to recreate individual INSERT statements and save them in a log. I know this sounds weird, but I am building a custom CMS where everything is logged, so that updated/deleted rows can hopefully be rolled back to a point in time. Therefore I need to recreate the exact insertion query from the table data (I have searched all over the web for an answer and can't find one).
Is there a query that would automatically return the query that inserted a specific row, identified by primary key?
I am looking for a query that would do this:
SHOW INSERT FROM users WHERE PRIMARY_KEY=2
Returns:
INSERT INTO users (UID,fname,lname,username,password) VALUES (2,'somename','somelastname','someusername','someAESencryptedPassword')
The reason I'm thinking such a query might exist is that when you back up a database with phpMyAdmin (cPanel) and open the file, you can actually see each INSERT needed to recreate the table with all rows at that point in time...
There is no such "command" (the data is stored, but not the actual SQL that inserted it), and it wouldn't make sense to do what you're asking. You do realize your "log" would be about 20 times larger (at least) than the actual table and data itself? And it's not going to able to retrieve the INSERT statement without a lot of work to track it down (see the comments below) anyway.
Study transactional SQL, make use of server logging and transactions, and back up the data regularly like you're supposed to and quit trying to reinvent the wheel. :-)
There is no such command to retrieve the original INSERT statement. However, you can always rebuild an INSERT statement from the existing table structure and data.
The following link may help you, where this has already been asked:
Get Insert Statement for existing row in MySQL
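As a rough illustration of rebuilding the statement yourself, here is a minimal PHP sketch (assuming a PDO connection in $pdo; a real version would need extra care for NULLs and binary columns such as the password blob):
$stmt = $pdo->prepare('SELECT * FROM users WHERE UID = ?');
$stmt->execute([2]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

// Rebuild the INSERT from the column names and quoted values
$cols = implode(',', array_keys($row));
$vals = implode(',', array_map([$pdo, 'quote'], array_values($row)));
$sql  = "INSERT INTO users ($cols) VALUES ($vals);";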
Another possible option is using mysqldump from PHP to export the data as SQL statements.
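For example, mysqldump can emit the INSERT for a single row (the database name mydb is an assumption):
mysqldump --no-create-info --where="UID=2" mydb users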
Here is my table
`id` int(11) NOT NULL,
`notifyroles` varchar(50) NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
I use it to store a single set of dynamic values: an array that is imploded to a string such as item1,item2,item3, and when I pull the data from the db I explode those values again.
When I initialize my software I insert row id 1 and then leave the notifyroles element as NULL until I use it.
It will, and should, never have any rows other than row 1, so I chose not to use the auto-increment feature. I never use INSERT; I always just use UPDATE on id 1.
Since I don't want to write a bunch of code to check for extra rows and truncate the table if there are any, my question is:
Is there a way to lock the table so that it cannot have more than 1 row? And if someone tried to INSERT another row it would fail.
P.S. I am hoping that, with the evolution of MySQL, after all this time there is such a way.
Simplest is to manage the privileges so that the user your software connects as has no INSERT rights but does have UPDATE rights on that table.
See: http://dev.mysql.com/doc/refman/5.7/en/grant.html
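Something along these lines, for instance (the account name 'appuser'@'localhost' and the database name mydb are assumptions):
-- grant only what the application needs; deliberately no INSERT
GRANT SELECT, UPDATE ON mydb.mytable TO 'appuser'@'localhost';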
There's not really a way to lock a table to a single row, but you can take advantage of MySQL triggers. As the name suggests, they are activated at the moment the specified action is performed, in this case an INSERT. Maybe try this:
DELIMITER $$
CREATE TRIGGER locktable
BEFORE INSERT ON mytable
FOR EACH ROW
BEGIN
  IF (NEW.id != 1) THEN -- 1 = the id of the row you want protected
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'This table only allows row id 1';
  END IF;
END$$
DELIMITER ;
Why BEFORE INSERT rather than an AFTER INSERT trigger that deletes the offending row? MySQL does not allow a trigger to modify the table it is defined on (that fails with error 1442), so making the INSERT itself fail is the approach that actually works, even if it feels drastic.
I hope it helps.
I am migrating a custom-made web site to WordPress. First I have to migrate the data from the previous web site, and then, every day, I have to perform some data insertion using an API.
The data I'd like to insert comes with a unique ID representing a single football game.
In order to avoid inserting the same game multiple times, I made a db table with the following structure:
CREATE TABLE `ss_highlight_ids` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`highlight_id` int(10) unsigned zerofill NOT NULL DEFAULT '0000000000',
PRIMARY KEY (`id`),
UNIQUE KEY `highlight_id_UNIQUE` (`highlight_id`),
KEY `highlight_id_INDEX` (`highlight_id`) COMMENT 'Contains a list of all the highlight IDs. This is used as an index, and disallows the creation of duplicate records.'
) ENGINE=InnoDB AUTO_INCREMENT=2967 DEFAULT CHARSET=latin1
and when I try to insert a new record into my WordPress db, I first want to look up this table to see if the ID already exists.
The question now :)
What's preferable? To load all the IDs using a single SQL query and then use plain PHP to check whether the current game ID exists, or to query the DB for every single row I insert?
I know that MySQL queries are resource-expensive, but on the other hand I currently have about 3k records in this table, and this will grow past 30-40k in the next few years, so I don't know if it's good practice to load all of those records into PHP.
What is your opinion / suggestion ?
UPDATE #1
I just found that my table is 272 KiB with 2,966 rows. This means that in the near future it will likely reach about 8,000 KiB and keep growing.
UPDATE #2
Maybe I have not made it clear enough. For the first insertion I have to iterate over a CSV file with about 12K records, and after the CSV import I will insert about 100-200 records every day. All of those records require a lookup in the table with the IDs.
So the exact question is: is it better to run 12K MySQL queries during the CSV import and then about 100-200 MySQL queries every day, or to just load the IDs into server memory and use PHP for the lookup?
Your table has a column id which is AUTO_INCREMENT, which means there is no need to insert anything into that column; it will fill itself.
highlight_id is UNIQUE, so it may as well be the PRIMARY KEY; get rid of id.
A PRIMARY KEY is a UNIQUE key is an INDEX. So this is redundant:
KEY `highlight_id_INDEX` (`highlight_id`)
Back to your question... SQL is designed to do things in batches. Don't defeat that by doing things one row at a time.
How can the table be 272 KiB if it has only two columns and 2,966 rows? If there are more columns in the table, show them; they often give good clues about what you are doing and how to make it more efficient.
2966 rows is 'trivial'; you will have to look closely to see performance differences.
Loading from CSV...
If this is a replacement, use LOAD DATA, building a new table, then RENAME to put it into place. One CREATE, one LOAD, one RENAME, one DROP. Much more efficient than 100 queries of any kind.
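A sketch of that replacement flow, using the ss_highlight_ids table from the question (the CSV path is an assumption):
CREATE TABLE ss_highlight_ids_new LIKE ss_highlight_ids;
LOAD DATA INFILE '/tmp/ids.csv' INTO TABLE ss_highlight_ids_new
  FIELDS TERMINATED BY ',' (highlight_id);
RENAME TABLE ss_highlight_ids TO ss_highlight_ids_old,
             ss_highlight_ids_new TO ss_highlight_ids;
DROP TABLE ss_highlight_ids_old;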
If the CSV is updates/inserts, LOAD into a temp table, then do INSERT ... ON DUPLICATE KEY UPDATE ... to perform the updates/inserts into the real table. One CREATE, one LOAD, one IODKU. Much more efficient than 100 queries of any kind.
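And a sketch of the update/insert variant (again, file and temp-table names are assumptions; the no-op update simply skips duplicates):
CREATE TEMPORARY TABLE ids_tmp (highlight_id INT UNSIGNED NOT NULL);
LOAD DATA INFILE '/tmp/ids.csv' INTO TABLE ids_tmp (highlight_id);
INSERT INTO ss_highlight_ids (highlight_id)
SELECT highlight_id FROM ids_tmp
ON DUPLICATE KEY UPDATE highlight_id = VALUES(highlight_id);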
If the CSV is something else, please elaborate.
Is it acceptable to store JSON data in a MySQL table row? I need to store arrays in a MySQL database. The problem is, I don't know how many columns I will need for each user, so I thought of storing JSON in one column named array, for example. Is this the best way?
Edit:
Also, I am using TEXT as the table column type.
Yes, it's a very good idea to use MySQL as a key-value store; in fact Facebook does for some of its uses.
CREATE TABLE `json` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`data` blob NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The above table-structure will get you far. It's very easy to shard or cluster.
Edit: The point here is to use PHP, Ruby, etc. to handle the JSON data. You do a SELECT ..., make your edits, then INSERT ... ON DUPLICATE KEY UPDATE ....
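A rough sketch of that round trip in PHP (assuming a PDO connection in $pdo and the json table above):
$stmt = $pdo->prepare('SELECT data FROM json WHERE id = ?');
$stmt->execute([1]);
$data = json_decode($stmt->fetchColumn() ?: '{}', true);

$data['roles'][] = 'editor'; // edit in application code, not in SQL

$up = $pdo->prepare('INSERT INTO json (id, data) VALUES (?, ?)
    ON DUPLICATE KEY UPDATE data = VALUES(data)');
$up->execute([1, json_encode($data)]);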
Storing more than one piece of data in a relational database field is generally wrong. It is possible to think of cases where it would be acceptable, but not for storing an entire user.
If a relational database structure is not sufficient, you may want to look at a NoSQL database instead of MySQL.
It's not a problem. MySQL can already handle most of the brackets and funky characters in a TEXT field.
I think it is not. What if four months later you decide to add a new attribute to your entity, or remove some attributes from it? How will you parse your old JSON contents? If you don't know how many columns you will need in your table, you should think in a different way, and maybe create a dynamic structure like a user_column table.
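A minimal sketch of such a dynamic structure, one row per attribute (all names here are assumptions):
CREATE TABLE user_column (
  user_id INT NOT NULL,
  name VARCHAR(64) NOT NULL,
  value TEXT,
  PRIMARY KEY (user_id, name)
);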
Just looking for some tips and pointers for a small project I am doing. I have some ideas, but I am not sure if they are best practice. I am using MySQL and PHP.
I have a table called nomsing in the database.
It has an integer primary key called row_id.
Then I have about 8 other tables referencing this table, called nomplu, accsing, accplu, datsing, datplu, for instance.
Each has a column that references the primary key of nomsing.
Within my PHP code I have all the information to insert into the tables except one thing: the row_id primary key of the nomsing table. So PHP generates a series of inserts like the following:
INSERT INTO nomsing (word, postress, gender) VALUES ('велосипед', '8', 'mask');
INSERT INTO nomplu (word, postress, NOMSING?REFERENCE) VALUES ('велосипеды', '2', #the reference to the id of the first insert#);
There are more inserts, but this one gets the point across. The second insert should reference the auto-generated id of the first insert. I want this to work as a transaction, so all inserts should complete or none.
One idea I have is to not auto-generate the id and instead generate it myself in PHP. That way I would know the id before the transaction, but then I would have to check whether the id was already in the db.
Another idea is to do the first insert, then query for the row_id of that insert in PHP, and then make the second insert. Both should work, but they don't seem like optimal solutions. I am not too familiar with database transactional features, so what would be the best approach in this case? I don't like the idea of inserting, then querying for the id, and then running the rest of the queries. It just seems very inefficient, or perhaps I am wrong.
Just insert a row in the master table. Then you can fetch the insert id (lastInsertId when on PDO) and use that to populate your other queries.
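A short sketch of that, wrapped in the transaction the asker wants (assuming a PDO connection in $pdo; the foreign-key column name nomsing_id is an assumption):
$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO nomsing (word, postress, gender) VALUES (?, ?, ?)')
        ->execute(['велосипед', '8', 'mask']);
    $id = $pdo->lastInsertId(); // auto-generated row_id of the first insert

    $pdo->prepare('INSERT INTO nomplu (word, postress, nomsing_id) VALUES (?, ?, ?)')
        ->execute(['велосипеды', '2', $id]);

    $pdo->commit();   // all inserts succeed together...
} catch (Exception $e) {
    $pdo->rollBack(); // ...or none of them do
    throw $e;
}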
You could use the PHP version as given by JvdBerg, or MySQL's LAST_INSERT_ID. I usually use the former option.
See a similar SO question here.
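In plain SQL the same idea looks like this (again, the column name nomsing_id is an assumption):
INSERT INTO nomsing (word, postress, gender) VALUES ('велосипед', '8', 'mask');
INSERT INTO nomplu (word, postress, nomsing_id) VALUES ('велосипеды', '2', LAST_INSERT_ID());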
You could add a new column to the nomsing table, called 'insert_order' (or similar) with a default value of 0; then, instead of generating one SQL statement per insert, create a bulk insert statement, e.g.
INSERT INTO nomsing (word, postress, gender, insert_order)
VALUES ('велосипед', '8', 'mask', 1), ('abcd', '9', 'hat', 2).....
You generate the insert_order number with a counter in your loop, starting at one. Then you can perform one SELECT on the table to get the ids, e.g.
SELECT row_id
FROM nomsing
WHERE insert_order > 0;
Now that you have all the IDs, you can do a bulk insert for your following queries. At the end of your script, just do an update to reset the insert_order column back to 0:
UPDATE nomsing SET insert_order = 0 WHERE insert_order > 0;
It may seem messy to add an extra column to do this, but it will give a significant speed increase over performing one query at a time.
I was testing some data in the tables of my database to see if there were any errors. Now I have cleaned out all the testing data, but my id (auto increment) no longer starts from 1. How do I reset it?
ALTER TABLE `table_name` AUTO_INCREMENT=1
You can also do this in phpMyAdmin without writing SQL.
Click on a database name in the left column.
Click on a table name in the left column.
Click the "Operations" tab at the top.
Under "Table options" there should be a field for AUTO_INCREMENT (only on tables that have an auto-increment field).
Input desired value and click the "Go" button below.
Note: You'll see that phpMyAdmin is issuing the same SQL that is mentioned in the other answers.
ALTER TABLE xxx AUTO_INCREMENT = 1;
or
clear your table with TRUNCATE TABLE, which also resets the AUTO_INCREMENT counter.
I agree with rpd: this is the answer, and it can be done on a regular basis to clean up an id column that keeps growing even though the table holds only a few hundred rows of data (you might end up with an id of 34444543!), because rows are deleted regularly while the id keeps incrementing automatically.
ALTER TABLE users DROP id
The above SQL can be run as a plain query or from PHP. It will delete the id column.
Then re-add it via the code below:
ALTER TABLE `users` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST
Place this in a piece of code that runs, say, from an admin panel, so that whenever anyone opens that page the script automatically cleans and tidies your database.
I have just experienced this issue in one of my MySQL dbs, and I looked at the phpMyAdmin answer here. However, the best way I fixed it in phpMyAdmin was: in the affected table, drop the id column, then add a fresh/new id column (ticking A_I, auto-increment). This restored my table ids correctly. Simples! Hope that helps anyone else with this problem (no MySQL code needed; I hope to learn to use that, but later!).