I need to know (and reserve) the id of a record before I insert it into the database, because when I add a product I also upload an image for it, and the image's location is derived from the product id.
I can find the id that will come next, but I also need to keep it "locked" until the product image is uploaded, to be sure that this id isn't used by another user in the meantime.
You can split this into two operations.
In the first you create everything for the product and get the id. In the second you update the row with the image you've uploaded, which can now be saved since you know the ID.
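A minimal sketch of that flow in SQL (table and column names here are only placeholders):
-- step 1: create the product row and capture its id
INSERT INTO products (name, image_path) VALUES ('New product', NULL);
SET @product_id = LAST_INSERT_ID();
-- ... move the uploaded file to a location derived from @product_id ...
-- step 2: store the image location now that the id is known
UPDATE products
   SET image_path = CONCAT('images/', @product_id, '.jpg')
 WHERE id = @product_id;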
You can do it as a transaction.
I googled "php mysql transaction" and there are a lot of tutorials on how to do this
http://www.linuxdig.com/news_page/1079394922.php
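For reference, the bare SQL skeleton is simply:
START TRANSACTION;
INSERT INTO products (name) VALUES ('New product');
-- ... any other statements that must succeed or fail together ...
COMMIT;   -- or ROLLBACK if anything went wrong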
Id reservation can only be done safely if you have your own auto-increment implementation. Even when running in a transaction, you cannot know for sure what the next id is; you can only be sure of the id you were actually given once you have requested it.
Some people use mysql_insert_id to retrieve the last inserted id, but I seem to remember that this function is not thread-safe and you may get the id of another thread that has just inserted.
However, in your case only two options come to my mind:
1) Create your own sequence generator
2) Insert your data row, select it again with all its values in the WHERE clause, and use that id. Run option 2 in a transaction (as already suggested above).
I would go for option 1. Sequence generation is commonly used with all other databases (of the widely used ones, only MySQL supports auto increment, as far as I know), so sequences are a fully acceptable approach.
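A common way to roll your own sequence in MySQL is the LAST_INSERT_ID(expr) trick from the manual; a minimal sketch (the table name is a placeholder):
CREATE TABLE product_seq (id INT UNSIGNED NOT NULL);
INSERT INTO product_seq VALUES (0);
-- reserve the next value atomically; the result is per connection
UPDATE product_seq SET id = LAST_INSERT_ID(id + 1);
SELECT LAST_INSERT_ID();   -- the reserved id, safe to use for the image path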
Cheers, Christian
I have got a table which has an id (primary key with auto increment), a uid (a key referring to, for example, a user's id) and something else which doesn't matter for my question.
I want to make, let's call it, a different auto-increment key on id for each uid entry.
So, I will add an entry with uid 10, and the id field for this entry will be 1 because there were no previous entries with a value of 10 in uid. Then I will add a new one with uid 4 and its id will be 3 because there were already two entries with uid 4.
...A very obvious explanation, but I am trying to be as explicit and clear as I can to demonstrate the idea... clearly.
What SQL engine can provide such functionality natively? (non-Microsoft/Oracle based)
If there is none, how could I best replicate it? Triggers perhaps?
Does this functionality have a more suitable name?
In case you know about a non-SQL database engine providing such functionality, name it anyway; I am curious.
Thanks.
MySQL's MyISAM engine can do this. See their manual, in section Using AUTO_INCREMENT:
For MyISAM tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
The docs go on after that paragraph, showing an example.
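The manual's example is along these lines:
CREATE TABLE animals (
    grp  ENUM('fish','mammal','bird') NOT NULL,
    id   MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp, id)
) ENGINE=MyISAM;
INSERT INTO animals (grp, name) VALUES
    ('mammal','dog'), ('mammal','cat'),
    ('bird','penguin'), ('fish','lax'),
    ('mammal','whale'), ('bird','ostrich');
-- id restarts at 1 within each grp: fish 1, mammal 1..3, bird 1..2
SELECT * FROM animals ORDER BY grp, id;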
The InnoDB engine in MySQL does not support this feature, which is unfortunate because it's better to use InnoDB in almost all cases.
You can't emulate this behavior using triggers (or any SQL statements limited to transaction scope) without locking tables on INSERT. Consider this sequence of actions:
Mario starts transaction and inserts a new row for user 4.
Bill starts transaction and inserts a new row for user 4.
Mario's session fires a trigger to compute MAX(id)+1 for user 4. It gets 3.
Bill's session fires a trigger to compute MAX(id)+1 for user 4. It also gets 3.
Bill's session finishes his INSERT and commits.
Mario's session tries to finish his INSERT, but the row with (userid=4, id=3) now exists, so Mario gets a primary key conflict.
In general, you can't control the order of execution of these steps without some kind of synchronization.
The solutions to this are either:
Get an exclusive table lock. Before trying an INSERT, lock the table. This is necessary to prevent concurrent INSERTs from creating a race condition like in the example above. It's necessary to lock the whole table: since you're trying to restrict INSERT, there's no specific row to lock (if you were trying to govern access to a given row with UPDATE, you could lock just that row). But locking the table causes access to the table to become serial, which limits your throughput.
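A minimal sketch of the table-lock approach just described, assuming a hypothetical table posts(userid, id, ...):
LOCK TABLES posts WRITE;
SELECT COALESCE(MAX(id), 0) + 1 INTO @next_id
  FROM posts
 WHERE userid = 4;
INSERT INTO posts (userid, id, body) VALUES (4, @next_id, 'new row');
UNLOCK TABLES;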
Do it outside transaction scope. Generate the id number in a way that won't be hidden from two concurrent transactions. By the way, this is what AUTO_INCREMENT does. Two concurrent sessions will each get a unique id value, regardless of their order of execution or order of commit. But tracking the last generated id per userid requires access to the database, or a duplicate data store. For example, a memcached key per userid, which can be incremented atomically.
It's relatively easy to ensure that inserts get unique values. But it's hard to ensure they will get consecutive ordinal values. Also consider:
What happens if you INSERT in a transaction but then roll back? You've allocated id value 3 in that transaction, and then I allocated value 4, so if you roll back and I commit, now there's a gap.
What happens if an INSERT fails because of other constraints on the table (e.g. another column is NOT NULL)? You could get gaps this way too.
If you ever DELETE a row, do you need to renumber all the following rows for the same userid? What does that do to your memcached entries if you use that solution?
SQL Server should allow you to do this. If you can't implement this using a computed column (probably not - there are some restrictions), surely you can implement it in a trigger.
MySQL also would allow you to implement this via triggers.
In a comment you ask the question about efficiency. Unless you are dealing with extreme volumes, storing an 8 byte DATETIME isn't much of an overhead compared to using, for example, a 4 byte INT.
It also massively simplifies your data inserts, as well as being able to cope with records being deleted without creating 'holes' in your sequence.
If you DO need this, be careful with the field names. If you have uid and id in a table, I'd expect id to be unique in that table, and uid to refer to something else. Perhaps, instead, use the field names property_id and amendment_id.
In terms of implementation, there are generally two options.
1). A trigger
Implementations vary, but the logic remains the same. As you don't specify an RDBMS (other than not MS/Oracle), the general logic is simple (a rough sketch follows this list)...
Start a transaction (often this is implicitly already started inside triggers)
Find the MAX(amendment_id) for the property_id being inserted
Update the newly inserted value with MAX(amendment_id) + 1
Commit the transaction
Things to be aware of are...
- multiple records being inserted at the same time
- records being inserted with amendment_id being already populated
- updates altering existing records
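Just to illustrate the trigger route, a rough MySQL-flavoured sketch — the table name amendments and the column names are only placeholders, and the concurrency caveats above still apply:
CREATE TRIGGER bi_amendments BEFORE INSERT ON amendments
FOR EACH ROW
SET NEW.amendment_id = (
    SELECT COALESCE(MAX(a.amendment_id), 0) + 1
    FROM amendments AS a
    WHERE a.property_id = NEW.property_id
);
-- note: this always overrides any amendment_id supplied by the caller,
-- and concurrent inserts for the same property can still collide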
2). A Stored Procedure
If you use a stored procedure to control writes to the table, you gain a lot more control.
Implicitly, you know you're only dealing with one record.
You simply don't provide a parameter for DEFAULT fields.
You know what updates / deletes can and can't happen.
You can implement all the business logic you like without hidden triggers
I personally recommend the Stored Procedure route, but triggers do work.
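A rough sketch of such a procedure, again using the hypothetical amendments / property_id / amendment_id names (InnoDB assumed, so that FOR UPDATE actually locks; brand-new property_ids can still race under concurrency):
DELIMITER //
CREATE PROCEDURE add_amendment (IN p_property_id INT, IN p_note VARCHAR(255))
BEGIN
    DECLARE next_id INT;
    START TRANSACTION;
    -- lock the existing rows for this property so a concurrent call waits
    SELECT COALESCE(MAX(amendment_id), 0) + 1
      INTO next_id
      FROM amendments
     WHERE property_id = p_property_id
       FOR UPDATE;
    INSERT INTO amendments (property_id, amendment_id, note)
    VALUES (p_property_id, next_id, p_note);
    COMMIT;
END//
DELIMITER ;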
It is important to get your data types right.
What you are describing is a multi-part key. So use a multi-part key. Don't try to encode everything into a magic integer; you will poison the rest of your code.
If a record is identified by (entity_id,version_number) then embrace that description and use it directly instead of mangling the meaning of your keys. You will have to write queries which constrain the version number but that's OK. Databases are good at this sort of thing.
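A minimal sketch of what that looks like (names are illustrative):
CREATE TABLE entity_versions (
    entity_id      INT NOT NULL,
    version_number INT NOT NULL,
    payload        TEXT,
    PRIMARY KEY (entity_id, version_number)
);
-- "the latest version of entity 42"
SELECT *
  FROM entity_versions
 WHERE entity_id = 42
 ORDER BY version_number DESC
 LIMIT 1;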
version_number could be a timestamp, as a_horse_with_no_name suggests. This is quite a good idea. There is no meaningful performance disadvantage to using timestamps instead of plain integers. What you gain is meaning, which is more important.
You could maintain a "latest version" table which contains, for each entity_id, only the record with the most-recent version_number. This will be more work for you, so only do it if you really need the performance.
I'm doing a food delivery system for my final year project. For the database, I'm required to hide records that are no longer in use, instead of deleting them permanently. For example, if the seller doesn't want to sell a particular meal, they can disable the meal, but the record of the meal remains in the database. I need to achieve this using PHP and SQL. Can someone give me some ideas on how to achieve this? Thanks in advance.
The feature you are referring to is something called soft deletion. In soft deletion, a record is logically removed from the database table, without actually removing the record itself.
One common way to implement soft deletion is to add a column which keeps track of whether a row has been soft deleted. You can use the TINYINT(1) type for this purpose.
Your table creation statement would look something like this:
CREATE TABLE yourTable (`deleted` TINYINT(1) NOT NULL DEFAULT 0, `col1` VARCHAR(255), ...)
To query out records which have not been logically deleted, you could use:
SELECT *
FROM yourTable
WHERE deleted <> 1
And having a soft delete column also makes it easy to remove stale records if the time comes to do that.
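"Deleting" a meal then just means flipping the flag; for example (the id column and value are placeholders):
UPDATE yourTable SET deleted = 1 WHERE id = 42;
-- and purging old soft-deleted rows later, if that time ever comes:
DELETE FROM yourTable WHERE deleted = 1;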
An extra deleted column is a great option in many cases. But you have to be very careful that you always check it, and in some cases it can be hard to enforce this.
Another good choice is a "shadow table" with the same structure, and change your delete process to first copy to the shadow table, and then delete. This means your original table is safe to use, but you cannot do queries on all data (not easily - although UNION can help)
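A minimal sketch of the shadow-table idea, assuming placeholder names meals and meals_deleted (same structure):
INSERT INTO meals_deleted SELECT * FROM meals WHERE meal_id = 42;
DELETE FROM meals WHERE meal_id = 42;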
I'm used to building websites with user accounts, so I can simply auto-increment the user id, then let them log in while I identify that user by user id internally. What I need to do in this case is a bit different. I need to anonymously collect a few rows of data from people, and tie those rows together so I can easily discern which data rows belong to which user.
The difficulty I'm having is in generating the id to tie the data rows together. My first thought was to poll the database for the highest user ID in existence, and write to the database with user ID +1. This will fail, however, if two submissions poll the database before either of them writes to it - they will each share the same user ID.
Another thought I had was to create a separate user ID table that would be set to auto-increment, and simply generate a new row, then poll that table for the id of the last row created. That also fails for the same reason as above - if two submissions create a row before either of them polls for the latest user ID, then they'll end up sharing an ID.
Any ideas? I get the impression I'm missing something obvious.
I think I'm understanding you right; I was having a similar issue. There's a super handy PHP function, though. After you run the query that inserts a new row and auto-increments the user ID, do:
$user_id = mysql_insert_id();
That just returns the auto-increment value from the previous query on the current MySQL connection. You can read more about it in the PHP manual if you need to.
You can then use this to populate the second table's data, being sure nobody will get a duplicate ID from the first one.
You need to insert the user, get the auto-generated id, and then use that id as a foreign key in the couple of rows you need to associate with the parent record. The hat rack must exist before you can hang hats on it.
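A minimal sketch of that order of operations (table and column names are placeholders):
-- create the parent record first
INSERT INTO users (created_at) VALUES (NOW());
SET @user_id = LAST_INSERT_ID();
-- then hang the data rows off it via the foreign key
INSERT INTO data_rows (user_id, answer)
VALUES (@user_id, 'first answer'), (@user_id, 'second answer');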
This is a common issue, and to solve it you would use a transaction. A transaction gives you atomicity: being able to do more than one thing, but have it all succeed or fail as a package. It's an advanced db feature, and it requires awareness of some more advanced programming in order to implement it in as fault-tolerant a manner as possible.
How can we re-use a deleted id from a MySQL table?
If I want to roll back the deleted ID, is there any way to do it?
It may be possible by finding the lowest unused ID and forcing it, but it's terribly bad practice, mainly because of referential integrity: It could be, for example, that relationships from other tables point to a deleted record, which would not be recognizable as "deleted" any more if IDs were reused.
Bottom line: Don't do it. It's a really bad idea.
Related reading: Using auto_increment in the MySQL manual
Re your update: Even if you have a legitimate reason to do this, I don't think there is an automatic way to re-use values in an auto_increment field. If at all, you would have to find the lowest unused value (maybe using a stored procedure or an external script) and force that as the ID (if that's even possible).
You shouldn't do it.
Don't think of it as a number at all.
It is not a number. It's a unique identifier. Think of this word - unique. No two records should be identified by the same id.
1.
As per your explanation "#Pekka, I am tracking the INsert Update and delete query...", I assume you just somehow want to put your old data back under the same ID.
In that case you may consider using a delete-flag column in your table.
If the delete-flag is set for a row, your program should consider it deleted. Later, you can make the row available again by setting the delete-flag back to false.
A similar approach is to move the whole row to some temporary table; you can bring it back when required, with the same data and ID.
The previous idea is better, though.
2.
If this is not what you meant by your explanation, and you want to delete rows and still re-use all the (auto-generated) ID values, here are a few ideas you could implement (a rough sketch follows the list):
- Create a table (IDSTORE) for storing deleted IDs.
- Create a trigger, activated on row delete, which notes the ID and stores it in that table.
- While inserting, take the minimum ID from IDSTORE and insert with that value. If IDSTORE is empty, pass a NULL ID to fall back to the auto-incremented number.
Of course, if you have references / relations (FK) implemented, you will have to look after them manually, since that is your requirement.
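A rough sketch of that idea (the items table and its columns are only placeholders; note that two concurrent sessions could still pick the same recycled id, so wrap this in a transaction or lock if that matters):
CREATE TABLE idstore (id INT PRIMARY KEY);
CREATE TRIGGER items_keep_deleted_id AFTER DELETE ON items
FOR EACH ROW INSERT INTO idstore (id) VALUES (OLD.id);
-- on insert: recycle the lowest stored id; a NULL falls back to AUTO_INCREMENT
SELECT MIN(id) INTO @recycled FROM idstore;
INSERT INTO items (id, name) VALUES (@recycled, 'example');
DELETE FROM idstore WHERE id = @recycled;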
Further Read:
http://www.databasejournal.com/features/mysql/article.php/10897_2201621_3/Deleting-Duplicate-Rows-in-a-MySQL-Database.htm
Here is my case with a MySQL DB:
I had a menu table, and the menu id was being used in a content table as a foreign key, but there was no direct relation between the tables (bad table design, I know, but the project was done by another developer and my client later approached me to handle it). One day my client realised that some of the content was not showing up. I looked at the problem and found that one of the menus had been deleted from the menu table, but luckily its menu id still existed in the content table. I took the deleted menu id from the content table and ran a normal insert query against the menu table with that same menu id along with the other fields (id is the primary key), and it worked.
insert into tbl_menu(id, col1, col2, ...) values(12, val1, val2, ...)
I've got an application in php & mysql where the users writes and reads from a particular table. One of the write modes is in a batch, doing only one query with the multiple values. The table has an ID which auto-increments.
The idea is that for each row in the table that is inserted, a copy is inserted in a separate table, as a history log, including the ID that was generated.
The problem is that multiple users can do this at once, and I need to be sure that the ID loaded is the correct.
Can I be sure that if I do for example:
INSERT INTO table1 VALUES ('','test1'),('','test2')
that the ids generated are sequential?
How can I get the Id's that were just loaded, and be sure that those are the ones that were just loaded?
I've thought of LOCK TABLE, but the users shouldn't notice this.
Hope I made myself clear...
Building an application that requires generated IDs to be sequential usually means you're taking a wrong approach - what happens when you have to delete a value some day? Are you going to re-sequence the entire table? Much better to just let the values fall as they may, using a primary key to prevent duplication.
Based on the current implementations of MyISAM and InnoDB, yes. However, this is not guaranteed to remain so in the future, so I would not rely on it.
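For completeness, this is how the generated ids of a batch are usually recovered (per connection):
INSERT INTO table1 (name) VALUES ('test1'), ('test2'), ('test3');
SELECT LAST_INSERT_ID();  -- id of the FIRST row of this batch
SELECT ROW_COUNT();       -- number of rows just inserted
-- with MyISAM, or with InnoDB in the "consecutive" auto-increment lock mode,
-- the batch occupies LAST_INSERT_ID() .. LAST_INSERT_ID() + ROW_COUNT() - 1;
-- with innodb_autoinc_lock_mode = 2 (interleaved) this is NOT guaranteed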