I found a couple of other questions on this topic. This one...
mysql_insert_id alternative for postgresql
...and the manual seem to indicate that you can call lastval() any time and it will work as expected. But this one...
Postgresql and PHP: is the currval a efficent way to retrieve the last row inserted id, in a multiuser application?
...seems to state that it has to be within a transaction. So my question is this: can I just wait as long as I like before querying for lastval() (without a transaction)? And is that reliable in the face of many concurrent connections?
INSERT, UPDATE and DELETE in PostgreSQL have a RETURNING clause which means you can do:
INSERT INTO ....
RETURNING id;
Then the query will return the value it inserted for id for each row inserted. Saves a roundtrip to the server.
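For instance, with PDO and the pgsql driver you can read the returned id straight from the INSERT statement. This is only a minimal sketch; the DSN, credentials, and the orders table with its id and name columns are invented for illustration:

<?php
// Hypothetical connection details and table; adjust to your own schema.
$pdo = new PDO('pgsql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// RETURNING hands the generated id back with the INSERT itself,
// so no separate lastval()/currval() round trip is needed.
$stmt = $pdo->prepare('INSERT INTO orders (name) VALUES (:name) RETURNING id');
$stmt->execute([':name' => 'example']);
$id = $stmt->fetchColumn();   // the id PostgreSQL generated for this row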
Yes. The sequence functions provide multiuser-safe methods for obtaining successive sequence values from sequence objects, and lastval() returns the value most recently obtained by nextval() in your own session, so concurrent connections cannot affect it and no transaction is required; just make sure your own session doesn't touch another sequence in the meantime.
I have these two tables:
I also have a dynamic form that contains a table; the user can add rows, and the data from those rows will be inserted into tblcamealsformdetails, but the basis for inserting it is the id from tblcamealsform. How do I insert all the values into the two tables simultaneously?
Here's my form:
You enter the data into table tblcamealsform first and get the insert ID from that query.
You then use that ID, along with the rest of the data, to insert into table tblcamealsformdetails.
So you don't do it simultaneously; you insert the row the others depend on first.
To get the insert-id from the last query you executed, you will need mysql_insert_id().
See http://php.net/manual/en/function.mysql-insert-id.php
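A minimal sketch of that order of operations with the mysql_* functions this answer refers to; the column names and $_POST fields are assumptions, not your actual form:

<?php
// Assumes a MySQL connection was already opened with mysql_connect()/mysql_select_db().

// 1. Insert the parent row first.
mysql_query("INSERT INTO tblcamealsform (employee, formdate)
             VALUES ('" . mysql_real_escape_string($_POST['employee']) . "', NOW())");

// 2. Grab the id generated by *this* connection's last INSERT.
$formId = mysql_insert_id();

// 3. Use that id as the foreign key for every row of the dynamic detail table.
foreach ($_POST['details'] as $row) {
    mysql_query("INSERT INTO tblcamealsformdetails (form_id, description)
                 VALUES ($formId, '" . mysql_real_escape_string($row) . "')");
}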
In answer to the comment about what will happen if multiple users use the form at the same time:
Since you open a MySQL connection at the top of your script, which results in a unique connection resource, and all of the mysql_* functions you call automatically reference that resource, I think mysql_insert_id() will always refer to the latest query performed by the current connection. So another request by another user would not interfere with this.
Please note that I am not 100% sure about this though.
Anyway: I have been using this for about 10 years now, some of it on high-traffic websites, and I have never experienced any problems with this method, so in my opinion you can safely use it.
There is one exception to this:
You must always call mysql_insert_id() immediately after executing the query you want the ID for. If you execute any other query in the meantime (for example, you call a method of another object which performs an insert-query) mysql_insert_id() will return the ID of that query instead.
This is a mistake I have made in the past, and one you have to be aware of.
I'd like to point out one thing about using LAST_INSERT_ID():
when doing multiple-row inserts, LAST_INSERT_ID() will return the value of the first row inserted (not the last).
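A quick sketch of that behaviour with PDO; the items table and its rows are invented for illustration:

<?php
// Hypothetical table: items(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50)).
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Insert three rows in one statement.
$pdo->exec("INSERT INTO items (name) VALUES ('a'), ('b'), ('c')");

// Prints the id of the FIRST of the three rows, e.g. 1 when they were given 1, 2 and 3.
echo $pdo->lastInsertId();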
I am building a PHP RESTful API for remote "worker" machines to self-assign tasks. The MySQL InnoDB table on the API host holds pending records that the workers can pick up from the API whenever they are ready to work on a record. How do I prevent concurrently requesting worker systems from ever getting the same record?
My initial plan to prevent this is to UPDATE a single record with a uniquely generated ID in a default NULL field, and then poll for the details of the record where the unique ID field matches.
For example:
UPDATE mytable SET status = 'Assigned', uniqueidfield = '3kj29slsad'
WHERE uniqueidfield IS NULL LIMIT 1
And in the same PHP instance, the next query:
SELECT id, status, etc FROM mytable WHERE uniqueidfield = '3kj29slsad'
The resulting record from the SELECT statement above is then given to the worker. Would this prevent simultaneously requesting workers from being shown the same records? I am not exactly sure how MySQL handles the lookups within an UPDATE query, and whether two UPDATEs could "find" the same record and then update it sequentially. If this works, is there a more elegant or standardized way of doing this (I'm not sure whether FOR UPDATE would need to be applied here)? Thanks!
Nevermind my previous answer. I believe I understand what you are asking. I'll reword it so maybe it is clearer to others.
"If I issue two of the above update statements at the same time, what would happen?"
According to http://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html, the second statement would not interfere with the first one.
Normally, you do not need to lock tables, because all single UPDATE statements are atomic; no other session can interfere with any other currently executing SQL statement.
A more elegant way is probably opinion based, but I don't see anything wrong with what you're doing.
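For what it's worth, here is a rough sketch of that claim-then-select pattern with PDO; the connection details and the way the token is generated are assumptions:

<?php
// Hypothetical connection; mytable is assumed to have id, status and uniqueidfield columns.
$pdo = new PDO('mysql:host=localhost;dbname=tasks', 'user', 'pass');

// A token no other request will produce.
$token = uniqid('', true);

// The single UPDATE is atomic, so only one concurrent request can claim a given row.
$claim = $pdo->prepare("UPDATE mytable
                        SET status = 'Assigned', uniqueidfield = :token
                        WHERE uniqueidfield IS NULL
                        LIMIT 1");
$claim->execute([':token' => $token]);

if ($claim->rowCount() === 1) {
    // Fetch the row this request just claimed and hand it to the worker.
    $stmt = $pdo->prepare('SELECT id, status FROM mytable WHERE uniqueidfield = :token');
    $stmt->execute([':token' => $token]);
    $task = $stmt->fetch(PDO::FETCH_ASSOC);
}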
I am familiar with the MySQL function LAST_INSERT_ID(); is there a similar function for doing the same thing with an MS Access database via ODBC?
In my specific case, I am using PHP+PDO to insert rows into an Access database, and would like to know the last primary key value of each insert as they are performed.
If this functionality is not available, are there any alternatives? (without changing the database)
Thank you.
It seems that Access 2000 or later supports the @@IDENTITY property. So, you would only need to select its value after an INSERT:
SELECT @@IDENTITY FROM myTable
Please see the MSDN link: Retrieving Identity or Autonumber Values
In short:
[...] Microsoft Access 2000 or later does support the @@IDENTITY property to retrieve the value of an Autonumber field after an INSERT. Using the RowUpdated event, you can determine if an INSERT has occurred, retrieve the latest @@IDENTITY value, and place that in the identity column of the local table in the DataSet.
As others have said, SELECT @@IDENTITY works with Jet 4 and the ACE.
A new consideration has been introduced with Access 2010, and that's because the new ACE version supports table-level data macros, which are the equivalent of triggers. Thus, an insert into one table might trigger an insert into another, so that @@IDENTITY might be the value for the second table instead of the top-level one. So far as I know, there is no equivalent to SQL Server's SCOPE_IDENTITY() for this scenario.
I have asked about it in other Access forums and nobody seems to know. It's something to watch for should you be using an ACCDB with table-level data macros.
I've never attempted to use Access with PHP, but two ideas come to mind. The first one is simple: just SELECT MAX(id) FROM table after your insert; since the column is auto-incrementing, the highest value should be the inserted value. Secondly, you can try using odbc_cursor (http://au2.php.net/manual/en/function.odbc-cursor.php).
Try running "SELECT @@IDENTITY FROM MyTable" after your insert.
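A rough sketch of that over PDO's ODBC driver, as the question describes; the DSN, table, and column are placeholders:

<?php
// Hypothetical ODBC DSN pointing at the .mdb/.accdb file.
$pdo = new PDO('odbc:MyAccessDsn');

// Insert a row into a hypothetical table with an AutoNumber primary key.
$pdo->exec("INSERT INTO myTable (SomeColumn) VALUES ('example')");

// Jet 4 / ACE keep the last AutoNumber generated on this connection in @@IDENTITY.
$id = $pdo->query('SELECT @@IDENTITY')->fetchColumn();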
Consider a simple schema with a one-to-many relationship: the parent table's id is referenced in the child table.
In PHP I want to insert a row into the parent table using mysql_query($query). Then I will get the id of the last inserted row by using mysql_insert_id(), and use this id to insert another row into the child table.
My question is: since there could be multiple requests for the same PHP page happening at the same time, what if the above two statements do NOT run one after the other (for example, two inserts happen on the parent and then the two inserts on the child)? There could be concurrency issues. So how do we overcome this?
Any ideas?
When you call mysql_insert_id() it gets the last inserted id for that connection, so two PHP scripts won't interfere with each other.
As stated in the MySQL documentation:
The value of mysql_insert_id() is affected only by statements issued within the current client connection. It is not affected by statements issued by other clients.
So... no problem. Although, for other reasons, I recommend using an ORM (like Doctrine).
Problem: When I use an auto-incrementing primary key in my database, this happens all the time:
I want to store an Order with 10 Items. The ordered Items belong to the Order. So I store the order, ask the database for the last inserted id (which is dangerous when it comes to concurrency, right?), and then store the 10 Items with the foreign key (order_id).
So I always have to do:
INSERT ...
last_inserted_id = db.lastInsertId();
INSERT ...
INSERT ...
INSERT ...
and I believe this prevents me from using transactions in almost all INSERT cases where I need a foreign key.
So... here are some solutions, and I don't know if they're really good:
A) Don't use auto_increment keys! Use a key table?
A key table would have two fields: table_name and next_key. Every time I need a key for a table in order to insert a new dataset, I first ask for the next_key by accessing a special static KeyGenerator class method. This does a SELECT and an UPDATE, if possible in one transaction (would that work? see the sketch after this list). Of course I would request that for every affected table. Then I can INSERT my entire object graph in one transaction without playing ping-pong with the database, because I already know the keys in advance.
B) Using GUUID / UUID algorithm for keys?
These are supposed to be really unique worldwide, and they're LARGE. I mean... L_A_R_G_E. So a large amount of memory would go into these gigantic keys. Indexing will be hard, right? And data retrieval will be a pain for the database - at least I guess - integer keys are much faster to handle. On the other hand, they also provide some security: visitors can't iterate over all orders or all users or all pictures anymore just by incrementing the id parameter.
C) Stick with auto_incremented keys?
OK, if so, what about transactions like the one described in the example above? How can I solve that? Maybe by inserting a ghost row first and then doing a transaction with one UPDATE + n INSERTs?
D) What else?
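Here is a minimal sketch of the key-table idea from option A above, assuming a hypothetical keytable(table_name, next_key) on InnoDB and PDO; it is only meant to show the SELECT ... FOR UPDATE plus UPDATE step, not a finished generator:

<?php
// Hypothetical key table: keytable(table_name VARCHAR(64) PRIMARY KEY, next_key INT), InnoDB.
function nextKey(PDO $pdo, $tableName)
{
    $pdo->beginTransaction();

    // Lock the counter row so concurrent callers wait their turn.
    $stmt = $pdo->prepare('SELECT next_key FROM keytable WHERE table_name = :t FOR UPDATE');
    $stmt->execute([':t' => $tableName]);
    $key = (int) $stmt->fetchColumn();

    // Reserve the key by advancing the counter.
    $pdo->prepare('UPDATE keytable SET next_key = next_key + 1 WHERE table_name = :t')
        ->execute([':t' => $tableName]);

    $pdo->commit();
    return $key;
}

// The reserved keys can then be used as primary and foreign keys
// inside one INSERT transaction for the whole object graph.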
When storing orders, you need transactions to prevent situations where only half your products are added to the database.
Depending on your database and your connector, the value returned by the last-insert-id function might be transaction-independent. For instance, with MySQL, mysql_insert_id returns the identifier for the last query from that particular client (without being affected by what other clients are doing concurrently).
Which database are you using?
Yes, typically inserting a record and then trying to select it again to find the auto-generated key is bad, especially if you are using a naive SELECT MAX(id) FROM table query. This is because as soon as two threads are creating records, MAX(id) may not actually return the last id your current thread used.
One way to avoid this is to create a sequence in the database. From your code you select sequence.NextValue and then use that value to execute your inserts (or you can craft a more complex SQL statement that does the selection and the inserts in one go). Sequences are atomic / thread-safe.
In MySQL you can ask for the last inserted id from the execution results, which I believe will always give you the correct answer.
SQL Server supports SCOPE_IDENTITY() (Transact-SQL), which should take care of both your transaction issue and your concurrency issue.
I would say stick with auto_increment.
(Assuming you are using MySQL)
"ask the database for the last inserted id (which is dangerous when it comes to concurrency, right?)"
If you use MySQL's last_insert_id() function, you only see what happened in your session, so this is safe. You mention this:
db.last_insert_id()
I don't know what framework or language that is, but I would assume it uses MySQL's last_insert_id() under the covers (if not, it is a pretty useless database abstraction framework).
" I believe this prevents me from using transactions in almost all INSERT cases w"
I don't see why. Please explain.
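To make that concrete, here is a hedged sketch showing the id being used inside the same transaction, so the order and its items either all get stored or none do; the table and column names are invented:

<?php
// Hypothetical tables: orders(id AUTO_INCREMENT, customer) and order_items(order_id, product).
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("INSERT INTO orders (customer) VALUES ('example')");

    // last_insert_id() is per connection and works fine inside the open transaction.
    $orderId = $pdo->lastInsertId();

    $item = $pdo->prepare('INSERT INTO order_items (order_id, product) VALUES (:o, :p)');
    foreach (['keyboard', 'mouse', 'monitor'] as $product) {
        $item->execute([':o' => $orderId, ':p' => $product]);
    }

    $pdo->commit();   // either the order and all its items are stored, or none of them
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}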
D) Sequence: may not be available in your DBMS, but if it is, it solves your problem elegantly.
For Postgresql, have a look at Sequence Functions
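For example, with a PostgreSQL sequence you can reserve the key before the INSERTs, so the whole graph is written with the foreign keys already known; the sequence and table names below are assumptions:

<?php
// Hypothetical sequence and tables; PostgreSQL creates orders_id_seq for a serial id column.
$pdo = new PDO('pgsql:host=localhost;dbname=shop', 'user', 'pass');

// Reserve the next order id up front; nextval() is atomic across sessions.
$orderId = $pdo->query("SELECT nextval('orders_id_seq')")->fetchColumn();

$pdo->beginTransaction();
$pdo->prepare('INSERT INTO orders (id, customer) VALUES (:id, :c)')
    ->execute([':id' => $orderId, ':c' => 'example']);
$pdo->prepare('INSERT INTO order_items (order_id, product) VALUES (:id, :p)')
    ->execute([':id' => $orderId, ':p' => 'keyboard']);
$pdo->commit();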
There is no final and general answer to this question.
Auto-incrementing columns are easy to use when you add new records. Using them as foreign keys within the same transaction is not so straightforward: you need database-specific commands to get the newly created key. This approach is common for certain databases, for instance SQL Server.
Sequences seem harder to use, because you need to get a key before you insert a row, but in the end they are easier to use as foreign keys. This approach is common for certain databases, for instance Oracle.
When you use Hibernate or NHibernate, auto-incrementing keys are discouraged, because some optimizations are no longer possible. Using a hi-lo algorithm, which relies on an additional table, is recommended instead.
GUIDs are strong, for instance when sharing data between different databases or systems, in disconnected scenarios, for import/export, etc. In many databases, most tables contain only a few hundred records, so memory and performance are not such an issue. When using NHibernate, you get a GUID generator which produces sequential GUIDs, because some databases perform better when keys are sequential.