Can mysql_insert_id mix up things? - php

I have two tables in my database, 'file' and 'message'. When a user submits a file, I first insert the file's name, location and so on into the file table. Immediately after that I set the following variable:
$file_id = mysql_insert_id();
This gets the id that was assigned to the file. With that variable I can then insert it into the message table as message_file_id, so I can look up a message's file with just an id.
Everything is working, but my question is: if two users submitted a message with a file at the same time (which would be hard, maybe even impossible), is there any possibility that user A gets user B's file id?
There are variables that could affect this, like one user's request being faster or slower than another's.

No, there is no chance User A will get User B's file ID, as long as both inserts are performed on the same connection. MySQL is smart enough to give out distinct auto-increment IDs to different connections, even if they are inserting into the same table concurrently. You don't even have to wrap the two inserts in a transaction.
So there is no problem at all as long as you're using mysql_insert_id() - it will always work as you expect and won't mix up IDs, and you are completely safe to use the return value of mysql_insert_id() as the foreign key in a record related to the one you have just inserted.
http://dev.mysql.com/doc/refman/5.0/en/mysql-insert-id.html
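For illustration, a minimal sketch of the pattern from the question, using the same old ext/mysql API (the table and column names are assumptions, and $link is an already-open mysql_connect() handle):

$name     = mysql_real_escape_string($_FILES['upload']['name'], $link);
$location = mysql_real_escape_string('/uploads/' . $_FILES['upload']['name'], $link);

mysql_query("INSERT INTO file (name, location) VALUES ('$name', '$location')", $link);
$file_id = mysql_insert_id($link);   // id generated on THIS connection only

$text = mysql_real_escape_string($_POST['message'], $link);
mysql_query("INSERT INTO message (message_file_id, text) VALUES ($file_id, '$text')", $link);

Both mysql_insert_id() calls only ever see ids generated on $link, so a concurrent upload on another connection cannot leak into $file_id.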

Related

"buffered write scheme" for view counter with php, mysql

I read the topic https://meta.stackexchange.com/questions/36728/how-are-the-number-of-views-in-a-question-calculated . I understand the algorithm, but I don't understand how to implement it with MySQL and PHP.
Every time a new hit is registered, it is also added to a memory buffer in addition to the expiring cache entry. The buffer itself also expires after a few minutes or after it is filled up to a certain size, whichever happens first. When it expires, everything it has accumulated is written into the database in bulk. They call it a "buffered write scheme".
Should we use the MEMORY storage engine in MySQL, or is there maybe a better solution with MySQL and PHP?
Can anyone help me implement this "buffered write scheme" for a view counter with PHP and MySQL?
Thanks very much.
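As a rough illustration of that scheme (this assumes the APCu extension for the in-memory buffer; the key names, thresholds and the thread_views table are made up, and locking/races between requests are ignored for brevity):

function register_hit(mysqli $db, int $thread_id): void
{
    apcu_add('hit_buffer_started', time());          // remember when the buffer was opened
    apcu_add("hits_$thread_id", 0);                  // make sure the counter exists
    apcu_inc("hits_$thread_id");                     // count this hit in memory only

    // Track which threads have pending counts.
    $pending = apcu_fetch('hit_pending') ?: [];
    $pending[$thread_id] = true;
    apcu_store('hit_pending', $pending);

    $buffer_age  = time() - apcu_fetch('hit_buffer_started');
    $buffer_size = count($pending);

    if ($buffer_age > 120 || $buffer_size > 500) {   // flush every 2 minutes or 500 threads
        flush_hits($db, array_keys($pending));
    }
}

function flush_hits(mysqli $db, array $thread_ids): void
{
    $rows = [];
    foreach ($thread_ids as $id) {
        $count = (int) apcu_fetch("hits_$id");
        if ($count > 0) {
            $rows[] = sprintf('(%d, %d)', $id, $count);
        }
        apcu_delete("hits_$id");
    }
    apcu_delete('hit_pending');
    apcu_store('hit_buffer_started', time());

    if ($rows) {
        // One bulk write instead of one UPDATE per page view.
        // thread_views needs a PRIMARY or UNIQUE key on thread_id.
        $db->query(
            'INSERT INTO thread_views (thread_id, views) VALUES ' . implode(',', $rows) .
            ' ON DUPLICATE KEY UPDATE views = views + VALUES(views)'
        );
    }
}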
Well, it won't go faster than MySQL.
A stored procedure for your query can speed up the process, but database design is the other half.
Make sure you have one table for counting:
user_thread_visit:
----------------------------
user_id | thread_id | count
----------------------------
Make sure there is an index, or better a unique index over the two columns "user_id" and "thread_id".
When a user logs in, read all of his thread_id and count values and save them in a $_SESSION array.
This way you can check via $_SESSION whether the user has already visited the page, and simply skip querying the database if he was already here; this will reduce queries drastically.
Then just don't forget to UPDATE your database in case the user has never been on this thread, and also update your $_SESSION array manually right away.
With the query helper:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
you can combine the INSERT and the UPDATE (whatever is needed) into one query, which also improves performance.
This way a query is only fired the first time the user enters a thread, which you have to record somewhere anyway, and a database is one of the fastest ways to do that.
As long as your thread_id and user_id columns are indexed, the SELECT query should be pretty fast, even with a million rows in the table.
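Put together, a minimal sketch of the session check plus the combined insert/update (it assumes the user_thread_visit table above with a UNIQUE index on (user_id, thread_id); the session key name is made up):

function register_visit(mysqli $db, int $user_id, int $thread_id): void
{
    // Skip the database entirely if this session has already seen the thread.
    if (isset($_SESSION['visited_threads'][$thread_id])) {
        return;
    }

    // The UNIQUE index on (user_id, thread_id) makes the ON DUPLICATE KEY
    // branch fire for repeat visits by the same user.
    $stmt = $db->prepare(
        'INSERT INTO user_thread_visit (user_id, thread_id, `count`)
         VALUES (?, ?, 1)
         ON DUPLICATE KEY UPDATE `count` = `count` + 1'
    );
    $stmt->bind_param('ii', $user_id, $thread_id);
    $stmt->execute();

    // Remember the visit locally so the next page view costs no query at all.
    $_SESSION['visited_threads'][$thread_id] = true;
}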

MySQL - id "reservation" before insert

I need to know (and reserve) the id of a record before I insert it into the database, because when I add a product I also upload an image for it, and the image's location depends on the product id.
I can find the id that will come next, but I also need to keep it "locked" until the product image is uploaded, to be sure that this id isn't used by another user in the meantime.
You can split this into two operations.
In the first you create everything for the product and get the id. In the second you update the row with the image you've uploaded, which can now be saved since you know the ID.
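A minimal sketch of those two steps using mysqli (the table, column names and upload path are assumptions; $mysqli is an open connection):

$name  = $_POST['name'];
$price = (float) $_POST['price'];

// Step 1: insert the product without the image and grab the generated id.
$stmt = $mysqli->prepare('INSERT INTO product (name, price) VALUES (?, ?)');
$stmt->bind_param('sd', $name, $price);
$stmt->execute();
$product_id = $mysqli->insert_id;

// Step 2: store the upload in a location derived from that id, then record it.
$image_path = "uploads/products/$product_id.jpg";
move_uploaded_file($_FILES['image']['tmp_name'], $image_path);

$stmt = $mysqli->prepare('UPDATE product SET image_path = ? WHERE id = ?');
$stmt->bind_param('si', $image_path, $product_id);
$stmt->execute();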
You can do it as a transaction.
I googled "php mysql transaction" and there are a lot of tutorials on how to do this
http://www.linuxdig.com/news_page/1079394922.php
Id reservation can only be done safely if you have your own auto-increment implementation. Even when running in a transaction, you cannot know for sure what the next id will be; you can only be sure of what you got once you have requested it.
Some people use mysql_insert_id to retrieve the last inserted id, but I seem to remember that this function is not thread-safe and you could end up with the id of another thread that just inserted.
However, in your case only two options come to my mind:
1) Create your own sequence generator
2) Insert your data row, select it again with all its values in the WHERE clause, and use that id. Run option 2 in a transaction (as already suggested above).
I would go for option 1. Sequence generation is commonly used with all other databases (of the widely used ones, only MySQL supports auto-increment as far as I know), so sequences are a fully acceptable approach.
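For option 1, a rough sketch of a hand-rolled sequence table (the product_seq/product names are assumptions; $mysqli is an open connection):

// One-time setup:
//   CREATE TABLE product_seq (id INT UNSIGNED NOT NULL);
//   INSERT INTO product_seq VALUES (0);

// Reserve the next id. LAST_INSERT_ID(expr) remembers the value per connection,
// so two concurrent users can never be handed the same number.
$mysqli->query('UPDATE product_seq SET id = LAST_INSERT_ID(id + 1)');
$reserved_id = $mysqli->insert_id;

// Upload the image to a path based on $reserved_id, then insert the product
// row later with that explicit id.
$name = $_POST['name'];
$stmt = $mysqli->prepare('INSERT INTO product (id, name) VALUES (?, ?)');
$stmt->bind_param('is', $reserved_id, $name);
$stmt->execute();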
Cheers, Christian

How can I update multiple tables while guaranteeing no duplicate ids?

I'm used to building websites with user accounts, so I can simply auto-increment the user id, then let them log in while I identify that user by user id internally. What I need to do in this case is a bit different. I need to anonymously collect a few rows of data from people, and tie those rows together so I can easily discern which data rows belong to which user.
The difficulty I'm having is in generating the id to tie the data rows together. My first thought was to poll the database for the highest user ID in existence, and write to the database with user ID +1. This will fail, however, if two submissions poll the database before either of them writes to it - they will each share the same user ID.
Another thought I had was to create a separate user ID table that would be set to auto-increment, and simply generate a new row, then poll that table for the id of the last row created. That also fails for the same reason as above - if two submissions create a row before either of them polls for the latest user ID, then they'll end up sharing an ID.
Any ideas? I get the impression I'm missing something obvious.
I think I'm understanding you right; I was having a similar issue. There's a super handy PHP function for this, though. After the query that inserts a new row and auto-increments the user ID, do:
$user_id = mysql_insert_id();
That just returns the auto-increment value from the previous query on the current MySQL connection. You can read more about it in the PHP manual entry for mysql_insert_id() if you need to.
You can then use this to populate the second table's data, being sure nobody will get a duplicate ID from the first one.
You need to insert the user, get the auto-generated id, and then use that id as a foreign key in the couple of rows you need to associate with the parent record. The hat rack must exist before you can hang hats on it.
This is a common issue, and to solve it you would use a transaction. A transaction gives you atomicity: the ability to do more than one thing, but have it succeed or fail as a single package. It's an advanced db feature, and it does require awareness of some more advanced programming in order to implement it in as fault-tolerant a manner as possible.
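A rough sketch of that parent/child insert wrapped in a transaction (the table and column names are made up; it requires InnoDB tables and mysqli error reporting enabled):

$answers = $_POST['answers'];               // e.g. the few anonymous data rows to tie together

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw on errors
$mysqli->begin_transaction();
try {
    // Parent record: the anonymous "user" whose rows should be tied together.
    $mysqli->query('INSERT INTO participant (created_at) VALUES (NOW())');
    $participant_id = $mysqli->insert_id;   // the hat rack

    // Child records: each data row hangs off that id as a foreign key.
    $stmt = $mysqli->prepare('INSERT INTO data_row (participant_id, answer) VALUES (?, ?)');
    foreach ($answers as $answer) {
        $stmt->bind_param('is', $participant_id, $answer);
        $stmt->execute();
    }

    $mysqli->commit();                       // both tables are updated together, or ...
} catch (Exception $e) {
    $mysqli->rollback();                     // ... neither is touched
    throw $e;
}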

MySQL - Using temporary tables with PHP without mysql_pconnect

I want to use temporary tables in my PHP code. It's a form that will be mailed. I do use session variables and arrays, but some of the data filled in must be stored in table format, and the user must be able to delete entries in case of typos etc. Doing this with arrays could work (I'm not sure), but I'm kind of new at PHP and using tables seems so much simpler. My problem is that with mysql_connect the table is created and the first line of data is added, but when I add my second line the table is dropped and created again. Using mysql_pconnect works in that the table isn't dropped, but at times it creates more than one instance of the table and deletes entries. What a mess! How can I best use temporary tables and not have them dropped when my page refreshes? Not using temporary tables may cause other issues, e.g. if the user closes the page and leaves the table behind in the database.
Sounds like a mess! I am not sure why you are using a temp table at all, but you could create a random table name and assign it to a session variable. That would be hugely wrong, though, as you would end up with one table per user!
If you must use a database, add a field to the table called sessionID. When you do your inserting/deleting, reference the PHP session id.
Just storing the data in the session would probably be much easier though...
Better to create a permanent table and temporary rows. So, say you've serialized the object holding all your semi-complete form data as $form_data. At the end of the script, the last thing that should happen is that $form_data is stored to the database and the resulting row id is stored in your $_SESSION. Something like:
$form_data=serialize($form_object); //you can also serialize arrays, etc.
//$form_data may also need to be base64_encoded
$q="INSERT INTO incomplete_form_table (thedata) VALUES ('$form_data')";
$r=$mysqli->query($q);
$id=$mysqli->insert_id; // mysqli exposes the auto-increment value as insert_id
$_SESSION['currentform']=$id;
Then, when you get to the next page, you reconstitute your form like this:
$q="SELECT thedata FROM incomplete_form_table WHERE id={$_SESSION['currentform']}";
$r=$mysqli->query($q);
$row=$r->fetch_assoc();
$form_data=$row['thedata']; //if it was base64_encoded, don't forget to decode it first
$form_object=unserialize($form_data); //back to the original object/array
You can (auto-) clean up the incomplete_form_table periodically if the table has a timestamp field. Just delete everything that you consider expired.
The above code has not been exhaustively checked for syntax errors.
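For that periodic cleanup, one possibility run from a cron job (the created_at TIMESTAMP column is an assumption; the answer only says "a timestamp field"):

// Prune abandoned forms older than a day.
$mysqli->query('DELETE FROM incomplete_form_table WHERE created_at < NOW() - INTERVAL 1 DAY');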

Ids from mysql massive insert from simultaneous sources

I've got an application in PHP & MySQL where users write to and read from a particular table. One of the write modes is a batch, doing a single query with multiple values. The table has an ID which auto-increments.
The idea is that for each row in the table that is inserted, a copy is inserted in a separate table, as a history log, including the ID that was generated.
The problem is that multiple users can do this at once, and I need to be sure that the ID loaded is the correct one.
Can I be sure that if I do for example:
INSERT INTO table1 VALUES ('','test1'),('','test2')
that the ids generated are sequential?
How can I get the ids that were just inserted, and be sure that those are the ones from my insert?
I've thought of LOCK TABLES, but the users shouldn't notice it.
Hope I made myself clear...
Building an application that requires generated IDs to be sequential usually means you're taking the wrong approach - what happens when you have to delete a value some day, are you going to re-sequence the entire table? Much better to just let the values fall where they may, using a primary key to prevent duplication.
Based on the current implementation of MyISAM and InnoDB, yes. However, this is not guaranteed to stay that way in the future, so I would not rely on it.
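If you do rely on it, the ids of one batch can be recovered without LOCK TABLES, roughly like this (the column name is an assumption; it presumes InnoDB's default "consecutive" auto-increment lock mode, where a single multi-row INSERT is allocated one contiguous block of ids):

mysql_query("INSERT INTO table1 (name) VALUES ('test1'), ('test2'), ('test3')");

$first_id = mysql_insert_id();      // id of the FIRST row inserted by that statement
$count    = mysql_affected_rows();  // number of rows the statement inserted
$ids      = range($first_id, $first_id + $count - 1);

// $ids can now be copied into the history table together with the row data.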
