I am building a log-in system that uses three tables in a MySQL database (PHP): users, sessions and log-ins. All tables have an auto-increment index. After a successful log-in, the user row is linked to a session row via the values stored in a new log-in row. I am wondering whether mysqli_insert_id() is safe to use in this process. I am worried that if there is an error during the session-row INSERT, the log-in row will receive an incorrect session index number and the user will get logged into the wrong session.
Is this going to be a problem? If so, is there a good way to handle it?
That method will produce reliable results if:
The last INSERT operation succeeded.
The result is checked immediately after the INSERT succeeded on the same database connection.
Most of the time it will be sufficient to call INSERT and then fetch the ID of what was inserted as the next operation, so long as you're using the same database handle.
A framework will do all of this for you automatically, so it's usually not something you should be concerned with.
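For the log-in flow in the question, a minimal sketch might look like this (mysqli; table and column names are assumed for illustration, not taken from the question):

// Minimal sketch, assuming a mysqli connection in $db and that $user_id
// came from the earlier authentication step. The key points: check that
// the INSERT succeeded, and read the ID immediately, on the same connection.
$ok = mysqli_query($db, "INSERT INTO sessions (token) VALUES ('abc123')");
if ($ok === false) {
    die('session INSERT failed: ' . mysqli_error($db)); // do not continue
}
$session_id = mysqli_insert_id($db); // ID of the row we just inserted

mysqli_query($db, "INSERT INTO logins (user_id, session_id)
                   VALUES ($user_id, $session_id)");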
If you want to ensure the integrity of your data and the operations you perform, then I suggest you go with the "all or none" approach: either all of your queries succeed, or, if any one of them fails, they all fail. You can implement this using the TRANSACTION and ROLLBACK features in MySQL. For more info, you may visit: http://www.tutorialspoint.com/sql/sql-transactions.htm
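A rough sketch of the "all or none" approach with mysqli (names assumed, not taken from the question): if any INSERT fails, everything done since begin_transaction() is rolled back.

mysqli_begin_transaction($db);
try {
    if (!mysqli_query($db, "INSERT INTO sessions (token) VALUES ('abc123')")) {
        throw new Exception(mysqli_error($db));
    }
    $session_id = mysqli_insert_id($db);

    if (!mysqli_query($db, "INSERT INTO logins (user_id, session_id)
                            VALUES ($user_id, $session_id)")) {
        throw new Exception(mysqli_error($db));
    }
    mysqli_commit($db);   // both rows exist...
} catch (Exception $e) {
    mysqli_rollback($db); // ...or neither does
}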
It's better if you use triggers.
A trigger runs an additional query automatically whenever the query it is attached to executes.
See this tutorial.
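For illustration only, a sketch of what a trigger could look like here, inserting a log-in row automatically whenever a session row is created (all names are invented):

// Sketch: a trigger that fires after every INSERT on `sessions` and
// writes a matching row into `logins`. Run once, e.g. at install time.
mysqli_query($db, "
    CREATE TRIGGER after_session_insert
    AFTER INSERT ON sessions
    FOR EACH ROW
        INSERT INTO logins (session_id, created_at)
        VALUES (NEW.id, NOW())
");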
I have a little confusion about the PHP PDO function lastInsertId. If I understand correctly, it returns the last auto-increment ID that was inserted into the database.
I usually use this function when I execute a query that inserts a user into my database, when building the registration functionality.
My question is: say a hundred people are registering on my site at one point, and maybe one user hits the 'Register' button a millisecond after another user. Is there a chance that lastInsertId will return the ID of the other user who registered just a moment earlier?
Maybe what I am really asking is: does the server handle one request at a time and go through a PHP file one at a time?
Please let me know about this.
Thank you.
Perfectly safe. There is no race condition. It only returns the last inserted ID from the PDO object that made the insert.
It is safe: it is guaranteed to return a value from the current connection.
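A quick sketch to illustrate the per-connection behaviour (DSN, credentials and table are placeholders):

// Two separate connections inserting "at the same time".
$a = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$b = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$a->exec("INSERT INTO users (name) VALUES ('alice')");
$b->exec("INSERT INTO users (name) VALUES ('bob')");

// Each handle reports the ID of its *own* last insert, regardless of
// what other connections did in the meantime.
echo $a->lastInsertId(); // alice's ID
echo $b->lastInsertId(); // bob's ID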
I have a function similar to the following
function add_user_comment($user_id, $comment_id)
{
    // some code
}
Both users and comments have separate tables in the database.
Now, when the function is called with a wrong value for either $user_id or $comment_id, it should return an error message.
What is the proper approach to this:
1. Perform validations that user_id and comment_id exist in their tables, and then run the query? This means going to the database multiple times.
2. Run the query without checking, and throw an error when it fails because of foreign key constraints.
The proper way to do this is to create a stored procedure that handles the insert for you. In the stored procedure you can open a transaction, check for your values, and then raise an error that gets returned from the DBMS up to your code, which you can catch and use THAT as your indication that an error occurred. If you are NOT in an error condition, the stored procedure performs the insert, and you've now handled both tasks in a single operation, using only ONE database turn-around. Stored procedures run on the DBMS and are generally "compiled", so they run very quickly.
This is the best way, rather than having your CODE execute a query to see if something exists, having the results fetched from the DB, packaged up, and sent down the wire to your code, at which point you have to take action based on what you see, then package up the data YOU have and send IT down the wire to the DBMS to perform the action you wish to perform. That's two database turn-arounds when one will do the trick. It's all about error handling: how you do it, and what you do in the case of an error.
I should also mention that your schema has A GREAT DEAL to do with how successful your error-handling mechanism will be overall. If your schema is wrong, then you might be updating multiple tables, inserting data, and getting IDs back in order to insert data into other tables, etc... That's just... WRONG... You want to make sure your inserts are also done in a stored procedure, so that a single call inserts the record and any associated records, and possibly returns the new ID of the record you just inserted. Well, that or an error.
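A hedged sketch of that idea (procedure, table and column names are invented for illustration; SIGNAL needs MySQL 5.5+):

// One-time setup: a procedure that validates both IDs itself and raises
// an error via SIGNAL if either is missing, then performs the insert.
$pdo->exec("
    CREATE PROCEDURE add_user_comment(IN p_user_id INT, IN p_comment_id INT)
    BEGIN
        IF NOT EXISTS (SELECT 1 FROM users WHERE id = p_user_id) THEN
            SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'unknown user_id';
        END IF;
        IF NOT EXISTS (SELECT 1 FROM comments WHERE id = p_comment_id) THEN
            SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'unknown comment_id';
        END IF;
        INSERT INTO user_comments (user_id, comment_id)
        VALUES (p_user_id, p_comment_id);
    END
");

// Application side: one round trip, one place to catch the error.
try {
    $stmt = $pdo->prepare("CALL add_user_comment(?, ?)");
    $stmt->execute([$user_id, $comment_id]);
} catch (PDOException $e) {
    echo $e->getMessage(); // e.g. "unknown user_id"
}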
I'm using mysql_insert_id within my code to get an auto-increment value.
I have read around and it looks like there is no race condition regarding this between different user connections, but what about the same user? Am I likely to run into race-condition problems when connecting to the database as the same username/user but from different connection sessions?
My application is PHP. When a user submits a web request, my PHP executes code, and for that particular request/connection session I keep a persistent SQL connection open into MySQL for the length of that request. Will this cause me any race-condition problems?
None for any practical purpose. If you execute the last-ID request right after executing your INSERT, there is practically not enough time for another insert to spoil it. Theoretically it might be possible.
According to the PHP Manual:
Note: Because mysql_insert_id() acts on the last performed query, be sure to call mysql_insert_id() immediately after the query that generates the value.
Just in case you want to double-check, you can use mysql_info() to confirm your previous query.
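For example (legacy mysql_* API, as used in the question; note that mysql_info() only reports details for statements such as multi-row INSERTs, UPDATE and LOAD DATA):

// Sketch: confirm what the previous multi-row INSERT actually did.
mysql_query("INSERT INTO users (name) VALUES ('a'), ('b')");
echo mysql_info(); // e.g. "Records: 2  Duplicates: 0  Warnings: 0"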
The use of persistent connections doesn't mean that every request will use the same connection. It means that each Apache thread will have its own connection that is shared between all requests executing on that thread.
The requests will run serially (one after another) which means that the same persistent connection will not be used by two threads running at the same time.
Because of this, your last_insert_id value will be safe, but be sure that you check the result of your inserts before using it, because it will return the last_insert_id of the last successful INSERT, even if it wasn't the last executed INSERT.
My webservices are structured as follows:
1. Receive request
2. Parse and validate input
3. Do the actual webservice work
4. Validate output
I primarily use logging for debugging purposes: if something went wrong, I want to know what the request was so I can hope to reproduce it (i.e. send the exact same request).
Currently I'm logging to a MySQL database table. A record is created after step 1, updated with more info after steps 2 and 3, and cleaned up after step 4 (logs of successful requests are pruned).
I want the logging to be as quick and painless as possible. Any speed-up here will considerably improve overall performance (the round trip of each request).
I was thinking of using INSERT DELAYED, but I can't do that because I need LAST_INSERT_ID to update and later delete the log record, or at least to record the status of the request (i.e. success or error) so I know when to prune.
I could generate a unique ID myself (or at least an ID that is 'unique enough'), but even then I won't know the order of the DELAYED statements, and I might end up trying to update or delete a record that doesn't exist yet. And since DELAYED also removes the ability to use NUM_AFFECTED_ROWS, I can't check whether the queries took effect.
Any suggestions?
When you say pruned, I'm assuming that if it was a success you're removing the record? If so, I think it would be better if you had a Java object storing the information as the process unfolds instead of the database; then, if an exception occurs, you log the information held in the object to the database all at once.
If you wanted to take it further: I did something similar to this. I have a service that queues the logging of audit data and inserts the info at a steady pace, so if the services are getting hammered we're not clogging the DB with logging statements. But if you're only logging errors, that would probably be overkill.
I figured I can probably just do a REPLACE DELAYED and do the pruning some other time (with a DELETE DELAYED).
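A sketch of how that could look (table and column names assumed; keep in mind DELAYED only works with MyISAM-type engines and was removed in later MySQL versions):

// Key the log row on an application-generated ID, so DELAYED statements
// never need LAST_INSERT_ID. REPLACE rewrites the whole row, so every
// statement sends all columns.
$log_id = md5(uniqid(mt_rand(), true)); // "unique enough" request ID

mysql_query("REPLACE DELAYED INTO request_log (id, status)
             VALUES ('$log_id', 'received')");
// ... later in the same request, same key, so REPLACE overwrites:
mysql_query("REPLACE DELAYED INTO request_log (id, status)
             VALUES ('$log_id', 'success')");
// pruning of successful rows then happens out-of-band, e.g. a cron DELETE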
When I save an array of records, i.e. multiple records, and one of the records in the middle causes an (SQL) error, what will happen? Will all records from that point on fail to be inserted, or just the current row, or none of them? How should I handle the situation?
The PDO driver is MySQL.
Take a look at PDO transactions: http://php.net/manual/en/pdo.begintransaction.php
You can check whether there was an error, and if so roll back your changes, or do whatever else you intend to do.
These situations are managed with database transactions.
The classic example is when I want to transfer money from my account to another account. There are two queries to be done:
1. Remove the money from my account
2. Put the money in the other account
Of course, if the second query fails, I want the first one to be rolled back and the user to be notified of the error. That's what transactions are for.
If you don't use transactions, when the second query fails, the first is executed anyway and not rolled back (so the money disappears). This is the default behaviour of MySQL.
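A minimal PDO sketch of the transfer (account table and IDs assumed; PDO must be in exception error mode for the catch to fire):

$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
try {
    $pdo->beginTransaction();
    $pdo->exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
    $pdo->exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
    $pdo->commit();   // money moved
} catch (PDOException $e) {
    $pdo->rollBack(); // money stays where it was
    echo 'Transfer failed: ' . $e->getMessage();
}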
The general solution is to use a TRANSACTION (MySQL) (PostgreSQL) (MSSQL). What you can do with it and how much control you have depends on the RDBMS. For example, PostgreSQL lets you create a SAVEPOINT, which you can ROLLBACK TO.
Another solution would be to use a STORED PROCEDURE. In that case you can specify what should happen when an error occurs, using DECLARE .. HANDLER.
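A sketch of the handler approach (all names invented; MySQL syntax):

// Any SQL error inside the body triggers the EXIT handler, which rolls
// the whole transaction back before the procedure returns.
$pdo->exec("
    CREATE PROCEDURE save_records()
    BEGIN
        DECLARE EXIT HANDLER FOR SQLEXCEPTION
        BEGIN
            ROLLBACK;
        END;

        START TRANSACTION;
        INSERT INTO t1 (x) VALUES (1);
        INSERT INTO t2 (y) VALUES (2);
        COMMIT;
    END
");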