I read the following in the PHP/Oracle manual from php.net:
A transaction begins when the first SQL statement that changes data is executed with oci_execute() using the OCI_NO_AUTO_COMMIT flag. Further data changes made by other statements become part of the same transaction. Data changes made in a transaction are temporary until the transaction is committed or rolled back. Other users of the database will not see the changes until they are committed.
There are two things that I don't understand:
What is committing for?
What does it mean that "other users of the database will not see the changes until they are committed"? How will they not be able to see the changes?
Well, you should read some more about transactions.
Simply put, you can think of any query within a transaction as a draft, a temporary data set, that only you (within your database session/connection) can see, unless you issue a commit.
Another analogy is to think of a transaction as your thoughts on something that you will actually write down on paper afterwards. The commit is the act of actually writing it down, so it no longer exists only in your head.
Committing is the finalization of the transaction in which the changes are made permanent.
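To make that concrete, here's a minimal OCI8 sketch; the connection details and the accounts table are placeholder assumptions, not something from the manual:

<?php
// Placeholder credentials and table; adjust to your environment.
$conn = oci_connect('scott', 'tiger', 'localhost/XE');

$stid = oci_parse($conn, "UPDATE accounts SET balance = balance - 100 WHERE id = 1");

// OCI_NO_AUTO_COMMIT starts (or continues) a transaction instead of committing at once.
oci_execute($stid, OCI_NO_AUTO_COMMIT);

// Right now only this session sees the new balance.
oci_commit($conn);    // make it permanent and visible to everyone
// ...or oci_rollback($conn); to discard it as if it never happened.
?>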
Because Oracle has the read-consistent view, users that start a transaction will only be able to see data that was already committed when that transaction started. So when user A starts a transaction and user B then changes some values in a table and commits, user A won't see the changed data until user A starts a new transaction. The read-consistent view makes sure that all users always see a consistent state, one with all data committed.
As a consequence, a single block of a table can have multiple versions in the undo tablespace, just to support the read-consistent views of the various transactions.
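A sketch of what that looks like from two sessions, assuming a hypothetical accounts table and transaction-level read consistency (e.g. SERIALIZABLE isolation):

-- Session A
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;  -- returns 100

-- Session B
UPDATE accounts SET balance = 200 WHERE id = 1;
COMMIT;

-- Session A, still in its original transaction
SELECT balance FROM accounts WHERE id = 1;  -- still returns 100
COMMIT;

-- Session A, in a new transaction
SELECT balance FROM accounts WHERE id = 1;  -- now returns 200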
For our web application, written in Laravel, we use transactions to update the database. We have separated our data across different databases (for ease, let's say "app" and "user"). During an application update, an event is fired to update some user statistics in the user database. However, this application update may itself be called as part of a transaction on the application database, so the code structure looks something like the following.
DB::connection('app')->beginTransaction();
// ...
DB::connection('user')->doStuff();
// ...
DB::connection('app')->commit();
It appears that any attempt to start a transaction on the user connection (a single query already creates an implicit transaction) while inside the app transaction does not work, causing a deadlock (according to the InnoDB status output). I also used innotop to get some more information; while it showed the lock wait, it did not show which query was blocking it. There was a lock present on the user table, but I could not find its origin.
The easy solution would be to pull the user operation out of the transaction, but since the actual code is slightly more complicated (the doStuff actually happens in a nested method that is called during the transaction, from different places), this is far from trivial. I would very much like doStuff to be part of the transaction, but I do not see how a transaction can span multiple databases.
What is the reason that this situation causes a deadlock and is it possible to run the doStuff on the user database within this transaction, or do we have to find an entirely different solution like queueing the events for execution afterwards?
I have found a way to solve this issue with a workaround. My hypothesis was that Laravel was using the app connection to update the user database, but after forcing it to use the user connection, it still locked. Then I switched it around and used the app connection to update the user database. Since we have joins between the databases, our database users already have the proper access rights, so that was no issue. This actually solved the problem.
That is, the final code turned out to be something like
DB::connection('app')->beginTransaction();
// ...
DB::connection('app')->table('user.users')->doStuff(); // Pay attention to this line!
// ...
DB::connection('app')->commit();
I currently use InnoDB transactions to manage the effect of any single webpage request: one request per transaction. This works well; if the request fails, I can just roll it back and ignore it.
As a relative newbie to MySQL administration, I remain worried that something I write in my PHP code will do something bad to my database, a DELETE FROM or UPDATE without a WHERE clause as an extreme example. The idea behind the transactions is that when I inevitably notice what happened later, after the bad transaction has been committed, I should be able to roll back the mistake.
However, the database is used heavily, so it's likely that other transactions will come in between the moment I commit the bad transaction and the moment I notice it and go to act on it. But all the documentation I have seen on transactions, and the AWS restore-to-point-in-time feature, only lets you "go back" to before a transaction was committed.
So, how do I recover or "roll forward" the transactions that came in after my bad one? They are in the InnoDB log, so shouldn't I be able to apply the later transactions again, just skipping the one bad one? My software interfaces with an external credit card processor, so simply losing those later transactions isn't an option.
I have a hard time imagining it's impossible, but I can't find any way to "roll forward". Is this possible? Is it something you have to build into the database structure itself, like keeping a history table with triggers and using the history records to re-apply changes after rolling back?
Confused.
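For reference, a sketch of the "history table with triggers" idea mentioned above, with invented table and column names, could look like this in MySQL:

CREATE TABLE accounts_history (
    account_id INT,
    balance    DECIMAL(10,2),
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Record the old value of every update, so later changes can be re-applied by hand.
CREATE TRIGGER accounts_audit
AFTER UPDATE ON accounts
FOR EACH ROW
INSERT INTO accounts_history (account_id, balance) VALUES (OLD.id, OLD.balance);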
I have just successfully attempted using beginTransaction() to execute my SQL statements in my PHP project. I have an array of items that need to be written to the database, and each item must be validated against something before being stored. One of the good things about turning off the auto-commit behaviour of the database is that you can roll back the whole transaction if something goes wrong in the middle. In my project, if one item is invalid, the whole array should not be recorded in the database, which is why I chose this approach.
Now I am just wondering whether this really is better in terms of performance, because even if the last item in the array validates, I still need to manually commit() the previous executions. Does commit repeat the SQL execution?
The only advantage I can think of right now is that you only need one loop instead of two if you want to validate all items first (assuming they are all valid) and then write each of them.
First validate everything, then begin the transaction and do the database work. Transactions are not made to help with validating data.
Commit does not repeat SQL execution.
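A minimal PDO sketch of that order of operations; the items table and the is_valid() check are made-up placeholders:

<?php
// $pdo is an existing PDO connection.
foreach ($items as $item) {
    if (!is_valid($item)) {
        exit('Invalid item, nothing written.');  // validate everything first, outside the transaction
    }
}

$pdo->beginTransaction();
$stmt = $pdo->prepare("INSERT INTO items (name) VALUES (?)");
foreach ($items as $item) {
    $stmt->execute([$item['name']]);
}
$pdo->commit();  // finalizes the inserts above; it does not re-run them
?>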
Typically, when working transactionally, whenever you execute an INSERT/UPDATE/DELETE statement, the database takes a copy of the affected records/data pages into a transaction log, and then applies the actual record changes.
If anybody else tries to access those records/data pages during the course of your transaction, they will be redirected to the copy in the transaction log.
Then, when you execute the commit, the data in the database itself is already updated, and all the server needs to do is delete the transaction log.
If you rollback rather than commit, then the database server backtracks through the transaction log, restoring all the records/data pages that you have updated to their original state, deleting each transaction log entry as it goes.
Therefore, a rollback is an overhead, because the database server has to restore the data to its pre-transaction state.
You can use savepoints. From the manual:
BEGIN;
INSERT INTO table1 VALUES (1);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (2);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (3);
COMMIT;
You still have to validate your input, but you can now roll back within a single transaction. Using transactions can also make the database faster, because there are fewer (implicit) commits.
When I save an array of records, i.e. multiple records, and one of the records in the middle causes an SQL error, what will happen? Will all records from that point on not be inserted, just the current row, or none of them? How should I handle the situation?
The PDO driver is MySQL.
Take a look at PDO-Transactions: http://php.net/manual/en/pdo.begintransaction.php
You can check whether there was an error, and if so roll back the transaction, or do whatever else you intend to do.
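For example, a sketch assuming $pdo is set to throw exceptions (PDO::ERRMODE_EXCEPTION) and a made-up records table:

<?php
try {
    $pdo->beginTransaction();
    $stmt = $pdo->prepare("INSERT INTO records (value) VALUES (?)");
    foreach ($values as $value) {
        $stmt->execute([$value]);  // an error on any row throws a PDOException
    }
    $pdo->commit();                // every row was inserted, or...
} catch (PDOException $e) {
    $pdo->rollBack();              // ...none of them is kept
    echo 'Insert failed: ' . $e->getMessage();
}
?>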
These situations are managed with database transactions.
The classical example is when I want to transfer money from my account to another account. There are two queries to be done:
Remove the money from my account
Put the money in the other account
Of course, if the second query fails, I want the first one to be rolled back and the user to be notified of the error. That's what transactions are for.
If you don't use transactions, when the second query fails, the first is executed anyway and not rolled back (so the money disappears). This is the default behaviour of MySQL.
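In SQL, the transfer looks something like this (table and amounts invented):

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- remove the money from my account
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- put the money in the other account
COMMIT;  -- both changes become permanent together; on an error, issue ROLLBACK instead and neither survives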
The general solution would be to use a TRANSACTION (mysql) (pgsql) (mssql). What you can do with it and how much control you have depends on the RDBMS. For example, PostgreSQL lets you create a SAVEPOINT, to which you can ROLLBACK TO.
Another solution would be to use a STORED PROCEDURE. In that case you can specify what should happen when an error occurs with DECLARE .. HANDLER.
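For instance, a MySQL sketch (procedure and table names invented) where any SQL error undoes the whole batch:

DELIMITER //
CREATE PROCEDURE insert_items()
BEGIN
    -- On any SQL error: undo everything done so far, then leave the procedure.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
    END;

    START TRANSACTION;
    INSERT INTO items (name) VALUES ('first');
    INSERT INTO items (name) VALUES ('second');  -- if this fails, 'first' is rolled back too
    COMMIT;
END //
DELIMITER ;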
Can I insert something into a MySQL database using PHP and then immediately make a call to access that, or is the insert asynchronous (in which case the possibility exists that the database has not finished inserting the value before I query it)?
What I think the OP is asking is this:
<?php
// Hypothetical items table; $db is assumed to be a PDO connection.
$db->exec("INSERT INTO items (name) VALUES ('example')");
$id = $db->lastInsertId();

// In this case, $row will always have the data you just inserted!
$row = $db->query("SELECT * FROM items WHERE id = " . (int) $id)->fetch();
?>
In this case, if you do an insert, you will always be able to access the last inserted row with a select. That doesn't change even if a transaction is used here.
If the value is inserted in a transaction, it won't be accessible to any other transaction until your original transaction is committed. Other than that it ought to be accessible at least "very soon" after the time you commit it.
There are normally two ways of using MySQL (and most other SQL databases, for that matter):
Transactional. You start a transaction (either implicitly or by issuing something like 'BEGIN'), issue commands, and then either explicitly commit the transaction, or roll it back (failing to take any action before cutting off the database connection will result in automatic rollback).
Auto-commit. Each statement is automatically committed to the database as it's issued.
The default mode may vary, but even if you're in auto-commit mode, you can "switch" to transactional just by issuing a BEGIN.
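For example (invented tables):

-- auto-commit mode: this statement is committed the moment it runs
UPDATE counters SET hits = hits + 1;

-- switching to a transaction for a group of statements
BEGIN;
UPDATE counters SET hits = hits + 1;
UPDATE stats SET last_hit = NOW();
COMMIT;  -- both changes become visible to other connections at once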
If you're operating transactionally, any changes you make to the database will be local to your db connection/instance until you issue a commit. Issuing a commit should block until the transaction is fully committed, so once it returns without error, you can assume the data is there.
If you're operating in auto-commit (and your database library isn't doing something really strange), you can rely on data you've just entered to be available as soon as the call that inserts the data returns.
Note that best practice is to always operate transactionally. Even if you're only issuing a single atomic statement, it's good to be in the habit of properly BEGINing and COMMITing a transaction. It also saves you from trouble when a new version of your database library switches to transactional mode by default and suddenly all your one-line SQL statements never get committed. :)
Mostly the answer is yes. You would have to do some special work to force a database call to be asynchronous in the way you describe, and as long as you're doing it all in the same thread, you should be fine.
What is the context in which you're asking the question?