Can I insert something into a MySQL database using PHP and then immediately make a call to access that, or is the insert asynchronous (in which case the possibility exists that the database has not finished inserting the value before I query it)?
What I think the OP is asking is this:
<?php
$id = $db->insert(...);
// in this case, $row will always have the data you just inserted!
$row = $db->select(/* ... WHERE id = $id ... */);
?>
In this case, if you do an insert, you will always be able to access the last inserted row with a select. That doesn't change even if a transaction is used here.
If the value is inserted in a transaction, it won't be accessible to any other transaction until your original transaction is committed. Other than that it ought to be accessible at least "very soon" after the time you commit it.
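As a concrete illustration of that visibility rule, here is a minimal PDO sketch; the connection credentials and the users InnoDB table are assumptions, not part of the original question:
<?php
// Two separate connections to the same database (credentials assumed).
$a = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$b = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$a->beginTransaction();
$a->exec("INSERT INTO users (username) VALUES ('alice')");

// Connection $a sees its own uncommitted row immediately:
echo $a->query("SELECT COUNT(*) FROM users WHERE username = 'alice'")->fetchColumn(); // 1

// Connection $b does not see it until $a commits:
echo $b->query("SELECT COUNT(*) FROM users WHERE username = 'alice'")->fetchColumn(); // 0

$a->commit();
// A fresh query on $b now sees the row as well.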
There are normally two ways of using MySQL (and most other SQL databases, for that matter):
Transactional. You start a transaction (either implicitly or by issuing something like 'BEGIN'), issue commands, and then either explicitly commit the transaction, or roll it back (failing to take any action before cutting off the database connection will result in automatic rollback).
Auto-commit. Each statement is automatically committed to the database as it's issued.
The default mode may vary, but even if you're in auto-commit mode, you can "switch" to transactional just by issuing a BEGIN.
If you're operating transactionally, any changes you make to the database will be local to your db connection/instance until you issue a commit. Issuing a commit should block until the transaction is fully committed, so once it returns without error, you can assume the data is there.
If you're operating in auto-commit (and your database library isn't doing something really strange), you can rely on data you've just entered to be available as soon as the call that inserts the data returns.
Note that best practice is to always operate transactionally. Even if you're only issuing a single atomic statement, it's good to be in the habit of properly BEGINing and COMMITing a transaction. It also saves you from trouble when a new version of your database library switches to transactional mode by default and suddenly all your one-line SQL statements never get committed. :)
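For instance, a sketch of that habit using PDO; the connection details and the accounts table are placeholders of mine:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();                // equivalent to issuing BEGIN
try {
    $pdo->exec("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
    $pdo->exec("UPDATE accounts SET balance = balance + 10 WHERE id = 2");
    $pdo->commit();                      // blocks until the commit has succeeded
} catch (Exception $e) {
    $pdo->rollBack();                    // none of the above is visible to others
    throw $e;
}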
Mostly the answer is yes. You would have to do some special work to force a database call to be asynchronous in the way you describe, and as long as you're doing it all in the same thread, you should be fine.
What is the context in which you're asking the question?
Related
So, let's say I'm using two drivers at the same time (specifically, mysql and sqlite3).
I have a set of changes that must be commit()ted on both connections only if neither DBMS failed, or rollBack()ed if either one did fail:
<?php
interface DBList
{
    function addPDO(PDO $connection);

    // calls ->rollBack() on all the pdo instances
    function rollBack();

    // calls ->commit() on all the pdo instances
    function commit();

    // calls ->beginTransaction() on all the pdo instances
    function beginTransaction();
}
Question is: will it actually work? Does it make sense?
"Why not use just mysql?" you would say! I'm not a masochist! I need mysql for the classic fruition via my application, but I also need to keep a copy of a table that is always synchronized and that is also downloadable and portable!
Thank you a lot in advance!
I suspect you're putting the cart before the horse! If
two databases are in sync
a transaction commits successfully on one DB
no OS-level error occurs
then the transaction will also commit successfully on the second DB.
So what you would want to do is:
- Start the transaction on MySQL
- Record all data-changing SQL (see later)
- Commit the transaction on MySQL
- If the commit works, run the recorded SQL against SQLite
- if not, roll back MySQL
Caveat: The assumption above is only valid, if the sequence of transactions is identical on both DBs. So you would want to record the SQL into a MySQL table, which is subject to the same transaction logic as the rest. This does the serialization work for you.
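A rough sketch of that record-and-replay flow, assuming PDO connections to both engines; the replication_log table and all statements here are placeholders:
<?php
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$sqlite = new PDO('sqlite:copy.db');
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$statements = [
    "INSERT INTO users (username) VALUES ('test')",
    "UPDATE users SET email = 'test@example.com' WHERE username = 'test'",
];

$mysql->beginTransaction();
try {
    $log = $mysql->prepare("INSERT INTO replication_log (sql_text) VALUES (?)");
    foreach ($statements as $sql) {
        $mysql->exec($sql);
        // Recording happens inside the same transaction, so the log is
        // serialized exactly like the data changes themselves.
        $log->execute([$sql]);
    }
    $mysql->commit();
} catch (Exception $e) {
    $mysql->rollBack();  // MySQL failed: SQLite is never touched
    throw $e;
}

// Only after MySQL has committed do we replay against SQLite.
foreach ($statements as $sql) {
    $sqlite->exec($sql);
}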
You're mistaking PDO for a database server. PDO is just an interface, pretty much like the database console. It doesn't perform any data operations of its own. It cannot insert or select data. It cannot perform data locks or transactions. All it can do is send your command to the database server and bring back the results, if any. It's just an interface; it doesn't have transactions of its own.
So, instead of such fictional cross-driver transactions, you can use regular ones.
Start two, one for each driver, and then roll them back accordingly. By the way, with PDO you don't have to roll back manually. Just set PDO to exception mode, write your queries, and add a commit at the end. If one of the queries fails, all started transactions will be rolled back automatically due to script termination.
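A sketch of that approach with the two drivers from the question; the connections and the users table are assumed:
<?php
$mysql  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$sqlite = new PDO('sqlite:copy.db');
// Exception mode: any failed query throws and terminates the script.
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$mysql->beginTransaction();
$sqlite->beginTransaction();

$mysql->exec("INSERT INTO users (username) VALUES ('test')");
$sqlite->exec("INSERT INTO users (username) VALUES ('test')");

// If any query above threw, we never reach this point, and both servers
// discard their uncommitted transactions when the script dies.
$mysql->commit();
$sqlite->commit();
Note that this is still not a true two-phase commit: a crash between the two commit() calls can leave the databases out of sync, which is why the record-and-replay approach above was suggested.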
I'm using mysql_insert_id within my code to get the auto-increment value.
I have read around and it looks like there is no race condition regarding this for different user connections, but what about the same user? Will I be likely to run into race condition problems when connecting to the database using the same username/user but still from different connection sessions?
My application is PHP. When a user submits a web request my PHP executes code and for that particular request/connection session I keep a persistent SQL connection open in to MySQL for the length of that request. Will this cause me any race condition problems?
None for any practical purpose. If you execute the last-insert-id request right after executing your insert, there is practically not enough time for another insert to spoil it, though theoretically it might be possible. (In fact, MySQL maintains the last insert id per connection, so an insert made over a different connection cannot affect the value you read over yours.)
According to the PHP Manual:
Note:
Because mysql_insert_id() acts on the last performed query, be sure to
call mysql_insert_id() immediately after the query that generates the
value.
Just in case you want to double-check, you can use this function to confirm your previous query:
mysql_info
The use of persistent connections doesn't mean that every request will use the same connection. It means that each Apache thread will have its own connection that is shared between all requests executing on that thread.
The requests will run serially (one after another) which means that the same persistent connection will not be used by two threads running at the same time.
Because of this, your last_insert_id value will be safe, but be sure that you check the result of your inserts before using it, because it will return the last_insert_id of the last successful INSERT, even if it wasn't the last executed INSERT.
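For example, with the old mysql_* API used in the question (note this API was removed in PHP 7; $link and the users table are assumptions):
<?php
$result = mysql_query("INSERT INTO users (username) VALUES ('test')", $link);
if ($result === false) {
    die('Insert failed: ' . mysql_error($link));
}
// Safe: mysql_insert_id() acts on the last query sent over $link, and a
// persistent connection is never shared by two requests running at once.
$id = mysql_insert_id($link);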
I read the following in the PHP/Oracle manual from php.net:
A transaction begins when the first SQL statement that changes data is executed with oci_execute() using the OCI_NO_AUTO_COMMIT flag. Further data changes made by other statements become part of the same transaction. Data changes made in a transaction are temporary until the transaction is committed or rolled back. Other users of the database will not see the changes until they are committed.
There are two things that I don't understand:
What is committing for?
What does it mean that "other users of the database will not see the changes until they are committed"? How will they not be able to see the changes?
Well, you should read some more about transactions.
Simply put: you can think of any query within a transaction as a draft, a temporary data set that only you (within your database session/connection) can see, unless you issue a commit.
Another analogy is to think of a transaction as thoughts on something that you will only afterwards write down on paper. The commit is the act of actually writing it down, so it no longer exists only in your head.
Committing is the finalization of the transaction in which the changes are made permanent.
Because Oracle has the read consistent view, users that start a transaction will only be able to see data that was committed when the new transaction started. So when user A starts a transaction and user B changes some values in a table and commits it, user A won't see the changed data until user A starts a new transaction. The read consistent view makes sure that all users always see a consistent state, one with all data committed.
As a result, a single block of a table can have multiple versions in the undo tablespace, just to support the read-consistent views for various transactions.
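A minimal oci8 sketch of the behaviour the manual describes; the connection details and the accounts table are placeholders:
<?php
$conn = oci_connect('user', 'pass', 'localhost/XE');

$stid = oci_parse($conn, "UPDATE accounts SET balance = balance - 10 WHERE id = 1");
// OCI_NO_AUTO_COMMIT begins (or continues) a transaction instead of
// committing immediately; other sessions cannot see this change yet.
oci_execute($stid, OCI_NO_AUTO_COMMIT);

if (oci_commit($conn)) {
    // Only now do other users' new transactions see the change.
} else {
    oci_rollback($conn);     // the draft is discarded
}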
A long time ago I wrote a php class that handles postgresql db connections.
I've added transactions to my insert/update functions and it works just fine for me. But recently I found out about the "pg_prepare" function.
I'm a bit confused about what that function does and if it'll be better to switch to it.
Currently whenever I do an insert/update my sql looks like this:
$transactionSql = "PREPARE TRANSACTION ".md5(time()).";"
.$theUpdateOrDeleteSQL.";".
."COMMIT;";
This will return something like:
PREPARE TRANSACTION '4601a2e4b4aa2632167d3cc62b516e6d';
INSERT INTO users (username,g_id,email,password)
VALUES('test', '1', 'test','1234');
COMMIT;
I've structured my database with relations and I'm using (when it's possible):
ON DELETE CASCADE
ON UPDATE CASCADE
But I want to be 100% sure things are clean in the database and there are no leftovers if/when there is a failure upon updating/deleting or inserting.
It would be nice if you could share your opinion/experience about pg_prepare, whether I really need the "prepare transaction", and any other additional things that might help me? :)
No, you don't need two-phase commit!
For safe PHP database handling, do not use pg_query directly; rather, wrap it in a special function which does the following:
opens the database connection on your first query
if using persistent connections, ensures the connection is in a known state
registers a shutdown function (via register_shutdown_function) that issues a ROLLBACK
makes sure autocommit is off, or simply issues a BEGIN before the first query
logs database errors and slow queries
only uses pg_query_params(), which takes care of SQL injection nicely
That way, if your script crashes or whatever, a rollback is issued automatically. You can only commit by explicitly committing.
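A partial sketch of such a wrapper; the function name, connection string, and the one-second slow-query threshold are my own choices, and the persistent-connection reset from the list above is omitted for brevity:
<?php
$conn = null;

function safe_pg_query(string $sql, array $params = [])
{
    global $conn;
    if ($conn === null) {
        // The first query opens the connection and starts the transaction.
        $conn = pg_connect('host=localhost dbname=app');
        register_shutdown_function(function () {
            global $conn;
            // If the script dies before an explicit COMMIT, roll back.
            @pg_query($conn, 'ROLLBACK');
        });
        pg_query($conn, 'BEGIN');
    }
    $start = microtime(true);
    $result = pg_query_params($conn, $sql, $params); // parameters, not interpolation
    if ($result === false) {
        error_log('Query failed: ' . pg_last_error($conn));
    } elseif (microtime(true) - $start > 1.0) {
        error_log('Slow query: ' . $sql);
    }
    return $result;
}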
If you use persistent connections, beware: PHP's handling of pg_pconnect is a little ... buggy.
No, you don't need prepare transaction (that is intended for distributed transactions across different servers, as Milen has already pointed out).
I'm not sure how the PHP interface handles that, but as long as you can make sure you are not running in auto-commit mode, things should be fine.
If you can't control the auto-commit mode, simply put your statements into a BEGIN ... COMMIT block.
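For example (connection and table assumed):
<?php
pg_query($conn, 'BEGIN');
pg_query_params($conn, 'INSERT INTO users (username) VALUES ($1)', ['test']);
pg_query($conn, 'COMMIT');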
I'm sorry, this is a very general question but I will try to narrow it down.
I'm new to this whole transaction thing in MySQL/PHP, but it seems pretty simple. I'm just using mysql, not mysqli or PDO. I have a script that seems to be rolling back some queries but not others. This is uncharted territory for me, so I have no idea what is going on.
I start the transaction with mysql_query('START TRANSACTION;'), which I understand disables autocommit at the same time. Then I have a lot of complex code, and whenever I do a query it is something like mysql_query($sql) or $error = "Oh noes!";. Periodically I call a function error_check(), which checks whether $error is non-empty; if it is, I do mysql_query('ROLLBACK;') and die($error). Later on in the code I have mysql_query('COMMIT;'). But if I do two queries and then purposely set $error to something, it looks like the first query rolls back but the second one doesn't.
What could be going wrong? Are there some gotchas with transactions I don't know about? I don't have a good understanding of how these transactions start and stop especially when you mix PHP into it...
EDIT:
My example was overly simplified. I actually have at least two transactions doing INSERT, UPDATE or DELETE on separate tables. But before I execute each of those statements, I back up the rows in corresponding "history" tables to allow undoing. It looks like the manipulation of the main tables gets rolled back, but the entries in the history tables remain.
EDIT2:
Doh! As I finished typing the previous edit it dawned on me...there must be something wrong with those particular tables...for some reason they were all set as MyISAM.
Note to self: Make sure all the tables use transaction-supporting engines. Dummy.
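One way to catch this early is to ask information_schema which tables are not on a transactional engine (a mysqli connection in $mysqli is assumed here):
<?php
$result = $mysqli->query(
    "SELECT TABLE_NAME, ENGINE
       FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = DATABASE() AND ENGINE <> 'InnoDB'"
);
while ($row = $result->fetch_assoc()) {
    echo "{$row['TABLE_NAME']} uses {$row['ENGINE']}\n"; // e.g. MyISAM
}
// Offending tables can be converted with: ALTER TABLE some_table ENGINE=InnoDB;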
I'd recommend using the mysqli or PDO functions rather than mysql, as they offer some worthwhile improvements, especially the use of prepared statements.
Without seeing your code, it is difficult to determine where the problem lies. Given that you say your code is complex, it is likely that the problem lies with your code rather than MySQL transactions.
Have you tried creating some standalone test scripts? Perhaps you could isolate the SQL statements from your application, and create a simple script which simply runs them in series. If that works, you have narrowed down the source of the problem. You can echo the SQL statements from your application to get the running order.
You could also try testing the same sequence of SQL statements from the MySQL client, or through phpMyAdmin.
Are your history tables in the same database?
MySQL transactions are easiest to use through the mysqli API (with the classic methods you have to issue the transaction statements yourself). I have been using transactions. All I do is deactivate autocommit and run my SQL statements.
$mysqli->autocommit(FALSE);
SELECT, INSERT, DELETE are all supported. As long as I'm using the same mysqli handle to call these statements, they are within the transaction wrapper. Nobody outside (not using the same mysqli handle) will see any data that you write/delete using INSERT/DELETE as long as the transaction is still open. So it's critical you make sure every SQL statement is fired with that handle. Once the transaction is committed, the data is made available to other DB connections.
$mysqli->commit();
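Putting those pieces together with a rollback on failure (connection details and tables are assumptions):
<?php
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$mysqli->autocommit(FALSE);   // everything below is one transaction

$ok = $mysqli->query("INSERT INTO orders (item) VALUES ('book')")
   && $mysqli->query("UPDATE stock SET qty = qty - 1 WHERE item = 'book'");

if ($ok) {
    $mysqli->commit();        // other connections can now see the changes
} else {
    $mysqli->rollback();      // nothing leaks outside this handle
}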