So I've got this PHP service which is connected to my Android app and executes two queries at once. I've read how to detect if one fails, and I know that one of mine fails because EventSlots is already 0 and is an unsigned INT.
However, the other statement is still executed successfully. What I want is for both to execute or, if one fails, for nothing to be executed at all and an error to be returned to my app.
How would I detect that one of the statements failed and STOP the other from executing? I could probably avoid multiple statements and run one, check that it was OK, and only then execute the other. Can I achieve this with multiple statements, though?
Query:
"INSERT INTO GuestList(EventID, AccountID) VALUES (7, (SELECT AccountID FROM Accounts WHERE Username = 'test'));
UPDATE Events SET EventSlots = EventSlots-1 WHERE EventID = 7 ;"
Use PDO transactions: http://php.net/manual/en/pdo.transactions.php . A transaction represents a group of queries that have to be executed atomically. If something fails during the transaction, you roll everything back to the initial state (and PDO automatically rolls back an uncommitted transaction when the connection closes).
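As a minimal sketch for the two queries from the question, assuming a MySQL database, the table and column names from the question, and a PDO connection with exceptions enabled (the DSN and credentials below are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

try {
    $pdo->beginTransaction();

    // First statement: add the guest.
    $pdo->prepare(
        "INSERT INTO GuestList (EventID, AccountID)
         VALUES (?, (SELECT AccountID FROM Accounts WHERE Username = ?))"
    )->execute([7, 'test']);

    // Second statement: decrement the free slots. In the setup described in
    // the question this fails when EventSlots (unsigned INT) is already 0.
    $pdo->prepare(
        "UPDATE Events SET EventSlots = EventSlots - 1 WHERE EventID = ?"
    )->execute([7]);

    $pdo->commit();            // both statements succeeded
} catch (Exception $e) {
    $pdo->rollBack();          // undo everything done so far
    // report the error back to the app instead, e.g. as JSON
    echo json_encode(['error' => $e->getMessage()]);
}

Neither statement becomes visible to other connections until commit() runs, so a failure in either one leaves the database untouched.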
Related
I have a system that handles many queries per second. I code my system with MySQL and PHP.
My problem is that mysqli still commits the transaction even when the record is deleted by another user at the same time; all my tables use InnoDB.
This is how I code my transaction with mysqli:
mysqli_autocommit($dbc,FALSE);
$all_query_ok=true;
$q="INSERT INTO Transaction() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
$q="INSERT INTO Statement() VALUES()";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
if($all_query_ok==true){
//all success
mysqli_commit($dbc);
}else{
//one of it failed , rollback everything.
mysqli_rollback($dbc);
}
Below is the query performed at the same time in another script by another user, which ends up breaking the expected system behaviour:
$q="DELETE FROM Transaction...";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
Please advise: did I implement the transaction wrongly? I have read about row-level locking and believe that InnoDB does lock the record during a transaction.
I don't know which kind of transactions you're talking about but with the mysqli extension I use the following methods to work with transactions:
mysqli::begin_transaction
mysqli::commit
mysqli::rollback
Then the process is like:
Starting a new transaction with mysqli::begin_transaction
Execute your SQL queries
On success use mysqli::commit to confirm changes done by your queries in step 2 OR on error during execution of your queries in step 2 use mysqli::rollback to revert changes done by them.
You can think of transactions as a temporary cache for your queries. It's somewhat similar to output buffering in PHP with the ob_* functions: as long as you haven't flushed the buffered data, nothing appears on screen. Same with transactions: as long as you haven't committed anything (and autocommit is turned off), nothing happens in the database.
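A minimal object-oriented sketch of those three steps, assuming the table names from the question and a mysqli connection with exception reporting enabled (the connection parameters are placeholders):

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');

$mysqli->begin_transaction();                               // step 1: start the transaction
try {
    $mysqli->query("INSERT INTO Transaction () VALUES ()"); // step 2: run your queries
    $mysqli->query("INSERT INTO Statement () VALUES ()");
    $mysqli->commit();                                      // step 3: confirm the changes
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback();                                    // step 3: a query failed, revert everything
    throw $e;
}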
I did some research on row-level locking, which can protect a record from DELETE or UPDATE:
FOR UPDATE
Official Documentation
Right after beginning the transaction I have to select the records I want to lock, like below:
SELECT * FROM Transaction WHERE id=1 FOR UPDATE
This keeps the record locked until the transaction ends.
This method doesn't work on MyISAM tables.
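As a sketch, the locking pattern could look like this from PHP, assuming the same mysqli connection and the Transaction table from the question (InnoDB only):

$mysqli->begin_transaction();

// Lock the row: other transactions that try to UPDATE or DELETE id=1
// now wait until this transaction commits or rolls back.
$result = $mysqli->query("SELECT * FROM Transaction WHERE id = 1 FOR UPDATE");
$row    = $result->fetch_assoc();

// ... run the other queries that depend on this row ...

$mysqli->commit();   // releases the row lock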
Looks like a typical example of a race condition. You execute two concurrent scripts modifying data in parallel. Probably your first script successfully inserts the records and commits the transaction, and the second script successfully deletes them afterwards. I'm not sure what you mean by "the query performed at the same time in other script by another user", though.
You will have to do it this way:
mysqli_autocommit($dbc, FALSE);
$dbc->begin_transaction();

$all_query_ok = true;

$q = "INSERT INTO Transaction() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

$q = "INSERT INTO Statement() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

if ($all_query_ok) {
    // all queries succeeded
    mysqli_commit($dbc);
} else {
    // one of them failed, roll back everything
    mysqli_rollback($dbc);
}
You can use either the object-oriented or the procedural style when calling begin_transaction (I prefer the object-oriented one).
I've read that if you use mysqli prepare instead of mysqli query, it will be run on the database only once even if you run the script 1000 times.
Does that mean that if I run a prepared statement like
select * from Table where user=?
and then mysqli_stmt_bind_param($stmt, "s", "Harry");
and the user "harry" doesn't exist in the database, so num_rows returns 0, and I then immediately insert a new row with user = "harry" and run the script again, will it return num_rows = 1, or still num_rows = 0 because the result is cached?
Whether you see the newly inserted row depends on other factors: transactions and transaction isolation. Preparing the statement just means that the server knows what is coming and has calculated a plan to optimize the statement. You can then run the query many times, with the same or with different parameters, and the server will not have to analyze it any more; it just executes it.
So, to answer your question: the second time you should get num_rows = 1. But if you don't, the problem is not that you prepared a query with parameters; it's something else.
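To illustrate, a small sketch assuming a mysqli connection $mysqli and the table/column names from the question (the table name needs backticks because TABLE is a reserved word):

$stmt = $mysqli->prepare("SELECT * FROM `Table` WHERE user = ?");

$name = 'harry';
$stmt->bind_param('s', $name);   // bind_param needs a variable, not a literal
$stmt->execute();
$stmt->store_result();
echo $stmt->num_rows;            // 0 while harry does not exist yet

$mysqli->query("INSERT INTO `Table` (user) VALUES ('harry')");

$stmt->execute();                // the same prepared statement, executed again
$stmt->store_result();
echo $stmt->num_rows;            // now 1 (with default autocommit, no result caching involved)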
I have a question regarding MySQL commits and transactions. I have a couple of PHP statements that execute MySQL queries. Do I just say the following?
mysql_query("START TRANSACTION");
//more queries here
mysql_query("COMMIT");
What exactly would this do? How does it help? For updates, deletes and insertions I also found this to block other queries from reading:
mysql_query("LOCK TABLES t1 WRITE, t2 WRITE");
//more queries here
mysql_query("UNLOCK TABLES t1, t2");
Would this block other queries of whatever nature, or only writes/selects?
Another question: say one query is running and blocks other queries. Another query tries to access the blocked data and sees that it is blocked. How does it proceed? Does it wait until the data is unblocked and then re-execute? Does it just fail and need to be repeated? If so, how can I check?
Thanks a lot!
Dennis
In InnoDB, you do not need to explicitly start or end transactions for single queries if you have not changed the default setting of autocommit, which is "on". If autocommit is on, InnoDB automatically encloses every single SQL query in a transaction, which is the equivalent of START TRANSACTION; query; COMMIT;.
If you explicitly use START TRANSACTION in InnoDB with autocommit on, then any queries executed after a START TRANSACTION statement will either all be executed, or all of them will fail. This is useful in banking environments, for example: if I am transferring $500 to your bank account, that operation should only succeed if the sum has been subtracted from my bank balance and added to yours. So in this case, you'd run something like
START TRANSACTION;
UPDATE customers SET balance = balance - 500 WHERE customer = 'Daan';
UPDATE customers SET balance = balance + 500 WHERE customer = 'Dennis';
COMMIT;
This ensures that either both queries will run successfully, or none, but not just one.
This post has some more on when you should use transactions.
In InnoDB, you will very rarely have to lock entire tables; InnoDB, unlike MyISAM, supports row-level locking. This means clients do not have to lock the entire table, forcing other clients to wait. Clients should only lock the rows they actually need, allowing other clients to continue accessing the rows they need.
You can read more about InnoDB transactions here. Your questions about deadlocking are answered in sections 14.2.8.8 and 14.2.8.9 of the docs. If a query fails, your MySQL driver will return an error message indicating the reason; your app should then reissue the queries if required.
Finally, in your example code, you used mysql_query. If you are writing new code, please stop using the old, slow, and deprecated mysql_ library for PHP and use mysqli_ or PDO instead :)
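For example, the transfer above written with PDO; a minimal sketch where the DSN and credentials are placeholders (mysqli's begin_transaction/commit/rollback would work the same way):

$pdo = new PDO('mysql:host=localhost;dbname=bank', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

try {
    $pdo->beginTransaction();
    $pdo->exec("UPDATE customers SET balance = balance - 500 WHERE customer = 'Daan'");
    $pdo->exec("UPDATE customers SET balance = balance + 500 WHERE customer = 'Dennis'");
    $pdo->commit();     // both updates become visible at once
} catch (Exception $e) {
    $pdo->rollBack();   // neither update is applied
    throw $e;
}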
I'm curious whether this can be achieved, as I'm currently facing a bug and would like to see if putting a SELECT and an UPDATE in a transaction would fix it (if you're wondering why I'm not posting the code that causes the bug, it's because it's a complex environment and I can't post all the influencing factors).
Something I'm also interested in, related to this, is whether you have ever seen code that had an UPDATE query written after a SELECT query, yet the UPDATE gets executed before the SELECT (with the possibility that the script might run twice ruled out).
It depends on what you mean by a transaction.
There are two types of transactions:
Implicit transactions: single INSERT, UPDATE, SELECT or DELETE statements. There are no explicit transaction commands, and the database engine will roll back the whole statement if an error happens.
Explicit transactions: the statements enclosed inside the transaction are executed as a unit, and you either COMMIT the whole transaction or ROLLBACK it.
So you can't have both SELECT and UPDATE inside one query, but you can put them inside a transaction like:
START TRANSACTION;
SELECT * FROM tableName;
UPDATE table SET something = 'other something' WHERE thirdsomething = #s;
COMMIT;
Then put them in a stored procedure or a UDF.
Note that SELECT statements do not modify data, so you might not need to enclose the SELECT in a transaction at all; in your case you are left with only the UPDATE statement, so you can just use a stored procedure without a transaction.
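If you do want to drive it from PHP rather than a stored procedure, a sketch with mysqli (assuming exception reporting is enabled and using the placeholder names from above; 42 is a hypothetical value standing in for #s):

$mysqli->begin_transaction();
try {
    $rows = $mysqli->query("SELECT * FROM tableName")->fetch_all(MYSQLI_ASSOC);

    // 42 is a hypothetical value standing in for #s above
    $mysqli->query("UPDATE `table` SET something = 'other something' WHERE thirdsomething = 42");

    $mysqli->commit();
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback();
    throw $e;
}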
I'm trying to fully grasp the idea of transactions, hence the following question... (of course I'm a newbie, so don't laugh :D)
I have set up a (simplified) transaction in PHP (using the PHP SQL driver from microsoft). I want to get the rows I'm going to delete for some extra processing later:
sqlsrv_begin_transaction($conn);
$sql = "SELECT * FROM test WITH (XLOCK) WHERE a<10";
$statement = sqlsrv_query($conn,$sql);
$sql = "DELETE FROM test WHERE a<10";
sqlsrv_query($conn,$sql);
$result = get_result_array($statement);
sqlsrv_commit($conn);
$result2 = get_result_array($statement);
1) I do get the expected result in $result but an empty array in $result2. Why?
I would expect a result only in $result2, because only then has the transaction actually been executed. I guess the result in $result is a sort of 'temporary' result in memory and not actually a result from the actual database.
2) Could it be that, between the moment the transaction was started and the actual commit, another query from another connection has changed the rows which match (a<10)? That would mean the results I'm expecting according to $result differ from the actual changes in the database.
Or is it that (a) the transaction operates on an in-memory copy of the database (not affected by in-between queries from other connections), or (b) the locks obtained since the beginning of the transaction already apply to queries from other connections?
After typing this I'm expecting answer (b)...?
I'm not familiar with the sqlsrv driver, but if it works anything like most other PHP DB drivers, the result of the sqlsrv_query call is not a result set in some form of array, but a PHP resource (see http://www.php.net/manual/en/language.types.resource.php). Calling get_result_array still retrieves data from that resource, in this case the database, and it does so immediately. The COMMIT only affects writes to the database, not reads, so you see your result immediately in $result. After you commit your transaction (i.e., the DELETE), the next call correctly returns an empty result set.
I tested it out with some MySQL tools (which I'm more familiar with):
1. When I start a transaction and select one particular record, I directly get the result. When I then delete the same record from another connection (with autocommit), it is gone for that connection, but for the first connection the record is still there (I ran the SELECT again without committing the transaction). Only after committing the first connection's transaction and running the SELECT again is the record gone.
2. When I do the same but acquire an exclusive lock with the first SELECT, the DELETE from the second connection waits until the first connection's transaction has been committed.
Conclusion: in situation (1), the second SELECT of the first connection returns the result as it was at the moment the transaction started, thus WITHOUT taking into account other (write) queries run AFTER the start of the transaction. Situation (2) is exactly answer (b) from my original question. :)