Can executing this query cause a deadlock? If yes, please explain how.
$q="UPDATE SET `count` =`count` + 1 WHERE user_id='$uid' FOR UPDATE";
It will not cause a deadlock. Even if many queries try to update the same row at the same time, each one simply waits for the previous one to finish. If different queries update different rows, InnoDB will run them concurrently thanks to its row-level locking. MyISAM only supports table-level locking, so there the queries end up running sequentially even when issued at the same time.
I do not see how this query alone could deadlock: a deadlock needs at least two transactions that each hold a lock the other one is waiting for, and an autocommitted single-row UPDATE like this simply waits for, takes, and releases one row lock. (As a side note, the query as written is not valid SQL: UPDATE is missing the table name, and FOR UPDATE is a clause for SELECT, not for UPDATE.)
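For contrast, here is the classic two-session pattern that does deadlock, sketched against a hypothetical InnoDB table t with rows id = 1 and id = 2:
-- Session 1:
START TRANSACTION;
UPDATE t SET c = c + 1 WHERE id = 1;  -- locks row 1
-- Session 2:
START TRANSACTION;
UPDATE t SET c = c + 1 WHERE id = 2;  -- locks row 2
-- Session 1:
UPDATE t SET c = c + 1 WHERE id = 2;  -- blocks, waiting for session 2
-- Session 2:
UPDATE t SET c = c + 1 WHERE id = 1;  -- lock cycle: InnoDB aborts one session
                                      -- with error 1213 (ER_LOCK_DEADLOCK)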
I have a system that handles many queries per second, built with MySQL and PHP.
My problem is that the mysqli transaction still commits even when a record is deleted by another user at the same time. All my tables use InnoDB.
This is how I code my transaction with mysqli:
mysqli_autocommit($dbc, FALSE);

$all_query_ok = true;

$q = "INSERT INTO Transaction() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

$q = "INSERT INTO Statement() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

if ($all_query_ok) {
    // all queries succeeded
    mysqli_commit($dbc);
} else {
    // one of them failed; roll back everything
    mysqli_rollback($dbc);
}
Below is the query performed at the same time in another script by another user, which ends up breaking the expected system behaviour:
$q="DELETE FROM Transaction...";
mysqli_query ($dbc,$q)?null:$all_query_ok=false;
Please advise: did I implement the transaction incorrectly? I have read about row-level locking and believed that InnoDB locks the records during a transaction.
I don't know which kind of transactions you're talking about, but with the mysqli extension I use the following methods to work with transactions:
mysqli::begin_transaction
mysqli::commit
mysqli::rollback
The process then goes like this:
Start a new transaction with mysqli::begin_transaction.
Execute your SQL queries.
On success, use mysqli::commit to confirm the changes made by your queries in step 2; on error during their execution, use mysqli::rollback to revert those changes instead.
You can think of transactions as a temporary cache for your queries. It's somewhat similar to output caching in PHP with the ob_* functions: as long as you haven't flushed the cached data, nothing appears on screen. Same with transactions: as long as you haven't committed anything (and autocommit is turned off), nothing happens in the database.
I did some research on row-level locking, which can protect records from concurrent DELETE or UPDATE:
FOR UPDATE
Official Documentation
Right after beginning the transaction, I have to select the records I want to lock, like below:
SELECT * FROM Transaction WHERE id=1 FOR UPDATE
The records then stay locked until the transaction ends. Note that this method does not work on MyISAM tables.
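A minimal sketch of how this fits into the transaction above (the id and the two queries are placeholders for my real ones):
$dbc->begin_transaction();

// Lock the rows the transaction depends on; a concurrent DELETE of these
// rows will now block until this transaction commits or rolls back.
mysqli_query($dbc, "SELECT * FROM Transaction WHERE id=1 FOR UPDATE");

$ok = mysqli_query($dbc, "INSERT INTO Statement() VALUES()");

if ($ok) {
    mysqli_commit($dbc);   // releases the row locks
} else {
    mysqli_rollback($dbc); // also releases the row locks
}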
This looks like a typical example of a race condition: you are executing two concurrent scripts that modify the same data in parallel. Probably your first script successfully inserts the records and commits its transaction, and the second script then successfully deletes them afterwards. I'm not sure what you mean by "the query performed at the same time in another script by another user", though.
You will have to do it this way:
// begin_transaction() is what actually starts the transaction;
// with it, the autocommit(FALSE) call is not needed for this block.
mysqli_autocommit($dbc, FALSE);
$dbc->begin_transaction();

$all_query_ok = true;

$q = "INSERT INTO Transaction() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

$q = "INSERT INTO Statement() VALUES()";
if (!mysqli_query($dbc, $q)) {
    $all_query_ok = false;
}

if ($all_query_ok) {
    // all queries succeeded
    mysqli_commit($dbc);
} else {
    // one of them failed; roll back everything
    mysqli_rollback($dbc);
}
You can use either the object-oriented or the procedural style when calling begin_transaction (I prefer the object-oriented one).
I have a task to monitor queries on the server and to kill the queries that are blocking other queries, which I am doing from PHP code.
I want to know if this is possible and how this can be done.
I have searched the existing questions on this topic, but none matched my situation.
I am using SHOW PROCESSLIST to get the list of queries.
I have checked the MySQL documentation and found that the "State" value can be:
Locked - The query is locked by another query.
But how do I get the process ID of the query that is holding the lock, so that I can later kill that query by its ID?
Both SHOW PROCESSLIST; and SELECT * FROM information_schema.PROCESSLIST; return the session's ID number. You can use this value in the KILL statement, e.g.:
KILL CONNECTION 337;
KILL Syntax
Also, you can try the KILL QUERY statement. From the documentation: KILL QUERY terminates the statement that the connection is currently executing, but leaves the connection itself intact.
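If you want to find the blocking session programmatically, here is a minimal sketch. It assumes MySQL 5.5-5.7, where information_schema exposes the INNODB_TRX and INNODB_LOCK_WAITS tables (MySQL 8.0 moved this information to performance_schema), and an existing mysqli connection $dbc:
// Connection ids of sessions whose locks other transactions are waiting on.
$sql = "SELECT DISTINCT b.trx_mysql_thread_id AS blocking_id
        FROM information_schema.INNODB_LOCK_WAITS w
        JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id";
$res = mysqli_query($dbc, $sql);
while ($row = mysqli_fetch_assoc($res)) {
    // KILL CONNECTION ends the whole blocking session; KILL QUERY would
    // only abort the statement it is currently running.
    mysqli_query($dbc, "KILL CONNECTION " . (int)$row['blocking_id']);
}
Note that this only covers InnoDB row-lock waits; MyISAM table-lock waits (the "Locked" state you quoted) do not appear in these tables, so there you would have to infer the blocker from SHOW PROCESSLIST yourself.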
I am using InnoDB in MySQL and accessing the table from PHP with PDO.
I need to lock the table, do a SELECT, and then, depending on the result, either insert a row or not. Since I want to keep the table locked for as short a time as possible, can I do it like this?
prepare select
prepare insert
begin transaction
lock table
execute select
if reservation time is available then execute insert
unlock table
commit
Or do the prepares have to be inside the transaction? Or do they have to be after the lock?
Should the transaction only include the insert, or does that make any difference?
beginTransaction turns off autocommit mode, so it only affects queries that actually change data. Preparing statements, plain SELECTs, and LOCK TABLES are not governed by it, so the prepares do not have to be inside the transaction. Be aware, though, that in MySQL LOCK TABLES implicitly commits any active transaction, so mixing the two needs care. In fact, if you're only doing a single INSERT, there's no need to use a transaction at all; you would only need one if you wanted to run multiple write queries atomically.
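Since the table is InnoDB, it is worth sketching the row-lock alternative, which avoids LOCK TABLES entirely. This assumes an existing PDO connection $pdo; the table and column names are hypothetical, and slot is assumed to be indexed:
// Prepares can happen outside the transaction; they don't touch any data.
$select = $pdo->prepare("SELECT COUNT(*) FROM reservations WHERE slot = ? FOR UPDATE");
$insert = $pdo->prepare("INSERT INTO reservations (slot, user_id) VALUES (?, ?)");

$pdo->beginTransaction();
$select->execute([$slot]);
if ($select->fetchColumn() == 0) {
    // Under the default REPEATABLE READ isolation level, FOR UPDATE also
    // gap-locks the scanned index range, so no other session can insert a
    // conflicting row before we commit.
    $insert->execute([$slot, $userId]);
}
$pdo->commit();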
I have a question regarding MySQL commits and transactions. I have a couple of PHP statements that execute MySQL queries. Do I just say the following?
mysql_query("START TRANSACTION");
//more queries here
mysql_query("COMMIT");
What exactly would this do? How does it help? For updates, deletes and insertions I also found this to block other queries from reading:
mysql_query("LOCK TABLES t1 WRITE, t2 WRITE");
//more queries here
mysql_query("UNLOCK TABLES t1, t2");
Would this block other queries of whatever nature, or only writes/selects?
Another question: say one query is running and blocks others, and another query tries to access the blocked data. How does it proceed? Does it wait until the data is unblocked and then re-execute? Does it just fail and need to be repeated? If so, how can I check for that?
Thanks a lot!
Dennis
In InnoDB, you do not need to explicitly start or end transactions for single queries if you have not changed the default setting of autocommit, which is "on". If autocommit is on, InnoDB automatically encloses every single SQL query in a transaction, which is the equivalent of START TRANSACTION; query; COMMIT;.
If you explicitly use START TRANSACTION in InnoDB, then the queries executed between START TRANSACTION and the final COMMIT will take effect all together or not at all. This is useful in banking environments, for example: if I am transferring $500 to your bank account, that operation should only succeed if the sum has been subtracted from my bank balance and added to yours. So in this case, you'd run something like
START TRANSACTION;
UPDATE customers SET balance = balance - 500 WHERE customer = 'Daan';
UPDATE customers SET balance = balance + 500 WHERE customer = 'Dennis';
COMMIT;
This ensures that either both queries will run successfully, or none, but not just one.
This post has some more on when you should use transactions.
In InnoDB, you will very rarely have to lock entire tables; InnoDB, unlike MyISAM, supports row-level locking. This means clients do not have to lock the entire table, forcing other clients to wait. Clients should only lock the rows they actually need, allowing other clients to continue accessing the rows they need.
You can read more about InnoDB transactions here. Your questions about deadlocking are answered in sections 14.2.8.8 and 14.2.8.9 of the docs. If a query fails, your MySQL driver will return an error message indicating the reason; your app should then reissue the queries if required.
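To make the "reissue the queries" part concrete, here is a minimal retry sketch with mysqli, where $q stands for whatever statement failed; 1213 is ER_LOCK_DEADLOCK and 1205 is ER_LOCK_WAIT_TIMEOUT:
$attempts = 0;
do {
    if (mysqli_query($dbc, $q)) {
        break; // success
    }
    $errno = mysqli_errno($dbc);
    // 1213 = deadlock victim, 1205 = lock wait timeout: both are safe to retry
} while (($errno == 1213 || $errno == 1205) && ++$attempts < 3);
Keep in mind that after a deadlock error InnoDB has rolled back the whole transaction, so inside a multi-statement transaction you must reissue everything from START TRANSACTION onwards, not just the failed query.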
Finally, in your example code, you used mysql_query. If you are writing new code, please stop using the old, slow, and deprecated mysql_ library for PHP and use mysqli_ or PDO instead :)
I am using PHP with an odbc connection to MSSQL database.
Currently, I have around 1900 INSERT statements in one string, separated by semicolons, and I run that in a single odbc_execute call.
Firstly, is this a bad method? Should I be processing every insert statement separately in a for loop?
Also, the way I am currently doing it, with one big statement, only a maximum of 483 rows are inserted, with no errors reported. If I copy the statement that is run and run it through SQL studio, all rows insert; yet from PHP, every single time, only a maximum of 483 rows insert.
Any ideas why this could be?
One network round trip per INSERT will mean a lot of latency. It'll be very slow.
There's probably a limit on the buffer size of all those concatenated SQL statements.
I think you want to use a prepared statement and bound variables:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms712553(v=vs.85).aspx
You can still process it in a loop - two, actually. You'll want an inner loop to add INSERTs to a batch that's executed as a single transaction, and an outer loop to process all the necessary batches.
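A minimal sketch of that two-loop shape with PHP's odbc_* functions; the table and column names are hypothetical, and $rows stands for your 1900 rows of data:
odbc_autocommit($conn, false); // run the inserts inside explicit transactions
$stmt = odbc_prepare($conn, "INSERT INTO my_table (col1, col2) VALUES (?, ?)");

foreach (array_chunk($rows, 100) as $batch) {  // outer loop: one transaction per batch
    foreach ($batch as $row) {                 // inner loop: one execute per row
        if (!odbc_execute($stmt, array($row['col1'], $row['col2']))) {
            odbc_rollback($conn);              // abandon this batch on failure
            die(odbc_errormsg($conn));
        }
    }
    odbc_commit($conn);                        // one commit per 100 rows
}
odbc_autocommit($conn, true);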
As long as you aren't running a separate transaction for each insert, there's nothing wrong with inserting one at a time.
For this sort of 'batch insert' I would typically run a transaction for every 100 rows or so.
I would avoid trying to cram them all in at once; there is nothing really to be gained by doing that.
You could put the data in a file (XML?) on the SQL Server and call a stored procedure from PHP that processes it.
Regards,
/t