From the MySQL manual ( https://dev.mysql.com/doc/refman/8.0/en/innodb-deadlocks.html ):
To reduce the possibility of deadlocks, use transactions rather than LOCK TABLES statements
How are deadlocks possible when using LOCK TABLES in InnoDB?
For example, if I write
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 WRITE, t3 WRITE, t4 WRITE;
... do something with tables t1-t4 here ...
COMMIT;
UNLOCK TABLES;
do I really have to check for errors like 1213 (ER_LOCK_DEADLOCK) every time I execute this script?
If you make sure to lock all the tables you will read or write in one LOCK TABLES statement, you should be able to avoid deadlocks.
The other good reason to avoid LOCK TABLES, if you can use transactions instead, is to allow row-level locking. LOCK TABLES only locks at the table level, which means concurrent sessions can't touch any rows in the table, even rows your session doesn't need to lock.
This is a disadvantage for software that needs to allow multiple sessions to access tables concurrently. You're forcing table-level locking, which will put a constraint on your software's throughput, because all sessions that access tables will queue up against each other, and be forced to execute serially.
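To make that concrete, here is a minimal sketch of the transactional alternative to the LOCK TABLES script above; the body is a placeholder, and retrying on error 1213 replaces the LOCK TABLES bookkeeping:
BEGIN;
-- ... do something with tables t1-t4 here; InnoDB locks only the rows actually touched ...
COMMIT;
-- if any statement fails with error 1213 (deadlock), ROLLBACK and re-run the whole transaction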
What do you mean by "use t2"? A READ lock? What if I'm using only WRITE locks, as in my example?
I think he means if you read from table t2. Since you have that table locked for WRITE, that includes blocking any readers of that table as well. No other session can read or write the table until you UNLOCK.
I'm not concerned about performance. I have a situation where I want to make things as simple as possible, and LOCK TABLES feels much more intuitive to me than using transactions with paranoid-level error checking.
You will eventually find a case where you want your software to have good performance. You'll have to become more comfortable using transactions.
Related
I have a table with user login and registration information. So when two users try to add their details at the same time:
Will both writes clash, so that the table won't be updated?
Using threads for these writes seems like a bad idea, since a new thread would be created for each write and that would clog the server. Is the server responsible for managing this on its own?
Is locking the table a good idea?
My back-end runs on PHP/Apache with MySQL (InnoDB) for the database.
Relational databases are designed to avoid these kinds of conditions. You don't need to worry about them unless you are designing your own relational database from scratch.
In short, just know this: Any time a write is initiated, there is a row-level lock. If another transaction wants to write to that same row, then it has to wait until the first transaction releases the lock. This is a fundamental part of relational databases. You don't need to add a lock because they've already thought of that :)
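A sketch of what that looks like in practice, assuming a hypothetical users table:
-- session A
BEGIN;
UPDATE users SET email = 'a@example.com' WHERE id = 42;  -- row 42 is now locked
-- session B, at the same time:
UPDATE users SET email = 'b@example.com' WHERE id = 42;  -- blocks, waiting for A's lock
-- session A
COMMIT;  -- releases the lock; session B's UPDATE now proceeds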
You can read more about how MySQL performs locks to avoid deadlocking and other transaction errors here.
If you're really paranoid about this, or perhaps you are doing multiple things when you register a user and need them done atomically, you might want to look at using Transactions in MySQL. There's a decent write-up about Transactions here http://www.mysqltutorial.org/mysql-transaction.aspx
BEGIN;
-- do related reads/writes to the data
COMMIT;
Inside that "transaction", the connection sees a consistent view of the data, and blocks anyone else from messing with that view.
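For the user-registration case mentioned above, a minimal sketch might be (both table names here are made up):
BEGIN;
INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com');
INSERT INTO registration_log (user_id, note) VALUES (LAST_INSERT_ID(), 'registered');
COMMIT;  -- both rows become visible together, or neither does if you ROLLBACK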
There are exceptions. The main one is
BEGIN;
SELECT ... FOR UPDATE;
-- fiddle with the values SELECTed
UPDATE ...;  -- and change those values
COMMIT;
The SELECT ... FOR UPDATE announces which rows should not be tampered with. If another connection wants to mess with the same rows, it will have to wait until your COMMIT, at which time it may find that things have changed and it will need to do something different. But, in general, this avoids a "deadlock" wherein two transactions step on each other so badly that one has to be rolled back.
With techniques like this, concurrent access is blocked only briefly and only on the rows involved. That is, if two connections are working with different rows, both can proceed -- there is no need to prevent concurrency across the whole table.
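A concrete version of that pattern, assuming a hypothetical accounts table:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- row 1 is locked until COMMIT
-- compute the new balance in application code
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;  -- other connections waiting on row 1 now see the new balance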
I have an InnoDB table on which I want to perform some maintenance queries. Those queries are going to happen on parallel threads, and they will include (in that order):
Select, update, select, delete, insert.
I want to allow only one thread at a time to have access to that section, so is there something that would allow me to do this?
mutex.block()
select
update
select
delete
insert
mutex.release()
This will be in PHP, and all queries will be executed using PHP's mysqli_query function.
I am hoping for an alternative to transactions, but if transactions are the only way to do this, then so be it.
MySQL features table locks and user-level locks via GET_LOCK() (a kind of mutex).
PHP supports pretty much all POSIX locking mechanisms, such as mutexes and semaphores (but these are not available by default, see the related "Installing/Configuring" manual chapters).
But really, I see little reason why you would want to implement your own synchronisation mechanism: transactions exist for this purpose precisely, and are likely to be more efficient and reliable than anything home-brewed.
(if performance is your concern, then a database back-end may not be the right choice)
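That said, if you want exactly the mutex pattern from the question, MySQL's user-level locks are the closest built-in equivalent. A sketch, where the lock name is an arbitrary placeholder:
SELECT GET_LOCK('maintenance_section', 10);   -- 1 = acquired, 0 = timed out after 10 seconds
-- run the select / update / select / delete / insert sequence here
SELECT RELEASE_LOCK('maintenance_section');   -- let the next thread proceed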
My system creates a lot of transactions, as it has many users and a lot of data that is checked and renewed on a daily basis.
Somehow, at a certain moment (I am not sure if the backup caused it), queries end up in a "Locked" state. And somehow they never return. Is this a deadlock?
The database is not returning anything to the code either, so I can't check whether it's locked or not. Also, this causes other queries to be stopped and pile up, and my server runs out of connections...
Any ideas on this?
It may be caused by several issues. The most common is a MyISAM table lock. Just run this query: SHOW STATUS LIKE 'Table%';. Post the result here. If Table_locks_waited is big (e.g. more than 0.5% of Table_locks_immediate) and you are using MyISAM, switch to the InnoDB table engine.
If your database is not very big, changing the engine is pretty fast and transparent.
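The switch itself is a single statement per table (the table name below is a placeholder); note that it rebuilds the table, so it can take a while on big tables:
ALTER TABLE your_table ENGINE=InnoDB;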
Note that all your locked queries are "write" queries. That's because MyISAM's long-running SELECTs lock the tables they read. Moreover, SELECTs can cause a kind of deadlock. Quotation from the docs:
MySQL grants table write locks as follows:
If there are no locks on the table, put a write lock on it.
Otherwise, put the lock request in the write lock queue.
MySQL grants table read locks as follows:
If there are no write locks on the table, put a read lock on it.
Otherwise, put the lock request in the read lock queue.
Don't forget to tune innodb_* params!
If you don't want to switch to InnoDB (why?!), you can tune the concurrent_insert parameter (try "2") in your my.cnf.
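For example (this takes effect immediately; put concurrent_insert = 2 under [mysqld] in my.cnf to make it permanent):
SET GLOBAL concurrent_insert = 2;  -- allow concurrent inserts even into MyISAM tables with holes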
Btw, I see a lot of sleeping connections. Do you have persistent connections? If "yes", do you close them properly?
I have a query which takes a very long time to run but produces a new table in the end. The actual joins are not all that slow, but it spends almost all of its time in 'copying to tmp table', and during this time the status of all other queries (which should be going to unrelated tables) is 'locked'. I am in the process of optimizing the long query, but it is OK for it to take a while since it is an offline process; it is NOT OK for it to stop all other queries which should not be related to it anyway. Does anyone know why all other unrelated queries would come back as 'locked', and how to prevent this behavior?
You are right in that "unrelated tables" shouldn't be affected. They shouldn't and to my knowledge they aren't.
There is a lot of information over at MySQL regarding locks, storage engines, and ways of dealing with them.
To limit locks, I would suggest writing an application that reads all the data needed for the new table and has your application insert the values into the new table itself. This might take longer, but it will work in smaller chunks and hold fewer or no locks.
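A sketch of that chunked approach in plain SQL; table, column, and range values here are made up:
INSERT INTO new_table (id, col)
SELECT id, col FROM source_table
WHERE id BETWEEN 1 AND 10000;   -- next pass: BETWEEN 10001 AND 20000, and so on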
Good luck!
What is your MySQL version?
Do you use MyISAM? MyISAM has big LOCK problems on large SELECT commands.
Do you have a dedicated server? What is your maximum size for in-memory tables (look in my.cnf)?
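All three can be checked from a MySQL session, for example:
SELECT VERSION();                          -- server version
SHOW TABLE STATUS LIKE 'your_table';       -- the Engine column shows MyISAM vs InnoDB (placeholder name)
SHOW VARIABLES LIKE 'max_heap_table_size'; -- in-memory (MEMORY engine) table size limit
SHOW VARIABLES LIKE 'tmp_table_size';      -- in-memory temporary table size limit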
I have a simple setup of a set of writers and a set of readers working with a MySQL MyISAM table. The writers are only inserting rows while the readers are only checking for new rows.
OK, so I know that I don't need a lock in this situation, since I'm not modifying existing rows. However, my writers are accessing one more table that does need a lock. This piece of information would seem irrelevant except for the following limitation stated in the MySQL documentation:
A session that requires locks must acquire all the locks that it needs in a single LOCK TABLES statement. While the locks thus obtained are held, the session can access only the locked tables. For example, in the following sequence of statements, an error occurs for the attempt to access t2 because it was not locked in the LOCK TABLES statement:
So to access the table I want to insert rows into, I NEED to lock it, which is causing me performance problems. Any suggestions on how to get around this?
Typically you lock and unlock immediately around the queries which need locking. The documentation is simply stating that for any set of queries run while you have a lock, all tables involved must be locked. You can unlock as soon as you're done and touch any other tables.
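In other words, something like this, where the table names are placeholders:
LOCK TABLES t_needs_lock WRITE;
INSERT INTO t_needs_lock VALUES (1, 'x');   -- only tables named in LOCK TABLES may be touched here
UNLOCK TABLES;
INSERT INTO t_insert_only VALUES (2, 'y');  -- no locks held now, so any table is accessible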
Also consider that InnoDB supports row-level locking, which is often preferable to table-locking for performance since other queries on other rows will not be locked out for reading while you're also writing.