When does a MySQL session end in PHP? - php

I have a mysql database (InnoDB engine) where I lock a few tables before doing some work. According to the documentation
"The correct way to use LOCK TABLES and UNLOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly."
So I'm doing (pseudocode):
mysqli_query($link, "SET autocommit=0");
mysqli_query($link, "LOCK TABLES table1 WRITE, table2 READ ...");
mysqli_query($link, "SOME SELECTS AND INSERTS HERE");
mysqli_query($link, "COMMIT");
mysqli_query($link, "UNLOCK TABLES");
Now, should I also do this:
mysqli_query($link, "SET autocommit=1");
According to the documentation again,
"When you call LOCK TABLES, InnoDB internally takes its own table lock, and MySQL takes its own table lock. InnoDB releases its internal table lock at the next commit, but for MySQL to release its table lock, you have to call UNLOCK TABLES. You should not have autocommit = 1, because then InnoDB releases its internal table lock immediately after the call of LOCK TABLES, and deadlocks can very easily happen. InnoDB does not acquire the internal table lock at all if autocommit = 1, to help old applications avoid unnecessary deadlocks."
I think the documentation is a bit ambiguous at this point. As I interpret it, you shouldn't use SET autocommit=1 in place of UNLOCK TABLES.
However, there shouldn't be any harm in doing it AFTER the tables have been unlocked?
But I'm still unsure whether it's necessary. I have a single SELECT running in the same script after the COMMIT, and it appears to autocommit even if I don't SET autocommit=1. Why?
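For clarity, here is the complete flow I have in mind as a minimal sketch ($link is the mysqli connection and the table names are placeholders), with autocommit restored only after the tables are unlocked:
mysqli_query($link, "SET autocommit=0");
mysqli_query($link, "LOCK TABLES table1 WRITE, table2 READ");
// ... selects and inserts against table1/table2 here ...
mysqli_query($link, "COMMIT");           // InnoDB releases its internal table lock here
mysqli_query($link, "UNLOCK TABLES");    // MySQL releases its table lock here
mysqli_query($link, "SET autocommit=1"); // restore the connection's default mode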

Related

Check if a transaction on an InnoDB row is occurring?

If a database transaction is occurring on one thread, is there a way for other threads to check whether this transaction is already occurring before attempting the transaction? I know InnoDB has row-level locking, but I want the transaction not to be attempted if it's already occurring on another thread, instead of waiting for the lock to be released and then attempting it.
To make my question clearer, an explanation of what I am trying to do may help:
I am creating a simple raffle using PHP and an InnoDB table with MySQL. When a user loads the page to view the raffle, it checks the raffle's database row to see whether its scheduled end time has passed and whether its "processed" column in the database is true or false.
If the raffle needs to be processed it will begin a database transaction which takes about 5 seconds before being committed and marked as "processed" in the database.
If multiple users load the page at around the same time, I'm worried it will process the raffle more than once, which is not what I want. Ideally it would only attempt to process the raffle if no other threads are processing it, otherwise it would do nothing.
How would I go about doing this? Thanks.
You could implement table-level locking and handle any subsequent connections so that they are either queued or fail quietly:
https://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
From the MySQL docs:
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
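As a rough sketch of the "fail quietly" variant (assuming a mysqli connection in $link, a variable $raffleId holding the raffle's id, a raffles table with id and processed columns as described in the question, and a hypothetical process_raffle() helper):
mysqli_query($link, "LOCK TABLES raffles WRITE");
$res = mysqli_query($link, "SELECT processed FROM raffles WHERE id = $raffleId");
$row = mysqli_fetch_assoc($res);
if ($row && !$row['processed']) {
    // claim the raffle before starting the slow work, so other requests
    // that load the page will see processed = 1 and do nothing
    mysqli_query($link, "UPDATE raffles SET processed = 1 WHERE id = $raffleId");
    mysqli_query($link, "UNLOCK TABLES");
    process_raffle($raffleId); // the ~5 second job
} else {
    mysqli_query($link, "UNLOCK TABLES"); // another request is handling it; do nothing
}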

Do MySQL transactions lock rows within InnoDB that are being updated and/or selected

Using InnoDB, do MySQL transactions lock newly created rows when BEGIN is called, and then unlock them when commit is called?
for example:
$query = "INSERT INTO employee (ssn, name, phone)
          VALUES ('123-45-6789', 'Matt', '1-800-555-1212')";
mysql_query("BEGIN");
$result = mysql_query($query);
mysql_query("COMMIT");
Does the INSERT statement lock that row until COMMIT is called, or until it is rolled back, to prevent other concurrent connections from modifying it?
If not, can you only lock a row within a transaction, blocking reads and any modifications, by calling SELECT ... FOR UPDATE?
Until the transaction is committed, the newly created record is invisible to other connections. Other connections cannot modify it, so there is no need to lock it.
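If you do need to lock an existing row against concurrent modification (and against other locking reads), SELECT ... FOR UPDATE inside a transaction is the usual tool. A minimal sketch with mysqli, reusing the employee example from the question ($link and the example values are assumptions):
mysqli_query($link, "START TRANSACTION");
// other transactions that try to UPDATE or SELECT ... FOR UPDATE this row
// will now block until we COMMIT or ROLLBACK
mysqli_query($link, "SELECT * FROM employee WHERE ssn = '123-45-6789' FOR UPDATE");
// ... read the row, decide what to change ...
mysqli_query($link, "UPDATE employee SET phone = '1-800-555-0000' WHERE ssn = '123-45-6789'");
mysqli_query($link, "COMMIT"); // releases the row lock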

how to lock mysql myisam table with php

Can anyone test a MySQL table lock using 2 PHP scripts and mysql_query()?
I tried for a day but I couldn't get the table locked.
I want that while one PHP script uses a MySQL table, no other script has access to it.
Can you provide 2 simple, tested PHP scripts? And if you can show how they work online, that would be perfect.
It should work so that while the first script runs and holds the lock on the MySQL table, the other scripts wait for their turn.
Like a queue: only one script can access the MyISAM MySQL table at the same time. But please test your script before answering, because I have tried many things and much advice and nothing works.
I wouldn't advise locking database tables explicitly unless the aim is to manage complex database logic at the transaction level. The queries will still be sent out, but they will fail because of the lock, or worse, other transactions can become deadlocked by a lock acquired at the wrong time.
As a consequence you are likely to tank any semblance of performance in the application.
Edit:
http://dev.mysql.com/doc/refman/5.1/en/lock-tables.html
The documentation for MySQL gives a detailed explanation of how the locks operate. Locks are acquired per session, and while a session holds table locks it can only access the tables it has locked, so you need to lock every table (and alias) that the session will touch for the duration of the lock.
I'm not sure which locks you wish to test, but as an example:
define("READ_LOCK", 0);
define("WRITE_LOCK", 1);

// Builds and runs e.g. "LOCK TABLES t1 WRITE, t2 WRITE" -- note that the
// lock type has to be repeated for every table in the statement.
function lock_on_to_tables($link, $tables, $lockType = READ_LOCK)
{
    $type  = $lockType ? "WRITE" : "READ";
    $parts = array();
    foreach ($tables as $table) {
        $parts[] = $table . " " . $type;
    }
    $sql = "LOCK TABLES " . implode(", ", $parts);
    return mysqli_query($link, $sql); // or PDO, or whatever is in use
}
Unlock is just:
mysqli_query($link, "UNLOCK TABLES");
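A possible usage sketch (the table names are only examples):
lock_on_to_tables($link, array("orders", "order_items"), WRITE_LOCK);
// ... work with the locked tables ...
mysqli_query($link, "UNLOCK TABLES");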
You do not lock a table with PHP; you lock it with a MySQL query. So basically you run a query with the syntax described in the MySQL docs, and that's it. For example:
mysqli_query($link, "LOCK TABLES my_table READ");
And drop mysql_ in favour of mysqli_ or PDO.
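Since the question asks for two scripts, here is a minimal sketch of how the blocking can be observed on a MyISAM table (connection details and the my_table name are placeholders; run script B while script A is sleeping):
// script_a.php - takes a WRITE lock and holds it for ten seconds
$link = mysqli_connect("localhost", "user", "pass", "test");
mysqli_query($link, "LOCK TABLES my_table WRITE");
sleep(10);
mysqli_query($link, "UNLOCK TABLES");

// script_b.php - its SELECT should not return until script A unlocks
$link = mysqli_connect("localhost", "user", "pass", "test");
$start = microtime(true);
mysqli_query($link, "SELECT COUNT(*) FROM my_table");
echo "waited " . round(microtime(true) - $start) . " seconds\n";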
There are two kinds of locks. One is what @WebnetMobile.com posted.
The other is called an advisory lock: you put up a flag, and everyone is required to check the flag before being allowed access. It's on you to check for the flag everywhere that needs it, but the advantage is that you can tune the locking exactly to your needs.
Also, with InnoDB you can lock specific rows of a table without locking the entire table.
See: http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
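MySQL's named locks (GET_LOCK/RELEASE_LOCK) are one way to implement such an advisory flag without an extra table; a minimal sketch ($link and the lock name "my_table_work" are assumptions):
$res = mysqli_query($link, "SELECT GET_LOCK('my_table_work', 5)"); // wait up to 5 seconds
$row = mysqli_fetch_row($res);
if ($row && $row[0] == 1) {
    // ... do the work that must not run concurrently ...
    mysqli_query($link, "SELECT RELEASE_LOCK('my_table_work')");
} else {
    // someone else holds the flag; skip or retry later
}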

PHP + Locking MySQL Table fails

I have a table that needs to be locked against inserts, but it also needs to stay updatable while inserts are prevented.
function myfunction() {
    $locked = mysql_result(mysql_query("SELECT locked FROM mylock"), 0, 0);
    if ($locked) return false;

    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=1");
    mysql_query("UNLOCK TABLES");

    /* I'm checking another table to see if a record doesn't exist already */
    /* If it doesn't exist then I'm inserting that record */

    mysql_query("LOCK TABLES mylock WRITE");
    mysql_query("UPDATE mylock SET locked=0");
    mysql_query("UNLOCK TABLES");
}
But this isn't enough: the function is called again from another script, and inserts happen simultaneously from the 2 calls to the function. I can't have that, because it causes duplicate records.
This is urgent, please help. I thought of using UNIQUE on the fields, but there are 2 fields (player1, player2), and neither of them may contain a duplicate of a player ID.
Unwanted behavior:
Record A = ( Player1: 123 Player2: 456 )
Record B = ( Player1: 456 Player2: 123 )
I just noticed you suffer from a race condition in your code. Assuming there isn't an error (see my comments)... two processes could both check and get a "not locked" result. The LOCK TABLES will serialize their access, but they'll both continue on thinking they have the lock and thus create duplicate records.
You could rewrite it as this:
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=1 WHERE locked=0");
$have_lock = mysql_affected_rows() > 0;
mysql_query("UNLOCK TABLES");
if (!$have_lock) return false;
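When the insert work is finished, the flag is handed back the same way as in the original function (sketch):
mysql_query("LOCK TABLES mylock WRITE");
mysql_query("UPDATE mylock SET locked=0");
mysql_query("UNLOCK TABLES");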
I suggest not using locks at all. Instead, when inserting the data, do it like this:
mysql_query("INSERT IGNORE INTO my_table VALUES(<some values here>)");
if (mysql_affected_rows() > 0)
{
    // the data was inserted without error
    $last_id = mysql_insert_id();
    // add what you need here
}
else
{
    // the data could not be inserted (because it already exists in the table)
    // query the table to retrieve the data
    mysql_query("SELECT * FROM my_table WHERE <some_condition>");
    // add what you need here
}
When you add the IGNORE keyword to an INSERT statement, MySQL will attempt to insert the data. If it doesn't work because a record with the same primary or unique key is already in the table, it fails silently. mysql_affected_rows is used to know the number of inserted records and to decide what to do.
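Note that INSERT IGNORE only skips rows that collide with a PRIMARY or UNIQUE key, so my_table needs such a key on whatever defines a duplicate. A sketch with assumed column names (a composite key like this one would not catch the reversed player1/player2 pair from the question, so choose the key to match what you consider a duplicate):
// the column names below are assumptions, not taken from the question
mysql_query("ALTER TABLE my_table ADD UNIQUE KEY uniq_pair (player1, player2)");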
You don't need table-level locking here; row-level locking is better. Row-level locking means only the row being modified is locked. The usual alternatives are to lock the entire table for the duration of the modification, or to lock some subset of the table. Row-level locking simply reduces that subset to the smallest number of rows that still ensures integrity.
In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.
If your problem is still not solved, memory size may be the issue. InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can hold at the same time is limited by the innodb_buffer_pool_size variable that was set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you're doing anything serious with InnoDB on your server.
Luckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There's simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post's writing).
Before you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server wide variable, not a per-thread variable, so it's shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won't use all of that up front. As MySQL finds more things to put in the buffer, the memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least used data begins to get pruned when new data needs to be present.
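To see what the server is currently running with before touching the configuration, a quick check from PHP (a sketch; $link is a mysqli connection):
$res = mysqli_query($link, "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'");
$row = mysqli_fetch_assoc($res);
echo "innodb_buffer_pool_size = " . $row['Value'] . " bytes\n";
// the value itself is changed in my.cnf under [mysqld] and needs a server restart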

MYSQL table locking with PHP

I have a MySQL table fg_stock. Most of the time, concurrent access happens on this table. I used this code but it doesn't work:
<?php
mysql_query("LOCK TABLES fg_stock READ");

$select = mysql_query("SELECT stock FROM fg_stock WHERE Item='$item'");
while ($res = mysql_fetch_array($select))
{
    $stock       = $res['stock'];
    $close_stock = $stock + $qty_in;
    $update      = mysql_query("UPDATE fg_stock SET stock='$close_stock' WHERE Item='$item' LIMIT 1");
}

mysql_query("UNLOCK TABLES");
?>
Is this okay?
"Most of the time concurrent access is happening in this table"
So why would you want to lock the ENTIRE table when it's clear you are attempting to access a specific row from the table (WHERE Item='$item')? Chances are you are running the MyISAM storage engine for the table in question. You should look into using the InnoDB engine instead, as one of its strong points is that it supports row-level locking, so you don't need to lock the entire table.
Why do you need to lock your table anyway?????
mysql_query("UPDATE fg_stock SET stock=stock+$qty_in WHERE Item='$item'");
That's it! No need to lock the table and no need for an unnecessary loop with a set of queries. Just try to avoid SQL injection by using PHP's intval function on $qty_in (if it is an integer, of course), for example.
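A minimal sketch of that single statement with the sanitization in place (using the old mysql_ functions to match the question):
$qty_in = intval($qty_in);                 // force an integer quantity
$item   = mysql_real_escape_string($item); // escape the string parameter
mysql_query("UPDATE fg_stock SET stock = stock + $qty_in WHERE Item = '$item'");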
And the concurrent-access problem probably only happens because of non-optimized work with the database, with an excessive number of queries.
PS: moreover, your example does not make sense, as MySQL could update the same record over and over in the loop. You did not tell MySQL exactly which record you want to update; you only told it to update one record with Item='$item'. On the next iteration the SAME record could be updated again, because MySQL does not know the difference between records it has already updated and those it has not touched yet.
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
So your syntax is correct.
Also from another question:
Troubleshooting: you can test for table lock success by trying to work with another table that is not locked. If you obtained the lock, trying to write to a table that was not included in the lock statement should generate an error.

You may want to consider an alternative solution. Instead of locking, perform an update that includes the changed elements as part of the WHERE clause. If the data that you are changing has changed since you read it, the update will "fail" and return zero rows modified. This eliminates the table lock, and all the messy horrors that may come with it, including deadlocks.
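A sketch of that optimistic pattern for the fg_stock example from the question (the old value goes into the WHERE clause; how to retry on failure is up to you):
$row = mysql_fetch_assoc(mysql_query("SELECT stock FROM fg_stock WHERE Item = '$item'"));
$old = intval($row['stock']);
$new = $old + intval($qty_in);
mysql_query("UPDATE fg_stock SET stock = $new WHERE Item = '$item' AND stock = $old");
if (mysql_affected_rows() == 0) {
    // the row changed between the read and the update; re-read and retry, or give up
}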