How does locking tables work? - php

I have a PHP script that will be requested several times "at the same time". I also have a field in a table, let's call it persons, that acts as an active/inactive flag. When the first instance of the script runs, I want it to set that field to inactive so that the remaining instances die when they check it. Can someone provide a solution for that? How can I ensure that this script will run only once?
PHP, PDO, MySQL
Thank you very much in advance.

Your script should fetch the current flag within a transaction using a locking read, such as SELECT ... FOR UPDATE:
$dbh = new PDO("mysql:dbname=$dbname", $username, $password);
$dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$dbh->beginTransaction();

// using SELECT ... FOR UPDATE, MySQL will hold all other connections
// at this point until the lock is released
$qry = $dbh->query('SELECT persons FROM my_table WHERE ... FOR UPDATE');

if ($qry->fetchColumn() == 'active') {
    $dbh->query('UPDATE my_table SET persons = "inactive" WHERE ...');
    $dbh->commit(); // releases lock so others can see they are inactive
    // we are the only active connection
} else {
    $dbh->rollBack();
    // we are inactive
}

You can use MySQL's own 'named' locking functions without ever having to lock a table: http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
For example, try GET_LOCK('got here first', 0) with a 0 timeout. If you get the lock, you're first in the gate, and any subsequent requests will NOT get the lock and can abort immediately.
However, be careful with this stuff. MySQL releases a named lock when the session holding it terminates, but if you use persistent connections (or the client hangs without disconnecting) the lock can linger, and your "need locks for this" system is dead in the water until you manually clear the lock with RELEASE_LOCK().
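For illustration, a minimal PDO sketch of that approach (the lock name 'my_script_lock' is a placeholder, and $dbh is assumed to be a connected PDO handle):
$stmt = $dbh->query("SELECT GET_LOCK('my_script_lock', 0)");
if ((int) $stmt->fetchColumn() !== 1) {
    exit; // another instance got here first
}

// ... do the work that must only run once ...

$dbh->query("SELECT RELEASE_LOCK('my_script_lock')");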

Related

How can I manually trigger the error "canceling statement due to conflict with recovery" in my PostgreSQL replication scheme?

In order to test various settings in my PostgreSQL hot standby replication schema, I need to reproduce a situation where the following error occurs:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
Therefore, I try to run two processes: one that forever updates a boolean field with its opposite, and one that reads the value from the replica.
The update script is this one (loopUpdate.php):
$engine   = 'pgsql';
$host     = 'mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user     = 'dummyusr';
$pass     = 'dummypasswd';

$dsn = $engine.':dbname='.$database.';host='.$host;
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);

echo "Continuously updating a field on mytable in order to cause new row versions.".PHP_EOL;

while (true) {
    $pdo->exec("UPDATE mytable SET boolval = NOT boolval WHERE id=52");
}
And the read script is the following (./loopRead.php):
$engine   = 'pgsql';
$host     = 'mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user     = 'dummyusr';
$pass     = 'dummypasswd';

$dsn = $engine.':dbname='.$database.';host='.$host;
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);

echo "Continuously reading a field from mytable on the replica.".PHP_EOL;

while (true) {
    // exec() only returns an affected-row count; use query() to fetch rows
    $value = $pdo->query("SELECT id, boolval FROM mytable WHERE id=52")->fetch();
    var_dump($value);
    echo PHP_EOL;
}
And I execute them in parallel:
# From one shell session
$ php ./loopUpdate.php
# From another one shell session
$ php ./loopRead.php
The mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com is hot standby read replica of the mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com.
But I fail to make loopRead.php fail with the error:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
As far as I know, the error I am trying to reproduce occurs when a VACUUM on the primary removes row versions that an active read transaction on the read replica still needs. So how can I cause my SELECT statement to read stale versions of my row?
On the standby, set max_standby_streaming_delay to 0 and hot_standby_feedback to off.
Then start a transaction on the standby:
SELECT *, pg_sleep(10) FROM atable;
Then DELETE rows from atable and VACUUM (VERBOSE) it on the primary server. Make sure some rows are removed.
Then you should be able to observe a replication conflict.
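As a rough illustration, here is a PHP sketch of the standby side, reusing the placeholder connection details from the question; while pg_sleep() holds the query's snapshot open, run the DELETE and VACUUM (VERBOSE) on the primary from a separate session:
$pdo = new PDO(
    'pgsql:dbname=dummydb;host=mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com',
    'dummyusr', 'dummypasswd',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
try {
    // pg_sleep(10) keeps this query's snapshot open long enough for the
    // primary's VACUUM to remove row versions the query might still need
    $rows = $pdo->query('SELECT *, pg_sleep(10) FROM atable')->fetchAll();
    var_dump($rows);
} catch (PDOException $e) {
    // expected: SQLSTATE[40001] canceling statement due to conflict with recovery
    echo $e->getMessage(), PHP_EOL;
}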
In order to cause your error you need to place a HUGE delay into your SELECT query itself via the pg_sleep PostgreSQL function, changing your query into:
SELECT id, boolval, pg_sleep(1000000000) FROM mytable WHERE id=52
That way a single transaction contains a "heavy" query, which maximizes the chances of causing a PostgreSQL serialization error.
The detail may differ, though:
DETAIL: User was holding shared buffer pin for too long.
In that case, try reducing the pg_sleep value from 1000000000 to 10.

MySQL Make concurrent SELECT ... FOR UPDATE queries fail instead of waiting for lock to be released

In my PHP code I'm trying to make an InnoDB database transaction be ignored if another thread is already performing the transaction on the row. Here's some code as an example:
$db = connect_db();
try {
    $db->beginTransaction();
    $query = "SELECT val FROM numbers WHERE id=1 FOR UPDATE"; // I want this to throw an exception if the row is already selected for update
    make_query($db, $query); // function I use to make queries with PDO
    sleep(5); // added to delay the transaction
    $num = rand(1, 100);
    $query = "UPDATE numbers SET val='$num' WHERE id=1";
    make_query($db, $query);
    $db->commit();
    echo $num;
}
catch (PDOException $e) {
    echo $e;
}
When it makes the SELECT val FROM numbers WHERE id=1 FOR UPDATE query, I need some way of knowing through PHP whether the thread is waiting for another thread to finish its transaction and release the lock. What ends up happening is that the first thread finishes the transaction and the second thread overwrites its changes immediately afterwards. Instead, I want the first transaction to finish and the second transaction to roll back or commit without making any changes.
Consider simulating record locks with GET_LOCK()
Choose a name specific to the rows you want to lock, e.g. 'numbers_1'.
Call SELECT GET_LOCK('numbers_1', 0) to lock the name 'numbers_1'. It will return 1 and set the lock if the name is available, or return 0 if the lock is already set. The second parameter is the timeout, 0 for immediate. On a return of 0 you can back out.
Use SELECT RELEASE_LOCK('numbers_1') when you are finished.
Be aware: calling GET_LOCK() again will release the previously set lock (before MySQL 5.7, a session can hold only one named lock at a time).
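A sketch of how that could look in the question's code (connect_db() is the question's own helper and is assumed to return a PDO handle; the lock name is a placeholder):
$db = connect_db();
// try to acquire the lock immediately; a 0 timeout means "fail fast"
$got = $db->query("SELECT GET_LOCK('numbers_1', 0)")->fetchColumn();
if ((int) $got !== 1) {
    exit; // someone else holds the lock: back out without touching the row
}
try {
    $db->beginTransaction();
    $num = rand(1, 100);
    $db->exec("UPDATE numbers SET val='$num' WHERE id=1");
    $db->commit();
} finally {
    $db->query("SELECT RELEASE_LOCK('numbers_1')");
}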

MySql and ADODB5 - Max connections

Here's an example of PHP code that makes a connection to MySQL and performs a select query using ADOdb:
<?php
include('adodb.inc.php'); # load code common to ADOdb
$db = ADONewConnection('mysql');
$db->PConnect("localhost", "root", "password", "database");

$recordSet = $db->Execute('select * from products');
if (!$recordSet)
    print $db->ErrorMsg();
else
    while (!$recordSet->EOF) {
        print $recordSet->fields[0].' '.$recordSet->fields[1].'<BR>';
        $recordSet->MoveNext();
    }

$recordSet->Close(); # optional
$db->Close();        # optional
?>
Do I have to use
$db = ADONewConnection('mysql');
$db->PConnect("localhost", "root", "password", "database");
and
$recordSet->Close(); # optional
$db->Close();        # optional
each time I want to make a query, to avoid reaching the max_connections limit?
How can I manage when 1000 users or more are connected to my website with MySQL's max_connections = 100?
When the maximum number of connections has been reached, your $db->PConnect should throw an exception or return an error code (I don't know this driver well; please check the manual). You must watch for this error and act accordingly. Typically, wait a few seconds and try again a couple of times before returning an error to the user.
Now, max_connections is the limit on concurrent connections. 1000 users connected to your application are (hopefully) not all running a query at the same time, so you should be safe for a while. At the end of a script's execution, all connections are closed (or returned to the pool in your case) and become available to other users. So you will not reach your limit of 100 unless 100 users actually click at the same time on some link in your application.
But you should write your scripts so that they open (or acquire) a connection as late as possible and close (or release) it as early as possible. This way each connection is held for as short a time as possible, making it less likely that you hit the limit.
Now, if you do reach the limit, there is nothing you can do but increase it. The only workaround is to put excess connection requests on hold (as suggested in the first paragraph).
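A rough sketch of that retry pattern (ADOdb's PConnect() returns false on failure; the three attempts and two-second delay are arbitrary choices):
include('adodb.inc.php');

$db = ADONewConnection('mysql');
$connected = false;
for ($attempt = 1; $attempt <= 3 && !$connected; $attempt++) {
    // PConnect() returns false when the connection fails,
    // e.g. because max_connections has been reached
    $connected = $db->PConnect("localhost", "root", "password", "database");
    if (!$connected) {
        sleep(2); // wait a moment before retrying
    }
}
if (!$connected) {
    die('Database busy, please try again later: '.$db->ErrorMsg());
}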

Postgresql: PREPARE TRANSACTION

I've two DB servers db1 and db2.
db1 has a table called tbl_album
db2 has a table called tbl_user_album
CREATE TABLE tbl_album
(
    id   bigint PRIMARY KEY,
    name varchar(128)
    ...
);

CREATE TABLE tbl_user_album
(
    id       bigint PRIMARY KEY,
    album_id bigint
    ...
);
Now if a user wants to create an album, what my PHP code needs to do is:
Create a record in db1 and save its id (primary key)
Create a record in db2 using the id saved in the first step
Is it possible to keep these two statements in a transaction? I'm OK with a PHP solution too. I mean, I'm fine if there is a solution that needs PHP code to retain DB handles and commit or roll back on those handles.
Any help is much appreciated.
Yes it is possible, but do you really need it?
Think twice before you decide this really must be two separate databases.
You could just keep both connections open and ROLLBACK the first command if the second one fails.
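A rough sketch of that simpler variant, using the same pg_connect handles as the proof-of-concept further down (not crash-proof, but there are no prepared transactions to clean up either):
pg_query( $db1, 'BEGIN' );
$result  = pg_query( $db1, "INSERT INTO tbl_album(name) VALUES('Absolutely Free') RETURNING id" );
$albumid = pg_fetch_result( $result, 0, 0 );

// only commit db1 once the dependent insert on db2 has succeeded
if ( pg_query( $db2, "INSERT INTO tbl_user_album(album_id) VALUES($albumid)" ) ) {
    pg_query( $db1, 'COMMIT' );
} else {
    pg_query( $db1, 'ROLLBACK' );
}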
If you'd really need prepared transactions, continue reading.
Regarding your schema - I would use sequence generators and RETURNING clause on database side, just for convenience.
CREATE TABLE tbl_album (
    id   serial PRIMARY KEY,
    name varchar(128) UNIQUE,
    ...
);

CREATE TABLE tbl_user_album (
    id       serial PRIMARY KEY,
    album_id bigint NOT NULL,
    ...
);
Now you will need some external glue, something like a distributed transaction coordinator, to make this work properly.
The trick is to use PREPARE TRANSACTION instead of COMMIT. Then after both transactions succeed, use COMMIT PREPARED.
PHP proof-of-concept is below.
WARNING! This code is missing the critical part, namely error control. Any error in $db2 should be caught, and ROLLBACK PREPARED should be executed on $db1.
If you don't catch errors, you will leave $db1 with frozen transactions, which is really, really bad.
<?php
$db1 = pg_connect( "dbname=db1" );
$db2 = pg_connect( "dbname=db2" );

$transid = uniqid();

pg_query( $db1, 'BEGIN' );
$result = pg_query( $db1, "INSERT INTO tbl_album(name) VALUES('Absolutely Free') RETURNING id" );
$row = pg_fetch_row( $result );
$albumid = $row[0];
pg_query( $db1, "PREPARE TRANSACTION '$transid'" );

if ( pg_query( $db2, "INSERT INTO tbl_user_album(album_id) VALUES($albumid)" ) ) {
    pg_query( $db1, "COMMIT PREPARED '$transid'" );
}
else {
    pg_query( $db1, "ROLLBACK PREPARED '$transid'" );
}
?>
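For instance, a minimal sketch of one missing check, assuming the same $db1 and $transid as above (PREPARE TRANSACTION itself can fail, e.g. when max_prepared_transactions is 0):
// check that PREPARE TRANSACTION actually succeeded before touching $db2;
// on failure, end the (now aborted) transaction normally
if ( !pg_query( $db1, "PREPARE TRANSACTION '$transid'" ) ) {
    pg_query( $db1, 'ROLLBACK' );
    die( 'PREPARE TRANSACTION failed: ' . pg_last_error( $db1 ) );
}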
And again, think before you use it. What Erwin proposes might be more sensible.
Oh, and just one more note... To use this PostgreSQL feature, you need to set the max_prepared_transactions config variable to a nonzero value.
If you can access db2 from within db1, then you could optimize the process and actually keep it all inside a transaction. Use dblink or SQL MED for that.
If you roll back a transaction on the local server, what has been done via dblink on a remote server will not be rolled back. (That is one way to make changes persistent even if a transaction is rolled back.)
But you can execute code on the remote server that rolls back if not successful, and only execute it, if the operation in the local db has been successful first. If the remote operation fails you can roll back locally, too.
Also, use the RETURNING clause of INSERT to return id from a serial column.
It will be easier with PDO...
The main advantage of PDO is capturing errors (as PHP exceptions or SQL error messages) for each single SQL statement in the transaction.
See pdo.begintransaction, pdo.commit, pdo.rollback and pdo.error-handling.
Example:
$dbh->beginTransaction();
try {
    /* Do SQL */
    $dbh->exec("CREATE TABLE tbl_album (..)");
    $dbh->exec("CREATE TABLE tbl_user_album(..)");
    /* Commit the changes */
    $dbh->commit();
} catch (PDOException $e) {
    /* Roll back the whole transaction on any error */
    $dbh->rollBack();
}

SQLite transaction doesn't work as expected

I prepared 2 files, "1.php" and "2.php".
"1.php" is like this.
<?php
$dbh = new PDO('sqlite:test1');
$dbh->beginTransaction();
print "aaa<br>";
sleep(55);
$dbh->commit();
print "bbb";
?>
and "2.php" is like this.
<?php
$dbh = new PDO('sqlite:test1');
$dbh->beginTransaction();
print "ccc<br>";
$dbh->commit();
print "ddd";
?>
and I execute "1.php". It starts a transaction and waits 55 seconds.
So when I immediately execute "2.php", my expectation is this:
"1.php" begins a transaction and
"1" holds a database lock
"2" cannot begin a transaction
"2" cannot get the database lock, so
"2" has to wait 55 seconds
BUT the test went another way. When I execute "2":
"2" immediately returned its result
"2" did not wait
So I have to conclude that "1" did not really start a transaction, or did not get a database lock.
Can anyone help?
As I understand it, SQLite transactions do not lock the database unless
a. you make them EXCLUSIVE (they are DEFERRED by default), or
b. you actually access the database
So either you explicitly call
$dbh->exec("BEGIN EXCLUSIVE TRANSACTION");
or you make a write operation (INSERT/UPDATE) to the DB before you start to sleep().
To cite the documentation (emphasis mine):
Transactions can be deferred, immediate, or exclusive. The default transaction behavior is deferred. Deferred means that no locks are acquired on the database until the database is first accessed. Thus with a deferred transaction, the BEGIN statement itself does nothing. Locks are not acquired until the first read or write operation. The first read operation against a database creates a SHARED lock and the first write operation creates a RESERVED lock.
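Applied to "1.php", a minimal sketch that replaces beginTransaction() with an explicit exclusive BEGIN:
<?php
$dbh = new PDO('sqlite:test1');
// acquire the database lock immediately instead of deferring it
$dbh->exec("BEGIN EXCLUSIVE TRANSACTION");
print "aaa<br>";
sleep(55); // queries from "2.php" are now locked out until this commits
           // (depending on the busy timeout they may fail with
           // "database is locked" instead of waiting)
$dbh->exec("COMMIT");
print "bbb";
?>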
