MySQL 5.7: GET_LOCK not working any more - PHP

After upgrading to MySQL 5.7, GET_LOCK stopped working the way it did on MySQL 5.5 and the way I expected it to work. I am aware of the changes to GET_LOCK in 5.7, as described here.
When I execute the same script from the command line twice – with a small pause in between – it works as expected: the first one acquires the lock and the second one doesn't.
When I execute the same PHP script via the browser twice – with a small pause in between – both report that they successfully acquired the lock. This is not what I expected; it differs from version 5.5 and from my understanding of GET_LOCK as described in the 5.7 documentation.
PHP is running as an Apache module (phpinfo() shows Server API: Apache 2.0 Handler).
PHP version: 7.0.20
MySQL version: 5.7.18
OS: CentOS 7
This is the example script locktest.php:
<?php
$host = 'localhost';
$db = 'enter_your_db';
$user = 'enter_your_username';
$pass = 'enter_your_password';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
$opt = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => false,
PDO::ATTR_PERSISTENT => false
];
$pdo = new PDO($dsn, $user, $pass, $opt);
echo "pid=(".getmypid().")\n";
$stmt = $pdo->query('SELECT connection_id() as connid');
$row = $stmt->fetch();
echo "mysql connection id =(".$row['connid'].")\n";
$stmt = $pdo->query('SELECT GET_LOCK("foobar", 2)');
$row = $stmt->fetch();
var_dump($row);
echo "\n\n";
sleep(10);
When this script is run from the command line, I get what I expect:
Run php -q locktest.php from one terminal, then immediately afterwards from another terminal window.
The first one will return:
pid=(18378)
mysql connection id =(71)
array(1) {
["GET_LOCK("foobar", 2)"]=>
int(1)
}
(please note that GET_LOCK result is 1)
The second one will return (started while the first one was still running):
pid=(18393)
mysql connection id =(73)
array(1) {
["GET_LOCK("foobar", 2)"]=>
int(0)
}
(please note that GET_LOCK result is 0 – as expected, and different pid and mysql connection id).
When the same script is started twice from the browser – it reports that both scripts successfully obtained the lock.
First returns:
pid=(11913) mysql connection id =(74) array(1) { ["GET_LOCK("foobar", 2)"]=> int(1) }
Second returns (while first one is still running):
pid=(11913) mysql connection id =(75) array(1) { ["GET_LOCK("foobar", 2)"]=> int(1) }
Please note that the pids are the same, but the mysql connection ids are different, and the GET_LOCK result is not as expected, since both returned 1.
Now I am confused. Different MySQL connections (as returned by CONNECTION_ID()) are used, which suggests different MySQL sessions. According to the MySQL documentation it is possible to obtain multiple locks with the same name from the SAME session, but here I have different MySQL sessions, right?
I even put PDO::ATTR_PERSISTENT => false although that is a default value.
The only difference between the command-line and browser output is the pids (different pids for the two PHP scripts executed from the command line, the same pid for the two executed from the browser).
Any thoughts on what is happening? For now it looks to me like a major issue, since locking quietly stopped working. :(
Thanks.
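One way to rule out a client-side illusion is to ask the server who actually holds the lock while both requests are sleeping. A minimal diagnostic sketch (credentials and the lock name are assumed to match the script above):

```php
<?php
// Hypothetical diagnostic: run this while both locktest.php requests sleep.
// IS_USED_LOCK() returns the connection id of the holder, or NULL if free.
$pdo = new PDO('mysql:host=localhost;dbname=enter_your_db;charset=utf8mb4',
               'enter_your_username', 'enter_your_password',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$row = $pdo->query('SELECT IS_USED_LOCK("foobar") AS holder,
                           IS_FREE_LOCK("foobar") AS free')
           ->fetch(PDO::FETCH_ASSOC);
var_dump($row); // "holder" should match exactly one of the two connection ids
```

If `holder` reports only one of the two connection ids, the lock really is held by a single session on the server, and the second `int(1)` came from the client side rather than from GET_LOCK itself.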

Related

How can I manually trigger the error "canceling statement due to conflict with recovery" in my PostgreSQL replication scheme?

In order to test various settings in my PostgreSQL hot-standby replication schema I need to reproduce a situation where the following error occurs:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
Therefore, I try to run two processes: one that forever updates a boolean field with its opposite, and one that reads the value from the replica.
The update script is this one (loopUpdate.php):
$engine = 'pgsql';
$host = 'mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dns = $engine.':dbname='.$database.";host=".$host;
$pdo = new PDO($dns,$user,$pass, [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously updating a field on et_store in order to cause new row versions.".PHP_EOL;
while(true)
{
$pdo->exec("UPDATE mytable SET boolval= NOT boolval where id=52");
}
And the read script is the following (./loopRead.php):
$engine = 'pgsql';
$host = 'mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dns = $engine.':dbname='.$database.";host=".$host;
$pdo = new PDO($dns,$user,$pass, [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously reading the field from et_store.".PHP_EOL;
while(true)
{
$stmt = $pdo->query("SELECT id, boolval FROM mytable WHERE id=52");
var_dump($stmt->fetch(PDO::FETCH_ASSOC));
echo PHP_EOL;
}
And I execute them in parallel:
# From one shell session
$ php ./loopUpdate.php
# From another one shell session
$ php ./loopRead.php
The mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com is a hot-standby read replica of mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com.
But I fail to make loopRead.php fail with the error:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
As far as I know, the error I am trying to reproduce occurs because a PostgreSQL VACUUM removes row versions that an active read transaction on the read replica still needs. So how can I cause my SELECT statement to read stale versions of my row?
On the standby, set max_standby_streaming_delay to 0 and hot_standby_feedback to off.
Then start a transaction on the standby:
SELECT *, pg_sleep(10) FROM atable;
Then DELETE rows from atable and VACUUM (VERBOSE) it on the primary server. Make sure some rows are removed.
Then you should be able to observe a replication conflict.
In order to cause your error you need to place a HUGE delay into your select query itself via the pg_sleep PostgreSQL function, changing your query into:
SELECT id, boolval, pg_sleep(1000000000) FROM mytable WHERE id=52
So in a single transaction you have a "heavy" query, which maximizes the chances of causing a PostgreSQL serialization error.
Though the detail will differ:
DETAIL: User was holding shared buffer pin for too long.
In that case try reducing the pg_sleep value from 1000000000 to 10.
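On the read side, the conflict surfaces as SQLSTATE 40001. A hedged sketch of how loopRead.php could detect that it has been reproduced (host and credentials as in the question; the exact handling is an assumption):

```php
<?php
// Assumed DSN matching the question's read replica.
$pdo = new PDO('pgsql:dbname=dummydb;host=mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com',
               'dummyusr', 'dummypasswd',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

try {
    // pg_sleep() keeps the snapshot open long enough for VACUUM on the
    // primary to remove row versions this query might still need.
    $stmt = $pdo->query('SELECT id, boolval, pg_sleep(10) FROM mytable WHERE id=52');
    var_dump($stmt->fetch(PDO::FETCH_ASSOC));
} catch (PDOException $e) {
    if ($e->getCode() === '40001') {
        echo 'Reproduced: canceling statement due to conflict with recovery'.PHP_EOL;
    } else {
        throw $e;
    }
}
```

With pdo_pgsql, PDOException::getCode() returns the SQLSTATE string, so the serialization failure can be distinguished from other errors.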

mysqldump dump via php exec works for 1st db but not next 3

I have four databases which I am trying to back up in PHP using exec(mysqldump ...).
I am using the following code,
$command = 'mysqldump --single-transaction --comments --dump-date --host='.$host[$db].' --user='.$dbuser[$db].' --password='.$dbpass[$db].' '.$dbname[$db].' | zip > '.$filepath;
exec($command, $_messages, $result);
if($result)
{
echo '<p class="success">Backup for database '.$dbname[$db].' complete. Located at: '.$filepath.'</p>';
}
else
{
echo 'Backup for database '.$dbname[$db].' failed';
}
The $host, $dbname, etc. are the same variables I use for connecting to the databases for normal access, so I know they are valid. I was under the impression that the 3rd argument of exec() was True if successful, False otherwise. The dumps are cycled through using the $db variable, so the commands issued are identical in format, but obviously with different details for each database. The results are as follows:
Database A dump works fine, but $result is returned as False
Database B dump produces a file, but no contents and $result is returned as False
Database C dump produces a file, but no contents, and $result is returned as False
Database D dump apparently works ($result True) but produces no file.
So I am beginning to think the correct interpretation of $result is 0 for success.
There is no difference whether the databases are processed in a loop or individually. There is no problem producing the dumps via phpMyAdmin, and the dump for database A tallies with the phpMyAdmin dump.
I'm on a shared hosting platform running php 5.4 and MySQL 5.6.30 with cPanel.
Anyone got any thoughts as to what might be happening?
Many thanks.
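Your later reading is correct: the third argument of exec() receives the command's exit status, so 0 means success. A hedged sketch with the status checked explicitly and the arguments shell-escaped (gzip is swapped in for zip here as an assumption, since bare zip does not compress a stdin stream to a redirected path the way this pipeline expects):

```php
<?php
// Hypothetical rework of the question's backup command.
$command = sprintf(
    'mysqldump --single-transaction --comments --dump-date --host=%s --user=%s --password=%s %s | gzip > %s',
    escapeshellarg($host[$db]),
    escapeshellarg($dbuser[$db]),
    escapeshellarg($dbpass[$db]),
    escapeshellarg($dbname[$db]),
    escapeshellarg($filepath)
);
exec($command, $output, $status);

// $status is the exit code of the LAST command in the pipeline (gzip),
// so a failing mysqldump can still yield 0 here; check the file as well.
if ($status === 0 && filesize($filepath) > 0) {
    echo "Backup for database {$dbname[$db]} complete. Located at: {$filepath}\n";
} else {
    echo "Backup for database {$dbname[$db]} failed (exit status {$status})\n";
}
```

The pipeline caveat would also explain dumps that report "success" while producing an empty file: the compressor exits 0 even when mysqldump fed it nothing.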

PHP PDO SQL - update inner join query freezes during execution

I need to run a daily PHP script that downloads a data file and executes a bunch of SQL Queries in sequence to import and optimize the data.
I'm having a problem executing one of my queries from PHP, which appears to freeze the mysqld process on my server. Oddly, running the same query does not cause a similar problem when run from the Sequel Pro database client (Mac).
The query is running an update on a large table with over a million rows. Here is the stored procedure I'm using:
DELIMITER ;;
CREATE PROCEDURE spSetTripIdInStopTimes()
BEGIN
-- SET META_TRIP_ID IN IMPORT_STOP_TIMES
UPDATE import_stop_times
INNER JOIN ref_trips ON
(
import_stop_times.trip_id = ref_trips.trip_id
AND import_stop_times.meta_agency_id =ref_trips.meta_agency_id
)
SET import_stop_times.meta_trip_id = ref_trips.meta_trip_id;
END;;
DELIMITER ;
When the procedure is called with
CALL spSetTripIdInStopTimes;
inside Sequel Pro, the result is 1241483 rows affected and the time taken is around 90 seconds.
With PHP PDO I run the same command with
$result = $database->runExecQuery("CALL spSetTripIdInStopTimes");
However, it gets stuck on this query for over 24 hours and still has not completed. When I cancel the PHP script I can see that the mysqld process is still taking 99.5% CPU. At this point I restart MySQL with 'sudo service mysql restart'.
I also tried using PHP's mysqli, but the freezing also occurs with this method.
$mysqli->query("CALL spSetTripIdInStopTimes")
Would anyone be able to reason why this is happening or suggest another method?
Thank you in advance.
Note: I also tried using the older mysql_* extension in PHP, but the version I'm using (5.5.9-1ubuntu4.14) tells me it is deprecated and stops the script.
[UPDATE]
I've also tried running the stored procedure directly on the command line:
mysql --user=[username] --password=[password] --execute="call spSetTripIdInStopTimes()" [tablename]
which worked.
So I tried running the same command with PHP's exec() function:
exec("mysql --user=[username] --password=[password] --execute=\"call spSetTripIdInStopTimes()\" [table name]");
Unfortunately, it still hangs. I'm starting to wonder if this is due to a limitation or overhead of PHP.
[UPDATE 2]
Here is the array of PHP PDO connection options I use:
array(
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
// Allow file reading, need following settings to import data from txt files
PDO::MYSQL_ATTR_LOCAL_INFILE => true,
PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
PDO::ATTR_EMULATE_PREPARES => true)
[UPDATE 3]
I'm using a custom database object, so I'll show the function for $database->runExecQuery() for clarification:
function runExecQuery($queryString)
{
$db = $this->getConnection(); // Connect to database
try{
return array(
"success"=> true,
"data"=>$db->exec($queryString)
);
}
catch (PDOException $ex)
{
return array(
"success"=> false,
"errMessage"=> "ERROR[{$ex->getCode()}]".($this->Debug ? "{$ex}" : "")
);
}
}
The variable $db is the connection variable that is initialized as follows:
// Create address string
$db_address =
"mysql:host={$settings['db_hostname']};".
"dbname={$settings['db_name']};".
"charset={$settings['db_charset']}";
// Create connection to database
$db = new PDO($db_address, $settings['db_user'], $settings['db_pw'], $options);
where $options is the array from [Update 2].
[Update 4]
Mihai's suggestion of changing PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false had some promising results as query appeared to finish. However, after more testing I found that the PHP script will sometimes still hang on the Query about 50% of the time it is run. This is true even with the same set of data in the SQL tables.
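One possibility worth checking (an assumption, not something confirmed by the question): with pdo_mysql, a CALL can produce one or more result sets in addition to the call status, and a connection that has not fully drained them can appear stuck on the next statement. A minimal sketch of draining them:

```php
<?php
// Hypothetical: drain every result set the CALL produces before
// reusing the connection ($db is the PDO connection from the question).
$stmt = $db->query("CALL spSetTripIdInStopTimes()");
do {
    if ($stmt->columnCount() > 0) {
        $stmt->fetchAll(); // fetch and discard any rows in this result set
    }
} while ($stmt->nextRowset()); // advance until no result sets remain
$stmt->closeCursor();          // release the connection for the next query
```

This costs nothing if the procedure returns no rows, and it removes one common cause of "works in a GUI client, hangs in PHP" behaviour, since GUI clients drain result sets automatically.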

PHP LOAD DATA INFILE Error Reading File

Solved
So, it was a type of permission error. Earlier in this script, I used flock() on the file to make sure the file wasn't being written to by another script. Removing flock() allows the query to run. Now I just need to determine a way to not load a file if it is still being written to...
I'm having trouble getting LOAD DATA INFILE to work in my php script. Here's the relevant portions of the script:
... //set $host, $user, etc.
$dsn = "mysql:host=$host;dbname=$database";
$pdo = new PDO($dsn, $user, $password, array(PDO::MYSQL_ATTR_LOCAL_INFILE => 1));
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
... //set $sqlFile and $table
$sql = "LOAD DATA INFILE '$sqlFile' REPLACE INTO TABLE `$table`";
echo $sql . "\n";
$rows = $pdo->exec($sql);
Running the script then produces:
LOAD DATA INFILE 'D:/pathToTemp/emdr/emdrorders/emdrorders_160314-1947UTC.txt'
REPLACE INTO TABLE `emdrorders`
[PDOException]
SQLSTATE[HY000]: General error: 2 Error reading file 'D:\pathToTemp\emdr\emdrorders\emdrorders_160314-1947UTC.txt'
(Errcode: 13 - Permission denied)
However, if I run the same query through the mysql cli it works.
mysql> LOAD DATA INFILE 'D:/pathToTemp/emdr/emdrorders/emdrorders_160314-1947UTC.txt'
REPLACE INTO TABLE `emdrorders`;
Query OK, 5487 rows affected (0.10 sec)
Records: 5355 Deleted: 132 Skipped: 0 Warnings: 0
I've also tried using LOCAL, but instead of throwing an exception $pdo->exec() returns 0 and the data is not loaded into the database.
My MySQL is 5.6.12 and PHP is 5.4.16 on a Windows machine, and I'm planning to put it on a Linux server. (I'm also doing this within the Laravel framework, but I don't think that would cause this problem.)
Since the query works in the mysql cli but not through php, I can only assume the problem is in the php settings or the pdo. What do I need to change?
What is it that you plan on doing with $rows?
You run queries through PDO::query() or PDO::prepare().
I assume from the use of PDOStatement::execute() that you wish to use PDO::prepare():
$query = $pdo->prepare($sql); $query->execute();
To obtain rows from the query:
$query->fetch() or $query->fetchAll()
See link for documentation on preparing a query in PDO:
http://www.php.net/manual/en/pdo.prepare.php
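On the "Solved" note above (skipping a file that is still being written to): a hedged sketch using a non-blocking lock as the probe. If the writer still holds its exclusive flock(), LOCK_NB makes the probe fail immediately instead of waiting:

```php
<?php
// Hypothetical helper: returns true when no other process holds an
// exclusive flock() on $path, i.e. the writer appears to be finished.
function isFileReady(string $path): bool
{
    $fh = fopen($path, 'r');
    if ($fh === false) {
        return false; // cannot open the file at all
    }
    // LOCK_NB: fail immediately instead of blocking if a writer holds LOCK_EX.
    $ready = flock($fh, LOCK_SH | LOCK_NB);
    if ($ready) {
        flock($fh, LOCK_UN); // release the probe lock before LOAD DATA runs
    }
    fclose($fh);
    return $ready;
}

// Usage sketch: only hand the file to LOAD DATA INFILE once it is free.
// if (isFileReady($sqlFile)) { $rows = $pdo->exec($sql); }
```

Note that flock() is advisory, so this only detects writers that also use flock(); and since the lock is released before the LOAD DATA runs, mysqld itself never has to contend with it, which is what caused the Errcode 13 in the first place.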

php mysql_connect - Lost connection to MySQL server

this post has been edited to reflect the findings thus far between myself and iamkrillin, as we have been the only two posters
I have the following VB.NET code connecting correctly, running from my PC
Dim strConnection As String = "Server=dev.xxxxx.vmc;Database=report1;integrated security=SSPI;" & _
"persist security info=False;Trusted_Connection=Yes;"
Dim ObjDa As SqlDataAdapter = New SqlDataAdapter(pStrQuery, strConnection)
Try
Dim dsReturn As DataSet = New DataSet
ObjDa.Fill(dsReturn)
ObjDa.Dispose()
Return dsReturn
Catch ex As Exception
Return Nothing
End Try
I have the following PHP code running from our iSeries
$conn = array( 'host' => 'dev.xxxxx.vmc',
'username' => 'vmc\adam',
'password' => 'xxxxxx)',
'dbname' => 'report1',
'pdoType' => 'dblib' );
try {
$db = new Zend_Db_Adapter_Pdo_Mssql($conn);
$db->getConnection();
} catch (Zend_Db_Adapter_Exception $e) {
}
The getConnection function is throwing an error:
SQLSTATE[] (null) (severity 0)
And when I look up this error HERE, it appears to be a bug prior to PHP 5.2.10, and we are running 5.2.17. But some of the other comments say it is still a bug in 5.3.
*edit
It seems that when using a domain account, Windows authentication must be enabled; however, that is not available through our PHP connection. So I need to set up a database-specific user for our PHP connection.
In your VB snippet, you are connecting to SQL Server, and in your PHP snippet you are connecting to MySQL. If you need to use SQL Server from PHP, look at this. If you are on a non windows platform you can try FreeTDS. Here is an example of how to get started with it
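Once a SQL-Server login (rather than a domain account) exists, the connection can also be made with plain PDO and the dblib driver (backed by FreeTDS on non-Windows platforms), without the Zend adapter. A hedged sketch; the login name and password are placeholders:

```php
<?php
// Hypothetical: connect to SQL Server via PDO's dblib driver using a
// SQL-Server login instead of Windows authentication.
try {
    $db = new PDO('dblib:host=dev.xxxxx.vmc;dbname=report1',
                  'php_report_user',   // hypothetical SQL-auth login
                  'secret',            // placeholder password
                  [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
    $row = $db->query('SELECT @@VERSION AS v')->fetch(PDO::FETCH_ASSOC);
    echo $row['v'].PHP_EOL; // quick sanity check that the link works
} catch (PDOException $e) {
    echo 'Connection failed: '.$e->getMessage().PHP_EOL;
}
```

This keeps the rest of the code driver-agnostic: the Zend adapter in the question wraps the same pdo_dblib extension underneath.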
