PHP PDO SQL - update inner join query freezes during execution - php

I need to run a daily PHP script that downloads a data file and executes a bunch of SQL Queries in sequence to import and optimize the data.
I'm having a problem executing one of my queries from PHP: it appears to freeze the mysqld process on my server. Oddly, the same query does not cause this problem when run from the Sequel Pro database client (Mac).
The query is running an update on a large table with over a million rows. Here is the stored procedure I'm using:
DELIMITER ;;
CREATE PROCEDURE spSetTripIdInStopTimes()
BEGIN
-- SET META_TRIP_ID IN IMPORT_STOP_TIMES
UPDATE import_stop_times
INNER JOIN ref_trips ON
(
import_stop_times.trip_id = ref_trips.trip_id
AND import_stop_times.meta_agency_id =ref_trips.meta_agency_id
)
SET import_stop_times.meta_trip_id = ref_trips.meta_trip_id;
END;;
DELIMITER ;
When the procedure is called with
CALL spSetTripIdInStopTimes;
inside Sequel Pro, the result is 1241483 rows affected and the time taken is around 90 seconds.
With PHP PDO I run the same command with
$result = $database->runExecQuery("CALL spSetTripIdInStopTimes");
However, it gets stuck on this query for over 24 hours and still has not completed. When I cancel the PHP script I can see that the mysqld process is still using 99.5% CPU. At that point I restart MySQL with 'sudo service mysql restart'.
I also tried using PHP's mysqli, but the freezing also occurs with this method.
$mysqli->query("CALL spSetTripIdInStopTimes")
Would anyone be able to reason why this is happening or suggest another method?
Thank you in advance.
Note: I also tried the older mysql extension in PHP, but the version I'm using (5.5.9-1ubuntu4.14) tells me the extension is deprecated and stops the script.
[UPDATE]
I've also tried running the stored procedure directly on the command line:
mysql --user=[username] --password=[password] --execute="call spSetTripIdInStopTimes()" [database]
which worked.
So I tried running the same command with PHP's exec() function:
exec("mysql --user=[username] --password=[password] --execute=\"call spSetTripIdInStopTimes()\" [database]");
Unfortunately, it still hangs. I'm starting to wonder if this is due to a limitation or overhead of PHP.
[UPDATE 2]
Here is the array of PHP PDO connection options I use:
array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    // Allow file reading; the following settings are needed to import data from txt files
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
    PDO::ATTR_EMULATE_PREPARES => true)
[UPDATE 3]
I'm using a custom database object, so I'll show the function for $database->runExecQuery() for clarification:
function runExecQuery($queryString)
{
    $db = $this->getConnection(); // Connect to database
    try {
        return array(
            "success" => true,
            "data" => $db->exec($queryString)
        );
    }
    catch (PDOException $ex)
    {
        return array(
            "success" => false,
            "errMessage" => "ERROR[{$ex->getCode()}]" . ($this->Debug ? "{$ex}" : "")
        );
    }
}
The variable $db is the connection variable that is initialized as follows:
// Create address string
$db_address =
"mysql:host={$settings['db_hostname']};".
"dbname={$settings['db_name']};".
"charset={$settings['db_charset']}";
// Create connection to database
$db = new PDO($db_address, $settings['db_user'], $settings['db_pw'], $options);
where $options is the array from [Update 2].
[Update 4]
Mihai's suggestion of changing PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false had promising results, as the query appeared to finish. However, after more testing I found that the PHP script still hangs on the query about 50% of the time it is run, even with the same set of data in the SQL tables.
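For what it's worth, a workaround often suggested for CALLs that wedge the connection (my sketch, not from the original post; `drainResults` is a name I made up) is to run the procedure through query() and drain every result set before the connection is reused — a CALL returns an extra status result even when the procedure only performs an UPDATE:

```php
<?php
// Sketch: drain every result set a CALL produces so the connection is
// left in a clean state. Works on any statement-like object exposing
// columnCount/fetchAll/nextRowset/closeCursor (i.e. a PDOStatement).
function drainResults($stmt): void
{
    do {
        if ($stmt->columnCount() > 0) {  // skip pure status packets
            $stmt->fetchAll();
        }
    } while ($stmt->nextRowset());
    $stmt->closeCursor();
}

// Usage against a live server (placeholder credentials):
// $stmt = $pdo->query('CALL spSetTripIdInStopTimes()');
// drainResults($stmt);
```

Whether this cures the hang here is untested; it is simply the standard way to leave a MySQL connection in sync after a stored-procedure call through PDO.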

Related

How can I manually trigger the error "canceling statement due to conflict with recovery" in my PostgreSQL replication scheme?

In order to test various settings in my PostgreSQL hot-standby replication schema, I need to reproduce the situation where the following error occurs:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
Therefore, I try to run two processes: one that forever updates a boolean field to its opposite, and one that reads the value from the replica.
The update script is this one (loopUpdate.php):
$engine = 'pgsql';
$host = 'mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dsn = $engine.':dbname='.$database.";host=".$host;
$pdo = new PDO($dsn,$user,$pass, [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously updating a field on mytable in order to cause new row versions.".PHP_EOL;
while(true)
{
$pdo->exec("UPDATE mytable SET boolval= NOT boolval where id=52");
}
And the read script is the following (./loopRead.php):
$engine = 'pgsql';
$host = 'mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dsn = $engine.':dbname='.$database.";host=".$host;
$pdo = new PDO($dsn,$user,$pass, [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously reading the field from mytable on the replica.".PHP_EOL;
while(true)
{
$value = $pdo->query("SELECT id, boolval FROM mytable WHERE id=52")->fetch();
var_dump($value);
echo PHP_EOL;
}
And I execute them in parallel:
# From one shell session
$ php ./loopUpdate.php
# From another one shell session
$ php ./loopRead.php
The mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com host is a hot-standby read replica of mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com.
But I fail to make loopRead.php fail with the error:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
As far as I understand, the error I am trying to reproduce occurs because a PostgreSQL VACUUM removes old row versions while an active read transaction on the replica still needs them. So how can I cause my SELECT statement to read stale versions of my row?
On the standby, set max_standby_streaming_delay to 0 and hot_standby_feedback to off.
Then start a transaction on the standby:
SELECT *, pg_sleep(10) FROM atable;
Then DELETE rows from atable and VACUUM (VERBOSE) it on the primary server. Make sure some rows are removed.
Then you should be able to observe a replication conflict.
In order to cause your error you need to place a HUGE delay into your select query itself via the pg_sleep PostgreSQL function, changing your query into:
SELECT id, boolval, pg_sleep(1000000000) FROM mytable WHERE id=52
That way a single transaction holds a "heavy" query, which maximizes the chances of causing a PostgreSQL serialization error.
Though the detail will differ:
DETAIL: User was holding shared buffer pin for too long.
In that case, try reducing the pg_sleep value from 1000000000 to 10.
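On the client side, the cancellation described above can be detected programmatically (my sketch; the helper name is mine): PostgreSQL reports it as SQLSTATE 40001, which PDO exposes through PDOException::getCode().

```php
<?php
// Sketch: detect the "canceling statement due to conflict with recovery"
// cancellation. PDO reports the SQLSTATE ('40001') as the exception code.
function isRecoveryConflict(PDOException $e): bool
{
    return $e->getCode() === '40001';
}

// Usage against the standby (placeholder DSN from the question):
// try {
//     $pdo->query('SELECT *, pg_sleep(10) FROM atable')->fetchAll();
// } catch (PDOException $e) {
//     echo isRecoveryConflict($e) ? "conflict reproduced\n" : "other error\n";
// }
```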

mysql 5.7. GET_LOCK not working any more

After upgrading to MySQL 5.7, GET_LOCK stopped working the way it did on MySQL 5.5 and the way I expect it to work. I am aware of the changes regarding GET_LOCK in 5.7, as described here.
When I execute the same script from the command line twice – with a small pause in between – it works as expected: the first one acquires the lock and the second one doesn't.
When I execute the same PHP script via the browser twice – with a small pause in between – both report that they successfully acquired the lock. This is not what I expected; it differs both from version 5.5 and from my understanding of GET_LOCK as described in the 5.7 documentation.
PHP is running as a module (phpinfo() shows Server API: Apache 2.0 Handler).
PHP version: 7.0.20
MySQL version: 5.7.18
OS: CentOS 7.
This is the example script locktest.php:
<?php
$host = 'localhost';
$db = 'enter_your_db';
$user = 'enter_your_username';
$pass = 'enter_your_password';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
$opt = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => false,
PDO::ATTR_PERSISTENT => false
];
$pdo = new PDO($dsn, $user, $pass, $opt);
echo "pid=(".getmypid().")\n";
$stmt = $pdo->query('SELECT connection_id() as connid');
$row = $stmt->fetch();
echo "mysql connection id =(".$row['connid'].")\n";
$stmt = $pdo->query('SELECT GET_LOCK("foobar", 2)');
$row = $stmt->fetch();
var_dump($row);
echo "\n\n";
sleep(10);
When this script is run from cmd line – I get what I expect:
Run php -q locktest.php from one terminal. Then immediately after from another terminal window.
The first one will return:
pid=(18378)
mysql connection id =(71)
array(1) {
["GET_LOCK("foobar", 2)"]=>
int(1)
}
(please note that GET_LOCK result is 1)
The second one will return (started while the first one was still running):
pid=(18393)
mysql connection id =(73)
array(1) {
["GET_LOCK("foobar", 2)"]=>
int(0)
}
(please note that GET_LOCK result is 0 – as expected, and different pid and mysql connection id).
When the same script is started twice from the browser – it reports that both scripts successfully obtained the lock.
First returns:
pid=(11913) mysql connection id =(74) array(1) { ["GET_LOCK("foobar", 2)"]=> int(1) }
Second returns (while first one is still running):
pid=(11913) mysql connection id =(75) array(1) { ["GET_LOCK("foobar", 2)"]=> int(1) }
Please note that the pids are the same but the mysql connection ids are different, and the GET_LOCK result is not as expected, since both returned 1.
Now I am confused. Different mysql connections (as returned by CONNECTION_ID()) are used, which suggests different MySQL sessions. According to the MySQL documentation it is possible to obtain multiple locks with the same name from the SAME session, but here I have different sessions, right?
I even put PDO::ATTR_PERSISTENT => false although that is a default value.
The only difference between the command-line and browser output is the pids (different pids for the two scripts run from the command line, the same pid for the two run from the browser).
Any thoughts on what is happening? For now it looks to me like a major issue, since locking has quietly stopped working. :(
Thanks.
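One diagnostic worth adding to the script above (my suggestion, not part of the original question): IS_USED_LOCK('foobar') returns the connection id of the current holder (or NULL), so each request can report exactly who owns the lock. A tiny formatting helper, with hypothetical names:

```php
<?php
// Sketch: report which connection holds the named lock. The helper is
// pure so it can be shown without a live server; feed it the row from:
//   SELECT CONNECTION_ID() AS me, IS_USED_LOCK('foobar') AS holder
function formatLockReport(array $row): string
{
    return sprintf('I am connection %d; lock held by %s',
        $row['me'], $row['holder'] ?? 'nobody');
}

// Usage (placeholder $pdo):
// $row = $pdo->query(
//     "SELECT CONNECTION_ID() AS me, IS_USED_LOCK('foobar') AS holder"
// )->fetch();
// echo formatLockReport($row), "\n";
```

If both browser requests print a holder id different from their own connection id, the lock really is held by another session and the int(1) results would point at something else, such as query caching or a proxy in front of MySQL.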

PHP LOAD DATA INFILE Error Reading File

Solved
So, it was a kind of permission error. Earlier in this script I used flock() on the file to make sure it wasn't being written to by another script. Removing flock() allows the query to run. Now I just need a way to avoid loading a file that is still being written to...
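One way to sketch that "skip files still being written" check (my code, hypothetical helper name; note that flock() is advisory on Linux but effectively mandatory on Windows, which is likely why a held lock broke LOAD DATA here): try a non-blocking exclusive lock, and release it before handing the path to MySQL, since mysqld opens the file itself.

```php
<?php
// Sketch: a file is considered safe to load only if we can briefly take
// an exclusive lock on it, i.e. no writer currently holds flock() on it.
// The lock is released again BEFORE LOAD DATA INFILE runs.
function isFileQuiescent(string $path): bool
{
    $fp = @fopen($path, 'r');
    if ($fp === false) {
        return false;                        // missing/unreadable file
    }
    $free = flock($fp, LOCK_EX | LOCK_NB);   // fails if a writer holds it
    if ($free) {
        flock($fp, LOCK_UN);                 // release immediately
    }
    fclose($fp);
    return $free;
}

// Usage (placeholder names from the question):
// if (isFileQuiescent($sqlFile)) {
//     $pdo->exec("LOAD DATA INFILE '$sqlFile' REPLACE INTO TABLE `$table`");
// }
```

There is an unavoidable race between the check and the load; for a daily import where the writer locks the file for the whole write, it is usually good enough.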
I'm having trouble getting LOAD DATA INFILE to work in my php script. Here's the relevant portions of the script:
... //set $host, $user, etc.
$dsn = "mysql:host=$host;dbname=$database";
$pdo = new PDO($dsn, $user, $password, array(PDO::MYSQL_ATTR_LOCAL_INFILE => 1));
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
... //set $sqlFile and $table
$sql = "LOAD DATA INFILE '$sqlFile' REPLACE INTO TABLE `$table`";
echo $sql . "\n";
$rows = $pdo->exec($sql);
Running the script then produces:
LOAD DATA INFILE 'D:/pathToTemp/emdr/emdrorders/emdrorders_160314-1947UTC.txt'
REPLACE INTO TABLE `emdrorders`
[PDOException]
SQLSTATE[HY000]: General error: 2 Error reading file 'D:\pathToTemp\emdr\emdrorders\emdrorders_160314-1947UTC.txt'
(Errcode: 13 - Permission denied)
However, if I run the same query through the mysql cli it works.
mysql> LOAD DATA INFILE 'D:/pathToTemp/emdr/emdrorders/emdrorders_160314-1947UTC.txt'
REPLACE INTO TABLE `emdrorders`;
Query OK, 5487 rows affected (0.10 sec)
Records: 5355 Deleted: 132 Skipped: 0 Warnings: 0
I've also tried using LOCAL, but instead of throwing an exception $pdo->exec() returns 0 and the data is not loaded into the database.
MySQL is 5.6.12 and PHP is 5.4.16 on a Windows machine; I'm planning to move this to a Linux server. (I'm also doing this within the Laravel framework, but I don't think that causes this problem.)
Since the query works in the mysql cli but not through php, I can only assume the problem is in the php settings or the pdo. What do I need to change?
What is it that you plan on doing with $rows?
You pass queries through PDO::query or PDO::prepare.
I assume from the use of PDOStatement::execute that you wish to use PDO::prepare.
$query = $pdo->prepare($sql);
$query->execute();
To obtain responses from the query:
$query->fetch() or $query->fetchAll()
See link for documentation on preparing a query in PDO:
http://www.php.net/manual/en/pdo.prepare.php

PHP Open Multiple Connections

I would like to run multiple instances of the same script in different browser tabs, and I would like them to have different MySQL connections – each its own unique connection.
I know that mysql_connect has a fourth parameter, $new_link, which should open a new link, but even that usually does not open a new connection. Sometimes it does.
I have a XAMPP install on a Windows machine.
The question is: how can I absolutely force PHP/MySQL to open a new connection for each instance of a script? The script runs for about 2 minutes.
http://localhost/myscript.php
Here are the excerpts of the MySQL code. First load a work assignment from DB and mark it as in progress:
public function loadRange() {
try{
$this->db()->query('START TRANSACTION');
$this->row = $this->db()->getObject("
SELECT * FROM {$this->tableRanges}
WHERE
status = " . self::STATUS_READY_FOR_WORK . "
AND domain_id = {$this->domainId}
ORDER BY sort ASC
LIMIT 1");
if(!$this->row) throw new Exception('Could not load range');
$this->db()->update($this->tableRanges, $this->row->id, array(
'thread_id' => $this->id,
'status' => self::STATUS_WORKING,
'run_name' => $this->runName,
'time_started' => time(),
));
$this->db()->query('COMMIT');
} catch(Exception $e) {
$this->db()->query('ROLLBACK');
throw new Exception($e->getMessage());
}
}
Then the script may or may not INSERT rows in another table based on what it finds.
In the end, when task is finished, the assignment row is UPDATEd again:
$this->db()->update($this->tableRanges, $this->row->id, array(
'status' => self::STATUS_EXECUTED,
'time_finished' => time(),
'count' => $count,
));
In particular, the $this->tableRanges table appears to be locked. Any idea why that is the case? It is an InnoDB table.
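For what it's worth, a common way to let several instances claim different rows without tripping over each other is to lock the candidate row inside the transaction with SELECT ... FOR UPDATE. A sketch follows (my code, not the poster's db() wrapper; table and column names are simplified, and the $lockClause parameter exists only so the sketch can also run on engines without FOR UPDATE):

```php
<?php
// Sketch: atomically claim one "ready" work row. On MySQL/InnoDB the
// default 'FOR UPDATE' clause makes the SELECT take a row lock, so two
// concurrent instances cannot claim the same range.
function claimRange(PDO $pdo, int $domainId, int $threadId,
                    string $lockClause = 'FOR UPDATE'): ?array
{
    $pdo->beginTransaction();
    try {
        $stmt = $pdo->prepare(
            "SELECT * FROM ranges
             WHERE status = 0 AND domain_id = :domain
             ORDER BY sort ASC LIMIT 1 $lockClause");
        $stmt->execute([':domain' => $domainId]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            $pdo->rollBack();                // nothing ready to claim
            return null;
        }
        $upd = $pdo->prepare(
            'UPDATE ranges SET status = 1, thread_id = :tid WHERE id = :id');
        $upd->execute([':tid' => $threadId, ':id' => $row['id']]);
        $pdo->commit();
        return $row;
    } catch (Exception $e) {
        $pdo->rollBack();
        throw $e;
    }
}
```

Without the row lock, two instances can SELECT the same range before either UPDATEs it, which matches the contention described in the question.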
I would like to run multiple scripts instances of the same script in different browser tabs. And I would like them to have different MySQL connections. Each its unique connection.
This is actually the case, without any additional effort
The question is: How can I absolutely force PHP/MySQL to open a new connections for each instance of a script.
Answer: do nothing :)
Every time you hit http://localhost/myscript.php a new instance is run. Everything about that instance is unique: the web server spawns a new PHP thread in which all resources, connections, and variables are unique.
Only state-management devices such as sessions are shared, and even that only when you use different tabs in the same browser. If you hit the same URL from different browsers, the state-management resources are separate too.
To answer your question, as others have mentioned: your connection is different for each instance IF you are using mysql_connect. You could instead create a persistent connection that is not closed when the script exits and is reused for new connection requests by using mysql_pconnect. But your code appears to use the former, and in that case you are fine.
You can try setting the transaction isolation level to prevent the table from stalling reads during the SELECT:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
More information can be found here.
Again, I guess it will take a bit of experimenting to find which option works best.

MeekroDB error "Commands out of sync; you can't run this command now"

I have a PHP script with the following lines:
require_once 'meekrodb.2.1.class.php';
DB::$user = 'usr';
DB::$password = 'pwd';
DB::$dbName = 'db';
DB::$encoding = 'utf8';
$results = DB::queryFirstField("
CALL getSequence('time_id', %i); // ***** Stored procedure call *****
", TENANT_ID);
DB::insert('timeentry', array(
'tenant_id' => TENANT_ID,
'time_id' => $results,
'timestart' => DB::sqleval("now()"),
'assig_id' => $assig_id
));
I am getting the following error:
QUERY: INSERT INTO timeentry (tenant_id, time_id, timestart, assig_id) VALUES (1, '42', now(), '1')
ERROR: Commands out of sync; you can't run this command now
If I replace the call to the stored procedure with a SELECT statement, everything works fine.
$results = DB::queryFirstField("
SELECT 45; // ***** SELECT statement *****
");
DB::insert('timeentry', array(
'tenant_id' => TENANT_ID,
'time_id' => $results,
'timestart' => DB::sqleval("now()"),
'assig_id' => $assig_id
));
I have not analyzed the internals of the MeekroDB Library (http://www.meekro.com).
I tried wrapping each statement in a transaction but I get the same error when COMMIT is executed right after the call to the stored procedure.
Any help is greatly appreciated.
Calls to stored procedures in MySQL produce multiple result sets. That is, a stored procedure might contain more than one SELECT, so the client has to iterate through several result sets to finish processing the CALL.
See examples in the answer to this question: Retrieving Multiple Result sets with stored procedure in php/mysqli
Until all results from the CALL have been consumed, the call isn't considered closed, and MySQL does not permit you to run another query before the current one is completely finished.
I don't know the MeekroDB library well, but glancing at the online docs I don't see any way to iterate through multiple result sets. So there may not be any way to call stored procedures safely. I suggest you contact the author for specific support: http://www.meekro.com/help.php.
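The iteration described above can be sketched with plain mysqli rather than MeekroDB (my code; `firstFieldOfCall` is a name I made up): read every result set the CALL produces, after which the connection is back in sync for the next query.

```php
<?php
// Sketch: consume all result sets of a CALL and return the first field
// of the first row. Accepts any mysqli-like object exposing
// multi_query/store_result/more_results/next_result.
function firstFieldOfCall($link, string $sql)
{
    if (!$link->multi_query($sql)) {
        return null;
    }
    $value = null;
    do {
        if ($result = $link->store_result()) {
            $row = $result->fetch_row();
            if ($value === null && $row !== null) {
                $value = $row[0];        // first field of first result set
            }
            $result->free();
        }
    } while ($link->more_results() && $link->next_result());
    return $value;  // connection now in sync: no "Commands out of sync"
}

// Usage (placeholder credentials from the question):
// $mysqli = new mysqli('localhost', 'usr', 'pwd', 'db');
// $seq = firstFieldOfCall($mysqli, "CALL getSequence('time_id', 1)");
```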
