I have a site which uses PHP's PDO library to access a MySQL database. The database is well optimised, with all the suitable indexes to keep the queries fast. I am, however, seeing some strange behaviour around the first query run for a particular web service.
This web service runs a query against the database and returns a JSON response, which is then fed to a jQuery auto-complete.
On its first run in a client the query takes roughly 2 seconds, after which it drops to hundredths of a second, presumably due to InnoDB caching.
If I type an entry into the auto-complete box during a new session, the first query response can take upwards of 5 seconds, after which responses become blisteringly fast. If I then leave the site for a relatively long period, say an hour (not an exact measure, but for the sake of argument), and come back to it, the same slow-first-query behaviour is observed again.
I am using a persistent connection out of necessity, owing to the finite number of connections available on the server.
I was wondering if any of you had any ideas which might allow me to mitigate the initial delay a bit more.
$DBH = null;
$host = "127.0.0.1";
$db_name = "my_db";
$user_name = "me";
$pass_word = "something";
try {
    # MySQL with PDO_MYSQL
    $DBH = new PDO(
        "mysql:host=$host;dbname=$db_name;charset=utf8",
        $user_name,
        $pass_word,
        array(
            PDO::ATTR_PERSISTENT => true,
            PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8'"
        )
    );
    $DBH->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch(PDOException $e) {
    error_log($e->getMessage(), 0);
}
Updated with answer
Right guys, after much testing, and after thoroughly checking that it was not a DNS issue, I went down the InnoDB buffer pool route. I wrote a stored procedure which uses a query to generate a query for each table in the database and runs it, thus causing those tables to be cached in the innodb_buffer_pool. The query that generates the per-table SQL is from the following SO question. I made only one edit to that query, putting in the DATABASE() function so that it works on whichever database it is called from.
I also set it up so that it can be called via PHP without waiting for the script to complete, so your normal application continues on.
I hope this helps someone out. As an aside, to be even more efficient you could wrap the exec in a small function so that it only runs at certain times; see the sketch after the scripts below.
MySQL stored procedure SQL
DELIMITER $$
USE `your_db_name`$$
DROP PROCEDURE IF EXISTS `innodb_buffer_pool_warm_up`$$
CREATE DEFINER=`user_name`@`localhost` PROCEDURE `innodb_buffer_pool_warm_up`()
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE sql_query VARCHAR(1000) DEFAULT NULL;
    -- Build one SELECT per table, on a column that is not part of any key,
    -- so each statement forces a scan that pulls pages into the buffer pool
    DECLARE sql_cursor CURSOR FOR
        SELECT
            CONCAT('SELECT `', MIN(c.COLUMN_NAME), '` FROM `', c.TABLE_NAME, '` WHERE `', MIN(c.COLUMN_NAME), '` IS NOT NULL')
        FROM
            information_schema.COLUMNS AS c
        LEFT JOIN (
            SELECT DISTINCT
                TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
            FROM
                information_schema.KEY_COLUMN_USAGE
        ) AS k
        USING
            (TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME)
        WHERE
            c.TABLE_SCHEMA = DATABASE()
            AND k.COLUMN_NAME IS NULL
        GROUP BY
            c.TABLE_NAME;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    OPEN sql_cursor;
    read_loop: LOOP
        FETCH sql_cursor INTO sql_query;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @stmt_sql = sql_query;
        PREPARE stmt FROM @stmt_sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE sql_cursor;
END$$
DELIMITER ;
PHP to call the stored procedure
innodb_warm_up_proc_call.php
<?php
$DBH = null;
$host = "localhost";
$db_name = "your_db_name";
$user_name = "user_name";
$pass_word = "password";
try {
    # MySQL with PDO_MYSQL
    $DBH = new PDO(
        "mysql:host=$host;dbname=$db_name;charset=utf8",
        $user_name,
        $pass_word,
        array(
            PDO::ATTR_PERSISTENT => true,
            PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8'"
        )
    );
    $DBH->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $sql = "CALL innodb_buffer_pool_warm_up()";
    $STH = $DBH->prepare($sql);
    $STH->execute();
} catch (PDOException $e) {
    error_log($e->getMessage() . ' in ' . $e->getFile() . ' on line ' . $e->getLine(), 0);
}
?>
PHP to run the above script silently and without waiting for it to complete
innodb_warm_up.php
<?php
$file_to_execute = dirname(__FILE__) . "/innodb_warm_up_proc_call.php";
//Run the stored procedure but don't wait around for a chat
exec("php -f {$file_to_execute} >/dev/null 2>&1 &");
?>
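As a rough illustration of the throttling idea mentioned above, the exec call could be gated on the age of a marker file. This is only a sketch: the marker-file location and the 30-minute interval are arbitrary choices of mine, not part of the original solution.
<?php
// Hedged sketch: fire the warm-up at most once per interval.
function maybe_warm_up_innodb($min_interval_seconds = 1800)
{
    // Marker file records when the warm-up was last triggered (illustrative path).
    $marker = sys_get_temp_dir() . '/innodb_warm_up.last_run';
    if (is_file($marker) && (time() - filemtime($marker)) < $min_interval_seconds) {
        return; // ran recently enough, do nothing
    }
    touch($marker);
    $file_to_execute = dirname(__FILE__) . "/innodb_warm_up_proc_call.php";
    // Same fire-and-forget invocation as above.
    exec("php -f {$file_to_execute} >/dev/null 2>&1 &");
}
maybe_warm_up_innodb();
?>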
When addressing a particular web service, change its domain name to an IP address.
Most likely that will eliminate such a delay (caused by a DNS lookup).
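For illustration (the host names here are placeholders), the change amounts to swapping the host part of the DSN:
// Before: host name, resolved via a DNS lookup when (re)connecting
$DBH = new PDO("mysql:host=db.example.com;dbname=my_db", $user_name, $pass_word);
// After: IP address, no DNS lookup involved
$DBH = new PDO("mysql:host=192.0.2.10;dbname=my_db", $user_name, $pass_word);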
Thanks for all the help and great suggestions on this one. I thoroughly checked the DNS and other solutions, but in the end it turned out to be the InnoDB buffer pool. I have coded up a solution for myself and have added it in its entirety to my question above, so hopefully it will be of use.
Thanks again for the help.
Related
In order to test various settings in my PostgreSQL hot standby replication setup, I need to reproduce a situation where the following error occurs:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
Therefore, I try to run 2 processes: one that forever updates a boolean field to its opposite, and one that reads the value from the replica.
The update script is this one (loopUpdate.php):
$engine = 'pgsql';
$host = 'mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dsn = $engine . ':dbname=' . $database . ';host=' . $host;
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously updating a field on mytable in order to cause new row versions." . PHP_EOL;
while (true) {
    $pdo->exec("UPDATE mytable SET boolval = NOT boolval WHERE id = 52");
}
And the read script is the following (./loopRead.php):
$engine = 'pgsql';
$host = 'mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com';
$database = 'dummydb';
$user = 'dummyusr';
$pass = 'dummypasswd';
$dsn = $engine . ':dbname=' . $database . ';host=' . $host;
$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
]);
echo "Continuously reading a field from mytable on the replica." . PHP_EOL;
while (true) {
    // PDO::exec() only returns an affected-row count; use query() to fetch rows.
    $value = $pdo->query("SELECT id, boolval FROM mytable WHERE id = 52")->fetch(PDO::FETCH_ASSOC);
    var_dump($value);
    echo PHP_EOL;
}
And I execute them in parallel:
# From one shell session
$ php ./loopUpdate.php
# From another one shell session
$ php ./loopRead.php
The mydb_replica.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com is a hot standby read replica of mydb.c3rrdbjxxkkk.eu-central-1.rds.amazonaws.com.
But I fail to get loopRead.php to fail with the error:
SQLSTATE[40001]: Serialization failure: 7 ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
As far as I know, the error I am trying to reproduce occurs because a PostgreSQL VACUUM is performed while an active read transaction on the read replica still needs the now-stale row versions. So how can I cause my SELECT statement to read stale versions of my row?
On the standby, set max_standby_streaming_delay to 0 and hot_standby_feedback to off.
Then start a transaction on the standby:
SELECT *, pg_sleep(10) FROM atable;
Then DELETE rows from atable and VACUUM (VERBOSE) it on the primary server. Make sure some rows are removed.
Then you should be able to observe a replication conflict.
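A hedged sketch of that sequence (the DELETE predicate is illustrative, and on managed services such as RDS the two settings live in the standby's parameter group rather than postgresql.conf):
-- On the standby (postgresql.conf or parameter group), then reload:
--   max_standby_streaming_delay = 0
--   hot_standby_feedback = off

-- Session 1, on the standby: hold a snapshot open for a while
SELECT *, pg_sleep(10) FROM atable;

-- Session 2, on the primary: remove row versions and vacuum them away
DELETE FROM atable WHERE id < 1000;
VACUUM (VERBOSE) atable;

-- The standby query should now be cancelled with SQLSTATE 40001
-- ("canceling statement due to conflict with recovery").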
In order to cause your error you need to place a HUGE delay into your SELECT query itself via the pg_sleep PostgreSQL function, changing your query into:
SELECT id, boolval, pg_sleep(1000000000) FROM mytable WHERE id=52
That way a single transaction holds a "heavy" query, which maximizes the chances of causing a PostgreSQL serialization error.
Though the detail will differ:
DETAIL: User was holding shared buffer pin for too long.
In that case, try reducing the pg_sleep value from 1000000000 to 10.
I have my db connection parameters set in a single file which I include on all pages that need it. The connection file, called connect.php, looks like this:
$db_host = '111.111.111.111';
$db_database = 'test';
$db_user = 'test';
$db_pass = 'test';
$db_port = '3306';
//db connection
try {
    $db = new PDO("mysql:host=$db_host;port=$db_port;dbname=$db_database;charset=utf8", $db_user, $db_pass,
        array(
            PDO::ATTR_EMULATE_PREPARES => false,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // PDO::ERRMODE_SILENT is the default
            PDO::ATTR_PERSISTENT => false
        )
    );
}
catch(PDOException $e) {
    error_log("Failed to connect to database (/connect.php): " . $e->getMessage());
}
When I need to do things with the db, I include this file and end up with something like the following, called example.php:
require $_SERVER['DOCUMENT_ROOT'].'/assets/functions/connect.php';

$stmt = $db->prepare("
    SELECT
        accounts.account_id
    FROM accounts
    WHERE accounts.account_key = :account_key
");

//bindings
$binding = array(
    'account_key' => $_POST['account_key']
);
$stmt->execute($binding);

//result (can only be one or none)
$result = $stmt->fetch(PDO::FETCH_ASSOC);

//if result
if($result)
{
    // result found so do something
}
Occasionally the database connection will fail (it's updating, I shut it down, it's being hammered, whatever)... when that happens, the PDOException I have in the try/catch works as it should and adds an entry to my error log saying so.
What I would also like to do is add a 'check' in my example.php so it doesn't attempt any database work if there is no connection (i.e. the included connect script failed to get a connection). How would I go about this, and what is the preferred method of doing so?
I'm not sure of the correct way to 'test' $db before my $stmt code runs. Would there be a way to retry the connection if it was not set?
I realize I can leave it as is and there would be no problems, other than the database query failing and the code not executing, but I want to have more options, like adding another entry to the error log when this happens.
To stop further processing, just add an exit() at the end of each catch block, unless you want to apply a finally block.
try {
    //...
} catch(PDOException $e) {
    // Display a message and log the exception.
    exit();
}
Also, exception throwing and true/false/null validations should be applied throughout the whole chain of connect/prepare/fetch/close operations involving data access. You may want to see a post of mine:
Applying PDO prepared statements and exception handling
I find your idea of including the db connection file good, too. But think about using require_once, so that a db connection is created only once, not on every include.
Note: In my example I implemented a solution which - somehow - emulates the principle that all exceptions/errors should be handled only at the entry point of an application. So it is more directed toward the MVC concept, where all user requests are sent through a single file: index.php. Almost all try-catch handling (logging and display) should take place in that file. Inside other pages, exceptions would then be thrown and rethrown to the higher levels until they reach the entry point, e.g. index.php.
As for reconnecting to the db and how it should be correlated with try-catch, I don't know yet. But in any case it should involve an iteration of at most 3 attempts.
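A rough sketch of both ideas combined - checking $db before use and retrying the connection a bounded number of times. The function name and the retry count of 3 are illustrative, not a fixed recipe:
<?php
// Hedged sketch: attempt the connection up to three times, return null on failure.
function connect_with_retry($dsn, $user, $pass, $max_attempts = 3)
{
    for ($attempt = 1; $attempt <= $max_attempts; $attempt++) {
        try {
            return new PDO($dsn, $user, $pass,
                array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
        } catch (PDOException $e) {
            error_log("Connect attempt {$attempt} failed: " . $e->getMessage());
        }
    }
    return null;
}

$db = connect_with_retry("mysql:host=$db_host;port=$db_port;dbname=$db_database;charset=utf8",
                         $db_user, $db_pass);

// In example.php: guard the query code instead of assuming $db exists.
if (!($db instanceof PDO)) {
    error_log('No database connection available (example.php)');
    exit();
}
?>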
I need to run a daily PHP script that downloads a data file and executes a bunch of SQL Queries in sequence to import and optimize the data.
I'm having a problem executing one of my queries within PHP, which appears to freeze the mysqld process on my server. Oddly, running the same query does not cause a similar problem when run from the Sequel Pro database client program (Mac).
The query is running an update on a large table with over a million rows. Here is the stored procedure I'm using:
DELIMITER ;;
CREATE PROCEDURE spSetTripIdInStopTimes()
BEGIN
    -- SET META_TRIP_ID IN IMPORT_STOP_TIMES
    UPDATE import_stop_times
    INNER JOIN ref_trips ON
    (
        import_stop_times.trip_id = ref_trips.trip_id
        AND import_stop_times.meta_agency_id = ref_trips.meta_agency_id
    )
    SET import_stop_times.meta_trip_id = ref_trips.meta_trip_id;
END;;
DELIMITER ;
When the procedure is called with
CALL spSetTripIdInStopTimes;
inside Sequel Pro, the result is 1241483 rows affected and the time taken is around 90 seconds.
With PHP PDO I run the same command with
$result = $database->runExecQuery("CALL spSetTripIdInStopTimes");
However, it gets stuck on this query for over 24 hrs and still has not completed. When I cancel the PHP script I can see that the mysqld process is still taking 99.5% CPU. At that point I restart MySQL with 'sudo service mysql restart'.
I also tried using PHP's mysqli, but the freezing also occurs with this method.
$mysqli->query("CALL spSetTripIdInStopTimes")
Would anyone be able to reason why this is happening or suggest another method?
Thank you in advance.
Note: I also tried using the older mysql extension in PHP, but the version I'm using (5.5.9-1ubuntu4.14) tells me the extension is deprecated and stops the script.
[UPDATE]
I've also tried running the stored procedure directly on the command line:
mysql --user=[username] --password=[password] --execute="call spSetTripIdInStopTimes()" [tablename]
which worked.
So I tried running the same command with PHP's exec() function:
exec("mysql --user=[username] --password=[password] --execute=\"call spSetTripIdInStopTimes()\" [table name]");
Unfortunately, it still hangs. I'm starting to wonder if this is due to a limitation or overhead of PHP.
[UPDATE 2]
Here is the array of PHP PDO connection options I use:
array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    // Allow file reading; the following settings are needed to import data from txt files
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
    PDO::ATTR_EMULATE_PREPARES => true
)
[UPDATE 3]
I'm using a custom database object, so I'll show the function for $database->runExecQuery() for clarification:
function runExecQuery($queryString)
{
    $db = $this->getConnection(); // Connect to database
    try {
        return array(
            "success" => true,
            "data" => $db->exec($queryString)
        );
    }
    catch (PDOException $ex)
    {
        return array(
            "success" => false,
            "errMessage" => "ERROR[{$ex->getCode()}]" . ($this->Debug ? "{$ex}" : "")
        );
    }
}
The variable $db is the connection variable that is initialized as follows:
// Create address string
$db_address =
    "mysql:host={$settings['db_hostname']};" .
    "dbname={$settings['db_name']};" .
    "charset={$settings['db_charset']}";
// Create connection to database
$db = new PDO($db_address, $settings['db_user'], $settings['db_pw'], $options);
where $options is the array from [Update 2].
[UPDATE 4]
Mihai's suggestion of changing PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false had some promising results, as the query appeared to finish. However, after more testing I found that the PHP script will still sometimes hang on the query, about 50% of the times it is run. This is true even with the same set of data in the SQL tables.
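For reference, this is roughly what the adjusted options array from Mihai's suggestion looks like. The nextRowset()/closeCursor() lines afterwards are my own addition, since a CALL through PDO_MySQL returns an extra result set that should be drained before the connection runs further queries:
$options = array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false, // changed from true
    PDO::ATTR_EMULATE_PREPARES => true
);
$db = new PDO($db_address, $settings['db_user'], $settings['db_pw'], $options);

$stmt = $db->query("CALL spSetTripIdInStopTimes()");
while ($stmt->nextRowset()) {
    // skip any extra result sets the procedure produces
}
$stmt->closeCursor();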
I have several stored procedures that I am calling via PDO in PHP. I was hoping to be able to handle errors by performing a ROLLBACK, but I still want to be able to use PHP to retrieve and handle the last error in the procedure. I have tried using PDO::errorCode() and PDO::errorInfo(), but that does not seem to be a legitimate solution, I think because I am already handling the errors in my stored procedures.
When I call one of the stored procedures via the command line and then call SHOW ERRORS, I get a nice result set with the error status, code and message, but if I call SHOW ERRORS in PDO after executing the stored procedure, I get no results. I also get no results from SHOW ERRORS on the command line if I call it inside the stored procedure.
I would use GET DIAGNOSTICS, but the MySQL server I am developing for is on a hosted cPanel that I don't have control over updating and it is version 5.5.
Is there some other option I could use or another route I should be taking?
Like I said, I have several stored procedures I want to handle errors for, but I can't even get this to work on a simple stored procedure:
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
    END;

    SELECT *
    FROM bunnies;
END
Update: My PHP code was in an object, so I copied and simplified the code to post per Barmar's request, and when I tried the simplified code I found that SHOW ERRORS does indeed work with PDO when it is prepared and executed after the procedure call is prepared and executed.
My object was a little complicated (I wrote it a while back, before I knew much about PHP OOP), so I simplified it as well and now it works! I think the connection was being closed in between calls; now that the code is simpler, I am having no problems calling SHOW ERRORS in it.
Here's the simplified PHP code I used to test, in case anyone has had issues getting this to work:
$host = '***';
$user = '***';
$pass = '***';
$schema = '***';
$connection = new PDO('mysql:host=' . $host . ';dbname=' . $schema . ';', $user, $pass);
$statement = $connection->prepare('CALL test()');
$statement->execute();
$statement = $connection->prepare('SHOW ERRORS');
$statement->execute();
echo var_dump($statement->fetchAll());
$statement = null;
$connection = null;
Make sure you use the same PDO connection object to execute the original queries and to retrieve the errors. Each connection has its own error state.
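For contrast, a hedged illustration of the pattern that loses the errors: each new PDO object below is a separate MySQL connection, so SHOW ERRORS on the second one returns nothing.
// Two objects mean two connections, each with its own error state.
$conn_a = new PDO('mysql:host=' . $host . ';dbname=' . $schema . ';', $user, $pass);
$conn_b = new PDO('mysql:host=' . $host . ';dbname=' . $schema . ';', $user, $pass);

$conn_a->prepare('CALL test()')->execute(); // errors accumulate on $conn_a...
$stmt = $conn_b->prepare('SHOW ERRORS');    // ...but are queried on $conn_b
$stmt->execute();
var_dump($stmt->fetchAll());                // empty result set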
I am writing an Android app which communicates with a PHP backend. The backend db is SQLite 3. The problem is, I am intermittently getting this error: PHP Warning: SQLite3::prepare(): Unable to prepare statement: 5, database is locked. I am opening a connection to the database in each PHP file and closing it when the script finishes. I think the problem is that one script locks the database file while writing to it, and a second script trying to access it at the same time fails. One way of avoiding this would be to share a connection between all of the PHP scripts; I was wondering if there is any other way of avoiding it?
Edit:
This is the first file:
<?php
$first = SQLite3::escapeString($_GET['first']);
$last = SQLite3::escapeString($_GET['last']);
$user = SQLite3::escapeString($_GET['user']);
$db = new SQLite3("database.db");
$insert = $db->prepare('INSERT INTO users VALUES(NULL,:user,:first,:last, 0 ,datetime())');
$insert->bindParam(':user', $user, SQLITE3_TEXT);
$insert->bindParam(':first', $first, SQLITE3_TEXT);
$insert->bindParam(':last', $last, SQLITE3_TEXT);
$insert->execute();
?>
Here is the second file:
<?php
$user = SQLite3::escapeString($_GET['user']);
$db = new SQLite3("database.db");
$checkquery = $db->prepare('SELECT allowed FROM users WHERE username=:user');
$checkquery->bindParam(':user', $user, SQLITE3_TEXT);
$results = $checkquery->execute();
$row = $results->fetchArray(SQLITE3_ASSOC);
print(json_encode($row['allowed']));
?>
First, when you are done with a resource you should always close it. In theory it will be closed when it is garbage-collected, but you can't depend on PHP doing that right away. I've seen a few databases (and other kinds of libraries for that matter) get locked up because I didn't explicitly release resources.
$db->close();
unset($db);
Second, SQLite3 gives you a busy timeout. I'm not sure what the default is, but if you're willing to wait a few seconds for the lock to clear when you execute queries, you can say so. The timeout is in milliseconds.
$db->busyTimeout(5000);
I was getting "database locked" all the time until I found out that some features of sqlite3 must be set with special SQL instructions (i.e. using the PRAGMA keyword). For instance, what apparently solved my "database locked" problem was setting journal_mode to 'wal' (it defaults to 'delete', as stated here: https://www.sqlite.org/wal.html, see Activating And Configuring WAL Mode).
So basically all I had to do was create a connection to the database and set journal_mode with an SQL statement. Example:
<?php
$db = new SQLite3('/my/sqlite/file.sqlite3');
$db->busyTimeout(5000);
// WAL mode has better control over concurrency.
// Source: https://www.sqlite.org/wal.html
$db->exec('PRAGMA journal_mode = wal;');
?>
Hope that helps.