I wrote a utility for updating the DB from a list of numbered .sql update files. The utility stores inside the DB the index of the lastAppliedUpdate. When run, it reads lastAppliedUpdate and applies to the DB, in order, all the updates following lastAppliedUpdate, and then updates the value of lastAppliedUpdate in the DB. Basically simple.
The issue: the utility successfully applies the needed updates, but then when trying to store the value of lastAppliedUpdate, an error is encountered:
General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.
Any ideas what it means, and how it can be resolved?
Below is the essence of the code. It's PHP code within the Yii framework.
foreach ($numericlyIndexedUpdateFiles as $index => $filename)
{
    $command = new CDbCommand(Yii::app()->db, file_get_contents($filename));
    $command->execute();
}
$metaData = MDbMetaData::model()->find();
$metaData->lastAppliedUpdate = $index;
if (!$metaData->save()) throw new CException("Failed to save metadata lastAppliedUpdate.");
// on this line, instead of throwing the exception that my code throws, if any,
// I receive the described above error
MySQL version is 5.1.50, PHP version is 5.3.
Edit: the above code runs inside a transaction, and I want it to stay that way.
Check out PDO Unbuffered queries. You can also look at the PDO MySQL driver reference to see how to set PDO::MYSQL_ATTR_USE_BUFFERED_QUERY:
http://php.net/manual/en/ref.pdo-mysql.php
The general answer is that you have to retrieve all the results of the previous query before you run another, or find out how to turn on query buffering in your database abstraction layer.
Since I don't know the syntax to give you for these mysterious classes you're using (not a Yii person), the easy fix is to close the connection and reopen it between those two actions.
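For illustration, a plain-PDO sketch of both fixes (connection details, table and column names are placeholders, not the Yii classes above):

// Option 1: enable query buffering on the MySQL driver
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);

// Option 2: drain or release the previous result set before the next query
$stmt = $pdo->query('SELECT id FROM meta');
$rows = $stmt->fetchAll(); // retrieve all the results...
$stmt->closeCursor();      // ...and/or explicitly free the cursor
$pdo->exec("UPDATE meta SET last_applied_update = 42"); // now this won't raise error 2014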
I've just finished refactoring a bunch of MySQL and MySQLi forms to PDO.
Everything seems to be working.
Now on to error handling.
In the MySQL / MySQLi code I had been using if statements to catch errors. Like this:
if (!$database_connection) {
// error handling here
}
Plain and simple.
But I can't get a similar set-up to work with PDO.
Here's the scenario:
I have a connection to a database that looks something like this:
$data_source_name = "mysql:host=$db_host;dbname=$db_name";
$database_connection = new PDO($data_source_name, $db_username, $db_password);
The execution looks something like this:
$stmt = $database_connection->prepare($sql);
$stmt->execute([$name, $email]);
I'm trying to set up a condition like the one described above:
if ( database connection fails ) {
// error handling
}
But this code doesn't work.
if ( !$database_connection ) {
// error handling
} else {
$stmt = $database_connection->prepare($sql);
$stmt->execute([$name, $email]);
}
This if construct worked in MySQL (now deprecated) and works in MySQLi, but not PDO.
I was originally trying to make this work using try-catch, as recommended in many Stack posts. But after more research it appeared that this construct was inappropriate for PDO exceptions.
Any guidance and suggestions appreciated. Thanks.
It's a very common fallacy that one needs dedicated error handling code for PDO or mysqli (or whatever other module, for that matter). Least of all should it be even more specific, such as a "mysqli connection" handler, as it seems to be in your old mysqli code.
If you think of it, you don't really care whether it was exactly a database error that prevented the code from being executed correctly. There can be any other problem as well.
Admittedly, one can hardly expect any other problem from such simple code, but still, the code may grow, become more modular, perform more tasks, and therefore error out in any other part as well. For example, writing database credentials in every file is a bit of a waste, so it's natural to put them in a file and then just include it in every other script that requires database interaction. This file may get corrupted, which will have the same effect as a database error, and will need to be fixed as well.
Or, if you're handling only the connection error, the problem can happen during query execution as well (especially in your case, since the way the query is executed, it will error out even if a customer simply enters fish'n'chips, for example).
What you really care about is whether the data has been stored correctly (and probably whether the emails were sent as well), no matter what the possible failure could be. This is the exact reason why, in the article, I wrote a warning against wrapping some specific part of the code in a try-catch in order to report that particular error: the error reporting handler must be common to the entire code.
Admittedly, the simplest exception handling method is to wrap the entire code in a try-catch block that checks for the most generic exception type, namely Throwable. Not very reliable, but simplest.
The key here is to wrap the entire code, not just some random part of it. And one shouldn't forget to set the exception mode for PDO, in order to let query execution errors be caught in this block as well.
<?php
try {
    require 'pdo.php';
    // ...
    $sql = "INSERT INTO other_choices (`order`, foods) VALUES (?, ?)";
    // ...
    $stmt = $db_connection->prepare($sql);
    $stmt->execute([$order, $foods]);
    // send emails, etc.
} catch (Throwable $e) {
    // do your handling here
}
Note that I substituted the actual variables in the query with question marks, which is the correct way of using prepared statements; otherwise they become useless and render your whole transition from mysqli fruitless (especially given that mysqli supports prepared statements as well).
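For reference, a minimal pdo.php along the lines this example assumes might look like this (DSN and credentials are placeholders; the key point is the exception mode):

<?php
// pdo.php - hypothetical connection include assumed by the example above
$dsn = 'mysql:host=localhost;dbname=test;charset=utf8mb4';
$db_connection = new PDO($dsn, 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // make query errors throw
]);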
Unfortunately, PHP has two kinds of errors - exceptions and errors proper. And try-catch can catch only the former. In order to handle all kinds of errors, an error handler can be used. You can see a very basic example of one in my article on PHP error reporting.
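For illustration, a handler of that general kind (a sketch, not the article's exact code) converts PHP errors into exceptions so the same catch block sees them:

// Sketch: promote PHP warnings/notices to exceptions so one
// try-catch (Throwable) block can handle them too
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
});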
The last note: sending an email every time an error occurs on the site is not the wisest move. Although in your case it could be justified, given that PHP is only involved when a user submits a form, on a regular site, where PHP handles every page, it can lead to thousands of emails. Even in your case, spammers may target your forms and send thousands of requests as well (which itself may cause some overflow error and therefore thousands of emails in the inbox). Instead of sending emails manually, consider using dedicated error monitoring software, such as Sentry. It will send only new errors, along with aggregated error info.
new PDO raises an exception if the connection fails. Use an exception handler:
try {
$database_connection = new PDO($data_source_name, $db_username, $db_password);
} catch (PDOException $e) {
// error handling
}
Suppose I have code like this:
mysqli_multi_query('<first query>');
include_once 'secondQuery.php';
This is an enormous simplification, and hopefully I haven't simplified the error out, but secondQuery.php relies on <first query> having completed in order to execute properly. When I run the two manually, in the correct order, everything works perfectly. But when I run this, the error I get is consistent with them being executed either in the wrong order or simultaneously.
How would I write the middle line of:
mysqli_multi_query('<first query>');
wait for mysqli_multi_query to conclude;
include_once 'secondQuery.php';
in correct PHP syntax?
Every time you use mysqli_multi_query() you need to execute a blocking loop after it, because this function sends SQL queries to be executed by MySQL asynchronously. An example of a blocking loop which waits for MySQL to process all the queries is this:
$mysqli->multi_query(/* ... */);
do {
$result = $mysqli->use_result();
if ($result) {
// process the results here
$result->free();
}
} while ($mysqli->next_result()); // Next result will block and wait for next query to finish
$mysqli->store_result(); // Needed to fetch the error as exception
It is always a terrible idea to use mysqli_multi_query() in your PHP code. 99.99% of the time there are better ways to achieve what you want. This function has so many downsides that using it should be avoided at all costs.
What you need are database transactions. If your queries depend on each other, then you need to turn off autocommit and commit only when all of them have executed successfully. You can't achieve this with mysqli_multi_query().
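A minimal sketch of that approach with mysqli (table and column names are made up; error reporting is switched to exceptions so a failure triggers the rollback):

// Run dependent statements inside one transaction instead of mysqli_multi_query()
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw exceptions
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$mysqli->begin_transaction();
try {
    $mysqli->query("INSERT INTO orders (customer_id) VALUES (1)");
    $orderId = $mysqli->insert_id;

    $stmt = $mysqli->prepare("INSERT INTO order_items (order_id, sku) VALUES (?, ?)");
    $sku = 'ABC-123';
    $stmt->bind_param('is', $orderId, $sku);
    $stmt->execute();

    $mysqli->commit(); // both inserts become visible together
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback(); // neither insert is kept on failure
    throw $e;
}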
I have a large and complicated system consisting of php and javascript code, which make many mysql queries. Is there a way to 'backtrace' each mysql query to the exact line of code, which makes the query?
In MySQL it is possible to log all queries (by adding a log statement to the MySQL config), but that does not show which PHP or JavaScript code/module/line made the query. Is it possible to find the offending lines for each MySQL query?
No, there is no way of MySQL knowing what line of code, class, function or file you're making a call from. It just receives a socket connection from the application running the code, and accepts input, processes it and returns a result.
All it knows about is the data it receives, and who is sending it.
You can view active connections and a brief description of what they're doing using
SHOW PROCESSLIST;
You'll get output similar to this:
Id User Host db Command Time State Info
48 test 10.0.2.2:65109 test Sleep 4621
51 test 10.0.2.2:49717 test Sleep 5
52 test 10.0.2.2:49718 test Query 0 SHOW PROCESSLIST
Generally, when people want to log queries, it happens something like this:
Before the query is run, log the query and any parameters
Run the query
Log the success/failure of the query, and any errors
To execute this process for systems with hundreds or thousands of queries, you'll generally find that a wrapper function/class is created which accepts the appropriate parameters, processes the query as listed above, and returns the result. You can pass your wrapper method the PHP magic constants __FILE__ and __LINE__ when you call it, to log where the database call is being initiated from.
Pseudo code only:

// Wrapper method
function query_wrapper($stm, $file, $line)
{
    log_prequery($stm, $file, $line); // Log the query, file and line
    $result = $stm->execute();        // Execute the query
    log_postquery($result);           // Log the result (and any errors)
    return $result;                   // Return the result
}

// In your code where you're making a database query
$db = new Database();
$stm = $db->prepare("SELECT foo FROM bar");
query_wrapper($stm, __FILE__, __LINE__);
Setting the MongoDB global timeout etc. is still ignored by find() queries in my PHP script.
I'd like a findOne({...}) or find({...}) lookup to wait at most 20ms for the DB server before timing out.
How can I make sure that PHP does not treat this setting as a soft limit? It's still ignored, and answers are processed even 5 seconds later.
Is this a PHP mongo driver bug?
Example:
MongoCursor::$timeout = 20;
$nosql_server = new Mongo('mongodb://user:pw@' . implode(",", $arr_replicas), array("replicaSet" => "gmt", "timeout" => 10)) OR troubles("too slow to connect");
$nosql_db = $nosql_server->selectDB('aDB');
$nosql_collection_mcol = $nosql_db->mcol;
$testFind = $nosql_collection_mcol->find(array('crit' => 123));
// If PHP honored MongoCursor::$timeout, I'd expect the previous line to be skipped or to throw a mongo/timeout exception if the DB does not have the find result cursor ready within 20ms.
// However, I arrive at this line seconds later, without an exception, whenever the DB has some lock or delay.
In the PHP documentation for $timeout the following is the explanation for the cursor timeout:
Causes methods that fetch results to throw a MongoCursorTimeoutException if the query takes longer than the specified number of milliseconds.
I believe that the timeout is referring to the operations performed on the cursor (e.g. getNext()).
Do not do this:
MongoCursor::$timeout=20;
That is setting a static property and won't do you any good AFAIK.
What you need to realize is that in your code example, $testFind is the MongoCursor object. Therefore in the code snippet you gave, what you should do is add this after everything else in order to set the timeout of the $testFind MongoCursor:
$testFind->timeout(100);
NOTE: If you want to deal with $testFind as an array you need to do:
$testFindArray = iterator_to_array($testFind);
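Putting the pieces together, a sketch with the legacy Mongo driver (the connection string is a placeholder; 100ms matches the value above):

// The timeout belongs on the cursor, and it fires when results are
// fetched, not when find() is called
$client = new MongoClient('mongodb://localhost');
$collection = $client->selectDB('aDB')->mcol;

$testFind = $collection->find(array('crit' => 123))->timeout(100);
try {
    $testFindArray = iterator_to_array($testFind); // fetching is where a timeout surfaces
} catch (MongoCursorTimeoutException $e) {
    // handle the slow query here
}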
That one threw me for a loop for a while. Hope this helps someone.
Pay attention to the readPreference attribute. The possible values are:
MongoClient::RP_PRIMARY
MongoClient::RP_PRIMARY_PREFERRED
MongoClient::RP_SECONDARY
MongoClient::RP_SECONDARY_PREFERRED
MongoClient::RP_NEAREST
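For example (a sketch; host and replica set names are placeholders), reads can be redirected away from a busy primary like this:

// Hypothetical: let reads go to the nearest replica set member
$client = new MongoClient('mongodb://host1,host2', array('replicaSet' => 'rs0'));
$client->setReadPreference(MongoClient::RP_NEAREST);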
I'm coding a web application in PHP using MongoDB, and I would like to store very large files (1 GB) with GridFS.
I've got two problems: first, I get a timeout, and I can't find out how to set the cursor timeout of the MongoGridFS class.
<?php
// [...]
$con = new Mongo();
$db = $con->selectDB($conf['base']);
$grid = $db->getGridFS();
$file_id = $grid->storeFile(
    $_POST['projectfile'],
    array('metadata' => array(
        'type' => 'release',
        'version' => $query['files'][$time]['version'],
        'mime' => mime_content_type($_POST['projectfile']),
        'filename' => file_name($projectname) . '-' . file_name($query['files'][$time]['version']) . '.'
            . getvalue(pathinfo($_POST['projectfile']), 'extension'),
    )),
    array('safe' => false)
);
// [...]
?>
And secondly, I wonder if it is possible to execute the request in the background? When I store the file with this query, execution blocks and I get a 500 error due to the timeout:
PHP Fatal error: Uncaught exception 'MongoGridFSException' with message 'Could not store file: cursor timed out (timeout: 30000, time left: 0:0, status: 0)'
Maybe it would be better to store your files in some directory and put only the location of the file in the database? That would be rather quick.
GridFS queries, by default, are not "safe"; however, they are not a single query in the driver either. This function must run multiple queries internally (one to store an fs.files row and one to split the file into fs.chunks). This means the timeout is most likely occurring on a find needed to process further batches of information; it might even be related to the PHP timeout rather than a MongoDB one.
The easiest way to run this in the background is to create a "job", either by calling a cronjob or by using a message queue to another service.
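As a sketch of the cron-job variant (paths, database and file names are made up): the request handler only spools the file, and a separate script run by cron does the slow storeFile() call:

// upload.php - respond quickly, defer the slow GridFS write
move_uploaded_file($_FILES['projectfile']['tmp_name'],
                   '/var/spool/uploads/' . uniqid('upload_'));

// store_pending.php - run from cron, outside any web request
$con = new Mongo();
$grid = $con->selectDB('aDB')->getGridFS();
foreach (glob('/var/spool/uploads/*') as $path) {
    $grid->storeFile($path); // the slow part happens here, with no HTTP timeout
    unlink($path);
}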
As for the timeout: unfortunately, the GridFS functions (on your side) don't have direct access to the cursor being used (other than setting safe). You can set a timeout on the connection, but I wouldn't think that is a wise idea.
However, if your cursor is timing out, it means (as I said) that a find query is probably taking too long, in which case you might want to monitor the MongoDB logs to find out what is timing out. This might just be a simple case of needing better indexes or a more performant setup.
As @Anton said, you can also consider housing large files outside of MongoDB; however, there is no requirement to.