I ran into an odd issue earlier today which had me stumped for a while: I was doing precisely one transaction (confirmed umpteen different ways), and yet I was adding two elements to my SQLite3 database.
Finally twigged as to the possible cause, and tracked it down.
Here's a very minimal testcase which I've written to be as simple and straightforward as possible:
<?php
function dostuff($f) {
    @unlink($f);
    $db = new SQLite3($f);
    $db->exec('CREATE TABLE test (a TEXT)');
    $s = $db->prepare('INSERT INTO test (a) VALUES (:a)');
    $s->bindValue(':a', 'OHI');
    return $s->execute();
}

$f = '__test1__.sqlite3';
dostuff($f);
system('sqlite3 '.$f.' .dump');

print "\n---------------------\n\n";

$f = '__test2__.sqlite3';
$r = dostuff($f);
while (($row = @$r->fetchArray(SQLITE3_ASSOC)) !== false) {
    print "LOOP EXECUTED!\n";
}
system('sqlite3 '.$f.' .dump');
?>
First, dostuff() removes the database file if it already exists and creates a fresh database. Then I create a test table, set up a prepared statement, run it, and return the SQLite3Result object from $s->execute().
(I made this code a function because it gets run twice; that keeps the SLOC down.)
Here's what I understand is going on:
I run dostuff() with a first test file.
I run dostuff() with a second test file, but then I grab the SQLite3Result that the function returns, and then iterate over that as though I've just run a SELECT statement and I need to grab the results.
In both cases I dump the resulting database using system() with sqlite3 .dump, as this is a simple, effective way to dump DBs with well-known, predictable output, and I wanted to avoid writing my own potentially buggy dumping code.
This is the output I get on my machine, with PHP 7.0.5 and SQLite3 3.12.0 (PHP module version 0.7):
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE test (a TEXT);
INSERT INTO "test" VALUES('OHI');
COMMIT;
---------------------
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE test (a TEXT);
INSERT INTO "test" VALUES('OHI');
INSERT INTO "test" VALUES('OHI');
COMMIT;
Note that my code contains a loud "LOOP EXECUTED" line inside the fetchArray() loop, but this isn't showing up. The only output that is being presented is the output of the system() calls and the print statement displaying the dashes.
I crashed into this as I was building a DB helper class that would run statements for me and handle retrieving the results (my queries return very very small result sets so gathering all of them simplifies everything with no cost).
I've now written prepared-statement query() and exec() functions that do and do not try to fetch a result set, respectively - now I just have to remember to use the right ones in the right places!
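Here's a sketch of the shape of those helpers (a simplified, free-function version of what's in my class, not the class itself):
function db_query(SQLite3 $db, $sql, array $params = array()) {
    $s = $db->prepare($sql);
    foreach ($params as $name => $value) {
        $s->bindValue($name, $value);
    }
    $r = $s->execute();
    $rows = array();
    // Only safe for SELECTs: fetchArray() on a non-SELECT re-runs the statement.
    while (($row = $r->fetchArray(SQLITE3_ASSOC)) !== false) {
        $rows[] = $row;
    }
    return $rows;
}

function db_exec(SQLite3 $db, $sql, array $params = array()) {
    $s = $db->prepare($sql);
    foreach ($params as $name => $value) {
        $s->bindValue($name, $value);
    }
    // Execute but never fetch, so an INSERT/UPDATE/DELETE can't be run twice.
    return $s->execute() !== false;
}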
TL;DR: INSERT + fetchArray() runs the query twice.
My question: Is this a bug? Or is it known behavior in SQLite3?
Related
I have a conundrum that appears to defy logic, involving Lumen, PHP, PDO, and SQL Server.
I have a controller which contains an action that executes a stored procedure on a SQL Server instance before returning the results as a JSON string. Nothing special is happening, but for certain parameters I do not get any response.
Right, some code. Here's the PHP/PDO prepared statement.
// Prepare our query.
$query = $pdo->prepare("
    EXEC [dbase].[dbo].[myStoredProc]
        @A = :A,
        @B = :B,
        @C = :C,
        @D = :D,
        @E = :E,
        @F = :F,
        @G = :G
");
// Bind the parameters and execute the query.
$query->bindParam(':A', $A);
$query->bindParam(':B', $B);
$query->bindParam(':C', $C);
$query->bindParam(':D', $D);
$query->bindParam(':E', $E);
$query->bindParam(':F', $F);
$query->bindParam(':G', $G);
$query->execute();
// The following line is included for debugging purposes.
$query->debugDumpParams();
// Let's get all of the data.
$data = $query->fetchAll(\PDO::FETCH_ASSOC);
print_r($data);
Perfectly normal as I said. If I use POSTMAN and pass in the parameters as follows:
A 'C_ICPMS_06'
B 'AQC1'
C '726'
D NULL
E '2021-08-30 00:00:00'
F '2021-11-30 23:59:59'
G NULL
I get a list of results as expected, both from POSTMAN and PHP as well as through SSMS (using the output from the debug statement).
Now if I change parameter C from '726' to '728', I do not get any output from POSTMAN and PHP, but I still get output from SSMS.
Thinking that there could be some text within the output that was breaking the fetchAll() call, I amended the stored procedure to return a single record with every column containing 1s. Once more, the parameter value 726 works and 728 does not.
I added a var_dump() call to ensure that the parameter isn't being mangled on its way to the controller; both parameter values report that they are strings 3 characters in length.
If I change the prepared statement as below, I still don't get any results in POSTMAN/PHP.
// Bind the parameters and execute the query.
$query->bindParam(':A', $A);
$query->bindParam(':B', $B);
//$query->bindParam(':C', $C);
$query->bindValue(':C', '728');
$query->bindParam(':D', $D);
$query->bindParam(':E', $E);
$query->bindParam(':F', $F);
$query->bindParam(':G', $G);
$query->execute();
The debug SQL statement is identical to before (using the param as opposed to value).
If I change the stored procedure so that parameter C is hardcoded to 728 regardless of what value is passed in, it works as intended (and then it obviously does not matter what the parameter is set to within POSTMAN). I get values in both POSTMAN and SSMS, so it is safe to assume that the whole problem is caused by the parameter value '728'.
Digging further, I find that parameter values of '72F' or '70W' also return no results via POSTMAN/PHP but do from within SSMS. I've checked and cannot see any error messages being produced.
I added the below lines to the controller to see if I can see an issue, but nothing was seen (not on screen nor within error files).
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
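One further check I could try (sketched below, untested) is to walk every rowset the procedure returns, since with SQL Server an error raised after the first result set may only surface as the rowsets are consumed:
// Untested sketch: advance through all rowsets so that an error raised
// part-way through the procedure surfaces instead of being skipped.
$query->execute();
do {
    if ($query->columnCount() > 0) { // skip row-count-only rowsets
        print_r($query->fetchAll(\PDO::FETCH_ASSOC));
    }
} while ($query->nextRowset());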
In trying to figure this out, I created a temporary database table in order to capture the input parameters and the number of records as found within the SP. This should show where the problem lies, i.e. within PHP or within SQL Server. It now gets stranger.
Calling the SP from within SSMS, it populates with an expected record, including the number of records the initial search returned. However, calling it from POSTMAN using the controller, everything is the same in terms of parameters, but the number of records found is 0!
So I know something very weird is going on, but I cannot put my finger on what, and therefore how to fix it. If anyone has any ideas or has come across a similar problem, please let me know; this is bugging me now. No doubt when I get this working, it'll be a silly error and I'll end up kicking myself.
The issue was that a temporary table that was being created couldn't hold a specific value being assigned to it. The column was designated as a TINYINT but should have been a SMALLINT since the value could go negative.
Why SSMS never reported that as an issue and happily allowed it through, God only knows. But when called externally, it failed to insert any records within the temporary table, returning no records as a result.
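For illustration, here is a minimal, hypothetical version of the failure (illustrative names, not the actual procedure): SQL Server's TINYINT is unsigned and holds 0 to 255, so any negative count overflows it, while SMALLINT holds -32768 to 32767.
// Hypothetical repro of the silent failure (illustrative names only).
$pdo->exec('CREATE TABLE #capture (records TINYINT)');
$pdo->exec('INSERT INTO #capture (records) VALUES (-1)');  // fails: arithmetic overflow
// The fix: a signed type wide enough for negative counts.
$pdo->exec('CREATE TABLE #capture2 (records SMALLINT)');
$pdo->exec('INSERT INTO #capture2 (records) VALUES (-1)'); // succeeds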
There go 1.5 days of my life never to be seen again.
I have a query that returns roughly 6,000 results. Although this query executes in under a second in MySQL, once it is run through Zend Framework 2, it experiences a significant slowdown.
For this reason, I tried to do it a more "raw" way with PDO:
class ThingTable implements ServiceLocatorAwareInterface
{
    // ...
    public function goFast()
    {
        $db_config = $this->getServiceLocator()->get('Config')['db'];
        // Compression is a connection-time option, so it belongs on the constructor.
        $pdo = new PDO($db_config['dsn'], $db_config['username'], $db_config['password'],
            array(PDO::MYSQL_ATTR_COMPRESS => true));
        // Driver options to prepare() must be key => value pairs.
        $statement = $pdo->prepare(
            'SELECT objectNumber, thingID, thingmaker, hidden, title FROM Things',
            array(PDO::ATTR_CURSOR => PDO::CURSOR_FWDONLY)
        );
        $statement->execute();
        return $statement->fetchAll(PDO::FETCH_ASSOC);
    }
}
This doesn't seem to have much of a speedup, though.
I think the problem might be that Zend is still trying to create a new Thing object for each record, even though it is only a partial list of columns. I'd really be okay not populating any objects. I really just need a few columns with those attributes to iterate over.
As suggested by user MonkeyZeus, the following was used for benchmarking:
$start = microtime(true);
$result = $statement->fetchAll(PDO::FETCH_ASSOC);
echo (microtime(true) - $start).' seconds';
And in response:
In a VM, that returns 0.0050520896911621. This is in line with what it
is when I just run the command straight in MySQL. I believe the
overhead is in Zend, but not sure how to quite go about that. Again if
I had to guess, I'd say it is because Zend is adding overhead while
trying to be nice with the results, but I'm not quite sure how to
proceed after that.
[I'm] not so worried about the query. It is a single select statement.
goFast() gets called by the Zend indexAction() --similar to other
actions used across the project--this one is just way slower at
returning the page. One problem I found was that Zend's $this->url()
was slowing things down a bit. So I removed those, but the performance
still isn't great.
How can I speed this up?
When you say that the query runs in under a second in MySQL, what do you mean? Did you try running the query and printing all 6,000 rows, or did you just run it and let the command line print the first/last few of them?
The problem might be that you are fetching them all: going through the cursor, you copy all the data (6,000 rows) from MySQL into PHP and then return it. Are you sure you want to do this?
Maybe you could return a statement/cursor for the query and then iterate through the rows only when you really need them?
Your problem is not the SQL itself, but fetching all the rows into a PHP array at once.
You can test this by logging the time it takes to actually execute the SQL versus the time it takes to fetch the results into a PHP array.
Do not use fetchAll(); return the statement itself, and in the function/code where you would have looped over the array with foreach, use the statement to fetch each row one by one.
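A sketch of that suggestion against the goFast() example from the question (the streaming variant's name is mine):
public function goFastStreaming()
{
    $db_config = $this->getServiceLocator()->get('Config')['db'];
    $pdo = new PDO($db_config['dsn'], $db_config['username'], $db_config['password']);
    $statement = $pdo->prepare(
        'SELECT objectNumber, thingID, thingmaker, hidden, title FROM Things'
    );
    $statement->execute();
    return $statement; // no fetchAll(): hand the statement back instead
}

// Caller: PDOStatement is Traversable, so rows stream one at a time.
foreach ($thingTable->goFastStreaming() as $row) {
    // process $row
}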
I have a strange situation.
Suppose I have a very simple function in php (I used Yii but the problem is general) which is called inside a transaction statement:
public function checkAndInsert($someKey)
{
    // Search for a record in the DB. If it does not exist, insert it.
    $data = MyModel::model()->find(array('someKey' => $someKey));
    if ($data == null)
    {
        $data = new MyModel();   // nothing found, so create the record
        $data->someKey = $someKey;
        $data->someCol = 'newOne';
        $data->save();
    }
    else
    {
        $data->someCol = 'test';
        $data->save();
    }
}
...
// $db is the instance variable used for operation on the DB
$db->transaction();
$this->checkAndInsert($someKey);
$db->commit();
That said, if I run the script containing this function from many processes at once, I get duplicate values in the DB. For example, if I have $someKey = 'pippo' and I run the script from 2 processes, I will have two (or more) records with column "someCol" = "newOne". This happens randomly, not always.
Is the code wrong? Should I put some constraint in DB in form of KEYs?
I also read this post about adding UNIQUE indexes to TokuDB, which says that a UNIQUE KEY "kills" write performance...
The approach you have is wrong. It's wrong because you delegate the authority for integrity/uniqueness check to PHP, but it's the database that's responsible for that.
In other words, you don't have to check whether something exists and then insert. That's bad because there's always some latency between PHP and MySQL, and as you already saw, you can get stale results for your checks: another process can insert the same key in the window between your find() and your save().
If you need unique values for certain column or combination of columns, you add a UNIQUE constraint. After that you simply insert. If the record exists, insert fails and you can deal with it via Exception. Not only is it faster, it's also easier for you because your code can become a one-liner which is much easier to maintain or understand.
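A sketch of that approach (table and column names are made up to match the example; MySQL with ERRMODE_EXCEPTION is assumed):
// One-time schema change: let the database enforce uniqueness.
// ALTER TABLE my_model ADD UNIQUE KEY uq_some_key (someKey);
try {
    $stmt = $pdo->prepare('INSERT INTO my_model (someKey, someCol) VALUES (:k, :v)');
    $stmt->execute(array(':k' => 'pippo', ':v' => 'newOne'));
} catch (PDOException $e) {
    if ($e->errorInfo[1] == 1062) { // MySQL duplicate-key error code
        $stmt = $pdo->prepare('UPDATE my_model SET someCol = :v WHERE someKey = :k');
        $stmt->execute(array(':k' => 'pippo', ':v' => 'test'));
    } else {
        throw $e;
    }
}
On MySQL the two statements can also be collapsed into a single INSERT ... ON DUPLICATE KEY UPDATE.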
We dynamically generate a sql file which contains one big insert query. This file will be run periodically from a PHP app with the following command:
mysql --force -u foo -pbar demo < demo-file.sql
There was no output from the command except when an error happened, as the content of the file was only one insert command. Now we have decided to change the file so that it contains multiple insert queries instead of one big query. Since then, the output of the command is
0
0
0
Our PHP app fails because it now assumes an error happened, since the output is not empty. So my question is: can I write the SQL file so that no output is generated even with multiple queries? I'd rather not touch the PHP app.
I know there are better designs but the code is historically grown :-)
[UPDATE 1]
Basically the app does
$response = shell_exec('mysql --force -u foo -pbar demo < demo-file.sql');
if (empty($response)) {
echo 'OK';
} else {
echo 'error: '.$response;
}
[UPDATE 2]
The sql file contains something like
insert into;
select sleep(0);
insert into;
select sleep(0);
insert into;
select sleep(0);
Those zeros come from the sleep lines. Get rid of them, you don't need them.
Also, for performance's sake, if you are using a storage engine that supports transactions, you should wrap the inserts in one:
START TRANSACTION;
INSERT ...;
INSERT ...;
INSERT ...;
...
COMMIT;
I would like to group multiple queries into a single function that lives in PostgreSQL. The function will be queried using PDO.
The function is:
CREATE OR REPLACE FUNCTION "test_multipe_refcursor"()
  RETURNS SETOF refcursor AS $BODY$
DECLARE
  parentRC refcursor;
  childRC refcursor;
BEGIN
  OPEN parentRC FOR
    SELECT * FROM parent;
  RETURN NEXT parentRC;

  OPEN childRC FOR
    SELECT * FROM child;
  RETURN NEXT childRC;

  RETURN;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
ALTER FUNCTION "test_multipe_refcursor"() OWNER TO postgres;
Here's the PHP code. "Database" is a singleton class that sets up the usual connection properties; nothing special.
$database = Database::load();
$sql = "select * from test_multipe_refcursor();";
$p = $database->query($sql);
$i = 1;
do
{
    $this->set('set' . $i, $p->fetchAll(PDO::FETCH_ASSOC));
    $i++;
} while ($p->nextRowset());
$p->closeCursor();
And the result.
PDOException: SQLSTATE[IM001]: Driver does not support this function: driver does not support multiple rowsets in xxxx.php on line 32
This would seem to indicate that it's not supported, but then again, I cannot find a list defining exactly what is.
Has anyone managed to get this working?
References:
http://www.sitepoint.com/forums/showthread.php?p=3040612#post3040612
PostgreSQL function returning multiple result sets
http://ca.php.net/manual/en/pdostatement.nextrowset.php
Support for returning multiple result sets is still on the PostgreSQL todo list, and it will definitely not make 8.4. As for the SETOF refcursor method: what you are trying to do doesn't work because the function isn't returning multiple rowsets - it is returning one rowset of refcursors. I'm not sure whether using refcursors client-side works, but I don't find it likely; even if the client-server protocol supports it, it is unlikely that PDO has an API for it.
But why are you trying to return multiple resultsets in one query? You can always do the queries separately.
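A sketch of the separate-queries alternative, reusing the $database handle from the question (assuming its query() proxies PDO::query()):
// Two ordinary queries instead of one function returning two refcursors.
$parents  = $database->query('SELECT * FROM parent')->fetchAll(PDO::FETCH_ASSOC);
$children = $database->query('SELECT * FROM child')->fetchAll(PDO::FETCH_ASSOC);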
Near the bottom of this PostgreSQL doc page, there is a section describing how you can pass back one or more cursors from a function. Basically, you get the caller to specify the name of the cursor(s) as parameters:
CREATE FUNCTION myfunc(refcursor, refcursor) RETURNS SETOF refcursor AS $$
BEGIN
OPEN $1 FOR SELECT * FROM table_1;
RETURN NEXT $1;
OPEN $2 FOR SELECT * FROM table_2;
RETURN NEXT $2;
END;
$$ LANGUAGE plpgsql;
-- need to be in a transaction to use cursors.
BEGIN;
SELECT * FROM myfunc('a', 'b');
FETCH ALL FROM a;
FETCH ALL FROM b;
COMMIT;
The page is for PostgreSQL 8.4, but this documentation snippet is present at least as far back as 8.1 (the version I'm running). As the comment says, you need to be inside a transaction to use cursors, as they are implicitly closed at the end of each transaction (i.e. at the end of every statement if autocommit mode is on).
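Here is a sketch of driving that from PHP with PDO (a plain PDO connection handle is assumed; the cursor names match the example above):
// Cursors only live inside a transaction, so open one explicitly.
$pdo->beginTransaction();
$pdo->query("SELECT * FROM myfunc('a', 'b')");  // opens cursors a and b
$set1 = $pdo->query('FETCH ALL FROM a')->fetchAll(PDO::FETCH_ASSOC);
$set2 = $pdo->query('FETCH ALL FROM b')->fetchAll(PDO::FETCH_ASSOC);
$pdo->commit();  // ends the transaction and closes the cursors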