I have two tables, TableA and TableB. TableB has a foreign key field pointing to TableA.
I need to run a DELETE statement against TableA, and when a record in it is deleted I also need to delete all records in TableB related to that record. Pretty basic.
begin;
DELETE FROM TableB
WHERE nu_fornecedor = $1;
DELETE FROM TableA
WHERE nu_fornecedor = $1;
commit;
This string is passed to pg_prepare(), but then I get the error:
ERROR: cannot insert multiple commands into a prepared statement
OK, but I need to run both commands in the same transaction; I can't execute two separate statements. I tried it without the begin-commit and got the same error.
Any idea how to do it?
To understand what is going on and your options, let me explain what a prepared statement is and what it is not. You can use pg_prepare, but only for the statements individually, not for the transaction as a whole.
A prepared statement is a statement handed to PostgreSQL which is then parsed and stored as a parse tree for future use. On first execution, the parse tree is planned with the inputs provided and executed, and the plan is cached for future use. It usually makes little sense to use prepared statements unless you want to reuse the query plan (e.g. executing a bunch of otherwise identical update statements hitting roughly the same number of rows), all in the same transaction.
If you want something that gives you the benefits of separating parameters from parse trees but does not cache plans, see pg_query_params() in the PHP documentation. That is probably what you want.
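For the question above, a minimal sketch using pg_query_params() (assuming an open connection in $conn and the id value in $id; both variable names are illustrative). Each statement is parameterized individually, and the transaction spans both because they run on the same connection:

pg_query($conn, 'BEGIN');

// Delete the child rows (TableB) before the parent row (TableA).
$ok = pg_query_params($conn, 'DELETE FROM TableB WHERE nu_fornecedor = $1', array($id))
   && pg_query_params($conn, 'DELETE FROM TableA WHERE nu_fornecedor = $1', array($id));

// Commit only if both deletes succeeded; otherwise roll back.
pg_query($conn, $ok ? 'COMMIT' : 'ROLLBACK');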
Related
I am updating MySQL data through a PHP script. I am looking to use mysqli_multi_query() instead of mysql_query(). I have N update queries to execute. Please suggest whether a multi query will give better execution time.
A. Updating data using mysql_query(), firing single queries N times.
B. Concatenating all update queries with ";" and firing them once using a multi query.
Please suggest whether technique "B" will help performance.
Thanks.
C. Use prepared statements and, if possible (InnoDB), run them within a transaction.
A multi query will save you some client-server round trips, but the queries are still executed one by one.
If you use prepared statements together with a transaction, the query is checked once by the parser, and after that just the values are passed to the server. The transaction prevents indexes from being rebuilt after each update.
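A minimal sketch of option C with mysqli (assuming a connection in $mysqli, a list of value/id pairs in $rows, and illustrative table/column names t1, c1, id):

$mysqli->begin_transaction();
$stmt = $mysqli->prepare('UPDATE t1 SET c1 = ? WHERE id = ?');
foreach ($rows as $row) {
    // The statement was parsed once above; only the values travel here.
    $stmt->bind_param('si', $row['c1'], $row['id']);
    $stmt->execute();
}
$stmt->close();
$mysqli->commit();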
Multiple insert statements can be rewritten as a single bulk insert statement:
INSERT INTO t1 (c1,c2,c3) VALUES (1,2,3), (3,4,5) --etc
I am wondering if we can know what type of query is executed in a PHP script: a DML or a DDL statement. Can we detect that using PHP, without regex?
Below are the queries I executed:
$queryOne = "SELECT * FROM employees";
// using a PHP script, detect that $queryOne is DML

$queryTwo = "DROP TABLE employees";
// using a PHP script, detect that $queryTwo is DDL
Classically, the DML statements are:
SELECT
INSERT
DELETE
UPDATE
MERGE (the new kid on the block)
Anything else is DDL - according to some sets of definitions.
Some of the 'other statements' are more like 'session control' statements; not really DML, not really DDL.
If you wish to detect these statements, you can either prepare (and describe) the statement and inspect the returned information to see whether it is one of the DML statements listed above, or you can scan for these keywords as the first non-comment words in the statement (a sketch follows below). That covers the vast majority of practical cases. What to do when a single string contains multiple statements (possibly of different types) is a decision you'll have to make on your own; not all DBMSs allow that anyway.
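A rough sketch of the keyword approach in PHP. The DML list follows the answer above; skipping leading comments (mentioned above) is omitted here for brevity, which is a simplifying assumption:

function isDml($sql) {
    // Take the first word of the statement and compare it to the DML list.
    $first = strtoupper(strtok(ltrim($sql), " \t\r\n("));
    return in_array($first, array('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'MERGE'));
}

isDml("SELECT * FROM employees"); // true  -> DML
isDml("DROP table employees");    // false -> DDL, by this classification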
or use regex.
I'm curious whether this can be achieved, as I'm currently facing a bug and would like to see if putting a SELECT and an UPDATE in a transaction would fix it (if you're wondering why I'm not posting the code that causes the bug, it's because it's a complex environment and I can't post all the influencing factors).
Something else I'm interested in, related to this: have you ever seen code where an UPDATE query written after a SELECT query gets executed before the SELECT (with the possibility that the script ran twice ruled out)?
It depends on what you mean by a transaction.
There are two types of transactions:
Implicit transactions: as in INSERT, UPDATE, SELECT and DELETE statements. These contain no explicit transaction commands, and the database engine rolls back the whole statement if an error happens.
Explicit transactions: here the enclosed statements are executed as a unit, and the whole transaction is either committed (COMMIT) or rolled back (ROLLBACK).
So you can't have both SELECT and UPDATE inside one query, but you can put them inside a transaction like:
START TRANSACTION;
SELECT * FROM tableName;
UPDATE tableName SET something = 'other something' WHERE thirdsomething = #s;
COMMIT;
Then put them in a stored procedure or a UDF.
Note that SELECT statements do not modify data, so you might not need to enclose them in a transaction; in your case only the UPDATE modifies data, so you could just use a stored procedure without a transaction.
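From PHP, a minimal sketch of the transaction above (assuming a mysqli connection in $mysqli; the table and column names follow the SQL above, and the values are placeholders):

$s = 42;                        // placeholder for #s in the SQL above
$value = 'other something';

$mysqli->begin_transaction();
$result = $mysqli->query('SELECT * FROM tableName');
$stmt = $mysqli->prepare('UPDATE tableName SET something = ? WHERE thirdsomething = ?');
$stmt->bind_param('si', $value, $s);
$stmt->execute();
$mysqli->commit();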
Is there any way to execute multiple SQL prepared statements at once? Or at least use something to achieve this result; can it be emulated with transactions?
pg_send_query can execute multiple statements (from the PHP docs: "The SQL statement or statements to be executed."),
but
pg_send_execute and pg_send_prepare can work with only one statement.
The query parameter has the following description:
"The parameterized SQL statement. Must contain only a single statement. (multiple statements separated by semi-colons are not allowed.) If any parameters are used, they are referred to as $1, $2, etc."
from http://www.php.net/manual/en/function.pg-send-prepare.php
Is there any way to send multiple statements at once to make fewer round trips between PHP and PostgreSQL, like pg_send_query does?
I don't want to use pg_send_query because, without parameter binding, I can have SQL injection vulnerabilities in my code.
The round trips to the DB server shouldn't be your bottleneck as long as you are (a) using persistent connections (either directly or via a pool) and (b) aren't suffering from the "n+1 selects" problem.
New connections have an order-of-magnitude overhead, which slows things down if one is opened for every query. The n+1 problem generates far more trips than are really needed, compared to retrieving (or acting upon) sets of related rows rather than doing all operations one at a time.
See: What is the n+1 selects problem?
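To illustrate the difference (a sketch; the table and variable names are invented):

// n+1 anti-pattern: one round trip per id.
foreach ($ids as $id) {
    $res = pg_query_params($conn, 'SELECT * FROM orders WHERE id = $1', array($id));
}

// Set-based alternative: one round trip for the whole set.
$res = pg_query_params($conn,
    'SELECT * FROM orders WHERE id = ANY($1::int[])',
    array('{' . implode(',', array_map('intval', $ids)) . '}'));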
Separate your queries with a semicolon:
UPDATE customers SET last_name = 'foo' WHERE id = 1;
UPDATE customers SET last_name = 'bar' WHERE id = 2;
Edit:
Okay, you cannot do this on the call side:
The parameterized SQL statement. Must contain only a single statement. (multiple statements separated by semi-colons are not allowed.)
Another way would be to call a stored procedure with this method, and have that stored procedure issue the multiple statements (a sketch follows below).
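Reusing the TableA/TableB example from the first question, a sketch of that approach (the function name delete_fornecedor and its body are invented for illustration): wrap both statements in a server-side function, then prepare a single call to it.

// One-time setup: a plpgsql function that issues both statements.
pg_query($conn, "
    CREATE OR REPLACE FUNCTION delete_fornecedor(p_id integer) RETURNS void AS \$\$
    BEGIN
        DELETE FROM TableB WHERE nu_fornecedor = p_id;
        DELETE FROM TableA WHERE nu_fornecedor = p_id;
    END;
    \$\$ LANGUAGE plpgsql;
");

// A single parameterized statement, so pg_send_prepare accepts it.
pg_send_prepare($conn, 'del_f', 'SELECT delete_fornecedor($1)');
pg_get_result($conn);
pg_send_execute($conn, 'del_f', array($id));
$result = pg_get_result($conn);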
I have a PHP foreach loop with a MySQL insert statement inside it. The loop inserts data into my database. I've never run into this issue before, but what I think is happening is that the insert dies (I do not have an "or die" statement after the insert) when it reaches a duplicate record. Even though there may be duplicate records in the table, I need it to just continue. Is there something I need to specify to do this?
I'm transferring some records from one table to another. Right now, I have 20 records in table #1 and only 17 in table #2. I'm missing 3 records, but only one of those is a duplicate that violates the constraint on the table. The other two records should have been added. Can someone give me some advice here?
What's happening is that PHP is throwing a warning when the MySQL insert fails and stopping on that warning. The best way to accomplish your goal is:
Create a custom exception handler
Set PHP to use the exception handler for warnings.
Wrap the insert attempt into a try / catch
When you catch the exception / warning, either log or output the mysql error but continue script execution.
This will allow your script to continue without stopping, while at the same time showing you the problem.
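A condensed sketch of those four steps (assuming a mysqli connection in $mysqli and illustrative table/column names; mysqli_report makes failed queries raise PHP warnings):

mysqli_report(MYSQLI_REPORT_ERROR);   // failed queries raise PHP warnings

// Steps 1 & 2: route warnings through an exception handler.
set_error_handler(function ($errno, $errstr) {
    throw new ErrorException($errstr, $errno);
}, E_WARNING);

foreach ($records as $r) {
    try {
        // Step 3: wrap the insert attempt.
        $stmt = $mysqli->prepare('INSERT INTO table2 (id, name) VALUES (?, ?)');
        $stmt->bind_param('is', $r['id'], $r['name']);
        $stmt->execute();
    } catch (ErrorException $e) {
        // Step 4: log the duplicate-key (or other) error and keep going.
        error_log($e->getMessage());
    }
}
restore_error_handler();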
One way around this would be to simply query the database for the record that you're about to insert. This way, your series of queries will not die when attempting to insert a duplicate record.
A slightly more efficient solution would be to query for all of the records you're about to insert in one query, remove the duplicates, then insert the new ones.
Do you insert multiple rows with one INSERT statement?
INSERT INTO xyz (x,y,z) VALUES
(1,2,3),
(2,3,5),
(3,4,5),
(4,5,6)
Then you might want to consider prepared statements
...or adding the IGNORE keyword to your INSERT statement
INSERT IGNORE INTO xyz (x,y,z) VALUES
(1,2,3),
(2,3,5),
(3,4,5),
(4,5,6)
http://dev.mysql.com/doc/refman/5.0/en/insert.html says:
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead
You can still fetch the warnings but the insertion will not be aborted.
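For example (a sketch assuming a mysqli connection in $mysqli), run the INSERT IGNORE above and then read any warnings it produced:

$mysqli->query("INSERT IGNORE INTO xyz (x,y,z) VALUES (1,2,3), (2,3,5)");
if ($mysqli->warning_count > 0) {
    $res = $mysqli->query('SHOW WARNINGS');
    while ($row = $res->fetch_assoc()) {
        // Each row carries Level, Code and Message columns.
        error_log("{$row['Level']} {$row['Code']}: {$row['Message']}");
    }
}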
Not a good way, because you should figure out what's wrong, but just to prevent it from dying, try adding @ (the error-suppression operator) in front of the function:
@mysql_query(...);
Alternatively, ON DUPLICATE KEY UPDATE turns the duplicate-key error into an update:
INSERT INTO FOO
(ID, BAR)
VALUES (1,2), (3,4)
ON DUPLICATE KEY UPDATE BAR = VALUES(BAR)