PDO provides methods to initiate, commit, and roll back transactions:
$dbh->beginTransaction();
$sth = $dbh->prepare('
...
');
$sth->execute(); // in real code some values will be bound
$dbh->commit();
Is there any reason to use the PDO methods over simply using the transaction statements in MySQL? I.e.:
$sth = $dbh->prepare('
START TRANSACTION;
...
COMMIT;
');
$sth->execute(); // in real code some values will be bound
UPDATE: Just a note for anyone else looking into this: after some testing I found that the second case above (putting START TRANSACTION and COMMIT inside prepare()) results in an exception being thrown. So to use transactions with a prepared statement, you must use the PDO methods shown in the first case.
From a portability standpoint, you're better off using the interface that PDO provides in case you ever want to work with a different DBMS, or bring on another team member who's used to another DBMS.
For example, SQLite uses slightly different syntax: if you were to move from MySQL to an SQLite database, you would have to change every string in your PHP code that contains START TRANSACTION;, because that is not valid syntax for SQLite. SQL Server is another example that doesn't accept START TRANSACTION;.
Of course, you can also use BEGIN; in MySQL to start a transaction, and that also works in SQLite; SQL Server, however, still wants BEGIN TRANSACTION;.
You'll often be able to find a syntax that you like and that is reasonably portable, but why spend the time and energy to even think about it if you don't have to? Take advantage of the fact that there are a dozen PDO drivers available to make your life easier. If you care at all about consistency, favor the API over implementation-specific SQL syntax.
The difference between PDO's transaction methods and MySQL's transaction statements is essentially nothing, except for one thing:
You can, for example, start your transaction, run some queries, run some PHP code, run more queries based on that code, and then roll everything back at the end simply by calling $pdo->rollBack(). That is much easier than issuing two or three extra queries instead of using $pdo->beginTransaction().
Also, $pdo->rollBack() is a few lines shorter and, in my opinion, clearer than building and executing another query.
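For illustration, a minimal sketch of that interleaving; the connection details, table, and business rule are all made up:
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$pdo->beginTransaction();
$stmt = $pdo->prepare('UPDATE stock SET qty = qty - ? WHERE item_id = ?');
$stmt->execute([3, 42]);                       // sample values
// ... arbitrary PHP logic can run here, between queries ...
$check = $pdo->prepare('SELECT qty FROM stock WHERE item_id = ?');
$check->execute([42]);
if ($check->fetchColumn() < 0) {
    $pdo->rollBack();  // one call undoes everything since beginTransaction()
} else {
    $pdo->commit();
}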
This is odd. I'm running a query with just a single INSERT, preceded by a SET statement. The query looks something like this:
SET @discount:=(SELECT discount * :applyDiscount FROM fra_cus WHERE customerID=:customerID AND franchiseID=:franchiseID);
INSERT INTO discounts_applied (unitID, franchiseID, customerID, amount)
VALUES(:unitID, :franchiseID, :customerID, @discount * :price);
It appears that if I prepare these as two separate PDO queries, lastInsertId() works fine... but if I prepare them and execute them in the same statement, lastInsertId() returns nothing.
It's not the end of the world, but it's annoying. Anyone know why this would be the case? For the record, there's a reason I need to define @discount as a variable (it pertains to triggers on one of the tables). Also, this is all taking place within a larger transaction.
First of all, I would strongly recommend running every query in a distinct API call. This is how an Application Programming Interface is intended to work.
Not only will it prevent situations like this, it will also make your code many times more readable and maintainable.
And it will make your code much safer too: you can run multiple statements in a single call only at the expense of native prepared statements. However theoretical this vulnerability is, why take chances at all?
Why not make a regular SELECT query instead of SET, fetch the resulting value into a PHP variable, and then use it like any other variable, through a placeholder? I don't see any reason for such a convoluted way to deal with simple data.
In case I failed to convince you, the reason is simple: you are running two queries, and the first one doesn't produce any insert id. And obviously, you are getting that first query's metadata (errors, affected rows, whatever), not the second one's. To get the second query's metadata you have to ask the database for it. The process is explained in my article, Treating PHP delusions - The only proper PDO tutorial: Running multiple queries with PDO. Basically, PDOStatement::nextRowset() is what you need.
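For illustration, a sketch of that approach with made-up connection details; multi-statement prepare() like this relies on emulated prepares, the pdo_mysql default:
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$stmt = $pdo->prepare('
    SET @discount := (SELECT discount * :applyDiscount FROM fra_cus
        WHERE customerID = :customerID AND franchiseID = :franchiseID);
    INSERT INTO discounts_applied (unitID, franchiseID, customerID, amount)
    VALUES (:unitID, :franchiseID, :customerID, @discount * :price)
');
$stmt->execute([
    ':applyDiscount' => 1, ':customerID' => 1, ':franchiseID' => 1,
    ':unitID' => 1, ':price' => 9.99,   // sample values
]);
$stmt->nextRowset();                    // step past the SET statement's result
echo $pdo->lastInsertId();              // the INSERT's metadata is now current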
I have a Symfony 4 project and I want to store MySQL queries as strings in a MySQL database. However, before storing the strings I want to make sure they are valid MySQL syntax. Is there a way of doing this?
Thanks!
I didn't test it but it should work.
Use the database API you already use in your project to prepare the SQL statements you want to validate, then discard them; do not execute the prepared statements.
For example, using PDO, call PDO::prepare() to ask the server to prepare the statement (make sure PDO::ATTR_EMULATE_PREPARES is disabled, otherwise nothing is sent to the server until execute()). It returns a PDOStatement object on success, i.e. when the query is syntactically correct. Do not call execute() on the returned statement; just discard it (using unset()).
PDO::prepare() returns FALSE or throws an exception on error, depending on how PDO's error handling is configured.
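A minimal sketch of that idea, with made-up credentials; PDO::ATTR_EMULATE_PREPARES must be off, otherwise the server never sees the statement at prepare time:
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,   // real server-side prepare
]);

function isValidSql(PDO $pdo, string $sql): bool
{
    try {
        $stmt = $pdo->prepare($sql);   // the server parses the statement here
        unset($stmt);                  // discard it; never call execute()
        return true;
    } catch (PDOException $e) {
        return false;                  // the server rejected the statement
    }
}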
The easiest way would be to run the query in a new transaction and then roll it back. SQL can get complex to validate, especially if you plan to allow MySQL-specific functions. What if a new function is introduced in the next MySQL release? Writing and maintaining a separate SQL validation library seems counterproductive.
Why not try the following (see the sketch after these steps):
Create a new user in your database for running these queries. This lets you manage security, e.g. granting only SELECT so nobody can run DROP DATABASE.
Run the user-provided statement as the user created in step 1: start a new transaction using START TRANSACTION, execute the statement, and roll back with ROLLBACK. Ensure SET autocommit=0 is set, as per 13.3.1 START TRANSACTION, COMMIT, and ROLLBACK Syntax.
If the user-provided statement executes without errors, it's valid. You don't have to read all the returned rows in your PHP code.
Make sure to keep an eye on performance, because some statements will be expensive to execute; this functionality could be used to DoS your application.
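A sketch of that dry run, assuming $pdo connects as the restricted user from step 1 and $userSql holds the statement to test (DDL statements commit implicitly, which is another reason to grant SELECT only):
$pdo->beginTransaction();              // PDO turns autocommit off for us
try {
    $pdo->query($userSql);             // execute the user-provided statement
    $isValid = true;
} catch (PDOException $e) {
    $isValid = false;                  // the server rejected it
}
$pdo->rollBack();                      // discard any side effects either way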
I'd probably create a procedure or function in the database; that's what they are for. Storing SQL in a table just to query it and then execute it only results in a redundant round trip between the database and the application.
I cringed when Sebastien stated he was disconnecting & reconnecting between each use of mysqli_multi_query() in "Can mysqli_multi_query do UPDATE statements?" because it just didn't seem like best practice.
However, in "mysqli multi_query followed by query", Craig stated that in his case it was faster to disconnect & reconnect between each use of mysqli_multi_query() than to employ mysqli_next_result().
I would like to ask if anyone has further first-hand knowledge or benchmark evidence to suggest an approximate "cutoff" (based on query volume or something) when a programmer should choose the "new connection" versus "next result" method.
I am also happy to hear any/all concerns not pertaining to speed. Does Craig's use of a connecting function have any bearing on speed?
Is there a speed difference between Craig's while statement:
while ($mysqli->next_result()) {;}
- versus -
a while statement that I'm suggesting:
while(mysqli_more_results($mysqli) && mysqli_next_result($mysqli));
- versus -
creating a new connection for each expected multi_query, before running the first multi_query. I just tested this, and the two mysqli_multi_query() calls ran error-free, so no close() was needed:
$mysqli1=mysqli_connect("$host","$user","$pass","$db");
$mysqli2=mysqli_connect("$host","$user","$pass","$db");
- versus -
Opening and closing between each mysqli_multi_query() like Sebastien and Craig:
$mysqli = newSQL();
$mysqli->multi_query($multiUpdates);
$mysqli->close();
- versus -
Anyone have another option to test against?
It is not next_result() that is to blame but the queries themselves. The time your code takes to run depends on the time the actual queries take to perform.
Although mysqli_multi_query() returns control quite fast, that doesn't mean all the queries have been executed by then. Quite the contrary: by the time mysqli_multi_query() has finished, only the first query has been executed, while all the other queries are queued on the mysql side for asynchronous execution.
From this you may conclude that the next_result() call doesn't add any delay by itself; it is just waiting for the next query to finish. And if the query itself takes time, then next_result() has to wait as well.
Knowing that, you can already tell which way to choose: if you don't care about the results, you could just close the connection. But in fact that would just be sweeping the dirt under the rug, leaving all the slow queries in place. So it's better to keep the next_result() loop in place (especially because you have to check for errors/affected rows/etc. anyway) and speed up the queries themselves.
So it turns out that to solve the problem with next_result() you actually have to solve the regular problem of query speed. Here are some recommendations:
For SELECT queries it's the usual indexing/EXPLAIN analysis, already explained in other answers.
For DML queries, especially ones run in batches, there are other ways:
Speaking of Craig's case, it closely resembles the well-known problem of the speed of InnoDB writes. By default, the InnoDB engine is set up in a very cautious mode, where no subsequent write is performed until the engine has ensured the previous one finished successfully. This makes writes awfully slow (something like only 10 queries/sec). The common workaround is to make all the writes at once. For INSERT queries there are plenty of methods:
you can use the multiple-values INSERT syntax
you can use a LOAD DATA INFILE query
you can wrap all the queries in a transaction.
For updating and deleting, however, only a transaction remains a reliable way. So, as a universal solution, the following workaround can be offered:
$multiSQL = "BEGIN;{$multiSQL}COMMIT;";
$mysqli->multi_query($multiSQL);
while ($mysqli->next_result()) {/* check results here */}
If that doesn't work or isn't applicable in your case, then I'd suggest swapping mysqli_multi_query() for single queries run in a loop, investigating and optimizing the speed, and then returning to multi_query().
To answer your question:
look before you jump
I expect your mysqli_more_results() call (the look before you jump) doesn't speed things up: if you have n results, you'll make (2*n)-1 calls to the database, whereas Craig makes n+1.
multiple connections
multi_query() executes asynchronously, so you'd just be adding connection overhead.
opening and closing db
Listen to Your Common Sense ;-) But don't lose track of what you're doing. Wrapping queries in a transaction makes them atomic: they all fail, or they all succeed. Sometimes that is required so the database never conflicts with your universe of discourse. But using transactions for speedups may have unwanted side effects. Consider the case where one of your queries violates a constraint: that makes the whole transaction fail. So if they weren't a logical transaction in the first place, and most of the queries should have succeeded, you'll have to find out which one went wrong and which have to be reissued, costing you more instead of delivering a speedup.
Sebastien's queries actually look like they should be part of some bigger transaction, that contains the deletion or updates of the parents.
Instead, try and remember
there is no spoon
In your examples, there was no need for multiple queries. The INSERT ... VALUES form takes multiple tuples for VALUES, so instead of preparing one statement and wrapping its repeated executions in a transaction, as Your Common Sense suggests, you could prepare a single statement and have it executed and auto-committed. As per the mysqli manual, this saves you a bunch of round trips.
So make a SQL statement of the form:
INSERT INTO r (a, b, c) VALUES (?, ?, ?), (?, ?, ?), ...
and bind and execute it. mysqldump --opt does it, so why don't we? The MySQL reference manual has a section on statement optimization; look in its DML section for insert and update queries. But understanding why --opt does what it does is a good start.
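A sketch of building such a statement dynamically; the table r and the rows are made up, and $pdo is assumed to be an open PDO connection:
$rows = [
    [1, 'a', 'x'],
    [2, 'b', 'y'],
    [3, 'c', 'z'],
];
$sql = 'INSERT INTO r (a, b, c) VALUES '
     . implode(', ', array_fill(0, count($rows), '(?, ?, ?)'));
$stmt = $pdo->prepare($sql);
$stmt->execute(array_merge(...$rows));   // flatten the rows into one bind list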
the underestimated value of preparing a statement
To me, the real value of prepared statements is not that you can execute them multiple times, but the automatic input escaping. For a measly single extra client-server round trip, you save yourself from SQL injection. SQL injection is a serious point of attention, especially when you're using multi_query(): multi_query() tells mysql to expect multiple queries and execute them, so fail to escape properly and you're in for some fun:
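For instance, a sketch of the failure mode with a hypothetical table; the hostile value smuggles a second statement into the batch:
$name = "x'); DROP TABLE users; --";     // attacker-controlled input
$mysqli->multi_query(
    "INSERT INTO greetings (name) VALUES ('$name')"
);                                       // the injected DROP TABLE runs too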
So my best practice would be:
Do I really need multiple queries?
If I do, escape them well, or prepare them!
I have many questions about PDO ...
Should I use prepare() only when I have parameters to bind? When I need to do a simple query like SELECT * FROM table ORDER BY ..., should I use query()?
Should I use exec() when I have update and delete operations and need to get the number of rows affected, or should I use PDOStatement->rowCount() instead?
Should I use closeCursor() when I do insert, update and delete, or only with select when I need to do another select?
Does $con = NULL; really close the connection?
Is using bindParam() with a foreach to make multiple inserts a good approach? I mean performance-wise, because I think that doing (...),(...) in the same INSERT is better, isn't it?
Can you provide me some more information (URL) about performance points when using PHP PDO MySQL? If someone has another hint it would be really useful.
When I was developing the DB layer in Zend Framework 1.0, I made it use prepare/execute for all queries by default. There is little downside to doing this.* There's a little bit of overhead on the PHP side, but on the MySQL side, prepared queries are actually faster.
My practice is to use query() for all types of queries and call rowCount() after updates. You can also run SELECT ROW_COUNT().
closeCursor() is useful in MySQL if you have pending rows from a result set, or pending result sets in a multi-result-set query. It's not necessary when you use INSERT, UPDATE, or DELETE.
The PDO_mysql test suite closes connections with $con=NULL and that is the proper way. This won't actually close persistent connections managed by libmysqlnd, but that's deliberate.
Executing a prepared INSERT statement one row at a time is not as fast as executing a single INSERT with multiple tuples. But the difference is pretty small. If you have a large number of rows to insert, and performance is important, you should really use LOAD DATA LOCAL INFILE. See also http://dev.mysql.com/doc/refman/5.6/en/insert-speed.html for other tips.
You can google "PDO MySQL benchmark" (for example) to find various results. The bottom line, however, is that there is no clear winner between PDO and mysqli. The difference is slight enough that it diminishes relative to other, more important optimization techniques, such as choosing the right indexes, making sure indexes fit in RAM, and clever use of application-side caching.
* Some statements cannot run as prepared statements in MySQL, but the list of such statements gets smaller with each major release. If you're still using an ancient version of MySQL that can't run certain statements with prepare(), then you should have upgraded years ago!
Re your comment:
Yes, using query parameters (e.g. with bindValue() and bindParam()) is considered the best method for defending against SQL injection in most cases.
Note that there's an easier way to use query parameters with PDO -- you can just pass an array to execute() so you don't have to bother with bindValue() or bindParam():
$sql = "SELECT * FROM MyTable WHERE name = ?";
$stmt = $pdo->prepare($sql);
$stmt->execute( array("Bill") );
You can also use named parameters this way:
$sql = "SELECT * FROM MyTable WHERE name = :name";
$stmt = $pdo->prepare($sql);
$stmt->execute( array(":name" => "Bill") );
Using quote() and then interpolating the result into a query is also a good way to protect against SQL injection, but IMHO makes code harder to read and maintain, because you're always trying to figure out if you have closed your quotes and put your dots in the right place. It's much easier to use parameter placeholders and then pass parameters.
You can read more about SQL injection defense in my presentation, SQL Injection Myths and Fallacies.
Most of these questions can be answered with just common sense. So here I am, answering in order:
1. It doesn't matter, actually.
2. Absolutely and definitely no: exec() doesn't use prepared statements. That's all.
3. It doesn't really matter; if you ever need this, your program architecture is probably wrong.
4. You can easily test it yourself; personal experience is always preferred.
5. The difference is considered negligible. However, if your multiple inserts are really slow (on InnoDB with default settings, for example), you have to use a transaction, which will make them fast again (see the sketch after the rule of thumb below).
6. There is NONE. PDO is just an API, and APIs aren't related to performance; they just translate your commands to the service. Either your commands or the service may affect performance, but not a mere API.
So, the rule of thumb is:
it's the query itself that affects performance, not the way you run it.
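The sketch promised in point 5, with a made-up table; $rows is an array of [a, b] pairs, and wrapping the loop in a transaction lets InnoDB flush once instead of once per row:
$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO t (a, b) VALUES (?, ?)');
foreach ($rows as $row) {
    $stmt->execute($row);
}
$pdo->commit();                          // one flush for the whole batch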
I'm sorry, this is a very general question but I will try to narrow it down.
I'm new to this whole transaction thing in MySQL/PHP, but it seems pretty simple. I'm just using mysql, not mysqli or PDO. I have a script that seems to be rolling back some queries but not others. This is uncharted territory for me, so I have no idea what is going on.
I start the transaction with mysql_query('START TRANSACTION;'), which I understand disables autocommit at the same time. Then I have a lot of complex code, and whenever I do a query it is something like mysql_query($sql) or $error = "Oh noes!". Periodically I call a function error_check(), which checks whether $error is non-empty; if it is, I do mysql_query('ROLLBACK;') and die($error). Later on in the code I have mysql_query('COMMIT;'). But if I do two queries and then purposely throw an error (I mean just set $error to something), it looks like the first query rolls back but the second one doesn't.
What could be going wrong? Are there some gotchas with transactions I don't know about? I don't have a good understanding of how these transactions start and stop especially when you mix PHP into it...
EDIT:
My example was overly simplified; I actually have at least two transactions doing INSERT, UPDATE or DELETE on separate tables. But before I execute each of those statements, I back up the rows in corresponding "history" tables to allow undoing. It looks like the manipulation of the main tables gets rolled back, but entries in the history tables remain.
EDIT2:
Doh! As I finished typing the previous edit it dawned on me...there must be something wrong with those particular tables...for some reason they were all set as MyISAM.
Note to self: Make sure all the tables use transaction-supporting engines. Dummy.
I'd recommend using the mysqli or PDO functions rather than mysql, as they offer some worthwhile improvements—especially the use of prepared statements.
Without seeing your code, it is difficult to determine where the problem lies. Given that you say your code is complex, it is likely that the problem lies with your code rather than MySQL transactions.
Have you tried creating some standalone test scripts? Perhaps you could isolate the SQL statements from your application, and create a simple script which simply runs them in series. If that works, you have narrowed down the source of the problem. You can echo the SQL statements from your application to get the running order.
You could also try testing the same sequence of SQL statements from the MySQL client, or through PHPMyAdmin.
Are your history tables in the same database?
MySQL transactions don't strictly require the mysqli API (the server handles them either way), but mysqli makes them easy to work with. I have been using transactions; all I do is deactivate autocommit and run my SQL statements:
$mysqli->autocommit(FALSE);
SELECT, INSERT, and DELETE are all supported. As long as I'm using the same mysqli handle to call these statements, they are within the transaction wrapper. Nobody outside (not using the same mysqli handle) will see any data that you write/delete using INSERT/DELETE as long as the transaction is still open, so it's critical that you make sure every SQL statement is fired with that handle. Once the transaction is committed, the data is made available to other db connections.
$mysqli->commit();
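Putting it together, a small sketch where every statement goes through the same handle; the connection details and tables are hypothetical:
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$mysqli->autocommit(FALSE);              // opens the transaction wrapper
$mysqli->query("INSERT INTO orders (customer_id) VALUES (42)");
$mysqli->query("DELETE FROM cart WHERE customer_id = 42");
$mysqli->commit();                       // other connections can now see it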