Multiple UPDATEs and INSERTs with PHP/MySQLi

I need to reconcile between an existing set of tables and new/changed information that I get on a regular basis, and so have a set of ~30 UPDATE/INSERT operations that has to run every time. Since 'mysql_query' is now deprecated, and I'd prefer not to recode everything in OO, is there a reasonable procedural way to run all of these in sequence without having to call 'mysqli_free_result()' after every single one?
Just for the record, I've tried running them as a set of mysqli_query statements without mysqli_free_result(), and it's a mess: some of the operations go through while others fail silently. Frankly, a shell script with a bunch of 'mysql -e' commands in it was much more reliable... but this needs to be a Web-driven app, so that's not viable anymore.

Your assumptions are wrong.
mysqli_query() behaves just like mysql_query() did; you don't need any other modifications, and mysqli_free_result() has nothing to do with your problem.
The only meaningful part of your question is the queries that fail silently. To make them fail noisily, just tell mysqli to do so: add this line before mysqli_connect()
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
and the first query that fails will tell you the reason.
But in general, there is not a single problem with running multiple UPDATE and INSERT queries using mysqli_query().
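For example, a minimal procedural sketch (credentials, table and column names here are placeholders, not from the question):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // fail noisily from here on
$link = mysqli_connect('localhost', 'user', 'password', 'mydb');
// UPDATE/INSERT return TRUE on success rather than a result set,
// so there is nothing to pass to mysqli_free_result() in the first place.
mysqli_query($link, "UPDATE inventory SET qty = qty - 1 WHERE sku = 'A100'");
mysqli_query($link, "INSERT INTO audit_log (sku, note) VALUES ('A100', 'reconciled')");
mysqli_close($link);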

Related

PDO lastInsertID() failing due to running multiple queries in a single call

This is odd. I'm running a query with just a single INSERT, preceded by a SET statement. The query looks something like this:
SET @discount := (SELECT discount * :applyDiscount FROM fra_cus WHERE customerID = :customerID AND franchiseID = :franchiseID);
INSERT INTO discounts_applied (unitID, franchiseID, customerID, amount)
VALUES(:unitID, :franchiseID, :customerID, @discount * :price);
It appears that if I prepare these as two separate PDO queries, lastInsertID() works fine... but if I prepare them and execute them in the same statement, lastInsertID() returns nothing.
It's not the end of the world, but it's annoying. Anyone know why this would be the case? For the record, there's a reason I need to define @discount as a variable (pertains to triggers on one of the tables). Also this is all taking place within a larger transaction.
First of all, I would strongly recommend running every query in a distinct API call. This is how an Application Programming Interface is intended to work.
Not only will it prevent situations like this, it will also make your code many times more readable and maintainable.
And it will make your code much safer too. You can run multiple statements in a single call only at the expense of native prepared statements. However theoretical this vulnerability is, why take chances at all?
Why not make a regular SELECT query instead of SET, fetch the resulting value into a PHP variable, and then use it like any other variable, through a placeholder? I don't see any reason why there should be such a complex way to deal with simple data.
In case I failed to convince you, the reason is simple: you are running two queries, and the first one doesn't trigger any insert ids. Naturally, you get the first query's metadata (errors, affected rows, whatever) first, not the second one's. To get the second query's metadata you have to ask the database for it. The process is explained in my article, Treating PHP Delusions - The only proper PDO tutorial: Running multiple queries with PDO. Basically, PDOStatement::nextRowset() is what you need.
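For illustration, here is a sketch of that two-call approach (connection details are assumed; the table and placeholder names come from the question):
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$stmt = $pdo->prepare('SELECT discount * :applyDiscount FROM fra_cus
    WHERE customerID = :customerID AND franchiseID = :franchiseID');
$stmt->execute(['applyDiscount' => $applyDiscount,
    'customerID' => $customerID, 'franchiseID' => $franchiseID]);
$discount = $stmt->fetchColumn(); // the value that SET @discount used to hold
$ins = $pdo->prepare('INSERT INTO discounts_applied (unitID, franchiseID, customerID, amount)
    VALUES (:unitID, :franchiseID, :customerID, :amount)');
$ins->execute(['unitID' => $unitID, 'franchiseID' => $franchiseID,
    'customerID' => $customerID, 'amount' => $discount * $price]);
echo $pdo->lastInsertId(); // now refers to the INSERT, as expected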

Keep checking for errors in queries

I'm a bit obsessed now. I'm writing a PHP-MySQL web application, using PDO, that has to execute a lot of queries. Currently, every time I execute a query, I also check whether that query went right or wrong. But recently I thought that there's no reason for it, and that it's a waste of lines to keep checking for an error.
Why should a query go wrong when your database connection is established and you are sure that your database is fine and has all the needed tables and columns?
You're absolutely right and you're following the correct way.
Under correct circumstances there should be no invalid queries at all. Each query should be valid for any possible input value.
But something can still happen:
You can lose the connection during the query
A table can become corrupted
...
So I suggest you switch PDO's error mode to throw exceptions and write one global handler which catches this kind of error and outputs some kind of sorry-page (plus appends a line with some details to a log file). A sketch of this setup follows.
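A minimal sketch of that setup (the DSN, log path, and message are placeholders):
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]); // every failed query now throws
// One global handler replaces all the per-query checks.
set_exception_handler(function ($e) {
    error_log(date('c') . ' ' . $e->getMessage() . "\n", 3, '/var/log/myapp/db.log');
    http_response_code(500);
    echo 'Sorry, something went wrong. Please try again later.'; // the sorry-page
});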

How do I use MySQL transactions in PHP?

I'm sorry, this is a very general question but I will try to narrow it down.
I'm new to this whole transaction thing in MySQL/PHP, but it seems pretty simple. I'm just using mysql, not mysqli or PDO. I have a script that seems to be rolling back some queries but not others. This is uncharted territory for me, so I have no idea what is going on.
I start the transaction with mysql_query('START TRANSACTION;'), which I understand disables autocommit at the same time. Then I have a lot of complex code, and whenever I run a query it is something like mysql_query($sql) or $error = "Oh noes!";. Periodically I call a function error_check() which checks whether $error is non-empty; if it is, I do mysql_query('ROLLBACK;') and die($error). Later on in the code I have mysql_query('COMMIT;'). But if I run two queries and then purposely throw an error (I mean just set $error to something), it looks like the first query rolls back but the second one doesn't.
What could be going wrong? Are there some gotchas with transactions I don't know about? I don't have a good understanding of how these transactions start and stop especially when you mix PHP into it...
EDIT:
My example was overly simplified I actually have at least two transactions doing INSERT, UPDATE or DELETE on separate tables. But before I execute each of those statements I backup the rows in corresponding "history" tables to allow undoing. It looks like the manipulation of the main tables gets rolled back but entries in the history tables remain.
EDIT2:
Doh! As I finished typing the previous edit, it dawned on me... there must be something wrong with those particular tables... for some reason they were all set to MyISAM.
Note to self: Make sure all the tables use transaction-supporting engines. Dummy.
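A quick way to spot and fix such tables (a sketch; 'mydb' and 'history_main' are placeholders, shown with mysql_* to match the code above):
// List tables that will silently ignore ROLLBACK.
mysql_query("SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'mydb' AND ENGINE <> 'InnoDB'");
// Convert an offender so it participates in transactions.
mysql_query("ALTER TABLE history_main ENGINE = InnoDB");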
I'd recommend using the mysqli or PDO functions rather than mysql, as they offer some worthwhile improvements—especially the use of prepared statements.
Without seeing your code, it is difficult to determine where the problem lies. Given that you say your code is complex, it is likely that the problem lies with your code rather than MySQL transactions.
Have you tried creating some standalone test scripts? Perhaps you could isolate the SQL statements from your application, and create a simple script which simply runs them in series. If that works, you have narrowed down the source of the problem. You can echo the SQL statements from your application to get the running order.
You could also try testing the same sequence of SQL statements from the MySQL client, or through PHPMyAdmin.
Are your history tables in the same database?
MySQL transactions work fine through the mysqli API (the classic mysql_* functions have no dedicated transaction methods). I have been using transactions: all I do is deactivate autocommit and run my SQL statements.
$mysqli->autocommit(FALSE);
SELECT, INSERT and DELETE are all supported. As long as I'm using the same mysqli handle to call these statements, they are inside the transaction wrapper. Nobody outside (i.e. not using the same mysqli handle) will see any data that you write or delete with INSERT/DELETE as long as the transaction is still open, so it's critical that every SQL statement is fired with that handle. Once the transaction is committed, the data becomes available to other connections.
$mysqli->commit();
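Putting the pieces together, a sketch (connection details, tables, and values are placeholders; error reporting is set to throw so a failed statement reaches the catch block):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$mysqli->autocommit(FALSE); // opens the transaction wrapper
try {
    $mysqli->query("INSERT INTO orders (customer_id) VALUES (42)");
    $mysqli->query("UPDATE stock SET qty = qty - 1 WHERE item_id = 7");
    $mysqli->commit();   // changes become visible to other connections
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback(); // undo everything since autocommit(FALSE)
    throw $e;
}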

Running a list of MySQL queries without using exec()

I've got a site that requires manual creation of the database tables on install. At the moment they are saved (along with any initial data) in a collection of .sql files.
I've tried auto-creating using exec() with the CLI MySQL client, and while it works on a few platforms it's fairly flaky. I don't really like doing it this way: it's hard to debug errors and far from bulletproof (especially if the MySQL executable isn't in the system path).
Is there a better way of doing this? The MySQL query() command only allows one SQL statement per query (which is the sticking point).
MySQLi, I've heard, may solve some of these issues. I'm fairly invested in the original MySQL library but would be willing to switch provided it's stable, compatible, commonly supported on a standard server build, and an improvement in general.
Failing that, I'd probably be willing to do some sort of creation from a PHP array/data structure, which is arguably cleaner as it would be able to update tables to match the schema in situ. I am assuming this is a problem that has already been solved, so any links to example implementations with pros/cons would be useful!
Thanks in advance for any insight.
Apparently you can pass 65536 (the CLIENT_MULTI_STATEMENTS flag) when connecting to the database to allow multiple queries, i.e. making use of ; in one SQL string.
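For what it's worth, a sketch of that approach (connection details and statements are placeholders; with mysqli, mysqli_multi_query() is the usual route, and MYSQLI_CLIENT_MULTI_STATEMENTS is the named constant for 65536):
$link = mysqli_init();
mysqli_real_connect($link, 'localhost', 'user', 'password', 'mydb',
    3306, null, MYSQLI_CLIENT_MULTI_STATEMENTS);
$sql = "CREATE TABLE IF NOT EXISTS t1 (id INT); INSERT INTO t1 VALUES (1);";
if (mysqli_multi_query($link, $sql)) {
    // Drain every result set, or the next query fails with "commands out of sync".
    do {
        if ($result = mysqli_store_result($link)) {
            mysqli_free_result($result);
        }
    } while (mysqli_more_results($link) && mysqli_next_result($link));
}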
You could also just read in the contents of the .sql files, explode by ; if necessary, and run the queries inside a transaction to make sure they all execute properly (a sketch follows below).
Another option would be to have a look at Phing and dbdeploy to manage databases.
If you're using this to migrate data between systems, consider using the LOAD DATA INFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/load-data.html) after having used SELECT ... INTO OUTFILE (http://dev.mysql.com/doc/refman/5.1/en/select.html)
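Here is a sketch of the read-and-split idea mentioned above (the file path is a placeholder, and naive splitting on ; breaks if a statement body contains a literal semicolon):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$statements = array_filter(array_map('trim',
    explode(';', file_get_contents('install/schema.sql'))));
$mysqli->begin_transaction();
try {
    foreach ($statements as $statement) {
        $mysqli->query($statement);
    }
    $mysqli->commit();
} catch (mysqli_sql_exception $e) {
    $mysqli->rollback();
    throw $e;
}
One caveat: MySQL DDL statements such as CREATE TABLE cause an implicit commit, so the transaction only really protects the data-loading statements.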
You can run the schema creation/update commands via the standard mysql_* PHP functions. If the query() command, as you call it, allows only one statement, just call it many times.
I really don't get why you require everything to be in the same call.
You should check for errors after each statement and take corrective action if one fails (unless you are using InnoDB, in which case you can wrap all the statements in a transaction and roll back if anything fails).

find time to execute MySQL query via PHP

I've seen this question around the internet (here and here, for example), but I've never seen a good answer. Is it possible to find the length of time a given MySQL query (executed via mysql_query) took via PHP?
Some places recommend using PHP's microtime function, but this seems like it may be inaccurate. The mysql_query call may be bogged down by network latency, a sluggish system that isn't responding to your query quickly, or some other unrelated cause. None of these is directly related to the quality of your query, which is the only thing I really want to test here. (Please mention in the comments if you disagree!)
My answer is similar, with one twist: record the time before and after the query, but do it within your database query class. Oh, you say you are using mysql_query directly? Well, now you know why you should use a class wrapper around those raw PHP database functions (pardon the snark). Actually, one is already built, called PDO:
http://us2.php.net/pdo
If you want to extend the functionality to do timing around each of your queries... extend the class! Simple enough, right?
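For instance, a hypothetical sketch (the class and property names are made up):
// Times every call to query(); the variadic override accepts any parent signature.
class TimedPDO extends PDO
{
    public $timings = [];

    #[\ReturnTypeWillChange]
    public function query(...$args)
    {
        $start = microtime(true);
        $stmt  = parent::query(...$args);
        $this->timings[] = ['sql' => $args[0], 'seconds' => microtime(true) - $start];
        return $stmt;
    }
}
After a few queries through new TimedPDO($dsn, $user, $pass), the timings property holds each SQL string with its elapsed wall-clock time.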
If you are only checking the quality of the query itself, then remove PHP from the equation. Use a tool like the MySQL Query Browser or SQLyog.
Or if you have shell access, just connect directly. Any of these methods will be superior in determining the actual performance of your queries.
At the PHP level you would pretty much need to record the time before and after the query.
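In its simplest form (shown with mysqli; $link and the query are placeholders):
$start = microtime(true);              // high-resolution wall-clock time
$result = mysqli_query($link, 'SELECT COUNT(*) FROM big_table');
$elapsed = microtime(true) - $start;
printf("Query took %.4f seconds\n", $elapsed);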
If you only care about the performance of the query itself, you can enable the slow query log in your MySQL server: http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html That will log all queries that take longer than a specified number of seconds.
If you really need query information maybe you could make use of SHOW PROFILES:
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
Personally, I would use a combination of microtime-ing, the slow query log, mytop, and analyzing problem queries with the MySQL client (command line).
