mysqli multi_query followed by query [duplicate] - php

This question already has answers here:
"Commands out of sync; you can't run this command now" - Caused by mysqli::multi_query
I am currently doing the following:
$mysqli = new mysqli($server, $username, $password, $database);
$mysqli->multi_query($multiUpdates);
while ($mysqli->next_result()) {;} // Flushing results of multi_queries
$mysqli->query($sqlInserts);
Is there a faster way to discard the results?
I do not need them and just want to run the next query, but without flushing I get the error:
Commands out of sync; you can't run this command now
The problem is that the while ($mysqli->next_result()) {;} loop takes about 2 seconds, which is a waste for results I don't want.
Any better solutions out there?

I found a faster solution, which saves about 2-3 seconds when updating 500 records and inserting 500 records.
function newSQL() {
    global $server, $username, $password, $database;
    return new mysqli($server, $username, $password, $database);
}

$mysqli = newSQL();
$mysqli->multi_query($multiUpdates);
$mysqli->close();            // dropping the connection discards the pending result sets

$mysqli = newSQL();
$mysqli->query($sqlInserts);
$mysqli->close();
Not sure how practical it is, but it works well for speed.

If closing and reopening the connection works for you, then you might be better off changing the order:
$mysqli = newSQL();
$mysqli->query($sqlInserts);
$mysqli->multi_query($multiUpdates);
$mysqli->close();
If you don't care which runs first, the query runs more predictably. As soon as it finishes, MySQL will return the results of the insert statement to the PHP client (probably mysqlnd). The connection will then be clear and can accept the next request. No need to close and reopen after a query. So you save the time it takes to close and reopen the connection with this order.
The multi_query is more complicated. It returns the results of the first update before the PHP code continues. At this point, we don't know whether the later updates have run or not. The database won't accept any more queries on this connection until it has finished with the multi_query, including passing the results back with next_result. So what happens when you close the connection?
One possibility is that it blocks until the multi_query is finished but does not require the results. In that case, closing the connection essentially skips the part where the database server returns the results, but it still has to wait for them to be generated. Another possibility is that the connection closes immediately and the database continues with the query (this is definitely what happens if the connection is simply dropped without formally closing it, as the database server won't know the connection is broken until it finishes or times out).
You'll sometimes see the claim that query and multi_query take the same time. This is not true. Under some circumstances, multi_query can be slower. Note that with a normal query (using the default MYSQLI_STORE_RESULT), the database can simply return the result as soon as it finishes. But with multi_query (or with MYSQLI_USE_RESULT), it has to retain the result on the database server. If the database server stores the result, it may have to page it out of memory or it may deliberately store the result on disk. Either way, it frees up the memory but puts the result in a state where it takes more time to access (because disk is slower than memory).
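To make that distinction concrete, here is a minimal sketch of the two result modes; the table and column names are illustrative:
// Buffered (MYSQLI_STORE_RESULT, the default): the whole result set is
// shipped to the PHP client as soon as the query finishes, so the
// server can release it immediately.
$res = $mysqli->query("SELECT id FROM t");

// Unbuffered (MYSQLI_USE_RESULT): rows stay on the server and stream
// out as you fetch them; the connection is busy until the result is
// fully read and freed.
$res = $mysqli->query("SELECT id FROM t", MYSQLI_USE_RESULT);
while ($row = $res->fetch_row()) {
    // process $row
}
$res->free(); // required before the connection can run another query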
NOTE for other readers: multi_query is harder to use safely than query. If you don't really know what you are doing, you are probably better off using PDO than mysqli (because PDO does more of the work for you), and you are almost certainly better off doing your queries one at a time. You should only use multi_query if you understand why it increases the risk of SQL injection and how you are avoiding that risk. And one usually doesn't need it.
The only real advantage of multi_query is that it allows you to send your queries in one block. If you already have the queries in a block (e.g. from a database backup), this might make sense. But it generally doesn't make sense to aggregate separate queries into a block just to use multi_query. It might make more sense to use INSERT ... ON DUPLICATE KEY UPDATE to update multiple rows in one statement. Of course, that trick won't work unless your updates target a unique key. But if they do, you might be able to combine both the inserts and the updates into a single statement that you can run via query, as sketched below.
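A hedged sketch of that single-statement trick; the scores table and its unique id column are illustrative:
// One statement both inserts new rows and updates existing ones,
// assuming id is a PRIMARY or UNIQUE key on the (hypothetical) table.
$sql = "INSERT INTO scores (id, points)
        VALUES (1, 10), (2, 20), (3, 30)
        ON DUPLICATE KEY UPDATE points = VALUES(points)";
$mysqli->query($sql);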
If you really need more speed, consider using something other than PHP. PHP produces HTML in response to web requests. But if you don't need HTML/web requests and just want to manipulate a database, any shell language will likely be more performant. And certainly multithreaded languages with connection pools will give you more options.

Related

How can I block the database with mysqli in php to avoid another transactions in a script execution?

I have to execute a set of prepared queries to load data into the DB (this data load takes about 3 hours), and I'm looking for a way to block the database while the script is executing, because if another client executes any update or insert, the integrity of the data may be broken.
How can I do this?
Thanks.
If I understood you correctly, by "block" you mean terminate your connection so that no other queries can take place. Here is how you terminate your connection in MySQLi:
for MySQLi Procedural:
mysqli_close($conn);
for MySQLi Object-oriented:
$conn->close();
In both cases, $conn is the variable that stored the connection made.
In terms of security, I don't see this being an issue, because it would only be a security issue if someone had access to your $conn object, and that object is safely stored in your PHP code. How safe that is depends on many other factors, but I hope I at least answered your initial question :)
What you are looking for is called "table locking".
You may lock either an entire table or a certain row, depending on your needs.
After locking a table, no other client will be able to perform a conflicting operation on it; a minimal example follows. You can start reading here: https://dev.mysql.com/doc/refman/5.5/en/innodb-lock-modes.html
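A minimal sketch of a write lock held around the long load, assuming an existing mysqli connection $conn and a hypothetical orders table:
// While the WRITE lock is held, no other session can read or write the
// table; UNLOCK TABLES (or closing the connection) releases it.
$conn->query("LOCK TABLES orders WRITE");

// ... run the 3-hour data load here ...

$conn->query("UNLOCK TABLES");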

Best way to call and "forget about" an async mysqlnd INSERT query

Setup
I'm working with PHP and MySQL in this situation.
Let's say I have my mysqli connection like this:
$link = new mysqli('localhost', 'root', 'password', 'A_Database');
I have installed mysqlnd to perform asynchronous MySQL queries using the MYSQLI_ASYNC parameter:
$link->query("INSERT INTO `A_Table` VALUES('stuff!')", MYSQLI_ASYNC);
Goal
I just want to insert a record, and I don't need to retrieve it until the distant future so I'm not concerned about how long it takes for the asynchronous query to finish up, and I don't need to perform some final action when I know the query is complete. I do need to perform other unrelated mysql queries once I'm past the section of the code where the insert queries occur.
Problem
Performing a query like this causes later queries in the script to fail with out-of-sync errors. In order to deal with that, I had to add something like the following code after every async query:
$links = $errors = $reject = array($link);
if ($link->poll($links, $errors, $reject, 1)) {
    foreach ($links as $resultLink) {
        if ($result = $resultLink->reap_async_query()) {
            if (is_object($result)) {
                $result->free();
            }
        }
    }
}
This effectively stops the out-of-sync errors and my code works fine.
Two things still trouble me though:
I don't want to have to bother with this, because I'm only performing insert queries and I don't care about having some response in my code when I know the inserts are complete.
The polling code runs really, really slowly on my server; far slower than performing a regular query and synchronously getting the results back.
Recap
I want to run the insert query with two requirements: the query is non-blocking (asynchronous), and I can still perform other MySQL queries later on. I just want to insert-query, 'forget about it', and move on with my code.
Any suggestions as to what the best way to do this is?
If your goal is to insert a record and continue executing code without worrying about the result, you can use the INSERT DELAYED syntax without the need for the ASYNC flag:
$link->query("INSERT DELAYED INTO `A_Table` VALUES('stuff!')");
It returns right away, and you can continue executing new queries on the same connection. We use this in our logging system. (Note that INSERT DELAYED only works with engines such as MyISAM and MEMORY, not InnoDB; it is deprecated as of MySQL 5.6 and ignored in 5.7 and later, where the server simply runs a normal INSERT.)
If you want to be able to fire-and-forget your queries, your current approach is mostly correct, although I would suggest that the slowness you are experiencing is possibly because you have '1' set as the 'secs' parameter in $link->poll($links, $errors, $reject, 1). This means the poll will wait up to 1 second for the query to return. If you set it to 0, it will not wait; it returns immediately with an answer as to whether the query is complete.
Secondly, if you don't need the answer from this query, you can do other things while it is executing; you don't have to sit in an async-poll loop waiting for it to return. This is what ASYNC queries were designed for. I would suggest adding a check to your query-execute method for an outstanding ASYNC query, so that only when you need to fire off a new query does it poll the server and wait for the previous query to finish and become available (a sketch of this follows below).
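A minimal sketch of that idea, assuming $link is a mysqlnd-backed mysqli connection and a hypothetical $pending flag tracks the outstanding ASYNC query:
// Fire-and-forget: start the insert and return control to the script.
$link->query("INSERT INTO `A_Table` VALUES('stuff!')", MYSQLI_ASYNC);
$pending = true;

// ... unrelated PHP work happens here, while the INSERT runs ...

// Reap the async result only when the connection is needed again.
if ($pending) {
    do {
        $links = $errors = $reject = array($link);
        // 0 seconds / 50000 microseconds: returns almost immediately
        // instead of blocking for a full second.
        $ready = $link->poll($links, $errors, $reject, 0, 50000);
    } while ($ready === 0);

    foreach ($links as $l) {
        if (($result = $l->reap_async_query()) && is_object($result)) {
            $result->free();
        }
    }
    $pending = false;
}

$link->query("SELECT 1"); // the connection is free for normal queries again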
Alternatively, if you want to run multiple queries that don't have any impact or dependency on each other, there's nothing to stop you setting up multiple database connections and running ASYNC queries on all of them in parallel, or setting up new connections and executing queries as and when you need to.
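For instance, a hedged sketch of two independent ASYNC inserts running in parallel on two connections (credentials reused from the question):
$linkA = new mysqli('localhost', 'root', 'password', 'A_Database');
$linkB = new mysqli('localhost', 'root', 'password', 'A_Database');

// Both inserts run on the server at the same time, one per connection.
$linkA->query("INSERT INTO `A_Table` VALUES('one')", MYSQLI_ASYNC);
$linkB->query("INSERT INTO `A_Table` VALUES('two')", MYSQLI_ASYNC);

// Poll both connections together until each insert has been reaped.
$pending = array($linkA, $linkB);
while ($pending) {
    $links = $errors = $reject = $pending;
    if (!mysqli::poll($links, $errors, $reject, 1)) {
        continue; // nothing ready yet; poll again
    }
    foreach ($links as $l) {
        if (($result = $l->reap_async_query()) && is_object($result)) {
            $result->free();
        }
        $idx = array_search($l, $pending, true);
        if ($idx !== false) {
            unset($pending[$idx]); // this connection is done
        }
    }
}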
If you want to entirely outsource your SQL to another process (without cron), one way would be to set up a persistent monitoring script, using Supervisor/Gearman for example, so you can send queries to it and have it process them in a process (or even on a server) different from the local one. This has the benefit that the remote script runs as a CLI application, so it has more freedom to use forks and multi-threaded processing, and can operate in a less exposed environment.
Another approach (optionally using cron) is to use a write-optimised (maybe in-memory) local staging table: execute your inserts quickly into that table, then have another process read it intermittently (or set up triggers that watch for new entries) and move the newly inserted rows into the slower main table. A sketch of this follows.
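A hedged sketch of that staging-table pattern; the log_staging / log_main table names are illustrative, and the mover would typically run from cron or a daemon:
// Hot path: cheap insert into the small, write-optimised staging table.
$link->query("INSERT INTO log_staging (msg) VALUES ('stuff!')");

// Mover process (run from cron or a daemon): drain the staging table
// into the main table. The WRITE locks close the race between the copy
// and the delete.
$link->query("LOCK TABLES log_staging WRITE, log_main WRITE");
$link->query("INSERT INTO log_main (msg) SELECT msg FROM log_staging");
$link->query("DELETE FROM log_staging");
$link->query("UNLOCK TABLES");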

Close db connection after each query or end of script

It's a PHP application using mysqli.
Someone suggested closing the DB connection right after each query.
The current system uses a singleton database connection, so creating too many new connections is not the issue here; only unused open connections are (say, the script has not finished executing and the connection has not yet been closed).
So there seems to be something to balance: the cost of keeping the connection open until the script finishes versus multiple unnecessary open/close cycles of the DB connection per script. I tend to think the first is safer, but I am not very sure. For example, if I do:
$userA->sendMessageTo($userB);
And inside this:
$userA->send($userB);
$userA->useSomePoints();
$userA->flushPointsBalance();
....
Imagine each method performs some database operation, but this is all one script call/request. If the connection is opened and closed around each query, this will certainly happen more than once, compared to not closing it right after each query inside each method.
So which way is better?
Generally, having your DB wrapper class (or ORM) create a single connection for the entire request and only close it during cleanup (either via a destructor or via PHP's own cleanup) is okay. If this is a problem, it probably means that something slow is happening between your opening and closing of connections, and that is what you should be addressing instead.
causes could be:
slow queries that don't make use of indices
some other high latency blocking IO (file reading, decoding, etc)
You'll get better gains for your effort by addressing those issues rather than by changing how you open and close connections. A sketch of the single-connection pattern follows.
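A minimal sketch of that single-connection-per-request pattern; the class name and credentials are illustrative, and it assumes PHP 7.4+ for the typed property:
class Db
{
    private static ?mysqli $conn = null;

    // Lazily open one connection and reuse it for the whole request.
    public static function get(): mysqli
    {
        if (self::$conn === null) {
            self::$conn = new mysqli('localhost', 'user', 'pass', 'app');
        }
        return self::$conn;
    }
}

// Every method shares the same connection; PHP closes it automatically
// during script cleanup, so no per-query close is needed.
Db::get()->query("UPDATE users SET points = points - 1 WHERE id = 42");
Db::get()->query("INSERT INTO messages (body) VALUES ('hi')");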

Are mysql_query commands executed in a top->bottom fashion as a .php file is executed?

I am attempting to build a progress bar for a long-running page (yes, indexes are the obvious solution, but currently the dataset/schema prohibits partitioning/proper indexing), so I plan to put a GUID/unique ID in a query comment and track progress with SHOW FULL PROCESSLIST via AJAX. But the key to this rests on whether the sequential queries are executed in order by PHP; does anyone know?
MySQL as a database server uses multiple threads to handle multiple running queries. It allocates a thread when it receives a query from a client, i.e. a PHP connection, an ODBC connection, etc.
However, since you mentioned mysql_query, I think there can be two things in your mind:
If you call mysql_query() multiple times, each query is sent to MySQL only after the previous query has completely executed and its result has been returned to PHP. So MySQL will appear to work sequentially, although it is actually PHP that waits for one query to finish before sending the next.
In MySQL 5 and PHP 5 there is a function called mysqli_multi_query(), with which you can send multiple queries to MySQL at once, without PHP waiting after each one; all the results come back on the same connection. Note, however, that the statements are still executed one after another on that connection's single server thread; the saving is in client-server round trips, not in parallel execution.
As Babbage once said, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." Calls to mysql_query, like calls to any other function, execute in the order that they are reached in program flow.
Order in the file is mostly irrelevant. It's order of execution that matters.
<?php
myCmdFour();
myCmdTwo();
myCmdThree();
myCmdTwo();

function myCmdTwo() {
    mysql_query(...);
}

function myCmdThree() {
    mysql_query(...);
}

function myCmdFour() {
    mysql_query(...);
}

myCmdFour();
myCmdThree();
myCmdTwo();
myCmdTwo();
?>
Although anyone whose PHP files look like that needs to seriously rethink things.

proper way to solve mysql max user connection error

I'm using PHP with a MySQL database, as both are open source and easy to use.
I get a problem when I execute inserts and/or updates of millions of rows one after another.
While this operation runs, I get the MySQL error:
'max_user_connections' active connections
What is the best way to solve this problem?
I don't want to use another database or a language other than PHP.
connect_db();
$query = "insert into table(mobno,status,description,date,send_deltime,sms_id,msg,send_type) values('".$to."','".$status."','".$report."','','".$timewsha1."','".$smsID."','','".$type."')";
$result = mysql_query($query) or die("Query failed : " . mysql_error());
This query will execute thousands of times, and then the server gives the connection error.
First of all, ask your hosting server administrator about the maximum number of concurrent active connections available to the MySQL database. This is the most basic piece of information to have.
If your page(s) load in a decent amount of time and release the connection once the page is loaded, it should be fine. The problem occurs when your script takes a long time to retrieve information from the database or holds connections open.
Since you are executing INSERT and/or UPDATE operations on millions of rows, you may well have such a problem.
Additionally, if you fail to close connections in your script(s), it is possible that someone will load a page and, instead of the connection being closed when the page has loaded, it is left open. No one else can then use that connection. So please make sure that the database connection is closed once all your MySQL/SQL queries have executed. Also check how many connections your server allows (the MySQL default for max_connections is 151) and ask for the limit to be raised if necessary.
Also make sure that you are not using persistent connections (created by the built-in function mysql_pconnect()), since these stay open after the script ends and can quickly exhaust the server's connection pool.
Hope it helps.
// Prepare the VALUES tuple for each record first (the $records array
// and its fields stand in for however the data is actually fetched)...
$sub_query = array();
foreach ($records as $r) {
    $sub_query[] = "('".$r['to']."','".$r['status']."','".$r['report']."','','".$r['timewsha1']."','".$r['smsID']."','','".$r['type']."')";
}

// ...then run one multi-row INSERT instead of thousands of single-row queries.
$query  = "insert into table(mobno,status,description,date,send_deltime,sms_id,msg,send_type) values ";
$query .= implode(',', $sub_query);
mysql_query($query);
So, a remote app calls into this script, sends it a list of values, and then this query is executed once, right? It's not in a foreach, for, or while loop? When you say it will be executed millions of times, you don't mean in one sitting, do you? If it is in a loop, then move the db connect outside of the loop; otherwise it will attempt to connect again on each iteration. Also, remember to call mysql_close at the end of the script, just in case.
mysql_pconnect() creates a persistent connection, and that is the way to go if you don't want to exhaust your server's connection pool by reconnecting on every request.
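For reference, a minimal sketch of the persistent-connection call in the same legacy mysql_* API the question uses (credentials and database name are illustrative):
// Reuses an already-open connection with identical credentials instead
// of opening a new one on every request.
$conn = mysql_pconnect('localhost', 'user', 'password');
mysql_select_db('mydb', $conn);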
