Is it possible to SELECT specific rows and DELETE the selected result in ONE request?
$res = $Connection->query("SELECT * FROM tasks");
if ($res->num_rows > 0) {
    while ($row = $res->fetch_assoc()) {
        // ...
    }
}
The problem is that I have a limited number of queries to my SQL database, so I want to minimize them as much as possible.
You can't do that using the regular mysql API in PHP. Just execute two queries; the second one will be so fast that it won't matter. This is a typical example of micro-optimization: don't worry about it, the timing difference is negligible.
For the record, since you are worried about the number of queries: it can be done using mysqli and the mysqli_multi_query() function.
P.S. - I haven't tried this, but since mysqli_multi_query() is there in the documentation, it might help... :)
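For what it's worth, a minimal sketch of the mysqli_multi_query() approach, assuming a tasks table with a done flag (both names are made up for the example). Note that this is still two statements server-side; it only saves the extra round trip:

$sql = "SELECT * FROM tasks WHERE done = 1; DELETE FROM tasks WHERE done = 1";

if ($Connection->multi_query($sql)) {
    // First result set: the rows matched by the SELECT
    if ($result = $Connection->store_result()) {
        while ($row = $result->fetch_assoc()) {
            // ... work with $row ...
        }
        $result->free();
    }
    // Advance past the result of the DELETE; another client could touch the
    // rows between the two statements, so consider wrapping this in a transaction
    $Connection->next_result();
}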
SELECT and DELETE each need their own request; there is no SQL statement in the official documentation that can do both in one.
I have an SQL query which can return quite a lot results (something like 10k rows) but I cannot use the SQL LIMIT parameter, as I don't know the exact amount of needed rows (there's a special grouping done in PHP). So the plan was to stop fetching rows once I have enough.
Since PDO normally operates in buffered mode, which fetches the whole result set and passes it to PHP, I switched PDO to unbuffered mode with
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
Now I expected that executing the query should take about the same time no matter what LIMIT I pass. So basically
$result = $pdo->query($query);
$count = 0;
while ($row = $result->fetch()) {
    ++$count;
    if ($count > 10) break;
}
should execute in about the same time for
$query = 'SELECT * FROM myTable';
and
$query = 'SELECT * FROM myTable LIMIT 10';
However the first one takes 8 seconds whereas the second one executes instantly. So it seems like the unbuffered query also waits until the whole result set is fetched - which shouldn't be the case according to the documentation.
Is there any way to get the query result instantly in PHP with PDO and stop the query once I have enough results?
Database applications like Sequel Pro can do this (I can hit cancel after 1 second and get the results that were already fetched up to that point), so it can't be a general problem with MySQL servers.
I can work around the problem by choosing a very high LIMIT that always leaves enough valid results after my grouping. But since performance is an issue, I'd like to query only as many entries as really needed. Please don't suggest anything that involves grouping in MySQL; its terrible performance is the reason we have to change this behaviour.
Now I expected that executing the query should take about the same time no matter what LIMIT I pass. So basically
This might not be completely true. While you avoid the overhead of receiving all the rows, they are all still produced server-side (without a limit)! You do get the advantage of keeping most of the results server-side until you need them, but as far as I know the server actually performs the whole query first. I'm not sure how complicated your query is, but this could be the issue?
Say, for instance, you have a very slow join (not indexed) but only want the first 10 rows by id: the query will pick those 10 based on the index and then do the join only for those 10. That will be quick.
But if you don't actually limit and instead ask for the result in batches, the complete join has to be done (slow!) and the result set is then released in parts.
A quicker method might be to repeat your limited query until you have your result, as sketched below. I know this increases overhead, but it might be way quicker. The only way to know is to test.
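A rough sketch of that idea, reusing the question's myTable and PDO handle; the batch size is arbitrary and enoughRows() is a hypothetical stand-in for the PHP-side grouping check:

$batchSize = 100; // arbitrary; tune by testing
$offset    = 0;
$rows      = [];

do {
    $stmt = $pdo->prepare('SELECT * FROM myTable LIMIT ? OFFSET ?');
    $stmt->bindValue(1, $batchSize, PDO::PARAM_INT);
    $stmt->bindValue(2, $offset, PDO::PARAM_INT);
    $stmt->execute();

    $batch   = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $rows    = array_merge($rows, $batch);
    $offset += $batchSize;

    // stop when the grouping has enough rows or the table is exhausted
} while (count($batch) === $batchSize && !enoughRows($rows)); // enoughRows() is hypothetical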
In response to your comment: this is from the manual:
Unbuffered MySQL queries execute the query and then return a resource while the data is still waiting on the MySQL server for being fetched.
So it executes the query. The complete query. As I tried to explain above, it will not be as quick as the same query with LIMIT 10, because it doesn't perform a partial query! The fact that a different database tool can do this does not mean MySQL can...
Have you tried using prepare/execute instead of query, and putting a $stmt->closeCursor(); call after the break?
$stmt = $dbh->prepare($query);
$stmt->execute();
$count = 0;
while ($row = $stmt->fetch()) {
    ++$count;
    if ($count > 10) break;
}
$stmt->closeCursor();
I was browsing around Stack Overflow trying to find out how to limit an SQL query with a while loop, and I came across this code.
$count = 0;
while ($count < 4 && $info = mysql_fetch_assoc($result)) {
    // stuff
    $count++;
}
Q 1: What is the difference between this code and using the SQL LIMIT clause?
Q 2: For what reason would somebody want to use this code, rather than using LIMIT?
With this code, the MySQL server will send all the results to the client, but the client ignores everything after the 4th row. So the server has to do more work, and more bandwidth will be used between the client and server.
They might want to use mysql_num_rows() to find out how many total rows were selected, even though they only want to display the first 4. However, MySQL provides a way to do that with LIMIT: you can put the SQL_CALC_FOUND_ROWS option in the SELECT clause and then use SELECT FOUND_ROWS() to get the total number of rows. So there's no good reason, except that they don't know about this feature.
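For illustration, a sketch with the question's old mysql_* API (note that SQL_CALC_FOUND_ROWS is MySQL-specific and deprecated as of MySQL 8.0.17; the table name is made up):

// Fetch only 4 rows, but have the server remember the full match count
$result = mysql_query("SELECT SQL_CALC_FOUND_ROWS * FROM items LIMIT 4");

// Total number of rows the query would have returned without the LIMIT
$total = mysql_fetch_row(mysql_query("SELECT FOUND_ROWS()"));
echo "Showing 4 of {$total[0]} rows";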
Everything @Barmar said is right on. Code like that will cause lots of problems as your result sets start to grow. Let the database do what it's good at doing: let it apply the limit and return only the results you want/need. Just think of what happens when you do a SELECT with no LIMIT clause in the command-line client on a table with thousands of rows... it just goes on and on.
One more thing: I wouldn't recommend using mysql_num_rows(), as it's a deprecated function. Might as well go along with mysqli or PDO.
I often run into the situation where I want to determine if a value is in a table. Queries happen often in a short time period, and similar values are searched for, so I want to do this in the most efficient way. What I have now is
if ($statement = mysqli_prepare($link, 'SELECT name FROM inventory WHERE name = ? LIMIT 1')) // name and inventory are arbitrarily chosen for this example
{
    mysqli_stmt_bind_param($statement, 's', $_POST['check']);
    mysqli_stmt_execute($statement);
    mysqli_stmt_bind_result($statement, $result);
    mysqli_stmt_store_result($statement); // needed for mysqli_stmt_num_rows
    mysqli_stmt_fetch($statement);
}
if (mysqli_stmt_num_rows($statement) == 0) {
    // value in table
} else {
    // value not in table
}
Is it necessary to call all the mysqli_stmt_* functions? As discussed in this question, for mysqli_stmt_num_rows() to work the entire result set must be downloaded from the database server. I'm worried this is wasteful and takes too long, since I know there will be 1 or 0 rows. Would it be more efficient to use the SQL COUNT() function and not bother with mysqli_stmt_store_result()? Any other ideas?
I noticed the prepared statement manual says "A prepared statement or a parametrized statement is used to execute the same statement repeatedly with high efficiency". What is highly efficient about it, and what does "the same statement" mean? For example, if two separately prepared statements evaluated to be the same, would it still be more efficient?
By the way I'm using MySQL but didn't want to add the tag as a solution may be non-MySQL specific.
if ($statement = mysqli_prepare($link, 'SELECT name FROM inventory WHERE name = ? LIMIT 1')) // name and inventory are arbitrarily chosen for this example
{
    mysqli_stmt_bind_param($statement, 's', $_POST['check']);
    mysqli_stmt_execute($statement);
    mysqli_stmt_store_result($statement);
}
if (mysqli_stmt_num_rows($statement) == 0) {
    // value not in table
} else {
    // value in table
}
I believe this would be sufficient. Note that I switched // value not in table and // value in table.
It really depends on the type of field you are searching for. Make sure you have an index on that field and that the index fits in memory. If it does, use SELECT COUNT(*) FROM <your_table> WHERE <condition_which_uses_index> LIMIT 1. The important part is the LIMIT 1, which prevents unnecessary lookups. You can run EXPLAIN SELECT ... to see which indexes are used, and possibly add a hint or ban some of them; that's up to you. COUNT(*) is very fast: it is optimized by design to return a result quickly (MyISAM only; for InnoDB the whole story is a bit different because of ACID). The main difference between COUNT(*) and SELECT <some_field(s)> is that COUNT doesn't read any row data, and with (*) it doesn't care whether some field is NULL or not; it just counts rows using the most suitable index (chosen internally). Actually, I'd suggest that even for InnoDB this is the fastest technique.
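A hedged sketch of that check, reusing the question's inventory/name example and assuming an index on name:

// COUNT(*) always returns exactly one row, so no store_result/num_rows
// bookkeeping is needed
$stmt = mysqli_prepare($link, 'SELECT COUNT(*) FROM inventory WHERE name = ?');
mysqli_stmt_bind_param($stmt, 's', $_POST['check']);
mysqli_stmt_execute($stmt);
mysqli_stmt_bind_result($stmt, $count);
mysqli_stmt_fetch($stmt);
mysqli_stmt_close($stmt);

if ($count > 0) {
    // value in table
}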
The use case also matters. If you want to insert a unique value, put a constraint on that field and use INSERT IGNORE; if you want to delete a value which may not be in the table, run DELETE IGNORE; and the same goes for UPDATE IGNORE.
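For example, a sketch assuming a UNIQUE index on inventory.name (the value is a placeholder):

// INSERT IGNORE skips duplicate-key errors, so no existence check is needed first
mysqli_query($link, "INSERT IGNORE INTO inventory (name) VALUES ('widget')");
echo mysqli_affected_rows($link); // 1 if inserted, 0 if it already existed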
The query analyzer determines by itself whether two queries are the same and manages the query cache; you don't have to worry about it.
The difference between a prepared and a regular query is that the former keeps the statement and the data separate, so the analyzer can tell which parts are dynamic and handle and optimize them better. It can do similar things for a regular query, but with a prepared statement we declare up front that we will reuse it later and give a hint about which data is variable and which is fixed. I'm not very well versed in MySQL internals, so you may want to ask such questions on more specific sites to understand the details.
P.S.: Prepared statements in MySQL are scoped to the session, so when the session they were defined in ends, they are deallocated. The exact behavior and possible internal MySQL caching are a subject for additional investigation.
This is the kind of thing in-memory caches are really good at. Something like this should work better than most micro-optimization attempts (pseudocode!):
function check_if_value_is_in_table($value) {
    if ($cache->contains_key($value)) {
        return $cache->get($value);
    }
    // run the SQL query here, put result in $result
    // note: I'd benchmark if using mysqli_prepare actually helps
    // performance-wise
    $cache->put($value, $result);
    return $result;
}
Have a look at memcache or the various alternatives.
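A concrete version of the pseudocode above, as a sketch only: it assumes the PECL memcached extension, a memcached server on localhost:11211, and reuses the question's inventory/name names; the key scheme and the 5-minute TTL are arbitrary choices.

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function check_if_value_is_in_table(Memcached $cache, mysqli $link, $value)
{
    $key = 'inventory:name:' . md5($value);

    // get() returns false on a miss; check the result code to tell a miss
    // apart from a cached false
    $cached = $cache->get($key);
    if ($cache->getResultCode() === Memcached::RES_SUCCESS) {
        return (bool) $cached;
    }

    // Cache miss: fall back to the database
    $stmt = $link->prepare('SELECT 1 FROM inventory WHERE name = ? LIMIT 1');
    $stmt->bind_param('s', $value);
    $stmt->execute();
    $stmt->store_result();
    $found = $stmt->num_rows > 0;
    $stmt->close();

    $cache->set($key, $found, 300); // cache for 5 minutes
    return $found;
}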
I'm using a PHP web service where I have performed a simple SELECT query and stored the result:
$result = run_query($get_query);
I now need to perform further querying on the data based on different parameters, which I know is possible via MySQL in the form:
SELECT *
FROM (SELECT *
      FROM customers
      WHERE CompanyName > 'g') AS sub
WHERE ContactName < 'g'
I do know that this performs two SELECT queries on the table. However, what I would like to know is whether I can simply use my previously saved result in the FROM section of the outer query, like this, and whether my belief that it helps performance by not querying the entire table again is true:
SELECT *
FROM ($result)
WHERE ContactName < 'g'
You can create a temporary table to hold the initial results and then use it to select the data in the second query. This will be faster only if your first query is slow.
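A sketch of that approach; run_query() in the question is opaque, so the PDO handle here is an assumption:

// Materialize the first query's result server-side...
$pdo->exec("CREATE TEMPORARY TABLE tmp_customers AS
            SELECT * FROM customers WHERE CompanyName > 'g'");

// ...then run the second filter against it without touching the base table
$rows = $pdo->query("SELECT * FROM tmp_customers WHERE ContactName < 'g'")
            ->fetchAll(PDO::FETCH_ASSOC);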
PHP and SQL are different languages and very different platforms. They often don't even run on the same computer. Your PHP variables won't interact at all with the MySQL server; you use PHP to create a string that happens to contain SQL code, but that's all. In the end, the only thing that counts is the SQL code you send to the server; how you manage to generate it is irrelevant.
Additionally, you can't really say how MySQL will run a query unless you obtain an explain plan:
EXPLAIN EXTENDED
SELECT *
FROM (SELECT *
      FROM customers
      WHERE CompanyName > 'g') AS sub
WHERE ContactName < 'g'
... but I doubt it'll read the table twice for your query. Memory is much faster than disk.
Thanks for the responses, everyone. Turns out what I was looking for was a "query of query", which isn't supported directly by PHP but I found a function over here which provides the functionality: http://www.tom-muck.com/blog/index.cfm?newsid=37
That was found from this other SO question: Can php query the results from a previous query?
I still need to do comparisons to determine whether it improves speed.
If I understand your question correctly, you want to know whether saving the FROM part of your SQL query in a PHP variable improves the performance of querying your SQL server. The answer is no, simply because the variable's value is just inserted into the query string.
As for whether performance is gained in PHP, the answer is most probably yes, but whether the gain is noticeable depends on the length of the variable's value (and how often you reuse the variable instead of building a new complete query).
Why not just get this data in a single query like this?
SELECT *
FROM customers
WHERE CompanyName > 'g'
AND ContactName < 'g'
In my script, I have about 15 SQL queries just for counting the number of rows and displaying the counts to the user.
What is the most efficient way?
Should I use:
$stmt=$cxn->prepare("SELECT id FROM items WHERE seller=?");
$stmt->execute(array($username));
echo $stmt->rowCount();
Or this:
$stmt=$cxn->prepare("SELECT count(*) as count FROM items WHERE seller=?");
$stmt->execute(array($username));
while($row=$stmt->fetch(PDO::FETCH_ASSOC))
echo $row['count'];
Thanks in advance.
The short answer is that COUNT(*) will be faster.
However, that assumes you're not using the data; if you are going to select the data anyway and also want a count, then use your first method:
("SELECT id FROM items WHERE seller=?");
If you have an index on the table, then that will return almost instantly.
The rowCount() method can be used not only for SELECT queries but also for UPDATE, INSERT, etc. So that's a benefit of it, as shown below.
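For instance (a sketch reusing the question's handle and table; the price column is made up):

// rowCount() reports the number of rows affected by the UPDATE
$stmt = $cxn->prepare("UPDATE items SET price = price * 1.1 WHERE seller = ?");
$stmt->execute(array($username));
echo $stmt->rowCount(); // rows actually changed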
However, according to the PDO documentation, the row count for SELECT statements is:
not guaranteed for all databases and should not be relied on for portable applications.
So in your case I'd suggest using COUNT() without worrying about performance, though it will be slightly faster anyway.
It will be faster to use the MySQL row count. Less bandwidth is needed between PHP and the database.