I am trying to run a couple of queries to swap sort order values in a database when up or down buttons are clicked. However, when the code below is executed, only the 2nd query seems to take effect.
if ($_POST['up']) {
    $sort_this  = $_POST['sort'];
    $sort_other = $_POST['sort'] - 1;

    $sql_this = "UPDATE portfolio SET sort = $sort_this - 1 WHERE sort = $sort_this";
    mysqli_query($conn, $sql_this);

    $sql_other = "UPDATE portfolio SET sort = $sort_other + 1 WHERE sort = $sort_other";
    mysqli_query($conn, $sql_other);
}
They both work perfectly fine on their own when I comment out the other; however, when both are enabled, the problem is as above. I have also tried running them with mysqli_multi_query, but that didn't work either.
Any ideas?
Thanks
Given the limited amount of data my best guess would be that they DO both execute but that they're not doing what you think they should be doing.
Say $_POST['sort'] is the number 3; this means $sort_this is also 3.
The first query will go through the database and update all 3's to 2's.
$sort_other will be 3 - 1 (i.e. 2), so the second query will go through the database and update all 2's to 3's, effectively undoing what the first query did (and also changing any other 2's to 3's).
You will never see the end result of the first query because the second query overwrites all the changes the first query made.
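The cancellation described above can be reproduced with a short PHP simulation; a plain array of sort values stands in for the portfolio table, and each loop plays the role of one UPDATE:

```php
// Simulation of the two UPDATEs against an array of sort values.
// Each foreach applies one UPDATE's effect to every matching "row".
function run_updates(array $rows, int $sort_this): array {
    $sort_other = $sort_this - 1;
    // UPDATE portfolio SET sort = $sort_this - 1 WHERE sort = $sort_this
    foreach ($rows as $i => $sort) {
        if ($sort === $sort_this) {
            $rows[$i] = $sort_this - 1;
        }
    }
    // UPDATE portfolio SET sort = $sort_other + 1 WHERE sort = $sort_other
    foreach ($rows as $i => $sort) {
        if ($sort === $sort_other) {
            $rows[$i] = $sort_other + 1;
        }
    }
    return $rows;
}

print_r(run_updates([1, 2, 3, 4], 3)); // [1, 3, 3, 4]: the old 3 and the old 2 both end up as 3
```

A safer swap targets the two specific rows, e.g. by updating by primary key rather than by sort value, or by parking the clicked row on a temporary sentinel value first.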
Also, pasting a variable directly into a query like you're doing is bad practice: it is prone to SQL injection. You can avoid this by using prepared statements: http://php.net/manual/en/pdo.prepared-statements.php
I have an SQL query which can return quite a lot results (something like 10k rows) but I cannot use the SQL LIMIT parameter, as I don't know the exact amount of needed rows (there's a special grouping done in PHP). So the plan was to stop fetching rows once I have enough.
Since PDO normally operates in buffered mode, which fetches the whole result set and passes it to PHP, I switched PDO to unbuffered mode with
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
Now I expected that executing the query should take about the same time no matter what LIMIT I pass. So basically
$result = $pdo->query($query);
$count = 0;
while ($row = $result->fetch()) {
    ++$count;
    if ($count > 10) break;
}
should execute in about the same time for
$query = 'SELECT * FROM myTable';
and
$query = 'SELECT * FROM myTable LIMIT 10';
However the first one takes 8 seconds whereas the second one executes instantly. So it seems like the unbuffered query also waits until the whole result set is fetched - which shouldn't be the case according to the documentation.
Is there any way to get the query result instantly in PHP with PDO and stop the query once I have enough results?
Database applications like Sequel Pro can do this (I can hit cancel after 1 second and get the results that were already queried up to that point), so it can't be a general problem with MySQL servers.
I can workaround the problem by choosing a very high LIMIT which always has enough valid results after my grouping. But since performance is an issue, I'd like to query only as many entries as really needed. Please don't suggest anything that involves grouping in MySQL, the terrible performance of that is the reason we have to change the behaviour.
Now I expected that executing the query should take about the same time no matter what LIMIT I pass. So basically
This might not be completely true. While you won't get the overhead of receiving all your results, they are all queried (without a limit)! You do get the advantage of keeping most of the results serverside until you need them, but your server actually does perform the whole query first as far as I know. I'm not sure how complicated your query is, but this could be the issue?
Say, for instance, you have a very slow join (not indexed) but only want the first 10 rows by id: with a LIMIT, your query will pick those 10 via the index and then do the join for only those 10. That will be quick.
But if you don't actually limit and instead ask for the result in batches, your complete join will have to be done (slow!) and the result set is then released in parts.
A quicker method might be to repeat your limited query until you have your result. I know this will increase overhead, but it might be way quicker. The only way to know is to test.
As a response to your comment: this is from the manual:
Unbuffered MySQL queries execute the query and then return a resource while the data is still waiting on the MySQL server for being fetched.
So it executes the query. The complete query. As I tried to explain above, it will not be as quick as the same query with a LIMIT 10, because it doesn't perform a partial query! The fact that a different DB engine does this does not mean MySQL can...
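The repeat-your-limited-query approach could be sketched like this; the page fetcher is passed in as a callable so the paging logic is visible on its own (with PDO it would wrap a query ending in a LIMIT clause; myTable is the question's example name):

```php
// Fetch rows page by page until $needed rows have been collected.
// $fetchPage(offset, size) returns an array of rows for that window,
// or an empty array when the table is exhausted.
function fetch_until(callable $fetchPage, int $needed, int $pageSize = 100): array {
    $rows = [];
    for ($offset = 0; count($rows) < $needed; $offset += $pageSize) {
        $page = $fetchPage($offset, $pageSize);
        if ($page === []) {
            break; // no more rows; return what we have
        }
        $rows = array_merge($rows, $page);
    }
    return array_slice($rows, 0, $needed);
}

// With PDO it might be wired up as (sketch):
// $fetchPage = fn($o, $s) => $pdo->query("SELECT * FROM myTable LIMIT $o, $s")->fetchAll();
```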
Have you tried using prepare/execute instead of query, and putting a $stmt->closeCursor(); call after the break?
$stmt = $dbh->prepare($query);
$stmt->execute();
$count = 0;
while ($row = $stmt->fetch()) {
    ++$count;
    if ($count > 10) break;
}
$stmt->closeCursor();
I often run into the situation where I want to determine whether a value is in a table. The queries happen often in a short time period, and with similar values being searched, so I want to do this in the most efficient way. What I have now is
if ($statement = mysqli_prepare($link, 'SELECT name FROM inventory WHERE name = ? LIMIT 1')) // name and inventory are arbitrarily chosen for this example
{
    mysqli_stmt_bind_param($statement, 's', $_POST['check']);
    mysqli_stmt_execute($statement);
    mysqli_stmt_bind_result($statement, $result);
    mysqli_stmt_store_result($statement); // needed for mysqli_stmt_num_rows
    mysqli_stmt_fetch($statement);
}
if (mysqli_stmt_num_rows($statement) == 0)
    // value in table
else
    // value not in table
Is it necessary to call all the mysqli_stmt_* functions? As discussed in this question, for mysqli_stmt_num_rows() to work, the entire result set must be downloaded from the database server. I'm worried this is a waste and takes too long, as I know there is 1 or 0 rows. Would it be more efficient to use the SQL COUNT() function and not bother with mysqli_stmt_store_result()? Any other ideas?
I noticed the prepared statement manual says "A prepared statement or a parametrized statement is used to execute the same statement repeatedly with high efficiency". What is highly efficient about it, and what does "same statement" mean? For example, if two separate prepared statements evaluated to be the same, would it still be more efficient?
By the way I'm using MySQL but didn't want to add the tag as a solution may be non-MySQL specific.
if ($statement = mysqli_prepare($link, 'SELECT name FROM inventory WHERE name = ? LIMIT 1')) // name and inventory are arbitrarily chosen for this example
{
    mysqli_stmt_bind_param($statement, 's', $_POST['check']);
    mysqli_stmt_execute($statement);
    mysqli_stmt_store_result($statement);
}
if (mysqli_stmt_num_rows($statement) == 0)
    // value not in table
else
    // value in table
I believe this would be sufficient. Note that I switched the // value not in table and // value in table comments.
It really depends on the type of the field you are searching. Make sure you have an index on that field and that the index fits in memory. If it does, use SELECT COUNT(*) FROM <your_table> WHERE <cond_which_uses_index> LIMIT 1. The important part is the LIMIT 1, which prevents unnecessary lookups. You can run EXPLAIN SELECT ... to see which indexes are used, and possibly hint at or ban some of them; that's up to you. COUNT(*) is very fast; it is optimized by design to return a result quickly (for MyISAM only; for InnoDB the whole thing is a bit different due to ACID). The main difference between COUNT(*) and SELECT <some_field(s)> is that COUNT doesn't read any row data, and with (*) it doesn't care whether a field is NULL or not; it just counts rows by the most suitable index (chosen internally). I'd suggest that even for InnoDB this is the fastest technique.
Also, the use case matters. If you want to insert a unique value, put a constraint on that field and use INSERT IGNORE; if you want to delete a value which may not be in the table, run DELETE IGNORE; and the same goes for UPDATE IGNORE.
The query analyzer determines by itself whether two queries are the same or not and manages the query cache; you don't have to worry about it.
The difference between a prepared and a regular query is that the first one contains the rule and the data separately, so the analyzer can tell which data is dynamic and handle it better, optimize it, and so on. It can do the same for a regular query, but with a prepared statement we declare that we will reuse it later, giving a hint about which data is variable and which is fixed. I'm not very well versed in MySQL internals, so you could ask such questions on more specific sites to understand the details.
P.S.: Prepared statements in MySQL are session-global, so they are deallocated when the session in which they were defined ends. The exact behavior and possible internal MySQL caching are a subject for additional investigation.
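One alternative the answers above don't mention (my own suggestion, using the question's example table and column names): wrapping the probe in EXISTS lets the server stop at the first matching row, so neither a count nor any row data is needed at all:

```sql
-- Returns 1 if at least one matching row exists, 0 otherwise;
-- the inner SELECT stops as soon as one row is found.
SELECT EXISTS(SELECT 1 FROM inventory WHERE name = 'some_value');
```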
This is the kind of thing in-memory caches are really good at. Something like this should work better than most micro-optimization attempts (pseudocode!):
function check_if_value_is_in_table($value) {
    if ($cache->contains_key($value)) {
        return $cache->get($value);
    }
    // run the SQL query here, put result in $result
    // note: I'd benchmark if using mysqli_prepare actually helps
    // performance-wise
    $cache->put($value, $result);
    return $result;
}
Have a look at memcache or the various alternatives.
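A concrete, runnable PHP version of the pseudocode above (a static array stands in for memcache, and the SQL probe is passed in as a callable, so nothing here needs a live database):

```php
// Memoized existence check. array_key_exists is used instead of isset
// so that cached "false" results are also honored.
function check_if_value_is_in_table(string $value, callable $lookup): bool {
    static $cache = [];
    if (array_key_exists($value, $cache)) {
        return $cache[$value];
    }
    $cache[$value] = $lookup($value); // e.g. run the prepared SELECT here
    return $cache[$value];
}
```

Swapping the static array for memcache or APCu keeps the cache warm across requests, which is where the real win is.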
I just have a simple question about CakePHP; it may also be a silly one.
Writing queries in CakePHP:
1. $output1 = $this->Modelname->query("Select * from tablename");
2. $output2 = $this->Modelname->query("Update tablename set .....");
When I execute the first query, i.e. $output1, it runs perfectly.
But when I run $output2, it doesn't run correctly.
What may be the problem?
I would recommend using CakePHP's own methods to query the database.
This way it will be much more secure, and things will be easier for you, even more so if you have related models.
At first it can take a while to learn, but you will soon realize its advantages.
Your first query would be equivalent to:
$this->Modelname->find("all");
And your second one to something like:
// Update: id is set to a numerical value
$this->Modelname->id = 2;
$this->Modelname->save($this->request->data);
There is one PHP file governing rotating ads that is causing serious server performance issues and "too many connections" SQL errors on the site. Here is the PHP script. Can anyone give me some insight into how to correct this, as I am a novice at PHP?
<?
require("../../admin/lib/config.php");

// connect to database
mysql_pconnect(DB_HOST, DB_USER, DB_PASS);
mysql_select_db(DB_NAME);

$i = 1;

function grab()
{
    $getBanner = mysql_query("SELECT * FROM sponsor WHERE active='Y' AND ID != 999 AND bannerRotation = '0' ORDER BY RAND() LIMIT 1");
    $banner = mysql_fetch_array($getBanner);
    if ($banner['ID'] == ''){
        mysql_query("UPDATE sponsor SET bannerRotation = '0'");
    }
    if (file_exists(AD_PATH . $banner['ID'] . ".jpg")){
        $hasAd = 1;
    }
    if (file_exists(BANNER_PATH . $banner['ID'] . ".jpg")){
        return "$banner[ID],$hasAd";
    } else {
        return 0;
    }
}

while ($i <= 3){
    $banner = grab();
    if ($banner != 0){
        $banner = explode(",", $banner);
        mysql_query("UPDATE sponsor SET bannerView = bannerView + 1 WHERE ID='$banner[0]'");
        mysql_query("UPDATE sponsor SET bannerRotation = '1' WHERE ID = '$banner[0]'");
        echo "banner$i=$banner[0]&hasAd$i=$banner[1]&";
        $i++;
    }
}
?>
I see you are not using mysqli.
The problem is that mysql_pconnect() opens a persistent connection to the database which is not closed at the end of execution, and as you are not calling mysql_close() anywhere, the connection never gets closed.
It's all in the manual: http://php.net/manual/en/function.mysql-pconnect.php
Well, the good news for your client is that the previous developer abandoned the project. He could only have done more damage if he had continued working on it.
This script is using ext/mysql, not ext/mysqli. It would be better to use mysqli or PDO_mysql, since ext/mysql is deprecated.
It's recommended to use the full PHP open tag syntax (<?php), not the short-tags syntax (<?). The reason is that not every PHP environment enables the use of short tags, and if you deploy code into such an environment, your code will be viewable by anyone browsing to the page.
This script does no error checking. You should always check for errors after attempting to connect to a database or submitting a query.
The method of using ORDER BY RAND() LIMIT 1 to choose a random row from a database is well known to be inefficient, and it cannot be optimized. As the table grows to have more than a trivial number of rows, this query is likely to be your bottleneck. See some of my past answers about optimizing ORDER BY RAND queries, or a great blog by Jan Kneschke on selecting random rows.
Even if you are stuck using ORDER BY RAND(), there's no need to call it three times to get three distinct random sponsors. Just use ORDER BY RAND() LIMIT 3. Then you don't need the complex and error-prone update against bannerRotation to ensure that you get sponsors that haven't been chosen before.
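A sketch of that single query, using the script's own table and columns (only ID is selected here, since the rest of the row isn't needed for this step):

```sql
-- Three distinct random sponsors in one pass; no bannerRotation
-- bookkeeping needed.
SELECT ID FROM sponsor
WHERE active = 'Y' AND ID != 999
ORDER BY RAND()
LIMIT 3;
```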
Using SELECT * fetches all the columns, even though they aren't needed for this function.
If a sponsor isn't eligible for random selection, i.e. if it has active!='Y' or if its ID=999, then I would move it to a different table. This will simplify your queries, and make the table of sponsors smaller and quicker to query.
The UPDATE in the grab() function has no WHERE clause, so it applies to all rows in the sponsor table. I don't believe this is intentional. I assume it should apply only to the single row WHERE ID=$banner['ID'].
This code has two consecutive UPDATE statements against the same row of the same table. Combine these into a single UPDATE statement that modifies two columns.
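A sketch of the combined statement (the literal 123 stands in for the selected banner's ID):

```sql
UPDATE sponsor
SET bannerView = bannerView + 1,
    bannerRotation = '1'
WHERE ID = 123;
```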
The grab() function appends values together separated by commas, and then explodes that string into an array as soon as it returns. As if the programmer doesn't know that a function can return an array.
Putting the $i++ inside a conditional block makes it possible for this code to run in an infinite loop. That means this script can run forever. Once a few dozen of these are running concurrently, you'll run out of connections.
This code uses no caching. Any script that serves ad banners must be quick, and doing multiple updates to the database is not going to be quick enough. You need to use some in-memory caching for reads and writes. A popular choice is memcached.
Why is this client coding their own ad-banner server so inexpertly? Just use Google DFP Small Business.
Yikes!
grab() is being called from within a loop, but is not parameterized. Nor does there seem to be any rationale for repeatedly calling it.
A 200% speedup is easily realizable.
The following code runs without any errors but doesn't actually delete anything:
$update = $mysqli->prepare('DELETE FROM table WHERE RetailerID = ? AND Amount = ? AND FXRate = ?');
$update->bind_param('iii', $rID, $base_value, $fx_rate);
$update->execute();
$update->close();
I have numerous mysqli prepared statements in this same file that execute fine, but this one is the only one that doesn't modify the table. No errors are shown, but the row isn't deleted from the table either. I have verified that $rID, $base_value, and $fx_rate hold the correct values, and a row is DEFINITELY present in the table that matches those values.
The only differences between this statement and the others are the parameters and the fact that it's a DELETE instead of a SELECT or UPDATE. I also tried doing a SELECT or UPDATE instead of the DELETE using the same WHERE parameters, but no luck. The issue seems to be that it's not finding a row that fits the WHERE parameters, but like I said, the row is definitely there.
Any ideas?
Is Amount an integer or a double? You're converting it to an integer ('iii'), but I presume it'll be 0.34 or similar. Try 'idi' instead.
Edit: the same applies to the rate; is that an integer or a double too?
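The effect of the type character can be seen without a database: binding with 'i' effectively sends the value truncated to an integer, so a fractional amount can never match a stored 0.34 (a small demonstration, not the poster's actual data):

```php
$amount = 0.34;            // a typical fractional Amount
var_dump((int) $amount);   // int(0): what an 'i' binding sends
var_dump((float) $amount); // float(0.34): what a 'd' binding sends
// So the corrected call would be:
// $update->bind_param('idd', $rID, $base_value, $fx_rate);
```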