I'm developing a PHP script and I want to know if I can improve the performance of this piece of code.
I have to run two MySQL queries to complete my task. Is there another way to do this with better performance?
$language = "en";
$checkLanguage = $db->query("select id from language where shortName='$language'")->num_rows;
if ($checkLanguage > 0) {
    $languageSql = $db->query("select id from language where shortName='$language'")->fetch_assoc();
    $languageId = $languageSql['id'];
} else {
    $languageSql = $db->query("insert into language(shortName) values('$language')");
    $languageId = $db->insert_id;
}
echo $languageId;
You can improve performance by storing the statement object in a variable; that way it is one less query:
$checkLanguage = $db->query("select id from language where shortName='$language'");
if ($checkLanguage->num_rows > 0) {
    $languageSql = $checkLanguage->fetch_assoc();
    $languageId = $languageSql['id'];
} else {
    $languageSql = $db->query("insert into language(shortName) values('$language')");
    $languageId = $db->insert_id;
}
echo $languageId;
A second option is to add a unique constraint on the shortName column of the language table.
If you insert a duplicate it will throw an error; if not, it will insert. This way you keep only one query, the INSERT, but you may need a try/catch to handle duplicates.
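A minimal sketch of that second option, assuming a UNIQUE index already exists on language.shortName and that mysqli is configured to throw exceptions (error code 1062 is MySQL's ER_DUP_ENTRY):

```php
// Assumes: ALTER TABLE language ADD UNIQUE (shortName) was run once.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw exceptions

$language = "en";
try {
    // A prepared statement also avoids the SQL-injection risk of interpolation.
    $stmt = $db->prepare("INSERT INTO language (shortName) VALUES (?)");
    $stmt->bind_param("s", $language);
    $stmt->execute();
    $languageId = $db->insert_id;          // new row: one query total
} catch (mysqli_sql_exception $e) {
    if ($e->getCode() == 1062) {           // 1062 = duplicate key
        // Duplicate: fall back to a SELECT for the existing id.
        $stmt = $db->prepare("SELECT id FROM language WHERE shortName = ?");
        $stmt->bind_param("s", $language);
        $stmt->execute();
        $languageId = $stmt->get_result()->fetch_assoc()['id'];
    } else {
        throw $e;
    }
}
echo $languageId;
```

On the happy path this is a single INSERT; only the duplicate case pays for a second query.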
Why not just do something like this:
$language = "en";
$dbh = $db->query("INSERT IGNORE INTO `language` (`shortName`) VALUES ('{$language}');");
$id = $db->insert_id;
echo (($id !== 0) ? $id : FALSE);
This performs your logic in a single query and returns the id, or false on a duplicate. It is generally better to resolve database performance issues in SQL rather than in PHP, because a large share of the overhead is the round trip to the database, not the query itself. Reducing the number of queries you run typically has a much bigger impact on performance than improving the scripting logic around them. People who rely heavily on ORMs often struggle with this, because cookie-cutter SQL is usually not very performant.
I have this PHP code on my server:
$social = array();
$id = $db->idFromUsername($username);
switch ($_POST['action']) {
    case 'GET':
        $query = 'SELECT user_one_id, user_two_id FROM social WHERE (user_one_id=? OR user_two_id=?) AND status=1';
        foreach ($db->queryExecute($query, $id, $id) as $relationship) {
            $friend = array();
            $friend['username'] = $db->usernameFromId($relationship['user_one_id'] == $id ? $relationship['user_two_id'] : $relationship['user_one_id']);
            $friend['userImage'] = getUserImage($friend['username']);
            $social[] = $friend;
            $status = 'true';
        }
        break;
    case 'PENDING':
        $query = 'SELECT user_one_id, user_two_id FROM social WHERE (user_one_id=? OR user_two_id=?) AND action_user_id!=? AND status=0';
        foreach ($db->queryExecute($query, $id, $id, $id) as $pending) {
            $person = array();
            $person['username'] = $db->usernameFromId($pending['user_one_id'] == $id ? $pending['user_two_id'] : $pending['user_one_id']);
            $person['userImage'] = getUserImage($person['username']);
            $social[] = $person;
            $status = 'true';
        }
        break;
    ...
}
and the database table looks like this :
For some reason the "GET" case takes at least four times longer to execute than "PENDING". If someone could give me insight into this situation I'll be very grateful.
Thank you.
While there are things that can be done to narrow this gap, like adding appropriate indexes, you really should not assume queries will have similar execution times just because they look similar. A single parameterized query can have very different times just based on the values used for the parameters.
For an extreme example, imagine "WHERE id = 0" vs "WHERE id = 1" on a multi-million-row table where the majority of rows have id=1 and almost none have id=0. Without an index, the lookup time for both queries will probably be similar, since the lack of an index forces a full table scan; but the amount of data collected and transmitted will be much greater for one than for the other. With an index, the lookup time will be greatly reduced, but even then you would still have to transmit all the data.
Long story short, from the two queries you've shown, your best index is probably a composite one on (status, action_user_id) in that order; the status part will (guessing from the data) narrow the rows inspected by half, and the action_user_id part will narrow that further (dependent on uniqueness of the value being searched). Reversing the order would mean the first query would be unable to take advantage of the index at all.
An index including user_one_id and/or user_two_id would likely NOT be helpful for these queries, as OR conditions pretty much rule out any index use for the conditions being ORed together.
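As a sketch, the composite index could be added like this (table and column names are taken from the queries in the question; the index name is made up):

```php
// One-time DDL: composite index with status first, action_user_id second,
// matching the order argued for above.
$db->query("ALTER TABLE social ADD INDEX idx_status_action (status, action_user_id)");

// EXPLAIN should then report the index for the PENDING-style query;
// look at the 'key' and 'rows' columns of the output.
$result = $db->query(
    "EXPLAIN SELECT user_one_id, user_two_id
     FROM social
     WHERE (user_one_id = 1 OR user_two_id = 1)
       AND action_user_id != 1
       AND status = 0"
);
print_r($result->fetch_assoc());
```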
I have a considerably big table in my database, and I want to have a function that, for example, performs an UPDATE query.
What I used to do in my older projects was passing all the values for all the columns, and insert them in the query string, like this example:
function updateUser($id, $name, $username){
    $query = "UPDATE user SET name = '{$name}', username = '{$username}' WHERE id = '{$id}' ";
    return mysqli_query($this->conn, $query);
}
Meaning, every column was being updated, even those that hadn't changed.
But, this time being a big table, I don't want to sacrifice the application speed.
What I'm trying to do is make some comparisons, therefore optimizing the query, and only then sending it to the database.
Staying in the UPDATE query example from before, this is kind of what I want to do:
function updateUser($old_user, $new_user, $user_id){
    $changed = false;

    $oldFirstName = $old_user->getFirstName();
    $newFirstName = $new_user->getFirstName();
    if($oldFirstName == $newFirstName){
        $firstNameQuery = "";
    }else{
        $firstNameQuery = " first_name = '".mysqli_escape_string($this->conn, $newFirstName)."',";
        $changed = true;
    }

    $oldLastName = $old_user->getLastName();
    $newLastName = $new_user->getLastName();
    if($oldLastName == $newLastName){
        $lastNameQuery = "";
    }else{
        $lastNameQuery = " last_name = '".mysqli_escape_string($this->conn, $newLastName)."',";
        $changed = true;
    }

    $oldEmail = $old_user->getEmail();
    $newEmail = $new_user->getEmail();
    if($oldEmail == $newEmail){
        $emailQuery = "";
    }else{
        $emailQuery = " email = '".mysqli_escape_string($this->conn, $newEmail)."',";
        $changed = true;
    }

    if($changed){
        // trim the trailing comma left by the last non-empty fragment
        $setClause = rtrim($firstNameQuery.$lastNameQuery.$emailQuery, ',');
        $query = "UPDATE user SET {$setClause} WHERE user_id = {$user_id}";
        return mysqli_query($this->conn, $query);
    }else{
        return 0;
    }
}
As you can see, though, as the table grows this function gets bigger, with many more comparisons and assignments.
My question is: am I saving a noticeable amount of time doing this, or is it not worth it?
You are probably making the code less efficient. Much of the time for an update is on logging the transaction and physically storing the data page for each record. Because all the columns for a single record are (typically) stored on a single page, updating one column or many columns doesn't matter.
On the other hand, the additional comparisons in the application also take time.
Of course -- as with any performance related issue -- you can test the different scenarios. I wouldn't expect any noticeable improvement in performance by going through such logic to reduce the number of columns in the update, unless it eliminated entirely the need for updating certain rows.
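That said, if you do keep the diff logic, it can at least be generalized so the function stops growing with the table. A sketch (buildUserUpdate and the field list are made-up names; values are bound later with mysqli/PDO, which also removes the escaping calls):

```php
// Sketch: build the SET clause only from fields that actually changed,
// then run one prepared UPDATE. Returns null when nothing changed.
function buildUserUpdate(array $old, array $new, array $fields) {
    $set = [];
    $params = [];
    foreach ($fields as $field) {
        if ($old[$field] !== $new[$field]) {
            $set[] = "$field = ?";
            $params[] = $new[$field];
        }
    }
    if (!$set) {
        return null; // nothing changed: skip the query entirely
    }
    $sql = "UPDATE user SET " . implode(", ", $set) . " WHERE user_id = ?";
    return [$sql, $params]; // bind $params plus the id with mysqli/PDO as usual
}

// Example: only last_name differs, so only last_name appears in the SQL.
$old = ['first_name' => 'Ann', 'last_name' => 'Lee', 'email' => 'a@x.com'];
$new = ['first_name' => 'Ann', 'last_name' => 'Day', 'email' => 'a@x.com'];
[$sql, $params] = buildUserUpdate($old, $new, ['first_name', 'last_name', 'email']);
// $sql is "UPDATE user SET last_name = ? WHERE user_id = ?"
```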
I have about 100,000 key/value-pair records, either in a database or in a string, and I want to create a function that looks up a value.
Which is better, the string or the database, considering performance (page load time) and crash safety? Here are my two code examples:
1.
echo find("Mohammad");
function find($value){
    $sql = mysql_query("SELECT * FROM `table` WHERE `name`='$value' LIMIT 1");
    $count = mysql_num_rows($sql);
    if($count > 0){
        $row = mysql_fetch_array($sql);
        return $row["family"];
    } else {
        return 'NULL';
    }
}
2.
$string = "Ali:Golam,Mohammad:Offer,Reza:Now,Saber:Yes";
echo find($string,"Mohammad");
function find($string,$value){
    $array = explode(",",$string);
    foreach($array as $into) {
        $new = explode(":",$into);
        if($new[0] == $value) {
            return $new[1];
        }
    }
}
The database is almost certainly the better idea.
Databases are fast. Maybe not as fast as basic string operations in PHP, but if there is a lot of data, the database will probably be faster. A basic SELECT query takes (on my current default desktop hardware) about 15 ms, cached less than 1 ms, and this is pretty much independent of the number of names in your table if the indexes are correct. So your site will always be fast.
A database won't cause a stack overflow or an out-of-memory error and crash your site (the string approach can, depending on your PHP settings and hardware).
Databases are more flexible. Imagine you want to add, remove or edit names after creating the first data set: with INSERT INTO, DELETE FROM and UPDATE that is very simple, and much better than editing a string with about 100,000 entries somewhere in your code.
Your Code
You definitely should use MySQLi or PDO instead; the code could look like this:
$mysqli = new mysqli("host", "username", "password", "dbname");
$stmt = $mysqli->prepare(
'SELECT string FROM table WHERE name = ? LIMIT 1'
);
$stmt->bind_param("s", $value);
$stmt->execute();
$stmt->store_result();
$stmt->bind_result($string);
$stmt->fetch();
$stmt->close();
This uses MySQLi and prepared statements (for security) instead of the old MySQL extension, which is deprecated as of PHP 5.5.
I have a bunch of photos on a page and using jQuery UI's Sortable plugin, to allow for them to be reordered.
When my sortable function fires, it writes a new order sequence:
1030:0,1031:1,1032:2,1040:3,1033:4
Each item of the comma delimited string, consists of the photo ID and the order position, separated by a colon. When the user has completely finished their reordering, I'm posting this order sequence to a PHP page via AJAX, to store the changes in the database. Here's where I get into trouble.
I have no problem getting my script to work, but I'm pretty sure it's the incorrect way to achieve what I want, and will suffer hugely in performance and resources - I'm hoping somebody could advise me as to what would be the best approach.
This is my PHP script that deals with the sequence:
if ($sorted_order) {
    $exploded_order = explode(',', $sorted_order);
    foreach ($exploded_order as $order_part) {
        $exploded_part = explode(':', $order_part);
        $part_count = 0;
        foreach ($exploded_part as $part) {
            $part_count++;
            if ($part_count == 1) {
                $photo_id = $part;
            } elseif ($part_count == 2) {
                $order = $part;
            }
            $SQL = "UPDATE article_photos ";
            $SQL .= "SET order_pos = :order_pos ";
            $SQL .= "WHERE photo_id = :photo_id;";
            ... rest of PDO stuff ...
        }
    }
}
My concerns arise from the nested foreach functions and also running so many database updates. If a given sequence contained 150 items, would this script cry for help? If it will, how could I improve it?
** This is for an admin page, so it won't be heavily abused **
You can use one UPDATE with some clever code, like so:
create the array $data['order'] in the loop, then:
$q = "UPDATE article_photos SET order_pos = (CASE photo_id ";
foreach ($data['order'] as $sort => $id) {
    $q .= " WHEN {$id} THEN {$sort}";
}
$q .= " END ) WHERE photo_id IN (" . implode(",", $data['order']) . ")";
a little clearer perhaps
UPDATE article_photos SET order_pos = (CASE photo_id
    WHEN 1 THEN 999
    WHEN 2 THEN 1000
    WHEN 3 THEN 1001
END)
WHERE photo_id IN (1,2,3)
I use this approach for exactly what you're doing: updating sort orders.
No need for the second foreach: you know it's going to be two parts if your data passes validation (I'm assuming you validated it; if not, you should =) so just do:
if (count($exploded_part) == 2) {
    $id = $exploded_part[0];
    $seq = $exploded_part[1];
    /* rest of code */
} else {
    /* error - data does not conform despite validation */
}
As for update hammering: do your DB updates in a transaction. Your db will queue the ops, but not commit them to the main DB until you commit the transaction, at which point it'll happily do the update "for real" at lightning speed.
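A sketch of that, assuming a PDO handle $pdo and the photo_id => position pairs already parsed out of the sequence:

```php
// Run all the order updates inside one transaction: they are queued and
// committed to the main DB in one go instead of 150 separate commits.
$pairs = [1030 => 0, 1031 => 1, 1032 => 2]; // example data from the question
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare(
        "UPDATE article_photos SET order_pos = :order_pos WHERE photo_id = :photo_id"
    );
    foreach ($pairs as $photo_id => $order_pos) {
        $stmt->execute([':order_pos' => $order_pos, ':photo_id' => $photo_id]);
    }
    $pdo->commit();   // all updates become visible (and durable) at once
} catch (Exception $e) {
    $pdo->rollBack(); // nothing is applied if any update fails
    throw $e;
}
```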
I suggest making your script even simpler and changing the variable names, so the code is much more readable.
$parts = explode(',', $sorted_order);
foreach ($parts as $part) {
    list($id, $position) = explode(':', $part);
    // Now you can work with $id and $position
}
More info about list: http://php.net/manual/en/function.list.php
Also, about performance and your data structure:
The format you send is not perfect, but it will not cause any performance issues; on the contrary, you send less data, so there is less overhead overall.
The drawback of such a structure is that you will most probably be unable to establish relationships between tables, make joins, or alter the table structure in a clean way.
I have this code (removed param escaping just to cut down some code):
private function _get_tag_id($value)
{
    $sql = "INSERT INTO tags (tag, added) VALUES ('$value', ".time().") "
         . "ON DUPLICATE KEY UPDATE tag_id = tag_id";
    $id = execute($sql);
    if (empty($id))
    {
        $sql = "SELECT tag_id FROM tags WHERE tag = '$value'";
        $id = execute($sql);
    }
    return $id;
}
I'm really bad at organizing my code and I've been reading about the importance of keeping your code DRY. Does this include any queries you might have? For example, I need to perform these same queries for a few fields, and what I've done is change it to this:
private function _get_field_id($field, $value)
{
    $sql = "INSERT INTO {$field}s ({$field}, added) VALUES ('$value', ".time().") "
         . "ON DUPLICATE KEY UPDATE {$field}_id = {$field}_id";
    $id = execute($sql);
    if (empty($id))
    {
        $sql = "SELECT {$field}_id FROM {$field}s WHERE {$field} = '$value'";
        $id = execute($sql);
    }
    return $id;
}
Although that reduces some similar functions, it also makes the query much harder to read at first. Another problem after doing this is what happens if sometimes the query may be slightly different for a field? Let's say if the field is tag, I don't need the added column any more and maybe the query would now change to:
$sql = "INSERT INTO {$field}s ({$field}".($field == 'tag' ? '' : ", added").") "
. "VALUES ('$value', ".($field == 'tag' ? '' : time()).") "
. "ON DUPLICATE KEY UPDATE {$field}_id = {$field}_id";
Now it's starting to get extra messy, but I have a feeling people don't actually do that.
Another thing I've read is that functions should only do one thing. So would I cut up this function like this?
private function _get_tag_id($value)
{
    $id = $this->_add_tag_id($value);
    if (empty($id))
    {
        $id = $this->_select_tag_id($value); // calling _get_tag_id here would recurse forever
    }
    return $id;
}
Or would it be better to leave it the way it was before?
If you don't think either of the ways I've tried organizing the code is right, feel free to suggest how you would do it. In other words, what is the best way to organize these simple bits of code?
I would turn it upside down - select first, and insert if not found.
Two reasons:
1) You will select and find more often than select and miss, so selecting first is on average faster.
2) "on duplicate key" is a non-standard extension to INSERT that will cause problems in the future if you should ever move to a SQL database without it. (I think it is MySQL only).
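A sketch of the select-first version inside a class like the one in the question ($this->db is an assumed mysqli handle standing in for the execute() helper; prepared statements restore the escaping that was cut from the example):

```php
// "Select first, insert if missing" version of _get_tag_id.
private function _get_tag_id($value)
{
    // 1) Try to find the tag first -- the common case.
    $stmt = $this->db->prepare("SELECT tag_id FROM tags WHERE tag = ?");
    $stmt->bind_param("s", $value);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    if ($row) {
        return $row['tag_id'];
    }

    // 2) Miss: insert the tag and return the new id.
    $stmt = $this->db->prepare("INSERT INTO tags (tag, added) VALUES (?, ?)");
    $now = time();
    $stmt->bind_param("si", $value, $now);
    $stmt->execute();
    return $this->db->insert_id;
}
```

Note there is a small race window between the SELECT and the INSERT if two requests add the same tag at once; the ON DUPLICATE KEY form avoids that, which is the trade-off for portability.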
As for which version is better organized, I'd go with the first or the third.