Increase MySQL/PHP Query Speed

I have 2 tables, one with 2 million and the other with 30 million records.
I need to compare the records in both tables, but this is extremely slow.
Can anyone offer suggestions on ways to increase the speed?
<?php
$con = mysql_connect("localhost", "root", "password");
mysql_select_db("DMBONE", $con);
$result = mysql_query("SELECT * FROM sucid where priority=''");
while ($row = mysql_fetch_array($result))
{
    $result1 = mysql_query("SELECT count(*) FROM bills_logic where month(tdate)=8 and x1=".$row[0]."");
    if ($row1 = mysql_fetch_array($result1))
    {
        if ($row1[0] == 0)
        {
            echo $row[0]." DEAD\r\n";
            mysql_query("update sucid set priority='DEAD' where bid=".$row[0]."") or die(mysql_error());
        }
        else
        {
            echo $row[0]." ".$row1[0]."\r\n";
            mysql_query("update sucid set priority='".$row1[0]."' where bid=".$row[0]."") or die(mysql_error());
        }
    }
}
?>
CREATE TABLE `sucid` (
`bid` varchar(500) NOT NULL,
`priority` varchar(500) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
CREATE TABLE `bills_logic` (
`bid` int(11) NOT NULL AUTO_INCREMENT,
`num` varchar(500) NOT NULL,
`stat` varchar(500) NOT NULL,
`tdate` varchar(500) NOT NULL,
`x1` varchar(500) NOT NULL,
`amt` varchar(500) NOT NULL DEFAULT '30',
PRIMARY KEY (`bid`)
) ENGINE=InnoDB AUTO_INCREMENT=35214848 DEFAULT CHARSET=latin1
Above are the CREATE TABLE statements for the tables.

You have found the world's slowest way of doing a join. You may be happier to do it the old-fashioned way (but just fashionably enough to use mysqli):
<?php
$mysqli = new mysqli("localhost", "root", "password", "database");
if ($mysqli->connect_errno) {
    printf("Connect failed: %s\n", $mysqli->connect_error);
    exit();
}
$sql = "update sucid
        left join (
            select count(*) as priority, x1
            from bills_logic b
            where month(tdate)=8
            group by x1
        ) bg
        on bg.x1 = sucid.bid
        set sucid.priority = coalesce(bg.priority,'DEAD')";
if ($mysqli->query($sql) === TRUE) {
    printf("I'm done already. This was fast, wasn't it?\n");
}
else {
    echo "Something went wrong: " . $mysqli->error . "\n";
    exit();
}
?>
You might want to add an index on bills_logic.x1, though it will not help too much here.
And you should really fix your columns, e.g. tdate should not be a varchar(500). The update will fail completely if you have any row with an invalid date. Using correct datatypes prevents you from having invalid values. num, stat and amt sound like they could be int (or maybe decimal), and x1 maybe too. And your priority column would also work as an int if you replace DEAD with 0.
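For illustration, something along these lines (a sketch only; the target types here are assumptions, so check that your existing data converts cleanly and back up the table first):
-- Assumed target types; verify the values before running this.
ALTER TABLE bills_logic
  MODIFY tdate DATE NOT NULL,
  MODIFY x1 INT NOT NULL,
  MODIFY amt DECIMAL(10,2) NOT NULL DEFAULT 30;
-- The index on x1 mentioned above:
ALTER TABLE bills_logic ADD INDEX idx_x1 (x1);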

It is possible that you have a bigger problem here. Speed and query performance depend on:
Database schema and table structure.
Machine hardware specs.
Your query structure.
Table indexes.
Also, avoid putting BLOB columns in the same (primary) table; that becomes a real problem for any database once you are fetching millions of rows. As a best practice, always keep BLOB data in a separate table.
This is just additional info; I hope it helps others.

I would suggest creating indexes, particularly on the columns you use to narrow your searches. In your particular case, I would start by creating an index on the priority field.
For more info on how MySQL deals with indexes, take a look here.
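For example (a sketch):
ALTER TABLE sucid ADD INDEX idx_priority (priority);
An index on bid would also help the per-row UPDATE in the question, e.g.:
ALTER TABLE sucid ADD INDEX idx_bid (bid);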

How to properly use Wildcard with CONCAT

(Spoiler: The Title has nothing to do with what is wrong with the code.)
I'm creating a live-search system to show the user possible event types already listed on my website. I seem to have an error with the wildcard binding which I'm unable to see.
I tried different kinds of WHERE ... LIKE statements, and most of them didn't work at all; a plain ? placeholder, for example, did not work either. If I run this query manually on my database, I get the results I expect.
This is how my code looks, the variable $q is obtained using $_GET method.
$query = $pdo->prepare('SELECT DISTINCT EventCategory FROM Events
WHERE EventCategory LIKE CONCAT(\'%\',:q,\'%\')');
$query->bindParam(":q", $q);
$query->execute();
$row = $query->fetch(PDO::FETCH_ASSOC);
while ($row = $query->fetchObject()) {
echo "<div> $row->EventCategory </div>";
}
The expected results would be: if $q is equal to n, Meeting and Nightlife are returned. When $q is equal to ni, only Nightlife is returned.
The search is NOT case sensitive; N and n are treated equally.
The SHOW CREATE TABLE Events query returned the following:
CREATE TABLE `Events` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`Name` varchar(100) NOT NULL,
`Image` varchar(600) NOT NULL,
`Date` date NOT NULL,
`Description` varchar(1200) NOT NULL,
`SpacesAvailable` int(11) NOT NULL,
`EventCategory` varchar(50) NOT NULL,
`Trending` varchar(30) DEFAULT NULL,
`TrendingID` int(255) NOT NULL,
`Sale` int(255) NOT NULL,
PRIMARY KEY (`ID`)
)DEFAULT CHARSET=latin1
Images showing the operation of the website: https://imgur.com/a/yP0hTm3
If you are viewing the images, please view them from bottom to top. Thanks.
I suspect the default collation in your EventCategory column is case-sensitive. That's why Ni and ni don't match in Nightlife.
Try this query instead.
'SELECT DISTINCT EventCategory FROM Events WHERE EventCategory COLLATE utf8_general_ci LIKE CONCAT(\'%\',:q,\'%\')'
Or, if your column's character set is not unicode but rather iso8859-1, try this:
'SELECT DISTINCT EventCategory FROM Events WHERE EventCategory COLLATE latin1_general_ci LIKE CONCAT(\'%\',:q,\'%\')'
This explains how to look up the available character sets and collations on MySQL.
How to change collation of database, table, column? explains how to alter the default collation of a table or a column. It's generally a good idea because collations are baked into indexes.
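For example, a sketch of changing the column's collation in place (the VARCHAR(50) length is taken from the posted schema; pick the collation that matches your character set):
ALTER TABLE Events
  MODIFY EventCategory VARCHAR(50)
  CHARACTER SET latin1 COLLATE latin1_general_ci NOT NULL;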
The problem is not in LIKE, but in PHP and PDO. Stare at the 3 conflicting uses of $row in your code:
$row = $query->fetch(PDO::FETCH_ASSOC);
while ($row = $query->fetchObject()) {
echo "<div> $row->EventCategory </div>"; }
Then review the documentation and examples. (Sorry, I'm not going to feed you the answer; you need to study to understand it.)
To complement the comprehensive answer by O.Jones, another, simpler solution would be to just perform a case-insensitive search, like:
'SELECT DISTINCT EventCategory
FROM Events
WHERE UPPER(EventCategory) LIKE CONCAT(\'%\',UPPER(:q),\'%\')'
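A minimal sketch of running that with PDO (note the single, consistent fetch call in the loop; everything else follows the question's code):
$query = $pdo->prepare('SELECT DISTINCT EventCategory FROM Events
    WHERE UPPER(EventCategory) LIKE CONCAT(\'%\', UPPER(:q), \'%\')');
$query->execute([':q' => $q]);
while ($row = $query->fetch(PDO::FETCH_OBJ)) {
    echo "<div> {$row->EventCategory} </div>";
}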

In a 1-1 relationship, why is my insert inserting two records in two tables?

I'm having trouble, as the title says, when I INSERT a record into a table that has a 1-1 relationship with another.
First things first, the SQL code that generates the tables:
DROP TABLE IF EXISTS Facebook_Info;
DROP TABLE IF EXISTS Conversations;
CREATE TABLE IF NOT EXISTS Conversations(
c_id INT AUTO_INCREMENT NOT NULL,
c_start TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
channel ENUM('desktop', 'facebook'),
u_name VARCHAR(20) DEFAULT NULL,
u_email VARCHAR(50) DEFAULT NULL,
PRIMARY KEY(c_id)
);
CREATE TABLE IF NOT EXISTS Facebook_Info (
c_id INT AUTO_INCREMENT NOT NULL,
f_id INT(12) NOT NULL,
PRIMARY KEY(c_id),
FOREIGN KEY(c_id) REFERENCES Conversations(c_id)
);
I assure you this code works: I tested it. I hope this is the best way to provide a 1-1 relationship between Conversations and Facebook_Info.
In any case, now I can introduce you to my nightmare: I'm trying to insert a new record into Conversations via PHP (procedural style).
public function create_new_id_conv($channel = 1) {
    $w_ch = '';
    if ($channel == 2) {
        $w_ch = 'facebook';
    } else {
        $w_ch = 'desktop';
    }
    $query = "INSERT INTO Conversations (c_id, c_start, channel) VALUES (NULL, CURRENT_TIMESTAMP, '$w_ch')";
    $conn = mysqli_connect("localhost", Wrapper::DB_AGENT, Wrapper::DB_PSW, Wrapper::DB_NAME);
    $res = mysqli_query($conn, $query);
    $id_conv = mysqli_insert_id($conn);
    mysqli_free_result($res);
    return $id_conv;
}
The Wrapper::* constants are all set correctly; in fact, an INSERT does happen, but not just one! This is the situation after I call this function:
This is the content of the Conversations table (screenshot):
And here's the content of Facebook_Info (screenshot):
What's happening?
I searched and searched...
Then I started to think about what I'm getting here: 2147483647. What does this number represent? It seems like a big number!
And what if my script and my queries were correct, but the mistake was in the skeleton of my tables?
I need to store a 14-digit integer, which is too large for the INT type.
So using BIGINT for the f_id field made everything correct and working!
Hope my mistake helps someone!
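In SQL terms, the fix described above looks roughly like this (a sketch; adjust the column definition to your needs):
-- 2147483647 is the maximum value of a signed INT, so a 14-digit Facebook ID
-- overflows it and every insert gets clamped to that value.
-- (The (12) in INT(12) is only a display width, not a size limit.)
ALTER TABLE Facebook_Info MODIFY f_id BIGINT NOT NULL;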

Speed up MySQL Query + PHP

I want to speed up this code. It is the query that takes the time. If I change the number of rows returned from 100 to 10, it takes almost the same amount of time (about 2 seconds). The GET parameters are based on user sort/search input. How do I improve the speed of this? The item table has about 2374744 rows, and the bot table about 20 rows.
$bot = " && user_items_new.bot_id != '0'";
if ($_GET['bot'] != 0) {
$bot = " && user_items_new.bot_id='".$_GET['bot']."'";
}
$name = '';
if (strlen($_GET['name']) > 0) {
$name = " && user_items_new.name LIKE '%".$_GET['name']."%'";
}
$min = '';
if (strlen($_GET['min']) > 0) {
$min = " && steam_price >= '".$_GET['min']."'";
}
$max = '';
if (strlen($_GET['max']) > 0) {
$max = " && steam_price <= '".$_GET['max']."'";
}
$order = '';
if ($_GET['order'] == 'price_desc') {
$order = "ORDER BY steam_price DESC, user_items_new.name ASC";
} elseif ($_GET['order'] == 'price_asc') {
$order = "ORDER BY steam_price ASC, user_items_new.name ASC";
} elseif ($_GET['order'] == 'name_desc') {
$order = "ORDER BY user_items_new.name DESC";
} else {
$order = "ORDER BY user_items_new.name ASC";
}
$limit = $_GET['start'];
$limit .= ', 100';
$i = 0;
$sql = mysql_query("SELECT user_item_id, user_items_new.bot_id AS item_bot_id, sticker, `key`, `case`, exterior, stattrak, image, user_items_new.name AS item_name, steam_price, color, bots_new.bot_id, bots_new.name AS bot_name, withdraw_enabled FROM user_items_new LEFT JOIN bots_new ON user_items_new.bot_id=bots_new.bot_id WHERE steam_price > '0.1' && deposit_start='0' && deposited='0' && user_id='0' && withdraw_enabled='1' ".$bot." ".$name." ".$min." ".$max." ".$order." LIMIT ".$limit)or die(mysql_error());
while ($item = mysql_fetch_assoc($sql)) {
//...
}
The item table looks like this (dumped from phpMyAdmin):
CREATE TABLE IF NOT EXISTS `user_items_new` (
`user_item_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`bot_id` int(11) NOT NULL,
`item_original_id` varchar(22) NOT NULL,
`item_real_id` varchar(22) NOT NULL,
`class_id` varchar(22) NOT NULL,
`weapon_id` int(11) NOT NULL,
`name` text NOT NULL,
`image` text NOT NULL,
`case` int(11) NOT NULL,
`key` int(11) NOT NULL,
`sticker` int(11) NOT NULL,
`capsule` int(11) NOT NULL,
`holo` int(11) NOT NULL,
`name_tag` int(11) NOT NULL,
`access_pass` int(11) NOT NULL,
`stattrak` int(11) NOT NULL,
`color` varchar(32) NOT NULL,
`exterior` text NOT NULL,
`steam_price` double NOT NULL,
`deposited` int(11) NOT NULL,
`deposit_start` int(11) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=5219079 DEFAULT CHARSET=utf8;
ALTER TABLE `user_items_new`
ADD PRIMARY KEY (`user_item_id`), ADD KEY `user_id` (`user_id`), ADD KEY `bot_id` (`bot_id`);
ALTER TABLE `user_items_new`
MODIFY `user_item_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=5219079;
And then the bot table:
CREATE TABLE IF NOT EXISTS `bots_new` (
`bot_id` int(11) NOT NULL,
`name` varchar(64) NOT NULL,
`username` varchar(64) NOT NULL,
`password` varchar(64) NOT NULL,
`deposit_enabled` int(11) NOT NULL,
`withdraw_enabled` int(11) NOT NULL,
`ident` varchar(32) NOT NULL
) ENGINE=MyISAM AUTO_INCREMENT=19 DEFAULT CHARSET=utf8;
ALTER TABLE `bots_new`
ADD PRIMARY KEY (`bot_id`);
Edit (adding prettyprinted SELECT)
SELECT user_item_id, user_items_new.bot_id AS item_bot_id, sticker,
key, case, exterior, stattrak, image, user_items_new.name AS item_name,
steam_price, color, bots_new.bot_id, bots_new.name AS bot_name,
withdraw_enabled
FROM user_items_new
LEFT JOIN bots_new ON user_items_new.bot_id=bots_new.bot_id
WHERE user_items_new.bot_id != '0' && deposit_start='0' && deposited='0' && user_id='0' && withdraw_enabled='1'
ORDER BY user_items_new.name ASC
LIMIT , 100
How to speed this up...
Firstly, add a composite index on the columns that have predicates with equality comparisons first, e.g.
... ON user_items_new (user_id,deposited,deposit_start)
This will be of benefit if the predicates are filtering out a large number of rows. For example, if less than 10% of the rows satisfy the condition user_id = 0.
As an aside, the predicate withdraw_enabled='1' will negate the "outerness" of the LEFT JOIN. The result from the query will be equivalent if the keyword LEFT is omitted.
Another issue is that the ORDER BY will cause a "Using filesort" operation to sort the rows. The entire set will need to be sorted, before the LIMIT clause is applied. So we don't expect LIMIT 10 to be any faster than LIMIT 1000, apart from the additional time for the client to transfer an additional 990 rows. (The bit about sorting the entire set isn't entirely true; in some cases MySQL can abort the sort operation after identifying the first "limit" number of rows. But MySQL will still need to go through the entire set to get those first rows.)
It's possible to get MySQL to avoid the sort operation by adding the column(s) in the ORDER BY clause to the index. These would need to appear immediately following the columns referenced in the equality predicates, and the ORDER BY clause would need to list them in the same order as they appear in the index.
Assuming the current query includes:
...
WHERE ...
&& deposit_start='0' && u.deposited='0' && u.user_id='0' ...
...
ORDER BY steam_price ASC, user_items_new.name ASC
This index may be appropriate:
... ON user_items_new (user_id,deposited,deposit_start,steam_price,name)
The output from EXPLAIN will show whether that index is used for the query or not. Beyond the equality comparisons of the first three columns, MySQL can use a range scan operation on the index to satisfy the steam_price > predicate.
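For example (a sketch against the unfiltered query from the question, with the column list trimmed for brevity):
EXPLAIN
SELECT user_item_id, steam_price, user_items_new.name
FROM user_items_new
WHERE user_id = 0 AND deposited = 0 AND deposit_start = 0 AND steam_price > 0.1
ORDER BY steam_price ASC, user_items_new.name ASC
LIMIT 100;
-- Look at the key column (is the new index chosen?) and the Extra column
-- (ideally no "Using filesort").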
There's also the issue of the InnoDB buffer pool; how much memory is allocated to holding index and data pages in memory, to avoid storage i/o.
To avoid lookups to data pages in the underlying table, you can consider creating a covering index for the query. A covering index includes all of the columns referenced from the table, so the query can be satisfied entirely from the index. The EXPLAIN output will show "Using index" in the Extra column if the query is using a covering index. (But there are limits to the number of columns and the total row size in the index.) This benefits the query most when the table rows are large and the columns in the index are a small subset of the total table row.
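As a concrete sketch of the index suggested above (the name(100) prefix length is an assumption; MySQL can only index a TEXT column via a prefix, which also means this index cannot fully cover the query as written):
ALTER TABLE user_items_new
  ADD INDEX ix_filter_sort (user_id, deposited, deposit_start, steam_price, name(100));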
With a table of that size, one of the simplest tricks you can use for optimizing the query is to add indexes on the fields you use in the WHERE clause. This lets MySQL keep the relevant data pre-sorted for the queries you use most often.
For example, you should see significant gains by doing:
ALTER TABLE user_items_new ADD INDEX (steam_price);
The data and data types go a long way in determining the actual gains. Adding indexes on every field will end up hurting overall efficiency, so more is not necessarily better.
Your query is slow because your query against the user_items_new table requires inspecting 1.2 million rows. While you have indexes for user_item_id, user_id, and bot_id, those can only filter your results so far.
You will want to add indexes on some of your data columns. Which indexes you will want to add (and whether any of them are compound or not) is going to depend on the actual contents of the table and would be difficult to recommend without more information.
You will want to add indexes based on which columns' distinct values significantly reduce the data that must be looked at; an index on withdraw_enabled, for example, is not likely to gain much unless very few rows have withdraw_enabled == 1. An index on steam_price will be beneficial if very few of your rows have steam_price >= 0.1.
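A quick way to gather that information is to check how selective each candidate predicate actually is, for example (a sketch):
-- Compare each count against the total row count.
SELECT COUNT(*) FROM user_items_new;
SELECT COUNT(*) FROM user_items_new WHERE user_id = 0;
SELECT COUNT(*) FROM user_items_new WHERE steam_price > 0.1;
SELECT bot_id, COUNT(*) FROM user_items_new GROUP BY bot_id;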

MySQL fulltext basic search, multiple words

I have the following code:
$dbLink = mysql_connect('localhost', 'tester', 'test');
mysql_select_db('acianetm_pcSpec', $dbLink);
$q = $_GET['q'];
$q = mysql_real_escape_string($q);
$sql = "
SELECT *,
MATCH(part) AGAINST ('$q') AS score
FROM parts
WHERE MATCH(part) AGAINST('$q')
";
$rest = mysql_query($sql);
while ($row = mysql_fetch_array($rest)) {
    echo "<br /> <strong>".$row['id']." - ".$row['part']." - $".$row['price']."</strong>";
}
When I load up http://site.com/q?=Nvidia it does not display any output.
MySQL Structure:
CREATE TABLE `parts` (
`id` int(10) NOT NULL auto_increment,
`part` varchar(512) NOT NULL,
`price` varchar(15) NOT NULL,
`updated` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `part_2` (`part`),
FULLTEXT KEY `part` (`part`)
) ENGINE=MyISAM AUTO_INCREMENT=47 DEFAULT CHARSET=latin1
The data inside the table:
id |#| part |#| price
46 |#| (VIC Clayton Clearance) GIGABYTE 9800GT 512MB Nvidia Geforce GF9800GT DVI P... |#| 95.00
I have tried this SQL query:
SELECT * FROM parts WHERE part LIKE '%$q%'
However, without using str_replace, e.g.
str_replace(' ', '&', $q); it never worked for multiple words. Using str_replace only made it work with two words; I need it to work with any number of words.
Doing this in PHPMyAdmin returns no rows either, so what part of the query is wrong?
If someone could assist, that would be great.
Thanks a lot.
Omit the WHERE clause in your SQL statement.
Also, the '?' in your URL should come before any name-value pairs.
$sql = "SELECT * FROM parts
WHERE MATCH(part) AGAINST ('$q')";
Works well :-)
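If every word needs to match, one option (a different approach from the answers above, so treat it as a sketch) is a boolean-mode search, where each word is prefixed with +. Boolean mode also skips the 50% frequency threshold that natural-language mode applies, which can make searches on very small tables return nothing:
// Require every word (+word) to be present; variable names as in the question.
$words = preg_split('/\s+/', trim($q));
$boolean = '+' . implode(' +', array_map('mysql_real_escape_string', $words));
$sql = "SELECT * FROM parts WHERE MATCH(part) AGAINST ('$boolean' IN BOOLEAN MODE)";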

Server uptime script

I'm trying to do something for my website; to be specific, I'm trying to write a script for uptime.
I have the reader, a script which reads the percentages from a table.
This is not a very efficient way to use a relational database. I would instead suggest (at least on the SQL side) the following:
CREATE TABLE `servers` (
`srv_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
-- Additional Fields Omitted here.
PRIMARY KEY (`srv_id`)
)
ENGINE = InnoDB;
CREATE TABLE `stats` (
`stat_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`srv_id` INTEGER UNSIGNED NOT NULL,
`date` TIMESTAMP NOT NULL,
`uptime` INTEGER UNSIGNED NOT NULL,
PRIMARY KEY (`stat_id`)
)
ENGINE = InnoDB;
This way you can record as many measurements as you like, against as many servers as you like, and then use SQL either to delete old content, or to keep it and use WHERE clauses to filter the data used in the interface that displays these stats.
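As a sketch of how the reading side could then work with this schema (the 1/0 convention for uptime is an assumption: one row per check, 1 when the server answered, 0 when it did not):
-- Uptime percentage per server over the last 7 days.
SELECT s.srv_id,
       100 * AVG(st.uptime) AS uptime_percent
FROM servers s
JOIN stats st ON st.srv_id = s.srv_id
WHERE st.`date` >= NOW() - INTERVAL 7 DAY
GROUP BY s.srv_id;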
$day = (int)(strftime("%j") % 5);
$key = 'day' . $day;
if ($row[$key] == 0)
{
    if ($checkls && $checkgs) // if the server is online, update the percentage
        mysql_query("UPDATE s_stats SET {$key}=".($stats_row[$key] + 0.5)." WHERE srv_id=".$r['id']." ") or die(mysql_error()); // every 7.2 minutes add 0.5 percent
    else echo "error day $day";
}
