MySQL full-text search on 2 columns in 2 tables - php

I have 2 tables, one for questions and one for answers, with the following format:
question(id, text, user)
answer(id, text, question_id, user)
Both tables obviously have the same number of rows.
When a user searches for a phrase or a word, I want to search both the question text and the answer text for it and return the matches ordered by most common.
I tried using MySQL full-text search, but I couldn't make it work across 2 different tables and 2 columns.
I also don't want to merge the questions and answers into another table, if possible.
Questions table:
CREATE TABLE `questions` (
`id` int(11) NOT NULL,
`message_id` int(11) DEFAULT NULL,
`text` text NOT NULL,
`answer` int(11) DEFAULT NULL,
`status` varchar(255) NOT NULL,
`user` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `questions`
ADD PRIMARY KEY (`id`);
ALTER TABLE `questions` ADD FULLTEXT KEY `text` (`text`);
ALTER TABLE `questions`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
Answers table:
CREATE TABLE `answers` (
`id` int(11) NOT NULL,
`message_id` int(11) DEFAULT NULL,
`text` text NOT NULL,
`question` int(11) NOT NULL,
`status` varchar(255) NOT NULL,
`user` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `answers`
ADD PRIMARY KEY (`id`);
ALTER TABLE `answers` ADD FULLTEXT KEY `text` (`text`);
ALTER TABLE `answers`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;

If you want to query on multiple keywords, my advice would be to split the search string in your application code (i.e. use an explode function to separate the keywords).
Also in application code, generate your WHERE clause dynamically, since you cannot know in advance how many keywords will be used:
String sql_where = "";
for (int i = 0; i < search.length; i++) {
    if (i == 0)
        sql_where += "WHERE TEXT LIKE '%" + search[i] + "%'";
    else
        sql_where += " OR TEXT LIKE '%" + search[i] + "%'";
}
You will then need to query both your tables by using:
query = "SELECT ID,TEXT,'QUE' AS TYPE FROM QUESTION "+sql_where+" UNION SELECT ID,TEXT,'ANS' AS TYPE FROM ANSWER "+sql_where+";";
Note that a type column was added to each SELECT to mark the source of each result row. This will help if you need to display where a result was extracted from.
For the rest, I'll just explain the general idea. Use the search array built earlier and compare it against your result set: for each keyword, look for it in each returned row. On the side, keep a count array that stores, per row index, the number of keyword hits; whenever a keyword is found in a row, increment that row's entry by 1.
After you're done, all you have to do is reorder the count array by descending number of hits while keeping the row indexes attached, so you can use them in the next stage.
For loading, loop over the count array and fetch the corresponding entry from the result set using the row index stored with each count.
Treat the above code as a general idea, since I don't know which language you're working with.
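A minimal PHP sketch of that counting-and-sorting idea, assuming $search holds the exploded keywords and $rows holds the rows returned by the UNION query (both variable names are illustrative):
// $search: array of keywords, $rows: rows from the UNION query (id, text, type)
$hits = array();
foreach ($rows as $index => $row) {
    $count = 0;
    foreach ($search as $word) {
        if (stripos($row['text'], $word) !== false) {
            $count++;                // one hit per keyword found in this row
        }
    }
    $hits[$index] = $count;          // how many keywords matched this row
}
arsort($hits);                       // most hits first, row indexes preserved
foreach ($hits as $index => $count) {
    if ($count > 0) {
        // $rows[$index] is the next best match; display it here
    }
}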

"(select id,text, 'que' as type from question WHERE text LIKE '%keyword%')
UNION
(select id,text,'ans' as type from answer WHERE text LIKE '%keyword%')";
If you are unsure which table a row was selected from, you can check the type column for that.
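Since both tables already carry FULLTEXT indexes on text, a hedged alternative sketch is to let MySQL compute a relevance score and order the whole union by it ('keyword' is a placeholder for the user's search string; not tested against your data):
SELECT id, text, 'que' AS type,
       MATCH(text) AGAINST('keyword' IN NATURAL LANGUAGE MODE) AS score
FROM questions
WHERE MATCH(text) AGAINST('keyword' IN NATURAL LANGUAGE MODE)
UNION ALL
SELECT id, text, 'ans' AS type,
       MATCH(text) AGAINST('keyword' IN NATURAL LANGUAGE MODE) AS score
FROM answers
WHERE MATCH(text) AGAINST('keyword' IN NATURAL LANGUAGE MODE)
ORDER BY score DESC;
Keep in mind that InnoDB full-text search ignores words shorter than innodb_ft_min_token_size and common stopwords, so very short search terms may return nothing.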

Related

Dynamically update sql columns based on number of entries

I want to create a table like below:
id| timestamp | neighbour1_id | neighbour1_email | neighbour2_id | neighbour2_email
and so on, up to a maximum of neighbour 20.
I have two questions:
Should I create the columns statically, or is there a way to create columns dynamically using PHP based on the count of the JSON array?
In either case, how would I refer to the columns dynamically and assign values to them based on the JSON array?
My jsonArray would look something like:
{id:123, email_id:abc, neighbours: [{neighbour1_id:234, neighbour1_email: bcd}, {neighbour2_id:345, neighbour2_email:dsf}, {}, {}...]}
Please advise. Thanks.
It looks like you need to rethink your database structure a bit. To me it seems you need a single users (or whatever they are) table:
CREATE TABLE `users` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL,
PRIMARY KEY (`id`)
);
And another table that defines relations between those users:
CREATE TABLE `neighbors` (
`parent` int(11) unsigned NOT NULL,
`child` int(11) unsigned NOT NULL,
PRIMARY KEY (`parent`,`child`)
);
Now you can add as many neighbors to each user as you want. Fetching them is as easy as:
SELECT * FROM `users`
LEFT JOIN `neighbors` ON `users`.`id` = `neighbors`.`child`
WHERE `neighbors`.`parent` = ?
Where that question mark would become the id of the user from which you are fetching the neighbors, preferably by using a prepared statement.
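A minimal PHP/PDO sketch of that prepared statement, assuming an open $pdo connection and the table/column names from above:
// fetch all neighbours of user 123 via a prepared statement
$stmt = $pdo->prepare(
    "SELECT u.* FROM `users` u
     INNER JOIN `neighbors` n ON u.`id` = n.`child`
     WHERE n.`parent` = ?"
);
$stmt->execute([123]);
$neighbours = $stmt->fetchAll(PDO::FETCH_ASSOC);
(The WHERE on neighbors.parent turns the LEFT JOIN into an effective inner join, so an INNER JOIN is used here.)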
If it is all JSON you will be working with, and querying isn't much of an issue, you could consider a NoSQL database or document store (like Redis or MongoDB), but that is an entirely different story.
Just repeating a bunch of columns x times is definitely not the way to go. Vertical size (number of rows) of tables in relational databases is no big issue; they are designed for that. Horizontal size (number of columns), however, is something to be careful with, as it may make your db unnecessarily large and decrease performance.
Just consider what you would do if you wanted to find a user that has a neighbour with email address [x]. You would have to repeat your WHERE condition 20 times, once for each possible email column. And that is just one example...
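For contrast, with the two-table design that lookup is a single join; a hedged sketch using the schema above ('x@example.com' standing in for the address you are searching):
SELECT DISTINCT u.*
FROM `users` u
INNER JOIN `neighbors` n ON n.`parent` = u.`id`
INNER JOIN `users` nb    ON nb.`id` = n.`child`
WHERE nb.`email` = 'x@example.com';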
Well, the answer I was working on while pevara posted theirs faster is almost the same...
CREATE TABLE `neighbours` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`neighbour_email` char(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
CREATE TABLE `neighbour_email_collections` (
`id` int(10) unsigned NOT NULL,
`email_id` char(64) NOT NULL,
`neighbour_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`,`neighbour_id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
insert into neighbours values (234, "bcd");
insert into neighbours values (345, "dsf");
insert into neighbour_email_collections values(123, "abc", 234);
insert into neighbour_email_collections values(123, "abc", 345);
select *
from neighbours
left join neighbour_email_collections
on neighbour_email_collections.neighbour_id=neighbours.id
where neighbour_email_collections.id=123;

MySQL INSERT SELECT and get primary key values

I want to know if it's possible to INSERT records from a SELECT statement on a source table into a destination table, get the insert IDs, and UPDATE a field on all the corresponding records in the source table.
Take for example, the destination table 'payments':
CREATE TABLE `payments` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`txid` TEXT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`worker` INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
)
The source table 'log':
CREATE TABLE `log` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`solution` VARCHAR(80) NOT NULL,
`worker` INT(11) NOT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`pstatus` VARCHAR(50) NOT NULL DEFAULT 'pending',
`payment_id` INT(10) UNSIGNED NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
The "log" table contains multiple "micro-payments" for a completed task. The purpose of the "payments" table is to consolidate the micro-payments into one larger payment:
INSERT INTO payments (amount, worker)
SELECT SUM(l.amount) AS total, l.worker FROM log l
WHERE l.pstatus = "ready"
AND l.payment_id IS NULL
AND l.amount > 0
GROUP BY l.worker
I'm not sure if it's clear from the code above, but I would like the payment_id field to be given the value of the insert id, so that it's possible to trace a micro-payment back to the larger consolidated payment.
I could do it all client side (PHP), but I was wondering if there was some magical SQL query that would do it for me? Or maybe I am going about it all wrong.
You can use mysql_insert_id() to get the id of the inserted record.
See mysql_insert_id()
But the above function is deprecated.
If you're using PDO, use PDO::lastInsertId.
If you're using MySQLi, use mysqli::$insert_id.
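For example, with PDO (a minimal sketch; $pdo is assumed to be an open connection):
$pdo->exec("INSERT INTO payments (amount, worker) VALUES (1.50000000, 42)");
$paymentId = $pdo->lastInsertId();   // id of the row just inserted
Note that after a multi-row INSERT ... SELECT, lastInsertId() only returns the id generated for the first inserted row, which is why it does not solve the bulk case on its own.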
Well, the linking column between the tables is the column worker. After you inserted your values, just do
UPDATE log l
INNER JOIN payments p ON l.worker = p.worker
SET l.payment_id = p.id;
and that's it. Or did I get the question wrong? Note that the worker columns differ in their signed/unsigned attribute; you might want to change that.
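A hedged PHP/PDO sketch that combines the INSERT ... SELECT from the question with this UPDATE ... JOIN in a single transaction (the extra WHERE on the update is an assumption, to avoid touching log rows that are not part of this run):
$pdo->beginTransaction();
try {
    // consolidate pending micro-payments, one payment row per worker
    $pdo->exec(
        "INSERT INTO payments (amount, worker)
         SELECT SUM(l.amount), l.worker
         FROM log l
         WHERE l.pstatus = 'ready' AND l.payment_id IS NULL AND l.amount > 0
         GROUP BY l.worker"
    );
    // point every consolidated log row at its new payment
    $pdo->exec(
        "UPDATE log l
         INNER JOIN payments p ON l.worker = p.worker
         SET l.payment_id = p.id
         WHERE l.payment_id IS NULL AND l.pstatus = 'ready'"
    );
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
If a worker can end up with more than one row in payments over time, the join on worker alone becomes ambiguous, so you would need a tighter link (for example a batch id) in that case.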
I think you should use an ORM in PHP:
Look into Doctrine.
Doctrine 1.2 implements Active Record. Doctrine 2+ is a DataMapper ORM.
Also, check out Xyster. It's based on the Data Mapper pattern.
Also, take a look at DataMapper vs. Active Record.

MySQL with InnoDB revision control rows

I would like to have a way of controlling/tracking revisions of rows. I am trying to find the best solution for this problem.
The first thing that comes to mind is to have a table with an id to identify the row and an id for the revision number. The combined ids would be the primary key, so example data might look like this:
1, 0, "original post"
1, 1, "modified post"
1, 2, "modified again post"
How can I create a table with this behavior? or is there a better solution to do this?
I like InnoDB since it supports transactions, foreign keys and, in MySQL 5.6+, full-text search.
I know it's possible to "force" this behaviour by how I insert the data, but I'm wondering if there is a way to have the table do this automatically.
Consider table structure:
TABLE posts
post_id INT AUTO_INCREMENT PK
cur_rev_id INT FK(revisions.rev_id)
TABLE revisions
rev_id INT AUTO_INCREMENT PK
orig_post INT FK(posts.post_id)
post_text VARCHAR
Where the posts table tracks non-versioned information about the post and its current revision, and revisions tracks each version of the post text with a link back to the parent post. Because of the circular FK constraints you'd need to enclose new post insertions in a transaction.
With this you should be able to easily add, remove, track, roll back, and preview revisions to your posts.
Edit:
Yeah, enclosing it in a transaction won't exactly help, since the keys are set to AUTO_INCREMENT, so you need to dip back into PHP with LAST_INSERT_ID() and a temporarily NULL foreign key column.
CREATE TABLE `posts` (
`post_id` INT(10) NOT NULL AUTO_INCREMENT,
`cur_rev_id` INT(10) NULL DEFAULT NULL,
`post_title` VARCHAR(50) NULL DEFAULT NULL,
PRIMARY KEY (`post_id`),
INDEX `FK_posts_revisions` (`cur_rev_id`)
) ENGINE=InnoDB;
CREATE TABLE `revisions` (
`rev_id` INT(10) NOT NULL AUTO_INCREMENT,
`orig_post` INT(10) NULL DEFAULT NULL,
`post_text` VARCHAR(32000) NULL DEFAULT NULL,
PRIMARY KEY (`rev_id`),
INDEX `FK_revisions_posts` (`orig_post`)
) ENGINE=InnoDB;
ALTER TABLE `posts`
ADD CONSTRAINT `FK_posts_revisions` FOREIGN KEY (`cur_rev_id`) REFERENCES `revisions` (`rev_id`);
ALTER TABLE `revisions`
ADD CONSTRAINT `FK_revisions_posts` FOREIGN KEY (`orig_post`) REFERENCES `posts` (`post_id`);
Then:
$db_engine->query("INSERT INTO posts (cur_rev_id, post_title) VALUES (NULL, 'My post Title!')");
$post_id = $db_engine->last_insert_id();
$db_engine->query("INSERT INTO revisions (orig_post, post_text) VALUES($post_id, 'yadda yadda')");
$rev_id = $db_engine->last_insert_id();
$db_engine->query("UPDATE posts SET cur_rev_id = $rev_id WHERE post_id = $post_id");
If I've understood you correctly and the table doesn't receive large numbers of updates/deletes then you could look at setting a trigger such as:
DELIMITER $$
CREATE TRIGGER t_table_update BEFORE UPDATE ON table_name
FOR EACH ROW
BEGIN
INSERT INTO table_name_revisions (item_id, data, timestamp)
VALUES (OLD.id, OLD.data, NOW());
END$$
DELIMITER ;
See trigger syntax for more information

I want to select a specified set of rows then pick one random row out of those

I will be creating the following table
$sql[] = "CREATE TABLE IF NOT EXISTS #__GmQuestions(
QnID int(11) NOT NULL AUTO_INCREMENT,
Question text COLLATE utf8_unicode_ci NOT NULL,
Answer text COLLATE utf8_unicode_ci NOT NULL,
QnLevel int(11) NOT NULL,
QnPrize text COLLATE utf8_unicode_ci,
QnPoints DECIMAL( 10, 2 ) NOT NULL,
PRIMARY KEY (QnID)
)";
and the following table
$sql[] = "CREATE TABLE IF NOT EXISTS #__GmHistory(
HsID int(11) NOT NULL AUTO_INCREMENT,
HsGamerID int(11) NOT NULL,
HsQnID int(11) NOT NULL,
Hspoints DECIMAL( 10, 2 ) NOT NULL,
HsAnswer varchar(55) COLLATE utf8_unicode_ci NOT NULL,
HsStatus varchar(55) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'Pending',
HsDateCreated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
HsPrize varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (HsID)
)";
The level column will have integers from 1 to 10. I want to select a random question from the questions where QnLevel = 1, and this question must not have been answered by the user before.
So I have this query:
// SELECT the gamer's question HISTORY
$result = mysql_query("SELECT HsQnID FROM #__GmHistory WHERE HsGamerID = $GamerID");
$Answeredquestions = array();
while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
    $Answeredquestions[] = $row['HsQnID'];
}
// Now select questions, making sure the user hasn't answered them
// (note: guard against an empty $Answeredquestions array before using NOT IN)
$query = "SELECT * FROM #__GmQuestions WHERE QnLevel = 1 AND QnID NOT IN (" . implode(',', $Answeredquestions) . ")";
My big problem is that I need to select one random question out of these selected questions. My db is big, up to 600,000 questions, and I have seen some complaints about RAND() on big databases. Any ideas how to pick only one random question the user hasn't answered yet? I am still developing, so all answers are welcome, even if it means changing my tables.
If you are not using RAND() because it needs to search through the entire table (I still think you should time it, it might be "fast enough"), you can build a table of PKs that contains a list of only unanswered questions and user ids.
This second lookup table should be smaller than all possible questions. You can then get a random primary key from the lookup table and select it directly from the main question table.
The only problem with this is that you need to keep the lookup table refreshed; which can be a trigger or a scheduled tasks that can be done as per your system's usage patterns.
You can also cache the results of a lookup such as SELECT id, userid, answer on the client side and use PHP's random functions to select from this list.
If you are going to do this; I suggest using an external cache so that multiple processes don't keep picking the same question out of this unanswered set.
I would probably solve it using the following (see the sketch after this list):
Get a count of all the questions that the user has not answered, e.g. 400,000. Let's say count = 400000.
In your script, get a random number between 0 and count - 1 (OFFSET is zero-based), e.g. 323875. Let's say random = 323875.
Using the random number as an OFFSET together with LIMIT 1, you can then fetch a single question record.
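A hedged sketch of that approach with the tables from the question, using PDO instead of the deprecated mysql_* functions ($pdo and $gamerId are assumed; the #__ table prefix is kept as in the question and would be replaced by your framework):
// 1) count the unanswered level-1 questions for this gamer
$sqlWhere = "WHERE q.QnLevel = 1
             AND q.QnID NOT IN (SELECT h.HsQnID FROM #__GmHistory h WHERE h.HsGamerID = ?)";
$stmt = $pdo->prepare("SELECT COUNT(*) FROM #__GmQuestions q " . $sqlWhere);
$stmt->execute([$gamerId]);
$count = (int) $stmt->fetchColumn();

// 2) pick a random zero-based offset and fetch exactly one question at that position
$offset = ($count > 0) ? mt_rand(0, $count - 1) : 0;
$stmt = $pdo->prepare("SELECT q.* FROM #__GmQuestions q " . $sqlWhere .
                      " ORDER BY q.QnID LIMIT 1 OFFSET " . $offset);
$stmt->execute([$gamerId]);
$question = $stmt->fetch(PDO::FETCH_ASSOC);   // false if the user has answered everything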

MySQL + PHP: select multiple rows on a join, then update those rows/insert new ones

I want to do the following:
Select multiple rows on an INNER JOIN between two tables.
Using the primary keys of the returned rows, either:
Update those rows, or
Insert rows into a different table with the returned primary key as a foreign key.
In PHP, echo the results of step #1 out, ideally with results of #2 included (to be consumed by a client).
I've written the join, but not much else. I tried using a user-defined variable to store the primary keys from step #1 to use in step #2, but as I understand it user-defined variables are single-valued, and my SELECT can return multiple rows. Is there a way to do this in a single MySQL transaction? If not, is there a way to do this with some modicum of efficiency?
Update: Here are the schemas of the tables I'm concerned with (names changed, 'natch):
CREATE TABLE IF NOT EXISTS `widgets` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`author` varchar(75) COLLATE utf8_unicode_ci NOT NULL,
`text` varchar(500) COLLATE utf8_unicode_ci NOT NULL,
`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated` timestamp
NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
CREATE TABLE IF NOT EXISTS `downloads` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`widget_id` int(11) unsigned NOT NULL,
`lat` float NOT NULL,
`lon` float NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
I'm currently doing a join to get all widgets paired with their downloads. Assuming $author and $batchSize are PHP variables:
SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
FROM widgets AS w
INNER JOIN downloads AS d
ON w.id = d.widget_id
WHERE w.author NOT LIKE '$author'
ORDER BY w.updated ASC
LIMIT $batchSize;
Ideally my query would get a bunch of widgets, update their updated field OR insert a new download referencing that widget (I'd love to see answers for both approaches, haven't decided on one yet), and then allow the joined widgets and downloads to be echoed. Bonus points if the new inserted download or updated widgets are included in the echo.
Since you asked if you can do this in a single MySQL transaction, I'll mention cursors. Cursors allow you to do a select and loop through each row, doing the insert or anything else you want, all within the db. So you could create a stored procedure that does all the logic behind the scenes and call it via PHP.
Based on your update, I wanted to mention that the stored procedure can return the new recordset or an id, anything you want. For more info on creating stored procedures that return a recordset with PHP, check out this post: http://www.joeyrivera.com/2009/using-mysql-stored-procedure-inout-and-recordset-w-php/
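A hedged sketch of such a stored procedure with a cursor, built on the widgets/downloads schema from the question (the procedure name, parameters and the "insert one download per selected widget" behaviour are all illustrative assumptions):
DELIMITER $$
CREATE PROCEDURE record_downloads(IN p_author VARCHAR(75), IN p_batch INT,
                                  IN p_lat FLOAT, IN p_lon FLOAT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE w_id INT UNSIGNED;
  DECLARE cur CURSOR FOR
    SELECT w.id FROM widgets w
    WHERE w.author NOT LIKE p_author
    ORDER BY w.updated ASC
    LIMIT p_batch;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO w_id;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- insert a download row for each selected widget
    INSERT INTO downloads (widget_id, lat, lon) VALUES (w_id, p_lat, p_lon);
  END LOOP;
  CLOSE cur;

  -- return the processed widgets joined with their downloads (new rows included)
  SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
  FROM widgets w
  INNER JOIN downloads d ON d.widget_id = w.id
  WHERE w.author NOT LIKE p_author
  ORDER BY w.updated ASC;
END$$
DELIMITER ;
From PHP you would then call it with something like CALL record_downloads('someauthor', 50, 40.7, -74.0) and read the returned result set as described in the linked post.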
