MySQL search database for similar results - PHP

Essentially what I want to do is search a number of MySQL databases and return results where a certain field is more than 50% similar to another record in the databases.
What am I trying to achieve?
I have a number of writers who add content to a network of websites that I own. I need a tool that will tell me if any of the pages they have written are too similar to any of the pages currently published on the network. This could run on post/update or as a cron job... either way would work for me.
I've tried making something with PHP, pulling the records from the database and using the function similar_text(), which gives a % similarity between two strings. This, however, is not a workable solution, as you have to compare every entry against every other entry, and I worked out with microtime() that it would take around 80 hours to completely search all of the entries!
Wondering if it's even possible!?
Thanks!

What you are probably looking for is SOUNDEX. It is the only sound-based search in MySQL. If you have A LOT of data to compare, you're probably going to need to pregenerate the SOUNDEX values and compare the soundex columns, or use it live like this:
SELECT * FROM data AS t1 LEFT JOIN data AS t2 ON SOUNDEX(t1.fieldtoanalyse) = SOUNDEX(t2.fieldtoanalyse)
Note that you can also use the
t1.fieldtoanalyze SOUNDS LIKE t2.fieldtoanalyze
syntax.
Finally, you can save the SOUNDEX to a column; just create the column and:
UPDATE data SET fieldsoundex = SOUNDEX(fieldtoanalyze)
and then compare live with pregenerated values
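For example, a minimal sketch of that live comparison against the pregenerated column (reusing the hypothetical data table and fieldsoundex column from above; an integer id primary key is assumed):
<?php
// Hypothetical connection details; adjust to your setup.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Pair up rows whose precomputed SOUNDEX codes match;
// the t1.id < t2.id condition reports each pair only once.
$sql = "SELECT t1.id AS a, t2.id AS b
        FROM data AS t1
        JOIN data AS t2
          ON t1.fieldsoundex = t2.fieldsoundex
         AND t1.id < t2.id";

$result = $db->query($sql);
while ($row = $result->fetch_assoc()) {
    echo "Rows {$row['a']} and {$row['b']} sound alike\n";
}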
More on Soundex
Soundex is a function that analyzes the composition of a word, but in a very crude way. It is very useful for comparisons like "Color" vs "Colour" and "Armor" vs "Armour", but it can also sometimes dish out weird results with long words, because the SOUNDEX of a word is a letter plus a 3-digit code. There is only so much you can do, sadly, with those combinations.
Note that there is no Levenshtein or Metaphone implementation in MySQL... not yet. Levenshtein would probably have been the best fit for your case.

Anything is possible.
Without knowing your criteria for "similar", it's difficult to offer a specific solution. However, my suggestion would be to pre-build a similarity table using a function such as similar_text(), and use that as your index table when searching by term.
You'll take an initial hit to build such an index, but you can maintain it more easily as new records are added.
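For instance, a rough sketch of maintaining such a table as new records are added (the articles and similarity tables and their columns are assumptions, not part of the question):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Compare only the article that was just saved against the existing ones,
// instead of re-comparing everything against everything.
$newId   = 123;                        // id of the freshly saved article (example value)
$newText = 'Body of the new article';  // its content

$existing = $db->query("SELECT id, body FROM articles WHERE id <> $newId");
while ($row = $existing->fetch_assoc()) {
    similar_text($newText, $row['body'], $percent);   // % similarity of the two strings
    if ($percent > 50) {
        $stmt = $db->prepare(
            "INSERT INTO similarity (article_a, article_b, score) VALUES (?, ?, ?)"
        );
        $stmt->bind_param('iid', $newId, $row['id'], $percent);
        $stmt->execute();
    }
}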

Thanks for your answers guys. For anyone looking for a solution to a problem similar to this: I used the SOUNDEX function to pull out entries that had a similar title, then compared them with the similar_text() function. Not quite a complete database comparison, but as near as I could get!
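A rough sketch of that approach (table and column names are assumptions; the 50% threshold is just an example):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$title = 'Title of the draft page';
$body  = 'Full text of the draft page...';

// Step 1: use SOUNDEX on the title to pull out a small set of candidates.
$stmt = $db->prepare("SELECT id, body FROM articles WHERE SOUNDEX(title) = SOUNDEX(?)");
$stmt->bind_param('s', $title);
$stmt->execute();
$candidates = $stmt->get_result();   // get_result() needs the mysqlnd driver

// Step 2: score only those candidates with similar_text().
while ($row = $candidates->fetch_assoc()) {
    similar_text($body, $row['body'], $percent);
    if ($percent > 50) {
        echo "Article {$row['id']} is {$percent}% similar\n";
    }
}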

Related

CI & MySQL; using LIKE on a delimited string

Maybe LIKE isn't the proper solution for what I'm attempting to do, but what I'm looking to achieve is to take a string retrieved from the database (i.e. tags [ tag1,tag2,tag3,tag4 ]), then compare each value (delimited by the comma) and return like rows accordingly.
<?php
$string = 'tag1,tag2,tag3,tag4';
$tags = explode(',', $string);
foreach ($tags as $tag)
{
    $this->db->limit(2);
    $this->db->like('Tags', $tag);
    $result = $this->db->get('table');
    $recommend[] = $result;
}
This is the best I could work out for what I'm looking to do, but I'm quite confused about the LIKE method in general in CI's Active Record class.
Is performing searches on a delimited string in a database even really wise?
Thanks for the input and ideas, you two. MySQL goes a bit over my head, but after reading a bit about the things possible with relational databases, I'm going to attempt to learn this process so I can build my database more efficiently.
My database is going to be a catalog of video games and files, so obviously it's going to reach insane amounts of entries - so attempting to look through any sort of delimited string for possible returned rows would be by far the dumbest thing I could do in the long run.
Again, thanks for the information and ideas.
If you have a properly normalized database, tags should be split out into a new table, as this is a one-to-many (or maybe even a many-to-many) relationship.
However, the easiest way to achieve what you describe is to also place a comma before the first tag and after the last, and simply search for:
SELECT * FROM table WHERE Tags LIKE "%,coleslaw,%";
But yes, this is bad, and it can be slow as your table grows, since you will not be able to utilize indexes.
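For illustration, here is a sketch of the normalized layout and an indexable lookup (all table and column names here - items, tags, item_tag - are made up for the example):
<?php
// Assumed schema:
//   items(id, ...)             -- the rows that currently hold the Tags column
//   tags(id, name)             -- one row per distinct tag
//   item_tag(item_id, tag_id)  -- many-to-many join table
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$tag  = 'tag1';
$stmt = $db->prepare(
    "SELECT i.*
     FROM items AS i
     JOIN item_tag AS it ON it.item_id = i.id
     JOIN tags AS t      ON t.id = it.tag_id
     WHERE t.name = ?
     LIMIT 2"
);
$stmt->bind_param('s', $tag);
$stmt->execute();
$result = $stmt->get_result();   // get_result() needs the mysqlnd driver
The WHERE t.name = ? comparison is an exact match that can use an index on tags.name, unlike LIKE "%,tag1,%".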
There is a FIELD command in MySQL that you might be able to take advantage of. You can read up on it here.
But in general, performing searches on delimited strings, or substrings in general, in a database will kill performance as you will lose any indexing when matching on the substring. If you have a thousand rows, MySQL will have to scan through all one thousand rows before discovering that there is no match. This applies to both Like and Substring appearing in a where clause. Many database implementations specifically provide a BEGINS WITH operator that can still take advantage of indexing, but I'm not aware of this in MySQL.
Consider instead breaking the string up during insertion and storing each word separately in a different table. If you find that you do this a lot, consider an engine built for text searching, such as Lucene. You can use PHP with Lucene through Solr. While I throw that out there, it may very well be complete overkill for your project.
I know this probably isn't what you want to hear, but your gut is correct on this one.

fuzzy searching an array in php

After searching I found how to do fuzzy searching on a string, but I have an array of strings
$search = array("a" => "laptop", "b" => "screen", ....);
that I retrieved from the MySQL DB.
Is there any PHP class or function that does fuzzy searching on an array of words, or at least a link with some useful info?
I saw a comment that recommended using PostgreSQL and its fuzzy searching capability, but the company already has a MySQL DB.
Is there any recommendation?
You could do this in MySQL since you already have a MySQL database - see "How do I do a fuzzy match of company names in MYSQL with PHP for auto-complete?", which mentions the MySQL Double Metaphone implementation and has an implementation in SQL for MySQL 5.0+.
Edit: Sorry answering here as there is more than could fit in a comment…
Since you've already accepted an answer using PHP's levenshtein() function, I suggest you try that approach first. Software is iterative; the PHP array search may be exactly what you want, but you have to test and implement it against your requirements first. As I said in your other question, a find-as-you-type solution might be the simplest option here, which simply narrows the product list as the user types. There might not be a need to implement any fuzzy searching, since you are letting the user do the fuzzy search themselves :-)
For example, a user starts typing S, a, m, which allows you to narrow the products to those beginning with Sam. So you are only ever letting the user select a product you already know is valid.
Look at the levenshtein() function.
Basically it gives you the difference (in terms of cost) between two strings, i.e. what is the cost to transform string A into string B.
Set yourself a threshold Levenshtein distance; anything under that for two words means they're similar.
Also the Bitap algorithm is faster since it can be implemented via bitwise operators, but I believe you will have to implement it yourself, unless there is a PHP lib for it somewhere.
EDIT
To use the levenshtein() method:
The search string is "maptop", and you set your "cost threshold" to, say, 2. That means you want any words that are two string transform operations away from your search string.
So you loop through your array "A" of strings until
levenshtein($A[$i], $searchString) <= 2
That will be your match. However, you may get more than one word that matches, so it is up to you how you want to handle the extra results.
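A minimal sketch of that loop, using the example values from above:
<?php
$words        = array('a' => 'laptop', 'b' => 'screen', 'c' => 'mouse');
$searchString = 'maptop';
$threshold    = 2;   // maximum edit distance we still call "similar"

$matches = array();
foreach ($words as $key => $word) {
    if (levenshtein($word, $searchString) <= $threshold) {
        $matches[$key] = $word;   // keep every word within the threshold
    }
}

print_r($matches);   // Array ( [a] => laptop )
How you rank or pick between multiple matches (for example, by taking the smallest distance) is up to you.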

How do I get this lightning fast search?

I just came across this site: http://www.hittaplagget.se. If you enter the search word moo, the autosuggest pops up immediately.
But if you go to my site, http://storelocator.no, and use the same search phrase (in the "Search for brand" field), it takes a lot longer for the autosuggest to suggest anything.
I know that we can only guess at what type of technology they are using, but hopefully someone here can make a better educated guess than I can.
In my solution I only do a SELECT ... FROM table WHERE field LIKE 'moo%' and return the results.
I have not yet indexed my table, as there are only 7000 rows in it. But I'm thinking of indexing my tables using Lucene.
Can anyone suggest what I need to do in order to get equally fast autosuggest?
You must add an index on the column holding your search terms, even at 7000 rows - otherwise the database searches through the whole list every time. See http://dev.mysql.com/doc/refman/5.0/en/create-index.html.
Lucene is a full-text search index and may or may not be what you're looking for. Lucene would find any occurrence of "moo" in the entire indexed column (e.g. Mootastic and Fantasticmoo) and does not necessarily speed up your search, although it's faster than a WHERE x LIKE '%moo%' type of search.
As others have already pointed out, a regular index (probably even unique?) is what you want if you're performing "starts with" type searches.
You will need to table-scan the table, so I suggest:
Don't put any rows in the table you don't need - for example, "inactive" records - keep them in a different table
Don't put any columns in the table you don't need
You can achieve this by having a special "Search table" which just contains the rows/columns you're interested in, and updating it from the "Master table".
Table-scanning a 7000 row table should be extremely efficient if the rows are small; I understand from your problem domain that this will be the case.
But as others have pointed out - don't send the 7000 rows to the client-side when it doesn't need it.
A conventional index can optimise a LIKE 'someprefix%' into a range-scan, so it is probably helpful having one. If you want to search for the string in any part of the entry, it is going to be a table-scan (which should not be slow on such a tiny table!)
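As a sketch of both points (the brands table, name column and index name are assumptions, not taken from the site in question):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// One-off: index the searched column so LIKE 'prefix%' becomes a range scan.
$db->query("CREATE INDEX idx_brand_name ON brands (name)");

// Autosuggest query: keeping the wildcard only at the end lets MySQL use the index.
$stmt = $db->prepare("SELECT name FROM brands WHERE name LIKE CONCAT(?, '%') LIMIT 10");
$term = 'moo';
$stmt->bind_param('s', $term);
$stmt->execute();
$stmt->bind_result($name);
while ($stmt->fetch()) {
    echo $name, "\n";
}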

search big database

I have a database which holds URLs in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search all the links against all the patterns (n x m searches) without causing a high load on the server and without losing speed. I want it to operate at high speed and with low resource usage. Any hints or suggestions, even in pseudo-code, are welcome.
Right now I don't know whether to use SQL commands to perform these searches with some help from PHP, or to do it completely in PHP.
First I'd suggest that you rethink the layout. It seems a little unnecessary to run this query for every user; instead, create a result table into which you insert the results from the query, which runs once and whenever the patterns change.
Otherwise, make sure you have (full-text) indexes set on the fields you need. For the query itself you could join the tables:
SELECT
yourFieldsHere
FROM
theUrlTable AS tu
JOIN
thePatternTable AS tp ON tu.link LIKE CONCAT('%', tp.pattern, '%');
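For example, a rough sketch of materializing that join into a result table whenever the pattern list changes (the match_results table is a placeholder name; the other names follow the query above):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Rebuild the precomputed matches; run this only when the patterns change,
// not on every user request.
$db->query("TRUNCATE TABLE match_results");
$db->query(
    "INSERT INTO match_results (url_id, pattern_id)
     SELECT tu.id, tp.id
     FROM theUrlTable AS tu
     JOIN thePatternTable AS tp ON tu.link LIKE CONCAT('%', tp.pattern, '%')"
);

// User-facing requests then only read the small, indexed result table.
$result = $db->query("SELECT url_id, pattern_id FROM match_results LIMIT 50");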
I would say that you pretty definitely want to do that in the SQL code, not the PHP code. Also, searching on the URL strings is going to be a long operation, so perhaps some form of hashing would be good. I have seen someone use a variant of a Zobrist hash for this before (Google will bring back a load of results).
Hope this helps,
Dan.
Do as much searching as you practically can within the database. If you're ending up with an n x m result set and start with at least 5 million hits, that's a LOT of data to be repeatedly slurping across the wire (or socket, however you're connecting to the DB) just to end up throwing away most (a lot?) of it each time. Even if the DB's native search capabilities ('like' matches, regexp, full-text, etc...) aren't up to the task, culling unwanted rows BEFORE they get sent to the client (your code) will still be useful.
You should optimize your tables in the DB. Use an MD5 hash: a new column holding the MD5 of the URL can use an index, so exact text lookups are found much faster.
But it won't help if you use LIKE '%text%'.
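For the exact-match case that such a hash column helps with, a sketch (assuming a hypothetical urls table with a url column):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// One-off: add and fill an indexed MD5 column (useful for exact lookups only).
$db->query("ALTER TABLE urls ADD COLUMN url_md5 CHAR(32), ADD INDEX (url_md5)");
$db->query("UPDATE urls SET url_md5 = MD5(url)");

// An exact lookup via the short hash uses the index instead of scanning long URL strings.
$stmt = $db->prepare("SELECT id FROM urls WHERE url_md5 = MD5(?)");
$url  = 'http://example.com/some/long/path';
$stmt->bind_param('s', $url);
$stmt->execute();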
You can use Sphinx or Lucene.

PHP - How to suggest terms for search, "did you mean...?"

When searching the db with terms that retrieve no results, I want to offer a "did you mean..." suggestion (like Google).
So for example if someone looks for "jquyer", it would output "did you mean jquery?"
Of course, suggestion results have to be matched against the values inside the db (I'm using MySQL).
Do you know a library that can do this? I've googled this but haven't found any great results.
Or perhaps you have an idea how to construct this on my own?
A quick and easy solution involves SOUNDEX or SOUNDEX-like functions.
In a nutshell, the SOUNDEX function was originally used to deal with common typos and alternate spellings of family names, and it encapsulates very well many common spelling mistakes (in the English language). Because of its focus on family names, the original soundex function may be limiting (for example, encoding stops after the third or fourth non-repeating consonant letter), but it is easy to extend the algorithm.
The interest of this type of function is that it allows computing, ahead of time, a single value which can be associated with the word. This is unlike string distance functions such as edit distance functions (such as Levenshtein, Hamming or even Ratcliff/Obershelp) which provide a value relative to a pair of strings.
By pre-computing and indexing the SOUNDEX value for all words in the dictionary, one can, at run-time, quickly search the dictionary/database based on the [run-time] calculated SOUNDEX value of the user-supplied search terms. This Soundex search can be done systematically, as complement to the plain keyword search, or only performed when the keyword search didn't yield a satisfactory number of records, hence providing the hint that maybe the user-supplied keyword(s) is (are) misspelled.
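A minimal sketch of that pre-computed lookup (the dictionary table and its word and sdx columns are assumptions):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// One-off: store the SOUNDEX code next to each dictionary word and index it.
$db->query("ALTER TABLE dictionary ADD COLUMN sdx VARCHAR(20), ADD INDEX (sdx)");
$db->query("UPDATE dictionary SET sdx = SOUNDEX(word)");

// At search time, fall back to this only when the plain keyword search finds nothing.
$term = 'jquyer';
$stmt = $db->prepare("SELECT word FROM dictionary WHERE sdx = SOUNDEX(?) LIMIT 5");
$stmt->bind_param('s', $term);
$stmt->execute();
$stmt->bind_result($suggestion);
while ($stmt->fetch()) {
    echo "Did you mean {$suggestion}?\n";   // e.g. "Did you mean jquery?"
}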
A totally different approach, only applicable to user queries which include several words, is based on running multiple queries against the dictionary/database, excluding one (or several) of the user-supplied keywords. These alternate queries' result lists provide a list of distinct words; this [reduced] list of words is typically small enough that pair-based distance functions can be applied to select, within the list, the words which are closest to the allegedly misspelled word(s). The word frequency (within the result lists) can be used both to limit the number of words (only evaluate similarity for the words which are found more than x times) and to provide weight, to slightly skew the similarity measurements (i.e. favoring words found "in quantity" in the database, even if their similarity measurement is slightly lower).
How about the levenshtein() function, or the similar_text() function?
Actually, I believe Google's "did you mean" function is generated by what users type in after they've made a typo. However, that's obviously a lot easier for them since they have unbelievable amounts of data.
You could use Levenshtein distance as mgroves suggested (or Soundex), but store results in a database. Or, run separate scripts based on common misspellings and your most popular misspelled search terms.
http://www.phpclasses.org/browse/package/4859.html
Here's an off-the-shelf class that's rather easy to implement, which employs minimum edit distance. All you need to do is have a token (not type) list of all the words you want to work with handy. My suggestion is to make sure it's the complete list of words within your search index, and only within your search index. This helps in two ways:
Domain specificity helps avoid misleading probabilities from overtaking your implementation
Ex: "Memoize" may be spell-corrected to "Memorize" by most off-the-shelf dictionaries, but that's a perfectly good search term for a computer science page.
Proper nouns that are available within your search index are now accounted for.
Ex: If you're Dell, and someone searches for 'inspiran', there's absolutely no chance the spell-correct function will know you mean 'inspiron'. It will probably spell-correct to 'inspiring' or something more common, and, again, less domain-specific.
When I did this a couple of years ago, I already had a custom built index of words that the search engine used. I studied what kinds of errors people made the most (based on logs) and sorted the suggestions based on how common the mistake was.
If someone searched for jQuery, I would build a select-statement that went
SELECT Word, 1 AS Relevance
FROM keywords
WHERE Word IN ('qjuery','juqery','jqeury' etc)
UNION
SELECT Word, 2 AS Relevance
FROM keywords
WHERE Word LIKE 'j_query' OR Word LIKE 'jq_uery' etc etc
ORDER BY Relevance, Word
The resulting words were my suggestions and it worked really well.
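A sketch of how the variant lists in such a statement could be generated in PHP (adjacent-letter swaps for the IN list, and single-character-insertion patterns using the _ wildcard for the LIKE clauses):
<?php
$word = 'jquery';

$swaps    = array();   // transposed-letter variants for the IN (...) list
$patterns = array();   // insertion patterns for the LIKE clauses

for ($i = 0, $len = strlen($word); $i < $len - 1; $i++) {
    // Swap the characters at positions $i and $i + 1.
    $swapped         = $word;
    $swapped[$i]     = $word[$i + 1];
    $swapped[$i + 1] = $word[$i];
    $swaps[]         = $swapped;

    // Allow one extra character after position $i.
    $patterns[] = substr($word, 0, $i + 1) . '_' . substr($word, $i + 1);
}

echo implode(', ', $swaps), "\n";     // qjuery, juqery, jqeury, jqurey, jqueyr
echo implode(', ', $patterns), "\n";  // j_query, jq_uery, jqu_ery, jque_ry, jquer_y
These lists can then be escaped and dropped into the IN (...) and LIKE parts of the statement above.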
You should keep track of common misspellings that come through your search (or generate some yourself with a typo generator) and store the misspelling and the word it matches in a database. Then, when you have nothing matching any search results, you can check against the misspelling table, and use the suggested word.
Writing your own custom solution will take quite some time and is not guaranteed to work if your dataset isn't big enough, so I'd recommend using an API from a search giant such as Yahoo. Yahoo's results aren't as good as Google's but I'm not sure whether Google's is meant to be public.
You can simply use an API like this one: https://www.mashape.com/marrouchi/did-you-mean
