We're using Sphinx (likely not relevant, since we're returning the resulting objects from MySQL) to return users' search results. This part is working fine. We're displaying 30 items per page, but there may be up to 20k results that match.
What we're trying to do is add the ability to filter search results based on the attributes and options of the entire result set. Take this Amazon search, for instance:
https://www.amazon.ca/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=tablet
If you look at the left side, you can filter by brand, category, keywords, discount percentage, memory capacity, screen size, and so on. Obviously this doesn't apply only to the currently displayed search results, but to the entire result set (which in this Amazon example maxes out at 400 pages).
If we were to do that, how could we avoid loading and looping through all 400 * 30 results to build the relevant attribute/category filters? We've tried looping just to see how long that would take, and it's easily above 15 seconds. We've also tried caching common search terms (such as 'tablet' in this case), but obviously most user searches won't fall neatly into easily cacheable result sets.
Also, is there a name for this kind of post-search filtering over the entire result set?
This is often called faceted search, i.e. you can filter the results by facets.
Good overview...
http://sphinxsearch.com/blog/2013/06/21/faceted-search-with-sphinx/
In short, let Sphinx calculate the lists and counts rather than doing it in post-processing.
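As an illustration, here's a minimal PHP sketch of that idea using SphinxQL over PDO. It assumes a reasonably recent Sphinx with the SphinxQL listener on port 9306, and an index named products with an integer attribute brand_id (those names are hypothetical). The facet counts are computed inside Sphinx over the whole matching set, so PHP never loops over 20k rows:

    <?php
    // Connect to searchd's SphinxQL listener (it speaks the MySQL protocol).
    $sphinx = new PDO('mysql:host=127.0.0.1;port=9306');

    // The visible page of results: 30 items.
    $page = $sphinx->query(
        "SELECT * FROM products WHERE MATCH('tablet') LIMIT 0, 30"
    )->fetchAll(PDO::FETCH_ASSOC);

    // Facet counts over the ENTIRE matching set, computed by Sphinx.
    $brandFacets = $sphinx->query(
        "SELECT brand_id, COUNT(*) AS cnt
         FROM products
         WHERE MATCH('tablet')
         GROUP BY brand_id
         ORDER BY cnt DESC
         LIMIT 20"
    )->fetchAll(PDO::FETCH_ASSOC);

Newer Sphinx versions (2.2.3+) also offer a dedicated FACET clause that bundles several of these grouped queries into a single round trip.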
Related
I have a project where users can search for electrical goods. Search is implemented with Sphinx (note: the Sphinx version is 2.0.4 and I can't update it).
For example, we have the query Светильник Е27 (lamp E27). The results are as follows:
To me, the results are not correct, because I think results 6-11 are far more relevant than 1-5.
Is it possible to fix this issue?
P.S. I have already tried applying SPH_RANK_WORDCOUNT and SPH_RANK_SPH04 as the ranking mode, but the results are the same.
Having now clarified in the comments, I can say:
1) Check what fields you have indexed for each document; it might be that Светильник is used a lot in those fields, boosting the ranking of the less relevant documents. Since you seem to want most of the ranking weight on the title, you could omit the less relevant fields.
2) You can also specifically make the title play a bigger part in ranking with setFieldWeights().
3) Finally, you can even match only against the title using the extended match mode, @title Светильник Е27 - the words would then have to be in the title, so results 1-5 wouldn't even show.
... basically, it's all about manipulating which fields are matched and used for ranking; see the sketch below.
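Here's a minimal sketch combining points 2) and 3) with the legacy Sphinx PHP API (sphinxapi.php). The index name (products) and field names (title, content) are hypothetical; adjust them to your setup:

    <?php
    require_once 'sphinxapi.php';

    $cl = new SphinxClient();
    $cl->SetServer('localhost', 9312);

    // Extended match mode enables the @field syntax.
    $cl->SetMatchMode(SPH_MATCH_EXTENDED2);

    // Point 2: make the title dominate the ranking.
    $cl->SetFieldWeights(array('title' => 10, 'content' => 1));

    // Point 3: require the words to appear in the title itself.
    $result = $cl->Query('@title Светильник Е27', 'products');

    if ($result === false) {
        echo 'Query failed: ' . $cl->GetLastError();
    }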
There is something I am trying to accomplish, although I'm not really sure where to start.
I currently have a MySQL database with a list of articles. The DB contains each article's title, content, and some other info such as dates.
There is an RSS feed that we monitor for new articles; it's a Google Alert feed that just contains the latest news on certain subjects. I want to be able to automatically monitor this feed and record any feed items that are similar to stories currently in our DB.
I know how to set a script to run automatically, and I know how to parse the RSS feed with SimplePie.
What I need to figure out is how to take the description of each RSS feed item, check it against our DB to see whether it is similar to something we already have, and return a numerical score of some sort, a kind of "similarity rating".
After that I can have the info I need recorded to the DB if the "similarity rating" is above a set limit, which I know how to do.
So my only issue is how to compare each feed item to our current articles and return a score based on how similar they are.
The Levenshtein function (available for both PHP and MySQL) is a good way to handle this. It calculates the number of single-character edits (insertions, deletions, and substitutions) required to convert one string into another. That score would be your "similarity rating": the lower the score, the more similar the strings.
EDIT: the Levenshtein function is not actually available natively in MySQL, but there are SQL implementations of it that you can use, such as: http://kristiannissen.wordpress.com/2010/07/08/mysql-levenshtein/
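To tie this together with the SimplePie setup mentioned in the question, here is a minimal sketch. The table and column names (articles, id, title) and the threshold are hypothetical; also note that PHP's levenshtein() is limited to 255-character inputs on older PHP versions, so this compares titles rather than full content:

    <?php
    require_once 'vendor/autoload.php'; // or wherever SimplePie is loaded from

    $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
    $articles = $pdo->query('SELECT id, title FROM articles')
                    ->fetchAll(PDO::FETCH_KEY_PAIR);

    $feed = new SimplePie();
    $feed->set_feed_url('https://www.google.com/alerts/feeds/...'); // your Alert feed URL
    $feed->init();

    foreach ($feed->get_items() as $item) {
        $feedTitle = strtolower(strip_tags($item->get_title()));
        foreach ($articles as $id => $dbTitle) {
            // Normalise the edit distance into a 0-100 "similarity rating".
            $distance = levenshtein($feedTitle, strtolower($dbTitle));
            $maxLen   = max(strlen($feedTitle), strlen($dbTitle), 1);
            $rating   = 100 - (int)round($distance * 100 / $maxLen);

            if ($rating >= 80) { // arbitrary threshold; tune it
                // record the match against article $id, as described above
            }
        }
    }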
I want to generate a list of the most used words on a website. The application should crawl the content of the site.
Does anyone know if this can be done with Solr or some other technique?
The list can be PHP objects/arrays or an XML file.
You might want to check the Solr TermsComponent: http://wiki.apache.org/solr/TermsComponent
Example -
http://host:port/solr/core/terms?terms.fl=title&terms.sort=count
This will give you all the terms for the field title, ordered by count (the default).
terms.fl - the field you want to check the terms on
terms.sort={count|index} - if count, sorts the terms by term frequency (highest count first); if index, returns the terms in index order. The default is to sort by count.
This returns the indexed terms, which have been through the tokenizer and filters, so if you need the terms as-is you can vary the field analysis (probably use the string field type).
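If you're consuming this from PHP, a minimal sketch might look like the following. It assumes a core named core on localhost:8983 and relies on the default flat JSON layout, where terms and counts alternate in a single array:

    <?php
    // Ask the TermsComponent for the top 50 terms in "title" as JSON.
    $url = 'http://localhost:8983/solr/core/terms'
         . '?terms.fl=title&terms.sort=count&terms.limit=50&wt=json';

    $response = json_decode(file_get_contents($url), true);

    // Default json.nl=flat layout: [term1, count1, term2, count2, ...]
    $pairs = $response['terms']['title'];
    for ($i = 0; $i < count($pairs); $i += 2) {
        printf("%s: %d\n", $pairs[$i], $pairs[$i + 1]);
    }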
Solr is a search engine; it doesn't crawl websites. You need to build a simple website crawler using Scrapy (http://scrapy.org/) or some similar tool, design a Solr schema to record the data, crawl the websites, and send record updates to Solr. Your specific question would probably be answered by the SCHEMA BROWSER option in the Solr web admin interface: click on DYNAMIC FIELDS, select the field you are interested in, and see the top 10. Change the number to 50, press ENTER, and get the top 50.
I've got Lucene search set up and running. Everything works perfectly.
My website is an application that displays results similar to eBay's: each item has an image, a title, a content description, and some other information.
I have two options for populating my data, and I'd like suggestions on which one to go for:
1) Store the title, content, image name, and every other piece of information in the index files. When users search, I will just query the index files and get everything from there.
2) Store just the title, content, and row IDs. When users search, I will query the index files, get the IDs of the matching results, then use those IDs to query my actual database for all the other information.
I would probably go with the first option: storing everything in the search/index engine (Lucene, in your case).
This way, in order to display your list of products, you will not have to make any requests to your database, which will lower the load on your DB server, and your site will scale better.
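For illustration, here's a minimal sketch using Zend_Search_Lucene (a pure-PHP Lucene port, standing in for whichever Lucene binding you actually use); the field names and index path are hypothetical:

    <?php
    require_once 'Zend/Search/Lucene.php';

    $index = Zend_Search_Lucene::create('/path/to/index');

    $doc = new Zend_Search_Lucene_Document();
    // text(): stored in the index AND searchable.
    $doc->addField(Zend_Search_Lucene_Field::text('title', 'Apple iPad 16GB'));
    $doc->addField(Zend_Search_Lucene_Field::text('content', 'A 9.7-inch tablet...'));
    // unIndexed(): stored for display only, not searchable (e.g. the image name).
    $doc->addField(Zend_Search_Lucene_Field::unIndexed('image', 'ipad.jpg'));
    $index->addDocument($doc);

    // Stored fields come straight back with each hit: no database round trip.
    foreach ($index->find('tablet') as $hit) {
        echo $hit->title, ' - ', $hit->image, "\n";
    }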
As the title says, I need a search engine... for MySQL searching.
My website is PHP based.
I was going to use Sphinx, but my hosting company doesn't support full-text indexes!
So I need a search engine that works without full-text indexes!
It should be pretty powerful, and it must include at least the functions below:
When searching for 'bmw 520', only matches where these two words appear in exactly this order are returned, not matches for only 'bmw' or only '520'.
When searching for 'bmw 330ci', results as above are returned, but WITH AND WITHOUT the 'ci' suffix. There are a number of such suffixes in car names, as you all know (i, ci, si, fi, etc.).
I want the minus sign to exclude all results containing the word after the sign, e.g. 'bmw -330' will return all 'bmw' results without the '330' ones (a NOT instead of a minus sign is also fine).
All accented characters like 'é' are converted to their simple equivalents, in this case 'e'.
A list of words (stopwords) to ignore completely in the search.
Thanks guys!
The Zend_Lucene search component works fairly well. I am not sure how it would cope with your second requirement; however, if you customize the tokenizer, you should be able to do it by treating a change from letters to numbers as a new word.
The one I am really not sure about is the first requirement. Given how text is indexed, word order becomes irrelevant in the search, so you may not be able to do it without heavily modifying Lucene, writing a filter (using Lucene to pull the matches, then checking the order; see the sketch below), or writing your own solution. All of these will slow the search down and add load to your server.
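A minimal sketch of that filter approach with Zend_Search_Lucene, assuming the searchable text lives in a 'title' field (the field name and index path are hypothetical):

    <?php
    require_once 'Zend/Search/Lucene.php';

    $index = Zend_Search_Lucene::open('/path/to/index');

    // Step 1: let Lucene find documents containing both words, in any order.
    $hits = $index->find('bmw AND 520');

    // Step 2: keep only the hits where the words appear as an exact phrase.
    $phrase  = 'bmw 520';
    $ordered = array();
    foreach ($hits as $hit) {
        if (stripos($hit->title, $phrase) !== false) {
            $ordered[] = $hit;
        }
    }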
There is also Solr, but I have never used it and don't know anything about it. Sphinx was another option, but I see you have already ruled that out.
Xapian is very good (and very comprehensive) if you have the time for the initial setup.
It functions as you would expect a search engine to: tell the indexer what bits of information to index under what namespace/table/object (Page, Profile, Products, etc.), then issue a query for your users based on keywords. It also supports Google-style prefixes, e.g. "profile:Mark icecream" would search my profile for the word "icecream", and I seem to remember it supporting ranges too for data you specify as numeric.
It can be used in local mode, which can offer spelling suggestions ("Did you mean?"), or in remote mode, which many sites can index to and query from.
What really saved me one time was the ability to attach arbitrary non-searchable data to an indexed item, e.g. attaching the DB ID to everything indexed for that record; very handy for going and getting the whole record from the DB when your matches come back from Xapian.
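With the Xapian PHP bindings, that trick looks roughly like this minimal sketch (the index path, $articleId/$articleText variables, and the fetch_article() helper are hypothetical):

    <?php
    // Indexing: stash the DB id in the document's non-searchable data slot.
    $db  = new XapianWritableDatabase('/path/to/xapian-db', Xapian::DB_CREATE_OR_OPEN);
    $doc = new XapianDocument();
    $doc->set_data((string)$articleId);   // payload, never matched against
    $tg = new XapianTermGenerator();
    $tg->set_document($doc);
    $tg->index_text($articleText);
    $db->add_document($doc);

    // Searching: read the id back and fetch the full record from MySQL.
    $enquire = new XapianEnquire($db);
    $qp = new XapianQueryParser();
    $enquire->set_query($qp->parse_query('icecream'));
    $mset = $enquire->get_mset(0, 10);
    for ($it = $mset->begin(); !$it->equals($mset->end()); $it->next()) {
        $id  = (int)$it->get_document()->get_data();
        $row = fetch_article($id); // hypothetical DB lookup
    }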
I have used a couple of search engines on my site during its time, but in the next rebuild I'm planning to move to Google Site Search.
There are several reasons for this:
Users are very familiar with the Google style of search result listings, which improves usability and hence click-through rates
The Google engine is very good at guessing when to use the page description and when to use a fragment of the page (it's also very good at picking relevant fragments compared to some other engines)
It's used by thousands of very popular websites
Google is the most popular search engine around so you know their technology is both reliable and accurate
Google Site Search starts at $100 per annum for 1000 pages or less (with a limit on queries)
or you can use the free Google Custom Search Engine (but this offers much less customization)