I want to generate a list of the most-used words on a website. The application should crawl the content of the site.
Does anyone know if this can be done by Solr or any other technique?
The list can be PHP objects/arrays or an XML file.
You might want to check http://wiki.apache.org/solr/TermsComponent
Example -
http://host:port/solr/core/terms?terms.fl=title&terms.sort=count
This will return all the terms for the field title, ordered by count (the default).
terms.fl - Field you want to check the terms on
terms.sort={count|index} - If count, sorts the terms by the term frequency (highest count first). If index, returns the terms in index order. Default is to sort by count.
Note that this returns the indexed terms, which have gone through the tokenizer and filters; if you need the terms as-is, adjust the field analysis (a string field type would probably work).
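For example, a minimal PHP sketch that calls the endpoint above and prints term/count pairs; the host, port, core name, and the flat JSON response layout are assumptions to adjust for your setup:

    <?php
    // Query the TermsComponent and decode the JSON response.
    $url = 'http://localhost:8983/solr/core/terms'
         . '?terms.fl=title&terms.sort=count&terms.limit=50&wt=json';
    $data = json_decode(file_get_contents($url), true);

    // With the default json.nl=flat setting, the terms arrive as an
    // alternating [term, count, term, count, ...] array.
    $pairs = $data['terms']['title'];
    for ($i = 0; $i < count($pairs); $i += 2) {
        echo $pairs[$i] . ' => ' . $pairs[$i + 1] . "\n";
    }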
Solr is a search engine; it doesn't crawl websites. You need to build a simple website crawler using Scrapy (http://scrapy.org/) or a similar tool, design a Solr schema to hold the data, crawl the websites, and send record updates to Solr. Your specific question would probably be answered by the SCHEMA BROWSER option in the Solr admin menu of the web admin interface: click on DYNAMIC FIELDS, select the field you are interested in, and see the top 10 terms. Change the number to 50 and press ENTER to get the top 50.
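To illustrate the "send record updates to Solr" step, here is a hedged PHP sketch posting one crawled page to the JSON update handler; the core URL and the schema fields (id, title, content) are assumptions that must match your own schema:

    <?php
    // Push one crawled page to Solr's JSON update handler
    // (located at /update/json on Solr 3.x).
    $doc = array(
        'id'      => 'http://example.com/some-page',
        'title'   => 'Extracted page title',
        'content' => 'Extracted page text ...',
    );

    $ch = curl_init('http://localhost:8983/solr/update/json?commit=true');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array('add' => array('doc' => $doc))));
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    echo curl_exec($ch);
    curl_close($ch);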
How do I have to index my data and configure Solr and my search options so that autocompletion (like Google's) with the following requirements is possible?
Products:
- We have products with their titles, descriptions, and IDs, e.g. for the title: toshiba tecra s1: centrino 1.5 ghz/xp pro/15.0" tft/40 gb/256 mb+256mb/cd-rw-dvd-rom/lan/wi-fi
- These products, or fields of these products, have to be indexed in such a way that the following is possible (with no differentiation in how a user writes the search term, e.g. TOSHIBA or tOSHiba):
- If a user starts entering the first three characters "tos", max. 20 results (the complete title phrase, e.g. "toshiba tecra s1: centrino 1.5 ghz/xp pro/15.0" tft/40 gb/256 mb+256mb/cd-rw-dvd-rom/lan/wi-fi") should appear in the autocomplete box.
- If a user enters two terms, e.g. "toshiba tecra", the search result must be more precise: only documents that contain the contiguous phrase "toshiba tecra" should be shown.
It would be great to get any hints on this, e.g. what kind of tokenizer/search component to use.
I'm using Solr version 3.5.
Thank you for your thoughts
Ramo
Solr 3.x has a built-in Suggester component, which allows you to build suggestions on selected fields.
The following links provide the implementation details -
1. http://lucidworks.lucidimagination.com/display/solr/Suggester
2. http://solr.pl/en/2010/11/15/solr-and-autocomplete-part-2/
For alternative approaches you can check the EdgeNGram implementation or the Terms Component.
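Once a Suggester handler is registered in solrconfig.xml (as shown in the linked posts), the PHP side is just an HTTP call. The handler name "/suggest" and the spellcheck-style response shape below are assumptions that depend on your configuration:

    <?php
    // Ask the (assumed) suggest handler for completions of "tos".
    $prefix = urlencode('tos');
    $url    = "http://localhost:8983/solr/suggest?q={$prefix}&wt=json";

    $data = json_decode(file_get_contents($url), true);
    // The Suggester reuses the spellcheck response format in Solr 3.x.
    print_r($data['spellcheck']['suggestions']);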
I was wondering what would be the best approach to generate a tag cloud from input text (while the user is typing it). For example, if a user types a story containing the keywords "sci-fi, technology, effects", the tag cloud will be formed from each of these keywords, ordered by relevance according to their frequency across all stories. The tag cloud will be displayed in descending order and using the same font size; it's not the display algorithm but the search algorithm I need to implement.
I'm using mysql and php.
Should I stick to the MATCH...AGAINST clause? Should I implement a tags table?
More details
I have a MySQL table containing a lot of stories. When a user is typing one of his/her own, I want to display a tag cloud containing the most frequent words, taken from the input text, that occur in the set of stories saved in my DB.
The tag cloud will only be used to show the user the relevance of the words he/she has entered in his/her own story, according to the frequency with which they occur in all stories entered by all users.
I think the first thing you need to do is more clearly define the purpose of your tagging system. Do you want to simply build tags based on the words that occur most frequently within the text? This strikes me as something designed with search rankings in mind.
...Or do you want your content to be better organized, and the tag cloud be a way of providing a better user experience and creating more distinct relationships between pieces of content (ie both of these are tagged sci-fi, so display them in the sci-fi category).
If the former is the case, you might not need to do anything but:
Explode the text by a delimiter like a single space: explode(' ', $content);
Have a list (possibly in a config file or within the script itself) of frequently occurring words which you want to exclude from being tags (and, or, this, the, etc.). You could just grab them from pages like these: http://www.esldesk.com/vocabulary/pronouns , http://www.english-grammar-revolution.com/list-of-conjunctions.html
Then you just need to decide how many times a word has to occur (either a percentage or a fixed count), and store those tags in a table that links tags to content; a minimal sketch follows this list.
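A minimal sketch of those three steps; the stop-word list and the threshold are placeholders to extend:

    <?php
    // Build word => count tags from a piece of content.
    $stopWords = array('and', 'or', 'this', 'the', 'a', 'of', 'to', 'in');

    function buildTags($content, $stopWords, $minCount = 3)
    {
        // Lowercase, strip punctuation, then split on whitespace.
        $clean = strtolower(preg_replace('/[^a-z0-9\s]/i', ' ', $content));
        $words = preg_split('/\s+/', $clean, -1, PREG_SPLIT_NO_EMPTY);

        $tags = array();
        foreach ($words as $word) {
            if (!in_array($word, $stopWords)) {
                $tags[$word] = isset($tags[$word]) ? $tags[$word] + 1 : 1;
            }
        }
        arsort($tags); // most frequent first
        return array_filter($tags, function ($count) use ($minCount) {
            return $count >= $minCount;
        });
    }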
To implement the "as the user is typing" part, you just need to use a bit of jQuery's AJAX functionality to continually call the script that builds the tag list (e.g. on keydown).
The other option (better user experience) will incorporate a lot of the same elements, but you'll have to think about it a bit more. Some things I would consider:
Do you want to restrict to certain tags (perhaps you don't want to allow just anyone to create new tags)?
How you will deal with synonyms
If you will support multiple languages
If you want a preference towards suggesting existing tags (which might be close) over suggesting new ones
Once you've fully defined the logic and user experience, you can come back to the search algorithm. MATCH...AGAINST is a good option, but you may find that a simple LIKE will do it for you.
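For concreteness, a hedged sketch of both query styles (the stories table and body column are examples; MATCH...AGAINST additionally requires a FULLTEXT index):

    <?php
    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $term = 'sci-fi';

    // Full-text relevance search (needs a FULLTEXT index on body).
    $stmt = $pdo->prepare(
        'SELECT id, MATCH(body) AGAINST(?) AS score
           FROM stories
          WHERE MATCH(body) AGAINST(?)
          ORDER BY score DESC'
    );
    $stmt->execute(array($term, $term));

    // Simple substring match (no special index, but slower on big tables).
    $stmt = $pdo->prepare('SELECT id FROM stories WHERE body LIKE ?');
    $stmt->execute(array('%' . $term . '%'));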
Good luck = )
If you want the tag cloud to be generated as the user is typing it, you can do it in two ways.
Directly update the tag cloud from the input text
Send the input text to the backend (in real time using AJAX/Comet), which then saves it, calculates the word frequencies, and returns data from which you generate the cloud.
I would go with the former, using a jQuery plugin such as http://plugins.jquery.com/plugin-tags/tag-cloud
I was wondering if there is any sort of way to detect a page's genre/category.
Possibly there is a way to find keywords or something?
Unfortunately I don't have any idea so far, so I don't have any code to show you.
But if anybody has any ideas at all, let me know.
Thanks!
EDIT @Nican
Perhaps there is a way to set, let's say, 10 categories (Entertainment, Funny, Tech).
Then create keywords for these categories (Funny = Laughter, Funny, Joke, etc.).
Then search through a webpage (maybe using cURL) for these keywords and assign it to the right category.
Hope that makes sense.
What you are talking about is basically what Google Adsense and similar services do, and it's based on analyzing the content of a page and matching it to topics. Generally, this kind of stuff is beyond what you would call simple programming / development and would require significant resources to be invested to get it to work "right".
A basic system might work along the following lines:
Get page content
Get X most commonly used words (omitting stuff like "and" "or" etc.)
Get words used in headings
Assign weights to different words according to a set of factors (is used in heading, is used in more than one paragraph, is used in link anchors)
Match the filtered words against a database of words related to a specific "category"
If the cumulative score > threshold, classify the site as belonging to the category
Rinse and repeat
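A very rough PHP sketch of the matching and scoring steps; the category keyword lists, weights, and threshold are invented for illustration:

    <?php
    // Score a page's weighted words against per-category keyword lists.
    $categories = array(
        'Funny' => array('laughter', 'funny', 'joke'),
        'Tech'  => array('software', 'gadget', 'hardware'),
    );

    function classify(array $weightedWords, array $categories, $threshold = 5)
    {
        $matches = array();
        foreach ($categories as $name => $keywords) {
            $score = 0;
            foreach ($keywords as $keyword) {
                if (isset($weightedWords[$keyword])) {
                    // Weight comes from the heading/anchor analysis above.
                    $score += $weightedWords[$keyword];
                }
            }
            if ($score > $threshold) {
                $matches[$name] = $score;
            }
        }
        arsort($matches);
        return $matches; // best-scoring categories first
    }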
Folksonomy may be a way of accomplishing what you're looking for:
http://en.wikipedia.org/wiki/Folksonomy
For instance, in Drupal they have a Folksonomy module:
http://drupal.org/node/19697 (Note this module appears to be dead, see http://drupal.org/taxonomy/term/71)
Couple that with a tag cloud generator, and you may get somewhere:
http://drupal.org/project/searchcloud
Plus, a little more complexity may be able to derive mapped relationships to other terms, especially if you control the structure of the tagging options.
http://intranetblog.blogware.com/blog/_archives/2008/5/22/3707044.html
EDIT
In general, the type of system you're trying to build relies on unique word values on a page. So you would need to...
Get unique word values from your content (index values or create a bot to crawl your site)
Remove all words and symbols you can't use (at, the, or, and, etc...)
Count the number of times the unique words appear on the page
Add them to some type of datastore so you can call them based on the relationships you're mapping
If you have a root label system in place, associate those values with the word counts on the page (such as a query or derived table)
This is very general, and there are a number of ways this can be implemented/interpreted. Folksonomies are meant to "crowdsource" much of the effort for you, in a "natural way", as long as you have a user base that will contribute.
As the title says, I need a search engine... for mysql searching.
My website is PHP based.
I was going with sphinx but my hosting company doesn't support full-text indexes!
So I need a search engine that can be used without full-text indexes!
It should be pretty powerful, and it must include at least the functions below:
When searching for 'bmw 520', only matches where these two words appear in exactly this order are returned, not matches for only 'bmw' or only '520'.
When searching for 'bmw 330ci', results as above will be returned, but WITH AND WITHOUT the ci extension. There are a number of extensions in cars, as you all know (i, ci, si, fi, etc.).
I want the minus sign to exclude all results containing the word after the sign, e.g. 'bmw -330' will return all 'bmw' results without the '330' ones (a NOT instead of a minus sign is also OK).
All special-character accents like 'é' should be converted to their simple values, in this case 'e'.
A list of words to ignore completely in the search.
Thanks guys!
The Zend_Lucene search component works fairly well. I am not sure how it would cope with your second requirement; however, if you customize the tokenizer, you should be able to do it by treating a change from letters to numbers as a new word.
The one I am really not sure about is the first requirement. Given how the text is indexed, word order becomes irrelevant in the search, so you may not be able to do it without heavy editing of Lucene, writing a filter (using Lucene to pull the matches, then checking the order), or writing your own solution. All of these will slow the search down and add load to your server.
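A hedged sketch of that post-filter idea in plain PHP, together with the accent folding and minus-sign parsing from the question (all names are examples):

    <?php
    // Fold accents: 'é' -> 'e' (requires iconv with //TRANSLIT support).
    function foldAccents($text)
    {
        return iconv('UTF-8', 'ASCII//TRANSLIT', $text);
    }

    // Split a query into include/exclude term lists ('bmw -330').
    function parseQuery($query)
    {
        $include = array();
        $exclude = array();
        foreach (preg_split('/\s+/', trim($query), -1, PREG_SPLIT_NO_EMPTY) as $word) {
            if ($word[0] === '-') {
                $exclude[] = substr($word, 1);
            } else {
                $include[] = $word;
            }
        }
        return array($include, $exclude);
    }

    // Check that all terms occur in the given order. stripos() does
    // substring matching, so '330' also matches '330ci', which covers the
    // with/without-extension requirement (at the cost of other substring hits).
    function matchesInOrder($text, array $terms)
    {
        $pos = 0;
        foreach ($terms as $term) {
            $pos = stripos($text, $term, $pos);
            if ($pos === false) {
                return false; // term missing or out of order
            }
            $pos += strlen($term);
        }
        return true;
    }

    // Usage:
    // list($include, $exclude) = parseQuery(foldAccents('bmw -330'));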
There is also Solr, but I have never used it and don't know anything about it. Sphinx was another option, but I see you have already ruled that out.
Xapian is very good (very comprehensive) if you have the time for the initial setup.
It functions as you would expect a search engine to work: tell the indexer what bits of information to index under what namespace/table/object (Page, Profile, Products, etc.), then issue a query for your users based on keywords. It also supports Google-style tags, e.g. "profile:Mark icecream" would search my profile for the word icecream; I seem to remember it supporting ranges too, for data you specify as numeric.
It can be used in local mode, which can offer spelling modifications (Did you mean?), or in remote mode, which many sites can index to and query from.
What really saved me one time was the ability to attach transient, non-searchable data to an indexed item, e.g. attaching the DB id to all data indexed for that record; very good for then going and getting the whole record from the DB when your matches come back from Xapian.
I have used a couple of search engines on my site during its lifetime, but for the next rebuild I'm planning to move to Google Site Search.
There are several reasons for this:
Users are very familiar with the Google style of search result listings which improves usability and hence click-through rates
The Google engine is very good at guessing when to use the page description and when to use a fragment of the page (it is also very good at picking relevant fragments compared to some other engines)
It's used by thousands of very popular websites
Google is the most popular search engine around so you know their technology is both reliable and accurate
Google Site Search begins at $100 per annum for 1000 pages or less (and a limit on queries)
or you can use the free Google Custom Search Engine (but this has much less customizability)
I am facing a problem on developing my web app, here is the description:
This webapp (still in alpha) is based on user-generated content (usually short articles, although they can become quite long, about one quarter of a screen). Every user submits at least 10 of these articles, so the number should grow pretty fast. By nature, about 10% of the articles will be duplicates, so I need an algorithm to detect them.
I have come up with the following steps:
On submission, fetch the length of the text and store it in a separate table (article_id, length). The problem is that the articles are encoded using PHP's special_entities() function, and users post content with slight modifications (someone will miss a comma or an accent, or even skip some words).
Then retrieve all the entries from the database with a length range of new_post_length ± 5% (should I use another threshold, keeping in mind the human factor in article submission?).
Fetch the first 3 keywords and compare them against the articles fetched in step 2.
Having a final array with the most probable matches, compare the new entry using PHP's levenshtein() function (a rough sketch follows this list).
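A rough sketch of what I have in mind, assuming a table articles(id, body, length) and PDO; the thresholds are placeholders:

    <?php
    function findLikelyDuplicates(PDO $db, $newText, $tolerance = 0.05)
    {
        // Step 2: pre-filter candidates by length +/- 5%.
        $len  = strlen($newText);
        $stmt = $db->prepare(
            'SELECT id, body FROM articles WHERE length BETWEEN ? AND ?'
        );
        $stmt->execute(array(
            (int) ($len * (1 - $tolerance)),
            (int) ($len * (1 + $tolerance)),
        ));

        // Step 4: levenshtein() on the survivors. It has historically been
        // limited to 255 characters per argument, hence the substr() calls.
        $matches = array();
        foreach ($stmt as $row) {
            $dist = levenshtein(substr($newText, 0, 255),
                                substr($row['body'], 0, 255));
            if ($dist < 25) { // arbitrary threshold
                $matches[$row['id']] = $dist;
            }
        }
        asort($matches); // closest first
        return $matches;
    }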
This process must be executed on article submission, not via cron; however, I suspect it will create a heavy load on the server.
Could you provide any idea please?
Thank you!
Mike
Text similarity/plagiarism/duplicate detection is a big topic. There are so many algorithms and solutions.
Levenshtein will not work in your case: you can only use it on small texts (due to its complexity, it would kill your CPU).
Some projects use "adaptive local alignment of keywords" (you will find info on that on Google).
Also, you can check this (Check the 3 links in the answer, very instructive):
Cosine similarity vs Hamming distance
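For longer texts, cosine similarity over word-frequency vectors is one of the cheaper options; a rough PHP sketch:

    <?php
    // Turn a text into a word => frequency vector.
    function wordVector($text)
    {
        $words = preg_split('/\W+/', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
        return array_count_values($words);
    }

    // Cosine similarity between two texts: ~1.0 means near-duplicates.
    function cosineSimilarity($textA, $textB)
    {
        $va = wordVector($textA);
        $vb = wordVector($textB);

        $dot = 0;
        foreach ($va as $word => $count) {
            if (isset($vb[$word])) {
                $dot += $count * $vb[$word];
            }
        }

        $square = function ($n) { return $n * $n; };
        $normA  = sqrt(array_sum(array_map($square, $va)));
        $normB  = sqrt(array_sum(array_map($square, $vb)));

        return ($normA && $normB) ? $dot / ($normA * $normB) : 0.0;
    }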
Hope this will help.
I'd like to point out that git, the version control system, has excellent algorithms for detecting duplicate or near-duplicate content. When you make a commit, it will show you the files modified (regardless of rename), and what percentage changed.
It's open source, and largely written in small, focused C programs. Perhaps there is something you could use.
You could design your app to reduce the load by not having to check text strings and keywords against all other posts in the same category. What if you had the users submit the third-party content they are referencing as URLs? See Tumblr's implementation: basically there is a free-form text field so each user can comment and create their own narrative portion of the post content, but then there are also formatted fields depending on the type of reference the user is adding (video, image, link, quote, etc.). An improvement on Tumblr would be letting the user add as many or as few types of formatted content as they want in any given post.
Then you are only checking against known types like a url or embed video code. Combine that with rexem's suggestion to force users to classify by category or genre of some kind, and you'll have a much smaller scope to search for duplicates.
Also if you can give each user some way of posting to their own "stream" then it doesn't matter if many people duplicate the same content. Give people some way to vote up from the individual streams to a main "front page" level stream so the community can regulate when they see duplicate items. Instead of a vote up/down like Digg or Reddit, you could add a way for people to merge/append posts to related posts (letting them sort and manage the content as an activity on your app rather than making it an issue of behind the scenes processing).