Computing Trending Topics - php

Let's say I'm collecting tweets from Twitter based on a variety of criteria and storing these tweets in a local MySQL database. I want to be able to compute trending topics, like Twitter does, that can be anywhere from 1-3 words in length.
Is it possible to write a script to do something like this in PHP and MySQL?
I've found answers on how to compute which terms are "hot" once you're able to get counts of the terms, but I'm stuck on the first part: how should I store the data in the database, and how can I count the frequency of terms that are 1-3 words in length?

My trending-topic recipe:
1. Fetch the tweets.
2. Split each tweet by spaces into an array of n-grams (up to 3-grams if you want phrases up to 3 words long).
3. Filter URLs, #usernames, common words and junk characters out of each array.
4. Count the frequency of every unique keyword/phrase.
5. Mute some junk words/phrases.
Yes, you can do it in PHP & MySQL ;) A rough sketch of steps 2-4 follows.
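A minimal PHP sketch of steps 2-4; the function name and the exact junk-character filter are my own illustrations:

    // Split one tweet into 1-3 word n-grams after filtering junk (steps 2-3).
    function extract_ngrams($tweet, $maxN = 3) {
        // Drop URLs, @usernames/#hashtags, then any non-letter/digit junk
        $tweet = preg_replace('~https?://\S+|[@#]\w+~u', ' ', $tweet);
        $tweet = preg_replace('~[^\p{L}\p{N}\s]+~u', ' ', $tweet);
        $words = preg_split('~\s+~u', mb_strtolower(trim($tweet)), -1, PREG_SPLIT_NO_EMPTY);
        $ngrams = [];
        for ($n = 1; $n <= $maxN; $n++) {
            for ($i = 0; $i + $n <= count($words); $i++) {
                $ngrams[] = implode(' ', array_slice($words, $i, $n));
            }
        }
        return $ngrams;
    }

    // Count the frequency of every unique keyword/phrase (step 4).
    $counts = [];
    foreach ($tweets as $tweet) {
        foreach (extract_ngrams($tweet) as $ngram) {
            $counts[$ngram] = ($counts[$ngram] ?? 0) + 1;
        }
    }
    arsort($counts); // most frequent keywords/phrases first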

How about first decomposing your tweets into single-word tokens and calculating the number of occurrences of every word?
Once you have those, you could decompose into all two-word tokens, calculate the number of occurrences, and finally do the same with all three-word tokens.
You might also want to add some kind of dictionary of words you don't want to count.
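A minimal sketch of storing those counts in MySQL; the term_counts table is a hypothetical schema, not something given in the question:

    // Hypothetical schema assumed here:
    //   CREATE TABLE term_counts (
    //     term  VARCHAR(255) PRIMARY KEY,
    //     count INT UNSIGNED NOT NULL DEFAULT 0
    //   );
    $pdo = new PDO('mysql:host=localhost;dbname=tweets', 'user', 'pass');
    $stmt = $pdo->prepare(
        'INSERT INTO term_counts (term, count) VALUES (?, 1)
         ON DUPLICATE KEY UPDATE count = count + 1'
    );
    foreach ($tokens as $token) { // $tokens: the 1-3 word tokens of one tweet
        $stmt->execute([$token]);
    }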

What you need is either
document classification, or..
automatic tagging
Probably the second one. Only then can you count their popularity over time.

Or do the opposite of Dominik and store a set list of phrases you wish to match, spaces and all. Write them as regex strings. For each row in the database (file, SQL table, whatever), run the regexes and count the matches.
It depends on which way around you want to do it: take everything minus what is common, thereby finding what is truly trending, or do set-phrase lookup (a rough sketch follows below). In the first case you'll find a lot that might not interest you and you'll need an extensive blocklist; in the other case, you'll need a huge whitelist.
To go beyond that, you need natural language processing tools to determine the meaning of what is said.
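For the set-phrase route, a rough PHP sketch (the phrase list and the row shape are illustrative):

    // Count occurrences of a curated phrase list across the stored rows.
    $phrases = ['world cup', 'royal wedding']; // your own watch list
    $counts = array_fill_keys($phrases, 0);
    foreach ($rows as $row) { // each $row['text'] is one stored tweet
        foreach ($phrases as $phrase) {
            $counts[$phrase] += preg_match_all(
                '~\b' . preg_quote($phrase, '~') . '\b~i',
                $row['text']
            );
        }
    }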

Related

Find phrases using mysql and php

I am working on a project and I need your suggestions in a database query. I am using PHP and MySQL.
Context
I have a table named phrases containing a phrase column in which phrases are stored, each consisting of one to three words.
I have a text string which contains 500-1000 words.
I need to highlight all the phrases in the text string which exist in my phrases database table.
My solution
I go through every phrase in the phrase list and compare it against the text, but the number of phrases is large (100k), so this matching takes about 2 minutes or more.
Is there any more efficient way of doing this?
I'm going to focus on how to do the comparison part with 100K values. This will require two steps.
a) Write a C++ library and link it to PHP using an extension. Google PHP-CPP; there is a framework which allows you to do this.
b) Inside C/C++, you need to create a data structure with a lookup time complexity of O(n), n being the length of the phrase you're searching for. Normally this is called a trie. It is conventionally used for single words without spaces [not phrases], but surely you can write your own.
Here is a link which contains the word implementation, aka a dictionary:
http://www.geeksforgeeks.org/trie-insert-and-search/
This takes quite a bit of memory, since the number is 100K. Fair to say you need a large system, but when you're looking for better performance, memory tends to be the tradeoff.
Alternative Approach
PHP only. Extract the phrases from your text input and put them in a hash. The table data you have should also be stored in a hash [needs huge memory]. Each lookup is then rocket fast, O(1) per search; since phrases are at most three words, a text of k words yields only about 3k candidate phrases, so the whole pass is O(k).
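A minimal sketch of that PHP-only approach (connection details and column names are illustrative):

    // Load the 100K phrases once into an in-memory hash set.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $set = array_fill_keys(
        $pdo->query('SELECT phrase FROM phrases')->fetchAll(PDO::FETCH_COLUMN),
        true
    );

    // Slide a 1-3 word window over the text; each isset() is O(1) on average.
    $words = preg_split('~\s+~', mb_strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
    $matches = [];
    for ($i = 0; $i < count($words); $i++) {
        for ($n = 1; $n <= 3 && $i + $n <= count($words); $n++) {
            $candidate = implode(' ', array_slice($words, $i, $n));
            if (isset($set[$candidate])) {
                $matches[] = $candidate; // phrase to highlight
            }
        }
    }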

Comparing a text against multiple ones and finding texts with matching sentences

I want to be able to compare a passage with multiple (say thousands or even more) different passages and see if any part of those passages is used verbatim in the first one.
Imagine you have a passage named A which you want to check, to see if it contains any sentence, or part of a sentence, from the other thousands of passages.
I thought of a very inefficient way, and no better answer comes to mind. My way is to read the first three words from the input passage (A), then check whether any exact match exists in the database of all the thousands of texts. If there is a match, list the matching texts, then add the fourth word to the string and look for matches to the 4-word string among the 3-word matches. Do this until there are no more matches with the n-word string; the list of (n-1)-word matches is saved as the result of this run. Next, the new 3-word string would be the nth, (n+1)th and (n+2)th words, and everything starts again until the document ends.
This would be very inefficient for large input text and a huge database of comparison texts. Any better algorithm?
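For reference, the growing-window idea above might be sketched in PHP like this; the passages table and the LIKE-based lookup are assumptions, and the sketch inherits the inefficiency already noted:

    // Find the longest phrase starting at $start that some stored passage contains.
    function longest_match(array $words, int $start, PDO $pdo): ?string {
        $stmt = $pdo->prepare('SELECT 1 FROM passages WHERE text LIKE ? LIMIT 1');
        $best = null;
        $len = 3; // begin with a 3-word window
        while ($start + $len <= count($words)) {
            $needle = implode(' ', array_slice($words, $start, $len));
            $stmt->execute(['%' . $needle . '%']);
            if (!$stmt->fetchColumn()) {
                break; // no passage contains the longer string
            }
            $best = $needle; // remember it and try one word longer
            $len++;
        }
        return $best;
    }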

Search 1000 Word Document For 15,000 Phrases

I have a database of ~15,000 multiple word phrases which range in length from 2-7 words. I want to be able to search a small document (~1000 words) to see which phrases it contains. I'm basically looking for the best way to achieve this.
I currently have the data in MySQL in two tables:
phrases (~15,000 rows)
phrase_id
phrase
length (number of words in the phrase)
documents (100s/day)
document_id
text
The phrase list stays the same; new documents are being added all the time.
As far as I can tell the best way to do this is with some sort of index. Ideally when the document is added it would be indexed to see which phrases it contains so that when a search is done later the results come back immediately.
I've considered how to do this in MySQL:
Tokenize the document into 2-word phrases, finding phrases which begin with each token.
Iterate through the results, increasing the length of the token: if (phrase length == token length) {match} else {keep for next token length}.
Store the results in a new table document_phrases (phrase_id, document_id), as sketched below.
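In PHP, that indexing pass might look like the following; it is simplified to direct lookups per window size rather than prefix matching, and assumes PDO plus the schema above:

    // Index one new document against the phrases table on insert.
    $words = preg_split('~\s+~', mb_strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
    $lookup = $pdo->prepare('SELECT phrase_id FROM phrases WHERE phrase = ? AND length = ?');
    $insert = $pdo->prepare('INSERT INTO document_phrases (phrase_id, document_id) VALUES (?, ?)');
    for ($i = 0; $i < count($words); $i++) {
        for ($n = 2; $n <= 7 && $i + $n <= count($words); $n++) { // phrases are 2-7 words
            $lookup->execute([implode(' ', array_slice($words, $i, $n)), $n]);
            if ($phraseId = $lookup->fetchColumn()) {
                $insert->execute([$phraseId, $document_id]);
            }
        }
    }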
This all seems like a lot of overhead, though, and I'm wondering if an external tool like Sphinx would be able to do this more efficiently. I've looked into it, but it seems to be mostly for searching lots of documents for 1 phrase, not searching 1 document for many phrases.
Is there some technique that I've completely missed? Please note that, whilst technically interesting, solutions using Java/Python are beyond what I'm planning to learn for this project.
Have you looked into Full Text Search? The examples given, and the ability to rank by relevance, might give you some ideas or alternatives.
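For illustration, a minimal full-text setup and exact-phrase query in MySQL (the index name is assumed, and full-text support depends on your MySQL version and storage engine):

    ALTER TABLE documents ADD FULLTEXT INDEX ft_text (text);

    SELECT document_id
    FROM documents
    WHERE MATCH(text) AGAINST ('"some exact phrase"' IN BOOLEAN MODE);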

Algorithm using soundex() or metaphone() to create Mad Gab style phrases

I'm attempting to create an algorithm that will suggest Mad Gab style phrases.
The input is a set of phrases. I also have a set of keywords that I'd like to use when possible. Currently, my solution is simply brute force:
loop over phrases (character by character)
    if keyword is found
        store keyword and branch (recursion)
    increment character count
However, the problems I am running into are:
Account for compound keywords, e.g. "catches" can be "catches", "cat" + "cheeses"
Allow literal terms - "the", "and", "one", "two", "three".
How to suggest terms that are not keywords. i.e. fall back on something like the system dictionary when keywords or literals can not be found.
Skip phrase segments. Right now it just does one pass through. But consider the case where the phrase starts with something unmatched but a few characters later contains matches.
I am most familiar with PHP and MySQL. However, I am open to another technology if it provides a better solution.
I am also interested in any additional suggestions. Particularly ways to use the second parameter of metaphone() to make harder suggestions.
Perhaps start with a syllable-division algorithm on the phrase bank. You can even use a simple resource that teaches children to divide syllables to create your rough divider method:
http://www.ewsdonline.org/education/components/scrapbook/default.php?sectiondetailid=7584
If you want a more technical, completely accurate way, there was a Ph.D. dissertation about how to do it:
http://www.tug.org/docs/liang/
Then turn each syllable into a phonetic representation, using either something you roll yourself or metaphone(). You can use a similar site that explains vowel-sound rules. These will only be generalizations. If you roll your own, you will process vowels separately from consonants. Metaphone just uses consonants, which is fine, but not as cool as if you also took vowels into account.
Vowels:
http://www.eslgold.com/pronunciation/english_vowel_sounds.html
Consonants:
http://usefulenglish.ru/phonetics/english-consonant-sounds
Then, you have a dictionary of English words for your word bank. There are many open-source dictionaries available that you could stick into a MySQL table.
Start with the first syllable and look for a random word in the dictionary that matches the soundex test. If you can't find one (this will generally only find one-syllable words), add the next syllable and search again.
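That lookup might be sketched like this in MySQL (the dictionary table is illustrative):

    -- Pick a random dictionary word whose soundex matches the syllable's.
    SELECT word FROM dictionary
    WHERE SOUNDEX(word) = SOUNDEX('lah')
    ORDER BY RAND() LIMIT 1;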
Example:
"Logical consequence"
A. Syllable split
"lo gi cal con se quence"
B. Vowel Sounds applied
"lah gee cahl con see quince"
C. Consonant Sounds applied
"lah jee kahl kon see quinse"
D. Soundex test (one-syllable soundex; obviously too easy to guess, but it proves the concept)
"Law Gee Call Con Sea Quints"
strcmp() on soundex values returns a number. So, if you like, you could get the soundex values of everything in your word bank in advance; then you can run the strcmp quickly.
An example of a Soundex MySQL comparison is:
select strcmp(soundex('lah'), soundex('law'));
I think using the MySQL soundex is easier for you than the PHP soundex test if you're wanting a random result from a big database and you've already captured the soundex value in a field in your dictionary table.
My suggestion may be inefficient, but optimization is a different question.
Update:
I didn't mean to imply that my solution would only yield one syllable words. I used one syllable as the example, but if you took two of the syllables together, you'd get multi-syllable matches. In fact, you could probably just start by jamming all the syllables together and running soundex in mysql. If you find an answer, great. But then you can roll off syllables until you get the longest match you can. Then you're left with the end of the phrase and can take those together and run a match. I think that's the essence of the solution below from the other contributor, but I think you need to avoid jamming all the letters together without spaces. In English, you'd lose information that way. Think of a phrase beginning with a "th" sound. If you jam the phrase together, you lose which "th" sound is needed. "Theremin" (the instrument) has a different "th" sound than "There, a man".
Taking a different tack from Jonathan Barlow's solution, I recommend an O(n²) algorithm that gives you the properties you seek, in randomness, robustness, and scalable difficulty. The complexity of this algorithm can be further improved in constant time or with optimizations to the modality of the search, but because the size of your input phrases is guaranteed to be small, it's not that big a deal.
1. Construct a hash table of all known words in the Oxford English Dictionary and a map of lists of words by soundex() value. This initially sounds intractable, until you realize that there aren't actually that many of them in current use. Assuming a decent one-way hashing algorithm, this should take several megabytes, tops.
2. Consider the words in your input phrase as a single, compressed string of characters with no word identity whatsoever, discarding whitespace and all punctuation. From this, walk the space for all character lengths, starting with a length of one, up to the full length of the amalgamated phrase minus one. For each string produced by this walk, perform a hash lookup against the OED. When a word is encountered that's present in the dictionary, append the word and its position to the end of a list in memory. (This pass always takes 1 + 2 + ... + n time, which is by definition 0.5n(n+1), so O(n²) it is. Its space complexity is worst-case O(n²), but in practice a fully connected set of terms is extremely unlikely.)
3. Now comes your difficulty slider. From the produced list, chop off the first N% of the found terms, where N is your level of difficulty. The principle here is that smaller words are easier for someone to lexically process, while longer words are more difficult to sound out and differentiate.
4. Construct an array conforming to the original length of the phrase (without spaces and punctuation) and shuffle your list of encountered words. Now walk the shuffled list. For each element, verify that all of the slots in the array are free for that word at its original position. If they are, keep the word and its position, marking the slots as used in the array. If they are not, iterate to the next word until the list is exhausted.*
5. From the final output array, construct a partitioned list of unused characters in the space, treating each bag of characters as its own phrase. For this list, perform syllable detection exactly as sketched out here, passing the results to metaphone() with a percentage chance of glomming two or more syllables together. Then, for the bag of output dictionary words from step 4, perform soundex(), pulling a random word from the word's mapped list of comparable soundex values. For every word that can only soundex() to itself according to the backing map of lists, perform partitioning and metaphone(). Finally, stitch the two lists of results together by sorting on position and print your result.
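A PHP sketch of the walk in step 2, with $dict standing in for the word hash table from step 1:

    // Walk every substring of the compressed phrase and record dictionary hits.
    $s = preg_replace('~[^a-z]~', '', strtolower($phrase)); // strip spaces/punctuation
    $found = []; // [word, position] pairs
    $len = strlen($s);
    for ($start = 0; $start < $len; $start++) {
        for ($n = 1; $start + $n <= $len; $n++) {
            $candidate = substr($s, $start, $n);
            if (isset($dict[$candidate])) {
                $found[] = [$candidate, $start];
            }
        }
    }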
This is a random algorithm with what I believe to be all of the desired properties, but it's still rough in my mind.
* Extra credit: determine the allowed overlaps for your system by character or syllable. This can make for an even larger gamut of accepted output phrases and a much higher level of difficulty.

PHP MySQL Name Score separation system

I've got a website which lists sports scores. It currently works like this:
Firstname Lastname 1-0 Firstname Lastname
It explodes this based on spaces, then explodes the third chunk (containing the scores) based on the -.
The problem with this is that it does not support names with more than 2 words. If I exploded using - first, it would not support names with - in them. The results are added in a textarea, because I have many thousands that need to be added, so I don't want to make multiple fields to input data into; I can currently add matches quickly, listing one result per line. Does anyone have advice on how to make the system handle both multi-word names and special characters? Is there maybe a way to split when it encounters a number, then select the first chunk as the first name, the last as that player's score, and the rest as the last name?
I don't know if there's any way to teach a simple parsing command, or even a regular expression, to do what you want. Some cases will always be ambiguous. For example, if you have the names "Mary Ann Steiner" and "Constantin Van Dyke", the patterns are exactly the same, but one needs to be split (2/1) and the other needs to be split (1/2).
You could possibly find a library that knows how to make educated guesses based on a huge dictionary of known names, but failing that...
I think in this case you need the human brain inputting the data to make some of the decisions, and to indicate them during data entry. In my experience, using multiple fields isn't that slow if you navigate using the tab key instead of mousing around. You could also enter the data using a delimiter of your own, like:
Mary Ann,Steiner,2-3
Constantin,Van Dyke,4-2
Then you'd run something that explodes those lines based on "," and enters the elements into your db.
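A minimal sketch of exploding those delimited lines (the field order follows the example above):

    // Parse "First[ Middle],Last,2-3" lines pasted into the textarea.
    foreach (explode("\n", trim($textarea)) as $line) {
        [$first, $last, $score] = array_map('trim', explode(',', $line));
        [$for, $against] = explode('-', $score);
        // ...insert $first, $last, (int)$for and (int)$against into the db
    }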
If you're copy/pasting or scraping the data from an external site, another option would be to just explode every line using the method you're currently using. This should work for most records, and when it doesn't work, it will be obvious -- the resulting record will have too many elements. You can have your script flag just those records for human intervention.
