Algorithm to determine the probable language of a text - PHP

I'm searching for a simple algorithm or an open-source PHP library to estimate whether a text is mainly written in a specific language. I found the following answer relating to Python, which probably points in the right direction, but something that works out of the box for PHP would be ideal.
Of course something like an n-gram estimator wouldn't be too hard to implement, but it requires a reference database as well.
The actual problem to solve is as follows. I run a WordPress blog which is currently flooded with spam. The blog is in German, and virtually all the trackback spam is English. My idea is to immediately mark as spam any trackback that appears to be English. However, I cannot use marker words, because I do not want to flag typos or citations as spam.
My solution:
Using the answers to this question I implemented a solution that detects German by a simple stopword ratio. Any comment that contains a link must consist of at least 25% German stopwords. So you can still write something like "cool article", which has no stopwords at all, but if you include a link, you should bother to write proper German.
Unfortunately the stopwords from NLTK are incorrect: the list contains words that do not exist in German. So I used the Snowball list instead. Using the Perl regexp optimizer I condensed the entire list into a single regexp, and I count the stopwords using preg_match_all(). The whole filter is 25 lines, plus about a third of that in Perl code to produce the regexp from the list. Let's see how it performs in the wild.
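For reference, a minimal sketch of that filter (the stopword regexp below is heavily abbreviated; the real one is generated from the full Snowball list, $comment stands in for the incoming trackback text, and the 25%/link rules are as described above):

    <?php
    // Abbreviated German stopword regexp; generate the real one from the
    // full Snowball list. ASCII-only words keep \b behaviour predictable.
    $stopwordRegex = '/\b(und|oder|aber|nicht|das|die|der|ein|eine|ist|sind|ich|wir|sie|es|im|am|zu|mit|auf|von|bei)\b/i';

    function is_probably_german($text, $stopwordRegex) {
        $words = preg_split('/\s+/u', $text, -1, PREG_SPLIT_NO_EMPTY);
        if (count($words) === 0) {
            return false;
        }
        $hits = preg_match_all($stopwordRegex, $text, $matches);
        return ($hits / count($words)) >= 0.25; // at least 25% stopwords
    }

    // Only enforce the language check when the comment contains a link.
    $hasLink = (stripos($comment, 'http') !== false);
    if ($hasLink && !is_probably_german($comment, $stopwordRegex)) {
        // treat the comment/trackback as spam
    }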
Thanks for your help.

I agree with @Thomas that what you are looking for is a spam classifier rather than a language-detection algorithm. Nonetheless, I think this language-detection solution is simple enough and as out-of-the-box as you want: if you count the number of stopwords from different languages and select the language with the highest count in the document, you have a simple yet very effective language classifier.
Now, the best part is that you hardly need to write any code, as you can use standard stopword lists and processing packages like NLTK to deal with the information. Here is an example of how to implement it from scratch with Python and NLTK.
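If you'd rather stay in PHP, the same idea ports directly: count stopword hits per language and pick the winner. A rough sketch with tiny illustrative word lists (substitute the full Snowball/NLTK lists in practice):

    <?php
    // Tiny illustrative stopword lists; use the full lists in practice.
    $stopwords = array(
        'english' => array('the', 'and', 'is', 'of', 'to', 'in', 'that', 'it'),
        'german'  => array('der', 'die', 'das', 'und', 'ist', 'nicht', 'ein', 'zu'),
    );

    function detect_language($text, $stopwords) {
        // Note: punctuation should be stripped from real input first.
        $words = preg_split('/\s+/u', mb_strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
        $bestLang = null;
        $bestCount = -1;
        foreach ($stopwords as $lang => $list) {
            $count = count(array_intersect($words, $list));
            if ($count > $bestCount) {
                $bestCount = $count;
                $bestLang = $lang;
            }
        }
        return $bestLang;
    }

    echo detect_language('Der Artikel ist nicht schlecht', $stopwords); // german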
I hope this helps.

If all you want to do is recognize English, then there's a very easy hack: if you just check the letters in a post, English is one of the only languages that will be written entirely in the pure-ASCII range. It's hacky, but I believe it's a decent simplification of an otherwise very difficult problem.
My guess on efficacy, from some quick back-of-the-envelope calculations on a couple of French and German blogs, would be ~85%. That isn't foolproof, but it's pretty good given the simplicity of the approach.
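If you want to try it, the check is only a few lines of PHP. A hedged sketch (the 98% threshold is an arbitrary knob, and under UTF-8 the byte ratio is only approximate):

    <?php
    // True when at least $threshold of the bytes are plain ASCII.
    // Crude: umlauts, accents and typographic quotes all count against it.
    function looks_ascii($text, $threshold = 0.98) {
        $len = strlen($text);
        if ($len === 0) {
            return true;
        }
        $asciiOnly = preg_replace('/[^\x00-\x7F]/', '', $text);
        return (strlen($asciiOnly) / $len) >= $threshold;
    }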

Related

Is there a tool to get all derivatives of a word in PHP?

I need to input "face" and get "facial, faces, faced, facing, facer, faceable" etc.
I've come across some ineffective programs which do the opposite, such as Snowball, and a couple of Porter stemming PHP scripts which don't seem to work.
I'm beginning to think I may have to write this script - But, I thought I'd check to see if somebody has already been there/done that.
It will be very hard to find a simple algorithm that generates all the different ways a word can be written like that.
You can instead use a dictionary webservice that already has all the words available.
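If a webservice is overkill, one hedged alternative is to generate candidate forms and keep only those a dictionary accepts, e.g. via PHP's pspell extension (assuming aspell and an English dictionary are installed; the suffix list is a naive illustration):

    <?php
    // Naive sketch: append common derivational suffixes and keep the
    // candidates that the aspell dictionary recognizes.
    $dict = pspell_new('en');
    $word = 'face';
    $suffixes = array('s', 'd', 'r', 'ing', 'ial', 'able');
    $derivatives = array();
    foreach ($suffixes as $suffix) {
        $candidates = array($word . $suffix);
        // English morphology: also try dropping a final 'e' before
        // vowel-initial suffixes (face + ing -> facing).
        if (substr($word, -1) === 'e' && preg_match('/^[aeiou]/', $suffix)) {
            $candidates[] = substr($word, 0, -1) . $suffix;
        }
        foreach ($candidates as $candidate) {
            if (pspell_check($dict, $candidate)) {
                $derivatives[] = $candidate;
            }
        }
    }
    print_r($derivatives);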

php script to find synonyms

I'm writing a PHP script to compare the similarity of two strings. This works pretty well at the moment, but what I would like to do is also match words when one is a synonym of the other.
Any thoughts?
You might want to try looking for a thesaurus service that allows you to query the synonyms for a word and have it return an XML list of synonyms.
Here is something to look at: http://nbii-thesaurus.ornl.gov/thesaurus/
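I don't know that service's exact response format, but consuming such a service from PHP generally looks like the sketch below (the endpoint and XML element names are hypothetical; adjust them to whatever the real service returns):

    <?php
    // Hypothetical endpoint and XML structure -- adjust to the real service.
    $word = 'happy';
    $otherWord = 'glad';
    $xml = simplexml_load_file('http://thesaurus.example.com/lookup?term=' . urlencode($word));
    $synonyms = array();
    if ($xml !== false) {
        foreach ($xml->synonym as $syn) {
            $synonyms[] = strtolower((string) $syn);
        }
    }
    // Treat two words as matching when they are equal or synonymous.
    $isMatch = ($word === $otherWord) || in_array(strtolower($otherWord), $synonyms, true);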
I don't know if this will be helpful for you, but a while ago I worked on a PHP (CodeIgniter) library for Google Search that gets related terms by using the ~ operator in searches.
Maybe you can dig into the source code: codeigniter-googlesearch-api
Formally these aren't synonyms, but depending on the application you have in mind, it could be useful (for example for SEO purposes).
As a side note, if you put ~term into Google, it will bold the terms that are related. Try it with ~investment, for example.

identify tense in php

I'm looking for a way to analyze a string of text and find out in which tense it was written, for example: "I'm going to the store" == present, "I bought a car" == past, etc.
Any tips on how I could get this done?
Yes, this is going to be extremely difficult... I had started to do something similar for what was going to be a quick weekend project until I realized this... nonetheless here is a resource I found to be helpful.
Download the source code of WordNet 3.0 from Princeton, which has a database of English words. The file /dict/index.verb is a list of present-tense English verbs that you should be able to import into your database as a CSV without too much trouble. From there, you're on your own, and will need to figure out how to handle the weirdness that is the English language.
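A hedged sketch of the import side: each data line of index.verb starts with the lemma, so you can even skip the database and build an in-memory lookup (the file path is assumed to point at the extracted WordNet dict directory):

    <?php
    // Build a lookup table of base-form verbs from WordNet's index file.
    // The license header lines at the top of the file start with a space.
    $verbs = array();
    foreach (file('dict/index.verb') as $line) {
        if ($line === '' || $line[0] === ' ') {
            continue; // skip header lines
        }
        $lemma = strtok($line, ' '); // first whitespace-separated field
        $verbs[$lemma] = true;
    }
    var_dump(isset($verbs['buy']));    // true
    var_dump(isset($verbs['bought'])); // false: the index holds base forms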
This could be a rather tasking process. How detailed do you want to get? Do you want to consider only past, present, and future? Or do you want to consider Simple Present, Present Progressive, Simple Past, etc?
In any case, you'll also have to evaluate the affirmative forms, negative forms, and question forms. A great chart online that can help can be found at http://www.ego4u.com/en/cram-up/grammar/tenses
Note the rules and signal words.
Tokenize, find action words from a DB or file (or at least guess: *ed = past, for example), and count tense hits?
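Something like this crude sketch of that idea, which obviously misses irregular verbs entirely:

    <?php
    // Very naive: count suffix/auxiliary-based guesses, pick the bigger pile.
    function guess_tense($text) {
        $past = preg_match_all('/\b\w+ed\b/i', $text, $m);
        $present = preg_match_all('/\b\w+ing\b/i', $text, $m)
                 + preg_match_all('/\b(am|is|are)\b/i', $text, $m);
        return $past > $present ? 'past' : 'present';
    }
    echo guess_tense("I'm going to the store"); // present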
For such a task, I believe regular expressions won't be enough: it's a pretty difficult task...
Either you won't get anything good at all from regexes, or you'll end up with some kind of super-monster-regex that not even you will understand and be able to maintain...
This probably requires more than regexes... something like a "linguistic engine", I suppose...
If you actually need it and aren't just playing around, you might take a look at nltk. Parsing is a complex matter. Parsing natural languages is even more complex. And parsing a highly irregular language, such as English, is even worse. If you can narrow the problem scope down, you stand a better chance at a solution.
What do you need it for?
You can find a basic Brill part-of-speech tagger implementation for PHP at Ian Barber's PHP/ir site. The algorithm will tag your words.
If you enter the words "I think", the result will be:
I/NN think/VBP
NN = noun
VBP = verb, present tense
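Once the tagger has labeled the words, deciding on a tense can be as simple as counting tag types. A rough sketch over tagged output of the word/TAG form shown above:

    <?php
    // Count Penn Treebank verb tags in tagged text like "I/NN think/VBP".
    function tense_from_tags($tagged) {
        preg_match_all('/\/(VBD|VBN)\b/', $tagged, $past);     // past, past participle
        preg_match_all('/\/(VBP|VBZ|VBG)\b/', $tagged, $pres); // present forms
        if (count($past[0]) > count($pres[0])) {
            return 'past';
        }
        return count($pres[0]) > 0 ? 'present' : 'unknown';
    }
    echo tense_from_tags('I/NN think/VBP'); // present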

Auto generation of META tags in PHP

I was thinking of writing a PHP script that would analyse a CMS page's content (i.e. a database field) and then auto-generate (X)HTML META description & keyword tags, but as always there's no point reinventing the wheel, so I'm wondering if anyone knows of such a beastie?
The former, I imagine, would be something like a relatively straightforward regex to grab the first sentence or two, whereas the latter would probably involve eliminating words against a common-words dictionary and then weighting by frequency or similar.
The problems you're considering are twofold: one of keyword extraction and one of document summarization. The first, which you'd obviously use for the keywords, has a very simple naive approach: pick the most frequent words in the content, minus all stopwords (look these up on Wikipedia if you don't know what they are). There are many more advanced methods, including weighting for the inclusion of synonyms, location in the text or markup, and more. There are a few examples of easy keyword-extraction scripts in PHP that you can probably implement without trouble. Just Google something like "PHP keyword extraction" and you'll find a few.
The second problem, on the other hand, is a little more difficult and is still the subject of a lot of academic work. You'd need summarization for a very thorough meta description tag, and it may not be worth your time unless you're after a large-scale AI project, which may still come off as rigid or incoherent. Another approach is a simple heuristic built on keyword extraction: "This article is about (first most common keyword), (second most common keyword), and (third most common keyword)." You at least get the benefit of fitting some content into both the keyword and description tags. If you'd like to shake it up, use some synonyms instead. There is a semi-functional PHP implementation of WordNet, but I'd suggest outsourcing the heavy lifting to the Natural Language Toolkit for Python, as most of the work is already done for you.
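A minimal sketch of that keyword-plus-heuristic approach (the stopword list is abbreviated, and $pageContent stands in for your CMS field):

    <?php
    // Naive keyword extraction: frequency count minus stopwords.
    function top_keywords($text, $n = 3) {
        $stopwords = array('the', 'and', 'a', 'of', 'to', 'in', 'is', 'it', 'that', 'for');
        $words = preg_split('/\W+/', strtolower(strip_tags($text)), -1, PREG_SPLIT_NO_EMPTY);
        $counts = array_count_values(array_diff($words, $stopwords));
        arsort($counts);
        return array_slice(array_keys($counts), 0, $n);
    }

    $keywords = top_keywords($pageContent);
    $metaKeywords = implode(', ', $keywords);
    $metaDescription = 'This article is about ' . $keywords[0] . ', '
                     . $keywords[1] . ', and ' . $keywords[2] . '.';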
I'd like to take a brief moment to encourage your research in this area and ignore the naysaying from Mr. Warnica. Meta information is important both for document classification and information extraction in the area of search. It would be foolish not to have the data, and it is, in fact, worthwhile to automate it for large-scale content management systems. Good luck with your efforts.
The Yahoo Pipes Term Extractor module does something similar to what you want. Unfortunately, I am not aware of the source of Pipes modules being open.

Gettext: Is it a good idea for the message ID to be the english text?

We're getting ready to translate our PHP website into various languages, and the gettext support in PHP looks like the way to go.
All the tutorials I see recommend using the English text as the message ID, i.e.
gettext("Hi there!")
But is that really a good idea? Let's say someone in marketing wants to change the text to "Hi there, y'all!". Then don't you have to update all the language files because that string -- which is actually the message ID -- has changed?
Is it better to have some kind of generic ID, like "hello.message", and an english translations file?
Wow, I'm surprised that no one is advocating using the English as a key. I used this style in a couple of software projects, and IMHO it worked out pretty well. The code readability is great, and if you change an English string it becomes obvious that the message needs to be considered for re-translation (which is a good thing).
In the case that you're only correcting spelling or making some other change that definitely doesn't require translation, it's a simple matter to update the IDs for that string in the resource files.
That said, I'm currently evaluating whether or not to carry this way of doing I18N forward to a new project, so it's good to hear some thoughts on why it might not be a good idea.
I strongly disagree with Richard Harrison's answer, in which he states that his approach is "the only way". Dear asker, do not trust an answer that claims to be the only way, because the "only way" doesn't exist.
Here is another way which IMHO has a few advantages over Richard's approach:
Start by using the proto-version of the English string as the original.
Don't display these proto-strings, but create a translation file for English nonetheless.
Copy the proto-strings into the English translation to begin with.
Advantages:
readable code
text in your code is very close if not identical to what your view displays
if you want to change the English text, you don't change the proto-string but the translation
if you want to translate the same thing twice, just write a slightly different proto-string or add 'version for this and that', and you still have perfectly readable code
I use meaningful IDs such as "welcome_back_1" which would be "welcome back, %1" etc. I always have English as my "base" language so in the worst case scenario when a specific language doesn't have a message ID, I fall-back on English.
I don't like to use actual English phrases as message IDs, because if the English changes, so does the ID. This might not affect you much if you use some automated tools, but it bothers me. I don't like to use simple codes (like msg3975) because they don't mean anything, so reading the code is more difficult unless you litter comments everywhere.
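A minimal sketch of that ID-plus-fallback scheme (the message arrays are hypothetical stand-ins for real catalog files):

    <?php
    // Hypothetical message catalogs keyed by meaningful IDs.
    $messages = array(
        'en' => array('welcome_back_1' => 'Welcome back, %s!'),
        'de' => array(/* 'welcome_back_1' deliberately missing */),
    );

    function msg($id, $lang, $messages, $args = array()) {
        if (isset($messages[$lang][$id])) {
            $template = $messages[$lang][$id];
        } else {
            $template = $messages['en'][$id]; // fall back on English
        }
        return vsprintf($template, $args);
    }

    echo msg('welcome_back_1', 'de', $messages, array('Anna')); // Welcome back, Anna!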
The reason for the IDs being English is so that the ID is returned if the translation fails for whatever reason - the translation for the current language and token not being available, or other errors.
That of course assumes the developer is writing the original English text, not some documentation person.
Also, if the English text changes, then the other translations probably need to be updated too?
In practice we also use pure IDs rather than the English text, but it does mean we have to do lots of extra work to default to English.
There is a lot to consider, and the answer is not so easy.
Using plain English
Pros
Easy to write and READ code
In most cases, it works even without running translation functions in code
Cons
Involved programmers must be also good copywriters :)
You need to write correct, precise texts fully in English, even if the first language you need to ship is something else (e.g. we start a lot of projects in Czech and localize them to English later).
In a lot of cases, you need to use contexts. If you fail to do this from the beginning, it's a lot of work to add them later. To explain: in English, one word can have many different meanings, and you need contexts to differentiate them, which is not always easy (order = sort order, or it can be a purchase order).
It can be very hard to correct the English later in the process. Corrections of the source strings will very often lead to the loss of already-translated phrases. It's very frustrating to lose translations into 3 different languages just because you corrected the English.
Using keys
Pros
You can use localization-platform features even for the English language. E.g. we're using the lovely Crowdin platform. There is a whole range of handy tools, or rather a complete workflow, for translation management: voting for different translations, translation history, glossaries (which help keep the translation/language coherent), proofreading, approval, etc. Using keys makes this process much smoother.
It's much easier to send English texts for proofreading etc. Usually, it's not a good idea to let copywriters modify your code directly :)
Cons
More complicated project setup.
Harder to use %d, %s etc.
In a word: don't do this.
The same word/phrase in English can often enough have more than one meaning, and each meaning a different translation.
Define mnemonic IDs for your strings, and treat English as just another language.
I agree with other posters that ID numbers in code are a nightmare for code readability.
(Ex-localisation engineer)
Haven't you already answered your own question? :)
Clearly, if you intend to support i18n of your application, you should treat all the language implementations the same. If someone decides a string needs to change, you make a similar change in all the language files. The metadata with the checkin should group all the language files together in the same change. If your "default" language is handled differently, that makes it more difficult to maintain.
At the end of the day, a translator should be able to sit down and change the texts for every language (so they match in meaning) without having to involve the programmer that already did his/her job.
This makes me feel like the proper answer is to use a modified version of gettext where you put strings like this:
_(id, backup_text, context)
_('ABOUT_ME', 'About Me', 'HOMEPAGE')
context being optional
Why like this?
Because you need to identify text in the system using unique IDs, not English text that could be repeated elsewhere.
You should also keep the backup, id and context in the same place in your code to reduce discrepancies.
The IDs also have to be readable, which brings in the problem of synonyms and duplicate use (even as IDs). We could prefix the IDs like "HOMEPAGE_ABOUT_ME" or "MAIL_LETTER", but:
people forget to do this at the start, and changing it later is a problem
it's more flexible for the system to be able to group both by ID and by context
which is why I also added the context variable at the end.
The backup text can be pretty much anything; it could even be "[ABOUT_ME#HOMEPAGE text failed to load, please contact example@example.com]"
It won't work with current gettext editing programs like Poedit, but I think you can define custom variable names for translations, like just "t()" without the underscore at the start.
I know that gettext also has support for contexts, but it's not very well documented or widely used.
P.S. I'm not sure about the best variable order to enforce good and extendable code so suggestions are welcome.
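To make the idea concrete, here is a hedged sketch of such a t() function; the lookup array is a hypothetical stand-in, and a real version would sit on top of gettext, a database, or cached PO files:

    <?php
    // Hypothetical translation store, grouped by context and then by ID.
    $translations = array(
        'HOMEPAGE' => array('ABOUT_ME' => 'Über mich'),
    );

    function t($id, $backup, $context = 'DEFAULT') {
        global $translations;
        if (isset($translations[$context][$id])) {
            return $translations[$context][$id];
        }
        return $backup; // fall back to the backup text kept next to the call
    }

    echo t('ABOUT_ME', 'About Me', 'HOMEPAGE'); // Über mich
    echo t('TAGLINE', 'My little blog');        // no translation: backup text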
I'd go so far as to say that you never (for most values of never) want to use free text as a key to anything. Imagine if SO used the question title as the key to this page, for instance. If someone links to it and then the title is edited, the link is no longer valid.
Your problem is similar, except you would also be responsible for updating all links...
Like Douglas Leeder mentions, what you probably want to do is use English as the default (backup) language, although an interface that mixes English and another language is highly confusing (but mildly amusing, too).
In addition to the considerations above, there are many cases where you'd want the "key" (msgid) to be different from the source text (English). For example, in the HTML view I might want to say [yyyy], where the destination and label of that anchor tag depend on the locale of the user. E.g. it might be a link to a social network; in the US it would be Facebook, but in China it would be Weibo. So the msgids might be something like socialSiteUrl and socialSiteLabel.
I use a mix.
For basic strings that I don't think will have conflicts/changes/weird meanings, I'll make the key the same as the English text.
We use Dutch. The strings should be written in the native language of the writer; this makes communication with translators less prone to errors, since the writers can communicate with them in their native language.
