How do I remove Unicode characters from a string? - php

I need a regex to remove emoji and symbols (basically any Unicode character) except Japanese, Korean, Chinese, Vietnamese, and any other language that uses Unicode characters. The regex is going to be used on a PHP and a Python server. I noticed that I'm having problems with iPhone users who use the Emoji keyboard to create some weird names.
So far I've tried a few regexes, but I couldn't find a proper one.
Below is my own text string which I use for testing. Please note that I have no idea what the characters in the other languages mean. If any of them is a bad word, please change it.
abcdefghij
klmnopqrst
uvwxyz
1234567890
한국 韓國
‎Công Ty Cổ Phần Hùng Đức
南极星
おはようございます
============== The characters below should be detected by the regex ========
™£¢£¢§¢∞§¶•§ª§¶
[]{}"';?><_+=-
()*&^%$##!~`,.
😊🐻🏢🐭4️⃣⌘
❤❣☁♫🗽🐯

All symbols match the \p{S} property. You just need to be sure your PHP pattern runs in UTF-8 mode (the /u modifier) – see http://php.net//manual/pl/regexp.reference.unicode.php – and for Python, you need an alternative regex library: https://pypi.python.org/pypi/regex
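In PHP, a minimal sketch of that approach could look like the following ($name is just a hypothetical input variable):
// \p{S} covers the Unicode Symbol categories; most emoji fall under \p{So} (Symbol, other).
// The /u modifier makes PCRE treat both pattern and subject as UTF-8.
$clean = preg_replace('/\p{S}+/u', '', $name);
// e.g. "John 😊🐻™£" becomes "John " (note the leftover trailing space).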

You may find that regular expressions aren't the hammer for all nails. In this case you simply want to exclude characters, so a regex probably isn't the right tool.
In Python 3 the string translate() method would be useful: if you map the code points of the characters you want excluded to None, they will indeed be excluded from the result.
In Python 2, unfortunately, the plain str version of this method only applies to byte strings and takes a 256-character string as its mapping table (the unicode type has a translate() method that behaves like the Python 3 one). You could, however, program a similar algorithm yourself in Python, but it's not going to be as efficient.
PS: There are no "bad words" in your text.

Related

Having en-dash at the end of the string doesn't allow json_encode

I am trying to extract n characters from a string using
substr($originalText,0,250);
The nth character is an en-dash, so I get the last character as â€ when I view it in Notepad. In my editor, Brackets, I can't even open the log file, since it only supports UTF-8 encoding.
I also cannot run json_encode on this string.
However, when I use substr($originalText,0,251), it works just fine. I can open the log file and it shows an en-dash instead of â€. json_encode also works fine.
I can use mb_convert_encoding($mystring, "UTF-8", "Windows-1252") to circumvent the problem, but could anyone tell me why having these characters at the end specifically causes an error?
Moreover, on doing this, my log file shows â€ in Brackets, which is confusing too.
My question is: why is having the en-dash at the end of the string different from having it anywhere else (followed by other characters)?
Hopefully my question is clear, if not I can try to explain further.
Thanks.
Pid's answer explains why this is happening; this answer just looks at what you can do about it...
Use mb_substr()
The multibyte string module was designed for exactly this situation and provides a number of string functions that handle multibyte characters correctly. I suggest having a look through it, as there are likely other functions you will need elsewhere in your application.
You may need to install or enable this module if you get a function not found error. Instructions for this are platform dependent and out-of-scope for this question.
The function you want for the case in your question is mb_substr(). It is called the same way you would call substr(), but takes an additional optional $encoding argument.
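A minimal sketch for the case in the question (the variable name is taken from the question; passing the encoding explicitly is optional but makes the intent clear):
// mb_substr() counts characters, not bytes, so a multi-byte character is never cut in half.
$excerpt = mb_substr($originalText, 0, 250, 'UTF-8');
// json_encode() now succeeds, because the excerpt is still valid UTF-8.
$json = json_encode(['excerpt' => $excerpt]);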
UTF-8 uses multi-byte sequences (a lead byte followed by continuation bytes) which extend the encoding beyond ASCII to accommodate many more characters.
A single UTF-8 character may be coded into one, two, three or four bytes, depending on the character.
You cut the string right in the middle of a multi-byte character:
[<-character->]
[byte-0|byte-1]
       ^
       You cut the string right here, in the middle!
[<-----character---->]
[byte-0|byte-1|byte-2]
       ^      ^
       Or anywhere in here if it's 3 bytes long.
So the decoder has the first byte(s) but can't read the entire character because the string ends prematurely.
This causes all the effects you are witnessing.
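A small self-contained illustration of the effect (the sample string is made up for this example):
// The en-dash (U+2013) is 3 bytes in UTF-8: E2 80 93.
$s = "abc–";
echo bin2hex(substr($s, 0, 5));                       // 616263e280: the last character is cut in half
var_dump(json_encode(substr($s, 0, 5)));              // false on current PHP versions (JSON_ERROR_UTF8)
var_dump(json_encode(mb_substr($s, 0, 4, 'UTF-8')));  // succeeds: the en-dash survives as \u2013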
The solution to this problem is here in Dezza's answer.

Converting Unicode characters into the equivalent ASCII ones

I need to "flatten out" a number of Unicode strings for the purposes of indexing and searching. For example, I need to convert GötheФ€ into ASCII. The last two characters have no close representations in ASCII so it's Ok to discard them completely. So what I expect from
echo iconv("UTF-8", "ASCII//TRANSLIT//IGNORE", "GötheФ€");
is Gothe but instead it outputs Gothe?EUR.
In addition to letters, I'd also like all the variety of Unicode numerals and punctuation marks, such as periods, commas, dashes, slashes etc., to be replaced by their closest ASCII counterparts. This is something ASCII//TRANSLIT//IGNORE in iconv does already, but not without producing some garbage output for the Unicode characters for which it's unable to find an ASCII replacement. I'd like such characters to be ignored entirely.
How do I get the expected result? Is there a better way, perhaps using the intl library?
You've picked a hard problem. It is better to tell the users entering Unicode characters to transliterate to ASCII themselves. Doing it for them will only upset them when they disagree with your transliteration.
Anything you do will likely be jarring and offensive to people who place great meaning on diacritics: http://en.wikipedia.org/wiki/Diacritic
No matter what transliteration strategy you use, you will not please everyone, since different people ascribe different meanings to different characters. A transliteration that delights one person will enrage another. You won't make everyone happy unless you let everyone use whatever Unicode character they want.
But life is jarring and offensive, so off we go:
This PHP code:
function toASCII($str)
{
    return strtr(
        utf8_decode($str),
        utf8_decode('ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ'),
        'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'
    );
}
What the above function does is decode the input to a single-byte (ISO-8859-1) string with utf8_decode(), and then, via strtr(), replace each character that appears in the first list with the character at the same position in the second list.
For example the Unicode À is transliterated to ASCII A, and the å is converted to a. You'll have to specify this for every single Unicode character that you believe transliterates to an ASCII character. For the others, remove them or run them through another transliteration algorithm.
There are 95,221 other characters that you will have to look at which might transliterate to ASCII. It becomes an existential game of "When is an A no longer an A?". What about the Klingon characters and the road-map signs that kind of look like an A? The fish character kind of looks like an a. Who is to say what is what?
This is a lot of work, but if you are cleaning database input, you have to create a whitelist of characters and block out the other barbarians, keeping them out at the moat; it's the only reliable way.
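On the question's last point: the intl extension's Transliterator class can do much of this mapping for you. A minimal sketch, assuming the intl extension is installed; the exact rule string is a matter of taste, and the final preg_replace() is just a blunt way to drop whatever still isn't printable ASCII:
// "Any-Latin" romanizes other scripts (so Ф becomes F) rather than discarding them;
// leave it out if you want non-Latin scripts removed entirely.
$translit = Transliterator::create('Any-Latin; Latin-ASCII');
$flat = $translit->transliterate("GötheФ€");
// Depending on the ICU version, € may come out as "EUR"; strip such leftovers separately if unwanted.
$flat = preg_replace('/[^\x20-\x7E]/', '', $flat);
echo $flat;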

Accented characters are removed when using preg_match_all - regex, unicode

I have the problem described in the title.
If I use
preg_match_all('/\pL+/u', $_POST['word'], $new_word);
and I type hello à and ì, the $new_word returned is hello and (the accented characters are missing).
Why?
Someone advised me to specify all characters I want to convert in this way
preg_match_all('/\pL+/u', $_POST['word'], 'aäeëioöuáéíóú');
, but I want my application to work with all existing accents (for a multilanguage website).
Can you help me?
Thanks.
EDIT: I should specify that I use this regex to strip punctuation. It strips all punctuation fine, but the Unicode characters are returned incorrectly; in fact, they are not returned at all.
EDIT 2: I am sorry, I explained this very badly.
The problem is not in preg_match_all but in
str_word_count($my_key, 2, 'aäáàeëéèiíìoöóòuúù');
I had to manually specify the accented characters, but I think there are many others. Right?
\pL matches any Unicode letter, and the /u modifier makes the pattern and subject be treated as UTF-8. Be sure that $_POST['word'] is a UTF-8 encoded string. If not, try utf8_encode() before matching, or check the encoding of your HTML form. In my tests, your example works like a charm.
You may use this together with count() to get the number of words. Then you don't need to care about the possible characters; \pL will do that for you. This should do the trick:
$string = "áll thât words wíth ìntérnâtiønal çhårs";
preg_match_all('/\pL+/u', $string, $words);
echo count($words[0]); // returns: 6
Try using mb_ereg_match() (instead of preg_match()) from the Multibyte String extension. It is made specifically for working with multibyte strings.
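A minimal sketch of how that call might look (just a sketch; note that mb_ereg_match() is anchored at the start of the string, so the pattern below is written to cover the whole input):
// Validate that the input consists of letters and whitespace only.
mb_regex_encoding('UTF-8');
$word = "hello à and ì";
if (mb_ereg_match('[[:alpha:][:space:]]+$', $word)) {
    echo "no punctuation found";
}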

Converting a Javascript regular expression to preg_match() compatible

I have this regex from some JavaScript:
/+\uFF0B0-9\uFF10-\uFF19\u0660-\u0669\u06F0-\u06F9u/
After some reading about PHP and \u support, I converted it to \x:
/\+\x{FF0B}0-9\x{FF10}-\x{FF19}\x{0660}-\x{0669}\x{06F0}-\x{06F9}/u
but I'm still not able to use it in PHP:
$phoneNumber = '+911561110304';
$start = preg_match('/\+\x{FF0B}0-9\x{FF10}-\x{FF19}\x{0660}-\x{0669}\x{06F0}-\x{06F9}/u', $phoneNumber,$matches);
but $matches comes back empty!
How do I fix this?
It looks like you want to match an ASCII plus sign or its fullwidth equivalent (U+FF0B), followed by one or more digits from a few different writing systems. But, as @mario observed, you seem to be missing some square brackets. The JavaScript version probably should be:
/[+\uFF0B][0-9\uFF10-\uFF19\u0660-\u0669\u06F0-\u06F9]+/
(I couldn't see any reason for the u at the end, so I dropped it.) The PHP version would be:
'/[+\x{FF0B}][0-9\x{FF10}-\x{FF19}\x{0660}-\x{0669}\x{06F0}-\x{06F9}]+/u'
Of course, this will allow a mix of ASCII, Arabic and fullwidth digits in the same number. If that's a problem, you'll need to break it up a bit. For example:
'/\+(?:[0-9]+|[\x{0660}-\x{0669}]+|[\x{06F0}-\x{06F9}]+)|\x{FF0B}[\x{FF10}-\x{FF19}]+/u'
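For reference, this is how the simpler combined pattern behaves against the number from the question (the fullwidth string is just an illustrative variant):
$pattern = '/[+\x{FF0B}][0-9\x{FF10}-\x{FF19}\x{0660}-\x{0669}\x{06F0}-\x{06F9}]+/u';
preg_match($pattern, '+911561110304', $matches);
print_r($matches);   // Array ( [0] => +911561110304 )
preg_match($pattern, '＋９１１５６１', $matches);
print_r($matches);   // Array ( [0] => ＋９１１５６１ )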

PCRE seems to be removing particular characters

I have a piece of text (part French, part English) that has the European-style Canadian dollar symbol ($C) in it multiple times. When I attempt to match it with a regex, using either literal characters or Unicode escapes, the symbols seem to have been removed from the text and cannot be matched. I used a lazy regex so that it still works if it doesn't find the expected symbols.
Additionally, the text is in a UTF-8 XML document and is being displayed from a web interface (made in house).
Escape the $ inside the regex; the dollar sign has a special meaning in a regex (end of line/subject).
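A small sketch of what that looks like in PCRE (the sample text is made up; \$C matches a literal dollar sign followed by C):
$text = 'Le coût est de 25 $C, taxes incluses.';
// Unescaped, $ means "end of subject"; escaped, it matches the literal character.
preg_match('/\d+\s*\$C/u', $text, $m);
print_r($m);   // Array ( [0] => 25 $C )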
In Perl, regexes and code are ASCII by default, but if you want to embed Unicode in your source, first you have to have an editor that handles Unicode, and second you have to tell Perl your source code contains Unicode (with the use utf8 pragma).
If you don't want to do that, you can embed code points in strings and regexes (in Perl) with a construct like this: $regex = qr/this is some text, this: \x{1209} is a code point of a Unicode character/;
It matches the character IF the data source is decoded Unicode (internalized) and contains that character.
Edit: I don't think there is a Unicode code point for the Canadian dollar; it's just written '$C'. As someone said, you have to escape the $ if the regex is interpolated.
If you keep the $C, note that the character class [$C] matches $ or C, not the combination. Maybe (?:\$|\$C) would be a better choice.
The issue turned out to be a bug in code just before I called eval(). Something in the French Unicode text was interfering with the code passed to eval(), so by not combining the text and the regex it worked fine.
