Creating a slug with UTF-8 in it - php

I am trying to write some code to take UTF-8 text and create a slug that contains some UTF-8 characters. So this is not about transliterating UTF-8 into ASCII.
So basically I want to replace any UTF-8 character that is whitespace, a control character, punctuation, or a symbol with a dash. There exist Unicode categories that I should be able to use: \p{Z}, \p{C}, \p{P}, or \p{S}, respectively.
So I could do something as simple as this:
preg_replace("#(\p{P}|\p{C}|\p{S}|\p{Z})+#", "-", "This. test? has an ö in it");
but it results in this:
This-test-has-an-√-in-it
(and I'd want This-test-has-an-ö-in-it)
It butchers the umlaut o, possibly because in UTF-8 it is encoded as the two bytes c3 b6, of which b6 seems to be recognized as a punctuation character (so the \p{P} matches here), while the c3 remains in the text. This is strange because, AFAIK, a lone b6 byte is not valid UTF-8.
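To see the bytes involved, a quick check (not in the original question) can dump the hex of the character, assuming the script itself is saved as UTF-8:
echo bin2hex('ö'); // c3b6 when this source file is saved as UTF-8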
I also tried the same thing in Perl in order to make sure it is not a PHP problem, but the code
$s = 'This. test? has an ö in it';
$s =~ s/(\p{P}|\p{C}|\p{S}|\p{Z})+/-/g;
also produces:
This-test-has-an-√-in-it
(which probably makes sense, as PHP's PCRE functions use Perl Compatible Regular Expressions)
While when I do this in Python
import regex as re
text=u"This. test? has an ö in it";
print re.sub(ur"(\p{P}|\p{C}|\p{S}|\p{Z})+", "-", text)
it produces my desired
This-test-has-an-ö-in-it
What to do?

The solution was to use the "Unicode modifier" u:
preg_replace("#(\p{P}|\p{C}|\p{S}|\p{Z})+#u", "-", "This. test? has an ö in it");
correctly produces
This-test-has-an-ö-in-it
So: using Unicode categories without the u (Unicode) modifier produces strange results, without any warning.
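For completeness, a minimal slug helper built on the corrected pattern could look like the sketch below; the function name slugify, the lowercasing and the dash trimming are my own additions, not part of the original answer, and mb_strtolower assumes the mbstring extension is available.
function slugify($text)
{
    // Replace any run of punctuation, control, symbol or separator characters with a dash
    $text = preg_replace('#[\p{P}\p{C}\p{S}\p{Z}]+#u', '-', $text);
    // Trim leading/trailing dashes and lowercase the rest
    return mb_strtolower(trim($text, '-'), 'UTF-8');
}
echo slugify('This. test? has an ö in it'); // this-test-has-an-ö-in-it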

Related

Normalize Turkish in PHP?

Is there a way to simply normalize turkish characters like Ç, Ğ, İ, Ö, Ş, Ü and ı ?
Right now I'm using str_replace, but that doesn't seem the right way to go, because it's easy to forget a character... Is there a more standard way? I tried to use the normalize method from the PHP internationalization module, but the Turkish characters stay Turkish. I would like to replace them with plain characters for the URL, so Ç becomes C, Ş becomes S, and so on.
What do you mean by normalization? Just take the characters as they come in, but put your scripts, connection and HTML in the correct encoding.
UTF-8 suggested, explanation: UTF-8 vs. Unicode
If you only want ASCII chars, you can test for that with something like ord($char) < 128 (ASCII covers code points 0-127).
For conversion look into these functions:
http://php.net/iconv
http://php.net/utf8_encode
http://php.net/mb_convert_encoding
A call similar to
$str = iconv('UTF-8', 'ASCII//TRANSLIT', $str);
would do the trick.
Another preg_replace way: Convert special characters to normal characters using PHP, like ã, é, ç to a, e, c
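If the intl extension is available, another option (my own suggestion, not from the answers above) is the Transliterator class, which also covers the Turkish letters; a rough sketch, with the exact output depending on the installed ICU version:
$tr = Transliterator::create('Any-Latin; Latin-ASCII');
echo $tr->transliterate('Çağrı Şükrü İstanbul');
// roughly: Cagri Sukru Istanbul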

Language specific characters to regular English chars

I am not sure where to start with this, but here is what I want to do:
Users have a text field where they need to input a few words. The problem is that the page will be used by people from different countries, and they will enter "weird" Latin characters like ž, Ä, Ü, đ, Ť, Á etc.
Before saving to the database I want to convert them to z, a, u, d, t, a... Is there a way to do this without writing something like the following (I think there are too many characters to cover):
$string = str_replace(array('Č','Ä','Á','đ'), array('C','A','A','d'), $string);
And, yes, I know that I can save UTF-8 in the database, but the problem is that this string will later be sent by SMS, and because of the nature of the SMS protocol these "special" chars take up more space in a message than regular English alphabet characters (I am limited to 120 chars, and if I put "Ä" in a message, it uses more than one character's worth of space).
First of all, I would still store the original characters in utf-8 in the database. You can always "translate" them to ASCII characters upon retrieval. This is good because if, say, in the future SMS adds UTF-8 support (or you want to use user data for something else), you'll have the original characters intact.
That said, you can use iconv to do this:
iconv('utf-8', 'ascii//TRANSLIT', $input); //where $input contains "weird" characters
See this thread for more info, including some caveats of this approach: PHP: Replace umlauts with closest 7-bit ASCII equivalent in an UTF-8 string
Close but not perfect, because it converts the accents into extra ASCII characters (apostrophes, backticks, carets and so on) instead of dropping them.
http://www.php.net/manual/en/function.iconv.php
echo iconv("ISO-8859-1", "ASCII//TRANSLIT", 'Martín');
//output: Mart'in
echo iconv("ISO-8859-1", "ASCII//TRANSLIT", "ÆÇÈÊÈÒÐÑÕ");
//output: AEC`E^E`E`OD~N~O
Using
echo iconv('utf-8', 'ascii//TRANSLIT', 'Martín');
//output: Mart
If the input string is not actually UTF-8 (here the literal is ISO-8859-1 but declared as UTF-8), iconv just cuts the string off from the first offending byte onwards.
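One way to avoid the stray apostrophes and carets that //TRANSLIT leaves behind (a sketch of my own, not part of the answer above) is to strip everything that is not a plain letter, digit or space afterwards:
$ascii = iconv('UTF-8', 'ASCII//TRANSLIT', 'Martín');
$ascii = preg_replace('/[^A-Za-z0-9 ]/', '', $ascii); // drop the ' ` ^ ~ marks //TRANSLIT can leave
echo $ascii; // Martin (assuming the input really is UTF-8; exact iconv output varies by implementation and locale)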

Converting Unicode characters into the equivalent ASCII ones

I need to "flatten out" a number of Unicode strings for the purposes of indexing and searching. For example, I need to convert GötheФ€ into ASCII. The last two characters have no close representations in ASCII so it's Ok to discard them completely. So what I expect from
echo iconv("UTF-8", "ASCII//TRANSLIT//IGNORE", "GötheФ€");
is Gothe but instead it outputs Gothe?EUR.
In addition to letters, I'd also like the whole variety of Unicode numerals and punctuation marks, such as periods, commas, dashes, slashes etc., to be replaced by their closest ASCII counterparts. That is something ASCII//TRANSLIT//IGNORE in iconv already does, but not without producing some garbage output for the Unicode characters it cannot find an ASCII replacement for. I'd like such characters to be ignored entirely.
How do I get the expected result? Is there a better way, perhaps using the intl library?
You've picked a hard problem. It is better to tell users entering Unicode characters to transliterate to ASCII themselves. Doing it for them will only upset them when they disagree with your transliteration.
Anything you do will likely be jarring and offensive to people who place great meaning on Diacritics: http://en.wikipedia.org/wiki/Diacritic
No matter what transliteration strategy you use, you will not please everyone, since different people ascribe different meanings to different characters. A transliteration that delights one person will enrage another. You won't make everyone happy unless you let everyone use whatever Unicode character they want.
But life is jarring and offensive, so off we go:
This PHP Code:
function toASCII($str)
{
    // Convert to single-byte ISO-8859-1 and map each accented character
    // to the ASCII character at the same position in the replacement list
    return strtr(
        utf8_decode($str),
        utf8_decode('ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ'),
        'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'
    );
}
What the above PHP function does is replace each character from the accented list (the second argument to strtr) with the character at the same position in the ASCII list (the third argument); utf8_decode first turns the UTF-8 input into single-byte characters so that strtr can work character by character.
For example the Unicode À is transliterated to ASCII A, and the å is converted to a. You'll have to specify this for every single Unicode character that you believe transliterates to an ASCII character. For the others, remove them or run them through another transliteration algorithm.
There are 95,221 other characters that you will have to look at which might transliterate to ASCII. It becomes an existential game of "When is an A no longer an A?". What about the Klingon characters and the road-map signs that kind of look like an A? The fish character kind of looks like an a. Who is to say what is what?
This is a lot of work, but if you are cleaning database input, you have to create a white list of characters and block out the other barbarians, keeping them out at the moat; it's the only reliable way.
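A shorter alternative (my own sketch, not part of the answer above, and assuming the intl extension is installed) is to decompose the string with Normalizer and then drop the combining marks, which also handles characters the lookup table above misses:
// Decompose ö into o + combining accent, then remove the combining marks (\p{Mn})
$decomposed = Normalizer::normalize('GötheФ€', Normalizer::FORM_D);
$ascii = preg_replace('/\p{Mn}+/u', '', $decomposed);   // Göthe -> Gothe
$ascii = preg_replace('/[^\x20-\x7E]/u', '', $ascii);   // drop Ф, € and anything else outside ASCII
echo $ascii; // Gothe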

php preg_replace: unicode modifier for ascii strings

I need to handle strings in my PHP script using regular expressions. But there is a problem - different strings have different encodings. If a string contains just ASCII symbols, the mb_detect_encoding function returns 'ASCII'. But if a string contains Russian symbols, for example, mb_detect_encoding returns 'UTF-8'. It's not a good idea to check the encoding of each string manually, I suppose.
So the question is - is it correct to use preg_replace (with unicode modifier) for ascii strings? Is it right to write such code preg_replace ("/[^_a-z]/u","",$string); for both ascii and utf-8 strings?
This would be no problem if the two choices were "UTF-8" or "ASCII", but that's not the case.
If PHP doesn't use UTF-8, it uses ISO-8859-1, which is NOT ASCII but a superset of it (the first 128 code points coincide). Some characters, for example the Swedish å, ä and ö, can be represented in both ISO-8859-1 and UTF-8, but with different byte sequences! I don't think this matters much for the preg_* functions, so it may not be applicable to your question, but please keep it in mind when working with different encodings.
You should really, really try to know which character set your strings are in, without the magic of mb_detect_encoding (mb_detect_encoding is not a guarantee, just a good guess). For example, strings fetched through HTTP usually have a character set specified in the HTTP headers.
Yes, sure, you can always use the Unicode modifier, and it will affect neither the results nor the performance.
The 7-bit ASCII character set is encoded identically in UTF-8. If you have an ASCII string you should be able to use the PREG "u" modifier on it.
However, if you have an extended 8-bit character set such as ISO-8859-1, Windows-1252 or HP-Roman8, the characters with the high bit set (values 0x80 - 0xFF) are not encoded the same way in UTF-8, and it would not be appropriate to use the PREG "u" modifier.
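One defensive pattern (my own sketch, not taken from the answers) is to make sure the string really is UTF-8 before applying the u modifier; the ISO-8859-1 fallback below is an assumption about the source encoding:
// ASCII strings are already valid UTF-8, so only genuinely non-UTF-8 input gets converted
if (!mb_check_encoding($string, 'UTF-8')) {
    $string = mb_convert_encoding($string, 'UTF-8', 'ISO-8859-1'); // assumed source encoding
}
$clean = preg_replace('/[^_a-z]/u', '', $string);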

PCRE seems to be removing particular characters

I have a piece of text (part French, part English) that contains the European-style Canadian dollar symbol ($C) multiple times. When I attempt to match it with a regex, using either literal or Unicode escape characters, the symbols appear to have been removed from the text and cannot be matched. I used a lazy regex so that it still works even if it doesn't find the expected symbols.
Additionally the text is in an xml utf-8 doc and being displayed from a web interface(made in house).
Escape the $ inside the regexp; the dollar sign has a special meaning in regular expressions.
In Perl, regexes and code are written in ASCII, but if you want to embed Unicode in your text, first you have to have an editor that handles Unicode, and second you have to tell Perl that your source code contains Unicode (with a "use utf8" pragma).
If you don't want to do that, you can embed code points in strings (and regexes) with a construct like this: $regex = qr/this is some text, this: \x{1209} is a Unicode code point character/;
It matches the character IF the data source is decoded Unicode (internalized) and contains that character.
Edit - I don't think there is a Unicode character for the Canadian dollar; it's just written '$C'. As someone said, you have to escape the $ if the regex is interpolated.
If you keep the $C, the character class [$C] matches $ or C, not the combination. Maybe (?:\$|\$C) would be a better anchor.
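As an illustration of the escaping point (my own example; the sample string and pattern are invented, not the asker's actual data), matching an amount followed by the $C symbol could look like this:
// \$ matches a literal dollar sign; the u modifier keeps the UTF-8 French text intact
preg_match('/(\d+(?:[.,]\d{2})?)\s*\$C/u', 'Prix total : 25,00 $C', $m);
echo $m[1]; // 25,00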
The issue turned out to be a bug in code just before I called eval(). Something in the French Unicode text was messing with the code passed to eval, so by not combining the text and the regex it worked fine.
