How bad is mb_internal_encoding("UTF-8");? - php

After answering Zend_Cache: After loading cached data, character encoding seems messed up
I use it to change PHP's internal encoding, which is originally ISO-8859-1,
so I need to change the encoding of every non-English input value; by using it I force PHP to convert every value to UTF-8, as you can see in the question linked above.
I am caching Arabic text in files using Zend_Cache, and I wasn't able to do it without that function.
I need to know: how bad is it to use this function, mb_internal_encoding("UTF-8")?
I have adopted this function in every project I work on; all of them use non-English characters.

Related

Is it safe to use raw emojis in PHP source code?

Example :
$fire = '🔥';
I know PHP 5+ supports this functionality natively but is it best practice or should I be storing them using their codepoints instead and if so, why?
As far as your editor and the PHP compiler are concerned, it's all just text, and '🔥' is no different from 'fire' or 'Φωτιά'.
When PHP runs, it will read the bytes in from the file and put them in memory, without caring what they mean. This leads to the most likely problem you'll have: if you save the file in your text editor as UTF-16, and then echo the string to a browser telling it that it's UTF-8, the browser won't show the right thing. But that's easily avoided by making sure your editor always uses UTF-8, and your output headers tell the browser that's what you're using.
If you don't trust your editor to do that, and you're running PHP7, you could write it in the escaped notation "\u{1f525}", but when it runs, the same bytes will end up in memory.
You might have similar problems if you send the text elsewhere - to a database, for instance - and that somewhere else doesn't know to handle it as UTF-8. How you write the string in your source file won't make any difference to that, though, that's just a case of making sure everything is configured to match.
Note: you don't actually have to use UTF-8 for this, you could use UTF-16, or some other encoding, as long as you're consistent; but UTF-8 is by far the most common these days, particularly on the web.
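A quick sketch of both points, assuming the source file is saved as UTF-8 and the page is served over HTTP (the header call is only there to illustrate the output side):
// Tell the browser the output is UTF-8.
header('Content-Type: text/html; charset=utf-8');

$literal = '🔥';           // raw emoji in the source file
$escaped = "\u{1f525}";    // same code point via the PHP 7 escape syntax

var_dump($literal === $escaped); // bool(true) - identical bytes in memory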

Is it a good practice to use mb_convert_encoding function

This question is different from "UTF-8 all the way through", as it asks how safe it is, and whether it is good practice, to use the mb_convert_encoding function.
Let's say that a user can upload files using the PHP API. Each filename and path gets stored in a PostgreSQL database table which has UTF-8 as its default encoding.
Sometimes a user uploads files whose names aren't UTF-8 encoded, and they get imported into the database. The problem is that the characters that are not UTF-8 encoded end up scrambled and do not display as they should in the table columns.
I was thinking of adding the following to the PHP code before import:
if ( ! mb_check_encoding($content, 'UTF-8')) {
    $content = mb_convert_encoding($content, 'UTF-8');
}
Does this look like good practice, and will it be displayed and converted correctly by the user's client if I return UTF-8 as the output? Is there a potential loss of bytes when using mb_convert_encoding?
Thanks
If you're going to convert an encoding, you need to know what you're converting from. You can check whether the encoding is or isn't valid UTF-8, but if it tells you it's not valid UTF-8 then you still have no clue what it is. Omitting the $from_encoding parameter from mb_convert_encoding just makes it assume some preset encoding for that parameter, but that doesn't mean that $content actually is in that encoding.
In other words: if you don't know what encoding a string is in, you cannot meaningfully convert it to anything else either, and just trying to convert it from ¯\_(ツ)_/¯ is a crapshoot, with the result equally likely to be something useful or utter garbage.
If you encounter unknown encodings, you only have a few choices:
Reject the input value.
Test whether it's one of a handful of other expected encodings and then explicitly convert from your best guess (sketched below); but that is pretty much a crapshoot as well.
Just use bin2hex or something similar on the value, essentially giving up on trying to interpret it correctly, but still leaving some semblance to the original value.
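A minimal sketch of the last two options, assuming the value is a filename held in a hypothetical $filename variable and that the candidate list is only a guess:
$candidates = ['UTF-8', 'ISO-8859-1', 'Windows-1252'];
$detected   = mb_detect_encoding($filename, $candidates, true);

if ($detected === 'UTF-8') {
    $safeName = $filename;                                          // already valid UTF-8
} elseif ($detected !== false) {
    $safeName = mb_convert_encoding($filename, 'UTF-8', $detected); // best guess, may be wrong
} else {
    $safeName = bin2hex($filename);                                 // give up on interpreting it
}
Note that single-byte candidates such as ISO-8859-1 will match almost any byte sequence, which is exactly why this kind of detection is itself a crapshoot.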

php + mysql + html + javascript = i18n headache

This has always been a problem for me: character encoding. I have always tried to solve it with little patches, which never really fixes the problem, so I am looking for a solid solution to all of these issues. I want to learn how big apps (Facebook, Google, other multilingual Ajax apps and APIs) solve this. I want a solution which will solve all my character encoding problems. I use PHP, MySQL, HTML and JavaScript to build my application, so the solution should cover all of these languages together. If you can write out a full configuration, that would be perfect, but a long document is fine too; I can read it. I need help. Thank you. I cannot transfer strings (text) correctly across all these languages.
I also pull data from external APIs. How should I take care of them?
It's pretty easy if you just stick to using Unicode everywhere.
set MySQL table encodings to UTF-8
make sure you're talking to the database in UTF-8 by running SET NAMES utf8
save all your source code in UTF-8
when manipulating strings in PHP which may contain UTF-8 characters, use the mb_ functions
send HTTP Content-Type headers denoting that the content is in UTF-8
JavaScript handles Unicode natively, so you should have no worries there
The thing is that different technologies default to different character encodings. Unfortunately strings do not have implicit encoding metadata attached; they're just sequences of bytes. Unless told, the receiver of a string can only make a best guess at what encoding that sequence is supposed to be in. Whenever connecting two pieces of anything, you need to make sure they're using the same encoding (or you need to explicitly convert from one encoding to the other). Always assume that you have to define the encoding somewhere; how exactly that needs to be done depends on the technology.
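As a rough sketch of the checklist above (assuming PDO for the database connection; the DSN, credentials and charset names are placeholders):
// Tell the browser the response is UTF-8.
header('Content-Type: text/html; charset=utf-8');

// Talk to MySQL in UTF-8; the charset in the DSN is the modern equivalent of SET NAMES.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'secret');
// On older drivers you could run it explicitly instead:
// $pdo->exec('SET NAMES utf8mb4');

// Make the mb_* functions assume UTF-8 by default.
mb_internal_encoding('UTF-8');
The table and column charsets themselves are set on the MySQL side, and the source files just need to be saved as UTF-8 in your editor.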

Is it possible to write a great PHP app which uses Unicode?

My next web application project will make extensive use of Unicode. I usually use PHP and CodeIgniter; however, Unicode is not one of PHP's strong points.
Is there a PHP tool out there that can help me get Unicode working well in PHP?
Or should I take the opportunity to look into alternatives such as Python?
PHP can handle Unicode fine once you make sure to encode and decode on entry and exit. If you are storing in a database, ensure that the language encodings and charset mappings match up between the HTML pages, the web server, your editor, and the database.
If the whole application uses UTF-8 everywhere, decoding is not necessary. The only time you need to decode is when you are outputting data in some other charset, outside the web. When outputting HTML, you can use
htmlentities($var, ENT_QUOTES, 'UTF-8');
to get the correct output. Called without the charset argument, the function can mangle the string in many cases. The same goes for the mail functions.
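For instance (a small illustration; the variable is hypothetical, and the no-charset behaviour depends on your PHP version's default charset):
$name = "Grüße";

// Safe: the charset argument tells htmlentities the input is UTF-8.
echo htmlentities($name, ENT_QUOTES, 'UTF-8');

// Risky: with an ISO-8859-1 default (older PHP versions), each byte of a
// multibyte UTF-8 character is treated as a separate Latin-1 character
// and the output is garbled.
echo htmlentities($name, ENT_QUOTES);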
http://developer.loftdigital.com/blog/php-utf-8-cheatsheet is a very good resource for working in UTF-8
One of the major features of PHP 6 will be tightly integrated Unicode support.
Implementing UTF-8 in PHP 5:
Since PHP strings are byte-oriented, the only practical encoding scheme for Unicode text is UTF-8. The tricks are (taken from php|architect magazine):
Present HTML pages in UTF-8
Convert PHP scripts to UTF-8
Convert the site content, back-end databases and the like to UTF-8
Ensure that no PHP functions corrupt the UTF-8 text
Check out the PHP UTF-8 Cheat Sheet at http://www.gravitonic.com/talks/
PHP is mostly unaware of charsets and treats strings as byte streams. That's not much of a problem really, but you'll have to do a bit of work yourself.
The general rule of thumb is that you should use the same charset everywhere. If you use UTF-8 everywhere, then you're 99% there. Just make sure that you don't mix charsets, because then it gets really complicated. The only thing that won't work correctly with UTF-8 is string manipulation that needs to operate on a character level, e.g. strlen, substr, etc. You should use UTF-8-aware versions in place of those; the multibyte string extension gives you just that.
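A small illustration of the difference (the example string is arbitrary):
$word = 'naïve';                      // 5 characters, 6 bytes in UTF-8

echo strlen($word);                   // 6 - counts bytes
echo mb_strlen($word, 'UTF-8');       // 5 - counts characters

echo substr($word, 0, 3);             // may cut the 'ï' in half, leaving a broken byte
echo mb_substr($word, 0, 3, 'UTF-8'); // "naï"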
For a checklist of places where you need to make sure the charset is set correct, look at:
http://developer.loftdigital.com/blog/php-utf-8-cheatsheet
For more information, look at:
http://www.phpwact.org/php/i18n/utf-8

PHP, MSSQL2005 and Codepages

I have a php script which accesses a MSSQL2005 database, reads some data from it and sends the results in a mail.
There are special characters in both some column names and in the fields itself.
When I access the script through my browser (webserver iis), the query is executed correctly and the contents of the mail are correctly (for my audience) encoded.
However, when I execute php from the console, the query fails (due to the special characters in the column names). If I replace the special characters in the query with calls to chr() and the character code in latin-1, the query gets executed correctly, but the results are also encoded in latin-1 and therefore not displayed correctly in the mail.
Why is PHP/the MSSQL driver/… using a different encoding in the two scenarios? Is there a way around it?
If you wonder, I need the console because I want to schedule the script using SQLAgent (or taskmanager or whatever).
Depending on the type of characters you have in your database, it might be a console limitation, I guess. If you type chcp in the console, you'll see the active code page, which might be something like CP437, also known as extended ASCII. If you have characters outside this code page, like in UTF-8, you might run into problems. You can change the current active code page by typing chcp 65001 to switch to UTF-8.
You might also want to change the default Raster font to Lucida Console depending on the required characters as not all fonts support extended characters (right click on command prompt window's title, properties, font).
As already said, PHP's Unicode support is not ideal, but you can manage in PHP 5 with a few well-placed calls to utf8_decode. The secret of character encoding is to understand exactly what the current encoding is at every step of the chain: the database, the database connection, the bytes currently in your PHP variable, your output to the console screen, your email's body encoding, your email client, and so on...
For everything that has special characters, these days something like UTF-8 is usually recommended. Make sure everything along the way is set to UTF-8 and convert only where necessary.
PHP's poor support for the non-English world is well known. I've never used a database with characters outside the basic ASCII realm, but obviously you already have a workaround, and it seems you just have to live with it.
If you wanted to take it a step further, you could:
1. Write an array that contains all the special characters and their chr() equivalents
2. foreach over the array and str_replace on the query
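A rough sketch of that idea (the mapping and the column names are purely illustrative):
// Map the special characters as they appear in this UTF-8 source file
// to their Latin-1 byte values, which is what worked in the console.
$map = array(
    'ä' => chr(0xE4),
    'ö' => chr(0xF6),
    'ü' => chr(0xFC),
    'ß' => chr(0xDF),
);

$query = "SELECT Größe, Länge FROM Messwerte";   // hypothetical table and columns
$query = str_replace(array_keys($map), array_values($map), $query);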
But if the query is hardcoded, I guess what you have is fine. Also, make sure you are using the latest PHP, at least 4.4.x; there's always a chance this was fixed, but I skimmed the 4.x.x release notes and I don't see anything that relates to your problem.
The thing to remember about PHP strings is that they are streams of bytes. If you want to get the data in the correct character set (for whatever you are doing), you have to do this explicitly through some kind of function or filter. It's all pretty low-level.
Depending on your setup, you may need to know the internal character set of the strings in the database, but at the very least you need to know what character set the database is sending to PHP (because, remember, to PHP it's just a stream of bytes).
Then you have to know the target character set (and possibly specify it, which you really should anyway). For example, say that you are getting UTF-8 from the database but wish to send Latin-1 (and therefore base64- or quoted-printable-encode it, per the Content-Transfer-Encoding header):
$send_string = base64_encode(utf8_decode($database_string));
Of course in this case, you'd have to know that all the UTF-8 characters exist in the Latin-1 character set, and you probably wouldn't really want base64 (PHP unfortunately does not have a good quoted-printable encoding function, though curiously, it does have one for decoding), and if you aren't talking about UTF-8 <=> Latin-1 you'll want to whip out the mbstring functions instead.
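For the non-Latin-1 case, a minimal sketch using mbstring (the target charset here is only an example):
// Convert the database string from UTF-8 to the single-byte target charset,
// then encode the mail body for transport.
$body        = mb_convert_encoding($database_string, 'ISO-8859-15', 'UTF-8');
$send_string = base64_encode($body);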
As far as the console goes, you'd have to know what PHP is getting when you are typing in special characters from the console, which probably depends on the shell and/or PHP settings. But remember that PHP only understands strings as byte, byte, byte, and you should be able to work it out.
