I think I encountered a PHP bug.
When my header file is UTF-8 encoded and my index.php file is ANSI, PHP gives a "headers already sent" error.
Is this normal? And if yes, can you explain why?
Maybe your editor is writing a UTF-8 BOM to the beginning of the "header" file, and PHP, not knowing what the BOM is, treats it as data to output. Once that output is sent, the headers are locked, hence the error.
There's a long-standing WONTFIX bug on PHP's mishandling of the BOM. Probably your only workaround is to find an editor that writes UTF-8 without it (which, actually, is most of them).
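If you want to confirm that theory, a quick check along these lines (the file name is my assumption):
<?php
// Does the include begin with the UTF-8 BOM bytes EF BB BF?
$bytes = file_get_contents('header.php');
if (strncmp($bytes, "\xEF\xBB\xBF", 3) === 0) {
    echo "header.php starts with a UTF-8 BOM; re-save it without one.";
}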
I've seen this before. Somewhere outside the <?php ... ?> tags, the file's encoding was generating some whitespace. It was a pain and a half to track down.
Related
Since yesterday, my server is adding some garbage characters to any script in PHP. If I look at my code with view source, I see some spaces and if I display a JSON string, it is considered invalid.
If I take this simple example:
<?php
echo "hello";
?>
It displays hello, but in the source code I see a blank line before hello. The file's encoding is UTF-8 without BOM (set with Notepad++).
If I use file_get_contents to load the PHP file and then use rawurlencode before outputting the content, I get the following garbage before hello:
%EF%BB%BF%EF%BB%BF%EF%BB%BF
My first thought was that it was an encoding issue, but I checked the PHP file(s) concerned and they are all UTF-8 without BOM. The only workaround I have found is to strip this garbage each time before processing the content of a file, something like the sketch below.
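For example (hypothetical $phpFile; this is my workaround, not a proper fix):
<?php
// Strip one or more leading UTF-8 BOMs before using the content.
$content = file_get_contents($phpFile);
$content = preg_replace('/^(\xEF\xBB\xBF)+/', '', $content);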
I'm using WordPress and the problem suddenly appeared yesterday, although I had not modified any file.
Do you have any idea?
Thanks
Laurent
First off, I know this problem has been reported before, but the solutions do not apply to my case.
Here is the url
http://www.astagiudiziaria.com/beni/porzione_di_rustico_e_terreni_agricoli/index.html
The page says its charset is ISO-8859-1, but it cannot be, since the page contains the euro sign. The Chrome browser identifies it as windows-1252.
I used
$file = str_replace('charset=iso-8859-1', 'charset=utf-8', $file);
$file = iconv('windows-1252', 'UTF-8', $file);
and save it; my text editor says the file is UTF-8 encoded.
Then I use
$doc2->loadHTML($file);
$doc2->saveHTMLFile('ggg.html');
and also my text editor says it is UTF-8 encoded
But http://i-tools.org/charset says this file, ggg.html, is actually ASCII!
Nonetheless, inside it things look as expected, even though they use HTML entities, like Pr&eacute; or propriet&agrave;.
The XPath queries return garbage data: instead of Pré I get Pré, and instead of € I get €Â.
I have tried the solutions suggested around here without any success.
I think it's about how PHP deals with libxml, since in Ruby it works flawlessly (also using libxml, through the curb gem); the problem is that my client wants a PHP script.
I took a quick glance, and the way I see it, the site outputs mixed encoding.
It is iso-8859-1 with a mixed-in windows-1252 € sign (I think).
That's why the browser gets confused (but somehow copes).
No idea how you would proceed here, apart from asking them to fix their site, or alternatively doing some bit-fiddling yourself.
The "Pré is Pré" part breaks because you attempt to transcode windows-1252 -> UTF-8 what is actually iso-8859-1 data (I suppose).
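If you have to consume the page as-is, here is a hedged sketch of what I would try (my guess, not a verified fix): decode everything as windows-1252, which agrees with iso-8859-1 outside the 0x80-0x9F range, and give libxml an explicit UTF-8 hint so it ignores the stale <meta> charset:
<?php
$url  = 'http://www.astagiudiziaria.com/beni/porzione_di_rustico_e_terreni_agricoli/index.html';
$file = file_get_contents($url);

// windows-1252 differs from iso-8859-1 only in 0x80-0x9F,
// so decoding the mixed page as windows-1252 covers both.
$file = iconv('windows-1252', 'UTF-8//TRANSLIT', $file);

$doc = new DOMDocument();
libxml_use_internal_errors(true);            // the page is not clean HTML
// Prefix an encoding hint so libxml parses the buffer as UTF-8
// instead of trusting the old charset=iso-8859-1 <meta> tag.
$doc->loadHTML('<?xml encoding="UTF-8">' . $file);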
In a web app I place a <div id="xxx" contentEditable=true > for editing purposes. The encodeURIComponent(xxx.innerHTML) is sent via an Ajax POST to a server, where a PHP script creates a simple .txt file from it, which the user can then download to store locally or print on screen. It works perfectly so far, but … yes, but: character encoding is a mess. All special characters, like the German Ä, are interpreted wrongly, in this case as ä.
I have been googling for days and studying PHP methods like iconv(), and I know how to set a browser's character encoding and how to set up a text editor for the corresponding decoding. But nothing helps; it's still a mess, or becomes even weirder.
So my question is: where in this encoding/decoding roundtrip from the browser to the server and back to the browser do I have to do what, to ensure that an Ä will still be an Ä?
I'll answer my own question, because it turns out to be a different problem from the one stated above. The contenteditable is actually part of a section of HTML code. On the server side, with PHP, I need to filter out the contenteditable text, which I do via a DOMDocument like this:
$doc = new DOMDocument();
$doc->loadHTML($_POST["data"]);
then I access the elements and their textual content as usual.
Finally I save the text with
file_put_contents($txtFile, $plainText, LOCK_EX);
The saved text was then a mess, as written above. Now it turns out that you need to tell the DOMDocument which character set loadHTML() should use to interpret the data. In this case UTF-8.
First I did it as recommended in PHP, this way:
$doc = new DOMDocument('1.0', 'UTF-8');
But that doesn't help (I wonder why). Then I found this answer on SO. The final solution is this:
$doc->loadHTML('<?xml encoding="UTF-8">' . $_POST["data"]);
Though it works, it is a trick. So the question remains: how do you do it the right way? If somebody has the definitive answer, they are very welcome.
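For completeness, here are the pieces above assembled into one runnable sketch (the $txtFile path and the element id are assumptions on my part):
<?php
// Parse the posted HTML as UTF-8, pull out the editable div's
// text content, and save it as a plain-text file.
$doc = new DOMDocument();
$doc->loadHTML('<?xml encoding="UTF-8">' . $_POST["data"]);

$div = $doc->getElementById('xxx');          // id of the contenteditable div
$plainText = ($div !== null) ? $div->textContent : '';

$txtFile = '/tmp/content.txt';               // hypothetical path
file_put_contents($txtFile, $plainText, LOCK_EX);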
You need to make sure that the content is encoded consistently throughout its roundtrip from user input to server-side storage and back to the browser again.
I would recommend using UTF-8. Check that your HTML document (which includes the contenteditable zone) is UTF-8 encoded, and that the XMLHttpRequest/Ajax request does not specify a different encoding when it sends the content to the server.
Check that your server-side application encodes the text file as UTF-8 also. And check that the HTTP response headers declare the file's encoding as UTF-8 when the file is requested and downloaded in the browser.
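For instance, a hedged sketch of those declarations (header values assumed, not taken from your code):
<?php
// On the page that contains the contenteditable div:
header('Content-Type: text/html; charset=utf-8');

// On the script that serves the saved text file back:
header('Content-Type: text/plain; charset=utf-8');
header('Content-Disposition: attachment; filename=notes.txt');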
Somewhere along this path, the encoding differs, and that is what is causing the error. iconv converts between different encodings, which should not be necessary if everything is consistent.
Good luck!
I request your help setting up an Apache server on CentOS. It looks like some encoding issue, but I have not been able to resolve it yet.
Instead of rendering the HTML content, it displays the HTML source (in Chrome and Firefox); IE 9 works fine. It displays a � character after each "<" symbol.
http://pdf.gen.in/index1.htm
The second problem is with PHP. It displays the source code of PHP at http://pdf.gen.in/index.php with similar diamond characters wherever it encounters a "<" character. The PHP issue seems related to the first one.
Those files are encoded in UTF-16LE. For the static HTML page, you might be able to get it to work by setting the charset correctly in the MIME type (it's currently text/html; charset=UTF-8). I don't know how strong PHP's Unicode support is, so try using UTF-8 instead; it is generally better supported, since its first 128 code points coincide with ASCII.
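If re-encoding the files is an option, here is a hedged sketch of doing it in PHP (file name assumed, and assuming the mbstring extension is available):
<?php
// Re-encode a UTF-16LE file as UTF-8 and drop the leftover BOM.
$src  = file_get_contents('index1.htm');                // UTF-16LE bytes
$utf8 = mb_convert_encoding($src, 'UTF-8', 'UTF-16LE'); // any BOM becomes EF BB BF
$utf8 = preg_replace('/^\xEF\xBB\xBF/', '', $utf8);
file_put_contents('index1.htm', $utf8);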
You should use a decent text editor and always save PHP/HTML files as "UTF-8 without BOM".
Create a file named "test.php", paste the code below, and save it with "UTF-8 without BOM" encoding; then it will work just fine.
<?php
phpinfo();
?>
I am not that good with encodings, and I am falling over the basics here.
I am trying to create a file that is recognised as UTF-8:
header("Content-Type: text/plain; charset=utf-8");
header("Content-disposition: attachment; filename=test.txt");
echo "test";
exit();
I also tried:
header("Content-Type: text/plain; charset=utf-8");
header("Content-disposition: attachment; filename=test.txt");
echo utf8_encode("test");
exit();
I then open the file with Notepad++ and it says the current encoding is ANSI, not UTF-8. What am I missing? How should I be outputting this file?
I will eventually be outputting an XML file of products for the Affiliate Window program.
Also, if it helps, my webserver is CentOS, Apache2, PHP 5.2.8.
Thanks in advance for any help!
As Filip said, encoding is not an intrinsic attribute of a file; it's implicit. This means that unless you know what encoding a file is to be interpreted in, there is no way to determine it. The best you can do is make a guess. This is presumably what programs such as Notepad++ do. Since the actual data that you have sent can be interpreted in many different encodings, it just picks the candidate it likes best. For Notepad++ this appears to be ANSI (which is in itself a rather inaccurate classification), while other programs might default to something else.
The reason why you have to specify the charset in an HTTP header is exactly that the file itself doesn't contain this information, so the browser needs to be informed about it. Once you have saved the file to disk, this information is thus unavailable.
If the file you're going to serve is an XML document, you have the option of putting the encoding information inside the actual document. That way it is preserved after the file is saved to disk. E.g., if you are using UTF-8, you should put this at the top of your document:
<?xml version="1.0" encoding="utf-8" ?>
Note that apart from getting the meta-information about the charset across, you also need to make sure that the data you are serving actually is UTF-8 encoded. This is much the same scenario: you need to know implicitly what encoding your data is in. The function utf8_encode is (despite the name) explicitly meant for converting iso-8859-1 into utf-8. Thus, if you use it on already UTF-8-encoded data, you'll get it double-encoded, with garbled data as the result.
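A two-line illustration of that double-encoding trap (assuming the script file itself is saved as UTF-8):
<?php
$s = "é";              // already UTF-8: the two bytes C3 A9
echo utf8_encode($s);  // re-encodes them as if they were iso-8859-1 -> "Ã©"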
Charsets aren't that complicated in themselves. The problem is that if you aren't careful about keeping things straight, you'll mess it up. Whenever you have a string, you should be absolutely certain you know which encoding it is in. Otherwise it's not a string; it's just a blob of binary data.
"test" is all ASCII, so there is no need to use UTF-8 for that.
But in fact, the first 128 characters of the Unicode charset are the same as ASCII's, and UTF-8 uses the same codes for those characters as ASCII does. See Wikipedia's description of UTF-8 for further information.
Once you download the file, it no longer carries information about its encoding, so Notepad++ has to guess it from the contents. There's a thing called the Byte Order Mark (BOM) which allows the UTF encodings to be signalled by a prefix in the contents.
See question "When a BOM is used, is it only in 16-bit Unicode text?".
I would imagine that using something like echo "\xEF\xBB\xBF"; before writing the actual contents will force Notepad++ to recognise the file correctly.
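Combined with the headers from the question, a minimal sketch:
<?php
header("Content-Type: text/plain; charset=utf-8");
header("Content-Disposition: attachment; filename=test.txt");
echo "\xEF\xBB\xBF";   // UTF-8 BOM, so editors like Notepad++ guess UTF-8
echo "test";
exit();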
There is no such thing as headers for a downloaded .txt file; once it is saved, the header information is gone. Since you are going to create XML files in the end anyway, and you can specify the charset in the XML declaration, try creating a simple XML structure and saving/opening that; then it should work, as long as the OS has UTF-8 support, which any modern Linux distribution should have.
I refer you to Joel's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets".
I refer you to What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Work With Text