MySQL text data truncated on insert by weird character encodings - PHP

I'm importing data from a CSV file that comes from Excel, but I can't seem to insert my data correctly. The data contains French accented characters, and if I open the CSV with OpenOffice (I don't use Excel), I just select UTF-8 and the data is converted and displayed fine.
If I read the file into PHP, mb_detect_encoding() tells me the strings are UTF-8 encoded. I connect to the database and set every charset to UTF-8 using:
mysql_query('SET character_set_results = "utf8", character_set_client = "utf8", character_set_connection = "utf8", character_set_database = "utf8", character_set_server = "utf8"');
And I can confirm that my database contains only UTF-8 tables and fields.
What happens is that my content gets truncated at the first accented character, but only in my PHP script, it seems. If I output the INSERT statement to the browser, copy it, and run it by hand, the whole string is inserted.
Something may be going on between PHP and the browser output, but I can confirm it's not in the script's logic. So far I've been able to work around the issue by running all my data through htmlentities(), but that drives my search engine crazy.
Any hint or workaround you can spare would be really appreciated.
EDIT #1:
I searched for Excel's default encoding for CSV data and found out it is CP1252. I tried iconv('CP1252', 'UTF-8//TRANSLIT', $data) and now the accented characters come out right. I'm going to apply it everywhere in my script to see if all my accented-character issues are fixed, and will post the solution if so.

After countless tries, I was able to fix all my encoding problems, though for some of them I still don't know the cause. I hope this will help someone else later:
function fixEncoding($data) {
    // Recode from Excel's CP1252 to UTF-8
    return iconv('CP1252', 'UTF-8//TRANSLIT', $data);
}
I now use this function to recode my strings correctly. It turns out Excel saves CSV data as CP1252, NOT UTF-8.
Furthermore, there seems to be a bug with accented characters at the start of a string in a CSV when you use fgetcsv, so I had to forgo fgetcsv and write an alternative. I'm not on PHP 5.3, so I couldn't try str_getcsv, which might have fixed my issue; I even looked for backports, and nothing I found worked correctly.
This is my solution; although very ugly, it works for me:
function fgetcsv2($filepointer, $maxlen, $sep, $enc) {
    $data = fgets($filepointer, $maxlen);
    if ($data === false) {
        return false;
    }
    return explode($sep, $data);
}
Good luck to all who run into similar problems.

I also had to work on such a project and, seriously, PHPExcel was my savior; it spared me a lot of headaches.
P.S.: there is also this link to help you get started (in French).

I have just had a similar problem: although I tested the $value using mb_detect_encoding() and it said it was UTF-8, the data was still truncated.
Not knowing what to convert from, I couldn't use the iconv function mentioned above.
However, I forced it to UTF-8 using utf8_encode($value) and everything works fine now.

Which encoding are you using for your tables?
mb_detect_encoding() is not 100% correct all the time, and no encoding detector can ever be.
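If the goal is just to decide whether a string is safe to treat as UTF-8, validating is more reliable than detecting; a small sketch:

```php
<?php
// mb_detect_encoding() has to guess; mb_check_encoding() merely validates,
// which is a question with a definite answer.
$valid   = "café";      // well-formed UTF-8 (this file is saved as UTF-8)
$invalid = "caf\xE9";   // é as a single CP1252/latin1 byte

var_dump(mb_check_encoding($valid, 'UTF-8'));   // bool(true)
var_dump(mb_check_encoding($invalid, 'UTF-8')); // bool(false)
```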


UTF-8, binary data and special characters issue while reading a CSV file in Laravel

I am using the League/CSV Laravel package to read and manipulate a CSV file and save its data into a database, but I am facing issues with some rows only, those that contain special characters such as "45.6 ºF".
I have searched a lot about this problem and found that the database collation should be "utf8" or "utf8mb4" and that the CSV should be saved as UTF-8 too, but that only works for the special characters that are on the keyboard.
I want to support all kinds of special characters, including ones like "45.6 ºF" that are not on the keyboard.
Currently, my code reads a CSV column value and converts it into a binary string, shown as b"column value": the "b" prefix appears only for strings that contain special characters.
I have spent a lot of time on this but could not find a better solution, so please help me; I would be very thankful.
$reader = Reader::createFromPath(public_path().'/question.csv', 'r');
$reader->setHeaderOffset(0);
$records = $reader->getRecords();
foreach ($records as $offset => $record) {
    $qs = Question::first();
    $qs->question = $record['Question'];
    $qs->save();
}
It gives me this result after reading from the CSV, with the "b" prefix:
array:2 [
  "ID" => "1"
  "Question" => b"Fahrenheit to Celsius (ºF to ºC) conversion calculator for temperature conversions with additional tables and formulas"
]
but it should be in string format, without the "b" binary prefix.
If I copy that string with the special characters and assign it to a static variable, it works fine and the data is saved into the database, like this:
$a="Fahrenheit to Celsius (ºF to ºC) conversion calculator for temperature conversions with additional tables and formulas";
$qs = Question::first();
$qs->question = $a;
$qs->save();
After a lot of struggle, I found the solution to this problem.
I just added this line to re-encode the record with utf8_encode before saving it to the database:
$r = array_map("utf8_encode", $record);
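For context, utf8_encode() treats every input byte as latin1 and re-encodes it as UTF-8, which is why it fixes values like "45.6 ºF" (º is the single byte 0xBA in latin1). A small sketch of what that array_map() call does to a record (the field names mirror the question):

```php
<?php
// Sketch: a record as a CSV reader might return it when the file is latin1.
// 0xBA is the latin1 byte for º.
$record = ['ID' => '1', 'Question' => "45.6 \xBAF to \xBAC"];

// Re-encode every field from latin1 to UTF-8 before saving it
$r = array_map('utf8_encode', $record);

echo $r['Question']; // 45.6 ºF to ºC
```

This is only correct when the CSV really is latin1; if it is already UTF-8, utf8_encode() will double-encode it.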
Don't just copy-paste text from Google to save into the database; copy-pasted text with special characters doesn't work most of the time.
Thanks.
Do not use any conversion routines; they only lead to "two wrongs accidentally making a right".
Given the existence of MySQL's LOAD DATA INFILE, do you even need fgetcsv? Simply execute the LOAD DATA SQL command with a suitable character set specified in the command; its value should match the encoding of the CSV file. If in doubt, get the hex of º from the file:
hex BA --> character set latin1
hex C2BA --> character set utf8 (or utf8mb4)
The column in the database table can be latin1 or utf8 or utf8mb4. The conversion, if needed, will happen during the LOAD.
The degree sign is one of the few special characters that exists in both charsets, so if you have others, latin1 may not be a viable option. (utf8/utf8mb4 has lots more special characters.)
The current b"..." values may be making things worse by shoehorning C2BA into a latin1 column, leading to Mojibake: Âº instead of º.
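A sketch of what such a LOAD statement could look like; the table and column names are invented, and LOCAL plus the file path depend on your setup (local_infile must be enabled):

```php
<?php
// Hypothetical table/columns; the important part is CHARACTER SET,
// which must match the file: latin1 if º is hex BA, utf8mb4 if C2 BA.
$sql = "LOAD DATA LOCAL INFILE '/tmp/question.csv'
        INTO TABLE questions
        CHARACTER SET latin1
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
        LINES TERMINATED BY '\\n'
        IGNORE 1 LINES
        (id, question)";
// $pdo->exec($sql);  // assuming an established PDO connection
```

MySQL converts from the stated file charset to the column charset during the LOAD, so no PHP-side conversion is needed.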

Stuck writing UTF-8 file via PHP's fwrite

I can't figure out what I'm doing wrong. I'm getting file content from the database; when I echo the content, everything displays just fine, but when I write it to a file (.html) it breaks. I've tried iconv and a few other solutions, but I just don't understand what I should put for the first parameter; I've tried blanks, and that didn't work very well either. I assume it's coming out of the DB as UTF-8, since it echoes properly. I've been stuck a little while now without much luck.
function file($fileName, $content) {
    if (!file_exists("out/".$fileName)) {
        $file_handle = fopen(DOCROOT . "out/".$fileName, "wb") or die("can't open file");
        fwrite($file_handle, iconv('UTF-8', 'UTF-8', $content));
        fclose($file_handle);
        return TRUE;
    } else {
        return FALSE;
    }
}
Source of the HTML file: it comes out of the DB like this:
<h5>Текущая стабильная версия CMS</h5>
and goes into the file like this:
<h5>Ð¢ÐµÐºÑƒÑ‰Ð°Ñ ÑÑ‚Ð°Ð±Ð¸Ð»ÑŒÐ½Ð°Ñ Ð²ÐµÑ€ÑÐ¸Ñ CMS</h5>
EDIT:
Turns out the root of the problem was Apache serving the files incorrectly. Adding
AddDefaultCharset utf-8
to my .htaccess file fixed it. Hours wasted... At least I learned something, though.
Edit: the database encoding does not seem to be the issue here, so this part of the answer is retained for information only.
I assume it's coming out of the DB as UTF-8
This is most likely your problem. What database type do you use? Have you set the character encoding and collation details for the database, the table, the connection, and the transfer?
If I were to hazard a guess, I would say your table is MySQL and that your MySQL collation for the database / table / column should all be utf8_general_ci?
However, for some reason MySQL's utf8 is not actually UTF-8: it stores its data in up to 3 bytes rather than 4, so it cannot store the whole UTF-8 character set; see UTF-8 all the way through.
So you need to go through every table and column in your MySQL database and change it from utf8_ to utf8mb4_ (note: available since MySQL 5.5.3), which is the 4-byte encoding that covers the whole spectrum of UTF-8 characters.
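A sketch of that conversion for a single table; the table name my_table is invented, and the statement would be repeated for every table:

```php
<?php
// Sketch: convert one table from MySQL's 3-byte utf8 to utf8mb4.
// CONVERT TO also converts the charsets of the individual columns.
$sql = "ALTER TABLE my_table
        CONVERT TO CHARACTER SET utf8mb4
        COLLATE utf8mb4_unicode_ci";
// $pdo->exec($sql);  // assuming an established PDO connection
```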
Also, if you do any PHP work on the data strings, be aware that you should be using the mb_ PHP functions for multibyte encodings.
And finally, you need to specify a connection character set for the database; don't run with the default one, as it will almost certainly not be utf8mb4. Otherwise you can have the correct data in the database, but that data is repackaged as 3-byte utf8 before being treated as 4-byte UTF-8 by PHP at the other end.
Hope this helps, and if your DB is not MySQL, let us know what it is!
Edit:
function file($fileName, $content) {
    if (!file_exists("out/".$fileName)) {
        $file_handle = fopen(DOCROOT . "out/".$fileName, "wb") or die("can't open file");
        fwrite($file_handle, iconv('UTF-8', 'UTF-8', $content));
        fclose($file_handle);
        return TRUE;
    } else {
        return FALSE;
    }
}
Your $file_handle is opened inside an if statement that will only run if the file does not already exist.
Your iconv is worthless here, converting from "utf-8" to, er, "utf-8". Character-set detection is extremely haphazard and hard for programs to do correctly, so it's generally advised not to try to work out / guess what a character encoding is; you need to know what it is and tell the function.
The comment by Dean is actually very important. The HTML should have a <meta charset="UTF-8"> inside <head>.
That iconv call is actually not useful, and if you are right that you are getting your content as UTF-8, it is not necessary.
You should check the character set of your database connection. Your database can be encoded in UTF-8 but the connection could be in another character set.
Good luck!

Encoding MySQL, PHP and CSV file

I have a MySQL database with some Chinese keywords that I need to compare against keywords in a CSV file using PHP.
I seem to have a problem with the encoding: when I compare two keywords I know are the same (with Chinese characters), the script says they are different.
I use "SET NAMES utf8" at the beginning of the script for the database.
The collation for the keyword field on the table is utf8_bin.
In the script I also used:
mb_internal_encoding("UTF-8");
header('Content-Type: text/xml, charset=UTF-8; encoding=UTF-8');
I read the CSV file with:
($data = fgetcsv($handle, 1000, ",")) !== FALSE
and my comparison is like this:
$database_keyword == $csv_keyword
Regarding the CSV file, I have used Notepad++ to try to change the encoding, but it is still not working.
Thanks a lot.
Edit: I am on Windows 7.
EDIT: ADDING SOLUTION
It might help someone out there: I found that my issue was caused by the BOM being included in the strings read from the CSV file.
I managed to remove it by using this function:
private function rmBOM($string) {
    if (substr($string, 0, 3) == pack('CCC', 0xef, 0xbb, 0xbf)) {
        $string = substr($string, 3);
    }
    return $string;
}
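The same BOM-stripping idea as a standalone sketch (declared as a plain function here, since the original is a private method):

```php
<?php
// Strip a UTF-8 byte-order mark from the start of a string, if present.
function stripBom($string) {
    if (substr($string, 0, 3) == pack('CCC', 0xEF, 0xBB, 0xBF)) {
        $string = substr($string, 3);
    }
    return $string;
}

// Only the very first field of the very first CSV row can carry the BOM,
// so cleaning that one value after fgetcsv() is enough.
echo stripBom("\xEF\xBB\xBFkeyword"); // keyword
```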
I have limited experience on Windows since I usually work on Linux, but I have worked on integration projects dealing with different charsets (encodings).
Be sure the database connection is in UTF-8 mode. Have a look at this document.
Be sure the CSV file is in UTF-8. You can force the conversion with the iconv() function.

Accents in uploaded file being replaced with '?'

I am building a data import tool for the admin section of a website I am working on. The data is in both French and English and contains many accented characters. Whenever I attempt to upload a file, parse the data, and store it in my MySQL database, the accents are replaced with '?'.
I have text files containing data (charset is ISO-8859-1) which I upload to my server using CodeIgniter's file upload library. I then read the file in PHP.
My code is similar to this:
$this->upload->do_upload();
$data = array('upload_data' => $this->upload->data());
$fileHandle = fopen($data['upload_data']['full_path'], "r");
while (($line = fgets($fileHandle)) !== false) {
    echo $line;
}
This produces lines with accents replaced with '?'. Everything else is correct.
If I download my uploaded file from the server over FTP, the charset is still ISO-8859-1, but a diff reveals that the file has changed. However, if I open the file in TextEdit, it displays properly.
I attempted to use PHP's stream_encoding method to explicitly set my file stream to iso-8859-1, but my build of PHP does not have the method.
After running out of ideas, I tried wrapping my strings in both utf8_encode and utf8_decode. Neither worked.
If anyone has any suggestions about things I could try, I would be extremely grateful.
It's important to see whether the corruption happens before or after the query is issued to MySQL. There are too many possible causes here to pinpoint it. Are you able to output your MySQL query to check this?
Assuming that your query IS properly formed (no corruption at the stage the query is output), there are a couple of things you should check.
What is the character encoding (collation) of the database itself?
What is the charset of the connection? This may not be set up correctly in your MySQL config and can be set manually using the 'SET NAMES' command.
In my own application I issue 'SET NAMES utf8' as my first query after establishing a connection, as I am unable to change the MySQL config.
See this.
http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
Edit: if the issue is not related to MySQL, I'd check the following:
You say the charset of the file is ISO-8859-1; can I ask how you are sure of this?
What happens if you save the file itself as UTF-8 (without BOM) and try to reprocess it?
What is the encoding of the PHP file that is performing the conversion? (What are you using to write your PHP? It may be 'managing' this for you in an undesired way.)
(An aside) Are the files you are processing suitable for fgetcsv instead?
http://php.net/manual/en/function.fgetcsv.php
Files uploaded to your server should come back unchanged on download. That means the encoding of the file (which is just a bunch of binary data) should not be changed; instead, you should take care that you are able to store the binary information of that file unchanged.
To achieve that with your database, create a BLOB field. That's the right column type for it: it's just binary data.
Assuming you're using MySQL, this is the reference: The BLOB and TEXT Types; look for BLOB.
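A minimal sketch of such a column; the table and column names are invented:

```php
<?php
// Sketch only: BLOB stores bytes verbatim, so no character-set
// conversion is ever applied to the uploaded file's data.
$sql = "CREATE TABLE uploads (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            data BLOB NOT NULL
        )";
// $pdo->exec($sql);  // assuming an established PDO connection
```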
The problem is that you are using ISO-8859-1 instead of UTF-8. To re-encode into the correct charset, you can use the iconv function, like so:
$output_string = iconv("iso-8859-1", "utf-8//TRANSLIT", $input_string);
ISO-8859-1 cannot represent many of the special characters that UTF-8 can.
It would be so much better if everything were utf-8, as it handles virtually every character known to man.

fwrite() and UTF8

I am creating a file using PHP's fwrite() and I know all my data is in UTF-8 (I have done extensive testing on this: when saving the data to the DB and outputting it on a normal web page, everything works fine and reports as UTF-8), but I am being told the file I am outputting contains non-UTF-8 data :( Is there a command in bash (CentOS) to check the encoding of a file?
When using vim, it shows the content as:
Donâ~#~Yt do anything .... Itâ~#~Ys a
great site with
everything....Weâ~#~Yve only just
launched/
Any help would be appreciated, either confirming the file is UTF-8 or explaining how to write UTF-8 content to a file.
UPDATE
To clarify how I know my data is in UTF-8, I have done the following:
The DB is set to UTF-8. When saving data to the database I run this first:
$enc = mb_detect_encoding($data);
$data = mb_convert_encoding($data, "UTF-8", $enc);
Just before I run fwrite, I check the data with the following; each piece of data returns 'IS utf-8':
if (strlen($data)==mb_strlen($data, 'UTF-8')) print 'NOT UTF-8';
else print 'IS utf-8';
Thanks!
If you know the data is in UTF-8, then you want to set up the file's header.
I wrote a solution answering another thread.
The solution is the following: as the UTF-8 byte-order mark is \xef\xbb\xbf, we should add it to the start of the document.
<?php
function writeStringToFile($file, $string){
    $f = fopen($file, "wb");
    $string = "\xEF\xBB\xBF" . $string; // this is what makes the magic
    fputs($f, $string);
    fclose($f);
}
?>
You can adapt it to your code; basically, you just want to make sure that you write a UTF-8 file (as you said you know your content is UTF-8 encoded).
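The same idea as a standalone, testable sketch; the function name and the file name utf8demo.txt are arbitrary:

```php
<?php
// Write a string to a new file with the UTF-8 BOM prepended.
function writeStringWithBom($file, $string) {
    $f = fopen($file, "wb");
    fputs($f, "\xEF\xBB\xBF" . $string); // BOM first, then the content
    fclose($f);
}

writeStringWithBom('utf8demo.txt', "Don’t");
$bytes = file_get_contents('utf8demo.txt');
echo bin2hex(substr($bytes, 0, 3)); // efbbbf
```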
fwrite() is not guaranteed to be binary safe on every platform. That means that your data, be it correctly encoded or not, might get mangled by this command or its underlying routines.
To be on the safe side, you should use fopen() with the binary mode flag, b. Afterwards, fwrite() will save your string data "as-is", and in PHP that is binary data, because strings in PHP are binary strings.
Background: some systems distinguish between text and binary data. The binary flag explicitly commands PHP on such systems to use binary output. When you deal with UTF-8 you should take care that the data does not get mangled; that is prevented by handling the string data as binary data.
However: if, unlike what you said in your question, the UTF-8 encoding of the data is not preserved, then your encoding got broken, and even binary-safe handling will keep the broken state. Still, with the binary flag you at least ensure that it is not the fwrite() part of your application that is breaking things.
It has rightly been written in another answer here that you cannot know the encoding if you have only the data. However, you can check whether data validates as UTF-8, giving you at least some chance of checking the encoding. I've posted a PHP function which does this in a UTF-8 related question, so it might be of use if you need to debug things: see the answer to "SimpleXML and Chinese" and look for can_be_valid_utf8_statemachine; that's the name of the function.
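Besides a hand-rolled state machine, PHP's built-ins can validate UTF-8 directly; a small sketch:

```php
<?php
// Validate UTF-8 without a custom state machine.
$good   = "Don\xE2\x80\x99t";      // "Don’t", valid UTF-8
$broken = substr($good, 0, 5);     // cut inside the 3-byte ’ sequence

var_dump(mb_check_encoding($good, 'UTF-8'));    // bool(true)
var_dump(mb_check_encoding($broken, 'UTF-8'));  // bool(false)

// The PCRE /u modifier also rejects invalid UTF-8 subjects:
var_dump(preg_match('//u', $good));    // int(1)
var_dump(preg_match('//u', $broken));  // bool(false)
```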
// add BOM to fix UTF-8 in Excel
fputs($fp, $bom = (chr(0xEF) . chr(0xBB) . chr(0xBF)));
I find this piece works for me :)
The problem is that your data is double-encoded. I assume your original text is something like:
Don’t do anything
with ’, i.e., not the straight apostrophe but the right single quotation mark.
If you write a PHP script with this content, encoded in UTF-8:
<?php
//File in UTF-8
echo utf8_encode("Don’t"); //this will double encode
You will get something similar to your output.
$handle = fopen($file, "w");
fwrite($handle, pack("CCC", 0xef, 0xbb, 0xbf)); // write the UTF-8 BOM first
fwrite($handle, $data); // then the actual content
fclose($handle);
"I know all my data is in UTF8" - wrong.
Encoding is not something stored in the file itself, so check the charset in the headers of the page you are taking the data from:
header("Content-type: text/html; charset=utf-8");
And check whether the data really is in a multi-byte encoding:
if (strlen($data)==mb_strlen($data, 'UTF-8')) print 'not UTF-8';
else print 'utf-8';
One possible reason: the information you get from the database is not UTF-8 in the first place.
If you are sure that it is, use this; I always use it and it works:
$file= fopen('../logs/logs.txt','a');
fwrite($file,PHP_EOL."_____________________output_____________________".PHP_EOL);
fwrite($file,print_r($value,true));
The only thing I had to do was add a UTF-8 BOM to the CSV; the data was correct, but the file reader (an external application) couldn't read the file properly without the BOM.
Try this simple method, which is often useful: add this to the top of the page, before the <body> tag:
<head>
<meta charset="utf-8">
</head>
