strange character inserted in table utf8 - php

I have a site developed in CodeIgniter where I'd like to insert a record into a table with utf8 fields in my database.
The problem is that when I insert something into the table, I see this:
�"�h�t�t�p�:�/�/�I�m�a�g�e�1�.�u�r�l�f�o�r�i�m�a�g�e�s�.�c�o�m�/�D�e�f�a�u�l�t�/�8�8�0�4�/�2�3�6�9�2�2�6�2�-�1�8�4�3�3�8�5�2�6�6�.�j�p�g�"�
There are many more characters like that. The real string is a simple path. I don't know the encoding of the string, because it comes from an external server.
This is my query to insert the record. I take the string from the XML, and if I print it on the page I see the correct string. The problem only occurs when I check inside the database:
foreach ($img->childNodes as $node) {
    $data = array(
        'image' => $node->getAttribute('path'),
    );
    $this->db->insert('hotel_images', $data);
}

That data is not UTF-8. It is UCS-2 or UTF-16. UCS-2 is a subset of UTF-16, so treating it as UTF-16 should do the trick.
You can convert it using iconv:
$data = iconv("UTF-16", "UTF-8", $data);
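As a sketch of how that conversion could be wired into the insert loop (the NUL-byte check is an assumption: UTF-16 text made of ASCII characters always contains NUL bytes, while a valid UTF-8 path never does):

```php
<?php
// Sketch: normalise a possibly-UTF-16 value to UTF-8 before inserting.
// The NUL-byte check is a heuristic, not a general encoding detector.
function toUtf8(string $value): string
{
    if (strpos($value, "\0") === false) {
        return $value; // no NUL bytes: treat as already UTF-8/ASCII
    }
    // Plain "UTF-16" honours a BOM if present, else assumes big-endian.
    $converted = iconv('UTF-16', 'UTF-8', $value);
    return $converted === false ? $value : $converted;
}
```

Then the loop would use `'image' => toUtf8($node->getAttribute('path'))`.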

Related

JSON creating from PHP giving wrong data?

I have a PHP form that I use to enter data into the database (phpMyAdmin), and I use a SELECT query to display all the values from the database in the PHP form.
I also have another PHP file which I use to create JSON from the same db table.
When I enter text in a foreign language such as "Experiența personală:", the value saved in the DB is "ExperienÈ›a personală: ", but when I use the select query to display it in the same PHP form it comes out correctly as "Experiența personală:". So the DB is correct, and I am now using the following PHP code to create the JSON:
<?php
$servername = "localhost";
$username = "root";
$password = "root";
$dbname = "aaps";

// Create connection
$con = mysqli_connect($servername, $username, $password, $dbname);

// Set the connection character set
mysqli_set_charset($con, 'utf8');
//echo "connected";

$rslt = mysqli_query($con, "SELECT * FROM offers");
while ($row = mysqli_fetch_assoc($rslt)) {
    $taxi[] = array(
        'code'    => $row["code"],
        'name'    => $row["name"],
        'contact' => $row["contact"],
        'url'     => $row["url"],
        'details' => $row["details"],
    );
}

header("Content-type: application/json; charset=utf-8");
echo json_encode($taxi);
?>
and the JSON looks like:
[{"code":"CT1","name":"Experien\u00c8\u203aa personal\u00c4\u0192: ","contact":"4535623643","url":"images\/offers\/event-logo-8.jpg","details":"Experien\u00c8\u203aa personal\u00c4\u0192: jerhbehwgrh 234234 hjfhjerg#$%$#%#4"},{"code":"ewrw","name":"Experien\u00c8\u203aa personal\u00c4\u0192: ","contact":"ewfew","url":"","details":"eExperien\u00c8\u203aa personal\u00c4\u0192: Experien\u00c8\u203aa personal\u00c4\u0192: Experien\u00c8\u203aa personal\u00c4\u0192: "},{"code":"Experien\u00c8\u203aa personal\u00c4\u0192: ","name":"Experien\u00c8\u203aa personal\u00c4\u0192: ","contact":"","url":"","details":"Experien\u00c8\u203aa personal\u00c4\u0192: "}]
Here "\u00c8\u203a" is wrong; it is supposed to be "\u021b" (ț).
So the PHP that creates the JSON is causing this issue.
But I am unable to find out exactly why it comes out like this. Please help.
Avoid Unicode -- note the extra argument:
json_encode($s, JSON_UNESCAPED_UNICODE)
Don't use utf8_encode/decode.
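A sketch of the flag's effect (the array key is made up; the value is taken from the question):

```php
<?php
// JSON_UNESCAPED_UNICODE keeps multibyte characters literal instead of
// emitting \uXXXX escapes.
$row = array('name' => 'Experiența personală');
echo json_encode($row), "\n";
// {"name":"Experien\u021ba personal\u0103"}
echo json_encode($row, JSON_UNESCAPED_UNICODE), "\n";
// {"name":"Experiența personală"}
```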
ă turning into Äƒ is Mojibake. It probably means that
The bytes you have in the client are correctly encoded in utf8 (good).
You connected with SET NAMES latin1 (or set_charset('latin1') or ...), probably by default. (It should have been utf8.)
The column in the tables may or may not have been CHARACTER SET utf8, but it should have been that.
If you need to fix the data, it takes a "2-step ALTER", something like:
ALTER TABLE Tbl MODIFY COLUMN col VARBINARY(...) ...;
ALTER TABLE Tbl MODIFY COLUMN col VARCHAR(...) ... CHARACTER SET utf8 ...;
Before making any changes, do
SELECT col, HEX(col) FROM tbl WHERE ...
With that, ă should show hex of C483. If you see C384C692, you have "double-encoding", which is messier to fix.
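The same check can be previewed in PHP without touching the database (a sketch; it assumes the Mojibake went through a latin1/Windows-1252 connection):

```php
<?php
// What HEX(col) should show for "ă" when stored correctly, versus the
// double-encoded form produced by reading its UTF-8 bytes (C4 83) as
// Windows-1252 and re-encoding them as UTF-8.
$ok      = strtoupper(bin2hex('ă'));
$doubled = strtoupper(bin2hex(mb_convert_encoding('ă', 'UTF-8', 'Windows-1252')));
echo $ok, "\n", $doubled, "\n"; // C483, then C384C692
```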
Depending on the version of MySql in the database, it may not be using the full utf-8 set, as stated in the documentation:
The ucs2 and utf8 character sets do not support supplementary characters that lie outside the BMP. Characters outside the BMP compare as REPLACEMENT CHARACTER and convert to '?' when converted to a Unicode character set.
This, however, is not likely to be related to your problem. I would try a couple of different things and see if it solves your problem.
use SET NAMES utf8
You can read more about that here
use utf8_encode() when inserting data to the database, and utf8_decode() when extracting. That way, you don't have to worry about MySql manipulating the unicode characters. Documentation
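For reference, what those two functions do is a Latin-1/UTF-8 round trip; a sketch using mb_convert_encoding() (utf8_encode/utf8_decode only ever handled ISO-8859-1 and are deprecated as of PHP 8.2):

```php
<?php
// Round trip between UTF-8 and ISO-8859-1 for a single character.
$utf8   = 'é';                                                // UTF-8 bytes C3 A9
$latin1 = mb_convert_encoding($utf8, 'ISO-8859-1', 'UTF-8');  // single byte E9
$back   = mb_convert_encoding($latin1, 'UTF-8', 'ISO-8859-1');
```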

Datatables mysql charset circumflex

I have a MySQL database with latin1 as the default character set.
Through PHP I save strings into a MySQL table. From the table I parse some data into tables using DataTables.
Everything works OK, but now I have some problems with circumflex letters and FPDF.
So if I save the string "račun" in the table, the result in the table will be "raÄun".
Or the string "število" will be "Å¡tevilo".
OK -> DataTables decodes those words back normally.
But now when I use FPDF, it gets those strings as they are stored in the MySQL table and prints them "encoded".
I tried
iconv("ISO-8859-1", "ISO-8859-2", "Števika računa")
and
utf8_decode("Števika računa")
But nothing worked. Does anybody have an idea what I should do?
In the end, this was the solution:
http://isabelcastillo.com/international-characters-encoding-fpdf
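The gist, as a sketch: FPDF's built-in fonts are single-byte, so UTF-8 text coming from the database has to be transcoded before being passed to FPDF. ISO-8859-2 covers the Slovenian characters here; the //TRANSLIT suffix and the FPDF call are assumptions for illustration:

```php
<?php
// Transcode a UTF-8 string (from the question) to ISO-8859-2 before
// handing it to FPDF, whose core fonts are single-byte.
$utf8   = 'Števika računa';
$forPdf = iconv('UTF-8', 'ISO-8859-2//TRANSLIT', $utf8);
// $pdf->Cell(0, 10, $forPdf);   // hypothetical FPDF call
```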

String is not saved correctly in database

I am trying to save a string in the database and get something like this:
&#1055&#1077&#1088&#1080&#1086&#1076&#32&#1076&#10
The string that I want to save is: Период действия S...
The table collation is cp1251_general_ci.
I don't know which encoding the string is in - I am getting it from an Excel document.
I tried this, but it didn't help:
$nomer = iconv('UTF-8','Windows-1251', $str );
Is there a solution for this?
You haven't mentioned which database engine you're using, but try changing the table collation to something like utf8_general_ci or utf8_unicode_ci.
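If already-mangled rows need rescuing: the stored text looks like numeric character references with the trailing semicolons dropped, and a hypothetical recovery could decode each &#NNNN group back to UTF-8 (sample data from the question):

```php
<?php
// Decode "&#NNNN"-style references (semicolons missing) back to UTF-8.
// 1055 = П, 1077 = е, 1088 = р, 1080 = и, 1086 = о, 1076 = д.
$stored  = '&#1055&#1077&#1088&#1080&#1086&#1076';
$decoded = preg_replace_callback('/&#(\d+);?/', function ($m) {
    return mb_chr((int) $m[1], 'UTF-8');
}, $stored);
echo $decoded, "\n"; // Период
```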

XOR encode a multibyte string and save to MySQL field without loss

I'm currently using this function to obfuscate field values a bit in MySQL and protect them from direct dumping. It all works well and values are stored correctly, but what happens when I try to store a multibyte string?
Here's an example; let's try to encode the string álex:
<?
$v = xorencode('álex');
// step 1 - encode
echo $v."\n";
// step 2 - decode
echo xorencode($v);
?>
It works well: I see an obfuscated string the first time, and then I see álex again. Now if I try to save it in a VARCHAR field in a MySQL table and then select it, I no longer have a UTF-8 string; instead it comes back as gllex.
Note: the MySQL table and field collations are utf8_general_ci, the files are UTF-8, and I SET NAMES utf8 after connecting. Any workaround for this?
Thanks
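XOR-ing UTF-8 bytes yields arbitrary binary that is usually no longer valid UTF-8, so a utf8 VARCHAR column mangles it on INSERT. Storing the result in a VARBINARY/BLOB column avoids this; so does Base64-wrapping the bytes so only ASCII reaches MySQL. A sketch (xorencode() below is a stand-in for the question's unshown function):

```php
<?php
// Simple involutive XOR obfuscation (stand-in for the question's function)
// plus Base64 so the result survives a utf8 VARCHAR column.
function xorencode(string $s, string $key = 'secret'): string {
    $out = '';
    for ($i = 0, $n = strlen($s); $i < $n; $i++) {
        $out .= $s[$i] ^ $key[$i % strlen($key)];
    }
    return $out;
}

$stored   = base64_encode(xorencode('álex'));   // ASCII-safe for MySQL
$restored = xorencode(base64_decode($stored));  // 'álex' again
```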

how to detect and fix character encoding in a mysql database via php?

I have received a database full of people's names and data in French, which means it uses characters such as é, è, ö, û, etc. Around 3000 entries.
Apparently, the data inside has sometimes been encoded using utf8_encode(), and sometimes not. This results in messed-up output: in some places the characters show up fine, in others they don't.
At first I tried to track down every place in the UI where those issues arise and use utf8_decode() where necessary, but it's really not a practical solution.
I did some testing and there is no reason to use utf8_encode in the first place, so I'd rather remove all that and just work in UTF-8 everywhere - at the browser, middleware and database levels. So I need to clean the database, replacing all misencoded data with its cleaned-up version.
Question: would it be possible to create a function in PHP that would check whether a UTF-8 string is correctly encoded (without utf8_encode) or not (with utf8_encode), and, if it wasn't, convert it back to its original state?
In other terms: I would like to know how I could detect UTF-8 content that has been utf8_encode()d and turn it back into UTF-8 content that has not been utf8_encode()d.
**UPDATE: EXAMPLE**
Here is a good example: you take a string full of special chars, then take a copy of that string and utf8_encode() it. The function I'm dreaming of takes both strings, leaves the first one untouched, and turns the second one back into the first.
I tried this:
$loc_fr = setlocale(LC_ALL, 'fr_BE.UTF8', 'fr_BE#euro', 'fr_BE', 'fr', 'fra', 'fr_FR');
$str1 = "éèöûêïà ";
$str2 = utf8_encode($str1);

function convert_charset($str) {
    $charset = mb_detect_encoding($str);
    if ($charset == "UTF-8") {
        return utf8_decode($str);
    } else {
        return $str;
    }
}

function correctString($str) {
    echo "\nbefore: $str";
    $str = convert_charset($str);
    echo "\nafter: $str";
}

correctString($str1);
echo('<hr/>' . "\n");
correctString($str2);
And that gives me:
before: éèöûêïà after: �������
before: éèöûêïà after: éèöûêïà
Thanks,
Alex
It's not completely clear from the question what character-encoding lens you're currently looking through (this depends on the defaults of your text editor, browser headers, database configuration, etc), and what character-encoding transformations the data has gone through. It may be that, for example, by tweaking a database configuration everything will be corrected, and that's a lot better than making piecemeal changes to data.
It looks like it might be a problem of utf8 double-encoding, and if that's the case, both the original and the corrupted data will be in utf8, so encoding detection won't give you the information you need. The approach in that case requires making assumptions about what characters can reasonably turn up in your data: as far as PHP and Mysql are concerned "é" is perfectly legal utf8, so you have to make a judgement based on what you know about the data and its authors that it must be corrupted. These are risky assumptions to make if you're just a technician. Luckily, if you know the data is in French and there's only 3000 records, it's probably ok to make those kinds of assumptions.
Below is a script that you can adapt first of all to check your data, then to correct it, and finally to check it again. All it's doing is processing a string as utf8, breaking it into characters, and comparing the characters against a whitelist of expected French characters. It signals a problem if the string is either not in utf8 or contains characters that aren't normally expected in French, for example:
PROBABLY OK Côte d'Azur
HAS NON-WHITELISTED CHAR Côte d'Azur 195,180 ô
NON-UTF8 C�e d'Azur
Here's the script; you'll need to download the dependent Unicode functions from http://hsivonen.iki.fi/php-utf8/
<?php
// Download from http://hsivonen.iki.fi/php-utf8/
require "php-utf8/utf8.inc";

$my_french_whitelist = array_merge(
    range(0, 127), // throw in all the lower ASCII chars
    array(
        0xE8, // small e-grave
        0xE9, // small e-acute
        0xF4, // small o-circumflex
        //... Will need to add other accented chars,
        // Euro sign, and whatever other chars
        // are normally expected in the data.
    )
);

// NB, whether this string literal is in utf8
// depends on the encoding of the text editor
// used to write the code
$str1 = "Côte d'Azur";
$test_data = array(
    $str1,
    utf8_encode($str1),
    utf8_decode($str1),
);

foreach ($test_data as $str) {
    $questionable_chars = non_whitelisted(
        $my_french_whitelist,
        $str
    );
    if ($questionable_chars === true) {
        p("NON-UTF8", $str);
    } else if ($questionable_chars) {
        p(
            "HAS NON-WHITELISTED CHAR",
            $str,
            implode(",", $questionable_chars),
            unicodeToUtf8($questionable_chars)
        );
    } else {
        p("PROBABLY OK", $str);
    }
}

function non_whitelisted($whitelist, $utf8_str) {
    $codepoints = utf8ToUnicode($utf8_str);
    if ($codepoints === false) { // has non-utf8 char
        return true;
    }
    return array_diff(
        array_unique($codepoints),
        $whitelist
    );
}

function p() {
    $args = func_get_args();
    echo implode("\t", $args), "\n";
}
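A variant of the same whitelist check using PHP's built-in PCRE UTF-8 mode instead of the downloaded library (the character list below is a starter set for French, not complete):

```php
<?php
// Whitelist check with built-in PCRE: true only if $str is valid UTF-8
// AND contains nothing but ASCII plus a starter set of French accented
// characters. Extend the character class as the data requires.
function looks_like_clean_french(string $str): bool {
    if (preg_match('//u', $str) !== 1) {
        return false; // not valid UTF-8 at all
    }
    $whitelist = '/^[\x00-\x7F' .
        '\x{E0}\x{E2}\x{E7}\x{E8}\x{E9}\x{EA}\x{EB}' .  // à â ç è é ê ë
        '\x{EE}\x{EF}\x{F4}\x{F9}\x{FB}\x{FC}\x{153}' . // î ï ô ù û ü œ
        ']*$/u';
    return preg_match($whitelist, $str) === 1;
}
```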
I think you might be taking a more complicated approach than necessary. I received a Bulgarian database a few weeks back that was dynamically encoded in the DB, but when moving it to another database I got the funky ???.
The way I solved it was by dumping the database, setting the new database to utf8 collation, and then importing the data as binary. This auto-converted everything to utf8 and didn't give me any more ???.
This was in MySQL.
When you connect to the database, remember to always use mysql_set_charset('utf8', $db_connection);
it will fix everything; it solved all my problems.
See this: http://phpanswer.com/store-french-characters-into-mysql-db-and-display/
As you said that your data is sometimes converted using utf8_encode, your data is encoded with either UTF-8 or ISO 8859-1 (since utf8_encode converts from ISO 8859-1 to UTF-8). And since UTF-8 encodes the characters from 128 to 255 with two bytes starting with 1100001x, you just have to test whether your data is valid UTF-8 and convert it if not.
So scan all your data to check whether it already is UTF-8 (see the various is_utf8 functions out there) and use utf8_encode if it's not UTF-8.
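A sketch of that scan-and-convert step, with mb_check_encoding() standing in for the "is_utf8" functions mentioned. Note this cannot catch double-encoded UTF-8, since double-encoded UTF-8 is itself valid UTF-8 (the caveat raised in the whitelist answer above):

```php
<?php
// Normalise to UTF-8: convert only the strings that are not already
// valid UTF-8, assuming the non-UTF-8 ones are ISO-8859-1.
function ensureUtf8(string $s): string {
    if (mb_check_encoding($s, 'UTF-8')) {
        return $s; // already UTF-8, leave untouched
    }
    return mb_convert_encoding($s, 'UTF-8', 'ISO-8859-1');
}
```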
My problem is that somehow I got characters like à, é, ê in my database, some in plain form and some utf8-encoded. After investigation I came to the conclusion that some browser (I do not know whether IE or FF or another) was encoding the submitted input data, since no utf8 handling was intentionally added to the submit forms. So if I read the data with utf8_encode, I would alter the plain chars, and vice-versa.
My solution, after I studied the solutions given above:
1. I created a new database with charset utf8
2. Imported the database AFTER I changed the charset definition in the CREATE TABLE statements in the sql dump file from Latin.... to UTF8.
3. Imported the data from the original database
(up to here it may be enough just to change the charset on the existing db and tables, and only if the original db is not utf8)
4. Updated the content in the database directly, replacing the utf8-encoded chars with their plain form, something like:
UPDATE `clients` SET `name` = REPLACE(`name`,"é",'é' ) WHERE `name` LIKE CONVERT( _latin1 '%é%' USING utf8 );
I put this line in my db class (for the PHP code) to make sure there is UTF8 communication:
$this->query('SET CHARSET UTF8');
So, how to update? (step 4)
I built an array with the possible chars that might be encoded:
$special_chars = array(
    'ù', 'û', 'ü',
    'ÿ',
    'à', 'â', 'ä', 'å', 'æ',
    'ç',
    'é', 'è', 'ê', 'ë',
    'ï', 'î',
    'ô', '', 'ö', 'ó', 'ø',
    'ü');
I built an array with pairs of (table, field) that should be updated:
$where_to_look = array(
    array("table_name", "field_name"),
    ..... );
then,
foreach ($special_chars as $char) {
    foreach ($where_to_look as $pair) {
        // $table = $pair[0]; $field = $pair[1]
        $sql = "SELECT id, `" . $pair[1] . "` FROM " . $pair[0] .
               " WHERE `" . $pair[1] . "` LIKE CONVERT( _latin1 '%" . $char . "%' USING utf8 );";
        $db->query($sql); // run the SELECT before checking num_rows()
        if ($db->num_rows() > 0) {
            $sql1 = "UPDATE " . $pair[0] . " SET `" . $pair[1] . "` = REPLACE(`" . $pair[1] .
                    "`, CONVERT( _latin1 '" . $char . "' USING utf8 ), '" . $char .
                    "' ) WHERE `" . $pair[1] . "` LIKE CONVERT( _latin1 '%" . $char . "%' USING utf8 )";
            $db->query($sql1);
        }
    }
}
The basic idea is to use the encoding features of mysql to avoid the encoding done between mysql, apache, the browser and back.
NOTE: I did not have PHP functions like mb_* available.
Best
