Discover the charset of a foreign database - PHP

I have a MySQL database (not mine). In this database all the encodings are set to utf-8, and I connect with charset utf-8. But when I try to read from the database I get this:
×¢×?ק 1
בית ×ª×•×’× ×” העוסק במספר שפות ×ª×•×›× ×”
× × ×œ× ×œ×¤× ×•×ª ×חרי 12 בלילה ..
What I am supposed to get:
עסק 1
בית תוגנה העוסק במספר שפות תוכנה
נא לא לפנות אחרי 12 בלילה ..
When I look in phpMyAdmin I see the same thing (the connection in phpMyAdmin is also utf-8).
I know that the data is supposed to be in Hebrew. Does anyone have an idea how to fix this?

You appear to have UTF-8 data that was treated as Windows-1252 and subsequently converted to UTF-8 (sometimes referred to as "double-encoding").
The first thing that you need to determine is at what stage the conversion took place: before the data was saved in the table, or upon your attempts to retrieve it? The easiest way is often to SELECT HEX(the_column) FROM the_table WHERE ... and manually inspect the byte-encoding as it is currently stored:
If, for the data above, you see C397C2A2... then the data is stored erroneously (an incorrect connection character set at the time of data insertion is the most common culprit); it can be corrected as follows (being careful to use data types of sufficient length in place of TEXT and BLOB as appropriate):
Undo the conversion from Windows-1252 to UTF-8 that caused the data corruption:
ALTER TABLE the_table MODIFY the_column TEXT CHARACTER SET latin1;
Drop the erroneous encoding metadata:
ALTER TABLE the_table MODIFY the_column BLOB;
Add corrected encoding metadata:
ALTER TABLE the_table MODIFY the_column TEXT CHARACTER SET utf8;
See it on sqlfiddle.
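Put together, here is what the whole diagnosis-plus-repair might look like on a hypothetical table t with an integer id column and a TEXT column txt (a sketch only; adjust names, types and lengths to your schema). The HEX() values in the comments are what you would expect for the Hebrew example above:
SELECT HEX(txt) FROM t WHERE id = 1;
-- e.g. C397C2A2... → double-encoded, stored wrongly; proceed with the repair
ALTER TABLE t MODIFY txt TEXT CHARACTER SET latin1;  -- undo the Windows-1252 → UTF-8 conversion
ALTER TABLE t MODIFY txt BLOB;                       -- drop the erroneous metadata; bytes untouched
ALTER TABLE t MODIFY txt TEXT CHARACTER SET utf8;    -- re-attach the correct metadata
SELECT HEX(txt) FROM t WHERE id = 1;
-- should now start D7A2... (plain UTF-8 Hebrew)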
Take care to insert any future data correctly, or else the table will be partly encoded one way and partly another (which can be a nightmare to try and fix).
If you're unable to modify the database schema, the records can be transcoded to the correct encoding on the fly with CONVERT(BINARY CONVERT(the_column USING latin1) USING utf8) (see it on sqlfiddle), but I strongly recommend that you fix the database when possible rather than leaving it containing broken data.
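For example, using the same hypothetical table and column names as above, reading the repaired text without touching the schema might look like this:
SELECT CONVERT(BINARY CONVERT(txt USING latin1) USING utf8) AS fixed_txt
FROM t
WHERE id = 1;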
However, if you see D7A2D7A1... then the data is stored correctly and the corruption is taking place upon data retrieval; you will have to perform further tests to identify the exact cause. See UTF-8 all the way through for guidance.
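In that retrieval-side case, a common first check (a hedged sketch, reusing the same hypothetical table as above; the exact fix depends on your client stack) is to confirm that the connection character set really is utf8 before reading:
SHOW VARIABLES LIKE 'character_set%';  -- what the server thinks this connection uses
SET NAMES utf8;                        -- force the connection character set for this session
SELECT txt FROM t WHERE id = 1;        -- correctly stored UTF-8 should now come back intact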

Related

Compare same values stored with different encodings

This question is not a duplicate of PHP string comparison between two different types of encoding because my question requires a SQL solution, not a PHP solution.
Context ► There's a museum with two databases with the same charset and collation (engine=INNODB charset=utf8 collate=utf8_unicode_ci) used by two different PHP systems. Each PHP system stores the same data in a different way; for example, one stores "Niños" while the other stores "Niños".
There are tons of data already stored that way and both systems are working fine, so I can't change the PHP encoding or the databases'. One system handles the sales from the box office, the other handles the sales from the website.
The problem ► I need to compare the tipo_boleto_tipo column of one table to the tipo column of the other in order to get the value of another column of that second table, but I'm getting no results because the same values are stored differently: when I search for "Niños" ("children" in Spanish), it is not found because it was stored as "Niños". I tried to do it in PHP using utf8_encode and utf8_decode, but it is unacceptably slow, so I think it's better to do it with SQL only. This data will be used for a unified sales report (box office and internet) over variable periods of time, and it has to compare hundreds of thousands of rows, which is why it is so slow in PHP.
The question ► Is there anything like utf8_encode or utf8_decode in MySQL that would allow me to match the equivalent values of both columns? Any other suggestion is welcome.
Next is my current code (with no results):
SELECT boleteria.tipos_boletos.genero               -- desired column
FROM boleteria.tipos_boletos                         -- database with weird chars
INNER JOIN venta_en_linea.ventas_detalle             -- database with proper chars
ON venta_en_linea.ventas_detalle.tipo_boleto_tipo = boleteria.tipos_boletos.tipo
WHERE venta_en_linea.ventas_detalle.evento_id='1'
AND venta_en_linea.ventas_detalle.tipo_boleto_tipo = 'Niños'
The ON venta_en_linea.ventas_detalle.tipo_boleto_tipo = boleteria.tipos_boletos.tipo condition is never going to match because the two values are different ("Niños" vs "Niños").
It appears the application which writes to the boleteria database is not storing correct UTF-8. The database column character set refers to how MySQL interprets strings, but your application can still write in other character sets.
I can't tell from your example exactly what the incorrect character set is, but assuming it's Latin-1 you can convert it to latin1 (to make it "correct"), then convert it back to "actual" utf8:
SELECT 1
FROM tipos_boletos, ventas_detalle
WHERE CONVERT(CAST(CONVERT(tipo USING latin1) AS binary) USING utf8)
= tipo_boleto_tipo COLLATE utf8_unicode_ci
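Applied to the query from the question, the join condition would look something like the following (a sketch only, under the same assumption that the misread character set is Latin-1):
SELECT boleteria.tipos_boletos.genero
FROM boleteria.tipos_boletos
INNER JOIN venta_en_linea.ventas_detalle
ON CONVERT(CAST(CONVERT(boleteria.tipos_boletos.tipo USING latin1) AS binary) USING utf8)
   = venta_en_linea.ventas_detalle.tipo_boleto_tipo COLLATE utf8_unicode_ci
WHERE venta_en_linea.ventas_detalle.evento_id='1'
AND venta_en_linea.ventas_detalle.tipo_boleto_tipo = 'Niños'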
I've seen this all too often in PHP applications that weren't written carefully from the start to use UTF-8 strings. If you find the performance too slow and you need to convert frequently, and you don't have an opportunity to update the application writing the data incorrectly, you can add a new column and trigger to the tipos_boletos table and convert on the fly as records are added or edited.
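A hedged sketch of that last suggestion, using a hypothetical extra column tipo_utf8 on tipos_boletos (the names and the VARCHAR length are illustrative only):
-- Add a normalized copy of tipo that the report can join against directly
ALTER TABLE tipos_boletos
  ADD COLUMN tipo_utf8 VARCHAR(100) CHARACTER SET utf8 COLLATE utf8_unicode_ci;
-- Backfill the existing rows once
UPDATE tipos_boletos
SET tipo_utf8 = CONVERT(CAST(CONVERT(tipo USING latin1) AS binary) USING utf8);
-- Keep the copy in sync while the box-office application continues writing as before
CREATE TRIGGER tipos_boletos_fix_ins BEFORE INSERT ON tipos_boletos FOR EACH ROW
  SET NEW.tipo_utf8 = CONVERT(CAST(CONVERT(NEW.tipo USING latin1) AS binary) USING utf8);
CREATE TRIGGER tipos_boletos_fix_upd BEFORE UPDATE ON tipos_boletos FOR EACH ROW
  SET NEW.tipo_utf8 = CONVERT(CAST(CONVERT(NEW.tipo USING latin1) AS binary) USING utf8);
The report query can then join on tipo_utf8 = tipo_boleto_tipo without converting per row.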

MySQL saving UTF-8 entries from PHP correctly in VARCHAR but not in text blobs

I currently have the following snippet of text in a paragraph on my website:
let’s get to it
The apostrophe character is part of the UTF-8 charset, and it saves properly in a table column that is designated a VARCHAR column, in the form
let’s get to it
Which is properly parsed by my client. If I put the same text into a TEXT column in MySQL, it's stored as the following:
let’s get to it.
Is there any reason the two would differ, and if so, how can I change it?
let’s is Mojibake. Latin1 is creeping in.
"text blob" -- which is it TEXT or BLOB? They are different datatypes.
let’s comes from htmlentities() or something equivalent. That can be stored and retrieved in VARCHAR, TEXT, or BLOB, regardless of CHARACTER SET. MySQL will not convert to that.
The Mojibake probably came from:
The bytes you have in the client are correctly encoded in utf8 (good).
You connected with SET NAMES latin1 (or set_charset('latin1') or ...), probably by default. (It should have been utf8.)
The column in the tables may or may not have been CHARACTER SET utf8, but it should have been that.
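A hedged way to see which of the three cases you actually have in the TEXT column (hypothetical table posts with an id column and a snippet column) is to inspect the raw bytes; the hex fragments in the comments are what each variant of the apostrophe encodes to:
SELECT snippet, HEX(snippet) FROM posts WHERE id = 1;
-- let’s      → ...E28099...            (clean UTF-8 right single quote U+2019)
-- let’s   → ...C3A2E282ACE284A2...  (UTF-8 bytes re-encoded as if they were latin1/cp1252 — Mojibake)
-- let’s → ...2623383231373B...    (the literal HTML entity produced by htmlentities())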

Encoding problems with MySQL database (web2project)

When I add the name of a project in Russian, the application saves the filled-in data into the MySQL database in a non-readable format like проект номер три де фшоыÑшфыво шщфыовÑшщыв (it should be проект номер три де фшоысшфыво шщфыовсшщыв). But when I view the details of the current project, the view form of the project shows the data as it was typed (e.g. проект номер три де фшоысшфыво шщфыовсшщыв).
Since the database is filled with data in a non-UTF-8 format, the print view of the project has the same problem.
What should I change or delete so that the data is inserted properly?
Ð¿Ñ€Ð¾ÐµÐºÑ is Mojibake for проек.
This is the classic case of:
The bytes you have in the client are correctly encoded in utf8 (good).
You connected with SET NAMES latin1 (or set_charset('latin1') or ...), probably by default. (It should have been utf8.)
The column in the tables may or may not have been CHARACTER SET utf8, but it should have been that.
If you need to fix the data, it takes a "2-step ALTER", something like:
ALTER TABLE Tbl MODIFY COLUMN col VARBINARY(...) ...;
ALTER TABLE Tbl MODIFY COLUMN col VARCHAR(...) ... CHARACTER SET utf8 ...;
where the lengths are big enough and the other "..." have whatever else (NOT NULL, etc) was already on the column.
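A hypothetical concrete instance, assuming the affected column were a VARCHAR(255) NOT NULL named name in a table called projects (adjust to the real web2project schema):
-- Step 1: re-label the column as binary; the stored bytes are left untouched
ALTER TABLE projects MODIFY COLUMN name VARBINARY(255) NOT NULL;
-- Step 2: re-attach the correct character set; still no byte conversion
ALTER TABLE projects MODIFY COLUMN name VARCHAR(255) CHARACTER SET utf8 NOT NULL;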

MySQL column encoding after converting to PHPBB3 from VBulletin 3

A screenshot from phpMyAdmin of the phpbb3 topics table is a good introduction to the issue: it shows that, within the same table, one column (topic_title) renders its text in the wrong encoding while another (topic_first_poster_name) renders it correctly.
In the convert script I set $encoding to windows-1256 as advised, because my old VB forum was using windows-1256.
The table has a utf8_bin default collation; the topic_title column's collation is utf8_unicode_ci, while topic_first_poster_name's is utf8_bin.
What I need is to convert the text of topic_title so that it renders correctly, because at the moment phpbb3 renders it wrong.
I tried the hint in this article about fixing column encoding, but I was unable to determine which encoding I have to use:
UPDATE table SET column=CONVERT(CONVERT(CONVERT(column USING binary) USING utf8) USING cp1251) WHERE id=123;
I ran the following using cp1256 but it did not give any result:
UPDATE t_topics SET topic_title=CONVERT(CONVERT(CONVERT(topic_title USING binary) USING utf8) USING cp1256) WHERE topic_id=2
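One way to work out which character set to use in the CONVERT chain (a hedged sketch, reusing the same table and row) is to inspect the stored bytes before guessing:
SELECT topic_title, HEX(topic_title) FROM t_topics WHERE topic_id = 2;
-- e.g. correctly stored UTF-8 Arabic shows two-byte D8xx/D9xx pairs, whereas raw
-- windows-1256 Arabic shows single bytes in the 0x80–0xFF range; compare what you
-- see against the character set you plan to convert from.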
Update:
When I alter the charset, i.e. making it cp1256 first and then utf8, the field text changes but is still wrong.
Update 2:
Using the following in the application's viewtopic.php solves the problem in the browser window:
'TOPIC_TITLE' => iconv( "UTF-8","Windows-1256//TRANSLIT", utf8_encode($topic_data['topic_title']))
However, what does this indicate, and how can the issue be solved in the database field itself?

Issue with charset and data

I'll try to explain the whole problem in my poor English:
I used to save data from my application (encoded in utf8) to the database using PHP's default connection charset (latin1), into tables whose charset is latin1.
That wasn't a big problem: for example the string Magnüs was stored as Magnüs, and when I retrieved the data I correctly saw the string Magnüs (because of the default latin1 connection).
Now I have changed the connection to use the correct charset, with mysql_query("SET NAMES 'utf8'", $mydb), and I've also changed the charset of my tables' fields, so the value is now correctly stored as Magnüs in the DB; I still see Magnüs when I retrieve the data and print it in my web application.
Of course, unfortunately, some old values are now printed badly (Magnüs is printed as Magnüs).
What I'd like to do is "convert" these old values to the real encoding.
ALTER TABLE <table_name> CONVERT TO CHARACTER SET utf8; will convert only the field type, not the data.
So, a solution (found on the internet) should be this:
ALTER TABLE table CHANGE field field BLOB;
ALTER TABLE table CHANGE field field VARCHAR(255) CHARACTER SET utf8;
But these old strings don't change in the database, and therefore they are still printed wrongly in the web application.
Why? And what can I do?
Make sure that your forms are sending UTF-8 encoded text, and that the text in your table is also UTF-8 encoded.
According to the MySQL reference, the last two ALTERs you mentioned do not change the encoding of the column contents; it's more like a "reinterpretation" of the contents.
Warning
The CONVERT TO operation converts column values between the character sets. This is not what you want if you have a column in one character set (like latin1) but the stored values actually use some other, incompatible character set (like utf8). In this case, you have to do the following for each such column:
ALTER TABLE t1 CHANGE c1 c1 BLOB;
ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The reason this works is that there is no conversion when you convert to or from BLOB columns.
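If the old rows ended up double-encoded when the column was switched to utf8 (which is what Magnüs showing up as Magnüs suggests), a reversal along the lines of the on-the-fly expression shown in the first answer above may repair them. This is a hedged sketch with hypothetical table and column names, and the LIKE filter is only a crude heuristic for picking out the old rows:
UPDATE users
SET name = CONVERT(BINARY CONVERT(name USING latin1) USING utf8)
WHERE name LIKE '%Ã%';
-- Run this only on rows that are actually double-encoded: applying the conversion
-- to already-correct rows would corrupt them, and the LIKE '%Ã%' filter is only
-- safe if legitimate values never contain that character.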
