I have a query like so:
$profilesx->where('qZipCode', $location)->
orWhere('qCity', 'LIKE', '%'.$location.'%');
Here $location equals belgrade, while the database column contains Belgrade.
The comparison seems to be case sensitive (using either = or LIKE): if I search for Belgrade I get a result, but if I search for belgrade I get no results.
How can I make it case insensitive?
The default character set and collation are latin1 and
latin1_swedish_ci, so nonbinary string comparisons are case
insensitive by default. This means that if you search with col_name
LIKE 'a%', you get all column values that start with A or a. To make
this search case sensitive, make sure that one of the operands has a
case sensitive or binary collation. For example, if you are comparing
a column and a string that both have the latin1 character set, you can
use the COLLATE operator to cause either operand to have the
latin1_general_cs or latin1_bin collation:
source: http://dev.mysql.com/doc/refman/5.7/en/case-sensitivity.html
What's actually happened is that the case insensitivity has been switched off (which is not necessarily a bad thing): your column presumably has a case-sensitive or binary collation. The solution is the same COLLATE trick from that document, only with a case-insensitive collation. Try something like
orWhere('qCity', 'COLLATE latin1_general_ci LIKE', '%'.$location.'%');
If Laravel doesn't like it you will have to use a raw query or change the collation setting for the column.
As a side note, try to avoid LIKE '%something%' queries if you can. MySQL cannot use an index for a leading-wildcard pattern, so these queries tend to be slow on large tables because of it.
This is usually caused by the collation settings of the database. You need to set a case-insensitive collation type.
Read this link for more info:
http://dev.mysql.com/doc/refman/5.7/en/case-sensitivity.html
Related
In a table x, there is a column with the values u and ü.
SELECT * FROM x WHERE column='u'.
This returns u AND ü, although I am only looking for the u.
The table's collation is utf8mb4_unicode_ci. Wherever I read about similar problems, everyone suggests using this collation because they say that utf8mb4 really covers ALL CHARACTERS. With this collation, all character set and collation problems should be solved.
I can insert ü, è, é, à, Chinese characters, etc. When I make a SELECT *, they are also retrieved and displayed correctly.
The problem only occurs when I COMPARE two strings as in above example (SELECT WHERE) or when I use a UNIQUE INDEX on the column. When I use the UNIQUE INDEX, a "ü" is not inserted when I have a "u" in the column already. So, when SQL compares u and ü in order to decide whether the ü is unique, it thinks it is the same as the u and doesn't insert the ü.
I changed everything to utf8mb4 because I don't want to worry about character sets and collation anymore. However, it seems that utf8mb4 isn't the solution either when it comes to COMPARING strings.
I also tried this:
SELECT * FROM x WHERE _utf8mb4 'ü' COLLATE utf8mb4_unicode_ci = column.
This code is executable (looks pretty sophisticated). However, it also returns ü AND u.
I have talked to some people in India and here in China about this issue. We haven't found a solution yet.
If anyone could solve the mystery, it would be really great.
Add_On: After reading all the answers and comments below, here is a code sample which solves the problem:
SELECT * FROM x WHERE 'ü' COLLATE utf8mb4_bin = column
By adding "COLLATE utf8mb4_bin" to the SELECT query, SQL is invited to put its "binary glasses" (ending _bin) on when it looks at the characters in the column. With the binary glasses on, SQL sees the binary code in the column, and the binary code is different for every letter, character and emoji one can think of. So SQL can now also see the difference between u and ü, and it only returns the ü when the SELECT query looks for the ü, instead of also returning the u.
In this way, one can leave everything (database collation, table collation) the same, but only add "COLLATE utf8mb4_bin" to a query when exact differentiation is needed.
(Actually, SQL takes all other glasses off (utf8mb4_german_ci, _general_ci, _unicode_ci etc.) and only does what it does when it is not forced to do anything additional. It simply looks at the binary code and doesn't adjust its search to any special cultural background.)
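The contrast between a _ci and a _bin comparison can also be sketched outside SQL. Here is a small Python illustration (an analogy, not MySQL itself): stripping combining marks after Unicode decomposition mimics what an accent-insensitive collation does, while plain string equality mimics utf8mb4_bin.

```python
import unicodedata

def ci_key(s):
    # crude analogue of an accent/case-insensitive (_ci) collation:
    # fold case, decompose, then drop the combining accent marks
    decomposed = unicodedata.normalize('NFD', s.casefold())
    return ''.join(c for c in decomposed if not unicodedata.combining(c))

# _ci-style comparison: u and ü reduce to the same key
print(ci_key('ü') == ci_key('u'))   # True
# _bin-style comparison: the raw code points differ
print('ü' == 'u')                   # False
```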
Thanks everybody for the support, especially to Pred.
Collation and character set are two different things.
Character set is just an 'unordered' list of characters and their representation.
utf8mb4 is a character set and covers a lot of characters.
Collation defines the order of characters (it determines the end result of ORDER BY, for example) and defines other rules (such as which characters or character combinations should be treated as the same). Collations are derived from character sets; there can be more than one collation for the same character set. (It is an extension to the character set, sort of.)
In utf8mb4_unicode_ci, all (most?) accented characters are treated as the same base character; this is why you get both u and ü. In short, it is an accent-insensitive collation.
This is similar to the fact that German collations treat ss and ß as same.
utf8mb4_bin is another collation and it treats all characters as different ones. Whether you want to use it as the default is up to you and your business rules.
You can also convert the collation in queries, but be aware that doing so will prevent MySQL from using indexes.
Here is an example using a similar, but maybe a bit more familiar part of collations:
The ci at the end of these collations means Case Insensitive, and almost all ci collations have a counterpart ending in cs, meaning Case Sensitive.
When your column is case insensitive, the where condition column = 'foo' will find all of these: foo, Foo, fOo, foO, FOo, FoO, fOO, FOO.
Now if you try to set the collation to case sensitive (utf8mb4_unicode_cs for example), all the above values are treated as different values.
The localized collations (German, UK, US, Hungarian, whatever) follow the rules of the named language. In German, ss and ß are the same, and this is stated in the rules of the German language. When a German user searches for the value Straße, they will expect that software supporting the German language (or written in Germany) returns both Straße and Strasse.
To go further, when it comes to ordering, the two words are the same, they are equal, their meaning is the same so there is no particular order.
Don't forget that a UNIQUE constraint is just another way of ordering/filtering values. So if there is a unique key defined on a column with a German collation, it will not allow inserting both Straße and Strasse, since by the rules of the language they should be treated as equal.
Now let's look at our original collation, utf8mb4_unicode_ci. This is a 'universal' collation, which means it tries to simplify everything: since ü is not a really common character and most users have no idea how to type it in, this collation makes it equal to u. This simplification exists in order to support most languages, but as you already know, this kind of simplification has side effects (in ordering, filtering, unique constraints, etc.).
The utf8mb4_bin collation is the other end of the spectrum. It is designed to be as strict as it can be: to achieve this, it literally uses the character codes to distinguish characters. This means every form of a character is distinct, so this collation is implicitly case sensitive and accent sensitive.
Both of these have drawbacks: the localized and general collations are designed for one specific language or to provide a common solution. (utf8mb4_unicode_ci is the 'extension' of the old utf8_general_ci collation)
The binary collation requires extra caution when it comes to user interaction. Since it is CS and AS, it can confuse users who are used to getting the value 'Foo' when they are looking for the value 'foo'. Also, as a developer, you have to be extra cautious when it comes to joins and other features: an INNER JOIN with 'foo' = 'Foo' on each side will return nothing, since 'foo' is not equal to 'Foo'.
I hope that these examples and explanation helps a bit.
utf8_collations.html lists what letters are 'equal' in the various utf8 (or utf8mb4) collations. With rare exceptions, all accents are stripped before comparing in any ..._ci collation. Some of the exceptions are language-specific, not Unicode in general. Example: In Icelandic É > E.
..._bin is the only collation that treats accented letters as different. Ditto for case folding.
If you are doing a lot of comparing, you should change the collation of the column to ..._bin. When using the COLLATE clause in WHERE, an index cannot be used.
A note on ß: ss = ß in virtually all collations. However, utf8_general_ci (which used to be the default) treats them as unequal; that one collation made no effort to treat any 2-letter combination (ss) as a single 'letter'. Also, due to a mistake in 5.0, utf8_general_mysql500_ci treats them as unequal.
Going forward, utf8mb4_unicode_520_ci is the best through version 5.7. For 8.0, utf8mb4_0900_ai_ci is 'better'. The "520" and "900" refer to Unicode standards, so there may be even newer ones in the future.
You can try the utf8_bin collation and you shouldn't face this issue, but it will be case sensitive. The bin collations compare strictly, only separating the characters out according to the encoding selected, and once that's done, comparisons are done on a binary basis, much like many programming languages would compare strings.
I'll just add to the other answers that a _bin collation has its peculiarities as well.
For example, after the following:
CREATE TABLE `dummy` (`key` VARCHAR(255) NOT NULL UNIQUE);
INSERT INTO `dummy` (`key`) VALUES ('one');
this will fail:
INSERT INTO `dummy` (`key`) VALUES ('one ');
This is described in The binary Collation Compared to _bin Collations.
Edit: I've posted a related question here.
I have a Danish website developed in PHP. I am using mysqli.
I have the words like Daugård and Århus in database field called tags.
I want this both values as result when I run a query like below.
Query : select * from table_name where tags like '%år%';
Expected result : Daugård and Århus both
Actual result : Daugård
Right now it's performing a case-sensitive match and returning only the word Daugård.
I tried changing the charset to utf8 dynamically via the function set_charset('utf8'), but it didn't work.
Collation for the 'tags' field is 'utf8_general_ci', table collation is 'utf8_general_ci' and my database collation is 'latin1_swedish_ci'.
How can I achieve this?
To avoid the collation issue, use a case conversion function on "tags" before comparison:
select * from table_name where lcase(tags) like '%år%';
I might be missing something, as MySQL isn't that familiar to me. I know the function LOWER(tags) does the job in Oracle, as long as the pattern searched for is also in lower case.
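The idea behind the case-conversion workaround can be sketched in Python (an illustration of the principle, not of MySQL's lcase() itself): lower-casing both sides before the substring test makes Å match å.

```python
needle = 'år'
tags = ['Daugård', 'Århus']

# lower-case both sides before the substring test,
# like lcase(tags) LIKE '%år%' in the query above
matches = [t for t in tags if needle.lower() in t.lower()]
print(matches)  # ['Daugård', 'Århus']
```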
There is an existing database/tables whose charset I cannot change. These tables use the collation "latin1_swedish_ci", but there is UTF-8 data stored inside. For example, the string "fußball" (German football) is saved as "fußball". That's the part I cannot change.
My whole script works just fine with UTF-8 and its own UTF-8 tables, and I use PDO (MySQL) with a UTF-8 connection to query. But sometimes I have to query some "old" latin1 tables. Is there any "cool" way of solving this instead of sending SET NAMES?
This is my very first question at stackoverflow! :-)
It's actually very easy to think that data is encoded in one way, when it is actually encoded in some other way: this is because any attempt to directly retrieve the data will result in conversion first to the character set of your database connection and then to the character set of your output medium—therefore you should first verify the actual encoding of your stored data through either SELECT BINARY myColumn FROM myTable WHERE ... or SELECT HEX(myColumn) FROM myTable WHERE ....
Once you are certain that you have UTF-8 encoded data stored within a Windows-1252 encoded column (i.e. you are seeing 0xc39f where the character ß is expected), what you really want is to drop the encoding information from the column and then tell MySQL that the data is actually encoded as UTF-8. As documented under ALTER TABLE Syntax:
Warning
The CONVERT TO operation converts column values between the character sets. This is not what you want if you have a column in one character set (like latin1) but the stored values actually use some other, incompatible character set (like utf8). In this case, you have to do the following for each such column:
ALTER TABLE t1 CHANGE c1 c1 BLOB;
ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The reason this works is that there is no conversion when you convert to or from BLOB columns.
Henceforth MySQL will correctly convert selected data to that of the connection's character set, as desired. That is, if a connection uses UTF-8, no conversion will be necessary; whereas a connection using Windows-1252 will receive strings converted to that character set.
Not only that, but string comparisons within MySQL will be correctly performed. For example, if you currently connect with the UTF-8 character set and search for 'fußball', you won't get any results; whereas you would after the modifications above.
The pitfall to which you allude, of having to change numerous legacy scripts, only applies insofar as those legacy scripts are using an incorrect connection character set (for example, are telling MySQL that they use Windows-1252 whereas they are in fact sending and expecting receipt of data in UTF-8). You really should fix this in any case, as it can lead to all sorts of horrors down the road.
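The double encoding described above can be reproduced in Python. MySQL's latin1 is effectively Windows-1252, hence cp1252 below; this is only a sketch of the byte-level round trip, not of MySQL internals.

```python
s = 'fußball'

# UTF-8 bytes read back through a latin1 (cp1252) connection: classic mojibake
mojibake = s.encode('utf-8').decode('cp1252')
print(mojibake)            # fuÃŸball

# reversing the steps recovers the original string
restored = mojibake.encode('cp1252').decode('utf-8')
print(restored == s)       # True
```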
I solved it by creating another database handle in my DB class that uses latin1, so whenever I need to query the "legacy tables" I can use
$pdo = Db::getInstance();
$pdo->legacyDbh->query("MY QUERY");
# instead of
$pdo->dbh->query("MY QUERY");
If anyone has a better solution that also doesn't touch the tables... :-)
I know that the answer is very simple, but I'm going bananas. I think I've tried every solution available. Here we go...
I have a database with charset latin1. Yeah, I should have it in utf8, but I have several projects running on it, so I don't want to mess with them.
The issue comes with SELECT with LIKE "%...%"
The table is utf8 with COLLATE utf8_general_ci. The fields are also utf8 with utf8_general_ci collation. My script files (php) are utf-8 encoded, and the server also serves files in utf-8. So, everything is utf-8.
OK, as everything is collated with utf8_general_ci, I should be able to search case-insensitively and accent-insensitively. For example:
Having in table providers...
id providerName
1 Jose
2 José
I should be able to do...
SELECT * FROM providers WHERE providerName LIKE "%jose%"
or
SELECT * FROM providers WHERE providerName LIKE "%josé%"
And have, in both cases, the two rows returned. But, with the first query, I only get row 1; and with second query, I only get row two. Case insensitive search seems to work well, but accent insensitive does not.
So I tried adding COLLATE utf8_general_ci after the LIKE "%...%". Same result.
Then, I discovered that the connection was being made in latin1 (via the PHP function mysql_client_encoding()). So I added a query every time a connection was made, indicating to use utf8. I used both SET NAMES UTF8 COLLATE utf8_general_ci and PHP's mysql_set_charset(). With this configuration, the first query returns row 1, but the second query does not return any result. In addition, all results return strange characters (you know, like ð), even though everything was set to utf8.
This is puzzling me. Everything is set to UTF-8, but it doesn't work as (I) expect.
MySQL Server 5.0.95
PHP 5.2.14
Win7
Stop the machines!!
I found out that I was doing everything OK and it DID respond as expected. The only problem was that, even though the table, fields, files and server were in utf8, when the table was populated (some time in the past), the connection had been made with latin1.
So I re-populated the table, now with utf8 connection, and it worked just fine.
Thank you guys!
I don't have the setup to test this properly, but here's a possible solution. So many places to set UTF8! :)
I want to run a SELECT ... LIKE query in SQLite that is case-sensitive. But I only want this one query to be case sensitive, and nothing else.
I know there is
PRAGMA case_sensitive_like = boolean;
But that seems to change all LIKE queries.
How can I enable case-sensitive LIKE on a single query?
Examples:
I want a query for "FuN" to match "blah FuN blah", but not "foo fun bar".
(This is running under PHP using PDO)
I might be able to toggle that on, then off after the query, but I am concerned about the repercussions that may have (efficiency, etc.). Is there any harm?
I don't have write access to the database.
(This is under Windows Server 2008)
I also tried SELECT id, summary, status FROM Tickets WHERE summary COLLATE BINARY LIKE '%OPS%'; but that did not do a case-sensitive SELECT, it still returned results like laptops.
Why not go the simple way of using
PRAGMA case_sensitive_like = true/false;
before and after each query you want to be case sensitive? But beware: case sensitivity only works for ASCII characters, not Unicode, which makes SQLite not fully Unicode-compliant at this time.
Alternatively, SQLite allows applications to implement the REGEXP operator, which might help; see www.sqlite.org/lang_expr.html.
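The toggle-around-one-query approach can be checked with Python's built-in sqlite3 module (table and column names below are just made up for the demo):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE tickets (summary TEXT)")
con.executemany("INSERT INTO tickets VALUES (?)",
                [('blah FuN blah',), ('foo fun bar',)])

# default: LIKE is case-insensitive for ASCII, so both rows match
both = con.execute(
    "SELECT summary FROM tickets WHERE summary LIKE '%FuN%'").fetchall()
print(len(both))  # 2

# turn case-sensitive LIKE on for one query, then back off
con.execute("PRAGMA case_sensitive_like = ON")
rows = con.execute(
    "SELECT summary FROM tickets WHERE summary LIKE '%FuN%'").fetchall()
con.execute("PRAGMA case_sensitive_like = OFF")
print(rows)       # [('blah FuN blah',)]
```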
Try:
SELECT id, summary, status FROM Tickets WHERE summary GLOB '*OPS*';
There is no space between * and OPS.
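GLOB's case-sensitive matching can be verified with Python's built-in sqlite3 module (the sample rows are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE tickets (summary TEXT)")
con.executemany("INSERT INTO tickets VALUES (?)",
                [('OPS down',), ('new laptops',)])

# GLOB is case sensitive (and uses * instead of %), unlike default LIKE,
# so the lowercase 'ops' in 'laptops' does not match
rows = con.execute(
    "SELECT summary FROM tickets WHERE summary GLOB '*OPS*'").fetchall()
print(rows)  # [('OPS down',)]
```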
I think you may need to do a separate check in your PHP code on the returned values to see if they match your case-sensitive term.
$Matches = array();
$rs = mysql_query("SELECT * FROM tbl WHERE myfield LIKE '%$Value%'");
while ($row = mysql_fetch_assoc($rs))
{
    if (strpos($row['myfield'], $Value) !== false)
    {
        $Matches[] = $row;
    }
}
print_r($Matches);
You can try something like this:
SELECT YOUR_COLUMN
FROM YOUR_TABLE
WHERE YOUR_COLUMN
COLLATE latin1_general_cs LIKE '%YOUR_VALUE%'
Not sure what collation is set on your column; I picked latin1 as an example. Run the query and change 'cs' to 'ci' at the end, and you should see different results.
UPDATE
Sorry, I read the question too fast. The above collation is for MySQL. For SQLite, you should be able to use BINARY, which should give you a case-sensitive search.
ref: http://www.sqlite.org/datatype3.html#collation
You can do that per column, not per query (which may be your case). For this, use sqlite collations.
CREATE TABLE user (name VARCHAR(255) COLLATE NOCASE);
All LIKE operations on this column will then be case insensitive.
You also can COLLATE on queries even though the column isn't declared with a specific collation:
SELECT * FROM list WHERE name LIKE '%php%' COLLATE NOCASE
Note that only ASCII chars are case insensitive by collation. "A" == "a", but "Æ" != "æ"
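The per-column NOCASE behavior, and its ASCII-only limitation, can be verified with Python's built-in sqlite3 module (the table contents are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE user (name VARCHAR(255) COLLATE NOCASE)")
con.execute("INSERT INTO user VALUES ('PHP Manual')")

# the column's NOCASE collation makes = comparisons case-insensitive
rows = con.execute("SELECT name FROM user WHERE name = 'php manual'").fetchall()
print(rows)  # [('PHP Manual',)]

# but only for ASCII: 'Æ' and 'æ' stay different even under NOCASE
same = con.execute("SELECT 'Æ' = 'æ' COLLATE NOCASE").fetchone()[0]
print(same)  # 0
```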
SQLite allows you to declare new collation types with sqlite_create_collation and then implement the collation logic on the PHP side, but PDO doesn't expose this.
SELECT * FROM table WHERE field LIKE '%search_term%'
In this form the SELECT is case insensitive.