Issue
Performing an insert using prepared statements with PDO and an ODBC driver gives the following error when at least one of the parameters is over 30 characters:
SQLSTATE[HY010]: Function sequence error: 0
[unixODBC][Driver Manager]Function sequence error (SQLExecute[0] at /usr/src/builddir/ext/pdo_odbc/odbc_stmt.c:254)
Inserts work for any bound string that is <= 30 characters in length.
I have no issues with SELECT queries.
Using INSERT with isql and sqlcmd does not produce an error, but the column values are truncated in the database if they are over 30 characters.
It appears to be a driver issue.
Any ideas on what is causing the issue, and how it can be solved?
Example
Below is a minimal example duplicating the error on my system in PHP, isql, and sqlcmd.
The table used (called table) has three columns:
colvarchar varchar(70)
colnvarchar nvarchar(40)
colnchar nchar(60)
And the code that produces the error:
<?php
$dsn = 'odbc:testdb';
$username = 'user';
$password = 'pass';
$pdo = new PDO($dsn, $username, $password);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sql = <<<'QUERY'
INSERT INTO table
(colvarchar, colnvarchar, colnchar)
VALUES
(CAST(:colvarchar AS varchar),
CAST(:colnvarchar AS nvarchar),
CAST(:colnchar AS nchar));
QUERY;
$prepStmt = $pdo->prepare($sql);
// Add one more character to any of the following strings to cause the error
$prepStmt->bindValue('colvarchar', '012345678901234567890123456789');
$prepStmt->bindValue('colnvarchar', '012345678901234567890123456789');
$prepStmt->bindValue('colnchar', '012345678901234567890123456789');
$prepStmt->execute();
?>
Adding one more character to any of the 30-character strings causes the following error:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY010]: Function sequence error: 0 [unixODBC][Driver Manager]Function sequence error (SQLExecute[0] at /usr/src/builddir/ext/pdo_odbc/odbc_stmt.c:254)' in ...
Using isql and sqlcmd from command line performs the insert, but a select on the table shows the strings are truncated to 30 characters.
sqlcmd:
sqlcmd -D -S testdb -U user -P pass -q "INSERT INTO table (colvarchar,
colnvarchar, colnchar) VALUES
(CAST('012345678901234567890123456789xxx' AS varchar),
CAST('012345678901234567890123456789xxx' AS nvarchar),
CAST('012345678901234567890123456789xxx' AS nchar))"
(1 rows affected)
isql:
isql testdb -U user -P pass
SQL> INSERT INTO table (colvarchar,
colnvarchar, colnchar) VALUES
(CAST('012345678901234567890123456789xxx' AS varchar),
CAST('012345678901234567890123456789xxx' AS nvarchar),
CAST('012345678901234567890123456789xxx' AS nchar))
SQLRowCount returns 1
Result:
SELECT colvarchar, colnvarchar, colnchar FROM table;
colvarchar | colnvarchar | colnchar
012345678901234567890123456789 | 012345678901234567890123456789 | 012345678901234567890123456789
012345678901234567890123456789 | 012345678901234567890123456789 | 012345678901234567890123456789
Research
This post details a similar issue with inserts that do not conform to max column width, timestamp format, or column type.
Timestamp format has been tested and does work, since it is less than 30 characters.
Test strings over 30 characters have been checked to make sure their length is less than the max width of their column.
The data types of the problematic columns are character types (varchar, nvarchar, and nchar).
System Setup
OS: Debian wheezy (64bit)
Database: Microsoft SQL Server 2014
Webserver: Apache 2.2.22 (64bit)
PHP 5.6.24 with Zend thread safety (64bit)
Microsoft ODBC Driver 11.0.2270.0 (Red Hat Linux) (64bit)
created the appropriate environment to work on Debian, following this blog and spiceworks
unixODBC 2.3.0 (64bit)
required by the MS ODBC driver on Linux
UPDATE:
I believe this to be a unixODBC issue, as the 30-character truncation was already present when I was using FreeTDS with unixODBC (I switched to the MS ODBC driver because of this issue). The difference is that FreeTDS produced no error message; it failed silently, just as isql and sqlcmd are doing presently.
Downgraded unixODBC from 2.3.4 to 2.3.0 for its compatibility with MS ODBC 11, as shown here. The issue persists.
All programs that require the ODBC shared libraries had been linking against versions from 2011; they now all link against the up-to-date shared libraries from unixODBC. The issue persists.
You CAST to varchar, nvarchar, and nchar without specifying a length, and in SQL Server a CAST without an explicit length defaults to 30 characters.
Make the casts like this:
CAST(:colvarchar AS varchar(70))
CAST(:colnvarchar AS nvarchar(40))
CAST(:colnchar AS nchar(60))
The issue is with the SQL CAST expressions.
From MSDN - CAST & CONVERT:
Truncating and Rounding Results
When you convert character or binary expressions (char, nchar,
nvarchar, varchar, binary, or varbinary) to an expression of a
different data type, data can be truncated, only partially displayed,
or an error is returned because the result is too short to display.
The documentation lists specific conversions that are guaranteed not to be truncated. The char and varchar topic does, however, give the truncation length: when n is not specified in a CAST or CONVERT, the default length is 30, which matches the behavior seen here.
Specify the data type length to avoid the truncation:
CAST(:colvarchar AS varchar(70))
CAST(:colnvarchar AS nvarchar(40))
CAST(:colnchar AS nchar(60))
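Applied to the question's script, only the SQL needs to change. A minimal sketch, reusing the question's DSN, credentials, and column definitions:
<?php
$pdo = new PDO('odbc:testdb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// Explicit lengths match the column definitions, so values should no
// longer be truncated to CAST's default length of 30.
$sql = <<<'QUERY'
INSERT INTO table
(colvarchar, colnvarchar, colnchar)
VALUES
(CAST(:colvarchar AS varchar(70)),
CAST(:colnvarchar AS nvarchar(40)),
CAST(:colnchar AS nchar(60)));
QUERY;
$prepStmt = $pdo->prepare($sql);
// Strings longer than 30 characters should now insert intact.
$prepStmt->bindValue(':colvarchar', '012345678901234567890123456789xxx');
$prepStmt->bindValue(':colnvarchar', '012345678901234567890123456789xxx');
$prepStmt->bindValue(':colnchar', '012345678901234567890123456789xxx');
$prepStmt->execute();
?>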
Related
I have an app that fetches VARBINARY(max) data from a SQL Server database. In my local environment the app connects via the SQL Server driver; the odbc_connect connection string contains:
DRIVER={SQL Server}
I am fetching the VARBINARY data like this:
// Hexadecimal data of attachment
$query = 'SELECT * FROM attachments WHERE LOC_BLOB_ID = ' . $blob_id;
$attach_result = odbc_exec($connection, $query);
odbc_binmode($attach_result, ODBC_BINMODE_CONVERT);
odbc_longreadlen ($attach_result, 262144);
$attach_row = odbc_fetch_array($attach_result);
$hex_data = $attach_row['attachment_value'];
$binary = hex2bin($hex_data);
It works well. Now I need to run this app on a server where my only option is to use ODBC Driver 17 for SQL Server. The connection string contains:
DRIVER={ODBC Driver 17 for SQL Server}
And it doesn't work. It fails on line 6 of the snippet above (the odbc_fetch_array call). I've tried commenting out the odbc_binmode and odbc_longreadlen lines (I assumed this driver might handle the data natively), but no luck, same result: a Service Unavailable timeout error.
Is there a different approach to this with ODBC Driver 17?
Edit: I found out it hangs on ODBC_BINMODE_CONVERT. If I change it to ODBC_BINMODE_RETURN, it runs within a few seconds; however, the output is wrong. ODBC_BINMODE_CONVERT is indeed what I need, but it doesn't process the entire data in time (the timeout is 30 seconds), which is strange, because the VARBINARY field in the database is only 65K characters long, and it runs extremely fast in my local environment.
Edit 2: I've tried converting the incomplete binary data fetched from the database to hexadecimal and then to PNG, and it displays half of the image. So I am positive it is fetching the correct data; it just takes incredibly long to fetch that column, resulting in timeouts in almost every case.
OK. Finally figured it out. What ended up working for me was using the ODBC_BINMODE_RETURN flag instead of ODBC_BINMODE_CONVERT, and NOT applying the hex2bin() conversion at the end.
The code in my original question worked fine with {SQL Server}, and the following code works with {ODBC Driver 17 for SQL Server}:
$query = 'SELECT * FROM attachments WHERE LOC_BLOB_ID = ' . $blob_id;
$attach_result = odbc_exec($connection, $query);
odbc_binmode($attach_result, ODBC_BINMODE_RETURN);
odbc_longreadlen ($attach_result, 262144);
$attach_row = odbc_fetch_array($attach_result);
$binary = $attach_row['attachment_value'];
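To confirm the fetched bytes are intact, one quick check is to write them straight to disk (a sketch; the .png name assumes the attachment is the PNG image mentioned above):
// $binary already holds raw bytes under ODBC_BINMODE_RETURN; no hex2bin() step is needed.
file_put_contents('/tmp/attachment.png', $binary);
// Sanity check: compare this count with DATALENGTH(attachment_value) in SQL Server.
echo strlen($binary) . " bytes fetched\n";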
I connect to an SAP HANA database from PHP; the connector is unixODBC. (The table contains Spanish characters.)
When I try to select records, any field containing special characters (e.g. the euro sign) is skipped during fetch, and the ODBC log shows:
DIAG [S1000] [SAP AG][LIBODBCHDB SO][HDBODBC] General error;-10427 Conversion of parameter/column (8) from data type NVARCHAR to ASCII failed
The odbc.ini config:
[hanadb]
Driver = /usr/sap/hdbclient/libodbcHDB.so
ServerNode = 172.17.xx.xx:31015
(I tried adding the following lines, without any change:)
DriverUnicodeType=1
DriverManagerEncoding = UTF-8
Locale = es_ES
characterset=UTF8
IANAAPPCODEPAGE=2026
The PHP code:
$result = odbc_exec($link,"SELECT * FROM ZIF_TCONDW ");
while($datos=odbc_fetch_array($result)) {
$query=sprintf("INSERT INTO condiciones values
{...}
A field with text such as:
Pedidos de 701€ a 1200€
The script crashes, and the trace file shows:
DIAG [S1000] [SAP AG][LIBODBCHDB SO][HDBODBC] General error;-10427 Conversion of parameter/column (8) from data type NVARCHAR to ASCII failed
I also tried converting the type in the SELECT statement:
$result = odbc_exec($link,"SELECT LIFNR,ZONA,POSCOND,LEFT(STRTOBIN(CONCEPTO,'UTF-8') ,400) AS CONCEPTO, CONDICION ,ORDEN,AEDAT,AEUHR,AENAM FROM ZIF_TCONDW ");
or
$result = odbc_exec($link,"SELECT LIFNR,ZONA,POSCOND,base64_encode(CONCEPTO) AS CONCEPTO, base64_encode(CONDICION) AS CONDICION ,ORDEN,AEDAT,AEUHR,AENAM FROM ZIF_TCONDW ");
Neither made any difference.
This is a problem that occurs when the ODBC driver tries to hand over Unicode data to your client variables.
You might want to set the CHAR_AS_UTF8 = true connection option to avoid that.
See SAP HANA Client Interface Programming Reference.
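As a sketch, assuming CHAR_AS_UTF8 is accepted as a connection-string property (the driver path and ServerNode are the question's own values; credentials are placeholders):
// Assumption: CHAR_AS_UTF8 can be set in the connection string, per the
// SAP HANA Client Interface Programming Reference.
$link = odbc_connect(
    'DRIVER=/usr/sap/hdbclient/libodbcHDB.so;'
    . 'ServerNode=172.17.xx.xx:31015;'
    . 'CHAR_AS_UTF8=true;',
    'user', 'pass'
);
$result = odbc_exec($link, "SELECT * FROM ZIF_TCONDW");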
I want to use Ruby to read and insert data in a MySQL database to which the data was saved by PHP code. When I read Chinese data, it does not appear correctly; it comes out as 刘佳. But in a PHP page, the same Chinese data shows correctly as 刘佳.
I confirmed the database uses utf-8 charset (CHARSET=utf8 COLLATE=utf8_unicode_ci).
My Ruby code:
require 'active_record'
class Student < ActiveRecord::Base
end
ActiveRecord::Base.establish_connection(
adapter: 'mysql2',
host: 'xxxx',
username: 'xxxx',
password: 'xxxx',
database: 'xxx_db',
encoding: 'utf8'
)
puts Student.first.name
It outputs the garbled string "刘佳".
How can I read Chinese data correctly and save a new Chinese record to database?
puts Student.first.name
It outputs the garbled string "刘佳".
I believe that is because whatever device you are using to view the ruby program's output (a terminal window?) is not set to "UTF-8" (see below for how to check that).
As far as I can tell, you have done everything right:
MySQL docs (http://dev.mysql.com/doc/refman/5.0/en/charset-applications.html):
Specify character settings per database. To create a database such
that its tables will use a given default character set and collation
for data storage, use a CREATE DATABASE statement like this:
CREATE DATABASE mydb
DEFAULT CHARACTER SET utf8
DEFAULT COLLATE utf8_general_ci;
Tables created in the database will use utf8 and utf8_general_ci by
default for any character columns.
Applications that use the database should also configure their
connection to the server each time they connect. This can be done by
executing a SET NAMES 'utf8' statement after connecting. The statement
can be used regardless of connection method: The mysql client, PHP
scripts, and so forth.
Rails docs (http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/MysqlAdapter.html):
All the options for ActiveRecord::Base.establish_connection() are as follows (note the description for :encoding):
Options:
:host - Defaults to “localhost”.
:port - Defaults to 3306.
:socket - Defaults to “/tmp/mysql.sock”.
:username - Defaults to “root”
:password - Defaults to nothing.
:database - The name of the database. No default, must be provided.
:encoding - (Optional) Sets the client encoding by executing
“SET NAMES <encoding>” after connection.
:reconnect - Defaults to false (See MySQL documentation: dev.mysql.com/doc/refman/5.0/en/auto-reconnect.html).
:strict - Defaults to true. Enable STRICT_ALL_TABLES. (See MySQL documentation: dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html)
:variables - (Optional) A hash of session variables to send as `SET @@SESSION.key = value` on each database connection. Use the value `:default` to set a variable to its DEFAULT value. (See MySQL documentation: dev.mysql.com/doc/refman/5.0/en/set-statement.html).
:sslca - Necessary to use MySQL with an SSL connection.
:sslkey - Necessary to use MySQL with an SSL connection.
:sslcert - Necessary to use MySQL with an SSL connection.
:sslcapath - Necessary to use MySQL with an SSL connection.
:sslcipher - Necessary to use MySQL with an SSL connection.
(I had a hard time locating those, so I am posting all of them for future google searchers.)
And, when I run the following program in a terminal window, e.g.:
$ ruby 1.rb
where my terminal window is set to UTF-8:
~/ruby_programs$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
...
# encoding: UTF-8
require 'active_record'
require 'mysql2'
class Student < ActiveRecord::Base
end
ActiveRecord::Base.establish_connection(
adapter: 'mysql2',
#host: 'localhost', #this is the default
#username: 'root', #this is the default
#password: '', #this is the default
database: 'mydb2',
encoding: 'utf8'
)
#Insert a record in the db (It shouldn't matter whether a php or a ruby program writes to the database.)
Student.create(
name: "\u732a", #Because of the comment at top of the program, this
#string will be encoded in UTF-8
info: "a pig" #..so will this one.
)
name = Student.first.name
puts name
name.each_byte{|b| printf "%x \n", b}
puts
...the output I see in my terminal window is a Chinese character which, when compared to the Chinese character for 'pig', matches exactly, followed by:
e7
8c
aa
And if you look here: http://www.fileformat.info/info/unicode/char/732a/index.html, those bytes make up the UTF-8 encoding of the Unicode code point \u732a, which represents 'pig' in Chinese and is exactly the string that was inserted into the db.
In any case, you should run my program; if you get the same kind of garbling, that will prove your terminal's encoding is the problem.
I have an app with Doctrine 1, and I generate update_datetime fields for objects via (new Zend_Date())->getIso(). It worked just fine for years, but now I have a new notebook, and Doctrine tries to insert DATETIME fields as the string "2013-07-12T03:00:00+07:00" instead of the normal MySQL datetime format "2013-07-12 00:00:00", which is totally weird.
The very same code runs just fine on another computer. Everything is nearly identical: MySQL 5.6.12 and PHP 5.3.15 on both. Any idea where I should look?
Fatal error: Uncaught exception 'Doctrine_Connection_Mysql_Exception' with message 'SQLSTATE[22007]: Invalid datetime format: 1292 Incorrect datetime value: '2013-07-12T03:00:00+07:00' for column 'nextrun' at row 1' in library/Doctrine/Connection.php:1083
UPDATE
OK, with help from the StackOverflow community I finally solved it. The problem was with STRICT_TRANS_TABLES in the sql_mode variable. Changing it in /etc/my.cnf seemed not to be enough, so I had to run mysql -uroot and type the following:
set sql_mode=NO_ENGINE_SUBSTITUTION; set global sql_mode=NO_ENGINE_SUBSTITUTION;
This removes STRICT_TRANS_TABLES.
UPDATE2
How to get rid of STRICT forever? How to get rid of STRICT SQL mode in MySQL
If it is present, you can try removing STRICT_TRANS_TABLES from sql-mode in your my.ini.
Under strict mode, a datetime string in a format MySQL does not recognize is rejected rather than coerced to a MySQL datetime, which produces this error. This my.ini change was reported as a fix in:
PING causes 500 Internal Server Error - Incorrect datetime value
(AuthPuppy Bug #907203)
Date constants in Zend are determined by sniffing out the locale, in this order (from the Zend_Locale comments):
1. Given Locale
2. HTTP Client
3. Server Environment
4. Framework Standard
I'm thinking the difference between the two systems is going to be reflected in the Server Environment.
To correct and avoid this problem in the future, you can specify the locale options within your application.ini using these configuration directives:
resources.locale.default = <DEFAULT_LOCALE>
resources.locale.force = false
resources.locale.registry_key = "Zend_Locale"
The locale should be set to a string like en_US
Zend_Locale specifically sniffs the locale from the environment by calling setlocale and parsing the results.
This is caused by Zend not setting your timestamp format to one that matches what MySQL is expecting. You could disable STRICT mode in MySQL, but this is a hack and not a solution (MySQL will attempt to guess what the date you're entering is).
In Zend you can set the datetime format to what MySQL is expecting to solve this:
$log = new Zend_Log ();
$log->setTimestampFormat("Y-m-d H:i:s");
On my home computer,
mysql_fetch_row( mysql_query(" select b'1' ") )[0]
returns string "1".
But when hosted on webserver it returns string having ASCII character 1.
The docs do say:
Bit values are returned as binary values. To display them in printable
form, add 0 or use a conversion function such as BIN().
But on my local machine it still returns "1" without any conversion done by me.
How can I get the same behavior on my web server? If I get the same behavior, then I won't have to change my PHP code from
$row = mysql_fetch_row( mysql_query(" select bit1_field from .. where .. ") );
if( $row[0] === '1' ) ...;
to
... select bit1_field+0 as bit1_field ...
where bit1_field is of type bit(1).
It seems you are using two different drivers on the two machines. There are two: php5-mysqlnd and php5-mysql. Website Factor wrote about the different behavior for BIT fields in late April, and I also have several machines with the same PHP version but different drivers. It's probably because the driver is not changed when upgrading from an earlier version, whereas a fresh install of PHP > 5.4 comes with php5-mysqlnd by default. Here is the MySQL page about the differences.
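Until both machines run the same driver, a defensive comparison covers both return styles (a sketch reusing the question's mysql_* calls; the table name and WHERE clause are placeholders):
$row = mysql_fetch_row(mysql_query("select bit1_field from mytable where id = 1"));
// php5-mysql and php5-mysqlnd disagree: one returns the string "1",
// the other the raw byte 0x01, so accept either representation.
$isSet = ($row[0] === '1') || ($row[0] === "\x01");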