I encountered this error when I attempted to insert something into a MySQL table. What are the possible reasons, and how can I solve this problem?
The raw value of "budget" is 800元; after the insert it became 800, with the 元 missing.
This would mean that you are trying to insert data that would overflow the allocated storage for that column.
If your SQL mode is set to strict, any insert of data that does not fit will cause an error and the insert will fail; if the mode is non-strict, the insert will succeed with truncation and a warning. But this should not be used as a workaround for the problem. Instead, identify the field that is being truncated and alter the table to make it wide enough to accommodate the widest data your application might give MySQL.
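For illustration, a minimal sketch of both checks; budget is the column mentioned in the question, but the table name expenses and the VARCHAR(20) size are assumptions, not taken from the question:
SHOW VARIABLES LIKE 'sql_mode';                  -- see whether a STRICT_* mode is active
ALTER TABLE expenses MODIFY budget VARCHAR(20);  -- widen the column so the full value fits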
Related
I have 2 servers connected over a low speed wan and we're running SQL Server 2008 with Merge replication.
At the subscriber, sometimes when attempting to insert new rows, I get this error:
A trigger returned a resultset and/or was running with SET NOCOUNT OFF
while another outstanding result set was active.
My database doesn't have any triggers of its own; the only triggers are the ones created by the Merge replication.
Also, whenever this error occurs, it automatically rolls back the existing transaction.
I am using DataTables and TableAdapters to insert and update the database using transactions
What I have checked:
The database log file size is below 50 MB
Checked the source code for Zombie transactions (since I wasn't able to retrieve the actual error at the beginning)
Checked the connection between the two servers and found it congested
Questions:
How can I avoid this behavior, and why is it occurring in the first place?
Why is it cancelling the open transaction?
The trigger is returning a "result set" of sorts: the rows-affected message, "(1 row(s) affected)" (or n rows). I don't know why this is being interpreted as a result set, but it is.
I had a similar issue today and I realized that this issue can be fixed by putting a SET NOCOUNT OFF towards the end of the trigger. Just having the SET NOCOUNT ON at the top of the trigger is not sufficient.
The trigger I am referring to is the pre-made "insert" statement in your application. This is most likely the statement throwing the SQL error.
Now you can use sp_configure 'disallow results from triggers', 1, but that will disable result sets from triggers for the entire server, which may not be desirable in case you are expecting some other trigger to return a result set.
If my answer doesn't suffice, the OP of the source I used described the exact same problem you are having. Also, MSDN says:
The ability to return result sets from triggers will be removed in a future version of SQL Server. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do this. To prevent triggers from returning result sets in SQL Server 2005, set the disallow results from triggers Option to 1. The default setting of this option will be 1 in a future version of SQL Server.
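For reference, a sketch of that configuration change; I believe it is an advanced, server-wide option, which is why 'show advanced options' is enabled first:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'disallow results from triggers', 1;
RECONFIGURE;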
Set this at the top of the script:
SET NOCOUNT ON;
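For illustration, a minimal sketch of a trigger with SET NOCOUNT ON at the top; the trigger, table, and column names here are made up and not taken from the replication setup in the question:
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "n row(s) affected" messages produced by the trigger body
    INSERT INTO dbo.OrdersAudit (OrderID, AuditedAt)
    SELECT OrderID, GETDATE() FROM inserted;
END;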
I got the same issue and have a solution.
The problem was a stored procedure that returned a result set and was executed inside this trigger.
It seems that this stored procedure caused the trigger to have a result set as well.
I had the same problem.
The problem was an insert with select:
insert into table (...)
select ...
The problem was that the select used GROUP BY with aggregate functions and returned:
Warning: Null value is eliminated by an aggregate or other SET operation.
This was causing the problem, so I changed max(field) to max(coalesce(field,'')) to avoid passing NULL values to the MAX aggregate.
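A sketch of that rewrite, with made-up table and column names since the real statement is not shown here:
INSERT INTO target_table (customer_id, last_note)
SELECT customer_id, MAX(COALESCE(note, ''))   -- COALESCE keeps NULLs out of the aggregate
FROM source_table
GROUP BY customer_id;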
Context: I am using Doctrine to store an array as longtext in a MySQL column. I received Notice: unserialize(): Error at offset 250 of 255 bytes. Backtracking, I realized the serialized string was truncated, supposedly because it is too big for a longtext column. I really doubt that is the case; this string is nowhere near 4GB.
Someone on this question suggested taking a look at max_allowed_packet, but mine is 32M.
a:15:{i:0;s:7:"4144965";i:1;s:7:"4144968";i:2;s:7:"4673331";i:3;s:7:"4673539";i:4;s:7:"4673540";i:5;s:7:"4673541";i:6;s:7:"5138026";i:7;s:7:"5140255";i:8;s:7:"5140256";i:9;s:7:"5140257";i:10;s:7:"5140258";i:11;s:7:"5152925";i:12;s:7:"5152926";i:13;s:7:"51
Mysql table collation: utf8_unicode_ci
Any help would be greatly appreciated !!
Full Error
Operation failed: There was an error while applying the SQL script to the database.
ERROR 1406: 1406: Data too long for column 'numLotCommuns' at row 1
SQL Statement:
UPDATE `db`.`table` SET `numLotCommuns`='a:15:{i:0;s:7:\"4144965\";i:1;s:7:\"4144968\";i:2;s:7:\"4673331\";i:3;s:7:\"4673539\";i:4;s:7:\"4673540\";i:5;s:7:\"4673541\";i:6;s:7:\"5138026\";i:7;s:7:\"5140255\";i:8;s:7:\"5140256\";i:9;s:7:\"5140257\";i:10;s:7:\"5140258\";i:11;s:7:\"5152925\";i:12;s:7:\"5152926\";i:13;s:7:\"51}' WHERE `id`='14574'
The column was a tinytext...
The only logical explanation I can see is that either the default was tinytext when I created my table in an earlier version of Doctrine
OR
I changed the column type in the Doctrine annotations at some point and the update didn't fully convert the type correctly.
Bottom line, check your types even though you use an orm.
Your column must have been defined as varchar(250).
You need to convert it to LONGTEXT first.
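A sketch of that conversion, using the table and column names from the UPDATE statement in the question:
ALTER TABLE `db`.`table` MODIFY `numLotCommuns` LONGTEXT;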
I seem to be having an issue when updating records on a specific table.
For reference here is an example of the query that throws an error:
UPDATE `dbname`.`tblname` SET `CustomerID` = '543' WHERE `tblname`.`Issue_ID` = 440
I am able to insert, delete and query rows, as well as update other columns however whenever trying to update the CustomerID field (int, non-null) it throws an error saying:
#1054 - Unknown column 'Revision' in 'field list'
I have full rights to both the database and the table; however, when trying to update the CustomerID column on any row, I get the same error even though Revision isn't in the query at all.
I looked into the issue a great deal, even using a regex in my PHP code to remove all non-printable characters, but the same error is thrown even when running the query from phpMyAdmin.
If anyone has insight into this error it would be greatly appreciated.
Table description:
You may encounter this if an update trigger is firing that references a column which does not exist. The offending trigger may not even be trying to read from or write to this table; the column may be missing from whatever table it does reference. Further, you could kick off a cascade of such triggers and have the problem buried more than one layer deep.
To show triggers:
http://dev.mysql.com/doc/refman/5.7/en/show-triggers.html
To modify them:
http://dev.mysql.com/doc/refman/5.7/en/trigger-syntax.html
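As a starting point, something like this; the schema name is taken from the UPDATE statement above, and trigger_name is a placeholder:
SHOW TRIGGERS FROM `dbname`;       -- list triggers and the tables they fire on
SHOW CREATE TRIGGER trigger_name;  -- inspect the body of a suspicious trigger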
I have a MySQL database as above, with Null set to Yes for every column except UID. The data is collected from a survey, and it works when all the entries fit the "Type" defined in MySQL.
Here's the problem, though: sometimes users input something that doesn't fit the criteria, e.g. a user enters a varchar(4) value for age instead of an int(3) one.
What happens now is that the whole row is not inserted as a result of that single error. What I want is a way for only the age entry to be omitted. I thought setting Null to Yes would solve the problem, but apparently it doesn't. Please help :)
ALLOW NULL doesn't mean "if invalid, insert NULL". You should validate the data prior to the insert.
You can use the IGNORE keyword when inserting, like this:
INSERT IGNORE INTO tbl
Which provides the following functionality:
Data conversions that would trigger errors abort the statement if
IGNORE is not specified. With IGNORE, invalid values are adjusted to
the closest values and inserted; warnings are produced but the
statement does not abort. You can determine with the mysql_info() C
API function how many rows were actually inserted into the table.
So data conversion errors will not abort the statement; with IGNORE, rows that would violate a unique or primary key constraint are also skipped with a warning rather than aborting the statement.
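A small illustration with a made-up table: with IGNORE, a non-numeric value for an INT column is adjusted to the closest valid value (0) and the statement still succeeds:
INSERT IGNORE INTO survey (uid, name, age) VALUES (42, 'Alice', 'abcd');
SHOW WARNINGS;   -- reports the data-conversion warning for the age column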
In most cases you should be validating the data yourself to make sure it's the right type, and possibly giving the user an error message if they enter something invalid. Use the simple function is_numeric($age) to find out if age is a valid number. If it is, you can go on to use $age = intval($age) to prevent some smartypants from entering their age as 18.1667. If $age isn't a number, you can manually set it to null and it will be entered into your database that way.
A slightly faster way would be to just include the code (int)$age as part of your insert query. However, casting a string to an int gives 0, and for some fields 0 may also be a valid response! So don't do that.
There are a few similar questions but I'll throw this in the mix.
Basically, I have a timestamp column (which is an int), and one of my updates is ONLY updating this timestamp column, sometimes by a barely noticeable amount. For instance, it might change from 1316631442 to 1316631877. Not really much of a difference between the two.
So the record is being updated; I can check in phpMyAdmin before the query is run and then afterward and see the difference. However, I'm doing a var_dump() on the affected-row count and it remains 0.
If I update another column at the same time to a different value then the affected rows are 1.
So what does it take for the row to count as affected? It clearly is being affected, since the update is successful.
Also, I'm using the Laravel PHP framework and its query builder. Currently doing a bit of debugging in there to see if something may be off but so far all seems to be well.
EDIT: Sorry I had mistyped something above. When I completely change another column's value the affected rows is 1, not 0.
For those curious I resorted to using an INSERT INTO... ON DUPLICATE KEY UPDATE to achieve the desired result. I still couldn't figure out why MySQL wasn't reporting the record as being affected.
I also tried using PHP's mysql_query() and related functions to check the affected rows, but that didn't work either. Perhaps MySQL doesn't deem the change worthy of marking the record as affected.
I can replicate the issue with MySQL 5.1 but not with MySQL 5.7.
In the earlier version
UPDATE `test`
SET
`time` = TIMESTAMPADD(MINUTE, 10, NOW())
WHERE
`id` = 1
does update the timestamp but the message is that zero rows have been affected.
With the later version of MySQL, the same query gives me one row affected.
It seems to be a MySql bug that has been fixed.
Time to get the Server admin to update some stuff?
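If it helps to confirm the behaviour, ROW_COUNT() reports the same number the client receives; a hedged sketch against the test table used above:
UPDATE `test` SET `time` = TIMESTAMPADD(MINUTE, 10, NOW()) WHERE `id` = 1;
SELECT ROW_COUNT();   -- reported 0 on the 5.1 server and 1 on 5.7 for the same change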