SQL INSERT with ExecuteNonQuery() inserts extra decimal places - c#

I have an asp.net application written in C#. I have an insert button that takes user-entered information on the page, adds the values to a list, and sends the list to a method that inserts the data to a MSSQL database table.
Everything is working as expected except for fields with data type set to 'float' on the backend.
If I enter a value such as '70.3' in the textbox for field 'Temperature', the value inserted actually becomes '70.3000030517578'. I also have a button that calls an update method. If I change the '70.3000030517578' back to '70.3' and send it to the update method, then 70.3 is correctly inserted in the database.
Here is my SqlCommand Parameter code:
cmd.Parameters.AddWithValue("#temp", listIn[0].Temperature == null ? (object)DBNull.Value : listIn[0].Temperature);
After the above line runs, I checked the parameter in the debugger and confirmed that its Value is 70.3.
I compared this to the corresponding parameter in the update method, since that method works properly. The only two differences are in the DbType and SqlDbType properties.
So I tried the below line which did not change anything (still adds extra decimal places):
cmd.Parameters.Add("#temp", SqlDbType.Float).Value = listIn[0].Temperature == null ? (object)DBNull.Value : listIn[0].Temperature;
If I change the data type on the backend to decimal(18,3), then the insert method correctly inserts '70.300'. Is my only option to use the decimal data type instead of float? We would prefer to use float if possible. Any ideas are welcome. I have not been doing this long, so I apologize if my question is not detailed enough.

Read about Using decimal, float, and real Data:
Approximate numeric data types do not store the exact values specified
for many numbers; they store an extremely close approximation of the
value. For many applications, the tiny difference between the
specified value and the stored approximation is not noticeable. At
times, though, the difference becomes noticeable. Because of the
approximate nature of the float and real data types, do not use these
data types when exact numeric behavior is required, such as in
financial applications, in operations involving rounding, or in
equality checks. Instead, use the integer, decimal, money, or
smallmoney data types.

You're probably running into something related to floating-point precision/error. 70.3 is not a value that can be represented precisely in the float/real type. The closest value that can be stored (following the IEEE 754 standard) is 70.30000305175781.
http://www.h-schmidt.net/FloatConverter/IEEE754.html
If you need accuracy in base 10, decimal is probably the way to go. Otherwise, you'll have to do the rounding yourself.
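A minimal sketch of both options from this answer, assuming a hypothetical Readings table with a Temperature column: a typed Decimal parameter keeps exactly the digits you pass, while keeping the FLOAT column means rounding the approximate value yourself on read.

using System;
using System.Data;
using System.Data.SqlClient;

class TemperatureSketch
{
    // Option 1: change the column to DECIMAL(18,3) and use an explicitly typed parameter.
    static void InsertAsDecimal(SqlConnection conn, double? temperature)
    {
        using (var cmd = new SqlCommand("INSERT INTO Readings (Temperature) VALUES (@temp)", conn))
        {
            var p = cmd.Parameters.Add("@temp", SqlDbType.Decimal);
            p.Precision = 18;
            p.Scale = 3;
            p.Value = temperature.HasValue ? (object)(decimal)temperature.Value : DBNull.Value;
            cmd.ExecuteNonQuery();
        }
    }

    // Option 2: keep the FLOAT column and round the approximate value when reading it back.
    static double ReadRounded(SqlDataReader reader)
    {
        double raw = reader.GetDouble(reader.GetOrdinal("Temperature"));
        return Math.Round(raw, 1);   // 70.3000030517578 -> 70.3
    }
}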

Related

Store values in separate, C# type-specific columns or all in one column?

I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if there is a more preferred method anyone can think of to reach the same conclusion, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
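A rough sketch of that idea, assuming System.Text.Json is available (Newtonsoft.Json works the same way) and a hypothetical settings class; the serialized string goes into a single NVARCHAR(MAX) column.

using System.Text.Json;

public class AppSettings
{
    public string Theme { get; set; }
    public int TimeoutSeconds { get; set; }
    public decimal PriceFactor { get; set; }
}

public static class SettingsStore
{
    // Serialize once and store the string in one column,
    // e.g. INSERT INTO Configuration (FieldName, Value) VALUES (@name, @json).
    public static string Save(AppSettings settings) =>
        JsonSerializer.Serialize(settings);

    // Clients read the row back and deserialize as needed; new properties can be
    // added later without any schema change.
    public static AppSettings Load(string json) =>
        JsonSerializer.Deserialize<AppSettings>(json);
}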
We are using the second solution and it works well. Remember that disk access is orders of magnitude slower than, for example, a casting operation (milliseconds vs. nanoseconds, see ref), so do not look for the bottleneck here.
One solution would be to implement a polymorphic association (1, 2), but I don't think there is a need for that, or that you should do this. The second solution is close to a non-SQL database: you can dump anything in as a value, even the entire HTML markup for a page. It should be the caller's responsibility to know what to do with the data.
Also, see these threads on how to store settings in a DB: 1, 2 and 3 for critique.
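For the varbinary option itself, a rough sketch of the caller-side conversion described above, with an assumed DataType discriminator column and illustrative type tags only:

using System;
using System.Globalization;
using System.Text;

static class ConfigValueReader
{
    // The caller decides how to interpret the raw bytes based on DataType.
    public static object Materialize(string dataType, byte[] value)
    {
        switch (dataType)
        {
            case "int":     return BitConverter.ToInt32(value, 0);
            case "string":  return Encoding.UTF8.GetString(value);
            case "decimal": return decimal.Parse(Encoding.UTF8.GetString(value), CultureInfo.InvariantCulture);
            default: throw new NotSupportedException("Unknown DataType: " + dataType);
        }
    }
}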

Entity framework appending a decimal place to a field of type decimal(10,0)

I'm using EF database-first. In my database I have a field which I know will always be 10 digits long, so naturally I opted for a decimal(10,0) type. When I insert values into the table directly I can insert any number up to 10 digits long; however, when I insert an entity with EF6 it appends a decimal place and then throws a "parameter value out of range" error. The type of the field in my C# code is decimal.
Here is the entity immediately before calling context.SaveChanges():
And as a sanity check, here is the column in SQL Server:
EDIT:
Here is the EF mapping:
Just reported it on CodePlex.
The error sounds like a bug in EF, but if the value is always an integer, wouldn't it make more sense to use an int instead of decimal? I am thinking for both logical and performance reasons.
This is actually a bug in SqlClient - SqlDecimal code doesn't handle the precision correctly for some edge cases.
I believe the culprit is here:
http://referencesource.microsoft.com/#System.Data/data/System/Data/SQLTypes/SQLDecimal.cs#416
However, given that this has been the behavior for a long time, it is very unlikely it will ever be fixed. I would recommend working around this by using bigint, as user1648371 and Andrew Morton have suggested.
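A sketch of that workaround, with illustrative table and property names; a 10-digit number fits comfortably in a bigint, which maps to a 64-bit integer and involves no decimal scale at all.

// SQL side (run once, then refresh the database-first model):
//   ALTER TABLE Orders ALTER COLUMN ReferenceNumber bigint NOT NULL;

public class Order
{
    // bigint maps cleanly to long, so EF never touches SqlDecimal for this column.
    public long ReferenceNumber { get; set; }
}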

inserting c# double values to sql server float field. (no fractional part)

I am trying to insert double values into my database via EF 5. I generated the EF entity model from the database. There is a price column in the table which is float, and naturally EF generated a double type for the mapper class.
I read some string values from a file, convert them to doubles, and save them to the DB. When I debug I can see that the values are converted correctly. For example, the string value "120,53" is converted to the double 120.53, just fine. But when I save my context it goes into the DB as "12053".
What can cause such a problem? Is there any setting in SQL Server has anything to do with this?
I believe the problem is in your Convert.ToDouble call and the ',' being used as the decimal separator. Without supplying a culture for the number, it's likely interpreting the ',' as a thousands separator and ignoring it. Try passing CultureInfo.CurrentCulture as the second argument (IFormatProvider) if the culture of your environment is one where ',' is used as the decimal separator. Or, since it looks like you're replacing a decimal point with the ',', just don't replace it.
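A small sketch of that point; the culture name here is just one example of a culture where ',' is the decimal separator.

using System;
using System.Globalization;

string raw = "120,53";

// With a culture whose group separator is ',', the comma is dropped:
// Convert.ToDouble(raw, CultureInfo.InvariantCulture)  ->  12053

// Supplying a culture that uses ',' as the decimal separator parses it as intended:
double price = Convert.ToDouble(raw, CultureInfo.GetCultureInfo("de-DE"));  // 120.53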

How do I keep my stored procedure inputs from being silently truncated?

We use the standard System.Data classes, DbConnection and DbCommand, to connect to SQL Server from C#, and we have many stored procedures that take VARCHAR or NVARCHAR parameters as input. We found that neither SQL Server nor our C# application throws any kind of error or warning when a string longer than maximum length of a parameter is passed in as the value to that parameter. Instead, the value is silently truncated to the maximum length of the parameter.
So, for example, if the stored procedure input is of type VARCHAR(10) and we pass in 'U R PRETTY STUPID', the stored procedure receives the input as 'U R PRETTY', which is very nice but totally not what we meant to say.
What I've done in the past to detect these truncations, and what others have likewise suggested, is to make the parameter input length one character larger than required, and then check if the length of the input is equal to that new max length. So in the above example my input would become VARCHAR(11) and I would check for input of length 11. Any input of length 11 or more would be caught by this check. This works, but feels wrong. Ideally, the data access layer would detect these problems automatically.
Is there a better way to detect that the provided stored procedure input is longer than allowed? Shouldn't DbCommand already be aware of the input length limits?
Also, as a matter of curiosity, what is responsible for silently truncating our inputs?
Use VARCHAR(8000), NVARCHAR(4000), or even N/VARCHAR(MAX), for all the variables and parameters. This way you do not need to worry about truncation when assigning @variables and @parameters. Truncation may occur at the actual data write (insert or update), but that is not silent; it will trigger a hard error and you'll find out about it. You also get the added benefit of the stored procedure code not having to be changed with schema changes (change a column length and the code is still valid). And you also get better plan cache behavior from using consistent parameter lengths; see How Data Access Code Affects Database Performance.
Be aware that there is a slight performance hit for using MAX types for @variables/@parameters; see Performance comparison of varchar(max) vs. varchar(N).
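If you would rather keep the declared lengths and have the data access layer catch over-long input, here is a rough sketch (procedure and parameter names are made up): declare the parameter with an explicit Size and check the length yourself before calling, since ADO.NET will otherwise truncate the value to Size without complaint.

using System;
using System.Data;
using System.Data.SqlClient;

static class CommentDal
{
    public static void SaveComment(SqlConnection conn, string comment)
    {
        const int maxLen = 10;   // must match the VARCHAR(10) parameter in the proc
        if (comment != null && comment.Length > maxLen)
            throw new ArgumentException("Value exceeds " + maxLen + " characters and would be silently truncated.");

        using (var cmd = new SqlCommand("dbo.SaveComment", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@comment", SqlDbType.VarChar, maxLen).Value = (object)comment ?? DBNull.Value;
            cmd.ExecuteNonQuery();
        }
    }
}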

SQL adding random Decimal points when getting a real from DB to a Double in C#

When I enter a double number from C# into the DB, I use a real type in the DB. When I check the value, it is exactly what I entered.
Now when I go to retrieve the value from the DB and store it into a C# double, it adds random decimal values at the end. The number in the database is the correct value that I want, but the value in C# as a double looks random (sometimes higher, sometimes lower than the actual value).
i.e.:
- Enter 123.43 from C# as a double into a SQL real column.
- View the value in the DB; it's exactly 123.43.
- Get the value from the DB and store it into a C# double; the value in the double is now 123.4300000305176.
I have tried changing the type to float, decimal, etc. in the DB, but those types actually alter the value when I put it into the DB, showing the same sort of extra digits as above.
Any help on what's going on or how to fix it?
You should probably be using the Decimal type. See What represents a double in SQL Server? for further explanation.
Try using a Single if your DB type is real. Check here for the data type mappings: http://msdn.microsoft.com/en-us/library/ms131092.aspx
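A minimal sketch of that mapping, with a hypothetical Price column; REAL is single precision, so read it as a .NET Single (float) rather than widening it straight to double.

using System;
using System.Data.SqlClient;

static class PriceReader
{
    public static float ReadPrice(SqlDataReader reader)
    {
        // GetFloat returns the REAL column as a .NET Single, which round-trips it
        // exactly; widening to double is what exposes the extra digits.
        return reader.GetFloat(reader.GetOrdinal("Price"));
    }

    // If the value must end up in a double, round to the precision you care about:
    public static double ReadPriceAsDouble(SqlDataReader reader) =>
        Math.Round((double)ReadPrice(reader), 2);   // 123.43
}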
