Decimal to Float conversion issue in C#

Can anyone please help me with this problem?
I have a nullable "real" column in the database that Entity Framework represents as "float?".
The system does some calculations in decimal, but when it assigns the value to this column the precision is lost.
The system is using the following conversion:
dbObject.floatCol = (float) decimalVar;
decimalVar is 123.12341
but dbObject.floatCol is 123.1234
I have been trying to do this conversion in other ways without luck.
NOTES:
In the database I can update the value to 123.12341 without any problem.
I can't alter the database's column
I'm using Visual Studio 2010 with .NET Framework 4
EDIT:
If I debug the problem (with the actual numbers from my issue) I get:
decimal decimalVar = new decimal(123.13561);
// decimalVar is 123.13561
float? floatVar = 0;
// floatVar is 0
floatVar = (float)decimalVar;
// floatVar is 123.135612 <- a 2 has appeared at the end
// with a different decimal number
decimalVar = new decimal(128.13561); // replaced the 123 for 128
// decimalVar is 128.13561
floatVar = (float)decimalVar;
// floatVar is 128.1356 <- the 1 at the end has disappeared
The behavior is really weird.
I need to know if it is possible to do a correct conversion.
When I try to print the values to the Console or a log file, they are different :s

You are getting about 7 digits of precision, which is what you can expect from a C# float. If you need more precision, use a C# double instead; it gives you about 15 to 16 digits of precision.
double d = (double)decimalVar;
Also, you must distinguish between the C# data types and the database data types. In C#, the best data type for reading a database decimal column is decimal.
Depending on the database, different names are used for the numeric column types, and those names differ from the C# types. A SQL Server float corresponds to a C# double; a SQL Server real corresponds to a C# float (.NET Single).
See: SQL Server Data Type Mappings
Concerning your EDIT: float and double values internally have a binary floating-point representation, which requires decimal numbers to be rounded to the closest matching binary value. This binary value is then converted back to a decimal representation for display. Depending on whether the value was rounded up or down, this might add or remove decimal digits. This is what you experienced.
See Jon Skeet's article on Binary floating point and .NET
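To make the difference concrete, here is a minimal sketch (using the number from the EDIT above) that converts the same decimal to both float and double; only the double keeps all the digits:
using System;

class PrecisionDemo
{
    static void Main()
    {
        decimal decimalVar = 123.13561m;

        // float (System.Single) keeps roughly 7 significant digits,
        // so the last digit is not guaranteed to survive the cast.
        float asFloat = (float)decimalVar;

        // double (System.Double) keeps roughly 15-16 significant digits,
        // which is plenty for this value.
        double asDouble = (double)decimalVar;

        Console.WriteLine(asFloat);   // an approximation, e.g. 123.1356
        Console.WriteLine(asDouble);  // 123.13561
    }
}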

Related

Decimal out of range exception

In my C# application, I am trying to save a decimal price into a SQL Server table. The column's type is decimal, with no total digits defined.
Without discount calculations, everything works fine.
But when I run some calculations, the final value is 21800, and I get the following error when trying to save it:
"Parameter value '218000/00000000000000' is out of range."
I don't understand where the extra zeros come from! The only thing I know is that myValue.ToString() also gives me 218000/00000000000000!
I understand that the digits after the decimal point are caused by the calculations. But whatever they do, the value is 21800 when I watch it. Why is it trying to save 218000/00000000000000? Why does it care about the zeros at all?
Any ideas why this happens and how to fix it?
There are two ways to store decimal numbers:
Fixed-Point: This is what you have defined in your SQL Server database. You defined decimal(19,4), which means 19 digits in total and 4 digits after the decimal point. If you define such a type, then every number always has 4 digits after the decimal point, and you cannot store a number with more than 4 digits after the decimal point. Note that there is no equivalent to fixed point in C#!
Floating Point: These are the types float, double and decimal in C#, and the type float(n) in SQL Server. In floating point, as the name says, you can basically vary the number of digits before and after the decimal point. (Technically, you have a fixed number of digits, the so-called mantissa, and you can then move the decimal point around with a second number, the exponent.)
So now to your project: in your database you use fixed point, while in C# you have floating point. Therefore, when you calculate something in C# and want to store it in your database, it is always converted from floating point to fixed point. If your C# number does not fit the fixed-point column in the database, you get the out-of-range error.
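As a rough sketch of that conversion step (assuming the decimal(19,4) column mentioned below; the rounding mode is an arbitrary choice), you can round the computed decimal to the column's scale before handing it to the parameter, so it always fits the fixed-point definition:
using System;

class ScaleDemo
{
    static void Main()
    {
        // A value whose scale has grown during intermediate calculations.
        decimal computed = 21800m * 1.00000000000000m;

        // Round to the 4 decimal places a decimal(19,4) column can hold.
        // MidpointRounding.AwayFromZero is an assumption; use whatever
        // rounding rule your business logic requires.
        decimal forDb = decimal.Round(computed, 4, MidpointRounding.AwayFromZero);

        Console.WriteLine(computed); // 21800.00000000000000
        Console.WriteLine(forDb);    // 21800.0000
    }
}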
I had to specify the total digits of my decimal column in the SQL table (I set it as decimal(19,4)).
Also, the corresponding C# field needed to have the right precision.
I still don't know about the extra zeros and why SQL Server cares about them. Any explanation is appreciated.

ASP.NET 4 MVC Decimal Overflow with Fluent NHibernate and SQL Server 2008 RC2

The following problem occurs in a Web API application in ASP.NET MVC 4, connected to SQL Server 2008 R2 via Fluent NHibernate.
I have a form that can store a decimal number with up to 15 integer digits and 14 decimal places. The database column is defined as decimal(29,14), as is the mapped property: Map((x) => x.Factor).Column("FACTOR").Precision(29).Scale(14).Not.Nullable().
That field should hold any value matching the mask 999999999999999.99999999999999, but it doesn't. That number causes an OverflowException. I believe that is because of the number limitation described in the references: C# and SQL Server.
I really don't understand this notation, -7.9 x 10^28 to 7.9 x 10^28 (from the C# reference) or -10^38 + 1 to 10^38 - 1 (from the SQL Server reference), but I think that what limits the number is the SQL Server decimal, because the error occurs on the transaction commit. The ViewModel shows the right number.
What precision/scale do I need to set on the table column in order to accept the application value?
I really don't understand this notation, -7.9 x 10^28 to 7.9 x 10^28 (from the C# reference)
I prefer the explanation in Decimal Structure (MSDN).
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
Your decimal number would be written 9 (repeated 29 times) / 10^14.
The problem is that 9 repeated 29 times, which is greater than 9 x 10^28, is too big to fit in the 96-bit integer representation (whose maximum value is the 7.9 x 10^28 of your first definition, which you will find in the Remarks section of the link above).
In fact, if you try to write
decimal d = 99999999999999999999999999999m;
you will get an error stating "Overflow in constant value computation".
I guess the thing to remember is:
You can have up to 29 significant digits as long as these significant digits don't form an integer greater than 7.9 x 10^28
It seems that no error is issued when writing
decimal d = 999999999999999.99999999999999m;
but some rounding happens. Running some tests on your issue with an SQLite database, it turns out that the number passed as a parameter is 1000000000000000.0000000000000, which is (29,13) and is not convertible to (29,14).
Trying to insert 1000000000000000.0000000000000 in a (29,14) column results in "Arithmetic overflow error converting numeric to data type numeric."
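A minimal sketch that reproduces the rounding described above (the literal has 29 significant digits, more than the 96-bit mantissa can hold at scale 14, so the compiler rounds it):
using System;

class DecimalLimitDemo
{
    static void Main()
    {
        // The largest value a C# decimal can hold, about 7.9 x 10^28.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // 29 nines do not fit in the 96-bit mantissa at scale 14,
        // so the literal is rounded at compile time.
        decimal d = 999999999999999.99999999999999m;
        Console.WriteLine(d); // 1000000000000000.0000000000000
    }
}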

Convertion of textbox value to single adds extra precision in C#

Converting the value of a textbox with Convert.ToSingle() adds some extra precision, e.g. a value like 2.7 becomes 2.7999999523162842. I don't want any rounding of this value, because I have to store the exact value entered by the user in the DB. So if anyone has had this issue before, please post the solution; any help will be appreciated.
That's because a float is stored as a binary fraction, which cannot exactly represent the number 2.7 -- when you try to assign a number that is not a binary fraction to a float or a double, you will inevitably end up with an inexact representation.
You need to use a decimal if you want to precisely represent decimal numbers like 2.7.
var myDecimal = Convert.ToDecimal(input);
Similarly, to store this value in a SQL Server database you need to use a SQL data type capable of storing precise decimal numerics. Conveniently, this data type is called decimal in SQL as well.
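A rough sketch of that approach (the invariant culture is an assumption; use whatever culture your textbox input actually follows). Parsing straight into decimal keeps exactly what was typed, while going through float introduces an approximation:
using System;
using System.Globalization;

class TextBoxToDecimalDemo
{
    static void Main()
    {
        string input = "2.7"; // e.g. the Text property of the textbox

        // A float can only approximate 2.7, so hidden extra digits appear.
        float asFloat = Convert.ToSingle(input, CultureInfo.InvariantCulture);

        // A decimal keeps the value exactly as the user typed it.
        decimal asDecimal = Convert.ToDecimal(input, CultureInfo.InvariantCulture);

        Console.WriteLine((double)asFloat); // exposes the approximation, e.g. 2.70000004768372
        Console.WriteLine(asDecimal);       // 2.7
    }
}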

What's the correct data type I should use in this case?

I have to represent numbers in my database which are amounts of chemical substances in food, like fats, energy, magnesium and others. These values are decimals in the format 12345.67.
If I use decimal(5,2) as the data type in SQL Server, it maps to the Decimal type in Entity Framework. If I use float as the data type in SQL Server, it maps to Double in Entity Framework.
I'm not sure what the best data type in SQL Server would have to be, or doesn't it really matter a lot?
EDIT - in my case it should be decimal(7,2), as mentioned in some of the remarks!
Thanks.
You need decimal(7,2):
7 is the total number of digits
2 is the number of digits after the decimal point
Differences:
float is approximate and will give unexpected results
decimal is exact
References:
Accuracy problem with floating numbers
SQL Server Float data type calculation vs decimal
DECIMAL(7,2) would be better than float - it's exactly what you need (5 + 2 digits). With floating-point types (e.g. float, double) you may have problems, e.g. with rounding.
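A small sketch of the "approximate vs. exact" point: adding 0.1 repeatedly drifts with binary floating point but stays exact with decimal, which is why decimal(7,2) is the safer choice for these amounts:
using System;

class ApproxVsExactDemo
{
    static void Main()
    {
        double doubleSum = 0.0;
        decimal decimalSum = 0.0m;

        // Add 0.1 one hundred times; the exact answer is 10.
        for (int i = 0; i < 100; i++)
        {
            doubleSum += 0.1;
            decimalSum += 0.1m;
        }

        Console.WriteLine(doubleSum == 10.0);   // False: binary floating point drifts
        Console.WriteLine(decimalSum == 10.0m); // True: decimal sums 0.1 exactly
        Console.WriteLine(doubleSum);           // e.g. 9.99999999999998
    }
}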

How can I 'trim' a C# double to the value it will be stored as in an sqlite database?

I noticed that when I store a double value such as x = 0.56657011973046234 in an sqlite database, and then retrieve it later, I get y = 0.56657011973046201. According to the sqlite spec and the .NET spec (neither of which I originally bothered to read :) this is expected and normal.
My problem is that while high precision is not important, my app deals with users inputting/selecting doubles that represent basic 3D info, and then running simulations on them to find a result. And this input can be saved to an sqlite database to be reloaded and re-run later.
The confusion occurs because a freshly created series of inputs will obviously simulate in a slightly different way from those same inputs once stored and reloaded (as the double values have changed). This is logical, but not desirable.
I haven't quite come to terms with how to deal with this, but in the meantime I'd like to limit/clamp the user inputs to values which can be exactly stored in an sqlite database. So if a user inputs 0.56657011973046234, it is actually transformed into 0.56657011973046201.
However I haven't been able to figure out, given a number, what value would be stored in the database, short of actually storing and retrieving it from the database, which seems clunky. Is there an established way of doing this?
The answer may be to store the double values as 17 significant digit strings. Look at the difference between how SQLite handles real numbers vs. text (I'll illustrate with the command line interface, for simplicity):
sqlite> create table t1(dr real, dt varchar(25));
sqlite> insert into t1 values(0.56657011973046234,'0.56657011973046234');
sqlite> select * from t1;
0.566570119730462|0.56657011973046234
Storing it with real affinity is the cause of your problem -- SQLite only gives you back a 15 digit approximation. If instead you store it as text, you can retrieve the original string with your C# program and convert it back to the original double.
Math.Round has an overload with a parameter that specifies the number of digits. Use this to round to, say, 14 digits: rval = Math.Round(Val, 14)
Then round again when receiving the value from the database and at the beginning of simulations, so that the values match.
For details:
http://msdn.microsoft.com/en-us/library/75ks3aby.aspx
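A rough sketch of that suggestion (the Clamp helper name and the choice of 14 digits are assumptions): apply the same rounding before storing and after loading, so both code paths see identical doubles:
using System;

static class DoubleClampDemo
{
    // Round to 14 decimal places, as suggested above, so that a freshly
    // entered value and the value reloaded from the database collapse
    // to the same double.
    static double Clamp(double value)
    {
        return Math.Round(value, 14);
    }

    static void Main()
    {
        double entered = 0.56657011973046234;   // fresh user input
        double reloaded = 0.56657011973046201;  // same value after a SQLite round trip

        Console.WriteLine(Clamp(entered) == Clamp(reloaded)); // True
    }
}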
Another thought, if you are not comparing values in the database but just storing them: why not simply store them as binary data? Then all the bits would be stored and recovered verbatim.
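A minimal sketch of that binary idea: BitConverter exposes the exact 8 bytes of the double, which could be stored in a BLOB column and read back bit-for-bit (the column choice is up to you):
using System;

class BinaryRoundTripDemo
{
    static void Main()
    {
        double original = 0.56657011973046234;

        // Serialize the exact 64-bit pattern of the double; storing these
        // 8 bytes in a BLOB column preserves every bit.
        byte[] blob = BitConverter.GetBytes(original);

        // Reading the 8 bytes back reconstructs the identical double.
        double restored = BitConverter.ToDouble(blob, 0);

        Console.WriteLine(original == restored); // True: bit-for-bit identical
    }
}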
Assuming that both SQLite and .NET correctly implement the IEEE specification, you should be able to get the same numeric results if you use the same floating-point type on both sides (because the value shouldn't be altered when passed from the database to C# and vice versa).
SQLite's REAL is stored as an 8-byte IEEE floating point number (*), which corresponds to the C# double (a C# float is a 4-byte single). The representations therefore already match; the discrepancy appears when the value is converted through a 15-digit text representation instead of being passed as a REAL.
(*) The SQLite documentation says that REAL is a floating point value, stored as an 8-byte IEEE floating point number.
You can use a string to store the number in the db. Personally, I've done what winwaed suggested: rounding before storing and after fetching from the db (which used numeric()).
I recall being burned by banker's rounding, but it could just be that it didn't meet the spec.
You can store the double as a string, and by using the round-trip formatting when converting the double to a string, it's guaranteed to generate the same value when parsed:
string formatted = theDouble.ToString("R", CultureInfo.InvariantCulture);
If you want the decimal input values to round-trip, then you'll have to limit them to 15 significant digits. If you want the SQLite internal double-precision floating-point values to round-trip, then you might be out of luck; that requires printing to a minimum of 17 significant digits, but from what I can tell, SQLite prints them to a maximum of 15 (EDIT: maybe an SQLite expert can confirm this? I just read the source code and traced it -- I was correct, the precision is limited to 15 digits.)
I tested your example in the SQLite command interface on Windows. I inserted 0.56657011973046234, and select returned 0.566570119730462. In C, when I assigned 0.566570119730462 to a double and printed it to 17 digits, I got 0.56657011973046201; that's the same value you get from C#. 0.56657011973046234 and 0.56657011973046201 map to different floating-point numbers, so in other words, the SQLite double does not round-trip.
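A short usage sketch of that round-trip formatting (variable names are my own):
using System;
using System.Globalization;

class RoundTripFormatDemo
{
    static void Main()
    {
        double theDouble = 0.56657011973046234;

        // "R" asks for a string that parses back to the exact same double.
        // (On newer runtimes "G17" is the recommended equivalent.)
        string formatted = theDouble.ToString("R", CultureInfo.InvariantCulture);

        // Store 'formatted' in a TEXT column, then later:
        double restored = double.Parse(formatted, CultureInfo.InvariantCulture);

        Console.WriteLine(formatted);             // round-trippable text form of the double
        Console.WriteLine(theDouble == restored); // True
    }
}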
