Converting the value of a textbox with Convert.ToSingle() adds extra precision: a value like 2.7 becomes 2.7999999523162842. I don't want this value altered, because I have to store the exact value the user entered in the DB. If anyone has run into this issue before, please post the solution; any help will be appreciated.
That's because a float is stored as a binary fraction, which cannot exactly represent the number 2.7 -- when you assign a number that is not a binary fraction to a float or a double, you inevitably end up with an inexact representation.
You need to use a decimal if you want to precisely represent decimal numbers like 2.7.
var myDecimal = Convert.ToDecimal(input);
Similarly, to store this value in a SQL Server database you need to use a SQL data type capable of storing precise decimal numerics. Conveniently, this data type is called decimal in SQL as well.
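A minimal sketch of the difference (assuming the textbox value arrives as the string "2.7"):
string input = "2.7";
float f = Convert.ToSingle(input);
Console.WriteLine((double)f);  // prints something like 2.70000004768372 -- no float is exactly 2.7
decimal d = Convert.ToDecimal(input);
Console.WriteLine(d);          // prints 2.7 -- decimal stores it exactly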
Can anyone please help me with this problem?
I have in the database a nullable "real" column that Entity Framework represents as "float?"
The system does some calculations in decimal, but when it assigns the value to this column the precision is lost.
The system uses the following conversion:
dbObject.floatCol = (float) decimalVar;
decimalVar is 123.12341
but dbObject.floatCol is 123.1234
I've been trying to do this conversion in other ways, without luck.
NOTES:
In the database I can update the value to 123.12341 without problem.
I can't alter the database's column
I'm using Visual Studio 2010 with .NET Framework 4
EDIT:
If I debug the problem (with the real numbers of my issue) I have:
decimal decimalVar = new decimal(123.13561);
// decimalVar is 123.13561
float? floatVar = 0;
// floatVar is 0
floatVar = (float)decimalVar;
// floatVar is 123.135612 <- a 2 has appeared at the end
// with a different decimal number
decimalVar = new decimal(128.13561); // replaced the 123 with 128
// decimalVar is 128.13561
floatVar = (float)decimalVar;
// floatVar is 128.1356 <- the 1 at the end has disappeared
The behavior is really weird.
I need to know if it is possible to do a correct conversion.
When I try to print the values to the Console or a log file, they are different :s
You are getting about 7 digits of precision. This is what you can expect from a C# float. If you need more precision, use a C# double value instead; it gives you about 15 to 16 digits of precision.
double d = (double)decimalVar;
Also, you must distinguish between the C# data types and the database data types. In C#, the best data type for reading a db decimal column is decimal.
Depending on the database type, different names are used for the numeric column types. And those names are different from the C# types. A SQL-Server float corresponds to a C# double. A SQL-Server real corresponds to a C# float (.NET Single).
See: SQL Server Data Type Mappings
Concerning your EDIT: float and double values have a binary representation as a floating point number internally which requires decimal numbers to be rounded to the closest possible matching binary value. This binary value is then converted back to a decimal representation for display. Depending on whether the value was rounded up or down this might add or remove decimals. This is what you experienced.
See Jon Skeet's article on Binary floating point and .NET
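To illustrate with the numbers from the question (a small sketch, not your exact EF code; the exact digits displayed depend on the formatting defaults of your runtime):
decimal decimalVar = 123.13561m;
float floatVal = (float)decimalVar;    // float keeps ~7 significant digits
double doubleVal = (double)decimalVar; // double keeps ~15-16 significant digits
Console.WriteLine(floatVal);           // 123.1356
Console.WriteLine(doubleVal);          // 123.13561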
In my C# application, I am trying to save a decimal price into a SQL Server table. The column's type is decimal with no total digits defined.
Without discount calculations, everything works fine.
But when I run some calculations, I get a final value of 21800, and I get the following error when trying to save it.
"Parameter value '218000/00000000000000' is out of range."
I don't understand where the extra zeros come from! The only thing I know is that with myValue.ToString() I get 218000/00000000000000, too!
I understand that the digits after the decimal point are caused by the calculations. But whatever they do, my value is 21800 when I watch it. Why is it trying to save 218000/00000000000000? Why does it care about the zeros at all?
Any ideas why this happens and how to fix it?
There are two ways to store decimal numbers:
Fixed-Point: This is what you have defined in your SQL Server project. You defined decimal(19,4), which means 19 digits total and 4 digits after the decimal point. If you define such a type, then every number always has 4 digits after the decimal point; also, you cannot store a number with more than 4 digits after the decimal point. And note, there is no equivalent of fixed point in C#!
Floating Point: These are the types float, double and decimal in C#, and the type float(n) in SQL Server. In floating point, as the name says, you can basically vary the number of digits before and after the decimal point. (Technically, you have a fixed number of digits, the so-called mantissa, and you can then move the decimal point around with a second number, the exponent.)
So now to your project: in your database you use fixed point, while in C# you have floating point. Therefore, when you calculate something in C# and want to store it in your database, it is always converted from floating point to fixed point. If your number in C# does not fit the column's fixed-point definition, you get the out-of-range error.
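A sketch of one way to handle this on the C# side -- assuming a SqlCommand named command and an illustrative parameter name @price -- is to round to the column's scale before saving and to tell the provider the column shape explicitly:
// requires System.Data and System.Data.SqlClient
decimal value = 21800.00000000000000m;  // trailing zeros left over from the calculation
decimal forDb = Math.Round(value, 4);   // 21800.0000 -- now fits a decimal(19,4) column

var p = command.Parameters.Add("@price", SqlDbType.Decimal);
p.Precision = 19; // total digits, matching the column definition
p.Scale = 4;      // digits after the decimal point
p.Value = forDb;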
I had to specify the total digits of my decimal column in the SQL table (I set it as decimal(19,4)).
Besides, the C# field representing it also needed to have the right precision.
I still don't know about the extra zeros and why SQL Server cares about them. Any explanation is appreciated.
Let's say we have the following simple code:
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
after that conversion, the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
-------------------small update---------------------
Putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 significant digits, while you have 17. If you need that level of precision, use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
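For comparison, the decimal route keeps every digit:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
Console.WriteLine(numberAsDecimal); // 93389.429999999993 -- all 17 digits preserved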
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
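You can see both conversions at work with the "R" (round-trip) format, which prints the shortest string that parses back to the same double:
double d = Convert.ToDouble("93389.429999999993");
Console.WriteLine(d == 93389.43);   // True -- both literals map to the same double
Console.WriteLine(d.ToString("R")); // 93389.43 -- the shortest string that round-trips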
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep the precision, depending on the number you are trying to convert.
If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type has a finite precision of 64 bits, so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which has 128-bit precision. However, the same problem arises, just with a smaller difference.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a 3rd-party library to work with arbitrary-precision real numbers.
You could truncate the decimal places to the number of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the value matching the number of places you need:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
// Shift 5 places left, truncate the fraction via the cast to long,
// then shift back while converting to double.
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
I have to represent numbers in my database which are amounts of chemical substances in food, like fats, energy, magnesium and others. These values are decimals in the format 12345.67.
If I use decimal (5,2) as data type in SQL Server, it maps to Decimal type in Entity Framework. If I use float as data type in SQL Server, it maps to Double in Entity Framework.
I'm not sure what the best data type in SQL Server would be, or does it not really matter much?
EDIT - in my case it should be decimal(7,2), as mentioned in some of the remarks!
Thanks.
You need decimal(7,2)
7 is the total number of digits
2 is the number of digits after the decimal point
Differences:
float is approximate and will give unexpected results
decimal is exact
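A classic demonstration of that difference in C#:
double binarySum = 0.1 + 0.2;
Console.WriteLine(binarySum == 0.3);        // False -- binary floating point is approximate
Console.WriteLine(binarySum.ToString("R")); // 0.30000000000000004

decimal exactSum = 0.1m + 0.2m;
Console.WriteLine(exactSum == 0.3m);        // True -- decimal holds these values exactly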
References:
Accuracy problem with floating numbers
SQL Server Float data type calculation vs decimal
DECIMAL(7,2) would be better than float - it's exactly what you need (5 + 2 digits). With floating types (e.g. float, double) you may have some problems - e.g. with rounding.
I noticed that when I store a double value such as e.g. x = 0.56657011973046234 in an sqlite database, and then retrieve it later, I get y = 0.56657011973046201. According to the sqlite spec and the .NET spec (neither of which I originally bothered to read :) this is expected and normal.
My problem is that while high precision is not important, my app deals with users inputting/selecting doubles that represent basic 3D info, and then running simulations on them to find a result. And this input can be saved to an sqlite database to be reloaded and re-run later.
The confusion occurs because a freshly created series of inputs will obviously simulate in a slightly different way to those same inputs once stored and reloaded (as the double values have changed). This is logical, but not desirable.
I haven't quite come to terms of how to deal with this, but in the meantime I'd like to limit/clamp the user inputs to values which can be exactly stored in an sqlite database. So if a user inputs 0.56657011973046234, it is actually transformed into 0.56657011973046201.
However I haven't been able to figure out, given a number, what value would be stored in the database, short of actually storing and retrieving it from the database, which seems clunky. Is there an established way of doing this?
The answer may be to store the double values as 17 significant digit strings. Look at the difference between how SQLite handles real numbers vs. text (I'll illustrate with the command line interface, for simplicity):
sqlite> create table t1(dr real, dt varchar(25));
sqlite> insert into t1 values(0.56657011973046234,'0.56657011973046234');
sqlite> select * from t1;
0.566570119730462|0.56657011973046234
Storing it with real affinity is the cause of your problem -- SQLite only gives you back a 15-digit approximation. If instead you store it as text, you can retrieve the original string with your C# program and convert it back to the original double.
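In C#, that could look roughly like this -- a sketch that assumes the System.Data.SQLite provider and the t1 table above:
using System.Globalization;
using System.Data.SQLite; // assumption: the System.Data.SQLite ADO.NET provider

double x = 0.56657011973046234;
using (var conn = new SQLiteConnection("Data Source=test.db"))
{
    conn.Open();
    using (var cmd = new SQLiteCommand("insert into t1(dt) values (@v)", conn))
    {
        // "R" yields a string guaranteed to parse back to the identical double
        cmd.Parameters.AddWithValue("@v", x.ToString("R", CultureInfo.InvariantCulture));
        cmd.ExecuteNonQuery();
    }
}
// reading it back: double y = double.Parse(textFromDb, CultureInfo.InvariantCulture);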
Math.Round has an overload with a parameter that specifies the number of digits. Use this to round to 14 digits (say) with: rval = Math.Round(val, 14)
Then round when receiving the value from the database, and at the beginning of simulations, i.e. so that the values match.
For details:
http://msdn.microsoft.com/en-us/library/75ks3aby.aspx
Another thought, if you are not comparing values in the database, just storing them: why not simply store them as binary data? Then all the bits would be stored and recovered verbatim.
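For example, BitConverter exposes the raw bit pattern, and SQLite's INTEGER type is large enough (8 bytes) to hold it:
double x = 0.56657011973046234;
long bits = BitConverter.DoubleToInt64Bits(x);  // store this integer in the db
double back = BitConverter.Int64BitsToDouble(bits);
Console.WriteLine(back == x);                   // True -- the round trip is exact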
Assuming that both SQLite and .NET correctly implement the IEEE specification, you should be able to get the same numeric results if you use the same floating point type on both sides (because the value shouldn't be altered when passed from the database to C# and vice versa).
Currently you're using 8-byte IEEE floating point (double precision) (*) in SQLite and 8-byte floating point in C# (double), so the storage types themselves match. The float type in C# corresponds to the 4-byte IEEE single standard, which survives a 15-digit decimal round trip with room to spare, so using float instead of double could also sidestep the problem.
(*) The SQLite documentation says that REAL is a floating point value, stored as an 8-byte IEEE floating point number.
You can use a string to store the number in the db. Personally, I've done what winwaed suggested: rounding before storing and after fetching from the db (which used numeric()).
I recall being burned by banker's rounding, but it could just be that it didn't meet spec.
You can store the double as a string, and by using the round-trip format when converting the double to a string, it's guaranteed to produce the same value when parsed:
string formatted = theDouble.ToString("R", CultureInfo.InvariantCulture);
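Parsing it back then recovers the identical value:
double parsed = double.Parse(formatted, CultureInfo.InvariantCulture);
// parsed == theDouble is guaranteed to be True with the "R" format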
If you want the decimal input values to round-trip, then you'll have to limit them to 15 significant digits. If you want the SQLite internal double-precision floating-point values to round-trip, then you might be out of luck; that requires printing to a minimum of 17 significant digits, but from what I can tell, SQLite prints to a maximum of 15. (EDIT: maybe an SQLite expert can confirm this? I just read the source code and traced it -- I was correct; the precision is limited to 15 digits.)
I tested your example in the SQLite command interface on Windows. I inserted 0.56657011973046234, and select returned 0.566570119730462. In C, when I assigned 0.566570119730462 to a double and printed it to 17 digits, I got 0.56657011973046201; that's the same value you get from C#. 0.56657011973046234 and 0.56657011973046201 map to different floating-point numbers, so in other words, the SQLite double does not round-trip.
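You can confirm in C# that the two strings really denote different doubles, i.e. that SQLite's 15-digit output loses information:
Console.WriteLine(0.56657011973046234 == 0.56657011973046201); // False -- distinct doubles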