This question already has answers here:
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Closed 11 years ago.
Hi. I've got the following problem:
43.65 + 61.11 = 104.75999999999999
whereas with decimal the result is correct:
(decimal)43.65 + (decimal)61.11 = 104.76
Why is the result for double wrong?
This question and its answers are a wealth of info on this - Difference between decimal, float and double in .NET?
To quote:
For values which are "naturally exact
decimals" it's good to use decimal.
This is usually suitable for any
concepts invented by humans: financial
values are the most obvious example,
but there are others too. Consider the
score given to divers or ice skaters,
for example.
For values which are more artefacts of
nature which can't really be measured
exactly anyway, float/double are more
appropriate. For example, scientific
data would usually be represented in
this form. Here, the original values
won't be "decimally accurate" to start
with, so it's not important for the
expected results to maintain the
"decimal accuracy". Floating binary
point types are much faster to work
with than decimals.
Short answer: binary floating-point representation (such as "double") cannot represent most decimal fractions exactly. decimal has its limits too, but its inaccuracy is of a different kind, because it stores digits in base 10. Here's one short explanation: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm
You can search for "floating point inaccuracy" for more.
It isn't exactly wrong. It's the closest decimal representation to the binary floating-point number that results from the sum.
The problem is that IEEE floats cannot represent 43.65 + 61.11 exactly, due to the use of a binary mantissa. Some systems (such as Python 2.7 and Visual C++'s standard I/O libraries) will round to the simplest decimal that resolves to the same binary, and will print the expected 104.76. But all these systems arrive at exactly the same answer internally.
Interestingly, decimal notation can finitely represent any finite binary fraction, whereas the opposite doesn't hold. If humans had two fingers and computers used ten-state memory, we wouldn't have this problem. :-)
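To make the stored value visible, here is a small sketch (not from the original answers; the exact strings printed depend on the .NET runtime's default formatting):
double sum = 43.65 + 61.11;
Console.WriteLine(sum);                  // "104.76" on older runtimes, "104.75999999999999" where shortest round-trip formatting is used
Console.WriteLine(sum.ToString("G17"));  // forces enough digits to expose the binary approximation, e.g. 104.75999999999999
Console.WriteLine(43.65m + 61.11m);      // decimal arithmetic gives exactly 104.76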
Because double uses a binary fraction model: the fractional part is expressed as a sum of powers of 1/2 (equivalently, a fraction x / y where y is a power of two). Given that, some decimal numbers can only be approximated. Use decimal, not double, for high-precision decimal calculations.
See here for some light reading :)
It comes down to the fact that floats are stored as binary fractions, and just as in base 10, there are some numbers which can't be stored without truncation. Take for example 1/3 in base 10: that is 0.3 recurring. The numbers you are dealing with, when converted to binary, are likewise recurring.
I disagree that floats or doubles are more or less accurate than decimal representations. They are as accurate as the precision you choose to give them. They are a different representation, however, and a different set of numbers can be expressed exactly than in base 10.
Decimal stores numbers in base 10. That will probably give you the result you expect.
Decimal arithmetic is well-suited for base-10 representations of numbers, because base-10 numbers can be represented exactly in decimal. (Which is why currency is always stored in currency-appropriate classes, or stored in an int with a scaling factor to represent 'pennies' or 'pence' or other similar decimal subunits.)
IEEE-754 Binary numbers cannot calculate with .1 or .01 accurately. So floating point formats approximate the inputs, and you get approximate outputs back, which is perfectly acceptable for what floating point was designed to handle -- physical simulations, scientific data, and fast numerical mathematical methods.
Note this simple program and output:
#include <stdio.h>

int main(int argc, char* argv[]) {
    float f, g;
    double d, e;
    long double l, m;

    f = 0.1;
    g = f * f;
    d = 0.1;
    e = d * d;
    l = 0.1;
    m = l * l;

    printf("%40.40f, %40.40f, %40.40Lf\n", g, e, m);
    return 0;
}
$ ./fp
0.0100000007078051567077636718750000000000,
0.0100000000000000019428902930940239457414,
0.0100000000000000011102569059430467124372
None of the three gives the exact answer, 0.01, but they all give numbers that are quite close to it.
But use decimal arithmetic for currency.
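As a rough illustration of the scaled-integer idea mentioned above (the names here are purely illustrative):
// Hold money as whole cents in an integer type, so every value is exact.
long cents = 4365 + 6111;        // 43.65 + 61.11, stored exactly as 10476 cents
decimal amount = cents / 100m;   // convert to decimal only for display or output
Console.WriteLine(amount);       // 104.76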
Really? The code below returned 104.76 as expected:
class Program
{
    static void Main(string[] args)
    {
        double d1 = 43.65;
        double d2 = 61.11;
        double d3 = d1 + d2;
        Console.WriteLine(d3);
        Console.ReadLine();
    }
}
whereas the code below returned 104.76000213623:
class Program
{
    static void Main(string[] args)
    {
        float d1 = 43.65f;
        float d2 = 61.11f;
        double d3 = d1 + d2;
        Console.WriteLine(d3);
        Console.ReadLine();
    }
}
Check whether you are converting from float to double somewhere, which may have caused this issue.
This question already has answers here:
Difference between decimal, float and double in .NET?
(18 answers)
Closed 4 years ago.
Can someone more experienced explain this strange behaviour I found today?
I was getting strange amounts when loading table data with my C# script.
As it turns out, the problem is that two similar functions produce different output:
string amount_in_string = "1234567,15";
double amount_in_double = double.Parse(amount_in_string);
float amount_in_float = float.Parse(amount_in_string);
//amount_in_double = 1234567.15;
//amount_in_float = 1234567.13;
Why do I get such a different result when float and double are similar (floating-point) types? Can the precision make a difference with amounts as small as these?
When “1234567.15” is converted to double, the result is the closest value representable in double, which is 1234567.1499999999068677425384521484375. Although you report in the question that the value is 1234567.15, the actual value is 1234567.1499999999068677425384521484375. “1234567.15” would be displayed when the value is displayed with a limited number of decimal digits.
When “1234567.15” is converted to float, the result is the closest value representable in float, which is 1234567.125. Although you report the value is 1234567.13, the actual value is 1234567.125. “1234567.13” may be displayed when the value is displayed with a limited number of decimal digits.
Observe that 1234567 exceeds 1,048,576, which is 2^20. The 32-bit floating-point format used for float uses 24 bits for the significand (the fraction portion of the number). If the high bit of the significand represents 2^20, the low bit represents 2^(20-23) = 2^-3 = 1/8. This is why you see “1234567.15” converted to a value rounded to the nearest eighth.
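A small sketch of the same effect in C# (an invariant-culture decimal point is assumed; comments show the expected output):
float f = float.Parse("1234567.15", System.Globalization.CultureInfo.InvariantCulture);
double widened = f;                        // widening to double preserves the float's exact binary value
Console.WriteLine(widened.ToString("R"));  // 1234567.125, the nearest multiple of 1/8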
Floating point numbers can't represent most fractions exactly; they are approximations of numbers. A commonly used example:
1/3 + 1/3 = 2/3
...with only five decimal digits, the answer 0.33333 + 0.33333 is not exactly 2/3, it is 0.66666.
Long story short, any fraction that can't be converted to an exact binary value will always carry a rounding error, and the more digits of precision you need, the more likely rounding errors are to show.
Keep in mind that if you combine several different fractions, you can even have multiple rounding errors that either make the number accidentally correct, or push it even further off.
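A quick way to watch such errors accumulate (a sketch; the exact digits shown depend on formatting):
double sum = 0.0;
for (int i = 0; i < 10; i++)
    sum += 0.1;                          // 0.1 has no exact binary representation
Console.WriteLine(sum == 1.0);           // False
Console.WriteLine(sum.ToString("G17"));  // e.g. 0.99999999999999989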
A fiddle where you can see the results (there is a culture problem here too; see the parsing sketch after the results below):
https://dotnetfiddle.net/Lnv1q7
string amount_in_string = "1234567,15"; // NOTE THE COMMA in original
string amount_in_string = "1234567.15"; //my correct culture
double amount_in_double = double.Parse(amount_in_string);
float amount_in_float = float.Parse(amount_in_string);
Console.WriteLine(amount_in_string);
Console.WriteLine(amount_in_double);
Console.WriteLine(amount_in_float);
Results (parsing in the incorrect culture!)
1234567,15
123456715
1.234567E+08
Results (parsing in the correct culture!)
1234567.15
1234567.15
1234567
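For completeness, a sketch of parsing the comma-separated amount with an explicit culture (de-DE is assumed here merely as an example of a comma-decimal culture):
using System.Globalization;

string amount_in_string = "1234567,15";
double amount_in_double = double.Parse(amount_in_string, CultureInfo.GetCultureInfo("de-DE"));
Console.WriteLine(amount_in_double.ToString(CultureInfo.InvariantCulture)); // 1234567.15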
Another one that demonstrates the loss of precision with float
float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result
float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333
floats should only be used in cases where some loss of precision is acceptable. This is because float is 32-bit whereas decimal is 128-bit.
floats are mainly used for things like pixel coordinates, where loss of precision doesn't matter because the consumer translates locations to more precise coordinates anyway.
In .NET, floats "should" be actively avoided unless you don't care about the loss of (some) precision, which these days is probably never (unless you're writing games).
This is where the banking problem came from, when 100ths of a penny/cent across transactions were lost: seemingly invisible per transaction, but amounting to large sums of "missing" money.
Use decimal
This question already has answers here:
Round a double to x significant figures
(17 answers)
Closed 7 years ago.
I need to round doubles to a given number of significant digits. Example:
Round(1.2E-20, 0) should become 1.0E-20
I cannot use Math.Round(1.2E-20, 0), which returns 0, because Math.Round() doesn't round to significant digits but to decimal places, i.e. it treats the double as if E were 0.
Of course, I could do something like this:
double d = 1.29E-20;
d *= 1E+20;
d = Math.Round(d, 1);
d /= 1E+20;
Which actually works. But this doesn't:
d = 1.29E-10;
d *= 1E+10;
d = Math.Round(d, 1);
d /= 1E+10;
In this case, d is 0.00000000013000000000000002. The problem is that double stores internally fractions of 2, which cannot match exactly fractions of 10. In the first case, it seems C# is dealing just with the exponent for the * and /, but in the second case it makes an actual * or / operation, which then leads to problems.
Of course I need a formula which always gives the proper result, not only sometimes.
Meaning I should not use any double operation after the rounding, because double arithmetic cannot deal exactly with decimal fractions.
Another problem with the calculation above is that there is no double function returning the exponent of a double. Of course one could use the Math library to calculate it, but it might be difficult to guarantee that this has always precisely the same result as the double internal code.
In my desperation, I considered converting the double to a string, finding the significant digits, doing the rounding, converting the rounded number back into a string, and finally converting that one to a double. Ugly, right? It might also not work properly in all cases :-(
Is there any library or any suggestion how to round the significant digits of a double properly ?
PS: Before declaring that this is a duplicate question, please make sure that you understand the difference between SIGNIFICANT digits and decimal places
The problem is that double stores internally fractions of 2, which cannot match exactly fractions of 10
That is a problem, yes. If it matters in your scenario, you need to use a numeric type that stores numbers as decimal, not binary. In .NET, that numeric type is decimal.
Note that for many computational tasks (but not currency, for example), the double type is fine. The fact that you don't get exactly the value you are looking for is no more of a problem than any of the other rounding error that exists when using double.
Note also that if the only purpose is for displaying the number, you don't even need to do the rounding yourself. You can use a custom numeric format to accomplish the same. For example:
double value = 1.29e-10d;
Console.WriteLine(value.ToString("0.0E+0"));
That will display the string 1.3E-10.
Another problem with the calculation above is that there is no double function returning the exponent of a double
I'm not sure what you mean here. The Math.Log10() method does essentially that: it returns the base-10 logarithm of a given number. For your needs, you'd actually prefer Math.Floor(Math.Log10(value)), which gives you the integer exponent that would be displayed in scientific notation.
it might be difficult to guarantee that this has always precisely the same result as the double internal code
Since the internal storage of a double uses an IEEE binary format, where the exponent and mantissa are both stored as binary numbers, the displayed exponent base 10 is never "precisely the same as the double internal code" anyway. Granted, the exponent, being an integer, can be expressed exactly. But it's not like a decimal value is being stored in the first place.
In any case, Math.Log10() will always return a useful value.
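For example, a quick illustrative check of the exponent extraction just described:
double value = 1.29e-10;
double log = Math.Log10(Math.Abs(value));     // about -9.889
int exponent = (int)Math.Floor(log);          // -10, the exponent used in scientific notation
Console.WriteLine(exponent);                  // -10
Console.WriteLine(value.ToString("0.0E+0"));  // 1.3E-10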
Is there any library or any suggestion how to round the significant digits of a double properly ?
If you only need to round for the purpose of display, don't do any math at all. Just use a custom numeric format string (as I described above) to format the value the way you want.
If you actually need to do the rounding yourself, then I think the following method should work given your description:
static double RoundSignificant(double value, int digits)
{
    // Exponent of the leading significant digit, as it would appear in
    // scientific notation. Note: assumes value > 0.
    int log10 = (int)Math.Floor(Math.Log10(value));
    double exp = Math.Pow(10, log10);

    // Scale into [1, 10), round, then scale back.
    value /= exp;
    value = Math.Round(value, digits);
    value *= exp;

    return value;
}
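A usage sketch (note that the result is still only the nearest representable double, as discussed above, so use a format string if you need clean output):
double rounded = RoundSignificant(1.29e-10, 1);
Console.WriteLine(rounded.ToString("0.0E+0"));  // 1.3E-10
Console.WriteLine(rounded.ToString("G17"));     // may show trailing noise, e.g. 1.3000000000000002E-10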
Our existing application reads some floating point numbers from a file. The numbers are written there by some other application (let's call it Application B). The format of this file was fixed a long time ago (and we cannot change it). In this file all the floating point numbers are saved as floats in binary representation (4 bytes in the file).
In our program as soon as we read the data we convert the floats to doubles and use doubles for all calculations because the calculations are quite extensive and we are concerned with the spread of rounding errors.
We noticed that when we convert floats via decimal (see the code below) we are getting more precise results than when we convert directly. Note: Application B also uses doubles internally and only writes them into the file as floats. Let's say Application B had the number 0.012 written to file as float. If we convert it after reading to decimal and then to double we get exactly 0.012, if we convert it directly, we get 0.0120000001043081.
This can be reproduced without reading from a file - with just an assignment:
float readFromFile = 0.012f;
Console.WriteLine("Read from file: " + readFromFile);
//prints 0.012
double forUse = readFromFile;
Console.WriteLine("Converted to double directly: " + forUse);
//prints 0.0120000001043081
double forUse1 = (double)Convert.ToDecimal(readFromFile);
Console.WriteLine("Converted to double via decimal: " + forUse1);
//prints 0.012
Is it always beneficial to convert from float to double via decimal, and if not, under what conditions is it beneficial?
EDIT: Application B can obtain the values which it saves in two ways:
Value can be a result of calculations
Value can be typed in by user as a decimal fraction (so in the example above the user had typed 0.012 into an edit box and it got converted to double, then saved to float)
we get exactly 0.012
No, you don't. Neither float nor double can represent 3/250 exactly. What you do get is a value that is rendered by the string formatter Double.ToString() as "0.012". But this happens because the formatter doesn't display the exact value.
Going through decimal is causing rounding. It is likely much faster (not to mention easier to understand) to just use Math.Round with the rounding parameters you want. If what you care about is the number of significant digits, see:
Round a double to x significant figures
For what it's worth, 0.012f (which means the 32-bit IEEE-754 value nearest to 0.012) is exactly
0x3C449BA6
or
0.012000000104308128
and this is exactly representable as a System.Decimal. But Convert.ToDecimal(0.012f) won't give you that exact value -- per the documentation there is a rounding step.
The Decimal value returned by this method contains a maximum of seven significant digits. If the value parameter contains more than seven significant digits, it is rounded using rounding to nearest.
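A small demonstration of that documented rounding (illustrative; the exact digits printed for the raw double depend on the runtime's formatting):
float f = 0.012f;
Console.WriteLine((double)Convert.ToDecimal(f));  // 0.012, because the decimal was rounded to at most 7 significant digits
Console.WriteLine((double)f);                     // the raw binary value, e.g. 0.0120000001043081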
As strange as it may seem, conversion via decimal (with Convert.ToDecimal(float)) may be beneficial in some circumstances.
It will improve the precision if it is known that the original numbers were provided by users in decimal representation and users typed no more than 7 significant digits.
To prove it I wrote a small program (see below). Here is the explanation:
As you recall from the OP this is the sequence of steps:
Application B has doubles coming from two sources:
(a) results of calculations; (b) converted from user-typed decimal numbers.
Application B writes its doubles as floats into the file - effectively doing binary rounding from 52 bits of significand (IEEE 754 double) to 23 bits (IEEE 754 single).
Our application reads that float and converts it to a double in one of two ways:
(a) direct assignment to double - effectively padding a 23-bit number to a 52-bit number with binary zeros (29 zero-bits);
(b) via conversion to decimal with (double)Convert.ToDecimal(float).
As Ben Voigt properly noticed Convert.ToDecimal(float) (see MSDN in the Remark section) rounds the result to 7 significant decimal digits. In Wikipedia's IEEE 754 article about Single we can read that precision is 24 bits - equivalent to log10(pow(2,24)) ≈ 7.225 decimal digits. So, when we do the conversion to decimal we lose that 0.225 of a decimal digit.
So, in the generic case, when there is no additional information about the doubles, the conversion to decimal will in most cases make us lose some precision.
But (!) if there is the additional knowledge that originally (before being written to a file as floats) the doubles were decimals with no more than 7 digits, the rounding errors introduced in decimal rounding (step 3(b) above) will compensate the rounding errors introduced with the binary rounding (in step 2. above).
In the program, to prove the statement for the generic case, I randomly generate doubles, cast them to float, then convert them back to double (a) directly and (b) via decimal, and measure the distance between the original double and double (a) and double (b). If double (a) is closer to the original than double (b), I increment the pro-direct-conversion counter; in the opposite case I increment the pro-via-decimal counter. I do this in a loop of one million iterations, then print the ratio of the pro-direct counter to the pro-via-decimal counter. The ratio turns out to be about 3.7, i.e. in approximately 4 cases out of 5 the conversion via decimal will spoil the number.
To prove the case when the numbers are typed in by users, I used the same program with the only change that I apply Math.Round(originalDouble, N) to the doubles. Because I get the original doubles from the Random class, they are all between 0 and 1, so the number of significant digits coincides with the number of digits after the decimal point. I ran this in a loop with N going from 1 to 15 significant digits typed by the user, and plotted how often direct conversion beats conversion via decimal as a function of the number of significant digits typed by the user.
[Plot omitted: wins for direct conversion vs. conversion via decimal, by number of typed significant digits.]
As you can see, for 1 to 7 typed digits the conversion via decimal is always better than the direct conversion. To be exact, for a million random numbers only 1 or 2 are not improved by conversion via decimal.
Here is the code used for the comparison:
private static void CompareWhichIsBetter(int numTypedDigits)
{
    Console.WriteLine("Number of typed digits: " + numTypedDigits);
    Random rnd = new Random(DateTime.Now.Millisecond);

    int countDecimalIsBetter = 0;
    int countDirectIsBetter = 0;
    int countEqual = 0;

    for (int i = 0; i < 1000000; i++)
    {
        double origDouble = rnd.NextDouble();
        //Use the line below for the user-typed-in-numbers case.
        //double origDouble = Math.Round(rnd.NextDouble(), numTypedDigits);

        float x = (float)origDouble;
        double viaFloatAndDecimal = (double)Convert.ToDecimal(x);
        double viaFloat = x;

        double diff1 = Math.Abs(origDouble - viaFloatAndDecimal);
        double diff2 = Math.Abs(origDouble - viaFloat);

        if (diff1 < diff2)
            countDecimalIsBetter++;
        else if (diff1 > diff2)
            countDirectIsBetter++;
        else
            countEqual++;
    }

    Console.WriteLine("Decimal better: " + countDecimalIsBetter);
    Console.WriteLine("Direct better: " + countDirectIsBetter);
    Console.WriteLine("Equal: " + countEqual);
    Console.WriteLine("Betterness of direct conversion: " + (double)countDirectIsBetter / countDecimalIsBetter);
    Console.WriteLine("Betterness of conv. via decimal: " + (double)countDecimalIsBetter / countDirectIsBetter);
    Console.WriteLine();
}
Here's a different answer - I'm not sure that it's any better than Ben's (almost certainly not), but it should produce the right results:
float readFromFile = 0.012f;
decimal forUse = Convert.ToDecimal(readFromFile.ToString("0.000"));
So long as .ToString("0.000") produces the "correct" number (which should be easy to spot-check), then you'll get something you can work with and not have to worry about rounding errors. If you need more precision, just add more 0's.
Of course, if you actually need to work with 0.012f out to the maximum precision, then this won't help, but if that's the case, then you don't want to be converting it from a float in the first place.
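For instance, the spot-check might look like this (a sketch; the current culture is assumed to use '.' as the decimal separator):
float readFromFile = 0.012f;
string asText = readFromFile.ToString("0.000");  // "0.012"
decimal forUse = Convert.ToDecimal(asText);
Console.WriteLine(forUse);                       // 0.012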
I'm working on something and I've got a problem which I do not understand.
double d = 95.24 / (double)100;
Console.Write(d); //Break point here
The console output is 0.9524 (as expected), but if I look at 'd' after stopping at the breakpoint, it shows 0.95239999999999991.
I have tried every cast possible and the result is the same. The problem is that I use 'd' elsewhere and this precision problem makes my program fail.
So why does it do that? How can I fix it?
Use decimal instead of double.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation: an integer significand understood to have one digit in front of the binary point, scaled by an integer power of two. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base 10 to base 2 and back can introduce error.
To mitigate this, whenever high precision at low magnitudes is required, use decimal instead of double; decimal is a 128-bit floating-point number type designed to have very high precision, at the cost of reduced range (it can only represent numbers up to about +-7.9E28, instead of double's +-1.8E308; still plenty for most non-astronomical, non-physics computer programs) and double the memory footprint of a double.
A very good article that explains a lot: What Every Computer Scientist Should Know About Floating-Point Arithmetic. It is not specific to C#, but covers float arithmetic in general.
You could use a decimal instead:
decimal d = 95.24m / 100;
Console.Write(d); //Break point here
Try:
double d = Math.Round(95.24/(double)100, 4);
Console.Write(d);
edit: Or use a decimal, yeah. I was just trying to keep the answer as similar to the question as possible :)
Convert.ToDouble is adding zeros and a trailing 1 to my value: it turns 21.62 into 21.620000000000001.
Why does it do that? Is this a floating point issue?
Double (and Float) are floating-point types, and in a binary system will have some imprecision.
If you need more precise comparisons, use decimal instead. If you're just doing calculations, double should be fine. If you need to compare doubles for equality, compare the absolute value of their difference to some small constant instead:
if (a == b) // not reliable for floating point
{
    ....
}

double EPSILON = 0.0000001;

if (Math.Abs(a - b) < EPSILON)
{
    ....
}
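For example (illustrative values):
double a = 0.1 + 0.2;
double b = 0.3;
Console.WriteLine(a == b);                       // False: the two binary values differ in the last bits
Console.WriteLine(Math.Abs(a - b) < 0.0000001);  // True: equal within the tolerance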
Floating point numbers have approximation problems.
This is because a decimal fraction like 0.00001 can't be represented exactly in a binary system (where fractional values are represented as fractions q/p with p a power of 2).
The problem is intrinsic.
In short, yes; between any two bases (in this case, 2 and 10), there are always values that can be expressed with a finite number of "decimal" (binimal?) places in one base but not in the other.
Rounding errors like this are common in almost every high level programming language. In Java, to get around this you use a class called BigDecimal if you need guaranteed accuracy. Otherwise, you can just write a method that rounds a decimal to the nearest place you need.