Description
This is not a real world example! Please don't suggest using decimal or something else.
I am only asking this because I really want to know why this happens.
I recently watched the awesome Tekpub webcast Mastering C# 4.0 with Jon Skeet again.
In episode 7 - Decimals and Floating Points, things get really weird, and even our
Chuck Norris of Programming (aka Jon Skeet) does not have a definitive answer to my question,
only a "might be".
Question: Why did MyTestMethod() fail and MyTestMethod2() pass?
Example 1
[Test]
public void MyTestMethod()
{
    double d = 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    Console.WriteLine("d = " + d);
    Assert.AreEqual(d, 1.0d);
}
This results in
d = 1
Expected: 0.99999999999999989d
But was: 1.0d
Example 2
[Test]
public void MyTestMethod2()
{
    double d = 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    d += 0.1d;
    Console.WriteLine("d = " + d);
    Assert.AreEqual(d, 0.5d);
}
This results in success
d = 0,5
But why?
Update
Why doesn't Assert.AreEqual() cover that?
Assert.AreEqual() does cover that; you have to use the overload with a third delta argument:
Assert.AreEqual(0.1 + 0.1 + 0.1, 0.3, 0.00000001);
Because Doubles, like all floating point numbers, are approximations, not absolute values. They are binary (base-2) representations, which may not be able to perfectly represent base-10 fractions (the same way that base 10 cannot represent 1/3 perfectly). So the fact that the second one happens to round to the correct value when you perform an equality comparison (and the fact that the first one doesn't) is just luck, and not a bug in the framework or anything else.
Also, read this: Casting a result to float in method returning float changes result
Assert.Equals does not cover this case because of the principle of least astonishment: since every other built-in numeric value type in .NET defines .Equals() to perform the equivalent of ==, Double does so as well. Since the two numbers you are generating in your test (the literal 0.5d and the five-term sum of 0.1d) are in fact not == equal (the actual values in the processor's registers are different), Equals() returns false.
It is not the framework's intent to break the generally accepted rules of computing in order to make your life convenient.
Finally, I'd offer that NUnit has indeed realized this problem and according to http://www.nunit.org/index.php?p=equalConstraint&r=2.5 offers the following method to test floating point equality within a tolerance:
Assert.That( 5.0, Is.EqualTo( 5 ) );
Assert.That( 5.5, Is.EqualTo( 5 ).Within( 0.075 ) );
Assert.That( 5.5, Is.EqualTo( 5 ).Within( 1.5 ).Percent );
Okay, I haven't checked what Assert.AreEqual does... but I suspect that by default it's not applying any tolerance. I wouldn't expect it to behind my back. So let's look for another explanation...
You're basically seeing a coincidence - the answer after four additions happens to be the exact value, probably because the lowest bit gets lost somewhere when the magnitude changes - I haven't looked at the bit patterns involved, but if you use DoubleConverter.ToExactString (my own code) you can see exactly what the value is at any point:
using System;

public class Test
{
    public static void Main()
    {
        double d = 0.1d;
        Console.WriteLine("d = " + DoubleConverter.ToExactString(d));
        d += 0.1d;
        Console.WriteLine("d = " + DoubleConverter.ToExactString(d));
        d += 0.1d;
        Console.WriteLine("d = " + DoubleConverter.ToExactString(d));
        d += 0.1d;
        Console.WriteLine("d = " + DoubleConverter.ToExactString(d));
        d += 0.1d;
        Console.WriteLine("d = " + DoubleConverter.ToExactString(d));
    }
}
Results (on my box):
d = 0.1000000000000000055511151231257827021181583404541015625
d = 0.200000000000000011102230246251565404236316680908203125
d = 0.3000000000000000444089209850062616169452667236328125
d = 0.40000000000000002220446049250313080847263336181640625
d = 0.5
Now if you start with a different number, it doesn't work itself out in the same way:
(Starting with d=10.1)
d = 10.0999999999999996447286321199499070644378662109375
d = 10.199999999999999289457264239899814128875732421875
d = 10.2999999999999989341858963598497211933135986328125
d = 10.39999999999999857891452847979962825775146484375
d = 10.4999999999999982236431605997495353221893310546875
So basically you happened to get lucky or unlucky with your test - the errors cancelled themselves out.
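(For the curious: one way to look at the bit patterns mentioned above is BitConverter.DoubleToInt64Bits. This is a sketch added for illustration, not part of the original answer.)

using System;

public class BitPatternTest
{
    public static void Main()
    {
        double d = 0.1d;
        d += 0.1d;
        d += 0.1d;
        d += 0.1d;
        d += 0.1d;
        // DoubleToInt64Bits exposes the raw IEEE 754 bits of a double
        Console.WriteLine(BitConverter.DoubleToInt64Bits(d).ToString("X16"));
        Console.WriteLine(BitConverter.DoubleToInt64Bits(0.5d).ToString("X16"));
        // Both lines print 3FE0000000000000, confirming the sum is exactly 0.5
    }
}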
Assert.AreEqual does take that into account.
But in order to do so, you need to supply your margin of error: the delta within which the difference between the two float values is deemed equal for your application.
There are two overloads of Assert.AreEqual that take only two parameters: a generic one (T, T) and a non-generic one (object, object). These can only do the default comparisons.
Use one of the overloads that take double and that also has a parameter for the delta.
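For example, with NUnit's (double expected, double actual, double delta) overload (the delta here is an arbitrary illustration):

Assert.AreEqual(1.0d, d, 1e-9); // expected, actual, delta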
This is a feature of computer floating-point arithmetic
(http://www.eskimo.com/~scs/cclass/progintro/sx5.html):
It's important to remember that the precision of floating-point numbers is usually limited, and this can lead to surprising results. The result of a division like 1/3 cannot be represented exactly (it's an infinitely repeating fraction, 0.333333...), so the computation (1/3) x 3 tends to yield a result like 0.999999... instead of 1.0. Furthermore, in base 2, the fraction 1/10, or 0.1 in decimal, is also an infinitely repeating fraction, and cannot be represented exactly, either, so (1/10) x 10 may also yield 0.999999.... For these reasons and others, floating-point calculations are rarely exact. When working with computer floating point, you have to be careful not to compare two numbers for exact equality, and you have to ensure that "round-off error" doesn't accumulate until it seriously degrades the results of your calculations.
You should explicitly set the precision for the Assert.
For example:
double precision = 1e-6;
Assert.AreEqual(d, 1.0, precision);
That works for your sample. I often use this approach in my code, but the right precision depends on the situation.
This is because floating point numbers lose precision. The best way to compare for equality is to subtract the numbers and verify the difference is less than a certain value such as .001 (or whatever precision you need). Look at http://msdn.microsoft.com/en-us/library/system.double%28v=VS.95%29.aspx, specifically the Floating-Point Values and Loss of Precision section.
0.1 can't be represented exactly in a double because of its internal format.
Use decimal if you want to represent base 10 numbers.
If you want to compare doubles check whether they are within a very small amount of each other.
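For instance (a minimal sketch; the tolerance 1e-9 is an arbitrary choice, not a universal constant):

double a = 0.1 + 0.1 + 0.1;
double b = 0.3;
Console.WriteLine(a == b);                 // False
Console.WriteLine(Math.Abs(a - b) < 1e-9); // True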
Related
I downloaded the GnuMP library for myself: https://gnumpnet.codeplex.com/, which behind the curtains uses gmp.dll, a wrapper for https://gmplib.org.
It has a type Real, which is used for high-precision calculations. In another project I have double and decimal types, which I want to replace with Real.
I need to replace Math.Round() with a customized round for Real (the type in Gnump.Net). Has anybody tried to implement Round for Gnump's Real?
Today I found a mathematically correct answer:
public static Real Round(Real r, int precision)
{
    // Scale so that the digit deciding the rounding becomes the last integer digit
    Real scaled = Real.Pow(10, precision + 1);
    Real multiplied = r * scaled;
    Real truncated = Trunc(multiplied);
    // Isolate the last digit to decide whether to round up
    Real lastNumber = truncated - Trunc(truncated / 10) * 10;
    if (lastNumber >= 5)
    {
        truncated += 10;
    }
    truncated = Trunc(truncated / 10);
    return truncated * 10 / scaled;
}
When I say mathematically correct, I mean that the following code:
Real r = 2.5;
r = Real.Round(r, 0);
will give 3. According to http://msdn.microsoft.com/en-us/library/wyk4d9cy.aspx, Math.Round performs "round to even" (so-called banker's rounding), but for my task I need the mathematical round.
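(A side note, not about Gnump: for the built-in numeric types, the same banker's-versus-mathematical difference is visible through the MidpointRounding overloads of Math.Round.)

Console.WriteLine(Math.Round(2.5));                                // 2 - banker's rounding
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3 - mathematical rounding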
I decided to re-create my question:
decimal dTotal = 0m;
foreach (DictionaryEntry item in _totals)
{
    if (!string.IsNullOrEmpty(item.Value.ToString()))
    {
        dTotal += Convert.ToDecimal(item.Value);
    }
}
Console.WriteLine(dTotal / 3600m);
Console.WriteLine(decimal.Round(dTotal / 3600m, 2));
Console.WriteLine(decimal.Divide(dTotal, 3600m));
The above code returns:
579.99722222222222222222222222
580.00
579.99722222222222222222222222
So, that is where my issues are coming from. I really need it to just display 579.99, but any round, be it decimal.Round or Math.Round, still returns 580; even the string format {0:F} returns 580.00.
How can I properly do this?
New answer (to new question)
Okay, so you've got a value of 579.99722222222222222222222222 - and you're asking that to be rounded to two decimal places. Isn't 580.00 the natural answer? It's closer to the original value than 579.99 is. It sounds like you essentially want flooring behaviour, but with a given number of digits. For that, you can use:
var floored = Math.Floor(original * 100) / 100;
In this case, you can do both in one step:
var hours = Math.Floor(dTotal / 36) / 100;
... which is equivalent to
var hours = Math.Floor((dTotal / 3600) * 100) / 100;
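A quick sanity check of the flooring approach with the value from the question (just a sketch; Math.Floor has a decimal overload):

decimal t = 579.99722222222222222222222222m;
Console.WriteLine(Math.Floor(t * 100) / 100); // 579.99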
Original answer (to original question)
Sounds like you've probably got payTotal in an inappropriate form to start with:
using System;

class Test
{
    static void Main()
    {
        decimal pay = 2087975.7m;
        decimal time = pay / 3600;
        Console.WriteLine(time); // Prints 579.99325
    }
}
This is the problem:
var payTotal = 2087975.7;
That makes payTotal a double variable. The value you've actually got is 2087975.69999999995343387126922607421875, which isn't what you wanted. Any time you find yourself casting from double to decimal or vice versa, you should be worried: chances are you've used the wrong type somewhere. Currency values should absolutely be stored in decimal rather than double (and there are various other Stack Overflow questions talking about when to use which).
See my two articles on floating point for more info:
Binary floating point in .NET
Decimal floating point in .NET
(Once you've got correct results, formatting them is a different matter of course, but that shouldn't be too bad...)
I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.:
static void Main(string[] args)
{
    double x = 0, y = 0;
    x += 20013.8;
    x += 20012.7;
    y += 10016.4;
    y += 30010.1;
    Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
    Console.Write("Press any key to continue . . . ");
    Console.ReadKey(true);
}
However, in this case, x and y are equal in the end.
Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.
double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1"
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"
Remark: based on this, don't assume that floating point arithmetic is unreliable in .NET.
Here's an example based on a prior question that demonstrates float arithmetic not working out exactly as you would think.
float f = (13.45f * 20);
int x = (int)f;
int y = (int)(13.45f * 20);
Console.WriteLine(x == y);
In this case, false is printed to the screen. Why? Because of where the math is performed versus where the cast to int is happening. For x, the math is performed in one statement and stored to f, then it is being cast to an integer. For y, the value of the calculation is never stored before the cast. (In x, some precision is lost between the calculation and the cast, not the case for y.)
For an explanation behind what's specifically happening in float math, see this question/answer. Why differs floating-point precision in C# when separated by parantheses and when separated by statements?
My favourite demonstration boils down to
double d = 0.1;
d += 0.2;
d -= 0.3;
Console.WriteLine(d);
The output is not 0.
Try making it so the decimal is not .5.
Take a look at this article here
http://floating-point-gui.de/
Try summing a VERY big and a VERY small number. The small one will be consumed, and the result will be the same as the large number.
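A minimal sketch of that absorption effect (the magnitudes are illustrative):

double big = 1e16;   // beyond 2^53 the gap between adjacent doubles exceeds 1
double small = 1.0;
Console.WriteLine(big + small == big); // True - the small addend vanishes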
Try performing repeated operations on an irrational number (such as a square root) or very long length repeating fraction. You'll quickly see errors accumulate. For instance, compute 1000000*Sqrt(2) vs. Sqrt(2)+Sqrt(2)+...+Sqrt(2).
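A rough sketch of that experiment (one million terms, as the answer suggests):

double sum = 0.0;
for (int i = 0; i < 1000000; i++)
{
    sum += Math.Sqrt(2); // every addition rounds, so the error accumulates
}
Console.WriteLine(sum - 1000000 * Math.Sqrt(2)); // typically a small but nonzero drift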
The simplest I can think of right now is this:
class Test
{
    private static void Main()
    {
        double x = 0.0;
        for (int i = 0; i < 10; ++i)
            x += 0.1;
        Console.WriteLine("x = {0}, expected x = {1}, x == 1.0 is {2}", x, 1.0, x == 1.0);
        Console.WriteLine("Allowing for a small error: x == 1.0 is {0}", Math.Abs(x - 1.0) < 0.001);
    }
}
I suggest that, if you're truly interested, you take a look at any one of a number of pages that discuss floating point numbers, some in gory detail. You will soon realize that, in a computer, they're a compromise, trading off accuracy for range. If you are going to be writing programs that use them, you do need to understand their limitations and the problems that can arise if you don't take care. It will be worth your time.
double is accurate to ~15 significant digits, so with only a few floating point operations you need to demand more precision than that before you really start hitting problems.
I have a double variable called x.
In the code, x gets assigned a value of 0.1, and I check it in an if statement comparing x and 0.1:
if (x == 0.1)
{
    ----
}
Unfortunately, it does not enter the if statement.
Should I use Double or double?
What's the reason behind this? Can you suggest a solution for this?
It's a standard problem due to how the computer stores floating point values. Search here for "floating point problem" and you'll find tons of information.
In short – a float/double can't store 0.1 precisely. It will always be a little off.
You can try using the decimal type which stores numbers in decimal notation. Thus 0.1 will be representable precisely.
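For example (the ten-additions experiment from earlier in this thread, redone with decimal; just a sketch):

decimal d = 0m;
for (int i = 0; i < 10; i++)
{
    d += 0.1m;
}
Console.WriteLine(d == 1.0m); // True - decimal stores 0.1 exactly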
You wanted to know the reason:
Float/double are stored as binary fractions, not decimal fractions. To illustrate:
12.34 in decimal notation (what we use) means
1 * 10^1 + 2 * 10^0 + 3 * 10^-1 + 4 * 10^-2
The computer stores floating point numbers in the same way, except it uses base 2: 10.01 means
1 * 2^1 + 0 * 2^0 + 0 * 2^-1 + 1 * 2^-2
Now, you probably know that there are some numbers that cannot be represented fully with our decimal notation. For example, 1/3 in decimal notation is 0.3333333…. The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is the number 1/10. In binary notation that is 0.000110011001100….
Since the binary notation cannot store it precisely, it is stored in a rounded-off way. Hence your problem.
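You can see that rounded-off value directly by asking for a round-trip number of digits ("G17" is enough to expose any double's stored value):

Console.WriteLine((0.1).ToString("G17")); // 0.10000000000000001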
double and Double are the same (double is an alias for Double) and can be used interchangeably.
The problem with comparing a double with another value is that doubles are approximate values, not exact values. So when you set x to 0.1 it may in reality be stored as 0.100000001 or something like that.
Instead of checking for equality, you should check that the difference is less than a defined minimum difference (tolerance). Something like:
if (Math.Abs(x - 0.1) < 0.0000001)
{
    ...
}
You need a combination of Math.Abs on X-Y and a value to compare with.
You can use the following extension-method approach:
public static class DoubleExtensions
{
    const double _3 = 0.001;
    const double _4 = 0.0001;
    const double _5 = 0.00001;
    const double _6 = 0.000001;
    const double _7 = 0.0000001;

    public static bool Equals3DigitPrecision(this double left, double right)
    {
        return Math.Abs(left - right) < _3;
    }

    public static bool Equals4DigitPrecision(this double left, double right)
    {
        return Math.Abs(left - right) < _4;
    }

    ...
Since you rarely call methods on double except ToString, I believe it's a pretty safe extension.
Then you can compare x and y like
if(x.Equals4DigitPrecision(y))
Comparing floating point numbers can't always be done precisely because of rounding. To compare
(x == .1)
the computer really compares
(x - .1) vs 0
The result of the subtraction cannot always be represented precisely because of how floating point numbers are represented on the machine. Therefore you get some nonzero value and the condition evaluates to false.
To overcome this, compare
Math.Abs(x - .1) vs some very small threshold (like 1E-9)
From the documentation:
Precision in Comparisons
The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .3333 and the Double returned by dividing 1 by 3 are unequal.
...
Rather than comparing for equality, one recommended technique involves defining an acceptable margin of difference between two values (such as .01% of one of the values). If the absolute value of the difference between the two values is less than or equal to that margin, the difference is likely to be due to differences in precision and, therefore, the values are likely to be equal. The following example uses this technique to compare .33333 and 1/3, the two Double values that the previous code example found to be unequal.
So if you really need a double, you should use the technique described in the documentation.
If you can, change it to a decimal. It will be slower, but you won't have this type of problem.
Use decimal. It doesn't have this "problem".
Exact comparison of floating point values is known to not always work due to rounding and internal representation issues.
Try imprecise comparison:
if (x >= 0.099 && x <= 0.101)
{
}
The other alternative is to use the decimal data type.
double (lowercase) is just an alias for System.Double, so they are identical.
For the reason, see Binary floating point and .NET.
In short: a double is not an exact type and a minute difference between "x" and "0.1" will throw it off.
Double (called float in some languages) is fraught with problems due to rounding issues; it's good only if you need approximate values.
The Decimal data type does what you want.
For reference, decimal and Decimal are the same in .NET C#, as are double and Double: each pair refers to the same type (decimal and double are very different from each other, though, as you've seen).
Beware that the Decimal data type has some costs associated with it, so use it with caution if you're looking at loops etc.
See the official MS help; the "Precision in Comparisons" part is especially interesting in the context of the question:
https://learn.microsoft.com/en-us/dotnet/api/system.double.equals
// Initialize two doubles with apparently identical values
double double1 = .333333;
double double2 = (double) 1/3;
// Define the tolerance for variation in their values
double difference = Math.Abs(double1 * .00001);
// Compare the values
// The output to the console indicates that the two values are equal
if (Math.Abs(double1 - double2) <= difference)
    Console.WriteLine("double1 and double2 are equal.");
else
    Console.WriteLine("double1 and double2 are unequal.");
1) Should I use Double or double?
Double and double are the same thing. double is just a C# keyword working as an alias for the struct System.Double.
The most common thing is to use the aliases! The same goes for string (System.String) and int (System.Int32).
Also see Built-In Types Table (C# Reference)
Taking a tip from the Java code base, try using .CompareTo and test for the zero comparison. This assumes the .CompareTo function takes into account floating point equality in an accurate manner. For instance,
System.Math.PI.CompareTo(System.Math.PI) == 0
This predicate should return true.
// number of digits to be compared (const so it can be used in the initializer below)
public const int n = 12;
// n+1 because b/a tends to 1 with n leading digits
public double MyEpsilon { get; } = Math.Pow(10, -(n + 1));

public bool IsEqual(double a, double b)
{
    // Avoid division by zero
    if (Math.Abs(a) <= double.Epsilon || Math.Abs(b) <= double.Epsilon)
        return Math.Abs(a - b) <= double.Epsilon;

    // Relative comparison
    return Math.Abs(1.0 - a / b) <= MyEpsilon;
}
Explanation
The main comparison is done using the division a/b, which should approach 1. But why division? It simply takes one number as a reference that defines the other. For example
a = 0.00000012345
b = 0.00000012346
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
or
a=12345*10^8
b=12346*10^8
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
By division we get rid of trailing or leading zeros (or relatively small factors) that pollute our judgement of the numbers' precision. In the example, the comparison is of order 10^-5 and we have 4 digits of accuracy; because of that, in the code at the beginning I wrote the comparison with 10^-(n+1), where n is the number of accurate digits.
Adding onto Valentin Kuzub's answer above:
we could use a single method that lets you specify the precision to the nth digit:
public static bool EqualsNthDigitPrecision(this double value, double compareTo, int precisionPoint) =>
Math.Abs(value - compareTo) < Math.Pow(10, -Math.Abs(precisionPoint));
Note: This method is built for simplicity without added bulk and not with performance in mind.
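A usage sketch, reusing values from earlier in the thread:

double sum = 0.1 + 0.1 + 0.1;                           // 0.30000000000000004
Console.WriteLine(sum == 0.3);                          // False
Console.WriteLine(sum.EqualsNthDigitPrecision(0.3, 9)); // True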
As a general rule:
Double representation is good enough in most cases but can fail miserably in some situations. Use decimal values if you need complete precision (as in financial applications).
Most problems with doubles don't come from direct comparison; they tend to be the result of the accumulation of several math operations whose rounding and fractional errors compound (especially with multiplications and divisions).
Check your logic. If the code is:
x = 0.1
if (x == 0.1)
it should not fail; it's too simple to fail. If x's value is calculated by more complex means or operations, it's quite possible that the ToString method used by the debugger applies smart rounding. Maybe you can do the same (if that's too risky, go back to using decimal):
if (x.ToString() == "0.1")
Floating point number representations are notoriously inaccurate because of the way floats are stored internally. E.g. x may actually be 0.0999999999 or 0.100000001 and your condition will fail. If you want to determine if floats are equal you need to specify whether they're equal to within a certain tolerance.
I.e.:
if (Math.Abs(x - 0.1) < tol) {
    // Do something
}
My extension method for double comparison:
public static bool IsEqual(this double value1, double value2, int precision = 2)
{
    // Round both values to the requested number of digits, then compare
    var dif = Math.Abs(Math.Round(value1, precision) - Math.Round(value2, precision));
    while (precision > 0)
    {
        dif *= 10;
        precision--;
    }
    return dif < 1;
}
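A usage sketch with the default precision of 2 digits:

double a = 0.30000000000000004; // the classic 0.1 + 0.1 + 0.1 result
Console.WriteLine(a.IsEqual(0.3));  // True
Console.WriteLine(a.IsEqual(0.31)); // False - they differ in the second digit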
To compare floating point double or float values, you can use C#'s CompareTo method:
if (double1.CompareTo(double2) > 0)
{
    // double1 is greater than double2
}
if (double1.CompareTo(double2) < 0)
{
    // double1 is less than double2
}
if (double1.CompareTo(double2) == 0)
{
    // double1 equals double2
}
https://learn.microsoft.com/en-us/dotnet/api/system.double.compareto?view=netcore-3.1
I have some code, and I do not understand it. I am developing an application in which precision is very important, but it does not seem to be important to .NET. Why? I don't know.
double value = 3.5;
MessageBox.Show((value + 1 * Math.Pow(10, -20)).ToString());
but the message box shows: 3.5
Please help me. Thank you.
If you're doing anything where precision is very important, you need to be aware of the limitations of floating point. A good reference is David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
You may find that floating-point doesn't give you enough precision and you need to work with a decimal type. These, however, are always much slower than floating point -- it's a tradeoff between accuracy and speed.
You can have precision, but it depends on what else you want to do. If you put the following in a Console application:
double a = 1e-20;
Console.WriteLine(" a = {0}", a);
Console.WriteLine("1+a = {0}", 1+a);
decimal b = 1e-20M;
Console.WriteLine(" b = {0}", b);
Console.WriteLine("1+b = {0}", 1+b);
You will get
a = 1E-20
1+a = 1
b = 0,00000000000000000001
1+b = 1,00000000000000000001
But note that the Pow function, like almost everything in the Math class, only takes doubles:
double Pow(double x, double y);
So you cannot take the sine of a decimal (other than by converting it to double).
Also see this question.
Or use the Decimal type rather than double.
The precision of a Double is 15 digits (17 digits internally). The value that you calculate with Math.Pow is correct, but when you add it to value it is simply too small to make a difference.
Edit:
A Decimal can handle that precision, but not the calculation. If you want that precision, you need to do the calculation, then convert each value to a Decimal before adding them together:
double value = 3.5;
double small = Math.Pow(10, -20);
Decimal result = (Decimal)value + (Decimal)small;
MessageBox.Show(result.ToString());
Double precision means it can hold 15-16 digits. 3.5 + 1e-20 would require 21 digits; it cannot be represented with double precision. You can use another type like decimal.