Maybe this is a very basic question, but I am really interested in what actually happens.
For example, if we do the following in C#:
object obj = "330.1500249000119";
var val = Convert.ToDouble(obj);
The val becomes: 330.15002490001189
The question is: why is the trailing 9 replaced by 89? Can we stop this from happening? And is this precision dependent on the current culture?
This has nothing to do with culture. Some numbers cannot be represented exactly in base 2, just as in base 10 one third can't be represented exactly by .3333333.
Note that in your specific case you are supplying more digits than the data type can hold: a Double gives you 15-16 significant digits (depending on the range), and your number goes beyond that.
Instead of a Double, you can use a Decimal in this case:
object obj = "330.1500249000119";
var val = Convert.ToDecimal(obj);
A decimal would retain the precision.
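A quick way to see the difference side by side (a small sketch; the invariant culture is used here only so the string parses the same way everywhere):

object obj = "330.1500249000119";

double asDouble = Convert.ToDouble(obj, CultureInfo.InvariantCulture);
decimal asDecimal = Convert.ToDecimal(obj, CultureInfo.InvariantCulture);

// The decimal stores the 16 significant digits of the string exactly; the double
// stores the nearest base-2 value, which differs in the last couple of digits.
Console.WriteLine(asDecimal);                 // 330.1500249000119
Console.WriteLine(asDouble.ToString("G17"));  // shows the digits the double actually holds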
The "issue" you are having is floating point representation.
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
No, you can't stop it from happening. You are parsing a value that has more digits than the data type can represent.
The precision is not dependent on the culture. A double always has the same precision.
So, if you don't want it to happen, then simply don't do it. If you don't want the effects of the limited precision of floating point numbers, don't use floating point numbers. If you would use a fixed point number (Decimal) instead, it could represent the value exactly.
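To see that culture only affects parsing and formatting, never the stored value, you can compare the raw bits of the double (a sketch; the German culture is just an arbitrary example, and System.Globalization is assumed to be imported):

double d = double.Parse("330.1500249000119", CultureInfo.InvariantCulture);

// The 64 stored bits are identical no matter what CurrentCulture is set to;
// culture only changes how the value is rendered as text (e.g. ',' vs '.').
Console.WriteLine(BitConverter.DoubleToInt64Bits(d).ToString("X16"));
Console.WriteLine(d.ToString(CultureInfo.InvariantCulture));
Console.WriteLine(d.ToString(new CultureInfo("de-DE"))); // same digits, different decimal separator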
A CPU represents a double in 8 bytes, which are divided into 1 sign bit, 11 bits for the exponent ("the range"), and 52 bits for the mantissa ("the precision").
You have limited range and precision.
The C constant DBL_DIG in <float.h> tells you that such a double can represent only 15 decimal digits reliably, not more. But this number is entirely dependent on your C library and CPU.
330.1500249000119 contains 16 significant digits, so rounded to 15 digits it would be 330.150024900012. What you actually got, 330.15002490001189, is only one off in the last place, which is good; normally you should expect something like 1.189 vs 1.2 in the final digits.
For the exact mathematics behind try to read David Goldberg, “What Every Computer Scientist Should Know About Floating-point Arithmetic,” ACM Computing Surveys 23, 1 (1991-03), 5-48. This is worth reading if you are interested in the details, but it does require a background in computer science.
http://www.validlab.com/goldberg/paper.pdf
You can reduce this by using wider floating point types, like long double or __float128, or by using a better CPU, like a SPARC64 or S/390, which implements quad precision (__float128, roughly 33 digits) natively in hardware as long double.
And yes, using an UltraSPARC/Niagara or an IBM S/390 is a matter of culture.
The usual answer is: use long double. That gives you two more bytes on Intel (18 digits), several more on PowerPC (31 digits), and roughly 33 on SPARC64/S390.
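To see the 1 + 11 + 52 bit split for yourself in C#, you can pull the fields out of the raw bits (a sketch, not production code):

// Decompose a double into sign, exponent and mantissa bits (1 + 11 + 52 = 64).
double d = 330.1500249000119;
long bits = BitConverter.DoubleToInt64Bits(d);

long sign     = (bits >> 63) & 0x1;
long exponent = (bits >> 52) & 0x7FF;          // 11 bits, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFFL;       // 52 explicit bits (the leading 1 is implicit)

Console.WriteLine($"sign={sign} exponent={exponent - 1023} mantissa=0x{mantissa:X13}");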
Related
Why does the following program print what it prints?
class Program
{
    static void Main(string[] args)
    {
        float f1 = 0.09f * 100f;
        float f2 = 0.09f * 99.999999f;
        Console.WriteLine(f1 > f2);
    }
}
Output is
false
Floating point only has so many digits of precision. If you're seeing f1 == f2, it is because any difference requires more precision than a 32-bit float can represent.
I recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
The main thing is that this isn't just .NET: it's a limitation of the underlying representation that almost every language uses for a float in memory. The precision only goes so far.
You can also have some fun with relatively simple numbers when you take into account that it's not even base ten. 0.1 (1/10th), for example, is a repeating fraction when represented in binary, just as 1/3rd is when represented in decimal.
In this particular case, it’s because 0.09 and 99.999999 cannot be represented with exact precision in binary (similarly, 1/3 cannot be represented with exact precision in decimal). For example, 0.111111111111111111101111 base 2 is 0.999998986721038818359375 base 10. Adding 1 at the last binary place, the next value, 0.11111111111111111111 base 2, is 0.99999904632568359375 base 10. There isn’t a binary value for exactly 0.999999. Floating point precision is also limited by the space allocated for storing the exponent and the fractional part of the mantissa. Also, like integer types, floating point can overflow its range, although its range is larger than integer ranges.
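In fact, in the example above 99.999999f is itself closer to 100 than to any other float, so it rounds to exactly 100.0f at compile time and the two products come out bit-for-bit identical (a quick check, assuming nothing beyond the code in the question):

float f1 = 0.09f * 100f;
float f2 = 0.09f * 99.999999f;   // 99.999999f rounds to exactly 100.0f when compiled

Console.WriteLine(99.999999f == 100f);  // True
Console.WriteLine(f1 == f2);            // True, so f1 > f2 is False
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(f1)));
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(f2)));  // same bytes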
Running this bit of C++ code in the Xcode debugger,
float myFloat = 0.1;
shows that myFloat gets the value 0.100000001. It is off by 0.000000001. Not a lot, but if the computation has several arithmetic operations, the imprecision can be compounded.
IMHO, a very good explanation of floating point is in Chapter 14 of Introduction to Computer Organization with x86-64 Assembly Language & GNU/Linux by Bob Plantz of California State University at Sonoma (retired): http://bob.cs.sonoma.edu/getting_book.html. The following is based on that chapter.
Floating point is like scientific notation, where a value is stored as a mixed number greater than or equal to 1.0 and less than 2.0 (the mantissa), times another number to some power (the exponent). Floating point uses base 2 rather than base 10, but in the simple model Plantz gives, he uses base 10 for clarity’s sake. Imagine a system where two positions of storage are used for the mantissa, one position is used for the sign of the exponent* (0 representing + and 1 representing -), and one position is used for the exponent. Now add 0.93 and 0.91. The answer is 1.8, not 1.84.
9311 represents 0.93, or 9.3 times 10 to the -1.
9111 represents 0.91, or 9.1 times 10 to the -1.
The exact answer is 1.84, or 1.84 times 10 to the 0, which would be 18400 if we had 5 positions, but, having only four positions, the answer is 1800, or 1.8 times 10 to the zero, or 1.8. Of course, floating point data types can use more than four positions of storage, but the number of positions is still limited.
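Here is a rough C# sketch of that limited-mantissa idea (RoundToTwoSignificantDigits is a hypothetical helper standing in for the two-digit mantissa of the toy format):

// Keep only two significant digits, mimicking the two-digit mantissa of the toy format.
static decimal RoundToTwoSignificantDigits(decimal x)
{
    int exponent = (int)Math.Floor(Math.Log10((double)Math.Abs(x)));
    decimal scale = (decimal)Math.Pow(10, exponent - 1);
    return Math.Round(x / scale) * scale;
}

decimal a = 0.93m;
decimal b = 0.91m;
Console.WriteLine(RoundToTwoSignificantDigits(a + b)); // 1.8, not 1.84: the third digit does not fit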
Not only is precision limited by space, but “an exact representation of fractional values in binary is limited to sums of inverse powers of two.” (Plantz, op. cit.).
0.11100110 (binary) = 0.89843750 (decimal)
0.11100111 (binary) = 0.90234375 (decimal)
There is no exact representation of 0.9 decimal in binary. Even carrying the fraction out more places doesn’t work, as you get into repeating 1100 forever on the right.
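You can see this directly in C#: asking for 17 significant digits reveals that the double nearest to 0.9 is not 0.9 itself (a one-line sketch):

double d = 0.9;
Console.WriteLine(d.ToString("G17"));   // 0.90000000000000002, the nearest double to 0.9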
Beginning programmers often see floating point arithmetic as more
accurate than integer. It is true that even adding two very large
integers can cause overflow. Multiplication makes it even more likely
that the result will be very large and, thus, overflow. And when used
with two integers, the / operator in C/C++ causes the fractional part
to be lost. However, ... floating point representations have their own
set of inaccuracies. (Plantz, op. cit.)
*In floating point, both the sign of the number and the sign of the exponent are represented.
During the lunch break we started debating the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not make assumptions
about this and it depends on where the first 1 is in the binary
representation. (i.e. the size of the number before the decimal point counts, too)
How can one make a more qualified statement?
As stated by the C# reference, the precision is from 15 to 16 significant digits (depending on the value represented), spread across the digits before and after the decimal point.
In short, you are right: it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed-precision decimal values is not a good idea.
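A hedged way to see this without trusting any particular digit count is to print the same double at 15 and at 17 significant digits (the literal below is just an arbitrary 15-digit example):

double d = 1234567.12345678;              // 15 significant digits in the source text

Console.WriteLine(d.ToString("G15"));     // 1234567.12345678 : 15 digits always survive the round trip
Console.WriteLine(d.ToString("G17"));     // shows the extra digits the double actually stores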
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
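A quick sketch that makes the point concrete (2^52 is where the spacing between adjacent doubles reaches 1.0):

double big = 4503599627370496.0;                      // 2^52: adjacent doubles are exactly 1.0 apart
Console.WriteLine(big + 0.4 == big);                  // True: the 0.4 is simply lost
Console.WriteLine(big + 1.0 == 4503599627370497.0);   // True: that is the next representable double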
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has exactly one digit before the binary point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent; numbers whose exponent exceeds the number of digits in the fractional part of the significand do not have a fractional part themselves.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
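You can test whether a given double really needs 16 or 17 digits to round-trip (a small sketch; the value is only an example):

double d = 0.92107099070578813;

string s16 = d.ToString("G16", CultureInfo.InvariantCulture);
string s17 = d.ToString("G17", CultureInfo.InvariantCulture);

Console.WriteLine(double.Parse(s16, CultureInfo.InvariantCulture) == d); // may be False: 16 digits are not always enough
Console.WriteLine(double.Parse(s17, CultureInfo.InvariantCulture) == d); // True: 17 digits always round-trip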
Mathematically, consider for this question the rational number
8725724278030350 / 2**48
where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms, reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is
31.0000000000000'49'73799150320701301097869873046875 (exact)
where the apostrophes do not represent missing digits but merely mark the boundaries where rounding to 15 and 17 digits, respectively, is to be performed.
Note the following: If this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s), because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up by increasing the 49... digits to 50 (the expansion stops there; the next digits were 73...), and the second rounding might then round up again (when the midpoint rounding rule says "round away from zero").
(There are many more numbers with the above characteristics, of course.)
Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: Isn't this a bug? By standard string representation we mean the String produced by the parameterless Double.ToString() instance method, which is of course identical to what is produced by ToString("G").
An interesting thing to note is that if you cast the above number to System.Decimal then you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToString() makes an incorrect one.
To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate.
Here's some C# code for you to verify the problem:
static void Main()
{
    const double evil = 31.0000000000000497;
    string exactString = DoubleConverter.ToExactString(evil); // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx

    Console.WriteLine("Exact value (Jon Skeet): {0}", exactString); // writes 31.00000000000004973799150320701301097869873046875
    Console.WriteLine("General format (G): {0}", evil);             // writes 31.0000000000001
    Console.WriteLine("Round-trip format (R): {0:R}", evil);        // writes 31.00000000000005
    Console.WriteLine();
    Console.WriteLine("Binary repr.: {0}", String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2"))));
    Console.WriteLine();

    decimal converted = (decimal)evil;
    Console.WriteLine("Decimal version: {0}", converted); // writes 31

    decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture);
    Console.WriteLine("Better decimal: {0}", preciseDecimal); // writes 31.000000000000049737991503207
}
The above code uses Jon Skeet's ToExactString method. If you don't want to use his code (it can be found through the URL), just delete the lines above that depend on exactString. You can still see how the Double in question (evil) is rounded and cast.
ADDITION:
OK, so I tested some more numbers, and here's a table:
exact value (truncated) "R" format "G" format decimal cast
------------------------- ------------------ ---------------- ------------
6.00000000000000'53'29... 6.0000000000000053 6.00000000000001 6
9.00000000000000'53'29... 9.0000000000000053 9.00000000000001 9
30.0000000000000'49'73... 30.00000000000005 30.0000000000001 30
50.0000000000000'49'73... 50.00000000000005 50.0000000000001 50
200.000000000000'51'15... 200.00000000000051 200.000000000001 200
500.000000000000'51'15... 500.00000000000051 500.000000000001 500
1020.00000000000'50'02... 1020.000000000005 1020.00000000001 1020
2000.00000000000'50'02... 2000.000000000005 2000.00000000001 2000
3000.00000000000'50'02... 3000.000000000005 3000.00000000001 3000
9000.00000000000'54'56... 9000.0000000000055 9000.00000000001 9000
20000.0000000000'50'93... 20000.000000000051 20000.0000000001 20000
50000.0000000000'50'93... 50000.000000000051 50000.0000000001 50000
500000.000000000'52'38... 500000.00000000052 500000.000000001 500000
1020000.00000000'50'05... 1020000.000000005 1020000.00000001 1020000
The first column gives the exact (though truncated) value that the Double represents. The second column gives the string representation from the "R" format string. The third column gives the usual string representation. And finally the fourth column gives the System.Decimal that results from converting this Double.
We conclude the following:
Rounding to 15 digits via ToString() and rounding to 15 digits via conversion to Decimal disagree in very many cases
Conversion to Decimal also rounds incorrectly in many cases, and the errors in those cases cannot be described as "round twice" errors
In my cases, ToString() seems to yield a bigger number than the Decimal conversion when they disagree (no matter which of the two rounds correctly)
I only experimented with cases like the above. I haven't checked if there are rounding errors with numbers of other "forms".
So from your experiments, it appears that Double.ToString doesn't do correct rounding.
That's rather unfortunate, but not particularly surprising: doing correct rounding for binary to decimal conversions is nontrivial, and also potentially quite slow, requiring multiprecision arithmetic in corner cases. See David Gay's dtoa.c code here for one example of what's involved in correctly-rounded double-to-string and string-to-double conversion. (Python currently uses a variant of this code for its float-to-string and string-to-float conversions.)
Even the current IEEE 754 standard for floating-point arithmetic recommends, but doesn't require that conversions from binary floating-point types to decimal strings are always correctly rounded. Here's a snippet, from section 5.12.2, "External decimal character sequences representing finite numbers".
There might be an implementation-defined limit on the number of
significant digits that can be converted with correct rounding to and
from supported binary formats. That limit, H, shall be such that H ≥
M+3 and it should be that H is unbounded.
Here M is defined as the maximum of Pmin(bf) over all supported binary formats bf, and since Pmin(float64) is defined as 17 and .NET supports the float64 format via the Double type, M should be at least 17 on .NET. In short, this means that if .NET were to follow the standard, it would be providing correctly rounded string conversions up to at least 20 significant digits. So it looks as though the .NET Double doesn't meet this standard.
In answer to the 'Is this a bug' question, much as I'd like it to be a bug, there really doesn't seem to be any claim of accuracy or IEEE 754 conformance anywhere that I can find in the number formatting documentation for .NET. So it might be considered undesirable, but I'd have a hard time calling it an actual bug.
EDIT: Jeppe Stig Nielsen points out that the System.Double page on MSDN states that
Double complies with the IEC 60559:1989 (IEEE 754) standard for binary
floating-point arithmetic.
It's not clear to me exactly what this statement of compliance is supposed to cover, but even for the older 1985 version of IEEE 754, the string conversion described seems to violate the binary-to-decimal requirements of that standard.
Given that, I'll happily upgrade my assessment to 'possible bug'.
First, take a look at the bottom of this page, which shows a very similar 'double rounding' problem.
Checking the binary/hex representation of the following floating point numbers shows that the given range is stored as the same number in double format:
31.0000000000000480 = 0x403f00000000000e
31.0000000000000497 = 0x403f00000000000e
31.0000000000000515 = 0x403f00000000000e
As noted by several others, that is because the closest representable double has an exact value of 31.00000000000004973799150320701301097869873046875.
There are two additional aspects to consider in the forward and reverse conversion of IEEE 754 doubles to strings, especially in the .NET environment.
First (I cannot find a primary source) from Wikipedia we have:
If a decimal string with at most 15 significant decimal digits is converted to IEEE 754 double precision and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 double precision value is converted to a decimal string with at least 17 significant decimal digits and then converted back to double, then the final number must match the original.
Therefore, as far as compliance with the standard goes, converting the string 31.0000000000000497 to a double and back to a string will not necessarily reproduce the original (too many significant digits were given).
The second consideration is that unless the double-to-string conversion produces 17 significant digits, its rounding behavior is not explicitly defined in the standard either.
Furthermore, the documentation on Double.ToString() shows that it is governed by the numeric format specifier of the current culture settings.
Possible Complete Explanation:
I suspect the twice-rounding is occurring something like this: the initial decimal string is created to 16 or 17 significant digits because that is the precision required for "round trip" conversion, giving an intermediate result of 31.00000000000005 or 31.000000000000050. Then, due to default culture settings, the result is rounded to 15 significant digits, 31.0000000000001, because 15 significant decimal digits is the minimum precision guaranteed for all doubles.
Doing an intermediate conversion to Decimal, on the other hand, avoids this problem in a different way: it truncates to 15 significant digits directly.
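The twice-rounding hazard can be sketched with plain decimal arithmetic (the literal below is just the start of the exact double value; with two integer digits, 13 and 15 fractional digits correspond to 15 and 17 significant digits):

decimal exact = 31.000000000000049738m;   // leading digits of the exact double value

decimal roundedOnce  = Math.Round(exact, 13, MidpointRounding.AwayFromZero);
decimal roundedTwice = Math.Round(Math.Round(exact, 15, MidpointRounding.AwayFromZero),
                                  13, MidpointRounding.AwayFromZero);

Console.WriteLine(roundedOnce);   // ends in ...0000: correct single rounding
Console.WriteLine(roundedTwice);  // ends in ...0001: rounding via 17 digits drags it up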
The question: Isn't this a bug?
Yes. See this PR on GitHub. The reason for rounding twice, AFAIK, is to get "pretty" formatted output, but it introduces the bug you have discovered here. We tried to fix it by removing the 15-digit precision conversion and going directly to the 17-digit precision conversion. The bad news is that it's a breaking change and would break a lot of things. For example, one of the test cases breaks:
10:12:26 Assert.Equal() Failure
10:12:26 Expected: 1.1
10:12:26 Actual: 1.1000000000000001
The fix would impact a large set of existing libraries, so this PR has been closed for now. However, the .NET Core team is still looking for a chance to fix this bug. You're welcome to join the discussion.
Truncation is the correct way to limit the precision of a number that will later be rounded, precisely to avoid the double rounding issue.
I have a simpler suspicion: the culprit is likely the power operator, **. While your number is exactly representable as a double, for convenience reasons (the power operator needs much work to work correctly) the power is calculated via the exponential function. This is one reason why you can optimize performance by multiplying a number repeatedly instead of calling pow(), because pow() is very expensive.
So it may not give you the correct 2**48, but something slightly incorrect, and therefore you get your rounding problems.
Please check what 2**48 exactly returns.
EDIT: Sorry, I only skimmed the problem and gave a wrong suspicion. There is a known issue with double rounding on Intel processors: older code uses the internal 80-bit format of the x87 FPU instead of the SSE instructions, which is likely to cause such errors. The value is computed exactly in the 80-bit register and then rounded twice, so Jeppe has already found and neatly explained the problem.
Is it a bug? Well, the processor is doing everything right; it is simply that the Intel FPU internally carries more precision for floating-point operations.
FURTHER EDIT AND INFORMATION:
The "double rounding" is a known issue and explicitly mentioned in "Handbook of Floating-Point Arithmetic" by Jean-Michel Muller et. al. in the chapter "The Need
for a Revision" under "3.3.1 A typical problem : 'double rounding'" at page 75:
The processor being used may offer an internal precision that is wider than the precision of the variables of the program (a typical example is the double-extended format available on Intel platforms, when the variables of the program are single-precision or double-precision floating-point numbers). This may sometimes have strange side effects, as we will see in this section. Consider the C program [...]
#include <stdio.h>

int main(void)
{
    double a = 1848874847.0;
    double b = 19954562207.0;
    double c;
    c = a * b;
    printf("c = %20.19e\n", c);
    return 0;
}
Results with GCC 4.1.2 20061115 on Linux/Debian, 32-bit build:
With -mfpmath=387 (80-bit x87 FPU): 3.6893488147419103232e+19
With -march=pentium4 -mfpmath=sse (SSE), or in 64-bit mode: 3.6893488147419111424e+19
As explained in the book, the discrepancy is due to double rounding: once to 80 bits and then again to 53 bits.
All the methods in System.Math take double as parameters and return double. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.
Why is this? Especially for the constants. Isn't decimal supposed to be more exact? Wouldn't that often be useful when doing calculations?
This is a classic speed-versus-accuracy trade-off.
However, keep in mind that for pi, for example, the most digits you will ever need is 41.
The largest number of digits of pi
that you will ever need is 41. To
compute the circumference of the
universe with an error less than the
diameter of a proton, you need 41
digits of pi †. It seems safe to
conclude that 41 digits is sufficient
accuracy in pi for any circle
measurement problem you're likely to
encounter. Thus, in the over one
trillion digits of pi computed in
2002, all digits beyond the 41st have
no practical value.
In addition, decimal and double have slightly different internal storage structures. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
Also consider:
System.Double    8 bytes    approximately ±5.0e-324 to ±1.7e+308, with 15 or 16 significant figures
System.Decimal  16 bytes    approximately ±1.0e-28 to ±7.9e+28, with 28 or 29 significant figures
As you can see, decimal has a smaller range, but a higher precision.
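For instance (a quick comparison sketch; the double outputs are shown approximately, since their text form varies slightly between runtimes):

Console.WriteLine(double.MaxValue);    // about 1.8E+308: huge range
Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335, i.e. about 7.9E+28

Console.WriteLine(1m / 3m);            // 0.3333333333333333333333333333  (28 digits held)
Console.WriteLine(1.0 / 3.0);          // about 0.3333333333333333         (15-17 digits)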
No, decimals are no more "exact" than doubles or, for that matter, any other type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside that range.
Decimals are 100% exact at representing a certain set of base 10 values. doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers.
Neither is any more exact than than the other in general, they are simply for different purposes.
To elaborate a bit further, since I seem not to have been clear enough for some readers...
If you take every number which is representable as a decimal, and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue, and every double in red, except for the integers, there would be very few places where the same value was marked in both colors.
In general, for 99.99999 % of the marks, (please don't nitpick my percentage) the blue set (decimals) is a completely different set of numbers from the red set (the doubles).
This is because, by our very definition, the blue set is a base-10 mantissa/exponent representation, while a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent, e.g. 1.00110101001 x 2^(-11101001101001), means: take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary value), the number cannot necessarily be represented exactly as a decimal mantissa and exponent with the limited digits available, and vice versa.
For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles.
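A concrete illustration of the two different sets (a sketch; "G17" shows every digit the double can distinguish):

decimal blue = 1m / 3m;     // the decimal (base-10) value closest to one third
double  red  = 1.0 / 3.0;   // the double  (base-2)  value closest to one third

Console.WriteLine(blue);                  // 0.3333333333333333333333333333
Console.WriteLine(red.ToString("G17"));   // 0.33333333333333331
Console.WriteLine(blue == (decimal)red);  // False: two different marks on the number line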
Decimal is more precise but has less of a range. You would generally use Double for physics and mathematical calculations but you would use Decimal for financial and monetary calculations.
See the following articles on msdn for details.
Double
http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
Decimal
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
It seems like most of the responses here to "it does not do what I want" are "but it's faster". Well, so is ANSI C with the GMP library, but nobody is advocating that, right?
If you particularly want to control accuracy, there are other languages that have taken the time to implement exact precision in a user-controllable way:
http://www.doughellmann.com/PyMOTW/decimal/
If precision is really important to you, then you are probably better off using languages that mathematicians would use. If you do not like Fortran, then Python is a modern alternative.
Whatever language you are working in, remember the golden rule:
Avoid mixing types...
So convert a and b to the same type before you attempt a operator b.
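In C# the compiler even enforces this for decimal and double; a minimal illustration (the names are made up):

decimal price   = 19.99m;
double  taxRate = 0.0825;

// decimal total = price * taxRate;        // does not compile: no implicit conversion between decimal and double
decimal total = price * (decimal)taxRate;  // convert once, explicitly, then stay in decimal
Console.WriteLine(total);                  // 1.649175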
If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
Neither Decimal nor float nor double is good enough if you require something to be precise. Furthermore, Decimal is so expensive and so overused out there that it is becoming a regular joke.
If you work in fractions and require ultimate precision, use fractions. It's the same old rule: convert once, and only when necessary. Your rounding rules will also vary per app, domain and so on, but surely you can find an odd example or two where it is suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. Consider that you might want a feature of arbitrary precision as well.
The actual problem with the CLR in general is that it is so odd, and plain broken, when it comes to implementing a library that deals with numerics in a generic fashion, largely due to bad primitive design and a shortcoming of the most popular compiler for the platform. It's almost the same as with the Java fiasco.
double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.
[piece to users of MSDN slowdown compilers]
Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.
Decimal is actually a complex structure, consisting of several integers.
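Its four 32-bit fields are visible through decimal.GetBits (a tiny sketch):

// A System.Decimal is a 96-bit integer plus a sign bit and a power-of-ten scale factor.
int[] parts = decimal.GetBits(1.5m);
Console.WriteLine(string.Join(", ", parts));   // 15, 0, 0, 65536  (integer 15, scale 1, positive sign)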