I asked about System.Double recently and was told that computations may differ depending on platform/architecture. Unfortunately, I cannot find any information to tell me whether the same applies to System.Decimal.
Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?
The C# 4 spec is clear that the value you get will be computed the same on any platform.
As LukeH's answer notes, the ECMA version of the C# 2 spec grants leeway to conforming implementations to provide more precision, so an implementation of C# 2.0 on another platform might provide a higher-precision answer.
For the purposes of this answer I'll just discuss the C# 4.0 specified behaviour.
The C# 4.0 spec says:
The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position [...]. A zero result always has a sign of 0 and a scale of 0.
Since the calculation of the exact value of an operation should be the same on any platform, and the rounding algorithm is well-defined, the resulting value should be the same regardless of platform.
However, note the parenthetical and that last sentence about the zeroes. It might not be clear why that information is necessary.
One of the oddities of the decimal system is that almost every quantity has more than one possible representation. Consider the exact value 123.456. A decimal is the combination of a 96-bit integer, a 1-bit sign, and an 8-bit scale that represents a power of ten from 10^0 down to 10^-28. That means that the exact value 123.456 could be represented by the decimals 123456 x 10^-3, 1234560 x 10^-4, or 12345600 x 10^-5. Scale matters.
The C# specification also mandates how information about scale is computed. The literal 123.456m would be encoded as 123456 x 10^-3, and 123.4560m would be encoded as 1234560 x 10^-4.
Observe the effects of this feature in action:
decimal d1 = 111.111000m;
decimal d2 = 111.111m;
decimal d3 = d1 + d1;
decimal d4 = d2 + d2;
decimal d5 = d1 + d2;
Console.WriteLine(d1);
Console.WriteLine(d2);
Console.WriteLine(d3);
Console.WriteLine(d4);
Console.WriteLine(d5);
Console.WriteLine(d3 == d4);
Console.WriteLine(d4 == d5);
Console.WriteLine(d5 == d3);
This produces
111.111000
111.111
222.222000
222.222
222.222000
True
True
True
Notice how information about significant zero figures is preserved across operations on decimals, and that decimal.ToString knows about that and displays the preserved zeroes if it can. Notice also how decimal equality knows to make comparisons based on exact values, even if those values have different binary and string representations.
I don't think the spec actually says that decimal.ToString() needs to correctly print out values with trailing zeroes based on their scales, but it would be foolish of an implementation not to do so; I would consider that a bug.
I also note that the internal memory format of a decimal in the CLR implementation is 128 bits, subdivided into: 16 unused bits, 8 scale bits, 7 more unused bits, 1 sign bit and 96 mantissa bits. The exact layout of those bits in memory is not defined by the specification, and if another implementation wants to stuff additional information into those 23 unused bits for its own purposes, it can do so. In the CLR implementation the unused bits are supposed to always be zero.
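The flags-word encoding described above can be sketched in Python; the helper names are illustrative and this models only the bit assignments, not the actual field order of the CLR struct:

```python
def encode_decimal(coefficient: int, scale: int, negative: bool) -> bytes:
    """Pack a decimal value as a 96-bit coefficient plus a 32-bit flags word.

    Flags word per the CLR layout: bits 0-15 unused, bits 16-23 scale (0-28),
    bits 24-30 unused, bit 31 sign. The unused bits stay zero.
    """
    assert 0 <= coefficient < 2**96 and 0 <= scale <= 28
    flags = (scale << 16) | ((1 << 31) if negative else 0)
    return coefficient.to_bytes(12, 'little') + flags.to_bytes(4, 'little')

def decode_decimal(raw: bytes) -> str:
    coefficient = int.from_bytes(raw[:12], 'little')
    flags = int.from_bytes(raw[12:], 'little')
    scale = (flags >> 16) & 0xFF
    sign = '-' if flags >> 31 else ''
    digits = str(coefficient).rjust(scale + 1, '0')
    return sign + (digits[:-scale] + '.' + digits[-scale:] if scale else digits)

# 123.456 stored as coefficient 123456 with scale 3:
print(decode_decimal(encode_decimal(123456, 3, False)))  # 123.456
```

Note how the scale is carried alongside the coefficient rather than derived from it, which is what lets 123456 x 10^-3 and 1234560 x 10^-4 remain distinct representations of the same value.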
Even though the format of floating point types is clearly defined, floating point calculations can indeed have differing results depending on architecture, as stated in section 4.1.6 of the C# specification:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations.
While the decimal type is subject to approximation in order for a value to be represented within its finite range, that range is defined to be suitable for financial and monetary calculations. The type therefore has higher precision (and a smaller range) than float or double. It is also more tightly specified than the other floating-point types, such that it appears to be platform-independent (see section 4.1.7). I suspect this platform independence is more because there isn't standard hardware support for types with the size and precision of decimal rather than because of the type itself, so this may change with future specifications and hardware architectures.
If you need to know if a specific implementation of the decimal type is correct, you should be able to craft some unit tests using the specification that will test the correctness.
The decimal type is represented in what amounts to base-10 using a struct (containing integers, I believe), as opposed to double and other floating-point types, which represent non-integral values in base-2. Therefore, decimals are exact representations of base-10 values, within a standardized precision, on any architecture. This is true for any architecture running a correct implementation of the .NET spec.
So to answer your question, since the behavior of decimal is standardized this way in the specification, decimal values should be the same on any architecture conforming to that spec. If they don't conform to that spec, then they're not really .NET.
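The distinction can be seen with Python's decimal module, which (like System.Decimal) is a base-10 floating-point type; a minimal sketch:

```python
from decimal import Decimal

# Binary doubles cannot represent 0.1 exactly, so the error surfaces:
print(0.1 + 0.2)                       # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                # False

# A base-10 type represents these particular values exactly:
print(Decimal('0.1') + Decimal('0.2'))                    # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```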
"Decimal" .NET Type vs. "Float" and "Double" C/C++ Type
A reading of the specification suggests that decimal -- like float and double -- might be allowed some leeway in its implementation so long as it meets certain minimum standards.
Here are some excerpts from the ECMA C# spec (section 11.1.7). All emphasis in bold is mine.
The decimal type can represent values including those in the range 1 × 10^−28 through 1 × 10^28 with at least 28 significant digits.

The finite set of values of type decimal are of the form (−1)^s × c × 10^−e, where the sign s is 0 or 1, the coefficient c is given by 0 <= c < Cmax, and the scale e is such that Emin <= e <= Emax, where Cmax is at least 1 × 10^28, Emin <= 0, and Emax >= 28. The decimal type does not necessarily support signed zeros, infinities, or NaN's.

For decimals with an absolute value less than 1.0m, the value is exact to at least the 28th decimal place. For decimals with an absolute value greater than or equal to 1.0m, the value is exact to at least 28 digits.
Note that the wording of the Microsoft C# spec (section 4.1.7) is significantly different from that of the ECMA spec. It appears to lock down the behaviour of decimal a lot more strictly.
Related
The Double data type cannot correctly represent some base 10 values. This is because of how floating point numbers represent real numbers. What this means is that when representing monetary values, one should use the decimal value type to prevent errors. (feel free to correct errors in this preamble)
What I want to know is what are the values which present such a problem under the Double data-type under a 64 bit architecture in the standard .Net framework (C# if that makes a difference) ?
I expect the answer to be a formula or rule to find such values, but I would also like some example values.
Any number which cannot be written as the sum of positive and negative powers of 2 cannot be exactly represented as a binary floating-point number.
The common IEEE formats for 32- and 64-bit representations of floating-point numbers impose further constraints; they limit the number of binary digits in both the significand and the exponent. So there are maximum and minimum representable numbers (approximately +/- 10^308 (base-10) if memory serves) and limits to the precision of a number that can be represented. This limit on the precision means that, for 64-bit numbers, the difference between the exponent of the largest power of 2 and the smallest power in a number is limited to 52, so if your number includes a term in 2^52 it can't also include a term in 2^-1.
Simple examples of numbers which cannot be exactly represented in binary floating-point numbers include 1/3, 2/3, 1/5.
Since the set of floating-point numbers (in any representation) is finite, and the set of real numbers is infinite, one algorithm to find a real number which is not exactly representable as a floating-point number is to select a real number at random. The probability that the real number is exactly representable as a floating-point number is 0.
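A quick way to test representability, sketched here with Python's fractions module (constructing a Fraction from a float recovers the exact binary value the float actually stores):

```python
from fractions import Fraction

# 1/3 has no finite binary expansion, so the stored double differs from it:
third = Fraction(1, 3)
assert Fraction(float(third)) != third

# 0.2 = 1/5 likewise rounds to the nearest double:
assert Fraction(0.2) != Fraction(1, 5)

# 1/2 = 2^-1 is a sum of powers of two, so it is stored exactly:
assert Fraction(0.5) == Fraction(1, 2)
```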
You generally need to be prepared for the possibility that any value you store in a double has some small amount of error. Unless you're storing a constant value, chances are it could be something with at least some error. If it's imperative that there never be any error, and the values aren't constant, you probably shouldn't be using a floating point type.
What you probably should be asking in many cases is, "How do I deal with the minor floating point errors?" You'll want to know what types of operations can result in a lot of error, and what types don't. You'll want to ensure that comparing two values for "equality" actually just ensures they are "close enough" rather than exactly equal, etc.
This question actually goes beyond any single programming language or platform. The inaccuracy is actually inherent in binary data.
Consider that with a double, each binary digit N at 0-based position I to the left of the binary point represents the value N * 2^I, and every digit at position I to the right of the binary point represents the value N * 2^(-I).
As an example, 5.625 (base 10) would be 101.101 (base 2).
Given this, any decimal value that cannot be expressed as a sum of terms 2^(-I) for various values of I will have an inexact value as a double.
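The 5.625 example can be checked directly; (5.625).hex() below is Python's way of displaying the exact binary significand of a double:

```python
# 5.625 = 4 + 1 + 0.5 + 0.125 = 2^2 + 2^0 + 2^-1 + 2^-3, i.e. 101.101 in base 2
value = 2**2 + 2**0 + 2**-1 + 2**-3
assert value == 5.625

# The hex form shows the stored significand and exponent exactly:
print((5.625).hex())   # 0x1.6800000000000p+2
```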
A float is represented as s, e and m in the following formula
s * m * 2^e
This means that any number that cannot be represented using the given expression (and in the respective domains of s, e and m) cannot be represented exactly.
Basically, you can represent all numbers between 0 and 2^53 - 1 multiplied by a certain power of two (possibly a negative power).
As an example, all numbers between 0 and 2^53 - 1 can be represented multiplied with 2^0 = 1. And you can also represent all those numbers by dividing them by 2 (with a .5 fraction). And so on.
This answer does not fully cover the topic, but I hope it helps.
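The 2^53 boundary described above can be checked directly (a Python sketch; Python floats are IEEE 754 doubles):

```python
# Every integer with magnitude up to 2**53 is exactly representable as a double:
assert float(2**53) == 2**53

# 2**53 + 1 is the first integer that is not: it rounds to a neighbour.
assert float(2**53 + 1) == float(2**53)

# Multiplying or dividing by a power of two only changes the exponent,
# so halving one of these integers is still exact:
assert (2**53 - 1) * 0.5 == (2**53 - 1) / 2
```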
Mathematically, consider for this question the rational number
8725724278030350 / 2**48
where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms, reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is
31.0000000000000'49'73799150320701301097869873046875 (exact)
where the apostrophes do not represent missing digits but merely mark the boundaries where rounding to 15 and 17 digits, respectively, is to be performed.
Note the following: If this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s) because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up by increasing the 49... digits to 50 (terminates) (next digits were 73...), and the second rounding might then round up again (when the midpoint-rounding rule says "round away from zero").
(There are many more numbers with the above characteristics, of course.)
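The two-step rounding can be reproduced with Python's decimal module; this is a sketch, with ROUND_HALF_UP standing in for the "round away from zero" midpoint rule mentioned above:

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_HALF_UP

exact = Decimal('31.00000000000004973799150320701301097869873046875')

# One correct rounding to 15 significant digits: the 49... tail rounds down.
once = Context(prec=15, rounding=ROUND_HALF_EVEN).plus(exact)
print(once)    # 31.0000000000000

# Round to 17 digits first (49.73... -> 50), then to 15 digits with a
# round-away-from-zero midpoint rule: the trailing ...50 now rounds up.
to17 = Context(prec=17, rounding=ROUND_HALF_EVEN).plus(exact)
twice = Context(prec=15, rounding=ROUND_HALF_UP).plus(to17)
print(twice)   # 31.0000000000001
```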
Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: Isn't this a bug? By standard string representation we mean the String produced by the parameterless Double.ToString() instance method, which is of course identical to what is produced by ToString("G").
An interesting thing to note is that if you cast the above number to System.Decimal then you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToString() makes an incorrect one.
To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate.
Here's some C# code for you to verify the problem:
static void Main()
{
const double evil = 31.0000000000000497;
string exactString = DoubleConverter.ToExactString(evil); // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx
Console.WriteLine("Exact value (Jon Skeet): {0}", exactString); // writes 31.00000000000004973799150320701301097869873046875
Console.WriteLine("General format (G): {0}", evil); // writes 31.0000000000001
Console.WriteLine("Round-trip format (R): {0:R}", evil); // writes 31.00000000000005
Console.WriteLine();
Console.WriteLine("Binary repr.: {0}", String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2"))));
Console.WriteLine();
decimal converted = (decimal)evil;
Console.WriteLine("Decimal version: {0}", converted); // writes 31
decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture);
Console.WriteLine("Better decimal: {0}", preciseDecimal); // writes 31.000000000000049737991503207
}
The above code uses Skeet's ToExactString method. If you don't want to use his stuff (can be found through the URL), just delete the code lines above dependent on exactString. You can still see how the Double in question (evil) is rounded and cast.
ADDITION:
OK, so I tested some more numbers, and here's a table:
exact value (truncated) "R" format "G" format decimal cast
------------------------- ------------------ ---------------- ------------
6.00000000000000'53'29... 6.0000000000000053 6.00000000000001 6
9.00000000000000'53'29... 9.0000000000000053 9.00000000000001 9
30.0000000000000'49'73... 30.00000000000005 30.0000000000001 30
50.0000000000000'49'73... 50.00000000000005 50.0000000000001 50
200.000000000000'51'15... 200.00000000000051 200.000000000001 200
500.000000000000'51'15... 500.00000000000051 500.000000000001 500
1020.00000000000'50'02... 1020.000000000005 1020.00000000001 1020
2000.00000000000'50'02... 2000.000000000005 2000.00000000001 2000
3000.00000000000'50'02... 3000.000000000005 3000.00000000001 3000
9000.00000000000'54'56... 9000.0000000000055 9000.00000000001 9000
20000.0000000000'50'93... 20000.000000000051 20000.0000000001 20000
50000.0000000000'50'93... 50000.000000000051 50000.0000000001 50000
500000.000000000'52'38... 500000.00000000052 500000.000000001 500000
1020000.00000000'50'05... 1020000.000000005 1020000.00000001 1020000
The first column gives the exact (though truncated) value that the Double represent. The second column gives the string representation from the "R" format string. The third column gives the usual string representation. And finally the fourth column gives the System.Decimal that results from converting this Double.
We conclude the following:
Rounding to 15 digits by ToString() and rounding to 15 digits by conversion to Decimal disagree in very many cases
Conversion to Decimal also rounds incorrectly in many cases, and the errors in those cases cannot be described as "round-twice" errors
In my cases, ToString() seems to yield a bigger number than the Decimal conversion when they disagree (no matter which of the two rounds correctly)
I only experimented with cases like the above. I haven't checked if there are rounding errors with numbers of other "forms".
So from your experiments, it appears that Double.ToString doesn't do correct rounding.
That's rather unfortunate, but not particularly surprising: doing correct rounding for binary to decimal conversions is nontrivial, and also potentially quite slow, requiring multiprecision arithmetic in corner cases. See David Gay's dtoa.c code here for one example of what's involved in correctly-rounded double-to-string and string-to-double conversion. (Python currently uses a variant of this code for its float-to-string and string-to-float conversions.)
Even the current IEEE 754 standard for floating-point arithmetic recommends, but doesn't require that conversions from binary floating-point types to decimal strings are always correctly rounded. Here's a snippet, from section 5.12.2, "External decimal character sequences representing finite numbers".
There might be an implementation-defined limit on the number of
significant digits that can be converted with correct rounding to and
from supported binary formats. That limit, H, shall be such that H ≥
M+3 and it should be that H is unbounded.
Here M is defined as the maximum of Pmin(bf) over all supported binary formats bf, and since Pmin(float64) is defined as 17 and .NET supports the float64 format via the Double type, M should be at least 17 on .NET. In short, this means that if .NET were to follow the standard, it would be providing correctly rounded string conversions up to at least 20 significant digits. So it looks as though the .NET Double doesn't meet this standard.
In answer to the 'Is this a bug' question, much as I'd like it to be a bug, there really doesn't seem to be any claim of accuracy or IEEE 754 conformance anywhere that I can find in the number formatting documentation for .NET. So it might be considered undesirable, but I'd have a hard time calling it an actual bug.
EDIT: Jeppe Stig Nielsen points out that the System.Double page on MSDN states that
Double complies with the IEC 60559:1989 (IEEE 754) standard for binary
floating-point arithmetic.
It's not clear to me exactly what this statement of compliance is supposed to cover, but even for the older 1985 version of IEEE 754, the string conversion described seems to violate the binary-to-decimal requirements of that standard.
Given that, I'll happily upgrade my assessment to 'possible bug'.
First take a look at the bottom of this page which shows a very similar 'double rounding' problem.
Checking the binary / hex representation of the following floating-point numbers shows that the given range is stored as the same number in double format:
31.0000000000000480 = 0x403f00000000000e
31.0000000000000497 = 0x403f00000000000e
31.0000000000000515 = 0x403f00000000000e
As noted by several others, that is because the closest representable double has an exact value of 31.00000000000004973799150320701301097869873046875.
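That can be verified by reinterpreting the bits of each parsed double (a Python sketch; Python floats are IEEE 754 doubles, and bits is an illustrative helper name):

```python
import struct

def bits(x: float) -> str:
    # Reinterpret the 8 bytes of an IEEE 754 double as a hex integer.
    return hex(struct.unpack('<Q', struct.pack('<d', x))[0])

# All three literals round to the same double:
for text in ('31.0000000000000480', '31.0000000000000497', '31.0000000000000515'):
    print(text, bits(float(text)))   # each prints 0x403f00000000000e
```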
There are two additional aspects to consider in the forward and reverse conversion of IEEE 754 doubles to strings, especially in the .NET environment.
First (I cannot find a primary source) from Wikipedia we have:
If a decimal string with at most 15 significant digits is converted to IEEE 754 double precision and then converted back to the same number of significant digits, then the final string should match the original; and if an IEEE 754 double precision is converted to a decimal string with at least 17 significant digits and then converted back to double, then the final number must match the original.
Therefore, regarding compliance with the standard, converting a string 31.0000000000000497 to double will not necessarily be the same when converted back to string (too many decimal places given).
The second consideration is that unless the double to string conversion has 17 significant digits, its rounding behavior is not explicitly defined in the standard either.
Furthermore, the documentation for Double.ToString() shows that it is governed by the numeric format specifier and the current culture settings.
Possible Complete Explanation:
I suspect the twice-rounding is occurring something like this: the initial decimal string is created to 16 or 17 significant digits because that is the required precision for "round trip" conversion, giving an intermediate result of 31.00000000000005 or 31.000000000000050. Then, due to the default culture settings, the result is rounded to 15 significant digits, 31.0000000000001, because 15 decimal significant digits is the minimum precision for all doubles.
Doing an intermediate conversion to Decimal on the other hand, avoids this problem in a different way: it truncates to 15 significant digits directly.
The question: Isn't this a bug?
Yes. See this PR on GitHub. AFAIK, the reason for rounding twice is to produce "pretty" formatted output, but it introduces the bug you have discovered here. We tried to fix it by removing the 15-digit precision conversion and going directly to a 17-digit precision conversion. The bad news is that it's a breaking change and would break a lot of things. For example, one of the test cases breaks:
10:12:26 Assert.Equal() Failure
10:12:26 Expected: 1.1
10:12:26 Actual: 1.1000000000000001
The fix would impact a large set of existing libraries, so this PR has been closed for now. However, the .NET Core team is still looking for a chance to fix this bug. You're welcome to join the discussion.
Truncation is the correct way to limit the precision of a number that will later be rounded, precisely to avoid the double rounding issue.
I have a simpler suspicion: the culprit is likely the power operator, **. While your number is exactly representable as a double, for convenience reasons (the power operator needs much work to work correctly) the power is calculated by the exponential function. This is one reason why you can optimize performance by multiplying a number repeatedly instead of using pow(): pow() is very expensive.
So it does not give you the exact 2^48, but something slightly incorrect, and therefore you have your rounding problems.
Please check what 2^48 returns exactly.
EDIT: Sorry, I only skimmed the problem and gave a wrong suspicion. There is a known issue with double rounding on Intel processors. Older code uses the internal 80-bit format of the FPU instead of the SSE instructions, which is likely to cause the error. The value is written exactly to the 80-bit register and then rounded twice, so Jeppe has already found and neatly explained the problem.
Is it a bug? Well, the processor is doing everything right; it is simply a problem that the Intel FPU internally has more precision for floating-point operations.
FURTHER EDIT AND INFORMATION:
The "double rounding" is a known issue, explicitly mentioned in "Handbook of Floating-Point Arithmetic" by Jean-Michel Muller et al., in the chapter "The Need for a Revision" under "3.3.1 A typical problem: 'double rounding'" on page 75:

The processor being used may offer an internal precision that is wider than the precision of the variables of the program (a typical example is the double-extended format available on Intel platforms, when the variables of the program are single-precision or double-precision floating-point numbers). This may sometimes have strange side effects, as we will see in this section. Consider the C program [...]
#include <stdio.h>
int main(void)
{
double a = 1848874847.0;
double b = 19954562207.0;
double c;
c = a * b;
printf("c = %20.19e\n", c);
return 0;
}
32-bit, GCC 4.1.2 20061115 on Linux/Debian:
With the compiler switch -mfpmath=387 (80-bit FPU): 3.6893488147419103232e+19
With -march=pentium4 -mfpmath=sse (SSE), or on 64-bit: 3.6893488147419111424e+19
As explained in the book, the explanation for the discrepancy is double rounding: first to the 80-bit format (64-bit significand), then to the 53-bit significand of a double.
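The discrepancy can be simulated by rounding the exact integer product to a 64-bit significand first, as the x87 FPU does, and then to a double; round_to_sig_bits below is an illustrative helper, not a library function:

```python
def round_to_sig_bits(n: int, bits: int) -> int:
    """Round a positive integer to `bits` significant bits, ties to even."""
    excess = n.bit_length() - bits
    if excess <= 0:
        return n
    q, r = divmod(n, 1 << excess)
    half = 1 << (excess - 1)
    if r > half or (r == half and q & 1):
        q += 1
    return q << excess

a, b = 1848874847, 19954562207
exact = a * b                               # exact 66-bit integer product

sse = float(exact)                          # one rounding straight to 53 bits
x87 = float(round_to_sig_bits(exact, 64))   # 64-bit step first, then 53 bits

print(f"{sse:.19e}")   # 3.6893488147419111424e+19
print(f"{x87:.19e}")   # 3.6893488147419103232e+19
```

The exact product sits just above the midpoint between two doubles, so the single rounding goes up; the intermediate 64-bit rounding pulls it back to the midpoint, and the tie then resolves to the even neighbour, 2^65.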
Is the equality comparison for C# decimal types any more likely to work as we would intuitively expect than other floating point types?
I guess that depends on your intuition. I would assume that some people would think of the result of dividing 1 by 3 as the fraction 1/3, and others would think more along the lines of "Oh, 1 divided by 3 can't be represented as a decimal number, we'll have to decide how many digits to keep, let's go with 0.333".
If you think in the former way, Decimal won't help you much, but if you think in the latter way, and are explicit about rounding when needed, it is more likely that operations that are "intuitively" not subject to rounding errors in decimal, e.g. dividing by 10, will behave as you expect. This is more intuitive to most people than the behavior of a binary floating-point type, where powers of 2 behave nicely, but powers of 10 do not.
Basically, no. The Decimal type simply represents a specialised sort of floating-point number that is designed to reduce rounding error specifically in the base 10 system. That is, the internal representation of a Decimal is in fact in base 10 (denary) and not the usual binary. Hence, it is a rather more appropriate type for monetary calculations -- though not of course limited to such applications.
From the MSDN page for the structure:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
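The 0.9999... result quoted above can be reproduced with Python's decimal module, whose default context also carries 28 significant digits:

```python
from decimal import Decimal, getcontext

assert getcontext().prec == 28          # default precision matches the example
result = Decimal(1) / Decimal(3) * Decimal(3)
print(result)                           # 0.9999999999999999999999999999
print(result == Decimal(1))             # False: rounding happened, just once
```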
I came across the following issue while developing an engineering rule-value engine that uses an eval(...) implementation.
Dim first As Double = 1.1
Dim second As Double = 2.2
Dim sum As Double = first + second
If (sum = 3.3) Then
Console.WriteLine("Matched")
Else
Console.WriteLine("Not Matched")
End If
'Above condition returns false because sum's value is 3.3000000000000003 instead of 3.3
It looks like the 15th digit is where the value diverges. Can someone give a better explanation of this, please?
Is Math.Round(...) the only solution available, or is there something else I can attempt?
You are not adding decimals - you are adding up doubles.
Not all doubles can be represented accurately in a computer, hence the error. I suggest reading this article for background (What Every Computer Scientist Should Know About Floating-Point Arithmetic).
Use the Decimal type instead; it doesn't suffer from these issues.
Dim first As Decimal = 1.1D
Dim second As Decimal = 2.2D
Dim sum As Decimal = first + second
If (sum = 3.3D) Then
Console.WriteLine("Matched")
Else
Console.WriteLine("Not Matched")
End If
That's just how double numbers work on a PC.
The best way to compare them is to use a construction like this:
if (Math.Abs(second - first) <= 1E-9)
Console.WriteLine("Matched")
Instead of 1E-9 you can use another number that represents the acceptable error in the comparison.
Equality comparisons with floating point operations are always inaccurate because of how fractional values are represented within the machine. You should have some sort of epsilon value by which you're comparing against. Here is an article that describes it much more thoroughly:
http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm
Edit: Math.Round will not be an ideal choice because of the error generated with it for certain comparisons. You are better off determining an epsilon value that can be used to limit the amount of error in the comparison (basically determining the level of accuracy).
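As an illustration of the epsilon approach, Python's standard library ships one (math.isclose); the tolerances shown are illustrative:

```python
import math

total = 1.1 + 2.2
print(total)                                   # 3.3000000000000003
print(total == 3.3)                            # False: exact equality fails
print(math.isclose(total, 3.3))                # True (default rel_tol=1e-09)
print(math.isclose(total, 3.3, abs_tol=1e-9))  # True with an absolute tolerance
```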
A double uses floating-point arithmetic, which is approximate but more efficient. If you need to compare against exact values, use the decimal data type instead.
In C#, Java, Python, and many other languages, decimals/floats are not perfect. Because of the way they are represented (using multipliers and exponents), they often have inaccuracies. See http://www.yoda.arachsys.com/csharp/decimal.html for more info.
From the documentation:
http://msdn.microsoft.com/en-us/library/system.double.aspx
Floating-Point Values and Loss of Precision

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.

A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.

A value might not roundtrip if a floating-point number is involved. A value is said to roundtrip if an operation converts an original floating-point number to another form, an inverse operation transforms the converted form back to a floating-point number, and the final floating-point number is equal to the original floating-point number. The roundtrip might fail because one or more least significant digits are lost or changed in a conversion.

In addition, the result of arithmetic and assignment operations with Double values may differ slightly by platform because of the loss of precision of the Double type. For example, the result of assigning a literal Double value may differ in the 32-bit and 64-bit versions of the .NET Framework. The following example illustrates this difference when the literal value -4.42330604244772E-305 and a variable whose value is -4.42330604244772E-305 are assigned to a Double variable. Note that the result of the Parse(String) method in this case does not suffer from a loss of precision.
This is a well-known problem with floating-point arithmetic. Look into binary coding for further details.
Use the type "decimal" if that will fit your needs.
But in general, you should never compare floating point values to constant floating point values with the equality sign.
Failing that, compare only to the number of places that you care about (e.g. if it is 4, then use If sum > 3.2999 And sum < 3.3001).
All the methods in System.Math take double parameters and return double values. The constants are also of type double. I checked MathNet.Numerics, and the same seems to be the case there.
Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?
This is a classic speed-versus-accuracy trade off.
However, keep in mind that for PI, for example, the most digits you will ever need is 41.
The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi †. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.
In addition, decimal and double have slightly different internal storage structures. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
Also consider:
System.Double 8 bytes Approximately ±5.0e-324 to ±1.7e308 with 15 or 16 significant figures
System.Decimal 16 bytes Approximately ±1.0e-28 to ±7.9e28 with 28 or 29 significant figures
As you can see, decimal has a smaller range, but a higher precision.
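A short sketch illustrating both halves of that trade-off:

```csharp
using System;

class RangeVsPrecision
{
    static void Main()
    {
        // Precision: decimal stores 0.1 exactly; double stores the
        // nearest binary fraction, so the sum misses 0.3.
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False

        // Range: double reaches roughly 1.7e308, decimal only ~7.9e28.
        Console.WriteLine(double.MaxValue);
        Console.WriteLine(decimal.MaxValue);
    }
}
```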
No - decimals are no more "exact" than doubles, or, for that matter, than any type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside the range.
Decimals are 100% exact at representing a certain set of base-10 values. Doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers.
Neither is any more exact than the other in general; they are simply for different purposes.
To elaborate a bit further, since I seem not to be clear enough for some readers...
If you take every number which is representable as a decimal, and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue, and every double in red, except for the integers, there would be very few places where the same value was marked in both colors.
In general, for 99.99999 % of the marks, (please don't nitpick my percentage) the blue set (decimals) is a completely different set of numbers from the red set (the doubles).
This is because by our very definition the blue set is a base-10 mantissa/exponent representation, and a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent (e.g., 1.00110101001 x 2^-11101001101001) means: take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary), the number cannot be represented as a decimal mantissa and exponent, and vice versa.
For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles.
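To make the blue-set/red-set picture concrete (a small sketch; the "G17" format shows the double's full stored value):

```csharp
using System;
using System.Globalization;

class RepresentabilityDemo
{
    static void Main()
    {
        // 0.5 = 5 x 10^-1 = 1 x 2^-1: exactly representable in BOTH sets
        Console.WriteLine((double)0.5m == 0.5);  // True

        // 0.1 is exact as a decimal, but a double can only hold the
        // nearest base-2 fraction, visible when printed in full:
        Console.WriteLine(0.1m);                                        // 0.1
        Console.WriteLine(0.1.ToString("G17", CultureInfo.InvariantCulture));
        // 0.10000000000000001
    }
}
```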
Decimal is more precise but has less of a range. You would generally use Double for physics and mathematical calculations but you would use Decimal for financial and monetary calculations.
See the following articles on msdn for details.
Double
http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
Decimal
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
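A small sketch of why decimal suits monetary code: base-10 fractions like .99 are stored exactly, and decimal arithmetic even preserves the scale (the number of fractional digits):

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        decimal price = 19.99m;
        decimal total = price * 3;  // exactly 59.97, no binary rounding
        Console.WriteLine(total);   // 59.97

        // Scale is preserved through addition:
        // 1.00m + 2.00m keeps two fractional digits.
        Console.WriteLine(1.00m + 2.00m);  // 3.00
    }
}
```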
It seems like most of the responses here to "it does not do what I want" are "but it's faster". Well, so is ANSI C with the GMP library, but nobody is advocating that, right?
If you particularly want to control accuracy, then there are other languages which have taken the time to implement exact precision, in a user controllable way:
http://www.doughellmann.com/PyMOTW/decimal/
If precision is really important to you, then you are probably better off using languages that mathematicians would use. If you do not like Fortran then Python is a modern alternative.
Whatever language you are working in, remember the golden rule:
Avoid mixing types...
So convert a and b to the same type before you attempt a operator b.
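In C# the compiler itself enforces this rule for decimal and double: there is no implicit conversion in either direction, so you must convert explicitly before combining them (a minimal sketch):

```csharp
using System;

class MixingTypes
{
    static void Main()
    {
        double d = 1.5;
        decimal m = 2.5m;

        // decimal bad = d + m;       // compile error: no implicit conversion
        decimal ok = (decimal)d + m;  // convert first, then operate
        Console.WriteLine(ok);        // 4.0
    }
}
```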
If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
Neither decimal nor float nor double is good enough if you require something to be precise. Furthermore, decimal is so expensive and overused out there that it is becoming a running joke.
If you work in fractions and require ultimate precision, use fractions. It's the same old rule: convert once, and only when necessary. Your rounding rules will also vary per app, domain, and so on, though sure, you can find an odd example or two where decimal is suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. You might want a feature of arbitrary precision as well.
The actual problem with the CLR in general is that it is oddly difficult, and plain broken, to implement a library that deals with numerics in a generic fashion, largely due to bad primitive design and the shortcomings of the most popular compiler for the platform. It's almost the same as the Java fiasco.
double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.
Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.
Decimal is actually a complex structure, consisting of several integers.
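You can inspect that structure with decimal.GetBits, which returns the four 32-bit integers backing a value: a 96-bit integer mantissa plus a word holding the sign and the base-10 scale (a small sketch):

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        int[] parts = decimal.GetBits(123.456m);

        // parts[0..2]: low, mid, high 32 bits of the 96-bit mantissa
        Console.WriteLine(parts[0]);  // 123456

        // parts[3]: bits 16-23 hold the scale, bit 31 holds the sign
        int scale = (parts[3] >> 16) & 0xFF;
        Console.WriteLine(scale);     // 3, because 123.456 = 123456 x 10^-3
    }
}
```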