I'm experiencing a strange issue when casting decimal to double.
Following code returns true:
Math.Round(0.010000000312312m, 2) == 0.01m //true
However, when I cast this to double it returns false:
(double)Math.Round(0.010000000312312m, 2) == (double)0.01m //false
I've experienced this problem when I wanted to use Math.Pow and was forced to cast decimal to double since there is no Math.Pow overload for decimal.
Is this documented behavior? How can I avoid it when I'm forced to cast decimal to double?
Screenshot from Visual Studio:
Casting Math.Round to double gives me the following result:
(double)Math.Round(0.010000000312312m, 2) 0.0099999997764825821 double
(double)0.01m 0.01 double
UPDATE
Ok, I'm reproducing the issue as follows:
When I run the WPF application and check the output in the watch window just after it has started, I get true, as in an empty project.
There is a part of the application that sends values from a slider to the calculation algorithm. I get a wrong result, so I put a breakpoint on the calculation method. Now, when I check the value in the watch window, I get false (without any modifications; I just refresh the watch window).
As soon as I reproduce the issue in some smaller project I will post it here.
UPDATE2
Unfortunately, I cannot reproduce the issue in smaller project. I think that Eric's answer explains why.
People are reporting in the comments here that sometimes the result of the comparison is true and sometimes it is false.
Unfortunately, this is to be expected. The C# compiler, the jitter and the CPU are all permitted to perform arithmetic on doubles in more than 64 bit double precision, as they see fit. This means that sometimes the results of what looks like "the same" computation can be done in 64 bit precision in one calculation, 80 or 128 bit precision in another calculation, and the two results might differ in their last bit.
Let me make sure that you understand what I mean by "as they see fit". You can get different results for any reason whatsoever. You can get different results in debug and retail. You can get different results if you make the compiler do the computation in constants and if you make the runtime do the computation at runtime. You can get different results when the debugger is running. You can get different results in the runtime and the debugger's expression evaluator. Any reason whatsoever. Double arithmetic is inherently unreliable. This is due to the design of the floating point chip; double arithmetic on these chips cannot be made more repeatable without a considerable performance penalty.
For this and other reasons you should almost never compare two doubles for exact equality. Rather, subtract the doubles, and see if the absolute value of the difference is smaller than a reasonable bound.
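For example, a minimal sketch of such a tolerance comparison (the bound 1e-9 is an arbitrary choice for illustration; pick one appropriate to your domain):

double a = (double)Math.Round(0.010000000312312m, 2);
double b = (double)0.01m;
// Compare within a tolerance instead of using ==.
bool nearlyEqual = Math.Abs(a - b) < 1e-9;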
Moreover, it is important that you understand why rounding a double to two decimal places is a difficult thing to do. A non-zero, finite double is a number of the form (1 + f) x 2^e, where f is a fraction with a denominator that is a power of two, and e is an exponent. Clearly it is not possible to represent 0.01 in that form, because there is no way to get a denominator equal to a power of ten out of a denominator equal to a power of two.
The double 0.01 is actually the binary number 1.0100011110101110000101000111101011100001010001111011 x 2^-7, which in decimal is 0.01000000000000000020816681711721685132943093776702880859375. That is the closest you can possibly get to 0.01 in a double. If you need to represent exactly that value, use decimal. That's why it's called decimal.
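If you just want to inspect the bits of a double without installing anything, BitConverter is enough; a minimal sketch:

double d = 0.01;
// Prints 3F847AE147AE147B, the raw IEEE 754 bit pattern of the double 0.01.
Console.WriteLine(BitConverter.DoubleToInt64Bits(d).ToString("X16"));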
Incidentally, I have answered variations on this question many times on StackOverflow. For example:
Why differs floating-point precision in C# when separated by parantheses and when separated by statements?
Also, if you need to "take apart" a double to see what its bits are, this handy code that I whipped up a while back is quite useful. It requires that you install Solver Foundation, but that's a free download.
http://ericlippert.com/2011/02/17/looking-inside-a-double/
This is documented behavior. The decimal data type is more precise than the double type. So when you convert from decimal to double there is the possibility of data loss. This is why you are required to do an explicit conversion of the type.
See the following MSDN C# references for more information:
decimal data type: http://msdn.microsoft.com/en-us/library/364x0z75(v=vs.110).aspx
double data type: http://msdn.microsoft.com/en-us/library/678hzkk9(v=vs.110).aspx
casting and type conversion: http://msdn.microsoft.com/en-us/library/ms173105.aspx
Related
When I run the following code, I get 0 printed on both lines:
Double a = 9.88131291682493E-324;
Double b = a * 0.1D;
Console.WriteLine(b);
Console.WriteLine(BitConverter.DoubleToInt64Bits(b));
I would expect to get Double.NaN if an operation result gets out of range. Instead, I get 0. It looks like, to detect when this happens, I have to:
Before the operation, check whether either of the operands is zero.
After the operation, if neither operand was zero, check whether the result is zero. If it isn't, let it run. If it is zero, assign Double.NaN to it instead, to indicate that it's not really a zero, just a result that can't be represented in this variable (sketched below).
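A sketch of that check for multiplication (MultiplyOrNaN is a hypothetical helper, not a framework method):

static double MultiplyOrNaN(double a, double b)
{
    double result = a * b;
    // Neither operand was zero, but the product underflowed to zero:
    // flag it as "not representable" rather than a genuine zero.
    if (result == 0.0 && a != 0.0 && b != 0.0)
        return double.NaN;
    return result;
}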
That's rather unwieldy. Is there a better way? What is Double.NaN designed for? I'm assuming some operations must return it; surely the designers did not put it there just in case? Is it possible that this is a bug in the BCL? (I know it's unlikely, but that's why I'd like to understand how Double.NaN is supposed to work.)
Update
By the way, this problem is not specific to double; decimal exposes it all the same:
Decimal a = 0.0000000000000000000000000001m;
Decimal b = a * 0.1m;
Console.WriteLine(b);
That also gives zero.
In my case I need double, because I need the range it provides (I'm working on probabilistic calculations), and I'm not that worried about precision.
What I need, though, is to be able to detect when my results stop meaning anything, that is, when calculations drop the value so low that it can no longer be represented by a double.
Is there a practical way of detecting this?
Double works exactly according to the floating-point specification, IEEE 754. So no, it's not an error in the BCL - it's just the way IEEE 754 floating point works.
The reason, of course, is that it's not what floats are designed for at all. Instead, you might want to use decimal, which is a precise decimal number, unlike float/double.
There are a few special values in floating point numbers, with different meanings:
Infinity - e.g. 1f / 0f.
-Infinity - e.g. -1f / 0f.
NaN - e.g. 0f / 0f or Math.Sqrt(-1)
However, as the commenters below noted, while decimal does in fact check for overflows, coming too close to zero is not considered an overflow, just like with floating point numbers. So if you really need to check for this, you will have to make your own * and / methods. With decimal numbers, you shouldn't really care, though.
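If you do decide to guard decimal operations yourself, one possible shape for multiplication (a sketch; CheckedMultiply is a hypothetical helper):

static decimal CheckedMultiply(decimal a, decimal b)
{
    // decimal already throws OverflowException when the result is too large.
    decimal result = a * b;
    // Treat a silent underflow to zero as an overflow condition as well.
    if (result == 0m && a != 0m && b != 0m)
        throw new OverflowException("Multiplication underflowed to zero.");
    return result;
}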
If you need this kind of precision for multiplication and division (that is, you want your divisions to be reversible by multiplication), you should probably use rational numbers instead - two integers (big integers if necessary). And use a checked context - that will produce an exception on overflow.
IEEE 754 does in fact handle underflow. There are two problems:
The return value is 0 (or -0 for negative underflow). The exception flag for underflow is set, but there's no way to read that flag in .NET.
This only occurs when you lose precision by getting too close to zero. But you lost most of your precision long before that. Whatever "precise" number you had is long gone - the operations are not reversible, and they are not precise.
So if you really do care about reversibility etc., stick to rational numbers. Neither decimal nor double will work, C# or not. If you're not that precise, you shouldn't care about underflows anyway - just pick the lowest reasonable number, and declare anything under that as "invalid"; make sure you're far away from the actual maximum precision - double.Epsilon will not help, obviously.
All you need is epsilon.
This is a "small number" that is small enough that you're no longer interested in anything below it.
You could use:
double epsilon = 1E-50;
and whenever one of your factors gets smaller than epsilon you take action (for example, treat it as 0.0).
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary as it has infinite recurring decimal places but this is not the case for 0.69. And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in floating point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake - you're thinking of floating point numbers as if they are base-10 (i.e. decimal - hence my emphasis).
So - you're thinking that there are two whole-number parts to this double: 69 and divide by 100 to get the decimal place to move - which could also be expressed as:
69 x 10 to the power of -2.
However floats store the 'position of the point' as base-2.
Your float actually gets stored as:
68999999999999995 x 2 to the power of some big negative number
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation and explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However if this is a financial application decimal is precisely what you should use - you've seen Superman III (and Office Space) haven't you ;)
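For example, formatting to two decimal places hides the representation error in the output (a minimal sketch):

double i = 10 * 0.69;                // stored as 6.8999999999999995
Console.WriteLine(i.ToString("F2")); // prints 6.90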
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
For the same reason that 1 / 3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.
When I execute this line:
double dParsed = double.Parse("0.00000002036");
dParsed actually gets the value: 0.000000020360000000000002
Compared to this line,
double dInitialized = 0.00000002036;
in which case the value of dInitialized is exactly 0.00000002036
This inconsistency is a trifle annoying, because I want to run tests along the lines of:
[Subject("parsing doubles")]
public class when_parsing_crazy_doubles
{
    static double dInitialized = 0.00000002036;
    static double dParsed;

    Because of = () => dParsed = double.Parse("0.00000002036");

    It should_match = () => dParsed.ShouldBeLike(dInitialized);
}
This of course fails with:
Machine.Specifications.SpecificationException
"":
Expected: [2.036E-08]
But was: [2.036E-08]
In my production code, the 'parsed' doubles are read from a data file whereas the comparison values are hard coded as object initializers. Over many hundreds of records, 4 or 5 of them don't match. The original data appears in the text file like this:
0.00000002036 0.90908165072 6256.77753019160
So the values being parsed have only 11 decimal places. Any ideas for working around this inconsistency?
While I accept that comparing doubles for equality is risky, I'm surprised that the compiler can get an exact representation when the text is used as an object initializer, but that double.Parse can't get an exact representation when parsing exactly the same text. How can I limit the parsed doubles to 11 decimal places?
Compared to this line,
double dInitialized = 0.00000002036;
in which case the value of dInitialized is exactly 0.00000002036
If you have anything remotely resembling a commodity computer, dInitialized is not initialized as exactly 0.00000002036. It can't be because the base 10 number 0.00000002036 does not have a finite representation in base 2.
Your mistake is expecting two doubles to compare equal. That's usually not a good idea. Unless you have very good reasons and know what you are doing, it is best to not compare two doubles for equality or inequality. Instead test whether the difference between the two lies within some small epsilon of zero.
Getting the size of that epsilon right is a bit tricky. If your two numbers are both small, (less than one, for example), an epsilon of 1e-15 might well be appropriate. If the numbers are large (larger than ten, for example), that small of an epsilon value is equivalent to testing for equality.
Edit: I didn't answer the question.
How can I limit the parsed doubles to 11 decimal places?
If you don't have to worry about very small values,
static double epsilon = 1e-11;
if (Math.Abs(dParsed - dInitialized) > epsilon * Math.Abs(dInitialized)) {
    noteTestAsFailed();
}
You should be able to safely change that epsilon to 4e-16.
Edit #2: Why is it that the compiler and double.Parse produce different internal representations for the same text?
That's kind of obvious, isn't it? The compiler and double.Parse use different algorithms. The number in question 0.00000002036 is very close to being on the cusp of whether rounding up or rounding down should be used to yield a representable value that is within half an ULP of the desired value (0.00000002036). The "right" value is the one that is within a half an ULP of the desired value. In this case, the compiler makes the right decision of picking the rounded-down value while the parser makes the wrong decision of picking the rounded-up value.
The value 0.00000002036 is a nasty corner case. It is not an exactly representable value. The two closest values that can be represented exactly as IEEE doubles are 6153432421838462/2^78 and 6153432421838463/2^78. The value halfway between these two is 12306864843676925/2^79, which is very, very close to 0.00000002036. That's what makes this a corner case. I suspect all of the values you found where the compiled value is not identically equal to the value from double.Parse are corner cases, cases where the desired value is almost halfway between the two closest exactly representable values.
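You can see the one-ulp difference by dumping the bit patterns; a sketch (this reflects the older .NET Framework runtime the asker describes - .NET Core 3.0 and later changed the parser, so both lines may print the same value there):

using System.Globalization;

double dInitialized = 0.00000002036;
double dParsed = double.Parse("0.00000002036", CultureInfo.InvariantCulture);
// Prints the raw IEEE 754 bit patterns; on the asker's runtime they
// differ in the last bit (one ulp).
Console.WriteLine(BitConverter.DoubleToInt64Bits(dInitialized).ToString("X16"));
Console.WriteLine(BitConverter.DoubleToInt64Bits(dParsed).ToString("X16"));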
Edit #3:
Here are a number of different ways to interpret 0.00000002036:
2/1e8 + 3/1e10 + 6/1e11
2*1e-8 + 3*1e-10 + 6*1e-11
2.036 * 1e-8
2.036 / 1e8
2036 * 1e-11
2036 / 1e11
On an ideal computer all of these will be the same. Don't count on that being the case on a computer that uses finite precision arithmetic.
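A quick way to check on your own machine (a sketch; some of the printed bit patterns may differ from each other in the last bit):

// Each expression is mathematically 2.036e-8, but each is rounded
// independently by finite double arithmetic.
double[] variants =
{
    2 / 1e8 + 3 / 1e10 + 6 / 1e11,
    2 * 1e-8 + 3 * 1e-10 + 6 * 1e-11,
    2.036 * 1e-8,
    2.036 / 1e8,
    2036 * 1e-11,
    2036 / 1e11,
};
foreach (double v in variants)
    Console.WriteLine(BitConverter.DoubleToInt64Bits(v).ToString("X16"));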
I have read tons of things about floating-point error and floating-point approximation, and all that.
The thing is: I have never read an answer to a real-world problem. And today, I came across a real-world problem. And this is really bad, and I really don't know how to escape it.
Take a look at this example:
[TestMethod]
public void TestMethod1()
{
    float t1 = 8460.32F;
    float t2 = 5990;
    var x = t1 - t2;
    var y = F(x);
    Assert.AreEqual(x, y);
}

float F(float x)
{
    if (x <= 2470.32F) { return x; }
    else { return -x; }
}
x is supposed to be 2470.32. But in fact, due to rounding error, its value is 2470.32031.
Most of the time this is not a problem. Functions are continuous, and all is well; the result is only off by a small amount.
But here we have a discontinuous function, and the error is really, really big. The test fails exactly at the point of discontinuity.
How can I handle the rounding error with discontinuous functions?
The key problem here is:
The function has a large (and significant) change in output value in certain cases when there is a small change in input value.
You are passing an incorrect input value to the function.
As you write, “due to rounding error, [x’s value] is 2470.32031”. Suppose you could write any code you desire—simply describe the function to be performed, and a team of expert programmers will provide complete, bug-free source code within seconds. What would you tell them?
The problem you are posing is, “I am going to pass a wrong value, 2470.32031, to this function. I want it to know that the correct value is something else and to provide the result for the correct value, which I did not pass, instead of the incorrect value, which I did pass.”
In general, that problem is impossible to solve, because it is impossible to distinguish when 2470.32031 is passed to the function but 2470.32 is intended from when 2470.32031 is passed to the function and 2470.32031 is intended. You cannot expect a computer to read your mind. When you pass incorrect input, you cannot expect correct output.
What this tells us is that no solution inside of the function F is possible. Therefore, we must zoom out and look at the larger problem. You must examine whether the value passed to F can be improved (calculated in a better way or with higher precision or with supplementary information) or whether the nature of the problem is such that, when 2470.32031 is passed, 2470.32 is always intended, so that this knowledge can be incorporated into F.
NOTE: this answer is essentially the same as Eric's.
It just highlights the testing point of view, since a test is a form of specification.
The problem here is that TestMethod1 does not test F.
Rather, it tests that the conversion of the decimal quantity 8460.32 to float, and float subtraction, are inexact.
But is that the intention of the test?
All you can say is that in certain bad conditions (near a discontinuity), a small error on the input will result in a large error on the output, so the test could express that this is an expected result.
Note that the function F is almost perfect, except maybe for the float value 2470.32F itself.
Indeed, the floating-point approximation rounds the decimal value up (by exactly 1/3200).
So the answer should be:
Assert.AreEqual(F(2470.32F), -2470.32F); /* because 2470.32F exceeds the decimal 2470.32 */
If you want to test such low level requirements, you'll need a library with high (arbitrary/infinite) precision to perform the tests.
If you can't afford such imprecision in function F, then float is a mismatch, and you'll have to find another implementation with increased, arbitrary, or infinite precision.
It's up to you to specify your needs, and TestMethod1 should express that specification better than it does right now.
If you need the 8460.32 number to be exactly that without rounding error, you could look at the .NET Decimal type which was created explicitly to represent base 10 fractional numbers without rounding error. How they perform that magic is beyond me.
Now, I realize this may be impractical for you, because the float presumably comes from somewhere and refactoring it to the Decimal type could be far too much work. But if you need that much precision for the discontinuous function that relies on this value, you'll need either a more precise type or some mathematical trickery. Perhaps there is some way to always ensure that a float is created with a rounding error such that it's always less than the actual number? I'm not sure whether such a thing exists, but if it does it would also solve your issue.
You have three numbers represented in your application, and you have accepted imprecision in each of them by representing them as floats.
So I think you can reasonably claim that your program is working correctly
(oneNumber +/- some imprecision) - (anotherNumber +/- some imprecision)
is not quite bigger than anotherNumber +/- some imprecision
When viewed in decimal representation on paper it looks wrong, but that's not what you've implemented. What's the origin of the data? How precisely was 8460.32 known? Had it been 8460.31999, what should have happened? 8460.32001? Was the original value known to such precision?
In the end if you want to model more accuracy use a different data type, as suggested elsewhere.
I always just assume that when comparing floating point values a small margin of error is needed, because of rounding issues. In your case, this would most likely mean choosing values in your test method that aren't quite so stringent, e.g., define a very small error constant and subtract that value from x. Here's a SO question that relates to this.
Edit to better address the concluding question: Presumably it doesn't matter what the function outputs on the discontinuity exactly, so test just slightly on either side of it. If it does matter, then really about the best you can do is allow either of two outputs from the function at that point.
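Using the F from the question, that could look like this (a sketch; delta is an arbitrary margin):

float boundary = 2470.32F;
float delta = 0.01F;
// Probe just below and just above the discontinuity instead of exactly on it.
Assert.AreEqual(boundary - delta, F(boundary - delta));     // below: F returns x
Assert.AreEqual(-(boundary + delta), F(boundary + delta));  // above: F returns -x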
Is there a library for decimal calculation, especially the Pow(decimal, decimal) method? I can't find any.
It can be free or commercial, either way, as long as there is one.
Note: I can't do it myself, can't use for loops, can't use Math.Pow, Math.Exp or Math.Log, because they all take doubles, and I can't use doubles. I can't use a series because it would only be as precise as doubles.
One of the multipliers is a rate: 1/rate^(days/365).
The reason there is no decimal power function is because it would be pointless to use decimal for that calculation. Use double.
Remember, the point of decimal is to ensure that you get exact arithmetic on values that can be exactly represented as short decimal numbers. For reasonable values of rate and days, the values of any of the other subexpressions are clearly not going to be exactly represented as short decimal values. You're going to be dealing with inexact values, so use a type designed for fast calculations of slightly inexact values, like double.
The results when computed in doubles are going to be off by a few billionths of a penny one way or the other. Who cares? You'll round out the error later. Do the rate calculation in doubles. Once you have a result that needs to be turned back into a currency again, multiply the result by ten thousand, round it off to the nearest integer, convert that to a decimal, and then divide it out by ten thousand again, and you'll have a result accurate to four decimal places, which ought to be plenty for a financial calculation.
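A sketch of that recipe, with hypothetical rate and days values:

double rate = 1.05;   // hypothetical inputs
double days = 30;
double factor = 1.0 / Math.Pow(rate, days / 365.0);
// Scale up, round to the nearest integer, then scale back down in decimal,
// giving a result accurate to four decimal places.
decimal result = (decimal)Math.Round(factor * 10000.0) / 10000m;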
Here is what I used.
output = (decimal)Math.Pow((double)var1, (double)var2);
Now, I'm just learning, but this did work, though I don't know if I can explain it correctly.
What I believe this does is take the input of var1 and var2 and cast them to doubles to use as the arguments for the Math.Pow method. The (decimal) in front of Math.Pow then converts the value back to a decimal and places it in the output variable.
I hope someone can correct me if my explanation is wrong, but all I know is that it worked for me.
I know this is an old thread but I'm putting this here in case someone finds it when searching for a solution.
If you don't want to mess around with casting and doing your own custom implementation, you can install the NuGet package DecimalMath.DecimalEx and use it like DecimalEx.Pow(number, power).
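For example (a sketch; the namespace and signature are as the package documents them, so verify against the version you install):

using DecimalMath;

decimal rate = 1.05m;    // hypothetical inputs
decimal days = 30m;
decimal factor = 1m / DecimalEx.Pow(rate, days / 365m);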
Well, here is the Wikipedia page that lists current C# numerics libraries. But TBH I don't think there is a lot of support for decimals
http://en.wikipedia.org/wiki/List_of_numerical_libraries
It's kind of inappropriate to use decimal for this kind of calculation in general. It's high precision, yes - but it's also low range. As the MSDN docs state, it's for financial/monetary calculations - where there isn't much call for Pow, unfortunately!
Of course you might have a specific problem domain that needs super high precision, with all numbers between 10^-28 and 10^28. But in that case you will probably just need to write your own series calculator, such as the one linked to in the comments to the question.
Not using decimal. Use double instead. According to this thread, Math.Pow(double, double) is called directly from the CLR:
How is Math.Pow() implemented in .NET Framework?
Here is what .NET Framework 4 has (2 lines only)
[SecuritySafeCritical]
public static extern double Pow(double x, double y);
128-bit decimal arithmetic is not native in this 32-bit CLR yet. Maybe on a 64-bit Framework in the future?
Wait, huh? Why can't you use doubles? You could always cast if you're using ints or something:
int a = 1;
int b = 2;
int result = (int)Math.Pow(a, b);