I used the base converter from here and changed it to work with ulong values, but when converting large numbers, specifically numbers higher than 16677181699666568, it was returning incorrect values. I started looking into this and discovered that Math.Pow(3, 34) returns 16677181699666568, when 3^34 is actually 16677181699666569. This throws a spanner in the works for me. I assume this is just an issue with double precision within the Pow method? Is my easiest fix to create my own Pow that takes ulong values?
If so, what's the quickest way to do Pow? I assume there's something faster than a for loop with multiplication each time.
You can use BigInteger.Pow, or write a power method for long/ulong yourself; a sketch follows.
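Here is a minimal exponentiation-by-squaring sketch for ulong (a sketch of the idea, not a drop-in library method; overflow checking is omitted and a non-negative exponent is assumed). It needs only O(log n) multiplications, which also answers the "faster than a for loop" part:
static ulong Pow(ulong baseValue, int exponent)
{
    ulong result = 1;
    while (exponent > 0)
    {
        if ((exponent & 1) == 1)
            result *= baseValue; // multiply in this bit's contribution
        baseValue *= baseValue;  // square for the next bit
        exponent >>= 1;
    }
    return result;
}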
The problem is that Math.Pow returns a double, and the closest double value to 16677181699666569 is 16677181699666568.
So without getting Math.Pow involved:
long accurate = 16677181699666569;
double closestDouble = accurate;
// See http://pobox.com/~skeet/csharp/DoubleConverter.cs
Console.WriteLine(DoubleConverter.ToExactString(closestDouble));
That prints 16677181699666568.
In other words whatever Math.Pow does internally, it can't return a result that's more accurate than the one you're getting.
As others have said, BigInteger.Pow is your friend if you're using .NET 4.
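For example (a minimal sketch, assuming a reference to System.Numerics):
BigInteger result = BigInteger.Pow(3, 34);
Console.WriteLine(result); // prints 16677181699666569 exactly - no double involved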
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Floating point types are an approximation; the rounding you see is normal.
If you want exact results, use BigInteger.
"I assume this is just an issue with double precision within the Pow method?"
Yes.
"Is my easiest fix just to create my own Pow that takes ulong values?"
You can use BigInteger.Pow.
If you're using .NET Framework 4, Microsoft has included a new BigInteger class that lets you manipulate large numbers.
http://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx
Alternatively, you can use a nice library that someone else created:
http://intx.codeplex.com/ (IntX library)
I'm experiencing a strange issue when casting decimal to double.
The following code returns true:
Math.Round(0.010000000312312m, 2) == 0.01m //true
However, when I cast this to double it returns false:
(double)Math.Round(0.010000000312312m, 2) == (double)0.01m //false
I've experienced this problem when I wanted to use Math.Pow and was forced to cast decimal to double since there is no Math.Pow overload for decimal.
Is this documented behavior? How can I avoid it when I'm forced to cast decimal to double?
Screenshot from Visual Studio:
Casting Math.Round to double gives me the following result:
(double)Math.Round(0.010000000312312m, 2) 0.0099999997764825821 double
(double)0.01m 0.01 double
UPDATE
Ok, I'm reproducing the issue as follows:
When I run the WPF application and check the output in the watch window just after it starts, I get true, as in an empty project.
There is a part of the application that sends values from a slider to the calculation algorithm. I get a wrong result, so I put a breakpoint on the calculation method. Now, when I check the value in the watch window, I get false (without any modifications; I just refresh the watch window).
As soon as I reproduce the issue in some smaller project I will post it here.
UPDATE2
Unfortunately, I cannot reproduce the issue in a smaller project. I think that Eric's answer explains why.
People are reporting in the comments here that sometimes the result of the comparison is true and sometimes it is false.
Unfortunately, this is to be expected. The C# compiler, the jitter and the CPU are all permitted to perform arithmetic on doubles in more than 64 bit double precision, as they see fit. This means that sometimes the results of what looks like "the same" computation can be done in 64 bit precision in one calculation, 80 or 128 bit precision in another calculation, and the two results might differ in their last bit.
Let me make sure that you understand what I mean by "as they see fit". You can get different results for any reason whatsoever. You can get different results in debug and retail. You can get different results if you make the compiler do the computation in constants and if you make the runtime do the computation at runtime. You can get different results when the debugger is running. You can get different results in the runtime and the debugger's expression evaluator. Any reason whatsoever. Double arithmetic is inherently unreliable. This is due to the design of the floating point chip; double arithmetic on these chips cannot be made more repeatable without a considerable performance penalty.
For this and other reasons you should almost never compare two doubles for exact equality. Rather, subtract the doubles, and see if the absolute value of the difference is smaller than a reasonable bound.
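For example, a tolerance-based comparison might look like this (the bound is my own arbitrary choice here; pick one that is reasonable for your data):
static bool AboutEqual(double x, double y, double tolerance)
{
    // Treat the values as equal if they differ by less than the tolerance.
    return Math.Abs(x - y) < tolerance;
}
// usage: AboutEqual(a, b, 1e-9)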
Moreover, it is important that you understand why rounding a double to two decimal places is a difficult thing to do. A non-zero, finite double is a number of the form (1 + f) × 2^e, where f is a fraction with a denominator that is a power of two, and e is an exponent. Clearly it is not possible to represent 0.01 in that form, because there is no way to get a denominator equal to a power of ten out of a denominator equal to a power of two.
The double 0.01 is actually the binary number 1.0100011110101110000101000111101011100001010001111011 × 2^-7, which in decimal is 0.01000000000000000020816681711721685132943093776702880859375. That is the closest you can possibly get to 0.01 in a double. If you need to represent exactly that value, then use decimal. That's why it's called decimal.
Incidentally, I have answered variations on this question many times on StackOverflow. For example:
Why does floating-point precision in C# differ when separated by parentheses and when separated by statements?
Also, if you need to "take apart" a double to see what its bits are, this handy code that I whipped up a while back is quite useful. It requires that you install Solver Foundation, but that's a free download.
http://ericlippert.com/2011/02/17/looking-inside-a-double/
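If you just want the raw bits without installing anything, BitConverter is enough for a quick look (this is not Eric's tool, just a minimal alternative):
long bits = BitConverter.DoubleToInt64Bits(0.01);
// 64 characters: 1 sign bit, 11 exponent bits, 52 fraction bits
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));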
This is documented behavior. The decimal data type is more precise than the double type. So when you convert from decimal to double there is the possibility of data loss. This is why you are required to do an explicit conversion of the type.
See the following MSDN C# references for more information:
decimal data type: http://msdn.microsoft.com/en-us/library/364x0z75(v=vs.110).aspx
double data type: http://msdn.microsoft.com/en-us/library/678hzkk9(v=vs.110).aspx
casting and type conversion: http://msdn.microsoft.com/en-us/library/ms173105.aspx
Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
After that conversion, the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
-------------------small update---------------------
Putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 digits, while you have 17 digits. If you need that level of precision use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
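For comparison, parsing the same string as a decimal keeps every digit, because decimal carries 28-29 significant digits:
decimal asDecimal = Convert.ToDecimal("93389.429999999993");
Console.WriteLine(asDecimal); // 93389.429999999993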
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
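One way to see this is to print 17 significant digits, which is always enough to uniquely identify a double; both literals below produce the same digit string because they round to the same underlying 64-bit value:
Console.WriteLine(93389.429999999993.ToString("G17"));
Console.WriteLine(93389.43.ToString("G17"));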
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep the precision, depending on the number you are trying to convert.
If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type is a 64-bit representation with finite precision, so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which is a 128-bit type with greater precision. However, the same problem still arises, just with a smaller error.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a third-party library to work with arbitrarily precise real numbers.
You could truncate the decimal places to the number of digits you need, without exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the value you need:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
Is there a library for decimal calculation, especially the Pow(decimal, decimal) method? I can't find any.
It can be free or commercial, either way, as long as there is one.
Note: I can't do it myself, can't use for loops, can't use Math.Pow, Math.Exp or Math.Log, because they all take doubles, and I can't use doubles. I can't use a series expansion because it would only be as precise as doubles.
One of the multipliers is a rate: 1/rate^(days/365).
The reason there is no decimal power function is because it would be pointless to use decimal for that calculation. Use double.
Remember, the point of decimal is to ensure that you get exact arithmetic on values that can be exactly represented as short decimal numbers. For reasonable values of rate and days, the values of any of the other subexpressions are clearly not going to be exactly represented as short decimal values. You're going to be dealing with inexact values, so use a type designed for fast calculations of slightly inexact values, like double.
The results when computed in doubles are going to be off by a few billionths of a penny one way or the other. Who cares? You'll round out the error later. Do the rate calculation in doubles. Once you have a result that needs to be turned back into a currency again, multiply the result by ten thousand, round it off to the nearest integer, convert that to a decimal, and then divide it out by ten thousand again, and you'll have a result accurate to four decimal places, which ought to be plenty for a financial calculation.
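A quick sketch of that last step (the variable names and input values here are mine, purely illustrative):
double rate = 1.05, days = 90;                        // hypothetical inputs
double factor = 1.0 / Math.Pow(rate, days / 365.0);   // fast, slightly inexact
double scaled = Math.Round(factor * 10000.0);         // round to the nearest integer
decimal money = (decimal)scaled / 10000m;             // exact to four decimal places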
Here is what I used.
output = (decimal)Math.Pow((double)var1, (double)var2);
I'm just learning, but this did work, though I'm not sure I can explain it correctly.
What I believe this does is take var1 and var2 and cast them to doubles to use as the arguments for the Math.Pow method. The (decimal) in front of Math.Pow then converts the result back to a decimal and places it in the output variable.
I hope someone will correct me if my explanation is wrong, but all I know is that it worked for me.
I know this is an old thread, but I'm putting this here in case someone finds it when searching for a solution.
If you don't want to mess around with casting and doing your own custom implementation, you can install the NuGet package DecimalMath.DecimalEx and use it like DecimalEx.Pow(number, power).
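For the rate expression in the question, that would look something like this (a sketch assuming the package is installed; the variable names and values are illustrative):
using DecimalMath;
decimal rate = 1.05m, days = 90m;
decimal factor = 1m / DecimalEx.Pow(rate, days / 365m); // all-decimal, no double involved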
Well, here is the Wikipedia page that lists current C# numerics libraries. But TBH I don't think there is a lot of support for decimals:
http://en.wikipedia.org/wiki/List_of_numerical_libraries
It's kind of inappropriate to use decimal for this kind of calculation in general. It's high precision, yes - but it's also low range. As the MSDN docs state, it's for financial/monetary calculations - where there isn't much call for Pow, unfortunately!
Of course you might have a specific problem domain that needs super high precision, with all numbers between 10^-28 and 10^28. But in that case you will probably just need to write your own series calculator, such as the one linked to in the comments to the question.
Don't use decimal; use double instead. According to this thread, Math.Pow(double, double) is called directly from the CLR:
How is Math.Pow() implemented in .NET Framework?
Here is all that .NET Framework 4 has (2 lines only):
[SecuritySafeCritical]
public static extern double Pow(double x, double y);
decimal is not a native CPU type the way the 64-bit double is, so decimal arithmetic has to be done in software. Maybe hardware support will come in the future?
Wait, huh? Why can't you use doubles? You could always cast if you're using ints or something:
int a = 1;
int b = 2;
int result = (int)Math.Pow(a,b);
I understand the principle behind this problem, but it gives me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable. In fact, there are an infinite number of real-valued numbers, and only a finite number of them are representable. The key to decimal, for this application, is that it uses a base 10 representation – double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
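A sketch of such a function (the epsilon is my own arbitrary choice, and this assumes positive values):
public static double RoundWithEpsilon(double value, int decimals)
{
    // Nudge the value slightly so 105.824999999999 rounds up to 105.83,
    // just as the exact value 105.825 would.
    const double epsilon = 1e-10;
    return Math.Round(value + epsilon, decimals, MidpointRounding.AwayFromZero);
}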
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
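Applied to the example above (note that the cast to decimal will throw an OverflowException for doubles outside decimal's range):
double value = 141.1 - 35.275;      // 105.824999999999... as a double
Console.WriteLine(Round(value, 2)); // 105.83 - the decimal cast recovers 105.825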
I am using Convert.ChangeType() to convert from Object (which I get from DataBase) to a generic type T. The code looks like this:
T element = (T)Convert.ChangeType(obj, typeof(T));
return element;
and this works great most of the time; however, I have discovered that if I try to cast something as simple as the return of the following SQL query
select 3.2
the above code (T being double) won't return 3.2, but 3.2000000000000002. I can't work out why this is happening, or how to fix it. Please help!
What you're seeing is an artifact of the way floating-point numbers are represented in memory. There's quite a bit of information available on exactly why this is, but this paper is a good one. This phenomenon is why you can end up with seemingly anomalous behavior. A double or single should never be displayed to the user unformatted, and you should avoid equality comparisons like the plague.
If you need numbers that are accurate to a greater level of precision (ie, representing currency values), then use decimal.
This probably is because of floating point arithmetic. You probably should use decimal instead of double.
It is not a problem with Convert. Internally, the double type represents real numbers as binary fractions, which cannot represent every decimal value exactly; that is why you get such a result. Depending on your purpose:
Either use decimal
Or use precise formatting, e.g. {0:F2}
Or use Math.Floor/Math.Ceiling
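For example (formatting only changes how the value is displayed, not what is stored):
double value = 3.2000000000000002;                 // what came back from the query
Console.WriteLine(string.Format("{0:F2}", value)); // 3.20 - formatted for display
Console.WriteLine(Math.Floor(value * 100) / 100);  // 3.2 - truncated to 2 places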