Convert.ToDouble is adding zeros and a 1, as in this picture:
Why is it turning 21.62 into 21.620000000000001?
Is this a floating-point issue?
Double (and float) are floating-point types, and in a binary system they will have some imprecision.
If you need more precise comparisons, use decimal instead. If you're just doing calculations, double should be fine. If you need to compare doubles for equality, compare the absolute value of their difference to some small constant:
if (a == b) // not reliable for floating point
{
    ....
}

const double EPSILON = 0.0000001;
if (Math.Abs(a - b) < EPSILON) // reliable: equal within a small tolerance
{
    ....
}
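For instance, here is a minimal sketch wrapping that epsilon test in a helper (the name NearlyEqual is mine, not a BCL API):

using System;

static bool NearlyEqual(double a, double b, double epsilon = 1e-7)
{
    return Math.Abs(a - b) < epsilon;
}

Console.WriteLine(0.1 + 0.2 == 0.3);            // False (binary rounding error)
Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3)); // True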
Floating-point numbers have approximation problems.
This is because a decimal fraction like 0.00001 can't be represented exactly in a binary system, where every representable fraction must have a power of two as its denominator.
The problem is intrinsic.
In short, yes; between any two bases (in this case, 2 and 10), there are always values that can be expressed with a finite number of "decimal" (binimal?) places in one but not in the other.
Rounding errors like this are common in almost every high-level programming language. In Java, you get around this with a class called BigDecimal when you need guaranteed accuracy (C#'s decimal plays a similar role). Otherwise, you can just write a method that rounds a value to the place you need.
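A minimal C# sketch of such a rounding method (the name RoundToPlaces is mine):

using System;

static double RoundToPlaces(double value, int places)
{
    // Math.Round with an explicit midpoint rule; adjust to taste.
    return Math.Round(value, places, MidpointRounding.AwayFromZero);
}

Console.WriteLine(RoundToPlaces(21.620000000000001, 2)); // 21.62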
I'm attempting to truncate a series of double-precision values in C#. The following value fails no matter what rounding method I use. What is wrong with this value that causes both of these methods to fail? Why does even Math.Round fail to correctly truncate the number? What method can be used instead to correctly truncate such values?
The value :
double value = 0.61740451388888251;
Method 1:
return Math.Round(value, digits);
Method 2:
double multiplier = Math.Pow(10, decimals);
return Math.Round(value * multiplier) / multiplier;
Fails even in VS watch window!
Double is a floating binary point type: it is represented in the binary system (like 11010.00110). When a double is printed in the decimal system, the result is often only an approximation, because not all decimal numbers have an exact representation in the binary system. Try for example this operation:
double d = 3.65d + 0.05d;
It will not result in 3.7 but in 3.6999999999999997, because the variable contains the closest available double.
The same happens in your case: your variable contains the closest available double to the value you wrote.
For precise operations, double/float is not the most fortunate choice.
Use double/float when you need fast performance or want to operate on a larger range of numbers, but where high precision is not required. For instance, it is a perfect type for calculations in physics.
For precise decimal operations use, well, decimal.
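For comparison, the same sum done in decimal comes out exact; a small illustrative snippet:

decimal d = 3.65m + 0.05m;
Console.WriteLine(d); // 3.70 (exact, since decimal stores base-10 digits)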
Here is an article about float/decimal: http://csharpindepth.com/Articles/General/FloatingPoint.aspx
If you need a more exact representation of the number you might have to use the decimal type, which has more precision but a smaller range (it's usually used for financial calculations).
More info on when to use each here: https://stackoverflow.com/a/618596/1373170
According to this online tool which gives the binary representation of doubles, the two closest double values to 0.62 are:
6.19999999999999995559107901499E-1 or 0x3FE3D70A3D70A3D7
6.20000000000000106581410364015E-1 or 0x3FE3D70A3D70A3D8
I'm not sure why neither of these agrees with your value exactly, but, like the others said, it is likely a floating-point representation issue.
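You can inspect the bit pattern a double actually stores yourself; a small sketch using standard BCL calls:

using System;

double v = 0.62;
// Raw IEEE 754 bit pattern of the stored double:
Console.WriteLine(BitConverter.DoubleToInt64Bits(v).ToString("X16")); // 3FE3D70A3D70A3D7
// 17 significant digits are enough to see the stored approximation:
Console.WriteLine(v.ToString("G17")); // 0.61999999999999995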
I think you are running up against the precision limit of a double-precision float (64 bits). Per http://en.wikipedia.org/wiki/Double-precision_floating-point_format, a double only gives 15 to 17 significant decimal digits.
I'm working on something and I've got a problem which I do not understand.
double d = 95.24 / (double)100;
Console.Write(d); //Break point here
The console output is 0.9524 (as expected), but if I inspect 'd' after stopping the program it shows 0.95239999999999991.
I have tried every possible cast and the result is the same. The problem is that I use 'd' elsewhere, and this precision problem makes my program fail.
So why does it do that? How can I fix it?
Use decimal instead of double.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation: a significand with one binary digit in front of the point, multiplied by an integer power of two. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base 10 to base 2 and back can introduce error.
To mitigate this, whenever high precision at low magnitudes is required, use decimal instead of double; decimal is a 128-bit floating-point type designed for very high precision, at the cost of reduced range (it can only represent numbers up to about ±7.9E28, instead of double's ±1.8E308; still plenty for most non-astronomical, non-physics programs) and twice the memory footprint of a double.
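To put numbers on that range difference (both constants are part of the BCL):

Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (about 7.9E28)
Console.WriteLine(double.MaxValue);  // 1.7976931348623157E+308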
A very good article that explains a lot: What Every Computer Scientist Should Know About Floating-Point Arithmetic. It is not specific to C#; it is about float arithmetic in general.
You could use a decimal instead:
decimal d = 95.24m / 100m; // note the m suffixes: a double/decimal mix would not compile
Console.Write(d); //Break point here
Try:
double d = Math.Round(95.24/(double)100, 4);
Console.Write(d);
edit: Or use a decimal, yeah. I was just trying to keep the answer as similar to the question as possible :)
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur when the price contains 4 digits after the dot, like 2.1234.
When I sent "2.1234" from my program, the order appeared on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the dot.
The question is: where is the line? When should I use decimal?
Should I use decimal for any financial operations, even if I need just one digit after the dot (1.1, 1.2, etc.)?
I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want (and can have) represented exactly in base 10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint.
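As a concrete example of the base-10 point, here is a minimal sketch summing ten 10-cent entries in each type:

using System;

double totalD = 0d;
for (int i = 0; i < 10; i++) totalD += 0.1;   // ten 10-cent entries in double
Console.WriteLine(totalD == 1.0);             // False: binary rounding accumulates

decimal totalM = 0m;
for (int i = 0; i < 10; i++) totalM += 0.10m; // the same entries in decimal
Console.WriteLine(totalM == 1.00m);           // True: base-10 digits stay exact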
When accuracy is needed and important, use decimal.
When accuracy is not that important, you can use double.
In your case, you should be using decimal, since it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built to represent base-10 values (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple answer is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails because 0.3 in its binary (base-2) representation is periodic, so you lose precision. The decimal type uses a base-10 representation (a scaled integer), so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database, without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values, where you care about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuarial computations), you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
A decimal also "remembers" how many digits it has: even though decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it if formatted as text.
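A quick illustration of that scale-preserving behavior:

Console.WriteLine(1.230m);          // 1.230 (the trailing zero, i.e. the scale, is kept)
Console.WriteLine(1.230m == 1.23m); // True (still numerically equal)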
If you always know the maximum number of decimal places you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number as an integer count of the smallest unit. For example, if the price always has 2 decimals, you would store the amount of cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 is stored in an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions, you also have to take things like this into consideration.
On the other hand, if the number of decimals varies a lot, using fixed point is a bad idea. And if high-precision floating-point calculations are needed, the decimal datatype was constructed for exactly that purpose.
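A minimal fixed-point sketch, assuming prices always have exactly 4 decimal places (all names here are mine, not a library API):

using System;

const long Scale = 10_000; // 4 decimal places: 1.0000 == 10_000 units

// Convert to and from the fixed-point representation.
long ToFixed(double price) => (long)Math.Round(price * Scale);
double FromFixed(long units) => (double)units / Scale;

long a = ToFixed(2.1234);     // 21234 units
long b = ToFixed(1.0001);     // 10001 units

long sum = a + b;             // addition stays in the same unit: exact
long product = a * b / Scale; // multiplication changes the unit, so rescale
                              // (integer division truncates; round if needed)

Console.WriteLine(FromFixed(sum));     // 3.1235
Console.WriteLine(FromFixed(product)); // 2.1236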
I have come across a precision issue with double in .NET. I thought this only applied to floats, but now I see that double is also a floating-point type.
double test = 278.97 - 90.46;
Debug.WriteLine(test); // 188.51000000000005
// correct answer is 188.51
What is the correct way to handle this? Round? Lop off the unneeded decimal places?
Use the decimal data type. "The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors."
This happens in many languages and stems from the fact that most decimal fractions cannot be stored exactly in binary floating-point hardware. For a detailed explanation of how doubles, floats, and all other floating-point values are stored, look at the IEEE formats described on Wikipedia, for example: http://en.wikipedia.org/wiki/Double_precision
Of course there are other formats, such as fixed point. But this imprecision exists in most languages, and it is why you often need epsilon tests instead of equality tests when using doubles in conditionals (i.e. abs(x - y) <= 0.0001 instead of x == y).
How you deal with this imprecision is up to you, and depends on your application.
I have a simple C# function:
public static double Floor(double value, double step)
{
    return Math.Floor(value / step) * step;
}
That calculates the largest number, less than or equal to "value", that is a multiple of "step". But it lacks precision, as seen in the following tests:
[TestMethod()]
public void FloorTest()
{
    int decimals = 6;
    double value = 5F;
    double step = 2F;
    double expected = 4F;
    double actual = Class.Floor(value, step);
    Assert.AreEqual(expected, actual);

    value = -11.5F;
    step = 1.1F;
    expected = -12.1F;
    actual = Class.Floor(value, step);
    Assert.AreEqual(Math.Round(expected, decimals), Math.Round(actual, decimals));
    Assert.AreEqual(expected, actual);
}
The first and second asserts are ok, but the third fails, because the result is only equal until the 6th decimal place. Why is that? Is there any way to correct this?
Update: If I debug the test I see that the values are equal up to the 8th decimal place rather than the 6th; maybe Math.Round introduces some imprecision.
Note: In my test code I wrote the "F" suffix (an explicit float constant) where I meant "D" (double), so if I change that I get more precision.
I actually sort of wish they hadn't implemented the == operator for floats and doubles. It is almost always wrong to ask whether a double or a float is equal to any other value.
If you want precision, use System.Decimal. If you want speed, use System.Double (or System.Single, i.e. float). Floating-point numbers are not "infinite precision" numbers, and therefore asserting equality must include a tolerance. As long as your numbers have a reasonable number of significant digits, this is OK.
If you're looking to do math on very large AND very small numbers, don't use float or double.
If you need infinite precision, don't use float or double.
If you are aggregating a very large number of values, don't use float or double (the errors will compound themselves).
If you need speed and size, use float or double.
See this answer (also by me) for a detailed analysis of how precision affects the outcome of your mathematical operations.
Floating-point arithmetic on computers is not an exact science :).
If you want exact precision to a predefined number of decimals, use decimal instead of double, or accept a small tolerance.
If you omit all the F suffixes (i.e. -12.1 instead of -12.1F), you will get equality to a few more digits. Your constants (and especially the expected values) are currently floats because of the F. If you are doing that on purpose, then please explain.
But for the rest I concur with the other answers on comparing double or float values for equality: it's just not reliable.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it.
Only use binary floating point if you can accept the machine's binary interpretation of your numbers. You can't represent 10 cents ($0.10) exactly.
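Trying the squaring example mentioned above in C#:

double sq = 0.1 * 0.1;
Console.WriteLine(sq == 0.01);       // False
Console.WriteLine(sq.ToString("R")); // 0.010000000000000002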
Check the answers to this question: Is it safe to check floating point values for equality to 0?
Really, just check for "within tolerance of..."
Floats and doubles cannot accurately store all numbers; this is a limitation of the IEEE floating-point format. For faithful precision you need an arbitrary-precision math library.
If you don't need precision past a certain point, then perhaps decimal will work better for you. It has higher precision than double.
For a similar issue, I ended up using the following implementation, which seems to succeed on most of my test cases (up to 5-digit precision):
public static double roundValue(double rawValue, double valueTick)
{
    if (valueTick <= 0.0) return 0.0;
    decimal val = new decimal(rawValue);
    decimal step = new decimal(valueTick);
    decimal ticks = decimal.Round(val / step);  // whole ticks, rounded to nearest
    return decimal.ToDouble(ticks * step);      // back to the nearest tick multiple
}
Sometimes the result is more precise than you would expect from strict IEEE 754 arithmetic.
That's because the hardware may use more bits for the intermediate computation.
See the C# specification and this article.
Java has the strictfp keyword and C++ has compiler switches; I miss that option in .NET.