I came across the following issue while developing an engineering rule value engine that uses an eval(...) implementation.
Dim first As Double = 1.1
Dim second As Double = 2.2
Dim sum As Double = first + second
If (sum = 3.3) Then
Console.WriteLine("Matched")
Else
Console.WriteLine("Not Matched")
End If
'The above condition returns False because sum's value is 3.3000000000000003 instead of 3.3
It looks like the 15th digit fails to round-trip. Can someone give a better explanation of what is happening here?
Is Math.Round(...) the only solution available, or is there something else I can try?
You are not adding decimals - you are adding up doubles.
Not all decimal values can be represented exactly by a double, hence the error. I suggest reading this article for background (What Every Computer Scientist Should Know About Floating-Point Arithmetic).
Use the Decimal type instead; it doesn't suffer from this particular issue.
Dim first As Decimal = 1.1D
Dim second As Decimal = 2.2D
Dim sum As Decimal = first + second
If (sum = 3.3D) Then
Console.WriteLine("Matched")
Else
Console.WriteLine("Not Matched")
End If
That's how double-precision numbers work on a PC.
The best way to compare them is to use a construction like this:
if (Math.Abs(second - first) <= 1E-9)
    Console.WriteLine("Matched");
Instead of 1E-9 you can use another number that represents the acceptable error in the comparison.
Equality comparisons of floating-point values are inherently unreliable because of how fractional values are represented in the machine. You should compare against some epsilon value instead of testing for exact equality. Here is an article that describes it much more thoroughly:
http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm
Edit: Math.Round will not be an ideal choice because of the error it introduces for certain comparisons. You are better off determining an epsilon value that can be used to limit the amount of error in the comparison (basically determining the level of accuracy you require).
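As a concrete illustration of the epsilon approach described above, here is a minimal C# sketch; the helper name, the default epsilon of 1e-9, and the mixed absolute/relative check are my own assumptions and should be tuned to your value ranges.

using System;

static class FloatCompare
{
    // Treats two doubles as equal when they differ by no more than a small
    // tolerance. The absolute check covers values near zero; the relative
    // check scales the tolerance for large magnitudes.
    public static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
    {
        double diff = Math.Abs(a - b);
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return diff <= epsilon || diff <= scale * epsilon;
    }
}

// Usage:
//   double sum = 1.1 + 2.2;                                  // 3.3000000000000003
//   Console.WriteLine(FloatCompare.NearlyEqual(sum, 3.3));   // True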
A double uses floating-point arithmetic, which is approximate but more efficient. If you need to compare against exact values, use the decimal data type instead.
In C#, Java, Python, and many other languages, floats and doubles are not exact. Because of the way they are represented (a significand scaled by an exponent), they often carry small inaccuracies. See http://www.yoda.arachsys.com/csharp/decimal.html for more info.
From the documentation:
http://msdn.microsoft.com/en-us/library/system.double.aspx
Floating-Point Values and Loss of Precision

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.

A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.

A value might not roundtrip if a floating-point number is involved. A value is said to roundtrip if an operation converts an original floating-point number to another form, an inverse operation transforms the converted form back to a floating-point number, and the final floating-point number is equal to the original floating-point number. The roundtrip might fail because one or more least significant digits are lost or changed in a conversion.

In addition, the result of arithmetic and assignment operations with Double values may differ slightly by platform because of the loss of precision of the Double type. For example, the result of assigning a literal Double value may differ in the 32-bit and 64-bit versions of the .NET Framework. The following example illustrates this difference when the literal value -4.42330604244772E-305 and a variable whose value is -4.42330604244772E-305 are assigned to a Double variable. Note that the result of the Parse(String) method in this case does not suffer from a loss of precision.
This is a well-known problem with floating-point arithmetic. Look into how binary floating-point encoding works for further details.
Use the type "decimal" if that fits your needs.
But in general, you should never compare floating point values to constant floating point values with the equality sign.
Failing that, compare only to the number of places you care about (e.g. for 4 places you would write: If sum > 3.2999 And sum < 3.3001 Then).
Let me start right off by saying this isn't a question about decimal precision or tolerance comparison! It is about the inherent representation of large doubles: after a certain range the computer starts losing significant precision, and not just on the fractional side. E.g.
double d1 = 1e20+8000;
double d2 = 1e20;
Debug.WriteLine(d1 == d2);
Now this yields true, which for me is very unacceptable. Converting to decimal at this point in the project is out of the question. Is there a way to mitigate this, for example by computing that at around 1e20 the computer represents numbers within +/- 8000 as the same value? The tolerance seems to be 8000 for this exponent; going above it triggers a correct comparison, but I need exact comparison up to the decimal point. Beyond the decimal point I don't care about the digits for large numbers.
Now this yields true, which for me is very unacceptable.
Then you should use a data type that does what you want. You should never use floating-point numbers when you expect exact results; decimal is what you should use.
Changing data types is your only option that works in the long term. If you don't want to go there, there is no option that would not require a significant amount of work checking the calculations you make.
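For what it's worth, here is a small sketch (my own illustration, not from the original answer) of why 1e20 + 8000 compares equal to 1e20: the gap between adjacent doubles at that magnitude is 16384, so adding 8000 rounds back to the same value.

using System;

class UlpDemo
{
    static void Main()
    {
        double d = 1e20;
        // Reinterpret the bits and step to the next representable double.
        long bits = BitConverter.DoubleToInt64Bits(d);
        double next = BitConverter.Int64BitsToDouble(bits + 1);

        Console.WriteLine(next - d);              // 16384: the spacing ("ulp") at 1e20
        Console.WriteLine(1e20 + 8000 == 1e20);   // True: 8000 is less than half the spacing
    }
}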
I have the following code which I can't change...
public static decimal Convert(decimal value, Measurement currentMeasurement, Measurement targetMeasurement, bool roundResult = true)
{
    double result = Convert(System.Convert.ToDouble(value), currentMeasurement, targetMeasurement, roundResult);
    return System.Convert.ToDecimal(result);
}
Now result is returned as -23.333333333333336, but once the conversion to a decimal takes place it becomes -23.3333333333333M.
I thought decimals could hold bigger values and were hence more accurate, so how am I losing data going from double to decimal?
This is by design. Quoting from the documentation of Convert.ToDecimal:
The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest. The following example illustrates how the Convert.ToDecimal(Double) method uses rounding to nearest to return a Decimal value with 15 significant digits.
Console.WriteLine(Convert.ToDecimal(123456789012345500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(123456789012346500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(10030.12345678905D)); // Displays 10030.123456789
Console.WriteLine(Convert.ToDecimal(10030.12345678915D)); // Displays 10030.1234567892
The reason for this is mostly that double can only guarantee 15 decimal digits of precision anyway. Everything displayed after them (converted to a string it shows 17 digits, because that is what double maintains internally and what you might need to exactly reconstruct every possible double value from a string) is not guaranteed to be part of the exact value being represented. So Convert takes the only sensible route and rounds them away. After all, if you have a type that can represent decimal values exactly, you wouldn't want to start with digits that are inaccurate.
So you're not losing data, per se. In fact, you're only losing garbage: garbage you thought was data.
EDIT: To clarify my point from the comments: Conversions between different numeric data types may incur a loss of precision. This is especially the case between double and decimal because both types are capable of representing values the other type cannot represent. Furthermore, both double and decimal have fairly specific use cases they're intended for, which is also evident from the documentation (emphasis mine):
The Double value type represents a double-precision 64-bit number with values ranging from negative 1.79769313486232e308 to positive 1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and not a number (NaN). It is intended to represent values that are extremely large (such as distances between planets or galaxies) or extremely small (the molecular mass of a substance in kilograms) and that often are imprecise (such as the distance from earth to another solar system). The Double type complies with the IEC 60559:1989 (IEEE 754) standard for binary floating-point arithmetic.

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
This basically means that for quantities that will not grow unreasonably large and for which you need an accurate decimal representation, you should use decimal (it sounds fairly obvious when written that way). In practice this most often means financial calculations, as the documentation already states.
On the other hand, in the vast majority of other cases, double is the right way to go and usually does not hurt as a default choice (languages like Lua and JavaScript get away just fine with double being the only numeric data type).
In your specific case, since you mentioned in the comments that those are temperature readings, it is very, very, very simple: just use double throughout. You have temperature readings. Wikipedia suggests that highly specialized thermometers reach around 10^-3 °C precision, which basically means that the differences of around 10^-13 (!) you are worried about here are simply irrelevant. Your thermometer gives you (let's be generous) five accurate digits, and you worry about the ten digits of random garbage that come after them. Just don't.
I'm sure a physicist or other scientist might be able to chime in here with proper handling of measurements and their precision, but we were taught in school that it's utter bullshit to even give values more precise than the measurements are. And calculations potentially affect (and reduce) that precision.
I'm attempting to truncate a series of double-precision values in C#. The following value fails no matter what rounding method I use. What is wrong with this value that causes both of these methods to fail? Why does even Math.Round fail to correctly truncate the number? What method can be used instead to correctly truncate such values?
The value:
double value = 0.61740451388888251;
Method 1:
return Math.Round(value, digits);
Method 2:
double multiplier = Math.Pow(10, digits);
return Math.Round(value * multiplier) / multiplier;
Fails even in VS watch window!
Double is a binary floating-point type. Values are represented in the binary system (like 11010.00110). When a decimal number is stored as a double it is often only an approximation, because not all decimal numbers have an exact representation in binary. Try for example this operation:
double d = 3.65d + 0.05d;
It will not result in 3.7 but in 3.6999999999999997, because the variable holds the closest available double.
The same happens in your case: your variable contains the closest available double to the value you wrote.
For precise decimal operations double/float is not the best choice.
Use double/float when you need fast performance or want to operate on a larger range of numbers, but where high precision is not required. For instance, it is a perfect type for calculations in physics.
For precise decimal operations use, well, decimal.
Here is an article about float/decimal: http://csharpindepth.com/Articles/General/FloatingPoint.aspx
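For comparison, here is a short sketch (my own) of the 3.65 + 0.05 example above, done once in double and once in decimal; the "G17" format is used only to reveal the value actually stored in the double.

using System;

class DoubleVsDecimal
{
    static void Main()
    {
        double d = 3.65d + 0.05d;
        Console.WriteLine(d.ToString("G17"));   // 3.6999999999999997 (closest available double)

        decimal m = 3.65m + 0.05m;
        Console.WriteLine(m);                   // 3.70 (exact)
    }
}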
If you need a more exact representation of the number you might have to use the decimal type, which has more precision but a smaller range (it's usually used for financial calculations).
More info on when to use each here: https://stackoverflow.com/a/618596/1373170
According to this online tool which gives the binary representation of doubles, the two closest double values to 0.62 are:
6.19999999999999995559107901499E-1 (0x3FE3D70A3D70A3D7)
6.20000000000000106581410364015E-1 (0x3FE3D70A3D70A3D8)
I'm not sure why neither of these agree with your value exactly, but like the others said, it is likely a floating point representation issue.
I think you are running up against the binary limits of a double-precision float (64 bits). From http://en.wikipedia.org/wiki/Double-precision_floating-point_format, a double only gives 15 to 17 significant decimal digits.
The Double data type cannot correctly represent some base 10 values. This is because of how floating point numbers represent real numbers. What this means is that when representing monetary values, one should use the decimal value type to prevent errors. (feel free to correct errors in this preamble)
What I want to know is: which values present such a problem for the Double data type on a 64-bit architecture in the standard .NET Framework (C#, if that makes a difference)?
I expect the answer to be a formula or rule for finding such values, but I would also like some example values.
Any number which cannot be written as a finite sum of powers of 2 (with positive or negative exponents) cannot be exactly represented as a binary floating-point number.
The common IEEE formats for 32- and 64-bit representations of floating-point numbers impose further constraints; they limit the number of binary digits in both the significand and the exponent. So there are maximum and minimum representable numbers (approximately +/- 10^308 (base-10) if memory serves) and limits to the precision of a number that can be represented. This limit on the precision means that, for 64-bit numbers, the difference between the exponent of the largest power of 2 and the smallest power in a number is limited to 52, so if your number includes a term in 2^52 it can't also include a term in 2^-1.
Simple examples of numbers which cannot be exactly represented in binary floating-point numbers include 1/3, 2/3, 1/5.
Since the set of floating-point numbers (in any representation) is finite, and the set of real numbers is infinite, one algorithm to find a real number which is not exactly representable as a floating-point number is to select a real number at random. The probability that the real number is exactly representable as a floating-point number is 0.
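If you want a quick way to test candidate values, here is a rough sketch (my own, not part of the answer above): a decimal fraction has a finite binary expansion only if repeatedly doubling its fractional part eventually reaches zero, i.e. its denominator in lowest terms is a power of two. It ignores the additional 53-bit significand limit, so treat it as an approximation.

using System;

class BinaryExpansionCheck
{
    // Returns true if the fractional part of value terminates in binary.
    static bool HasFiniteBinaryExpansion(decimal value)
    {
        decimal frac = Math.Abs(value) % 1m;
        for (int i = 0; i < 60 && frac != 0m; i++)
        {
            frac *= 2m;      // shift one binary digit out of the fraction
            frac %= 1m;      // and discard it
        }
        return frac == 0m;
    }

    static void Main()
    {
        Console.WriteLine(HasFiniteBinaryExpansion(0.625m));  // True  (0.101 in binary)
        Console.WriteLine(HasFiniteBinaryExpansion(0.1m));    // False (repeating in binary)
    }
}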
You generally need to be prepared for the possibility that any value you store in a double has some small amount of error. Unless you're storing a constant value, chances are it could be something with at least some error. If it's imperative that there never be any error, and the values aren't constant, you probably shouldn't be using a floating point type.
What you probably should be asking in many cases is, "How do I deal with the minor floating point errors?" You'll want to know what types of operations can result in a lot of error, and what types don't. You'll want to ensure that comparing two values for "equality" actually just ensures they are "close enough" rather than exactly equal, etc.
This question actually goes beyond any single programming language or platform. The inaccuracy is inherent in the binary representation itself.
Consider that with a double, each binary digit N at 0-based position I to the left of the binary point represents the value N * 2^I, and every digit at position I to the right of the binary point represents the value N * 2^(-I).
As an example, 5.625 (base 10) would be 101.101 (base 2).
Given this representation, any decimal fraction that cannot be written as a sum of terms 2^(-I) for different values of I will be stored inexactly as a double.
A float is represented as s, e and m in the following formula
s * m * 2^e
This means that any number that cannot be represented using the given expression (and in the respective domains of s, e and m) cannot be represented exactly.
Basically, you can represent every integer between 0 and 2^53 - 1 multiplied by some power of two (possibly a negative power).
As an example, all integers between 0 and 2^53 - 1 can be represented as-is when multiplied by 2^0 = 1. You can also represent all of those numbers divided by 2 (giving a .5 fraction), and so on.
This answer does not fully cover the topic, but I hope it helps.
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using the double type for prices in my trading software.
I've noticed that sometimes there are odd errors.
They occur if the price contains 4 digits after the decimal point, like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the decimal point.
The question is: where is the line? When should I use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the decimal point? (1.1, 1.2, etc.)
I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
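A classic illustration (my own sketch, not the asker's exact scenario, but the same class of error): accumulating a tick of 0.1 ten times lands exactly on 1 with decimal but not with double.

using System;

class TickAccumulation
{
    static void Main()
    {
        double dbl = 0.0;
        decimal dec = 0m;
        for (int i = 0; i < 10; i++)
        {
            dbl += 0.1;    // 0.1 is not exactly representable in binary
            dec += 0.1m;   // 0.1m is stored exactly
        }
        Console.WriteLine(dbl == 1.0);   // False (dbl is 0.9999999999999999)
        Console.WriteLine(dec == 1.0m);  // True
    }
}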
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built for representing exact base-10 values (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary (base 2) representation is periodic, so you lose precision. The decimal type stores a base-10 scaled integer, so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered and no tolerance is acceptable, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much better suited for representing monetary values, where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used in actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
A decimal also "remembers" how many digits it has; e.g. even though decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it if formatted as text.
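A small sketch of that trailing-zero behaviour (my own example): the two values compare equal, but each remembers its own scale when formatted.

using System;

class DecimalScaleDemo
{
    static void Main()
    {
        decimal a = 1.230m;
        decimal b = 1.23m;
        Console.WriteLine(a == b);   // True: same numeric value
        Console.WriteLine(a);        // 1.230 (trailing zero preserved by the scale)
        Console.WriteLine(b);        // 1.23
    }
}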
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still being very fast.
The simplest way to use fixed point is to store the number as an integer count of its smallest fraction. For example, if the price always has 2 decimals you would be saving the amount in cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would be storing ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), etc.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths), as in the sketch below. And if you are going to use other mathematical functions you also have to take things like this into consideration.
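Here is a minimal fixed-point sketch (my own illustration, assuming 4 decimal places stored as ten-thousandths in a long); it shows the rescaling step needed after a multiplication.

using System;

static class FixedPoint
{
    const long Scale = 10000;   // 4 decimal places: values are stored in ten-thousandths

    public static long ToFixed(decimal value) => (long)(value * Scale);
    public static decimal FromFixed(long value) => (decimal)value / Scale;

    // Multiplying two scaled values carries the scale twice, so divide once
    // to get back to the original unit (this truncates; add Scale / 2 before
    // dividing if round-to-nearest is wanted).
    public static long Multiply(long a, long b) => a * b / Scale;
}

class FixedPointDemo
{
    static void Main()
    {
        long price = FixedPoint.ToFixed(2.1234m);                           // 21234
        long doubled = FixedPoint.Multiply(price, FixedPoint.ToFixed(2m));  // 42468
        Console.WriteLine(FixedPoint.FromFixed(doubled));                   // 4.2468
    }
}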
On the other hand, if the number of decimals varies a lot, using fixed point is a bad idea. And if high-precision decimal calculations are needed, the decimal datatype was constructed for exactly that purpose.