Precision of double after decimal point - C#

During a lunch break we started debating the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 makes no such guarantee;
it depends on where the first 1 is in the binary representation
(i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?

As stated by the C# reference, the precision is 15 to 16 significant digits in total (depending on the value represented), counted across both sides of the decimal point.
In short, you are right: it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed-precision decimal values is not a good idea.
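For instance, a minimal sketch of the effect (the "R" round-trip format shows what actually survived; 15 significant digits are guaranteed to round-trip, a 16th may not):
using System;

double d1 = 12345678.1234567;   // 15 significant digits: survives intact
double d2 = 12345678.12345678;  // 16 significant digits: the trailing digit is at the mercy of rounding
Console.WriteLine(d1.ToString("R"));
Console.WriteLine(d2.ToString("R"));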

You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague: the precision is relative to the magnitude of the value being represented, and sufficiently large doubles have no fractional digits of precision at all.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
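A small sketch of this (Math.BitIncrement is available from .NET Core 3.0; on older frameworks the same step can be taken via BitConverter.DoubleToInt64Bits):
using System;

double x = 4503599627370496.0;       // 2^52: from here up to 2^53, adjacent doubles are exactly 1.0 apart
double next = Math.BitIncrement(x);  // the next representable double
Console.WriteLine(next);             // 4503599627370497 (on runtimes with shortest round-trip formatting)
Console.WriteLine(next - x);         // 1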

C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa; 52 bits stored explicitly plus one implicit leading bit) and an 11-bit exponent e, whose range is -1022 to 1023. Their value is therefore
±p * 2^e
where the significand is normalized so that it always has exactly one binary digit before the point (1 ≤ p < 2). The precision of the significand's fractional part is therefore fixed at 52 bits. On the other hand, the number of digits after the decimal point in a double also depends on its exponent: numbers whose exponent exceeds the number of bits in the significand's fractional part have no fractional part at all.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
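For the curious, a sketch that pulls these fields out of a double via its raw bits (field widths and bias as described above):
using System;

double value = 1.5;
long bits = BitConverter.DoubleToInt64Bits(value);
int sign = (int)((bits >> 63) & 1);
int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  // subtract the bias
long fraction = bits & 0xFFFFFFFFFFFFFL;            // the 52 explicit fraction bits

// The implicit leading 1 makes the significand 1.fraction for normalized numbers.
Console.WriteLine($"sign={sign}, exponent={exponent}, fraction=0x{fraction:X13}");
// For 1.5: sign=0, exponent=0, fraction=0x8000000000000 (significand 1.1 in binary)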

Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to the documentation, precision is actually 15-17 significant digits. An example of a double that needs 17 digits to round-trip is 0.92107099070578813 (don't ask me how I got that number :P)
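The difference is visible with the "G15" and "G17" format specifiers (a sketch; "G17" is documented to round-trip any double):
using System;

double d = 0.92107099070578813;
Console.WriteLine(d.ToString("G15"));  // 0.921070990705788   - the 16th and 17th digits are gone
Console.WriteLine(d.ToString("G17"));  // 0.92107099070578813 - round-trips exactly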

Related

Parsing float resulting in strange decimal values

I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off from what I'd expect, I struggle to believe it's a rounding issue.
However double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38), so I don't see why it isn't parsed as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact value to 650512.56 (for example) is 650512.5625, which is then shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
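A quick comparison of the three types on the example value (a sketch; invariant culture is assumed so "." is the decimal separator):
using System;
using System.Globalization;

string s = "650512.56";
var inv = CultureInfo.InvariantCulture;
float f = float.Parse(s, inv);
Console.WriteLine((double)f);             // 650512.5625: the nearest float to 650512.56
Console.WriteLine(double.Parse(s, inv));  // 650512.56: double is close enough to print back what was parsed
Console.WriteLine(decimal.Parse(s, inv)); // 650512.56: decimal stores it exactly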
The mantissa of a float in C# has 23 explicit bits (24 including the implicit leading bit), which means it can hold 6-7 significant digits. In your example 650512.59 you have 8, and it is just that last digit which is 'wrong'. Double has a 52-bit mantissa (15-16 digits), so of course it shows all your 8 or 9 significant digits correctly.
See here for more: Type float in C#

Parsing floats with Single.Parse()

Here comes a silly question. I'm playing with the Parse function of System.Single, and it behaves unexpectedly, which might be because I don't really understand floating-point numbers. The MSDN page for System.Single.MaxValue states that the max value is 3.402823e38, which in standard form is
340282300000000000000000000000000000000
If I use this string as an argument for the Parse() method, it succeeds without error, and if I change any of the zeros to an arbitrary digit it still succeeds (although it seems to ignore them, looking at the result). In my understanding, that exceeds the limit, so what am I missing?
It may be easier to think about this by looking at some lower numbers. All (positive) integers up to 16777216 can be exactly represented in a float. After that point, only every other integer can be represented (up to the next time we hit a limit, at which point it's only every 4th integer that can be represented).
So what has to happen is that 16777218 has to stand for 16777218 ± 1, 16777220 for 16777220 ± 1, and so on. As you move up into even larger numbers, the range of integers each value has to "represent" grows wider and wider, until at the top end 340282300000000000000000000000000000000 represents all numbers within roughly ± 100000000000000000000000000000000 of it (I've not actually worked out the right ± value here, but hopefully you get the point).
Number            Significand                  Exponent
16777215   =   1 11111111111111111111111  ×  2^0  =  111111111111111111111111
16777216   =   1 00000000000000000000000  ×  2^1  =  1000000000000000000000000
16777218   =   1 00000000000000000000001  ×  2^1  =  1000000000000000000000010
               ^
               |
               Implicit leading bit
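The same spacing can be checked directly (a sketch; assumes a modern runtime that performs float arithmetic in single precision):
using System;

float f = 16777216f;        // 2^24: the last point where adjacent floats are 1 apart
float g = f + 1;            // 16777217 is not representable; it rounds back to 16777216
float h = f + 2;            // 16777218 is representable
Console.WriteLine(g == f);  // True
Console.WriteLine(h == f);  // False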
That's actually not true: change the first 0 to a 9 and you will see an exception. In fact, changing it to anything 6 and up blows up, because the result would exceed Single.MaxValue (3.40282347e38).
Any other change is simply rounded away; since float is not a 100% accurate representation of a 39-digit decimal, that's fine.
A floating point number is not like a decimal. It comprises a mantissa that carries the significant digits and an exponent that effectively says how far left or right of the decimal point to place the mantissa. A System.Single can only handle seven significant digits in the mantissa. If you replace any of your trailing zeroes with an arbitrary digit it is being lost when your decimal is converted into the mantissa and exponent form.
Good question. That is happening because being able to store a number in that range doesn't mean the type has enough precision to hold it exactly. A float only stores about 6-7 leading digits, plus an exponent that describes where the decimal point goes.
0.012345 and 1234500 hold the same amount of information: same mantissa, different exponents. MSDN states only that the value after exponentiation cannot be bigger than MaxValue.
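A sketch demonstrating the point: changing trailing digits of the MaxValue string changes nothing, because they fall far below the precision of the mantissa:
using System;
using System.Globalization;

string max     = "340282300000000000000000000000000000000";
string tweaked = "340282300000000000000000000001234567890";  // last ten digits changed

float a = float.Parse(max, CultureInfo.InvariantCulture);
float b = float.Parse(tweaked, CultureInfo.InvariantCulture);
Console.WriteLine(a == b);  // True: only ~7 leading digits make it into the mantissa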

Why is C# Rounding This Way?

So, I've got this floating point number:
(float)-123456.668915
It's a number chosen at random because I'm doing some unit testing for a chunk of BCD code I'm writing. Whenever I compare the number above with a string ("-123456.668915", to be clear), I'm getting an issue with how C# rounds the number: it rounds it to -123456.7. This has been checked in NUnit and with straight console output.
Why is it rounding like this? According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
Thanks for the help!
According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
"6 digits after the decimal point" isn't the same as "6 digits of precision". The number of digits of precision is the number of significant digits which can be reliably held. Your number has 12 significant digits, so it's not at all surprising that it can't be represented exactly by float.
Note that the number it's (supposedly) rounding to, -123456.7, does have 7 significant digits. In fact, that's not the value of your float either. I strongly suspect the exact value is -123456.671875, as that's the closest float to -123456.668915. However, when you convert the exact value to a string representation, the result is only 7 digits, partly because beyond that point the digits aren't really meaningful anyway.
You should probably read my article about binary floating point in .NET for more details.
The float type has a precision of 24 significant bits (except for denormals), which is equivalent to 24 log10 2 ≈ 7.225 significant decimal digits. The number -123456.668915 has 12 significant digits, so it can't be represented accurately.
The actual binary value, rounded to 24 significant bits, is -11110001001000000.1010110. This is equivalent to the fraction -7901227/64 = -123456.671875. Rounding to 7 significant digits gives the -123456.7 you see.
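This can be observed directly (a sketch; the cast through double avoids Decimal's documented 7-digit rounding of float sources):
using System;

float f = (float)-123456.668915;
Console.WriteLine(f);                   // -123456.7 on .NET Framework; newer runtimes print the shortest round-trip form, -123456.67
Console.WriteLine((decimal)(double)f);  // -123456.671875: the exact value stored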

Accuracy of decimal

I use the decimal type for high-precision (monetary) calculations.
But I came across this simple division today:
1 / (1 / 37) which should result in 37 again
http://www.wolframalpha.com/input/?i=1%2F+%281%2F37%29
But C# gives me:
37.000000000000000000000000037M
I tried both these:
1m/(1m/37m);
and
Decimal.Divide(1, Decimal.Divide(1, 37))
but both yield the same result. How can this behaviour be explained?
Decimal stores the value as a decimal floating-point number with only limited precision. The result of 1/37 is not stored precisely: it is stored as 0.027027027027027027027027027M, whereas the true number has the group 027 repeating indefinitely. For that reason, you cannot get a precise decimal representation for every possible number.
If you use double in the same calculation, the end result happens to be correct in this case (but that does not mean double will always be better).
A good answer on that topic is here: Difference between decimal, float and double in .NET?
The decimal data type has an accuracy of 28-29 significant digits.
So what you have to understand is that even with 28-29 significant digits you are still not exact.
When you compute a decimal value for 1/37, you only get 28-29 digits of an expansion that repeats forever. For example, 1/37 is 0.02 if you keep two decimal places and 0.027 if you keep three; divide 1 by these values and you get 50 in the first case and ≈37.04 in the second. With the 28-29 digits that decimal keeps, you get 37.000000000000000000000000037. To get an exact 37, you would simply need more significant digits than decimal offers.
Always do computations at maximum precision and round only the final answer, with Math.Round, to the desired result.
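A minimal sketch of the behaviour and the suggested rounding:
using System;

decimal oneOver37 = 1m / 37m;               // 0.0270270270270... truncated to decimal's 28-29 digits
decimal result = 1m / oneOver37;
Console.WriteLine(result);                  // 37.000000000000000000000000037
Console.WriteLine(Math.Round(result, 10));  // 37.0000000000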

Wasn't the double type's precision 15 digits in C#?

I was testing this code from Brainteasers:
double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1 - d2) == 1.0);
And the result is "False". When I change the data type:
decimal d1 = 1.000001M;
decimal d2 = 0.000001M;
decimal d3 = d1-d2;
Console.WriteLine(d3 == 1);
The program writes the correct answer: "True".
This problem uses just 6 digits after the decimal point. What happened to the 15 digits of precision?
This has nothing to do with precision - it has to do with representational rounding errors.
System.Decimal is capable of representing large floating point numbers with a significantly reduced risk of incurring any rounding errors like the one you are seeing. System.Single and System.Double are not capable of this and will round these numbers off and create issues like the one you are seeing in your example.
System.Decimal uses a scaling factor to hold the position of the decimal place thus allowing for exact representation of the given floating-point number, whereas System.Single and System.Double only approximate your value as best they can.
For more information, please see System.Double:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
• Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
• A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
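To see the approximation concretely, the values actually stored can be printed with the "G17" round-trip format (a sketch; the outputs shown in the comments are typical):
using System;

double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine(d1.ToString("G17"));         // 1.0000009999999999
Console.WriteLine(d2.ToString("G17"));         // 9.9999999999999995E-07
Console.WriteLine((d1 - d2).ToString("G17"));  // 0.99999999999999989 - hence the comparison with 1.0 is False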
Generally, the way to check floating-point values for equality is to check for near-equality, i.e., check for a difference that is no larger than the smallest representable value (called epsilon) for that data type. For example,
if (Math.Abs(d1 - d2) <= Double.Epsilon) ...
This tests whether d1 and d2 are represented by the same bit pattern, give or take the least significant bit.
Correction (Added 2 Mar 2015)
Upon further examination, the code should be more like this:
// Assumes that d1 and d2 are not both zero
if (Math.Abs(d1 - d2) / Math.Max(Math.Abs(d1), Math.Abs(d2)) <= Double.Epsilon) ...
In other words, take the absolute difference between d1 and d2, scale it by the larger of their magnitudes, and then compare it to Epsilon.
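As a sketch, the corrected check can be wrapped in a helper. The NearlyEqual name and the explicit tolerance parameter are illustrative additions; the answer's formula corresponds to passing Double.Epsilon as the tolerance, and a larger relative tolerance such as 1e-15 is common in practice:
using System;

static bool NearlyEqual(double d1, double d2, double tolerance)
{
    if (d1 == d2) return true;  // also covers the both-zero case, avoiding division by zero
    double scale = Math.Max(Math.Abs(d1), Math.Abs(d2));
    return Math.Abs(d1 - d2) / scale <= tolerance;  // relative difference
}

Console.WriteLine(NearlyEqual(1.000001 - 0.000001, 1.0, 1e-15));  // True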
References
• http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
• http://msdn.microsoft.com/en-us/library/system.double.aspx#Precision
The decimal type implements decimal floating point whereas double is binary floating point.
The advantage of decimal is that it behaves as a human would with respect to rounding, and if you initialise it with a decimal value, that value is stored precisely as you specified. This is only true for decimal numbers of finite length within the representable range and precision. If you initialised it with, say, 1.0M/3.0M, it would not be stored precisely, just as you cannot write 0.3333-recurring precisely on paper.
If you initialise a binary FP value with a decimal, it will be converted from the human readable decimal form, to a binary representation that will seldom be precisely the same value.
The primary purpose of the decimal type is implementing financial applications; in the .NET implementation it also has far higher precision than double. However, binary FP is directly supported by the hardware, so binary FP operations are significantly faster than decimal FP operations.
Note that double is accurate to approximately 15 significant digits, not 15 decimal places. d1 is initialised with a 7-significant-digit value, not 6, while d2 has only 1 significant digit. The fact that they are of significantly different magnitudes does not help either.
The idea of floating point numbers is that they are not precise to a particular number of digits. If you want that sort of functionality, you should look at the decimal data type.
The precision isn't absolute, because it's not possible in general to convert between decimal and binary numbers exactly.
For example, decimal .1 repeats forever when represented in binary: it converts to .000110011001100110011..., repeating. No finite amount of precision can store that exactly.
Avoid comparing floating-point numbers for equality.
