I use the decimal type for high-precision (monetary) calculations.
But I came across this simple division today:
1 / (1 / 37) which should result in 37 again
http://www.wolframalpha.com/input/?i=1%2F+%281%2F37%29
But C# gives me:
37.000000000000000000000000037M
I tried both these:
1m/(1m/37m);
and
Decimal.Divide(1, Decimal.Divide(1, 37))
but both yield the same result. How can this behaviour be explained?
Decimal stores the value as a decimal floating-point number with only limited precision. The result of 1 / 37 is not stored precisely; it is stored as 0.027027027027027027027027027M. The true value has the group 027 repeating indefinitely in its decimal representation. For that reason, you cannot get a precise decimal representation for every possible number.
If you use Double in the same calculation, the end result is correct in this case (but it does not mean it will always be better).
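To see both behaviours side by side, here is a minimal sketch (results as reported above; as noted, double merely happens to come out right for this particular input):

using System;

decimal viaDecimal = 1m / (1m / 37m);
double viaDouble = 1.0 / (1.0 / 37.0);

Console.WriteLine(viaDecimal); // 37.000000000000000000000000037
Console.WriteLine(viaDouble);  // 37 in this case, but double is not always more accurate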
A good answer on that topic is here: Difference between decimal, float and double in .NET?
The decimal data type has a precision of 28-29 significant digits.
So what you have to understand is that with 28-29 significant digits you are still not exact.
So when you compute 1/37 as a decimal, you only get 28-29 digits of the true value. For example, 1/37 is 0.02 if you keep two decimal places and 0.027 if you keep three. Now imagine dividing 1 by each of those values: you get 50 in the first case and about 37.04 in the second. Keeping 28-29 digits, as decimal does, gets you to 37.000000000000000000000000037. To get exactly 37 you would simply need more significant digits than decimal offers.
Always do computations with the maximum number of significant digits and only round off your final answer with Math.Round to get the desired result.
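As a small sketch of that advice (the choice of 20 decimal places here is just an example):

using System;

decimal raw = 1m / (1m / 37m);          // 37.000000000000000000000000037
Console.WriteLine(Math.Round(raw, 20)); // 37.00000000000000000000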
I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off what I'd expect, I struggle to believe it's a rounding issue.
However double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38) so I don't see why it doesn't parse it as I'd expect.
I tried a few more tests, but it doesn't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exactly representable float to 650512.56 (for example) is 650512.5625, which is then what's shown in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
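A short sketch of that suggestion, using the 650512.56 value mentioned above (the float-to-decimal detour via double is just one way to reveal the stored value):

using System;
using System.Globalization;

float f = float.Parse("650512.56", CultureInfo.InvariantCulture);
decimal d = decimal.Parse("650512.56", CultureInfo.InvariantCulture);

Console.WriteLine((decimal)(double)f); // 650512.5625 - the value the float actually stores
Console.WriteLine(d);                  // 650512.56   - decimal keeps the parsed text exactly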
The mantissa of a float in C# has 23 bits, which means it can hold 6-7 significant decimal digits. In your example 650512.59 you have 8, and it is exactly that extra digit which comes out 'wrong'. A double has a 52-bit mantissa (15-16 digits), so of course it shows all your 8 or 9 significant digits correctly.
See here for more: Type float in C#
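As a rough illustration (not from the answer above), you can inspect the raw bit pattern of a float yourself:

using System;

// Reinterpret the 32 bits of a float as an int and print them in binary.
int bits = BitConverter.ToInt32(BitConverter.GetBytes(650512.59f), 0);
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(32, '0'));
// Reading left to right: 1 sign bit, 8 exponent bits, 23 mantissa bits.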
With single precision (32 bits), the bits are divided up as 1 sign bit, 8 exponent bits and 23 mantissa/significand bits.
So we have 23 bits of mantissa/significand.
So we can represent 2^23 numbers (via 23 bits), which is 8388608, a number 7 digits long.
BUT
I was reading that the mantissa is normalized (the leading digit in the mantissa will always be a 1) - so the pattern is actually 1.mmm and only the mmm is represented in the mantissa.
For example: only 0.75 is stored in the mantissa, but the value actually represented is 1.75.
Question #1
So basically that adds one more digit of precision... no?
If so, then we have 8 significant digits!
So why does MSDN say 7?
Question #2
In double there are 52 bits for the mantissa (0..51).
If I add 1 for the normalized mantissa, that gives 2^53 possibilities, which is 9007199254740992 (16 digits),
and MSDN does say 15-16.
Why this inconsistency? Am I missing something?
It doesn't add one more decimal digit - just a single binary digit. So instead of 23 bits, you have 24 bits. This is handy, because the only number you can't represent as starting with a one is zero, and that's a special value.
In short, the relevant quantity isn't 2 ^ 24 counted out in base-10 digits - it's a relative precision of 2 ^ (-24). That's the most important difference between float/double and decimal. decimal is what you imagine floats to be, i.e. a simple exponent-shifted, base-10 number. float and double aren't that.
Now, decimal digits versus binary digits is a tricky matter. You're mistaken in your understanding that the precision has anything to do with the 2 ^ 24 figure - that would only be true if you were talking about e.g. the decimal type, which actually stores decimal values as decimal point offsets of a normal (huge-ass) integer.
Just like 1 / 3 cannot be written in decimal (0.333333...), many simple decimal numbers can't be represented in a float precisely (0.2 is the typical example). decimal doesn't have a problem with that - it's just 2 shifted one digit to the right, easy peasy. For floats, however, you have to represent this value as a sum of negative powers of two - 0.5, 0.25, 0.125 ... The same would apply in the opposite direction if 2 wasn't a factor of 10 - every finite binary "decimal" can be represented with finite precision in decimal.
Now, in fact, float can easily represent a number with 24 decimal digits - it just has to be 2 ^ (-24), a number you're not going to encounter in your usual day job, and a weird-looking number in decimal. So where does the 7 (actually more like 7.22...) come from? Simple: take the base-10 logarithm of 2 ^ 24, i.e. 24 * log10(2) ≈ 7.22.
The fact that 0.2 seems to be represented "exactly" in a float is simply because every time you e.g. convert it to a string, you're rounding. So even though the number isn't exactly 0.2, it ends up that way when you convert it to a decimal string.
All this means that when you need decimal precision, you want to use decimal, as simple as that. This is not because it's a better base for calculations, it's simply because humans use it, and they will not be happy if your application gives different results from what they calculate on a piece of paper - especially when dealing with money. Accountants are very focused on having everything correct to the least significant digit.
Floats are used where it's not about decimal precision, but rather about generally having some sort of precision - this makes them well suited for physics calculations and similar, because you don't actually care about having the number come up the same in decimal - you're working with a given precision, and you're going to get that - 24 significant binary "decimals".
The implied leading 1 adds one more binary digit of precision, not decimal.
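A quick way to see where the 7 and the 15-16 figures come from (a sketch; digits ≈ bits * log10(2)):

using System;

Console.WriteLine(24 * Math.Log10(2)); // ~7.22  -> float:  23 stored bits + 1 implied bit
Console.WriteLine(53 * Math.Log10(2)); // ~15.95 -> double: 52 stored bits + 1 implied bit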
So, I've got this floating point number:
(float)-123456.668915
It's a number chosen at random because I'm doing some unit testing for a chunk of BCD code I'm writing. Whenever I go to compare the number above with a string ("-123456.668915" to be clear), I'm getting an issue with how C# rounded the number. It rounds it to -123456.7. This has been checked in NUnit and with straight console output.
Why is it rounding like this? According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
Thanks for the help!
According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
"6 digits after the decimal point" isn't the same as "6 digits of precision". The number of digits of precision is the number of significant digits which can be reliably held. Your number has 12 significant digits, so it's not at all surprising that it can't be represented exactly by float.
Note that the number it's (supposedly) rounding to, -123456.7, does have 7 significant digits. In fact, that's not the value of your float either. I strongly suspect the exact value is -123456.671875, as that's the closest float to -123456.668915. However, when you convert the exact value to a string representation, the result is only 7 digits, partly because beyond that point the digits aren't really meaningful anyway.
You should probably read my article about binary floating point in .NET for more details.
The float type has a precision of 24 significant bits (except for denormals), which is equivalent to 24 log10 2 ≈ 7.225 significant decimal digits. The number -123456.668915 has 12 significant digits, so it can't be represented accurately.
The actual binary value, rounded to 24 significant bits, is -11110001001000000.1010110. This is equivalent to the fraction -7901227/64 = -123456.671875. Rounding to 7 significant digits gives the -123456.7 you see.
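If you want to inspect the exact stored value yourself, one way (a sketch) is to widen the float to double, which is lossless, and then convert to decimal:

using System;

float f = -123456.668915f;             // the nearest float is -123456.671875
Console.WriteLine((decimal)(double)f); // -123456.671875 - the exact stored value
Console.WriteLine(f.ToString("F1"));   // -123456.7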
In the lunch break we started debating about the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 makes no guarantee about this; it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated by the C# reference, the precision is 15 to 16 significant digits in total; how many of them fall before or after the decimal point depends on the value represented.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think about double value as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
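A sketch of that effect, comparing a value near 2^52 with a small one:

using System;

double big = 4503599627370496.0;           // 2^52: neighbouring doubles are 1.0 apart here
Console.WriteLine(big + 0.4 == big);       // True  - the fractional part is simply lost
Console.WriteLine(1000.0 + 0.4 == 1000.0); // False - plenty of fractional precision at this size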
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has one digit before the (binary) point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent; numbers whose exponent exceeds the number of digits in the fractional part of the significand have no fractional part at all.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision.
My question is: when should I use a double and when should I use a decimal type?
Which type is suitable for money computations (i.e. amounts greater than $100 million)?
For money, always decimal. It's why it was created.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
My question is: when should I use a double and when should I use a decimal type?
decimal for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base 10 representations - basically money.
double for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.
Which type is suitable for money computations?
decimal, decimal, decimal
Accept no substitutes.
The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all and its overall number of digits is smaller since it is 64-bit wide vs. 128-bit for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
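The 0.1 point is easy to demonstrate (a minimal sketch):

using System;

Console.WriteLine(0.1 + 0.2 == 0.3);    // False - binary doubles cannot hold these values exactly
Console.WriteLine(0.1m + 0.2m == 0.3m); // True  - decimal stores base-10 fractions exactly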
According to Characteristics of the floating-point types:
.NET Type         C# Keyword    Precision
System.Single     float         ~6-9 digits
System.Double     double        ~15-17 digits
System.Decimal    decimal       28-29 digits
The way I've been stung by using the wrong type (a good few years ago) is with large amounts:
£520,532.52 - 8 digits
£1,323,523.12 - 9 digits
With pence included, you run out of float precision at around the 1 million mark.
A 15 digit monetary value:
£1,234,567,890,123.45
You run out at around 9 trillion with a double. But with division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles causes issues:
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
When should I use double instead of decimal? has some similar and more in depth answers.
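A quick sketch of the pence problem at that sort of magnitude (1323523.12 is taken from the amounts above; the nearest float is actually 1323523.125):

using System;

float balance = 1323523.12f;
Console.WriteLine(balance + 0.01f == balance); // True - you can't even add a penny at this magnitude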
Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.
Decimal is for exact values. Double is for approximate values.
USD: $12,345.67 (Decimal)
CAD: $13,617.27 (Decimal)
Exchange Rate: 1.102932 (Double)
For money: decimal. It costs a little more memory, but doesn't have rounding troubles like double sometimes has.
Definitely use integer types for your money computations.
This cannot be emphasized enough since at first glance it might seem that a floating point type is adequate.
Here's an example in Python code:
>>> amount = float(100.00) # one hundred dollars
>>> print amount
100.0
>>> new_amount = amount + 1
>>> print new_amount
101.0
>>> print new_amount - amount
1.0
Looks pretty normal.
Now try this again with 10^20 Zimbabwe dollars:
>>> amount = float(1e20)
>>> print amount
1e+20
>>> new_amount = amount + 1
>>> print new_amount
1e+20
>>> print new_amount-amount
0.0
As you can see, the dollar disappeared.
If you use the integer type, it works fine:
>>> amount = int(1e20)
>>> print amount
100000000000000000000
>>> new_amount = amount + 1
>>> print new_amount
100000000000000000001
>>> print new_amount - amount
1
I think that the main difference, besides bit width, is that decimal uses exponent base 10 while double uses base 2.
http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html