I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision.
My question is: when should I use a double and when should I use a decimal type?
Which type is suitable for money computations? (e.g. greater than $100 million)
For money, always decimal. It's why it was created.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
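As a minimal sketch of the "numbers must add up" point (my own example, not part of the answer above): ten 10-cent entries should total exactly one dollar.

using System;

class AddUpDemo
{
    static void Main()
    {
        double dSum = 0.0;
        decimal mSum = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            dSum += 0.1;   // 0.1 has no exact binary representation, so each step drifts
            mSum += 0.1m;  // 0.1m is an exact base-10 value
        }
        Console.WriteLine(dSum == 1.0);   // False - the running total is just under 1
        Console.WriteLine(mSum == 1.0m);  // True  - the decimal total is exactly 1.0
    }
}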
My question is: when should I use a double and when should I use a decimal type?
decimal for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base 10 representations - basically money.
double for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.
Which type is suitable for money computations?
decimal, decimal, decimal
Accept no substitutes.
The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all and its overall number of digits is smaller since it is 64-bit wide vs. 128-bit for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
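On the rounding-mode point, Math.Round on decimal lets you pick the midpoint behaviour explicitly; banker's rounding is the default, and away-from-zero is often what financial rules require (a small sketch of my own, not taken from the answer):

using System;

class RoundingModeDemo
{
    static void Main()
    {
        decimal price = 2.345m;  // an exact midpoint between 2.34 and 2.35
        Console.WriteLine(Math.Round(price, 2));                                // 2.34 (banker's rounding, the default)
        Console.WriteLine(Math.Round(price, 2, MidpointRounding.AwayFromZero)); // 2.35
    }
}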
According to Characteristics of the floating-point types:
.NET Type        C# Keyword   Precision
System.Single    float        ~6-9 digits
System.Double    double       ~15-17 digits
System.Decimal   decimal      28-29 digits
The way I've been stung by using the wrong type (a good few years ago) is with large amounts:
£520,532.52 - 8 digits
£1,323,523.12 - 9 digits
With a float (about 7 significant digits) you run out of penny accuracy around the million mark.
A 15 digit monetary value:
£1,234,567,890,123.45
That's about 9 trillion with a double. But with division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles causes issues:
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
When should I use double instead of decimal? has some similar and more in depth answers.
Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.
Decimal is for exact values. Double is for approximate values.
USD: $12,345.67 (Decimal)
CAD: $13,617.27 (Decimal)
Exchange Rate: 1.102932 (Double)
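A hedged sketch of how that split might look in code (the figures are the ones above; the exact converted amount depends on the rate and on how you round):

using System;

class ConversionDemo
{
    static void Main()
    {
        decimal usd = 12345.67m;   // exact monetary amount
        double rate = 1.102932;    // approximate market rate

        // decimal and double do not mix implicitly; cast the rate explicitly
        // and round the converted amount back to cents.
        decimal cad = Math.Round(usd * (decimal)rate, 2);
        Console.WriteLine(cad);    // the CAD amount, rounded to cents
    }
}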
For money: decimal. It costs a little more memory, but doesn't have rounding troubles like double sometimes has.
Definitely use integer types for your money computations.
This cannot be emphasized enough since at first glance it might seem that a floating point type is adequate.
Here is an example in Python code:
>>> amount = float(100.00) # one hundred dollars
>>> print amount
100.0
>>> new_amount = amount + 1
>>> print new_amount
101.0
>>> print new_amount - amount
1.0
looks pretty normal.
Now try this again with 10^20 Zimbabwe dollars:
>>> amount = float(1e20)
>>> print amount
1e+20
>>> new_amount = amount + 1
>>> print new_amount
1e+20
>>> print new_amount-amount
0.0
As you can see, the dollar disappeared.
If you use the integer type, it works fine:
>>> amount = int(1e20)
>>> print amount
100000000000000000000
>>> new_amount = amount + 1
>>> print new_amount
100000000000000000001
>>> print new_amount - amount
1
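Roughly the same experiment in C# (my own sketch, not part of the answer above; long cannot hold 10^20, so BigInteger stands in for the integer type):

using System;
using System.Numerics;   // BigInteger

class BigAmountDemo
{
    static void Main()
    {
        double d = 1e20;                        // 10^20 dollars as a double
        Console.WriteLine(d + 1 == d);          // True  - the added dollar vanishes

        BigInteger i = BigInteger.Pow(10, 20);  // the same amount as an integer
        Console.WriteLine(i + 1 - i);           // 1     - nothing is lost
    }
}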
I think that the main difference besides bit width is that decimal uses an exponent of base 10 while double uses base 2.
http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html
I use the decimal type for highly precise (monetary) calculations.
But I came across this simple division today:
1 / (1 / 37) which should result in 37 again
http://www.wolframalpha.com/input/?i=1%2F+%281%2F37%29
But C# gives me:
37.000000000000000000000000037M
I tried both these:
1m/(1m/37m);
and
Decimal.Divide(1, Decimal.Divide(1, 37))
but both yield the same result. How can this behaviour be explained?
Decimal stores the value as decimal floating point with only limited precision. The result of 1 / 37 is not precisely stored, as it's stored as 0.027027027027027027027027027M. The true number has the group 027 repeating indefinitely in its decimal representation. For that reason, you cannot get a precise decimal representation for every possible number.
If you use Double in the same calculation, the end result is correct in this case (but it does not mean it will always be better).
A good answer on that topic is here: Difference between decimal, float and double in .NET?
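A small sketch reproducing both results:

using System;

class DivisionDemo
{
    static void Main()
    {
        decimal m = 1m / (1m / 37m);
        Console.WriteLine(m);          // 37.000000000000000000000000037

        double d = 1.0 / (1.0 / 37.0);
        Console.WriteLine(d);          // 37 - the double rounding errors happen to cancel here
    }
}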
Decimal data type has an accuracy of 28-29 significant digits.
So what you have to understand is that even with 28-29 significant digits you are still not exact.
So when you compute a decimal value for (1/37), note that at this stage you only get an accuracy of 28-29 digits. For example, 1/37 is 0.02 if you keep 2 decimal places and 0.027 if you keep 3. Now divide 1 by these truncated values: you get 50 in the first case and about 37.04 in the second. Keeping the 28-29 digits that decimal offers takes you to an accuracy of 37.000000000000000000000000037. To get an exact 37 you would simply need more significant digits than decimal offers.
Always do the computations with the maximum number of significant digits and round off only the final answer with Math.Round to get the desired result.
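For example (the number of digits to keep is arbitrary here):

using System;

class RoundOffDemo
{
    static void Main()
    {
        decimal raw = 1m / (1m / 37m);           // 37.000000000000000000000000037
        decimal rounded = Math.Round(raw, 10);   // round only the final answer
        Console.WriteLine(rounded == 37m);       // True
    }
}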
I remember reading issues about certain math operations and the type double, but I forget when they would occur, or how I need to deal with them.
A "Bitcoin" is a float that has 8 decimal places. I'm assuming that I use they type double with it, and not any other kind (decimal, etc). Is this correct?
What other issues should I consider as I write, debug, and test an application that uses 8 decimal points?
If you are doing anything with money you should be using decimal. You will be getting accuracy issues well before 8 decimal places, depending on the size of the number.
As there is a fixed amount of space (number of significant figures) float can represent numbers in the range -1 to +1 more accurately than it can numbers in the range 9,000 to 10,000 (say).
Float only has about 7 digits of precision; this means it can't represent numbers down to 8 decimal places.
Double has 15-16 digits of precision so is more accurate but still not accurate enough for monetary calculations - particularly with large values.
If they call it a float then it's misleading. They probably mean "floating point type", of which float is only one.
If you are worried about decimal places and accuracy, in particular when dealing with currencies, you should be using decimal not float or double.
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Why is there a bias in floating point ops? Any specific reason?
Output:
160
139
static void Main()
{
    float x = (float) 1.6;
    int y = (int)(x * 100);
    float a = (float) 1.4;
    int b = (int)(a * 100);
    Console.WriteLine(y);
    Console.WriteLine(b);
    Console.ReadKey();
}
Any rational number whose denominator (in lowest terms) is not a power of 2 will lead to an infinite number of digits when represented in binary. Here you have 8/5 and 7/5. Therefore there is no exact binary representation as a floating-point number (unless you have infinite memory).
The exact binary representation of 1.6 is 110011001100110011001100110011001100...
The exact binary representation of 1.4 is 101100110011001100110011001100110011...
Both values have an infinite number of digits (1100 is repeated endlessly).
float values have a precision of 24 bits. So the binary representation of any value will be rounded to 24 bits. If you round the given values to 24 bits you get:
1.6: 110011001100110011001101 (decimal 13421773) - rounded up
1.4: 101100110011001100110011 (decimal 11744051) - rounded down
Both values have an exponent of 0 (the first bit is 2^0 = 1, the second is 2^-1 = 0.5 etc.).
Since the first bit in a 24 bit value is 2^23 you can calculate the exact decimal values by dividing the 24 bit values (13421773 and 11744051) by two 23 times.
The values are: 1.60000002384185791015625 and 1.39999997615814208984375.
When using floating-point types you always have to consider that their precision is finite. Values that can be written exact as decimal values might be rounded up or down when represented as binaries. Casting to int does not respect that because it truncates the given values. You should always use something like Math.Round.
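A minimal sketch of that fix, using the 1.4 value from the question (the multiplication is done in double so the truncated value is visible):

using System;

class TruncationDemo
{
    static void Main()
    {
        float a = 1.4f;                             // actually stored as 1.39999997615814208984375
        double scaled = a * 100.0;                  // 139.999997615814...
        Console.WriteLine((int)scaled);             // 139 - the cast truncates
        Console.WriteLine((int)Math.Round(scaled)); // 140 - rounding gives the intended value
    }
}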
If you really need an exact representation of rational numbers you need a completely different approach. Since rational numbers are fractions you can use integers to represent them. Here is an example of how you can achieve that.
However, you cannot write Rational x = (Rational)1.6 then. You have to write something like Rational x = new Rational(8, 5) (or new Rational(16, 10), etc.).
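The example the answer refers to is not reproduced here, but a minimal, illustrative Rational type might look something like this (my own sketch, with names chosen for the illustration only):

using System;

// Fractions are stored as a normalized numerator and denominator,
// so values like 8/5 (= 1.6) are represented exactly.
public readonly struct Rational
{
    public long Numerator { get; }
    public long Denominator { get; }

    public Rational(long numerator, long denominator)
    {
        if (denominator == 0) throw new ArgumentException("Denominator must not be zero.");
        if (denominator < 0) { numerator = -numerator; denominator = -denominator; }
        long g = Gcd(Math.Abs(numerator), denominator);
        Numerator = numerator / g;
        Denominator = denominator / g;
    }

    static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    public static Rational operator +(Rational x, Rational y) =>
        new Rational(x.Numerator * y.Denominator + y.Numerator * x.Denominator,
                     x.Denominator * y.Denominator);

    public static Rational operator *(Rational x, Rational y) =>
        new Rational(x.Numerator * y.Numerator, x.Denominator * y.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

class RationalDemo
{
    static void Main()
    {
        var x = new Rational(8, 5);                 // exactly 1.6
        var y = new Rational(16, 10);               // normalized to the same value
        Console.WriteLine(x);                       // 8/5
        Console.WriteLine(y);                       // 8/5
        Console.WriteLine(x * new Rational(5, 8));  // 1/1 - no rounding anywhere
    }
}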
This is due to the fact that floating point arithmetic is not precise. When you set a to 1.4, internally it may not be exactly 1.4, only as close to it as machine precision allows. If it is fractionally less than 1.4, then multiplying by 100 and casting to integer will take only the integer portion, which in this case would be 139. You will find far more technically precise answers, but essentially this is what is happening.
In the case of your output for the 1.6 case, the floating point representation may actually be minutely larger than 1.6 and so when you multiply by 100, the total is slightly larger than 160 and so the integer cast gives you what you expect. The fact is that there is simply not enough precision available in a computer to store every real number exactly.
See this link for details of the conversion from floating point to integer types http://msdn.microsoft.com/en-us/library/aa691289%28v=vs.71%29.aspx - it has its own section.
The floating point types float (32 bit) and double (64 bit) have a limited precision and more over the value is represented as a binary value internally. Just as you cannot represent 1/7 precisely in a decimal system (~ 0.1428571428571428...), 1/10 cannot be represented precisely in a binary system.
You can however use the decimal type. It still has a limited (though high) precision, but the numbers are represented in a decimal way internally. Therefore a value like 1/10 is represented exactly as 0.1000000000000000000000000000 internally. 1/7 is still a problem for decimal. But at least you don't get a loss of precision by converting to binary and then back to decimal.
Consider using decimal.
In the lunch break we started debating about the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not make guarantees about this and it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated by the C# reference, the precision is from 15 to 16 digits (depending on the decimal values represented) before or after the decimal point.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
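A quick way to see that spacing (the 2^53 line is my addition):

using System;

class SpacingDemo
{
    static void Main()
    {
        double x = 4503599627370496.0;                     // 2^52: doubles are 1.0 apart here
        Console.WriteLine(x + 0.5 == x);                   // True  - 0.5 falls below the spacing
        Console.WriteLine(x + 1.0 == 4503599627370497.0);  // True  - the next representable double

        double y = 9007199254740992.0;                     // 2^53: doubles are 2.0 apart here
        Console.WriteLine(y + 1.0 == y);                   // True  - even whole units get lost
    }
}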
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has exactly one bit before the binary point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent; numbers whose exponent exceeds the number of bits in the fractional part of the significand do not have a fractional part themselves.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if price contains 4 digits after "dot", like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes; see the sketch after this list);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
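A small sketch of the range point from the list above (decimal.MaxValue is roughly 7.9 * 10^28, while double reaches about 1.7 * 10^308):

using System;

class RangeDemo
{
    static void Main()
    {
        double big = 1e300;
        Console.WriteLine(big * 10);    // 1E+301 - still comfortably in range

        decimal m = decimal.MaxValue;   // about 7.9e28
        try
        {
            m = m * 10;                 // anything larger overflows
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal overflow");
        }
    }
}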
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built for representing base-10 values (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary representation (base 2) is periodic, so you lose precision; decimal works in base 10 (it stores a scaled integer), so you don't lose significant digits unexpectedly. Decimals are unfortunately dramatically slower than doubles. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation, preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will not actually lose any precision.
However, decimals are much better suited for representing monetary values, where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used in actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number in an int as a count of fractional parts. For example, if the price always has 2 decimals you would store the amount in cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you would sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
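A minimal sketch of the cents approach described above (multiplying by an integer quantity keeps the unit in cents, so no conversion is needed until display):

using System;

class FixedPointDemo
{
    static void Main()
    {
        long priceCents = 1245;      // $12.45 stored as a whole number of cents
        long quantity = 3;
        long totalCents = priceCents * quantity;

        Console.WriteLine(totalCents);                    // 3735
        Console.WriteLine($"${totalCents / 100m:0.00}");  // $37.35 - convert only for display
    }
}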