I'm using the decimal data type throughout my project in the belief that it would give me the most accurate results. However, I've come across a situation where rounding errors are creeping in and it appears I'd be better off using doubles.
I have this calculation:
decimal tempDecimal = 15M / 78M;
decimal resultDecimal = tempDecimal * 13M;
Here resultDecimal is 2.4999999999999999 when the correct answer for 13*15/78 is 2.5. It seems this is because tempDecimal (the result of 15/78) is a recurring decimal value.
I am subsequently rounding this result to zero decimal places (away from zero), which I was expecting to give 3 in this case but actually gives 2.
If I use doubles instead:
double tempDouble = 15D / 78D;
double resultDouble = tempDouble * 13D;
Then I get 2.5 in resultDouble which is the answer I'm looking for.
From this example it feels like I'm better off using doubles or floats even though they are lower precision. I'm assuming I get the incorrect result of 2.4999999999999999 simply because a decimal can store the result to that many decimal places, whereas the double rounds it off.
Should I use doubles instead?
EDIT: This calculation is being used in financial software to decide how many contracts are allocated to different portfolios so deciding between 2 or 3 is important. I am more concerned with the correct calculation than with speed.
Strange thing is, if you write it all on one line it results in 2.5.
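Presumably that refers to an ordering where the multiplication happens before the division, so no repeating intermediate value is ever stored. A minimal sketch (my own rearrangement, not the original poster's exact code):
decimal oneLine = 13M * 15M / 78M;  // 195 / 78 = 2.5 exactly; no repeating intermediate result
Console.WriteLine(oneLine);                                                // 2.5
Console.WriteLine(Math.Round(oneLine, 0, MidpointRounding.AwayFromZero));  // 3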
If precision is crucial (financial calculations) you should definitely use decimal. You can print a rounded decimal using Math.Round(myDecimalValue, signesAfterDecimalPoint) or String.Format("{0:0.00}", myDecimalValue), but do the calculations with the exact number. Otherwise double will be just fine.
Let me start right off by saying this isn't a question about decimal precision tolerance comparison! This is more of an issue with the inherent representation of large doubles: beyond a certain range the computer starts losing significant precision, and not just in the fractional part. E.g.
double d1 = 1e20+8000;
double d2 = 1e20;
Debug.WriteLine(d1 == d2);
Now this yields true, which for me is very unacceptable. Converting to decimal at this point in the project is out of the question. Is there a way to mitigate this, for example by computing that at around 1e20 the computer represents numbers within roughly ±8000 of each other as the same value? The tolerance seems to be about 8000 for this exponent; going above it triggers a correct comparison, but I need exact comparison up to the decimal point; beyond that I don't care for large numbers.
Now this yields true which for me is very unacceptable.
Then you should use a data type that does what you want. You should never use floating point precision numbers when you expect accuracy. decimal is what you should use.
Changing data types is your only option that works in the long term. If you don't want to go there, there is no alternative that would not need a significant amount of work checking the calculations you make.
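As an aside, the ~8000 tolerance observed in the question is just the spacing between adjacent double values at that magnitude. A minimal sketch for inspecting it (assumes .NET Core 3.0 or later for Math.BitIncrement):
double d = 1e20;
double spacing = Math.BitIncrement(d) - d;  // distance to the next representable double
Debug.WriteLine(spacing);                   // 16384 at this magnitude
Debug.WriteLine(1e20 + 8000 == 1e20);       // True: 8000 is below half the spacing, so it rounds back to 1e20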
I support a financial .net application. There is a lot of advice to use the decimal datatype for financial stuff.
Now I am stuck with this:
decimal price = 1.0m/12.0m;
decimal quantity = 2637.18m;
decimal result = price * quantity; //results in 219.76499999999999999999999991
The problem is that the correct value to charge our customer is 219.77 (round function, MidpointRounding.AwayFromZero) and not 219.76.
If I change everything to double, it seems to work:
double price = 1.0/12.0;
double quantity = 2637.18;
double result = price * quantity; //results in 219.765
Shall I change everything to double? Will there be other problems with fractions?
I think this question is different from "Difference between decimal, float and double in .NET?" because that question does not really explain why the result with the more precise decimal datatype is less accurate (in the sample above) than the result with the double datatype, which uses fewer bytes.
The reason decimal is recommended is that all numbers that can be represented as non-repeating decimals can be accurately represented in a decimal type. Units of money in the real world are always non-repeating decimals. Your problem, as others have said, is that your price is, for some reason, not representable as a non-repeating decimal. That is, it is 0.083333333... Using a double doesn't actually help in terms of accuracy - a double cannot accurately represent 1/12 either. In this case the lack of accuracy is not causing a problem, but in others it might.
Also, more importantly, the choice to use a double will mean there are many more numbers that you can't represent completely accurately, for example 0.01, 0.02, 0.03... Yeah, quite a lot of the numbers you are likely to care about can't be accurately represented as a double.
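A quick illustration (this snippet is mine, not the original answerer's):
double sumDouble = 0.0;
for (int i = 0; i < 100; i++) sumDouble += 0.01;   // one hundred cents
Console.WriteLine(sumDouble == 1.0);               // False: the per-step representation errors accumulate

decimal sumDecimal = 0.00m;
for (int i = 0; i < 100; i++) sumDecimal += 0.01m;
Console.WriteLine(sumDecimal == 1.00m);            // True: 0.01 is exact in decimal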
In this case the question of where the price comes from is really the important one. Wherever you are storing that price almost certainly isn't storing 1/12 exactly. Either you are storing an approximation already, or that price is actually the result of a calculation (or you are using a very unusual number storage system that stores rational numbers, but this seems wildly unlikely).
What you really want is a price that can be represented as a double. If that is what you have but then you modify it (e.g. by dividing by 12 to get a monthly cost from an annual one) then you need to do that division as late as possible. And quite possibly you also need to calculate the monthly cost as a division of the outstanding balance. What I mean by this last part is that if you are paying $10 a year in monthly instalments you might charge $0.83 for the first month. Then the second month you charge ($10-0.83)/11, which would be 0.83 again. On the fifth month you charge (10-0.83*4)/8, which now is 0.84 (once rounded). Then the next month it's (10-0.83*4-0.84)/7 and so on. This way you guarantee that the total charge is correct and don't have to worry about compounded errors.
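A minimal sketch of that outstanding-balance approach (the figures and rounding policy are illustrative, not taken from the original post):
decimal total = 10m;      // annual amount
decimal charged = 0m;
for (int monthsLeft = 12; monthsLeft >= 1; monthsLeft--)
{
    // Divide what is still owed over the months remaining, rounding only at the point of charging.
    decimal instalment = Math.Round((total - charged) / monthsLeft, 2, MidpointRounding.AwayFromZero);
    charged += instalment;
}
// charged ends up at exactly 10.00; the rounding error never compounds.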
At the end of the day you are the only one to judge whether you can re-architect your system to remove all rounding errors like this or whether you have to mitigate them in some way as I've suggested. Your best bet though is to read up on everything you can about floating point numbers, both decimal and binary, so that you fully understand the implications of choosing one over the other.
Usually, in financial calculations, multiplications and divisions are expected to be rounded to a certain number of decimal places and in a certain way. (Most currency systems use only base-10 amounts of money; in these systems, non-base-10 amounts of money are rare, if they ever occur.) Dividing a price by 12 is not, by itself, expected to result in a terminating base-10 number; the business logic will dictate how that price is rounded, including the number of decimal places the result will have. Depending on the business logic, a result such as 0.083333333333333333 might not be the appropriate one.
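In other words, the rounding is an explicit business decision rather than a property of the data type. A hedged one-liner (two decimal places and away-from-zero rounding are assumptions here, not a rule):
decimal annualPrice = 100m;
decimal monthlyPrice = Math.Round(annualPrice / 12m, 2, MidpointRounding.AwayFromZero);  // 8.33 under this policy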
I'm attempting to truncate a series of double-precision values in C#. The following value fails no matter what rounding method I use. What is wrong with this value that causes both of these methods to fail? Why does even Math.Round fail to correctly truncate the number? What method can be used instead to correctly truncate such values?
The value :
double value = 0.61740451388888251;
Method 1:
return Math.Round(value, digits);
Method 2:
double multiplier = Math.Pow(10, decimals);
return Math.Round(value * multiplier) / multiplier;
Fails even in VS watch window!
Double is a binary floating point type. It is represented in the binary system (like 11010.00110). When a decimal value is stored in a double it is only an approximation, because not all decimal numbers have an exact representation in the binary system. Try for example this operation:
double d = 3.65d + 0.05d;
It will not result in 3.7 but in 3.6999999999999997. It is because the variable contains a closest available double.
The same happens in your case. Your variable contains closest available double.
For precise operations, double/float is not the best choice.
Use double/float when you need fast performance or want to operate on a larger range of numbers, but where high precision is not required. For instance, it is a perfect type for calculations in physics.
For precise decimal operations use, well, decimal.
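For comparison, the same sum carried out in decimal stays exact:
decimal dec = 3.65m + 0.05m;
Console.WriteLine(dec);  // 3.70, because both operands and the result are exact in base 10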
Here is an article about float/decimal: http://csharpindepth.com/Articles/General/FloatingPoint.aspx
If you need a more exact representation of the number you might have to use the decimal type, which has more precision but a smaller range (it's usually used for financial calculations).
More info on when to use each here: https://stackoverflow.com/a/618596/1373170
According to this online tool which gives the binary representation of doubles, the two closest double values to 0.62 are:
6.19999999999999995559107901499E-1 or 0x3FE3D70A3D70A3D7
6.20000000000000106581410364015E-1 or 0x3FE3D70A3D70A3D8
I'm not sure why neither of these agree with your value exactly, but like the others said, it is likely a floating point representation issue.
I think you are running up against the binary limit of a double-precision float (64 bits). From http://en.wikipedia.org/wiki/Double-precision_floating-point_format, a double only gives between 15-17 significant digits.
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if the price contains 4 digits after the dot, like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for all financial operations, even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
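A small sketch of the difference for a price like the one in the question (the output comments are indicative):
decimal priceDecimal = 2.1234m;
double priceDouble = 2.1234;
Console.WriteLine(priceDecimal);                 // 2.1234, stored exactly
Console.WriteLine(priceDouble.ToString("G17"));  // prints the nearest double, which differs from 2.1234 in the trailing digits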
Of course, if having or not an exact representation in base-10 is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint.
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built to represent base-10 values (i.e. prices) exactly.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary representation (base 2) is periodic, so you lose precision. The decimal type is a base-10 representation (an integer scaled by a power of ten), so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database, without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used in actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
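A hedged sketch of that kind of round trip (the rate, term and rounding policy are made up for illustration):
decimal principal = 1000.00m;
double growth = Math.Pow(1.05, 10);                               // the heavy math is done in double
decimal futureValue = Math.Round(principal * (decimal)growth, 2); // convert back and round per business rules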
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
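For example:
decimal a = 1.230m;
decimal b = 1.23m;
Console.WriteLine(a == b);  // True, the values are equal
Console.WriteLine(a);       // 1.230, the trailing zero (the scale) is preserved
Console.WriteLine(b);       // 1.23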
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still being very fast.
The simplest way to use fixed point is to store the number as an integer count of its smallest fraction. For example, if the price always has 2 decimals you would be saving the amount of cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would be storing ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you will sometimes need a conversion, for example if you are multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1: the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
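A minimal fixed-point sketch under the two-decimal assumption described above (the names are mine, purely illustrative):
long priceInCents = 1245;                        // $12.45
long lineTotalInCents = priceInCents * 3;        // cents * count stays in cents: 3735, i.e. $37.35

long aInCents = 10;                              // $0.10
long bInCents = 10;                              // $0.10
long productInCents = aInCents * bInCents / 100; // multiplying changes the unit, so rescale from hundredths of a cent back to cents: 1, i.e. $0.01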
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable. In fact, there are an infinite number of real-valued numbers, and only a finite number of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation – double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
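A hedged sketch of that epsilon idea (the epsilon value is an assumption and has to be chosen to suit your own data):
static double RoundWithEpsilon(double value, int digits, double eps = 1e-9)
{
    // Nudge the value slightly away from zero so that representation error like
    // 105.824999999999999 does not flip the rounding direction.
    return Math.Round(value + Math.Sign(value) * eps, digits, MidpointRounding.AwayFromZero);
}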
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of a 'double' to 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
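With the value from the question, for example, Round(Value, 2) now comes back as 105.83 rather than 105.82, since the double-to-decimal cast discards the tiny binary error before Math.Round is applied.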