Where is the floating point problem in this C# financial calculation?

I receive a FIX message for a trade Allocation, where Price is given in cents (ZAR / 100) but commission is given in Rands. These values are represented by the constants below. When I run this calculation, commPerc1 shows a value of 0.099999999999999978 and commPerc2 shows 0.1. These values look to differ by a factor of 10, yet when I check by calculating back to Rands, commRands1 and commRands2 show very similar values of 336.4831554 and 336.48315540000004 respectively.
private const double Price = 5708.91;
private const double Qty = 5894;
private const double AbsComm = 336.4831554;

static void Main()
{
    double commCents = AbsComm * 100;
    double commPerc1 = commCents / (Qty * Price) * 100;
    double commRands1 = (Qty * Price) * (commPerc1 / 100) / 100;
    double commPerc2 = (AbsComm / (Qty * (Price / 100))) * 100;
    double commRands2 = (Qty * Price) * (commPerc2 / 100) / 100;
}
PLEASE NOTE:
I am dealing with legacy code here where a conversion to decimal would involve several changes and QA, so right now I have to accept double.

Don't use double for financial calculations; use decimal instead. Floating point numbers are OK for physical, measured values, where the value is never exact anyway. But for financial calculations you're working with exact values, and you can't afford errors due to floating point representations. That's what decimal is made for:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors.
(from MSDN)
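Since the question notes that decimal isn't an option right now, this is for comparison only: a minimal sketch of what the same calculation would look like with decimal literals (note the m suffixes; the class name is just for illustration). With these constants the commission works out to exactly 0.1 percent:

using System;

class DecimalAllocationSketch
{
    // Constants from the question, as decimal literals.
    private const decimal Price = 5708.91m;       // price in cents (ZAR / 100)
    private const decimal Qty = 5894m;
    private const decimal AbsComm = 336.4831554m; // commission in Rands

    static void Main()
    {
        decimal commCents = AbsComm * 100;                  // 33648.31554
        decimal commPerc = commCents / (Qty * Price) * 100; // exactly 0.1 (possibly with trailing zeros), no binary rounding
        decimal commRands = (Qty * Price) * (commPerc / 100) / 100;
        Console.WriteLine($"{commPerc} -> {commRands}");    // round-trips to 336.4831554, trailing zeros aside
    }
}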

The floating point problem is everywhere, because all of your values are of type double, which is a double-precision floating-point type, not a base-10 numeric type. (Hence, all calculations being performed are floating-point calculations.)
You should declare your variables decimal instead. That type is purposed precisely for financial calculations.

When I run this calculation, commPerc1 shows a value of 0.099999999999999978, and commPerc2 shows 0.1. These values look to differ by x10
No they don't. There is only a tiny (but real) rounding difference between them. As others have already noted, you should never use floating point for calculations that demand absolute precision. With floating point you always get rounding errors like this, and financial clients, for example, don't like missing or extra pennies/cents in their books.

That's due to the way floating-point numbers are stored in memory. You could use decimal instead of double or float when dealing with financial calculations.

Financial calculations should be carried out using the Decimal datatype. Doubles store approximations of the specified number, since many decimal values can't be represented exactly within the available bits.

0.099999999999999978 and 0.1 don't differ by a factor of 10, they are almost the same. Just like 0.999 is almost 1.
But you really should use Decimal instead of Double for financial calculations. Double can't exactly represent numbers like 0.1, whereas Decimal can. (On the other hand, neither can represent 1/3 exactly.)
Additionally, Decimal has a larger mantissa and throws exceptions when overflows occur.
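To see that overflow difference concretely, here is a small sketch (the class name is just for illustration): double overflows silently to Infinity, while decimal throws an OverflowException:

using System;

class OverflowSketch
{
    static void Main()
    {
        double d = double.MaxValue;
        Console.WriteLine(d * 2); // Infinity (shown as the infinity symbol on recent .NET): double saturates silently

        try
        {
            decimal m = decimal.MaxValue;
            m *= 2; // decimal refuses to overflow silently
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal overflow detected");
        }
    }
}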

Please use decimal for monetary calculations.


Why does the '+=' operator on double give some random numbers? [duplicate]

This question already has answers here:
Is floating point math broken?
I have a project that should calculate an equation, and it checks every number to see if it fits the equation. Each time it increases the number by 0.00001, but sometimes the next decimals come out wrong, for example 0.000019999999997.
I even tried a breakpoint at that line. When I am right on that line it is 1.00002, for example, but when I go to the next line it is 1.00002999999997.
I don't know why it is like that. I even tried smaller numbers like 0.001.
List<double> anwsers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;
while (i <= endingPoint)
{
    prei = i;
    i += 0.001;
}
Added a call to Math.Round to round the result to 3 decimal places before adding it to the answers list. This ensures that the values in the list never carry stray digits past the third decimal place.
List<double> answers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;
while (i <= endingPoint)
{
    prei = i;
    i += 0.001;
    double result = Math.Round(prei, 3);
    answers.Add(result);
}
Computers don't store exact floating point numbers. A float is usually 32 bits or 64 bits and cannot store numbers to arbitrary precision.
If you want dynamic precision floating point, use a number library like GMP.
There are some good videos on this: https://www.youtube.com/watch?v=2gIxbTn7GSc
But essentially it's because there aren't enough bits. If you store a number like 1/3, only so many decimal places fit, so what is actually stored is something like 0.3334. Which means when you do something like 1/3 + 1/3 it isn't going to equal 2/3 like one might expect; it would equal 0.6668.
To summarize: using the decimal type (https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=net-7.0) over double should fix your issues.
TL;DR: You should use a decimal data type instead of a float or double. You can declare your 0.001 as a decimal by adding an m after the value: 0.001m
The double data type you chose stores numbers as a binary fraction: a significand scaled by a power of two. It is great for storing numbers across a huge range of magnitudes in little memory, but it also means your number gets rounded to the closest value that such a fraction can represent. A decimal, on the other hand, stores the information in base 10, which more closely matches what you intuitively expect from decimal numbers.
More information about float values: https://floating-point-gui.de/
More information about numeric types declaration: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
The documentation also explains:
Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or Math.PI), binary fractions are unable to represent some fractional values. For example, 1/10, which is represented precisely by .1 as a decimal fraction, is represented by .001100110011 as a binary fraction, with the pattern "0011" repeating to infinity. In this case, the floating-point value provides an imprecise representation of the number that it represents. Performing additional mathematical operations on the original floating-point value often tends to increase its lack of precision.
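A minimal sketch of the decimal fix suggested above, with illustrative start and end values standing in for startingPoint and endingPoint: the 0.001m step is represented exactly, so the accumulator never drifts and no Math.Round is needed:

using System;
using System.Collections.Generic;

class DecimalLoopSketch
{
    static void Main()
    {
        var answers = new List<decimal>();
        decimal i = 1.0m;      // startingPoint (illustrative)
        while (i <= 1.01m)     // endingPoint (illustrative)
        {
            answers.Add(i);
            i += 0.001m;       // exact: 0.001 is representable in base 10
        }
        Console.WriteLine(string.Join(", ", answers)); // 1.0, 1.001, 1.002, ..., 1.010
    }
}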

Rounding issues with decimal

I'm using the decimal data type throughout my project in the belief that it would give me the most accurate results. However, I've come across a situation where rounding errors are creeping in and it appears I'd be better off using doubles.
I have this calculation:
decimal tempDecimal = 15M / 78M;
decimal resultDecimal = tempDecimal * 13M;
Here resultDecimal is 2.4999999999999999 when the correct answer for 13*15/78 is 2.5. It seems this is because tempDecimal (the result of 15/78) is a recurring decimal value.
I am subsequently rounding this result to zero decimal places (away from zero) which I was expecting to be 3 in this case but actually becomes 2.
If I use doubles instead:
double tempDouble = 15D / 78D;
double resultDouble = tempDouble * 13D;
Then I get 2.5 in resultDouble which is the answer I'm looking for.
From this example it feels like I'm better off using doubles or floats even though they have lower precision. I'm assuming I get the incorrect result of 2.4999999999999999 simply because a decimal can store a result to that many decimal places, whereas the double rounds it off.
Should I use doubles instead?
EDIT: This calculation is being used in financial software to decide how many contracts are allocated to different portfolios so deciding between 2 or 3 is important. I am more concerned with the correct calculation than with speed.
Strange thing is, if you write it all on one line it results in 2.5.
If precision is crucial (financial calculations), you should definitely use decimal. You can print a rounded decimal using Math.Round(myDecimalValue, digitsAfterDecimalPoint) or String.Format("{0:0.00}", myDecimalValue), but make the calculations with the exact number. Otherwise double will be just fine.
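One hedged workaround for this exact case, staying with decimal: reorder the arithmetic so the multiplication happens before the inexact division, and pass MidpointRounding.AwayFromZero to Math.Round (the default is banker's rounding, which turns 2.5 into 2):

using System;

class ReorderSketch
{
    static void Main()
    {
        decimal bad = 15M / 78M * 13M;   // division first: the recurring quotient pollutes the result, ~2.4999...
        decimal good = 15M * 13M / 78M;  // 195 / 78 = 2.5 exactly
        Console.WriteLine(bad);
        Console.WriteLine(good);
        Console.WriteLine(Math.Round(good, 0, MidpointRounding.AwayFromZero)); // 3
    }
}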

C# and the mischief of floats

While testing why my program is not working as intended, I tried typing the calculations that seem to be failing into the immediate window.
Math.Floor(1.0f)
1.0 - correct
However:
200f * 0.005f
1.0
Math.Floor(200f * 0.005f)
0.0 - incorrect
Furthermore:
(float)(200f * 0.005f)
1.0
Math.Floor((float)(200f * 0.005f))
0.0 - incorrect
Probably some float loss is occurring; 0.99963 ≠ 1.00127, for example.
I wouldn't mind storing less precise values, as long as it's done in a non-lossy way: for example a numeric type that stores values the way integers do, but to only three decimal places, if it could be made performant.
I think there is probably a better way of calculating (n * 0.005f) with regard to such errors.
edit:
TY, a solution:
Math.Floor(200m * 0.005m)
Also, as I understand it, this would work if I didn't mind changing the 1/200 into 1/256:
Math.Floor(200f * 0.00390625f)
The solution I'm using. It's the closest I can get in my program and seems to work ok:
float x = ...;
UInt16 n = 200;
decimal d = 1m / n;
... = Math.Floor((decimal)x * d)
Floats represent numbers as fractions with powers of two in the denominator. That is, you can exactly represent 1/2, or 3/4, or 19/256. Since .005 is 1/200, and 200 is not a power of two, instead what you get for 0.005f is the closest fraction that has a power of two on the bottom that can fit into a 32 bit float.
Decimals represent numbers as fractions with powers of ten in the denominator. Like floats, they introduce errors when you try to represent numbers that do not fit that pattern. 1m/333m for example, will give you the closest number to 1/333 that has a power of ten as the denominator and 29 or fewer significant digits. Since 0.005 is 5/1000, and that is a power of ten, 0.005m will give you an exact representation. The price you pay is that decimals are much larger and slower than floats.
You should always always always use decimals for financial calculations, never floats.
The problem is that 0.005f is actually 0.004999999888241291046142578125... so less than 0.005. That's the closest float value to 0.005. When you multiply that by 200, you end up with something less than 1.
If you use decimal instead - all the time, not converting from float - you should be fine in this particular scenario. So:
decimal x = 0.005m;
decimal y = 200m;
decimal z = x * y;
Console.WriteLine(z == 1m); // True
However, don't assume that this means decimal has "infinite precision". It's still a floating point type with limited precision - it's just a floating decimal point type, so 0.005 is exactly representable.
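You can verify the exact stored value quoted above by widening the float to double (that conversion is exact) and printing all the digits:

using System;

class InspectFloatSketch
{
    static void Main()
    {
        double stored = 0.005f; // float-to-double conversion preserves the stored value exactly
        Console.WriteLine(stored.ToString("G17")); // ~0.004999999888241291, just under 0.005
    }
}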
If you cannot tolerate any floating point precision issues, use decimal.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Ultimately even decimal has precision issues (it allows for 28-29 significant digits). If you are working within its supported range, roughly (-7.9 x 10^28 to 7.9 x 10^28) / 10^(0 to 28), you are quite unlikely to be impacted by them.
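A quick sketch of those decimal precision limits in action: a non-terminating quotient is cut off at 28-29 significant digits, so the error reappears, just much further out:

using System;

class DecimalLimitSketch
{
    static void Main()
    {
        decimal third = 1m / 3m;             // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999
        Console.WriteLine(third * 3m == 1m); // False
    }
}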

What data type should I use to represent money in C#?

In C#, what data type should I use to represent monetary amounts? Decimal? Float? Double? I want to take into consideration precision, rounding, etc.
Use System.Decimal:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
Neither System.Single (float) nor System.Double (double) is capable of representing high-precision decimal values without rounding errors.
Use decimal in C#, and the money type in the DB if you're using SQL.
In C#, the Decimal type is actually a struct with overloaded operators for all math and comparison operations, working in base 10, so it will have less significant rounding errors. A float (and double), on the other hand, is akin to scientific notation in binary. As a result, Decimal values are more accurate when you know the precision you need.
Run this to see the difference in the accuracy of the two:
using System;
using System.Collections.Generic;
using System.Text;

namespace FloatVsDecimal
{
    class Program
    {
        static void Main(string[] args)
        {
            Decimal _decimal = 1.0m;
            float _float = 1.0f;
            for (int _i = 0; _i < 5; _i++)
            {
                Console.WriteLine("float: {0}, decimal: {1}",
                                  _float.ToString("e10"),
                                  _decimal.ToString("e10"));
                _decimal += 0.1m;
                _float += 0.1f;
            }
            Console.ReadKey();
        }
    }
}
Decimal is the one you want.
Consider using the Money Type for the CLR. It is a custom value type (struct) that also supports currencies and handles rounding issues.
In C#, you should use Decimal to represent monetary amounts.
For something quick and dirty, any of the floating point primitive types will do.
The problem with float and double is that neither of them can represent 1/10 accurately, occasionally resulting in surprising trillionths of a cent. You've probably heard of the infamous 10¢ + 20¢. More realistically, try calculating a 6% sales tax on three items valued at $39.99 each pre-tax.
Also, float and double have values like negative infinity and NaN that are of no use whatsoever for representing money. So decimal, which can represent 1/10 precisely, would seem to be the best choice.
However, decimal doesn't carry any information about what currency we're dealing with. Does the amount $29.89, for example, equal €29.89? Is $29.89 > €29.89? How do I make sure these amounts are displayed with the correct currency symbols?
If these sorts of details matter for your program, then you should either use a third-party library or create your own CurrencyAmount class (or whatever you want to call it).
But if that sort of thing doesn't matter to the program, you can just use a floating point type. Or maybe even integers (e.g., my blackjack implementation in Java asks the player to enter a wager in whole dollars).
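A hedged sketch of the failure modes described above: summing tenths in double misses, while decimal holds them exactly; the sales-tax case then rounds cleanly to cents:

using System;

class MoneySketch
{
    static void Main()
    {
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary doubles cannot hold 1/10 exactly
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal holds tenths exactly

        // The sales-tax example from above: 6% on three items at $39.99 each.
        decimal subtotal = 3 * 39.99m;                 // 119.97
        decimal tax = Math.Round(subtotal * 0.06m, 2); // 7.1982 rounded to 7.20
        Console.WriteLine(subtotal + tax);             // 127.17
    }
}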

Double precision problems on .NET

I have a simple C# function:
public static double Floor(double value, double step)
{
    return Math.Floor(value / step) * step;
}
That calculates the largest number, lower than or equal to value, that is a multiple of step. But it lacks precision, as seen in the following tests:
[TestMethod()]
public void FloorTest()
{
    int decimals = 6;
    double value = 5F;
    double step = 2F;
    double expected = 4F;
    double actual = Class.Floor(value, step);
    Assert.AreEqual(expected, actual);

    value = -11.5F;
    step = 1.1F;
    expected = -12.1F;
    actual = Class.Floor(value, step);
    Assert.AreEqual(Math.Round(expected, decimals), Math.Round(actual, decimals));
    Assert.AreEqual(expected, actual);
}
The first and second asserts are OK, but the third fails, because the result is only equal up to the 6th decimal place. Why is that? Is there any way to correct this?
Update: If I debug the test I see that the values are equal up to the 8th decimal place instead of the 6th, maybe because Math.Round introduces some imprecision.
Note: In my test code I wrote the "F" suffix (explicit float constant) where I meant "D" (double), so if I change that I can have more precision.
I actually sort of wish they hadn't implemented the == operator for floats and doubles. It's almost always wrong to ask whether a double or a float is equal to any other value.
If you want precision, use System.Decimal. If you want speed, use System.Double (or System.Single). Floating point numbers are not "infinite precision" numbers, and therefore asserting equality must include a tolerance. As long as your numbers have a reasonable number of significant digits, this is OK.
If you're looking to do math on very large AND very small numbers, don't use float or double.
If you need infinite precision, don't use float or double.
If you are aggregating a very large number of values, don't use float or double (the errors will compound themselves).
If you need speed and size, use float or double.
See this answer (also by me) for a detailed analysis of how precision affects the outcome of your mathematical operations.
Floating point arithmetic on computers is not an exact science :).
If you want exact precision to a predefined number of decimals, use Decimal instead of double, or accept a minor tolerance interval.
If you omit all the F postfixes (i.e. -12.1 instead of -12.1F) you will get equality to a few more digits. Your constants (and especially the expected values) are currently floats because of the F. If you are doing that on purpose then please explain.
But for the rest I concur with the other answers: comparing double or float values for equality is just not reliable.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it.
Only use floating point if you want a machine's binary interpretation of the number system. It can't even represent 10 cents exactly.
Check the answers to this question: Is it safe to check floating point values for equality to 0?
Really, just check for "within tolerance of..."
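A minimal sketch of such a tolerance check; the epsilon value is an assumption and should be chosen for the scale of your data:

using System;

class ToleranceSketch
{
    // Compare two doubles within an absolute tolerance instead of using ==.
    static bool NearlyEqual(double a, double b, double epsilon = 1e-9) =>
        Math.Abs(a - b) <= epsilon;

    static void Main()
    {
        double actual = Math.Floor(-11.5 / 1.1) * 1.1; // the Floor(value, step) case from the question
        Console.WriteLine(actual == -12.1);            // False on typical doubles
        Console.WriteLine(NearlyEqual(actual, -12.1)); // True
    }
}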
Floats and doubles cannot accurately store all numbers. This is a limitation of the IEEE floating point system. In order to have faithful precision you need to use a more advanced math library.
If you don't need precision past a certain point, then perhaps decimal will work better for you. It has a higher precision than double.
For a similar issue, I ended up using the following implementation, which seems to succeed for most of my test cases (up to 5-digit precision):

public static double roundValue(double rawValue, double valueTick)
{
    if (valueTick <= 0.0) return 0.0;
    Decimal val = new Decimal(rawValue);
    Decimal step = new Decimal(valueTick);
    Decimal modulo = Decimal.Round(Decimal.Divide(val, step));
    return Decimal.ToDouble(Decimal.Multiply(modulo, step));
}
Sometimes the result is more precise than you would expect from strict IEEE 754 arithmetic.
That's because the hardware can use more bits for the computation.
See the C# specification and this article.
Java has the strictfp keyword and C++ compilers have switches. I miss that option in .NET.
