While investigating why my program is not working as intended, I tried typing the failing calculations into the Immediate window.
Math.Floor(1.0f)
1.0 - correct
However:
200f * 0.005f
1.0
Math.Floor(200f * 0.005f)
0.0 - incorrect
Furthermore:
(float)(200f * 0.005f)
1.0
Math.Floor((float)(200f * 0.005f))
0.0 - incorrect
Presumably some floating-point precision loss is occurring; for example, getting something like 0.99963 or 1.00127 instead of exactly 1.
I wouldn't mind storing less precise values, as long as it were done in a non-lossy way; for example, a numeric type that stores values the way integers do, but to only three decimal places, provided it could be made performant.
I suspect there is a better way of calculating (n * 0.005f) with regard to such errors.
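For what it's worth, a minimal sketch of the kind of integer-backed, three-decimal-place type I mean could look something like this (ThousandthsValue is a made-up name, not an existing .NET type):

using System;

public readonly struct ThousandthsValue
{
    private readonly long _thousandths; // e.g. 1.000 is stored as 1000

    private ThousandthsValue(long thousandths) => _thousandths = thousandths;

    public static ThousandthsValue FromDouble(double value) =>
        new ThousandthsValue((long)Math.Round(value * 1000.0)); // snap to exact thousandths

    public long Floor() =>
        _thousandths >= 0
            ? _thousandths / 1000
            : (_thousandths - 999) / 1000; // integer division truncates toward zero, so adjust negatives

    public override string ToString() => (_thousandths / 1000m).ToString("0.000");
}

With that, ThousandthsValue.FromDouble(200f * 0.005f).Floor() gives 1, because the value is snapped to exact thousandths before flooring.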
edit:
TY, a solution:
Math.Floor(200m * 0.005m)
Also, as I understand it, this would work if I didn't mind changing the 1/200 into 1/256:
Math.Floor(200f * 0.00390625f)
The solution I'm using. It's the closest I can get in my program and seems to work ok:
float x = ...;
UInt16 n = 200;
decimal d = 1m / n;
... = Math.Floor((decimal)x * d)
Floats represent numbers as fractions with powers of two in the denominator. That is, you can exactly represent 1/2, or 3/4, or 19/256. Since .005 is 1/200, and 200 is not a power of two, instead what you get for 0.005f is the closest fraction that has a power of two on the bottom that can fit into a 32 bit float.
Decimals represent numbers as fractions with powers of ten in the denominator. Like floats, they introduce errors when you try to represent numbers that do not fit that pattern. 1m/333m for example, will give you the closest number to 1/333 that has a power of ten as the denominator and 29 or fewer significant digits. Since 0.005 is 5/1000, and that is a power of ten, 0.005m will give you an exact representation. The price you pay is that decimals are much larger and slower than floats.
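To see the difference concretely, here is a small illustrative snippet (not part of the original question):

// 0.005f silently becomes the nearest binary fraction, so the product falls just short of 1.
Console.WriteLine(((double)0.005f).ToString("G17")); // roughly 0.004999999888241291, just below 0.005
Console.WriteLine(Math.Floor(200f * 0.005f));        // 0
// 0.005m is 5/1000, a power-of-ten denominator, so decimal stores it exactly.
Console.WriteLine(Math.Floor(200m * 0.005m));        // 1
// 1/333 has neither a power-of-two nor a power-of-ten denominator, so even decimal has to round.
Console.WriteLine(1m / 333m);                        // 0.003003003... rounded to decimal's precision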
You should always always always use decimals for financial calculations, never floats.
The problem is that 0.005f is actually 0.004999999888241291046142578125... so less than 0.005. That's the closest float value to 0.005. When you multiply that by 200, you end up with something less than 1.
If you use decimal instead - all the time, not converting from float - you should be fine in this particular scenario. So:
decimal x = 0.005m;
decimal y = 200m;
decimal z = x * y;
Console.WriteLine(z == 1m); // True
However, don't assume that this means decimal has "infinite precision". It's still a floating point type with limited precision - it's just a floating decimal point type, so 0.005 is exactly representable.
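For example (an illustrative snippet, not from the original answer):

decimal third = 1m / 3m;
Console.WriteLine(third);            // 0.3333333333333333333333333333
Console.WriteLine(third * 3m == 1m); // False - the rounding error in 1/3 never goes away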
If you cannot tolerate any floating point precision issues, use decimal.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Ultimately even decimal has precision issues (it allows for 28-29 significant digits). If you are working within its supported range ((-7.9 x 10^28 to 7.9 x 10^28) / 10^(0 to 28)), you are quite unlikely to be impacted by them.
Related
I was going through the documentation for "Floating-point numeric types (C# reference)" at MSDN, https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types.
It has a table, "Characteristics of the floating-point types," describing the approximate ranges for the different floating-point data types that C# deals with. What I do not understand is why both the MIN and MAX in the "Approximate range" column are positive and negative. To save a link click, here is the table:
| C# type/keyword | Approximate range | Precision | Size | .NET type |
| --- | --- | --- | --- | --- |
| float | ±1.5 x 10^−45 to ±3.4 x 10^38 | ~6-9 digits | 4 bytes | System.Single |
| double | ±5.0 x 10^−324 to ±1.7 x 10^308 | ~15-17 digits | 8 bytes | System.Double |
| decimal | ±1.0 x 10^−28 to ±7.9228 x 10^28 | 28-29 digits | 16 bytes | System.Decimal |
Why does the approximate range on both the MIN and MAX have a ±? Should it not be a - for the MIN, and + for the MAX, as it does for the Integer type here https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/integral-numeric-types? Maybe I misunderstood something about floating points.
Thank you.
Perhaps it could be made clearer, but the range expresses both the smallest and the largest absolute value that the given data type can represent. To take an example, if we consider double, it is impossible to represent 3e-324; it would become approximately 5.0e-324, which is double.Epsilon (https://learn.microsoft.com/en-us/dotnet/api/system.double.epsilon?view=net-7.0).
These values work for both positive and negative values, hence the use of ±.
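A quick way to see those bounds from code (an illustrative snippet; the exact output and the Parse behavior assume a modern .NET runtime, .NET Core 3.0 or later, with IEEE-compliant parsing and formatting):

// The smallest positive magnitude a double can hold - the left side of the "approximate range":
Console.WriteLine(double.Epsilon);                            // 5E-324
// 3e-324 cannot be represented; it rounds to the nearest representable value, double.Epsilon:
Console.WriteLine(double.Parse("3e-324") == double.Epsilon);  // True
// The same magnitudes apply on the negative side, hence the ±:
Console.WriteLine(double.MaxValue);                           // 1.7976931348623157E+308
Console.WriteLine(double.MinValue);                           // -1.7976931348623157E+308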
The important thing here is the sign of the exponent.
The double and float types are used for approximations and should be avoided for exact values.
Some programmers mistakenly use them for exact values because they offer a performance gain over the decimal type in complex calculations.
Explaining what I think is confusing you
I'll lower the exponent to make it easier to understand
± is a replacement for "more or less", since the float and double types are recommended for approximations.
The first part is for values that are less than one (fractions).
±1.5 x 10^−5 = "more or less" 0.000015
The second part is for integers.
±3.4 x 10^5 = "more or less" 340,000
A colleague has written some code along these lines:
var roundedNumber = (float) Math.Round(someFloat, 2);
Console.WriteLine(roundedNumber);
I have an uncertainty about this code - is the number that gets written here even guaranteed to have 2 decimal places any more? It seems plausible to me that narrowing the double Math.Round(someFloat, 2) to float might result in a number whose string representation has more than 2 decimal places. Can anybody either provide an example of this (demonstrating that such a cast is unsafe) or else demonstrate somehow that it is safe to perform such a cast?
Assuming single and double precision IEEE754 representation and rules, I have checked for the first 2^24 integers i that
float(double( i/100 )) = float(i/100)
in other words, converting a decimal value with 2 decimal places twice (first to the nearest double, then to the nearest single precision float) is the same as converting the decimal directly to single precision, as long as the integer part of the decimal is not too large.
I have no guarantee for larger values.
The double approximation and the single approximation are different, but that's not really the question.
Converting twice is innocuous up to at least 167772.16, it's the same as if Math.Round would have done it directly in single precision.
Here is the testing code in Squeak/Pharo Smalltalk with ArbitraryPrecisionFloat package (sorry to not exhibit it in c# but the language does not really matter, only IEEE rules do).
(1 to: 1<<24)
    detect: [:i |
        (i/100.0 asArbitraryPrecisionFloatNumBits: 24) ~= (i/100 asArbitraryPrecisionFloatNumBits: 24) ]
    ifNone: [nil].
EDIT
Above test was superfluous because, thanks to the excellent reference provided by Mark Dickinson (Innocuous double rounding of basic arithmetic operations), we know that doing float(double(x) / double(y)) produces a correctly-rounded value for x / y, as long as x and y are both exactly representable as floats, which is the case for any 0 <= x <= 2^24 and for y = 100.
EDIT
I have checked with numerators up to 2^30 (decimal values > 10 million), and converting twice is still identical to converting once. Going further with an interpreted language is not good wrt global warming...
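For reference, a rough C# equivalent of the same brute-force check might look like this (a sketch only: it uses decimal to obtain the exact value of i/100 and string parsing for the direct decimal-to-float conversion, and it assumes a runtime with correctly rounded parsing):

using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // Compare float(double(i/100)) with the float nearest to the exact decimal i/100.
        for (int i = 1; i <= (1 << 24); i++)
        {
            float viaDouble = (float)(i / 100.0); // decimal value -> nearest double -> nearest float
            string exact = (i / 100m).ToString(CultureInfo.InvariantCulture); // exact decimal digits
            float direct = float.Parse(exact, CultureInfo.InvariantCulture);  // decimal value -> nearest float
            if (viaDouble != direct)
                Console.WriteLine($"Mismatch at i = {i}");
        }
    }
}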
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Why is there a bias in floating point ops? Any specific reason?
Output:
160
139
static void Main()
{
    float x = (float) 1.6;
    int y = (int)(x * 100);
    float a = (float) 1.4;
    int b = (int)(a * 100);
    Console.WriteLine(y);
    Console.WriteLine(b);
    Console.ReadKey();
}
Any rational number whose denominator (in lowest terms) is not a power of 2 will lead to an infinite number of digits when represented in binary. Here you have 8/5 and 7/5. Therefore there is no exact binary representation as a floating-point number (unless you have infinite memory).
The exact binary representation of 1.6 is 1.10011001100110011001100110011001100...
The exact binary representation of 1.4 is 1.01100110011001100110011001100110011...
Both values have an infinite number of digits (1100 is repeated endlessly).
float values have a precision of 24 bits. So the binary representation of any value will be rounded to 24 bits. If you round the given values to 24 bits you get:
1.6: 110011001100110011001101 (decimal 13421773) - rounded up
1.4: 101100110011001100110011 (decimal 11744051) - rounded down
Both values have an exponent of 0 (the first bit is 2^0 = 1, the second is 2^-1 = 0.5 etc.).
Since the first bit in a 24 bit value is 2^23 you can calculate the exact decimal values by dividing the 24 bit values (13421773 and 11744051) by two 23 times.
The values are: 1.60000002384185791015625 and 1.39999997615814208984375.
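Those exact values can be confirmed from C# by widening the floats to double and printing with full precision (illustrative):

Console.WriteLine(((double)1.6f).ToString("G17")); // 1.6000000238418579
Console.WriteLine(((double)1.4f).ToString("G17")); // 1.3999999761581421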
When using floating-point types you always have to consider that their precision is finite. Values that can be written exactly as decimal values might be rounded up or down when represented in binary. Casting to int does not respect that because it truncates the given values. You should always use something like Math.Round.
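For instance, rounding before the cast fixes the code from the question (illustrative):

float x = 1.6f;
float a = 1.4f;
Console.WriteLine((int)Math.Round(x * 100)); // 160
Console.WriteLine((int)Math.Round(a * 100)); // 140 instead of 139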
If you really need an exact representation of rational numbers you need a completely different approach. Since rational numbers are fractions you can use integers to represent them. Here is an example of how you can achieve that.
However, you can not write Rational x = (Rational)1.6 then. You have to write something like Rational x = new Rational(8, 5) (or new Rational(16, 10) etc.).
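A bare-bones version of such a Rational type might look like this (a hypothetical illustration only, not the linked example; it has no overflow handling or comparison operators):

using System;

public readonly struct Rational
{
    public long Numerator { get; }
    public long Denominator { get; }

    public Rational(long numerator, long denominator)
    {
        if (denominator == 0)
            throw new ArgumentException("Denominator must be non-zero.");
        long g = Gcd(Math.Abs(numerator), Math.Abs(denominator));
        if (g == 0) g = 1; // numerator was 0
        // Keep the sign on the numerator and store the fraction in lowest terms.
        Numerator = (denominator < 0 ? -numerator : numerator) / g;
        Denominator = Math.Abs(denominator) / g;
    }

    public static Rational operator *(Rational a, Rational b) =>
        new Rational(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";

    private static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);
}

With that, new Rational(8, 5) * new Rational(7, 5) prints 56/25 and never rounds anything.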
This is due to the fact that floating point arithmetic is not precise. When you set a to 1.4, internally it may not be exactly 1.4, just as close as can be made with machine precision. If it is fractionally less than 1.4, then multiplying by 100 and casting to integer will take only the integer portion which in this case would be 139. You will get far more technically precise answers but essentially this is what is happening.
In the case of your output for the 1.6 case, the floating point representation may actually be minutely larger than 1.6 and so when you multiply by 100, the total is slightly larger than 160 and so the integer cast gives you what you expect. The fact is that there is simply not enough precision available in a computer to store every real number exactly.
See this link for details of the conversion from floating point to integer types http://msdn.microsoft.com/en-us/library/aa691289%28v=vs.71%29.aspx - it has its own section.
The floating-point types float (32 bit) and double (64 bit) have limited precision and, moreover, represent the value in binary internally. Just as you cannot represent 1/7 precisely in a decimal system (~ 0.1428571428571428...), 1/10 cannot be represented precisely in a binary system.
You can however use the decimal type. It still has limited (however high) precision, but the numbers are represented in a decimal way internally. Therefore a value like 1/10 is represented internally as exactly 0.1000000000000000000000000000. 1/7 is still a problem for decimal. But at least you don't get a loss of precision by converting to binary and then back to decimal.
Consider using decimal.
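For the code in the question that would mean something like (illustrative):

decimal x = 1.6m;
decimal a = 1.4m;
Console.WriteLine((int)(x * 100)); // 160
Console.WriteLine((int)(a * 100)); // 140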
I receive a FIX message for a trade Allocation, where Price is given in cents (ZAR / 100) but commission is given in Rands. These values are represented by the constants below. When I run this calculation, commPerc1 shows a value of 0.099999999999999978 and commPerc2 shows 0.1. These values look as though they differ by a factor of 10, yet when I check by calculating back to Rands, commRands1 and commRands2 show very similar values of 336.4831554 and 336.48315540000004 respectively.
private const double Price = 5708.91;
private const double Qty = 5894;
private const double AbsComm = 336.4831554;

static void Main()
{
    double commCents = AbsComm * 100;
    double commPerc1 = commCents / (Qty * Price) * 100;
    double commRands1 = (Qty * Price) * (commPerc1 / 100) / 100;
    double commPerc2 = (AbsComm / (Qty * (Price / 100))) * 100;
    double commRands2 = (Qty * Price) * (commPerc2 / 100) / 100;
}
PLEASE NOTE:
I am dealing with legacy code here where a conversion to decimal would involve several changes and QA, so right now I have to accept double.
Don't use double for financial calculations, use decimal instead. Floating point numbers are OK for physical, measured values, where the value is never exact anyway. But for financial calculations you're working with exact values, you can't afford errors due to floating point representations. That's what decimal is made for:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors.
(from MSDN)
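For comparison, an illustrative rewrite of the question's calculation with decimal (not the legacy code itself) gives the exact percentage both ways:

decimal price = 5708.91m;
decimal qty = 5894m;
decimal absComm = 336.4831554m;

decimal commCents = absComm * 100;
decimal commPerc1 = commCents / (qty * price) * 100;
decimal commPerc2 = (absComm / (qty * (price / 100))) * 100;

Console.WriteLine(commPerc1 == 0.1m); // True
Console.WriteLine(commPerc2 == 0.1m); // True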
The floating point problem is everywhere, because all of your values are of type double, which is a double-precision floating-point type, not a base-10 numeric type. (Hence, all calculations being performed are floating-point calculations.)
You should declare your variables decimal instead. That type is purposed precisely for financial calculations.
When I run this calculation, commPerc1 shows a value of 0.099999999999999978, and commPerc2 shows 0.1. These values look to differ by x10
No they don't. There is only a vanishingly small (but real) rounding difference between them. As others have already noted, you should never use floating point for calculations that demand absolute precision. With floating point you always have rounding errors like this, and e.g. financial clients don't like missing or extra pennies/cents in their books.
That's due to the way floating numbers are stored in memory. You could use decimal instead of double or float when dealing with financial calculations.
Financial calculations should be carried out using the Decimal datatype. Doubles store "approximations" of the specified number, since the specified number often can't be stored exactly within the available bits.
0.099999999999999978 and 0.1 don't differ by a factor of 10, they are almost the same. Just like 0.999 is almost 1.
But you really should use Decimal instead of Double for financial calculations. Double can't exactly represent numbers like 0.1 whereas Decimal can. (On the other hand neither can represent 1/3 exactly).
Additionally Decimal has a larger mantissa and throws exceptions when overflows occur.
Please use decimal for monetary calculations.
Have a look Here
I'm having an issue with a query I wrote where, for some reason, the variable I'm using to store a decimal value ends up with 6 digits after the decimal point (they're mostly zeros).
I have tried the following (and different combinations using Math.Round) with no luck.
Sales =
    (from invhed in INVHEAD
     ... // Joins here
     orderby cust.State ascending
     select new Sale
     {
         InvoiceLine = inv.InvoiceLine,
         InvoiceNum = inv.InvoiceNum,
         ...
         NetPrice = Math.Round((inv.ExtPrice - inv.Discount) * (Decimal)(qsales.RepSplit / 100.0), 2, MidpointRounding.ToEven),
     }).ToList<Sale>();
The NetPrice member has values like 300.000000, 5000.500000, 3245.250000, etc.
Any clues? I can't seem to find anything on this issue online.
EDIT:
Decimal.Round did the trick (I forgot to mention that the NetPrice member was a Decimal type). See my answer below.
System.Decimal preserves trailing zeroes by design. In other words, 1m and 1.00m are two different decimals (though they compare as equal), and can be interpreted as being rounded to different numbers of decimal places - e.g. Math.Round(1.001m) will give 1m, and Math.Round(1.001m, 2) will give 1.00m. Arithmetic operators treat them differently - + and - will produce a result that has the same number of places as the operand which has the most of them (so 1.50m + 0.5m == 2.00m), and * will have the sum of the number of places of its operands (so 1.00m * 1.000m == 1.00000m).
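A few concrete cases of those rules (illustrative):

Console.WriteLine(Math.Round(1.001m));    // 1
Console.WriteLine(Math.Round(1.001m, 2)); // 1.00
Console.WriteLine(1.50m + 0.5m);          // 2.00 (the larger of the two scales)
Console.WriteLine(1.00m * 1.000m);        // 1.00000 (the sum of the two scales)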
Trailing zeros can appear in the output of .ToString on the decimal type. You need to specify the number of digits after the decimal point you want displayed, using an appropriate format string. For example:-
var s = dec.ToString("#.00");
will display 2 decimal places.
Internally the decimal type uses an integer and a decimal scaling factor. It's the scaling factor which gives rise to the trailing zeros. If you start off with a decimal that has a scaling factor of 2, you will get 2 digits after the decimal point even if they are 0.
Adding and subtracting decimals results in a decimal whose scaling factor is the maximum of those of the decimals involved. Hence, subtracting one decimal with a scaling factor of 2 from another with the same factor gives a result that also has a scaling factor of 2.
Multiplying decimals results in a decimal whose scaling factor is the sum of the scaling factors of the two operands. Multiplying decimals with a scaling factor of 2 results in a new decimal that has a scaling factor of 4.
Try this:-
var x = new decimal(1.01) - (decimal)0.01;
var y = new decimal(2.02) - (decimal)0.02;
Console.WriteLine(x * y * x * x);
You get 2.00000000.
I got it to work using Decimal.Round() with the same arguments as before. :)
Looks like the issue is along the lines of what Pavel was saying: Decimal values behave differently, and it would seem Math.Round doesn't quite work with them the way one would expect...
Thanks for all the input.