I'm having an issue with a query I wrote: the variable I'm using to store a decimal value ends up with six digits after the decimal point (they're mostly zeros).
I have tried the following (and different combinations using Math.Round) with no luck.
Sales =
(from invhed in INVHEAD
... // Joins here
orderby cust.State ascending
select new Sale
{
InvoiceLine = inv.InvoiceLine,
InvoiceNum = inv.InvoiceNum,
...
NetPrice = Math.Round((inv.ExtPrice - inv.Discount) * (Decimal) (qsales.RepSplit / 100.0), 2, MidpointRounding.ToEven),
}).ToList<Sale>();
The NetPrice member has values like 300.000000, 5000.500000, 3245.250000, etc.
Any clues? I can't seem to find anything on this issue online.
EDIT:
Decimal.Round did the trick (I forgot to mention that the NetPrice member was a Decimal type). See my answer below.
System.Decimal preserves trailing zeroes by design. In other words, 1m and 1.00m are two different decimals (though they will compare as equal), and can be interpreted as being rounded to a different number of decimal places - e.g. Math.Round(1.001m) will give 1m, and Math.Round(1.001m, 2) will give 1.00m. Arithmetic operators treat them differently - + and - will produce a result that has the same number of places as the operand which has the most of them (so 1.50m + 0.5m == 2.00m), and * and / will have the sum of the number of places of their operands (so 1.00m * 1.000m == 1.00000m).
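For instance, a minimal console sketch of the behaviour described above:
Console.WriteLine(Math.Round(1.001m));      // 1
Console.WriteLine(Math.Round(1.001m, 2));   // 1.00
Console.WriteLine(1.50m + 0.5m);            // 2.00 - as many places as the "widest" operand
Console.WriteLine(1.00m * 1.000m);          // 1.00000 - the places add up under multiplication
Console.WriteLine(1m == 1.00m);             // True - they still compare equal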
Trailing zeros can appear in the output of .ToString on the decimal type. You need to specify the number of digits after the decimal point you want displayed, using the appropriate format string. For example:-
var s = dec.ToString("#.00");
will display 2 decimal places.
Internally the decimal type uses an integer and a decimal scaling factor. It's the scaling factor which gives rise to the trailing zeros. If you start off with a decimal that has a scaling factor of 2, you will get 2 digits after the decimal point even if they are 0.
Adding and subtracting decimals results in a decimal whose scaling factor is the maximum of those of the decimals involved. Hence subtracting one decimal with a scaling factor of 2 from another with the same factor gives a decimal that also has a scaling factor of 2.
Multiplication and division of decimals results in a decimal whose scaling factor is the sum of the scaling factors of the two operands. Multiplying two decimals that each have a scaling factor of 2 results in a new decimal that has a scaling factor of 4.
Try this:-
var x = new decimal(1.01) - (decimal)0.01;
var y = new decimal(2.02) - (decimal)0.02;
Console.WriteLine(x * y * x * x);
You get 2.00000000.
I got it to work using Decimal.Round() with the same arguments as before. :)
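For reference, a minimal sketch of the call that ended up working (assuming the same expression as in the original query, with NetPrice being a decimal member):
NetPrice = decimal.Round((inv.ExtPrice - inv.Discount) * (Decimal) (qsales.RepSplit / 100.0), 2, MidpointRounding.ToEven),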
Looks like the issue is somewhat along the lines of what Pavel was saying: Decimal types behave differently, and it would seem Math.Round doesn't quite work with them as one would expect it to...
Thanks for all the input.
I have a project that should calculate an equation, and it checks every number to see whether it fits the equation. Each time it increases the number by 0.00001, but sometimes the next decimals come out wrong, for example 0.000019999999997.
I even tried a breakpoint at that line. When I am right on that line it is 1.00002, for example, but when I step to the next line it is 1.00002999999997.
I don't know why it is like that. I even tried smaller numbers like 0.001.
List<double> anwsers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;
while (i <= endingPoint)
{
prei = i;
i += 0.001;
}
Added a call to Math.Round to round the result to 3 decimal places before adding it to the answers list. This way the values in the list always display with at most 3 digits past the decimal point.
List<double> answers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;
while (i <= endingPoint)
{
prei = i;
i += 0.001;
double result = Math.Round(prei, 3);
answers.Add(result);
}
Computers don't store exact floating point numbers. A float is usually 32 bits or 64 bits and cannot store numbers to arbitrary precision.
If you want dynamic precision floating point, use a number library like GMP.
There are some good videos on this: https://www.youtube.com/watch?v=2gIxbTn7GSc
But essentially it's because there aren't enough bits. If you store a number such as 1/3, only so many decimal places can be kept, so what is actually stored is something like 0.3334. Which means that when you do something like 1/3 + 1/3, it isn't going to equal 2/3 as one might expect; it would equal 0.6668.
To summarize, using the decimal type (https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=net-7.0) instead of double should fix your issues.
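A minimal sketch of the difference (not from the original answer):
double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);          // False - neither 0.1 nor 0.2 is stored exactly as a double
Console.WriteLine(d.ToString("G17")); // 0.30000000000000004
decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);         // True - decimal stores these values exactly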
TL;DR: You should use a decimal data type instead of a float or double. You can declare your 0.001 as a decimal by adding an m after the value: 0.001m
The double data type you chose stores numbers as a binary fraction: an integer significand scaled by a power of two. It is great for storing large numbers in little memory, but it also means your number gets rounded to the closest value that can be represented in that form. A decimal, on the other hand, stores the value as an integer scaled by a power of ten, which more closely matches what you intuitively expect from decimal numbers.
More information about float values: https://floating-point-gui.de/
More information about numeric types declaration: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
The documentation also explains:
Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or Math.PI), binary fractions are unable to represent some fractional values. For example, 1/10, which is represented precisely by .1 as a decimal fraction, is represented by .001100110011 as a binary fraction, with the pattern "0011" repeating to infinity. In this case, the floating-point value provides an imprecise representation of the number that it represents. Performing additional mathematical operations on the original floating-point value often tends to increase its lack of precision.
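Applied to the loop in the question, a minimal sketch with decimal might look like this (assuming startingPoint and endingPoint are, or are converted to, decimal):
List<decimal> answers = new List<decimal>();
decimal i = startingPoint;
while (i <= endingPoint)
{
    answers.Add(i);   // i stays exactly on 0.001 steps - no 0.00099999... drift
    i += 0.001m;
}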
How do I round the values below to remove the consecutive 9s from a double value?
Significant-digit rounding works differently in .NET Framework and .NET Core.
I used the code below to round the double value to significant digits, passing the value 5.9E-12 and rounding to 12 significant digits:
private static double RoundToSignificantDigits(double d, int digits)
{
if (d == 0)
return 0;
double scale = Math.Pow(10, Math.Floor(Math.Log10(Math.Abs(d))) + 1);
return scale * Math.Round(d / scale, digits);
}
.NET Core result: 5.8999999999999995E-12
.NET Framework result: 5.9E-12
How do I get the same result in .NET Core as in .NET Framework?
It is still unclear what exactly you want to achieve, but you cannot represent every decimal number exactly as a double value.
Say you have a number x. To have a chance of being exactly representable as a double, it needs to satisfy the following:
x = n/m, where n and m do not share any factor and m is a power of 2.
The number 5.9E-12 is equal to 59/(10^13). You can clearly see that 10^13 is not a power of 2, so 5.9E-12 cannot be exactly represented as a double value.
If you would do Console.WriteLine("{0:G17}", double.Parse("5.9E-12")); you would get the value 5.9000000000000003E-12 and not 5.9E-12.
All this basically arises for the same reason you can't write out the decimal expansion of 1/3 or 1/7 to the end (without the notation for the repeating part).
So if you want numbers that can be rounded to values with a finite decimal expansion, use decimal and not double.
Or you could create a fraction type based on BigInteger, which would have an unlimited amount of precision. See this for a discussion.
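If the goal is only to reproduce the old display behaviour, one option (an assumption about your use case, not part of the answer above) is to format with a fixed number of significant digits, since .NET Core 3.0+ changed double.ToString() to emit the shortest round-trippable form while .NET Framework used up to 15 significant digits:
double rounded = RoundToSignificantDigits(5.9E-12, 12);
Console.WriteLine(rounded);                 // .NET Core: 5.8999999999999995E-12
Console.WriteLine(rounded.ToString("G15")); // 5.9E-12, matching the old .NET Framework default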
Sorry for the daft question, but I get this value back from the database:
"7.545720553985866E+29"
I need to convert this value to a decimal, rounded to 6 digits. What is the best way to do that? I tried
var test = double.Parse("7.545720553985866E+29");
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
but the value remains unchanged and the conversion crashes.
Math.Round rounds to N digits to the right of the decimal point. Your number has NO digits to the right of the decimal (it is equivalent to 754,572,055,398,586,600,000,000,000,000), so rounding it does not change the value.
If you want to round to N significant digits then look at some of the existing answers:
Round a double to x significant figures
Rounding the SIGNIFICANT digits in a double, not to decimal places
the conversion crashes.
That's because the value is too large for a decimal. The largest value a decimal can hold is 7.9228E+28 - your value is about 10 times larger than that.
Maybe you can take a substring of it and then parse:
var test = double.Parse("7.545720553985866E+29".Substring(0, 8)); // 7.54572
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
In Python, you can use this to round to 6 significant digits:
round(test, 6 - int(math.log10(test)))
The resulting value from that is
7.545721e+29
This works by using log10 from the math module to get the power of 10 in test, rounding it down to an integer, subtracting that from 6, and then using round to get the desired digits.
As noted by others, round works to the given number of decimal places. The log10 and the rest figures how many decimal places are needed to get the desired number of significant digits. If the decimal places are negative, round rounds to the left of the decimal point.
You should be aware that log10 is not perfectly accurate and taking the int of that may be off from the expected value by one. This happens rarely but it does happen. Also, even if the computed value is correct, converting the value to string (such as when you print it) may give a different-than-expected result. If you need perfect accuracy you would be better off working from the string representation of the value.
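The snippet above is Python; a rough C# counterpart (an assumption on my part - Math.Round in .NET does not accept a negative digit count, so the value is scaled down first) might look like:
double test = double.Parse("7.545720553985866E+29");
// scale so the leading digit sits just before the decimal point
double scale = Math.Pow(10, Math.Floor(Math.Log10(Math.Abs(test))));
double rounded = scale * Math.Round(test / scale, 6);
Console.WriteLine(rounded);   // ~7.545721E+29, like the Python result above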
While testing why my program is not working as intended, I tried typing the calculations that seem to be failing into the Immediate Window.
Math.Floor(1.0f)
1.0 - correct
However:
200f * 0.005f
1.0
Math.Floor(200f * 0.005f)
0.0 - incorrect
Furthermore:
(float)(200f * 0.005f)
1.0
Math.Floor((float)(200f * 0.005f))
0.0 - incorrect
Probably some float loss is occurring; 0.99963 ≠ 1.00127, for example.
I wouldn't mind storing less precise values, but in a non-lossy way - for example, if there were a numeric type that stored values as integers do, but to only three decimal places, if it could be made performant.
I think there is probably a better way of calculating (n * 0.005f) with regard to such errors.
edit:
TY, a solution:
Math.Floor(200m * 0.005m)
Also, as I understand it, this would work if I didn't mind changing the 1/200 into 1/256:
Math.Floor(200f * 0.00390625f)
The solution I'm using. It's the closest I can get in my program and seems to work ok:
float x = ...;
UInt16 n = 200;
decimal d = 1m / n;
... = Math.Floor((decimal)x * d)
Floats represent numbers as fractions with powers of two in the denominator. That is, you can exactly represent 1/2, or 3/4, or 19/256. Since .005 is 1/200, and 200 is not a power of two, instead what you get for 0.005f is the closest fraction that has a power of two on the bottom that can fit into a 32 bit float.
Decimals represent numbers as fractions with powers of ten in the denominator. Like floats, they introduce errors when you try to represent numbers that do not fit that pattern. 1m/333m for example, will give you the closest number to 1/333 that has a power of ten as the denominator and 29 or fewer significant digits. Since 0.005 is 5/1000, and that is a power of ten, 0.005m will give you an exact representation. The price you pay is that decimals are much larger and slower than floats.
You should always always always use decimals for financial calculations, never floats.
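A small sketch of both representations (printed values are approximate, not from the original answer):
// 0.005f is stored as the nearest binary fraction, which is slightly below 1/200
Console.WriteLine(((double)0.005f).ToString("G17"));  // ~0.004999999888241291
// 0.005m is 5/1000 - a power-of-ten denominator - so decimal stores it exactly
Console.WriteLine(Math.Floor(200m * 0.005m));         // 1
// 1/333 has no power-of-ten denominator, so decimal has to cut it off too
Console.WriteLine(1m / 333m);                         // 0.003003003003... to decimal's precision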
The problem is that 0.005f is actually 0.004999999888241291046142578125... so less than 0.005. That's the closest float value to 0.005. When you multiply that by 200, you end up with something less than 1.
If you use decimal instead - all the time, not converting from float - you should be fine in this particular scenario. So:
decimal x = 0.005m;
decimal y = 200m;
decimal z = x * y;
Console.WriteLine(z == 1m); // True
However, don't assume that this means decimal has "infinite precision". It's still a floating point type with limited precision - it's just a floating decimal point type, so 0.005 is exactly representable.
If you cannot tolerate any floating point precision issues, use decimal.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Ultimately even decimal has precision issues (it allows for 28-29 significant digits). If you are working within its supported range ((-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28)), you are quite unlikely to be impacted by them.
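A small illustration of those limits:
decimal third = 1m / 3m;
Console.WriteLine(third * 3m == 1m); // False - 1/3 cannot be held exactly in 28-29 digits
Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999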
I receive a FIX message for a trade allocation, where Price is given in cents (ZAR / 100) but commission is given in Rands. These values are represented by the constants below. When I run this calculation, commPerc1 shows a value of 0.099999999999999978 and commPerc2 shows 0.1. These values look to differ by x10, yet when I check by calculating back to Rands, commRands1 and commRands2 show very similar values of 336.4831554 and 336.48315540000004 respectively.
private const double Price = 5708.91;
private const double Qty = 5894;
private const double AbsComm = 336.4831554;
static void Main()
{
double commCents = AbsComm * 100;
double commPerc1 = commCents / (Qty * Price) * 100;
double commRands1 = (Qty * Price) * (commPerc1 / 100) / 100;
double commPerc2 = (AbsComm / (Qty * (Price / 100))) * 100;
double commRands2 = (Qty * Price) * (commPerc2 / 100) / 100;
}
PLEASE NOTE:
I am dealing with legacy code here where a conversion to decimal would involve several changes and QA, so right now I have to accept double.
Don't use double for financial calculations, use decimal instead. Floating point numbers are OK for physical, measured values, where the value is never exact anyway. But for financial calculations you're working with exact values, you can't afford errors due to floating point representations. That's what decimal is made for:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors.
(from MSDN)
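For illustration, re-running the numbers from the question with decimal literals (a sketch, not the legacy code itself) gives exact results:
decimal price = 5708.91m;
decimal qty = 5894m;
decimal absComm = 336.4831554m;
decimal commCents = absComm * 100m;
decimal commPerc = commCents / (qty * price) * 100m;
Console.WriteLine(commPerc == 0.1m);      // True - no 0.099999... artefact
decimal commRands = (qty * price) * (commPerc / 100m) / 100m;
Console.WriteLine(commRands == absComm);  // True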
The floating point problem is everywhere, because all of your values are of type double, which is a double-precision floating-point type, not a base-10 numeric type. (Hence, all calculations being performed are floating-point calculations.)
You should declare your variables decimal instead. That type is purposed precisely for financial calculations.
When I run this calculation, commPerc1 shows a value of 0.099999999999999978, and commPerc2 shows 0.1. These values look to differ by x10
No they don't. There is only a tiny (but real) rounding difference between them. As others have already noted, you should never use floating point for calculations that demand absolute precision. With floating point you always have rounding errors like this, and e.g. financial clients don't like missing or extra pennies/cents in their books.
That's due to the way floating-point numbers are stored in memory. You could use decimal instead of double or float when dealing with financial calculations.
Financial calculations should be carried out using the Decimal datatype. Doubles store 'approximations' of the specified number since the specified number can't be stored perfectly within the specified bit-allocation.
0.099999999999999978 and 0.1 don't differ by a factor of 10, they are almost the same. Just like 0.999 is almost 1.
But you really should use Decimal instead of Double for financial calculations. Double can't exactly represent numbers like 0.1 whereas Decimal can. (On the other hand neither can represent 1/3 exactly).
Additionally Decimal has a larger mantissa and throws exceptions when overflows occur.
Please use decimal for monetary calculations.
Have a look Here