C# rounding of double not working [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
I have this code to round a value, using Math.Round():
double h = 128.015999031067;
double d = Math.Round(h, 3) * 1000;
The result is 128015.99999999999.
If I write it to the console it gives me 128016, but I want the stored value itself to be 128016.
Am I missing a type conversion, or is there some other way of doing this?

128.016 cannot be represented exactly in binary. There is no way that you will get exactly 128016 by multiplying by 1000.
There are multiple ways around this:
Do nothing. It is already printed correctly. (Unless you want to perform further calculations.)
The "obvious" solution would be to simply round again.
The "simplest" solution would be to multiply first, then round.
The most "correct" solution: if you need exact values, use an exact type. There are many implementations of bignum, rational, and precise arithmetic available via NuGet; also consider the built-in decimal.
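A minimal sketch of these options, using the value from the question (the variable names are illustrative, not from the thread):
double h = 128.015999031067;
double d1 = Math.Round(Math.Round(h, 3) * 1000);  // round again after scaling: 128016
double d2 = Math.Round(h * 1000);                 // multiply first, then round: 128016
decimal d3 = Math.Round((decimal)h, 3) * 1000m;   // exact base-10 arithmetic: 128016.000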

Have you tried multiplying first and then using the round function without the digits argument?
double h = 128.015999031067;
double d = Math.Round(h*1000);
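This works because h * 1000 is already within rounding distance of 128016, so rounding to an integer absorbs the base-2 representation error.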

Related

Why does my double round my variables sometimes? [duplicate]

This question already has answers here:
Difference between decimal, float and double in .NET?
(18 answers)
Closed 3 years ago.
I really don't understand why, but my double array sometimes rounds my variables even though it shouldn't. The weird thing is that it only does this sometimes, as you can see in the picture. All of the array elements should have this long precision. The variable WertunterschiedPerSec also has this precision every time, but if I add it to Zwischenwerte[i], the result sometimes just becomes less precise, even though I don't change anything anywhere. Does anybody know why?
I would suggest using a decimal, but let's get into the exact details:
double, float and decimal are all floating point.
The difference is that double and float are base 2, while decimal is base 10.
Base 2 cannot exactly represent all base 10 fractions.
This is why you're seeing what appears to be "rounding".
I would use decimal instead of double, because you can do basically the same things with a decimal as with a double, except that most Math functions only take double. Try to use decimals, and if you need to convert, use:
double d = Convert.ToDouble(someDecimal);
Decimals are meant for base-10 values, so they hold more significant digits than a float or double.
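A minimal sketch contrasting the two types (the loop and values are illustrative, not from the question):
double dSum = 0.0;
decimal mSum = 0.0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // 0.1 has no exact base-2 representation
    mSum += 0.1m;  // 0.1m is stored exactly in base 10
}
Console.WriteLine(dSum == 1.0);   // False: dSum is 0.9999999999999999
Console.WriteLine(mSum == 1.0m);  // True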

Is the double data type not suitable for my data / calculation? [duplicate]

This question already has answers here:
Is double Multiplication Broken in .NET? [duplicate]
(6 answers)
Closed 6 years ago.
My unit test failed because of this, so I wonder if I am using the right datatype.
Reading the specs of double I'd think it should be OK, but this is what's happening:
I'm reading a string from a file with the value 0,0175 (comma as decimal separator),
then I convert it to a double and multiply it by 10000.
The function that does the multiplication is this:
private static double? MultiplyBy10000(double? input)
{
    if (!input.HasValue)
    {
        return null;
    }
    return input.Value * 10000;
}
Next is from the immediate window:
input.Value
0.0175
input.Value*10000
175.00000000000003
And that is where my unit test fails, because I expect 175.
Is the double not accurate enough for this?
Checked other values too:
input.Value*1
0.0175
input.Value*10
0.17500000000000002
input.Value*100
1.7500000000000002
input.Value*1000
17.5
input.Value*10000
175.00000000000003
The weird thing is, I have 12 test cases:
0,0155
0,0225
0,016
0,0175
0,0095
0,016
0,016
0,0225
0,0235
0,0265
I assert 4 of these, and the other 3 don't show this behaviour.
Is the double not accurate enough for this?
No. Doubles are binary floating point numbers, which cannot represent most decimal fractions exactly. You can use decimal, which is exact in base 10 and more suitable if you need exact decimal numbers.
A fractional value can only be represented exactly in binary floating point if it is a sum of negative powers of 2. 0.0175 is not, so although the debugger shows what looks like 0.0175, there is actually a slightly different value there.
After you've scaled the number up, use Math.Round to trim off the digits to the right of the decimal point.
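A short sketch of that advice, assuming the input string uses a comma separator as in the question (the de-DE culture is an illustrative assumption, not from the thread):
using System.Globalization;

double input = double.Parse("0,0175", CultureInfo.GetCultureInfo("de-DE"));
double scaled = Math.Round(input * 10000);  // 175, the representation error is rounded away

// Or avoid binary floating point entirely:
decimal exact = decimal.Parse("0,0175", CultureInfo.GetCultureInfo("de-DE")) * 10000m;  // 175.0000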

Why does 2.0 % 0.1 give 0.09999999 when debugging, and how do I handle the condition if (doubleVar == 0)? [duplicate]

This question already has answers here:
Why is floating point arithmetic in C# imprecise?
(3 answers)
Closed 8 years ago.
I am trying to compute a modulo in C#. As I know, a % b gives the remainder, so I tried this:
var distanceFactor = slider.Value % distance;
But while debugging, slider.Value = 2.0 and distance = 0.1, and distanceFactor is surprisingly 0.0999999999..., where I expected 0.
Is it due to var? What could be the reason for this non-zero value?
And how do I solve this problem? Rounding 0.0999999999 gives 0.1, so my control never enters the condition if (distanceFactor == 0) (and round-off is also necessary in my current situation). Is there an alternative to achieve this?
This is expected behavior. A floating point number does not exactly represent a decimal number the way the decimal type would. See What Every Computer Scientist Should Know About Floating-Point Arithmetic for a detailed description.
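That answer explains the cause but not a workaround. A common one (my suggestion, not from this thread; the tolerance value is an illustrative choice) is to test within a small tolerance instead of exact equality:
double distanceFactor = 2.0 % 0.1;  // approximately 0.1 rather than the expected 0

const double Epsilon = 1e-9;
// The remainder is "effectively zero" if it is near 0 or near the divisor itself:
bool isZero = distanceFactor < Epsilon || Math.Abs(distanceFactor - 0.1) < Epsilon;
Console.WriteLine(isZero);  // True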

Why Math.Round() works differently in C# [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
First I noticed that Math.Round() doesn't round to the 2 digits I ask for:
double dd = Math.Round(1.66666, 2);
Result:
dd = 1.6699999570846558;
and then I created a new project with the same .NET Framework, and the result is now 1.67, as it should have been in the first place.
I've never seen Round behave like this before; what is causing this problem?
Like the other comments mentioned, use decimal to hold the returned value:
decimal dd = Math.Round(1.66666M, 2);
The issue you describe has nothing to do with the Round() function. You can read a bit about how binary and base-10 number types work, but the short explanation is:
With binary floating point variables (e.g. double), you cannot guarantee that a decimal value is stored exactly. So when you save something like 1.67 in a variable of type double and check the value later on, there is no guarantee you will get exactly 1.67. You may get a value like 1.66999999 (similar to what you got) or like 1.6700000001.
Base-10 variables (e.g. decimal), on the other hand, give you that precision, so if you save 1.67, you will always get 1.67 back.
Please note that the Round() function returns the same type you pass to it, so to make it return a decimal you need to pass it 1.66666M, which is a decimal value, rather than 1.66666, which is a double.
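A small sketch of that last point (the printed output assumes the default formatter):
double d = Math.Round(1.66666, 2);    // double overload: returns the double closest to 1.67
decimal m = Math.Round(1.66666M, 2);  // decimal overload: returns exactly 1.67m

Console.WriteLine(d);  // 1.67, though the stored base-2 value is only approximately 1.67
Console.WriteLine(m);  // 1.67, stored exactly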

Why is this simple calculation of two doubles inaccurate? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
C# (4): double minus double giving precision problems
86.25 - 86.24 = 0.01
Generally, I would think the above statement is true, right?
However, if I enter this
double a = 86.24;
double b = 86.25;
double c = b - a;
I find that c = 0.010000000000005116
Why is this?
Floating point numbers (in this case doubles) cannot represent decimal values exactly. I would recommend reading David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This applies to every language that deals with binary floating point numbers, whether C#, Java, JavaScript, or others; it is essential reading for any developer working with floating point arithmetic.
As VinayC suggests, if you need an exact (and slower) representation, use System.Decimal (aka decimal in C#) instead.
double is the wrong data type for decimal calculations; use decimal for that. The main reason lies in how double stores the number: it is essentially a binary approximation of the decimal value.
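A minimal sketch of that advice, using the values from the question:
double a = 86.24;
double b = 86.25;
Console.WriteLine(b - a);  // 0.010000000000005116

decimal da = 86.24m;
decimal db = 86.25m;
Console.WriteLine(db - da);  // 0.01, exact in base 10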
