Why Math.Round() works differently in C# [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
First I noticed that Math.Round() doesn't round to 2 digits as I ask for:
double dd = Math.Round(1.66666, 2);
Result:
dd = 1.6699999570846558;
and then I created a new project with the same .NET framework, and the result is 1.67 now, as it should have been in the first place.
I've never seen Round behave like this before; what is causing this problem?

Like the other comments mentioned, use decimal to hold the returned value:
decimal dd = Math.Round(1.66666M, 2);
The issue you described has nothing to do with the Round() function. You can read a bit about how floating point and fixed point numbers work, but the short explanation is:
With floating point variables (e.g. double), you cannot guarantee the precision of the number you save in them. So when you save something like 1.67 in a variable of type double and then check the value later on, there is no guarantee you will get exactly 1.67 back. You may get a value like 1.66999999 (similar to what you got) or like 1.6700000001.
Fixed point variables (e.g. decimal) on the other hand, will give you that precision, so if you save 1.67, you will always get 1.67 back.
Please note that the Round() function returns the same type that you pass to it, so to make it return a decimal, you need to pass it 1.66666M, which is a decimal literal, rather than 1.66666, which is a double (floating point) literal.
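A minimal sketch contrasting the two overloads (the printed values are what Console.WriteLine typically shows; note the double result is only the nearest binary approximation of 1.67):
double d = Math.Round(1.66666, 2);    // double overload: returns a double, the nearest binary value to 1.67
decimal m = Math.Round(1.66666M, 2);  // decimal overload: returns a decimal that is exactly 1.67
Console.WriteLine(d); // 1.67 (default formatting hides the tiny binary error)
Console.WriteLine(m); // 1.67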

Why does my double round my variables sometimes? [duplicate]

This question already has answers here:
Difference between decimal, float and double in .NET?
(18 answers)
Closed 3 years ago.
I really don't understand why, but my double array just sometimes rounds my variables, even though it shouldn't. The weird thing is that it only does this sometimes, as you can see in the picture. All of the array elements should have this long precision. The variable "WertunterschiedPerSec" also has this precision every time, but still, if I add it to Zwischenwerte[i], it sometimes just gets less precise, even though I don't do anything anywhere. Does anybody know why?
I would suggest using a decimal, but let's get into the exact details:
double, float and decimal are all floating point.
The difference is double and float are base 2 and decimal is base 10.
Base 2 numbers cannot accurately represent all base 10 numbers.
This is why you're seeing what appears to be "rounding".
I would use the decimal type instead of double because you can do basically the same operations as with a double, apart from some of the Math class methods. Try to use decimals, and if you need to convert, then use:
double variable = Convert.ToDouble(decimalVariable);
Decimals are meant for decimal values, so they will hold more information than a float or double.
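A sketch of that suggestion, reusing the asker's variable names (the starting value is hypothetical): keep the running sum in decimal, and convert only at the point where a double is actually required.
decimal wertunterschiedPerSec = 0.1M;   // hypothetical per-second delta; 0.1 is exact in decimal
decimal[] zwischenwerte = new decimal[10];
for (int i = 1; i < zwischenwerte.Length; i++)
{
    // decimal addition here is exact, so no element suddenly loses precision
    zwischenwerte[i] = zwischenwerte[i - 1] + wertunterschiedPerSec;
}
double forMathCalls = Convert.ToDouble(zwischenwerte[9]); // convert only where a double API needs it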

C# rounding of double not working [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
I have this code to round some value. I am using Math.Round() to do it.
double h = 128.015999031067;
double d = Math.Round(h, 3) * 1000;
The result is 128015.99999999999.
If I write it to the console it gives me 128016, and I want the value to be 128016.
Am I missing any type conversions? Or there is some other way of doing this?
128.016 cannot be represented exactly in binary. There is no way you will get exactly 128016 by multiplying it by 1000.
There are multiple ways around this:
Do nothing. It is already printed correctly. (Unless you want to perform further calculations.)
The "obvious" solution would be to simply round again.
The "simplest" solution would be to multiply first, then round.
The most "correct" solution: if you need exact values, use an exact type. There are many implementations of bignum, rational, and arbitrary-precision arithmetic available via NuGet; also consider the built-in decimal. The last three options are sketched below.
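A sketch of those options (the comments show the results you should see):
double h = 128.015999031067;

// Round again after the multiplication:
double d1 = Math.Round(Math.Round(h, 3) * 1000);  // 128016

// Multiply first, then round once:
double d2 = Math.Round(h * 1000);                 // 128016

// Switch to decimal for an exact result:
decimal d3 = Math.Round((decimal)h, 3) * 1000;    // 128016.000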
Have you tried using the round function without any other arguments?
double h = 128.015999031067;
double d = Math.Round(h*1000);

Why exactly 0.09f - 0.01f == 0.0800000057f? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
I know that a floating point variable stores the number in a sign-exponent-fraction format (as stated in IEEE 754), that it's never precise, and that I should probably never compare two floats without specifying a precision.
But why exactly does 0.09f - 0.01f give the value 0.0800000057f? What exactly happens under the hood of the .NET VM and in memory when I do that subtraction?
0.09f is represented as 00111101101110000101000111101100 in IEEE-754, which is closest to the decimal value 0.09000000357627869.
0.01f is represented as 00111100001000111101011100001010 in IEEE-754, which is closest to the decimal value 0.009999999776482582.
Their subtraction yields 00111101101000111101011100001011, which is closest to the decimal value 0.08000000566244125.
You can see how the actual subtraction is done here
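If you want to reproduce those bit patterns yourself, here is a small sketch using BitConverter.SingleToInt32Bits (available on modern .NET; on older frameworks you can go through BitConverter.GetBytes and BitConverter.ToInt32 instead):
float a = 0.09f;
float b = 0.01f;
float c = a - b;

// Reinterpret each float's raw IEEE-754 bits as an int and print them in binary.
Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(a), 2).PadLeft(32, '0'));
Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(b), 2).PadLeft(32, '0'));
Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(c), 2).PadLeft(32, '0'));

// The round-trip format shows the full stored value:
Console.WriteLine(c.ToString("R")); // 0.0800000057 or 0.080000006, depending on the runtime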

float.Parse returns incorrect value [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
I have a string X with the value 2.26.
When I parse it using float.Parse(X), it returns 2.2599999904632568. Why so? And how do I overcome this?
But if instead I use double.Parse(X) it returns the exact value, i.e. 2.26.
EDIT: Code
float.Parse(dgvItemSelection[Quantity.Index, e.RowIndex].Value.ToString());
Thanks for help
This is due to limitations in the precision of floating point numbers. They can't represent infinitely precise values and often resort to approximate values. If you need highly precise numbers you should be using Decimal instead.
There is a considerable amount of literature on this subject that you should take a look at. My favorite resource is the following
What Every Computer Scientist Should Know About Floating-Point Arithmetic
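A minimal sketch comparing the three parses of the same string (the G17 format forces all 17 significant digits; default formatting would show fewer):
string x = "2.26";

float f = float.Parse(x);      // nearest float to 2.26
double d = double.Parse(x);    // nearest double to 2.26
decimal m = decimal.Parse(x);  // exactly 2.26

Console.WriteLine(((double)f).ToString("G17")); // 2.2599999904632568 — the float's error becomes visible
Console.WriteLine(d);                           // 2.26
Console.WriteLine(m);                           // 2.26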
Because floats don't properly represent decimal values in base 10.
Use a Decimal instead if you want an exact representation.
Jon Skeet on this topic
Not all numbers can be represented exactly in floating point. Approximations are made, and when you perform operation after operation on an inexact number, the situation gets worse.
See this Wikipedia entry for an example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
If you changed your inputs to something that can be represented exactly in binary floating point (like 1/8), it would work. Try the number 2.25 and it will work as expected.
The only numbers that will work exactly are those that can be represented as a sum of 1/2, 1/4, 1/8, 1/16, etc., since those numbers are represented by the binary 0.1, 0.01, 0.001, 0.0001, etc.
This situation happens with all floating point systems, by nature: .NET, JavaScript, etc.
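A quick way to see this in code: 2.25 is 2 + 1/4 and survives the round trip exactly, while 2.26 does not.
float exact = float.Parse("2.25");    // 2.25 = 2 + 1/4, exactly representable in binary
float inexact = float.Parse("2.26");  // 2.26 has no finite binary expansion

Console.WriteLine((double)exact == 2.25);   // True: no representation error at all
Console.WriteLine((double)inexact == 2.26); // False: the float is only an approximation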
It is returning the best approximation to 2.26 that is possible in a float. You're probably getting more significant digits than that because your float is being printed as a double instead of a float.
When I test
var str = "2.26";
var flt = float.Parse(str);
flt is exactly 2.26 in the VS debugger.
How do you see what it returns?
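A sketch of what is likely happening (assuming the extra digits come from viewing the value at double precision): the same float bits print differently depending on the precision you ask for.
float flt = float.Parse("2.26");

Console.WriteLine(flt);                           // 2.26 — formatted at float precision
Console.WriteLine(((double)flt).ToString("G17")); // 2.2599999904632568 — same value widened to a double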

Why is this simple calculation of two doubles inaccurate? [duplicate]

This question already has answers here:
C# (4): double minus double giving precision problems
Closed 12 years ago.
86.25 - 86.24 = 0.01
Generally, I would think the above statement is true, right?
However, if I enter this
double a = 86.24;
double b = 86.25;
double c = b - a;
I find that c = 0.010000000000005116
Why is this?
Floating point numbers (in this case doubles) cannot represent decimal values exactly. I would recommend reading David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This applies to all languages that deal with floating point numbers whether it be C#, Java, JavaScript, ... This is essential reading for any developer working with floating point arithmetic.
As VinayC suggests, if you need an exact (and slower) representation, use System.Decimal (aka decimal in C#) instead.
Double is the wrong data type for decimal calculations; use decimal for that. The main reason lies in how a double stores the number: it's essentially a binary approximation of the decimal value.
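The same calculation in both types, as a sketch (the exact digits of the double result depend on the runtime's default formatting):
double a = 86.24;
double b = 86.25;
Console.WriteLine(b - a);  // 0.010000000000005116 — a binary approximation error

decimal da = 86.24M;
decimal db = 86.25M;
Console.WriteLine(db - da);  // 0.01 exactly, because decimal stores base-10 digits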
