This question already has answers here:
Why is floating point arithmetic in C# imprecise?
(3 answers)
Closed 1 year ago.
Try this:
(float)100008009
And you will probably get
100008008
The issue is that we get no warning. And this can't be overflow, since float can hold far larger values. So I can't explain this result.
What is the Max value for 'float'?
The issue is that we get no warning.
Floating-point is intended to approximate real-number arithmetic, so rounding during conversion is part of the design: it is normal behavior and therefore produces no warning. A float has only a 24-bit significand, and the closest value to 100008009 that it can represent is 100008008, so that is the result.
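A minimal sketch of what happens (the printed form of float.MaxValue varies slightly between runtimes):
float f = 100008009;               // the int-to-float conversion rounds to the nearest representable float
Console.WriteLine((int)f);         // 100008008: floats in this range are spaced 8 apart (24-bit significand)
Console.WriteLine(float.MaxValue); // roughly 3.4E+38, so this is nowhere near overflow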
This question already has answers here:
Is floating point math broken?
(31 answers)
Why are floating point numbers inaccurate?
(5 answers)
Closed 1 year ago.
My code snippet is below.
int nStdWorkDays;
double dbStdWorkDaysMin;
nStdWorkDays = 21;
dbStdWorkDaysMin = nStdWorkDays * 0.9;
Here, I found that the value of dbStdWorkDaysMin is 18.900000000000002 (rather than 18.9) when I debugged and added a watch in Visual Studio.
Because of this error, '18.9 < dbStdWorkDaysMin' evaluates to true!
I wonder why this happens. What similar traps are there? How can we get the correct calculation result?
Thank you all in advance.
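For reference, a minimal sketch that reproduces the question's result and shows one way around it with decimal (the G17 format is used only to reveal all stored digits; variable names follow the snippet above):
int nStdWorkDays = 21;
double dbStdWorkDaysMin = nStdWorkDays * 0.9;        // 0.9 has no exact binary representation
Console.WriteLine(dbStdWorkDaysMin.ToString("G17")); // 18.900000000000002
Console.WriteLine(18.9 < dbStdWorkDaysMin);          // True, because of the tiny excess

decimal decStdWorkDaysMin = nStdWorkDays * 0.9m;     // decimal stores 0.9 exactly
Console.WriteLine(decStdWorkDaysMin);                // 18.9
Console.WriteLine(18.9m < decStdWorkDaysMin);        // False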
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
C# adds extra decimal digits at the end of the result. My code:
public static double CalcCompoundedInterest()
{
return (1.1 * 1.1);
}
Result: 1.2100000000000002
Does someone have a clue why this happens?
This is not a C# problem; this is the way computers work when they handle decimal values.
You see, 1.1 is stored as a double, a binary floating-point type encoded according to the IEEE 754 standard. Most decimal fractions cannot be stored in binary without picking up a very small error.
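A minimal sketch of what the answer describes (G17 is used only to reveal the stored digits; the decimal lines are a contrast, not part of the original code):
double factor = 1.1;                                  // stored as the nearest double, slightly above 1.1
Console.WriteLine(factor.ToString("G17"));            // 1.1000000000000001
Console.WriteLine((factor * factor).ToString("G17")); // 1.2100000000000002

decimal exact = 1.1m * 1.1m;                          // decimal keeps the decimal digits exactly
Console.WriteLine(exact);                             // 1.21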
This question already has answers here:
Floating point inaccuracy examples
(7 answers)
Closed 6 years ago.
double a = 60.5;
double b = 60.1;
Console.WriteLine(a-b);
The return value is 0.399999999999999, not 0.4.
It is because you use double: double is a binary floating-point type, so values like 60.1 are stored only approximately. decimal, on the other hand, stores decimal digits exactly, so if you change both variables to decimal you get exactly 0.4.
That is why in certain domains, such as the financial industry, decimal is preferred for accuracy.
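A minimal sketch contrasting the two types (the double line uses G17 to show the full stored value; the default format may print fewer digits):
double da = 60.5, db = 60.1;
Console.WriteLine((da - db).ToString("G17")); // 0.39999999999999858: 60.1 is not exactly representable in binary

decimal ma = 60.5m, mb = 60.1m;
Console.WriteLine(ma - mb);                   // 0.4: these decimal fractions are stored exactly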
This question already has answers here:
Why does integer division in C# return an integer and not a float?
(8 answers)
Closed 6 years ago.
I am trying to show the time taken by a process in a label.
So I implemented a stopwatch.
Now let's say the process took 7 milliseconds and I want to show it in seconds, so I wrote 7/1000, which should be 0.007, but it shows 0.
I am displaying it in a label, so if any string conversion can show this format, please suggest it.
You haven't posted any code, but I suppose you are dividing two integer values. Integer division always produces an integer.
If you divide 7/1000.0 instead (and/or cast at least one operand to a floating-point type such as double), the division will give you the expected result.
You are probably using an int, which has no fractional part. Try changing it to a double.
The simplest fix here would be to change your calculation to
seconds/1000.0
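A minimal sketch, assuming the timing comes from a Stopwatch and that the resulting string is what goes into the label (the UI code was not posted, so that part is only indicated in a comment):
using System.Diagnostics;

var watch = Stopwatch.StartNew();
// ... the measured work would run here ...
watch.Stop();

long ms = watch.ElapsedMilliseconds;          // e.g. 7
Console.WriteLine(ms / 1000);                 // 0: integer division truncates
double seconds = ms / 1000.0;                 // dividing by a double literal forces floating-point division
Console.WriteLine(seconds.ToString("0.000")); // "0.007"; this string could be assigned to the label's Text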
This question already has answers here:
Find number of decimal places in decimal value regardless of culture
(20 answers)
Closed 8 years ago.
If I have a number, how can I determine the number of decimals?
e.g. for 0.0001 I want to get the result 4
The duplicate suggested above is less suitable than this one, because it is talking about culture-independent code, while this question is only about the digits after the decimal point, so there is no need to introduce any extra overhead:
Finding the number of places after the decimal point of a Double
But both are good threads.
You can't really. A double is a binary floating-point type, so the stored value is only an approximation of the decimal you typed and has no well-defined number of decimal places.
You could hack something together using ToString:
using System.Globalization;

double d = 0.994562d;
string s = d.ToString(CultureInfo.InvariantCulture);
// Counts the characters after the '.'; assumes the formatted value actually contains one.
int numberOfDecimals = s.Length - s.IndexOf('.') - 1;
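If the value can be kept as a decimal rather than a double, the count is exact, because a decimal carries its own scale. A minimal sketch (the helper name is made up for the example):
// The scale (digits after the decimal point) sits in bits 16-23 of the
// fourth element returned by decimal.GetBits.
// Note: trailing zeros count toward the scale, so 1.10m gives 2.
int CountDecimalPlaces(decimal value) =>
    (decimal.GetBits(value)[3] >> 16) & 0xFF;

Console.WriteLine(CountDecimalPlaces(0.0001m)); // 4
Console.WriteLine(CountDecimalPlaces(18.9m));   // 1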