This question already has answers here:
Why is floating point arithmetic in C# imprecise?
(3 answers)
Closed 7 years ago.
On my computer, when I hardcode a variable with the value 0.3 (or possibly some other value) and then inspect it in the debugger, its value shows as 0.29999992, but on my friend's computer it stays at 0.3.
// stores 0.29999992
double variable = 0.3;
Is there a configuration problem or something related?
Thanks
This is just an artifact of how binary floating-point works. There is no way of accurately representing 0.3 in a double (or a float for that matter). If you need that (e.g. for monetary applications), use decimal instead.
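A quick sketch of the difference (a top-level-statements program; the "G17" round-trip format forces enough digits to show what a double actually stores):
using System;

double d = 0.3;
decimal m = 0.3m;

// "G17" reveals the stored binary approximation.
Console.WriteLine(d.ToString("G17")); // 0.29999999999999999
Console.WriteLine(m);                 // 0.3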
Welcome to the world of floating-point numbers. Some seemingly innocuous numbers cannot be represented exactly in floating-point notation; a very close approximation is used instead.
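For example, the same literal can display differently depending on how many digits are requested, which may be why the two debuggers appeared to disagree (a sketch; the 0.29999992 in the question suggests a float somewhere in the pipeline):
using System;

float f = 0.3f;
Console.WriteLine(f);                // 0.3 (default display rounds)
Console.WriteLine(f.ToString("G9")); // 0.300000012 (the stored approximation)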
This question already has answers here:
Difference between decimal, float and double in .NET?
(18 answers)
Closed 3 years ago.
I really don't understand why, but my double array sometimes rounds my values, even though it shouldn't. The weird thing is that it only does this sometimes, as you can see in the picture. All of the array elements should have the same long precision. The variable WertunterschiedPerSec has this precision every time as well, but if I add it to Zwischenwerte[i], the result sometimes just gets less precise, even though I don't change anything anywhere. Does anybody know why?
I would suggest using a decimal, but let's get into the exact details:
double, float and decimal are all floating-point types.
The difference is that double and float are base 2, while decimal is base 10.
Base-2 numbers cannot accurately represent all base-10 numbers.
This is why you're seeing what appears to be "rounding".
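A small sketch of that effect (illustrative values, not the ones from the question): repeatedly adding a base-10 step to a double accumulates error, while decimal stays exact.
using System;

double dSum = 0;
decimal mSum = 0;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // 0.1 has no exact base-2 representation
    mSum += 0.1m;  // 0.1m is exact in base 10
}
Console.WriteLine(dSum.ToString("G17")); // 0.99999999999999989
Console.WriteLine(mSum);                 // 1.0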
I would use decimal instead of double, because you can do basically the same things with it, except for the Math functions, which take double. Try to use decimal, and if you need to convert, use:
double variable = Convert.ToDouble(decimalVar);
decimal is meant for decimal values, so it holds them more accurately than a float or double.
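A sketch of that boundary conversion (variable names are made up): keep the value in decimal and convert only when calling a double-only API such as Math.Sqrt.
using System;

decimal price = 0.1m + 0.2m;  // exactly 0.3 in decimal
Console.WriteLine(price);      // 0.3

// Math.Sqrt only accepts double, so convert at the call site.
double root = Math.Sqrt(Convert.ToDouble(price));
Console.WriteLine(root);       // 0.5477225575051661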
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
I know that floating point variable stores the number in a sign-exponent-fraction format (as it's stated in the IEEE 754), it's never precise and I should probably never compare two floats without specifying the precision.
But why exactly does 0.09f - 0.01f give you the value 0.0800000057f? What exactly happens under the hood of the .NET VM and in memory when I do that subtraction?
0.09 is represented as 00111101101110000101000111101100 in IEEE 754, which is closest to the decimal value of 0.09000000357627869
0.01 is represented as 00111100001000111101011100001010 in IEEE 754, which is closest to the decimal value of 0.009999999776482582
Their subtraction yields 00111101101000111101011100001011, which is closest to the decimal value of 0.08000000566244125
You can see how the actual subtraction is done here
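To reproduce those bit patterns yourself, here's a minimal sketch (BitConverter.SingleToInt32Bits needs .NET Core 2.0 or later; on older frameworks use BitConverter.ToInt32(BitConverter.GetBytes(f), 0) instead):
using System;

float a = 0.09f, b = 0.01f;

Console.WriteLine(Bits(a));     // 00111101101110000101000111101100
Console.WriteLine(Bits(b));     // 00111100001000111101011100001010
Console.WriteLine(Bits(a - b)); // 00111101101000111101011100001011
Console.WriteLine((a - b).ToString("G9")); // 0.0800000057

static string Bits(float f) =>
    Convert.ToString(BitConverter.SingleToInt32Bits(f), 2).PadLeft(32, '0');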
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
.Net float to double conversion
I thought I understood floating point, but can someone explain the following?
float f = 1.85847987E+9F;
double d = Convert.ToDouble(f);
d is now converted to a string as 1858479872.0. I'm assuming the extra 2 is because double cannot represent the floating point number exactly.
My question is why it seems to be able to represent the same number when assigned directly
double d = 1.85847987E+9;
and it is shown exactly as 1858479870.0
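A minimal side-by-side repro (the printed digits assume a recent .NET runtime's shortest round-trip formatting):
using System;

float f = 1.85847987E+9F;
double viaFloat = Convert.ToDouble(f);
double direct = 1.85847987E+9;

Console.WriteLine(viaFloat); // 1858479872 (the nearest float value, widened)
Console.WriteLine(direct);   // 1858479870 (double holds this integer exactly)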
Because double can represent 1.85847987E+9 precisely, and float cannot.
Why doesn't the compiler complain about float f = 1.85847987E+9F; if it can't represent it properly?
As per C# specification, section 4.1.6 Floating point types
The floating-point operators, including the assignment operators, never produce exceptions.
The problem is not double but float. float is limited to 32 bits, while double uses 64 bits for precision.
Because a float consists of a sign, a mantissa and an exponent. See Jon Skeet's answer here for how to extract those values.
For 1.85847987E+9F, the mantissa is 7259687 and the exponent is 8, and mantissa << exponent equals 1858479872. Since a float is limited to about 7 significant digits, any digits beyond that reflect the nearest representable value, not your input. You can test this easily by inputting 123456789F.
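A sketch of that extraction (following the approach of the linked answer; valid for normal, positive floats only):
using System;

float f = 1.85847987E+9F;
int bits = BitConverter.SingleToInt32Bits(f); // .NET Core 2.0+

int mantissa = (bits & 0x7FFFFF) | (1 << 23);    // 23 stored bits plus the implicit leading 1
int exponent = ((bits >> 23) & 0xFF) - 127 - 23; // unbias, then account for the mantissa width

// Normalize by shifting out trailing zero bits.
while ((mantissa & 1) == 0)
{
    mantissa >>= 1;
    exponent++;
}

Console.WriteLine($"mantissa={mantissa}, exponent={exponent}"); // mantissa=7259687, exponent=8
Console.WriteLine((long)mantissa << exponent);                  // 1858479872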
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
What is the difference between Decimal, Float and Double in C#?
Today I'm wondering about Double in .NET. I've been using it with Int32 these past few days and started wondering what the max value is.
The MSDN page for Double.MaxValue says 1.7976931348623157E+308. I'm pretty sure I'm reading that wrong.
How many bytes does Double take up (in memory)?
What is the actual maximum number (explain the E+308)?
Is Double.MaxValue bigger than UInt32? Bigger than UInt64?
And while we are at it, what is the difference between Float and Double?
Basically,
double is a 64-bit floating-point value and float is a 32-bit one.
So double uses twice as many bits, giving it far greater range and precision than float.
http://msdn.microsoft.com/en-us/library/678hzkk9(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/b1e65aza(v=vs.71).aspx
Just read the first few lines at those links and you'll get the idea.
About E+308: although 2^64 is far less than 1E+308, remember that a double is not an exact number; it stores only about 15-17 significant digits plus an exponent, so it does not need to store all ~308 digits. That is how the double structure can represent magnitudes up to about 1.8E+308 in 64 bits.
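A quick sketch answering the concrete questions:
using System;

Console.WriteLine(sizeof(double));  // 8 (bytes in memory)
Console.WriteLine(double.MaxValue); // 1.7976931348623157E+308
Console.WriteLine(ulong.MaxValue);  // 18446744073709551615 (about 1.8E+19)

// Double.MaxValue dwarfs both UInt32.MaxValue and UInt64.MaxValue.
Console.WriteLine(double.MaxValue > ulong.MaxValue); // True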
This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
C# (4): double minus double giving precision problems
86.25 - 86.24 = 0.01
Generally, I would think the above statement is true, right?
However, if I enter this
double a = 86.24;
double b = 86.25;
double c = b - a;
I find that c = 0.010000000000005116
Why is this?
Floating-point numbers (in this case doubles) cannot represent all decimal values exactly. I would recommend reading David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This applies to all languages that deal with floating point numbers whether it be C#, Java, JavaScript, ... This is essential reading for any developer working with floating point arithmetic.
As VinayC suggests, if you need an exact (and slower) representation, use System.Decimal (aka decimal in C#) instead.
double is the wrong data type for decimal calculations; use decimal for that. The main reason lies in how double stores the number: it's essentially a binary approximation of the decimal value.
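A minimal sketch of the fix, running the same subtraction in both types (the double output assumes a modern runtime's round-trip formatting):
using System;

double a = 86.24, b = 86.25; // 86.25 is exactly representable in binary; 86.24 is not
Console.WriteLine(b - a);    // 0.010000000000005116

decimal da = 86.24m, db = 86.25m;
Console.WriteLine(db - da);  // 0.01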