I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off what I'd expect, I struggle to believe it's a rounding issue.
However double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38) so I don't see why it doesn't parse it as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact value to 650512.56 (for example) is 650512.5625... which is then being shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
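Here's a minimal sketch of the difference, using the 650512.56 example from above (the comments show what the stored values work out to; the exact default display can vary slightly by runtime):

using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        string input = "650512.56";

        float f = float.Parse(input, CultureInfo.InvariantCulture);
        double d = double.Parse(input, CultureInfo.InvariantCulture);
        decimal m = decimal.Parse(input, CultureInfo.InvariantCulture);

        // The float stores the nearest representable value, 650512.5625:
        Console.WriteLine(f.ToString("F4", CultureInfo.InvariantCulture));   // 650512.5625
        // The double is much closer, but 17 digits reveal it's still not exactly 650512.56:
        Console.WriteLine(d.ToString("G17", CultureInfo.InvariantCulture));
        // The decimal holds the value exactly as written:
        Console.WriteLine(m.ToString(CultureInfo.InvariantCulture));         // 650512.56
    }
}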
The mantissa of a float in C# has 23 bits, which means it can hold 6-7 significant decimal digits. In your example 650512.59 you have 8, and it is just that last digit which is 'wrong'. Double has a mantissa of 52 bits (15-16 digits), so of course it will correctly show all 8 or 9 of your significant digits.
See here for more: Type float in C#
I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak, and for the long explanation.
I know a float (e.g. 10.01F) has a precision of 6 to 9 digits. So, let's say we have the next code:
float myFloat = 1.000001F;
Console.WriteLine(myFloat);
I get the exact number in console. Now, let's use the next code:
myFloat = 1.00000006F;
Console.WriteLine(myFloat);
A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.
This is my first doubt. Does precision depend on the number itself or on the computer's architecture?
Furthermore, data is stored as bits in the computer. Bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming the number back to decimal. For example:
(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
My logic after this is: maybe a float number doesn't lose information when stored, maybe such information is lost when the number is converted back to decimal to show it to the user.
E.g.
float myFloat = 999999.11F + 1.11F;
The result of the above should be: 1000000.22. However, since this number exceeds the precision of a float, I should see a different number, which indeed happens: 1000000.25
There is a 0.03 difference. In order to see if the actual result is 1000000.22 I did the next condition:
if (myFloat == 1000000.22F) {
    Console.WriteLine("Real result = 1000000.22");
}
And it actually prints it: Real result = 1000000.22.
So... the information loss occurs when converting the bits back to decimal? or it also happens in the lower levels of computing and my example was just a coincidence?
1.000001F in source code is converted to the float value 8,388,616 • 2^−23, which is 1.00000095367431640625.
1.00000006F in source code is converted to the float value 8,388,609 • 2^−23, which is 1.00000011920928955078125.
Console.WriteLine shows only some of the value of these; it rounds its display to a limited number of digits, by default.
999999.11F is converted to 15,999,986 • 2^−4, which is 999,999.125. 1.11F is converted to 9,311,355 • 2^−23, which is 1.11000001430511474609375. When these are added using real-number mathematics, the result is 8,388,609,971,323 • 2^−23. That cannot be represented in a float, because the fraction portion of a float (called the significand) can only have 24 bits, so its maximum value as an integer is 16,777,215. If we divide that significand by 2^19 to reduce it to that limit, we get approximately 8,388,609,971,323/2^19 • 2^−23 • 2^19 = 16,000,003.76 • 2^−4. Rounding that significand to an integer produces 16,000,004 • 2^−4. So, when those two numbers are added, float arithmetic rounds the result and produces 16,000,004 • 2^−4, which is 1,000,000.25.
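Here's a quick sketch to check those intermediate values yourself (the comments show the values derived above):

float a = 999999.11F;
float b = 1.11F;
Console.WriteLine(a.ToString("F3"));   // 999999.125
Console.WriteLine(b.ToString("G9"));   // 1.11000001 (the stored value is 1.11000001430511474609375)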
So... the information loss occurs when converting the bits back to decimal? or it also happens in the lower levels of computing and my example was just a coincidence?
Converting a decimal numeral to floating-point generally introduces a rounding error.
Adding floating-point numbers generally introduces a rounding error.
Converting a floating-point number to a decimal numeral with limited precision generally introduces a rounding error.
The rounding occurs both when you write 1000000.22F in your code (the compiler must find the exponent and mantissa that give a result closest to the decimal number you typed), and again when converting to decimal to display.
There isn't any decimal/binary type of rounding in the actual arithmetic operations, although arithmetic operations do have rounding error related to the limited number of mantissa bits.
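Putting it together in code (a sketch; it shows why the equality check in the question succeeds - both sides round to the same float, 1000000.25, so neither of them is really 1000000.22):

float myFloat = 999999.11F + 1.11F;
Console.WriteLine(myFloat == 1000000.22F);        // True - both sides are the float 1000000.25
Console.WriteLine(myFloat.ToString("F2"));        // 1000000.25
Console.WriteLine((1000000.22F).ToString("F2"));  // 1000000.25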
With Single precision (32 bits), the bits are divided like this:
So we have 23 bits of mantissa/significand.
So we can represent 2^23 values (via 23 bits), which is 8388608 --> 7 digits long.
BUT
I was reading that the mantissa is normalized (the leading digit in the mantissa will always be a 1) - so the pattern is actually 1.mmm and only the mmm is represented in the mantissa.
For example: 0.75 is what gets stored in the mantissa, but the value actually represented is 1.75.
Question #1
So basically it adds one more digit of precision, no?
If so, then we have 8 significant digits!
So why does MSDN say 7?
Question #2
In double there are 52 bits for the mantissa (bits 0..51).
If I add 1 for the normalized mantissa, that's 2^53 possibilities, which is 9007199254740992 (16 digits),
and MS does say 15-16.
Why this inconsistency? Am I missing something?
It doesn't add one more decimal digit - just a single binary digit. So instead of 23 bits, you have 24 bits. This is handy, because the only number you can't represent as starting with a one is zero, and that's a special value.
In short, you're not looking at 2 ^ 24 as a count of base-10 values - you're looking at a resolution of 2 ^ (-24). That's the most important difference between float/double and decimal. decimal is what you imagine floats to be, i.e. a simple exponent-shifted, base-10 number. float and double aren't that.
Now, decimal digits versus binary digits is a tricky matter. You're mistaken in your understanding that the precision has anything to do with the 2 ^ 24 figure - that would only be true if you were talking about e.g. the decimal type, which actually stores decimal values as decimal point offsets of a normal (huge-ass) integer.
Just like 1 / 3 cannot be written in decimal (0.333333...), many simple decimal numbers can't be represented in a float precisely (0.2 is the typical example). decimal doesn't have a problem with that - it's just 2 shifted one digit to the right, easy peasy. For floats, however, you have to represent this value as a sum of negative powers of two - 0.5, 0.25, 0.125 ... The same would apply in the opposite direction if 2 wasn't a factor of 10 - every finite binary "decimal" can be represented with finite precision in decimal.
Now, in fact, float can easily represent a number with 24 decimal digits - it just has to be 2 ^ (-24) - a number you're not going to encounter in your usual day job, and a weird number in decimal. So where does the 7 (actually more like 7.22...) come from? Simple, just do a decimal logarithm of 2 ^ (-24).
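That's easy to check (a minimal sketch):

Console.WriteLine(24 * Math.Log10(2));   // ~7.22  -> the "7 digits" quoted for float
Console.WriteLine(53 * Math.Log10(2));   // ~15.95 -> the "15-16 digits" quoted for double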
The fact that it seems that 0.2 can be represented "exactly" in a float is simply because everytime you e.g. convert it to a string, you're rounding. So, even though the number isn't 0.2 exactly, it ends up that way when you convert it to a decimal number.
All this means that when you need decimal precision, you want to use decimal, as simple as that. This is not because it's a better base for calculations, it's simply because humans use it, and they will not be happy if your application gives different results from what they calculate on a piece of paper - especially when dealing with money. Accountants are very focused on having everything correct to the least significant digit.
Floats are used where it's not about decimal precision, but rather about generally having some sort of precision - this makes them well suited for physics calculations and similar, because you don't actually care about having the number come up the same in decimal - you're working with a given precision, and you're going to get that - 24 significant binary "decimals".
The implied leading 1 adds one more binary digit of precision, not decimal.
So, I've got this floating point number:
(float)-123456.668915
It's a number chosen at random because I'm doing some unit testing for a chunk of BCD code I'm writing. Whenever I go to compare the number above with a string ("-123456.668915" to be clear), I'm getting an issue with how C# rounded the number. It rounds it to -123456.7. This has been checked in NUnit and with straight console output.
Why is it rounding like this? According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
Thanks for the help!
According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
"6 digits after the decimal point" isn't the same as "6 digits of precision". The number of digits of precision is the number of significant digits which can be reliably held. Your number has 12 significant digits, so it's not at all surprising that it can't be represented exactly by float.
Note that the number it's (supposedly) rounding to, -123456.7, does have 7 significant digits. In fact, that's not the value of your float either. I strongly suspect the exact value is -123456.671875, as that's the closest float to -123456.668915. However, when you convert the exact value to a string representation, the result is only 7 digits, partly because beyond that point the digits aren't really meaningful anyway.
You should probably read my article about binary floating point in .NET for more details.
The float type has a precision of 24 significant bits (except for denormals), which is equivalent to 24 log10 2 ≈ 7.225 significant decimal digits. The number -123456.668915 has 12 significant digits, so it can't be represented accurately.
The actual binary value, rounded to 24 significant bits, is -11110001001000000.1010110. This is equivalent to the fraction -7901227/64 = -123456.671875. Rounding to 7 significant digits gives the -123456.7 you see.
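A small sketch showing both of those displays (note that the default ToString() output depends on the runtime - .NET Framework rounds to 7 significant digits, while newer runtimes print the shortest round-trippable string):

float f = -123456.668915F;
Console.WriteLine(f);                 // -123456.7 on .NET Framework (7 significant digits)
Console.WriteLine(f.ToString("F6"));  // -123456.671875 - the value that is actually stored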
Why is there a bias in floating point ops? Any specific reason?
Output:
160
139
static void Main()
{
    float x = (float) 1.6;
    int y = (int)(x * 100);
    float a = (float) 1.4;
    int b = (int)(a * 100);
    Console.WriteLine(y);
    Console.WriteLine(b);
    Console.ReadKey();
}
Any rational number whose denominator (in lowest terms) is not a power of 2 will lead to an infinite number of digits when represented in binary. Here you have 8/5 and 7/5. Therefore there is no exact binary representation as a floating-point number (unless you have infinite memory).
The exact binary representation of 1.6 is 1.10011001100110011001100110011001100...
The exact binary representation of 1.4 is 1.01100110011001100110011001100110011...
Both values have an infinite number of digits (1100 is repeated endlessly).
float values have a precision of 24 bits. So the binary representation of any value will be rounded to 24 bits. If you round the given values to 24 bits you get:
1.6: 110011001100110011001101 (decimal 13421773) - rounded up
1.4: 101100110011001100110011 (decimal 11744051) - rounded down
Both values have an exponent of 0 (the first bit is 2^0 = 1, the second is 2^-1 = 0.5 etc.).
Since the first bit in a 24 bit value is 2^23 you can calculate the exact decimal values by dividing the 24 bit values (13421773 and 11744051) by two 23 times.
The values are: 1.60000002384185791015625 and 1.39999997615814208984375.
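You can reproduce those two values with a quick division (a sketch; decimal can hold the quotients exactly here because the divisor is a power of two):

Console.WriteLine(13421773m / 8388608m);   // 1.60000002384185791015625
Console.WriteLine(11744051m / 8388608m);   // 1.39999997615814208984375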
When using floating-point types you always have to consider that their precision is finite. Values that can be written exact as decimal values might be rounded up or down when represented as binaries. Casting to int does not respect that because it truncates the given values. You should always use something like Math.Round.
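For example (a sketch; the truncated result matches the output in the question, though the exact outcome of the bare cast can depend on intermediate precision):

float a = (float)1.4;
int truncated = (int)(a * 100);           // 139 - the cast just drops the fractional part
int rounded = (int)Math.Round(a * 100);   // 140 - round to the nearest integer first
Console.WriteLine(truncated);
Console.WriteLine(rounded);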
If you really need an exact representation of rational numbers you need a completely different approach. Since rational numbers are fractions you can use integers to represent them. Here is an example of how you can achieve that.
However, you can not write Rational x = (Rational)1.6 then. You have to write something like Rational x = new Rational(8, 5) (or new Rational(16, 10) etc.).
This is due to the fact that floating point arithmetic is not precise. When you set a to 1.4, internally it may not be exactly 1.4, just as close as can be made with machine precision. If it is fractionally less than 1.4, then multiplying by 100 and casting to integer will take only the integer portion which in this case would be 139. You will get far more technically precise answers but essentially this is what is happening.
In the case of your output for the 1.6 case, the floating point representation may actually be minutely larger than 1.6 and so when you multiply by 100, the total is slightly larger than 160 and so the integer cast gives you what you expect. The fact is that there is simply not enough precision available in a computer to store every real number exactly.
See this link for details of the conversion from floating point to integer types http://msdn.microsoft.com/en-us/library/aa691289%28v=vs.71%29.aspx - it has its own section.
The floating point types float (32 bit) and double (64 bit) have a limited precision and more over the value is represented as a binary value internally. Just as you cannot represent 1/7 precisely in a decimal system (~ 0.1428571428571428...), 1/10 cannot be represented precisely in a binary system.
You can however use the decimal type. It still has a limited (though high) precision, but the numbers are represented in a decimal way internally. Therefore a value like 1/10 is represented internally exactly as 0.1000000000000000000000000000. 1/7 is still a problem for decimal, but at least you don't get a loss of precision by converting to binary and then back to decimal.
Consider using decimal.
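A sketch of what that looks like for the values in the question:

decimal x = 1.6m;
decimal a = 1.4m;
Console.WriteLine((int)(x * 100));   // 160
Console.WriteLine((int)(a * 100));   // 140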
In the lunch break we started debating about the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not make assumptions about this, and it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated by the C# reference, the precision is from 15 to 16 digits (depending on the decimal values represented) before or after the decimal point.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think about double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
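A short sketch of the spacing at those magnitudes (assuming standard IEEE 754 doubles):

double d = 4503599627370496.0;    // 2^52 - neighbouring doubles are 1.0 apart here
Console.WriteLine(d + 0.5 == d);  // True - the 0.5 is rounded away (round half to even)
double e = 9007199254740992.0;    // 2^53 - neighbouring doubles are 2.0 apart
Console.WriteLine(e + 1.0 == e);  // True - adding 1.0 has no effect at all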
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has one digit before the decimal point, so the precision of its fractional part is fixed. On the other hand the number of digits after the decimal point in a double depends also on its exponent; numbers whose exponent exceeds the number of digits in the fractional part of the significand do not have a fractional part themselves.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
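A sketch using that value (the round-trip output assumes a runtime that prints enough digits to round-trip, which modern runtimes do by default):

double d = 0.92107099070578813;
Console.WriteLine(d.ToString("G15"));   // 0.921070990705788 - 15 digits is not enough to round-trip
Console.WriteLine(d.ToString("R"));     // 0.92107099070578813 - the full 17 digits come back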