I have the double value 10293.01416015625 in C# and I'm trying to convert it to float. Since a float has only 24 bits of precision, I expected to get the result 10293.0141, but I'm getting 10293.0137 instead.
double value = 10293.01416015625;
float converted = (float)value;
Expected value - 10293.0141
Value I'm getting - 10293.0137
Thanks in advance
From the System.Single documentation:
A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
Your result is correct up to 7 significant digits (10293.01). You shouldn't expect to be able to get more than that with float.
The exact values of the two floats closest to 10293.01416015625 are 10293.013671875 and 10293.0146484375. Both are exactly 0.00048828125 away from the value you're trying to represent, so the default round-to-nearest-even rule breaks the tie toward the one with an even significand, which is 10293.013671875.
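You can see both neighbours from C# (a sketch; casting the float back to double is lossless, so it reveals the float's exact value):
double value = 10293.01416015625;
float converted = (float)value;
Console.WriteLine((double)converted); // 10293.013671875 - the neighbour that was chosen
Console.WriteLine((double)10293.0146484375f); // 10293.0146484375 - the other neighbour
Console.WriteLine(value - (double)converted); // 0.00048828125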
I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak, and for the long explanation.
I know a float (e.g. 10.01F) has a precision of 6 to 9 digits. So, let's say we have the following code:
float myFloat = 1.000001F;
Console.WriteLine(myFloat);
I get the exact number in the console. Now, let's use the following code:
myFloat = 1.00000006F;
Console.WriteLine(myFloat);
A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.
This is my first doubt: does precision depend on the number itself, or on the computer's architecture?
Furthermore, data is stored as bits in the computer, and bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming it back to decimal. For example:
(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
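The same effect is easy to observe in C# (a sketch; the G17 format prints enough digits to expose the stored value):
float g = 1.0001f; // stored as 8,389,447 × 2^-23
Console.WriteLine(((double)g).ToString("G17")); // 1.0001000165939331 - not exactly 1.0001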
My logic after this is: maybe a float number doesn't lose information when stored; maybe that information is lost when the number is converted back to decimal to show it to the user.
E.g.
float myFloat = 999999.11F + 1.11F;
The result of the above should be: 1000000.22. However, since this number exceeds the precision of a float, I should see a different number, which indeed happens: 1000000.25
There is a 0.03 difference. In order to see if the actual result is 1000000.22, I wrote the following check:
if (myFloat == 1000000.22F) {
Console.WriteLine("Real result = 1000000.22");
}
And it actually prints it: Real result = 1000000.22.
So... does the information loss occur when converting the bits back to decimal? Or does it also happen at lower levels of computing, and my example was just a coincidence?
1.000001F in source code is converted to the float value 8,388,616 × 2^-23, which is 1.00000095367431640625.
1.00000006F in source code is converted to the float value 8,388,609 × 2^-23, which is 1.00000011920928955078125.
Console.WriteLine shows only part of these values; by default, it rounds the display to a limited number of digits.
999999.11F is converted to 15,999,986 × 2^-4, which is 999,999.125. 1.11F is converted to 9,311,355 × 2^-23, which is 1.11000001430511474609375. When these are added using real-number mathematics, the result is 8,388,609,971,323 × 2^-23. That cannot be represented in a float, because the fraction portion of a float (called the significand) can only have 24 bits, so its maximum value as an integer is 16,777,215. If we divide that significand by 2^19 to reduce it below that limit, we get approximately 8,388,609,971,323/2^19 × 2^(-23+19) = 16,000,003.76 × 2^-4. Rounding that significand to an integer produces 16,000,004. So, when those two numbers are added, float arithmetic rounds the result and produces 16,000,004 × 2^-4, which is 1,000,000.25.
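A sketch confirming this (the G9 format prints enough digits to round-trip a float):
float sum = 999999.11F + 1.11F;
Console.WriteLine(sum.ToString("G9")); // 1000000.25
Console.WriteLine(sum == 1000000.22F); // True - 1000000.22F also converts to 1000000.25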
So... does the information loss occur when converting the bits back to decimal? Or does it also happen at lower levels of computing, and my example was just a coincidence?
Converting a decimal numeral to floating-point generally introduces a rounding error.
Adding floating-point numbers generally introduces a rounding error.
Converting a floating-point number to a decimal numeral with limited precision generally introduces a rounding error.
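The classic 0.1 + 0.2 case demonstrates all three (a sketch; G17 forces a round-trippable display of a double):
Console.WriteLine(0.1 + 0.2 == 0.3); // False - the literals and the sum have each been rounded
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004
Console.WriteLine(0.1 + 0.2); // may print just 0.3 - the default display rounds as well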
The rounding occurs both when you write 1000000.22F in your code (the compiler must find the exponent and mantissa that give the result closest to the decimal number you typed), and again when converting to decimal for display.
There isn't any decimal/binary type of rounding in the actual arithmetic operations, although arithmetic operations do have rounding error related to the limited number of mantissa bits.
Sorry for the daft question, but I get this value back from the database:
"7.545720553985866E+29"
I need to convert this value to a decimal, rounded to 6 digits. What is the best way to do that? I tried
var test = double.Parse("7.545720553985866E+29");
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
but the value remains unchanged and the conversion crashes.
Math.Round rounds to N digits to the right of the decimal point. Your number has NO digits to the right of the decimal (it is equivalent to 754,572,055,398,586,600,000,000,000,000), so rounding it does not change the value.
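For example (a quick sketch; the "R" format requests a round-trippable display):
Console.WriteLine(Math.Round(1234.5678, 2)); // 1234.57 - rounds digits right of the point
Console.WriteLine(Math.Round(7.545720553985866E+29, 6).ToString("R")); // 7.545720553985866E+29 - unchanged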
If you want to round to N significant digits then look at some of the existing answers:
Round a double to x significant figures
Rounding the SIGNIFICANT digits in a double, not to decimal places
the conversion crashes.
That's because the value is too large for a decimal. The largest value a decimal can hold is 7.9228E+28 - your value is about 10 times larger than that.
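For reference (a quick sketch):
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335, about 7.9228E+28
Console.WriteLine(double.Parse("7.545720553985866E+29") > (double)decimal.MaxValue); // True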
Maybe you can take a substring of it and then parse:
var s = "7.545720553985866E+29".Substring(0, 8); // "7.545720"
var test = double.Parse(s); // note: the E+29 exponent is discarded
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test); // 7.54572
You can use this to round to 6 digits after the leading digit:
int magnitude = (int)Math.Log10(test); // 29 for this value
double scale = Math.Pow(10, magnitude - 6);
test = Math.Round(test / scale) * scale;
The resulting value from that is
7.545721E+29
This works by using Math.Log10 to get the power of 10 in test, truncating that to an integer, subtracting 6, and scaling so that Math.Round keeps the desired digits.
As noted by others, Math.Round works to the given number of decimal places, and it does not accept a negative digit count; dividing by the scale before rounding and multiplying back afterwards has the same effect as rounding to the left of the decimal point.
You should be aware that Math.Log10 is not perfectly accurate, and truncating its result to an int may be off from the expected value by one. This happens rarely, but it does happen. Also, even if the computed value is correct, converting the value to a string (such as when you print it) may give a different result than expected. If you need perfect accuracy, you would be better off working from the string representation of the value.
The value 0.105700679f should be convertible precisely to decimal. decimal clearly is able to hold this value precisely:
decimal d = 0.105700679m;
Console.WriteLine(d); //0.105700679
float also is able to hold the value precisely:
float f = 0.105700679f;
Console.WriteLine(f == 0.105700679f); //True
Console.WriteLine(f == 0.1057007f); //False
Console.WriteLine(f.ToString("R")); //Round-trip representation, 0.105700679
(Note that float.ToString() seems to drop precision. I just asked about that as well.)
The IEEE 754 converter at https://www.h-schmidt.net/FloatConverter/IEEE754.html shows the same thing: the float stored for 0.105700679 is exactly 0.105700679123401641845703125.
It seems the value really is stored like that. I am seeing this value right now in the debugger; I received it over the network as an IEEE float. This value exists!
But when I convert from float to decimal precision is dropped:
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); //0.1057007
Console.WriteLine(d.ToString("F15")); //0.105700700000000
Console.WriteLine(((double)d).ToString("R")); //0.1057007
I do understand that floating point numbers are imprecise. But here I see no reason for a loss of information. This is on .NET 4.7.1. How can I convert from float to decimal and preserve precision in all cases where doing so is possible?
This is important to me because I am processing financial market data and joining data sources based on price. Data is given to me as a float over a 3rd party network protocol. I need to import that float to a decimal representation.
Try converting f to double and then converting that to decimal.
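For example (a sketch; the printed digits are what .NET Framework produces, where a double-to-decimal conversion keeps up to 15 significant digits while float-to-decimal keeps only 7):
float f = 0.105700679f;
Console.WriteLine((decimal)f); // 0.1057007
Console.WriteLine((decimal)(double)f); // 0.105700679123402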
I suspect you are seeing shortcomings in .NET.
Let’s look at some of the code in your question line by line. In float f = 0.105700679f;, the decimal numeral “0.105700679” is converted to 32-bit binary floating-point. The result of this is the number 0.105700679123401641845703125.
Next is Console.WriteLine(f == 0.105700679f);, which compares f to the value represented by 0.105700679f. Since the f suffix denotes the float type, 0.105700679f represents the decimal numeral "0.105700679" converted to 32-bit binary floating-point. So of course it has the same value as it did before, and the test for equality returns true. You have not tested whether f is equal to 0.105700679; you have tested whether it is equal to 0.105700679f, and it is.
Then we have decimal d = (decimal)f;. Based on the results you are seeing, it appears to me this conversion produces a number with only seven significant digits, 0.1057007. I presume Microsoft has decided that, because a float is only "capable" of holding seven decimal digits, only seven should be produced when converting to decimal. (This is both a false understanding of what the value of a binary floating-point number represents and an incorrect number: a conversion from decimal to float and back is only guaranteed to preserve six decimal digits, while a conversion from float to decimal and back requires nine decimal digits to preserve the float value. So seven is just wrong.)
If there is a solution to your problem, it is to convert f to decimal by some means other than the cast (decimal) f. I do not know C#, so I cannot say what the solution should be. I suggest trying to convert to double first and then decimal. Quite likely C# will convert float to double without changing the value, and then the conversion to decimal will produce more decimal digits. Another possibility could be converting f to a string with the number of decimal digits you desire and then converting the string to a decimal number.
Also, you say the data is coming via a third-party network protocol. It appears the protocol is incapable of representing the actual values it is supposed to be communicating. That is a serious defect that you should complain to the third party about. I know that may seem futile, but it should be done. Also, it casts doubt on your need to convert the float value to 0.105700679. Most eight-digit decimal numbers in the origin of this data could not survive the conversion to float and back to decimal without change. So, even if you are able to convert float to eight decimal digits, most of the results will differ from the original pre-transport values. E.g., if the original number were 0.105700680, it is changed to 0.105700679123401641845703125 when converted to a float to be sent over the network. So the receiver receives 0.105700679123401641845703125, and it is impossible for them to know the original number was 0.105700680 rather than 0.105700679.
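For example, the string route suggested above might look like this (a sketch; "R" produces a round-trippable representation, as shown in the question):
float f = 0.105700679f;
decimal viaString = decimal.Parse(f.ToString("R")); // parses "0.105700679"
Console.WriteLine(viaString); // 0.105700679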
The C# float data type has a precision of 7 digits.
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); // 0.1057007, so that result is expected
Read more: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/float
I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off from what I'd expect, I struggle to believe it's just a rounding issue.
However, double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38), so I don't see why it isn't parsed as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact value to 650512.56 (for example) is 650512.5625... which is then being shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
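For example, with 650512.56 from above (a sketch; assumes a culture that uses '.' as the decimal separator):
Console.WriteLine((double)float.Parse("650512.56")); // 650512.5625 - the nearest float
Console.WriteLine(decimal.Parse("650512.56")); // 650512.56 - exact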
The mantissa of a float in C# has 23 bits (24 counting the implicit leading bit), which means it can hold 6-7 significant decimal digits. Your example 650512.59 has 8, and it is exactly that last digit which is 'wrong'. A double has a mantissa of 52 bits (15-16 digits), so of course it shows all 8 or 9 of your significant digits correctly.
See here for more: Type float in C#
In C# 4.0, the following cast behaves very unexpectedly:
(decimal)1056964.63f
1056965
Casting to double works just fine:
(double)1056964.63f
1056964.625
(decimal)(double)1056964.63f
1056964.625
Is this by design?
The problem is with your initial value - float is only accurate to 7 significant decimal digits anyway:
float f = 1056964.63f;
Console.WriteLine(f); // Prints 1056965
So really the second example is the unexpected one in some ways.
Now the exact value in f is 1056964.625, but that's the value used for all decimal values from about 1056964.563 to 1056964.687 - so even the ".6" part isn't guaranteed to be correct. That's why the docs for System.Single state:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
The extra information is still preserved when you convert to double, because that conversion can preserve it without "interpreting" it at all - whereas converting it to a decimal form (either to print, or for the decimal type) goes through code which knows it can't "trust" those last two digits.
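A quick check of that range claim (sketch):
Console.WriteLine(1056964.563f == 1056964.63f); // True - both literals convert to 1056964.625
Console.WriteLine(1056964.687f == 1056964.63f); // True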
It is by design. Float can hold your number quite accurately, but for conversion purposes it rounds to the nearest integer, because there are only a few representable float values between your number and that integer (1056964.75 and 1056964.875). See COMNumber::FormatSingle and COMDecimal::InitSingle from SSCLI.