Casting float to decimal loses precision in C#

In C# 4.0, the following cast behaves very unexpectedly:
(decimal)1056964.63f
1056965
Casting to double works just fine:
(double)1056964.63f
1056964.625
(decimal)(double)1056964.63f
1056964.625
Is this by design?

The problem is with your initial value - float is only accurate to 7 significant decimal digits anyway:
float f = 1056964.63f;
Console.WriteLine(f); // Prints 1056965
So really the second example is the unexpected one in some ways.
Now the exact value in f is 1056964.625, but that's the float you get for every value from about 1056964.563 to 1056964.687 - so even the ".6" part isn't always correct. That's why the docs for System.Single state:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
The extra information is still preserved when you convert to double, because double can preserve it without "interpreting" it at all - whereas converting it to a decimal form (either to print, or for the decimal type) goes through code which knows it can't "trust" those last two digits.
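Putting those observations together in one sketch (the printed values are the ones shown above; note that Console.WriteLine(f) on newer runtimes prints a shortest round-trip form instead of 7 digits):
float f = 1056964.63f;

Console.WriteLine(f);                  // 1056965 - default float formatting trusts only 7 digits
Console.WriteLine((double)f);          // 1056964.625 - the exact value stored in the float
Console.WriteLine((decimal)(double)f); // 1056964.625 - going via double preserves it
Console.WriteLine((decimal)f);         // 1056965 - the direct cast rounds to 7 significant digits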

It is by design. Float can hold your number quite accurately, but the conversion to decimal rounds it to seven significant digits - here, the nearest integer - because there are only a few representable float values between your number and that integer (1056964.75 and 1056964.875). See COMNumber::FormatSingle and COMDecimal::InitSingle from SSCLI.
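To see how sparse the representable floats are around this value, you can step through the raw bit patterns (a sketch; BitConverter.SingleToInt32Bits/Int32BitsToSingle are available on .NET Core 2.0 and later, and incrementing the bits of a positive float yields the next float up):
float f = 1056964.63f; // actually stored as 1056964.625

int bits = BitConverter.SingleToInt32Bits(f);
Console.WriteLine((double)BitConverter.Int32BitsToSingle(bits));     // 1056964.625
Console.WriteLine((double)BitConverter.Int32BitsToSingle(bits + 1)); // 1056964.75
Console.WriteLine((double)BitConverter.Int32BitsToSingle(bits + 2)); // 1056964.875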

Related

Understanding C# floating point numbers precision. How does C# store float numbers? [duplicate]

This question already has answers here:
How to calculate float type precision and does it make sense?
(4 answers)
Closed 1 year ago.
I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak, and for the long explanation.
I know a float (e.g. 10.01F) has a precision of 6 to 9 significant digits. So, let's say we have the next code:
float myFloat = 1.000001F;
Console.WriteLine(myFloat);
I get the exact number in the console. Now, let's use the next code:
myFloat = 1.00000006F;
Console.WriteLine(myFloat);
A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.
This is my first doubt. Does precision depend on the number itself or on the computer's architecture?
Furthermore, data is stored as bits in the computer. Bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming the number back to decimal. For example:
(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
My logic after this is: maybe a float number doesn't lose information when stored, maybe such information is lost when the number is converted back to decimal to show it to the user.
E.g.
float myFloat = 999999.11F + 1.11F;
The result of the above should be: 1000000.22. However, since this number exceeds the precision of a float, I should see a different number, which indeed happens: 1000000.25
There is a 0.03 difference. In order to see if the actual result is 1000000.22, I tried the following condition:
if (myFloat == 1000000.22F) {
Console.WriteLine("Real result = 1000000.22");
}
And it actually prints it: Real result = 1000000.22.
So... the information loss occurs when converting the bits back to decimal? or it also happens in the lower levels of computing and my example was just a coincidence?
1.000001F in source code is converted to the float value 8,388,616·2^−23, which is 1.00000095367431640625.
1.00000006F in source code is converted to the float value 8,388,609·2^−23, which is 1.00000011920928955078125.
Console.WriteLine shows only part of these values; by default, it rounds its display to a limited number of digits.
999999.11F is converted to 15,999,986·2^−4, which is 999,999.125. 1.11F is converted to 9,311,355·2^−23, which is 1.11000001430511474609375. When these are added using real-number mathematics, the result is 8,388,609,971,323·2^−23. That cannot be represented in a float, because the fraction portion of a float (called the significand) can only have 24 bits, so its maximum value as an integer is 16,777,215. If we divide that significand by 2^19 to reduce it to that limit, we get approximately 8,388,609,971,323/2^19 · 2^(−23+19) = 16,000,003.76·2^−4. Rounding that significand to an integer produces 16,000,004·2^−4. So, when those two numbers are added, float arithmetic rounds the result and produces 16,000,004·2^−4, which is 1,000,000.25.
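A small sketch to confirm these values from C# (a float converts to double exactly, and these particular doubles print in full):
float a = 999999.11F;   // stored exactly as 999999.125
float b = 1.11F;        // stored exactly as 1.11000001430511474609375
float sum = a + b;      // float arithmetic rounds the exact real-number sum

Console.WriteLine((double)a);          // 999999.125
Console.WriteLine((double)sum);        // 1000000.25
Console.WriteLine(sum == 1000000.22F); // True - 1000000.22F also rounds to 1000000.25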
So... the information loss occurs when converting the bits back to decimal? or it also happens in the lower levels of computing and my example was just a coincidence?
Converting a decimal numeral to floating-point generally introduces a rounding error.
Adding floating-point numbers generally introduces a rounding error.
Converting a floating-point number to a decimal numeral with limited precision generally introduces a rounding error.
The rounding occurs both when you write 1000000.22F in your code (the compiler must find the exponent and mantissa that give a result closest to the decimal number you typed), and again when converting to decimal to display.
There isn't any decimal/binary type of rounding in the actual arithmetic operations, although arithmetic operations do have rounding error related to the limited number of mantissa bits.
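In code, the three rounding points look like this (a sketch):
float x = 1000000.22F;        // (1) parsing the literal rounds to the nearest float
float y = 999999.11F + 1.11F; // (2) the addition rounds the exact real-number sum

Console.WriteLine(x == y); // True - both roundings land on the same float, 1000000.25
Console.WriteLine(x);      // (3) formatting rounds once more for display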

Why is my precision getting lost when converting double to float in C#?

I have the double value 10293.01416015625 in C# and I'm trying to convert it to float. Since a float has only 24 bits of significand, I expected to get the result 10293.0141. But I'm getting the value 10293.0137.
double value = 10293.01416015625;
float converted = (float)value;
Expected value - 10293.0141
Value am getting - 10293.0137
Thanks in advance
From the System.Single documentation:
A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
Your result is correct up to 7 significant digits (10293.01). You shouldn't expect to be able to get more than that with float.
The exact values of the two floats closest to 10293.01416015625 are 10293.013671875 and 10293.0146484375. Both are exactly 0.00048828125 away from the value you're trying to represent, so the default round-to-nearest-even rule breaks the tie and selects 10293.013671875.
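A sketch that makes the tie visible (MathF.BitIncrement is available on .NET Core 3.0 and later; the printed values are exact):
double value = 10293.01416015625;
float converted = (float)value; // the tie is broken toward the even significand

Console.WriteLine((double)converted);                     // 10293.013671875
Console.WriteLine((double)MathF.BitIncrement(converted)); // 10293.0146484375
Console.WriteLine(value - converted);                     // 0.00048828125 - exactly half an ulp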

How to not drop precision when converting from float to decimal in a case where this is clearly possible

The value 0.105700679f should be convertible precisely to decimal. decimal clearly is able to hold this value precisely:
decimal d = 0.105700679m;
Console.WriteLine(d); //0.105700679
float also is able to hold the value precisely:
float f = 0.105700679f;
Console.WriteLine(f == 0.105700679f); //True
Console.WriteLine(f == 0.1057007f); //False
Console.WriteLine(f.ToString("R")); //Round-trip representation, 0.105700679
(Note that float.ToString() seems to drop precision. I just asked about that as well.)
The converter at https://www.h-schmidt.net/FloatConverter/IEEE754.html agrees: the value really is stored like that. I am seeing this value right now in the debugger. I received it over the network as an IEEE float. This value exists!
But when I convert from float to decimal precision is dropped:
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); //0.1057007
Console.WriteLine(d.ToString("F15")); //0.105700700000000
Console.WriteLine(((double)d).ToString("R")); //0.1057007
I do understand that floating point numbers are imprecise. But here I see no reason for a loss of information. This is on .NET 4.7.1. How can I convert from float to decimal and preserve precision in all cases where doing so is possible?
This is important to me because I am processing financial market data and joining data sources based on price. Data is given to me as a float over a 3rd party network protocol. I need to import that float to a decimal representation.
Try converting f to double and then converting that to decimal.
I suspect you are seeing shortcomings in .NET.
Let’s look at some of the code in your question line by line. In float f = 0.105700679f;, the decimal numeral “0.105700679” is converted to 32-bit binary floating-point. The result of this is the number 0.105700679123401641845703125.
Next consider Console.WriteLine(f == 0.105700679f);. This compares f to the value represented by 0.105700679f. Since the f suffix denotes a float type, 0.105700679f represents the decimal numeral “0.105700679” converted to 32-bit binary floating-point. So of course it has the same value as it did before, and the test for equality returns true. You have not tested whether f is equal to 0.105700679; you have tested whether it is equal to 0.105700679f, and it is.
Then we have decimal d = (decimal)f;. Based on the results you are seeing, it appears to me this conversion produces a number with only seven decimal digits, .1057007. I presume Microsoft has decided that, because a float is only “capable” of holding seven decimal digits, that only seven should be produced when converting to decimal. (This is both a false understanding of what the value of a binary floating-point number represents and an incorrect number. A conversion from decimal to float and back is only guaranteed to preserve six decimal digits, and a conversion from float to decimal and back requires nine decimal digits to preserve the float value. So seven is just wrong.)
If there is a solution to your problem, it is to convert f to decimal by some means other than the cast (decimal) f. I do not know C#, so I cannot say what the solution should be. I suggest trying to convert to double first and then decimal. Quite likely C# will convert float to double without changing the value, and then the conversion to decimal will produce more decimal digits. Another possibility could be converting f to a string with the number of decimal digits you desire and then converting the string to a decimal number.
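For what it is worth, both suggestions can be sketched in C# (assuming, per the .NET documentation, that the float-to-double conversion is exact and that the double-to-decimal conversion keeps 15 significant digits rather than 7):
float f = 0.105700679f;

// Route 1: widen to double first, then convert to decimal.
decimal viaDouble = (decimal)(double)f;
Console.WriteLine(viaDouble); // 0.105700679123402 - 15 digits of the exact value

// Route 2: round-trip through the "R" string form (use an invariant culture in production code).
decimal viaString = decimal.Parse(f.ToString("R"));
Console.WriteLine(viaString); // 0.105700679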
Also, you say the data is coming via a third-party network protocol. It appears the protocol is incapable of representing the actual values it is supposed to be communicating. That is a serious defect that you should complain to the third party about. I know that may seem futile, but it should be done. Also, it casts doubt on your need to convert the float value to 0.105700679. Most eight-digit decimal numbers in the origin of this data could not survive the conversion to float and back to decimal without change. So, even if you are able to convert float to eight decimal digits, most of the results will differ from the original pre-transport values. E.g., if the original number were 0.105700680, it is changed to 0.105700679123401641845703125 when converted to a float to be sent over the network. So the receiver receives 0.105700679123401641845703125, and it is impossible for them to know the original number was 0.105700680 rather than 0.105700679.
The C# float data type has a precision of 7 digits:
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); // 0.1057007 - so that result is expected!
Read more: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/float

Rounding the SIGNIFICANT digits in a double, not to decimal places [duplicate]

This question already has answers here:
Round a double to x significant figures
(17 answers)
Closed 7 years ago.
I need to round significant digits of doubles. Example
Round(1.2E-20, 0) should become 1.0E-20
I cannot use Math.Round(1.2E-20, 0), which returns 0, because Math.Round() doesn't round the significant digits of a double, but its decimal places, i.e. it treats the number as if E were 0.
Of course, I could do something like this:
double d = 1.29E-20;
d *= 1E+20;
d = Math.Round(d, 1);
d /= 1E+20;
Which actually works. But this doesn't:
d = 1.29E-10;
d *= 1E+10;
d = Math.Round(d, 1);
d /= 1E+10;
In this case, d is 0.00000000013000000000000002. The problem is that double internally stores binary fractions, which cannot exactly match decimal fractions. In the first case, it seems C# is dealing just with the exponent for the * and /, but in the second case it performs an actual * or / operation, which then leads to problems.
Of course I need a formula which always gives the proper result, not only sometimes.
Meaning I should not use any double operation after the rounding, because double arithmetic cannot deal exactly with decimal fractions.
Another problem with the calculation above is that there is no double function returning the exponent of a double. Of course one could use the Math library to calculate it, but it might be difficult to guarantee that this has always precisely the same result as the double internal code.
In my desperation, I considered converting the double to a string, finding the significant digits, doing the rounding, converting the rounded number back into a string, and finally converting that one to a double. Ugly, right? It might also not work properly in all cases :-(
Is there any library or any suggestion on how to round the significant digits of a double properly?
PS: Before declaring that this is a duplicate question, please make sure that you understand the difference between SIGNIFICANT digits and decimal places
The problem is that double stores internally fractions of 2, which cannot match exactly fractions of 10
That is a problem, yes. If it matters in your scenario, you need to use a numeric type that stores numbers as decimal, not binary. In .NET, that numeric type is decimal.
Note that for many computational tasks (but not currency, for example), the double type is fine. The fact that you don't get exactly the value you are looking for is no more of a problem than any of the other rounding error that exists when using double.
Note also that if the only purpose is for displaying the number, you don't even need to do the rounding yourself. You can use a custom numeric format to accomplish the same. For example:
double value = 1.29e-10d;
Console.WriteLine(value.ToString("0.0E+0"));
That will display the string 1.3E-10.
Another problem with the calculation above is that there is no double function returning the exponent of a double
I'm not sure what you mean here. The Math.Log10() method does exactly that. Of course, it returns the exact exponent of a given number, base 10. For your needs, you'd actually prefer Math.Floor(Math.Log10(value)), which gives you the exponent value that would be displayed in scientific notation.
it might be difficult to guarantee that this has always precisely the same result as the double internal code
Since the internal storage of a double uses an IEEE binary format, where the exponent and mantissa are both stored as binary numbers, the displayed exponent base 10 is never "precisely the same as the double internal code" anyway. Granted, the exponent, being an integer, can be expressed exactly. But it's not like a decimal value is being stored in the first place.
In any case, Math.Log10() will always return a useful value.
Is there any library or any suggestion on how to round the significant digits of a double properly?
If you only need to round for the purpose of display, don't do any math at all. Just use a custom numeric format string (as I described above) to format the value the way you want.
If you actually need to do the rounding yourself, then I think the following method should work given your description:
static double RoundSignificant(double value, int digits)
{
    // Math.Log10 is undefined for zero and negative inputs, so guard those first.
    if (value == 0)
        return 0;
    double sign = Math.Sign(value);
    value = Math.Abs(value);

    // Scale into [1, 10), round to the requested digits, then scale back.
    int log10 = (int)Math.Floor(Math.Log10(value));
    double exp = Math.Pow(10, log10);
    value /= exp;
    value = Math.Round(value, digits);
    value *= exp;
    return sign * value;
}
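For example (a usage sketch; as the question notes, the final multiplication can reintroduce a little binary representation error):
Console.WriteLine(RoundSignificant(1.2E-20, 0));  // 1E-20
Console.WriteLine(RoundSignificant(1.29E-10, 1)); // 1.3E-10 (may display as 1.3000000000000002E-10)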

C# Convert.ToDouble() loses decimal points when converting string to double

Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
after that conversion the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
-------------------small update---------------------
putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 digits, while you have 17 digits. If you need that level of precision use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
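If keeping the digits is what matters, the decimal route the question already mentions does preserve them all (a sketch):
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
Console.WriteLine(numberAsDecimal); // 93389.429999999993 - all 17 digits preserved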
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
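You can separate the two conversions explicitly (a sketch; the "G17" format always produces enough digits to round-trip a double):
double d = Convert.ToDouble("93389.429999999993");

Console.WriteLine(d.ToString("G17")); // the 17-digit form of what is actually stored
Console.WriteLine(d == 93389.43);     // True - both decimal strings map to the same double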
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep the precision, depending on the number you are trying to convert.
If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type has a finite precision (a 53-bit significand in a 64-bit format), so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which offers 128 bits of precision. However, the same problem arises again for numbers with even more digits, just with a smaller difference.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a 3rd-party library to work with arbitrarily large real numbers.
You could truncate the decimal places to the amount of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the needed value:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
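A variant of the same idea that stays in decimal until the final cast; Math.Truncate(decimal) avoids the detour through long, which would overflow for very large values:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
decimal truncated = Math.Truncate(numberAsDecimal * 100000m) / 100000m;
var numberAsDouble = (double)truncated; // 93389.42999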
