Is there a way to format a C# double exactly? [duplicate] - c#

This question already has answers here:
Formatting doubles for output in C#
(10 answers)
Closed 8 years ago.
Is there a way to get a string showing the exact value of a double, with all the decimal places needed to represent its precise value in base 10?
For example (via Jon Skeet and Tony the Pony), when you type
double d = 0.3;
the actual value of d is exactly
0.299999999999999988897769753748434595763683319091796875
Every binary floating-point value (ignoring things like infinity and NaN) will resolve to a terminating decimal value. So with enough digits of precision in the output (54 in this case), you can always take whatever is in a double and show its exact value in decimal. And being able to show people the exact value would be really useful when there's a need to explain the oddities of floating-point arithmetic. But is there a way to do this in C#?
I've tried all of the standard numeric format strings, both with and without precision specified, and nothing gives the exact value. A few highlights:
d.ToString("n55") outputs 0.3 followed by 54 zeroes -- it does its usual "round to what you probably want to see" and then tacks more zeroes on the end. Same thing if I use a custom format string of 0.00000...
d.ToString("r") gives you a value with enough precision that if you parse it you'll get the same bit pattern you started with -- but in this case it just outputs 0.3. (This would be more useful if I was dealing with the result of a calculation, rather than a constant.)
d.ToString("e55") outputs 2.9999999999999999000000000000000000000000000000000000000e-001 -- some of the precision, but not all of it like I'm looking for.
Is there some format string I missed, or some other library function, or NuGet package or other library, that is able to convert a double to a string with full precision?
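(For illustration, here is a rough sketch of what I mean by "exact value": dig the bits out with BitConverter and scale them with BigInteger. The helper name is made up and this isn't meant as a polished solution; it's the same idea as Jon Skeet's DoubleConverter.ToExactString that comes up further down this page.)
using System;
using System.Numerics;

static string ToExactDecimalString(double d)   // illustrative helper name, not a framework method
{
    if (double.IsNaN(d) || double.IsInfinity(d)) return d.ToString();

    long bits = BitConverter.DoubleToInt64Bits(d);
    bool negative = bits < 0;
    int exponent = (int)((bits >> 52) & 0x7FF);
    long mantissa = bits & 0xF_FFFF_FFFF_FFFFL;

    if (exponent == 0) exponent = 1;        // subnormal: no implicit leading 1
    else mantissa |= 1L << 52;              // normal: restore the implicit leading 1
    exponent -= 1075;                       // the value is now mantissa * 2^exponent

    if (mantissa == 0) return "0";
    if (exponent >= 0) return (negative ? "-" : "") + (new BigInteger(mantissa) << exponent);

    // mantissa / 2^n == (mantissa * 5^n) / 10^n, so the decimal expansion always terminates
    BigInteger scaled = new BigInteger(mantissa) * BigInteger.Pow(5, -exponent);
    string digits = scaled.ToString().PadLeft(1 - exponent, '0');
    string result = digits.Insert(digits.Length + exponent, ".").TrimEnd('0').TrimEnd('.');
    return negative ? "-" + result : result;
}

// ToExactDecimalString(0.3) returns 0.299999999999999988897769753748434595763683319091796875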

You could try using # placeholders if you want to suppress trailing zeroes, and avoid scientific notation. Though you'll need a lot of them for very small numbers, e.g.:
Console.WriteLine(double.Epsilon.ToString("0.########....###"));

I believe you can do this, based on what you want to accomplish with the display:
Consider this:
Double myDouble = 10 / 3.0;
myDouble.ToString("G17");
Your output will be:
3.3333333333333335
See this link for why: http://msdn.microsoft.com/en-us/library/kfsatb94(v=vs.110).aspx
By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision.
You can also do:
myDouble.ToString("n16");
That will discard the 16th and 17th noise digits, and return the following:
3.3333333333333300
If you're looking to display the actual variable value as a number, you'll likely want to use "G17". If you're trying to display a numerical value being used in a calculation with high precision, you'll want to use "n16".
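A quick side-by-side of the two specifiers on the same value (the comments show what I'd expect on .NET Framework; newer runtimes may format the default case differently):
double myDouble = 10 / 3.0;
Console.WriteLine(myDouble.ToString());      // 3.33333333333333  -- default 15 significant digits
Console.WriteLine(myDouble.ToString("G17")); // 3.3333333333333335
Console.WriteLine(myDouble.ToString("n16")); // 3.3333333333333300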

Related

C# Convert.ToDouble() loses decimal points when converting string to double

Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
after that conversion the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
-------------------small update---------------------
Putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable already has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 digits, while you have 17 digits. If you need that level of precision use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
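For example, something along these lines makes both effects visible (the comments show what I get on .NET Framework; the G17 output is simply whatever digits the double actually stores):
string number = "93389.429999999993";
double asDouble = double.Parse(number, CultureInfo.InvariantCulture);   // CultureInfo is in System.Globalization
decimal asDecimal = decimal.Parse(number, CultureInfo.InvariantCulture);

Console.WriteLine(asDouble);                 // 93389.43 -- the default short output
Console.WriteLine(asDouble.ToString("G17")); // the extra digits the default output hides
Console.WriteLine(asDecimal);                // 93389.429999999993 -- decimal keeps the digits as typed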
A double will not always keep the precision, depending on the number you are trying to convert. If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type has a finite precision of 64 bits, so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which has 128-bit precision. However, the same problem can still arise, just with a smaller error.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a third-party library to work with arbitrarily large real numbers.
You could truncate the decimal places to the amount of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the scale you need:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
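The same idea reads a bit more clearly with Math.Truncate (just a stylistic variant of the cast above, using the same scale factor; it also avoids the long cast overflowing for very large inputs):
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
double numberAsDouble = (double)(Math.Truncate(numberAsDecimal * 100000m) / 100000m); // 93389.42999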

Round-twice error in .NET's Double.ToString method

Mathematically, consider for this question the rational number
8725724278030350 / 2**48
where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms, reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is
31.0000000000000'49'73799150320701301097869873046875 (exact)
where the apostrophes do not represent missing digits but merely mark the boundaries where rounding to 15 and 17 digits, respectively, is to be performed.
Note the following: If this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s) because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up, increasing the 49... digits to 50 and terminating there (the next digits were 73...), and the second rounding might then round up again (when the midpoint-rounding rule says "round away from zero").
(There are many more numbers with the above characteristics, of course.)
Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: Isn't this a bug? By standard string representation we mean the String produced by the parameterless Double.ToString() instance method, which is of course identical to what is produced by ToString("G").
An interesting thing to note is that if you cast the above number to System.Decimal then you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToString() makes an incorrect one.
To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate.
Here's some C# code for you to verify the problem:
using System;
using System.Globalization;
using System.Linq;

static void Main()
{
const double evil = 31.0000000000000497;
string exactString = DoubleConverter.ToExactString(evil); // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx
Console.WriteLine("Exact value (Jon Skeet): {0}", exactString); // writes 31.00000000000004973799150320701301097869873046875
Console.WriteLine("General format (G): {0}", evil); // writes 31.0000000000001
Console.WriteLine("Round-trip format (R): {0:R}", evil); // writes 31.00000000000005
Console.WriteLine();
Console.WriteLine("Binary repr.: {0}", String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2"))));
Console.WriteLine();
decimal converted = (decimal)evil;
Console.WriteLine("Decimal version: {0}", converted); // writes 31
decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture);
Console.WriteLine("Better decimal: {0}", preciseDecimal); // writes 31.000000000000049737991503207
}
The above code uses Skeet's ToExactString method. If you don't want to use his stuff (can be found through the URL), just delete the code lines above dependent on exactString. You can still see how the Double in question (evil) is rounded and cast.
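As a quick sanity check, independent of Skeet's helper, you can also verify that the literal above really is the fraction 8725724278030350 / 2**48 from the top of this question:
const double evil = 31.0000000000000497;
Console.WriteLine(BitConverter.DoubleToInt64Bits(evil).ToString("x16")); // 403f00000000000e
Console.WriteLine(evil == 8725724278030350.0 / (1L << 48));              // True -- numerator, denominator and quotient are all exactly representable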
ADDITION:
OK, so I tested some more numbers, and here's a table:
exact value (truncated) "R" format "G" format decimal cast
------------------------- ------------------ ---------------- ------------
6.00000000000000'53'29... 6.0000000000000053 6.00000000000001 6
9.00000000000000'53'29... 9.0000000000000053 9.00000000000001 9
30.0000000000000'49'73... 30.00000000000005 30.0000000000001 30
50.0000000000000'49'73... 50.00000000000005 50.0000000000001 50
200.000000000000'51'15... 200.00000000000051 200.000000000001 200
500.000000000000'51'15... 500.00000000000051 500.000000000001 500
1020.00000000000'50'02... 1020.000000000005 1020.00000000001 1020
2000.00000000000'50'02... 2000.000000000005 2000.00000000001 2000
3000.00000000000'50'02... 3000.000000000005 3000.00000000001 3000
9000.00000000000'54'56... 9000.0000000000055 9000.00000000001 9000
20000.0000000000'50'93... 20000.000000000051 20000.0000000001 20000
50000.0000000000'50'93... 50000.000000000051 50000.0000000001 50000
500000.000000000'52'38... 500000.00000000052 500000.000000001 500000
1020000.00000000'50'05... 1020000.000000005 1020000.00000001 1020000
The first column gives the exact (though truncated) value that the Double represent. The second column gives the string representation from the "R" format string. The third column gives the usual string representation. And finally the fourth column gives the System.Decimal that results from converting this Double.
We conclude the following:
Round to 15 digits by ToString() and round to 15 digits by conversion to Decimal disagree in very many cases
Conversion to Decimal also rounds incorrectly in many cases, and the errors in these cases cannot be described as "round-twice" errors
In my cases, ToString() seems to yield a bigger number than Decimal conversion when they disagree (no matter which of the two rounds correctly)
I only experimented with cases like the above. I haven't checked if there are rounding errors with numbers of other "forms".
So from your experiments, it appears that Double.ToString doesn't do correct rounding.
That's rather unfortunate, but not particularly surprising: doing correct rounding for binary to decimal conversions is nontrivial, and also potentially quite slow, requiring multiprecision arithmetic in corner cases. See David Gay's dtoa.c code here for one example of what's involved in correctly-rounded double-to-string and string-to-double conversion. (Python currently uses a variant of this code for its float-to-string and string-to-float conversions.)
Even the current IEEE 754 standard for floating-point arithmetic recommends, but doesn't require, that conversions from binary floating-point types to decimal strings always be correctly rounded. Here's a snippet from section 5.12.2, "External decimal character sequences representing finite numbers":
There might be an implementation-defined limit on the number of significant digits that can be converted with correct rounding to and from supported binary formats. That limit, H, shall be such that H ≥ M+3 and it should be that H is unbounded.
Here M is defined as the maximum of Pmin(bf) over all supported binary formats bf, and since Pmin(float64) is defined as 17 and .NET supports the float64 format via the Double type, M should be at least 17 on .NET. In short, this means that if .NET were to follow the standard, it would be providing correctly rounded string conversions up to at least 20 significant digits. So it looks as though the .NET Double doesn't meet this standard.
In answer to the 'Is this a bug' question, much as I'd like it to be a bug, there really doesn't seem to be any claim of accuracy or IEEE 754 conformance anywhere that I can find in the number formatting documentation for .NET. So it might be considered undesirable, but I'd have a hard time calling it an actual bug.
EDIT: Jeppe Stig Nielsen points out that the System.Double page on MSDN states that
Double complies with the IEC 60559:1989 (IEEE 754) standard for binary floating-point arithmetic.
It's not clear to me exactly what this statement of compliance is supposed to cover, but even for the older 1985 version of IEEE 754, the string conversion described seems to violate the binary-to-decimal requirements of that standard.
Given that, I'll happily upgrade my assessment to 'possible bug'.
First take a look at the bottom of this page which shows a very similar 'double rounding' problem.
Checking the binary/hex representation of the following floating-point numbers shows that the given range is stored as the same number in double format:
31.0000000000000480 = 0x403f00000000000e
31.0000000000000497 = 0x403f00000000000e
31.0000000000000515 = 0x403f00000000000e
As noted by several others, that is because the closest representable double has an exact value of 31.00000000000004973799150320701301097869873046875.
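If you want to check the bit patterns yourself, a few lines like these should do (the literals are the ones listed above; parsing assumes the invariant culture, and CultureInfo lives in System.Globalization):
foreach (var s in new[] { "31.0000000000000480", "31.0000000000000497", "31.0000000000000515" })
{
    double d = double.Parse(s, CultureInfo.InvariantCulture);
    Console.WriteLine("{0} -> 0x{1:x16}", s, BitConverter.DoubleToInt64Bits(d)); // all three: 0x403f00000000000e
}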
There are two additional aspects to consider in the forward and reverse conversion between IEEE 754 doubles and strings, especially in the .NET environment.
First (I cannot find a primary source), from Wikipedia we have:
If a decimal string with at most 15 significant decimal digits is converted to IEEE 754 double precision and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 double precision number is converted to a decimal string with at least 17 significant decimal digits and then converted back to double, then the final number must match the original.
Therefore, as far as compliance with the standard is concerned, the string 31.0000000000000497 will not necessarily be the same when converted to double and back to string (it has too many significant digits).
The second consideration is that unless the double-to-string conversion produces 17 significant digits, its rounding behavior is not explicitly defined in the standard either.
Furthermore, the documentation on Double.ToString() shows that it is governed by the numeric format specifier of the current culture settings.
Possible Complete Explanation:
I suspect the twice-rounding is occurring something like this: the initial decimal string is created to 16 or 17 significant digits because that is the precision required for "round trip" conversion, giving an intermediate result of 31.00000000000005 or 31.000000000000050. Then, due to the default culture settings, the result is rounded to 15 significant digits, 31.00000000000001, because 15 significant decimal digits is the precision guaranteed for all doubles.
Doing an intermediate conversion to Decimal, on the other hand, avoids this problem in a different way: it truncates to 15 significant digits directly.
The question: Isn't this a bug?
Yes. See this PR on GitHub. The reason for rounding twice, AFAIK, is to produce "pretty" formatted output, but it introduces a bug, as you have already discovered here. We tried to fix it by removing the 15-digit precision conversion and going directly to 17-digit precision conversion. The bad news is that it's a breaking change and will break quite a lot. For example, one of the test cases breaks:
10:12:26 Assert.Equal() Failure
10:12:26 Expected: 1.1
10:12:26 Actual: 1.1000000000000001
The fix would impact a large set of existing libraries, so this PR has been closed for now. However, the .NET Core team is still looking for a chance to fix this bug. You're welcome to join the discussion.
Truncation is the correct way to limit the precision of a number that will later be rounded, precisely to avoid the double rounding issue.
I have a simpler suspicion: the culprit is likely the power operator **. While your number is exactly representable as a double, for convenience reasons (a correct power operator takes a lot of work) the power is calculated via the exponential function. This is one reason you can optimize performance by multiplying a number repeatedly instead of using pow(), because pow() is very expensive.
So it does not give you the correct 2^48, but something slightly incorrect, and that is where your rounding problems come from.
Please check what 2^48 exactly returns.
EDIT: Sorry, I only skimmed the problem and gave a wrong suspicion. There is a known issue with double rounding on Intel processors: older code uses the internal 80-bit format of the FPU instead of the SSE instructions, which is likely to cause the error. The value is written exactly to the 80-bit register and then rounded twice, so Jeppe has already found and neatly explained the problem.
Is it a bug? Well, the processor is doing everything right; it is simply that the Intel FPU internally has more precision for floating-point operations.
FURTHER EDIT AND INFORMATION:
The "double rounding" is a known issue and explicitly mentioned in "Handbook of Floating-Point Arithmetic" by Jean-Michel Muller et. al. in the chapter "The Need
for a Revision" under "3.3.1 A typical problem : 'double rounding'" at page 75:
The processor being used may offer an internal precision that is wider
than the precision of the variables of the program (a typical example
is the double-extended format available on Intel Platforms, when the
variables of the program are single- precision or double-precision
floating-point numbers). This may sometimes have strange side effects , as
we will see in this section. Consider the C program [...]
#include <stdio.h>
int main(void)
{
double a = 1848874847.0;
double b = 19954562207.0;
double c;
c = a * b;
printf("c = %20.19e\n", c);
return 0;
}
32-bit, GCC 4.1.2 20061115 on Linux/Debian:
with the default compiler switches or with -mfpmath=387 (80-bit FPU): 3.6893488147419103232e+19
with -march=pentium4 -mfpmath=sse (SSE), or on 64-bit: 3.6893488147419111424e+19
As explained in the book, the discrepancy comes from double rounding: the product is rounded first to 80 bits and then to 53 bits.

System.Single accuracy with C#

In the example below, the number 12345678.9 loses accuracy when it's converted to a string: it becomes 1.234568E+07. I just need a way to preserve the accuracy of large floating-point numbers. Thanks.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString();
Console.WriteLine(str1); // displays 1.234568E+07
If you want to preserve decimal numbers, you should use System.Decimal. It's as simple as that. System.Single is worse than System.Double in this respect; as per the documentation:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
You haven't just lost information when you've converted it to a string - you've lost information in the very first line. That's not just because you're using float instead of double - it's because you're using a floating binary point number.
The decimal number 0.1 can't be represented accurately in a binary floating point system no matter how big you make the type...
See my articles on floating binary point and floating decimal point for more information. Of course, it's possible that you should be using double or even float and just not caring about the loss of precision - it depends on what you're trying to represent. But if you really do care about preserving decimal digits, then use a decimal-based type.
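For example, a decimal literal holds the value as written, while the float has already rounded it away (a minimal comparison, using only the values from the question):
Single sin1 = 12345678.9F;
Decimal dec1 = 12345678.9M;
Console.WriteLine(sin1.ToString("r")); // 12345679 -- the .9 is already gone
Console.WriteLine(dec1);               // 12345678.9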
You can't. Simple as that. In memory your number is 12345679. Try the code below.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString("r"); // Shows "all" the number
Console.WriteLine(sin1 == 12345679); // true
Console.WriteLine(str1); // displays 12345679
Technically "r" means round-trip; quoting from MSDN: "Result: A string that can round-trip to an identical number." So in reality it isn't showing all the decimals; it's only showing all the decimals needed to distinguish it from other possible values of Single. If you want to show all the decimals, use F20.
If you want more precision use double or better use decimal. float has the precision that it has. As we say in Italy "Non puoi spremere sangue da una rapa" (You can't squeeze blood from a turnip)
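To make the F20 point concrete: it pads with zeros rather than recovering anything, because the value stored in the float is exactly 12345679 (output as I'd expect; the formatter, not the float, decides how many digits you see):
Single sin1 = 12345678.9F;
Console.WriteLine(sin1.ToString("r"));   // 12345679 -- just enough digits to round-trip
Console.WriteLine(sin1.ToString("F20")); // 12345679.00000000000000000000 -- 20 fixed decimals, all zero here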
You could also write an IFormatProvider for your purpose - but the precision doesn't get any better unless you use a different type.
This article may help: http://www.csharp-examples.net/string-format-double/

float.Parse returns incorrect value [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
I have a string X with value 2.26
When I parse it using float.Parse(X), it returns 2.2599999904632568. Why is that, and how can I overcome it?
But if instead I use double.Parse(X) it returns the exact value, i.e. 2.26.
EDIT: Code
float.Parse(dgvItemSelection[Quantity.Index, e.RowIndex].Value.ToString());
Thanks for help
This is due to limitations in the precision of floating point numbers. They can't represent infinitely precise values and often resort to approximate values. If you need highly precise numbers you should be using Decimal instead.
There is a considerable amount of literature on this subject that you should take a look at. My favorite resource is the following
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Because floats don't properly represent decimal values in base 10.
Use a Decimal instead if you want an exact representation.
Jon Skeet on this topic
Not all numbers can be represented exactly in floating point. Approximations are made, and when you perform operation after operation on an inexact number, the situation gets worse.
See this Wikipedia entry for an example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
If you changed your inputs to something that can be represented exactly in floating point (like 1/8), it would work. Try the number 2.25 and it will work as expected.
The only fractional values that will work exactly are sums of 1/2, 1/4, 1/8, 1/16, etc., since those are the values represented by the binary digits 0.1, 0.01, 0.001, 0.0001, etc.
This situation happens with all binary floating-point systems by nature: .NET, JavaScript, etc.
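To see that concretely, 2.25 (which is 2 + 1/4) survives the round trip while 2.26 does not (casting to double just makes the float's real value easier to print; CultureInfo is in System.Globalization):
float exact = float.Parse("2.25", CultureInfo.InvariantCulture);   // 2 + 1/4 -- exactly representable
float rounded = float.Parse("2.26", CultureInfo.InvariantCulture); // not a finite sum of powers of two
Console.WriteLine(((double)exact).ToString("r"));   // 2.25
Console.WriteLine(((double)rounded).ToString("r")); // 2.2599999904632568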
It is returning the best approximation to 2.26 that is possible in a float. You're probably getting more significant digits than that because your float is being printed as a double instead of a float.
When I test
var str = "2.26";
var flt = float.Parse(str);
flt is exactly 2.26 in the VS debugger.
How do you see what it returns?
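It matters, because the same float can look different depending on which type ends up being formatted; a small sketch of what I mean:
var flt = float.Parse("2.26");
Console.WriteLine(flt);          // 2.26 -- formatted as a float (roughly 7 significant digits)
Console.WriteLine((double)flt);  // the extra digits show up once the value is widened to double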

Cutting down output after bytes to gigabytes conversion

I'm converting bytes into gigabytes and when I do, the output is something like:
57.686961286
Is there any way I can cut down the total number displayed to something like:
57.6?
Yes, you could use the Math.Round method, which will round a value to the nearest integer or the specified number of decimal places.
The appropriate overload will be chosen depending on the type of the value that you pass in (either a Double or a Decimal). Specifying an Integer value for the second parameter allows you to indicate the number of fractional digits the result should contain. In this case, you would specify "1".
Of course, the result would not be 57.6. When the value 57.686… is rounded, the 8 in the hundredths place causes the 6 in the tenths place to round up to 7, rather than down to 6. The correct result is 57.7.
Certain overloads of the above method also allow you to specify a rounding style to apply when the number in question is halfway between two others, either IEEE Standard 754 section 4 rounding (also called rounding-to-nearest, or "banker's" rounding), or the "away from zero" style that you probably learned in school.
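Putting that together (Math.Round with one fractional digit; the midpoint style only matters when the value sits exactly halfway between two candidates):
double gigabytes = 57.686961286;
Console.WriteLine(Math.Round(gigabytes, 1));                                // 57.7
Console.WriteLine(Math.Round(gigabytes, 1, MidpointRounding.AwayFromZero)); // 57.7 -- no tie here, so same result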
You can format the value for display using the ToString() method, which takes a format parameter.
double myValue = 57.686961286;
string outputValue = myValue.ToString("0.0"); //output: 57.7, rounded
