I'm trying to convert C# double values to strings in exponential notation. Consider this C# code:
double d1 = 0.12345678901200021;
Console.WriteLine(d1.ToString("0.0####################E0"));
//outputs: 1.23456789012E-1 (expected: 1.2345678901200021E-1)
Can anyone tell me the format string to output "1.2345678901200021E-1" from double d1, if it's possible?
Double values only hold 15 to 16 significant digits, and yours has 17 (if I counted right). Because a 64-bit double cannot store that many digits, your last digit is lost, and therefore when you convert the number to scientific notation the last digit appears to have been truncated.
You should use Decimal instead. The decimal type is 128 bits wide, while double is only 64 bits.
According to the documentation for double.ToString(), double doesn't have the precision:
By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision.
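As a sketch of the difference between the default and round-trip specifiers (the exact default output depends on the .NET version; .NET Core 3.0 switched the default to the shortest round-trippable string):

```csharp
using System;
using System.Globalization;

double d1 = 0.12345678901200021;

// Default ToString(): at most 15 significant digits on .NET Framework,
// the shortest round-trippable string on .NET Core 3.0 and later.
Console.WriteLine(d1.ToString(CultureInfo.InvariantCulture));

// "G17" always emits enough digits to round-trip the exact double value.
string g17 = d1.ToString("G17", CultureInfo.InvariantCulture);
Console.WriteLine(g17);

// Parsing the "G17" string recovers the identical double.
Console.WriteLine(double.Parse(g17, CultureInfo.InvariantCulture) == d1); // True
```

The round-trip guarantee is the point of "G17": whatever digits it emits, parsing them back gives bit-for-bit the same double.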
Console.WriteLine(d1) should already show you that double doesn't support the precision you want. Use decimal instead (64-bit vs. 128-bit).
My immediate window is saying that the maximum resolution you can expect from that double number is about 15 digits.
My VS2012 immediate window is saying that the resolution of 0.12345678901200021 is actually 16 significant digits:
0.1234567890120002
Therefore we expect that at least the last "2" digit should be reported in the string.
However if you use the "G17" format string:
0.12345678901200021D.ToString("G17");
you will get a string with 16-digit precision.
See this answer.
Related
I have a double value = 1427799000;
I would like to convert it to scientific notation where the exponent must always be 10^11 (E+11).
I have tried the following, but it is not working.
Console.WriteLine(value.ToString("00.##E+11", CultureInfo.InvariantCulture));
Output should be: 0.14 x 10^11 or 0.14E+11
How can I convert any double value to scientific notation with a fixed exponent? Here the fixed exponent is 11.
double value = 1427799000;
Console.WriteLine(value.ToString("G2", CultureInfo.InvariantCulture));
//output: 1.4E+09
The General ("G") Format Specifier
The general ("G") format specifier converts a number to the most
compact of either fixed-point or scientific notation, depending on the
type of the number and whether a precision specifier is present.
EDIT: Regarding your comment: you can't display the scientific notation the way you want; it is not defined that way! The coefficient must be greater than or equal to 1 and less than 10.
For the number 1.23*10^11 (article source):
The first number 1.23 is called the coefficient. It must be greater than or equal to 1 and less than 10.
The second number is called the base. It must always be 10 in scientific notation. The base number 10 is always written in exponent form. In the number 1.23 x 10^11 the number 11 is referred to as the exponent, or power of ten.
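There is no built-in format specifier for a fixed exponent, but one workaround is to scale the value yourself and append the exponent text by hand. A minimal sketch (the "0.####" pattern and the "E+" suffix are just illustrative choices, and the result is no longer normalized scientific notation):

```csharp
using System;
using System.Globalization;

double value = 1427799000;
int fixedExponent = 11;

// Scale the value so that (coefficient * 10^11) == value.
double coefficient = value / Math.Pow(10, fixedExponent);

// Format the coefficient and append the fixed exponent manually.
string s = coefficient.ToString("0.####", CultureInfo.InvariantCulture)
           + "E+" + fixedExponent;

Console.WriteLine(s); // 0.0143E+11
```

Note that the coefficient of 1427799000 at exponent 11 is 0.01427799, so the expected output 0.14E+11 from the question would actually be a different number (1.4 * 10^10).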
I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off from what I'd expect, I struggle to believe it's a rounding issue.
However, double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38), so I don't see why it isn't parsed as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact float value to 650512.56 (for example) is 650512.5625, which is then shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
The mantissa of a float in C# has 23 bits (24 counting the implicit leading bit), which means it can hold 6-7 significant decimal digits. Your example 650512.59 has 8, and it is precisely that eighth digit which comes out 'wrong'. Double has a 52-bit mantissa (15-16 digits), so of course it shows all 8 or 9 of your significant digits correctly.
See here for more: Type float in C#
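To see the precision gap directly, you can parse the same string as float, double, and decimal (a small sketch using the 650512.56 example; the exact rendering depends on the .NET version's formatting):

```csharp
using System;
using System.Globalization;

string input = "650512.56";

float f = float.Parse(input, CultureInfo.InvariantCulture);
double d = double.Parse(input, CultureInfo.InvariantCulture);
decimal m = decimal.Parse(input, CultureInfo.InvariantCulture);

// "G9" shows enough digits to expose the float's stored value,
// which is the nearest representable float, not 650512.56 itself.
Console.WriteLine(f.ToString("G9", CultureInfo.InvariantCulture));

// The double lands much closer, and the decimal stores 650512.56 exactly.
Console.WriteLine(d.ToString("G17", CultureInfo.InvariantCulture));
Console.WriteLine(m.ToString(CultureInfo.InvariantCulture));
```

This is why decimal is the natural type here: the parsed value is exactly the number written in the string.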
So, I've got this floating point number:
(float)-123456.668915
It's a number chosen at random because I'm doing some unit testing for a chunk of BCD code I'm writing. Whenever I compare the number above with a string ("-123456.668915" to be clear), I get an issue with how C# rounded the number: it rounds it to -123456.7. I've checked this in NUnit and with straight console output.
Why is it rounding like this? According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
Thanks for the help!
According to MSDN, the range of float is approximately -3.4 * 10^38 to +3.4 * 10^38 with 7 digits of precision. The above number, unless I'm completely missing something, is well within that range, and only has 6 digits after the decimal point.
"6 digits after the decimal point" isn't the same as "6 digits of precision". The number of digits of precision is the number of significant digits which can be reliably held. Your number has 12 significant digits, so it's not at all surprising that it can't be represented exactly by float.
Note that the number it's (supposedly) rounding to, -123456.7, does have 7 significant digits. In fact, that's not the value of your float either. I strongly suspect the exact value is -123456.671875, as that's the closest float to -123456.668915. However, when you convert the exact value to a string representation, the result is only 7 digits, partly because beyond that point the digits aren't really meaningful anyway.
You should probably read my article about binary floating point in .NET for more details.
The float type has a precision of 24 significant bits (except for denormals), which is equivalent to 24 log10 2 ≈ 7.225 significant decimal digits. The number -123456.668915 has 12 significant digits, so it can't be represented accurately.
The actual binary value, rounded to 24 significant bits, is -11110001001000000.1010110. This is equivalent to the fraction -7901227/64 = -123456.671875. Rounding to 7 significant digits gives the -123456.7 you see.
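One way to verify the exact stored value: widening a float to double is lossless, so printing the double shows the number the float actually holds rather than a 7-digit rendering of it (a small sketch):

```csharp
using System;
using System.Globalization;

float f = -123456.668915f; // the literal is rounded to the nearest float

// float -> double conversion is exact, so this reveals the true value.
double exact = f;
Console.WriteLine(exact.ToString(CultureInfo.InvariantCulture)); // -123456.671875
```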
I use the decimal type for high precise calculation (monetary).
But I came across this simple division today:
1 / (1 / 37) which should result in 37 again
http://www.wolframalpha.com/input/?i=1%2F+%281%2F37%29
But C# gives me:
37.000000000000000000000000037M
I tried both these:
1m/(1m/37m);
and
Decimal.Divide(1, Decimal.Divide(1, 37))
but both yield the same result. How can this behaviour be explained?
Decimal stores the value as a decimal floating-point number with only limited precision. The result of 1 / 37 is not stored precisely; it's stored as 0.027027027027027027027027027M. The true number has the group 027 repeating indefinitely in its decimal representation. For that reason, you cannot get a precise decimal representation of every possible number.
If you use Double in the same calculation, the end result is correct in this case (but it does not mean it will always be better).
A good answer on that topic is here: Difference between decimal, float and double in .NET?
The decimal data type has an accuracy of 28-29 significant digits, so what you have to understand is that even with 28-29 significant digits you are still not exact.
So when you compute a decimal value for 1/37, note that you only ever get 28-29 digits of accuracy. For example, 1/37 is 0.02 when you keep 2 decimal digits and 0.027 when you keep 3. Imagine dividing 1 by each of those values: you get 50 in the first case and about 37.037 in the second. Carrying the 28-29 digits that decimal offers takes you to 37.000000000000000000000000037. To get an exact 37 you would simply need more significant digits than decimal offers.
Always do computations with the maximum number of significant digits, and round only the final answer, with Math.Round, to the desired result.
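A small sketch of the round trip described above:

```csharp
using System;
using System.Globalization;

// 1/37 is stored rounded to decimal's 28-29 significant digits...
decimal oneOver37 = 1m / 37m;
Console.WriteLine(oneOver37.ToString(CultureInfo.InvariantCulture));

// ...so dividing 1 by the rounded value doesn't recover 37 exactly.
decimal roundTrip = 1m / oneOver37;
Console.WriteLine(roundTrip.ToString(CultureInfo.InvariantCulture));
// 37.000000000000000000000000037

// Rounding only the final answer gives the expected 37.
Console.WriteLine(Math.Round(roundTrip, 6).ToString(CultureInfo.InvariantCulture));
```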
During the lunch break we started debating the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not guarantee this; it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated by the C# reference, the precision is 15 to 16 significant digits in total (depending on the value represented), spread across the digits before and after the decimal point.
In short, you are right: it depends on the digits before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague: the precision is relative to the value being represented, and sufficiently large doubles have no fractional digits of precision at all.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
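On .NET Core 3.0 or later you can step to the adjacent double with Math.BitIncrement to see this spacing directly (a sketch; on older frameworks you would manipulate the bits via BitConverter.DoubleToInt64Bits instead):

```csharp
using System;

// At 2^52 the spacing between adjacent doubles is exactly 1.0...
double x = 4503599627370496.0; // 2^52
Console.WriteLine(Math.BitIncrement(x)); // 4503599627370497

// ...and at 2^53 it grows to 2.0: no fractional digits remain, and
// not even every integer is representable any more.
double y = 9007199254740992.0; // 2^53
Console.WriteLine(Math.BitIncrement(y)); // 9007199254740994
```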
C# doubles are represented according to IEEE 754, with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has exactly one (binary) digit before the radix point, so the precision of its fractional part is fixed. The number of digits after the decimal point of a double, on the other hand, also depends on its exponent: numbers whose exponent exceeds the number of digits in the fractional part of the significand have no fractional part themselves.
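The fields can be pulled straight out of the bit pattern (a sketch assuming the standard IEEE 754 double layout: 1 sign bit, 11 exponent bits, 52 stored fraction bits):

```csharp
using System;

double d = 0.15625; // 5/32 = 1.01 (binary) * 2^-3

long bits = BitConverter.DoubleToInt64Bits(d);
int sign = (int)(bits >> 63) & 1;
int exponent = (int)((bits >> 52) & 0x7FF) - 1023; // remove the bias
long fraction = bits & 0xFFFFFFFFFFFFFL;           // 52 stored bits

// The leading 1 of the significand is implicit for normal numbers.
Console.WriteLine($"sign={sign} exponent={exponent} fraction={fraction:X}");
// sign=0 exponent=-3 fraction=4000000000000
```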
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, the precision is actually 15-17 significant digits. An example of a double with 17 digits of precision is 0.92107099070578813 (don't ask me how I got that number :P).