Double value to scientific notation with fixed exponent in C#

I have the value double value = 1427799000;
I would like to convert it to scientific notation where the exponent is always 10^11 (E+11). I have tried the following, but it does not work:
Console.WriteLine(value.ToString("00.##E+11", CultureInfo.InvariantCulture));
The output should be: 0.14 x 10^11 or 0.14E+11
How can I convert any double value to scientific notation with a fixed exponent? Here the fixed exponent is 11.

double value = 1427799000;
Console.WriteLine(value.ToString("G2", CultureInfo.InvariantCulture));
//output: 1.4E+09
The General ("G") Format Specifier
The general ("G") format specifier converts a number to the most
compact of either fixed-point or scientific notation, depending on the
type of the number and whether a precision specifier is present.
EDIT: Regarding your comment: you can't display scientific notation in that way; it is not defined like that! The coefficient must be greater than or equal to 1 and less than 10.
For the number 1.23 x 10^11 (article source):
The first number, 1.23, is called the coefficient. It must be greater than or equal to 1 and less than 10.
The second number is called the base. It must always be 10 in scientific notation. The base number 10 is always written in exponent form. In the number 1.23 x 10^11, the number 11 is referred to as the exponent or power of ten.
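That said, if you still want the 10^11 form, there is no built-in format specifier that pins the exponent, but you can compute the coefficient yourself by dividing by the desired power of ten. A minimal sketch (the helper name ToFixedExponent is made up for illustration, and it assumes a non-negative exponent):

using System;
using System.Globalization;

class FixedExponentDemo
{
    // Hypothetical helper, not a framework API: divide by 10^exponent to get
    // the coefficient, then append the fixed exponent text by hand.
    static string ToFixedExponent(double value, int exponent)
    {
        double coefficient = value / Math.Pow(10, exponent);
        return coefficient.ToString("0.####", CultureInfo.InvariantCulture)
            + "E+" + exponent.ToString("00", CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ToFixedExponent(1427799000, 11)); // 0.0143E+11
    }
}

Note that 1427799000 is 0.01427799 x 10^11, so the mathematically correct output is 0.0143E+11, not the 0.14E+11 given in the question.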

Related

Parsing float resulting in strange decimal values

I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off from what I'd expect, I struggle to believe it's a rounding issue.
However, double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38), so I don't see why it isn't parsed as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact value to 650512.56 (for example) is 650512.5625... which is then being shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
The mantissa of a float in C# has 23 bits, which means it can hold 6-7 significant digits. In your example, 650512.59, you have 8, and it is precisely that last digit which is 'wrong'. Double has a 52-bit mantissa (15-16 digits), so of course it shows all 8 or 9 of your significant digits correctly.
See here for more: Type float in C#
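A small demo of the difference, using the 650512.56 value discussed above (the exact console output may vary slightly by .NET version):

using System;
using System.Globalization;

class ParsePrecisionDemo
{
    static void Main()
    {
        string input = "650512.56";

        float f = float.Parse(input, CultureInfo.InvariantCulture);
        double d = double.Parse(input, CultureInfo.InvariantCulture);
        decimal m = decimal.Parse(input, CultureInfo.InvariantCulture);

        // Widening the float to double shows its actual stored value exactly.
        Console.WriteLine((double)f); // 650512.5625 -- the nearest float
        Console.WriteLine(d);         // 650512.56   -- double has digits to spare
        Console.WriteLine(m);         // 650512.56   -- decimal stores it exactly
    }
}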

Is there a way to format a C# double exactly? [duplicate]

This question already has answers here: Formatting doubles for output in C#
Is there a way to get a string showing the exact value of a double, with all the decimal places needed to represent its precise value in base 10?
For example (via Jon Skeet and Tony the Pony), when you type
double d = 0.3;
the actual value of d is exactly
0.299999999999999988897769753748434595763683319091796875
Every binary floating-point value (ignoring things like infinity and NaN) will resolve to a terminating decimal value. So with enough digits of precision in the output (55 in this case), you can always take whatever is in a double and show its exact value in decimal. And being able to show people the exact value would be really useful when there's a need to explain the oddities of floating-point arithmetic. But is there a way to do this in C#?
I've tried all of the standard numeric format strings, both with and without precision specified, and nothing gives the exact value. A few highlights:
d.ToString("n55") outputs 0.3 followed by 54 zeroes -- it does its usual "round to what you probably want to see" and then tacks more zeroes on the end. Same thing if I use a custom format string of 0.00000...
d.ToString("r") gives you a value with enough precision that if you parse it you'll get the same bit pattern you started with -- but in this case it just outputs 0.3. (This would be more useful if I was dealing with the result of a calculation, rather than a constant.)
d.ToString("e55") outputs 2.9999999999999999000000000000000000000000000000000000000e-001 -- some of the precision, but not all of it like I'm looking for.
Is there some format string I missed, some other library function, or a NuGet package that is able to convert a double to a string with full precision?
You could try using # placeholders if you want to suppress trailing zeroes, and avoid scientific notation. Though you'll need a lot of them for very small numbers, e.g.:
Console.WriteLine(double.Epsilon.ToString("0.########....###"));
I believe you can do this, based on what you want to accomplish with the display:
Consider this:
double myDouble = 10.0 / 3.0; // note: 10/3 would be integer division and yield 3
Console.WriteLine(myDouble.ToString("G17"));
Your output will be:
3.3333333333333335
See this link for why: http://msdn.microsoft.com/en-us/library/kfsatb94(v=vs.110).aspx
By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision.
You can also do:
myDouble.ToString("n16");
That will discard the 16th and 17th noise digits, and return the following:
3.3333333333333300
If you're looking to display the actual variable value as a number, you'll likely want to use "G17". If you're trying to display a numerical value being used in a calculation with high precision, you'll want to use "n16".
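If you want the fully exact expansion rather than G17's round-trip digits, one approach (a sketch, not a built-in API) is to pull the double apart with BitConverter and do the arithmetic exactly with System.Numerics.BigInteger; it relies on the fact that value = mantissa * 2^exponent and that 1/2^k = 5^k/10^k:

using System;
using System.Numerics;

class ExactDoubleString
{
    // Exact base-10 expansion of a finite double (NaN/Infinity not handled).
    static string ToExactString(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int biasedExponent = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFFL;

        int exponent;
        if (biasedExponent == 0)
        {
            exponent = -1074;         // subnormal: no implicit leading 1
        }
        else
        {
            mantissa |= 1L << 52;     // normal: restore the implicit leading 1
            exponent = biasedExponent - 1075;
        }

        // value = mantissa * 2^exponent; for a negative exponent, multiply by
        // 5^k so the division becomes a shift of the decimal point by k places.
        int k = exponent < 0 ? -exponent : 0;
        BigInteger scaled = new BigInteger(mantissa) * BigInteger.Pow(5, k);
        if (exponent > 0)
            scaled <<= exponent;

        string digits = scaled.ToString();
        string result = k == 0 ? digits
            : digits.Length > k ? digits.Insert(digits.Length - k, ".")
            : "0." + digits.PadLeft(k, '0');

        if (result.Contains("."))
            result = result.TrimEnd('0').TrimEnd('.');
        return negative ? "-" + result : result;
    }

    static void Main()
    {
        Console.WriteLine(ToExactString(0.3));
        // 0.299999999999999988897769753748434595763683319091796875
    }
}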

C# double type formatting

I'm trying to convert C# double values to string of exponential notation. Consider this C# code:
double d1 = 0.12345678901200021;
Console.WriteLine(d1.ToString("0.0####################E0"));
//outputs: 1.23456789012E-1 (expected: 1.2345678901200021E-1)
Can anyone tell me the format string to output "1.2345678901200021E-1" from double d1, if it's possible?
Double values only hold 15-16 significant digits, and you have 17 (if I counted right). Since a 64-bit double holds only about 16 digits, your last digit is lost, which is why it appears truncated when you convert the number to scientific notation.
You should use Decimal instead. Decimal types can hold 128 bits of data, while double can only hold 64 bits.
According to the documentation for double.ToString(), double doesn't have the precision:
By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision.
Console.WriteLine(d1) should show you that double doesn't support your wanted precision. Use decimal instead (64bit vs 128bit).
My immediate window is saying that the maximum resolution you can expect from that double number is about 15 digits.
My VS2012 immediate window is saying that the resolution of 0.12345678901200021 is actually 16 significant digits:
0.1234567890120002
Therefore we expect that at least the last "2" digit should be reported in the string.
However if you use the "G17" format string:
0.12345678901200021D.ToString("G17");
you will get a string with the full 17-digit precision.
See this answer.
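In other words, the digits are reachable without switching to decimal; it's only that the custom pattern caps the precision. A quick check (my expected outputs, assuming the usual behaviour of the "E16" and "G17" specifiers):

using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        double d1 = 0.12345678901200021;

        // "E16": one digit before the point plus 16 after = 17 significant
        // digits. Note the "E" specifier pads the exponent to three digits.
        Console.WriteLine(d1.ToString("E16", CultureInfo.InvariantCulture));
        // 1.2345678901200021E-001

        // "G17" always emits enough digits to round-trip a double.
        Console.WriteLine(d1.ToString("G17", CultureInfo.InvariantCulture));
        // 0.12345678901200021
    }
}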

Precision of double after decimal point

In the lunch break we started debating about the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 makes no guarantee about this; it depends on where the first 1 is in the binary representation (i.e. the size of the number before the decimal point counts, too).
How can one make a more qualified statement?
As stated by the C# reference, the precision is from 15 to 16 digits (depending on the decimal values represented) before or after the decimal point.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think of double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
C# doubles are represented according to IEEE 754 with a 53-bit significand p (or mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has one digit before the binary point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent; numbers whose exponent exceeds the number of digits in the fractional part of the significand have no fractional part themselves.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
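A quick way to see how the precision scales with magnitude is to measure the gap to the next representable double (Math.BitIncrement requires .NET Core 3.0 or later):

using System;

class UlpDemo
{
    static void Main()
    {
        double small = 1.0;
        double large = 4503599627370496.0; // 2^52

        // The gap between adjacent doubles (one "ulp") grows with magnitude.
        Console.WriteLine(Math.BitIncrement(small) - small); // 2.220446049250313E-16
        Console.WriteLine(Math.BitIncrement(large) - large); // 1 -- whole units apart
    }
}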

Wasn't the double type's precision supposed to be 15 digits in C#?

I was testing this code from Brainteasers:
double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1 - d2) == 1.0);
And the result is "False". When I change the data type:
decimal d1 = 1.000001M;
decimal d2 = 0.000001M;
decimal d3 = d1-d2;
Console.WriteLine(d3 == 1);
The program writes the correct answer: "True".
This problem only uses 6 digits after the decimal point. What happened to the 15 digits of precision?
This has nothing to do with precision - it has to do with representational rounding errors.
System.Decimal is capable of representing large floating point numbers with a significantly reduced risk of incurring any rounding errors like the one you are seeing. System.Single and System.Double are not capable of this and will round these numbers off and create issues like the one you are seeing in your example.
System.Decimal uses a scaling factor to hold the position of the decimal place thus allowing for exact representation of the given floating-point number, whereas System.Single and System.Double only approximate your value as best they can.
For more information, please see System.Double:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
• Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
• A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
Generally, the way to check for equality of floating-point values is to check for near-equality, i.e., check for a difference that is close to the smallest value (called epsilon) for that datatype. For example,
if (Math.Abs(d1 - d2) <= Double.Epsilon) ...
This tests whether d1 and d2 are represented by the same bit pattern, give or take the least significant bit.
Correction (Added 2 Mar 2015)
Upon further examination, the code should be more like this:
// Assumes that d1 and d2 are not both zero
if (Math.Abs(d1 - d2) / Math.Max(Math.Abs(d1), Math.Abs(d2)) <= Double.Epsilon) ...
In other words, take the absolute difference between d1 and d2, then scale it by the largest of d1 and d2, and then compare it to Epsilon.
References
• http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
• http://msdn.microsoft.com/en-us/library/system.double.aspx#Precision
The decimal type implements decimal floating point whereas double is binary floating point.
The advantage of decimal is that it behaves as a human would with respect to rounding, and if you initialise it with a decimal value, then that value is stored precisely as you specified. This is only true for decimal numbers of finite length that are within the representable range and precision. If you initialised it with, say, 1.0M/3.0M, then it would not be stored precisely, just as you cannot write 0.3333-recurring exactly on paper.
If you initialise a binary FP value with a decimal, it will be converted from the human readable decimal form, to a binary representation that will seldom be precisely the same value.
The primary purpose of the decimal type is implementing financial applications; in the .NET implementation it also has far higher precision than double. However, binary FP is directly supported by the hardware, so it is significantly faster than decimal FP operations.
Note that double is accurate to approximately 15 significant digits, not 15 decimal places. d1 is initialised with a value of 7 significant digits, not 6, while d2 has only 1 significant digit. The fact that they are of significantly different magnitudes does not help either.
Floating-point numbers are, by design, not precise to a particular number of decimal digits. If you want that sort of behaviour, you should look at the decimal data type.
The precision isn't absolute, because it's not possible to convert between decimal and binary numbers exactly.
For example, 0.1 in decimal repeats forever when represented in binary: it converts to 0.000110011001100110011..., recurring. No amount of precision will store that exactly.
Avoid comparing floating-point numbers for equality.
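Putting the two threads together, here is the original comparison next to a tolerance-based check; the 1e-9 relative tolerance is an arbitrary choice for illustration (Double.Epsilon, the smallest positive double, is usually far too strict for this):

using System;

class NearEqualityDemo
{
    static void Main()
    {
        double d1 = 1.000001;
        double d2 = 0.000001;
        double diff = d1 - d2;

        Console.WriteLine(diff == 1.0);          // False
        Console.WriteLine(diff.ToString("G17")); // 0.99999999999999989 or similar

        // Compare with a relative tolerance scaled to the operands' magnitude.
        double tolerance = 1e-9 * Math.Max(Math.Abs(diff), 1.0);
        Console.WriteLine(Math.Abs(diff - 1.0) <= tolerance); // True
    }
}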
