Sorry for the daft question, but I get back this value from the database
"7.545720553985866E+29"
I need to convert this value to a decimal, rounded to 6 digits. What is the best way to do that? I tried
var test = double.Parse("7.545720553985866E+29");
test = Math.Round(test, 6);
var test2 = Convert.ToDecimal(test);
but the value remains unchanged and the conversion crashes.
Math.Round rounds to N digits to the right of the decimal point. Your number has NO digits to the right of the decimal (it is equivalent to 754,572,055,398,586,600,000,000,000,000), so rounding it does not change the value.
If you want to round to N significant digits then look at some of the existing answers:
Round a double to x significant figures
Rounding the SIGNIFICANT digits in a double, not to decimal places
the conversion crashes.
That's because the value is too large for a decimal. The largest value a decimal can hold is 7.9228E+28 - your value is about 10 times larger than that.
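For illustration, here is a small sketch (mine, not part of the original answer) that makes the overflow visible by comparing against decimal.MaxValue:

// decimal.MaxValue is 79228162514264337593543950335, roughly 7.9228E+28
double test = double.Parse("7.545720553985866E+29");
Console.WriteLine(decimal.MaxValue);
Console.WriteLine(test > (double)decimal.MaxValue); // True - the value cannot fit in a decimal
try
{
    var test2 = Convert.ToDecimal(test); // this is the call that throws
}
catch (OverflowException ex)
{
    Console.WriteLine(ex.Message); // "Value was either too large or too small for a Decimal."
}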
Maybe you can take a substring of it first and then parse that:
var test = "7.545720553985866E+29".Substring(0, 8); // "7.545720"
var test2 = Math.Round(decimal.Parse(test), 6);     // 7.545720
You can use this to round to 6 significant digits:
round(test, 6 - int(math.log10(test)))
The resulting value from that is
7.545721e+29
This works by using log10 from the math module to get the power of 10 in test, rounding it down to an integer, subtracting that from 6, and then using round to get the desired digits.
As noted by others, round works to the given number of decimal places. The log10 and the rest figures how many decimal places are needed to get the desired number of significant digits. If the decimal places are negative, round rounds to the left of the decimal point.
You should be aware that log10 is not perfectly accurate and taking the int of that may be off from the expected value by one. This happens rarely but it does happen. Also, even if the computed value is correct, converting the value to string (such as when you print it) may give a different-than-expected result. If you need perfect accuracy you would be better off working from the string representation of the value.
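The snippet above is written in Python-style pseudocode; a rough C# equivalent (my own translation, not from the answer) has to scale manually, because Math.Round only accepts between 0 and 15 fractional digits:

// Keeps the requested number of significant digits; the same log10 caveat applies.
static double RoundToSignificantDigits(double value, int significantDigits)
{
    if (value == 0) return 0;
    int magnitude = (int)Math.Floor(Math.Log10(Math.Abs(value))); // 29 for 7.54...E+29
    double scale = Math.Pow(10, magnitude - significantDigits + 1);
    return Math.Round(value / scale) * scale;
}

// RoundToSignificantDigits(7.545720553985866E+29, 7) gives approximately 7.545721E+29,
// matching the result shown above (one leading digit plus six digits after the decimal point).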
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 23 hours ago.
I have a project that should calculate an equation, and it checks every number to see if it fits the equation. Each time it increments the number by 0.00001, but sometimes the next decimals come out slightly off, for example 0.000019999999997.
I even tried a breakpoint at that line. When I am right on that line it is 1.00002, for example, but when I go to the next line it is 1.00002999999997.
I don't know why it is like that. I even tried smaller steps like 0.001.
List<double> anwsers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;

while (i <= endingPoint)
{
    prei = i;
    i += 0.001;
}
I added a call to Math.Round to round the result to 3 decimal places before adding it to the answers list. This ensures the values in the list display with at most 3 digits past the decimal point (the underlying double is still only the closest binary approximation of that rounded value).
List<double> answers = new List<double>();
double i = startingPoint;
double prei = 0;
double preAbsoluteValueOfResult = 0;

while (i <= endingPoint)
{
    prei = i;
    i += 0.001;

    double result = Math.Round(prei, 3); // round to 3 decimal places before storing
    answers.Add(result);
}
Computers don't store floating-point numbers exactly. A float is usually 32 or 64 bits and cannot store numbers to arbitrary precision.
If you want arbitrary-precision floating point, use a number library like GMP.
There are some good videos on this: https://www.youtube.com/watch?v=2gIxbTn7GSc
But essentially it's because there aren't enough bits. If you store a number like 1/3, only so many decimal places can be kept, so it's actually going to be something like 0.3334. Which means when you do something like 1/3 + 1/3, it isn't going to equal 2/3 like one might expect; it would equal 0.6668.
To summarize, using the decimal type https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=net-7.0 instead of double should fix your issue.
TL;DR: You should use a decimal data type instead of a float or double. You can declare your 0.001 as a decimal by adding an m after the value: 0.001m
The double data type you chose represents numbers as a binary fraction (an integer significand times a power of two). It is great for covering a huge range of values with little memory, but it also means your number gets rounded to the closest value that can be represented by such a fraction. A decimal, on the other hand, stores a base-10 scaled integer, which more closely matches what you intuitively expect from decimal numbers.
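As a minimal sketch (my own illustration, not from the answer) of the difference in a loop like yours:

double d = 0;
decimal m = 0m;

for (int n = 0; n < 1000; n++)
{
    d += 0.001;   // binary floating point: each step adds a tiny representation error
    m += 0.001m;  // decimal: 0.001m is stored exactly
}

Console.WriteLine(d.ToString("G17")); // something like 1.0000000000000007
Console.WriteLine(m);                 // 1.000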
More information about float values: https://floating-point-gui.de/
More information about numeric types declaration: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
The documentation also explains:
Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or Math.PI), binary fractions are unable to represent some fractional values. For example, 1/10, which is represented precisely by .1 as a decimal fraction, is represented by .001100110011 as a binary fraction, with the pattern "0011" repeating to infinity. In this case, the floating-point value provides an imprecise representation of the number that it represents. Performing additional mathematical operations on the original floating-point value often tends to increase its lack of precision.
This question already has answers here:
How to calculate float type precision and does it make sense?
(4 answers)
Closed 1 year ago.
I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak, and for the long explanation.
I know a float (e.g. 10.01F) has a precision of 6 to 9 digits. So, let's say we have the following code:
float myFloat = 1.000001F;
Console.WriteLine(myFloat);
I get the exact number in the console. Now, let's use the following code:
myFloat = 1.00000006F;
Console.WriteLine(myFloat);
A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.
This is my first doubt. Does precision depend on the number itself or on the computer's architecture?
Furthermore, data is stored as bits in the computer. Bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming it back to decimal. For example:
(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
My logic after this is: maybe a float number doesn't lose information when stored, maybe such information is lost when the number is converted back to decimal to show it to the user.
E.g.
float myFloat = 999999.11F + 1.11F;
The result of the above should be: 1000000.22. However, since this number exceeds the precision of a float, I should see a different number, which indeed happens: 1000000.25
There is a 0.03 difference. In order to see if the actual result is 1000000.22, I tried the following condition:
if (myFloat == 1000000.22F) {
    Console.WriteLine("Real result = 1000000.22");
}
And it actually prints it: Real result = 1000000.22.
So... does the information loss occur when converting the bits back to decimal, or does it also happen at the lower levels of computing and my example was just a coincidence?
1.000001F in source code is converted to the float value 8,388,616•2^−23, which is 1.00000095367431640625.
1.00000006F in source code is converted to the float value 8,388,609•2^−23, which is 1.00000011920928955078125.
Console.WriteLine shows only some of the value of these; it rounds its display to a limited number of digits, by default.
999999.11F is converted to 15,999,986•2^−4, which is 999,999.125. 1.11F is converted to 9,311,355•2^−23, which is 1.11000001430511474609375. When these are added using real-number mathematics, the result is 8,388,609,971,323•2^−23. That cannot be represented in a float, because the fraction portion of a float (called the significand) can only have 24 bits, so its maximum value as an integer is 16,777,215. If we divide that significand by 2^19 to reduce it to that limit, we get approximately 8,388,609,971,323/2^19 • 2^−23•2^19 = 16,000,003.76•2^−4. Rounding that significand to an integer produces 16,000,004•2^−4. So, when those two numbers are added, float arithmetic rounds the result and produces 16,000,004•2^−4, which is 1,000,000.25.
So... does the information loss occur when converting the bits back to decimal, or does it also happen at the lower levels of computing and my example was just a coincidence?
Converting a decimal numeral to floating-point generally introduces a rounding error.
Adding floating-point numbers generally introduces a rounding error.
Converting a floating-point number to a decimal numeral with limited precision generally introduces a rounding error.
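A short sketch (my own, reusing the numbers from this answer) that makes those roundings visible:

float a = 999999.11F;  // stored as 999999.125
float b = 1.11F;       // stored as roughly 1.1100000143
float sum = a + b;     // float addition rounds the true sum to 1000000.25

Console.WriteLine(a.ToString("G9"));   // 999999.125
Console.WriteLine(sum.ToString("G9")); // 1000000.25
Console.WriteLine(sum == 1000000.22F); // True: 1000000.22F itself also rounds to 1000000.25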
The rounding occurs both when you write 1000000.22F in your code (the compiler must find the exponent and mantissa that give a result closest to the decimal number you typed), and again when converting to decimal to display.
There isn't any decimal/binary type of rounding in the actual arithmetic operations, although arithmetic operations do have rounding error related to the limited number of mantissa bits.
I'm attempting to parse a string with 2 decimal places as a float.
The problem is, the resultant object has an incorrect mantissa.
As it's quite a bit off what I'd expect, I struggle to believe it's a rounding issue.
However double seems to work.
This value does seem to be within the range of a float (-3.4 × 10^38 to +3.4 × 10^38) so I don't see why it doesn't parse it as I'd expect.
I tried a few more tests, but they don't make what's happening any clearer to me.
From the documentation for System.Single:
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
It's not a matter of the range of float - it's the precision.
The closest exact value to 650512.56 (for example) is 650512.5625... which is then being shown as 650512.5625 in the watch window.
To be honest, if you're parsing a decimal number, you should probably use decimal to represent it. That way, assuming it's in range and doesn't have more than the required number of decimal digits, you'll have the exact numeric representation of the original string. While you could use double and be fine for 9 significant digits, you still wouldn't be storing the exact value you parsed - for example, "0.1" can't be exactly represented as a double.
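To illustrate, a sketch of my own (using the 650512.56 value mentioned below as an assumed example):

string input = "650512.56";

float f = float.Parse(input);
double d = double.Parse(input);
decimal m = decimal.Parse(input);

Console.WriteLine(f.ToString("G9"));  // stored float value is exactly 650512.5625
Console.WriteLine(d.ToString("G17")); // stored double is something like 650512.56000000006
Console.WriteLine(m);                 // 650512.56, stored exactly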
The mantissa of a float in C# has 23 explicitly stored bits (24 with the implicit leading bit), which means it can hold 6-7 significant decimal digits. In your example 650512.59 you have 8, and it is just that last digit which is 'wrong'. Double has a 52-bit mantissa (15-16 decimal digits), so of course it will show all your 8 or 9 significant digits correctly.
See here for more: Type float in C#
Over the lunch break we started debating the precision of the double value type.
My colleague thinks it always has 15 places after the decimal point.
In my opinion one can't tell, because IEEE 754 does not make assumptions
about this and it depends on where the first 1 is in the binary
representation. (i.e. the size of the number before the decimal point counts, too)
How can one make a more qualified statement?
As stated by the C# reference, the precision is from 15 to 16 digits (depending on the decimal values represented) before or after the decimal point.
In short, you are right, it depends on the values before and after the decimal point.
For example:
12345678.1234567D //Next digit to the right will get rounded up
1234567.12345678D //Next digit to the right will get rounded up
Full sample at: http://ideone.com/eXvz3
Also, trying to think about double values as fixed decimal values is not a good idea.
You're both wrong. A normal double has 53 bits of precision. That's roughly equivalent to 16 decimal digits, but thinking of double values as though they were decimals leads to no end of confusion, and is best avoided.
That said, you are much closer to correct than your colleague--the precision is relative to the value being represented; sufficiently large doubles have no fractional digits of precision.
For example, the next double larger than 4503599627370496.0 is 4503599627370497.0.
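A small sketch (mine) using that same number to show the point; at this magnitude the gap between adjacent doubles is already 1.0:

double big = 4503599627370496.0; // 2^52

Console.WriteLine(big + 0.5 == big);             // True - the sum rounds back down; 0.5 is below the 1.0 spacing
Console.WriteLine((big + 1.0).ToString("F0"));   // 4503599627370497 - the next representable double
Console.WriteLine(Math.BitIncrement(big) - big); // 1 - the gap (ulp) at this magnitude; needs .NET Core 3.0 or later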
C# doubles are represented according to IEEE 754 with a 53-bit significand p (also called the mantissa) and an 11-bit exponent e, which has a range between -1022 and 1023. Their value is therefore
p * 2^e
The significand always has exactly one binary digit before the radix point, so the precision of its fractional part is fixed. On the other hand, the number of digits after the decimal point in a double also depends on its exponent; numbers whose exponent exceeds the number of digits in the fractional part of the significand do not have a fractional part at all.
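For the curious, a sketch (my own, not from the answer) that extracts those fields with BitConverter to show the p * 2^e structure:

double x = 0.001;
long bits = BitConverter.DoubleToInt64Bits(x);

long fractionBits = bits & 0xFFFFFFFFFFFFFL;        // low 52 bits of the significand (the leading 1 is implicit)
int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  // 11 exponent bits, with the bias removed

Console.WriteLine(exponent);                                            // -10, since 0.001 is about 1.024 * 2^-10
Console.WriteLine(Convert.ToString(fractionBits, 2).PadLeft(52, '0'));  // the 52 stored fraction bits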
What Every Computer Scientist Should Know About Floating-Point Arithmetic is probably the most widely recognized publication on this subject.
Since this is the only question on SO that I could find on this topic, I would like to make an addition to jorgebg's answer.
According to this, precision is actually 15-17 digits. An example of a double with 17 digits of precision would be 0.92107099070578813 (don't ask me how I got that number :P)
How would I round the following number in C# to obtain the following results.
Value : 500.0349999999
After Rounding to 2 digits after decimal : 500.04
I have tried Math.Round(Value,2, MidpointRounding.AwayFromZero); //but it returns the value 500.03 instead of 500.04
You're asking for non-standard rounding rules. The value 500.03499999999 rounded to the nearest hundredth should be 500.03. Since the thousandths digit is less than 5, the hundredths digit remains unchanged.
One way I can see to achieve your desired result is to round the number to the decimal place one smaller than what you ultimately want. Then round that result to the precision you want.
In your example, you would round the value to 3 decimal places, resulting in 500.035. You would then round that to 2 decimal places, which should result in 500.04 (assuming you're using MidpointRounding.AwayFromZero).
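For example, a quick sketch (my own code, not from the original answer) of that two-step rounding:

double value = 500.0349999999;

double step1 = Math.Round(value, 3, MidpointRounding.AwayFromZero); // 500.035
double step2 = Math.Round(step1, 2, MidpointRounding.AwayFromZero); // 500.04

Console.WriteLine(step2); // 500.04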
Hope that helps.
You could make it needlessly complex and wrap a variable Round(Decimal, Int32) call in a for loop, decrementing the Int32 to work its way back to the decimal precision needed. It's a bit of work, but like Eric said, you are asking for non-standard rounding rules.
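Roughly, a sketch of that loop (my own interpretation, assuming we start from 10 decimal places):

double value = 500.0349999999;

// Round repeatedly, one decimal place at a time, until we reach 2 places.
for (int digits = 10; digits >= 2; digits--)
{
    value = Math.Round(value, digits, MidpointRounding.AwayFromZero);
}

Console.WriteLine(value); // 500.04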
Extra details: http://msdn.microsoft.com/en-us/library/ms131274.aspx