The result should be 806603.77, so why do I get 806603.8?
float a = 855000.00f;
float b = 48396.23f;
float res = a - b;
Console.WriteLine(res);
Console.ReadKey();
You should use decimal instead. A float is a 32-bit type with only about 7 digits of precision, which is why the result differs; a decimal, on the other hand, is a 128-bit type with 28-29 digits of precision.
decimal a = 855000.00M;
decimal b = 48396.23M;
decimal res = a - b;
Console.WriteLine(res);
Console.ReadKey();
Output: 806603.77
A float (also called System.Single) has a precision equivalent to approximately seven decimal digits. Your res difference needs eight significant decimal digits, so it is to be expected that a float does not have enough precision to hold it.
ADDITION:
Some extra information: Near 806,000 (806 thousand), a float only has four bits left for the fractional part. So for res it will have to choose between
806603 + 12/16 == 806603.75000000, and
806603 + 13/16 == 806603.81250000
It chooses the first one since it's closest to the ideal result. But both of these values are output as "806603.8" when calling ToString() (which Console.WriteLine(float) does call). A maximum of 7 significant decimal figures are shown with the general ToString call. To reveal that two floating-point numbers are distinct even though they print the same with the standard formatting, use the format string "R", for example
Console.WriteLine(res.ToString("R"));
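With the subtraction above, res.ToString("R") prints 806603.75, revealing the value the float actually stores rather than the 7-digit rounding.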
Because float has limited precision (32 bits). Use double or decimal if you want more precision.
Please be aware that just blindly using Decimal isn't good enough.
Read the link posted by Oded: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Only then decide on the appropriate numeric type to use.
Don't fall into the trap of thinking that just using Decimal will give you exact results; it won't always.
Consider the following code:
Decimal d1 = 1;
Decimal d2 = 101;
Decimal d3 = d1 / d2;
Decimal d4 = d3 * d2; // d4 = (d1/d2) * d2, so d4 should equal d1
if (d4 == d1)
{
    Console.WriteLine("Yay!");
}
else
{
    Console.WriteLine("Urk!");
}
If Decimal calculations were exact, that code should print "Yay!" because d1 should be the same as d4, right?
Well, it doesn't.
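For the curious, printing the intermediate values shows where the rounding happens (the outputs below assume the standard 28-29 digit decimal rounding):

Console.WriteLine(d3); // 0.0099009900990099009900990099 (the repeating expansion is cut off at 28 digits)
Console.WriteLine(d4); // 0.9999999999999999999999999999 (not quite 1)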
Also be aware that Decimal calculations are substantially slower than double calculations (decimal arithmetic is done in software, with no hardware support). They are not always suitable for non-currency calculations (e.g. calculating pixel offsets or physical quantities such as velocities, or anything involving transcendental numbers, and so on).
I need to write a program whose input is a double (a variable called money), and print the digits before the decimal point and the digits after it separately.
For example:
for the input 36.5 it should print: The number before the decimal point is: 36 The number after the decimal point is: 5
for the input 25.4 it should print: The number before the decimal point is: 25 The number after the decimal point is: 4
Console.WriteLine("Enter money:");
double money = double.Parse(Console.ReadLine());
int numBeforePoint = (int)money;
double numAfterPoint = (money - (int)money)*10;
Console.WriteLine("The number beforethe decimal point is: {0}. the number after the decimal point is: {1}",numBeforePoint,numAfterPoint);
If I enter 25.4 it prints: The number before the decimal point is: 25. The number after the decimal point is: 3.9999999
I don't want 3.9999999; I want 4.
You should use decimal rather than double to represent money - it's what the type was designed for!
You've been the victim of a floating-point error: the value you assign can't be represented exactly at the type's precision, and the 3.999... you get is the closest value it can represent.
decimals have a lower range than doubles, but much higher precision - this means they're more likely to be able to represent the values you're assigning. See here or the linked decimal documentation page for more details.
Note that a more conventional way of getting the decimal part involves Math.Truncate (which by the way will work for negative values as well):
decimal numAfterPoint = (money - Math.Truncate(money))*10;
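Putting it together, a minimal sketch of the whole program using decimal (it assumes, like the question's examples, a single digit after the decimal point):

Console.WriteLine("Enter money:");
decimal money = decimal.Parse(Console.ReadLine());
int numBeforePoint = (int)Math.Truncate(money);
// one digit after the point, as in the question's examples
int numAfterPoint = (int)((money - Math.Truncate(money)) * 10);
Console.WriteLine("The number before the decimal point is: {0}. The number after the decimal point is: {1}", numBeforePoint, numAfterPoint);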
It's probably easiest to use the string representation of the number and take substrings before and after the index of '.'.
Something like this:
string money = Console.ReadLine();
int decimalIndex = money.IndexOf('.');
string numBeforePoint = money.Substring(0, decimalIndex);
string numAfterPoint = money.Substring(decimalIndex + 1);
Then you can parse the string representations as needed.
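For example, to get the parts back as integers (this assumes the input actually contains a '.'):

int before = int.Parse(numBeforePoint); // 25 for "25.4"
int after = int.Parse(numAfterPoint);   // 4 for "25.4"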
Try this:
using System.Globalization;

static string Foo(double d)
{
    var str = d.ToString(CultureInfo.InvariantCulture).Split('.');
    var left = str[0];
    var right = str.Length > 1 ? str[1] : "0"; // whole numbers format without a '.'
    return $"The number before the decimal point is: {left} The number after decimal point is: {right}";
}
using System.Globalization;
using System.Linq;

public static string GetDecimalRemainder(double d)
{
    // Note: if d has no fractional part, there is no '.' and Last() returns the whole integer string
    return d.ToString(CultureInfo.InvariantCulture).Split('.').Last();
}
Using LINQ is much more convenient in my opinion.
I have a string that can be up to 9 characters long, including an optional decimal point; all the other characters will be digits. It could be "123456789" or "12.345678", for example.
What variable type should I convert it to so that I can use it in calculations?
And how do I do that?
float.Parse("12.345678");
or
float.Parse("12.345678", CultureInfo.InvariantCulture.NumberFormat);
The second form avoids culture-dependent parsing, which can lead to outputs like:
1.524157875019e+16
8.10000007371e-9
For integers, you can also check out this link: https://msdn.microsoft.com/en-us/library/bb397679.aspx
You should convert it to float, double or decimal, depending on how big the numbers become.
You can use Parse() or TryParse() to parse a string to an arithmetic type.
string numberString = "123456789";
double number;

if (!double.TryParse(numberString, out number))
{
    // There was an error parsing ...
    // Ex. report the error back or whatever ...
    // You can also set a default value for it ...
    // Ex. number = 0;
}

// Use number ...
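Since the string may contain a '.' regardless of the machine's regional settings, it can be safer to parse with the invariant culture; a sketch:

using System.Globalization;

string numberString = "12.345678";
decimal number;
if (decimal.TryParse(numberString, NumberStyles.Number, CultureInfo.InvariantCulture, out number))
{
    // number is 12.345678m regardless of the current culture
}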
It's a question of precision and a bit of memory consumption.
If the fractional part matters to you, use one of the following:
float - 4 bytes, ~7 digits of precision
double - 8 bytes, ~15-16 digits of precision
decimal - 16 bytes, ~28-29 digits of precision
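A quick way to see those limits side by side (the exact digits printed vary slightly by runtime):

Console.WriteLine(1f / 3f); // 0.3333333  (~7 digits)
Console.WriteLine(1d / 3d); // 0.333333333333333 (~15-16 digits)
Console.WriteLine(1m / 3m); // 0.3333333333333333333333333333 (~28-29 digits)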
Given 2 values like so:
decimal a = 0.15m;
decimal b = 0.85m;
Where a + b will always be 1.0m, both values are only specified to 2 decimal places and both values are >= 0.0m and <= 1.0m
Is it guaranteed that x == total will always be true, for all possible Decimal values of x, a and b? Using the calculation below:
decimal x = 105.99m;
decimal total = (x * a) + (x * b);
Or are there cases where x == total only to 2 decimal places, but not beyond that?
Would it make any difference if a and b could be specified to unlimited decimal places (as much as Decimal allows), but as long as a + b = 1.0m still holds?
Decimal is stored as a sign, a 96-bit integer, and an exponent that records where the decimal point goes (a power of ten). So long as the integral portion of the number (e.g. the 105 in 105.99) is not too large, x * a and x * b are computed exactly, so (x * a) + (x * b) will have the correct value to four decimal places and will equal x.
Unlike float and double, no precision is lost as long as the value fits within the data type's 128 bits.
From MSDN:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors.
The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1:
decimal dividend = Decimal.One;
decimal divisor = 3;
// The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend/divisor * divisor);
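Applied to the question's numbers, a quick check (a sketch; the outputs assume standard decimal semantics):

decimal a = 0.15m;
decimal b = 0.85m;
decimal x = 105.99m;
decimal total = (x * a) + (x * b); // 15.8985 + 90.0915 == 105.9900, computed exactly
Console.WriteLine(x == total);     // True: decimal equality ignores the extra trailing zeros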
The maximum precision of decimal in the CLR is 29 significant digits. When you work close to that precision you are really dealing with approximations, especially if you multiply, because multiplication requires intermediate results that the CLR must still be able to represent (see also http://msdn.microsoft.com/en-us/library/364x0z75.aspx).
If you have x with 2 significant digits and, say, a with 20 significant digits, then x * a already needs at least 22 significant digits, and intermediate results may need more.
If x always has only 2 significant digits and you can keep the number of significant digits in a and b low enough (say, 22 digits -- comfortably below the maximum, leaving room for rounding errors), then (x * a) + (x * b) should always be a precise calculation.
Finally, the fact that a + b always adds up to 1.0m says nothing about the individual precisions of a and b.
I have a method that tests whether a value is within the range allowed for a field. It returns null if the value is outside the range, and the value itself if it is inside.
internal float? ExtractMoneyInRangeAndPrecision(string fieldValue, string fieldName, float min, float max, int scale, int lineNumber)
{
    float returnValue;
    // Check whether it is a valid float
    if (float.TryParse(fieldValue, out returnValue))
    {
        // Check whether it is in range
        if (returnValue >= min && returnValue <= max)
        {
            // Check the number of digits after the decimal point against the scale
            int decPosition = fieldValue.IndexOf('.');
            if (decPosition == -1 ||
                fieldValue.Substring(decPosition, fieldValue.Length - decPosition).Length - 1 <= scale)
            {
                return returnValue;
            }
        }
    }
    return null;
}
Here is my unit test:
[TestMethod()]
[DeploymentItem("ImporterEngine.dll")]
public void ExtractMoneyInRangeAndPrecisionTest_OutsideRange()
{
MockSyntaxValidator target = new MockSyntaxValidator("", 0);
string fieldValue = "1000000";
string fieldName = "";
float min = 1;
float max = 999999.99f;
int scale = 2;
int lineNumber = 0;
float? Int16RangeReturned;
Int16RangeReturned = target.ExtractMoneyInRangeAndPrecision(fieldValue, fieldName, min, max, scale, lineNumber);
Assert.IsNull(Int16RangeReturned);
}
As you can see, the max is 999999.99, but when the method takes it in, the value becomes 1,000,000.
Why is this?
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
In short, because of the way floating-point numbers represent real numbers, the number you assign to a float is not always the number you get back out. The value you specify is converted to the nearest value that can be represented in binary scientific notation (an integer significand times a power of 2).
In the case of 999999.99, the nearest value a float can represent is 2^-4 * 16000000 = 1,000,000.00 exactly; the next float below, 999999.9375, is further away. Error like this is inherent in the use of floats.
Do not use floating-point types in situations where the accuracy of the number at a given level of precision is required. Instead, use the decimal type, which will retain the precision of values entered.
Not every decimal string can be represented exactly as a float, and 999999.99 is one such number. A decimal would solve this.
The float type doesn't have enough precision to do what you want to do. I would recommend using the decimal type. Floats can be accurate to 7 decimal digits at most, and you're using 8 here. Decimal can have up to 28 digits, which is more than enough for any amount. Moreover, unlike float, the value the compiler uses and the value you write will always be the same.
Here's the long explanation:
Floats (single-precision floating-point numbers) are stored as an integer times a power of two, where the integer is in a certain range (between 2^23 and 2^24).
When you write a decimal number in your code, the compiler interprets this as the number in this form that is closest to the number you wrote. Sometimes the match is exact (999999.75, for example). In other cases, your number needs to be rounded to the closest floating-point number. This is what happened here:
999999.99 = 2^19 * 1.907348613739013671875
          = 2^19 * 2^-23 * (2^23 * 1.907348613739013671875)
          = 2^-4 * 15999999.84
The closest integer to 15999999.84 is 16000000, so the rounded value is
(float)999999.99 = 2^-4 * 16000000
                 = 1000000
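A two-line check confirms this (a minimal sketch):

float max = 999999.99f;             // the literal is rounded to the nearest float at compile time
Console.WriteLine(max == 1000000f); // True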
The big advantage of the decimal type is that it is represented as a 96-bit integer times a power of 10, so decimal numbers with up to 28 digits can be represented exactly, without any rounding. What you see is what you get.
The biggest disadvantage of decimal is that it is significantly slower, but in a situation like this where you're converting strings to numbers, this is not a factor.
Floating-point types (as defined in C#) are approximate. For precision you should always use decimal.
From MSDN:
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
There seems to be some dispute about what qualifies as a floating-point type in C#. While decimal does qualify as a floating-point type by the actual definition (its decimal point floats), the MSDN specification does not classify it as one.
http://msdn.microsoft.com/en-us/library/9ahet949.aspx
I have some code that I do not understand. I am developing an application where precision is very important, but it does not seem to be important to .NET. Why? I don't know.
double value = 3.5;
MessageBox.Show((value + 1 * Math.Pow(10, -20)).ToString());
but the message box shows: 3.5
Please help me. Thank you.
If you're doing anything where precision is very important, you need to be aware of the limitations of floating point. A good reference is David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
You may find that floating-point doesn't give you enough precision and you need to work with a decimal type. These, however, are always much slower than floating point -- it's a tradeoff between accuracy and speed.
You can have precision, but it depends on what else you want to do. If you put the following in a Console application:
double a = 1e-20;
Console.WriteLine(" a = {0}", a);
Console.WriteLine("1+a = {0}", 1+a);
decimal b = 1e-20M;
Console.WriteLine(" b = {0}", b);
Console.WriteLine("1+b = {0}", 1+b);
You will get
a = 1E-20
1+a = 1
b = 0,00000000000000000001
1+b = 1,00000000000000000001
But note that the Pow function, like almost everything in the Math class, only takes doubles:
double Pow(double x, double y);
So you cannot take the sine of a decimal (other than by converting it to double).
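Taking the sine of a decimal therefore means a round trip through double, accepting double's precision for that step; for example:

decimal angle = 0.5m;
double sine = Math.Sin((double)angle); // precision limited to double's ~15-16 digits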
Also see this question.
Or use the Decimal type rather than double.
The precision of a Double is 15 digits (17 digits internally). The value that you calculate with Math.Pow is correct, but when you add it to value it is simply too small to make a difference.
Edit:
A Decimal can handle that precision, but not the calculation. If you want that precision, you need to do the calculation, then convert each value to a Decimal before adding them together:
double value = 3.5;
double small = Math.Pow(10, -20);
Decimal result = (Decimal)value + (Decimal)small;
MessageBox.Show(result.ToString());
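This should display 3.50000000000000000001: the double produced by Math.Pow(10, -20) converts to the decimal 0.00000000000000000001, which is comfortably within decimal's 28-digit range, so the addition no longer loses the small term.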
Double precision means it can hold 15-16 significant digits. Representing 3.5 + 1e-20 would require 21 significant digits, so the sum cannot be represented in double precision. You can use another type, like decimal.