So I'm attempting to use the Newton-Raphson method to find the square root of a BigInteger.
Here is my code:
private void sqrRt(BigInteger candidate)
{
    // Note: BigInteger is an integer type, so new BigInteger(0.0001)
    // truncates the double to zero; the condition |guess² - candidate| >= 0
    // is then always true and the loop never terminates.
    BigInteger epsilon = new BigInteger(0.0001);
    BigInteger guess = candidate / 2;
    while (BigInteger.Abs(guess * guess - candidate) >= epsilon)
    {
        // guess = guess - (((guess^2) - candidate) / (2 * guess))
        guess = BigInteger.Subtract(guess,
            BigInteger.Divide(
                BigInteger.Subtract(BigInteger.Multiply(guess, guess), candidate),
                BigInteger.Multiply(2, guess)));
        MessageBox.Show(Convert.ToString(guess));
    }
}
The problem seems to be that BigInteger is not precise enough to fall within the epsilon's degree of accuracy in the while loop, i.e. it needs a decimal place. My question is: what/how/where do I convert to a double so that the while-loop condition eventually becomes false?
You are using the wrong data type. In order to have decimal places, you would need to use double, float, decimal, or Complex.
Check the documentation for these types to see their digits of precision.
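Alternatively, if what you actually need is the integer part of the square root, you can stay in BigInteger entirely and stop when the guess stabilises instead of testing a fractional epsilon. A minimal sketch of that integer-only Newton iteration (the ISqrt name and starting guess are my own choices, not from the original code):

using System;
using System.Numerics;

static class BigIntegerMath
{
    // Integer Newton's method: returns floor(sqrt(n)).
    public static BigInteger ISqrt(BigInteger n)
    {
        if (n < 0) throw new ArgumentException("Value must be non-negative", nameof(n));
        if (n == 0) return 0;

        BigInteger guess = n / 2 + 1;               // any start >= sqrt(n) works
        BigInteger next = (guess + n / guess) / 2;  // one Newton step
        while (next < guess)                        // strictly decreases until it settles
        {
            guess = next;
            next = (guess + n / guess) / 2;
        }
        return guess;                               // floor(sqrt(n))
    }
}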
.NET and the Compact Framework use banker's rounding by default, which means that
the value 1.165 will be rounded to 1.16.
SQL Server, by contrast, rounds it to 1.17, which to me is the correct behaviour.
Has anyone come across this rounding topic and found a workaround for the Compact Framework?
(In full .NET there is an additional parameter which influences the rounding behaviour.)
Math.Floor(double) seems to be supported, so:
private static double RoundToTwo(double value)
{
    // Scale by 100, add 0.5 and floor: rounds half-up to two decimals.
    return Math.Floor(100 * value + .5) / 100;
}
Console.WriteLine(RoundToTwo(1.165));
> 1.17
Console.WriteLine(RoundToTwo(1.16499));
> 1.16
Here is a method you can use instead of decimal.Round:
public static decimal RoundHalfUp(this decimal d, int decimals)
{
    if (decimals < 0)
    {
        throw new ArgumentException("The decimals must be non-negative", "decimals");
    }

    decimal multiplier = (decimal)Math.Pow(10, decimals);
    decimal number = d * multiplier;

    // Add 0.5 and truncate, so midpoints always go up
    // (note: this assumes a non-negative input).
    if (decimal.Truncate(number) < number)
    {
        number += 0.5m;
    }
    return decimal.Truncate(number) / multiplier;
}
Taken from: Why does .NET use banker's rounding as default?
That question also asks why .NET uses banker's rounding, so I believe it will be a good read for you.
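A quick check of the difference, assuming the extension method above is compiled into a static class:

Console.WriteLine(1.165m.RoundHalfUp(2)); // 1.17 (midpoint rounds up)
Console.WriteLine(1.164m.RoundHalfUp(2)); // 1.16 (non-midpoints are unaffected)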
To answer why:
Banker's rounding means that midpoint values (where the digit to drop is exactly 5) are rounded to the nearest even digit, so over many results the rounding goes up and down evenly instead of biasing the totals upward.
Here's another link, by Mr. Skeet, which describes this.
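For completeness, on the full framework that additional parameter is MidpointRounding, which makes the behaviour explicit (this overload is what the Compact Framework lacks); note the decimal literal, so 1.165 is an exact midpoint:

Console.WriteLine(Math.Round(1.165m, 2));                                // 1.16 (ToEven, the default)
Console.WriteLine(Math.Round(1.165m, 2, MidpointRounding.AwayFromZero)); // 1.17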
Try
double number = 1.165;
string RoundedNumber = number.ToString("f3");
where 3 is the scale (the number of decimal places).
I decided to re-create my question:
decimal dTotal = 0m;
foreach (DictionaryEntry item in _totals)
{
    if (!string.IsNullOrEmpty(item.Value.ToString()))
    {
        dTotal += Convert.ToDecimal(item.Value);
    }
}
Console.WriteLine(dTotal / 3600m);
Console.WriteLine(decimal.Round(dTotal / 3600m, 2));
Console.WriteLine(decimal.Divide(dTotal, 3600m));
The above code returns:
579.99722222222222222222222222
580.00
579.99722222222222222222222222
So that is where my issue comes from: I really need it to display just 579.99, but any rounding, be it decimal.Round or Math.Round, still returns 580; even the string format {0:F} returns 580.00.
How can I properly do this?
New answer (to new question)
Okay, so you've got a value of 579.99722222222222222222222222 - and you're asking that to be rounded to two decimal places. Isn't 580.00 the natural answer? It's closer to the original value than 579.99 is. It sounds like you essentially want flooring behaviour, but with a given number of digits. For that, you can use:
var floored = Math.Floor(original * 100) / 100;
In this case, you can do both in one step:
var hours = Math.Floor(dTotal / 36) / 100;
... which is equivalent to
var hours = Math.Floor((dTotal / 3600) * 100) / 100;
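For instance, with a sample total I picked to reproduce the value from the question (any total giving 579.997... works):

decimal dTotal = 2087990m;                        // assumed sample value
Console.WriteLine(dTotal / 3600m);                // 579.99722222222222222222222222
Console.WriteLine(Math.Floor(dTotal / 36) / 100); // 579.99, not 580.00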
Original answer (to original question)
Sounds like you've probably got payTotal in an inappropriate form to start with:
using System;

class Test
{
    static void Main()
    {
        decimal pay = 2087975.7m;
        decimal time = pay / 3600;
        Console.WriteLine(time); // Prints 579.99325
    }
}
This is the problem:
var payTotal = 2087975.7;
That declares payTotal as a double variable (the literal 2087975.7 is a double). The value you've actually got is 2087975.69999999995343387126922607421875, which isn't what you wanted. Any time you find yourself casting from double to decimal or vice versa, you should be worried: chances are you've used the wrong type somewhere. Currency values should absolutely be stored in decimal rather than double (and there are various other Stack Overflow questions talking about when to use which).
See my two articles on floating point for more info:
Binary floating point in .NET
Decimal floating point in .NET
(Once you've got correct results, formatting them is a different matter of course, but that shouldn't be too bad...)
I have a double variable called x.
In the code, x gets assigned a value of 0.1, and I check it in an if statement comparing x to 0.1:
if (x == 0.1)
{
    ----
}
Unfortunately it does not enter the if statement
Should I use Double or double?
What's the reason behind this? Can you suggest a solution for this?
It's a standard problem due to how the computer stores floating point values. Search here for "floating point problem" and you'll find tons of information.
In short – a float/double can't store 0.1 precisely. It will always be a little off.
You can try using the decimal type which stores numbers in decimal notation. Thus 0.1 will be representable precisely.
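A short illustration of the difference:

double d = 0.1;
decimal m = 0.1m;
Console.WriteLine(d.ToString("G17")); // 0.10000000000000001 - the stored approximation
Console.WriteLine(m);                 // 0.1 - stored exactly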
You wanted to know the reason:
Float/double are stored as binary fractions, not decimal fractions. To illustrate:
12.34 in decimal notation (what we use) means
1 * 10^1 + 2 * 10^0 + 3 * 10^-1 + 4 * 10^-2
The computer stores floating point numbers in the same way, except it uses base 2: 10.01 means
1 * 2^1 + 0 * 2^0 + 0 * 2^-1 + 1 * 2^-2
Now, you probably know that there are some numbers that cannot be represented fully with our decimal notation. For example, 1/3 in decimal notation is 0.3333333…. The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is the number 1/10. In binary notation that is 0.000110011001100….
Since the binary notation cannot store it precisely, it is stored in a rounded-off way. Hence your problem.
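You can see the rounded-off storage as soon as the error accumulates past the default display rounding:

Console.WriteLine(0.1 + 0.2 == 0.3);            // False
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004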
double and Double are the same (double is an alias for Double) and can be used interchangeably.
The problem with comparing a double with another value is that doubles are approximate values, not exact values. So when you set x to 0.1 it may in reality be stored as 0.100000001 or something like that.
Instead of checking for equality, you should check that the difference is less than a defined minimum difference (tolerance). Something like:
if (Math.Abs(x - 0.1) < 0.0000001)
{
...
}
You need a combination of Math.Abs(x - y) and a tolerance value to compare against.
You can use the following extension-method approach:
public static class DoubleExtensions
{
    const double _3 = 0.001;
    const double _4 = 0.0001;
    const double _5 = 0.00001;
    const double _6 = 0.000001;
    const double _7 = 0.0000001;

    public static bool Equals3DigitPrecision(this double left, double right)
    {
        return Math.Abs(left - right) < _3;
    }

    public static bool Equals4DigitPrecision(this double left, double right)
    {
        return Math.Abs(left - right) < _4;
    }

    ...
Since you rarely call methods on double other than ToString, I believe it's a pretty safe extension.
Then you can compare x and y like
if (x.Equals4DigitPrecision(y))
Comparing floating point numbers can't always be done precisely because of rounding. To compare
(x == .1)
the computer really compares
(x - .1) vs 0
The result of the subtraction cannot always be represented precisely because of how floating point numbers are represented on the machine. Therefore you get some tiny nonzero value and the condition evaluates to false.
To overcome this, compare
Math.Abs(x - .1) vs some very small threshold (like 1e-9)
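In code, using a threshold of my own choosing:

if (Math.Abs(x - 0.1) < 1e-9)
{
    // close enough to 0.1 for this tolerance
}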
From the documentation:
Precision in Comparisons
The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .3333 and the Double returned by dividing 1 by 3 are unequal.
...
Rather than comparing for equality, one recommended technique involves defining an acceptable margin of difference between two values (such as .01% of one of the values). If the absolute value of the difference between the two values is less than or equal to that margin, the difference is likely to be due to differences in precision and, therefore, the values are likely to be equal. The following example uses this technique to compare .33333 and 1/3, the two Double values that the previous code example found to be unequal.
So if you really need a double, you should use the technique described in the documentation.
If you can, change it to a decimal. It will be slower, but you won't have this type of problem.
Use decimal. It doesn't have this "problem".
Exact comparison of floating point values is known to not always work due to rounding and internal representation issues.
Try imprecise comparison:
if (x >= 0.099 && x <= 0.101)
{
}
The other alternative is to use the decimal data type.
double (lowercase) is just an alias for System.Double, so they are identical.
For the reason, see Binary floating point and .NET.
In short: a double is not an exact type and a minute difference between "x" and "0.1" will throw it off.
Double (called float in some languages) is fraught with rounding problems; it's good only if you need approximate values.
The Decimal data type does what you want.
For reference, decimal and Decimal are the same in .NET C#, as are double and Double: each pair refers to the same type (decimal and double are very different from each other, though, as you've seen).
Beware that the Decimal data type has some performance cost associated with it, so use it with caution if you're looking at tight loops etc.
See the official MS help, especially the "Precision in Comparisons" part, in the context of the question:
https://learn.microsoft.com/en-us/dotnet/api/system.double.equals
// Initialize two doubles with apparently identical values
double double1 = .333333;
double double2 = (double)1 / 3;

// Define the tolerance for variation in their values
double difference = Math.Abs(double1 * .00001);

// Compare the values
// The output to the console indicates that the two values are equal
if (Math.Abs(double1 - double2) <= difference)
    Console.WriteLine("double1 and double2 are equal.");
else
    Console.WriteLine("double1 and double2 are unequal.");
1) Should I use Double or double?
Double and double are the same thing: double is just a C# keyword that acts as an alias for the type System.Double.
The most common convention is to use the aliases! The same goes for string (System.String) and int (System.Int32).
Also see Built-In Types Table (C# Reference).
Taking a tip from the Java code base, try using CompareTo and test for a zero comparison. This assumes the CompareTo implementation takes floating point equality into account in an accurate manner. For instance,
System.Math.PI.CompareTo(System.Math.PI) == 0
This predicate should return true.
// Number of leading digits to be compared.
const int n = 12;
// n+1 because a/b tends to 1 with n leading digits in common.
static readonly double MyEpsilon = Math.Pow(10, -(n + 1));

public static bool IsEqual(double a, double b)
{
    // Avoid division by zero.
    if (Math.Abs(a) <= double.Epsilon || Math.Abs(b) <= double.Epsilon)
        return Math.Abs(a - b) <= double.Epsilon;

    // Relative comparison.
    return Math.Abs(1.0 - a / b) <= MyEpsilon;
}
Explanation
The main comparison is done using the division a/b, which should tend toward 1. But why division? It simply takes one number as the reference that defines the other. For example:
a = 0.00000012345
b = 0.00000012346
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
or
a=12345*10^8
b=12346*10^8
a/b = 0.999919002
b/a = 1.000081004
(a/b)-1 = 8.099789405475458e-5
1-(b/a) = 8.100445524503848e-5
By dividing, we get rid of trailing or leading zeros (or relatively small magnitudes) that would pollute our judgement of the numbers' precision. In the example the relative difference is of order 10^-5, so we have 4 digits of accuracy; that is why in the code at the beginning I compare against 10^-(n+1), where n is the number of accurate digits.
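Usage with the example values above (assuming the IsEqual method shown earlier is in scope):

Console.WriteLine(IsEqual(0.00000012345, 0.00000012346)); // False: relative difference ~8.1e-5 > 1e-13
Console.WriteLine(IsEqual(1.0 / 3.0, 0.333333333333333)); // True:  agrees to ~15 digits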
Adding onto Valentin Kuzub's answer above, we could use a single method that lets the caller specify the precision as an nth-digit parameter:
public static bool EqualsNthDigitPrecision(this double value, double compareTo, int precisionPoint) =>
Math.Abs(value - compareTo) < Math.Pow(10, -Math.Abs(precisionPoint));
Note: This method is built for simplicity without added bulk and not with performance in mind.
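For example, with values of my own choosing:

double x = 0.1000004, y = 0.1;
Console.WriteLine(x.EqualsNthDigitPrecision(y, 5)); // True:  |x - y| ~ 4e-7 < 1e-5
Console.WriteLine(x.EqualsNthDigitPrecision(y, 7)); // False: 4e-7 >= 1e-7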
As a general rule:
Double representation is good enough in most cases but can fail miserably in some situations. Use decimal values if you need exact precision (as in financial applications).
Most problems with doubles don't come from direct comparison; they tend to be the result of the accumulation of several math operations that progressively disturb the value through rounding and fractional errors (especially multiplications and divisions).
Check your logic. If the code is:
double x = 0.1;
if (x == 0.1)
it should not fail; it's too simple to fail (both sides are the same double value). But if x's value is calculated by more complex means or operations, it's quite possible that the ToString method used by the debugger applies smart rounding; maybe you can do the same (if that's too risky, go back to using decimal):
if (x.ToString() == "0.1")
Floating point number representations are notoriously inaccurate because of the way floats are stored internally. E.g. x may actually be 0.0999999999 or 0.100000001 and your condition will fail. If you want to determine if floats are equal you need to specify whether they're equal to within a certain tolerance.
I.e.:
if(Math.Abs(x - 0.1) < tol) {
// Do something
}
My extension method for double comparison:
public static bool IsEqual(this double value1, double value2, int precision = 2)
{
    var dif = Math.Abs(Math.Round(value1, precision) - Math.Round(value2, precision));
    while (precision > 0)
    {
        dif *= 10;
        precision--;
    }
    // The scaled difference is ~0 when the rounded values agree and ~1 or more
    // when they don't, but floating-point error can leave a genuine mismatch
    // just below 1, so compare against 0.5 rather than 1.
    return dif < 0.5;
}
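For example, assuming the extension lives in a static class:

Console.WriteLine(3.141.IsEqual(3.144)); // True:  both round to 3.14
Console.WriteLine(3.141.IsEqual(3.149)); // False: 3.14 vs 3.15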
To compare floating point, double, or float types, use C#'s CompareTo method:
if (double1.CompareTo(double2) > 0)
{
    // double1 is greater than double2
}
if (double1.CompareTo(double2) < 0)
{
    // double1 is less than double2
}
if (double1.CompareTo(double2) == 0)
{
    // double1 equals double2
}
https://learn.microsoft.com/en-us/dotnet/api/system.double.compareto?view=netcore-3.1
I have some code that I do not understand. I am developing an application in which precision is very important, but .NET does not seem to treat it as important. Why? I don't know.
double value = 3.5;
MessageBox.Show((value + 1 * Math.Pow(10, -20)).ToString());
The message box shows 3.5.
Please help me, thank you.
If you're doing anything where precision is very important, you need to be aware of the limitations of floating point. A good reference is David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
You may find that floating-point doesn't give you enough precision and you need to work with a decimal type. These, however, are always much slower than floating point -- it's a tradeoff between accuracy and speed.
You can have precision, but it depends on what else you want to do. If you put the following in a Console application:
double a = 1e-20;
Console.WriteLine(" a = {0}", a);
Console.WriteLine("1+a = {0}", 1+a);
decimal b = 1e-20M;
Console.WriteLine(" b = {0}", b);
Console.WriteLine("1+b = {0}", 1+b);
You will get
a = 1E-20
1+a = 1
b = 0,00000000000000000001
1+b = 1,00000000000000000001
But note that the Pow function, like almost everything in the Math class, only takes doubles:
double Pow(double x, double y);
So you cannot take the sine of a decimal (other than by converting it to double).
Also see this question.
Or use the Decimal type rather than double.
The precision of a Double is 15 digits (17 digits internally). The value that you calculate with Math.Pow is correct, but when you add it to value it is simply too small to make a difference.
Edit:
A Decimal can handle that precision, but not the calculation. If you want that precision, you need to do the calculation, then convert each value to a Decimal before adding them together:
double value = 3.5;
double small = Math.Pow(10, -20);
Decimal result = (Decimal)value + (Decimal)small;
MessageBox.Show(result.ToString());
Double precision means it can hold 15-16 significant digits; representing 3.5 + 1e-20 would require 21 digits, so the sum cannot be represented in double precision. You can use another type, like decimal.