I have 6 InputFields in my scene. Their content type is decimal.
I fetch values from these input fields and check if their sum is equal to 100.02. I enter 16.67 in all of them.
float fireP = float.Parse(firePercentage.text);
float waterP = float.Parse(waterPercentage.text);
float lightP = float.Parse(lightPercentage.text);
float nightP = float.Parse(nightPercentage.text);
float natureP = float.Parse(naturePercentage.text);
float healthP = float.Parse(healthPercentage.text);
float total = fireP + waterP + lightP + nightP + natureP + healthP;
if (total == 100.02f)
{
Debug.Log("It's equal");
}
else
{
Debug.Log(" Not equal. Your sum is = " + total);
}
I am getting " Not equal. Your sum is = 100.02" in my console log.
Anyway, does anyone have an idea why this might be happening?
You indeed have a floating point issue.
In Unity you can and should use Mathf.Approximately; it's a utility function they built exactly for this purpose.
Try this
if (Mathf.Approximately(total, 100.02f))
{
Debug.Log("It's equal");
}
else
{
Debug.Log(" Not equal. Your sum is = " + total);
}
Additionally, as a side note: you should work with decimal if you plan on doing any calculations where having the EXACT number is of critical importance. It is a bigger data structure (128 bits), and thus slower, but it stores base-10 digits directly and so is designed not to have binary floating point issues (it is accurate to roughly 28 significant digits).
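For instance, here is a minimal sketch (mine, not from the original answer) of the same check done with decimal:
decimal total = 0m;
for (int i = 0; i < 6; i++)
    total += decimal.Parse("16.67", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(total == 100.02m); // True: 16.67 is stored exactly in base 10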
For 99.99% of cases floats and doubles are enough, given that you compare them properly.
A more in-depth explanation can be found here: Difference between decimal, float and double in .NET
The nearest float to 16.67 is 16.6700000762939453125.
The nearest float to 100.02 is 100.01999664306640625
Adding the former to itself 5 times is not exactly equal to the latter, so they will not compare equal.
In this particular case, comparing with a tolerance in the order of 1e-6 is probably the way to go.
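As a sketch of that approach (using the 1e-6 tolerance suggested above; scale it to the magnitudes you are comparing):
if (Math.Abs(total - 100.02f) < 1e-6f)
{
    Debug.Log("Equal within tolerance");
}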
I wanted to ask a question about a calculation I had today in C#.
double expenses = (pricePen + priceMark + priceLitres) - discount / 100*(pricePen + priceMark + priceLitres); //Incorrect
double expenses = (pricePen + priceMark + priceLitres) - (pricePen + priceMark + priceLitres)* discount/100; //Correct
So as you can see, at the end of the equation I had to multiply the bracketed sum by the integer named "discount", which is obviously a discount percentage.
When I change the place of that value, whether it is in front of the brackets or behind them, the answer is different. In maths I checked myself that I should get the same answer either way, but C# doesn't think so.
I wanted to ask people, how does C# actually calculate this and why am I getting different results at the end? (Result should be 28.5, not 38)
[Data: pricePen = 11.6; priceMark = 21.6; priceLitres = 4.8; discount = 25;]
(I know that the question is basic.)
In the first line, discount / 100 is evaluated in integer arithmetic, so the remainder of the division is lost (25 / 100 yields 0) and the multiplication produces a smaller result.
In the second line, the multiplication happens first, so the value is already a double when it is divided and nothing is lost, even though the result of the division is less than one.
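To see the integer division in isolation (a quick sketch using the question's discount = 25):
int discount = 25;
Console.WriteLine(discount / 100);          // 0    (integer division truncates)
Console.WriteLine(discount / 100.0);        // 0.25 (one double operand makes it double division)
Console.WriteLine((double)discount / 100);  // 0.25 (an explicit cast works too)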
So I know it's already answered, but if you want to learn more about division with int, here it is.
For example:
float value = 3 / 4;
You would expect it to be 0.75, but that's not the case. When the compiler goes through the values 3 and 4, it treats both literals as the highest data type involved, in this case int. That means the division is performed as integer division and the result is 0: int has no fractional digits, so the .75 is simply cut off. The program then takes that value and puts it into the float variable, so the result is:
3 / 4 → 0 → float value → 0.0
Some guys before me already told you the solution to that problem, like making one operand a float literal (note the f suffix; a plain 3.0 is a double and won't implicitly convert to float):
float value = 3.0f / 4;
Or you can tell the compiler to treat one operand as a float with a (float) cast:
float value = (float)3 / 4;
I hope this helped explain why that happens :)
To avoid these problems, make sure you are doing math with floating point types, not int types. In your case discount is an int, and thus
x * (discount / 100) = x * <integer>
It's best to define a function that does the calculation and forces the type:
double DiscountedPrice(double price, double discount)
{
return price - (discount/100) * price;
}
and then call it as
var x = DiscountedPrice( pricePen + priceMark + priceLitres, 15);
In the above scenario, the compiler will convert the integer 15 into a double as a widening conversion (double can represent more digits than an integer).
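For reference, plugging the question's data into this function (my check, not part of the original answer) gives the expected result:
double pricePen = 11.6, priceMark = 21.6, priceLitres = 4.8;
Console.WriteLine(DiscountedPrice(pricePen + priceMark + priceLitres, 25)); // 28.5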
Here is my generating algorithm; it generates random double elements for an array whose sum must be 1:
public static double [] GenerateWithSumOfElementsIsOne(int elements)
{
double sum = 1;
double [] arr = new double [elements];
for (int i = 0; i < elements - 1; i++)
{
arr[i] = RandomHelper.GetRandomNumber(0, sum);
sum -= arr[i];
}
arr[elements - 1] = sum;
return arr;
}
And the helper method:
public static double GetRandomNumber(double minimum, double maximum)
{
Random random = new Random();
return random.NextDouble() * (maximum - minimum) + minimum;
}
My test cases are:
[Test]
[TestCase(7)]
[TestCase(5)]
[TestCase(4)]
[TestCase(8)]
[TestCase(10)]
[TestCase(50)]
public void GenerateWithSumOfElementsIsOne(int num)
{
Assert.AreEqual(1, RandomArray.GenerateWithSumOfElementsIsOne(num).Sum());
}
And the thing is, when I test it, it returns a different value every time, like in these cases:
Expected: 1
But was: 0.99999999999999967d
Expected: 1
But was: 0.99999999999999989d
But on the next run it sometimes passes all of them, sometimes not.
I know these are rounding troubles, so I'm asking for some help, dear experts :)
https://en.wikipedia.org/wiki/Floating-point_arithmetic
In computing, floating-point arithmetic is arithmetic using formulaic
representation of real numbers as an approximation so as to support a
trade-off between range and precision. For this reason, floating-point
computation is often found in systems which include very small and
very large real numbers, which require fast processing times. A number
is, in general, represented approximately to a fixed number of
significant digits (the significand) and scaled using an exponent in
some fixed base; the base for the scaling is normally two, ten, or
sixteen.
In short, this is what floats do: they don't hold every single value, they approximate. If you would like more precision, try using a decimal instead, or add tolerance with an epsilon (an upper bound on the relative error due to rounding in floating point arithmetic):
static bool NearlyEqual(double a, double b, double epsilon)
{
    var ratio = a / b;              // compare the ratio of the two values to 1
    var diff = Math.Abs(ratio - 1); // relative error
    return diff <= epsilon;
}
Round-off errors are frequent with floating point types (like Single and Double). For example, let's compute an easy sum:
// 0.1 + 0.1 + ... + 0.1 = ? (100 times). Is it 0.1 * 100 == 10? No!
Console.WriteLine((Enumerable.Range(1, 100).Sum(i => 0.1)).ToString("R"));
Outcome:
9.99999999999998
That's why, when comparing floating point values with == or !=, you should add a tolerance:
// We have at least 8 correct digits,
// i.e. the absolute value of the (round-off) error is less than the tolerance
Assert.IsTrue(Math.Abs(RandomArray.GenerateWithSumOfElementsIsOne(num).Sum() - 1.0) < 1e-8);
I am trying to calculate the average of an array of floats. I need to use indices because this is inside a binary search, so the top and bottom will move. (Big picture: we are trying to optimize a half range estimation so we don't have to re-create the array each pass.)
Anyway, I wrote a custom average loop and I'm getting 2 places less accuracy than the C# Average() method:
float test = input.Average();
int count = (top - bottom) + 1;//number of elements in this iteration
int pos = bottom;
float average = 0f;//working average
while (pos <= top)
{
average += input[pos];
pos++;
}
average = average / count;
example:
0.0371166766 - c#
0.03711666 - my loop
125090.148 - c#
125090.281 - my loop
http://pastebin.com/qRE3VrCt
I'm getting 2 places less accuracy than the c# Average()
No, you are only losing 1 significant digit. The float type can only store 7 significant digits, the rest are just random noise. Inevitably in a calculation like this, you can accumulate round-off error and thus lose precision. Getting the round-off errors to balance out requires luck.
The only way to avoid it is to use a floating point type that has more precision to accumulate the result. Not an issue, you have double available. Which is why the Linq Average method looks like this:
public static float Average(this IEnumerable<float> source) {
if (source == null) throw Error.ArgumentNull("source");
double sum = 0; // <=== NOTE: double
long count = 0;
checked {
foreach (float v in source) {
sum += v;
count++;
}
}
if (count > 0) return (float)(sum / count);
throw Error.NoElements();
}
Use double to reproduce the Linq result with a comparable number of significant digits in the result.
I'd rewrite this as:
int count = (top - bottom) + 1;//number of elements in this iteration
double sum = 0;
for(int i = bottom; i <= top; i++)
{
sum += input[i];
}
float average = (float)(sum/count);
That way you're using a high precision accumulator, which helps reduce rounding errors.
btw. if performance isn't that important, you can still use LINQ to calculate the average of an array slice:
input.Skip(bottom).Take(top - bottom + 1).Average()
I'm not entirely sure if that fits your problem, but if you need to calculate the average of many subarrays, it can be useful to create a persistent sum array, so calculating an average simply becomes two table lookups and a division.
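A sketch of that idea (names are mine): prefix[i] holds the sum of input[0..i-1], accumulated in double, so the average of any slice becomes two lookups and a division.
double[] prefix = new double[input.Length + 1];
for (int i = 0; i < input.Length; i++)
    prefix[i + 1] = prefix[i] + input[i];
// Average of input[bottom..top], inclusive:
float sliceAverage = (float)((prefix[top + 1] - prefix[bottom]) / (top - bottom + 1));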
Just to add to the conversation: be careful when using floating point primitives.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Internally, floating point hardware can carry extra least significant bits during calculations (known as guard bits or guard digits) that are not reflected in the displayed value, and intermediate results may be held at higher precision than the declared type. One common consequence is that a variable that displays as 0 does not always compare equal to 0f. When accumulating floating point values this can also lead to precision errors.
Use decimal for your accumulator:
It will not have rounding errors due to guard digits.
It is a 128-bit data type (less likely to exceed the maximum value in your accumulator).
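Applied to the averaging loop above, that suggestion might look like this (a sketch; note that converting a float to decimal keeps about 7 significant digits, which is all a float meaningfully has):
decimal decSum = 0m;
for (int i = bottom; i <= top; i++)
    decSum += (decimal)input[i];  // accumulate without binary round-off
float decAverage = (float)(decSum / (top - bottom + 1));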
For more info:
What is the difference between Decimal, Float and Double in C#?
I was wondering if taking the square root of a decimal was at all possible. For example:
Math.Sqrt(myVariableHere);
When looking at the overload, it requires a double parameter, so I'm not sure if there is another way to replicate this with the decimal datatype.
I don't understand why all the answers to that question are the same.
There are several ways to calculate the square root from a number. One of them was proposed by Isaac Newton. I'll only write one of the simplest implementations of this method. I use it to improve the accuracy of double's square root.
// x - a number, from which we need to calculate the square root
// epsilon - the accuracy of the calculation of the root of our number.
// The result of the calculation will differ from the actual value
// of the root by less than epsilon.
public static decimal Sqrt(decimal x, decimal epsilon = 0.0M)
{
if (x < 0) throw new OverflowException("Cannot calculate square root from a negative number");
decimal current = (decimal)Math.Sqrt((double)x), previous;
do
{
previous = current;
if (previous == 0.0M) return 0;
current = (previous + x / previous) / 2;
}
while (Math.Abs(previous - current) > epsilon);
return current;
}
About speed: in the worst case (epsilon = 0 and the number is decimal.MaxValue) the loop repeats fewer than three times.
If you want to know more, read this (Hacker's Delight by Henry S. Warren, Jr.)
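A quick usage sketch of the Sqrt above (my call; the printed tail of digits is approximate):
decimal root = Sqrt(2m);
Console.WriteLine(root); // ≈ 1.4142135623730950488016887242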
I just came across this question, and I'd suggest a different algorithm than the one SLenik proposed. This is based on the Babylonian Method.
public static decimal Sqrt(decimal x, decimal? guess = null)
{
var ourGuess = guess.GetValueOrDefault(x / 2m);
var result = x / ourGuess;
var average = (ourGuess + result) / 2m;
if (average == ourGuess) // This checks for the maximum precision possible with a decimal.
return average;
else
return Sqrt(x, average);
}
It doesn't require using the existing Sqrt function, and thus avoids converting to double and back, with the accompanying loss of precision.
In most cases involving a decimal (currency etc.), it isn't useful to take a root, and the root won't have anything like the precision you might expect a decimal to have. You can of course force it by casting (assuming we aren't dealing with the extreme ends of the decimal range):
decimal root = (decimal)Math.Sqrt((double)myVariableHere);
which forces you to at least acknowledge the inherent rounding issues.
Simple: Cast your decimal to a double and call the function, get the result and cast that back to a decimal. That will probably be faster than any sqrt function you could make on your own, and save a lot of effort.
Math.Sqrt((double)myVariableHere);
Will give you back a double that's the square root of your decimal myVariableHere.
I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.:
static void Main(string[] args)
{
double x = 0, y = 0;
x += 20013.8;
x += 20012.7;
y += 10016.4;
y += 30010.1;
Console.WriteLine("Result: "+ x + " " + y + " " + (x==y));
Console.Write("Press any key to continue . . . "); Console.ReadKey(true);
}
However, in this case, x and y are equal in the end.
Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.
double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1"
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"
Remark: don't conclude from this that floating point arithmetic is unreliable in .NET.
Here's an example based on a prior question that demonstrates float arithmetic not working out exactly as you would think.
float f = (13.45f * 20);
int x = (int)f;
int y = (int)(13.45f * 20);
Console.WriteLine(x == y);
In this case, false is printed to the screen. Why? Because of where the math is performed versus where the cast to int is happening. For x, the math is performed in one statement and stored to f, then it is being cast to an integer. For y, the value of the calculation is never stored before the cast. (In x, some precision is lost between the calculation and the cast, not the case for y.)
For an explanation behind what's specifically happening in float math, see this question/answer. Why differs floating-point precision in C# when separated by parantheses and when separated by statements?
My favourite demonstration boils down to
double d = 0.1;
d += 0.2;
d -= 0.3;
Console.WriteLine(d);
The output is not 0; it is on the order of 5.5e-17.
Try making it so the fractional part is not .5; fractions like .5 or .25 are exactly representable in binary and won't show the problem.
Take a look at this article here
http://floating-point-gui.de/
Try summing a VERY big and a VERY small number. The small one will be absorbed, and the result will be the same as the large number.
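For example (numbers are mine): a double carries roughly 16 significant digits, so adding 1.0 to 1e16 gets lost in rounding.
double big = 1e16;
Console.WriteLine(big + 1.0 == big); // True: 1.0 is smaller than the spacing between doubles at this magnitude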
Try performing repeated operations on an irrational number (such as a square root) or very long length repeating fraction. You'll quickly see errors accumulate. For instance, compute 1000000*Sqrt(2) vs. Sqrt(2)+Sqrt(2)+...+Sqrt(2).
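A sketch of that comparison (my code, not from the answer):
double sqrt2 = Math.Sqrt(2.0);
double sum = 0.0;
for (int i = 0; i < 1000000; i++)
    sum += sqrt2;                           // a million rounded additions
Console.WriteLine(1000000 * sqrt2 == sum);  // almost certainly False
Console.WriteLine(1000000 * sqrt2 - sum);   // the accumulated drift, small but nonzero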
The simplest I can think of right now is this:
class Test
{
private static void Main()
{
double x = 0.0;
for (int i = 0; i < 10; ++i)
x += 0.1;
Console.WriteLine("x = {0}, expected x = {1}, x == 1.0 is {2}", x, 1.0, x == 1.0);
Console.WriteLine("Allowing for a small error: x == 1.0 is {0}", Math.Abs(x - 1.0) < 0.001);
}
}
I suggest that, if you're truly interested, you take a look at any one of a number of pages that discuss floating point numbers, some in gory detail. You will soon realize that, in a computer, they're a compromise, trading accuracy for range. If you are going to be writing programs that use them, you do need to understand their limitations and the problems that can arise if you don't take care. It will be worth your time.
double is accurate to ~15 digits. You need more precision to really start hitting problems with only a few floating point operations.