Comparing floats as strings! Can I? Is it a valid approach? - c#

In order to check whether the difference between two float numbers is 0.01, I do this:
if ((float1 - float.Parse(someFloatAsStringFromXML, System.Globalization.CultureInfo.InvariantCulture)).ToString() == "0,01000977")
Is this type of "approach" acceptable? Is there a better way? How?
P.S. I'm very new to C# and strongly typed languages! So if you have more than a brief explanation, I would love to read it!
I forgot to mention that the numbers are "353.58" and "353.59". I have them as strings with a dot "." rather than a comma ",", which is why I use float.Parse.

Firstly, you should always compare numbers using their underlying types (float, double, decimal), NOT as strings.
Now, you might think that you can compare like so:
float floatFromXml = float.Parse(someFloatAsStringFromXML);
if (Math.Abs(float1 - floatFromXml) == 0.01)
My example works as follows:
Firstly, calculate the difference between the two values:
float1 - floatFromXml
Then take the absolute value of that (which just removes the minus sign)
Math.Abs(float1 - floatFromXml)
Then see if that value is equal to 0.01:
if (Math.Abs(float1 - floatFromXml) == 0.01)
And if you don't want to ignore the sign, you wouldn't do the Math.Abs() part:
if ((float1 - floatFromXml) == 0.01)
But that won't work for your example because of rounding errors!
Because you are using floats (and this would apply to doubles too) you are going to get rounding errors which makes comparing the difference to 0.01 impossible. It will be more like 0.01000001 or some other slightly wrong value.
To fix that, you have to compare the actual difference to the target difference, and if that is only different by a tiny amount you say "that'll do".
In the following code, target is the target difference that you are looking for. In your case, it is 0.01.
Then epsilon is the smallest amount by which target can be wrong. In this example, it is 0.00001.
What we are saying is "if the difference between the two numbers is within 0.00001 of 0.01, then we will consider it as matching a difference of 0.01".
So the code calculates the actual difference between the two numbers, difference, then it sees how far away that difference is from 0.01 and if it is within 0.00001 it prints "YAY".
using System;

namespace Demo
{
    class Program
    {
        void Run()
        {
            float f1 = 353.58f;
            float f2 = 353.59f;

            if (Math.Abs(f1 - f2) == 0.01f)
                Console.WriteLine("[A] YAY");
            else
                Console.WriteLine("[A] Oh dear"); // This gets printed.

            float target = 0.01f;
            float difference = Math.Abs(f2 - f1);
            float epsilon = 0.00001f; // Any difference smaller than this is ok.
            float differenceFromTarget = Math.Abs(difference - target);

            if (differenceFromTarget < epsilon)
                Console.WriteLine("[B] YAY"); // This gets printed.
            else
                Console.WriteLine("[B] Oh dear");
        }

        static void Main()
        {
            new Program().Run();
        }
    }
}
However, the following is probably the answer for you.
Alternatively, you can use the decimal type instead of floats, then the direct comparison will work (for this particular case):
decimal d1 = 353.58m;
decimal d2 = 353.59m;

if (Math.Abs(d1 - d2) == 0.01m)
    Console.WriteLine("YAY"); // This gets printed.
else
    Console.WriteLine("Oh dear");

You give an example of your two numbers:
353.58
353.59
These numbers cannot be exactly represented in binary floating point format. Also, the value 0.01 cannot be exactly represented in binary floating point format. Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
So, for example, when you try to represent 353.58 as a float, there is no float with that value and you get the closest float value which happens to be 353.579986572265625.
In my view you are using the wrong data type to represent these values. You need to be using a decimal format. That will allow you to represent these values exactly. In C# you use the decimal type.
Then you can write:
decimal value1 = 353.58m;
decimal value2 = 353.59m;
Debug.Assert(Math.Abs(value2-value1) == 0.01m);
Use decimal.Parse() or decimal.TryParse() to convert from your textual representation of the number into a decimal value.
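For example, a minimal sketch (the 353.58 / 353.59 values come from the original question; the variable names are just illustrative):
using System;
using System.Globalization;

string someFloatAsStringFromXML = "353.59"; // string from the XML, with "." as separator
decimal value1 = 353.58m;                   // decimal counterpart of the question's float1

if (decimal.TryParse(someFloatAsStringFromXML, NumberStyles.Number,
                     CultureInfo.InvariantCulture, out decimal value2))
{
    // Exact comparison is fine here: both values and 0.01 are exactly representable as decimal.
    Console.WriteLine(Math.Abs(value1 - value2) == 0.01m); // True
}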

In direct answer to your question, can you do this? Yes. Is it a valid approach? Almost certainly not.
If you give some more context, you will get more detailed answers.

This isn't a good way of doing it. There's no need to compare it to a string and you don't gain anything by doing it that way. Compare it to an actual float value.
The immediately visible problem with your approach is that it's not culture-safe. In the UK we use . instead of , as the decimal point, so the result of .ToString() wouldn't match at all. But aside from that, it's just bad practice to do that without a good reason.
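To make the culture point concrete, here is a small sketch (the German culture is just one example of a comma-separator culture):
using System;
using System.Globalization;

float diff = 0.01000977f;

Console.WriteLine(diff.ToString(CultureInfo.InvariantCulture)); // uses "." as the decimal separator
Console.WriteLine(diff.ToString(new CultureInfo("de-DE")));     // uses "," as the decimal separator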

Related

How to remove the leading 0s, divide the remaining value by 100, and use a dot separator between the integer and the decimal part in C#

For example, let's just say I have:
var value = "0000000000002022"
How can I get 20.22?
Mathematically speaking, it doesn't matter how many zeroes are in front of your number; they're the same, so 0000002 = 2 is true. We can use this fact to simply parse our string to a number and then do the division, but we have to be a little careful about which numeric type we use, because (int) 16 / (int) 5 will result in 3, which obviously isn't correct, but that's what integer division does. So, just to be sure we don't lose any precision, we'll use float.
string value = "0000000000002022";

if (float.TryParse(value, out var number))
{
    // Successfully parsed our string to a float
    Console.WriteLine(number / 100);
}
else
{
    // We failed to parse our string to a float :(
    Console.WriteLine($"Could not parse '{value}' to a float");
}
Always use TryParse unless you're 110% sure the given string will always be a number, and even then, circumstances can (and will, this is software development after all) change.
Note: float isn't infinitely large; it has a maximum and minimum value, and anything outside that range cannot be represented by a float. Floating-point numbers also have another caveat: they're not 100% accurate. For example, 0.1 + 0.2 == 0.3 is false; you can read up more on the topic here. If you need to be as accurate as possible, for example when working with money, then maybe use decimal instead (or decide to represent the money as a whole number of the minor units of currency your country uses).
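A quick sketch of both points (the minor-units idea uses the 2022 value from the question above):
using System;

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary floating point cannot represent these exactly
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal arithmetic

// Alternative for money: keep whole minor units (e.g. cents) in an integer type.
long cents = 2022;
Console.WriteLine(cents / 100m);        // 20.22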
By using Convert:
short int1 = Int16.Parse(value);
decimal result = Convert.ToDecimal(int1) / 100;

Number type suffixes in C#

I searched for this question before posting, but everything I found was about C++.
Here is my question:
Is a double with an f suffix normal in C#? If so, why and how is this possible?
Have a look at this code:
double d1 = 1.2f;
double d2 = 2.0f;
Console.WriteLine("{0}", d2 - d1);
decimal dm1 = 1.2m;
decimal dm2 = 2.0m;
Console.WriteLine("{0}", dm2 - dm1);
The answer for the first calculation is 0.799999952316284 with the f suffix instead of 0.8. Also, when I change the f to a d, which I think should be the normal way, it gives the correct answer of 0.8.
The right-hand expression is evaluated as a float and then "deposited" in a double variable. Nothing wrong or weird here. I think the difference in the result has to do with the precision of the two data types.
Regarding your view of the "correct answer": the fact that 0.8 comes out is not because the double literal is exact; it's just a better approximation of the result. The truly "correct" result comes from the second expression, the one using decimal types.
The f suffix stands for float, not double. So 1.2f is a single-precision floating-point number, which is stored into a double right after it is created because of an implicit cast to double.
The imprecision you are seeing happens there, not in the calculation, since it seems to work with 1.2d.
Such behaviour is normal when using floating-point values. Use decimal if you do not want that behaviour, as you already did in your own examples...
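A small sketch showing that the loss happens when the float literal is created, not in the subtraction (the "R" format specifier just asks for a round-trippable string):
using System;

double fromFloatLiteral = 1.2f;  // nearest float to 1.2, then widened to double
double fromDoubleLiteral = 1.2d; // nearest double to 1.2

Console.WriteLine(fromFloatLiteral.ToString("R"));  // roughly 1.20000004768372...
Console.WriteLine(fromDoubleLiteral.ToString("R")); // 1.2
Console.WriteLine(2.0 - fromFloatLiteral);          // the 0.799999... value from the question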
Double and float are both binary number types.
The problem is not their precision but the kind of numbers they can store exactly, which must also be binary. Change 1.2f to 0.5f, 0.25f or 0.125f and so on and you will see 'correct' results. But any fraction whose denominator has a prime factor other than 2 must be stored as an approximation: 1.2 is 6/5, and that 5 in the denominator means it cannot be stored exactly in a float or double. If you try, only an approximation will be stored.
Decimals actually store decimal digits, so you won't see any approximations there as long as you don't leave the decimal realm. If you try to store, say, 1/3 in a decimal, it will have to approximate as well.
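For example, a quick sketch of decimal approximating 1/3:
using System;

Console.WriteLine(1m / 3m);      // 0.3333333333333333333333333333 (an approximation)
Console.WriteLine(1m / 3m * 3m); // 0.9999999999999999999999999999, not exactly 1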

Why does this simple double assertion fail in C#

The following test will fail in C#
Assert.AreEqual<double>(10.0d, 16.1d - 6.1d);
The problem appears to be a floating point error.
16.1d - 6.1d == 10.000000000000002
This is causing me headaches in writing unit tests for code that uses double. Is there a way to fix this?
There is no exact conversion between the decimal system and the binary representation of a double (see the excellent comment by @PatriciaShanahan below on why).
In this case the .1 part of the numbers is the problem: it cannot be finitely represented in a double (just as 1/3 cannot be represented exactly as a finite decimal number).
A code snippet to explain what happens:
double larger = 16.1d;          //Assign closest double representation of 16.1.
double smaller = 6.1;           //Assign closest double representation of 6.1.
double diff = larger - smaller; //Assign closest diff between larger and
                                //smaller, but since a smaller value has a
                                //larger precision the result will have better
                                //precision than larger but worse than smaller.
                                //The difference shows up as the ...000002.
Always use the Assert.AreEqual overload which takes a delta parameter when comparing doubles.
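For example, with MSTest (the 1e-9 delta is just an illustrative tolerance):
// Passes: the difference between 10.0 and 16.1 - 6.1 is within the delta.
Assert.AreEqual(10.0d, 16.1d - 6.1d, 1e-9);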
Alternatively, if you really need exact decimal values, use the decimal data type; it uses a base-10 representation internally and would return exactly 10 in your example.
Floating-point numbers are an approximation of the actual value based on a binary exponent, so the test correctly fails. If you require exact equivalence of two decimal numbers, you may need to check out the decimal data type.
If you are using NUnit, please use the Within option. Here you can find additional information: http://www.nunit.org/index.php?p=equalConstraint&r=2.6.2.
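For example, a sketch of the same assertion with NUnit's constraint syntax (the tolerance is illustrative):
Assert.That(16.1d - 6.1d, Is.EqualTo(10.0d).Within(0.000001));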
I agree with Anders Abel. There won't be a way to do this using a float representation. As a direct result of IEEE 754-1985, only numbers that can be represented as m · 2^e (a finite sum of powers of two) can be stored and computed precisely (as long as the chosen number of bits allows this).
For example: 1024 * 1.75 * 183.375 / 1040.0675 <-- will be stored precisely
10 / 1.1 <-- won't be stored precisely
If you really need exact representation of rational numbers, you could write your own number implementation using fractions.
This could be done by storing the numerator, denominator and sign. Then operations like multiply, subtract, etc. need to be implemented (it is very hard to get good performance). A ToString()-style method could look like this (I assume cachedRepresentation, cachedDotIndex and cachedNumerator are member variables):
public String getString(int digits) {
    if (this.cachedRepresentation == "") {
        this.cachedRepresentation += this.positiveSign ? "" : "-";
        this.cachedRepresentation += this.numerator / this.denominator;
        this.cachedNumerator = 10 * (this.numerator % this.denominator);
        this.cachedDotIndex = this.cachedRepresentation.Length;
        this.cachedRepresentation += ".";
    }

    if ((this.cachedDotIndex + digits) < this.cachedRepresentation.Length)
        return this.cachedRepresentation.Substring(0, this.cachedDotIndex + digits + 1);

    while ((this.cachedDotIndex + digits) >= this.cachedRepresentation.Length) {
        this.cachedRepresentation += this.cachedNumerator / this.denominator;
        this.cachedNumerator = 10 * (this.cachedNumerator % denominator);
    }

    return cachedRepresentation;
}
This worked for me. In the operations themselves I ran into problems with data types that were too small for long numbers (I don't normally use C#), but an experienced C# developer should have no trouble avoiding that.
If you implement this you should reduce the fraction at initialization and before operations, using Euclid's greatest common divisor algorithm, as sketched below.
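A minimal sketch of that reduction step, as it might look inside such a fraction type (the Gcd/Reduce names are just illustrative):
// Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
static long Gcd(long a, long b)
{
    while (b != 0)
    {
        long t = b;
        b = a % b;
        a = t;
    }
    return Math.Abs(a);
}

// Divide numerator and denominator by their greatest common divisor.
static void Reduce(ref long numerator, ref long denominator)
{
    long gcd = Gcd(numerator, denominator);
    if (gcd > 1)
    {
        numerator /= gcd;
        denominator /= gcd;
    }
}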
Irrational numbers can (in every case I know of) be approximated by an algorithm that gets as close to the exact value as you want (and the computer allows).

C# Maths gives wrong results!

I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable. In fact, there are infinitely many real numbers, but only a finite number of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation – double uses base 2.
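For example, the question's calculation done with decimal (the m suffix makes the literals decimal):
using System;

decimal value = 141.1m;
decimal discount = 25.0m;
decimal disc = value * discount / 100; // 35.275 exactly

Console.WriteLine(value - disc);       // 105.825, with no trailing ...999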
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.

Double precision problems on .NET

I have a simple C# function:
public static double Floor(double value, double step)
{
    return Math.Floor(value / step) * step;
}
That calculates the largest number, lower than or equal to "value", that is a multiple of "step". But it lacks precision, as seen in the following tests:
[TestMethod()]
public void FloorTest()
{
    int decimals = 6;
    double value = 5F;
    double step = 2F;
    double expected = 4F;
    double actual = Class.Floor(value, step);
    Assert.AreEqual(expected, actual);

    value = -11.5F;
    step = 1.1F;
    expected = -12.1F;
    actual = Class.Floor(value, step);
    Assert.AreEqual(Math.Round(expected, decimals), Math.Round(actual, decimals));
    Assert.AreEqual(expected, actual);
}
The first and second asserts are OK, but the third fails, because the result is only equal up to the 6th decimal place. Why is that? Is there any way to correct this?
Update: If I debug the test I see that the values are equal up to the 8th decimal place instead of the 6th, maybe because Math.Round introduces some imprecision.
Note: In my test code I wrote the "F" suffix (explicit float constant) where I meant "D" (double), so if I change that I can get more precision.
I actually sort of wish they hadn't implemented the == operator for floats and doubles. It's almost always the wrong thing to do to ever ask if a double or a float is equal to any other value.
If you want precision, use System.Decimal. If you want speed, use System.Double (or System.Single). Floating point numbers are not "infinite precision" numbers, and therefore asserting equality must include a tolerance. As long as your numbers have a reasonable number of significant digits, this is ok.
If you're looking to do math on very large AND very small numbers, don't use float or double.
If you need infinite precision, don't use float or double.
If you are aggregating a very large number of values, don't use float or double (the errors will compound themselves).
If you need speed and size, use float or double.
See this answer (also by me) for a detailed analysis of how precision affects the outcome of your mathematical operations.
Floating point arithmetic on computers is not an exact science :).
If you want exact precision to a predefined number of decimals, use decimal instead of double, or accept a small tolerance interval.
If you omit all the F postfixes (i.e. -12.1 instead of -12.1F) you will get equality to a few more digits. Your constants (and especially the expected values) are currently floats because of the F. If you are doing that on purpose, then please explain.
But for the rest I concur with the other answers on comparing double or float values for equality: it's just not reliable.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it.
Only use floating point if you want a machine's interpretation (binary) of number systems. You can't represent 10 cents.
Check the answers to this question: Is it safe to check floating point values for equality to 0?
Really, just check for "within tolerance of..."
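For example, a minimal sketch of such a check (the NearlyEqual name and the default tolerance are just illustrative):
static bool NearlyEqual(double a, double b, double tolerance = 1e-9)
{
    return Math.Abs(a - b) <= tolerance;
}

// e.g. NearlyEqual(Class.Floor(-11.5, 1.1), -12.1, 1e-6) instead of an exact == comparison.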
floats and doubles cannot accurately store all numbers. This is a limitation of the IEEE floating-point system. In order to have faithful precision you need to use a more advanced math library.
If you don't need precision past a certain point, then perhaps decimal will work better for you. It has higher precision than double.
For a similar issue, I ended up using the following implementation, which seems to succeed for most of my test cases (up to 5-digit precision):
public static double roundValue(double rawValue, double valueTick)
{
    if (valueTick <= 0.0) return 0.0;

    Decimal val = new Decimal(rawValue);
    Decimal step = new Decimal(valueTick);
    Decimal modulo = Decimal.Round(Decimal.Divide(val, step));

    return Decimal.ToDouble(Decimal.Multiply(modulo, step));
}
Sometimes the result is more precise than you would expect from strict IEEE 754 (strictfp) semantics.
That's because the hardware uses more bits for the computation.
See the C# specification and this article.
Java has the strictfp keyword and C++ has compiler switches. I miss that option in .NET.
