We are using a code analyzer which has a rule, "Do Not Check Floating Point Equality/Inequality". Below is the example given.
float f = 0.100000001f; // 0.1
double d = 0.10000000000000001; // 0.1
float myNumber = 3.146f;
if ( myNumber == 3.146f ) //Noncompliant. Because of floating point imprecision, this will be false
{
////
}
else
{
////
}
if (myNumber <= 3.146f && myNumber >= 3.146f) // Noncompliant indirect equality test
{
// ...
}
if (myNumber < 4 || myNumber > 4) // Noncompliant indirect inequality test
{
// ...
}
When I tested this code, if (myNumber == 3.146f) evaluated to true, so I am not able to understand what this rule is trying to say.
What is the solution or code change required to satisfy this rule?
Is this rule applicable to C#? When I googled it, I mostly found C/C++ examples of this rule.
Floating point is not precise. In some cases the result is unexpected, so it's bad practice to compare floating point numbers for equality without some tolerance.
This can be demonstrated with a simple example.
if(0.1 + 0.2 == 0.3)
{
Console.WriteLine("Equal");
}
else
{
Console.WriteLine("Not Equal");
}
It will print Not Equal.
Demo: https://dotnetfiddle.net/ltAFWe
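To see why, print both sides at full precision; the "G17" format round-trips a double exactly:
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004
Console.WriteLine(0.3.ToString("G17"));         // 0.29999999999999999
Neither side is stored as exactly 0.3, so the exact comparison fails.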
The solution is to add some tolerance, for example:
if(Math.Abs((0.1 + 0.2) - 0.3) < 0.0001)
{
Console.WriteLine("Equal");
}
else
{
Console.WriteLine("Not Equal");
}
Now it will print Equal.
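The 0.0001 here is an absolute tolerance, which only makes sense when you know the magnitude of the values involved. A relative tolerance scales with the operands; a minimal sketch (the name NearlyEqual and the 1e-9 default are my own choices):
static bool NearlyEqual(double a, double b, double relativeTolerance = 1e-9)
{
    // Allow a difference proportional to the larger operand's magnitude.
    return Math.Abs(a - b) <= relativeTolerance * Math.Max(Math.Abs(a), Math.Abs(b));
}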
A fairly readable solution to this is to define extension methods for float and double like so:
public static class FloatAndDoubleExt
{
public static bool IsApproximately(this double self, double other, double within)
{
return Math.Abs(self - other) <= within;
}
public static bool IsApproximately(this float self, float other, float within)
{
return Math.Abs(self - other) <= within;
}
}
Then use it like so:
float myNumber = 3.146f;
if (myNumber.IsApproximately(3.146f, within:0.001f))
{
////
}
else
{
////
}
Also see the documentation for Double.Equals() for more information.
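Note that Double.Equals is still an exact comparison; the one behavioral difference from == is that it treats NaN as equal to NaN:
Console.WriteLine(double.NaN == double.NaN);      // False
Console.WriteLine(double.NaN.Equals(double.NaN)); // True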
Related
I want to pass a number and have the next whole number returned.
I've tried Math.Ceiling(3), but it returns 3.
Desired output:
double val = 9.1 => 10
double val = 3 => 4
Thanks
There are two ways I would suggest doing this:
Using Math.Floor():
return Math.Floor(input + 1);
Using casting (to truncate the fractional part):
return (int)input + 1;
Fiddle here
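A quick check against the desired outputs from the question:
Console.WriteLine(Math.Floor(9.1 + 1)); // 10
Console.WriteLine(Math.Floor(3.0 + 1)); // 4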
Using just the floor or ceiling won't give you the next whole number in every case, for example if you input negative numbers. A better way is to create a function that handles that.
public class Test{
public int NextWholeNumber(double n)
{
if(n < 0)
return 0;
else
return Convert.ToInt32(Math.Floor(n)+1);
}
// Main method
static public void Main()
{
Test o = new Test();
Console.WriteLine(o.NextWholeNumber(1.254)); // prints 2
}
}
Usually, "whole number" refers to non-negative integers only. But if you need negative numbers handled as well, you can try this; the code returns 3.0 => 4, -1.0 => 0, -1.1 => -1:
double doubleValue = double.Parse(Console.ReadLine());
int wholeNumber;
if (doubleValue - Math.Floor(doubleValue) > 0)
{
    // Has a fractional part: round up to the next integer.
    wholeNumber = (int)Math.Ceiling(doubleValue);
}
else
{
    // Already a whole number: step to the next one.
    wholeNumber = (int)doubleValue + 1;
}
I want to compare thicknesses by checking if
Thickness A equals Thickness B,
and it doesn't work: the comparison is always false. Why?
P.S.
Why does new Thickness(2.1) return 2.09923289[..] instead of 2.1,
while new Thickness(2.0) returns a clean 2.0?
Double values are not safe to compare for exact equality because of how doubles are stored in memory. I would advise you to compare with a tolerance, something like if (Math.Abs(thicknessA.Left - thicknessB.Left) < TOLERANCE) for each side (Thickness itself has no - operator, so compare the Left/Top/Right/Bottom doubles individually).
You can do a quick test and try to check something like:
var passed = false;
if(0.2 + 0.1 == 0.3)
passed = true;
And you'll see that passed stays false.
The values for the left, top, right and bottom of a Thickness are double values.
As such, you have to use Math.Abs to compare them against a tolerance value.
These are the helper methods I've got in my WinUX library which will do the job for you:
public static readonly double Epsilon = 2.2204460492503131E-16;
public static bool AreClose(Thickness value1, Thickness value2)
{
return AreClose(value1.Left, value2.Left) && AreClose(value1.Top, value2.Top) && AreClose(value1.Right, value2.Right) && AreClose(value1.Bottom, value2.Bottom);
}
public static bool AreClose(double value1, double value2)
{
if (Math.Abs(value1 - value2) < 0.00005)
{
return true;
}
var a = (Math.Abs(value1) + Math.Abs(value2) + 10.0) * Epsilon;
var b = value1 - value2;
return (-a < b) && (a > b);
}
You'd then use it in your scenario like this:
if (AreClose(new Thickness(2.1), lessonGrid.BorderThickness))
{
// Code-here
}
Original source: https://github.com/jamesmcroft/WinUX-UWP-Toolkit/blob/develop/WinUX/WinUX.Common/Maths/MathHelper.cs
Is it possible to convert a float to a double and then back without losing precision? I mean the original float should be exactly the same, bit for bit, as the resulting float.
Yes, and we can test it:
float fl = float.NegativeInfinity;
long cycles = 0;
while (true)
{
double dbl = fl;
float fl2 = (float)dbl;
int flToInt1 = new Ieee754.Int32SingleConverter { Single = fl }.Int32;
int flToInt2 = new Ieee754.Int32SingleConverter { Single = fl2 }.Int32;
if (flToInt1 != flToInt2)
{
Console.WriteLine("\nDifferent: {0} (Int32: {1}, {2})", fl, flToInt1, flToInt2);
}
if (fl == 0)
{
Console.WriteLine("\n0, Sign: {0}", flToInt1 < 0 ? "-" : "+");
}
if (fl == float.PositiveInfinity)
{
fl = float.NaN;
}
else if (float.IsNaN(fl))
{
break;
}
else
{
fl = Ieee754.NextSingle(fl);
}
cycles++;
if (cycles % 100000000 == 0)
{
Console.Write(".");
}
}
Console.WriteLine("\nDone");
Console.ReadKey();
and the utility classes:
public static class Ieee754
{
[StructLayout(LayoutKind.Explicit)]
public struct Int32SingleConverter
{
[FieldOffset(0)]
public int Int32;
[FieldOffset(0)]
public float Single;
}
public static float NextSingle(float value)
{
int bits = new Int32SingleConverter { Single = value }.Int32;
if (bits >= 0)
{
bits++;
}
else if (bits != int.MinValue)
{
bits--;
}
else
{
bits = 0;
}
return new Int32SingleConverter { Int32 = bits }.Single;
}
}
On my computer, in Release mode, without the debugger (Ctrl+F5 from Visual Studio), the full test takes around 2 minutes.
There are around 4 billion different float values. I iterate through all of them, round-trip each one through double, and reinterpret the bits as an int to compare them exactly. Note that NaN values are "particular". The IEEE 754 standard allows multiple bit patterns for NaN, but .NET "compresses" them to a single NaN value. So you could create a NaN value (manually, through bit manipulation) that wouldn't be converted back and forth correctly. The "standard" NaN value is converted correctly, as are PositiveInfinity and NegativeInfinity, +0 and -0.
Yes, as every float can be exactly represented as a double, the round trip will give you the exact value that you started with.
There is one possible technical exception to your requirement that they are bit-by-bit the same: there are multiple bit patterns that correspond to NaN values (this is often referred to as the "NaN payload"). As far as I know, there is no strict requirement that this be preserved: you will still get a NaN, just maybe a slightly different one.
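If you want to test this yourself, here is a small sketch (the payload bits are arbitrary, and BitConverter.Int32BitsToSingle/SingleToInt32Bits require .NET Core 2.0 or later):
int nanBits = unchecked((int)0xFFC00001);        // a NaN with a nonstandard payload
float f = BitConverter.Int32BitsToSingle(nanBits);
float roundTripped = (float)(double)f;
int newBits = BitConverter.SingleToInt32Bits(roundTripped);
Console.WriteLine(nanBits == newBits);           // the payload may or may not survive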
Is it possible (in C#) to cause a checked(...) expression to have dynamic "scope" for the overflow checking? In other words, in the following example:
int add(int a, int b)
{
return a + b;
}
void test()
{
int max = int.MaxValue;
int with_call = checked(add(max, 1)); // does NOT cause OverflowException
int without_call = checked(max + 1); // DOES cause OverflowException
}
because in the expression checked(add(max, 1)), a function call causes the overflow, no OverflowException is thrown, even though there is an overflow during the dynamic extent of the checked(...) expression.
Is there any way to cause both ways to evaluate int.MaxValue + 1 to throw an OverflowException?
EDIT: Well, either tell me if there is a way, or give me a better way to do this (please).
The reason I think I need this is because I have code like:
void do_op(int a, int b, Action<int, int> forSmallInts, Action<long, long> forBigInts)
{
try
{
checked(forSmallInts(a, b));
}
catch (OverflowException)
{
forBigInts((long)a, (long)b);
}
}
...
do_op(n1, n2,
(int a, int b) => Console.WriteLine("int: " + (a + b)),
(long a, long b) => Console.WriteLine("long: " + (a + b)));
I want this to print int: ... if a + b is in the int range, and long: ... if the small-integer addition overflows. Is there a way to do this that is better than simply changing every single Action (of which I have many)?
To be short: no, it is not possible for checked blocks or expressions to have dynamic scope.
If you want to apply this to the entirety of your code base, you should enable it in your compiler options (the /checked+ compiler flag, or <CheckForOverflowUnderflow>true</CheckForOverflowUnderflow> in the project file).
Checked expressions or checked blocks should be used where the operation is actually happening:
int add(int a, int b)
{
int returnValue = 0;
try
{
returnValue = checked(a + b);
}
catch(System.OverflowException ex)
{
//TODO: Do something with exception or rethrow
}
return returnValue;
}
void test()
{
int max = int.MaxValue;
int with_call = add(max, 1);
}
You shouldn't catch exceptions as part of the natural flow of your program. Instead, you should anticipate the problem. There are quite a few ways you can do this, but assuming you just care about int and long and when the addition overflows:
EDIT: Using the types you mention below in your comment instead of int and long:
void Add(RFSmallInt a, RFSmallInt b)
{
RFBigInt result = new RFBigInt(a) + new RFBigInt(b);
Console.WriteLine(
(result > RFSmallInt.MaxValue ? "RFBigInt: " : "RFSmallInt: ") + result);
}
This makes the assumption that you have a constructor for RFBigInt that promotes an RFSmallInt. This should be trivial, as BigInteger has the same for long. There is also an explicit cast from BigInteger to long that you can use to "demote" the value if it does not overflow.
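For the plain int/long case from the question, the same idea can be sketched with System.Numerics.BigInteger (the method name Add is mine; requires using System.Numerics):
static void Add(int a, int b)
{
    // Do the arithmetic in a type that cannot overflow, then inspect the result.
    BigInteger result = (BigInteger)a + b;
    string label = (result > int.MaxValue || result < int.MinValue) ? "long: " : "int: ";
    Console.WriteLine(label + result);
}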
An exception should be exceptional, not part of the usual program flow. But let's not worry about that for now :)
The direct answer to your question, I believe, is no, but you can always work around the problem. I'm posting a small part of some of the ninja stuff I made when implementing unbounded integers (in effect, a linked list of integers) which could help you.
This is a very simplistic approach for doing checked addition manually, if performance is not an issue. It is quite nice if you can overload the operators of the types, i.e. you control the types.
public static int SafeAdd(int left, int right)
{
    if (left == 0 || right == 0 || left < 0 && right > 0 || right < 0 && left > 0)
        // One is 0, or they are on opposite sides of 0: cannot overflow
        return left + right;
    else if (right > 0 && left > 0 && int.MaxValue - right >= left)
        // Both positive and the sum fits
        return left + right;
    else if (right < 0 && left < 0 && int.MinValue - right <= left)
        // Both negative and the sum fits
        return left + right;
    else
        throw new OverflowException();
}
Example with your own types:
public struct MyNumber
{
    public MyNumber(int value) { n = value; }
    public int n; // the value
    public static MyNumber operator +(MyNumber left, MyNumber right)
    {
        if (left == 0 || right == 0 || left < 0 && right > 0 || right < 0 && left > 0)
            // One is 0, or they are on opposite sides of 0: cannot overflow
            return new MyNumber(left.n + right.n); // int addition
        else if (right > 0 && left > 0 && int.MaxValue - right >= left)
            // Both positive and the sum fits
            return new MyNumber(left.n + right.n); // int addition
        else if (right < 0 && left < 0 && int.MinValue - right <= left)
            // Both negative and the sum fits
            return new MyNumber(left.n + right.n); // int addition
        else
            throw new OverflowException();
    }
    // I'm lazy, you should define your own comparisons really
    public static implicit operator int(MyNumber number) { return number.n; }
}
As I stated earlier, you will lose performance, but gain the exceptions.
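For example, with the overloaded operator the overflow surfaces as an exception at the call site:
var x = new MyNumber(int.MaxValue);
var y = new MyNumber(1);
try { var z = x + y; }
catch (OverflowException) { Console.WriteLine("overflow detected"); }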
You could use an expression tree and modify it to introduce checked operators for the math operations, then execute it. This sample is not compiled and tested; you may have to tweak it a little more.
void CheckedOp(int a, int b, Expression<Action<int, int>> small, Action<int, int> big)
{
    var smallFunc = InjectChecked(small);
    try
    {
        smallFunc(a, b);
    }
    catch (OverflowException)
    {
        big(a, b);
    }
}
Action<int, int> InjectChecked(Expression<Action<int, int>> exp)
{
    var v = new CheckedNodeVisitor();
    var r = v.Visit(exp.Body);
    return Expression.Lambda<Action<int, int>>(r, exp.Parameters).Compile();
}
class CheckedNodeVisitor : ExpressionVisitor
{
    protected override Expression VisitBinary(BinaryExpression be)
    {
        switch (be.NodeType)
        {
            case ExpressionType.Add:
                // Only rewrite int additions; string "+" also appears as an
                // Add node (backed by String.Concat) and has no checked form.
                if (be.Type == typeof(int))
                    return Expression.AddChecked(Visit(be.Left), Visit(be.Right));
                break;
        }
        return base.VisitBinary(be);
    }
}
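A hypothetical call matching the question's do_op example (the string concatenation inside the lambda stays unchecked, because the visitor only rewrites int additions):
CheckedOp(int.MaxValue, 1,
    (a, b) => Console.WriteLine("int: " + (a + b)),
    (a, b) => Console.WriteLine("long: " + ((long)a + b)));
// prints "long: 2147483648"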
I have C# code which works fine when the "optimize code" option is off, but fails otherwise. Is there any function or class attribute which can prevent the optimization of a single function or class, but let the compiler optimize the others?
(I tried unsafe and MethodImpl, but without success.)
Thanks
Edit:
I have done some more tests...
The code is like this :
double arg = (Math.PI / 2d - Math.Atan2(a, d));
With a = 1 and d = 0, arg should be 0.
This code is in a function which is called by Excel via ExcelDNA.
Calling identical code from an optimized console app: OK
Calling this code from Excel without optimization: OK
Calling this code from Excel with optimization: not OK; arg == 0 is false (instead, arg is a very small value near 0, but not 0)
Same result with [MethodImpl(MethodImplOptions.NoOptimization)] before the called function.
This very likely has to do with the floating point mode which Excel has probably set, meaning that your program calculates floating point slightly differently because of the program (Excel) hosting your assembly (DLL). This might impact how your results are calculated, or how/what values are automatically coerced to zero.
To be absolutely sure you are not going to run into issues with different floating point modes and/or errors, you should check for equality by checking whether the values are very close together. This is not really a hack.
public class AlmostDoubleComparer : IComparer<double>
{
public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
public const double Epsilon = double.Epsilon * 64d; // ~3.16E-322
public static bool IsZero(double x)
{
return Compare(x, 0) == 0;
}
public static int Compare(double x, double y)
{
// Very important that cmp(x, y) == cmp(y, x)
if (Double.IsNaN(x) || Double.IsNaN(y))
return 1;
if (Double.IsInfinity(x) || Double.IsInfinity(y))
return 1;
var absX = Math.Abs(x);
var absY = Math.Abs(y);
var diff = absX > absY ? absX - absY : absY - absX;
if (diff < Epsilon)
return 0;
if (x < y)
return -1;
else
return 1;
}
int IComparer<double>.Compare(double x, double y)
{
return Compare(x, y);
}
}
// E.g.
double arg = (Math.PI / 2d - Math.Atan2(a, d));
if (AlmostDoubleComparer.IsZero(arg))
// Regard it as zero.
I also ported the reinterpret-as-integer comparison, in case you find that more suitable (it deals with larger values more consistently).
public class AlmostDoubleComparer : IComparer<double>
{
public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
public const double MaxUnitsInTheLastPlace = 3;
public static bool IsZero(double x)
{
return Compare(x, 0) == 0;
}
public static int Compare(double x, double y)
{
// Very important that cmp(x, y) == cmp(y, x)
if (Double.IsNaN(x) || Double.IsNaN(y))
return 1;
if (Double.IsInfinity(x) || Double.IsInfinity(y))
return 1;
var ix = DoubleInt64.Reinterpret(x);
var iy = DoubleInt64.Reinterpret(y);
var diff = Math.Abs(ix - iy);
if (diff < MaxUnitsInTheLastPlace)
return 0;
if (ix < iy)
return -1;
else
return 1;
}
int IComparer<double>.Compare(double x, double y)
{
return Compare(x, y);
}
}
[StructLayout(LayoutKind.Explicit)]
public struct DoubleInt64
{
[FieldOffset(0)]
private double _double;
[FieldOffset(0)]
private long _int64;
private DoubleInt64(long value)
{
_double = 0d;
_int64 = value;
}
private DoubleInt64(double value)
{
_int64 = 0;
_double = value;
}
public static double Reinterpret(long value)
{
return new DoubleInt64(value)._double;
}
public static long Reinterpret(double value)
{
return new DoubleInt64(value)._int64;
}
}
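For example, two adjacent doubles around 1e300 differ by one unit in the last place and compare equal here, while any fixed absolute epsilon would misbehave at that magnitude:
double x = 1e300;
double y = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) + 1);
Console.WriteLine(AlmostDoubleComparer.Compare(x, y) == 0); // True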
Alternatively you could try to NGen the assembly and see if you can work around either the mode Excel has set, or how it is hosting the CLR.
That is what you get when working with floating point datatypes: you don't get exactly 0, but a very close value, since a double has limited precision, not every value can be represented, and sometimes those tiny precision errors add up. You need to expect that and check that the value is close enough to 0 rather than exactly 0.