I want to compare thicknesses by checking whether
Thickness A equals Thickness B,
and it doesn't work. The comparison is always false. Why?
P.S.
Why does new Thickness(2.1) return 2.09923289[..] instead of 2.1,
while new Thickness(2.0) returns exactly 2.0?
Double values are not safe to compare for exact equality because of how doubles are stored in memory. I would advise you to compare against a tolerance, something like if (Math.Abs(thicknessA.Left - thicknessB.Left) < TOLERANCE), for each of the four sides.
You can do a quick test and try to check something like:
var passed = false;
if(0.2 + 0.1 == 0.3)
passed = true;
and you'll see that the condition is false (passed stays false).
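To see why, you can print the value that is actually stored; the round-trip ("R") format shows the hidden digits (a small sketch, assuming a plain console app):
Console.WriteLine((0.2 + 0.1).ToString("R")); // 0.30000000000000004
Console.WriteLine(0.3.ToString("R"));         // 0.3
Console.WriteLine(0.2 + 0.1 == 0.3);          // False
The sum picks up a tiny representation error, so an exact equality check fails even though both sides "look like" 0.3.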
The values for the left, top, right and bottom of a Thickness are double values.
As such, you have to use Math.Abs to compare them against a tolerance value.
These are the helper methods I've got in my WinUX library which will do the job for you:
public static readonly double Epsilon = 2.2204460492503131E-16;
public static bool AreClose(Thickness value1, Thickness value2)
{
return AreClose(value1.Left, value2.Left)
    && AreClose(value1.Top, value2.Top)
    && AreClose(value1.Right, value2.Right)
    && AreClose(value1.Bottom, value2.Bottom);
}
public static bool AreClose(double value1, double value2)
{
if (Math.Abs(value1 - value2) < 0.00005)
{
return true;
}
var a = (Math.Abs(value1) + Math.Abs(value2) + 10.0) * Epsilon;
var b = value1 - value2;
return (-a < b) && (a > b);
}
You'd then use it in your scenario like this:
if (AreClose(new Thickness(2.1), lessonGrid.BorderThickness))
{
// Code-here
}
Original source: https://github.com/jamesmcroft/WinUX-UWP-Toolkit/blob/develop/WinUX/WinUX.Common/Maths/MathHelper.cs
Example:
public static double ComputeFoo(double nom, double den, double epsilon = 2.2e-16)
{
double den1 = den == 0.0 ? epsilon : den;
// den1 can still be zero if epsilon is zero
// is there any way to retrieve 2.2e-16 here and assign it to den1?
return nom/den1;
}
Is there a way to retrieve the 2.2e-16 value and use it inside the method?
P.S.: I understand that for this particular example I can just call ComputeFoo(nom, den1).
You can set a constant value somewhere in your class and pass it as the default value to the method. Inside the method you can then check whether the passed value differs from the constant, or vice versa:
static void Main(string[] args)
{
Test(0);
}
const int constantValue = 15;
static int Test(int testValue = constantValue)
{
Console.WriteLine(testValue);
Console.WriteLine(constantValue);
return constantValue;
}
Note: constantValue must be a constant in order to build successfully.
Here's a different, generic approach using reflection, which I mentioned above in my comment.
public static T GetDefaultOptionalParamValue<T, TClass>(string methodName, string paramName)
{
if (typeof(TClass).GetMethod(methodName)?.GetParameters().Where(p => p.Attributes.HasFlag(ParameterAttributes.Optional) &&
p.Attributes.HasFlag(ParameterAttributes.HasDefault) && p.Name == paramName)?.FirstOrDefault()?.DefaultValue is T myValue)
{
return myValue;
}
else { return default; }
}
You can then call it like so:
var t = GetDefaultOptionalParamValue<double, ClassName>("ComputeFoo", "epsilon");
The value of t is 2.2E-16.
Use the StackFrame class from System.Diagnostics together with reflection to get access to the parameters of the enclosing method. This will get the default value of its third parameter:
var x = new StackFrame(0).GetMethod().GetParameters()[2].DefaultValue;
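Inside the question's ComputeFoo that could look like the sketch below. Note this is an assumption-heavy approach: it relies on the parameter index (2 = epsilon) and on the current stack frame not being inlined away, so it can be fragile in optimized builds.
// requires: using System.Diagnostics;
public static double ComputeFoo(double nom, double den, double epsilon = 2.2e-16)
{
    double den1 = den == 0.0 ? epsilon : den;
    if (den1 == 0.0)
    {
        // Read the declared default value of the third parameter of the current method.
        den1 = (double)new StackFrame(0).GetMethod().GetParameters()[2].DefaultValue;
    }
    return nom / den1;
}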
It's not possible to get at the default value from inside the method itself. A way to make this work for you is to make the default value a constant in your class. For example:
private const double epsilonDefault = 2.2e-16;
public static double ComputeFoo(double nom, double den, double epsilon = epsilonDefault)
{
double den1 = den == 0.0 ? epsilon : den;
if (den1 == 0) den1 = epsilonDefault;
return nom / den1;
}
This way your default value is declared outside the method and available when you need it.
EDIT: To be complete, this is possible through reflection, but that seems like too much for this question. A basic example of how to do it with reflection:
public static void Execute(int number = 10)
{
Console.WriteLine(number);
var defaultValue = typeof(Program)
.GetMethod("Execute")
.GetParameters()[0]
.DefaultValue;
Console.WriteLine(defaultValue); // 10
}
No, you can't retrieve it inside the method. What you can do instead is make the parameter nullable with a default of null and substitute the real default inside the method:
public static double ComputeFoo(double nom, double den, double? epsilon = null)
{
    if (epsilon == null)
        epsilon = 2.2e-16;
    double den1 = den == 0.0 ? epsilon.Value : den;
    // den1 can now only be zero if the caller explicitly passes epsilon = 0
    return nom / den1;
}
This is the implementation from Microsoft for Sinh of a Complex
public static Complex Sinh(Complex value) /* Hyperbolic sin */
{
double a = value.m_real;
double b = value.m_imaginary;
return new Complex(Math.Sinh(a) * Math.Cos(b), Math.Cosh(a) * Math.Sin(b));
}
and the implementation for Cosh
public static Complex Cosh(Complex value) /* Hyperbolic cos */
{
    double a = value.m_real;
    double b = value.m_imaginary;
    return new Complex(Math.Cosh(a) * Math.Cos(b), Math.Sinh(a) * Math.Sin(b));
}
and finally the implementation for Tanh
public static Complex Tanh(Complex value) /* Hyperbolic tan */
{
return (Sinh(value) / Cosh(value));
}
Source: https://referencesource.microsoft.com/System.Numerics/a.html#e62f37ac1d0c67da
I don't understand why Microsoft implemented the Tanh method that way.
It fails for very large values, e.g.:
tanh(709 + 0i) --> 1, ok
tanh(711 + 0i) --> NaN, fails; should be 1
Any ideas how to improve the Tanh method?
For double, the Math.Tanh method works for large values.
The complex Tanh method could be implemented like this:
public static Complex Tanh(Complex value)
{
    // tanh(a + bi) = (tanh(a) + i*tan(b)) / (1 + i*tanh(a)*tan(b))
    double a = value.Real;
    double b = value.Imaginary;
    double tanh_a = Math.Tanh(a);
    double tan_b = Math.Tan(b);
    Complex num = new Complex(tanh_a, tan_b);
    Complex den = new Complex(1, tanh_a * tan_b);
    return num / den;
}
This will work as well for large values, see https://dotnetfiddle.net/xGWdQt.
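A quick check with the values from the question (a sketch, assuming the Tanh method above is in scope and System.Numerics is referenced):
Console.WriteLine(Tanh(new Complex(709, 0)).Real); // 1
Console.WriteLine(Tanh(new Complex(711, 0)).Real); // 1, where the built-in Complex.Tanh gives NaN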
Update
The complex Tan method also needs to be re-implemented so that it works with large values of the imaginary part:
public static Complex Tan(Complex value)
{
    // tan(a + bi) = (tan(a) + i*tanh(b)) / (1 - i*tan(a)*tanh(b))
    double a = value.Real;
    double b = value.Imaginary;
    double tan_a = Math.Tan(a);
    double tanh_b = Math.Tanh(b);
    Complex num = new Complex(tan_a, tanh_b);
    Complex den = new Complex(1, -tan_a * tanh_b);
    return num / den;
}
See https://dotnetfiddle.net/dh6CSG.
Using the comment from Hans Passant, another way to implement the Tanh method would be:
public static Complex Tanh(Complex value)
{
if (Math.Abs(value.Real) > 20)
return new Complex(Math.Sign(value.Real), 0);
else
return Complex.Tanh(value);
}
See https://dotnetfiddle.net/QvUECX.
And the tan method:
public static Complex Tan(Complex value)
{
if (Math.Abs(value.Imaginary) > 20)
return new Complex(0, Math.Sign(value.Imaginary));
else
return Complex.Tan(value);
}
See https://dotnetfiddle.net/Xzclcu.
We are using a code analyzer which has a rule "Do Not Check Floating Point Equality/Inequality". Below is the example given:
float f = 0.100000001f; // 0.1
double d = 0.10000000000000001; // 0.1
float myNumber = 3.146f;
if ( myNumber == 3.146f ) //Noncompliant. Because of floating point imprecision, this will be false
{
////
}
else
{
////
}
if (myNumber <= 3.146f && myNumber >= 3.146f) // Noncompliant indirect equality test
{
// ...
}
if (myNumber < 4 || myNumber > 4) // Noncompliant indirect inequality test
{
// ...
}
When I tested this code, if (myNumber == 3.146f) evaluated to true, so I am not able to understand what this rule is trying to say.
What is the solution or code change required to satisfy this rule?
Is this rule applicable to C#? When I googled it, I mostly saw C/C++ examples for this rule.
Floating point is not precise. In some cases the result is unexpected, so it's bad practice to compare floating point numbers for equality without some tolerance.
This can be demonstrated with a simple example.
if(0.1 + 0.2 == 0.3)
{
Console.WriteLine("Equal");
}
else
{
Console.WriteLine("Not Equal");
}
It will print Not Equal.
Demo: https://dotnetfiddle.net/ltAFWe
The solution is to add some tolerance, for example:
if(Math.Abs((0.1 + 0.2) - 0.3) < 0.0001)
{
Console.WriteLine("Equal");
}
else
{
Console.WriteLine("Not Equal");
}
Now it will print Equal.
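Applied to the example from the question it would look like this (a sketch; the tolerance value is something you have to choose for your domain):
float myNumber = 3.146f;
if (Math.Abs(myNumber - 3.146f) < 0.0001f) // compliant: equality within a tolerance
{
    ////
}
else
{
    ////
}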
A fairly readable solution to this is to define an extension method for double like so:
public static class FloatAndDoubleExt
{
public static bool IsApproximately(this double self, double other, double within)
{
return Math.Abs(self - other) <= within;
}
public static bool IsApproximately(this float self, float other, float within)
{
return Math.Abs(self - other) <= within;
}
}
Then use it like so:
float myNumber = 3.146f;
if (myNumber.IsApproximately(3.146f, within:0.001f))
{
////
}
else
{
////
}
Also see the documentation for Double.Equals() for more information.
Is it possible (in C#) to cause a checked(...) expression to have dynamic "scope" for the overflow checking? In other words, in the following example:
int add(int a, int b)
{
return a + b;
}
void test()
{
int max = int.MaxValue;
int with_call = checked(add(max, 1)); // does NOT cause OverflowException
int without_call = checked(max + 1); // DOES cause OverflowException
}
because in the expression checked(add(max, 1)), a function call causes the overflow, no OverflowException is thrown, even though there is an overflow during the dynamic extent of the checked(...) expression.
Is there any way to cause both ways to evaluate int.MaxValue + 1 to throw an OverflowException?
EDIT: Well, either tell me if there is a way, or give me a better way to do this (please).
The reason I think I need this is because I have code like:
void do_op(int a, int b, Action<int, int> forSmallInts, Action<long, long> forBigInts)
{
try
{
checked(forSmallInts(a, b));
}
catch (OverflowException)
{
forBigInts((long)a, (long)b);
}
}
...
do_op(n1, n2,
(int a, int b) => Console.WriteLine("int: " + (a + b)),
(long a, long b) => Console.WriteLine("long: " + (a + b)));
I want this to print int: ... if a + b is in the int range, and long: ... if the small-integer addition overflows. Is there a way to do this that is better than simply changing every single Action (of which I have many)?
To be short: no, it is not possible for checked blocks or expressions to have dynamic scope.
If you want to apply this to the entirety of your code base, you should look at enabling it in your compiler options (the -checked compiler flag, or the CheckForOverflowUnderflow property in the project file).
Checked expressions or checked blocks should be used where the operation actually happens.
int add(int a, int b)
{
int returnValue = 0;
try
{
returnValue = checked(a + b);
}
catch(System.OverflowException ex)
{
//TODO: Do something with exception or rethrow
}
return returnValue;
}
void test()
{
int max = int.MaxValue;
int with_call = add(max, 1);
}
You shouldn't catch exceptions as part of the natural flow of your program. Instead, you should anticipate the problem. There are quite a few ways you can do this, but assuming you just care about int and long and when the addition overflows:
EDIT: Using the types you mention below in your comment instead of int and long:
void Add(RFSmallInt a, RFSmallInt b)
{
RFBigInt result = new RFBigInt(a) + new RFBigInt(b);
Console.WriteLine(
(result > RFSmallInt.MaxValue ? "RFBigInt: " : "RFSmallInt: ") + result);
}
This assumes that you have a constructor for RFBigInt that promotes an RFSmallInt. That should be trivial, as BigInteger has the same for long. There is also an explicit cast from BigInteger to long that you can use to "demote" the value if it does not overflow.
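With the built-in types the same pattern would look like this (a sketch using int and BigInteger, since the RF* types are not shown in the question):
// requires: using System.Numerics;
static void Add(int a, int b)
{
    // Do the addition in the wider type, then decide how to report it.
    BigInteger result = new BigInteger(a) + new BigInteger(b);
    Console.WriteLine(
        (result > int.MaxValue || result < int.MinValue ? "BigInteger: " : "int: ") + result);
}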
An exception should be an exception, not the usual program flow. But let's not care about that for now :)
The direct answer to your question is, I believe, no, but you can always work around the problem. I'm posting a small part of some of the ninja stuff I made when implementing unbounded integers (in effect a linked list of integers), which could help you.
This is a very simplistic approach for doing checked addition manually, if performance is not an issue. It is quite nice if you can overload the operators of the types, i.e. you control the types.
public static int SafeAdd(int left, int right)
{
if (left == 0 || right == 0 || left < 0 && right > 0 || right < 0 && left > 0)
// One is 0 or they are both on different sides of 0
return left + right;
else if (right > 0 && left > 0 && int.MaxValue - right >= left)
// More than 0 and ok
return left + right;
else if (right < 0 && left < 0 && int.MinValue - right <= left)
// Less than 0 and ok
return left + right;
else
throw new OverflowException();
}
Example with your own types:
public struct MyNumber
{
public MyNumber(int value) { n = value; }
public int n; // the value
public static MyNumber operator +(MyNumber left, MyNumber right)
{
if (left == 0 || right == 0 || left < 0 && right > 0 || right < 0 && left > 0)
// One is 0 or they are both on different sides of 0
return new MyNumber(left.n + right.n); // int addition
else if (right > 0 && left > 0 && int.MaxValue - right >= left)
// More than 0 and ok
return new MyNumber(left.n + right.n); // int addition
else if (right < 0 && left < 0 && int.MinValue - right <= left)
// Less than 0 and ok
return new MyNumber(left.n + right.n); // int addition
else
throw new OverflowException();
}
// I'm lazy, you should define your own comparisons really
public static implicit operator int(MyNumber number) { return number.n; }
}
As I stated earlier, you will lose performance, but gain the exceptions.
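Usage is then just a matter of catching the exception where you need the fallback (a sketch, assuming the SafeAdd method above):
try
{
    int sum = SafeAdd(int.MaxValue, 1); // throws OverflowException
    Console.WriteLine("int: " + sum);
}
catch (OverflowException)
{
    long sum = (long)int.MaxValue + 1;  // fall back to the wider type
    Console.WriteLine("long: " + sum);
}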
You could use an expression tree and modify it to introduce checked math operators, then compile and execute it. This sample is not compiled or tested; you will have to tweak it a little more.
void CheckedOp(int a, int b, Expression<Action<int, int>> small, Action<int, int> big)
{
    var smallFunc = InjectChecked(small);
    try
    {
        smallFunc(a, b);
    }
    catch (OverflowException)
    {
        big(a, b);
    }
}
Action<int, int> InjectChecked(Expression<Action<int, int>> exp)
{
    var v = new CheckedNodeVisitor();
    var r = v.Visit(exp.Body);
    return Expression.Lambda<Action<int, int>>(r, exp.Parameters).Compile();
}
class CheckedNodeVisitor : ExpressionVisitor {
public CheckedNodeVisitor() {
}
protected override Expression VisitBinary( BinaryExpression be ) {
switch(be.NodeType){
case ExpressionType.Add:
return Expression.AddChecked( be.Left, be.Right);
}
return be;
}
}
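Usage would then look roughly like the question's do_op call (a sketch, assuming the methods above after the tweaks mentioned; note the small lambda has to stay simple enough for the naive visitor, e.g. no string concatenation, which is also an Add node in the tree):
CheckedOp(int.MaxValue, 1,
    (a, b) => Console.WriteLine(a + b),        // this + gets rewritten to AddChecked and overflows
    (a, b) => Console.WriteLine((long)a + b)); // fallback runs and prints 2147483648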
I have C# code which works fine when the "optimize code" option is off, but fails otherwise. Is there any function or class attribute which can prevent optimisation of a single function or class, but let the compiler optimize the others?
(I tried unsafe and MethodImpl, but without success.)
Thanks
Edit :
I have done some more tests...
The code is like this:
double arg = (Math.PI / 2d - Math.Atan2(a, d));
With a = 1 and d = 0, arg should be 0.
This code is in a function which is called by Excel via ExcelDNA.
Calling identical code from an optimized console app: OK
Calling this code from Excel without optimization: OK
Calling this code from Excel with optimization: not OK; arg == 0 is false (instead arg is a very small value near 0, but not 0)
Same result with [MethodImpl(MethodImplOptions.NoOptimization)] before the called function.
This very likely has to do with the floating point mode which Excel has probably set, meaning that your program is calculating floating point slightly differently because of the program (Excel) hosting your assembly (DLL). This might impact how your results are calculated, or how/what values are automatically coerced to zero.
To be absolutely sure you are not going to run into issues with different floating point modes and/or errors, you should check for equality by checking whether the values are very close together. This is not really a hack.
public class AlmostDoubleComparer : IComparer<double>
{
public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
public const double Epsilon = double.Epsilon * 64d; // 0.{322 zeroes}316
public static bool IsZero(double x)
{
return Compare(x, 0) == 0;
}
public static int Compare(double x, double y)
{
// Very important that cmp(x, y) == cmp(y, x)
if (Double.IsNaN(x) || Double.IsNaN(y))
return 1;
if (Double.IsInfinity(x) || Double.IsInfinity(y))
return 1;
var absX = Math.Abs(x);
var absY = Math.Abs(y);
var diff = absX > absY ? absX - absY : absY - absX;
if (diff < Epsilon)
return 0;
if (x < y)
return -1;
else
return 1;
}
int IComparer<double>.Compare(double x, double y)
{
return Compare(x, y);
}
}
// E.g.
double arg = (Math.PI / 2d - Math.Atan2(a, d));
if (AlmostDoubleComparer.IsZero(arg))
// Regard it as zero.
I also ported the re-interpret integer comparison, in case you find that more suitable (it deals with larger values more consistently).
public class AlmostDoubleComparer : IComparer<double>
{
public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
public const double MaxUnitsInTheLastPlace = 3;
public static bool IsZero(double x)
{
return Compare(x, 0) == 0;
}
public static int Compare(double x, double y)
{
// Very important that cmp(x, y) == cmp(y, x)
if (Double.IsNaN(x) || Double.IsNaN(y))
return 1;
if (Double.IsInfinity(x) || Double.IsInfinity(y))
return 1;
var ix = DoubleInt64.Reinterpret(x);
var iy = DoubleInt64.Reinterpret(y);
var diff = Math.Abs(ix - iy);
if (diff < MaxUnitsInTheLastPlace)
return 0;
if (ix < iy)
return -1;
else
return 1;
}
int IComparer<double>.Compare(double x, double y)
{
return Compare(x, y);
}
}
[StructLayout(LayoutKind.Explicit)]
public struct DoubleInt64
{
[FieldOffset(0)]
private double _double;
[FieldOffset(0)]
private long _int64;
private DoubleInt64(long value)
{
_double = 0d;
_int64 = value;
}
private DoubleInt64(double value)
{
_int64 = 0;
_double = value;
}
public static double Reinterpret(long value)
{
return new DoubleInt64(value)._double;
}
public static long Reinterpret(double value)
{
return new DoubleInt64(value)._int64;
}
}
Alternatively, you could try to NGen the assembly and see if you can work around either the mode Excel has set or how it is hosting the CLR.
That is what you get when working with floating point datatypes. You don't get exactly 0 but a very close value, since a double has limited precision, not every value can be represented, and sometimes those tiny precision errors add up. You need to expect that and check that the value is close enough to 0 rather than exactly 0.
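For the Atan2 example in the question, that check could look like this (a sketch; the tolerance value here is an arbitrary choice you should adapt to your domain):
const double tolerance = 1e-9;                  // assumed value; pick what "zero" means in your domain
double arg = Math.PI / 2d - Math.Atan2(1d, 0d); // 0, or a value extremely close to 0
bool isZero = Math.Abs(arg) < tolerance;        // treat anything within the tolerance as zero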