I have the following extension methods, one for float and one for double.
public static string ToTimeString(this float f, TimeStringAccuracy accuracyFlags = DefaultAccuracyFlags, TimeStringSeparator separator = DefaultSeparator)
public static string ToTimeString(this double d, TimeStringAccuracy accuracyFlags = DefaultAccuracyFlags, TimeStringSeparator separator = DefaultSeparator)
When I call these methods from Visual Studio unit tests, the tests run correctly and the call is not ambiguous. When I call them on a float from Unity3D code (which uses Mono), the call is ambiguous between the two. Why doesn't the compiler know that it should call the float overload? Might Mono be causing this?
This is the call:
float i = 1f;
i.ToTimeString();
Compiler error:
Assets/Scipts/UIScripts/GameplayUIWindow.cs(61,48): error CS0121: The call is ambiguous between the following methods or properties: `ToTimeString(this double, TimeStringAccuracy, TimeStringSeparator)' and `ToTimeString(this float, TimeStringAccuracy, TimeStringSeparator)'
Try this version instead:
public static string ToTimeString<T>(this T target, TimeStringAccuracy accuracyFlags = DefaultAccuracyFlags, TimeStringSeparator separator = DefaultSeparator) where T : struct
It may compile to different code under Mono but still let you use the method the way you want. As a caveat, I recommend checking the type parameter T inside the method body so that the method behaves as expected for the supported types (such as float and double) and throws for unsupported ones (such as int or an enum); a sketch of that check is below.
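For example, here is a rough sketch of that kind of runtime check (the FormatSeconds helper stands in for whatever formatting logic the two original overloads share; it is not from the question):
public static string ToTimeString<T>(this T target, TimeStringAccuracy accuracyFlags = DefaultAccuracyFlags, TimeStringSeparator separator = DefaultSeparator) where T : struct
{
    double seconds;
    if (target is float)
        seconds = (float)(object)target;
    else if (target is double)
        seconds = (double)(object)target;
    else
        throw new ArgumentException("ToTimeString only supports float and double, got " + typeof(T));

    // delegate to the shared formatting logic (hypothetical helper)
    return FormatSeconds(seconds, accuracyFlags, separator);
}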
I'm trying to clean up some code I've inherited, and I'm wondering if there's a way in C# to prevent implicit casts of method parameters. We have a series of overloaded methods that take a variety of parameters, but we get unexpected/bad results if an incoming parameter is cast to a different type than what the method is using (hence the overloads).
For example, this
private unsafe void MyMethod(float arg0, float arg1)
{
/* Do Very Unsafe Things */
}
will get called by this
int i = 0;
int j = 1;
MyMethod(i, j);
if I don't have a specific overload to take (int, int).
I want to find some way at compile time to ensure that we have to write the (int, int) version of the method to call it like that instead of having the compiler "help" by using implicit casting. Is this possible?
I have the following issue:
I have designed a framework that uses a generic delegate function (Func<...>) to calculate distance between two real number vectors
The delegate has the following form:
Func<double[], double[], double> distance;
I have many different distance functions implemented
I would rather not change the signature of the distance delegate
I need to create a new distance function that, besides the two double vectors, needs some additional parameters, i.e.
Func<double[], double[], double, double, double> newDistance;
This does not have to be a delegate. What I want is to define a method with additional parameters and pass it through the old interface.
Can this be achieved, and how, without a serious redesign and without changing the signature of the distance delegate?
Regards,
Sebastian
OK I think I have found a way:
I can create a method that takes these two additional parameters and returns a Func delegate. Something like this:
public static Func<double[], double[], double> NewDistanceMethodWithParams(double alpha, double beta)
{
    Func<double[], double[], double> dist = (vec1, vec2) =>
    {
        // do some calculations on vec1 and vec2 using the alpha and beta params
        // and return the resulting distance (placeholder value shown here)
        return 0.0;
    };
    return dist;
}
First tests show that it works! :)
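For anyone reading along, usage would then look something like this (the distance field name is taken from the question; the parameter values are just illustrative):
distance = NewDistanceMethodWithParams(0.5, 2.0);
double result = distance(new[] { 1.0, 2.0 }, new[] { 3.0, 4.0 });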
You could always add an object parameter at the end for "extra data", but as written you cannot add extra parameters to the delegates; doing so would break your framework's ability to call them from its end.
That said, you CAN write lambdas that take only the specified arguments but then call into a method that has more arguments:
double extra1 = //...
double extra2 = //...
newDistance = (firstAr, secondAr) =>
newDistanceAsMethod(firstAr, secondAr, extra1, extra2);
Also note there is a delegate type for when you don't know the arguments at compile time, but using that can get very very tricky if you want a variable number of arguments.
My advice is to consider, despite the misgivings you mention, changing the parameters of the original delegate. If you're concerned about further extensibility, wrap the parameters in an interface or class and pull them out as obj.First and so on - inheritance allows you to add properties to classes and overload methods much more easily than delegates do. Thus distance becomes
Func<DistanceArgs, double> distance;
...
public class DistanceArgs
{
public double[] First;
public double[] Second;
}
public class NewDistanceArgs : DistanceArgs
{
public double Extra1;
public double Extra2;
...
}
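To illustrate how the class-based version might be consumed (a sketch only; the downcast and the distance math here are made up for the example):
Func<DistanceArgs, double> distance = args =>
{
    // a plain distance ignores any extra parameters
    double sum = 0;
    for (int k = 0; k < args.First.Length; k++)
        sum += (args.First[k] - args.Second[k]) * (args.First[k] - args.Second[k]);
    return Math.Sqrt(sum);
};

Func<DistanceArgs, double> weightedDistance = args =>
{
    // expects the caller to pass the derived args type
    var extended = (NewDistanceArgs)args;
    return extended.Extra1 * Math.Abs(args.First[0] - args.Second[0]) + extended.Extra2;
};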
Consider the following code:
static void Main()
{
dynamic a = 1;
int b = OneMethod(a);
}
private static string OneMethod(int number)
{
return "";
}
Please notice that the type of b and the return type of OneMethod do not match. Nevertheless it builds, and throws an exception at runtime. My question is: why does the compiler allow this? What is the philosophy behind it?
The reason may be that the compiler does not know which OneMethod would be called, because a is dynamic. But why can't it see that there is only one OneMethod? There will surely be an exception at runtime.
Any expression that has an operand of type dynamic will have a type of dynamic itself.
Thus your expression OneMethod(a) returns an object that's typed dynamically
so the first part of your code is equivalent to
static void Main()
{
dynamic a = 1;
dynamic temp = OneMethod(a);
int b = temp;
}
One way to argue why this is sensible even in your case depends on whether or not you think the compiler should change its behavior for that particular line when you add the method below:
private static T OneMethod<T>(T number)
Now the compiler won't know the return type until runtime. It won't even know which method is called: the generic one or the non-generic one. Wouldn't you be surprised if, in the first case, it marked the assignment as a compile error, and then adding a completely different method moved it to a runtime error?
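To make that concrete, here is a small hypothetical snippet with both methods defined; which one runs is decided only at runtime from the value actually stored in each dynamic variable:
// with both OneMethod(int) and OneMethod<T>(T) defined:
dynamic a = 1;
var r1 = OneMethod(a); // resolved at runtime to OneMethod(int); r1 is dynamic
dynamic s = "hello";
var r2 = OneMethod(s); // resolved at runtime to OneMethod<T>(T); r2 is dynamic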
The Type class has a method IsAssignableFrom() that almost works. Unfortunately it only returns true if the two types are the same or the first is in the hierarchy of the second. It says that decimal is not assignable from int, but I'd like a method that would indicate that decimals are assignable from ints, but ints are not always assignable from decimals. The compiler knows this but I need to figure this out at runtime.
Here's a test for an extension method.
[Test]
public void DecimalsShouldReallyBeAssignableFromInts()
{
Assert.IsTrue(typeof(decimal).IsReallyAssignableFrom(typeof(int)));
Assert.IsFalse(typeof(int).IsReallyAssignableFrom(typeof(decimal)));
}
Is there a way to implement IsReallyAssignableFrom() that would work like IsAssignableFrom() but also passes the test case above?
Thanks!
Edit:
This is basically the way it would be used. This example does not compile for me, so I had to set Default to 0 (instead of 0.0M).
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Parameter)]
public class MyAttribute : Attribute
{
public object Default { get; set; }
}
public class MyClass
{
public MyClass([MyAttribute(Default= 0.0M)] decimal number)
{
Console.WriteLine(number);
}
}
I get this error:
Error 4 An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type
There are actually three ways that a type can be “assignable” to another in the sense that you are looking for.
1. Class hierarchy, interface implementation, covariance and contravariance. This is what .IsAssignableFrom already checks for. (This also includes permissible boxing operations, e.g. int to object or DateTime to ValueType.)
2. User-defined implicit conversions. This is what all the other answers are referring to. You can retrieve these via Reflection; for example, the implicit conversion from int to decimal is a static method that looks like this:
   System.Decimal op_Implicit(Int32)
   You only need to check the two relevant types (in this case, Int32 and Decimal); if the conversion is not in those, then it doesn't exist.
3. Built-in implicit conversions which are defined in the C# language specification. Unfortunately Reflection doesn't show these. You will have to find them in the specification and copy the assignability rules into your code manually. This includes numeric conversions, e.g. int to long as well as float to double, pointer conversions, nullable conversions (int to int?), and lifted conversions.
Furthermore, a user-defined implicit conversion can be chained with a built-in implicit conversion. For example, if a user-defined implicit conversion exists from int to some type T, then it also doubles as a conversion from short to T. Similarly, T to short doubles as T to int.
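A rough sketch of the reflection lookup for point 2 (the helper name and structure are mine, not part of the answer; requires using System and System.Reflection):
static bool HasUserDefinedImplicitConversion(Type source, Type target)
{
    // op_Implicit may be declared on either the source or the target type
    foreach (var owner in new[] { source, target })
    {
        foreach (var method in owner.GetMethods(BindingFlags.Public | BindingFlags.Static))
        {
            if (method.Name == "op_Implicit" &&
                method.ReturnType == target &&
                method.GetParameters().Length == 1 &&
                method.GetParameters()[0].ParameterType == source)
                return true;
        }
    }
    return false;
}
For int to decimal, this finds the System.Decimal op_Implicit(Int32) method shown above.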
This one almost works... it's using Linq expressions:
public static bool IsReallyAssignableFrom(this Type type, Type otherType)
{
    if (type.IsAssignableFrom(otherType))
        return true;

    try
    {
        var v = Expression.Variable(otherType);
        var expr = Expression.Convert(v, type);
        return expr.Method == null || expr.Method.Name == "op_Implicit";
    }
    catch (InvalidOperationException)
    {
        return false;
    }
}
The only case that doesn't work is for built-in conversions for primitive types: it incorrectly returns true for conversions that should be explicit (e.g. int to short). I guess you could handle those cases manually, as there is a finite (and rather small) number of them.
I don't really like having to catch an exception to detect invalid conversions, but I don't see any other simple way to do it...
Timwi's answer is really complete, but I feel there's an even simpler way that would get you the same semantics (check "real" assignability), without actually having to define yourself what that means.
You can just try the assignment in question and look for an InvalidCastException (I know it's obvious). This way you avoid the hassle of checking the three possible meanings of assignability as Timwi mentioned. Here's a sample using xUnit:
[Fact]
public void DecimalsShouldReallyBeAssignableFromInts()
{
    var d = default(decimal);
    var i = default(int);
    Assert.Throws<InvalidCastException>(() => { var x = (int)d; });
    Assert.DoesNotThrow(() => { var y = (decimal)i; });
}
What you are looking for is whether there's an implicit conversion from the one type to the other. I would think that's doable via reflection, though it might be tricky, because the implicit conversion is defined as an operator overload (a static method), and I think it could be defined in any class, not just the one that can be implicitly converted.
In order to find out if one type can be assigned to another, you have to look for implicit conversions from one to the other. You can do this with reflection.
As Timwi said, you will also have to know some built-in rules, but those can be hard-coded.
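For those built-in rules, a hard-coded table of the implicit numeric conversions from the C# specification could look roughly like this (helper names are mine; requires using System and System.Collections.Generic):
static readonly Dictionary<Type, Type[]> BuiltInImplicitConversions = new Dictionary<Type, Type[]>
{
    { typeof(sbyte),  new[] { typeof(short), typeof(int), typeof(long), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(byte),   new[] { typeof(short), typeof(ushort), typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(short),  new[] { typeof(int), typeof(long), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(ushort), new[] { typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(int),    new[] { typeof(long), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(uint),   new[] { typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(long),   new[] { typeof(float), typeof(double), typeof(decimal) } },
    { typeof(ulong),  new[] { typeof(float), typeof(double), typeof(decimal) } },
    { typeof(char),   new[] { typeof(ushort), typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
    { typeof(float),  new[] { typeof(double) } },
};

static bool HasBuiltInImplicitConversion(Type source, Type target)
{
    Type[] targets;
    return BuiltInImplicitConversions.TryGetValue(source, out targets)
        && Array.IndexOf(targets, target) >= 0;
}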
It actually happens to be the case that the decimal type is not "assignable" to the int type, and vice versa. Problems occur when boxing/unboxing gets involved.
Take the example below:
int p = 0;
decimal d = 0m;
object o = d;
object x = p;

int a = (int)d;         // ok
int i = (int)o;         // invalid cast exception
decimal y = (decimal)x; // invalid cast exception
int j = d;              // compile error
This code looks like it should work, but both casts from object produce an invalid cast exception, and the last line generates a compile-time error.
The reason the assignment to a works is that the decimal type defines an explicit conversion operator to int. There is no implicit conversion operator from decimal to int.
Edit: The implicit operator does not even exist in the reverse direction; Int32 implements IConvertible, and that is how it converts to decimal.
End Edit
In other words, the types are not assignable, but convertible.
You could scan assemblies for explicit type cast operators and IConvertible interfaces, but I get the impression that would not serve you as well as programming for the specific few cases you know you will encounter.
Good luck!
I am writing unit tests that verify calculations in a database, and there is a lot of rounding and truncating and such, which means the figures are sometimes slightly off.
When verifying, I'm finding a lot of cases where results that should pass are reported as failing - for instance, the expected figure will be 1 and I'm getting 0.999999.
I mean, I could just round everything to an integer, but since I'm using a lot of randomized samples, eventually I'm going to get something like this:
10.5
10.4999999999
one is going to round to 10, the other will round to 11.
How should I solve this problem where I need something to be approximately correct?
Define a tolerance value (aka an 'epsilon' or 'delta'), for instance 0.00001, and then use it to compare the difference, like so:
if (Math.Abs(a - b) < delta)
{
// Values are within specified tolerance of each other....
}
You could use Double.Epsilon but you would have to use a multiplying factor.
Better still, write an extension method to do the same. We have something like Assert.AreSimilar(a, b) in our unit tests.
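A minimal sketch of such a helper (the names, the default tolerance, and the use of Assert.Fail are illustrative, not our actual implementation):
public static class AssertExtensions
{
    private const double DefaultTolerance = 0.00001;

    public static void AreSimilar(double expected, double actual, double tolerance = DefaultTolerance)
    {
        // fail the test if the values differ by more than the tolerance
        if (Math.Abs(expected - actual) > tolerance)
            Assert.Fail(string.Format("Expected {0} but got {1} (tolerance {2}).", expected, actual, tolerance));
    }
}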
Microsoft's Assert.AreEqual() method has an overload that takes a delta: public static void AreEqual(double expected, double actual, double delta)
NUnit also provides an overload to their Assert.AreEqual() method that allows for a delta to be provided.
You could provide a function that includes a parameter for an acceptable difference between two values. For example
// close is good for horseshoes, hand grenades, nuclear weapons, and doubles
static bool CloseEnoughForMe(double value1, double value2, double acceptableDifference)
{
return Math.Abs(value1 - value2) <= acceptableDifference;
}
And then call it
double value1 = 24.5;
double value2 = 24.4999;
bool equalValues = CloseEnoughForMe(value1, value2, 0.001);
If you wanted to be slightly professional about it, you could call the function ApproximatelyEquals or something along those lines.
static bool ApproximatelyEquals(this double value1, double value2, double acceptableDifference)
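As an extension method it would then read like this at the call site (same body as CloseEnoughForMe above):
bool equalValues = value1.ApproximatelyEquals(value2, 0.001);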
I haven't checked in which MS Test version they were added, but in v10.0.0.0 the Assert.AreEqual methods have overloads that accept a delta parameter and do an approximate comparison.
See:
https://msdn.microsoft.com/en-us/library/ms243458(v=vs.140).aspx
//
// Summary:
// Verifies that two specified doubles are equal, or within the specified accuracy
// of each other. The assertion fails if they are not within the specified accuracy
// of each other.
//
// Parameters:
// expected:
// The first double to compare. This is the double the unit test expects.
//
// actual:
// The second double to compare. This is the double the unit test produced.
//
// delta:
// The required accuracy. The assertion will fail only if expected is different
// from actual by more than delta.
//
// Exceptions:
// Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException:
// expected is different from actual by more than delta.
public static void AreEqual(double expected, double actual, double delta);
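For instance (the values here are just for illustration):
Assert.AreEqual(10.5, 10.4999999999, 0.001); // passes: the difference is within delta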
In NUnit, I like the clarity of this form:
double expected = 10.5;
double actual = 10.499999999;
double tolerance = 0.001;
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
One way to compare floating-point numbers is to look at how many representable floating-point values separate them. This approach is indifferent to the size of the numbers, so you don't have to worry about the size of the "epsilon" mentioned in other answers.
A description of the algorithm can be found here (the AlmostEqual2sComplement function in the end) and here is my C# version of it.
UPDATE:
The provided link is outdated. The new version, which includes some improvements and bug fixes, is here:
public static class DoubleComparerExtensions
{
    public static bool AlmostEquals(this double left, double right, long representationTolerance)
    {
        // compare how many representable doubles lie between the two values
        long leftAsBits = left.ToBits2Complement();
        long rightAsBits = right.ToBits2Complement();
        long floatingPointRepresentationsDiff = Math.Abs(leftAsBits - rightAsBits);
        return (floatingPointRepresentationsDiff <= representationTolerance);
    }

    private static unsafe long ToBits2Complement(this double value)
    {
        // reinterpret the double's bits as a long, then remap negative values so
        // the resulting integers are ordered the same way as the doubles they represent
        double* valueAsDoublePtr = &value;
        long* valueAsLongPtr = (long*)valueAsDoublePtr;
        long valueAsLong = *valueAsLongPtr;
        return valueAsLong < 0
            ? (long)(0x8000000000000000 - (ulong)valueAsLong)
            : valueAsLong;
    }
}
If you'd like to compare floats, change all double to float, all long to int and 0x8000000000000000 to 0x80000000.
With the representationTolerance parameter you can specify how big an error is tolerated. A higher value means a larger error is accepted. I normally use the value 10 as default.
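Usage would then look something like this (a sketch; the tolerance of 10 matches the default mentioned above):
double a = 0.1 + 0.2;
double b = 0.3;
bool almostEqual = a.AlmostEquals(b, 10); // true: the values differ by only a few representations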
The question was asking how to assert something was almost equal in unit testing. You assert something is almost equal by using the built-in Assert.AreEqual function. For example:
Assert.AreEqual(expected: 3.5, actual: 3.4999999, delta: 0.1);
This test will pass. Problem solved and without having to write your own function!
FluentAssertions provides this functionality in a way that is perhaps clearer to the reader.
result.Should().BeApproximately(expectedResult, 0.01m);