There is an exercise, "OverloadResolutionOverride":
What will be the output of the following code?
class Foo
{
    public virtual void Quux(int a)
    {
        Console.WriteLine("Foo.Quux(int)");
    }
}

class Bar : Foo
{
    public override void Quux(int a)
    {
        Console.WriteLine("Bar.Quux(int)");
    }

    public void Quux(object a)
    {
        Console.WriteLine("Bar.Quux(object)");
    }
}

class Baz : Bar
{
    public override void Quux(int a)
    {
        Console.WriteLine("Baz.Quux(int)");
    }

    public void Quux<T>(params T[] a)
    {
        Console.WriteLine("Baz.Quux(params T[])");
    }
}

void Main()
{
    new Bar().Quux(42);
    new Baz().Quux(42);
}
The answer is:
Bar.Quux(object)
Baz.Quux(params T[])
There is an explanation on the site:
If the compiler finds a suitable signature for a method
call in the "current" class, it will not look at parent
classes.
Is it considered that the overloaded Quux(int) method is in the base class, and not in the current one? If so, how can I call exactly the Quux(int) method in the current class?
This is definitely an interesting effect of what happens when the compiler does its method resolution.
Let's go to the specification and see what we can get out of that (This is my understanding and I am by no means an expert in reading the specification, so if I got something wrong, please let me know!).
The first step is to find methods whose parameters make them Applicable, with the first bullet reading:
Each argument in A [Where A is the argument list] corresponds to a parameter in the function member
declaration as described in Corresponding parameters, and any
parameter to which no argument corresponds is an optional parameter.
So now we go check out what a Corresponding parameter is and we get:
For virtual methods and indexers defined in classes, the parameter
list is picked from the most specific declaration or override of the
function member, starting with the static type of the receiver, and
searching through its base classes.
And we also have
For all other function members and delegates there is only a single
parameter list, which is the one used.
Therefore, for the class Bar, we find two methods that fit the bill:
Bar.Quux(object). This comes from the second paragraph, because it is defined on the type directly.
Foo.Quux(int). This comes from the first paragraph, by following the override up to the virtual method declaration.
For the Baz class we get 3:
Baz.Quux<T>(params T[]). This is defined on the type.
Bar.Quux(object). This is defined in a parent class and is visible in the scope of Baz.
Foo.Quux(int). This is the virtual declaration of the override.
This gives us 2 method matches in Bar, and 3 possible method matches in Baz. This means we need to cull the candidate set further with the next rule (emphasis mine):
The set of candidate methods is reduced to contain only methods from
the most derived types: For each method C.F in the set, where C is the
type in which the method F is declared, all methods declared in a base
type of C are removed from the set. Furthermore, if C is a class type
other than object, all methods declared in an interface type are
removed from the set. (This latter rule only has affect when the
method group was the result of a member lookup on a type parameter
having an effective base class other than object and a non-empty
effective interface set.)
Therefore, for Bar, we will cull Foo.Quux(int) because it is declared in a base type and is therefore removed from the set.
For Baz, we remove the following two methods because they are both declared in base types:
Bar.Quux(object)
Foo.Quux(int)
Now, each set has only one method, and we can execute the two methods Bar.Quux(object) and Baz.Quux<int>(int[]).
Picking the right Method
So this raises the question: can we force the correct method to be called? Based on the second resolution step, where the most derived type wins, the answer is yes.
If we want to call Foo's method, we need to cast the receiver to Foo. If we want Baz to call Bar's method, we need to cast the Baz instance to Bar.
Considering the following set of method calls:
new Bar().Quux(42);
new Baz().Quux(42);
((Foo)new Bar()).Quux(42);
((Foo)new Baz()).Quux(42);
((Bar)new Baz()).Quux(42);
We get the following output:
Bar.Quux(object)
Baz.Quux(params T[])
Bar.Quux(int)
Baz.Quux(int)
Bar.Quux(object)
And are able to target the specific methods that we want to using a similar method resolution as above.
One More Step
If we change the definition of Baz to be following:
class Baz : Bar
{
    public override void Quux(int a)
    {
        Console.WriteLine("Baz.Quux(int)");
    }
}
And then make the method call new Baz().Quux(42);, our output is still Bar.Quux(object). This seems strange because we have a method override defined within Baz directly. However, the originating type for that method is Foo, which is less specific than Bar. So when we match our parameter lists, we end up with Bar.Quux(object) and Foo.Quux(int), since the int parameter list is declared on Foo. Therefore, Foo.Quux(int) is culled in the second step because Bar is more derived than Foo, and we call Bar.Quux(object).
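A self-contained sketch of this reduced hierarchy confirms the behavior (class and method names follow the example above):

```csharp
using System;

class Foo
{
    public virtual void Quux(int a) => Console.WriteLine("Foo.Quux(int)");
}

class Bar : Foo
{
    public override void Quux(int a) => Console.WriteLine("Bar.Quux(int)");
    public void Quux(object a) => Console.WriteLine("Bar.Quux(object)");
}

// Baz now declares ONLY the override -- no new overloads of its own.
class Baz : Bar
{
    public override void Quux(int a) => Console.WriteLine("Baz.Quux(int)");
}

class Program
{
    static void Main()
    {
        // The int parameter list belongs to Foo (where the virtual is declared),
        // so Bar.Quux(object) is "closer" and wins overload resolution.
        new Baz().Quux(42); // prints: Bar.Quux(object)
    }
}
```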
I think the moral of the story here is don't name methods the same as overrides!
As others note, the way to call the method you want is to cast to a base type: ((Bar)new Baz()).Quux(42);.
But why does the compiler choose the generic method? Isn't an exact non generic match preferred over a generic method when resolving method resolution?
Yes, but another rule applies here too; method resolution will prefer the nearest applicable method, nearest meaning where the method is declared in relation to the callsite.
Uhm... Quux(int) is declared in Baz so I'm not really sure...
No! Overridden methods belong to the class that declares the virtual method; in this case, as far as the compiler is concerned, the "owner" of Quux(int) is Foo, not Baz.
And why would the compiler apply such a rule? Well, suppose the following scenario:
Company Alpha publishes a class A:
class A { ... }
And company Beta goes ahead and consumes A by extending it with B:
class B : A {
    public void Frob<T>(T frobbinglevel) { ... }
    ...
}
Everything is hunky-dory and Beta is having tremendous success with its brilliant frobbing. Company Alpha decides to have a go at its own frobbing and publishes a new version of A:
class A {
    public virtual void Frob(int frobbinglevel) { ... }
    ...
}
Beta recompiles its library with Alpha's updates and now what should happen? Should the compiler start choosing A.Frob(1) over B.Frob(1), given that A.Frob is a non-generic exact match?
But hey... Beta hasn't changed anything and now its class is not working as it should! Suddenly, without any change in its code, method resolution is choosing different methods, breaking its customers...that doesn't seem right. This is the reason why the rule exists.
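A minimal sketch of that scenario (the names A, B, and Frob come from the story above) shows the rule in action: even after Alpha ships its exact-match Frob(int), Beta's generic method keeps winning at Beta's call sites:

```csharp
using System;

// Alpha's updated class, now with its own Frob.
class A
{
    public virtual void Frob(int frobbinglevel) => Console.WriteLine("A.Frob(int)");
}

// Beta's class, unchanged since before Alpha added Frob.
class B : A
{
    public void Frob<T>(T frobbinglevel) => Console.WriteLine("B.Frob<T>");
}

class Program
{
    static void Main()
    {
        // B.Frob<T> is declared on the more derived type, so it still wins
        // over the exact non-generic match A.Frob(int).
        new B().Frob(1); // prints: B.Frob<T>
    }
}
```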
To call exactly Quux(int) on a Bar or Baz, you should cast the instance to Foo:
Foo bar = new Bar();
Foo baz = new Baz();
bar.Quux(42);
baz.Quux(42);
The output:
Bar.Quux(int)
Baz.Quux(int)
Related
This is a question that might perhaps be foolish, but I can't seem to reason my way around it so I have come here for some thoughts. It's about inheritance and lists.
Alright, so I have two classes: TestClass and DerivedTestClass. DerivedTestClass is derived from TestClass. Both classes have a method called 'Method1'.
These are both very simple methods. They just have a message box saying what method is being accessed. In fact, let me write the code just to remove ambiguity:
public void Method1(String typeName)
{
    MessageBox.Show("Base method 1 + Type: " + typeName);
}
This is for the base class (TestClass).
public void Method1(String typeName)
{
    MessageBox.Show("Derived method 1 + Type: " + typeName);
}
This is for the derived class (DerivedTestClass).
Next I created an instance of both of these and called Method1 for each. And it's exactly as you'd expect: when I called it on the base class I got the first message box, and when I called it on the derived class I got the second one.
No mysteries so far, but now we get to the part where my understanding seems to be lacking.
I create a List into which I add both instances I have created: the instance of the base class and the instance of the derived class. Then I created a foreach loop which went through each item in the list and called Method1, as follows:
foreach (var tc in listT)
{
    tc.Method1(tc.GetType().Name);
}
In both cases the base method is called. Now, in one respect I get that, as the list itself is typed as the base class. The problem is if I look at the types: if I ask the first item in the list what its type is, it will say TestClass. If I ask the second item in the list what its type is, it will say DerivedTestClass.
Now one can solve this by casting each item within the list to its own type. But that can require a long list of if statements depending on how many derived types you have. Also, and I suppose this is the heart of the problem, I'm just confused about ever having to cast something to what it already is. If the item is a DerivedTestClass (as evidenced by GetType()), it seems odd that I have to cast it to DerivedTestClass. Can I rely on GetType in these situations? Should it come with an asterisk that says 'well, the memory declared is only enough for the base class, and while this is a derived class, it's currently in the form of a base class'?
So you could say I'm a bit confused and looking for clarification.
Two things:
You aren't overriding Method1, you are hiding it in the derived class, so you can consider these methods completely separate things.
Your list only contains references to the base type, and as such will only call the methods it exposes.
To fix this, change your types to something like this, note the virtual base method and the override in the derived class:
public class BaseType
{
    public virtual void Method1(String typeName)
    //     ^^^^^^^ This
    {
        MessageBox.Show("Base method 1 + Type: " + typeName);
    }
}

public class DerivedType : BaseType
{
    public override void Method1(String typeName)
    //     ^^^^^^^^ And this
    {
        MessageBox.Show("Derived method 1 + Type: " + typeName);
    }
}
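With the virtual/override pair in place, a list typed as the base class dispatches to the runtime type. A console sketch of the same idea (using Console.WriteLine instead of MessageBox so it runs anywhere):

```csharp
using System;
using System.Collections.Generic;

public class BaseType
{
    public virtual void Method1(string typeName) =>
        Console.WriteLine("Base method 1 + Type: " + typeName);
}

public class DerivedType : BaseType
{
    public override void Method1(string typeName) =>
        Console.WriteLine("Derived method 1 + Type: " + typeName);
}

class Program
{
    static void Main()
    {
        var listT = new List<BaseType> { new BaseType(), new DerivedType() };
        foreach (var tc in listT)
        {
            // Virtual dispatch picks the override for the DerivedType element.
            tc.Method1(tc.GetType().Name);
        }
    }
}
```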
Consider the following snippet of code:
using System;

class Base
{
    public virtual void Foo(int x)
    {
        Console.WriteLine("Base.Foo(int)");
    }
}

class Derived : Base
{
    public override void Foo(int x)
    {
        Console.WriteLine("Derived.Foo(int)");
    }

    public void Foo(object o)
    {
        Console.WriteLine("Derived.Foo(object)");
    }
}

public class Program
{
    public static void Main()
    {
        Derived d = new Derived();
        int i = 10;
        d.Foo(i);
    }
}
And the surprising output is:
Derived.Foo(object)
I would expect it to select the overridden Foo(int x) method, since it's more specific. However, the C# compiler picks the non-inherited Foo(object o) version. This also causes a boxing operation.
What is the reason for this behaviour?
This is the rule, and you may not like it...
Quote from Eric Lippert
if any method on a more-derived class is an applicable candidate, it
is automatically better than any method on a less-derived class, even
if the less-derived method has a better signature match.
The reason is that the better-matching method might have been added in a later version, thereby introducing a "brittle base class" failure.
Note: This is a fairly complicated/in-depth part of the C# specs and it jumps all over the place. However, the main parts relevant to the issue you are experiencing read as follows.
Update
And this is why I like Stack Overflow: it is such a great place to learn.
I was quoting the section on the run-time processing of the method call, whereas the question is about compile-time overload resolution.
7.6.5.1 Method invocations
...
The set of candidate methods is reduced to contain only methods from
the most derived types: For each method C.F in the set, where C is the
type in which the method F is declared, all methods declared in a base
type of C are removed from the set. Furthermore, if C is a class type
other than object, all methods declared in an interface type are
removed from the set. (This latter rule only has affect when the
method group was the result of a member lookup on a type parameter
having an effective base class other than object and a non-empty
effective interface set.)
Please see Eric's answer https://stackoverflow.com/a/52670391/1612975 for full details on what's going on here and the appropriate part of the spec.
Original
C# Language Specification, Version 5.0
7.5.5 Function member invocation
...
The run-time processing of a function member invocation consists of
the following steps, where M is the function member and, if M is an
instance member, E is the instance expression:
...
If M is an instance function member declared in a reference-type:
E is evaluated. If this evaluation causes an exception, then no further steps are executed.
The argument list is evaluated as described in §7.5.1.
If the type of E is a value-type, a boxing conversion (§4.3.1) is performed to convert E to type object, and E is considered to be of
type object in the following steps. In this case, M could only be a
member of System.Object.
The value of E is checked to be valid. If the value of E is null, a System.NullReferenceException is thrown and no further steps are
executed.
The function member implementation to invoke is determined:
If the binding-time type of E is an interface, the function member to invoke is the implementation of M provided by the run-time
type of the instance referenced by E. This function member is
determined by applying the interface mapping rules (§13.4.4) to
determine the implementation of M provided by the run-time type of the
instance referenced by E.
Otherwise, if M is a virtual function member, the function member to invoke is the implementation of M provided by the run-time type of
the instance referenced by E. This function member is determined by
applying the rules for determining the most derived implementation
(§10.6.3) of M with respect to the run-time type of the instance
referenced by E.
Otherwise, M is a non-virtual function member, and the function member to invoke is M itself.
After reading the specs, what's interesting is that if you use an interface which describes the method, the compiler will choose the exact overload signature, in turn working as expected:
public interface ITest
{
    void Foo(int x);
}
In regards to the interface, this does make sense when you consider that the overloading behavior was implemented to protect against the brittle base class problem.
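A sketch of that interface approach, reusing the Base/Derived example from the question: when the call goes through ITest, the only candidate is Foo(int), so the override runs and no boxing occurs.

```csharp
using System;

public interface ITest
{
    void Foo(int x);
}

class Base : ITest
{
    public virtual void Foo(int x) => Console.WriteLine("Base.Foo(int)");
}

class Derived : Base
{
    public override void Foo(int x) => Console.WriteLine("Derived.Foo(int)");
    public void Foo(object o) => Console.WriteLine("Derived.Foo(object)");
}

class Program
{
    static void Main()
    {
        ITest d = new Derived();
        // Member lookup happens on ITest, which only declares Foo(int),
        // so the virtual slot resolves to Derived's override.
        d.Foo(10); // prints: Derived.Foo(int)
    }
}
```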
Additional Resources
Eric Lippert, Closer is better
The aspect of overload resolution in C# I want to talk about today is
really the fundamental rule by which one potential overload is judged
to be better than another for a given call site: closer is always
better than farther away. There are a number of ways to characterize
“closeness” in C#. Let’s start with the closest and move our way out:
A method first declared in a derived class is closer than a method first declared in a base class.
A method in a nested class is closer than a method in a containing class.
Any method of the receiving type is closer than any extension method.
An extension method found in a class in a nested namespace is closer than an extension method found in a class in an outer namespace.
An extension method found in a class in the current namespace is closer than an extension method found in a class in a namespace
mentioned by a using directive.
An extension method found in a class in a namespace mentioned in a using directive where the directive is in a nested namespace is closer
than an extension method found in a class in a namespace mentioned in
a using directive where the directive is in an outer namespace.
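One of those bullets, that any method of the receiving type is closer than any extension method, is easy to demonstrate in a short sketch (Widget and Report are illustrative names, not from the original post):

```csharp
using System;

static class WidgetExtensions
{
    // An exact int match, but as an extension method it is "farther away".
    public static void Report(this Widget w, int x) =>
        Console.WriteLine("extension Report(int)");
}

class Widget
{
    // A worse signature match (object), but declared on the receiving type.
    public void Report(object x) => Console.WriteLine("Widget.Report(object)");
}

class Program
{
    static void Main()
    {
        // The instance method wins even though the extension is an exact match:
        // extension methods are only considered when no instance method applies.
        new Widget().Report(42); // prints: Widget.Report(object)
    }
}
```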
The accepted answer is correct (excepting the fact that it quotes the wrong section of the spec) but it explains things from the perspective of the specification rather than giving a justification for why the specification is good.
Let's suppose we have base class B and derived class D. B has a method M that takes Giraffe. Now, remember, by assumption, the author of D knows everything about B's public and protected members. Put another way: the author of D must know more than the author of B, because D was written after B, and D was written to extend B to a scenario not already handled by B. We should therefore trust that the author of D is doing a better job of implementing all functionality of D than the author of B.
If the author of D makes an overload of M that takes an Animal, they are saying: I know better than the author of B how to deal with Animals, and that includes Giraffes. So we should expect overload resolution, when given a call to D.M(Giraffe), to call D.M(Animal), and not B.M(Giraffe).
Let's put this another way: We are given two possible justifications:
A call to D.M(Giraffe) should go to B.M(Giraffe) because Giraffe is more specific than Animal
A call to D.M(Giraffe) should go to D.M(Animal) because D is more specific than B
Both justifications are about specificity, so which justification is better? We're not calling any method on Animal! We're calling the method on D, so that specificity should be the one that wins. The specificity of the receiver is far, far more important than the specificity of any of its parameters. The parameter types are there for tie breaking. The important thing is making sure we choose the most specific receiver because that method was written later by someone with more knowledge of the scenario that D is intended to handle.
Now, you might say, what if the author of D has also overridden B.M(Giraffe)? There are two arguments why a call to D.M(Giraffe) should call D.M(Animal) in this case.
First, the author of D should know that D.M(Animal) can be called with a Giraffe, and it must be written to do the right thing. So it should not matter from the user's perspective whether the call is resolved to D.M(Animal) or B.M(Giraffe), because D has been written correctly to do the right thing.
Second, whether the author of D has overridden a method of B or not is an implementation detail of D, and not part of the public surface area. Put another way: it would be very strange if changing whether or not a method was overridden changes which method is chosen. Imagine if you're calling a method on some base class in one version, and then in the next version the author of the base class makes a minor change to whether a method is overridden or not; you would not expect overload resolution in the derived class to change. C# has been designed carefully to prevent this kind of failure.
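Eric's Animal/Giraffe scenario can be sketched directly (B, D, and M come from the answer; Animal and Giraffe are placeholder types):

```csharp
using System;

class Animal { }
class Giraffe : Animal { }

class B
{
    public virtual void M(Giraffe g) => Console.WriteLine("B.M(Giraffe)");
}

class D : B
{
    // D also overrides B.M(Giraffe), yet the override does not change
    // which method overload resolution picks at compile time.
    public override void M(Giraffe g) => Console.WriteLine("D.M(Giraffe)");
    public void M(Animal a) => Console.WriteLine("D.M(Animal)");
}

class Program
{
    static void Main()
    {
        // D is more specific than B, so D.M(Animal) wins over B.M(Giraffe).
        new D().M(new Giraffe()); // prints: D.M(Animal)
    }
}
```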
The following is an interview question. I came up with a solution, but I'm not sure why it works.
Question:
Without modifying the Sparta class, write some code that makes MakeItReturnFalse return false.
public class Sparta : Place
{
    public bool MakeItReturnFalse()
    {
        return this is Sparta;
    }
}
My solution: (SPOILER)
public class Place
{
    public interface Sparta { }
}
But why does Sparta in MakeItReturnFalse() refer to {namespace}.Place.Sparta instead of {namespace}.Sparta?
But why does Sparta in MakeItReturnFalse() refer to {namespace}.Place.Sparta instead of {namespace}.Sparta?
Basically, because that's what the name lookup rules say. In the C# 5 specification, the relevant naming rules are in section 3.8 ("Namespace and type names").
The first couple of bullets - truncated and annotated - read:
If the namespace-or-type-name is of the form I or of the form I<A1, ..., AK> [so K = 0 in our case]:
If K is zero and the namespace-or-type-name appears within a generic method declaration [nope, no generic methods]
Otherwise, if the namespace-or-type-name appears within a type declaration, then for each instance type T (§10.3.1), starting with the instance type of that type declaration and continuing with the instance type of each enclosing class or struct declaration (if any):
If K is zero and the declaration of T includes a type parameter with name I, then the namespace-or-type-name refers to that type parameter. [Nope]
Otherwise, if the namespace-or-type-name appears within the body of the type declaration, and T or any of its base types contain a nested accessible type having name I and K type parameters, then the namespace-or-type-name refers to that type constructed with the given type arguments. [Bingo!]
If the previous steps were unsuccessful then, for each namespace N, starting with the namespace in which the namespace-or-type-name occurs, continuing with each enclosing namespace (if any), and ending with the global namespace, the following steps are evaluated until an entity is located:
If K is zero and I is the name of a namespace in N, then... [Yes, that would succeed]
So that final bullet point is what picks up the Sparta class if the first bullet doesn't find anything... but when the base class Place defines an interface Sparta, it gets found before we consider the Sparta class.
Note that if you make the nested type Place.Sparta a class rather than an interface, it still compiles and returns false - but the compiler issues a warning because it knows that an instance of Sparta will never be an instance of the class Place.Sparta. Likewise if you keep Place.Sparta an interface but make the Sparta class sealed, you'll get a warning because no Sparta instance could ever implement the interface.
When resolving a name to its value the "closeness" of the definition is used to resolve ambiguities. Whatever definition is "closest" is the one that is chosen.
The interface Sparta is defined within a base class. The class Sparta is defined in the containing namespace. Things defined within a base class are "closer" than things defined in the same namespace.
Beautiful question! I'd like to add a slightly longer explanation for those who don't do C# on a daily basis... because the question is a good reminder of name resolution issues in general.
Take the original code, slightly modified in the following ways:
Let's print out the type names instead of comparing them as in the original expression (i.e. return this is Sparta).
Let's define the interface Athena in the Place superclass to illustrate interface name resolution.
Let's also print out the type name of this as it is bound in the Sparta class, just to make everything very clear.
The code looks like this:
public class Place {
    public interface Athena { }
}

public class Sparta : Place
{
    public void printTypeOfThis()
    {
        Console.WriteLine(this.GetType().Name);
    }

    public void printTypeOfSparta()
    {
        Console.WriteLine(typeof(Sparta));
    }

    public void printTypeOfAthena()
    {
        Console.WriteLine(typeof(Athena));
    }
}
We now create a Sparta object and call the three methods.
public static void Main(string[] args)
{
    Sparta s = new Sparta();
    s.printTypeOfThis();
    s.printTypeOfSparta();
    s.printTypeOfAthena();
}
The output we get is:
Sparta
Sparta
Place+Athena
However, if we modify the Place class and define the interface Sparta:
public class Place {
    public interface Athena { }
    public interface Sparta { }
}
then it is this Sparta -- the interface -- that will be available first to the name lookup mechanism and the output of our code will change to:
Sparta
Place+Sparta
Place+Athena
So we have effectively messed up the type comparison in the MakeItReturnFalse function definition just by defining the Sparta interface in the superclass, which is found first by the name resolution.
But why does C# choose to prioritize interfaces defined in the superclass in name resolution? Jon Skeet knows! And if you read his answer you'll get the details of the name resolution protocol in C#.
I have an abstract class like this:
public abstract class BaseCamera<TCamera> : ICamera where TCamera : ManagedCameraBase
{
    public static uint GetNumberOfCameras()
    {
        using (var bus = new ManagedBusManager())
        {
            bus.RescanBus();
            return bus.GetNumOfCameras();
        }
    }
}
And want to call it like this:
BaseCamera.GetNumberOfCameras()
It makes sense to me, because since this is an abstract class, only the concrete child classes must choose a TCamera, and the base class wants to get the number of all cameras, no matter their type.
However, the compiler does not approve:
Using the Generic type 'BaseCamera' requires 1 type
arguments.
Is there some way around it or do I need to create a new class just for this?
I think it is worth pointing out that ManagedCameraBase is a class from an external API I'm wrapping. Therefore, I do not want to include it in any of my calls for BaseCamera and that is why I'm trying to avoid specifying a type.
because since this is an abstract class, only the concrete children classes must choose a TCamera
That's not how generics work. This has nothing at all to do with the class being abstract. If the class was generic and not abstract you would still need to specify a generic argument in order to call a static method of the class. On top of that, there's nothing to say that a child class can't also be generic. Yours may happen to not be, but there's nothing requiring that to be the case.
Now, in your particular case, the GetNumberOfCameras method doesn't use the generic argument (T) at all, so it doesn't matter what generic argument you provide, you can put in whatever you want and it'll work just fine. Of course, because of that, it's a sign that this method probably doesn't belong in this class; it should probably be in another class that this class also uses.
Here's the problem. The static method GetNumberOfCameras belongs to the class that contains it, but a generic class actually gets compiled into separate classes for each type. So, for example if you had this:
public class Foo<T>
{
    static int foo = 0;

    public static void IncrementFoo()
    {
        foo++;
    }

    public static int GetFoo()
    {
        return foo;
    }
}
And then you did this:
Foo<string>.IncrementFoo();
Console.WriteLine(Foo<string>.GetFoo());
Console.WriteLine(Foo<int>.GetFoo());
You will see that the first call to GetFoo will return one, but the second will return zero. Foo<string>.GetFoo() and Foo<int>.GetFoo() are two separate static methods that belong to two different classes (and access two different fields). So that's why you need a type: otherwise the compiler won't know which static method of which class to call.
What you need is a non-generic base class for your generic class to inherit from. So if you do this:
public class Foo<T> : Foo
{
}

public class Foo
{
    static int foo = 0;

    public static void IncrementFoo()
    {
        foo++;
    }

    public static int GetFoo()
    {
        return foo;
    }
}
Then this:
Foo<string>.IncrementFoo();
Console.WriteLine(Foo<string>.GetFoo());
Console.WriteLine(Foo<int>.GetFoo());
Will give you what you might have expected at first. In other words, both calls to GetFoo will return the same result. And, of course, you don't actually need the type argument anymore and can just do:
Foo.IncrementFoo();
Of course, the alternative is to just move your static methods into an entirely different class if there's no reason for them to be part of BaseCamera.
Well, there are a couple of things here you need to understand better.
First of all, I see a problem with your design. The method you are attempting to stick into this class really has nothing to do with the generic nature of it. In fact, you are instantiating another class to do the job so it really does not belong here at all.
If it actually had something to do with an object that inherits from ManagedCameraBase, the method would probably not need to be static but rather an instance method. You can then decide on the access modifier (public/private) based on usage.
Finally, you need to understand what Generics actually do under the covers. When you use the generic base with a particular type, an underlying specialized type is created for you behind the scenes by the compiler. If you were to use the static method, the compiler would need to know the type you are targeting in order to create the static instance that will serve your call. Because of this, if you call the static method, you must pass a type and you will end up with as many static instances as the types you use to call it (the types must derive from ManagedCameraBase, of course).
As you can see, you should either move that method out to some helper class or something of the sort, or make it a non-static, instance method.
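A sketch of that refactoring: pull the bus scan into a non-generic helper and call it without any type argument. CameraBus is a hypothetical helper name, and the ManagedBusManager below is a stand-in stub for the external API type from the question, included only so the sketch is self-contained.

```csharp
using System;

// Hypothetical stub for the external API type from the question.
public class ManagedBusManager : IDisposable
{
    public void RescanBus() { }
    public uint GetNumOfCameras() => 2; // stubbed value for the sketch
    public void Dispose() { }
}

// Hypothetical non-generic helper: the method never touched TCamera,
// so it can live here and be called without a type argument.
public static class CameraBus
{
    public static uint GetNumberOfCameras()
    {
        using (var bus = new ManagedBusManager())
        {
            bus.RescanBus();
            return bus.GetNumOfCameras();
        }
    }
}

class Program
{
    static void Main()
    {
        // No generic argument needed, and BaseCamera<TCamera> can delegate
        // to this helper if the old entry point must be kept.
        Console.WriteLine(CameraBus.GetNumberOfCameras());
    }
}
```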
Pluggable framework
Imagine a simple pluggable system, which is pretty straightforward using inheritance polymorphism:
We have a graphics rendering system
There are different types of graphics shapes (monochrome, color, etc.) that need rendering
Rendering is done by a data-specific plugin, e.g. a ColorRenderer will render a ColorShape.
Every plugin implements IRenderer, so they can all be stored in an IRenderer[].
On startup, IRenderer[] is populated with a series of specific renderers
When data for a new shape is received, a plugin is chosen from the array based on the type of the shape.
The plugin is then invoked by calling its Render method, passing the shape as its base type.
The Render method is overridden in each descendant class; it casts the Shape back to its descendant type and then renders it.
Hopefully the above is clear-- I think it is a pretty common sort of setup. Very easy with inheritance polymorphism and run-time casting.
Doing it without casting
Now the tricky part. In response to this question, I wanted to come up with a way to do this all without any casting whatsoever. This is tricky because of that IRenderer[] array: to get a plugin out of the array, you would normally need to cast it to a specific type in order to use its type-specific methods, and we can't do that. Now, we could get around that by interacting with a plugin only through its base class members, but part of the requirements was that the renderer must run a type-specific method that takes a type-specific data packet as an argument, and the base class would not be able to do that because there is no way to pass it a type-specific data packet without casting it to the base and then back to the descendant. Tricky.
At first I thought it was impossible, but after a few tries I found I could make it happen by juking the C# generic system. I created an interface that is contravariant with respect to both the plugin type and the shape type, and then used that. Resolution of the renderer is decided by the type-specific Shape. Xyzzy: the contravariant interface makes the cast unnecessary.
Here is the shortest version of the code I could come up with as an example. This compiles and runs and behaves correctly:
public enum ColorDepthEnum { Color = 1, Monochrome = 2 }

public interface IRenderBinding<in TRenderer, in TData> where TRenderer : Renderer
                                                        where TData : Shape
{
    void Render(TData data);
}

abstract public class Shape
{
    abstract public ColorDepthEnum ColorDepth { get; }
    abstract public void Apply(DisplayController controller);
}

public class ColorShape : Shape
{
    public string TypeSpecificString = "[ColorShape]"; //Non-virtual, just to prove a point

    override public ColorDepthEnum ColorDepth { get { return ColorDepthEnum.Color; } }

    public override void Apply(DisplayController controller)
    {
        IRenderBinding<ColorRenderer, ColorShape> renderer = controller.ResolveRenderer<ColorRenderer, ColorShape>(this.ColorDepth);
        renderer.Render(this);
    }
}

public class MonochromeShape : Shape
{
    public string TypeSpecificString = "[MonochromeShape]"; //Non-virtual, just to prove a point

    override public ColorDepthEnum ColorDepth { get { return ColorDepthEnum.Monochrome; } }

    public override void Apply(DisplayController controller)
    {
        IRenderBinding<MonochromeRenderer, MonochromeShape> component = controller.ResolveRenderer<MonochromeRenderer, MonochromeShape>(this.ColorDepth);
        component.Render(this);
    }
}

abstract public class Renderer : IRenderBinding<Renderer, Shape>
{
    public void Render(Shape data)
    {
        Console.WriteLine("Renderer::Render(Shape) called.");
    }
}

public class ColorRenderer : Renderer, IRenderBinding<ColorRenderer, ColorShape>
{
    public void Render(ColorShape data)
    {
        Console.WriteLine("ColorRenderer is now rendering a " + data.TypeSpecificString);
    }
}

public class MonochromeRenderer : Renderer, IRenderBinding<MonochromeRenderer, MonochromeShape>
{
    public void Render(MonochromeShape data)
    {
        Console.WriteLine("MonochromeRenderer is now rendering a " + data.TypeSpecificString);
    }
}

public class DisplayController
{
    private Renderer[] _renderers = new Renderer[10];

    public DisplayController()
    {
        _renderers[(int)ColorDepthEnum.Color] = new ColorRenderer();
        _renderers[(int)ColorDepthEnum.Monochrome] = new MonochromeRenderer();
        //Add more renderer plugins here as needed
    }

    public IRenderBinding<T1, T2> ResolveRenderer<T1, T2>(ColorDepthEnum colorDepth) where T1 : Renderer where T2 : Shape
    {
        IRenderBinding<T1, T2> result = _renderers[(int)colorDepth];
        return result;
    }

    public void OnDataReceived<T>(T data) where T : Shape
    {
        data.Apply(this);
    }
}

static public class Tests
{
    static public void Test1()
    {
        var _displayController = new DisplayController();

        var data1 = new ColorShape();
        _displayController.OnDataReceived<ColorShape>(data1);

        var data2 = new MonochromeShape();
        _displayController.OnDataReceived<MonochromeShape>(data2);
    }
}
If you run Tests.Test1() the output will be:
ColorRenderer is now rendering a [ColorShape]
MonochromeRenderer is now rendering a [MonochromeShape]
Beautiful, it works, right? Then I got to wondering... what if ResolveRenderer returned the wrong type?
Type safe?
According to this MSDN article,
Contravariance, on the other hand, seems counterintuitive....This seems backward, but it is type-safe code that compiles and runs. The code is type-safe because T specifies a parameter type.
I'm thinking: there is no way this is actually type safe.
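For reference, the kind of conversion the article calls type-safe can be boiled down to a tiny sketch using the built-in `Action<in T>` delegate (the `Animal`/`Cat` classes here are hypothetical stand-ins, not part of the code above):

```csharp
using System;

class Animal { }
class Cat : Animal { }

static class ContravarianceDemo
{
    // A handler that can process ANY Animal...
    static void Handle(Animal a) => Console.WriteLine("handled " + a.GetType().Name);

    static void Main()
    {
        Action<Animal> general = Handle;
        // ...can safely stand in where a Cat-handler is expected,
        // because every Cat passed to it is also an Animal.
        Action<Cat> specific = general;   // implicit contravariant conversion
        specific(new Cat());              // prints "handled Cat"
    }
}
```

The conversion is safe because `T` only appears in parameter position: the more general handler can accept everything the more specific one could.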
Introducing a bug that returns the wrong type
So I introduced a bug into the controller so that it mistakenly stores a ColorRenderer where the MonochromeRenderer belongs, like this:
public DisplayController()
{
_renderers[(int)ColorDepthEnum.Color] = new ColorRenderer();
_renderers[(int)ColorDepthEnum.Monochrome] = new ColorRenderer(); //Oops!!!
}
I thought for sure I'd get some sort of type mismatch exception. But no, the program completes, with this mysterious output:
ColorRenderer is now rendering a [ColorShape]
Renderer::Render(Shape) called.
What the...?
My questions:
First,
Why did MonochromeShape::Apply call Renderer::Render(Shape)? It is attempting to call Render(MonochromeShape), which obviously has a different method signature.
The code within the MonochromeShape::Apply method only has a reference to an interface, specifically IRenderBinding&lt;MonochromeRenderer,MonochromeShape&gt;, which only exposes Render(MonochromeShape).
Although Render(Shape) looks similar, it is a different method with a different entry point, and isn't even in the interface being used.
Second,
Since none of the Render methods are virtual (each descendant type introduces a new, non-virtual, non-overridden method with a different, type-specific argument), I would have thought that the entry point was bound at compile time. Are method prototypes within a method group actually chosen at run-time? How could this possibly work without a VMT entry for dispatch? Does it use some sort of reflection?
Third,
Is c# contravariance definitely not type safe? Instead of an invalid cast exception (which at least tells me there is a problem), I get an unexpected behavior. Is there any way to detect problems like this at compile time, or at least to get them to throw an exception instead of doing something unexpected?
OK, first of all, do not write generic types like this. As you have discovered, it rapidly becomes a huge mess. Never do this:
class Animal {}
class Turtle : Animal {}
class BunchOfAnimals : IEnumerable<Animal> {}
class BunchOfTurtles : BunchOfAnimals, IEnumerable<Turtle> {}
OH THE PAIN. Now we have two paths by which to get an IEnumerable<Animal> from a BunchOfTurtles: Either ask the base class for its implementation, or ask the derived class for its implementation of the IEnumerable<Turtle> and then covariantly convert it to IEnumerable<Animal>. The consequences are: you can ask a bunch of turtles for a sequence of animals, and giraffes can come out. That's not a contradiction; all the capabilities of the base class are present in the derived class, and that includes generating a sequence of giraffes when asked.
Let me re-emphasize this point so that it is very clear. This pattern can create in some cases implementation-defined situations where it becomes impossible to determine statically what method will actually be called. In some odd corner cases, you can actually have the order in which the methods appear in the source code be the deciding factor at runtime. Just don't go there.
For more on this fascinating topic I encourage you to read all the comments to my 2007 blog post on the subject: https://blogs.msdn.microsoft.com/ericlippert/2007/11/09/covariance-and-contravariance-in-c-part-ten-dealing-with-ambiguity/
Now, in your specific case everything is nicely well defined, it's just not defined as you think it ought to be.
To start with: why is this typesafe?
IRenderBinding<MonochromeRenderer, MonochromeShape> component = new ColorRenderer();
Because you said it should be. Work it out from the point of view of the compiler.
A ColorRenderer is a Renderer
A Renderer is a IRenderBinding<Renderer, Shape>
IRenderBinding is contravariant in both its parameters, so it may always be made to have a more specific type argument.
Therefore a Renderer is an IRenderBinding<MonochromeRenderer, MonochromeShape>
Therefore the conversion is valid.
Done.
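That four-step chain can be written out as explicit assignments. Here is a minimal, self-contained sketch of the relevant part of the question's type lattice (the generic constraints are omitted for brevity); every line compiles without a cast:

```csharp
using System;

// Stripped-down stand-ins for the question's types.
interface IRenderBinding<in TRenderer, in TData>
{
    void Render(TData data);
}

class Shape { }
class MonochromeShape : Shape { }

class Renderer : IRenderBinding<Renderer, Shape>
{
    public void Render(Shape data) => Console.WriteLine("Renderer::Render(Shape)");
}

class ColorRenderer : Renderer { }
class MonochromeRenderer : Renderer { }

static class ConversionChain
{
    static void Main()
    {
        ColorRenderer cr = new ColorRenderer();       // 1. A ColorRenderer...
        Renderer r = cr;                              // 2. ...is a Renderer...
        IRenderBinding<Renderer, Shape> general = r;  // 3. ...which is an IRenderBinding<Renderer, Shape>...
        // 4. ...which, being contravariant in both parameters, is also an
        //    IRenderBinding<MonochromeRenderer, MonochromeShape>.
        IRenderBinding<MonochromeRenderer, MonochromeShape> specific = general;
        specific.Render(new MonochromeShape());       // prints "Renderer::Render(Shape)"
    }
}
```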
So why is Renderer::Render(Shape) called here?
component.Render(this);
You ask:
Since none of the Render methods are virtual (each descendant type introduces a new, non-virtual, non-overridden method with a different, type-specific argument), I would have thought that the entry point was bound at compile time. Are method prototypes within a method group actually chosen at run-time? How could this possibly work without a VMT entry for dispatch? Does it use some sort of reflection?
Let's go through it.
component is of compile-time type IRenderBinding<MonochromeRenderer, MonochromeShape>.
this is of compile-time type MonochromeShape.
So we are calling whatever method implements IRenderBinding<MonochromeRenderer, MonochromeShape>.Render(MonochromeShape) on a ColorRenderer.
The runtime must figure out which interface is actually meant. ColorRenderer implements IRenderBinding<ColorRenderer, ColorShape> directly and IRenderBinding<Renderer, Shape> via its base class. The former is not compatible with IRenderBinding<MonochromeRenderer, MonochromeShape>, but the latter is.
So the runtime deduces that you meant the latter, and executes the call as though it were IRenderBinding<Renderer, Shape>.Render(Shape).
So which method does that call? Your class implements IRenderBinding<Renderer, Shape>.Render(Shape) on the base class so that's the one that's called.
Remember, interfaces define "slots", one per method. When the object is created, each interface slot is filled with a method. The slot for IRenderBinding<Renderer, Shape>.Render(Shape) is filled with the base class version, and the slot for IRenderBinding<ColorRenderer, ColorShape>.Render(ColorShape) is filled with the derived class version. You chose the slot from the former, so you get the contents of that slot.
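The slot mechanism can be observed directly. A stripped-down sketch (single-parameter stand-ins for the question's types, with string returns so the chosen slot is visible): the same object answers differently depending on which interface slot you go through:

```csharp
using System;

interface IRenderBinding<in TData>
{
    string Render(TData data);
}

class Shape { }
class ColorShape : Shape { }

class Renderer : IRenderBinding<Shape>
{
    public string Render(Shape data) => "base slot";
}

class ColorRenderer : Renderer, IRenderBinding<ColorShape>
{
    public string Render(ColorShape data) => "derived slot";
}

static class SlotDemo
{
    static void Main()
    {
        var cr = new ColorRenderer();

        // The IRenderBinding<Shape> slot was filled by the base class method...
        IRenderBinding<Shape> baseSlot = cr;
        Console.WriteLine(baseSlot.Render(new ColorShape()));     // base slot

        // ...and the IRenderBinding<ColorShape> slot by the derived class method.
        IRenderBinding<ColorShape> derivedSlot = cr;
        Console.WriteLine(derivedSlot.Render(new ColorShape()));  // derived slot
    }
}
```

Both calls pass the same argument to the same object; only the slot you dispatch through differs.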
Is c# contravariance definitely not type safe?
I promise you it is type safe. As you should have noticed: every conversion you made without a cast was legal, and every method you called was called with something of a type that it expected. You never invoked a method of ColorShape with a this referring to a MonochromeShape, for instance.
Instead of an invalid cast exception (which at least tells me there is a problem), I get an unexpected behavior.
No, you get entirely expected behaviour. You just have created a type lattice that is extraordinarily confusing, and you don't have a sufficient level of understanding of the type system to understand the code you wrote. Don't do that.
Is there any way to detect problems like this at compile time, or at least to get them to throw an exception instead of doing something unexpected?
Don't write code like that in the first place. Never implement two versions of the same interface such that they may unify via covariant or contravariant conversions. It is nothing but pain and confusion. And similarly, never implement an interface with methods that unify under generic substitution. (For example, interface IFoo&lt;T&gt; { void M(int x); void M(T t); } class Foo : IFoo&lt;int&gt; { uh oh } )
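To spell out that last hazard as a sketch (the `Calls` field is added purely for illustration): once `T` is substituted with `int`, the interface's two methods unify into one signature, and a single method body ends up serving both slots:

```csharp
interface IFoo<T>
{
    void M(int x);
    void M(T t);
}

// With T = int, IFoo<int> contains two methods that unify into the single
// signature M(int). This compiles: one method body fills both interface
// slots, and the source no longer says which declaration it satisfies.
class Foo : IFoo<int>
{
    public int Calls;                 // instrumentation, just to observe calls
    public void M(int x) { Calls++; } // serves both unified slots
}
```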
I considered adding a warning to that effect, but it was difficult to see how to turn off the warning in the rare cases where it is desirable. Warnings that can only be turned off with pragmas are poor warnings.
First. MonochromeShape::Apply call Renderer::Render(Shape) because of the following:
IRenderBinding<ColorRenderer, ColorShape> x1 = new ColorRenderer();
IRenderBinding<Renderer, Shape> x2 = new ColorRenderer();
// fails - cannot convert IRenderBinding<ColorRenderer, ColorShape> to IRenderBinding<MonochromeRenderer, MonochromeShape>
IRenderBinding<MonochromeRenderer, MonochromeShape> c1 = x1;
// works, because you can convert IRenderBinding&lt;Renderer, Shape&gt; to IRenderBinding&lt;MonochromeRenderer, MonochromeShape&gt;
IRenderBinding<MonochromeRenderer, MonochromeShape> c2 = x2;
So in short: ColorRenderer inherits from Renderer, which in turn implements IRenderBinding&lt;Renderer, Shape&gt;. That interface is what allows a ColorRenderer to be implicitly converted to IRenderBinding&lt;MonochromeRenderer, MonochromeShape&gt;. Since the interface is implemented by the Renderer class, it is not surprising that Renderer.Render is called when you call MonochromeShape::Apply. The fact that you pass an instance of MonochromeShape and not Shape is not a problem precisely because TData is contravariant.
About your second question: dispatch through an interface is virtual by definition. In fact, if a method implements a method from an interface, it is marked as virtual in the IL even when it isn't declared virtual in C#. Consider this:
class Test : ITest {
public void DoStuff() {
}
}
public class Test2 {
public void DoStuff() {
}
}
interface ITest {
void DoStuff();
}
Method Test.DoStuff has the following signature in IL (note the virtual keyword):
.method public final hidebysig virtual newslot instance void
DoStuff() cil managed
Method Test2.DoStuff is just:
.method public hidebysig instance void
DoStuff() cil managed
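You don't need to disassemble to see this; reflection exposes the same flag. A small sketch using `MethodInfo.IsVirtual` on the two classes above:

```csharp
using System;
using System.Reflection;

interface ITest { void DoStuff(); }

class Test : ITest            // implements the interface: DoStuff is emitted virtual
{
    public void DoStuff() { }
}

class Test2                   // no interface: DoStuff is a plain instance method
{
    public void DoStuff() { }
}

static class VirtualCheck
{
    static void Main()
    {
        // The interface implementation carries the IL virtual flag...
        Console.WriteLine(typeof(Test).GetMethod("DoStuff").IsVirtual);   // True
        // ...while the otherwise identical method does not.
        Console.WriteLine(typeof(Test2).GetMethod("DoStuff").IsVirtual);  // False
    }
}
```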
As for the third question, I think it's clear from the above that the code behaves as specified, and it is type-safe precisely because no invalid casts are possible.