Why does this program output Generic Value and not Hello world!:
using System;

class Example
{
    public static void Print<T>(T value)
    {
        Console.WriteLine("Generic Value");
    }

    public static void Print(string value)
    {
        Console.WriteLine(value);
    }

    public static void GenericFunc<T>(T value)
    {
        Print(value);
    }

    static void Main()
    {
        GenericFunc("Hello world!");
    }
}
How is the generic method's type parameter resolved under the hood in C#?
Overload resolution is only done at compile-time.
Since GenericFunc<T> doesn't know whether T is a string or something else at compile-time, it can only ever use the Print<T>(T value) "overload".
Using dynamic, you can change this to a dynamic dispatch, and get the behaviour you expect:
Print((dynamic)value);
This makes the overload resolution happen at runtime, with the actual runtime type of value.
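A minimal sketch of both behaviours side by side (same names as the question, but returning strings instead of printing so the two results are easy to compare):

```csharp
using System;

class Example
{
    public static string Print<T>(T value) => "Generic Value";
    public static string Print(string value) => value;

    // Bound at compile time: T is an open type here, so this always
    // calls Print<T>, even when T turns out to be string.
    public static string GenericFunc<T>(T value) => Print(value);

    // Bound at run time: the dynamic cast defers overload resolution
    // to the runtime binder, which sees the actual type of value.
    public static string DynamicFunc<T>(T value) => Print((dynamic)value);

    static void Main()
    {
        Console.WriteLine(GenericFunc("Hello world!")); // Generic Value
        Console.WriteLine(DynamicFunc("Hello world!")); // Hello world!
    }
}
```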
Simple answer to a simple question
The other answer explains how generics are bound (at compile time). But it doesn't address the OOP angle, good practice, or simply why you should never write this code in the first place.
OOP
The first O in OOP stands for Object, and there are no objects here: only static methods.
Responsibility
Let's think about the generic version of the method as a method responsible for printing a set of different possible types. The string type is part of that set, so it should be handled by the generic version of your Print function.
public static void Print<T>(T value)
{
    Console.WriteLine(value.ToString());
}
Then you have the problem of null values for reference types:
public static void Print<T>(T value) where T : class
{
    if (value != null)
    {
        Console.WriteLine(value.ToString());
    }
}

public static void GenericFunc<T>(T value) where T : class
{
    Print(value);
}
And this is for those who are aware of why you should not use dynamic except in a few specific cases (see my answer on that).
More clean OOP solution
Now imagine you have different objects to print. Each object should be responsible for knowing how to display itself. Firstly, because this eases encapsulation by not leaking internal data to the external world. Secondly, because there is an inherent coupling between the internal data and the printing function, so both should live in the same place: inside the class. That's the purpose of the ToString function.
Let's step back a little...
Now, we could imagine that it's not a Print function but something else.
We have a hierarchy of classes that overload the same function (let's call it Foo), and a collection of instances of these classes on which you must call Foo. Let's then make all these classes implement the IFooCallable interface:
public interface IFooCallable
{
    void Foo();
}
A little more complex...
OK, but imagine that there is no common way to process all these instances, because the classes are very different.
Let's call in the Visitor pattern. It's commonly used when you want to analyze an object tree in which each node is very different (as in an AST).
It's a well-known pattern, which makes it easy to share this knowledge with your team.
You have the Visitor:
public class Visitor : IVisitor
{
    public void Visit(Foo foo)
    {
        // do something with foo
    }

    public void Visit(Bar bar)
    {
        // do something with bar
    }
}
and the Visitable:
public class Foo : IVisitable
{
    public void Accept(IVisitor visitor)
    {
        visitor.Visit(this);
    }
}
Moreover, this pattern is reusable (you could write several implementations of IVisitor should you need to).
I don't buy the dynamic thing, especially when there are cleaner, faster alternatives. If dynamic is so great, why not write this then ;)
public static void Print(dynamic value)
{
    Console.WriteLine(value);
}

public static void GenericFunc(dynamic value)
{
    Print(value);
}

static void Main(dynamic[] args)
{
    GenericFunc((dynamic)"Hello World");
}
Related
When I run the following code I get RuntimeBinderException: 'object' does not contain a definition for 'SetIt'.
public interface IInput { }

public interface IThing<in TInput>
{
    void SetIt(TInput value);
}

public class ThingMaker
{
    private class Thing<TInput> : IThing<TInput>
        where TInput : IInput
    {
        public void SetIt(TInput value) { }
    }

    private class Input : IInput { }

    public IThing<IInput> GetThing()
    {
        return new Thing<IInput>();
    }

    public IInput GetInput()
    {
        return new Input();
    }
}

class Program
{
    static void Main(string[] args)
    {
        var thingMaker = new ThingMaker();
        var input = thingMaker.GetInput();
        dynamic thing = thingMaker.GetThing();
        thing.SetIt(input);
    }
}
If I switch thing to var it works fine, so clearly thing has SetIt. How come this fails when I use dynamic? Shouldn't anything that works with var also work when it is dynamic?
I think it has something to do with Thing<TInput> being a private inner class because it works when I make it public.
Here is a simpler example of your problem:
class A
{
    class B
    {
        public void M() { }
    }

    public static object GetB() { return new B(); }
}

class Program
{
    static void Main(string[] args)
    {
        dynamic o = A.GetB();
        o.M();
    }
}
The generics and interfaces are just a distraction.
The problem is that the type B (or in your case, Thing<IInput>) is a private class, and so the call site cannot resolve to that actual type. Recall that dynamic is simply an object variable for which compilation has been postponed until run-time. It isn't intended to let you do things at run-time you wouldn't otherwise be able to do, and "wouldn't otherwise be able to do" is judged based on the accessible type at the call site (which in this case is object).
As far as the runtime binder is concerned, the type is simply object (hence the error message telling you that object doesn't contain the member you want).
Of course, it is theoretically possible that dynamic could have been implemented differently, and could do a more in-depth search for valid types it could treat the object as for the purpose of binding members to the object (i.e. implemented interfaces rather than just the object's actual type). But that's just not how it was implemented, and it would have been much more costly to do so (both in terms of the original design and implementation, and of course in terms of run-time cost of code that uses dynamic).
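A sketch of the workaround this implies, using the question's own types: cast the dynamic value back to the accessible public interface, so binding happens against IThing<IInput> rather than the private Thing<IInput>.

```csharp
using System;

public interface IInput { }

public interface IThing<in TInput>
{
    void SetIt(TInput value);
}

public class ThingMaker
{
    private class Thing<TInput> : IThing<TInput> where TInput : IInput
    {
        public void SetIt(TInput value) { }
    }

    private class Input : IInput { }

    public IThing<IInput> GetThing() => new Thing<IInput>();
    public IInput GetInput() => new Input();
}

class Program
{
    static void Main()
    {
        var thingMaker = new ThingMaker();
        var input = thingMaker.GetInput();

        dynamic thing = thingMaker.GetThing();
        // thing.SetIt(input);  // RuntimeBinderException: the binder sees only 'object'

        // Casting back to the public interface binds the call statically
        // against IThing<IInput>, which is accessible at the call site.
        ((IThing<IInput>)thing).SetIt(input);
        Console.WriteLine("SetIt succeeded");
    }
}
```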
Related reading (arguably duplicates):
Is this a bug in dynamic?
Is this a hole in dynamic binding in C# 4?
Is there a way to modify the behavior of a static method at runtime?
For example, say I have this class:
public class Utility
{
    public static void DoSomething(string data)
    {
        //...
    }
}
Is there a way to do something like this:
typeof(Utility).SetMethod("DoSomething", (data) => { /*Do something else...*/ });
Such that if you call Utility.DoSomething it executes the new code?
What you want to do is pass the behavior you want as another parameter into the function.
public static void DoSomething(string data, Action<string> operation)
{
    operation(data);
}
This is an oversimplified example, of course. What you actually wind up doing in your own code is going to depend on what operation actually does.
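For instance (a hypothetical usage sketch; the lambdas stand in for whatever operation actually does):

```csharp
using System;

public static class Utility
{
    // The caller supplies the behaviour instead of it being hard-coded.
    public static void DoSomething(string data, Action<string> operation)
    {
        operation(data);
    }
}

class Demo
{
    static void Main()
    {
        // Same method, two different behaviours chosen by the caller.
        Utility.DoSomething("hello", s => Console.WriteLine(s.ToUpper()));
        Utility.DoSomething("hello", s => Console.WriteLine(s.Length));
    }
}
```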
If you're trying to modify the behavior of an existing, compiled, in-production method, and cannot overload or override the method in the usual ways, the only way I know of to do that is CIL Rewriting, possibly using an Aspect Weaver.
Sure.
public class Utility
{
    public static Action<string> _DoSomething;

    public static void DoSomething(string data)
    {
        if (_DoSomething != null)
        {
            _DoSomething(data); // forward the argument to the replacement
            return;
        }
        // default behavior here.
    }
}
And to mask the default behavior:
Utility._DoSomething = (data) => { /* do something else */ };
I don't see why you wouldn't just create a new class that inherits from Utility and define a new function that does what you want.
public class Program
{
    static void Main(string[] args)
    {
        if (true)
        {
            Utility.DoSomething("TEST");
        }
        else
        {
            Util1.DoSomething("TEST");
        }
    }
}

public class Utility
{
    public static void DoSomething(string data)
    {
        //Perform some action
    }
}

abstract class Util1 : Utility
{
    public static new void DoSomething(string data)
    {
        //Perform a different action
    }
}
I think although it is possible to do this, you should ask yourself: "Why do I need this functionality?" Usually a method stays as is and does what it is supposed to do according to its interface, which is given by its name and signature. So while you can add additional logic by adding an Action<T> parameter to your signature, you should ask yourself whether this won't break the contract of the interface and therefore what the method was designed for.
Having said this, you should consider either overriding your method, if the functionality you need is some kind of "doing the same thing differently than the parent class", or extending it by adding a dependency to your consuming class and adding some methods to that class that extend the functionality provided by the contained class (see also favour composition over inheritance):
class MyClass
{
    Utility MyUtility;

    void ExtraMethod() { /* ... */ }
}
EDIT: As you're using a static method, overriding is off the table. However, IMO that sounds like a big design flaw.
Folks
I came across many threads trying to understand polymorphism (both compile time and run time). I was surprised to see some links where programmers claim overloading is runtime and overriding is compile time.
What I want to know from here is:
Runtime polymorphism with a REAL TIME example, small code, and what scenario we should use it in.
Compile time polymorphism with a REAL TIME example, small code, and when to use it.
Because I read many theoretical definitions, but I am not satisfied in understanding that.
Also, I gave it a thought, and felt overloading should be runtime because, say I have a method that calculates Area; only at runtime does it decide which overloaded method to call based on the parameters I pass (say if I pass only one parameter, it should fire Square, and if there are 2 parameters, it should fire Rectangle)... So can't I claim it's runtime? How is it compile time? (Most say theoretically overloading is compile time, but they don't even give a correct REAL TIME example... very few claim it's runtime.)
Also, I feel overriding is compile time because, while you write code and compile, you ensure you used the virtual keyword and also that you override that method in the derived class, which otherwise would give you a compile time error. So I feel it's compile time, the same as I saw claimed in one thread... But most threads claim it's runtime :D
I am confused :( This question is in addition to my questions 1 and 2. Please help with a real time example, as I am already aware of the theoretical definitions... :(
Thank you....
In the case of Overloading, you are using static (compile-time) polymorphism because the compiler is aware of exactly which method you are calling. For example:
public static class Test
{
    static void Main(string[] args)
    {
        Foo();
        Foo("test");
    }

    public static void Foo()
    {
        Console.WriteLine("No message supplied");
    }

    public static void Foo(string message)
    {
        Console.WriteLine(message);
    }
}
In this case, the compiler knows exactly which Foo() method we are calling, based on the number/type of parameters.
Overriding is an example of dynamic (runtime) polymorphism. This is due to the fact that the compiler doesn't necessarily know what type of object is being passed in at compile-time. Suppose you have the following classes in a library:
public static class MessagePrinter
{
    public static void PrintMessage(IMessage message)
    {
        Console.WriteLine(message.GetMessage());
    }
}

public interface IMessage
{
    string GetMessage();
}

public class XMLMessage : IMessage
{
    public string GetMessage()
    {
        return "This is an XML Message";
    }
}

public class SOAPMessage : IMessage
{
    public string GetMessage()
    {
        return "This is a SOAP Message";
    }
}
At compile time, you don't know if the caller of that function is passing in an XMLMessage, a SOAPMessage, or possibly another type of IMessage defined elsewhere. When the PrintMessage() function is called, it determines which version of GetMessage() to use at runtime, based on the type of IMessage that is passed in.
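A runnable condensation of the above (same types, bodies shortened to expression form) showing the run-time choice at a single call site:

```csharp
using System;

public interface IMessage
{
    string GetMessage();
}

public class XMLMessage : IMessage
{
    public string GetMessage() => "This is an XML Message";
}

public class SOAPMessage : IMessage
{
    public string GetMessage() => "This is a SOAP Message";
}

public static class MessagePrinter
{
    public static void PrintMessage(IMessage message)
    {
        // Which GetMessage runs is decided by the runtime type of message.
        Console.WriteLine(message.GetMessage());
    }
}

class Demo
{
    static void Main()
    {
        MessagePrinter.PrintMessage(new XMLMessage());  // This is an XML Message
        MessagePrinter.PrintMessage(new SOAPMessage()); // This is a SOAP Message
    }
}
```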
Read : Polymorphism (C# Programming Guide)
Similar answer : Compile Time and run time Polymorphism
Well, there are two types of polymorphism, as stated below:
Static polymorphism (early binding)
Dynamic polymorphism (late binding)
Static Polymorphism (Early Binding):
Static polymorphism is also known as early binding and compile time polymorphism. Method overloading and operator overloading are examples of it.
It is known as early binding because the compiler is aware of the functions with the same name, and which overloaded function is to be called is known at compile time.
For example:
public class Test
{
    public Test()
    {
    }

    public int add(int no1, int no2)
    {
        return no1 + no2;
    }

    public int add(int no1, int no2, int no3)
    {
        return no1 + no2 + no3;
    }
}

class Program
{
    static void Main(string[] args)
    {
        Test tst = new Test();
        int sum = tst.add(10, 20);
        // Here the compiler knows at compile time that it needs to call
        // add(int no1, int no2); hence it is called early binding, and
        // since the choice is fixed, it is also called static binding.
    }
}
Dynamic Polymorphism (Late Binding):
public class Animal
{
    public virtual void MakeSound()
    {
        Console.WriteLine("Animal sound");
    }
}

public class Dog : Animal
{
    public override void MakeSound()
    {
        Console.WriteLine("Dog sound");
    }
}

class Program
{
    static void Main(string[] args)
    {
        Animal an = new Dog();
        an.MakeSound();
        Console.ReadLine();
    }
}
In the above code, the call to MakeSound, like any other call to a virtual method, is compiled to a callvirt IL instruction. This means that the actual method that gets called is determined at run-time (unless the JIT can optimize some special case), but the compiler has checked that the method exists, it has chosen the most appropriate overload (if any), and it has the guarantee that the function pointer will exist at a well-defined location in the vtable of the type (even though that is an implementation detail). The process of resolving the virtual call is extremely fast (you only need to dereference a few pointers), so it doesn't make much of a difference.
public class Animal
{
    public virtual void MakeSound()
    {
        Console.WriteLine("Animal sound");
    }
}

public class Dog : Animal
{
    public override void MakeSound()
    {
        Console.WriteLine("Dog sound");
    }
}

class Program
{
    static void Main(string[] args)
    {
        Animal an = new Dog();
        an.MakeSound();
        Console.ReadLine();
    }
}
This is dynamic polymorphism, since it is decided at runtime which version of MakeSound will be called, the parent's or the child's. A child may not override the parent function, or may override it, but all of this is decided at runtime.
I am trying to optimize a certain part of my code, which happens to be in a tight performance loop. Mainly I am trying to learn new things which I can apply in the future. My implementation is very lengthy, so I will give a general example of what I am trying to achieve.
My question relates to this: C# 'is' operator performance, and especially to the chosen answer.
Say I have a class A. I also have a class B, which is derived from A. I have a list of type A (which contains a mix of A and B types). In a method where I process these items, I would like to achieve a certain behaviour based on the actual type of the object (not sure if this is the correct way of saying it. Please correct me wherever I say something wrong).
void Process(A item)
{
    // Check the most derived type first: "item is A" is also true for a B,
    // so testing A first would make the B branch unreachable.
    if (item is B)
    {
        DoBehaviour((B)item);
    }
    else if (item is A)
    {
        DoBehaviour((A)item); // I know the cast is redundant here; I'm just
                              // leaving it here for my explanation.
    }
}

void DoBehaviour(A item)
{
    //perform necessary behaviour for type A
}

void DoBehaviour(B item)
{
    //perform necessary behaviour for type B
}
This is the way I currently do it. Note that I iterate over a list of type A, which contains A's and B's. Also, if you feel I did not provide enough code to clarify the situation, I'll gladly expand.
In the question I posted above: C# 'is' operator performance, I have learnt that I can rather change the structure to use an "as" operator, and completely get rid of the explicit cast.
B bItem = item as B;
if (bItem != null)
{
    DoBehaviour(bItem);
}
This is all good, however, in actuality I do not just have an A and a B, I also have a C, a D, and so on, all deriving from base class A. This will lead to many of these if statements, and they would have to be nested for best performance:
B bItem = item as B;
if (bItem != null)
{
    DoBehaviour(bItem);
}
else
{
    C cItem = item as C;
    if (cItem != null)
    {
        DoBehaviour(cItem);
    }
    else
    {
        //and so on.
    }
}
Now this is ugly. I like writing neat, elegant code, yet I am exceptionally bad at doing it (which often leads me to waste time trying to make things look a little better).
I hope this question is not too broad, but firstly I would like to know if there is a more optimal and clean solution for dispatching on the type so that the relevant behaviour is performed. If not, is there a cleaner way to use these 'as' operators than nesting them like this?
I suppose one alternative would be to move the behaviour into the base class A and then override it in each derived class. However, thinking at a higher level, the behaviour in this particular case of mine is not a behaviour of the class A (or its children); rather, it is some external class acting/behaving on it (and behaving differently for each type). If there is no better way to do it, I will strongly consider implementing it as I have explained, but I would like some expert opinions on this.
I tried to keep this short, and may have left too much detail out. Let me know if this is the case.
I would strongly suggest that you avoid the "if..else if..else if.." path by programming to interfaces instead of referencing concrete classes.
To achieve this, first make the Process() method ignorant of the type of its parameter. Probably the parameter will end up being an interface like IDoSomething.
Next, implement Process() so that it won't call DoSomething() directly. You'll have to break DoSomething() in smaller chunks of code which will be moved into specific implementations of IDoSomething methods. The Process() method will blindly call these methods -- in other words, applying the IDoSomething contract to some data.
This could be tiresome the more convoluted DoSomething() is, but you'll have a much better separation of concerns, and you will "open" Process() to any IDoSomething-compatible type without writing even one more else.
Isn't that what polymorphism is all about? A method that has different behavior depending on the type. And I'm fairly sure this would be faster than a "type switch".
And if you need to, you can also use function overloading (for your external processing), see the test program below:
using System;
using System.Collections.Generic;

public class A
{
    public String Value { get; set; }

    public A()
    {
        Value = "A's value";
    }

    public virtual void Process()
    {
        // Do algorithm for type A
        Console.WriteLine("In A.Process()");
    }
}

public class B : A
{
    public int Health { get; set; }

    public B()
    {
        Value = "B's value";
        Health = 100;
    }

    public override void Process()
    {
        // Do algorithm for type B
        Console.WriteLine("In B.Process()");
    }
}

public static class Manager
{
    // Does internal processing
    public static void ProcessInternal(List<A> items)
    {
        foreach (dynamic item in items)
        {
            item.Process(); // Calls A.Process() or B.Process() depending on type
            ProcessExternal(item);
        }
    }

    public static void ProcessExternal(A a)
    {
        Console.WriteLine(a.Value);
    }

    public static void ProcessExternal(B b)
    {
        Console.WriteLine(b.Health);
    }

    public static void Main(String[] args)
    {
        List<A> objects = new List<A>();
        objects.Add(new A());
        objects.Add(new B());
        ProcessInternal(objects);
    }
}
Note that this will only work with .NET 4.0 and later!
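Incidentally, if C# 7.0 or later is available (newer than this thread), type patterns in a switch give a flatter alternative to both the nested 'as' checks and dynamic. A minimal sketch (A, B, C and DoBehavior are stand-ins for the question's types; the methods return strings here so the dispatch result is visible):

```csharp
using System;

class A { }
class B : A { }
class C : A { }

static class Dispatcher
{
    static string DoBehavior(A a) => "A behaviour";
    static string DoBehavior(B b) => "B behaviour";
    static string DoBehavior(C c) => "C behaviour";

    // Type patterns (C# 7.0+) flatten the nested 'as'/null checks.
    // Derived cases must still come before the base case; the compiler
    // rejects a pattern made unreachable by an earlier one.
    public static string Process(A item)
    {
        switch (item)
        {
            case B b: return DoBehavior(b);
            case C c: return DoBehavior(c);
            default: return DoBehavior(item);
        }
    }

    static void Main()
    {
        Console.WriteLine(Process(new B())); // B behaviour
        Console.WriteLine(Process(new C())); // C behaviour
        Console.WriteLine(Process(new A())); // A behaviour
    }
}
```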
The best solution I found for this situation is to use the double-dispatch/Visitor pattern. I describe a situation where the base class A is abstract, and the concrete classes B and C inherit from A. Also, by making the DoBehavior method in the base class A abstract, we force ourselves to implement it wherever we need it. So if we expand this to add more types, we won't forget to add their DoBehavior methods (it seems unlikely that one would forget, but this behaviour may be insignificant to the rest of the new type you add and may be overlooked, especially if there are many of these behaviour patterns).
interface IVisitor
{
    void DoBehavior(B item);
    void DoBehavior(C item);
}

abstract class A
{
    public abstract void DoBehavior(IVisitor visitor);
}

class B : A
{
    public override void DoBehavior(IVisitor visitor)
    {
        // can do some internal behavior here
        visitor.DoBehavior(this); // external processing
    }
}

class C : A
{
    public override void DoBehavior(IVisitor visitor)
    {
        // can do some internal behavior here
        visitor.DoBehavior(this); // external processing
    }
}

class Manager : IVisitor // (or executor or whatever: the external processing class)
{
    public void ProcessAll(List<A> items)
    {
        foreach (A item in items)
        {
            item.DoBehavior(this);
        }
    }

    public void DoBehavior(B item)
    {
    }

    public void DoBehavior(C item)
    {
    }
}
Thanks for contributing, everyone. Learnt a lot and got some good ideas from you all (it's worth it to read all the answers if you face a similar situation).
One simple solution would be to add a field in the base class specifying the class type.
class A
{
    // Alternative
    string typeName = this.GetType().Name;
    public virtual string TypeName { get { return typeName; } }

    public virtual string GetTypeName() { return "A"; }
}

class B : A
{
    public override string GetTypeName() { return "B"; }
}

class C : A
{
    public override string GetTypeName() { return "C"; }
}

class Executer
{
    void ExecuteCommand(A val)
    {
        Console.WriteLine(val.GetType().Name);
        switch (val.GetTypeName())
        {
            case "A": DoSomethingA(val as A); break;
            case "B": DoSomethingB(val as B); break;
            case "C": DoSomethingC(val as C); break;
        }
    }

    private void DoSomethingC(C c)
    {
        throw new NotImplementedException();
    }

    private void DoSomethingB(B b)
    {
        throw new NotImplementedException();
    }

    private void DoSomethingA(A a)
    {
        throw new NotImplementedException();
    }
}
You don't really need to use strings, but I prefer that option to using an integer for the simple reason that you can't declare two classes with the same name in the same namespace. Therefore, if you always return the class name, you have an automatic anti-conflict mechanism.
What is the best way to implement polymorphic behavior in classes that I can't modify? I currently have some code like:
if (obj is ClassA) {
    // ...
} else if (obj is ClassB) {
    // ...
} else if ...
The obvious answer is to add a virtual method to the base class, but unfortunately the code is in a different assembly and I can't modify it. Is there a better way to handle this than the ugly and slow code above?
Hmmm... seems more suited to Adapter.
public interface ITheInterfaceYouNeed
{
    void DoWhatYouWant();
}

public class MyA : ITheInterfaceYouNeed
{
    protected ClassA _actualA;

    public MyA(ClassA actualA)
    {
        _actualA = actualA;
    }

    public void DoWhatYouWant()
    {
        _actualA.DoWhatADoes();
    }
}

public class MyB : ITheInterfaceYouNeed
{
    protected ClassB _actualB;

    public MyB(ClassB actualB)
    {
        _actualB = actualB;
    }

    public void DoWhatYouWant()
    {
        _actualB.DoWhatBDoes();
    }
}
Seems like a lot of code, but it will make the client code a lot closer to what you want. Plus it'll give you a chance to think about what interface you're actually using.
Check out the Visitor pattern. This lets you come close to adding virtual methods to a class without changing the class. You need to use an extension method with a dynamic cast if the base class you're working with doesn't have a Visit method. Here's some sample code:
public class Main
{
    public static void Example()
    {
        Base a = new GirlChild();
        var v = new Visitor();
        a.Visit(v);
    }
}

static class Ext
{
    public static void Visit(this object b, Visitor v)
    {
        ((dynamic)v).Visit((dynamic)b);
    }
}

public class Visitor
{
    public void Visit(Base b)
    {
        throw new NotImplementedException();
    }

    public void Visit(BoyChild b)
    {
        Console.WriteLine("It's a boy!");
    }

    public void Visit(GirlChild g)
    {
        Console.WriteLine("It's a girl!");
    }
}

// Below this line are the classes you don't have to change.
public class Base
{
}

public class BoyChild : Base
{
}

public class GirlChild : Base
{
}
I would say that the standard approach here is to wrap the class you want to "inherit" in a protected instance variable and then emulate all the non-private members (methods/properties/events/etc.) of the wrapped class in your container class. You can then mark this class and its appropriate members as virtual so that you can use standard polymorphism features with it.
Here's an example of what I mean. ClosedClass is the class contained in the assembly whose code to which you have no access.
public class WrapperClass : IClosedClassInterface1, IClosedClassInterface2
{
    protected ClosedClass inner;

    public WrapperClass()
    {
        inner = new ClosedClass();
    }

    public virtual void Method1()
    {
        inner.Method1();
    }

    public virtual void Method2()
    {
        inner.Method2();
    }
}
If whatever assembly you are referencing were designed well, then all the types/members that you might ever want to access would be marked appropriately (abstract, virtual, sealed), but unfortunately this is not always the case (sometimes you even experience this issue with the Base Class Library). In my opinion, the wrapper class is the way to go here. It does have its benefits (even when the class from which you want to derive is inheritable), namely removing/changing the modifiers of methods you don't want the user of your class to have access to. The ReadOnlyCollection<T> in the BCL is a pretty good example of this.
Take a look at the Decorator pattern. Noldorin actually explained it without giving the name of the pattern.
Decorator is the way of extending behavior without inheriting. The only thing I would change in Noldorin's code is the fact that the constructor should receive an instance of the object you are decorating.
Extension methods provide an easy way to add additional method signatures to existing classes. This requires .NET Framework 3.5 or later.
Create a static utility class and add something like this:
public static void DoSomething(this ClassA obj, int param1, string param2)
{
    //do something
}
Add a using directive for the utility class's namespace, and this method will appear as a member of ClassA. You can overload existing methods or create new ones this way.
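A self-contained sketch (ClassA here is a hypothetical stand-in for the closed class; the return value is made up for illustration):

```csharp
using System;

// Stand-in for the class you can't modify.
public class ClassA { }

public static class ClassAExtensions
{
    // 'this' on the first parameter turns a static method into an
    // extension method on ClassA.
    public static string DoSomething(this ClassA obj, int param1, string param2)
    {
        return param2 + ":" + param1;
    }
}

class Demo
{
    static void Main()
    {
        var a = new ClassA();
        // Reads like an instance member, but ClassA itself is untouched.
        Console.WriteLine(a.DoSomething(42, "x")); // x:42
    }
}
```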