What is the difference between method hiding and shadowing in C#? Are they the same or different? Can we call them polymorphism (compile time or run time)?
What is the difference between method hiding and shadowing in C#?
Shadowing is another commonly used term for hiding. The C# specification only uses "hiding" but either is acceptable.
You call out just "method hiding" but there are forms of hiding other than method hiding. For example:
namespace N
{
    class D {}

    class C
    {
        class N
        {
            class D
            {
                N.D nd; // Which N.D does this refer to?
            }
        }
    }
}
Here the nested class N hides the namespace N when inside D.
Can we call them polymorphism (compile time or run time)?
Method hiding can be used for polymorphism, yes. You can even mix method hiding with method overriding; it is legal to introduce a new virtual method by hiding an old virtual method; in that case which virtual method is chosen depends on the compile-time and run-time type of the receiver. Doing that is very confusing and you should avoid it if possible.
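To make that concrete, here is a minimal sketch (class and method names are mine) of a new virtual method hiding an old virtual one; the result depends on both the compile-time type of the reference and the run-time type of the object:

```csharp
using System;

class A
{
    public virtual string M() => "A.M";
}

class B : A
{
    public override string M() => "B.M";
}

class C : B
{
    // Introduces a brand-new virtual slot that hides the A.M/B.M chain.
    public new virtual string M() => "C.M";
}

class D : C
{
    // Overrides the slot introduced by C, not the original one from A.
    public override string M() => "D.M";
}

class Program
{
    static void Main()
    {
        D d = new D();
        Console.WriteLine(((A)d).M()); // B.M — dispatched through the original slot
        Console.WriteLine(((C)d).M()); // D.M — dispatched through the new slot
    }
}
```

The same object answers differently depending on which virtual chain the compile-time type selects, which is exactly why this pattern is confusing.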
The VB.NET compiler calls it shadowing, in C# it is called hiding. Calling it shadowing in C# is a spill-over from VB.
And it produces a compiler warning; it is essentially a name conflict between the base and derived class.
Can we call them polymorphism (compile time or run time)?
It certainly is not a form of runtime polymorphism. A call to a hiding or to a hidden method is resolved at compile time, which is why it will in general not be called or considered polymorphism.
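A short sketch (names mine) showing that compile-time resolution: the same object gives different answers depending only on the static type of the reference:

```csharp
using System;

class BasePrinter
{
    public string Name() => "base";
}

class DerivedPrinter : BasePrinter
{
    public new string Name() => "derived"; // hides, does not override
}

class Program
{
    static void Main()
    {
        DerivedPrinter d = new DerivedPrinter();
        BasePrinter b = d; // same object, different static type

        Console.WriteLine(d.Name()); // derived
        Console.WriteLine(b.Name()); // base — the call was bound at compile time
    }
}
```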
The two terms mean the same in C#.
Method hiding == shadowing.
You can use this as a form of polymorphism - when you don't want the base class method to be visible/usable through the inheriting class.
A shadowing method is completely decoupled from the base class - it is a new method. The term hiding is used because it has an identical signature to the one of the base class and is "hiding" it - it breaks the inheritance chain.
Name hiding of C# (new modifier) is called shadowing in VB.NET (keyword Shadows).
This can be thought of as polymorphism only in the sense that overriding is a "polymorphism", i.e. static or compile-time. It is not a polymorphism in the classical sense of calling virtual functions.
They are just two different words for the same thing, but differ in the context where you most often use them. Typically, what is called "hiding" is related to polymorphism but what is called "shadowing" is not.
In C# parlance, when you say "hiding" you're usually talking about inheritance, where a more derived method "hides" a base-class method from the normal inherited method call chain.
When you say "shadow" you're usually talking about scope: an identifier in an inner scope is "shadowing" an identifier at a higher scope. In other languages, what is called "hiding" in C# is sometimes called "shadowing" as well.
Both are compile-time concepts; they describe what object a given identifier refers to in a given context when the compiler goes to bind it.
public class A
{
    public int B;

    public int C()
    {
        return this.B;
    }
}

public class D : A
{
    public int X;

    public new decimal C()
    {
        var X = 1.0m;
        return X;
    }
}
Method D.C() "hides" method A.C(). Since A.C() is not virtual, a call through an A reference always resolves to A.C(); we don't want that through a D reference, we want D.C(). Obviously this is something you should try to avoid, because it's confusing, especially if you start up-casting your D's to A's, but it exists if you need it. Also, note that method hiding is automatic: without the new keyword here, D.C() still hides A.C(), but we get a warning because usually that's not what you want. The new keyword just makes it clear that it really is what we want.
Local variable X in D.C() shadows class member D.X within the scope of D.C() only. In this case, there are two things in scope that could legitimately be called X and the compiler needs rules to tell it which one you mean. The "more local" X shadows the "less local" D.X so that's what we get.
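Here is a runnable variant of that example (return types adjusted so it compiles, and field initializers added so the output is concrete):

```csharp
using System;

public class A
{
    public int B = 42;

    public int C()
    {
        return this.B;
    }
}

public class D : A
{
    public int X = 10;

    public new decimal C()
    {
        var X = 1.0m; // shadows the field D.X inside this method only
        return X;
    }
}

class Program
{
    static void Main()
    {
        D d = new D();
        Console.WriteLine(d.C());      // 1.0 — D.C hides A.C
        Console.WriteLine(((A)d).C()); // 42 — the up-cast binds to A.C at compile time
    }
}
```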
Related
Consider the following snippet of code:
using System;
class Base
{
public virtual void Foo(int x)
{
Console.WriteLine("Base.Foo(int)");
}
}
class Derived : Base
{
public override void Foo(int x)
{
Console.WriteLine("Derived.Foo(int)");
}
public void Foo(object o)
{
Console.WriteLine("Derived.Foo(object)");
}
}
public class Program
{
public static void Main()
{
Derived d = new Derived();
int i = 10;
d.Foo(i);
}
}
And the surprising output is:
Derived.Foo(object)
I would expect it to select the overridden Foo(int x) method, since it's more specific. However, the C# compiler picks the non-inherited Foo(object o) version. This also causes a boxing operation.
What is the reason for this behaviour?
This is the rule, and you may not like it...
Quote from Eric Lippert
if any method on a more-derived class is an applicable candidate, it
is automatically better than any method on a less-derived class, even
if the less-derived method has a better signature match.
The reason is that the method (the one with the better signature match) might have been added in a later version, thereby introducing a "brittle base class" failure.
Note: This is a fairly complicated/in-depth part of the C# specs, and it jumps all over the place. However, the main parts of the issue you are experiencing are written as follows.
Update
And this is why I like Stack Overflow; it is such a great place to learn.
I was quoting the section on the run-time processing of the method call, whereas the question is about compile-time overload resolution.
7.6.5.1 Method invocations
...
The set of candidate methods is reduced to contain only methods from
the most derived types: For each method C.F in the set, where C is the
type in which the method F is declared, all methods declared in a base
type of C are removed from the set. Furthermore, if C is a class type
other than object, all methods declared in an interface type are
removed from the set. (This latter rule only has effect when the
method group was the result of a member lookup on a type parameter
having an effective base class other than object and a non-empty
effective interface set.)
Please see Eric's answer https://stackoverflow.com/a/52670391/1612975 for full detail on what's going on here and the appropriate part of the specs.
Original
C#
Language Specification
Version 5.0
7.5.5 Function member invocation
...
The run-time processing of a function member invocation consists of
the following steps, where M is the function member and, if M is an
instance member, E is the instance expression:
...
If M is an instance function member declared in a reference-type:
E is evaluated. If this evaluation causes an exception, then no further steps are executed.
The argument list is evaluated as described in §7.5.1.
If the type of E is a value-type, a boxing conversion (§4.3.1) is performed to convert E to type object, and E is considered to be of
type object in the following steps. In this case, M could only be a
member of System.Object.
The value of E is checked to be valid. If the value of E is null, a System.NullReferenceException is thrown and no further steps are
executed.
The function member implementation to invoke is determined:
If the binding-time type of E is an interface, the function member to invoke is the implementation of M provided by the run-time
type of the instance referenced by E. This function member is
determined by applying the interface mapping rules (§13.4.4) to
determine the implementation of M provided by the run-time type of the
instance referenced by E.
Otherwise, if M is a virtual function member, the function member to invoke is the implementation of M provided by the run-time type of
the instance referenced by E. This function member is determined by
applying the rules for determining the most derived implementation
(§10.6.3) of M with respect to the run-time type of the instance
referenced by E.
Otherwise, M is a non-virtual function member, and the function member to invoke is M itself.
After reading the specs, what's interesting is that if you use an interface which describes the method, the compiler will choose the matching overload, which in turn works as expected.
public interface ITest
{
void Foo(int x);
}
As regards the interface, it does make sense when you consider that the overloading behavior was implemented to protect against the brittle base class problem.
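A checkable variant of that interface trick (here Foo returns the string it would print, so the result is easy to verify):

```csharp
using System;

public interface ITest
{
    string Foo(int x);
}

class Base
{
    public virtual string Foo(int x) => "Base.Foo(int)";
}

class Derived : Base, ITest
{
    public override string Foo(int x) => "Derived.Foo(int)";
    public string Foo(object o) => "Derived.Foo(object)";
}

class Program
{
    static void Main()
    {
        Derived d = new Derived();
        int i = 10;

        // The derived-class overload still wins on a Derived receiver...
        Console.WriteLine(d.Foo(i));          // Derived.Foo(object)
        // ...but through the interface only Foo(int) is visible.
        Console.WriteLine(((ITest)d).Foo(i)); // Derived.Foo(int)
    }
}
```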
Additional Resources
Eric Lippert, Closer is better
The aspect of overload resolution in C# I want to talk about today is
really the fundamental rule by which one potential overload is judged
to be better than another for a given call site: closer is always
better than farther away. There are a number of ways to characterize
“closeness” in C#. Let’s start with the closest and move our way out:
A method first declared in a derived class is closer than a method first declared in a base class.
A method in a nested class is closer than a method in a containing class.
Any method of the receiving type is closer than any extension method.
An extension method found in a class in a nested namespace is closer than an extension method found in a class in an outer namespace.
An extension method found in a class in the current namespace is closer than an extension method found in a class in a namespace
mentioned by a using directive.
An extension method found in a class in a namespace mentioned in a using directive where the directive is in a nested namespace is closer
than an extension method found in a class in a namespace mentioned in
a using directive where the directive is in an outer namespace.
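The third rule in the list is easy to demonstrate (the extension class below is hypothetical): an extension method can never beat an applicable instance method of the receiving type, even with an identical signature:

```csharp
using System;

static class StringExtensions
{
    // Same name and effective signature as the instance method string.Trim().
    public static string Trim(this string s) => "extension";
}

class Program
{
    static void Main()
    {
        // The instance method is "closer", so the extension is never chosen here.
        Console.WriteLine("  hi  ".Trim()); // hi

        // It can still be invoked explicitly as a static method.
        Console.WriteLine(StringExtensions.Trim("  hi  ")); // extension
    }
}
```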
The accepted answer is correct (excepting the fact that it quotes the wrong section of the spec) but it explains things from the perspective of the specification rather than giving a justification for why the specification is good.
Let's suppose we have base class B and derived class D. B has a method M that takes Giraffe. Now, remember, by assumption, the author of D knows everything about B's public and protected members. Put another way: the author of D must know more than the author of B, because D was written after B, and D was written to extend B to a scenario not already handled by B. We should therefore trust that the author of D is doing a better job of implementing all functionality of D than the author of B.
If the author of D makes an overload of M that takes an Animal, they are saying "I know better than the author of B how to deal with Animals, and that includes Giraffes." So we should expect overload resolution, when given a call to D.M(Giraffe), to call D.M(Animal), and not B.M(Giraffe).
Let's put this another way: We are given two possible justifications:
A call to D.M(Giraffe) should go to B.M(Giraffe) because Giraffe is more specific than Animal
A call to D.M(Giraffe) should go to D.M(Animal) because D is more specific than B
Both justifications are about specificity, so which justification is better? We're not calling any method on Animal! We're calling the method on D, so that specificity should be the one that wins. The specificity of the receiver is far, far more important than the specificity of any of its parameters. The parameter types are there for tie breaking. The important thing is making sure we choose the most specific receiver because that method was written later by someone with more knowledge of the scenario that D is intended to handle.
Now, you might say, what if the author of D has also overridden B.M(Giraffe)? There are two arguments why a call to D.M(Giraffe) should call D.M(Animal) in this case.
First, the author of D should know that D.M(Animal) can be called with a Giraffe, and it must be written to do the right thing. So it should not matter from the user's perspective whether the call is resolved to D.M(Animal) or B.M(Giraffe), because D has been written correctly to do the right thing.
Second, whether the author of D has overridden a method of B or not is an implementation detail of D, and not part of the public surface area. Put another way: it would be very strange if changing whether or not a method was overridden changes which method is chosen. Imagine if you're calling a method on some base class in one version, and then in the next version the author of the base class makes a minor change to whether a method is overridden or not; you would not expect overload resolution in the derived class to change. C# has been designed carefully to prevent this kind of failure.
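A sketch of that scenario (the Giraffe/Animal classes are stubs): even when D overrides B.M(Giraffe), overload resolution on a D receiver still picks D.M(Animal), because an override counts as declared by the base class:

```csharp
using System;

class Animal { }
class Giraffe : Animal { }

class B
{
    public virtual string M(Giraffe g) => "B.M(Giraffe)";
}

class D : B
{
    // For overload resolution, an override is treated as if declared in B.
    public override string M(Giraffe g) => "override of B.M(Giraffe)";

    public string M(Animal a) => "D.M(Animal)";
}

class Program
{
    static void Main()
    {
        D d = new D();
        Console.WriteLine(d.M(new Giraffe())); // D.M(Animal)
    }
}
```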
I have been confused by the following code
class A
{
public void Abc(int q)
{
Console.Write("A");
}
}
class B : A
{
public void Abc(double p)
{
Console.Write("B");
}
}
...
var b = new B();
b.Abc((int)1);
The result of code execution is "B" written to console.
In fact, the B class contains two overloads of the Abc method, the first for an int parameter, the second for a double. Why does the compiler use the double version for an integer argument?
Be careful: the method Abc(double) doesn't shadow or override the method Abc(int).
Since the compiler can implicitly convert the int to double, it chooses the B.Abc method. This is explained in this post by Jon Skeet (search for "implicit"):
The target of the method call is an expression of type Child, so the
compiler first looks at the Child class. There's only one method
there, and it's applicable (there's an implicit conversion from int to
double) so that's the one that gets picked. The compiler doesn't
consider the Parent method at all.
The reason for this is to reduce the risk of the brittle base class
problem...
More from Eric Lippert
As the standard says, “methods in a base class are not candidates if any method in a derived class is applicable”.
In other words, the overload resolution algorithm starts by searching
the class for an applicable method. If it finds one then all the other
applicable methods in deeper base classes are removed from the
candidate set for overload resolution. Since Delta.Frob(float) is
applicable, Charlie.Frob(int) is never even considered as a candidate.
Only if no applicable candidates are found in the most derived type do
we start looking at its base class.
Things get a little more interesting if we extend the example in your question with this additional class that descends from A:
class C : A {
public void Abc(byte b) {
Console.Write("C");
}
}
If we execute the following code
var b = new B();
var c = new C();
int i = 1;
b.Abc((int)1);
b.Abc(i);
c.Abc((int)1);
c.Abc(i);
the results are BBCA. This is because in the case of the B class, the compiler knows it can implicitly cast any int to double. In the case of the C class, the compiler knows it can cast the literal int 1 to a byte (because the value 1 fits in a byte) so C's Abc method gets used. The compiler, however, can't implicitly cast any old int to a byte, so c.Abc(i) can't use C's Abc method. It must use the parent class in that case.
This page on Implicit Numeric Conversions shows a compact table of which numeric types have implicit conversions to other numeric types.
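The distinction between constants and variables in those conversions can be shown in a few lines:

```csharp
using System;

class Program
{
    static void Main()
    {
        byte b1 = 1;       // OK: the constant 1 fits in a byte
        int i = 1;
        // byte b2 = i;    // compile error: no implicit int -> byte conversion
        byte b2 = (byte)i; // a variable needs an explicit cast

        double d = i;      // OK: the implicit int -> double conversion always exists
        Console.WriteLine(b1 + b2 + d); // 3
    }
}
```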
You get the same functionality even when you define B as:
class B : A
{
public void Abc(object p)
{
Console.Write("B");
}
}
Simply, it's because overload resolution is done by looking at methods defined in the current class. If there are any suitable methods in the current class, it stops looking. Only if there are no suitable matches does it look at base classes.
You can take a look at the Overload resolution spec for a detailed explanation.
Different languages (such as C++, Java, or C#) have vastly different overload resolution rules. In C#, the overload was correctly chosen as per the language spec. If you wanted the other overload to be chosen, you have a choice. Remember this:
When a derived class intends to declare another overload for an inherited method, so as to treat all available overloads as equal-rights peers, it must also explicitly override all the inherited overloads with a call to base.
What is the language design benefit of requiring this exercise?
Imagine that you are using a 3rd party library (say, .NET framework) and deriving from one of its classes. At some point you introduce a private method called Abc (a new, unique name, not an overload of anything). Two years later you upgrade the 3rd party library version without noticing that they also added a method, accessible to you and called, regrettably, Abc, except that it has a different parameter type somewhere (so the upgrade doesn't alert you with a compile time error) and it behaves subtly differently or maybe even has a different purpose altogether. Do you really want one half of your private calls to Abc to be silently redirected to the 3rd party Abc? In Java, this may happen. In C# or C++, this isn't going to happen.
The upside of the C# way is that it's somewhat easier, for a redistributed library, to add functionality while rigorously keeping backward compatibility. In two ways actually:
You won't ever mess with your customers' private method calls inside their own code.
You won't ever break your customers by adding a new uniquely named method, although you must still think twice before adding an overload of YOUR own existing method.
The downside of the C# way is that it cuts a hole in the OOP philosophy that overriding methods should only ever change the implementation, not the API, of a class.
Why is it necessary to use the new keyword when a method shall be hidden?
I have two classes:
public class Parent
{
public void Print()
{
Console.WriteLine("Parent");
}
}
public class Child : Parent
{
public void Print()
{
Console.WriteLine("Child");
}
}
Following code produces the given output:
Parent sut = new Child();
sut.Print();
Output: Parent
Child sut = new Child();
sut.Print();
Output: Child
I understand that this might be a problem if the hiding was not intended, but is there any other reason to use "new" (except avoiding warnings)?
Edit:
Maybe I was not clear. It is the same situation as:
public void foo(Parent p)
{
p.Print();
}
which is called:
Child c = new Child();
foo(c);
No, but avoiding the warnings is incredibly important. The bugs caused by hiding can be insidious, and after C++ (which just let you do it), Java and C# chose two different ways to prevent them.
In Java, it is not possible to hide a base class's methods. The compiler will scream bloody murder if you try to do it.
In C#, there is the "new" keyword. It allows you to be explicit that you want to hide a method, and the compiler will warn you if you are not using it. Many developers make it a rule to have code compile with no warnings, because warnings can be a sign of bad code.
After all that, I have never had to use hiding. I would carefully consider why I was even considering using it, since its behavior is strange and almost never what I want.
If you are trying to use inheritance, you should make the method virtual in the base class, and override it in the child class.
The language designers have decided that when the virtual and override keywords are not used, method hiding should be done explicitly with the new keyword.
Hiding is a bad idea. It makes code more confusing, and your intent is unclear. Programming languages are about expressing your intent to both the compiler and to other programmers. It is not clear without a new keyword whether your intent was to override or to hide, or if you were just ignorant of the base method.
There's also an element of foolproofing. "Shoot-self in foot. Are you sure? OK/Cancel".
The real question is why hiding is allowed in the first place. The problem with not allowing hiding is that if you derive from a class in another assembly and add a Foo() method, and then that other assembly is updated to add a Foo() method, disallowing hiding would break your code.
Expanding on what Jon Hanna said...
The point of the warning is to tell you there is a name clash you might not be aware of. When you get the warning, you should do one of the following:
Add virtual and override to polymorphically override the method from the base class
Rename your method so its name no longer clashes with the method in the base class
Add new to make clear your intention to hide the method
If you are extending or refining the behaviour of the method in the base class, use virtual and override, as others have said here.
If you have just written your method for the first time and discover you have a name clash you weren't aware of and it is not your intention to override, simply rename your method.
Option 1 or 2 is usually preferable. So when should you resort to option 3?
You use option 3 when the base class is not your code. You don't have the option to add a virtual to it. The typical scenario where you need the new goes like this:
You bought a third party library. It has an interesting class. You inherit from the class and add a new method that the original author didn't provide. So far, there's no hiding or overriding involved. Now you receive a version 2 of the library, with some new features you want to use. The authors have added a new method to their class, and its name clashes with a method you wrote in your derived class.
If your method is not used very much, you should rename it out of the way, option 2. But if there are many dependencies on your method, it would be very disruptive to rename it. So you add the new to say that there is no logical connection between your method and the one in the base class, even though they happen to have the same name. You don't have the ability to add a virtual to the method in the base class, nor do you want to do that. The two methods were designed by different developers and your method doesn't refine or extend the one in the base class - when you wrote yours, the one in the base class didn't exist.
So, it's rare that you need the new keyword, but when you do, it's important.
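The upgrade scenario above might look like this (all names are hypothetical):

```csharp
using System;

// Version 2 of a vendor class that gained a Frobnicate method
// your derived class already had.
public class VendorWidget
{
    public string Frobnicate() => "vendor behavior";
}

public class MyWidget : VendorWidget
{
    // Written against version 1, before the vendor's method existed.
    // 'new' records that the clash is known and intentional.
    public new string Frobnicate() => "my behavior";
}

class Program
{
    static void Main()
    {
        MyWidget w = new MyWidget();
        Console.WriteLine(w.Frobnicate());                 // my behavior
        Console.WriteLine(((VendorWidget)w).Frobnicate()); // vendor behavior
    }
}
```

Callers who depended on your method keep getting it; callers working through the vendor's type get the vendor's.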
Because it makes it explicit that you are deliberately hiding the parent's method, rather than overriding it. There is a warning when you don't use new because your subclass's method may have a typo in it, and accidentally hiding a parent method can lead to some subtle bugs.
You will have to use the new keyword if the method with the same signature in the base class isn't marked with abstract or virtual.
Just mark the Parent's method virtual. Then you would do public override void Print() in your child class.
public class Parent
{
public virtual void Print()
{
Console.WriteLine("Parent");
}
}
public class Child : Parent
{
public override void Print()
{
Console.WriteLine("Child");
}
}
This way you get real inheritance, and you can call the parent's Print() method from the Child by calling base.Print().
Why is it necessary to use the new keyword when a method shall be hidden?
It is not necessary. The only function of new here is to suppress the warning.
The question could be: Why is it not an error to hide base class members?
The answer to the addition to your question
Maybe I was not clear. It is the same situation as:

public void foo(Parent p)
{
    p.Print();
}

which is called:

Child c = new Child();
foo(c);
The answer is that foo(c) will output Parent. The method that hides the parent method will not execute if you are calling through the parent class as in the example. To get the output of Child, you will need to use virtual and override as described in earlier answers.
You can check the details of why method hiding is allowed, and its scope, here:
https://msdn.microsoft.com/en-us/library/aa691135(v=vs.71).aspx
Hi, I have a class "A" with an abstract method
protected abstract List<Contributor> GetContributors(List<SyndicationPerson> contributersList);
I want to override this method in derived class "B" with the following condition:
It should be private to class B.
The compiler does not allow me to declare this method as private in derived class "B".
What is the correct syntax?
You can't. That would violate the accessibility level declared in class A. Aside from anything else, it would stop it from being callable by class A! What would you expect to happen if code in class A tries to call the abstract method which you'd somehow overridden with a private implementation?
You can make the main implementation private and then create a protected method which just calls the private one, if you really want to.
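A sketch of that wrapper pattern (with string standing in for Contributor/SyndicationPerson, and a public Publish method added so the base class has a caller):

```csharp
using System;
using System.Collections.Generic;

public abstract class A
{
    // The base class drives the protected hook.
    public List<string> Publish(List<string> people) => GetContributors(people);

    protected abstract List<string> GetContributors(List<string> people);
}

public class B : A
{
    // The real work lives in a private method...
    private List<string> GetContributorsCore(List<string> people)
        => people.ConvertAll(p => p.ToUpperInvariant());

    // ...and the protected override just forwards to it.
    protected override List<string> GetContributors(List<string> people)
        => GetContributorsCore(people);
}
```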
Why do you want to make the method private in the first place, when it's designed to be callable from A?
EDIT: Okay, now you've explained in your comment what you want to do, you can't do it. The closest you can come is to pass a delegate to A's constructor - that delegate can refer to a private method. Unfortunately, you can't use "this" when you pass arguments in constructor chains, so you're forced to do something horrible such as writing a static method which takes "this" as its first parameter, effectively... except that it will have to cast it to the right type as well, as the parent can't declare which type it should be. The parent would then call the delegate instead of the protected method.
Note that this would also prevent further derived classes from "overriding" further, which may or may not be desirable.
It's incredibly tortuous, and I'd try to avoid it wherever possible. If you're really worried about what derived classes might do, I'd try to seal the class instead and force people to use composition instead of inheritance. The language doesn't really help you do what you want to here.
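For completeness, the tortuous delegate workaround described above might be sketched like this (names mine): the base class calls a delegate instead of a protected method, and the delegate wraps a static method that receives the instance and casts it:

```csharp
using System;

public abstract class A
{
    private readonly Func<A, string> _getName;

    // The derived class supplies the "override" as a delegate.
    protected A(Func<A, string> getName) => _getName = getName;

    public string Describe() => "name: " + _getName(this);
}

public class B : A
{
    // 'this' cannot appear in a constructor-initializer argument,
    // so the lambda forwards to a static method taking the instance.
    public B() : base(self => GetNameCore((B)self)) { }

    // Effectively a private "override", invisible outside B.
    private static string GetNameCore(B self) => "B";
}

class Program
{
    static void Main()
    {
        Console.WriteLine(new B().Describe()); // name: B
    }
}
```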
As a general OOP rule, one cannot reduce the visibility of a member when overriding. So going from protected to private is not allowed.
You cannot change access level of virtual methods in C#. You can redeclare it by using the new keyword, but this is a redeclaration that hides the previous method, so it will not work as you expect if you're dealing with objects that differ in declared and actual type, or if you routinely cast objects to a base class type.
I've found loads of practical examples of this, and understand the practical output when overriding or hiding methods, but I'm looking for some under the covers info on why this is and why C# allows it when according to the rules of polymorphism, this shouldn't be allowed - at least, insofar as my understanding of polymorphism goes (which seems to coincide with the standard definitions found on Wikipedia/Webopedia).
class Base
{
public virtual void PrintName()
{
Console.WriteLine("BaseClass");
}
}
class FirstDerived : Base
{
public override void PrintName()
{
Console.WriteLine("FirstDerived");
}
}
class SecondDerived : Base
{
public new void PrintName()
{
Console.WriteLine("SecondDerived");
}
}
Using the following code:
FirstDerived b = new FirstDerived();
Base a = b;
b.PrintName();
a.PrintName();
I get:
FirstDerived
FirstDerived
Okay, I get that, makes sense.
SecondDerived c = new SecondDerived();
Base a = c;
c.PrintName();
a.PrintName();
I get:
SecondDerived
BaseClass
Okay, that makes sense too: reference a can't see c.PrintName(), so it uses the Base method to print the base name. However, I can cast my instance to its true type using:
((SecondDerived)a).PrintName();
or
(a as SecondDerived).PrintName();
to get the output I would expect:
SecondDerived
So what is going on under the covers, and what does this mean in terms of polymorphism? I'm told that this facility "breaks polymorphism" - and I guess according to the definition, it does. Is that right? Would an "object oriented" language like C# really allow you to break one of the core principles of OOP?
(This answers the "why is it allowed" which I think is really the central point of your question. How it works in terms of the IL is less interesting to my mind... let me know if you want me to go into that though. Basically it's just a case of specifying the method to call with a different type token.)
It allows base classes to evolve without breaking derived classes.
Suppose Base didn't originally have the PrintName method. The only way to get at SecondDerived.PrintName would be to have an expression with a static type of SecondDerived, and call it on that. You ship your product, everything is fine.
Now fast forward to Base introducing a PrintName method. This may or may not have the same semantics of SecondDerived.PrintName - it's safest to assume that it doesn't.
Any callers of Base.PrintName know that they're calling the new method - they couldn't have called it before. Any callers which were previously using SecondDerived.PrintName still want to use it though - they don't want to suddenly end up calling Base.PrintName which could do something entirely different.
The difficulty is new callers of SecondDerived.PrintName, who may or may not appreciate that this isn't an override of Base.PrintName. They may be able to notice this from the documentation of course, but it may not be obvious. However, at least we haven't broken existing code.
When SecondDerived is recompiled, though, the authors will be made aware via a warning that there's now a Base.PrintName method. They can either stick to their existing non-virtual scheme by adding the new modifier, or make it override the Base.PrintName method. Until they make that decision, they'll keep getting a warning.
Versioning and compatibility isn't usually mentioned in OO theory in my experience, but C# has been designed to try to avoid compatibility nightmares. It doesn't solve the problem completely, but it does a pretty good job.
I answer "how" it works. Jon has answered the "Why" part.
Calls to virtual methods are resolved a bit differently from those to non-virtual ones. Basically, a virtual method declaration introduces a "virtual method slot" in the base class. The slot holds a pointer to the actual method definition (its contents will point to an overridden version in derived classes; no new slot is created for an override). When the compiler generates code for a virtual method call, it uses the callvirt IL instruction, specifying the method slot to call. The runtime dispatches the call to the appropriate method. On the other hand, a non-virtual method is called with a call IL instruction, which is statically resolved to the actual method by the compiler at compile time (using only the compile-time type of the variable). The new modifier does nothing in the compiled code. It essentially tells the C# compiler "Dude, shut up! I'm sure I'm doing the right thing" and turns off the compiler warning.
A new method (actually, any method without an override modifier) will introduce a completely separate chain of methods (new method slot). Note that a new method can be virtual itself. The compiler will look at the static type of the variable when it wants to resolve the method chain and the run time will choose the actual method in that specific chain.
According to the Wikipedia definition:
Type polymorphism in object-oriented
programming is the ability of one
type, A, to appear as and be used like
another type, B
Later on the same page:
Method overriding is where a subclass
replaces the implementation of one or
more of its parent's methods. Neither
method overloading nor method
overriding are by themselves
implementations of polymorphism.
The fact that SecondDerived does not provide an override for the PrintName does not affect its ability to appear and be used as Base. The new method implementation it provides will not be used anywhere an instance of SecondDerived is treated as an instance of the Base; it will be used only when that instance is explicitly used as an instance of SecondDerived.
Moreover, SecondDerived can actually explicitly implement Base.PrintName in addition to the new hiding implementation, thus providing its own implementation that will be used when it is treated as Base. (Though Base would have to be an interface definition, or derive from one, to allow this.)