Logically, the methods in question should be abstract, but they live on a parent form that gets inherited from, and the Visual Studio designer has fits if they are declared abstract.
OK, so I made the bodies throw a NotImplementedException. ReSharper flags that, and I'm not one to tolerate a warning like that in the code.
Is there an elegant answer to this, or do I have to put up with some ugliness? Currently I am doing:
protected virtual void SaveCurrentItem()
{
    Trace.Assert(false, "Only the children of FormCore.SaveCurrentItem should be called");
}

protected virtual void SetItem()
{
    Trace.Assert(false, "Only the children of FormCore.SetItem should be called");
}
The class itself should never be instantiated, only its children. However, Visual Studio insists on creating one when you look at the designer of one of its children.
You might consider creating a nested, protected interface. For example:
protected interface IManageItems
{
    void SaveCurrentItem();
    void SetItem();
}
Each class that inherits from FormCore could individually implement the interface. Then you wouldn't have the risk of calling the base class implementation because there wouldn't be any.
To call the methods from your base class:
(this as IManageItems)?.SaveCurrentItem();
This would have the effect of making the methods act as if they were virtual without having an initial declaration in the parent class. If you wanted to force a behavior that was closer to abstract, you could check to see if the interface was being implemented in the constructor of the base class and then throw an exception if it wasn't. Things are obviously getting a little wonky here, because this is essentially a workaround for something the IDE is preventing you from doing, and as such there's no real clean, standard solution for something like this. I'm sure most people would cringe at the sight of a nested protected interface, but if you don't want an implementation in your base class and you can't mark your base class abstract, you don't have a lot of options.
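To sketch that enforcement (using the FormCore and IManageItems names from above; the GetType() guard is my own addition so the designer can still instantiate the bare base class):

public partial class FormCore : Form
{
    protected FormCore()
    {
        // Let the designer instantiate FormCore itself; only concrete
        // children are required to implement the interface.
        if (GetType() != typeof(FormCore) && !(this is IManageItems))
        {
            throw new InvalidOperationException(
                GetType().Name + " must implement IManageItems.");
        }
    }
}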
Another thing to consider is favoring composition over inheritance to provide the functionality that you need.
On the other hand, instead of using an interface, it may be appropriate to simply throw a NotSupportedException in a circumstance where the class cannot perform the action. NotImplementedException is designed to be used only in projects still under development, which is why ReSharper is flagging the method.
NotSupportedException: The exception that is thrown when an invoked method is not supported, or when there is an attempt to read, seek, or write to a stream that does not support the invoked functionality.
One use case is:
You've inherited from an abstract class that requires that you override a number of methods. However, you're only prepared to provide an implementation for a subset of these. For the methods that you decide not to implement, you can choose to throw a NotSupportedException.
See NotSupportedException documentation on MSDN for more information and usage guidelines.
ReSharper raises the warning to alert users that the code has not been completed. If your actual desired behaviour is to not support those methods, you should throw a NotSupportedException instead of NotImplementedException, to make your intentions clearer.
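Applied to the question's code, that might look like this (same method names as above; the message text is only an example):

protected virtual void SaveCurrentItem()
{
    // NotSupportedException: the base form genuinely does not support this.
    throw new NotSupportedException(
        "FormCore itself cannot save; derived forms must override SaveCurrentItem.");
}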
I'm developing a framework where a class inheriting from an abstract class of the framework needs to be able to specify the schema for the options it can accept when it is called to DoStuff().
I started out with an abstract GetOptionsSchema() method like this:
public abstract class Widget
{
    public abstract OptionsSchema GetOptionsSchema();
    public abstract void DoStuff(Options options);
}
Other developers would then extend on my framework by creating custom Widget types:
public class FooWidget : Widget
{
    public override void DoStuff(Options options)
    {
        // Do some FooWidget stuff
    }

    public override OptionsSchema GetOptionsSchema()
    {
        // Return options for FooWidget
    }
}
This works, but it requires the framework to create an instance of every Widget type just to determine the options schema it accepts, even if it has no need to actually DoStuff() with any of these types.
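In other words, the framework's schema discovery presumably ends up looking something like this (a sketch; the Activator-based loop is my assumption of how such a scan would work):

// Hypothetical startup scan: every concrete Widget type is instantiated
// purely to ask it for its schema.
foreach (Type type in widgetTypes)
{
    var widget = (Widget)Activator.CreateInstance(type);
    OptionsSchema schema = widget.GetOptionsSchema();
    // register schema against type...
}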
Ultimately, I'd like to be able to determine the options schema for a specific Widget type directly from a System.Type. I would create a custom OptionsSchema attribute, but constructing these schemas is more complicated than would make sense in the constructor of an attribute. It needs to happen in a method.
I've seen other frameworks solve similar problems by creating a custom attribute that identifies a static method or property by name. For example the TestCaseSource attribute in NUnit.
Here's what this option might look like:
public abstract class Widget
{
    public abstract void DoStuff(Options options);
}
[OptionsSchemaSource(nameof(GetOptionsSchema))]
public class FooWidget : Widget
{
    public override void DoStuff(Options options)
    {
        // Do some FooWidget stuff
    }

    public static OptionsSchema GetOptionsSchema()
    {
        // Return options for FooWidget
    }
}
I like how the OptionsSchemaSource attribute makes it possible to get the options schema directly from a System.Type, but this also seems much less discoverable to other developers creating custom Widget types.
With the abstract method, another Widget developer knows they must override GetOptionsSchema(), because their code would not compile otherwise. With the OptionsSchemaSource attribute, the best I could do would be to hope people read my documentation, and have the framework throw an exception at run time if it encounters a Widget without an OptionsSchemaSource attribute.
Is there an alternative/better/recommended approach to this?
You pretty much already know everything needed to judge which approach is best.
As already mentioned, you cannot have static methods defined on an interface, so there is no way to force a new developer to add the attribute at compile time.
So, the two alternatives you identified are the only two I can think of.
Now, let's weigh the pros and cons of each and try to sharpen them.
Attribute
You can lessen the pain of ensuring devs put attributes on their classes by giving meaningful error messages. I would say that you should manage the discovery of the classes based exclusively on attributes, not on inheritance.
If you manage everything with Attributes, you don't need to inherit from Widget.
This is a pro, because now everyone can inherit if it's desirable, and re-implement if it's preferred.
The con is that the implementation of discoverability will be more complex: you will need to use reflection at startup, get a MethodInfo, check that the method has the correct signature, give proper errors where needed, and invoke the method, unboxing the result as necessary.
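That plumbing might look roughly like the following sketch (it assumes the attribute exposes the target method's name through a hypothetical MethodName property, and it needs System.Reflection):

static OptionsSchema GetSchemaFor(Type widgetType)
{
    var attr = widgetType.GetCustomAttribute<OptionsSchemaSourceAttribute>();
    if (attr == null)
        throw new InvalidOperationException(
            widgetType.Name + " is missing [OptionsSchemaSource].");

    // Find the named public static method and validate its signature.
    MethodInfo method = widgetType.GetMethod(
        attr.MethodName, BindingFlags.Public | BindingFlags.Static);
    if (method == null
        || method.ReturnType != typeof(OptionsSchema)
        || method.GetParameters().Length != 0)
        throw new InvalidOperationException(
            widgetType.Name + "." + attr.MethodName
            + " must be a public static, parameterless method returning OptionsSchema.");

    return (OptionsSchema)method.Invoke(null, null);
}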
Think about it: you would like a static method because you don't want to instantiate a typed Widget instance, but actually instantiating a new Widget could very well be no big deal.
Abstract class
Well, you enforce an inheritance chain on your developers, which could be OK, necessary, or entirely optional (you be the judge), but you get a self-documenting experience.
The apparent con is that at startup you need to instantiate a Widget for every derived type you discover, but that could very well be peanuts compared to assembly scanning, type checking, MethodInfo discovery, and method calls through reflection.
Ugly? Kind of. Inefficient? Not so much. And it's code that is invisible to your end user.
IMHO
I find it quite a good tradeoff, when designing a framework, to put some "ugly" code inside the framework if it means that every single implementation using the library comes out even a little bit better.
All in all, if you're designing a library that you want to be flexible and discoverable, you should expect a developer to read at least a quick-start guide. If they can read in 5 minutes a single bit of information (either "extend a base class" or "add a single attribute or two") and that single bit gives them a direction into discovering every aspect of widget registration, I would be OK: you can't really get much better than this.
My call: I would go the abstract class route, with a smallish caveat. I really don't like having an enforced base class, so I would organize discovery at startup based on an interface, IWidget, containing the GetOptionsSchema method and everything needed to use the widget (which could be the DoStuff method, but could very well be something else). At startup you search for implementations of the interface that are not abstract, and you're good to go.
If, and only if, the only bit you really need in advance is a string or other similarly simple type, I would require an additional attribute.
[OptionsSchemaName("http://something")]
public class MyWidget : WidgetBase
{
    public override void DoStuff(Options options)
    {
        // Do some MyWidget stuff
    }

    public OptionsSchema GetOptionsSchema()
    {
        // Return options for MyWidget
    }
}
Then, your type discovery infrastructure can search for non-abstract IWidgets and throw a meaningful error right at startup, like: "the type MyWidget is lacking an OptionsSchemaName attribute. Every implementation of IWidget must define one. See http://mydocs for information."
Bang! Nailed it!
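A sketch of that startup scan (IWidget and OptionsSchemaNameAttribute as proposed above; I'm assuming the attribute exposes the string through a Name property):

var widgetTypes = assembly.GetTypes()
    .Where(t => typeof(IWidget).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface);

foreach (var type in widgetTypes)
{
    var attr = type.GetCustomAttribute<OptionsSchemaNameAttribute>();
    if (attr == null)
        throw new InvalidOperationException(
            "The type " + type.Name + " is lacking an OptionsSchemaName attribute. "
            + "Every implementation of IWidget must define one. "
            + "See http://mydocs for information.");

    // register type under attr.Name...
}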
It's not currently possible to enforce the attribute at compile time; that would've been ideal for your use case. It's also not possible to have an abstract static method, or to have a static method specified in an interface, so there is no way to ensure the method is actually there at compile time, except by enforcing an instance method via an abstract class or interface (which requires an instance of the type to access).
I'd go with the attribute idea - it's not unreasonable to expect developers to read documentation; even with overriding an abstract method, the developer would need to know how to construct an OptionsSchema in the overridden method - back to documentation!
I'm writing a log4net appender. They have an AppenderSkeleton class which implements IAppender:
public abstract class AppenderSkeleton : IBulkAppender, IAppender, IOptionHandler
The way the AppenderSkeleton class works is that it implements the DoAppend() method of IAppender, does a bunch of work for you (like calling the filter chain), and then calls an abstract method called Append(). While this is reasonable, I would like to execute some of my code before the filters run. I could implement the IAppender interface myself, but at first I figured I would just try to override DoAppend() in my derived class, do my stuff, and then call base.DoAppend(). It was at this point that I noticed AppenderSkeleton doesn't mark DoAppend() as virtual: I got a compiler error indicating I couldn't override the method since it isn't marked virtual.
I then had my class derive from IAppender and explicitly implemented the IAppender.DoAppend() method. I was surprised that the code compiled without issues. Below is my DoAppend() method:
void IAppender.DoAppend(LoggingEvent evnt)
{
    // ... my code that runs before the filters ...
    base.DoAppend(evnt);
}
I haven't tried running it yet, but I'm wondering if someone might know what the runtime will end up doing with this implementation?
@Rob's answer is right - which method (base or derived) gets called would depend on how you invoked it. That makes for rather fragile code.
What I'd recommend is to use composition and not inheritance. Don't make your class inherit from AppenderSkeleton, make it contain an instance of AppenderSkeleton and use its methods where you choose to.
Visual Studio even has a quick "Implement interface through private variable" option if you declare a private variable that implements one of the interfaces your class also implements. It quickly generates a proxy pattern for your class, calling the corresponding methods on the private member.
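A rough, untested sketch of that composition for this case (IAppender's other members, Name and Close, also need forwarding; since AppenderSkeleton is abstract, the contained instance has to be a small concrete subclass):

public class MyAppender : IAppender
{
    private readonly InnerSkeleton _skeleton;

    public MyAppender()
    {
        _skeleton = new InnerSkeleton(this);
    }

    public string Name { get; set; }

    public void Close()
    {
        _skeleton.Close();
    }

    public void DoAppend(LoggingEvent loggingEvent)
    {
        // My code runs first, before any filters...
        _skeleton.DoAppend(loggingEvent); // ...then the skeleton runs its filter chain
    }

    private void AppendCore(LoggingEvent loggingEvent)
    {
        // the actual appending work
    }

    // Concrete stand-in so we can contain an AppenderSkeleton instance.
    private class InnerSkeleton : AppenderSkeleton
    {
        private readonly MyAppender _owner;

        public InnerSkeleton(MyAppender owner)
        {
            _owner = owner;
        }

        protected override void Append(LoggingEvent loggingEvent)
        {
            _owner.AppendCore(loggingEvent);
        }
    }
}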
If someone has not marked their method as virtual, then there is no way to override it. The only option is to implement IAppender yourself.
Also, note that you do not override interface methods, you implement them.
However, you do not even need to override this method.
According to documentation, DoAppend
performs threshold checks and invokes filters before delegating actual logging to the subclasses specific Append method.
You don't need to override the DoAppend method, since it implements the generic algorithm.
It is a sort of Template Method.
You need to override the Append method, which is abstract:
protected override void Append(LoggingEvent loggingEvent)
{
    // Your actions here
}
I want to create an abstract class with an common exception handling pattern:
public abstract class Widget
{
    public IFoo CreateFoo()
    {
        try
        {
            return CreateFooUnsafe();
        }
        catch (Exception ex)
        {
            throw new WidgetException(ex, moreData, evenMoar);
        }
    }

    protected abstract IFoo CreateFooUnsafe();
}
The intention is to have a standard exception-handling pattern across all derived objects. The abstract CreateFooUnsafe() method is not expected to contain any exception handling; implementations would probably be a single line of return new Foo(...).
What I want to know is whether there are any standard naming conventions associated with this pattern, particularly where exception-throwing code is expected?
The names above seem somewhat appropriate, but not entirely without smell.
This appears to be an example of the template method pattern.
Template method is a pattern which can be expressed in many object-oriented languages by using a public non-virtual function to implement some over-arching behavior, and a protected virtual (or abstract) method to supply the concrete behavior in subclasses.
In your example, you are using the template method to catch all exceptions bubbling out of the inner implementation and wrap them in a custom exception type. One comment I would make about this specific practice is that it only makes sense if you can add contextual information that allows calling code to better handle the exception. Otherwise, it may be better to simply allow the source exceptions to propagate out.
The short answer is no.
There is no convention in the Template Method pattern for designating what type of exception is thrown, or when. That kind of information belongs in the documentation, as on MSDN. Using C# XML comments, you can easily generate such documentation.
I'm under the impression that there might be a naming convention for the Template Method pattern itself, sans any reference to exception handling. As I understand it, the naming might look like this:
public abstract class Widget
{
    public void PerformAction()
    {
        this.PerformActionImpl();
    }

    protected virtual void PerformActionImpl() { }
}
Here Impl is shorthand for "implementation". Personally I don't like that naming style, so I don't use it, but I'm sure I've read somewhere semi-authoritative that that is "the" way to do it.
I wouldn't use any of this in your case, however, as what you really seem to want is either Factory or Abstract Factory.
With regard to your exception query, it seems to me the code is a little inside out, though I disagree with some of the other comments, depending on your circumstances.
Wrap and throw is an entirely valid exception handling technique.
The additional context provided by the type of the exception itself may well be enough to route the exception to an appropriate handler; i.e., you've transformed an Exception into a WidgetException, which one would expect has context within your application. So that might well be good enough.
What I do disagree with is where you've done the wrapping.
I would do the catching, wrapping, and throwing from within the subclass implementation of the virtual method, as only that subclass has enough understanding of what it's doing to know whether the exception is indeed a WidgetException (and should therefore be wrapped and thrown) or something a little more hairy that should propagate.
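In code, that moves the try/catch down into the override - a sketch reusing the question's names, with IOException standing in for whatever specific failure the subclass actually understands:

protected override IFoo CreateFooUnsafe()
{
    try
    {
        return new Foo();
    }
    catch (IOException ex) // the specific failure this subclass knows about
    {
        // Only here do we know enough to say this really is a widget problem.
        throw new WidgetException(ex, moreData, evenMoar);
    }
}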
The question's code as it stands makes some massive assumptions about the cause of the exception, and in that sense renders any use of a custom exception next to useless: everything is now a WidgetException.
So while I believe the type alone could be enough to contextualise the exception, I don't believe the code is making that decision in the right place. I understand the motivation behind the implementation you've chosen, as it seems like a really tasty shortcut ("the myth of the all-knowing base class"), but the mere fact that you declared the class abstract should provide a significant clue that it is intended to be ignorant by design.
So with respect to the crosscutting concern of exception handling, I don't think you should be looking so much for a pattern to make your life easier, but rather for a framework to abstract all the guff away.
For example the Enterprise Library.
There are several different patterns swimming about in the code above. Among other things, it looks a bit like the Abstract Factory pattern, i.e., you've got an abstract class which is implementing a factory method that returns concrete objects which implement a specific interface.
As to whether this sort of exception handling is a good idea or not -- I would tend to agree with the other folks, that I can't typically see a lot of value in this approach. I see what you're trying to do, namely, provide a single sort of exception to handle, much as the CreateFoo() returns a single interface (IFoo). But the only benefit I can think of to that approach is if you provide some interesting and relevant troubleshooting information in the WidgetException (e.g., some database or service connection strings, or some special processing logic around the stack trace). If you're just wrapping the original exception so that your clients can deal with a WidgetException, you haven't really accomplished much: they could just as easily deal with the base Exception type.
According to the MSDN documentation for partial classes:
Partial methods are implicitly private
So you can have this
// Definition in file1.cs
partial void Method1();

// Implementation in file2.cs
partial void Method1()
{
    // method body
}
But you can't have this
// Definition in file1.cs
public partial void Method1();

// Implementation in file2.cs
public partial void Method1()
{
    // method body
}
But why is this? Is there some reason the compiler can't handle public partial methods?
Partial methods have to be completely resolvable at compile time. If the implementation is not there at compile time, the method is completely missing from the output. The entire reason partial methods work is that removing them has no impact on the API or program flow outside of the one-line calling site (which is also why they have to return void).
When you add a method to your public API - you're defining a contract for other objects. Since the entire point of a partial method is to make it optional, you'd basically be saying: "I have a contract you can rely on. Oh wait, you can't rely on this method, though."
In order for the public API to be reasonable, the partial method has to really either always be there, or always be gone - in which case it shouldn't be partial.
In theory, the language designers could have changed the way partial methods work, in order to allow this. Instead of removing them from everywhere they were called, they could have implemented them using a stub (ie: a method that does nothing). This would not be as efficient, and is unnecessary for the use case envisioned by partial methods, though.
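As it actually shipped, the behaviour looks like this (a minimal sketch; the names are mine):

public partial class Order
{
    // Defining declaration: no body, implicitly private.
    partial void OnShipped();

    public void Ship()
    {
        // If no implementing declaration exists anywhere, the compiler
        // removes this call (and the method) from the output entirely.
        OnShipped();
    }
}

// Optionally, in another file of the same partial class:
public partial class Order
{
    partial void OnShipped()
    {
        Console.WriteLine("shipped");
    }
}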
Instead of asking why are they private, let's rephrase the question to this:
What would happen if partial methods weren't private?
Consider this simple example:
public partial class Calculator
{
    public int Divide(int dividend, int divisor)
    {
        try
        {
            return dividend / divisor;
        }
        catch (DivideByZeroException ex)
        {
            HandleException(ex);
            return 0;
        }
    }

    partial void HandleException(ArithmeticException ex);
}
Let's ignore for the moment the question of why we would make this a partial method as opposed to an abstract one (I'll come back to that). What's important is that this compiles - and works correctly - whether the HandleException method is implemented or not. If nobody implements it, this just eats the exception and returns 0.
Now let's change the rules, say that the partial method could be protected:
public partial class Calculator
{
    // Snip other methods

    // Invalid code
    protected virtual partial void HandleException(ArithmeticException ex);
}

public class LoggingCalculator : Calculator
{
    protected override void HandleException(ArithmeticException ex)
    {
        LogException(ex);
        base.HandleException(ex);
    }

    private void LogException(ArithmeticException ex) { ... }
}
We have a bit of a problem here. We've "overridden" the HandleException method, except that there's no method to override yet. And I mean the method literally does not exist, it's not getting compiled at all.
What does it mean when our base Calculator invokes HandleException? Should it invoke the derived (overridden) method? If so, what code does the compiler emit for the base HandleException method? Should it be turned into an abstract method? An empty method? And what happens when the derived method calls base.HandleException? Is this supposed to just do nothing? Raise a MethodNotFoundException? It's really hard to follow the principle of least surprise here; almost anything you do is going to be surprising.
Or maybe nothing should happen when HandleException is invoked, because the base method wasn't implemented. This doesn't seem very intuitive, though. Our derived class has gone and implemented this method and the base class has gone and pulled the rug out from under it without us knowing. I can easily imagine some poor developer pulling his hair out, unable to figure out why his overridden method is never getting executed.
Or maybe this code shouldn't compile at all or should produce a warning. But this has a number of problems of its own. Most importantly, it breaks the contract provided by partial methods, which says that neglecting to implement one should never result in a compiler error. You have a base class which is humming along just fine, and then, by virtue of the fact that someone implemented a totally valid derived class in some completely different part of the application, suddenly your app is broken.
And I haven't even started talking about the possibility of the base and derived classes being in different assemblies. What happens if you reference an assembly with a base class that contains a "public" partial method, and you try to override it in a derived class in another assembly? Is the base method there, or not there? What if, originally, the method was implemented, and we wrote a bunch of code against it, but somebody decided to remove the implementation? The compiler has no way to stub out the partial method calls from referencing classes because as far as the compiler is concerned, that method never existed in the first place. It's not there, it's not in the compiled assembly's IL anymore. So now, simply by removing the implementation of a partial method, which is supposed to have no ill effects, we've gone and broken a whole bunch of dependent code.
Now some people might be saying, "so what, I know that I'm not going to try to do this illegal stuff with partial methods." The thing you have to understand is that partial methods - much like partial classes - are primarily intended to help simplify the task of code generation. It's very unlikely that you would ever want to write a partial method yourself period. With machine-generated code, on the other hand, it's actually fairly likely that consumers of the code will want to "inject" code at various locations, and partial methods provide a clean way of doing this.
And therein lies the problem. If you introduce the possibility of compile-time errors due to partial methods, you've created a situation in which the code generator generates code that doesn't compile. This is a very, very bad situation to be in. Think about what you'd do if your favourite designer tool - say Linq to SQL, or the Winforms or ASP.NET designer, suddenly started producing code that sometimes fails to compile, all because some other programmer created some other class that you've never even seen before that happens to have become a little too intimate with the partial method?
In the end it really boils down to a much simpler question, though: What would public/protected partial methods add that you can't already accomplish with abstract methods? The idea behind partials is that you can put them on concrete classes and they'll still compile. Or rather, they won't compile, but they won't produce an error either, they will just be completely ignored. But if you expect them to be called publicly, then they aren't really "optional" anymore, and if you expect them to be overridden in a derived class, then you might as well just make it abstract or virtual and empty. There isn't really any use for a public or protected partial method, other than to confuse the compiler and the poor bastard trying to make sense of it all.
So instead of opening up that Pandora's box, the team said forget it - public/protected partial methods aren't much use anyway, so just make them private. That way we can keep everything safe and sane. And it is. As long as partial methods stay private, they are easy to understand and worry-free. Let's keep it that way!
Because the MS compiler team did not have a requirement to implement this feature.
Here is a possible scenario of what happened inside MS: because VS uses code gen for a lot of its features, one day an MS code-gen developer decided that they needed partial methods so that the generated API could be extended by outsiders, and this requirement led the compiler team to act and deliver.
If you don't define the method, what should the compiler do?
If you don't define them, partial methods will not be compiled at all.
This is only possible because the compiler knows where all of the calls to the method are. If the method isn't defined, it will completely remove the lines of code that call the method. (This is also why they can only return void)
If the method is public, this can't work, because the method might be called by a different assembly, which the compiler has no control over.
Reed's and Slak's answers are entirely correct but you seem unwilling to accept their answers.
I will thus try to explain why partial methods are implemented with these restrictions.
Partial methods are for implementing certain sorts of code-gen scenarios with maximal efficiency, both in execution time and in metadata overhead.
The last part is the real reason for them, since the aim was to make them (in Eric's words) "pay for play".
When partial methods were added, the JIT was entirely capable of inlining an empty method, thus giving zero runtime cost at the call sites. The problem is that even then there is a cost involved: the metadata for the class will include these empty methods, increasing its size (needlessly) as well as forcing some more effort during the JITting process to optimize them away.
Whilst you may not worry much about this cost (and indeed many people won't notice it at all), it does make a big difference to code where startup cost matters or where disk/memory is constrained. You may have noticed the now-mandatory use of .NET on Windows Phone 7 and the Zune; in those areas, bloat in type metadata is a considerable cost.
Thus partial methods are designed such that, if they are never used, they have absolutely zero cost and cease to exist in the output in any way. This comes with some significant constraints to ensure it does not result in bugs.
Taken from the MSDN page, with my notes:
...the method must return void.
Partial methods can have ref but not out parameters.
Otherwise removing the call to them may leave you with an undefined problem of what to replace the assignment with.
Partial methods cannot be extern, because the presence of the body determines whether they are defining or implementing.
You can make a delegate to a partial method that has been defined and implemented, but not to a partial method that has only been defined.
These follow from the fact that the compiler needs to know whether the method is defined or not, and thus whether it is safe for removal. This leads us to the one you dislike.
Partial methods are implicitly private, and therefore they cannot be virtual.
Your mental model of this feature is that, if the compiler knows a partial method is implemented, then it should simply allow the partial method to be public (and virtual, for that matter), since it can check that you implemented the method.
Were you to change the feature to do this, you would have two options:

1. Force all non-private partial methods to require an implementation.
This is simple and not likely to involve much effort, but then any such methods are no longer partial in the meaningful sense of the original plan.

2. Any method declared public is simply deleted if it was not implemented in the assembly.
This allows the partial removal to be applied, but:
- It requires quite a lot more effort by the compiler (to detect all references to the method rather than simply needing to look in the composed class itself).
- The IntelliSense implementation gets a bit confused: should the method be shown? Shown only when it's been given a definition?
- Overload resolution becomes much more complex, since you need to decide whether a call to such a method with no definition is (a) a compile-time failure or (b) results in the selection of the next best option, if available.
- Side effects within expressions at the call sites are already complex in the private-only case. This is mitigated somewhat by the assumption that partial implementations already exhibit a high degree of coupling; this change would increase the potential for coupling.
- It has rather complex failure modes, in that public methods could be silently deleted by some innocuous change in the original assembly. Other assemblies depending on this one would fail (at compile time), but with a very confusing error (without considerable effort this would apply even to projects in the same solution).
Solution 1 is simply pointless: it adds effort and is of no benefit to the original goals. Worse, it may actually hinder those goals, since someone using partial methods this way may not realise they are gaining no reduction in metadata. Also, someone may become confused by the errors that result from failing to supply the definition.
That leaves us with solution 2. This solution involves effort (and every feature starts with -100), so it must add some compelling benefit to get it over the -100 (plus the additional negatives for the confusion caused by the additional edge cases).
What scenarios can you come up with to get things positive?
Your motivating example in the comments above was to "have comments and/or attributes in a different file".
The motivation for XML comments was entirely to increase the locality of documentation and code. If their verbosity is high, outlining exists to mitigate this. My opinion is that this is not sufficient to merit the feature.
The ability to push attributes to a separate location is not, in my view, that useful; in fact, I think having to look in two locations is more confusing. The current private-only implementation has this problem too, but it is inevitable, and again is mitigated somewhat by the assumption that high coupling which is not visible outside the class is not as bad as high coupling external to the class.
If you can demonstrate some other compelling reason I'm sure it would be interesting, but the negatives to overcome are considerable.
I've read through all your comments and I agree with most of the conclusions; however, I have one scenario which I think would benefit from public/protected partial methods WITHOUT losing the original intent:
I have a code generator that generates serialization code and other boilerplate code. In particular, it generates a "Reset" method for every property so that a designer like VS can revert its value to the original. Within the code for this method, I generate a call to a partial method Repaint().
The idea is that if the object wants to, it can write the code for it and then do something; otherwise nothing is done and performance is optimal.
The problem is that sometimes the Repaint method exists in the object for purposes OTHER than being called from the generated code, and at that point, when I declare the method body, I should be able to make it internal, protected, or public. I am defining the method at this point, and yes, I will document it, etc., here - not in the declaration I generated, but in the one I wrote by hand. The fact that it was also declared in the generated code should not affect this.
I agree that specifying access modifiers on a partial method makes no sense when you are declaring the method without a body. When you are declaring the method WITH a body, you should be free to declare it any way you want. I don't see any difference, in terms of compiler complexity, in accepting this scenario.
Notice that in partial classes this is perfectly fine: I can leave one of the partial declarations "naked", without any access modifiers, but specify them in another partial declaration of the same class. As long as there are no contradictions, everything is fine.
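For example, this is legal for partial classes today (a minimal sketch):

// file1.cs - no access modifier on this declaration
partial class Sample
{
    void A() { }
}

// file2.cs - the modifier appears only here; no contradiction, so this compiles
public partial class Sample
{
    void B() { }
}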
Partial methods are removed by the compiler if they don't have an implementation. As I understand it, the compiler would not be able to remove a partial method if it were accessible as public:
// partial class with public modifier
public partial class Sample
{
    partial void Display();
}

public partial class Sample
{
    partial void Display() { }
}
If the method were public, we would be able to call it from outside the class, as below, which would prevent the compiler from removing a partial method that is not implemented:
// calling a partial method that has no implementation
var sample = new Sample();
sample.Display();
The simple reason is that partial methods are not public because they are an implementation detail. They are meant primarily to support designer-related scenarios and were never meant to be part of a supported public API. A non-public method works just fine here.
Allowing partial methods to be public is a feature. Features have inherent costs, including design, testing, development, etc. Partial methods were just one of many features added in a very packed Visual Studio 2008. They were scoped to be as small as possible while fitting the scenario, in order to leave room for more pressing features such as LINQ.
Not sure if this will answer your question; it did answer mine.
Partial Methods
Excerpts:
If you were allowed to return something, and were in turn using that as a return value from your CallMyMethods function, you would run into trouble when the partial methods were not implemented.
I've found loads of practical examples of this, and I understand the practical output when overriding or hiding methods, but I'm looking for some under-the-covers info on why this is, and why C# allows it when, according to the rules of polymorphism, it shouldn't be allowed - at least insofar as my understanding of polymorphism goes (which seems to coincide with the standard definitions found on Wikipedia/Webopedia).
class Base
{
    public virtual void PrintName()
    {
        Console.WriteLine("BaseClass");
    }
}

class FirstDerived : Base
{
    public override void PrintName()
    {
        Console.WriteLine("FirstDerived");
    }
}

class SecondDerived : Base
{
    public new void PrintName()
    {
        Console.WriteLine("SecondDerived");
    }
}
Using the following code:
FirstDerived b = new FirstDerived();
Base a = b;
b.PrintName();
a.PrintName();
I get:
FirstDerived
FirstDerived
Okay, I get that, makes sense.
SecondDerived c = new SecondDerived();
Base a = c;
c.PrintName();
a.PrintName();
I get:
SecondDerived
BaseClass
Okay, that makes sense too: reference a can't see c.PrintName(), so it uses its own method to print its own name. However, I can cast my instance to its true type using:
((SecondDerived)a).PrintName();
or
(a as SecondDerived).PrintName();
to get the output I would expect:
SecondDerived
So what is going on under the covers, and what does this mean in terms of polymorphism? I'm told that this facility "breaks polymorphism" - and I guess, according to the definition, it does. Is that right? Would an "object oriented" language like C# really allow you to break one of the core principles of OOP?
(This answers the "why is it allowed" which I think is really the central point of your question. How it works in terms of the IL is less interesting to my mind... let me know if you want me to go into that though. Basically it's just a case of specifying the method to call with a different type token.)
It allows base classes to evolve without breaking derived classes.
Suppose Base didn't originally have the PrintName method. The only way to get at SecondDerived.PrintName would be to have an expression with a static type of SecondDerived, and call it on that. You ship your product, everything is fine.
Now fast forward to Base introducing a PrintName method. This may or may not have the same semantics of SecondDerived.PrintName - it's safest to assume that it doesn't.
Any callers of Base.PrintName know that they're calling the new method - they couldn't have called it before. Any callers which were previously using SecondDerived.PrintName still want to use it though - they don't want to suddenly end up calling Base.PrintName which could do something entirely different.
The difficulty is new callers of SecondDerived.PrintName, who may or may not appreciate that this isn't an override of Base.PrintName. They may be able to notice this from the documentation of course, but it may not be obvious. However, at least we haven't broken existing code.
When SecondDerived is recompiled, though, the authors will be made aware through a warning that there's now a Base.PrintName method. They can either stick with their existing non-virtual scheme by adding the new modifier, or make it override the Base.PrintName method. Until they make that decision, they'll keep getting the warning.
Versioning and compatibility isn't usually mentioned in OO theory in my experience, but C# has been designed to try to avoid compatibility nightmares. It doesn't solve the problem completely, but it does a pretty good job.
I answer "how" it works. Jon has answered the "Why" part.
Calls to virtual methods are resolved a bit differently to those of non-virtual ones. Basically, a virtual method declaration introduces a "virtual method slot" in the base class. The slot will hold a pointer to the actual method definition (and the contents will point to an overridden version in the derived classes and no new slot will be created). When the compiler generates code for a virtual method call, it uses the callvirt IL instruction, specifying the method slot to call. The runtime will dispatch the call to the appropriate method. On the other hand, a non-virtual method is called with a call IL instruction, which will be statically resolved to the actual method by the compiler, at compile time (only with the knowledge of the compile-time type of the variable). new modifier does nothing in the compiled code. It essentially tells the C# compiler "Dude, shut up! I'm sure I'm doing the right thing" and turns off the compiler warning.
A new method (actually, any method without an override modifier) will introduce a completely separate chain of methods (new method slot). Note that a new method can be virtual itself. The compiler will look at the static type of the variable when it wants to resolve the method chain and the run time will choose the actual method in that specific chain.
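A small sketch of those two chains (the class names are mine):

class A
{
    public virtual void M() { Console.WriteLine("A"); }       // slot 1
}

class B : A
{
    public new virtual void M() { Console.WriteLine("B"); }   // slot 2: a brand new chain
}

class C : B
{
    public override void M() { Console.WriteLine("C"); }      // overrides slot 2, not slot 1
}

// A a = new C(); a.M();   prints "A" (resolved through slot 1)
// B b = new C(); b.M();   prints "C" (resolved through slot 2)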
According to the Wikipedia definition:
Type polymorphism in object-oriented programming is the ability of one type, A, to appear as and be used like another type, B.
Later on the same page:
Method overriding is where a subclass replaces the implementation of one or more of its parent's methods. Neither method overloading nor method overriding are by themselves implementations of polymorphism.
The fact that SecondDerived does not provide an override for PrintName does not affect its ability to appear as, and be used like, a Base. The new method implementation it provides will not be used anywhere an instance of SecondDerived is treated as an instance of Base; it will be used only when that instance is explicitly used as a SecondDerived.
Moreover, SecondDerived could actually explicitly implement PrintName in addition to the new hiding implementation, thus providing its own implementation to be used when the instance is treated as the contract type. (Though, to allow this, Base has to be an interface or the member has to come from one.)
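For instance, if the contract came from an interface, a sketch adapted from the question's names:

interface IBase
{
    void PrintName();
}

class SecondDerived : IBase
{
    // Called when the instance is used through the IBase contract.
    void IBase.PrintName()
    {
        Console.WriteLine("SecondDerived (via IBase)");
    }

    // Called when the static type is SecondDerived.
    public void PrintName()
    {
        Console.WriteLine("SecondDerived");
    }
}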