Should protected methods be unit-tested? How to avoid repetitive testing? - c#

Using C#
I know this has been asked before and a lot of people will answer no, only test public methods, not implementation details. Others will say yes, if it has some important logic. Although you might then consider breaking it off into its own class.
One issue I haven't seen addressed is having to repeat testing public methods that call protected methods in inherited classes.
If I test the protected method in the base class, surely I don't have to retest it in derived classes. Or should I copy and paste the tests into multiple test classes?

You definitely should test protected methods. From a testing standpoint, a "protected" method still is part of the public interface, even though the "public" is limited to those classes that derive from your class. Because code that you do not control can reference those methods, you must ensure that they function as defined.
As for repetitive testing, I don't have a definitive answer. If given:
public class A
{
    protected virtual void Foo() {}
}

public class B : A
{
}
The question is whether you write a test for B.Foo. On one hand I would say no, because B doesn't provide an explicit implementation of Foo, and so its behavior can't possibly be different than the behavior of A.Foo, and we can assume that you've already tested A.Foo.
On the other hand, A.Foo could depend on some other protected fields or properties that B could modify, or on a private callback that B provides in a constructor or initialization function. In that case, then you absolutely must test B.Foo because its behavior could be different than A.Foo, even though B doesn't override A.Foo.
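To make this concrete, here is a minimal sketch (the Threshold property and the bool-returning Foo are illustrative assumptions, not code from the question):

public class A
{
    // Protected state that Foo's behavior depends on.
    protected int Threshold { get; set; }

    public A() { Threshold = 10; }

    protected virtual bool Foo(int value)
    {
        return value > Threshold;
    }
}

public class B : A
{
    // B never overrides Foo, yet Foo behaves differently on a B instance.
    public B() { Threshold = 100; }
}

A test that exercises Foo through A tells you nothing about how it behaves through B, so a separate test for B is justified in a case like this.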
Obviously, if B overrides Foo, then you have to write a test for B.Foo. But if B doesn't override A.Foo, then you have to use your judgement.
All that said, it's really no different from having to write tests for any class that derives from another. Consider deriving a class from TextWriter. Would you write explicit unit tests for all of the virtual functions defined by the TextWriter class? Or would you write tests only for those methods that you override, and those methods whose functionality might have changed as a side effect?

There are a lot of opinions on what should be unit tested and what should not.
My personal belief is that for every function you write, you should first have written a unit test to specify the desired behavior. You then write your code to make this test pass. This applies to private, public, protected and internal alike: if it is used, it should be unit tested.
Believe me, this makes your life easier in the long run, because if you or another developer changes existing unit-tested code, a change in behavior is far more likely to be caught.
In the real world it usually ends up being code first, then test. However, tests should still be written for all access levels.

Related

NotImplementedException -- but it's something that should never be called

Logically the methods in question should be abstract but they are on a parent form that gets inherited from and Visual Studio will have fits if they are declared abstract.
OK, I made the bodies throw a NotImplementedException. ReSharper flags that, and I'm not one to tolerate a warning like that in the code.
Is there an elegant answer to this or do I have to put up with some ugliness? Currently I am doing:
protected virtual void SaveCurrentItem()
{
    Trace.Assert(false, "Only the children of FormCore.SaveCurrentItem should be called");
}

protected virtual void SetItem()
{
    Trace.Assert(false, "Only the children of FormCore.SetItem should be called");
}
The class itself should never be instantiated, only its children. However, Visual Studio insists on creating one when you look at the designer of one of its children.
You might consider creating a nested, protected interface. For example:
protected interface IManageItems
{
    void SaveCurrentItem();
    void SetItem();
}
Each class that inherits from FormCore could individually implement the interface. Then you wouldn't have the risk of calling the base class implementation because there wouldn't be any.
To call the methods from your base class:
(this as IManageItems)?.SaveCurrentItem();
This would have the effect of making the methods act as if they were virtual without an initial declaration in the parent class. If you wanted behavior closer to abstract, you could check in the constructor of the base class whether the interface is implemented, and throw an exception if it isn't. Things are obviously getting a little wonky here: this is essentially a workaround for something the IDE is preventing you from doing, so there's no real clean, standard solution. I'm sure most people would cringe at the sight of a nested protected interface, but if you don't want an implementation in your base class and you can't mark the base class abstract, you don't have many options.
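Here is a rough sketch of that constructor check (an approximation only; the design-time guard uses LicenseManager because the WinForms designer instantiates the base class itself):

public class FormCore : Form
{
    // IManageItems is the nested protected interface declared above.

    protected FormCore()
    {
        // Approximate "abstract" at runtime: refuse any concrete subclass
        // that does not implement the nested interface, but skip the check
        // at design time so the designer can still create the base form.
        if (LicenseManager.UsageMode != LicenseUsageMode.Designtime
            && !(this is IManageItems))
        {
            throw new InvalidOperationException(
                "Subclasses of FormCore must implement IManageItems.");
        }
    }
}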
Another thing to consider is favoring composition over inheritance to provide the functionality that you need.
On the other hand, instead of using an interface, it may be appropriate to simply throw a NotSupportedException in a circumstance where the class cannot perform the action. NotImplementedException is designed to be used only in projects that are still in development, which is why ReSharper is flagging the method.
NotSupportedException: The exception that is thrown when an invoked method is not supported, or when there is an attempt to read, seek, or write to a stream that does not support the invoked functionality.
One use case is:
You've inherited from an abstract class that requires that you override a number of methods. However, you're only prepared to provide an implementation for a subset of these. For the methods that you decide not to implement, you can choose to throw a NotSupportedException.
See NotSupportedException documentation on MSDN for more information and usage guidelines.
ReSharper raises the warning to alert you that code has not been completed. If your actual desired behaviour is to not support those methods, you should throw a NotSupportedException instead of NotImplementedException, to make your intentions clearer.
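Applied to the question's methods, that could look like this (a sketch):

protected virtual void SaveCurrentItem()
{
    // Deliberately unsupported here; children of FormCore must override this.
    throw new NotSupportedException(
        "SaveCurrentItem is only supported by children of FormCore.");
}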

How to test a function that has .base() call

I would like to unit test a function that calls the base class implementation inside its own implementation (using base.).
I cannot use mocking, as this is inheritance we are dealing with, so I don't get the object in my constructor.
Example code:
protected override BuyerDeal Map(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity)
{
    buyerDealEntity.prop1 = buyerDealDTO.prop2;
    base.Map(buyerDealDTO, buyerDealEntity);
    return buyerDealEntity;
}
I would like to test this function, but I don't want this:
base.Map(buyerDealDTO, buyerDealEntity);
to occur, as I test the base by itself.
Yet I do want to verify the call to the base, and only that call.
By the way, the base class is abstract.
The problem is that if a few classes inherit from that base class, the base class ends up being tested more than once.
Without knowing much about your mapper there might be several ways to do that:
Split your code and the map code into separate methods, e.g.:

protected override BuyerDeal Map(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity)
{
    ExtraLogic(buyerDealDTO, buyerDealEntity);
    base.Map(buyerDealDTO, buyerDealEntity);
    return buyerDealEntity;
}

protected void ExtraLogic(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity)
{
    buyerDealEntity.prop1 = buyerDealDTO.prop2;
}
then you can test ExtraLogic. This might not work for all code, as calling base may be required as a dependency, which changes the flow, as stated in the comments.
I generally would not recommend that.
Or don't inherit from the base class: inherit from an interface and mock the abstract class's behavior instead. Favour composition over inheritance (FCoI).
This allows you to inject the base functionality and test your code only.
This might not work if no interface is defined (which is bad anyway) or if the framework uses the abstract class explicitly.
I see three approaches:
Try to ignore the base class in derived classes' tests. Problem here is that your tests are incomplete, because you never test the interaction of derived classes' code with the base class.
Parallel inheritance hierarchies for test fixtures, i.e. an abstract fixture for the base class, which is inherited by all fixtures for derived classes (see the sketch after this list). This helps to remove duplication and tests the above-mentioned interaction, but makes the fixtures hard to understand.
Follow Composition over Inheritance whenever possible. No base classes, no headaches.
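For the second option, a parallel fixture hierarchy might look like this (NUnit; MapperBase with a public Map method is an assumption for illustration):

[TestFixture]
public abstract class MapperBaseTests
{
    // Each derived fixture supplies the concrete mapper under test.
    protected abstract MapperBase CreateMapper();

    [Test]
    public void Map_ReturnsTheEntityItWasGiven()
    {
        var entity = new BuyerDeal();
        var result = CreateMapper().Map(new BuyerDealDTO(), entity);
        Assert.AreSame(entity, result);
    }
}

[TestFixture]
public class DerivedDealMapperTests : MapperBaseTests
{
    protected override MapperBase CreateMapper()
    {
        return new DerivedDealMapper();   // hypothetical concrete mapper
    }

    // Tests specific to DerivedDealMapper go here.
}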
I would suggest following the third one, especially as it looks as if the mapping between buyer deals might be considered a separate responsibility anyway.
Bottom line: Just create a BuyerDealMapper, test that, and mock it when you test the other classes.
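A minimal sketch of that composition (the IBuyerDealMapper interface and DealService are assumptions, and prop1/prop2 are taken to be compatible properties):

public interface IBuyerDealMapper
{
    BuyerDeal Map(BuyerDealDTO dto, BuyerDeal entity);
}

public class BuyerDealMapper : IBuyerDealMapper
{
    public BuyerDeal Map(BuyerDealDTO dto, BuyerDeal entity)
    {
        entity.prop1 = dto.prop2;
        return entity;
    }
}

public class DealService
{
    private readonly IBuyerDealMapper mapper;

    // The mapper is injected, so tests of DealService can substitute a mock,
    // and BuyerDealMapper itself is tested exactly once, in its own fixture.
    public DealService(IBuyerDealMapper mapper)
    {
        this.mapper = mapper;
    }

    public BuyerDeal Process(BuyerDealDTO dto)
    {
        return mapper.Map(dto, new BuyerDeal());
    }
}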
When unit testing, you are in fact testing that the code under test gives the correct output for the corresponding input.
This input may come in the form of dependencies, such as method parameters, data read from the file system or from a network source.
The output may come in the form of a value returned from the method, data written to the file system or properties passed into another class for instance.
As long as your input produces the expected output, within reason, anything else that happens in that code under test does not matter.
In your posted code, your unit test should not care whether the call to base.Map(buyerDealDTO, buyerDealEntity); is made - just that the code under test gives the expected output.
If your base class itself requires dependencies, such as file system writes or network reads, these can be mocked in your test setup routines.
If you are worried that you are testing the base class multiple times, this is actually a very good thing! The more times code is tested, and the more conditions and situations a piece of code sees, the more thorough and bullet-proof that piece of code will be. Your unit tests should help guarantee that your base class code can cope with your inherited classes' use of it.
For instance, how will your method cope if the base class throws an exception? Returns an unexpected value? Doesn't return for one reason or another? Unit testing should take these things into account.
Of course, if the base class call has no relevance to the method and how it works, then perhaps it shouldn't be called in that method at all. Some code refactoring may be required to fix this. This, in fact, is a huge benefit of unit testing as it can also help you with architectural issues such as this. Of course, assuming you are using TDD - rather than testing pre-written code - this can be a big help.
Refactoring
A potential way of refactoring this code is to not call the base class method from your overridden method. Instead, have the base class call the method itself as part of its own logic, and put the derived behavior in an overridden hook (the template method pattern). For instance:
public abstract class MyBaseClass
{
    public BuyerDeal Map(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity)
    {
        // perform base class logic here
        var entity = this.MapInternal(buyerDealDTO, buyerDealEntity);
        return entity;
    }

    protected abstract BuyerDeal MapInternal(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity);
}

public class MyClass : MyBaseClass
{
    protected override BuyerDeal MapInternal(BuyerDealDTO buyerDealDTO, BuyerDeal buyerDealEntity)
    {
        buyerDealEntity.prop1 = buyerDealDTO.prop2;
        return buyerDealEntity;
    }
}
Using this method, when you unit test MyClass, you are no longer testing MyBaseClass::Map(...) multiple times. Of course, you would still need to test MyBaseClass itself separately.
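A sketch of what the test for MyClass might then look like (NUnit; prop1 and prop2 are assumed to be simple assignable string properties):

[TestFixture]
public class MyClassTests
{
    [Test]
    public void Map_CopiesProp2ToProp1()
    {
        var dto = new BuyerDealDTO { prop2 = "expected" };
        var entity = new BuyerDeal();

        // Map runs the base class logic once, then MyClass's MapInternal.
        var result = new MyClass().Map(dto, entity);

        Assert.AreEqual("expected", result.prop1);
    }
}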

Testing the workings of internal functions using NUnit tests

I have a function which calls many functions internally. I see in tutorials that test methods are designed in such a way that only the return values of the outer functions are checked. How can I check the values returned by the internal functions?
Only the GetValues() method's return value is tested below. How can I check the workings of the other methods inside GetValues() using unit testing?
[TestFixture]
public class Class1
{
    [Test]
    public void Tester()
    {
        var clasObj = new TesterClass();
        int a = clasObj.GetValues();
        Assert.AreEqual(10, a);
    }
}
How can I check its working using unit testing?
In unit tests you only care about, well, the unit under test. In this case that is GetValues. Also, usually only the public methods are unit tested, because it is only the public methods (the interface) that have to be tested, not the internal workings.
It also ensures that the tests are not brittle. If you change the way a private/internal method works but keep the public interface behaving the same (this applies especially when you are using mocks, which is not really what you are doing in this kind of testing), you shouldn't be facing failed unit tests.
In such cases, you should make sure that your unit tests cover all code paths through the public method being tested, including the private/internal methods that it calls.
Sometimes, you do want to test the internals and one way is to use the InternalsVisibleToAttribute and mark the test assembly as a "friend".
http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx
Another way is to subclass the class you are testing ( possibly in your test assembly), and add a public wrapper method to the method to be tested and use this proxy class and the public wrapper for testing.
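A sketch of that proxy approach (ComputeInternal is a hypothetical non-public member, since the question doesn't show the internals of TesterClass):

public class TesterClass
{
    public int GetValues()
    {
        return ComputeInternal() * 2;
    }

    // Hypothetical non-public member we want to exercise directly.
    protected int ComputeInternal()
    {
        return 5;
    }
}

// In the test project: a public wrapper exposes the protected member.
public class TesterClassProxy : TesterClass
{
    public int CallComputeInternal()
    {
        return ComputeInternal();
    }
}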
I think you can do this with some tools, like TypeMock, but there is a reason why most tools don't allow it. This is because it usually makes the tests very brittle, meaning that when you change the internal code of a class, the tests will break. Internal members should be encapsulated and that is a good thing. I would look at a design that is testable from its public interface.
Generally you want to avoid testing the internal implementation of code, so that you can refactor without breaking any tests. However, if you want to test the inside of another object, the answer is simple: wanting to test private implementation is a code smell that the object under test is doing too much work, violating rules such as the single responsibility principle.
Therefore split GetValues out into a new object that you can test, such as:
ExampleFormatter.FormatValues()
Now this is a public class with a public method, meaning you can easily test it. All GetValues has to do now is invoke FormatValues with the correct parameters; you can use a mock object to verify that this happens as expected. As FormatValues is now public, we can test things such as whether the values are formatted as we expect. Any time you find it hard to test some code, it usually means the code is doing too much: break it out!
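A sketch of that extraction (the interface, the constructor injection and the Moq-style verification are illustrative assumptions):

public interface IExampleFormatter
{
    int FormatValues(int raw);
}

public class ExampleFormatter : IExampleFormatter
{
    public int FormatValues(int raw)
    {
        return raw * 2;   // hypothetical formatting logic, now directly testable
    }
}

public class TesterClass
{
    private readonly IExampleFormatter formatter;

    public TesterClass(IExampleFormatter formatter)
    {
        this.formatter = formatter;
    }

    public int GetValues()
    {
        return formatter.FormatValues(5);   // delegates the extracted work
    }
}

A test can now verify the delegation with a mock, e.g. with Moq: mock.Verify(f => f.FormatValues(5), Times.Once()), and test ExampleFormatter's behavior separately.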

Unit testing Domain model objects

In our Core domain model design, we have got a class called "Category" whose constructor is internal by design. Since the constructor is internal, when writing unit test cases I won't be able to create the object of "Category".
So my question, is it a best practice to make the constructor public just for making the "Category" class testable? Or I shouldn't be testing that "Category", instead I should have tested the Class/method responsible for creating this object?
Ta,
Rajeesh
Don't make the constructor public only for the sake of unit tests. If from a design point you decided that it should be internal, leave it that way. Test the classes that invoke this constructor.
In .NET there's the InternalsVisibleToAttribute which allows you to expose internal members to unit tests.
TDD means Test-Driven Design, and a corollary to this is that a constructor can't really be internal "by design" if you can't test it.
Consider why it's internal. This will tell you how to address the issue. You shouldn't make the constructor public just to be able to test it, but you should consider a design that makes it easy to create new instances.
Often, constructors are made internal to protect invariants, but you could just as well achieve the same goal with a public constructor that takes required input as constructor parameters.
public class MyClass
{
    private readonly string requiredString;

    public MyClass(string requiredString)
    {
        if (requiredString == null)
        {
            throw new ArgumentNullException("requiredString");
        }

        this.requiredString = requiredString;
    }
}
Notice how the combination of the Guard Clause and the readonly keyword protects the invariant of the class. This is often a good alternative to internal constructors.
Another reason for having internal constructors is when you have a Factory Method that may return a polymorphic object, but once again, consider if it would be a problem to expose the constructor if it doesn't mean compromising invariants.
The beauty of TDD is that it forces us to take a good look at any design decision and be able to really justify each and every one of them. Consider the justification of making the constructor internal and then modify the API so that the type is easy to create.
Add
[assembly: InternalsVisibleTo("UnitTestAssembly")]
to your AssemblyInfo.cs. Then UnitTestAssembly.dll is able to call your internal methods. More info is available here.
You could consider creating a static factory method named something like:
public static Category ConstructCategoryForUnitTest()
with which you can create the object just for the sake of testing it.
It is apparent from the name that it should not be used outside a testing context, and code review can easily spot 'illegal' use in production-grade code.
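For example, a sketch assuming a minimal Category with the internal constructor from the question:

public class Category
{
    public string Name { get; private set; }

    internal Category(string name)
    {
        Name = name;
    }

    // The name makes the test-only intent obvious in code review.
    public static Category ConstructCategoryForUnitTest()
    {
        return new Category("test category");
    }
}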

Why can't partial methods be public if the implementation is in the same assembly?

According to the MSDN documentation for partial classes:
Partial methods are implicitly private
So you can have this
// Definition in file1.cs
partial void Method1();

// Implementation in file2.cs
partial void Method1()
{
    // method body
}
But you can't have this
// Definition in file1.cs
public partial void Method1();

// Implementation in file2.cs
public partial void Method1()
{
    // method body
}
But why is this? Is there some reason the compiler can't handle public partial methods?
Partial methods have to be completely resolvable at compile time. If they are not there at compile time, they are completely missing from the output. The entire reason partial methods work is that removing them has no impact on the API or program flow outside of the one line calling site (which, also, is why they have to return void).
When you add a method to your public API - you're defining a contract for other objects. Since the entire point of a partial method is to make it optional, you'd basically be saying: "I have a contract you can rely on. Oh wait, you can't rely on this method, though."
In order for the public API to be reasonable, the partial method has to really either always be there, or always be gone - in which case it shouldn't be partial.
In theory, the language designers could have changed the way partial methods work, in order to allow this. Instead of removing them from everywhere they were called, they could have implemented them using a stub (ie: a method that does nothing). This would not be as efficient, and is unnecessary for the use case envisioned by partial methods, though.
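To make the removal concrete, here is a sketch (the Widget and OnWorked names are hypothetical):

public partial class Widget
{
    public void DoWork()
    {
        // If no implementation of OnWorked exists in any part of Widget,
        // the compiler removes this entire call line from the output.
        OnWorked();
    }

    // Declaration only: implicitly private, and it must return void.
    partial void OnWorked();
}

If another file later adds an implementation, the call comes back; callers never notice either way, which is only possible because everything is private and resolved at compile time.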
Instead of asking why are they private, let's rephrase the question to this:
What would happen if partial methods weren't private?
Consider this simple example:
public partial class Calculator
{
    public int Divide(int dividend, int divisor)
    {
        try
        {
            return dividend / divisor;
        }
        catch (DivideByZeroException ex)
        {
            HandleException(ex);
            return 0;
        }
    }

    partial void HandleException(ArithmeticException ex);
}
Let's ignore for the moment the question of why we would make this a partial method as opposed to an abstract one (I'll come back to that). What's important is that this compiles - and works correctly - whether the HandleException method is implemented or not. If nobody implements it, this just eats the exception and returns 0.
Now let's change the rules, say that the partial method could be protected:
public partial class Calculator
{
    // Snip other methods

    // Invalid code
    partial protected virtual void HandleException(ArithmeticException ex);
}

public class LoggingCalculator : Calculator
{
    protected override void HandleException(ArithmeticException ex)
    {
        LogException(ex);
        base.HandleException(ex);
    }

    private void LogException(ArithmeticException ex) { ... }
}
We have a bit of a problem here. We've "overridden" the HandleException method, except that there's no method to override yet. And I mean the method literally does not exist, it's not getting compiled at all.
What does it mean when our base Calculator invokes HandleException? Should it invoke the derived (overridden) method? If so, what code does the compiler emit for the base HandleException method? Should it be turned into an abstract method? An empty method? And what happens when the derived method calls base.HandleException? Is this supposed to just do nothing? Raise a MethodNotFoundException? It's really hard to follow the principle of least surprise here; almost anything you do is going to be surprising.
Or maybe nothing should happen when HandleException is invoked, because the base method wasn't implemented. This doesn't seem very intuitive, though. Our derived class has gone and implemented this method and the base class has gone and pulled the rug out from under it without us knowing. I can easily imagine some poor developer pulling his hair out, unable to figure out why his overridden method is never getting executed.
Or maybe this code shouldn't compile at all or should produce a warning. But this has a number of problems of its own. Most importantly, it breaks the contract provided by partial methods, which says that neglecting to implement one should never result in a compiler error. You have a base class which is humming along just fine, and then, by virtue of the fact that someone implemented a totally valid derived class in some completely different part of the application, suddenly your app is broken.
And I haven't even started talking about the possibility of the base and derived classes being in different assemblies. What happens if you reference an assembly with a base class that contains a "public" partial method, and you try to override it in a derived class in another assembly? Is the base method there, or not there? What if, originally, the method was implemented, and we wrote a bunch of code against it, but somebody decided to remove the implementation? The compiler has no way to stub out the partial method calls from referencing classes because as far as the compiler is concerned, that method never existed in the first place. It's not there, it's not in the compiled assembly's IL anymore. So now, simply by removing the implementation of a partial method, which is supposed to have no ill effects, we've gone and broken a whole bunch of dependent code.
Now some people might be saying, "so what, I know that I'm not going to try to do this illegal stuff with partial methods." The thing you have to understand is that partial methods - much like partial classes - are primarily intended to help simplify the task of code generation. It's very unlikely that you would ever want to write a partial method yourself period. With machine-generated code, on the other hand, it's actually fairly likely that consumers of the code will want to "inject" code at various locations, and partial methods provide a clean way of doing this.
And therein lies the problem. If you introduce the possibility of compile-time errors due to partial methods, you've created a situation in which the code generator generates code that doesn't compile. This is a very, very bad situation to be in. Think about what you'd do if your favourite designer tool - say Linq to SQL, or the Winforms or ASP.NET designer, suddenly started producing code that sometimes fails to compile, all because some other programmer created some other class that you've never even seen before that happens to have become a little too intimate with the partial method?
In the end it really boils down to a much simpler question, though: What would public/protected partial methods add that you can't already accomplish with abstract methods? The idea behind partials is that you can put them on concrete classes and they'll still compile. Or rather, they won't compile, but they won't produce an error either, they will just be completely ignored. But if you expect them to be called publicly, then they aren't really "optional" anymore, and if you expect them to be overridden in a derived class, then you might as well just make it abstract or virtual and empty. There isn't really any use for a public or protected partial method, other than to confuse the compiler and the poor bastard trying to make sense of it all.
So instead of opening up that Pandora's box, the team said forget it - public/protected partial methods aren't much use anyway, so just make them private. That way we can keep everything safe and sane. And it is. As long as partial methods stay private, they are easy to understand and worry-free. Let's keep it that way!
Because the MS compiler team did not have a requirement to implement this feature.
Here is a possible scenario of what happened inside MS: because VS uses code gen for a lot of its features, one day an MS code-gen developer decided that he/she needed partial methods so that the generated API could be extended by outsiders, and this requirement led the compiler team to act and deliver.
If you don't define the method, what should the compiler do?
If you don't define them, partial methods will not be compiled at all.
This is only possible because the compiler knows where all of the calls to the method are. If the method isn't defined, it will completely remove the lines of code that call the method. (This is also why they can only return void)
If the method is public, this can't work, because the method might be called by a different assembly, which the compiler has no control over.
Reed's and Slak's answers are entirely correct but you seem unwilling to accept their answers.
I will thus try to explain why partial methods are implemented with these restrictions.
Partial methods are for implementing certain sorts of code-gen scenarios with maximal efficiency, both in execution time and in metadata overhead.
The last part is the real reason for them, since the aim is to make them (in Eric's words) "pay for play".
When partial methods were added, the JIT was entirely capable of inlining an empty method, so they already had zero runtime cost at the call sites. The problem is that even then there is a cost involved: the metadata for the class will contain these empty methods, (needlessly) increasing its size, as well as forcing some more effort during the JITing process to optimize them away.
Whilst you may not worry too much about this cost (and indeed many people won't notice it at all), it does make a big difference to code where startup cost matters, or where disk/memory is constrained. You may have noticed the now mandatory use of .NET on Windows Mobile 7 and the Zune; in these areas bloat in type metadata is a considerable cost.
Thus partial methods are designed such that, if they are never used, they have absolutely zero cost: they cease to exist in the output in any way. This comes with some significant constraints to ensure this does not result in bugs.
Taken from the msdn page with my notes.
...the method must return void.
Partial methods can have ref but not out parameters.
Otherwise removing the call to them may leave you with an undefined problem of what to replace the assignment with.
Partial methods cannot be extern, because the presence of the body determines whether they are defining or implementing.
You can make a delegate to a partial method that has been defined and implemented, but not to a partial method that has only been defined.
These follow from the fact the compiler needs to know if it's defined or not, and thus is safe for removal. This leads us to the one you dislike.
Partial methods are implicitly private, and therefore they cannot be virtual.
Your mental model of this feature is that, if the compiler knows that a partial method is implemented, then it should simply allow the partial method to be public (and virtual for that matter), since it can check that you implemented the method.
Were you to change the feature to do this you have two options:
Force all non private partial methods to require implementation.
simple and not likely to involve much effort, but then any such methods are no longer partial in the meaningful sense of the original plan.
Any method declared public is simply deleted if it was not implemented in the assembly.
This allows the partial removal to be applied
It requires quite a lot more effort by the compiler (to detect all references to the method rather than simply needing to look in the composed class itself)
The IntelliSense implementation is a bit confused: should the method be shown always, or shown only when it's been given a definition?
Overload resolution becomes much more complex since you need to decide whether a call to such a method with no definition is either a) a compile time failure or b) results in the selection of the next best option if available.
Side effects within expressions at the call sites are already complex in the private only case. This is mitigated somewhat by the assumption that partial implementations already exhibit a high degree of coupling. This change would increase the potential for coupling.
This has rather complex failure modes in that public methods could be silently deleted by some innocuous change in the original assembly. Other assemblies depending on this one would fail (at compile time) but with a very confusing error (without considerable effort this would apply even to projects in the same solution).
Solution 1 is simply pointless, it adds effort and is of no benefit to the original goals. Worse it may actually hinder the original goals since someone using partial methods this way may not realise they are gaining no reduction in metadata. Also someone may become confused about the errors that result from failing to supply the definition.
That leaves us with solution 2. This solution involves effort (and every feature starts with -100) so it must add some compelling benefit to get it over the -100 (and the additional negatives added for the confusion caused by not the additional edge cases).
What scenarios can you come up with to get things positive?
Your motivating example in the comments above was to "have comments and/or attributes in a different file".
The motivation for XML comments was entirely to increase the locality of documentation and code. If their verbosity is high then outlining exists to mitigate this. My opinion is that this is not sufficient to merit the feature.
The ability to push attributes to a separate location is not in my view that useful, in fact I think having to look in two locations is more confusing. The current private only implementation has this problem too, but it is inevitable and again is mitigated somewhat by the assumption that high coupling that is not visible outside of the class is not as bad as high coupling external to the class.
If you can demonstrate some other compelling reason I'm sure it would be interesting, but the negatives to overcome are considerable.
I've read through all of your comments and I agree with most of the conclusions; however, I have one scenario which I think would benefit from public/protected partial methods WITHOUT losing the original intent:
I have a code generator that generates serialization code and other boilerplate code. In particular, it generates a "Reset" method for every property so that a designer like VS can revert its value to the original. Within the code for this method I generate a call to a partial method Repaint().
The idea is that if the object wants to, it can write the code for it and then do something; otherwise nothing is done and performance is optimal.
The problem is, sometimes the Repaint method exists in the object for purposes OTHER than being called from the generated code. At that point, when I declare the method body, I should be able to make it internal, protected or public. I am defining the method at this point, and yes, I will document it here - not in the declaration I generated, but in the one I wrote by hand. The fact that it was also declared in the generated code should not affect this.
I agree that specifying access modifiers on a partial method makes no sense when you are declaring the method without a body. But when you are declaring the method WITH a body, you should be free to declare it any way you want. I don't see any difference in terms of complexity for the compiler to accept this scenario.
Notice that in partial classes, this is perfectly fine, I can leave one of the partial declarations "naked" without any access modifiers but I can specify them in another partial declaration of the same class. As long as there are no contradictions, everything is fine.
Partial methods are removed by the compiler if they don't have an implementation. As I understand it, the compiler would not be able to remove partial methods if they were accessible as public.

// partial class with public modifier
public partial class Sample
{
    partial void Display();
}

public partial class Sample
{
    partial void Display() { }
}

If partial methods could be public, code outside the class would be able to refer to them like below, which would prevent the compiler from removing a partial method that is not implemented.

// hypothetical call to a partial method that has no implementation
var sample = new Sample();
sample.Display();
The simple reason is that partial methods are not public because they are an implementation detail. They are meant primarily to support designer-related scenarios and are not ever meant to be part of a supported public API. A non-public method works just fine here.
Allowing partial methods to be public is a feature. Features have inherent costs including design, testing, development, etc ... Partial methods were just one of the many features added in a very packed Visual Studio 2008. They were scoped to be as small as possible to fit the scenario in order to leave room for more pressing features such as LINQ.
Not sure if this will answer your question; it did answer mine.
Partial Methods
Excerpts:
If you were allowed to return something, and were in turn using that as a return value from your CallMyMethods function, you would run into trouble when the partial methods were not implemented.
