I am working on my first project using TDD and have hit a bit of a brick wall when it comes to inheritance.
For example, if I have something like this:
public interface IComponent
{
void MethodA();
void MethodB();
}
public class Component : IComponent
{
public virtual void MethodA()
{
// Do something
}
public virtual void MethodB()
{
// Do something
}
}
public class ExtendedComponent : Component
{
public override void MethodA()
{
base.MethodA();
// Do something else
}
}
then I cannot test ExtendedComponent in isolation because it depends on Component.
However, if I use composition to create ExtendedComponent like this
public class ExtendedComponent : IComponent
{
private readonly IComponent _component;
public ExtendedComponent(IComponent component)
{
_component = component;
}
public virtual void MethodA()
{
_component.MethodA();
// Do something else
}
public virtual void MethodB()
{
_component.MethodB();
}
}
I can now test ExtendedComponent in isolation by mocking the wrapped IComponent.
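For illustration, such an isolation test might look like this sketch. It uses a hand-rolled fake rather than a mocking library so it is self-contained (Moq or similar would work the same way); the types from above are repeated so the snippet stands alone:

```csharp
using System.Collections.Generic;

public interface IComponent
{
    void MethodA();
    void MethodB();
}

// Hand-rolled fake that records which methods were called.
public class FakeComponent : IComponent
{
    public List<string> Calls { get; } = new List<string>();
    public void MethodA() => Calls.Add("MethodA");
    public void MethodB() => Calls.Add("MethodB");
}

// The composition-based decorator from the question, repeated here
// so the sketch is self-contained.
public class ExtendedComponent : IComponent
{
    private readonly IComponent _component;
    public ExtendedComponent(IComponent component) => _component = component;

    public void MethodA()
    {
        _component.MethodA();
        // Do something else
    }

    public void MethodB() => _component.MethodB();
}
```

A test then constructs `ExtendedComponent` around the fake, calls `MethodA()`, and asserts that the wrapped component's `MethodA` was invoked exactly once.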
The downside of this approach is that if I want to add a new method to IComponent, I have to add it to Component, to ExtendedComponent, and to every other implementation, of which there could be many. Using inheritance, I could just add the new method to the base Component and nothing else would break.
I really want to be able to test cleanly, so I am favouring the composition route, but I am concerned that unit-testability alone is not a valid reason to always prefer composition over inheritance. Adding functionality at the base level will also require writing lots of tedious delegating methods.
I'd really appreciate some advice on how other people have approached this kind of problem.
Your approach using composition is, in all practicality, how most compilers implement inheritance, so you gain nothing but pay a heavy cost (a lot of boilerplate code). Stick to inheritance when there's an is-a relationship and composition when there's a has-a relationship (these are, of course, neither golden nor the only rules).
You don't need to worry about testing the extended component 'in isolation', because it does not 'depend' on Component; it IS a Component (at least it is the way you coded it).
All the tests you originally wrote for the Component class are still fine and cover all the unchanged behaviour in the extended class as well. All you need to do is write new tests for the added functionality in ExtendedComponent.
public class ComponentTests{
    [Fact]
    public void MethodADoesSomething(){
        new Component().MethodA();
        // Assert.xxx(...) - verify the component's observable behaviour
    }
    [Fact]
    public void MethodBDoesSomething(){
        new Component().MethodB();
        // Assert.xxx(...) - verify the component's observable behaviour
    }
}
public class ExtendedComponentTests{
    [Fact]
    public void MethodADoesSomething(){
        new ExtendedComponent().MethodA();
        // Assert.xxx(...) - verify the extended behaviour as well
    }
}
You can see from the above that MethodA's functionality is tested for both the component AND the extended component, while the new functionality is tested only for ExtendedComponent.
The key idea here is that you can have inheritance on the unit-test side too.
I use the following approach in this scenario: a parallel inheritance hierarchy of test classes, e.g.
[TestClass]
public abstract class IComponentTest
{
[TestMethod]
public void TestMethodA()
{
// Interface level expectations.
}
[TestMethod]
public void TestMethodB()
{
// Interface level expectations.
}
}
[TestClass]
public class ComponentTest : IComponentTest
{
[TestMethod]
public void TestMethodACustom()
{
// Test Component specific test, not general test
}
[TestMethod]
public void TestMethodBCustom()
{
// Test Component specific test, not general test
}
}
[TestClass]
public class ExtendedComponentTest : ComponentTest
{
[TestMethod]
public void TestMethodACustom2()
{
// Test ExtendedComponent-specific behaviour, not general test
}
}
Each abstract or concrete test class deals with expectations at its own level, which keeps the hierarchy extensible and maintainable.
You are correct - using composition over inheritance where it is not appropriate is not the way to go. Based on the information that you have provided here, it is not clear which is better. Ask yourself which one is more appropriate in this situation. By using inheritance, you get polymorphism and virtualization of methods. If you use composition, you are effectively separating your "front-end" logic from the isolated "back-end" logic -- this approach is easier in that changing the underlying component does not have a ripple effect on the rest of the code as inheritance often does.
All in all, this should not affect how you test your code. There are many frameworks for testing available, but this should not affect which design pattern you choose.
I'm trying to write unit tests for D365 Plugins and CodeActivities (both being classes). There are small tests that should run in every plugin, such as:
[TestMethod]
public void NullLocalPluginContext()
{
XrmFakedContext context = new XrmFakedContext();
Assert.ThrowsException<InvalidPluginExecutionException>(
() => context.ExecutePluginWith<SomePlugin>(null));
}
Where SomePlugin is the class to be tested (different for each child) and cannot be abstract (ExecutePluginWith expects a concrete IPlugin). For example, here it's CheckDuplicateOrder in the child:
[TestClass]
public class CheckDuplicateOrderTest
{
[TestMethod]
public void NullLocalPluginContext()
{
XrmFakedContext context = new XrmFakedContext();
Assert.ThrowsException<Exception>(
() => context.ExecutePluginWith<CheckDuplicateOrder>(null));
}
}
For these small tests I'd like to have a parent with shared tests, but I don't know how to reference the 'to-be' child's target.
I prefer MSTest, but any NuGet framework is accepted.
Maybe this helps with understanding
Every plugin would have its own test class.
Every plugin test class needs the basic tests.
These basic tests should be inherited from a parent (so they don't take up space).
Plugins: Dog, Cat, Mouse
PluginTests: DogTest, CatTest, MouseTest
BasePluginTest -> should have the shared tests, where SomePlugin in the example is Dog/Cat/Mouse. But I don't know how to reference it. Every plugin test would have a function TestWalk() { .. ExecutePluginWith<SomePlugin> }. The CatTest should exercise Cat, the DogTest should exercise Dog.
As with a normal class, you should favour composition over inheritance. Even though test classes do not have to follow the same rules and guidelines as normal classes, that doesn't mean we cannot apply them.
So when you feel you have some common functionality across your test classes, you should extract a class that is used by your tests. You would do the same for a normal business class, wouldn't you?
class CommonFunc
{
    public static bool NullLocalPluginContext<T, TException>()
        where T : IPlugin
        where TException : Exception
    {
        XrmFakedContext context = new XrmFakedContext();
        try { context.ExecutePluginWith<T>(null); }
        catch (TException) { return true; }
        return false;
    }
}
[TestClass]
public class CheckDuplicateOrderTests
{
[TestMethod]
public void NullLocalPluginContext()
{
Assert.IsTrue(CommonFunc.NullLocalPluginContext<CheckDuplicateOrder, Exception>());
}
}
[TestClass]
public class SomeOtherPluginTests
{
[TestMethod]
public void NullLocalPluginContext()
{
Assert.IsTrue(CommonFunc.NullLocalPluginContext<SomePlugin, InvalidPluginExecutionException>());
}
}
You could also make your common method rethrow the exception instead of just returning true or false if you want to log the actual exception being thrown within the test-framework.
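That rethrowing variant could be sketched like this. The Xrm-specific call is abstracted into an Action so the helper stands alone; the helper name is illustrative, not from the original:

```csharp
using System;

public static class CommonFunc
{
    // Rethrowing variant: if a different exception type is thrown, it
    // bubbles up unchanged so the test framework logs the real failure;
    // if nothing is thrown, the helper fails the test explicitly.
    public static void AssertThrows<TException>(Action act)
        where TException : Exception
    {
        try
        {
            act();
        }
        catch (TException)
        {
            return; // the expected exception type was thrown
        }
        throw new Exception(
            "Expected " + typeof(TException).Name + " was not thrown.");
    }
}
```

A test would then call `CommonFunc.AssertThrows<InvalidPluginExecutionException>(() => context.ExecutePluginWith<SomePlugin>(null));`, and any unexpected exception surfaces with its original stack trace.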
Disclaimer: some people won't like this because it abuses class inheritance to save code. It's a potential tool for the job, you can evaluate whether it works for you or not.
This seems like it could be achievable with a base class to define the shared tests. Maybe something like this would achieve what you're trying to do?
// note: no [TestClass] on this type so it doesn't get discovered by MSTest.
// Probably should also be abstract.
public class SharedTests<T> where T : IPlugin
{
[TestMethod]
public void NullLocalPluginContext()
{
XrmFakedContext context = new XrmFakedContext();
Assert.ThrowsException<Exception>(
() => context.ExecutePluginWith<T>(null));
}
}
Your plugin classes would inherit from this class:
[TestClass]
public class CheckDuplicateOrderTests : SharedTests<CheckDuplicateOrder>
{
// NullLocalPluginContext test is inherited from the parent type
}
[TestClass]
public class SomeOtherPluginTests : SharedTests<SomeOtherPlugin>
{
// Also has NullLocalPluginContext test inherited, but for testing SomeOtherPlugin
}
I currently have a set of unit tests which are consistent for a number of Rest API endpoints. Say the class is defined like so.
public abstract class GetAllRouteTests<TModel, TModule>
{
[Test]
public void HasModels_ReturnsPagedModel()
{
// Implemented test
}
}
With the implemented test fixture looking like:
[TestFixture(Category = "/api/route-to-test")]
public class GetAllTheThings : GetAllRouteTests<TheThing, ModuleTheThings> { }
This enables me to run a number of common tests across all GET all/list routes. It also means that I have classes which are linked directly to the module being tested, and links between tests and code in Resharper / Visual Studio / CI "just work".
The challenge is that some routes require query parameters for testing other pathways through the route code;
e.g. /api/route-to-test?category=big.
As [TestCaseSource] requires a static field, property, or method, there appears to be no nice way to override a list of query strings to pass. The closest thing I have come up with seems like a hack. Namely:
public abstract class GetAllRouteTests<TModel, TModule>
{
[TestCaseSource("StaticToDefineLater")]
public void HasModels_ReturnsPagedModel(dynamic args)
{
// Implemented test
}
}
[TestFixture(Category = "/api/route-to-test")]
public class GetAllTheThings : GetAllRouteTests<TheThing, ModuleTheThings>
{
static IEnumerable<dynamic> StaticToDefineLater()
{
// yield return all the query things
}
}
This works because the static method is defined for the implemented test class, and is found by NUnit. Huge hack. Also problematic for someone else consuming the abstract class as they need to "know" to implement "StaticToDefineLater" as a static something.
I am looking for a better way of achieving this. It seems like non-static TestCaseSource sources were removed in NUnit 3.x, so that's out.
Thanks in advance.
NOTES:
GetAllRouteTests<> implements a number of tests, not just the one shown.
Iterating through all the routes in one test will "hide" what is covered, so would like to avoid that.
The way I solved a similar problem is by having a base source class that implements IEnumerable (another acceptable source for NUnit); consider whether this design suits your use case:
// in the parent fixture...
public abstract class TestCases : IEnumerable
{
protected abstract List<List<object>> Cases { get; }
public IEnumerator GetEnumerator()
{
return Cases.GetEnumerator();
}
}
// in tests
private class TestCasesForTestFoobar : TestCases
{
protected override List<List<object>> Cases => /* sets of args */
}
[TestCaseSource(typeof(TestCasesForTestFoobar))]
public void TestFoobar(List<object> args)
{
// implemented test
}
So I have the following class, which takes, in its constructor, three different implementations of the same interface as dependencies:
public class MyTestClass : ISomeInterface<string>
{
// Constructor
public MyTestClass([Named("ImplementationA")] IMyTestInterface implementationA,
[Named("ImplementationB")] IMyTestInterface implementationB,
[Named("ImplementationC")] IMyTestInterface implementationC)
{
// Some logic here
}
public void MethodA(string value)
{
}
}
When using this class outside of unit tests, I inject the dependencies in question with Ninject, i.e. I have something like this:
public class MyNinjectModule : NinjectModule
{
public override void Load()
{
this.Bind<IMyTestInterface>().To<ImplementationA>().InRequestScope().Named("ImplementationA");
this.Bind<IMyTestInterface>().To<ImplementationB>().InRequestScope().Named("ImplementationB");
this.Bind<IMyTestInterface>().To<ImplementationC>().InRequestScope().Named("ImplementationC");
}
}
This works fine, but the problem is that I now want to unit test this class with AutoFixture, which leads me to my question: how do I create an instance of MyTestClass with those three specific implementations of IMyTestInterface, i.e. ImplementationA, ImplementationB, and ImplementationC? If I just do something like this:
private ISomeInterface<string> testInstance;
private Fixture fixture;
[TestInitialize]
public void SetUp()
{
this.fixture = new Fixture();
this.fixture.Customize(new AutoMoqCustomization());
this.testInstance = this.fixture.Create<MyTestClass>();
}
AutoFixture creates an instance of MyTestClass, but with some random implementations of IMyTestInterface, which is not the dependencies I want. I've been searching for an answer online and the only thing I've found so far that seems similar to what I need, but not quite, is this
AutoFixture was originally built as a tool for Test-Driven Development (TDD), and TDD is all about feedback. In the spirit of GOOS, you should listen to your tests. If the tests are hard to write, you should reconsider your API design. AutoFixture tends to amplify that sort of feedback, so my first reaction is to recommend reconsidering the design.
Concrete dependencies
It sounds like you need the dependencies to be specific implementations of an interface. If that's the case, the current design is pulling in the wrong direction, and also violating the Liskov Substitution Principle (LSP). You could, instead, consider refactoring to Concrete Dependencies:
public class MyTestClass : ISomeInterface<string>
{
// Constructor
public MyTestClass(ImplementationA implementationA,
ImplementationB implementationB,
ImplementationC implementationC)
{
// Some logic here
}
public void MethodA(string value)
{
}
}
This makes it clear that MyTestClass depends on those three concrete classes, instead of obscuring the real dependencies.
It also has the added benefit of decoupling the design from a particular DI Container.
Zero, one, two, many
Another option is to adhere to the LSP, and allow any implementation of IMyTestInterface. If you do that, you shouldn't need the [Named] attributes any longer:
public class MyTestClass : ISomeInterface<string>
{
// Constructor
public MyTestClass(IMyTestInterface implementationA,
IMyTestInterface implementationB,
IMyTestInterface implementationC)
{
// Some logic here
}
public void MethodA(string value)
{
}
}
A question that may arise from such a design is: how do I distinguish between each dependency? Most of my Role Hints article series deals with this question.
Three objects, however, is grounds for further reflection. In software design, my experience is that when it comes to cardinality of identical arguments, there are only four useful sets: none, one, two and many.
No arguments is basically a statement or axiom.
One argument indicates a unary operation.
Two arguments indicate a binary operation.
More than two arguments essentially just indicates a multitude. In mathematics and software engineering, there are precious few ternary operations.
Thus, once you're past two arguments, the question is whether three arguments don't really generalise to any number of arguments. If they do, a design might instead look like this:
public class MyTestClass : ISomeInterface<string>
{
// Constructor
public MyTestClass(IEnumerable<IMyTestInterface> deps)
{
// Some logic here
}
public void MethodA(string value)
{
}
}
Or, if you can create a Composite from IMyTestInterface, you could even reduce it to something like this:
public class MyTestClass : ISomeInterface<string>
{
// Constructor
public MyTestClass(IMyTestInterface dep)
{
// Some logic here
}
public void MethodA(string value)
{
}
}
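If a Composite is viable, it might look something like this sketch. Note that the original doesn't show IMyTestInterface's members, so a single DoWork method is assumed here, and all type names beyond IMyTestInterface are invented for illustration:

```csharp
using System.Collections.Generic;

// The interface's members aren't shown in the original question,
// so a single DoWork method is assumed for illustration.
public interface IMyTestInterface
{
    void DoWork();
}

// Hypothetical Composite: implements the interface once by
// delegating to any number of children, in order.
public class CompositeMyTestInterface : IMyTestInterface
{
    private readonly IReadOnlyList<IMyTestInterface> _children;

    public CompositeMyTestInterface(params IMyTestInterface[] children)
    {
        _children = children;
    }

    public void DoWork()
    {
        foreach (var child in _children)
            child.DoWork();
    }
}

// Small recording implementation, handy for demonstrating delegation.
public class RecordingImpl : IMyTestInterface
{
    public int Count;
    public void DoWork() => Count++;
}
```

MyTestClass then takes a single IMyTestInterface, and the wiring of ImplementationA/B/C into one composite happens at the composition root rather than inside the class.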
All of these options make the design clearer, and should also have the derived benefit of making it easier to test with AutoFixture.
FWIW, while I believe that my first answer is the best answer, here's a simple way to address the issue with AutoFixture, in case for some reason you can't change the design:
fixture.Register(() =>
new MyTestClass(new ImpA(), new ImpB(), new ImpC()));
There are other options as well, but none as simple as this one, I believe.
I have a base class:
public abstract class MyBaseClass
{
protected virtual void Method1()
{
}
}
and a derived class:
public class MyDerivedClass : MyBaseClass
{
public void Method2()
{
base.Method1();
}
}
I want to write a unit test for Method2 to verify that it calls Method1 on the base class. I'm using Moq as my mocking library. Is this possible?
I came across a related SO link:
Mocking a base class method call with Moq
in which the 2nd answer suggests it can be achieved by setting CallBase property to true on the mock object. However it's not clear how this would enable the call to the base class method (Method1 in the above example) to be verified.
Appreciate any assistance with this.
Unit tests should verify behavior, not implementation. There are several reasons for this:
The results are the goal, not how you get the results
Testing results allows you to improve the implementation without re-writing your tests
Implementations are harder to mock
You might be able to put in hooks or create mocks that verify that the base method was called, but do you really care how the answer was achieved, or do you care that the answer is right?
If the particular implementation you require has side effects that you can verify, then that is what you should be validating.
Mocking the base class from the perspective of the derived class is not possible. In your simple example, I would suggest one of two options.
Option 1: In the event that MyDerivedClass really shouldn't care what MyBaseClass is up to, then use dependency injection! Yay abstraction!
public class MyClass
{
private readonly IUsedToBeBaseClass _myDependency;
public MyClass(IUsedToBeBaseClass myDependency){
_myDependency = myDependency;
}
public void Method2()
{
_myDependency.Method1();
}
}
Elsewhere in test land...
[TestClass]
public class TestMyDependency {
[TestMethod]
public void TestThatMyDependencyIsCalled() {
var dependency = new Mock<IUsedToBeBaseClass>();
var unitUnderTest = new MyClass(dependency.Object);
unitUnderTest.Method2();
dependency.Verify(x => x.Method1(), Times.Once);
}
}
Option 2: In the event that MyDerivedClass NEEDS to know what MyBaseClass is doing, then test that MyBaseClass is doing the right thing.
In alternative test land...
[TestClass]
public class TestMyDependency {
[TestMethod]
public void TestThatMyDependencyIsCalled() {
var unitUnderTest = new MyDerivedClass();
unitUnderTest.Method2();
/* verify base class behavior #1 inside Method1() */
/* verify base class behavior #2 inside Method1() */
/* ... */
}
}
What you're describing is not a test of your code, but a test of the behavior of the language. That's fine, because it's a good way to ensure that the language behaves the way we think it does. I used to write lots of little console apps when I was learning. I wish I'd known about unit testing then because it's a better way to go about it.
But once you've tested it and confirmed that the language behaves the way you expect, I wouldn't keep writing tests for that. You can just test the behavior of your code.
Here's a real simple example:
public class TheBaseClass
{
public readonly List<string> Output = new List<string>();
public virtual void WriteToOutput()
{
Output.Add("TheBaseClass");
}
}
public class TheDerivedClass : TheBaseClass
{
public override void WriteToOutput()
{
Output.Add("TheDerivedClass");
base.WriteToOutput();
}
}
Unit test
[TestMethod]
public void EnsureDerivedClassCallsBaseClass()
{
var testSubject = new TheDerivedClass();
testSubject.WriteToOutput();
Assert.IsTrue(testSubject.Output.Contains("TheBaseClass"));
}
I'm creating a series of classes with a 'constructor' and 'destructor' paradigm.
When a derived class is instantiated, the SetUp() method of all its base classes must be called first, followed by its own SetUp() method (if it implements one).
When the derived class has a TearDown() method, it must perform its teardown actions first, then call the TearDown() method of its base class, which must then also call base.TearDown(), and so on.
For example, if I were in control of every class that could ever inherit from Base, I could enforce the following convention:
public abstract class Base {
public virtual void SetUp() {
//Base setup actions
}
public virtual void TearDown() {
//Base teardown actions
}
}
public abstract class BetterBase : Base {
public override void SetUp() {
base.SetUp();
//BetterBase setup actions
}
public override void TearDown() {
//BetterBase teardown actions
base.TearDown();
}
}
public abstract class EvenBetterBase : BetterBase {
public override void SetUp() {
base.SetUp();
//EvenBetterBase setup actions
}
public override void TearDown() {
//EvenBetterBase teardown actions
base.TearDown();
}
}
But one day, some jerk will come along and mess up the convention:
public abstract class IDontReadDocumentation : EvenBetterBase {
public override void TearDown() {
base.TearDown();
//my teardown actions
}
}
They might call base.TearDown() before attempting their own actions, or not call the base methods at all, and do some serious damage.
Because I don't trust derivers of my abstract class to follow convention, and they might choose to derive from any one of my Base classes of varying complexity, the only option I could think of is to seal the virtual method in each new base class and expose some new abstract method where the deriver can specify their own actions if they like:
public abstract class Base {
public virtual void DeriverSetUp() { } //Deriver may have their own or not
public virtual void DeriverTearDown() { }
public void SetUp() {
//Base setup actions
DeriverSetUp();
}
public void TearDown() {
DeriverTearDown();
//Base teardown actions
}
}
public abstract class BetterBase : Base {
public virtual void New_DeriverSetUp() { }
public virtual void New_DeriverTearDown() { }
public sealed override void DeriverSetUp() {
//BetterBase setup actions
New_DeriverSetUp();
}
public sealed override void DeriverTearDown() {
New_DeriverTearDown();
//BetterBase teardown actions
}
}
And then of course
public abstract class EvenBetterBase : BetterBase {
public virtual void New_New_DeriverSetUp() { }
public virtual void New_New_DeriverTearDown() { }
public sealed override void New_DeriverSetUp() {
//EvenBetterBase setup actions
New_New_DeriverSetUp();
}
public sealed override void New_DeriverTearDown() {
New_New_DeriverTearDown();
//EvenBetterBase teardown actions
}
}
Well, at least now, no matter which class someone tries to derive from, it's impossible for them to mess up the SetUp and TearDown logic. But this pattern doesn't take long to get old.
This is a classic pattern when there's only one level of inheritance to worry about, but in my case we may get progressively more complex classes that all rely on maintaining the SetUp and TearDown method orders.
What am I to do?
Note that it isn't sufficient for me to simply perform SetUp and TearDown actions in Constructors and Destructors of these classes (even though doing so would guarantee precisely the order I'm seeking.) If you must know, this is infrastructure for a unit testing suite. The [TestInitialize] and [TestCleanup] attributes are specified on the Base class SetUp and TearDown methods, which are used for all deriving unit test classes - which is why constructors and destructors cannot be used, and also why properly cascading calls is essential.
Perhaps using 'Virtual' and/or 'Abstract' methods is the wrong design pattern here, but then I don't know what the appropriate one is. I want it to be nice and easy for a deriving class to switch from using one base class to another, without having to change any of their method names.
I came up with this neat pattern that stores actions registered at construction in an ordered list.
Pros:
Guarantees order of setup and teardown
Clear way of implementing additional setup and teardown logic.
Consistent pattern no matter what base class is inherited.
Cons:
Requires base instance fields, so won't work in cases where this pattern is needed for static classes. (Luckily that isn't a problem, since VS Unit Tests can only be defined in non-static classes.)
[TestClass]
public abstract class Base
{
private List<Action> SetUpActions = new List<Action>();
private List<Action> TearDownActions = new List<Action>();
public void SetUp()
{
foreach( Action a in SetUpActions )
a.Invoke();
}
public void TearDown()
{
foreach( Action a in TearDownActions.Reverse<Action>() )
a.Invoke();
}
protected void AddSetUpAction(Action a) { SetUpActions.Add(a); }
protected void AddTearDownAction(Action a) { TearDownActions.Add(a); }
}
That's it. All the hard work is done by the base class now.
[TestClass]
public abstract class BetterBase : Base {
public BetterBase() {
AddSetUpAction(SetUp);
AddTearDownAction(TearDown);
}
private static void SetUp() { /* BetterBase setup actions */ }
private static void TearDown() { /* BetterBase teardown actions */ }
}
[TestClass]
public abstract class EvenBetterBase : BetterBase {
public EvenBetterBase() {
AddSetUpAction(SetUp);
AddTearDownAction(TearDown);
}
private static void SetUp() { /* EvenBetterBase setup actions */ }
private static void TearDown() { /* EvenBetterBase teardown actions */ }
}
And derivers using any of the base classes are free to use their judgement and have nice clear methods for performing certain tasks, or pass in anonymous delegates, or not define custom SetUp or TearDown actions at all:
public abstract class SomeGuysTests : EvenBetterBase {
public SomeGuysTests() {
AddSetUpAction(HelperMethods.SetUpDatabaseConnection);
AddTearDownAction(delegate{ Process.Start("CMD.exe", "rd /s/q C:\\"); });
}
}
I think inheritance here is not the answer to your problem. I had a testing framework with multiple levels of set-up and tear-down, and it was a nightmare, especially with inherited SetUp and TearDown methods. I have since moved away from that. The testing framework still depends on the template, but I don't override tear-downs and set-ups; instead I provide one-call methods for whatever must be done in those steps. My team-mates had exactly the same problem with the order of set-ups and tear-downs. As soon as I stopped calling them SetUp and TearDown and gave them meaningful names like CreateDatabase or StartStorageEmulator, everybody got the idea and life became easier.
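A minimal sketch of that naming idea follows. All class and method names are invented for illustration, and the Log list exists only to make the ordering observable; in a real MSTest suite the Init/Cleanup methods would carry [TestInitialize]/[TestCleanup]:

```csharp
using System.Collections.Generic;

// One-call helpers with meaningful names instead of an inherited
// SetUp/TearDown chain. Names here are illustrative only.
public abstract class TestBase
{
    public readonly List<string> Log = new List<string>();

    protected void CreateDatabase() => Log.Add("CreateDatabase");
    protected void StartStorageEmulator() => Log.Add("StartStorageEmulator");
    protected void DropDatabase() => Log.Add("DropDatabase");
}

public class OrderTests : TestBase
{
    // In MSTest this method would carry [TestInitialize].
    public void Init()
    {
        // The order is explicit at the call site instead of hidden
        // in an inheritance chain.
        CreateDatabase();
        StartStorageEmulator();
    }

    // In MSTest this method would carry [TestCleanup].
    public void Cleanup() => DropDatabase();
}
```

Each test class composes exactly the steps it needs, so a reader sees the order directly instead of tracing base-class overrides.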
Another thing I see here is a test smell: your tests are doing too much pre-work and post-work. You can't get away from this if these are actually integration tests, but for actual unit tests it is definitely a test smell and should be looked at from another point of view.
Sorry, I was not much help with the actual question, but sometimes your problem lies outside your question. :-)