I'm creating a series of classes with a 'constructor' and 'destructor' paradigm.
When a derived class is instantiated, the SetUp() methods of all its base classes must be called first, followed by its own SetUp() method (if it implements one).
When the derived class has a TearDown() method, it must perform its own teardown actions first, then call the TearDown() method of its base class, which in turn must call base.TearDown(), and so on.
For example, if I were in control of every class that could ever inherit from Base, I could enforce the following convention:
public abstract class Base {
public virtual void SetUp() {
//Base setup actions
}
public virtual void TearDown() {
//Base teardown actions
}
}
public abstract class BetterBase : Base {
public override void SetUp() {
base.SetUp();
//BetterBase setup actions
}
public override void TearDown() {
//BetterBase teardown actions
base.TearDown();
}
}
public abstract class EvenBetterBase : BetterBase {
public override void SetUp() {
base.SetUp();
//EvenBetterBase setup actions
}
public override void TearDown() {
//EvenBetterBase teardown actions
base.TearDown();
}
}
But one day, some jerk will come along and mess up the convention:
public abstract class IDontReadDocumentation : EvenBetterBase {
public override void TearDown() {
base.TearDown();
//my teardown actions
}
}
They might call base.TearDown() before attempting their own actions, or not call the base methods at all, and do some serious damage.
Because I don't trust derivers of my abstract class to follow convention, and they might choose to derive from any one of my Base classes of varying complexity, the only option I could think of is to seal the virtual method in each new base class and expose some new abstract method where the deriver can specify their own actions if they like:
public abstract class Base {
public virtual void DeriverSetUp() { } //Deriver may have their own or not
public virtual void DeriverTearDown() { }
public void SetUp() {
//Base setup actions
DeriverSetUp();
}
public void TearDown() {
DeriverTearDown();
//Base teardown actions
}
}
public abstract class BetterBase : Base {
public virtual void New_DeriverSetUp() { }
public virtual void New_DeriverTearDown() { }
public sealed override void DeriverSetUp() {
//BetterBase setup actions
New_DeriverSetUp();
}
public sealed override void DeriverTearDown() {
New_DeriverTearDown();
//BetterBase teardown actions
}
}
And then of course
public abstract class EvenBetterBase : BetterBase {
public virtual void New_New_DeriverSetUp() { }
public virtual void New_New_DeriverTearDown() { }
public sealed override void New_DeriverSetUp() {
//EvenBetterBase setup actions
New_New_DeriverSetUp();
}
public sealed override void New_DeriverTearDown() {
New_New_DeriverTearDown();
//EvenBetterBase teardown actions
}
}
Well, at least now, no matter which class someone tries to derive from, it's impossible for them to mess up the SetUp and TearDown logic. But this pattern doesn't take long to get old.
This is a classic pattern when there's only one level of inheritance to worry about, but in my case we may get progressively more complex classes that all rely on maintaining the SetUp and TearDown method orders.
What am I to do?
Note that it isn't sufficient for me to simply perform SetUp and TearDown actions in the constructors and destructors of these classes (even though doing so would guarantee precisely the order I'm seeking). If you must know, this is infrastructure for a unit testing suite. The [TestInitialize] and [TestCleanup] attributes are specified on the Base class's SetUp and TearDown methods, which are used for all deriving unit test classes - which is why constructors and destructors cannot be used, and also why properly cascading calls is essential.
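To make the constraint concrete, this is roughly how the attributes sit on the base class in MSTest (a sketch of the arrangement described above, not additional code from the actual suite):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public abstract class Base
{
    // MSTest runs this before every [TestMethod] in every class
    // deriving from Base, which is why the cascade has to happen
    // inside SetUp itself rather than in a constructor.
    [TestInitialize]
    public virtual void SetUp() { /* Base setup actions */ }

    // Likewise, this runs after every test method.
    [TestCleanup]
    public virtual void TearDown() { /* Base teardown actions */ }
}
```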
Perhaps using 'Virtual' and/or 'Abstract' methods is the wrong design pattern here, but then I don't know what the appropriate one is. I want it to be nice and easy for a deriving class to switch from using one base class to another, without having to change any of their method names.
I came up with this neat pattern that stores actions registered at construction in an ordered list.
Pros:
Guarantees order of setup and teardown
Clear way of implementing additional setup and teardown logic.
Consistent pattern no matter what base class is inherited.
Cons:
Requires base instance fields, so won't work in cases where this pattern is needed for static classes. (Luckily that isn't a problem, since VS Unit Tests can only be defined in non-static classes.)
[TestClass]
public abstract class Base
{
private List<Action> SetUpActions = new List<Action>();
private List<Action> TearDownActions = new List<Action>();
public void SetUp()
{
foreach( Action a in SetUpActions )
a.Invoke();
}
public void TearDown()
{
foreach( Action a in TearDownActions.Reverse<Action>() )
a.Invoke();
}
protected void AddSetUpAction(Action a) { SetUpActions.Add(a); }
protected void AddTearDownAction(Action a) { TearDownActions.Add(a); }
}
That's it. All the hard work is done by the base class now.
[TestClass]
public abstract class BetterBase : Base {
public BetterBase() {
AddSetUpAction(SetUp);
AddTearDownAction(TearDown);
}
private static void SetUp() { /* BetterBase setup actions */ }
private static void TearDown() { /* BetterBase teardown actions */ }
}
[TestClass]
public abstract class EvenBetterBase : BetterBase {
public EvenBetterBase() {
AddSetUpAction(SetUp);
AddTearDownAction(TearDown);
}
private static void SetUp() { /* EvenBetterBase setup actions */ }
private static void TearDown() { /* EvenBetterBase teardown actions */ }
}
And derivers using any of the base classes are free to use their judgement and have nice clear methods for performing certain tasks, or pass in anonymous delegates, or not define custom SetUp or TearDown actions at all:
public abstract class SomeGuysTests : EvenBetterBase {
public SomeGuysTests() {
AddSetUpAction(HelperMethods.SetUpDatabaseConnection);
AddTearDownAction(delegate{ Process.Start("CMD.exe", "rd /s/q C:\\"); });
}
}
I think inheritance is not an answer to your problem here. I had a testing framework with multiple levels of set-up and tear-down, and it was a nightmare, especially with inherited SetUp and TearDown methods. I have since moved away from that. The testing framework still depends on the template, but I don't override tear-downs and set-ups; instead I provide one-call methods for whatever must be done in those steps. My team-mates had exactly the same problem with the order of set-ups and tear-downs. As soon as I stopped calling them SetUp and TearDown and instead gave them meaningful names like CreateDatabase or StartStorageEmulator, everybody got the idea and life became easier.
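For illustration, the "meaningful one-call methods" idea might look something like this (all class and member names here are invented for the sketch, not taken from a real framework):

```csharp
public abstract class TestBase
{
    // Each helper does one whole job; nothing is hidden in an
    // override chain, so the order is visible at the call site.
    protected void CreateDatabase() { /* create and seed the test DB */ }
    protected void DropDatabase() { /* undo whatever CreateDatabase made */ }
    protected void StartStorageEmulator() { /* start the emulator */ }
}

public class MyTests : TestBase
{
    [TestInitialize]
    public void Init()
    {
        CreateDatabase();        // explicit order,
        StartStorageEmulator();  // chosen by the test itself
    }

    [TestCleanup]
    public void Cleanup()
    {
        DropDatabase();
    }
}
```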
Another thing I see here is a test smell: your tests are doing too much pre-work and post-work. You can't get away from this if these are actually integration tests, but for actual unit tests this is definitely a smell and should be looked at from another point of view.
Sorry, this wasn't much help with the actual question, but sometimes your problem lies outside your question :-)
Related
I'm new to the concept of unit testing and have started using xUnit framework to write my tests. I have a question that I would like to ask about unit testing.
In my test project I'm going to have an interface containing all the common test scenarios, implemented by a base class. I don't want to repeat the same code for the same test over and over in each test class file, so the base class will implement each of the methods outlined in the interface, marked virtual so that a concrete test class can override one when it needs to.
For example:
public interface IPersonTests
{
void ValidateAge();
void ValidateExperience();
void ValidateAddress();
}
public abstract class BasePersonTests : IPersonTests
{
public virtual void ValidateAge()
{
// method implementation
}
public virtual void ValidateExperience()
{
// method implementation
}
public void ValidateAddress()
{
// method implementation
}
}
public class PersonA : BasePersonTests
{
[Fact]
public override void ValidateAge()
{
// different method implementation to the one specified in the base class
}
}
Is it okay to have a code structure like the above for a unit test project?
I have recently started using xUnit. We have several methods to test that all use the same test methods and the same test data, so creating a base class that contains all the test methods and calls a virtual method that can be overridden was, obviously, my first approach.
However, when I execute the test runner, the base class is also executed, and since its virtual method doesn't contain any code, most of these tests fail.
So how should I structure this code so that the methods in BaseEmployeeTest are not called?
public class BaseEmployeeTest : IEmployeeTest
{
public virtual void CallTestMethod(string userSessionId, long employeeId)
{
// no-op
}
[Theory]
[MemberData(nameof(TestData.Emp), MemberType = typeof(Data))]
public void Pu_CanSee_Own_Customer_Employees(long employeeId)
{
var ex = Record.Exception(() => CallTestMethod(_fixture.PuUserSessionId, employeeId));
Assert.Null(ex);
}
}
public class Test : BaseEmployeeTest
{
public override void CallTestMethod(string userSessionId, long employeeId)
{
CallTestDataMethod("x", "y");
}
}
If you make it abstract, that should cover your need, i.e. replace
public class BaseEmployeeTest
with
public abstract class BaseEmployeeTest
I have a base class:
public abstract class MyBaseClass
{
protected virtual void Method1()
{
}
}
and a derived class:
public class MyDerivedClass : MyBaseClass
{
public void Method2()
{
base.Method1();
}
}
I want to write a unit test for Method2 to verify that it calls Method1 on the base class. I'm using Moq as my mocking library. Is this possible?
I came across a related SO link:
Mocking a base class method call with Moq
in which the 2nd answer suggests it can be achieved by setting CallBase property to true on the mock object. However it's not clear how this would enable the call to the base class method (Method1 in the above example) to be verified.
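For reference, CallBase in Moq looks like the following. It makes members you have not set up run their real implementations, but (as the answers below explain) it does not let you verify the base.Method1() call itself, because base.X() is a non-virtual dispatch that Moq's proxy never sees. A sketch only, assuming the MyBaseClass/MyDerivedClass types above:

```csharp
using Moq;

var mock = new Mock<MyDerivedClass> { CallBase = true };

// Method2 is not set up, so the real implementation runs,
// which in turn calls base.Method1() internally.
mock.Object.Method2();

// A Verify here could only observe calls dispatched through the
// proxy; the internal base.Method1() call never goes through it,
// so there is nothing for Moq to record.
```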
Appreciate any assistance with this.
Unit tests should verify behavior, not implementation. There are several reasons for this:
The results are the goal, not how you get the results
Testing results allows you to improve the implementation without re-writing your tests
Implementations are harder to mock
You might be able to put in hooks or create mocks that verify that the base method was called, but do you really care how the answer was achieved, or do you care that the answer is right?
If the particular implementation you require has side effects that you can verify, then that is what you should be validating.
Mocking the base class from the perspective of the derived class is not possible. In your simple example, I would suggest one of two options.
Option 1: In the event that MyDerivedClass really shouldn't care what MyBaseClass is up to, then use dependency injection! Yay abstraction!
public class MyClass
{
private readonly IUsedToBeBaseClass _myDependency;
public MyClass(IUsedToBeBaseClass myDependency){
_myDependency = myDependency;
}
public void Method2()
{
_myDependency.Method1();
}
}
Elsewhere in test land...
[TestClass]
public class TestMyDependency {
[TestMethod]
public void TestThatMyDependencyIsCalled() {
var dependency = new Mock<IUsedToBeBaseClass>();
var unitUnderTest = new MyClass(dependency.Object);
unitUnderTest.Method2();
dependency.Verify(x => x.Method1(), Times.Once);
}
}
Option 2: In the event that MyDerivedClass NEEDS to know what MyBaseClass is doing, then test that MyBaseClass is doing the right thing.
In alternative test land...
[TestClass]
public class TestMyDependency {
[TestMethod]
public void TestThatMyDependencyIsCalled() {
var unitUnderTest = new MyDerivedClass();
unitUnderTest.Method2();
/* verify base class behavior #1 inside Method1() */
/* verify base class behavior #2 inside Method1() */
/* ... */
}
}
What you're describing is not a test of your code, but a test of the behavior of the language. That's fine, because it's a good way to ensure that the language behaves the way we think it does. I used to write lots of little console apps when I was learning. I wish I'd known about unit testing then because it's a better way to go about it.
But once you've tested it and confirmed that the language behaves the way you expect, I wouldn't keep writing tests for that. You can just test the behavior of your code.
Here's a real simple example:
public class TheBaseClass
{
public readonly List<string> Output = new List<string>();
public virtual void WriteToOutput()
{
Output.Add("TheBaseClass");
}
}
public class TheDerivedClass : TheBaseClass
{
public override void WriteToOutput()
{
Output.Add("TheDerivedClass");
base.WriteToOutput();
}
}
Unit test
[TestMethod]
public void EnsureDerivedClassCallsBaseClass()
{
var testSubject = new TheDerivedClass();
testSubject.WriteToOutput();
Assert.IsTrue(testSubject.Output.Contains("TheBaseClass"));
}
I am working on my first project using TDD and have hit a bit of a brick wall when it comes to inheritance.
For example if I have something like this
public interface IComponent
{
void MethodA();
void MethodB();
}
public class Component : IComponent
{
public virtual void MethodA()
{
// Do something
}
public virtual void MethodB()
{
// Do something
}
}
public class ExtendedComponent : Component
{
public override void MethodA()
{
base.MethodA();
// Do something else
}
}
then I cannot test ExtendedComponent in isolation because it depends on Component.
However, if I use composition to create ExtendedComponent like this
public class ExtendedComponent : IComponent
{
private readonly IComponent _component;
public ExtendedComponent(IComponent component)
{
_component = component;
}
public virtual void MethodA()
{
_component.MethodA();
// Do something else
}
public virtual void MethodB()
{
_component.MethodB();
}
}
I can now test ExtendedComponent in isolation by mocking the wrapped IComponent.
The downside of this approach is that if I want to add new methods to IComponent then I have to add the new methods to Component and ExtendedComponent and any other implementations of which there could be many. Using inheritance I could just add the new method to the base Component and it wouldn't break anything else.
I really want to be able to test cleanly, so I'm favouring the composition route, but I'm concerned that being able to unit test is not a valid reason to always prefer composition over inheritance. Also, adding functionality at the base level will require writing lots of tedious delegating methods.
I'd really appreciate some advice on how other people have approached this kind of problem
Your approach using composition is, in all practicality, how most compilers implement inheritance, so you gain nothing but pay a heavy cost (a lot of boilerplate code). So stick to inheritance when there's an is-a relationship and to composition when there's a has-a relationship (those, of course, are neither golden nor the sole rules).
You don't need to worry about testing the extended component "in isolation", because it does not "depend" on Component: it IS a Component (at least it is in the way you coded it).
All the tests you originally wrote for the component class are still fine and test all the unchanged behaviour in the extended class as well. All you need to do is write new tests that test the added functionality in the extended component.
public class ComponentTests{
[Fact]
public void MethodADoesSomething(){
Assert.xxx(new Component().MethodA());//Tests the component class
}
[Fact]
public void MethodBDoesSomething(){
Assert.xxx(new Component().MethodB());//Tests the component class
}
}
public class ExtendedComponentTests{
[Fact]
public void MethodADoesSomething(){
Assert.xxx(new ExtendedComponent().MethodA());//Tests the extended component class
}
}
You can see from above that MethodA functionality is tested for both the component AND the extended component. While the new functionality is only tested for the ExtendedComponent.
The key idea here is that one can have inheritance on the unit test side too.
I use the following approach in this scenario: a parallel inheritance hierarchy of unit test cases, e.g.
[TestClass]
public abstract class IComponentTest
{
[TestMethod]
public void TestMethodA()
{
// Interface level expectations.
}
[TestMethod]
public void TestMethodB()
{
// Interface level expectations.
}
}
[TestClass]
public class ComponentTest : IComponentTest
{
[TestMethod]
public void TestMethodACustom()
{
// Test Component specific test, not general test
}
[TestMethod]
public void TestMethodBCustom()
{
// Test Component specific test, not general test
}
}
[TestClass]
public class ExtendedComponentTest : ComponentTest
{
[TestMethod]
public void TestMethodACustom2()
{
// Test ExtendedComponent specific test, not general test
}
}
Each abstract test class or concrete class deals with expectations at its own level. Thus it is extensible and maintainable.
You are correct - using composition over inheritance where it is not appropriate is not the way to go. Based on the information that you have provided here, it is not clear which is better. Ask yourself which one is more appropriate in this situation. By using inheritance, you get polymorphism and virtualization of methods. If you use composition, you are effectively separating your "front-end" logic from the isolated "back-end" logic -- this approach is easier in that changing the underlying component does not have a ripple effect on the rest of the code as inheritance often does.
All in all, this should not affect how you test your code. There are many frameworks for testing available, but this should not affect which design pattern you choose.
I'm in the process of setting up tests in NUnit and have a newbie question.
Is it possible to have a Test/s that could be used in multiple [TestFixture]s?
So, could
[Test] ValidateString(string bob)
be called in a series of different [TestFixture]s?
That doesn't sound like a test to me. Tests are typically parameterless (unless you're using [TestCase]s), and running one within the context of a single fixture is typically enough: it either passes once, and that's good, or it doesn't, and it's a broken test.
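For completeness, a parameterised NUnit test using [TestCase], as mentioned above, looks roughly like this (fixture and method names are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class StringValidationTests
{
    // One test body, several inputs; each case shows up as its
    // own passing or failing test in the runner.
    [TestCase("bob")]
    [TestCase("alice")]
    public void ValidateString(string s)
    {
        Assert.IsNotNull(s);
        Assert.IsNotEmpty(s);
    }
}
```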
If you just have a method that does some validation on a string, you could set it up as a static method on some class (e.g. TestHelpers) and call it from whatever tests (in multiple test fixtures) need it.
Here's another idea: inheritance. You can have a base fixture that has all your tests, and then fixtures that inherit from it that set up whatever variables you need. The tests will run for each fixture. I'm not familiar with Selenium RC, but you should be able to adapt the code below to set up whatever variables you need in various fixtures.
[TestFixture]
public class BaseFixtureTests
{
protected IMyClass _myClass;
[TestFixtureSetUp]
public void FixtureSetup()
{
_myClass = ConfigureMyClass();
}
protected virtual IMyClass ConfigureMyClass()
{
// fixtures that inherit from this will set up _myClass here as they see fit.
return null;
}
[Test]
public void MyClassTest1()
{
// test something about _myClass;
}
}
[TestFixture]
public class MySpecificFixture1 : BaseFixtureTests
{
protected override IMyClass ConfigureMyClass()
{
return new MySpecificMyClassImplementation();
}
}
public class MySpecificMyClassImplementation : IMyClass
{
//some implementation
}
You can also add extra tests in each fixture as well that don't test common functionality and don't need to be reused across fixtures.
The newer version of NUnit supports generics. This is a great fit if what you are testing doesn’t need to be configured (only created) from your test code. Here is an example copied from http://nunit.net/blogs/:
[TestFixture(typeof(ArrayList))]
[TestFixture(typeof(List<int>))]
public class IList_Tests<TList> where TList : IList, new()
{
private IList list;
[SetUp]
public void CreateList()
{
this.list = new TList();
}
[Test]
public void CanAddToList()
{
list.Add(1); list.Add(2); list.Add(3);
Assert.AreEqual(3, list.Count);
}
}
I've also used Anna's approach of inheritance. One possible refinement to her example (depending on personal preference): don't mark the base class as a [TestFixture], only the child classes. Each class you mark as a [TestFixture] is displayed as a set of tests in the NUnit client, and you will probably never want to run the base class methods directly, because the child provides all of the setup code. If you remove [TestFixture] from the base class, running the invalid tests won't be an option in the UI. This lets you run all the tests and see all green... always a nice feeling.
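Applied to Anna's example, the refinement amounts to dropping the attribute from the base (and, optionally, making the hook abstract so every child is forced to supply an instance). A sketch:

```csharp
using NUnit.Framework;

// No [TestFixture] here: the base never appears as a runnable
// fixture in the NUnit UI.
public abstract class BaseFixtureTests
{
    protected IMyClass _myClass;

    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        _myClass = ConfigureMyClass();
    }

    // abstract instead of virtual: children must provide _myClass
    protected abstract IMyClass ConfigureMyClass();

    [Test]
    public void MyClassTest1()
    {
        // test something about _myClass;
    }
}

[TestFixture]
public class MySpecificFixture1 : BaseFixtureTests
{
    protected override IMyClass ConfigureMyClass()
    {
        return new MySpecificMyClassImplementation();
    }
}
```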
You might be able to achieve what you want with inheritance.
using NUnit.Framework;
namespace ClassLibrary1
{
[TestFixture]
public class TestFixtureBase
{
[SetUp]
public virtual void Setup()
{
// setup code here
}
[Test]
public void CommonTest1()
{
Assert.True(true);
}
[Test]
public void CommonTest2()
{
Assert.False(false);
}
}
public class MyClassTests : TestFixtureBase
{
[SetUp]
public override void Setup()
{
base.Setup();
// additional setup code
}
[Test]
public void MyClassTest1()
{
Assert.True(true);
}
}
}
You can write a method to be called from multiple [Test] methods. But I don't think there is a way to have the same [Test] included in multiple [TestFixture]s.
[TestFixture]
public class TestsOne{
[Test] public void TryOne(){
Helpers.ValidateString("Work?");
}
}
[TestFixture]
public class TestsTwo{
[Test] public void TryTwo(){
Helpers.ValidateString("Work?");
}
}
public static class Helpers{
public static void ValidateString(string s){
Assert.IsNotNull(s);
}
}