Capturing Dialog-Windows with SpecFlow - c#

I am fairly new to testing still, but have been using SpecFlow for a few months. I am not entirely sure whether what I am about to ask is possible, but maybe someone will have a suggestion for how to approach the problem.
Synopsis: My feature file makes a call to a method that creates a dialog window stored in a variable created in that method. The user would then need to fill out the dialog window (it is basically picking a file, and then clicking ok). The rest of the method relies on the information provided by the dialog window.
Problem: Since the window is created inside the method and the result is stored in a variable created at that moment, I cannot put my own information into that variable. But in order for my behavior tests to finish, I need to provide this information.
Example code:
Feature File:
Given I initialize the class
And I click on change selected item
Steps File:
[Given(#"I initialize the class")]
public void GivenIInitializeTheClass()
{
DoStuff();
SomeClass testClass = new SomeClasee();
}
[Given(#"IClickOnChangeSelectedItem")]
public void GivenIClickOnChangeSelectItem()
{
testClass.ChangeItem();
}
Method From Class:
public void ChangeItem()
{
    var window = new SomeDialogWindow();
    var result = window.ShowDialog();
    if (result.HasValue && result.Value)
    {
        NewItem = window.SelectedItem;
    }
}
I would know how to go about this if I could change the method in the class, but, in this example, I can make no changes to the class itself. Again I do not know if it is possible to assign anything to the result, or control the window since the variables for both are created within the method.

Depending on what you want to do, this is quite a common pattern and fairly easy to solve, but first let's consider what kind of testing you might want to be running.
Unit tests - In a unit test that only wants to test the implementation of SomeClass we don't care about the implementation of other classes including SomeDialogWindow. Alternatively we could be writing tests that care solely about the implementation of SomeDialogWindow, or even just the implementation of SomeClass::ChangeItem and nothing else. How fine do you want to go? These tests are there to pinpoint exactly where some of your code is broken.
Acceptance tests - We build these to test how everything works together. They do care about the interaction between different units and as such will show up things that unit tests don't find. Subtle configuration issues, or more complicated interactions between units. Unfortunately they cover huge swathes of code, so leave you needing something more precise to find out what is wrong.
In a Test driven development project, we might write a single acceptance test so we can see when we are done, then we will write many unit tests one at a time, each unit test used to add a small piece of functionality to the codebase, and confirm it works before moving to the next.
Now for how to do it. As long as you are able to modify SomeClass it's not a huge change; in fact, all you really need is to add virtual to the ChangeItem method to make
public virtual void ChangeItem()
...
What this now allows you to do is to replace the method with a different implementation when you are testing it. In the simplest form you can then declare something like,
namespace xxx.Tests
{
    public class TestableSomeClass : SomeClass
    {
        public Item TestItem { get; set; }

        public override void ChangeItem()
        {
            NewItem = TestItem;
        }
    }
}
This is a very common pattern, known as a stub. We've reduced the functionality in ChangeItem down to its bare essentials, so it's just a minimal stub of its original intent. We aren't testing ChangeItem any more, just the other parts of our code.
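With that stub in place, the SpecFlow step can construct the testable subclass instead of the real one and decide up front which item the "dialog" will hand back. A rough sketch, reusing the names from the question (the Item construction is illustrative):

[Binding]
public class SomeClassSteps
{
    private SomeClass testClass;

    [Given(@"I initialize the class")]
    public void GivenIInitializeTheClass()
    {
        DoStuff();
        // Use the stub: no dialog is shown, and the test decides what gets "selected".
        testClass = new TestableSomeClass { TestItem = new Item() };
    }

    [Given(@"I click on change selected item")]
    public void GivenIClickOnChangeSelectedItem()
    {
        testClass.ChangeItem();   // the override simply sets NewItem = TestItem
    }
}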
In fact this pattern is so common that there are libraries to help us mock the function instead. I tend to use one called Moq, and it would now look like this:
//Given
var desiredItem = ...
Mock<SomeClass> myMock = new Mock<SomeClass>();
// ChangeItem returns void, so use Callback rather than Returns
// (this assumes NewItem has an accessible setter)
myMock.Setup(x => x.ChangeItem())
      .Callback(() => myMock.Object.NewItem = desiredItem);
var testClass = myMock.Object;
//When
testClass.ChangeItem();
//Then
testClass.NewItem.ShouldEqual(desiredItem);
You will notice that in both these examples we have gotten rid of the GUI part of the codebase so that we can concentrate on your functionality. I would personally recommend this approach for getting 90% of your codebase covered; it gives you rapid, uncomplicated testing. However, sometimes you need acceptance tests that exercise even the UI, and then we come to an altogether more complicated beast.
For example, your UI will include blocking calls when it displays the visual elements, such as SomeDialogWindow.ShowDialog(), and these have to occur on what is commonly referred to as the UI thread. Fortunately, while only one thread can be the UI thread, any thread can become the UI thread if it gets there first; you will need at least one thread displaying the UI and another running the tests. You can steal a pattern from web-based testing and create driver classes that control your UI: these run on the test thread, performing the click operations and polling to see whether the operations are complete.
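A very rough sketch of that idea, assuming a WPF dialog (none of this is from the original post, and the SelectedItem setter is an assumption; a real driver would usually go through UI Automation rather than touching the window object directly):

SomeDialogWindow dialog = null;

// Dedicated UI thread: WPF windows must live on an STA thread with a message pump.
var uiThread = new Thread(() =>
{
    dialog = new SomeDialogWindow();
    dialog.ShowDialog();                     // blocks this thread until the dialog closes
});
uiThread.SetApartmentState(ApartmentState.STA);
uiThread.IsBackground = true;
uiThread.Start();

// Test thread: poll until the dialog exists and is visible, then drive it via its Dispatcher.
while (dialog == null || !dialog.Dispatcher.Invoke(() => dialog.IsVisible))
    Thread.Sleep(50);

var pickedFile = new Item();                 // whatever the test wants the user to have picked
dialog.Dispatcher.Invoke(() =>
{
    dialog.SelectedItem = pickedFile;        // hypothetical setter standing in for the user's choice
    dialog.DialogResult = true;              // closes ShowDialog as if OK had been clicked
});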
If you need to go to these lengths, don't start there while you are still learning the testing frameworks; start with the simple stuff.

Related

How do I ensure complete unit test coverage?

I have 2 projects, one is the "Main" project and the other is the "Test" project.
The policy is that all methods in the Main project must have at least one accompanying test in the test project.
What I want is a new unit test in the Test project that verifies this remains the case. If there are any methods that do not have corresponding tests (and this includes method overloads) then I want this test to fail.
I can sort out appropriate messaging when the test fails.
My guess is that I can get every method (using reflection?), but I'm not sure how to then verify that there is a reference to each method within this Test project (and ignore references from other projects).
You can use any existing software to measure code coverage but...
Don't do it!!! Seriously. The aim should not be to have 100% coverage but to have software that can easily evolve. From your test project you could invoke every single existing method by reflection and swallow all the exceptions. That would make your coverage around 100%, but what good would it be?
Read about TDD. Start creating testable software that has meaningful tests that will save you when something goes wrong. It's not about coverage, it's about being safe.
This is an example of a meta-test that sounds good in principle, but once you get to the detail you should quickly realise it is a bad idea. As has already been suggested, the correct approach is to encourage whoever owns the policy to amend it. As you've quoted it, the policy is specific enough that people can satisfy its letter without really achieving anything of value.
Consider:
public void TestMethod1Exists()
{
    try
    {
        var classUnderTest = new ClassToTest();
        classUnderTest.Method1();
    }
    catch (Exception)
    {
    }
}
The test contains a call to Method1 on the ClassToTest, so the requirement of having a test for that method is satisfied, but nothing useful is being tested. As long as the method exists (which it must, if the code compiled) the test will pass.
The intent of the policy is presumably to try to ensure that written code is being tested. Looking at some very basic code:
public string IsSet(bool flag)
{
    if (flag)
    {
        return "YES";
    }
    return "NO";
}
As methods go, this is pretty simple (it could easily be changed to one line), but even so it contains two routes through the method. Having a test to ensure that this method is being called gives you a false sense of security. You would know it is being called but you would not know if all of the code paths are being tested.
An alternative that has been suggested is that you could just use code coverage tools. These can be useful and give a much better idea as to how well exercised your code is, but again they only give an indication of the coverage, not the quality of that coverage. So, let’s say I had some tests for the IsSet method above:
public void TestWhenTrue()
{
    var classUnderTest = new ClassToTest();
    Assert.IsString(classUnderTest.IsSet(true));
}

public void TestWhenFalse()
{
    var classUnderTest = new ClassToTest();
    Assert.IsString(classUnderTest.IsSet(false));
}
I’m passing sufficient parameters to exercise both code paths, so the coverage for the IsSet method should be 100%. But all I am testing is that the method returns a string. I’m not testing the value of the string, so the tests themselves don’t really add much (if any) value.
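By contrast, tests that pin down the actual return values would fail if either branch regressed. A minimal sketch along the same lines:

[TestMethod]
public void TestWhenTrue_ReturnsYes()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("YES", classUnderTest.IsSet(true));
}

[TestMethod]
public void TestWhenFalse_ReturnsNo()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("NO", classUnderTest.IsSet(false));
}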
Code coverage is a useful metric, but only as part of a larger picture of code quality. Peer reviews and sharing best practice within your team around how to test the code you are writing effectively, whilst less concretely measurable, will have a more significant impact on the quality of your test code.

Unit test class with one public and multiple private methods

I'm having a little trouble understanding how to approach the following in order to unit test the class.
The object under test has one public method, which accepts a list of objects of type A and returns an object B (which is a binary stream).
Due to the nature of the resulting binary stream, which gets large, it is not a nice comparison for the test output.
The stream is built using several private instance helper methods.
class Foo
{
    private BinaryStream mBinaryStream;

    public Foo() {}

    public BinaryStream Bar(List<Object> objects) {
        // perform magic to build and return the binary stream;
        // using several private instance helper methods.
        Magic(objects);
        MoreMagic(objects);
        return mBinaryStream;
    }

    private void Magic(List<Object> objects) { /* work on mBinaryStream */ }
    private void MoreMagic(List<Object> objects) { /* work on mBinaryStream */ }
}
Now I know that I need to test the behaviour of the class, thus the Bar method.
However, it's undoable (both space- and time-wise) to compare the output of the method with a predefined result.
The number of variations is just too large (and they are corner cases).
One option to go for is to refactor these private helper methods into (a) separate class(es) that can be unit tested. The binary stream can then be chopped into smaller, better-testable chunks, but here too a lot of cases need to be handled, and comparing the binary result will defy the quick run time of a unit test. It's an option I'd rather not go for.
Another option is to create an interface that defines all these private methods in order to verify (using mocking) if these methods were called or not. This means however that these methods must have public visibility, which is also not nice. And verifying method invocations might be just enough to test for.
Yet another option is to inherit from the class (making the privates protected) and try to test this way.
I have read most of the topics around such issue, but they seem to handle good testable results. This is different than from this challenge.
How would you unit test such class?
Your first option (extract out the functionality into separate classes) is really the "correct" choice from a SOLID perspective. One of the main points of unit testing things (and TDD by extension) is to promote the creation of small, single responsibility classes. So, that is my primary recommendation.
That said, since you're rather against that solution, if what you're wanting to do is verify that certain things are called, and that they are called in a certain order, then you can leverage Moq's functionality.
First, have BinaryStream be an injected item that can be mocked. Then set up the various calls that will be made against that mock, and then do a mockStream.VerifyAll() call on it - this verifies that everything you set up for that mock was called.
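For the injection part, something along these lines would do. This is a sketch only, where IBinaryStream is a hypothetical abstraction introduced so that Moq can mock it (the original code works against a concrete BinaryStream):

class Foo
{
    private readonly IBinaryStream mBinaryStream;

    // The stream is supplied from outside, so a test can pass in new Mock<IBinaryStream>().Object.
    public Foo(IBinaryStream binaryStream)
    {
        mBinaryStream = binaryStream;
    }

    // Bar, Magic and MoreMagic stay as before, but now work against the injected stream.
}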
Additionally, you can also set up a mock to do a callback. What you can do with this is set up an empty string collection in your test. Then, in the callback of each mock setup, add a string identifying the name of the function called to the collection. After the test is completed, compare that list to a pre-populated list containing the calls that you expect to have been made, in the correct order, and do an EqualTo assert. Something like this:
public void MyTest()
{
    var expectedList = new List<string> { "SomeFunction", "AnotherFunction", ... };
    var actualList = new List<string>();
    mockStream.Setup(x => x.SomeFunction()).Callback(() => actualList.Add("SomeFunction"));
    ...
    systemUnderTest.Bar(...);
    Assert.That(actualList, Is.EqualTo(expectedList));
    mockStream.VerifyAll();
}
You are already on top of how to deal with the private methods. As for testing the stream for the correct output: personally I'd use a very limited set of input data and simply exercise the code in the unit test.
All the potential scenarios I'd treat as an integration test.
So have a file (say XML) with inputs and expected outputs. Run through it, call the method with each input, compare the actual output with the expected one, and report differences. You could do this as part of check-in, or before deploying to UAT, or some such.
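A rough sketch of what such a data-driven check could look like (TestCaseFile, ToBytes and the case properties are illustrative helpers, not an existing API):

[Test]
public void Bar_ProducesExpectedOutput_ForAllRecordedCases()
{
    // Hypothetical loader that reads input/expected-output pairs from an XML file.
    foreach (var testCase in TestCaseFile.Load("BarCases.xml"))
    {
        var actual = new Foo().Bar(testCase.Input);
        Assert.That(actual.ToBytes(), Is.EqualTo(testCase.ExpectedBytes),
            "Mismatch for case: " + testCase.Name);
    }
}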
Don't try to test private methods - they don't exist from consumer point of view. Consider them as named code regions which are there just to make your Bar method more readable. You always can refactor Bar method - extract other private methods, rename them, or even move back to Bar. That is implementation details, which do not affect class behavior. And class behavior is exactly what you should test.
So, what is the behavior of your class? What are the consumer's expectations of your class? That is what you should define and write down in your tests (ideally just before you make them pass). Start from trivial situations. What if the list of objects is empty? Define the behavior, write a test. What if the list contains a single object? If the behavior of your class is very complex, then probably your class is doing too many things. Try to simplify it and move some 'magic' to dependencies.
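For example, a first trivial test might look like this, assuming you decide that an empty input list should produce an empty stream (the Length member is illustrative):

[Test]
public void Bar_WithEmptyList_ProducesEmptyStream()
{
    var foo = new Foo();
    var stream = foo.Bar(new List<Object>());
    Assert.That(stream.Length, Is.EqualTo(0));
}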

Unit testing a class that tracks state

I am abstracting the history tracking portion of a class of mine so that it looks like this:
private readonly Stack<MyObject> _pastHistory = new Stack<MyObject>();

internal virtual Boolean IsAnyHistory { get { return _pastHistory.Any(); } }

internal virtual void AddObjectToHistory(MyObject myObject)
{
    if (myObject == null) throw new ArgumentNullException("myObject");
    _pastHistory.Push(myObject);
}

internal virtual MyObject RemoveLastObject()
{
    if (!IsAnyHistory) throw new InvalidOperationException("There is no previous history.");
    return _pastHistory.Pop();
}
My problem is that I would like to unit test that Remove will return the last Added object.
AddObjectToHistory
RemoveLastObject -> returns what was put in via AddObjectToHistory
However, it isn't really a unit test if I have to call Add first? But, the only way that I can see to do this in a true unit test way is to pass in the Stack object in the constructor OR mock out IsAnyHistory...but mocking my SUT is odd also. So, my question is, from a dogmatic view is this a unit test? If not, how do I clean it up...is constructor injection my only way? It just seems like a stretch to have to pass in a simple object? Is it ok to push even this simple object out to be injected?
There are two approaches to those scenarios:
Interfere with the design, e.g. by making _pastHistory internal/protected or injecting the stack
Use other (possibly unit tested) methods to perform verification
As always, there is no golden rule, although I'd say you generally should avoid situations where unit tests force design changes (as those changes will most likely introduce ambiguity/unnecessary questions to code consumers).
Nonetheless, in the end it is you who has to weigh how much you want unit test code to interfere with the design (first case) or to bend the perfect unit test definition (second case).
Usually, I find the second case much more appealing - it doesn't clutter the original class code, and you'll most likely have Add tested already, so it's safe to rely on it.
I think it's still a unit test, assuming MyObject is a simple object. I often construct input parameters to unit test methods.
I use Michael Feathers' unit test criteria:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
My 2 cents: how would the client know whether remove worked or not? How is a 'client' supposed to interact with this object? Are clients going to push a stack into the history tracker? Treat the test as just another user/consumer/client of the test subject, using exactly the same interaction as in real production.
I haven't heard of any rule stating that you're not allowed to call multiple methods on the object under test.
To simulate a non-empty stack, I'd just call Add - that covers the 99% case. I'd refrain from destroying the encapsulation of that object. Treat objects like people (I think I read that in Object Thinking): tell them to do stuff; don't break and enter.
e.g. If you want someone to have some money in their wallet, you can either:
1. give them the money and let them put it into their wallet themselves, or
2. throw their wallet away and stuff a pre-filled wallet into their pocket.
I like option 1. Also see how it frees you from implementation details (which induce brittleness in tests). Let's say tomorrow the person decides to use an online wallet. The latter approach will break your tests - they would need to be updated to push in an online wallet now - even though the object's behavior is not broken.
Another example I've seen is testing Repository.GetX(), where people break into the DB and inject records with SQL in the unit test, when it would have been considerably cleaner and easier to call Repository.AddX(x) first. Isolation is desirable, but not to the extent that it overrides pragmatism.
I hope I didn't come on too strong here; it just pains me to see object APIs 'contorted for testability' to the point where they no longer resemble the 'simplest thing that could work'.
I think you're trying to be a little overly specific with your definition of a unit test. You should be testing the public behavior of your class, not the minute implementation details.
From your code snippet, it looks like all you really need to care about is whether a) calling AddObjectToHistory causes IsAnyHistory to return true and b) RemoveLastObject eventually causes IsAnyHistory to return false.
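In code, those two behaviours are cheap to state directly. A sketch with NUnit-style asserts, where HistoryTracker is just an illustrative name for the class that owns these methods (and the test assembly is assumed to see the internal members, e.g. via InternalsVisibleTo):

[Test]
public void AddObjectToHistory_MakesIsAnyHistoryTrue()
{
    var tracker = new HistoryTracker();
    tracker.AddObjectToHistory(new MyObject());
    Assert.That(tracker.IsAnyHistory, Is.True);
}

[Test]
public void RemoveLastObject_AfterSingleAdd_MakesIsAnyHistoryFalse()
{
    var tracker = new HistoryTracker();
    tracker.AddObjectToHistory(new MyObject());
    tracker.RemoveLastObject();
    Assert.That(tracker.IsAnyHistory, Is.False);
}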
As stated in the other answers I think your options can be broken down like so.
You take a dogmatic approach to your testing methodology and add constructor injection for the stack object so you can inject your own fake stack object and test your methods.
You write separate tests for add and remove; the remove test will use the add method, but consider it part of the test setup. As long as your add test passes, your remove test should be valid too.
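A sketch of that second option, where Add is simply part of the arrange step (again, HistoryTracker is only an illustrative name for the class under test):

[Test]
public void RemoveLastObject_ReturnsTheObjectMostRecentlyAdded()
{
    var tracker = new HistoryTracker();
    var item = new MyObject();

    tracker.AddObjectToHistory(item);     // arrange - relies on Add, which has its own test

    Assert.That(tracker.RemoveLastObject(), Is.SameAs(item));
}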

Is there a way to protect Unit test names that follow the MethodName_Condition_ExpectedBehaviour pattern against refactoring?

I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit-tests that test specific methods.
for example:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis(){...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests.
Is there a way to prevent such cases from appearing after renaming? Like changing ReSharper settings, or following a better unit-test naming convention, etc.?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
    public class GetCity
    {
        public void TakesParidId_ReturnsParis()
        {
            //...
        }

        // More GetCity tests
    }
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes, you'll only have to change the name of the inner class instead of all the individual tests.
I also started with this convention; however, I ended up feeling that it is not very good. Now I use BDD-styled names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
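For example, a behaviour-only name carries no method name that can go stale. A sketch, with CityGetter standing in for the class under test:

[TestMethod]
public void should_return_Paris_for_ParisID()
{
    var sut = new CityGetter();
    var city = sut.GetCity("ParisID");   // a rename of GetCity here is handled by the refactoring tool
    Assert.AreEqual("Paris", city.Name);
}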
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interface names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style, we're forced to think about what we want from the object a lot earlier in the process, and its becoming the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();
    var result = cityObtainer.GetCity("paris");
    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should have to go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum, if we're writing a test with TDD in mind, then something is happening inside GetCity() that has a significant amount of logic going on. Thinking about the test as being to a contract helps us to separate the intention from the implementation - the test will stay valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using xUnit).
First, create a FactFor attribute that extends the xUnit Fact attribute.
public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
    public int Just2() => 2;
}

public class ATests
{
    [FactFor(nameof(A.Just2))]
    public void Should_Return2()
    {
        var a = new A();
        a.Just2().Should().Be(2);
    }
}
The result is that the test shows up in the test runner with the display name Just2_Should_Return2.

How do I unit test an implementation detail like caching

So I have a class with a method as follows:
public class SomeClass
{
    ...
    private SomeDependency m_dependency;

    public int DoStuff()
    {
        int result = 0;
        ...
        int someValue = m_dependency.GrabValue();
        ...
        return result;
    }
}
And I've decided that rather than call m_dependency.GrabValue() each time, I really want to cache the value in memory (i.e. in this class), since we're going to get the same value each time anyway (the dependency goes off and grabs some data from a table that hardly ever changes).
I've run into problems however trying to describe this new behaviour in a unit test. I've tried the following (I'm using NUnit with RhinoMocks):
[Test]
public void CacheThatValue()
{
    var depend = MockRepository.GenerateMock<SomeDependency>();
    depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);

    var sut = new SomeClass(depend);
    int result = sut.DoStuff();
    result = sut.DoStuff();

    depend.VerifyAllExpectations();
}
This however doesn't work; this test passes even without introducing any changes to the functionality. What am I doing wrong?
I see caching as orthogonal to Do(ing)Stuff. I would find a way to pull the caching logic outside of the method, either by changing SomeDependency or wrapping it somehow (I now have a cool idea for a caching class based around lambda expressions -- yum).
That way your tests for DoStuff don't need to change, you only need to make sure they work with the new wrapper. Then you can test the caching functionality of SomeDependency, or its wrapper, independently. With well-architected code putting a caching layer in place should be rather easy and neither your dependency nor your implementation should know the difference.
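A rough sketch of such a wrapper - the lambda-based idea mentioned above, here leaning on Lazy<T> for the fetch-once-then-reuse behaviour (the names are illustrative):

public class CachedValueSource
{
    private readonly Lazy<int> _value;

    // Typically constructed as: new CachedValueSource(() => dependency.GrabValue())
    public CachedValueSource(Func<int> fetch)
    {
        _value = new Lazy<int>(fetch);
    }

    // The underlying fetch runs at most once, however often this is called.
    public int GrabValue() => _value.Value;
}

A test for the wrapper can then simply count how many times the supplied delegate is invoked, while the existing DoStuff tests stay untouched.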
Unit tests shouldn't be testing implementation, they should test behavior. At the same time, the subject under test should have a narrowly-defined set of behavior.
To answer your question, you are using a Dynamic Mock and the default behavior is to allow any call that isn't configured. The additional calls are just returning "0". You need to set up an expectation that no more calls are made on the dependency:
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
depend.Expect(d => d.GrabValue()).Repeat.Never();
You may need to enter record/replay mode to get it to work properly.
This seems like a case for "tests drive the design". If caching is an implementation detail of SomeDependency - and therefore can't be directly tested - then probably some of its functionality (specifically, its caching behavior) needs to be exposed - and since it's not natural to expose it within SomeDependency, it needs to be exposed in another class (let's call it "Cache"). In Cache, of course, the behavior is contractual - public, and thereby testable.
So the tests - and the smells - are telling us we need a new class. Test-Driven Design. Ain't it great?
