Internal security checks, how to reach 100 % code coverage? - c#

I have a lot of classes similar to the following one:
public class Foo
{
    private readonly Ba _ba;

    public Foo(Ba ba)
    {
        if (ba is null) throw new ArgumentNullException(nameof(ba));
        _ba = ba;
    }
}
In other classes, I call this constructor of Foo, but since passing null would be unintended, ba is never null in any of those calls.
I wrote a lot of test methods for the existing framework, but I am unable to reach 100 % code coverage because the exception in the above code snippet is never thrown.
I see the following alternatives:
Remove the null check: This would work for the current project implementation, but if I ever accidentally add a call to Foo(null), debugging will be more difficult.
Decorate the constructor with [ExcludeFromCodeCoverage]: This would work for the current Foo(Ba) implementation, but if I ever change the implementation, new code paths in the constructor could develop and accidentally go untested.
How would you solve the dilemma?
Notes
The code example is written in C#, but the question might address a general unit testing/exception handling problem.
C# 8 might solve this problem by introducing non-nullable reference types, but I am searching for a good solution until it is released as stable.

You have missed the most important alternative: Don't see it as a desirable goal to achieve 100% code coverage.
The robustness checks in your code are, strictly speaking, not testable in a sensible way. This will happen in various other parts of your code as well - it often happens in switch statements where all possible cases are explicitly covered and an extra default case is added just to throw an exception or otherwise handle this 'impossible' situation. Or think of assertion statements added to the code: since assertions should never fail, you will, strictly speaking, never be able to cover the 'else' branch that is hidden inside the assertion statement - so how do you test that the expression inside the assertion is actually able to detect the problem you want it to?
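For illustration, a minimal C# sketch of such an 'impossible' default branch; the enum, method and fee values are invented:
public enum PaymentMethod { Cash, Card, Voucher }

public static decimal FeeFor(PaymentMethod method)
{
    switch (method)
    {
        case PaymentMethod.Cash: return 0m;
        case PaymentMethod.Card: return 0.30m;
        case PaymentMethod.Voucher: return 0.10m;
        default:
            // Every defined enum value is handled above, so this branch is
            // unreachable today and will always show up as uncovered.
            throw new ArgumentOutOfRangeException(nameof(method));
    }
}
The default branch only exists to guard against future enum values, so no sensible test can reach it today.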
Removing such robustness code and assertions is not a good idea, because they also protect you from undesired side effects of future changes. Excluding code from coverage analysis might be acceptable for the examples you have shown, but in most of the cases I have mentioned it would not be a good option. In the end you will have to make an informed decision (by looking at the coverage report in detail, not only the overall percentage) which statements/branches etc. of your code really need to be covered and which not.
And, as a final note, be aware that high code coverage is not necessarily an indication that your test suite has high quality. Your test suite has high quality if it detects the bugs that could plausibly exist in the code. You can have a test suite with 100% coverage that will not detect any of the potential bugs.
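As a hedged illustration (NUnit-style, reusing the invented FeeFor sketch from above), a test like this one executes the code and boosts coverage while being unable to detect a wrong fee:
[Test]
public void FeeFor_DoesNotBlowUp_ForKnownPaymentMethods()
{
    // Executes every reachable branch of FeeFor, so the coverage report is happy,
    // but it asserts nothing about the returned values, so a wrong fee goes unnoticed.
    foreach (PaymentMethod method in Enum.GetValues(typeof(PaymentMethod)))
    {
        FeeFor(method);
    }
}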

Related

Should you reuse system functionality in tests, or be explicit?

When writing tests, is it acceptable (or should I) to use functionality from elsewhere in the application to assist in a test?
So as an example, the application I am writing tests for uses the CQRS pattern. A lot of the existing tests make use of these commands, queries and handlers when performing the arrange part of a test. They all have their own test cases so I should be OK to accept they function as expected.
I am curious, though, whether this is best practice or whether I should be performing the setup during the arrange part of a test manually (without using other application functionality). If one of the commands, queries or handlers breaks, then my 'unrelated' test breaks too. Is this good or bad?
When writing tests, is it acceptable (or should I) to use functionality from elsewhere in the application to assist in a test?
There are absolutely circumstances where using functionality from elsewhere is going to have good trade offs.
In my experience, it is useful to think about an automated check as consisting of two parts - a measurement that produces a value, and a validation that evaluates whether that value satisfies some specification.
Measurement actual = measurement(args)
assert specification.isSatisfiedBy(actual)
In the specification part, re-using code is commonplace. Consider
String actual = measurement(args)
assert specification.expected.equals(actual)
So here, we have introduced a dependency on String::equals, and that's fine, we have lots and lots of confidence that String::equals is correct, thanks to the robust distributed test program of everybody in the world using it.
Foo actual = measurement(args)
assert specification.expected.equals(actual)
Same idea here, except that instead of some general purpose type we are using our own bespoke equality check. If the bespoke equality check is well tested, then you can be confident that any assertion failures indicate a problem in the measurement. (If not, well then at least the check signals that measurement and specification are in disagreement, and you can investigate why.)
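For example, a hedged C# sketch of such a bespoke check (Money is an invented value type); in a test it would play the role of specification.expected.Equals(actual):
public sealed class Money
{
    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public decimal Amount { get; }
    public string Currency { get; }

    // Bespoke equality: two Money values are equal when amount and currency match.
    public override bool Equals(object obj) =>
        obj is Money other && Amount == other.Amount && Currency == other.Currency;

    public override int GetHashCode() => (Amount, Currency).GetHashCode();
}
If Money.Equals is itself well tested, an assertion failure points at the measurement rather than at the comparison.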
Sometimes, you'll want to have an explicit dependency on other parts of the system, because that's a better description of the actual requirements. For example, compare
int actual = foo("a")
assert 7 == actual
with
assert 7 == bar(0) // This check might be in a different test
assert bar(0) == foo("a")
At a fixed point in time, these spellings are essentially equivalent; but for tests that are expected to evaluate many generations of an evolving system, the verification is somewhat different:
// Future foo should return the same thing as today's foo
assert 7 == foo("a")
// Future foo should return the same thing as future bar
assert bar(0) == foo("a")
Within measurements, the trade-offs are a bit different, but because you included CQRS I'll offer one specific observation: measurements are about reads.
(Sometimes what we read is "how many times did we crash?" or "what messages did we send?" but, explicit or implicit, we're evaluating the information that comes out of our system).
That means that including a read invocation in your measurement is going to be common, even in designs where you have decoupled reads from writes.
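A rough sketch of what that can look like in a CQRS-style test; the command, query and handler names here are invented, and the assertions are NUnit-style:
[Test]
public void RegisteredAccountCanBeFoundByEmail()
{
    // Arrange/act: write through the command side.
    commandHandler.Handle(new RegisterAccount("alice@example.com"));

    // Measure: read back through the query side.
    var account = queryHandler.Handle(new FindAccountByEmail("alice@example.com"));

    // Specification: evaluate the information that came out of the system.
    Assert.That(account, Is.Not.Null);
    Assert.That(account.Email, Is.EqualTo("alice@example.com"));
}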
A lot of the existing tests make use of these commands, queries and handlers when performing the arrange part of a test.
Yup, and the answer is the same - we're still talking about trade-offs: does the test detect the problems you want it to? How expensive is it to track down the fault that was detected? How common are false positives (the "fault" is in the test itself, not the test subject)? How much future work are you signing up for just to "maintain" the test (which is related, in part, to how "stable" the dependencies are) during its useful lifetime?

How to emit compiler errors for unfinished features in Release Mode for .NET code?

Inspired by this article, I asked myself how I can emit compiler errors, or at least stop the build, if a feature is not implemented.
The only quick solution I came up with, is by creating a custom attribute:
public class NoReleaseAttribute : System.Attribute
{
    public NoReleaseAttribute()
    {
#if RELEASE
        .   // deliberate syntax error, so the build fails only in Release
#endif
    }
}
The idea is to have a syntactic error somewhere, but only for Release. I used an attribute because the IDE will help me quickly find the references to the attribute, and thus all the places I marked as needing to be completed or fixed before going to production.
I don't like this solution because it produces a single global error elsewhere, rather than a compiler error at each place that needs attention.
Perhaps there is a more fitted solution.
You may use the #error directive in your code; it will cause an error at compile time.
Example usage:
static void Main(string[] args)
{
    DoSomeStuff();
}

#error Needs to be implemented
static void DoSomeStuff()
{
    // This needs some work
}
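If the goal is to break only the Release build, the directive can be wrapped in a conditional block; this is a minimal sketch assuming a RELEASE compilation symbol is defined for the Release configuration (as in the snippets above):
static void DoSomeStuff()
{
#if RELEASE
#error DoSomeStuff still needs to be implemented
#endif
    // Compiles in Debug, fails the build in Release.
}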
There is no better-fitting solution. For your goal, you can use:
a) ObsoleteAttribute (may not fit your goal)
b) throw NotImplementedException in every place that you need to:
#if RELEASE
throw new NotImplementedException();
#endif
c) Wrap the NotImplementedException throw in an attribute, as you described.
Basically, it is unnecessary to add code that will not be used anywhere - Visual Studio even marks such code with 0 usages, and it will not be included in the CIL. So you have to ask yourself: do I really need unused code to exist in the project? It is better to track unreleased features in an issue tracker like YouTrack than to search for them in your code.
I would suggest using a build step that checks the code for TODO comments and generates warnings for them. A quick Google search suggests the Warn About TODOs extension might do just that, but it is not something I have used. You should also configure your build to fail on most warnings. A possible risk here is that people will just avoid using TODO comments, since they will fail the build.
But needing such a check suggests you do not have sufficient testing of your code. I find it a good practice to at least do some level of testing before committing any code, and just about any testing should reveal if entire components are not implemented. Reviewing your own code before committing is another practice that could be useful.
Automated unit testing can be a great way to deal with things like this. If you write unit tests for all your components the tests should find things like this, especially if you throw NotImplementedException in placeholders. This might require you to write your tests before the components, but there are certainly arguments for such a development approach.
The next line of defense should be code review. Even a quick review should find methods that are not implemented.
The final line of defense should be independent testing. If someone else tests the new features against the specification, any significant missing functionality should be found.

Should invalid cases be in one test? [duplicate]

What Makes a Good Unit Test? says that a test should test only one thing. What is the benefit of that?
Wouldn't it be better to write slightly bigger tests that test bigger blocks of code? Investigating a test failure is hard anyway, and I don't see how smaller tests help with it.
Edit: The word unit is not that important. Let's say I consider the unit a bit bigger. That is not the issue here. The real question is: why write one or more tests for every method, when a few tests that cover many methods would be simpler?
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Testing only one thing will isolate that one thing and prove whether or not it works. That is the idea with unit testing. Nothing wrong with tests that test more than one thing, but that is generally referred to as integration testing. They both have merits, based on context.
To use an example, if your bedside lamp doesn't turn on, and you replace the bulb and switch the extension cord, you don't know which change fixed the issue. Should have done unit testing, and separated your concerns to isolate the problem.
Update: I read this article and linked articles and I gotta say, I'm shook: https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
There is substance here and it gets the mental juices flowing. But I reckon that it jibes with the original sentiment that we should be doing the test that context demands. I suppose I'd just append that to say that we need to get closer to knowing for sure the benefits of different testing on a system and less of a cross-your-fingers approach. Measurements/quantifications and all that good stuff.
I'm going to go out on a limb here, and say that the "only test one thing" advice isn't as actually helpful as it's sometimes made out to be.
Sometimes tests take a certain amount of setting up. Sometimes they may even take a certain amount of time to set up (in the real world). Often you can test two actions in one go.
Pro: only have all that setup occur once. Your tests after the first action will prove that the world is how you expect it to be before the second action. Less code, faster test run.
Con: if either action fails, you'll get the same result: the same test will fail. You'll have less information about where the problem is than if you only had a single action in each of two tests.
In reality, I find that the "con" here isn't much of a problem. The stack trace often narrows things down very quickly, and I'm going to make sure I fix the code anyway.
A slightly different "con" here is that it breaks the "write a new test, make it pass, refactor" cycle. I view that as an ideal cycle, but one which doesn't always mirror reality. Sometimes it's simply more pragmatic to add an extra action and check (or possibly just another check to an existing action) in a current test than to create a new one.
Tests that check for more than one thing aren't usually recommended because they are more tightly coupled and brittle. If you change something in the code, it'll take longer to change the test, since there are more things to account for.
[Edit:]
Ok, say this is a sample test method:
[TestMethod]
public void TestSomething() {
    // Test condition A
    // Test condition B
    // Test condition C
    // Test condition D
}
If your test for condition A fails, then B, C, and D will appear to fail as well, and won't provide you with any usefulness. What if your code change would have caused C to fail as well? If you had split them out into 4 separate tests, you would know this.
Haaa... unit tests.
Push any "directives" too far and it rapidly becomes unusable.
A single unit test testing a single thing is just as good a practice as a single method doing a single task. But IMHO that does not mean a single test can only contain a single assert statement.
Is
@Test
public void checkNullInputFirstArgument(){...}
@Test
public void checkNullInputSecondArgument(){...}
@Test
public void checkOverInputFirstArgument(){...}
...
better than
@Test
public void testLimitConditions(){...}
is a question of taste, in my opinion, rather than of good practice. I personally much prefer the latter.
But
@Test
public void doesWork(){...}
is actually what the "directive" wants you to avoid at all cost and what drains my sanity the fastest.
As a final conclusion, group together things that are semantically related and easily tested together, so that a failed test message is, by itself, meaningful enough for you to go directly to the code.
Rule of thumb on a failed test report: if you have to read the test's code first, then your tests are not structured well enough and need to be split into smaller tests.
My 2 cents.
Think of building a car. If you were to apply your theory, of just testing big things, then why not make a test to drive the car through a desert. It breaks down. Ok, so tell me what caused the problem. You can't. That's a scenario test.
A functional test may be to turn on the engine. It fails. But that could be because of a number of reasons. You still couldn't tell me exactly what caused the problem. We're getting closer though.
A unit test is more specific, and will firstly identify where the code is broken, but it will also (if doing proper TDD) help architect your code into clear, modular chunks.
Someone mentioned about using the stack trace. Forget it. That's a second resort. Going through the stack trace, or using debug is a pain and can be time consuming. Especially on larger systems, and complex bugs.
Good characteristics of a unit test:
Fast (milliseconds)
Independent. It's not affected by or dependent on other tests
Clear. It shouldn't be bloated, or contain a huge amount of setup.
Using test-driven development, you would write your tests first, then write the code to pass the test. If your tests are focused, this makes writing the code to pass the test easier.
For example, I might have a method that takes a parameter. One of the things I might think of first is, what should happen if the parameter is null? It should throw an ArgumentNullException (I think). So I write a test that checks to see if that exception is thrown when I pass a null argument. Run the test. Okay, it throws NotImplementedException. I go and fix that by changing the code to throw an ArgumentNullException. Run my test; it passes. Then I think, what happens if it's too small or too big? Ah, that's two tests. I write the too-small case first.
The point is I don't think of the behavior of the method all at once. I build it incrementally (and logically) by thinking about what it should do, then implement code and refactoring as I go to make it look pretty (elegant). This is why tests should be small and focused because when you are thinking about the behavior you should develop in small, understandable increments.
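A minimal NUnit sketch of that first step; OrderProcessor and Process are placeholder names for whatever class and method is under test:
[Test]
public void Process_Throws_WhenArgumentIsNull()
{
    var processor = new OrderProcessor();

    // The first behaviour considered: a null argument should be rejected.
    Assert.Throws<ArgumentNullException>(() => processor.Process(null));
}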
Having tests that verify only one thing makes troubleshooting easier. It's not to say you shouldn't also have tests that do test multiple things, or multiple tests that share the same setup/teardown.
Here should be an illustrative example. Let's say that you have a stack class with queries:
getSize
isEmpty
getTop
and methods to mutate the stack
push(anObject)
pop()
Now, consider the following test case for it (I'm using Python-like pseudo-code for this example.)
class TestCase():
    def setup(self):
        self.stack = Stack()

    def test(self):
        self.stack.push(1)
        self.stack.push(2)
        self.stack.pop()
        assert self.stack.top() == 1, "top() isn't showing correct object"
        assert self.stack.getSize() == 1, "getSize() call failed"
From this test case, you can determine if something is wrong, but not whether it is isolated to the push() or pop() implementations, or the queries that return values: top() and getSize().
If we add individual test cases for each method and its behavior, things become much easier to diagnose. Also, by doing fresh setup for each test case, we can guarantee that the problem is completely within the methods that the failing test method called.
def test_size(self):
    assert self.stack.getSize() == 0
    assert self.stack.isEmpty()

def test_push(self):
    self.stack.push(1)
    assert self.stack.top() == 1, "top returns wrong object after push"
    assert self.stack.getSize() == 1, "getSize wrong after push"

def test_pop(self):
    self.stack.push(1)
    self.stack.pop()
    assert self.stack.getSize() == 0, "getSize wrong after pop"
As far as test-driven development is concerned. I personally write larger "functional tests" that end up testing multiple methods at first, and then create unit tests as I start to implement individual pieces.
Another way to look at it is unit tests verify the contract of each individual method, while larger tests verify the contract that the objects and the system as a whole must follow.
I'm still using three method calls in test_push; however, both top() and getSize() are queries that are tested by separate test methods.
You could get similar functionality by adding more asserts to the single test, but then later assertion failures would be hidden.
If you are testing more than one thing then it is called an Integration test...not a unit test. You would still run these integration tests in the same testing framework as your unit tests.
Integration tests are generally slower, unit tests are fast because all dependencies are mocked/faked, so no database/web service/slow service calls.
We run our unit tests on commit to source control, and our integration tests only get run in the nightly build.
If you test more than one thing and the first thing you test fails, you will not know if the subsequent things you are testing pass or fail. It is easier to fix when you know everything that will fail.
Smaller unit tests make it clearer where the issue is when they fail.
The glib, but hopefully still useful, answer is that unit = one. If you test more than one thing, then you aren't unit testing.
Regarding your example: If you are testing add and remove in the same unit test, how do you verify that the item was ever added to your list? That is why you need to add and verify that it was added in one test.
Or to use the lamp example: If you want to test your lamp and all you do is turn the switch on and then off, how do you know the lamp ever turned on? You must take the step in between to look at the lamp and verify that it is on. Then you can turn it off and verify that it turned off.
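In list terms, a hedged sketch of the two separate tests (MyList is a placeholder for the list class under test, assertions are NUnit-style):
[Test]
public void Add_PutsItemInList()
{
    var list = new MyList<int>();

    list.Add(42);

    // Verify the "lamp is on" before removal is ever exercised.
    Assert.That(list.Count, Is.EqualTo(1));
    Assert.That(list.Contains(42), Is.True);
}

[Test]
public void Remove_TakesItemOutOfList()
{
    var list = new MyList<int>();
    list.Add(42);

    list.Remove(42);

    Assert.That(list.Count, Is.EqualTo(0));
}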
I support the idea that unit tests should only test one thing. I also stray from it quite a bit. Today I had a test where expensive setup seemed to be forcing me to make more than one assertion per test.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        [Test]
        public void ShouldHaveCorrectValues()
        {
            var fees = CallSlowRunningFeeService();

            Assert.AreEqual(6.50m, fees.ConvenienceFee);
            Assert.AreEqual(2.95m, fees.CreditCardFee);
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
At the same time, I really wanted to see all my assertions that failed, not just the first one. I was expecting them all to fail, and I needed to know what amounts I was really getting back. But, a standard [SetUp] with each test divided would cause 3 calls to the slow service. Suddenly I remembered an article suggesting that using "unconventional" test constructs is where half the benefit of unit testing is hidden. (I think it was a Jeremy Miller post, but can't find it now.) Suddenly [TestFixtureSetUp] popped to mind, and I realized I could make a single service call but still have separate, expressive test methods.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        Fees fees;

        [TestFixtureSetUp]
        public void FetchFeesMessageFromService()
        {
            fees = CallSlowRunningFeeService();
        }

        [Test]
        public void ShouldHaveCorrectConvenienceFee()
        {
            Assert.AreEqual(6.50m, fees.ConvenienceFee);
        }

        [Test]
        public void ShouldHaveCorrectCreditCardFee()
        {
            Assert.AreEqual(2.95m, fees.CreditCardFee);
        }

        [Test]
        public void ShouldHaveCorrectChangeFee()
        {
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
There is more code in this test, but it provides much more value by showing me all the values that don't match expectations at once.
A colleague also pointed out that this is a bit like Scott Bellware's specunit.net: http://code.google.com/p/specunit-net/
Another practical disadvantage of very granular unit testing is that it breaks the DRY principle. I have worked on projects where the rule was that each public method of a class had to have a unit test (a [TestMethod]). Obviously this added some overhead every time you created a public method but the real problem was that it added some "friction" to refactoring.
It's similar to method level documentation, it's nice to have but it's another thing that has to be maintained and it makes changing a method signature or name a little more cumbersome and slows down "floss refactoring" (as described in "Refactoring Tools: Fitness for Purpose" by Emerson Murphy-Hill and Andrew P. Black. PDF, 1.3 MB).
Like most things in design, there is a trade-off that the phrase "a test should test only one thing" doesn't capture.
When a test fails, there are three options:
The implementation is broken and should be fixed.
The test is broken and should be fixed.
The test is no longer needed and should be removed.
Fine-grained tests with descriptive names help the reader to know why the test was written, which in turn makes it easier to know which of the above options to choose. The name of the test should describe the behaviour which is being specified by the test - and only one behaviour per test - so that just by reading the names of the tests the reader will know what the system does. See this article for more information.
On the other hand, if one test is doing lots of different things and it has a non-descriptive name (such as tests named after methods in the implementation), then it will be very hard to find out the motivation behind the test, and it will be hard to know when and how to change the test.
Here is what it can look like (with GoSpec) when each test tests only one thing:
func StackSpec(c gospec.Context) {
    stack := NewStack()

    c.Specify("An empty stack", func() {
        c.Specify("is empty", func() {
            c.Then(stack).Should.Be(stack.Empty())
        })
        c.Specify("After a push, the stack is no longer empty", func() {
            stack.Push("foo")
            c.Then(stack).ShouldNot.Be(stack.Empty())
        })
    })

    c.Specify("When objects have been pushed onto a stack", func() {
        stack.Push("one")
        stack.Push("two")

        c.Specify("the object pushed last is popped first", func() {
            x := stack.Pop()
            c.Then(x).Should.Equal("two")
        })
        c.Specify("the object pushed first is popped last", func() {
            stack.Pop()
            x := stack.Pop()
            c.Then(x).Should.Equal("one")
        })
        c.Specify("After popping all objects, the stack is empty", func() {
            stack.Pop()
            stack.Pop()
            c.Then(stack).Should.Be(stack.Empty())
        })
    })
}
The real question is: why write one or more tests for every method, when a few tests that cover many methods would be simpler?
Well, so that when some test fails you know which method fails.
When you have to repair a non-functioning car, it is easier when you know which part of the engine is failing.
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Let's suppose that the addition method is broken and does not add, and that the removal method is broken and does not remove. Your test would check that the list, after addition and removal, has the same size as initially. Your test would pass, even though both of your methods are broken.
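A sketch of that hole: a combined test like the one below stays green even when Add and Remove are both no-ops (MyList is a placeholder for the list class, NUnit-style assertion):
[Test]
public void AddThenRemove_LeavesListUnchanged()
{
    var list = new MyList<int>();

    list.Add(42);
    list.Remove(42);

    // Passes whether both methods work or both silently do nothing.
    Assert.That(list.Count, Is.EqualTo(0));
}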
Disclaimer: This is an answer highly influenced by the book "xUnit Test Patterns".
Testing only one thing at each test is one of the most basic principles that provides the following benefits:
Defect Localization: If a test fails, you immediately know why it failed (ideally without further troubleshooting, if you've done a good job with the assertions used).
Test as a specification: the tests are not only there as a safety net, but can easily be used as specification/documentation. For instance, a developer should be able to read the unit tests of a single component and understand the API/contract of it, without needing to read the implementation (leveraging the benefit of encapsulation).
Feasibility of TDD: TDD is based on small chunks of functionality and progressive iterations of (write failing test, write code, verify the test succeeds). This process gets highly disrupted if a test has to verify multiple things.
Lack of side-effects: Somewhat related to the first one, but when a test verifies multiple things, it's more likely to be tied to other tests as well. So, these tests might need to share a test fixture, which means that one will be affected by the other. Eventually you might have a failing test where, in reality, another test caused the failure, e.g. by changing the fixture data.
I can only see a single reason why you might benefit from having a test that verifies multiple things, but this should be seen as a code smell actually:
Performance optimisation: There are some cases where your tests do not run only in memory, but also depend on persistent storage (e.g. databases). In some of these cases, having a test verify multiple things might help by decreasing the number of disk accesses, thus decreasing the execution time. However, unit tests should ideally be executable in memory only, so if you stumble upon such a case you should reconsider whether you are going down the wrong path. All persistent dependencies should be replaced with mock objects in unit tests. End-to-end functionality should be covered by a different suite of integration tests. In this way, you do not need to care about execution time anymore, since integration tests are usually executed by build pipelines and not by developers, so a slightly higher execution time has almost no impact on the efficiency of the software development lifecycle.

How to use unit tests in projects with many levels of indirection

I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with the old adage "every problem in object oriented programming can be solved by introducing a new layer of indirection", this project was sporting multiple layers of indirection. The side effect was that a fair amount of code looked like the following:
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}
Now, because of the emphasis on unit testing and maintaining high code coverage, every piece of code had unit tests written against it. Therefore this little method would have three unit tests present. Those would check:
If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:
NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));
That obviously turned the "red, green, refactor" rule into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework like Moq would help with refactoring, but it would require a sweep through all existing unit tests.
What is the ideal way to handle this situation?
A) Keep fewer layers of indirection, so that those forwarding calls do not happen anymore.
B) Do not test those forwarding methods, as they do not contain business logic. For purposes of coverage marked them all with ExcludeFromCodeCoverage attribute.
C) Test only if proper method is invoked, without checking return values, exceptions, etc.
D) Suck it up, and keep writing those tests ;)
Either B or C. That's the problem with such general requirements ("every method must have a unit test, every line of code needs to be covered") - sometimes the benefit they provide is not worth the cost. If it's something you came up with, I suggest rethinking this approach. The "we must have 95% code coverage" rule might be appealing on paper, but in practice it quickly spawns problems like the one you have.
Also, the code you're testing is something I'd call trivial code. Having 3 tests for it is most likely overkill. For that single line of code, you'll have to maintain like 40 more. Unless your software is mission critical (which might explain high-coverage requirement), I'd skip those tests.
Some of the (IMHO) most pragmatic advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post - What should you test?
Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool to figure out what is NOT covered, so that we can figure out whether we forgot important unit test cases or whether we actually have some dead code somewhere.)
If I write a test, but the test ends up just being a "duplication" of the implementation or, worse, if it's harder to understand the test than the actual implementation, then really such a test should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details. Tests are about "what" should happen, not "how" it will be done. Since you've tagged your question with "TDD", I would add that TDD is a design practice. So if I already know, 100% sure in advance, what the design of the thing I'm going to implement will be, then there is no point in using TDD and writing unit tests (but I will always, in all cases, have a high-level acceptance test that covers that code). That often happens when the thing to design is really simple, like in your example. TDD is not about testing and code coverage, but really about helping us to design and document our code. There is no point in using a design tool or a documentation tool for designing/documenting simple/obvious things.
In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test doesn't add any value in terms of documentation. So I'd happily erase it.
On top of that, such tests are horridly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term when you need to refactor, since any time you change the implementation they will break.
What I'd suggest to do, is to not write such tests but instead have higher level component tests or fast integration tests/acceptance tests that would exercise these layers without knowing anything at all about the inner working.
I think one of the most important things to keep in mind with unit tests is that it doesn't necessarily matter how the code is implemented today, but rather what happens when the tested code, direct or indirect, is modified in the future.
If you ignore those methods today and they are critical to your application's operation, then when someone decides to implement a new balanceProvider at some point down the road, or decides that the redirection no longer makes sense, you will most likely have a failure point.
So, if this were my application, I would first look to reduce the forward-only calls to a bare minimum (reducing the code complexity), then introduce a mocking framework that does not rely on string values for method names.
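For instance, a hedged sketch of the same expectation with Moq (assuming an IBalanceProvider interface), where a rename of IsOverdraft is caught by the compiler and by refactoring tools instead of hiding in a string literal:
var mockBalanceProvider = new Mock<IBalanceProvider>();
// Strongly typed: renaming IsOverdraft updates this lambda automatically.
mockBalanceProvider.Setup(p => p.IsOverdraft()).Returns(false);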
A couple of things to add to the discussion here.
Switch to a better mocking framework immediately and incrementally. We switched from RhinoMock to Moq about 3 years ago. All new tests used Moq, and often when we change a test class we switch it over. But areas of the code that haven't changed much, or that have huge test classes, are still using RhinoMock, and that is OK. The code we work with from day to day is much better as a result of making the switch. All test changes can happen in this incremental way.
You are writing too many tests. An important thing to keep in mind in TDD is that you should only write code to satisfy a red test, and you should only write a test to specify some unwritten code. So in your example, three tests is overkill, because at most two are needed to force you to write all of that production code. The exception test does not make you write any new code, so there is no need to write it. I would probably only write this test:
[Test]
public void IsOverdraftDelegatesToBalanceProvider()
{
    var result = RandomBool();
    providerMock.Setup(p => p.IsOverdraft()).Returns(result);
    Assert.That(myObject.IsOverDraft(), Is.EqualTo(result));
}
Don't create useless layers of indirection. Mostly, unit tests will tell you if you need indirection. Most indirection needs can be solved by the dependency inversion principle, or "couple to abstractions, not concretions". Some layers are needed for other reasons (I make WCF ServiceContract implementations a thin pass through layer. I also don't test that pass through). If you see a useless layer of indirection, 1) make sure it really is useless, then 2) delete it. Code clutter has a huge cost over time. Resharper makes this ridiculously easy and safe.
Also, for meaningful delegation or delegation scenarios you can't get rid of but need to test, something like this makes it a lot easier.
I'd say D) Suck it up, and keep writing those tests ;) and try to see if you can replace NMock with Moq.
It might not seem necessary, and even though it's just delegation now, the tests verify that it's calling the right method with the right parameters, and that the method itself is not doing anything funky before returning values. So it's a good idea to cover them with tests. But to make it easier, use Moq or a similar framework that will make it much easier to refactor.

How to unit test code that is highly complex behind the public interface

I'm wondering how I should be testing this sort of functionality via NUnit.
public void HighlyComplexCalculationOnAListOfHairyObjects()
{
    // calls 19 private methods totalling ~1000 lines of code + comments + whitespace
}
From reading, I see that NUnit isn't designed to test private methods, for philosophical reasons about what unit testing should be; but trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible. Meanwhile the calculation is broken down into a number of smaller methods that are reasonably discrete. They are not, however, things that make logical sense to do independently of each other, so they're all private.
You've conflated two things. The Interface (which might expose very little) and this particular Implementation class, which might expose a lot more.
Define the narrowest possible Interface.
Define the Implementation class with testable (non-private) methods and attributes. It's okay if the class has "extra" stuff.
All applications should use the Interface, and -- consequently -- don't have type-safe access to the exposed features of the class.
What if "someone" bypasses the Interface and uses the Class directly? They are sociopaths -- you can safely ignore them. Don't provide them phone support because they violated the fundamental rule of using the Interface not the Implementation.
To solve your immediate problem, you may want to take a look at Pex, which is a tool from Microsoft Research that addresses this type of problem by finding all relevant boundary values so that all code paths can be executed.
That said, had you used Test-Driven Development (TDD), you would never have found yourself in that situation, since it would have been near-impossible to write unit tests that drive this kind of API.
A method like the one you describe sounds like it tries to do too many things at once. One of the key benefits of TDD is that it drives you to implement your code from small, composable objects instead of big classes with inflexible interfaces.
As mentioned, InternalsVisibleTo("AssemblyName") is a good place to start when testing legacy code.
Internal methods are still private in the sense that assemblies outside of the current assembly cannot see the methods. Check MSDN for more information.
Another thing would be to refactor the large method into smaller, more defined classes. Check this question I asked about a similar problem, testing large methods.
Personally I'd make the constituent methods internal, apply InternalsVisibleTo and test the different bits.
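For example, applying InternalsVisibleTo looks like this in AssemblyInfo.cs (or any source file) of the production assembly; "MyProject.Tests" is a placeholder for the actual test assembly name:
using System.Runtime.CompilerServices;

// Makes internal members of this assembly visible to the test assembly.
[assembly: InternalsVisibleTo("MyProject.Tests")]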
White-box unit testing can certainly still be effective - although it's generally more brittle than black-box testing (i.e. you're more likely to have to change the tests if you change the implementation).
HighlyComplexCalculationOnAListOfHairyObjects() is a code smell, an indication that the class that contains it is potentially doing too much and should be refactored via Extract Class. The methods of this new class would be public, and therefore testable as units.
One issue with such a refactoring is that the original class held a lot of state that the new class would need. That is another code smell - one that indicates the state should be moved into a value object.
I've seen (and probably written) many such hairy objects. If it's hard to test, it's usually a good candidate for refactoring. Of course, one problem with that is that the first step to refactoring is making sure it passes all tests first.
Honestly, though, I'd look to see if there isn't some way you can break that code down into a more manageable section.
Get the book Working Effectively with Legacy Code by Michael Feathers. I'm about a third of the way through it, and it has multiple techniques for dealing with these types of problems.
Your question implies that there are many paths of execution throughout the subsystem. The first idea that pops into mind is "refactor." Even if your API remains a one-method interface, testing shouldn't be "impossible".
trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible
If that's true, try a less ambitious goal. Start by testing specific, high-usage paths through the code, paths that you suspect may be fragile, and paths for which you've had reported bugs.
Refactoring the method into separate sub-algorithms will make your code more testable (and might be beneficial in other ways), but if your problem is a ridiculous number of interactions between those sub-algorithms, extract method (or extract to strategy class) won't really solve it: you'll have to build up a solid suite of tests one at a time.
