I have worked with code which had NUnit tests written, but I have never worked with mocking frameworks. What are they? I understand dependency injection and how it helps improve testability: all dependencies can be mocked while unit testing. But then why do we need mocking frameworks? Can't we simply create mock objects ourselves and provide the dependencies? Am I missing something here?
Thanks.
It makes mocking easier. Mocking frameworks usually allow you to express testable assertions that refer to the interaction between objects. Here is an example:
var extension = MockRepository
    .GenerateMock<IContextExtension<StandardContext>>();
var ctx = new StandardContext();
ctx.AddExtension(extension);

extension.AssertWasCalled(
    e => e.Attach(null),
    o => o.Constraints(Is.Equal(ctx)));
You can see that I explicitly test that the Attach method of the IContextExtension was called and that the input parameter was said context object. It would make my test fail if that did not happen.
You can create mock objects by hand and use them during testing using Dependency Injection frameworks...but letting a mocking framework generate your mock objects for you saves time.
As always, if using the framework adds too much complexity to be useful then don't use it.
Sometimes when working with third-party libraries, or even with some aspects of the .NET Framework, it is extremely difficult to write tests for some situations - for example, an HttpContext or a SharePoint object. Creating mock objects for those can become very cumbersome, so mocking frameworks take care of the basics so we can spend our time focusing on what makes our applications unique.
Using a mocking framework can be a much more lightweight and simple solution to provide mocks than actually creating a mock object for every object you want to mock.
For example, mocking frameworks are especially useful to do things like verify that a call was made (or even how many times that call was made). Making your own mock objects to check behaviors like this (while mocking behavior is a topic in itself) is tedious, and yet another place for you to introduce a bug.
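To make that concrete, here is a minimal sketch using Moq (one such framework). IEmailSender, NotificationService and the test itself are all invented for the example:

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    // Hypothetical dependency and class under test, defined here only so the sketch stands alone.
    public interface IEmailSender
    {
        void Send(string to, string body);
    }

    public class NotificationService
    {
        private readonly IEmailSender _sender;

        public NotificationService(IEmailSender sender)
        {
            _sender = sender;
        }

        public void NotifyAll(IEnumerable<string> recipients)
        {
            foreach (var recipient in recipients)
                _sender.Send(recipient, "You have a new message.");
        }
    }

    [TestFixture]
    public class NotificationServiceTests
    {
        [Test]
        public void Sends_one_email_per_recipient()
        {
            var sender = new Mock<IEmailSender>();
            var service = new NotificationService(sender.Object);

            service.NotifyAll(new[] { "a@example.com", "b@example.com" });

            // The mock records every call, so asserting on the call count is a one-liner.
            sender.Verify(s => s.Send(It.IsAny<string>(), It.IsAny<string>()), Times.Exactly(2));
        }
    }

Hand-rolling a double that counts calls and matches arguments is certainly possible, but it is exactly the kind of bookkeeping described above as tedious.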
Check out Rhino Mocks for an example of how powerful a mocking framework can be.
Mock objects take the place of any large/complex/external objects your code needs access to in order to run.
They are beneficial for a few reasons:
Your tests are meant to run fast and easily. If your code depends on, say, a database connection, then you would need to have a fully configured and populated database running in order to run your tests. This can get annoying, so you create a replacement - a "mock" - of the database connection object that just simulates the database.
You can control exactly what output comes out of the mock objects and can therefore use them as controllable data sources for your tests (a small sketch of this follows after this list).
You can create the mock before you create the real object in order to refine its interface. This is useful in Test-driven Development.
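As promised above, here is a minimal sketch of a mock used as a controllable data source; IAccountStore and FakeAccountStore are names invented for the example:

    using System.Collections.Generic;

    // Invented abstraction over the real database connection.
    public interface IAccountStore
    {
        decimal GetBalance(string accountId);
    }

    // A hand-rolled mock/stub: no database required, and the test decides exactly what data comes back.
    public class FakeAccountStore : IAccountStore
    {
        private readonly Dictionary<string, decimal> _balances = new Dictionary<string, decimal>();

        public void SetBalance(string accountId, decimal balance)
        {
            _balances[accountId] = balance;
        }

        public decimal GetBalance(string accountId)
        {
            return _balances[accountId];
        }
    }

A test news up a FakeAccountStore, seeds it with SetBalance, and hands it to the code under test in place of the real connection; a mocking framework simply generates this kind of class for you on the fly.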
The only reason to use a mocking library is that it makes mocking easier.
Sure, you can do it all without the library, and that is fine if it's simple, but as soon as they start getting complicated, libraries are much easier.
Think of this in terms of sorting algorithms, sure anyone can write one, but why? If the code already exists and is simple to call... why not use it?
You certainly can mock your dependencies manually, but with a framework it takes a lot of the tedious work away. Also the assertions usually available make it worth it to learn.
Mocking frameworks allow you to isolate the units of code that you wish to test from that code's dependencies. They also allow you to simulate various behaviors of your code's dependencies in a test environment that might be difficult to set up or reproduce otherwise.
For example, if I have a class A containing business rules and logic that I wish to test, but class A depends on data-access classes, other business classes, even UI classes, and so on, these other classes can be mocked to behave in a certain manner (or in no manner at all, in the case of loose mock behavior) so that the logic within class A can be tested against every imaginable way these other classes could conceivably behave in a production environment.
To give a deeper example, suppose that your class A invokes a method on a data access class such as
public bool IsOrderOnHold(int orderNumber) {}
then a mock of that data access class could be set up to return true every time or false every time, to test how your class A responds to such circumstances.
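As a rough sketch of that (using Moq; IOrderData, class A's constructor and its CanShip method are all assumptions made up for the example - only IsOrderOnHold comes from the text above):

    using Moq;
    using NUnit.Framework;

    // Assumed shape of the data access dependency.
    public interface IOrderData
    {
        bool IsOrderOnHold(int orderNumber);
    }

    // Assumed shape of the class under test.
    public class A
    {
        private readonly IOrderData _data;

        public A(IOrderData data)
        {
            _data = data;
        }

        public bool CanShip(int orderNumber)
        {
            return !_data.IsOrderOnHold(orderNumber);
        }
    }

    [TestFixture]
    public class ClassATests
    {
        [Test]
        public void Order_on_hold_cannot_be_shipped()
        {
            var data = new Mock<IOrderData>();
            data.Setup(d => d.IsOrderOnHold(It.IsAny<int>())).Returns(true);  // simulate the "on hold" case

            var a = new A(data.Object);

            Assert.IsFalse(a.CanShip(42));
        }
    }

A second test would flip Returns(true) to Returns(false) and assert the opposite, covering both circumstances without touching a database.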
I'd claim you don't. Writing test doubles isn't a large chore 9 times out of 10. Most of the time it's done almost entirely automatically by just asking ReSharper to implement an interface for you, and then you add the minor detail needed for this double (because you aren't doing a bunch of logic and creating these intricate super test doubles, right? Right?)
"But why would I want my test project bloated with a bunch of test doubles" you may ask. Well you shouldn't. the DRY principle holds for tests as well. Create GOOD test doubles that are reusable and have descriptive names. This makes your tests more readable too.
One thing it DOES make harder is over-using test doubles. I tend to agree with Roy Osherove and Uncle Bob: you really don't want to create a mock object with some special configuration all that often. That is in itself a design smell. Using a framework, it's so easy to use test doubles with intricate logic in just about every test, and in the end you find that you haven't really tested your production code; you have merely tested the god-awful Frankenstein's-monster mess of mocks containing mocks containing more mocks. You'll never "accidentally" do this if you write your own doubles.
Of course, someone will point out that there are times when you "have" to use a framework, not doing so would be plain stupid. Sure, there are cases like that. But you probably don't have that case. Most people don't, and only for a small part of the code, or the code itself is really bad.
I'd recommend anyone (ESPECIALLY a beginner) to stay away from frameworks and learn how to get by without them; then later, when they feel that they really have to, they can use whatever framework they think is most suitable, but by then it'll be an informed decision and they'll be far less likely to abuse the framework to create bad code.
Well, mocking frameworks make my life much easier and less tedious, so I can spend time actually writing code. Take, for instance, Mockito (in the Java world):
// mock creation
List mockedList = mock(List.class);

// using mock object
mockedList.add("one");
mockedList.clear();

// verification
verify(mockedList).add("one");
verify(mockedList).clear();

// stubbing using built-in anyInt() argument matcher
when(mockedList.get(anyInt())).thenReturn("element");

// stubbing using hamcrest (let's say isValid() returns your own hamcrest matcher);
// contains() returns a boolean, so the stubbed value must be a boolean
when(mockedList.contains(argThat(isValid()))).thenReturn(true);

// following prints "element"
System.out.println(mockedList.get(999));
Though this is a contrived example, if you replace List.class with MyComplex.class then the value of having a mocking framework becomes evident. You could write your own or do without, but why would you want to go that route?
I first grok'd why I needed a mocking framework when I compared writing test doubles by hand for a set of unit tests (each test needed slightly different behaviour so I was creating subclasses of a base fake type for each test) with using something like RhinoMocks or Moq to do the same work.
Simply put it was much faster to use a framework to generate all of the fake objects I needed rather than writing (and debugging) my own fakes by hand.
Rhino Mocks is tightly coupled with the design pattern of using dependency injection and constructor injection, but I typically don't follow the dependency-injection paradigm and don't like to re-architect my solution just for my test tool.
Take this scenario:
class MyClass {
    public void MyMethod(...) {
        var x = new Something(...);
        x.A();
        x.B();
        x.C();
    }
}
Would it be quite typical and acceptable to instead do the following, since this is not a case where I would generally wish to inject the dependency - it can be considered part of MyClass' behaviour/logic.
class MyClass {
    public void MyMethod(...) {
        var x = NewSomething(...);
        x.A();
        x.B();
        x.C();
    }

    protected virtual Something NewSomething(...) {
        return new Something(...);
    }
}
Now I can (I think) extend MyClass either as a concrete class in my test project, or using Rhino... right? Is this a) correct, b) a reasonably sensible, commonplace way of doing things?
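For concreteness, in the test project I would then write something like the following (a rough sketch, keeping the (...) placeholders from above; FakeSomething is a hypothetical hand-rolled stand-in for Something):

    class TestableMyClass : MyClass {
        // Exposed so the test can inspect what MyMethod did with it.
        public Something Created;

        protected override Something NewSomething(...) {
            Created = new FakeSomething(...);
            return Created;
        }
    }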
Another approach I can see other than DI could be that I actually have a ClassFactory class in my project which creates all instances as needed; then I find a way to mock/stub that in my tests. But this seems 'smelly' to me, though I'm aware it is a pattern some people use.
I have used this quite a few times when trying to make legacy code more testable, although it really likes to come back and bite you later.
Basically, mocks/fakes/test doubles are your enemy. You should hate them and avoid them (edit note: I'm not saying don't use them, I'm saying use them only when you HAVE to). It all follows from the paradigm that all code is bad code, and we should write as little code as necessary to complete the task. Having a bunch of test doubles overriding virtual methods makes your code very rigid. It makes it really painful to change a method signature, even if your production code only invokes the method in a single place, because your test doubles will also break. It also makes it painful to later clean up the mess and actually inject the dependency (and yes, I would argue injecting stuff is Objectively Better(tm)).
What it comes down to is basically: yes, doing this will make your code more testable, but without many of the benefits you usually get from testing. You won't get better design, you'll have rigid code, and so on.
I won't really point any fingers though, since, like I said, I have used this on occasion just to get a test up to see if something works. It can be a temporary solution that is "good enough", but my final answer is "probably don't, if you can at all avoid it (and still have tested code)".
You would be sacrificing the Single Responsibility Principle (SRP) with the method that you're suggesting, and arguably making your code harder to read, understand and maintain; there's a smell right there, before we even get to talking about tests.
If you plan to run tests, and at the same time not follow SOLID principles, then where are you saving on time, readability, or agility? (I honestly would love to know.)
What the DI principle allows you to do is to run your tests completely in isolation from your dependencies, which is exactly what you want to be doing.
The SOLID principles of OOP have many good arguments going for them. I'm always open to learning and being wrong, but I must say I may have been blinded by several years of SOLID code and tens of thousands of green unit tests securing my projects.
I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. Insert new user is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I'm a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether it's your class that fails or one of its dependencies.
I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests if the implementation changes. Surely you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check if it correctly calls the underlying data access provider, which should be mocked). But such tests usually are very time-consuming and brittle. So, it's better to write some acceptance tests, which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally, as in create an interface IUserRepository, so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods (i.e. _userRepository.StoreUser(user)) you need on it.
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat instantiation of your objects and lifecycle management as a separate concern.
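A minimal sketch of what that looks like with Moq (the User type and everything beyond CreateUser/StoreUser/IUserRepository are assumptions for the example):

    using Moq;
    using NUnit.Framework;

    // Assumed minimal shapes; your real types will differ.
    public class User
    {
        public string Name { get; set; }
    }

    public interface IUserRepository
    {
        void StoreUser(User user);
    }

    public class MyService
    {
        private readonly IUserRepository _userRepository;

        public MyService(IUserRepository userRepository)
        {
            _userRepository = userRepository;
        }

        public void CreateUser(User user)
        {
            _userRepository.StoreUser(user);
        }
    }

    [TestFixture]
    public class MyServiceTests
    {
        [Test]
        public void CreateUser_stores_the_new_user_in_the_repository()
        {
            var repository = new Mock<IUserRepository>();
            var service = new MyService(repository.Object);  // dependency supplied via the constructor

            service.CreateUser(new User { Name = "alice" });

            // Assert on the interaction with the mocked repository, not on database state.
            repository.Verify(r => r.StoreUser(It.IsAny<User>()), Times.Once());
        }
    }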
tl;dr: create a mock!
I have two sets of testing libraries. One is for unit tests, where I mock stuff. I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me a possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests or functional tests or whatever you want to call them. This one makes sure that a specific use case works as expected - e.g. creating a tag from the webpage will do what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test either has arguments or attributes that require mock objects to be created before testing can begin then—that is where the trouble begins, right?
What sources do we have for constructing mocks?
Simple creation with:
    make or default create
    make with hand-coded base-type arguments
Complex creation with:
    make with database-supplied arguments
    make with other mock objects (start this process again)
Object factories:
    Production code based factories
    Test code based factories
    Data-repo based data (vs hand-coded)
Gleaned:
    Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goal is:
No mocks needed. Strive for this above all.
Simple mock creation with no arguments.
Simple mock creation with base-type arguments.
Simple mock creation with DB-repo sourced base-type arguments.
Complex mock creation using production code object factories.
Complex mock creation using test-code object factories.
Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant: replicating it with common TDD facilities is either impossible or nearly so. This includes the "implies" Boolean keyword as well.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and post-conditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!
I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with old adage "every problem in object oriented programming can be solved by introducing new layer of indirection" this project was sporting multiple layers of indirection. The side-effect was that fair amount of code looked like following:
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}
Now, because of the emphasis on unit testing and maintaining high code coverage, every piece of code had unit tests written against it. Therefore this little method would have three unit tests present. Those would check:
If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:
NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));
That obviously turned the "red, green, refactor" rule into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework, like Moq, would help with refactoring, but it would require a sweep through all existing unit tests.
What is the ideal way to handle this situation?
A) Keep smaller levels of layers, so that those forwarding calls do not happen anymore.
B) Do not test those forwarding methods, as they do not contain business logic. For purposes of coverage, mark them all with the ExcludeFromCodeCoverage attribute.
C) Test only if proper method is invoked, without checking return values, exceptions, etc.
D) Suck it up, and keep writing those tests ;)
Either B or C. That's the problem with such general requirements ("every method must have a unit test", "every line of code needs to be covered") - sometimes the benefit they provide is not worth the cost. If it's something you came up with, I suggest rethinking this approach. "We must have 95% code coverage" might be appealing on paper, but in practice it quickly spawns problems like the one you have.
Also, the code you're testing is something I'd call trivial. Having three tests for it is most likely overkill. For that single line of code, you'll have to maintain something like 40 more. Unless your software is mission-critical (which might explain the high-coverage requirement), I'd skip those tests.
One of the (IMHO) most pragmatic pieces of advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post - What should you test?
Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool to figure out what is NOT covered, so that we can figure out whether we forgot important unit test cases or actually have some dead code somewhere.)
If I write a test but the test ends up just being a "duplication" of the implementation, or worse, if the test is harder to understand than the actual implementation, then really such a test should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details. Tests are about "what" should happen, not "how" it will be done. Since you've tagged your question with "TDD", I would add that TDD is a design practice. So if I already know, 100% in advance, what the design of the thing I'm going to implement will be, then there is no point for me to use TDD and write unit tests (but I will always, in all cases, have a high-level acceptance test covering that code). That happens often when the thing to design is really simple, like in your example. TDD is not about testing and code coverage, but really about helping us design and document our code. There is no point in using a design tool or a documentation tool for designing/documenting simple/obvious things.
In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test doesn't add any value in terms of documentation. So I'd happily erase it.
On top of that, such tests are horribly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term when you need to refactor, since any time you want to change the implementation they will break.
What I'd suggest is to not write such tests, but instead have higher-level component tests or fast integration/acceptance tests that exercise these layers without knowing anything at all about their inner workings.
I think one of the most important things to keep in mind with unit tests is that it doesn't necessarily matter how the code is implemented today, but rather what happens when the tested code, direct or indirect, is modified in the future.
If you ignore those methods today and they are critical to your application's operation, then when someone decides to implement a new balanceProvider at some point down the road, or decides that the redirection no longer makes sense, you will most likely have a failure point.
So, if this were my application, I would first look to reduce the forward-only calls to a bare minimum (reducing the code complexity), then introduce a mocking framework that does not rely on string values for method names.
A couple of things to add to the discussion here.
Switch to a better mocking framework immediately and incrementally. We switched from Rhino Mocks to Moq about 3 years ago. All new tests used Moq, and often when we change a test class we switch it over. But areas of the code that haven't changed much, or that have huge test classes, are still using Rhino Mocks, and that is OK. The code we work with from day to day is much better as a result of making the switch. All test changes can happen in this incremental way.
You are writing too many tests. An important thing to keep in mind in TDD is that you should only write code to satisfy a red test, and you should only write a test to specify some unwritten code. So in your example, three tests is overkill, because at most two are needed to force you to write all of that production code. The exception test does not make you write any new code, so there is no need to write it. I would probably only write this test:
[Test]
public void IsOverdraftDelegatesToBalanceProvider()
{
    var result = RandomBool();
    providerMock.Setup(p => p.IsOverdraft()).Returns(result);
    Assert.That(myObject.IsOverdraft(), Is.EqualTo(result));
}
Don't create useless layers of indirection. Mostly, unit tests will tell you whether you need indirection. Most indirection needs can be solved by the dependency inversion principle, or "couple to abstractions, not concretions". Some layers are needed for other reasons (I make WCF ServiceContract implementations a thin pass-through layer, and I don't test that pass-through). If you see a useless layer of indirection: 1) make sure it really is useless, then 2) delete it. Code clutter has a huge cost over time. ReSharper makes this ridiculously easy and safe.
Also, for meaningful delegation or delegation scenarios you can't get rid of but need to test, something like this makes it a lot easier.
I'd say D) Suck it up and keep writing those tests ;) and try to see if you can replace NMock with Moq.
It might not seem necessary, and even though it's just delegation now, the tests verify that it's calling the right method with the right parameters, and that the method itself is not doing anything funky before returning values. So it's a good idea to cover them in tests. To make it easier, use Moq or a similar framework that makes refactoring much less painful.
I'm wondering how I should be testing this sort of functionality via NUnit.
public void HighlyComplexCalculationOnAListOfHairyObjects()
{
    // calls 19 private methods totalling ~1000 lines of code + comments + whitespace
}
From my reading I see that NUnit isn't designed to test private methods, for philosophical reasons about what unit testing should be; but trying to create a set of test data that fully exercises all the functionality involved in the computation would be nearly impossible. Meanwhile, the calculation is broken down into a number of smaller methods that are reasonably discrete. They are not, however, things that make logical sense to do independently of each other, so they're all private.
You've conflated two things. The Interface (which might expose very little) and this particular Implementation class, which might expose a lot more.
Define the narrowest possible Interface.
Define the Implementation class with testable (non-private) methods and attributes. It's okay if the class has "extra" stuff. (A sketch of this split follows below.)
All applications should use the Interface, and -- consequently -- don't have type-safe access to the exposed features of the class.
What if "someone" bypasses the Interface and uses the Class directly? They are sociopaths -- you can safely ignore them. Don't provide them phone support because they violated the fundamental rule of using the Interface not the Implementation.
To solve your immediate problem, you may want to take a look at Pex, which is a tool from Microsoft Research that addresses this type of problem by finding all relevant boundary values so that all code paths can be executed.
That said, had you used Test-Driven Development (TDD), you would never have found yourself in that situation, since it would have been near-impossible to write unit tests that drive this kind of API.
A method like the one you describe sounds like it tries to do too many things at once. One of the key benefits of TDD is that it drives you to implement your code from small, composable objects instead of big classes with inflexible interfaces.
As mentioned, InternalsVisibleTo("AssemblyName") is a good place to start when testing legacy code.
Internal methods are still private in the sense that assemblies outside of the current assembly cannot see them. Check MSDN for more information.
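For reference, wiring that up is one attribute in the production assembly; the test assembly name "MyApp.Tests" below is an assumption, and a signed test assembly would also need its public key in the string:

    using System.Runtime.CompilerServices;

    // Typically placed in Properties/AssemblyInfo.cs of the production project.
    [assembly: InternalsVisibleTo("MyApp.Tests")]

    public class ComplexCalculation
    {
        public void Run()
        {
            PrepareData();
            Crunch();
        }

        // internal rather than private: invisible to other assemblies,
        // but callable from the MyApp.Tests project thanks to the attribute above.
        internal void PrepareData() { /* work elided */ }
        internal void Crunch() { /* work elided */ }
    }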
Another thing would be to refactor the large method into smaller, more clearly defined classes. Check this question I asked about a similar problem, testing large methods.
Personally I'd make the constituent methods internal, apply InternalsVisibleTo and test the different bits.
White-box unit testing can certainly still be effective - although it's generally more brittle than black-box testing (i.e. you're more likely to have to change the tests if you change the implementation).
HighlyComplexCalculationOnAListOfHairyObjects() is a code smell, an indication that the class that contains it is potentially doing too much and should be refactored via Extract Class. The methods of this new class would be public, and therefore testable as units.
One issue with such a refactoring is that the original class held a lot of state that the new class would need. That is another code smell, one that indicates the state should be moved into a value object.
I've seen (and probably written) many such hairy objects. If it's hard to test, it's usually a good candidate for refactoring. Of course, one problem with that is that the first step of refactoring is making sure it passes all tests first.
Honestly, though, I'd look to see if there isn't some way you can break that code down into a more manageable section.
Get the book Working Effectively with Legacy Code by Michael Feathers. I'm about a third of the way through it, and it has multiple techniques for dealing with these types of problems.
Your question implies that there are many paths of execution throughout the subsystem. The first idea that pops into mind is "refactor." Even if your API remains a one-method interface, testing shouldn't be "impossible".
trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible
If that's true, try a less ambitious goal. Start by testing specific, high-usage paths through the code, paths that you suspect may be fragile, and paths for which you've had reported bugs.
Refactoring the method into separate sub-algorithms will make your code more testable (and might be beneficial in other ways), but if your problem is a ridiculous number of interactions between those sub-algorithms, extract method (or extract to strategy class) won't really solve it: you'll have to build up a solid suite of tests one at a time.
In order to help my team write testable code, I came up with this simple list of best practices for making our C# code base more testable. (Some of the points refer to limitations of Rhino Mocks, a mocking framework for C#, but the rules may apply more generally as well.) Does anyone have any best practices that they follow?
To maximize the testability of code, follow these rules:
Write the test first, then the code. Reason: This ensures that you write testable code and that every line of code gets tests written for it.
Design classes using dependency injection. Reason: You cannot mock or test what cannot be seen.
Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Reason: This allows the business logic to be tested while the parts that can't be tested (the UI) are minimized.
Do not write static methods or classes. Reason: Static methods are difficult or impossible to isolate and Rhino Mocks is unable to mock them.
Program off interfaces, not classes. Reason: Using interfaces clarifies the relationships between objects. An interface should define a service that an object needs from its environment. Also, interfaces can be easily mocked using Rhino Mocks and other mocking frameworks.
Isolate external dependencies. Reason: Unresolved external dependencies cannot be tested.
Mark as virtual the methods you intend to mock. Reason: Rhino Mocks is unable to mock non-virtual methods.
Definitely a good list. Here are a few thoughts on it:
Write the test first, then the code.
I agree, at a high level. But, I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.
Design classes using dependency injection.
Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.
Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter.
Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.
Do not write static methods or classes.
Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So, perhaps this is one of those Rhino Mock specific rules you mentioned.
Program off interfaces, not classes.
I agree, but for a slightly different reason. Interfaces provide a great deal of flexibility to the software developer - beyond just support for various mock object frameworks. For example, it is not possible to support DI properly without interfaces.
Isolate external dependencies.
Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database or something else. This is especially important when your team doesn't control the dependency (a.k.a. external).
Mark as virtual the methods you intend to mock.
That's a limitation of Rhino Mocks. In an environment that prefers hand coded stubs over a mock object framework, that wouldn't be necessary.
And, a couple of new points to consider:
Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.
Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected (a small sketch follows after this list).
Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to previous point.)
Avoid the temptation to refactor duplication out of your unit tests into abstract base classes, or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.
Implement Continuous Integration. Check-in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice, per se; but it is an incredible tool for keeping your software clean and fully integrated.)
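As promised above, a small sketch of the Arrange/Act/Assert layout (the Account class and the test are invented for the example):

    using NUnit.Framework;

    // Invented class under test, defined inline so the sketch stands alone.
    public class Account
    {
        public decimal Balance { get; private set; }

        public Account(decimal openingBalance)
        {
            Balance = openingBalance;
        }

        public void Withdraw(decimal amount)
        {
            Balance -= amount;
        }
    }

    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Withdraw_reduces_the_balance()
        {
            // Arrange: build the object under test and its inputs.
            var account = new Account(100m);

            // Act: perform exactly one behaviour.
            account.Withdraw(30m);

            // Assert: state the expected, observable outcome.
            Assert.AreEqual(70m, account.Balance);
        }
    }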
If you are working with .NET 3.5, you may want to look into the Moq mocking library - it uses expression trees and lambdas to remove the non-intuitive record-replay idiom of most other mocking libraries.
Check out this quickstart to see how much more intuitive your test cases become, here is a simple example:
// ShouldExpectMethodCallWithVariable
int value = 5;
var mock = new Mock<IFoo>();
mock.Expect(x => x.Duplicate(value)).Returns(() => value * 2);
Assert.AreEqual(value * 2, mock.Object.Duplicate(value));
Know the difference between fakes, mocks and stubs and when to use each.
Avoid over specifying interactions using mocks. This makes tests brittle.
This is a very helpful post!
I would add that it is always important to understand the context and the System Under Test (SUT). Following TDD principles to the letter is much easier when you're writing new code in an environment where the existing code follows the same principles. But when you're writing new code in a non-TDD legacy environment, you find that your TDD efforts can quickly balloon far beyond your estimates and expectations.
For some of you, who live in an entirely academic world, timelines and delivery may not be important, but in an environment where software is money, making effective use of your TDD effort is critical.
TDD is highly subject to the Law of Diminishing Marginal Return. In short, your efforts towards TDD are increasingly valuable until you hit a point of maximum return, after which, subsequent time invested into TDD has less and less value.
I tend to believe that TDD's primary value is in boundary (blackbox) as well as in occasional whitebox testing of mission-critical areas of the system.
The real reason for programming against interfaces is not to make life easier for Rhino, but to clarify the relationships between objects in the code. An interface should define a service that an object needs from its environment. A class provides a particular implementation of that service. Read Rebecca Wirfs-Brock's "Object Design" book on Roles, Responsibilities, and Collaborators.
Good list. One of the things that you might want to establish - and I can't give you much advice since I'm just starting to think about it myself - is when a class should go in a different library, namespace, or nested namespace. You might even want to figure out a list of libraries and namespaces beforehand, and mandate that the team has to meet and agree before merging two or adding a new one.
Oh, I just thought of something else I do that you might want to as well. I generally have a unit test library with a test-fixture-per-class policy, where each fixture goes into a namespace corresponding to its class. I also tend to have another library of tests (integration tests?) which is in a more BDD style. This allows me to write tests to spec out what each method should do as well as what the application should do overall.
Here's a another one that I thought of that I like to do.
If you plan to run tests from the unit test GUI, as opposed to from TestDriven.Net or NAnt, then I've found it easier to set the unit testing project type to console application rather than class library. This allows you to run tests manually and step through them in debug mode (which the aforementioned TestDriven.Net can actually do for you).
Also, I always like to have a Playground project open for testing bits of code and ideas I'm unfamiliar with. This should not be checked into source control. Even better, it should be in a separate source control repository on the developer's machine only.