Best Practices of Test Driven Development Using C# and RhinoMocks [closed] - c#

In order to help my team write testable code, I came up with this simple list of best practices for making our C# code base more testable. (Some of the points refer to limitations of Rhino Mocks, a mocking framework for C#, but the rules may apply more generally as well.) Does anyone have any best practices that they follow?
To maximize the testability of code, follow these rules:
Write the test first, then the code. Reason: This ensures that you write testable code and that every line of code gets tests written for it.
Design classes using dependency injection. Reason: You cannot mock or test what cannot be seen.
Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Reason: The business logic can be tested while the parts that can't be tested (the UI) are kept as small as possible.
Do not write static methods or classes. Reason: Static methods are difficult or impossible to isolate and Rhino Mocks is unable to mock them.
Program to interfaces, not classes. Reason: Using interfaces clarifies the relationships between objects. An interface should define a service that an object needs from its environment. Also, interfaces can be easily mocked using Rhino Mocks and other mocking frameworks.
Isolate external dependencies. Reason: Unresolved external dependencies cannot be tested.
Mark as virtual the methods you intend to mock. Reason: Rhino Mocks is unable to mock non-virtual methods.
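To make several of these rules concrete together (dependency injection, programming to interfaces, isolating an external dependency), here is a minimal sketch; OrderService, IOrderRepository, and Order are illustrative names, not part of the original list:
public class Order { }

public interface IOrderRepository
{
    void Save(Order order);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    // The repository is injected through an interface, so a test can supply
    // a Rhino Mocks mock (or a hand-rolled stub) instead of a real database.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void PlaceOrder(Order order)
    {
        _repository.Save(order);
    }
}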

Definitely a good list. Here are a few thoughts on it:
Write the test first, then the code.
I agree, at a high level. But, I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.
Design classes using dependency injection.
Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.
Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter.
Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.
Do not write static methods or classes.
Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So, perhaps this is one of those Rhino Mock specific rules you mentioned.
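For instance, here is a minimal sketch of testing a pure static method directly with NUnit; PriceCalculator and ApplyDiscount are made-up names, and no mocks are involved because the method has no external dependencies:
public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - price * percent / 100m;
    }
}

[Test]
public void ApplyDiscount_ReducesPriceByGivenPercentage()
{
    Assert.AreEqual(90m, PriceCalculator.ApplyDiscount(100m, 10m));
}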
Program to interfaces, not classes.
I agree, but for a slightly different reason. Interfaces provide a great deal of flexibility to the software developer - beyond just support for various mock object frameworks. For example, it is not possible to support DI properly without interfaces.
Isolate external dependencies.
Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database or something else. This is especially important when your team doesn't control the dependency (a.k.a. external).
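As one hedged sketch of that idea, the file system (an external dependency) can be hidden behind an interface you own; IFileStore and DiskFileStore are illustrative names, not from the original answer:
public interface IFileStore
{
    string ReadAllText(string path);
}

public class DiskFileStore : IFileStore
{
    public string ReadAllText(string path)
    {
        // Delegates to the real file system; consumers depend only on IFileStore,
        // so tests can substitute an in-memory stub.
        return System.IO.File.ReadAllText(path);
    }
}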
Mark as virtual the methods you intend to mock.
That's a limitation of Rhino Mocks. In an environment that prefers hand coded stubs over a mock object framework, that wouldn't be necessary.
And, a couple of new points to consider:
Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.
Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected. (A small sketch follows this list.)
Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to previous point.)
Avoid the temptation to refactor duplication out of your unit tests into abstract base classes, or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.
Implement Continuous Integration. Check in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice per se, but it is an incredible tool for keeping your software clean and fully integrated.)
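Here is the Arrange/Act/Assert sketch promised above, using NUnit; Calculator and Add are made-up names for illustration:
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[Test]
public void Add_ReturnsSumOfOperands()
{
    // Arrange: set up the object under test and its inputs.
    var calculator = new Calculator();

    // Act: perform the single action being tested.
    var sum = calculator.Add(2, 3);

    // Assert: verify the expected outcome.
    Assert.AreEqual(5, sum);
}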

If you are working with .NET 3.5, you may want to look into the Moq mocking library - it uses expression trees and lambdas to remove the non-intuitive record-replay idiom of most other mocking libraries.
Check out the Moq quickstart to see how much more intuitive your test cases become; here is a simple example:
// ShouldExpectMethodCallWithVariable
int value = 5;
var mock = new Mock<IFoo>();
mock.Expect(x => x.Duplicate(value)).Returns(() => value * 2); // note: later Moq versions use Setup instead of Expect
Assert.AreEqual(value * 2, mock.Object.Duplicate(value));

Know the difference between fakes, mocks, and stubs, and when to use each.
Avoid over-specifying interactions using mocks; doing so makes tests brittle.
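As a rough, hedged illustration of the distinction: a stub just returns canned values so the code under test can proceed; a mock records interactions so the test can assert on them; a fake carries a working but simplified implementation (for example, an in-memory repository). IMailSender and the two doubles below are made-up names for illustration:
public interface IMailSender
{
    bool Send(string to, string body);
}

// Stub: returns a canned value, no verification.
public class StubMailSender : IMailSender
{
    public bool Send(string to, string body) { return true; }
}

// Hand-rolled mock: records the interaction so a test can assert on it.
public class MockMailSender : IMailSender
{
    public int SendCount;
    public bool Send(string to, string body) { SendCount++; return true; }
}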

This is a very helpful post!
I would add that it is always important to understand the Context and System Under Test (SUT). Following TDD principles to the letter is much easier when you're writing new code in an environment where existing code follows the same principles. But when you're writing new code in a non-TDD legacy environment, you find that your TDD efforts can quickly balloon far beyond your estimates and expectations.
For some of you, who live in an entirely academic world, timelines and delivery may not be important, but in an environment where software is money, making effective use of your TDD effort is critical.
TDD is highly subject to the Law of Diminishing Marginal Return. In short, your efforts towards TDD are increasingly valuable until you hit a point of maximum return, after which, subsequent time invested into TDD has less and less value.
I tend to believe that TDD's primary value is in boundary (blackbox) as well as in occasional whitebox testing of mission-critical areas of the system.

The real reason for programming against interfaces is not to make life easier for Rhino, but to clarify the relationships between objects in the code. An interface should define a service that an object needs from its environment. A class provides a particular implementation of that service. Read Rebecca Wirfs-Brock's "Object Design" book on Roles, Responsibilities, and Collaborators.

Good list. One of the things that you might want to establish - and I can't give you much advice here since I'm just starting to think about it myself - is when a class should go in a different library, namespace, or nested namespace. You might even want to figure out a list of libraries and namespaces beforehand and mandate that the team has to meet and agree before merging two or adding a new one.
Oh, just thought of something that I do that you might want to do as well. I generally have a unit-test library with a test-fixture-per-class policy, where each test goes into a corresponding namespace. I also tend to have another library of tests (integration tests?) which is written in a more BDD style. This allows me to write tests to spec out what each method should do as well as what the application should do overall.

Here's another one that I thought of that I like to do.
If you plan to run tests from the unit-test GUI, as opposed to from TestDriven.Net or NAnt, then I've found it easier to set the unit-testing project type to console application rather than class library. This allows you to run tests manually and step through them in debug mode (which the aforementioned TestDriven.Net can actually do for you).
Also, I always like to have a Playground project open for testing bits of code and ideas I'm unfamiliar with. This should not be checked into source control. Even better, it should be in a separate source control repository on the developer's machine only.

Related

What defines an "isolated" test?

I have a class called Customer and I wish to unit test this class and its public interface. To be able to unit test, I have to test Customer in isolation from its real dependencies. Other than Customer, I have a Monster class that I have created.
My application uses a game framework that defines Shape (representing a shape) and Vec2F (representing a vector used for math). Customer relies on (uses) Shape and Vec2F. It also uses a Monster.
Now I have to mock these real dependencies for my tests to be unit tests and not integration tests. However, what defines "real" dependencies? Like I would understand why I would mock my own implementation of Monster but Vec2F and Shape from the framework I use seem to be such fundamental structures.
Tests should be isolated from other tests. For this you need to mock any global state consumed by the system under test.
If the system under test doesn't use global/shared state, mock nothing.
In a perfect world, where setting up a new database takes milliseconds, you could create a new database for every test (for example, an in-memory database in EF Core; a sketch follows below).
But in the real world we have dependencies which represent global state, or which, even if they don't, still make tests slow (web services, the file system, any external resource).
Those are the dependencies you want to mock in order to get quicker feedback (unit tests).
You can also have a very complicated dependency hierarchy which uses no global state or external resources, but where configuring a test case with those dependencies becomes very, very complex and difficult.
In that case you introduce an abstraction around the very complex dependency and mock it in the consumer's tests.
In your particular test case, I would mock nothing, unless the framework classes depend on screen-drawing logic (that is, on an environment API).
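A minimal sketch of the in-memory database approach mentioned above, assuming EF Core with the Microsoft.EntityFrameworkCore.InMemory package (plus using Microsoft.EntityFrameworkCore and System.Linq); AppDbContext is a made-up context, and Customer stands in for the entity from the question:
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Customer> Customers { get; set; }
}

[Test]
public void CanPersistAndReadCustomer()
{
    // A fresh database name per test keeps tests isolated from each other.
    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseInMemoryDatabase(Guid.NewGuid().ToString())
        .Options;

    using (var context = new AppDbContext(options))
    {
        context.Customers.Add(new Customer { Name = "Test" });
        context.SaveChanges();
    }

    using (var context = new AppDbContext(options))
    {
        Assert.AreEqual(1, context.Customers.Count());
    }
}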
Since testing software is a huge task, you split it into several tasks to make it manageable. In unit-testing your focus is on finding bugs that can be found by looking at small software pieces in isolation. That is, unit-testing aims at finding a special kind of bugs. Integration testing, for example, aims at a different kind of bugs, which can not be found by unit-testing.
Focusing on finding the bugs in an isolated piece of software does not mean you really must isolate that piece of software technically by mocking the dependencies. Nowadays mocking frameworks make mocking fairly easy (the level of difficulty varies among programming languages, though). Still, mocking is always some extra effort and also comes with disadvantages. For example, tests that use mocking are more closely linked to the code's implementation and thus are more likely to need maintenance if implementation details change.
Therefore, mocking should only be done if there is a good reason to mock. Good reasons are:
You can not easily make the depended-on-component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance intensive (like, need for external files)
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build / execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the above-mentioned problems. The same could hold, in your case, for Vec2F and Shape, but you would have to judge that by yourself. The above list of criteria should help you there.
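To make the date/time criterion above concrete: a common approach is to put the clock behind a small interface you own, so tests can pin it to a fixed value. IClock, SystemClock, and FixedClock are assumed names for illustration:
public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation: the real, non-deterministic clock.
public class SystemClock : IClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

// Test double: a fixed point in time makes time-dependent logic deterministic.
public class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) { _now = now; }
    public DateTime UtcNow { get { return _now; } }
}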
It sounds like Customer is a domain-level type. In that case, you should invert the dependency on Shape and Vec2F (these look like view-layer implementation details), since the framework is a detail and it sits on a lower level than Customer does.

TDD: .NET following TDD principles, Mock / Not to Mock?

I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. The insert of the new user is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I am a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this
Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether your class is failing or one of its dependencies is.
I wrote my test initially and made it fail and now I realize I need a
repository and need to mock it out, so I am having to change my test
to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests as the implementation changes. Surely you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User
Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check if it correctly calls the underlying data access provider, which should be mocked). But such tests usually are very time-consuming and brittle. So, it's better to write some acceptance tests, which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally, as in create an interface IUserRepository, so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods (i.e. _userRepository.StoreUser(user)) you need on it.
The advantage of that is, that it is very easy to satisfy these dependencies from your test classes, and that you can worry about instantiating of your objects, and your lifecycle management as a separate concern.
tl;dr: create a mock!
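Here is a minimal sketch of that shape, using Moq and NUnit. IUserRepository, StoreUser, MyService, and CreateUser come from the discussion above; the User type and the exact method bodies are assumptions made for illustration:
public class User
{
    public string Name { get; set; }
}

public interface IUserRepository
{
    void StoreUser(User user);
}

public class MyService
{
    private readonly IUserRepository _userRepository;

    public MyService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void CreateUser(User user)
    {
        _userRepository.StoreUser(user);
    }
}

[Test]
public void CreateUser_StoresUserInRepository()
{
    var repositoryMock = new Mock<IUserRepository>();
    var service = new MyService(repositoryMock.Object);
    var user = new User { Name = "Alice" };

    service.CreateUser(user);

    // The assertion is on the interaction: the service delegated to the repository exactly once.
    repositoryMock.Verify(r => r.StoreUser(user), Times.Once);
}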
I have two sets of test libraries. One is for unit tests, where I mock stuff; I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me the possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests, or functional tests, or whatever you want to call them. This one makes sure that a specific use case (e.g. creating a tag from the web page) does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead, it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact that the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test either has arguments or attributes that require mock objects to be created before testing can begin then—that is where the trouble begins, right?
What sources do we have for constructing mocks?
Simple creation with:
make or default create
make with hand-coded base-type arguments
Complex creation with:
make with database-supplied arguments
make with other mock objects (start this process again)
Object factories
Production code based factories
Test code based factories
Data-repo based data (vs hand-coded)
Gleaned
Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goal is:
No mocks needed. Strive for this above all.
Simple mock creation with no arguments.
Simple mock creation with base-type arguments.
Simple mock creation with DB-repo sourced base-type arguments.
Complex mock creation using production code object factories.
Complex mock creation using test-code object factories.
Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant: it is either impossible, or nearly impossible, to express with common TDD facilities. This includes the "implies" Boolean keyword as well.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and post-conditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!

How to use unit tests in projects with many levels of indirection

I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with the old adage "every problem in object oriented programming can be solved by introducing a new layer of indirection", this project was sporting multiple layers of indirection. The side effect was that a fair amount of code looked like the following:
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}
Now, because of the emphasis on unit testing and maintaining high code coverage, every piece of code had unit tests written against it. Therefore this little method would have three unit tests present. Those would check:
If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:
NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));
That obviously turned the "red, green, refactor" rule into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework like Moq would help with refactoring, but it would require a sweep through all existing unit tests.
What is the ideal way to handle this situation?
A) Keep smaller levels of layers, so that those forwarding calls do not happen anymore.
B) Do not test those forwarding methods, as they do not contain business logic. For coverage purposes, mark them all with the ExcludeFromCodeCoverage attribute.
C) Test only if proper method is invoked, without checking return values, exceptions, etc.
D) Suck it up, and keep writing those tests ;)
Either B or C. That's the problem with such general requirements ("every method must have a unit test", "every line of code needs to be covered") - sometimes the benefit they provide is not worth the cost. If it's something you came up with, I suggest rethinking the approach. "We must have 95% code coverage" might be appealing on paper, but in practice it quickly spawns problems like the one you have.
Also, the code you're testing is something I'd call trivial code. Having 3 tests for it is most likely overkill. For that single line of code, you'll have to maintain something like 40 more. Unless your software is mission-critical (which might explain the high-coverage requirement), I'd skip those tests.
One of the (IMHO) most pragmatic pieces of advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post - What should you test?
Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool to figure out what is NOT covered, so that we can see whether we forgot important unit test cases or actually have some dead code somewhere.)
If I write a test, but the test ends up just being a "duplication" of the implementation or, worse, if it's harder to understand the test than the actual implementation, then really such a test should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details. Tests are about "what" should happen, not "how" it will be done. Since you've tagged your question with "TDD", I would add that TDD is a design practice. So if I already know 100% for sure in advance what the design of the thing I'm going to implement will be, then there is no point in using TDD and writing unit tests (but I will always have, in all cases, a high-level acceptance test that covers that code). That will happen often when the thing to design is really simple, like in your example. TDD is not about testing and code coverage, but really about helping us design and document our code. There is no point in using a design tool or a documentation tool for designing/documenting simple/obvious things.
In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test doesn't add any value in terms of documentation. So I'd happily erase it.
On top of that, such tests are horribly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term, since any time you want to change the implementation they will break.
What I'd suggest to do, is to not write such tests but instead have higher level component tests or fast integration tests/acceptance tests that would exercise these layers without knowing anything at all about the inner working.
I think one of the most important things to keep in mind with unit tests is that it doesn't necessarily matter how the code is implemented today, but rather what happens when the tested code, direct or indirect, is modified in the future.
If you ignore those methods today and they are critical to your application's operation, then when someone decides to implement a new balanceProvider at some point down the road, or decides that the redirection no longer makes sense, you will most likely have a failure point.
So, if this were my application, I would first look to reduce the forward-only calls to a bare minimum (reducing the code complexity), then introduce a mocking framework that does not rely on string values for method names.
A couple of things to add to the discussion here.
Switch to a better mocking framework immediately and incrementally. We switched from RhinoMock to Moq about 3 years ago. All new tests use Moq, and often when we change a test class we switch it over. But areas of the code that haven't changed much, or that have huge test classes, are still using RhinoMock, and that is OK. The code we work with from day to day is much better as a result of making the switch. All test changes can happen in this incremental way.
You are writing too many tests. An important thing to keep in mind in TDD is that you should only write code to satisfy a red test, and you should only write a test to specify some unwritten code. So in your example, three tests is overkill, because at most two are needed to force you to write all of that production code. The exception test does not make you write any new code, so there is no need to write it. I would probably only write this test:
[Test]
public void IsOverdraftDelegatesToBalanceProvider()
{
    var result = RandomBool();
    providerMock.Setup(p => p.IsOverdraft()).Returns(result);
    Assert.That(myObject.IsOverdraft(), Is.EqualTo(result));
}
Don't create useless layers of indirection. Mostly, unit tests will tell you if you need indirection. Most indirection needs can be solved by the dependency inversion principle, or "couple to abstractions, not concretions". Some layers are needed for other reasons (I make WCF ServiceContract implementations a thin pass-through layer; I also don't test that pass-through). If you see a useless layer of indirection, 1) make sure it really is useless, then 2) delete it. Code clutter has a huge cost over time. ReSharper makes this ridiculously easy and safe.
Also, for meaningful delegation or delegation scenarios you can't get rid of but need to test, something like this makes it a lot easier.
I'd say D) Suck it up, and keep writing those tests ;) and try to see if you can replace NMock with Moq.
It might not seem necessary, and even though it's just delegation now, the tests are verifying that it's calling the right method with the right parameters, and that the method itself is not doing anything funky before returning values. So it's a good idea to cover them in tests. But to make it easier, use Moq or a similar framework that will make refactoring much less painful.

Is testability alone justification for dependency injection?

The advantages of DI, as far as I am aware, are:
Reduced Dependencies
More Reusable Code
More Testable Code
More Readable Code
Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a Linq to Sql dbml. I can't make my orders repository generic as it performs mapping between the Linq Order entity and my own Order POCO domain class.
Since the OrderRepository by necessity is dependent on a specific Linq to Sql DataContext, parameter passing of the DataContext can't really be said to make the code reusable or reduce dependencies in any meaningful way.
It also makes the code harder to read, as to instantiate the repository I now need to write
new OrdersRepository(new MyLinqDataContext())
which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code.
So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
Dependency Injection's primary advantage is testing. And you've hit on something that seemed odd to me when I first started adopting Test-Driven Development and DI: DI does break encapsulation. Unit tests should test implementation-related decisions; as such, you end up exposing details that you wouldn't in a purely encapsulated scenario. Your example is a good one, where if you weren't doing test-driven development, you would probably want to encapsulate the data context.
But where you say, Since the OrderRepository by necessity is dependent on a specific Linq to Sql DataContext, I would disagree - we have the same setup and are only dependent on an interface. You have to break that dependency.
Taking your example a step further however, how will you test your repository (or clients of it) without exercising the database? This is one of the core tenets of unit testing - you have to be able to test functionality without interacting with external systems. And nowhere does this matter more than with the database. Dependency Injection is the pattern that makes it possible to break dependencies on sub-systems and layers. Without it, unit tests end up requiring extensive fixture setup, become hard to write, fragile and too damn slow. As a result - you just won't write them.
Taking your example a step farther, you might have
In Unit Tests:
// From your example...
new OrdersRepository(new InMemoryDataContext());
// or...
IOrdersRepository repo = new InMemoryDataContext().OrdersRepository;
and In Production (using an IOC container):
// usually...
Container.Create<IDataContext>().OrdersRepository
// but can be...
Container.Create<IOrdersRepository>();
(If you haven't used an IoC container, they're the glue that makes DI work. Think of it as "make" (or Ant) for object graphs: the container builds the dependency graph for you and does all of the heavy lifting of construction.) By using an IoC container, you get back the dependency hiding that you mention in your OP. Dependencies are configured and handled by the container as a separate concern, and calling code can just ask for an instance of the interface.
There's a really excellent book that explores these issues in detail. Check out xUnit Test Patterns: Refactoring Test Code, by Meszaros. It's one of those books that takes your software development capabilities to the next level.
Dependency Injection is just a means to an end. It's a way to enable loose coupling.
A small comment first: dependency injection = IoC + dependency inversion. What matters the most for testing, and what you actually describe, is dependency inversion.
Generally speaking, I think that testing justifies dependency inversion. But it doesn't justify dependency injection, I wouldn't introduce a DI container just for testing.
However, dependency inversion is a principle that can be bent a bit if necessary (like all principles). You can in particular use factories in some places to control the creation of objects.
If you have a DI container, that's what happens automatically; the DI container acts as a factory and wires the objects together.
The beauty of Dependency Injection is the ability to isolate your components.
One side-effect of isolation is easier unit-testing. Another is the ability to swap configurations for different environments. Yet another is the self-describing nature of each class.
However you take advantage of the isolation provided by DI, you have far more options than with tighter-coupled object models.
The power of dependency injection comes when you use an Inversion of Control container such as StructureMap. When using this, you won't see a "new" anywhere -- the container will take control of your object construction. That way everything is unaware of the dependencies.
I find that the relationship between needing a Dependency Injection container and testability is the other way around. I find DI extremely useful because I write unit-testable code. And I write unit-testable code because it's inherently better code: more loosely coupled, smaller classes.
An IoC container is another tool in the toolbox that helps you manage the complexity of the app - and it works quite well. I found it allows you to code to an interface more naturally by taking instantiation out of the picture.
The design of tests here always depends on SUT (System Under Test). Ask yourself a question - what do I want to test?
If your repository is just an accessor to the database, then it needs to be tested like an accessor - with database involvement. (Actually, such tests are not unit tests but integration tests.)
If your repository performs some mapping or business logic in addition to acting as an accessor to the database, then this is a case where you need to decompose it in order to make your system comply with the SRP (Single Responsibility Principle). After decomposition you will have 2 entities:
OrdersRepository
OrderDataAccessor
Test them separately from each other, breaking dependencies with DI.
As for constructor ugliness... use a DI framework to construct your objects. For example, using Unity, your constructor call:
var repository = new OrdersRepository(new MyLinqDataContext());
will look like:
var repository = container.Resolve<OrdersRepository>();
The jury is still out for me about the use of DI in the context of your question. You've asked if testing alone is justification for implementing DI, and I'm going to sound a little like a fence-sitter in answering this, even though my gut-response is to answer no.
If I answer yes, I am thinking about testing systems when you have nothing you can easily test directly. In the physical world, it's not unusual to include ports, access tunnels, lugs, etc, in order to provide a simple and direct means of testing the status of systems, machines, and so on. This seems reasonable in most cases. For example, an oil pipeline provides inspection hatches to allow equipment to be injected into the system for the purposes of testing and repair. These are purpose built, and provide no other function. The real question though is if this paradigm is suited to software development. Part of me would like to say yes, but the answer it seems would come at a cost, and leaves us in that lovely grey area of balancing benefits vs costs.
The "no" argument really comes down to the reasons and purposes for designing software systems. DI is a beautiful pattern for promoting the loose coupling of code, something we are taught in our OOP classes is a very important and powerful design concept for improving the maintainability of code. The problem is that like all tools, it can be misused. I'm going to disagree with Rob's answer above in part, because DI's advantages are NOT primarily testing, but in promoting loosely coupled architecture. And I'd argue that resorting to designing systems based solely on the ability to test them suggests in such cases that either the architecture is is flawed, or the test cases are inappropriately configured, and possibly even both.
A well-factored system architecture is in most cases inherently simple to test, and the introduction of mocking frameworks over the last decade makes the testing much easier still. Experience has taught me that any system that I found hard to test had some aspect of it too tightly coupled in some way. Sometimes (more rarely) this has proven to be necessary, but in most cases it was not, and usually when a seemingly simple system seemed too hard to test, it was because the testing paradigm was flawed. I've seen DI used as a means to circumvent system design in order to allow a system to be tested, and the risks have certainly outweighed the intended rewards, with system architecture effectively corrupted. By that I mean back-doors into code resulting in security problems, code bloated with test-specific behaviour that is never used at runtime, and spaghettification of source code such that you needed a couple of Sherpas and a Ouija board just to figure out which way was up! All of this in shipped production code. The resulting costs in terms of maintenance, learning curve, etc. can be astronomical, and to small companies such losses can prove to be devastating in the longer term.
IMHO, I don't believe that DI should ever be used as a means to simply improve testability of code. If DI is your only option for testing, then the design usually needs to be refactored. On the other hand, implementing DI by design where it can be used as a part of the run-time code can provide clear advantages, but should not be misused as it can also result in classes being misused, and it should not be over-used simply because it seems cool and easy, as it can in such cases over-complicate the design of your code.
:-)
It's possible to write generic data access objects in Java:
package persistence;
import java.io.Serializable;
import java.util.List;
public interface GenericDao<T, K extends Serializable>
{
    T find(K id);
    List<T> find();
    List<T> find(T example);
    List<T> find(String queryName, String[] paramNames, Object[] bindValues);
    K save(T instance);
    void update(T instance);
    void delete(T instance);
}
I can't speak for LINQ, but .NET has generics so it should be possible.
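For what it's worth, a hedged sketch of the same shape using C# generics; the names below are illustrative (not from the original post) and it ignores the LINQ-to-SQL mapping concern raised in the question:
// Requires System.Collections.Generic.
public interface IGenericRepository<TEntity, TKey>
{
    TEntity Find(TKey id);
    IList<TEntity> FindAll();
    IList<TEntity> FindByExample(TEntity example);
    TKey Save(TEntity instance);
    void Update(TEntity instance);
    void Delete(TEntity instance);
}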

Why do we need mocking frameworks?

I have worked with code which had NUnit tests written. But I have never worked with mocking frameworks. What are they? I understand dependency injection and how it helps to improve testability. I mean, all dependencies can be mocked while unit testing. But then why do we need mocking frameworks? Can't we simply create mock objects and provide the dependencies ourselves? Am I missing something here?
Thanks.
It makes mocking easier.
They usually allow you to express testable assertions that refer to the interaction between objects.
Here you have an example:
var extension = MockRepository
    .GenerateMock<IContextExtension<StandardContext>>();
var ctx = new StandardContext();
ctx.AddExtension(extension);

extension.AssertWasCalled(
    e => e.Attach(null),
    o => o.Constraints(Is.Equal(ctx)));
You can see that I explicitly test that the Attach method of IContextExtension was called and that the input parameter was that same context object. The test would fail if that did not happen.
You can create mock objects by hand and use them during testing using Dependency Injection frameworks...but letting a mocking framework generate your mock objects for you saves time.
As always, if using the framework adds too much complexity to be useful then don't use it.
Sometimes when working with third-party libraries, or even working with some aspects of the .NET framework, it is extremely difficult to write tests for some situations - for example, an HttpContext, or a Sharepoint object. Creating mock objects for those can become very cumbersome, so mocking frameworks take care of the basics so we can spend our time focusing on what makes our applications unique.
Using a mocking framework can be a much more lightweight and simple solution to provide mocks than actually creating a mock object for every object you want to mock.
For example, mocking frameworks are especially useful to do things like verify that a call was made (or even how many times that call was made). Making your own mock objects to check behaviors like this (while mocking behavior is a topic in itself) is tedious, and yet another place for you to introduce a bug.
Check out Rhino Mocks for an example of how powerful a mocking framework can be.
Mock objects take the place of any large/complex/external objects your code needs access to in order to run.
They are beneficial for a few reasons:
Your tests are meant to run fast and easily. If your code depends on, say, a database connection, then you would need to have a fully configured and populated database running in order to run your tests. This can get annoying, so you create a replacement - a "mock" - of the database connection object that just simulates the database.
You can control exactly what output comes out of the Mock objects and can therefore use them as controllable data sources to your tests.
You can create the mock before you create the real object in order to refine its interface. This is useful in Test-driven Development.
The only reason to use a mocking library is that it makes mocking easier.
Sure, you can do it all without the library, and that is fine if it's simple, but as soon as they start getting complicated, libraries are much easier.
Think of this in terms of sorting algorithms, sure anyone can write one, but why? If the code already exists and is simple to call... why not use it?
You certainly can mock your dependencies manually, but with a framework it takes a lot of the tedious work away. Also the assertions usually available make it worth it to learn.
Mocking frameworks allow you to isolate units of code that you wish to test from that code's dependencies. They also allow you to simulate various behaviors of your code's dependencies in a test environment that might be difficult to setup or reproduce otherwise.
For example, if I have a class A containing business rules and logic that I wish to test, but this class A depends on data-access classes, other business classes, even UI classes, etc., those other classes can be mocked to behave in a certain manner (or in no particular manner at all, in the case of loose mock behavior) in order to test the logic within your class A against every imaginable way these other classes could conceivably behave in a production environment.
To give a deeper example, suppose that your class A invokes a method on a data access class such as
public bool IsOrderOnHold(int orderNumber) {}
then a mock of that data access class could be set up to return true every time, or to return false every time, to test how your class A responds to such circumstances.
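A minimal sketch of that setup using Moq; IOrderDataAccess is an assumed interface name, while IsOrderOnHold comes from the answer above:
public interface IOrderDataAccess
{
    bool IsOrderOnHold(int orderNumber);
}

// In the test: make the dependency report every order as on hold (or not).
var dataAccess = new Mock<IOrderDataAccess>();
dataAccess.Setup(d => d.IsOrderOnHold(It.IsAny<int>())).Returns(true);   // or .Returns(false)

// Hand dataAccess.Object to "class A" and assert on how it responds in each case.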
I'd claim you don't. Writing test doubles isn't a large chore 9 times out of 10. Most of the time it's done almost entirely automatically by just asking ReSharper to implement an interface for you, and then you just add the minor detail needed for this double (because you aren't doing a bunch of logic and creating these intricate super test doubles, right? Right?)
"But why would I want my test project bloated with a bunch of test doubles?" you may ask. Well, you shouldn't; the DRY principle holds for tests as well. Create GOOD test doubles that are reusable and have descriptive names. This makes your tests more readable too.
One thing it DOES make harder is to over-use test doubles. I tend to agree with Roy Osherove and Uncle Bob, you really don't want to create a mock object with some special configuration all that often. This is in itself a design smell. Using a framework it's soooo easy to just use test doubles with intricate logic in just about every test and in the end you find that you haven't really tested your production code, you have merely tested the god-awful frankenstein's monsteresque mess of mocks containing mocks containing more mocks. You'll never "accidentally" do this if you write your own doubles.
Of course, someone will point out that there are times when you "have" to use a framework, not doing so would be plain stupid. Sure, there are cases like that. But you probably don't have that case. Most people don't, and only for a small part of the code, or the code itself is really bad.
I'd recommend anyone (ESPECIALLY a beginner) to stay away from frameworks and learn how to get by without them; later, when they feel that they really have to, they can use whatever framework they think is the most suitable, but by then it'll be an informed decision and they'll be far less likely to abuse the framework to create bad code.
Well, mocking frameworks make my life much easier and less tedious, so that I can spend time actually writing code. Take, for instance, Mockito (in the Java world):
//mock creation
List mockedList = mock(List.class);
//using mock object
mockedList.add("one");
mockedList.clear();
//verification
verify(mockedList).add("one");
verify(mockedList).clear();
//stubbing using built-in anyInt() argument matcher
when(mockedList.get(anyInt())).thenReturn("element");
//stubbing using hamcrest (let's say isValid() returns your own hamcrest matcher):
when(mockedList.contains(argThat(isValid()))).thenReturn("element");
//following prints "element"
System.out.println(mockedList.get(999));
Though this is a contrived example, if you replace List.class with MyComplex.class then the value of having a mocking framework becomes evident. You could write your own mocks or do without, but why would you want to go that route?
I first grok'd why I needed a mocking framework when I compared writing test doubles by hand for a set of unit tests (each test needed slightly different behaviour, so I was creating subclasses of a base fake type for each test) with using something like RhinoMocks or Moq to do the same work.
Simply put, it was much faster to use a framework to generate all of the fake objects I needed rather than writing (and debugging) my own fakes by hand.
