Related: What Makes a Good Unit Test? says that a test should test only one thing. What is the benefit of that?
Wouldn't it be better to write slightly bigger tests that cover larger blocks of code? Investigating a test failure is hard anyway, and I don't see how smaller tests help with that.
Edit: The word "unit" is not that important; let's say I consider the unit a bit bigger. That is not the issue here. The real question is: why write one or more tests for every method, when a few tests that cover many methods would be simpler?
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Testing only one thing will isolate that one thing and prove whether or not it works. That is the idea of unit testing. There is nothing wrong with tests that test more than one thing, but that is generally referred to as integration testing. Both have merits, depending on context.
To use an example: if your bedside lamp doesn't turn on, and you replace the bulb and swap the extension cord, you don't know which change fixed the issue. You should have done unit testing and separated your concerns to isolate the problem.
Update: I read this article and linked articles and I gotta say, I'm shook: https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
There is substance here and it gets the mental juices flowing. But I reckon that it jibes with the original sentiment that we should be doing the test that context demands. I suppose I'd just append that to say that we need to get closer to knowing for sure the benefits of different testing on a system and less of a cross-your-fingers approach. Measurements/quantifications and all that good stuff.
I'm going to go out on a limb here and say that the "only test one thing" advice isn't actually as helpful as it's sometimes made out to be.
Sometimes tests take a certain amount of setting up. Sometimes they may even take a certain amount of time to set up (in the real world). Often you can test two actions in one go.
Pro: all that setup only happens once. The asserts after the first action prove that the world is how you expect it to be before the second action. Less code, faster test run.
Con: if either action fails, you'll get the same result: the same test will fail. You'll have less information about where the problem is than if you only had a single action in each of two tests.
In reality, I find that the "con" here isn't much of a problem. The stack trace often narrows things down very quickly, and I'm going to make sure I fix the code anyway.
A slightly different "con" here is that it breaks the "write a new test, make it pass, refactor" cycle. I view that as an ideal cycle, but one which doesn't always mirror reality. Sometimes it's simply more pragmatic to add an extra action and check (or possibly just another check to an existing action) in a current test than to create a new one.
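To make the trade-off concrete, here is a minimal sketch of a two-action test with an intermediate assert (NUnit-style C#; the Account class and its methods are invented for the illustration, not taken from the question):

public class Account
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount)  { Balance += amount; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

[Test]
public void Deposit_Then_Withdraw_LeavesBalanceAtZero()
{
    var account = new Account();   // the (potentially expensive) setup runs once

    // First action, plus an assert that proves the world looks right
    // before the second action runs.
    account.Deposit(100m);
    Assert.AreEqual(100m, account.Balance);

    // Second action, checked against the state the first one produced.
    account.Withdraw(100m);
    Assert.AreEqual(0m, account.Balance);
}

If the deposit is broken, the first assert points straight at it; if only the withdrawal is broken, the second one does.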
Tests that check for more than one thing aren't usually recommended because they are more tightly coupled and brittle. If you change something in the code, it'll take longer to change the test, since there are more things to account for.
[Edit:]
Ok, say this is a sample test method:
[TestMethod]
public void TestSomething() {
    // Test condition A
    // Test condition B
    // Test condition C
    // Test condition D
}
If your check for condition A fails, then B, C, and D will appear to fail as well and won't give you any useful information. What if your code change would also have caused C to fail? If you had split them into four separate tests, you would know this.
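For instance, the split version might look roughly like this (same MSTest style as above; the condition names are just placeholders carried over from the example):

[TestMethod]
public void TestConditionA() {
    // Assert only condition A here
}

[TestMethod]
public void TestConditionB() {
    // Assert only condition B here
}

[TestMethod]
public void TestConditionC() {
    // Assert only condition C here
}

[TestMethod]
public void TestConditionD() {
    // Assert only condition D here
}

Now a failure in A no longer hides the results for B, C, and D.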
Haaa... unit tests.
Push any "directives" too far and it rapidly becomes unusable.
A single unit test testing a single thing is just as good a practice as a single method doing a single task. But IMHO that does not mean a single test can only contain a single assert statement.
Is

@Test
public void checkNullInputFirstArgument(){...}

@Test
public void checkNullInputSecondArgument(){...}

@Test
public void checkOverInputFirstArgument(){...}
...

better than

@Test
public void testLimitConditions(){...}

is a question of taste, in my opinion, rather than of good practice. I personally much prefer the latter.
But

@Test
public void doesWork(){...}

is actually what the "directive" wants you to avoid at all costs, and it is what drains my sanity the fastest.
As a final conclusion: group together things that are semantically related and easily testable together, so that a failed test message, by itself, is meaningful enough for you to go directly to the code.
Rule of thumb here on a failed test report: if you have to read the test's code first, then your tests are not structured well enough and need to be split into smaller tests.
My 2 cents.
Think of building a car. If you were to apply your theory of just testing big things, then why not write a single test that drives the car through a desert? It breaks down. OK, so tell me what caused the problem. You can't. That's a scenario test.
A functional test may be to turn on the engine. It fails. But that could be because of a number of reasons. You still couldn't tell me exactly what caused the problem. We're getting closer though.
A unit test is more specific, and will firstly identify where the code is broken, but it will also (if doing proper TDD) help architect your code into clear, modular chunks.
Someone mentioned using the stack trace. Forget it. That's a second resort. Going through the stack trace or using the debugger is a pain and can be time consuming, especially on larger systems and with complex bugs.
Good characteristics of a unit test:
Fast (milliseconds)
Independent. It's not affected by or dependent on other tests
Clear. It shouldn't be bloated, or contain a huge amount of setup.
Using test-driven development, you would write your tests first, then write the code to pass the test. If your tests are focused, this makes writing the code to pass the test easier.
For example, I might have a method that takes a parameter. One of the things I might think of first is: what should happen if the parameter is null? It should throw an ArgumentNullException (I think). So I write a test that checks whether that exception is thrown when I pass a null argument. Run the test. Okay, it throws NotImplementedException. I go and fix that by changing the code to throw an ArgumentNullException. Run my test; it passes. Then I think, what happens if the parameter is too small or too big? Ah, that's two tests. I write the too-small case first.
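A rough sketch of where that first test ends up, assuming MSTest v2 and a hypothetical Parser class (the names are mine, not from the original answer):

public class Parser
{
    public int Parse(string input)
    {
        // This guard clause is what the failing test drove me to add;
        // before it, the method just threw NotImplementedException.
        if (input == null) throw new ArgumentNullException(nameof(input));
        return input.Length; // placeholder behaviour for later tests
    }
}

[TestMethod]
public void Parse_NullInput_ThrowsArgumentNullException()
{
    var parser = new Parser();
    Assert.ThrowsException<ArgumentNullException>(() => parser.Parse(null));
}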
The point is that I don't think about the behavior of the method all at once. I build it up incrementally (and logically) by thinking about what it should do, then implementing the code and refactoring as I go to make it look pretty (elegant). This is why tests should be small and focused: when you are thinking about the behavior, you should develop in small, understandable increments.
Having tests that verify only one thing makes troubleshooting easier. It's not to say you shouldn't also have tests that do test multiple things, or multiple tests that share the same setup/teardown.
Here is an illustrative example. Let's say that you have a stack class with queries:
getSize
isEmpty
getTop
and methods to mutate the stack
push(anObject)
pop()
Now, consider the following test case for it (I'm using Python-like pseudo-code for this example):
class TestCase:
    def setup(self):
        self.stack = Stack()

    def test(self):
        self.stack.push(1)
        self.stack.push(2)
        self.stack.pop()
        assert self.stack.top() == 1, "top() isn't showing correct object"
        assert self.stack.getSize() == 1, "getSize() call failed"
From this test case, you can determine if something is wrong, but not whether it is isolated to the push() or pop() implementations, or the queries that return values: top() and getSize().
If we add individual test cases for each method and its behavior, things become much easier to diagnose. Also, by doing fresh setup for each test case, we can guarantee that the problem is completely within the methods that the failing test method called.
    def test_size(self):
        assert self.stack.getSize() == 0
        assert self.stack.isEmpty()

    def test_push(self):
        self.stack.push(1)
        assert self.stack.top() == 1, "top returns wrong object after push"
        assert self.stack.getSize() == 1, "getSize wrong after push"

    def test_pop(self):
        self.stack.push(1)
        self.stack.pop()
        assert self.stack.getSize() == 0, "getSize wrong after pop"
As far as test-driven development is concerned, I personally write larger "functional tests" that end up testing multiple methods at first, and then create unit tests as I start to implement the individual pieces.
Another way to look at it is unit tests verify the contract of each individual method, while larger tests verify the contract that the objects and the system as a whole must follow.
I'm still using three method calls in test_push, however both top() and getSize() are queries that are tested by separate test methods.
You could get similar functionality by adding more asserts to the single test, but then later assertion failures would be hidden.
If you are testing more than one thing, then it is called an integration test, not a unit test. You would still run these integration tests in the same testing framework as your unit tests.
Integration tests are generally slower, unit tests are fast because all dependencies are mocked/faked, so no database/web service/slow service calls.
We run our unit tests on commit to source control, and our integration tests only get run in the nightly build.
If you test more than one thing and the first thing you test fails, you will not know if the subsequent things you are testing pass or fail. It is easier to fix when you know everything that will fail.
Smaller unit tests make it clearer where the issue is when they fail.
The glib, but hopefully still useful, answer is that unit = one. If you test more than one thing, then you aren't unit testing.
Regarding your example: if you are testing add and remove in the same unit test, how do you verify that the item was ever added to your list? That is why you need to add, and verify that it was added, in one test.
Or to use the lamp example: If you want to test your lamp and all you do is turn the switch on and then off, how do you know the lamp ever turned on? You must take the step in between to look at the lamp and verify that it is on. Then you can turn it off and verify that it turned off.
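Applied to the list example from the question, the split might look roughly like this (NUnit syntax, using the BCL List<string> as a stand-in for your own list class):

[Test]
public void Add_PutsItemInList()
{
    var list = new List<string>();

    list.Add("item");

    // Verify the add on its own, before any removal is involved.
    Assert.AreEqual(1, list.Count);
    Assert.IsTrue(list.Contains("item"));
}

[Test]
public void Remove_TakesItemOutOfList()
{
    var list = new List<string>();
    list.Add("item");          // setup for the behaviour under test

    list.Remove("item");

    Assert.AreEqual(0, list.Count);
}

If Add silently did nothing, the first test would fail immediately, whereas a combined add-then-remove test that only checks the final size would happily pass.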
I support the idea that unit tests should only test one thing. I also stray from it quite a bit. Today I had a test where expensive setup seemed to be forcing me to make more than one assertion per test.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        [Test]
        public void ShouldHaveCorrectValues()
        {
            var fees = CallSlowRunningFeeService();

            Assert.AreEqual(6.50m, fees.ConvenienceFee);
            Assert.AreEqual(2.95m, fees.CreditCardFee);
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
At the same time, I really wanted to see all my assertions that failed, not just the first one. I was expecting them all to fail, and I needed to know what amounts I was really getting back. But, a standard [SetUp] with each test divided would cause 3 calls to the slow service. Suddenly I remembered an article suggesting that using "unconventional" test constructs is where half the benefit of unit testing is hidden. (I think it was a Jeremy Miller post, but can't find it now.) Suddenly [TestFixtureSetUp] popped to mind, and I realized I could make a single service call but still have separate, expressive test methods.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        Fees fees;

        [TestFixtureSetUp]
        public void FetchFeesMessageFromService()
        {
            fees = CallSlowRunningFeeService();
        }

        [Test]
        public void ShouldHaveCorrectConvenienceFee()
        {
            Assert.AreEqual(6.50m, fees.ConvenienceFee);
        }

        [Test]
        public void ShouldHaveCorrectCreditCardFee()
        {
            Assert.AreEqual(2.95m, fees.CreditCardFee);
        }

        [Test]
        public void ShouldHaveCorrectChangeFee()
        {
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
There is more code in this test, but it provides much more value by showing me all the values that don't match expectations at once.
A colleague also pointed out that this is a bit like Scott Bellware's specunit.net: http://code.google.com/p/specunit-net/
Another practical disadvantage of very granular unit testing is that it breaks the DRY principle. I have worked on projects where the rule was that each public method of a class had to have a unit test (a [TestMethod]). Obviously this added some overhead every time you created a public method, but the real problem was that it added "friction" to refactoring.
It's similar to method-level documentation: it's nice to have, but it's another thing that has to be maintained, and it makes changing a method signature or name a little more cumbersome and slows down "floss refactoring" (as described in "Refactoring Tools: Fitness for Purpose" by Emerson Murphy-Hill and Andrew P. Black; PDF, 1.3 MB).
Like most things in design, there is a trade-off that the phrase "a test should test only one thing" doesn't capture.
When a test fails, there are three options:
The implementation is broken and should be fixed.
The test is broken and should be fixed.
The test is no longer needed and should be removed.
Fine-grained tests with descriptive names help the reader to know why the test was written, which in turn makes it easier to know which of the above options to choose. The name of the test should describe the behaviour which is being specified by the test - and only one behaviour per test - so that just by reading the names of the tests the reader will know what the system does. See this article for more information.
On the other hand, if one test is doing lots of different things and it has a non-descriptive name (such as tests named after methods in the implementation), then it will be very hard to find out the motivation behind the test, and it will be hard to know when and how to change the test.
Here is what it can look like (with GoSpec) when each test tests only one thing:
func StackSpec(c gospec.Context) {
    stack := NewStack()

    c.Specify("An empty stack", func() {
        c.Specify("is empty", func() {
            c.Then(stack).Should.Be(stack.Empty())
        })
        c.Specify("After a push, the stack is no longer empty", func() {
            stack.Push("foo")
            c.Then(stack).ShouldNot.Be(stack.Empty())
        })
    })

    c.Specify("When objects have been pushed onto a stack", func() {
        stack.Push("one")
        stack.Push("two")

        c.Specify("the object pushed last is popped first", func() {
            x := stack.Pop()
            c.Then(x).Should.Equal("two")
        })
        c.Specify("the object pushed first is popped last", func() {
            stack.Pop()
            x := stack.Pop()
            c.Then(x).Should.Equal("one")
        })
        c.Specify("After popping all objects, the stack is empty", func() {
            stack.Pop()
            stack.Pop()
            c.Then(stack).Should.Be(stack.Empty())
        })
    })
}
The real question is why make one or more tests for every method, when a few tests that cover many methods would be simpler.
Well, so that when a test fails you know which method failed.
When you have to repair a non-functioning car, it is easier when you know which part of the engine is failing.
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Let's suppose that the addition method is broken and does not add, and that the removal method is broken and does not remove. Your test would check that the list, after an addition and a removal, has the same size as initially. Your test would pass, even though both of your methods are broken.
Disclaimer: This is an answer highly influenced by the book "xUnit Test Patterns".
Testing only one thing at each test is one of the most basic principles that provides the following benefits:
Defect Localization: If a test fails, you immediately know why it failed (ideally without further troubleshooting, if you've done a good job with the assertions used).
Test as a specification: the tests are not only there as a safety net, but can easily be used as specification/documentation. For instance, a developer should be able to read the unit tests of a single component and understand the API/contract of it, without needing to read the implementation (leveraging the benefit of encapsulation).
TDD feasibility: TDD is based on having small chunks of functionality and completing progressive iterations of (write a failing test, write code, verify the test succeeds). This process gets highly disrupted if a test has to verify multiple things.
Lack of side effects: Somewhat related to the first point, but when a test verifies multiple things, it's more likely to be tied to other tests as well. So these tests might need a shared test fixture, which means one will be affected by the other. Eventually you might have a test failing where in reality another test caused the failure, e.g. by changing the fixture data.
I can only see a single reason why you might benefit from having a test that verifies multiple things, but this should be seen as a code smell actually:
Performance optimisation: There are some cases where your tests are not running only in memory but also depend on persistent storage (e.g. databases). In some of these cases, having a test verify multiple things might help decrease the number of disk accesses and thus the execution time. However, unit tests should ideally be executable only in memory, so if you stumble upon such a case you should reconsider whether you are going down the wrong path. All persistent dependencies should be replaced with mock objects in unit tests. End-to-end functionality should be covered by a different suite of integration tests. That way, you no longer need to care about execution time, since integration tests are usually executed by build pipelines and not by developers, so a slightly higher execution time has almost no impact on the efficiency of the software development lifecycle.
I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. "Insert new user" is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I'm a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether it's your class that fails or one of its dependencies.
I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests if the implementation changes. I believe you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
You will create a real repository in your application, and you can write tests for this repository (i.e. check that it correctly calls the underlying data access provider, which should be mocked). But such tests are usually very time-consuming and brittle, so it's better to write some acceptance tests which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally: create an interface IUserRepository so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods (i.e. _userRepository.StoreUser(user)) you need on it.
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat object instantiation and lifecycle management as a separate concern.
tl;dr: create a mock!
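A minimal sketch of that shape, assuming Moq and NUnit; the IUserRepository, UserService, and User types here are stand-ins, not the asker's actual code:

public class User { }

public interface IUserRepository
{
    void StoreUser(User user);
}

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void CreateUser(User user)
    {
        // ...validation, defaults, etc. would go here...
        _userRepository.StoreUser(user);
    }
}

[Test]
public void CreateUser_StoresUserViaRepository()
{
    var repositoryMock = new Mock<IUserRepository>();
    var service = new UserService(repositoryMock.Object);
    var user = new User();

    service.CreateUser(user);

    // The unit test only verifies the interaction; the real repository
    // gets covered by its own integration/acceptance tests.
    repositoryMock.Verify(r => r.StoreUser(user), Times.Once());
}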
I have two sets of test libraries. One is for unit tests, where I mock stuff and only test units. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me the possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests, or functional tests, or whatever you want to call them. These make sure that a specific use case, e.g. creating a tag from the web page, does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have that much logic; instead it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact that the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test has arguments or attributes that require mock objects to be created before testing can begin, then that is where the trouble begins, right?
What sources do we have for constructing mocks?
Simple creation with:
make or default_create
make with hand-coded base-type arguments
Complex creation with:
make with database-supplied arguments
make with other mock objects (start this process again)
Object factories
Production code based factories
Test code based factories
Data-repo based data (vs hand-coded)
Gleaned
Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goal is:
No mocks needed. Strive for this above all.
Simple mock creation with no arguments.
Simple mock creation with base-type arguments.
Simple mock creation with DB-repo sourced base-type arguments.
Complex mock creation using production code object factories.
Complex mock creation using test-code object factories.
Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant, which is either impossible or nearly impossible to express with common TDD facilities; the same goes for the "implies" Boolean keyword.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and postconditions to help cement code correctness without resorting to "write-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test code and not production code is what will get you into trouble if you have too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes, in Eiffel you will still write test code, but because of how the language spec, compiler, IDE, and test facilities work, you will end up writing less of it, if you use them thoughtfully and with some smarts!
I have a problem understanding how the code evolves when you take the "Fake It Until You Make It" TDD approach.
OK, you have faked it; let's say you returned a constant so that the failing test goes green at the beginning. Then you refactor your code. Then you run the same test, which is obviously going to pass, because you faked it!
But if a test is passing, how can you rely on it, especially when you know that you faked it?
How should the faked test be refactored along with your real code refactoring so that it can still be relied upon?
Thanks
The short answer is: write more tests.
If the method is returning a constant (when it should be calculating something), simply add a test for a condition with a different result. So, let's say you had the following:
@Test
public void testLength()
{
    final int result = objectUnderTest.myLength("hello");
    assertEquals(5, result);
}
and myLength was implemented as return 5, then you write a similar (additional) test but pass in "foobar" instead and assert that the output is 6.
When you're writing tests, you should try to be very vindictive towards the implementation and try to write something that exposes its shortcomings. When you're writing code, I think you're meant to be very laissez-faire and do as little as is required to make those nasty tests green.
You first create a unit test testing new functionality that does not exist.
Now you have a unit test for a non-existent method. You then create that method, which doesn't do anything, and your unit test compiles but, of course, fails.
You then go on building your method, the underlying functionality, etc., until your unit test succeeds.
That's (kind of) test-driven development.
The reason you should be able to trust this is that you should write your unit test so that it actually tests your functionality. Of course, if the method just returns a constant and you only test for that, you have a problem. But then your unit test is not complete.
Your unit tests should (in theory) cover every line. And if you've done that properly, this should work.
Fake it 'til you make it says to write the simplest possible thing to pass your current tests. Frequently, when you've written a single test case for a new feature, that simplest possible thing is to return a constant. When something that simple satisfies your tests, it's because you don't (yet) have enough tests. So write another test, as @Andrzej Doyle says. Now the feature you're developing needs some logic in it. Maybe this time the simplest possible thing is to write very basic if-else logic to handle your two test cases. You know you're faking it, so you know you're not done. When it becomes simpler to write the actual code to solve your problem than to extend your fake to cover yet another test case, that's what you do. And by then you've got enough test cases to make sure you're writing it correctly.
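To make that progression concrete, here is a hedged C# sketch of how a faked implementation typically gets squeezed into real code as tests accumulate (the Add example is invented purely for illustration):

public static class Calculator
{
    // Version 1, after the first test (Add(2, 3) == 5):
    //     return 5;                        // the obvious fake
    //
    // Version 2, after a second test (Add(10, 4) == 14):
    //     if (a == 2 && b == 3) return 5;  // the fake is getting awkward
    //     return 14;
    //
    // Version 3: the real code is now simpler than extending the fake,
    // and every test written so far keeps it honest.
    public static int Add(int a, int b)
    {
        return a + b;
    }
}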
This may be referring to the practice of using mocks/stubs/fakes with which your system/class under test collaborates.
In this scenario, you "fake" the collaborator, not the thing that you are testing, because you don't have an implementation of this collaborator's interface.
Thus, you fake it until you "make it," meaning that you implement it in a concrete class.
In TDD, all the requirements are expressed as tests. If you fake something and all tests pass, your requirements are fulfilled. If this is not giving you the expected behavior, then you have not expressed all your requirements as tests.
If you continue faking stuff at this point, you will eventually notice that the easiest solution would be to actually solve the problem.
When you refactor the code, you are switching from returning a constant value to returning an expression in terms of variables, which are derived/calculated.
The test, assuming it was written correctly the first time around, would still be valid for your newly refactored implementation and does not have to be refactored.
It's important to understand the motivation behind Fake It: It's similar to writing the Assert first, except for your production code. It gets you to green, and lets you focus on turning the fake into a valid expression in the simplest way possible while still passing the test. It's the first thing to try when implementation is not obvious, before you give up and switch to Triangulation.
Can anyone come up with guidelines suggesting the ideal scenarios to choose mocking versus faking, i.e., setting up the essentials manually?
I am a bit confused with how to approach this situation.
Well, you have a few things to sort out. There are two basic things you'll need to know: nomenclature and best practices.
First I want to give you a great video resource from a great tester, Roy Osherove:
Unit Testing Reviews by Roy Osherove
He starts out by saying that he has done some reviews of test harnesses shipped with several open source projects. You can find those here: http://weblogs.asp.net/rosherove/archive/tags/TestReview/default.aspx
These are basically video reviews where he walks you through these test harnesses and tells you what is good and what is bad. Very helpful.
Roy also has a book that I understand is very good.
Nomenclature
This podcast will help out immensely: http://www.hanselminutes.com/default.aspx?showID=187
I'll paraphrase the podcast, though (that Hanselminutes intro music is dreadful):
Basically, everything you do with an isolation framework (like Moq, Rhino Mocks, Type Mock, etc.) is called a fake.
A fake is an object in use during a test that the code you are testing can call in place of production code. A fake is used to isolate the code you are trying to test from other parts of your application.
There are (mainly) two types of fakes: stubs and mocks.
A mock is a fake that you put in place so that the code you are testing can call out to it, and you assert that the call was made with the correct parameters. The sample below does just this using the Moq isolation framework:
[TestMethod]
public void CalculateTax_ValidTaxRate_DALCallIsCorrect()
{
    // Arrange
    Mock<ITaxRateDataAccess> taxDALMock = new Mock<ITaxRateDataAccess>();
    taxDALMock.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08).Verifiable();
    TaxCalculator calc = new TaxCalculator(taxDALMock.Object);

    // Act
    decimal result = calc.CalculateTax("75001", 100.00);

    // Assert
    taxDALMock.VerifyAll();
}
A stub is almost the same as a mock, except that you put it in place to make sure the code you are testing gets back consistent data from its call (for instance, if your code calls a data access layer, a stub would return back fake data), but you don't assert against the stub itself. That is, you don't care to verify that the method called your fake data access layer; you are trying to test something else. You provide the stub to get the method you are trying to test to work in isolation.
Here's an example with a stub:
[TestMethod]
public void CalculateTax_ValidTaxRate_TaxValueIsCorrect()
{
    // Arrange
    Mock<ITaxRateDataAccess> taxDALStub = new Mock<ITaxRateDataAccess>();
    taxDALStub.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08);
    TaxCalculator calc = new TaxCalculator(taxDALStub.Object);

    // Act
    decimal result = calc.CalculateTax("75001", 100.00);

    // Assert
    Assert.AreEqual(result, 8.00);
}
Notice here that we are testing the output of the method, rather than the fact that the method made a call to another resource.
Moq doesn't really make an API distinction between a mock and a stub (notice both were declared as Mock<T>), but the usage here is important in determining the type.
Hope this helps set you straight.
There are at least 5 different kinds of test doubles: dummies, stubs, mocks, spies, and fakes. A good overview is at http://code.google.com/testing/TotT-2008-06-12.pdf and they are also categorized at http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
You want to test a chunk of code, right? Let's say a method. Your method downloads a file from an HTTP URL, then saves the file on disk, and then mails out that the file is on disk. All three of these actions are of course performed by service classes your method calls, because then they are easy to mock. If you don't mock these, your test will download stuff, access the disk, and mail a message every time you run it. Then you are not just testing the code in the method; you are also testing the code that downloads, writes to disk, and sends mail. Now if you mock these, you are testing just the method's code. Also, you are able to simulate a download failure, for instance, to see that your method's code behaves correctly.
Now, as for faking: I usually fake classes that just hold values and don't have much logic. If you are sending in an object that holds some values that get changed in the method, you can read them off it in the test to see that the method does the right thing.
Of course the rules can (and sometimes must) be bent a bit, but the general way of thinking is test your code, and your code only.
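A hedged sketch of that shape using Moq and NUnit; the IDownloader, IDiskWriter, and IMailer interfaces and the FileFetcher class are invented to match the description above:

public interface IDownloader { byte[] Download(string url); }
public interface IDiskWriter { void Write(string path, byte[] data); }
public interface IMailer     { void Send(string message); }

public class FileFetcher
{
    private readonly IDownloader _downloader;
    private readonly IDiskWriter _writer;
    private readonly IMailer _mailer;

    public FileFetcher(IDownloader downloader, IDiskWriter writer, IMailer mailer)
    {
        _downloader = downloader;
        _writer = writer;
        _mailer = mailer;
    }

    public void Fetch(string url, string path)
    {
        var data = _downloader.Download(url);
        _writer.Write(path, data);
        _mailer.Send($"File saved to {path}");
    }
}

[Test]
public void Fetch_DownloadsWritesAndMails_WithoutTouchingRealResources()
{
    var downloader = new Mock<IDownloader>();
    var writer = new Mock<IDiskWriter>();
    var mailer = new Mock<IMailer>();
    var payload = new byte[] { 1, 2 };
    downloader.Setup(d => d.Download("http://example.com/f")).Returns(payload);

    var fetcher = new FileFetcher(downloader.Object, writer.Object, mailer.Object);
    fetcher.Fetch("http://example.com/f", @"c:\temp\f");

    // Only the method's own logic is exercised: no network, disk, or mail involved,
    // and a download failure could be simulated just as easily with a throwing setup.
    writer.Verify(w => w.Write(@"c:\temp\f", payload), Times.Once());
    mailer.Verify(m => m.Send(It.IsAny<string>()), Times.Once());
}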
The Little Mocker, from Bob Martin, is a very good reading on the topic.
[...] long ago some very smart people wrote a paper that introduced and defined the term Mock Object. Lots of other people read it and started using that term. Other people, who hadn't read the paper, heard the term and started using it with a broader meaning. They even turned the word into a verb. They'd say, "Let's mock that object out.", or "We've got a lot of mocking to do."
The article explains the differences between mocks, fakes, spies, and stubs.
Getting started with TDD, and I want to build a Repository-driven Model from the ground up. However, how can I use NUnit to effectively say:
SomeInterfaceExists()
I want to create tests for each domain model (E.g. ICarRepository, IDriverRepository), etc.)
Does this actually make sense?
Regards
TDD means you drive your development (design) by proceeding in a test-first manner, meaning you
Write the outline (methods) of your class you want to test
You create a Unit test for the sketch of your class
You run your unit test -> it will fail
You hack your class, just enough to make the test pass
You refactor your class
This is repeated for each item.
That's not something you test with TDD. The test is: can I call one of the methods of that interface on this class, and does it return the right thing?
I would say that the compiler 'tests' (and I know that's a contentious statement) the interface's existence when it tries to compile a class that implements it. I wouldn't expect to explicitly test for the interface's existence in a test, as it doesn't prove anything. You don't test that a class has been defined; you do test the methods on a class.
The existence of an interface is implicitly tested every time you use the interface in a test. For example, before the interface or any implementation of it exists, you might write a test that says, in part:
ICar car = new Convertible();
This establishes the existence of the ICar interface - your test won't compile until it's created - and that Convertible implements ICar. Every method on car you invoke will elaborate more of the interface.
You could write something like the following:

void AssertSomeInterfaceExists(object domainObject)
{
    Assert.IsTrue(domainObject is ICarRepository);
}
You can write this test, and it will fail later if someone changes your domain object to no longer implement ICarRepository. However, your other unit tests, which depend on your domain object implementing this interface, will no longer compile, which makes this test somewhat redundant.
You don't need to test whether your interface exists or not. You aren't actually "testing" anything there.
What happens if your domain object doesn't require data access code? An example: a shopping cart.
You could use reflection to divine the existence of an interface, but that seems like stretching the idea of TDD.
Remember that TDD is about making sure that functionality holds over time, not about asserting a particular design pattern is applied. As Juri points out, the test isn't the absolute first thing you build.