Can anyone come up with guidelines suggesting the ideal scenarios to choose mocking versus faking, i.e., setting up the essentials manually?
I am a bit confused with how to approach this situation.
Well, there are two basic things you'll need to get sorted out: nomenclature and best practices.
First I want to give you a great video resource from a great tester, Roy Osherove:
Unit Testing Reviews by Roy Osherove
He starts out by saying that he has done some reviews of test harnesses shipped with several open source projects. You can find those here:
http://weblogs.asp.net/rosherove/archive/tags/TestReview/default.aspx
These are basically video reviews where he walks you through these test harnesses and tells you what is good and what is bad. Very helpful.
Roy also has a book that I understand is very good.
Nomenclature
This podcast will help out immensely: http://www.hanselminutes.com/default.aspx?showID=187
I'll paraphrase the podcast, though (that Hanselminutes intro music is dreadful):
Basically, everything you do with an isolation framework (like Moq, Rhino Mocks, Typemock, etc.) is called a fake.
A fake is an object in use during a test that the code you are testing can call in place of production code. A fake is used to isolate the code you are trying to test from other parts of your application.

There are (mainly) two types of fakes: stubs and mocks.
A mock is a fake that you put in place so that the code you are testing can call out to it, and you assert that the call was made with the correct parameters. The sample below does just this using the Moq isolation framework:
[TestMethod]
public void CalculateTax_ValidTaxRate_DALCallIsCorrect()
{
    // Arrange
    Mock<ITaxRateDataAccess> taxDALMock = new Mock<ITaxRateDataAccess>();
    taxDALMock.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08m) // decimal literal, assuming the DAL returns decimal; plain 0.08 (a double) would not compile
              .Verifiable();
    TaxCalculator calc = new TaxCalculator(taxDALMock.Object);

    // Act
    decimal result = calc.CalculateTax("75001", 100.00m);

    // Assert
    taxDALMock.VerifyAll();
}
A stub is almost the same as a mock, except that you put it in place to make sure the code you are testing gets back consistent data from its call (for instance, if your code calls a data access layer, a stub would return back fake data), but you don't assert against the stub itself. That is, you don't care to verify that the method called your fake data access layer; you are trying to test something else. You provide the stub to get the method you are trying to test to work in isolation.
Here’s an example with a stub:
[TestMethod]
public void CalculateTax_ValidTaxRate_TaxValueIsCorrect()
{
    // Arrange
    Mock<ITaxRateDataAccess> taxDALStub = new Mock<ITaxRateDataAccess>();
    taxDALStub.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08m);
    TaxCalculator calc = new TaxCalculator(taxDALStub.Object);

    // Act
    decimal result = calc.CalculateTax("75001", 100.00m);

    // Assert
    Assert.AreEqual(8.00m, result); // expected value comes first in MSTest
}
Notice here that we are testing the output of the method, rather than the fact that the method made a call to another resource.

Moq doesn't really make an API distinction between a mock and a stub (notice both were declared as Mock<T>), but the usage here is important in determining the type.
Hope this helps set you straight.
There are at least 5 different kinds of test doubles: dummies, stubs, mocks, spies and fakes. A good overview is at http://code.google.com/testing/TotT-2008-06-12.pdf and they are also categorized at http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
You want to test a chunk of code, right? Let's say a method. Your method downloads a file from an HTTP URL, then saves the file on disk, and then mails out that the file is on disk. All three of these actions are, of course, performed by service classes your method calls, because that makes them easy to mock. If you don't mock them, your test will download a file, access the disk, and send a mail every time you run it. Then you are not just testing the code in the method; you are also testing the code that downloads, writes to disk, and sends mail. If you mock these instead, you are testing just the method's code. You are also able to simulate, say, a download failure, to check that your method's code behaves correctly.
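To make that concrete, here is a minimal sketch of the idea using Moq and MSTest. The interface and class names (IDownloader, IFileStore, IMailer, ReportFetcher) are invented for illustration:

public interface IDownloader { byte[] Download(string url); }
public interface IFileStore { void Save(string path, byte[] content); }
public interface IMailer { void Send(string message); }

public class ReportFetcher
{
    private readonly IDownloader _downloader;
    private readonly IFileStore _fileStore;
    private readonly IMailer _mailer;

    public ReportFetcher(IDownloader downloader, IFileStore fileStore, IMailer mailer)
    {
        _downloader = downloader;
        _fileStore = fileStore;
        _mailer = mailer;
    }

    public bool FetchAndNotify(string url, string path)
    {
        try
        {
            byte[] content = _downloader.Download(url);
            _fileStore.Save(path, content);
            _mailer.Send("File is on disk: " + path);
            return true;
        }
        catch (Exception)
        {
            return false; // the failure behavior we want to test
        }
    }
}

[TestMethod]
public void FetchAndNotify_DownloadFails_ReturnsFalseAndSendsNoMail()
{
    // Arrange: make the fake downloader blow up
    var downloader = new Mock<IDownloader>();
    downloader.Setup(d => d.Download(It.IsAny<string>()))
              .Throws(new InvalidOperationException());
    var fileStore = new Mock<IFileStore>();
    var mailer = new Mock<IMailer>();
    var fetcher = new ReportFetcher(downloader.Object, fileStore.Object, mailer.Object);

    // Act
    bool result = fetcher.FetchAndNotify("http://example.com/report", "c:\\report.html");

    // Assert: no download means no mail
    Assert.IsFalse(result);
    mailer.Verify(m => m.Send(It.IsAny<string>()), Times.Never());
}

Because all three collaborators are mocked, the test runs in milliseconds and never touches the network, the disk, or a mail server.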
Now as for faking: I usually fake classes that just hold values and don't have much logic. If you are sending in an object that holds some values that get changed in the method, you can read them off in the test to see that the method does the right thing.

Of course the rules can (and sometimes must) be bent a bit, but the general way of thinking is: test your code, and your code only.
The Little Mocker, from Bob Martin, is a very good reading on the topic.
[...] long ago some very smart people wrote a paper that introduced and defined the term Mock Object. Lots of other people read it and started using that term. Other people, who hadn't read the paper, heard the term and started using it with a broader meaning. They even turned the word into a verb. They'd say, "Let's mock that object out.", or "We've got a lot of mocking to do."
The article explains the differences between mocks, fakes, spies and stubs.
Related
I am working on writing a tool which
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development, but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();          // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles();                   // clean up several directories
        CheckConnectionToSql();           // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.

It almost appears that with TDD the only tests I end up having are things like the following (assuming I am using FakeItEasy):
A.CallTo(()=>fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just becomes a whole lot of tests which are nothing but "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock and verify the interface instead of trying to set up and clean up real files.
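As a sketch, such a wrapper might look like the following; the member list here is just an example, since in practice you mirror whichever File methods you actually use:

public interface IFileSystem
{
    bool Exists(string path);
    void Delete(string path);
    void WriteAllText(string path, string contents);
}

// Production implementation: a thin pass-through to System.IO.File.
public class PhysicalFileSystem : IFileSystem
{
    public bool Exists(string path) { return File.Exists(path); }
    public void Delete(string path) { File.Delete(path); }
    public void WriteAllText(string path, string contents) { File.WriteAllText(path, contents); }
}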
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for the buck here. Use the real DB and file system.

If you have complex logic in the tool, then you may want to restructure the tool design to abstract out the DB and file system and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. Insert-new-user is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is the MyService will call to a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I am a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this
Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether a failure comes from your class or from one of its dependencies.
I wrote my test initially and made it fail and now I realize I need a
repository and need to mock it out, so I am having to change my test
to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests if the implementation changes. Surely you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User
Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check that it correctly calls the underlying data access provider, which should be mocked). But such tests are usually time-consuming and brittle. So it's better to write some acceptance tests which exercise the whole application with real repositories.
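For illustration, such a repository test might look like this with Moq; IDbExecutor, UserRepository and the command text are hypothetical stand-ins for whatever data access provider you actually wrap:

public interface IDbExecutor
{
    void Execute(string commandText);
}

public class UserRepository
{
    private readonly IDbExecutor _db;
    public UserRepository(IDbExecutor db) { _db = db; }

    // Simplified on purpose; real code would use parameters, not concatenation.
    public void Save(string userName)
    {
        _db.Execute("INSERT INTO Users (Name) VALUES ('" + userName + "')");
    }
}

[TestMethod]
public void Save_NewUser_ExecutesInsert()
{
    var db = new Mock<IDbExecutor>();
    var repository = new UserRepository(db.Object);

    repository.Save("alice");

    // Brittle by nature: the assertion is coupled to the command text.
    db.Verify(d => d.Execute(It.Is<string>(sql => sql.StartsWith("INSERT"))), Times.Once());
}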
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, UI), then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally; that is, create an interface IUserRepository so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods you need on it (e.g. _userRepository.StoreUser(user)).
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat the instantiation of your objects and your lifecycle management as a separate concern.
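Put together, a sketch of that arrangement might look like this; the User type and the StoreUser signature are assumptions for illustration:

public class User { }

public interface IUserRepository
{
    void StoreUser(User user);
}

public class MyService
{
    private readonly IUserRepository _userRepository;

    // The dependency comes in from outside, so a test can pass a mock.
    public MyService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void CreateUser(User user)
    {
        // ...validation and business rules would go here...
        _userRepository.StoreUser(user);
    }
}

[TestMethod]
public void CreateUser_ValidUser_IsStoredInRepository()
{
    var repositoryMock = new Mock<IUserRepository>();
    var service = new MyService(repositoryMock.Object);

    service.CreateUser(new User());

    repositoryMock.Verify(r => r.StoreUser(It.IsAny<User>()), Times.Once());
}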
tl;dr: create a mock!
I have two sets of testing libraries: one for unit tests, where I mock stuff. I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.

This gives me the possibility to test some code paths that I would not be able to verify otherwise.

The other test library is for integration tests, or functional tests, or whatever you want to call them. This one makes sure a specific use case, e.g. creating a tag from the webpage, does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.

In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead, it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact that the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test has arguments or attributes that require mock objects to be created before testing can begin, then that is where the trouble begins, right?
What sources do we have for constructing mocks?

- Simple creation with:
  - make or default create
  - make with hand-coded base-type arguments
- Complex creation with:
  - make with database-supplied arguments
  - make with other mock objects (start this process again)
- Object factories:
  - production-code-based factories
  - test-code-based factories
- Data-repo-based data (vs. hand-coded)
- Gleaned:
  - objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goals, in order:

1. No mocks needed. Strive for this above all.
2. Simple mock creation with no arguments.
3. Simple mock creation with base-type arguments.
4. Simple mock creation with DB-repo-sourced base-type arguments.
5. Complex mock creation using production-code object factories.
6. Complex mock creation using test-code object factories.
7. Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated, one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.

Moreover, perhaps there is a good rule of thumb: do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant, which is all but impossible to express with common TDD facilities. This includes the "implies" Boolean keyword as well.
Now look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and postconditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
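For readers without Eiffel, here is a rough C# approximation of the contract idea, using guard clauses and Debug.Assert. It is only a sketch, and nowhere near as strong as Eiffel's compiler-checked require/ensure/invariant clauses:

using System;
using System.Diagnostics;

public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        // Precondition (Eiffel: require)
        if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds.");

        decimal oldBalance = Balance;
        Balance -= amount;

        // Postcondition (Eiffel: ensure)
        Debug.Assert(Balance == oldBalance - amount, "Balance must drop by exactly the amount.");

        // Class invariant (Eiffel: invariant), re-checked by hand here
        Debug.Assert(Balance >= 0, "Balance must never go negative.");
    }
}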
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!
I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the very important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests; for instance, a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).

What you can do in your specific case is keep your Test1 as it is. It only focuses on the functionality of the registering process. You don't have to save that ServiceKey anywhere; just assert inside the test method.

For the second test you have to set up (fake) everything you need for it to run successfully. It is generally a good idea to follow the Arrange-Act-Assert principle, where you set up your data to test, act upon it, and then check that everything worked as intended (it also adds clarity and structure to your tests).

Therefore it is best to fake the ServiceKey you would get in the first test run, as shown in the sketch below. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight framework for mocking; you should check it out if you are not yet using any mocking utilities.
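Here is what that could look like with Moq and MSTest; IRegistrationService, DataService and the ServiceKey shape are hypothetical stand-ins for your actual types:

public class ServiceKey
{
    public string Value { get; private set; }
    public ServiceKey(string value) { Value = value; }
}

public interface IRegistrationService
{
    ServiceKey Register(string userName);
}

public class DataService
{
    private readonly IRegistrationService _registration;
    public DataService(IRegistrationService registration) { _registration = registration; }

    public bool AddData(string data)
    {
        ServiceKey key = _registration.Register("testuser");
        return key != null && !string.IsNullOrEmpty(data);
    }
}

[TestMethod]
public void AddData_WithValidServiceKey_Succeeds()
{
    // Arrange: fake the key so this test no longer depends on Test1 having run.
    var registration = new Mock<IRegistrationService>();
    registration.Setup(r => r.Register(It.IsAny<string>()))
                .Returns(new ServiceKey("known-test-key"));
    var service = new DataService(registration.Object);

    // Act
    bool added = service.AddData("some data");

    // Assert
    Assert.IsTrue(added);
}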
Hope this helps.
I have worked with code which had NUnit tests written. But I have never worked with mocking frameworks. What are they? I understand dependency injection and how it helps to improve testability. I mean, all dependencies can be mocked while unit testing. But then why do we need mocking frameworks? Can't we simply create mock objects ourselves and provide the dependencies? Am I missing something here?
Thanks.
It makes mocking easier. Mocking frameworks usually allow you to express testable assertions that refer to the interaction between objects.
Here you have an example:
var extension = MockRepository
    .GenerateMock<IContextExtension<StandardContext>>();
var ctx = new StandardContext();
ctx.AddExtension(extension);

extension.AssertWasCalled(
    e => e.Attach(null),
    o => o.Constraints(Is.Equal(ctx)));
You can see that I explicitly test that the Attach method of the IContextExtension was called and that the input parameter was said context object. It would make my test fail if that did not happen.
You can create mock objects by hand and use them during testing using Dependency Injection frameworks...but letting a mocking framework generate your mock objects for you saves time.
As always, if using the framework adds too much complexity to be useful then don't use it.
Sometimes, when working with third-party libraries, or even with some aspects of the .NET Framework, it is extremely difficult to write tests for some situations - for example, an HttpContext, or a SharePoint object. Creating mock objects for those can become very cumbersome, so mocking frameworks take care of the basics and let us spend our time focusing on what makes our applications unique.
Using a mocking framework can be a much more lightweight and simple solution to provide mocks than actually creating a mock object for every object you want to mock.
For example, mocking frameworks are especially useful to do things like verify that a call was made (or even how many times that call was made). Making your own mock objects to check behaviors like this (while mocking behavior is a topic in itself) is tedious, and yet another place for you to introduce a bug.
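For instance, with Moq that kind of behavioral check is a single line; IAuditLog and OrderProcessor below are hypothetical examples:

public interface IAuditLog
{
    void Record(string message);
}

public class OrderProcessor
{
    private readonly IAuditLog _auditLog;
    public OrderProcessor(IAuditLog auditLog) { _auditLog = auditLog; }
    public void Process() { _auditLog.Record("order processed"); }
}

[TestMethod]
public void Process_RecordsOneAuditEntryPerCall()
{
    var auditLog = new Mock<IAuditLog>();
    var processor = new OrderProcessor(auditLog.Object);

    processor.Process();
    processor.Process();

    // Verifies both that the call happened and exactly how many times.
    auditLog.Verify(a => a.Record(It.IsAny<string>()), Times.Exactly(2));
}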
Check out Rhino Mocks for an example of how powerful a mocking framework can be.
Mock objects take the place of any large/complex/external objects your code needs access to in order to run.
They are beneficial for a few reasons:
Your tests are meant to run fast and easily. If your code depends on, say, a database connection, then you would need a fully configured and populated database running in order to run your tests. This can get annoying, so you create a replacement - a "mock" - of the database connection object that just simulates the database.
You can control exactly what output comes out of the Mock objects and can therefore use them as controllable data sources to your tests.
You can create the mock before you create the real object in order to refine its interface. This is useful in Test-driven Development.
The only reason to use a mocking library is that it makes mocking easier.
Sure, you can do it all without the library, and that is fine if it's simple, but as soon as the mocks start getting complicated, libraries are much easier.

Think of this in terms of sorting algorithms: sure, anyone can write one, but why? If the code already exists and is simple to call... why not use it?
You certainly can mock your dependencies manually, but a framework takes away a lot of the tedious work. Also, the assertions usually available make it worth learning.
Mocking frameworks allow you to isolate units of code that you wish to test from that code's dependencies. They also allow you to simulate various behaviors of your code's dependencies in a test environment that might be difficult to setup or reproduce otherwise.
For example, if I have a class A containing business rules and logic that I wish to test, but this class A depends on data-access classes, other business classes, even UI classes, etc., then these other classes can be mocked to perform in a certain manner (or in no manner at all, in the case of loose mock behavior) to test the logic within your class A against every imaginable way that these other classes could conceivably behave in a production environment.
To give a deeper example, suppose that your class A invokes a method on a data access class such as
public bool IsOrderOnHold(int orderNumber)
then a mock of that data access class could be set up to return true every time, or to return false every time, to test how your class A responds to such circumstances.
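A sketch of that with Moq, where IOrderDataAccess and OrderLogic stand in for the data access class and "class A" described above:

public interface IOrderDataAccess
{
    bool IsOrderOnHold(int orderNumber);
}

public class OrderLogic
{
    private readonly IOrderDataAccess _dataAccess;
    public OrderLogic(IOrderDataAccess dataAccess) { _dataAccess = dataAccess; }
    public bool ShipOrder(int orderNumber) { return !_dataAccess.IsOrderOnHold(orderNumber); }
}

[TestMethod]
public void ShipOrder_OrderOnHold_IsNotShipped()
{
    var dataAccess = new Mock<IOrderDataAccess>();
    dataAccess.Setup(d => d.IsOrderOnHold(It.IsAny<int>())).Returns(true);
    var logic = new OrderLogic(dataAccess.Object);

    Assert.IsFalse(logic.ShipOrder(42));
}

[TestMethod]
public void ShipOrder_OrderNotOnHold_IsShipped()
{
    var dataAccess = new Mock<IOrderDataAccess>();
    dataAccess.Setup(d => d.IsOrderOnHold(It.IsAny<int>())).Returns(false);
    var logic = new OrderLogic(dataAccess.Object);

    Assert.IsTrue(logic.ShipOrder(42));
}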
I'd claim you don't. Writing test doubles isn't a large chore 9 times out of 10. Most of the time it's done almost entirely automatically by just asking ReSharper to implement an interface for you, and then you add the minor detail needed for this double (because you aren't doing a bunch of logic and creating these intricate super test doubles, right? Right?).

"But why would I want my test project bloated with a bunch of test doubles?" you may ask. Well, you shouldn't. The DRY principle holds for tests as well. Create GOOD test doubles that are reusable and have descriptive names. This makes your tests more readable too.

One thing writing your own doubles DOES make harder is over-using test doubles. I tend to agree with Roy Osherove and Uncle Bob: you really don't want to create a mock object with some special configuration all that often. That is in itself a design smell. With a framework it's so easy to use test doubles with intricate logic in just about every test, and in the end you find that you haven't really tested your production code; you have merely tested a god-awful Frankenstein's-monsteresque mess of mocks containing mocks containing more mocks. You'll never "accidentally" do this if you write your own doubles.
Of course, someone will point out that there are times when you "have" to use a framework, not doing so would be plain stupid. Sure, there are cases like that. But you probably don't have that case. Most people don't, and only for a small part of the code, or the code itself is really bad.
I'd recommend anyone (ESPECIALLY a beginner) to stay away from frameworks and learn how to get by without them. Later, when they feel that they really have to, they can use whatever framework they think is the most suitable, but by then it'll be an informed decision and they'll be far less likely to abuse the framework to create bad code.
Well, mocking frameworks make my life much easier and less tedious, so I can spend time actually writing code. Take for instance Mockito (in the Java world):
// static imports assumed: import static org.mockito.Mockito.*;

// mock creation
List mockedList = mock(List.class);

// using the mock object
mockedList.add("one");
mockedList.clear();

// verification
verify(mockedList).add("one");
verify(mockedList).clear();

// stubbing using the built-in anyInt() argument matcher
when(mockedList.get(anyInt())).thenReturn("element");

// stubbing using hamcrest (let's say isValid() returns your own hamcrest matcher);
// note that contains() returns boolean, so it must be stubbed with a boolean
when(mockedList.contains(argThat(isValid()))).thenReturn(true);

// the following prints "element"
System.out.println(mockedList.get(999));
Though this is a contrived example, if you replace List.class with MyComplex.class then the value of having a mocking framework becomes evident. You could write your own or do without, but why would you want to go that route?
I first grokked why I needed a mocking framework when I compared writing test doubles by hand for a set of unit tests (each test needed slightly different behaviour, so I was creating subclasses of a base fake type for each test) with using something like Rhino Mocks or Moq to do the same work.

Simply put, it was much faster to use a framework to generate all of the fake objects I needed rather than writing (and debugging) my own fakes by hand.
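To illustrate the difference, here is the same behavioral variation expressed both ways; IDownloader is a hypothetical dependency of the code under test:

public interface IDownloader
{
    byte[] Download(string url);
}

// By hand: a new subclass for every behavioral variation a test needs.
public class TimingOutDownloaderFake : IDownloader
{
    public byte[] Download(string url) { throw new TimeoutException(); }
}

[TestMethod]
public void SameVariationViaFramework()
{
    // With Moq, the variation is two lines inside the test itself.
    var downloader = new Mock<IDownloader>();
    downloader.Setup(d => d.Download(It.IsAny<string>())).Throws<TimeoutException>();
    // ...pass downloader.Object to the code under test...
}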
Updated version of question
Hi.
My company has a few legacy code bases which I hope to get under test as soon as they migrate to .NET 3.5. I have selected Moq as my Mocking framework (I fell in love with the crisp syntax immediately).
One common scenario, which I expect to see a lot of in the future, is where I see an object which interacts with some other objects.
I know the works of Michael Feathers and I am getting good at identifying inflection points and isolating decent sized components. Extract and Override is king.
However, there is one feature which would make my life a whole lot easier.
Imagine Component1 interacting with Component2. Component2 is some weird serial-line interface to a fire central or some such, with a lot of byte inspection, casting and pointer manipulation. I do not wish to understand Component2, and its legacy interface consumed by Component1 carries a lot of baggage with it.
What I would like to do, is to extract the interface of Component2 consumed by Component1 and then do something like this:
component1.FireCentral = new Mock<IComponent2>(component2);
I am creating a normal mock, but I am passing an instance of the real Component2 in as a constructor argument to the Mock object. It may seem like I'm making my test depend on Component2, but I am not planning on keeping this code. This is part of the "place object under test" ritual.
Now, I would fire up the real system (with a physical fire central connected) and then interact with my object.
What I would then wish for is to inspect the mock to see a log of how Component1 interacted with Component2 (using the debugger to inspect some collection of strings on the mock). And, even better, the mock could provide a list of expectations (in C#) that would recreate this behavior in a mock that did not depend on Component2, which I would then use in my test code.
In short. Using the mocking framework to record the interaction so that I can play it back in my test code.
Old version of question
Hi.
Working with legacy code and with a lot of utility classes, I sometimes find myself wondering how a particular class is acted upon by its surroundings in a number of scenarios.
One case that I was working on this morning involved subclassing a regular MemoryStream so that it would dump its contents to file when reaching a certain size.
// TODO: Remove
private class MyLimitedMemoryStream : MemoryStream
{
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (GetBuffer().Length > 10000000)
        {
            // Dump the contents to disk once the stream grows past ~10 MB.
            using (var file = new FileStream("c:\\foobar.html", FileMode.Create))
            {
                var internalBuffer = GetBuffer();
                file.Write(internalBuffer, 0, internalBuffer.Length);
            }
        }
        base.Write(buffer, offset, count);
    }
}
(and I used a breakpoint in here to exit the program after the file was written).
This worked, and I found which webform (web part -> web part -> web part) control rendered incorrectly. However, MemoryStream has a bunch of Write and WriteLine variants to override.

Can I use a mocking framework to quickly get an overview of how a particular instance is acted upon? Any clever tricks there? We use Rhino Mocks.

I see this as a great asset in working with legacy code, especially if recorded actions in a scenario can easily be set up as new expectations/acceptance criteria for that same scenario replicated in a unit test.
Every input is appreciated. Thank you for reading.
Welcome to the small club of "visionaries" who understand this requirement :)
Unfortunately, I'll tell you, I don't think this exists yet for .NET. I'm also pretty sure it doesn't exist for Java either, as I've been searching periodically for a few years and even offered a cash bounty for this on a pay-for-work website, and nothing turned up (some Russian developer offered to implement it from scratch, but it was outside my budget).

But I have created a proof of concept in PHP to demonstrate the idea and perhaps get other people interested in developing this for other languages (.NET for you, Java for me).
Here is the PHP proof-of-concept:
http://code.google.com/p/php-mock-recorder/
I don't think you can use a mocking framework to easily get an overview of how a particular instance is acted upon.
You can however use the mocking framework to define how it should be acted upon in order to verify that it is acted upon in this way. In legacy code this often requires making the code testable, e.g. introducing interfaces etc.
One technique that can be used with legacy code without too much restructuring is logging seams. You can read more about this in this InfoQ article: Using Logging Seams for Legacy Code Unit Testing.
If you want more tips on how to test legacy code, I recommend the book Working Effectively with Legacy Code by Michael Feathers.
Hope this helps!
Yes, this is possible. If you use a strict mock and run a unit test that exercises the mock, the test will fail, telling you which unexpected method was called.
Is this what you're looking for?
Mock frameworks weren't designed for this problem. I don't see how you can make this work with either Moq or Rhino Mocks; even the powerful Typemock may not be able to do what you're asking.

Instead, use an aspect-oriented programming (AOP) tool to weave in pre- and post-method-invocation calls. This will do exactly what you want: see all interactions with a particular type. For example, in the PostSharp AOP framework, you simply specify methods you'd like called before and after a method call on some other object:
public class Component2TracerAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.Method == somethingOnComponent2) // pseudo-code
        {
            Trace.TraceInformation("Entering {0}.", eventArgs.Method);
        }
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.Method == somethingOnComponent2) // pseudo-code
        {
            Trace.TraceInformation("Leaving {0}.", eventArgs.Method);
        }
    }
}
That will log all the methods that are called on Component2.