How do I mock HttpResponseBase.End()?

I'm using Moq to create a mock object of HttpResponseBase. I need to be able to test that HttpResponseBase.End() was called in my library. To do this, I specify some text before the call and some text after. Then I check that only the text before the call to End() is present in HttpResponseBase.Output.
The problem is, I can't figure out how to mock HttpResponseBase.End() so that it stops processing, like it does in ASP.NET.
public static HttpResponseBase CreateHttpResponseBase() {
    var mock = new Mock<HttpResponseBase>();
    StringWriter output = new StringWriter();

    mock.SetupProperty(x => x.StatusCode);
    mock.SetupGet(x => x.Output).Returns(output);
    mock.Setup(x => x.End()) /* what do I put here? */;
    mock.Setup(x => x.Write(It.IsAny<string>()))
        .Callback<string>(s => output.Write(s));

    return mock.Object;
}

It is a bit unclear to me what it is you are trying to achieve, but from your description, it sounds like you are attempting to get your Abstraction to behave like a particular implementation. In other words, because HttpResponse.End() has a certain behavior, you want your Mock to have the same behavior?
In general, that is not particularly easy to do with Moq, since it has no concept of ordered expectations (unlike RhinoMocks). There is, however, a feature request for it.
You might be able to use a Callback together with setting up the End method to toggle a flag that determines any further behavior of the Mock, but it's not going to be particularly pretty. I'm thinking about something like this:
bool ended = false;
var mock = new Mock<HttpResponseBase>();
mock.Setup(x => x.End()).Callback(() => ended = true);
// Other setups involving 'ended' and Callbacks
Then have all other Setups use dual implementations based on whether ended is true or false.
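A minimal sketch of that idea, continuing the snippet above; the if (!ended) guard is my illustration of such a dual implementation, not code from the original:
StringWriter output = new StringWriter();
mock.SetupGet(x => x.Output).Returns(output);
// "Dual implementation": Write only reaches the output while End()
// has not yet been called, mimicking ASP.NET cutting off the response.
mock.Setup(x => x.Write(It.IsAny<string>()))
    .Callback<string>(s => { if (!ended) output.Write(s); });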
It would be pretty damn ugly, so I'd seriously reconsider my options at this point. There are at least two directions you can take:
Make a Fake implementation of HttpResponseBase instead of using Moq. It sounds like you are expecting such specific behavior of the implementation that a Test Double with embedded logic sounds like a better option. Put shortly, a Fake is a Test Double that can contain semi-complex logic that mimics the intended production implementation. You can read more about Fakes and other Test Doubles in the excellent xUnit Test Patterns book.
Reconsider your initial assumptions. It sounds to me like you are tying your client very closely to a particular behavior of HttpResponseBase, so you may be violating the Liskov Substitution Principle. However, I may be mistaken, as a method called 'End' carries certain connotations beyond the purely semantic, but still, I'd personally consider if a better design was possible.


Using a stub instead of a concrete object as a parameter

Is it always necessary to create and pass a stub into a method as a parameter, even if I can instantiate the object being passed into the method without any problems?
For example, I want to test the method below, which takes a TargetDateRanger object as a parameter. Should I (a) stub it out and pass it in, (b) break the dependency, put it behind an interface, then stub it and pass it in, or (c) instantiate it and pass it into the method as a concrete object?
In this case below I can get away with using the concrete object but is that wise and does it break some testing rules or something?
public virtual Dictionary<DateTime, DateTime> ResolveDates(ISeries comparisonSeries, TargetDateRanger sourceRanger)
{
    Dictionary<DateTime, DateTime> dates = new Dictionary<DateTime, DateTime>();

    foreach (DateTime keyDate in sourceRanger.ValidDates)
        dates.Add(keyDate, this.ResolveDate(comparisonSeries, keyDate));

    return dates;
}
I think the answer depends on what TargetDateRanger.ValidDates does. Assuming you can completely control what that property returns from your unit test, there's no reason to separately mock it out. If it hits the database, has some internal logic, depends on something like DateTime.Now, etc. then you'll need to mock it.
Basically, you want the "environment" of a unit test to be completely under your control so that you have predictable results and can quickly pinpoint the failing code. If ValidDates has a possibility of returning wrong results, then you'd want to unit test that separately and mock it in this case (so that "bad results" don't cause your ResolveDates method to fail, since the problem doesn't reside there).
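For instance, a minimal Moq sketch of stubbing it out; this assumes ValidDates is virtual on TargetDateRanger (otherwise extract an interface first), and sut/comparisonSeries stand in for your test fixtures:
// ValidDates is fully under the test's control, so ResolveDates
// sees exactly one predictable date.
var ranger = new Mock<TargetDateRanger>();
ranger.SetupGet(r => r.ValidDates)
      .Returns(new List<DateTime> { new DateTime(2010, 1, 1) });

var dates = sut.ResolveDates(comparisonSeries, ranger.Object);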
You could use a default parameter.
void print(int a, string b = "default")
{
    Console.WriteLine(a + b);
}
In unit testing, I test mine standalone. I break it out, have a driver set up that I can pump variables into (if it is meant to receive them) and a stub so I can view the expected results. I feel it's a better test philosophy for me, but I am relatively new to programming, so I find this way of testing teaches me a lot at the same time.
Once that's done, I integrate it into the larger system, re-test it with a new driver and stub output, and verify the entire flow works.
I can't say there is something inherently wrong with using the concrete object, especially if it's a small piece of a small program... but I like breaking things out.
If you are writing a unit test for this method, I think it's better to isolate/fake out any external dependencies. The problem is that if you don't, then for example if someone makes a change to ValidDates, your test would fail for the wrong reason. On the other hand, you would also be testing what is inside ValidDates, which means you are probably testing multiple things in one test. It might also prevent you from giving the test a good, specific name.
Remember you can unit test ValidDates separately, in isolation.
It is important to break as many dependencies as possible and test each small piece of logic/behavior in isolation. This way you get the real value of unit tests, IMO.

Moq, strict vs loose usage

In the past, I have only used Rhino Mocks, with the typical strict mock. I am now working with Moq on a project and I am wondering about the proper usage.
Let's assume that I have an object Foo with method Bar which calls a Bizz method on object Buzz.
In my test, I want to verify that Bizz is called, therefore I feel there are two possible options:
With a strict mock
var mockBuzz = new Mock<IBuzz>(MockBehavior.Strict);
mockBuzz.Setup(x => x.Bizz()); // test will fail if Bizz method not called
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.VerifyAll();
With a loose mock
var mockBuzz = new Mock<IBuzz>();
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.Verify(x => x.Bizz()); // test will fail if Bizz method not called
Is there a standard or normal way of doing this?
I used to use strict mocks when I first started using mocks in unit tests. This didn't last very long. There are really two reasons why I stopped doing this:
The tests become brittle - With strict mocks you are asserting more than one thing: that the setup methods are called, AND that no other methods are called. When you refactor the code the test often fails, even if what you are trying to test is still true.
The tests are harder to read - You need to have a setup for every method that is called on the mock, even if it's not really related to what you want to test. When someone reads this test it's difficult to tell what is important for the test and what is just a side effect of the implementation.
Because of these I would strongly recommend using loose mocks in your unit tests.
I have a background in C++/non-.NET development and have been more into .NET recently, so I had certain expectations when I was using Moq for the first time. I was trying to understand WTF was going on with my test and why the code I was testing was throwing a random exception instead of the Mock library telling me which function the code was trying to call. So I discovered I needed to turn on the Strict behaviour, which was perplexing, and then I came across this question, which I saw had no ticked answer yet.
The Loose mode, and the fact that it is the default, is insane. What on earth is the point of a Mock library that does something completely unpredictable that you haven't explicitly listed it should do?
I completely disagree with the points listed in the other answers in support of Loose mode. There is no good reason to use it and I wouldn't ever want to, ever. When writing a unit test I want to be certain what is going on - if I know a function needs to return a null, I'll make it return that. I want my tests to be brittle (in the ways that matter) so that I can fix them, and so that the setup lines I add to the test suite explicitly describe exactly what my software will do.
The question is - is there a standard and normal way of doing this?
Yes - from the point of view of programming in general, i.e. other languages and outside the .NET world, you should use Strict always. Goodness knows why it isn't the default in Moq.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example, let's say we have a database provider, StudentDAL, which has two methods. The data access interface looks something like this:
public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);
The implementation which consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected.
    // Use a strict mock to test this: the id is delegated unchanged.
    return StudentDAL.GetStudentById(id);
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected.
    // The age filter is derived from the request and then passed on to the
    // underlying layer. Use a loose mock and Moq's Verify API to make sure
    // the age filter is correctly passed on.
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
}
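A hedged sketch of that second test; IStudentDAL and StudentService are assumed names, not from the original post:
var dal = new Mock<IStudentDAL>();
var service = new StudentService(dal.Object);

service.GetStudentsForClass(new StudentListRequest
{
    DateOfBirthFilter = new DateTime(2000, 1, 1),
    ClassId = 5
});

// Verify the derived age filter, not the raw request, reached the DAL.
int expectedAge = DateTime.Now.Year - 2000;
dal.Verify(d => d.GetStudents(expectedAge, 5));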
Personally, being new to mocking and Moq, I feel that starting off with Strict mode helps me better understand the innards and what's going on. "Loose" sometimes hides details and passes a test which a Moq beginner may fail to see through. Once you have your mocking skills down, Loose would probably be a lot more productive, like in this case saving a line with the "Setup" and just using "Verify" instead.

Need ideas for a TDD Approach

We have just released a re-written (for the 3rd time) module for our proprietary system. This module, which we call the Load Manager, is by far the most complicated of all the modules in our system to date. We are trying to get a comprehensive test suite because every time we make any kind of significant change to this module there is hell to pay for weeks in sorting out bugs and quirks.
The Load Manager's guts reside in a class called LoadManagerHandler; this is essentially all of the logic behind the module. This handler calls upon multiple controllers to do the CRUD methods in the database. These controllers are essentially the top layer of the DAL that sits on top of and abstracts away our LLBLGen-generated code.
So it is easy enough to mock these controllers, which we are doing using the Moq framework. However, the problem comes in the complexity of the Load Manager: the issues we receive aren't in the simple cases but in the cases where there is a substantial amount of data contained within the handler.
To briefly explain, the Load Manager contains a number of "unloaded" details, sometimes in the hundreds, that are then dropped into user-created loads and reship pools. During the process of creating and populating these loads there is a multitude of deletes, changes, and additions that eventually cause issues to appear. However, because when you mock a method of an object the last mock wins, i.e.:
jobDetailControllerMock.Setup(mock => mock.GetById(1)).Returns(jobDetail1);
jobDetailControllerMock.Setup(mock => mock.GetById(2)).Returns(jobDetail2);
jobDetailControllerMock.Setup(mock => mock.GetById(3)).Returns(jobDetail3);
No matter what I send to jobDetailController.GetById(x) I will always get back jobDetail3. This makes testing almost impossible, because we have to make sure that when changes are made, all points that should be affected are affected.
So, I resolved to using the test database and just allowing the reads and writes to occur as normal. However, because you can't (read: should not) dictate the order of your tests, tests that run earlier could cause tests that run later to fail.
TL;DR: I am essentially looking for testing strategies for data-oriented code that is quite complex in nature.
As noted by Seb, you can indeed use range matching:
controller.Setup(x => x.GetById(It.IsInRange<int>(1, 3, Range.Inclusive))).Returns<int>(i => jobs[i]);
This code uses the argument passed to the method to calculate which value to return.
To get around the "last mock wins" with Moq, you could use the technique from this blog:
Moq Triqs - Successive Expectations
EDIT:
Actually you don't even need that. Based on your example, Moq will return different values based on the method argument.
public interface IController
{
    string GetById(int id);
}

class Program
{
    static void Main(string[] args)
    {
        var mockController = new Mock<IController>();
        mockController.Setup(x => x.GetById(1)).Returns("one");
        mockController.Setup(x => x.GetById(2)).Returns("two");
        mockController.Setup(x => x.GetById(3)).Returns("three");

        IController controller = mockController.Object;

        Console.WriteLine(controller.GetById(1));
        Console.WriteLine(controller.GetById(3));
        Console.WriteLine(controller.GetById(2));
        Console.WriteLine(controller.GetById(3));
        Console.WriteLine(controller.GetById(99) == null);
    }
}
Output is:
one
three
two
three
True
It sounds like LoadManagerHandler does... quite a bit of work. "Manager" in a class name always somewhat worries me... from a TDD standpoint, it might be worth thinking about breaking the class up appropriately if possible.
How long is this class?
I've never used Moq, but it seems that it should be able to match a mock invocation by argument(s) supplied.
A quick look at the Quick Start documentation has the following excerpt:
//Matching Arguments
// any value
mock.Setup(foo => foo.Execute(It.IsAny<string>())).Returns(true);
// matching Func<int>, lazy evaluated
mock.Setup(foo => foo.Add(It.Is<int>(i => i % 2 == 0))).Returns(true);
// matching ranges
mock.Setup(foo => foo.Add(It.IsInRange<int>(0, 10, Range.Inclusive))).Returns(true);
I think you should be able to use the second example above.
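Adapted to the question's controller, a hedged sketch (the jobDetails dictionary is a fixture you would build in the test):
// One setup covers every id: Moq passes the actual argument to the
// Returns callback, so the lookup picks the matching detail.
var jobDetails = new Dictionary<int, JobDetail>
{
    { 1, jobDetail1 }, { 2, jobDetail2 }, { 3, jobDetail3 }
};

jobDetailControllerMock
    .Setup(m => m.GetById(It.IsAny<int>()))
    .Returns<int>(id => jobDetails[id]);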
A simple testing technique is to make sure that every time a bug is logged against the system, a unit test is written covering that case. You can build up a pretty solid set of tests just from that technique. And even better, you won't run into the same thing twice.
No matter what I send to jobDetailController.GetById(x) I will always get back jobDetail3
You should spend more time debugging your tests because what is happening is not how Moq behaves. There is a bug in your code or tests causing something to misbehave.
If you want to make repeated calls with the same inputs but different outputs, you could also use a different mocking framework. RhinoMocks supports the record/playback idiom. You're right that this is not always what you want with regards to enforcing call order. I do prefer Moq myself for its simplicity.
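For what it's worth, a rough record/playback sketch in RhinoMocks; the IController interface is borrowed from the example above, and the exact API details may vary by version:
var mocks = new MockRepository();
var controller = mocks.StrictMock<IController>();

using (mocks.Record())
{
    // Successive expectations: the same input yields different outputs
    // on consecutive calls, something Moq's last-setup-wins model resists.
    Expect.Call(controller.GetById(1)).Return("one");
    Expect.Call(controller.GetById(1)).Return("one, again");
}

using (mocks.Playback())
{
    // exercise the code under test here
}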

Use of Mocks in Tests

I just started using mock objects (using Java's mockito) in my tests recently. Needless to say, they simplified the set-up part of the tests, and along with Dependency Injection, I would argue it made the code even more robust.
However, I have found myself slipping into testing against the implementation rather than the specification. I ended up setting up expectations that I would argue are not part of the tests. In more technical terms, I will be testing the interaction between the SUT (the class under test) and its collaborators, and such a dependency isn't part of the contract or the interface of the class!
Consider that you have the following:
When dealing with an XML node, suppose that you have a method, attributeWithDefault(), that returns the attribute value of the node if it's available, and otherwise returns a default value.
I would setup the test like the following:
Element e = mock(Element.class);
when(e.getAttribute("attribute")).thenReturn("what");
when(e.getAttribute("other")).thenReturn(null);
assertEquals(attributeWithDefault(e, "attribute", "default"), "what");
assertEquals(attributeWithDefault(e, "other", "default"), "default");
Well, here not only did I test that attributeWithDefault() adheres to the specification, but I also tested the implementation, as I required it to use Element.getAttribute(), instead of Element.getAttributeNode().getValue() or Element.getAttributes().getNamedItem().getNodeValue(), etc.
I assume that I am going about it in the wrong way, so any tips on how I can improve my usage of mocks and best practices will be appreciated.
EDIT:
What's wrong with the test
I made the assumption above that the test is bad style; here is my rationale.
The specification doesn't specify which method gets called. A client of the library shouldn't care how the attribute is retrieved, for example, as long as it is done rightly. The implementor should have free rein to use any of the alternative approaches, in any way he sees fit (with respect to performance, consistency, etc.). It's the specification of Element that ensures that all these approaches return identical values.
It doesn't make sense to re-factor Element into a single method interface with getElement() (Go is quite nice about this actually). For ease of use, a client of the method should be able to just to use the standard Element in the standard library. Having interfaces and new classes is just plain silly, IMHO, as it makes the client code ugly, and it's not worth it.
Assuming the spec stays as is and the test stays as is, a new developer may decide to refactor the code to use a different approach of accessing the state, and cause the test to fail! A test that fails even though the actual implementation adheres to the specification is a false alarm.
Having a collaborator expose state in multiple formats is quite common. A specification and the test shouldn't depend on which particular approach is taken; only the implementation should!
This is a common issue in mock testing, and the general mantra to get away from this is:
Only mock types you own.
Here if you want to mock collaboration with an XML parser (not necessarily needed, honestly, as a small test XML should work just fine in a unit context) then that XML parser should be behind an interface or class that you own that will deal with the messy details of which method on the third party API you need to call. The main point is that it has a method that gets an attribute from an element. Mock that method. This separates implementation from design. The real implementation would have a real unit test that actually tests you get a successful element from a real object.
Mocks can be a nice way of saving boilerplate setup code (acting essentially as Stubs), but that isn't their core purpose in terms of driving design. Mocks are testing behavior (as opposed to state) and are not Stubs.
I should add that when you use Mocks as stubs, they look like your code. Any stub has to make assumptions about how you are going to call it that are tied to your implementation. That is normal. Where it is a problem is if that is driving your design in bad ways.
When designing unit tests you will always effectively test your implementation, and not some abstract specification. Or one can argue that you will test the "technical specification", which is the business specification extended with technical details. There is nothing wrong with this. Instead of testing that:
My method will return a value if defined or a default.
you are testing:
My method will return a value if defined or a default provided that the xml Element supplied will return this attribute when I call getAttribute(name).
The only solution I can see for you here (and I have to admit I'm not familiar with the library you're using) is to create a mock element that has all of the functionality included, that is, one that also lets you set the value of getAttributeNode().getValue() and getAttributes().getNamedItem().getNodeValue().
But, assuming they're all equivalent, it's fine to just test one. It's when it varies that you need to test all cases.
I don't find anything wrong with your use of the mocks. What you are testing is the attributeWithDefault() method and its implementation, not whether Element is correct or not. So you mocked Element in order to reduce the amount of setup required. The test ensures that the implementation of attributeWithDefault() fits the specification; naturally there needs to be some specific implementation that can be run for the test.
You're effectively testing your mock object here.
If you want to test the attributeWithDefault() method, you must assert that e.getAttribute() gets called with the expected argument and forget about the return value. This return value only verifies the setup of your mock object.
(I don't know how this is exactly done with Java's mockito, I'm a pure C# guy...)
It depends on whether getting the attribute via calling getAttribute() is part of the specification, or if it is an implementation detail that might change.
If Element is an interface, then stating that you should use 'getAttribute' to get the attribute is probably part of the interface. So your test is fine.
If Element is a concrete class, but attributeWithDefault should not be aware of how you can get the attribute, then maybe there is an interface waiting to appear here.
public interface AttributeProvider {
    // Might return null.
    String getAttribute(String name);
}

public class Element implements AttributeProvider {
    public String getAttribute(String name) {
        return getAttributeHolder().doSomethingReallyTricky().toString();
    }
}

public class Whatever {
    // 'default' is a reserved word in Java, hence 'defaultValue'.
    public String attributeWithDefault(AttributeProvider p, String name, String defaultValue) {
        String res = p.getAttribute(name);
        if (res == null) {
            return defaultValue;
        }
        return res;
    }
}
You would then test attributeWithDefault against a Mock AttributeProvider instead of an Element.
Of course in this situation it would probably be overkill, and your test is probably just fine even with an implementation (you will have to test it somewhere anyway ;) ). However, this kind of decoupling might be useful if the logic ever gets any more complicated, either in getAttribute or in attributeWithDefault.
Hope this helps.
It seems to me that there are 3 things you want to verify with this method:
It gets the attribute from the right place (Element.getAttribute())
If the attribute is not null, it is returned
If the attribute is null, the string "default" is returned
You're currently verifying #2 and #3, but not #1. With mockito, you could verify #1 by adding
verify(e).getAttribute("attribute");
verify(e).getAttribute("other");
Which ensures that the methods are actually getting called on your mock. Admittedly this is a little clunky in mockito. In easymock, you'd do something like:
expect(e.getAttribute("attribute")).andReturn("what");
expect(e.getAttribute("default")).andReturn(null);
It has the same effect, but I think makes your test a bit easier to read.
If you are using dependency injection then the collaborators should be part of the contract. You need to be able to inject all collaborators in through the constructor or a public property.
Bottom line: if you have a collaborator that you newing up instead of injecting then you probably need to refactor the code. This is a change of mindset necessary for testing/mocking/injecting.
This is a late answer, but it takes a different viewpoint from the other ones.
Basically, the OP is right in thinking the test with mocking is bad, for the reasons he stated in the question. Those saying that mocks are ok have not provided good reasons for it, IMO.
Here is a complete version of the test, in two versions: one with mocking (the BAD one) and another without (the GOOD one). (I took the liberty of using a different mocking library, but that doesn't change the point.)
import javax.xml.parsers.*;
import org.w3c.dom.*;
import org.junit.*;
import static org.junit.Assert.*;
import mockit.*;

public final class XmlTest
{
    // The code under test, embedded here for convenience.
    public static final class XmlReader
    {
        public String attributeWithDefault(
            Element xmlElement, String attributeName, String defaultValue
        ) {
            String attributeValue = xmlElement.getAttribute(attributeName);
            return attributeValue == null || attributeValue.isEmpty() ?
                defaultValue : attributeValue;
        }
    }

    @Tested XmlReader xmlReader;

    // This test is bad because:
    // 1) it depends on HOW the method under test is implemented
    //    (specifically, that it calls Element#getAttribute and not some other method
    //    such as Element#getAttributeNode) - it's therefore refactoring-UNSAFE;
    // 2) it depends on the use of a mocking API, always a complex beast which takes
    //    time to master;
    // 3) use of mocking can easily end up in mock behavior that is not real, as
    //    actually occurred here (specifically, the test records Element#getAttribute
    //    as returning null, which it would never return according to its API
    //    documentation - instead, an empty string would be returned).
    @Test
    public void readAttributeWithDefault_BAD_version(@Mocked final Element e) {
        new Expectations() {{
            e.getAttribute("attribute"); result = "what";
            // This is a bug in the test (and in the CUT), since Element#getAttribute
            // never returns null for real.
            e.getAttribute("other"); result = null;
        }};

        String actualValue = xmlReader.attributeWithDefault(e, "attribute", "default");
        String defaultValue = xmlReader.attributeWithDefault(e, "other", "default");

        assertEquals(actualValue, "what");
        assertEquals(defaultValue, "default");
    }

    // This test is better because:
    // 1) it does not depend on how the method under test is implemented, being
    //    refactoring-SAFE;
    // 2) it does not require mastery of a mocking API and its inevitable intricacies;
    // 3) it depends only on reusable test code which is fully under the control of the
    //    developer(s).
    @Test
    public void readAttributeWithDefault_GOOD_version() {
        Element e = getXmlElementWithAttribute("what");

        String actualValue = xmlReader.attributeWithDefault(e, "attribute", "default");
        String defaultValue = xmlReader.attributeWithDefault(e, "other", "default");

        assertEquals(actualValue, "what");
        assertEquals(defaultValue, "default");
    }

    // Creates a suitable XML document, or reads one from an XML file/string;
    // either way, in practice this code would be reused in several tests.
    Element getXmlElementWithAttribute(String attributeValue) {
        DocumentBuilder dom;
        try { dom = DocumentBuilderFactory.newInstance().newDocumentBuilder(); }
        catch (ParserConfigurationException e) { throw new RuntimeException(e); }

        Element e = dom.newDocument().createElement("tag");
        e.setAttribute("attribute", attributeValue);
        return e;
    }
}

How do I unit test an implementation detail like caching

So I have a class with a method as follows:
public class SomeClass
{
    ...
    private SomeDependency m_dependency;

    public int DoStuff()
    {
        int result = 0;
        ...
        int someValue = m_dependency.GrabValue();
        ...
        return result;
    }
}
And I've decided that rather than call m_dependency.GrabValue() each time, I really want to cache the value in memory (i.e. in this class), since we're going to get the same value each time anyway (the dependency goes off and grabs some data from a table that hardly ever changes).
I've run into problems however trying to describe this new behaviour in a unit test. I've tried the following (I'm using NUnit with RhinoMocks):
[Test]
public void CacheThatValue()
{
    var depend = MockRepository.GenerateMock<SomeDependency>();
    depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1); // expect exactly one call

    var sut = new SomeClass(depend);
    int result = sut.DoStuff();
    result = sut.DoStuff();

    depend.VerifyAllExpectations();
}
This however doesn't work; this test passes even without introducing any changes to the functionality. What am I doing wrong?
I see caching as orthogonal to Do(ing)Stuff. I would find a way to pull the caching logic outside of the method, either by changing SomeDependency or wrapping it somehow (I now have a cool idea for a caching class based around lambda expressions -- yum).
That way your tests for DoStuff don't need to change, you only need to make sure they work with the new wrapper. Then you can test the caching functionality of SomeDependency, or its wrapper, independently. With well-architected code putting a caching layer in place should be rather easy and neither your dependency nor your implementation should know the difference.
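For illustration, a minimal sketch of such a lambda-based caching wrapper; the CachedValue<T> name is mine, not from the answer (on .NET 4+, Lazy<T> gives you much the same thing out of the box):
public class CachedValue<T>
{
    private readonly Func<T> _fetch;
    private bool _hasValue;
    private T _value;

    public CachedValue(Func<T> fetch) { _fetch = fetch; }

    public T Value
    {
        get
        {
            if (!_hasValue)
            {
                _value = _fetch();   // hits the dependency only once
                _hasValue = true;
            }
            return _value;
        }
    }
}
Wired up as new CachedValue<int>(() => m_dependency.GrabValue()), the caching behavior becomes testable on its own, and DoStuff() simply reads Value.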
Unit tests shouldn't be testing implementation, they should test behavior. At the same time, the subject under test should have a narrowly-defined set of behavior.
To answer your question, you are using a Dynamic Mock and the default behavior is to allow any call that isn't configured. The additional calls are just returning "0". You need to set up an expectation that no more calls are made on the dependency:
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
depend.Expect(d => d.GrabValue()).Repeat.Never();
You may need to enter record/replay mode to get it to work properly.
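Alternatively, a hedged sketch in RhinoMocks' AAA style that makes the extra call fail; it assumes GrabValue is virtual (or SomeDependency is an interface):
var depend = MockRepository.GenerateMock<SomeDependency>();
depend.Stub(d => d.GrabValue()).Return(1);

var sut = new SomeClass(depend);
sut.DoStuff();
sut.DoStuff();

// Fails until DoStuff() actually caches, i.e. while two calls still
// reach the dependency.
depend.AssertWasCalled(d => d.GrabValue(), o => o.Repeat.Once());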
This seems like a case for "tests drive the design". If caching is an implementation detail of SomeDependency - and therefore can't be directly tested - then probably some of its functionality (specifically, its caching behavior) needs to be exposed - and since it's not natural to expose it within SomeDependency, it needs to be exposed in another class (let's call it "Cache"). In Cache, of course, the behavior is contractual - public, and thereby testable.
So the tests - and the smells - are telling us we need a new class. Test-Driven Design. Ain't it great?
