I want to test GetParameters() and assert that the value it builds contains "test=". Unfortunately the method responsible for this is private. Is there a way I can provide test coverage for it? The problem I have is highlighted below:
if (info.TagGroups != null)
The problem is that info.TagGroups equals null in my test
Thanks,
Test
[Test]
public void TestGetParameters()
{
var sb = new StringBuilder();
_renderer.GetParameters(sb);
var res = sb.ToString();
Assert.IsTrue(res.IndexOf("test=") > -1, "blabla");
}
Implementation class to test
internal void GetParameters(StringBuilder sb)
{
if (_dPos.ArticleInfo != null)
{
var info = _dPos.ArticleInfo;
AppendTag(sb, info);
}
}
private static void AppendTag(StringBuilder sb, ArticleInfo info)
{
if (info.TagGroups != null) // PROBLEM - TagGroups in test equals null
{
foreach (var tGroups in info.TagGroups)
{
foreach (var id in tGroups.ArticleTagIds)
{
sb.AppendFormat("test={0};", id);
}
}
}
}
Testing private methods directly (as opposed to indirectly through your class's public API) is generally a bad idea. However, if you need to, you can do this via reflection.
A better choice would be to refactor your code to be more testable. There are any number of ways to do this; here's one option:
It looks like the logic you want to test is in AppendTag. This is a static method, and doesn't modify any state, so why not make it public and callable by tests directly?
In the same way, you could also make GetParameters public static by giving it an additional ArticleInfo parameter.
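For illustration, a minimal sketch of that refactoring might look like this (ParameterRenderer is an invented class name; ArticleInfo, TagGroups and ArticleTagIds are the types from the question):

using System.Text;

public static class ParameterRenderer
{
    public static void GetParameters(StringBuilder sb, ArticleInfo info)
    {
        if (info != null)
        {
            AppendTag(sb, info);
        }
    }

    // Public now, so tests can exercise the tag logic directly.
    public static void AppendTag(StringBuilder sb, ArticleInfo info)
    {
        if (info.TagGroups != null)
        {
            foreach (var tagGroup in info.TagGroups)
            {
                foreach (var id in tagGroup.ArticleTagIds)
                {
                    sb.AppendFormat("test={0};", id);
                }
            }
        }
    }
}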
You can make internals visible to your testing project: in AssemblyInfo use InternalsVisibleToAttribute.
DISCLAIMER:
Testing private/internal methods is bad! It's not TDD and in most cases this is a sign to stop and rethink design. BUT in case you're forced to do this (legacy code that should be tested) - you can use this approach.
Yes, you just need to inject appropriate contents of _dPos.ArticleInfo into your class. This way you can ensure that all paths in the private method are covered. Another possibility is to rewrite this simple code using TDD - that will give you nearly 100% coverage.
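For example, a sketch of such a test (assuming ArticleInfo and its tag types can be constructed directly, and that the renderer's dependency can be handed in; the object shapes and the CreateRendererWith helper are assumptions, not the question's actual API):

[Test]
public void TestGetParameters_WithTagGroups()
{
    // Arrange: an ArticleInfo whose TagGroups is populated, so the
    // null check inside AppendTag passes.
    var info = new ArticleInfo
    {
        TagGroups = new[]
        {
            new TagGroup { ArticleTagIds = new[] { 42 } } // hypothetical shape
        }
    };
    _renderer = CreateRendererWith(info); // hypothetical helper that sets _dPos.ArticleInfo

    var sb = new StringBuilder();
    _renderer.GetParameters(sb);

    Assert.IsTrue(sb.ToString().Contains("test=42"));
}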
Also note that, in general, you shouldn't really care about how the private method works. As long as your class exposes the desired behavior correctly, everything is fine. Thinking too much about internals during testing couples your tests with the details of your implementation, and this makes tests fragile and hard to maintain.
See this and this for example, on why not to test implementation details.
UPDATE:
I really don't get why people always go straight to the InternalsVisibleTo approach and other minor arguments encouraging testing of private methods. I'm not saying it's always bad, but most of the time it is. The fact that some exceptional cases force you to test private methods doesn't mean this should be advised in general. So yes, do present a solution that makes testing implementation details possible, but only after you've described what the valid approach is in general. There are tons of blog posts and SO questions on this matter; why do we have to go through this over and over again? (Rhetorical question.)
In your properties/assemblyinfo.cs add:
[assembly: InternalsVisibleTo("**Insert your test project here**")]
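For completeness, the attribute lives in System.Runtime.CompilerServices, so the file ends up looking roughly like this (the test-project name is a placeholder you must replace):

// Properties/AssemblyInfo.cs of the assembly under test
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Tests")] // replace with your test assembly's name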
EDIT
Some people like testing internals and private methods and others don't. I personally prefer it, and therefore I use InternalsVisibleTo. However, internals are internal for a reason, and this should be used with caution.
Related
I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit-tests that test specific methods.
for example:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis(){...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests for me.
Is there a way to prevent such cases to appear after renaming? Like changing ReSharper settings or following a better unit-test naming convention etc. ?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
public class GetCity
{
public void TakesParidId_ReturnsParis()
{
//...
}
// More GetCity tests
}
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes,
you'll only have to change the name of the inner class instead of all
the individual tests.
I also started with this convention, but ended up feeling that it is not very good. Now I use BDD-styled names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
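For example, the question's test in that style (CityGetter is a hypothetical name for the class under test):

[Test]
public void should_return_Paris_for_ParisID()
{
    var cityGetter = new CityGetter();
    var city = cityGetter.GetCity("paris-id");
    Assert.AreEqual("Paris", city.Name);
}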
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interface names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style way, we're forced to think about what we want from the object a lot earlier in the process, and it becoming the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();
    var result = cityObtainer.GetCity("paris");
    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should have to go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum: if we're writing a test with TDD in mind, then presumably something with a significant amount of logic is going on inside GetCity(). Thinking about the test as being against a contract helps us to separate the intention from the implementation - the test will stay valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using at least xUnit).
First create an attribute FactFor that extends the XUnit Fact.
using System.Runtime.CompilerServices;
using Xunit;

public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
public int Just2() => 2;
}
public class ATests
{
[FactFor(nameof(A.Just2))]
public void Should_Return2()
{
var a = new A();
a.Just2().Should().Be(2);
}
}
The result is that the test appears in the runner under the display name Just2_Should_Return2, and because of nameof, renaming A.Just2 with a refactoring tool updates the test attribute automatically.
I have a method in a class for which there are a few different outcomes (based upon event responses etc.). But this is a single atomic function which is to be used by other applications.
I have broken down the main blocks of the functionality that comprise this function into different functions and successfully taken a Test Driven Development approach to the functionality of each of these elements. These elements, however, aren't exposed for other applications to use.
And so my question is: how can/should I easily approach a TDD-style solution to verifying that the single method that should be called does function correctly, without a lot of duplication in testing or lots of setup required for each test?
I have considered / looked at moving the blocks of functionality into a different class and using mocking to simulate the responses of the functions used, but it doesn't feel right, and the individual methods need to write to variables within the main class (it felt really Heath Robinson).
The code roughly looks like this (I have removed a lot of parameters to make things clearer, along with a fair bit of irrelevant code).
public void MethodToTest(string parameter)
{
IResponse x = null;
if (function1(parameter))
{
if (!function2(parameter,out x))
{
function3(parameter, out x);
}
}
// ...
// more bits of code here
// ...
if (x != null)
{
x.Success();
}
}
I think you would make your life easier by avoiding the out keyword, and re-writing the code so that the functions either check some condition on the response, OR modify the response, but not both. Something like:
public void MethodToTest(string parameter)
{
IResponse x = null;
if (function1(parameter))
{
if (!function2Check(parameter, x))
{
x = function2Transform(parameter, x);
x = function3(parameter, x);
}
}
// ...
// more bits of code here
// ...
if (x != null)
{
x.Success();
}
}
That way you can start pulling apart and recombining the pieces of your large method more easily, and in the end you should have something like:
public void MethodToTest(string parameter)
{
IResponse x = ResponseBuilder.BuildResponse(parameter);
if (x != null)
{
x.Success();
}
}
... where BuildResponse is where all your current tests will live, and the test for MethodToTest can now simply mock the ResponseBuilder.
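For instance, a sketch of that final test with the builder extracted behind an interface and injected (Moq syntax is assumed here, and ClassUnderTest with its constructor is hypothetical; any mocking framework would do):

public interface IResponseBuilder
{
    IResponse BuildResponse(string parameter);
}

[Test]
public void MethodToTest_CallsSuccessOnTheBuiltResponse()
{
    var response = new Mock<IResponse>();
    var builder = new Mock<IResponseBuilder>();
    builder.Setup(b => b.BuildResponse("input")).Returns(response.Object);

    var sut = new ClassUnderTest(builder.Object); // hypothetical constructor injection

    sut.MethodToTest("input");

    response.Verify(r => r.Success(), Times.Once());
}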
Your best option would indeed be mocking function1, function2, function3, etc. If you cannot move your functions to a separate class, you could look into using nested classes to move the functions into; they are able to access the data in the outer class. After that you should be able to use mocks in place of the nested classes for testing purposes; a sketch of this idea follows the update below.
Update: From looking at your example code I think you could get some inspiration by looking into the visitor pattern and ways of testing that, it might be appropriate.
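A rough sketch of that nested-class idea (all names here are invented): the steps sit behind an interface, the real implementation is a nested class that can reach the outer instance's state, and tests substitute a mock through an internal constructor.

public class Worker
{
    // The seam that tests can replace with a mock.
    internal interface ISteps
    {
        bool Function1(string parameter);
        bool Function2(string parameter, out IResponse response);
    }

    // A nested class can access the outer instance's private fields
    // through the reference it is given.
    private class DefaultSteps : ISteps
    {
        private readonly Worker _outer;
        public DefaultSteps(Worker outer) { _outer = outer; }

        public bool Function1(string parameter)
        {
            return _outer != null; // placeholder for real logic using the outer state
        }

        public bool Function2(string parameter, out IResponse response)
        {
            response = null; // placeholder
            return false;
        }
    }

    private readonly ISteps _steps;

    public Worker() { _steps = new DefaultSteps(this); }
    internal Worker(ISteps steps) { _steps = steps; } // test hook

    public void MethodToTest(string parameter)
    {
        if (_steps.Function1(parameter)) { /* ... as in the question ... */ }
    }
}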
In this case I think you would just mock the method calls as you mentioned.
Typically you would write your test first, and then write the method in such a way that all of the tests pass. I've noticed that when you do it this way, the code that's written is very clean and to the point. Also, each class is very good about only having a single responsibility that can easily be tested.
I don't know what's wrong, but something doesn't smell right, and I think there may be a more elegant way to do what you're doing.
IMHO, you have a couple options here:
1. Break the inner functions out into a different class so you can mock them and verify that they are called (which you already mentioned).
2. It sounds like the other methods you created are private methods, and that this is the only public interface into those methods. If so, you should be running those test cases through this function and verifying the results (you said that those private methods modify variables of the class) instead of testing private methods. If that is too painful, then I would consider reworking your design.
It looks to me like this class is trying to do more than one thing. For example, the first function doesn't return a response but the other two do. In your description you said the function is complex and takes a lot of parameters. Those are both signs that you need to refactor your design.
Is there a way to know when code is being called from running a test method?
bool MyMethod()
{
if ( /* are we running a test? */ )
{
return true; // otherwise this will fail from the automated build script
}
else
{
// run the proper code
}
}
and please spare me the "this is a really bad idea" comments :)
You recognize this may be a bad idea, but you may not be aware of the alternatives. You should look into mocking frameworks - they can provide a way to inject alternative implementations into your code at runtime in your unit tests. This is an established practice to allow code to behave differently in test and production.
On to the main question.
This depends on the unit test framework you are using. I'm not aware of a specific feature in NUnit (for example) that reports you are in a test. Other frameworks (like MS Test) may very well provide this. However, it's easy to do yourself.
If you're binding to source code, just use a #define directive to conditionally define some variable you can branch on.
If you're binding to a library, I would recommend creating your own class to track whether you are in a unit test or real code, and setting it up in the TestFixture setup method to indicate that you are running a test. You could also use Environment.SetEnvironmentVariable as a way of avoiding writing a special class.
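For example, a sketch of that tracking class (all names invented; NUnit attributes shown):

// Production code can branch on this flag; nothing sets it outside tests.
public static class RuntimeContext
{
    public static bool IsRunningTests { get; set; }
}

// Runs once before any test in the assembly executes.
[SetUpFixture]
public class TestRunMarker
{
    [OneTimeSetUp]
    public void MarkTestRun()
    {
        RuntimeContext.IsRunningTests = true;
    }
}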
Ok, I'll spare you my "this is a really bad idea" comment.
You can just give the method a parameter bool isTestMode.
The approach outlined in your question is a really bad idea.
A cleaner approach if you want a method to behave differently under test than otherwise, is to refactor out an interface with the method, and inject a different implementation at test time.
E.g.:
// Interface whose implementation changes under testing
public interface IChangesUnderTest
{
void DoesSomething();
}
// Inject this in production
public class ProductionCode : IChangesUnderTest
{
    public void DoesSomething() { /* Does stuff */ }
}

// Inject this under test
public class TestCode : IChangesUnderTest
{
    public void DoesSomething() { /* Does something else */ }
}
You could check the attributes of the calling methods via reflection! I will not mention that this is a bad idea, because you already know it, as I can see ;)
Here's my suggestion.
Instead of checking the context, just add in some conditional debug code:
#if DEBUG
/* assume you are running in a test environment here */
#endif
This is almost not bad. And if it isn't exactly what you need, you can look into #define-ing "TEST" for when you might want your test code to not execute during regular debugging.
If you use a test framework and know the test assembly's name, you can do the following:
bool IsInUnitTest = AppDomain.CurrentDomain.GetAssemblies()
.Any(a => a.FullName.StartsWith(testAssemblyName));
To be more specific, if you use NUnit:
bool IsRunningFromNUnit =
AppDomain.CurrentDomain.GetAssemblies().Any(
a=> a.FullName.ToLowerInvariant().StartsWith("nunit.framework"));
If you use MSTest, replace testAssemblyName with the corresponding assembly name, probably Microsoft.VisualStudio.QualityTools.UnitTestFramework.
I just started using mock objects (using Java's mockito) in my tests recently. Needless to say, they simplified the set-up part of the tests, and along with Dependency Injection, I would argue it made the code even more robust.
However, I have found myself slipping into testing against the implementation rather than the specification. I ended up setting up expectations that I would argue are not part of the tests. In more technical terms, I was testing the interaction between the SUT (the class under test) and its collaborators, even though such dependency isn't part of the contract or the interface of the class!
Consider that you have the following:
When dealing with an XML node, suppose that you have a method, attributeWithDefault(), that returns the attribute value of the node if it's available, and otherwise returns a default value.
I would setup the test like the following:
Element e = mock(Element.class);
when(e.getAttribute("attribute")).thenReturn("what");
when(e.getAttribute("other")).thenReturn(null);
assertEquals(attributeWithDefault(e, "attribute", "default"), "what");
assertEquals(attributeWithDefault(e, "other", "default"), "default");
Well, here not only did I test that attributeWithDefault() adheres to the specification, but I also tested the implementation, as I required it to use Element.getAttribute(), instead of Element.getAttributeNode().getValue() or Element.getAttributes().getNamedItem().getNodeValue(), etc.
I assume that I am going about it in the wrong way, so any tips on how I can improve my usage of mocks and best practices will be appreciated.
EDIT:
What's wrong with the test
I made the assumption above that the test is a bad style, here is my rationale.
The specification doesn't specify which method gets called. A client of the library shouldn't care how the attribute is retrieved, for example, as long as it is done correctly. The implementor should have free rein to use any of the alternative approaches, in any way he sees fit (with respect to performance, consistency, etc.). It's the specification of Element that ensures that all these approaches return identical values.
It doesn't make sense to re-factor Element into a single-method interface with getElement() (Go is quite nice about this, actually). For ease of use, a client of the method should be able to just use the standard Element from the standard library. Having extra interfaces and new classes is just plain silly, IMHO, as it makes the client code ugly, and it's not worth it.
Assuming the spec stays as-is and the test stays as-is, a new developer may decide to refactor the code to use a different approach to accessing the state, and cause the test to fail! And a test that fails while the actual implementation still adheres to the specification is not a valid test.
Having a collaborator expose state in multiple formats is quite common. A specification and its test shouldn't depend on which particular approach is taken; only the implementation should!
This is a common issue in mock testing, and the general mantra to get away from this is:
Only mock types you own.
Here if you want to mock collaboration with an XML parser (not necessarily needed, honestly, as a small test XML should work just fine in a unit context) then that XML parser should be behind an interface or class that you own that will deal with the messy details of which method on the third party API you need to call. The main point is that it has a method that gets an attribute from an element. Mock that method. This separates implementation from design. The real implementation would have a real unit test that actually tests you get a successful element from a real object.
Mocks can be a nice way of saving boilerplate setup code (acting essentially as Stubs), but that isn't their core purpose in terms of driving design. Mocks are testing behavior (as opposed to state) and are not Stubs.
I should add that when you use Mocks as stubs, they look like your code. Any stub has to make assumptions about how you are going to call it that are tied to your implementation. That is normal. Where it is a problem is if that is driving your design in bad ways.
When designing unit tests you will always effectively test your implementation, and not some abstract specification. Or one can argue that you will test the "technical specification", which is the business specification extended with technical details. There is nothing wrong with this. Instead of testing that:
My method will return a value if defined or a default.
you are testing:
My method will return a value if defined or a default provided that the xml Element supplied will return this attribute when I call getAttribute(name).
The only solution I can see for you here (and I have to admit I'm not familiar with the library you're using) is to create a mock element that has all of the functionality included, that is, one that also lets you set the value of getAttributeNode().getValue() and getAttributes().getNamedItem().getNodeValue().
But, assuming they're all equivalent, it's fine to just test one. It's when it varies that you need to test all cases.
I don't find anything wrong with your use of the mocks. What you are testing is the attributeWithDefault() method and its implementation, not whether Element is correct or not. So you mocked Element in order to reduce the amount of setup required. The test ensures that the implementation of attributeWithDefault() fits the specification; naturally there needs to be some specific implementation that can be run for the test.
You're effectively testing your mock object here.
If you want to test the attributeWithDefault() method, you must assert that e.getAttribute() gets called with the expected argument and forget about the return value. This return value only verifies the setup of your mock object.
(I don't know exactly how this is done with Java's mockito; I'm a pure C# guy...)
It depends on whether getting the attribute via calling getAttribute() is part of the specification, or if it is an implementation detail that might change.
If Element is an interface, then the fact that you get the attribute via getAttribute() is probably part of the interface contract. So your test is fine.
If Element is a concrete class, but attributeWithDefault should not be aware of how you can get the attribute, then maybe there is an interface waiting to appear here:
public interface AttributeProvider {
// Might return null
public String getAttribute(String name);
}
public class Element implements AttributeProvider {
public String getAttribute(String name) {
return getAttributeHolder().doSomethingReallyTricky().toString();
}
}
public class Whatever {
    public String attributeWithDefault(AttributeProvider p, String name, String defaultValue) {
        String res = p.getAttribute(name);
        if (res == null) {
            return defaultValue;
        }
        return res;
    }
}
You would then test attributeWithDefault against a Mock AttributeProvider instead of an Element.
Of course in this situation it would probably be overkill, and your test is probably just fine even with an implementation (you will have to test it somewhere anyway ;) ). However, this kind of decoupling might be useful if the logic ever gets any more complicated, either in getAttribute or in attributeWithDefault.
Hoping this helps.
It seems to me that there are 3 things you want to verify with this method:
It gets the attribute from the right place (Element.getAttribute())
If the attribute is not null, it is returned
If the attribute is null, the string "default" is returned
You're currently verifying #2 and #3, but not #1. With mockito, you could verify #1 by adding
verify(e.getAttribute("attribute"));
verify(e.getAttribute("other"));
Which ensures that the methods are actually getting called on your mock. Admittedly this is a little clunky in mockito. In easymock, you'd do something like:
expect(e.getAttribute("attribute")).andReturn("what");
expect(e.getAttribute("default")).andReturn(null);
It has the same effect, but I think makes your test a bit easier to read.
If you are using dependency injection then the collaborators should be part of the contract. You need to be able to inject all collaborators in through the constructor or a public property.
Bottom line: if you have a collaborator that you're newing up instead of injecting, then you probably need to refactor the code. This is the change of mindset necessary for testing/mocking/injecting.
This is a late answer, but it takes a different viewpoint from the other ones.
Basically, the OP is right in thinking the test with mocking is bad, for the reasons he stated in the question. Those saying that mocks are ok have not provided good reasons for it, IMO.
Here is a complete version of the test, in two versions: one with mocking (the BAD one) and another without (the GOOD one). (I took the liberty of using a different mocking library, but that doesn't change the point.)
import javax.xml.parsers.*;
import org.w3c.dom.*;
import org.junit.*;
import static org.junit.Assert.*;
import mockit.*;
public final class XmlTest
{
// The code under test, embedded here for convenience.
public static final class XmlReader
{
public String attributeWithDefault(
Element xmlElement, String attributeName, String defaultValue
) {
String attributeValue = xmlElement.getAttribute(attributeName);
return attributeValue == null || attributeValue.isEmpty() ?
defaultValue : attributeValue;
}
}
@Tested XmlReader xmlReader;
// This test is bad because:
// 1) it depends on HOW the method under test is implemented
// (specifically, that it calls Element#getAttribute and not some other method
// such as Element#getAttributeNode) - it's therefore refactoring-UNSAFE;
// 2) it depends on the use of a mocking API, always a complex beast which takes
// time to master;
// 3) use of mocking can easily end up in mock behavior that is not real, as
// actually occurred here (specifically, the test records Element#getAttribute
// as returning null, which it would never return according to its API
// documentation - instead, an empty string would be returned).
@Test
public void readAttributeWithDefault_BAD_version(@Mocked final Element e) {
new Expectations() {{
e.getAttribute("attribute"); result = "what";
// This is a bug in the test (and in the CUT), since Element#getAttribute
// never returns null for real.
e.getAttribute("other"); result = null;
}};
String actualValue = xmlReader.attributeWithDefault(e, "attribute", "default");
String defaultValue = xmlReader.attributeWithDefault(e, "other", "default");
assertEquals(actualValue, "what");
assertEquals(defaultValue, "default");
}
// This test is better because:
// 1) it does not depend on how the method under test is implemented, being
// refactoring-SAFE;
// 2) it does not require mastery of a mocking API and its inevitable intricacies;
// 3) it depends only on reusable test code which is fully under the control of the
// developer(s).
@Test
public void readAttributeWithDefault_GOOD_version() {
Element e = getXmlElementWithAttribute("what");
String actualValue = xmlReader.attributeWithDefault(e, "attribute", "default");
String defaultValue = xmlReader.attributeWithDefault(e, "other", "default");
assertEquals(actualValue, "what");
assertEquals(defaultValue, "default");
}
// Creates a suitable XML document, or reads one from an XML file/string;
// either way, in practice this code would be reused in several tests.
Element getXmlElementWithAttribute(String attributeValue) {
DocumentBuilder dom;
try { dom = DocumentBuilderFactory.newInstance().newDocumentBuilder(); }
catch (ParserConfigurationException e) { throw new RuntimeException(e); }
Element e = dom.newDocument().createElement("tag");
e.setAttribute("attribute", attributeValue);
return e;
}
}
So I have a class with a method as follows:
public class SomeClass
{
...
private SomeDependency m_dependency;
public int DoStuff()
{
int result = 0;
...
int someValue = m_dependency.GrabValue();
...
return result;
}
}
And I've decided that rather than to call m_dependency.GrabValue() each time, I really want to cache the value in memory (i.e. in this class) since we're going to get the same value each time anyway (the dependency goes off and grabs some data from a table that hardly ever changes).
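For reference, the change being described amounts to something like this sketch (a nullable int is one simple way to remember whether the value has been fetched):

private SomeDependency m_dependency;
private int? m_cachedValue; // null until the first fetch

public int DoStuff()
{
    int result = 0;
    // ...
    if (!m_cachedValue.HasValue)
    {
        m_cachedValue = m_dependency.GrabValue();
    }
    int someValue = m_cachedValue.Value;
    // ...
    return result;
}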
I've run into problems however trying to describe this new behaviour in a unit test. I've tried the following (I'm using NUnit with RhinoMocks):
[Test]
public void CacheThatValue()
{
var depend = MockRepository.GenerateMock<SomeDependency>();
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
var sut = new SomeClass(depend);
int result = sut.DoStuff();
result = sut.DoStuff();
depend.VerifyAllExpectations();
}
This however doesn't work; this test passes even without introducing any changes to the functionality. What am I doing wrong?
I see caching as orthogonal to Do(ing)Stuff. I would find a way to pull the caching logic outside of the method, either by changing SomeDependency or wrapping it somehow (I now have a cool idea for a caching class based around lambda expressions -- yum).
That way your tests for DoStuff don't need to change, you only need to make sure they work with the new wrapper. Then you can test the caching functionality of SomeDependency, or its wrapper, independently. With well-architected code putting a caching layer in place should be rather easy and neither your dependency nor your implementation should know the difference.
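A sketch of what such a lambda-based caching wrapper could look like (invented names; .NET's built-in Lazy<T> covers much of the same ground):

public class CachedValue<T>
{
    private readonly Func<T> m_fetch;
    private bool m_hasValue;
    private T m_value;

    public CachedValue(Func<T> fetch)
    {
        m_fetch = fetch;
    }

    public T Value
    {
        get
        {
            if (!m_hasValue)
            {
                m_value = m_fetch();   // hits the dependency only once
                m_hasValue = true;
            }
            return m_value;
        }
    }
}

// Usage: var cached = new CachedValue<int>(() => m_dependency.GrabValue());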
Unit tests shouldn't be testing implementation, they should test behavior. At the same time, the subject under test should have a narrowly-defined set of behavior.
To answer your question, you are using a Dynamic Mock and the default behavior is to allow any call that isn't configured. The additional calls are just returning "0". You need to set up an expectation that no more calls are made on the dependency:
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
depend.Expect(d => d.GrabValue()).Repeat.Never();
You may need to enter record/replay mode to get it to work properly.
This seems like a case for "tests drive the design". If caching is an implementation detail of SomeClass - and therefore can't be directly tested - then probably some of its functionality (specifically, its caching behavior) needs to be exposed - and since it's not natural to expose it within SomeClass, it needs to be exposed in another class (let's call it "Cache"). In Cache, of course, the behavior is contractual - public, and thereby testable.
So the tests - and the smells - are telling us we need a new class. Test-Driven Design. Ain't it great?
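And once the caching lives in its own class, its behavior is public and trivially testable, e.g. (reusing the CachedValue sketch from the earlier answer):

[Test]
public void ValueIsFetchedOnlyOnce()
{
    int fetchCount = 0;
    var cached = new CachedValue<int>(() => { fetchCount++; return 1; });

    Assert.AreEqual(1, cached.Value);
    Assert.AreEqual(1, cached.Value);
    Assert.AreEqual(1, fetchCount); // the underlying fetch ran exactly once
}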