I have been battling with writing some unit test for some of our code and I am struggling with this one:
We have a method that we overloaded and it is like this:
public Client GetClient(int productID)
{
    // some SQL that evaluates a client
    if (!GetClient(clientRef, ClientTypeRef))
        return Client.UnknownClient;

    // some other SQL and code
    return Client.CustomerClient;
}
The problem is how to approach this: in my test I tried to mock GetClient(clientRef, ClientTypeRef) and return an OK client (anything other than Client.UnknownClient) to allow the method to continue, but I am getting a NullReferenceException. Is it possible to mock and test such methods, and how would I continue with this?
One of the reasons unit testing has become so popular is that it encourages SOLID design principles.
Based on the little piece of code you've included, I think your difficulty may come from the fact that your GetClient(int productID) has too many responsibilities so you're struggling to test them separately.
The way I understand it, you are:
loading a client from the DB (by product Id?)
checking (with complex logic involving more queries to the db?) what kind of client it is
Rather than mocking GetClient(clientRef,ClientTypeRef) so that you can test every logical path under it, I would encourage you (if possible) to try and refactor your code, so that you separate the loading of the client from the checking of the client's type and possibly also some of the logic you group under //some other sql and codes.
That should make it easier to test every piece separately, but above all would make your code more maintainable.
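To make that concrete, here is a rough sketch of one possible separation. All the names here (IClientRepository, ClientService, LoadClientByProduct, IsKnownClient) are hypothetical - I'm guessing at your types from the snippet:

public interface IClientRepository
{
    Client LoadClientByProduct(int productId);            // the first SQL lookup
    bool IsKnownClient(int clientRef, int clientTypeRef); // the type check
}

public class ClientService
{
    private readonly IClientRepository _repository;

    public ClientService(IClientRepository repository)
    {
        _repository = repository;
    }

    public Client GetClient(int productId)
    {
        var client = _repository.LoadClientByProduct(productId);
        if (client == null || !_repository.IsKnownClient(client.ClientRef, client.ClientTypeRef))
            return Client.UnknownClient;

        // the "other sql and codes" would move behind the repository too
        return Client.CustomerClient;
    }
}

Each piece can now be tested on its own: the repository against a database (integration tests) and the service logic against a mocked repository (unit tests).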
What mocking framework are you using?
I use Moq; to mock this method you would do something like:
_mockClientRepository.Setup(x => x.GetClient(It.IsAny<int>(),It.IsAny<int>())).Returns(Client.CustomerClient);
Basically this means that the method will return Client.CustomerClient for any pair of integers passed to it.
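For completeness, here is roughly how that setup wires into a whole test. A common cause of the NullReferenceException you describe is forgetting to pass mock.Object into the system under test, or having the method call a private/non-virtual overload on itself that the mock can't intercept - the mock only takes effect if the call goes through the injected dependency. IClientRepository and ClientService are hypothetical names standing in for your real types:

var _mockClientRepository = new Mock<IClientRepository>();
_mockClientRepository
    .Setup(x => x.GetClient(It.IsAny<int>(), It.IsAny<int>()))
    .Returns(Client.CustomerClient);

var sut = new ClientService(_mockClientRepository.Object); // inject the mock's Object, not the Mock<T> wrapper
var result = sut.GetClient(42);

Assert.AreEqual(Client.CustomerClient, result);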
I am abstracting the history tracking portion of a class of mine so that it looks like this:
private readonly Stack<MyObject> _pastHistory = new Stack<MyObject>();
internal virtual Boolean IsAnyHistory { get { return _pastHistory.Any(); } }
internal virtual void AddObjectToHistory(MyObject myObject)
{
if (myObject == null) throw new ArgumentNullException("myObject");
_pastHistory.Push(myObject);
}
internal virtual MyObject RemoveLastObject()
{
if(!IsAnyHistory) throw new InvalidOperationException("There is no previous history.");
return _pastHistory.Pop();
}
My problem is that I would like to unit test that Remove will return the last Added object.
AddObjectToHistory
RemoveLastObject -> returns what was put in via AddObjectToHistory
However, it isn't really a unit test if I have to call Add first, is it? The only way I can see to do this in a true unit-test fashion is to pass the Stack object in through the constructor, OR mock out IsAnyHistory... but mocking my SUT is odd too. So, my question is: from a dogmatic view, is this a unit test? If not, how do I clean it up - is constructor injection my only way? It just seems like a stretch to have to pass in such a simple object. Is it OK to push even this simple object out to be injected?
There are two approaches to those scenarios:
Interfere with the design, e.g. by making _pastHistory internal/protected or by injecting the stack
Use other (possibly already unit tested) methods to perform the verification
As always, there is no golden rule, although I'd say you generally should avoid situations where unit tests force design changes (as those changes will most likely introduce ambiguity/unnecessary questions to code consumers).
Nonetheless, in the end it is you who has to weigh how much you want unit-test code to interfere with your design (first case) against how far you are willing to bend the strict definition of a unit test (second case).
Usually, I find the second case much more appealing - it doesn't clutter the original class code, and you'll most likely have Add tested already, so it's safe to rely on it.
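As a sketch of that second approach (assuming NUnit, and assuming the class owning the stack is called HistoryTracker - the real name isn't shown in the question):

[Test]
public void RemoveLastObject_ReturnsTheLastAddedObject()
{
    // Arrange - rely on the already-tested Add to get into the right state
    var tracker = new HistoryTracker();
    var expected = new MyObject();
    tracker.AddObjectToHistory(expected);

    // Act
    var actual = tracker.RemoveLastObject();

    // Assert
    Assert.AreSame(expected, actual);
}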
I think it's still a unit test, assuming MyObject is a simple object. I often construct input parameters to unit test methods.
I use Michael Feathers' unit test criteria:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
My 2 cents... how would the client know if remove worked or not? How is a 'client' supposed to interact with this object? Are clients going to push in a stack to the history tracker? Treat the test as just another user/consumer/client of the test subject.. using exactly the same interaction as in real production.
I haven't heard of any rule stating that you're not allowed to call multiple methods on the object under test.
To simulate a non-empty stack, I'd just call Add - the 99% case. I'd refrain from destroying the encapsulation of that object.. Treat objects like people (I think I read that in Object Thinking). Tell them to do stuff.. don't break and enter.
e.g. if you want someone to have some money in their wallet, there are two options:
Option 1: give them the money and let them internally put it into their wallet.
Option 2: throw their wallet away and stuff a pre-filled wallet into their pocket.
I like Option 1. Also see how it frees you from implementation details (which induce brittleness in tests). Let's say tomorrow the person decides to use an online wallet: the latter approach will break your tests - they will need to be updated to push in an online wallet now - even though the object's behavior is not broken.
Another example I've seen is for testing Repository.GetX(), where people break in to the DB to inject records with SQL in the unit test.. when it would have been considerably cleaner and easier to call Repository.AddX(x) first. Isolation is desired, but not to the extent that it overrides pragmatism.
I hope I didn't come on too strong here.. it just pains me to see object APIs being 'contorted for testability' to the point where it no longer resembles the 'simplest thing that could work'.
I think you're trying to be a little overly specific with your definition of a unit test. You should be testing the public behavior of your class, not the minute implementation details.
From your code snippet, it looks like all you really need to care about is whether a) calling AddObjectToHistory causes IsAnyHistory to return true and b) RemoveLastObject eventually causes IsAnyHistory to return false.
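A minimal sketch of those two behavioural tests (again assuming a hypothetical HistoryTracker class that owns the stack):

[Test]
public void IsAnyHistory_TracksAddsAndRemoves()
{
    var tracker = new HistoryTracker();
    Assert.IsFalse(tracker.IsAnyHistory);

    tracker.AddObjectToHistory(new MyObject());
    Assert.IsTrue(tracker.IsAnyHistory);    // (a) adding makes history visible

    tracker.RemoveLastObject();
    Assert.IsFalse(tracker.IsAnyHistory);   // (b) removing the only object clears it
}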
As stated in the other answers I think your options can be broken down like so.
You take a dogmatic approach to your testing methodology and add constructor injection for the stack object so you can inject your own fake stack object and test your methods.
You write separate tests for add and remove; the remove test will use the add method, but consider that part of the test setup. As long as your add test passes, your remove test can safely rely on it.
I'm trying to get into unit testing with NUnit. At the moment, I'm writing a simple test to get used to the syntax and the way of unit testing. But I'm not sure if I'm doing it right with the following test:
The class under test holds a list of strings containing fruit names, where new fruit names can be added via class_under_test.addNewFruit(...). So, to test the functionality of addNewFruit(...), I first use the method to add a new string to the list (e.g. "Pineapple") and, in the next step, verify that the list contains this new string.
I'm not sure if this is a good way to test the functionality of the method, because I rely on the response of another function (which I have already tested in a previous unit test).
Is this the way to test this function, or are there better solutions?
[Test]
public void addNewFruit_validNewFruitName_FruitIsInList()
{
    // arrange
    string newFruit = "Pineapple";

    // act
    class_under_test.addNewFruit(newFruit);
    bool result = class_under_test.isInFruitList(newFruit);

    // assert
    Assert.That(result, Is.True);
}
In a perfect world, every unit test can be broken in only a single way. Every unit test "lives" in isolation from every other. Your addNewFruit test can be broken by breaking isInFruitList - but luckily, this isn't a perfect world either.
Since you already tested the isInFruitList method, you shouldn't worry about that. It's like using a 3rd-party API - it (usually) is tested, and you assume it works. In your case, you assume isInFruitList works because, well - you tested it.
To get around the "broken in a single way" issue, you could try exposing the underlying fruit list internally (using the InternalsVisibleTo attribute), or passing it in via dependency injection. The question is - is it worth the effort? What do you really gain? In such a simple case you usually gain very little, and the overhead of introducing such constructs is usually not worth the time.
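For reference, if you did want the InternalsVisibleTo route, it is a single attribute in the production assembly (commonly in AssemblyInfo.cs); "MyProject.Tests" below is a placeholder for your test assembly's name:

// exposes internal members to the named test assembly
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]

But as said above, for a case this simple the extra machinery rarely pays for itself.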
In the past, I have only used Rhino Mocks, with the typical strict mock. I am now working with Moq on a project and I am wondering about the proper usage.
Let's assume that I have an object Foo with method Bar which calls a Bizz method on object Buzz.
In my test, I want to verify that Bizz is called, therefore I feel there are two possible options:
With a strict mock
var mockBuzz = new Mock<IBuzz>(MockBehavior.Strict);
mockBuzz.Setup(x => x.Bizz()); // test will fail if Bizz method not called
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.VerifyAll();
With a loose mock
var mockBuzz = new Mock<IBuzz>();
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.Verify(x => x.Bizz()); // test will fail if Bizz method not called
Is there a standard or normal way of doing this?
I used strict mocks when I first started using mocks in unit tests. This didn't last very long. There are really two reasons why I stopped doing this:
The tests become brittle - With strict mocks you are asserting more than one thing: that the setup methods are called, AND that no other methods are called. When you refactor the code the test often fails, even if what you are trying to test is still true.
The tests are harder to read - You need to have a setup for every method that is called on the mock, even if it's not really related to what you want to test. When someone reads this test it's difficult for them to tell what is important for the test and what is just a side effect of the implementation.
Because of these I would strongly recommend using loose mocks in your unit tests.
I have a background in C++/non-.NET development and have been more into .NET recently, so I had certain expectations when I was using Moq for the first time. I was trying to understand WTF was going on with my test and why the code I was testing was throwing a random exception instead of the mock library telling me which function the code was trying to call. So I discovered I needed to turn on the Strict behaviour, which was perplexing - and then I came across this question, which I saw had no ticked answer yet.
The Loose mode, and the fact that it is the default, is insane. What on earth is the point of a mock library that does something completely unpredictable that you haven't explicitly listed it should do?
I completely disagree with the points listed in the other answers in support of Loose mode. There is no good reason to use it and I wouldn't ever want to, ever. When writing a unit test I want to be certain what is going on - if I know a function needs to return a null, I'll make it return that. I want my tests to be brittle (in the ways that matter) so that I can fix them and add to the suite of test code the setup lines which are the explicit information that is describing to me exactly what my software will do.
The question is - is there a standard and normal way of doing this?
Yes - from the point of view of programming in general, i.e. other languages and outside the .NET world, you should always use Strict. Goodness knows why it isn't the default in Moq.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example:
Let's say we have a database provider, StudentDAL, which has two methods.
The data access interface looks something like this:
public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);
The implementation that consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    // Use a strict mock to test this
    return StudentDAL.GetStudentById(id);
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed on to the underlying layer
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;

    // Use a loose mock and Moq's Verify API to make sure the age filter is passed on correctly
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
}
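Following that convention, the two tests might look roughly like this. IStudentDAL and StudentService are assumed names for the interface and the consuming class sketched above:

[Test]
public void FindStudent_DelegatesToTheDal()
{
    // Strict: the test fails if anything other than this exact call happens
    var dal = new Mock<IStudentDAL>(MockBehavior.Strict);
    var expected = new Student();
    dal.Setup(x => x.GetStudentById(5)).Returns(expected);

    var service = new StudentService(dal.Object);

    Assert.AreSame(expected, service.FindStudent(5));
}

[Test]
public void GetStudentsForClass_PassesTheDerivedAgeFilterOn()
{
    // Loose: set up nothing, just verify the derived argument afterwards
    var dal = new Mock<IStudentDAL>();
    var service = new StudentService(dal.Object);
    var request = new StudentListRequest
    {
        DateOfBirthFilter = new DateTime(2000, 1, 1),
        ClassId = 7
    };

    service.GetStudentsForClass(request);

    dal.Verify(x => x.GetStudents(DateTime.Now.Year - 2000, 7));
}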
Personally, being new to mocking and Moq, I feel that starting off with Strict mode helps you better understand the innards and what's going on. "Loose" sometimes hides details and passes a test that a Moq beginner may fail to see through. Once you have your mocking skills down, Loose would probably be a lot more productive - like in this case, saving a line on the "Setup" and just using "Verify" instead.
We have just released a rewritten (for the 3rd time) module for our proprietary system. This module, which we call the Load Manager, is by far the most complicated of all the modules in our system to date. We are trying to get a comprehensive test suite because every time we make any kind of significant change to this module there is hell to pay for weeks in sorting out bugs and quirks. However, developing a test suite has proven to be quite difficult, so we are looking for ideas.
The Load Manager's guts reside in a class called LoadManagerHandler; this is essentially all of the logic behind the module. This handler calls upon multiple controllers to do the CRUD methods in the database. These controllers are essentially the top layer of the DAL that sits on top of, and abstracts away, our LLBLGen-generated code.
So it is easy enough to mock these controllers, which we are doing using the Moq framework. However, the problem comes in the complexity of the Load Manager: the issues we receive aren't in the simple cases but in the cases where there is a substantial amount of data contained within the handler.
To briefly explain: the load manager contains a number of "unloaded" details, sometimes in the hundreds, that are then dropped into user-created loads and reship pools. During the process of creating and populating these loads there is a multitude of deletes, changes, and additions that eventually cause issues to appear. However, because when you mock a method of an object the last mock wins, i.e.:
jobDetailControllerMock.Setup(mock => mock.GetById(1)).Returns(jobDetail1);
jobDetailControllerMock.Setup(mock => mock.GetById(2)).Returns(jobDetail2);
jobDetailControllerMock.Setup(mock => mock.GetById(3)).Returns(jobDetail3);
No matter what I send to jobDetailController.GetById(x) I will always get back jobDetail3. This makes testing almost impossible, because we have to make sure that when changes are made, all the points that should be affected are affected.
So, I resolved to using the test database and just allowing the reads and writes to occur as normal. However, because you can't(read: should not) dictate the order of your tests, tests that are run earlier could cause tests that run later to fail.
TL/DR: I am essentially looking for testing strategies for data-oriented code that is quite complex in nature.
As noted by Seb, you can indeed use range matching:
controller.Setup(x => x.GetById(It.IsInRange<int>(1, 3, Range.Inclusive))).Returns<int>(i => jobs[i]);
This code uses the argument passed to the method to calculate which value to return.
To get around the "last mock wins" with Moq, you could use the technique from this blog:
Moq Triqs - Successive Expectations
EDIT:
Actually you don't even need that. Based on your example, Moq will return different values based on the method argument.
using System;
using Moq;

public interface IController
{
    string GetById(int id);
}

class Program
{
    static void Main(string[] args)
    {
        var mockController = new Mock<IController>();
        mockController.Setup(x => x.GetById(1)).Returns("one");
        mockController.Setup(x => x.GetById(2)).Returns("two");
        mockController.Setup(x => x.GetById(3)).Returns("three");

        IController controller = mockController.Object;

        Console.WriteLine(controller.GetById(1));
        Console.WriteLine(controller.GetById(3));
        Console.WriteLine(controller.GetById(2));
        Console.WriteLine(controller.GetById(3));
        Console.WriteLine(controller.GetById(99) == null);
    }
}
Output is:
one
three
two
three
True
It sounds like LoadManagerHandler does... quite a bit of work. "Manager" in a class name always somewhat worries me... from a TDD standpoint, it might be worth thinking about breaking the class up appropriately if possible.
How long is this class?
I've never used Moq, but it seems that it should be able to match a mock invocation by argument(s) supplied.
A quick look at the Quick Start documentation has the following excerpt:
//Matching Arguments
// any value
mock.Setup(foo => foo.Execute(It.IsAny<string>())).Returns(true);
// matching Func<int>, lazy evaluated
mock.Setup(foo => foo.Add(It.Is<int>(i => i % 2 == 0))).Returns(true);
// matching ranges
mock.Setup(foo => foo.Add(It.IsInRange<int>(0, 10, Range.Inclusive))).Returns(true);
I think you should be able to use the second example above.
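Applied to the GetById example from the question, that could look like this - the return value is computed from the argument, so one setup covers all three ids (the dictionary and the JobDetail type name are just for illustration):

var jobDetails = new Dictionary<int, JobDetail>
{
    { 1, jobDetail1 },
    { 2, jobDetail2 },
    { 3, jobDetail3 }
};

// one setup covers every known id; the return is derived from the argument
jobDetailControllerMock
    .Setup(mock => mock.GetById(It.Is<int>(id => jobDetails.ContainsKey(id))))
    .Returns<int>(id => jobDetails[id]);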
A simple testing technique is to make sure that every time a bug is logged against the system, a unit test is written covering that case. You can build up a pretty solid set of tests just from that technique. And even better, you won't run into the same thing twice.
No matter what I send to jobDetailController.GetById(x) I will always get back jobDetail3
You should spend more time debugging your tests because what is happening is not how Moq behaves. There is a bug in your code or tests causing something to misbehave.
If you want to make repeated calls with the same inputs but different outputs, you could also use a different mocking framework. RhinoMocks supports the record/playback idiom. You're right that this is not always what you want with regard to enforcing call order. I do prefer Moq myself for its simplicity.
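For what it's worth, later Moq versions (4.x) also support successive return values directly via SetupSequence, which covers the record/playback-style case without switching frameworks. A sketch, with jobDetailBefore/jobDetailAfter as placeholder values:

// same input, a different output on each successive call
jobDetailControllerMock
    .SetupSequence(mock => mock.GetById(1))
    .Returns(jobDetailBefore)
    .Returns(jobDetailAfter);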
My company is on a Unit Testing kick, and I'm having a little trouble with refactoring Service Layer code. Here is an example of some code I wrote:
public class InvoiceCalculator : IInvoiceCalculator
{
    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            UpdateLine(il);
        }
        // do a ton of other stuff here
    }

    private void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        // do a bunch of other stuff, including calls to other private methods
    }
}
In this simplified case (it is reduced from a 1,000 line class that has 1 public method and ~30 private ones), my boss says I should be able to test my CalculateInvoice and UpdateLine separately (UpdateLine actually calls 3 other private methods, and performs database calls as well). But how would I do this? His suggested refactoring seemed a little convoluted to me:
// Tiny part of original code
public class InvoiceCalculator : IInvoiceCalculator
{
    public ILineUpdater _lineUpdater;

    public InvoiceCalculator(ILineUpdater lineUpdater)
    {
        _lineUpdater = lineUpdater;
    }

    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            _lineUpdater.UpdateLine(il);
        }
        // do a ton of other stuff here
    }
}

public class LineUpdater : ILineUpdater
{
    public void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        // do a bunch of other stuff
    }
}
I can see how the dependency is now broken, and I can test both pieces, but this would also create 20-30 extra classes from my original class. We only calculate invoices in one place, so these pieces wouldn't really be reusable. Is this the right way to go about making this change, or would you suggest I do something different?
Thank you!
Jess
This is an example of Feature Envy:
line.Amount = line.Qty * line.Rate;
It should probably look more like:
var amount = line.CalculateAmount();
There isn't anything wrong with lots of little classes; it's not about reusability as much as it's about adaptability. When you have many single-responsibility classes, it's easier to see the behavior of your system and change it when your requirements change. Big classes have intertwined responsibilities which make them very difficult to change.
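A sketch of what that might look like once the behaviour moves onto InvoiceLine (the decimal property types are assumed):

public class InvoiceLine
{
    public decimal Qty { get; set; }
    public decimal Rate { get; set; }

    // the line owns its own calculation instead of a collaborator
    // reaching into its data
    public decimal CalculateAmount()
    {
        return Qty * Rate;
    }
}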
IMO this all depends on how 'significant' that UpdateLine() method really is. If it's just an implementation detail (e.g. it could easily be inlined inside the CalculateInvoice() method and the only thing that would suffer is readability), then you probably don't need to unit test it separately from the master class.
On the other hand, if UpdateLine() method has some value to the business logic, if you can imagine situation when you would need to change this method independently from the rest of the class (and therefore test it separately), then you should go on with refactoring it to a separate LineUpdater class.
You probably won't end up with 20-30 classes this way, because most of those private methods are really just implementation details and do not deserve to be tested separately.
Well, your boss's approach is the more correct one in terms of unit testing:
He is now able to test CalculateInvoice() without testing the UpdateLine() function. He can pass a mock object instead of a real LineUpdater object and test only CalculateInvoice(), not the whole bunch of code.
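For example, with Moq the boss's version can be tested roughly like this (assuming Invoice exposes an initialized Lines collection) - note that the test never touches the database:

[Test]
public void CalculateInvoice_UpdatesEveryLine()
{
    var lineUpdater = new Mock<ILineUpdater>();
    var calculator = new InvoiceCalculator(lineUpdater.Object);
    var invoice = new Invoice();
    invoice.Lines.Add(new InvoiceLine());
    invoice.Lines.Add(new InvoiceLine());

    calculator.CalculateInvoice(invoice);

    // one UpdateLine call per invoice line
    lineUpdater.Verify(x => x.UpdateLine(It.IsAny<InvoiceLine>()), Times.Exactly(2));
}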
Is it right? It depends. Your boss wants to write real unit tests, and testing in your first example would not be unit testing; it would be integration testing.
What are the advantages of unit tests over integration tests?
1) Unit tests allow you to test only one method or property, without it being affected by other methods, the database, and so on.
2) Unit tests execute faster (you said, for example, that UpdateLine uses the database) because they don't exercise all the nested methods. Nested methods can be database calls, so if you have thousands of tests, your test run can be slow (several minutes).
3) If your methods make database calls, then sometimes you need to set up the database (fill it with the data necessary for the test), and that may not be easy - you might have to write a couple of pages of code just to prepare the database for a test. With unit tests, you separate database calls from the methods being tested (using mock objects).
But! I am not saying that unit tests are better. They are just different. As I said, unit tests allow you to test a unit in isolation and quickly. Integration tests are easier to write and allow you to test the combined results of different methods and layers. Honestly, I prefer integration tests :)
Also, I have a couple of suggestions for you:
1) I don't think having an Amount field is a good idea. The Amount field seems redundant, because its value can be calculated from the two other public fields. If you keep it anyway, I would make it a read-only property that returns Qty * Rate (see the sketch after this list).
2) Usually, a class consisting of 1,000 lines is a sign that it is badly designed and should be refactored.
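For the first suggestion, the read-only property would be as simple as (types assumed):

public decimal Amount
{
    get { return Qty * Rate; } // always consistent, nothing to keep in sync
}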
Now, I hope you understand the situation better and can decide. Also, once you understand the situation, you can talk to your boss and decide together.
Yeah, nice one. I'm not sure whether the InvoiceLine object also has some logic included; if so, you would probably need an IInvoiceLine as well.
I sometimes have the same questions. On the one hand you want to do things right and unit test your code, but when database calls and maybe file writing are involved, it takes a lot of extra work to set up the first test: all the test objects that step in when file writing and database IO are about to happen, the interfaces, the asserts - and you also want to test that the data layer doesn't contain any errors. So a test which is more 'process' than 'unit' is often easier to build.
If you have a project that will be changed a lot (in the future) and this code has lots of dependants (maybe other programs read the file or the database data), it can be nice to have a solid unit test suite for all parts of your code, and the investment of time is worthwhile.
But if the project is, as my latest client put it, 'let's get it live and maybe we'll tweak a bit next year, and next year there will be something new', then I wouldn't be so hard on myself to get all unit tests up and running.
Michel
Your boss' example looks reasonable to me.
Some of the key considerations I try to keep in mind when designing for any scenario are:
Single Responsibility Principle - a class should only change for one reason.
Does each new class justify its existence - have classes been created just for the sake of it, or do they encapsulate a meaningful portion of logic?
Are you able to test each piece of code in isolation?
In your scenario, just looking at the names, it would appear that you were wandering away from Single Responsibility - you have an IInvoiceCalculator, yet this class is then also responsible for updating InvoiceLines. Not only have you made it very difficult to test the update behaviour, but you now need to change your InvoiceCalculator class both when calculation business rules change and when the rules around updating change.
Then there is the question about the updating logic - does it justify a separate class? That really depends, and it is hard to say without seeing the code, but the fact that your boss wants that logic under test certainly suggests that it is more than a simple one-liner call off to a data layer.
You say that this refactoring creates a great number of extra classes (I take it you mean across all your business entities, since I only see a couple of new classes and their interfaces in your example), but you have to consider what you get from it. It looks like you gain full testability of your code, the ability to introduce new calculation and new update logic in isolation, and a clearer encapsulation of what are separate pieces of business logic.
The gains above are of course subject to a cost-benefit analysis, but since your boss is asking for them, it sounds like he is happy that they will pay off against the extra work to implement the code this way.
The final, third point about testing in isolation is also a key benefit of the way your boss has designed this - the closer your public methods are to the code that does the actual work, the easier it is to inject stubs or mocks for the parts of your system that are not under test. For example, if you are testing an update method that calls off to a data layer, you do not want to test the data layer, so you would usually inject a mock. If you need to pass that mocked data layer through all your calculator logic first, your test setup is going to be far more complicated, since the mock now needs to meet many other potential requirements not related to the actual test in question.
While this approach is extra work initially, I'd say that the majority of the work is the time needed to think through the design; after that, and once you get up to speed on the more injection-based style of code, the raw implementation time of software structured this way is actually comparable.
Your boss's approach is a great example of dependency injection, and of how it allows you to use a mock ILineUpdater to conduct your tests efficiently.