I have a class that deals with Account stuff. It provides methods to login, reset password and create new accounts so far.
I inject the dependencies through the constructor. I have tests that validate each dependency's reference: if a reference is null, the constructor throws an ArgumentNullException.
The Account class exposes each of these dependencies through read-only properties, and I have tests that validate that the reference passed to the constructor is the same one the property returns. I do this to make sure the references are being held by the class. (I don't know if this is a good practice either.)
First question: Is this a good practice in TDD? I ask because this class has 6 dependencies so far, and it gets very repetitive; the tests also get pretty long, as I have to mock all the dependencies for each test. What I do is copy and paste every time and just change the dependency reference being tested.
Second question: my account creation method does things like validating the model passed in, inserting data into 3 different tables (or a fourth table if a certain set of values is present), and sending an email. What should I test here? So far I have a test that checks that the model validation gets executed and that the Add method of each repository gets called; in this case, I use Moq's Callback method on the mocked repository to compare each property being added to the repository against the ones I passed in via the model.
Something like:
userRepository
.Setup(r => r.Add(It.IsAny<User>()))
.Callback<User>(u =>
{
Assert.AreEqual(model.Email, u.Email);
Assert.IsNotNull(u.PasswordHash);
//...
})
.Verifiable();
As I said, these tests are getting longer. I think it doesn't hurt to test everything I can, but I don't know if it's worth it, as it's taking a lot of time to write the tests.
The purpose of testing is to find bugs.
Are you really going to have a bug where the property exists but is not initialized to the value from the constructor?
public class NoNotReally {
private IMyDependency1 _myDependency;
public IMyDependency1 MyDependency {get {return _myDependency;}}
public NoNotReally(IMyDependency1 dependency) {
_myDependency = null; // instead of dependency. Really?
}
}
Also, since you're using TDD, you should write the tests before you write the code, and the code should exist only to make the tests pass. Instead of your unnecessary tests of the properties, write a test that demonstrates that your injected dependency is being used. In order for such a test to pass, the dependency will need to exist, it will need to be of the correct type, and it will need to be used in the particular scenario.
In my example, the dependency will come to exist because it's needed, not because some artificial unit test required it to be there.
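For example, a minimal sketch of such a test for the Account class described in the question (IUserRepository, CreateAccountModel and CreateAccount are assumed names standing in for the real types; Moq and NUnit are used as in the question):

[Test]
public void CreateAccount_AddsUserToRepository()
{
    var userRepository = new Mock<IUserRepository>();
    var account = new Account(userRepository.Object /*, other mocked dependencies */);
    var model = new CreateAccountModel { Email = "user@example.com" };

    account.CreateAccount(model);

    // Passes only if the injected repository is actually used.
    userRepository.Verify(r => r.Add(It.IsAny<User>()), Times.Once());
}

A test like this forces the dependency to exist and to be stored, without ever asserting on a property.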
You say writing these tests feels repetitive. I say you are feeling the major benefit of TDD, which is in fact not writing software with fewer bugs and not writing better software, because TDD doesn't inherently guarantee either. TDD forces you to think about design decisions and make design decisions all. The. Time. (It also reduces debugging time.) If you feel pain while doing TDD, it's usually because a design decision is coming back to bite you. Then it's time to put on your refactoring hat and improve the design.
Now in this particular case it's just the design of your tests, but you have to make design decisions for those as well.
As for testing whether properties are set: if I understand you correctly, you exposed those properties just for the sake of testing? In that case I'd advise against it. Assume you have a class with a constructor parameter and a test that asserts the constructor should throw on null arguments:
public class MyClass
{
public MyClass(MyDependency dependency)
{
if (dependency == null)
{
throw new ArgumentNullException("dependency");
}
}
}
[Test]
public void ConstructorShouldThrowOnNullArgument()
{
Assert.Catch<ArgumentNullException>(() => new MyClass(null));
}
(TestFixture class omitted)
Now when you start to write a test for an actual business method of the class under test, the parts will start to fit together.
[Test]
public void TestSomeBusinessFunctionality()
{
MyDependency mockedDependency;
// setup mock
// mock calls on mockedDependency
MyClass myClass = new MyClass(mockedDependency);
var result = myClass.DoSomethingOrOther();
// assertions on result
// if necessary assertion on calls on mockedDependency
}
At that point, you will have to assign the injected dependency from the constructor to a field so you can use it in the method later. And if you manage to get the test to pass without using the dependency... well, heck, obviously you didn't need it to begin with. Or, maybe, you'll only start to need it for the next test.
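As a rough sketch of where that leads (DoSomethingOrOther, SomeResult and DoTheActualWork are made-up names; the point is only that the field appears because a test needs it):

public class MyClass
{
    private readonly MyDependency _dependency;

    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
        _dependency = dependency;
    }

    public SomeResult DoSomethingOrOther()
    {
        // The field only exists because the business test forced the class to use the dependency.
        return _dependency.DoTheActualWork();
    }
}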
About the other point: when it becomes a hassle to test all the responsibilities of a method or class, TDD is telling you that the method/class is doing too much and would maybe like to be split up into parts that are easy to test. E.g. one class for verification, one for mapping, and one for executing the storage calls.
That can very well lead to over-engineering, though! So watch out for that and you'll develop a feeling for when to resist the urge for more indirection. ;)
To test if properties are mapped properly, I'd suggest to use stubs or self-made fake objects which have simple properties. That way you can simply compare the source and target properties and don't have to make lengthy setups like the one you posted.
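For example, a hand-rolled fake along these lines (the names are invented) keeps the mapping assertions out of the mock setup:

// Records whatever was added so the test can compare source and target properties afterwards.
public class FakeUserRepository : IUserRepository
{
    public User AddedUser { get; private set; }

    public void Add(User user)
    {
        AddedUser = user;
    }
}

[Test]
public void CreateAccount_MapsEmailToUser()
{
    var repository = new FakeUserRepository();
    var account = new Account(repository /*, other dependencies */);
    var model = new CreateAccountModel { Email = "user@example.com" };

    account.CreateAccount(model);

    Assert.AreEqual(model.Email, repository.AddedUser.Email);
    Assert.IsNotNull(repository.AddedUser.PasswordHash);
}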
Normally in unit tests (especially in TDD), you are not going to test every single statement in the class that you are testing. The main purpose of the TDD unit tests is to test the business logic of the class, not the initialization stuff.
In other words, you give scenarios (remember to include edge cases too) as input and check the results, which can either be the final values of the properties and/or the return values of the methods.
The reason you don't want to test every single possible code path in your classes is that, should you ever decide to refactor your classes later on, you only need to make minimal changes to your TDD unit tests, as they are supposed to be agnostic to the actual implementation (as much as possible).
Note: There are other types of unit tests, such as code coverage tests, that are meant to test every single code path in your classes. However, I personally find these tests impractical, and certainly not encouraged in TDD.
I am abstracting the history tracking portion of a class of mine so that it looks like this:
private readonly Stack<MyObject> _pastHistory = new Stack<MyObject>();
internal virtual Boolean IsAnyHistory { get { return _pastHistory.Any(); } }
internal virtual void AddObjectToHistory(MyObject myObject)
{
if (myObject == null) throw new ArgumentNullException("myObject");
_pastHistory.Push(myObject);
}
internal virtual MyObject RemoveLastObject()
{
if(!IsAnyHistory) throw new InvalidOperationException("There is no previous history.");
return _pastHistory.Pop();
}
My problem is that I would like to unit test that Remove will return the last Added object.
AddObjectToHistory
RemoveLastObject -> returns what was put in via AddObjectToHistory
However, it isn't really a unit test if I have to call Add first? But, the only way that I can see to do this in a true unit test way is to pass in the Stack object in the constructor OR mock out IsAnyHistory...but mocking my SUT is odd also. So, my question is, from a dogmatic view is this a unit test? If not, how do I clean it up...is constructor injection my only way? It just seems like a stretch to have to pass in a simple object? Is it ok to push even this simple object out to be injected?
There are two approaches to those scenarios:
Interfere with the design, e.g. by making _pastHistory internal/protected or by injecting the stack
Use other (possibly unit tested) methods to perform verification
As always, there is no golden rule, although I'd say you generally should avoid situations where unit tests force design changes (as those changes will most likely introduce ambiguity/unnecessary questions to code consumers).
Nonetheless, in the end it is you who has to weigh how much you want unit test code to interfere with the design (first case) or how far you are willing to bend the pure unit test definition (second case).
Usually, I find the second case much more appealing: it doesn't clutter the original class code, and you'll most likely have Add already tested, so it's safe to rely on it.
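For instance, a sketch of such a test (assuming the class owning the stack is called HistoryTracker and the test assembly can see its internal members via InternalsVisibleTo):

[Test]
public void RemoveLastObject_ReturnsTheMostRecentlyAddedObject()
{
    var tracker = new HistoryTracker();
    var first = new MyObject();
    var last = new MyObject();

    // Add is covered by its own test, so calling it here is just setup.
    tracker.AddObjectToHistory(first);
    tracker.AddObjectToHistory(last);

    Assert.AreSame(last, tracker.RemoveLastObject());
}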
I think it's still a unit test, assuming MyObject is a simple object. I often construct input parameters to unit test methods.
I use Michael Feathers' unit test criteria:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
My 2 cents: how would the client know whether remove worked or not? How is a 'client' supposed to interact with this object? Are clients going to push a stack into the history tracker? Treat the test as just another user/consumer/client of the test subject, using exactly the same interactions as in real production.
I haven't heard of any rule stating that you're not allowed to call multiple methods on the object under test.
To simulate a non-empty stack, I'd just call Add - that's the 99% case. I'd refrain from destroying the encapsulation of that object. Treat objects like people (I think I read that in Object Thinking): tell them to do stuff, don't break and enter.
e.g. If you want someone to have some money in their wallet, you can either:
Option 1: give them the money and let them put it into their wallet themselves, or
Option 2: throw their wallet away and stuff a pre-filled wallet into their pocket.
I like Option 1. Also, see how it frees you from implementation details (which induce brittleness in tests). Let's say tomorrow the person decides to use an online wallet. The latter approach will break your tests - they will need to be updated to push in an online wallet now - even though the object's behavior is not broken.
Another example I've seen is for testing Repository.GetX(), where people break into the DB to inject records with SQL in the unit test, where it would have been considerably cleaner and easier to call Repository.AddX(x) first. Isolation is desired, but not to the extent that it overrides pragmatism.
I hope I didn't come on too strong here; it just pains me to see object APIs being 'contorted for testability' to the point where they no longer resemble the 'simplest thing that could work'.
I think you're trying to be a little overly specific with your definition of a unit test. You should be testing the public behavior of your class, not the minute implementation details.
From your code snippet, it looks like all you really need to care about is whether a) calling AddObjectToHistory causes IsAnyHistory to return true and b) RemoveLastObject eventually causes IsAnyHistory to return false.
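In test form that could look roughly like this (HistoryTracker is an assumed name for the owning class, and its internal members are assumed visible to the test assembly):

[Test]
public void AddingAndRemovingHistoryTogglesIsAnyHistory()
{
    var tracker = new HistoryTracker();
    Assert.IsFalse(tracker.IsAnyHistory);

    tracker.AddObjectToHistory(new MyObject());
    Assert.IsTrue(tracker.IsAnyHistory);

    tracker.RemoveLastObject();
    Assert.IsFalse(tracker.IsAnyHistory);
}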
As stated in the other answers, I think your options can be broken down like so:
1) You take a dogmatic approach to your testing methodology and add constructor injection for the stack object so you can inject your own fake stack object and test your methods.
2) You write a separate test for add and remove; the remove test will use the add method, but consider that part of the test setup. As long as your add test passes, your remove test can be trusted too.
Is it always necessary to create and pass a stub into a method as a parameter, even if I can instantiate the object being passed into the method without any problems?
For example, I want to test the method below, and it takes a TargetDateRanger object as a parameter. Should I a) stub it out and pass it in, b) break the dependency, put it behind an interface, then stub it and pass it in, or c) instantiate it and pass it into the method as a concrete object?
In this case I can get away with using the concrete object, but is that wise, and does it break some testing rule or something?
public virtual Dictionary<DateTime, DateTime> ResolveDates(ISeries comparisonSeries, TargetDateRanger sourceRanger)
{
Dictionary<DateTime, DateTime> dates = new Dictionary<DateTime, DateTime>();
foreach (DateTime keyDate in sourceRanger.ValidDates)
dates.Add(keyDate, this.ResolveDate(comparisonSeries, keyDate));
return dates;
}
I think the answer depends on what TargetDateRanger.ValidDates does. Assuming you can completely control what that property returns from your unit test, there's no reason to separately mock it out. If it hits the database, has some internal logic, depends on something like DateTime.Now, etc. then you'll need to mock it.
Basically, you want the "environment" of a unit test to be completely under your control so that you have predictable results and can quickly pinpoint the failing code. If ValidDates has a possibility of returning wrong results, then you'd want to unit test that separately and mock it in this case (so that "bad results" don't cause your ResolveDates method to fail, since the problem doesn't reside there).
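As a sketch, if ValidDates really is under the test's control, option c) can look like this (the TargetDateRanger constructor arguments and the DateResolver class are assumptions; Moq is used for ISeries only because it is an interface):

[Test]
public void ResolveDates_ResolvesEveryValidDate()
{
    // Assumption: the ranger can be constructed so that ValidDates is fully determined by the test.
    var ranger = new TargetDateRanger(new DateTime(2020, 1, 1), new DateTime(2020, 1, 3));
    var series = new Mock<ISeries>().Object;
    var sut = new DateResolver(); // hypothetical owner of ResolveDates

    Dictionary<DateTime, DateTime> result = sut.ResolveDates(series, ranger);

    Assert.AreEqual(3, result.Count); // one entry per valid date in the range
}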
You could use a default parameter.
void print(int a, string b = "default")
{
Console.WriteLine(a + b);
}
In unit testing, I test mine as standalone. I break it out, have a driver set up that I can pump variables into (if it is meant to receive them) and a stub so I can view the expected results. I feel it's a better test philosophy for me; I am relatively new to programming, so I find this way of testing teaches me a lot at the same time.
Once that's done, I integrate it into the larger system, re-test it with a new driver and stub output, verify the entire flow works.
I can't say there is anything inherently wrong with using the concrete object, especially if it's a small piece of a small program... but I like breaking things out.
If you are writing a unit test for this method, I think it is better to isolate/fake out any external dependencies. The problem is that if you don't, then if someone makes a change to ValidDates, for example, your test will fail for the wrong reason. On the other hand, you are then also testing what is inside ValidDates, which means you are probably tempted to test multiple things. It might also prevent you from giving the test a good/specific name.
Remember you can unit test ValidDates separately/in isolation.
It is important to break as many dependencies as possible and test small pieces of logic/behavior in isolation. That way you get the real value of unit tests, IMO.
In the past, I have only used Rhino Mocks, with the typical strict mock. I am now working with Moq on a project and I am wondering about the proper usage.
Let's assume that I have an object Foo with method Bar which calls a Bizz method on object Buzz.
In my test, I want to verify that Bizz is called, therefore I feel there are two possible options:
With a strict mock
var mockBuzz = new Mock<IBuzz>(MockBehavior.Strict);
mockBuzz.Setup(x => x.Bizz()); //test will fail if Bizz method not called
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.VerifyAll();
With a loose mock
var mockBuzz = new Mock<IBuzz>();
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.Verify(x => x.Bizz()); //test will fail if Bizz method not called
Is there a standard or normal way of doing this?
I used to use strict mocks when I first started using mocks in unit tests. This didn't last very long. There are really 2 reasons why I stopped doing this:
The tests become brittle - With strict mocks you are asserting more than one thing, that the setup methods are called, AND that the other methods are not called. When you refactor the code the test often fails, even if what you are trying to test is still true.
The tests are harder to read - You need to have a setup for every method that is called on the mock, even if it's not really related to what you want to test. When someone reads this test it's difficult for them to tell what is important for the test and what is just a side effect of the implementation.
Because of these I would strongly recommend using loose mocks in your unit tests.
I have a background in C++/non-.NET development and I've been getting more into .NET recently, so I had certain expectations when I was using Moq for the first time. I was trying to understand WTF was going on with my test and why the code I was testing was throwing a random exception instead of the mock library telling me which function the code was trying to call. So I discovered I needed to turn on the Strict behaviour, which was perplexing - and then I came across this question, which I saw had no ticked answer yet.
The Loose mode, and the fact that it is the default, is insane. What on earth is the point of a mock library that does something completely unpredictable that you haven't explicitly specified it should do?
I completely disagree with the points listed in the other answers in support of Loose mode. There is no good reason to use it and I wouldn't ever want to, ever. When writing a unit test I want to be certain what is going on - if I know a function needs to return a null, I'll make it return that. I want my tests to be brittle (in the ways that matter) so that I can fix them, and so that the setup lines I add to the test code are the explicit information describing to me exactly what my software will do.
The question is - is there a standard and normal way of doing this?
Yes - from the point of view of programming in general, i.e. other languages and outside the .NET world, you should use Strict always. Goodness knows why it isn't the default in Moq.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example:
Let's say we have a database provider, StudentDAL, which has two methods. The data access interface looks something like this:
public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);
The implementation which consumes this DAL looks like below:
public Student FindStudent(int id)
{
//StudentDAL dependency injected
return StudentDAL.GetStudentById(id);
//Use strict mock to test this
}
public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
//StudentDAL dependency injected
//age filter is derived from the request and then passed on to the underlying layer
int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
//Use loose mock and use verify api of MOQ to make sure that the age filter is correctly passed on.
}
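A sketch of the verify-style test for that second case (IStudentDAL and StudentService are assumed names for the DAL interface and the consuming class above):

[Test]
public void GetStudentsForClass_PassesDerivedAgeFilterToDal()
{
    var studentDal = new Mock<IStudentDAL>();
    var service = new StudentService(studentDal.Object);
    var request = new StudentListRequest
    {
        DateOfBirthFilter = new DateTime(2010, 1, 1),
        ClassId = 5
    };
    int expectedAgeFilter = DateTime.Now.Year - 2010;

    service.GetStudentsForClass(request);

    // Loose mock: only the call we actually care about is verified.
    studentDal.Verify(d => d.GetStudents(expectedAgeFilter, 5));
}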
Personally, being new to mocking and Moq, I feel that starting off with Strict mode helps me better understand the innards and what's going on. "Loose" sometimes hides details and passes a test in ways a Moq beginner may fail to see. Once you have your mocking skills down, Loose would probably be a lot more productive - like in this case, saving a line on the Setup and just using Verify instead.
So I have a class with a method as follows:
public class SomeClass
{
...
private SomeDependency m_dependency;
public int DoStuff()
{
int result = 0;
...
int someValue = m_dependency.GrabValue();
...
return result;
}
}
And I've decided that rather than to call m_dependency.GrabValue() each time, I really want to cache the value in memory (i.e. in this class) since we're going to get the same value each time anyway (the dependency goes off and grabs some data from a table that hardly ever changes).
I've run into problems however trying to describe this new behaviour in a unit test. I've tried the following (I'm using NUnit with RhinoMocks):
[Test]
public void CacheThatValue()
{
var depend = MockRepository.GenerateMock<SomeDependency>();
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
var sut = new SomeClass(depend);
int result = sut.DoStuff();
result = sut.DoStuff();
depend.VerifyAllExpectations();
}
This however doesn't work; this test passes even without introducing any changes to the functionality. What am I doing wrong?
I see caching as orthogonal to Do(ing)Stuff. I would find a way to pull the caching logic outside of the method, either by changing SomeDependency or wrapping it somehow (I now have a cool idea for a caching class based around lambda expressions -- yum).
That way your tests for DoStuff don't need to change, you only need to make sure they work with the new wrapper. Then you can test the caching functionality of SomeDependency, or its wrapper, independently. With well-architected code putting a caching layer in place should be rather easy and neither your dependency nor your implementation should know the difference.
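One way to realize that lambda-based idea is a tiny wrapper like the following (a sketch; the names are invented, and on .NET 4+ Lazy<T> gives you much the same thing out of the box):

// Invokes the supplied delegate at most once and reuses the result afterwards.
public class CachedValue<T>
{
    private readonly Func<T> _fetch;
    private bool _hasValue;
    private T _value;

    public CachedValue(Func<T> fetch)
    {
        _fetch = fetch;
    }

    public T Value
    {
        get
        {
            if (!_hasValue)
            {
                _value = _fetch();
                _hasValue = true;
            }
            return _value;
        }
    }
}

// SomeClass would then depend on the wrapper instead of caching internally, e.g.:
// var cachedValue = new CachedValue<int>(() => dependency.GrabValue());

The wrapper's caching behavior can then be tested on its own, with a plain counting delegate instead of a mock.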
Unit tests shouldn't be testing implementation, they should test behavior. At the same time, the subject under test should have a narrowly-defined set of behavior.
To answer your question, you are using a Dynamic Mock and the default behavior is to allow any call that isn't configured. The additional calls are just returning "0". You need to set up an expectation that no more calls are made on the dependency:
depend.Expect(d => d.GrabValue()).Repeat.Once().Return(1);
depend.Expect(d => d.GrabValue()).Repeat.Never();
You may need to enter record/replay mode to get it to work properly.
This seems like a case for "tests drive the design". If caching is an implementation detail of SomeDependency - and therefore can't be directly tested - then probably some of its functionality (specifically, its caching behavior) needs to be exposed - and since it's not natural to expose it within SomeDependency, it needs to be exposed in another class (let's call it "Cache"). In Cache, of course, the behavior is contractual - public, and thereby testable.
So the tests - and the smells - are telling us we need a new class. Test-Driven Design. Ain't it great?
My company is on a Unit Testing kick, and I'm having a little trouble with refactoring Service Layer code. Here is an example of some code I wrote:
public class InvoiceCalculator:IInvoiceCalculator
{
public void CalculateInvoice(Invoice invoice)
{
foreach (InvoiceLine il in invoice.Lines)
{
UpdateLine(il);
}
//do a ton of other stuff here
}
private void UpdateLine(InvoiceLine line)
{
line.Amount = line.Qty * line.Rate;
//do a bunch of other stuff, including calls to other private methods
}
}
In this simplified case (it is reduced from a 1,000 line class that has 1 public method and ~30 private ones), my boss says I should be able to test my CalculateInvoice and UpdateLine separately (UpdateLine actually calls 3 other private methods, and performs database calls as well). But how would I do this? His suggested refactoring seemed a little convoluted to me:
//Tiny part of original code
public class InvoiceCalculator:IInvoiceCalculator
{
public ILineUpdater _lineUpdater;
public InvoiceCalculator (ILineUpdater lineUpdater)
{
_lineUpdater = lineUpdater;
}
public void CalculateInvoice(Invoice invoice)
{
foreach (InvoiceLine il in invoice.Lines)
{
_lineUpdater.UpdateLine(il);
}
//do a ton of other stuff here
}
}
public class LineUpdater:ILineUpdater
{
public void UpdateLine(InvoiceLine line)
{
line.Amount = line.Qty * line.Rate;
//do a bunch of other stuff
}
}
I can see how the dependency is now broken, and I can test both pieces, but this would also create 20-30 extra classes from my original class. We only calculate invoices in one place, so these pieces wouldn't really be reusable. Is this the right way to go about making this change, or would you suggest I do something different?
Thank you!
Jess
This is an example of Feature Envy:
line.Amount = line.Qty * line.Rate;
It should probably look more like:
var amount = line.CalculateAmount();
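In other words, the line owns its own calculation. A minimal sketch, assuming Qty and Rate are simple numeric properties:

public class InvoiceLine
{
    public decimal Qty { get; set; }
    public decimal Rate { get; set; }

    // The data and the behaviour that uses it now live together.
    public decimal CalculateAmount()
    {
        return Qty * Rate;
    }
}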
There isn't anything wrong with lots of little classes; it's not about reusability as much as it's about adaptability. When you have many single-responsibility classes, it's easier to see the behavior of your system and change it when your requirements change. Big classes have intertwined responsibilities which make them very difficult to change.
IMO this all depends on how 'significant' that UpdateLine() method really is. If it's just an implementation detail (e.g. it could easily be inlined inside the CalculateInvoice() method and the only thing that would suffer is readability), then you probably don't need to unit test it separately from the master class.
On the other hand, if UpdateLine() method has some value to the business logic, if you can imagine situation when you would need to change this method independently from the rest of the class (and therefore test it separately), then you should go on with refactoring it to a separate LineUpdater class.
You probably won't end up with 20-30 classes this way, because most of those private methods are really just implementation details and do not deserve to be tested separately.
Well, your boss's approach is the more correct way in terms of unit testing:
He is now able to test CalculateInvoice() without testing the UpdateLine() function. He can pass a mock object instead of a real LineUpdater object and test only CalculateInvoice(), not a whole bunch of code.
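With the refactored version, such a test might look roughly like this (using Moq, and assuming Invoice.Lines is a plain mutable collection):

[Test]
public void CalculateInvoice_UpdatesEveryLine()
{
    var lineUpdater = new Mock<ILineUpdater>();
    var calculator = new InvoiceCalculator(lineUpdater.Object);
    var invoice = new Invoice();
    invoice.Lines.Add(new InvoiceLine());
    invoice.Lines.Add(new InvoiceLine());

    calculator.CalculateInvoice(invoice);

    // CalculateInvoice is tested in isolation; UpdateLine gets its own tests against the real LineUpdater.
    lineUpdater.Verify(u => u.UpdateLine(It.IsAny<InvoiceLine>()), Times.Exactly(2));
}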
Is it right? It depends. Your boss wants to write real unit tests, and the testing in your first example would not be unit testing, it would be integration testing.
What are the advantages of unit tests over integration tests?
1) Unit-tests allow you to test only one method or property, without it being affected by other methods/database and so on.
2) Second advantage: unit tests execute faster (for example, you said UpdateLine uses the database) because they don't exercise all the nested methods. Nested methods can make database calls, so if you have thousands of tests, your test run can get slow (several minutes).
3) Third advantage: if your methods make database calls, then sometimes you need to set up the database (fill it with the data necessary for the test), and that may not be easy - you might have to write a couple of pages of code just to prepare the database for a test. With unit tests, you separate database calls from the methods being tested (using mock objects).
But! I am not saying that unit tests are better. They are just different. As I said, unit tests allow you to test a unit in isolation and quickly. Integration tests are easier and allow you to test the results of the joint work of different methods and layers. Honestly, I prefer integration tests more :)
Also, I have a couple of suggestions for you:
1) I don't think having the Amount field is a good idea. The Amount field seems redundant because its value can be calculated from the 2 other public fields. If you want to keep it anyway, I would make it a read-only property that returns Qty * Rate.
2) Usually, a class that consists of 1,000 lines may be a sign that it's badly designed and should be refactored.
Now, I hope you better understand the situation and can decide. Also, if you understand the situation you can talk to your boss and you can decide together.
Yeah, nice one. I'm not sure whether the InvoiceLine object also has some logic included; if it does, you would probably need an IInvoiceLine as well.
I sometimes have the same questions. On one hand, you want to do things right and unit test your code, but when database calls and maybe file writing are involved, it causes a lot of extra work to set up the first test: all the test objects that step in when file writing and database I/O are about to happen, the interfaces, the asserts, and you also want to test that the data layer doesn't contain any errors. So a test that is more 'process' than 'unit' is often easier to build.
If you have a project that will be changed a lot in the future, and there are lots of dependencies on this code (maybe other programs read the file or the database data), it can be nice to have solid unit tests for all parts of your code, and the time invested is worthwhile.
But if the project is, like my latest client says, 'let's get it live and maybe we'll tweak it a bit next year, and next year there will be something new', then I wouldn't be so hard on myself about getting all the unit tests up and running.
Michel
Your boss' example looks reasonable to me.
Some of the key considerations I try to keep in mind when designing for any scenario are:
Single Responsibility Principle
A class should only change for one reason.
Does each new class justify its existence?
Have classes been created just for the sake of it, or do they encapsulate a meaningful portion of logic?
Are you able to test each piece of code in isolation?
In your scenario, just looking at the names, it would appear that you were wandering away from Single Responsibility - You have an IInvoiceCalculator, yet this class is then also responsible for updating InvoiceLines. Not only have you made it very difficult to test the update behaviour, but you now need to change your InvoiceCalculator class both when calculation business rules change and when the rules around updating change.
Then there is the question about the updating logic - does the logic justify a separate class? That really depends, and it is hard to say without seeing the code, but certainly the fact that your boss wants that logic under test suggests that it is more than a simple one-liner call off to a data layer.
You say that this refactoring creates a great number of extra classes (I take it you mean across all your business entities, since I only see a couple of new classes and their interfaces in your example), but you have to consider what you get from it. It looks like you gain full testability of your code, the ability to introduce new calculation and new update logic in isolation, and a clearer encapsulation of what are separate pieces of business logic.
The gains above are of course subject to a cost-benefit analysis, but since your boss is asking for them, it sounds like he is happy that they will pay off against the extra work to implement the code this way.
The final, third point about testing in isolation is also a key benefit of the way your boss has designed this - the closer your public methods are to the code that does the actual work, the easier it is to inject stubs or mocks for the parts of your system that are not under test. For example, if you are testing an update method that calls off to a data layer, you do not want to test the data layer, so you would usually inject a mock. If you need to pass that mocked data layer through all your calculator logic first, your test setup is going to be far more complicated, since the mock now needs to meet many other potential requirements not related to the actual test in question.
While this approach is extra work initially, I'd say that the majority of that work is the time to think through the design; after that, and after you get up to speed on the more injection-based style of code, the raw implementation time of software structured this way is actually comparable.
Your boss's approach is a great example of dependency injection, and it is exactly what allows you to use a mock ILineUpdater to conduct your tests efficiently.