Unit testing classes that instantiate other classes - C#

I'm trying to write unit tests for a class that instantiates other classes within it, but am struggling with how to instantiate those classes in a testable way. I'm aware of dependency injection, but this is somewhat different as the instantiation doesn't happen in the constructor.
This question really isn't specific to MVVM and C#, but that's what my example will use. Note that I've simplified this and it will not compile as-is - the goal is to show a pattern.
class ItemListViewModel
{
    private readonly IService service;

    public ItemListViewModel(IService service)
    {
        this.service = service;
        this.service.ItemAdded += this.OnItemAdded;
    }

    public List<IItemViewModel> Items { get; } = new List<IItemViewModel>();

    private void OnItemAdded(IItemModel addedItem)
    {
        var viewModel = new ItemViewModel(addedItem);
        this.Items.Add(viewModel);
    }
}

class ItemViewModel : IItemViewModel
{
    public ItemViewModel(IItemModel item) { }
}
As can be seen above, there is an event from the model layer. The ViewModel listens to that event and, in response, adds a new child ViewModel. This fits with standard Object Oriented programming practices that I'm aware of, as well as the MVVM pattern, and feels like quite a clean implementation to me.
The problem comes when I want to unit test this ViewModel. While I can easily mock out the service using dependency injection, I'm unable to mock out items added through the event. This leads to my primary question: is it OK to write my unit tests depending on the real version of ItemViewModel, rather than a mock?
My gut feel: that isn't OK because I'm now inherently testing more than ItemListViewModel, particularly if ItemListViewModel calls any methods on any of the items internally. I should have ItemListViewModel depend on mock IItemViewModels during testing.
There are a few tactics I've considered for how to do this:
Have ItemListViewModel's owning class listen to the event and add mocked out items. This just moves the problem around, though, as now the owning class can't be fully mocked out.
Pass an ItemViewModel factory into ItemListViewModel and use that instead of new. This would definitely work for mocking as it moves things to be based on dependency injection...but does that mean I need a factory for every class I ever want to mock in my app? This feels wrong, and would be a pain to maintain.
Refactor my model and how it communicates with the ViewModel. Perhaps the eventing pattern I'm using isn't good for testing; though I don't see how I would get around needing to ultimately construct an ItemViewModel somewhere in code that needs to be tested.
Additionally, I've searched online and looked through the book Clean Code, but this really hasn't been covered. Everything talks about dependency injection, which doesn't clearly solve this.

is it OK to write my unit tests depending on the real version of ItemViewModel, rather than a mock?
Yes!
You should use the real implementation unless tests become slow or very complicated to set up.
Notice that tests should be isolated - but isolated from other tests, not from the other dependencies of the unit under test.
The main issue in testing is shared state (a database, the filesystem). Shared state makes our tests dependent on each other (tests for adding and removing items cannot run in parallel). By introducing mocks we eliminate shared state between tests.
Sometimes an application is divided into independent domain modules which "communicate" with each other via an abstracted interface. To keep modules independent, we mock the communication interface so that the module under test does not depend on implementation details of another domain module.
Mocking literally all dependencies will make maintenance and refactoring a nightmare, because every time you extract some logic into a dedicated class you will be forced to change or rewrite the test suite of the unit you are refactoring.
Your scenario is a good example: by not mocking the creation of ItemViewModel, you will later be able to introduce a factory, inject it into the class under test, and run the already existing test suite to make sure that the factory didn't introduce any regressions.
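For instance, a minimal sketch of such a test (assuming Moq and xUnit, and that IService declares ItemAdded as an Action<IItemModel>-style event; the test method name is mine):
[Fact]
public void RaisingItemAdded_AddsChildViewModel()
{
    var service = new Mock<IService>();
    var sut = new ItemListViewModel(service.Object);

    // Raise the model-layer event; the real ItemViewModel is constructed internally.
    service.Raise(s => s.ItemAdded += null, Mock.Of<IItemModel>());

    Assert.Single(sut.Items);
}
The test still targets ItemListViewModel's observable behavior; the real ItemViewModel just comes along for the ride, which is fine while it stays cheap and deterministic.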

While I can easily mock out the service using dependency injection, I'm unable to mock out items added through the event.
Misko Hevery wrote about this pattern: How to Think About the New Operator
If you mix application logic with graph construction (the new operator) unit-testing becomes impossible for anything but the leaf nodes in your application.
So if we were to look at your problem code:
private void OnItemAdded(IItemModel addedItem)
{
    var viewModel = new ItemViewModel(addedItem);
    this.Items.Add(viewModel);
}
then one change we could consider is replacing this direct call to ItemViewModel::new with a more indirect approach:
var viewModel = factory.CreateItemViewModel(addedItem);
Where factory provides a capability to create ItemViewModel, and the design allows us to provide substitutes.
ItemListViewModel(IService service, Factory factory)
{
    this.service = service;
    this.factory = factory;
    this.service.ItemAdded += this.OnItemAdded;
}
Having done that, you can (when appropriate) use a Factory that provides some simpler implementation of your item view model.
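One possible shape for that Factory, sketched as an interface (the member name CreateItemViewModel is my assumption, matching the call above; in idiomatic C# the type would likely be named IItemViewModelFactory):
public interface Factory
{
    IItemViewModel CreateItemViewModel(IItemModel addedItem);
}

// Production implementation: simply forwards to the real constructor.
public class ItemViewModelFactory : Factory
{
    public IItemViewModel CreateItemViewModel(IItemModel addedItem)
    {
        return new ItemViewModel(addedItem);
    }
}
In a test, a substitute Factory can hand back a mocked IItemViewModel instead.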
When is this important? One thing to notice is that you are asking about ItemViewModel, but you aren't asking about List. Why is that?
A couple of answers: List is stable; we aren't at all worried that the behavior of List itself is going to change in a way that causes an observable change to the behavior of ItemListViewModel. If the test reports a problem later, there isn't going to be any doubt that we introduced a mistake in our code.
Also, this.Items is (presumably) isolated. We don't have to worry that our test results are going to be flaky because some other code is running at the same time. In other words, our test is not vulnerable to problems caused by shared mutable state.
If those properties also hold for ItemViewModel, then adding a bunch of ceremony to your code to create separation between these two implementations isn't actually going to make your design any "better".

Related

Use Singleton for model OR where to inject model into viewmodel?

Normally I'd just use a singleton pattern for the model class Logbook. However, I'm trying to move towards learning about DI and want to inject the model into my viewmodel. In addition, I'm planning on having different child windows that open for various functions (choose which logbook entries to display, sort order, add a logbook entry, etc.) that may or may not need access to the model. What are the pros and cons of using the singleton pattern vs DI with these constraints? In addition, is DI possible if a lot of my viewmodels need access to this model? Finally, where/how should I instantiate the view, viewmodel, and model to be able to inject?
When you're using dependency injection, the idea is to provide classes with a specific implementation of the dependency it needs, so when you're unit testing, you can provide a mock or stub implementation. This is known as loose coupling.
Singletons, meanwhile, are global state that you have no control over. If your class accesses a singleton, it's tightly coupled to the singleton. How do you mock a singleton? In general, you don't. You're stuck to it, and suddenly you have untestable code. Singletons are generally considered an anti-pattern for exactly this reason. (Aside: there are ways around that, but the basic design problem still exists)
Even worse, if something in your class mutates the state of the singleton, this can have side effects in other tests you're writing and make it very difficult to figure out why Test B passes when run alone, but fails if it's run after Test A.
My rule of thumb is: don't use singletons, ever.
Your specific question sounds a little bit fishy. Your viewmodel shouldn't need a model injected into it. It sounds like you want a class responsible for retrieving or otherwise constructing your model, and then you'd inject an implementation of that class into your viewmodel.
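A hypothetical sketch of that arrangement (ILogbookProvider and the member names are invented for illustration, not from the question):
public interface ILogbookProvider
{
    Logbook GetLogbook();
}

public class LogbookViewModel
{
    private readonly ILogbookProvider provider;

    // Tests can pass a stub provider; production passes one backed by real storage.
    public LogbookViewModel(ILogbookProvider provider)
    {
        this.provider = provider;
    }
}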

facade design pattern

Lately I've been trying to follow the TDD methodology, and this results in a lot of subclasses so that one can easily mock dependencies, etc.
For example, I could have, say, a RecurringProfile which in turn has methods / operations which can be applied to it like MarkAsCancel, RenewProfile, MarkAsExpired, etc. Due to TDD, these are implemented as 'small' classes, like MarkAsCancelService, etc.
Does it make sense to create a 'facade' (singleton) for the various methods/operations which can be performed on, say, a RecurringProfile - for example, having a class RecurringProfileFacade? This would then contain methods which delegate to the actual sub-classes, e.g.:
public class RecurringProfileFacade
{
    public void MarkAsCancelled(IRecurringProfile profile)
    {
        MarkAsCancelledService service = new MarkAsCancelledService();
        service.MarkAsCancelled(profile);
    }

    public void RenewProfile(IRecurringProfile profile)
    {
        RenewProfileService service = new RenewProfileService();
        service.Renew(profile);
    }
    ...
}
Note that the above code is not actual code, and the actual code would use constructor-injected dependencies. The idea behind this is that the consumer of such code would not need to know the inner details about which classes/sub-classes they need to call, but just access the respective 'Facade'.
First of all, is this the 'Facade' pattern, or is it some other form of design pattern?
The other question which would follow if the above makes sense is - would you do unit-tests on such methods, considering that they do not have any particular business logic function?
I would only create a facade like this if you intend to expose your code to others as a library. You can create a facade which is the interface everyone else uses.
This will give you some capability later to change the implementation.
If this is not the case, then what purpose does this facade provide? If a piece of code wants to call one method on the facade, it will have a dependency on the entire facade. It's best to keep dependencies small, so calling code would be better off with a dependency on MarkAsCancelledService than one on RecurringProfileFacade.
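For example, a consumer might depend only on the one operation it needs (a sketch; the interface and consumer class are mine, not from the question):
public interface IMarkAsCancelledService
{
    void MarkAsCancelled(IRecurringProfile profile);
}

public class CancellationHandler
{
    private readonly IMarkAsCancelledService cancelService;

    public CancellationHandler(IMarkAsCancelledService cancelService)
    {
        this.cancelService = cancelService;
    }

    // Depends on the narrow service, not on the whole facade.
    public void Handle(IRecurringProfile profile)
    {
        cancelService.MarkAsCancelled(profile);
    }
}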
In my opinion, this is kind of the facade pattern, since you are abstracting your services behind simple methods, though I think a facade usually has more logic behind its methods. The reason is that the purpose of the facade pattern is to offer a simplified interface over a larger body of code.
As for your second question, I always unit test everything. In your case, though, it depends: does cancelling or renewing a profile change the state of your project? If so, you could assert that the state changed as you expected.
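For example, a state-based test might look roughly like this (hypothetical sketch; it assumes RecurringProfile implements IRecurringProfile and that cancelling sets an observable status):
[Fact]
public void MarkAsCancelled_SetsCancelledStatus()
{
    var profile = new RecurringProfile();
    var facade = new RecurringProfileFacade();

    facade.MarkAsCancelled(profile);

    // Assert on observable state rather than on which service ran.
    Assert.Equal(ProfileStatus.Cancelled, profile.Status);
}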
If your design "tells" you that you could use a Singleton to do some work for you, then it's probably bad design. TDD should lead you far away from thinking about using singletons.
Reasons why it's a bad idea (or can occasionally be an acceptable one) can be found on Wikipedia.
My answer to your questions is: look at other patterns! For example UnitOfWork, Strategy, and Mediator; try to achieve the same functionality with these patterns and you'll be able to compare the benefits of each one. You'll probably end up with a UnitOfStrategicMediationFacade or something ;-)
Consider posting this question over at Code Review for more in-depth analysis.
When facing that kind of issue, I usually try to reason from a YAGNI/KISS perspective:
Do you need MarkAsCancelService, MarkAsExpiredService and the like in the first place? Wouldn't these operations have a better home in RecurringProfile itself (see the sketch after this list)?
You say these services are byproducts of your TDD process, but TDD (1) doesn't mean stripping business entities of all logic, and (2) if you do externalize some logic, it doesn't have to go into a Service. If you can't come up with a better name than [BehaviorName]Service for your extracted class, it's often a sign that you should stop and reconsider whether the behavior should really be extracted.
In short, your objects should remain cohesive, which means they shouldn't encapsulate too many responsibilities, but not become anemic either.
Assuming these services are justified, do you really need a Facade for them? If it's only a convenient shortcut for developers, is it worth the additional maintenance (a change in one of the services will generate a change in the facade)? Wouldn't it be simpler if each consumer of one of the services knew how to leverage that service directly?
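As a sketch of moving the behavior onto RecurringProfile itself, per the first point above (the status member is invented for illustration):
public class RecurringProfile : IRecurringProfile
{
    public ProfileStatus Status { get; private set; }

    // The behavior lives with the data it changes; no MarkAsCancelledService needed.
    public void MarkAsCancelled()
    {
        Status = ProfileStatus.Cancelled;
    }
}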
The other question which would follow if the above makes sense is - would you do unit-tests on such methods, considering that they do not have any particular business logic function?
Unit testing boilerplate code can be a real pain indeed. Some will take that pain; others consider it not worth it. Due to the repetitive and predictable nature of such code, one good compromise is to generate your tests automatically, or even better, all your boilerplate code automatically.

Dependent (Procedural) Unit Testing in Visual Studio 2012

In an MVC4 C# website, we are using Unit Testing to test our code. What we are currently doing is creating a mock database, then testing each layer:
Test the Database - connecting, initializing, load, etc.
Test the code accessing the database
Test the view layer
I want these three categories to be run in order and only if the previous one passed. Is this the right way to go about it? The old way we were doing it, using a Dependency Injection framework, was a weird approach to mock testing. We actually created a mock class for each data accessor, which sucks because we have to reimplement the logic of each method, and it didn't really add any value: if we made a mistake in the logic the first time, we'll make it a second time.
I want these three categories to be run in order and only if the previous one passed. Is this the right way to go about it?
No. You should test everything in isolation, and because of that any failing data access code should be completely unrelated to the view. The view unit tests should fail for a completely different reason, and because of that you should always run all tests.
In general, tests should run in isolation, and should not depend on any other tests. It seems to me that the view layer still depends on the data access code, but that you only abstracted the layer below. In that case, from the view's perspective, you are mocking an indirect dependency, which makes your unit tests more complex than strictly needed.
The old way we were doing it, using a Dependency Injection framework, was a weird approach to mock testing
I'm not sure if I get this, but it seems as if you were using the DI container inside your unit tests. This is an absolute no-no. Don't clutter your tests with any DI framework. And there's no need to: just design your classes around the dependency injection principle (expose all dependencies as constructor arguments), and in your tests manually new up the class under test with fake dependencies -or-, when this leads to repetitive code, create a factory method in your test class that creates a new instance of the class under test.
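Such a factory method might look like this (a sketch with invented names; the question doesn't show the actual classes):
// Inside the test class: centralizes construction of the system under test,
// so adding a constructor parameter later only touches this one method.
private static ContactsService CreateSut(IDataAccessor accessor = null)
{
    return new ContactsService(accessor ?? new FakeDataAccessor());
}
Each test then calls CreateSut(), overriding only the dependency it cares about.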
We actually created a mock class for each data accessor, which sucks
I'm not sure, but it sounds as if there's something wrong with your design here. You would normally have an IRepository<T> interface for which you would have multiple implementations, such as UserRepository : IRepository<User> and OrderRepository : IRepository<Order>. In your test suite, you will have one generic FakeRepository<T> or InMemoryRepository<T> and you're done. No need to mock many classes here.
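A minimal sketch of that idea (the exact IRepository<T> members are assumed, not given in the question):
using System.Collections.Generic;

public interface IRepository<T>
{
    void Add(T entity);
    IEnumerable<T> GetAll();
}

// One generic in-memory fake covers every entity type in the test suite.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> store = new List<T>();

    public void Add(T entity)
    {
        store.Add(entity);
    }

    public IEnumerable<T> GetAll()
    {
        return store;
    }
}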
Here are two great books you should definitely read:
The Art of Unit Testing
Dependency Injection in .NET
Those books will change your life. And since you'll be reading anyway, do read this book as well (not related to your question, but it could be a life changer as well).

Dependency Injection what´s the big improvement?

Currently I'm trying to understand dependency injection better and I'm using asp.net MVC to work with it. You might see some other related questions from me ;)
Alright, I'll start with an example controller (of an example Contacts Manager asp.net MVC application)
public class ContactsController {
    ContactsManagerDb _db;

    public ContactsController() {
        _db = ContactsManagerDb();
    }
    //...Actions here
}
Alright, awesome, that's working. My actions can all use the database for CRUD actions. Now I've decided I want to add unit testing, and I've added another constructor to mock a database:
public class ContactsController {
    IContactsManagerDb _db;

    public ContactsController() {
        _db = ContactsManagerDb();
    }

    public ContactsController(IContactsManagerDb db) {
        _db = db;
    }
    //...Actions here
}
Awesome, that's working too: in my unit tests I can create my own implementation of IContactsManagerDb and unit test my controller.
Now, people usually make the following decision (and here is my actual question): get rid of the empty constructor, and use dependency injection to define which implementation to use.
So using StructureMap I've added the following injection rule:
x.For<IContactsManagerDb>().Use<ContactsManagerDb>();
And of course, in my testing project I'm using a different IContactsManagerDb implementation:
x.For<IContactsManagerDb>().Use<MyTestingContactsManagerDb>();
But my question is: what problem have I solved, or what have I simplified, by using dependency injection in this specific case?
I fail to see any practical use of it right now; I understand the HOW but not the WHY. What's the use of this? Can anyone add to this project, perhaps with an example of how this is more practical and useful?
The first example is not unit testable, so it is not good: it creates a strong coupling between the different layers of your application and makes them less reusable. The second example is called poor man's dependency injection. It's also discussed here.
What is wrong with poor man's dependency injection is that the code is not self-documenting. It doesn't state its intent to the consumer. A consumer sees this code and could easily call the default constructor without passing any argument, whereas if there were no default constructor it would immediately be clear that this class absolutely requires some contract to be passed to its constructor in order to function normally. And it is really not up to the class to decide which specific implementation to choose. It is up to the consumer of this class.
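In code, the difference is simply removing the default constructor so the requirement is explicit (a sketch based on the question's types; the null check is my addition):
public class ContactsController
{
    private readonly IContactsManagerDb _db;

    // The only way to build this class is to supply its dependency.
    public ContactsController(IContactsManagerDb db)
    {
        if (db == null) throw new ArgumentNullException("db");
        _db = db;
    }

    //...Actions here
}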
Dependency injection is useful for three main reasons:
It is a method of decoupling interfaces and implementations.
It is good for reducing the amount of boilerplate / factory methods in an application.
It increases the modularity of packages.
As an example, consider a unit test that requires access to a class defined by an interface. In many cases, a unit test for an interface would have to invoke implementations of that interface - thus, if an implementation changed, so would the unit test. However, with DI, you can "inject" an interface's implementation at run time into the unit test using the injection API, so that changes to implementations only have to be handled by the injection framework, not by the individual classes that use those implementations.
Another example is in the web world: consider the coupling between service providers and service definitions. If a particular component needs access to a service, it is better to design to the interface than to a particular implementation of that service. Injection enables such design, again, by allowing you to dynamically add dependencies by referencing your injection framework.
Thus, the various couplings of classes to one another are moved out of factories and individual classes, and dealt with in a uniform, abstract, reusable, and easily-maintained manner when one has a good DI framework. The best tutorials on DI that I have seen are on Google's Guice tutorials, available on YouTube. Although these are not the same as your particular technology, the principles are identical.
First, your example won't compile. var _db; is not a valid statement because the type of the variable has to be inferred at declaration.
You could do var _db = new ContactsManagerDb();, but then your second constructor won't compile because you're trying to assign an IContactsManagerDb to an instance of ContactsManagerDb.
You could change it to IContactsManagerDb _db;, and then make sure that ContactsManagerDb derives from IContactsManagerDb, but then that makes your first constructor irrelevant. You have to have the constructor that takes the interface argument anyway, so why not just use it all the time?
Dependency Injection is all about removing dependencies from the classes themselves. ContactsController doesn't need to know about ContactsManagerDb in order to use IContactsManagerDb to access the Contacts Manager.

Is testability alone justification for dependency injection?

The advantages of DI, as far as I am aware, are:
Reduced Dependencies
More Reusable Code
More Testable Code
More Readable Code
Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a Linq to Sql dbml. I can't make my orders repository generic as it performs mapping between the Linq Order entity and my own Order POCO domain class.
Since the OrderRepository by necessity is dependent on a specific Linq to Sql DataContext, parameter passing of the DataContext can't really be said to make the code reusable or reduce dependencies in any meaningful way.
It also makes the code harder to read, as to instantiate the repository I now need to write
new OrdersRepository(new MyLinqDataContext())
which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code.
So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
Dependency Injection's primary advantage is testing. And you've hit on something that seemed odd to me when I first started adopting Test Driven Development and DI: DI does break encapsulation. Unit tests should test implementation-related decisions; as such, you end up exposing details that you wouldn't in a purely encapsulated scenario. Your example is a good one: if you weren't doing test-driven development, you would probably want to encapsulate the data context.
But where you say, Since the OrderRepository by necessity is dependent on a specific Linq to Sql DataContext, I would disagree - we have the same setup and are only dependent on an interface. You have to break that dependency.
Taking your example a step further however, how will you test your repository (or clients of it) without exercising the database? This is one of the core tenets of unit testing - you have to be able to test functionality without interacting with external systems. And nowhere does this matter more than with the database. Dependency Injection is the pattern that makes it possible to break dependencies on sub-systems and layers. Without it, unit tests end up requiring extensive fixture setup, become hard to write, fragile and too damn slow. As a result - you just won't write them.
Taking your example a step further, you might have
In Unit Tests:
// From your example...
new OrdersRepository(new InMemoryDataContext());
// or...
IOrdersRepository repo = new InMemoryDataContext().OrdersRepository;
and In Production (using an IOC container):
// usually...
Container.Create<IDataContext>().OrdersRepository
// but can be...
Container.Create<IOrdersRepository>();
(If you haven't used an IOC container, they're the glue that makes DI work. Think of it as "make" (or ant) for object graphs...the container builds the dependency graph for you and does all of the heavy lifting for construction). In using an IOC container, you get back the dependency hiding that you mention in your OP. Dependencies are configured and handled by the container as a separate concern - and calling code can just ask for an instance of the interface.
There's a really excellent book that explores these issues in detail. Check out xUnit Test Patterns: Refactoring Test Code, by Meszaros. It's one of those books that takes your software development capabilities to the next level.
Dependency Injection is just a means to an end. It's a way to enable loose coupling.
A small comment first: dependency injection = IoC + dependency inversion. What matters most for testing, and what you actually describe, is dependency inversion.
Generally speaking, I think that testing justifies dependency inversion. But it doesn't justify dependency injection; I wouldn't introduce a DI container just for testing.
However, dependency inversion is a principle that can be bent a bit if necessary (like all principles). You can in particular use factories in some places to control the creation of objects.
If you have a DI container, that's what happens automatically: the DI container acts as a factory and wires the objects together.
The beauty of Dependency Injection is the ability to isolate your components.
One side-effect of isolation is easier unit-testing. Another is the ability to swap configurations for different environments. Yet another is the self-describing nature of each class.
However you take advantage of the isolation provided by DI, you have far more options than with tighter-coupled object models.
The power of dependency injection comes when you use an Inversion of Control container such as StructureMap. When using one, you won't see a "new" anywhere -- the container takes control of your object construction. That way, classes remain unaware of how their dependencies are constructed.
I find that the relationship between needing a Dependency Injection container and testability is the other way around: I find DI extremely useful because I write unit-testable code, and I write unit-testable code because it's inherently better code - more loosely coupled, smaller classes.
An IoC container is another tool in the toolbox that helps you manage the complexity of the app - and it works quite well. I found it allows you to code to an interface by taking instantiation out of the picture.
The design of tests here always depends on the SUT (System Under Test). Ask yourself a question: what do I want to test?
If your repository is just an accessor to the database, then it needs to be tested like an accessor - with database involvement. (Actually, such tests are not unit tests but integration tests.)
If your repository performs some mapping or business logic as well as acting as an accessor to the database, then you need to decompose it so that your system complies with the SRP (Single Responsibility Principle). After decomposition you will have two entities:
OrdersRepository
OrderDataAccessor
Test them separately from each other, breaking dependencies with DI.
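A sketch of that decomposition (the interface shape is assumed, and Order/OrderEntity stand in for your real types; the mapping is shown trivially):
using System.Collections.Generic;
using System.Linq;

public interface IOrderDataAccessor
{
    IEnumerable<OrderEntity> LoadAll();
}

public class OrdersRepository
{
    private readonly IOrderDataAccessor accessor;

    public OrdersRepository(IOrderDataAccessor accessor)
    {
        this.accessor = accessor;
    }

    // Mapping is now plain logic, unit-testable with a fake accessor;
    // the accessor itself is covered by integration tests against the database.
    public IEnumerable<Order> GetOrders()
    {
        return accessor.LoadAll().Select(e => new Order { Id = e.Id });
    }
}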
As for the constructor ugliness... use a DI framework to construct your objects. For example, using Unity, your construction code:
var repository = new OrdersRepository(new MyLinqDataContext());
will look like:
var repository = container.Resolve<OrdersRepository>();
The jury is still out for me about the use of DI in the context of your question. You've asked if testing alone is justification for implementing DI, and I'm going to sound a little like a fence-sitter in answering this, even though my gut response is to answer no.
If I answer yes, I am thinking about testing systems when you have nothing you can easily test directly. In the physical world, it's not unusual to include ports, access tunnels, lugs, etc., in order to provide a simple and direct means of testing the status of systems, machines, and so on. This seems reasonable in most cases. For example, an oil pipeline provides inspection hatches to allow equipment to be injected into the system for the purposes of testing and repair. These are purpose built, and provide no other function. The real question though is whether this paradigm is suited to software development. Part of me would like to say yes, but the answer it seems would come at a cost, and leaves us in that lovely grey area of balancing benefits vs costs.
The "no" argument really comes down to the reasons and purposes for designing software systems. DI is a beautiful pattern for promoting the loose coupling of code, something we are taught in our OOP classes is a very important and powerful design concept for improving the maintainability of code. The problem is that like all tools, it can be misused. I'm going to disagree with Rob's answer above in part, because DI's advantages are NOT primarily testing, but in promoting loosely coupled architecture. And I'd argue that resorting to designing systems based solely on the ability to test them suggests in such cases that either the architecture is is flawed, or the test cases are inappropriately configured, and possibly even both.
A well-factored system architecture is in most cases inherently simple to test, and the introduction of mocking frameworks over the last decade has made testing much easier still. Experience has taught me that any system I found hard to test had some aspect of it too tightly coupled in some way. Sometimes (more rarely) this has proven to be necessary, but in most cases it was not, and usually when a seemingly simple system seemed too hard to test, it was because the testing paradigm was flawed. I've seen DI used as a means to circumvent system design in order to allow a system to be tested, and the risks have certainly outweighed the intended rewards, with system architecture effectively corrupted. By that I mean back-doors into code resulting in security problems, code bloated with test-specific behaviour that is never used at runtime, and spaghettification of source code such that you needed a couple of Sherpas and a Ouija Board just to figure out which way was up! All of this in shipped production code. The resulting costs in terms of maintenance, learning curve, etc. can be astronomical, and to small companies such losses can prove devastating in the longer term.
IMHO, I don't believe that DI should ever be used as a means to simply improve testability of code. If DI is your only option for testing, then the design usually needs to be refactored. On the other hand, implementing DI by design where it can be used as a part of the run-time code can provide clear advantages, but should not be misused as it can also result in classes being misused, and it should not be over-used simply because it seems cool and easy, as it can in such cases over-complicate the design of your code.
:-)
It's possible to write generic data access objects in Java:
package persistence;

import java.io.Serializable;
import java.util.List;

public interface GenericDao<T, K extends Serializable>
{
    T find(K id);
    List<T> find();
    List<T> find(T example);
    List<T> find(String queryName, String[] paramNames, Object[] bindValues);
    K save(T instance);
    void update(T instance);
    void delete(T instance);
}
I can't speak for LINQ, but .NET has generics so it should be possible.
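For instance, a rough C# counterpart of the Java interface above might look like this (a sketch, not any particular library's API):
using System.Collections.Generic;

public interface IGenericDao<T, TKey>
{
    T Find(TKey id);
    IList<T> Find();
    IList<T> Find(T example);
    IList<T> Find(string queryName, string[] paramNames, object[] bindValues);
    TKey Save(T instance);
    void Update(T instance);
    void Delete(T instance);
}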
