The classical solution, if you have a concrete implementation of a dependency in a class you want to test, is to add a layer of indirection where you have full control.
So it comes down to adding more and more indirection layers (interfaces that can be stubbed/mocked in unit tests).
But somewhere in my tests I must have the "real" implementation of my dependency. So what about that implementation? Test it? Don't test it?
Take this example:
I had some dependencies on paths that I need in my application. So I extracted an interface, IPathProvider (that I can fake in my tests). Here is the interface:
public interface IPathProvider
{
string AppInstallationFolder { get; }
string SystemCommonApplicationDataFolder { get; }
}
The concrete implementation PathProvider looks like this:
public class PathProvider : IPathProvider
{
private string _appInstallationFolder;
private string _systemCommonApplicationDataFolder;
public string AppInstallationFolder
{
get
{
if (_appInstallationFolder == null)
{
try
{
_appInstallationFolder =
Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
}
catch (Exception ex)
{
throw new MyException("Error reading Application-Installation-Path.", ex);
}
}
return _appInstallationFolder;
}
}
public string SystemCommonApplicationDataFolder
{
get
{
if (_systemCommonApplicationDataFolder == null)
{
try
{
_systemCommonApplicationDataFolder =
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
}
catch (Exception ex)
{
throw new MyException("Error reading System Common Application Data Path.", ex);
}
}
return _systemCommonApplicationDataFolder;
}
}
}
Do you test such code, and if so, how?
That PathProvider class seems a lot like a database repository class - it integrates with an external system (e.g., filesystem, database, etc).
These classes live at the boundaries of your system - other classes depend on them, and they depend solely on an external system. They connect your system to the external world.
As such, they're subject to integration testing - not unit testing.
I usually just tag my tests (e.g. with xUnit's Trait attribute) to keep them separate from regular tests, and so I can disable them when running in isolation (e.g. without a database). An alternative would be to separate them into a whole new project in the same solution, but I personally think that's overkill.
[Fact, Trait("Category", "Integration")]
public void ReturnsCorrectAppInstallationFolder()
{
// Arrange
var assemblyFilename = Path.GetFileName(typeof(SomeType).Assembly.Location);
var provider = new PathProvider();
// Act
var location = provider.AppInstallationFolder;
// Assert
Assert.NotEmpty(Directory.GetFiles(location, assemblyFilename));
}
The answer depends on your definition of 'unit'. In the case where a class is used as a unit then the answer is yes. However, testing individual classes has very limited business value unless you're developing a class library of some kind.
Another alternative is to define a unit as being a user-level (or system) requirement. A test for this kind of unit will be along the lines of: given user input X, test for system output Y. Units defined in this way have definite, measurable business value that can be traced to user requirements.
For example, when using user stories you may have a requirement such as:
As an administrator, I want to install the package to a specific folder.
This may have a number of associated scenarios to cover different permissible and prohibited folders. Each of these will have a unit test that provides the user input and verifies the output.
Given: The user has specified C:\Program Files\MyApp as the install folder.
When: The installer has run.
Then: The folder C:\Program Files\MyApp exists on the drive and all of the application files are present in that location.
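For example, a requirement-level test for that scenario might look something like the sketch below. InstallerRunner and the exact assertions are hypothetical; the point is that the test exercises user input and verifies system output rather than a single class:
[Test]
public void Installs_All_Application_Files_To_The_Folder_The_User_Specified()
{
    // Given: the user has specified an install folder
    var installFolder = @"C:\Program Files\MyApp";

    // When: the installer has run (InstallerRunner is a made-up entry point)
    InstallerRunner.Run(installFolder);

    // Then: the folder exists and contains the application files
    Assert.IsTrue(Directory.Exists(installFolder));
    Assert.IsNotEmpty(Directory.GetFiles(installFolder, "*.dll"));
}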
When unit testing at a class interface level as you appear to be doing, you can easily end up with your software design tightly coupled to your unit test framework and implementation, rather than to your users' requirements.
The two answers have a lot of value, but I'll add one more thing which I believe is of great importance.
I have a feeling that people often get tied up in discussions about what should be tested and what should not; whether something should be treated as a unit or not; whether it's an integration test or not.
Generally, the goal of unit tests is to make sure our code works as intended. IMO we should test everything we are able to that can get broken. Organizing it into some formal type of test and naming it is needed, but it is of secondary importance to the test itself. My advice would be that if you have any doubts, always ask yourself the question: "Am I afraid that it can be broken?". So... "where to stop"? Where you are not afraid of the functionality getting broken.
And back to your specific case. As #dcastro noticed, this does look a lot like a case of integration testing rather than classic logic unit testing. Regardless of how we name it, though - can it be broken? Yes it can. Should it be tested?
It MAY be tested, although I'd be far from calling it crucial. There is little added logic of its own and the complexity is low. Would I test it? Yes, if I had the time to do so. How would I do it? #dcastro's example code is the way I'd approach the problem.
Related
I am currently trying to learn NUnit for the 1st time.
For a C# program I want to develop using TDD, I have decided I want to write a User class to start with. This User class will (semantically) look like this:
using System;
namespace SSH_file_drop
{
public class User
{
private Boolean authenticated = false;
public string userName = null;
//user type (e.g. "sender"), referenced by canSend below
public string userType = null;
//one-time object 'transmission' instance to allow single-use file transmission field
Transmission fileVehicle = null;
//property to discern whether user has been correctly authenticated
public Boolean isAuthenticated
{
get { return authenticated; }
}
public Boolean canSend ()
{
if (authenticated)
{
return this.userType != "sender";
}
return false;
}
public User(String inputUName)
{
userName = inputUName;
}
public static void generateUser(string userName)
{
//contact server to attempt to register new user
}
public void ChangePassword(String oldPassword, String newPassword)
{
//ask server to change user password
}
public Boolean SetUpTransmission()
{
if (canSend())
{
try
{
fileVehicle = new Transmission(this);
return true;
}
catch (Exception e)
{
//write exception message
return false;
}
}
else
{
return false;
}
}
}
}
Really just placeholder code at the moment.
In order to test the User class, however, I am trying to write a TestFixture that creates separate instances of the User class and somehow stores them persistently until teardown, enacting the same tests upon each object.
My idea was to create an array of User objects as a TestCase data source to test everything in order via the [Order(<n>)] attribute (bar User instance initialisation test method), but I have read here that a new instance of the TestFixture will be created for each method within it at runtime, so I would not be able to modify persistent data in a Test Fixture in this way.
Since I am trying to implement stateful logic - the User object's isAuthenticated() (and subsequent tests depend on this authentication, as all User data is modelled as being stored in a remote database) - is there a way to proceed without creating tests with repeated operations (create object, authenticate, check userType, etc.), and thus multiple assertions?
Answering the part of your question that deals with how NUnit works...
The answer you quoted regarding the lifetime of fixture instances in NUnit is misleading, and a little surprising, because there is no big mystery about it!
The quotation from Jim Newkirk, the primary developer of NUnit V2, is just a statement of how he wished he had done NUnit V2. He carried out those ideas in the xUnit.net framework, but it isn't relevant to NUnit 3.
NUnit 3, like NUnit V2, creates a single instance of a TestFixture, which is used for all your tests. You can create objects to use in your tests using [OneTimeSetUp] and store them as members of the class. If those objects are stateful, you should avoid use of parallel test execution for the tests in the fixture that make use of them.
If additional per-test setup is needed, then you can use [SetUp] for that purpose and [TearDown] to remove any changes that would affect subsequent tests negatively.
You can also specify ordering of tests, but this is usually not a very good idea because one broken test may cause subsequent tests to also break. Try to make each test independent if you can.
Note also that if you want to be able to run the same fixture multiple times against various types of objects, a parameterized fixture is a good option. Just pass into the constructor enough information so that the proper object initialization can be done in the one-time setup.
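A minimal sketch of that layout, reusing the User class from your question (the fixture arguments "alice"/"bob" are made up):
// Sketch: one fixture instance per set of constructor arguments; the User
// instance created in [OneTimeSetUp] is shared by every test in that instance.
[TestFixture("alice")]
[TestFixture("bob")]
public class UserWorkflowTests
{
    private readonly string _userName;
    private User _user;

    public UserWorkflowTests(string userName)
    {
        _userName = userName;
    }

    [OneTimeSetUp]
    public void CreateUser()
    {
        _user = new User(_userName);
    }

    [TearDown]
    public void UndoPerTestChanges()
    {
        // Revert anything a test changed so later tests are unaffected.
    }

    [Test]
    public void NewUserIsNotAuthenticated()
    {
        Assert.IsFalse(_user.isAuthenticated);
    }
}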
Be sure to read the details of all the above in the docs: https://github.com/nunit/docs/wiki
The approach that we take in this situation is to create helper methods that manufacture instances of the class under test with the appropriate state.
This is effectively the approach that you have mentioned, with a slightly different mechanism.
The other issue to consider with your proposed approach, which the approach I suggest would avoid, is ordered and/or parallel execution of tests: if you have a series of stateful objects, the tests will not be able to be reliably run in parallel or out of order.
Note that in order to fully support this mode of test development, you may need to introduce test-only methods and/or constructors.
For example, IsAuthenticated in your class is a readonly property. In order to simulate this property being on or off, you may need to introduce a test-only constructor (which I tend to avoid since someone will use it at some point even though you have documented that it is for test only) or change the implementation of the property and add a setter method.
If you change the Authenticated property to use a backing store member, then you can add a test-only method (although there is a chance that another developer will use this method, if it is well named its use is much more obvious than a test-only constructor's).
private bool m_IsAuthenticated;
public bool IsAuthenticated {
get {
return m_IsAuthenticated;
}
}
public void Set_IsAuthenticated_ForTestONLY(bool value) {
m_IsAuthenticated = value;
}
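With that seam in place, the helper methods I mentioned can be as simple as the sketch below. It assumes your User class has been given the Set_IsAuthenticated_ForTestONLY method shown above; TestUsers is a made-up test-project class:
// Test-project helper that manufactures User instances in the state a test needs,
// so each test builds its own state instead of depending on earlier tests.
internal static class TestUsers
{
    public static User AuthenticatedUser(string name)
    {
        var user = new User(name);
        user.Set_IsAuthenticated_ForTestONLY(true);
        return user;
    }
}
Each test then asks the helper for exactly the state it needs instead of relying on the order in which earlier tests ran.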
I'm attempting to learn TDD but having a hard time getting my head around what / how to test with a little app I need to write.
The (simplified somewhat) spec for the app is as follows:
It needs to take from the user the location of a csv file, the location of a word document mailmerge template and an output location.
The app will then read the csv file and for each row, merge the data with the word template and output to the folder specified.
Just to be clear, I'm not asking how I would go about coding such an app as I'm confident that I know how to do it if I just went ahead and started. But if I wanted to do it using TDD, some guidance on the tests to write would be appreciated as I'm guessing I don't want to be testing reading a real csv file, or testing the 3rd party component that does the merge or converts to pdf.
I think just some general TDD guidance would be a great help!
I'd start out by thinking of scenarios for each step of your program, starting with failure cases and their expected behavior:
User provides a null csv file location (throws an ArgumentNullException).
User provides an empty csv file location (throws an ArgumentException).
The csv file specified by the user doesn't exist (whatever you think is appropriate).
Next, write a test for each of those scenarios and make sure it fails. Next, write just enough code to make the test pass. That's pretty easy for some of these conditions, because the code that makes your test pass is often the final code:
public class Merger {
public void Merge(string csvPath, string templatePath, string outputPath) {
if (csvPath == null) { throw new ArgumentNullException("csvPath"); }
}
}
After that, move into standard scenarios:
The specified csv file has one line (merge should be called once, output written to the expected location).
The specified csv file has two lines (merge should be called twice, output written to the expected location).
The output file's name conforms to your expectations (whatever those are).
And so on. Once you get to this second phase, you'll start to identify behavior you want to stub and mock. For example, checking whether a file exists or not - .NET doesn't make it easy to stub this, so you'll probably need to create an adapter interface and class that will let you isolate your program from the actual file system (to say nothing of actual CSV files and mail-merge templates). There are other techniques available, but this method is fairly standard:
public interface IFileFinder { bool FileExists(string path); }
// Concrete implementation to use in production
public class FileFinder: IFileFinder {
public bool FileExists(string path) { return File.Exists(path); }
}
public class Merger {
IFileFinder finder;
public Merger(IFileFinder finder) { this.finder = finder; }
}
In tests, you'll pass in a stub implementation:
[Test]
[ExpectedException(typeof(FileNotFoundException))]
public void Fails_When_Csv_File_Does_Not_Exist() {
IFileFinder finder = mockery.NewMock<IFileFinder>();
Merger merger = new Merger(finder);
Stub.On(finder).Method("FileExists").Will(Return.Value(false));
merger.Merge("csvPath", "templatePath", "outputPath");
}
Simple general guidance:
You write unit tests first. At the beginning they all fail.
Then you go into the class under test and write code until the tests related to each method pass.
Do this for every public method of your types.
By writing unit tests you actually specify the requirements, just in another form: easy-to-read code.
Looking at it from another angle: when you receive a new black-boxed class and unit tests for it, you can read the unit tests to see what the class does and how it behaves.
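As a tiny illustration of tests-as-specification, the names alone in the sketch below describe the behaviour of a hypothetical MergeJobValidator without reading its implementation:
[TestFixture]
public class MergeJobValidatorSpecification
{
    [Test]
    public void Rejects_A_Null_Csv_Path()
    {
        var validator = new MergeJobValidator();
        Assert.Throws<ArgumentNullException>(
            () => validator.Validate(null, "template.docx", @"C:\output"));
    }

    [Test]
    public void Accepts_A_Job_Where_All_Paths_Are_Supplied()
    {
        var validator = new MergeJobValidator();
        Assert.DoesNotThrow(
            () => validator.Validate("people.csv", "template.docx", @"C:\output"));
    }
}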
To read more about unit testing I recommend a very good book: Art Of Unit Testing
To be able to unit test you need to decouple the class from any dependencies so you can effectively just test the class itself.
To do this you'll need to inject any dependencies into the class. You would typically do this by passing in an object that implements the dependency interface, into your class in the constructor.
Mocking frameworks are used to create a mock instance of your dependency that your class can call during the test. You define the mock to behave in the same way as your dependency would and then verify its state at the end of the test.
I would recommend having a play with Rhino Mocks and going through the examples in the documentation to get a feel for how this works.
http://ayende.com/projects/rhino-mocks.aspx
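To give a feel for it, here is a rough sketch using Rhino Mocks' AAA syntax; the IGreetingRepository/Greeter pair is made up purely for illustration:
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical dependency and consumer, just to show the shape of the test.
public interface IGreetingRepository
{
    string GetGreeting(string name);
}

public class Greeter
{
    private readonly IGreetingRepository _repository;
    public Greeter(IGreetingRepository repository) { _repository = repository; }
    public string Greet(string name) { return _repository.GetGreeting(name).ToUpper(); }
}

[TestFixture]
public class GreeterTests
{
    [Test]
    public void Greet_Uses_The_Repository_And_Uppercases_The_Result()
    {
        // Create a stub for the dependency and tell it how to behave.
        var repository = MockRepository.GenerateStub<IGreetingRepository>();
        repository.Stub(r => r.GetGreeting("world")).Return("hello world");

        // Inject the stub through the constructor, exactly as production code
        // would inject the real implementation.
        var greeter = new Greeter(repository);

        Assert.AreEqual("HELLO WORLD", greeter.Greet("world"));
        repository.AssertWasCalled(r => r.GetGreeting("world"));
    }
}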
Consider the following chunk of a service:
public class ProductService : IProductService {
private IProductRepository _productRepository;
// Some initialization stuff
public Product GetProduct(int id) {
try {
return _productRepository.GetProduct(id);
} catch (Exception e) {
// log, wrap then throw
}
}
}
Let's consider a simple unit test:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = EntityGenerator.Product();
_productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
_productRepositoryMock.VerifyAll();
}
At first it seems that this test is ok. But let's change our service method a little bit:
public Product GetProduct(int id) {
try {
var product = _productRepository.GetProduct(id);
product.Owner = "totallyDifferentOwner";
return product;
} catch (Exception e) {
// log, wrap then throw
}
}
How do I rewrite the given test so that it passes with the first service method and fails with the second one?
How do you handle this kind of simple scenarios?
HINT 1: The given test is bad because product and returnedProduct are actually the same reference.
HINT 2: Implementing equality members (object.equals) is not the solution.
HINT 3: For now, I create a clone of the Product instance (expectedProduct) with AutoMapper - but I don't like this solution.
HINT 4: I'm not testing that the SUT does NOT do sth. I'm trying to test that SUT DOES return the same object as it is returned from repository.
Personally, I wouldn't care about this. The test should make sure that the code is doing what you intend. It's very hard to test what code is not doing; I wouldn't bother in this case.
The test actually should just look like this:
[Test]
public void GetProduct_GetsProductFromRepository()
{
var product = EntityGenerator.Product();
_productRepositoryMock
.Setup(pr => pr.GetProduct(product.Id))
.Returns(product);
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreSame(product, returnedProduct);
}
I mean, it's one line of code you are testing.
Why don't you mock the product as well as the productRepository?
If you mock the product using a strict mock, you will get a failure when the repository touches your product.
If this is a completely ridiculous idea, can you please explain why? Honestly, I'd like to learn.
One way of thinking of unit tests is as coded specifications. When you use the EntityGenerator to produce instances both for the Test and for the actual service, your test can be seen to express the requirement
The Service uses the EntityGenerator to produce Product instances.
This is what your test verifies. It's underspecified because it doesn't mention if modifications are allowed or not. If we say
The Service uses the EntityGenerator to produce Product instances, which cannot be modified.
Then we get a hint as to the test changes needed to capture the error:
var product = EntityGenerator.Product();
// [ Change ]
var originalOwner = product.Owner;
// assuming owner is an immutable value object, like String
// [...] - record other properties as well.
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
// [ Change ] verify the product is equivalent to the original spec
Assert.AreEqual(originalOwner, returnedProduct.Owner);
// [...] - test other properties as well
(The change is that we retrieve the owner from the freshly created Product and check the owner from the Product returned from the service.)
This embodies the fact that the Owner and other product properties must equal the original value from the generator. This may seem like I'm stating the obvious, since the code is pretty trivial, but it runs quite deep if you think in terms of requirement specifications.
I often "test my tests" by stipulating "if I change this line of code, tweak a critical constant or two, or inject a few code burps (e.g. changing != to ==), which test will capture the error?" Doing it for real finds if there is a test that captures the problem. Sometimes not, in which case it's time to look at the requirements implicit in the tests, and see how we can tighten them up. In projects with no real requirements capture/analysis this can be a useful tool to toughen up tests so they fail when unexpected changes occur.
Of course, you have to be pragmatic. You can't reasonably expect to handle all changes - some will simply be absurd and the program will crash. But logical changes like the Owner change are good candidates for test strengthening.
By dragging talk of requirements into a simple coding fix, some may think I've gone off the deep end, but thorough requirements help produce thorough tests, and if you have no requirements, then you need to work doubly hard to make sure your tests are thorough, since you're implicitly doing requirements capture as you write the tests.
EDIT: I'm answering this from within the contraints set in the question. Given a free choice, I would suggest not using the EntityGenerator to create Product test instances, and instead create them "by hand" and use an equality comparison. Or more direct, compare the fields of the returned Product to specific (hard-coded) values in the test, again, without using the EntityGenerator in the test.
Q1: Don't make changes to code then write a test. Write a test first for the expected behavior. Then you can do whatever you want to the SUT.
Q2: You don't make the changes in your Product Gateway to change the owner of the product. You make the change in your model.
But if you insist, then listen to your tests. They are telling you that you have the possibility for products to be pulled from the gateway that have the incorrect owners. Oops, looks like a business rule. It should be tested for in the model.
Also, you're using a mock. Why are you testing an implementation detail? The gateway only cares that _productRepository.GetProduct(id) returns a product, not what the product is.
If you test in this manner you will be creating fragile tests. What if Product changes further? Now you have failing tests all over the place.
Your consumers of product (MODEL) are the only ones that care about the implementation of Product.
So your gateway test should look like this:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = EntityGenerator.Product();
_productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);
_productService.GetProduct(product.Id);
_productRepositoryMock.VerifyAll();
}
Don't put business logic where it doesn't belong! And its corollary: don't test for business logic where there should be none.
If you really want to guarantee that the service method doesn't change the attributes of your products, you have two options:
Define the expected product attributes in your test and assert that the resulting product matches these values. (This appears to be what you're doing now by cloning the object.)
Mock the product and specify expectations to verify that the service method does not change its attributes.
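For the former option, a sketch might look like this (fixed values instead of EntityGenerator, and assuming Product exposes settable Id and Owner as in your snippets):
[Test]
public void GetProduct_Does_Not_Change_The_Product_Attributes()
{
    // Hypothetical known values so the expectations are explicit.
    var product = new Product { Id = 42, Owner = "originalOwner" };
    _productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(product);

    var returned = _productService.GetProduct(42);

    Assert.AreEqual(42, returned.Id);
    Assert.AreEqual("originalOwner", returned.Owner);
}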
This is how I'd do the latter with NMock:
// If you're not a purist, go ahead and verify all the attributes in a single
// test - Get_Product_Does_Not_Modify_The_Product_Returned_By_The_Repository
[Test]
public void Get_Product_Does_Not_Modify_Owner() {
Product mockProduct = mockery.NewMock<Product>(MockStyle.Transparent);
Stub.On(_productRepositoryMock)
.Method("GetProduct")
.Will(Return.Value(mockProduct));
Expect.Never
.On(mockProduct)
.SetProperty("Owner");
_productService.GetProduct(0);
mockery.VerifyAllExpectationsHaveBeenMet();
}
My previous answer stands, though it assumes the members of the Product class that you care about are public and virtual. This is not likely if the class is a POCO / DTO.
What you're looking for might be rephrased as a way to compare the values (not the instances) of the objects.
One way is to compare whether they match when serialized. I did this recently for some code... I was replacing a long parameter list with a parameter object. The code is crufty and I don't want to refactor it, as it's going away soon anyhow, so I just do this serialization comparison as a quick way to see if they have the same value.
I wrote some utility functions... Assert2.IsSameValue(expected,actual) which functions like NUnit's Assert.AreEqual(), except it serializes via JSON before comparing. Likewise, It2.IsSameSerialized() can be used to describe parameters passed to mocked calls in a manner similar to Moq.It.Is().
using System.Web.Script.Serialization;
using Moq;
using NUnit.Framework;

public class Assert2
{
public static void IsSameValue(object expectedValue, object actualValue) {
JavaScriptSerializer serializer = new JavaScriptSerializer();
var expectedJSON = serializer.Serialize(expectedValue);
var actualJSON = serializer.Serialize(actualValue);
Assert.AreEqual(expectedJSON, actualJSON);
}
}
public static class It2
{
public static T IsSameSerialized<T>(T expectedRecord) {
JavaScriptSerializer serializer = new JavaScriptSerializer();
string expectedJSON = serializer.Serialize(expectedRecord);
return Match<T>.Create(delegate(T actual) {
string actualJSON = serializer.Serialize(actual);
return expectedJSON == actualJSON;
});
}
}
Well, one way is to pass around a mock of the product rather than the actual product. Verify that nothing affects the product by making it strict. (I assume you are using Moq; it looks like you are.)
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = new Mock<Product>(MockBehavior.Strict);
product.SetupGet(p => p.Id).Returns(42); // allow the strict mock's Id to be read
_productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(product.Object);
Product returnedProduct = _productService.GetProduct(42);
Assert.AreEqual(product.Object, returnedProduct);
_productRepositoryMock.VerifyAll();
product.VerifyAll();
}
That said, I'm not sure you should be doing this. The test is doing too much, and might indicate there is another requirement somewhere. Find that requirement and create a second test. It might be that you just want to stop yourself from doing something stupid. I don't think that scales, because there are so many stupid things you can do. Trying to test each would take too long.
I'm not sure the unit test should care about what a given method does not do. There are a zillion possible steps. Strictly speaking, the test "GetProduct(id) returns the same product as getProduct(id) on productRepository" is correct with or without the line product.Owner = "totallyDifferentOwner".
However, you can create a test (if it is required) "GetProduct(id) returns a product with the same content as getProduct(id) on productRepository", where you create a (probably deep) clone of one product instance and then compare the contents of the two objects (so no object.Equals or object.ReferenceEquals).
Unit tests are not a guarantee of 100% bug-free and correct behaviour.
You can return an interface to product instead of a concrete Product.
Such as
public IProduct GetProduct(int id)
{
return _productRepository.GetProduct(id);
}
And then verify the Owner property was not set:
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg.Is.Anything);
If you care about all the properties and/or methods, then there is probably a pre-existing way with Rhino. Otherwise you can make an extension method that probably uses reflection, such as:
Dep<IProduct>().AssertNoPropertyOrMethodWasCalled()
Our behaviour specifications are like so:
[Specification]
public class When_product_service_has_get_product_called_with_any_id
: ProductServiceSpecification
{
private int _productId;
private IProduct _actualProduct;
[It]
public void Should_return_the_expected_product()
{
this._actualProduct.Should().Be.EqualTo(Dep<IProduct>());
}
[It]
public void Should_not_have_the_product_modified()
{
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
// or write your own extension method:
// Dep<IProduct>().AssertNoPropertyOrMethodWasCalled();
}
public override void GivenThat()
{
var randomGenerator = new RandomGenerator();
this._productId = randomGenerator.Generate<int>();
Stub<IProductRepository, IProduct>(r => r.GetProduct(this._productId));
}
public override void WhenIRun()
{
this._actualProduct = Sut.GetProduct(this._productId);
}
}
Enjoy.
If all consumers of ProductService.GetProduct() expect the same result as if they had asked it from the ProductRepository, why don't they just call ProductRepository.GetProduct() itself ?
It seems you have an unwanted Middle Man here.
There's not much value added to ProductService.GetProduct(). Dump it and have the client objects call ProductRepository.GetProduct() directly. Put the error handling and logging into ProductRepository.GetProduct() or the consumer code (possibly via AOP).
No more Middle Man, no more discrepancy problem, no more need to test for that discrepancy.
Let me state the problem as I see it.
You have a method and a test method. The test method validates the original method.
You change the system under test by altering the data. What you want to see is that the same unit test fails.
So in effect you're creating a test that verifies that the data in the data source matches the data in your fetched object AFTER the service layer returns it. That probably falls under the class of "integration test."
You don't have many good options in this case. Ultimately, you want to know that every property is the same as some passed-in property value. So you're forced to test each property independently. You could do this with reflection, but that won't work well for nested collections.
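A rough sketch of the reflection route, for flat public properties only (nested objects and collections would still need special handling, as noted):
using System.Reflection;
using NUnit.Framework;

public static class PropertyAssert
{
    // Compares every public readable property of two instances of the same type.
    public static void AllPropertiesAreEqual<T>(T expected, T actual)
    {
        foreach (PropertyInfo property in typeof(T).GetProperties())
        {
            if (!property.CanRead)
                continue;

            object expectedValue = property.GetValue(expected, null);
            object actualValue = property.GetValue(actual, null);

            Assert.AreEqual(expectedValue, actualValue,
                "Property '" + property.Name + "' differs.");
        }
    }
}
Usage would simply be PropertyAssert.AllPropertiesAreEqual(expectedProduct, returnedProduct); in the test.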
I think the real question is: why test your service model for the correctness of your data layer, and why write code in your service model just to break the test? Are you concerned that you or other users might set objects to invalid states in your service layer? In that case you should change your contract so that the Product.Owner is readonly.
You'd be better off writing a test against your data layer to ensure that it fetches data correctly, then use unit tests to check the business logic in your service layer. If you're interested in more details about this approach reply in the comments.
Having looked at all 4 hints provided, it seems that you want to make an object immutable at runtime. The C# language does not support that; it is possible only by refactoring the Product class itself. For the refactoring you can take an IReadonlyProduct approach and protect all setters from being called. This, however, still allows modification of elements of containers like List<> being returned by getters. A ReadOnly collection won't help either. Only WPF lets you change immutability at runtime, with the Freezable class.
So I see the only proper way to make sure objects have the same contents is by comparing them. Probably the easiest way would be to add the [Serializable] attribute to all involved entities and do the serialization-with-comparison as suggested by Frank Schwieterman.
In bigger projects my unit tests usually require some "dummy" (sample) data to run with. Some default customers, users, etc. I was wondering how your setup looks like.
How do you organize/maintain this data?
How do you apply it to your unit tests (any automation tool)?
Do you actually require test data or do you think it's useless?
My current solution:
I differentiate between Master data and Sample data where the former will be available when the system goes into production (installed for the first time) and the latter are typical use cases I require for my tests to run (and to play during development).
I store all this in an Excel file (because it's so damn easy to maintain) where each worksheet contains a specific entity (e.g. users, customers, etc.) and is flagged either master or sample.
I have 2 test cases which I (mis)use to import the necessary data:
InitForDevelopment (Create Schema, Import Master data, Import Sample data)
InitForProduction (Create Schema, Import Master data)
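In code, the two entry points are little more than the sketch below; CreateSchema and ImportWorksheets are placeholders for my Excel import helpers:
// CreateSchema / ImportWorksheets are hypothetical helpers around the Excel workbook.
[Test, Explicit("Run manually to (re)initialise a database from the Excel workbook")]
public void InitForDevelopment()
{
    CreateSchema();
    ImportWorksheets("master");
    ImportWorksheets("sample");
}

[Test, Explicit("Run manually to (re)initialise a database from the Excel workbook")]
public void InitForProduction()
{
    CreateSchema();
    ImportWorksheets("master");
}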
I use the repository pattern and have a dummy repository that's instantiated by the unit tests in question. It provides a known set of data that encompasses examples that are both within and out of range for various fields.
This means that I can test my code unchanged by supplying the instantiated fake repository from the test unit for testing, or the production repository at runtime (via dependency injection (Castle)).
I don't know of a good web reference for this but I learnt much from Steven Sanderson's Professional ASP.NET MVC 1.0 book published by Apress. The MVC approach naturally provides the separation of concern that's necessary to allow your testing to operate with fewer dependencies.
The basic elements are that your repository implements an interface for data access, and that same interface is then implemented by a fake repository that you construct in your test project.
In my current project I have an interface thus:
namespace myProject.Abstract
{
public interface ISeriesRepository
{
IQueryable<Series> Series { get; }
}
}
This is implemented as both my live data repository (using Linq to SQL) and also a fake repository thus:
namespace myProject.Tests.Respository
{
class FakeRepository : ISeriesRepository
{
private static IQueryable<Series> fakeSeries = new List<Series> {
    new Series { id = 1, name = "Series1", openingDate = new DateTime(2001,1,1) },
    new Series { id = 2, name = "Series2", openingDate = new DateTime(2002,1,30) },
    ...
    new Series { id = 10, name = "Series10", openingDate = new DateTime(2001,5,5) }
}.AsQueryable();
public IQueryable<Series> Series
{
get { return fakeSeries; }
}
}
}
Then the class that's consuming the data is instantiated passing the repository reference to the constructor:
namespace myProject
{
public class SeriesProcessor
{
private ISeriesRepository seriesRepository;
public SeriesProcessor(ISeriesRepository seriesRepository)
{
this.seriesRepository = seriesRepository;
}
public IQueryable<Series> GetCurrentSeries()
{
return from s in seriesRepository.Series
where s.openingDate.Date <= DateTime.Now.Date
select s;
}
}
}
Then in my tests I can approach it thus:
namespace myProject.Tests
{
[TestClass]
public class SeriesTests
{
[TestMethod]
public void Meaningful_Test_Name()
{
// Arrange
SeriesProcessor processor = new SeriesProcessor(new FakeRepository());
// Act
IQueryable<Series> currentSeries = processor.GetCurrentSeries();
// Assert
Assert.AreEqual(10, currentSeries.Count());
}
}
}
Then look at Castle Windsor for the inversion of control approach for your live project, to allow your production code to automatically instantiate your live repository through dependency injection. That should get you closer to where you need to be.
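A minimal Castle Windsor registration for the production side might look like the sketch below; SqlSeriesRepository is a stand-in name for the Linq to SQL implementation of ISeriesRepository:
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class ContainerBootstrapper
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Map the data-access abstraction to the live repository...
        container.Register(Component.For<ISeriesRepository>()
                                    .ImplementedBy<SqlSeriesRepository>());

        // ...and register the consumer so its constructor dependency is injected.
        container.Register(Component.For<SeriesProcessor>());

        return container;
    }
}

// Production code then resolves the processor with its live repository injected:
// var processor = ContainerBootstrapper.Build().Resolve<SeriesProcessor>();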
In our company we have been discussing exactly this problem for weeks and months.
To follow the guidelines of unit testing:
Each test must be atomic and must not depend on other tests (no data sharing); that means each test must have its own data at the beginning and clean that data up at the end.
Our product is so complex (5 years of development, over 100 tables in the database) that it is nearly impossible to maintain this in an acceptable way.
We tried database scripts which create and delete the data before/after each test (there are automatic methods which call them).
I would say you are on a good path with the Excel files.
A few ideas to improve it a little:
If you have a database behind your software, google for "NDbUnit". It's a framework for inserting and deleting data in databases for unit tests.
If you have no database, XML may be a little more flexible than a system like Excel.
Not directly answering the question, but one way to limit the number of tests that need to use dummy data is to use a mocking framework to create mocked objects that you can use to fake the behavior of any dependencies you have in a class.
I find that by using mocked objects rather than a specific concrete implementation you can drastically reduce the amount of real data you need, as mocks don't process the data you pass into them. They just perform exactly as you want them to.
I'm still sure you probably need dummy data in a lot of instances so apologies if you're already using or are aware of mocking frameworks.
Just to be clear, you need to differentiate between UNIT testing (testing a module with no implied dependencies on other modules) and app testing (testing parts of the application).
For the former, you need a mocking framework (I'm only familiar with Perl ones, but I'm sure they exist in Java/C#). A sign of a good framework would be the ability to take a running app, RECORD all the method calls/returns, and then mock the selected methods (e.g. the ones you are not testing in this specific unit test) using the recorded data.
For good unit tests you MUST mock every external dependency - e.g., no calls to filesystem, no calls to DB or other data access layers unless that is what you are testing, etc...
For the latter, the same mocking framework is useful, plus the ability to create test data sets (that can be reset for each test). The data to be loaded for the tests can reside in any offline storage that you can load from - BCP files for Sybase DB data, XML, whatever tickles your fancy. We use both BCP and XML.
Please note that this sort of "load test data into DB" testing is SIGNIFICANTLY easier if your overall company framework allows - or rather enforces - a "What is the real DB table name for this table alias" API. That way, you can cause your application to look at cloned "test" DB tables instead of real ones during testing - on top of such table aliasing API's main purpose of enabling one to move DB tables from one database to another.
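The table-alias idea boils down to an interface along these lines (a sketch; the names are made up):
// Resolves a logical table alias to the physical table the code should query.
public interface ITableNameResolver
{
    string Resolve(string tableAlias);
}

// Production: aliases map straight to the real tables.
public class ProductionTableNameResolver : ITableNameResolver
{
    public string Resolve(string tableAlias) { return tableAlias; }
}

// Testing: the same aliases are redirected to cloned "test" tables
// that were loaded from the BCP/XML test data sets.
public class TestTableNameResolver : ITableNameResolver
{
    public string Resolve(string tableAlias) { return "test_" + tableAlias; }
}
During testing the application is given the test resolver, so the same data-access code transparently hits the cloned tables loaded from the test data sets.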
I'm trying my hand at behavior driven development and I'm finding myself second guessing my design as I'm writing it. This is my first greenfield project and it may just be my lack of experience. Anyway, here's a simple spec for the class(s) I'm writing. It's written in NUnit in a BDD style instead of using a dedicated behavior driven framework. This is because the project targets .NET 2.0 and all of the BDD frameworks seem to have embraced .NET 3.5.
[TestFixture]
public class WhenUserAddsAccount
{
private DynamicMock _mockMainView;
private IMainView _mainView;
private DynamicMock _mockAccountService;
private IAccountService _accountService;
private DynamicMock _mockAccount;
private IAccount _account;
[SetUp]
public void Setup()
{
_mockMainView = new DynamicMock(typeof(IMainView));
_mainView = (IMainView) _mockMainView.MockInstance;
_mockAccountService = new DynamicMock(typeof(IAccountService));
_accountService = (IAccountService) _mockAccountService.MockInstance;
_mockAccount = new DynamicMock(typeof(IAccount));
_account = (IAccount)_mockAccount.MockInstance;
}
[Test]
public void ShouldCreateNewAccount()
{
_mockAccountService.ExpectAndReturn("Create", _account);
MainPresenter mainPresenter = new MainPresenter(_mainView, _accountService);
mainPresenter.AddAccount();
_mockAccountService.Verify();
}
}
None of the interfaces used by MainPresenter have any real implementations yet. AccountService will be responsible for creating new accounts. There can be multiple implementations of IAccount defined as separate plugins. At runtime, if there is more than one then the user will be prompted to choose which account type to create. Otherwise AccountService will simply create an account.
One of the things that has me uneasy is how many mocks are required just to write a single spec/test. Is this just a side effect of using BDD or am I going about this thing the wrong way?
[Update]
Here's the current implementation of MainPresenter.AddAccount
public void AddAccount()
{
IAccount account;
if (AccountService.AccountTypes.Count == 1)
{
account = AccountService.Create();
}
_view.Accounts.Add(account);
}
Any tips, suggestions or alternatives welcome.
When doing top to down development it's quite common to find yourself using a lot of mocks. The pieces you need aren't there so naturally you need to mock them. With that said this does feel like an acceptance level test. In my experience BDD or Context/Specification starts to get a bit weird at the unit test level. At the unit test level I'd probably be doing something more along the lines of...
when_adding_an_account
should_use_account_service_to_create_new_account
should_update_screen_with_new_account_details
You may want to reconsider your usage of an interface for IAccount. I personally stick with keeping interfaces for services over domain entities. But that's more of a personal preference.
A few other small suggestions...
You may want to consider using a Mocking framework such as Rhino Mocks (or Moq) which allow you to avoid using strings for your assertions.
_mockAccountService.Expect(mock => mock.Create())
.Return(_account);
If you are doing BDD style one common pattern I've seen is using chained classes for test setup. In your example...
public class MainPresenterSpec
{
// Protected variables for Mocks
[SetUp]
public void Setup()
{
// Setup Mocks
}
}
[TestFixture]
public class WhenUserAddsAccount : MainPresenterSpec
{
[Test]
public void ShouldCreateNewAccount()
{
}
}
Also I'd recommend changing your code to use a guard clause..
public void AddAccount()
{
if (AccountService.AccountTypes.Count != 1)
{
// Do whatever you want here. throw a message?
return;
}
IAccount account = AccountService.Create();
_view.Accounts.Add(account);
}
The test setup is a lot simpler if you use an auto mocking container such as RhinoAutoMocker (part of StructureMap). You use the auto mocking container to create the class under test and ask it for the dependencies you need for the test(s). The container might need to inject 20 things in the constructor, but if you only need to test one you only have to ask for that one.
using StructureMap.AutoMocking;
namespace Foo.Business.UnitTests
{
public class MainPresenterTests
{
public class When_asked_to_add_an_account
{
private IAccountService _accountService;
private IAccount _account;
private MainPresenter _mainPresenter;
[SetUp]
public void BeforeEachTest()
{
var mocker = new RhinoAutoMocker<MainPresenter>();
_mainPresenter = mocker.ClassUnderTest;
_accountService = mocker.Get<IAccountService>();
_account = MockRepository.GenerateStub<IAccount>();
}
[TearDown]
public void AfterEachTest()
{
_accountService.VerifyAllExpectations();
}
[Test]
public void Should_use_the_AccountService_to_create_an_account()
{
_accountService.Expect(x => x.Create()).Return(_account);
_mainPresenter.AddAccount();
}
}
}
}
Structurally I prefer to use underscores between words instead of RunningThemAllTogether as I find it easier to scan. I also create an outer class named for the class under test and multiple inner classes named for the method under test. The test methods then allow you to specify the behaviors of the method under test. When run in NUnit this gives you a context like:
Foo.Business.UnitTests.MainPresenterTest
When_asked_to_add_an_account
Should_use_the_AccountService_to_create_an_account
Should_add_the_Account_to_the_View
That seems like the correct number of mocks for a presenter with a service which is supposed to hand back an account.
This seems more like an acceptance test rather than a unit test, though - perhaps if you reduced your assertion complexity you would find a smaller set of concerns being mocked.
Yes, your design is flawed. You are using mocks :)
More seriously, I agree with the previous poster who suggests your design should be layered, so that each layer can be tested separately. I think it is wrong in principle that testing code should alter the actual production code -- unless this can be done automatically and transparently the way code can be compiled for debug or release.
It's like the Heisenberg uncertainty principle - once you have the mocks in there, your code is so altered it becomes a maintenance headache and the mocks themselves have the potential to introduce or mask bugs.
If you have clean interfaces, I have no quarrel with implementing a simple interface that simulates (or mocks) an unimplemented interface to another module. This simulation could be used in the same way mocking is, for unit testing etc.
You might want to use MockContainers in order to get rid of all the mock management, while creating the presenter. It simplifies unit tests a lot.
This is okay, but I would expect an IoC automocking container in there somewhere. The code hints at the test writer manually (explicitly) switching between mocked and real objects in tests which should not be the case because if we are talking about a unit test (with unit being just one class), it's simpler to just auto-mock all other classes and use the mocks.
What I'm trying to say is that if you have a test class that uses both mainView and mockMainView, you don't have a unit test in the strict sense of the word -- more like an integration test.
It is my opinion that if you find yourself needing mocks, your design is incorrect.
Components should be layered. You build and test component A in isolation. Then you build and test B+A. Once happy, you build layer C and test C+B+A.
In your case you shouldn't need a "_mockAccountService". If your real AccountService has been tested, then just use it. That way you know any bugs are in MainPresenter and not in the mock itself.
If your real AccountService hasn't been tested, stop. Go back and do what you need to ensure it is working correctly. Get it to the point where you can really depend on it, then you won't need the mock.