C# Unit Test - To mock, stub or use explicit implementation

This has been discussed a number of times before, but the merits in the below examples aren't obvious, so please bear with me.
I'm trying to decide whether to use mock implementations in my unit tests, and am undecided given the following two examples: the first uses NSubstitute for mocking, and the second uses an implementation resolved from a Simple Injector container (via a Bootstrapper object).
Essentially both are testing for the same thing, that the Disposed member is set to true when the .Dispose() method is called (see implementation of method at the bottom of this post).
To my eye, the second method makes more sense for regression testing as the mock proxy explicitly sets the Disposed member to be true in the first example, whereas it is set by the actual .Dispose() method in the injected implementation.
Why would you suggest I choose one over the other for verifying that the method behaves as expected? I.e. that the .Dispose() method is called, and that the Disposed member is set correctly by this method.
[Test]
public void Mock_socket_base_dispose_call_is_received()
{
    var socketBase = Substitute.For<ISocketBase>();
    socketBase.Disposed.Should().BeFalse("this is the default disposed state.");
    socketBase.Dispose();
    socketBase.Received(1).Dispose();
    socketBase.Disposed.Returns(true);
    socketBase.Disposed.Should().BeTrue("the ISafeDisposable interface requires this.");
}

[Test]
public void Socket_base_is_marked_as_disposed()
{
    var socketBase = Bootstrapper.GetInstance<ISocketBase>();
    socketBase.Disposed.Should().BeFalse("this is the default disposed state.");
    socketBase.Dispose();
    socketBase.Disposed.Should().BeTrue("the ISafeDisposable interface requires this.");
}
For reference the .Dispose() method is simply this:
/// <summary>
/// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
/// </summary>
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

/// <summary>
/// Releases unmanaged and - optionally - managed resources.
/// </summary>
/// <param name="disposeAndFinalize"><c>true</c> to release both managed and unmanaged resources; <c>false</c> to release only unmanaged resources.</param>
protected void Dispose(bool disposeAndFinalize)
{
    if (Disposed)
    {
        return;
    }

    if (disposeAndFinalize)
    {
        DisposeManagedResources();
    }

    DisposeUnmanagedResources();
    Disposed = true;
}
Cheers

Both test methods seem quite bizarre to me. With the first method, you don't seem to test anything (or I might be misunderstanding what NSubstitute does), because you just mock the ISocketBase interface (that has no behavior to test) and start testing that mock object instead of the real implementation.
The second method is bad as well, since you should NOT use any DI container inside your unit tests. This only makes things more complicated because:
You now use shared state that all tests use, which makes all tests depend on each other (tests should run in isolation).
The container bootstrap logic will get very complex, because you will want to insert different mocks for different tests and, again, avoid objects shared between tests.
Your tests gain an extra dependency on a framework or facade that they simply don't need. In this sense you're making your tests more complicated. It might be only a little bit more complicated, but it's an extra complication nonetheless.
Instead, what you should do is always create the class under test (SUT) inside the unit test (or a test factory method) itself. You might still want to create the SUT's dependencies using a mocking framework, but this is optional. So, IMO the tests should look something like this:
[Test]
public void A_nondisposed_socket_base_should_not_be_marked_disposed()
{
    // Arrange
    Socket socket = CreateValidSocket();

    // Assert
    socket.Disposed.Should().BeFalse(
        "A non-disposed socket should not be flagged.");
}

[Test]
public void Socket_base_is_marked_as_disposed_after_calling_dispose()
{
    // Arrange
    Socket socket = CreateValidSocket();

    // Act
    socket.Dispose();

    // Assert
    socket.Disposed.Should().BeTrue(
        "Should be flagged as Disposed.");
}

private static Socket CreateValidSocket()
{
    return new Socket(
        new FakeDependency1(), new FakeDependency2());
}
Note that I split your single test into two tests. That Disposed should be false before Dispose is called is not a precondition for the other test to run; it's a requirement for the system to work. In other words, you need to be explicit about this, and that's why you need the separate test.
Also note the use of the CreateValidSocket factory method, which can be reused across multiple tests. You might have multiple overloads (or optional parameters) for this method when other tests check other parts of the class that require more specific fake or mock objects.
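To illustrate (the dependency interfaces and defaults here are hypothetical), such a factory overload with optional parameters might look like:

```csharp
// Hypothetical overload: each test can substitute one specific fake
// while the remaining dependencies fall back to valid defaults.
private static Socket CreateValidSocket(
    IDependency1 dependency1 = null,
    IDependency2 dependency2 = null)
{
    return new Socket(
        dependency1 ?? new FakeDependency1(),
        dependency2 ?? new FakeDependency2());
}
```

A test that only cares about, say, the first dependency can then call CreateValidSocket(dependency1: mockDependency1) and ignore the rest.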

You are concerned with too much. This test is testing whether or not a given implementation disposes correctly, and your test should reflect that. See the pseudocode below. The trick to non-brittle tests is to test only the absolute minimum required to satisfy the test.
public class When_disposed_is_called
{
    public void The_object_should_be_disposed()
    {
        var disposableObjects = someContainer.GetAll<IDisposable>();
        disposableObjects.ForEach(obj => obj.Dispose());
        Assert.False(disposableObjects.Any(obj => obj.IsDisposed == false));
    }
}
As you can see, I fill a dependency container with all the objects of concern that implement IDisposable. I might have to mock them or do other things, but that is not the concern of this test. Ultimately it is only concerned with validating that when something is disposed, it is in fact disposed.

Related

Understanding Dispose method for in-memory database

I have this base class for all my unit tests that spins up an in-memory database
public abstract class TestWithSqlite : IDisposable
{
    private const string InMemoryConnectionString = "DataSource=:memory:";
    private readonly SqliteConnection _connection;
    protected readonly ToDoDbContext DbContext;

    protected TestWithSqlite()
    {
        _connection = new SqliteConnection(InMemoryConnectionString);
        _connection.Open();
        var options = new DbContextOptionsBuilder<ToDoDbContext>()
            .UseSqlite(_connection)
            .Options;
        DbContext = new ToDoDbContext(options);
        DbContext.Database.EnsureCreated();
    }

    public void Dispose()
    {
        _connection.Close();
    }
}
My question is: if I call DbContext.something in one of my tests, is it the Dispose method that ensures that this instance of the database is closed when the test ends, so that for the next test, when I call DbContext again, it's a new instance?
Every unit test should have a new DbContext. You don't want any dependencies between tests. Therefore, calling dispose at the end of a test is correct.
The xUnit documentation describes this. So the class containing your tests could implement IDisposable. By default, xUnit will run every method in your test class in isolation, and will call Dispose, so any object instances are unique per test.
If you want to share object instances, you can use fixtures, but it sounds like you want isolation between your tests, which is there by default.
So if you now directly add test methods to the class in your question, the context should be unique for each test. You should be able to test that by putting breakpoints in your test methods (or Dispose) and then debug the tests, to see what happens.
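For contrast, if you ever do want shared state across the tests in a class, xUnit makes that explicit through class fixtures rather than the default per-test instantiation. A rough sketch (the fixture name is made up):

```csharp
// xUnit creates one DatabaseFixture for the whole test class and hands
// it to every test; without IClassFixture<>, xUnit instead creates a
// brand-new instance of the test class (and thus a new context) per test.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture() { /* open connection, create context */ }
    public void Dispose() { /* close connection */ }
}

public class MyTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _fixture;

    public MyTests(DatabaseFixture fixture)
    {
        _fixture = fixture;
    }
}
```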
When I take a look into the "SQLite.Net Help" document, it states:
Dispose - Disposes and finalizes the connection, if applicable.
In other words, all SQLiteConnection members become invalid and all resources are released.
Regards
Martin
Since your test class is used for multiple tests, having a base class for things that should be unique to one test is a bad idea.
You want to control exactly when and how a database is opened and when and how it's removed from memory.
That means your class should not be inherited, but rather used by each single test.
Make your class a normal class that can be instantiated and that provides a DbContext through a method or property. Open and create the database in the constructor; close and remove the database in the Dispose method, as you already do.
Your test should start with a using block that instantiates this class.
Inside the using block your test can use the class and the DbContext it provides. The using block will take care of the dispose, no matter how you leave your test (the code might throw an exception, after all).
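A minimal sketch of that shape (the helper class name is made up; ToDoDbContext is the context from the question):

```csharp
// Hypothetical helper: owns exactly one in-memory database per instance.
public sealed class SqliteTestDatabase : IDisposable
{
    private readonly SqliteConnection _connection;

    public ToDoDbContext DbContext { get; }

    public SqliteTestDatabase()
    {
        _connection = new SqliteConnection("DataSource=:memory:");
        _connection.Open();

        var options = new DbContextOptionsBuilder<ToDoDbContext>()
            .UseSqlite(_connection)
            .Options;

        DbContext = new ToDoDbContext(options);
        DbContext.Database.EnsureCreated();
    }

    public void Dispose()
    {
        DbContext.Dispose();
        _connection.Close();
    }
}

[Fact]
public void Some_test()
{
    using (var db = new SqliteTestDatabase())
    {
        // use db.DbContext; disposal is guaranteed even if the test throws
    }
}
```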
I don't remember how it worked, but I think you can give names to the in-memory databases. It might be a good idea to do so, otherwise you will have to make sure your tests never run in parallel.
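With Microsoft.Data.Sqlite you can indeed name an in-memory database through the connection string; each distinct name is an independent database, shareable between connections, which helps when tests might run in parallel:

```csharp
// A named, shareable in-memory database: it lives as long as at least
// one connection with this name is open, and is isolated from in-memory
// databases using other names.
var connection = new SqliteConnection(
    "Data Source=TestDb1;Mode=Memory;Cache=Shared");
connection.Open();
```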

How to refactor to avoid using a Shim?

I'm pretty new to Unit Testing and am exploring the Microsoft Fakes framework - primarily because it's free and it allows me to mock SharePoint objects easily with the Emulators package. I've seen various mentions on SO and elsewhere that Shims are evil and I more or less understand why. What I don't get is how to avoid them in one specific case - in other words, "how should I refactor my code to avoid having to use shims?"
For the code in question, I have a JobProcessor object that has properties and methods, some of which are private, as they should only be called from the public Execute method. I want to test that when Execute is called and there is a Job available, its ProcessJob method is called, as I need to do some extra logging.
Here's the relevant code:
//in system under test - JobProcessor.cs
private IJob CurrentJob { get; set; }

public void Execute()
{
    GetJobToProcess(); //stores Job in CurrentJob property if found
    if (ShouldProcessJob)
    {
        CurrentJob.ProcessJob();
    }
}
I need to do some extra things if ProcessJob is called from a test, so I set up a Stub in my Test Method to do those extra things:
StubIJob fakeJob = new StubIJob()
{
    ProcessJob = () =>
    {
        //do my extra things here
    }
};
I'm testing the ProcessJob method itself elsewhere, so I don't care that it doesn't do anything but my extra stuff here. As I understand things, I now need to set up a Shim to have the private method GetJobToProcess from JobProcessor (my system under test) return my fake job, so that my stubbed method is called:
processor = new JobProcessor();
ShimJobProcessor.AllInstances.GetJobToProcess = (instance) =>
{
    var privateProcessor = new PrivateObject(processor);
    privateProcessor.SetProperty("CurrentJob", fakeJob); //force my test Job to be processed so the Stub is used
};
In this case, how should I avoid using the Shim? Does it matter?
Thanks.
This is a case where rather than using a shim or stub, I'd just make the method return a boolean to notify whether or not the inner call has happened.
The problem with using fakes there is that you're asserting that some method of some object is called, which is something the test should not know about. Tests should be dumb, and only see the outside of the code. Tests, like any other code, should not care how a value was reached, just that it is correct.
However, your code has another issue as well. You're getting some unknown object and using it within the same scope. You should remove the call to GetJobToProcess from Execute.
It's the principle of Dependency Injection: a method should not spin up and hide its dependencies; if it depends on an object, that object should be freely replaceable or passed in. The exact implementation of the job should not matter to the Execute method, and that, along with the naming, implies that you should not be getting that object and executing it in the same call.
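One possible shape of that refactoring (the provider interface is invented for illustration, and the null check stands in for ShouldProcessJob): inject the source of jobs, so a test can hand Execute a stub job without any shims:

```csharp
// Hypothetical abstraction over wherever jobs come from.
public interface IJobProvider
{
    IJob GetJobToProcess(); // returns null when no job is available
}

public class JobProcessor
{
    private readonly IJobProvider _jobProvider;

    public JobProcessor(IJobProvider jobProvider)
    {
        _jobProvider = jobProvider;
    }

    public void Execute()
    {
        IJob currentJob = _jobProvider.GetJobToProcess();
        if (currentJob != null)
        {
            currentJob.ProcessJob();
        }
    }
}
```

In the test, a stubbed IJobProvider simply returns the fake job, and the shim (and the PrivateObject trick) disappears.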

Unit test class with one public and multiple private methods

I'm having a little trouble understanding how to approach the following in order to unit test the class.
The object under test consists of one public method, which accepts a list of objects of type A and returns an object B (a binary stream).
Due to the nature of the resulting binary stream, which gets large, it does not lend itself to comparison in a test.
The stream is built using several private instance helper methods.
class Foo
{
    private BinaryStream mBinaryStream;

    public Foo() {}

    public BinaryStream Bar(List<Object> objects)
    {
        // perform magic to build and return the binary stream,
        // using several private instance helper methods.
        Magic(objects);
        MoreMagic(objects);
        return mBinaryStream;
    }

    private void Magic(List<Object> objects) { /* work on mBinaryStream */ }
    private void MoreMagic(List<Object> objects) { /* work on mBinaryStream */ }
}
Now I know that I need to test the behaviour of the class, thus the Bar method.
However, it's not feasible (both space- and time-wise) to compare the output of the method with a predefined result.
The number of variations is just too large (and they are corner cases).
One option is to refactor these private helper methods into (a) separate class(es) that can be unit tested. The binary stream can then be chopped into smaller, better testable chunks, but even then a lot of cases need to be handled, and comparing the binary result will defy the quick runtime of a unit test. It's an option I'd rather not go for.
Another option is to create an interface that defines all these private methods, in order to verify (using mocking) whether these methods were called or not. This means, however, that these methods must have public visibility, which is also not nice. And verifying method invocations might be just enough to test for.
Yet another option is to inherit from the class (making the privates protected) and try to test this way.
I have read most of the topics around this issue, but they seem to deal with nicely testable results, which is different from this challenge.
How would you unit test such class?
Your first option (extract out the functionality into separate classes) is really the "correct" choice from a SOLID perspective. One of the main points of unit testing things (and TDD by extension) is to promote the creation of small, single responsibility classes. So, that is my primary recommendation.
That said, since you're rather against that solution, if what you're wanting to do is verify that certain things are called, and that they are called in a certain order, then you can leverage Moq's functionality.
First, have BinaryStream be an injected item that can be mocked. Then set up the various calls that will be made against that mock, and then do a mockStream.VerifyAll() call on it - this verifies that everything you set up for that mock was called.
Additionally, you can set up a mock to perform a callback. With this, you can create an empty string collection in your test and then, in the callback of each mock setup, add a string identifying the function called to the collection. After the test has completed, compare that list to a pre-populated list containing the calls you expect to have been made, in the correct order, and do an EqualTo assert. Something like this:
public void MyTest()
{
    var expectedList = new List<string> { "SomeFunction", "AnotherFunction", ... };
    var actualList = new List<string>();

    mockStream.Setup(x => x.SomeFunction())
        .Callback(() => actualList.Add("SomeFunction"));
    ...

    systemUnderTest.Bar(...);

    Assert.That(actualList, Is.EqualTo(expectedList));
    mockStream.VerifyAll();
}
Well, you are on top of how to deal with the private methods. As for testing the stream for correct output: personally, I'd use a very limited set of input data and simply exercise the code in the unit test.
All the potential scenarios I'd treat as integration tests.
So have a file (say XML) with inputs and expected outputs. Run through it, call the method with each input, compare the actual output with the expected output, and report differences. You could do this as part of check-in, or before deploying to UAT or some such.
Don't try to test private methods - they don't exist from the consumer's point of view. Consider them named code regions that are there just to make your Bar method more readable. You can always refactor the Bar method - extract other private methods, rename them, or even move code back into Bar. Those are implementation details which do not affect class behavior. And class behavior is exactly what you should test.
So, what is the behavior of your class? What are the consumer's expectations of your class? That is what you should define and write down in your tests (ideally just before you make them pass). Start from trivial situations. What if the list of objects is empty? Define the behavior, write a test. What if the list contains a single object? If the behavior of your class is very complex, then probably your class is doing too many things. Try to simplify it and move some 'magic' into dependencies.
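As a hedged sketch of that starting point (assuming Foo from the question, and assuming an empty input is defined to produce an empty stream with a Length property):

```csharp
[Test]
public void Bar_with_empty_list_returns_empty_stream()
{
    // Pin down the trivial case first; harder cases follow the same pattern.
    var foo = new Foo();

    BinaryStream result = foo.Bar(new List<Object>());

    Assert.That(result.Length, Is.EqualTo(0));
}
```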

Multiple cleanup actions on VS2008 Unit Testing

The question is pretty straightforward: I need to know how to create multiple cleanup actions.
I have some tests, and each test creates a different file. I would like to bind a cleanup action to each, so I can delete the specified files for each test.
eg:
[TestMethod]
public void TestMethodA()
{
    // do stuff
}

[TestMethod]
public void TestMethodB()
{
    // do stuff
}

[TestCleanup]
public void CleanUpA()
{
    // clean A
}

[TestCleanup]
public void CleanUpB()
{
    // clean B
}
Any ideas?
There are, potentially, a couple of options that I can see. A simple solution that might work for you is to have a class-level variable in your unit test class that stores the path to the file used by the currently executing test. Have each test assign the current file path to that variable. Then you can have a single cleanup method that uses that variable in order to clean up the file.
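A sketch of that first approach (the field and file names are illustrative):

```csharp
using System.IO;

[TestClass]
public class FileTests
{
    // Each test records the file it created; one cleanup method removes it.
    private string _currentTestFile;

    [TestMethod]
    public void TestMethodA()
    {
        _currentTestFile = "fileA.txt";
        // do stuff with fileA.txt
    }

    [TestMethod]
    public void TestMethodB()
    {
        _currentTestFile = "fileB.txt";
        // do stuff with fileB.txt
    }

    [TestCleanup]
    public void CleanUp()
    {
        if (_currentTestFile != null && File.Exists(_currentTestFile))
        {
            File.Delete(_currentTestFile);
        }
    }
}
```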
Another idea, but one that might require significant refactoring, is to use a dependency-injection approach to abstract your code from the file system. Perhaps instead of your code creating/opening the file itself, you could have an IO abstraction component handle creating the file and just return a Stream object to your main code. When running your unit tests you could provide your main code with a unit-testing version of the IO abstraction component that returns a MemoryStream instead of a FileStream, which would avoid the need to perform cleanup. I could update with a rough example if my explanation isn't clear.
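Roughly, such an abstraction could look like this (the interface name and members are made up):

```csharp
using System.IO;

// Hypothetical seam: production code asks this for a stream instead of
// opening files directly.
public interface IFileStore
{
    Stream Create(string name);
}

// The production implementation returns a real FileStream...
public class FileSystemStore : IFileStore
{
    public Stream Create(string name) =>
        new FileStream(name, FileMode.Create);
}

// ...while the test double returns a MemoryStream, so nothing touches
// disk and no cleanup is required.
public class InMemoryStore : IFileStore
{
    public Stream Create(string name) => new MemoryStream();
}
```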

Unit testing interfaces and method calls

There are two testing scenarios I am unclear on. Both appear at first sight to create very brittle tests.
First, when should one unit test the interface (i.e. verify that an interface has a set signature)?
Second, when should one "test the sequence diagram" (a term I just made up) - meaning verifying calls are being made to the appropriate objects?
Testing the interface means that you should only test the members which are available on the public interface. In other words, don't test private stuff. Treat the unit under test as a black box. This makes the tests more maintainable, because you can change the implementation details without breaking the tests. The tests will also express what your unit under test is good for, not how it is implemented.
Testing calls made on other objects is called "interaction testing" (not to be confused with integration tests, where you don't mock the other objects). Interaction tests are needed when your unit under test calls a method on another object without depending on the result. Let me explain with an example.
The following method needs to be tested:
public decimal CalculateTax(Order order);
Let's assume that this method needs to call
TaxRules TaxRuleProvider.GetRules(Country country)
which returns some local rules. If it doesn't call it, it will not be able to return the correct result; it will be missing vital information. You don't need to test whether it has been called - just test the result.
Another method:
public void StoreThingy(Thingy toBeStored);
It will call
public void NotificationBroker.NotifyChanges(SomeChanges x);
StoreThingy doesn't depend on the notification. You can't tell from its interface whether it sent notifications or not. You need to test this with an interaction test.
Typically, the methods for interaction tests return void. In this category are all kinds of events and notifications, and methods like Commit().
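A hedged sketch of such an interaction test using Moq (the class under test and its constructor are assumed; the NotificationBroker above is modeled here as an injected interface):

```csharp
[Test]
public void StoreThingy_notifies_about_changes()
{
    // Assumption: the broker is injected, so the test can observe the call.
    var broker = new Mock<INotificationBroker>();
    var store = new ThingyStore(broker.Object);

    store.StoreThingy(new Thingy());

    // StoreThingy's result reveals nothing about notification, so the
    // interaction itself is what gets verified.
    broker.Verify(
        b => b.NotifyChanges(It.IsAny<SomeChanges>()),
        Times.Once());
}
```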
