How do I unit test code that creates a new Process? (C#)

How can I test the following method?
It is a method on a concrete class implementation of an interface.
I have wrapped the Process class with an interface that only exposes the methods and properties I need. The ProcessWrapper class is the concrete implementation of this interface.
public void Initiate(IEnumerable<Cow> cows)
{
    foreach (Cow c in cows)
    {
        c.Process = new ProcessWrapper(c);
        c.Process.Start();
        count++;
    }
}

There are two ways to get around this. The first is to use dependency injection. You could inject a factory and have Initiate call the create method to get the kind of ProcessWrapper you need for your test.
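A minimal sketch of the factory approach: IProcess stands in for the wrapper interface mentioned in the question, while IProcessFactory and Herd are names invented for this example.

using System.Collections.Generic;

// IProcess stands in for the question's wrapper interface around Process;
// IProcessFactory and Herd are illustrative names only.
public interface IProcess
{
    void Start();
}

public interface IProcessFactory
{
    IProcess Create(Cow cow);
}

public class Cow
{
    public IProcess Process { get; set; }
}

public class Herd
{
    private readonly IProcessFactory factory;
    private int count;

    public Herd(IProcessFactory factory)
    {
        this.factory = factory;
    }

    public void Initiate(IEnumerable<Cow> cows)
    {
        foreach (Cow c in cows)
        {
            // The factory decides whether this is a real ProcessWrapper or a test double.
            c.Process = factory.Create(c);
            c.Process.Start();
            count++;
        }
    }
}

In production you register a factory whose Create returns new ProcessWrapper(cow); in a test you pass a factory that hands back a mock IProcess and assert that Start was called once per cow.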
The other solution is to use a mocking framework such as TypeMock, that will let you work around this. TypeMock basically allows you to mock anything, so you could use it to provide a mock object instead of the actual ProcessWrapper instances.

I'm not familiar with C# (I prefer mine without the hash), but you need some sort of interface to the process (IPC or whatever is the most convenient method) so you can send it test requests and get results back. At the simplest level, you would just send a message to the process and receive the result. Or you could have more granularity and send more specific commands from your test harness. It depends on how you have set up your unit testing environment, more precisely how you send the test commands, how you receive them and how you report the results.
I would personally have a test object inside the process that simply receives, runs & reports the unit test results and have the test code inside that object.

What does your process do? Is there any way you could check that it is doing what it's supposed to do? For example, it might write to a file or a database table. Or it might expose an API (IPC, web-service, etc.) that you could try calling with test data.
From a TDD perspective, it might make sense to plug in a "mock/test process" that performs some action you can easily check. (This may require code changes to allow your test code to inject something.) This way, you're only testing your invocation code, and not necessarily an actual business process. You could then have separate unit tests for your business process.


How to execute XUnit tests via code

I have tests written in xUnit using the InlineData and MemberData attributes. I would like to run the tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes, I would just call the methods directly like any other normal method; the asserts are still checked and it works fine. But if I call a method directly that has the attributes, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test runner class in xUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces, for example). There's plenty there to test and I don't want to copy and paste the tests for each concrete implementer (there could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test, a subject to test on.
But this leaves a problem. XUnit sees the test and wants to run it, but it can't because there are no concrete implementations available at this layer, there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations, and then invoke the interface tests passing in the new subjects. I can easily do this for tests that only accept 1 argument, the subject, but for tests where I'm using InlineData or MemberData too I would like to reuse those test cases already provided and just add the subject as the first argument.
Available for reference is the GitHub issue How to programmatically run XUnit tests from the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
using (var runner = AssemblyRunner.WithAppDomain(testAssembly))
{
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;
    Console.WriteLine("Discovering...");
    runner.Start(typeName);
    finished.WaitOne(); // finished is a ManualResetEvent
    finished.Dispose();
    return result;
}
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Never mind, I figured it out. Taking a closer look at xUnit's attribute hierarchy, I found that the data attributes (InlineData, MemberData, etc.) have a GetData method you can call to retrieve the set of data they represent. With a little reflection I can find all the tests in my test class and call the test methods, invoking each data attribute's GetData method if any are present, and perform the tests via my own code that way. The GetData part would have been much harder if I had to roll my own version of it. Thank you, xUnit authors, for not forcing me to do that.
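For reference, roughly what that looks like, assuming xUnit 2.x where Xunit.Sdk.DataAttribute exposes GetData(MethodInfo); the InterfaceTestRunner name and the idea of prepending the subject as the first argument come from the question's scenario, not from xUnit itself.

using System.Linq;
using System.Reflection;
using Xunit.Sdk;

public static class InterfaceTestRunner
{
    // Runs every data-driven test method on the given test class instance,
    // prepending 'subject' (the concrete implementation under test) as the first argument.
    public static void Run(object testClassInstance, object subject)
    {
        foreach (MethodInfo method in testClassInstance.GetType().GetMethods())
        {
            var dataAttributes = method.GetCustomAttributes<DataAttribute>().ToList();
            if (dataAttributes.Count == 0)
                continue; // tests without data attributes can be called directly

            foreach (DataAttribute dataAttribute in dataAttributes)
            {
                // Each GetData row becomes one invocation, just like a normal test runner.
                foreach (object[] row in dataAttribute.GetData(method))
                {
                    object[] args = new object[] { subject }.Concat(row).ToArray();
                    method.Invoke(testClassInstance, args);
                }
            }
        }
    }
}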

Should I test if my controller's Action method are calling the repository methods and generating the right view?

I'm new to testing and I just started testing my MVC application.
Currently I'm testing whether my controller's action methods are calling the right repository methods, which in turn read or write data from the database.
I'm also testing whether the return type of the action method is a View, PartialView, RedirectToRoute, etc.
I've got some comments saying that testing whether a controller's action method calls the right function in the repository doesn't really make sense. Is this true?
What should I include in my unit tests for my MVC application, which also uses the Repository pattern?
It could make sense to check whether your action calls the correct method on your repository, but you'll need to mock the repository to avoid hitting the database. Unit tests should be isolated from external components.
Although it's not ideal, you could replace your "real" database with a lightweight in-memory SQLite database to avoid mocking your data access in your tests.
I personally use Moq as my mocking framework, but there are plenty of mature mocking frameworks for .NET.
Take into account that testing whether a method is called checks behavior rather than state. This makes the test more fragile, as it becomes dependent on the internal implementation, but depending on your scenario it can be perfectly valid.
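A minimal sketch of that style of test, using Moq with MSTest assertions; ProductController, IProductRepository and Product are hypothetical names invented for the example.

using System.Collections.Generic;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public class Product { }

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository repository;

    public ProductController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        return View(repository.GetAll());
    }
}

[TestClass]
public class ProductControllerTests
{
    [TestMethod]
    public void Index_UsesRepositoryAndReturnsView()
    {
        // Arrange: mock the repository so the test never touches a database
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetAll()).Returns(new List<Product> { new Product() });
        var controller = new ProductController(repository.Object);

        // Act
        var result = controller.Index();

        // Assert: both the behavior (the repository call) and the result type
        repository.Verify(r => r.GetAll(), Times.Once());
        Assert.IsInstanceOfType(result, typeof(ViewResult));
    }
}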
Unit testing is about testing a component's behavior in isolation, meaning that while a specific component is under test, it doesn't interact with any external component.
Usually, the way to do that is with mocks. All your dependencies must be mocked so you can control them.
Testing whether a method has been called is valid. If the logic is not in the component under test, then your job is done: your component calls one function in one case and functions x, y and z in another. If that behavior is correct, that's good enough.
If you have difficulty testing because you have a database dependency, that's usually a design problem. It's typically solved by putting a database abstraction in front of the database, whose only job is to make calls to the database and return its values. That abstraction can be mocked and injected into the class under test. That way, you can even return pre-configured values to the class under test and continue the process.
This depends on the scenario. For example, in a controller you have an action bool SaveEmployee() which internally calls a service and then the database layer to save. Testing whether the employee is actually saved in the database does not make sense here; that belongs in a separate unit test for the corresponding database-layer function. In the controller test you just need to verify the resulting status: success, failure, duplicate, or an exception being thrown. You can simply mock the function to return a bool or a string (like "Success", as appropriate) and verify the actual output against the expected output.

TDD: .NET following TDD principles, Mock / Not to Mock?

I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. Insert new user is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on the MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I'm a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this
Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether it's your class that fails or one of its dependencies.
I wrote my test initially and made it fail and now I realize I need a
repository and need to mock it out, so I am having to change my test
to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc is a refactoring. And tests are here to help you with refactoring, to remove fear of change. It's completely normal to change your tests if implementation changes. I believe you didn't think that you could create perfect code from your first try and never change it again?
If this is the way to go then when would I create the actual User
Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check if it correctly calls the underlying data access provider, which should be mocked). But such tests usually are very time-consuming and brittle. So, it's better to write some acceptance tests, which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally, as in create an interface IUserRepository, so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods (i.e. _userRepository.StoreUser(user)) you need on it.
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat instantiation of your objects and their lifecycle management as a separate concern.
tl;dr: create a mock!
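To make that concrete, here is a minimal sketch using Moq and MSTest, reusing the question's names (MyService, CreateUser) and the StoreUser call mentioned above; the exact signatures are assumptions.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public class User { }

public interface IUserRepository
{
    void StoreUser(User user);
}

public class MyService
{
    private readonly IUserRepository _userRepository;

    public MyService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void CreateUser(User user)
    {
        // validation / business rules would live here
        _userRepository.StoreUser(user);
    }
}

[TestClass]
public class MyServiceTests
{
    [TestMethod]
    public void CreateUser_StoresUserThroughRepository()
    {
        var repository = new Mock<IUserRepository>();
        var service = new MyService(repository.Object);
        var user = new User();

        service.CreateUser(user);

        // assert on the service calling the expected method on the mock
        repository.Verify(r => r.StoreUser(user), Times.Once());
    }
}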
I have two sets of test libraries. One is for unit tests, where I mock stuff; I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me the possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests, or functional tests, or whatever you want to call them. This one makes sure that a specific use case, e.g. creating a tag from the web page, does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test either has arguments or attributes that require mock objects to be created before testing can begin then—that is where the trouble begins, right?
What sources do we have for constructing mocks?
Simple creation with:
  make or default create
  make with hand-coded base-type arguments
Complex creation with:
  make with database-supplied arguments
  make with other mock objects (start this process again)
Object factories:
  Production code based factories
  Test code based factories
Data-repo based data (vs hand-coded)
Gleaned:
  Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goals, roughly in order of preference, are:
1. No mocks needed. Strive for this above all.
2. Simple mock creation with no arguments.
3. Simple mock creation with base-type arguments.
4. Simple mock creation with DB-repo sourced base-type arguments.
5. Complex mock creation using production code object factories.
6. Complex mock creation using test-code object factories.
7. Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant: expressing it is either impossible or nearly impossible with common TDD facilities. The same goes for the "implies" Boolean keyword.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and post-conditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!

Using TestContext to share information between unit tests

I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the most important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests, for instance a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep Test1 as it is. It focuses only on the functionality of the registration process. You don't have to save that ServiceKey anywhere; just assert inside the test method.
For the second test you have to set up (fake) everything it needs to run successfully. It is generally a good idea to follow the "Arrange, Act, Assert" principle, where you set up the data to test, act upon it and then check whether everything worked as intended (it also adds more clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get from the first test. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
Hope this helps.
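For illustration, a minimal Arrange-Act-Assert sketch in MSTest; DataService and AddData are hypothetical stand-ins for the system under test, and the ServiceKey is simply fabricated in the Arrange step instead of being carried over from a registration test.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical system under test: adds data against a previously issued ServiceKey.
public class DataService
{
    public bool AddData(Guid serviceKey, string payload)
    {
        // a real implementation would persist the payload under the key
        return serviceKey != Guid.Empty && !string.IsNullOrEmpty(payload);
    }
}

[TestClass]
public class DataServiceTests
{
    [TestMethod]
    public void AddData_AcceptsPayload_ForAValidServiceKey()
    {
        // Arrange: fake the ServiceKey instead of reusing one produced by another test
        var serviceKey = Guid.NewGuid();
        var sut = new DataService();

        // Act
        bool added = sut.AddData(serviceKey, "some payload");

        // Assert
        Assert.IsTrue(added);
    }
}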

Is it possible to change the way unit tests are invoked?

My guess is that the current semantics of unit testing involve actually calling the method, i.e., if I have a method MyTest() then that's what gets called. My question is this: is it possible to somehow change the pipeline of the way tests are executed (preferably without recompiling the test runner) so that, say, instead of calling the method directly it's called via a wrapper I provide (i.e., MyWrapper(MyTest))?
Thanks.
If you use MbUnit then there's a lot of stuff you can customize by defining custom attributes.
The easiest way to do this is to create a subclass of TestDecoratorAttribute and override the SetUp, TearDown or Execute methods to wrap them with additional logic of your choice.
However if you need finer control, you can instead create a subclass of TestDecoratorPatternAttribute and override the DecorateTest method with logic to add additional test actions or test instance actions.
For example, the MbUnit [Repeat] attribute works by wrapping the test's body run action (which runs all phases of the test) with a loop and some additional bookkeeping to run the test repeatedly.
Here's the code for RepeatAttribute: http://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/RepeatAttribute.cs
It depends on how the unit testing framework provides interception and extensibility capabilities.
Most frameworks (MSTest, NUnit etc.) allow you to define Setup and Teardown methods that are guaranteed to run before and after the test.
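For example, a minimal NUnit sketch of those hooks; the bodies are placeholders for whatever wrapping logic you need.

using NUnit.Framework;

[TestFixture]
public class WrappedTests
{
    [SetUp]
    public void BeforeEachTest()
    {
        // runs before every [Test] in this fixture, e.g. start logging or reset shared state
    }

    [TearDown]
    public void AfterEachTest()
    {
        // runs after every [Test], even when the test fails
    }

    [Test]
    public void MyTest()
    {
        Assert.That(1 + 1, Is.EqualTo(2));
    }
}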
xUnit.net has more advanced extensibility mechanisms where you can define custom attributes to decorate your test methods and change the way they are invoked. As an example, there's a TheoryAttribute that allows you to define parameterized tests.
I don't know MbUnit, so I can't say whether it supports these scenarios or not.
