Is it possible to change the way unit tests are invoked? - c#

My guess is that the current semantics of unit testing involve actually calling the method, i.e., if I have a method MyTest() then that's what gets called. My question is this: is it possible to somehow change the pipeline of the way tests are executed (preferably without recompiling the test runner) so that, say, instead of calling the method directly it's called via a wrapper I provide (i.e., MyWrapper(MyTest))?
Thanks.

If you use MbUnit, there's a lot you can customize by defining custom attributes.
The easiest way to do this is to create a subclass of TestDecoratorAttribute and override the SetUp, TearDown or Execute methods to wrap them with additional logic of your choice.
However if you need finer control, you can instead create a subclass of TestDecoratorPatternAttribute and override the DecorateTest method with logic to add additional test actions or test instance actions.
For example, the MbUnit [Repeat] attribute works by wrapping the test's body run action (which runs all phases of the test) with a loop and some additional bookkeeping to run the test repeatedly.
Here's the code for RepeatAttribute: http://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/RepeatAttribute.cs
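For illustration, here's a rough sketch of such a decorator. The exact MbUnit v3/Gallio signatures (notably PatternTestInstanceState and the Execute override) are approximated from the description above, so check the linked RepeatAttribute source for the real API:

using System;
using Gallio.Framework.Pattern; // assumed home of PatternTestInstanceState
using MbUnit.Framework;

public class MyWrapperAttribute : TestDecoratorAttribute
{
    // Execute wraps the test body; calling base.Execute runs the actual test.
    protected override void Execute(PatternTestInstanceState state)
    {
        Console.WriteLine("Before the test body");
        base.Execute(state); // the wrapped MyTest() runs here
        Console.WriteLine("After the test body");
    }
}

Applied as [Test, MyWrapper] on a test method, this gives you the MyWrapper(MyTest) behavior the question asks for.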

It depends on how the unit testing framework provides interception and extensibility capabilities.
Most frameworks (MSTest, NUnit, etc.) allow you to define Setup and Teardown methods that are guaranteed to run before and after the test.
xUnit.net has more advanced extensibility mechanisms: you can define custom attributes to decorate your test methods and change the way they are invoked. As an example, there's a TheoryAttribute that lets you define parameterized tests.
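For example (the names here are illustrative):

using Xunit;

public class CalculatorTests
{
    // Each InlineData row becomes a separate invocation of the test method.
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}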
I don't know MbUnit, so I can't say whether it supports these scenarios or not.

Related

How to unit test Prism INavigationAware.OnNavigatedTo with async calls

A lot of our ViewModel-Unit-Tests create a ViewModel in the Arrange-Phase, call OnNavigatedTo() in the Act-Phase and Assert some stuff that should have happened after OnNavigatedTo finished (e.g. some Properties contain certain values).
When the implementation of OnNavigatedTo() contains asynchronous method calls (like loading data from the backend), we have a problem, because OnNavigatedTo returns void and we cannot await its completion.
In our unit tests, we mock the calls to the backend, so they return immediately and the problem almost never shows up. However, we had a case where exactly this scenario caused an issue on the build server (running on Linux), and I suspect this always creates a kind of race condition which just works by accident most of the time.
One workaround we came up with was to provide a public method returning a Task which contains the implementation of OnNavigatedTo and is used by the unit tests, but it's generally not a good idea to extend the public API surface of the system under test just for the sake of testing.
Moving all the code to IInitializeAsync.InitializeAsync is, in my opinion, not an option either, as those two lifecycle hooks aren't equivalent and InitializeAsync might not get called on every page navigation.
So my question is: How can we reliably unit test code in INavigationAware.OnNavigatedTo which makes async calls?
One workaround we came up with was to provide a public method returning a Task which contains the implementation of OnNavigatedTo and is used by the unit tests
I'd do exactly this, except that I'd use internal (with InternalsVisibleTo if the fixture must be in another assembly) or even private (if you can make the fixture a nested class).
Alternatively, define your own INavigatedToAsyncForTest (with methods that return Task) and implement it explicitly to limit the discoverability of these methods.
but it's generally not a good idea to extend the public API surface of the system under test just for the sake of testing.
True, but making the method internal screams "this is not public API", especially if the type itself is internal (which view models should be, unless you have to put your views in a different assembly), so you can somewhat mitigate this.
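A minimal sketch of the internal variant (hypothetical names, assuming Prism.Forms' INavigationAware from Prism.Navigation):

using System.Threading.Tasks;
using Prism.Navigation;

// [assembly: InternalsVisibleTo("MyApp.Tests")] in the view model assembly
// lets the test project await the internal method.
public class CustomerViewModel : INavigationAware
{
    public void OnNavigatedTo(INavigationParameters parameters)
    {
        // Prism calls this void member; production code fires and forgets.
        _ = OnNavigatedToAsync(parameters);
    }

    // Unit tests call and await this method instead.
    internal Task OnNavigatedToAsync(INavigationParameters parameters)
    {
        return LoadDataFromBackendAsync();
    }

    public void OnNavigatedFrom(INavigationParameters parameters) { }

    private Task LoadDataFromBackendAsync()
    {
        return Task.CompletedTask; // stub for the real backend call
    }
}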

How to run method after every NUnit test (even failure within teardown)

I am working on a test library using NUnit, and am generating a custom report while the tests run.
In the TearDown of my tests I call a method that reports the result of the test. It works just fine if the test passes, but is never reached if the test fails, is ignored, or is inconclusive.
To make things more difficult, the "//do stuff" in the TearDown can also cause the test to fail, in which case it would still need to be logged. But Assert throws an exception, which means execution leaves the block and never reaches the reporting code.
[SetUp]
public void ExampleSetup()
{
    //whatever
}

[Test]
public void ExampleTest()
{
    //whatever
}

[TearDown]
public void ExampleTearDown()
{
    //do stuff
    someObject.ReportTestResult();
}
More Info
- Using NUnit 3.2.0
Looking at the Framework Extensibility chapter of the NUnit documentation, we can see that you need to use an Action Attribute.
The Action Attribute extension point is:
designed to better enable composability of test logic by creating
attributes that encapsulate specific actions to be taken before or
after a test is run.
Summary of a basic implementation of this extension point
You need to implement the ITestAction interface in your own attribute class, say FooBarActionAttribute.
There you implement the BeforeTest() and AfterTest() methods and the Targets property.
For a basic scenario, you perform your custom operations in those two methods.
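A minimal sketch of such an attribute (NUnit 3; the reporting line stands in for the question's someObject.ReportTestResult()):

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class FooBarActionAttribute : Attribute, ITestAction
{
    // Wrap individual tests (as opposed to whole fixtures/suites).
    public ActionTargets Targets
    {
        get { return ActionTargets.Test; }
    }

    public void BeforeTest(ITest test)
    {
        // runs before the test
    }

    public void AfterTest(ITest test)
    {
        // Runs after the test (and its TearDown), even when the test
        // failed, so the result can be reported here.
        Console.WriteLine("{0}: {1}",
            test.FullName, TestContext.CurrentContext.Result.Outcome);
    }
}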
The final thing you need to do is to use this attribute, for example as:
[Test][FooBarActionAttribute()]
public void Should_Baz_a_FooBar()
{
    ...
}
The attribute's logic will then execute right before and after the test method runs.
For more advanced techniques, please consult the linked documentation; it's pretty straightforward.
Frankly, this is just a bad idea. TearDown is part of your test. That's why it can fail. (But note that NUnit treats an assert failure in teardown as an error rather than a failure)
Since a test can do anything - good or bad, right or wrong - putting your logging into the test code is unreliable. You should be logging in the code that runs tests, not in the tests themselves. As stated above, the TearDown is a part of your test. So is any ActionAttribute you may define, as suggested in another answer. Don't do logging there.
Unfortunately, there is a history of doing logging within the tests, because NUnit didn't supply an alternative - at least not one that was easy to use. For NUnit V2, you could create a test event listener addin for exactly this purpose. You still can, if that's the version of NUnit you are using.
For NUnit 3.0 and higher, you should create a listener extension that works with the TestEngine. The TestEngine and its extensions are completely separate from your tests. Once you have a well-tested extension, you will be able to use it with all your tests and the tests won't be cluttered with logging code.
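For a flavor of what such an extension looks like, here is a minimal sketch assuming the NUnit 3 engine API (ITestEventListener); it lives in its own assembly that the runner discovers (for example via a .addins file), entirely outside the test code:

using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Writes each test result to a custom report")]
public class ReportingListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // 'report' is an XML fragment; <test-case> elements carry results.
        if (report.StartsWith("<test-case"))
        {
            System.IO.File.AppendAllText("report.log", report + "\n");
        }
    }
}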

How to execute XUnit tests via code

I have tests written in XUnit using InlineData and MemberData attributes. I would like to run tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes I would just call the methods directly like any other normal method. The asserts are still checked and it functions fine. But if I call a method directly that has the attributes, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test runner class in XUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces, for example). There's plenty there to test, and I don't want to copy-paste the tests for each concrete implementer (there could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test, as a subject to test on.
But this leaves a problem. XUnit sees the test and wants to run it, but it can't, because there are no concrete implementations available at this layer; there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations and then invoke the interface tests, passing in the new subjects. I can easily do this for tests that only accept one argument, the subject, but for tests where I'm also using InlineData or MemberData, I would like to reuse those test cases already provided and just add the subject as the first argument.
Available for reference is the GitHub issue How to programmatically run XUnit tests from the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
using (var runner = AssemblyRunner.WithAppDomain(testAssembly))
{
    // The handlers below set 'result' and signal 'finished' (a
    // ManualResetEvent declared elsewhere in the sample).
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;

    Console.WriteLine("Discovering...");
    runner.Start(typeName);

    finished.WaitOne();  // a ManualResetEvent signaled in OnExecutionComplete
    finished.Dispose();

    return result;
}
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Never mind, I figured it out. Taking a closer look at xUnit's attribute hierarchy, I found that the data attributes (InlineData, MemberData, etc.) have a GetData method you can call to retrieve the set of data they represent. With a little reflection, I can easily find all the tests in my test class and call the test methods, invoking each data attribute's GetData method if any are present, and perform the tests via my own code that way. The GetData part would have been much harder if I had to roll my own version of it. Thank you, xUnit authors, for not forcing me to do that.
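For reference, a condensed sketch of that approach (assuming xUnit v2, where Xunit.Sdk.DataAttribute exposes GetData(MethodInfo); the runner class and its name are mine):

using System.Linq;
using System.Reflection;
using Xunit.Sdk;

public static class InterfaceTestRunner
{
    // Invokes every data-driven test method on the given test-class
    // instance, feeding it the rows its data attributes produce.
    public static void RunDataDrivenTests(object testInstance)
    {
        foreach (MethodInfo method in testInstance.GetType().GetMethods())
        {
            var dataAttributes = method
                .GetCustomAttributes(typeof(DataAttribute), false)
                .Cast<DataAttribute>();

            foreach (DataAttribute dataAttribute in dataAttributes)
            {
                foreach (object[] row in dataAttribute.GetData(method))
                {
                    method.Invoke(testInstance, row); // asserts throw on failure
                }
            }
        }
    }
}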

Testing static classes and methods in C# .NET

I'm relatively new to unit testing, and very new to C#, but I've been trying to test code that uses static classes with static methods, and it seems like I have to write huge amounts of boilerplate code in order to test, and that code would then also probably need to be tested.
For example: I'm using the System.Web.Security.Membership class, with a method ValidateUser on it. It seems like I need to create an interface IMembership containing the method ValidateUser, then create a class MembershipWrapper that implements IMembership, implementing the method ValidateUser and passing the arguments on to the actual Membership class. Then I need to have properties on my class that uses the Membership to reference the wrapper so that I can inject the dependency for a mock object during testing.
So to test 1 line of code that uses Membership, I've had to create an interface, and a class, and add a property and constructor code to my class. This seems wrong, so I must be getting something wrong. How should I be going about this testing? I've had a brief look at some frameworks/libraries that do dependency injection, but they still appear to require lots of boilerplate, or a very deep understanding of what's going on under the hood.
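To make the discussion concrete, the boilerplate I'm describing looks roughly like this (LoginService is a made-up consumer):

public interface IMembership
{
    bool ValidateUser(string username, string password);
}

public class MembershipWrapper : IMembership
{
    // Thin pass-through to the static framework class.
    public bool ValidateUser(string username, string password)
    {
        return System.Web.Security.Membership.ValidateUser(username, password);
    }
}

public class LoginService
{
    private readonly IMembership membership;

    // Tests inject a mock IMembership; production injects MembershipWrapper.
    public LoginService(IMembership membership)
    {
        this.membership = membership;
    }

    public bool Login(string username, string password)
    {
        return membership.ValidateUser(username, password);
    }
}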
I don't see anything wrong in making your system loosely coupled. I assume you don't object to creating constructor parameters and passing abstract dependencies to your classes; it's just that instantiating dependencies in place looks so much easier, doesn't it?
Also, as I pointed out in the comments, you can reuse the wrappers later, so this is not such useless work as it seems at first glance.
You are on the right track, but think of it this way: you are not testing a single line of code. You are writing an important test that ensures your code interacts with the membership provider in the right way. This is not a simple unit test but rather a "mock-based" integration test, and I think it's worth creating all these mocks to get this part of the application covered by tests.
And yes, it seems like overkill, but there is no other way: either you use some helper libraries, or you wrap third-party static dependencies yourself.
If you're not happy taking the approach of constructor injection, you could look at using an Ambient Context.
You basically set up a default which will call System.Web.Security.Membership.ValidateUser.
You then call the exposed method on the context in your code, and you can now mock it for your tests.
This allows you to write less setup code, but it also hides the fact that you have a dependency, which might be a problem in the future (depending on how you're reusing code).
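A rough sketch of that pattern (MembershipContext is a hypothetical name):

using System;

public static class MembershipContext
{
    // Defaults to the real static call; tests overwrite this delegate.
    private static Func<string, string, bool> validateUser =
        System.Web.Security.Membership.ValidateUser;

    public static Func<string, string, bool> ValidateUser
    {
        get { return validateUser; }
        set
        {
            if (value == null) throw new ArgumentNullException("value");
            validateUser = value;
        }
    }
}

// Production code: bool ok = MembershipContext.ValidateUser(name, password);
// In a test:       MembershipContext.ValidateUser = (u, p) => true;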
If you're using VS2012, you can always use Shims in Microsoft Fakes for static calls (or .Net library calls too).
http://msdn.microsoft.com/en-us/library/hh549175(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/hh549176.aspx
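A sketch of what that looks like, assuming you've added a Fakes assembly for System.Web in Visual Studio (which is what generates the Shim types below):

using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginTests
{
    [TestMethod]
    public void Login_WithValidUser_Succeeds()
    {
        using (ShimsContext.Create())
        {
            // Detours the static Membership.ValidateUser(string, string)
            // call for the lifetime of this shims context.
            System.Web.Security.Fakes.ShimMembership.ValidateUserStringString =
                (username, password) => true;

            // ... exercise the code under test here ...
        }
    }
}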

How do I unit test code that creates a new Process?

How can I test the following method?
It is a method on a concrete class implementation of an interface.
I have wrapped the Process class with an interface that only exposes the methods and properties I need. The ProcessWrapper class is the concrete implementation of this interface.
public void Initiate(IEnumerable<Cow> cows)
{
    foreach (Cow c in cows)
    {
        c.Process = new ProcessWrapper(c);
        c.Process.Start();
        count++;
    }
}
There are two ways to get around this. The first is to use dependency injection. You could inject a factory and have Initiate call the create method to get the kind of ProcessWrapper you need for your test.
The other solution is to use a mocking framework such as TypeMock, that will let you work around this. TypeMock basically allows you to mock anything, so you could use it to provide a mock object instead of the actual ProcessWrapper instances.
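A sketch of the factory approach; IProcessWrapper stands in for the question's interface, and the factory interface plus the hosting class are my own assumptions:

using System.Collections.Generic;

public interface IProcessWrapperFactory
{
    IProcessWrapper Create(Cow cow);
}

public class ProcessWrapperFactory : IProcessWrapperFactory
{
    public IProcessWrapper Create(Cow cow)
    {
        return new ProcessWrapper(cow); // real processes in production
    }
}

public class CowInitiator
{
    private readonly IProcessWrapperFactory factory;
    private int count;

    public CowInitiator(IProcessWrapperFactory factory)
    {
        this.factory = factory;
    }

    public void Initiate(IEnumerable<Cow> cows)
    {
        foreach (Cow c in cows)
        {
            // A test injects a factory that returns mocks, so no real
            // process is ever started.
            c.Process = factory.Create(c);
            c.Process.Start();
            count++;
        }
    }
}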
I'm not familiar with C# (I prefer mine without the hash), but you need some sort of interface to the process (IPC or whatever is the most convenient method) so you can send it test requests and get results back. At the simplest level, you would just send a message to the process and receive the result. Or you could have more granularity and send more specific commands from your test harness. It depends on how you have set up your unit testing environment, more precisely how you send the test commands, how you receive them and how you report the results.
I would personally have a test object inside the process that simply receives, runs & reports the unit test results and have the test code inside that object.
What does your process do? Is there any way you could check that it is doing what it's supposed to do? For example, it might write to a file or a database table. Or it might expose an API (IPC, web-service, etc.) that you could try calling with test data.
From a TDD perspective, it might make sense to plug in a "mock/test process" that performs some action you can easily check. (This may require code changes to allow your test code to inject something.) This way, you're only testing your invocation code, and not necessarily an actual business process. You could then have different unit tests to test your business process.
