I have tests written in XUnit using InlineData and MemberData attributes. I would like to run tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes I would just call the methods directly like any other normal method. The asserts are still checked and it functions fine. But if I call a method directly that has the attributes, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test runner class in XUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces, for example). There's plenty there to test, and I don't want to copy-paste the tests for each concrete implementer (there could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test – a subject to test on.
But this leaves a problem. XUnit sees the test and wants to run it, but it can't, because there are no concrete implementations available at this layer; there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations and then invoke the interface tests, passing in the new subjects. I can easily do this for tests that only accept one argument, the subject, but for tests where I'm also using InlineData or MemberData I would like to reuse the test cases already provided and just add the subject as the first argument.
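A minimal sketch of the setup being described, using IList<int> / List<int> as stand-ins for the interfaces and implementations in question (the class and method names are made up):

using System.Collections.Generic;
using Xunit;

// Interface-level tests: the subject arrives as the first parameter and the
// data attributes supply the rest. xUnit can't run these directly, because
// nothing at this layer supplies the subject.
public class ListContractTests
{
    [Theory]
    [InlineData(42)]
    [InlineData(-1)]
    public void Add_IncreasesCount(IList<int> subject, int value)
    {
        int before = subject.Count;
        subject.Add(value);
        Assert.Equal(before + 1, subject.Count);
    }
}

// Higher layer: news up the concrete type and forwards it. Today the data
// rows have to be repeated here; the goal is to reuse the InlineData above.
public class ListOfIntTests
{
    [Theory]
    [InlineData(42)]
    [InlineData(-1)]
    public void Add_IncreasesCount(int value) =>
        new ListContractTests().Add_IncreasesCount(new List<int>(), value);
}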
For reference, see the GitHub issue "How to programmatically run XUnit tests" in the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
using (var runner = AssemblyRunner.WithAppDomain(testAssembly)) // testAssembly = path to the test DLL
{
    // Wire up callbacks before starting; they fire as discovery and execution progress.
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;

    Console.WriteLine("Discovering...");
    runner.Start(typeName); // typeName optionally restricts the run to a single test class

    finished.WaitOne();     // a ManualResetEvent, signalled from OnExecutionComplete
    finished.Dispose();

    return result;          // exit code set by the callbacks (non-zero on failure)
}
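The snippet references a few members defined elsewhere in that sample program (the ManualResetEvent, the result code, and the four callbacks). Paraphrased, they look roughly like this; treat the exact property names on the Xunit.Runners callback-info types as illustrative rather than a verbatim copy of the sample:

// Fields and callbacks referenced by the snippet above (paraphrased from the sample).
static ManualResetEvent finished = new ManualResetEvent(false);
static int result = 0; // set to 1 when any test fails

static void OnDiscoveryComplete(DiscoveryCompleteInfo info)
{
    Console.WriteLine($"Running {info.TestCasesToRun} of {info.TestCasesDiscovered} tests...");
}

static void OnExecutionComplete(ExecutionCompleteInfo info)
{
    Console.WriteLine($"Finished: {info.TotalTests} tests ({info.TestsFailed} failed, {info.TestsSkipped} skipped)");
    finished.Set(); // unblocks the WaitOne() in the snippet above
}

static void OnTestFailed(TestFailedInfo info)
{
    Console.WriteLine($"[FAIL] {info.TestDisplayName}: {info.ExceptionMessage}");
    result = 1;
}

static void OnTestSkipped(TestSkippedInfo info)
{
    Console.WriteLine($"[SKIP] {info.TestDisplayName}: {info.SkipReason}");
}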
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Never mind, I figured it out. Taking a closer look at XUnit's attribute hierarchy, I found that the data attributes (InlineData, MemberData, etc.) derive from DataAttribute, which has a GetData method you can call to retrieve the set of data rows they represent. With a little reflection I can easily find all the tests in my test class, call GetData on any data attributes present, and invoke the test methods through my own code with that data. The GetData part would have been much harder if I had to roll my own version of it. Thank you, XUnit authors, for not forcing me to do that.
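A rough sketch of that approach, assuming the convention from this question (every interface test takes the subject as its first parameter). InterfaceTestInvoker is a made-up name, but DataAttribute.GetData(MethodInfo) is the xUnit 2.x API that InlineData and MemberData implement:

using System;
using System.Linq;
using System.Reflection;
using Xunit;
using Xunit.Sdk;

public static class InterfaceTestInvoker
{
    public static void RunAll(Type testClass, object subject)
    {
        object instance = Activator.CreateInstance(testClass);

        foreach (MethodInfo method in testClass.GetMethods()
                     .Where(m => m.IsDefined(typeof(FactAttribute), inherit: false)))
        {
            var dataAttributes = method.GetCustomAttributes(typeof(DataAttribute), inherit: false)
                                       .Cast<DataAttribute>()
                                       .ToList();

            if (dataAttributes.Count == 0)
            {
                // Plain [Fact]-style test: only the subject is needed.
                method.Invoke(instance, new[] { subject });
                continue;
            }

            // [Theory]-style test: ask each data attribute for its rows and
            // prepend the subject to every row before invoking.
            foreach (DataAttribute dataAttribute in dataAttributes)
                foreach (object[] row in dataAttribute.GetData(method))
                    method.Invoke(instance, new[] { subject }.Concat(row).ToArray());
        }
    }
}

From the higher layer this could then be called as, for example, InterfaceTestInvoker.RunAll(typeof(ListContractTests), new List<int>()) – again with hypothetical names.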
Related
I'm trying to run a unit test in C#, but only if another unit test passes. I can't quite get it to work; does anybody know how I can do this?
Edit: It's in NUnit
I encountered this problem a while back. A solution I came up with may not be the most elegant, but it works. What I did to get around the ordering issue is I created my own unit testing framework. It was proprietary to the company I worked for, so I can't share it with you.
In the testing framework, there existed
A template for each type of test & a generic template to aggregate all the tests
A utility to execute each template type
For example, if I was doing an integration test, I would have a "http utility" and the template would contain the endpoint & payload.
The tests I wanted to run would need to be stored in an intermediate data structure such as JSON. This allowed me to serialize the tests into templates.
Now this is where it gets tricky... Using some fancy T4 templating, I would read the JSON data and deserialize it into a list of templates. Then I would order the tests by execution order and dependency (one test could be dependent on another for chaining integration tests). I would then generate a unit test for every template. The generated unit tests would then run on build.
For your question about cancelling test execution if one fails, you can build that into your templating with some fancy logic:
static List<ITestTemplate> requiredTests = new List<ITestTemplate>();
...
if (requiredTests.Any(t => t.Failed))
    Assert.IsTrue(false); // fail subsequent tests
You could accomplish this, though it may not be the cleanest solution, by using NUnit's [Order] attribute.
This will allow you to run the dependency test as [Order(1)] and the test relying on it as [Order(2)]. You can share your driver across the tests, and if the first test fails, close the driver, causing the tests that rely on the first test passing to fail as well.
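A rough sketch of that idea with NUnit's OrderAttribute, using a shared flag in place of the driver (all names here are made up for illustration):

using NUnit.Framework;

[TestFixture]
public class LoginDependentTests
{
    // Shared state between the ordered tests; hypothetical example only.
    private static bool loginSucceeded;

    [Test, Order(1)]
    public void Login_Works()
    {
        // ... drive the login flow and assert on it here ...
        loginSucceeded = true; // only reached if the assertions above pass
    }

    [Test, Order(2)]
    public void Checkout_RequiresLogin()
    {
        if (!loginSucceeded)
            Assert.Fail("Login_Works did not pass, so this dependent test fails too.");

        // ... test the behaviour that relies on a successful login ...
    }
}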
I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the most important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour, you can extract the common code into a helper method and call it from different tests; for instance, a helper method which creates a user mock:
// Rhino Mocks helper shared by several tests
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep Test1 as it is. It only focuses on the registration process, so you don't have to save the ServiceKey anywhere; just assert inside the test method.
For the second test you have to set up (fake) everything it needs to run successfully. It is generally a good idea to follow the Arrange-Act-Assert principle, where you set up the data to test, act upon it, and then check that everything worked as intended (it also adds clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get from the first test run. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS 2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
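A minimal sketch of that idea with MSTest and Moq; IRegistrationService, DataService and the Guid-based ServiceKey are made-up stand-ins for the real CRUD system:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Made-up stand-ins for the real system under test.
public interface IRegistrationService { bool IsValid(Guid serviceKey); }

public class DataService
{
    private readonly IRegistrationService registration;
    public DataService(IRegistrationService registration) { this.registration = registration; }
    public bool AddData(Guid serviceKey, string payload) { return registration.IsValid(serviceKey); }
}

[TestClass]
public class AddDataTests
{
    [TestMethod]
    public void AddData_WithValidServiceKey_Succeeds()
    {
        // Arrange: fake the ServiceKey instead of carrying it over from the registration test.
        var serviceKey = Guid.NewGuid();
        var registration = new Mock<IRegistrationService>();
        registration.Setup(r => r.IsValid(serviceKey)).Returns(true);
        var sut = new DataService(registration.Object);

        // Act
        bool added = sut.AddData(serviceKey, "some payload");

        // Assert
        Assert.IsTrue(added);
    }
}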
Hope this helps.
My guess is that the current semantics of unit testing involve actually calling the method, i.e., if I have a method MyTest() then that's what gets called. My question is this: is it possible to somehow change the pipeline of the way tests are executed (preferably without recompiling the test runner) so that, say, instead of calling the method directly it's called via a wrapper I provide (i.e., MyWrapper(MyTest))?
Thanks.
If you use MbUnit then there's a lot of stuff you can customize by defining custom attributes.
The easiest way to do this is to create a subclass of TestDecoratorAttribute and override the SetUp, TearDown or Execute methods to wrap them with additional logic of your choice.
However if you need finer control, you can instead create a subclass of TestDecoratorPatternAttribute and override the DecorateTest method with logic to add additional test actions or test instance actions.
For example, the MbUnit [Repeat] attribute works by wrapping the test's body run action (which runs all phases of the test) with a loop and some additional bookkeeping to run the test repeatedly.
Here's the code for RepeatAttribute: http://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/RepeatAttribute.cs
It depends on how the unit testing framework provides interception and extensibility capabilities.
Most frameworks (MSTest, NUnit etc.) allow you to define Setup and Teardown methods that are guaranteed to run before and after the test.
xUnit.NET has more advanced extensibility mechanisms where you can define custom attributes to decorate your test methods and change the way they are invoked. As an example, the TheoryAttribute lets you define parameterized tests.
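A tiny illustrative theory (the test and its values are made up for this sketch); xUnit runs the method once per InlineData row:

using Xunit;

public class CalculatorTests
{
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}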
I don't know MbUnit, so I can't say whether it supports these scenarios or not.
Can anyone come up with guidelines suggesting the ideal scenarios to choose mocking versus faking, i.e., setting up the essentials manually?
I am a bit confused with how to approach this situation.
Well, you have a few things to sort out. There are two basic things you'll need to know: nomenclature and best practices.
First I want to give you a great video resource from a great tester, Roy Osherove:
Unit Testing Reviews by Roy Osherove
He starts out by saying that he has done some reviews of test harnesses shipped with several open source projects. You can find those here: http://weblogs.asp.net/rosherove/archive/tags/TestReview/default.aspx
These are basically video reviews where he walks you through these test harnesses and tells you what is good and what is bad. Very helpful.
Roy also has a book that I understand is very good.
Nomenclature
This podcast will help out immensely: http://www.hanselminutes.com/default.aspx?showID=187
I'll paraphrase the podcast, though (that Hanselminutes intro music is dreadful):
Basically everything you do with an isolation framework (like Moq, Rhino Mocks, TypeMock, etc.) is called a fake. A fake is an object in use during a test that the code you are testing can call in place of production code. A fake is used to isolate the code you are trying to test from other parts of your application.
There are (mainly) two types of fakes: stubs and mocks.
A mock is a fake that you put in place so that the code you are testing can call out to it, and you assert that the call was made with the correct parameters. The sample below does just this using the Moq isolation framework:
[TestMethod]
public void CalculateTax_ValidTaxRate_DALCallIsCorrect()
{
    //Arrange
    Mock<ITaxRateDataAccess> taxDALMock = new Mock<ITaxRateDataAccess>();
    taxDALMock.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08m).Verifiable();
    TaxCalculator calc = new TaxCalculator(taxDALMock.Object);

    //Act
    decimal result = calc.CalculateTax("75001", 100.00m);

    //Assert
    taxDALMock.VerifyAll();
}
A stub is almost the same as a mock, except that you put it in place to make sure the code you are testing gets back consistent data from its call (for instance, if your code calls a data access layer, a stub would return back fake data), but you don't assert against the stub itself. That is, you don't care to verify that the method called your fake data access layer – you are trying to test something else. You provide the stub to get the method you are trying to test to work in isolation.
Here’s an example with a stub:
[TestMethod]
public void CalculateTax_ValidTaxRate_TaxValueIsCorrect()
{
    //Arrange
    Mock<ITaxRateDataAccess> taxDALStub = new Mock<ITaxRateDataAccess>();
    taxDALStub.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
              .Returns(0.08m);
    TaxCalculator calc = new TaxCalculator(taxDALStub.Object);

    //Act
    decimal result = calc.CalculateTax("75001", 100.00m);

    //Assert
    Assert.AreEqual(8.00m, result);
}
Notice here that we are testing the output of the method, rather than the fact that the method made a call to another resource.
Moq doesn't really make an API distinction between a mock and a stub (notice both were declared as Mock<T>), but the usage here is important in determining the type.
Hope this helps set you straight.
There are at least 5 different kinds of test doubles: dummies, stubs, mocks, spies and fakes. A good overview is at http://code.google.com/testing/TotT-2008-06-12.pdf and they are also categorized at http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
You want to test a chunk of code, right? Let's say a method. Your method downloads a file from an HTTP URL, then saves the file to disk, and then mails out that the file is on disk. All three of these actions are of course performed by service classes your method calls, because that makes them easy to mock. If you don't mock them, your test will download stuff, access the disk, and mail a message every time you run it. Then you are not just testing the code in the method, you are also testing the code that downloads, writes to disk and sends mail. If you mock these instead, you are testing just the method's code. You are also able to simulate, for instance, a download failure, to see that your method's code behaves correctly.
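To make that concrete, here is a hedged sketch with Moq and MSTest; the three interfaces and the FilePublisher class are invented for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical collaborators for the download / save / mail scenario above.
public interface IDownloader { byte[] Download(string url); }
public interface IFileStore { void Save(string path, byte[] content); }
public interface IMailer { void Send(string message); }

public class FilePublisher
{
    private readonly IDownloader downloader;
    private readonly IFileStore store;
    private readonly IMailer mailer;

    public FilePublisher(IDownloader downloader, IFileStore store, IMailer mailer)
    {
        this.downloader = downloader;
        this.store = store;
        this.mailer = mailer;
    }

    // The method under test: download, save to disk, mail a notification.
    public void Publish(string url, string path)
    {
        byte[] content = downloader.Download(url);
        store.Save(path, content);
        mailer.Send(path + " is on disk");
    }
}

[TestClass]
public class FilePublisherTests
{
    [TestMethod]
    public void Publish_DownloadsSavesAndMails()
    {
        // Arrange: mock all three collaborators so nothing touches the network or disk.
        var content = new byte[] { 1, 2, 3 };
        var downloader = new Mock<IDownloader>();
        var store = new Mock<IFileStore>();
        var mailer = new Mock<IMailer>();
        downloader.Setup(d => d.Download("http://example.com/file")).Returns(content);

        var publisher = new FilePublisher(downloader.Object, store.Object, mailer.Object);

        // Act
        publisher.Publish("http://example.com/file", @"c:\temp\file");

        // Assert: only this method's own behaviour is verified.
        store.Verify(s => s.Save(@"c:\temp\file", content));
        mailer.Verify(m => m.Send(It.IsAny<string>()));
    }
}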
Now as for faking, I usually fake classes that just hold values and don't have much logic. If you are sending in an object that holds some values that get changed in the method, you can read them off in the test to see that the method does the right thing.
Of course the rules can (and sometimes must) be bent a bit, but the general way of thinking is test your code, and your code only.
The Little Mocker, from Bob Martin, is a very good reading on the topic.
[...] long ago some very smart people wrote a paper that introduced and defined the term Mock Object. Lots of other people read it and started using that term. Other people, who hadn't read the paper, heard the term and started using it with a broader meaning. They even turned the word into a verb. They'd say, "Let's mock that object out.", or "We've got a lot of mocking to do."
The article explains the differences between mocks, fakes, spies and stubs.
How can I test the following method?
It is a method on a concrete class implementation of an interface.
I have wrapped the Process class with an interface that only exposes the methods and properties I need. The ProcessWrapper class is the concrete implementation of this interface.
public void Initiate(IEnumerable<Cow> cows)
{
    foreach (Cow c in cows)
    {
        // The hard-coded "new ProcessWrapper(c)" is what makes this method
        // difficult to test in isolation.
        c.Process = new ProcessWrapper(c);
        c.Process.Start();
        count++;
    }
}
There are two ways to get around this. The first is to use dependency injection. You could inject a factory and have Initiate call the create method to get the kind of ProcessWrapper you need for your test.
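For the factory approach, a rough sketch (IProcess stands in for the interface wrapping Process from the question; IProcessWrapperFactory and ProcessManager are made-up names):

using System.Collections.Generic;

public interface IProcessWrapperFactory
{
    IProcess Create(Cow cow);
}

public class ProcessManager
{
    private readonly IProcessWrapperFactory factory;
    private int count;

    public ProcessManager(IProcessWrapperFactory factory)
    {
        this.factory = factory;
    }

    public void Initiate(IEnumerable<Cow> cows)
    {
        foreach (Cow c in cows)
        {
            // The test can supply a factory that hands back mock IProcess objects,
            // so Start() never launches a real process.
            c.Process = factory.Create(c);
            c.Process.Start();
            count++;
        }
    }
}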
The other solution is to use a mocking framework such as TypeMock, that will let you work around this. TypeMock basically allows you to mock anything, so you could use it to provide a mock object instead of the actual ProcessWrapper instances.
I'm not familiar with C# (I prefer mine without the hash), but you need some sort of interface to the process (IPC or whatever is the most convenient method) so you can send it test requests and get results back. At the simplest level, you would just send a message to the process and receive the result. Or you could have more granularity and send more specific commands from your test harness. It depends on how you have set up your unit testing environment, more precisely how you send the test commands, how you receive them and how you report the results.
I would personally have a test object inside the process that simply receives, runs & reports the unit test results and have the test code inside that object.
What does your process do? Is there any way you could check that it is doing what it's supposed to do? For example, it might write to a file or a database table. Or it might expose an API (IPC, web-service, etc.) that you could try calling with test data.
From a TDD perspective, it might make sense to plug in a "mock/test process" that performs some action that you can easily check. (This may require code changes to allow your test code to inject something.) This way, you're only testing your invocation code, and not necessarily testing an actual business process. You could then have different unit tests to test your business process.