Run a C# unit test only if another one passes

I'm trying to run a unit test in C#, but only if another unit test passes. I can't quite get it to work; does anybody know how I can do this?
Edit: It's in NUnit

I encountered this problem a while back. The solution I came up with may not be the most elegant, but it works. To get around the ordering issue, I created my own unit testing framework. It was proprietary to the company I worked for, so I can't share it with you.
In the testing framework, there existed:
A template for each type of test & a generic template to aggregate all the tests
A utility to execute each template type
For example, if I was doing an integration test, I would have a "http utility" and the template would contain the endpoint & payload.
The tests I wanted to run would need to be stored in an intermediate data structure such as JSON. This allowed me to serialize the tests into templates.
Now this is where it gets tricky: using some fancy T4 templating, I would read the JSON data and deserialize it into a list of templates. Then I would order the tests by execution order and dependency (one test could depend on another, for chaining integration tests). I would then generate a unit test for every template. The generated unit tests would then run on build.
For your question about canceling test execution if one fails, you can build that into your templating with some logic:
static List<ITestTemplate> requiredTests = new List<ITestTemplate>();
...
if (requiredTests.Any(t => t.Failed))
    Assert.IsTrue(false); // fail subsequent tests

You could accomplish this, though it may not be the cleanest solution, by using the [Order()] attribute. (docs)
This will allow you to run the dependency test as [Order(1)] and the test relying on it as [Order(2)]. You can share your driver across the tests, and if the first test fails, close the driver, causing the remaining tests that depend on it to fail.
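As a minimal sketch of that fail-fast pattern, here is the gating logic in plain C#. The NUnit attributes are shown only in comments, and the test names (LoginTest, DashboardTest) are hypothetical; a shared flag records whether the dependency passed, exactly as a shared driver would.

```csharp
using System;
using System.Collections.Generic;

// Sketch: a shared flag records whether the dependency test passed.
// In real NUnit code, LoginTest would carry [Test, Order(1)] and
// DashboardTest [Test, Order(2)]; these names are hypothetical.
var results = new List<string>();
bool dependencyPassed = false;

void LoginTest()                      // would be [Test, Order(1)]
{
    // ... arrange/act/assert against the real system ...
    dependencyPassed = true;          // only reached if the asserts pass
    results.Add("LoginTest: passed");
}

void DashboardTest()                  // would be [Test, Order(2)]
{
    if (!dependencyPassed)
    {
        results.Add("DashboardTest: failed (dependency did not pass)");
        return;                       // in NUnit you would Assert.Fail() here
    }
    results.Add("DashboardTest: passed");
}

LoginTest();
DashboardTest();
foreach (var r in results) Console.WriteLine(r);
```

If LoginTest never sets the flag, every later test that checks it fails immediately instead of running against a dead driver.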

Related

C# Unit tests based on unknown number of json files

I would like to create a test suite based on the tests defined in this repo:
https://github.com/json-schema/JSON-Schema-Test-Suite
The tests are defined in a number of json files, that can contain any number of tests.
Obviously I don't want to write a test for each test defined in this repo; I want to auto-discover the tests and run them. I would still like to use a standard unit test framework like NUnit or xUnit, but if I just loop through all the files in a single test, I don't get much information out of it. It will just be a single test that fails or passes in Jenkins or TeamCity. I could of course output all the relevant data, but that isn't nice.
Is there a way to make each test show up as a single unit test when running my test suite, without having to actually write each test method?
EDIT:
I guess what I am looking for is something like xUnit ClassData, as described in Pass complex parameters to [Theory]. But then my next question would be whether I can have each test show up with a different name ;)
What I needed to search for on Google was "data driven unit tests".
NUnit TestCaseSource does exactly what I want it to.
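As a sketch of the data half of that approach, the following parses one suite file in the JSON-Schema-Test-Suite layout (an array of groups, each with a "description" and a "tests" array) and yields one named case per inner test. In real NUnit code the enumerator would be referenced via [TestCaseSource] and each case wrapped in TestCaseData.SetName(...) so it shows up as its own test; the sample JSON below is a hypothetical file in the suite's shape.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Parse one suite file and yield (test name, expected validity) per test.
// In NUnit, this method would feed [TestCaseSource(nameof(Cases))].
IEnumerable<(string Name, bool ExpectedValid)> Cases(string json)
{
    using var doc = JsonDocument.Parse(json);
    foreach (var group in doc.RootElement.EnumerateArray())
    {
        var groupName = group.GetProperty("description").GetString();
        foreach (var test in group.GetProperty("tests").EnumerateArray())
        {
            var name = $"{groupName} / {test.GetProperty("description").GetString()}";
            yield return (name, test.GetProperty("valid").GetBoolean());
        }
    }
}

// Hypothetical sample in the suite's shape:
const string sample = "[{\"description\":\"type: integer\",\"schema\":{\"type\":\"integer\"},\"tests\":[{\"description\":\"an integer is valid\",\"data\":1,\"valid\":true},{\"description\":\"a string is invalid\",\"data\":\"x\",\"valid\":false}]}]";

foreach (var c in Cases(sample))
    Console.WriteLine($"{c.Name}: expect valid={c.ExpectedValid}");
```

In practice you would enumerate the repo's *.json files with Directory.EnumerateFiles and feed each one through the same generator, so every inner test appears individually in Jenkins or TeamCity.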
@Kieren Johnstone: I would prefer not to do code generation. I considered that and decided against it for the sake of simplicity.
You could generate code using T4 templates or some other automatic means. The massive advantage here is that the tests themselves are already in a decent data structure (you couldn't line things up better than that!).

How to execute XUnit tests via code

I have tests written in xUnit using the InlineData and MemberData attributes. I would like to run the tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes, I would just call the methods directly like any other normal method; the asserts are still checked and it works fine. But if I call a method directly that has the attributes, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test runner class in xUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces, for example). There's plenty there to test, and I don't want to copy-paste the tests for each concrete implementer (there could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test, a subject to test on.
But this leaves a problem. XUnit sees the test and wants to run it, but it can't because there are no concrete implementations available at this layer, there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations, and then invoke the interface tests passing in the new subjects. I can easily do this for tests that only accept 1 argument, the subject, but for tests where I'm using InlineData or MemberData too I would like to reuse those test cases already provided and just add the subject as the first argument.
Available for reference is the GitHub issue How to programmatically run XUnit tests from the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
// testAssembly: path to the assembly under test; typeName: optional fully
// qualified type name to restrict the run (null runs everything).
using (var runner = AssemblyRunner.WithAppDomain(testAssembly))
{
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;

    Console.WriteLine("Discovering...");
    runner.Start(typeName);

    finished.WaitOne();  // a ManualResetEvent set in OnExecutionComplete
    finished.Dispose();

    return result;
}
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Never mind, I figured it out. Taking a closer look at xUnit's attribute hierarchy, I found that the data attributes (InlineData, MemberData, etc.) have a GetData method you can call to retrieve the set of data they represent. With a little reflection I can easily find all the tests in my test class and call the test methods, invoking the data attribute's GetData method if any are present, and perform the tests via my own code that way. The GetData part would have been much harder if I had to roll my own version of it. Thank you, xUnit authors, for not forcing me to do that.
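The mechanism described above can be sketched without xUnit at all: a data attribute is, at bottom, a source of object[] argument rows, and the runner invokes the test method once per row. A minimal stand-in (the test method and data rows here are hypothetical; in real code the rows would come from attribute.GetData(...) via reflection):

```csharp
using System;
using System.Collections.Generic;

// Stand-in for what [InlineData]/[MemberData] provide: each entry is one
// object[] of arguments, and the "runner" invokes the test once per row.
var rows = new List<object[]>
{
    new object[] { 2, 3, 5 },
    new object[] { 10, -4, 6 },
};

// Hypothetical test method: returns true when the assertion holds.
bool AddTest(int a, int b, int expected) => a + b == expected;

int failures = 0;
foreach (var row in rows)
{
    if (!AddTest((int)row[0], (int)row[1], (int)row[2]))
        failures++;
}
Console.WriteLine($"{rows.Count} cases, {failures} failures");
```

Prepending a subject as the first element of each row is then just a List insert before invocation, which is exactly the "add the subject as the first argument" step the question asks about.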

Unit testing C# database dependent code

I would like to create unit tests for data dependent code. For example:
A user class that has the usual create, update & delete.
If I wanted to create a test for the "user already exists" scenario, or an update or delete test, I would need to know that a specific user already exists in my database.
In such cases, what would be the best approach to have standalone tests for these operations that can run in any order?
When you have dependencies like this, think about whether you want to be doing integration testing as opposed to unit testing. If you do want unit tests, take a look at using mock data.
Integration Testing:
Tests how your code integrates with different parts of a system. This can be making sure your code connects to a database properly or has created a file on the filesystem. These tests are usually very straightforward and do not have the same constraint of "being able to run in any order." However, they require a specific configuration in order to pass, which means they do not travel well from developer to developer.
Unit Testing: Tests your code's ability to perform a function. For example, "Does my function AddTwoNumbers(int one, int two) actually add two numbers?" Unit tests ensure that changes in code do not affect the expected results.
When getting into areas like "Does my code call the database and enter the result correctly?", you need to consider that unit tests are not meant to interact with the system. This is where we get into using "mock data." Mock classes and mock data take the place of an actual system, just ensuring that your code "called out in the way we were expecting." The difficult part is that it can be done, but most of the .NET Framework classes do not provide the interfaces needed to do it easily.
See the MSDN page on Testing for more info. Also, consider this MSDN article on Mock Data.
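As a small sketch of the mock-data idea, here is the "user already exists" scenario with a delegate standing in for a repository interface or mocking framework. CreateUser and the canned lookup are hypothetical names for illustration; the point is that the test supplies the database's answer instead of needing a real row to exist.

```csharp
using System;

// Sketch: the code under test receives its lookup from outside, so a test
// can pass a fake instead of touching a real database.
string CreateUser(string name, Func<string, bool> userExists)
{
    if (userExists(name))
        return "error: user already exists";
    // ... a real implementation would insert the user here ...
    return "created";
}

// In a unit test, the "database" is just a canned answer:
Func<string, bool> fakeDb = name => name == "alice";  // pretend alice exists

Console.WriteLine(CreateUser("alice", fakeDb));  // exercises the error path
Console.WriteLine(CreateUser("bob", fakeDb));    // exercises the success path
```

Because the fake carries its own state, both the "already exists" and the "create succeeds" tests can run in any order without seeding a database first.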

Dependent Tests

Alright, so I'm using the Microsoft Test Framework for testing and I need to somehow build dependent tests. Why, you might ask? Because one of the tests ensures that I can load data from the database, and the others need to operate against that set; making the round trips repeatedly is something we don't want to do, to keep the automated tests efficient.
I have been searching and just can't seem to find a way to do a couple of things:
Decorate the test methods so that they are seen as dependent.
Persist the set between tests.
What have I tried?
Well, regarding the decoration for dependent tests, not much, because I can't seem to find an attribute built for that.
Regarding the persistence of the set, I've tried a private class variable and adding to the Properties of the test context. Both of those have resulted in the loss of the set between tests.
I hope that you can help me out here!
Create your tests separately and then use an Ordered Test to run them in the order you want.
If any of the tests fails, the whole ordered test is aborted and considered failed.
I think what you need is an ordered test list. You can create one in your test project under 'New Item...'. The ordered test list is a list of tests in a specified order that are executed in the same context.
By the way: unit tests should test only the smallest unit in an application, not a huge set of operations, because if one unit is not working correctly you can easily find which one.
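On the earlier point about losing the set between tests: MSTest creates a fresh instance of the test class for each test method, so instance fields reset between tests while static fields survive. A sketch of that lifecycle, simulated with dictionaries (the "tests" and keys are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Sketch: MSTest builds a new test-class instance per test, so state kept in
// instance fields is gone by the next test, while static state survives.
// A fresh dictionary per "test" plays the instance; one shared dictionary
// plays the static field.
var staticState = new Dictionary<string, object>();            // like a static field
Dictionary<string, object> NewInstance() => new Dictionary<string, object>();

// "Test 1": loads the expensive data set once.
var test1 = NewInstance();
test1["set"] = new[] { 1, 2, 3 };        // instance field: lost after this test
staticState["set"] = new[] { 1, 2, 3 };  // static field: persists

// "Test 2": runs with a brand-new instance.
var test2 = NewInstance();
bool instanceSurvived = test2.ContainsKey("set");
bool staticSurvived = staticState.ContainsKey("set");

Console.WriteLine($"instance survived: {instanceSurvived}, static survived: {staticSurvived}");
```

This is why the private class variable in the question lost its value: make the cached set static (or load it in a ClassInitialize method into a static field) and it will persist across the ordered run.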

How can i pass my tests directly to nunit

I want to run C90 tests in NUnit. I have written a short adapter that actually works, but I want to get rid of writing that adapter and instead make a tool that opens NUnit and passes in loadable data so that NUnit can run the tests.
I'll spare you the code I wrote so far and just summarize its usage:
I write my C90 tests as callable DLL functions with annotations above them as comments, for example /*TEST*/. As of now the test returns 0 or 1 for pass/fail information.
A C# path inspector inspects all *.c files for annotations and extracts the load information.
A C# Executer manages the execution of the tests and returns pass/fail information.
What I want to do instead of my Executer is to either create the launch information for NUnit and pass that into a freshly created GUI, or create C# code to pass into the freshly created GUI.
I know how to create an NUnit GUI runner, but I don't know how to pass information into it.
NUnit's source is available online; however, it uses a lot of singletons during initialisation, so I would argue against modifying it.
One way could be to actually write and compile .cs files and pass the compiled assemblies into NUnit.
The best approach seems to be writing an xUnit compliant framework from scratch.
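The "write and compile .cs files" route can be sketched as plain code generation: scan the C sources for the annotation and emit one NUnit test stub per annotated function. The annotation shape, signature convention, and P/Invoke details below are hypothetical simplifications, and the generated text assumes 0 means pass:

```csharp
using System;
using System.Text;
using System.Text.RegularExpressions;

// Sketch: turn /*TEST*/-annotated C functions into C# test stubs that could
// be compiled and handed to NUnit. Annotation shape and names are hypothetical.
string GenerateTests(string cSource, string dllName)
{
    var sb = new StringBuilder();
    sb.AppendLine("using NUnit.Framework;");
    sb.AppendLine("using System.Runtime.InteropServices;");
    sb.AppendLine("[TestFixture] public class GeneratedTests {");

    // Find "/*TEST*/ int some_test(" style declarations.
    var pattern = new Regex(@"/\*TEST\*/\s*int\s+(\w+)\s*\(");
    foreach (Match m in pattern.Matches(cSource))
    {
        var fn = m.Groups[1].Value;
        sb.AppendLine($"  [DllImport(\"{dllName}\")] static extern int {fn}();");
        // Assumes 0 = pass; flip the expectation if 1 means pass.
        sb.AppendLine($"  [Test] public void {fn}_test() => Assert.AreEqual(0, {fn}());");
    }
    sb.AppendLine("}");
    return sb.ToString();
}

var cCode = "/*TEST*/ int check_parser(void) { return 0; }";
Console.WriteLine(GenerateTests(cCode, "mytests.dll"));
```

Compiling the generated file against nunit.framework and loading the resulting assembly in the NUnit GUI would then give one visible test per annotated C function, replacing the hand-written Executer.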
