The problem: I have a bunch of xUnit tests that connect to an external server. If that server is up, they fly through very quickly. If that server is down, everything goes into timeouts and retries and the tests all take a long time. The thing is, if I have 100 tests and the first one fails, I still want all the rest to fail, but it's completely pointless running them. What I'd like is to be able to make my own fact attribute, something like:
[FactInFailureGroup( "ConnectsToBlah", typeof(ConnectionException) )]
public void TestOne()
{
    ...
}
[FactInFailureGroup( "ConnectsToBlah", typeof(ConnectionException) )]
public void TestTwo()
{
    ...
}
I've looked at before/after attributes, but they don't seem to be able to kill the test or see the result, and I've looked at creating my own fact attribute, which looks like it can prevent the test from running but not put in a trap for the result. I really want to somehow make code along these lines:
class FactInFailureGroupAttribute : FactAttribute
{
    private static HashSet<string> _failedGroups = new HashSet<string>();
    ...
    void BeforeTest( TestManipulator manipulator )
    {
        if (_failedGroups.Contains( _thisGroup ))
            manipulator.SkipWithMessage( $"all tests in group {_thisGroup} failed because of {_exceptionType}" );
    }
    void AfterTest( TestResult test )
    {
        if (test.Failed && test.Exception.GetType() == _exceptionType)
            _failedGroups.Add( _thisGroup );
    }
}
You can abuse constructors and statics to accomplish your task.
The constructor of your test class will get called at the beginning of every test, but your static should exist for your entire test run.
Try something like this:
private static object Lock = new object();
private static bool? IsServerUp;

public FactAssertionTests()
{
    lock (Lock)
    {
        if (!IsServerUp.HasValue)
        {
            // check server connection and set IsServerUp
        }
        else if (!IsServerUp.Value)
        {
            throw new Exception("Failed to connect to server");
        }
    }
}
That way whatever test gets into the Lock first will check the server's up-ness. If it fails, every test coming in next will throw the exception.
This isn't quite a decorator attribute and requires you to put all the tests into one class, but it's simple.
Edit:
I tested the following approach; it relies on the fact that xUnit runs all tests in a single class serially. If you've got tests spanning multiple classes, this won't work:
public class ConnectionTests
{
    [Fact]
    public void Test1()
    {
        RunServerTest(() =>
        {
            var svc = new ServiceConnection();
            svc.Connect();
        });
    }

    [Fact]
    public void Test2()
    {
        RunServerTest(() =>
        {
            var svc = new ServiceConnection();
            svc.Connect();
        });
    }

    private static bool? ServerIsDown;

    public void RunServerTest(Action testAction)
    {
        if (ServerIsDown.GetValueOrDefault())
        {
            throw new Exception("Server is down, test will not run.");
        }
        try
        {
            testAction.Invoke();
        }
        catch (ServerMissingException)
        {
            ServerIsDown = true;
            throw;
        }
    }
}
You can accomplish what you want by calling your first test explicitly in the constructor of the class. If it fails due to server missing it should throw an exception. This will prevent the class from being instantiated, and none of the other tests in the class will run. They will be marked as failed in the test explorer.
public class SO74607887Tests
{
    public SO74607887Tests()
    {
        TestOne();
    }

    [Fact]
    public void TestOne()
    {
        // Arrange
        // Act
        // Oops, server is down
        throw new Exception("Server is down");
    }

    [Fact]
    public void TestTwo()
    {
        // Will not run
    }
}
If your tests are distributed over a lot of different classes, you can follow @JeremyLakeman's suggestion to use a fixture. Proceed as follows:
using System;
using Xunit;

namespace SO74607887_XunitCancelTestsIfOneFails
{
    // This class will never actually be instantiated; it is only present to provide information to xUnit.
    [CollectionDefinition("SO74607887")]
    public class SO74607887CollectionDefinition : ICollectionFixture<SO74607887Base>
    {
    }

    // This class creates a single object injected into the constructors of all the other classes in the collection.
    public class SO74607887Base : IDisposable
    {
        public bool serverOK;

        public SO74607887Base()
        {
            // Check the server; if it is missing, clear the flag
            serverOK = false;
        }

        public void Dispose()
        {
            // Clean up
        }
    }

    [Collection("SO74607887")]
    public class SO74607887Tests
    {
        public SO74607887Tests(SO74607887Base basis)
        {
            if (!basis.serverOK)
            {
                throw new Exception($"Server is down, tests in {nameof(SO74607887Tests)} will not run.");
            }
        }

        [Fact]
        public void TestOne()
        {
            // Arrange
            // Act
            // Assert
        }

        [Fact]
        public void TestTwo()
        {
            // Arrange
            // Act
            // Assert
        }
    }
}
The check on the server is only done once in the fixture. All the other classes only need to check if it is available. The fixture constructor is guaranteed to run before the test classes are instantiated.
Instead of a boolean to check, you could also simply make a method checkServer(string forClass) in the fixture class which itself throws the exception; then you'd only have to call
basis.checkServer(nameof(SO74607887Tests));
in the test classes instead of throwing in each test class.
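A minimal sketch of that fixture variant (the class name, serverOK flag, and exception message mirror the example above; the CheckServer method and its hard-coded "down" state are illustrative):

```csharp
using System;

// Sketch: instead of exposing a bool, the fixture throws for any class
// whose tests cannot run. The server check is hard-coded for illustration.
public class ServerFixture
{
    public bool serverOK;

    public ServerFixture()
    {
        // Check the server here; hard-coded to "down" for this sketch.
        serverOK = false;
    }

    public void CheckServer(string forClass)
    {
        if (!serverOK)
            throw new Exception($"Server is down, tests in {forClass} will not run.");
    }
}
```

Each test class constructor then reduces to the single `basis.CheckServer(nameof(SO74607887Tests));` call.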
All the tests in the classes will be marked as failed in the Test Explorer window.
Related
I have a method
using Microsoft.VisualStudio.TestTools.UnitTesting; // using Visual Studio's test framework

[TestMethod]
public void ATestMethod()
{
    // stuff
}
from a public class ATestClass. This test class runs two types of tests:
tests requiring that a certain software is installed on the machine running the test
tests that can run free
To handle this, I added a public class BaseTestClass from which I made ATestClass derive, and in ATestClass I added a:
public bool isTheSoftwareInstalledOnTheMachine()
{
    // stuff
}
and I "decorated" all internal scopes of tests from ATestClass as follows:
[TestMethod]
public void ATestMethod()
{
    if (isTheSoftwareInstalledOnTheMachine())
    {
        // stuff
    }
}
I find this horrible. I would rather like to be able to write something like:
[TestMethod]
[RunIfTheSoftwareInstalledOnTheMachine]
public void ATestMethod()
{
    // stuff
}
but I don't know if one is allowed to define "custom" [characterizer]'s. (I don't even know the right word for them.) If it is, would that be the best design? (I heard about the decorator pattern, but I don't know if I could make it generic enough in my context, because I would potentially need to use the condition for many other test classes.) Anyway, how would I proceed with characterizer's?
I know you're using VS test framework but if you can change to NUnit you can accomplish what you want.
Test case:
using NUnit.Framework;

[TestFixture]
public class MyAppTests
{
    [Test]
    [RunIfTheSoftwareInstalledOnTheMachine]
    public void ATestMethod()
    {
        // Executes if custom attribute is true, otherwise test case is ignored
    }
}
Custom attribute:
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

public class TestHelper
{
    public static bool IsTheSoftwareInstalledOnTheMachine()
    {
        // Return state of software
        return true;
    }
}

public class RunIfTheSoftwareInstalledOnTheMachineAttribute : Attribute, ITestAction
{
    public ActionTargets Targets { get; private set; }

    public void AfterTest(ITest test) {}

    public void BeforeTest(ITest test)
    {
        if (!TestHelper.IsTheSoftwareInstalledOnTheMachine())
        {
            Assert.Ignore("Omitting {0}. Software is not installed on machine.", test.Name);
        }
    }
}
If you define your own attribute, you surely have to check for its existence on your own. You can't expect your framework to guess what the attribute is for.
But I suppose you don't even need an attribute to do this. You can simply ignore the test by putting the logic inside the test method anyway:
[Test]
public void MyTest()
{
    if(!RunIfTheSoftwareInstalledOnTheMachine)
        Assert.Ignore("Test not run because no software was installed");
    // your actual test-code
}
Another approach is to use the CategoryAttribute provided by NUnit, with which you can run only those tests that fall within your provided category:
[Test]
[Category("SoftwareInstalled")]
public void MyTest() { /* ... */ }
EDIT: You could also use the TestCaseSourceAttribute with a specific method that only yields a test case when the condition is met:
[TestCaseSource("ProvideTestcases")]
public void MyTest() { /* ... */ }

private static IEnumerable<TestCaseData> ProvideTestcases()
{
    if(RunIfTheSoftwareInstalledOnTheMachine)
        yield return new TestCaseData();
}
If the condition is not met, no test case is generated at all.
If the software being installed on the machine is a requirement for any of the tests to pass and any one test failing means the whole suite fails then why bother checking in multiple tests if the software is installed? Just write a single test to fail if the software is not installed and throw a useful exception. Something like:
[Test]
public void EnsureImportantSoftwareIsInstalled()
{
    if(!importantSoftwareIsInstalled)
    {
        Assert.Fail($"Software X must be installed for the tests in {nameof(MyTestClass)} to run, please install it");
    }
}
For NUnit 2.6, a slight variation of HimBromBeere's answer works well for me. The test case is displayed as ignored rather than omitted entirely:
[TestCaseSource("ProvideTestcases")]
public void MyTest() { /* ... */ }

private static IEnumerable<TestCaseData> ProvideTestcases()
{
    if(RunIfTheSoftwareInstalledOnTheMachine)
        yield return new TestCaseData();
    else
        yield return new TestCaseData().Ignore();
}
I am trying to grab a test result in NUnit 3 upon tear down using the internal ITestResult interface. However when I pass an ITestResult object to the tear down method I get "OneTimeSetup: Invalid signature for SetUp or TearDown method: TestFinished" where TestFinished is listed as my teardown method.
If I don't pass the object, the tests work fine. I have tried moving my [TearDown] method to the actual class containing the tests instead of my base class, but I get the same error. I would like my TestFinished function to run when each test completes, so I can act accordingly depending on pass/fail or what is in the exception message, rather than using the try/catch-with-action structure I have now.
Here is my code structure below:
----A file that starts and ends testing and creates a webdriver object to use---
[OneTimeSetUp]
public void Setup()
{
    //Do some webdriver setup...
}
----Base Test Class that is used for setup or tear down of testing----
[TestFixture]
public class BaseTestClass
{
    //Also use the webdriver object created at [OneTimeSetUp]
    protected void TestRunner(Action action)
    {
        //perform the incoming tests.
        try
        {
            action();
        }
        //if the test errors out, log it to the file.
        catch (Exception e)
        {
            //Do some logging...
        }
    }

    [TearDown]
    public void TestFinished(ITestResult i)
    {
        //Handle the result of a test using ITestResult object
    }
}
----Test file that uses the BaseTestClass----
class AccountConfirmation : BaseTestClass
{
    [Test]
    public void VerifyAccountData() {
        TestRunner(() => {
            //Do a test...
        });
    }
}
Remove the ITestResult from your TearDown method and instead use TestContext.CurrentContext.Result within the method.
For example,
[Test]
public void TestMethod()
{
    Assert.Fail("This test failed");
}

[TearDown]
public void TearDown()
{
    TestContext.WriteLine(TestContext.CurrentContext.Result.Message);
}
will output:
=> NUnitFixtureSetup.TestClass.TestMethod
This test failed
I'm planning to use Assert.Fail in this way in my unit testing.
Inside a private helper method inside the Test class (IsFileExist)
Inside the methods of a helper class (LoadData)
So is this OK, or is this outside the unit test framework's intended usage?
If I do it like this, when the Assert.Fail executes, does it unwind the whole stack for the test method, or only the stack for that particular method?
Helper class
public class DataLoader
{
    public void LoadData(string file)
    {
        if (!Util.readfile(file)) {
            Assert.Fail("Unable to read the file.");
        }
    }
}
Test class
[TestClass]
public class testFileData
{
    [TestMethod]
    public void TestData()
    {
        string file = "C:\\data.txt";
        this.IsFileExist(file);
        DataLoader dl = new DataLoader();
        dl.LoadData(file);
    }

    private void IsFileExist(string file)
    {
        if(!Util.IsFileExist(file)) {
            Assert.Fail("File not exist");
        }
    }
}
The fact that Assert is in the Microsoft.VisualStudio.TestTools.UnitTesting namespace should serve as a hint that no, you should not be using it outside of a unit test.
If you want to fail based on a condition in your code, throw an exception.
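For instance, the LoadData helper from the question could throw instead of asserting. A sketch, with the question's Util.readfile replaced by a plain File.Exists check so the example is self-contained:

```csharp
using System;
using System.IO;

// Sketch: the helper throws a normal exception instead of calling Assert.Fail,
// so it stays usable outside the test framework. The File.Exists check stands
// in for the question's Util.readfile.
public class DataLoader
{
    public void LoadData(string file)
    {
        if (!File.Exists(file))
            throw new FileNotFoundException("Unable to read the file.", file);
        // ... read and load the data ...
    }
}
```

The test then either lets the exception surface (failing the test with a clear message) or asserts on it explicitly.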
I have been using MSpec to write my unit tests and really prefer the BDD style; I think it's a lot more readable. I'm now using Silverlight, which MSpec doesn't support, so I'm having to use MSTest but would still like to maintain a BDD style, so I'm trying to work out a way to do this.
Just to explain what I'm trying to achieve, here's how I'd write an MSpec test:
[Subject(typeof(Calculator))]
public class when_I_add_two_numbers : with_calculator
{
    Establish context = () => this.Calculator = new Calculator();
    Because I_add_2_and_4 = () => this.Calculator.Add(2).Add(4);
    It should_display_6 = () => this.Calculator.Result.ShouldEqual(6);
}

public class with_calculator
{
    protected static Calculator Calculator;
}
So with MSTest I would try to write the test like this (although you can see it won't work because I've put in two TestInitialize attributes, but you get what I'm trying to do...):
[TestClass]
public class when_I_add_two_numbers : with_calculator
{
    [TestInitialize]
    public void GivenIHaveACalculator()
    {
        this.Calculator = new Calculator();
    }

    [TestInitialize]
    public void WhenIAdd2And4()
    {
        this.Calculator.Add(2).Add(4);
    }

    [TestMethod]
    public void ThenItShouldDisplay6()
    {
        this.Calculator.Result.ShouldEqual(6);
    }
}

public class with_calculator
{
    protected Calculator Calculator { get; set; }
}
Can anyone come up with some more elegant suggestions to write tests in this way with MSTest?
What do you think about this one:
[TestClass]
public class when_i_add_two_numbers : with_calculator
{
    public override void When()
    {
        this.calc.Add(2, 4);
    }

    [TestMethod]
    public void ThenItShouldDisplay6()
    {
        Assert.AreEqual(6, this.calc.Result);
    }

    [TestMethod]
    public void ThenTheCalculatorShouldNotBeNull()
    {
        Assert.IsNotNull(this.calc);
    }
}

public abstract class with_calculator : SpecificationContext
{
    protected Calculator calc;

    public override void Given()
    {
        this.calc = new Calculator();
    }
}

public abstract class SpecificationContext
{
    [TestInitialize]
    public void Init()
    {
        this.Given();
        this.When();
    }

    public virtual void Given() {}
    public virtual void When() {}
}

public class Calculator
{
    public int Result { get; private set; }

    public void Add(int p, int p_2)
    {
        this.Result = p + p_2;
    }
}
Mark Nijhof has an example of doing Given-When-Then style testing with NUnit in his Fohjin.DDD github repository.
Here's an excerpt from the example referenced above:
public class When_registering_an_domain_event : BaseTestFixture<PreProcessor>
{
    /* ... */
    protected override void When()
    {
        SubjectUnderTest.RegisterForPreProcessing<ClientMovedEvent>();
        SubjectUnderTest.Process();
    }

    [Then]
    public void Then_the_event_processors_for_client_moved_event_will_be_registered()
    {
        IEnumerable<EventProcessor> eventProcessors;
        EventProcessorCache.TryGetEventProcessorsFor(typeof(ClientMovedEvent), out eventProcessors);
        eventProcessors.Count().WillBe(1);
    }
}
And you can see the Given in the base class implementation:
[Given]
public void Setup()
{
    CaughtException = new NoExceptionWasThrownException();
    Given();
    try
    {
        When();
    }
    catch (Exception exception)
    {
        CaughtException = exception;
    }
    finally
    {
        Finally();
    }
}
I've been giving this sort of question a lot of thought recently. There are a lot of reasonable options out there, and you can create your own easily, as displayed in some of the answers in this post. I've been working on a BDD testing framework with the intent of making it easily extended to any unit testing framework. I currently support MSTest and NUnit. It's called Given, and it's open source. The basic idea is pretty simple: Given provides wrappers for common sets of functionality which can then be implemented for each test runner.
The following is an example of an NUnit Given test:
[Story(AsA = "car manufacturer",
       IWant = "a factory that makes the right cars",
       SoThat = "I can make money")]
public class when_building_a_toyota : Specification
{
    static CarFactory _factory;
    static Car _car;

    given a_car_factory = () =>
    {
        _factory = new CarFactory();
    };

    when building_a_toyota = () => _car = _factory.Make(CarType.Toyota);

    [then]
    public void it_should_create_a_car()
    {
        _car.ShouldNotBeNull();
    }

    [then]
    public void it_should_be_the_right_type_of_car()
    {
        _car.Type.ShouldEqual(CarType.Toyota);
    }
}
I tried my best to stay true to the concepts from Dan North's Introducing BDD blog, and as such, everything is done using the given, when, then style of specification. The way it is implemented allows you to have multiple givens and even multiple whens, and they should be executed in order (still checking into this).
Additionally, there is a full suite of Should extensions included directly in Given. This enables things like the ShouldEqual() call seen above, but it is full of nice methods for collection comparison, type comparison, etc. For those of you familiar with MSpec, I basically ripped them out and made some modifications to make them work outside of MSpec.
The payoff, though, I think, is in the reporting. The test runner is filled with the scenario you've created, so that at a glance you can get details about what each test is actually doing without diving into the code.
Additionally, an HTML report is created using T4 templating based on the results of the tests for each assembly. Classes with matching stories are all nested together, and each scenario name is printed for quick reference.
Failed tests would be colored red and can be clicked to view the exception details.
That's pretty much it. I'm using it in several projects I'm working on, so it is still being actively developed, but I'd describe the core as pretty stable. I'm looking at a way to share contexts by composition instead of inheritance, so that will likely be one of the next changes coming down the pike. Bring on the criticism. :)
You could use NUnit.Specifications and write tests like this:
using NUnit.Specifications;
using Should;

public class OrderSpecs
{
    [Component]
    public class when_a_customer_places_an_order : ContextSpecification
    {
        static OrderService _orderService;
        static bool _results;
        static Order _order;

        Establish context = () =>
        {
            _orderService = new OrderService();
            _order = new Order();
        };

        Because of = () => _results = _orderService.PlaceOrder(_order);

        It should_successfully_place_the_order = () => _results.ShouldBeTrue();
    }
}
MSTestEnhancer may help you, and you can get the package through NuGet.org.
Here is the sample code:
[TestClass]
public class TheTestedClassTest
{
    [ContractTestCase]
    public void TheTestedMethod()
    {
        "When Xxx happens, results in Yyy.".Test(() =>
        {
            // Write test case code here...
        });

        "When Zzz happens, results in Www.".Test(() =>
        {
            // Write test case code here...
        });
    }
}
And when you view your test results, each contract description is listed as its own result.
I have written a post to present more information about it. See Introducing MSTestEnhancer to make unit test result easy to read - walterlv for more details.
I have a test class named MyClass. MyClass has a TestFixtureSetUp that loads some initial data. I want to mark the whole class as Inconclusive when loading the initial data fails, just like when somebody marks a test method Inconclusive by calling Assert.Inconclusive().
Is there any solution?
You can work around it by signaling the failure from the fixture setup and checking it in each test's SetUp.
For example:
[TestFixture]
public class ClassWithDataLoad
{
    private bool loadFailed;

    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        // Assuming loading failure throws an exception.
        // If not, an if/else can be used instead.
        try
        {
            // Try to load the data
        }
        catch (Exception)
        {
            loadFailed = true;
        }
    }

    [SetUp]
    public void Setup()
    {
        if (loadFailed)
        {
            Assert.Inconclusive();
        }
    }

    [Test] public void Test1() { }
    [Test] public void Test2() { }
}
NUnit does not support Assert.Inconclusive() in the TestFixtureSetUp. If Assert.Inconclusive() is called there, all the tests in the fixture appear as failed.
Try this:
In your TestFixtureSetUp, store a static value in the class to indicate whether the data has yet to be loaded, was successfully loaded, or was attempted but unsuccessfully loaded.
In your SetUp for each test, check the value.
If it indicates an unsuccessful load, immediately bomb out by calling Assert.Inconclusive().
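A minimal sketch of those three steps, assuming NUnit 2.x (the fixture name and the load step are placeholders; the nullable bool holds the three states: null = load not yet attempted, true = loaded, false = attempted but failed):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ClassWithTriStateLoad
{
    // null = load not yet attempted, true = loaded, false = attempted but failed
    private static bool? _dataLoaded;

    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        try
        {
            // Try to load the data here...
            _dataLoaded = true;
        }
        catch (Exception)
        {
            _dataLoaded = false;
        }
    }

    [SetUp]
    public void Setup()
    {
        // An unsuccessful load marks every test in the fixture Inconclusive.
        if (_dataLoaded == false)
            Assert.Inconclusive("Initial data failed to load.");
    }

    [Test] public void Test1() { }
    [Test] public void Test2() { }
}
```

With this shape, a load failure shows every test in the fixture as inconclusive rather than failed, which is the behavior the question asks for.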