I am trying to execute some code before all tests run - more specifically, before any test case sources are evaluated.
The solution that is supposed to work does not execute the code early enough (the breakpoint is hit only after the test case sources have been evaluated):
[SetUpFixture]
public class GlobalSetup
{
    [OneTimeSetUp]
    public void Setup()
    {
        // Unfortunately does stuff after TestCaseSources
    }
}
I managed to achieve this by creating a fake test fixture with a static constructor, like this:
[TestFixture]
public class GlobalSetup
{
    // Step 1
    static GlobalSetup()
    {
        // Does stuff before every other TestCaseSource
    }

    // Step 3
    [Test, TestCaseSource(nameof(GlobalSetup.TestInitializerTestCaseData))]
    public void Setup() { }

    // Step 2
    public static IEnumerable TestInitializerTestCaseData => new[] { new TestCaseData() };
}
What is the correct way to do this without resorting to this workaround?
As the comments in the issue Pablo notPicasso cited indicate, you are trying to use NUnit in a way it isn't designed to be used. A global setup is possible, of course, but it runs before all other test execution. Test case sources are evaluated as a part of test discovery which has to be complete before any test execution can begin.
It's usually possible to use a global setup by reducing the amount of work done in your TestCaseSource. This is desirable anyway - TestCaseSource should always be as lightweight as possible.
Your example doesn't indicate exactly what you are trying to do before the tests are loaded. If you add more info, I can refine this answer. Meanwhile, here are a couple of examples...
If your TestCaseSource is creating instances of the objects you are testing, don't do that. Instead, initialize parameters like strings and ints in the source and create the objects in the SetUp or OneTimeSetUp for your tests.
If your TestCaseSource is initializing a database, don't do that. Instead, save the parameters that are needed to initialize the database and do the work in a OneTimeSetUp at some level in the test hierarchy (see the sketch below).
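For illustration, a minimal sketch of that second pattern might look like the following; the TestDatabase type and the customer IDs are purely hypothetical stand-ins for whatever expensive resource the real source was building:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical stand-in for whatever expensive resource the real source was creating.
public static class TestDatabase
{
    private static readonly HashSet<string> Rows = new HashSet<string>();

    public static void Initialize(params string[] ids)
    {
        foreach (var id in ids) Rows.Add(id);
    }

    public static bool Contains(string id) => Rows.Contains(id);
}

[TestFixture]
public class DatabaseTests
{
    // The source yields only cheap parameters (strings), so test discovery stays fast.
    public static IEnumerable<string> CustomerIds => new[] { "A-1", "A-2" };

    [OneTimeSetUp]
    public void InitializeDatabase()
    {
        // The expensive work happens here, after discovery, once per fixture.
        TestDatabase.Initialize("A-1", "A-2");
    }

    [Test, TestCaseSource(nameof(CustomerIds))]
    public void Customer_exists(string customerId)
    {
        Assert.That(TestDatabase.Contains(customerId), Is.True);
    }
}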
Hopefully, you get the idea... or provide some further info on what you are doing.
The workaround you have selected looks like it would work in the current implementation. However, it depends on the exact, unspecified order in which NUnit does things internally. It could easily stop working in a future release.
Related
NUnit has a OneTimeSetUp attribute. I am trying to think of scenarios in which I could use it. However, I cannot find any GitHub examples online (even my link does not have an example).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use Product here
    }

    [Test]
    public void Test2()
    {
        // Use Product here
    }

    [Test]
    public void Test3()
    {
        // Use Product here
    }
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the Test methods?
The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect, as it can be modified and accidentally carried over between tests, which could lead to unstable tests. I would create a new Product for each test, at least if it is the system under test.
Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all your tests and all your tests will use the same product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly - it likely is not since it's based on a mock.
OneTimeSetUp is best used only for extremely costly steps like creating a database. In fact, it tends to have very little use in purely unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
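As a rough sketch of that suggestion, the fixture from the question could build a fresh Product for every test by moving the AutoFixture code into SetUp (same types as in the question):

[TestFixture]
public class MyFixture
{
    IProduct Product;

    [SetUp]
    public void SetUp()
    {
        // A new, independently created Product for every test,
        // so no test can observe state left behind by another.
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use Product here
    }
}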
Yes, you use OneTimeSetUp (and OneTimeTearDown) to do all the setup that will be shared among tests in that fixture. It runs once for the fixture, so regardless of how many tests the fixture contains, the code executes only once. Typically you initialize instances held in private fields etc. for the other tests to use, so that you don't end up with lots of duplicate setup code.
We use C# and NUnit 3.6.1 in our project and we are considering parallel test execution to reduce the duration. As far as I know, it is possible to parallelize the execution of test fixtures by applying the Parallelizable attribute to classes in an assembly. We want to go further and parallelize all tests marked with Test or TestCase inside a test class.
A typical test class looks like:
[TestFixture]
public class MyTestClass
{
    private MySUT _sut;
    private Mock<IDataBase> _mockedDB;

    [SetUp]
    public void SetUpSUT()
    {
        _mockedDB = new Mock<IDataBase>();
        _sut = new MySUT(_mockedDB.Object);
    }

    [TearDown]
    public void TearDownSUT()
    {
        _mockedDB = null;
        _sut = null;
    }

    [Test]
    public void MyFirstTC()
    {
        // Here do some testing stuff with _sut
    }

    [Test]
    public void MySecondTC()
    {
        // Here do some testing stuff with _sut again
    }
}
As you can see, we have some fields that we use in every test. To get better isolation, we are thinking of executing every single test in its own process. I have no idea how to do that. Or do we have to change something else to make parallel execution possible?
Thanks in advance!
Yes, you can run individual tests in separate threads (not processes), starting with NUnit 3.7.
Use the ParallelScope parameter of the Parallelizable attribute. From the NUnit documentation:
ParallelScope Enumeration
This is a [Flags] enumeration used to specify which tests may run in parallel. It applies to the test upon which it appears and any subordinate tests. It is defined as follows:
[Flags]
public enum ParallelScope
{
    None,     // the test may not be run in parallel with other tests
    Self,     // the test itself may be run in parallel with other tests
    Children, // child tests may be run in parallel with one another
    Fixtures  // fixtures may be run in parallel with one another
}
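For example (assuming NUnit 3.7 or later), marking a fixture so that its child tests may run in parallel with one another would look roughly like this; the fixture and test names are only placeholders:

using NUnit.Framework;

// Optionally, [assembly: LevelOfParallelism(4)] at assembly level controls the number of worker threads.

[TestFixture]
[Parallelizable(ParallelScope.Children)] // child tests of this fixture may run in parallel with one another
public class MyParallelTests
{
    [Test]
    public void MyFirstTC()
    {
        Assert.Pass();
    }

    [Test]
    public void MySecondTC()
    {
        Assert.Pass();
    }
}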
NUnit has no builtin way to run individual tests in separate processes. It allows you to run entire assemblies in separate processes or individual tests within an assembly on separate threads, as you have discovered already.
The question has come up in the past as a possible extension to NUnit, but has never been acted on up to now because there doesn't seem to be a very wide demand for it.
Not a real positive answer, I'm afraid. :-(
In VSTS, you can check a box to isolate tests. You can also run in parallel or in series. In Visual Studio, I don't think it's obvious/possible to do this. If it's possible in VSTS, there is probably a command line you can use to pass the appropriate switches.
With TestNG we have the "dependsOnMethods" feature, which checks whether another test case has passed before executing the current one; if the dependency failed, the current test is skipped unless you add the alwaysRun flag, as shown below:
@Test(dependsOnMethods = { "testMethod2" }, alwaysRun = true)
public void testMethod1() {
    System.out.println("testMethod1");
}

@Test
public void testMethod2() {
    System.out.println("testMethod2");
    int result = 3;
    Assert.assertEquals(result, 2);
}
Is there any way to have the same behaviour using NUnit?
Using NUnit's own facilities, there is no way to do this. There has been a lot of discussion around adding this sort of dependency, but it does not yet exist. Maybe TestNG is a good model for a future Attribute.
Currently, all you can do is order tests in NUnit. So, if you gave testMethod2 the attribute [Order(1)] it would run before any other tests in the fixture. This has some limitations:
Ordering has to do with starting tests, not waiting for them to finish. In a parallel environment, both tests could still run together. So, to use this workaround, you should not run the tests in the fixture in parallel. Fixtures can still run in parallel against one another, of course.
There is no provision that testMethod2 must pass in order for testMethod1 to be run. You could handle this yourself by setting an instance field in testMethod2 and testing it in testMethod1. I would probably test it using Assume.That so that test method 1 is not displayed as a warning or error in the case that test method 2 failed.
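A rough sketch of that [Order] plus Assume.That workaround might look like the following; the field and method names are illustrative, and it relies on NUnit reusing a single fixture instance for all tests in the fixture:

using NUnit.Framework;

[TestFixture]
[NonParallelizable] // ordering only controls start order, so don't run these tests in parallel
public class DependentTests
{
    private bool _method2Passed;

    [Test, Order(1)]
    public void TestMethod2()
    {
        Assert.That(3, Is.EqualTo(3));
        _method2Passed = true; // only reached if the assertion above passes
    }

    [Test, Order(2)]
    public void TestMethod1()
    {
        // Skip (rather than fail) this test when its dependency did not pass.
        Assume.That(_method2Passed, "TestMethod2 did not pass");
        Assert.Pass();
    }
}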
I am using TDD for the first time in my life today. I am using NUnit.
I have one method into which I can pass multiple different inputs and check whether the result is correct.
I read that multiple asserts in one test are not a problem, and I really don't want to write a new test for each input.
Example with multiple asserts:
[TestFixture]
public class TestClass
{
    public Program test;

    [SetUp]
    public void Init()
    {
        test = new Program();
    }

    [Test]
    public void Parse_SimpleValues_Calculated()
    {
        Assert.AreEqual(25, test.ParseCalculationString("5*5"));
        Assert.AreEqual(125, test.ParseCalculationString("5*5*5"));
        Assert.AreEqual(10, test.ParseCalculationString("5+5"));
        Assert.AreEqual(15, test.ParseCalculationString("5+5+5"));
        Assert.AreEqual(50, test.ParseCalculationString("5*5+5*5"));
        Assert.AreEqual(3, test.ParseCalculationString("5-1*2"));
        Assert.AreEqual(7, test.ParseCalculationString("7+1-1"));
    }
}
But when something fails, it is very hard to see which assert failed; if you have a lot of them, you have to go through them all to find the right one.
Is there an elegant way to show which input was used when an assert fails, instead of only the actual and expected results?
Thank you.
If you have a lot of them, you have to go through them all to find the right one.
No, you don't - you just look at the stack trace. If you're running the tests within an IDE, I find that that's easily good enough to work out which line failed.
That said, there is a (significantly) better way - parameterized tests with TestCaseAttribute. So for example:
[Test]
[TestCase("5*5", 25)]
[TestCase("5*5*5", 125)]
[TestCase("5+5", 10)]
// etc
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}
Now your unit test runner will show you each test case separately, and you'll be able to see which one fails. Additionally, it will run all of the tests, even if an early one fails - so you don't end up fixing one only to find that the next one failed unexpectedly.
There's also TestCaseSourceAttribute for cases where you want to specify a collection of inputs separately - e.g. to use across multiple tests.
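For instance, a sketch of TestCaseSource applied to the question's ParseCalculationString (the CalculationCases member name is arbitrary, and Program is the class from the question) could look like this:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class TestClass
{
    // Shared inputs and expected results, reusable across multiple tests.
    public static IEnumerable<TestCaseData> CalculationCases
    {
        get
        {
            yield return new TestCaseData("5*5", 25);
            yield return new TestCaseData("5+5", 10);
            yield return new TestCaseData("5-1*2", 3);
        }
    }

    [Test, TestCaseSource(nameof(CalculationCases))]
    public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
    {
        var test = new Program();
        Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
    }
}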
I have a unit test called TestMakeAValidCall(). It tests my phone app making a valid call.
I am about to write another test called TestShowCallMessage() that needs to have a valid call made for the test. Is it bad form to just call TestMakeAValidCall() in that test?
For reference this is my TestMakeAValidCall() test.
[TestMethod]
public void TestMakeAValidCall()
{
    // Arrange
    phone.InCall = false;
    phone.CurrentNumber = "";

    // Stub the call to the database
    data.Expect(x => x.GetWhiteListData())
        .Return(FillTestObjects.GetSingleEntryWhiteList());

    // Get some bogus data
    string phoneNumber = FillTestObjects.GetSingleEntryWhiteList()
        .First().PhoneNumber;

    // Stub the call to MakeCall() so that it looks as if a call was made.
    phone.Expect(x => x.MakeCall(phoneNumber))
        .WhenCalled(invocation =>
        {
            phone.CurrentNumber = phoneNumber;
            phone.InCall = true;
        });

    // Act
    // Select the phone number
    deviceControlForm.SelectedNumber = phoneNumber;
    // Press the call button to make a call.
    deviceMediator.CallButtonPressed();

    // Assert
    Assert.IsTrue(phone.InCall);
    Assert.IsTrue(phone.CurrentNumber == phoneNumber);
}
Refactor the setup to another method and call that method from both tests. Tests should not call other tests.
IMHO, you should do one of the following:
Create a method that sets up a valid call, and call it separately from both tests (not one test calling the other) - see the sketch below
Mock the valid call for the ShowCallMessageTest
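As a rough sketch of the first option, reusing the fields and helpers from the question (the MakeValidCall name is just illustrative):

// Hypothetical helper extracted from TestMakeAValidCall(); both tests call it
// instead of one test calling the other.
private string MakeValidCall()
{
    phone.InCall = false;
    phone.CurrentNumber = "";
    data.Expect(x => x.GetWhiteListData())
        .Return(FillTestObjects.GetSingleEntryWhiteList());
    string phoneNumber = FillTestObjects.GetSingleEntryWhiteList()
        .First().PhoneNumber;
    phone.Expect(x => x.MakeCall(phoneNumber))
        .WhenCalled(invocation =>
        {
            phone.CurrentNumber = phoneNumber;
            phone.InCall = true;
        });

    deviceControlForm.SelectedNumber = phoneNumber;
    deviceMediator.CallButtonPressed();
    return phoneNumber;
}

[TestMethod]
public void TestMakeAValidCall()
{
    string phoneNumber = MakeValidCall();

    Assert.IsTrue(phone.InCall);
    Assert.IsTrue(phone.CurrentNumber == phoneNumber);
}

[TestMethod]
public void TestShowCallMessage()
{
    MakeValidCall();

    // Assert whatever TestShowCallMessage() needs to check about the call message here.
}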
To offer a counter point:
I strongly believe that well designed unit test should depend on one another!
Of course, that makes sense only if the testing framework is aware of these dependencies, so that it can stop running dependent tests when a dependency fails. Even better, such a framework can pass the fixture from test to test, so that tests can build upon a growing, extending fixture instead of rebuilding it from scratch for each single test. Of course, caching is used to ensure no side effects are introduced when more than one test depends on the same example.
We implemented this idea in the JExample extension for JUnit. There is no C# port yet, though there are ports for Ruby and Smalltalk and ... the most recent release of PHPUnit picked up both our ideas: dependencies and fixture reuse.
PS: folks are also using it for Groovy.
I think it's a bad idea. You want your unit test to test one thing and one thing only. Instead of creating a call through your other test, mock out a call and pass it in as an argument.
A unit test should test one unit/function of your code by definition. Having it call other unit tests makes it test more than one unit. I break it up into individual tests.
Yes - unit tests should be separate and should aim to test only one thing (or at least a small number of closely-related things). As an aside, the calls to data.Expect and phone.Expect in your test method are creating expectations rather than stub calls, which can make your tests brittle if you refactor...
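If those are Rhino Mocks objects (an assumption based on the Expect/Return/WhenCalled syntax in the question), the database setup could use a stub rather than an expectation, along these lines:

// Assumed Rhino Mocks syntax: a stub provides a canned return value
// without creating an expectation that must later be verified.
data.Stub(x => x.GetWhiteListData())
    .Return(FillTestObjects.GetSingleEntryWhiteList());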
A unit vs. a module... we also think tests should depend on reusable methods, and should test at an API level, exercising the integration of classes. Many tests exercise just a single class, but many bugs live in the integration between classes. We also use verifydesign to guarantee the API does not depend on the implementation. This allows you to refactor the whole component/module without touching a test (and we went through that once and it worked great). Of course, architectural changes force you to refactor the tests, but at least design changes inside the module don't cause test rework (unless you implicitly change the behavior of the API, such as firing more events than you used to, but that "would" be an API change anyway).
"Could someone ellaborate on how the refactoring would look like in this case? – Philip Bergström Nov 28 '15 at 15:33"
I am currently doing something like this, and this is what I came up with.
Notice that ProcessorType and BuildProcessors both call TestLevels; the actual content beyond that fact is unimportant. It uses xUnit and the Shouldly NuGet package.
private static void TestLevels(ArgProcessor incomingProcessor)
{
    Action<ProcessorLevel, int> currentLevelIteration = null;
    currentLevelIteration = (currentProcessor, currentLevel) =>
    {
        currentProcessor.CurrentLevel.ShouldBeEquivalentTo(currentLevel);
        ProcessorLevel nextProcessor = currentProcessor.CurrentProcessor;
        if (nextProcessor != null)
            currentLevelIteration(nextProcessor, currentLevel + 1);
    };
    currentLevelIteration(incomingProcessor, 0);
}

[Theory]
[InlineData(typeof(Build), "Build")]
public void ProcessorType(Type ProcessorType, params string[] args)
{
    ArgProcessor newCLI = new OriWeb_CLI.ArgProcessor(args);
    IncomingArgumentsTests.TestLevels(newCLI);
    newCLI.CurrentProcessor.ShouldBeOfType(ProcessorType);
}

[Theory]
[InlineData(typeof(Build.TypeScript), "TypeScript")]
[InlineData(typeof(Build.CSharp), "CSharp")]
public void BuildProcessors(Type ProcessorType, params string[] args)
{
    List<string> newArgs = new List<string> { "Build" };
    foreach (string arg in args) newArgs.Add(arg);
    ArgProcessor newCLI = new OriWeb_CLI.ArgProcessor(newArgs.ToArray());
    IncomingArgumentsTests.TestLevels(newCLI);
    newCLI.CurrentProcessor.CurrentProcessor.ShouldBeOfType(ProcessorType);
}