With TestNG we have the dependsOnMethods feature, which checks whether another test has passed before executing the current test; if the prerequisite fails, the dependent test is not executed unless you add the alwaysRun flag, as shown below:
@Test(dependsOnMethods = { "testMethod2" }, alwaysRun = true)
public void testMethod1() {
    System.out.println("testMethod1");
}

@Test
public void testMethod2() {
    System.out.println("testMethod2");
    int result = 3;
    Assert.assertEquals(result, 2);
}
Is there any way to have the same behaviour using NUnit?
Using NUnit's own facilities, there is no way to do this. There has been a lot of discussion around adding this sort of dependency, but it does not yet exist. Maybe TestNG is a good model for a future attribute.
Currently, all you can do is order tests in NUnit. So, if you gave testMethod2 the attribute [Order(1)], it would run before any other tests in the fixture. This has some limitations:
Ordering has to do with starting tests, not waiting for them to finish. In a parallel environment, both tests could still run together. So, to use this workaround, you should not run the tests in the fixture in parallel. Fixtures can still run in parallel against one another, of course.
There is no provision that testMethod2 must pass in order for testMethod1 to be run. You could handle this yourself by setting an instance field in testMethod2 and testing it in testMethod1. I would probably test it using Assume.That, so that testMethod1 shows up as inconclusive rather than as a warning or error when testMethod2 has failed, as sketched below.
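For illustration, here is a minimal sketch of that workaround; the flag field, the fixture name, and the NonParallelizable attribute (NUnit 3.7+) are additions for the example, not part of the original question:

using NUnit.Framework;

[TestFixture]
[NonParallelizable] // ordering only controls start order, so keep this fixture serial
public class DependentTests
{
    private bool _testMethod2Passed;

    [Test, Order(1)]
    public void TestMethod2()
    {
        int result = 3;
        Assert.AreEqual(2, result);
        _testMethod2Passed = true; // reached only if the assert above passes
    }

    [Test, Order(2)]
    public void TestMethod1()
    {
        // Marks this test inconclusive (rather than failed) when the prerequisite did not pass
        Assume.That(_testMethod2Passed, "TestMethod2 did not pass");
        // ... actual test body ...
    }
}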
Related
I wrote [Test, OneTimeSetUp] above a test case; does that run the OneTimeSetUp method I defined again?
We have a heavy setup process that requires us to use [OneTimeSetUp] with NUnit. There was a problem where some of the data would be changed, causing a test case to fail when run in the test fixture but not as an individual test. So above the test I wrote [Test, OneTimeSetUp], and that solved the problem.
[OneTimeSetUp]
public void Initialize()
{
    // Setup Code
}

[Test]
public void TestName1()
{
    ...
}

[Test, OneTimeSetUp]
public void TestName2()
{
    // Test Code
}
I think your "fix" changed the behavior so that no failure is seen, but you didn't actually fix the dependency that is messing up your tests.
Here's what happens with the attributes you use.
Because you tricked NUnit into thinking that TestName2 is a one-time setup method, it runs first, before any other tests.
Then TestName1 runs, as the first test found. Although the order is not guaranteed, that's probably the order that's being used, based on your report.
Then TestName2 runs again, this time as a test, because it is marked as a test as well.
However, what you are doing should really be an error in NUnit. A method is either a Test method, SetUp method, TearDown method, etc. It's not intended to be more than one of those things. It would be unsurprising if the next release of NUnit detected what you are doing and gave an error.
A better solution is to analyze your tests to find the ordering dependency and remove it.
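As a rough sketch of that direction (the method names here are hypothetical), keep genuinely one-time work in OneTimeSetUp and reset whatever the tests mutate in a per-test SetUp:

using NUnit.Framework;

[TestFixture]
public class MyTests
{
    [OneTimeSetUp]
    public void HeavyInitialize()
    {
        // Keep only the genuinely one-time, expensive work here
    }

    [SetUp]
    public void ResetData()
    {
        // Re-create or reset whatever data the tests mutate, so that no
        // test depends on which other tests ran before it
    }

    [Test]
    public void TestName1() { /* ... */ }

    [Test]
    public void TestName2() { /* ... */ }
}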
I am trying to execute some code before all tests, or even more specifically, before any test case sources are evaluated.
The solution that is supposed to work does not execute the code early enough (the breakpoint is hit only after the test case sources have been evaluated):
[SetUpFixture]
public class GlobalSetup
{
    [OneTimeSetUp]
    public void Setup()
    {
        // Unfortunately does stuff after TestCaseSources
    }
}
I managed to achieve this by creating a fake test fixture with static constructor like this:
[TestFixture]
public class GlobalSetup
{
    // Step 1
    static GlobalSetup()
    {
        // Does stuff before every other TestCaseSource
    }

    // Step 3
    [Test, TestCaseSource(nameof(GlobalSetup.TestInitializerTestCaseData))]
    public void Setup() { }

    // Step 2
    public static IEnumerable TestInitializerTestCaseData => new[] { new TestCaseData() };
}
What is the correct way to not use this workaround?
As the comments in the issue Pablo notPicasso cited indicate, you are trying to use NUnit in a way it isn't designed to be used. A global setup is possible, of course, but it runs before all other test execution. Test case sources are evaluated as part of test discovery, which has to be complete before any test execution can begin.
It's usually possible to use a global setup by reducing the amount of work done in your TestCaseSource. This is desirable anyway - TestCaseSource should always be as lightweight as possible.
Your example doesn't indicate exactly what you are trying to do before the tests are loaded. If you add more info, I can refine this answer. Meanwhile, here are a couple of examples...
If your TestCaseSource is creating instances of the objects you are testing, don't do that. Instead, have it supply parameters like strings and ints, and create the objects in the SetUp or OneTimeSetUp for your tests.
If your TestCaseSource is initializing a database, don't do that. Instead, save the parameters that are needed to initialize the database and do it in a OneTimeSetUp at some level in the test hierarchy.
Hopefully, you get the idea... or provide some further info on what you are doing.
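For example, here is a self-contained sketch of the first case (the fixture, names, and lookup data are invented for the example): the source yields only plain values at discovery time, and the heavier construction happens at execution time:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class LookupTests
{
    // Lightweight source: this runs at discovery time, so it only lists plain values
    public static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData("alpha", 5);
        yield return new TestCaseData("beta", 4);
    }

    private Dictionary<string, int> _lengths;

    [OneTimeSetUp]
    public void BuildLookup()
    {
        // The heavy work (database, file I/O, ...) belongs here, at execution time
        _lengths = new Dictionary<string, int> { ["alpha"] = 5, ["beta"] = 4 };
    }

    [TestCaseSource(nameof(Cases))]
    public void Length_IsAsExpected(string key, int expected)
    {
        Assert.AreEqual(expected, _lengths[key]);
    }
}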
The workaround you have selected looks like it would work in the current implementation. However, it depends on the exact, unspecified order in which NUnit does things internally. It could easily stop working in a future release.
NUnit has a OneTimeSetUp attribute. I am trying to think of some scenarios where I could use it. However, I cannot find any examples on GitHub (even my link does not have one).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use Product here
    }

    [Test]
    public void Test2()
    {
        // Use Product here
    }

    [Test]
    public void Test3()
    {
        // Use Product here
    }
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the Test methods?
The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect, as it can be modified by one test and accidentally reused by another, which could lead to unstable tests. I would get a new Product for each test run, at least if it is the system under test.
Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all your tests, and all your tests will use the same product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly; it likely is not, since it's based on a mock.
OneTimeSetUp is best used only for extremely costly steps like creating a database. In fact, it tends to have very little use in purely unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
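Here is a minimal sketch of that alternative, reusing the question's AutoFixture/Moq setup but with SetUp so each test gets its own Product (the using directives assume a recent AutoFixture package; IProduct and Product come from the question):

using AutoFixture;
using AutoFixture.AutoMoq;
using NUnit.Framework;

[TestFixture]
public class MyFixture
{
    IProduct Product;

    [SetUp]
    public void SetUp()
    {
        // Runs before each test, so every test gets its own Product
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Changes made to Product here cannot leak into Test2 or Test3
    }
}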
Yes, you use OneTimeSetUp (and OneTimeTearDown) to do all the setup that will be shared among tests in that fixture. It is run once for the fixture, so regardless of how many tests are in the fixture, the code will execute once. Typically you will initialize instances held in private fields etc. for the other tests to use, so that you don't end up with lots of duplicated setup code.
We use C# and NUnit 3.6.1 in our project and are considering parallel test execution to reduce the duration. As far as I know, it is possible to parallelize the execution of TestFixtures with the Parallelizable attribute on the classes in an assembly. We want to go further and parallelize all tests marked with Test or TestCase inside a test class.
A typical test class looks like:
[TestFixture]
public class MyTestClass
{
    private MySUT _sut;
    private Mock<IDataBase> _mockedDB;

    [SetUp]
    public void SetUpSUT()
    {
        _mockedDB = new Mock<IDataBase>();
        _sut = new MySUT(_mockedDB.Object);
    }

    [TearDown]
    public void TearDownSUT()
    {
        _mockedDB = null;
        _sut = null;
    }

    [Test]
    public void MyFirstTC()
    {
        // Here do some testing stuff with _sut
    }

    [Test]
    public void MySecondTC()
    {
        // Here do some testing stuff with _sut again
    }
}
As you can see, we have some fields that we use in every test. For better isolation, we are thinking of executing every single test in its own process. I have no idea how to do that; or do we have to change something else to make parallel execution possible?
Thanks in advance!
Yes, you can run individual tests in separate threads (not processes), starting with NUnit 3.7.
Use the ParallelScope parameter of the Parallelizable attribute. From the NUnit documentation:
ParallelScope Enumeration
This is a [Flags] enumeration used to specify which tests may run in parallel. It applies to the test upon which it appears and any subordinate tests. It is defined as follows:
[Flags]
public enum ParallelScope
{
    None,     // the test may not be run in parallel with other tests
    Self,     // the test itself may be run in parallel with other tests
    Children, // child tests may be run in parallel with one another
    Fixtures  // fixtures may be run in parallel with one another
}
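Applied to a fixture like the one in the question, a sketch might look like this (NUnit 3.7 or later):

using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.Children)] // the [Test] methods in this fixture may run in parallel
public class MyTestClass
{
    // NUnit creates one instance of this fixture for all of its tests, so
    // fields assigned in [SetUp] (like _sut in the question) become shared
    // state across the worker threads; build the SUT locally in each test,
    // or keep such fields thread-safe.
}

The number of worker threads can be adjusted with the assembly-level LevelOfParallelism attribute.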
NUnit has no built-in way to run individual tests in separate processes. It allows you to run entire assemblies in separate processes, or individual tests within an assembly on separate threads, as you have discovered already.
The question has come up in the past as a possible extension to NUnit, but has never been acted on up to now because there doesn't seem to be a very wide demand for it.
Not a real positive answer, I'm afraid. :-(
In VSTS, you can check a box to isolate tests. You can also run in parallel or in series. In Visual Studio, I don't think it's obvious/possible to do this. If it's possible in VSTS, there is probably a command line you can use to pass the appropriate switches.
Using TDD for the first time in my life today. I am using NUnit.
I have one method where I can insert multiple different inputs and check that the result is correct.
I read that multiple asserts in one test are not a problem, and I really don't want to write a new test for each input.
Example with multiple asserts:
[TestFixture]
public class TestClass
{
    public Program test;

    [SetUp]
    public void Init()
    {
        test = new Program();
    }

    [Test]
    public void Parse_SimpleValues_Calculated()
    {
        Assert.AreEqual(25, test.ParseCalculationString("5*5"));
        Assert.AreEqual(125, test.ParseCalculationString("5*5*5"));
        Assert.AreEqual(10, test.ParseCalculationString("5+5"));
        Assert.AreEqual(15, test.ParseCalculationString("5+5+5"));
        Assert.AreEqual(50, test.ParseCalculationString("5*5+5*5"));
        Assert.AreEqual(3, test.ParseCalculationString("5-1*2"));
        Assert.AreEqual(7, test.ParseCalculationString("7+1-1"));
    }
}
But when something fails, it is very hard to see which assert failed; if you have a lot of them, you have to go through them all to find the right one.
Is there any elegant way to show what input did we set if assert fails, instead of result and expected result?
Thank you.
I mean if you have a lot of them, you have to go through them all.
No, you don't - you just look at the stack trace. If you're running the tests within an IDE, I find that that's easily good enough to work out which line failed.
That said, there is a (significantly) better way - parameterized tests with TestCaseAttribute. So for example:
[Test]
[TestCase("5*5", 25)]
[TestCase("5*5*5", 125)]
[TestCase("5+5", 10)]
// etc
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}
Now your unit test runner will show you each test case separately, and you'll be able to see which one fails. Additionally, it will run all of the tests, even if an early one fails - so you don't end up fixing one only to find that the next one failed unexpectedly.
There's also TestCaseSourceAttribute for cases where you want to specify a collection of inputs separately - e.g. to use across multiple tests.
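For completeness, a short sketch of that variant, reusing the Program and ParseCalculationString names from the question:

using NUnit.Framework;

[TestFixture]
public class TestClass
{
    // The shared collection of cases; each inner array holds one test's arguments
    private static readonly object[] SimpleCases =
    {
        new object[] { "5*5", 25 },
        new object[] { "5+5", 10 },
        new object[] { "5-1*2", 3 },
    };

    [TestCaseSource(nameof(SimpleCases))]
    public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
    {
        Assert.AreEqual(expectedOutput, new Program().ParseCalculationString(input));
    }
}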