I have a question regarding test parallelization with parameterized fixtures and methods, using NUnit 3.8.0.0, .NET 4.0 and C#.
I am currently developing a test suite that runs tests using each resource in a set of resources. Since initializing these resources is quite time consuming, and the resources do not preserve state, I want to share them between my test cases in order to improve performance. These resources cannot be used simultaneously by multiple tests.
In order to do that, I made a base fixture with a constructor taking one argument, which makes the fixture point to the right resource. The actual test fixtures subclass from this fixture.
I have a test fixture that uses both constructor arguments (passed to a base class) and test case arguments. Based on the constructor parameter, the base class initializes some resources. These resources can't be used at the same time.
Now I am trying to parallelize these test cases in such a way that tests in the different fixture instances generated from the class, i.e. MyParameterizedTestFixture(V1) and MyParameterizedTestFixture(V2), can run simultaneously, but tests within the same fixture cannot (since a resource cannot be used simultaneously by multiple tests). Additionally, different fixture classes may not run in parallel.
My code looks like this (with details abstracted away):
[TestFixture]
[TestFixtureSource(typeof(TestData), "FixtureParams")]
public class MyParameterizedTestFixture : BaseFixture
{
    public MyParameterizedTestFixture(Resource resource) : base(resource) { }

    [Test]
    public void Test1() { /* run test using Resource */ }

    [TestCaseSource("Params")]
    public void TestParameterized(object param) { /* run test using Resource and param */ }
}
I tried adding [Parallelizable(ParallelScope.Self)] to my derived fixtures, but that results in tests from different fixtures that use the same resource (i.e. have the same fixture parameter) being run simultaneously (it works when only one fixture is selected).
Using [Parallelizable(ParallelScope.Fixture)] is definitely not correct, because it will cause different fixtures to run together. Using [Parallelizable(ParallelScope.Children)] is also not correct, because it will cause different test cases within a fixture to run together, rather than the different fixture instances generated from the same class.
Now I was wondering whether something like this could be achieved by using a 'layered' approach (marking fixtures as parallelizable in one way and methods in another), or whether it is possible to define custom parallel scopes?
NUnit allows you to run any fixture in parallel or not. When a fixture is run in parallel, it may run at the same time as any other parallel fixture. Some way of grouping fixtures that can be run together but not with other fixtures would be a nice feature, but it's not one we have right now.
When you have multiple instances of the same fixture - i.e. generic or parameterized fixtures - NUnit treats those instances exactly the same way as it treats any fixture. That is, if the fixture is parallelizable, the instances may run at the same time as any of the other instances as well as instances of different fixtures.
Similarly, you can run test cases (methods) in parallel or not. If a parallel case is contained in a non-parallel fixture, then it only runs in parallel with other methods of that fixture. If it is contained in a parallel fixture, then it can run at the same time as any other parallel method.
In other words, with the features we presently have, parallelism is basically all or nothing for each test, whether it's a suite, a fixture or a test case. This may change with enhancements in a future release.
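To make those scopes concrete, here is a minimal sketch of where the attributes go and what the behaviour described above implies for each placement (the fixture and test names are invented purely for illustration):

[TestFixture]                                    // no Parallelizable attribute: the fixture itself is non-parallel
public class QueueProcessingTests
{
    [Test, Parallelizable(ParallelScope.Self)]
    public void CaseA() { /* may run alongside CaseB, but not alongside tests from other fixtures */ }

    [Test, Parallelizable(ParallelScope.Self)]
    public void CaseB() { }
}

[TestFixture]
[Parallelizable(ParallelScope.Self)]             // this fixture (and each of its instances, if parameterized) may run alongside any other parallel fixture
public class ReportGenerationTests
{
    [Test]
    public void GeneratesReport() { }
}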
Related
I have a Selenium project written with NUnit in C# .NET 6. I have a folder called 'Tests' with multiple sub-folders, and each folder has a lot of classes. Each class has only one Test method. The reason for this is to structure the project: each class represents one process in the software I'm testing. However, some processes need to be run after some other processes have already run.
My question is: is there any way to run the classes in a specific order I want? I have tried using dotnet test --filter, but this did not work. I also tried using NUnit's Order attribute, but this works only when a class has multiple test methods.
The Order attribute may be placed on a class or a method. From the NUnit docs:
The OrderAttribute may be placed on a test method or fixture to specify the order in which tests are run within the fixture or other suite in which they are contained.
The key phrases in that citation, for your purposes, are "or fixture" and "other suite": in your case, the "other suite" containing the fixture class is the namespace in which it is defined.
There is no "global" ordering facility, but if all your tests are in the same namespace, using the OrderAttribute on the fixtures will cause them to run in the order you specify. If it doesn't interfere with any other use of namespaces, you might consider putting them all in one namespace.
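For example, a minimal sketch, assuming both fixtures live in the same namespace (the class and namespace names here are invented):

using NUnit.Framework;

namespace MyProduct.Tests.Processes
{
    [TestFixture, Order(1)]
    public class CreateCustomerTests
    {
        [Test]
        public void CreatesCustomer() { /* ... */ }
    }

    [TestFixture, Order(2)]     // ordered after CreateCustomerTests within the namespace suite
    public class InvoiceCustomerTests
    {
        [Test]
        public void InvoicesCustomer() { /* ... */ }
    }
}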
A couple of notes:
The OrderAttribute specifies the order in which the tests start. If you run tests in parallel, multiple tests may run at the same time.
It's not advisable to have the tests depend on one another in most cases.
There are lots of reasons not to control the order of tests, which are covered in the answers quoted by other folks. I'm just answering the specific "how-to" question you posed.
TL;DR: As per title: How can I run the same tests but have different setups?
I'm working in C#, Visual Studio, deploying via Azure DevOps (nee VSTS) to a WebApp in Azure.
The software under development configures resources in Azure, and identities in Azure Active Directory.
This means I have tests like:
[Fact]
public void ShouldCreateAzureDevOpsProject()
{
    _azureDevOpsHelper.GetAzureDevOpsProject(_projectName).Should().NotBe(null);
}

[Fact]
public void ShouldPutDefaultFileInRepo()
{
    _azureDevOpsHelper.GetDefaultFileFromRepo(_projectName).Should().NotBe(null);
}

[Fact]
public void ShouldEnableAllMicrosoftResourceProviders()
{
    _azureSubscriptionHelper.GetMicrosoftResourceProviders().Select(x => x.RegistrationState).Should().NotContain("NotRegistered");
}
I want to run these tests against the code as I write it. The code runs on my laptop, so the setup (which I currently have in an xUnit Fixture) is:
new EngineOrchestrator.EngineOrchestrator().RequestInstance(userSuppliedConfiguration);
But those tests are equally applicable to being run in our deployment pipeline, to check for regression after deploying to our test environment.
For those purposes, the set up would involve creating a HTTP client, and hitting the endpoint of the application.
The point is that the tests are identical, regardless of the setup. The values-to-test-for are sourced from a json configuration file in both the 'local' and 'pipeline' cases; isolation is achieved by subbing in a different configuration file for the tests during deployment.
Put another way; I'm trying to work out how I can encapsulate the setup, so that two different setups can share the same tests. That's the reverse of what fixtures etc do, where multiple tests can share the same setup.
I have tried:
Switching the "setup" depending on what machine is running the test
if (Environment.MachineName.StartsWith("Plavixo"))
{
    new EngineOrchestrator.EngineOrchestrator().RequestInstance(userSuppliedConfiguration);
}
else
{
    HttpEngineHelper.RunOrchestrator(userSuppliedConfiguration, authenticationDetails);
}
This is my current solution, but it feels brittle, and makes the test artifact huge because it necessarily includes all the source to be able to new up the Engine, even when it's going to run on the build machine.
Making the tests abstract, and inheriting into concrete classes that have the specific setup in their constructors.
public class LocalBootstrap : BootstrapTests.BootstrapTests
{
    public LocalBootstrap() : base()
    {
        // do specific setup here
    }
}

public abstract class BootstrapTests
{
    [Fact]
    public void ShouldCreateAzureDevOpsProject() { /* ... */ }
}
This sort of worked, but the set up ran before each and every test, which makes sense: "xUnit.net creates a new instance of the test class for every test that is run, so any code which is placed into the constructor of the test class will be run for every single test."
Making the Fixture abstract
A fixture runs once and is shared between tests. I tried making the fixture abstract, and having a concrete class for each of my set ups.
xUnit throws a System.AggregateException: "Class fixture type may only define a single public constructor". This is referenced by a GitHub issue that has been closed as "By Design".
Running my local tests on a localhost
This is my next option to investigate. Is this a good idea?
Are there any other things I should try?
I ended up working out a solution that works for us.
The crux of the solution was making the tests abstract, and inheriting from that abstract class with a couple of concrete classes that captured the setup I wanted.
In simplified form, the structure we now have looks like the sketch below.
We're still using fixtures because we want the Arrange code and the Act code to run only once.
This arrangement works really well for us because we can run the same tests with multiple set ups (we have Bootstrap and Full), and with multiple invocation styles (run remotely vs run locally). We intend to add more set up for Greenfield and Brownfield (which is to do with whether our code has run before or not).
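In code, the shape is roughly the following; the fixture, helper, and property names here are illustrative stand-ins rather than our real ones:

using FluentAssertions;
using Xunit;

public interface ISetupFixture
{
    AzureDevOpsHelper AzureDevOpsHelper { get; }
    string ProjectName { get; }
}

// Created once per test class: run the engine in-process on the developer machine.
public class LocalSetupFixture : ISetupFixture
{
    public LocalSetupFixture()
    {
        // e.g. new EngineOrchestrator.EngineOrchestrator().RequestInstance(userSuppliedConfiguration);
    }
    public AzureDevOpsHelper AzureDevOpsHelper { get; } = new AzureDevOpsHelper();
    public string ProjectName => "local-project";
}

// Created once per test class: hit the deployed endpoint instead.
public class PipelineSetupFixture : ISetupFixture
{
    public PipelineSetupFixture()
    {
        // e.g. HttpEngineHelper.RunOrchestrator(userSuppliedConfiguration, authenticationDetails);
    }
    public AzureDevOpsHelper AzureDevOpsHelper { get; } = new AzureDevOpsHelper();
    public string ProjectName => "pipeline-project";
}

// The tests are written once, against the interface.
public abstract class BootstrapTests
{
    private readonly ISetupFixture _fixture;
    protected BootstrapTests(ISetupFixture fixture) => _fixture = fixture;

    [Fact]
    public void ShouldCreateAzureDevOpsProject() =>
        _fixture.AzureDevOpsHelper.GetAzureDevOpsProject(_fixture.ProjectName).Should().NotBeNull();
}

// Each concrete class only chooses a fixture; xUnit creates it once and injects it.
public class LocalBootstrapTests : BootstrapTests, IClassFixture<LocalSetupFixture>
{
    public LocalBootstrapTests(LocalSetupFixture fixture) : base(fixture) { }
}

public class PipelineBootstrapTests : BootstrapTests, IClassFixture<PipelineSetupFixture>
{
    public PipelineBootstrapTests(PipelineSetupFixture fixture) : base(fixture) { }
}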
I have a bunch of integration tests that need to run sequentially because they interact with a database. The order in which the tests are executed does not matter, as long as at most one test is running at a given moment. According to the documentation, this can be achieved by grouping the tests in a collection. However, not all my tests share the same fixture, so I need at least two collections:
[CollectionDefinition("Collection A")]
public class CollectionA : ICollectionFixture<CollectionAFixture>
{
}

[CollectionDefinition("Collection B")]
public class CollectionB : ICollectionFixture<CollectionBFixture>
{
}
With this setup, tests from Collection A are run at the same time as tests from Collection B, which results in race conditions.
My question: is there a way to specify that the tests in both collections should be run sequentially, as if they were all part of a single collection?
I am aware that it is possible to disable parallelism using the xUnit console runner. However, I would prefer solving this in the source code, so other developers don't need to tweak their configuration.
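For completeness, the runner-level switch mentioned above also has an in-source equivalent: an assembly-level attribute that turns off parallel test execution for the whole test assembly. It is broader than just serializing these two collections, but it does keep the setting in code rather than in each developer's runner configuration:

using Xunit;

[assembly: CollectionBehavior(DisableTestParallelization = true)]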
I am approaching database testing with NUnit. As it is time consuming, I don't want to run it every time.
So I tried creating a base class from which every other database testing class derives, thinking that if I decorate the base class with the [Ignore] attribute, the derived classes would be ignored as well, but that's not happening.
Is there any way to ignore a set of classes with minimal effort?
If you don't want to split integration and unit tests out into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which would give you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
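A sketch of what that can look like, with the category applied at the fixture level so every test in the class is covered (the class, category, and filter names here are only examples):

using NUnit.Framework;

[TestFixture]
[Category("Integration")]       // every test in this fixture carries the category
public class CustomerRepositoryTests
{
    [Test]
    public void SavesCustomer() { /* talks to the database */ }
}

Typical filters then look something like the following, depending on the runner:

nunit3-console MyTests.dll --where "cat != Integration"
dotnet test --filter TestCategory!=Integration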
A recommended approach is separating your unit tests that can run in isolation from your integration tests into different projects, so you can choose which project to execute when you run your tests. This will make it easier to run your faster tests more often, multiple times daily or even hourly (and hopefully without ever having to worry about such things as configuration), while letting your slower-running integration tests run on a different schedule.
I am looking at moving from NUnit to MbUnit for my unit testing framework, as it has a couple of features that I like, one of them being the Parallelizable attribute. If I mark tests with this attribute, what happens:
i, are all instance variables available only to their own thread, or are they shared?
ii, how many tests will execute at once? Is it dependent on the number of processors/cores?
The reason for asking the first question is that, as a test, I have simply swapped the NUnit framework for the MbUnit framework, and in a particular test class sets of tests tend to fail when run in parallel and pass when run sequentially. These tests use variables declared at the class level, which are then set up in [SetUp].
The tests run on a single instance of your test fixture class, so the instance fields will be shared.
By default, the degree of parallelism equals the number of CPUs you have, or 2 at a minimum.
You can use the DegreeOfParallelism attribute at the assembly level to override this.
See this blog post for details and some examples showing you how to use the various attributes.
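For reference, a minimal sketch of that assembly-level override; the attribute name comes from the answer above, but the namespace is an assumption on my part:

using MbUnit.Framework;              // namespace assumed for MbUnit v3

[assembly: DegreeOfParallelism(4)]   // allow at most 4 tests to run concurrently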