Sequentially run tests in different collections - C#

I have a bunch of integration tests that need to run sequentially because they interact with a database. The order in which the tests are executed does not matter, as long as at most one test is running at a given moment. According to the documentation, this can be achieved by grouping the tests in a collection. However, not all my tests share the same fixture, so I need at least two collections:
[CollectionDefinition("Collection A")]
public class CollectionA : ICollectionFixture<CollectionAFixture>
{
}

[CollectionDefinition("Collection B")]
public class CollectionB : ICollectionFixture<CollectionBFixture>
{
}
With this setup, tests from Collection A are run at the same time as tests from Collection B, which results in race conditions.
My question: is there a way to specify that the tests in both collections should be run sequentially, as if they were all part of a single collection?
I am aware that it is possible to disable parallelism using the xUnit console runner. However, I would prefer solving this in the source code, so other developers don't need to tweak their configuration.
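If disabling parallelism for the whole assembly is acceptable, one in-source option (a minimal sketch, assuming xUnit.net 2.x) is the assembly-level CollectionBehavior attribute, so other developers need no runner configuration:

using Xunit;

// Placed once anywhere in the test assembly (e.g. AssemblyInfo.cs):
// collections are then run one after another instead of in parallel.
[assembly: CollectionBehavior(DisableTestParallelization = true)]

The trade-off is that this turns off parallel execution for every collection in the assembly, not just the two database collections.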

Related

Combine unit tests with BenchmarkDotNet?

Is there a way to combine Unit Testing with BenchmarkDotNet?
The idea would be that I would like to be able to write my unit tests using the AAA-pattern.
However, when running it as a benchmark using BenchmarkDotNet, I would like to be able to exclude the arrange and assert parts from the code being benchmarked, since they have nothing to do with the execution of my actual code.
By being able to benchmark my "unit tests" this way, I would not need to write separate benchmark tests and unit tests, but could base them upon each other (no duplication of the arrange or act steps).
There is no way for BenchmarkDotNet to execute only selected parts of the compiled code; moreover, it's not recommended to treat benchmarks as unit tests.
Here is what I wrote about this matter in Microbenchmark Design Guidelines created for the .NET Team:
When writing Unit Tests, we ideally want to test all methods and properties of the given type. We also test both the happy and unhappy paths. The result of every Unit Test run is a single value: passed or failed.
Benchmarks are different. First of all, the result of a benchmark run is never a single value. It's a whole distribution, described with values like mean, standard deviation, min, max and so on. To get a meaningful distribution, the benchmark has to be executed many, many times. This takes a lot of time. With the current recommended settings used in dotnet/performance repository, it takes on average six seconds to run a single benchmark.
The public surface of the .NET Standard 2.0 API has tens of thousands of methods. If we had one benchmark for every public method, it would take two and a half days to run the benchmarks, not to mention the time it would take to analyze the results, filter the false positives, etc.
This is only one of the reasons why writing Benchmarks is different than writing Unit Tests.
The goal of benchmarking is to test the performance of all the methods that are frequently used (hot paths) and should be performant. The focus should be on the most common use cases, not edge cases.
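For reference, a minimal sketch (the class and method names are made up for illustration) of how a BenchmarkDotNet benchmark typically keeps the "arrange" step out of the measurement: [GlobalSetup] runs before the measured iterations and is not timed, only the [Benchmark] method is measured, and there is no assert because the result is a timing distribution rather than pass/fail.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SumBenchmarks
{
    private int[] _data;

    // "Arrange": executed once before the measured iterations, excluded from the timings.
    [GlobalSetup]
    public void Setup() => _data = new int[] { 5, 3, 1, 4, 2 };

    // "Act": the only code that is measured. Returning the result prevents dead-code elimination.
    [Benchmark]
    public int SumData()
    {
        int total = 0;
        foreach (var value in _data)
            total += value;
        return total;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SumBenchmarks>();
}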

Running Fixtures but not methods in parallel NUnit

I have got a question regarding test parallelization with parameterized fixtures and methods using NUnit 3.8.0.0, .NET 4.0 and C#.
I am currently developing a test suite that runs tests using each resource in a set of resources. Since initializing these resources is quite time consuming, and the resources do not preserve state, I want to share them between my test cases in order to improve performance. These resources cannot be used simultaneously by multiple tests.
To do that, I made a base fixture with a constructor taking one argument, which makes the fixture point to the right resource. The actual test fixtures subclass this fixture.
So I have a test fixture that uses both constructor arguments (passed to the base class) and test case arguments. Based on the parameter, the base class initializes some resources, and these resources can't be used at the same time.
Now I am trying to parallelize these test cases in such a way that the tests in the different fixtures generated by the class, MyParameterizedTestFixture(V1) and MyParameterizedTestFixture(V2), can run simultaneously, but tests in the same fixture cannot (since the resources cannot be used simultaneously). Additionally, different fixture classes may not run in parallel.
My code looks like this (with details abstracted away):
[TestFixture]
[TestFixtureSource(typeof(TestData), "FixtureParams")]
public class MyParameterizedTestFixture : BaseFixture
{
    public MyParameterizedTestFixture(Resource resource) : base(resource) { }

    [Test]
    public void Test1() { /* run test using Resource */ }

    [TestCaseSource("Params")]
    public void TestParameterized(object param) { /* run test using Resource and param */ }
}
I tried adding [Parallelizable(ParallelScope.Self)] to my derived fixtures, but that results in tests from different fixtures that use the same resource (i.e. have the same version as their fixture parameter) being run simultaneously (it works when only selecting one fixture).
Using [Parallelizable(ParallelScope.Fixture)] is definitely not correct, because it will cause different fixtures to run together. Using [Parallelizable(ParallelScope.Children)] is also not correct, because it will cause different test cases in a fixture to run together, and not the different fixtures from the same class.
Now I am wondering if something like what I want could be achieved by using a 'layered' approach (marking fixtures parallelizable one way, and methods another way), or is it possible to define custom parallel scopes?
NUnit allows you to run any fixture in parallel or not. When a fixture is run in parallel, it may run at the same time as any other parallel fixture. Some way of grouping fixtures that can be run together but not with other fixtures would be a nice feature, but it's not one we have right now.
When you have multiple instances of the same fixture - i.e. generic or parameterized fixtures - NUnit treats those instances exactly the same way as it treats any fixture. That is, if the fixture is parallelizable, the instances may run at the same time as any of the other instances as well as instances of different fixtures.
Similarly, you can run test cases (methods) in parallel or not. If a parallel case is contained in a non-parallel fixture, then it only runs in parallel with other methods of that fixture. If it is contained in a parallel fixture, then it can run at the same time as any other parallel method.
In other words, with the features we presently have, parallelism is basically all or nothing for each test, whether it's a suite, a fixture or a test case. This may change with enhancements in a future release.
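To illustrate the "all or nothing" model described above, here is a rough sketch (fixture and method names are made up) of how the NUnit 3 attributes compose:

using NUnit.Framework;

// Assembly-wide worker count (the number here is arbitrary).
[assembly: LevelOfParallelism(4)]

[TestFixture]
[Parallelizable(ParallelScope.Self)]   // this fixture may run alongside ANY other parallel fixture,
public class SomeParallelFixture      // including other instances produced by TestFixtureSource
{
    [Test]
    public void MayRunAlongsideOtherParallelFixtures() { }

    [Test, NonParallelizable]          // opts a single test out of parallel execution
    public void RunsInTheNonParallelQueue() { }
}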

NUnit Ordering Issue with 2 classes in a project

I am trying to run unit test cases through NUnit, where I have two test case classes:
1st class, which deals with the product console;
2nd class, which deals with other Windows processes and services.
Both are in the same project and are sequenced through NUnit ordering, which is a very important aspect, as all the test cases have to run sequentially.
When I try to run the test cases through the command line, only the test cases specific to one class run, irrespective of ordering.
So, for example :
NoConsoleclass.cs
[Test, Order(1)]
public void Test1()
{}
[Test, Order(3)]
public void Test3()
{}
ConsoleTestCases.cs
[Test, Order(2)]
public void Test2()
{}
nunit3-console projectname.dll runs Test1 and Test3 first and then Test2.
Is there any way I can achieve the order I want: Test1, Test2 and then Test3?
I know sequencing and prerequisites are not advisable, but they are required for this specific suite.
Kindly let me know
Thanks
Not currently, no.
There's an open feature request to allow the ordering of different TestFixtures - see the link below.
https://github.com/nunit/nunit/issues/345
This still wouldn't cover your case, however, of wanting to interleave tests from different classes in a specific order. (Although it would, if you were able to break this up into three classes.)
There's also an open feature request for a TestDependencyAttribute, which would allow you to specify one test as being dependent on the completion of others. This perhaps seems closer to what you want to represent than actual explicit ordering. Find that one at the link below.
https://github.com/nunit/nunit/issues/51
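For completeness, a workaround that works with NUnit today (a sketch, not taken from the answer above) is to keep the tests that must interleave inside a single fixture, since [Order] is only applied relative to other tests in the same containing class:

using NUnit.Framework;

[TestFixture]
public class OrderedSuite
{
    [Test, Order(1)]
    public void Test1() { /* former NoConsoleclass step */ }

    [Test, Order(2)]
    public void Test2() { /* former ConsoleTestCases step */ }

    [Test, Order(3)]
    public void Test3() { /* former NoConsoleclass step */ }
}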

NUnit base class

I am approaching database testing with NUnit. As it's time consuming, I don't want to run it every time.
So I tried creating a base class from which every other database testing class derives, thinking that if I decorate the base class with the [Ignore] attribute, the derived classes would be ignored as well, but that's not happening.
I need to know: is there any way to ignore a set of classes with minimal effort?
If you don't want to split integration and unit tests into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which gives you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
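A minimal sketch (class and test names made up, assuming NUnit 3) of tagging a whole fixture and filtering from the command line:

using NUnit.Framework;

// Category applied at the class level tags every test in the fixture,
// which is close to "ignore a set of classes" without using [Ignore].
[TestFixture]
[Category("Integration")]
public class CustomerRepositoryTests
{
    [Test]
    public void SavesCustomerToDatabase() { /* slow database test */ }
}

// The slow tests can then be excluded (or selected) at run time, e.g.:
//   nunit3-console MyTests.dll --where "cat != Integration"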
A recommended approach is separating the unit tests that can run in isolation from your integration tests into different projects; then you can choose which project to execute when you run your tests. This makes it easier to run your faster tests more often, multiple times daily or even hourly (and hopefully without ever having to worry about such things as configuration), while letting your slower-running integration tests run on a different schedule.

MbUnit Parallelizable tests

I am looking at moving from NUnit to MbUnit as my unit testing framework, as it has a couple of features that I like, one of them being the Parallelizable attribute. If I mark tests with this attribute, what happens:
i) are all instance variables available only to their own thread, or are they shared?
ii) how many tests will execute at once? Is it dependent on the number of processors/cores?
The reason for asking the first question is that, as a test, I simply swapped the NUnit framework for the MbUnit framework, and in a particular test class sets of tests tend to fail when run in parallel and pass when run sequentially. These tests use variables at the class level which are then set up in [SetUp].
The tests run on a single instance of your test fixture class, so the instance fields will be shared.
By default, the degree of parallelism equals the number of CPUs you have, or 2 at a minimum.
You can use the DegreeOfParallelism attribute at the assembly level to override this.
See this blog post for details and some examples showing you how to use the various attributes.
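A rough sketch of what that looks like in code; treat the exact MbUnit/Gallio syntax as an assumption based on the attribute names mentioned above:

using MbUnit.Framework;

// Assembly-level override of the worker count (default: CPU count, minimum 2).
[assembly: DegreeOfParallelism(4)]

[TestFixture]
public class ParallelTests
{
    // Parallel tests in this class run against ONE fixture instance,
    // so instance fields set up in [SetUp] are shared between threads.
    private int _shared;

    [Test, Parallelizable]
    public void TestA() { _shared++; /* racy: shared state */ }

    [Test, Parallelizable]
    public void TestB() { _shared++; /* racy: shared state */ }
}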
