Extra test run with random data when using InlineAutoData - C#

My challenge is that when using InlineAutoData, the test is also run with random values. The background is that I am testing a conversion whose input is required to follow a specification, so I am not interested in random data.
The following test runs twice: once with the InlineAutoData values and once with random strings. The test has been deliberately kept simple so that it fails on the random-data run:
[Theory, GeneralTransferTestConventions]
[InlineAutoData("Allowed", "Allowed")]
public void Testing(string test1Data, string test2Data)
{
    Assert.Equal(test1Data, test2Data);
}
My question is: is there a way to avoid the test run with random data, and if so, how?

Remove the AutoFixture integration. The extra run with random values comes from the GeneralTransferTestConventions attribute (presumably derived from AutoDataAttribute), which supplies its own set of auto-generated arguments in addition to the InlineAutoData case:
[Theory]
[InlineData("Allowed", "Allowed")]
public void Testing(string test1Data, string test2Data)
{
    Assert.Equal(test1Data, test2Data);
}
This is a pure xUnit.net test, and is entirely deterministic.
As a note, though, there's no reason to make a test parametrised if there's only going to be a single set of test cases, so either add more InlineData test cases:
[Theory]
[InlineData("Allowed", "Allowed")]
[InlineData("foo", "foo")]
[InlineData("bar", "bar")]
public void Testing(string test1Data, string test2Data)
{
    Assert.Equal(test1Data, test2Data);
}
or make it a 'normal' test:
[Fact]
public void Testing()
{
    var test1Data = "Allowed";
    var test2Data = "Allowed";
    Assert.Equal(test1Data, test2Data);
}

Best practice for unit test cases

I am using the xUnit.net test framework, and in each unit test I repeat the same steps. I would like to know if there is a way to call a method once before my test cases start, and again once all test cases have been executed.
For example: in the scenario below I have two test cases, and in each one I create a local DB, populate it with data, run my test, and once it is done I call a method to delete the DB. I do this in every test case. Instead of creating the DB multiple times, I would like to create and populate it once, and then delete it once all test cases have been executed. It is important for me to delete what I have created, as there are certain cases which will fail if the database is not created when the tests are executed.
[Fact]
public void UnitCase1()
{
    CreateDb();
    UploadData();
    // ... my set of operations to test this case
    // ... assert
    DeleteDb();
}

[Fact]
public void UnitCase2()
{
    CreateDb();
    UploadData();
    // ... my set of operations to test this case
    // ... assert
    DeleteDb();
}
Edit, after Eric's answer (I tried it but it's not working):
public class CosmosDataFixture : IDisposable
{
    public static readonly string CosmosEndpoint = "https://localhost:8081";
    public static readonly string EmulatorKey = "Mykey";
    public static readonly string DatabaseId = "Databasename";
    public static readonly string RecordingCollection = "collectionName";

    string Root = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName;
    DocumentClient client = null;

    public void ReadAllData(DocumentClient client)
    {
        // reading document code
    }

    public void ReadConfigAsync()
    {
        client = new DocumentClient(new Uri(CosmosEndpoint), EmulatorKey,
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            });
    }

    public void CreateDatabase()
    {
        // create db code
    }

    private void DeleteDatabase()
    {
        // delete db code
    }

    public CosmosDataFixture()
    {
        ReadConfigAsync();
        CreateDatabase();
        ReadAllData(client);
    }

    public void Dispose()
    {
        DeleteDatabase();
    }
}

public class CosmosDataTests : IClassFixture<CosmosDataFixture>
{
    CosmosDataFixture fixture;

    public CosmosDataTests(CosmosDataFixture fixture)
    {
        this.fixture = fixture;
    }

    [Fact]
    public async Task CheckDatabaseandCollectionCreation()
    {
        List<string> collectionName = new List<string>();
        var uri = UriFactory.CreateDatabaseUri(DatabaseId); // DatabaseId and client are not accessible here: "does not exist in the current context"
        var collections = await client.ReadDocumentCollectionFeedAsync(uri);
        foreach (var collection in collections)
        {
            collectionName.Add(collection.Id);
        }
    }
}
That's what [SetUp] and [TearDown] are for in NUnit. They are run right before and right after each test case, respectively. In xUnit you would usually implement a default constructor and IDisposable.
For example:
public class TestClass : IDisposable
{
    public TestClass()
    {
        CreateDb();
        UploadData();
    }

    public void Dispose()
    {
        DeleteDb();
    }

    [Fact]
    public void UnitCase1()
    {
        // ... my set of operations to test this case
        // ... assert
    }

    [Fact]
    public void UnitCase2()
    {
        // ... my set of operations to test this case
        // ... assert
    }
}
As other people have pointed out, such tests are in mainstream parlance not unit tests, but rather integration tests. xUnit.net is a fine framework for those kinds of tests, though, so apart from the semantic distinction, it makes little technical difference.
Apart from setting up the database in the test class' constructor and tearing it down in Dispose, as outlined by Eric Schaefer, you can also use xUnit.net's BeforeAfterTestAttribute. You'll then override Before to set up the database, and override After to tear it down:
public class UseDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        CreateDb();
        UploadData();
        base.Before(methodUnderTest);
    }

    public override void After(MethodInfo methodUnderTest)
    {
        base.After(methodUnderTest);
        DeleteDb();
    }
}
You can then annotate either each test method, or the entire test class with the attribute. I usually just annotate the class:
[UseDatabase]
public class DbTests
{
    // Tests go here...
}
Since tests that use a database interact with a shared resource (the database), they can't easily run in parallel. By default, xUnit.net runs tests in parallel, so you may want to disable that. You can do it by adding an xunit.runner.json file:
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": false
}
Finally, at least if you're using SQL Server, connection pooling will prevent you from deleting the database. You can either turn off connection pooling for your tests, or forcibly close other connections before teardown.
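A minimal sketch of that last point, assuming SQL Server accessed through ADO.NET (the helper class, database name, and connection string below are placeholders, not part of the original answer):

using Microsoft.Data.SqlClient; // assumption: the tests talk to SQL Server via ADO.NET

public static class TestDatabaseTeardown
{
    // Hypothetical teardown helper: closes other connections, then drops the test database.
    public static void DropTestDatabase(string databaseName)
    {
        // Pooling=false keeps this connection out of the pool; connect to master
        // so this code is not itself holding the target database open.
        var connectionString =
            "Server=(localdb)\\MSSQLLocalDB;Database=master;Integrated Security=true;Pooling=false";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = connection.CreateCommand();
        // SINGLE_USER WITH ROLLBACK IMMEDIATE forcibly closes other connections,
        // including pooled ones, so the DROP cannot be blocked.
        command.CommandText =
            $"ALTER DATABASE [{databaseName}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
            $"DROP DATABASE [{databaseName}];";
        command.ExecuteNonQuery();
    }
}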
In my testing experience, I see two points here:
1. If you are checking that data from the DB is transmitted correctly to another point in the program, that is integration testing, and it should be out of scope for the unit testing plan. Make sure that the responsibilities of a unit tester are clear where you work, as some companies skip integration testing levels by assuming that if functional testing is 'OK', the integrations must be too.
2. You mention at the end
It is important for me to delete what I have created, as there are certain cases which will fail if the database is not created when the tests are executed
but
I would like to create and populate it once, and then delete it once all test cases have been executed.
If I understand correctly, you need to do it for each test case because not all test cases check the same scenario, so it looks like the contradiction between those two statements is the real problem here.
To answer your question: since it seems you want to automate the process with minimum maintenance for the next releases (and I know how work environments tend to corner you into doing things that shouldn't be done), I would think of a precondition function and a postcondition function, where you do it once and that's it.
If that is not possible for whatever reason, try creating another test case at the beginning (like Test Case 0) where you create and populate the DB (or split that into two, if needed), and another one at the end where you delete it.
I'm not familiar with the framework you are using, but I have a lot of experience in testing, setting up test levels and automating tasks, and I hope my answer is of some help.
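For reference, in xUnit.net (the framework the asker is using) the "create once, delete once for the whole run" idea from point 2 can be expressed as a collection fixture. A minimal sketch, assuming the asker's CreateDb, UploadData and DeleteDb helpers are available as static methods on a hypothetical DatabaseHelpers class:

using System;
using Xunit;

// Runs its constructor once before the first test in the collection
// and Dispose once after the last test in the collection.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        DatabaseHelpers.CreateDb();
        DatabaseHelpers.UploadData();
    }

    public void Dispose()
    {
        DatabaseHelpers.DeleteDb();
    }
}

[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{
    // No code needed; this class only carries the attribute.
}

[Collection("Database collection")]
public class SharedDatabaseTests
{
    private readonly DatabaseFixture fixture;

    public SharedDatabaseTests(DatabaseFixture fixture) => this.fixture = fixture;

    [Fact]
    public void UnitCase1()
    {
        // operations and asserts against the shared database
    }
}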

In MSTest, how can I specify that certain test methods cannot be run in parallel with each other?

I have a large set of integration tests that test a website server. Most of these tests are fine to run in parallel. However, I have a few that change settings and can cause each other to fail when run in parallel.
As a simplified example, let's say I had these tests:
TestPrice_5PercentTax
TestPrice_10PercentTax
TestPrice_NoTax
TestInventory_Add10Items
TestInventory_Remove10Items
The inventory tests will not get in the way of each other, and are not affected by the price tests. But the price tests will change the Tax setting, so that if both 5 and 10 run in parallel, 10 could end up changing the setting before 5 is done, and 5 would fail because it saw 10% tax instead of the 5% it expected.
I want to define a category for the three price tests, and say that they may not run at the same time as one another. They can run at the same time as any other tests, just not the other price tests. Is there a way to do this in MSTest?
MSTest v2 has the following functionality:
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]
// Note the assembly-level attribute; depending on how your project is set up, it may belong in AssemblyInfo.cs.

namespace UnitTestProject1
{
    [TestClass]
    public class TestClass1
    {
        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_5PercentTax() { /* your test here */ }

        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_10PercentTax() { /* your test here */ }

        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_NoTax() { /* your test here */ }

        [TestMethod]
        public void TestInventory_Add10Items() { /* your test here */ }

        [TestMethod]
        public void TestInventory_Remove10Items() { /* your test here */ }
    }
}
More detailed information can be found in the MSTest v2 article at meziantou.net.
I strongly recommend at least a quick read-through of that link, as it will likely help you understand and solve the issue of running tests in parallel versus sequentially.
I would like to provide a potential solution that I started but did not pursue.
First, I made a class that I could use as an attribute on my test methods.
[AttributeUsage(AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class NoParallel : Attribute
{
    public NoParallel(string nonParallelGroupName)
    {
        SetName = nonParallelGroupName;
    }

    public string SetName { get; }
}
Then I went and added it to my test methods that will conflict.
[NoParallel("Tax")]
public void TestPrice_5PercentTax();
[NoParallel("Tax")]
public void TestPrice_10PercentTax();
[NoParallel("Tax")]
public void TestPrice_NoTax();
// This test doesn't care
public void TestInventory_Add10Items();
// This test doesn't care
public void TestInventory_Remove10Items();
I gave my test class a static dictionary of mutexes keyed by their names.
private static Dictionary<string, Mutex> exclusiveCategories = new Dictionary<string, Mutex>();
Finally, using a helper to grab all of the "NoParallel" strings the test method has...
public static List<string> NonparallelSets(this TestContext context, ContextHandler testInstance)
{
    var result = new List<string>();
    var testName = context.TestName;
    var testClassType = testInstance.GetType();
    var testMethod = testClassType.GetMethod(testName);
    if (testMethod != null)
    {
        // A method can carry several [NoParallel] attributes, so collect them all.
        var nonParallelGroups = testMethod.GetCustomAttributes<NoParallel>(true);
        if (nonParallelGroups != null)
        {
            result = nonParallelGroups.Select(x => x.SetName).ToList();
        }
    }
    result.Sort();
    return result;
}
... I set up a TestInitialize and TestCleanup to make the tests with matching NoParallel strings execute in order.
[TestInitialize]
public void PerformSetup()
{
    // Get all "NoParallel" strings on the test method currently being run
    var nonParallelSets = testContext.NonparallelSets(this);

    // A test can have multiple "NoParallel" attributes, so do this for all of them
    foreach (var setName in nonParallelSets)
    {
        // If this NoParallel set doesn't have a mutex yet, make one
        Mutex mutex;
        if (exclusiveCategories.ContainsKey(setName))
        {
            mutex = exclusiveCategories[setName];
        }
        else
        {
            mutex = new System.Threading.Mutex();
            exclusiveCategories[setName] = mutex;
        }

        // Wait for the mutex before the test is allowed to run
        mutex.WaitOne();
    }
}

[TestCleanup]
public void PerformTeardown()
{
    // Get the "NoParallel" strings on the test method again
    var nonParallelSets = testContext.NonparallelSets(this);

    // Release the mutex held for each one
    foreach (var setName in nonParallelSets)
    {
        var mutex = exclusiveCategories[setName];
        mutex.ReleaseMutex();
    }
}
We decided not to pursue this because it wasn't really worth the effort. Ultimately we decided to pull the tests that can't run together into their own test class, and mark them with [DoNotParallelize] as H.N suggested.
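For reference, a minimal sketch of that final arrangement (class names below are placeholders): [DoNotParallelize] applied to a test class keeps its tests out of parallel execution, while the other test classes still run in parallel among themselves.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// The conflicting tax tests live in their own class and are excluded
// from parallel execution; the inventory tests stay parallelizable.
[TestClass]
[DoNotParallelize]
public class TaxPriceTests
{
    [TestMethod]
    public void TestPrice_5PercentTax() { /* arrange, act, assert */ }

    [TestMethod]
    public void TestPrice_10PercentTax() { /* arrange, act, assert */ }

    [TestMethod]
    public void TestPrice_NoTax() { /* arrange, act, assert */ }
}

[TestClass]
public class InventoryTests
{
    [TestMethod]
    public void TestInventory_Add10Items() { /* arrange, act, assert */ }

    [TestMethod]
    public void TestInventory_Remove10Items() { /* arrange, act, assert */ }
}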

Is there a way to send a value from a unit test to another unit test?

I have a function which calculates some data and inserts it into a DB. This setup is important for all unit tests because they need some data to work on.
Sometimes I need to "flush" the DB, after which all the unit tests point to the wrong ID.
Normally I just run the setup first and then change all the unit tests, but honestly this takes too long. Is there a way to automate this?
I would like to pass the generated ID into the other unit tests.
So the idea was something like this:
[SetUpFixture]
public class SetupClass
{
    [Test]
    public void SetupDB()
    {
        setup();
        // now mark the result somehow so other tests can pick it up
        return ID; // <--
    }
}

public class OtherTests
{
    [Test]
    [Get_ID_From_SetupDB]
    public void testBaseOnID(int ID)
    {
        // we do stuff now with ID
    }
}
PS: I have no problem switching the testing framework if you know of one which can do this.
Tests should be independent and you should generally never pass values between tests.
What you can do in your case, if all the tests are in the same class, is to have a field in your class to hold the id, and a global setup function that sets everything up and assigns the correct id to that field. In NUnit there is the [OneTimeSetUp] attribute for that.
[TestFixture]
public class MyTests
{
    private int _testId;

    [OneTimeSetUp]
    public void SetItUp()
    {
        // ...
        _testId = whatever;
    }

    [Test]
    public void TestOne()
    {
        var whatever = _testId;
        // ...
    }
}

How to generate tests based on data in nunit framework using C#

So I have this code:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test]
    public void TestCase12()
    {
        ExecuteTestCase(12);
    }

    [Test]
    public void TestCase13()
    {
        ExecuteTestCase(13);
    }

    [Test]
    public void TestCase14()
    {
        ExecuteTestCase(14);
    }
}
The ExecuteTestCase method gets test parameters from my web server and executes the test case with those settings.
Each time I add new test case parameters on my web server, I need to add a new test in my C# code, pass in the ID of the test case parameters from my web server database, and recompile my code.
Is there any way to do this automatically? For example, could C# get the IDs of all test case parameters from my server and create tests for them on the fly?
What is important is that test cases change frequently. I was thinking about running all test cases in a loop inside one test, but then I'd be unable to run my test cases separately, for example in the NUnit IDE.
So my question is: how do I run multiple test cases depending on data I receive at run time?
You can use the TestCaseSource attribute to get the parameters from your web service and have your test cases generated automatically:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test, TestCaseSource(nameof(GetTestParameters))]
    public void TestCase(int parameter)
    {
        ExecuteTestCase(parameter);
    }

    static int[] GetTestParameters()
    {
        // call web service and get parameters
        return new[] { 1, 2, 3 };
    }
}
See the TestCaseSource documentation for more details.
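Since the cases come from the server at run time and you want to run them individually, the source method could also return NUnit's TestCaseData objects and give each case a name. A minimal sketch (the server call and ExecuteTestCase body are placeholders):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
[Category("MyTestSet")]
public class MyNamedTests
{
    // Returning TestCaseData instead of raw ints lets each server-defined
    // case show up as an individually runnable, named test.
    static IEnumerable<TestCaseData> GetNamedTestParameters()
    {
        var idsFromServer = new[] { 12, 13, 14 }; // placeholder for the web-service call

        foreach (var id in idsFromServer)
        {
            yield return new TestCaseData(id).SetName($"TestCase{id}");
        }
    }

    [Test, TestCaseSource(nameof(GetNamedTestParameters))]
    public void TestCaseFromServer(int parameter)
    {
        ExecuteTestCase(parameter);
    }

    static void ExecuteTestCase(int id)
    {
        // placeholder: fetch the parameters for this id and run the actual checks
    }
}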

Selective Ignore of NUnit tests

I want to ignore certain tests based on data I pulled from a configuration file during the TestFixtureSetUp. Is there a way to ignore running a test based on parameters?
[TestFixture]
public class MessagesTests
{
    private bool isPaidAccount;

    [TestFixtureSetUp]
    public void Init()
    {
        isPaidAccount = ConfigurationManager.AppSettings["IsPaidAccount"] == "True";
    }

    [Test]
    // this test should run only if `isPaidAccount` is true
    public void Message_Without_Template_Is_Sent()
    {
        // this tests an actual web API call.
    }
}
If the account we are testing with is a paid account, the test should run fine; if not, the method will throw an exception.
Is there an extension of the attribute, something like [Ignore(ReallyIgnore = isPaidAccount)]? Or should I handle this inside the method and run two separate test paths, e.g.:
public void Message_Without_Template_Is_Sent()
{
    if (isPaidAccount)
    {
        // test for return value here
    }
    else
    {
        // test for exception here
    }
}
You can use Assert.Ignore() like Matthew states. You could also use Assert.Inconclusive() if you want to categorize the result differently.
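Applied to the test above, a minimal sketch could look like this (the message text is just an example):

[Test]
public void Message_Without_Template_Is_Sent()
{
    // Skip (or mark inconclusive) when the configured account cannot run this test.
    if (!isPaidAccount)
    {
        Assert.Ignore("Requires a paid account; skipping.");
        // Assert.Inconclusive("Requires a paid account."); // alternative
    }

    // ...the actual web API call and assertions go here
}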
This Question/Answer is slightly similar: Programmatically skip an nunit test
