I want to ignore certain tests based on data I pulled from a configuration file during the TestFixtureSetUp. Is there a way to ignore running a test based on parameters?
[TestFixture]
public class MessagesTests
{
    private bool isPaidAccount;

    [TestFixtureSetUp]
    public void Init()
    {
        isPaidAccount = ConfigurationManager.AppSettings["IsPaidAccount"] == "True";
    }

    [Test]
    //this test should run only if `isPaidAccount` is true
    public void Message_Without_Template_Is_Sent()
    {
        //this tests an actual web api call.
    }
}
If the account we are testing with is a paid account, the test should run fine; if not, the method will throw an exception.
Would there be an extension of the attribute, like [Ignore(ReallyIgnore = isPaidAccount)]? Or should I write this inside the method and run two separate test cases, e.g.
public void Message_Without_Template_Is_Sent()
{
    if (isPaidAccount)
    {
        //test for return value here
    }
    else
    {
        //test for exception here
    }
}
You can use Assert.Ignore() like Matthew states. You could also use Assert.Inconclusive() if you want to categorize the result differently.
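For example, a minimal sketch inside the test itself, using the isPaidAccount field from the question:

[Test]
public void Message_Without_Template_Is_Sent()
{
    // Skip (rather than fail) when the precondition isn't met
    if (!isPaidAccount)
        Assert.Ignore("Requires a paid account; skipping.");

    //test the actual web API call here
}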
This Question/Answer is slightly similar: Programmatically skip an nunit test
Related
I want to test a method with no return value, which only runs code if a passed-in bool is set to true.
I'd like to test that when the bool is false, no code is called. I suspect this may be a case where refactoring would make my code more testable.
Code speaks a thousand words, so here's my method (constructor and dependencies left out for brevity):
...
public void Handle(Notification notification)
{
    if (notification.IsRestarting)
    {
        var installedVersion = _versionService.GetValue("Version");
        var packageVersion = "1.0.0";

        if (installedVersion.IsNullOrWhiteSpace())
        {
            _service.Initialise();
        }

        _versionService.SetValue("Version", packageVersion);
    }
}
...
I'm using Moq for mocking.
The _versionService actually sets the value in a SQL database table. This is a third-party service, though it is mockable.
So, if IsRestarting is false, the method should do nothing. How do I test this, or how can I make my code more testable?
You can check that your services were never called. Assuming that _versionService is a Mock<ISomeVersionService>. Some rough code -
[Test]
public void MyTest()
{
    //call the method
    Handle(new Notification { IsRestarting = false });

    //verify nothing happened (note: Times.Never goes on Verify, not inside GetValue)
    _versionService.Verify(x => x.GetValue(It.IsAny<string>()), Times.Never);
    //handle _service similarly
}
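A stricter variation, as a sketch (IVersionService and NotificationHandler are hypothetical names standing in for your actual interface and handler class): create the mock with MockBehavior.Strict, so any call that has no matching Setup throws immediately and fails the test.

var versionService = new Mock<IVersionService>(MockBehavior.Strict); // hypothetical interface
var handler = new NotificationHandler(versionService.Object);       // hypothetical handler class

// With MockBehavior.Strict, any call on the mock without a Setup throws,
// so this passes only if Handle really does nothing with the service.
handler.Handle(new Notification { IsRestarting = false });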
I am using the xUnit.net test framework and in each unit test I have certain steps which I repeat in each case. I would like to know if there is a way to call a method once before my test cases start and once after all test cases have been executed.
For example: in the scenario below I have two test cases, and in each case I am creating a local DB, populating it with data, running my test, and once it is done calling a method to delete the DB. I do this in each test case. Instead of creating it multiple times, I would like to create and populate the DB once, then delete it once all test cases have been executed. It is important for me to delete what I have created, as certain test cases will fail if the database is not created when the tests are executed.
[Fact]
public void UnitCase1()
{
    CreateDb();
    UploadData();
    ...//My set of operations to test this case
    ...//Assert
    DeleteDb();
}

[Fact]
public void UnitCase2()
{
    CreateDb();
    UploadData();
    ...//My set of operations to test this case
    ...//Assert
    DeleteDb();
}
Edit, after the answer from Eric (I tried it, but it's not working):
public class CosmosDataFixture : IDisposable
{
    public static readonly string CosmosEndpoint = "https://localhost:8081";
    public static readonly string EmulatorKey = "Mykey";
    public static readonly string DatabaseId = "Databasename";
    public static readonly string RecordingCollection = "collectionName";

    string Root = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName;
    DocumentClient client = null;

    public void ReadAllData(DocumentClient client)
    {
        //reading document code
    }

    public void ReadConfigAsync()
    {
        client = new DocumentClient(new Uri(CosmosEndpoint), EmulatorKey,
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            });
    }

    public void CreateDatabase()
    {
        // create db code
    }

    private void DeleteDatabase()
    {
        // delete db code
    }

    public CosmosDataFixture()
    {
        ReadConfigAsync();
        CreateDatabase();
        ReadAllData(client);
    }

    public void Dispose()
    {
        DeleteDatabase();
    }
}
public class CosmosDataTests : IClassFixture<CosmosDataFixture>
{
    CosmosDataFixture fixture;

    public CosmosDataTests(CosmosDataFixture fixture)
    {
        this.fixture = fixture;
    }

    [Fact]
    public async Task CheckDatabaseandCollectionCreation()
    {
        List<string> collectionName = new List<string>();
        var uri = UriFactory.CreateDatabaseUri(DatabaseId); // don't get DatabaseId or client; compiler says they do not exist in the current context
        var collections = await client.ReadDocumentCollectionFeedAsync(uri);
        foreach (var collection in collections)
        {
            collectionName.Add(collection.Id);
        }
    }
}
That's what [SetUp] and [TearDown] are for in NUnit. They are run right before and right after each test case, respectively. In xUnit you would usually implement a default constructor and IDisposable.
For example:
public TestClass()
{
    CreateDb();
    UploadData();
}

public void Dispose()
{
    DeleteDb();
}

[Fact]
public void UnitCase1()
{
    ...//My set of operations to test this case
    ...//Assert
}

[Fact]
public void UnitCase2()
{
    ...//My set of operations to test this case
    ...//Assert
}
As other people have pointed out, such tests are in mainstream parlance not unit tests, but rather integration tests. xUnit.net is a fine framework for those kinds of tests, though, so apart from the semantic distinction, it makes little technical difference.
Apart from setting up the database in the test class' constructor and tearing it down in Dispose, as outlined by Eric Schaefer, you can also use xUnit.net's BeforeAfterTestAttribute. You'll then override Before to set up the database, and override After to tear it down:
public class UseDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        CreateDb();
        UploadData();
        base.Before(methodUnderTest);
    }

    public override void After(MethodInfo methodUnderTest)
    {
        base.After(methodUnderTest);
        DeleteDb();
    }
}
You can then annotate either each test method, or the entire test class with the attribute. I usually just annotate the class:
[UseDatabase]
public class DbTests
{
    // Tests go here...
}
Since tests that use a database interact with a shared resource (the database), they can't easily run in parallel. By default, xUnit.net runs tests in parallel, so you may want to disable that. You can do it by adding an xunit.runner.json file:
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": false
}
Finally, at least if you're using SQL Server, connection pooling will prevent you from deleting the database. You can either turn off connection pooling for your tests, or forcibly close other connections before teardown.
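For example, a rough sketch of both options for SQL Server (the connection string is illustrative, not from the question):

// Option 1: disable pooling for the test connection string, so no pooled
// connection keeps the database in use during teardown.
var connectionString = "Server=(localdb)\\MSSQLLocalDB;Database=TestDb;Pooling=false";

// Option 2: forcibly release pooled connections before dropping the database.
System.Data.SqlClient.SqlConnection.ClearAllPools();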
In my experience in testing, I see two points here:
1. If you are checking that data from the DB is transmitted correctly to another point in the program, that is integration testing, and it should be out of scope for the unit testing plan. Make sure the responsibilities of a unit tester are clear where you work, as some companies skip integration testing levels by assuming that if functional testing is OK, integrations should be too.
2. You mention at the end:
It is important for me to delete what I have created, as certain test cases will fail if the database is not created when the tests are executed
but
I would like to create and populate the DB once, then delete it once all test cases have been executed.
If I understand correctly, you need to do it for each test case, as not all test cases check the same scenario, so it looks like those two statements are the real problem here.
To answer your question: since you seem to want to automate the process with minimum maintenance for future releases, and I also know how a work environment tends to corner you into doing things that shouldn't be done, I would think of a preconditions function and a postconditions function, where you do it once and that's it.
If that is not possible for whatever reason, try to create another test case at the beginning (like Test Case 0) where you create and populate the DB (splitting it if needed), and another one at the end where you delete it.
I'm not familiar with the framework you are using, but I have a lot of experience in testing, opening test levels and automating tasks, and I hope my answer is of some help.
I have a large set of integration tests that test a website server. Most of these tests are fine to run in parallel. However, I have a few that change settings and can cause each other to fail when run in parallel.
As a simplified example, let's say I had these tests:
TestPrice_5PercentTax
TestPrice_10PercentTax
TestPrice_NoTax
TestInventory_Add10Items
TestInventory_Remove10Items
The inventory tests will not get in the way of each other, and are not affected by the price tests. But the price tests will change the Tax setting, so that if both 5 and 10 run in parallel, 10 could end up changing the setting before 5 is done, and 5 would fail because it saw 10% tax instead of the 5% it expected.
I want to define a category for the three price tests, and say that they may not run at the same time as one another. They can run at the same time as any other tests, just not the other price tests. Is there a way to do this in MSTest?
MSTest v2 has the following functionality:
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]
// Notice the assembly attribute; this may or may not fit how your code is built

namespace UnitTestProject1
{
    [TestClass]
    public class TestClass1
    {
        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_5PercentTax() { /* your test here */ }

        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_10PercentTax() { /* your test here */ }

        [TestMethod]
        [DoNotParallelize] // This test will not be run in parallel
        public void TestPrice_NoTax() { /* your test here */ }

        [TestMethod]
        public void TestInventory_Add10Items() { /* your test here */ }

        [TestMethod]
        public void TestInventory_Remove10Items() { /* your test here */ }
    }
}
More detailed information can be found here: MSTest v2 at meziantou.net
I strongly recommend at least a quick read-through of the link, as it will likely help you understand and solve the issue with tests running in parallel or sequentially.
I would like to provide a potential solution that I started but did not pursue.
First, I made a class that I could use as an attribute on my test methods.
[AttributeUsage(AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class NoParallel : Attribute
{
    public NoParallel(string nonParallelGroupName)
    {
        SetName = nonParallelGroupName;
    }

    public string SetName { get; }
}
Then I went and added it to my test methods that will conflict.
[NoParallel("Tax")]
public void TestPrice_5PercentTax();
[NoParallel("Tax")]
public void TestPrice_10PercentTax();
[NoParallel("Tax")]
public void TestPrice_NoTax();
// This test doesn't care
public void TestInventory_Add10Items();
// This test doesn't care
public void TestInventory_Remove10Items();
I gave my test class a static dictionary of mutexes keyed by their names.
private static Dictionary<string, Mutex> exclusiveCategories = new Dictionary<string, Mutex>();
Finally, using a helper to grab all of the "NoParallel" strings the test method has...
public static List<string> NonparallelSets(this TestContext context, ContextHandler testInstance)
{
    var result = new List<string>();
    var testName = context.TestName;
    var testClassType = testInstance.GetType();
    var testMethod = testClassType.GetMethod(testName);

    if (testMethod != null)
    {
        var nonParallelGroups = testMethod.GetCustomAttributes<NoParallel>(true);
        result = nonParallelGroups.Select(x => x.SetName).ToList();
    }

    result.Sort();
    return result;
}
... I set up a TestInitialize and TestCleanup to make the tests with matching NoParallel strings execute in order.
[TestInitialize]
public void PerformSetup()
{
    // Get all "NoParallel" strings on the test method currently being run
    var nonParallelSets = testContext.NonparallelSets(this);

    // A test can have multiple "NoParallel" attributes so do this for all of them
    foreach (var setName in nonParallelSets)
    {
        // If this NoParallel set doesn't have a mutex yet, make one
        Mutex mutex;
        if (exclusiveCategories.ContainsKey(setName))
        {
            mutex = exclusiveCategories[setName];
        }
        else
        {
            mutex = new System.Threading.Mutex();
            exclusiveCategories[setName] = mutex;
        }

        // Wait for the mutex before you can run the test
        mutex.WaitOne();
    }
}

[TestCleanup]
public void PerformTeardown()
{
    // Get the "NoParallel" strings on the test method again
    var nonParallelSets = testContext.NonparallelSets(this);

    // Release the mutex held for each one
    foreach (var setName in nonParallelSets)
    {
        var mutex = exclusiveCategories[setName];
        mutex.ReleaseMutex();
    }
}
We decided not to pursue this because it wasn't really worth the effort. Ultimately we decided to pull the tests that can't run together into their own test class, and mark them with [DoNotParallelize] as H.N suggested.
So I have this code:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test]
    public void TestCase12()
    {
        ExecuteTestCase(12);
    }

    [Test]
    public void TestCase13()
    {
        ExecuteTestCase(13);
    }

    [Test]
    public void TestCase14()
    {
        ExecuteTestCase(14);
    }
}
The ExecuteTestCase method gets test parameters from my web server and executes the test case with those settings.
Each time I add new test case parameters on my web server, I need to add a new test in my C# code, pass in the ID of the test case parameters from my web server database, and recompile my code.
Is there any way to do this automatically? For example, C# gets the IDs of all test case parameters from my server and creates tests for them on the fly?
What's important is that the test cases change frequently. I was thinking about running all test cases inside one test in a loop, but then I'd be unable to run my test cases separately, for example in the NUnit IDE.
So my question is: how do I run multiple test cases depending on data I receive at run time?
You can use the TestCaseSource attribute in order to get parameters from the web service and have your test cases auto-generated:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test, TestCaseSource(nameof(GetTestParameters))]
    public void TestCase(int parameter)
    {
        ExecuteTestCase(parameter);
    }

    static int[] GetTestParameters()
    {
        //call web service and get parameters
        return new[] { 1, 2, 3 };
    }
}
documentation
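If you also want control over the generated test names, here is a sketch using NUnit's TestCaseData (FetchIdsFromServer is a hypothetical stand-in for the web service call):

static IEnumerable<TestCaseData> GetTestParameters()
{
    //call web service and get parameters
    foreach (var id in FetchIdsFromServer()) // hypothetical helper
    {
        // SetName controls how each case appears in the runner
        yield return new TestCaseData(id).SetName($"TestCase{id}");
    }
}

Each generated case still appears as a separate test in the runner, so they remain individually runnable even though the data arrives at run time.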
I am using xUnit with the TestDriven.Net or ReSharper test runner to execute my tests. I really like the BDD style of writing my tests, so I was wondering if there is some way to modify the output of these frameworks.
I like to name my tests with underscores and want to split the test name and format it in Given, When, Then format. Is that at all possible with these tools?
I'm not sure exactly what you're trying to do. It seems you want to change the test names displayed by a certain test runner. How test names are displayed actually depends on the test runner (tool), which means whether we can customize names depends on which runner we use.
Check out the following code, which could be what you want. If not, I think it can at least show you some ideas about how to customize test names.
public class Given_Foo
{
    [Test]
    public void Then_Bar_returns_correct_result()
    {
        Assert.True(false, "Check out test names...");
    }
}
public class TestAttribute : FactAttribute
{
    public TestAttribute()
    {
    }

    protected override IEnumerable<ITestCommand> EnumerateTestCommands(IMethodInfo method)
    {
        yield return new CustomNamedTestCommand(method);
    }
}

public class CustomNamedTestCommand : FactCommand
{
    public CustomNamedTestCommand(IMethodInfo method) : base(method)
    {
        this.DisplayName = DisplayName.Replace("_", " ");
    }
}
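With this in place, a runner that honors DisplayName will show the test as "Then Bar returns correct result" instead of "Then_Bar_returns_correct_result". Note that FactCommand and ITestCommand come from the older xUnit 1.x extensibility API; the extensibility model changed in xUnit 2.x, so this approach applies to 1.x runners.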