I am using the xUnit.net test framework, and in each unit test there are certain steps I repeat in every case. I would like to know if there is a way to call a method once before my test cases start, and again once all test cases have been executed.
For example: in the scenario below I have two test cases, and in each one I create a local DB, populate it with data, run my test, and once it is done call a method to delete the DB. I do this in every test case. Instead of creating and populating the database multiple times, I would like to create it once, populate it once, and then delete it once all test cases have been executed. It is important for me to delete what I have created, as certain test cases will fail if the database is not created when the tests are executed.
[Fact]
public void UnitCase1()
{
CreateDb();
UploadData();
...//My set of operation to test this case
...//Assert
DeleteDb();
}
[Fact]
public void UnitCase2()
{
CreateDb();
UploadData();
...//My set of operation to test this case
...//Assert
DeleteDb();
}
Edit, after Eric's answer (I tried it, but it's not working):
public class CosmosDataFixture : IDisposable
{
public static readonly string CosmosEndpoint = "https://localhost:8081";
public static readonly string EmulatorKey = "Mykey";
public static readonly string DatabaseId = "Databasename";
public static readonly string RecordingCollection = "collectionName";
string Root = Directory.GetParent( Directory.GetCurrentDirectory() ).Parent.Parent.FullName;
DocumentClient client = null;
public void ReadAllData( DocumentClient client )
{
//reading document code
}
public void ReadConfigAsync()
{
client = new DocumentClient( new Uri( CosmosEndpoint ), EmulatorKey,
new ConnectionPolicy
{
ConnectionMode = ConnectionMode.Direct,
ConnectionProtocol = Protocol.Tcp
} );
}
public void CreateDatabase()
{// create db code
}
private void DeleteDatabase()
{
// delete db code
}
public CosmosDataFixture()
{
ReadConfigAsync();
CreateDatabase();
ReadAllData( client );
}
public void Dispose()
{
DeleteDatabase();
}
}
public class CosmosDataTests : IClassFixture<CosmosDataFixture>
{
CosmosDataFixture fixture;
public CosmosDataTests( CosmosDataFixture fixture )
{
this.fixture = fixture;
}
[Fact]
public async Task CheckDatabaseandCollectionCreation()
{
List<string> collectionName = new List<string>();
var uri = UriFactory.CreateDatabaseUri(DatabaseId); // compile error: DatabaseId (and client below) "does not exist in the current context"
var collections = await client.ReadDocumentCollectionFeedAsync( uri );
foreach( var collection in collections )
{
collectionName.Add( collection.Id);
}
}
}
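A likely fix for the "does not exist in the current context" errors flagged above: DatabaseId is a static member of CosmosDataFixture, so it has to be qualified with the fixture's type name, and the DocumentClient has to be exposed by the fixture rather than referenced directly. A sketch, assuming the fixture is given a public Client property returning its DocumentClient:
public class CosmosDataTests : IClassFixture<CosmosDataFixture>
{
    CosmosDataFixture fixture;
    public CosmosDataTests( CosmosDataFixture fixture )
    {
        this.fixture = fixture;
    }
    [Fact]
    public async Task CheckDatabaseandCollectionCreation()
    {
        List<string> collectionName = new List<string>();
        // the static field is qualified with the fixture type
        var uri = UriFactory.CreateDatabaseUri( CosmosDataFixture.DatabaseId );
        // assumes CosmosDataFixture exposes its DocumentClient via a public Client property
        var collections = await fixture.Client.ReadDocumentCollectionFeedAsync( uri );
        foreach( var collection in collections )
        {
            collectionName.Add( collection.Id );
        }
    }
}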
That's what [SetUp] and [TearDown] are for in NUnit. They are run right before and right after each test case, respectively. In xUnit you would usually implement a default constructor and IDisposable.
For example:
public class TestClass : IDisposable
{
public TestClass()
{
CreateDb();
UploadData();
}
public void Dispose()
{
DeleteDb();
}
[Fact]
public void UnitCase1()
{
...//My set of operation to test this case
...//Assert
}
[Fact]
public void UnitCase2()
{
...//My set of operation to test this case
...//Assert
}
}
As other people have pointed out, such tests are in mainstream parlance not unit tests, but rather integration tests. xUnit.net is a fine framework for those kinds of tests, though, so apart from the semantic distinction, it makes little technical difference.
Apart from setting up the database in the test class' constructor and tearing it down in Dispose, as outlined by Eric Schaefer, you can also use xUnit.net's BeforeAfterTestAttribute. You'll then override Before to set up the database, and override After to tear it down:
// requires: using System.Reflection; and using Xunit.Sdk;
public class UseDatabaseAttribute : BeforeAfterTestAttribute
{
public override void Before(MethodInfo methodUnderTest)
{
CreateDb();
UploadData();
base.Before(methodUnderTest);
}
public override void After(MethodInfo methodUnderTest)
{
base.After(methodUnderTest);
DeleteDb();
}
}
You can then annotate either each test method, or the entire test class with the attribute. I usually just annotate the class:
[UseDatabase]
public class DbTests
{
// Tests go here...
}
Since tests that use a database interact with a shared resource (the database), they can't easily run in parallel. By default, xUnit.net runs tests in parallel, so you may want to disable that. You can do it by adding an xunit.runner.json file:
{
"$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
"parallelizeTestCollections": false
}
Finally, at least if you're using SQL Server, connection pooling will prevent you from deleting the database. You can either turn off connection pooling for your tests, or forcibly close other connections before teardown.
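For example, a teardown along these lines (a sketch: the connection string and database name are placeholders) clears this process's connection pool and forces remaining sessions off before dropping the database; alternatively, adding Pooling=false to the test connection string disables pooling entirely:
using (var master = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    // release any pooled connections held by this test process
    SqlConnection.ClearAllPools();
    master.Open();
    using (var cmd = master.CreateCommand())
    {
        cmd.CommandText = @"
            ALTER DATABASE [TestDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            DROP DATABASE [TestDb];";
        cmd.ExecuteNonQuery();
    }
}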
In my experience in testing, I see two points here:
1. If you are checking that data from the DB is transmitted correctly to another point in the program, that is integration testing, and it should be out of scope for the unit testing plan. Make sure the responsibilities of a unit tester are clear where you work, as some companies skip integration testing levels by assuming that if functional testing is OK, the integrations must be too.
2. You mention at the end:
It is important for me to delete what I have created, as certain test cases will fail if the database is not created when the tests are executed
but
I would like to create it once, populate it once, and then delete it once all test cases have been executed.
If I understand correctly, you need to do it for each test case, as not all test cases check the same scenario, so it looks like those statements are the real problem here.
To answer your question: since it seems you want to automate the process with minimal maintenance for future releases, and I also know how work environments tend to corner you into doing things that shouldn't be done, I would think of a preconditions function and a postconditions function, where you do it once and that's it.
If that is not possible for whatever reason, try to create another test case at the beginning (like a Test Case 0) where you create and populate the DB (separating those steps if needed) and another one at the end where you delete it, as in the sketch below.
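For instance, NUnit's [Order] attribute can force such a Test Case 0 to run first and a cleanup case to run last. A sketch, reusing the helper names from the question:
[TestFixture]
public class OrderedDbTests
{
    [Test, Order(0)]
    public void TestCase0_CreateAndPopulateDb()
    {
        CreateDb();
        UploadData();
    }
    [Test, Order(1)]
    public void UnitCase1() { /* operations and asserts */ }
    [Test, Order(2)]
    public void UnitCase2() { /* operations and asserts */ }
    [Test, Order(3)]
    public void FinalCase_DeleteDb()
    {
        DeleteDb();
    }
}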
I'm not familiar with the framework you are using, but I have a lot of experience in testing, opening test levels, and automating tasks, and I hope my answer is of some help.
Related
I have a large set of integration tests that test a website server. Most of these tests are fine to run in parallel. However, I have a few that change settings and can cause each other to fail when run in parallel.
As a simplified example, let's say I had these tests:
TestPrice_5PercentTax
TestPrice_10PercentTax
TestPrice_NoTax
TestInventory_Add10Items
TestInventory_Remove10Items
The inventory tests will not get in the way of each other, and are not affected by the price tests. But the price tests will change the Tax setting, so that if both 5 and 10 run in parallel, 10 could end up changing the setting before 5 is done, and 5 would fail because it saw 10% tax instead of the 5% it expected.
I want to define a category for the three price tests, and say that they may not run at the same time as one another. They can run at the same time as any other tests, just not the other price tests. Is there a way to do this in MSTest?
MSTest v2 has the following functionality:
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]
// Note the assembly-level attribute above; it goes once per assembly (e.g. in AssemblyInfo.cs)
namespace UnitTestProject1
{
    [TestClass]
    public class TestClass1
    {
        [TestMethod]
        [DoNotParallelize] // this test will not be run in parallel
        public void TestPrice_5PercentTax() { /* your test here */ }
        [TestMethod]
        [DoNotParallelize] // this test will not be run in parallel
        public void TestPrice_10PercentTax() { /* your test here */ }
        [TestMethod]
        [DoNotParallelize] // this test will not be run in parallel
        public void TestPrice_NoTax() { /* your test here */ }
        [TestMethod]
        public void TestInventory_Add10Items() { /* your test here */ }
        [TestMethod]
        public void TestInventory_Remove10Items() { /* your test here */ }
    }
}
More detailed information can be found in the MSTest v2 article at meziantou.net.
I strongly recommend at least a quick read-through of that link, as it will likely help you solve and understand issues with tests running in parallel or sequentially.
I would like to provide a potential solution that I started but did not pursue.
First, I made a class that I could use as an attribute on my test methods.
[AttributeUsage(AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class NoParallel : Attribute
{
public NoParallel(string nonParallelGroupName)
{
SetName = nonParallelGroupName;
}
public string SetName { get; }
}
Then I went and added it to my test methods that will conflict.
[NoParallel("Tax")]
public void TestPrice_5PercentTax();
[NoParallel("Tax")]
public void TestPrice_10PercentTax();
[NoParallel("Tax")]
public void TestPrice_NoTax();
// This test doesn't care
public void TestInventory_Add10Items();
// This test doesn't care
public void TestInventory_Remove10Items();
I gave my test class a static dictionary of mutexes keyed by their names.
private static Dictionary<string, Mutex> exclusiveCategories = new Dictionary<string, Mutex>();
Finally, using a helper to grab all of the "NoParallel" strings the test method has...
public static List<string> NonparallelSets(this TestContext context, ContextHandler testInstance)
{
var result = new List<string>();
var testName = context.TestName;
var testClassType = testInstance.GetType();
var testMethod = testClassType.GetMethod(testName);
if (testMethod != null)
{
var nonParallelGroups = testMethod.GetCustomAttributes<NoParallel>(true);
if (nonParallelGroups.Any())
{
result = nonParallelGroups.Select(x => x.SetName).ToList();
}
}
result.Sort();
return result;
}
... I set up a TestInitialize and TestCleanup to make tests with matching NoParallel strings execute one at a time.
[TestInitialize]
public void PerformSetup()
{
// Get all "NoParallel" strings on the test method currently being run
var nonParallelSets = testContext.NonparallelSets(this);
// A test can have multiple "NoParallel" attributes so do this for all of them
foreach (var setName in nonParallelSets)
{
// If this NoParallel set doesn't have a mutex yet, make one
Mutex mutex;
if (exclusiveCategories.ContainsKey(setName))
{
mutex = exclusiveCategories[setName];
}
else
{
mutex = new System.Threading.Mutex();
exclusiveCategories[setName] = mutex;
}
// Wait for the mutex before you can run the test
mutex.WaitOne();
}
}
[TestCleanup]
public void PerformTeardown()
{
// Get the "NoParallel" strings on the test method again
var nonParallelSets = testContext.NonparallelSets(this);
// Release the mutex held for each one
foreach (var setName in nonParallelSets)
{
var mutex = exclusiveCategories[setName];
mutex.ReleaseMutex();
}
}
We decided not to pursue this because it wasn't really worth the effort. Ultimately we decided to pull the tests that can't run together into their own test class, and mark them with [DoNotParallelize] as H.N suggested.
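A sketch of where we landed (class and method names are placeholders): the conflicting tests move into one class, which is excluded from parallel execution as a whole:
[TestClass]
[DoNotParallelize] // the whole class runs serially, separate from the parallelized tests
public class TaxSettingTests
{
    [TestMethod]
    public void TestPrice_5PercentTax() { /* your test here */ }
    [TestMethod]
    public void TestPrice_10PercentTax() { /* your test here */ }
    [TestMethod]
    public void TestPrice_NoTax() { /* your test here */ }
}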
I have a function that calculates some things and inserts them into a DB. This setup is important for all of the unit tests because they need some data to work on.
Sometimes I need to "flush" the DB, after which all the unit tests point to the wrong ID.
Normally I just run the setup first and then change all the unit tests, but honestly this is taking too long. Is there a way to automate this?
I would like to pass the generated ID into other unit tests.
So the idea was something like this:
[SetupFixture]
public class SetupClass {
[Test]
public void SetupDB(){
setup();
//now marking the result somehow so other tests can pick the result up
return ID; //<--
}
}
public class OtherTests{
[Test]
[Get_ID_From_SetupDB]
public void testBaseOnID(int ID){
//we do stuff now with ID
}
}
PS: I have no problem switching the testing framework if you know one that can do this.
Tests should be independent, and you should generally never pass values between tests.
What you can do in your case, if all the tests are in the same class, is to have a field in your class to hold the ID and a one-time setup function that sets everything up and assigns the correct ID to that field. In NUnit there is the [OneTimeSetUp] attribute for that.
[TestFixture]
public class MyTests
{
private int _testId;
[OneTimeSetUp]
public void SetItUp()
{
...
_testId = whatever;
}
[Test]
public void TestOne()
{
var whatever = _testId;
...
}
}
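Since the question mentions being open to switching frameworks: in xUnit.net the equivalent is a class fixture, where the fixture runs the setup once and each test reads the ID from it. A sketch with placeholder names:
public class DbFixture
{
    public int TestId { get; }
    public DbFixture()
    {
        // run the setup once and capture the generated ID
        TestId = Setup();
    }
    static int Setup() { /* create the data, return its ID */ return 0; }
}
public class OtherTests : IClassFixture<DbFixture>
{
    readonly DbFixture fixture;
    public OtherTests(DbFixture fixture)
    {
        this.fixture = fixture;
    }
    [Fact]
    public void TestBasedOnId()
    {
        var id = fixture.TestId;
        // we do stuff now with id
    }
}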
I have a web service on which I would like to do some unit testing; however, I am not sure how to go about it. Can anyone give any suggestions? Below is the web service. It produces an object with three fields, but only when there are values in the database queue.
[WebMethod]
public CommandMessages GetDataLINQ()
{
CommandMessages result;
using (var dc = new TestProjectLinqSQLDataContext())
{
var command = dc.usp_dequeueTestProject();
result = command.Select(c => new CommandMessages(c.Command_Type, c.Command, c.DateTimeSent)).FirstOrDefault();
return result;
}
}
You don't need to consume your data over the web service to unit test it. You can just create another project in your solution with a reference to your web service project and call the methods directly.
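A minimal sketch, assuming the class containing GetDataLINQ is referenced directly from the test project (the class and test names here are placeholders):
[Test]
public void GetDataLINQ_ReturnsAMessageWhenQueueHasData()
{
    // instantiated directly; no HTTP or WSDL involved
    var service = new TestProjectService();
    var result = service.GetDataLINQ();
    Assert.IsNotNull(result);
}
Note that this still hits the real database, so strictly speaking it is an integration test; the next answer shows how to cut that dependency.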
First up, what you've posted can't really be unit tested at all; by definition, a unit test can have only a single reason to fail. In your case, however, a single test of GetDataLINQ() (the "System Under Test", or SUT) could fail because of a problem with any of the dependencies of the function, namely TestProjectLinqSQLDataContext and usp_dequeueTestProject.
When you call this method from a unit test, these dependencies are at present beyond your control, because you didn't directly create them; they are most likely created in your page class's constructor. (Note: this is an assumption on my part, and I could be wrong.)
Also, because these dependencies are at present real "live" objects with hard dependencies on an actual database being present, your tests aren't able to run independently, which is another requirement for a unit test.
(I'll assume your page's class is "MyPageClass" from now on, and I'll pretend it's not a web page or asmx code-behind, because as other posters have pointed out, that only matters in the context of accessing the code via HTTP, which we're not doing here.)
var sut = new MyPageClass(); //sut now contains a DataContext over which the Test Method has no control.
var result = sut.GetDataLINQ(); //who knows what might happen?
Consider some possible reasons for failure in this method when you call sut.GetDataLINQ():
new TestProjectLinqSQLDataContext() results in an exception because of a fault in TestProjectLinqSQLDataContext's constructor
dc.usp_dequeueTestProject() results in an exception because the database connection fails, or because the stored procedure has changed, or doesn't exist.
command.Select(...) results in an exception because of some as of yet unknown defect in the CommandMessage constructor
Probably many more reasons (i.e. failures to perform correctly, as opposed to an exception being thrown)
Because of the multiple ways to fail, you can't quickly and reliably tell what went wrong (certainly your test runner will indicate what type of exception was thrown, but that requires you to at least read the stack trace; you shouldn't need to do that for a unit test).
So, in order to do this you need to be able to setup your SUT - in this case, the GetDataLINQ function - such that any and all dependencies are fully under the control of the test method.
So if you really want to Unit Test this, you'll have to make some adjustments to your code. I'll outline the ideal scenario and then one alternative (of many) if you can't for whatever reason implement this. No error checking included in the code below, nor is it compiled so please forgive any typos, etc.
Ideal scenario
Abstract the dependencies, and inject them into the constructor.
Note that this ideal scenario will require you to introduce an IOC framework (Ninject, AutoFAC, Unity, Windsor, etc) into your project. It also requires a Mocking framework (Moq, etc).
1. Create an interface IDataRepository, which contains a method DequeueTestProject
public interface IDataRepository
{
CommandMessages DequeueTestProject();
}
2. Declare IDataRepository as a dependency of MyPageClass
public class MyPageClass
{
readonly IDataRepository _repository;
public MyPageClass(IDataRepository repository)
{
_repository = repository;
}
}
3. Create an actual implementation of IDataRepository, which will be used in "real life" but not in your Unit Tests
public class RealDataRepository: IDataRepository
{
readonly MyProjectDataContext _dc;
public RealDataRepository()
{
_dc = new MyProjectDataContext(); //or however you do it.
}
public CommandMessages DequeueTestProject()
{
var command = _dc.usp_dequeueTestProject();
var result = command.Select(c => new CommandMessages(c.Command_Type, c.Command, c.DateTimeSent)).FirstOrDefault();
return result;
}
}
This is where you will need to involve your IOC framework, so that it can inject the correct IDataRepository (i.e. RealDataRepository) whenever MyPageClass is instantiated by the ASP.NET framework.
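The registration itself depends on the container you pick. With Ninject, for example, a single binding is enough (a sketch; wiring the container into ASP.NET's page/service instantiation takes extra plumbing that varies by framework version):
// requires the Ninject package
var kernel = new StandardKernel();
// hand every IDataRepository dependency a RealDataRepository
kernel.Bind<IDataRepository>().To<RealDataRepository>();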
4. Recode your GetDataLINQ() method to use the _repository member
public CommandMessages GetDataLINQ()
{
return _repository.DequeueTestProject();
}
So what has this bought us? Well, consider now how you can test against the following specification for GetDataLINQ:
Must always invoke DequeueTestProject
Must return NULL if there is no data in the database
Must return a valid CommandMessages instance if there is data in the database.
Test 1 - Must always invoke DequeueTestProject
[Test]
public void GetDataLINQ_AlwaysInvokesDequeueTestProject()
{
//create a fake implementation of IDataRepository
var repo = new Mock<IDataRepository>();
//set it up to just return null; we don't care about the return value for now
repo.Setup(r => r.DequeueTestProject()).Returns((CommandMessages)null);
//create the SUT, passing in the fake repository
var sut = new MyPageClass(repo.Object);
//call the method
sut.GetDataLINQ();
//verify that repo.DequeueTestProject() was indeed called
repo.Verify(r => r.DequeueTestProject(), Times.Once);
}
Test 2 - Must return NULL if there is no data in the database
[Test]
public void GetDataLINQ_ReturnsNULLIfDatabaseEmpty()
{
//create a fake implementation of IDataRepository
var repo = new Mock<IDataRepository>();
//set it up to return null
repo.Setup(r => r.DequeueTestProject()).Returns((CommandMessages)null);
var sut = new MyPageClass(repo.Object);
//call the method, but store the result this time:
var actual = sut.GetDataLINQ();
//verify that the result is indeed null:
Assert.IsNull(actual);
}
Test 3 - Must return a valid CommandMessages instance if there is data in the database.
[Test]
public void GetDataLINQ_ReturnsCommandMessagesIfDatabaseNotEmpty()
{
//create a fake implementation of IDataRepository
var repo = new Mock<IDataRepository>();
//set it up to return a populated instance
repo.Setup(r => r.DequeueTestProject()).Returns(new CommandMessages("fake", "fake", "fake"));
var sut = new MyPageClass(repo.Object);
//call the method, but store the result this time:
var actual = sut.GetDataLINQ();
//verify that the result is not null:
Assert.IsNotNull(actual);
}
Because we can mock the IDataRepository interface, we can completely control how it behaves.
We could even make it throw an exception if we needed to test how GetDataLINQ responds to unforeseen results.
This is the real benefit of abstracting your dependencies when it comes to Unit Testing (not to mention, it reduces coupling in your system because dependencies are not tied to a particular concrete type).
Not Quite ideal method
Introducing an IOC framework into your project may be a non-runner, so here is one alternative which is a compromise. There are other ways as well, this is just the first that sprang to mind.
Create the IDataRepository interface
Create the RealDataRepository class
Create other implementations of IDataRepository that mimic the behaviour we created on the fly in the previous example. These are called stubs: basically, they are just classes with a single, predefined behaviour that never changes. This makes them ideal for testing, because you always know what will happen when you invoke them.
public class FakeEmptyDatabaseRepository : IDataRepository
{
    //CallCount tracks whether the method was invoked.
    public int CallCount { get; private set; }
    public CommandMessages DequeueTestProject()
    {
        CallCount++;
        return null;
    }
}
public class FakeFilledDatabaseRepository : IDataRepository
{
    public int CallCount { get; private set; }
    public CommandMessages DequeueTestProject()
    {
        CallCount++;
        return new CommandMessages("", "", "");
    }
}
Now modify the MyPageClass as per the first method, except do not declare IDataRepository on the constructor, instead do this:
public class MyPageClass
{
private IDataRepository _repository; //not read-only
public MyPageClass()
{
_repository = new RealDataRepository();
}
//here is the compromise; this method also returns the original repository so you can restore it if for some reason you need to during a test method.
public IDataRepository SetTestRepo(IDataRepository testRepo)
{
var original = _repository;
_repository = testRepo;
return original;
}
}
And finally, modify your unit tests to use FakeEmptyDatabaseRepository or FakeFilledDatabaseRepository as appropriate:
[Test]
public void GetDataLINQ_AlwaysInvokesDequeueTestProject()
{
//create a fake implementation of IDataRepository
var repo = new FakeFilledDatabaseRepository();
var sut = new MyPageClass();
//stick in the stub:
sut.SetTestRepo(repo);
//call the method
sut.GetDataLINQ();
//verify that repo.DequeueTestProject() was indeed called
var expected = 1;
Assert.AreEqual(expected, repo.CallCount);
}
Note that this second scenario is not an ivory-tower-ideal scenario and doesn't lead to strictly pure unit tests (i.e. if there were a defect in FakeEmptyDatabaseRepository, your test could also fail), but it's a pretty good compromise; however, if possible, strive for the first scenario, as it leads to all kinds of other benefits and gets you one step closer to truly SOLID code.
Hope that helps.
I would change your Code as follows:
public class MyRepository
{
public CommandMessages DeQueueTestProject()
{
using (var dc = new TestProjectLinqSQLDataContext())
{
var results = dc.usp_dequeueTestProject().Select(c => new CommandMessages(c.Command_Type, c.Command, c.DateTimeSent)).FirstOrDefault();
return results;
}
}
}
Then code your Web Method as:
[WebMethod]
public CommandMessages GetDataLINQ()
{
MyRepository db = new MyRepository();
return db.DeQueueTestProject();
}
Then Code your Unit Test:
[Test]
public void Test_MyRepository_DeQueueTestProject()
{
// Add your unit test using MyRepository
var r = new MyRepository();
var commandMessage = r.DeQueueTestProject();
Assert.AreEqual(commandMessage, new CommandMessages("What you want to compare"));
}
This makes your code reusable, and having data repositories like this is a common design pattern. You can now use your repository library everywhere you need it and test it in only one place; if it passes there, it should be good everywhere you use it. This way you don't have to worry about complicated tests calling WCF services. This is a good way of testing web methods.
This is just a short explanation and could be improved much more, but it gets you moving in the right direction for building your web services.
I want to ignore certain tests based on data I pulled from a configuration file during the TestFixtureSetUp. Is there a way to ignore running a test based on parameters?
[TestFixture]
public class MessagesTests
{
private bool isPaidAccount;
[TestFixtureSetUp]
public void Init () {
isPaidAccount = ConfigurationManager.AppSettings["IsPaidAccount"] == "True";
}
[Test]
//this test should run only if `isPaidAccount` is true
public void Message_Without_Template_Is_Sent()
{
//this tests an actual web api call.
}
}
If the account we are testing with is a paid account, the test should run fine; if not, the method will throw an exception.
Is there an extension of the attribute, something like [Ignore(ReallyIgnore = isPaidAccount)]? Or should I write this inside the method and run two separate test cases, e.g.:
public void Message_Without_Template_Is_Sent()
{
if(isPaidAccount)
{
//test for return value here
}
else
{
//test for exception here
}
}
You can use Assert.Ignore() like Matthew states. You could also use Assert.Inconclusive() if you want to categorize the result differently.
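A sketch of how that might look with the fixture from the question:
[Test]
public void Message_Without_Template_Is_Sent()
{
    if (!isPaidAccount)
        Assert.Ignore("Requires a paid account; skipped for this configuration.");
    // test the actual web api call here
}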
This Question/Answer is slightly similar: Programmatically skip an nunit test
I am trying to write some unit tests for some open source code.
One of the tests will verify that only the minimal number of records has been loaded into memory
(i.e. if someone were to call:
DataContext.SomeTable.ToList().Where(s => s.Id <= 10)
the test should fail).
For this to work, DataContext.SomeTable.Local should be reset to hold 0 items before the unit test executes.
At present, this TestFixture (using NUnit, but that should not be relevant) is abstract, with the DbContext injected at instantiation, so that the unit tests can run against different providers.
I don't believe there is a way to clear the loaded entities, but I was wondering how I might dispose of the injected context and create a new DbContext that uses the same database provider.
First, could you post your code?
Second, if I understand correctly, you would like to re-create the context every time? One solution is to pass in a function that creates the context, instead of the context itself, like this:
public class MyTest {
private Func<IMyContext> createContext;
public MyTest(Func<IMyContext> createContext){
this.createContext = createContext;
}
[Test]
public void RunTest(){
using(var context = this.createContext()){
// do stuff with context
}
}
}
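A derived fixture per provider can then supply the factory, matching the abstract-fixture arrangement described in the question (names are placeholders):
[TestFixture]
public class SqlServerTests : MyTest
{
    public SqlServerTests()
        : base(() => new MyContext(/* SQL Server-specific setup */)) { }
}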
public class TestClass
{
private MyContext context;
[SetUp]
public void Setup()
{
// is executed before each test
context = new MyContext();
}
[Test]
public void Test1()
{
context.SomeTable.ToList().Where(s => s.Id <= 10);
}
[TearDown]
public void Complete()
{
context.Dispose();
}
}
Each test should access the table only once.
The point of unit tests is that each test covers a single scenario.