Dependent Tests - C#

Alright, so I'm using the Microsoft Test Framework (MSTest) for testing and I need to somehow build dependent tests. Why, you might ask? Because one of the tests ensures that I can load data from the database, and the others need to operate against that set; repeating the round trips for every test would undermine the efficiency of the automated suite.
I have been searching and just can't seem to find a way to do a couple of things:
Decorate the test methods so that they are seen as dependent.
Persist the set between tests.
What have I tried?
Well, regarding the decoration for dependent tests, not much, because I can't seem to find an attribute built for that.
Regarding the persistence of the set, I've tried a private class variable and adding to the Properties of the test context. Both resulted in the loss of the set between tests.
I hope that you can help me out here!

Create your tests separately and then use an Ordered Test to run them in the order you want.
If any of the tests fails, the whole ordered test is aborted and considered failed.

I think what you need is an ordered test list. You can create one in your test project under 'New Item...'. An ordered test list is a list of tests in a specified order that are executed in the same context.
By the way: unit tests should test only the smallest unit in an application, not a huge set of operations, because when a single unit is broken you can easily find which one.
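This also explains why the private class variable in the question kept disappearing: MSTest creates a fresh instance of the test class for every test method, so instance fields reset between tests, while static fields survive for the class. A minimal sketch of loading the set once per class (the in-memory list stands in for the real database load):

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserSetTests
{
    // Static on purpose: MSTest news up the test class per test method,
    // so instance fields do not survive between tests, but statics do.
    private static List<string> _users;

    [ClassInitialize]
    public static void LoadOnce(TestContext context)
    {
        // Stand-in for the real database load: one round trip per class
        _users = new List<string> { "alice", "bob" };
    }

    [TestMethod]
    public void LoadedSet_IsNotEmpty()
    {
        Assert.AreNotEqual(0, _users.Count);
    }
}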

Related

Run a C# unittest only if another one passes

I'm trying to run a unit test in C#, but only if another unit test passes. I can't quite get it to work; does anybody know how I can do this?
Edit: It's in NUnit.
I encountered this problem a while back. The solution I came up with may not be the most elegant, but it works. To get around the ordering issue, I created my own unit testing framework. It was proprietary to the company I worked for, so I can't share it with you.
In the testing framework, there existed:
A template for each type of test & a generic template to aggregate all the tests
A utility to execute each template type
For example, if I was doing an integration test, I would have a "http utility" and the template would contain the endpoint & payload.
The tests I wanted to run would be stored in an intermediate format such as JSON, which let me turn the tests into templates.
Now this is where it gets tricky... Using some fancy T4 templating, I would take the JSON data and deserialize it into a list of templates. Then I would order the tests by execution order and dependency (one test could depend on another, for chaining integration tests), and generate a unit test for every template. The generated unit tests would then run on build.
For your question about canceling test execution if one fails, you can build that into your templating with logic along these lines:
static List<ITestTemplate> requiredTests = new List<ITestTemplate>();
...
// Fail fast when any prerequisite test has already failed
if (requiredTests.Any(t => t.Failed))
    Assert.Fail("A required test failed, so this dependent test cannot run.");
You could accomplish this, though it may not be the cleanest solution, by using the [Order()] attribute (docs).
This lets you run the dependency test as [Order(1)] and the test relying on it as [Order(2)]. You can share your driver across the tests, and if the first test fails, close the driver, causing the other tests that rely on the first test passing to fail.
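A minimal sketch of that ordering pattern, assuming NUnit 3.2+ for the [Order] attribute; the shared-driver detail is elided and a static flag stands in for it (names are illustrative):

using NUnit.Framework;

[TestFixture]
public class DependentTests
{
    // Static so the flag survives across test methods in this fixture
    private static bool _firstTestPassed;

    [Test, Order(1)]
    public void Prerequisite_Passes()
    {
        // ... the real prerequisite work goes here ...
        _firstTestPassed = true; // only reached if nothing above threw
    }

    [Test, Order(2)]
    public void Dependent_RunsOnlyAfterPrerequisite()
    {
        if (!_firstTestPassed)
            Assert.Inconclusive("Prerequisite test did not pass.");
        // ... the dependent assertions go here ...
    }
}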

Unit testing C# database dependent code

I would like to create unit tests for data dependent code. For example:
A user class that has the usual create, update & delete.
If I wanted to create a test for the "user already exists" scenario, or an update or delete test, I would need to know that a specific user already exists in my database.
In such cases, what would be the best approach to have stand alone tests for these operations that can run in any order?
When you have dependencies like this, think about whether you want to be Integration Testing as opposed to Unit Testing. If you do want unit tests, take a look at using mock data.
Integration Testing:
Tests how your code integrates with different parts of a system. This can be making sure your code connects to a database properly or has created a file on the filesystem. These tests are usually very straightforward and do not have the same constraint of "being able to run in any order." However, they require a specific configuration in order to pass, which means they do not transfer well from developer to developer.
Unit Testing: Tests your code's ability to perform a function. For example, "Does my function AddTwoNumbers(int one, int two) actually add two numbers?" Unit tests ensure that changes in code do not affect the expected results.
When getting into areas like "Does my code call the database and enter the result correctly?", you need to consider that unit tests are not meant to interact with the system. This is where we get into using "mock data." Mock classes and mock data take the place of an actual system, just to ensure that your code "called out in the way we were expecting." The difficult part is that it can be done, but most of the .NET Framework classes do not provide the interfaces needed to do it easily.
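Here is a sketch of that mock-data idea for the "user already exists" scenario, assuming MSTest v2; IUserStore, FakeUserStore and the inline service logic are invented for illustration:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical seam: the code under test talks to this interface
// instead of calling the database directly.
public interface IUserStore
{
    bool UserExists(string userName);
}

// Hand-rolled mock: no framework needed, just a canned answer
public class FakeUserStore : IUserStore
{
    public bool UserExists(string userName) => userName == "existing.user";
}

[TestClass]
public class UserServiceTests
{
    [TestMethod]
    public void Create_WhenUserAlreadyExists_Throws()
    {
        var store = new FakeUserStore();

        // Stand-in for the real create logic from the question
        Action create = () =>
        {
            if (store.UserExists("existing.user"))
                throw new InvalidOperationException("User already exists.");
        };

        Assert.ThrowsException<InvalidOperationException>(create);
    }
}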
See the MSDN page on Testing for more info. Also, consider this MSDN article on Mock Data.

NUnit - Loads ALL TestCaseSources even if they're not required by current test

I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running load? I want to avoid creating new assemblies every time I want to create a new test (seems rather superfluous and cumbersome, not to mention, hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know about the test case sources. It's worth mentioning that I'm using Resharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp].
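One way to read that advice, assuming NUnit: wrap the expensive preparation in Lazy<T> so it runs at most once, and only for the suite that actually needs it (names are illustrative):

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class SuiteWithExpensiveData
{
    // Lazy<T> defers the ~1 min. preparation until a test in this
    // fixture actually asks for the data, and runs it at most once.
    private static readonly Lazy<IReadOnlyList<string>> SharedCases =
        new Lazy<IReadOnlyList<string>>(PrepareCases);

    private static IReadOnlyList<string> PrepareCases()
    {
        // ... the expensive preparation goes here ...
        return new[] { "caseA", "caseB" };
    }

    [SetUp]
    public void SetUp()
    {
        // Touch only this suite's data; other suites' sources stay cold
        Assume.That(SharedCases.Value, Is.Not.Empty);
    }
}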

Using TestContext to share information between unit tests

I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the very important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic or behaviour (a method), you can extract the common code into a helper method and call it from different tests. For instance, a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
var userMock = MockRepository.GenerateMock<IUser>();
userMock.Expect(x => x.UserName).Return(userName);
return userMock;
}
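Each test can then arrange its own user instead of depending on state left behind by an earlier test; a hypothetical usage:

[TestMethod]
public void Register_NewUser_ReturnsServiceKey()
{
    var user = CreateUser("alice"); // fresh, independent arrangement
    // ... act on the system under test and assert, using only this user ...
}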
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep Test1 as it is. It only focuses on the registration process; you don't have to save that ServiceKey anywhere, just assert inside the test method.
For the second test, you have to set up (fake) everything it needs to run successfully. It is generally a good idea to follow the "Arrange, Act, Assert" principle, where you set up your data, act upon it, and then check that everything worked as intended (it also adds clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get in the first test run. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
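A minimal Moq sketch of faking the key, where IRegistrationService and the key value are invented stand-ins for the question's types:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical stand-in for whatever Test1 exercised
public interface IRegistrationService
{
    string Register(string userName);
}

[TestClass]
public class AddDataTests
{
    [TestMethod]
    public void AddData_WithFakedServiceKey_Succeeds()
    {
        // Arrange: fake the ServiceKey that Test1 would have produced
        var registration = new Mock<IRegistrationService>();
        registration.Setup(r => r.Register("alice")).Returns("known-test-key");

        var serviceKey = registration.Object.Register("alice");

        // Act ... (call the code that needs the key)

        // Assert: the key is fully under the test's control
        Assert.AreEqual("known-test-key", serviceKey);
    }
}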
Hope this helps.

Writing Unit Tests for method that queries database

I am learning TDD and I currently have a method that is working but I thought I'd have a go at rebuilding it using TDD.
The method essentially takes 6 parameters, queries a database, does some logic and returns a List<T>
My initial tests included checking for empty string and zero int method parameter values, but now I'm not sure what to do. If I wasn't using TDD, I would just write code to find the DB connection string, open a DB connection, query the database, read the values, and so on.
Obviously we can't do that in unit testing, so I was after some advice on how to proceed.
Remember that TDD is as much about good design as it is about testing. This method has too much going on; it violates the Separation of Concerns principle.
You've already identified several areas that will need to be tested:
The method essentially takes 6 parameters, queries a database, does some logic and returns a List<T>
You have several discrete steps there, and there are probably a few more hiding in the code. Breaking those up is the name of the game when it comes to TDD.
For starters, it might be a good idea to factor out the piece that performs the logic.
Is your method building a query dynamically? Break that piece out as well and test it to make sure the query is written properly.
You can put the execution of the query into a standalone repository or something similar, and write integration tests against that. That way you only have a simple test hitting the database instead of the current complex method.
If you try to test this as is, you'll likely end up with a monster test that requires a lot of setup and duplicates all of your business logic, and when it breaks it'll be unclear as to what went wrong.
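As an illustration of that split (all names here are invented for the sketch): the query goes behind an interface, and the logic becomes a plain class that can be unit tested with a fake repository.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative seam: the data access hides behind this interface
public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderTotals(int customerId, DateTime from, DateTime to);
}

public class OrderReportBuilder
{
    private readonly IOrderRepository _repository;

    public OrderReportBuilder(IOrderRepository repository)
    {
        _repository = repository;
    }

    public List<decimal> TotalsAbove(int customerId, DateTime from, DateTime to, decimal threshold)
    {
        // The "some logic" from the question lives here, isolated from
        // any knowledge of connection strings or SQL.
        return _repository.GetOrderTotals(customerId, from, to)
                          .Where(t => t > threshold)
                          .ToList();
    }
}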
In general, there's nothing "wrong" about using TDD to test database code. However, you might try abstracting out the database code, then mocking it out.
The method essentially takes 6 parameters, queries a database, does some logic and returns a List<T>
That seems to be too much for one unit-testable piece of code!
A unit-testable piece of code should do very specific things in small modules. So, in your case, you need to refactor and break your method into at least the following:
Database query: wrapped inside a DataProvider with a backing interface. Your unit test would mock this interface.
Does some logic: this is the best candidate for a unit test. This should be a module that takes the data provider interface, does the logic, and returns the modified list, which you validate in your unit test.
Also, remember that a unit test should cover at least three scenarios for each testable module (sketched below):
a positive test
a negative test
a test that a meaningful exception is thrown for invalid values
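A small NUnit sketch of those three scenarios against a hypothetical parsing module (ParsePair is invented for illustration):

using System;
using NUnit.Framework;

[TestFixture]
public class ParserTests
{
    // Hypothetical module under test: parses "a,b" into two ints
    private static (int, int) ParsePair(string input)
    {
        if (input == null) throw new ArgumentNullException(nameof(input));
        var parts = input.Split(',');
        return (int.Parse(parts[0]), int.Parse(parts[1]));
    }

    [Test] // positive: valid input gives the expected result
    public void ParsePair_ValidInput_ReturnsBothValues()
        => Assert.That(ParsePair("1,2"), Is.EqualTo((1, 2)));

    [Test] // negative: malformed input does not yield a bogus result
    public void ParsePair_MissingSeparator_Throws()
        => Assert.Throws<IndexOutOfRangeException>(() => ParsePair("12"));

    [Test] // meaningful exception for an invalid value
    public void ParsePair_Null_ThrowsArgumentNull()
        => Assert.Throws<ArgumentNullException>(() => ParsePair(null));
}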
Hope this is helpful.
Another option is to start a transaction before each test and roll it back afterwards. That keeps the tests independent, so they can still, by some definitions, be considered unit tests.
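A minimal NUnit sketch of that rollback pattern, assuming the data access enlists in the ambient System.Transactions transaction:

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class UserRepositoryTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Every write the test performs happens inside this ambient transaction
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // Disposing without calling Complete() rolls everything back,
        // so the database looks untouched to the next test
        _scope.Dispose();
    }

    // ... tests that insert/update/delete rows go here ...
}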
Contrary to what's mentioned in other answers, you should refactor the code to get to a better design after the test passes. Then you can verify that your refactoring didn't break anything just by rerunning the test.
You might want to try looking at DbUnit for running unit tests on your data access layer. It puts your database in a known state between test runs, preventing corruption of your test database.
You can:
Use the class/test init to set up a blank DB or a copy of a small DB with a known set of data.
In the test method, enter test data (if the DB is empty), then perform the query and compare the result with the expected result.
In the test/class cleanup, remove the DB.
This tests your unit but is considered an "integration test" by some; the term "unit test" attracts some disagreement due to the ambiguity of the word "unit".
You could also use an in-memory DB or an in-process DB to make the test environment simpler.
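For the in-memory route, here is a minimal sketch assuming the Microsoft.Data.Sqlite package and NUnit (table and data are invented for illustration):

using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class InMemoryDatabaseTests
{
    private SqliteConnection _connection;

    [SetUp]
    public void CreateDatabase()
    {
        // ":memory:" gives each test a fresh database that vanishes on close
        _connection = new SqliteConnection("Data Source=:memory:");
        _connection.Open();

        using var create = _connection.CreateCommand();
        create.CommandText = "CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT)";
        create.ExecuteNonQuery();
    }

    [TearDown]
    public void DropDatabase() => _connection.Dispose();

    [Test]
    public void InsertedUser_CanBeReadBack()
    {
        using var insert = _connection.CreateCommand();
        insert.CommandText = "INSERT INTO Users (Name) VALUES ('alice')";
        insert.ExecuteNonQuery();

        using var count = _connection.CreateCommand();
        count.CommandText = "SELECT COUNT(*) FROM Users";
        Assert.AreEqual(1L, (long)count.ExecuteScalar());
    }
}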
