I have recently begun using NUnit and now Rhino Mocks.
I am now ready to start using them in a C# project.
The project involves a database library which I must write.
I have read that tests should be independent and not rely on each other or on the order of test execution.
So suppose I wanted to check the FTP or database connection.
I would write something like
[Test]
public void OpenDatabaseConnection_ValidConnection_ReturnsTrue()
{
    SomeDataBaseLibrary db = new SomeDataBaseLibrary(...);
    bool connectionOk = db.Open();
    Assert.IsTrue(connectionOk);
}
Another test might involve testing some database functionality, like inserting a row.
[Test]
public void InsertData_ValidData_NoExceptions()
{
    SomeDataBaseLibrary db = new SomeDataBaseLibrary(...);
    db.Open();
    db.InsertSomeRow("valid data", ...);
}
I see several problems with this:
1) In order to be independent of the first test, the second test has to open the database connection again. (This also requires the first test to close the connection again when it's done.)
2) If SomeDataBaseLibrary changes, then all the test methods will have to change as well.
3) The speed of the tests will go down when all these connections have to be established every time the tests run.
What is the usual way of handling this?
I realize that I can use mocks of the DataBaseLibrary, but this doesn't test the library itself, which is my first objective in the project.
You can open one connection before all your tests and keep it open until all the tests that use that connection have finished. NUnit provides attributes for methods, much like the [Test] attribute, that specify when such a method should be called:
http://www.nunit.org/index.php?p=attributes&r=2.2.10
Take a look at:
TestFixtureSetUpAttribute (NUnit 2.1)
This attribute is used inside a TestFixture to provide a single set of functions that are performed once prior to executing any of the tests in the fixture. A TestFixture can have only one TestFixtureSetUp method. If more than one is defined the TestFixture will compile successfully but its tests will not run.
So, within the method defined with this attribute, you can open your database connection and make your database object global to the test environment. Then, every single test can use that database connection. Note that your tests are still independent, even though they use the same connection.
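A minimal sketch of that setup, assuming SomeDataBaseLibrary exposes Open() and a matching Close() (the constructor arguments are elided, as in the question):

[TestFixture]
public class DatabaseTests
{
    private SomeDataBaseLibrary db;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Runs once, before the first test in this fixture.
        db = new SomeDataBaseLibrary(/* connection settings, as in the question */);
        Assert.IsTrue(db.Open(), "Could not open the database connection.");
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // Runs once, after the last test in this fixture.
        db.Close(); // assumed counterpart to Open()
    }

    [Test]
    public void InsertData_ValidData_NoExceptions()
    {
        db.InsertSomeRow("valid data" /* other arguments elided */); // reuses the shared connection
    }
}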
I believe that this also addresses your 3rd concern.
I am not quite sure how to answer your 2nd concern, as I do not know the extent of the changes that take place in the SomeDataBaseLibrary class.
Just nit-picking: these tests are integration tests, not unit tests. But that doesn't matter for now, as far as I can tell.
As @sbenderli pointed out, you can use TestFixtureSetUp to open the connection, and write a nearly-empty test that just asserts the state of the DB connection. I think you'll have to give up on the ideal of "1 bug -> 1 failing test" here, as multiple tests require connecting to the test database. If using your data access layer has any side effects (e.g. caching), be extra careful about interacting tests.
This is good. Tests are supposed to demonstrate how to use the SUT (software under test--SomeDataBaseLibrary in this case). If a change to the SUT requires change to how it's used, you want to know. Ideally, if you make a change to SomeDataBaseLibrary that will break client code, it will break your automated tests. Whether you have automated tests or not, you will have to change anything depending on the SUT; automated tests are one additional thing to change, but they also let you know that you have to make said change.
Your third concern is taken care of with TestFixtureSetUp.
One more thing, which you may have taken care of already: InsertData_ValidData_NoExceptions does not clean up after itself, leading to interacting tests. The easiest way I have found to make tests clean up after themselves is to use a TransactionScope: just create one in your SetUp method and Dispose it in TearDown. Works like a charm (with compatible databases) in my experience.
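A minimal sketch of that TransactionScope pattern (this assumes the code under test enlists in the ambient transaction, and requires a reference to System.Transactions):

private TransactionScope _scope;

[SetUp]
public void SetUp()
{
    // Every database operation performed by the test enlists in this ambient transaction.
    _scope = new TransactionScope();
}

[TearDown]
public void TearDown()
{
    // Disposing without calling Complete() rolls back everything the test changed.
    _scope.Dispose();
}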
EDIT:
Once you have the connection logic in a TestFixtureSetUp, you can still test the connection like this:
[Test]
public void Test_Connection()
{
    Assert.IsTrue(connectionOk);
}
One downside to this is that the Exercise step of the test is implicit--it is part of the setup logic. IMHO, that's OK.
Related
I am working on writing a tool which:
- Sets up a connection to SQL and runs a series of stored procedures
- Hits the file system to verify and also delete files
- Talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this in TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced(); // will throw if it's not licenced; makes a call to a library owned by the company
        CleanUpFiles(); // clean up several directories
        CheckConnectionToSql(); // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks is also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD, the only tests I end up having are things like this (assuming I am using FakeItEasy):
A.CallTo(()=>fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which merely assert "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where perhaps TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface declaring methods that mimic much of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to the File.XXX() methods. Then you can mock and verify against the interface instead of trying to set up and clean up real files.
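A minimal sketch of that idea (the interface members shown are illustrative; declare whichever File operations your code actually uses):

// requires: using System.IO;
public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
}

// Production implementation: a thin pass-through to System.IO.File.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) { return File.Exists(path); }
    public void DeleteFile(string path) { File.Delete(path); }
}

In tests, you hand the code under test a mocked IFileSystem and verify that DeleteFile was called, without ever touching the disk.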
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would give the most bang for the buck here. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system, and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
I would like to create unit tests for data dependent code. For example:
A user class that has the usual create, update & delete.
If I wanted to create a test for the "user already exists" scenario, or an update or delete test, I would need to know that a specific user already exists in my database.
In such cases, what would be the best approach to have stand alone tests for these operations that can run in any order?
When you have dependencies like this, think about whether you want to be Integration Testing as opposed to Unit Testing. If you do want to do unit tests, take a look at using mock data.
Integration Testing:
Tests how your code integrates with different parts of a system. This can be making sure your code connects to a database properly or has created a file on the file system. These tests are usually very straightforward and do not have the same constraint of "being able to run in any order." However, they require a specific configuration in order to pass, which means they do not port well from developer to developer.
Unit Testing: Tests your code's ability to perform a function. For example: "Does my function AddTwoNumbers(int one, int two) actually add two numbers?" Unit tests are there to ensure any changes in code do not affect the expected results.
When getting into areas like "Does my code call the database and enter the result correctly?", you need to consider that unit tests are not meant to interact with the system. This is where we get into using "mock data." Mock classes and mock data take the place of an actual system, just to ensure that your code "called out in the way we were expecting." The difficult part is that it can be done, but most of the .NET Framework classes do not provide the interfaces needed to do it easily.
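For instance, a minimal sketch using Rhino Mocks, as used elsewhere on this page (IUserRepository, UserService, and their members are hypothetical names invented for illustration):

public interface IUserRepository
{
    bool Exists(string userName);
}

[Test]
public void CreateUser_UserAlreadyExists_Throws()
{
    // Fake the database: the repository claims the user already exists.
    var repository = MockRepository.GenerateMock<IUserRepository>();
    repository.Stub(r => r.Exists("alice")).Return(true);

    var service = new UserService(repository);

    // No real database needed, so the test can run standalone, in any order.
    Assert.Throws<InvalidOperationException>(() => service.Create("alice"));
}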
See the MSDN page on Testing for more info. Also, consider this MSDN article on Mock Data.
I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the very important properties of good unit tests is independence: tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests. For instance, a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep your Test1 as it is. It only focuses on the functionality of the registration process. You don't have to save that ServiceKey anywhere; just assert inside the test method.
For the second test you have to set up (fake) everything you need for it to run successfully. It is generally a good idea to follow the "Arrange, Act, Assert" principle, where you set up your data to test, act upon it, and then check if everything worked as intended (it also adds more clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get in the first test run. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
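As a minimal sketch with Moq (IRegistrationService, DataService, and their members are hypothetical names invented for illustration):

[Test]
public void AddData_WithFakedServiceKey_Succeeds()
{
    // Arrange: fake the registration step instead of re-running Test1.
    var registration = new Mock<IRegistrationService>();
    registration.Setup(r => r.Register("user")).Returns("fake-service-key");

    // Act: the class under test gets its ServiceKey from the mock.
    var sut = new DataService(registration.Object);
    bool result = sut.AddData("fake-service-key", "some data");

    // Assert
    Assert.IsTrue(result);
}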
Hope this helps.
Consider the following example of a unit test. The comments pretty much explain my problem.
[TestMethod]
public void MyTestMethod()
{
    // generate some objects in the database
    ...

    // make an assert that fails sometimes (for example purposes, this fails always)
    Assert.IsTrue(false);

    // TODO: how do we clean up the data generated in the database now that the test has ended here?
}
There are two ways to do this. One is using the TestInitialize and TestCleanup attributes on methods in the test class. They will always be run before and after each test, respectively.
Another way is to use the fact that test failures are propagated to the test runner via exceptions. This means that a try { } finally { } block in your test can be used to clean up anything after an assert fails.
[TestMethod]
public void FooTest()
{
    try
    {
        // setup some database objects
        Foo foo = new Foo();
        Bar bar = new Bar(foo);
        Assert.Fail();
    }
    finally
    {
        // remove database objects.
    }
}
The try/finally cleanup can get really messy if there are a lot of objects to clean up. What my team has leaned towards is a helper class which implements IDisposable. It tracks which objects have been created and pushes them onto a stack. When Dispose is called, the items are popped off the stack and removed from the database.
[TestMethod]
public void FooTest()
{
    using (FooBarDatabaseContext context = new FooBarDatabaseContext())
    {
        // setup some db objects.
        Foo foo = context.NewFoo();
        Bar bar = context.NewBar(foo);
        Assert.Fail();
    } // calls Dispose, which deletes bar, then foo.
}
This has the added benefit of wrapping the constructors in method calls. If constructor signatures change we can easily modify the test code.
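A minimal sketch of such a helper (NewBar would follow the same pattern as NewFoo; the insert and delete bodies are placeholders for your real database calls):

// requires: using System; using System.Collections.Generic;
public class FooBarDatabaseContext : IDisposable
{
    // Cleanup actions, pushed in creation order.
    private readonly Stack<Action> _cleanup = new Stack<Action>();

    public Foo NewFoo()
    {
        Foo foo = new Foo();
        // ... insert foo into the database here ...
        _cleanup.Push(() => { /* delete foo's row from the database here */ });
        return foo;
    }

    public void Dispose()
    {
        // Pop in reverse creation order so dependent rows are removed first.
        while (_cleanup.Count > 0)
        {
            _cleanup.Pop()();
        }
    }
}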
I think the best answer in situations like this is to think very carefully about what you are trying to test. Ideally a unit test should be trying to test a single fact about a single method or function. When you start combining many things together it crosses over into the world of integration tests (which are equally valuable, but different).
For unit testing purposes, to enable you to test only the thing you want to test, you will need to design for testability. This typically involves additional use of interfaces (I'm assuming .NET from the code you showed) and some form of dependency injection (but doesn't require an IoC/DI container unless you want one). It also benefits from, and encourages you to create very cohesive (single purpose) and decoupled (soft dependencies) classes in your system.
So when you are testing business logic that depends on data from a database, you would typically use something like the Repository Pattern and inject a fake/stub/mock IXXXRepository in for unit testing. When you are testing the concrete repository, you either need to do the kind of database cleanup you are asking about or you need to shim/stub the underlying database call. That is really up to you.
When you do need to create/populate/cleanup the database, you might consider taking advantage of the various setup and teardown methods available in most testing frameworks. But be careful, because some of them are run before and after each test, which can seriously impact the performance of your unit tests. Tests that run too slowly will not be run very often, and that is bad.
In MS-Test, the attributes you would use to declare setup/teardown are ClassInitialize, ClassCleanup, TestInitialize, and TestCleanup. Other frameworks have similarly named constructs.
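For reference, a bare skeleton of those hooks (the method names are arbitrary):

[ClassInitialize]
public static void InitClass(TestContext context)
{
    // runs once, before any test in this class
}

[ClassCleanup]
public static void CleanupClass()
{
    // runs once, after all tests in this class have finished
}

[TestInitialize]
public void InitTest()
{
    // runs before each test; keep this fast
}

[TestCleanup]
public void CleanupTest()
{
    // runs after each test
}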
There are a number of frameworks that can help you with the mocking/stubbing: Moq, Rhino Mocks, NMock, TypeMock, Moles and Stubs (VS2010), VS11 Fakes (VS11 Beta), etc. If you are looking for dependency injection frameworks, look at things like Ninject, Unity, Castle Windsor, etc.
A couple of responses:
If it's using an actual database, I'd argue that it's not a "unit test" in the strictest sense of the term. It's an integration test. A unit test should have no such side-effects. Consider using a mocking library to simulate the actual database. Rhino Mocks is one, but there are plenty of others.
If, however, the whole point of this test is to actually interact with a database, then you'll want to interact with a transient test-only database. In that case part of your automated testing would include code to build the test database from scratch, then run the tests, then destroy the test database. Again, the idea is to have no external side-effects. There are probably multiple ways to go about this, and I'm not familiar enough with unit testing frameworks to really give a concrete suggestion. But if you're using the testing that's built in to Visual Studio then perhaps a Visual Studio Database Project would be of use.
Your question is a little bit too general. Usually you should clean up after every single test. You usually cannot rely on all tests always being executed in the same order, and you have to be sure about what is in your database. For general setup or cleanup, most unit test frameworks provide setUp and tearDown methods that you can override and that will be called automatically. I don't know how that works in C#, but e.g. in JUnit (Java) you have these methods.
I agree with David. Your tests usually should have no side effects. You should set up a new database for every single test.
You'll have to do manual cleanup in this circumstance, i.e. the opposite of generating the objects in the DB.
The alternative is to use mocking tools such as Rhino Mocks, so that the "database" is just an in-memory stand-in.
It depends on what you are actually testing. Looking at the comments I would say yes, though it's difficult to deduce from comments alone. By cleaning up the objects you just inserted, you in practice reset the state of the test; so if you clean up, each test begins from a clean system.
I think the cleanup depends on how you're building the data, so if "old test data" doesn't interact with future test runs, I think it's fine to leave it behind.
An approach I've been taking when writing integration tests is to have the tests run against a different db than the application db. I tend to rebuild the test db as a precondition to each test run. That way you don't need a granular clean-up scheme for each test as each test run gets a clean slate between runs.
Most of my development is done using SQL Server, but in some cases I have run my tests against a SQL Server Compact Edition db, which is fast and efficient to rebuild between runs.
mbUnit has a very handy Rollback attribute that cleans up the database after the test finishes. However, you'll have to configure DTC (the Distributed Transaction Coordinator) in order to be able to use it.
I was having a comparable issue where one test's assertion was preventing cleanup and causing other tests to fail.
Hopefully this is of use to somebody, sometime.
[Test]
public void Collates_Blah_As_Blah()
{
    Assert.False(SINGLETON.Collection.Any());

    for (int i = 0; i < 2; i++)
        Assert.That(PROCESS(ValidRequest) == Status.Success);

    try
    {
        Assert.AreEqual(1, SINGLETON.Collection.Count);
    }
    finally
    {
        SINGLETON.Collection.Clear();
    }
}
The finally block will execute whether the assertion passes or fails, and it doesn't introduce the risk of false passes, which a catch block would cause!
The following is a skeleton of a test method that I use. It allows me to use a try/catch/finally and run the cleanup code in the finally block without losing the Assert failures.
[TestMethod]
public void TestMethod1()
{
    Exception defaultException = new Exception("No real exception.");
    try
    {
        #region Setup
        #endregion

        #region Tests
        #endregion
    }
    catch (Exception exc)
    {
        /* if an Assert fails, this catches its exception so that it can be
           rethrown in the finally block */
        defaultException = exc;
    }
    finally
    {
        #region Cleanup
        // cleanup code goes here
        if (!defaultException.Message.Equals("No real exception."))
        {
            throw defaultException;
        }
        #endregion
    }
}
I am learning TDD and I currently have a method that is working but I thought I'd have a go at rebuilding it using TDD.
The method essentially takes 6 parameters, queries a database, does some logic and returns a List<T>
My initial tests included checking for empty/zero string and int method parameter values, but now I'm not sure what to do. If I wasn't using TDD, I would just write code to find the DB connection string, open a DB connection, query the database, read the values, etc.
Obviously we can't do that in unit testing, so I was after some advice on how to proceed.
Remember that TDD is as much about good design as it is about testing. This method has too much going on; it violates the Separation of Concerns principle.
You've already identified several areas that will need to be tested:
The method essentially takes 6 parameters, queries a database, does some logic and returns a List<T>
You have several discrete steps there, and there are probably a few more hiding in the code. Breaking those up is the name of the game when it comes to TDD.
For starters, it might be a good idea to factor out the piece that performs the logic.
Is your method building a query dynamically? Break that piece out as well and test it to make sure the query is written properly.
You can put the execution of the query into a standalone repository or something similar, and write integration tests against that. That way you only have a simple test hitting the database instead of the current complex method.
If you try to test this as is, you'll likely end up with a monster test that requires a lot of setup and duplicates all of your business logic, and when it breaks it'll be unclear as to what went wrong.
In general, there's nothing "wrong" with using TDD to test database code. However, you might try abstracting out the database code, then mocking it.
The method essentially takes 6 parameters, queries a database, does some logic and returns a List
That seems to be too much for unit-testable code!
Unit-testable code should do very specific things, in small modules. So, in your case you need to refactor and break your method into at least the following:
database query: wrapped inside a DataProvider with a backing interface; your unit test would mock this interface.
does some logic: this is the best candidate for a unit test. This should be a module that just takes the data provider interface, does the logic, and returns the modified list, which you then validate in your unit test (a sketch follows this list).
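A minimal sketch of that split (IDataProvider, LogicModule, and their members are hypothetical names; the filtering stands in for your real logic):

// requires: using System.Collections.Generic; using System.Linq;
public interface IDataProvider
{
    List<int> GetValues();
}

public class LogicModule
{
    private readonly IDataProvider _provider;

    public LogicModule(IDataProvider provider)
    {
        _provider = provider;
    }

    // The "does some logic" piece, now testable with a mocked provider.
    public List<int> GetPositiveValues()
    {
        return _provider.GetValues().Where(v => v > 0).ToList();
    }
}

In the unit test you mock IDataProvider to return a known list and assert on the result, without ever touching the database.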
Also, remember a unit test should cover at least three scenarios for each testable module:
a positive test
a negative test
a test that a meaningful exception is thrown for invalid values.
Hope this is helpful.
Another option is to start a transaction before the test and do a rollback afterwards. This way tests are independent so can still, according to some definitions, be considered unit tests.
Contrary to what's mentioned in other answers, you should refactor the code to get to a better design after the test passes. Then you can verify that your refactoring didn't break anything just by rerunning the test.
You might want to try looking at DbUnit for running unit tests on your data access layer. It puts your database in a known state between test runs preventing corruption of your test database.
You can:
Use the class/test init to raise a blank DB or a copy of a small DB with a known set of data.
In the test method, enter test data (if the DB is empty), then perform the query, then compare the result with the expected result.
In the test/class cleanup, remove the DB.
This tests your unit but is considered an "integration test" by some; the term "unit test" attracts some disagreement because of the ambiguity of the word "unit".
You could also use an in-memory DB or an in-process DB to make the test environment simpler.
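As one possible sketch, using System.Data.SQLite's in-memory mode (the table and queries are placeholders; the whole database disappears when the connection closes, so no cleanup is needed):

[Test]
public void InsertData_ValidData_RowIsStored()
{
    using (var connection = new SQLiteConnection("Data Source=:memory:"))
    {
        connection.Open();
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "CREATE TABLE Users (Name TEXT)";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "INSERT INTO Users (Name) VALUES ('alice')";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "SELECT COUNT(*) FROM Users";
            Assert.AreEqual(1, Convert.ToInt32(cmd.ExecuteScalar()));
        }
    } // the in-memory database is gone here: a clean slate for the next test
}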