I have a rather large set of integration tests developed in C#/NUnit. Currently it is one test fixture, one class. In [TestFixtureSetUp] I create a database from a script and populate it with test data, also from a script. My tests do not modify data, so they can run in any order or in parallel.
My problem is that I have too many tests and the file is growing too big, so it looks ugly, and my ReSharper is getting sluggish. I want to split the file, but I really want to create my test database only once. As a temporary measure, I am moving the code which does the actual testing into static methods in other classes and invoking them from a single TestFixture class, as follows:
[Test]
public void YetAnotherIntegrationTest()
{
    IntegrationTestsPart5.YetAnotherIntegrationTest(connection);
}
Yet it still looks ugly and I think there should be a better way.
I am using VS 2008, upgrading to 2010 right now, and SQL Server 2005.
You could split your test class into several partial classes across multiple files. This way you keep one [TestFixtureSetUp] and split the file up for cleanliness.
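For example, a minimal sketch of the partial-class layout (all names here are illustrative):

// File: IntegrationTests.Setup.cs
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public partial class IntegrationTests
{
    protected SqlConnection connection;

    [TestFixtureSetUp]
    public void CreateDatabase()
    {
        // Runs once for the whole fixture, even though the tests live in other files.
        connection = new SqlConnection("<your connection string>");
        connection.Open();
        // run the schema and data scripts here
    }

    [TestFixtureTearDown]
    public void DropDatabase()
    {
        connection.Dispose();
    }
}

// File: IntegrationTests.Part5.cs
public partial class IntegrationTests
{
    [Test]
    public void YetAnotherIntegrationTest()
    {
        // uses the shared connection opened in the setup file
    }
}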
You could consider wrapping your database connection in a singleton class, which would initialize the database while creating the singleton instance. Then you can have as many test classes as you like; just grab the DB connection from the singleton.
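As a rough sketch of that idea (class and member names are made up):

using System.Data.SqlClient;

public sealed class TestDatabase
{
    private static readonly TestDatabase instance = new TestDatabase();

    public static TestDatabase Instance
    {
        get { return instance; }
    }

    public SqlConnection Connection { get; private set; }

    private TestDatabase()
    {
        // Runs exactly once per test run, the first time any test class
        // touches TestDatabase.Instance.
        Connection = new SqlConnection("<your connection string>");
        Connection.Open();
        // create and populate the test database from your scripts here
    }
}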
I create a base class containing the setup method. Then you just inherit from it and don't have to call anything to set up the database in any of the test classes.
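Something like this, as a sketch; note that NUnit runs [TestFixtureSetUp] once per derived fixture, so a static guard is one way to keep the expensive database creation to a single run:

using NUnit.Framework;

public abstract class IntegrationTestBase
{
    private static bool databaseCreated;

    [TestFixtureSetUp]
    public void BaseSetUp()
    {
        // Called once per derived fixture; the static flag ensures the
        // scripts run only once for the whole test run.
        if (databaseCreated)
            return;
        databaseCreated = true;
        // create the database and load the test data here
    }
}

[TestFixture]
public class CustomerTests : IntegrationTestBase
{
    [Test]
    public void SomeTest()
    {
        // uses the already-created database
    }
}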
Related
I've inherited a program containing a very large class file of some 6,000 lines of unruly and totally undocumented code. I'm starting the refactoring process by creating unit tests, but I'm having trouble figuring out the best way to quickly create an instance of this class for testing. Tens of thousands of instances are made of this class when the application is launched, but instantiating it by hand isn't really practical since it has hundreds of attributes and the way that these instances are being constructed is not at all transparent to me. Is there a way I can grab an instance of the class in Visual Studio during debugging to somehow put it in a unit test?
I played around with using this Object Exporter extension, but making this very large and complicated class serializable/deserializable is proving to be its own can of worms. Are there any other suggestions people have for quickly getting started with unit testing an enormous class file?
I am working on writing a tool which:
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development, but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();          // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles();                   // clean up several directories
        CheckConnectionToSql();           // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD, the only tests I end up with are things like this (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which are nothing but "MustHaveHappened" assertions. Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock up and verify the interface instead of trying to setup and cleanup real files.
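A minimal version of that pattern might look like this (the method selection is just an example):

using System.IO;

public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string[] GetFiles(string directory);
}

// Production implementation: a thin pass-through to System.IO.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) { return File.Exists(path); }
    public void DeleteFile(string path) { File.Delete(path); }
    public string[] GetFiles(string directory) { return Directory.GetFiles(directory); }
}

In tests you mock IFileSystem, so something like CleanUpFiles can be verified without touching the disk.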
In this particular method the only thing you can test is that the methods were called, so it's OK to do what you are doing by asserting against the fakes. It's up to you to determine whether this particular test is valuable or not. TDD assumes tests for everything, but I find it more practical to focus your testing on scenarios where it adds value. It's hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for the buck here. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
How do I run SpecFlow tests using NUnit 2.6.1 on a build server?
Also, how do you maintain and organize these tests to run successfully on the build server with multiple automation programmers coding separate tests?
I will try to answer the second part of your question for now:
"how do you maintainin and organize these tests to run successfully on the build server with multiple automation programmers coding seperate tests?"
These kinds of tests have a huge potential to turn into an unmaintainable mess, and here is why:
- steps are used to create contexts (for the scenario test)
- steps can be called from any scenario in any order
Due to these two simple facts, it's easy to compare this model with procedural/structured programming. The scenario context is no different from some global variables, and the steps are methods that can be called at any time from any place.
What my team does to avoid a huge mess in the step files is to keep them as stupid as possible. All a step does is parse input and call services that do the real, meaningful job and hold the context of the current test. We call these services 'xxxxDriver' (where xxxx is the domain object we are dealing with).
a stupid example:
[Given("a customer named (.*)")]
public void GivenACustomer(string customerName)
{
_customerDriver.CreateCustomer(customerName);
}
[Given("an empty schedule for the customer (.*)")]
public void GivenEmptySchedule(string customerName)
{
var customer = _customerDriver.GetCustomer(customerName);
_scheduleDriver.CreateForCustomer(customer);
}
The 'xxxxDriver' will contain all the repositories, web gateways, stubs, mocks, or anything related to the domain object in question. Another important detail is that if you inject these drivers into the step files, SpecFlow will create one instance of each per scenario and share it among all the step files.
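For illustration, a driver for the customer steps above might look roughly like this (the Customer type and in-memory storage are stand-ins for whatever repositories or gateways you really use):

using System.Collections.Generic;

public class Customer
{
    public string Name { get; set; }
}

// Holds the customer-related context of the current scenario; SpecFlow
// injects one instance per scenario into every step file that asks for it.
public class CustomerDriver
{
    private readonly Dictionary<string, Customer> customers =
        new Dictionary<string, Customer>();

    public void CreateCustomer(string customerName)
    {
        // call the real repository / web gateway / stub here
        customers[customerName] = new Customer { Name = customerName };
    }

    public Customer GetCustomer(string customerName)
    {
        return customers[customerName];
    }
}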
This was the best way we found to keep some consistency in how we maintain and extend the steps without stepping on each other's toes with a big team touching the same codebase. Another huge advantage is that it helps us find similar steps by navigating through the usages of the methods in the driver classes.
There's a clear example in the SpecFlow codebase itself (look at the Drivers folder):
https://github.com/techtalk/SpecFlow/tree/master/Tests/TechTalk.SpecFlow.Specs
I have a method ExportXMLFiles(string path) that exports XML files to a certain path, with some elements inside like FirstName, LastName, and MajorSubject. These values are picked up from a database.
Now I need to write a unit test method for it, and I have not worked much on unit tests except simple and straightforward ones. My confusion is: do I need to connect to the database and create an XML file, or do I need to pass hard-coded values while creating the XML file so that I can validate the values in the XML created?
Is there any other way of doing this?
You absolutely do not want to use an actual database in your unit test. It adds a level of complexity that you don't want to deal with in your unit tests, and it makes them less reliable and slower. See if you can break the database functionality into an interface that you can instantiate using a mocking framework. Try looking into something like Moq, or if that isn't enough, check out Moles from Microsoft.
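As a sketch of that idea (the repository interface and types here are invented for the example):

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public class Student
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string MajorSubject { get; set; }
}

public interface IStudentRepository
{
    IList<Student> GetStudents();
}

[TestFixture]
public class ExportTests
{
    [Test]
    public void Exporter_UsesDataFromRepository()
    {
        var repository = new Mock<IStudentRepository>();
        repository.Setup(r => r.GetStudents()).Returns(new List<Student>
        {
            new Student { FirstName = "Ada", LastName = "Lovelace", MajorSubject = "Math" }
        });

        // Pass repository.Object to the exporter instead of a live
        // connection, then assert on the XML it produces.
    }
}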
Edit: Another post mentioned that if the functionality is to write to disk, then your unit test should validate that the file was written out to disk. Using Moles you can simulate file systems, test your file system calls, and simulate write failures or whatever other cases you need, which gives you a lot more flexibility and speed than actually physically writing to disk. Things like a disk write failing would be miserable to test without something like Moles.
A unit test should be small in scope and isolated from dependencies, e.g. databases and file systems. So what you want to do is look at mocking out the database access and what would get written to a file, so that you can run your test without needing particular values in the database. Unit tests should be fast to run, have repeatable results (i.e. run twice, get the same answer), be isolated from other tests, and be able to run in any order.
A unit test is looking at ONE item of functionality and not relying on the behaviour of anything else.
So look at using a pattern such as dependency injection so you can provide (i.e. inject) the database and file system dependencies. Look at a mocking framework such as NMock, or write your own lightweight fake objects that implement the same interface as the dependencies; then you can pass those into the functions being tested.
What is the responsibility of this method?
Is it to dump given data in the form of XML files at a certain path? If yes, then you'd have to check that the files are in fact created.
This is not a unit test but an integration test (because this is the class at the boundary between your app and the filesystem). You should abstract away the input data source (the DB) via an interface/role. You can also create a Role to CreateXmlFile(contents), but I think that's overkill.
// set up a mock data source to supply canned data
// call myObject.ExportToXml(mockDataSource, tempPath)
// verify the files are created in tempPath
Finally, this class needs to implement a Role (DataExporter) so that the tests that use DataExporter are fast and don't have to deal with filesystems (or XML).
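Fleshing out that outline, the test might look something like this (IDataSource, XmlExporter, and its constructor are assumptions for the sketch):

using System.IO;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ExportXmlFilesTests
{
    [Test]
    public void ExportXMLFiles_WritesFilesToGivenPath()
    {
        // unique temp directory so runs don't collide
        var tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(tempPath);
        try
        {
            var dataSource = new Mock<IDataSource>();          // hypothetical role
            // ...set up dataSource to return canned records...

            var exporter = new XmlExporter(dataSource.Object); // hypothetical exporter
            exporter.ExportXMLFiles(tempPath);

            Assert.IsNotEmpty(Directory.GetFiles(tempPath, "*.xml"));
        }
        finally
        {
            Directory.Delete(tempPath, true);
        }
    }
}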
I have recently begun using NUnit and now Rhino Mocks.
I am now ready to start using it in a C# project.
The project involves a database library which I must write.
I have read that tests should be independent and not rely on each other nor on the order of test execution.
So suppose I wanted to check the FTP or database connection.
I would write something like
[Test]
public void OpenDatabaseConnection_ValidConnection_ReturnsTrue()
{
    SomeDataBase db = new SomeDataBaseLibrary(...);
    bool connectionOk = db.Open();
    Assert.IsTrue(connectionOk);
}
Another test might involve testing some database functionality, like inserting a row.
[Test]
public void InsertData_ValidData_NoExceptions()
{
    SomeDataBase db = new SomeDataBaseLibrary(...);
    db.Open();
    db.InsertSomeRow("valid data", ...);
}
I see several problems with this:
1) In order to be independent of the first test, the last test will have to open the database connection again. (This will also require the first test to close the connection it opened.)
2) Another thing is that if SomeDataBaseLibrary changes, then all the test methods will have to change as well.
3) The speed of the tests will go down when all these connections have to be established every time the tests run.
What is the usual way of handling this?
I realize that I can use mocks of the DataBaseLibrary, but this doesn't test the library itself, which is my first objective in the project.
1:
You can open one connection before all your tests and keep it open until all the tests that use that connection have ended. There are certain attributes for methods, much like the [Test] attribute, that specify when a method should be called:
http://www.nunit.org/index.php?p=attributes&r=2.2.10
Take a look at:
TestFixtureSetUpAttribute (NUnit 2.1)
This attribute is used inside a TestFixture to provide a single set of functions that are performed once prior to executing any of the tests in the fixture. A TestFixture can have only one TestFixtureSetUp method. If more than one is defined the TestFixture will compile successfully but its tests will not run.
So, within the method defined with this attribute, you can open your database connection and make your database object global to the test environment. Then, every single test can use that database connection. Note that your tests are still independent, even though they use the same connection.
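In outline, using the asker's SomeDataBaseLibrary (details elided):

using NUnit.Framework;

[TestFixture]
public class SomeDataBaseLibraryTests
{
    private SomeDataBaseLibrary db;

    [TestFixtureSetUp]
    public void OpenConnection()
    {
        // runs once, before any test in the fixture
        db = new SomeDataBaseLibrary(/* connection details */);
        db.Open();
    }

    [TestFixtureTearDown]
    public void CloseConnection()
    {
        // runs once, after the last test in the fixture;
        // use whatever cleanup method the library exposes
        db.Close();
    }
}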
I believe that this also addresses your 3rd concern.
I am not quite sure how to answer your 2nd concern, as I do not know the extent of the changes that take place in the SomeDataBaseLibrary class.
Just nit-picking: these tests are integration tests, not unit tests. But as far as I can tell, that doesn't matter right now.
As #sbenderli pointed out, you can use TestFixtureSetUp to start the connection, and write a nearly-empty test that just asserts the condition of the DB connection. I think you'll just have to give up on the ideal of 1 bug -> 1 test failing here as multiple tests require connecting to the test database. If using your data access layer has any side-effects (e.g. caching), be extra careful here about interacting tests (<-link might be broken).
This is good. Tests are supposed to demonstrate how to use the SUT (software under test; SomeDataBaseLibrary in this case). If a change to the SUT requires a change to how it's used, you want to know. Ideally, if you make a change to SomeDataBaseLibrary that will break client code, it will break your automated tests. Whether you have automated tests or not, you will have to change anything depending on the SUT; automated tests are one additional thing to change, but they also let you know that you have to make said change.
This is taken care of with TestFixtureSetUp.
One more thing which you may have taken care of already: InsertData_ValidData_NoExceptions does not clean up after itself, leading to interacting tests. The easiest way I have found to make tests clean up after themselves is to use a TransactionScope: just create one in your SetUp method and Dispose of it in TearDown. Works like a charm (with compatible databases) in my experience.
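A sketch of the TransactionScope approach:

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class InsertDataTests
{
    private TransactionScope scope;

    [SetUp]
    public void BeginTransaction()
    {
        // connections opened inside the test enlist in this ambient transaction
        scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // scope.Complete() is never called, so Dispose rolls everything back
        scope.Dispose();
    }
}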
EDIT:
Once you have the connection logic in a TestFixtureSetUp, you can still test the connection like this:
[Test]
public void Test_Connection()
{
    Assert.IsTrue(connectionOk);
}
One downside to this is that the Exercise step of the test is implicit: it is part of the setup logic. IMHO, that's OK.