I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (e.g. interactions with a database)
I can only see an option to run all or one, and also to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute - see here for a description of how to use it.
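For example, here's a minimal NUnit sketch (the category names and test bodies are placeholders, not taken from the question):

using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    [Test, Category("Quick")]
    public void Total_sums_line_items()
    {
        // No external dependencies, so this can run on every build.
        Assert.AreEqual(3, 1 + 2);
    }

    [Test, Category("Database")]
    public void Order_round_trips_through_repository()
    {
        // This one would talk to a real database, so it belongs in the slow group.
    }
}

The MSTest equivalent has the same shape: [TestMethod, TestCategory("Database")] on the test method.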
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. These should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your Data Access classes with an actual backing database. These are generally lengthy, as you may be dealing with an external dependency such as a DB or web service. However, they could still be quick tests depending on which components you are integrating. The key here is that the scope of the test is different from that of Unit Tests.
I would create separate test libraries, i.e. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your Unit Tests library will have fewer dependencies than your Integration Tests library, and it will be easy to specify which test group you want to run.
If you use a continuous integration server, you can set it up to run the two groups at different times, knowing that the first group is quicker than the second. For example, Unit Tests could run immediately after code is checked in to your repository, and Integration Tests could run overnight. It's easy to set something like this up using TeamCity.
There is the Test List Editor. I'm not at my Visual Studio computer right now, so I'll just point to this answer.
I would like to create a test suite based on the tests defined in this repo:
https://github.com/json-schema/JSON-Schema-Test-Suite
The tests are defined in a number of JSON files, each of which can contain any number of tests.
Obviously I don't want to hand-write a test for each test defined in this repo; I want to auto-discover the tests and run them, ideally using a standard unit test framework like NUnit or xUnit. But if I just loop through all the files in a single test, I don't get much information out of that: it will just be a single test that fails or passes in Jenkins or TeamCity. I could of course output all the relevant data, but that isn't nice.
Is there a way to make each test show up as a single unit test when running my test suite, without having to actually write each test method?
EDIT:
I guess what I am looking for is something like xUnit's ClassData, as described in Pass complex parameters to [Theory]. But then my next question would be whether I can have each test show up with a different name ;)
What I needed to search for on Google was "data-driven unit tests".
NUnit TestCaseSource does exactly what I want it to.
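For anyone finding this later, here's a rough sketch of what that looks like against the suite's file layout (parsing assumes Json.NET; the draft4 folder path and the Validate helper are assumptions for illustration, so swap in the validator you actually want to test):

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class JsonSchemaSuiteTests
{
    // Yields one named test case per test defined in the suite's .json files.
    private static IEnumerable<TestCaseData> SuiteCases()
    {
        foreach (var file in Directory.GetFiles(@"JSON-Schema-Test-Suite\tests\draft4", "*.json"))
        {
            // Each file is an array of groups: { description, schema, tests: [...] }
            foreach (JObject group in JArray.Parse(File.ReadAllText(file)))
            {
                foreach (JObject test in (JArray)group["tests"])
                {
                    yield return new TestCaseData(group["schema"], test["data"], (bool)test["valid"])
                        .SetName(string.Format("{0}: {1}",
                            (string)group["description"], (string)test["description"]));
                }
            }
        }
    }

    [TestCaseSource("SuiteCases")]
    public void SchemaTest(JToken schema, JToken data, bool expectedValid)
    {
        Assert.AreEqual(expectedValid, Validate(schema, data));
    }

    // Placeholder for the JSON Schema validator under test.
    private static bool Validate(JToken schema, JToken data)
    {
        throw new System.NotImplementedException("plug in the validator under test");
    }
}

Because each TestCaseData gets its own name via SetName, every suite entry shows up as a separate test in runners like Jenkins or TeamCity.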
@Kieren Johnstone I would prefer not doing code generation. I considered it and decided against it for the sake of simplicity.
You could generate code using T4 templates or some other automatic means. The massive advantage here is that the tests themselves are already in a decent data structure (you couldn't line things up better than that!).
I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running are loaded? I want to avoid creating new assemblies every time I want to create a new test (it seems rather superfluous and cumbersome, not to mention hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know about the test case sources. It's worth mentioning that I'm using ReSharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp] method.
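Something along these lines (the names here are hypothetical):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class OrderTotalsTests
{
    private List<int> _orderTotals;

    [SetUp]
    public void SetUp()
    {
        // Build only the data this fixture needs, instead of having a
        // shared TestCaseSource construct data for every fixture up front.
        _orderTotals = TestData.SampleOrderTotals();
    }

    [Test]
    public void All_totals_are_positive()
    {
        Assert.That(_orderTotals, Is.All.GreaterThan(0));
    }
}

static class TestData
{
    public static List<int> SampleOrderTotals()
    {
        // In a real suite, the expensive preparation would live here,
        // and only the fixtures that call it pay the cost.
        return new List<int> { 10, 25, 40 };
    }
}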
We're using Microsoft's unit test framework, and we use the Unit Test Wizard to create a one-to-one mapping for the methods in each class from the business layer. The issue is the amount of work needed to go through and determine if we are missing any tests after the initial tests were created.
Currently I have to run the wizard and look for tests that have a "1" appended to the default name [method][test]. Those with that name mean we already have a test for that method. The ones without an appended 1 are methods that don't have a unit test following the default naming convention.
I'm wondering if there is a way to map a unit test to a method with an attribute on the method, so it doesn't take as much work. And yes, I know that if we were following TDD we would write the unit test first. We write the tests in parallel with development (but sometimes in a rush it is missed).
If you are using Visual Studio 2012 and have the appropriate version, it has proper code coverage analysis built in: "Run tests with code coverage".
Otherwise, you can use a diagnostic tool to run code coverage, such as NCover. You can do this from inside Visual Studio using TestDriven.net.
I am approaching database testing with NUnit. As it's time-consuming, I don't want to run it every time.
So I tried creating a base class from which every other database testing class derives, thinking that if I decorated the base class with the [Ignore] attribute, the rest of the derived classes would be ignored too, but that's not happening.
I need to know: is there any way to ignore a set of classes with minimal effort?
If you don't want to split out integration and unit tests into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which gives you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
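With the NUnit 2.x console runner, for instance, the filtering looks like this (the assembly name is a placeholder):

nunit-console MyProj.Tests.dll /include:Integration
nunit-console MyProj.Tests.dll /exclude:Integration

The first command runs only the tests tagged Integration; the second skips them.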
A recommended approach is separating the unit tests that can run in isolation from your integration tests into different projects; then you can choose which project to execute when you run your tests. This makes it easier to run your faster tests more often, multiple times daily or even hourly (and hopefully without ever having to worry about such things as configuration), while letting your slower-running integration tests run on a different schedule.
So I have been running into all kinds of interesting problems in Visual Studio 2008 when running unit tests.
For example, when running Visual Studio unit tests, some tests fail together but pass individually. This happens because some class-level variables in the test class are being reused across the unit tests.
Now normally I would go into each class and fix this problem manually! However, we are talking about tests that number in the thousands!
Now here comes the interesting dilemma: using both the ReSharper test runner and the TFS build server, the tests pass together!
Is there any way that I could configure the VS Unit Test Solution to run in the same fashion? I want to avoid calling the [TestInitialize] methods in the [TestCleanup] methods.
This is usually a byproduct of differently ordered tests. ReSharper 4.x and earlier runs unit tests in the order they appear in the source file. Almost all other unit test runners run tests in alphabetical order. This difference in ordering can (but never should) affect whether tests pass or fail, based on leftover data in a database or statics.
ReSharper 5.0 no longer uses a custom runner, so it should fix these inconsistencies.
However, this type of inconsistency indicates a problem in the tests. Some are leaving data behind that they should be cleaning up and some are dependent on, or hurt by, data left over from a previous test.
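The lasting fix is to have each test establish its own state, e.g. (a minimal MSTest sketch; the shared static is only there to illustrate the order dependence):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CounterTests
{
    // Shared mutable state like this is what makes test ordering matter.
    private static int _count;

    [TestInitialize]
    public void ResetState()
    {
        // Resetting here makes each test independent of run order,
        // without calling [TestInitialize] logic from [TestCleanup].
        _count = 0;
    }

    [TestMethod]
    public void Count_starts_at_zero()
    {
        Assert.AreEqual(0, _count);
    }

    [TestMethod]
    public void Increment_sets_count_to_one()
    {
        _count++;
        Assert.AreEqual(1, _count);
    }
}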