Suppose we have two unit tests that depend on each other: TestA depends on TestB. Now we want to change the code so that when we run TestA, TestB is automatically run as well.
[TestMethod]
public void TestA()
{
    string id = "123456789";
    NewUser user = new NewUser();
    Boolean idCheck = user.checkID(id);
    Assert.IsFalse(idCheck);
}
[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:1776/Login.aspx")]
[AspNetDevelopmentServerHost("$(SolutionDir)\\project", "/")]
public void TestB()
{
    Page page = _testContextInstance.RequestedPage;
    Button button = page.FindControl("BNewUser") as Button;
    PrivateObject po = new PrivateObject(page);
    po.Invoke("BNewUser_Click", button, EventArgs.Empty);
    Assert.IsFalse(page.Visible);
}
Unit tests should be F.I.R.S.T., where the I means isolated (not only from external resources, but from other tests as well). TestB should have a single reason to fail: the requirement it verifies is not implemented. In your case it can also fail because TestA was not run first, even though the requirement TestB verifies is implemented. So you can never tell the real reason for a test failure.
If you need some preconditions to be set before running TestB, then you should add that precondition setup to the Arrange part of TestB.
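For example, a rough sketch that folds the precondition from TestA into TestB's own Arrange step (reusing the code from the question), so TestB no longer cares whether TestA ran:

[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:1776/Login.aspx")]
[AspNetDevelopmentServerHost("$(SolutionDir)\\project", "/")]
public void TestB()
{
    // Arrange: perform the precondition that TestA used to provide,
    // so TestB no longer depends on test ordering.
    string id = "123456789";
    NewUser user = new NewUser();
    Assert.IsFalse(user.checkID(id), "Precondition failed: id should not exist yet.");

    Page page = _testContextInstance.RequestedPage;
    Button button = page.FindControl("BNewUser") as Button;

    // Act
    PrivateObject po = new PrivateObject(page);
    po.Invoke("BNewUser_Click", button, EventArgs.Empty);

    // Assert
    Assert.IsFalse(page.Visible);
}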
UPDATE: The article Reuse of Unit Test Artifacts. Allow Us to Dream is just dreaming about re-using unit tests for integration testing:
In theory it looks interesting, but in practice unit tests and integration tests are very different. The former should be isolated; the latter are quite the reverse - they should use real dependencies and external resources. Let's imagine you use some dependency injection framework to provide different implementations of dependencies to your SUT - mocked for unit tests and real for integration tests. Sounds good. But it will make the unit tests very hard to maintain - you will not know how the mocked objects are set up in the current test, because you have moved the Arrange part of the test out of the test and into the dependency injection framework. All unit tests will have only Act and Assert parts, which will have some value, but they will be really hard to understand and maintain.
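To illustrate the maintenance problem (a hypothetical sketch, not from the article - the container, service and numbers are made up): a container-driven unit test ends up with no visible Arrange part, so the test alone cannot tell you how its mocks were configured:

// Hypothetical container-driven test: the mock setup lives in the
// DI configuration, not in the test, so only Act and Assert remain.
[TestMethod]
public void Transfer_WithdrawsFromSourceAccount()
{
    // "Arrange" is hidden inside the container registration for this test profile.
    var service = container.Resolve<ITransferService>();

    // Act
    var result = service.Transfer(from: 1, to: 2, amount: 100m);

    // Assert: why is 900 expected? Only the container configuration knows
    // what balance the mocked repository was set up with.
    Assert.AreEqual(900m, result.RemainingBalance);
}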
The next part is even worse - how would you configure the dependency injection framework to provide different mock setups for every different unit test? Integration tests also require additional set-up and tear-down steps that don't exist in separate unit tests (you have to clear and fill the database, etc.). And I can't even imagine how long it would take to run several thousand integration tests that require a real database, services and files. Unit tests use mocks, thus they are fast; integration tests are slow.
As you can see, these types of tests are very different by their nature, so just don't try to mix them - use each one as it is supposed to be used.
I think you may want to use a common initialization for your unit tests. You can validate the initialization inside TestB before you continue.
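A rough sketch of what that could look like with MSTest (the field and method names are illustrative):

private NewUser _user;

[TestInitialize]
public void CommonInit()
{
    // common initialization shared by TestA and TestB
    _user = new NewUser();
}

[TestMethod]
public void TestB()
{
    // validate the common initialization before continuing
    Assert.IsNotNull(_user);
    // ... rest of TestB ...
}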
Take a look at c# unit test with common code repeated - it may answer your question.
As a rule, I hold the opinion that unit tests should not have dependencies; if you find that you cannot separate the two sets of functionality, you may consider refactoring it.
Tim
I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (i.e. interactions with database)
I can only see an option to run all or one, and also to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute - see here for a description of how to use it.
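For example, a sketch inside your [TestClass] (TestCategory ships with the VS 2010 version of MSTest and later; the method names are illustrative):

[TestMethod]
[TestCategory("Quick")]
public void Parser_HandlesEmptyInput()
{
    // fast, in-memory test
}

[TestMethod]
[TestCategory("Database")]
public void Repository_SavesCustomer()
{
    // slower test that talks to a real database
}

You can then run just one group from the command line with something like mstest /testcontainer:MyProj.Tests.dll /category:"Quick".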
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. These should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your data access classes against an actual backing database. These are generally lengthy, as you may be dealing with an external dependency such as a DB or web service. However, they could still be quick tests depending on which components you are integrating. The key here is that the scope of the test is different from unit tests.
I would create separate test libraries, i.e. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your unit test library will have fewer dependencies than your integration tests, and it will be easy to specify which test group you want to run.
You can set up a continuous integration server, if you are using something like that, to run the tests at different times, knowing that the first group is quicker than the second. For example, unit tests could run immediately after code is checked in to your repository, and integration tests could run overnight. It's easy to set something like this up using TeamCity.
There is the Test List Editor. I'm not at my Visual Studio computer now so I'll just point to this answer.
I am approaching database testing with NUnit. As it's time-consuming, I don't want to run these tests every time.
So I tried creating a base class from which every other database-testing class derives, thinking that if I decorated the base class with the [Ignore] attribute, the rest of the derived classes would be ignored - but that's not happening.
Is there any way to ignore a set of classes with minimal effort?
If you don't want to split out integration and unit tests into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which gives you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
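To address the base-class scenario from the question: NUnit also lets you put a category on a fixture, including an abstract base fixture, which tags every test in it. A rough sketch, assuming the attribute is picked up by derived fixtures in your NUnit version (worth verifying):

using NUnit.Framework;

// The category on the abstract base fixture tags every test in the
// derived fixtures (check inheritance behaviour with your NUnit version).
[Category("Database")]
public abstract class DatabaseTestBase
{
    // shared database setup / teardown would live here
}

[TestFixture]
public class CustomerRepositoryTests : DatabaseTestBase
{
    [Test]
    public void SavesCustomer()
    {
        // slow test that talks to the real database
    }
}

Most runners can then exclude that category, e.g. something like nunit-console /exclude:Database (check the exact syntax for your runner).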
A recommended approach is separating the unit tests that can run in isolation from your integration tests into different projects; then you can choose which project to execute when you run your tests. This makes it easier to run your faster tests more often - multiple times daily or even hourly (and hopefully without ever having to worry about such things as configuration) - while letting your slower integration tests run on a different schedule.
What mocking frameworks in C# allow for parallel execution? (Thread safety) I was trying RhinoMocks, but it doesn't work well with parallel execution. These tests are not using external resources.
Background: I'm easing other developers into unit testing with MSTest and wanted to use multiple cores. That part appears to run properly.
Update:
Ok, so I originally posted the outline below because I was confident that it might be something within your tests such as shared state -- it couldn't possibly be RhinoMocks, right? right?? Well, to your point I think there's something odd with the framework. My sincere apologies!
Using Reflector to casually look at the Rhino Mocks source, we can see that MockRepository has an internal static field to represent the current mock repository. Looking at the usages of this field, it appears that this repository is used extensively by the dynamic proxy interceptor, which strongly suggests that it is not thread-safe.
Here's an example usage that would use that static field.
LastCall.IgnoreArguments();
So, to answer your question - I would recommend you try out Moq. Based on the source, it appears that the Fluent API is static, but it's marked with [ThreadStatic], so it should be okay. Of course this comes with the caveat that you're doing the initial set up of the mocks on the current thread and not across threads.
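For what it's worth, a minimal Moq sketch (IMailer and RegistrationService are hypothetical types) where each test creates its own mocks locally, so no static repository state is shared between parallel tests:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class RegistrationServiceTests
{
    [TestMethod]
    public void Register_SendsWelcomeEmail()
    {
        // Each test owns its mocks; nothing static is shared across threads.
        // IMailer and RegistrationService are illustrative types only.
        var mailer = new Mock<IMailer>();
        var service = new RegistrationService(mailer.Object);

        service.Register("alice@example.com");

        mailer.Verify(m => m.Send("alice@example.com", It.IsAny<string>()), Times.Once());
    }
}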
Let me know how that works.
How to set up parallel test execution in Visual Studio 2010:
This link outlines the mechanism to enable parallel test execution. In short, you need to manually edit the testsettings file and add the parallelTestCount attribute to the Execution node. Setting the parallelTestCount to 0 means that it will automatically balance the test execution across the available cores on the machine.
<TestSettings>
  <!-- etc -->
  <Execution parallelTestCount="0">
    <!-- etc -->
  </Execution>
  <!-- etc -->
</TestSettings>
There are a few caveats:
It only works for standard MSTest "unit tests" (there are a variety of different test types you can create: ordered, coded UI, custom test extensions, etc.). Standard unit tests are the TestClass/TestMethod variety.
It doesn't support data adapters, so there is no support for parallel database-driven tests.
It must run locally. (You can't run tests distributed over multiple machines in parallel.)
Lastly, and this is the most important one - the onus is on you to ensure that your tests are thread safe. This means that if you use any kind of static singleton or shared state, you're going to run into issues (see the sketch below).
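As a trivial illustration of the kind of shared state that causes trouble (hypothetical code, names are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Both tests read and write the same static field, so they pass when run
// serially but can interleave and fail when parallelTestCount > 1.
public static class AppState
{
    public static string CurrentUser;
}

[TestClass]
public class SharedStateTests
{
    [TestMethod]
    public void AdminCanDeleteRecords()
    {
        AppState.CurrentUser = "admin";
        // ... exercise code that reads AppState.CurrentUser ...
        Assert.AreEqual("admin", AppState.CurrentUser);
    }

    [TestMethod]
    public void GuestCannotDeleteRecords()
    {
        AppState.CurrentUser = "guest";
        // ... exercise code that reads AppState.CurrentUser ...
        Assert.AreEqual("guest", AppState.CurrentUser);
    }
}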
So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods; however, unit testing usually only tests public APIs anyway.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework - for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit; for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-initialization method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests, and one non-static method per test, plus any other methods you add.
The static methods let you copy DLLs, set up the test environment and clean up the test environment; the non-static shared methods are for reducing duplicate code; and the actual test methods are for preparing the test-specific input and expected output and comparing them.
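In MSTest terms, a skeleton along those lines might look like this (the class and method names are illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RandomizerTests
{
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // runs once before any test: copy DLLs, set up the test environment
    }

    [ClassCleanup]
    public static void ClassCleanup()
    {
        // runs once after all tests: clean up the test environment
    }

    [TestInitialize]
    public void TestInit()
    {
        // runs before every test: shared per-test setup (reduces duplicate code)
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // runs after every test
    }

    [TestMethod]
    public void Next_ReturnsValueWithinRange()
    {
        // prepare test-specific input and expected output, then compare them
    }
}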
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if they are not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
  src
    main
      com.acme.myapp.model
        User
      com.acme.myapp.web
        RegisterController
    test
      com.acme.myapp.model
        UserTest
      com.acme.myapp.web
        RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the testing framework that you are using: JUnit, NUnit, something else? Each one will document some way to organize the test code. Also, if you are using continuous integration, then that will also affect where and how you place your tests. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C#, then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to construct a strong mathematical proof of the correctness of the algorithm, because otherwise you would have to be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit tests, integration tests and functional tests. Even if your "real" code has multiple projects, you can probably do with one project for each test type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be tough about separating the various kinds of tests. Especially be sure to keep the collection of unit tests separate from the integration tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates, which will be included in your project's solution. Then you will be able to specify which tests to fire during the build of your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly, reference it in your solution and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests inside a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.