I am working on a test library using NUnit, and am generating a custom report while the tests run.
In the TearDown of my tests I call a method that reports the result of the test. It works just fine if the test passes, but it is never reached if the test fails, is ignored, or is inconclusive.
To make things more difficult, the "//do stuff" in the TearDown can also cause the test to fail, in which case that would still need to be logged. But Assert throws an exception, which means execution leaves the block and never reaches the reporting code.
[SetUp]
public void ExampleSetup()
{
    //whatever
}

[Test]
public void ExampleTest()
{
    //whatever
}

[TearDown]
public void ExampleTearDown()
{
    //do stuff
    someObject.ReportTestResult();
}
More Info
- Using NUnit 3.2.0
Looking at the Framework Extensibility chapter of the NUnit documentation, we can see that you need to use the Action Attribute.
The Action Attribute extension point is:
designed to better enable composability of test logic by creating
attributes that encapsulate specific actions to be taken before or
after a test is run.
Summary of a basic implementation of this extension point
You need to implement the ITestAction interface in your attribute class, say FooBarActionAttribute.
That means implementing BeforeTest(), AfterTest() and the Targets property.
For a basic scenario you perform your custom operations in those two methods.
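A minimal sketch of such an attribute, using the standard NUnit 3 types; the reporting call is the hypothetical one from the question:

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class FooBarActionAttribute : Attribute, ITestAction
{
    // Run this action around individual test cases
    public ActionTargets Targets
    {
        get { return ActionTargets.Test; }
    }

    public void BeforeTest(ITest test)
    {
        // e.g. start timing or write a "test started" entry
    }

    public void AfterTest(ITest test)
    {
        // Runs even when the test (or its TearDown) failed, was ignored or
        // was inconclusive; the outcome is available from the test context.
        var outcome = TestContext.CurrentContext.Result.Outcome;
        // someObject.ReportTestResult(test.FullName, outcome);  // your reporting call
    }
}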
The final thing you need to do is to use this attribute, for example as:
[Test][FooBarActionAttribute()]
public void Should_Baz_a_FooBar() {
    ...
}
This will get executed right before and after the test method runs.
For more advanced techniques please consult the linked documentation, it's pretty straightforward.
Frankly, this is just a bad idea. TearDown is part of your test. That's why it can fail. (But note that NUnit treats an assert failure in teardown as an error rather than a failure)
Since a test can do anything - good or bad, right or wrong - putting your logging into the test code is unreliable. You should be logging in the code that runs tests, not in the tests themselves. As stated above, the TearDown is a part of your test. So is any ActionAttribute you may define, as suggested in another answer. Don't do logging there.
Unfortunately, there is a history of doing logging within the tests because NUnit didn't supply an alternative - at least not one that was easy to use. For NUnit V2, you could create a test event listener addin for exactly this purpose. You still can if that's the version of NUnit you are using.
For NUnit 3.0 and higher, you should create a listener extension that works with the TestEngine. The TestEngine and its extensions are completely separate from your tests. Once you have a well-tested extension, you will be able to use it with all your tests and the tests won't be cluttered with logging code.
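As a rough sketch of such a listener (this assumes a class library referencing the nunit.engine.api assembly; the report handling is only an outline):

using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension]
public class CustomReportListener : ITestEventListener
{
    // The engine calls this with XML fragments as the run progresses
    public void OnTestEvent(string report)
    {
        if (report.StartsWith("<test-case"))
        {
            // parse the fragment and write it to your custom report here
        }
    }
}

The compiled extension then has to be installed where the engine can find it; see the Engine Extensibility documentation for how extensions are located.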
Good afternoon/morning/evening folks,
I was wondering, is it possible for me to "execute" a SpecFlow test via some sort of test harness (not NUnit)?
Previously, a test harness I built ran MSTest tests by calling methods from within the DLL that was created when I compiled the tests.
I'm assuming the same is possible in theory, since a DLL is created, but I'm wondering how it would get all of the arguments, etc.
So in short: is this possible, and if so is there a straightforward way to do it, or am I barking up the wrong tree?
It's possible, but I'm not clear why you would want to.
SpecFlow is basically just a clever way of generating tests. Normally these are NUnit tests, but they can also be switched to use MSTest. When you save your edits to the .feature file, VS runs a Custom Tool that converts your plain text into a .feature.cs file containing a code version of what you wrote, with NUnit attributes applied to the methods.
Later, an NUnit runner (NUnit, ReSharper, Gallio, TeamCity etc.) loads the DLL and looks for all public methods marked with [Test] inside public classes marked with [TestFixture]. Those methods get called.
There is nothing to stop you writing your own runner; however, I'm not sure why you would do that. NUnit provides extensive reporting of the success of your test run in XML format, so it's probably faster just to write something to parse that.
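For example, something along these lines (a sketch only; the file name and attribute names follow the NUnit 3 result format):

using System;
using System.Xml.Linq;

class ResultSummary
{
    static void Main()
    {
        // TestResult.xml is produced by the NUnit runner after the run
        var doc = XDocument.Load("TestResult.xml");
        foreach (var testCase in doc.Descendants("test-case"))
        {
            Console.WriteLine("{0}: {1}",
                (string)testCase.Attribute("fullname"),
                (string)testCase.Attribute("result"));
        }
    }
}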
So I decided to invest some time on this and figured that using reflection was the way to do this task.
Here is some of my code:
// Load the compiled test assembly and find the test class by name
TestRunner.TestDLLString = getDLL(project);
var TestDLL = Assembly.LoadFrom(TestDLLString);
Type myClassType = TestDLL.GetType("SeleniumDPS." + testname);
var instance = Activator.CreateInstance(myClassType);

// Invoke the initialization method before running the test itself
MethodInfo myInitMethod = myClassType.GetMethod("Initialize");
try
{
    myInitMethod.Invoke(instance, null);
}
catch (Exception ex)
{
    //Error logging etc.
}
I then repeat that for the "[TestMethod]", etc. I understand some people dislike reflection, but in this instance performance isn't critical, so it works quite well for us.
So essentially what I'm doing is reading the name of the test from an XML file, then searching the DLL for that test method, executing the Initialize method, and later on executing the test method itself. After the test has run I then execute the cleanup method.
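The later steps follow the same pattern (the method names below are hypothetical; they depend on how your tests are written):

// Invoke the test method itself, then always run cleanup
MethodInfo myTestMethod = myClassType.GetMethod(testMethodName);   // name read from the XML file
MethodInfo myCleanupMethod = myClassType.GetMethod("Cleanup");
try
{
    myTestMethod.Invoke(instance, null);
}
catch (TargetInvocationException ex)
{
    // a failed assertion surfaces here as ex.InnerException
}
finally
{
    if (myCleanupMethod != null)
        myCleanupMethod.Invoke(instance, null);
}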
It might seem a bit hacky, and NUnit might seem the logical choice for some, but as I mentioned earlier I needed a customizable approach. Thanks for all the suggestions though.
I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running are loaded? I want to avoid creating new assemblies every time I want to create a new test (that seems rather superfluous and cumbersome, not to mention hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know whether that applies to the test case sources. It's worth mentioning that I'm using ReSharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp].
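A rough sketch of what I mean (the type and helper names are just placeholders): build the expensive data in a helper and call it from the fixture's setup, so the work only happens for fixtures that are actually part of the current run, instead of in a [TestCaseSource] that NUnit evaluates for every loaded fixture.

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class OrderIntegrationTests
{
    private List<Order> _orders;

    [OneTimeSetUp]
    public void LoadSharedData()
    {
        // the expensive (~1 min) preparation only runs when this fixture runs
        _orders = TestDataHelpers.CreateOrders();
    }

    [Test]
    public void Orders_are_processed()
    {
        foreach (var order in _orders)
        {
            // assert per item instead of per generated test case
        }
    }
}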
I am currently working on a C# solution in VS 2010.
In order to write sufficient unit tests for my business processes I am using the Accessor approach to access and change the internals of my business objects.
The issue that has arisen on my TFS build server, now that I have added Accessors to my object assembly in a number of other test assemblies, is that when my tests run not all of them pass; some fail with a warning along the lines of:
... <Test failed message> ....
... Could not load file 'ObjectLibrary_Accessor, Version=0.0.0.0,
Culture=neutral, PublicKeyToken=ab391acd9394' or one of its dependencies.
...
...
I believe the issue is that as each test assembly is compiled, an ObjectLibrary_Accessor.dll is created with a different strong name. Therefore when some of the tests are compiled, the strong name check fails with the above error even though the dll is in the expected location.
I see a number of options, none of which is particularly attractive. These include:
Not using the _Accessor approach.
Set a different XX_Accessor.dll for each test assembly - Is it possible to change the name of the generated assembly to avoid the clash?
Change my integration build to use a different binaries folder for each test project(how?)
Other options I do not know about?
I would be interested in any advice or experience people have had with this issue, solutions and workarounds (although I do not have time to change my code, so option 1 is not a favorite).
The Accessor approach is a bit fragile, as you've seen.
You can make internal items visible to your test code by using the InternalsVisibleTo assembly attribute.
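For example, in the assembly whose internals you want to test (the test assembly name here is just an example; a strong-named test assembly needs the full PublicKey appended):

// AssemblyInfo.cs (or any source file) of the ObjectLibrary project
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("ObjectLibrary.Tests")]
// strong-named: [assembly: InternalsVisibleTo("ObjectLibrary.Tests, PublicKey=0024000004...")]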
If you want to get at private methods and you're using .NET 4.0 then consider using something like Trespasser to make this easier.
For more options see the answers for this question: How do you unit test private methods?
So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods; however, unit testing usually only tests public APIs.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework - for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit, and for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-init method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests, and one non-static method per test, plus any other methods you add.
The static methods let you copy DLLs, set up the test environment and clean up the test environment; the non-static shared methods are for reducing duplicate code; and the actual test methods are for preparing the test-specific input and expected output and comparing them.
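As a sketch, that layout looks roughly like this with the MSTest attributes from VS's built-in template (NUnit has equivalents: [OneTimeSetUp], [OneTimeTearDown], [SetUp], [TearDown], [Test]); the class and method names are only examples:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RandomizerTests
{
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // copy dlls / set up the shared test environment once
    }

    [ClassCleanup]
    public static void ClassCleanup()
    {
        // clean up the shared test environment once
    }

    [TestInitialize]
    public void TestInit()
    {
        // shared per-test setup (reduces duplicate code)
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // shared per-test tear-down
    }

    [TestMethod]
    public void Next_ReturnsValueInRange()
    {
        // prepare test-specific input and expected output, act, then compare
    }
}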
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if they are not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
    src
        main
            com.acme.myapp.model
                User
            com.acme.myapp.web
                RegisterController
        test
            com.acme.myapp.model
                UserTest
            com.acme.myapp.web
                RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the Testing Framework that you are using. JUnit, NUnit, some other? Each one will document some way to organize the test code. Also, if you are using continuous integration then that would also affect where and how you place your test. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C# then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... It may be better to construct a strong mathematical proof of the algorithm's correctness, because otherwise you would have to be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be tough about separating the various kinds of tests. In particular, be sure to keep the collection of unit tests separate from the integration tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates which will be included in your project's solution. Then, you will be able to specify which tests to fire during the building of your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly, reference it in your solution and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests inside a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.