How can I pass my tests directly to NUnit - C#

I want to run C90 tests in NUnit. I have written a short adapter that actually works, but I want to get rid of writing that adapter and instead make a tool that opens NUnit and passes in loadable data so that NUnit can run the tests.
I'll spare you the code I've written so far and just summarize its usage:
I write my C90 tests as callable DLL functions with annotations above them, written as comments. Example: /TEST/. As of now, each test returns 0 or 1 for pass/fail information.
A C# Pathinspector inspects all *.c files for annotations and extracts the load information.
A C# Executer manages the execution of the tests and returns PASS/FAIL information.
What I want to do instead of my Executer is either to create the launch information for NUnit and pass that into a freshly created GUI, or to create C# code to pass into the freshly created GUI.
I know how to create an NUnit GUI runner, but I don't know how to pass information into it.

The NUnit source is available online; however, it uses a lot of singletons during initialisation, so I would argue against building on it.
One way could be to actually write and compile .cs files and pass the compiled assemblies into NUnit (sketched below).
The best approach seems to be writing an xUnit-compliant framework from scratch.
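A minimal sketch of that "write and compile .cs files" option, assuming the Pathinspector yields the annotated function names and that a return value of 0 means pass (the question leaves the convention open); the class, method and path names here are illustrative, not part of the original tool:

// Hypothetical generator: emits an NUnit fixture with one P/Invoke wrapper and
// one [Test] per annotated C function, then compiles it against nunit.framework.dll.
using System;
using System.CodeDom.Compiler;
using System.Collections.Generic;
using System.Text;
using Microsoft.CSharp;

static class TestStubGenerator
{
    // testNames would come from the Pathinspector scan of the *.c files.
    public static string EmitFixture(IEnumerable<string> testNames, string cDllName)
    {
        var sb = new StringBuilder();
        sb.AppendLine("using System.Runtime.InteropServices;");
        sb.AppendLine("using NUnit.Framework;");
        sb.AppendLine("[TestFixture] public class GeneratedCTests {");
        foreach (var name in testNames)
        {
            sb.AppendLine("  [DllImport(\"" + cDllName + "\")] static extern int " + name + "();");
            // Assumption: 0 means pass; invert the assertion if your convention is the opposite.
            sb.AppendLine("  [Test] public void " + name + "_Test() { Assert.AreEqual(0, " + name + "()); }");
        }
        sb.AppendLine("}");
        return sb.ToString();
    }

    public static void Compile(string source, string outputDll, string nunitFrameworkDllPath)
    {
        var options = new CompilerParameters { OutputAssembly = outputDll };
        options.ReferencedAssemblies.Add(nunitFrameworkDllPath); // nunit.framework.dll
        using (var provider = new CSharpCodeProvider())
        {
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException(results.Errors[0].ToString());
        }
    }
}

The resulting assembly can then be opened in the NUnit GUI like any other test DLL.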

Related

Run a C# unit test only if another one passes

I'm trying to run a unit test in C#, but only if another unit test passes. I can't quite get it to work; does anybody know how I can do this?
Edit: It's in NUnit
I encountered this problem a while back. The solution I came up with may not be the most elegant, but it works. What I did to get around the ordering issue is I created my own unit testing framework. It was proprietary to the company I worked for, so I can't share it with you.
In the testing framework, there existed
A template for each type of test & a generic template to aggregate all the tests
A utility to execute each template type
For example, if I was doing an integration test, I would have a "http utility" and the template would contain the endpoint & payload.
The tests I wanted to run needed to be stored in an intermediate data structure such as JSON. This allowed me to map the stored tests onto templates.
Now this is where it gets tricky... Using some fancy T4 templating, I would take the JSON data and deserialize it into a list of templates. Then I would order the tests by execution order and dependency (one test could depend on another, for chaining integration tests). I would then generate a unit test for every template. The generated unit tests would then run on build.
For your question about cancelling test execution if one fails, you can build that into your templating using some fancy logic:
static List<ITestTemplate> requiredTests = new List<ITestTemplate>();
...
if (requiredTests.Any(t => t.Failed))
    Assert.IsTrue(false); // fail subsequent tests
You could accomplish this, though it may not be the cleanest solution, by using the [Order()] attribute. (docs)
This will allow you to run the dependency test as [Order(1)] and the test relying on it as [Order(2)]. You can share your driver across the tests, and if the first test fails, close the driver, causing the other tests that rely on the first test passing to fail.
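A minimal sketch of that approach using NUnit 3's [Order] attribute; here a shared flag stands in for the driver, and the names are purely illustrative:

using NUnit.Framework;

[TestFixture]
public class OrderedDependentTests
{
    static bool prerequisitePassed; // stands in for the shared driver state

    [Test, Order(1)]
    public void Prerequisite()
    {
        // ... real test logic here ...
        prerequisitePassed = true; // only reached if no assertion above failed
    }

    [Test, Order(2)]
    public void DependsOnPrerequisite()
    {
        if (!prerequisitePassed)
            Assert.Fail("Prerequisite test did not pass.");
        // ... test logic that relies on the first test ...
    }
}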

C# Unit tests based on unknown number of json files

I would like to create test suite based on the tests defined in this repo:
https://github.com/json-schema/JSON-Schema-Test-Suite
The tests are defined in a number of json files, that can contain any number of tests.
Obviously I don't want to write a test for each test defined in this repo; I want to auto-discover the tests and run them. I would like to use a standard unit test framework like NUnit or xUnit, but if I just loop through all the files in a single test, I don't get much info out of that. It will just be a single test that fails or passes in Jenkins or TeamCity. I could of course output all the relevant data, but that isn't nice.
Is there a way to make each test show up as a single unit test when running my test suite, without having to actually write each test method?
EDIT:
I guess what I am looking for is something like xUnit ClassData, as described in Pass complex parameters to [Theory]. But then my next question would be whether I can have each test show up with a different name ;)
What I needed to search for on Google was "data driven unit tests".
NUnit TestCaseSource does exactly what I want it to.
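A minimal sketch of what that looks like for a folder of suite files; the tests/draft4 path is an assumption based on the repository layout, and SetName gives each generated case its own display name (which also answers the naming question above):

using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class JsonSchemaSuiteTests
{
    public static IEnumerable<TestCaseData> SuiteFiles()
    {
        // One case per file keeps the sketch short; a real implementation
        // would parse each file and yield one case per test it contains.
        foreach (var file in Directory.GetFiles("tests/draft4", "*.json"))
        {
            yield return new TestCaseData(file)
                .SetName("Schema_" + Path.GetFileNameWithoutExtension(file));
        }
    }

    [TestCaseSource(nameof(SuiteFiles))]
    public void RunSuiteFile(string path)
    {
        Assert.That(File.Exists(path)); // placeholder for the real schema validation
    }
}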
@Kieren Johnstone I would prefer not to do code generation. I considered it and decided against it for the sake of simplicity.
You could generate code using T4 templates or some other automatic means. The massive advantage here is that the tests themselves are already in a decent data structure (you couldn't line things up better than that!).

How to run a SpecFlow test through a test harness?

Good afternoon/morning/evening folks,
I was wondering is it possible for me to "execute" a SpecFlow test via some sort of test harness (not NUnit)?
Previously my test harness I built ran MS Unit tests by calling methods from within the DLL that was created when I compiled the tests.
I'm assuming the same is possible in theory since a DLL is created, but I'm wondering how it will get all of the arguments, etc.
So in short, is this possible if so is there a straight forward way to do this or am I barking up the wrong tree?
It's possible, but I'm not clear why you would want to.
SpecFlow is basically just a clever way of generating tests. Normally these are NUnit tests, but they can also be switched to use MSTest. When you save your edits to the .feature file, VS runs a custom tool that converts your plain text into a .feature.cs file containing a code version of what you wrote, with NUnit attributes applied to the methods.
Later, an NUnit runner (NUnit, ReSharper, Gallio, TeamCity, etc.) loads the DLL and looks for all public methods marked with [Test] inside public classes marked with [TestFixture]. These methods get called.
There is nothing to stop you writing your own runner, but I'm not sure why you would do that. NUnit provides extensive reporting of the success of your test run in XML format, so it's probably faster just to write something to parse that.
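For illustration, a minimal sketch of the discovery the answer describes: load the compiled test assembly and invoke public [Test] methods on public [TestFixture] classes via reflection (a toy runner under those assumptions, not NUnit's actual implementation):

using System;
using System.Linq;
using System.Reflection;
using NUnit.Framework;

class NaiveRunner
{
    static void Main(string[] args)
    {
        var assembly = Assembly.LoadFrom(args[0]); // path to the test DLL

        var testMethods =
            from type in assembly.GetTypes()
            where type.IsPublic && type.GetCustomAttributes(typeof(TestFixtureAttribute), true).Any()
            from method in type.GetMethods(BindingFlags.Public | BindingFlags.Instance)
            where method.GetCustomAttributes(typeof(TestAttribute), true).Any()
            select method;

        foreach (var method in testMethods)
        {
            var fixture = Activator.CreateInstance(method.DeclaringType);
            try
            {
                method.Invoke(fixture, null);
                Console.WriteLine("PASS " + method.Name);
            }
            catch (Exception ex)
            {
                var reason = ex.InnerException ?? ex; // Invoke wraps the test's exception
                Console.WriteLine("FAIL " + method.Name + ": " + reason.Message);
            }
        }
    }
}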
So I decided to invest some time in this and figured that reflection was the way to do the task.
Here is some of my code:
TestRunner.TestDLLString = getDLL(project);
var TestDLL = Assembly.LoadFrom(TestDLLString);
Type myClassType = TestDLL.GetType("SeleniumDPS." + testname);
var instance = Activator.CreateInstance(myClassType);
MethodInfo myInitMethod = myClassType.GetMethod("Initialize");
try
{
    myInitMethod.Invoke(instance, null);
}
catch (Exception ex)
{
    // Error logging etc.
}
I then repeat that for the "[TestMethod]" etc. I understand some people dislike reflection, but in this instance the performance isn't critical, so it works quite well for us.
So essentially what I'm doing is reading the name of the test from an XML file, then searching the DLL for that test method, then executing the Initialize method, and later on executing the test method itself. After the test is run I then execute the cleanup method.
It might seem a bit hacky and NUnit might seem the logical choice for some, but as I mentioned earlier I needed a customizable approach. Thanks for all the suggestions though.

NUnit - Loads ALL TestCaseSources even if they're not required by current test

I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running are loaded? I want to avoid creating new assemblies every time I want to create a new test (that seems rather superfluous and cumbersome, not to mention hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know whether that applies to the test case sources. It's worth mentioning that I'm using ReSharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into a helper method and calling it in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in [SetUp].
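A minimal sketch of that idea, assuming NUnit 3 (in NUnit 2 the attribute is [TestFixtureSetUp]): the TestCaseSource itself stays cheap (just identifiers), and the expensive preparation moves into a helper run from [OneTimeSetUp], so it only happens when the fixture is actually selected. Names and data here are illustrative:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class ExpensiveSourceTests
{
    // Cheap source: NUnit can enumerate these identifiers at load time
    // without doing any of the expensive preparation.
    static IEnumerable<string> CaseIds => new[] { "case-a", "case-b" };

    static Dictionary<string, string> prepared;

    [OneTimeSetUp]
    public void PrepareData()
    {
        // The slow (~1 min) preparation happens here, once, and only when
        // this fixture is actually part of the run.
        prepared = CaseIds.ToDictionary(id => id, id => "payload for " + id);
    }

    [TestCaseSource(nameof(CaseIds))]
    public void Run(string caseId)
    {
        Assert.IsTrue(prepared.ContainsKey(caseId));
    }
}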

where should I put my test code for my class?

So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods, however unit testing usually only tests public APIs.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework: for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit, and for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class init method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests, and one non-static method per test, plus any other methods you add.
The static methods let you copy DLLs, set up the test environment and clean up the test environment; the non-static shared methods are for reducing duplicate code; and the actual test methods are for preparing the test-specific input and expected output and comparing them.
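As a rough illustration of that layout using NUnit 3 attributes (MSTest uses [ClassInitialize]/[ClassCleanup]/[TestInitialize]/[TestCleanup] for the same roles; the class and test names are made up):

using NUnit.Framework;

[TestFixture]
public class RandomizerTests
{
    [OneTimeSetUp]
    public void FixtureSetUp() { /* copy dlls, set up the test environment */ }

    [OneTimeTearDown]
    public void FixtureTearDown() { /* clean up the test environment */ }

    [SetUp]
    public void PerTestSetUp() { /* shared per-test initialisation */ }

    [TearDown]
    public void PerTestTearDown() { /* shared per-test cleanup */ }

    [Test]
    public void Next_StaysWithinRequestedRange()
    {
        // prepare test-specific input and expected output, then compare them
        Assert.Pass();
    }
}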
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT: Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
  src
    main
      com.acme.myapp.model
        User
      com.acme.myapp.web
        RegisterController
    test
      com.acme.myapp.model
        UserTest
      com.acme.myapp.web
        RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the Testing Framework that you are using. JUnit, NUnit, some other? Each one will document some way to organize the test code. Also, if you are using continuous integration then that would also affect where and how you place your test. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C#, then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to construct a strong mathematical proof of the algorithm's correctness, because otherwise you have to be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be tough about separating the various kind of tests. Especially be sure to keep the collection of unit-tests separate from the integration-tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates which will be included in your project's solution. Then, you will be able to specify which tests to fire during the building of your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly, add it to your solution, and start creating methods for all the objects you need to test. For bigger projects, I prefer to locate these tests inside a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.
