C# Gallio/MBUnit common setup assembly

My solution has many projects, and most of them will have associated unit test projects. I have created a UnitTestCommon project to act as a stand-alone assembly with a local database and setup methods that refresh its data by parsing XML records into the tables. It also has helpers that test the db connection and expose public properties for the connection string and other values needed to interact with the db.
I want UnitTestCommon to initialize itself when it is referenced by the other unit test projects.
I know that if you add the [AssemblyFixture] attribute to a "global" class, its [FixtureSetUp] method will run before all other tests. But because I want to keep my unit test projects in separate assemblies, this will not self-initialize when UnitTestCommon is merely referenced by them.
I could always call an Initialize method in UnitTestCommon, but... there has to be a better way.
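For reference, the per-assembly pattern described above looks roughly like this (TestDatabase.Refresh is a hypothetical stand-in for the XML-parsing setup in UnitTestCommon):

```csharp
using MbUnit.Framework;

// Runs once per test assembly that DECLARES it; a mere assembly
// reference to UnitTestCommon will not trigger it, which is the
// limitation described above.
[AssemblyFixture]
public class GlobalSetup
{
    [FixtureSetUp]
    public void SetUp()
    {
        // Hypothetical helper in UnitTestCommon that rebuilds the
        // local database from the XML records.
        TestDatabase.Refresh();
    }
}
```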

Related

Functional tests fail when run together, but pass when run separately

I'm implementing a project using .NET Core and microservices based on eShopOnContainers. In my functional tests I previously put all the tests in one class called IntegrationEventsScenarios.cs, and now I need to separate the tests by their related services. But when I run the Basket, Catalog and Ordering test scenarios together, the tests do not pass.
I have tried putting, for example, the Basket ScenarioBase and TestStartup classes in the Ordering folder to remove any shared resource, but it didn't work.

How to keep static state between nunit multiple test assemblies execution

I have a base class that creates some test databases, which are then used by tests in multiple assemblies. Those tests are methods in classes that inherit from this base class.
However, it seems that when the runner moves to another assembly, the base class recreates the test databases, losing considerable time.
Is there a way to preserve this state between assemblies in NUnit?
Thanks.

Accessing App.Config Connection Strings in Load Test Plugin

I'm currently developing my first load test ever, so please bear with me ~
The load test itself runs a set of unit tests that POST different requests to an HTTP handler. Depending on which agent the load test is running on, the requests are made against dev/staging environments and then passed to a downstream service that handles each request and executes INSERT statements against a couple of databases. I wanted to build and include a simple plugin that implements ILoadTestPlugin and provides its own set of instructions for the LoadTestCompleted event handler. This plugin is currently contained within a separate class library project in the same solution that houses the load test itself.
Before I get into describing the problem, I'd like to point out that I am currently running the load test locally against the same handler that I've set up in IIS.
I'm running into an issue when the event fires, and my method attempts to establish an entity connection to the target database context (currently using Entity Framework 4). It appears that the plugin code is failing to find the app.config contained within the load test project. The exception message / stack trace points out that the connection string is invalid, but I have a hankering that the issue is that it cannot find it. On top of that, this same connection string is used throughout our code base in numerous places.
I can rule out the chance of my connection strings being invalid, because if I create a simple unit test method that uses the same configuration file to execute the same code, it works just fine. The unit test runs from the project's output directory, so it has no problem locating the config file, whereas the load test agent runs from a different directory.
I've also tried adding the configuration file, copied to the start-up project's output directory, as a deployment item in my .testsettings file. No bueno. I've also created an app.config in the plug-in project that is an exact copy of the one I'm trying to use. Still no bueno.
Has anyone run into this problem? If you're trying to use <connectionString> sections in your config file, how can you get the load test plugin to find/use them?
I was going to attempt to use reflection and the good ole' ConfigurationManager to try and make a call into the assembly to find the path (and ultimately, the ProjectName.dll.config file), but wanted to ask the pros on StackOverflow for a little advice before moving forward.
Also, I'd provide code examples if this weren't such straight-forward EF code (or if it was getting past the point of: var dbContext = new dbContext( myConnectionString ); )
Any help / feedback is much appreciated.
Although I did not figure out how to use the application's configuration file within the load test plugin (because the load test and any corresponding plugin(s) run from the QTAgent.exe directory), I was able to implement my post-load-test database clean-up step in two ways:
1. Using context parameters that contain the unique elements of the connection string for each environment (run setting), so that those elements (e.g. DataSource, etc...) were programmatically available to my plug-in.
2. In the unit test class that issues requests against the endpoint, I created a static method flagged with the [ClassCleanup] attribute. This gets executed when the test mix containing unit tests from that class has finished running. The test class lives in the project's output directory and has access to the app's .config file with the entity connection string:
[TestClass]
public class MyEndpointUnitTests
{
    [TestMethod]
    public void SubmitRequestType1()
    {
        // Do stuff for request type 1
    }

    [TestMethod]
    public void SubmitRequestType2()
    {
        // Do stuff for request type 2
    }

    [ClassCleanup]
    public static void Cleanup()
    {
        // Runs from the test project's output directory, so the
        // app.config connection string is available here.
        EndpointLoadTestCleanup.DoCleanup(new DbContext());
    }
}
You can create a custom load test plug-in: in its Initialize method, grab the connection string from an XML/app.config file and add it to the context object, then use it in your unit test project. This will be more robust and easier to maintain down the road.
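A rough sketch of that idea, assuming the Visual Studio load testing API ("MyDb" is a hypothetical connection string name; the TestStarting event and its TestContextProperties are from memory, so check the API for your VS version):

```csharp
using System.Configuration;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class ConnectionStringPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Read the connection string once, from the plug-in's own config,
        // and hand it to each unit test through its test context.
        string cs = ConfigurationManager
            .ConnectionStrings["MyDb"].ConnectionString;

        loadTest.TestStarting += (sender, e) =>
            e.TestContextProperties.Add("ConnectionString", cs);
    }
}
```

Inside a unit test, the value would then be read back from TestContext.Properties["ConnectionString"].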

How to lay out my integration tests

I have a rather large set of integration tests developed in C#/NUnit. Currently it is one test fixture, one class. In [TestFixtureSetUp] I create a database from a script and populate it with test data, also from a script. My tests do not modify data, so they can run in any order or in parallel.
My problem is that I have too many tests and the file is growing too big, so it looks ugly and my ReSharper is getting sluggish. I want to split the file, but I really want to create my test database only once. As a temporary measure, I am moving the code that does the actual testing into static methods in other classes and invoking them from a single TestFixture class, as follows:
[Test]
public void YetAnotherIntegrationTest()
{
IntegrationTestsPart5.YetAnotherIntegrationTest(connection);
}
Yet it still looks ugly and I think there should be a better way.
I am using VS 2008, upgrading to 2010 right now, and SQL Server 2005.
You could split your test class into several partial classes across multiple files. This way you can keep one [TestFixtureSetUp] and still split the file up for cleanliness.
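A minimal sketch of that partial-class layout (TestDatabaseFactory is a hypothetical helper standing in for the creation/population scripts):

```csharp
using System.Data;
using NUnit.Framework;

// File 1: IntegrationTests.Setup.cs -- holds the single [TestFixtureSetUp].
[TestFixture]
public partial class IntegrationTests
{
    protected IDbConnection connection;

    [TestFixtureSetUp]
    public void CreateDatabase()
    {
        // Create the database and populate it with test data, once.
        connection = TestDatabaseFactory.CreateAndPopulate();
    }
}

// File 2: IntegrationTests.Part5.cs -- tests in a separate file, same class.
public partial class IntegrationTests
{
    [Test]
    public void YetAnotherIntegrationTest()
    {
        // Uses this.connection directly; no static indirection needed.
    }
}
```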
You could consider wrapping your database connection in a singleton class, which would initialize the database while creating the singleton instance. Then you can have as many test classes as you like; just grab the db connection from the singleton.
I create a base class containing the setup method. Then you just inherit from it and don't have to call anything to set up the database in any of the test classes.
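A minimal sketch of that inheritance approach (NUnit 2.x attribute names; TestDatabaseScripts is hypothetical). Note that NUnit runs the base class's [TestFixtureSetUp] once per derived fixture, so truly one-time work needs a static guard:

```csharp
using NUnit.Framework;

public abstract class DatabaseTestBase
{
    private static bool databaseCreated; // guard: create the database only once

    [TestFixtureSetUp]
    public void SetUpDatabase()
    {
        if (databaseCreated) return;
        // Run the creation/population scripts here (hypothetical helper).
        TestDatabaseScripts.CreateAndPopulate();
        databaseCreated = true;
    }
}

[TestFixture]
public class CustomerTests : DatabaseTestBase
{
    [Test]
    public void CanLoadCustomer()
    {
        // Test code using the shared database.
    }
}
```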

where should I put my test code for my class?

So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods, however unit testing usually only tests public APIs.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework: for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit; for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-initialization method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests, and one non-static method per test, plus any other methods you add.
The static methods let you copy DLLs, set up the test environment and clean up the test environment; the non-static shared methods reduce duplicate code; and the actual test methods prepare the test-specific input and expected output and compare them.
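For a Visual Studio/MSTest project, that structure looks roughly like this (the method bodies are placeholders):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WidgetTests
{
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // Copy DLLs, set up the test environment.
    }

    [ClassCleanup]
    public static void ClassTearDown()
    {
        // Clean up the test environment.
    }

    [TestInitialize]
    public void TestInit()
    {
        // Shared per-test setup (reduces duplicate code).
    }

    [TestCleanup]
    public void TestTearDown()
    {
        // Shared per-test teardown.
    }

    [TestMethod]
    public void Widget_ReturnsExpectedOutput()
    {
        // Prepare test-specific input and expected output, then compare.
    }
}
```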
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if they are not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
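A minimal sketch of the convention above (Bar and its reverse method are hypothetical; in a real project the two classes live in the separate src/main and src/test trees shown earlier):

```java
// src/main/java/org/foo/Bar.java  (package org.foo)
// Hypothetical class under test.
class Bar {
    String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}

// src/test/java/org/foo/BarTest.java  (same package: org.foo)
// JUnit 4 test following the Bar -> BarTest naming convention.
public class BarTest {
    @org.junit.Test
    public void reversesString() {
        org.junit.Assert.assertEquals("cba", new Bar().reverse("abc"));
    }
}
```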
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
  src
    main
      com.acme.myapp.model
        User
      com.acme.myapp.web
        RegisterController
    test
      com.acme.myapp.model
        UserTest
      com.acme.myapp.web
        RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the Testing Framework that you are using. JUnit, NUnit, some other? Each one will document some way to organize the test code. Also, if you are using continuous integration then that would also affect where and how you place your test. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with c# then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to construct a strong mathematical proof of the algorithm's correctness, because otherwise you must be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be tough about separating the various kind of tests. Especially be sure to keep the collection of unit-tests separate from the integration-tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates, which will be included in your project's solution. Then you will be able to specify which tests to run when building your project. All tests live in a separate assembly.
Otherwise, you can use the NUnit assembly: import it into your solution and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests in a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.
