I am currently working on a C# solution in VS 2010.
In order to write sufficient unit tests for my business processes I am using the Accessor approach to access and change the internals of my business objects.
The issue that has arisen on my TFS build server, now that I have added Accessors for my object assembly in a number of other test assemblies, is that when my tests run not all of them pass; some fail with a warning along the lines of:
... <Test failed message> ....
... Could not load file 'ObjectLibrary_Accessor, Version=0.0.0.0,
Culture=neutral, PublicKeyToken=ab391acd9394' or one of its dependencies.
...
...
I believe the issue is that, as each test assembly is compiled, an ObjectLibrary_Accessor.dll is created with a different strong name. Therefore, when some of the tests are executed, the strong-name check fails with the above error even though the dll is in the expected location.
I see a number of options, none of which are particularly attractive; these include:
Not using the _Accessor approach.
Set a different XX_Accessor.dll for each test assembly - Is it possible to change the name of the generated assembly to avoid the clash?
Change my integration build to use a different binaries folder for each test project (how?)
Other options I do not know about?
I would be interested in any advice or experience people have had with this issue, solutions and workarounds (although I do not have time to change my code, so option 1 is not a favourite).
The Accessor approach is a bit fragile, as you've seen.
You can make internal items visible to your test code by using the InternalsVisibleTo assembly attribute.
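A minimal sketch, assuming a test assembly named ObjectLibrary.Tests (a placeholder; if your assemblies are strong-named, the attribute must also include the friend assembly's full public key):

// In the business object project's AssemblyInfo.cs (assembly name is hypothetical)
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("ObjectLibrary.Tests")]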
If you want to get at private methods and you're using .NET 4.0 then consider using something like Trespasser to make this easier.
For more options see the answers for this question: How do you unit test private methods?
Much like this question I have a need to run all my NUnit tests in different cultures. I was hoping that there would be an attribute that allows me to specify cultures and the tests would be run in each culture in turn.
It seems there is a documented attribute detailed here which allows you to set the culture that an NUnit test runs under, however it does not allow the setting of multiple cultures.
According to the answer to the linked question and several other resources I found (such as here and here), all of which relate to NUnit 2, this feature was a planned improvement for NUnit 3.
As I cannot find evidence that this was implemented, can anyone answer this question please...
Is there a way to take an existing suite of NUnit 3 test cases and automatically run them in more than one culture in a tidy format?
I imagine I could write an application script that runs them outside of the IDE but that adds extra overhead.
I also don't relish the idea of copying and pasting the tests and simply changing the "SetCulture" attribute each time (in the same vein, I don't want to have to manually change this on an existing fixture before each run).
There's nothing currently built-in to NUnit to do this that I'm aware of, although it's an idea that's been floated a couple of times. (e.g. introducing a --set-culture flag for the console runner.)
As a workaround, the SetCulture attribute can be applied at assembly level - so you could potentially have two different assembly-level SetCulture attributes under compilation symbols, and then build your test assembly twice - once for each culture. e.g.
// e.g. in the test project's AssemblyInfo.cs
using NUnit.Framework;

#if US_CULTURE
[assembly: SetCulture("en-US")]
#elif UK_CULTURE
[assembly: SetCulture("en-GB")]
#endif
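To produce the two variants you would then define one symbol per build. A sketch of what that might look like from the command line (the project file name and symbol names are just the hypothetical ones used above):

msbuild Tests.csproj /p:DefineConstants=US_CULTURE
msbuild Tests.csproj /p:DefineConstants=UK_CULTURE

Note that passing DefineConstants on the command line this way overrides any symbols defined in the project file itself.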
In my .NET solution, I have two projects: one main project and a project for running tests against the main project. In my main project, I have several methods that I'd like to keep "private", but would also like to run tests for. Is there an access modifier that could limit these functions to just inside of my solution?
You are looking for the InternalsVisibleTo attribute.
This attribute lets you specify other assemblies that should have access to types and methods that are internal to your assembly. So, in your main project's AssemblyInfo.cs file (or any other source file), you can specify that your test project is a 'friend assembly' and should have access to the internals of your main project:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MainProject.Tests")]
On a side note, as pointed out by Alexei, if your MainProject is signed with a strong-name key, any 'friend' assembly must also be signed. This is explained here.
Although, as mentioned in another comment, best practice is to test your assembly through its public API.
You can use the InternalsVisibleTo attribute to make internal types and methods visible to selected assemblies.
However, you should try to design your API so that it can be tested using only the public interface.
You should seriously rethink the architecture of your solution. This is a smell that often shows that your class does too many things at once.
A simple fix is to extract this responsibility (those private methods) into another class, where they become public and are testable out of the box, as sketched below.
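A minimal sketch of that kind of extraction (all class and method names here are hypothetical):

// The parsing rules used to be private methods of OrderImporter.
// Extracted into their own class, they become public and testable on their own.
public class OrderLineParser
{
    public string[] Parse(string line)
    {
        return line.Split(';');
    }
}

public class OrderImporter
{
    private readonly OrderLineParser parser = new OrderLineParser();

    public void Import(string line)
    {
        string[] fields = parser.Parse(line);
        // ... use the parsed fields to build and persist an order ...
    }
}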
No, there is no way to limit access to "just the solution".
The reason is that a solution is simply a group of projects, and one project can be part of any number of solutions. So even if you "limited" access to the projects included in one solution, you or someone else could create another solution that would somehow need to magically get access to those methods.
Additionally, a built assembly does not include any information about which solution it was part of, so there is no information at run time to check access against.
For your particular problem: InternalsVisibleTo (as shown in other answers) will give the projects you allow access to internal methods (it requires strongly signed assemblies), or you can refactor your code to avoid the need to test private methods.
TL;DR, The question:
What effect on the execution of code can the presence of an extension method have in .NET (e.g. JIT/optimizations)?
Background
I'm experiencing a test failure in MSTest that depends on whether a seemingly unrelated assembly is also tested.
I noticed the test failure and, by accident, discovered that the failure only occurred if another test assembly was loaded. Running mstest on both the Unittests and IntegrationTests assemblies would start executing the integration tests and fail on the 21st integration test under the 4.5 CLR, whereas this does not happen under the 4.0 CLR (same configuration otherwise).
I removed all the tests but the failing one from the integration test assembly. Execution now looks like this with both test assemblies loaded: mstest loads both assemblies, then executes the single test in the integration test assembly, which fails.
> mstest.exe /testcontainer:Unittests.dll /testcontainer:IntegrationTests.dll
Loading C:\Proj\Tests\bin\x86\Release\Unittests.dll...
Loading C:\Proj\Tests\bin\x86\Release\Integrationtests.dll...
Starting execution...
Results Top Level Tests
------- ---------------
Failed Proj.IntegrationTest.IntegrationTest21
Without the Unittests assembly in the execution, the test passes.
> mstest.exe /testcontainer:IntegrationTests.dll
Loading C:\Proj\Tests\bin\x86\Release\Integrationtests.dll...
Starting execution...
Results Top Level Tests
------- ---------------
Passed Proj.IntegrationTest.IntegrationTest21
I thought it must be an [AssemblyInitialize] thing being executed in the Unittests dll, or perhaps some sort of static state in Unittests.dll or a common dependency being modified when the test assembly was loaded. I found neither static constructors nor an assembly init in Unittests.dll. I suspected a deployment difference when the Unittests assembly was included (a dependent assembly deployed in a different version, etc.), but I compared the passing/failing deployment dirs and they are binary equivalent.
So what part of the Unittests assembly is causing the test difference?
From the unit tests I removed half the tests at a time until I drilled it down to a source file in the Unit tests assembly. Along with the test class, an extension method is declared:
Apart from this extension class, the Unittests assembly now contains a single test case in a dummy test class. The test failure occurs only if I have both the dummy test method and the extension method declared. I could remove all of the remaining test logic until the Unittests dll is a single file, containing this:
// DummyTest.cs in Unittests.dll
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DummyTest
{
    [TestMethod]
    public void TestNothing()
    {
    }
}

public static class IEnumerableExtension
{
    public static IEnumerable<T> SymmetricDifference<T>(
        this IEnumerable<T> @this,
        IEnumerable<T> that)
    {
        return @this.Except(that).Concat(that.Except(@this));
    }
}
If either the test method or the extension class is removed, the test passes. Both present, and the test fails.
There are no calls to the extension method made from either assembly, and no code is executed in the Unittests assembly before the integration tests are executed (as far as I'm aware).
I'm sure the integration test is complex enough that JIT differences in optimization can cause a difference e.g. in floating point. Is that what I'm seeing?
Probably the issue occurs due to a type-loading error.
When the CLR loads a class or method it always inspects all the types used in those items. It does not matter whether the type/method is actually called; what matters is the fact of the declaration. Returning to your sample, the extension method SymmetricDifference declares that it uses the Except and Concat methods from the System.Core assembly.
What happens here is an error during loading of the type System.Linq.Enumerable from the System.Core assembly.
The reasons for that behavior can vary. The very first step to take is to log the exact exception you get on a test failure.
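One way to surface that kind of loader error is to force every type in the assembly to load and dump the details. A minimal sketch (the helper name is made up; Assembly.GetTypes throws a ReflectionTypeLoadException whose LoaderExceptions usually name the missing type or assembly):

using System;
using System.Reflection;

public static class TypeLoadDiagnostics
{
    // Try to load every type in the assembly and print the underlying loader errors.
    public static void DumpLoadErrors(Assembly assembly)
    {
        try
        {
            assembly.GetTypes();
        }
        catch (ReflectionTypeLoadException ex)
        {
            foreach (Exception loaderException in ex.LoaderExceptions)
                Console.WriteLine(loaderException);
        }
    }
}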
I did find this: https://connect.microsoft.com/VisualStudio/feedback/details/792429/some-unit-tests-fail-with-visual-studio-2012-agent-update3-on-windowsxp-sp3
Apparently the wrong framework is inferred. Passing /noisolation on the command line when executing your tests is mentioned as a workaround. Hope this solves your issue.
Two thoughts...
Floating point math and comparison can actually give different results with and without optimizations. At least they can in C++. They also can, and definitely will, be different on different CPU architectures. Generally speaking, it sounds like a bad idea to have a unit test evaluating pass/fail based on a floating point comparison. E.g., your friend running x64 Windows fails the unit tests while you succeed.
This extension method issue is a wild goose chase. So what if it mysteriously works if you remove it? Optimized code does crazy shit sometimes; ultimately the problem is that your floating point number is DIFFERENT from when it's unoptimized. Add an epsilon if a difference is unavoidable (see the sketch below); otherwise log its value at every step along the way as it is computed and track it down.
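For reference, MSTest's Assert.AreEqual has an overload that takes a delta for doubles; a minimal sketch (the values and tolerance are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FloatingPointExampleTest
{
    [TestMethod]
    public void Total_IsCloseToExpected()
    {
        double actual = 0.1 + 0.2;                // computed result
        double expected = 0.3;                    // expected result
        Assert.AreEqual(expected, actual, 1e-9);  // passes if |expected - actual| <= 1e-9
    }
}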
So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods; however, unit testing usually only tests public APIs anyway.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework - for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit, and for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-init method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests and one non-static method per test, plus any other methods you add.
The static methods let you copy dlls, set up the test environment and clean up the test environment; the non-static shared methods are for reducing duplicate code; and the actual test methods are for preparing the test-specific input and expected output and comparing them.
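In MSTest terms, that structure might look roughly like this (all names are placeholders):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RandomizerTests
{
    [ClassInitialize]                       // static class-init: runs once before all tests
    public static void ClassInit(TestContext context) { /* copy dlls, set up environment */ }

    [ClassCleanup]                          // static class tear-down: runs once after all tests
    public static void ClassCleanup() { /* clean up environment */ }

    [TestInitialize]                        // runs before each test
    public void TestInit() { /* shared per-test setup */ }

    [TestCleanup]                           // runs after each test
    public void TestTearDown() { /* shared per-test tear-down */ }

    [TestMethod]                            // one method per test
    public void Next_ReturnsValueInRange() { /* arrange, act, assert */ }
}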
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if they are not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
src
main
com.acme.myapp.model
User
com.acme.myapp.web
RegisterController
test
com.acme.myapp.model
UserTest
com.acme.myapp.web
RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the testing framework that you are using. JUnit, NUnit, some other? Each one will document some way to organize the test code. Also, if you are using continuous integration then that would also affect where and how you place your tests. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C# then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to construct a strong mathematical proof of the correctness of the algorithm, because otherwise you would have to be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
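In C# terms, that parallel hierarchy might look roughly like this (namespaces borrowed from the example above; the classes are just placeholders):

// In the production project:
namespace crazy.juggler.drummer
{
    public class Customer { /* ... */ }
}

// In the unit-test project, mirroring the namespace:
namespace crazy.juggler.drummer
{
    public class CustomerTest { /* tests for Customer go here */ }
}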
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be tough about separating the various kinds of tests. Especially be sure to keep the collection of unit-tests separate from the integration-tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates which will be included in your project's solution. Then, you will be able to specify which tests to fire during the building of your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly, add it to your solution and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests inside a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.
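For instance, a minimal NUnit-style test class might look like this (the class under test and its behaviour are made up, shown only to keep the sketch self-contained):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        var calculator = new Calculator();       // hypothetical class under test
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Hypothetical production class.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}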
I thought this would be a common question so I searched for a while but couldn't find it.
I am about to start a new project (C#, .NET 3.5) and I was thinking about where I should write the unit test code. I can create a unit test project and write all the code there, or I can write the unit test code with the "class under test" itself.
What do you recommend and why? Things to consider before choosing an approach (caveats?)?
EDIT: About writing unit-test code with the "code under test": removing the test code from the production assembly isn't difficult, I guess. That's what conditional compilation is for, right?
Just throwing this point in because some answers reject the second option solely on the grounds that production assemblies would get fat.
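To illustrate what I mean, a rough sketch with a made-up symbol and a dummy NUnit fixture - the test code only exists in builds where the symbol is defined:

#if INCLUDE_TESTS   // hypothetical symbol, defined only for test builds
using NUnit.Framework;

[TestFixture]
public class WidgetTests
{
    [Test]
    public void DoesSomething() { /* ... */ }
}
#endif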
Separate project, same solution. Use InternalsVisibleTo if you want to access internals from the test code.
Separating out test from production code:
makes it more obvious what's what
means you don't need dependencies on test frameworks in your production project
keeps your deployed code leaner
avoids including test data files in your deployment assembly
Keeping the code in the same solution:
keeps the test cycle quick
makes it easy to hop between production and test code
I always create a separate project in where I write my TestFixtures.
I do not want to litter my domain model (or whatever) with Test classes.
I do not want to distribute my tests to customers or users of my application, so therefore I put them in a separate project (which also leads to a separate assembly).
If you have the rare case that you want to test internal methods, you can use InternalsVisibleTo.
If you have the very rare case that you want to test private methods, you can use this technique, as explained by Davy Brion.
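The technique linked there is reflection-based; a minimal sketch of the general idea (class, method and test names are all hypothetical, and this is not necessarily the exact approach Davy Brion describes):

using System.Reflection;
using NUnit.Framework;

public class PriceCalculator
{
    // The private method we want to exercise from a test.
    private decimal ApplyDiscount(decimal price) { return price * 0.9m; }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TakesTenPercentOff()
    {
        var calculator = new PriceCalculator();
        MethodInfo method = typeof(PriceCalculator).GetMethod(
            "ApplyDiscount", BindingFlags.Instance | BindingFlags.NonPublic);

        var result = (decimal)method.Invoke(calculator, new object[] { 100m });

        Assert.AreEqual(90.0m, result);
    }
}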
I prefer the first approach - separating the unit tests into their own project.
Placing the unit tests within the test subject makes it dirty. Furthermore, you don't necessarily want to distribute your project with the unit tests, which will make your DLLs bigger and possibly expose things that you don't want to expose to the end user.
Most of the open source projects that I have seen had separate projects for unit tests.
You should place the unit tests in a separate project.
You should also write them in such a way that the SUT (System Under Test) is not modified just to make unit tests possible. I mean you should have no helper classes in your main project that exist "only" to support your tests.
Mixing test and production code is always a bad plan, since you don't want to deliver all that extra code to your clients. Keep the clear separation that another project offers.
I don't think the "keep the tests quick" argument is a really strong one. Make a clear cut... Testing code does not belong in a production environment, IMHO...
Edit:
Comment on Edit above:
EDIT: About writing unit-test code with the "code under test": removing the test code from the production assembly isn't difficult, I guess. That's what conditional compilation is for, right?
Yes, it is "easy" to remove the code with a conditional compilation flag, but then you won't have tested the final assembly you created; you only tested the assembly you built with the test code inside it, then you recompile, creating a new, untested assembly, and ship that one. Are you sure all your conditional flags are set 100% correctly? I guess not, since you can't run the tests ;)