I have a static Configuration class responsible for data settings for my entire system. It loads certain values from the registry in its constructor, and all of its methods are based on these values. If it cannot get the values from the registry (which is possible if the application hasn't been activated yet), it throws an exception, which translates to a TypeInitializationException, which is fine by me.
I wrote unit tests using NUnit to make sure that Configuration's constructor handles all cases correctly - normal values, blank values, Null value. Each test initializes the registry using the relevant values and then calls some method inside Configuration.
And here's the problem: NUnit has decided to run the Null test first. It clears the registry, initializes Configuration, throws an exception - all is well. But then, because this is a static class whose constructor has just failed, the runtime doesn't construct the class again for the other tests, and they all fail.
I would have a problem even without the Null test, because Configuration probably (I'm guessing) gets initialized once for all classes that use it.
My question is: Should I use reflection to re-construct the class for each test, or should I re-design this class to check the registry in a property rather than the constructor?
My advice is to redesign your Configuration class a bit. Because your question is theoretical in nature with a small practical aspect (a failing unit test), I'll provide some links to back up my ideas.
First, make Configuration a non-static class. Miško Hevery, an engineer at Google, has a very good talk about OO Design for Testability in which he specifically calls out global state as a bad design choice, especially if you want to test it.
Your Configuration could accept a RegistryProvider instance through its constructor (I assume you have heard about Dependency Injection principles). RegistryProvider's responsibility would be just to read values from the registry, and that is the only thing it should do. Now when you test Configuration, you provide a RegistryProvider stub (if you don't know what stubs and mocks are, google it; they are simple in nature) in which you hardcode values for specific registry entries.
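A minimal sketch of what that could look like; names like IRegistryProvider, StubRegistryProvider, and "MySetting" are mine, not from the question:

using System;
using System.Collections.Generic;

// Hypothetical abstraction: its only job is reading raw values from the registry.
public interface IRegistryProvider
{
    string GetValue(string name);
}

// Configuration now depends on the abstraction instead of static registry access.
public class Configuration
{
    private readonly string _setting;

    public Configuration(IRegistryProvider registry)
    {
        // Throws if the value is missing; the same failure the question
        // describes, but now trivial to trigger from a test.
        _setting = registry.GetValue("MySetting")
            ?? throw new InvalidOperationException("MySetting is missing.");
    }

    public string Setting => _setting;
}

// A hand-rolled stub for tests: hardcoded values, no real registry involved.
public class StubRegistryProvider : IRegistryProvider
{
    private readonly Dictionary<string, string> _values;

    public StubRegistryProvider(Dictionary<string, string> values) => _values = values;

    public string GetValue(string name) =>
        _values.TryGetValue(name, out var value) ? value : null;
}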
Now, the benefits:
- you have good unit tests, because you don't rely on the registry
- you don't have global state (testability)
- your tests don't depend on each other (each gets a separate Configuration instance)
- your tests don't rely on the environment in which they are executed (you may not have permission to access the registry, but you are still able to test your Configuration class)
If you feel you are not quite up to speed on Dependency Injection, I would recommend a marvelous piece of art and engineering, provided to mortal souls by the genius of Mark Seemann, called Dependency Injection in .NET. It is one of the best books I've read about class design, and it is oriented toward .NET developers.
To make my answer more concrete:
Should I use reflection to re-construct the class for each test?
No, you should never use reflection in your tests (unless there is truly no other way). Reflection will make your tests:
- fragile
- hard to understand
- hard to maintain
Use object-oriented practices in conjunction with encapsulation to hide the implementation. Then test only external behavior and don't rely on internal implementation details. This will make your tests depend only on external behavior, not on internals, which can change a lot.
should I re-design this class to check the registry in a property rather than the constructor?
Designing your class as described in my answer will let you test it without touching the registry at all. This is a cornerstone of unit tests: not relying on external systems (databases, file systems, the web, the registry, etc.). If you want to test whether you can access the registry at all, write separate integration tests, in which you write to the registry and read from it.
Now, I don't have enough information to tell you whether you should read the registry via RegistryProvider in the Configuration constructor or lazily read it on demand; that's a tricky question. But I can definitely say: try to avoid global state as much as you can, try to minimize your tests' dependence on implementation details (this relates to OO principles as a whole), and try to test your objects in isolation, i.e. without touching external systems. Then you can mimic exceptional cases, for example: does your class behave as expected when the registry is not available? It is really hard to re-create such a scenario if you access the registry directly via static members of a Configuration class.
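For instance, a hedged sketch of an NUnit test for the "registry not available" case, reusing the hypothetical IRegistryProvider from the sketch above:

using System;
using NUnit.Framework;

[TestFixture]
public class ConfigurationTests
{
    // Stub that behaves as if the registry were unreachable.
    private class UnavailableRegistryProvider : IRegistryProvider
    {
        public string GetValue(string name) =>
            throw new InvalidOperationException("Registry is not available.");
    }

    [Test]
    public void Constructor_WhenRegistryIsUnavailable_Throws()
    {
        Assert.Throws<InvalidOperationException>(
            () => new Configuration(new UnavailableRegistryProvider()));
    }
}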
Static classes and methods are notoriously hard to unit-test.
(Also notice that what you're currently doing isn't unit testing at all; it's integration testing, since you're changing registry values for your tests.)
I'm afraid you'll have to choose between your class being testable and it being static.
A compromise you could make is to move the 'logical' bits (validation, etc.) to a different, non-static class, which is called by the main static class, as sketched below.
That non-static class can then be easily tested.
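A rough sketch of that compromise; ConfigurationLogic and the validation rule are invented for illustration:

// The testable part: a plain instance class with no global state.
public class ConfigurationLogic
{
    public bool IsValidValue(string rawValue) =>
        !string.IsNullOrWhiteSpace(rawValue);
}

// The static facade keeps existing call sites compiling
// and delegates all logic to the testable class.
public static class Configuration
{
    private static readonly ConfigurationLogic Logic = new ConfigurationLogic();

    public static bool IsValidValue(string rawValue) => Logic.IsValidValue(rawValue);
}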
I would opt for redesign.
Also, a TypeInitializationException thrown whenever anything goes wrong could confuse the user/developer.
I would suggest adapting your code to use the singleton pattern as a first step.
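A minimal sketch of such a singleton; the test-only Reset() hook is my addition rather than part of the classic pattern:

public sealed class Configuration
{
    private static Configuration _instance;

    public static Configuration Instance =>
        _instance ?? (_instance = new Configuration());

    private Configuration()
    {
        // Load registry values here. A failure now surfaces on first access
        // to Instance instead of poisoning the type for the whole test run.
    }

    // Test-only hook so each test can start from a clean slate.
    // (Not thread-safe; a production version might use Lazy<T>.)
    public static void Reset() => _instance = null;
}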
I have a bunch of classes that wrap DB queries. I want to be able to test-run each query and verify the result, which is returned to private members of my class. The perhaps barbaric idea that sprang to mind was to give my query wrappers a public SelfTest() method, visible to XUnit with a [Fact] attribute. As such, my test method would have access to the internals of the class and could verify the outcome of the DB request in detail.
Are there unhappy consequences I should be aware of? I would be adding a public method to my DB wrapper classes, but the method would do no damage. I would be making my application directly 'consumable' by XUnit, rather than having my tests in a separate project as I'm used to, but this seems harmless, no?
Adding the SelfTest() methods to your production code will have no direct impact on the functionality. But your feeling that this is somewhat 'barbaric' is caused by the violation of several principles we know as Clean Code.
Just to name a few points:
First of all, this violates the Single Responsibility Principle: besides its actual functionality, the class takes on the responsibility of testing itself.
More code and more complexity are added to the class. Who wants to read through all the test code when they are only interested in understanding the functionality?
You add additional dependencies to your production code (XUnit in this case), and those assemblies will be part of your shipped product.
But what is an alternative approach? Just making all methods and properties public for the sake of testing is not suitable either. But with the InternalsVisibleTo attribute in AssemblyInfo.cs, you can give the test project assembly the right to access the internal methods and properties of the assembly under test.
Example:
In the AssemblyInfo.cs of MyAssembly, add the line
[assembly: InternalsVisibleTo("MyAssembly.UnitTests")]
Then the internal members can be exercised from MyAssembly.UnitTests.
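A small sketch of how that plays out; PriceCalculator and its discount rule are invented for illustration:

using Xunit;

// In MyAssembly (the production project):
internal static class PriceCalculator
{
    internal static decimal ApplyDiscount(decimal price, decimal percent) =>
        price - price * percent / 100m;
}

// In MyAssembly.UnitTests, which the attribute above grants access to:
public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_TakesPercentageOffThePrice()
    {
        Assert.Equal(90m, PriceCalculator.ApplyDiscount(100m, 10m));
    }
}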
I am working on writing a tool which:
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished.
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced(); // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles(); // clean up several directories
        CheckConnectionToSql(); // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks are also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question, it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD here, the only tests I end up with are things like (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just turns into a whole lot of tests which all seem to be "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface declaring methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to the File.XXX() methods. Then you can mock and verify the interface instead of trying to set up and clean up real files.
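A minimal sketch of that utility code, mirroring only a small example subset of System.IO.File:

using System.IO;

// Abstraction over the file system so tests never touch real files.
public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string ReadAllText(string path);
}

// Production implementation: a thin pass-through to System.IO.File.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) => File.Exists(path);
    public void DeleteFile(string path) => File.Delete(path);
    public string ReadAllText(string path) => File.ReadAllText(path);
}

A FakeItEasy fake of IFileSystem can then be configured and verified in a test, e.g. A.CallTo(() => fileSystem.DeleteFile(path)).MustHaveHappened(), without any disk I/O.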
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for the buck. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
I'm aware of (and agree with) the usual arguments for placing unit tests in a separate assembly. However, of late I've been experiencing some situations where I really want to test private methods. The behind-the-scenes logic in question is complex enough that testing the public and internal interfaces doesn't quite get the job done. Testing against the class's public interface feels overwrought, and I see several spots where a few tests against privates would get the job done more simply and effectively.
In the past I've tackled these kinds of situations by making the stuff I need to test protected, and creating a subclass that I can use to get at it in the test framework. But that doesn't work so well on classes that should be sealed. Not to mention bloating the test framework with all that scaffolding.
So I'm thinking of doing this instead: Place some tests in the class, where they can get at the private members. But keep them out of the production code using `#if DEBUG`.
Does this seem like a good idea?
Before anybody asks...
The solution to the OP's problem is to properly incorporate IoC with DI and eliminate the need for testing private methods altogether (as Joel Martinez noted). As has been mentioned multiple times, unit testing private members is not the way to go.
However, sometimes you just can't change the code (legacy systems, risk of breaking changes, you name it), nor can you use tools that allow private-member testing (like Typemock, which is a paid product). For such cases, you can either not test at all or cut corners. Which I believe is the situation the OP is facing.
Leaving the discussion of testing private methods aside...
Remember that you can use reflection to access and invoke private members.
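For completeness, a bare-bones sketch of such a reflection-based utility (the helper name is mine):

using System.Reflection;

public static class PrivateInvoker
{
    // Invokes a private instance method by name. Deliberately brittle:
    // renaming the method breaks the test at runtime, not at compile time.
    public static object InvokePrivate(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(
            methodName, BindingFlags.Instance | BindingFlags.NonPublic);
        return method.Invoke(target, args);
    }
}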
In my opinion, placing conditionally compiled tests in the class itself is a rather bad idea: it adds noise (as in, something unrelated) to the class code. Sure, it will be gone in a release build, but you (and possibly other programmers) will have to deal with it on a daily basis.
I realize your idea might sound good on paper: a simple test wrapped in a conditional compile. But in reality, tests quickly turn out to use extra variables (which would also have to be placed in the class code), some utilities (extra references, custom types), testing frameworks (even more references) and whatnot. All of this would have to be somehow connected to the class code. Put it all together, and you quickly end up with an unmaintainable monster.
Are you sure you want to deal with that? Especially considering that throwing together a simple reflection-based utility is probably not that hard.
Everything you're referring to can be solved with just two concepts: Single Responsibility Principle, and Dependency Injection. It definitely sounds like you need to simplify your classes. Mind you, that doesn't mean the class must offer less value, it just means that the internals need to be simpler and some functionality may have to be delegated to others.
If you need to test this method independently of the public API of the class, then it sounds like a candidate for being removed from the class itself.
You could say the class is dependent on the private method (as is arguably evident by the need to test it separately from the class public API).
If this dependency cannot be satisfied through testing the public API of the type alone then have the class instead delegate this dependency to another type. You can either instantiate this type internally or have this type injected / resolved.
This new type can then have its own unit tests, as its public API will be expressing what was previously a private method.
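A hedged sketch of that delegation; IScoreNormalizer, Grader, and the clamping logic are all invented for illustration:

// The former private method, promoted to the public API of its own type.
public interface IScoreNormalizer
{
    double Normalize(double rawScore);
}

public class ScoreNormalizer : IScoreNormalizer
{
    // Clamp the raw score into the 0..100 range.
    public double Normalize(double rawScore) =>
        rawScore < 0 ? 0 : (rawScore > 100 ? 100 : rawScore);
}

// The original class now delegates to the injected dependency.
public class Grader
{
    private readonly IScoreNormalizer _normalizer;

    public Grader(IScoreNormalizer normalizer) => _normalizer = normalizer;

    public double Grade(double rawScore) => _normalizer.Normalize(rawScore);
}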
I'm not doing much new development right now, but a lot of refactoring of older C# subsystems whose original requirements no longer cover new and, I'll add, unexpected requirements. I'm also now using Rhino Mocks and unit tests where possible (VS 2008).
The dilemma for me is that to make the methods testable and mockable, I need to define clear "contracts" using interfaces. However, if I do this, a lot of the global data that many of the classes use turns into tramp data, passed from method to method until it gets to its intended user; this looks ugly, and is against my sensibilities, but ... can be mocked. Making a mixed bag class with a lot of static global properties is a more attractive option but not Rhino testable. Is there a middle ground between the two? Testable but not too trampy? Pattern perhaps?
You should also understand that these applications run on an in-house, corporate-developed platform, so there are a lot of helper classes and services that are instantiated once per application and then used throughout it, for example a database accessor helper class. Another example is configuration files that are read once and used throughout the application by different methods for various reasons.
Your thoughts appreciated.
What you might want to look at here is some form of the Service Locator pattern. Have the classes find their own tramps.
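A very rough sketch of the idea, deliberately the simplest locator possible rather than any particular library:

using System;
using System.Collections.Generic;

// Minimal service locator: classes pull their own dependencies
// instead of having them passed from method to method.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services =
        new Dictionary<Type, object>();

    public static void Register<T>(T instance) => Services[typeof(T)] = instance;

    public static T Resolve<T>() => (T)Services[typeof(T)];
}

In a test you could register a Rhino Mocks mock up front, e.g. ServiceLocator.Register<IDatabaseHelper>(MockRepository.GenerateMock<IDatabaseHelper>()), where IDatabaseHelper stands in for one of your platform's helper classes.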
Some other reasonable options would include wrapping up the bulk of the commonly used stuff in an "application context" class of some sort.
You also might wish to look into dependency injection if you haven't done so yet.
I'm wondering how I should be testing this sort of functionality via NUnit.
public void HighlyComplexCalculationOnAListOfHairyObjects()
{
    // calls 19 private methods totalling ~1000 lines of code + comments + whitespace
}
From reading, I see that NUnit isn't designed to test private methods, for philosophical reasons about what unit testing should be; but trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible. Meanwhile, the calculation is broken down into a number of smaller methods that are reasonably discrete. They are not, however, things that make logical sense to be done independently of each other, so they're all private.
You've conflated two things. The Interface (which might expose very little) and this particular Implementation class, which might expose a lot more.
Define the narrowest possible Interface.
Define the Implementation class with testable (non-private) methods and attributes. It's okay if the class has "extra" stuff.
All applications should use the Interface, and -- consequently -- don't have type-safe access to the exposed features of the class.
What if "someone" bypasses the Interface and uses the Class directly? They are sociopaths -- you can safely ignore them. Don't provide them phone support because they violated the fundamental rule of using the Interface not the Implementation.
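A small sketch of the split; ICalculator, HairyCalculator, and the arithmetic are invented for illustration:

using System.Collections.Generic;

// The narrow Interface: all application code depends only on this.
public interface ICalculator
{
    double Calculate(IReadOnlyList<double> inputs);
}

// The Implementation exposes its intermediate steps so they can be tested.
public class HairyCalculator : ICalculator
{
    public double Calculate(IReadOnlyList<double> inputs) =>
        ApplyCorrection(SumInputs(inputs));

    // "Extra" non-private members, directly testable.
    public double SumInputs(IReadOnlyList<double> inputs)
    {
        double total = 0;
        foreach (double x in inputs) total += x;
        return total;
    }

    public double ApplyCorrection(double sum) => sum * 1.05;
}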
To solve your immediate problem, you may want to take a look at Pex, which is a tool from Microsoft Research that addresses this type of problem by finding all relevant boundary values so that all code paths can be executed.
That said, had you used Test-Driven Development (TDD), you would never have found yourself in that situation, since it would have been near-impossible to write unit tests that drive this kind of API.
A method like the one you describe sounds like it tries to do too many things at once. One of the key benefits of TDD is that it drives you to implement your code from small, composable objects instead of big classes with inflexible interfaces.
As mentioned, InternalsVisibleTo("AssemblyName") is a good place to start when testing legacy code.
Internal methods are still private in the sense that assemblies outside the current assembly cannot see them. Check MSDN for more information.
Another thing would be to refactor the large method into smaller, more focused classes. Check this question I asked about a similar problem, testing large methods.
Personally I'd make the constituent methods internal, apply InternalsVisibleTo and test the different bits.
White-box unit testing can certainly still be effective - although it's generally more brittle than black-box testing (i.e. you're more likely to have to change the tests if you change the implementation).
HighlyComplexCalculationOnAListOfHairyObjects() is a code smell, an indication that the class that contains it is potentially doing too much and should be refactored via Extract Class. The methods of this new class would be public, and therefore testable as units.
One issue with such a refactoring is that the original class held a lot of state that the new class would need. Which is another code smell, one that indicates the state should be moved into a value object.
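As a rough sketch of where those two refactorings land (all names and the particular state are invented):

// Value object carrying the state the extracted class needs.
public class CalculationContext
{
    public double Threshold { get; }
    public double Scale { get; }

    public CalculationContext(double threshold, double scale)
    {
        Threshold = threshold;
        Scale = scale;
    }
}

// Extracted class: what used to be private helpers are now public units.
public class HairyCalculation
{
    private readonly CalculationContext _context;

    public HairyCalculation(CalculationContext context) => _context = context;

    public double ScaleInput(double input) => input * _context.Scale;

    public double CapAtThreshold(double value) =>
        value > _context.Threshold ? _context.Threshold : value;
}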
I've seen (and probably written) many such hairy objects. If it's hard to test, it's usually a good candidate for refactoring. Of course, one problem with that is that the first step of refactoring is making sure the code passes all its tests first.
Honestly, though, I'd look to see if there isn't some way you can break that code down into a more manageable section.
Get the book Working Effectively with Legacy Code by Michael Feathers. I'm about a third of the way through it, and it has multiple techniques for dealing with these types of problems.
Your question implies that there are many paths of execution throughout the subsystem. The first idea that pops into mind is "refactor." Even if your API remains a one-method interface, testing shouldn't be "impossible".
trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible
If that's true, try a less ambitious goal. Start by testing specific, high-usage paths through the code, paths that you suspect may be fragile, and paths for which you've had reported bugs.
Refactoring the method into separate sub-algorithms will make your code more testable (and might be beneficial in other ways), but if your problem is a ridiculous number of interactions between those sub-algorithms, extract method (or extract to strategy class) won't really solve it: you'll have to build up a solid suite of tests one at a time.