general consensus
I've done quite a lot of reading up on the subject of testing complex classes and private methods.
The general consensus seems to be:
"if you need to test private methods then you're class is badly designed"
"if your class is complex, then you need to separate it out"
So, I need your help.
the problem class
So I have a relatively simple class whose long running job it is to:
poll a datasource
do some very simple mapping of the data
send that data somewhere else
Additionally:
it needs to be quite fault tolerant, retrying various tasks when certain errors occur.
the testing problem
The point of the class is to abstract a lot of the fault tolerance and threading... basically by using a simple Timer class and some internal lists to keep track of errors, etc.
Because of the Timer, certain methods are called asynchronously on different threads... additionally, a bunch of the methods rely on shared private fields.
How should I test this class... particularly because so many methods are private?
cheers guys
I would extract the code to poll the data into a separate class that can be mocked, and also extract the code that sends that data for the same reason. You might want to extract the data mapping code, depending on how trivial it is.
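As a rough sketch of what that extraction might look like (all of the type names here are illustrative, not from the original code), the poll and send steps become small interfaces the class depends on:

using System.Collections.Generic;

public class SourceRecord { }   // stand-ins for whatever the real data types are
public class MappedRecord { }

public interface IDataSource
{
    IEnumerable<SourceRecord> Poll();
}

public interface IDataSink
{
    void Send(MappedRecord record);
}

public class DataPump
{
    private readonly IDataSource source;
    private readonly IDataSink sink;

    public DataPump(IDataSource source, IDataSink sink)
    {
        this.source = source;
        this.sink = sink;
    }

    // The simple mapping step stays here (or moves to its own class) and can now be
    // unit tested against fake IDataSource / IDataSink implementations, with no real
    // polling or sending involved.
}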
I would definitely use mock timers in the unit tests, otherwise your tests are hard to set up and slow to run. You can either pass in the timer in the constructor, or expose a property that you can set. I often create a regular timer in the constructor and then overwrite that from my unit test.
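A minimal sketch of the constructor-injection variant of that timer seam (ITimer is a made-up wrapper name, not a framework type):

using System;

public interface ITimer
{
    event EventHandler Tick;
    void Start();
    void Stop();
}

public class Poller
{
    private readonly ITimer timer;

    // Production code passes a wrapper around a real System.Timers.Timer;
    // a unit test passes a fake it can "tick" by hand, so tests run instantly.
    public Poller(ITimer timer)
    {
        this.timer = timer;
        this.timer.Tick += (sender, args) => PollOnce();
    }

    public void Start() { timer.Start(); }

    private void PollOnce()
    {
        // poll, map, send...
    }
}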
You might also be able to extract the retry logic so that it can be tested separately from the other code. Passing in a delegate of the code to try and retry might be a way to decouple the data code from the retry logic. You can also use IEnumerable and the yield statement to generate your data and feed it to the retry code. Essentially, I'm looking for ways to make it so that the retry code doesn't directly call the target code that it's supposed to try and retry. That makes it easier to test and generate all possible errors for, although you can get some of the same benefits by just mocking out that target code.
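For example, the retry logic might be pulled into something like this (a rough sketch; the Retry class and its signature are assumptions, not the poster's actual code):

using System;
using System.Threading;

public static class Retry
{
    // Runs the supplied delegate, retrying on failure up to maxAttempts times.
    public static void Execute(Action work, int maxAttempts, TimeSpan delay)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                work();
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delay); // in tests, keep the delay at zero (or inject a fake delay)
            }
        }
    }
}

A test can then pass in a delegate that throws a chosen number of times and assert how often it was invoked; no timers or real data sources are involved.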
If you really have multithreaded scenarios that you need to test, there are some tools out there to coordinate threads from within a test. One of them is a port I created of Java MultithreadedTC called TickingTest.
You could try using something like JMock. This will let you replace the real Timer with a mock Timer that you can control. You can then set up test cases that fire method calls in a defined order, and you can also set up error conditions by creating a mock data source.
EDIT: Whoops! Didn't see the C# tag. Maybe there's a C# equivalent to JMock.
Related
I am working on writing a tool which
- sets up a connection to Sql and runs a series of stored procedures
- Hits the file system to verify and also delete files
- Talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD... here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced(); // will throw if it's not licenced; makes a call to a library owned by the company
        CleanUpFiles(); // clean up several directories
        CheckConnectionToSql(); // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks are also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that, when using TDD here, the only tests I end up with are things like this (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests that are nothing but "MustHaveHappened" assertions. Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where perhaps TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock up and verify the interface instead of trying to setup and cleanup real files.
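Something along these lines (the method names are chosen for illustration; shape the interface to whatever subset of System.IO you actually use):

public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string[] GetFiles(string directory);
}

// The production implementation just passes through to System.IO.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) => System.IO.File.Exists(path);
    public void DeleteFile(string path) => System.IO.File.Delete(path);
    public string[] GetFiles(string directory) => System.IO.Directory.GetFiles(directory);
}

In tests you hand the clean-up code a mocked IFileSystem and verify DeleteFile was called with the expected paths, with no real disk access.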
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for buck. Use the real DB and FileSystem.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
Are there any disadvantages in using a private accessor to test a piece of code?
I'm weighing the option of using a private accessor only to test my GUI, as opposed to the methods/properties that are publicly exposed.
This will allow some GUI testing I need; I just wanted to make sure there were not any hidden "pitfalls" in how a private accessor behaves.
So to recap, your stated goal is:
I'm weighing the option of using a private accessor only to test my GUI... This will allow some GUI testing I need...
In short, yes there are pitfalls. The code you are testing is still tightly coupled to the user interface.
In the comments you clarify your goal/problem as:
What about in the case where I want to test drag/drop, custom controls, overridden events?
All I can say is welcome aboard. The software industry has struggled with this for nearly half a century. The fact remains that testing UI is hard, really HARD. Yes, you can take a piece of code that is tightly coupled with a UI element and try to automate it; however, you're going to be fighting tooth and nail to make headway against poor assumptions.
The 'trick' to testable UI is not to make your UI testable, but to remove the code you want to test from the UI. Thus the wide acceptance of N-Tier application development and presentation design patterns like MVC, MVVM, etc.
see the following:
Model View Controller
Model View ViewModel
Model–view–adapter
Model–view–presenter
The primary goal or driving force behind many of these design patterns is to remove the tight coupling between behavior and presentation. This enables you to test a behavior like drag-and-drop without a user interface. My recommendation is to review the patterns, choose one you like, and then start refactoring the code as you write your unit tests.
Another way to think of writing UI for testing is to remove every if, else, for, while, switch, or other control statement from your user interface code. The resulting 'shell' of a UI should be very resilient to change. Just be careful when using things like data binding that rely on reflection (which is generally an acceptable practice). The primary downside is that the compiler cannot tell you that a member no longer exists.
Updated
@timmy, you wrote:
... for example if I want to test mouse click behavior...
So what about the mouse click behavior can't be moved to a controller rather than being embedded in the form? I guess the "Close" button might have a problem, but beyond that, why not move the logic to another class that can then be tested?
BTW, you don't have to pick just one pattern (MVC, MVVM, etc.); they are 'guidelines' or 'suggestions', not hard rules, so don't get ridiculous with it. Just try to keep the logic separate from the UI and independently testable. As an example, perhaps your "Click" event fits better with a simple command class? Using a command pattern is easy: new up an object and execute it. Consider this example code for a folder copy form:
private void OnCopyClick(object sender, EventArgs args)
{
    var cmd = new MyCopyCommand(this.FolderPath, this.txtTargetFolderPath.Text);
    new ErrorHandler(this).Perform(cmd);
}
This works well: it has no 'real' logic other than deciding what to give the command, and it has no conditional code paths. Notice we don't even directly invoke the command, but rather defer that to something that can handle an error appropriately. Usually this 'ErrorHandler' would be provided to the Form rather than constructed directly, but you get the idea.
From this we should easily be able to verify the correct behavior of MyCopyCommand. In the end you should wind up with a bunch of "flat functions" in the UI, i.e. functions that have no nesting or curly braces. Of course this is a rule of thumb, not to be taken to such an extreme as to prevent you from being productive.
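As a sketch of where that leaves you (the Execute signature and the test are illustrative; the real command would take whatever the form actually hands it), the command becomes an ordinary class that NUnit can exercise without the form:

using System.IO;
using NUnit.Framework;

public class MyCopyCommand
{
    private readonly string sourceFolder;
    private readonly string targetFolder;

    public MyCopyCommand(string sourceFolder, string targetFolder)
    {
        this.sourceFolder = sourceFolder;
        this.targetFolder = targetFolder;
    }

    public void Execute()
    {
        foreach (var file in Directory.GetFiles(sourceFolder))
            File.Copy(file, Path.Combine(targetFolder, Path.GetFileName(file)), overwrite: true);
    }
}

[TestFixture]
public class MyCopyCommandTests
{
    [Test]
    public void Copies_every_file_into_the_target_folder()
    {
        // Arrange temp source/target folders with a known file, call Execute(),
        // then assert the file exists in the target; no form, no click handler involved.
    }
}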
I know this may seem like a lot of work, but truthfully it is not when you are already working to write a set of tests. You can be productive AND write solid code. You just need to know when to cheat and when not to. That comes with experience, and after 20 years, 10 of those writing NUnit tests, I still fail once in a while. When something breaks because you didn't do this, first extract the logic from the UI, then write a unit test to prove it's broken, then fix it.
It's better not to use them; instead, try to find out whether you can inject the dependency. Unless you're working on legacy code where you want to create some unit tests using private accessors, I'd suggest not using them, and even in that case I'd recommend doing so only temporarily, until you refactor the legacy code.
In addition to what AD.NET said.
I would use those private accessors in tests only until I finish my refactoring (to Model View Controller, Model View Presenter, or Model View ViewModel) so that I don't have to test the GUI at all!
I'm aware of (and agree with) the usual arguments for placing unit tests in a separate assembly. However, of late I've been experiencing some situations where I really want to be testing private methods. The behind-the-scenes logic in question is complex enough that testing the public and internal interfaces doesn't quite get the job done. Testing against the class's public interface feels overwrought, and I see several spots where a few tests against privates would get the job done more simply and effectively.
In the past I've tackled these kinds of situations by making the stuff I need to test protected, and creating a subclass that I can use to get at it in the test framework. But that doesn't work so well on classes that should be sealed. Not to mention bloating the test framework with all that scaffolding.
So I'm thinking of doing this instead: Place some tests in the class, where they can get at the private members. But keep them out of the production code using `#if DEBUG`.
Does this seem like a good idea?
Before anybody asks...
The solution to the OP's problem is to properly incorporate IoC with DI and eliminate the need to test private methods altogether (as Joel Martinez noted). As has been mentioned multiple times, unit testing private members is not the way to go.
However, sometimes you just can't change the code (legacy systems, risk of breaking changes, you name it), nor can you use tools that allow private-member testing (like Typemock, which is a paid product). In such cases you can either not test at all, or cut corners, which I believe is the situation the OP is facing.
Leaving the private-method testing discussion aside...
Remember that you can use reflection to access and invoke private members.
In my opinion, placing conditionally compiled tests in the class itself is a rather bad idea: it adds noise (as in, something unrelated) to the class code. Sure, it will be gone in a release build, but you (and possibly other programmers) will have to deal with it on a daily basis.
I realize your idea might sound good on paper: a simple test wrapped in a conditional compilation directive. But in reality, tests quickly turn out to use extra variables (which will also have to be placed in the class code), some utilities (extra references, custom types), testing frameworks (even more references) and whatnot. All of this will have to be somehow connected to the class code. Put it all together, and you quickly end up with an unmaintainable monster.
Are you sure you want to deal with that? Especially considering that throwing together a simple reflection-based utility is probably not that hard.
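For example, something like this is usually enough (the helper name and the method name in the usage comment are mine, not from any framework):

using System.Reflection;

public static class PrivateInvoker
{
    // Invokes a private instance method on the target and returns its result.
    public static object Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType()
            .GetMethod(methodName, BindingFlags.Instance | BindingFlags.NonPublic);
        return method.Invoke(target, args);
    }
}

// In a test: var result = PrivateInvoker.Invoke(sut, "RecalculateTotals", 42);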
Everything you're referring to can be solved with just two concepts: Single Responsibility Principle, and Dependency Injection. It definitely sounds like you need to simplify your classes. Mind you, that doesn't mean the class must offer less value, it just means that the internals need to be simpler and some functionality may have to be delegated to others.
If you need to test this method independently of the public API of the class, then it sounds like a candidate for being removed from the class itself.
You could say the class is dependent on the private method (as is arguably evident by the need to test it separately from the class public API).
If this dependency cannot be satisfied through testing the public API of the type alone, then have the class delegate this dependency to another type instead. You can either instantiate this type internally or have it injected / resolved.
This new type can then have its own unit tests, as its public API will express what was previously a private method.
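A minimal sketch of the idea, with invented names (the former private method becomes the public API of a small collaborator):

public interface IDiscountCalculator
{
    decimal Apply(decimal price);
}

public class OrderProcessor
{
    private readonly IDiscountCalculator discounts;

    // Injected here, but it could just as well be instantiated internally.
    public OrderProcessor(IDiscountCalculator discounts)
    {
        this.discounts = discounts;
    }

    public decimal Total(decimal price)
    {
        // What used to be a private DiscountedPrice() call is now a dependency.
        return discounts.Apply(price);
    }
}

// IDiscountCalculator implementations get their own focused unit tests, and
// OrderProcessor can be tested with a stub in place of the real calculation.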
I'm wondering how I should be testing this sort of functionality via NUnit.
public void HighlyComplexCalculationOnAListOfHairyObjects()
{
    // calls 19 private methods totalling ~1000 lines of code + comments + whitespace
}
From reading, I see that NUnit isn't designed to test private methods, for philosophical reasons about what unit testing should be; but trying to create a set of test data that fully exercises all the functionality involved in the computation would be nearly impossible. Meanwhile the calculation is broken down into a number of smaller methods that are reasonably discrete. They are not, however, things that make logical sense to do independently of each other, so they're all private.
You've conflated two things. The Interface (which might expose very little) and this particular Implementation class, which might expose a lot more.
Define the narrowest possible Interface.
Define the Implementation class with testable (non-private) methods and attributes. It's okay if the class has "extra" stuff.
All applications should use the Interface and, consequently, don't have type-safe access to the extra features exposed by the class.
What if "someone" bypasses the Interface and uses the Class directly? They are sociopaths -- you can safely ignore them. Don't provide them phone support because they violated the fundamental rule of using the Interface not the Implementation.
To solve your immediate problem, you may want to take a look at Pex, which is a tool from Microsoft Research that addresses this type of problem by finding all relevant boundary values so that all code paths can be executed.
That said, had you used Test-Driven Development (TDD), you would never have found yourself in that situation, since it would have been near-impossible to write unit tests that drive this kind of API.
A method like the one you describe sounds like it tries to do too many things at once. One of the key benefits of TDD is that it drives you to implement your code from small, composable objects instead of big classes with inflexible interfaces.
As mentioned, InternalsVisibleTo("AssemblyName") is a good place to start when testing legacy code.
Internal methods are still private in the sense that assemblies other than the current assembly cannot see them. Check MSDN for more information.
Another thing would be to refactor the large method into smaller, more well-defined classes. Check this question I asked about a similar problem, testing large methods.
Personally I'd make the constituent methods internal, apply InternalsVisibleTo and test the different bits.
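For example (the test assembly name is made up; if your test assembly is strong-named you also need to include its public key in the attribute):

// In the production assembly, e.g. in AssemblyInfo.cs or any source file:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProduct.Tests")]

// Methods declared internal (instead of private) are now callable from the
// MyProduct.Tests project, but remain invisible to every other assembly.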
White-box unit testing can certainly still be effective - although it's generally more brittle than black-box testing (i.e. you're more likely to have to change the tests if you change the implementation).
HighlyComplexCalculationOnAListOfHairyObjects() is a code smell, an indication that the class that contains it is potentially doing too much and should be refactored via Extract Class. The methods of this new class would be public, and therefore testable as units.
One issue with such a refactoring is that the original class held a lot of state that the new class would need, which is another code smell, one that indicates the state should be moved into a value object.
I've seen (and probably written) many such hairy objects. If it's hard to test, it's usually a good candidate for refactoring. Of course, one problem with that is that the first step of refactoring is making sure the code passes all its tests.
Honestly, though, I'd look to see if there isn't some way you can break that code down into a more manageable section.
Get the book Working Effectively with Legacy Code by Michael Feathers. I'm about a third of the way through it, and it has multiple techniques for dealing with these types of problems.
Your question implies that there are many paths of execution throughout the subsystem. The first idea that pops into mind is "refactor." Even if your API remains a one-method interface, testing shouldn't be "impossible".
trying to create a set of test data that fully exercises all the functionality involved in the computation would be nearly impossible
If that's true, try a less ambitious goal. Start by testing specific, high-usage paths through the code, paths that you suspect may be fragile, and paths for which you've had reported bugs.
Refactoring the method into separate sub-algorithms will make your code more testable (and might be beneficial in other ways), but if your problem is a ridiculous number of interactions between those sub-algorithms, extract method (or extract to strategy class) won't really solve it: you'll have to build up a solid suite of tests one at a time.
How can I test the following method?
It is a method on a concrete class implementation of an interface.
I have wrapped the Process class with an interface that only exposes the methods and properties I need. The ProcessWrapper class is the concrete implementation of this interface.
public void Initiate(IEnumerable<Cow> cows)
{
    foreach (Cow c in cows)
    {
        c.Process = new ProcessWrapper(c);
        c.Process.Start();
        count++;
    }
}
There are two ways to get around this. The first is to use dependency injection. You could inject a factory and have Initiate call the create method to get the kind of ProcessWrapper you need for your test.
The other solution is to use a mocking framework such as TypeMock, which will let you work around this. TypeMock basically allows you to mock anything, so you could use it to provide mock objects instead of the actual ProcessWrapper instances.
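A rough sketch of the first (factory-injection) option; IProcessWrapperFactory and the containing class name are assumptions, and IProcessWrapper stands for the interface the question says already wraps Process:

using System.Collections.Generic;

public interface IProcessWrapperFactory
{
    IProcessWrapper Create(Cow cow);
}

public class CowProcessor
{
    private readonly IProcessWrapperFactory factory;
    private int count;

    public CowProcessor(IProcessWrapperFactory factory)
    {
        this.factory = factory;
    }

    public void Initiate(IEnumerable<Cow> cows)
    {
        foreach (Cow c in cows)
        {
            c.Process = factory.Create(c);   // a test supplies a factory that returns mocks
            c.Process.Start();
            count++;
        }
    }
}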
I'm not familiar with C# (I prefer mine without the hash), but you need some sort of interface to the process (IPC or whatever is the most convenient method) so you can send it test requests and get results back. At the simplest level, you would just send a message to the process and receive the result. Or you could have more granularity and send more specific commands from your test harness. It depends on how you have set up your unit testing environment, more precisely how you send the test commands, how you receive them and how you report the results.
I would personally have a test object inside the process that simply receives, runs & reports the unit test results and have the test code inside that object.
What does your process do? Is there any way you could check that it is doing what it's supposed to do? For example, it might write to a file or a database table. Or it might expose an API (IPC, web-service, etc.) that you could try calling with test data.
From a TDD perspective, it might make sense to plug in a "mock/test process" that performs some action that you can easily check. (This may require code changes to allow your test code to inject something.) This way, you're only testing your invocation code, and not necessarily testing an actual business process. You could then have different unit tests to test your business process.