Testing Classes - When to refactor? - C#

I'm writing a series of automated tests in C# using NUnit and Selenium.
Edit: I am testing an entire website. To begin, I wrote three classes for the three types of members that use the website; these classes contain methods which use Selenium to perform the various actions those members carry out. The classes are then instantiated and their methods called by my test classes with the appropriate inputs.
My questions are:
Does it matter how large my test class becomes? (i.e. thousands of tests?)
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc.?)
I've been trying to read all I can about test design, so if you have any good resources I would appreciate links.

Does it matter how large my test class becomes? (i.e. thousands of tests?)
Yes it does. Tests need to be maintained in the long term, and a huge test class is difficult to understand and maintain.
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc.?)
When you start to feel it is awkward to find a specific test case, or to browse through the tests related to a specific scenario. I don't think there is a hard limit here, just as there is no hard limit for the size of production classes or the number of methods. I personally put the limits higher for test code than for production code, because test code tends to be simpler, so the threshold where it starts to become difficult to understand is higher. But in general, a 1000 line test class with 50 test methods starts to feel too big for me.
I just recently had to work with such a test class, and I ended up partitioning it, so now I have several test classes, each testing one particular method / use case of a specific class*. Some of the old tests I managed to convert into parameterized tests, and all new tests are written as parameterized tests. I found that parameterized tests make it much easier to see the big picture and keep all test cases in mind at once. I did this using JUnit on a Java project, but I see NUnit 2.5 now offers parameterized tests too - you should check it out.
*You may rightly ask whether the class under test shouldn't itself be refactored if we need so many test cases to cover it - yes, it should, eventually. It is the largest class in our legacy app, with way too much stuff in it. But first we need to have the test cases in place :-) By the way, this may apply to your class too - if you need so many test cases to cover it, it might be that the class under test is simply trying to do too much, and you would be better off extracting some of its functionality into a separate class with its own unit tests.
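For illustration, here is a minimal sketch of an NUnit parameterized test using the [TestCase] attribute mentioned above; the DiscountCalculator class and the expected values are made up for the example.

using NUnit.Framework;

[TestFixture]
public class DiscountCalculatorTests
{
    // One test method covers several cases; each [TestCase] row shows up
    // individually in the test runner.
    [TestCase(0, 0)]
    [TestCase(100, 5)]
    [TestCase(1000, 50)]
    public void Discount_IsCalculatedFromOrderTotal(int orderTotal, int expectedDiscount)
    {
        var calculator = new DiscountCalculator();   // hypothetical class under test
        Assert.AreEqual(expectedDiscount, calculator.Discount(orderTotal));
    }
}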

Related

Recommendation for testing a long, complex method

I've got a fairly long and intricate C# method - just shy of 200 lines - that I'm trying to figure out how to test effectively. I've already got about 50 unit tests for this particular method, but I'm not satisfied with them, for two reasons: (1) Experience has shown that they've missed some problematic scenarios, and (2) the tests are complicated enough that I'm having trouble confirming that they're actually testing what I want them to test.
The strategy that I'm adopting to ameliorate this problem is to refactor the method into half a dozen smaller methods, which should individually be easier to test. So far, so good - nothing unusual about this.
But I'm worried about the fact that these new methods - which I should normally make private, as I can't foresee them being used by any other production classes - either (a) need to be public, so that they can be tested, or (b) if I leave them private, I need to jump through weird reflection-style hoops to test them. Since the class in question isn't intended for external consumption, I'm not horribly worried about exposing these ostensibly private methods as public, but it still strikes me as having a weird code smell that I'd prefer to avoid.
What have other folks done in similar scenarios? What sort of strategies should I be adopting to help with this?
Splitting the method up is a good start.
You don't need to make them public. Make the methods internal and use the InternalsVisibleTo attribute to grant your unit test assembly access to them.
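A minimal sketch of that approach, with placeholder assembly, class, and method names (note that if the test assembly is strongly named, the attribute needs the full public key as well):

using System.Runtime.CompilerServices;
using NUnit.Framework;

// In the production assembly (e.g. in AssemblyInfo.cs); "MyProduct.Tests" is a placeholder name.
[assembly: InternalsVisibleTo("MyProduct.Tests")]

// The extracted helper methods stay internal rather than public:
internal static class OrderPricing
{
    internal static decimal ApplyDiscount(decimal total)
    {
        return total * 0.95m;
    }
}

// In the test assembly the internal member is now directly callable:
[TestFixture]
public class OrderPricingTests
{
    [Test]
    public void ApplyDiscount_TakesFivePercentOff()
    {
        Assert.AreEqual(95m, OrderPricing.ApplyDiscount(100m));
    }
}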
If you have a Visual Studio version that supports it, use the "Analyze code coverage" feature to check if you have tested every line.

Strategies for Class/Schema aware test data generation for Data Driven Tests

I've recently started pushing for TDD where I work. So far things are going well. We're writing tests, we're having them run automatically on commit, and we're always looking to improve our process and tools.
One thing I've identified that could be improved is how we set up our Test Data. In strictly unit tests, we often find ourselves instantiating and populating complex CLR objects. This is a pain, and typically the test is then only run on a handful of cases.
What I'd like to push for is Data Driven tests. I think we should be able to load our test data from files or maybe even generate them on the fly from a schema (though I would only consider doing it on the fly if I could generate every possible configuration of an object, and that number of configurations was small). And there is my problem.
I have yet to find a good strategy for generating test data for C# CLR objects.
I looked into generating XML data from XSDs and then loading that into my tests using the DataSourceAttribute. That seemed like a good approach, but I ran into trouble generating the XSD files. xsd.exe falls over because our classes have interface members. I also tried using svcutil.exe on our assembly, but because our code is monolithic the output is huge and tricky (many interdependent .xsd files).
What are other techniques for generating test data? Ideally the generator would follow a schema (maybe an xsd, but preferably the class itself), and could be scripted.
Technical notes (not sure if this is even relevant, but it can't hurt):
We're using Visual Studio's unit testing framework (defined in Microsoft.VisualStudio.TestTools.UnitTesting).
We're using RhinoMocks
Thanks
Extra Info
One reason I'm interested in this is to test an Adapter class we have. It takes a complex and convoluted legacy Entity and converts it to a DTO. The legacy Entity is a total mess of spaghetti and cannot easily be split up into logical sub-units defined by interfaces (as suggested). That would be a nice approach, but we don't have that luxury.
I would like to be able to generate a large number of configurations of this legacy Entity and run them through the adapter. The larger the number of configurations, the more likely my test will fail when the next developer (oblivious to 90% of the application) changes the schema of the legacy Entity.
UPDATE
Just to clarify, I am not looking to generate random data for each execution of my tests. I want to be able to generate data to cover multiple configurations of complex objects. I want to generate this data offline and store it as static input for my tests.
I just reread my question and noticed that I had in fact originally asked for random, on-the-fly generation. I'm surprised I asked for that! I've updated the question to fix that. Sorry about the confusion.
What you need is a tool such as NBuilder (http://code.google.com/p/nbuilder).
This allows you to describe objects, then generate them. This is great for unit testing.
Here is a very simple example (but you can make it as complex as you want):
var products = Builder&lt;Product&gt;
    .CreateListOfSize(10)
    .All()
        .With(x => x.Title = "some title")
        .And(x => x.AnyProperty = RandomlyGeneratedValue())
        .And(x => x.AnyOtherProperty = OtherRandomlyGeneratedValue())
    .Build();
In my experience, what you're looking to accomplish ends up actually being harder to implement and maintain than generating objects in code on a test-by-test basis.
I worked with a client that had a similar issue, and they ended up storing their objects as JSON and deserializing them, with the expectation that it would be easier to maintain and extend. It wasn't. You know what you don't get when editing JSON? Compile-time syntax checking. They just ended up with tests breaking because of JSON that failed to deserialize due to syntax errors.
One thing you can do to reduce your pain is to code to small interfaces. If you have a giant object with a ton of properties, a given method that you'd like to test will probably only need a handful. So instead of your method taking SomeGiantClass, have it take a class that implements ITinySubset. Working with the smaller subset will make it much more obvious what things need to be populated in order for your test to have any validity.
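A rough sketch of that idea, reusing the answer's own placeholder names:

// The giant object implements a tiny interface exposing only what the method needs.
public interface ITinySubset
{
    decimal Balance { get; }
    bool IsFrozen { get; }
}

public class SomeGiantClass : ITinySubset
{
    public decimal Balance { get; set; }
    public bool IsFrozen { get; set; }
    // ...dozens of other properties the method under test never touches...
}

// The method under test asks only for the subset:
public static bool CanWithdraw(ITinySubset account, decimal amount)
{
    return !account.IsFrozen && account.Balance >= amount;
}

// A test now only has to populate two properties (directly, or via a stub/mock).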
I agree with the other folks who have said that generating random data is a bad idea. I'd say it's a really bad idea. The goal of unit testing is repeatability, which goes zooming out the window the second you generate random data. It's a bad idea even if you're generating the data "offline" and then feeding it in. You have no guarantee that the test object that you generated is actually testing anything worthwhile that's not covered in other tests, or if it's testing valid conditions.
More tests doesn't mean that your code is better. 100% code coverage doesn't mean that your code is bug-free and working properly. You should aim to test the logic that you know matters to your application, not try to cover every single imaginable case.
This is a little different from what you are talking about, but have you looked at Pex? Pex will attempt to generate inputs that cover all of the paths of your code.
http://research.microsoft.com/en-us/projects/Pex/
Generating test data is often an inappropriate and not very useful way of testing - particularly if you are generating a different set of test data each time (e.g. randomly), since sometimes a test run will fail and sometimes it won't. It may also be totally irrelevant to what you're doing and will make for a confusing group of tests.
Tests are supposed to help document and formalise the specification of a piece of software. If the boundaries of the software are found by bombarding the system with data, then they won't be documented properly. Tests also provide a way of communicating through code that is different from the code itself, and as a result they are often most useful when they are very specific and easy to read and understand.
That said, if you really want to do it, you can typically write your own generator as a test class. I've done this a few times in the past and it works nicely, with the added bonus that you can see exactly what it's doing. You also already know the constraints of the data, so you avoid the problem of trying to generalise an approach.
From what you say, the pain you are having is in setting up objects. This is a common testing issue - I'd suggest focusing on it by making fluent builders for your common object types. They give you a nice way of filling in less detail every time: you typically provide only the interesting data for a given test case and have valid defaults for everything else. They also reduce the number of dependencies on constructors in test code, which means your tests are less likely to get in the way if you need to refactor those constructors later. You can get a lot of mileage out of that approach, and once you have a lot of builders, their common setup code becomes a natural point for developers to hang reusable test code.
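A minimal sketch of such a builder, assuming a hypothetical Customer type; every field gets a valid default, so a test only spells out the data it cares about:

public class Customer
{
    public Customer(string name, string country, bool isActive)
    {
        Name = name; Country = country; IsActive = isActive;
    }
    public string Name { get; private set; }
    public string Country { get; private set; }
    public bool IsActive { get; private set; }
}

public class CustomerBuilder
{
    // Valid defaults for everything, so tests only override what matters.
    private string _name = "Default Name";
    private string _country = "US";
    private bool _isActive = true;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }
    public CustomerBuilder Inactive() { _isActive = false; return this; }

    public Customer Build() { return new Customer(_name, _country, _isActive); }
}

// Inside a test method, only the interesting detail is spelled out:
var germanCustomer = new CustomerBuilder().From("DE").Build();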
In one system I've worked on, we ended up aggregating all these sorts of things into something that could switch different seams in the application on and off (file access etc.), provided builders for objects, and set up a comprehensive set of fake view classes (for WPF) to sit on top of our presenters. It effectively provided a test-friendly interface for scripting and testing the entire application, from very high-level things down to very low-level things. Once you get there you're really in the sweet spot: you can write tests that effectively mirror button clicks in the application at a very high level, yet the code stays very easy to refactor because there are few direct dependencies on your real classes in the tests.
Actually, there is a Microsoft way of expressing object instances in markup, and that is XAML.
Don't be put off by the WPF focus in the documentation. All you need to do is use the right classes in your unit tests to load the objects.
Why would I do this? Because a Visual Studio project will automatically give you XAML syntax highlighting and probably IntelliSense support when you add such a file.
What would be a small problem? Markup element classes must have parameterless constructors. But that limitation is always present anyway, and there are workarounds (e.g. here).
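A small sketch of what loading XAML-defined test data might look like, using XamlServices from the System.Xaml assembly (.NET 4); the Product class, the ProductAdapter, the XAML namespace, and the file path are all placeholders, not anything from your code base:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Xaml;

[TestClass]
public class AdapterTests
{
    [TestMethod]
    public void Adapter_Handles_Product_Declared_In_Markup()
    {
        // TestData\SimpleProduct.xaml might contain something like:
        // <Product xmlns="clr-namespace:MyApp.Domain;assembly=MyApp"
        //          Title="Widget" Price="9.99" />
        var product = (Product)XamlServices.Load(@"TestData\SimpleProduct.xaml");

        var dto = new ProductAdapter().ToDto(product);   // hypothetical adapter under test
        Assert.AreEqual("Widget", dto.Title);
    }
}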
For reference, have a look at:
Create object from text in XAML, and
How to convert XAML File to objects, and
How to Deserialize XML document.
I wish I could show you something done by me on this matter, but I can't.

How to use unit tests in projects with many levels of indirection

I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with the old adage "every problem in object oriented programming can be solved by introducing a new layer of indirection", this project was sporting multiple layers of indirection. The side effect was that a fair amount of code looked like the following:
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}
Now, because of the emphasis on unit testing and maintaining high code coverage, every piece of code had unit tests written against it. Therefore this little method would have three unit tests present. Those would check:
If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:
NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));
That obviously turned the "red, green, refactor" rule into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework like Moq would help with refactoring, but it would require a sweep through all existing unit tests.
What is the ideal way to handle this situation?
A) Keep smaller levels of layers, so that those forwarding calls do not happen anymore.
B) Do not test those forwarding methods, as they do not contain business logic. For coverage purposes, mark them all with the ExcludeFromCodeCoverage attribute.
C) Test only if proper method is invoked, without checking return values, exceptions, etc.
D) Suck it up, and keep writing those tests ;)
Either B or C. That's the problem with such general requirements ("every method must have a unit test, every line of code needs to be covered") - sometimes the benefit they provide is not worth the cost. If it's something you came up with, I suggest rethinking the approach. The "we must have 95% code coverage" rule might be appealing on paper, but in practice it quickly spawns problems like the one you have.
Also, the code you're testing is something I'd call trivial code. Having 3 tests for it is most likely overkill. For that single line of code, you'll have to maintain like 40 more. Unless your software is mission critical (which might explain high-coverage requirement), I'd skip those tests.
One of the (IMHO) most pragmatic pieces of advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post - What should you test?
Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool to figure out what is NOT covered, so that we can see whether we forgot important unit test cases or actually have some dead code somewhere.)
If I write a test, but the test ends up just being a "duplication" of the implementation - or worse, if it's harder to understand the test than the actual implementation - then such a test should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details; tests are about "what" should happen, not "how" it will be done. Since you've tagged your question with "TDD", I would add that TDD is a design practice. So if I already know, 100% and in advance, what the design of the thing I'm going to implement will be, then there is no point in using TDD and writing unit tests (but I will always have a high-level acceptance test that covers that code). That will often be the case when the thing to design is really simple, like in your example. TDD is not about testing and code coverage, but about helping us design and document our code. There is no point in using a design tool or a documentation tool for designing/documenting simple/obvious things.
In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test doesn't add any value in terms of documentation, so I'd happily erase it.
On top of that, such tests are horribly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term, since they will break any time you want to change the implementation.
What I'd suggest is not to write such tests, but instead to have higher-level component tests or fast integration/acceptance tests that exercise these layers without knowing anything at all about their inner workings.
I think one of the most important things to keep in mind with unit tests is that it doesn't necessarily matter how the code is implemented today, but rather what happens when the tested code, direct or indirect, is modified in the future.
If you ignore those methods today and they are critical to your application's operation, then when someone decides to implement a new balanceProvider at some point down the road, or decides that the redirection no longer makes sense, you will most likely have a failure point.
So, if this were my application, I would first look to reduce the forward-only calls to a bare minimum (reducing the code complexity), then introduce a mocking framework that does not rely on string values for method names.
A couple of things to add to the discussion here.
Switch to a better mocking framework immediately and incrementally. We switched from RhinoMock to Moq about 3 years ago. All new tests used Moq, and often when we change a test class we switch it over. But areas of the code that haven't changed much, or that have huge test classes, are still using RhinoMock, and that is OK. The code we work with from day to day is much better as a result of making the switch. All test changes can happen in this incremental way.
You are writing too many tests. An important thing to keep in mind in TDD is that you should only write code to satisfy a red test, and you should only write a test to specify some unwritten code. So in your example, three tests is overkill, because at most two are needed to force you to write all of that production code. The exception test does not make you write any new code, so there is no need to write it. I would probably only write this test:
[Test]
public void IsOverdraftDelegatesToBalanceProvider()
{
    // RandomBool() is a local test helper returning an arbitrary boolean,
    // so the test pins the delegation rather than one particular value.
    var result = RandomBool();
    providerMock.Setup(p => p.IsOverdraft()).Returns(result);
    Assert.That(myObject.IsOverdraft(), Is.EqualTo(result));
}
Don't create useless layers of indirection. Mostly, unit tests will tell you whether you need indirection. Most indirection needs can be met by the dependency inversion principle, or "couple to abstractions, not concretions". Some layers are needed for other reasons (I make WCF ServiceContract implementations a thin pass-through layer, and I don't test that pass-through). If you see a useless layer of indirection, 1) make sure it really is useless, then 2) delete it. Code clutter has a huge cost over time. ReSharper makes this ridiculously easy and safe.
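To illustrate the "couple to abstractions, not concretions" point, a rough sketch (the IBalanceProvider/IsOverdraft names echo the question; everything else is made up): the consuming class takes the abstraction it needs and uses it inside real logic, so no extra forwarding layer is required just to make the code mockable.

public interface IBalanceProvider
{
    bool IsOverdraft();
}

public class OverdraftNotifier
{
    private readonly IBalanceProvider _balanceProvider;

    public OverdraftNotifier(IBalanceProvider balanceProvider)
    {
        _balanceProvider = balanceProvider;
    }

    // The logic that actually needs the balance uses the abstraction directly,
    // instead of going through a wrapper method that merely forwards the call.
    public string StatusMessage()
    {
        return _balanceProvider.IsOverdraft() ? "Account overdrawn" : "OK";
    }
}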
Also, for meaningful delegation or delegation scenarios you can't get rid of but need to test, something like this makes it a lot easier.
I'd say D) Suck it up, and keep writing those tests ;) and try to see if you can replace NMock with Moq.
It might not seem necessary, and even though it's just delegation now, the tests verify that it's calling the right method with the right parameters and that the method itself isn't doing anything funky before returning values. So it's a good idea to cover them with tests. But to make that easier, use Moq or a similar framework that will make refactoring much less painful.

How to unit test code that is highly complex behind the public interface

I'm wondering how I should be testing this sort of functionality via NUnit.
public void HighlyComplexCalculationOnAListOfHairyObjects()
{
    // calls 19 private methods totalling ~1000 lines code + comments + whitespace
}
From reading, I see that NUnit isn't designed to test private methods, for philosophical reasons about what unit testing should be; but trying to create a set of test data that fully exercises all the functionality involved in the computation would be nearly impossible. Meanwhile the calculation is broken down into a number of smaller methods that are reasonably discrete. They are not, however, things that make logical sense to be done independently of each other, so they're all private.
You've conflated two things: the Interface (which might expose very little) and this particular Implementation class (which might expose a lot more).
Define the narrowest possible Interface.
Define the Implementation class with testable (non-private) methods and attributes. It's okay if the class has "extra" stuff.
All applications should use the Interface, and -- consequently -- don't have type-safe access to the exposed features of the class.
What if "someone" bypasses the Interface and uses the Class directly? They are sociopaths -- you can safely ignore them. Don't provide them phone support because they violated the fundamental rule of using the Interface not the Implementation.
To solve your immediate problem, you may want to take a look at Pex, which is a tool from Microsoft Research that addresses this type of problem by finding all relevant boundary values so that all code paths can be executed.
That said, had you used Test-Driven Development (TDD), you would never have found yourself in that situation, since it would have been near-impossible to write unit tests that drive this kind of API.
A method like the one you describe sounds like it tries to do too many things at once. One of the key benefits of TDD is that it drives you to implement your code from small, composable objects instead of big classes with inflexible interfaces.
As mentioned, InternalsVisibleTo("AssemblyName") is a good place to start when testing legacy code.
Internal methods are still private in the sense that assemblies other than the current assembly cannot see them. Check MSDN for more information.
Another thing would be to refactor the large method into smaller, more clearly defined classes. Check this question I asked about a similar problem, testing large methods.
Personally I'd make the constituent methods internal, apply InternalsVisibleTo and test the different bits.
White-box unit testing can certainly still be effective - although it's generally more brittle than black-box testing (i.e. you're more likely to have to change the tests if you change the implementation).
HighlyComplexCalculationOnAListOfHairyObjects() is a code smell, an indication that the class that contains it is potentially doing too much and should be refactored via Extract Class. The methods of this new class would be public, and therefore testable as units.
One issue with such a refactoring is that the original class held a lot of state that the new class would need. Which is another code smell - one that indicates the state should be moved into a value object.
I've seen (and probably written) many such hairy objects. If a class is hard to test, it's usually a good candidate for refactoring. Of course, one problem with that is that the first step of refactoring is making sure it passes all its tests first.
Honestly, though, I'd look to see if there isn't some way you can break that code down into a more manageable section.
Get the book Working Effectively with Legacy Code by Michael Feathers. I'm about a third of the way through it, and it has multiple techniques for dealing with these types of problems.
Your question implies that there are many paths of execution throughout the subsystem. The first idea that pops into mind is "refactor." Even if your API remains a one-method interface, testing shouldn't be "impossible".
trying to create a set of test data that fully executed all the functionality involved in the computation would be nearly impossible
If that's true, try a less ambitious goal. Start by testing specific, high-usage paths through the code, paths that you suspect may be fragile, and paths for which you've had reported bugs.
Refactoring the method into separate sub-algorithms will make your code more testable (and might be beneficial in other ways), but if your problem is a ridiculous number of interactions between those sub-algorithms, extract method (or extract to strategy class) won't really solve it: you'll have to build up a solid suite of tests one at a time.

Refactoring strategy for the class which generates specific text file

I am a TDD noob and I don't know how to solve the following problem.
I have a pretty large class which generates a text file in a specific format for import into an external system. I am going to refactor this class and I want to write unit tests first.
What should these tests look like? The main goal is to not break the structure of the file. But does that just mean I should compare the contents of the file before and after?
I think you would benefit from a test that I would hesitate to call a "unit test" - although arguably it tests the current text-file-producing "unit". This would simply run the current code and do a diff between its output and a "golden master" file (which you could generate by running the test once and copying to its designated location). If there is much conditional behavior in the code, you may want to run this with several examples, each a different test case. With the existing code, by definition, all the tests should pass.
Now start to refactor. Extract a method - or better, write a test for a method that you can envision extracting, a true unit test - extract the method, and ensure that all tests, for the new small method and for the bigger system, still pass. Lather, rinse, repeat. The system tests give you a safety net that lets you go forward in the refactoring with confidence; the unit tests drive the design of the new code.
There are libraries available to make this kind of testing easier (although it's pretty easy even without them). See http://approvaltests.sourceforge.net/.
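A bare-bones version of such a "golden master" test (without any library), assuming the generator can produce its output as a string; the ExportFileGenerator class, the SampleInput helper, and the file paths are placeholders:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class ExportFileRegressionTests
{
    [Test]
    public void GeneratedFile_Matches_ApprovedMaster()
    {
        var generator = new ExportFileGenerator();          // hypothetical class being refactored
        string actual = generator.Generate(SampleInput());  // hypothetical input fixture

        // The master file was produced once by the known-good code
        // and checked into source control alongside the tests.
        string expected = File.ReadAllText(@"TestData\approved-export.txt");
        Assert.AreEqual(expected, actual);
    }
}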
In such a case I use the following strategy:
Write a test for each method (just covering its default behavior without any error handling etc.)
Run a code coverage tool and find the blocks not covered by the tests. Write tests covering these blocks.
Do this until you get a code coverage of over 80%
Start refactoring the class (mostly extracting smaller classes, following the separation of concerns principle).
Use Test Driven Development for writing the new classes.
Actually, that's a pretty good place to start (comparing a well known output against what is being generated by the current class).
If the single generator class can produce different results, then create one for each case.
This will ensure that you are not breaking your current generator class.
One thing that might help you is if you have the specification document for the current class. You can use that as the base of your refactoring effort.
If you haven't yet, pick up a copy of Michael Feathers' book "Working Effectively with Legacy Code". It's all about how to add tests to existing code, which is exactly what you're looking for.
But until you finish reading the book, I'd suggest starting with a regression test: create the class, have it write the file to disk, and then compare that file to a "known good" file that you've stashed in your source repository somewhere. If they don't match, fail the test.
Then start looking at the interesting decisions that your class makes. See how you can get them under test. Maybe you extract some complicated if-conditions into public functions that return bool, and you write a battery of tests to prove that, given the right inputs, that function returns the right value. Maybe generation of a particular string has some interesting logic; start testing it.
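For example, an extracted condition and its battery of tests might look like this (the rule and all names are invented for the sketch):

// The complicated if-condition is pulled out into a small, public, pure function...
public static bool RequiresQuoting(string field)
{
    return field.Contains(",") || field.Contains("\"") || field.StartsWith(" ");
}

// ...which is then trivial to cover with a battery of parameterized tests:
[TestCase("plain", false)]
[TestCase("a,b", true)]
[TestCase("say \"hi\"", true)]
[TestCase(" leading space", true)]
public void RequiresQuoting_DetectsSpecialContent(string field, bool expected)
{
    Assert.AreEqual(expected, RequiresQuoting(field));
}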
Along the way, you may find objects that want to get out. For example, you may find that the code (or the tests!) would be simpler if there was a separate class that generates a single line of output. Go with it. You've got your regression test to catch you if you screw anything up.
Work relentlessly to remove dependencies (but make sure you've got a higher-level test, like a regression test, to catch you if you make mistakes). If your class creates its own FileStream and writes to the filesystem, change it to take a TextWriter in its constructor instead, so you can write tests that pass in a StringWriter and never touch the file system. Once that's done, you can get rid of the old test that writes a file to disk (but only if you didn't break it while trying to write the new test!) If your class needs a database connection, refactor until you can write a test that passes in fake data. Etc.
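A minimal sketch of the TextWriter refactoring described above; the ReportGenerator class and its output format are placeholders:

using System.IO;
using NUnit.Framework;

public class ReportGenerator
{
    private readonly TextWriter _writer;

    // The writer is injected, so production code can pass a StreamWriter over a real file
    // while tests pass a StringWriter and never touch the file system.
    public ReportGenerator(TextWriter writer)
    {
        _writer = writer;
    }

    public void WriteHeader(string title)
    {
        _writer.WriteLine("REPORT: " + title);
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    [Test]
    public void Header_Contains_Title()
    {
        var output = new StringWriter();
        new ReportGenerator(output).WriteHeader("Q3");
        StringAssert.Contains("Q3", output.ToString());
    }
}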
