TDD approach for complex function - c#

I have a method in a class for which there are a few different outcomes (based upon event responses etc.). But this is a single atomic function which is to be used by other applications.
I have broken down the main blocks of functionality that comprise this function into separate functions and successfully taken a Test Driven Development approach to each of these elements. These elements, however, aren't exposed for other applications to use.
And so my question is: how can/should I approach a TDD-style solution to verifying that the single method that will be called functions correctly, without a lot of duplication in testing or lots of setup required for each test?
I have considered/looked at moving the blocks of functionality into a different class and using mocking to simulate the responses of the functions used, but it doesn't feel right, and the individual methods need to write to variables within the main class (it felt really Heath Robinson).
The code roughly looks like this (I have removed a lot of parameters to make things clearer, along with a fair bit of irrelevant code):
public void MethodToTest(string parameter)
{
    IResponse x = null;
    if (function1(parameter))
    {
        if (!function2(parameter, out x))
        {
            function3(parameter, out x);
        }
    }
    // ...
    // more bits of code here
    // ...
    if (x != null)
    {
        x.Success();
    }
}

I think you would make your life easier by avoiding the out keyword and rewriting the code so that the functions either check some condition on the response, or modify the response, but not both. Something like:
public void MethodToTest(string parameter)
{
    IResponse x = null;
    if (function1(parameter))
    {
        if (!function2Check(parameter, x))
        {
            x = function2Transform(parameter, x);
            x = function3(parameter, x);
        }
    }
    // ...
    // more bits of code here
    // ...
    if (x != null)
    {
        x.Success();
    }
}
That way you can start pulling apart and recombining the pieces of your large method more easily, and in the end you should have something like:
public void MethodToTest(string parameter)
{
    IResponse x = ResponseBuilder.BuildResponse(parameter);
    if (x != null)
    {
        x.Success();
    }
}
... where BuildResponse is where all your current tests will go, and in the test for MethodToTest you can now simply mock the ResponseBuilder.
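To make that concrete, here is a rough sketch of what the final test setup could look like. IResponse is taken from the question, but IResponseBuilder, ServiceUnderTest and the hand-rolled test doubles are illustrative names, not the original code; a mocking library (Moq, NSubstitute) could replace the doubles.

```csharp
// Sketch only: IResponseBuilder, ServiceUnderTest and the doubles are
// illustrative names, not part of the original code.
public interface IResponse
{
    void Success();
}

public interface IResponseBuilder
{
    IResponse BuildResponse(string parameter);
}

public class ServiceUnderTest
{
    private readonly IResponseBuilder _builder;

    public ServiceUnderTest(IResponseBuilder builder) => _builder = builder;

    public void MethodToTest(string parameter)
    {
        IResponse x = _builder.BuildResponse(parameter);
        if (x != null)
        {
            x.Success();
        }
    }
}

// Hand-rolled test doubles
public class SpyResponse : IResponse
{
    public bool SuccessCalled { get; private set; }
    public void Success() => SuccessCalled = true;
}

public class StubBuilder : IResponseBuilder
{
    private readonly IResponse _response;
    public StubBuilder(IResponse response) => _response = response;
    public IResponse BuildResponse(string parameter) => _response;
}
```

With this shape, the test for MethodToTest only has to verify two things: that Success() is called when the builder returns a response, and that nothing blows up when it returns null.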

Your best option would indeed be mocking function1, function2, function3, etc. If you cannot move your functions to a separate class, you could look into moving them into nested classes; nested classes are able to access the data in the outer class. After that you should be able to substitute mocks for the nested classes for testing purposes.
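As a rough illustration of the nested-class idea (all names here are made up): the nested type receives a reference to its outer instance and can then read and write the outer class's private fields directly, and a virtual method gives a test double something to override.

```csharp
// Sketch: Processor and Steps are illustrative names. The nested Steps
// class reaches the outer instance's private state through the reference
// it is given.
public class Processor
{
    private string _state = "";
    private readonly Steps _steps;

    // A test can pass in a Steps subclass with stubbed behaviour.
    public Processor(Steps steps = null) => _steps = steps ?? new Steps(this);

    public void Run(string parameter) => _steps.Step1(parameter);

    public string State => _state;

    public class Steps
    {
        private readonly Processor _owner;
        public Steps(Processor owner) => _owner = owner;

        // Virtual so a test double can override it.
        public virtual void Step1(string parameter)
            => _owner._state = parameter; // nested class writes a private field
    }
}
```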
Update: From looking at your example code I think you could get some inspiration by looking into the visitor pattern and ways of testing that, it might be appropriate.

In this case I think you would just mock the method calls as you mentioned.
Typically you would write your test first, and then write the method in a way so that all of the tests pass. I've noticed that when you do it this way, the code that's written is very clean and to the point. Also, each class is very good about only having a single responsibility that can easily be tested.
I don't know what's wrong, but something doesn't smell right, and I think there may be a more elegant way to do what you're doing.

IMHO, you have a couple options here:
Break the inner functions out into a different class so you can mock them and verify that they are called (which you already mentioned).
It sounds like the other methods you created are private methods, and that this is the only public interface into them. If so, you should be running those test cases through this function and verifying the results (you said those private methods modify variables of the class), instead of testing the private methods directly. If that is too painful, then I would consider reworking your design.
It looks to me like this class is trying to do more than one thing. For example, the first function doesn't return a response but the other two do. In your description you said the function is complex and takes a lot of parameters. Those are both signs that you need to refactor your design.

Compiling many chunks of code into a single method

I have a legacy method which processes various quantities in real time. There is lots of data, and this method is basically a large if/switch mess which decides how to calculate the target value based on certain rules, and does this for each sample received from each device (and there are many of them). Its signature is something like:
double Process(ITimestampedData data, IProcessingRule rule);
where ITimestampedData contains multiple different quantities' values for a single timestamp, while IProcessingRule defines which value to use and how to process it to get the result (which can then be compared to a threshold).
I would like to get rid of all ifs and switches and refactor this into a factory which would create a single processing method for each rule, and then run these methods for input data. Since these rules have various parameters, I would also like to see if there is a way to fully resolve all these branches at compile-time (well, run-time, but I am referring to the point where I invoke the factory method once to "compile" my processing delegate).
So, I have something like this, but much more complex (more mutually-dependent conditions and various rules):
// this runs on each call
double result;
switch (rule.Quantity)
{
    case QuantityType.Voltage:
    {
        Vector v;
        if (rule.Type == VectorType.SinglePhase)
        {
            v = data.Vectors[Quantity.Voltage].Phases[rule.Phase];
            if (rule.Phase == PhaseType.Neutral)
            {
                v = v * 2; // making this up just to make a point
            }
        }
        else if (rule.Type == VectorType.Symmetry)
        {
            v = CalculateSymmetry(data.Vectors);
        }
        if (rule.TargetProperty == PropertyType.Magnitude)
        {
            result = v.Magnitude();
            if (rule.Normalize)
            {
                result /= rule.NominalValue;
            }
        }
    }
    // ... this doesn't end so soon
Into something like this:
// this is a factory method which will return a single delegate
// for each rule - and do it only once, at startup
Func<ITimestampedData, double> GetProcessor(IProcessingRule rule)
{
    Func<ITimestampedData, Vectors> quantityGetter;
    Func<Vectors, Vector> vectorGetter;
    Func<Vector, double> valueGetter;
    quantityGetter = data => data.Vectors[rule.Quantity];
    if (rule.Type == VectorType.SinglePhase)
    {
        if (rule.Phase == PhaseType.Neutral)
            vectorGetter = vectors => 2 * vectors.Phases[rule.Phase];
        else
            vectorGetter = vectors => vectors.Phases[rule.Phase];
    }
    else if (rule.Type == VectorType.Symmetry)
    {
        vectorGetter = vectors => CalculateSymmetry(vectors);
    }
    if (rule.TargetProperty == PropertyType.Magnitude)
    {
        if (rule.Normalize)
            valueGetter = v => v.Magnitude() / rule.NominalValue;
        else
            valueGetter = v => v.Magnitude();
    }
    ...
    // now we just chain all delegates into a single "if-less" call
    return data => valueGetter(vectorGetter(quantityGetter(data)));
}
But the problem is:
I still have lots of repetition inside my method,
I have switched ifs for multiple delegate invocations and performance doesn't get any better,
although this "chain" is fixed and known at the end of the factory method, I still don't have a single compiled method which would process my input.
So, finally, my question is:
Is there a way to somehow "build" the final compiled method from these various chunks of code inside my factory?
I know I can use something like CSharpCodeProvider, create a huge string and then compile it, but I was hoping for something with better compile time support and type checking.
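For reference, System.Linq.Expressions offers roughly this: you compose typed expression fragments per rule and call Compile() once, getting a single delegate with compile-time type checking on the fragments (unlike a CSharpCodeProvider string). A toy sketch, with an illustrative Rule type standing in for IProcessingRule:

```csharp
using System;
using System.Linq.Expressions;

// Sketch: Rule and its fields are illustrative, standing in for
// IProcessingRule. Branches are resolved once, in the factory; the
// compiled delegate contains no ifs for this rule's fixed options.
public class Rule
{
    public bool Normalize;
    public double NominalValue;
}

public static class RuleCompiler
{
    public static Func<double, double> Compile(Rule rule)
    {
        var input = Expression.Parameter(typeof(double), "v");
        Expression body = input;
        if (rule.Normalize)
        {
            // Bake the nominal value in as a constant at "compile" time.
            body = Expression.Divide(body, Expression.Constant(rule.NominalValue));
        }
        return Expression.Lambda<Func<double, double>>(body, input).Compile();
    }
}
```

The same approach extends to the chaining problem: instead of nesting delegate invocations, you splice the sub-expressions into one tree before the single Compile() call, so the final delegate has no delegate-call overhead between stages.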
Factories
The switch statement is usually a bad smell in code, and your feelings about it are completely right. But a factory is a perfectly valid place for switch statements. Just don't forget that a factory's responsibility is to construct objects, so make sure any extra logic stays outside of the factory. Also, don't confuse Factories with Factory Methods. The first are used when you have a group of polymorphically interchangeable classes and your factory decides which one to use; this also helps to break dependencies. Factory methods, by contrast, are more like static constructors that know about all the dependencies of a constructed object. I recommend being careful with factory methods and preferring proper Factory classes instead. Consider this in terms of SRP: a Factory's responsibility is to construct the object, while your class has some business responsibility. When you use a Factory Method, your class gets two responsibilities.
Indentation
There is a good rule I try to follow, called "one indentation level per method". That means you can have only one more level of indentation beyond the root one. This is a valid and readable piece of code:
function something() {
    doSomething();
    if (isSomethingValid()) {
        doSomethingElse();
    }
    return someResult();
}
Try to follow this rule by extracting private methods, and you will see that the code becomes much clearer.
If/Else statements
The else statement is always optional: you can always refactor your code to avoid it. The solution is simple - use early returns. Your methods will become much shorter and far more readable.
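A small illustrative example of the early-return refactoring (names and logic are made up):

```csharp
public static class EarlyReturnDemo
{
    // Nested if/else version
    public static string ClassifyNested(int value)
    {
        string result;
        if (value < 0)
        {
            result = "negative";
        }
        else
        {
            if (value == 0)
            {
                result = "zero";
            }
            else
            {
                result = "positive";
            }
        }
        return result;
    }

    // Early-return version: no else, one indentation level
    public static string Classify(int value)
    {
        if (value < 0) return "negative";
        if (value == 0) return "zero";
        return "positive";
    }
}
```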
Probably my answer is not good enough to solve all your problems, but at least it gives you some ideas to think about.
If you are working with legacy code, I strongly recommend reading the book "Working Effectively with Legacy Code" by Michael Feathers and, of course, "Refactoring" by Martin Fowler.
Think about having your rules carry more functionality inside them.
You know which rule you want because you passed it in. But then your current code asks the rule about itself to determine which calculation to do. I suggest you make the rules more intelligent and ask the rule for the result.
For example the rule that does the most calculations is the SinglePhaseNeutralVoltageMagnitudeNormalizedRule.
class SinglePhaseNeutralVoltageMagnitudeNormalizedRule : IProcessingRule
{
    public double calculate(ITimestampedData data)
    {
        double result;
        Vector v;
        v = data.Vectors[Quantity.Voltage].Phases[Phase];
        v = v * 2; // making this up just to make a point
        result = v.Magnitude();
        result /= NominalValue;
        return result;
    }
}
So the Process method becomes much simpler
result = rule.calculate(data);
A factory class, as suggested by @SergeKuharev, could be used to build the rules if there is much complexity there. Also, if there is much common code between the rules themselves, it could be refactored to a common place.
For example, Normalization could be a rule that simply wraps another rule.
class NormalizeRule : IProcessingRule
{
    private IProcessingRule priorRule;
    private double nominalValue;

    public NormalizeRule(IProcessingRule priorRule, double nominalValue)
    {
        this.priorRule = priorRule;
        this.nominalValue = nominalValue;
    }

    public double calculate(ITimestampedData data)
    {
        return priorRule.calculate(data) / nominalValue;
    }
}
So given that, and a class SinglePhaseNeutralVoltageMagnitudeRule (as above, less the /= NominalValue), a factory could combine the two to make a SinglePhaseNeutralVoltageMagnitudeNormalizedRule by composition.
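A simplified sketch of such a factory. The double-based Calculate signature here stands in for the original ITimestampedData-based one, and the rule and factory names are illustrative:

```csharp
using System;

// Sketch: a factory that composes rules by decoration. MagnitudeRule
// stands in for SinglePhaseNeutralVoltageMagnitudeRule.
public interface IProcessingRule
{
    double Calculate(double input);
}

public class MagnitudeRule : IProcessingRule
{
    public double Calculate(double input) => Math.Abs(input);
}

public class NormalizeRule : IProcessingRule
{
    private readonly IProcessingRule _inner;
    private readonly double _nominalValue;

    public NormalizeRule(IProcessingRule inner, double nominalValue)
    {
        _inner = inner;
        _nominalValue = nominalValue;
    }

    public double Calculate(double input) => _inner.Calculate(input) / _nominalValue;
}

public static class RuleFactory
{
    public static IProcessingRule Create(bool normalize, double nominalValue)
    {
        IProcessingRule rule = new MagnitudeRule();
        if (normalize)
        {
            rule = new NormalizeRule(rule, nominalValue); // wrap by composition
        }
        return rule;
    }
}
```

Each rule class is now tiny and testable on its own, and the factory's only job is deciding how to stack them.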

unit testing c# and issue with testing a private method

I want to test GetParameters() to assert that the returned value includes "test=". Unfortunately the method responsible for this is private. Is there a way I can provide test coverage for this? The problem I have is highlighted below:
if (info.TagGroups != null)
The problem is that info.TagGroups equals null in my test
Thanks,
Test
[Test]
public void TestGetParameters()
{
    var sb = new StringBuilder();
    _renderer.GetParameters(sb);
    var res = sb.ToString();
    Assert.IsTrue(res.IndexOf("test=") > -1, "blabla");
}
Implementation class to test
internal void GetParameters(StringBuilder sb)
{
    if (_dPos.ArticleInfo != null)
    {
        var info = _dPos.ArticleInfo;
        AppendTag(sb, info);
    }
}

private static void AppendTag(StringBuilder sb, ArticleInfo info)
{
    if (info.TagGroups != null) // PROBLEM - TagGroups in test equals null
    {
        foreach (var tGroups in info.TagGroups)
        {
            foreach (var id in tGroups.ArticleTagIds)
            {
                sb.AppendFormat("test={0};", id);
            }
        }
    }
}
Testing private methods directly (as opposed to indirectly through your class's public API) is generally a bad idea. However, if you need to, you can do this via reflection.
A better choice would be to refactor your code to be more testable. There are any number of ways to do this; here's one option:
It looks like the logic you want to test is in AppendTag. This is a static method, and doesn't modify any state, so why not make it public and callable by tests directly?
In the same way, you could also make GetParameters public static by giving it an additional ArticleInfo parameter.
You can make internals visible to your testing project: in AssemblyInfo, use the InternalsVisibleToAttribute.
DISCLAIMER:
Testing private/internal methods is bad! It's not TDD and in most cases this is a sign to stop and rethink design. BUT in case you're forced to do this (legacy code that should be tested) - you can use this approach.
Yes, you just need to inject appropriate contents of _dPos.ArticleInfo into your class. This way you can ensure that all paths in the private method are covered. Another possibility is to rewrite this simple code using TDD - this will give you nearly 100% coverage.
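A minimal sketch of that injection approach. The TagGroup, ArticleInfo and DPos types below are reconstructions implied by the snippet, not the real ones; the point is that once _dPos is constructor-injected, the test drives the private AppendTag path purely through public setup:

```csharp
using System.Collections.Generic;
using System.Text;

// Sketch: reconstructed types, just enough to exercise the null-guarded path.
public class TagGroup
{
    public List<int> ArticleTagIds { get; set; } = new List<int>();
}

public class ArticleInfo
{
    public List<TagGroup> TagGroups { get; set; }
}

public class DPos
{
    public ArticleInfo ArticleInfo { get; set; }
}

public class Renderer
{
    private readonly DPos _dPos;

    // Inject the data rather than reaching for reflection in the test.
    public Renderer(DPos dPos) => _dPos = dPos;

    internal void GetParameters(StringBuilder sb)
    {
        if (_dPos.ArticleInfo != null)
        {
            AppendTag(sb, _dPos.ArticleInfo);
        }
    }

    private static void AppendTag(StringBuilder sb, ArticleInfo info)
    {
        if (info.TagGroups != null)
        {
            foreach (var group in info.TagGroups)
                foreach (var id in group.ArticleTagIds)
                    sb.AppendFormat("test={0};", id);
        }
    }
}
```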
Also note, that in general you shouldn't really care about how the private method works. As long as your class exposes desired behavior correctly everything is fine. Thinking too much about internals during testing makes your tests coupled with the details of your implementation and this makes tests fragile and hard to maintain.
See this and this for example, on why not to test implementation details.
UPDATE:
I really don't get why people always go with the InternalsVisibleTo approach and other minor arguments encouraging the testing of private methods. I'm not saying it's always bad, but most of the time it is. The fact that some exceptional cases force you to test private methods doesn't mean this should be advised in general. So yes, do present a solution that makes testing implementation details possible, but only after you've described what the valid approach is in general. There are tons of blog posts and SO questions on this matter; why do we have to go through this over and over again? (Rhetorical question.)
In your Properties/AssemblyInfo.cs, add:
[assembly: InternalsVisibleTo("**Insert your test project here**")]
EDIT
Some people like testing internals and private methods and others don't. I personally prefer it, and therefore I use InternalsVisibleTo. However, internals are internal for a reason, and this should therefore be used with caution.

What is this pattern called, and how should it be tested?

In brief: What is the pattern called in the following code, and how should it be tested?
The purpose of the code is to encapsulate a number of actions on a zip file (written in C#, although the pattern is language-independent):
public class ZipProcessor
{
    public ZipProcessor(string zipFilePath) { ... }

    public void Process()
    {
        this.ExtractZip();
        this.StepOne();
        this.StepTwo();
        this.StepThree();
        this.CompressZip();
    }

    private void ExtractZip() { ... }
    private void CompressZip() { ... }
    private void StepOne() { ... }
    private void StepTwo() { ... }
    private void StepThree() { ... }
}
The actual class has around 6 steps, and each step is a short method, 5-15 lines long. The order of the steps is not important, but Extract and Compress must always come first and last respectively. Also, StepTwo takes much longer to run than the rest of the steps.
The following are options I can think of for testing the class:
Only call the public Process method, and only check the result of one step in each test method (pro: clean, con: slow, because each test method calls StepTwo, which is slow, even though it doesn't care about the result of StepTwo)
Test the private steps directly using an accessor or wrapper (pro: simple, clear relation to what is run during a test and what is actually tested, con: still slow: extracts and compresses multiple times, hacky: need to use a private accessor or dynamic wrapper, or make the steps internal in order to access them)
Have only one test method that calls a bunch of smaller helper test methods (pro: fast, models the class more closely, con: violates "one assert per test method", would still need to run multiple times for different scenarios, ex. StepOne has different behavior based on input)
I'm a bit late to the discussion, but this is a Sergeant Method.
A quick Google search returns: "We call the bigger method here the 'sergeant' method, which basically calls other private methods and marshals them. It may have bits and pieces of code here and there. Each of these private methods is about one particular thing. This promotes cohesion and makes the sergeant method read like comments."
As for how you can test it: your example presumably violates SRP, because you have a zip compressor/decompressor (one thing) and then StepOne/StepTwo/StepThree. You could extract the private method invocations into other classes and mock those for your test.
I disagree that Chain-of-Responsibility makes much sense here - the compressor shouldn't need to know about the decompressor (unless they're the same class) or the reason why it's doing the decompression. The processing classes (Step1/2/3) shouldn't care that the data they're working with was compressed before, etc.
The strategy pattern doesn't really make sense either - just because you can swap the implementation of extractZip or compressZip doesn't mean you have a strategy pattern.
There's not really a pattern reflected here, but you could rewrite your code to use the Strategy or Chain of Responsibility patterns, as pointed out by Paul Michalik. As is, you basically just have a custom workflow defined for your application's needs. Using the Chain of Responsibility pattern, each step would be its own class which you could test independently. You may then want to write an integration test which ensures the whole process works end-to-end (a component- or acceptance-level test).
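A rough sketch of the extraction (IZipStep, DelegateStep and the constructor-injected step list are illustrative, not the original design):

```csharp
using System;
using System.Collections.Generic;

// Sketch: each step becomes its own class, so it can be tested (or mocked)
// in isolation, and ZipProcessor just runs the sequence.
public interface IZipStep
{
    void Execute(string workingDirectory);
}

public class ZipProcessor
{
    private readonly IEnumerable<IZipStep> _steps;

    public ZipProcessor(IEnumerable<IZipStep> steps) => _steps = steps;

    public void Process(string workingDirectory)
    {
        // Extract/compress could bracket this loop as fixed first/last steps.
        foreach (var step in _steps)
        {
            step.Execute(workingDirectory);
        }
    }
}

// A small adapter that makes ad-hoc steps easy to build in tests.
public class DelegateStep : IZipStep
{
    private readonly Action<string> _action;
    public DelegateStep(Action<string> action) => _action = action;
    public void Execute(string workingDirectory) => _action(workingDirectory);
}
```

The slow StepTwo then only runs in its own tests, while ZipProcessor's test just checks ordering with cheap fakes.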
Consider splitting it into two or more classes (ExtractZip/Compress and the processing), or one ExtractZip/Compress class with a delegate in the middle.
It's a Strategy. Depending on your testing scenario you can derive from ProcessorMock (or implement it if it's an interface) and override the methods not relevant to the test with proper stubs. However, usually the more flexible pattern for such cases is Chain of Responsibility...

Is there a way to protect Unit test names that follows MethodName_Condition_ExpectedBehaviour pattern against refactoring?

I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit-tests that test specific methods.
for example:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis(){...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests.
Is there a way to prevent such cases from appearing after renaming - like changing ReSharper settings, or following a better unit-test naming convention?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
    public class GetCity
    {
        public void TakesParidId_ReturnsParis()
        {
            //...
        }

        // More GetCity tests
    }
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes,
you'll only have to change the name of the inner class instead of all
the individual tests.
I also started with this convention; however, I ended up feeling that it is not very good. Now I use BDD-style names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interfaces names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style, we're forced to think about what we want from the object much earlier in the process, and making that the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();
    var result = cityObtainer.GetCity("paris");
    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should have to go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum, if we're writing a test with TDD in mind, then something is happening inside GetCity() that has a significant amount of logic going on. Thinking about the test as being to a contract helps us to separate the intention from the implementation - the test will stay valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using at least xUnit).
First, create a FactFor attribute that extends the xUnit Fact:
public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
    public int Just2() => 2;
}

public class ATests
{
    [FactFor(nameof(A.Just2))]
    public void Should_Return2()
    {
        var a = new A();
        a.Just2().Should().Be(2);
    }
}
The result is that the test runner displays the test as Just2_Should_Return2.

Unit-Testing delegating methods

Is there any point in unit-testing a method whose only job is to delegate work to another object? Example:
class abc {
    ...
    public void MoveLeft()
    {
        fallingPiece.MoveLeft();
    }
    ...
}
I am writing unit tests for some existing classes I have, for learning purposes. It seems kind of odd to write a unit test for this MoveLeft() method, for example. But I am unsure how it would have gone had I done test-first.
Thanks
Will your code break if I do this? If it would, then you need a test to catch it.
class abc {
    ...
    public void MoveLeft()
    {
        // fallingPiece.MoveLeft();
    }
    ...
}
Assumptions: abc is a public/exposed type and fallingPiece is a dependency. If this holds, then you need a test for the MoveLeft behavior. If abc isn't a public type, then you need a test for the public type XYZ that uses abc as a collaborator/dependency. You don't test it directly, but it still needs to be tested.
My understanding of unit tests is that they are there to ensure that the logic inside a method stays the same when you didn't intend for it to change, and this method has no logic in it. We have a lot of pass-through methods like that in the code base where I work. Ostensibly, they're "Controller" classes, but in most cases all they do is pass through to the data layer.
Yes, you can unit test them, assuming you have a way to mock fallingPiece. If you actually plan on expanding the MoveLeft method to include logic, it's probably a good idea.
However, to my comment above, it's probably a better move to just inline the method until you actually need to introduce logic around moving left.
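A minimal sketch of that mockable shape, using a hand-rolled spy. IFallingPiece and the constructor injection are assumptions (the original field would need to be injectable); with Moq, verifying the call with mock.Verify would be the library equivalent of the spy's counter:

```csharp
// Sketch: IFallingPiece and the spy are illustrative; the original class
// would need its dependency made injectable for this to work.
public interface IFallingPiece
{
    void MoveLeft();
}

public class Abc
{
    private readonly IFallingPiece _fallingPiece;

    public Abc(IFallingPiece fallingPiece) => _fallingPiece = fallingPiece;

    // Pure delegation: the test only checks that the call is forwarded.
    public void MoveLeft() => _fallingPiece.MoveLeft();
}

// Hand-rolled spy that counts forwarded calls.
public class SpyPiece : IFallingPiece
{
    public int MoveLeftCalls { get; private set; }
    public void MoveLeft() => MoveLeftCalls++;
}
```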
One case where this method can fail is when fallingPiece is null at the time abc.MoveLeft is called. Having a test case that builds a legitimate abc and calls abc.MoveLeft might be a good idea.
Something like:
[Test]
public void CanMoveLeft()
{
    Abc abc = new Abc();
    abc.MoveLeft();
    // Assert that abc.fallingPiece has moved to the left
}
