How to properly unit test inequality - C#

So one of the goals of unit testing is to make sure that future changes/refactors won't break existing functionality. Suppose we have the following method:
public bool LessThanFive(double a) {
    return a < 5;
}
One way to unit test this is as follows:
[TestMethod]
public void LessThanFiveTests_True() {
    const double a = 4;
    Assert.IsTrue(LessThanFive(a));
}
The problem with this unit test is that if, later on, someone changes the < into <= in the LessThanFive method, the test will still pass.
What if we have a DateTime instead of a double?

Your assumption seems to be that you write one test, and that test catches all possible bugs. The opposite is closer to the truth: for each potential bug, you need one test to catch it. In practice, certainly, many tests will each catch a number of potential bugs, but I exaggerated a bit to convey the idea.
So, to catch the potential bug where the < is turned into a <=, you need a second test case that tries the function with 5. This is, by the way, a so-called boundary test, and boundary testing is one well-known method for deriving a set of useful test cases.
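For example, the boundary test could look like this (a sketch in the same MSTest style as the question; the test name is mine):
[TestMethod]
public void LessThanFiveTests_BoundaryIsFalse() {
    const double a = 5; // exactly on the boundary
    Assert.IsFalse(LessThanFive(a)); // fails if < is ever changed to <=
}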
There are many more test design techniques, such as equivalence class partitioning, classification trees, coverage-based testing, etc. The underlying goal of these methods is to guide you in developing a test suite where, ideally, for every potential bug a corresponding test case exists that will detect that bug.


How to write unit test first and code later?

I am new to unit testing and have read several times that we should write the unit test first and then the actual code. As of now, I am writing my methods first and then unit testing the code.
If you write the tests first...
You tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type of development and keeps you focused on solving the problem, not working on meta-problems.
If you write the code first...
You will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Sounds good to me. However, how do I write unit tests before even having my code in place?
Am I taking the advice too literally? Does it mean that I should have my POCO classes and interfaces in place and then write the unit tests?
Can anyone explain to me how this is done, with a simple example of, say, adding two numbers?
It's simple really. Red, Green, Refactor.
Red means - your code is completely broken. The editor shows red and the test doesn't pass. Why? You haven't written any code yet.
Green means - Your application builds and the test passes. You've added the required code.
Refactor means - clean it up and make sure the test passes.
You can start by writing a test somewhat like this:
[TestMethod]
public void Can_Create_MathClass() {
    var math = new MathClass();
    Assert.IsNotNull(math);
}
This will fail (RED). How do you fix it? Create the class.
public class MathClass {
}
That's it. It now passes (GREEN). Next test:
[TestMethod]
public void Can_Add_Two_Numbers() {
    var math = new MathClass();
    var result = math.Add(1, 2);
    Assert.AreEqual(3, result);
}
This also fails (RED). Create the Add method:
public class MathClass {
    public int Add(int a, int b) {
        return a + b;
    }
}
Run the test. This will pass (GREEN).
Refactoring is a matter of cleaning up the code. It also means you can remove redundant tests. We know we have the MathClass now, so you can completely remove the Can_Create_MathClass test. Once that is done, you've passed REFACTOR and can continue on.
It is important to remember that the Refactor step doesn't just mean your normal code. It also means tests. You cannot let your tests deteriorate over time. You must include them in the Refactor step.
When you create your tests first, before the code, you will find it much easier and faster to create your code. The combined time it takes to create a unit test and to create some code to make it pass is about the same as just coding it up straight away. But if you write the unit tests first, you don't need to create them after the code, saving you some time now and a lot more later.
Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.
The code you will create is simple and concise, implementing only the features you wanted. Other developers can see how to use this new code by browsing the tests. Input whose results are undefined will be conspicuously absent from the test suite.
There is also a benefit to system design. It is often very difficult to unit test some software systems. These systems are typically built code first and testing second, often by a different team entirely. By creating tests first your design will be influenced by a desire to test everything of value to your customer. Your design will reflect this by being easier to test.
Let's take a slightly more advanced example: You want to write a method that returns the largest number from a sequence.
Firstly, write one or more unit tests for the method to be tested:
int[] testSeq1 = {1, 4, 8, 120, 34, 56, -1, 3, -13};
Assert.That(MaxOf(testSeq1) == 120);
And repeat for some more sequences. Also include a null parameter, a sequence with one element, and an empty sequence, and decide whether an empty sequence or a null parameter should throw an exception (and, if so, ensure that the unit test expects that exception).
It is during this process that you need to decide the name of the method and the type of its parameters.
At this point, it won't compile.
Then write a stub for the method:
public int MaxOf(IEnumerable<int> sequence)
{
    return 0;
}
At this point it compiles, but the unit tests fail.
Then implement MaxOf() so that those unit tests now pass.
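A minimal implementation that makes such tests pass might look like the following sketch; it assumes you decided that a null parameter and an empty sequence should both throw:
public int MaxOf(IEnumerable<int> sequence)
{
    if (sequence == null)
        throw new ArgumentNullException(nameof(sequence));

    using (var e = sequence.GetEnumerator())
    {
        // An empty sequence has no maximum; throwing is one possible decision.
        if (!e.MoveNext())
            throw new ArgumentException("sequence must not be empty", nameof(sequence));

        int max = e.Current;
        while (e.MoveNext())
            if (e.Current > max)
                max = e.Current;
        return max;
    }
}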
Doing it this way around ensures that you immediately focus on the usability of the method, since the very first thing you try to do is to use it - before even beginning to write it. You might well decide to change the method's declaration slightly at this point, based on the usage pattern.
A real world example would apply this approach to using an entire class rather than just one method. For the sake of brevity I have omitted the class from the example above.
It is possible to write the unit tests before you write any code - Visual Studio has features to generate method stubs from the code you've written in your unit test. Doing it this way around can also help you understand the methods that the object will need to support; sometimes this can aid later enhancements (if you had a save-to-disk method that you also overloaded to save to a Stream, that would be more testable and would aid spooling over the network if required later on).

What's the best practice in organizing test methods that cover steps of a behavior?

I have a FileExtractor class with a Start method that performs several steps.
I've created a test class called "WhenExtractingInvalidFile.cs" within my folder called "FileExtractorTests" and added some test methods inside it, as below, which should verify the steps of the Start() method:
[TestMethod]
public void Should_remove_original_file()
{
}
[TestMethod]
public void Should_add_original_file_to_errorStorage()
{
}
[TestMethod]
public void Should_log_error_locally()
{
}
This way, it'd nicely organize the behaviors and the expectations that should be met.
The problem is that most of the logic of these test methods is the same, so should I create one test method that verifies all the steps, or keep them separate as above?
[TestMethod]
public void Should_remove_original_file_then_add_original_file_to_errorStorage_then_log_error_locally()
{
}
What's the best practice?
While it's commonly accepted that the Act section of tests should only contain one call, there's still a lot of debate over the "One Assert per Test" practice.
I tend to adhere to it because:
when a test fails, I immediately know (from the test name) which of the multiple things we want to verify on the method under test went wrong.
tests that involve mocking are already harder to read than regular tests; they can easily become arcane when you're asserting against multiple mocks in the same test.
If you don't follow it, I'd at least recommend that you include meaningful Assert messages in order to minimize head scratching when a test fails.
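If the duplicated setup is what bothers you, you can keep one assert per test and still avoid repeating yourself by moving the shared Arrange/Act steps into a [TestInitialize] method. A sketch using MSTest and Moq, where IFileSystem, IErrorStorage, ILogger and the FileExtractor constructor signature are all assumed for illustration:
[TestClass]
public class WhenExtractingInvalidFile
{
    private Mock<IFileSystem> fileSystem;     // hypothetical dependency
    private Mock<IErrorStorage> errorStorage; // hypothetical dependency
    private Mock<ILogger> logger;             // hypothetical dependency

    [TestInitialize]
    public void GivenAnInvalidFileWasExtracted()
    {
        fileSystem = new Mock<IFileSystem>();
        errorStorage = new Mock<IErrorStorage>();
        logger = new Mock<ILogger>();

        var extractor = new FileExtractor(fileSystem.Object, errorStorage.Object, logger.Object);
        extractor.Start("invalid.dat"); // the shared Act happens exactly once, here
    }

    [TestMethod]
    public void Should_remove_original_file()
    {
        fileSystem.Verify(fs => fs.Delete("invalid.dat"));
    }

    [TestMethod]
    public void Should_add_original_file_to_errorStorage()
    {
        errorStorage.Verify(es => es.Add("invalid.dat"));
    }

    [TestMethod]
    public void Should_log_error_locally()
    {
        logger.Verify(l => l.LogError(It.IsAny<string>()));
    }
}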
I would do the same thing you did, but with an _on_start postfix, like this: Should_remove_original_file_on_start. The single combined method, by contrast, will give you at most one assert failure, even though all aspects of Start could be broken.
DRY - don't repeat yourself.
Think about the scenario where your test fails. What would be most useful from a test-organization point of view?
The second dimension to look at is maintenance. Do you want to run and maintain one test or n tests? Don't overburden development by writing many tests that have little value (I think TDD is a bit overrated for this reason). A test is more valuable if it exercises a larger path through the code rather than a short one.
In this particular scenario I would create a single test. If this test fails often and you are not reaching the root cause of the problem fast enough, refactor it into multiple tests.

How to break on UnitTesting.Assert.AreEqual?

The standard behavior of an assertion in a VS unit test project is to just tell me that a test has failed on a specific line.
Sometimes, however, it would be convenient if I could break when such an assertion fails, because just setting a breakpoint on that line also breaks for test cases that are not failing. If I could break on the assertion failing, I would see immediately which test case is failing.
How can I temporarily tell VS to break when an assertion fails?
E.g., I have many loops like the following in my unit tests and want to know which iteration is failing:
foreach (var testcase in testcases)
{
    Assert.AreEqual(testcase.ExpectedOutputData, FunctionUnderTest(testcase.InputData));
}
Just use a conditional breakpoint, and give it the logical negative of the assertion's check.
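For the loop in the question, the breakpoint condition is simply the negation of the assertion's check, something along these lines (assuming ExpectedOutputData compares meaningfully with Equals):
!testcase.ExpectedOutputData.Equals(FunctionUnderTest(testcase.InputData))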
Personally, I think breakpoints are seriously underused as nothing more than a 'break at this point' tool. They're very versatile, and can easily be used instead of Console.WriteLine for showing debug output without actually modifying the code, as well as for breaking only when an assertion would fail.
They do tend to cause a bit of a performance hit if overused this way, but that's rarely a problem when running unit tests.
Without having too much knowledge of how you're constructing your unit tests, it seems to me that if you are struggling to see which assertion is causing your unit test to fail, then perhaps you are testing too much in your individual tests. Unit tests should be as atomic as possible, precisely to guard against this issue. If you can break your tests up, even down to having a single assertion within a single test, it will be much easier to work out which test is failing. I would certainly advocate that over writing breakpoint-specific code into your unit tests.
If you use NUnit, you can use test cases, which lets you debug each case separately:
[TestCase(0)]
[TestCase(1)]
public void NunitTestCases(int expected)
{
    Assert.AreEqual(expected, 0);
}
I guess you can always do this, though (Debugger.Break lives in System.Diagnostics):
[Test]
public void BreakTest()
{
    for (int i = 0; i < 2; i++)
    {
        bool condition = i == 0;
        if (!condition)
            Debugger.Break(); // halts the debugger just before the failing assert
        Assert.IsTrue(condition);
    }
}

How to write Test Cases?

I want to learn how to write test cases before writing the code. I have read an article about test-driven development, and I wonder how developers write test cases. For example, take this method:
public int divideNumbers(int num1, int num2)
{
    return num1 / num2;
}
We start with a blank project now. You want to do something, say divide two numbers. So you write a test describing what you want to do:
Assert.That(divide(10, 2), Is.EqualTo(5));
This test gives you an entry point: it describes the acceptable interface of the divide method. So you proceed to implement it as int divide(int x, int y) for example.
Write tests that describe what you expect to get from your code. You don't need to think much about it. The most normal way of writing your expectation is probably the best way to design your code, and then you can implement it to satisfy your test.
There are a few steps to testing. From MSDN:
Testing Overview
Unit Testing
Integration Testing
In your case:
Assert.AreEqual(2, divideNumbers(8, 4));
The Assert class verifies conditions in unit tests using true/false propositions. You should write your test cases around the results you expect. You can use the TestMethod attribute for your test methods. There is a good post about creating unit tests for your C# code; study it well.
Start with a stub of the function/class/component that you want to develop. It should compile, but deliberately not (yet) do what it is supposed to do.
For example:
public int divideNumbers(int num1, int num2)
{
    throw new NotImplementedException();
}
or
return -42;
Think about the intended behavior, treating the stub as an interface to a black box. Don't care about the implementation (yet). Think about the "contract" of the interface: X goes in, Y goes out.
Identify standard cases and important edge cases. Write tests for them.
For integer division (assuming we were writing it from scratch) there are actually quite a few cases to consider: with and without remainder, n/1, n/0, 0/n, 0/0, negative numbers, etc.
Assert.IsTrue(divideNumbers(4,4) == 1);
Assert.IsTrue(divideNumbers(4,3) == 1);
Assert.IsTrue(divideNumbers(4,2) == 2);
Assert.IsTrue(divideNumbers(4,1) == 4);
Assert.Throws<ArgumentException>(() => divideNumbers(4,0));
Assert.IsTrue(divideNumbers(0,4) == 0);
Assert.Throws<ArgumentException>(() => divideNumbers(0,0));
Assert.IsTrue(divideNumbers( 4,-2) == -2);
Assert.IsTrue(divideNumbers(-4, 2) == -2);
Assert.IsTrue(divideNumbers(-4,-2) == 2);
Assert.IsTrue(divideNumbers( 4,-3) == -1);
Assert.IsTrue(divideNumbers(-4, 3) == -1);
Assert.IsTrue(divideNumbers(-4,-3) == 1);
Compile and run the unit tests. Do they all fail? If not, why? Maybe one of the tests is not working as intended (tests can be buggy, too!).
Now, start to implement until no test fails anymore.
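One implementation that satisfies the tests above could look like the following sketch; reporting division by zero as an ArgumentException (rather than letting the built-in DivideByZeroException escape) is one of the design decisions those tests pin down:
public int divideNumbers(int num1, int num2)
{
    if (num2 == 0)
        throw new ArgumentException("num2 must not be zero", nameof(num2));
    return num1 / num2; // C# integer division truncates toward zero
}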
Start by realizing the difference between theory and practice.
Any real-life system will have parts that are easily created via TDD and parts that are not.
The latter group is everything that depends on the environment, when working on a system that does not seek to abstract away environmental assumptions but pragmatically accepts them.
This group can be developed in a TDD fashion, but it will require additional tooling and extensions to the software factory.
For .Net this would be tooling and extensions such as MS virtual Test Lab and SpecFlow.
What I am trying to communicate is that it depends.
For very simple component/unit testing, the idea would be to write a failing test case before writing the code to be tested, and to end development when the test runs successfully.
For integration testing and beyond (system testing), you will need to consider, among other things, how to bring the test environment into some known state in addition to considering what to test for.

Which tests to make for this little method?

I currently have the following method:
public void SetNewRandomValue() {
    double newValue = numberGenerator.GenerateDouble(
        genesValuesInterval.MinimumValue,
        genesValuesInterval.MaximumValue
    );
    this.value = newValue;
}
What should the guidelines be for deciding how many tests (and which tests) to write for this method? I have currently written the following one (only after implementing the method -- that is, not test-first):
var interval = new Interval(-10, 10);
var numberGeneratorMock = new Mock<INumberGenerator>(MockBehavior.Strict);
var numberGenerator = numberGeneratorMock.Object;
double expectedValue = 5.0;
numberGeneratorMock.Setup(ng =>
        ng.GenerateDouble(interval.MinimumValue, interval.MaximumValue))
    .Returns(expectedValue);
var gene = new Gene(numberGenerator, 0, new Interval(-10, 10));
gene.SetNewRandomValue();
Assert.AreEqual<double>(expectedValue, gene.Value);
that basically just tests one situation. Regression-testing-wise, I'd say I can't think of a way of messing up the code, turning it into malfunctioning code, and still having the test pass; that is, I think the method looks decently covered.
What are your opinions on this? How would you handle this little method?
Thanks
I would examine the code coverage with whatever testing tool you use, if code coverage is available for your testing framework.
I personally like to work with either the Microsoft testing tools or the NUnit testing framework. I can then right-click my tests project and select Test with NCover (when using NUnit), which will run the tests and tell me the percentage of code covered for each of my objects and tests.
I'd say that when you're done checking the coverage and it reports at least 98%, your code is likely to be well tested.
I'd recommend taking a look at Pex - it can really help generate the kind of unit tests you're looking for (i.e. figure out the different potential paths and results given a method and return value).
That test looks fine. The only thing you can actually assert about SetNewRandomValue is that the Value member is assigned afterward. You've mocked out the call to GenerateDouble and verified that Value contains the expected number, so you should be good.
You could also write a test to document (and verify) the expected behavior of Gene.SetNewRandomValue when NumberGenerator.GenerateDouble returns a value outside the specified interval.
You could definitely make a case for not unit testing this. IMHO, code inspection is a perfectly valid test methodology. You generally don't test things like property setters/getters, and I think this method is simple enough to skip unit testing for the same reason.
That said, if you really do want to test it, here's what I'd do: I'd test it with a couple of values, not just once with 5. (SetNewRandomValue could be implemented as this.value = 5;, which should not pass.) I'd also test it with a non-integer number, to confirm there isn't an oddball cast to integer in there.
You could test that it's calling GenerateDouble with the proper parameters, though that's really testing an implementation detail. (SetNewRandomValue could be implemented as numberGenerator.GenerateDouble(0, interval.max - interval.min) + interval.min;, and that shouldn't fail the test.) You could use a real random number generator, and do SetNewRandomValue a few thousand times, and test that the values are evenly distributed in your expected range.
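A rough sketch of that last idea, assuming a hypothetical SystemRandomGenerator that implements INumberGenerator on top of System.Random; it only asserts that every sample falls inside the interval, which is weaker than checking the distribution but already catches off-by-range bugs:
[TestMethod]
public void Values_stay_within_the_interval()
{
    var gene = new Gene(new SystemRandomGenerator(), 0, new Interval(-10, 10));
    for (int i = 0; i < 10000; i++)
    {
        gene.SetNewRandomValue();
        Assert.IsTrue(gene.Value >= -10 && gene.Value <= 10);
    }
}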
The method is doing three things:
Calling numberGenerator.GenerateDouble with genesValuesInterval.MinimumValue as the first parameter,
and with genesValuesInterval.MaximumValue as the second parameter,
and setting this.value to the result of that call.
Your test tests the third of these things, but not the first two. You could write two more tests that check the mock is called with the correct first and second parameters.
Edit (responding to comments below):
If the intended behaviour of this method is to set this.value to a random double within a previously specified range, then the above three tests are useful (assuming genesValuesInterval min and max are the previously specified range, and that you have tests in place to assert that numberGenerator.GenerateDouble(min, max) returns a double within the specified range).
If the intended behaviour of this method is just to set this.value to a random double within (Double.MinValue, Double.MaxValue), then the first two tests are unnecessary as this is just an implementation detail.
To answer how to test it, you should be able to describe the desired behavior.
Looking at your code, I assume that Gene.SetNewRandomValue is supposed to set this.value to a number which falls within the Interval passed to the constructor.
I'm not super familiar with the Mock class, so I may be off base, but it appears that you are not testing that. What if your implementation had this typo?
double newValue = numberGenerator.GenerateDouble(
    genesValuesInterval.MinimumValue,
    genesValuesInterval.MinimumValue
);
Wouldn't your test still pass?
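One way to catch exactly that typo is to verify the parameters the mock was called with (a sketch using Moq's Verify, reusing interval and numberGeneratorMock from the question; with MockBehavior.Strict the unmatched call would already throw, but the explicit verification documents the intent):
// Fails if GenerateDouble was called with anything other than
// the interval's actual minimum and maximum.
numberGeneratorMock.Verify(
    ng => ng.GenerateDouble(interval.MinimumValue, interval.MaximumValue),
    Times.Once());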
