How to break on UnitTesting.Assert.AreEqual? - C#

The standard behavior of an assertion in a VS unit test project is simply to tell me that a test has failed on a specific line.
Sometimes, however, it would be convenient to break when such an assertion fails, because just setting a breakpoint on that line also breaks for test cases that are not failing. If I could break on a failing assertion, I could see immediately which test case is failing.
How can I temporarily tell VS to break when an assertion fails?
E.g. I have many loops like the following in my unit tests and want to know which iteration is failing:
foreach (var testcase in testcases)
{
    Assert.AreEqual(testcase.ExpectedOutputData, FunctionUnderTest(testcase.InputData));
}

Just use a conditional breakpoint and give it the logical negation of the assertion's check.
Personally, I think breakpoints are seriously underused as nothing more than a 'break at this point' tool - they're very versatile and can easily be used instead of Console.WriteLine for showing debug output without actually modifying the code, as well as for breaking only when an assertion would fail.
They do tend to cause a bit of a performance hit when overused this way, but that's rarely a problem when running unit tests.
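For the loop in the question, a minimal sketch of that approach: set a breakpoint on the Assert.AreEqual line and give it the negated check as its condition (Equals is used here for illustration; adjust the comparison for your types):
foreach (var testcase in testcases)
{
    // Breakpoint on the next line with the condition:
    //   !Equals(testcase.ExpectedOutputData, FunctionUnderTest(testcase.InputData))
    // The debugger then stops only on the iteration whose assertion would fail.
    Assert.AreEqual(testcase.ExpectedOutputData, FunctionUnderTest(testcase.InputData));
}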

Without having too much knowledge of how you're constructing your unit tests, it seems to me that if you are struggling to see which assertion is causing your unit test to fail, then perhaps you are testing too much in your individual tests? Unit tests should be as atomic as possible to guard against precisely this issue. If you can break your tests up, even down to having a single assertion within a single test, it will be much easier to work out which test is failing. I would certainly advocate that over writing breakpoint-specific code into your unit tests.

If you use NUnit, you can use parameterised test cases, which let you debug each case separately:
[TestCase(0)]
[TestCase(1)]
public void NunitTestCases(int expected)
{
    Assert.AreEqual(expected, 0);
}
I guess you can always do this though:
[Test]
public void BreakTest()
{
    for (int i = 0; i < 2; i++)
    {
        bool condition = i == 0;
        if (!condition)
            Debugger.Break();
        Assert.IsTrue(condition);
    }
}
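Applied to the foreach loop from the question, the same idea might look like this (a sketch only; Debugger.Break() comes from System.Diagnostics, and FunctionUnderTest is the hypothetical method from the question):
foreach (var testcase in testcases)
{
    var actual = FunctionUnderTest(testcase.InputData);

    // Stop in the debugger only for the iteration that is about to fail.
    if (!Equals(testcase.ExpectedOutputData, actual))
        Debugger.Break();

    Assert.AreEqual(testcase.ExpectedOutputData, actual);
}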

Related

Ignoring (skipping) an NUnit test case without throwing an exception

I have a requirement not to run a test case during certain times of the month, and I would like to ignore or skip the relevant test(s) when applicable. So far the only option I have found that makes sense is Assert.Ignore(). Unfortunately, triggering this method throws an IgnoreException, and I assumed that as a consequence the NUnit TearDown would not be triggered. That would not be ideal, as I have code that I would like to execute after each test case irrespective of the test outcome. (I was incorrect in this assumption; please see the comments below.)
In an ideal world I wish I could just set the test outcome in the ignore/skip branch of the code, for example:
else
{
    // it's after the 25th of the month, let's skip this test case
    TestContext.CurrentContext.Result.Outcome = ResultState.Ignored;
}
But I understand that TestContext.CurrentContext.Result.Outcome only has a getter for good reason, and allowing it to be set would most likely just open a can of worms, with people setting their tests as passed when they should not, etc.
[Test]
[TestCase(TestName = "Test A")]
[Description("Test A will be testing something.")]
[Category("Regression Test Pack")]
public void Test_Case_1()
{
    sessionVariables = new Session().SetupSession(uiTestCase: true);
    if (DateTime.Now.Day <= 25)
    {
        // let's test something!
    }
    else
    {
        // it's after the 25th of the month, let's skip this test case
        Assert.Ignore();
    }
}

[TearDown]
public void Cleanup()
{
    sessionVariables.TeardownLogic(sessionVariables);
}
As you discovered, use of Assert.Ignore does not cause the teardown to be skipped. (If it did, that would be an NUnit bug, because teardown must always run if setup has run.)
So, your choice of how to end the test depends on how you want the result to be shown...
Simply returning causes the test to pass, without any special message.
Assert.Pass does the same thing but allows you to include a message.
Assert.Ignore issues an "ignored" warning and allows you to specify a reason. The warning is propagated so that the result of the run as a whole is also "Warning".
Assert.Inconclusive gives an "inconclusive" result, which is intended to mean that the test could not be run due to some external factor. You may also specify the specific reason and the overall run result is not affected.
As an alternative to Assert.Inconclusive, you could use Assume.That with an appropriate test. For example, the following code would trigger the "inconclusive" result after the 25th of the month:
Assume.That(DateTime.Now.Day <= 25);
For what it's worth, the "Inconclusive" result is intended for exactly this kind of situation. It means that the test could not be run for reasons beyond its control. "Ignored" results are intended to be treated as a problem to eventually be solved by the team, although many folks use it differently.
From the point of view of NUnit's design, Assume.That is the most natural way to produce the result. You would normally place it at the beginning of the test or the SetUp method, before any other code.
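A minimal sketch of the first option, placing the assumption at the top of the test from the question (Test_Case_1 and Session are from the question):
[Test]
public void Test_Case_1()
{
    // If this assumption fails, NUnit reports the test as "Inconclusive"
    // instead of failed, and the rest of the test body is skipped.
    Assume.That(DateTime.Now.Day <= 25);

    sessionVariables = new Session().SetupSession(uiTestCase: true);
    // ... test something!
}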
You can just return instead of doing an assert.
public void Test_Case_1()
{
    sessionVariables = new Session().SetupSession(uiTestCase: true);
    if (DateTime.Now.Day <= 25)
    {
        // let's test something!
    }
    else
    {
        // it's after the 25th of the month, let's skip this test case
        return;
    }
}

How to properly unit test inequality

So one of the goals of unit testing is to make sure that future changes/refactors won't break existing functionality. Suppose we have the following method:
public bool LessThanFive(double a) {
    return a < 5;
}
One way to unit test this is as follows:
public void LessThanFiveTests_True() {
    const double a = 4;
    Assert.IsTrue(LessThanFive(a));
}
The problem with this unit test is that if, later on, someone changes the < into <= in the LessThanFive method, the test will still pass.
What about if we have DateTime instead of double?
Your assumption seems to be that you write one test, and that test catches all possible bugs. The opposite is closer to the truth: for each potential bug, you need one test to catch it. In practice, certainly, many tests will catch a number of potential bugs, but I exaggerated a bit to convey the idea.
So, to catch the potential bug that the < is turned into a <=, you need a second test case that tries the function with 5. This is, by the way, a so-called boundary test, and boundary testing is one well-known method for deriving a set of useful test cases.
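A sketch of such a boundary test for the method in the question (the test name is just illustrative):
public void LessThanFiveTests_BoundaryValue_ReturnsFalse() {
    // 5 is the boundary: a < 5 is false here, but a <= 5 would be true,
    // so this test fails if someone changes < into <=.
    Assert.IsFalse(LessThanFive(5));
}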
There are many more test design techniques, like equivalence class partitioning, classification trees, coverage-based testing, etc. The underlying goal of these methods is to guide you in developing a test suite where, ideally, for every potential bug a corresponding test case exists that will detect it.

How do I ensure complete unit test coverage?

I have 2 projects, one is the "Main" project and the other is the "Test" project.
The policy is that all methods in the Main project must have at least one accompanying test in the test project.
What I want is a new unit test in the Test project that verifies this remains the case. If there are any methods that do not have corresponding tests (and this includes method overloads) then I want this test to fail.
I can sort out appropriate messaging when the test fails.
My guess is that I can get every method (using reflection?), but I'm not sure how to then verify that there is a reference to each method in this Test project (and ignore references from other projects).
You can use any existing software to measure code coverage but...
Don't do it!!! Seriously. The aim should not be to have 100% coverage but to have software that can easily evolve. From your test project you can invoke by reflection every single existing method and swallow all the exceptions. That will make your coverage around 100%, but what good would it be?
Read about TDD. Start creating testable software that has meaningful tests that will save you when something goes wrong. It's not about coverage, it's about being safe.
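To illustrate why the number alone proves nothing, here is a rough sketch of the reflection trick mentioned above; it "covers" every parameterless public method of an assembly while testing absolutely nothing (ClassToTest is illustrative, and the usual System and System.Reflection namespaces are assumed):
[TestMethod]
public void Meaningless_100_Percent_Coverage()
{
    var assembly = typeof(ClassToTest).Assembly;
    foreach (var type in assembly.GetTypes())
    {
        foreach (var method in type.GetMethods())
        {
            try
            {
                // Invoke parameterless methods and swallow every exception,
                // so this "test" can never fail, yet coverage goes up.
                object instance = method.IsStatic ? null : Activator.CreateInstance(type);
                if (method.GetParameters().Length == 0)
                    method.Invoke(instance, null);
            }
            catch (Exception)
            {
            }
        }
    }
}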
This is an example of a meta-test that sounds good in principle, but once you get to the detail you should quickly realise is a bad idea. As has already been suggested, the correct approach is to encourage whoever owns the policy to amend it. As you’ve quoted the policy, it is sufficiently specific that people can satisfy the requirement without really achieving anything of value.
Consider:
public void TestMethod1Exists()
{
    try
    {
        var classUnderTest = new ClassToTest();
        classUnderTest.Method1();
    }
    catch (Exception)
    {
    }
}
The test contains a call to Method1 on the ClassToTest, so the requirement of having a test for that method is satisfied, but nothing useful is being tested. As long as the method exists (which it must if the code compiled) the test will pass.
The intent of the policy is presumably to try to ensure that written code is being tested. Looking at some very basic code:
public string IsSet(bool flag)
{
    if (flag)
    {
        return "YES";
    }
    return "NO";
}
As methods go, this is pretty simple (it could easily be changed to one line), but even so it contains two routes through the method. Having a test to ensure that this method is being called gives you a false sense of security. You would know it is being called but you would not know if all of the code paths are being tested.
An alternative that has been suggested is that you could just use code coverage tools. These can be useful and give a much better idea as to how well exercised your code is, but again they only give an indication of the coverage, not the quality of that coverage. So, let’s say I had some tests for the IsSet method above:
public void TestWhenTrue()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(true), typeof(string));
}

public void TestWhenFalse()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(false), typeof(string));
}
I’m passing sufficient parameters to exercise both code paths, so the coverage for the IsSet method should be 100%. But all I am testing is that the method returns a string. I’m not testing the value of the string, so the tests themselves don’t really add much (if any) value.
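For contrast, tests that actually pin down the behavior might look something like this (a sketch only; the expected values come from the IsSet method above):
public void TestWhenTrue_ReturnsYes()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("YES", classUnderTest.IsSet(true));
}

public void TestWhenFalse_ReturnsNo()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("NO", classUnderTest.IsSet(false));
}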
Code coverage is a useful metric, but only as part of a larger picture of code quality. Having peer reviews and sharing best practice around how to effectively test the code you are writing within your team, whilst less concretely measurable, will have a more significant impact on the quality of your test code.

How to write unit test first and code later?

I am new to unit testing and have read several times that we should write the unit test first and then the actual code. As of now, I am writing my methods and then unit testing the code.
If you write the tests first... you tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type of development and keeps you focused on solving the problem, not working on meta-problems.
If you write the code first... you will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Sounds good to me. However, how do I write unit tests before even having my code in place?
Am I taking the advice too literally? Does it mean that I should have my POCO classes and interfaces in place and then write the unit tests?
Can anyone explain to me how this is done with a simple example, say adding two numbers?
It's simple really. Red, Green, Refactor.
Red means your code is completely broken: the syntax highlighting shows red and the test doesn't pass. Why? You haven't written any code yet.
Green means your application builds and the test passes. You've added the required code.
Refactor means clean it up and make sure the tests still pass.
You can start by writing a test somewhat like this:
[TestMethod]
public void Can_Create_MathClass() {
    var math = new MathClass();
    Assert.IsNotNull(math);
}
This will fail (RED). How do you fix it? Create the class.
public class MathClass {
}
That's it. It now passes (GREEN). Next test:
[TestMethod]
public void Can_Add_Two_Numbers() {
    var math = new MathClass();
    var result = math.Add(1, 2);
    Assert.AreEqual(3, result);
}
This also fails (RED). Create the Add method:
public class MathClass {
    public int Add(int a, int b) {
        return a + b;
    }
}
Run the test. This will pass (GREEN).
Refactoring is a matter of cleaning up the code. It also means you can remove redundant tests. We know we have the MathClass now, so you can completely remove the Can_Create_MathClass test. Once that is done, you've passed REFACTOR and can continue on.
It is important to remember that the Refactor step doesn't just mean your normal code. It also means tests. You cannot let your tests deteriorate over time. You must include them in the Refactor step.
When you create your tests first, before the code, you will find it much easier and faster to create your code. The combined time it takes to create a unit test and create some code to make it pass is about the same as just coding it up straight away. But, if you already have the unit tests you don't need to create them after the code saving you some time now and lots later.
Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.
The code you will create is simple and concise, implementing only the features you wanted. Other developers can see how to use this new code by browsing the tests. Input whose results are undefined will be conspicuously absent from the test suite.
There is also a benefit to system design. It is often very difficult to unit test some software systems. These systems are typically built code first and testing second, often by a different team entirely. By creating tests first your design will be influenced by a desire to test everything of value to your customer. Your design will reflect this by being easier to test.
Let's take a slightly more advanced example: You want to write a method that returns the largest number from a sequence.
Firstly, write one or more units test for the method to be tested:
int[] testSeq1 = {1, 4, 8, 120, 34, 56, -1, 3, -13};
Assert.That(MaxOf(testSeq1) == 120);
And repeat for some more sequences. Also include a null parameter, a sequence with one element, and an empty sequence, and decide whether an empty sequence or a null parameter should throw an exception (and ensure that the unit test expects an exception for an empty sequence if that's the case).
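Those edge-case tests might look roughly like this (a sketch assuming you decided that a null parameter and an empty sequence should throw; NUnit's Assert.Throws is used):
int[] single = { 42 };
Assert.That(MaxOf(single) == 42);

// Expected exception types are an assumption made for this example.
Assert.Throws<ArgumentNullException>(() => MaxOf(null));
Assert.Throws<InvalidOperationException>(() => MaxOf(new int[0]));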
It is during this process that you need to decide the name of the method and the type of its parameters.
At this point, it won't compile.
Then write a stub for the method:
public int MaxOf(IEnumerable<int> sequence)
{
    return 0;
}
At this point it compiles, but the unit tests fail.
Then implement MaxOf() so that those unit tests now pass.
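One possible implementation that should make those tests pass (matching the exception choices assumed above, and using System.Linq's Max()):
public int MaxOf(IEnumerable<int> sequence)
{
    if (sequence == null)
        throw new ArgumentNullException(nameof(sequence));

    // Enumerable.Max throws InvalidOperationException for an empty sequence,
    // which is what the edge-case test above expects.
    return sequence.Max();
}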
Doing it this way around ensures that you immediately focus on the usability of the method, since the very first thing you try to do is to use it - before even beginning to write it. You might well decide to change the method's declaration slightly at this point, based on the usage pattern.
A real world example would apply this approach to using an entire class rather than just one method. For the sake of brevity I have omitted the class from the example above.
It is possible to write the unit tests before you write any code - Visual Studio has features to generate method stubs from the code you've written in your unit test. Doing it this way around can also help you understand the methods that the object will need to support - sometimes this can aid later enhancements (if you had a save-to-disk method that you also overloaded to save to a Stream, this is more testable and aids spooling over the network if required later on).

What's the best practice in organizing test methods that cover steps of a behavior?

I have a FileExtractor class with a Start method that performs several steps.
I've created a test class called "WhenExtractingInvalidFile.cs" within a folder called "FileExtractorTests" and added some test methods to it, as below, each of which should verify one step of the Start() method:
[TestMethod]
public void Should_remove_original_file()
{
}

[TestMethod]
public void Should_add_original_file_to_errorStorage()
{
}

[TestMethod]
public void Should_log_error_locally()
{
}
This way, it'd nicely organize the behaviors and the expectations that should be met.
The problem is that most of the logic of these test methods is the same, so should I be creating one test method that verifies all the steps, or keep them separate as above?
[TestMethod]
public void Should_remove_original_file_then_add_original_file_to_errorStorage_then_log_error_locally()
{
}
What's the best practice?
While it's commonly accepted that the Act section of tests should only contain one call, there's still a lot of debate over the "One Assert per Test" practice.
I tend to adhere to it because:
when a test fails, I immediately know (from the test name) which of the multiple things we want to verify on the method under test went wrong.
tests that involve mocking are already harder to read than regular tests; they can easily get arcane when you're asserting against multiple mocks in the same test.
If you don't follow it, I'd at least recommend that you include meaningful Assert messages in order to minimize head scratching when a test fails.
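For what it's worth, one assert per test doesn't have to mean duplicating the shared logic: the common arrange/act part can live in a setup method, so each test method holds only its own assertion. A rough sketch (MSTest attributes are assumed, and the FileExtractor calls, file name, and assertions are purely illustrative):
private FileExtractor extractor;

[TestInitialize]
public void GivenAnInvalidFileWasExtracted()
{
    // Shared arrange/act, run before every test in this class.
    extractor = new FileExtractor();
    extractor.Start("invalid-file.txt");  // hypothetical API
}

[TestMethod]
public void Should_remove_original_file()
{
    Assert.IsFalse(File.Exists("invalid-file.txt"));  // illustrative assertion
}

[TestMethod]
public void Should_log_error_locally()
{
    // Each remaining test asserts exactly one expectation against the same shared state.
}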
I would do the same thing you did, but with an _on_start postfix, like this: Should_remove_original_file_on_start. The single combined method will only ever give you at most one assert failure, even though all aspects of Start could be broken.
DRY - don't repeat yourself.
Think about the scenario where your test fails. What would be most useful from a test organization point?
The second dimension to look at is maintenance. Do you want to run and maintain 1 test or n tests? Don't overburden development with writing many tests that have little value (I think TDD is a bit overrated for this reason). A test is more valuable if it exercises a larger path of the code rather than a short one.
In this particular scenario I would create a single test. If this test fails often and you are not reaching the root cause of the problem fast enough, refactor it into multiple tests.
