TDD approach issue with tests - C#

I'm new to TDD practice and I was wondering if my approach is good for a particular case.
I want to write a small program that takes a group of strings as a parameter and splits every string in that group.
Example :
{"C - 3 - 3", "M - 5 - 5"} should return { {"C", "3", "3"}, {"M", "5", "5"} }
Breaking the problem down, I started with a StringSplitter and a StringSplitterTest, using TDD, in order to split just one string into an array of strings.
Then I wrote a StringGroupSplitter and StringGroupSplitterTest (again with the TDD approach) doing the same thing but with an array of strings (knowing that StringGroupSplitter has a StringSplitter dependency).
Then I remembered the "FIRST" principles of unit tests, and in particular the independence principle, which says that a test should not depend on the result of another test.
In my case, the problem is that, due to the dependency between StringGroupSplitter and StringSplitter, if even one test fails in StringSplitterTest, tests in StringGroupSplitterTest will also fail.
So my question is: is my approach to TDD the right one?
Thanks in advance.

I remembered the "FIRST" principles of unit tests, and in particular the independence principle, which says that a test should not depend on the result of another test.
The important idea is that tests should produce accurate measurements regardless of the order in which they are run. If we have a "blue" test and an "orange" test, then we should be able to run either test alone, or both tests in either order, and get accurate information about whether or not our implementation is in agreement with our specification.
Back in the day, coupled tests were a common anti-pattern. When all of the tests worked, then things were fine. But if one test broke, it would be followed by a cascade of untrustworthy results, simply because the preconditions of the subsequent tests might not be satisfied.
Your description of your string splitters does not suggest you have a problem here. If StringGroupSplitter depends on a part of StringSplitter, and you introduce a bug in that part, then having multiple tests fail is normal. This is not the kind of thing we mean when we worry about the anti-pattern.
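For concreteness, here is a minimal sketch of the two classes as described in the question (the class names come from the question; the exact delimiter handling is an assumption):

```csharp
using System;
using System.Linq;

// Splits one string: "C - 3 - 3" -> { "C", "3", "3" }
public class StringSplitter
{
    public string[] Split(string input)
    {
        return input.Split(new[] { " - " }, StringSplitOptions.None);
    }
}

// Splits a whole group by delegating each element to StringSplitter.
public class StringGroupSplitter
{
    private readonly StringSplitter splitter = new StringSplitter();

    public string[][] Split(string[] group)
    {
        return group.Select(s => splitter.Split(s)).ToArray();
    }
}
```

If a bug is introduced in StringSplitter.Split, both test suites will report failures, but each test still sets up its own objects and can run alone or in any order. That is coupling in the production code, not coupling between tests.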

Related

When to use a unit test vs a BDD test

Based on some light reading about BDD, I came to the conclusion that unit tests are good tools for testing some granular part of your application, while BDD tests are higher level, letting you exercise feature workflows.
Some of the items I would consider candidates for unit tests: sorting algorithms, state reducers, geometric calculations, etc...
Items I would consider candidates for BDD would be feature workflows: adding an item to a cart, logging in, searching the contents of a site to find course material, etc...
I was asked by a client to write a sorting algorithm, and normally I would have just written a unit test like:
public class SorterTest
{
    [TestMethod]
    public void TestSort()
    {
        var numbers = new List<int>() { 9, 8, 7, 6, 5, 4, 3, 2, 1 };
        var expected = new List<int>() { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        var sorter = new Sorter()
        {
            Input = numbers
        };
        var sorted = sorter.Sort();
        CollectionAssert.AreEqual(expected, sorted.ToList());
    }
}
However, the client asked for a BDD test, so I came up with the following:
Feature: Sort
    In order to ensure sorting of integers
    I want to be able to sort a collection of integers

Scenario Outline: Perform a sort
    Given I have these integers: '<input>'
    When I sort them
    Then the result should be '<expected>'

    Examples:
    | input             | expected          |
    | 9,8,7,6,5,4,3,2,1 | 1,2,3,4,5,6,7,8,9 |
To make my Sorter object testable for BDD, I had to alter it:
private Sorter sorter = new Sorter();

[Given(@"^I have these integers: '(.*)'$")]
public void GivenIHaveTheseIntegers(string numbers)
{
    var inputs = numbers.Split(',')
        .Select(s => Convert.ToInt32(s.Trim()))
        .ToList();
    sorter.Input = inputs;
}
Note that to set up the test with Given, I had to add an Input property to Sorter to prepare for the sort. The downside is that in application code, if I want my sorter to perform a sort, I will always need to set this property prior to performing the sort:

sorter.Input = intCollection;
var result = sorter.Sort();
I'd rather just have:
var result = sorter.Sort(intCollection);
Is BDD appropriate for this sort of test? If so, am I doing it right? Adding the Input property feels wrong, should I be doing it some other way?
If not appropriate, how does one go about drawing the line between BDD and unit tests? There is an existing SO post, but the answers reference a book. It would be nice to get some better guidance.
Choosing between both styles
Choosing between the unit test (TDD) approach and the BDD approach boils down to preference. If the customer asks for BDD, provide BDD. If the team is more comfortable with TDD, use TDD.
Mixing both approaches
The choice is not exclusive. I have experience with mixing both approaches. To answer the question of where to draw the line between the two, the Agile Testing Quadrants are quite helpful:
Drawing a line
I found that the unit test (TDD) approach was more helpful for the technology facing tests. The BDD approach was more helpful for the business facing tests.
Details about this observation
It is more natural to map business requirements to BDD-style tests. Testing business requirements with some tangible business value usually requires the integration of multiple classes. In my experience, those BDD-style tests were usually integration tests and had the characteristics of functional tests and user acceptance tests.
On the other hand, the TDD tests were written by programmers for programmers. Many programmers are more comfortable with, or have more experience in, the unit test (TDD) approach. These tests usually tested classes in isolation, with an emphasis on edge cases and technical aspects of the system.
The example you provided is something of an exception because it maps a business case onto a single class. In this case both approaches are OK.
I would have written a unit test.
You mentioned
Some of the items I would consider candidates for unit tests: sorting algorithms, state reducers, geometric calculations, etc...
Items I would consider candidates for BDD would be feature workflows: adding an item to a cart, logging in, searching the contents of a site to find course material, etc...
and I agree with that. Behavior-driven tests are good at quickly "speccing" and making sure that the application behaves properly and does what it is supposed to. You could take this definition and say that you are making sure that your sort works, but the key (IMO) is that you are not testing how your program works; you are testing how an algorithm works, a very specific element of the program. If you look at how you built the test, you see that you are trying to mimic a unit test: you throw in a bunch of input, and you compare it with the output you are trying to get.
A BDD test aims to test a feature, as scenarios. For instance, if you had a program that organizes phone numbers from a file, sorting would be a part of the process, but you don't cover "sorting" by itself; you test the general behavior of the program (whether you get the expected file from executing the application). You set up the test, execute it, and verify the result.
You should also write tests before you write the code, and I agree with @Theo Lenndoff: having the input as a property is weird and very counterintuitive (you mentioned it too).
Firstly, my experience is that BDD-thinking can sometimes be helpful when testing a unit.
Secondly, I would argue that the "Given" keyword is being misused. Could you ask your client to rewrite their scenario as:
Scenario Outline: Perform a sort
    When I sort these integers: '<input>'
    Then the result should be '<expected>'
I've found that the correct use of Given is for objects that have to exist in the system under test. In your example it is being used for something that exists in the user's head: implementing that in code is bound to be unnatural.
Thirdly, if I had to implement that "Given" clause, I would store the list of integers within the test environment, rather than trying to insert them into the application immediately. Then the "When" step can retrieve them from the test environment and use them where the code needs them.
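As a sketch, those revised steps might bind like this (SpecFlow-style attributes; the Sorter with a Sort(collection) method is the shape the question said it would prefer, so it is an assumption here):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

[Binding]
public class SortSteps
{
    // The sorted result lives in the test environment (this binding class),
    // not in an Input property on the application object.
    private List<int> result;

    [When(@"^I sort these integers: '(.*)'$")]
    public void WhenISortTheseIntegers(string numbers)
    {
        var input = numbers.Split(',')
                           .Select(s => Convert.ToInt32(s.Trim()))
                           .ToList();
        result = new Sorter().Sort(input);   // hypothetical Sort(collection) overload
    }

    [Then(@"^the result should be '(.*)'$")]
    public void ThenTheResultShouldBe(string expected)
    {
        var expectedList = expected.Split(',')
                                   .Select(s => Convert.ToInt32(s.Trim()))
                                   .ToList();
        CollectionAssert.AreEqual(expectedList, result);
    }
}
```

With this shape, the "Given" disappears entirely and the sorter's API stays natural.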

How to write unit test first and code later?

I am new to unit testing and have read several times that we should write the unit test first and then the actual code. As of now, I am writing my methods first and then unit testing the code.
If you write the tests first...
You tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type of development and keeps you focused on solving the problem, not working on meta-problems.
If you write the code first...
You will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Sounds good to me. However, how do I write unit tests before having my code in place?
Am I taking the advice too literally? Does it mean that I should have my POCO classes and interfaces in place and then write the unit tests?
Can anyone explain to me how this is done, with a simple example of, say, adding two numbers?
It's simple really. Red, Green, Refactor.
Red means your code is completely broken: the syntax highlighting shows red and the test doesn't pass. Why? You haven't written any code yet.
Green means your application builds and the test passes. You've added the required code.
Refactor means clean it up and make sure the tests still pass.
You can start by writing a test somewhat like this:
[TestMethod]
public void Can_Create_MathClass() {
    var math = new MathClass();
    Assert.IsNotNull(math);
}
This will fail (RED). How do you fix it? Create the class.
public class MathClass {
}
That's it. It now passes (GREEN). Next test:
[TestMethod]
public void Can_Add_Two_Numbers() {
    var math = new MathClass();
    var result = math.Add(1, 2);
    Assert.AreEqual(3, result);
}
This also fails (RED). Create the Add method:
public class MathClass {
    public int Add(int a, int b) {
        return a + b;
    }
}
Run the test. This will pass (GREEN).
Refactoring is a matter of cleaning up the code. It also means you can remove redundant tests. We know we have the MathClass now, so you can completely remove the Can_Create_MathClass test. Once that is done, you've passed REFACTOR and can continue on.
It is important to remember that the Refactor step doesn't just mean your normal code. It also means tests. You cannot let your tests deteriorate over time. You must include them in the Refactor step.
When you create your tests first, before the code, you will find it much easier and faster to create your code. The combined time it takes to create a unit test and create some code to make it pass is about the same as just coding it up straight away. But if you already have the unit tests, you don't need to create them after the code, saving you some time now and a lot more later.
Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.
The code you will create is simple and concise, implementing only the features you wanted. Other developers can see how to use this new code by browsing the tests. Input whose results are undefined will be conspicuously absent from the test suite.
There is also a benefit to system design. It is often very difficult to unit test some software systems. These systems are typically built code first and testing second, often by a different team entirely. By creating tests first your design will be influenced by a desire to test everything of value to your customer. Your design will reflect this by being easier to test.
Let's take a slightly more advanced example: You want to write a method that returns the largest number from a sequence.
Firstly, write one or more units test for the method to be tested:
int[] testSeq1 = {1, 4, 8, 120, 34, 56, -1, 3, -13};
Assert.That(MaxOf(testSeq1) == 120);
And repeat for some more sequences. Also include a null parameter, a sequence with one element and an empty sequence and decide if an empty sequence or null parameter should throw an exception (and ensure that the unit test expects an exception for an empty sequence if that's the case).
It is during this process that you need to decide the name of the method and the type of its parameters.
At this point, it won't compile.
Then write a stub for the method:
public int MaxOf(IEnumerable<int> sequence)
{
    return 0;
}
At this point it compiles, but the unit tests fail.
Then implement MaxOf() so that those unit tests now pass.
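One possible implementation that would make those tests pass is sketched below (assuming we decided that a null parameter and an empty sequence should both throw; the class name is invented for the example):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SequenceMath
{
    // Returns the largest number in the sequence.
    // Null and empty inputs throw, matching the behavior the tests pinned down.
    public static int MaxOf(IEnumerable<int> sequence)
    {
        if (sequence == null)
            throw new ArgumentNullException(nameof(sequence));

        var items = sequence.ToList();   // materialize once to avoid double enumeration
        if (items.Count == 0)
            throw new ArgumentException("Sequence is empty.", nameof(sequence));

        return items.Max();
    }
}
```

Running the unit tests again at this point should turn them green; if one doesn't, the test has already told you exactly which case the implementation misses.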
Doing it this way around ensures that you immediately focus on the usability of the method, since the very first thing you try to do is to use it - before even beginning to write it. You might well decide to change the method's declaration slightly at this point, based on the usage pattern.
A real world example would apply this approach to using an entire class rather than just one method. For the sake of brevity I have omitted the class from the example above.
It is possible to write the unit tests before you write any code; Visual Studio has features to generate method stubs from the code you've written in your unit test. Doing it this way around can also help you understand the methods that the object will need to support. Sometimes this can aid later enhancements (if you had a save-to-disk method that you also overload to save to a Stream, this is more testable and aids spooling over the network if required later on).

multiple expectations/assertions to verify test result

I have read in several books and articles about TDD and BDD that one should avoid multiple assertions or expectations in a single unit test or specification, and I can understand the reasons for doing so. Still, I am not sure what would be a good way to verify a complex result.
Assuming a method under test returns a complex object as a result (e.g. deserialization or database read) how do I verify the result correctly?
1. Asserting on each property:

Assert.AreEqual(1, result.Property1);
Assert.AreEqual("2", result.Property2);
Assert.AreEqual(null, result.Property3);
Assert.AreEqual(4.0, result.Property4);
2. Relying on a correctly implemented .Equals():

Assert.AreEqual(expectedResult, result);
The disadvantage of 1 is that if the first assert fails, all the following asserts are not run, even though they might have contained valuable information for finding the problem. Maintainability might also be a problem, as properties come and go.
The disadvantage of 2 is that I seem to be testing more than one thing with this test. I might get false positives or negatives if .Equals() is not implemented correctly. Also, with 2 I cannot see which properties actually differ when the test fails, though I assume that can often be addressed with a decent .ToString() override. In any case, I think I should not be forced to attach the debugger to a failing test to see the difference; I should see it right away.
The next problem with 2. is that it compares the whole object even though for some tests only some properties might be significant.
What would be a decent way or best practice for this in TDD and BDD?
Don't take TDD advice literally. What the "good guys" mean is that you should test one thing per test (to avoid a test failing for multiple reasons and subsequently having to debug the test to find the cause).
Now, testing "one thing" means "one behavior", NOT one assert per test, IMHO.
It's a guideline not a rule.
So options:
For comparing whole data value objects
If the object already exposes a usable production Equals, use it.
Else do not add an Equals just for testing (see also Equality Pollution). Use a helper/extension method (or find one in an assertion library): obj1.HasSamePropertiesAs(obj2)
For comparing unstructured parts of objects (a set of arbitrary properties, which should be rare),
create a well-named private method such as AssertCustomerDetailsInOrderEquals(params) so that the test is clear about which part you're actually testing. Move the set of assertions into that private method.
With the context present in the question I'd go for option 1.
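A minimal sketch of such a helper, using the HasSamePropertiesAs name from above (the reflection-based approach is just one possible implementation, not a specific library's API):

```csharp
using System;
using System.Linq;

public static class AssertionExtensions
{
    // Compares every public readable property of two objects of the same type,
    // collecting all mismatches so a single failure reports every differing property.
    public static void HasSamePropertiesAs<T>(this T actual, T expected)
    {
        var mismatches = typeof(T).GetProperties()
            .Where(p => p.CanRead && p.GetIndexParameters().Length == 0)
            .Select(p => new { p.Name, A = p.GetValue(actual), E = p.GetValue(expected) })
            .Where(x => !Equals(x.A, x.E))
            .Select(x => $"{x.Name}: expected <{x.E}> but was <{x.A}>")
            .ToList();

        if (mismatches.Any())
            throw new Exception(string.Join(Environment.NewLine, mismatches));
    }
}
```

Unlike a chain of Assert.AreEqual calls, this reports all differing properties at once, which addresses the "first failing assert hides the rest" complaint from the question.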
It likely depends on context. If I'm using some sort of built in object serialization within the .NET framework, I can be reasonably assured that if no errors were encountered then the entire object was appropriately marshaled. In that case, asserting a single field in the object is probably fine. I trust MS libraries to do the right thing.
If you are using SQL and manually mapping results to domain objects I feel that option 1 makes it quicker to diagnose when something breaks than option 2. Option 2 likely relies on toString methods in order to render the assertion failure:
Expected <1 2 null 4.0> but was <1 2 null null>
Now I am stuck trying to figure out what field 4.0/null was. Of course I could put the field name into the method:
Expected <Property1: 1, Property2: 2, Property3: null, Property4: 4.0>
but was <Property1: 1, Property2: 2, Property3: null, Property4: null>
This is fine for small numbers of properties, but begins to break down with larger numbers of properties due to wrapping, etc. Also, the toString maintenance could become an issue, as it needs to change at the same rate as the equals method.
Of course there is no correct answer, at the end of the day, it really boils down to your team's (or your own) personal preference.
Hope that helps!
Brandon
I would use the second approach by default. You're right, this fails if Equals() is not implemented correctly, but if you've implemented a custom Equals(), you should have unit-tested it too.
The second approach is in fact more abstract and concise, and it makes later modifications easier while reducing code duplication. Let's say you choose the first approach:
You'll have to compare all properties in several places,
If you add a new property to a class, you'll have to add a new assertion in the unit tests; modifying your Equals() would be much easier. Of course, you still have to add a value for the property in the expected result (if it's not the default value), but that is quicker than adding a new assertion.
Also, it is much easier to see what properties are actually different with the second approach. You just run your tests in debug mode, and compare the properties on break.
By the way, you should never use ToString() for this. I suppose you meant the [DebuggerDisplay] attribute?
The next problem with 2. is that it compares the whole object even though for some tests only some properties might be significant.
If you have to compare only some properties, then:
either you refactor your code by implementing a base class which contains only those properties. Example: if you want to compare a Cat to another Cat, but only considering the properties common to Dog and other animals, implement Cat : Animal and compare the base class;
or you do what you've done in your first approach. Example: if you do care only about the quantity of milk the expected and the actual cats have drunk and their respective names, you'll have two assertions in your unit test.
Try "one aspect of behavior per test" rather than "one assertion per test". If you need more than one assertion to illustrate the behavior you're interested in, do that.
For instance, your example might be ShouldHaveSensibleDefaults. Splitting that up into ShouldHaveADefaultNameAsEmptyString, ShouldHaveNullAddress, ShouldHaveAQuantityOfZero, etc. won't read as clearly. Nor will it help to hide the sensible defaults in another object and then do a comparison.
However, I would separate examples where the values had defaults with any properties derived from some logic somewhere, for instance, ShouldCalculateTheTotalQuantity. Moving small examples like this into their own method makes it more readable.
You might also find that different properties on your object are changed by different contexts. Calling out each of these contexts and looking at those properties separately helps me to see how the context relates to the outcome.
Dave Astels, who came up with the "one assertion per test", now uses the phrase "one aspect of behavior" too, though he still finds it useful to separate that behavior. I tend to err on the side of readability and maintainability, so if it makes pragmatic sense to have more than one assertion, I'll do that.
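The "sensible defaults" example above might read as one behavior with several closely related assertions (the Order class and its defaults are invented for illustration):

```csharp
// Hypothetical class under test, with the sensible defaults described above.
public class Order
{
    public string Name { get; set; } = string.Empty;
    public string Address { get; set; } = null;
    public int Quantity { get; set; } = 0;
}

[TestMethod]
public void ShouldHaveSensibleDefaults()
{
    var order = new Order();

    // One aspect of behavior ("a new order starts empty"),
    // illustrated by several related assertions.
    Assert.AreEqual(string.Empty, order.Name);
    Assert.IsNull(order.Address);
    Assert.AreEqual(0, order.Quantity);
}
```

Each assertion here illuminates the same behavior, so splitting them into three tests would add ceremony without adding information.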

Are multiple asserts bad in a unit test? Even if chaining?

Is there anything wrong with checking so many things in this unit test?:
ActualModel = ActualResult.AssertViewRendered()           // check 1
                          .ForView("Index")               // check 2
                          .WithViewData<List<Page>>();    // check 3
CollectionAssert.AreEqual(Expected, ActualModel);         // check 4
The primary goals of this test are to verify the right view is returned (check 2) and it contains the right data (check 4).
Would I gain anything by splitting this into multiple tests? I'm all about doing things right, but I'm not going to split things up if it doesn't have practical value.
I'm pretty new to unit testing, so be gentle.
As others have pointed out, it's best to stick with one assert in each test in order to avoid losing information - if the first assert fails, you don't know whether the later ones would fail also. You have to fix the problem with less information - which might (probably will) be harder.
A very good reference is Roy Osherove's Art of Unit Testing - if you want to get started right with Unit Testing, you could do a lot worse than starting here.
Noting for future readers that this question and its duplicate Is it bad practice to have more than one assertion in a unit test? have opposite majority opinions, so read them both and decide for yourself.
My experience is that the most useful quantum of testing is not the assertion, but the scenario -- that is, there should be one unit test for a given set of initial conditions and method call, with as many assertions as are necessary to assert the expected final conditions. Having one unit test per assertion leads to duplicate setup or tortuous workarounds to avoid the duplication (such as the awful deeply nested rspec contexts that I'm seeing more and more of lately). It also multiplies tests, drastically slowing your suite.
As long as each assertion has a unique and identifying failure message you should be good and will sidestep any Assertion Roulette issues because it won't be hard to tell which test failed. Use common sense.
I have found a different argument to this question (at least for myself):
Use of multiple asserts is OK if they are testing the same thing.
For example, it's OK to do:
Assert.IsNotNull(value);
Assert.AreEqual(0, value.Count);
Why? Because these two asserts are not hiding the intention of the test. If the first assert fails, it means the second would fail too. And indeed, if we remove the first assert, the second assert still fails (with a null reference exception!) when the value is null. If this were not the case, then we should not put these two asserts together.
So, this is wrong:
Assert.IsNotNull(value1);
Assert.IsNotNull(value2);
As I've mentioned above, if the first assert fails, there is no clear indication about the second assert - we would still want to know what happens to the second one (even if the first one failed). So, for this reason, these two asserts belong to two different unit tests.
Conclusion: putting one or more asserts in a test, when done properly, becomes a matter of preference: whether we want to see only assertion exceptions in the test results, or other exceptions as well in particular cases.
It's best to stick with only one assert in each test to avoid Assertion Roulette.
If you need to set up the same scenario to test multiple conditions based on identical premises, it's better to extract the setup code to a shared helper method, and then write several tests that call this helper method.
This ensures that each test case tests only one thing.
As always, there are exceptions to this rule, but since you are new to unit testing, I would recommend that you stick with the one assertion per unit test rule until you have learned when it's okay to deviate.
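The shared-helper shape described above might look like this (the Account scenario is invented for illustration):

```csharp
// Hypothetical class under test, just to make the sketch self-contained.
public class Account
{
    public decimal Balance { get; private set; }
    public bool IsOverdrawn => Balance < 0;

    public Account(decimal balance) { Balance = balance; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

[TestClass]
public class AccountTests
{
    // Shared arrangement extracted once; each test then asserts one thing.
    private static Account CreateOverdrawnAccount()
    {
        var account = new Account(balance: 100m);
        account.Withdraw(150m);
        return account;
    }

    [TestMethod]
    public void OverdrawnAccount_HasNegativeBalance()
    {
        Assert.IsTrue(CreateOverdrawnAccount().Balance < 0);
    }

    [TestMethod]
    public void OverdrawnAccount_IsFlaggedAsOverdrawn()
    {
        Assert.IsTrue(CreateOverdrawnAccount().IsOverdrawn);
    }
}
```

When one of these fails, its name alone tells you which condition broke, with no assertion roulette.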
I know that this is an old question but I thought I'd add my bit.
Generally I'd have as few assertions as possible in each test case. Tests can often be written to avoid having lots of different assertions.
Suppose I have unit tests for a method which creates a salutation (e.g. Mr Smith) from the components of a name. I want to check this with a large number of different scenarios; it is unnecessary to have a separate test for each.
Given the following code, there are many different assertions. When they fail, you can fix them one at a time until the failures stop.
Assert.AreEqual("Mr Smith", GetSalutation("Mr", "J", "Smith"));
Assert.AreEqual("Mr Smith", GetSalutation("Mr", "John", "Smith"));
Assert.AreEqual("Sir/Madam", GetSalutation("", "John", "Smith"));
Assert.AreEqual("Sir/Madam", GetSalutation("", "J", "Smith"));
An alternative to this is to keep a count of issues and assert this at the end.
int errorCount = 0;
string result;

result = GetSalutation("Mr", "J", "Smith");
if (result != "Mr Smith")
    errorCount++;

result = GetSalutation("Mr", "John", "Smith");
if (result != "Mr Smith")
    errorCount++;

Assert.AreEqual(0, errorCount);
In a real-world situation I'd probably add some Trace commands to write the details of the individual tests which failed to the output window.

How do you know what to test when writing unit tests? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Using C#, I need a class called User that has a username, password, active flag, first name, last name, full name, etc.
There should be methods to authenticate and save a user. Do I just write a test for the methods? And do I even need to worry about testing the properties, since they are .NET's getters and setters?
Many great responses to this are also on my question: "Beginning TDD - Challenges? Solutions? Recommendations?"
May I also recommend taking a look at my blog post (which was partly inspired by my question), I have got some good feedback on that. Namely:
I Don’t Know Where to Start?

Start afresh. Only think about writing tests when you are writing new code. This can be re-working of old code, or a completely new feature.

Start simple. Don’t go running off and trying to get your head round a testing framework as well as being TDD-esque. Debug.Assert works fine. Use it as a starting point. It doesn’t mess with your project or create dependencies.

Start positive. You are trying to improve your craft; feel good about it. I have seen plenty of developers out there that are happy to stagnate and not try new things to better themselves. You are doing the right thing, remember this and it will help stop you from giving up.

Start ready for a challenge. It is quite hard to start getting into testing. Expect a challenge, but remember: challenges can be overcome.

Only Test For What You Expect

I had real problems when I first started because I was constantly sat there trying to figure out every possible problem that could occur and then trying to test for it and fix it. This is a quick way to a headache. Testing should be a real YAGNI process. If you know there is a problem, then write a test for it. Otherwise, don’t bother.

Only Test One Thing

Each test case should only ever test one thing. If you ever find yourself putting “and” in the test case name, you’re doing something wrong.
I hope this means we can move on from "getters and setters" :)
Test your code, not the language.
A unit test like:
Integer i = new Integer(7);
assert i instanceof Integer;
is only useful if you are writing a compiler and there is a non-zero chance that your instanceof operator is not working.
Don't test stuff that you can rely on the language to enforce. In your case, I'd focus on your authenticate and save methods - and I'd write tests that made sure they could handle null values in any or all of those fields gracefully.
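Such null-handling tests might look like the sketch below (the User API shown is assumed from the question's description, including how Authenticate and Save behave on missing data):

```csharp
// Hypothetical User API, assumed from the question.
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }

    public bool Authenticate(string username, string password)
    {
        // Null credentials never authenticate, rather than throwing.
        if (username == null || password == null) return false;
        return username == Username && password == Password;
    }

    public void Save()
    {
        // Persistence is out of scope for the sketch; the point is that
        // Save() should tolerate missing fields without throwing.
    }
}

[TestMethod]
public void Authenticate_WithNullUsername_ReturnsFalse()
{
    var user = new User { Username = "alice", Password = "secret" };
    Assert.IsFalse(user.Authenticate(null, "secret"));
}

[TestMethod]
public void Save_WithAllFieldsEmpty_DoesNotThrow()
{
    new User().Save();   // should handle the empty state gracefully
}
```

The tests document the decision (return false vs. throw) rather than re-checking what the language already guarantees about the properties.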
This got me into unit testing and it made me very happy
We just started to do unit testing.
For a long time I knew it would be good to start doing it but I had no idea how to start and more importantly what to test.
Then we had to rewrite an important piece of code in our accounting program.
This part was very complex as it involved a lot of different scenarios.
The part I'm talking about is a method to pay sales and/or purchase invoices already entered into the accounting system.
I just didn't know how to start coding it, as there were so many different payment options.
An invoice could be $100 but the customer only transferred $99.
Maybe you have sent sales invoices to a customer but you have also purchased from that customer.
So you sold him for $300 but you bought for $100. You can expect your customer to pay you $200 to settle the balance.
And what if you sold for $500 but the customer pays you only $250?
So I had a very complex problem to solve, with many possibilities: one scenario would work perfectly but would be wrong for another type of invoice/payment combination.
This is where unit testing came to the rescue.
I started to write (inside the test code) a method to create a list of invoices, both for sales and purchases.
Then I wrote a second method to create the actual payment.
Normally a user would enter that information through a user interface.
Then I created the first TestMethod, testing a very simple payment of a single invoice without any payment discounts.
All the action in the system would happen when a bank payment was saved to the database.
As you can see I created an invoice, created a payment (a bank transaction) and saved the transaction to disk.
In my asserts I put what should be the correct numbers ending up in the Bank transaction and in the linked Invoice.
I check for the number of payments, the payment amounts, the discount amount and the balance of the invoice after the transaction.
After the test ran I would go to the database and double check if what I expected was there.
After I wrote the test, I started coding the payment method (part of the BankHeader class).
In the coding I only bothered with code to make the first test pass. I did not yet think about the other, more complex, scenarios.
I ran the first test, fixed a small bug until my test would pass.
Then I started to write the second test, this time working with a payment discount.
After I wrote the test I modified the payment method to support discounts.
While testing for correctness with a payment discount, I also tested the simple payment.
Both tests should pass of course.
Then I worked my way down to the more complex scenarios.
1) Think of a new scenario
2) Write a test for that scenario
3) Run that single test to see if it would pass
4) If it didn't, I'd debug and modify the code until it would pass.
5) While modifying code I kept on running all tests
This is how I managed to create my very complex payment method.
Without unit testing I did not know how to start coding, the problem seemed overwhelming.
With testing I could start with a simple method and extend it step by step with the assurance that the simpler scenarios would still work.
I'm sure that using unit testing saved me a few days (or weeks) of coding and is more or less guaranteeing the correctness of my method.
If I later think of a new scenario, I can just add it to the tests to see if it is working or not.
If not I can modify the code but still be sure the other scenarios are still working correctly.
This will save days and days in the maintenance and bug fixing phase.
Yes, even tested code can still have bugs if a user does things you did not think of or did not prevent him from doing.
Below are just some of the tests I created for my payment method.
public class TestPayments
{
    InvoiceDiaryHeader invoiceHeader = null;
    InvoiceDiaryDetail invoiceDetail = null;
    BankCashDiaryHeader bankHeader = null;
    BankCashDiaryDetail bankDetail = null;

    public InvoiceDiaryHeader CreateSales(string amountIncVat, bool sales, int invoiceNumber, string date)
    {
        ......
    }

    public BankCashDiaryHeader CreateMultiplePayments(IList<InvoiceDiaryHeader> invoices, int headerNumber, decimal amount, decimal discount)
    {
        ......
    }

    [TestMethod]
    public void TestSingleSalesPaymentNoDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 1, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 1, 119.00M, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(119M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(0M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSingleSalesPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 2, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 2, 118.00M, 1M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(1M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void TestDuplicateInvoiceNumber()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("100", true, 2, "01-09-2008"));
        list.Add(CreateSales("200", true, 2, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 3, 300, 0);
        bankHeader.Save();

        Assert.Fail("expected an ApplicationException");
    }

    [TestMethod]
    public void TestMultipleSalesPaymentWithPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 11, "01-09-2008"));
        list.Add(CreateSales("400", true, 12, "02-09-2008"));
        list.Add(CreateSales("600", true, 13, "03-09-2008"));
        list.Add(CreateSales("25,40", true, 14, "04-09-2008"));

        bankHeader = CreateMultiplePayments(list, 5, 1144.00M, 0.40M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(4, bankHeader.BankCashDetails[0].Payments.Count);

        Assert.AreEqual(118.60M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(400, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(600, bankHeader.BankCashDetails[0].Payments[2].PaymentAmount);
        Assert.AreEqual(25.40M, bankHeader.BankCashDetails[0].Payments[3].PaymentAmount);

        Assert.AreEqual(0.40M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].PaymentDiscount);

        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSettlement()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("300", true, 43, "01-09-2008"));    // Sales
        list.Add(CreateSales("100", false, 6453, "02-09-2008")); // Purchase

        bankHeader = CreateMultiplePayments(list, 22, 200, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(2, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(300, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(-100, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
    }
}
If they really are trivial, then don't bother testing them. E.g., if they are implemented like this:
public class User
{
public string Username { get; set; }
public string Password { get; set; }
}
If, on the other hand, you are doing something clever, (like encrypting and decrypting the password in the getter/setter) then give it a test.
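For instance, here is a sketch of the "clever" kind of property that does deserve a test. All names are illustrative, and the Base64 round-trip is only a stand-in for whatever real encryption you would use, not something secure:

```csharp
using System;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class User
{
    private string stored;

    public string Username { get; set; }

    // Non-trivial logic: the setter encodes, the getter decodes.
    // Base64 is a placeholder here, NOT real encryption.
    public string Password
    {
        get { return Encoding.UTF8.GetString(Convert.FromBase64String(stored)); }
        set { stored = Convert.ToBase64String(Encoding.UTF8.GetBytes(value)); }
    }
}

[TestClass]
public class UserPasswordTests
{
    [TestMethod]
    public void Password_RoundTripsThroughEncoding()
    {
        var user = new User();
        user.Password = "s3cret";
        // Fails if the encode/decode pair ever gets out of sync.
        Assert.AreEqual("s3cret", user.Password);
    }
}
```

The round-trip assertion is the important part: it catches a change to either half of the pair.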
The rule is that you have to test every piece of logic you write. If you implemented some specific functionality in the getters and setters I think they are worth testing. If they only assign values to some private fields, don't bother.
This question seems to be a question of where does one draw the line on what methods get tested and which don't.
The setters and getters for value assignment have been created with consistency and future growth in mind, and foreseeing that some time down the road the setter/getter may evolve into more complex operations. It would make sense to put unit tests of those methods in place, also for the sake of consistency and future growth.
Code reliability, especially while undergoing change to add functionality, is the primary goal. I am not aware of anyone ever getting fired for including setters/getters in the testing methodology, but I am certain there are people who wish they had tested methods which, last they were aware, were simple set/get wrappers, when that was no longer the case.
Maybe another member of the team expanded the set/get methods to include logic that now needs testing, but didn't create the tests. Now your code is calling these methods, you aren't aware they changed and need in-depth testing, the testing you do in development and QA doesn't trigger the defect, but real business data on the first day of release does trigger it.
The two teammates will now debate over who dropped the ball and failed to put in unit tests when the set/gets morphed to include logic that can fail but isn't covered by a unit test. The teammate who originally wrote the set/gets will have an easier time coming out of this clean if the tests were implemented from day one on the simple set/gets.
My opinion is that a few minutes of "wasted" time covering ALL methods with unit tests, even trivial ones, might save days of headache down the road and loss of money/reputation of the business and loss of someone's job.
And the fact that you did wrap trivial methods with unit tests might be noticed by the teammate who later changes those trivial methods into non-trivial ones, prompting them to update the test; now nobody is in trouble, because the defect was kept from reaching production.
The way we code, and the discipline that can be seen from our code, can help others.
Another canonical answer. This, I believe, from Ron Jeffries:
Only test the code that you want to work.
Testing boilerplate code is a waste of time, but as Slavo says, if you add a side effect to your getters/setters, then you should write a test to accompany that functionality.
If you're doing test-driven development, you should write the contract (eg interface) first, then write the test(s) to exercise that interface which document the expected results/behaviour. Then write your methods themselves, without touching the code in your unit tests. Finally, grab a code coverage tool and make sure your tests exercise all the logic paths in your code.
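That order of operations can be sketched as follows; the IAuthenticator/SimpleAuthenticator names are hypothetical, made up for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// 1. The contract comes first.
public interface IAuthenticator
{
    bool Authenticate(string username, string password);
}

// 2. Then tests that exercise the interface and document expected behaviour,
//    written before any implementation exists.
[TestClass]
public class AuthenticatorTests
{
    [TestMethod]
    public void Authenticate_WrongPassword_ReturnsFalse()
    {
        IAuthenticator auth = new SimpleAuthenticator("mike", "secret");
        Assert.IsFalse(auth.Authenticate("mike", "wrong"));
    }

    [TestMethod]
    public void Authenticate_CorrectCredentials_ReturnsTrue()
    {
        IAuthenticator auth = new SimpleAuthenticator("mike", "secret");
        Assert.IsTrue(auth.Authenticate("mike", "secret"));
    }
}

// 3. Only then the implementation, written to make the tests pass.
public class SimpleAuthenticator : IAuthenticator
{
    private readonly string user;
    private readonly string pass;

    public SimpleAuthenticator(string user, string pass)
    {
        this.user = user;
        this.pass = pass;
    }

    public bool Authenticate(string username, string password)
    {
        return username == user && password == pass;
    }
}
```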
Really trivial code, like getters and setters that do nothing beyond setting a private field, is overkill to test. C# 3.0 even has syntactic sugar for this (auto-implemented properties), where the compiler takes care of the private field so you don't have to write it yourself.
I usually write lots of very simple tests verifying behaviour I expect from my classes. Even if it's simple stuff like adding two numbers. I switch a lot between writing a simple test and writing some lines of code. The reason for this is that I then can change around code without being afraid I broke things I didn't think about.
You should test everything. Right now you have getters and setters, but one day you might change them somewhat, maybe to do validation or something else. The tests you write today will be used tomorrow to make sure everything keeps on working as usual.
When you write test, you should forget considerations like "right now it's trivial". In an agile or test-driven context you should test assuming future refactoring.
Also, did you try putting in really weird values like extremely long strings, or other "bad" content? Well you should... never assume how badly your code can be abused in the future.
Generally I find that writing extensive unit tests is, on one hand, exhausting. On the other hand, it always gives you invaluable insight into how your application should work and helps you throw away easy (and false) assumptions (like: the user name will always be less than 1000 characters long).
For simple modules that may end up in a toolkit or in an open-source project, you should test as much as possible, including the trivial getters and setters. Keep in mind that writing a unit test as you write a particular module is fairly simple and straightforward, and adding getters and setters is minimal code that can be handled without much thought. However, once your code is placed in a larger system, this extra effort can protect you against changes in the underlying system, such as type changes in a base class. Testing everything is the best way to have a complete regression suite.
It doesn't hurt to write unit tests for your getters and setters. Right now, they may just be doing field get/sets under the hood, but in the future you may have validation logic, or inter-property dependencies that need to be tested. It's easier to write it now while you're thinking about it than to remember to retrofit it if that time ever comes.
In general, when a method is only defined for certain values, test for values on and over the border of what is acceptable. In other words, make sure your method does what it's supposed to do, but nothing more. This is important, because when you're going to fail, you want to fail early.
In inheritance hierarchies, make sure to test for LSP compliance.
Testing default getters and setters doesn't seem very useful to me, unless you're planning to do some validation later on.
Well, if you think it can break, write a test for it. I usually don't test setters/getters, but let's say you write one for User.Name that concatenates the first and last names. I would write a test so that if someone changes the order of the last and first names, at least they would know they changed something that was tested.
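A sketch of that scenario (the User type and property names here are hypothetical): the getter carries real logic, so the expected order gets pinned down by a test.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Real logic, not a plain field wrapper -- worth a test.
    public string Name
    {
        get { return FirstName + " " + LastName; }
    }
}

[TestClass]
public class UserNameTests
{
    [TestMethod]
    public void Name_IsFirstNameThenLastName()
    {
        var user = new User { FirstName = "Ada", LastName = "Lovelace" };
        // Fails loudly if someone swaps the concatenation order.
        Assert.AreEqual("Ada Lovelace", user.Name);
    }
}
```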
The canonical answer is "test anything that can possibly break." If you are sure the properties won't break, don't test them.
And once something is found to have broken (you find a bug), obviously it means you need to test it. Write a test to reproduce the bug, watch it fail, then fix the bug, then watch the test pass.
As I understand unit tests in the context of agile development, Mike, yes, you need to test the getters and setters (assuming they're publicly visible). The whole concept of unit testing is to test the software unit, which is a class in this case, as a black box. Since the getters and setters are externally visible you need to test them along with Authenticate and Save.
If the Authenticate and Save methods use the properties, then your tests will indirectly touch the properties. As long as the properties are just providing access to data, then explicit testing should not be necessary (unless you are going for 100% coverage).
I would test your getters and setters. Depending on who's writing the code, some people change the meaning of the getter/setter methods. I've seen variable initialization and other validation as part of getter methods. In order to test this sort of thing, you'd want unit tests covering that code explicitly.
Personally I would "test anything that can break", and a simple getter (or even better, an auto property) will not break. I have never had a simple return statement fail and therefore never test for them. If the getters have calculations within them or some other form of statements, I would certainly add tests for them.
Personally I use Moq as a mock object framework and then verify that my object calls the surrounding objects the way it should.
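For example, a minimal sketch of that interaction-style verification with Moq; the IEmailSender and UserService types are made up for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface IEmailSender
{
    void SendWelcome(string address);
}

public class UserService
{
    private readonly IEmailSender sender;

    public UserService(IEmailSender sender)
    {
        this.sender = sender;
    }

    public void Register(string address)
    {
        // ... persist the user, then notify ...
        sender.SendWelcome(address);
    }
}

[TestClass]
public class UserServiceTests
{
    [TestMethod]
    public void Register_SendsExactlyOneWelcomeMail()
    {
        var mock = new Mock<IEmailSender>();
        var service = new UserService(mock.Object);

        service.Register("a@example.com");

        // Verifies the interaction with the surrounding object,
        // not just the return value.
        mock.Verify(s => s.SendWelcome("a@example.com"), Times.Once());
    }
}
```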
You have to cover the execution of every method of the class with unit tests and check the method return values. This includes getters and setters, especially in case the members (properties) are complex classes, which require large memory allocation during their initialization. Call the setter with some very large string, for example (or something with Greek symbols), and check that the result is correct (not truncated, encoding is good, etc.).
In the case of simple integers that also applies: what happens if you pass a long instead of an integer? That's the reason you write unit tests :)
I wouldn't test the actual setting of properties. I would be more concerned about how those properties get populated by the consumer, and what they populate them with. With any testing, you have to weigh the risks with the time/cost of testing.
You should test "every non-trivial block of code" using unit tests as far as possible.
If your properties are trivial and it's unlikely that someone will introduce a bug in them, then it should be safe not to unit test them.
Your Authenticate() and Save() methods look like good candidates for testing.
Ideally, you would have written your unit tests as you were writing the class. This is how you're meant to do it when using Test-Driven Development. You add the tests as you implement each function point, making sure that you cover the edge cases with tests too.
Writing the tests afterwards is much more painful, but doable.
Here's what I'd do in your position:
Write a basic set of tests that test the core function.
Get NCover and run it on your tests. Your test coverage will probably be around 50% at this point.
Keep adding tests that cover your edge-cases until you get coverage of around 80%-90%
This should give you a nice working set of unit tests that will act as a good buffer against regressions.
The only problem with this approach is that code has to be designed to be testable in this fashion. If you made any coupling mistakes early on, you won't be able to get high coverage very easily.
This is why it is really important to write the tests before you write the code. It forces you to write code that is loosely coupled.
Don't test obviously working (boilerplate) code. So if your setters and getters are just "propertyvalue = value" and "return propertyvalue" it makes no sense to test it.
Even get / set can have odd consequences, depending upon how they have been implemented, so they should be treated as methods.
Each test of these will need to specify sets of parameters for the properties, defining both acceptable and unacceptable properties to ensure the calls return / fail in the expected manner.
You also need to be aware of security gotchas, as an example SQL injection, and test for these.
So yes, you do need to worry about testing the properties.
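A sketch of what an acceptable/unacceptable pair of parameter tests looks like for a validating setter. The Account type is hypothetical, and Assert.ThrowsException comes from newer MSTest versions; on older ones, use [ExpectedException] instead:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Account
{
    private decimal balance;

    public decimal Balance
    {
        get { return balance; }
        set
        {
            // The setter enforces a precondition, so both sides
            // of the boundary need a test.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            balance = value;
        }
    }
}

[TestClass]
public class AccountBalanceTests
{
    [TestMethod]
    public void Balance_AcceptsBoundaryValueZero()
    {
        var account = new Account { Balance = 0M };
        Assert.AreEqual(0M, account.Balance);
    }

    [TestMethod]
    public void Balance_RejectsNegativeValues()
    {
        var account = new Account();
        Assert.ThrowsException<ArgumentOutOfRangeException>(
            () => account.Balance = -0.01M);
    }
}
```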
I believe it's silly to test getters and setters when they only perform a simple operation. Personally I don't write complex unit tests to cover every usage pattern. I try to write enough tests to ensure I have handled the normal execution behaviour and as many error cases as I can think of. I will write more unit tests as a response to bug reports. I use unit tests to ensure the code meets the requirements and to make future modification easier. I feel a lot more willing to change code when I know that if I break something, a test will fail.
I would write a test for anything that you are writing code for that is testable outside of the GUI interface.
Typically, I place any logic that contains business rules inside another tier or business logic layer.
Then writing tests for anything that does something is easy to do.
First pass, write a unit test for each public method in your "Business Logic Layer".
If I had a class like this:
public class AccountService
{
    public void DebitAccount(int accountNumber, double amount)
    {
    }

    public void CreditAccount(int accountNumber, double amount)
    {
    }

    public void CloseAccount(int accountNumber)
    {
    }
}
The first thing I would do before I wrote any code knowing that I had these actions to perform would be to start writing unit tests.
[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void DebitAccountTest()
    {
    }

    [Test]
    public void CreditAccountTest()
    {
    }

    [Test]
    public void CloseAccountTest()
    {
    }
}
Write your tests to validate the code you've written to do something. If you're iterating over a collection of things and changing something about each of them, write a test that does the same thing and assert that it actually happened.
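A sketch of that iterate-and-assert idea; the Invoice type and ApplyDiscount method are hypothetical, made up for this example:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Invoice
{
    public decimal Amount { get; set; }
}

public static class InvoiceBatch
{
    // Changes something about each element of the collection.
    public static void ApplyDiscount(IList<Invoice> invoices, decimal rate)
    {
        foreach (var invoice in invoices)
        {
            invoice.Amount = invoice.Amount * (1 - rate);
        }
    }
}

[TestClass]
public class InvoiceBatchTests
{
    [TestMethod]
    public void ApplyDiscount_ReducesEveryInvoice()
    {
        var invoices = new List<Invoice>
        {
            new Invoice { Amount = 100M },
            new Invoice { Amount = 200M },
        };

        InvoiceBatch.ApplyDiscount(invoices, 0.10M);

        // Assert that the change actually happened to each item.
        Assert.AreEqual(90M, invoices[0].Amount);
        Assert.AreEqual(180M, invoices[1].Amount);
    }
}
```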
There are a lot of other approaches you can take, namely Behaviour-Driven Development (BDD); that's more involved and not a great place to start with your unit testing skills.
So, the moral of the story is, test anything that does anything you might be worried about, keep the unit tests testing specific things that are small in size, a lot of tests are good.
Keep your business logic outside of the User Interface layer so that you can easily write tests for them, and you'll be good.
I recommend TestDriven.Net or ReSharper as both easily integrate into Visual Studio.
I would recommend writing multiple tests for your Authenticate and Save methods. In addition to the success case (where all parameters are provided, everything is correctly spelled, etc), it's good to have tests for various failure cases (incorrect or missing parameters, unavailable database connections if applicable, etc). I recommend Pragmatic Unit Testing in C# with NUnit as a reference.
As others have stated, unit tests for getters and setters are overkill, unless there's conditional logic in your getters and setters.
Whilst it is possible to correctly guess where your code needs testing, I generally think you need metrics to back up this guess. Unit testing in my view goes hand in hand with code-coverage metrics.
Code with lots of tests but low coverage hasn't been well tested. That said, code with 100% coverage that doesn't test the boundary and error cases is also not great.
You want a balance between high coverage (90% minimum) and variable input data.
Remember to test for "garbage in"!
Also, a unit-test is not a unit-test unless it checks for a failure. Unit-tests that don't have asserts or are marked with known exceptions will simply test that the code doesn't die when run!
You need to design your tests so that they always report failures or unexpected/unwanted data!
It makes our code better... period!
One thing us software developers forget about when doing test driven development is the purpose behind our actions. If a unit test is being written after the production code is already in place, the value of the test goes way down (but is not completely lost).
In the true spirit of unit testing, these tests are not primarily there to "test" more of our code, or to get 90%-100% better code coverage. These are all fringe benefits of writing the tests first. The big payoff is that our production code ends up being written much better due to the natural process of TDD.
To help better communicate this idea, the following may be helpful in reading:
The Flawed Theory of Unit Tests
Purposeful Software Development
If we feel that the act of writing more unit tests is what helps us gain a higher quality product, then we may be suffering from a Cargo Cult of Test Driven Development.
