How do I ensure complete unit test coverage? - c#

I have 2 projects, one is the "Main" project and the other is the "Test" project.
The policy is that all methods in the Main project must have at least one accompanying test in the test project.
What I want is a new unit test in the Test project that verifies this remains the case. If there are any methods that do not have corresponding tests (and this includes method overloads) then I want this test to fail.
I can sort out appropriate messaging when the test fails.
My guess is that I can get every method (using reflection?), but I'm not sure how to then verify that there is a reference to each method from within this Test project (while ignoring references from other projects).
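For the enumeration part, something along these lines is what I have in mind (AnyTypeInMainProject is a placeholder for a type I know lives in the Main project; the filtering rules are guesses). The part I can't see is how to check that each method is actually referenced from the Test assembly - presumably that needs IL inspection (e.g. a library like Mono.Cecil) rather than plain reflection:

    using System.Linq;
    using System.Reflection;

    // AnyTypeInMainProject is a placeholder for a type known to live in "Main".
    var mainAssembly = typeof(AnyTypeInMainProject).Assembly;
    var allMethods = mainAssembly.GetTypes()
        .Where(t => t.IsClass && t.IsPublic)
        .SelectMany(t => t.GetMethods(
            BindingFlags.Public | BindingFlags.Instance |
            BindingFlags.Static | BindingFlags.DeclaredOnly));
    // Overloads show up as distinct MethodInfo entries, so they are enumerated too.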

You can use any existing software to measure code coverage but...
Don't do it!!! Seriously. The aim should not be to have 100% coverage but to have software that can easily evolve. From your test project you could invoke every single existing method by reflection and swallow all the exceptions. That would push your coverage to around 100%, but what good would it be?
Read about TDD. Start creating testable software that has meaningful tests that will save you when something goes wrong. It's not about coverage, it's about being safe.

This is an example of a meta-test that sounds good in principle, but once you get into the detail you should quickly realise it is a bad idea. As has already been suggested, the correct approach is to encourage whoever owns the policy to amend it. As you've quoted the policy, it is sufficiently specific that people can satisfy the requirement without really achieving anything of value.
Consider:
[TestMethod]
public void TestMethod1Exists()
{
    try
    {
        var classUnderTest = new ClassToTest();
        classUnderTest.Method1();
    }
    catch (Exception)
    {
        // Swallow everything, so the "test" can never fail.
    }
}
The test contains a call to Method1 on the ClassToTest, so the requirement of having a test for that method is satisfied, but nothing useful is being tested. As long as the method exists (which it must, if the code compiled) the test will pass.
The intent of the policy is presumably to try to ensure that written code is being tested. Looking at some very basic code:
public string IsSet(bool flag)
{
    if (flag)
    {
        return "YES";
    }
    return "NO";
}
As methods go, this is pretty simple (it could easily be changed to one line), but even so it contains two routes through the method. Having a test to ensure that this method is being called gives you a false sense of security. You would know it is being called but you would not know if all of the code paths are being tested.
An alternative that has been suggested is that you could just use code coverage tools. These can be useful and give a much better idea as to how well exercised your code is, but again they only give an indication of the coverage, not the quality of that coverage. So, let’s say I had some tests for the IsSet method above:
[TestMethod]
public void TestWhenTrue()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(true), typeof(string));
}

[TestMethod]
public void TestWhenFalse()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(false), typeof(string));
}
I’m passing sufficient parameters to exercise both code paths, so the coverage for the IsSet method should be 100%. But all I am testing is that the method returns a string. I’m not testing the value of the string, so the tests themselves don’t really add much (if any) value.
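For contrast, here is a sketch of the same two tests pinned down to the actual values (only the asserts change):

    [TestMethod]
    public void TestWhenTrue()
    {
        var classUnderTest = new ClassToTest();
        // Assert the value itself, not merely that a string came back.
        Assert.AreEqual("YES", classUnderTest.IsSet(true));
    }

    [TestMethod]
    public void TestWhenFalse()
    {
        var classUnderTest = new ClassToTest();
        Assert.AreEqual("NO", classUnderTest.IsSet(false));
    }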
Code coverage is a useful metric, but only as part of a larger picture of code quality. Having peer reviews and sharing best practice around how to effectively test the code you are writing within your team, whilst less concretely measurable, will have a more significant impact on the quality of your test code.

Related

How to test the value of a variable that exists only inside that function?

I'm trying to implement some unit tests for a function, but one of the inputs does not change what the function returns. Even though this input won't change what the function returns, I still need it for an API inside the code. The function is something like this:
public bool IsEmailSent(string userEmail, bool isJson)
{
    var link = isJson ? "Link1" : "Link2";
    if (string.IsNullOrEmpty(userEmail))
    {
        return false;
    }
    //Some other code
    return true;
}
What I'm trying to do is to test the value of the variable 'link', which depends on the input 'isJson'. So the test I wanted to implement is something like:
[TestMethod]
public void link_should_be_link2_when_isJson_is_false()
{
    // if isJson is false && link == "Link2", the test is successful
}
The problem is that I have no idea how to get at the variable 'link' inside a test to check whether its value is correct, since my function doesn't return it. So, how do I test the value of a function's variable that depends on a given input, when the function doesn't return anything close to that variable's value?
When performing unit testing, consider a function like a black box: test combinations of inputs, test idempotence, etc. The implementation of the actual function can be abstracted away from the unit tests. Without seeing the
//Some other code
I would see if you can turn the API call into a helper function. Then, write separate unit tests for the helper function.
Since the computation of link in your function does not have an impact on the return value, it must have some other observable effect, because otherwise the computation would not be needed. In your case it seems likely that the effect would be observable in the way your function accesses a depended-on component (DOC), probably some function that sends out an email.
You have a multitude of options here:
You can mock the call to the DOC function that sends out the email, to see whether it is called as expected for your expected intermediate value of link (see the sketch after this list).
You can factor out some helper functions from link as was suggested by elgonzo.
You could consider refactoring your code to avoid the Boolean control argument, which looks like a code smell, for example to have different functions for the Json and the non-Json case. However, the testing problem will then have to be solved according to the new design you chose.
...
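As a sketch of the first option: assuming the email sending is done through some injectable dependency (IEmailSender and EmailService are names invented here; the real seam depends on what the "//Some other code" does), a mocking framework such as Moq can make the intermediate link value observable at the boundary:

    // using Moq; the send is assumed to go through this hypothetical interface
    // instead of being buried inside IsEmailSent.
    public interface IEmailSender
    {
        void Send(string userEmail, string link);
    }

    [TestMethod]
    public void link_should_be_link2_when_isJson_is_false()
    {
        var sender = new Mock<IEmailSender>();          // Moq mock object
        var service = new EmailService(sender.Object);  // class under test, injected

        service.IsEmailSent("user@example.com", isJson: false);

        // The intermediate 'link' value becomes observable at the boundary.
        sender.Verify(s => s.Send("user@example.com", "Link2"), Times.Once);
    }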
Some remarks, however: Unit-testing is not a black box technique. In fact, some of the unit-testing specific test design techniques only make sense for glass box (aka white box) testing, namely all the coverage based test design methods. An attempt to keep the whole unit-test suite independent of implementation details is likely to result in an inefficient test suite, that is, a test suite that is not suited to find all bugs that could be found.
Bugs are, in the end, in the implementation. Different implementations will have different bugs. Think about the different ways to implement a Fibonacci function - as an iterative/recursive function, a closed-form expression (de Moivre/Binet), or a lookup table: every implementation brings different potential bugs. Unit testing is the testing method at the bottom of the test pyramid, and all higher-level tests (integration or system tests) are less suited to finding bugs in the implementation details. And finding bugs is one primary goal of testing (see Myers, Badgett, Sandler: The Art of Software Testing, or Beizer: Software Testing Techniques, among many others).
The best approach therefore is to have as many as possible useful tests that are implementation independent. Additionally, you will likely need additional tests that aim at finding the potential bugs in the chosen implementation. However, the less stable an implementation aspect is, the more you should avoid making your tests dependent on it: Helper functions may be more likely to be renamed, merged or deleted than the functions forming the official API of your component.

How to write unit test first and code later?

I am new to unit testing and have read several times that we should write the unit test first and then the actual code. As of now, I am writing my methods first and then unit testing the code.
If you write the tests first... you tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type of development and keeps you focused on solving the problem, not working on meta-problems.
If you write the code first... you will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Sounds good to me. However, how do I write unit tests before having my code in place?
Am I taking the advice too literally? Does it mean that I should have my POCO classes and interfaces in place and then write the unit tests?
Can anyone explain to me how this is done, with a simple example of, say, adding two numbers?
It's simple really. Red, Green, Refactor.
Red means - your code is completely broken. The syntax highlighting shows red and the test doesn't pass. Why? You haven't written any code yet.
Green means - your application builds and the test passes. You've added the required code.
Refactor means - clean it up and make sure the test still passes.
You can start by writing a test somewhat like this:
[TestMethod]
public void Can_Create_MathClass() {
    var math = new MathClass();
    Assert.IsNotNull(math);
}
This will fail (RED). How do you fix it? Create the class.
public class MathClass {
}
That's it. It now passes (GREEN). Next test:
[TestMethod]
public void Can_Add_Two_Numbers() {
    var math = new MathClass();
    var result = math.Add(1, 2);
    Assert.AreEqual(3, result);
}
This also fails (RED). Create the Add method:
public class MathClass {
    public int Add(int a, int b) {
        return a + b;
    }
}
Run the test. This will pass (GREEN).
Refactoring is a matter of cleaning up the code. It also means you can remove redundant tests. We know we have the MathClass now, so you can completely remove the Can_Create_MathClass test. Once that is done, you've passed REFACTOR and can continue on.
It is important to remember that the Refactor step doesn't just mean your normal code. It also means tests. You cannot let your tests deteriorate over time. You must include them in the Refactor step.
When you create your tests first, before the code, you will find it much easier and faster to create your code. The combined time it takes to create a unit test and create some code to make it pass is about the same as just coding it up straight away. But if you already have the unit tests, you don't need to create them after the code, saving you some time now and lots more later.
Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.
The code you will create is simple and concise, implementing only the features you wanted. Other developers can see how to use this new code by browsing the tests. Input whose results are undefined will be conspicuously absent from the test suite.
There is also a benefit to system design. It is often very difficult to unit test some software systems. These systems are typically built code first and testing second, often by a different team entirely. By creating tests first your design will be influenced by a desire to test everything of value to your customer. Your design will reflect this by being easier to test.
Let's take a slightly more advanced example: You want to write a method that returns the largest number from a sequence.
Firstly, write one or more unit tests for the method to be tested:
int[] testSeq1 = {1, 4, 8, 120, 34, 56, -1, 3, -13};
Assert.That(MaxOf(testSeq1) == 120);
And repeat for some more sequences. Also include a null parameter, a sequence with one element and an empty sequence and decide if an empty sequence or null parameter should throw an exception (and ensure that the unit test expects an exception for an empty sequence if that's the case).
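Those extra cases might look like this (NUnit syntax; whether null and empty throw is precisely the design decision being made here):

    int[] testSeq2 = { 42 };   // a single-element sequence
    Assert.That(MaxOf(testSeq2) == 42);

    // Decided here: null and empty sequences should throw.
    Assert.Throws<ArgumentNullException>(() => MaxOf(null));
    Assert.Throws<InvalidOperationException>(() => MaxOf(new int[0]));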
It is during this process that you need to decide the name of the method and the type of its parameters.
At this point, it won't compile.
Then write a stub for the method:
public int MaxOf(IEnumerable<int> sequence)
{
    return 0;
}
At this point it compiles, but the unit tests fail.
Then implement MaxOf() so that those unit tests now pass.
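One implementation that would make those tests pass might look like this (a sketch; LINQ's Enumerable.Max behaves the same way for the non-null cases):

    public int MaxOf(IEnumerable<int> sequence)
    {
        if (sequence == null)
            throw new ArgumentNullException("sequence");

        using (var e = sequence.GetEnumerator())
        {
            // An empty sequence has no maximum; fail fast, as the tests demand.
            if (!e.MoveNext())
                throw new InvalidOperationException("The sequence is empty.");

            int max = e.Current;
            while (e.MoveNext())
                if (e.Current > max)
                    max = e.Current;
            return max;
        }
    }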
Doing it this way around ensures that you immediately focus on the usability of the method, since the very first thing you try to do is to use it - before even beginning to write it. You might well decide to change the method's declaration slightly at this point, based on the usage pattern.
A real world example would apply this approach to using an entire class rather than just one method. For the sake of brevity I have omitted the class from the example above.
It is possible to write the unit tests before you write any code - Visual Studio even has features to generate method stubs from the code you've written in your unit test. Doing it this way around can also help you understand the methods that the object will need to support - sometimes this can aid later enhancements (if you had a save-to-disk method that you also overloaded to save to a Stream, this is more testable and aids spooling over the network if required later on).

Unit testing a class that tracks state

I am abstracting the history tracking portion of a class of mine so that it looks like this:
private readonly Stack<MyObject> _pastHistory = new Stack<MyObject>();

internal virtual Boolean IsAnyHistory { get { return _pastHistory.Any(); } }

internal virtual void AddObjectToHistory(MyObject myObject)
{
    if (myObject == null) throw new ArgumentNullException("myObject");
    _pastHistory.Push(myObject);
}

internal virtual MyObject RemoveLastObject()
{
    if (!IsAnyHistory) throw new InvalidOperationException("There is no previous history.");
    return _pastHistory.Pop();
}
My problem is that I would like to unit test that Remove will return the last Added object.
AddObjectToHistory
RemoveLastObject -> returns what was put in via AddObjectToHistory
However, it isn't really a unit test if I have to call Add first? But, the only way that I can see to do this in a true unit test way is to pass in the Stack object in the constructor OR mock out IsAnyHistory...but mocking my SUT is odd also. So, my question is, from a dogmatic view is this a unit test? If not, how do I clean it up...is constructor injection my only way? It just seems like a stretch to have to pass in a simple object? Is it ok to push even this simple object out to be injected?
There are two approaches to those scenarios:
Interfere with the design, e.g. by making _pastHistory internal/protected or by injecting the stack
Use other (possibly unit tested) methods to perform verification
As always, there is no golden rule, although I'd say you generally should avoid situations where unit tests force design changes (as those changes will most likely introduce ambiguity/unnecessary questions to code consumers).
Nonetheless, in the end it is you who has to weigh how much you want unit tests to interfere with the design (first case) or bend the perfect unit-test definition (second case).
Usually, I find the second case much more appealing - it doesn't clutter the original class code, and you'll most likely have Add tested already - so it's safe to rely on it.
I think it's still a unit test, assuming MyObject is a simple object. I often construct input parameters to unit test methods.
I use Michael Feathers' unit test criteria:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
My 2 cents... how would the client know whether remove worked or not? How is a 'client' supposed to interact with this object? Are clients going to push in a stack to the history tracker? Treat the test as just another user/consumer/client of the test subject, using exactly the same interactions as in real production.
I haven't heard of any rule stating that you're not allowed to call multiple methods on the object under test.
To simulate a non-empty stack, I'd just call Add - that's the 99% case. I'd refrain from destroying the encapsulation of that object. Treat objects like people (I think I read that in Object Thinking): tell them to do stuff, don't break in and enter.
e.g. If you want someone to have some money in their wallet, you can either:
give them the money and let them internally put it into their wallet, or
throw their wallet away and stuff a pre-filled wallet into their pocket.
I like Option 1. Also see how it frees you from implementation details (which induce brittleness in tests). Let's say tomorrow the person decides to use an online wallet: the latter approach will break your tests - they will need to be updated to push in an online wallet now - even though the object's behavior is not broken.
Another example I've seen is for testing Repository.GetX(), where people break in to the DB to inject records with SQL in the unit test, where it would have been considerably cleaner and easier to call Repository.AddX(x) first. Isolation is desired, but not to the extent that it overrides pragmatism.
I hope I didn't come on too strong here... it just pains me to see object APIs being 'contorted for testability' to the point where they no longer resemble the 'simplest thing that could work'.
I think you're trying to be a little overly specific with your definition of a unit test. You should be testing the public behavior of your class, not the minute implementation details.
From your code snippet, it looks like all you really need to care about is whether a) calling AddObjectToHistory causes IsAnyHistory to return true and b) RemoveLastObject eventually causes IsAnyHistory to return false.
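A sketch of those two checks (HistoryTracker is an invented name for the class that owns the stack; since the members are internal, the test assembly would need an InternalsVisibleTo attribute):

    [Test]
    public void AddObjectToHistory_MakesIsAnyHistoryTrue()
    {
        var tracker = new HistoryTracker();
        tracker.AddObjectToHistory(new MyObject());
        Assert.IsTrue(tracker.IsAnyHistory);
    }

    [Test]
    public void RemoveLastObject_ReturnsMostRecentlyAddedObject()
    {
        var tracker = new HistoryTracker();
        var first = new MyObject();
        var last = new MyObject();
        tracker.AddObjectToHistory(first);
        tracker.AddObjectToHistory(last);

        // LIFO behaviour: the most recently added object comes back first.
        Assert.AreSame(last, tracker.RemoveLastObject());
    }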
As stated in the other answers, I think your options can be broken down like so:
You take a dogmatic approach to your testing methodology and add constructor injection for the stack object, so you can inject your own fake stack object and test your methods against it.
You write a separate test for add and remove; the remove test will use the add method, but consider it part of the test setup. As long as your add test passes, your remove should be fine too.

Unit Testing: Self-contained tests vs code duplication (DRY)

I'm making my first steps with unit testing and am unsure about two paradigms which seem to contradict each other when it comes to unit tests, namely:
Every single unit test should be self-contained and not depend on others.
Don't repeat yourself.
To be more concrete, I've got an importer which I want to test. The importer has an "Import" function taking raw data (e.g. out of a CSV) and returning an object of a certain kind, which will also be stored in a database through an ORM (LINQ to SQL in this case).
Now I want to test several things, e.g. that the returned object is not null, that its mandatory fields are not null or empty, and that its attributes have the correct values. I wrote 3 unit tests for this. Should each test import and get the job, or does this belong in general setup logic? On the other hand, believing this blog post, the latter would be a bad idea as far as my understanding goes. Also, wouldn't this violate the self-containment?
My class looks like this:
[TestFixture]
public class ImportJob
{
    private TransactionScope scope;
    private CsvImporter csvImporter;
    private readonly string[] row = { "" };

    public ImportJob()
    {
        CsvReader reader = new CsvReader(new StreamReader(
            @"C:\SomePath\unit_test.csv", Encoding.Default),
            false, ';');
        reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;
        int fieldCount = reader.FieldCount;
        row = new string[fieldCount];
        reader.ReadNextRecord();
        reader.CopyCurrentRecordTo(row);
    }

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
        csvImporter = new CsvImporter();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void ImportJob_IsNotNull()
    {
        Job j = csvImporter.ImportJob(row);
        Assert.IsNotNull(j);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreNotNull()
    {
        Job j = csvImporter.ImportJob(row);
        Assert.IsNotNull(j.Customer);
        Assert.IsNotNull(j.DateCreated);
        Assert.IsNotNull(j.OrderNo);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreValid()
    {
        Job j = csvImporter.ImportJob(row);
        Customer c = csvImporter.GetCustomer("01-01234567");
        Assert.AreEqual(j.Customer, c);
        Assert.That(j.DateCreated.Date == DateTime.Now.Date);
        Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]);
    }

    // etc. ...
}
As can be seen, I'm doing the line Job j = csvImporter.ImportJob(row);
in every unit test, as they should be self-contained. But this does violate the DRY principle and may possibly cause performance issues some day.
What's the best practice in this case?
Your test classes are no different from usual classes, and should be treated as such: all good practices (DRY, code reuse, etc.) should apply there as well.
That depends on how much of your scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for the three tests, and that can't be considered best practice. In your case you've got the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find that there are tests that do not share this setup, or that require a different setup shared between a set of tests, then refactor those tests to a new test case class. You could also have shared setup methods that are not marked with [SetUp] but get called at the beginning of each test that needs them:
[Test]
public void SomeTest()
{
    setupSomeSharedState();
    ...
}
A way of finding the right mix could be to start off without a SetUp method and when you find that you're duplicating code for test setup then refactor to a shared method.
You could put the
Job j = csvImporter.ImportJob(row);
in your setup. That way you're not repeating code.
You actually should run that line of code for each and every test; otherwise tests will start failing because of things that happened in other tests, which becomes hard to maintain.
The performance problem isn't caused by DRY violations. You actually should set up everything for each and every test. Also, these aren't unit tests, they're integration tests: you rely on an external file to run the test. You could make ImportJob read from a stream instead of having it open a file directly; then you could test with a MemoryStream.
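A sketch of that idea (the TextReader overload is hypothetical; the point is that the CSV content lives inside the test instead of on disk):

    // using System.IO; using System.Text;
    [Test]
    public void ImportJob_CanReadRowFromMemoryStream()
    {
        var csv = "ACME;01-01234567;42\n";   // made-up row layout
        var bytes = Encoding.UTF8.GetBytes(csv);

        using (var stream = new MemoryStream(bytes))
        using (var reader = new StreamReader(stream))
        {
            var csvImporter = new CsvImporter();
            // Hypothetical overload taking a TextReader instead of a file path.
            Job j = csvImporter.ImportJob(reader);
            Assert.IsNotNull(j);
        }
    }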
Whether you move
Job j = csvImporter.ImportJob(row);
into the SetUp function or not, it will still be executed before every test. If you have the exact same line at the top of each test, then it is only logical to move that line into the SetUp portion.
The blog entry that you posted complained about the setup of the test values being done in a function disconnected from (possibly not on the same screen as) the test itself - but your case is different, in that the test data is driven by an external text file, so that complaint doesn't match your specific use case either.
In one of my projects we agreed as a team that we would not implement any initialization logic in unit test constructors. We have the Setup, TestFixtureSetup, and SetupFixture (since version 2.4 of NUnit) attributes. They are enough for almost all cases when we need initialization. We force developers to use one of these attributes and to define explicitly whether the initialization code runs before each test, before all tests in a fixture, or before all tests in a namespace.
However, I will disagree that unit tests should always conform to all the good practices expected of usual development. That is desirable, but it is not a rule. My point is that in real life the customer doesn't pay for unit tests. The customer pays for the overall quality and functionality of the product. He is not interested in whether you deliver a bug-free product by covering 100% of the code with unit tests/automated GUI tests or by employing 3 manual testers per developer who click on every piece of the screen after each build.
Unit tests don't add business value to the product; they allow you to save on development and testing effort and force developers to write better code. So it is always up to you - will you spend additional time on test refactoring to make the unit tests perfect? Or will you spend the same amount of time adding new features for the customers of your product? Do not forget either that unit tests should be as simple as possible. How do you find the golden mean?
I suppose this depends on the project, and the PM or team lead needs to plan and estimate the quality of unit tests, their completeness, and code coverage just as they estimate all other business features of the product. In my opinion, it is better to have copy-paste unit tests that cover 80% of the production code than very well designed and separated unit tests that cover only 20%.

How do you know what to test when writing unit tests? [closed]

Using C#, I need a class called User that has a username, password, active flag, first name, last name, full name, etc.
There should be methods to authenticate and save a user. Do I just write a test for the methods? And do I even need to worry about testing the properties, since they are .NET getters and setters?
Many great responses to this are also on my question: "Beginning TDD - Challenges? Solutions? Recommendations?"
May I also recommend taking a look at my blog post (which was partly inspired by my question), I have got some good feedback on that. Namely:
I Don’t Know Where to Start?
Start afresh. Only think about writing tests when you are writing new code. This can be re-working of old code, or a completely new feature.
Start simple. Don’t go running off and trying to get your head round a testing framework as well as being TDD-esque. Debug.Assert works fine. Use it as a starting point. It doesn’t mess with your project or create dependencies.
Start positive. You are trying to improve your craft, feel good about it. I have seen plenty of developers out there that are happy to stagnate and not try new things to better themselves. You are doing the right thing, remember this and it will help stop you from giving up.
Start ready for a challenge. It is quite hard to start getting into testing. Expect a challenge, but remember – challenges can be overcome.
Only Test For What You Expect
I had real problems when I first started because I was constantly sat there trying to figure out every possible problem that could occur and then trying to test for it and fix it. This is a quick way to a headache. Testing should be a real YAGNI process. If you know there is a problem, then write a test for it. Otherwise, don’t bother.
Only Test One Thing
Each test case should only ever test one thing. If you ever find yourself putting “and” in the test case name, you’re doing something wrong.
I hope this means we can move on from "getters and setters" :)
Test your code, not the language.
A unit test like:
int i = 7;
Assert.IsInstanceOf<int>(i);
is only useful if you are writing a compiler and there is a non-zero chance that the language's own type checking is not working.
Don't test stuff that you can rely on the language to enforce. In your case, I'd focus on your authenticate and save methods - and I'd write tests that made sure they could handle null values in any or all of those fields gracefully.
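For example, one such null-handling test might look like this (Authenticate's exact signature and behaviour are guesses; here it is assumed to return false rather than throw):

    [Test]
    public void Authenticate_WithNullPassword_ReturnsFalseGracefully()
    {
        // Assumption: authenticating with a null field should fail, not throw.
        var user = new User { Username = "alice", Password = null };
        Assert.IsFalse(user.Authenticate());
    }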
This got me into unit testing and it made me very happy
We just started to do unit testing.
For a long time I knew it would be good to start doing it but I had no idea how to start and more importantly what to test.
Then we had to rewrite an important piece of code in our accounting program.
This part was very complex as it involved a lot of different scenarios.
The part I'm talking about is a method to pay sales and/or purchase invoices already entered into the accounting system.
I just didn't know how to start coding it, as there were so many different payment options.
An invoice could be $100 but the customer only transferred $99.
Maybe you have sent sales invoices to a customer but you have also purchased from that customer.
So you sold him for $300 but you bought for $100. You can expect your customer to pay you $200 to settle the balance.
And what if you sold for $500 but the customer pays you only $250?
So I had a very complex problem to solve, with many possibilities; a given approach might work perfectly for one scenario but be wrong for another type of invoice/payment combination.
This is where unit testing came to the rescue.
I started to write (inside the test code) a method to create a list of invoices, both for sales and purchases.
Then I wrote a second method to create the actual payment.
Normally a user would enter that information through a user interface.
Then I created the first TestMethod, testing a very simple payment of a single invoice without any payment discounts.
All the action in the system would happen when a bank payment was saved to the database.
As you can see I created an invoice, created a payment (a bank transaction) and saved the transaction to disk.
In my asserts I put what should be the correct numbers ending up in the Bank transaction and in the linked Invoice.
I check for the number of payments, the payment amounts, the discount amount and the balance of the invoice after the transaction.
After the test ran I would go to the database and double check if what I expected was there.
After I wrote the test, I started coding the payment method (part of the BankHeader class).
In the coding I only bothered with code to make the first test pass. I did not yet think about the other, more complex, scenarios.
I ran the first test, fixed a small bug until my test would pass.
Then I started to write the second test, this time working with a payment discount.
After I wrote the test I modified the payment method to support discounts.
While testing for correctness with a payment discount, I also tested the simple payment.
Both tests should pass of course.
Then I worked my way down to the more complex scenarios.
1) Think of a new scenario
2) Write a test for that scenario
3) Run that single test to see if it would pass
4) If it didn't I'd debug and modify the code until it would pass.
5) While modifying code I kept on running all tests
This is how I managed to create my very complex payment method.
Without unit testing I did not know how to start coding, the problem seemed overwhelming.
With testing I could start with a simple method and extend it step by step with the assurance that the simpler scenarios would still work.
I'm sure that using unit testing saved me a few days (or weeks) of coding, and it more or less guarantees the correctness of my method.
If I later think of a new scenario, I can just add it to the tests to see if it is working or not.
If not I can modify the code but still be sure the other scenarios are still working correctly.
This will save days and days in the maintenance and bug fixing phase.
Yes, even tested code can still have bugs if a user does things you did not think of or did not prevent him from doing.
Below are just some of the tests I created to test my payment method.
public class TestPayments
{
    InvoiceDiaryHeader invoiceHeader = null;
    InvoiceDiaryDetail invoiceDetail = null;
    BankCashDiaryHeader bankHeader = null;
    BankCashDiaryDetail bankDetail = null;

    public InvoiceDiaryHeader CreateSales(string amountIncVat, bool sales, int invoiceNumber, string date)
    {
        ......
        ......
    }

    public BankCashDiaryHeader CreateMultiplePayments(IList<InvoiceDiaryHeader> invoices, int headerNumber, decimal amount, decimal discount)
    {
        ......
        ......
        ......
    }

    [TestMethod]
    public void TestSingleSalesPaymentNoDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 1, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 1, 119.00M, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(119M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(0M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSingleSalesPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 2, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 2, 118.00M, 1M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(1M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void TestDuplicateInvoiceNumber()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("100", true, 2, "01-09-2008"));
        list.Add(CreateSales("200", true, 2, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 3, 300, 0);
        bankHeader.Save();
        Assert.Fail("expected an ApplicationException");
    }

    [TestMethod]
    public void TestMultipleSalesPaymentWithPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 11, "01-09-2008"));
        list.Add(CreateSales("400", true, 12, "02-09-2008"));
        list.Add(CreateSales("600", true, 13, "03-09-2008"));
        list.Add(CreateSales("25,40", true, 14, "04-09-2008"));
        bankHeader = CreateMultiplePayments(list, 5, 1144.00M, 0.40M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(4, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118.60M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(400, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(600, bankHeader.BankCashDetails[0].Payments[2].PaymentAmount);
        Assert.AreEqual(25.40M, bankHeader.BankCashDetails[0].Payments[3].PaymentAmount);
        Assert.AreEqual(0.40M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSettlement()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("300", true, 43, "01-09-2008"));    //Sales
        list.Add(CreateSales("100", false, 6453, "02-09-2008")); //Purchase
        bankHeader = CreateMultiplePayments(list, 22, 200, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(2, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(300, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(-100, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
    }
}
If they really are trivial, then don't bother testing them. E.g., if they are implemented like this:
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
If, on the other hand, you are doing something clever (like encrypting and decrypting the password in the getter/setter), then give it a test.
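In that case a round-trip test is worth having; a sketch (EncryptedPassword is an invented name for wherever the ciphertext is exposed):

    [Test]
    public void Password_RoundTripsThroughEncryption()
    {
        var user = new User();
        user.Password = "s3cret";

        // The getter should decrypt exactly what the setter encrypted...
        Assert.AreEqual("s3cret", user.Password);
        // ...and the stored form should not be the plain text.
        Assert.AreNotEqual("s3cret", user.EncryptedPassword);
    }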
The rule is that you have to test every piece of logic you write. If you implemented some specific functionality in the getters and setters I think they are worth testing. If they only assign values to some private fields, don't bother.
This question seems to be a question of where does one draw the line on what methods get tested and which don't.
The setters and getters for value assignment have been created with consistency and future growth in mind, foreseeing that some time down the road the setter/getter may evolve into more complex operations. It would make sense to put unit tests for those methods in place, also for the sake of consistency and future growth.
Code reliability, especially while undergoing change to add additional functionality, is the primary goal. I am not aware of anyone ever getting fired for including setters/getters in the testing methodology, but I am certain there exist people who wish they had tested methods which, as far as they were aware or could recall, were simple set/get wrappers but no longer were.
Maybe another member of the team expanded the set/get methods to include logic that now needs testing, but didn't then create the tests. Now your code is calling these methods, you aren't aware they changed and need in-depth testing, the testing you do in development and QA doesn't trigger the defect, but real business data on the first day of release does.
The two teammates will now debate over who dropped the ball and failed to put in unit tests when the set/gets morphed to include logic that can fail but isn't covered by a unit test. The teammate that originally wrote the set/gets will have an easier time coming out of this clean if the tests were implemented from day one on the simple set/gets.
My opinion is that a few minutes of "wasted" time covering ALL methods with unit tests, even trivial ones, might save days of headache down the road and loss of money/reputation of the business and loss of someone's job.
And the fact that you did wrap trivial methods with unit tests might be noticed by that junior teammate when they change the trivial methods into non-trivial ones, prompting them to update the test; then nobody is in trouble, because the defect was contained before reaching production.
The way we code, and the discipline that can be seen from our code, can help others.
Another canonical answer. This, I believe, from Ron Jeffries:
Only test the code that you want to work.
Testing boilerplate code is a waste of time, but as Slavo says, if you add a side effect to your getters/setters, then you should write a test to accompany that functionality.
If you're doing test-driven development, you should write the contract (e.g. an interface) first, then write the test(s) to exercise that interface, documenting the expected results/behaviour. Then write your methods themselves, without touching the code in your unit tests. Finally, grab a code coverage tool and make sure your tests exercise all the logic paths in your code.
Really trivial code, like getters and setters that have no behaviour beyond setting a private field, is overkill to test. In C# 3.0 the compiler can even take care of the private field for you with a bit of syntactic sugar, so you don't have to program that.
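That sugar (an auto-implemented property) looks like this:

    // The compiler generates the hidden backing field; there is nothing
    // hand-written left to test.
    public string Username { get; set; }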
I usually write lots of very simple tests verifying behaviour I expect from my classes. Even if it's simple stuff like adding two numbers. I switch a lot between writing a simple test and writing some lines of code. The reason for this is that I then can change around code without being afraid I broke things I didn't think about.
You should test everything. Right now you have getters and setters, but one day you might change them somewhat, maybe to do validation or something else. The tests you write today will be used tomorrow to make sure everything keeps on working as usual.
When you write tests, you should forget considerations like "right now it's trivial". In an agile or test-driven context you should test assuming future refactoring.
Also, did you try putting in really weird values like extremely long strings, or other "bad" content? Well you should... never assume how badly your code can be abused in the future.
Generally I find that writing extensive user tests is, on one side, exhausting. On the other side, it always gives you invaluable insight into how your application should work and helps you throw away easy (and false) assumptions (like: the user name will always be less than 1000 characters in length).
For simple modules that may end up in a toolkit, or in an open source type of project, you should test as much as possible, including the trivial getters and setters. The thing you want to keep in mind is that generating a unit test as you write a particular module is fairly simple and straightforward. Adding getters and setters is minimal code and can be handled without much thought. However, once your code is placed in a larger system, this extra effort can protect you against changes in the underlying system, such as type changes in a base class. Testing everything is the best way to have a regression suite that is complete.
It doesn't hurt to write unit tests for your getters and setters. Right now, they may just be doing field get/sets under the hood, but in the future you may have validation logic, or inter-property dependencies, that needs to be tested. It's easier to write it now while you're thinking about it than to remember to retrofit it if that time ever comes.
In general, when a method is only defined for certain values, test for values on and over the border of what is acceptable. In other words, make sure your method does what it's supposed to do, but nothing more. This is important because when you're going to fail, you want to fail early.
In inheritance hierarchies, make sure to test for LSP compliance.
Testing default getters and setters doesn't seem very useful to me, unless you're planning to do some validation later on.
Well, if you think it can break, write a test for it. I usually don't test setters/getters, but let's say you make one for User.Name which concatenates first and last name; I would write a test so that if someone changes the order of last and first name, at least they would know they changed something that was tested.
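A sketch of that test (the Name property and its exact format are assumptions):

    [Test]
    public void Name_ConcatenatesFirstAndLastNameInOrder()
    {
        var user = new User { FirstName = "Ada", LastName = "Lovelace" };

        // If someone swaps the order, this test points straight at the change.
        Assert.AreEqual("Ada Lovelace", user.Name);
    }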
The canonical answer is "test anything that can possibly break." If you are sure the properties won't break, don't test them.
And once something is found to have broken (you find a bug), obviously it means you need to test it. Write a test to reproduce the bug, watch it fail, then fix the bug, then watch the test pass.
As I understand unit tests in the context of agile development, Mike, yes, you need to test the getters and setters (assuming they're publicly visible). The whole concept of unit testing is to test the software unit, which is a class in this case, as a black box. Since the getters and setters are externally visible you need to test them along with Authenticate and Save.
If the Authenticate and Save methods use the properties, then your tests will indirectly touch the properties. As long as the properties are just providing access to data, then explicit testing should not be necessary (unless you are going for 100% coverage).
I would test your getters and setters. Depending on who's writing the code, some people change the meaning of the getter/setter methods. I've seen variable initialization and other validation as part of getter methods. In order to test this sort of thing, you'd want unit tests covering that code explicitly.
Personally I would "test anything that can break", and simple getters (or, even better, auto properties) will not break. I have never had a simple return statement fail and therefore never have tests for them. If the getters have calculations within them or some other form of statements, I would certainly add tests for them.
Personally I use Moq as a mock object framework and then verify that my object calls the surrounding objects the way it should.
You have to cover the execution of every method of the class with unit tests and check the methods' return values. This includes getters and setters, especially in the case where the members (properties) are complex classes, which require large memory allocations during their initialization. Call the setter with some very large string, for example (or something with Greek symbols), and check that the result is correct (not truncated, encoding is good, etc.).
In the case of simple integers that also applies - what happens if you pass a long instead of an integer? That's the reason you write unit tests for :)
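A sketch of such a setter test (the class and property names are invented):

    [Test]
    public void Description_SetterPreservesLongUnicodeInput()
    {
        // 10,000 Greek characters: catches truncation and encoding loss.
        var input = new string('λ', 10000);
        var item = new Item();
        item.Description = input;

        Assert.AreEqual(input, item.Description);
    }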
I wouldn't test the actual setting of properties. I would be more concerned about how those properties get populated by the consumer, and what they populate them with. With any testing, you have to weigh the risks with the time/cost of testing.
You should test "every non-trivial block of code" using unit tests as far as possible.
If your properties are trivial and it's unlikely that someone will introduce a bug in them, then it should be safe not to unit test them.
Your Authenticate() and Save() methods look like good candidates for testing.
Ideally, you would have done your unit tests as you were writing the class. This is how you're meant to do it when using Test Driven Development. You add the tests as you implement each function point, making sure that you cover the edge-cases with test too.
Writing the tests afterwards is much more painful, but doable.
Here's what I'd do in your position:
Write a basic set of tests that test the core function.
Get NCover and run it on your tests. Your test coverage will probably be around 50% at this point.
Keep adding tests that cover your edge-cases until you get coverage of around 80%-90%
This should give you a nice working set of unit tests that will act as a good buffer against regressions.
The only problem with this approach is that code has to be designed to be testable in this fashion. If you made any coupling mistakes early on, you won't be able to get high coverage very easily.
This is why it is really important to write the tests before you write the code. It forces you to write code that is loosely coupled.
Don't test obviously working (boilerplate) code. So if your setters and getters are just "propertyvalue = value" and "return propertyvalue" it makes no sense to test it.
Even get / set can have odd consequences, depending upon how they have been implemented, so they should be treated as methods.
Each test of these will need to specify sets of parameters for the properties, defining both acceptable and unacceptable properties to ensure the calls return / fail in the expected manner.
You also need to be aware of security gotchas, such as SQL injection, and test for these.
So yes, you do need to worry about testing the properties.
I believe it's silly to test getters and setters when they only perform a simple operation. Personally I don't write complex unit tests to cover every usage pattern. I try to write enough tests to ensure I have handled the normal execution behavior and as many error cases as I can think of. I will write more unit tests in response to bug reports. I use unit tests to ensure the code meets the requirements and to make future modification easier. I feel a lot more willing to change code when I know that if I break something a test will fail.
I would write a test for anything that you are writing code for that is testable outside of the GUI.
Typically, any code I write that involves business logic I place inside another tier or business logic layer.
Then writing tests for anything that does something is easy to do.
First pass, write a unit test for each public method in your "Business Logic Layer".
If I had a class like this:
public class AccountService
{
public void DebitAccount(int accountNumber, double amount)
{
}
public void CreditAccount(int accountNumber, double amount)
{
}
public void CloseAccount(int accountNumber)
{
}
}
Knowing that I had these actions to perform, the first thing I would do before writing any code would be to start writing unit tests:
[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void DebitAccountTest()
    {
    }

    [Test]
    public void CreditAccountTest()
    {
    }

    [Test]
    public void CloseAccountTest()
    {
    }
}
Write your tests to validate the code you've written to do something. If you're iterating over a collection of things and changing something about each of them, write a test that does the same thing and assert that the change actually happened.
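For instance (all names are invented for the sketch):

    // using System.Collections.Generic;
    [Test]
    public void ApplyDiscount_ChangesEveryOrderInTheCollection()
    {
        var orders = new List<Order>
        {
            new Order { Total = 100m },
            new Order { Total = 200m }
        };

        new OrderService().ApplyDiscount(orders, 0.10m);

        // Assert that the change actually happened to each element.
        Assert.AreEqual(90m, orders[0].Total);
        Assert.AreEqual(180m, orders[1].Total);
    }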
There are a lot of other approaches you can take, namely Behaviour Driven Development (BDD); that's more involved and not a great place to start with your unit testing skills.
So, the moral of the story is: test anything that does anything you might be worried about, keep the unit tests testing specific things that are small in size, and remember that a lot of tests are good.
Keep your business logic outside of the User Interface layer so that you can easily write tests for them, and you'll be good.
I recommend TestDriven.Net or ReSharper as both easily integrate into Visual Studio.
I would recommend writing multiple tests for your Authenticate and Save methods. In addition to the success case (where all parameters are provided, everything is correctly spelled, etc), it's good to have tests for various failure cases (incorrect or missing parameters, unavailable database connections if applicable, etc). I recommend Pragmatic Unit Testing in C# with NUnit as a reference.
As others have stated, unit tests for getters and setters are overkill, unless there's conditional logic in your getters and setters.
Whilst it is possible to correctly guess where your code needs testing, I generally think you need metrics to back up this guess. Unit testing in my view goes hand in hand with code-coverage metrics.
Code with lots of tests but small coverage hasn't been well tested. That said, code with 100% coverage that doesn't test the boundary and error cases is also not great.
You want a balance between high coverage (90% minimum) and variable input data.
Remember to test for "garbage in"!
Also, a unit-test is not a unit-test unless it checks for a failure. Unit-tests that don't have asserts or are marked with known exceptions will simply test that the code doesn't die when run!
You need to design your tests so that they always report failures or unexpected/unwanted data!
It makes our code better... period!
One thing we software developers forget about when doing test-driven development is the purpose behind our actions. If a unit test is being written after the production code is already in place, the value of the test goes way down (but is not completely lost).
In the true spirit of unit testing, these tests are not primarily there to "test" more of our code, or to reach 90%-100% code coverage. Those are all fringe benefits of writing the tests first. The big payoff is that our production code ends up being written much better due to the natural process of TDD.
To help better communicate this idea, the following may be helpful reading:
The Flawed Theory of Unit Tests
Purposeful Software Development
If we feel that the act of writing more unit tests is what helps us gain a higher quality product, then we may be suffering from a Cargo Cult of Test Driven Development.
