Using C#, I need a class called User that has a username, password, active flag, first name, last name, full name, etc.
There should be methods to authenticate and save a user. Do I just write a test for the methods? And do I even need to worry about testing the properties, since they are .NET's getters and setters?
Many great responses to this are also on my question: "Beginning TDD - Challenges? Solutions? Recommendations?"
May I also recommend taking a look at my blog post (which was partly inspired by my question); I have had some good feedback on it. Namely:
I Don’t Know Where to Start?
Start afresh. Only think about writing tests when you are writing new code. This can be re-working of old code, or a completely new feature.
Start simple. Don't go running off and trying to get your head round a testing framework as well as being TDD-esque. Debug.Assert works fine. Use it as a starting point. It doesn't mess with your project or create dependencies.
Start positive. You are trying to improve your craft; feel good about it. I have seen plenty of developers out there that are happy to stagnate and not try new things to better themselves. You are doing the right thing; remember this and it will help stop you from giving up.
Start ready for a challenge. It is quite hard to start getting into testing. Expect a challenge, but remember: challenges can be overcome.
Only Test For What You Expect
I had real problems when I first started because I was constantly sat there trying to figure out every possible problem that could occur and then trying to test for it and fix it. This is a quick way to a headache. Testing should be a real YAGNI process. If you know there is a problem, then write a test for it. Otherwise, don't bother.
Only Test One Thing
Each test case should only ever test one thing. If you ever find yourself putting "and" in the test case name, you're doing something wrong.
I hope this means we can move on from "getters and setters" :)
Test your code, not the language.
A unit test like:
Integer i = new Integer(7);
assert i instanceof Integer;
is only useful if you are writing a compiler and there is a non-zero chance that your instanceof implementation is not working.
Don't test stuff that you can rely on the language to enforce. In your case, I'd focus on your authenticate and save methods - and I'd write tests that made sure they could handle null values in any or all of those fields gracefully.
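For instance, a minimal sketch of such a null-handling test (the Authenticate signature and the choice to return false rather than throw are assumptions, not given in the question):
[TestMethod]
public void Authenticate_WithNullUsername_ReturnsFalse()
{
    // Hypothetical API: assumes User.Authenticate(username, password)
    // is expected to fail gracefully (return false) on null input.
    var user = new User();
    bool result = user.Authenticate(null, "secret");
    Assert.IsFalse(result);
}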
This got me into unit testing and it made me very happy
We just started to do unit testing.
For a long time I knew it would be good to start doing it but I had no idea how to start and more importantly what to test.
Then we had to rewrite an important piece of code in our accounting program.
This part was very complex as it involved a lot of different scenarios.
The part I'm talking about is a method to pay sales and/or purchase invoices already entered into the accounting system.
I just didn't know how to start coding it, as there were so many different payment options.
An invoice could be $100 but the customer only transferred $99.
Maybe you have sent sales invoices to a customer but you have also purchased from that customer.
So you sold him for $300 but you bought for $100. You can expect your customer to pay you $200 to settle the balance.
And what if you sold for $500 but the customer pays you only $250?
So I had a very complex problem to solve, with many possibilities: one scenario would work perfectly but would be wrong for another type of invoice/payment combination.
This is where unit testing came to the rescue.
I started to write (inside the test code) a method to create a list of invoices, both for sales and purchases.
Then I wrote a second method to create the actual payment.
Normally a user would enter that information through a user interface.
Then I created the first TestMethod, testing a very simple payment of a single invoice without any payment discounts.
All the action in the system would happen when a bank payment was saved to the database.
As you can see I created an invoice, created a payment (a bank transaction) and saved the transaction to disk.
In my asserts I put what should be the correct numbers ending up in the Bank transaction and in the linked Invoice.
I check for the number of payments, the payment amounts, the discount amount and the balance of the invoice after the transaction.
After the test ran I would go to the database and double check if what I expected was there.
After I wrote the test, I started coding the payment method (part of the BankHeader class).
In the coding I only bothered with code to make the first test pass. I did not yet think about the other, more complex, scenarios.
I ran the first test, fixing a small bug, until my test passed.
Then I started to write the second test, this time working with a payment discount.
After I wrote the test I modified the payment method to support discounts.
While testing for correctness with a payment discount, I also tested the simple payment.
Both tests should pass of course.
Then I worked my way down to the more complex scenarios.
1) Think of a new scenario
2) Write a test for that scenario
3) Run that single test to see if it would pass
4) If it didn't, I'd debug and modify the code until it passed.
5) While modifying code I kept on running all tests
This is how I managed to create my very complex payment method.
Without unit testing I did not know how to start coding, the problem seemed overwhelming.
With testing I could start with a simple method and extend it step by step with the assurance that the simpler scenarios would still work.
I'm sure that using unit testing saved me a few days (or weeks) of coding and is more or less guaranteeing the correctness of my method.
If I later think of a new scenario, I can just add it to the tests to see if it is working or not.
If not I can modify the code but still be sure the other scenarios are still working correctly.
This will save days and days in the maintenance and bug fixing phase.
Yes, even tested code can still have bugs if a user does things you did not think of or did not prevent him from doing.
Below are just some of the tests I created to test my payment method.
public class TestPayments
{
    InvoiceDiaryHeader invoiceHeader = null;
    InvoiceDiaryDetail invoiceDetail = null;
    BankCashDiaryHeader bankHeader = null;
    BankCashDiaryDetail bankDetail = null;

    public InvoiceDiaryHeader CreateSales(string amountIncVat, bool sales, int invoiceNumber, string date)
    {
        ......
        ......
    }

    public BankCashDiaryHeader CreateMultiplePayments(IList<InvoiceDiaryHeader> invoices, int headerNumber, decimal amount, decimal discount)
    {
        ......
        ......
        ......
    }

    [TestMethod]
    public void TestSingleSalesPaymentNoDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 1, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 1, 119.00M, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(119M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(0M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSingleSalesPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 2, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 2, 118.00M, 1M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(1M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void TestDuplicateInvoiceNumber()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("100", true, 2, "01-09-2008"));
        list.Add(CreateSales("200", true, 2, "01-09-2008"));

        bankHeader = CreateMultiplePayments(list, 3, 300, 0);
        bankHeader.Save();

        Assert.Fail("expected an ApplicationException");
    }

    [TestMethod]
    public void TestMultipleSalesPaymentWithPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 11, "01-09-2008"));
        list.Add(CreateSales("400", true, 12, "02-09-2008"));
        list.Add(CreateSales("600", true, 13, "03-09-2008"));
        list.Add(CreateSales("25,40", true, 14, "04-09-2008"));

        bankHeader = CreateMultiplePayments(list, 5, 1144.00M, 0.40M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(4, bankHeader.BankCashDetails[0].Payments.Count);

        Assert.AreEqual(118.60M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(400, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(600, bankHeader.BankCashDetails[0].Payments[2].PaymentAmount);
        Assert.AreEqual(25.40M, bankHeader.BankCashDetails[0].Payments[3].PaymentAmount);

        Assert.AreEqual(0.40M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].PaymentDiscount);

        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSettlement()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("300", true, 43, "01-09-2008"));    //Sales
        list.Add(CreateSales("100", false, 6453, "02-09-2008")); //Purchase

        bankHeader = CreateMultiplePayments(list, 22, 200, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(2, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(300, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(-100, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
    }
}
If they really are trivial, then don't bother testing. E.g., if they are implemented like this:
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
If, on the other hand, you are doing something clever, (like encrypting and decrypting the password in the getter/setter) then give it a test.
The rule is that you have to test every piece of logic you write. If you implemented some specific functionality in the getters and setters I think they are worth testing. If they only assign values to some private fields, don't bother.
This question seems to be a question of where does one draw the line on what methods get tested and which don't.
The setters and getters for value assignment have been created with consistency and future growth in mind, and foreseeing that some time down the road the setter/getter may evolve into more complex operations. It would make sense to put unit tests of those methods in place, also for the sake of consistency and future growth.
Code reliability, especially while undergoing change to add functionality, is the primary goal. I am not aware of anyone ever getting fired for including setters/getters in the testing methodology, but I am certain there exist people who wish they had tested methods which, as far as they knew or could recall, were simple set/get wrappers, when that was no longer the case.
Maybe another member of the team expanded the set/get methods to include logic that now needs testing, but didn't then create the tests. Now your code is calling these methods, you aren't aware they changed and need in-depth testing, the testing you do in development and QA doesn't trigger the defect, but real business data on the first day of release does.
The two teammates will now debate over who dropped the ball and failed to put in unit tests when the set/gets morphed to include logic that can fail but isn't covered by a unit test. The teammate who originally wrote the set/gets will have an easier time coming out of this clean if the tests were implemented from day one on the simple set/gets.
My opinion is that a few minutes of "wasted" time covering ALL methods with unit tests, even trivial ones, might save days of headache down the road, and spare the business lost money and reputation, and someone their job.
And the fact that you did wrap trivial methods with unit tests might be noticed by that junior teammate when they change the trivial methods into non-trivial ones, prompting them to update the test; now nobody is in trouble, because the defect was contained before reaching production.
The way we code, and the discipline that can be seen from our code, can help others.
Another canonical answer. This, I believe, from Ron Jeffries:
Only test the code that you want to work.
Testing boilerplate code is a waste of time, but as Slavo says, if you add a side effect to your getters/setters, then you should write a test to accompany that functionality.
If you're doing test-driven development, you should write the contract (e.g. an interface) first, then write the test(s) to exercise that interface, documenting the expected results/behaviour. Then write your methods themselves, without touching the code in your unit tests. Finally, grab a code coverage tool and make sure your tests exercise all the logic paths in your code.
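As a sketch of that contract-first flow (the IUserAccount interface and the test factory are hypothetical, just to illustrate the order of steps):
// 1. Contract first
public interface IUserAccount
{
    bool Authenticate(string username, string password);
    void Save();
}

// 2. Then a test that exercises the contract and documents expected behaviour
[TestMethod]
public void Authenticate_WithWrongPassword_ReturnsFalse()
{
    IUserAccount user = CreateUser("mike", "right-password"); // hypothetical test factory
    Assert.IsFalse(user.Authenticate("mike", "wrong-password"));
}

// 3. Only then write the implementing class, without touching the tests.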
Really trivial code, like getters and setters that have no extra behaviour beyond setting a private field, is overkill to test. C# 3.0 even has syntactic sugar (auto-implemented properties) where the compiler takes care of the private field so you don't have to write it.
I usually write lots of very simple tests verifying behaviour I expect from my classes. Even if it's simple stuff like adding two numbers. I switch a lot between writing a simple test and writing some lines of code. The reason for this is that I then can change around code without being afraid I broke things I didn't think about.
You should test everything. Right now you have getters and setters, but one day you might change them somewhat, maybe to do validation or something else. The tests you write today will be used tomorrow to make sure everything keeps on working as usual.
When you write test, you should forget considerations like "right now it's trivial". In an agile or test-driven context you should test assuming future refactoring.
Also, did you try putting in really weird values like extremely long strings, or other "bad" content? Well you should... never assume how badly your code can be abused in the future.
Generally I find that writing extensive user tests is, on one hand, exhausting. On the other hand, it always gives you invaluable insight into how your application should work and helps you throw away easy (and false) assumptions (like: the user name will always be less than 1000 characters in length).
For simple modules that may end up in a toolkit, or in an open source type of project, you should test as much as possible, including the trivial getters and setters. The thing you want to keep in mind is that generating a unit test as you write a particular module is fairly simple and straightforward. Adding getters and setters is minimal code and can be handled without much thought. However, once your code is placed in a larger system, this extra effort can protect you against changes in the underlying system, such as type changes in a base class. Testing everything is the best way to have a regression suite that is complete.
It doesn't hurt to write unit tests for your getters and setters. Right now, they may just be doing field get/sets under the hood, but in the future you may have validation logic, or inter-property dependencies, that need to be tested. It's easier to write it now, while you're thinking about it, than to remember to retrofit it if that time ever comes.
In general, when a method is only defined for certain values, test for values on and over the border of what is acceptable. In other words, make sure your method does what it's supposed to do, but nothing more. This is important because when you're going to fail, you want to fail early.
In inheritance hierarchies, make sure to test for LSP (Liskov Substitution Principle) compliance.
Testing default getters and setters doesn't seem very useful to me, unless you're planning to do some validation later on.
Well, if you think it can break, write a test for it. I usually don't test setters/getters, but let's say you make one for User.Name which concatenates first and last name; I would write a test so that if someone changes the order of last and first name, at least he would know he changed something that was tested.
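A sketch of that test (the FullName property and its ordering are assumptions based on the question's field list):
[TestMethod]
public void FullName_Is_FirstName_Then_LastName()
{
    // Hypothetical: assumes FullName concatenates FirstName + " " + LastName
    var user = new User { FirstName = "Ada", LastName = "Lovelace" };
    Assert.AreEqual("Ada Lovelace", user.FullName);
}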
The canonical answer is "test anything that can possibly break." If you are sure the properties won't break, don't test them.
And once something is found to have broken (you find a bug), obviously it means you need to test it. Write a test to reproduce the bug, watch it fail, then fix the bug, then watch the test pass.
As I understand unit tests in the context of agile development, Mike, yes, you need to test the getters and setters (assuming they're publicly visible). The whole concept of unit testing is to test the software unit, which is a class in this case, as a black box. Since the getters and setters are externally visible you need to test them along with Authenticate and Save.
If the Authenticate and Save methods use the properties, then your tests will indirectly touch the properties. As long as the properties are just providing access to data, then explicit testing should not be necessary (unless you are going for 100% coverage).
I would test your getters and setters. Depending on who's writing the code, some people change the meaning of the getter/setter methods. I've seen variable initialization and other validation as part of getter methods. In order to test this sort of thing, you'd want unit tests covering that code explicitly.
Personally I would "test anything that can break", and a simple getter (or even better, an auto property) will not break. I have never had a simple return statement fail and therefore never test them. If the getters have calculations within them or some other form of statements, I would certainly add tests for them.
Personally I use Moq as a mock object framework and then verify that my object calls the surrounding objects the way it should.
You should cover the execution of every method of the class with unit tests and check the method's return value. This includes getters and setters, especially in cases where the members (properties) are complex classes, which require large memory allocation during their initialization. Call the setter with some very large string, for example (or something with Greek symbols), and check that the result is correct (not truncated, encoding is good, etc.).
In the case of simple integers that also applies: what happens if you pass a long instead of an int? That's the reason you write unit tests :)
I wouldn't test the actual setting of properties. I would be more concerned about how those properties get populated by the consumer, and what they populate them with. With any testing, you have to weigh the risks with the time/cost of testing.
You should test "every non-trivial block of code" using unit tests as far as possible.
If your properties are trivial and it's unlikely that someone will introduce a bug in them, then it should be safe to not unit test them.
Your Authenticate() and Save() methods look like good candidates for testing.
Ideally, you would have written your unit tests as you were writing the class. This is how you're meant to do it when using test-driven development. You add the tests as you implement each function point, making sure that you cover the edge cases with tests too.
Writing the tests afterwards is much more painful, but doable.
Here's what I'd do in your position:
Write a basic set of tests that test the core function.
Get NCover and run it on your tests. Your test coverage will probably be around 50% at this point.
Keep adding tests that cover your edge-cases until you get coverage of around 80%-90%
This should give you a nice working set of unit tests that will act as a good buffer against regressions.
The only problem with this approach is that code has to be designed to be testable in this fashion. If you made any coupling mistakes early on, you won't be able to get high coverage very easily.
This is why it is really important to write the tests before you write the code. It forces you to write code that is loosely coupled.
Don't test obviously working (boilerplate) code. So if your setters and getters are just "propertyvalue = value" and "return propertyvalue", it makes no sense to test them.
Even get/set can have odd consequences, depending upon how they have been implemented, so they should be treated as methods.
Each test of these will need to specify sets of parameters for the properties, defining both acceptable and unacceptable values, to ensure the calls return/fail in the expected manner.
You also need to be aware of security gotchas, as an example SQL injection, and test for these.
So yes, you do need to worry about testing the properties.
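For example, a sketch of one such property test, reusing the [ExpectedException] style from earlier (the validating setter is an assumption; a plain auto-property would not behave this way):
[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void Username_Setter_Rejects_SqlInjection_Payload()
{
    // Hypothetical: assumes the setter validates input and throws ArgumentException
    var user = new User();
    user.Username = "'; DROP TABLE Users; --";
}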
I believe it's silly to test getters and setters when they only perform a simple operation. Personally, I don't write complex unit tests to cover every usage pattern. I try to write enough tests to ensure I have handled the normal execution behavior and as many error cases as I can think of. I will write more unit tests in response to bug reports. I use unit tests to ensure the code meets the requirements and to make future modification easier. I feel a lot more willing to change code when I know that if I break something, a test will fail.
I would write a test for anything that you are writing code for that is testable outside of the GUI interface.
Typically, I place any logic I write that involves business rules inside another tier or business logic layer.
Then writing tests for anything that does something is easy to do.
First pass, write a unit test for each public method in your "Business Logic Layer".
If I had a class like this:
public class AccountService
{
    public void DebitAccount(int accountNumber, double amount)
    {
    }

    public void CreditAccount(int accountNumber, double amount)
    {
    }

    public void CloseAccount(int accountNumber)
    {
    }
}
The first thing I would do before I wrote any code knowing that I had these actions to perform would be to start writing unit tests.
[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void DebitAccountTest()
    {
    }

    [Test]
    public void CreditAccountTest()
    {
    }

    [Test]
    public void CloseAccountTest()
    {
    }
}
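Each skeleton then gets an arrange/act/assert body. A sketch of the first one (the GetBalance method is an assumption for illustration; the real service would expose some way to observe the result):
[Test]
public void DebitAccountTest()
{
    var service = new AccountService();
    service.CreditAccount(1001, 100.00); // arrange: hypothetical starting balance
    service.DebitAccount(1001, 25.00);   // act
    Assert.AreEqual(75.00, service.GetBalance(1001)); // assert via hypothetical GetBalance
}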
Write your tests to validate the code you've written to do something. If you're iterating over a collection of things, and changing something about each of them, write a test that does the same thing and assert that it actually happened.
There are a lot of other approaches you can take, namely Behavior Driven Development (BDD); that's more involved and not a great place to start with your unit testing skills.
So, the moral of the story is: test anything that does anything you might be worried about, and keep the unit tests testing specific things that are small in size; a lot of tests are good.
Keep your business logic outside of the User Interface layer so that you can easily write tests for them, and you'll be good.
I recommend TestDriven.Net or ReSharper as both easily integrate into Visual Studio.
I would recommend writing multiple tests for your Authenticate and Save methods. In addition to the success case (where all parameters are provided, everything is correctly spelled, etc), it's good to have tests for various failure cases (incorrect or missing parameters, unavailable database connections if applicable, etc). I recommend Pragmatic Unit Testing in C# with NUnit as a reference.
As others have stated, unit tests for getters and setters are overkill, unless there's conditional logic in your getters and setters.
Whilst it is possible to correctly guess where your code needs testing, I generally think you need metrics to back up this guess. Unit testing in my view goes hand in hand with code-coverage metrics.
Code with lots of tests but small coverage hasn't been well tested. That said, code with 100% coverage that doesn't test the boundary and error cases is also not great.
You want a balance between high coverage (90% minimum) and variable input data.
Remember to test for "garbage in"!
Also, a unit-test is not a unit-test unless it checks for a failure. Unit-tests that don't have asserts or are marked with known exceptions will simply test that the code doesn't die when run!
You need to design your tests so that they always report failures or unexpected/unwanted data!
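To illustrate the difference (the Account class here is invented for the example):
[TestMethod]
public void Withdraw_Runs()
{
    // Passes as long as nothing throws; it only proves the code "doesn't die".
    new Account(100m).Withdraw(25m);
}

[TestMethod]
public void Withdraw_Reduces_Balance()
{
    // Actually reports a failure if the data comes out wrong.
    var account = new Account(100m);
    account.Withdraw(25m);
    Assert.AreEqual(75m, account.Balance);
}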
It makes our code better... period!
One thing us software developers forget about when doing test driven development is the purpose behind our actions. If a unit test is being written after the production code is already in place, the value of the test goes way down (but is not completely lost).
In the true spirit of unit testing, these tests are not primarily there to "test" more of our code, or to get 90%-100% better code coverage. These are all fringe benefits of writing the tests first. The big payoff is that our production code ends up being written much better due to the natural process of TDD.
To help better communicate this idea, the following may be helpful in reading:
The Flawed Theory of Unit Tests
Purposeful Software Development
If we feel that the act of writing more unit tests is what helps us gain a higher quality product, then we may be suffering from a Cargo Cult of Test Driven Development.
Related
I have 2 projects, one is the "Main" project and the other is the "Test" project.
The policy is that all methods in the Main project must have at least one accompanying test in the test project.
What I want is a new unit test in the Test project that verifies this remains the case. If there are any methods that do not have corresponding tests (and this includes method overloads) then I want this test to fail.
I can sort out appropriate messaging when the test fails.
My guess is that I can get every method (using reflection??), but I'm not sure how to then verify that there is a reference to each method in this Test project (ignoring references from other projects).
You can use any existing software to measure code coverage but...
Don't do it!!! Seriously. The aim should be not to have 100% coverage but to have software that can easily evolve. From your test project you can invoke by reflection every single existing method and swallow all the exceptions. That will make your coverage around 100% but what good would it be?
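To see why that kind of coverage is worthless, this is roughly all it takes (a sketch; SomeMainClass stands in for any type in your Main project):
// Invokes every parameterless method it can and swallows every failure.
// Coverage approaches 100%; the value of the "tests" stays at zero.
foreach (var type in typeof(SomeMainClass).Assembly.GetTypes())
{
    foreach (var method in type.GetMethods())
    {
        try
        {
            method.Invoke(Activator.CreateInstance(type), null);
        }
        catch (Exception)
        {
            // swallow: the coverage counter still goes up
        }
    }
}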
Read about TDD. Start creating testable software that has meaningful tests that will save you when something goes wrong. It's not about coverage, it's about being safe.
This is an example of a meta-test that sounds good in principle, but once you get to the detail you should quickly realise is a bad idea. As has already been suggested, the correct approach is to encourage whoever owns the policy to amend it. As you’ve quoted the policy, it is sufficiently specific that people can satisfy the requirement without really achieving anything of value.
Consider:
public void TestMethod1Exists()
{
    try
    {
        var classUnderTest = new ClassToTest();
        classUnderTest.Method1();
    }
    catch (Exception)
    {
    }
}
The test contains a call to Method1 on the ClassToTest, so the requirement of having a test for that method is satisfied, but nothing useful is being tested. As long as the method exists (which it must, if the code compiled) the test will pass.
The intent of the policy is presumably to try to ensure that written code is being tested. Looking at some very basic code:
public string IsSet(bool flag)
{
    if (flag)
    {
        return "YES";
    }
    return "NO";
}
As methods go, this is pretty simple (it could easily be changed to one line), but even so it contains two routes through the method. Having a test to ensure that this method is being called gives you a false sense of security. You would know it is being called but you would not know if all of the code paths are being tested.
An alternative that has been suggested is that you could just use code coverage tools. These can be useful and give a much better idea as to how well exercised your code is, but again they only give an indication of the coverage, not the quality of that coverage. So, let’s say I had some tests for the IsSet method above:
public void TestWhenTrue()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(true), typeof(string));
}

public void TestWhenFalse()
{
    var classUnderTest = new ClassToTest();
    Assert.IsInstanceOfType(classUnderTest.IsSet(false), typeof(string));
}
I’m passing sufficient parameters to exercise both code paths, so the coverage for the IsSet method should be 100%. But all I am testing is that the method returns a string. I’m not testing the value of the string, so the tests themselves don’t really add much (if any) value.
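Asserting on the actual values is what would make those tests carry their weight:
public void TestWhenTrue()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("YES", classUnderTest.IsSet(true));
}

public void TestWhenFalse()
{
    var classUnderTest = new ClassToTest();
    Assert.AreEqual("NO", classUnderTest.IsSet(false));
}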
Code coverage is a useful metric, but only as part of a larger picture of code quality. Having peer reviews and sharing best practice around how to effectively test the code you are writing within your team, whilst less concretely measurable, will have a more significant impact on the quality of your test code.
I am new to unit testing and have read several times that we should write the unit test first and then the actual code. As of now, I am writing my methods first and then unit testing the code.
If you write the tests first... You tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type development and keeps you focused on solving the problem, not working on meta-problems.
If you write the code first... You will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Sounds good to me. However, how do I write unit tests even before having my code in place?
Am I taking the advice too literally? Does it mean that I should have my POCO classes and interfaces in place and then write unit tests?
Can anyone explain to me how this is done, with a simple example of, say, adding two numbers?
It's simple really. Red, Green, Refactor.
Red means - your code is completely broken. The syntax highlighting shows red and the test doesn't pass. Why? You haven't written any code yet.
Green means - Your application builds and the test passes. You've added the required code.
Refactor means - clean it up and make sure the test passes.
You can start by writing a test somewhat like this:
[TestMethod]
public void Can_Create_MathClass()
{
    var math = new MathClass();
    Assert.IsNotNull(math);
}
This will fail (RED). How do you fix it? Create the class.
public class MathClass
{
}
That's it. It now passes (GREEN). Next test:
[TestMethod]
public void Can_Add_Two_Numbers()
{
    var math = new MathClass();
    var result = math.Add(1, 2);
    Assert.AreEqual(3, result);
}
This also fails (RED). Create the Add method:
public class MathClass
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
Run the test. This will pass (GREEN).
Refactoring is a matter of cleaning up the code. It also means you can remove redundant tests. We know we have the MathClass now.. so you can completely remove the Can_Create_MathClass test. Once that is done.. you've passed REFACTOR, and can continue on.
It is important to remember that the Refactor step doesn't just mean your normal code. It also means tests. You cannot let your tests deteriorate over time. You must include them in the Refactor step.
When you create your tests first, before the code, you will find it much easier and faster to create your code. The combined time it takes to create a unit test and create some code to make it pass is about the same as just coding it up straight away. But, if you already have the unit tests you don't need to create them after the code saving you some time now and lots later.
Creating a unit test helps a developer to really consider what needs to be done. Requirements are nailed down firmly by tests. There can be no misunderstanding a specification written in the form of executable code.
The code you will create is simple and concise, implementing only the features you wanted. Other developers can see how to use this new code by browsing the tests. Input whose results are undefined will be conspicuously absent from the test suite
There is also a benefit to system design. It is often very difficult to unit test some software systems. These systems are typically built code first and testing second, often by a different team entirely. By creating tests first your design will be influenced by a desire to test everything of value to your customer. Your design will reflect this by being easier to test.
Let's take a slightly more advanced example: You want to write a method that returns the largest number from a sequence.
Firstly, write one or more unit tests for the method to be tested:
int[] testSeq1 = {1, 4, 8, 120, 34, 56, -1, 3, -13};
Assert.That(MaxOf(testSeq1) == 120);
And repeat for some more sequences. Also include a null parameter, a sequence with one element and an empty sequence and decide if an empty sequence or null parameter should throw an exception (and ensure that the unit test expects an exception for an empty sequence if that's the case).
It is during this process that you need to decide the name of the method and the type of its parameters.
At this point, it won't compile.
Then write a stub for the method:
public int MaxOf(IEnumerable<int> sequence)
{
    return 0;
}
At this point it compiles, but the unit tests fail.
Then implement MaxOf() so that those unit tests now pass.
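A minimal implementation that would satisfy those tests might look like this (assuming the earlier decision was to throw on a null or empty sequence):
public int MaxOf(IEnumerable<int> sequence)
{
    if (sequence == null)
        throw new ArgumentNullException("sequence");

    using (var e = sequence.GetEnumerator())
    {
        if (!e.MoveNext())
            throw new ArgumentException("sequence is empty", "sequence");

        int max = e.Current;
        while (e.MoveNext())
        {
            if (e.Current > max)
                max = e.Current;
        }
        return max;
    }
}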
Doing it this way around ensures that you immediately focus on the usability of the method, since the very first thing you try to do is to use it - before even beginning to write it. You might well decide to change the method's declaration slightly at this point, based on the usage pattern.
A real world example would apply this approach to using an entire class rather than just one method. For the sake of brevity I have omitted the class from the example above.
It is possible to write the unit tests before you write any code; Visual Studio even has features to generate method stubs from the code you've written in your unit test. Doing it this way around can also help you understand the methods that the object will need to support; sometimes this can aid later enhancements (if you had a save-to-disk method that you also overloaded to save to a Stream, this is more testable and aids spooling over the network if required later on).
I am abstracting the history tracking portion of a class of mine so that it looks like this:
private readonly Stack<MyObject> _pastHistory = new Stack<MyObject>();

internal virtual Boolean IsAnyHistory { get { return _pastHistory.Any(); } }

internal virtual void AddObjectToHistory(MyObject myObject)
{
    if (myObject == null) throw new ArgumentNullException("myObject");
    _pastHistory.Push(myObject);
}

internal virtual MyObject RemoveLastObject()
{
    if (!IsAnyHistory) throw new InvalidOperationException("There is no previous history.");
    return _pastHistory.Pop();
}
My problem is that I would like to unit test that Remove will return the last Added object.
AddObjectToHistory
RemoveObjectToHistory -> returns what was put in via AddObjectToHistory
However, it isn't really a unit test if I have to call Add first, is it? The only way that I can see to do this in a true unit test way is to pass the Stack object in via the constructor, OR mock out IsAnyHistory... but mocking my SUT is odd also. So, my question is: from a dogmatic view, is this a unit test? If not, how do I clean it up? Is constructor injection my only way? It just seems like a stretch to have to pass in such a simple object. Is it OK to push even this simple object out to be injected?
There are two approaches to those scenarios:
Interfere into design, like making _pastHistory internal/protected or injecting stack
Use other (possibly unit tested) methods to perform verification
As always, there is no golden rule, although I'd say you generally should avoid situations where unit tests force design changes (as those changes will most likely introduce ambiguity/unnecessary questions to code consumers).
Nonetheless, in the end it is you who has to weigh how much you want unit test code interfere into design (first case) or bend the perfect unit test definition (second case).
Usually, I find second case much more appealing - it doesn't clutter original class code and you'll most likely have Add already tested - it's safe to rely on it.
I think it's still a unit test, assuming MyObject is a simple object. I often construct input parameters to unit test methods.
I use Michael Feathers' unit test criteria:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
My 2 cents... how would the client know whether remove worked or not? How is a "client" supposed to interact with this object? Are clients going to push a stack into the history tracker? Treat the test as just another user/consumer/client of the test subject, using exactly the same interactions as in real production.
I haven't heard of any rule stating that you're not allowed to call multiple methods on the object under test.
To simulate "stack is not empty", I'd just call Add first (the 99% case). I'd refrain from destroying the encapsulation of that object. Treat objects like people (I think I read that in Object Thinking): tell them to do stuff; don't break in and enter.
E.g. if you want someone to have some money in their wallet, you can either:
give them the money and let them internally put it into their wallet (the simple way), or
throw their wallet away and stuff a pre-filled wallet into their pocket.
I like Option 1. Also see how it frees you from implementation details (which induce brittleness in tests). Let's say tomorrow the person decides to use an online wallet: the latter approach will break your tests, since they will need to be updated to push in an online wallet, even though the object's behavior is not broken.
Another example I've seen is for testing Repository.GetX(), where people break into the DB to inject records with SQL right in the unit test, when it would have been considerably cleaner and easier to call Repository.AddX(x) first. Isolation is desired, but not to the extent that it overrides pragmatism.
I hope I didn't come on too strong here.. it just pains me to see object APIs being 'contorted for testability' to the point where it no longer resembles the 'simplest thing that could work'.
I think you're trying to be a little overly specific with your definition of a unit test. You should be testing the public behavior of your class, not the minute implementation details.
From your code snippet, it looks like all you really need to care about is whether a) calling AddObjectToHistory causes IsAnyHistory to return true and b) RemoveLastObject eventually causes IsAnyHistory to return false.
As stated in the other answers I think your options can be broken down like so.
You take a dogmatic approach to your testing methodology and add constructor injection for the stack object so you can inject your own fake stack object and test your methods.
You write a separate test for add and remove; the remove test will use the add method, but consider it part of the test setup. As long as your add test passes, your remove test can be trusted too.
I have a FileExtractor class with a Start method which performs some steps.
I've created a test class called "WhenExtractingInvalidFile.cs" within my folder called "FileExtractorTests" and added some Test Methods inside it as below which should be verified as steps of the Start() method:
[TestMethod]
public void Should_remove_original_file()
{
}

[TestMethod]
public void Should_add_original_file_to_errorStorage()
{
}

[TestMethod]
public void Should_log_error_locally()
{
}
This way, it'd nicely organize the behaviors and the expectations that should be met.
The problem is that most of the logic of these test methods are the same so should I be creating one test method that verifies all the steps or separately like above?
[TestMethod]
public void Should_remove_original_file_then_add_original_file_to_errorStorage_then_log_error_locally()
{
}
What's the best practice?
While it's commonly accepted that the Act section of tests should only contain one call, there's still a lot of debate over the "One Assert per Test" practice.
I tend to adhere to it because :
when a test fails, I immediately know (from the test name) which of the multiple things we want to verify on the method under test went wrong.
tests that imply mocking are already harder to read than regular tests, they can easily get arcane when you're asserting against multiple mocks in the same test.
If you don't follow it, I'd at least recommend that you include meaningful Assert messages in order to minimize head scratching when a test fails.
I would do the same thing you did, but with an _on_start postfix, like this: Should_remove_original_file_on_start. The latter (single combined) method will only ever give you one assert failure at a time, even though several aspects of Start could be broken.
DRY - don't repeat yourself.
Think about the scenario where your test fails. What would be most useful from a test organization point?
The second dimension to look at is maintenance. Do you want to run and maintain 1 test or n tests? Don't overburden development with writing many tests that have little value (I think TDD is a bit overrated for this reason). A test is more valuable if it exercises a larger path of the code rather than a short one.
In this particular scenario I would create a single test. If this test fails often and you are not reaching the root cause of the problem fast enough, re-factor into multiple tests.
My company is on a Unit Testing kick, and I'm having a little trouble with refactoring Service Layer code. Here is an example of some code I wrote:
public class InvoiceCalculator : IInvoiceCalculator
{
    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            UpdateLine(il);
        }
        //do a ton of other stuff here
    }

    private void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        //do a bunch of other stuff, including calls to other private methods
    }
}
In this simplified case (it is reduced from a 1,000 line class that has 1 public method and ~30 private ones), my boss says I should be able to test my CalculateInvoice and UpdateLine separately (UpdateLine actually calls 3 other private methods, and performs database calls as well). But how would I do this? His suggested refactoring seemed a little convoluted to me:
//Tiny part of original code
public class InvoiceCalculator : IInvoiceCalculator
{
    public ILineUpdater _lineUpdater;

    public InvoiceCalculator(ILineUpdater lineUpdater)
    {
        _lineUpdater = lineUpdater;
    }

    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            _lineUpdater.UpdateLine(il);
        }
        //do a ton of other stuff here
    }
}

public class LineUpdater : ILineUpdater
{
    public void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        //do a bunch of other stuff
    }
}
I can see how the dependency is now broken, and I can test both pieces, but this would also create 20-30 extra classes from my original class. We only calculate invoices in one place, so these pieces wouldn't really be reusable. Is this the right way to go about making this change, or would you suggest I do something different?
Thank you!
Jess
This is an example of Feature Envy:
line.Amount = line.Qty * line.Rate;
It should probably look more like:
var amount = line.CalculateAmount();
There isn't anything wrong with lots of little classes; it's not about re-usability as much as it's about adaptability. When you have many single-responsibility classes, it's easier to see the behavior of your system and change it when your requirements change. Big classes have intertwined responsibilities which make them very difficult to change.
IMO this all depends on how "significant" that UpdateLine() method really is. If it's just an implementation detail (e.g. it could easily be inlined inside the CalculateInvoice() method and the only thing that would hurt is readability), then you probably don't need to unit test it separately from the master class.
On the other hand, if UpdateLine() method has some value to the business logic, if you can imagine situation when you would need to change this method independently from the rest of the class (and therefore test it separately), then you should go on with refactoring it to a separate LineUpdater class.
You probably won't end up with 20-30 classes this way, because most of those private methods are really just implementation details and do not deserve to be tested separately.
Well, your boss's way is more correct in terms of unit testing:
He is now able to test CalculateInvoice() without testing the UpdateLine() function. He can pass a mock object instead of a real LineUpdater object and test only CalculateInvoice(), not the whole bunch of code.
Is it right? It depends. Your boss wants to write real unit tests. Testing in your first example would not be unit testing; it would be integration testing.
What are the advantages of unit tests over integration tests?
1) Unit tests allow you to test only one method or property, without it being affected by other methods, the database and so on.
2) Unit tests execute faster (for example, you said UpdateLine uses the database), because they don't test all the nested methods. Nested methods can make database calls, so if you have thousands of tests, your test run can be slow (several minutes).
3) If your methods make database calls, then sometimes you need to set up the database (fill it with the data necessary for the test) and that may not be easy; maybe you will have to write a couple of pages of code just to prepare the database for a test. With unit tests, you separate database calls from the methods being tested (using mock objects).
But! I am not saying that unit tests are better. They are just different. As I said, unit tests allow you to test a unit in isolation and quickly. Integration tests are easier and allow you to test the results of the joint work of different methods and layers. Honestly, I prefer integration tests more :)
Also, I have a couple of suggestions for you:
1) I don't think having the Amount field is a good idea. The Amount field seems redundant, because its value can be calculated from the two other public fields. If you want it anyway, I would make it a read-only property which returns Qty * Rate (see the sketch just after these suggestions).
2) Usually, having a class which consists of 1,000 lines may mean that it's badly designed and should be refactored.
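For example, the read-only property from point 1 might look like this (member types assumed to be decimal):
public class InvoiceLine
{
    public decimal Qty { get; set; }
    public decimal Rate { get; set; }

    // Derived from the two other public members, so it can never
    // fall out of sync with them.
    public decimal Amount
    {
        get { return Qty * Rate; }
    }
}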
Now, I hope you better understand the situation and can decide. Also, if you understand the situation you can talk to your boss and you can decide together.
Yeah, nice one. I'm not sure whether the InvoiceLine object also has some logic included; otherwise you would probably need an IInvoiceLine as well.
I sometimes have the same questions. On one hand you want to do things right and unit test your code, but when database calls and maybe file writing are involved, it causes a lot of extra work to set up the first test: all the test objects which step in when file writing and database I/O is about to happen, interfaces, asserts; and you also want to test that the data layer doesn't contain any errors. So a test which is more "process" than "unit" is often easier to build.
If you have a project that will be changed a lot (in the future) and lots of dependencies on this code (maybe other programs read the file or the database data), it can be nice to have a solid unit test for all parts of your code, and the investment of time is worthwhile.
But if the project is, like my latest client says, "let's get it live and maybe we'll tweak a bit next year and next year there will be something new", then I wouldn't be so hard on myself to get all unit tests up and running.
Michel
Your boss' example looks reasonable to me.
Some of the key considerations I try to keep in mind when designing for any scenario are:
Single Responsibility Principle
A class should only change for one reason.
Does each new class justify its existence?
Have classes been created just for the sake of it, or do they encapsulate a meaningful portion of logic?
Are you able to test each piece of code in isolation?
In your scenario, just looking at the names, it would appear that you were wandering away from Single Responsibility: you have an IInvoiceCalculator, yet this class is then also responsible for updating InvoiceLines. Not only have you made it very difficult to test the update behaviour, but you now need to change your InvoiceCalculator class both when calculation business rules change and when the rules around updating change.
Then there is the question about the updating logic: does the logic justify a separate class? That really depends, and it is hard to say without seeing the code, but certainly the fact that your boss wants that logic under test would suggest that it is more than a simple one-liner call off to a data layer.
You say that this refactoring creates a great number of extra classes (I take it you mean across all your business entities, since I only see a couple of new classes and their interfaces in your example), but you have to consider what you get from this. It looks like you gain full testability of your code, the ability to introduce new calculation and new update logic in isolation, and a clearer encapsulation of what are separate pieces of business logic.
The gains above are of course subject to a cost-benefit analysis, but since your boss is asking for them, it sounds like he is happy that they will pay off against the extra work to implement the code this way.
The final, third point about testing in isolation is also a key benefit of the way your boss has designed this: the closer your public methods are to the code that does the actual work, the easier it is to inject stubs or mocks for parts of your system that are not under test. For example, if you are testing an update method that calls off to a data layer, you do not want to test the data layer, so you would usually inject a mock. If you need to pass that mocked data layer through all your calculator logic first, your test setup is going to be far more complicated, since the mock now needs to meet many other potential requirements not related to the actual test in question.
While this approach is extra work initially, I'd say that the majority of the work is the time to think through the design; after that, and after you get up to speed on the more injection-based style of code, the raw implementation time of software that is structured this way is actually comparable.
Your boss's approach is a great example of dependency injection and how doing so allows you to use a mock ILineUpdater to conduct your tests efficiently.