Is NUnit Setup attribute a code smell? - c#

I have written a set of NUnit classes which have Setup and TearDown attributes. Then I read this: http://jamesnewkirk.typepad.com/posts/2007/09/why-you-should-.html. I understand the author's point about having to scroll up and down when reading the unit tests. However, I also see the benefit of Setup and TearDown. For example, in a recent test class I did this:
private Product _product1;
private Product _product2;
private IList<Product> _products;

[SetUp]
public void Setup()
{
    _product1 = new Product();
    _product2 = new Product();
    _products = new List<Product>();
    _products.Add(_product1);
    _products.Add(_product2);
}
Here _product1, _product2 and _products are used by every test. Therefore it seems to violate DRY to set them up in every method. Should they be put in every test method instead?

This question is very subjective, but I don't believe it is a code smell. The example in the linked blog post was very arbitrary and did not use the variables setup in every test. If you look at your code and you are not using the variables from the SetUp, then yes, it is probably a code smell.
If your tests are grouped well however, then each test fixture will be testing a group of functionality that often needs the same setup. In this case, the DRY principle wins out in my books. James argued in his post that he needed to look in three methods to see the state of the data, but I would counter that too much setup code in a test method obscures the purpose of the test.
Copying setup code like you have in your example also makes your tests harder to maintain. If your Product class changes in the future and requires additional construction, then you will need to change it in every test, not in one place. Adding that much setup code to each test method would also make your test class very long and hard to scan.
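As a rough illustration (reusing the fields from the question's SetUp, with a made-up test name), a test body then stays focused on its assertions:

[Test]
public void Products_ContainsBothProducts()
{
    // _product1, _product2 and _products come from the SetUp method above.
    Assert.That(_products, Has.Count.EqualTo(2));
    Assert.That(_products, Has.Member(_product1));
    Assert.That(_products, Has.Member(_product2));
}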

Related

How to use OneTimeSetUp?

NUnit has a OneTimeSetUp attribute. I am trying to think of scenarios where I could use it. However, I cannot find any GitHub examples online (even my link does not have an example).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        //Use Product here
    }

    [Test]
    public void Test2()
    {
        //Use Product here
    }

    [Test]
    public void Test3()
    {
        //Use Product here
    }
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the test methods?
The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect: it can be modified by one test and accidentally reused by another, which can lead to unstable tests. I would create a new Product for each test run, at least if it is the system under test.
Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all your tests and all your tests will use the same product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly - it likely is not since it's based on a mock.
OneTimeSetUp is best used only for extremely costly steps like creating a database. In fact, it tends to have very little use in purely unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
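For comparison, a minimal sketch (using the same types as in the question) of the per-test version:

[TestFixture]
public class MyFixture
{
    private IProduct _product;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test, so each test gets its own fresh Product.
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        _product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use _product here; changes made in this test cannot leak into Test2 or Test3.
    }
}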
Yes, you use OneTimeSetUp (and OneTimeTearDown) to do any setup that is shared among the tests in that fixture. It runs once per fixture, so no matter how many tests the fixture contains, the code executes only once. Typically you initialize instances held in private fields for the other tests to use, so that you don't end up with lots of duplicated setup code.

TDD - Am I doing it correctly?

I have a class that deals with Account stuff. It provides methods to login, reset password and create new accounts so far.
I inject the dependencies through the constructor. I have tests that validate each dependency reference: if a reference is null, the constructor throws an ArgumentNullException.
The Account class exposes each of these dependencies through read-only properties, and I have tests that validate that the reference passed to the constructor is the same one the property returns. I do this to make sure the references are being held by the class. (I don't know whether this is good practice either.)
First question: is this good practice in TDD? I ask because this class has 6 dependencies so far, and it gets very repetitive; the tests also get pretty long, as I have to mock all the dependencies for each test. What I do is copy and paste every time and just change the dependency reference being tested.
Second question: my account creation method does things like validating the model passed in, inserting data into 3 different tables (or a fourth table if a certain set of values is present) and sending an email. What should I test here? So far I have a test that checks that the model validation gets executed and that the Add method of each repository gets called; in this case, I use Moq's Callback method on the mocked repository to compare each property being added to the repository against the ones I passed in the model.
Something like:
userRepository
    .Setup(r => r.Add(It.IsAny<User>()))
    .Callback<User>(u =>
    {
        Assert.AreEqual(model.Email, u.Email);
        Assert.IsNotNull(u.PasswordHash);
        //...
    })
    .Verifiable();
As I said, these tests are getting longer. I think it doesn't hurt to test everything I can, but I don't know whether it's worth it, as it's taking time to write the tests.
The purpose of testing is to find bugs.
Are you really going to have a bug where the property exists but is not initialized to the value from the constructor?
public class NoNotReally {
    private IMyDependency1 _myDependency;

    public IMyDependency1 MyDependency { get { return _myDependency; } }

    public NoNotReally(IMyDependency1 dependency) {
        _myDependency = null; // instead of dependency. Really?
    }
}
Also, since you're using TDD, you should write the tests before you write the code, and the code should exist only to make the tests pass. Instead of your unnecessary tests of the properties, write a test that demonstrates that your injected dependency is being used. In order for such a test to pass, the dependency will need to exist, it will need to be of the correct type, and it will need to be used in the particular scenario.
In my example, the dependency will come to exist because it's needed, not because some artificial unit test required it to be there.
You say writing these tests feels repetitive. I say you are feeling the major benefit of TDD. Which is, in fact, not writing software with fewer bugs and not writing better software, because TDD doesn't guarantee either (at least not inherently). TDD forces you to think about design decisions and make design decisions all. The. Time. (And it reduces debugging time.) If you feel pain while doing TDD, it's usually because a design decision is coming back to bite you. Then it's time to put on your refactoring hat and improve the design.
Now in this particular case it's just the design of your tests, but you have to make design decisions for those as well.
As for testing whether properties are set: if I understand you correctly, you exposed those properties just for the sake of testing? In that case I'd advise against it. Assume you have a class with a constructor parameter, and a test that asserts that the constructor throws on null arguments:
public class MyClass
{
    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
    }
}

[Test]
public void ConstructorShouldThrowOnNullArgument()
{
    Assert.Catch<ArgumentNullException>(() => new MyClass(null));
}
(TestFixture class omitted)
Now when you start to write a test for an actual business method of the class under test, the parts will start to fit together.
[Test]
public void TestSomeBusinessFunctionality()
{
    MyDependency mockedDependency;
    // setup mock
    // mock calls on mockedDependency

    MyClass myClass = new MyClass(mockedDependency);

    var result = myClass.DoSomethingOrOther();

    // assertions on result
    // if necessary, assertions on calls on mockedDependency
}
At that point, you will have to assign the injected dependency from the constructor to a field so you can use it in the method later. And if you manage to get the test to pass without using the dependency... well, heck, obviously you didn't need it to begin with. Or, maybe, you'll only start to need it for the next test.
About the other point: when it becomes a hassle to test all the responsibilities of a method or class, TDD is telling you that the method/class is doing too much and would perhaps like to be split up into parts that are easy to test, e.g. one class for verification, one for mapping and one for executing the storage calls.
That can very well lead to over-engineering, though! So watch out for that and you'll develop a feeling for when to resist the urge for more indirection. ;)
To test whether properties are mapped properly, I'd suggest using stubs or self-made fake objects that have simple properties. That way you can simply compare the source and target properties and don't need lengthy setups like the one you posted.
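For instance, a minimal sketch of such a fake; IUserRepository, AccountModel and the Create call are assumptions made here to roughly match the question's code, not taken from it:

// Hand-rolled fake that simply records what was added, instead of a Moq callback.
public class FakeUserRepository : IUserRepository
{
    public User AddedUser { get; private set; }

    public void Add(User user)
    {
        AddedUser = user;
    }
}

[Test]
public void CreateAccount_MapsModelOntoUser()
{
    var repository = new FakeUserRepository();
    var account = new Account(repository /*, other fakes for the remaining dependencies */);
    var model = new AccountModel { Email = "user@example.com", Password = "secret" };

    account.Create(model);

    // Compare source and target properties directly on the captured object.
    Assert.AreEqual(model.Email, repository.AddedUser.Email);
    Assert.IsNotNull(repository.AddedUser.PasswordHash);
}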
Normally in unit tests (especially in TDD), you are not going to test every single statement in the class that you are testing. The main purpose of the TDD unit tests is to test the business logic of the class, not the initialization stuff.
In other words, you give scenarios (remember to include edge cases too) as input and check the results, which can either be the final values of the properties and/or the return values of the methods.
The reason you don't want to test every single possible code path in your classes is because should you ever decide to refactor your classes later on, you only need to make minimal changes to your TDD unit tests, as they are supposed to be agnostic to the actual implementation (as much as possible).
Note: There are other types of unit tests, such as code coverage tests, that are meant to test every single code path in your classes. However, I personally find these tests impractical, and certainly not encouraged in TDD.

Why and how implementing initial unit tests in legacy application code

I’m in the process of integrating unit tests into an existing legacy application. In the book “Working Effectively with Legacy Code” and many other books I have read, it is written that you should always write unit tests before you start refactoring existing code, integrating new features, correcting bugs, and so on.
In the many samples I have read, the signatures of the refactored methods never or rarely break, and the old unit tests still pass after a lot of changes. The reason is that the authors' code is nowhere near as legacy as the code I look at every day in what I consider “legacy code”.
In reality, when you have a legacy application, the code is so bad that you must break method signatures. If you write unit tests against the original method, then after just 5 minutes of changes you will have broken the entire signature, and the first tests will be ready for the trash.
Just as an example, look at the code below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MyCompany.Accounting
{
    public class DataCreator
    {
        public static System.Data.DataSet CreateInvoice(
            System.Data.DataSet customer,
            System.Data.DataSet order,
            string mails,
            ref bool isValid)
        {
            System.Data.DataSet invoice = new System.Data.DataSet();

            int taxGroupId = Application.SharedConnection.ExecuteScalar(
                "SELECT Id FROM TaxGroup WHERE TaxGroup.IsDefault");

            Application.SharedConnection.ExecuteNonQuery(
                "INSERT INTO Invoice (CustomerId, EffectiveDate) VALUES(?,?)",
                customer.Tables[0].Rows[0]["Id"], System.DateTime.Now);

            int invoiceId = Application.SharedConnection.ExecuteScalar("SELECT @@IDENTITY");

            Application.SharedConnection.ExecuteNonQuery(
                "INSERT INTO InvoiceLine (ProductId, Quantity, InvoiceId) VALUES(?,?,?)",
                order.Tables[0].Rows[0]["ProductId"], order.Tables[0].Rows[0]["Quantity"], invoiceId);

            foreach (string mail in mails.Split(';'))
            {
                Application.MailSender.Send(mail);
            }

            isValid = true;

            System.Data.DataRow row = invoice.Tables[0].NewRow();
            row["Id"] = invoiceId;
            invoice.Tables[0].Rows.Add(row);

            return invoice;
        }
    }
}
As you can see, there is a lot of bad code here.
After the refactoring, the method will no longer be static, the ref parameter will be removed, the DataSets will be converted to POCO objects, access to global objects like “Application” will be replaced by dynamically injected properties, and many other changes will be made: implementing interfaces, reviewing the names of the class and namespace, and so on. In fact, this code is such crap that it should be thrown away and rewritten from scratch.
If I create a unit test for the original static method, the test will break immediately when the static keyword is removed to use the class in a more object-oriented manner. The same goes for the change from DataSet to POCO, etc.
Why create a unit test if I will throw it away in 5 minutes? What in this test is helpful?
Which strategy will you use in this case?
Thank you very much.
The key item here is to pick the point that you are actually going to unit test. In your case, putting a test on the exact method you are replacing doesn't make sense. Instead a test needs to be created for every point in the application that calls your method to ensure that the specific functionality still works the same.
The reason is that once you have completed refactoring the DataCreator class you will have to go back to all of the areas that call it and change those. Putting tests on those areas prior to making changes will ensure that your functionality is the same.
See below:
public class SomeClass {
public Boolean DoSomething() {
OtherClass oc = new OtherClass();
return oc.DoSomethingElse("param1", "param2") == "true";
}
}
public class OtherClass {
public String DoSomethingElse(String param1, String param2) {
// horrible code here which never uses the second parameter
return "true";
}
}
In the above example, you might very well want to refactor DoSomethingElse to change the return type to a boolean value and eliminate the second parameter.
So you start by putting a unit test on the SomeClass.DoSomething method. Then refactor OtherClass to your heart's content, making sure the end result of DoSomething stays the same.
Of course, in this situation, you want to make sure you have a unit test for every single thing that calls "DoSomethingElse".
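For example, a characterisation test pinned to the calling class (plain NUnit, using the classes above; the test name is made up) could be as simple as:

[TestFixture]
public class SomeClassTests
{
    [Test]
    public void DoSomething_ReturnsTrue_WithCurrentImplementation()
    {
        // Records the externally observable behaviour before OtherClass is refactored.
        var someClass = new SomeClass();

        Assert.IsTrue(someClass.DoSomething());
    }
}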
Your unit tests will always have to change with signature changes. The best way to go about this is to set up unit tests that test general behavior, and do simple optimizations first.
For instance, start with optimizing the function's code itself (for example, fixing up the data access and splitting the function into a couple of smaller ones).
Then you can move on to signature refactoring, but before you do, make sure the components that use this class have basic expected-results tests, so you know whether, in the process of removing the ref parameter, you neglected something in one of the classes that depends on it.
When doing major refactoring your tests will change quite a bit. Sometimes it's enough to have the conceptual tests laid out so you can check that the behaviour is still the same after the refactoring, or so that the tests that become obsolete tell you what needs to be updated in the dependent classes.
Write out the interface the way you want it to be, and write the unit tests against that.
Then call the legacy code from the interface until the tests pass.
Then refactor as needed.
Right?
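A rough sketch of that idea for the invoice example, with entirely hypothetical target types (the Invoice, Customer and Order POCOs do not exist in the original code):

// The interface you actually want to end up with.
public interface IInvoiceCreator
{
    Invoice CreateInvoice(Customer customer, Order order, IEnumerable<string> mailRecipients);
}

// Temporary adapter that satisfies the new interface by delegating to the legacy static method.
// It can be deleted once DataCreator has been rewritten properly.
public class LegacyInvoiceCreator : IInvoiceCreator
{
    public Invoice CreateInvoice(Customer customer, Order order, IEnumerable<string> mailRecipients)
    {
        bool isValid = false;
        System.Data.DataSet result = DataCreator.CreateInvoice(
            ToDataSet(customer),
            ToDataSet(order),
            string.Join(";", mailRecipients),
            ref isValid);
        return ToInvoice(result);
    }

    // Mapping helpers between the new POCOs and the old DataSets; details omitted in this sketch.
    private static System.Data.DataSet ToDataSet(Customer customer) { throw new NotImplementedException(); }
    private static System.Data.DataSet ToDataSet(Order order) { throw new NotImplementedException(); }
    private static Invoice ToInvoice(System.Data.DataSet invoice) { throw new NotImplementedException(); }
}

The unit tests are then written against IInvoiceCreator, so they survive when LegacyInvoiceCreator is eventually replaced by a clean implementation.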
The Unit Test will serve as an active/living record of what functionality was required/performed by the method before you began changing it.
Consider them like checklists for you to think about after you've refactored the method to ensure it still covers what it covered before you refactored it.

Object instantiation in Test Classes

I'm using MS UnitTesting and trying to find my way around writing my first unit tests. It seems like all my unit tests start off with creating the same few objects...
[TestMethod]
public void CanCreateOrder()
{
    <create an order>
    ...
}

[TestMethod]
public void CanSetOrderDeliveryAddress()
{
    <create an order>
    <create an address>
    order.DeliveryAddress = address;
    ...
}

[TestMethod]
public void CanDispatchAnOrder()
{
    <create an order>
    <create an address>
    order.DeliveryAddress = address;
    order.Dispatch();
    ...
}
...etc
Is this normal, or have I got the wrong idea? I kind of thought every test was supposed to be independent, but how would it be possible to test Dispatch() without that implicitly relying on CreateOrder and SetDeliveryAddress already passing?
And the second part of the question, if the above approach looks ok, should I be using a factory or something to instantiate these objects in my test project? I'm not sure if a test project should only contain test classes/methods, or it's ok to add a bunch of helpers in there too.
You seem to be on the right track to me. In a lot of unit tests you will need to set up the state of the object you are testing in order to be able to test a particular part of behaviour.
It helps to group tests into classes when the behaviour you are testing requires similar setup, and to use [TestInitialize] methods to reduce the duplication of that setup.
eg:
[TestClass]
public class WhenReadyToDispatch
{
    private Order order;

    [TestInitialize]
    public void Initialize()
    {
        order = <create an order>
        order.DeliveryAddress = <create an address>
    }

    [TestMethod]
    public void CanChangeOrderDeliveryAddress()
    {
        order.DeliveryAddress = <create a different address>
    }

    [TestMethod]
    public void CanDispatchAnOrder()
    {
        order.Dispatch();
    }
}
It's fine to have helper classes in the test project - you should be aiming to make your test code as well factored as your production code.
Your first question comes down to mocking and stubbing: you create a fake order and a fake address from the interface of each class. That way you can test just the Dispatch method.
You can then assert that the dispatch method did the right thing by checking what happened to your fake (stub/mock) objects.
In answer to the second part of your question:
It's a very good idea to have factory methods and even class hierarchies to make writing the tests easier. It's just as important to structure tests well as it is to structure production code.
I think part of your problem may lie in the design of your order object. Trying to write a free-standing test only to find that it relies on other functions generally suggests that they are not adequately decoupled. A couple of rules of thumb may be appropriate here:
If Order.DeliveryAddress is just a simple getter/setter, then don't worry about testing it. That's like trying to prove that C# behaves as it should; there is little advantage in doing so. Conversely, having your dispatcher test rely on this property being in working order is not really a dependency.
However if Order.DeliveryAddress is performing logic, such as ensuring that the address is only modifiable for non-dispatched orders for example, then it is more complicated. You probably don't want to try to dispatch an entire order just to test that Order.DeliveryAddress is no longer modifiable afterwards.
Invoking the Single Responsibility Principle (see 1 and 2) here would say that the Order class is now doing too much: it is both dispatching orders and enforcing the state integrity of the order data. In that case you probably want to split the dispatching functionality out into a DispatcherService that simply takes an order and dispatches it, setting an IsDispatched flag on the order in the process.
You can then test the DeliveryAddress behavior by just setting the IsDispatched property appropriately.
A third approach (which is sort of cheating, but works well when you are trying to get some testing over legacy objects) is to subclass Order to create a TestableOrder class that exposes to the test fixture the ability to tinker with the internal state of the class. In other words, it could expose a MarkAsDispatched() method that sets the class's internal IsDispatched flag and thus allows you to test that DeliveryAddress is only settable prior to being marked as dispatched, as sketched below.
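A rough sketch of that third approach, assuming Order exposes its IsDispatched flag to derived classes and throws InvalidOperationException when the address is changed afterwards; the Address constructor and test name here are made up:

// Test-only subclass that can put the order into the dispatched state directly.
public class TestableOrder : Order
{
    public void MarkAsDispatched()
    {
        IsDispatched = true; // assumes a protected setter or field on Order
    }
}

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void CannotChangeDeliveryAddressAfterDispatch()
{
    var order = new TestableOrder();
    order.DeliveryAddress = new Address("1 Old Street");

    order.MarkAsDispatched();

    // Expected to throw, because the order has already been dispatched.
    order.DeliveryAddress = new Address("2 New Street");
}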
Hope that helps.

Unit Testing: Self-contained tests vs code duplication (DRY)

I'm making my first steps with unit testing and am unsure about two guidelines which seem to contradict each other:
Every single unit test should be self-contained and not depend on others.
Don't repeat yourself.
To be more concrete, I've got an importer which I want to test. The importer has an "Import" function taking raw data (e.g. out of a CSV) and returning an object of a certain kind, which is also stored in a database through an ORM (LinqToSQL in this case).
Now I want to test several things, e.g. that the returned object is not null, that its mandatory fields are not null or empty, and that its attributes have the correct values. I wrote 3 unit tests for this. Should each test do the import and get the job, or does this belong in general setup logic? On the other hand, according to this blog post, the latter would be a bad idea as far as I understand it. Also, wouldn't it violate the self-containment of the tests?
My class looks like this:
[TestFixture]
public class ImportJob
{
    private TransactionScope scope;
    private CsvImporter csvImporter;
    private readonly string[] row = { "" };

    public ImportJob()
    {
        CsvReader reader = new CsvReader(new StreamReader(
            @"C:\SomePath\unit_test.csv", Encoding.Default),
            false, ';');
        reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;

        int fieldCount = reader.FieldCount;
        row = new string[fieldCount];
        reader.ReadNextRecord();
        reader.CopyCurrentRecordTo(row);
    }

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
        csvImporter = new CsvImporter();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void ImportJob_IsNotNull()
    {
        Job j = csvImporter.ImportJob(row);

        Assert.IsNotNull(j);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreNotNull()
    {
        Job j = csvImporter.ImportJob(row);

        Assert.IsNotNull(j.Customer);
        Assert.IsNotNull(j.DateCreated);
        Assert.IsNotNull(j.OrderNo);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreValid()
    {
        Job j = csvImporter.ImportJob(row);
        Customer c = csvImporter.GetCustomer("01-01234567");

        Assert.AreEqual(j.Customer, c);
        Assert.That(j.DateCreated.Date == DateTime.Now.Date);
        Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]);
    }

    // etc. ...
}
As you can see, I'm calling
Job j = csvImporter.ImportJob(row);
in every unit test, as the tests should be self-contained. But this violates the DRY principle and may cause performance issues some day.
What's the best practice in this case?
Your test classes are no different from usual classes, and should be treated as such: all good practices (DRY, code reuse, etc.) should apply there as well.
That depends on how much of the scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for each of the three tests, and that can't be considered best practice. In your case you have the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find that there are tests that do not share this setup, or that require a different setup shared by a set of tests, then refactor those tests into a new test case class. You could also have shared setup methods that are not marked with [SetUp] but get called at the beginning of each test that needs them:
[Test]
public void SomeTest()
{
    setupSomeSharedState();
    ...
}
A way of finding the right mix could be to start off without a SetUp method and when you find that you're duplicating code for test setup then refactor to a shared method.
You could put the
Job j = csvImporter.ImportJob(row);
in your setup. That way you're not repeating code.
You actually should run that line of code for each and every test; otherwise tests will start failing because of things that happened in other tests, and that becomes hard to maintain.
The performance problem isn't caused by DRY violations. You really should set everything up for each and every test. Also, these aren't unit tests, they're integration tests: you rely on an external file to run the test. You could make ImportJob read from a stream instead of opening a file directly; then you could test with a MemoryStream.
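As a rough sketch (the inline CSV content and field layout here are invented), the same CsvReader used in the question can be pointed at a StringReader so the test no longer touches the file system:

[Test]
public void ImportJob_MandatoryFields_AreNotNull()
{
    // Inline test data instead of C:\SomePath\unit_test.csv.
    const string csv = "01-01234567;ORDER-4711;Some Customer";
    using (var reader = new CsvReader(new StringReader(csv), false, ';'))
    {
        reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;
        var row = new string[reader.FieldCount];
        reader.ReadNextRecord();
        reader.CopyCurrentRecordTo(row);

        Job j = new CsvImporter().ImportJob(row);

        Assert.IsNotNull(j.Customer);
        Assert.IsNotNull(j.OrderNo);
    }
}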
Whether you move
Job j = csvImporter.ImportJob(row);
into the SetUp function or not, it will still be executed before every test is executed. If you have the exact same line at the top of each test, well then it is just logical that you move that line into the SetUp portion.
The blog entry you posted complained about the setup of the test values being done in a method disconnected from (and possibly not even on the same screen as) the test itself -- but your case is different, in that the test data is driven by an external text file, so that complaint doesn't really match your specific use case either.
In one of my projects we agreed as a team that we would not implement any initialization logic in unit test constructors. We have the Setup, TestFixtureSetup and SetupFixture (since version 2.4 of NUnit) attributes. They are enough for almost all cases where we need initialization. We make developers use one of these attributes and explicitly state whether the initialization code runs before each test, before all tests in a fixture, or before all tests in a namespace.
However, I disagree that unit tests should always conform to all the good practices expected of normal development. That is desirable, but it is not a rule. My point is that in real life the customer doesn't pay for unit tests. The customer pays for the overall quality and functionality of the product. The customer is not interested in whether you deliver a bug-free product by covering 100% of the code with unit tests/automated GUI tests or by employing 3 manual testers per developer who click on every piece of the screen after each build.
Unit tests don't add business value to the product; they let you save on development and testing effort and force developers to write better code. So it is always up to you - will you spend additional time refactoring the unit tests to make them perfect, or will you spend the same amount of time adding new features for the customers of your product? Don't forget, either, that unit tests should be as simple as possible. How do you find the golden mean?
I suppose this depends on the project, and the PM or team lead needs to plan and estimate the quality of the unit tests, their completeness and their code coverage, just as they estimate all the other business features of the product. In my opinion, it is better to have copy-paste unit tests that cover 80% of the production code than very well designed and separated unit tests that cover only 20%.
