Assert object wasn't changed in UnitTest - c#

I have a unit test where I want to verify that, under some circumstances, my object has not been changed.
[Test]
public void ShouldNotRecalculate()
{
    var subject = new Invoice();
    // Create50InvoiceItems()
    sut.Recalculate(subject);

    Assert.That(subject.Items[0].NetAmount, Is.EqualTo(/* old value */));
    Assert.That(subject.Items[0].VatRate, Is.EqualTo(/* old value */));
    // ...
    Assert.That(subject.Items[49].NetAmount, Is.EqualTo(/* old value */));
    Assert.That(subject.Items[49].VatRate, Is.EqualTo(/* old value */));
}
Without creating variables to save the old values of all 50 invoice items, is there any other solution? Should I create a shallow copy of the object? Maybe using AutoMapper?
Thanks in advance!

I sometimes implement the IEquatable<T> interface so that I can compare objects and all their properties.
It's quite handy in general, and it also allows you to create a "reference object" to compare against and check whether anything has been changed.
Depending on the situation you may also just write multiple assertions and check all values.
For those cases I like to use Assert.Multiple() to signify that the values belong together.
In your case I would go for the IEquatable<T> interface; it makes the assertions super concise.
On top, ICloneable will make the preparation of the test data easy.
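A minimal sketch of that combination, assuming an InvoiceItem with just the two properties from the question (CreateInvoiceWith50Items is a made-up helper, and sut is the recalculator from your snippet):

public class InvoiceItem : IEquatable<InvoiceItem>, ICloneable
{
    public decimal NetAmount { get; set; }
    public decimal VatRate { get; set; }

    public bool Equals(InvoiceItem other) =>
        other != null && NetAmount == other.NetAmount && VatRate == other.VatRate;

    public override bool Equals(object obj) => Equals(obj as InvoiceItem);
    public override int GetHashCode() => NetAmount.GetHashCode() ^ VatRate.GetHashCode();

    public object Clone() => new InvoiceItem { NetAmount = NetAmount, VatRate = VatRate };
}

[Test]
public void ShouldNotRecalculate()
{
    var subject = CreateInvoiceWith50Items(); // hypothetical helper
    var snapshot = subject.Items.Select(i => (InvoiceItem)i.Clone()).ToList();

    sut.Recalculate(subject);

    // NUnit compares collections element-wise, using the Equals above
    Assert.That(subject.Items, Is.EqualTo(snapshot));
}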
I wonder, though - how many objects do you have to test to "prove" your code is working correctly?
Sure, you may have to write a test for each property, but per property, I guess only a few tests should be necessary.
Besides, you can use the [TestCaseSource] and potentially [Combinatorial] attributes to generate the necessary test cases.
Meaning: instead of creating a huge list of test case data yourself and checking it all in one huge test, let NUnit generate the data for you and run it as many small test cases.
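A sketch of what that can look like (the calculation method and the values are made up):

private static IEnumerable<TestCaseData> VatCases()
{
    // NUnit runs one test per TestCaseData
    yield return new TestCaseData(100m, 0.19m).Returns(19m);
    yield return new TestCaseData(200m, 0.07m).Returns(14m);
}

[TestCaseSource(nameof(VatCases))]
public decimal CalculateVat_ReturnsExpectedAmount(decimal net, decimal rate)
{
    return sut.CalculateVat(net, rate); // hypothetical method under test
}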

Related

TDD - Am I doing it correctly?

I have a class that deals with Account stuff. So far it provides methods to log in, reset the password and create new accounts.
I inject the dependencies through the constructor. I have tests that validate each dependency's reference; if a reference is null, the constructor throws an ArgumentNullException.
The Account class exposes each of these dependencies through read-only properties, and I have tests that validate that the reference passed to the constructor is the same one the property returns. I do this to make sure the references are being held by the class. (I don't know if this is good practice either.)
First question: is this good practice in TDD? I ask because this class has six dependencies so far, and it gets very repetitive; the tests also get pretty long, as I have to mock all the dependencies for each test. What I do is copy and paste each time and just change the dependency reference being tested.
Second question: my account creation method validates the model passed in, inserts data into three different tables (or a fourth table if a certain set of values is present) and sends an email. What should I test here? So far I have a test that checks that the model validation gets executed and that the Add method of each repository gets called; in this case, I use Moq's Callback method on the mocked repository to compare each property being added to the repository against the ones I passed in the model.
Something like:
userRepository
    .Setup(r => r.Add(It.IsAny<User>()))
    .Callback<User>(u =>
    {
        Assert.AreEqual(model.Email, u.Email);
        Assert.IsNotNull(u.PasswordHash);
        //...
    })
    .Verifiable();
As I said, these tests are getting longer. I think it doesn't hurt to test anything I can, but I don't know if it's worth it, as it's taking time to write the tests.
The purpose of testing is to find bugs.
Are you really going to have a bug where the property exists but is not initialized to the value from the constructor?
public class NoNotReally {
    private IMyDependency _myDependency;
    public IMyDependency MyDependency { get { return _myDependency; } }

    public NoNotReally(IMyDependency dependency) {
        _myDependency = null; // instead of dependency. Really?
    }
}
Also, since you're using TDD, you should write the tests before you write the code, and the code should exist only to make the tests pass. Instead of your unnecessary tests of the properties, write a test that demonstrates that your injected dependency is being used. In order for such a test to pass, the dependency will need to exist, it will need to be of the correct type, and it will need to be used in the particular scenario.
In my example, the dependency will come to exist because it's needed, not because some artificial unit test required it to be there.
You say writing these tests feels repetitive. I say you're feeling the major benefit of TDD - which is in fact not writing software with fewer bugs and not writing better software, because TDD doesn't inherently guarantee either. TDD forces you to think about design decisions and make design decisions all. The. Time. (And it reduces debugging time.) If you feel pain while doing TDD, it's usually because a design decision is coming back to bite you. Then it's time to switch to your refactoring hat and improve the design.
Now in this particular case it's just the design of your tests, but you have to make design decisions for those as well.
As for testing whether properties are set: if I understand you correctly, you exposed those properties just for the sake of testing? In that case I'd advise against it. Assume you have a class with a constructor parameter, and a test asserting that the constructor throws on null arguments:
public class MyClass
{
    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
    }
}
[Test]
public void ConstructorShouldThrowOnNullArgument()
{
    Assert.Catch<ArgumentNullException>(() => new MyClass(null));
}
(TestFixture class omitted)
Now when you start to write a test for an actual business method of the class under test, the parts will start to fit together.
[Test]
public void TestSomeBusinessFunctionality()
{
    MyDependency mockedDependency;
    // setup mock
    // mock calls on mockedDependency

    MyClass myClass = new MyClass(mockedDependency);

    var result = myClass.DoSomethingOrOther();

    // assertions on result
    // if necessary, assertions on calls on mockedDependency
}
At that point, you will have to assign the injected dependency from the constructor to a field so you can use it in the method later. And if you manage to get the test to pass without using the dependency... well, heck, obviously you didn't need it to begin with. Or, maybe, you'll only start to need it for the next test.
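A sketch of where that typically ends up (DoSomethingOrOther comes from the test above; ProvideValue is a made-up member on MyDependency):

public class MyClass
{
    private readonly MyDependency _dependency;

    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
        _dependency = dependency; // now the constructor argument is actually kept
    }

    public int DoSomethingOrOther()
    {
        // the first business test forces the field to exist and be used
        return _dependency.ProvideValue();
    }
}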
About the other point: when it becomes a hassle to test all the responsibilities of a method or class, TDD is telling you that the method/class is doing too much and would maybe like to be split up into parts that are easy to test. E.g. one class for verification, one for mapping and one for executing the storage calls.
That can very well lead to over-engineering, though! So watch out for that and you'll develop a feeling for when to resist the urge for more indirection. ;)
To test whether properties are mapped properly, I'd suggest using stubs or self-made fake objects which have simple properties. That way you can simply compare the source and target properties and don't have to write lengthy setups like the one you posted.
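For example (the type and member names here are made up):

// a hand-rolled fake with simple, comparable properties
var source = new FakeUserRecord { Email = "alice@example.com", Name = "Alice" };

var result = mapper.MapToUser(source); // the mapping code under test

Assert.AreEqual(source.Email, result.Email);
Assert.AreEqual(source.Name, result.Name);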
Normally in unit tests (especially in TDD), you are not going to test every single statement in the class that you are testing. The main purpose of the TDD unit tests is to test the business logic of the class, not the initialization stuff.
In other words, you give scenarios (remember to include edge cases too) as input and check the results, which can be the final values of the properties and/or the return values of the methods.
The reason you don't want to test every single possible code path in your classes is that, should you ever decide to refactor your classes later on, you only need to make minimal changes to your TDD unit tests, as they are supposed to be agnostic of the actual implementation (as much as possible).
Note: There are other types of unit tests, such as code coverage tests, that are meant to test every single code path in your classes. However, I personally find these tests impractical, and certainly not encouraged in TDD.

Unit test saving/retrieving object

Haven't really used unit testing much before, but I've read up on it a bit and got the idea that you really should only test one thing at a time. But how do you do this nicely when, for example, saving and retrieving an object? I can't see that the save worked without using the retrieve function, and I can't test the retrieve without saving something. At the moment I tried something like the following. How can I make sure that my test knows which one is not working?
[TestMethod]
public void TestSaveObject()
{
    TestStorage storage = new TestStorage();
    ObjectToSave s1 = new ObjectToSave { Name = "TEST1" };
    ObjectToSave s2 = new ObjectToSave { Name = "TEST2" };

    storage.SaveObject(s1);
    storage.SaveObject(s2);

    List<ObjectToSave> objects = storage.GetObjects();

    Assert.AreEqual(2, objects.Count);
    Assert.AreEqual("TEST1", objects[0].Name);
    Assert.AreEqual("TEST2", objects[1].Name);
}
You noted yourself that unit-tests ought to test one thing at a time. And yet here you're testing two things - storage and retrieval.
If you want to test that your service layer handles persistence correctly, mock the persistence object (repository), then call the service methods that add an object, verifying that the appropriate methods on the repository were called. The same goes for retrieval.
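A sketch of the save half with Moq (IRepository, StorageService and their members are assumed names):

var repository = new Mock<IRepository>();
var service = new StorageService(repository.Object);
var item = new ObjectToSave { Name = "TEST1" };

service.SaveObject(item);

// the service did its job if it delegated to the repository;
// whether the repository actually persists is a separate (integration) concern
repository.Verify(r => r.Add(item), Times.Once());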
The main issue is whether:
you are implementing a persistence library. If yes, you should of course test the persistence methods, using mock objects that fake OS calls to file system operations.
you want to test your persistence methods (as your example suggests), but they use a 3rd-party library. That doesn't make sense for unit tests - this is where integration testing plays its role.
Briefly speaking, a unit test tests a single unit - a "module" of your code - separately from other modules. The other parts are mocked for the purpose of verifying only the code of the unit being tested.
An integration test, on the other hand, tests a group of modules working together. Often integration tests are implemented to test typical use cases of your whole system; sometimes they are used for regression testing of only a group of modules, for example. There are many possibilities, but the point is that modules are tested working together - hence integration.
I like the idea of testing one function by using its inverse function, even if this means testing two things at a time, and even though this is probably an integration test rather than a unit test.
However, there are some problems I experienced doing this.
Sort order of lists (false negatives): in your example, storage.GetObjects() might retrieve the objects in the wrong order, so your test might fail although every single object was handled correctly. The same applies to objects with sub-lists.
Maintaining this test: if you add new properties to ObjectToSave and forget to update the corresponding test, you get a false positive - the test tells you success although the new property is not persisted properly.
As a workaround for these limitations in .NET, I did the following:
used the NBuilder library, which automatically assigns a reproducible value to each property, so new properties are automatically tested;
created a custom object-compare method, Compare<T>(T object1, T object2, params string[] propertyNamesToBeIgnoredInTest), that uses reflection to compare the properties and sorts sub-lists before comparing them.
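A minimal sketch of such a helper (simplified: shallow properties only, no sub-list sorting; requires System.Linq and System.Reflection):

public static void AssertPropertiesEqual<T>(T expected, T actual, params string[] propertyNamesToBeIgnoredInTest)
{
    foreach (var property in typeof(T).GetProperties())
    {
        if (propertyNamesToBeIgnoredInTest.Contains(property.Name))
            continue;

        Assert.AreEqual(
            property.GetValue(expected, null),
            property.GetValue(actual, null),
            "Mismatch on property: " + property.Name);
    }
}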
Interesting point. I think the most proper TDD way (which wouldn't necessarily be the best way in practice) is to create a TestStorage mock and assert that the proper calls are made on it. That way you can create separate tests for Save and Retrieve, each with the mock and the proper expectations on it.

Unit test class with one public and multiple private methods

I'm having a little trouble understanding how to approach the following in order to unit test the class.
The object under test consists of one public method, which accepts a list of objects of type A and returns an object B (a binary stream).
Due to the nature of the resulting binary stream, which gets large, it does not lend itself to a nice comparison in the test output.
The stream is built using several private instance helper methods.
class Foo
{
    private BinaryStream mBinaryStream;

    public Foo() {}

    public BinaryStream Bar(List<Object> objects) {
        // perform magic to build and return the binary stream,
        // using several private instance helper methods.
        Magic(objects);
        MoreMagic(objects);
        return mBinaryStream;
    }

    private void Magic(List<Object> objects) { /* work on mBinaryStream */ }
    private void MoreMagic(List<Object> objects) { /* work on mBinaryStream */ }
}
Now I know that I need to test the behaviour of the class, thus the Bar method.
However, it's infeasible (both space- and time-wise) to compare the output of the method with a predefined result.
The number of variations is just too large (and they are corner cases).
One option is to refactor these private helper methods into (a) separate class(es) that can be unit tested. The binary stream can then be chopped into smaller, better testable chunks, but here too a lot of cases need to be handled, and comparing the binary results would defy the quick runtime of a unit test. It's an option I'd rather not go for.
Another option is to create an interface that defines all these private methods, in order to verify (using mocking) whether these methods were called or not. This means, however, that these methods must have public visibility, which is also not nice. And verifying the method invocations might be just enough to test for.
Yet another option is to inherit from the class (making the privates protected) and try to test this way.
I have read most of the topics around this kind of issue, but they all seem to deal with nicely testable results, which is different from this challenge.
How would you unit test such class?
Your first option (extract out the functionality into separate classes) is really the "correct" choice from a SOLID perspective. One of the main points of unit testing things (and TDD by extension) is to promote the creation of small, single responsibility classes. So, that is my primary recommendation.
That said, since you're rather against that solution, if what you're wanting to do is verify that certain things are called, and that they are called in a certain order, then you can leverage Moq's functionality.
First, have BinaryStream be an injected item that can be mocked. Then set up the various calls that will be made against that mock, and afterwards make a mockStream.VerifyAll() call on it - this verifies that everything you set up for that mock was called.
Additionally, you can set up a mock to do a callback. With this, you can set up an empty string collection in your test; then, in the callback of each mock setup, add a string identifying the name of the function called to the collection. After the test is completed, compare that list to a pre-populated list containing the expected calls in the correct order, and do an EqualTo assert. Something like this:
public void MyTest()
{
    var expectedList = new List<string> { "SomeFunction", "AnotherFunction", /* ... */ };
    var actualList = new List<string>();

    mockStream.Setup(x => x.SomeFunction()).Callback(() => actualList.Add("SomeFunction"));
    // ... setups for the other functions ...

    systemUnderTest.Bar(/* ... */);

    Assert.That(actualList, Is.EqualTo(expectedList));
    mockStream.VerifyAll();
}
Well, you're on top of how to deal with the private methods. As for testing the stream for the correct output: personally, I'd use a very limited set of input data and simply exercise the code in the unit test.
All the other potential scenarios I'd treat as an integration test.
So have a file (say, XML) with inputs and expected outputs. Run through it, call the method with each input, compare the actual output with the expected one, and report any differences. You could run this as part of check-in, or before deploying to UAT or some such.
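A sketch of that kind of file-driven check (the file name, the XML shape and the Transform method are all made up):

[Test]
public void KnownCasesFromFile()
{
    // Cases.xml: one <case input="..." expected="..."/> element per scenario
    var doc = XDocument.Load("Cases.xml");

    foreach (var c in doc.Descendants("case"))
    {
        var input = c.Attribute("input").Value;
        var actual = Transform(input); // hypothetical method under test
        Assert.AreEqual(c.Attribute("expected").Value, actual, "Failed for input: " + input);
    }
}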
Don't try to test private methods - they don't exist from the consumer's point of view. Consider them named code regions which are there just to make your Bar method more readable. You can always refactor the Bar method - extract other private methods, rename them, or even move them back into Bar. Those are implementation details, which do not affect class behavior. And class behavior is exactly what you should test.
So, what is the behavior of your class? What are the consumer's expectations of your class? That is what you should define and write down in your tests (ideally just before you make them pass). Start from trivial situations. What if the list of objects is empty? Define the behavior, write the test. What if the list contains a single object? If the behavior of your class is very complex, then your class is probably doing too many things. Try to simplify it and move some of the 'magic' into dependencies.
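For example, the trivial case could look like this (assuming BinaryStream exposes a Length, which it may not):

[Test]
public void Bar_WithEmptyList_ProducesEmptyStream()
{
    var foo = new Foo();

    var stream = foo.Bar(new List<object>());

    Assert.AreEqual(0, stream.Length); // assumption: BinaryStream has a Length property
}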

Is it a good way of unit testing to use another, tested function to make preparations for the actual test?

I'm trying to get into unit testing with NUnit. At the moment, I'm writing a simple test to get used to the syntax and the way of unit testing. But I'm not sure if I'm doing it right with the following test:
The class under test holds a list of strings containing fruit names, where new fruit names can be added via class_under_test.addNewFruit(...). So, to test the functionality of addNewFruit(...), I first use the method to add a new string to the list (e.g. "Pineapple") and, in the next step, verify that the list contains this new string.
I'm not sure if this is a good way to test the functionality of the method, because I rely on the response of another function (which I have already tested in a previous unit test).
Is this the way to test this function, or are there better solutions?
[Test]
public void addNewFruit_validNewFruitName_FruitIsInList()
{
    //arrange
    string newFruit = "Pineapple";

    //act
    class_under_test.addNewFruit(newFruit);
    bool result = class_under_test.isInFruitList(newFruit);

    //assert
    Assert.That(result);
}
In a perfect world, every unit test can be broken in only a single way. Every unit test "lives" in isolation from every other. Your addNewFruit test can be broken by breaking isInFruitList - but luckily, this isn't a perfect world either.
Since you already tested the isInFruitList method, you shouldn't worry about that. It's like using a 3rd-party API - it (usually) is tested, and you assume it works. In your case, you assume isInFruitList works because, well - you tested it.
To get around the "broken in a single way" issue, you could try exposing the underlying fruit list internally (using the InternalsVisibleTo attribute), or passing it in via dependency injection. The question is: is it worth the effort? What do you really gain? In such a simple case you usually gain very little, and the overhead of introducing such constructs is usually not worth the time.
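For reference, exposing internals to the test assembly is a single attribute in the production assembly (the test assembly name here is made up):

// e.g. in AssemblyInfo.cs; requires System.Runtime.CompilerServices
[assembly: InternalsVisibleTo("FruitApp.Tests")]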

multiple expectations/assertions to verify test result

I have read in several books and articles about TDD and BDD that one should avoid multiple assertions or expectations in a single unit test or specification. And I can understand the reasons for doing so. Still, I am not sure what a good way to verify a complex result would be.
Assuming a method under test returns a complex object as a result (e.g. deserialization or database read) how do I verify the result correctly?
1. Asserting on each property:
Assert.AreEqual(1, result.Property1);
Assert.AreEqual("2", result.Property2);
Assert.AreEqual(null, result.Property3);
Assert.AreEqual(4.0, result.Property4);
2. Relying on a correctly implemented .Equals():
Assert.AreEqual(expectedResult, result);
The disadvantage of 1 is that if the first assert fails, none of the following asserts are run, even though they might have contained valuable information for finding the problem. Maintainability might also be a problem, as properties come and go.
The disadvantage of 2 is that I seem to be testing more than one thing with this test. I might get false positives or negatives if .Equals() is not implemented correctly. Also, with 2 I cannot see which properties actually differ when the test fails, though I assume that can often be addressed with a decent .ToString() override. In any case, I think I should avoid being forced to throw the debugger at failing tests to see the difference; I should see it right away.
The next problem with 2 is that it compares the whole object, even though for some tests only some properties might be significant.
What would be a decent way or best practice for this in TDD and BDD?
Don't take TDD advice literally. What the "good guys" mean is that you should test one thing per test (to avoid a test failing for multiple reasons and subsequently having to debug the test to find the cause).
Now test "one thing" means "one behavior" ; NOT one assert per test IMHO.
It's a guideline not a rule.
So options:
For comparing whole data value objects
If the object already exposes a usable production Equals, use it.
Else do not add an Equals just for testing (see also Equality Pollution). Use a helper/extension method (or find one in an assertion library), e.g. obj1.HasSamePropertiesAs(obj2).
For comparing unstructured parts of objects (a set of arbitrary properties - which should be rare),
create a well-named private method, e.g. AssertCustomerDetailsInOrderEquals(params), so that the test makes clear what part you're actually testing, and move the set of assertions into that private method.
With the context present in the question I'd go for option 1.
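Such a helper could look like this (Order and its properties are hypothetical); wrapping it in NUnit's Assert.Multiple also sidesteps the "first failing assert hides the rest" problem from the question:

private static void AssertCustomerDetailsInOrderEquals(Order actual, string expectedName, string expectedEmail)
{
    Assert.Multiple(() =>
    {
        Assert.AreEqual(expectedName, actual.CustomerName);
        Assert.AreEqual(expectedEmail, actual.CustomerEmail);
    });
}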
It likely depends on context. If I'm using some sort of built-in object serialization within the .NET Framework, I can be reasonably assured that if no errors were encountered, the entire object was appropriately marshaled. In that case, asserting a single field in the object is probably fine. I trust MS libraries to do the right thing.
If you are using SQL and manually mapping results to domain objects, I feel that option 1 makes it quicker to diagnose what broke than option 2. Option 2 likely relies on ToString methods in order to render the assertion failure:
Expected <1 2 null 4.0> but was <1 2 null null>
Now I am stuck trying to figure out what field 4.0/null was. Of course I could put the field name into the method:
Expected <Property1: 1, Property2: 2, Property3: null, Property4: 4.0>
but was <Property1: 1, Property2: 2, Property3: null, Property4: null>
This is fine for a small number of properties, but begins to break down with larger numbers due to wrapping, etc. Also, maintaining the ToString could become an issue, as it needs to change at the same rate as the Equals method.
Of course there is no correct answer; at the end of the day, it really boils down to your team's (or your own) personal preference.
Hope that helps!
Brandon
I would use the second approach by default. You're right, this fails if Equals() is not implemented correctly, but if you've implemented a custom Equals(), you should have unit-tested it too.
The second approach is in fact more abstract and concise, and it allows you to modify the code more easily later, reducing code duplication along the way. Let's say you choose the first approach:
you'll have to compare all properties in several places;
if you add a new property to a class, you'll have to add a new assertion in the unit tests, whereas modifying your Equals() would be much easier. Of course you still have to add the property's value to the expected result (if it's not the default value), but that is quicker than adding a new assertion.
Also, it is much easier to see which properties actually differ with the second approach. You just run your tests in debug mode and compare the properties on break.
By the way, you should never use ToString() for this. I suppose you meant the [DebuggerDisplay] attribute?
The next problem with 2. is that it compares the whole object even though for some tests only some properties might be significant.
If you have to compare only some properties, then:
either you refactor your code by implementing a base class which contains only those properties. Example: if you want to compare a Cat to another Cat considering only the properties common to Dog and other animals, implement Cat : Animal and compare the base classes;
or you do what you did in your first approach. Example: if you only care about the quantity of milk the expected and the actual cats have drunk, and their respective names, you'll have two assertions in your unit test.
Try "one aspect of behavior per test" rather than "one assertion per test". If you need more than one assertion to illustrate the behavior you're interested in, do that.
For instance, your example might be ShouldHaveSensibleDefaults. Splitting that up into ShouldHaveADefaultNameAsEmptyString, ShouldHaveNullAddress, ShouldHaveAQuantityOfZero, etc. won't read as clearly. Nor will it help to hide the sensible defaults in another object and then do a comparison.
However, I would separate examples where the values have defaults from those where properties are derived by some logic somewhere, for instance ShouldCalculateTheTotalQuantity. Moving small examples like this into their own method makes them more readable.
You might also find that different properties on your object are changed by different contexts. Calling out each of these contexts and looking at those properties separately helps me to see how the context relates to the outcome.
Dave Astels, who came up with the "one assertion per test", now uses the phrase "one aspect of behavior" too, though he still finds it useful to separate that behavior. I tend to err on the side of readability and maintainability, so if it makes pragmatic sense to have more than one assertion, I'll do that.
