TDD - Test Existence of Interface - C#

I'm getting started with TDD and I want to build a repository-driven model from the ground up. However, how can I use NUnit to effectively say:
SomeInterfaceExists()
I want to create tests for each domain model (e.g. ICarRepository, IDriverRepository, etc.).
Does this actually make sense?
Regards

TDD means you drive your development (design) by proceeding in a test-first manner, meaning you:
Write the outline (methods) of the class you want to test
Create a unit test for that sketch of the class
Run the unit test -> it will fail
Implement just enough of the class to make the test pass
Refactor the class
This cycle is repeated for each piece of functionality (see the sketch below).
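A minimal sketch of one such cycle with NUnit (CarRepository, Car and the Save method are hypothetical names, not taken from the question):

using System.Collections.Generic;
using NUnit.Framework;

public class Car { }

public class CarRepository
{
    // Step 1: outline the method. Step 3: with an empty or throwing body the test fails.
    // Step 4: implement just enough (here: remember the car) to make the test pass.
    // Step 5: refactor while keeping the test green.
    private readonly List<Car> _cars = new List<Car>();

    public void Save(Car car)
    {
        _cars.Add(car);
    }
}

[TestFixture]
public class CarRepositoryTests
{
    [Test]
    public void Save_AcceptsAValidCar()
    {
        var repository = new CarRepository();

        Assert.DoesNotThrow(() => repository.Save(new Car()));
    }
}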

That's not something you test with TDD. The test is: can I call one of the methods of that interface on this class, and does it return the right thing?

I would say that the compiler 'tests' (and I know that's a contentious statement) the interface's existence when it tries to compile a class that implements it. I wouldn't expect to explicitly test for the interface's existence in a test, as it doesn't prove anything. You don't test that a class has been defined; you test the methods on that class.

The existence of interfaces is implicitly tested every time you use the interface in a test. For example, before the interface or any implementation of it exists, you might write a test that says, in part:
ICar car = new Convertible();
This establishes the existence of the ICar interface - your test won't compile until it's created - and that Convertible implements ICar. Every method on car you invoke will elaborate more of the interface.
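A fuller version of that first test might look like this (the TopSpeed member is purely illustrative; only ICar and Convertible come from the example above):

using NUnit.Framework;

public interface ICar
{
    int TopSpeed { get; }   // hypothetical member, added only because a test needed it
}

public class Convertible : ICar
{
    public int TopSpeed => 180;
}

[TestFixture]
public class ConvertibleTests
{
    [Test]
    public void Convertible_BehavesAsACar()
    {
        // This declaration is what forces ICar (and an implementation) to exist.
        ICar car = new Convertible();

        Assert.That(car.TopSpeed, Is.GreaterThan(0));
    }
}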

You could write something like the following:
void AssertSomeInterfaceExists(object domainObject)
{
    Assert.IsTrue(domainObject is ICarRepository);
}
You can write this test, and it will fail later if someone changes your domain object so that it no longer implements ICarRepository. However, your other unit tests, which depend on your domain object implementing this interface, will no longer compile, which makes this test somewhat redundant.

You don't need to test whether your interface exists or not. You aren't actually "testing" anything there.

What happens if your domain object doesn't require data-access code? An example: a shopping cart.

You could use reflection to divine the existence of an interface, but that seems like stretching the idea of TDD.
Remember that TDD is about making sure that functionality holds over time, not about asserting a particular design pattern is applied. As Juri points out, the test isn't the absolute first thing you build.
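If you really wanted such a check, a reflection-based sketch could look like this (CarRepository is a hypothetical implementing type):

using NUnit.Framework;

[TestFixture]
public class InterfaceExistenceTests
{
    [Test]
    public void CarRepository_ImplementsICarRepository()
    {
        // Passes only if ICarRepository exists and CarRepository implements it;
        // the compiler already guarantees both wherever CarRepository is used as an ICarRepository.
        Assert.IsTrue(typeof(ICarRepository).IsAssignableFrom(typeof(CarRepository)));
    }
}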

Unit Testing: Correctly wrapping library calls

I've been reading a lot about unit testing recently. I'm currently reading The Art of Unit Testing by Roy Osherove. But one problem isn't properly addressed in the book: How do you ensure that your stubs behave exactly like the "real thing" does?
For example, I'm trying to create an ImageTagger application. There, I have a class ImageScanner whose job it is to find all images inside a folder. The method I want to test has the following signature: IEnumerable<Image> FindAllImages(string folder). If there aren't any images inside that folder, the method is supposed to return null.
The method itself issues a call to System.IO.Directory.GetFiles(..) in order to find all the images inside that folder.
Now, I want to write a test which ensures that FindAllImages(..) returns null if the folder is empty. As Osherove writes in his book, I should extract an interface IDirectory which has a single method GetFiles(..). This interface is injected into my ImageScanner class. The actual implementation just calls System.IO.Directory.GetFiles(..). The interface, however, allows me to create stubs which can simulate Directory.GetFiles() behavior.
Directory.GetFiles returns an empty array if there aren't any files present. So my stub would look like this:
class EmptyFolderStub : IDirectory
{
    public string[] GetFiles(string path)
    {
        // Mirrors the current library's behavior: no files means an empty array.
        return new string[] { };
    }
}
I can just inject EmptyFolderStub into my ImageScanner class and test whether it returns null.
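A minimal sketch of that test (assuming, as described above, that ImageScanner takes an IDirectory through its constructor):

using NUnit.Framework;

[TestFixture]
public class ImageScannerTests
{
    [Test]
    public void FindAllImages_ReturnsNull_WhenFolderIsEmpty()
    {
        // EmptyFolderStub simulates Directory.GetFiles returning an empty array.
        var scanner = new ImageScanner(new EmptyFolderStub());

        var result = scanner.FindAllImages(@"C:\some\empty\folder");

        Assert.IsNull(result);
    }
}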
Now, what if I decide that there's a better library for searching for files? But its GetFiles(..) method throws an exception or returns null if there are no files to be found. My test still passes since the stub simulates the old GetFiles(..) behavior. However, the production code will fail since it isn't prepared to handle a null or an exception from the new library.
Of course you could say that by extracting an interface IDirectory there also exists a contract which guarantees that the IDirectory.GetFiles(..) method is supposed to return an empty array. So technically I have to test whether the actual implementation satisfies that contract. But apparently, aside from integration testing, there's no way to tell whether the method actually behaves this way. I could, of course, read the API specification and make sure that it returns an empty array, but I don't think this is the point of unit testing.
How can I overcome this problem? Is this even possible with unit testing or do I have to rely on integration tests to capture edge cases like this? I feel like I'm testing something which I don't have any control over. But I would at least expect that there is a test which breaks when I introduce a new incompatible library.
In unit testing, identifying the SUT (system under test) is very important. As soon as it is identified, its dependencies should be replaced with stubs.
Why? Because we want to pretend that we are living in a perfect world of bug-free collaborators, and under this condition we want to check only how the SUT behaves.
Your SUT is surely FindAllImages. Stick to this to avoid getting lost. All stubs are replacements for dependencies (collaborators) that are assumed to work perfectly, without any failure (this is the reason for their existence). Stubs cannot fail a test. Stubs are imaginary perfect objects.
Pay attention: this configuration has an important meaning. There is a philosophy behind it:
If a test passes, given that all of its dependencies work fine (either as stubs or as actual objects), the SUT is guaranteed to behave as expected.
In other words, if the dependencies work, then the SUT works; hence a green test retains its meaning in any environment: If A --> Then B
But it says nothing about whether the SUT's test should pass or fail when a dependency fails. IMHO, any further logical interpretation is misleading.
If Not A --> Then ??? (we can't say anything)
In summary:
How the SUT should react to a failing dependency is another test scenario. You may design a stub which throws an exception and check for the SUT's expected behavior.
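For example, a second stub simulating a failing file-system library might look like this (how ImageScanner should react is a design decision; this sketch assumes it should translate the failure into a null result):

using NUnit.Framework;

// Stub that simulates a library which throws when the folder yields no files.
class ThrowingFolderStub : IDirectory
{
    public string[] GetFiles(string path)
    {
        throw new System.InvalidOperationException("No files found in " + path);
    }
}

[TestFixture]
public class ImageScannerFailureTests
{
    [Test]
    public void FindAllImages_ReturnsNull_WhenFileSystemThrows()
    {
        var scanner = new ImageScanner(new ThrowingFolderStub());

        Assert.IsNull(scanner.FindAllImages(@"C:\some\folder"));
    }
}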
Of course you could say that by extracting an interface IDirectory there also exists a contract which guarantees that the IDirectory.GetFiles(..) method is supposed to return an empty array. ... I could, of course, read the API specification and make sure that it returns an empty array, but I don't think this is the point of unit testing.
Yes, I would say that. You didn't change the signature, but you're changing the contract. An interface is a contract, not just a signature. And that's why that is the point of unit testing - if the contract does its part, this unit does its part.
If you wanted extra peace of mind, you could write unit tests against the IDirectory implementations themselves, and in your example those would break (they expect an empty string array). This would alert you to contract changes whenever you swap implementations (or a new version of the current implementation breaks the contract).
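A sketch of such a contract test, run against whichever implementation you currently use (RealDirectory is a hypothetical adapter around the chosen file library):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class DirectoryContractTests
{
    [Test]
    public void GetFiles_ReturnsEmptyArray_WhenFolderHasNoFiles()
    {
        // RealDirectory is a hypothetical adapter around the chosen file library.
        IDirectory directory = new RealDirectory();

        var emptyFolder = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(emptyFolder);
        try
        {
            string[] files = directory.GetFiles(emptyFolder);

            // The contract FindAllImages relies on: an empty array, never null, no exception.
            Assert.IsNotNull(files);
            Assert.IsEmpty(files);
        }
        finally
        {
            Directory.Delete(emptyFolder);
        }
    }
}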

TDD: .NET following TDD principles, Mock / Not to Mock?

I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. The insert is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I'm a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether your class is failing or one of its dependencies is.
I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests when the implementation changes. Surely you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check whether it correctly calls the underlying data access provider, which should be mocked). But such tests are usually time-consuming and brittle. So it's better to write some acceptance tests which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally, as in create an interface IUserRepository, so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods you need on it (e.g. _userRepository.StoreUser(user)).
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat object instantiation and lifecycle management as a separate concern.
tl;dr: create a mock!
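A sketch of what that looks like with Moq and NUnit (the User type and the IUserRepository/MyService member names are assumptions based on the question):

using Moq;
using NUnit.Framework;

[TestFixture]
public class MyServiceTests
{
    [Test]
    public void CreateUser_StoresUserInRepository()
    {
        var repository = new Mock<IUserRepository>();
        var service = new MyService(repository.Object);   // constructor injection

        var user = new User { Name = "Alice" };
        service.CreateUser(user);

        // Assert that the service delegated to its collaborator exactly once.
        repository.Verify(r => r.StoreUser(user), Times.Once());
    }
}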
I have two sets of test libraries. One is for unit tests, where I mock things out; I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me the possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests, or functional tests, or whatever you want to call them. These make sure that a specific use case, e.g. creating a tag from the web page, does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead, it displays data from the database in different ways.
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact that the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test either has arguments or attributes that require mock objects to be created before testing can begin then—that is where the trouble begins, right?
What sources do we have for constructing mocks?
Simple creation with:
make or default create
make with hand-coded base-type arguments
Complex creation with:
make with database-supplied arguments
make with other mock objects (start this process again)
Object factories
Production code based factories
Test code based factories
Data-repo based data (vs hand-coded)
Gleaned
Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goal is:
No mocks needed. Strive for this above all.
Simple mock creation with no arguments.
Simple mock creation with base-type arguments.
Simple mock creation with DB-repo sourced base-type arguments.
Complex mock creation using production code object factories.
Complex mock creation using test-code object factories.
Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant. The class invariant is either impossible with common TDD facilities, or nearly impossible. This includes the "implies" Boolean keyword as well.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and post-conditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!

Can someone explain "Fake it till you make it" approach in Test Driven Development?

I have a problem understanding the evolution of the code when you have taken the "Fake It Until You Make It" TDD approach.
OK, you have faked it; let's say you returned a constant so that the broken test is green in the beginning. Then you refactored your code. Then you run the same test, which is obviously going to pass because you have faked it!
But if a test is passing, how can you rely on it, especially when you know that you faked it?
How should the faked test be refactored along with your real code refactoring so that it stays reliable?
Thanks
The short answer is: write more tests.
If the method is returning a constant (when it should be calculating something), simply add a test for a condition with a different result. So, let's say you had the following:
@Test
public void testLength()
{
    final int result = objectUnderTest.myLength("hello");
    assertEquals(5, result);
}
and myLength was implemented as return 5, then you write a similar (additional) test but pass in "foobar" instead and assert that the output is 6.
When you're writing tests, you should try to be very vindictive against the implementation and try to write something that exposes its shortcomings. When you're writing code, I think you're meant to be very laissez-faire and do as little is required to make those nasty tests green.
You first create a unit test for new functionality that does not exist yet.
Now you have a unit test for a non-existent method. You then create that method with an empty (or trivial) body, so your unit test compiles but, of course, fails.
You then go on building your method, the underlying functionality, etc. until your unit test succeeds.
That's (kind of) test driven development.
The reason you should be able to trust this is that you write your unit test so that it actually tests your functionality. Of course, if the method just returns a constant and you only test for that, you have a problem. But then your unit test is not complete.
Your unit tests should (in theory) exercise every line. And if you've done that properly, this approach works.
Fake it 'til you make it says to write the simplest possible thing to pass your current tests. Frequently, when you've written a single test case for a new feature, that simplest possible thing is to return a constant. When something that simple satisfies your tests, it's because you don't (yet) have enough tests. So write another test, as @Andrzej Doyle says. Now the feature you're developing needs some logic to it. Maybe this time the simplest possible thing is to write very basic if-else logic to handle your two test cases. You know you're faking it, so you know you're not done. When it becomes simpler to write the actual code to solve your problem than to extend your fake to cover yet another test case - that's what you do. And you've got enough test cases to make sure you're writing it correctly.
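A C# sketch of that progression (the names are illustrative only):

using NUnit.Framework;

public class LengthCalculator
{
    // Step 1 (fake it): "return 5;" was enough to make the first test green.
    // Step 2 (triangulate): the "foobar" test made the fake more painful than
    // the real thing, so the obvious implementation took over.
    public int Length(string input)
    {
        return input.Length;
    }
}

[TestFixture]
public class LengthCalculatorTests
{
    [Test]
    public void Length_OfHello_IsFive()
    {
        Assert.AreEqual(5, new LengthCalculator().Length("hello"));
    }

    [Test]
    public void Length_OfFoobar_IsSix()
    {
        Assert.AreEqual(6, new LengthCalculator().Length("foobar"));
    }
}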
This may be referring to the practice of using mocks/stubs/fakes with which your system/class under test collaborates.
In this scenario, you "fake" the collaborator, not the thing that you are testing, because you don't have an implementation of this collaborator's interface.
Thus, you fake it until you "make it," meaning that you implement it in a concrete class.
In TDD, all the requirements are expressed as tests. If you fake something and all tests pass, your requirements are fulfilled. If this is not giving you the expected behavior, then you have not expressed all your requirements as tests.
If you continue faking stuff at this point, you will eventually notice that the easiest solution would be to actually solve the problem.
When you refactor the code, you are switching from returning a constant value to returning an expression in terms of variables, which are derived/calculated.
The test, assuming it was written correctly the first time around, would still be valid for your newly refactored implementation and does not have to be refactored.
It's important to understand the motivation behind Fake It: It's similar to writing the Assert first, except for your production code. It gets you to green, and lets you focus on turning the fake into a valid expression in the simplest way possible while still passing the test. It's the first thing to try when implementation is not obvious, before you give up and switch to Triangulation.

Is unit testing the definition of an interface necessary?

I have occasionally heard or read about people asserting their interfaces in a unit test. I don't mean mocking an interface for use in another type's test, but specifically creating a test to accompany the interface.
Consider this ultra-lame and off-the-cuff example:
public interface IDoSomething
{
    string DoSomething();
}
and the test:
[TestFixture]
public class IDoSomethingTests
{
    [Test]
    public void DoSomething_Should_Return_Value()
    {
        var mock = new Mock<IDoSomething>();
        mock.Setup(m => m.DoSomething()).Returns("value");

        var actualValue = mock.Object.DoSomething();

        mock.Verify(m => m.DoSomething());
        Assert.AreEqual("value", actualValue);
    }
}
I suppose the idea is to use the test to drive the design of the interface and also to provide guidance for implementors on what's expected so they can draw good tests of their own.
Is this a common (recommended) practice?
In my opinion, just testing the interface using a mocking framework tests little else than the mocking framework itself. Nothing I would spend time on, personally.
I would say that what should drive the design of the interface is what functionality is needed. I think it would be hard to identify that using only a mocking framework. By creating a concrete implementation of the interface, what is needed and what is not becomes more obvious.
The way I tend to do it (which I by no means claim is the recommended way, just my way), is to write unit tests on concrete types, and introduce interfaces where needed for dependency injection purposes.
For instance, if the concrete type under test needs access to some data layer, I will create an interface for this data layer, create a mock implementation for the interface (or use a mocking framework), inject the mock implementation and run the tests. In this case the interface serves no purpose than offering an abstraction for the data layer.
I've never seen anything like this but it seems pointless. You would want to test the implementation of these interfaces, not the interfaces themselves.
Interfaces are about well designed contracts, not well-implemented ones. Since C# is not a dynamic language that would allow the interface to go un-implemented at runtime, this sort of test is not appropriate for the language. If it were Ruby or Perl, then maybe...
A contract is an idea. The soundness of an idea is something that requires the scrutiny of a human being at design time, not runtime or test time.
An implementation can be a "functional" set of empty stubs. That would still pass the "Interface" test, but would be a poor implementation of the contract. It still doesn't mean the contract is bad.
About all a specific interface test accomplishes is a reminder of the original intention, which simply requires you to change code in two places when your intentions change.
This is good practice if there are testable black box level requirements that implementers of your interface could reasonably be expected to pass. In such a case, you could create a test class specific to the interface, that would be used to test implementations of that interface.
public interface ArrayMangler
{
    void SetArray(Array myArray);
    Array GetSortedArray();
    Array GetReverseSortedArray();
}
You could write generic tests for ArrayMangler, and verify that arrays returned by GetSortedArray are indeed sorted, and GetReverseSortedArray are indeed sorted in reverse.
The tests could then be included when testing classes implementing ArrayMangler to verify the reasonably expected semantics are being met.
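A sketch of how such reusable contract tests might look (the abstract fixture and the int[] test data are assumptions; each implementation gets a derived fixture that supplies the concrete type):

using System;
using System.Linq;
using NUnit.Framework;

public abstract class ArrayManglerContractTests
{
    // Each implementation's test fixture derives from this and returns its own type.
    protected abstract ArrayMangler CreateMangler();

    [Test]
    public void GetSortedArray_ReturnsAscendingOrder()
    {
        var mangler = CreateMangler();
        mangler.SetArray(new[] { 3, 1, 2 });

        var sorted = mangler.GetSortedArray().Cast<int>().ToArray();

        CollectionAssert.AreEqual(new[] { 1, 2, 3 }, sorted);
    }

    [Test]
    public void GetReverseSortedArray_ReturnsDescendingOrder()
    {
        var mangler = CreateMangler();
        mangler.SetArray(new[] { 3, 1, 2 });

        var reversed = mangler.GetReverseSortedArray().Cast<int>().ToArray();

        CollectionAssert.AreEqual(new[] { 3, 2, 1 }, reversed);
    }
}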
In my opinion this is not the way to go. An interface is created as an act of refactoring (extract interface), not TDD. So you start by creating a class with TDD, and after that you extract an interface (if needed).
The compiler itself does the verification of the interface. TDD does the validation of the interface.
You may want to check out code contracts in C# 4, as you are bordering on that area in how you phrase the question. You seem to have bundled a few concepts together, and you are understandably confused.
The short answer to your question is that you've probably misheard/misunderstood it. TDD will drive the evolution of the Interface.
TDD tests the interface by verifying that coverage is achieved without involving the concrete types (the specific ones that implement the interface).
I hope this helps.
Interfaces are about relationships between objects, which means you can't "test-drive" an interface without knowing the context it's being called from. I use interface discovery when using TDD on the object that calls the interface, because the object needs a service from its environment. I don't buy that interfaces can only be extracted from classes, but that's another (and longer) discussion.
If you don't mind the commercial, there's more in our book at http://www.growing-object-oriented-software.com/

Moq how do you test internal methods?

I was told by my boss to use Moq and that is it.
I like it, but it seems that unlike MSTest or MbUnit etc., you cannot test internal methods.
So I am forced to make public some internal implementation in my interface so that I can test it.
Am I missing something?
Can you test internal methods using Moq?
Thanks a lot
You can use the InternalsVisibleTo attribute to make the methods visible to Moq.
http://geekswithblogs.net/MattRobertsBlog/archive/2008/12/16/how-to-make-a-quotprotectedquot-method-available-for-quotpartialquot-mocking-and-again.aspx
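A sketch of how that looks in practice (the assembly names are placeholders):

// In the production assembly, e.g. in AssemblyInfo.cs or any source file:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Tests")]

// If Moq needs to create proxies for internal types/interfaces, Castle DynamicProxy's
// generated assembly typically needs access as well:
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]

// For strong-named test assemblies, the full public key has to be appended:
// [assembly: InternalsVisibleTo("MyApp.Tests, PublicKey=<full public key>")]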
There is nothing wrong with making the internals visible to other classes for testing. If you need to test the internals of a class, by all means do so. Just because the methods are not public does not mean you should ignore them and test only the public ones. A well-designed application will actually have the majority of its code encapsulated within classes in such a way that it is not public. So ignoring the non-public methods in your testing is a big mistake, IMHO. One of the beauties of unit testing is that you test all the parts of your code, no matter how small, and when your tests are all up and running at 100%, it is a very reasonable assumption that when all these parts are put together, your application will work properly for the end user. Of course, verifying that latter part is where integration-level tests come in, which is a different discussion. So test away!!!
If you have a lot of code that isn't exercised through the public methods, you probably have code that should be moved to other classes.
As said in another answer, you can use the InternalsVisibleTo attribute for that. But that doesn't mean you should do it.
From my point of view, mocking should be used to simulate some behavior that we depend on but are not setting out to test. Hence:
Q: Am I missing something?
- No, you're not missing anything; Moq is missing the ability to mock private behaviors.
Q: Can you test internal methods using Moq?
- If the result of the private behavior is visible publicly, then yes, you can test the internal method, but it's not because of Moq that you can test it. The point I would like to make here is that mocking is not the ability to test, but rather the ability to simulate behaviors that we are not testing but depend on.
C: "A main benefit with TDD is that your code becomes easy to change. If you start testing internals, then the code becomes rigid and hard to change."
- I don't agree with this comment, for two main reasons:
1: It is not a beginner's misconception; TDD is not just about the ability to code faster but also about better-quality code. Hence, the more tests we can run, the better.
2: It doesn't make the code any harder to change if you can somehow test the internal methods.
Your initial presumption that it is necessary to test internal methods is a common beginner's misconception about unit testing.
Granted, there may be cases where private methods should be tested in isolation, but in the common 99% case the private methods are tested implicitly because they make the public methods pass their tests. The public methods call the private methods.
Private methods are there for a reason. If they do not result in external testable behaviour, then you don't need them.
Do any of your public tests fail if you just flat out delete them? If yes, then they are already being tested. If not, then why do you need them? Find out what you need them for and then express that in a test against the public interface.
A main benefit with TDD is that your code becomes easy to change. If you start testing internals, then the code becomes rigid and hard to change.
InternalsVisibleTo is your friend for testing internals.
Remember to sign your assemblies and you're safe.
