Is unit testing the definition of an interface necessary? - c#

I have occasionally heard or read about people asserting their interfaces in a unit test. I don't mean mocking an interface for use in another type's test, but specifically creating a test to accompany the interface.
Consider this ultra-lame and off-the-cuff example:
public interface IDoSomething
{
    string DoSomething();
}
and the test:
[TestFixture]
public class IDoSomethingTests
{
    [Test]
    public void DoSomething_Should_Return_Value()
    {
        var mock = new Mock<IDoSomething>();
        mock.Setup(m => m.DoSomething()).Returns("value");

        var actualValue = mock.Object.DoSomething();

        mock.Verify(m => m.DoSomething());
        Assert.AreEqual("value", actualValue);
    }
}
I suppose the idea is to use the test to drive the design of the interface and also to provide guidance for implementors on what's expected so they can draw good tests of their own.
Is this a common (recommended) practice?

In my opinion, just testing the interface using a mocking framework tests little else than the mocking framework itself. Nothing I would spend time on, personally.
I would say that what should drive the design of the interface is the functionality that is needed. I think it would be hard to identify that using only a mocking framework. By creating a concrete implementation of the interface, what is needed and what is not becomes more obvious.
The way I tend to do it (which I by no means claim is the recommended way, just my way), is to write unit tests on concrete types, and introduce interfaces where needed for dependency injection purposes.
For instance, if the concrete type under test needs access to some data layer, I will create an interface for this data layer, create a mock implementation of the interface (or use a mocking framework), inject the mock implementation and run the tests. In this case the interface serves no purpose other than offering an abstraction for the data layer.
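For illustration, here is a minimal sketch of that arrangement using NUnit and Moq; the IDataLayer, Customer and CustomerService names are invented for the example:
using Moq;
using NUnit.Framework;

public class Customer
{
    public string Name { get; set; }
}

// The interface exists only to abstract the data layer away for injection.
public interface IDataLayer
{
    Customer GetCustomer(int id);
}

// The concrete type under test; the unit tests are written against this class.
public class CustomerService
{
    private readonly IDataLayer _dataLayer;

    public CustomerService(IDataLayer dataLayer)
    {
        _dataLayer = dataLayer;
    }

    public string GetCustomerName(int id)
    {
        return _dataLayer.GetCustomer(id)?.Name ?? "unknown";
    }
}

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void GetCustomerName_Returns_Name_From_Data_Layer()
    {
        // The mock stands in for the real data layer, so no database is needed.
        var dataLayer = new Mock<IDataLayer>();
        dataLayer.Setup(d => d.GetCustomer(42)).Returns(new Customer { Name = "Ada" });

        var service = new CustomerService(dataLayer.Object);

        Assert.AreEqual("Ada", service.GetCustomerName(42));
    }
}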

I've never seen anything like this but it seems pointless. You would want to test the implementation of these interfaces, not the interfaces themselves.

Interfaces are about well designed contracts, not well-implemented ones. Since C# is not a dynamic language that would allow the interface to go un-implemented at runtime, this sort of test is not appropriate for the language. If it were Ruby or Perl, then maybe...
A contract is an idea. The soundness of an idea is something that requires the scrutiny of a human being at design time, not runtime or test time.
An implementation can be a "functional" set of empty stubs. That would still pass the "Interface" test, but would be a poor implementation of the contract. It still doesn't mean the contract is bad.
About all a dedicated interface test accomplishes is a reminder of the original intention, which simply requires you to change code in two places when your intentions change.

This is good practice if there are testable black box level requirements that implementers of your interface could reasonably be expected to pass. In such a case, you could create a test class specific to the interface, that would be used to test implementations of that interface.
public interface ArrayMangler
{
    void SetArray(Array myArray);
    Array GetSortedArray();
    Array GetReverseSortedArray();
}
You could write generic tests for ArrayMangler, and verify that arrays returned by GetSortedArray are indeed sorted, and GetReverseSortedArray are indeed sorted in reverse.
The tests could then be included when testing classes implementing ArrayMangler to verify the reasonably expected semantics are being met.
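For instance, a sketch of such a reusable contract fixture with NUnit; the IntArrayMangler implementation named at the bottom is hypothetical:
using System.Linq;
using NUnit.Framework;

[TestFixture]
public abstract class ArrayManglerContractTests
{
    // Each implementation's test fixture supplies its own instance.
    protected abstract ArrayMangler CreateMangler();

    [Test]
    public void GetSortedArray_Returns_Elements_In_Ascending_Order()
    {
        var mangler = CreateMangler();
        mangler.SetArray(new[] { 3, 1, 2 });

        var sorted = mangler.GetSortedArray().Cast<int>().ToArray();

        CollectionAssert.AreEqual(new[] { 1, 2, 3 }, sorted);
    }

    [Test]
    public void GetReverseSortedArray_Returns_Elements_In_Descending_Order()
    {
        var mangler = CreateMangler();
        mangler.SetArray(new[] { 3, 1, 2 });

        var reversed = mangler.GetReverseSortedArray().Cast<int>().ToArray();

        CollectionAssert.AreEqual(new[] { 3, 2, 1 }, reversed);
    }
}

// A concrete implementation opts in to the contract tests by inheriting them.
[TestFixture]
public class IntArrayManglerTests : ArrayManglerContractTests
{
    protected override ArrayMangler CreateMangler() => new IntArrayMangler();
}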

In my opinion, this is not the way to go. An interface is created as an act of refactoring (Extract Interface), not TDD. So you start by creating a class with TDD and after that you extract an interface (if needed).

The compiler itself does the verification of the interface. TDD does the validation of the interface.
You may want to check out Code Contracts in C# 4, as the way you phrase the question borders on that area. You seem to have bundled a few concepts together, which is understandably confusing.
The short answer to your question is that you've probably misheard/misunderstood it. TDD will drive the evolution of the Interface.
TDD tests the interface by verifying that coverage is achieved without involving the concrete types (the specific ones that implement the interface).
I hope this helps.

Interfaces are about relationships between objects, which means you can't "test-drive" an interface without knowing the context it's being called from. I use interface discovery when using TDD on the object that calls the interface, because the object needs a service from its environment. I don't buy that interfaces can only be extracted from classes, but that's another (and longer) discussion.
If you don't mind the commercial, there's more in our book at http://www.growing-object-oriented-software.com/


What is the philosophy behind the creation of the Interface infrastructure in OOP?

I believe we invent things for a reason: OOP came about because procedural programming didn't meet our needs, and the same goes for the interface, because other OOP features like abstract classes didn't meet our needs.
There are plenty of articles and guides about what an interface IS, CAN DO and HOW TO USE IT; however, I'm wondering what the actual philosophy behind the creation of the interface is. Why do we need interfaces?
Conceptually, an interface is a contract. It's a way of saying that anything implementing this interface is capable of doing this set of things.
Different languages have different things that interfaces can define, and different ways of defining them, but that concept remains.
Using interfaces allows you to not care how some particular task is completed; it allows you to just ensure that it is completed.
By allowing implementations to differ, and allowing the code to define just the smallest subset of what it needs, it allows you to generalize your code.
Perhaps you want to write a method to write a sequence of numbers on the screen. You don't want to go around writing such a method for an array, a set, a tree, and every one of the (many) other commonly used data structures. You don't need to care whether you're dealing with an array or a linked list; you just need some way of getting a sequence of items. Interfaces allow you to define just the minimal set of what you need, let's say a getNextItem method, and then if all of those data structures implement that method and interface they can use the one generalized method. That's much easier than writing a separate method for each type of data structure you want to use. (This isn't the only use of interfaces, just a common one.)
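A small sketch of that idea in C#; the ISequence name and its TryGetNextItem method are invented for the example, not a standard API:
using System;

// The minimal contract the printing code actually needs.
public interface ISequence
{
    bool TryGetNextItem(out int item);
}

public static class SequencePrinter
{
    // Works for an array-backed, linked-list-backed or tree-backed sequence alike,
    // as long as the data structure implements ISequence.
    public static void PrintAll(ISequence sequence)
    {
        while (sequence.TryGetNextItem(out var item))
        {
            Console.WriteLine(item);
        }
    }
}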
In Java, classes can inherit just from one class, but they can implement multiple interfaces. Interfaces are similar to abstract classes, but if a class extends an abstract class then that class can't extend any other class. Interfaces solve that problem, you can make a class extend an abstract class and implement many interfaces.
I completely agree with susomena, but that's not the only benefit you get, when using interfaces.
For example, in our current application, mocking plays an important role in unit testing. The philosophy of unit testing is that you should really test just the code of the unit itself. Sometimes, though, there are other dependencies the "unit under test" (SUT) needs to get, and maybe those dependencies have dependencies of their own, and so forth. So instead of laboriously building and configuring the whole dependency tree, you just fake the dependency in question. A lot of mocking frameworks need to be set up with the interface of the class the SUT depends on. It is usually possible to mock concrete classes, but in our case mocking concrete classes caused weird unit-test behaviour because of constructor calls; mocking interfaces didn't, because an interface has no constructor.
My personal rule for choosing an abstract class is when I am building a class hierarchy in which some default behaviour from the abstract base class is needed. If there is no default behaviour for the derived classes to inherit, I don't see any point in not choosing an interface over an abstract class.
And here is another (not too good) example of how to choose one technique over the other. Imagine you have a lot of animal classes like Cat and Dog. The abstract class Animal might implement this default method:
public virtual void Feed()
{
    Console.WriteLine("Feeding with meat");
}
That's alright if you have a lot of animals that are perfectly happy with meat. For the few animals that don't like meat, you'd just need to override Feed() with new behaviour.
But what if the animals are kind of gourmets, and the requirement is that every animal gets its preferred food? I'd rather choose an interface there, so the programmer is forced to implement a Feed() method for every single type of IAnimal.
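A rough sketch of the two options being contrasted (the fish-eating Cat is made up for the example):
using System;

// Option 1: abstract base class with a default implementation most animals share.
public abstract class Animal
{
    public virtual void Feed()
    {
        Console.WriteLine("Feeding with meat");
    }
}

// Option 2: interface with no default, so every animal must state its preferred food.
public interface IAnimal
{
    void Feed();
}

public class Cat : IAnimal
{
    public void Feed()
    {
        Console.WriteLine("Feeding with fish");
    }
}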
IMO the best text that describes interfaces is the Interface Segregation Principle (ISP) from Robert Martin.
The real power of interfaces comes from the fact that (1) you can treat an object as if it has many different types (because a class can implement different interfaces) and (2) you can treat objects from different hierarchy trees as if they have the same type (because unrelated classes can implement the same interface).
If you have a method with a parameter of some interface type (e.g., a Comparable), it means this method can accept any object that implements that interface, "ignoring" the class (e.g., a String or an Integer, two unrelated classes that implement Comparable).
So, an interface is a much more powerful abstraction than an abstract class.
Interfaces were brought into OOP for the sole reason of their use in the producer-consumer paradigm. Let me explain this with an example...
Suppose there is a vendor that supplies tyres to all the big-shot automobile companies. The automobile company is considered to be the CONSUMER and the tyre vendor is the PRODUCER. Now the consumer instructs the producer on the various specifications to which a tyre has to be produced (such as the diameter, the wheel base, etc.), and the producer must strictly adhere to all of these specs.
Let's have an analogy to OOP from this... Let us develop an application to implement a stack, for which you are developing the UI; and let us assume that you are using a stack library (as a .dll or a .class) to actually implement the stack. Here, you are the consumer and the person who actually wrote the stack program is the producer. Now, you specify the various specifications of the stack saying that it should have a provision to push elements and to pop elements and also a provision to peep at the current stack pointer. And you also specify the interface to access these provisions by specifying the return types and the parameters (prototype of functions) so that you know how to use them in your application.
The simplest way to achieve this is by creating an interface and asking the producer to implement it. That way, no matter what logic the producer uses (you are not bothered about the implementation as long as your needs are met one way or the other), he will implement push, pop and peep methods with the exact return types and parameters.
In other words, you make the producer strictly adhere to your specs, and to the way you access what you need, by making him implement your interface. You won't accept a stack from just any vendor if he doesn't implement your interface, because you cannot be sure it will suit your exact needs.
// The interface is defined by the consumer as per his needs; the producer's
// class must implement all three methods with the agreed signatures.
class CStack : StackInterface
{
    public bool push(int a)
    {
        // ...
    }

    public int pop()
    {
        // ...
    }

    public int peep()
    {
        // ...
    }
}

How to deal with interface overuse in TDD?

I've noticed that when I'm doing TDD it often leads to a very large amount of interfaces. For classes that have dependencies, they are injected through the constructor in the usual manner:
public class SomeClass
{
    public SomeClass(IDependencyA first, IDependencyB second)
    {
        // ...
    }
}
The result is that almost every class will implement an interface.
Yes, the code will be decoupled and can be tested very easily in isolation, but there will also be extra levels of indirection that just make me feel a little... uneasy. Something doesn't feel right.
Can anyone share other approaches that don't involve such heavy use of interfaces?
How are the rest of you guys doing?
Your tests are telling you to redesign your classes.
There are times when you can't avoid passing complex collaborators that need to be stubbed to make your classes testable, but you should look for ways to provide them with the outputs of those collaborators instead and think about how you could re-arrange their interactions to eliminate complex dependencies.
For example, instead of providing a TaxCalculator with an ITaxRateRepository (that hits a database during CalculateTaxes), obtain those values before creating your TaxCalculator instance and provide them to its constructor:
// Bad! (though necessary on occasion)
public TaxCalculator(ITaxRateRepository taxRateRepository) {}

// Good!
public TaxCalculator(IDictionary<Locale, TaxRate> taxRateDictionary) {}
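A sketch of what the calling code might then look like; the GetTaxRates method on ITaxRateRepository is assumed here for illustration:
// Hit the database once, up front, outside the calculator.
IDictionary<Locale, TaxRate> rates = taxRateRepository.GetTaxRates(); // GetTaxRates is hypothetical

// The calculator itself now needs no repository to stub in its tests.
var calculator = new TaxCalculator(rates);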
Sometimes this means you have to make bigger changes, adjust object lifetimes or restructure large swaths of code, but I've often found low-hanging fruit once I started looking for it.
For an excellent roundup of techniques for reducing your dependency on dependencies, see Mock Eliminating Patterns.
Don't use interfaces! Most mocking frameworks can mock concrete classes.
That's the drawback of mock-based testing approaches. This is as much a test-boundary discussion as it is about mocking. By having a 1:1 ratio of test cases to domain classes, your test boundary is very small. A result of a small test boundary is a proliferation of interfaces and tests that depend on them. Refactoring becomes more difficult due to the number of interactions you are mocking and stubbing out. By testing clusters of classes with a single test, refactoring becomes easier and you use fewer interfaces.
Beware, however, that you can test too many classes at once. The more complexity your classes have, the more code paths you need to test. This can lead to a combinatorial explosion, and you can't possibly test them all. Listen to the code and the tests; they're telling you something about your code. If you see the complexity increasing, it's probably a good time to introduce a new test case and interface/implementation and mock it out in your original test.
If you are feeling uneasy about the number of interfaces being passed into a particular class, then it is probably a sign that you are introducing too many disparate dependencies.
If SomeClass depends on IDependencyA, IDependencyB, and IDependencyC, this is an opportunity to see if you can extract out the logic that the class performs with those three interfaces into another class/interface, IDependencyABC.
Then when you are writing your tests for SomeClass, you only need to mock out the logic that IDependencyABC now provides.
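A hedged sketch of that extraction; the member names are invented and only the shape matters:
// One seam that hides the three original dependencies.
public interface IDependencyABC
{
    void DoCombinedWork();
}

public class DependencyABC : IDependencyABC
{
    private readonly IDependencyA _a;
    private readonly IDependencyB _b;
    private readonly IDependencyC _c;

    public DependencyABC(IDependencyA a, IDependencyB b, IDependencyC c)
    {
        _a = a;
        _b = b;
        _c = c;
    }

    public void DoCombinedWork()
    {
        // Orchestrate the three original dependencies here.
    }
}

// SomeClass now takes a single collaborator, which is all its tests have to mock.
public class SomeClass
{
    public SomeClass(IDependencyABC abc)
    {
        // ...
    }
}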
In addition, if you are still uncomfortable, maybe it is not an interface you require. For example, classes that contain state (parameters being passed around, for instance) could probably just be created and passed around as concrete classes instead. Jeff's answer alluded to this, where he mentions passing into your constructor ONLY what you need. This gives less coupling between your constructs and is a better indication of the intent of your class's needs. Just be careful passing around data structures (IDictionary<,>).
In the end, TDD is working when you get that warm fuzzy feeling during your cycles. If you feel uneasy, watch for some of the code smells and fix some of those issues to get back on track.

C# - Is systematically adding an interface a good practice?

In the project I'm working on, I've noticed that for every entity class there is an interface. It seems that the original motivation was to only expose interfaces to other project/solutions.
I find this completely useless, and I don't see the point in creating an interface for every class. By the way, those classes don't have any methods, just properties, and they don't share a common interface.
Am I wrong? Or is it a good practice?
Thx
I tend to create an interface for almost every class, mainly because of unit testing - if you use dependency injection and want to unit test a class that depends on the class in question, then the standard way is to mock an instance of the class in question (using one of the mocking frameworks, e.g. Rhino Mocks). However, this is practically only possible for interfaces, not concrete implementations (yes, theoretically you can mock a concrete class, but there are many painful limitations).
There may be more to the setup than described here that justifies the overhead of interfaces. Generally they're very useful for dependency injection and overall separation of concerns, unit testing and mocking, etc. It's entirely possible that they're not being used for this purpose (or any other constructive purpose, really) in your environment, though.
Is this generated code, or were these manually created? If the former, I suspect the tool generating them is doing so to prepare for such a use if the developer were so inclined. If the latter, maybe the original designer had something in mind?
For my own "best practices" I almost always do interface-driven development. It's generally a good practice to separate out concerns from one another and use the interfaces as contracts between them.
Exposing interfaces publicly has value in creating a loosely-coupled, behaviour-driven architecture.
Creating an interface for every class - especially if the interface just exposes every public method the class has in a single interface - is a bad implementation of the concept, and (in my experience) leads to more complex code and no improvement in architecture.
It's useful for tests.
A method may take a parameter of type ISomething, and it can be either SqlSomething or XmlSomething, where ISomething is the interface and SqlSomething and XmlSomething are classes that implement it, depending on whether you're running tests (you pass XmlSomething in this case) or the application (SqlSomething).
Also, when building a universal project that can work with any database, but you aren't using an ORM tool like LINQ to SQL (maybe because the database engine doesn't support it), you define interfaces with the methods that you use in the application. Later on, developers implement those interfaces to work with the database, creating a MySQLProductRepository class and a PostgreSQLProductRepository class that both implement the same interface but have different functionality.
In the application code, a method takes as a parameter a repository object of type IProductRepository, which can be any of those implementations.
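A sketch of that shape; the method names and the Product type are illustrative, and a PostgreSQLProductRepository would look analogous:
using System;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product GetById(int id);
    void Save(Product product);
}

// One implementation per database engine; each satisfies the same contract.
public class MySQLProductRepository : IProductRepository
{
    public Product GetById(int id)
    {
        // MySQL-specific data access would go here.
        throw new NotImplementedException();
    }

    public void Save(Product product)
    {
        // MySQL-specific data access would go here.
        throw new NotImplementedException();
    }
}

// Application code depends only on the interface.
public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public Product Load(int id)
    {
        return _repository.GetById(id);
    }
}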
IMHO writing interfaces for no reason is pointless. You can't be totally closed-minded, but in general things that are not immediately useful tend to accumulate as waste.
The agile idea that everything is either adding value or taking value away comes to mind.
What happens when you remove them? If nothing then ... what are they there for?
As a side note, interfaces are extremely useful for Rhino Mocks, dependency injection and so on...
If those classes only have properties, then interfaces don't add much value, because there's no behavior that is being abstracted.
Interfaces can be useful for abstraction, so the implementation can be mocked in unit tests. But in a well-designed application the business/domain entities should have very few reasons to be mocked. Business/domain services, on the other hand, are an excellent candidate for interface abstraction.
I have created interfaces for my entities once, and it didn't add any value at all. It only made me realize my design was wrong.
It seems to me an interface is superior to an abstract base class primarily when it is necessary to have a class that implements the interface but inherits from some other base class. Multiple inheritance is not allowed, but implementing multiple interfaces is.
The main caveat I see with using interfaces rather than abstract classes (beyond the extra source code required) is that changing anything in an interface necessitates recompilation of any and all code which uses that interface. By contrast, adding public members to a base class generally only requires recompilation of the base class itself.(*)
(*) Due to the way extension methods are handled, adding members to a class won't "require" recompiling code which uses that class, but may cause code which uses extension methods on the class to change meaning the next time it (the extension-method-using code) is recompiled.
There is no way to tell the future and see if you're going to need to program against an interface down-the-road. But if you decide later to make everything use an interface and, say, a factory to create instances of unknown types (any type that implements the interface), then it is quicker to restrict everyone to programming against an interface and a factory up-front than to replace references to MyImpl with references to IMyInterface later, etc.
So when writing new software, it is a judgment call whether to program against an interface or an implementation, unless you are familiar with what is likely to happen to that kind of software based on previous experiences.
I usually keep it "in flux" for a time whether or not I have an interface, a base class, or both, and even whether the base class is abstract (it usually is). I will work on a project (usually a Visual Studio solution with about 3 to 10 projects in it) for a while (a couple of days, maybe) before I refactor and / or ask for a second opinion. Once a final decision is reached and the code is refactored and tested, I tell fellow devs that it is ready for use.
For unit testing, it's either interfaces everywhere or virtual methods everywhere.
Sometimes I miss Java :)

Is it the best practice to extract an interface for every class?

I have seen code where every class has an interface that it implements.
Sometimes there is no common interface for them all.
They are just there and they are used instead of concrete objects.
They do not offer a generic interface for two classes and are specific to the domain of the problem that the class solves.
Is there any reason to do that?
No.
Interfaces are good for classes with complex behaviour, and are especially handy if you want to be able to create a mock or fake implementation class of that interface for use in unit tests.
But some classes don't have a lot of behaviour; they can be treated more like values and usually consist of a set of data fields. There's little point in creating interfaces for classes like this, because doing so introduces unnecessary overhead when there's no real need to mock or provide alternative implementations. For example, consider a class:
class Coordinate
{
    public Coordinate(int x, int y);
    public int X { get; }
    public int Y { get; }
}
You're unlikely to want an interface ICoordinate to go with this class, because there's little point in implementing it in any other way than simply getting and setting X and Y values.
However, the class
class RoutePlanner
{
    // Return a new list of coordinates ordered to be the shortest route that
    // can be taken through all of the passed-in coordinates.
    public List<Coordinate> GetShortestRoute(List<Coordinate> waypoints);
}
you probably would want an IRoutePlanner interface for RoutePlanner because there are many different algorithms that could be used for planning a route.
Also, if you had a third class:
class RobotTank
{
    public RobotTank(IRoutePlanner routePlanner);
    public void DriveRoute(List<Coordinate> points);
}
By giving RoutePlanner an interface, you could write a test method for RobotTank that creates one with a mock RoutePlanner that just returns a list of coordinates in no particular order. This would allow the test method to check that the tank navigates correctly between the coordinates without also testing the route planner. This means you can write a test that just tests one unit (the tank), without also testing the route planner.
You'll see, though, that it's quite easy to feed real Coordinates into a test like this without needing to hide them behind an ICoordinate interface.
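A sketch of such a test with NUnit and Moq, assuming RobotTank asks the injected IRoutePlanner to order the waypoints inside DriveRoute:
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class RobotTankTests
{
    [Test]
    public void DriveRoute_Uses_The_Route_Returned_By_The_Planner()
    {
        var waypoints = new List<Coordinate> { new Coordinate(0, 0), new Coordinate(5, 5) };

        var planner = new Mock<IRoutePlanner>();
        planner.Setup(p => p.GetShortestRoute(waypoints)).Returns(waypoints);

        var tank = new RobotTank(planner.Object);
        tank.DriveRoute(waypoints);

        // The planner itself is not under test; we only check the tank consulted it.
        planner.Verify(p => p.GetShortestRoute(waypoints), Times.Once());
        // Further assertions would inspect whatever the real RobotTank exposes about
        // its movement, which is outside this sketch.
    }
}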
After revisiting this answer, I've decided to amend it slightly.
No, it's not best practice to extract interfaces for every class. This can actually be counterproductive. However, interfaces are useful for a few reasons:
Test support (mocks, stubs).
Implementation abstraction (furthering onto IoC/DI).
Ancillary things like co- and contra-variance support in C#.
For achieving these goals, interfaces are considered good practice (and are actually required for the last point). Depending on the project size, you will find that you may never need to talk to an interface, or that you are constantly extracting interfaces for one of the above reasons.
We maintain a large application, some parts of it are great and some are suffering from lack of attention. We frequently find ourselves refactoring to pull an interface out of a type to make it testable or so we can change implementations whilst lessening the impact of that change. We also do this to reduce the "coupling" effect that concrete types can accidentally impose if you are not strict on your public API (interfaces can only represent a public API so for us inherently become quite strict).
That said, it is possible to abstract behaviour without interfaces and possible to test types without needing interfaces, so they are not a requirement to the above. It is just that most frameworks / libraries that you may use to support you in those tasks will operate effectively against interfaces.
I'll leave my old answer for context.
Interfaces define a public contract. People implementing interfaces have to implement this contract. Consumers only see the public contract. This means the implementation details have been abstracted away from the consumer.
An immediate use for this these days is Unit Testing. Interfaces are easy to mock, stub, fake, you name it.
Another immediate use is Dependency Injection. A registered concrete type for a given interface is provided to a type consuming an interface. The type doesn't care specifically about the implementation, so it can abstractly ask for the interface. This allows you to change implementations without impacting lots of code (the impact area is very small so long as the contract stays the same).
For very small projects I tend not to bother, for medium projects I tend to bother on important core items, and for large projects there tends to be an interface for almost every class. This is almost always to support testing, but in some cases of injected behaviour, or abstraction of behaviour to reduce code duplication.
Let me quote OO guru, Martin Fowler, to add some solid justification to the most common answer in this thread.
This excerpt comes from "Patterns of Enterprise Application Architecture" (enlisted in the "classics of programming" and/or the "every dev must read" book category).
[Pattern] Separated Interface
(...)
When to Use It
You use Separated Interface when you need to break a dependency between two parts of the system.
(...)
I come across many developers who have separate interfaces for every class they write. I think this is excessive, especially for application development. Keeping separate interfaces and implementations is extra work, especially since you often need factory classes (with interfaces and implementations) as well. For applications I recommend using a separate interface only if you want to break a dependency or you want to have multiple independent implementations. If you put the interface and implementation together and need to separate them later, this is a simple refactoring that can be delayed until you need to do it.
Answering your question: no
I've seen some of the "fancy" code of this type myself, where the developer thinks he's being SOLID, but the result is unintelligible, difficult to extend and too complex.
There's no practical reason for extracting interfaces for each class in your project; that would be overkill. The reason they are extracting interfaces is probably that they are trying to follow the OOAD principle "program to an interface, not to an implementation". You can find more information about this principle, with an example, here.
Having the interface and coding to the interface makes it a ton easier to swap out implementations. This also applies with unit testing. If you are testing some code that uses the interface, you can (in theory) use a mock object instead of a concrete object. This allows your test to be more focused and finer grained.
From what I have seen, it is more common to switch out implementations for testing (mocks) than in actual production code. And yes, it is worth it for unit testing.
I like interfaces on things that could be implemented two different ways, either in time or space, i.e. either it could be implemented differently in the future, or there are 2 different code clients in different parts of the code which may want a different implementation.
The original writer of your code might have just been robo-coding, or they were being clever and preparing for version resilience, or prepping for unit testing. More likely the former, because version resilience is an uncommon need (i.e. where the client is deployed and can't be changed, and a component must be deployed that stays compatible with the existing client).
I like interfaces on things that are dependencies worth isolating from some other code I plan to test. If these interfaces weren't created to support unit tests either, then I'm not sure they're such a good idea. Interfaces have a cost to maintain, and when it comes time to make an object swappable with another, you might want an interface that covers only a few methods (so more classes can implement it), or it might be better to use an abstract class (so that default behaviours can be implemented in an inheritance tree).
So creating interfaces before they are needed is probably not a good idea.
It is part of the Dependency Inversion Principle. Basically, code depends on the interfaces and not on the implementations.
This allows you to easy swap the implementations in and out without affecting the calling classes. It allows for looser coupling which makes maintenance of the system much easier.
As your system grows and gets more complex, this principle keeps making more and more sense!
I don't think it's reasonable for every class.
It's a matter of how much reuse you expect from what type of component. Of course, you have to plan for more reuse (without the need for major refactoring later) than you are really going to use at the moment, but extracting an abstract interface for every single class in a program would give you more abstraction than you actually need.
Interfaces define behaviour. If you implement one or more interfaces, your object behaves as those interfaces describe. This allows loose coupling between classes, which is really useful when you have to replace an implementation with another one. Communication between classes should always be done through interfaces, except when the classes are really tightly bound to each other.
There might be, if you want to be sure to be able to inject other implementations in the future. In some (maybe most) cases this is overkill, but as with most habits, if you're used to it, you don't lose very much time doing it. And since you can never be sure what you'll want to replace in the future, extracting an interface on every class does have a point.
There is never only one solution to a problem. Thus, there could always be more than one implementation of the same interface.
It might seem silly, but the potential benefit of doing it this way is that if at some point you realize there's a better way to implement a certain functionality, you can just write a new class that implements the same interface, and change one line to make all of your code use that class: the line where the interface variable is assigned.
Doing it this way (writing a new class that implements the same interface) also means you can always switch back and forth between old and new implementations to compare them.
It may end up that you never take advantage of this convenience and your final product really does just use the original class that was written for each interface. If that's the case, great! But it really didn't take much time to write those interfaces, and had you needed them, they would've saved you a lot of time.
Interfaces are good to have since you can mock the classes when unit testing.
I create interfaces for at least all classes that touch external resources (e.g. database, filesystem, web service) and then write a mock or use a mocking framework to simulate the behaviour.
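For example, a small sketch of wrapping the filesystem behind an interface; IFileStore is an invented name, not a BCL type:
using System.IO;

public interface IFileStore
{
    string ReadAllText(string path);
}

// Production implementation touches the real filesystem.
public class DiskFileStore : IFileStore
{
    public string ReadAllText(string path)
    {
        return File.ReadAllText(path);
    }
}

// Hand-written fake used in tests so no real files are needed.
public class FakeFileStore : IFileStore
{
    public string Contents { get; set; } = "";

    public string ReadAllText(string path)
    {
        return Contents;
    }
}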
Why do you need interfaces? Think practically and deeply. Interfaces are not really attached to classes; rather, they are attached to services. The goal of an interface is to define what you allow others to do with your code without handing them the code itself. So it relates to the service and its management.
See ya

c# when to program to an interface?

Ok the great thing about programming to an interface is that it allows you to interchange specific classes as long as the new classes implement everything in that interface.
E.g. I program my dataSource object to an interface so I can switch between an XML reader and a SQL database reader.
Does this mean ideally every class should be programmed to an interface?
When is it not a good idea to use an interface?
When the YAGNI principle applies.
Interfaces are great but it's up to you to decide when the extra time it takes developing one is going to pay off. I've used interfaces plenty of times but there are far more situations where they are completely unnecessary.
Not every class needs to be flexibly interchanged with some other class. Your system design should identify the points where modules might be interchangeable, and use interfaces accordingly. It would be silly to pair every class with an additional interface file if there's no chance of that class ever being part of some functional group.
Every interface you add to your project adds complexity to the codebase. When you deal with interfaces, discoverability of how the program works is harder, because it's not always clear which IComponent is filling in for the job when consumer code is dealing with the interface explicitly.
IMHO, you should try to use interfaces a lot. It's easier to be wrong by not using an interface than by using it.
My main argument is that interfaces help you write more testable code. If a class constructor or a method takes a concrete class as a parameter, it is harder (especially in C#, where free mocking frameworks can't mock non-virtual methods of concrete classes) to write tests that are REAL unit tests.
I believe that if you have a DTO-like object, then it's overkill to use an interface, since mocking it may be even harder than just creating one.
If you're not testing, not using dependency injection or inversion of control, and expect never to do any of these (please avoid being there, hehe), then I'd suggest using interfaces only when you really need different implementations, or when you want to limit the visibility one class has into another.
Use an interface when you expect different behaviours to be used in the same context. I.e. if your system needs one customer class which is well defined, you probably don't need an ICustomer interface. But if you expect a class to comply with a certain behaviour such as "object can be saved", which applies to different kinds of objects, then you should have the class implement an ISavable interface.
Another good reason to use an interface is if you expect different implementations of one kind of object. For example, if you plan an SMS gateway which will route SMSs through several different third-party services, your classes should probably implement a common interface such as ISmsGatewayAdapter, so your core system is independent from the specific implementation you use.
This also leads to 'dependency injection', which is a technique to further decouple your classes and which is best implemented by using interfaces.
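A rough sketch of that gateway abstraction; the provider and member names are illustrative:
public interface ISmsGatewayAdapter
{
    void Send(string phoneNumber, string message);
}

// One adapter per third-party service; more can be added without touching the core.
public class AcmeSmsGateway : ISmsGatewayAdapter
{
    public void Send(string phoneNumber, string message)
    {
        // Call the Acme provider's API here.
    }
}

// The core system depends only on the interface, which is what makes
// constructor-based dependency injection straightforward.
public class AlertService
{
    private readonly ISmsGatewayAdapter _gateway;

    public AlertService(ISmsGatewayAdapter gateway)
    {
        _gateway = gateway;
    }

    public void NotifyOutage(string phoneNumber)
    {
        _gateway.Send(phoneNumber, "Service outage in progress");
    }
}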
The real question is: what does your class DO? If you're writing a class that actually implements an interface somewhere in the .NET framework, declare it as such! Almost all simple library classes will fit that description.
If, instead, you're writing an esoteric class used only in your application and that cannot possibly take any other form, then it makes no sense to talk about what interfaces it implements.
Starting from the premise of, "should I be implementing an interface?" is flawed. You neither should be nor shouldn't be. You should simply be writing the classes you need, and declaring what they do as you go, including what interfaces they implement.
I prefer to code as much as possible against an interface. I like it because I can use a tool like StructureMap to say "hey... get me an instance of IWidget" and it does the work for me. By using a tool like this I can specify, programmatically or by configuration, which instance is retrieved. This means that when I am testing I can load up a mock object that conforms to the interface, in my development environment I can load up a special local cache, when I am in production I can load up a caching farm layer, and so on. Programming against an interface provides me a lot more power than not programming against an interface. Better to have and not need than need and not have applies here very well. And if you are into SOLID programming, the easiest way to achieve many of those principles starts with programming against an interface.
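A hedged sketch of that kind of registration with StructureMap's container API; IWidget and ConcreteWidget are placeholder names, and the exact API varies by StructureMap version:
using StructureMap;

public interface IWidget { }
public class ConcreteWidget : IWidget { }

public static class CompositionRoot
{
    public static IWidget ResolveWidget()
    {
        // In tests this registration could point at a mock; in production,
        // at a cached or remote implementation, without changing consumers.
        var container = new Container(cfg =>
        {
            cfg.For<IWidget>().Use<ConcreteWidget>();
        });

        return container.GetInstance<IWidget>();
    }
}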
As a general rule of thumb, I think you're better off overusing interfaces a bit than underusing them a bit. Err on the side of interface use.
Otherwise, YAGNI applies.
If you are using Visual Studio, it takes about two seconds to take your class and extract an interface (via the context menu). You can then code to that interface, and hardly any time was spent.
If you are just doing a simple project, then it may be overkill. But on medium+ size projects, I try to code to interfaces throughout the project, as it will make future development easier.
