How to use strategy pattern with Ninject - c#

I have two repositories: AlbumRepository and CachedAlbumRepository, both implementing IAlbumRepository. CachedAlbumRepository takes an IAlbumRepository as a constructor parameter. With Ninject, I need IAlbumRepository to resolve to CachedAlbumRepository, with its constructor receiving an AlbumRepository.
How to achieve it with Ninject?
The same approach with StructureMap would be:
x.For<IAlbumRepository>().Use<CachedAlbumRepository>()
    .Ctor<IAlbumRepository>().Is<EfAlbumRepository>();
public class CachedAlbumRepository : IAlbumRepository
{
    private static readonly object CacheLockObject = new object();

    private readonly IAlbumRepository _albumRepository;

    public CachedAlbumRepository(IAlbumRepository albumRepository)
    {
        _albumRepository = albumRepository;
    }

    public IEnumerable<Album> GetTopSellingAlbums(int count)
    {
        string cacheKey = "TopSellingAlbums-" + count;

        var result = HttpRuntime.Cache[cacheKey] as List<Album>;
        if (result == null)
        {
            lock (CacheLockObject)
            {
                result = HttpRuntime.Cache[cacheKey] as List<Album>;
                if (result == null)
                {
                    result = _albumRepository.GetTopSellingAlbums(count).ToList();
                    HttpRuntime.Cache.Insert(cacheKey, result, null,
                        DateTime.Now.AddSeconds(60), TimeSpan.Zero);
                }
            }
        }

        return result;
    }
}

You need to create two bindings: one that says "inject CachedAlbumRepository into anything that needs an IAlbumRepository", and another that says "inject a plain AlbumRepository into CachedAlbumRepository". These bindings do that:
Bind<IAlbumRepository>()
    .To<CachedAlbumRepository>();

Bind<IAlbumRepository>()
    .To<AlbumRepository>()
    .WhenInjectedInto<CachedAlbumRepository>();
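For illustration, here is the object graph those two bindings produce, hand-composed in plain C# (the Album data, the in-memory AlbumRepository, and the dictionary-backed cache are stand-ins invented for this sketch; the real class uses HttpRuntime.Cache):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Album { public string Title; public int Sales; }

public interface IAlbumRepository
{
    IEnumerable<Album> GetTopSellingAlbums(int count);
}

// Hypothetical in-memory stand-in for the real (database-backed) AlbumRepository.
public class AlbumRepository : IAlbumRepository
{
    public IEnumerable<Album> GetTopSellingAlbums(int count) =>
        new[]
        {
            new Album { Title = "A", Sales = 10 },
            new Album { Title = "B", Sales = 5 },
        }
        .OrderByDescending(a => a.Sales)
        .Take(count);
}

public class CachedAlbumRepository : IAlbumRepository
{
    private readonly IAlbumRepository _inner;
    // A plain dictionary stands in for HttpRuntime.Cache in this sketch.
    private readonly Dictionary<int, List<Album>> _cache =
        new Dictionary<int, List<Album>>();

    public CachedAlbumRepository(IAlbumRepository inner)
    {
        _inner = inner;
    }

    public IEnumerable<Album> GetTopSellingAlbums(int count)
    {
        if (!_cache.TryGetValue(count, out var result))
        {
            result = _inner.GetTopSellingAlbums(count).ToList();
            _cache[count] = result;
        }
        return result;
    }
}

public static class CompositionRoot
{
    // What the two bindings wire up when IAlbumRepository is requested:
    public static IAlbumRepository Resolve() =>
        new CachedAlbumRepository(new AlbumRepository());
}
```

Requesting IAlbumRepository yields the caching decorator, while the decorator itself receives the undecorated AlbumRepository.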

I can't answer the question for you, but I have some feedback for you.
Your application design misses a great opportunity: the opportunity to be and stay maintainable. Since you have defined a decorator (the CachedAlbumRepository is a decorator), you will probably start writing decorators for other repositories as well. I can imagine you having a decorator for your IArtistRepository, IRecordLabelRepository, and so on.
Having to implement these duplicate repositories is a violation of the DRY principle. But the violation of DRY is actually caused by a violation of some other principles. Your design violates some of the SOLID principles, namely:
Your design violates the Single Responsibility Principle, because the query methods that you'll place inside your repository (such as the GetTopSellingAlbums method) are not very cohesive. In other words, your repository classes will get big and will do too much and start to get hard to read, hard to test, hard to change, and hard to maintain.
Your design violates the Open/Closed Principle, since you will have to alter a repository every time you add a new query to the system. This means changing the interface, changing the decorator, changing the real implementation, and changing every fake implementation that exists in the system.
Your design violates the Interface Segregation Principle, because your repository interfaces will get wide (will have many methods) and consumers of those interfaces are forced to depend on methods that they don’t use. This makes it harder and harder to implement decorators and write fake objects.
The solution to this problem is to hide all repositories behind one single generic abstraction:
public interface IRepository<TEntity>
{
    void Save(TEntity entity);
    TEntity Get(Guid id);
}
Since this is a generic interface, it doesn't give you any room to add any entity-specific query methods, and this is good. It is good, because the IRepository<T> will be narrow and stable. This makes it really easy to add decorators to it (if you still need to add decorators here).
The trick is to prevent adding query methods to this interface (and don't inherit new interfaces from this interface), but to give each query in the system its own class. Or in fact, two classes. One class with the definition of the data, and one class that knows how to execute that query. And last but not least, you can hide each class behind the same generic abstraction for queries (just as we have one generic abstraction over repositories). And when you do this, you just have to define one single caching decorator that you can apply to any subset of queries in the system.
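A minimal sketch of that shape, with hypothetical names (the query, its handler, and the in-memory data are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One narrow, stable abstraction over every query in the system.
public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

// The query's data is one class (with value equality, so equal data caches alike)...
public class GetTopSellingAlbums : IQuery<IReadOnlyList<string>>
{
    public int Count { get; set; }
    public override bool Equals(object obj) =>
        obj is GetTopSellingAlbums q && q.Count == Count;
    public override int GetHashCode() => Count;
}

// ...and the class that knows how to execute it is another.
public class GetTopSellingAlbumsHandler
    : IQueryHandler<GetTopSellingAlbums, IReadOnlyList<string>>
{
    public int Executions; // exposed so the sketch can show the decorator caching

    public IReadOnlyList<string> Handle(GetTopSellingAlbums query)
    {
        Executions++;
        // Hypothetical data standing in for the database.
        return new[] { "Album1", "Album2", "Album3" }.Take(query.Count).ToList();
    }
}

// One caching decorator now serves any query in the system.
public class CachingQueryHandler<TQuery, TResult> : IQueryHandler<TQuery, TResult>
    where TQuery : IQuery<TResult>
{
    private readonly IQueryHandler<TQuery, TResult> _inner;
    private readonly Dictionary<TQuery, TResult> _cache =
        new Dictionary<TQuery, TResult>();

    public CachingQueryHandler(IQueryHandler<TQuery, TResult> inner)
    {
        _inner = inner;
    }

    public TResult Handle(TQuery query)
    {
        // Value equality on the query object makes it usable as a cache key.
        if (!_cache.TryGetValue(query, out var result))
        {
            result = _inner.Handle(query);
            _cache[query] = result;
        }
        return result;
    }
}
```

The caching decorator is written once and can wrap any IQueryHandler<TQuery, TResult>, which is exactly what the single-album-repository decorator could not give you.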
You can read in detail about this design here. This might seem a bit abstract at first, but I promise you, once you get the hang of it, there's no way you're ever going back to your old design.

Related

How does the SOLID open/closed principle fit in with Dependency Injection and dependency inversion

I am starting to apply SOLID principles, and am finding them slightly contradictory. My issue is as follows:
My understanding of dependency inversion principle is that classes should depend on abstractions. In practice this means classes should be derived from interfaces. All fine so far.
Next my understanding of the open/closed principle is that after a certain cut off point, you should not alter the contents of a class, but should extend and override. This makes sense so far to me.
So given the above, I would end up with something like this:
public interface IAbstraction
{
    string method1(int example);
}

public class Abstraction : IAbstraction
{
    public virtual string method1(int example)
    {
        return example.ToString();
    }
}
and then at time T, method1 now needs to add " ExtraInfo" onto its returned value. Rather than altering the current implementation, I would create a new class that extends Abstraction and make it do what I needed, as follows.
public class AbstractionV2 : Abstraction
{
    public override string method1(int example)
    {
        return example.ToString() + " ExtraInfo";
    }
}
And I can see the reason for doing this is that only the code I want to call this updated method will call it, and the rest of the code will call the old method.
This all makes sense to me - and I assume my understanding is correct?
However, I am also using dependency injection (simple injector), so my implementations are never through a concrete class, but instead are through my DI configuration, as follows:
container.Register<IAbstraction, Abstraction>();
The issue here is that under this setup, I can either update my DI config to be:
container.Register<IAbstraction, AbstractionV2>();
In which case all instances will now call the new method, meaning I have failed to leave the original method unchanged.
OR
I create a new interface IAbstractionV2 and implement the updated functionality there - meaning duplication of the interface declaration.
I cannot see any way around this - which leads me to wonder if dependency injection and SOLID are compatible? Or am I missing something here?
TL;DR
When we say that code is "available for extension" that doesn't automatically mean that we inherit from it or add new methods to existing interfaces. Inheritance is only one way to "extend" behavior.
When we apply the Dependency Inversion Principle we don't depend directly on other concrete classes, so we don't need to change those implementations if we need them to do something different. And classes that depend on abstractions are extensible because substituting implementations of abstractions gets new behavior from existing classes without modifying them.
(I'm half inclined to delete the rest because it says the same thing in lots more words.)
Examining this sentence may help to shed some light on the question:
and then at time T, method1 now needs to add " ExtraInfo" onto its returned value.
This may sound like it's splitting hairs, but a method never needs to return anything. Methods aren't like people who have something to say and need to say it. The "need" rests with the caller of the method. The caller needs what the method returns.
If the caller was passing int example and receiving example.ToString(), but now it needs to receive example.ToString() + " ExtraInfo", then it is the need of the caller that has changed, not the need of the method being called.
If the need of the caller has changed, does it follow that the needs of all callers have changed? If you change what the method returns to meet the needs of one caller, other callers might be adversely affected. That's why you might create something new that meets the need of one particular caller while leaving the existing method or class unchanged. In that sense the existing code is "closed" while at the same time its behavior is open to extension.
Also, extending existing code doesn't necessarily mean modifying a class, adding a method to an interface, or inheriting. It just means that it incorporates the existing code while providing something extra.
Let's go back to the class you started with.
public class Abstraction : IAbstraction
{
    public virtual string method1(int example)
    {
        return example.ToString();
    }
}
Now you have a need for a class that includes the functionality of this class but does something different. It could look like this. (In this example it looks like overkill, but in a real-world example it wouldn't.)
public class SomethingDifferent : IAbstraction
{
    private readonly IAbstraction _inner;

    public SomethingDifferent(IAbstraction inner)
    {
        _inner = inner;
    }

    public string method1(int example)
    {
        return _inner.method1(example) + " ExtraInfo";
    }
}
In this case the new class happens to implement the same interface, so now you've got two implementations of the same interface. But it doesn't need to. It could be this:
public class SomethingDifferent
{
    private readonly IAbstraction _inner;

    public SomethingDifferent(IAbstraction inner)
    {
        _inner = inner;
    }

    public string DoMyOwnThing(int example)
    {
        return _inner.method1(example) + " ExtraInfo";
    }
}
You could also "extend" the behavior of the original class through inheritance:
public class AbstractionTwo : Abstraction
{
    public override string method1(int example)
    {
        return base.method1(example) + " ExtraInfo";
    }
}
All of these examples extend existing code without modifying it. In practice it may at times be beneficial to add new properties and methods to existing classes, but even then we'd like to avoid modifying the parts that are already doing their jobs. And if we're writing simple classes with single responsibilities, then we're less likely to find ourselves throwing the kitchen sink into an existing class.
What does that have to do with the Dependency Inversion Principle, or depending on abstractions? Nothing directly, but applying the Dependency Inversion Principle can help us to apply the Open/Closed Principle.
Where practical, the abstractions that our classes depend on should be designed for the use of those classes. We're not just taking whatever interface someone else has created and sticking it into our central classes. We're designing the interface that meets our needs and then adapting other classes to fulfill those needs.
For example, suppose Abstraction and IAbstraction are in your class library, I happen to need something that formats numbers a certain way, and your class looks like it does what I need. I'm not just going to inject IAbstraction into my class. I'm going to write an interface that does what I want:
public interface IFormatsNumbersTheWayIWant
{
    string FormatNumber(int number);
}
Then I'm going to write an implementation of that interface that uses your class, like:
public class YourAbstractionNumberFormatter : IFormatsNumbersTheWayIWant
{
    public string FormatNumber(int number)
    {
        return new Abstraction().method1(number) + " my string";
    }
}
(Or it could depend on IAbstraction using constructor injection, whatever.)
If I wasn't applying the Dependency Inversion Principle and I depended directly on Abstraction, then I'd have to figure out how to change your class to do what I need. But because I'm depending on an abstraction that I created to meet my needs, I'm automatically thinking of how to incorporate the behavior of your class, not change it. And once I do that, I obviously wouldn't want the behavior of your class to change unexpectedly.
I could also depend on your interface - IAbstraction - and create my own implementation. But creating my own also helps me adhere to the Interface Segregation Principle. The interface I depend on was created for me, so it won't have anything I don't need. Yours might have other stuff I don't need, or you could add more in later.
Realistically we're at times just going to use abstractions that were given to us, like IDataReader. But hopefully that's later when we're writing specific implementation details. When it comes to the primary behaviors of the application (if you're doing DDD, the "domain") it's better to define the interfaces our classes will depend on and then adapt outside classes to them.
Finally, classes that depend on abstractions are also more extensible because we can substitute their dependencies - in effect altering (extending) their behavior without any change to the classes themselves. We can extend them instead of modifying them.
Addressing the exact problem you mentioned:
You have classes that depend on IAbstraction and you've registered an implementation with the container:
container.Register<IAbstraction, Abstraction>();
But you're concerned that if you change it to this:
container.Register<IAbstraction, AbstractionV2>();
then every class that depends on IAbstraction will get AbstractionV2.
You shouldn't need to choose one or the other. Most DI containers provide ways that you can register more than one implementation for the same interface, and then specify which classes get which implementations. In your scenario where only one class needs the new implementation of IAbstraction you might make the existing implementation the default, and then just specify that one particular class gets a different implementation.
I couldn't find an easy way to do this with SimpleInjector. Here's an example using Windsor:
var container = new WindsorContainer();
container.Register(
    Component.For<ISaysHello>().ImplementedBy<SaysHelloInSpanish>().IsDefault(),
    Component.For<ISaysHello>().ImplementedBy<SaysHelloInEnglish>().Named("English"),
    Component.For<ISaysSomething>().ImplementedBy<SaysSomething>()
        .DependsOn(Dependency.OnComponent(typeof(ISaysHello), "English")));
Every class that depends on ISaysHello will get SaysHelloInSpanish except for SaysSomething. That one class gets SaysHelloInEnglish.
UPDATE:
The Simple Injector equivalent is the following:
var container = new Container();

container.Register<ISaysSomething, SaysSomething>();

container.RegisterConditional<ISaysHello, SaysHelloInEnglish>(
    c => c.Consumer.ImplementationType == typeof(SaysSomething));
container.RegisterConditional<ISaysHello, SaysHelloInSpanish>(
    c => c.Consumer.ImplementationType != typeof(SaysSomething));
Modules become closed to modification once they are referenced by other modules. What becomes closed is the public API, the interface. Behavior can be changed via polymorphic substitution (implementing the interface in a new class and injecting it). Your IoC container can inject this new implementation. This ability to polymorphically substitute is the 'Open to extension' part. So, DIP and Open/Closed work together nicely.
See Wikipedia:"During the 1990s, the open/closed principle became popularly redefined to refer to the use of abstracted interfaces..."

Is the purpose of Dependency Injection pattern lost when you cast the parent interface to its child in C#?

I have a Data Repository interface called IRepository. BakerRepository inherits the generic CRUD methods from it. Now for BakerRepository, it may have some methods that are special to itself. For example, a method called Bake.
I am using Unity.Mvc for the container of the dependencies. This was how I originally use it in the controller, which I learned from a tutorial that I read a few days ago:
private IRepository<Baker, int> _repository;

public BakersController(IRepository<Baker, int> repo)
{
    _repository = repo;
}
The container will basically give me the BakerRepository implementation. However, when I try to use the Bake method, which is unique to BakerRepository, I get an error because _repository is of type IRepository<Baker, int>, which has no Bake method.
So I tried this implementation instead:
private BakerRepository _repository;

public BakersController(IRepository<Baker, int> repo)
{
    _repository = repo as BakerRepository;
}
I don't fully understand the DI pattern, I'm only using it now because I learned it as a part of a tutorial about data repositories in ASP.Net MVC. I read up about it and I thought it's actually a good design pattern so I decided to keep using it, although I don't get it a hundred percent.
Now I'm wondering whether I rendered the purpose of dependency injection useless by doing the implementation this way. I don't understand the DI pattern enough, and I just couldn't find an exact answer elsewhere.
Casting the IRepository<Baker, int> in the constructor to BakerRepository violates at least three of the five SOLID principles:
Open/Closed Principle is violated, because this will cause changes to a different part of the system (the replacement or decoration of the repository for instance) to cause sweeping changes throughout the system, since you might be using the BakerRepository in many places.
Interface Segregation Principle is likely violated, because it is unlikely that your BakersController uses all of BakerRepository's methods.
Dependency Inversion Principle is violated, because your BakersController depends directly on a concrete type, instead of an abstraction. This makes it harder to change and evolve implementations independently.
None of these problems can be solved by changing the IRepository<Baker, int> constructor parameter to BakerRepository. Instead, you should break out this special Bake method and place it behind its own abstraction, for instance:
public interface IBakeHandler
{
    BakeResults Bake([parameters]);
}
You can mark the BakerRepository with this new IBakeHandler interface as well:
class BakerRepository : IRepository<Baker, int>, IBakeHandler
{
}
This allows the BakersController to depend on IBakeHandler instead:
private IBakeHandler _bakeHandler;

public BakersController(IBakeHandler bakeHandler)
{
    _bakeHandler = bakeHandler;
}
This prevents violation of the SOLID principles, because:
The replacement of the implementation with a proxy, decorator or adapter will not ripple through the system; the BakersController is unaffected by such a change.
The IRepository<T> and especially the IBakeHandler stay narrow, making it much easier to create decorators to apply cross-cutting concerns or to create mock/stub/fake implementations for testing.
Repository and IBakeHandler implementations can be placed in assemblies that are unreferenced by the assembly that holds the controller.
Do note though that every time you break open such repository implementation to add new features you are effectively violating the Open/Closed principle and probably the Single Responsibility Principle as well.
In case you have many of those 'extra' repository features, you will start to see many one-method interfaces like IBakeHandler. Once you see this happening, extract new generic abstractions out of these interfaces. You can apply well-known patterns such as described here and here.
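Such an extraction might look like the following sketch (the IHandler abstraction, BakeCommand, and BakeResult names are invented for illustration): each operation becomes its own parameter class, handled through a single generic interface, so decorators and test fakes are written once rather than per one-method interface.

```csharp
// One generic abstraction replaces a family of one-method interfaces like IBakeHandler.
public interface IHandler<TCommand, TResult>
{
    TResult Handle(TCommand command);
}

// Each operation is described by its own parameter class...
public class BakeCommand
{
    public int Loaves { get; set; }
}

public class BakeResult
{
    public bool Success { get; set; }
}

// ...and implemented by one handler; cross-cutting decorators can now target
// IHandler<TCommand, TResult> generically instead of each bespoke interface.
public class BakeHandler : IHandler<BakeCommand, BakeResult>
{
    public BakeResult Handle(BakeCommand command) =>
        new BakeResult { Success = command.Loaves > 0 };
}
```

A controller would then depend on IHandler<BakeCommand, BakeResult> rather than on a growing set of IBakeHandler-style interfaces.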
The right answer is to actually pass the BakerRepository to your BakerController, because that's what it depends on.
Dependency injection is, as you say, a very useful pattern. However, it is just a tool to help build your objects when you've properly extracted the dependencies of your classes. In a non framework setting you'd be responsible for building these objects, and you'd quickly tire of passing in loads of parameters. DI helps you with that. But it is optional. So, if you did it by hand, you'd always construct the BakerController with a BakerRepository passed in. Therefore the same logic would apply when you're using DI.
In direct answer to your question: you could downcast your dependency, but that would have no bearing on what DI does for you.
Breaking out your dependencies CAN also be useful for TDD: isolating your external dependencies allows unit testing of a class without exercising relatively expensive or disruptive I/O calls.
Also, breaking out dependencies lets you focus the responsibilities of an object, so that each object does one thing well.
In practice I have rarely seen or used DI (implemented off the back of an IoC container) to change concrete implementations, having only done it recently for feature toggling.
Another point is that objects which do not perform external I/O could feasibly be newed up and still be tested, either sociably or in a solitary manner.
You shouldn't necessarily inject everything....

Interface inheritance - How to not break Liskov's Substitution Principle and the Single Responsibility Principle?

I have a generic repository pattern, and I'm now seeing that I need a custom method for one specific implementation of this pattern, let's call the implementation CustomerRepository and the method GetNextAvailableCustomerNumber. I have a few ideas but they're not conforming to the SOLID principles of object-oriented design.
I first considered making a custom repository interface (ICustomerRepository) just for that implementation, but that is not really feasible. Experience tells me that there has to be some other way which I have not yet considered or do not even know about at the present time. Besides, I don't think inventing a new repository interface for every bump in the road should be done so lightly.
I then considered making ICustomerRepository inherit IRepository<Customer> and just adding the method signature for GetNextAvailableCustomerNumber, but that goes very much against Liskov's Substitution Principle, and I believe it also goes ever so slightly against the Single Responsibility Principle. I would still be able to implement a customer repository based on IRepository<Customer>, even though I'd only want ICustomerRepository to be used. I would end up with two alternatives, and it would no longer be obvious which interface the client should be implementing. In this case I would wish for it only to be possible to implement ICustomerRepository, and not IRepository<Customer>.
What would be the proper way to go about this? Is interface inheritance really the way to go, or is there any other preferred method of approach which ideally would conform to LSP?
This is my generic repository interface:
public interface IRepository<T>
    where T : IEntity
{
    T GetById(int id);
    IList<T> GetAll();
    IEnumerable<T> Query(Func<T, bool> filter);
    int Add(T entity);
    void Remove(T entity);
    void Update(T entity);
}
You are actually not breaking Liskov's Substitution Principle. Liskov says:
objects in a program should be replaceable with instances of their
subtypes without altering the correctness of that program
In your case you can do exactly that. Under your interpretation of Liskov, almost no inheritance or extension of classes would be allowed.
I think an ICustomerRepository that inherits from IRepository<Customer> would be just fine. I can still use an ICustomerRepository everywhere I would use an IRepository<Customer> (given ICustomerRepository : IRepository<Customer>).
Liskov guards against unexpected behavior in subclasses. The most used (although not necessarily the best) example seems to be the one where a square inherits from a rectangle. Here we have a SetWidth method that is overridden by Square, but Square also sets the height, since it is a square. The original method's definition is therefore changed in the subclass, and that violates the principle.
You will not break the LSP.
Subtypes must be substitutable for their base types. (LSP from Agile.Principles.Patterns.and Practices.In.C#[Robert.C.Martin] book)
If you add a new method, GetNextAvailableCustomerNumber, to ICustomerRepository, an ICustomerRepository will still be substitutable for IRepository<Customer>.
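That substitution can be sketched as follows (IRepository is trimmed to two members for brevity, and the in-memory implementation and numbering rule are invented for this example):

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IEntity { int Id { get; } }

public class Customer : IEntity
{
    public int Id { get; set; }
    public int Number { get; set; }
}

// Trimmed version of the generic repository from the question.
public interface IRepository<T> where T : IEntity
{
    T GetById(int id);
    int Add(T entity);
}

// The widened interface adds the one customer-specific query.
public interface ICustomerRepository : IRepository<Customer>
{
    int GetNextAvailableCustomerNumber();
}

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public Customer GetById(int id) => _customers.Single(c => c.Id == id);

    public int Add(Customer entity)
    {
        _customers.Add(entity);
        return entity.Id;
    }

    public int GetNextAvailableCustomerNumber() =>
        _customers.Count == 0 ? 1 : _customers.Max(c => c.Number) + 1;
}

public static class Client
{
    // Code written against the base type keeps working when handed the subtype:
    public static Customer Fetch(IRepository<Customer> repository, int id) =>
        repository.GetById(id);
}
```

Handing an InMemoryCustomerRepository to Client.Fetch changes nothing about the program's correctness, which is exactly what LSP asks for.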
Here is a good article on the repository pattern, with source code for different .NET versions: Entity Framework, Repository and Specification Pattern.
Liskov did not mean to extinguish any means of extending a program. The substitution principle is about the correctness of the program when you replace a base type with a subtype.
It is perfectly valid to add additional methods to the subtypes though: Any place where a base type is expected does not know of nor use the additional methods. If you replace the implementation used in those places by a subclass, those places in your code will still work perfectly.
An example of breaking the LSP would be if you create an implementation that throws an exception when you call "Query", or where the "Remove" method adds an element and the "add" method removes an element.
As far as I know creating a ICustomerRepository which inherits from IRepository<Customer> and adds customer specific methods is exactly how the repository pattern was meant.

Why should you strive for more than one interface member whenever possible?

In general, why should you strive for three to five members per interface?
And then, what's wrong with something like this?
interface IRetrieveClient
{
    Client Execute(string clientId);
}

interface ISaveClient
{
    bool Execute(Client client);
}

interface IDeleteClient
{
    bool Execute(string clientId);
}
When I see this, it screams "Antipattern!" because the interfaces aren't accomplishing anything, especially when the designer of the application intends for each interface to have a one-to-one relationship with the class that implements it.
Read: Once an interface is implemented, it is never reimplemented again. Now, I didn't design this system and it seems to me like what they wanted to do was implement some version of the command pattern, but when speaking to the developers, they don't seem to get it.
One area where I've used the single-method-per-interface pattern quite extensively is with generics together with some generic dispatch infrastructure.
For example:
public interface IValidateEntities<T>
{
    bool Validate(T entity);
}
Then when it is necessary to do something with an entity of some type, we can use reflection to find out which implementation to invoke (results of reflection usually cached in advance).
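That dispatch might be sketched like this (the Order entity and its validator are invented; a production version would cache the reflection lookup instead of scanning on every call):

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IValidateEntities<T>
{
    bool Validate(T entity);
}

public class Order
{
    public decimal Total { get; set; }
}

public class OrderValidator : IValidateEntities<Order>
{
    public bool Validate(Order entity) => entity.Total >= 0;
}

public static class ValidationDispatcher
{
    // Close IValidateEntities<> over the entity's runtime type, find the single
    // concrete implementation in this assembly, instantiate it, and invoke it.
    public static bool Validate(object entity)
    {
        Type contract = typeof(IValidateEntities<>).MakeGenericType(entity.GetType());
        Type implementation = Assembly.GetExecutingAssembly().GetTypes()
            .Single(t => t.IsClass && !t.IsAbstract && contract.IsAssignableFrom(t));
        object validator = Activator.CreateInstance(implementation);
        return (bool)contract.GetMethod("Validate").Invoke(validator, new[] { entity });
    }
}
```

The caller only knows it holds "some entity"; the generic interface plus reflection routes it to the right single-method implementation.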
I gave a presentation on this pattern a while back and it's available for viewing here:
http://www.infoq.com/presentations/Making-Roles-Explicit-Udi-Dahan
This looks entirely reasonable to me. I'm perfectly happy to have a small number of methods (or indeed one method) per interface.
Note that the benefit of combining interfaces is that you can (say) selectively present different views of a class by casting appropriately. e.g. you can construct/modify a class and then present it (narrow it) via a more restricted interface (e.g. having a read-only interface)
e.g.
interface Readable {
    Entity get();
}

interface Writeable {
    void set(Entity e);
}
A class can implement both interfaces such that you can mutate it, and then you can present it to clients simply as a Readable object, such that they can't mutate it. This is possible due to subdividing the interfaces.
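In C#, that narrowing might look like the following sketch (the Entity class, the EntityHolder backing class, and the PascalCase member names are inventions for this example):

```csharp
public class Entity
{
    public string Name { get; set; }
}

public interface Readable
{
    Entity Get();
}

public interface Writeable
{
    void Set(Entity e);
}

// One class implements both views...
public class EntityHolder : Readable, Writeable
{
    private Entity _entity;
    public Entity Get() => _entity;
    public void Set(Entity e) => _entity = e;
}

public static class ReadOnlyClient
{
    // ...but a client handed only the Readable view has no way to mutate the holder.
    public static string ReadName(Readable readable) => readable.Get().Name;
}
```

The owner of the EntityHolder can mutate it freely, then expose it to clients typed only as Readable.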
You should be careful not to take general rules and apply them too strictly. There is nothing wrong with your approach per se, if that is what you need.
But you might want to think of the following alternative:
interface IClient
{
    Client Retrieve(string clientId);
    bool Save(Client client);
    bool Delete(string clientId);
}
The advantage of this approach is that when you use injection, you only have to register one instance, and you only have one interface in your constructor.
So if it logically belongs together, I would keep it in one interface, because it reduces cluttering and is less complex.
The important thing in the design is to keep to what Robert Martin calls the Single Responsibility Principle. In your case I think what L-Three proposed is quite reasonable, as the client will have one responsibility: talk to the database (or service, etc.). If you see that the methods in IClient become totally different for some reason and start breaking the SRP, then you can split it into several interfaces, and I don't see a problem if an interface has only one method. For instance, a good example is an interface called IKeyGenerator that generates keys in a thread-safe manner. That interface has just a GetNextKey() method, which is perfectly enough.

Using explicit interfaces to ensure programming against an interface

I have seen arguments for using explicit interfaces as a method of locking a class's usage to that interface. The argument seems to be that by forcing others to program to the interface, you can ensure better decoupling of the classes and allow easier testing.
Example:
public interface ICut
{
    void Cut();
}

public class Knife : ICut
{
    void ICut.Cut()
    {
        // Cut something
    }
}
And to use the Knife object:
ICut obj = new Knife();
obj.Cut();
Would you recommend this method of interface implementation? Why or why not?
EDIT:
Also, given that I am using an explicit interface the following would NOT work.
Knife obj = new Knife();
obj.Cut();
To quote GoF chapter 1:
"Program to an interface, not an implementation".
"Favor object composition over class inheritance".
As C# does not have multiple inheritance, object composition and programming to interfaces are the way to go.
ETA: And you should never use multiple inheritance anyway but that's another topic altogether.. :-)
ETA2: I'm not so sure about the explicit interface. That doesn't seem constructive. Why would I want to have a Knife that can only Cut() if instantiated as an ICut?
I've only used it in scenarios where I want to restrict access to certain methods.
public interface IWriter
{
    void Write(string message);
}

public interface IReader
{
    string Read();
}

public class MessageLog : IReader, IWriter
{
    public string Read()
    {
        // Implementation
        return "";
    }

    void IWriter.Write(string message)
    {
        // Implementation
    }
}

public class Foo
{
    readonly MessageLog _messageLog;
    IWriter _messageWriter;

    public Foo()
    {
        _messageLog = new MessageLog();
        _messageWriter = _messageLog;
    }

    public IReader Messages
    {
        get { return _messageLog; }
    }
}
Now Foo can write messages to its message log using _messageWriter, but clients can only read. This is especially beneficial in a scenario where your classes are ComVisible. Your client can't cast to the writer type and alter the information inside the message log.
Yes. And not just for testing. It makes sense to factor common behaviour into an interface (or abstract class); that way you can make use of polymorphism.
public class Sword : ICut
{
    void ICut.Cut()
    {
        // Cut something
    }
}
A factory could return any type of sharp implement:
ICut obj = SharpImplementFactory();
obj.Cut();
This is a bad idea because their usage breaks polymorphism. The type of the reference used should NOT vary the behavior of the object. If you want to ensure loose coupling, make the classes internal and use a DI technology (such as Spring.Net).
There are no doubt certain advantages to forcing the users of your code to cast your objects to the interface types you want them to be using.
But, on the whole, programming to an interface is a methodology or process issue. Programming to an interface is not going to be achieved merely by making your code annoying to the user.
Using interfaces in this method does not, in and of itself, lead to decoupled code. If this is all you do, it just adds another layer of obfuscation and probably makes this more confusing later on.
However, if you combine interface based programming with Inversion of Control and Dependency Injection, then you are really getting somewhere. You can also make use of Mock Objects for Unit Testing with this type of setup if you are into Test Driven Development.
However, IOC, DI and TDD are all major topics in and of themselves, and entire books have been written on each of those subjects. Hopefully this will give you a jumping off point of things you can research.
Well, there is an organizational advantage. You can encapsulate your ICuttingSurface, ICut and related functionality into an assembly that is self-contained and unit-testable. Any implementations of the ICut interface are easily mockable and can be made dependent only on the ICut interface and not on actual implementations, which makes for a more modular and cleaner system.
Also, this helps keep the inheritance hierarchy simpler and gives you more flexibility to use polymorphism.
Allowing access only through the explicit interface type ensures methods are visible only in the contexts where they are needed.
Consider a logical entity in a game: you decide that instead of a class responsible for drawing/ticking the entities, you want the tick/draw code to live in the entity itself.
Implementing IDrawable.Draw() and ITickable.Tick() explicitly ensures an entity can only ever be drawn/ticked when the game expects it to be. Otherwise these methods won't be visible.
A lesser bonus is that when implementing multiple interfaces, explicit implementations let you work around cases where two interface method names collide.
Another potential scenario for explicitly implementing an interface is when dealing with an existing class that already implements the functionality, but uses a different method name. For example, if your Knife class already had a method called Slice, you could implement the interface this way:
public class Knife : ICut
{
    public void Slice()
    {
        // slice something
    }

    void ICut.Cut()
    {
        Slice();
    }
}
If the client code doesn't care about anything other than the fact that it can use the object to Cut() things, then use ICut.
Yes, but not necessarily for the given reasons.
An example:
On my current project, we are building a tool for data entry. We have certain functions that are used by all (or almost all) tabs, and we are coding a single page (the project is web-based) to contain all of the data entry controls.
This page has navigation on it, and buttons to interact with all the common actions.
By defining an interface (IDataEntry) that implements methods for each of the functions, and implementing that interface on each of the controls, we can have the aspx page fire public methods on the user controls which do the actual data entry.
By defining a strict set of interaction methods (such as your 'cut' method in the example) Interfaces allow you to take an object (be it a business object, a web control, or what have you) and work with it in a defined way.
For your example, you could call Cut on any ICut object, be it a knife, a saw, a blowtorch, or monofilament wire.
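That polymorphic dispatch can be sketched as follows (the tool classes and their return strings are invented, and ICut here returns a string so the effect is observable):

```csharp
using System.Collections.Generic;

public interface ICut
{
    string Cut();
}

public class Knife : ICut
{
    public string Cut() => "sliced";
}

public class Saw : ICut
{
    public string Cut() => "sawed";
}

public static class Workshop
{
    // The caller neither knows nor cares which concrete implement it holds.
    public static List<string> CutAll(IEnumerable<ICut> tools)
    {
        var results = new List<string>();
        foreach (var tool in tools)
            results.Add(tool.Cut()); // dispatch picks the concrete Cut at runtime
        return results;
    }
}
```

Adding a Blowtorch later means writing one new class; Workshop.CutAll is untouched.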
For testing purposes, I think interfaces are also good. If you define tests based around the expected functionality of the interface, you can define objects as described and test them. This is a very high-level test, but it still ensures functionality. HOWEVER, this should not replace unit testing of the individual object methods...it does no good to know that 'obj.Cut' resulted in a cutting if it resulted in the wrong thing being cut, or in the wrong place.
