Deciding between restriction and freedom (Interfaces and Dependency Injection) - C#

As I understand it, interfaces in C# can be thought of as a contract or promise that a derived class must follow. This allows different objects to behave in different ways when an overridden method is called.
DI, how I understand it, offers the ability to reduce dependencies by being able to inject the dependency (usually through a container) through the ctor, property, or method.
It seems like they are two completely opposing forces, freedom versus restraint. I can create a contract that says a derived class MUST follow certain guidelines, which is restrictive in nature, versus DI, which allows me to inject dependencies (I know that the dependency has to implement the interface, but still...), which is permissive in nature. I can make everything injectable and wildly change the class.
I guess my question is what's your process in deciding how restrictive you want to be? When is it more important to use interfaces for polymorphism or DI when you want complete freedom? Where do you draw the line? Is it ok to use interfaces when you want the structure (methods and properties) to align among different derived classes and DI when the parameters can be wildly different?
EDIT:
Maybe DI was the wrong example. Let's say I have an IPlugin and a Plugin Factory. All plugins need the same information, work the same, etc., so it would make sense to use an interface. Now, one plugin works the same but needs different parameters or different data, while ultimately keeping the same structure, i.e. Load, Run, etc.
I wanted to pass a command object that can expose the different parameters each plugin will need (using DI), and then each plugin can use the properties of that command object. But the fact that I can inject a command object with wildly different parameters kinda breaks the whole idea of having a contract in the first place. Would that be kosher?
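For reference, a minimal sketch of what I mean (IPlugin, PluginCommand and the member names are just made up to illustrate the question):

using System.Collections.Generic;

// A command object exposing whatever parameters a given plugin needs.
public class PluginCommand
{
    public IDictionary<string, object> Parameters { get; } = new Dictionary<string, object>();
}

// The contract every plugin follows: same structure, same lifecycle.
public interface IPlugin
{
    void Load(PluginCommand command);
    void Run();
}

// One plugin might only need a connection string...
public class ReportPlugin : IPlugin
{
    private string _connectionString;

    public void Load(PluginCommand command)
    {
        _connectionString = (string)command.Parameters["connectionString"];
    }

    public void Run() { /* ... */ }
}

// ...while another needs wildly different parameters, yet keeps the same shape.
public class ImportPlugin : IPlugin
{
    private string _sourcePath;
    private int _batchSize;

    public void Load(PluginCommand command)
    {
        _sourcePath = (string)command.Parameters["sourcePath"];
        _batchSize = (int)command.Parameters["batchSize"];
    }

    public void Run() { /* ... */ }
}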

DI, how I understand it, offers the ability to reduce dependencies by being able to inject the dependency (usually through a container) through the ctor, property, or method.
Incorrect. Static code analysis shows the dependency is still there. DI just changes how you end up with an instance of the object. If your ctor, say, is expecting objects of a particular class, then you have a dependency on that type, but you also have strong coupling to a class, which is bad. However, if your ctor is expecting types of a certain interface, then you have a dependency on that contract definition but not on the actual implementation (loose coupling).
As I understand it, interfaces in C# can be thought of as a contract or promise that a derived class must follow. This allows different objects to behave in different ways when an overridden method is called.
Yes, but they are also a way to abstract a component by hiding unnecessary details, something a class, say, cannot. That's why in reasonably large systems it's nice to have a MyContracts.dll in which you define all your interfaces, alongside, say, BusinessLogic.dll and ClientStuff.dll. ClientStuff depends on the contracts but doesn't care about the actual implementation and the possibly zillions of other dependencies BusinessLogic.dll may have (a typical WPF or WCF application, say).
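For instance, a rough sketch of that split, borrowing the names used later in this answer (the members are illustrative):

// MyContracts.dll -- interfaces only, no implementation details, no heavy references.
namespace MyContracts
{
    public interface IPersistenceService
    {
        void Save(object entity);
    }
}

// ClientStuff.dll -- references only MyContracts.dll.
namespace ClientStuff
{
    using MyContracts;

    public class EditPatientViewModel
    {
        private readonly IPersistenceService _persistence;

        public EditPatientViewModel(IPersistenceService persistence)
        {
            _persistence = persistence;
        }
    }
}

// BusinessLogic.dll -- the only assembly that knows about SQL, WCF, etc.
namespace BusinessLogic
{
    using MyContracts;

    public class SqlPersistenceService : IPersistenceService
    {
        public void Save(object entity) { /* SQL-specific work */ }
    }
}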
When is it more important to use interfaces for polymorphism or DI when you want complete freedom? Where do you draw the line? Is it ok to use interfaces when you want the structure (methods and properties) to align among different derived classes and DI when the parameters can be wildly different?
DI does not offer any more freedom than a non-DI system. However, I think you might be a little confused over terminology or concepts. DI is always good. Injecting interfaces rather than class types is better, as I mentioned above.
I can create a contract that says a derived class MUST follow these certain guidelines which is restrictive in nature versus DI which allows me to inject dependencies
A contract is a contract is a contract, whether it is in the form of an interface or an abstract class. If the constructor (constructor injection) or property (property injection) is asking for a MyDbConnection, then that sets up a requirement as to what can be injected, more so than, say, just expecting an IMyDbConnection.
I think you may have it wrong with regard to what DI does. DI does not always inject classes, and interfaces are not just for polymorphism. The latter is a common misconception. DI has nothing to do with being "restrictive". That is up to you and the type you are expecting to have injected.
In fact, expecting a concrete class object or something derived from an abstract class to be injected is more "restrictive" than injecting an interface, by the very nature of how interfaces can be reused across more scenarios. E.g. not that you would be injecting one, but INotifyPropertyChanged pre-dated WPF yet now plays a huge role in WPF and MVVM.
Restrictive:
public EditPatientViewModel (SqlPersistenceService svc) {}
Not so restrictive:
public EditPatientViewModel (IPersistenceService svc) {}
Deciding between restriction and freedom (Interfaces and Dependency Injection)
That's completely up to you but if you inject interfaces then you will obtain:
decoupling between contract and implementation and all the baggage that goes with it
improved abstraction - hide away unnecessary details

I was reading "The Art of Unit Testing" by Roy Osherove and I think he answers my question.
He's talking about DI in testing and mentions that some people believe this hurts object-oriented design principles (namely, breaking encapsulation).
Object-oriented principles are there to enforce constraints on the end user of the API so that the object model is used properly and is protected from unintentional usage. Tests, for example, add another end user. Encapsulating these external dependencies somewhere without allowing anyone to change them, having private constructors or sealed classes, and having non-virtual methods are all classic signs of overprotective design.
I think my problem was that I felt like injecting too many dependencies was breaking encapsulation principles, but adding these dependencies caters to end users (like tests) that actually need the functionality, and doesn't break any encapsulation rules.

Related

Adding an implementation rather than its interface in ASP.NET Core DI

What's the difference between adding an implementation rather than its interface, given that I have just one implementation of this interface?
// Adds a transient service by type of the implementation:
services.AddTransient(typeof(SomeConcreteService));
or
// Adds a transient service by interface of the concrete implementation type:
services.AddTransient<ISomeService, SomeConcreteService>();
services.AddTransient<ISomeService, SomeConcreteService>();
This way is preferred as it lets you use dependency injection in the correct manner.
If you use interfaces in all your controllers and then decide you want to change the concrete implementation, you will only have to edit the one line in your Startup.cs.
public HomeController(ISomeService someService)
{
//..
}
When you register the concrete implementation, you will only be able to inject it as that implementation.
services.AddTransient(typeof(SomeConcreteService));
Injecting this now as ISomeService will cause an error.
While this
services.AddTransient<ISomeService, SomeConcreteService>();
will allow you to inject the interface rather than the implementation.
In the end it is about loose coupling. Registering only the concrete type also makes your software harder to test.
If you only inject the interface, you can easily test the class that uses the implementation through its interface, because you can mock it without any trouble. If instead you inject the real implementation, its methods must be marked as virtual to mock them, and you also need to mock the classes your implementation SomeConcreteService might be using.
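As a quick sketch of that testing point, assuming the Moq library (the GetData() member and the ReportBuilder consumer are made-up names; the original post doesn't define ISomeService's members):

using Moq;

// Illustrative contract and consumer.
public interface ISomeService
{
    string GetData();
}

public class ReportBuilder
{
    private readonly ISomeService _service;
    public ReportBuilder(ISomeService service) => _service = service;
    public string Build() => "Report: " + _service.GetData();
}

public class ReportBuilderTests
{
    public void Build_UsesTheInjectedService()
    {
        // No real SomeConcreteService, no database, no virtual methods required.
        var mock = new Mock<ISomeService>();
        mock.Setup(s => s.GetData()).Returns("stubbed data");

        var builder = new ReportBuilder(mock.Object);
        var result = builder.Build();

        mock.Verify(s => s.GetData(), Times.Once);
        if (result != "Report: stubbed data")
            throw new System.InvalidOperationException("Unexpected result");
    }
}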
Today you have one implementation of the interface. Tomorrow you may not. Others may need to extend the service in the future with a decorator, composite, or other design pattern.
Essentially, by using an interface, you are future-proofing your application for every eventuality - it can even be extended in ways that you don't foresee today, without changing a single line of code outside of your DI container registration.
If you use the concrete type, the ability to extend it is very limited. You are basically saying "this is the way it will be forever" without allowing many possibilities for extending it without changing the code. You are giving up the most useful benefit of using the DI pattern - loosely coupling your code by separating its interface from its implementation.
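For example, here is a hedged sketch of the decorator point using the factory overload of Microsoft.Extensions.DependencyInjection (the logging decorator and the GetData() member are illustrative, not part of the original code):

using System;
using Microsoft.Extensions.DependencyInjection;

// Same illustrative contract and implementation as in the earlier sketch.
public interface ISomeService { string GetData(); }
public class SomeConcreteService : ISomeService
{
    public string GetData() => "real data";
}

// Decorator: adds behaviour around the existing implementation without changing it.
public class LoggingSomeService : ISomeService
{
    private readonly ISomeService _inner;
    public LoggingSomeService(ISomeService inner) => _inner = inner;

    public string GetData()
    {
        Console.WriteLine("GetData called");  // the added cross-cutting concern
        return _inner.GetData();              // delegate to the wrapped service
    }
}

public static class ServiceRegistration
{
    public static void Configure(IServiceCollection services)
    {
        // Only the registration changes; every consumer still asks for ISomeService.
        services.AddTransient<SomeConcreteService>();
        services.AddTransient<ISomeService>(sp =>
            new LoggingSomeService(sp.GetRequiredService<SomeConcreteService>()));
    }
}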

How should common references be passed around in an assembly?

I am trying to get rid of static classes, static helper methods and singleton classes in my code base. Currently, they are pretty much spread over the whole code, especially so for the utility classes and the logging library. This is mainly due to the need for mocking ability as well as object-oriented design and development concerns, e.g. extensibility. I might also need to introduce some form of dependency injection in the future and would like to leave an open door for that.
Basically, the problem I have encountered is about the method of passing the commonly used references around. These are objects that are used by almost every class in the code base, such as the logging interface, the utility (helper) class interface and maybe an instance of a class that holds an internal common state for the assembly which most classes relate to.
There are two options, as far as I'm aware. One is to define a class (or an interface) that stores the common references, a context if you will, and pass the context to each object that is created. The other option is to pass each common reference to almost every class as a separate parameter which would increase the number of parameters of the class constructors.
Which one of these methods is better, what are the pros and cons of each, and is there a better method for this task?
I generally go with the context object approach, and pass the context object either to an object's constructor, or to a method -- depending on which one makes the most sense.
The context object pattern can take a few forms.
You can define an interface that has exactly the members you need, or you can generate a sort of container class. For example, when writing loosely-coupled components, I tend to have each component I implement have a matching interface, so that it can be reimplemented if desired. Then I register the objects on a "manager" object, something like this:
public interface IServiceManager
{
    T GetService<T>();
    T RequireService<T>();
    void RegisterService<T>(T service);
    void UnregisterService<T>(T service);
}
Behind the scenes there is a map from type to object, which allows me to extremely quickly assemble a large set of diverse components into a working whole. Each component asks for the others by interface, and the manager object is what glues them together. (If you correctly author your components, you can even swap out one service for another while the process is running!)
One would register a service something along these lines:
class FooService : IFooService { }
// During process start-up:
serviceManager.RegisterService<IFooService>(new FooService());
There is more overhead with this approach than with the flat-interface approach due to the dictionary lookup, but it has allowed me to build very sophisticated systems that can be easily redeployed with different service implementations. (And, as is usual, any bottlenecks I encounter are never in looking up a service object from a dictionary, but somewhere else such as the database.)
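For illustration, here is roughly what such a manager might look like behind the scenes (a deliberately minimal sketch; a real one would need thread-safety and better error handling):

using System;
using System.Collections.Generic;

public class ServiceManager : IServiceManager
{
    private readonly Dictionary<Type, object> _services = new Dictionary<Type, object>();

    public void RegisterService<T>(T service) => _services[typeof(T)] = service;

    public void UnregisterService<T>(T service) => _services.Remove(typeof(T));

    // Returns default(T) (null for interface types) when nothing is registered.
    public T GetService<T>()
    {
        object service;
        return _services.TryGetValue(typeof(T), out service) ? (T)service : default(T);
    }

    // Throws instead of returning null, for dependencies that must exist.
    public T RequireService<T>()
    {
        object service;
        if (!_services.TryGetValue(typeof(T), out service))
            throw new InvalidOperationException("No service registered for " + typeof(T));
        return (T)service;
    }
}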
You're going to get varied opinions, but generally passing a separate parameter to the constructor for each dependency is preferred for a few reasons:
It clearly defines the actual dependencies for a class - with a "context" you don't know which parts of the context are used without digging into the code.
Generally having a lot of parameters to a constructor is a design smell, so using constructor injection helps you sniff out design flaws.
When testing, you can mock out individual dependencies rather than having to mock an entire context (see the sketch below).
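A short sketch of the contrast (all type names here are illustrative):

// Illustrative dependencies.
public interface ILogger { void Log(string message); }
public interface IOrderRepository { void Save(object order); }

// Context style: the class's real needs are hidden inside the context object.
public class CommonContext
{
    public ILogger Logger { get; set; }
    public IOrderRepository Orders { get; set; }
    // ...plus everything else the assembly shares.
}

public class ContextStyleProcessor
{
    private readonly CommonContext _context;
    public ContextStyleProcessor(CommonContext context) => _context = context;
}

// Explicit style: the dependencies are visible in the signature, and each one
// can be mocked on its own in a unit test.
public class ExplicitProcessor
{
    private readonly ILogger _logger;
    private readonly IOrderRepository _orders;

    public ExplicitProcessor(ILogger logger, IOrderRepository orders)
    {
        _logger = logger;
        _orders = orders;
    }
}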
I would suggest passing each dependency as a parameter to the constructor. This has great advantages for both dependency injection and unit testability (mocking).

How should I refactor this?

So in my application I've got several different customers being "serviced". Each customer has their own implementations of various classes that are all based on interfaces.
With the latest customer being added, I've noticed there will be a lot of duplication of code from another customer, but the other customer is in no other way related to them.
I've already got a default implementation for several other customers and roll new ones as I need them.
My question is how do I refactor this and still keep the code clean? If I were a dev new to this code base, I would want each customer to either use the default or their own implementation of these classes... but that's a lot of duplication.
Consider using an abstract base class with abstract or virtual members. Abstract members are essentially equivalent to interface members (they have no built-in behavior, they only guarantee the method exists), whereas virtual members have a default implementation which can be overridden by derived classes.
Your question is really too vague to answer in full, but here's how you can leverage inheritance.
If you want all classes to use the same implementation of a member then that member can be implemented in the base-class.
If you want each class to have its own implementation of a member then you can either use a base-class with abstract members, or an interface.
If you want some classes to use the same implementations and others to use different implementations, then implement the default behavior in the base class and override it as needed.
My main point is that in OOP there is a spectrum of how much or how little functionality is in base/abstract/concrete classes. There's no silver-bullet answer; sometimes your base classes will be skeletons and sometimes they'll be fully fleshed out. It all depends on the specific problem at hand.
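A minimal sketch of that spectrum (the processor and its members are hypothetical names):

public abstract class CustomerProcessor
{
    // Identical for every customer: implemented once in the base class.
    public void Load() { /* shared loading logic */ }

    // Sensible default that most customers share, overridable where needed.
    public virtual decimal CalculateDiscount(decimal total) => 0m;

    // No sensible default exists: every customer must supply its own.
    public abstract void Export();
}

public class DefaultCustomerProcessor : CustomerProcessor
{
    public override void Export() { /* default export format */ }
}

public class AcmeCustomerProcessor : CustomerProcessor
{
    public override decimal CalculateDiscount(decimal total) => total * 0.05m;
    public override void Export() { /* Acme-specific export */ }
}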
Is there some way you could create a base class, then a specific implementation for each customer, and then, using some type of Dependency Injection, have that load classes or functionality as needed? You really want to have a DRY system so as to avoid headaches, typos, and other similar human mistakes.
You may use either inheritance (put common logic in the base class) or aggregation (spread that logic among other classes and use them from your customer classes).
I'd recommend the visitor pattern:
http://en.m.wikipedia.org/wiki/Visitor_pattern
As well as the mediator pattern:
http://en.m.wikipedia.org/wiki/Mediator_pattern
The reason being that, based on what you are saying, it sounds like you may benefit from decoupling, or at least more loosely coupling, the business logic from your classes.
It's a bit difficult to know what to suggest without a better understanding of the code... but some things that have worked for me in similar situations include:
Use a Strategy for the duplicated code. I've had the most success where the strategy is encapsulated within a class implementing a known interface (one class per alternate strategy). Often in such cases I use some form of Dependency Injection framework (typically StructureMap) to pass the appropriate strategy/strategies to the class.
Use some sort of template class (or template methods) for the common item(s).
Use a Decorator to add specific functionality to some basic customer.
STW suggested that I should offer some clarification on what I mean by "Strategy" and how that differs from normal inheritance. I imagine inheritance is something you are very familiar with - something (typically a method - either abstract or virtual) in the base class is replaced by an alternate implementation in the derived class.
A strategy (at least the way I typically use it) is normally implemented by a completely different class. Often all that class will contain is the implementation for a single replaceable operation. For example, if the "operation" is to perform some validation, you may have a NullValidationStrategy which does nothing and a ParanoidValidationStrategy which makes sure every McGuffin is the correct height, width and specific shade of blue. The reason I usually implement each strategy in its own class is that I try to follow the Single Responsibility Principle, which can make it easier to reuse the code later.
As I mentioned above, I typically use a Dependency Injection (DI) framework to "inject" the appropriate strategy via the class constructor, but similar results may be obtained via other mechanisms - e.g. having a SetSomeOperationStrategy(ISomeOperation StrategyToUse) method, or a property which holds the strategy reference. If you aren't using DI, and the strategy will always be the same for a given customer type, you could always set the correct choices when the class is constructed. If the strategy won't be the same for each instance of a given customer type, then you probably need some sort of customer factory (often a factory method will be sufficient).
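To make the validation example concrete, here is a rough sketch (class and member names are illustrative):

public class McGuffin
{
    public int Height { get; set; }
    public int Width { get; set; }
    public string Shade { get; set; }
}

public interface IValidationStrategy
{
    bool Validate(McGuffin item);
}

public class NullValidationStrategy : IValidationStrategy
{
    public bool Validate(McGuffin item) => true;   // accepts everything
}

public class ParanoidValidationStrategy : IValidationStrategy
{
    public bool Validate(McGuffin item) =>
        item.Height > 0 && item.Width > 0 && item.Shade == "the correct shade of blue";
}

// The customer-specific class simply receives whichever strategy applies,
// e.g. via constructor injection from a DI framework.
public class McGuffinProcessor
{
    private readonly IValidationStrategy _validation;
    public McGuffinProcessor(IValidationStrategy validation) => _validation = validation;

    public void Process(McGuffin item)
    {
        if (!_validation.Validate(item)) return;
        // ...shared processing logic...
    }
}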
I'd go with the answer of spinon (got my vote at least), but it's too short, so let me elaborate:
Use your interfaces for the default implementation and then use dependency injection. Most tools allow you to define a scope or some criteria for how to resolve something.
I assume that you do know the client at some early point of the program. So for ninject you just might want to define a "Module" for each client and load that into the kernel, depending on the client.
So I'd create a "no customization" Module and create a "ClientX" Module for every special case that uses ´Bind.To()` instead.
You end up with:
a base implementation that is clean/default
a single place change for a new client (got a new one? Great. Either it works with the default or just needs a single Module that maps the interfaces to other classes)
The rest of the code shouldn't mind and gets the dependencies via injection (constructor, property, whatever is easiest to go for; constructor would probably be the nicest way) and has no special treatment at all.
You could even use a conditional binding in Ninject to solve the binding issue without different modules at all (although, depending on the number of clients, this might get messy and would be better kept separated).
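A rough sketch of the per-client module idea, assuming Ninject (the interface and class names are made up):

using Ninject;
using Ninject.Modules;

public interface IInvoiceFormatter { string Format(decimal amount); }

public class DefaultInvoiceFormatter : IInvoiceFormatter
{
    public string Format(decimal amount) => amount.ToString("C");
}

public class ClientXInvoiceFormatter : IInvoiceFormatter
{
    public string Format(decimal amount) => "ClientX invoice: " + amount;
}

// The "no customization" module: all default bindings.
public class DefaultModule : NinjectModule
{
    public override void Load()
    {
        Bind<IInvoiceFormatter>().To<DefaultInvoiceFormatter>();
    }
}

// A module per special-case client, overriding only what differs.
public class ClientXModule : NinjectModule
{
    public override void Load()
    {
        Bind<IInvoiceFormatter>().To<ClientXInvoiceFormatter>();
    }
}

public static class Program
{
    public static void Main()
    {
        bool isClientX = true;  // decided early on, e.g. from configuration
        var kernel = new StandardKernel(
            isClientX ? (INinjectModule)new ClientXModule() : new DefaultModule());

        IInvoiceFormatter formatter = kernel.Get<IInvoiceFormatter>();
    }
}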
I was going to suggest aggregation, as #the_joric suggests, over inheritance, but your description makes it sound like your application is already reasonably well-factored - that there aren't a lot of small classes waiting to be extracted from your existing classes. Assuming that's the case, for any given interface, if you have a perfect class for the new customer already written and ready to go, I would say go ahead and use it. If you're worried about that for some reason, or if it's not quite a perfect fit, then take that perfect class, make it abstract, and create empty subclasses for your existing customer and your new customer - that's the way I would go.

C# - Is systematically adding an interface a good practice?

In the project I'm working on, I've noticed that for every entity class there is an interface. It seems that the original motivation was to only expose interfaces to other projects/solutions.
I find this completely useless, and I don't see the point in creating an interface for every class. By the way, those classes don't have any methods, just properties, and they don't implement the same interface.
Am I wrong? Or is it a good practice?
Thx
I tend to create an interface for almost every class, mainly because of unit testing - if you use dependency injection and want to unit test a class that depends on the class in question, then the standard way is to mock an instance of it (using one of the mocking frameworks, e.g. Rhino Mocks). However, in practice that is only possible for interfaces, not concrete implementations (yes, theoretically you can mock a concrete class, but there are many painful limitations).
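A small sketch of why the interface makes the test double trivial, without even needing a mocking framework (names are illustrative):

public interface IEmailSender
{
    void Send(string to, string body);
}

// Production implementation: talks to a real SMTP server.
public class SmtpEmailSender : IEmailSender
{
    public void Send(string to, string body) { /* real network call */ }
}

// Hand-rolled fake for tests: trivial, because only the contract matters.
public class FakeEmailSender : IEmailSender
{
    public int SentCount { get; private set; }
    public void Send(string to, string body) => SentCount++;
}

// The class under test depends on the interface, so either implementation will do.
public class WelcomeService
{
    private readonly IEmailSender _sender;
    public WelcomeService(IEmailSender sender) => _sender = sender;
    public void Welcome(string user) => _sender.Send(user, "Welcome!");
}

Had WelcomeService taken an SmtpEmailSender directly, testing it would mean a real SMTP call or mocking a concrete class with all its limitations (virtual methods, etc.).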
There may be more to the setup than described here that justifies the overhead of interfaces. Generally they're very useful for dependency injection and overall separation of concerns, unit testing and mocking, etc. It's entirely possible that they're not being used for this purpose (or any other constructive purpose, really) in your environment, though.
Is this generated code, or were these manually created? If the former, I suspect the tool generating them is doing so to prepare for such a use if the developer were so inclined. If the latter, maybe the original designer had something in mind?
For my own "best practices" I almost always do interface-driven development. It's generally a good practice to separate out concerns from one another and use the interfaces as contracts between them.
Exposing interfaces publicly has value in creating a loosely-coupled, behaviour-driven architecture.
Creating an interface for every class - especially if the interface just exposes every public method the class has in a single interface - is a bad implementation of the concept, and (in my experience) leads to more complex code and no improvement in architecture.
It's useful for tests.
A method may take a parameter of type ISomething, and it can be either SqlSomething or XmlSomething, where ISomething is the interface and SqlSomething and XmlSomething are classes that implement it, depending on whether you're running tests (you pass XmlSomething in that case) or the application (SqlSomething).
Also, when building a universal project that can work with any database, but you aren't using an ORM tool like LINQ to SQL (maybe because the database engine might not support it), you define interfaces with the methods that you use in the application. Later on, developers will implement the interfaces to work with the database, creating a MySQLProductRepository class and a PostgreSQLProductRepository class that both implement the same interface but have different functionality.
In the application code, any method takes as a parameter a repository object of type IProductRepository, which can be any of these implementations.
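A minimal sketch of that repository arrangement (member names are illustrative):

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product GetById(int id);
    void Save(Product product);
}

public class MySQLProductRepository : IProductRepository
{
    public Product GetById(int id) { /* MySQL-specific SQL */ return null; }
    public void Save(Product product) { /* MySQL-specific SQL */ }
}

public class PostgreSQLProductRepository : IProductRepository
{
    public Product GetById(int id) { /* PostgreSQL-specific SQL */ return null; }
    public void Save(Product product) { /* PostgreSQL-specific SQL */ }
}

// Application code only ever sees the interface.
public class CatalogService
{
    private readonly IProductRepository _products;
    public CatalogService(IProductRepository products) => _products = products;

    public Product Find(int id) => _products.GetById(id);
}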
IMHO writing interfaces for no reason is pointless. You can't be totally closed-minded, but in general things that are not immediately useful tend to accumulate as waste.
The agile concept of "it's either adding value or taking value away" comes to mind.
What happens when you remove them? If nothing then ... what are they there for?
As a side note: interfaces are extremely useful for Rhino Mocks, dependency injection and so on...
If those classes only have properties, then interfaces don't add much value, because there's no behavior that is being abstracted.
Interfaces can be useful for abstraction, so the implementation can be mocked in unit tests. But in a well-designed application the business/domain entities should have very little reason to be mocked. Business/domain services, on the other hand, are an excellent candidate for interface abstraction.
I have created interfaces for my entities once, and it didn't add any value at all. It only made me realize my design was wrong.
It seems to me an interface is superior to an abstract base class primarily if/when it is necessary to have a class which implements the interface but inherits from some other base class. Multiple inheritance is not allowed, but multiple interface implementations are.
The main caveat I see with using interfaces rather than abstract classes (beyond the extra source code required) is that changing anything in an interface necessitates recompilation of any and all code which uses that interface. By contrast, adding public members to a base class generally only requires recompilation of the base class itself.(*)
(*) Due to the way extension methods are handled, adding members to a class won't "require" recompiling code which uses that class, but may cause code which uses extension methods on the class to change meaning the next time it (the extension-method-using code) is recompiled.
There is no way to tell the future and see if you're going to need to program against an interface down-the-road. But if you decide later to make everything use an interface and, say, a factory to create instances of unknown types (any type that implements the interface), then it is quicker to restrict everyone to programming against an interface and a factory up-front than to replace references to MyImpl with references to IMyInterface later, etc.
So when writing new software, it is a judgment call whether to program against an interface or an implementation, unless you are familiar with what is likely to happen to that kind of software based on previous experiences.
I usually keep it "in flux" for a time whether or not I have an interface, a base class, or both, and even whether the base class is abstract (it usually is). I will work on a project (usually a Visual Studio solution with about 3 to 10 projects in it) for a while (a couple of days, maybe) before I refactor and / or ask for a second opinion. Once a final decision is reached and the code is refactored and tested, I tell fellow devs that it is ready for use.
For unit testing, it's either interfaces everywhere or virtual methods everywhere.
Sometimes I miss Java :)

c# when to program to an interface?

OK, the great thing about programming to an interface is that it allows you to interchange specific classes as long as the new classes implement everything in that interface.
E.g. I program my dataSource object to an interface so I can switch it between an XML reader and a SQL database reader.
Does this mean ideally every class should be programmed to an interface?
When is it not a good idea to use an interface?
When the YAGNI principle applies.
Interfaces are great but it's up to you to decide when the extra time it takes developing one is going to pay off. I've used interfaces plenty of times but there are far more situations where they are completely unnecessary.
Not every class needs to be flexibly interchanged with some other class. Your system design should identify the points where modules might be interchangeable, and use interfaces accordingly. It would be silly to pair every class with an additional interface file if there's no chance of that class ever being part of some functional group.
Every interface you add to your project adds complexity to the codebase. When you deal with interfaces, discoverability of how the program works is harder, because it's not always clear which IComponent is filling in for the job when consumer code is dealing with the interface explicitly.
IMHO, you should try to use interfaces a lot. It's easier to be wrong by not using an interface than by using it.
My main argument for this is that interfaces help you write more testable code. If a class constructor or a method has a concrete class as a parameter, it is harder (especially in C#, where no free mocking frameworks allow mocking non-virtual methods of concrete classes) for you to write tests that are REAL unit tests.
I believe that if you have a DTO-like object, then it's overkill to use an interface, since mocking it may be even harder than just creating one.
If you're not testing, not using dependency injection or inversion of control, and expect never to do any of these (please, avoid being there hehe), then I'd suggest using interfaces only whenever you really need different implementations, or when you want to limit the visibility one class has into another.
Use an interface when you expect to need different behaviours used in the same context. I.e. if your system needs one customer class which is well defined, you probably don't need an ICustomer interface. But if you expect a class to comply with a certain behaviour such as "object can be saved", which applies to different kinds of objects, then you should have the class implement an ISavable interface.
Another good reason to use an interface is if you expect different implementations of one kind of object. For example, if you plan an SMS gateway which will route SMSes through several different third-party services, your classes should probably implement a common interface such as ISmsGatewayAdapter, so your core system is independent of the specific implementation you use.
This also leads to 'dependency injection', which is a technique to further decouple your classes and which is best implemented by using interfaces.
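A short sketch of both cases (the provider class names are placeholders):

// Behaviour shared across otherwise unrelated kinds of objects.
public interface ISavable
{
    void Save();
}

// One kind of object, several third-party implementations behind one contract.
public interface ISmsGatewayAdapter
{
    void SendSms(string number, string message);
}

public class ProviderASmsGateway : ISmsGatewayAdapter
{
    public void SendSms(string number, string message) { /* call provider A's API */ }
}

public class ProviderBSmsGateway : ISmsGatewayAdapter
{
    public void SendSms(string number, string message) { /* call provider B's API */ }
}

// The core system depends only on the adapter interface, which is exactly what
// makes dependency injection of either implementation possible.
public class AlertService
{
    private readonly ISmsGatewayAdapter _gateway;
    public AlertService(ISmsGatewayAdapter gateway) => _gateway = gateway;
    public void Alert(string number) => _gateway.SendSms(number, "Something happened!");
}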
The real question is: what does your class DO? If you're writing a class that actually implements an interface somewhere in the .NET framework, declare it as such! Almost all simple library classes will fit that description.
If, instead, you're writing an esoteric class used only in your application and that cannot possibly take any other form, then it makes no sense to talk about what interfaces it implements.
Starting from the premise of, "should I be implementing an interface?" is flawed. You neither should be nor shouldn't be. You should simply be writing the classes you need, and declaring what they do as you go, including what interfaces they implement.
I prefer to code as much as possible against an interface. I like it because I can use a tool like StructureMap to say "hey... get me an instance of IWidget" and it does the work for me. But by using a tool like this I can programmatically, or by configuration, specify which instance is retrieved. This means that when I am testing I can load up a mock object that conforms to an interface, in my development environment I can load up a special local cache, when I am in production I can load up a caching farm layer, etc. Programming against an interface provides me a lot more power than not programming against an interface. Better to have and not need than need and not have applies here very well. And if you are into SOLID programming, the easiest way to achieve many of those principles sort of begins by programming against an interface.
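As a hedged sketch of that StructureMap-style setup (the exact registration API differs between StructureMap versions, and IWidget and its implementations are illustrative):

using StructureMap;

public interface IWidget { string Render(); }

public class LocalCacheWidget : IWidget
{
    public string Render() => "from the local development cache";
}

public class CachingFarmWidget : IWidget
{
    public string Render() => "from the production caching farm";
}

public static class Bootstrapper
{
    public static IContainer Build(bool isProduction)
    {
        return new Container(cfg =>
        {
            // Swap the concrete type per environment; consumers only ever ask for IWidget.
            if (isProduction)
                cfg.For<IWidget>().Use<CachingFarmWidget>();
            else
                cfg.For<IWidget>().Use<LocalCacheWidget>();
        });
    }
}

// Usage: var widget = Bootstrapper.Build(isProduction: true).GetInstance<IWidget>();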
As a general rule of thumb, I think you're better off overusing interfaces a bit than underusing them a bit. Err on the side of interface use.
Otherwise, YAGNI applies.
If you are using Visual Studio, it takes about two seconds to take your class and extract an interface (via the context menu). You can then code to that interface, and hardly any time was spent.
If you are just doing a simple project, then it may be overkill. But on medium+ size projects, I try to code to interfaces throughout the project, as it will make future development easier.
