I am maintaining an ASP.NET MVC project. In the project the original developer has an absolute ton of interfaces. For example: IOrderService, IPaymentService, IEmailService, IResourceService. The thing I am confused about is that each of these is only implemented by a single class. In other words:
OrderService : IOrderService
PaymentService : IPaymentService
My understanding of interfaces has always been that they are used to create an architecture in which components can be interchanged easily. Something like:
Square : IShape
Circle : IShape
Furthermore, I don't understand how these are being created and used. Here is the OrderService:
public class OrderService : IOrderService
{
    private readonly ICommunicationService _communicationService;
    private readonly ILogger _logger;
    private readonly IRepository<Product> _productRepository;

    public OrderService(ICommunicationService communicationService, ILogger logger,
        IRepository<Product> productRepository)
    {
        _communicationService = communicationService;
        _logger = logger;
        _productRepository = productRepository;
    }
}
These objects don't ever seem to be created directly, as in OrderService orderService = new OrderService(); it is always done through the interface. I don't understand why the interfaces are being used instead of the class implementing the interface, or how that even works. Is there something major that I am missing about interfaces that my google skills aren't uncovering?
This particular design pattern typically exists to facilitate unit testing, as you can now replace OrderService with a TestOrderService, both of which are only referenced as IOrderService. This means you can write TestOrderService to provide specific behavior to a class under test, then sense whether the class under test is doing the correct things.
In practice, the above is often accomplished by using a Mocking framework, so that you don't actually hand-code a TestOrderService, but rather use a more concise syntax to describe how it should behave for a typical test, then have the mocking framework dynamically generate an implementation for you.
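For illustration, here is a minimal sketch of that using the Moq framework. The PlaceOrder method and the CheckoutController class are hypothetical names invented for this example, not taken from your project:

using Moq;

// Inside a unit test: create a mock implementation of the interface on the fly.
var orderService = new Mock<IOrderService>();

// Describe the behavior needed for this test (PlaceOrder is a made-up member).
orderService.Setup(s => s.PlaceOrder(It.IsAny<Order>())).Returns(true);

// Hand the mock to the class under test through its constructor.
var controller = new CheckoutController(orderService.Object);
controller.Checkout();

// Verify that the class under test interacted with the service as expected.
orderService.Verify(s => s.PlaceOrder(It.IsAny<Order>()), Times.Once());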
As for why you never see 'new OrderService' in the code, it's likely that your project is using some form of Inversion of Control container, which facilitates automatic Dependency Injection. In other words, you don't have to construct OrderService directly, because somewhere you've configured that any use of IOrderService should automatically be fulfilled by constructing a singleton OrderService and passing it in to the constructor. There are a lot of subtleties here and I'm not exactly sure how your dependency injection is being accomplished (it doesn't have to be automatic; you can also just construct the instances manually and pass them in through the constructors.)
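For reference, the manual variant is just plain wiring at startup. A sketch against the OrderService constructor shown above, where the concrete types on the right-hand side are placeholders for whatever your project actually uses:

// Composition by hand, somewhere near application startup.
ICommunicationService communicationService = new CommunicationService();
ILogger logger = new FileLogger();
IRepository<Product> productRepository = new ProductRepository();

// The consuming code only ever sees IOrderService.
IOrderService orderService = new OrderService(communicationService, logger, productRepository);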
That's not the only use of interfaces; in MVC they are being used to decouple contract from implementation. To understand MVC you need to read up a bit on related topics such as separation of concerns and inversion of control (IoC). The actual act of creating an object to be passed to the OrderService constructor is handled by the IoC container, based on some predefined mapping.
These objects don't ever seem to be created directly, as in OrderService orderService = new OrderService()

So what?
Point is that SOMEONE calls the OrderService constructor, and THE CALLER is responsible for creating the dependencies. It hands them over.
I don't understand why the interfaces are being used instead of the class implementing the interface
Because you don't want to know the class: it may change, be external, or be configurable using an IoC container, and the programmer decided not to require even a common base class. The fewer assumptions you make about how someone implements the utility classes you use, the better.
Is there something major that I am missing about interfaces that my google skills aren't uncovering?
No, but a good book about OO programming would help more than random google snippets. This basically falls into the architecture area and .NET basics (for the first part).
It's good practice to program against interfaces rather than concrete classes. This question gives good reasons why, but some reasons include allowing the implementation to change (e.g. for testing).
Just because there's currently only 1 class that implements the interface doesn't mean that it can't change in the future.
Furthermore, I don't understand how these are being created and used.
This is called dependency injection and basically means that the class doesn't need to know how or where to instantiate its dependencies; someone else will handle it.
These are service interfaces, which encapsulate some kind of externality. You often have just a single implementation of them in your main project, but your tests use simpler implementations that don't depend on that external stuff.
For example if your payment service contacts paypal to verify payments, you don't want to do that in a test of unrelated code. Instead you might replace them with a simple implementation that always returns "payment worked" and check that the order process goes through, and another implementation that returns "payment failed" and check that the order process fails too.
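As a sketch, assuming a hypothetical single-method shape for IPaymentService (your real interface will differ):

// Hypothetical contract, invented for the example.
public interface IPaymentService
{
    bool VerifyPayment(Order order);
}

// Test double for the "payment worked" scenario.
public class AlwaysApprovesPaymentService : IPaymentService
{
    public bool VerifyPayment(Order order) { return true; }
}

// Test double for the "payment failed" scenario.
public class AlwaysDeclinesPaymentService : IPaymentService
{
    public bool VerifyPayment(Order order) { return false; }
}

A test then constructs the order-processing class with one of these and asserts on the outcome.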
To avoid depending on the implementation, you don't create instances yourself, you accept them in the constructor. Then the IoC container that creates your class will fill them in. Look up Inversion of Control.
Your project probably has some code that sets up the IoC container at startup, and that code contains information about which class to create when you want an implementation of a certain interface.
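For example, if the project happened to use Unity, that startup code would look roughly like this (a sketch; other containers such as StructureMap, Ninject or Autofac read much the same, and the concrete CommunicationService class here is a placeholder):

// Composition root, e.g. in Global.asax Application_Start.
var container = new UnityContainer();

// "When someone asks for the interface, construct this class."
container.RegisterType<IOrderService, OrderService>();
container.RegisterType<IPaymentService, PaymentService>();

// Lifetime can be configured too; this registration becomes a singleton.
container.RegisterType<ICommunicationService, CommunicationService>(
    new ContainerControlledLifetimeManager());

// Resolving the root service builds the whole object graph.
IOrderService orderService = container.Resolve<IOrderService>();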
As I understand it, interfaces in C# can be thought of as a contract or promise that a derived class must follow. This allows different objects to behave in different ways when an overridden method is called.
DI, how I understand it, offers the ability to reduce dependencies by being able to inject the dependency (usually through a container) through the ctor, property, or method.
It seems like they are two completely opposing forces between freedom and restraint. I can create a contract that says a derived class MUST follow certain guidelines, which is restrictive in nature, versus DI, which allows me to inject dependencies (I know that the dependency has to implement the interface, but still...), which is permissive in nature. I can make everything injectable and wildly change the class.
I guess my question is: what's your process for deciding how restrictive you want to be? When is it more important to use interfaces for polymorphism, and when should DI give you complete freedom? Where do you draw the line? Is it OK to use interfaces when you want the structure (methods and properties) to align among different derived classes, and DI when the parameters can be wildly different?
EDIT:
Maybe DI was the wrong example. Let's say I have an IPlugin and a plugin factory. All plugins need the same information, work the same, etc., so it would make sense to use an interface. Now, one plugin works the same but needs different parameters or different data, while ultimately keeping the same structure, i.e. Load, Run, etc.
I wanted to pass a command object that can expose the different parameters that each plugin will need (using DI), and then each plugin can use the properties of that command object. But the fact that I can inject a command object with wildly different parameters kinda breaks the whole idea of having a contract in the first place. Would that be kosher?
DI, how I understand it, offers the ability to reduce dependencies by being able to inject the dependency (usually through a container) through the ctor, property, or method.
Incorrect. Static code analysis shows the dependency is still there. DI just changes how you end up with an instance of the object. If your ctor, say, is expecting objects of a particular class, then you have a dependency on that type, but you also have strong coupling to a class, which is bad. However, if your ctor is expecting types of a certain interface, then you have a dependency on that contract definition but not on the actual implementation (loose coupling).
As I understand it, interfaces in C# can be thought of as a contract or promise that a derived class must follow. This allows different objects to behave in different ways when an overridden method is called.
Yes, but they are also a way to abstract a component by hiding unnecessary details, something that, say, a class cannot do. That's why in reasonably large systems it's nice to have a MyContracts.dll in which you define all your interfaces, alongside, say, a BusinessLogic.dll and a ClientStuff.dll. ClientStuff depends on the contracts but doesn't care about the actual implementation or the possible zillions of other dependencies BusinessLogic.dll may have (typical of a WPF or WCF application, say).
When is it more important to use interfaces for polymorphism, and when should DI give you complete freedom? Where do you draw the line? Is it OK to use interfaces when you want the structure (methods and properties) to align among different derived classes, and DI when the parameters can be wildly different?
DI does not offer any more freedom than a non-DI system. However, I think you might be a little confused over terminology or concepts. DI is always good, and injecting interfaces rather than class types is better, as I mentioned above.
I can create a contract that says a derived class MUST follow certain guidelines, which is restrictive in nature, versus DI, which allows me to inject dependencies
A contract is a contract is a contract, whether it is in the form of an interface or an abstract class. If the constructor (constructor injection) or property (property injection) is asking for a MyDbConnection, then that sets up a stronger requirement as to what can be injected than just expecting an IMyDbConnection.
I think you may have it wrong with regard to what DI does. DI does not always inject classes, and interfaces are not just for polymorphism; the latter is a common misconception. DI has nothing to do with being "restrictive". That is up to you, via the type you are expecting to have injected.
In fact, expecting a concrete class object, or something derived from an abstract class, to be injected is more "restrictive" than injecting an interface, by the very nature of how interfaces can be reused across more scenarios. E.g., not that you would be injecting one, but INotifyPropertyChanged pre-dated WPF yet now plays a huge role in WPF and MVVM.
Restrictive:
public EditPatientViewModel (SqlPersistenceService svc) {}
Not so restrictive:
public EditPatientViewModel (IPersistenceService svc) {}
Deciding between restriction and freedom (Interfaces and Dependency Injection)
That's completely up to you but if you inject interfaces then you will obtain:
decoupling between contract and implementation and all the baggage that goes with it
improved abstraction - hide away unnecessary details
I was reading "The Art of Unit Testing" by Roy Osherove and I think he answers my question.
He's talking about DI in testing and mentions that some people believe it hurts object-oriented design principles (namely, that it breaks encapsulation).
Object-oriented principles are there to enforce constraints on the end user of the API so that the object model is used properly and is protected from unintentional usage. Tests, for example, add another end user. Encapsulating these external dependencies somewhere without allowing anyone to change them, having private constructors or sealed classes, and having non-virtual methods are all classic signs of overprotective design.
I think my problem was that I felt too many injected dependencies were breaking encapsulation principles, but adding these dependencies caters to end users (like tests) that actually need the functionality, and doesn't break any encapsulation rules.
In most applications there are many cross-cutting concerns that need to be addressed across all layers, e.g. logging, message bus, configuration. What I've noticed is that in some classes these tend to completely blow up the constructor when the modules are injected using an IoC container.
public class MyService : IService
{
    public MyService(ILogger logger, IAppSettings settings, IEventBus eventBus...)
    {
    }
}
For the usual cases of constructor over-injection, I tend to refactor the concerns into building blocks that belong closely together, so I get fewer dependencies in a class. However, this is not possible with cross-cutting concerns.
Among logging frameworks, static factories / services seem to be very popular, e.g.
// Application root
MyLoggerService.SetFactory(log4NetFactory);
// Somewhere
MyLoggerService.GetLogger("name"); // returns a Log4NetLogger created by Log4NetFactory.
My question is: is this approach a good one for all kinds of cross-cutting stuff? What are the drawbacks if the code ends up looking like this:
public class MyService : IService
{
    private readonly IReallyNeedThat _dependency;

    public MyService(IReallyNeedThat dependency)
    {
        _dependency = dependency;
    }

    private readonly ILogger _logger = LoggerService.GetLogger("MyService");
    private readonly IEventBus _eventBus = EventBusService.GetEventBus();
    private readonly IConfiguration _configuration = ConfigurationService.GetConfiguration(Level.Roaming);
    private readonly IExceptionHandler _exceptionHandler = ExceptionPolicy.GetHandler();
    private readonly ITracer _tracer = TraceManager.GetDebugTracer();
}
Moving the dependencies out of the constructor doesn't solve the problem, because you don't lower the number of dependencies a class has, and chances are big that you are still violating the Single Responsibility Principle and the Open/Closed Principle, causing your code to be hard to test, hard to change and hard to maintain.
Instead, a good solution is often to pull those cross-cutting concerns out of your components and place them into components that are specially tailored for that cross-cutting concern and that wrap the original component. In other words: create decorators.
This will probably force you to change the design of your classes, because without generic abstractions that define sets of related services, you would have to define a decorator per abstraction, and that would cause a lot of code duplication, which is bad in almost all cases.
So instead, model your system around command/handlers and query/handlers and you'll be in a much better place. You can decorate each piece of business logic with a generic decorator that you define once and reuse all over the place. This keeps your system clean but still very flexible.
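A sketch of that idea: one generic abstraction for business operations plus a single, reusable logging decorator. The ICommandHandler<TCommand> shape is a common convention for this pattern, and the ILogger.Log method is an assumption for the example:

// One generic abstraction that all business operations implement.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Defined once, this decorator adds logging to ANY handler.
public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly ILogger logger;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee, ILogger logger)
    {
        this.decoratee = decoratee;
        this.logger = logger;
    }

    public void Handle(TCommand command)
    {
        // The cross-cutting concern lives here, not in the business class.
        this.logger.Log("Handling " + typeof(TCommand).Name);
        this.decoratee.Handle(command);
    }
}

The container can then wrap every ICommandHandler<T> registration in this decorator, so the business classes never see the logger at all.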
If you are more into TDD, you can easily guess which approach is better.
With dependency injection, your code becomes more (unit)testable. You can inject dependencies via some mocking framework and create your unit tests without much headache.
But in the case of static factories, since the factory classes are (hard-)wired into your class, there is no way to inject them from outside the class while unit testing.
Benefits of DI over static factories -
Concurrent Development - Think of a logging service that you are consuming which is being built by somebody else, while you are going to unit test your code (and you don't care about unit testing the logging service, since you assume it will be unit tested in its own right). You'd better use DI: inject the dependency using a mock object and you're done.
Speed - While unit testing your classes, you definitely wouldn't want them to take a long time (so that every single change to your main class gives you a coffee break ;) ). You definitely want your unit tests to run in the blink of an eye and report any errors. A static factory that depends on external resources (e.g. network/DB, file system) is going to take time. You'd better use DI: use a mock object and you're done.
Testability - DI helps isolate the client from its dependencies (promoting the use of interfaces), hence improving testability (via the use of mocks); see the sketch below.
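To make the testability point concrete, compare the two styles (FakeLogger is a hand-rolled test double, invented for this sketch):

// Hard-wired static factory: a test cannot intercept this call.
public class HardWiredService
{
    private readonly ILogger _logger = MyLoggerService.GetLogger("HardWiredService");
}

// Constructor injection: a test simply passes in a fake.
public class InjectedService
{
    private readonly ILogger _logger;

    public InjectedService(ILogger logger)
    {
        _logger = logger;
    }
}

// In a unit test:
var service = new InjectedService(new FakeLogger());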
I am trying to get rid of static classes, static helper methods and singleton classes in my code base. Currently, they are pretty much spread over the whole code, especially so for the utility classes and the logging library. This is mainly due to the need for mocking ability, as well as object-oriented design and development concerns, e.g. extensibility. I might also need to introduce some form of dependency injection in the future and would like to leave an open door for that.
Basically, the problem I have encountered is about the method of passing the commonly used references around. These are objects that are used by almost every class in the code base, such as the logging interface, the utility (helper) class interface and maybe an instance of a class that holds an internal common state for the assembly which most classes relate to.
There are two options, as far as I'm aware. One is to define a class (or an interface) that stores the common references, a context if you will, and pass the context to each object that is created. The other option is to pass each common reference to almost every class as a separate parameter which would increase the number of parameters of the class constructors.
Which one of these methods is better, what are the pros and cons of each, and is there a better method for this task?
I generally go with the context object approach, and pass the context object either to an object's constructor, or to a method -- depending on which one makes the most sense.
The context object pattern can take a few forms.
You can define an interface that has exactly the members you need, or you can generate a sort of container class. For example, when writing loosely-coupled components, I tend to have each component I implement have a matching interface, so that it can be reimplemented if desired. Then I register the objects on a "manager" object, something like this:
public interface IServiceManager
{
    T GetService<T>();
    T RequireService<T>();
    void RegisterService<T>(T service);
    void UnregisterService<T>(T service);
}
Behind the scenes there is a map from type to object, which allows me to extremely quickly assemble a large set of diverse components into a working whole. Each component asks for the others by interface, and the manager object is what glues them together. (If you correctly author your components, you can even swap out one service for another while the process is running!)
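A minimal sketch of what such a manager might look like behind the scenes (simplified: no thread safety, and it assumes reference-type services):

using System;
using System.Collections.Generic;

public class ServiceManager : IServiceManager
{
    // The map from service interface type to registered instance.
    private readonly Dictionary<Type, object> services = new Dictionary<Type, object>();

    public T GetService<T>()
    {
        object service;
        services.TryGetValue(typeof(T), out service);
        return (T)service; // null when nothing is registered
    }

    public T RequireService<T>()
    {
        object service;
        if (!services.TryGetValue(typeof(T), out service))
            throw new InvalidOperationException("Service not registered: " + typeof(T));
        return (T)service;
    }

    public void RegisterService<T>(T service)
    {
        services[typeof(T)] = service;
    }

    public void UnregisterService<T>(T service)
    {
        services.Remove(typeof(T));
    }
}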
One would register a service something along these lines:
class FooService : IFooService { }
// During process start-up:
serviceManager.RegisterService<IFooService>(new FooService());
There is more overhead with this approach than with the flat-interface approach due to the dictionary lookup, but it has allowed me to build very sophisticated systems that can be easily redeployed with different service implementations. (And, as is usual, any bottlenecks I encounter are never in looking up a service object from a dictionary, but somewhere else such as the database.)
You're going to get varied opinions, but generally passing a separate parameter to the constructor for each dependency is preferred for a few reasons:
It clearly defines the actual dependencies for a class - with a "context" you don't know which parts of the context are used without digging into the code.
Generally having a lot of parameters to a constructor is a design smell, so using constructor injection helps you sniff out design flaws.
When testing you can mock out individual dependencies versus having to mock an entire context.
I would suggest passing each dependency as a parameter to the constructor. This has great advantages for both dependency injection and unit testability (mocking).
Currently I'm trying to understand dependency injection better and I'm using ASP.NET MVC to work with it. You might see some other related questions from me ;)
Alright, I'll start with an example controller (of an example Contacts Manager ASP.NET MVC application):
public class ContactsController {
    ContactsManagerDb _db;

    public ContactsController() {
        _db = ContactsManagerDb();
    }
    //...Actions here
}
Alright, awesome, that's working. My actions can all use the database for CRUD operations. Now I've decided I want to add unit testing, and I've added another constructor to mock the database:
public class ContactsController {
    IContactsManagerDb _db;

    public ContactsController() {
        _db = ContactsManagerDb();
    }

    public ContactsController(IContactsManagerDb db) {
        _db = db;
    }
    //...Actions here
}
Awesome, that's working too: in my unit tests I can create my own implementation of IContactsManagerDb and unit test my controller.
Now, people usually make the following decision (and here is my actual question): get rid of the empty constructor, and use dependency injection to define which implementation to use.
So, using StructureMap, I've added the following injection rule:
x.For<IContactsManagerDb>().Use<ContactsManagerDb>();
And of course in my testing project I'm using a different IContactsManagerDb implementation:
x.For<IContactsManagerDb>().Use<MyTestingContactsManagerDb>();
But my question is: what problem have I solved, or what have I simplified, by using dependency injection in this specific case?
I fail to see any practical use of it now. I understand the HOW but not the WHY. What's the use of this? Can anyone add to this project, perhaps, and give an example of how this is more practical and useful?
The first example is not unit testable, so it is not good: it creates strong coupling between the different layers of your application and makes them less reusable. The second example is called poor man's dependency injection. It's also discussed here.
What is wrong with poor man's dependency injection is that the code is not self-documenting. It doesn't state its intent to the consumer. A consumer sees this code and could easily call the default constructor without passing any argument, whereas if there were no default constructor it would be immediately clear that this class absolutely requires some contract to be passed to its constructor in order to function normally. And it is really not up to the class to decide which specific implementation to choose; that is up to the consumer of the class.
Dependency injection is useful for 3 main reasons:
It is a method of decoupling interfaces and implementations.
It is good for reducing the amount of boilerplate / factory methods in an application.
It increases the modularity of packages.
As an example, consider a unit test that requires access to a class defined as an interface. In many cases, a unit test for an interface would have to invoke implementations of that interface; thus, if an implementation changed, so would the unit test. However, with DI, you can "inject" an interface's implementation at run time into the unit test using the injection API, so that changes to implementations only have to be handled by the injection framework, not by the individual classes that use those implementations.
Another example is in the web world: consider the coupling between service providers and service definitions. If a particular component needs access to a service, it is better to design against the interface than against a particular implementation of that service. Injection enables such design, again, by allowing you to dynamically add dependencies by referencing your injection framework.
Thus, the various couplings of classes to one another are moved out of factories and individual classes, and dealt with in a uniform, abstract, reusable, and easily-maintained manner when one has a good DI framework. The best tutorials on DI that I have seen are on Google's Guice tutorials, available on YouTube. Although these are not the same as your particular technology, the principles are identical.
First, your example won't compile as posted: _db = ContactsManagerDb(); is missing the new keyword, so it should read _db = new ContactsManagerDb();.
With that fixed, notice that the parameterless constructor still hard-wires the controller to the concrete ContactsManagerDb, which makes it irrelevant at best. You have to have the constructor that takes the interface argument anyway, so why not just use it all the time?
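The controller then shrinks to a single constructor, and the container (StructureMap, in your case) supplies the implementation:

public class ContactsController {
    private readonly IContactsManagerDb _db;

    public ContactsController(IContactsManagerDb db) {
        _db = db;
    }
    //...Actions here
}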
Dependency Injection is all about removing dependencies from the classes themselves. ContactsController doesn't need to know about ContactsManagerDb in order to use IContactsManagerDb to access the Contacts Manager.
I've been using IoC (mostly Unity) and Dependency Injection in .NET for some time now and I really like the pattern as a way to encourage creation of software classes with loose coupling and which should be easier to isolate for testing.
The approach I generally try to stick to is "Nikola's Five Laws of IoC" - in particular not injecting the container itself and only using constructor injection so that you can clearly see all the dependencies of a class from its constructor signature. Nikola does have an account on here but I'm not sure if he is still active.
Anyway, when I end up either violating one of the other laws or generally ending up with something that doesn't feel or look right, I have to question whether I'm missing something, could do it better, or simply shouldn't be using IoC for certain cases. With that in mind here are a few examples of this and I'd be grateful for any pointers or further discussion on these:
1. Classes with too many dependencies. ("Any class having more than 3 dependencies should be questioned for SRP violation.") I know this one comes up a lot in dependency injection questions, but after reading these I still don't have the Eureka moment that solves my problems:
a) In a large application I invariably find I need 3 dependencies just to access infrastructure (examples - logging, configuration, persistence) before I get to the specific dependencies needed for the class to get its (hopefully single responsibility) job done. I'm aware of the approach that would refactor and wrap such groups of dependencies into a single one, but I often find this becomes simply a facade for several other services rather than having any true responsibility of its own. Can certain infrastructure dependencies be ignored in the context of this rule, provided the class is deemed to still have a single responsibility?
b) Refactoring can add to this problem. Consider the fairly common task of breaking apart a class that has become a bit big: you move one area of functionality into a new class, and the first class becomes dependent on it. Assuming the first class still needs all the dependencies it had before, it now has one extra dependency. In this case I probably don't mind that this dependency is more tightly coupled, but it's still neater to have the container provide it (as opposed to using new ...()), which it can do even without the new dependency having its own interface.
c) In a one specific example I have a class responsible for running various different functions through the system every few minutes. As all the functions rightly belong in different areas, this class ends up with many dependencies just to be able to execute each function. I'm guessing in this case other approaches, possibly involving events, should be considered but so far I haven't tried to do it because I want to co-ordinate the order the tasks are run and in some cases apply logic involving outcomes along the way.
2. Once I'm using IoC within an application, it seems like almost every class I create that is used by another class ends up being registered in and/or injected by the container. Is this the expected outcome, or should some classes have nothing to do with IoC? The alternative of just having something new'd up within the code looks like a code smell, since it's then tightly coupled. This is kind of related to 1b above, too.
3. I have all my container initialisation done at application startup, registering types for each interface in the system. Some are deliberately single-instance lifetimes while others are new instances each time they are resolved. However, since the latter are dependencies of the former, in practice they become single instances too, since they are only resolved once, at construction time of the single instance. In many cases this doesn't matter, but in some cases I really want a different instance each time I perform an operation, so rather than being able to use the built-in container functionality, I'm forced to either i) take a factory dependency instead so I can force this behaviour, or ii) pass in the container so I can resolve each time. Both of these approaches are frowned upon in Nikola's guidance, but I see i) as the lesser of two evils and I do use it in some cases.
In a large application I invariably find I need 3 dependencies just to access infrastructure (examples - logging, configuration, persistence)
IMHO, infrastructure is not a dependency. I have no problem using a service locator to get a logger (private ILogger _logger = LogManager.GetLogger()).
However, persistence is not infrastructure from my point of view. It's a dependency. Break your class into smaller parts.
Refactoring can add to this problem.
Of course. You will get more dependencies until you have successfully refactored all your classes. Just hang in there and keep refactoring.
Do create the interfaces in a separate project (the Separated Interface pattern), so that consumers depend on the contracts instead of on the concrete classes.
In one specific example I have a class responsible for running various different functions through the system every few minutes. As all the functions rightly belong in different areas, this class ends up with many dependencies just to be able to execute each function.
Then you are taking the wrong approach. The task runner should not have a dependency on every task that it runs; it should be the other way around: each task should be registered in the runner (one common way is sketched below).
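A sketch of that inversion, where the container hands the runner every registered task (ITask and TaskRunner are illustrative names, not from the original post):

using System.Collections.Generic;

public interface ITask
{
    void Execute();
}

// The runner depends on one abstraction instead of many concrete areas.
public class TaskRunner
{
    private readonly IEnumerable<ITask> _tasks;

    // Most containers can inject all registered ITask implementations here.
    public TaskRunner(IEnumerable<ITask> tasks)
    {
        _tasks = tasks;
    }

    public void RunAll()
    {
        foreach (var task in _tasks)
            task.Execute();
    }
}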
Once I'm using IoC within an application, it seems like almost every class I create that is used by another class ends up being registered in and/or injected by the container.
I register everything except business objects, DTOs, etc. in my container.
I have all my container initialisation done at application startup, registering types for each interface in the system. Some are deliberately single-instance lifetimes while others are new instances each time they are resolved. However, since the latter are dependencies of the former, in practice they become single instances too, since they are only resolved once, at construction time of the single instance.
Don't mix lifetimes if you can avoid it, or don't take in short-lived dependencies. In this case you could use a simple messaging solution to update the single instances.
You might want to read my guidelines.
Let me answer question 3. Having a singleton depend on a transient is a problem that container profilers try to detect and warn about. Services should only depend on other services that have a lifetime greater than or equal to their own. Injecting a factory interface or delegate to solve this is in general a good solution, while passing in the container itself is a bad one, since you end up with the Service Locator anti-pattern.
Instead of injecting a factory, you can solve this by implementing a proxy. Here's an example:
public interface ITransientDependency
{
    void SomeAction();
}

public class Implementation : ITransientDependency
{
    public void SomeAction() { ... }
}
Using this definition, you can define a proxy class in the Composition Root based on the ITransientDependency:
public class TransientDependencyProxy<T> : ITransientDependency
    where T : ITransientDependency
{
    private readonly IUnityContainer container;

    public TransientDependencyProxy(IUnityContainer container)
    {
        this.container = container;
    }

    public void SomeAction()
    {
        this.container.Resolve<T>().SomeAction();
    }
}
Now you can register this TransientDependencyProxy<T> as singleton:
container.RegisterType<ITransientDependency,
TransientDependencyProxy<Implementation>>(
new ContainerControlledLifetimeManager());
While it is registered as singleton, it will still act as a transient, since it will forward its calls to a transient implementation.
This way you can completely hide from the rest of the application the fact that ITransientDependency needs to be a transient.
If you need this behavior for many different service types, it will get cumbersome to define proxies for each and every one of them. In that case you could try Unity's interception functionality, where you define a single interceptor that can do this for a wide range of service types.