We are building an application which has a number of integration touch points with other systems. We are effectively using Unity for all our dependency injection needs. The whole business layer has been built with an interface-driven approach, with the actual implementation being injected at an outer composition root during bootstrap of the application.
We wanted to handle the integration layer in an elegant manner. The business classes and repositories depend on IIntegrationController<A, B> interfaces. Several IIntegrationController<A, B> implementations together represent the integration with one target system in the background, forming an integration layer. Currently, we hook up everything in the composition root in one shot, at the beginning. The consumers of this interface are also registered up front with an appropriate InjectionConstructor and ResolvedParameter. Most types operate with a per-resolve lifetime, and the business classes consuming the IIntegrationController are also resolved separately for each request context.
Refer code below.
// IIntegrationController Family 1
// Currently the default registration for IIntegrationController types injected into the business classes
container.RegisterType<IIntegrationController<A, B>, Family1_IntegrationController<A, B>>();
container.RegisterType<IIntegrationController<C, D>, Family1_IntegrationController<C, D>>();
// IIntegrationController Family 2 (currently not registered)
// We want to be able to register this, or manage this set of mapping registrations separately from Family 1,
// and be able to hook these up dynamically instead of Family-1 on a per-resolve basis
container.RegisterType<IIntegrationController<A, B>, Family2_IntegrationController<A, B>>();
container.RegisterType<IIntegrationController<C, D>, Family2_IntegrationController<C, D>>();
// Repository/Business Class that consume IIntegrationControllers.
// There is a whole family of IIntegrationController classes being hooked in,
// and there are multiple implementations for the family (as shown above). A typical AbstractFactory scenario.
container.RegisterType(typeof(Repository<Z>), new PerResolveLifetimeManager(),
new InjectionConstructor(
new ResolvedParameter<IIntegrationController<A, B>>(),
new ResolvedParameter<IIntegrationController<C, D>>())
);
Problem Statement:
We want the ability to switch an entire family of IIntegrationController<A, B> at runtime. When the business class is being resolved, we want it to be injected with the right version of IIntegrationController<A, B>, based on a request parameter available in the context.
A "named" registration based solution would not be scalable, due to two reasons (an entire family of integration classes has to be switched, and it would require clunky name registrations and conditional resolution in the code making it hard to maintain).
The solution should work even when there is a chain/hierarchy of resolutions taking place, i.e. the direct consumer of the IIntegrationController is also resolved through Unity, as it is injected into another class dynamically.
We have tried DependencyOverride and ResolverOverride during resolution, but that would require the whole set of Family-2 IIntegrationController resolutions to be overridden, instead of just being able to switch the whole layer.
We understand that instead of injecting an IIntegrationController directly into the business class, an AbstractFactory may have to be injected, but we have not been able to get that working, and are not sure where the registration and resolution would happen. If the business class is hooked up with an AbstractFactory, we would first have to hook up the right factory per resolve.
Does this require overriding the InjectionFactory? This link suggests an approach, but we were unable to get it to work smoothly.
What's nice about your design is that you already have the right abstractions in place. You use generic abstractions, so the problem can be solved simply by applying the right patterns on top of your already SOLID design.
In other words, use a proxy:
// This class should be considered part of your composition root.
internal class IntegrationControllerDispatcher<TRequest, TResult>
: IIntegrationController<TRequest, TResult>
{
private readonly IUserContext userContext;
private readonly Family1_IntegrationController<TRequest, TResult> family1Controller;
private readonly Family2_IntegrationController<TRequest, TResult> family2Controller;
public IntegrationControllerDispatcher(
IUserContext userContext,
Family1_IntegrationController<TRequest, TResult> family1Controller,
Family2_IntegrationController<TRequest, TResult> family2Controller) {
this.userContext = userContext;
this.family1Controller = family1Controller;
this.family2Controller = family2Controller;
}
public TResult Handle(TRequest request) {
return this.GetController().Handle(request);
}
private IIntegrationController<TRequest, TResult> GetController() {
return this.userContext.IsInFamily("family1")
? this.family1Controller
: this.family2Controller;
}
}
With this class your whole configuration can be reduced to roughly this:
container.RegisterType<IUserContext, AspNetUserContext>();
container.RegisterType(
typeof(IIntegrationController<,>),
typeof(IntegrationControllerDispatcher<,>));
container.RegisterType(typeof(Repository<>), typeof(Repository<>));
Note the following:
Note the use of the registrations that do an open-generic mapping. You don't have to register ALL your closed versions one by one. You can do it with one line of code.
Also note that the types for the different families aren't registered. Unity can resolve them automatically, because our IntegrationControllerDispatcher depends on them directly. This class is a piece of infrastructure logic and should be placed inside your Composition Root.
Note that the decision to use a specific family implementation is not made at the time the object graph is built; it is made at runtime, because the value that determines it is a runtime value. Trying to determine this while the object graph is being built will only complicate things and make it much harder to verify your object graphs.
Furthermore, this runtime value is abstracted behind a function call and placed behind an abstraction (IUserContext.IsInFamily in this case, but that's just an example of course).
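For completeness, here is a rough sketch of what the IUserContext abstraction and its ASP.NET-based implementation might look like. This is only an illustration: the request parameter name "integrationFamily" is an assumption; use whatever runtime value your request context actually exposes.
using System;
using System.Web;
// Abstraction over the runtime value that selects the integration family.
public interface IUserContext
{
    bool IsInFamily(string familyName);
}
// Part of the composition root: reads the family from the current request.
internal class AspNetUserContext : IUserContext
{
    public bool IsInFamily(string familyName)
    {
        // "integrationFamily" is a hypothetical request parameter name.
        var requested = HttpContext.Current?.Request["integrationFamily"];
        return string.Equals(requested, familyName, StringComparison.OrdinalIgnoreCase);
    }
}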
With .NET Core you can register "services", which, as I understand it, simply means you can map types to concrete classes.
As such, I decided it's about time I learnt DI and practised it. I understand the concept, and with testing it is massively beneficial. However, what confuses me is the idea of registering services and whether it's actually needed.
For example, if I have:
public class MyClass
{
public MyClass(IDataContext context)
{
... store it
}
}
Then this means I can inject any class that implements IDataContext, allowing for fakes and mocks in testing. But why would I register a service and map IDataContext to a concrete class in the startup? Is there something wrong with just using the following in other methods:
DataContext dc = new DataContext(); // concrete
var c = new MyClass(dc);
Edit
This question was around the point of using the container (services) rather than why use an interface in the constructor.
Now those classes where you put this code
public class MyService
{
public void DoSomething()
{
DataContext dc = new DataContext(); // concrete
var c = new MyClass(dc);
c.DoSomething();
}
}
have a hard dependency on DataContext and MyClass. So you can't test MyService in isolation. Classes shouldn't care how other classes do what they do, they should only care that they do what they say they're going to do. That's why we use interfaces. This is separation of concerns. Once you've achieved this, you can unit test any piece of code in isolation without depending on the behavior of outside code.
Registering your dependencies up front in one location is also cleaner and means you can swap dependencies out by changing one location instead of hunting down all the usages and changing them individually.
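For example, in ASP.NET Core that single location is typically ConfigureServices. A minimal sketch using the types from this question (the lifetimes are chosen arbitrarily here; pick whatever suits each dependency):
// Requires the usual Microsoft.Extensions.DependencyInjection usings.
// In Startup.ConfigureServices (or on the WebApplication builder in newer templates).
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IDataContext, DataContext>(); // swap the concrete type here, in one place
    services.AddScoped<IMyClass, MyClass>();
    services.AddScoped<MyService>();
}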
In my code example at the top, MyService requires the usage of both DataContext and MyClass. Instead, it should be like this:
public class MyService
{
private readonly IMyClass _myClass;
public MyService(IMyClass myClass)
{
_myClass = myClass;
}
public void DoSomething()
{
_myClass.DoSomething();
}
}
public interface IMyClass
{
void DoSomething();
}
public class MyClass : IMyClass
{
private readonly IDataContext _context;
public MyClass(IDataContext context)
{
_context = context;
}
public void DoSomething()
{
_context.SaveSomeData();
}
}
Now MyService isn't dependent on DataContext at all; it doesn't need to worry about it, because that's not its job. It does need something that fulfills IMyClass, but it doesn't care how that is implemented. MyService.DoSomething() can now be unit tested without depending on the behavior of other code.
If you weren't using a container to handle satisfying the dependencies, then you're probably introducing hard dependencies into your classes, which defeats the entire point of coding against an interface in the first place.
Testing in isolation is important. It's not a unit test if you're testing more than one finite piece of code; it's an integration test (and those have their own value, for different reasons). Unit tests make it quick and easy to verify a finite block of code works as expected. When a unit test isn't passing, you know right where the problem is and don't have to search hard to find it. So if a unit test depends on other types, or even other systems (likely in this case, since DataContext is specific to a particular database), then we can't test MyService without touching a database. And that means the database must be in a particular state for testing, which means the test likely isn't idempotent (you can't run it over and over and expect the same results).
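As a rough illustration of what that isolated test could look like (NUnit and Moq are used here purely as examples; any test and mocking framework would do, and the test name is made up):
using Moq;
using NUnit.Framework;
[TestFixture]
public class MyServiceTests
{
    [Test]
    public void DoSomething_DelegatesToMyClass()
    {
        // Arrange: MyService only sees the IMyClass abstraction, so we can fake it.
        var myClass = new Mock<IMyClass>();
        var service = new MyService(myClass.Object);
        // Act
        service.DoSomething();
        // Assert: no DataContext, no database, just the interaction we care about.
        myClass.Verify(x => x.DoSomething(), Times.Once());
    }
}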
For more information, I suggest you watch Deep Dive into Dependency Injection and Writing Decoupled Quality Code and Testable Software by Miguel Castro. The best point he makes is that if you have to use new to create an instance of an object, you've tightly coupled things. Avoid using new, and Dependency Injection is a pattern that enables you to avoid it. (Using new isn't always bad; I'm comfortable using new for POCO models.)
You can inject your dependencies manually. However, this can become a very tedious task. As your services get bigger, you will get more dependencies, and each dependency can have multiple dependencies of its own.
If you change your dependencies, you need to adjust all usages. One of the main advantages of a DI container is that the container does all the dependency resolving. No manual work required. Just register the service and use it wherever, and as often as, you want.
For small projects this seems like too much overhead, but if your project grows a little, you will really appreciate this.
For fixed dependencies, which are strongly related and not likely to change, injecting them manually is fine.
Using a DI container has another advantage. The DI container will control the life cycle of its services. A service could be a singleton, transient (each request will get a new instance) or have scoped life time.
For example, if you have a transactional workflow, the scope could match the transaction. While in the transaction, requests to a service will return the same instance.
The next transaction will open a new scope and therefore will get new instances.
This allows you to either discard or commit all instances of one transaction, but prevents a following transaction from using resources from the previous one.
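A minimal sketch of these lifetimes with the built-in .NET container. This assumes an IServiceCollection named services and a built IServiceProvider named provider, and IClock, SystemClock and ReportBuilder are illustrative type names, not anything from the posts above:
// Requires Microsoft.Extensions.DependencyInjection.
// Lifetimes are declared once, at registration time.
services.AddSingleton<IClock, SystemClock>();           // one instance for the whole application
services.AddTransient<IReportBuilder, ReportBuilder>(); // a new instance for every resolve
services.AddScoped<IDataContext, DataContext>();        // one instance per scope
// A transactional workflow can then be modelled as a scope:
using (var scope = provider.CreateScope())
{
    // Every resolve inside this block shares the same scoped instances.
    var context = scope.ServiceProvider.GetRequiredService<IDataContext>();
    // ... do the transactional work, then commit or discard ...
}
// The next scope (the next transaction) gets fresh scoped instances.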
You're right, you can create all instances manually. In small projects that's common practice. The place in your project where you link the classes together is called the Composition Root, and what you are doing is Constructor Injection.
IoC libraries can simplify this code, especially in complex cases like lifetime scopes and group registration.
Inversion of control (of constructing an object)
The idea of this pattern is that when you want to construct an object, you only need to know its type and nothing about its dependencies or parameters.
Dependency injection
This pattern takes the inversion of control pattern a step further by enabling you to inject an object directly into, for example, a constructor.
Again, you only need to know the type of the object you want to get, and the dependency container will inject an object.
You also don't need to know whether a new object is constructed or you get an already existing reference.
The most commonly used type of dependency injection is constructor injection, but you could also inject your type into other places, such as methods.
Separation of concerns
Generally you register a type against an interface to get rid of the dependency on the concrete type.
This is very helpful for mocking types when testing, and it helps you follow the Open/Closed Principle.
Martin Fowler on "Inversion of Control Containers and the Dependency Injection pattern".
I have a class which contains a few dependencies (all interfaces). Basically the behavior of the class is defined through the implementations of those interfaces. I want to be able to have a "builder" which can create instances of this class with different implementations of the interfaces (or parts of it). Something like this:
public class API
{
private readonly ISomeInterface _someInterface;
private readonly ISomeOtherInterface _someOtherInterface;
private readonly ISomeAnotherInterface _someAnotherInterface;
public API(ISomeInterface someInterface, ISomeOtherInterface someOtherInterface, ISomeAnotherInterface someAnotherInterface)
{ /* implementation omitted */ }
//Example method
public void DoSomethingWhichDependsOnOneOrMoreInterfaces()
{
//somecode
if (_someInterface != null)
_someInterface.SomeMethode();
}
}
public class MyApiBuilder
{
// implementation omitted
public API CreateAPI(SomeEnum type)
{
switch (type)
{
case SomeEnum.SpecificAPI32:
var specificImplementationOfSomeInterface = new ImplementsISomeInterface();
specificImplementationOfSomeInterface.Setup("someSetup");
var specificImplementationOfOtherInterface = new ImplementsISomeOtherInterface();
return new API(specificImplementationOfSomeInterface, specificImplementationOfOtherInterface, null);
default:
throw new ArgumentOutOfRangeException(nameof(type));
}
}
}
What is the most elegant way of implementing this (if it makes sense at all)? I was first thinking of the Builder design pattern, but as far as I understood it, that's slightly different.
[Edit]
As pointed out, the way I am implementing it is a factory method, but I am not fully satisfied with it. The API can contain a variety of different interfaces which can be totally independent of each other, but some may depend on others (though not necessarily). I would like to give the user (the developer using this "API") as much freedom as possible in creating the API he wants to use. Let me try to explain what I am basically up to:
Let's say I am developing a plugin for a game engine which can post achievements and other stuff to various social media channels. So basically there could be an interface which implements the access to Twitter, Facebook, YouTube, whatever, or some custom server. This custom server could need some kind of authentication process. The user should be able to build the API at start-up in a nice (hmm, fluent is nice...) way. So basically something like this:
var myTotallyForMyNeedsBuildAPI = API.CreateCustomApi().With(Api.Twitter).And(Api.Facebook).And(Api.Youtube).And(Api.CustomServer).With(Security.Authentification);
I actually do not know how to make that fluent but something like this would be nice.
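Purely as an illustrative sketch of how that fluent surface could be shaped (the Api and Security enums come from the snippet above, ApiBuilder is a made-up name, and the Build mapping is deliberately left open):
using System;
using System.Collections.Generic;
public class ApiBuilder
{
    private readonly HashSet<Api> channels = new HashSet<Api>();
    private readonly HashSet<Security> security = new HashSet<Security>();
    public static ApiBuilder CreateCustomApi() => new ApiBuilder();
    public ApiBuilder With(Api channel) { channels.Add(channel); return this; }
    public ApiBuilder And(Api channel) => With(channel);
    public ApiBuilder With(Security option) { security.Add(option); return this; }
    public API Build()
    {
        // Map the selected enum values to concrete interface implementations here
        // (e.g. Api.Twitter -> ImplementsISomeInterface) and call the API constructor.
        throw new NotImplementedException("sketch only");
    }
}
// Usage, corresponding to the desired chain above:
// var api = ApiBuilder.CreateCustomApi().With(Api.Twitter).And(Api.Facebook).With(Security.Authentification).Build();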
It's good practice to use Dependency Injection when you want to give the programmer the ability to compose the object with the desired configuration.
Check out MEF and Unity, which are great frameworks for this job.
For example in Unity you can write this:
// Introducing an implementation for ISomeInterface
container.RegisterType<ISomeInterface, SomeImplementation>();
// Introducing an implementation for ISomeOtherInterface
container.RegisterType<ISomeOtherInterface, SomeOtherImplementation>();
// Introducing an implementation for ISomeAnotherInterface
container.RegisterType<ISomeAnotherInterface, SomeAnotherImplementation>();
container.RegisterType<API, API>();
// and finally unity will compose it for you with desired configurations:
var api = container.Resolve<API>();
In this scenario the api will be composed with desired implementations.
What you have implemented is the Factory Method pattern.
It's perfectly fine for what you are trying to do, but you could have a look at the other factory patterns (e.g. here) based on your context and how you think your code will evolve in the future.
Anyway, I would also consider not tying these three interfaces together in a single factory. If they are really so tightly coupled that they must be consumed and built together, maybe they should not be three different interfaces in the first place, or at least all three should be implemented by the same class, so your factory can build the appropriate class with the proper implementation of each.
Probably what you are after is the Decorator pattern.
In your API class you invoke each interface only if it has been provided to the API instance, which is the behaviour of the Decorator pattern.
With this pattern you obtain a modular implementation that allow you to add multiple behaviours to your API.
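To make that concrete, here is a bare-bones sketch of the idea; all of the names (ISocialPublisher, NullPublisher, TwitterPublisher, FacebookPublisher) are made up for illustration, and each decorator adds one channel and forwards to the instance it wraps:
public interface ISocialPublisher
{
    void Publish(string achievement);
}
public class NullPublisher : ISocialPublisher
{
    public void Publish(string achievement) { /* end of the chain; does nothing */ }
}
// Each decorator wraps another publisher and adds one behaviour.
public class TwitterPublisher : ISocialPublisher
{
    private readonly ISocialPublisher inner;
    public TwitterPublisher(ISocialPublisher inner) { this.inner = inner; }
    public void Publish(string achievement)
    {
        // post the achievement to Twitter here ...
        inner.Publish(achievement); // then let the rest of the chain run
    }
}
// Composition: stack only the channels you want, e.g.
// var api = new TwitterPublisher(new FacebookPublisher(new NullPublisher()));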
There is an enormous amount of discussion on this topic, but everyone seems to miss an obvious answer. I'd like help vetting this "obvious" IOC container solution. The various conversations assume run-time selection of strategies and the use of an IOC container. I will continue with these assumptions.
I also want to add the assumption that it is not a single strategy that must be selected. Rather, I might need to retrieve an object-graph that has several strategies found throughout the nodes of the graph.
I will first quickly outline the two commonly proposed solutions, and then I will present the "obvious" alternative that I'd like to see an IOC container support. I will be using Unity as the example syntax, though my question is not specific to Unity.
Named Bindings
This approach requires that every new strategy has a binding manually added:
Container.RegisterType<IDataAccess, DefaultAccessor>();
Container.RegisterType<IDataAccess, AlphaAccessor>("Alpha");
Container.RegisterType<IDataAccess, BetaAccessor>("Beta");
...and then the correct strategy is explicitly requested:
var strategy = Container.Resolve<IDataAccess>("Alpha");
Pros: Simple, and supported by all IOC Containers
Cons:
Typically binds the caller to the IOC Container, and certainly requires the caller to know something about the strategy (such as the name "Alpha").
Every new strategy must be manually added to the list of bindings.
This approach is not suitable for handling multiple strategies in an object graph. In short, it does not meet requirements.
Abstract Factory
To illustrate this approach, assume the following classes:
public class DataAccessFactory{
public IDataAccess Create(string strategy){
// insert appropriate creation logic here.
throw new NotImplementedException();
}
public IDataAccess Create(){
// Choose strategy through ambient context, such as thread-local-storage.
throw new NotImplementedException();
}
}
public class Consumer
{
public Consumer(DataAccessFactory datafactory)
{
//variation #1. Not sufficient to meet requirements.
var myDataStrategy = datafactory.Create("Alpha");
//variation #2. This is sufficient for requirements.
myDataStrategy = datafactory.Create();
}
}
The IOC Container then has the following binding:
Container.RegisterType<DataAccessFactory>();
Pros:
The IOC Container is hidden from consumers
The "ambient context" is closer to the desired result but...
Cons:
The constructors of each strategy might have different needs. But now the responsibility of constructor injection has been transferred to the abstract factory from the container. In other words, every time a new strategy is added it may be necessary to modify the corresponding abstract factory.
Heavy use of strategies means heavy amounts of creating abstract factories. It would be nice if the IOC container simply gave a little more help.
If this is a multi-threaded application and the "ambient context" is indeed provided by thread-local-storage, then by the time an object is using an injected abstract-factory to create the type it needs, it may be operating on a different thread which no longer has access to the necessary thread-local-storage value.
Type Switching / Dynamic Binding
This is the approach that I want to use instead of the above two. It involves providing a delegate as part of the IOC container binding. Most IOC containers already have this ability, but this specific approach has an important subtle difference.
The syntax would be something like this:
Container.RegisterType(typeof(IDataAccess),
new InjectionStrategy((c) =>
{
//Access ambient context (perhaps thread-local-storage) to determine
//the type of the strategy...
Type selectedStrategy = ...;
return selectedStrategy;
})
);
Notice that the InjectionStrategy is not returning an instance of IDataAccess. Instead it is returning a type description that implements IDataAccess. The IOC Container would then perform the usual creation and "build up" of that type, which might include other strategies being selected.
This is in contrast to the standard type-to-delegate binding which, in the case of Unity, is coded like this:
Container.RegisterType(typeof(IDataAccess),
new InjectionFactory((c) =>
{
//Access ambient context (perhaps thread-local-storage) to determine
//the type of the strategy...
IDataAccess instanceOfSelectedStrategy = ...;
return instanceOfSelectedStrategy;
})
);
The above actually comes close to satisfying the overall need, but definitely falls short of the hypothetical Unity InjectionStrategy.
Focusing on the first sample (which used a hypothetical Unity InjectionStrategy):
Pros:
Hides the container
No need either to create endless abstract factories, or have consumers fiddle with them.
No need to manually adjust IOC container bindings whenever a new strategy is available.
Allows the container to retain lifetime management controls.
Supports a pure DI story, which means that a multi-threaded app can create the entire object-graph on a thread with the proper thread-local-storage settings.
Cons:
Because the Type returned by the strategy was not available when the initial IOC container bindings were created, it means there may be a tiny performance hit the first time that type is returned. In other words, the container must on-the-spot reflect the type to discover what constructors it has, so that it knows how to inject it. All subsequent occurrences of that type should be fast, because the container can cache the results it found from the first time. This is hardly a "con" worth mentioning, but I'm trying for full-disclosure.
???
Is there an existing IOC container that can behave this way? Anyone have a Unity custom injection class that achieves this effect?
As far as I can tell, this question is about run-time selection or mapping of one of several candidate Strategies.
There's no reason to rely on a DI Container to do this, as there are at least three ways to do this in a container-agnostic way:
Use a Metadata Role Hint
Use a Role Interface Role Hint
Use a Partial Type Name Role Hint
My personal preference is the Partial Type Name Role Hint.
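As a container-agnostic sketch of that last option, loosely based on the question's IDataAccess example (this is a simplification for illustration, not the full approach from the article): resolve all candidates and pick the one whose type name matches the runtime value.
using System;
using System.Collections.Generic;
using System.Linq;
public class DataAccessSelector
{
    private readonly IEnumerable<IDataAccess> candidates;
    // The container injects every registered IDataAccess implementation.
    public DataAccessSelector(IEnumerable<IDataAccess> candidates)
    {
        this.candidates = candidates;
    }
    public IDataAccess Select(string strategyName)
    {
        // "Alpha" picks AlphaAccessor, "Beta" picks BetaAccessor, and so on.
        return this.candidates.Single(c =>
            c.GetType().Name.StartsWith(strategyName, StringComparison.OrdinalIgnoreCase));
    }
}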
I have achieved this requirement in many forms over the last couple of years. Firstly let's pull the main points I can see in your post
assume run-time selection of strategies and the use of an IOC container ... add the assumption that it is not a single strategy that must be selected. Rather, I might need to retrieve an object-graph that has several strategies ... [must not] binds the caller to the IOC Container ... Every new strategy must [not need to] be manually added to the list of bindings ... It would be nice if the IOC container simply gave a little more help.
I have used Simple Injector as my container of choice for some time now and one of the drivers for this decision is that it has extensive support for generics. It is through this feature that we will implement your requirements.
I'm a firm believer that the code should speak for itself so I'll jump right in ...
I have defined an extra class ContainerResolvedClass<T> to demonstrate that Simple Injector finds the right implementation(s) and successfully injects them into a constructor. That is the only reason for the class ContainerResolvedClass<T>. (This class exposes the handlers that are injected into it for test purposes via result.Handlers.)
This first test requires that we get one implementation back for the fictional class Type1:
[Test]
public void CompositeHandlerForType1_Resolves_WithAlphaHandler()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type1>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(1));
Assert.That(handlers.Contains(typeof(AlphaHandler<Type1>)), Is.True);
}
This second test requires that we get one implementation back for the fictional class Type2:
[Test]
public void CompositeHandlerForType2_Resolves_WithAlphaHandler()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type2>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(1));
Assert.That(handlers.Contains(typeof(BetaHandler<Type2>)), Is.True);
}
This third test requires that we get two implementations back for the fictional class Type3:
[Test]
public void CompositeHandlerForType3_Resolves_WithAlphaAndBetaHandlers()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type3>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(2));
Assert.That(handlers.Contains(typeof(AlphaHandler<Type3>)), Is.True);
Assert.That(handlers.Contains(typeof(BetaHandler<Type3>)), Is.True);
}
These tests seem to meet your requirements and, best of all, no containers are harmed in the solution.
The trick is to use a combination of parameter objects and marker interfaces. The parameter objects contain the data for the behaviour (i.e. the IHandler's) and the marker interfaces govern which behaviours act on which parameter objects.
Here are the marker interfaces and parameter objects - you'll note that Type3 is marked with both marker interfaces:
private interface IAlpha { }
private interface IBeta { }
private class Type1 : IAlpha { }
private class Type2 : IBeta { }
private class Type3 : IAlpha, IBeta { }
Here are the behaviours (IHandler<T>'s):
private interface IHandler<T> { }
private class AlphaHandler<TAlpha> : IHandler<TAlpha> where TAlpha : IAlpha { }
private class BetaHandler<TBeta> : IHandler<TBeta> where TBeta : IBeta { }
This is the single method that will find all implementations of an open generic:
public IEnumerable<Type> GetLoadedOpenGenericImplementations(Type type)
{
var types =
from assembly in AppDomain.CurrentDomain.GetAssemblies()
from t in assembly.GetTypes()
where !t.IsAbstract
from i in t.GetInterfaces()
where i.IsGenericType
where i.GetGenericTypeDefinition() == type
select t;
return types;
}
And this is the code that configures the container for our tests:
private Container ContainerFactory()
{
var container = new Container();
var types = this.GetLoadedOpenGenericImplementations(typeof(IHandler<>));
container.RegisterAllOpenGeneric(typeof(IHandler<>), types);
container.RegisterOpenGeneric(
typeof(ContainerResolvedClass<>),
typeof(ContainerResolvedClass<>));
return container;
}
And finally, the test class ContainerResolvedClass<>
private class ContainerResolvedClass<T>
{
public readonly IEnumerable<IHandler<T>> Handlers;
public ContainerResolvedClass(IEnumerable<IHandler<T>> handlers)
{
this.Handlers = handlers;
}
}
I realise this post is quite long, but I hope it clearly demonstrates a possible solution to your problem ...
This is a late response but maybe it will help others.
I have a pretty simple approach: I create a StrategyResolver so as not to depend directly on Unity.
public class StrategyResolver : IStrategyResolver
{
private IUnityContainer container;
public StrategyResolver(IUnityContainer unityContainer)
{
this.container = unityContainer;
}
public T Resolve<T>(string namedStrategy)
{
return this.container.Resolve<T>(namedStrategy);
}
}
Usage:
public class SomeClass: ISomeInterface
{
private IStrategyResolver strategyResolver;
public SomeClass(IStrategyResolver stratResolver)
{
this.strategyResolver = stratResolver;
}
public void Process(SomeDto dto)
{
IActionHandler actionHandler = this.strategyResolver.Resolve<IActionHandler>(dto.SomeProperty);
actionHandler.Handle(dto);
}
}
Registration:
container.RegisterType<IActionHandler, ActionOne>("One");
container.RegisterType<IActionHandler, ActionTwo>("Two");
container.RegisterType<IStrategyResolver, StrategyResolver>();
container.RegisterType<ISomeInterface, SomeClass>();
Now, the nice thing about this is that I will never have to touch the StrategyResolver ever again when adding new strategies in the future.
It's very simple, very clean, and I kept the dependency on Unity to a strict minimum. The only time I would have to touch the StrategyResolver is if I decide to change container technology, which is very unlikely to happen.
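For example, supporting a hypothetical third strategy is just one more registration; neither StrategyResolver nor SomeClass has to change:
container.RegisterType<IActionHandler, ActionThree>("Three"); // ActionThree is a hypothetical new strategy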
Hope this helps!
I generally use a combination of your Abstract Factory and Named Bindings options. After trying many different approaches, I find this approach to be a decent balance.
What I do is create a factory that essentially wraps the instance of the container. See the section in Mark's article called Container-based Factory. As he suggests, I make this factory part of the composition root.
To make my code a little cleaner and less "magic string" based, I use an enum to denote the different possible strategies, and use the .ToString() method to register and resolve.
From your Cons of these approaches:
Typically binds the caller to the IOC Container
In this approach, the container is referenced in the factory, which is part of the Composition Root, so this is no longer an issue (in my opinion).
. . . and certainly requires the caller to know something about the strategy (such as the name "Alpha").
Every new strategy must be manually added to the list of bindings. This approach is not suitable for handling multiple strategies in an object graph. In short, it does not meet requirements.
At some point, code needs to be written to acknowledge the mapping between the structure that provides the implementation (container, provider, factory, etc.) and the code that requires it. I don't think you can get around this unless you want to use something that is purely convention-based.
The constructors of each strategy might have different needs. But now the responsibility of constructor injection has been transferred to the abstract factory from the container. In other words, every time a new strategy is added it may be necessary to modify the corresponding abstract factory.
This approach solves this concern completely.
Heavy use of strategies means heavy amounts of creating abstract factories.[...]
Yes, you will need one abstract factory for each set of strategies.
If this is a multi-threaded application and the "ambient context" is indeed provided by thread-local-storage, then by the time an object is using an injected abstract-factory to create the type it needs, it may be operating on a different thread which no longer has access to the necessary thread-local-storage value.
This will no longer be an issue since thread-local storage will not be used.
I don't feel that there is a perfect solution, but this approach has worked well for me.
I had something of a convenient service locator anti-pattern in a previous game project. I'd like to replace this with dependency injection. autofac looks like the most likely DI container for me as it seems to have relevant features - but I can't figure out how to achieve what I'm looking for.
Existing Approach
Rather than a single service locator, I had a service locator which could delegate to its parent (in effect providing "scoped" services):
class ServiceLocator {
ServiceLocator _parent;
Dictionary<Type, object> _registered = new Dictionary<Type, object>();
public ServiceLocator(ServiceLocator parent = null) {
_parent = parent;
}
public void Register<T>(T service) {
_registered.Add(typeof(T), service);
}
public T Get<T>() {
object service;
if (_registered.TryGetValue(typeof(T), out service)) {
return (T)service;
}
if (_parent == null) {
throw new InvalidOperationException("Service not registered: " + typeof(T));
}
return _parent.Get<T>();
}
}
Simplifying for clarity, the game consisted of a tree of Component-derived classes:
abstract class Component {
protected ServiceLocator _ownServices;
protected List<Component> _components = new List<Component>();
...
public Component(ServiceLocator parentServices) {
_ownServices = new ServiceLocator(parentServices);
}
...
}
So I could (and did) build tree structures like:
Game
- Audio : IAudioService
- TitleScreen : Screen
- GameplayScreen : Screen
  - ShootingComponent : IShootingService
  - NavigationComponent : INavigationService
  - AIComponent (uses IAudioService, IShootingService and INavigationService)
And each component could simply call the ServiceLocator with which it's constructed to find all the services it needs.
Benefits:
Components don't have to care who implements the services they use or where those services live; so long as those services' lifetimes are equal to or greater than their own.
Multiple components can share the same service, but that service can exist only as long as it needs to. In particular, we can Dispose() a whole portion of the hierarchy when the player quits a level, which is far easier than having components rebuild complex data structures to adjust to the idea that they're now in a completely new level.
Drawbacks:
As Mark Seeman points out, Service Locator is an Anti-Pattern.
Some components would instantiate service providers purely because I (the programmer) know that nested components need that service, or I (the programmer) know that the game has to have e.g. AI running in the game world, not because the instantiator requires that service per se.
Goal
In the spirit of DI, I would like to remove all knowledge of "service locators" and "scopes" from Components. So they would receive (via DI) constructor parameters for each service they consume. To keep this knowledge out of the components, the composition root will have to specify, for each component:
Whether instantiating a specific type of component creates a new scope
Within that scope, which services are available.
I want to write the intuitive:
class AIComponent
{
public AIComponent(IAudioService audio, IShootingService shooting, INavigationService navigation)
{
...
}
}
And be able to specify in the composition root that
IAudioService is implemented by the Audio class and you should create/obtain a singleton (I can do this!)
IShootingService is implemented by ShootingComponent and there should be one of those created/obtained per Screen
INavigationService as per IShootingService
I must confess I'm completely lost when it comes to the latter two. I won't list my numerous abortive Autofac-based attempts here, as I've made a few dozen over a long period and none of them were remotely functional. I have read the documentation at length; I know lifetime scopes and Owned<> are in the area I'm looking at, but I can't see how to transparently inject scoped dependencies the way I'm describing. Yet I feel that DI in general is supposed to facilitate exactly what I'm trying to do!
If this is sane, how can I achieve this? Or is this just diabolical? If so, how would you structure such an application making good use of DI to avoid passing objects around recursively, when the lifetimes of those objects vary depending on the context in which the object is being used?
LifetimeScopes sound like the answer. I think what you are basically doing is tying lifetime scopes to screens. So ShootingComponent and friends would be registered with .InstancePerMatchingLifetimeScope("Screen"). The trick is then making it so that each screen is created in a new LifetimeScope tagged as "Screen." My first thought would be to make a screen factory like so:
public class ScreenFactory
{
private readonly ILifetimeScope _parent;
public ScreenFactory(ILifetimeScope parent) { _parent = parent; }
public TScreen CreateScreen<TScreen>() where TScreen : Screen
{
var screenScope = _parent.BeginLifetimeScope("Screen");
var screen = screenScope.Resolve<TScreen>();
screen.Closed += () => screenScope.Dispose();
return screen;
}
}
This is totally untested, but I think the concept makes sense.
Coincidentally currently I am working with similar requirements (Using Autofac) and here is what I came up with so far:
First of all start using modules. They are an excellent way of managing your dependencies and configuration.
Define your dependencies with appropriate lifetimes: IAudioService as singleton and IShootingService as per lifetime scope.
Make sure your lifetime-scoped services also implement IDisposable to ensure proper clean-up.
Create a thin wrapper to manage all your lifetimes in a simple embedded framework: sandwich your game levels between Begin() and End() methods. (This is how I did it; I'm sure you can find a better way of doing this in your own structure.)
(optional) Create a 'core' module to keep your generic dependencies (i.e. IAudioService) and separate modules for other islands of dependencies (which might be depending on different implementations of the same interface for example)
Here is an example of how I did it:
public class ScopedObjects
{
private readonly ILifetimeScope _c;
private readonly IModule _m;
private ILifetimeScope _scope;
public ScopedObjects(ILifetimeScope container, IModule module)
{
_c = container;
_m = module;
}
public void Begin()
{
_scope = _c.BeginLifetimeScope(b => b.RegisterModule(_m));
}
public T Resolve<T>()
{
return _scope.Resolve<T>();
}
public void End()
{
_scope.Dispose();
}
}
In the example you gave, I would put the resolution of AIComponent between Begin and End calls of the above class when going through levels.
As I said, I'm sure you will be able to come up with a better way of doing it in your development structure, I hope this gives you the basic idea as to how you can implement it, assuming my experience is to be considered a 'good' way of doing this.
Good luck.
I am currently weighing up the advantages and disadvantages between DI and SL. However, I have found myself in the following catch 22 which implies that I should just use SL for everything, and only inject an IoC container into each class.
DI Catch 22:
Some dependencies, like Log4Net, simply do not suit DI. I call these meta-dependencies and feel they should be opaque to calling code. My justification being that if a simple class 'D' was originally implemented without logging, and then grows to require logging, then dependent classes 'A', 'B', and 'C' must now somehow obtain this dependency and pass it down from 'A' to 'D' (assuming 'A' composes 'B', 'B' composes 'C', and so on). We have now made significant code changes just because we require logging in one class.
We therefore require an opaque mechanism for obtaining meta-dependencies. Two come to mind: Singleton and SL. The former has known limitations, primarily with regards to rigid scoping capabilities: at best a Singleton will use an Abstract Factory which is stored at application scope (ie. in a static variable). This allows some flexibility, but is not perfect.
A better solution would be to inject an IoC container into such classes, and then use SL from within that class to resolve these meta-dependencies from the container.
Hence catch 22: because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
I would greatly appreciate your thoughts :)
Because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
Using the service locator pattern completely defeats one of the main points of dependency injection. The point of dependency injection is to make dependencies explicit. Once you hide those dependencies by not making them explicit parameters in a constructor, you're no longer doing full-fledged dependency injection.
These are all constructors for a class named Foo (set to the theme of the Johnny Cash song):
Wrong:
public Foo() {
this.bar = new Bar();
}
Wrong:
public Foo() {
this.bar = ServiceLocator.Resolve<Bar>();
}
Wrong:
public Foo(ServiceLocator locator) {
this.bar = locator.Resolve<Bar>();
}
Right:
public Foo(Bar bar) {
this.bar = bar;
}
Only the latter makes the dependency on Bar explicit.
As for logging, there's a right way to do it without it permeating into your domain code (it shouldn't, but if it does, then you use dependency injection, period). Amazingly, IoC containers can help with this issue. Start here.
Service Locator is an anti-pattern, for reasons excellently described at http://blog.ploeh.dk/2010/02/03/ServiceLocatorIsAnAntiPattern.aspx. In terms of logging, you could either treat that as a dependency just like any other, and inject an abstraction via constructor or property injection.
The only difference with log4net is that it requires the type of the caller that uses the service. The question "Using Ninject (or some other container) how can I find out the type that is requesting the service?" describes how you can solve this (it uses Ninject, but is applicable to any IoC container).
Alternatively, you could think of logging as a cross cutting concern, which isn't appropriate to mix with your business logic code, in which case you can use interception which is provided by many IoC containers. http://msdn.microsoft.com/en-us/library/ff647107.aspx describes using interception with Unity.
My opinion is that it depends. Sometimes one is better and sometimes the other. But I'd say that generally I prefer DI. There are a few reasons for that.
When a dependency is injected into a component, it can be treated as part of its interface. Thus it's easier for the component's user to supply these dependencies, because they are visible. With an injected SL or a static SL, those dependencies are hidden and the component is a bit harder to use.
Injected dependencies are also better for unit testing, because you can simply mock them. With SL you have to set up the locator plus mock the dependencies again, so it is more work.
Sometimes logging can be implemented using AOP, so that it doesn't mix with business logic.
Otherwise, the options are:
use an optional dependency (such as a setter property), and for unit tests you don't inject any logger. The IoC container will take care of setting it automatically for you when you run in production.
When you have a dependency that almost every object of your app is using (a "logger" object being the most common example), it's one of the few cases where the Singleton anti-pattern becomes a good practice. Some people call these "good singletons" an Ambient Context:
http://aabs.wordpress.com/2007/12/31/the-ambient-context-design-pattern-in-net/
Of course this context has to be configurable, so that you can use stub/mock for unit testing.
Another suggested use of an Ambient Context is to put the current date/time provider there, so that you can stub it during unit tests and accelerate time if you want.
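A common minimal sketch of such an ambient date/time context (the class and member names here are illustrative, not from any particular library):
using System;
public static class SystemTime
{
    // Defaults to the real clock; tests can swap in a fixed value.
    private static Func<DateTime> provider = () => DateTime.UtcNow;
    public static DateTime UtcNow
    {
        get { return provider(); }
    }
    public static void Set(Func<DateTime> fakeProvider)
    {
        provider = fakeProvider;
    }
    public static void Reset()
    {
        provider = () => DateTime.UtcNow;
    }
}
// In a unit test:
// SystemTime.Set(() => new DateTime(2020, 1, 1, 0, 0, 0, DateTimeKind.Utc));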
This is regarding the 'Service Locator is an Anti-Pattern' by Mark Seeman.
I might be wrong here. But I just thought I should share my thoughts too.
public class OrderProcessor : IOrderProcessor
{
public void Process(Order order)
{
var validator = Locator.Resolve<IOrderValidator>();
if (validator.Validate(order))
{
var shipper = Locator.Resolve<IOrderShipper>();
shipper.Ship(order);
}
}
}
The Process() method of OrderProcessor does not actually follow the Inversion of Control principle. It also breaks the Single Responsibility Principle at the method level. Why should a method be concerned with instantiating the objects (via new or any SL class) it needs to accomplish anything?
Instead of having the Process() method create the objects, the constructor can take parameters for the respective objects (read: dependencies), as shown below. Then how is a Service Locator any different from an IoC container? And it will aid in unit testing as well.
public class OrderProcessor : IOrderProcessor
{
public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
{
this.validator = validator;
this.shipper = shipper;
}
public void Process(Order order)
{
if (this.validator.Validate(order))
{
shipper.Ship(order);
}
}
}
//Caller
public static void Main() // this can be unit test code too
{
var order = new Order(); // the order to be processed
var validator = Locator.Resolve<IOrderValidator>(); // similar to an IoC container
var shipper = Locator.Resolve<IOrderShipper>();
var orderProcessor = new OrderProcessor(validator, shipper);
orderProcessor.Process(order);
}
I have used the Google Guice DI framework in Java, and discovered that it does much more than make testing easier. For example, I needed a separate log per application (not class), with the further requirement that all my common library code use the logger in the current call context. Injecting the logger made this possible. Admittedly, all the library code needed to be changed: the logger was injected in the constructors. At first, I resisted this approach because of all the coding changes required; eventually I realized that the changes had many benefits:
The code became simpler
The code became much more robust
The dependencies of a class became obvious
If there were many dependencies, it was a clear indication that a class needed refactoring
Static singletons were eliminated
The need for session or context objects disappeared
Multi-threading became much easier, because the DI container could be built to contain just one thread, thus eliminating inadvertent cross-contamination
Needless to say, I am now a big fan of DI, and use it for all but the most trivial applications.
We've landed on a compromise: use DI, but bundle top-level dependencies into an object, avoiding refactoring hell should those dependencies change.
In the example below, we can add to 'ServiceDependencies' without having to refactor all derived dependencies.
Example:
public class ServiceDependencies{
public ILogger Logger{get; private set;}
public ServiceDependencies(ILogger logger){
this.Logger = logger;
}
}
public abstract class BaseService{
public ILogger Logger{get; private set;}
public BaseService(ServiceDependencies dependencies){
this.Logger = dependencies.Logger; //don't expose 'dependencies'
}
}
public class DerivedService : BaseService{
public DerivedService(ServiceDependencies dependencies,
ISomeOtherDependencyOnlyUsedByThisService additionalDependency)
: base(dependencies){
//set local dependencies here.
}
}
I know that people keep saying DI is the only good IoC pattern, but I don't get this, so I will try to sell SL a bit. I will use the new MVC Core framework to show you what I mean. First, DI engines are really complex. What people really mean when they say DI is: use some framework like Unity, Ninject, Autofac... that does all the heavy lifting for you, whereas SL can be as simple as making a factory class. For a small, fast project this is an easy way to do IoC without learning a whole framework for proper DI; they might not be that difficult to learn, but still.
Now to the problem that DI can become. I will use a quote from the MVC Core docs.
"ASP.NET Core is designed from the ground up to support and leverage dependency injection." Most people also say about DI that "99% of your code base should have no knowledge of your IoC container." So why would they need to design it from the ground up if only 1% of the code should be aware of it? Didn't old MVC support DI? Well, this is the big problem of DI: it depends on DI. Making everything work "AS IT SHOULD BE DONE" takes a lot of work. If you look at the new Action Injection, is this not depending on DI when you use the [FromServices] attribute? Now DI people will say NO, you are supposed to go with factories, not this stuff, but as you can see not even the people making MVC did it right. The problem of DI is visible in filters as well; look at what you need to do to get DI in a filter:
public class SampleActionFilterAttribute : TypeFilterAttribute
{
public SampleActionFilterAttribute():base(typeof(SampleActionFilterImpl))
{
}
private class SampleActionFilterImpl : IActionFilter
{
private readonly ILogger _logger;
public SampleActionFilterImpl(ILoggerFactory loggerFactory)
{
_logger = loggerFactory.CreateLogger<SampleActionFilterAttribute>();
}
public void OnActionExecuting(ActionExecutingContext context)
{
_logger.LogInformation("Business action starting...");
// perform some business logic work
}
public void OnActionExecuted(ActionExecutedContext context)
{
// perform some business logic work
_logger.LogInformation("Business action completed.");
}
}
}
Whereas if you used SL you could have done this with var _logger = Locator.Get<ILogger>();. And then we come to the views. With all their good will regarding DI, they had to use SL for the views: the new syntax @inject StatisticsService StatsService is the same as var StatsService = Locator.Get<StatisticsService>();.
The most advertised part of DI is unit testing. But what people end up doing is just testing their mock services with no purpose, or having to wire up their DI engine to do real tests. And I know that you can do anything badly, but people end up making an SL locator even if they don't know what it is, whereas not a lot of people use DI without ever reading up on it first.
My biggest problem with DI is that the user of the class must be aware of the inner workings of the class in order to use it.
SL can be used in a good way, and it has some advantages, most of all its simplicity.
I know this question is a little old, I just thought I would give my input.
In reality, 9 times out of 10 you really don't need SL and should rely on DI. However, there are some cases where you should use SL. One area that I find myself using SL (or a variation, thereof) is in game development.
Another advantage of SL (in my opinion) is the ability to pass around internal classes.
Below is an example:
internal sealed class SomeClass : ISomeClass
{
internal SomeClass()
{
// Add the service to the locator
ServiceLocator.Instance.AddService<ISomeClass>(this);
}
// Maybe remove the service within a finalizer or Dispose method if needed.
internal void SomeMethod()
{
Console.WriteLine("The user of my library doesn't know I'm doing this, let's keep it a secret");
}
}
public sealed class SomeOtherClass
{
private ISomeClass someClass;
public SomeOtherClass()
{
// Get the service and call a method
someClass = ServiceLocator.Instance.GetService<ISomeClass>();
someClass.SomeMethod();
}
}
As you can see, the user of the library has no idea this method was called, because we didn't use DI; not that we'd be able to anyway, since the class is internal.
If the example only takes log4net as dependency, then you only need to do this:
ILog log = LogManager.GetLogger(typeof(Foo));
There is no point to inject the dependency as log4net provides granular logging by taking the type (or a string) as parameter.
Also, DI and SL are not mutually exclusive. IMHO the purpose of a ServiceLocator is to resolve optional dependencies.
E.g.: if the SL provides an ILog interface, I will write logging data.
For DI, do you need to have a hard reference to the injected type's assembly? I don't see anyone talking about that. For SL, I can tell my resolver where to load my type dynamically when it is needed, from a config.json or similar. Also, if your assembly contains several thousand types and their inheritance hierarchies, do you need thousands of cascading calls to the service collection provider to register them? That's something I don't see much talk about. Most are talking about the benefit of DI and what it is in general, but when it comes to how to implement it in .NET, they present an extension method for adding a reference to a hard-linked type's assembly. That's not very decoupled to me.
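To illustrate that last point, here is a hypothetical sketch of resolving an implementation from configuration rather than from a compile-time assembly reference; the config shape, the ConfigResolver name, and the example type names are all made up:
using System;
using System.Collections.Generic;
// config.json (hypothetical): { "IDataContext": "MyCompany.Data.DataContext, MyCompany.Data" }
public static class ConfigResolver
{
    public static object Resolve(IReadOnlyDictionary<string, string> typeMap, string serviceName)
    {
        var typeName = typeMap[serviceName];
        var type = Type.GetType(typeName, throwOnError: true); // loads the assembly on demand
        return Activator.CreateInstance(type);                 // no hard reference needed at compile time
    }
}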