This should be a quick question. I am trying to learn the CQRS pattern and there is one thing that is not clear. There are two dispatchers: one for commands and one for queries. Both of them need to have the DI kernel injected in order to get the appropriate handler. For example:
var handler = _resolver.Resolve<IQueryHandler<TQuery, TResult>>();
Isn't this violating the DI principle that Resolve should never be called directly and that everything should be injected via constructors/properties?
There is a bigger example: http://www.adamtibi.net/06-2013/implementing-a-cqrs-based-architecture-with-mvc-and-document-db
Please check out this method:
public void Dispatch<TParameter>(TParameter command) where TParameter : ICommand
{
var handler = _kernel.Get<ICommandHandler<TParameter>>();
handler.Execute(command);
}
I've found this solution on 3 different pages. Why is it done this way instead of creating a factory to map Query to QueryHandler?
If you consider the dispatcher to be a part of the infrastructure, calling Resolve() within it does not violate the DI concept you describe.
Handlers are generally thought of as entry points for logic pipelines (or threads, or however you want to think of them). This is similar to controllers in MVC, or the Main() method in a console application. So like these other constructs, the dispatcher is considered a top-level object in the dependency chain, and is thus a perfectly legitimate place to reference the container.
Edit
So the comments mention Composition Root (CR), which is a term I like but deliberately tried to avoid in this answer, as it tends to confuse people. Is the CR a specific class? An assembly? I tend to think of it more as a concept than a specific construct. It's the logical place in the application where object graphs are composed.
To clarify what I meant about controllers: the controllers would be the entry point, and (as @Zbigniew noted) the controller factory would be (part of) the CR. Similarly, handlers would be the entry point, and the dispatcher would be the CR. Handlers/Controllers would not have a reference to the container, but the Dispatcher/ControllerFactory would.
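To make that concrete, here is a rough sketch (the query, handler and repository names are illustrative, not from the linked article): the dispatcher is the only piece that touches the kernel, while handlers rely on plain constructor injection.
// Dispatcher: part of the infrastructure / Composition Root, so it may hold the kernel.
public class QueryDispatcher : IQueryDispatcher
{
    private readonly IKernel _kernel;

    public QueryDispatcher(IKernel kernel)
    {
        _kernel = kernel;
    }

    public TResult Dispatch<TQuery, TResult>(TQuery query) where TQuery : IQuery
    {
        var handler = _kernel.Get<IQueryHandler<TQuery, TResult>>();
        return handler.Handle(query);
    }
}

// Handler: an entry point for a logic pipeline, completely container-agnostic.
public class GetCustomerHandler : IQueryHandler<GetCustomerQuery, CustomerDto>
{
    private readonly ICustomerRepository _repository;   // plain constructor injection

    public GetCustomerHandler(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto Handle(GetCustomerQuery query)
    {
        return _repository.GetById(query.CustomerId);
    }
}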
Related
I started using Autofac in my project. It's a simple chat app. I have a Messaging class which belongs to the Node class. Each Node object has its own instance of Messaging. Messaging contains several events for signalling incoming messages, etc. Currently my code looks like this:
Network.cs (In library)
var scope = container.BeginLifetimeScope(b => { b.RegisterInstance(host).As<IHost>(); });
var node = scope.Resolve<Node>();
NodeCreated?.Invoke(this, node);
Program.cs (In client assembly)
Network.NodeCreated += (sender, node) => { _ = new NodeEventsHandlers(node); };
NodeEventsHandlers.cs (In client assembly)
public NodeEventsHandlers(INode node)
{
this.node = node;
node.Messaging.MessageReceived += OnMessageReceived;
...
It's a mess and doesn't take advantage of DI. How can I inject event handler methods into Messaging with Autofac? I found this, but I'm not sure whether it is useful in my case.
There's really not enough here to guarantee a complete answer. For example, you mention you have a Messaging class but you never show it, and that's kind of what the whole question revolves around. However, I may be able to shortcut this a bit.
Dependency injection is for constructing objects... which sounds like I'm stating the obvious, but it's an important point here because, if I'm reading the question right, you're asking how to make dependency injection wire up event handlers. That's not what DI is for, and, more specifically, that's not really something Autofac - as a dependency injection framework - would be able to help you with.
For a really, really high level concept of what Autofac can help with, think:
If you can use new instead, Autofac can help. That is, invoking a constructor and passing parameters.
If you could use Activator.CreateInstance (the reflection way of invoking the constructor), Autofac can help.
If you want to run some code you provide after the object is constructed or when the object is disposed, Autofac can help.
However, there's no facility in Autofac (or any other DI framework I'm aware of) that will "automatically" execute some sort of event wire-up. That's code you'd have to write yourself.
I could possibly see something where you use the Autofac OnActivating event to call some code you write that does the event wire-up when the object is resolved, but that's about as good as it gets.
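As a rough sketch of that idea, assuming Node implements INode and reusing the NodeEventsHandlers class from the question:
var builder = new ContainerBuilder();
builder.RegisterType<NodeEventsHandlers>();
builder.RegisterType<Node>()
    .OnActivating(e =>
    {
        // Resolve the handler class for this particular node; its constructor
        // attaches itself to node.Messaging events, as in the question.
        e.Context.Resolve<NodeEventsHandlers>(
            new TypedParameter(typeof(INode), e.Instance));
    });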
If I had to guess, I'd say you're likely looking for something closer to a service bus, where you register nodes with the pub/sub framework and that handles attaching and detaching listeners to events. That's a different type of thing entirely than Autofac.
It's fairly established that doing work in ctors for types that are resolved using SimpleInjector is bad practice. Although this often leads to certain late initializations of such types, a particularly interesting case is Reactive Extensions subscriptions.
Take for instance an observable sequence that exhibits Replay(1) semantics (actually BehaviorSubject if we take the StartWith into account), e.g.
private readonly IObservable<Value> _myObservable;
public MyType(IService service)
{
_myObservable = service.OtherObservable
.StartWith(service.Value)
.Select(x => SomeTransform())
.Replay(1)
.RefCount();
}
public IObservable<Value> MyObservable => _myObservable;
Assume now, that SomeTransform is computationally expensive. From the point of view of SimpleInjector, the above is bad practice. Ok, so we need some kind of Initialize() method to call after SimpleInjector is finished. But what about our replay semantics and our StartWith()? Our consumers expect a value when they Subscribe (assume now that this is guaranteed to happen after initialization)!
How do we get around these restrictions in a nice way while still satisfying SimpleInjector? Here's a summary of requirements:
Don't do extensive work in the ctor (i.e. SomeTransform should not run)
_myObservable should be readonly
MyObservable should exhibit Replay(1) semantics
We should always have an initial value (hence the StartWith)
We do not want to Subscribe inside MyType and cache the value (we like immutability)
I experimented with creating an additional observable that starts with false and then gets set to true on initialize, and then merging that together with _myObservable, but couldn't quite get it to work. Additionally, it doesn't seem like the best solution. In essence, all I want to do is delay until Initialize() is done. There must be some way to do this that I'm not seeing?
One easy solution that comes to mind is the use of Lazy<T>.
This could look like:
private readonly Lazy<IObservable<Value>> _lazyMyObservable;
public MyType(IService service)
{
_lazyMyObservable = new Lazy<IObservable<Value>>(() => this.InitObservable(service));
}
private IObservable<Value> InitObservable(IService service)
{
return service.OtherObservable
.StartWith(service.Value)
.Select(x => SomeTransform())
.Replay(1)
.RefCount();
}
public IObservable<Value> MyObservable => _lazyMyObservable.Value;
This initializes the _lazyMyObservable field without actually calling SomeTransform(). When a consumer asks for MyType.MyObservable, the InitObservable code will be called once and only once. This postpones the initialization to the point where the code is actually used.
This keeps your constructor nice and clean, with no need for separate initialization logic.
Note that the constructor of Lazy<T> has several overloads you can use if multithreading may be an issue.
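For example, to make the initialization explicitly thread-safe:
_lazyMyObservable = new Lazy<IObservable<Value>>(
    () => this.InitObservable(service),
    LazyThreadSafetyMode.ExecutionAndPublication); // only one thread ever runs InitObservable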
Injection constructors should be simple and reliable. This means that the following practices are frowned upon:
Doing any I/O operations inside the constructor. I/O operations can fail and make construction of the object graph unreliable.
Using the class's dependencies inside the constructor. Not only could a called dependency cause I/O of its own; sometimes injected dependencies are not (yet) fully initialized, and final initialization happens at a later point in time, perhaps after the object graph has been constructed.
Considering how Reactive Extensions work, your MyType constructor doesn't seem to do any I/O. Its SomeTransform method is not called during the creation of MyType. Instead, the observable is configured to call SomeTransform when objects are pushed. This means that from a DI perspective, your injection is still 'simple' and fast. Sometimes your classes need some initialization on top of storing incoming dependencies. Creating and storing a Lazy<T>, for instance, is a good example. It allows delaying doing some I/O while still having more code than merely "receiving the dependencies."
But still, you are accessing a dependency inside your constructor, which might cause trouble if that dependency, or its dependencies, are not fully initialized. Furthermore, with Reactive Extensions you create a runtime dependency from IService back to MyType (you already have a design-time dependency from MyType to IService). This is very similar to working with events in .NET. A consequence is that MyType could be kept alive by IService, even when MyType's lifetime is expected to be shorter.
So, strictly speaking, from a DI perspective this configuration might be troublesome. But it's hard to imagine a different model when working with Reactive Extensions. The alternative would be to move this configuration of the observables out of the constructors and do it after the object graph has been constructed. But that will likely force you to open up your classes so the Composition Root has access to the methods that need to be called. It also causes Temporal Coupling.
In other words, when using Reactive Extensions, it is probably good to have some design rules in place to prevent trouble. These rules could be:
All exposed IObservable<T> properties should always be fully initialized and usable after its type's construction.
All observers and observables should have the same lifetime.
As far as I understand, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far. But I came across a problem and I'm not sure what the best way to solve it is. See the code sample below.
class SomeClass
{
private IClassWithEvent classWithEvent;
public SomeClass(IClassWithEvent classWithEvent)
{
this.classWithEvent = classWithEvent;
// (1) attach event handler in ctor.
this.classWithEvent.Event += OnEvent;
}
public void ActivateEventHandling()
{
// (2) attach event handler in method
this.classWithEvent.Event += OnEvent;
}
private void OnEvent(object sender, EventArgs args)
{
}
}
For me option (1) sounds fine, but then the constructor does more than only assign fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most; therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in the ctor is fine.
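As a sketch of such a test, assuming Moq as the mocking library and that Event follows the standard EventHandler pattern:
// Arrange: SomeClass subscribes to the mocked event in its constructor.
var classWithEvent = new Mock<IClassWithEvent>();
var sut = new SomeClass(classWithEvent.Object);

// Act: raise the event on the mock.
classWithEvent.Raise(c => c.Event += null, EventArgs.Empty);

// Assert: check whatever observable effect OnEvent is supposed to have.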
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit a call to ActivateEventHandling you end up with an improperly initialized class, and you need to transport the knowledge that ActivateEventHandling must be called in comments or somewhere else, which makes the class harder to use and probably results in a class usage that was never tested by you: you called ActivateEventHandling and tested it, but an uninformed user who omitted the activation didn't, and you have certainly not tested your class when ActivateEventHandling was not called, right? :)
Edit: There may be alternative approaches here which are worth mentioning.
Depending on the paradigm, it may be wise to avoid wiring events in the class at all. I need to qualify my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class can be used more flexibly if it does not have a dependency on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know about other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each-other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it actually does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor, to avoid a dependency. This concept is similar to neurons in a brain, as they also emit messages to the environment and expect an answer without knowing the neighbouring neurons.
However, I regard a dependency on an interface, if it is injected by some means of a dependency injection container, as just another paradigm and not less valid.
The non-trivial task of having the environment wire up all classes on start may lead to runtime errors (which are mitigated by very good functional and integration test coverage, which may be hard to achieve for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events manually on startup.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the Clean Code Developer initiative; sorry, German only) has written software that performs the wiring automatically, a concept called "event-based components" (not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
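The core of such convention-based wiring can be sketched in a few lines of reflection (a simplified illustration of the idea, not Ralf Westphal's actual implementation):
public static class ConventionWiring
{
    // Convention: an event named Foo on the source is bound to a method named
    // OnFoo with a compatible signature on the target.
    public static void AutoWire(object source, object target)
    {
        foreach (var evt in source.GetType().GetEvents())
        {
            var method = target.GetType().GetMethod("On" + evt.Name);
            if (method == null)
                continue;

            // CreateDelegate returns null when the signatures do not match.
            var handler = Delegate.CreateDelegate(
                evt.EventHandlerType, target, method, throwOnBindFailure: false);
            if (handler != null)
                evt.AddEventHandler(source, handler);
        }
    }
}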
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option(1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if other methods of the class depend on always being subscribed to some object, you'd better subscribe in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
The single responsibility principle dictates that the wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Pseudo code:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}
public interface IEventResponder
{
    void OnEvent(object sender, EventArgs args);
}
public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { /* ... */ }
}
public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note, the above is pseudo code and is only meant to describe the idea.
How your bootstrapper actually is implemented depends on many things. It can be your "main" method, or your global.asax, or whatever you have in place to actually configure and prepare your application.
The idea is that whatever is responsible for preparing the application to run should compose it, not the classes themselves; they should be as single-purpose as possible and should not care too much about how and where they are used.
Abstract
For the past few months I have been programming a lightweight, C#-based game engine with API abstraction and an entity/component/scripting system. The whole idea is to ease the game development process in XNA, SlimDX and such, by providing an architecture similar to that of the Unity engine.
Design challenges
As most game developers know, there are a lot of different services you need to access throughout your code. Many developers resort to using global static instances of, e.g., a render manager (or a composer), a scene, the graphics device (DX), a logger, the input state, the viewport, the window and so on. There are some alternative approaches to the global static instances/singletons. One is to give each class an instance of the classes it needs access to, either through constructor or property dependency injection (DI); another is to use a global service locator, like StructureMap's ObjectFactory, where the service locator is usually configured as an IoC container.
Dependency Injection
I chose to go the DI way for many reasons. The most obvious one is testability: by programming against interfaces and having all the dependencies of every class provided through the constructor, those classes are easily tested, since the test container can instantiate the required services, or mocks of them, and feed them into every class under test. Another reason for doing DI/IoC was, believe it or not, to increase the readability of the code. No more huge initialization process of instantiating all the different services and manually instantiating classes with references to the required services. Configuring the Kernel (Ninject)/Registry (StructureMap) conveniently gives a single point of configuration for the engine/game, where service implementations are picked and configured.
My problems
I often feel like I am creating interfaces for interfaces' sake
My productivity has gone down dramatically since all I do is worry about how to do things the DI-way, instead of the quick and simple global static way.
In some cases, e.g. when instantiating new Entities at runtime, one needs access to the IoC container / kernel to create the instance. This creates a dependency on the IoC container itself (ObjectFactory in SM, an instance of the kernel in Ninject), which really goes against the reason for using one in the first place. How can this be resolved? Abstract factories come to mind, but that just further complicates the code.
Depending on service requirements, some classes' constructors can get very large, which will make the class completely useless in other contexts where and if an IoC is not used.
Basically doing DI/IoC dramatically slows down my productivity and in some cases further complicates the code and architecture. Therefore I am uncertain of whether it is a path I should follow, or just give up and do things the old fashioned way. I am not looking for a single answer saying what I should or shouldn't do but a discussion on if using DI is worth it in the long run as opposed to using the global static/singleton way, possible pros and cons I have overlooked and possible solutions to my problems listed above, when dealing with DI.
Should you go back to the old-fashioned way?
My answer in short is no. DI has numerous benefits for all the reasons you mentioned.
I often feel like I am creating interfaces for interfaces' sake
If you are doing this, you might be violating the Reused Abstractions Principle (RAP).
Depending on service requirements, some classes' constructors can get very large, which will make the class completely useless in other contexts where and if an IoC is not used.
If your classes' constructors are too large and complex, this is the best sign that you are violating another very important principle:
the Single Responsibility Principle. In this case it is time to extract and refactor your code into different classes; the suggested number of dependencies is around 4.
In order to do DI you don't have to have an interface; DI is just the way you get your dependencies into your object. Creating interfaces might be needed so that you can substitute a dependency for testing purposes, unless the dependency is:
Easy to isolate
Not talking to external subsystems (file system, etc.)
Alternatively, you can create your dependency as an abstract class, or any class where the methods you'd like to substitute are virtual. However, interfaces give you the most decoupled form of a dependency.
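A small sketch of that option, with hypothetical types:
// Production dependency: no interface, but the method to substitute is virtual.
public class ReportWriter
{
    public virtual void Write(string report)
    {
        File.WriteAllText("report.txt", report);   // touches the file system
    }
}

// Test double: overrides the virtual method so tests do no file I/O.
public class FakeReportWriter : ReportWriter
{
    public string LastReport;

    public override void Write(string report)
    {
        LastReport = report;
    }
}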
In some cases, e.g. when instantiating new Entities at runtime, one needs access to the IoC container / kernel to create the instance. This creates a dependency on the IoC container itself (ObjectFactory in SM, an instance of the kernel in Ninject), which really goes against the reason for using one in the first place. How can this be resolved? Abstract factories come to mind, but that just further complicates the code.
As far as a dependency on the IoC container goes, you should never have one in your client classes. And they don't need to.
The first step to using dependency injection properly is to understand the concept of the Composition Root. This is the only place where your container should be referenced, and it is where your entire object graph is constructed. Once you understand this you will realize you never need the container in your clients, as each client just gets its dependencies injected.
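A minimal sketch of such a Composition Root, using Ninject and (for illustration) the types from the examples below:
public static class Program
{
    public static void Main()
    {
        // The container is referenced only here.
        var kernel = new StandardKernel();
        kernel.Bind<IErrorHandler>().To<EmailErrorHandler>();
        kernel.Bind<INotificationService>().To<SomethingChangedNotificationService>();

        // Resolving the root object builds the whole graph;
        // SomeBusinessObject itself never sees the kernel.
        var businessObject = kernel.Get<SomeBusinessObject>();
    }
}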
There are also MANY other creational patterns you can follow to make construction easier:
Say you want to construct an object with many dependencies like this:
new SomeBusinessObject(
new SomethingChangedNotificationService(new EmailErrorHandler()),
new EmailErrorHandler(),
new MyDao(new EmailErrorHandler()));
You can create a concrete factory that knows how to construct this:
public static class SomeBusinessObjectFactory
{
public static SomeBusinessObject Create()
{
return new SomeBusinessObject(
new SomethingChangedNotificationService(new EmailErrorHandler()),
new EmailErrorHandler(),
new MyDao(new EmailErrorHandler()));
}
}
And then use it like this:
SomeBusinessObject bo = SomeBusinessObjectFactory.Create();
You can also use Poor Man's DI and create a constructor that takes no arguments at all:
public SomeBusinessObject()
{
var errorHandler = new EmailErrorHandler();
var dao = new MyDao(errorHandler);
var notificationService = new SomethingChangedNotificationService(errorHandler);
Initialize(notificationService, errorHandler, dao);
}
protected void Initialize(
INotificationService notifcationService,
IErrorHandler errorHandler,
MyDao dao)
{
this._NotificationService = notifcationService;
this._ErrorHandler = errorHandler;
this._Dao = dao;
}
Then, to the caller, it just looks like it used to:
SomeBusinessObject bo = new SomeBusinessObject();
Using Poor Man's DI is considered bad when your default implementations are in external third party libraries, but less bad when you have a good default implementation.
Then obviously there are all the DI containers, Object builders and other patterns.
So all you need to do is think of a good creational pattern for your object. Your object itself should not care how to create its dependencies; in fact, that makes it MORE complicated and causes it to mix two kinds of logic. So I don't believe using DI should cause a loss of productivity.
There are some special cases where your object cannot just get a single instance injected into it, where the lifetime is generally shorter and on-the-fly instances are required. In this case you should inject the factory into the object as a dependency:
public interface IDataAccessFactory
{
TDao Create<TDao>();
}
As you can see, this version is generic because it can make use of an IoC container to create various types (take note, though, that the IoC container is still not visible to my client).
public class ConcreteDataAccessFactory : IDataAccessFactory
{
private readonly IocContainer _Container;
public ConcreteDataAccessFactory(IocContainer container)
{
this._Container = container;
}
public TDao Create<TDao>()
{
return (TDao)Activator.CreateInstance(typeof(TDao),
this._Container.Resolve<Dependency1>(),
this._Container.Resolve<Dependency2>());
}
}
Notice that I used Activator even though I had an IoC container. This is important: the factory needs to construct a new instance of the object and not just assume the container will provide a new instance, as the object may be registered with a different lifetime (Singleton, ThreadLocal, etc.). Depending on which container you are using, some can generate these factories for you. However, if you are certain the object is registered with a Transient lifetime, you can simply resolve it.
EDIT: Adding a class with an Abstract Factory dependency:
public class SomeOtherBusinessObject
{
private IDataAccessFactory _DataAccessFactory;
public SomeOtherBusinessObject(
IDataAccessFactory dataAccessFactory,
INotificationService notifcationService,
IErrorHandler errorHandler)
{
this._DataAccessFactory = dataAccessFactory;
}
public void DoSomething()
{
for (int i = 0; i < 10; i++)
{
using (var dao = this._DataAccessFactory.Create<MyDao>())
{
// work with dao
// Console.WriteLine(
// "Working with dao: " + dao.GetHashCode().ToString());
}
}
}
}
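As an aside, some containers can generate such factories for you. Autofac, for example, can inject a Func<MyDao> automatically; assuming MyDao is registered with a transient (instance-per-dependency) lifetime, each call to the delegate produces a new instance, so the same class could be written as:
public class SomeOtherBusinessObject
{
    private readonly Func<MyDao> _daoFactory;   // auto-generated factory delegate

    public SomeOtherBusinessObject(Func<MyDao> daoFactory)
    {
        _daoFactory = daoFactory;
    }

    public void DoSomething()
    {
        for (int i = 0; i < 10; i++)
        {
            using (var dao = _daoFactory())   // fresh MyDao per call
            {
                // work with dao
            }
        }
    }
}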
Basically doing DI/IoC dramatically slows down my productivity and in some cases further complicates the code and architecture
Mark Seemann wrote an awesome blog post on the subject and answered the question:
My first reaction to that sort of question is: you say loosely coupled code is harder to understand. Harder than what?
Loose Coupling and the Big Picture
EDIT: Finally, I'd like to point out that not every object and dependency needs to be, or should be, dependency injected. First consider whether what you are using is actually a dependency:
What are dependencies?
Application Configuration
System Resources (Clock)
Third Party Libraries
Database
WCF/Network Services
External Systems (File/Email)
Any of the above objects or collaborators can be out of your control, cause side effects and differences in behavior, and make it hard to test. These are the times to consider an Abstraction (Class/Interface) and use DI.
What are not dependencies and don't really need DI?
List<T>
MemoryStream
Strings/Primitives
Leaf Objects/Dto's
Objects such as the above can simply be instantiated where needed using the new keyword. I would not suggest using DI for such simple objects unless there are specific reasons. Consider whether the object is under your full control and doesn't cause any additional object graphs or side effects in behavior (at least nothing that you want to change/control or test). In that case, simply new them up.
I have posted a lot of links to Mark Seemann's posts, but I really recommend you read his book and blog posts.
Generally, I like to keep an application completely ignorant of the IoC container. However, I have run into problems where I needed to access it. To abstract away the pain I use a basic singleton. Before you run for the hills or pull out the shotgun, let me go over my solution. Basically, the IoC singleton does absolutely nothing; it simply delegates to an internal interface that must be passed in. I've found this makes working with the singleton less painful.
Below is the IoC wrapper:
public static class IoC
{
private static IDependencyResolver inner;
public static void InitWith(IDependencyResolver container)
{
inner = container;
}
/// <exception cref="InvalidOperationException">Container has not been initialized. Please supply an instance of IWindsorContainer.</exception>
public static T Resolve<T>()
{
if ( inner == null)
throw new InvalidOperationException("Container has not been initialized. Please supply an instance of IWindsorContainer.");
return inner.Resolve<T>();
}
public static T[] ResolveAll<T>()
{
return inner.ResolveAll<T>();
}
}
IDependencyResolver:
public interface IDependencyResolver
{
T Resolve<T>();
T[] ResolveAll<T>();
}
I've had great success so far with the few times I've used it (maybe once every few projects, I really prefer not having to use this at all) as I can inject anything I want: Castle, a Stub, fakes, etc.
Is this a slippery road? Am I going to run into potential issues down the road?
I've seen that even Ayende implements this pattern in the Rhino Commons code, but I'd advise avoiding it wherever possible. There's a reason Castle Windsor doesn't have this code by default. StructureMap does, but Jeremy Miller has been moving away from it. Ideally, you should regard the container itself with as much suspicion as any global variable.
However, as an alternative, you could always configure your container to resolve IDependencyResolver as a reference to your container. This may sound crazy, but it's significantly more flexible. Just remember the rule of thumb that an object should call "new" or perform processing, but not both. For "call new" replace with "resolve a reference".
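A sketch of that alternative with Castle Windsor (the adapter class is something you would write yourself):
public class WindsorDependencyResolver : IDependencyResolver
{
    private readonly IWindsorContainer _container;

    public WindsorDependencyResolver(IWindsorContainer container)
    {
        _container = container;
    }

    public T Resolve<T>()
    {
        return _container.Resolve<T>();
    }

    public T[] ResolveAll<T>()
    {
        return _container.ResolveAll<T>();
    }
}

// In the composition root, register the adapter so the few classes that truly
// need late resolution can take IDependencyResolver as a constructor argument:
// container.Register(Component.For<IDependencyResolver>()
//     .Instance(new WindsorDependencyResolver(container)));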
That's not really a singleton class; it's a static class with static members. And yes, that seems like a good approach.
I think JP Boodhoo even has a name for this pattern. The Static Gateway pattern.
Just a note: Microsoft Patterns and Practices has created a common service locator (http://www.codeplex.com/CommonServiceLocator) that most of the major IoC containers will be implementing in the near future. You can begin to use it instead of your IDependencyResolver.
BTW: this is the common way to solve your problem and it works quite well.
It all depends on the usage. Using the container like that is called the Service Locator pattern. There are cases where it's not a good fit and cases where it does apply.
If you google "service locator pattern" you'll see a lot of blog posts saying that it's an anti-pattern, which it's not. The pattern has simply been overused (/abused).
For typical line-of-business applications you should not use SL, as you hide the dependencies. You also get another problem: you cannot manage state/lifetime if you use the root container (instead of one of its lifetime scopes).
Service Locator is a good fit when it comes to infrastructure. For instance, ASP.NET MVC uses a service locator to resolve all dependencies for each controller.
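For example, in ASP.NET MVC 3 and later the framework exposes that seam explicitly; a minimal sketch (the adapter and bootstrap method names are hypothetical):
protected void Application_Start()
{
    var container = BuildContainer();                                  // hypothetical bootstrap method
    DependencyResolver.SetResolver(new WindsorMvcAdapter(container));  // hypothetical adapter
    // The framework now service-locates each controller and its dependencies;
    // the controllers themselves stay container-agnostic.
}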