I have a service which acts to create and dispatch commands for execution. It can either write commands to a queue for deferred processing, or execute them immediately.
For these purposes I have something like:
class MyService
{
    private readonly ICommandQueueWriter _commandQueueWriter;
    private readonly ICommandExecutor _commandExecutor;

    public MyService(ICommandQueueWriter cqw, ICommandExecutor ce)
    {
        _commandQueueWriter = cqw;
        _commandExecutor = ce;
    }

    public void DoSomething()
    {
        _commandQueueWriter.Write(new SomeCommand());
        _commandExecutor.Execute(new SomeOtherCommand());
    }
}
The service will deal with all kinds of commands. I have a series of ICommandHandler<> implementations which will be registered with a DI container.
My plan has a flaw currently: the ICommandExecutor implementation needs to have access to all of the ICommandHandler<> implementations. In fact, I guess the queued command execution will face the same problem: I'll grab the message later and have to look up the handler somehow.
So my options that I can see are:
Don't use ICommandExecutor as a dependency, just use ICommandHandler<>s directly. But I wanted the option of wrapping all command handler execution through a standard class - to catch exceptions in a consistent way, or manage execution in some other consistent way. I really like the option of having a consistent interface for immediate/deferred execution (either call Write or Execute)
Pass the DI container or root into ICommandExecutor and let it resolve commands. This seems to break the idea that there should be one call to compose an object graph with DI, and might 'hide' dependencies
Have the implementation of ICommandExecutor have all ICommandHandler<>s as a dependency to be injected - so it can pick the one it wants manually. However that doesn't seem ideal either, since all handlers on the system would be instantiated at that point
Is there a fourth option, or do I need to bite the bullet and accept one of these compromises?
To me, the use of both an ICommandQueueWriter and an ICommandExecutor seems strange. Why should the consumer (MyService in your case) have to know that one command is queued while another is executed directly? I think this should be transparent.
Have the implementation of ICommandExecutor have all ICommandHandler<>s as a dependency to be injected
This will cause severe maintenance problems, because you will add new command handlers regularly and you will have to update the command executor's constructor every time.
Although you could also inject a collection of command handlers, this would still force you to iterate the list every time you want to execute one to get the correct implementation. This will get slower over time, because you will add new command handlers regularly.
Pass the DI container or root into ICommandExecutor and let it resolve commands. This seems to break the idea that there should be one call to compose an object graph with DI, and might 'hide' dependencies
It might seem that you are applying the Service Locator anti-pattern if you do this, but this is only the case if the ICommandExecutor is part of the application code. The trick is to make the ICommandExecutor part of your Composition Root. This solves the problem because the composition root will already be very tightly coupled to your container.
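As an illustration, a minimal sketch of what such an executor could look like when it lives inside the Composition Root. The Container type and its GetInstance call stand in for whatever DI container you actually use, and Handle is assumed as the handler method name; none of these names come from the question.

public class ContainerCommandExecutor : ICommandExecutor
{
    // Coupling to the container is acceptable here, because this class is part
    // of the Composition Root rather than the application code.
    private readonly Container _container;

    public ContainerCommandExecutor(Container container)
    {
        _container = container;
    }

    public void Execute<TCommand>(TCommand command)
    {
        // Resolve the handler for this specific command type at execution time.
        var handler = _container.GetInstance<ICommandHandler<TCommand>>();

        // One place to apply consistent exception handling, logging, etc.
        handler.Handle(command);
    }
}

Application code keeps depending only on ICommandExecutor; only this class, registered from the Composition Root, knows about the container.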
Have the implementation of ICommandExecutor have all ICommandHandler<>s as a dependency to be injected - so it can pick the one it wants manually. However that doesn't seem ideal either, since all handlers on the system would be instantiated at that point
If you want to inject handlers but don't want them instantiated at injection time, then inject into the command executor a collection of lazily instantiated handlers with metadata containing the command type.
For example:
IEnumerable<Lazy<ICommandHandler, ICommandHandlerMetadata>>
where ICommandHandlerMetadata contains the command type corresponding to each command handler:
public interface ICommandHandlerMetadata
{
    Type CommandType { get; }
}
In the command executor, transform the collection into a Dictionary<Type, Lazy<ICommandHandler, ICommandHandlerMetadata>>, where the key is the command type obtained from the metadata via Lazy.Metadata.CommandType.
In the CommandExecutor.Execute method, the lazily initialized command handler is looked up in the dictionary by command type, which is a very fast operation. The actual handler is obtained through Lazy.Value.
The actual handler is instantiated only once, and only when a corresponding command is first executed.
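A minimal sketch of that executor, assuming a non-generic ICommandHandler with a Handle(ICommand) method and MEF-style Lazy<T, TMetadata> pairs supplied by the container (those shapes are assumptions for the example, not part of the question):

using System;
using System.Collections.Generic;
using System.Linq;

public class LazyCommandExecutor : ICommandExecutor
{
    private readonly Dictionary<Type, Lazy<ICommandHandler, ICommandHandlerMetadata>> _handlers;

    public LazyCommandExecutor(
        IEnumerable<Lazy<ICommandHandler, ICommandHandlerMetadata>> handlers)
    {
        // Build the lookup once; no handler is instantiated here.
        _handlers = handlers.ToDictionary(h => h.Metadata.CommandType);
    }

    public void Execute(ICommand command)
    {
        // Fast lookup by command type; Lazy.Value creates the handler on first use only.
        var handler = _handlers[command.GetType()];
        handler.Value.Handle(command);
    }
}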
I started using Autofac in my project. It's a simple chat app. I have a Messaging class which belongs to the Node class. Each Node object has its own instance of Messaging. Messaging contains several events for signalling incoming messages, etc. Currently my code looks like this:
Network.cs (In library)
var scope = container.BeginLifetimeScope(b => { b.RegisterInstance(host).As<IHost>(); });
var node = scope.Resolve<Node>();
NodeCreated?.Invoke(this, node);
Program.cs (In client assembly)
Network.NodeCreated += (sender, node) => { _ = new NodeEventsHandlers(node); };
NodeEventsHandlers.cs (In client assembly)
public NodeEventsHandlers(INode node)
{
this.node = node;
node.Messaging.MessageReceived += OnMessageReceived;
...
It's a mess and doesn't take advantage of DI. How can I inject event handler methods into Messaging with Autofac? I found this but I'm not sure whether it's useful in my case.
There's really not enough here to guarantee a complete answer. For example, you mention you have a Messaging class but you never show it, and that's kind of what the whole question revolves around. However, I may be able to shortcut this a bit.
Dependency injection is for constructing objects... which sounds like I'm stating the obvious, but it's an important point here because, if I'm reading the question right, you're asking how to make dependency injection wire up event handlers. That's not what DI is for, and, more specifically, that's not really something Autofac - as a dependency injection framework - would be able to help you with.
For a really, really high level concept of what Autofac can help with, think:
If you can use new instead, Autofac can help. That is, invoking a constructor and passing parameters.
If you could use Activator.CreateInstance (the reflection way of invoking the constructor), Autofac can help.
If you want to run some code you provide after the object is constructed or when the object is disposed, Autofac can help.
However, there's no facility in Autofac (or any other DI framework I'm aware of) that will "automatically" execute some sort of event wire-up. That's code you'd have to write yourself.
I could possibly see something where you use the Autofac OnActivating event to call some code you write that does the event wire-up when the object is resolved, but that's about as good as it gets.
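For example, something along these lines could work (a sketch only; it assumes Node implements INode and that NodeEventsHandlers does the event wire-up in its constructor, as in the question):

var scope = container.BeginLifetimeScope(b =>
{
    b.RegisterInstance(host).As<IHost>();

    // Run custom wire-up code whenever a Node is resolved from this scope.
    b.RegisterType<Node>()
     .OnActivating(e => new NodeEventsHandlers(e.Instance));
});

var node = scope.Resolve<Node>();

This still doesn't make Autofac "own" the event subscriptions; it merely gives you a hook to run your own wire-up code at resolve time.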
If I had to guess, I'd say you're likely looking for something closer to a service bus, where you register nodes with the pub/sub framework and that handles attaching and detaching listeners to events. That's a different type of thing entirely than Autofac.
It's fairly established that doing work in ctors for types that are resolved using SimpleInjector is bad practice. Although this often leads to certain late initializations of such types, a particularly interesting case is Reactive Extensions subscriptions.
Take for instance an observable sequence that exhibits Replay(1) semantics (actually BehaviorSubject if we take the StartWith into account), e.g.
private readonly IObservable<Value> _myObservable;
public MyType(IService service)
{
_myObservable = service.OtherObservable
.StartWith(service.Value)
.Select(x => SomeTransform())
.Replay(1)
.RefCount();
}
public IObservable<Value> MyObservable => _myObservable;
Assume now, that SomeTransform is computationally expensive. From the point of view of SimpleInjector, the above is bad practice. Ok, so we need some kind of Initialize() method to call after SimpleInjector is finished. But what about our replay semantics and our StartWith()? Our consumers expect a value when they Subscribe (assume now that this is guaranteed to happen after initialization)!
How do we get around these restrictions in a nice way while still satisfying SimpleInjector? Here's a summary of requirements:
Don't do extensive work in the ctor (i.e. SomeTransform should not run)
_myObservable should be readonly
MyObservable should exhibit Replay(1) semantics
We should always have an initial value (hence the StartWith)
We do not want to Subscribe inside MyType and cache the value (we like immutability)
I experimented with creating an additional observable that starts with false and then gets set to true on initialize, and then merging that together with _myObservable, but couldn't quite get it to work. Additionally, it doesn't seem like the best solution. In essence, all I want to do is delay until Initialize() is done. There must be some way to do this that I'm not seeing?
One easy solution that comes to mind is the use of Lazy<T>
This could look like:
private readonly Lazy<IObservable<Value>> _lazyMyObservable;
public MyType(IService service)
{
_lazyMyObservable = new Lazy<IObservable<Value>>(() => this.InitObservable(service));
}
private IObservable<Value> InitObservable(IService service)
{
return service.OtherObservable
.StartWith(service.Value)
.Select(x => SomeTransform())
.Replay(1)
.RefCount();
}
public IObservable<Value> MyObservable => _lazyMyObservable.Value;
This will initialize the field _lazyMyObservable without actually calling SomeTransform(). When a consumer asks for MyType.MyObservable, the InitObservable code will be called once and only once. This postpones the initialization to the point where the code is actually used.
This keeps your constructor nice and clean, with no need to add initialization logic.
Note that the Lazy<T> constructor has several overloads that you can use if you expect multithreading issues.
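For example, the overload taking a LazyThreadSafetyMode (this is plain Lazy<T> from the BCL, nothing Simple Injector specific):

_lazyMyObservable = new Lazy<IObservable<Value>>(
    () => this.InitObservable(service),
    LazyThreadSafetyMode.ExecutionAndPublication); // only one thread ever runs the factory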
Injection constructors should be simple and reliable. This means that the following practices are frowned upon:
Doing any I/O operations inside the constructor. I/O operations can fail and make construction of the object graph unreliable.
Using the class's dependencies inside the constructor. Not only could a called dependency cause I/O of its own, sometimes injected dependencies are not (yet) fully initialized, and final initialization happens at a later point in time. Perhaps after the object graph has been constructed.
Considering how Reactive Extensions work, your MyType constructor doesn't seem to do any I/O. Its SomeTransform method is not called during the creation of MyType. Instead, the observable is configured to call SomeTransform when objects are pushed. This means that from a DI perspective, your injection is still 'simple' and fast. Sometimes your classes need some initialization on top of storing incoming dependencies. Creating and storing a Lazy<T>, for instance, is a good example. It allows delaying doing some I/O while still having more code than merely "receiving the dependencies."
But you are still accessing a dependency inside your constructor, which might cause trouble if that dependency, or its dependencies, are not fully initialized. Furthermore, with Reactive Extensions you create a runtime dependency from IService back to MyType (you already have a design-time dependency from MyType to IService). This is very similar to working with events in .NET. A consequence of this is that MyType could be kept alive by IService, even when MyType's lifetime is expected to be shorter.
So, strictly speaking, from a DI perspective this configuration might be troublesome. But it's hard to imagine a different model when working with Reactive Extensions. The alternative would mean moving this configuration of the observables out of the constructors and doing it after the object graph has been constructed. But that will likely require opening up your classes so the Composition Root has access to the methods that need to be called. It also causes Temporal Coupling.
In other words, when using Reactive Extensions, it is probably good to have some design rules in place to prevent trouble. These rules could be:
All exposed IObservable<T> properties should always be fully initialized and usable after its type's construction.
All observers and observables should have the same lifetime.
This should be a very quick question. I am trying to learn the CQRS pattern and there is one thing which is not clear. There are two dispatchers: one for commands and one for queries. Both of them need to have the DI kernel injected in order to get the appropriate handler. For example:
var handler = _resolver.Resolve<IQueryHandler<TQuery, TResult>>();
Isn't this violating the concept of DI, that Resolve should never be called directly and everything should be injected via constructors/properties?
There is a bigger example: http://www.adamtibi.net/06-2013/implementing-a-cqrs-based-architecture-with-mvc-and-document-db
Please check out this method:
public void Dispatch<TParameter>(TParameter command) where TParameter : ICommand
{
var handler = _kernel.Get<ICommandHandler<TParameter>>();
handler.Execute(command);
}
I've found this solution on 3 different pages. Why is it done this way instead of creating a factory to map Query to QueryHandler?
If you consider the dispatcher to be a part of the infrastructure, calling Resolve() within it does not violate the DI concept you describe.
Handlers are generally thought of as entry points for logic pipelines (or threads, or however you want to think of them). This is similar to controllers in MVC, or the Main() method in a console application. So like these other constructs, the dispatcher is considered a top-level object in the dependency chain, and is thus a perfectly legitimate place to reference the container.
Edit
So the comments mention Composition Root (CR), which is a term I like but deliberately tried to avoid in this answer, as it tends to confuse people. Is the CR a specific class? An assembly? I tend to think of it more as a concept than a specific construct. It's the logical place in the application where object graphs are composed.
To clarify what I meant about controllers: the controllers would be the entry point, and (as @Zbigniew noted) the controller factory would be (part of) the CR. Similarly, handlers would be the entry point, and the dispatcher would be the CR. Handlers/Controllers would not have a reference to the container, but the Dispatcher/ControllerFactory would.
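To make that concrete, here is a rough sketch of a dispatcher living in the Composition Root, using the Ninject-style _kernel.Get<T>() from the question (ICommandDispatcher is a made-up interface name for this example):

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IKernel _kernel;

    public CommandDispatcher(IKernel kernel)
    {
        _kernel = kernel;
    }

    public void Dispatch<TParameter>(TParameter command) where TParameter : ICommand
    {
        // Resolving here is acceptable because the dispatcher is part of the
        // Composition Root; handlers themselves never see the container.
        var handler = _kernel.Get<ICommandHandler<TParameter>>();
        handler.Execute(command);
    }
}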
Hi I am using the Simple Injector DI library and have been following some really interesting material about an architectural model designed around the command pattern:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
The container will manage the lifetime of the UnitOfWork, and I am using commands to perform specific operations against the database.
My question is: if I have a command, for example an AddNewCustomerCommand, which in turn performs a call to another service (i.e. sends a text message), is this acceptable from a design standpoint, or should this be done at a higher level, and if so, how best to do it?
Example code is below:
public class AddNewBusinessUnitHandler
    : ICommandHandler<AddBusinessUnitCommand>
{
    private IUnitOfWork uow;
    private ICommandHandler<OtherServiceCommand> otherHandler;

    public AddNewBusinessUnitHandler(IUnitOfWork uow,
        ICommandHandler<OtherServiceCommand> otherHandler)
    {
        this.uow = uow;
        this.otherHandler = otherHandler;
    }

    public void Handle(AddBusinessUnitCommand command)
    {
        var businessUnit = new BusinessUnit()
        {
            Name = command.BusinessUnitName,
            Address = command.BusinessUnitAddress
        };

        var otherCommand = new OtherServiceCommand()
        {
            welcomePostTo = command.BusinessUnitName
        };

        uow.BusinessUnitRepository.Add(businessUnit);
        this.otherHandler.Handle(otherCommand);
    }
}
It depends on your architectural view of (business) commands, but it is quite natural to have a one to one mapping between a Use Case and a command. In that case, the presentation layer should (during a single user action, such as a button click) do nothing more than create the command and execute it. Furthermore, it should do nothing more than execute that single command, never more. Everything needed to perform that use case, should be done by that command.
That said, sending text messages, writing to the database, doing complex calculations, communicating with web services, and everything else you need to operate the business' needs should be done during the context of that command (or perhaps queued to happen later). Not before, not after, since it is that command that represents the requirements, in a presentation agnostic way.
This doesn't mean that the command handler itself should do all this. It is quite natural to move much of that logic into other services that the handler depends on. So I can imagine your handler depending on an ITextMessageSender interface, for instance.
Another discussion is whether command handlers should depend on other command handlers. When you look at use cases, it is not unlikely that big use cases consist of multiple smaller sub use cases, so in that sense it isn't strange. Again, there will be a one to one mapping between commands and use cases.
However, note that having a deep dependency graph of nested command handlers depending on each other can complicate navigating through the code, so take a good look at this. It might be better to inject an ITextMessageSender instead of using an ICommandHandler<SendTextMessageCommand>, for instance.
Another downside of allowing handlers to nest is that it makes doing infrastructural stuff a bit more complex. For instance, when wrapping command handlers with a decorator that adds transactional behavior, you need to make sure that the nested handlers run in the same transaction as the outermost handler. I happened to help a client of mine with this today. It's not incredibly hard, but it takes a little time to figure out. The same holds for things like deadlock detection, since this also runs at the boundary of the transaction.
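As an illustration of that decorator approach, here is a sketch of a transactional decorator (the TransactionScope-based implementation shown is an assumption for the example, not code from the linked articles):

using System.Transactions;

// Wraps any command handler in a transaction. Because TransactionScope joins an
// ambient transaction by default, nested handlers end up running inside the
// transaction started by the outermost handler.
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decorated.Handle(command);
            scope.Complete();
        }
    }
}

In Simple Injector such a decorator can be applied to all handlers with container.RegisterDecorator(typeof(ICommandHandler<>), typeof(TransactionCommandHandlerDecorator<>)).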
Besides, deadlock detection is a great example to showcase the power of this command/handler pattern, since almost every other architectural style makes it impossible to plug in this behavior. Take a look at the DeadlockRetryCommandHandlerDecorator class in this article to see an example.
When looking at the source code of a couple of projects I found a pattern I can not quite understand.
For instance in FubuMVC and Common Service Locator a Func is used when a static provider is changed.
Can anyone explain what the benefit is of using:
private static Func<IServiceLocator> currentProvider;
public static IServiceLocator Current
{
get { return currentProvider(); }
}
public static void SetLocatorProvider(Func<IServiceLocator> newProvider)
{
currentProvider = newProvider;
}
instead of:
private static IServiceLocator current;
public static IServiceLocator Current
{
get { return current; }
}
public static void SetLocator(IServiceLocator newInstance)
{
current = newInstance;
}
The major advantage of the first model over the second is what's called "lazy initialization". In the second example, as soon as SetLocator is called, you must have an IServiceLocator instance loaded in memory and ready to go. If such instances are expensive to create, and/or created along with a bunch of other objects at once (like on app startup), it's a good idea to try to delay actual creation of the object to reduce noticeable delays to the user. Also, if the dependency may not be used by the dependent class (say it's only needed for certain operations, and the class can do other things that don't require the dependency), it would be a waste to instantiate one.
The solution is to provide a "factory method" instead of an actual instance. When the instance is actually needed, the factory method is called, and the instance is created at the last possible moment before it's used. This reduces front-end loading times and avoids creating unneeded dependencies.
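For example, a caller could combine the provider delegate with a Lazy<IServiceLocator> so the expensive locator is built at most once, and only on first use (CreateConfiguredLocator is a hypothetical, expensive factory method, and ServiceLocator is the static class from the question):

var lazyLocator = new Lazy<IServiceLocator>(() => CreateConfiguredLocator());

// Nothing expensive happens here; the delegate is only invoked when Current is read.
ServiceLocator.SetLocatorProvider(() => lazyLocator.Value);

// ... later, on first use:
var locator = ServiceLocator.Current; // triggers creation only now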
Good answer by @KeithS. Another thing to note here is what happens under the covers during the initialization of certain instances. Keeping a reference to intentionally volatile objects can be tricky.
FubuMVC, for instance, spins up a nested StructureMap container per HTTP request which scopes all service location to that specific request. If you have classes running within that pipeline that have been built up, you'll want to use the contextual injection provided to you via THAT instance of IServiceLocator.
There's a lot more flexibility for the implementer of newProvider. They can lazy load, async load (and then, if it's not loaded by the time the func is called, it can have code to wait), and they can allow it to change based on runtime parameters, etc.
A func allows several things
The locator creation can be delayed until it is needed. It is therefore lazy.
The provider object does not contain any state. It is not its responsibility to shut down the locator or do anything with it except return the current locator when needed.
When the locator is reconfigured at run time, or it decides that a different instance is needed, it can control the lifetime of the locator as long as the calling code does not store a reference to the locator.
Since the locator is returned by a method it has more flexibility e.g. to create a thread local locator so it can create many objects in each thread without the need to coordinate object creation in one global object which could become a bottleneck when many threads are involved.
I am sure the designers could give you more reasons than I did why it can be a good idea to abstract away "simple" things like returning an instance of a service locator.
Yours,
Alois Kraus