Is it safe to publish a Domain Event before persisting the Aggregate? - C#

In many different projects I have seen two different approaches to raising Domain Events.
The first is to raise the Domain Event directly from the aggregate. For example, imagine you have a Customer aggregate with this method inside it:
public virtual void ChangeEmail(string email)
{
    if (this.Email != email)
    {
        this.Email = email;
        DomainEvents.Raise<CustomerChangedEmail>(new CustomerChangedEmail(email));
    }
}
I can see two problems with this approach. The first one is that the event is raised regardless of whether the aggregate is persisted or not. Imagine you want to send an email to a customer after successful registration: a "CustomerChangedEmail" event will be raised and some IEmailSender will send the email even if the aggregate wasn't saved. The second problem with the current implementation is that every event should be immutable. So the question is how can I initialize its "OccuredOn" property? Only inside the aggregate. It's logical, right? It forces me to pass ISystemClock (a system time abstraction) to each and every method on the aggregate! Whaaat??? Don't you find this design brittle and cumbersome? Here is what we end up with:
public virtual void ChangeEmail(string email, ISystemClock systemClock)
{
    if (this.Email != email)
    {
        this.Email = email;
        DomainEvents.Raise<CustomerChangedEmail>(new CustomerChangedEmail(email, systemClock.DateTimeNow));
    }
}
The second approach is to do what the Event Sourcing pattern recommends: on each and every aggregate, we keep a list of uncommitted events. Please pay attention that an uncommitted event is not a Domain Event! It doesn't even have an OccuredOn property. Now, when the ChangeEmail method is called on the Customer aggregate, we don't raise anything. We just add the event to the UncommitedEvents collection which exists on our aggregate. Like this:
public virtual void ChangeEmail(string email)
{
    if (this.Email != email)
    {
        this.Email = email;
        UncommitedEvents.Add(new CustomerChangedEmail(email));
    }
}
So when is the actual domain event raised? This responsibility is delegated to the persistence layer. In ICustomerRepository we have access to ISystemClock, because we can easily inject it into the repository. Inside the Save() method of ICustomerRepository we should extract all uncommitted events from the aggregate and, for each of them, create a Domain Event, setting the OccuredOn property on the newly created event. Then, IN ONE TRANSACTION, we save the aggregate and publish ALL domain events. This way we can be sure that all events are raised within the same transactional boundary as the aggregate's persistence.
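As a sketch (not a definitive implementation), assuming a hypothetical IEventPublisher, a unit-of-work abstraction, and an uncommitted event that can be converted to a Domain Event given the timestamp, such a Save() method could look like this:

public class CustomerRepository : ICustomerRepository
{
    private readonly ISystemClock _systemClock;    // injected time abstraction
    private readonly IEventPublisher _publisher;   // hypothetical publisher
    private readonly IUnitOfWork _unitOfWork;      // hypothetical unit of work

    public CustomerRepository(ISystemClock systemClock, IEventPublisher publisher, IUnitOfWork unitOfWork)
    {
        _systemClock = systemClock;
        _publisher = publisher;
        _unitOfWork = unitOfWork;
    }

    public void Save(Customer customer)
    {
        using (var transaction = _unitOfWork.BeginTransaction())
        {
            _unitOfWork.Persist(customer);

            foreach (var uncommitted in customer.UncommitedEvents)
            {
                // The repository stamps OccuredOn, so the aggregate never needs the clock.
                var domainEvent = uncommitted.ToDomainEvent(_systemClock.DateTimeNow);
                _publisher.Publish(domainEvent);
            }

            customer.UncommitedEvents.Clear();
            transaction.Commit();
        }
    }
}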
What I don't like about this approach? I don't want to create two different types for the same event, i.e., for the CustomerChangedEmail behavior I would need a CustomerChangedEmailUncommited type and a CustomerChangedEmailDomainEvent type. It would be nice to have just one type. Please share your experience regarding this topic!

I am not a proponent of either of the two techniques you present :)
Nowadays I favour returning an event or response object from the domain:
public CustomerChangedEmail ChangeEmail(string email)
{
    if (this.Email.Equals(email))
    {
        throw new DomainException("Cannot change e-mail since it is the same.");
    }

    return On(new CustomerChangedEmail { EMail = email });
}

public CustomerChangedEmail On(CustomerChangedEmail customerChangedEmail)
{
    // guard against a null instance
    this.EMail = customerChangedEmail.EMail;
    return customerChangedEmail;
}
In this way I don't need to keep track of my uncommitted events and I don't rely on a global infrastructure class such as DomainEvents. The application layer controls transactions and persistence in the same way it would without ES.
As for coordinating the publishing/saving: usually another layer of indirection helps. I must mention that I regard ES events as different from system events, system events being those between bounded contexts. A messaging infrastructure would rely on system events, as these usually convey more information than a domain event.
Usually when coordinating things such as the sending of e-mails one would make use of a process manager or some other entity to carry state. You could carry this state on your Customer with some DateEMailChangedSent property: if it is null then sending is required.
The steps are:
Begin Transaction
Get event stream
Make a call to change the e-mail on the customer, adding the event to the event stream
Record that e-mail sending is required (DateEMailChangedSent back to null)
Save event stream (1)
Send a SendEMailChangedCommand message (2)
Commit transaction (3)
There are a couple of ways to do that message sending part that may include it in the same transaction (no 2PC) but let's ignore that for now.
Assuming that we had previously sent an e-mail, so DateEMailChangedSent has a value before we start, we may run into the following exceptions:
(1) If we cannot save the event stream then there's no problem, since the exception will roll back the transaction and the processing would occur again.
(2) If we cannot send the message due to some messaging failure then there's no problem, since the rollback will set everything back to before we started.
(3) Well, we've sent our message, so an exception on commit may seem like an issue. But remember that with the rollback we never managed to persist setting our DateEMailChangedSent back to null, so nothing indicates that a new e-mail is required.
The message handler for the SendEMailChangedCommand would check DateEMailChangedSent and, if it is not null, simply return, acknowledging the message so that it disappears. However, if it is null then it would send the mail, either by interacting with the e-mail gateway directly or by making use of some infrastructure service endpoint through messaging (I'd prefer the latter).
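Sketched out, with the command shape, repository and gateway members being assumptions based on the description above, that handler could look like:

public class SendEMailChangedHandler
{
    private readonly ICustomerRepository _customerRepository;   // assumed
    private readonly IEMailGateway _emailGateway;               // assumed

    public SendEMailChangedHandler(ICustomerRepository customerRepository, IEMailGateway emailGateway)
    {
        _customerRepository = customerRepository;
        _emailGateway = emailGateway;
    }

    public void Handle(SendEMailChangedCommand command)
    {
        var customer = _customerRepository.Get(command.CustomerId);

        if (customer.DateEMailChangedSent != null)
        {
            return; // already sent; acknowledge the message and let it disappear
        }

        _emailGateway.SendEMailChanged(customer.EMail);
        customer.DateEMailChangedSent = DateTime.UtcNow;
        _customerRepository.Save(customer);
    }
}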
Well, that's my take on it anyway :)

I have seen two different approaches to raising Domain Events.
Historically, there have been two different approaches. Evans didn't include domain events when describing the tactical patterns of domain-driven design; they came later.
In one approach, Domain Events act as a coordination mechanism within a transaction. Udi Dahan wrote a number of posts describing this pattern, coming to the conclusion:
Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services.
Event sourcing, the common alternative, is actually a very different animal, insofar as the events are written to the book of record rather than merely being used to coordinate activities in the write model.
The second problem with the current implementation is that every event should be immutable. So the question is how can I initialize its "OccuredOn" property? Only inside the aggregate. It's logical, right? It forces me to pass ISystemClock (a system time abstraction) to each and every method on the aggregate!
Of course - see John Carmack's plan files
If you don't consider time an input value, think about it until you do - it is an important concept
In practice, there are actually two important time concepts to consider. If time is part of your domain model, then it's an input.
If time is just meta data that you are trying to preserve, then the aggregate doesn't necessarily need to know about it -- you can attach the meta data to the event elsewhere. One answer, for example, would be to use an instance of a factory to create the events, with the factory itself responsible for attaching the meta data (including the time).
How can it be achieved? An example of a code sample would help me a lot.
The most straightforward example is to pass the factory as an argument to the method.
public virtual void ChangeEmail(string email, EventFactory factory)
{
    if (this.Email != email)
    {
        this.Email = email;
        UncommitedEvents.Add(factory.CreateCustomerChangedEmail(email));
    }
}
And the flow in the application layer looks something like:
Create metadata from request
Create the factory from the metadata
Pass the factory as an argument.
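A hedged sketch of those three steps; EventMetadata, EventFactory and the request/repository names are illustrative assumptions, not an established API:

public class ChangeEmailHandler
{
    private readonly ICustomerRepository _repository;   // assumed

    public ChangeEmailHandler(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void Handle(ChangeEmailRequest request)
    {
        // 1. Create metadata from the request (time, correlation id, ...)
        var metadata = new EventMetadata(DateTime.UtcNow, request.CorrelationId);

        // 2. Create the factory from the metadata
        var factory = new EventFactory(metadata);

        // 3. Pass the factory as an argument
        var customer = _repository.Get(request.CustomerId);
        customer.ChangeEmail(request.Email, factory);
        _repository.Save(customer);
    }
}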
Then, IN ONE TRANSACTION, we save the aggregate and publish ALL domain events. This way we can be sure that all events are raised within the same transactional boundary as the aggregate's persistence.
As a rule, most people are trying to avoid two phase commit where possible.
Consequently, the publish usually isn't part of the transaction, but is handled separately.
See Greg Young's talk on Polyglot Data. The primary flow is that subscribers pull events from the book of record. In that design, the push model is a latency optimization.

I tend to implement domain events using the second approach.
Instead of manually retrieving and then dispatching all events in the aggregate root's repository, I have a simple DomainEventDispatcher class (application layer) which listens to various persistence events in the application. When an entity is added, updated or deleted, it determines whether the entity is an AggregateRoot. If so, it calls releaseEvents(), which returns a collection of domain events that then get dispatched using the application EventBus.
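Roughly, such a dispatcher might look like the sketch below; the persistence-hook signature, ReleaseEvents() and IEventBus are assumptions based on the description above:

public class DomainEventDispatcher
{
    private readonly IEventBus _eventBus;   // application-level event bus (assumed)

    public DomainEventDispatcher(IEventBus eventBus)
    {
        _eventBus = eventBus;
    }

    // Wired up to the persistence layer's added/updated/deleted notifications.
    public void OnEntityPersisted(object entity)
    {
        if (entity is AggregateRoot aggregateRoot)
        {
            foreach (var domainEvent in aggregateRoot.ReleaseEvents())
            {
                _eventBus.Publish(domainEvent);
            }
        }
    }
}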
I don't know why you are focusing so much on the occurredOn property.
The domain layer is only concerned with the meat of the domain events such as aggregate root IDs, entity IDs and value object data.
At the application layer you can have an event envelope which can wrap any serialized domain event while giving it some metadata, such as a unique ID (UUID/GUID), which aggregate root it came from, the time it occurred, etc. This can be persisted to a database.
This metadata is useful in the application layer because you might be publishing these events to other applications using a message bus/event stream over HTTP, and it allows each event to be uniquely identifiable.
Again, this metadata generally makes no sense in the domain layer, only in the application layer. The domain layer does not care about or have any use for event IDs or the times they occurred, but other applications which consume these events do. That's why this data is attached at the application layer.
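A minimal sketch of such an envelope, assuming the domain event is serialized to a string payload; the property names are illustrative:

public class EventEnvelope
{
    public Guid EventId { get; set; }           // unique ID (UUID/GUID)
    public string EventType { get; set; }       // e.g. "CustomerChangedEmail"
    public Guid AggregateRootId { get; set; }   // which aggregate root it came from
    public DateTime OccurredOn { get; set; }    // stamped at the application layer
    public string Payload { get; set; }         // the serialized domain event
}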

The way I would solve the e-mail sending problem is by decoupling the publishing of the event from the handling of the event through a messaging queue. This way you close the transaction after sending the event to the queue, and the sending of the email, or other effects that cannot or should not be part of the original DB transaction, happen shortly after, in a different transaction. The simplest way to do that, of course, is to have an event handler that publishes domain events onto the queue.
If you want to be extra sure that the domain events will be published to the queue when the transaction is committed, you can save the events to an OUTBOX table that is committed with the transaction, and then have a thread read from the table and publish to the event queue.
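A rough sketch of the outbox idea; the helper methods and IMessageQueue are placeholders for your persistence and messaging infrastructure, not a specific library API:

public class OrderRepository
{
    private readonly IDbConnection _connection;     // assumed
    private readonly IMessageQueue _messageQueue;   // assumed

    public void Save(Order order)
    {
        using (var transaction = _connection.BeginTransaction())
        {
            SaveAggregate(order, transaction);

            foreach (var domainEvent in order.DomainEvents)
            {
                // Committed atomically with the aggregate.
                InsertIntoOutbox(domainEvent, transaction);
            }

            transaction.Commit();
        }
    }

    // Runs on a background thread: relays committed outbox rows to the queue.
    public void RelayOutbox()
    {
        foreach (var row in ReadUnpublishedOutboxRows())
        {
            _messageQueue.Publish(row.Payload);
            MarkAsPublished(row);
        }
    }
}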

Related

How do Event Publishing and Subscription work in an ON DEMAND environment?

I have some events subscribed in the SoftwareViewModel constructor, and I am thinking of moving that particular view and view model into a separate module and loading it on demand.
But in order to make the event publishing and subscription work, we need to load SoftwareViewModel when the application loads, i.e. so that SoftwareViewModel's subscription is registered.
So how do event publishing and subscription work with an on-demand view model?
Is what I am thinking doable or not? The behaviour of SoftwareViewModel depends on settings that are loaded when we log in to the application.
// Want to make this view model ON DEMAND
public SoftwareViewModel()
{
    // Event that is going to be subscribed to
    SubscriptionToken subscriptionValidate = this.eventAggregator.GetEvent<PubSubEvent<IValidate>>().Subscribe(i =>
    {
        // CODE HERE
    });
}
Some explanation regarding "on demand":
By on demand I mean that I have two tabs, 1 and 2. I want my tab-2 things to load when I click on tab 2, i.e. SoftwareViewModel on demand.
But my tab-1 has some settings that affect SoftwareViewModel, i.e. tab-2. To handle this I am using event subscription and publishing to share data between tabs 1 and 2.
But I want to do everything on the click of tab-2.
Question:
Is it possible to make SoftwareViewModel (i.e. tab-2) on demand with event publishing and subscription? Because according to my study, publishing only works when the subscription is registered first.
Please let me know if more description is required.
Your understanding is correct; a subscription in a typical pub/sub application will only receive events published after the subscription is established.
This is why pub/sub is basically never the only way a view (model) receives data.
To make it clearer, let's start with a second use case: tab-2 is entered first, and tab-1 never gets created. How do you get the data then? Not only was tab-2 not subscribed at the right time, the event it was looking for was never published!
Moreover, in a third case, say tab-1 was actually a different process. tab-2 is probably interested in events that occurred before its process even started!
The solution is the same for all use cases: the view (model) (tab-2 here) must be able to query for the current state of the system. "Fetch, and subscribe for the rest." The query and response could be through your pub/sub system (having built that, it's a fair bit of work) or could be through some other method.
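For example, a rough sketch of that flow in a Prism-style view model, where ISettingsQuery is a hypothetical query service for the current settings:

public SoftwareViewModel(IEventAggregator eventAggregator, ISettingsQuery settingsQuery)
{
    // 1. Fetch: query the current state, so an on-demand tab-2 starts out
    //    correct even if tab-1 published its events long ago (or never existed).
    this.settings = settingsQuery.GetCurrentSettings();

    // 2. Subscribe for the rest: react to changes published after this point.
    this.eventAggregator.GetEvent<PubSubEvent<IValidate>>().Subscribe(validate =>
    {
        this.settings = settingsQuery.GetCurrentSettings();
    });
}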
TL;DR: You can't rely on just simple pub/sub for your initial data.

Event subscription best practices

I recently stumbled across the following in our application and I'm curious to know whether this is good or bad practice. What I see is events being subscribed to at different levels in the application: the business logic and, ultimately, our framework.
We have functionality to authenticate and authorize users, which is orchestrated by an HttpModule which basically does the following (I only included the most relevant parts):
public class FooModule : IHttpModule
{
    private IIdentityProvider _identityProvider;

    public void Init(HttpApplication context)
    {
        _identityProvider = TypeFactory.Create<IIdentityProvider>("...type string from configuration...");
        _identityProvider.Init(context, ...);

        context.PostAuthenticateRequest += OnAuthenticateRequest;
        context.PostAuthenticateRequest += OnAuthenticateRequestLogging;
    }

    ...
}
So far, so good: the HttpModule identifies the configured identity provider, initializes it and subscribes to some events. The event handlers are not in question here, so I omitted them.
Then, in the initialization of an arbitrary identity provider:
public class BarIdentityProvider : IIdentityProvider
{
    public void Init(HttpApplication httpApplication, ...)
    {
        var authorizer = new BarAuthorizationProvider();
        authorizer.Init(httpApplication, ...);

        httpApplication.PostAuthenticateRequest += httpApplication_PostAuthenticateRequest;
        httpApplication.AuthorizeRequest += httpApplication_AuthorizeRequest;
    }

    ...
}
And in the BarAuthorizationProvider the following happens:
public class BarAuthorizationProvider
{
    public void Init(HttpApplication httpApplication, ...)
    {
        httpApplication.PostAuthorizeRequest += OnAuthorizeRequest;
    }

    ...
}
As you can see, there are events being subscribed to in FooModule, BarIdentityProvider and BarAuthorizationProvider which, to me, comes across as event spaghetti. Also, when doing this:
var authorizer = new BarAuthorizationProvider();
authorizer.Init(httpApplication, ...);
I do not expect the authorizer to subscribe to various events and work 'magically'.
As a software developer I expect either:
one HttpModule which subscribes to the necessary events and requests the identity provider and authorization provider for identity and access information. Event handling is minimized in the providers.
multiple HttpModules (i.e. an authentication and an authorization module) which each subscribe to the necessary events. Event handling is minimized in the providers.
Am I correct or are there arguments against my expectation?
My first question would be, is it important that "Event handling is minimized in the providers"? I.e. what specific advantage is there to achieving that goal?
I also wonder what "event spaghetti" is supposed to mean. It clearly references the age-old concept of "spaghetti code", but that concept refers to code where the path of execution is convoluted, most often due to the use of unstructured flow control (i.e. goto statements). I don't see how, when discussing event handling, there's anything resembling spaghetti.
In other words, the phrase "event spaghetti" simply seems prejudicial here. I.e. the word "spaghetti" is just being thrown in because it's already understood as a pejorative. But lacking a clear objection to the implementation, the pejorative seems unjustified on its own, and the word "spaghetti" does not seem to offer a genuinely useful metaphor as it does in the context of "spaghetti code".
So, back to "minimizing event handling in the providers". Why should this be important? You write that "the event handlers are not in question here", but that seems to indicate that the handlers in the providers are appropriate. I.e. they do work that is relevant to the provider (likely only to the provider). So how else are they to do that work?
In my opinion (the fact that I wrote that suggests that your question might be considered off-topic, being "primarily opinion-based", but I think there's an objective way to look at this, even if that's simply my opinion :) ), one of the big advantages of the event-based programming model is reduced coupling. Assuming the work has to be done somewhere, the alternative to these provider objects subscribing to these events is for some other object to understand the needs of the provider object, and to fulfill that need on the provider objects' behalf.
But then you've just laden that other object with specific knowledge of the provider object. Which increases coupling between the types. Which is generally a bad thing.
If on the other hand, you feel that the event handlers in the providers can be moved to some other type without coupling the types more strongly, then that suggests the event handlers are indeed inappropriate in the providers. But in that case, the specific details of those event handlers most certainly is pertinent here, contrary to your assertion.
I.e. where the event handler goes depends a great deal on what it does. But since you don't seem concerned with the specific content of those handlers, that suggests they are right where they belong, and thus the subscription to events is not a problem at all.

Persistence and Domain Events with persistence ignorant objects

I've been studying domain-driven design in conjunction with domain events. I really like the separation of concerns those events provide. However, I ran into an issue with the order of persisting a domain object and raising domain events. I would like to raise events in the domain objects, yet I want them to be persistence-ignorant.
I've created a basic ShoppingCartService, with this Checkout method:
public void Checkout(IEnumerable<ShoppingCartItem> cart, Customer customer)
{
    var order = new Order(cart, customer);
    _orderRepository.Add(order);
    _unitOfWork.Commit();
}
In this example, the constructor of Order would raise an OrderCreated event which can be handled by certain handlers. However, I don't want those events to be raised when the entity is not yet persisted or when persisting somehow fails.
To solve this issue I have come up with several solutions:
1. Raise events in the service:
Instead of raising the event in the domain object, I could raise events in the service. In this case, the Checkout method would raise the OrderCreated event. One of the downsides of this approach is that by looking at the Order domain object, it isn't clear which events are raised by what methods. Also, a developer has to remember to raise the event when an order is created elsewhere. It doesn't feel right.
2. Queue domain events
Another option is to queue domain events and raise them when persisting has succeeded. This could be achieved with a using statement, for example:
using (DomainEvents.QueueEvents<OrderCreated>())
{
    var order = new Order(cart, customer);
    _orderRepository.Add(order);
    _unitOfWork.Commit();
}
The QueueEvents<T> method would set a boolean to true, and the DomainEvents.Raise<T> method would queue the event rather than executing it directly. In the dispose callback of QueueEvents<T>, the queued events are executed, which makes sure the persisting has already happened. This seems rather tricky, and it requires the service to know which event is being raised in the domain object. As written it also only supports one type of event, although that could be worked around.
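For illustration, a rough sketch of how such a QueueEvents scope could be implemented; everything here is extrapolated from the description above, not an existing API:

public static class DomainEvents
{
    [ThreadStatic] private static Queue<object> _queued;

    // Mirrors the usage above; in this simplified sketch the type parameter
    // only documents which event the caller expects to be queued.
    public static IDisposable QueueEvents<T>()
    {
        _queued = new Queue<object>();
        return new QueueScope();
    }

    public static void Raise<T>(T domainEvent)
    {
        if (_queued != null)
            _queued.Enqueue(domainEvent);   // defer until the scope is disposed
        else
            Dispatch(domainEvent);          // no scope: dispatch immediately
    }

    private static void Dispatch(object domainEvent)
    {
        // resolve the matching IDomainEventHandler<T> instances and invoke them
    }

    private sealed class QueueScope : IDisposable
    {
        public void Dispose()
        {
            // Runs after _unitOfWork.Commit(), so persistence already happened.
            while (_queued.Count > 0)
                Dispatch(_queued.Dequeue());
            _queued = null;
        }
    }
}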
3. Persist in domain event
I could persist the object using a domain event. This seems okay, except for the fact that the event handler persisting the object should execute first; however, I read somewhere that domain events should not rely on a specific order of execution. Perhaps that is not that important, and domain events could somehow know in which order the handlers should execute. For example: suppose I have an interface defining a domain event handler; an implementation would look like this:
public class NotifyCustomer : IDomainEventHandler<OrderCreated>
{
    public void Handle(OrderCreated args)
    {
        // ...
    }
}
When I want to handle persisting using an event handler too, I would create another handler deriving from the same interface:
public class PersistOrder : IDomainEventHandler<OrderCreated>
{
    public void Handle(OrderCreated args)
    {
        // ...
    }
}
Now NotifyCustomer behaviour depends on the order being saved in the database, so the PersistOrder event handler should execute first. Is it acceptable for these handlers to introduce a property, for example, that indicates the order of their execution? A snippet from the implementation of the DomainEvents.Raise<OrderCreated>() method:
foreach (var handler in Container.ResolveAll<IDomainEventHandler<OrderCreated>>().OrderBy(h => h.Order))
{
    handler.Handle(args);
}
Now my question is, do I have any other options? Am I missing something? And what do you think of the solutions I proposed?
Either your (transactional) event handlers enlist in the (potentially distributed) transaction, or you publish/handle the events after the transaction has committed. Your "QueueEvents" solution gets the basic idea right, but there are more elegant solutions, like publishing via the repository or the event store. For an example, have a look at SimpleCQRS.
You might also find these questions and answers useful:
CQRS: Storing events and publishing them - how do I do this in a safe way?
Event Aggregator Error Handling With Rollback
Update on point 3:
... however I read somewhere that domain events should not rely on a specific order of execution.
Regardless of your way of persisting, the order of events absolutely matters (within an aggregate).
Now NotifyCustomer behaviour depends on the order being saved in the database, so the PersistOrder event handler should execute first. Is it acceptable that these handlers introduce a property for example that indicates the order of their execution?
Persisting and handling events are separate concerns - don't persist using an event handler. First persist, then handle.
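Applied to the Checkout example from the question, "first persist, then handle" could look roughly like this, assuming the aggregate exposes its queued events (as in the answer below) and a hypothetical dispatcher:

public void Checkout(IEnumerable<ShoppingCartItem> cart, Customer customer)
{
    var order = new Order(cart, customer);
    _orderRepository.Add(order);
    _unitOfWork.Commit();                    // first persist

    foreach (var domainEvent in order.DomainEvents)
    {
        _dispatcher.Dispatch(domainEvent);   // then handle
    }
}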
Disclaimer: I don't know what I am talking about. ⚠️
As an alternative to raising events in the ShoppingCartService you could raise events in the OrderRepository.
As an alternative to queuing domain events in the ShoppingCartService, you could queue them in the Order aggregate by inheriting from a base class that provides an AddDomainEvent method.
public class Order : Aggregate
{
    public Order(IEnumerable<ShoppingCartItem> cart, Customer customer)
    {
        AddDomainEvent(new OrderCreated(cart, customer));
    }
}

public abstract class Aggregate
{
    private List<IDomainEvent> _domainEvents = new List<IDomainEvent>();

    public IReadOnlyCollection<IDomainEvent> DomainEvents => _domainEvents.AsReadOnly();

    public void AddDomainEvent(IDomainEvent domainEvent)
    {
        _domainEvents.Add(domainEvent);
    }
}

Calling commands from within another command Handle() method

Hi, I am using the Simple Injector DI library and have been following some really interesting material about an architectural model designed around the command pattern:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
The container will manage the lifetime of the UnitOfWork, and I am using commands to perform specific operations against the database.
My question is: if I have a command, for example an AddNewCustomerCommand, which in turn performs another call to another service (i.e. sends a text message), is this acceptable from a design standpoint, or should this be done at a higher level, and if so, how best to do it?
Example code is below:
public class AddNewBusinessUnitHandler
    : ICommandHandler<AddBusinessUnitCommand>
{
    private IUnitOfWork uow;
    private ICommandHandler<OtherServiceCommand> otherHandler;

    public AddNewBusinessUnitHandler(IUnitOfWork uow,
        ICommandHandler<OtherServiceCommand> otherHandler)
    {
        this.uow = uow;
        this.otherHandler = otherHandler;
    }

    public void Handle(AddBusinessUnitCommand command)
    {
        var businessUnit = new BusinessUnit()
        {
            Name = command.BusinessUnitName,
            Address = command.BusinessUnitAddress
        };

        var otherCommand = new OtherServiceCommand()
        {
            welcomePostTo = command.BusinessUnitName
        };

        uow.BusinessUnitRepository.Add(businessUnit);
        this.otherHandler.Handle(otherCommand);
    }
}
It depends on your architectural view of (business) commands, but it is quite natural to have a one-to-one mapping between a use case and a command. In that case, the presentation layer should (during a single user action, such as a button click) do nothing more than create the command and execute it. Furthermore, it should execute nothing more than that single command, never more. Everything needed to perform that use case should be done by that command.
That said, sending text messages, writing to the database, doing complex calculations, communicating with web services, and everything else the business needs should be done within the context of that command (or perhaps queued to happen later). Not before, not after, since it is that command that represents the requirements, in a presentation-agnostic way.
This doesn't mean that the command handler itself should do all this. It is quite natural to move much of the logic to other services that the handler depends on. So I can imagine your handler depending on an ITextMessageSender interface, for instance.
Another discussion is whether command handlers should depend on other command handlers. When you look at use cases, it is not unlikely that big use cases consist of multiple smaller sub use cases, so in that sense it isn't strange. Again, there will be a one-to-one mapping between commands and use cases.
However, note that having a deep dependency graph of nested command handlers depending on each other can complicate navigating through the code, so take a good look at this. It might be better to inject an ITextMessageSender instead of using an ICommandHandler<SendTextMessageCommand>, for instance.
Another downside of allowing handlers to nest is that it makes doing infrastructural things a bit more complex. For instance, when wrapping command handlers with a decorator that adds transactional behavior, you need to make sure that the nested handlers run in the same transaction as the outermost handler. I happened to help a client of mine with this just today. It's not incredibly hard, but it takes a little time to figure out. The same holds for things like deadlock detection, since this also runs at the boundary of the transaction.
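For illustration, a minimal sketch of such a transactional decorator using TransactionScope, whose ambient transaction flows to nested handlers on the same thread; this is a simplification, not the exact implementation from the articles above:

using System.Transactions;

public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        // The ambient TransactionScope is visible to nested handlers on the
        // same thread, so an inner handler joins the outer transaction.
        using (var scope = new TransactionScope())
        {
            this.decorated.Handle(command);
            scope.Complete();
        }
    }
}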
Besides, deadlock detection is a great example to showcase the power of this command/handler pattern, since almost every other architectural style makes it impossible to plug in this behavior. Take a look at the DeadlockRetryCommandHandlerDecorator class in that article to see an example.

C# - Priority-based delegates and chaining of delegates possible? Or Windows Workflow?

Whenever I feel hungry I will publish "I am hungry". This will be notified to the service providers, say MealsService, FruitService and JuiceService (these service providers know what to serve).
But the serving priority is the concern. Priority here means my first choice is MealsService: when there is enough meal available, my need ends with MealsService. To verify that enough meal is available, the MealsService raises the event "updateMeTheStockStatus" to the MealsServiceStockUpdateListener.
The MealsServiceStockUpdateListener will only reply back to MealsService. No other service providers (FruitService, JuiceService) will be notified by the MealsServiceStockUpdateListener. Only if there is insufficient stock does the MealsService pass the notification on to the JuiceService (as it is the second priority). As usual it checks the stock; if the stock is insufficient it passes the message to FruitService, and so the flow continues.
How can I technically implement this?
Does any implementation like priority-based delegates and delegate chaining make sense?
(Somebody, please reframe this for readability.)
Update: In this model there is no direct communication between the StockUpdateListener and me. Only the service providers will communicate with me.
Like other answerers, I'm not entirely convinced that an event is the way forward, but let's go along with it for the moment.
It seems to me that the business with the MealsServiceStockUpdateListener is a red herring really - you're just trying to execute some event handlers but not others. This sort of thing crops up elsewhere when you have a "BeforeXXX" event which allows cancellation, or perhaps some sort of exception handling event.
Basically you need to get at each of your handlers separately. There are two different ways of doing that - either you can use a normal multicast delegate and call GetInvocationList() or you can change your event declaration to explicitly keep a list of handlers:
private List<EventHandler> handlers = new List<EventHandler>();

public event EventHandler MealRequired
{
    add { handlers.Add(value); }
    remove
    {
        int index = handlers.LastIndexOf(value);
        if (index != -1)
        {
            handlers.RemoveAt(index);
        }
    }
}
These two approaches are not quite equivalent - if you subscribe with a delegate instance which is already a compound delegate, GetInvocationList will flatten it but the List approach won't. I'd probably go with GetInvocationList myself.
Now, the second issue is how to detect when the meal has been provided. Again, there are two approaches. The first is to use the normal event handler pattern, making the EventArgs subclass in question mutable. This is the approach that HandledEventArgs takes. The second is to break the normal event pattern and use a delegate that returns a value which can be used to indicate success or failure (and possibly other information). This is the approach that ResolveEventHandler takes. Either way, you execute the delegates in turn until one of them satisfies your requirements. Here's a short example (not using events per se, but using a compound delegate):
using System;

public class Test
{
    static void Main(string[] args)
    {
        Func<bool> x = FirstProvider;
        x += SecondProvider;
        x += ThirdProvider;
        Execute(x);
    }

    static void Execute(Func<bool> providers)
    {
        foreach (Func<bool> provider in providers.GetInvocationList())
        {
            if (provider())
            {
                Console.WriteLine("Done!");
                return;
            }
        }
        Console.WriteLine("No provider succeeded");
    }

    static bool FirstProvider()
    {
        Console.WriteLine("First provider returning false");
        return false;
    }

    static bool SecondProvider()
    {
        Console.WriteLine("Second provider returning true");
        return true;
    }

    static bool ThirdProvider()
    {
        Console.WriteLine("Third provider returning false");
        return false;
    }
}
Rather than publishing the message "I'm hungry" to the providers, publish "I need to know the current stock available". Then listen until you have enough information to make a request to the correct food service for what you need. This way the logic of what-makes-me-full is not spread among the food services... It seems cleaner to me.
Message passing isn't baked into .NET directly; you need to implement your own message forwarding by hand. Fortunately, the "chain of responsibility" design pattern is designed specifically for the problem you're trying to solve, namely forwarding a message down a chain until someone can handle it.
Useful resources:
Chain of Responsibility on Wikipedia
C# implementation on DoFactory.com
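A bare-bones sketch of that pattern applied here; the FoodService and HungerRequest names are mine, not from those references:

public abstract class FoodService
{
    private FoodService _next;

    public FoodService SetNext(FoodService next)
    {
        _next = next;
        return next;    // allows meals.SetNext(juice).SetNext(fruit);
    }

    public void Handle(HungerRequest request)
    {
        if (TryServe(request))
        {
            return;               // this service satisfied the request
        }
        _next?.Handle(request);   // otherwise forward down the chain
    }

    // Each concrete service checks its own stock here.
    protected abstract bool TryServe(HungerRequest request);
}

Wired up as mealsService.SetNext(juiceService).SetNext(fruitService), a request stops at the first service with sufficient stock.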
I'm not sure if you really need a priority event. Anyways, let's suppose we want to code that just for fun.
The .NET Framework has no support for such a peculiar construct. Let me show one possible approach to implementing it.
The first step would be to create a custom store for event delegates (as described here);
Internally, the custom event store could work like a priority queue;
The specific EventArgs used would be HandledEventArgs (or a subclass of it). This would allow the event provider to stop calling handlers after one of them sets the event as Handled;
The next step is the hardest: how to tell the event provider what the priority of the handler being added is?
Let me clarify the problem. Usually, the adding of a handler is like this:
eater.GotHungry += mealsService.Someone_GotHungry;
eater.GotHungry += juiceService.Someone_GotHungry;
eater.GotHungry += fruitService.Someone_GotHungry;
The += operator will only receive a delegate. It's not possible to pass a second priority parameter. There might be several possible solutions to this problem. One would be to define the priority in a custom attribute set on the event handler method. A second approach is discussed in the question.
Compared to the chain of responsibility implementation at dofactory.com, this approach has some advantages. First, the handlers (your food services) do not need to know each other. Also, handlers can be added and removed at any time, dynamically. Of course, you could implement a variation of a chain of responsibility that has these advantages too.
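To make that tangible, a rough sketch of a priority-ordered handler store combined with HandledEventArgs (all names here are illustrative):

private readonly List<(int Priority, EventHandler<HandledEventArgs> Handler)> _handlers =
    new List<(int, EventHandler<HandledEventArgs>)>();

// The priority is passed explicitly, since += cannot carry a second parameter.
public void Subscribe(int priority, EventHandler<HandledEventArgs> handler)
{
    _handlers.Add((priority, handler));
}

protected void OnGotHungry()
{
    var args = new HandledEventArgs();
    foreach (var entry in _handlers.OrderBy(e => e.Priority))
    {
        entry.Handler(this, args);
        if (args.Handled)
        {
            break;   // a provider satisfied the request; stop calling handlers
        }
    }
}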
I don't think delegates are the proper solution to your problem. Delegates are a low-level service provided by C# for relatively tightly coupled events between components. If I understand your question properly (it is worded a little oddly, so I am not sure I clearly understand your problem), then I think what you need is a mediated consumer/provider.
Rather than having your consumers directly consume the meal, juice, and fruit providers, have them request a food item from a central mediator. The mediator would then be responsible for determining what is available and what should be provided to the consumer. The mediator would be a subscriber to events published by all three services. Whenever stock is added/updated in the Meal, Juice, or Fruit services, they would publish their current stock to all subscribers. The mediator, being a subscriber, would track current stock reductions on its own, and be able to determine for itself whether to send a meal, juice, or fruit to a food consumer when a get food request is made.
For example:
FoodConsumer ---- (GetFoodRequest) ------> FoodProvider <-----> [ Local Stock Data ]
     ^                                        |  ^
     |--------- (GetFoodResponse) ------------|  |
                                                 |
MealService  ---- (PublishStockMessage) ---------|
JuiceService ---- (PublishStockMessage) ---------|
FruitService ---- (PublishStockMessage) ---------|
The benefits of such a solution are that you reduce coupling, properly segregate responsibility, and solve your problem. For one, your consumers only need to consume a single service...the FoodProvider. The FoodProvider subscribes to publications from the other three services, and is responsible for determining what food to provide to a consumer. The three food services are not responsible for anything related to the hunger of your food consumers, they are only responsible for providing food and tracking the stock of the food they provide. You also gain the ability to distribute the various components. Your consumers, the food provider, and each of the three food services can all be hosted on different physical machines if required.
However, to achieve the above benefits, your solution becomes more complex. You have more parts, and they need to be connected to each other properly. You have to publish and subscribe to messages, which requires some kind of supporting infrastructure (WCF, MSMQ, some third-party ESB, a custom solution, etc.). You also have duplication of data, since the food provider tracks stock on its own in addition to each of the food services, which could lead to discontinuity in available stock. This can be mitigated if you manage stock updates properly, but that would also increase complexity.
If you can handle the additional complexity, ultimately a solution like this would be more flexible and adaptable than a more tightly coupled solution that uses components and C# events in a local-deployment-only scenario (as in your original example).
I am having a bit of trouble understanding your analogy here, which sounds like it obscures the actual intent of the software, but I think I have done something like what you are describing.
In my case the software was telemarketing software, and each of the telemarketers had a calling queue. When a queue raises the event signifying that it is nearing empty, the program grabs a list of available people to call and then passes them through a chain of responsibility which pushes the available calls into the telemarketer's queue.
Each element in the chain acts as a priority filter: the first link in the chain grabs all of the people who have never been called before, and if it finishes (i.e. went through all of the people who have never been called) without filling up the queue, it passes the remaining list of people to call to the next link in the chain, which applies another filter/search. This continues until the last link in the chain, which just fires off an e-mail to an administrator indicating that there are no available people to be called and a human needs to intervene quickly before the telemarketers run out of work to do.
