I recently stumbled across the following in our application and I'm curious to know whether it is good or bad practice. What I see is events being subscribed to at different levels: in the application, in the business logic, and ultimately in our framework.
We have functionality to authenticate and authorize users, which is orchestrated by an HttpModule which basically does the following (I only included the most relevant parts):
public class FooModule : IHttpModule
{
    private IIdentityProvider _identityProvider;

    public void Init(HttpApplication context)
    {
        _identityProvider = TypeFactory.Create<IIdentityProvider>("...type string from configuration...");
        _identityProvider.Init(context, ...);

        context.PostAuthenticateRequest += OnAuthenticateRequest;
        context.PostAuthenticateRequest += OnAuthenticateRequestLogging;
    }

    ...
}
So far, so good: the HttpModule identifies the configured identity provider, initializes it and subscribes to some events. The event handlers are not in question here, so I omitted them.
Then, in the initialization of an arbitrary identity provider:
public class BarIdentityProvider : IIdentityProvider
{
    public void Init(HttpApplication httpApplication, ...)
    {
        var authorizer = new BarAuthorizationProvider();
        authorizer.Init(httpApplication, ...);

        httpApplication.PostAuthenticateRequest += httpApplication_PostAuthenticateRequest;
        httpApplication.AuthorizeRequest += httpApplication_AuthorizeRequest;
    }

    ...
}
And in the BarAuthorizationProvider the following happens:
public class BarAuthorizationProvider
{
    public void Init(HttpApplication httpApplication, ...)
    {
        httpApplication.PostAuthorizeRequest += OnAuthorizeRequest;
    }

    ...
}
As you can see, there are events being subscribed to in FooModule, BarIdentityProvider and BarAuthorizationProvider which, to me, comes across as event spaghetti. Also, when doing this:
var authorizer = new BarAuthorizationProvider();
authorizer.Init(httpApplication, ...);
I do not expect the authorizer to subscribe to various events and work 'magically'.
As a software developer I expect either:
one HttpModule which subscribes to the necessary events and asks the identity provider and authorization provider for identity and access information. Event handling is minimized in the providers.
multiple HttpModules (i.e. an authentication and an authorization module) which each subscribe to the necessary events. Event handling is minimized in the providers.
Am I correct or are there arguments against my expectation?
My first question would be, is it important that "Event handling is minimized in the providers"? I.e. what specific advantage is there to achieving that goal?
I also wonder what "event spaghetti" is supposed to mean. It clearly references the age-old concept of "spaghetti code", but that concept refers to code where the path of execution is convoluted, most often due to the use of unstructured flow control (i.e. goto statements). I don't see how, when discussing event handling, there's anything resembling spaghetti.
In other words, the phrase "event spaghetti" simply seems prejudicial here. I.e. the word "spaghetti" is just being thrown in because it's already understood as a pejorative. But lacking a clear objection to the implementation, the pejorative seems unjustified on its own, and the word "spaghetti" does not seem to offer a genuinely useful metaphor as it does in the context of "spaghetti code".
So, back to "minimizing event handling in the providers". Why should this be important? You write that "the event handlers are not in question here", but that seems to indicate that the handlers in the providers are appropriate. I.e. they do work that is relevant to the provider (likely only to the provider). So how else are they to do that work?
In my opinion (the fact that I wrote that suggests your question might be considered off-topic as being "primarily opinion-based", but I think there's an objective way to look at this, even if that's simply my opinion :) ), one of the big advantages of the event-based programming model is reduced coupling. Assuming the work has to be done somewhere, the alternative to these provider objects subscribing to these events is for some other object to understand the needs of the provider object, and to fulfill that need on the provider object's behalf.
But then you've just laden that other object with specific knowledge of the provider object. Which increases coupling between the types. Which is generally a bad thing.
If, on the other hand, you feel that the event handlers in the providers can be moved to some other type without coupling the types more strongly, then that suggests the event handlers are indeed inappropriate in the providers. But in that case, the specific details of those event handlers most certainly are pertinent here, contrary to your assertion.
I.e. where the event handler goes depends a great deal on what it does. But since you don't seem concerned with the specific content of those handlers, that suggests they are right where they belong, and thus the subscription to events is not a problem at all.
Related
I've been studying domain-driven design in conjunction with domain events. I really like the separation of concerns those events provide. I ran into an issue with the order of persisting a domain object and raising domain events. I would like to raise events in the domain objects, yet I want them to be persistence-ignorant.
I've created a basic ShoppingCartService, with this Checkout method:
public void Checkout(IEnumerable<ShoppingCartItem> cart, Customer customer)
{
    var order = new Order(cart, customer);
    _orderRepository.Add(order);
    _unitOfWork.Commit();
}
In this example, the constructor of Order would raise an OrderCreated event which can be handled by certain handlers. However, I don't want those events to be raised when the entity is not yet persisted or when persisting somehow fails.
To solve this issue I have come up with several solutions:
1. Raise events in the service:
Instead of raising the event in the domain object, I could raise events in the service. In this case, the Checkout method would raise the OrderCreated event. One of the downsides of this approach is that by looking at the Order domain object, it isn't clear which events are raised by what methods. Also, a developer has to remember to raise the event when an order is created elsewhere. It doesn't feel right.
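A minimal sketch of what that would look like (assuming the static DomainEvents dispatcher used later in this question, and that OrderCreated can be constructed from the order):

public void Checkout(IEnumerable<ShoppingCartItem> cart, Customer customer)
{
    var order = new Order(cart, customer);   // no event raised in the constructor
    _orderRepository.Add(order);
    _unitOfWork.Commit();

    // Raised here instead; this line only runs if Commit() did not throw.
    DomainEvents.Raise(new OrderCreated(order));
}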
2. Queue domain events
Another option is to queue domain events and raise them when persisting succeeded. This could be achieved by a using statement for example:
using (DomainEvents.QueueEvents<OrderCreated>())
{
    var order = new Order(cart, customer);
    _orderRepository.Add(order);
    _unitOfWork.Commit();
}
The QueueEvents<T> method would set a boolean to true and the DomainEvents.Raise<T> method would queue the event rather than executing it directly. In the dispose callback of QueueEvents<T>, the queued events are executed, which ensures that persisting has already happened. This seems rather tricky, and it requires the service to know which event is being raised in the domain object. In the example I provided it also only supports one type of event, although this could be worked around.
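For illustration, here is a rough sketch of what such a queueing dispatcher could look like. This is hypothetical code: it assumes the Container.ResolveAll mechanism shown further down and a single-threaded request. Note that by queueing deferred actions, the type parameter on QueueEvents becomes unnecessary, which also removes the single-event-type limitation:

using System;
using System.Collections.Generic;

public static class DomainEvents
{
    [ThreadStatic]
    private static List<Action> _queued;

    // Begin queueing; raised events are buffered until Dispose runs.
    public static IDisposable QueueEvents()
    {
        _queued = new List<Action>();
        return new QueueScope();
    }

    public static void Raise<T>(T args)
    {
        if (_queued != null)
            _queued.Add(() => Dispatch(args));   // defer until the scope is disposed
        else
            Dispatch(args);                      // no scope active: dispatch immediately
    }

    private static void Dispatch<T>(T args)
    {
        foreach (var handler in Container.ResolveAll<IDomainEventHandler<T>>())
            handler.Handle(args);
    }

    private sealed class QueueScope : IDisposable
    {
        public void Dispose()
        {
            var pending = _queued;
            _queued = null;
            // Runs at the end of the using block, i.e. after Commit() succeeded.
            foreach (var raise in pending)
                raise();
        }
    }
}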
3. Persist in domain event
I could persist the object using a domain event. This seems okay, except for the fact that the event handler persisting the object should execute first; however, I read somewhere that domain events should not rely on a specific order of execution. Perhaps that is not that important, and domain events could somehow know in which order the handlers should execute. For example: suppose I have an interface defining a domain event handler; an implementation would look like this:
public class NotifyCustomer : IDomainEventHandler<OrderCreated>
{
    public void Handle(OrderCreated args)
    {
        // ...
    }
}
When I want to handle persisting using an event handler too, I would create another handler implementing the same interface:
public class PersistOrder : IDomainEventHandler<OrderCreated>
{
    public void Handle(OrderCreated args)
    {
        // ...
    }
}
Now NotifyCustomer behaviour depends on the order being saved in the database, so the PersistOrder event handler should execute first. Is it acceptable for these handlers to introduce a property, for example, that indicates the order of their execution? A snippet from the implementation of the DomainEvents.Raise<OrderCreated>() method:
foreach (var handler in Container.ResolveAll<IDomainEventHandler<OrderCreated>>().OrderBy(h => h.Order))
{
    handler.Handle(args);
}
Now my question is, do I have any other options? Am I missing something? And what do you think of the solutions I proposed?
Either your (transactional) event handlers enlist in the (potentially distributed) transaction, or you publish/handle the events after the transaction has committed. Your "QueueEvents" solution gets the basic idea right, but there are more elegant solutions, like publishing via the repository or the event store. For an example, have a look at SimpleCQRS.
You might also find these questions and answers useful:
CQRS: Storing events and publishing them - how do I do this in a safe way?
Event Aggregator Error Handling With Rollback
Update on point 3:
... however I read somewhere that domain events should not rely on a specific order of execution.
Regardless of your way of persisting, the order of events absolutely matters (within an aggregate).
Now NotifyCustomer behaviour depends on the order being saved in the database, so the PersistOrder event handler should execute first. Is it acceptable that these handlers introduce a property for example that indicates the order of their execution?
Persisting and handling events are separate concerns - don't persist using an event handler. First persist, then handle.
Disclaimer: I don't know what I am talking about.
As an alternative to raising events in the ShoppingCartService you could raise events in the OrderRepository.
As an alternative to queuing domain events in the ShoppingCartService, you could queue them in the Order aggregate by inheriting from a base class that provides an AddDomainEvent method.
public class Order : Aggregate
{
    public Order(IEnumerable<ShoppingCartItem> cart, Customer customer)
    {
        AddDomainEvent(new OrderCreated(cart, customer));
    }
}

public abstract class Aggregate
{
    private readonly List<IDomainEvent> _domainEvents = new List<IDomainEvent>();

    public IReadOnlyCollection<IDomainEvent> DomainEvents => _domainEvents.AsReadOnly();

    public void AddDomainEvent(IDomainEvent domainEvent)
    {
        _domainEvents.Add(domainEvent);
    }
}
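The missing piece is then dispatching the collected events only after a successful commit. A hedged sketch of how the unit of work could do this (the _trackedAggregates collection, the ClearDomainEvents helper, and the _dispatcher are all assumptions, not part of the code above):

// Inside a hypothetical unit of work:
public void Commit()
{
    _dbContext.SaveChanges();   // persist first

    // Dispatch only after persistence succeeded.
    foreach (var aggregate in _trackedAggregates)
    {
        foreach (var domainEvent in aggregate.DomainEvents)
            _dispatcher.Dispatch(domainEvent);   // however events are routed to handlers

        aggregate.ClearDomainEvents();           // assumed helper on the Aggregate base class
    }
}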
As I understand it, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far. But I came across a problem and I'm not sure what the best way to solve it is. See the code sample below.
class SomeClass
{
    private IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;

        // (1) attach event handler in ctor.
        this.classWithEvent.Event += OnEvent;
    }

    public void ActivateEventHandling()
    {
        // (2) attach event handler in method
        this.classWithEvent.Event += OnEvent;
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}
For me, option (1) sounds fine, but then the constructor does more than only assign fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most. Therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in the ctor is fine.
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit the call to ActivateEventHandling, you end up with an improperly initialized class, and you need to transport the knowledge that ActivateEventHandling must be called in comments or somewhere else, which makes the class harder to use. It probably also results in class usage that you never tested: you called ActivateEventHandling and tested that path, but an uninformed user omitting the activation didn't, and you have certainly not tested your class with ActivateEventHandling left uncalled, right? :)
Edit: There may be alternative approaches here which are worth mentioning.
Depending on the paradigm, it may be wise to avoid wiring events in the class at all. I need to relativize my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class can be used more flexibly if it does not have a dependency on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know about other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it actually does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor to avoid a dependency. This concept is similar to neurons in a brain: they also emit messages to the environment and expect answers without knowing their neighbouring neurons.
However, I regard a dependency on an interface, if it is injected by some means of a dependency injection container, as just another paradigm and no less wrong.
The non-trivial task of having the environment wire up all the classes on start may lead to runtime errors (which are mitigated by very good coverage from functional and integration tests, which may be a hard task for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events manually on startup.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the clean code developer initiative (sorry, German only)) has written software that performs the wiring automatically, in a concept called "event based components" (not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
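A crude illustration of that kind of convention-based wiring (a hypothetical sketch, nowhere near as robust as the real thing): pair every event Xxx on a source object with a method named OnXxx on a target object via reflection.

using System;
using System.Reflection;

public static class AutoWirer
{
    // Wires source.Xxx += target.OnXxx for every event with a matching method.
    public static void Wire(object source, object target)
    {
        foreach (EventInfo evt in source.GetType().GetEvents())
        {
            MethodInfo handler = target.GetType().GetMethod("On" + evt.Name);
            if (handler == null)
                continue;

            // CreateDelegate throws if the signatures do not match;
            // a real implementation would check compatibility first.
            Delegate d = Delegate.CreateDelegate(evt.EventHandlerType, target, handler);
            evt.AddEventHandler(source, d);
        }
    }
}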
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option (1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if, in other methods of the class, you depend on always being subscribed to some object, you'd better do it in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
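If the subscription really is such an invariant, a minimal sketch is to pair the += in the constructor with a -= in Dispose, so the subscription lives exactly as long as the object (IClassWithEvent as in the question):

using System;

public sealed class SomeClass : IDisposable
{
    private readonly IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;
        this.classWithEvent.Event += OnEvent;   // invariant: subscribed for the whole lifetime
    }

    public void Dispose()
    {
        classWithEvent.Event -= OnEvent;        // tear the invariant down deterministically
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}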
The single responsibility principle dictates that such wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Pseudo code:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}

public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { ... }
}

public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note: the above is pseudo-code, intended only to describe the idea.
How your bootstrapper is actually implemented depends on many things. It can be your "main" method, or your global.asax, or whatever you have in place to actually configure and prepare your application.
The idea is that whatever is responsible for preparing the application to run should compose it, not the classes themselves; they should be as single-purpose as possible and should not care too much about how and where they are used.
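As a usage sketch (the concrete type names are hypothetical), the composition root is then the only place that knows both sides:

// Somewhere in Main(), global.asax, or wherever the application is composed:
IEventProvider provider = new ClassWithEvent();     // hypothetical concrete event source
IEventResponder responder = new EventResponder();
new Bootstrapper().WireEvent(provider, responder);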
I am currently in deep need of logging for an existing application, and can't afford to add the level of logging I expect directly into the code. As a workaround, I'd be happy to have a certain amount of logging done each time an event is fired.
I am considering a solution where I could, at runtime, parse the whole application to wrap all event-to-handler bindings with event-to-wrapper-to-handler; and, ideally, unwrap them at runtime too. In Pseudo-code, this would be:
IDictionary<Event, Action<object, EventArgs>> originalBindings = ...;

public void SetWrapBindings()
{
    var allBindingsToReplace = Assembly.GetAssembly().GETALLEVENTS()
        .Where(eventInfo => eventInfo.GetOtherMethods().Any());

    foreach (var binding in allBindingsToReplace)
    {
        binding.event -= binding.handler;
        binding.event += HandlerWrapper;
        originalBindings.Add(binding.event, binding.handler);
    }
}

public static void HandlerWrapper(object o, EventArgs e)
{
    // Do some logging
    try
    {
        var handler = originalBindings.TryGetValue(/* something */);
        handler.Invoke(o, e);
    }
    catch
    {
        // Do more logging
    }
}
In this bunch of pseudo-code there are lots of steps I was not able to write, possibly because I didn't find the correct API to use, but maybe also because some of the operations are theoretically impossible (in which case I'd love to know why). These are:
Iterating over all events of the application (this would probably be easy)
Identifying a good key for my IDictionary
At each step, get the relevant information from the context
Of course, binding one extra handler to each existing event (pre/post-executing a routine without really wrapping the handler) would already help, but executing the real handler inside a try/catch is a big nice-to-have.
Any partial answer is still greatly appreciated.
If you're looking for non-invasive system-wide logging, you'd be better off using an aspect-oriented programming framework like PostSharp. This page provides a decent jumping off point for what you're looking for.
Edit: To add to this, look at implementing an EventInterceptionAspect if you really do just want to know when any event is raised. Again, the PostSharp blog is a good source of info, and this article shows an implementation of this aspect that does simple stdout logging of adding, removing, and invoking.
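For a flavour of what that looks like, here is a rough sketch of such an aspect based on PostSharp's documented EventInterceptionAspect; treat the details as an approximation rather than code verified against a specific PostSharp version:

using System;
using PostSharp.Aspects;

[Serializable]
public class LogEventAspect : EventInterceptionAspect
{
    public override void OnAddHandler(EventInterceptionArgs args)
    {
        Console.WriteLine("Handler added to {0}", args.Event.Name);
        args.ProceedAddHandler();
    }

    public override void OnInvokeHandler(EventInterceptionArgs args)
    {
        Console.WriteLine("Raising {0}", args.Event.Name);
        try
        {
            args.ProceedInvokeHandler();   // runs the real handler inside a try
        }
        catch (Exception ex)
        {
            Console.WriteLine("Handler for {0} threw: {1}", args.Event.Name, ex);
            throw;
        }
    }
}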
Preceding posts:
Event Signature in .NET — Using a Strong Typed 'Sender'?
In a C# event handler, why must the “sender” parameter be an object?
Microsoft's conventions and guidelines force .NET users to use a special pattern for creating, raising and handling events in .NET.
The event design guidelines (http://msdn.microsoft.com/en-us/library/ms229011.aspx) state that:
Citation:
The event-handler signature observes the following conventions:
The return type is Void.
The first parameter is named sender and is of type Object. This is the object that raised the event.
The second parameter is named e and is of type EventArgs or a derived class of EventArgs. This is the event-specific data.
The method takes exactly two parameters.
These conventions tell developers that the (following) shorter and more obvious code is evil:
public delegate void ConnectionEventHandler(Server sender, Connection connection);

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, connection);
    }

    public event ConnectionEventHandler ClientConnected;
}
and the (following) longer and less obvious code is good:
public delegate void ConnectionEventHandler(object sender, ConnectionEventArgs e);

public class ConnectionEventArgs : EventArgs
{
    public Connection Connection { get; private set; }

    public ConnectionEventArgs(Connection connection)
    {
        this.Connection = connection;
    }
}

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, new ConnectionEventArgs(connection));
    }

    public event ConnectionEventHandler ClientConnected;
}
However, these guidelines do not state why it is so important to follow these conventions, leaving developers to act like monkeys who don't know why or what they are doing.
IMHO, Microsoft's event signature conventions for .NET are bad for your code because they cause additional zero-efficiency effort to be spent on coding, coding, coding:
Coding "(MyObject)sender" casts (not speaking about 99% of situations that don't require sender at all)
Coding derived "MyEventArgs" for the data to be passed inside event handler.
Coding dereferences (calling "e.MyData" when the data is required instead of just "data")
It's not that hard to make this effort, but practically speaking, what are we losing by not conforming to Microsoft's conventions, except that people take you for a heretic because your confrontation with Microsoft's conventions verges on blasphemy? :)
Do you agree?
The problems you will have:
When you add another argument, you will have to change your event handler signature.
When a programmer first looks at your code, your event handlers will not look like event handlers.
Especially the latter can waste far more time than writing a 5-line class.
Regarding having a strongly-typed sender, I've often wondered that myself.
Regarding the EventArgs, I'd still recommend you use an intermediate EventArgs class because you may want to add event information in the future which you don't currently foresee. If you've used a specific EventArgs class all along, you can simply change the class itself and the code where it gets fired. If you pass the Connection as per your example, you'd have to refactor every event handler.
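To make that concrete: if, say, a timestamp later needs to accompany the event, only the EventArgs class and the raise site change, and every existing handler keeps compiling (the ConnectedAt property is a hypothetical addition):

public class ConnectionEventArgs : EventArgs
{
    public Connection Connection { get; private set; }
    public DateTime ConnectedAt { get; private set; }   // added later; handler signatures stay untouched

    public ConnectionEventArgs(Connection connection, DateTime connectedAt)
    {
        this.Connection = connection;
        this.ConnectedAt = connectedAt;
    }
}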
Edit
Jim Mischel made a good point in his comments. By making the sender an object, we enable the same event method to potentially be reused to handle a variety of events. For example, let's say that a grid needs to update itself if:
the user clicks a "refresh" button, or
the system detects that a new entry has been loaded from the server.
You could then say something like this:
serverBus.EntryReceived += RefreshNeededHandler;
refreshButton.Click += RefreshNeededHandler;
...

public void RefreshNeededHandler(object sender, EventArgs args)
{
    ...
}
Of course, in practice, I have pretty much never had any call for this kind of reuse, whereas the first thing I tend to do in many, many cases is cast the sender to the object type that I know it has to be. If I want to reuse handlers like this, I think it would be easy enough to make two handlers that both call the same convenience method. For me, an event handler is conceptually supposed to handle a specific type of event on a particular group of objects. So I am not personally convinced that the object sender approach is the best convention.
However, I can imagine cases where this would be extremely handy, like if you want to log every event that gets fired.
The biggest problem I see in not following the convention is that you're going to confuse developers who are used to handling events in the way that the runtime library does. I won't say that the convention is good or bad, but it's certainly not evil. .NET developers know and understand how to work with events that are written in conformance with Microsoft's guidelines. Creating your own event handling mechanism on top of that may be more efficient at runtime and might even lead to code that you think is cleaner. But it's going to be different and you'll end up with two event handling "standards" in your program.
My position is that it's better to use a single less-than-ideal standard (as long as it's not horribly broken) than to have two competing standards.
I use strongly typed events (instead of object), as it saves me having to cast. It really isn't that hard to understand: "oh look, they've used a type that isn't an object".
As for EventArgs, you should use them in case the object changes, as per StriplingWarrior's answer.
I don't understand why devs would get confused over it.
Whenever I feel hungry, I publish "I am hungry". This is notified to the service providers, say MealsService, FruitService and JuiceService. (These service providers know what to serve.)
But the serving priority is the concern. Priority here means my first choice is MealsService: when enough meals are available, my need ends with the MealsService. To verify that enough meals are available, the MealsService raises the event "updateMeTheStockStatus" to the "MealsServiceStockUpdateListener".
The "MealsServiceStockUpdateListener" only replies back to the "MealsService"; no other service providers (FruitService, JuiceService) are notified by the "MealsServiceStockUpdateListener". Only if there is insufficient stock does the MealsService pass the notification on to the JuiceService (as it is the second priority). As usual, it checks its stock; if the stock is not sufficient, it passes the message on to the FruitService, and so the flow continues.
How can I technically implement this?
Would an implementation like priority-based delegates and delegate chaining make sense?
(Somebody, please reframe this for readability.)
Update: In this model there is no direct communication between the "StockUpdateListener" and "me". Only the "Service Providers" communicate with me.
Like other answerers, I'm not entirely convinced that an event is the way forward, but let's go along with it for the moment.
It seems to me that the business with the MealsServiceStockUpdateListener is a red herring really - you're just trying to execute some event handlers but not others. This sort of thing crops up elsewhere when you have a "BeforeXXX" event which allows cancellation, or perhaps some sort of exception handling event.
Basically you need to get at each of your handlers separately. There are two different ways of doing that - either you can use a normal multicast delegate and call GetInvocationList() or you can change your event declaration to explicitly keep a list of handlers:
private List<EventHandler> handlers = new List<EventHandler>();

public event EventHandler MealRequired
{
    add { handlers.Add(value); }
    remove
    {
        int index = handlers.LastIndexOf(value);
        if (index != -1)
        {
            handlers.RemoveAt(index);
        }
    }
}
These two approaches are not quite equivalent - if you subscribe with a delegate instance which is already a compound delegate, GetInvocationList will flatten it but the List approach won't. I'd probably go with GetInvocationList myself.
Now, the second issue is how to detect when the meal has been provided. Again, there are two approaches. The first is to use the normal event handler pattern, making the EventArgs subclass in question mutable. This is the approach that HandledEventArgs takes. The second is to break the normal event pattern, and use a delegate that returns a value which can be used to indicate success or failure (and possibly other information). This is the approach that ResolveEventHandler takes. Either way, you execute the delegates in turn until one of them satisfies your requirements. Here's a short example (not using events per se, but using a compound delegate):
using System;

public class Test
{
    static void Main(string[] args)
    {
        Func<bool> x = FirstProvider;
        x += SecondProvider;
        x += ThirdProvider;
        Execute(x);
    }

    static void Execute(Func<bool> providers)
    {
        foreach (Func<bool> provider in providers.GetInvocationList())
        {
            if (provider())
            {
                Console.WriteLine("Done!");
                return;
            }
        }
        Console.WriteLine("No provider succeeded");
    }

    static bool FirstProvider()
    {
        Console.WriteLine("First provider returning false");
        return false;
    }

    static bool SecondProvider()
    {
        Console.WriteLine("Second provider returning true");
        return true;
    }

    static bool ThirdProvider()
    {
        Console.WriteLine("Third provider returning false");
        return false;
    }
}
Rather than publish a message "I'm hungry" to the providers, publish "I need to know current stock available". Then listen until you have enough information to make a request to the correct food service for what you need. This way the logic of what-makes-me-full is not spread amongst the food services... It seems cleaner to me.
Message passing isn't baked into .NET directly; you need to implement your own message forwarding by hand. Fortunately, the "chain of responsibility design pattern" is designed specifically for the problem you're trying to solve, namely forwarding a message down a chain until someone can handle it.
Useful resources:
Chain of Responsibility on Wikipedia
C# implementation on DoFactory.com
I'm not sure if you really need a priority event. Anyways, let's suppose we want to code that just for fun.
The .NET Framework has no support for such a peculiar construct. Let me show one possible approach to implement it.
The first step would be to create a custom store for event delegates (as described here);
Internally, the custom event store could work like a priority queue;
The specific EventArgs used would be HandledEventArgs (or a subclass of it). This would allow the event provider to stop calling handlers after one of them sets the event as Handled;
The next step is the hardest: how to tell the event provider what the priority of the event handler being added is?
Let me clarify the problem. Usually, the adding of a handler is like this:
eater.GotHungry += mealsService.Someone_GotHungry;
eater.GotHungry += juiceService.Someone_GotHungry;
eater.GotHungry += fruitService.Someone_GotHungry;
The += operator will only receive a delegate; it's not possible to pass a second priority parameter. There are several possible solutions to this problem. One would be to define the priority in a custom attribute set on the event handler method. A second approach is discussed in the question.
Compared to the chain of responsibility implementation at dofactory.com, this approach has some advantages. First, the handlers (your food services) do not need to know each other. Also, handlers can be added and removed dynamically at any time. Of course, you could implement a variation of the chain of responsibility that has these advantages too.
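Putting those pieces together, here is a sketch of such a priority-ordered handler store built on HandledEventArgs (all names hypothetical; it assumes one handler per priority value, since SortedList rejects duplicate keys):

using System;
using System.Collections.Generic;
using System.ComponentModel;

public class Eater
{
    // Lower key = higher priority; SortedList keeps handlers in priority order.
    private readonly SortedList<int, EventHandler<HandledEventArgs>> handlers =
        new SortedList<int, EventHandler<HandledEventArgs>>();

    public void Subscribe(int priority, EventHandler<HandledEventArgs> handler)
    {
        handlers.Add(priority, handler);
    }

    public void OnGotHungry()
    {
        var args = new HandledEventArgs();
        foreach (var handler in handlers.Values)
        {
            handler(this, args);
            if (args.Handled)
                return;   // a provider satisfied the request; stop calling the rest
        }
    }
}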
I don't think delegates are the proper solution to your problem. Delegates are a low-level service provided by C# for relatively tightly coupled events between components. If I understand your question properly (it is worded a little oddly, so I am not sure I understand your problem clearly), then I think what you need is a mediated consumer/provider.
Rather than having your consumers directly consume the meal, juice, and fruit providers, have them request a food item from a central mediator. The mediator would then be responsible for determining what is available and what should be provided to the consumer. The mediator would be a subscriber to events published by all three services. Whenever stock is added/updated in the Meal, Juice, or Fruit services, they would publish their current stock to all subscribers. The mediator, being a subscriber, would track current stock reductions on its own, and be able to determine for itself whether to send a meal, juice, or fruit to a food consumer when a get food request is made.
For example:
 |---------- (GetFoodResponse) -------------|
 V                                          |
FoodConsumer ---- (GetFoodRequest) ------> FoodProvider <-----> [ Local Stock Data ]
                                            ^  ^  ^
                                            |  |  |
MealService ---- (PublishStockMessage) -----|  |  |
                                               |  |
JuiceService --- (PublishStockMessage) --------|  |
                                                  |
FruitService --- (PublishStockMessage) ----------|
The benefits of such a solution are that you reduce coupling, properly segregate responsibility, and solve your problem. For one, your consumers only need to consume a single service: the FoodProvider. The FoodProvider subscribes to publications from the other three services, and is responsible for determining what food to provide to a consumer. The three food services are not responsible for anything related to the hunger of your food consumers; they are only responsible for providing food and tracking the stock of the food they provide. You also gain the ability to distribute the various components. Your consumers, the food provider, and each of the three food services can all be hosted on different physical machines if required.
However, to achieve the above benefits, your solution becomes more complex. You have more parts, and they need to be connected to each other properly. You have to publish and subscribe to messages, which requires some kind of supporting infrastructure (WCF, MSMQ, some third-party ESB, a custom solution, etc.). You also have duplication of data, since the food provider tracks stock on its own in addition to each of the food services, which could lead to discontinuity in available stock. This can be mitigated if you manage stock updates properly, but that would also increase complexity.
If you can handle the additional complexity, ultimately a solution like this will be more flexible and adaptable than a more tightly connected solution that uses components and C# events in a local-deployment-only scenario (as in your original example).
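A stripped-down, in-process sketch of that mediator (all names are hypothetical, and a real version would sit behind WCF, MSMQ or similar, as noted above):

using System;
using System.Collections.Generic;

public class StockEventArgs : EventArgs
{
    public string Food { get; private set; }
    public int Quantity { get; private set; }

    public StockEventArgs(string food, int quantity)
    {
        Food = food;
        Quantity = quantity;
    }
}

public class FoodService
{
    public event EventHandler<StockEventArgs> StockPublished;

    public void PublishStock(string food, int quantity)
    {
        var handler = StockPublished;
        if (handler != null) handler(this, new StockEventArgs(food, quantity));
    }
}

// The mediator: subscribes to every service and answers requests by priority.
public class FoodProvider
{
    private readonly Dictionary<string, int> stock = new Dictionary<string, int>();
    private static readonly string[] priority = { "Meal", "Juice", "Fruit" };

    public void Subscribe(FoodService service)
    {
        // Track the local stock data from PublishStockMessage notifications.
        service.StockPublished += (sender, e) => stock[e.Food] = e.Quantity;
    }

    public string GetFood()
    {
        foreach (var food in priority)
        {
            int quantity;
            if (stock.TryGetValue(food, out quantity) && quantity > 0)
            {
                stock[food] = quantity - 1;   // reduce the locally tracked stock
                return food;                  // the GetFoodResponse
            }
        }
        throw new InvalidOperationException("No food in stock");
    }
}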
I am having a bit of trouble understanding your analogy here, which sounds like you're obscuring the actual intent of the software, but I think I have done something like what you are describing.
In my case the software was telemarketing software and each of the telemarketers had a calling queue. When that queue raises the event signifying that it is nearing empty, the program will grab a list of available people to call, and then pass them through a chain of responsibility which pushes the available call into the telemarketer's queue like so:
Each element in the chain acts as a priority filter: the first link in the chain will grab all of the people who have never been called before, and if it finishes (i.e. went through all of the people who have never been called) without filling up the queue, it will pass the remaining list of people to call to the next link in the chain, which will apply another filter/search. This continues until the last link in the chain, which just fires off an e-mail to an administrator indicating that there are no available people to be called and a human needs to intervene quickly before the telemarketers have no work to do.