I have some events subscribed in the SoftwareViewModel constructor, and I am thinking of moving that particular View and ViewModel into a separate MODULE and making it on demand.
But in order to make event publishing and subscription work, we need to load that SoftwareViewModel when the application loads, i.e. the subscription in SoftwareViewModel only works once the ViewModel exists.
So how do event publishing & subscription work with an ON-DEMAND ViewModel?
Is what I am thinking doable or not? The behaviour of SoftwareViewModel depends on settings that load when we log in to the application.
// Want to make this ViewModel ON DEMAND
public SoftwareViewModel()
{
    // Event that is going to be subscribed
    SubscriptionToken subscriptionValidate = this.eventAggregator.GetEvent<PubSubEvent<IValidate>>().Subscribe(i =>
    {
        // CODE HERE
    });
}
Regarding On Demand, some explanation:
By on demand I mean that I have two tabs, 1 & 2. I want my tab-2 things to load when I click on tab-2, i.e. SoftwareViewModel on demand.
But my tab-1 has some settings that affect SoftwareViewModel, i.e. tab-2. To achieve this I am using event subscription and publishing to share data between tab-1 & tab-2.
But I want to do everything on the click of tab-2.
Question:
Is it possible to make SoftwareViewModel, i.e. tab-2, on demand with event publishing and subscription? Because according to my study, publishing only works when the subscription is registered first.
Please let me know if more description is required.
Your understanding is correct; a subscription in a typical pub/sub application will only receive events published after the subscription is established.
This is why pub/sub is basically never the only way a view (model) receives data.
To make it clearer, let's start with a second use case: tab-2 is entered first, and tab-1 never gets created. How do you get the data then? Not only was tab-2 not subscribed at the right time, the event it was looking for was never published!
Moreover, in a third case, say tab-1 was actually a different process. tab-2 is probably interested in events that occurred before its process even started!
The solution is the same for all use cases: the view (model) (tab-2 here) must be able to query for the current state of the system. "Fetch, and subscribe for the rest." The query and response could go through your pub/sub system (having built that, it's a fair bit of work) or through some other method.
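In Prism terms, a minimal sketch of that "fetch, then subscribe" shape might look like this; ISettingsService and GetCurrentSettings() are hypothetical stand-ins for wherever tab-1's settings actually live, assumed here to return the same IValidate payload the event carries:

public SoftwareViewModel(IEventAggregator eventAggregator, ISettingsService settingsService)
{
    // 1. Fetch: query the current state up front, so it doesn't matter
    //    whether tab-1 published anything before this ViewModel existed.
    Apply(settingsService.GetCurrentSettings());

    // 2. Subscribe for the rest: react to changes from now on.
    eventAggregator.GetEvent<PubSubEvent<IValidate>>().Subscribe(i => Apply(i));
}

private void Apply(IValidate settings)
{
    // CODE HERE - the same handling for the fetched state and for later events
}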
TL;DR: You can't rely on just simple pub/sub for your initial data.
In many different projects I have seen 2 different approaches to raising Domain Events.
Raise the Domain Event directly from the aggregate. For example, imagine you have a Customer aggregate, and here is a method inside it:
public virtual void ChangeEmail(string email)
{
    if (this.Email != email)
    {
        this.Email = email;
        DomainEvents.Raise<CustomerChangedEmail>(new CustomerChangedEmail(email));
    }
}
I can see 2 problems with this approach. The first is that the event is raised regardless of whether the aggregate is persisted or not. Imagine you want to send an email to a customer after successful registration: a "CustomerChangedEmail" event will be raised and some IEmailSender will send the email even if the aggregate wasn't saved.
The second problem with this implementation is that every event should be immutable. So how can I initialize its "OccuredOn" property? Only inside the aggregate! That's logical, right? It forces me to pass ISystemClock (a system time abstraction) to each and every method on the aggregate. Don't you find this design brittle and cumbersome? Here is what we'd end up with:
public virtual void ChangeEmail(string email, ISystemClock systemClock)
{
    if (this.Email != email)
    {
        this.Email = email;
        DomainEvents.Raise<CustomerChangedEmail>(new CustomerChangedEmail(email, systemClock.DateTimeNow));
    }
}
The second approach is what the Event Sourcing pattern recommends. On each and every aggregate we define a list of uncommitted events. Please pay attention: UncommitedEvent is not a domain event! It doesn't even have an OccuredOn property. Now, when the ChangeEmail method is called on the Customer aggregate, we don't raise anything. We just save the event to the UncommitedEvents collection which exists on our aggregate, like this:
public virtual void ChangeEmail(string email)
{
    if (this.Email != email)
    {
        this.Email = email;
        UncommitedEvents.Add(new CustomerChangedEmail(email));
    }
}
So when is the actual domain event raised? This responsibility is delegated to the persistence layer. In ICustomerRepository we have access to ISystemClock, because we can easily inject it into the repository. Inside the Save() method of ICustomerRepository we extract all uncommitted events from the aggregate and create a DomainEvent for each of them, setting the OccuredOn property on each newly created domain event. Then, IN ONE TRANSACTION, we save the aggregate and publish ALL domain events. This way we can be sure that all events are raised within the transactional boundary of the aggregate's persistence.
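A rough sketch of what that Save() could look like; IDomainEventPublisher, ToDomainEvent and the persistence details are placeholders rather than part of the original design:

public class CustomerRepository : ICustomerRepository
{
    private readonly ISystemClock systemClock;
    private readonly IDomainEventPublisher publisher; // hypothetical publisher abstraction

    public CustomerRepository(ISystemClock systemClock, IDomainEventPublisher publisher)
    {
        this.systemClock = systemClock;
        this.publisher = publisher;
    }

    public void Save(Customer customer)
    {
        using (var transaction = new System.Transactions.TransactionScope())
        {
            PersistState(customer); // ORM/SQL details omitted

            foreach (var uncommited in customer.UncommitedEvents)
            {
                // The repository, not the aggregate, stamps OccuredOn.
                var domainEvent = uncommited.ToDomainEvent(systemClock.DateTimeNow);
                publisher.Publish(domainEvent);
            }
            customer.UncommitedEvents.Clear();

            transaction.Complete(); // persistence and publishing commit together
        }
    }

    private void PersistState(Customer customer) { /* ... */ }
}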
What don't I like about this approach? I don't want to create 2 different types for the same event, i.e. for the CustomerChangedEmail behaviour I would have a CustomerChangedEmailUncommited type and a CustomerChangedEmailDomainEvent type. It would be nice to have just one type. Please share your experience regarding this topic!
I am not a proponent of either of the two techniques you present :)
Nowadays I favour returning an event or response object from the domain:
public CustomerChangedEmail ChangeEmail(string email)
{
    if (this.Email.Equals(email))
    {
        throw new DomainException("Cannot change e-mail since it is the same.");
    }

    return On(new CustomerChangedEmail { EMail = email });
}

public CustomerChangedEmail On(CustomerChangedEmail customerChangedEmail)
{
    // guard against a null instance
    this.EMail = customerChangedEmail.EMail;
    return customerChangedEmail;
}
In this way I don't need to keep track of my uncommitted events and I don't rely on a global infrastructure class such as DomainEvents. The application layer controls transactions and persistence in the same way it would without ES.
As for coordinating the publishing/saving: usually another layer of indirection helps. I should mention that I regard ES events as different from system events, system events being those between bounded contexts. A messaging infrastructure would rely on system events, as these usually convey more information than a domain event.
Usually, when coordinating things such as the sending of e-mails, one would make use of a process manager or some other entity to carry state. You could carry this on your Customer with some DateEMailChangedSent property: if it is null, sending is required.
The steps are:
Begin the transaction
Get the event stream
Make the call to change the e-mail on the customer, adding the event to the event stream
Record that e-mail sending is required (DateEMailChangedSent set back to null)
Save the event stream (1)
Send a SendEMailChangedCommand message (2)
Commit the transaction (3)
There are a couple of ways to do the message-sending part that may include it in the same transaction (no 2PC), but let's ignore that for now.
Assuming that we had previously sent an e-mail, so DateEMailChangedSent has a value before we start, we may run into the following exceptions:
(1) If we cannot save the event stream then there's no problem, since the exception will roll back the transaction and the processing will occur again.
(2) If we cannot send the message due to some messaging failure then there's no problem, since the rollback will set everything back to how it was before we started.
(3) Well, we've sent our message, so an exception on commit may seem like an issue, but remember that we could not set our DateEMailChangedSent back to null to indicate that a new e-mail needs to be sent.
The message handler for the SendEMailChangedCommand would check DateEMailChangedSent: if it is not null, it would simply return, acknowledging the message, and the message disappears. However, if it is null then it would send the mail, either interacting with the e-mail gateway directly or making use of some infrastructure service endpoint through messaging (I'd prefer the latter).
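As a sketch, that idempotency check could look something like this (the repository, gateway and clock names are illustrative):

public void Handle(SendEMailChangedCommand command)
{
    var customer = customerRepository.Get(command.CustomerId);

    // Not null means the e-mail already went out; just acknowledge the message.
    if (customer.DateEMailChangedSent != null)
    {
        return;
    }

    emailGateway.SendEMailChanged(customer.EMail); // or message an infrastructure endpoint
    customer.DateEMailChangedSent = systemClock.DateTimeNow;
    customerRepository.Save(customer);
}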
Well, that's my take on it anyway :)
I have seen 2 different approaches to raising Domain Events.
Historically, there have been two different approaches. Evans didn't include domain events when describing the tactical patterns of domain-driven design; they came later.
In one approach, Domain Events act as a coordination mechanism within a transaction. Udi Dahan wrote a number of posts describing this pattern, coming to the conclusion:
Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services.
Event sourcing, the common alternative, is actually a very different animal, insofar as the events are written to the book of record, rather than merely being used to coordinate activities in the write model.
The second problem with the current implementation is that every event should be immutable. So the question is how can I initialize its "OccuredOn" property? Only inside the aggregate! It forces me to pass ISystemClock (a system time abstraction) to each and every method on the aggregate!
Of course - see John Carmack's plan files
If you don't consider time an input value, think about it until you do - it is an important concept
In practice, there are actually two important time concepts to consider. If time is part of your domain model, then it's an input.
If time is just meta data that you are trying to preserve, then the aggregate doesn't necessarily need to know about it -- you can attach the meta data to the event elsewhere. One answer, for example, would be to use an instance of a factory to create the events, with the factory itself responsible for attaching the meta data (including the time).
How can it be achieved? An example of a code sample would help me a lot.
The most straightforward example is to pass the factory as an argument to the method.
public virtual void ChangeEmail(string email, EventFactory factory)
{
    if (this.Email != email)
    {
        this.Email = email;
        UncommitedEvents.Add(factory.CreateCustomerChangedEmail(email));
    }
}
And the flow in the application layer looks something like this:
Create the metadata from the request
Create the factory from the metadata
Pass the factory as an argument (a short sketch follows below)
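A sketch of what that hypothetical EventFactory could look like, with the time captured once when the factory is built:

public class EventFactory
{
    private readonly DateTime occuredOn;

    public EventFactory(DateTime occuredOn)
    {
        this.occuredOn = occuredOn;
    }

    public CustomerChangedEmail CreateCustomerChangedEmail(string email)
    {
        return new CustomerChangedEmail(email, occuredOn);
    }
}

// Application layer:
// var factory = new EventFactory(systemClock.DateTimeNow); // metadata from the request
// customer.ChangeEmail(request.Email, factory);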
Then, IN ONE TRANSACTION, we save the aggregate and publish ALL domain events. This way we can be sure that all events are raised within the transactional boundary of the aggregate's persistence.
As a rule, most people try to avoid two-phase commit where possible.
Consequently, publish isn't usually part of the transaction, but is handled separately.
See Greg Young's talk on Polyglot Data. The primary flow is that subscribers pull events from the book of record. In that design, the push model is a latency optimization.
I tend to implement domain events using the second approach.
Instead of manually retrieving and then dispatching all events in the aggregate root's repository, I have a simple DomainEventDispatcher (application layer) class which listens to various persistence events in the application. When an entity is added, updated or deleted, it determines whether it is an AggregateRoot. If so, it calls releaseEvents(), which returns a collection of domain events that then get dispatched using the application EventBus.
I don't know why you are focusing so much on the occurredOn property.
The domain layer is only concerned with the meat of the domain events such as aggregate root IDs, entity IDs and value object data.
At the application layer you can have an event envelope which can wrap any serialized domain event while giving it some metadata, such as a unique ID (UUID/GUID), which aggregate root it came from, the time it occurred, etc. This can be persisted to a database.
This metadata is useful in the application layer because you might be publishing these events to other applications using a message bus/event stream over HTTP, and it allows each event to be uniquely identifiable.
Again, this metadata about the event generally makes no sense in the domain layer, only in the application layer. The domain layer does not care about or have any use for event IDs or the times they occurred, but other applications which consume these events do. That's why this data is attached at the application layer.
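The envelope shape could be as simple as this (the field names are illustrative):

public class EventEnvelope
{
    public Guid EventId { get; set; }        // unique per event (UUID/GUID)
    public string AggregateId { get; set; }  // which aggregate root it came from
    public DateTime OccurredOn { get; set; } // stamped at the application layer
    public string EventType { get; set; }    // e.g. "CustomerChangedEmail"
    public string Payload { get; set; }      // the serialized domain event
}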
The way I would solve the sending-of-email problem is by decoupling the publishing of the event from the handling of the event through a messaging queue. This way you close the transaction after sending the event to the queue, and the sending of the email, or other effects that cannot or should not be part of the original DB transaction, will happen shortly after, in a different transaction. The simplest way to do that, of course, is to have an event handler that publishes domain events onto the queue.
If you want to be extra sure that the domain events will be published to the queue when the transaction is committed, you can save the events to an OUTBOX table that is committed with the transaction, and then have a thread read from the table and publish to the event queue.
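A minimal sketch of that outbox flow; the table and helper names are illustrative, not a specific library:

// In the same DB transaction that saves the aggregate:
public void Save(Customer customer, IDbTransaction tx)
{
    SaveAggregate(customer, tx);
    foreach (var domainEvent in customer.UncommitedEvents)
    {
        // INSERT INTO Outbox (Id, Type, Payload, OccurredOn) VALUES (...)
        InsertOutboxRow(Serialize(domainEvent), tx);
    }
    // committing tx persists the state and the events atomically
}

// Meanwhile, a background thread drains the table:
public void PumpOutbox()
{
    foreach (var row in ReadUnpublishedOutboxRows())
    {
        messageQueue.Publish(row.Payload); // push to the event queue
        MarkAsPublished(row);              // so the row is not sent twice
    }
}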
I am building a cross-platform mobile application using the MvvmCross framework.
Since I would like to share information between ViewModels, I am registering notifications inside the ViewModel's constructor, using the built-in MvxMessenger.
Let's assume a message named ShowAdsMsg; the ViewModel then looks as follows:
public class AdsViewModel : BaseLookersViewModel, IAdsViewModel
{
    private MvxSubscriptionToken _showAdsMsgToken;

    public AdsViewModel()
    {
        _showAdsMsgToken = MvxMessenger.Subscribe<ShowAdsMsg>(message => onShowAdsNavigation(), MvxReference.Weak);
        MyMessenger.PublishLastMessage();
    }

    private void onShowAdsNavigation()
    {
        // Do Stuff
    }
}
About the MyMessenger thing:
The actual navigation to the ViewModel is performed from MainViewModel.
Since at the very moment of the navigation the AdsViewModel does not exist yet, messages published from the MainViewModel cannot reach it.
So my idea was to naively "remember" the message and publish it when the new ViewModel is ready.
The navigation call from the MainViewModel now looks like this:
private void navigate()
{
    MyMessenger.RememberMessage(new ShowAdsMsg(this));
    ShowViewModel<AdsViewModel>();
}
I am now able to navigate to the ViewModel, and all the notifications are successfully caught.
However...
When I press the BACK button on the device and re-navigate to the same ViewModel, the constructor is called again, and so the message subscription re-occurs.
As a result, when a message arrives, the onShowAdsNavigation() handler fires twice!
I found this similar post, discussing the question of how to properly dispose of a ViewModel, but it does not contain a direct solution to my problem.
What I need is a solution. It can be any one of the following:
An idea of how not to subscribe to messages in the ViewModel's ctor.
Guidance on how and when to correctly dispose of the ViewModel.
An explanation of why the constructor is called again, and how to avoid that.
A completely different approach to ViewModel information messaging.
Thanks in advance for your help!
Edit:
I found this SO answer, which basically answers item number 3 in the list above.
Still, I am wondering what approach I should take regarding the messenger issue.
Another Edit:
I verified that the same behavior exists with the MvvmCross tutorial N-05-MultiPage: I simply added a ctor to SecondViewModel, and I hit a breakpoint inside it after each BACK + re-navigate.
An explanation of why the constructor is called again, and how to avoid that.
The ctor is not called twice on the same object - instead what might happen is that a new View and a new ViewModel are created each time.
By default I would expect a new ViewModel to be created on every forwards navigation on every platform.
By default I would not expect this to happen during a back button press on WindowsPhone - it doesn't happen here for my test cases - but it could happen if:
WindowsPhone removes your first Page (and its ViewModel) from memory - I guess this might happen if your app is tombstoned or if you are using a custom RootFrame - but I don't expect this to happen by default.
you somehow null the ViewModel (DataContext) in your first Page
Without seeing more of your code I can't guess any more about why this might happen.
I'd personally recommend you look deeper at why you are seeing new ViewModels created during Back, but if you just want a quick fix, then you could look at overriding the ViewModelLocator within MvvmCross - see MvvmCross: Does ShowViewModel always construct new instances?
Note that on WindowsStore I would expect this to happen - WindowsStore doesn't hold Pages from the backstack in memory by default - but you can override this by setting NavigationCacheMode = NavigationCacheMode.Enabled; if you need to.
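For completeness, that override goes in the Page's constructor, e.g. (FirstView is a placeholder for your own Page):

public FirstView()
{
    this.InitializeComponent();

    // Keep this Page (and its DataContext/ViewModel) alive across Back navigation.
    this.NavigationCacheMode = NavigationCacheMode.Enabled;
}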
I come from a PHP background and have used WordPress quite a lot. I love how its plugin architecture works and the ability to hook events to event names. One of the parts I like best is being able to *add_filter()* to any database value just before it gets shown to the end user. My question is a multi-part one on how to replicate the whole plugin architecture in a C#.NET environment.
Part 1:
To create plug-ins, I have researched this and the MEF framework (Managed Extensibility Framework - http://mef.codeplex.com/) would probably be the best fit. It is designed specifically to take the grunt work out by giving you the ability to monitor directories for new plug-ins, track dependencies and other routine things. MEF ships with .NET 4+.
Part 2:
Hooking events? I can't seem to find much information about replicating a global, channel-based event system. From what I have gathered so far, I need a publish/subscribe pattern (which isn't that hard to make - you just create some concrete objects and give them events). The hard part is giving each event a 'channel' name and having all the events in the whole system be part of a global collection (Mediator pattern).
To replicate: (http://codex.wordpress.org/Function_Reference/add_filter)
Example 1
// Adds my button to the end of the content
add_filter('the_content', 'my_plugin_button');

function my_plugin_button( $content ) {
    // Adds my button to the end of the content
    return $content . "<a href='#'>My button</a>";
}
OR
Example 2
// Add a new admin menu item by hooking in
add_action('admin_menu', 'my_plugin_menu');

function my_plugin_menu() {
    add_options_page('My Plugin Options', 'My Plugin', 'manage_options', 'my-unique-identifier', 'my_plugin_options');
}
I hope you're all still with me? I have managed to replicate the functionality I need in JavaScript, and even jQuery has its .on() event function... the same thing, but channel- or list-based.
My 2 examples:
http://jsfiddle.net/AaronLayton/U3ucS/53/
http://jsfiddle.net/AaronLayton/eyNre/33/
Can anyone point me in the right direction, or is this the totally wrong approach for C#?
I think NServiceBus can help you a lot with these issues. Udi Dahan, the author of NServiceBus, has also written a lot of articles about the Domain Event pattern, which is a publish/subscribe mechanism.
I know it's been a long time since you posted this and you have probably built something already. However, I have been thinking about something like this myself. There are 2 options - really forget WordPress and try to build something much cleaner - it's a mess at the bottom of WordPress' code :D
Or this:
string the_content()
{
    var result = get_the_content();
    // do other stuff... if you want to.
    execute_filters(ref result, "the_content");
    execute_actions(ref result, "the_content");
    return result;
}

void execute_filters(ref string result, string action_name)
{
    var filters = get_pre_filters(action_name);
    foreach (var filter in filters)
    {
        // somehow call the method named in filter. PHP is generally global; C# is
        // namespaced, so you would need to think about that.
    }
}

void execute_actions(ref string result, string action_name)
{
    // and so on....
}
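For the get_pre_filters part, one possible backing store is just a global registry of delegates keyed by channel name - a rough equivalent of add_filter/apply_filters (all names here are illustrative):

static readonly Dictionary<string, List<Func<string, string>>> Filters =
    new Dictionary<string, List<Func<string, string>>>();

static void add_filter(string channel, Func<string, string> filter)
{
    if (!Filters.TryGetValue(channel, out var list))
    {
        Filters[channel] = list = new List<Func<string, string>>();
    }
    list.Add(filter);
}

static string apply_filters(string channel, string content)
{
    if (Filters.TryGetValue(channel, out var list))
    {
        foreach (var filter in list)
        {
            content = filter(content); // each filter transforms and returns the content
        }
    }
    return content;
}

// Usage, mirroring the WordPress example:
// add_filter("the_content", content => content + "<a href='#'>My button</a>");
// var html = apply_filters("the_content", get_the_content());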
When building something to mimic WordPress, you need to remember the many issues of WordPress' plugin architecture (in my personal opinion)... It seems to want to run every plugin on nearly every page, even if that page has nothing to do with that plugin. I once installed a plugin that added 60 database queries to each page call, and it wasn't even used.
Try to think smart about it when you are building it. Try to add a way to run only the plugins that are actually going to get used on the page/post of your new setup, e.g. have a "Plugins" field on the post/page object in your database with a list of plugins allowed to run on that page. That way you won't need to check all the plugins each time to see whether they want to run.
Anyways. Hope you got something working.
Long story short: I inherited a fairly complex application and I'm trying to track down a memory leak involving a form. Right now, every time the form is closed and a new one brought up, the old one remains in memory. I tracked down an issue with a static event owned and set by a control within the program (apparently, as long as the static event was set, no instance of that control was considered out of scope, even when no one else referred to said controls). Now I'm trying to track down the remaining issue.
Using MemProfiler and ANTS Memory Profiler, I've learned that the root execution path goes like this:
FormOpenWatch <-- The item which remains active
System.EventHandler -- (this as Delegate)._target
System.Object[]
System.EventHandler -- (this as MulticastDelegate)._invocationList
System.ComponentModel.EventHandlerList+ListEntry -- handler
System.ComponentModel.EventHandlerList+ListEntry -- next
System.ComponentModel.EventHandlerList+ListEntry -- next
System.ComponentModel.EventHandlerList+ListEntry -- next
System.ComponentModel.EventHandlerList+ListEntry -- next
System.ComponentModel.EventHandlerList -- head
PTU.MdiPTU -- (this as Component).events <-- The base application
Does anyone have any insight into what I might be looking for? I've found a Shown event added with the base application, and ensured that it gets removed when the form is being disposed of, but that doesn't seem to have fixed the problem.
Many thanks for any help you can provide.
Later Edit: I have thought I'd successfully solved this several times over by now, and I'm still having issues. The problem seems to stem from my Plotter class (and various derived classes) having this "public static event MouseEventHandler MultiCursorMouseMove;" event. We have a "cursor" which displays the value and time of the graph at the mouse's location. Originally this worked on one graph at a time, but a request was made to allow the user to toggle a mode where moving the mouse moved the cursor across all of the displayed graphs. I wrote up an initial treatment hooking the EventHandlers in as the items were instantiated, and my partner across the pond rewrote it to use the static event, which gets assigned to each item on construction. His way is much more elegant and works better - except that it has resulted in memory leaks.
The memory profiling software shows that every time I try to get rid of the form holding the plots, I'm left with a number of cases of "Disposed instance with direct EventHandler roots". In each of these, it shows that the object is either a Plotter or an object pointed to by the Plotter, and in each of these the base link is that a MultiCursorMouseMove EventList points to these objects. I think what's happening is that the Plotter is staying alive because it has this static event which in turn is linked to the Plotters. I have verified through the debugger that MultiCursorMouseMove is null at a given point, by virtue of my Dispose code removing the event for each Plotter, and yet running the profiler at that same point still shows this chain from MultiCursorMouseMove to these classes.
I'm out of ideas on how to fix this currently. Anyone?
If MdiPTU is the MDI parent form for your application, it sounds like FormOpenWatch might have subscribed to one of its events. If it hasn't done so directly, you might find the subscription in a FormOpenWatch superclass, or perhaps even in other code that can wire up execution of a FormOpenWatch method from an MdiPTU event.
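Either way, the general fix is the same one you applied to your earlier static event: every subscription to an event on a long-lived publisher (MdiPTU, or a static event like MultiCursorMouseMove) must be balanced with an unsubscription when the form is torn down. Illustratively (the member names here are hypothetical):

protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Balance whatever was wired up in the constructor; as long as the
        // long-lived publisher holds these handlers, this form stays reachable.
        mdiParent.SomeEvent -= this.OnSomeEvent;
        Plotter.MultiCursorMouseMove -= this.OnMultiCursorMouseMove;
    }
    base.Dispose(disposing);
}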
Whenever I feel hungry I will publish that I am hungry. This will be notified to the service providers, say MealsService, FruitService and JuiceService. (These service providers know what to serve.)
But the serving priority is the concern. Priority here means my first choice is MealsService: when there is enough meal stock available, my need ends with MealsService. To verify that enough meal stock is available, the MealsService raises the event "updateMeTheStockStatus" to the "MealsServiceStockUpdateListener".
The "MealsServiceStockUpdateListener" will only reply back to "MealsService"; no other service providers (FruitService, JuiceService) will be notified by the "MealsServiceStockUpdateListener". Only if there is insufficient stock does the MealsService pass the notification on to the JuiceService (as it is the second priority). As usual, it checks its stock, and if the stock is insufficient it passes the message on to the FruitService; the flow continues like this.
How can I technically implement this?
Would an implementation like priority-based delegates and delegate chaining make sense?
(Somebody! Please reframe this for readability.)
Update: in this model there is no direct communication between the "StockUpdateListener" and "me". Only the "Service Providers" communicate with me.
Like other answerers, I'm not entirely convinced that an event is the way forward, but let's go along with it for the moment.
It seems to me that the business with the MealsServiceStockUpdateListener is a red herring really - you're just trying to execute some event handlers but not others. This sort of thing crops up elsewhere when you have a "BeforeXXX" event which allows cancellation, or perhaps some sort of exception handling event.
Basically you need to get at each of your handlers separately. There are two different ways of doing that: either you can use a normal multicast delegate and call GetInvocationList(), or you can change your event declaration to explicitly keep a list of handlers:
private List<EventHandler> handlers = new List<EventHandler>();

public event EventHandler MealRequired
{
    add { handlers.Add(value); }
    remove
    {
        int index = handlers.LastIndexOf(value);
        if (index != -1)
        {
            handlers.RemoveAt(index);
        }
    }
}
These two approaches are not quite equivalent - if you subscribe with a delegate instance which is already a compound delegate, GetInvocationList will flatten it but the List approach won't. I'd probably go with GetInvocationList myself.
Now, the second issue is how to detect when the meal has been provided. Again, there are two approaches. The first is to use the normal event handler pattern, making the EventArgs subclass in question mutable; this is the approach that HandledEventArgs takes. The second is to break the normal event pattern and use a delegate that returns a value which can be used to indicate success or failure (and possibly other information); this is the approach that ResolveEventHandler takes. Either way, you execute the delegates in turn until one of them satisfies your requirements. Here's a short example (not using events per se, but using a compound delegate):
using System;

public class Test
{
    static void Main(string[] args)
    {
        Func<bool> x = FirstProvider;
        x += SecondProvider;
        x += ThirdProvider;
        Execute(x);
    }

    static void Execute(Func<bool> providers)
    {
        foreach (Func<bool> provider in providers.GetInvocationList())
        {
            if (provider())
            {
                Console.WriteLine("Done!");
                return;
            }
        }
        Console.WriteLine("No provider succeeded");
    }

    static bool FirstProvider()
    {
        Console.WriteLine("First provider returning false");
        return false;
    }

    static bool SecondProvider()
    {
        Console.WriteLine("Second provider returning true");
        return true;
    }

    static bool ThirdProvider()
    {
        Console.WriteLine("Third provider returning false");
        return false;
    }
}
Rather than publishing a message "I'm hungry" to the providers, publish "I need to know the current stock available". Then listen until you have enough information to make a request to the correct food service for what you need. This way the logic of what-makes-me-full is not spread amongst the food services. It seems cleaner to me.
Message passing isn't baked into .NET directly; you need to implement your own message forwarding by hand. Fortunately, the "chain of responsibility" design pattern is designed specifically for the problem you're trying to solve, namely forwarding a message down a chain until someone can handle it.
Useful resources:
Chain of Responsibility on Wikipedia
C# implementation on DoFactory.com
I'm not sure if you really need a priority event. Anyway, let's suppose we want to code that just for fun.
The .NET Framework has no support for such a peculiar construct. Let me show one possible approach to implement it.
The first step would be to create a custom store for event delegates (as described here);
Internally, the custom event store could work like a priority queue;
The specific EventArgs used would be HandledEventArgs (or a subclass of it). This would allow the event provider to stop calling handlers after one of them sets the event as Handled;
The next step is the hardest: how do you tell the event provider what the priority of the event handler being added is?
Let me clarify the problem. Usually, adding a handler looks like this:
eater.GotHungry += mealsService.Someone_GotHungry;
eater.GotHungry += juiceService.Someone_GotHungry;
eater.GotHungry += fruitService.Someone_GotHungry;
The += operator will only receive a delegate; it's not possible to pass a second priority parameter. There are several possible solutions to this problem. One would be to define the priority in a custom attribute set on the event handler method. A second approach is discussed in the question.
Compared to the chain of responsibility implementation at dofactory.com, this approach has some advantages. First, the handlers (your food services) do not need to know each other. Also, handlers can be added and removed dynamically at any time. Of course, you could implement a variation of a chain of responsibility that has these advantages too.
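A rough sketch of the attribute idea; HandlerPriorityAttribute is invented for illustration, and a lower number means a higher priority:

using System;
using System.Collections.Generic;
using System.ComponentModel;

[AttributeUsage(AttributeTargets.Method)]
public class HandlerPriorityAttribute : Attribute
{
    public int Priority { get; private set; }
    public HandlerPriorityAttribute(int priority) { Priority = priority; }
}

public class Eater
{
    // Custom store: handlers are kept sorted by priority instead of in a multicast delegate.
    private readonly List<KeyValuePair<int, EventHandler<HandledEventArgs>>> handlers =
        new List<KeyValuePair<int, EventHandler<HandledEventArgs>>>();

    public event EventHandler<HandledEventArgs> GotHungry
    {
        add
        {
            var attr = (HandlerPriorityAttribute)Attribute.GetCustomAttribute(
                value.Method, typeof(HandlerPriorityAttribute));
            int priority = attr != null ? attr.Priority : int.MaxValue;
            handlers.Add(new KeyValuePair<int, EventHandler<HandledEventArgs>>(priority, value));
            handlers.Sort((a, b) => a.Key.CompareTo(b.Key));
        }
        remove { handlers.RemoveAll(pair => pair.Value == value); }
    }

    public void OnGotHungry()
    {
        var args = new HandledEventArgs();
        foreach (var pair in handlers)
        {
            pair.Value(this, args);
            if (args.Handled) return; // a provider has served us; stop here
        }
    }
}

// A service would then mark its handler, e.g.:
// [HandlerPriority(1)] public void Someone_GotHungry(object sender, HandledEventArgs e) { ... }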
I don't think delegates are the proper solution to your problem. Delegates are a low-level service provided by C# for relatively tightly coupled events between components. If I understand your question properly (it is worded a little oddly, so I am not sure I clearly understand your problem), then I think what you need is a mediated consumer/provider.
Rather than having your consumers directly consume the meal, juice, and fruit providers, have them request a food item from a central mediator. The mediator would then be responsible for determining what is available and what should be provided to the consumer. The mediator would be a subscriber to events published by all three services. Whenever stock is added/updated in the Meal, Juice, or Fruit services, they would publish their current stock to all subscribers. The mediator, being a subscriber, would track current stock reductions on its own and be able to determine for itself whether to send a meal, juice, or fruit to a food consumer when a get-food request is made.
For example:
    |---------- (GetFoodResponse) --------------|
    V                                           |
FoodConsumer ---- (GetFoodRequest) ------> FoodProvider <-----> [ Local Stock Data ]
                                                ^
                                                |
MealService ---- (PublishStockMessage) --------|
                                                ^
JuiceService --- (PublishStockMessage) --------|
                                                ^
FruitService --- (PublishStockMessage) --------|
The benefits of such a solution are that you reduce coupling, properly segregate responsibility, and solve your problem. For one, your consumers only need to consume a single service: the FoodProvider. The FoodProvider subscribes to publications from the other three services and is responsible for determining what food to provide to a consumer. The three food services are not responsible for anything related to the hunger of your food consumers; they are only responsible for providing food and tracking the stock of the food they provide. You also gain the ability to distribute the various components: your consumers, the food provider, and each of the three food services can all be hosted on different physical machines if required.
However, to achieve the above benefits, your solution becomes more complex. You have more parts, and they need to be connected to each other properly. You have to publish and subscribe to messages, which requires some kind of supporting infrastructure (WCF, MSMQ, a third-party ESB, a custom solution, etc.). You also have duplication of data, since the food provider tracks stock on its own in addition to each of the food services, which could lead to discontinuity in available stock. This can be mitigated if you manage stock updates properly, but that would also increase complexity.
If you can handle the additional complexity, a solution like this would ultimately be more flexible and adaptable than a more tightly coupled solution that uses components and C# events in a local-deployment-only scenario (as in your original example).
I am having a bit of trouble understanding your analogy here (it sounds like you're obscuring the actual intent of the software), but I think I have done something like what you are describing.
In my case the software was telemarketing software, and each of the telemarketers had a calling queue. When that queue raises the event signifying that it is nearing empty, the program grabs a list of available people to call and then passes them through a chain of responsibility which pushes the available calls into the telemarketer's queue, like so:
Each element in the chain acts as a priority filter: the first link will grab all of the people who have never been called before, and if it finishes (i.e. went through all of the people who have never been called) without filling up the queue, it passes the remaining list of people to the next link in the chain, which applies another filter/search. This continues until the last link in the chain, which just fires off an e-mail to an administrator indicating that there are no available people to be called and a human needs to intervene quickly before the telemarketers have no work to do.
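That chain might be sketched like this; Person and the filter criteria are simplified placeholders:

using System.Collections.Generic;
using System.Linq;

public class Person
{
    public int CallCount { get; set; }
    public int DaysSinceLastCall { get; set; }
}

public abstract class CallFilter
{
    private CallFilter next;
    public CallFilter SetNext(CallFilter next) { this.next = next; return next; }

    public void Fill(Queue<Person> queue, List<Person> available, int capacity)
    {
        // Take as many matching people as will fit, then pass the rest down the chain.
        foreach (var person in available.Where(Matches).Take(capacity - queue.Count).ToList())
        {
            queue.Enqueue(person);
            available.Remove(person);
        }

        if (queue.Count < capacity && next != null)
        {
            next.Fill(queue, available, capacity);
        }
    }

    protected abstract bool Matches(Person person);
}

public class NeverCalledFilter : CallFilter
{
    protected override bool Matches(Person person) { return person.CallCount == 0; }
}

public class StaleContactFilter : CallFilter
{
    protected override bool Matches(Person person) { return person.DaysSinceLastCall > 30; }
}

// Wiring:
// var chain = new NeverCalledFilter();
// chain.SetNext(new StaleContactFilter());
// The real last link would e-mail an administrator instead of filtering.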