Calling commands from within another command Handle() method - c#

Hi, I am using the Simple Injector DI library and have been following some really interesting material about an architectural model designed around the command pattern:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
The container will manage the lifetime of the UnitOfWork, and I am using commands to perform specific operations against the database.
My question is: if I have a command, for example an AddNewCustomerCommand, which in turn calls another service (i.e. sends a text message), is this acceptable from a design standpoint, or should this be done at a higher level, and if so, how best to do it?
Example code is below:
public class AddNewBusinessUnitHandler
    : ICommandHandler<AddBusinessUnitCommand>
{
    private readonly IUnitOfWork uow;
    private readonly ICommandHandler<OtherServiceCommand> otherHandler;

    public AddNewBusinessUnitHandler(IUnitOfWork uow,
        ICommandHandler<OtherServiceCommand> otherHandler)
    {
        this.uow = uow;
        this.otherHandler = otherHandler;
    }

    public void Handle(AddBusinessUnitCommand command)
    {
        var businessUnit = new BusinessUnit()
        {
            Name = command.BusinessUnitName,
            Address = command.BusinessUnitAddress
        };

        var otherCommand = new OtherServiceCommand()
        {
            welcomePostTo = command.BusinessUnitName
        };

        uow.BusinessUnitRepository.Add(businessUnit);
        this.otherHandler.Handle(otherCommand);
    }
}

It depends on your architectural view of (business) commands, but it is quite natural to have a one to one mapping between a Use Case and a command. In that case, the presentation layer should (during a single user action, such as a button click) do nothing more than create the command and execute it. Furthermore, it should do nothing more than execute that single command, never more. Everything needed to perform that use case, should be done by that command.
That said, sending text messages, writing to the database, doing complex calculations, communicating with web services, and everything else needed to fulfill the business's needs should be done within the context of that command (or perhaps queued to happen later). Not before, not after, since it is that command that represents the requirements, in a presentation-agnostic way.
This doesn't mean that the command handler itself should do all this. It is quite natural to move much of this logic into other services that the handler depends on. So I can imagine your handler depending on an ITextMessageSender interface, for instance.
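For illustration only, here is a rough sketch of what that could look like; the ITextMessageSender interface and the welcome-message wording are assumptions made up for this example, not something prescribed by the articles above:

// Hypothetical abstraction for sending text messages; the handler no longer
// needs to know how (or whether) the message actually gets delivered.
public interface ITextMessageSender
{
    void Send(string recipient, string message);
}

public class AddNewBusinessUnitHandler : ICommandHandler<AddBusinessUnitCommand>
{
    private readonly IUnitOfWork uow;
    private readonly ITextMessageSender textMessageSender;

    public AddNewBusinessUnitHandler(IUnitOfWork uow, ITextMessageSender textMessageSender)
    {
        this.uow = uow;
        this.textMessageSender = textMessageSender;
    }

    public void Handle(AddBusinessUnitCommand command)
    {
        uow.BusinessUnitRepository.Add(new BusinessUnit
        {
            Name = command.BusinessUnitName,
            Address = command.BusinessUnitAddress
        });

        // The welcome text goes through the injected service instead of a nested handler.
        this.textMessageSender.Send(command.BusinessUnitName, "Welcome aboard!");
    }
}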
Another discussion is whether command handlers should depend on other command handlers. When you look at use cases, it is not unlikely that big use cases consist of multiple smaller sub use cases, so in that sense it isn't strange. Again, there will be a one to one mapping between commands and use cases.
However, note that having a deep dependency graph of nested command handlers that depend on each other can make it harder to navigate through the code, so take a good look at this. It might be better to inject an ITextMessageSender instead of an ICommandHandler<SendTextMessageCommand>, for instance.
Another downside of allowing handlers to nest is that it makes the infrastructural stuff a bit more complex. For instance, when wrapping command handlers with a decorator that adds transactional behavior, you need to make sure that the nested handlers run in the same transaction as the outermost handler. I happened to help a client of mine with this today. It's not incredibly hard, but it takes a little time to figure out. The same holds for things like deadlock detection, since this also runs at the boundary of the transaction.
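To make that concrete, here is a minimal, hedged sketch of such a transactional decorator; the class name and the use of TransactionScope are assumptions for the example rather than the exact infrastructure from the articles. Because TransactionScopeOption.Required joins an ambient transaction when one already exists, nested handlers wrapped in the same decorator run inside the outermost handler's transaction:

using System.Transactions;

// Wraps any command handler and runs it inside a transaction.
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        // Required = join the ambient transaction if there is one, otherwise start a new one.
        using (var scope = new TransactionScope(TransactionScopeOption.Required))
        {
            this.decorated.Handle(command);
            scope.Complete();
        }
    }
}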
Besides, deadlock detection is a great example to showcase the power of this command/handler pattern, since almost every other architectural style makes it impossible to plug in this behavior. Take a look at the DeadlockRetryCommandHandlerDecorator class in that article to see an example.

Does the need to make the code simpler justify the use of wrong abstractions?

Suppose we have a CommandRunner class that runs Commands. When a Command is created, it's kept in the processingQueue for processing. If the execution of the Command finishes with errors, the Command is moved to the faultedQueue for later processing, but when everything is OK the Command is moved to the archiveQueue. The archiveQueue is not going to be processed in any way.
The CommandRunner is something like this:
class CommandRunner
{
    private readonly IQueue<Command> processingQueue;
    private readonly IQueue<Command> faultedQueue;
    private readonly IQueue<Command> archiveQueue;

    public CommandRunner(IQueue<Command> processingQueue,
                         IQueue<Command> faultedQueue,
                         IQueue<Command> archiveQueue)
    {
        this.processingQueue = processingQueue;
        this.faultedQueue = faultedQueue;
        this.archiveQueue = archiveQueue;
    }

    public void RunCommands()
    {
        while (processingQueue.HasItems)
        {
            var current = processingQueue.Dequeue();
            var result = current.Run();

            if (result.HasError)
                current.MoveTo(faultedQueue);
            else
                current.MoveTo(archiveQueue);
            ...
        }
    }
}
The CommandRunner receives the three dependencies as a PersistentQueue. The PersistentQueue is responsible for the long-term storage of the Commands, so we free the CommandRunner from handling this.
And the only purpose of the archiveQueue is to keep the design homogeneous, to keep the CommandRunner persistence-ignorant and with few dependencies.
For example, we can imagine a property like this:
IEnumerable<Command> AllCommands
{
    get
    {
        return Enumerate(archiveQueue).Union(processingQueue).Union(faultedQueue);
    }
}
Many portions of the class need to do so (handle the archive as a Queue to make the code simpler, as shown above).
Does it make sense to use a Queue even if it's not the best abstraction, or do I have to use another abstraction for the archive concept? What are the alternatives to meet these requirements?
Keep in mind that code, especially running code, usually gets tangled and messy as time passes. To combat this, good names, good design, and meaningful comments come into play.
If you aren't going to process the archiveQueue, and it's just storage for messages that have been successfully processed, you can always store it as a different type (list, collection, set, whatever suits your needs), and then choose one of the following two:
Keep the name archiveQueue and change the underlying type. I would leave a comment where it's defined (or injected) saying: Notice that this might not be an actual queue. Name is for consistency reasons only.
Change the name to archiveRepository or something similar, while keeping the queue type. Obviously, since it's still a queue, you'll leave a comment saying: Notice, this is actually a queue.
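As a rough sketch of the direction both options point at (an archive that is only stored, never dequeued), reusing the Command, IQueue<Command>, and Enumerate helper already shown in the question; the archive field name and the ICollection<Command> type here are illustrative assumptions only:

using System.Collections.Generic;
using System.Linq;

class CommandRunner
{
    private readonly IQueue<Command> processingQueue;
    private readonly IQueue<Command> faultedQueue;
    // Deliberately not a queue: archived commands are only ever stored and enumerated.
    private readonly ICollection<Command> archive;

    public CommandRunner(IQueue<Command> processingQueue,
                         IQueue<Command> faultedQueue,
                         ICollection<Command> archive)
    {
        this.processingQueue = processingQueue;
        this.faultedQueue = faultedQueue;
        this.archive = archive;
    }

    // The "all commands" view no longer has to pretend the archive is a queue.
    public IEnumerable<Command> AllCommands
    {
        get
        {
            return archive
                .Union(Enumerate(processingQueue))
                .Union(Enumerate(faultedQueue));
        }
    }
}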
Another thing to keep in mind is that if you have n people working on your code base, you'll probably get n+1 different preferences about how it should be done :)
A queue is a useful structure when you need to take care of the order of the items inside it. If your post-processing of commands needs to know the order in which the commands ran, then a queue can be a good choice.
If you don't need information about the order of the commands, maybe you can use a List (in the System.Collections.Generic namespace).
I think your choice is good; in the same case, I'd use queues. We have a good example in OS design principles: inside the OS (in the kernel), processes are queued for execution. Clearly the OS queues are more complicated because they have other variables in mind, like priority and CPU utilization, but we can learn from the use of queues as data structures in process management.

Testable code: Attach event handler in constructor

As far as I understand, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far. But I came across a problem and I'm not sure of the best way to solve it. See the code sample below.
class SomeClass
{
    private IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;

        // (1) attach event handler in ctor.
        this.classWithEvent.Event += OnEvent;
    }

    public void ActivateEventHandling()
    {
        // (2) attach event handler in method
        this.classWithEvent.Event += OnEvent;
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}
For me option (1) sounds fine, but it breaks the rule that the constructor should only assign fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most. Therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in the ctor is fine.
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit the call to ActivateEventHandling you end up with an improperly initialized class, and you need to transport the knowledge that ActivateEventHandling must be called via comments or some other means, which makes the class harder to use and probably results in usages of the class that you never tested: you called ActivateEventHandling in your tests, but an uninformed user omitting the activation didn't, and you have certainly not tested your class with ActivateEventHandling not called, right? :)
Edit: There may be alternative approaches here which are worth mentioning.
Depending on the paradigm it may be wise to avoid wiring events in the class at all. I need to relativize my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class can be used more flexibly if it does not have a dependency on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each-other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it actually does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor, to avoid a dependency. This concept is similar to neurons in a brain, as they also emit messages to the environment and expect answers without knowing their neighbouring neurons.
However, I regard a dependency on an interface, if it is injected by some means of a dependency injection container, as just another paradigm and not any less valid.
The non-trivial task of having the environment wire up all classes on start may lead to runtime errors (which are mitigated by very good coverage with functional and integration tests, which may be a hard task for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events manually on startup.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the clean code developer initiative (sorry, German only)) has written software that performs the wiring automatically, using a concept called "event based components" (not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option(1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if, in other methods of the class, you depend on always being subscribed to some object, you'd better do it in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
The single responsibility principle dictates that the wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful, and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Pseudo code:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}

public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { ... }
}

public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note, the above is pseudo code, and its only purpose is to describe the idea.
How your bootstrapper is actually implemented depends on many things. It can be your "main" method, or your global.asax, or whatever you have in place to actually configure and prepare your application.
The idea is that whatever is responsible for preparing the application to run should compose it, not the classes themselves, as they should be as single-purpose as possible and should not care too much about how and where they are used.
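For completeness, a tiny sketch of what such a composition root might look like in a console application; the concrete ClassWithEvent type is an assumption invented for this example:

public static class Program
{
    public static void Main()
    {
        // Composition root: create the parts and wire them together here,
        // so the classes themselves stay free of wiring knowledge.
        IEventProvider provider = new ClassWithEvent();   // assumed concrete implementation
        IEventResponder responder = new EventResponder();

        new Bootstrapper().WireEvent(provider, responder);

        // From here on, the provider can raise its event and the responder reacts,
        // without either class knowing about the other.
    }
}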

Refactoring a large, complex user-interface

I have a big winform with 6 tabs on it, filled with controls. The first tab is the main tab, the other 5 tabs are part of the main tab. In database terms, the other 5 tabs have a reference to the main tab.
As you can imagine, my form is becoming very large and hard to maintain. So my question is, how do you deal with large UI's? How do you handle that?
Consider your aim before you start. You want to aim for SOLID principles, in my opinion. This means, amongst other things, that a class/method should have a single responsibility. In your case, your form code is probably coordinating UI stuff and business rules/domain methods.
Breaking down into usercontrols is a good way to start. Perhaps in your case each tab would have only one usercontrol, for example. You can then keep the actual form code very simple, loading and populating usercontrols. You should have a Command Processor implementation that these usercontrols can publish/subscribe to, to enable inter-view conversations.
Also, research UI design patterns. M-V-C is very popular and well-established, though difficult to implement in stateful desktop-based apps. This has given rise to M-V-P/passive view and M-V-VM patterns. Personally I go for MVVM but you can end up building a lot of "framework code" when implementing in WinForms if you're not careful - keep it simple.
Also, start thinking in terms of "Tasks" or "Actions" therefore building a task-based UI rather than having what amounts to a create/read/update/delete (CRUD) UI. Consider the object bound to the first tab to be an aggregate root, and have buttons/toolbars/linklabels that users can click on to perform certain tasks. When they do so, they may be navigated to a totally different page that aggregates only the specific fields required to do that job, therefore removing the complexity.
Command Processor
The Command Processor pattern is basically a synchronous publisher/consumer pattern for user-initiated events. A basic (and fairly naive) example is included below.
Essentially what you're trying to achieve with this pattern is to move the actual handling of events from the form itself. The form might still deal with UI issues such as hiding/[dis/en]abling controls, animation, etc, but a clean separation of concerns for the real business logic is what you're aiming for. If you have a rich domain model, the "command handlers" will essentially coordinate calls to methods on the domain model. The command processor itself gives you a useful place to wrap handler methods in transactions or provide AOP-style stuff like auditing and logging, too.
public class UserForm : Form
{
    private ICommandProcessor _commandProcessor;

    public UserForm()
    {
        // Poor-man's IoC, try to avoid this by using an IoC container
        _commandProcessor = new CommandProcessor();
    }

    private void saveUserButton_Click(object sender, EventArgs e)
    {
        _commandProcessor.Process(new SaveUserCommand(GetUserFromFormFields()));
    }
}

public class CommandProcessor : ICommandProcessor
{
    public void Process(object command)
    {
        ICommandHandler[] handlers = FindHandlers(command);

        foreach (ICommandHandler handler in handlers)
        {
            handler.Handle(command);
        }
    }
}
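The snippet above leaves FindHandlers undefined. Purely as an illustration, here is one way it could be filled in with a hand-rolled registry keyed on the command type; the Register method and the dictionary-based lookup are assumptions for this sketch, not part of any real framework:

using System;
using System.Collections.Generic;

public class CommandProcessor : ICommandProcessor
{
    // Maps a command type to the handlers registered for it.
    private readonly Dictionary<Type, List<ICommandHandler>> _handlers =
        new Dictionary<Type, List<ICommandHandler>>();

    public void Register<TCommand>(ICommandHandler handler)
    {
        if (!_handlers.TryGetValue(typeof(TCommand), out var list))
        {
            list = new List<ICommandHandler>();
            _handlers[typeof(TCommand)] = list;
        }
        list.Add(handler);
    }

    public void Process(object command)
    {
        foreach (ICommandHandler handler in FindHandlers(command))
        {
            handler.Handle(command);
        }
    }

    private ICommandHandler[] FindHandlers(object command)
    {
        return _handlers.TryGetValue(command.GetType(), out var handlers)
            ? handlers.ToArray()
            : Array.Empty<ICommandHandler>();
    }
}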
The key to handling a large UI is a clean separation of concerns and encapsulation. In my experience, it's best to keep the UI as free of data and functionality as possible: the Model-View-Controller is a famous (but rather hard to apply) pattern to achieve this.
As the UI tends to get cluttered by the UI code alone, it's best to separate all other code from the UI and delegate everything that doesn't concern the UI directly to other classes (e.g. delegating the handling of user input to controller classes). You could apply this by having a controller class for each tab, but this depends on how complicated each tab is. Maybe it's better to break a single tab down into several controller classes and compose them into a single controller class for the tab, for easier handling.
I found a variation of the MVC pattern to be useful: The passive view. In this pattern, the view holds nothing more than the hierarchy and state of the UI components. Everything else is delegated to and controlled by controller classes which figure out what to do on user input.
Of course, it also helps to break the UI itself down into well organized and encapsulated components.
I would suggest you read about the CAB (Composite UI Application Block) from Microsoft patterns & practices, which features the following patterns: Command Pattern, Strategy Pattern, MVP Pattern, etc.
Microsoft Practice and patterns
Composite UI Application Block

How to break down large 'macro' classes?

One application I work on does only one thing, seen from the outside world: it takes a file as input and after ~5 minutes spits out another file.
What happens inside is actually a sequential series of actions. The application is, in our opinion, structured well, because each action is like a small box without too many dependencies.
Usually some later actions use information from previous ones, and just a few could be executed in parallel; for the sake of simplicity we prefer to keep the execution sequential.
Now the problem is that the function that executes all these actions is like a batch file: a long list of calls to different functions with different arguments. So the code looks like this:
main
{
    try
    {
        result1 = Action1(inputFile);
        result2 = Action2(inputFile);
        result3 = Action3(result2.value);
        result4 = Action4(result1.value, inputFile);
        ... // You get the idea. There is no pattern to the passed parameters.
        resultN = ActionN(parameters);

        write output
    }
    catch
    {
        something went wrong, display the error
    }
}
How would you model the main function of this application so it is not just a long list of commands?
Not everything needs to fit a clever pattern. There are few more elegant ways to express a long series of imperative statements than as, well, a long series of imperative statements.
If there are certain kinds of flexibility you feel you are currently lacking, express them, and we can try to propose solutions.
If there are certain clusters of actions and results that are re-used often, you could pull them out into new functions and build "aggregate" actions from them.
You could look into dataflow languages and libraries, but I expect the gain to be small.
Not sure if it's the best approach, but you could have an object that stores all the results, and you would pass it to each method in turn. Every method would read the parameters it needs and write its result there. You could then have a collection of actions (either as delegates or objects implementing an interface) and call them in a loop.
class Results
{
    public int Result1 { get; set; }
    public string Result2 { get; set; }
    …
}

var actions = new Action<Results>[] { Action1, Action2, … };

Results results = new Results();
foreach (var action in actions)
    action(results);
You can think of implementing a Sequential Workflow from Windows Workflow
First of all, this solution is far from bad. If the actions are disjunct, I mean there are no global parameters or other hidden dependencies between different actions or between actions and the environment, it's a good solution. It's easy to maintain and read, and when you need to extend the functionality you just have to add new actions; when the "quantity" changes, you just have to add or remove lines from the macro sequence. If there's no need to change the process chain frequently: don't move!
If it's a system where the implementation of the actions doesn't change often, but their order and parameters do, you may design a simple script language and transform the macro class into such a script. This script should be maintained by someone other than you, someone who is familiar with the problem domain at the level of your "actions". That way, they can assemble the application using the script language without your assistance.
One nice approach to that kind of problem splitting is dataflow programming (a.k.a. flow-based programming). In dataflow programming, there are pre-written components. Components are black boxes (from the view of the application developer); they have consumer (input) and producer (output) ports, which can be connected to form a processing network, which is then the application. If there's a good set of components for a domain, many applications can be created without programming new components. Also, components can be built out of other components (these are called composite components). A minimal sketch of the idea follows the links below.
Wikipedia (good starting point):
http://en.wikipedia.org/wiki/Dataflow_programming
http://en.wikipedia.org/wiki/Flow-based_programming
JPM's site (book, wiki, everything):
http://jpaulmorrison.com/fbp/
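For illustration only, here is a minimal, hedged sketch of the dataflow idea in C# terms: each component reads from an input port and writes to an output port, and the "application" is just the wiring of ports. The port and component types and names are invented for this example and are not taken from any particular FBP library:

using System;
using System.Collections.Concurrent;

// A "port" is just a queue of messages connecting two components.
public sealed class Port<T>
{
    private readonly ConcurrentQueue<T> queue = new ConcurrentQueue<T>();
    public void Send(T item) => queue.Enqueue(item);
    public bool TryReceive(out T item) => queue.TryDequeue(out item);
}

// A component is a black box with one input and one output port.
public sealed class UpperCaseComponent
{
    public Port<string> In { get; } = new Port<string>();
    public Port<string> Out { get; } = new Port<string>();

    public void Run()
    {
        while (In.TryReceive(out var line))
            Out.Send(line.ToUpperInvariant());
    }
}

public static class Demo
{
    public static void Main()
    {
        var component = new UpperCaseComponent();
        component.In.Send("hello");    // the "network" feeds the input port
        component.Run();

        if (component.Out.TryReceive(out var result))
            Console.WriteLine(result); // HELLO
    }
}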
I think bigger systems must have that split point you describe as the "macro". Even games have that point, e.g. FPS games have a 3D engine and a game logic script, or there's ScummVM, which is the same.

C# - Priority based delegates and chaining in delegates possible? Or Windows Workflow?

Whenever I feel hungry I will publish that I am hungry. This will be notified to the service providers, say (MealsService, FruitService, JuiceService). (These service providers know what to serve.)
But the serving priority is the concern. Priority here means my first choice is MealsService; when enough meal is available, my need ends with MealsService. To verify that enough meal is available, the MealsService raises the event "updateMeTheStockStatus" to the "MealsServiceStockUpdateListener".
The "MealsServiceStockUpdateListener" will only reply back to "MealsService" . No other Service providers ( FruitService,JuiceService ) will be notified by the "MealsServiceStockUpdateListener" .If there is no sufficient stock then only the MealsService passes notification to the JuiceService (as it is the second priority).As usual it checks the stock.If stock is not sufficient it passes message to FruitService,so the flow continues like this.
How can I technically implement this?
Does any implementation like priority-based delegates and delegate chaining make sense?
(Somebody! Please reframe this for good readability.)
Update: In this model there is no direct communication between the "StockUpdateListener" and "me". Only the "Service Providers" will communicate with me.
Like other answerers, I'm not entirely convinced that an event is the way forward, but let's go along with it for the moment.
It seems to me that the business with the MealsServiceStockUpdateListener is a red herring really - you're just trying to execute some event handlers but not others. This sort of thing crops up elsewhere when you have a "BeforeXXX" event which allows cancellation, or perhaps some sort of exception handling event.
Basically you need to get at each of your handlers separately. There are two different ways of doing that - either you can use a normal multicast delegate and call GetInvocationList() or you can change your event declaration to explicitly keep a list of handlers:
private List<EventHandler> handlers = new List<EventHandler>();

public event EventHandler MealRequired
{
    add { handlers.Add(value); }
    remove
    {
        int index = handlers.LastIndexOf(value);
        if (index != -1)
        {
            handlers.RemoveAt(index);
        }
    }
}
These two approaches are not quite equivalent - if you subscribe with a delegate instance which is already a compound delegate, GetInvocationList will flatten it but the List approach won't. I'd probably go with GetInvocationList myself.
Now, the second issue is how to detect when the meal has been provided. Again, there are two approaches. The first is to use the normal event handler pattern, making the EventArgs subclass in question mutable. This is the approach that HandledEventArgs takes. The second is to break the normal event pattern, and use a delegate that returns a value which can be used to indicate success or failure (and possibly other information). This is the approach that ResolveEventHandler takes. Either way, you execute the delegates in turn until one of them satisfies your requirements. Here's a short example (not using events per se, but using a compound delegate):
using System;

public class Test
{
    static void Main(string[] args)
    {
        Func<bool> x = FirstProvider;
        x += SecondProvider;
        x += ThirdProvider;
        Execute(x);
    }

    static void Execute(Func<bool> providers)
    {
        foreach (Func<bool> provider in providers.GetInvocationList())
        {
            if (provider())
            {
                Console.WriteLine("Done!");
                return;
            }
        }
        Console.WriteLine("No provider succeeded");
    }

    static bool FirstProvider()
    {
        Console.WriteLine("First provider returning false");
        return false;
    }

    static bool SecondProvider()
    {
        Console.WriteLine("Second provider returning true");
        return true;
    }

    static bool ThirdProvider()
    {
        Console.WriteLine("Third provider returning false");
        return false;
    }
}
Rather than publish a message "I'm hungry" to the providers, publish "I need to know current stock available". Then listen until you have enough information to make a request to the correct food service for what you need. This way the logic of what-makes-me-full is not spread amongst the food services... It seems cleaner to me.
Message passing isn't baked into .NET directly; you need to implement your own message forwarding by hand. Fortunately, the "chain of responsibility" design pattern is designed specifically for the problem you're trying to solve, namely forwarding a message down a chain until someone can handle it.
Useful resources:
Chain of Responsibility on Wikipedia
C# implementation on DoFactory.com
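For illustration, a minimal sketch of a chain of responsibility for this scenario; the MealsService/JuiceService/FruitService classes, the HasStock check, and the hard-coded stock values are assumptions invented for this example:

using System;

// Each provider either handles the request or forwards it to the next one in the chain.
public abstract class FoodProvider
{
    private FoodProvider next;

    public FoodProvider SetNext(FoodProvider provider)
    {
        next = provider;
        return provider;
    }

    public void Handle(string request)
    {
        if (HasStock())
        {
            Console.WriteLine($"{GetType().Name} serves: {request}");
        }
        else if (next != null)
        {
            next.Handle(request); // forward down the chain
        }
        else
        {
            Console.WriteLine("No provider could serve the request.");
        }
    }

    protected abstract bool HasStock();
}

public class MealsService : FoodProvider { protected override bool HasStock() => false; }
public class JuiceService : FoodProvider { protected override bool HasStock() => false; }
public class FruitService : FoodProvider { protected override bool HasStock() => true; }

public static class ChainDemo
{
    public static void Main()
    {
        var meals = new MealsService();
        meals.SetNext(new JuiceService()).SetNext(new FruitService());

        meals.Handle("I am hungry"); // falls through to FruitService in this example
    }
}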
I'm not sure if you really need a priority event. Anyways, let's suppose we want to code that just for fun.
The .NET Framework has no support for such a peculiar construct. Let me show one possible approach to implement it.
The first step would be to create a custom store for event delegates (as described here);
Internally, the custom event store could work like a priority queue;
The specific EventArgs used would be HandledEventArgs (or a subclass of it). This would allow the event provider to stop calling handlers after one of them sets the event as Handled;
The next step is the hardest: how to tell the event provider what the priority of the event handler being added is?
Let me clarify the problem. Usually, the adding of a handler is like this:
eater.GotHungry += mealsService.Someone_GotHungry;
eater.GotHungry += juiceService.Someone_GotHungry;
eater.GotHungry += fruitService.Someone_GotHungry;
The += operator will only receive a delegate. It's not possible to pass a second priority parameter. There might be several possible solutions to this problem. One would be to define the priority in a custom attribute set on the event handler method. A second approach is discussed in the question.
Compared to the chain of responsibility implementation at dofactory.com, this approach has some advantages. First, the handlers (your food services) do not need to know each other. Also, handlers can be added and removed at any time dynamically. Of course, you could implement a variation of a chain of responsibility that has these advantages too.
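As a rough sketch of the custom event store idea (the HandlerPriority attribute, the priority ordering, and the use of HandledEventArgs to stop further calls are all assumptions invented for this example, not an established .NET mechanism):

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// Hypothetical attribute used to declare a handler's priority (lower value = called first).
[AttributeUsage(AttributeTargets.Method)]
public sealed class HandlerPriorityAttribute : Attribute
{
    public int Priority { get; }
    public HandlerPriorityAttribute(int priority) => Priority = priority;
}

public class Eater
{
    // Custom event store: handlers are kept together with their priority.
    private readonly List<(int Priority, EventHandler<HandledEventArgs> Handler)> handlers =
        new List<(int Priority, EventHandler<HandledEventArgs> Handler)>();

    public event EventHandler<HandledEventArgs> GotHungry
    {
        add
        {
            var attribute = value.Method.GetCustomAttribute<HandlerPriorityAttribute>();
            int priority = attribute?.Priority ?? int.MaxValue;
            handlers.Add((priority, value));
        }
        remove
        {
            handlers.RemoveAll(entry => entry.Handler == value);
        }
    }

    public void PublishHunger()
    {
        var args = new HandledEventArgs();

        // Call handlers in priority order and stop as soon as one marks the event handled.
        foreach (var entry in handlers.OrderBy(entry => entry.Priority))
        {
            entry.Handler(this, args);
            if (args.Handled)
                return;
        }
    }
}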
I don't think delegates are the proper solution to your problem. Delegates are a low-level service provided by C# for relatively tightly coupled events between components. If I understand your question properly (It is worded a little oddly, so I am not sure I clearly understand your problem), then I think what you need is a mediated consumer/provider.
Rather than having your consumers directly consume the meal, juice, and fruit providers, have them request a food item from a central mediator. The mediator would then be responsible for determining what is available and what should be provided to the consumer. The mediator would be a subscriber to events published by all three services. Whenever stock is added/updated in the Meal, Juice, or Fruit services, they would publish their current stock to all subscribers. The mediator, being a subscriber, would track current stock reductions on its own, and be able to determine for itself whether to send a meal, juice, or fruit to a food consumer when a get food request is made.
For example:
      |---------- (GetFoodResponse) -----------|
      V                                        |
FoodConsumer ---- (GetFoodRequest) ------> FoodProvider <-----> [ Local Stock Data ]
                                               ^
                                               |
MealService  ---- (PublishStockMessage) -------+
                                               |
JuiceService ---- (PublishStockMessage) -------+
                                               |
FruitService ---- (PublishStockMessage) -------+
The benefits of such a solution are that you reduce coupling, properly segregate responsibility, and solve your problem. For one, your consumers only need to consume a single service...the FoodProvider. The FoodProvider subscribes to publications from the other three services, and is responsible for determining what food to provide to a consumer. The three food services are not responsible for anything related to the hunger of your food consumers, they are only responsible for providing food and tracking the stock of the food they provide. You also gain the ability to distribute the various components. Your consumers, the food provider, and each of the three food services can all be hosted on different physical machines if required.
However, to achieve the above benefits, your solution becomes more complex. You have more parts, and they need to be connected to each other properly. You have to publish and subscribe to messages, which requires some kind of supporting infrastructure (WCF, MSMQ, some third-party ESB, a custom solution, etc.). You also have duplication of data, since the food provider tracks stock on its own in addition to each of the food services, which could lead to discontinuity in the available stock. This can be mitigated if you manage stock updates properly, but that would also increase complexity.
If you can handle the additional complexity, ultimately a solution like this would be more flexible and adaptable than a more tightly connected solution that uses components and C# events in a local-deployment-only scenario (as in your original example).
I am having a bit of trouble understanding your analogy here, which sounds like you're obscuring the actual intent of the software, but I think I have done something like what you are describing.
In my case the software was telemarketing software, and each of the telemarketers had a calling queue. When that queue raises the event signifying that it is nearing empty, the program grabs a list of available people to call and then passes them through a chain of responsibility which pushes the available calls into the telemarketer's queue, like so:
Each element in the chain acts as a priority filter: the first link in the chain will grab all of the people who have never been called before, and if it finishes (ie. went through all of the people who have never been called) without filling up the queue, it will pass the remaining list of people to call to the next link in the chain - which will apply another filter/search. This continues until the last link in the chain which just fires off an e-mail to an administrator indicating that there are no available people to be called and a human needs to intervene quickly before the telemarketers have no work to do.
