Command Pattern - Purpose? - C#

After reading this: http://sourcemaking.com/design_patterns/command
I still don't quite understand why we need this.

The idea is that if commands are encapsulated as objects then those commands can be captured, stored, queued, replayed etc.
It also makes it easier for commands to know how to undo themselves (i.e. perform the reverse operation). A processed command can then be stored in a list, and that list can be 'undone' in reverse order to restore the state as it was before the commands were executed.
Also it decouples the sender of the command from the receiver. This can allow multiple things to generate the same command (a menu item and a button for example) and they will be handled in the same way.
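The points above (encapsulation, undo, decoupling the sender from the receiver) can be sketched with a tiny command interface. This is a minimal Java sketch (the C# version is essentially identical); Command, InsertTextCommand and History are all illustrative names, not a real API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

// One concrete command: it knows how to do its work AND how to reverse it.
class InsertTextCommand implements Command {
    private final StringBuilder document;
    private final String text;

    InsertTextCommand(StringBuilder document, String text) {
        this.document = document;
        this.text = text;
    }

    public void execute() { document.append(text); }
    public void undo()    { document.setLength(document.length() - text.length()); }
}

// The invoker (a button, a menu item, a macro player) only sees Command,
// so it is fully decoupled from what the command actually does.
class History {
    private final Deque<Command> done = new ArrayDeque<>();

    void run(Command c) { c.execute(); done.push(c); }
    void undoLast()     { if (!done.isEmpty()) done.pop().undo(); }
}

public class Demo {
    public static void main(String[] args) {
        StringBuilder doc = new StringBuilder();
        History history = new History();
        history.run(new InsertTextCommand(doc, "Hello"));
        history.run(new InsertTextCommand(doc, ", world"));
        history.undoLast();           // undone in reverse order: last command first
        System.out.println(doc);      // Hello
    }
}
```

A menu item and a button can both hand the same `InsertTextCommand` to `History.run`, which is the decoupling the answer describes.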

It's a good way to encapsulate asynchronous operations and keep their parameters and context in one place.
E.g. an HTTP request: you send a request over a socket and wait for the response to arrive. If your application is, say, a web browser, you don't want to block until the request is done; you want to move on. When the response arrives, you have to continue in the context where you stopped, e.g. reading the data and putting it into the right place (e.g. storing the downloaded image data somewhere for later rendering). Matching the response to the context it belongs to can become tricky if you have one big client class firing off multiple asynchronous operations. Responses might arrive in arbitrary order. Which response belongs to what? What should be done with each response? How should possible errors be handled? If those requests are encapsulated in commands, and each command only receives its own response, the command knows best how to continue from there and handle the response. If you have sequences of requests/responses, it's also much easier to keep track of the sequence's state. Commands can be grouped into composite commands (the Composite pattern).
The client passes everything needed to the command, and waits until the command finishes, reporting back either success or error.
Another big advantage is when using multi-threading: if all data needed for the operation is encapsulated in the command object, it's easy to move the command to another thread and have it executed there, without the usual locking headaches you get when sharing objects among threads. Create command, pass everything it needs to it (copy, not by reference), pass to other thread, synchronize only when receiving the result, done.
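That copy-then-hand-off idea can be sketched like this (Java; SumCommand is an invented example, not a real API). The command clones its input, so the worker thread never shares mutable state with the caller, and the only synchronization point is retrieving the result:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The command owns a private copy of its input data.
class SumCommand implements Callable<Long> {
    private final int[] data;

    SumCommand(int[] source) {
        this.data = source.clone();   // copy, not a reference
    }

    @Override
    public Long call() {
        long sum = 0;
        for (int v : data) sum += v;
        return sum;
    }
}

public class ThreadedCommandDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Move the command to another thread; no locks needed while it runs.
        Future<Long> result = pool.submit(new SumCommand(new int[] {1, 2, 3}));
        System.out.println(result.get());   // synchronize only when receiving the result
        pool.shutdown();
    }
}
```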

The command pattern separates the code that knows how to do some work from the code that knows when it needs to be done, and with what parameters.
The most obvious case is a button that knows when you click it, but doesn't know what work to do at that moment. The command pattern lets you pass a do-some-work object to the button, which invokes the object when it is clicked.
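A minimal sketch of that split (Java; this Button is a hypothetical stand-in, not a real toolkit widget). The button knows *when*, the passed-in object knows *what*:

```java
// The "do-some-work" object is just a Runnable handed to the button.
class Button {
    private final Runnable onClick;

    Button(Runnable onClick) { this.onClick = onClick; }

    void click() { onClick.run(); }   // the button invokes the work, knowing nothing about it
}

public class ButtonDemo {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Button save = new Button(() -> log.append("saved;"));
        save.click();
        save.click();
        System.out.println(log);   // saved;saved;
    }
}
```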

Basically, the Command pattern is a way to partially achieve "Function as object" in Java (or C#).
Since you can't just create a function (or method) and do whatever you want with it, like pass it as a parameter to some other function or keep it in a variable for later execution, this is the workaround:
You wrap some code in a class (this is your execute method).
Instantiate the class. Now, this object you have is "a function as an object".
You can pass the object as a parameter, keep it around or whatever.
Eventually, you'll want to call the execute method.
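Those steps can be sketched in pre-lambda Java style (all names here are illustrative): the "function" is wrapped in a class, instantiated, kept around, and only executed later.

```java
import java.util.ArrayList;
import java.util.List;

interface Task {
    void execute();
}

// Step 1: wrap some code in a class (execute() is that code).
class AppendTask implements Task {
    private final List<String> out;
    private final String message;

    AppendTask(List<String> out, String message) {
        this.out = out;
        this.message = message;
    }

    public void execute() { out.add(message); }
}

public class DeferredDemo {
    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        List<Task> pending = new ArrayList<>();

        // Steps 2-3: instantiate and keep the "functions as objects" around.
        pending.add(new AppendTask(out, "first"));
        pending.add(new AppendTask(out, "second"));

        // Step 4: eventually, call execute().
        for (Task t : pending) t.execute();
        System.out.println(out);
    }
}
```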

It describes a solution to a problem: mainly, that we want to issue commands without defining 30 methods over 8 classes to achieve it. By using the pattern mentioned, we issue a Command object and the receiving object is free to ignore it, or act on it in some way. The complexity of the Command object is implementation-defined, but this is a nice way to tell objects "hey, do this".
Additionally, because we have encapsulated this in an object, we can go further and queue commands, dispatch them at intervals we choose, and also revert them (provided, of course, that the object you send the command to can 'undo' a Command as well as 'do' it).
So, imagine a drawing package that allows you to add shapes to a canvas. Each time the user does this, a command can be issued:
m_Canvas.push_back(new Line(1.0f, 2.0f));
m_Canvas.push_back(new Line(3.5f, 3.1f));
m_Canvas.push_back(new Circle(2.0f, 3.0f, 1.5f));
and so on. This assumes Line and Circle are derived from a common Command base class.
Our renderer can use this canvas collection as a way of rendering, and undoing is simply a case of removing the last performed command. By tracking what the user undoes in a separate collection, we can also redo.
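The two-collection bookkeeping described above can be sketched with two stacks (a simplified Java sketch; shapes are reduced to strings here just to keep the example short):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// undo() moves the last command from 'done' to 'undone'; redo() moves it back.
class Canvas {
    private final Deque<String> done = new ArrayDeque<>();    // commands performed
    private final Deque<String> undone = new ArrayDeque<>();  // commands undone

    void draw(String shape) {
        done.push(shape);
        undone.clear();          // a new action invalidates the redo history
    }

    void undo() { if (!done.isEmpty()) undone.push(done.pop()); }
    void redo() { if (!undone.isEmpty()) done.push(undone.pop()); }

    int shapeCount() { return done.size(); }
}
```

The renderer would simply iterate `done` to draw the current picture.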

Related

Pausing Event Sourced System Commands with Global Variable

In my Event Sourced System, I have an endpoint that administration can use to rebuild read side Databases in the case of some read side inconsistency or corruption.
When this endpoint is hit, I would like to stall (or queue) the regular system commands so they cannot be processed. I want to do this so events are not emitted and read side updates are not made while rebuilding the data stores. If this happened, new (live) event updates could be processed in the middle of the rebuild and put the read side DB in an inconsistent state.
I was going to use a static class with some static properties (essentially mocking a global variable), but have read this is bad practice.
My questions are:
Why is this bad practice in OO design and C#?
What other solutions can I use to accomplish this class communication in place of this global variable?
Why is this bad practice in OO design and C#? (using global variables)
There is a lot of discussion about this in the community, but very briefly: it makes program state unpredictable.
What other solutions can I use to accomplish this class communication in place of this global variable?
You should not stop the command processing if you only need to rebuild a Readmodel. The write model should proceed as usual because it doesn't need data from the read side (unless there are Sagas involved). The clients need their commands processed, so the rebuilding should happen transparently.
Instead you should create another instance of the Readmodel that uses another (temporary) persistence (another database/table/collection/whatever) and use this to rebuild the state. Then, when the rebuilding is done you should replace the old/faulty instance with this new one.
For the transition to be as smooth as possible, the fresh Readmodel should subscribe to the event stream even before the rebuilding starts but it should not process any incoming event. Instead it should put them in a ring buffer, along with the events that are fetched from the Event store or Event log or whatever event source you are using.
The algorithm for processing events from this ring buffer should be: the oldest one is processed first. In this way, the new events generated by new commands are not processed until the old events (the ones generated before the rebuilding started) have been processed.
Now that you have a clean Readmodel that is processing the latest events (a caught-up Readmodel), you just need it to replace the faulty Readmodel somehow, i.e. you replace it in the composition root of your application (the dependency injection container). The faulty Readmodel can then be discarded.
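One way to sketch the oldest-first rule (a Java sketch using a priority queue ordered by a global sequence number rather than a literal ring buffer; Event and RebuildBuffer are illustrative names, not a real framework):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Historical events (replayed from the event store) and live events (from the
// new subscription) land in one buffer, ordered by their sequence number.
record Event(long sequence, String payload) {}

class RebuildBuffer {
    private final PriorityQueue<Event> buffer =
        new PriorityQueue<>(Comparator.comparingLong(Event::sequence));
    private final List<String> applied = new ArrayList<>();

    void enqueue(Event e) { buffer.add(e); }

    // The oldest event is always processed first, so live events generated
    // during the rebuild wait until the replay has caught up past them.
    void drain() {
        while (!buffer.isEmpty()) applied.add(buffer.poll().payload);
    }

    List<String> applied() { return applied; }
}
```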

CQRS How to get aggregates final version?

Say you have a command that can potentially generate multiple events on an aggregate. How do you figure out what the final version actually is, so that you don't read from the read model until you know all the events have been processed?
ServiceLocator.CommandBus.Send(new SomeCommand(..., currentVersion));
Using a command bus I see no obvious ways of getting a return value stating the new aggregate version.
Suggestions?
As far as I can tell
You are on exactly the right track
Yup, the command bus is getting in your way
Since the command bus isn't allowing you to pass back the information that you want, you'll have to pass it forward, then query it. The command handler writes the information you want somewhere, and you query it after getting the signal that the command has completed.
In other words, you can think about the application itself as an abstraction where you send it commands, the commands update version numbers, and you query those version numbers. This abstraction has been split into separate responsibilities -- the command responsibility is implemented via the command bus, and the query responsibility by... well, that's the bit to work out.
Each of your command messages should have a unique identifier (you are going to want something like that for idempotency anyway). After the events are successfully persisted, the command handler writes the new version number into a key-value store, using the command id as the key.
In your caller, you block until that entry is available; then read it from the store and move on.
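A rough in-process sketch of that "write under the command id, block until the entry appears" idea (Java; VersionStore is an illustrative name, and a real system would put this behind a shared store rather than in memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class VersionStore {
    private final Map<String, Long> versions = new ConcurrentHashMap<>();
    private final Map<String, CountDownLatch> latches = new ConcurrentHashMap<>();

    // Called by the command handler after the events are persisted.
    void put(String commandId, long newVersion) {
        versions.put(commandId, newVersion);
        latches.computeIfAbsent(commandId, k -> new CountDownLatch(1)).countDown();
    }

    // Called by the client: blocks until the entry is available, or times out.
    Long await(String commandId, long timeoutMs) throws InterruptedException {
        CountDownLatch latch =
            latches.computeIfAbsent(commandId, k -> new CountDownLatch(1));
        if (!latch.await(timeoutMs, TimeUnit.MILLISECONDS)) return null;
        return versions.get(commandId);
    }
}
```

The timeout gives the caller a natural point to resend the idempotent command, as suggested further down.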
Not my favorite choice, but it doesn't really violate any of the principles of good/successful design that we have been taught.
For instance, Gregor Hohpe talks about using a correlation identifier to coordinate between request messages and reply messages. In this example, the request would be the Command, with its identifier, and the reply would be a message that describes the new high-water mark of the stream.
You might imagine, for instance, that the application gets the list of new events for your event-sourced entity, saves them, then publishes a new event saying "the stream is now at version 12345". Your code, having submitted the command message, just blocks until the high-water-mark message arrives.
(If the message doesn't appear in a reasonable amount of time? Resend the command! We make sure that commands are idempotent so that will be an option.)
Another possibility: maybe you don't really need to know the high water mark at all; after all, you have the command id. If the events persisted by the command have a causation identifier (meaning, each event is tagged with the id of the command that produced it), and you trust that the event history is saved atomically, then... keep the query separate from the command; when the command is finished, "redirect" to the query, passing along the command identifier, and have the query block until at least one message with the right command id appears in the history.
Honestly, this is the same behavior as before, just putting the blocking in a different place.
Another possibility is that the client keeps track of the version itself. Rough idea is that even though the book of record is protected by the server, the client can have its own copy of the model, and a cache of the objects that it cares about. The client runs the command locally to ensure that it won't foul anything, then sends the command to the server -- already knowing what the answer is going to be if the command is successful.
Think of it as another form of optimistic concurrency.

Listening events in a web service or API over Database changes

I have this scenario, and I don't really know where to start. Suppose there's a web-service-like app (might be an API, though) hosted on a server. That app receives a request to process some data (through some method we will call processData(data theData)).
On the other side, there's a robot (might be installed on the same server) that processes the data. So, the web service inserts the request into a common database (both programs have access to it), and it's supposed to wait for that row to change and send the results back.
The robot periodically checks the database for new rows, processes the data and sets some sort of flag on that row, indicating that the data was processed.
So the main problem here is: what should the method processData(..) do to check for changes to the data row?
I know one way to do it: I can build an iteration block that checks the row every x seconds. But I don't want to do that. What I want is to build some sort of event listener that triggers when the row changes. I know it might involve some asynchronous programming.
I might be dreaming, but is that even possible in a web environment?
I've been reading about the SqlDependency class, async/await, etc.
Depending on how much control you have over design of this distributed system, it might be better for its architecture if you take a step back and try to think outside the domain of solutions you have narrowed the problem down to so far. You have identified the "main problem" to be finding a way for the distributed services to communicate with each other through the common database. Maybe that is a thought you should challenge.
There are many potential ways for these components to communicate and if your design goal is to reduce latency and thus avoid polling, it might in fact be the right way for the service that needs to be informed of completion of this work item to be informed of it right away. However, if in the future the throughput of this system has to increase, processing work items in bulk and instead poll for the information might become the only feasible option. This is also why I have chosen to word my answer a bit more generically and discuss the design of this distributed system more abstractly.
If after this consideration your answer remains the same and you do want immediate notification, consider having the component that processes a work item to notify the component(s) that need to be notified. As a general design principle for distributed systems, it is best to have the component that is most authoritative for a given set of data to also be the component to answer requests about that data. In this case, the data you have is the completion status of your work items, so the best component to act on this would be the component completing the work items. It might be better for that component to inform calling clients and components of that completion. Here it's also important to know if you only write this data to the database for the sake of communication between components or if those rows have any value beyond the completion of a given work item, such as for reporting purposes or performance indicators (KPIs).
I think there can be valid reasons, though, why you would not want to have such a call, such as reducing coupling between components or lack of access to communicate with the other component in a direct manner. There are many communication primitives that allow such notification, such as MSMQ under Windows, or Queues in Windows Azure. There are also reasons against it, such as dependency on a third component for communication within your system, which could reduce the availability of your system and lead to outages. The questions you might want to ask yourself here are: "How much work can my component do when everything around it goes down?" and "What are my design priorities for this system in terms of reliability and availability?"
So I think the main problem you might want to really try to solve first is a bit more abstract: what should the interface through which components of this distributed system communicate look like?
If after all of this you remain set on having the interface of communication between those components be the SQL database, you could explore using INSERT and UPDATE triggers in SQL. You can easily look up the syntax of those commands and specify stored procedures that then get executed. In those stored procedures you would want to check the completion flag of any new rows, and possibly restrict the number of rows you check by date, or keep an ID for the last processed work item. To then notify the other component, you could go as far as using the built-in extended stored procedure xp_cmdshell to execute command lines under Windows. The command you execute could be a simple tool that pings your service about completion of the task.
I'm sorry to have initially overlooked your suggestion to use SQL Query Notifications. That is also a feasible way and works through the Service Broker component. You would define a SqlCommand, as if normally querying your database, pass this to an instance of SqlDependency and then subscribe to the event called OnChange. Once you execute the SqlCommand, you should get calls to the event handler you added to OnChange.
I am not sure, however, how to get the exact changes to the database out of the SqlNotificationEventArgs object that will be passed to your event handler, so your query might need to be specific enough for the application to tell that the work item has completed whenever the query changes, or you might have to do another round-trip to the database from your application every time you are notified to be able to tell what exactly has changed.
Are you referring to a Message Queue? The .Net framework already provides this facility. I would say let the web service manage an application level queue. The robot will request the same web service for things to do. Assuming that the data needed for the jobs are small, you can keep the whole thing in memory. I would rather not involve a database, if you don't already have one.
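That in-memory, no-database variant could be sketched like this (Java; Job and JobQueue are invented names, and a real web service would hold the queue at application scope). processData enqueues a job and returns a handle; the robot pulls jobs off the queue and completes them:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

class Job {
    final String data;
    final CompletableFuture<String> result = new CompletableFuture<>();

    Job(String data) { this.data = data; }
}

class JobQueue {
    private final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();

    // Web-service side: enqueue the work, hand the caller a future to wait on.
    CompletableFuture<String> processData(String data) {
        Job job = new Job(data);
        queue.add(job);
        return job.result;
    }

    // Robot side: blocks until a job is available.
    Job take() throws InterruptedException { return queue.take(); }
}
```

The caller waits on the future instead of polling a database row, which is exactly the latency win discussed above.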

WCF data object receive progress

I'm looking for a way to retrieve a collection of DTOs from my WCF data service in a way that will allow me to be informed every time a whole DTO from the collection has finished downloading, also I want to be able to read it of course.
Meaning: if I want to get a collection of users, then every time a user from the collection has been downloaded completely to the client (serializably speaking), I want the client side to be notified and able to read it.
Is it at all possible?
Thanks!
Edit:
Is passing a callback from the client to the server, which the server will use to send the client each user through iteration, a possible/correct direction? Or is there a better solution?
You’ll probably have to split it into multiple requests in order to do this. For example, one request to retrieve the size of the collection, and then a separate request for each item in the collection. Then you know when each item completes. (If you do this, you can even parallelise the whole thing.)
You can't really subdivide a single call easily, so you'd be best off making one or two concurrent calls and getting the objects individually. Using some kind of manager class and some multithreading, you could have an event fired when a call completes, and map that to an 'object downloaded' event.
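A sketch of that manager-class idea with per-item calls (Java; fetchOne here is a hypothetical stand-in for the real per-item service call): each item is fetched in its own call, and a callback fires as soon as that item arrives.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

class DtoDownloader {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // One call per item; onItemDownloaded is the "object downloaded" event.
    CompletableFuture<Void> downloadAll(List<Integer> ids,
                                        Consumer<String> onItemDownloaded) {
        CompletableFuture<?>[] calls = ids.stream()
            .map(id -> CompletableFuture
                .supplyAsync(() -> fetchOne(id), pool)
                .thenAccept(onItemDownloaded))
            .toArray(CompletableFuture[]::new);
        return CompletableFuture.allOf(calls);   // completes when every item is in
    }

    void shutdown() { pool.shutdown(); }

    // Stand-in for the real per-item service request.
    private String fetchOne(int id) { return "user-" + id; }
}
```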
Hope that helps.

handling GUI element properties per app state

I was thinking of centralizing this functionality by having a single method that gets passed an AppState argument and it deals with changing the properties of all GUI elements based on this argument. Every time the app changes its state (ready, busy, downloading so partially busy, etc), this function is called with the appropriate state (or perhaps it's a bit field or something) and it does its magic.
If I scatter changing the state of GUI elements all over the place, then it becomes very easy to forget that when the app is in some state, this other widget over there needs to be disabled too, etc.
Any other ways to deal with this sort of thing?
Emrah,
Your idea is good. You need to limit the state structure, and this is the only way to ensure a reliable UI. On the other hand, do not follow the "one function" idea too strictly. Rather, continuously move in its direction by creating a function and then progressively refactoring all attribute changes into a single "setter" function. You need to remember a few things along the way:
Use only one-way communication. Do not read the state from controls since this is the source of all evil. First limit the number of property reads and then the number of property writes.
You need to incorporate some caching methodology. Ensure that caching does not inject property reading into main code.
Leave dialog boxes alone, just ensure that all dialog box communication is done during opening and closing and not in between (as much as you can).
Implement wrappers on most commonly used controls to ensure strict communication framework. Do not create any global control framework.
Do not use these ideas unless your UI is really complex; otherwise, using regular WinForms or JavaScript events will lead to much smaller code.
The less code the better. Do not refactor unless you lose lines.
Good luck!
Yes, this is the most time-consuming part of GUI work: making a user-friendly application. Disable this, enable that, hide this, show that, and make sure all controls have the right states when inserting/updating/deleting/selecting/deselecting things.
I think that's where you can tell a good programmer from a bad one. A bad programmer has an active "Save" button when there is nothing changed to save; a good programmer enables the "Save" button only when there are things to save (just one example of many).
I like the idea of a UIControlstate-handler for this purpose.
Me.UIControlStates = UIControlStates.EditMode, or something like that.
If having such object it could raise events when the state changes and there we put the code.
Sub UIControlStates_StateChanged(sender As Object, e As UIControlStateArgs)
    If e.OldState = UIControlStates.Edit And e.NewState = UIControlStates.Normal Then
        Rem Edit was aborted, reset fields
        ResetFields()
    End If
    Select Case e.NewState
        Case UIControlStates.Edit
            Rem enable/disable/hide/show, whatever
        Case UIControlStates.Normal
            Rem enable/disable/hide/show, whatever
        Case UIControlStates.Busy
            Rem enable/disable/hide/show, whatever
        Case Else
            Rem enable/disable/hide/show, whatever
    End Select
End Sub
#Stefan:
I think we are thinking along the same lines, i.e. a single piece of code that gets to modify all the widget states and everyone else has to call into it to make such changes. Except, I was picturing a direct method call while you are picturing raising/capturing events. Is there an advantage to using events vs just a simple method call?
