Having a state machine as a guard of another state machine - c#

I am designing a system that simulates the flow of a document in an organization. For the sake of simplicity let's assume that the flow has the following states:
Opened
Evaluated
Rejected
Accepted
There are times when some external resources must be available in order to proceed. Thus, if a resource isn't available, the whole flow should be put on hold. I imagine there's another (somewhat) parallel state machine that has two states:
In Progress
On Hold
The way that I thought I could solve this problem was to check the state of the second state machine as a guard condition in every transition of the first state machine.
But I wonder if there's a common way or pattern for solving this kind of problem?
BTW, I want to implement these state machines using the Stateless or bbv Common (Appccelerate) libraries.

With a UML state machine you could use hierarchical states together with a history state.
In Progress
    Opened
    Evaluated
    Rejected
    Accepted
    (H) History state
On Hold
Events for the substates of 'In Progress' will only be handled if 'In Progress' and one of its substates is active.
The (H) history state can be used to reactivate the most recently active substate when 'In Progress' becomes active again.
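Stateless can approximate this layout with substates. It has no built-in history state, so you record the most recently active substate yourself and use a dynamic transition to return to it. A minimal sketch, with state and trigger names of my own choosing:

enum State { InProgress, Opened, Evaluated, Rejected, Accepted, OnHold }
enum Trigger { Evaluate, Reject, Accept, Hold, Resume }

var machine = new StateMachine<State, Trigger>(State.Opened);
var lastActive = State.Opened; // manual stand-in for the (H) history state

// The document states are substates of 'In Progress'
machine.Configure(State.Opened)
    .SubstateOf(State.InProgress)
    .Permit(Trigger.Evaluate, State.Evaluated);

machine.Configure(State.Evaluated)
    .SubstateOf(State.InProgress)
    .Permit(Trigger.Reject, State.Rejected)
    .Permit(Trigger.Accept, State.Accepted);

machine.Configure(State.Rejected).SubstateOf(State.InProgress);
machine.Configure(State.Accepted).SubstateOf(State.InProgress);

// Every 'In Progress' substate inherits the Hold transition;
// remember which substate we left so Resume can restore it.
machine.Configure(State.InProgress)
    .OnExit(t => lastActive = t.Source)
    .Permit(Trigger.Hold, State.OnHold);

machine.Configure(State.OnHold)
    .PermitDynamic(Trigger.Resume, () => lastActive);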

Based on my experience with state machines, I think you would be better off with just one state machine. Make On Hold a state, and have the other states check in their check conditions whether the desired external resource is available; if it isn't, move the document to the On Hold state (see the sketch after the list below).
As for the In Progress state, I think it is implied by the other states and not really necessary.
States are made up of:
CheckConditions, this is where you put your guard conditions
ExitConditions
EntryConditions
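With Stateless, these per-state check conditions map naturally onto guard clauses; a sketch, assuming a ResourceAvailable() check of your own:

// Guarded transitions: proceed when the resource is there,
// fall back to On Hold when it is not.
machine.Configure(State.Opened)
    .PermitIf(Trigger.Evaluate, State.Evaluated, () => ResourceAvailable())
    .PermitIf(Trigger.Evaluate, State.OnHold, () => !ResourceAvailable());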

Related

When would I use a routing slip over a state machine

I can't wrap my head around how the two concepts differ; when I need a distributed transaction or event-driven business logic, my mind always goes to using a state machine. I know routing slips are useful, but I can't really tell when to use one.
My question is: when would I need to use one over the other? And why use a state machine to track the state of a routing slip, rather than just managing state with the state machine alone?
A state machine only tracks one state at a time, and that state could have any number of potential exits. If you think of the Mario Brothers games, big Mario could get a fire flower, or a leaf/tail, or a star/invincibility, or get shrunk, or fall in a pit and die. Those are all new states that could transition from the big Mario state.
A routing slip requires a linear set of processes or actions that are fixed from the start. Outgoing mail goes to the origin mailbox (where the flag gets raised), to the origin post office (to be aggregated with all other outgoing mail), to the sorting facility (to choose the destination post office), to the destination post office (to select the route to the destination mailbox), and then to the destination mailbox.
You can't skip any of those steps. You can't do them out of order. There aren't multiple potential exits at each step along the way.
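If it helps to see the difference in code, a routing slip can be modeled as nothing more than a fixed itinerary carried with the message; a sketch with illustrative names, not tied to any particular library:

// The whole route is decided up front; execution just walks it.
var routingSlip = new Queue<string>(new[]
{
    "origin mailbox",
    "origin post office",
    "sorting facility",
    "destination post office",
    "destination mailbox",
});

while (routingSlip.Count > 0)
{
    var nextStop = routingSlip.Dequeue();
    Forward(message, nextStop); // hypothetical dispatch; no branching, no skipping
}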

Should I fire trigger to change state from the OnEntry() method in finite state machine?

I am using the Stateless framework (https://code.google.com/p/stateless/) to model a finite state machine in my application. There are certain states that should perform some logic and then immediately move to the next state. I am wondering whether it is good practice to do it like the following:
var machine = new StateMachine<State, Trigger>(State.Idle);

machine.Configure(State.StateA)
    .OnEntry(() =>
    {
        DoSomeStuff();
        machine.Fire(Trigger.TriggerB); // move to StateB
    });
Is this good FSM design? If not, what would be a better approach? The idea I am trying to implement is to have certain states that automatically advance the machine to the next state, without external code that waits for DoSomeStuff() to finish and then triggers the machine to move to the next state.
You seem to be talking about state push vs. state pull. Both work; one approach can be more efficient than the other in some situations.
It's perfectly fine to take the state-push approach, where one state does some work and then fires a transition itself.
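For completeness, here is a sketch of that state-push approach with Stateless, with state and trigger names assumed; recent versions of Stateless queue triggers fired from within entry actions, so the inner Fire completes the current transition before starting the next one:

var machine = new StateMachine<State, Trigger>(State.Idle);

machine.Configure(State.Idle)
    .Permit(Trigger.TriggerA, State.StateA);

machine.Configure(State.StateA)
    .Permit(Trigger.TriggerB, State.StateB)
    .OnEntry(() =>
    {
        DoSomeStuff();
        machine.Fire(Trigger.TriggerB); // push straight on to StateB
    });

machine.Configure(State.StateB)
    .OnEntry(() => Console.WriteLine("reached StateB"));

machine.Fire(Trigger.TriggerA); // one external kick; the rest is automatic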

CQRS run handlers in specific order

I have an ordering problem with two handlers, one for deleting and one for reordering pictures, and would like some advice on the best solution.
On the UI some pictures are deleted: the user clicks the delete button, and the whole flow, from the delete command up to an event handler which actually deletes the physical files, is started.
Then immediately the user sorts the remaining pictures. A new flow, from the reorder command up to the reordering event handler for the file system, fires again.
Already there is a concurrency problem. The reordering cannot be correctly applied without the deletion being done. At the moment this problem is handled with some sort of lock: a temp file is created and then deleted at the end of the deletion flow. While that file exists, the other thread (reordering or deletion, depending on the user's actions) waits.
This is not an ideal solution and I would like to change it.
The potential solution must also be pretty fast (of course the current one is not) as the UI is updated through a JSON call at the end of reordering.
In a later implementation we are thinking of using a queue of events, but for the moment we are pretty stuck.
Any idea would be appreciated!
Thank you, mosu'!
Edit:
Other eventual consistency problems that we had were solved by using a JavaScript data manager on the client side. Basically being optimistic and tricking the user! :)
I'm starting to believe this is the way to go here as well. But then how would I know when the data has changed in the file system?
Max's suggestions are very welcome and normally they would apply.
It is sometimes hard to explain all the details of an implementation, but there is one detail that should be mentioned:
The way we store the pictures means that when they are reordered, all picture paths (and thus all links) change.
A colleague had the very good idea of simply removing this part. That means that even if the order changes, the path of a picture will remain the same. On the UI side there will be a mapping between a picture's index in the display order and its path, and this means there is no need to change the file system anymore, except when deleting.
As we want to be as permissive as possible with our users, this is the best solution for us.
I think, in general, this is also a good approach whenever there appears to be a concurrency issue: can the concurrency be removed?
Here is one thought on this.
What exactly are you reordering? Pictures? Based on, say, date?
Why is there a command for this? Is the result of this command going to be seen by everyone, or just by this particular user?
I can only guess, but it looks like you've got a presentation question here. There is no need to store pictures in some order on the write side; it's just a list of names and links to the file storage. What you should do is store just a little field somewhere in the user settings or collection settings: Date ascending or Name descending. So your Reorder command should change only this little field. Then, when you are loading the gallery, this field should be read first, and based on it you load one view or another. Since storage is cheap nowadays, you can keep differently sorted collections on the read side for every sort parameter you need.
To sum up, the Delete command changes the collection on the write side, but the Reorder command is just a user or collection setting. Hence, there is no concurrency here.
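In code, the Reorder handler then shrinks to writing one setting rather than touching the collection; a sketch with hypothetical types:

public record ReorderCommand(Guid CollectionId, string SortOrder);

public class ReorderHandler
{
    private readonly ICollectionSettings _settings; // hypothetical settings store

    public ReorderHandler(ICollectionSettings settings) => _settings = settings;

    public void Handle(ReorderCommand command)
    {
        // No write-side collection change, hence nothing to conflict
        // with a concurrent Delete.
        _settings.SaveSortOrder(command.CollectionId, command.SortOrder);
    }
}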
Update
Based on your comments and clarifications.
Of course you can, and probably should, restrict the user to one action at a time, if the time for deletion and reordering is reasonably short. It's always a question of the type of user experience you are asked to achieve. Take the usual example of an ordering system. After an order is placed, the user can almost immediately see it in the UI, and its status will be something like InProcess. Most likely you won't let the user change the order in any way, which means you are not going to show any user controls like a Cancel button (of course this is just an example). Hence, you can use this approach here.
If two users can modify the same physical collection, you have no choice: you are working with shared data and there should be some kind of synchronization. For instance, if you are using sagas, there could be a couple of sagas, a collection-reordering saga and a deletion saga, and they can cooperate. If the deletion process starts first, the collection aggregate is marked as 'deletion in progress'; when the reordering saga then attempts to start the reordering process, it sees that the deletion saga is in progress, waits for the DeletedEvent, and continues afterwards. The same applies if the reordering operation starts first: the deletion saga should wait until the corresponding event arrives and continue after that.
Update
OK, so we agreed not to touch the file system itself, but rather the aggregate which represents the picture collection. The most important concurrency issues can then be solved with an optimistic concurrency approach: in the data storage, a unique constraint based on aggregate id and aggregate version is usually used.
Here are the typical steps in the command handler:
1. Validate the command on its own merits.
2. Load the aggregate.
3. Validate the command against the current state of the aggregate.
4. Create a new event and apply the event to the aggregate in memory.
5. Attempt to persist the aggregate. If there's a concurrency conflict during this step, either give up or retry from step 2.
Here is the link which helped me a lot some time ago: http://www.cqrs.nu/
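A sketch of those five steps, with hypothetical repository and aggregate types; the (aggregate id, aggregate version) unique constraint surfaces as the ConcurrencyException caught in step 5:

public void Handle(DeletePictureCommand command)
{
    // 1. Validate the command on its own merits.
    if (command.PictureId == Guid.Empty)
        throw new ArgumentException("PictureId is required");

    for (var attempt = 0; attempt < 3; attempt++)
    {
        // 2. Load the aggregate (this captures its current version).
        var collection = repository.GetById(command.CollectionId);

        // 3. Validate the command against the current state of the aggregate.
        collection.EnsureContains(command.PictureId);

        // 4. Create a new event and apply it to the aggregate in memory.
        collection.Apply(new PictureDeleted(command.CollectionId, command.PictureId));

        try
        {
            // 5. Attempt to persist; the unique constraint detects conflicts.
            repository.Save(collection, expectedVersion: collection.Version);
            return;
        }
        catch (ConcurrencyException)
        {
            // Someone else changed the aggregate; retry from step 2.
        }
    }
    throw new InvalidOperationException("Gave up after repeated concurrency conflicts");
}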

How to know which plug-in/workflow/entity updates data

In my plugin I have code that checks the execution context Depth to avoid an infinite loop once the plugin updates its own entity. But there are cases where the entity is being updated from another plugin or workflow with depth 2, 3 or 4, and for those specific calls, from that specific plugin, I want to process the call and not stop, even if the Depth is bigger than 1.
Perhaps a different approach might be better? I've never needed to consider Depth in my plug-ins. I've heard of other people doing the same as you (checking the depth to avoid code running twice or more), but I usually avoid this by making any changes to the underlying entity in the Pre-Operation stage.
If, for example, I have code that changes the name of an Opportunity whenever the opportunity is updated, then by putting my code in the post-operation stage of the Update, my code would react to the user changing a value by sending a separate Update request back to the platform to apply the change. This new Update itself causes my plug-in to fire again: an infinite loop.
If I put my logic in the Pre-Operation stage, I do it differently: the user's change fires the plugin, and before the user's change is committed to the platform, my code is invoked. This time I can look at the Target that was sent in the InputParameters of the Update message. If the name attribute does not exist in the Target (i.e. it wasn't updated), then I can append an attribute called name with the desired value to the Target, and this value will be carried through to the platform. In other words, I am injecting my value into the record before it is committed, thereby avoiding the need to issue another Update request. Consequently, my change causes no further plug-ins to fire.
Obviously I presume that your scenario is more complex but I'd be very surprised if it couldn't fit the same pattern.
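In plug-in code, that pre-operation injection looks roughly like this (the attribute name and the helper are illustrative):

// Pre-Operation stage of Update: modify the Target before it is committed.
var context = (IPluginExecutionContext)serviceProvider
    .GetService(typeof(IPluginExecutionContext));
var target = (Entity)context.InputParameters["Target"];

if (!target.Contains("name"))
{
    // Inject the computed value into the pending update; because no second
    // Update request is issued, no further plug-ins fire.
    target["name"] = BuildOpportunityName(target); // hypothetical helper
}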
I'll start by agreeing with everything that Greg said above - if possible refactor the design to avoid this situation.
If that is not possible you will need to use the IPluginExecutionContext.SharedVariables to communicate between the plug-ins.
Check for a SharedVariable at the start of your plug-in and then set/update it as appropriate. The specific design you'll use will vary based on the complexity you need to manage. I always use a string with the message and entity ID, which is easy enough to serialize and deserialize. Then I always know whether or not I'm already executing against a certain message for a specific record.
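A sketch of that guard, using the message-plus-record-id string described above as the key (depending on the pipeline stage you may also need to look at context.ParentContext.SharedVariables):

var context = (IPluginExecutionContext)serviceProvider
    .GetService(typeof(IPluginExecutionContext));

// Identifies "this message against this record".
var key = context.MessageName + ":" + context.PrimaryEntityId;

if (context.SharedVariables.Contains(key))
    return; // already executing against this message for this record

context.SharedVariables[key] = true;
// ... rest of the plug-in logic ...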

Detect when all workflow sequences are idle

I have a workflow that primarily consists of identical elements.
Each element is defined like this:
The workflow may simply stack these elements in a sequence, it may run them in parallel, it may have branching between them, etc. - total freedom for the workflow designer. The whole thing is hosted as a WCF service, but I would prefer not to rely on that, if at all possible.
The high level idea of this whole setup is the following:
When the workflow starts, these elements start firing up, one after another, quickly skipping over the top condition branch. Completion of the previous element causes start of the next one - as defined in the workflow.
At some point, when the condition [B] is right, an element might take the bottom branch and start waiting for a WCF call.
Sooner or later, either all elements come to this kind of stop, or the workflow ends altogether.
What I need is to catch the moment when all elements have stopped and are waiting for a WCF call.
At that point, I need to perform some calculations that will affect further flow of the workflow. Therefore, I need to catch that moment precisely.
Some notes:
I guarantee that no WCF calls will come before I make those calculations. Therefore, possible race conditions connected to WCF calls are out of scope.
I do not have an application whose control flow I control. In other words, I am hosted in IIS, and am therefore subject to restart without notice, and cannot set up timers, long-running loops, message pumps, and the like.
I do not control the design of the workflow.
However, I do totally control the design of the element. In fact, this element is actually a NativeActivity (that's why the diagram is from Visio :-) that I control the source code of.
I also control, to some extent, the hosting environment. That is, I can make modifications to the web application that the workflow is hosted in.
The whole workflow is "attached" to a business object, and all elements have access to it.
The best way to do this is to create an extension that is a TrackingParticipant. This extension will receive all the tracking records in its Track method. Then, when it receives a WorkflowInstanceRecord whose State is "Idle", you will know that the workflow is idle. Activities can access this extension to receive data from it or call methods on it as well.
This is the technique I used in the Introduction to State Machine Hands-On Lab.
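A minimal sketch of such a participant; the idle callback is my assumption for where the calculations would be kicked off:

using System;
using System.Activities.Tracking;

class IdleTrackingParticipant : TrackingParticipant
{
    private readonly Action onIdle;

    public IdleTrackingParticipant(Action onIdle)
    {
        this.onIdle = onIdle;
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // WorkflowInstanceRecord reports instance-level state changes.
        var instanceRecord = record as WorkflowInstanceRecord;
        if (instanceRecord != null &&
            instanceRecord.State == WorkflowInstanceStates.Idle)
        {
            onIdle(); // every branch is now waiting, e.g. for a WCF call
        }
    }
}

// Registration with an IIS-hosted WorkflowServiceHost:
// host.WorkflowExtensions.Add(new IdleTrackingParticipant(RunCalculations));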
