In my plugin I have code that checks the execution context Depth to avoid an infinite loop when the plugin updates its own entity. However, there are cases where the entity is updated by another plugin or workflow at depth 2, 3 or 4, and for those specific calls, coming from that specific plugin, I want to process the request and not bail out even though the Depth is greater than 1.
Perhaps a different approach might be better? I've never needed to consider Depth in my plug-ins. I've heard of other people doing the same as you (checking the depth to prevent code from running twice or more), but I usually avoid this by making any changes to the underlying entity in the Pre-Operation stage.
If, for example, I have code that changes the name of an Opportunity whenever the opportunity is updated, then by putting my code in the post-operation stage of the Update, my code would react to the user changing a value by sending a separate Update request back to the platform to apply the change. This new Update itself causes my plug-in to fire again - an infinite loop.
If I put my logic in the Pre-Operation stage, I do it differently: the user's change fires the plugin, and my code is invoked before that change is committed to the platform. This time I can look at the Target that was sent in the InputParameters of the Update message. If the name attribute does not exist in the Target (i.e. it wasn't updated), I can append an attribute called name with the desired value to the Target, and this value will get carried through to the platform. In other words, I am injecting my value into the record before it is committed, thereby avoiding the need to issue another Update request. Consequently, my change causes no further plug-ins to fire.
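A minimal sketch of that pre-operation pattern, assuming the standard Microsoft.Xrm.Sdk plug-in model; the generated name value is only a placeholder:

using System;
using Microsoft.Xrm.Sdk;

public class SetOpportunityNamePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Registered on the pre-operation stage of Update for the opportunity entity.
        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity target)
        {
            // Only inject if the user didn't set the name themselves.
            if (!target.Contains("name"))
            {
                // The value rides along with the user's own change, so no extra
                // Update request is sent and the plug-in does not fire again.
                target["name"] = "Opportunity - " + DateTime.UtcNow.ToString("yyyy-MM-dd");
            }
        }
    }
}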
Obviously I presume that your scenario is more complex but I'd be very surprised if it couldn't fit the same pattern.
I'll start by agreeing with everything that Greg said above - if possible refactor the design to avoid this situation.
If that is not possible you will need to use the IPluginExecutionContext.SharedVariables to communicate between the plug-ins.
Check for a SharedVariable at the start of your plug-in and then set/update it as appropriate. The specific design you use will vary based on the complexity you need to manage. I usually use a string containing the message and the entity ID - easy enough to serialize and deserialize. That way I always know whether I'm already executing against a certain message for a specific record or not.
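For illustration, a rough sketch of that guard; the key format (message plus entity id) follows the string idea above, and the helper name and layout are only an example:

using Microsoft.Xrm.Sdk;

public static class PluginGuard
{
    // Returns true if this message/record pair is already being handled somewhere
    // up the execution chain; otherwise marks it and returns false.
    public static bool IsAlreadyRunning(IPluginExecutionContext context)
    {
        var key = context.MessageName + ":" + context.PrimaryEntityId;

        // Walk the parent contexts too, since the variable set by the plug-in that
        // triggered this one lives on its own context.
        for (var ctx = context; ctx != null; ctx = ctx.ParentContext)
        {
            if (ctx.SharedVariables.Contains(key))
                return true;
        }

        context.SharedVariables[key] = true;   // mark it before doing the real work
        return false;
    }
}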
I'm writing a Mafia (Werewolf)-style game engine in C#. After writing out the logic of an extended Mafia game, this is the model:
A Player (Actor) has one or more Roles, and a Role contains one or more Abilities. Abilities can be Static, Triggered, or Activated (similar to Magic the Gathering) and have an "MAction" with 0 or more targets (in which order can be important) along with other modifiers. Some occur earlier in the Night phase than others, which is represented by a Priority.
MActions are resolved by placing them in a priority queue and resolving the top one, firing its associated event (which can place more actions on the queue, mostly due to Triggered Abilities) and then actually executing a function.
The problem I see with this approach is that there's no way for an MAction to be Cancelled through its event in this mechanism, and I see no clear way to solve it. How should I implement a system so that either MActions can be cancelled or that responses with higher Priorities end up executing first/delaying the initial MAction?
Thanks in advance. I've spent quite some time thinking (and diagramming) this through, but can't quite get over this hurdle.
Would it be possible to implement a cancellation stack that is checked by each MAction function, so an MAction only executes if it is not in that stack? That way, any time an action is popped, it would only do something if it hadn't been cancelled already.
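A minimal sketch of that idea (using a HashSet rather than a literal stack, since membership lookup is all that matters); MAction here is a stand-in for whatever your engine already defines:

using System.Collections.Generic;
using System.Linq;

public interface MAction
{
    int Priority { get; }
    void FireEvent(ActionResolver resolver);   // triggered abilities respond here
    void Execute();                            // the actual effect
}

public class ActionResolver
{
    private readonly List<MAction> pending = new List<MAction>();
    private readonly HashSet<MAction> cancelled = new HashSet<MAction>();

    public void Enqueue(MAction action) => pending.Add(action);
    public void Cancel(MAction action) => cancelled.Add(action);

    public void ResolveAll()
    {
        while (pending.Count > 0)
        {
            // take the highest-priority action still waiting
            var next = pending.OrderBy(a => a.Priority).First();
            pending.Remove(next);

            if (cancelled.Contains(next))
                continue;                      // skipped: something cancelled it earlier

            next.FireEvent(this);              // responses may Enqueue or Cancel further actions
            if (!cancelled.Contains(next))
                next.Execute();                // only run if still not cancelled
        }
    }
}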
The situation as I understand it:
You have a series of things that happen with complicated rules which decide the order of what happens, and the order of what happens decides the quality/magnitude of the effect.
First things first, in order to make your life easier I'd recommend you limit your players to making all their moves before action resolution takes place. Even if this model is abandoned later it should make it easier for you to debug and resolve the actions. This is especially true if later actions can undo the effects of earlier actions like in the following example:
Dave transforms to a werewolf because he triggers the Full Moon ability. Then with werewolf powers he jumps over a wall and bites Buffy. Before Buffy dies she activates her time reverse ability and kills Dave before he jumps over the wall.
Regardless, your dilemma makes me think that you need to use a rules engine like NRules [1], or implement your own. The primary goal of the rules engine will be to order/discard the stuff that happens according to your business logic.
Next you put these actions into a queue/list. The actions are applied against the targets until the business rules tell you to stop (Werewolf Dave died) or there aren't any more actions to apply. Once you stop then the results of the battle/actions are reported to the user.
There are other ways to accomplish your goals but I think this will give you a viable pathway towards your end goal.
[1]: I've never used this library so I don't know if it is any good.
I have this scenario, and I don't really know where to start. Suppose there's a web-service-like app (it might be an API, though) hosted on a server. That app receives a request to process some data (through some method we will call processData(data theData)).
On the other side, there's a robot (it might be installed on the same server) that processes the data. So the web service inserts the request into a common database (both programs have access to it), and it's supposed to wait for that row to change and then send the results back.
The robot periodically checks the database for new rows, processes the data and sets some sort of flag on that row, indicating that the data was processed.
So the main problem here is: what should the method processData(..) do to check for changes to the data row?
I know one way to do it: I can build an iteration block that checks the row every x seconds. But I don't want to do that. What I want to do is build some sort of event listener that triggers when the row changes. I know it might involve some asynchronous programming.
I might be dreaming, but is that even possible in a web environment?
I've been reading about the SqlDependency class, async and await, etc.
Depending on how much control you have over design of this distributed system, it might be better for its architecture if you take a step back and try to think outside the domain of solutions you have narrowed the problem down to so far. You have identified the "main problem" to be finding a way for the distributed services to communicate with each other through the common database. Maybe that is a thought you should challenge.
There are many potential ways for these components to communicate, and if your design goal is to reduce latency and thus avoid polling, it might in fact be right for the service that needs to know about the completion of a work item to be informed of it right away. However, if in the future the throughput of this system has to increase, processing work items in bulk and polling for the information instead might become the only feasible option. This is also why I have chosen to word my answer a bit more generically and discuss the design of this distributed system more abstractly.
If after this consideration your answer remains the same and you do want immediate notification, consider having the component that processes a work item notify the component(s) that need to know. As a general design principle for distributed systems, it is best to have the component that is most authoritative for a given set of data also be the component that answers requests about that data. In this case, the data you have is the completion status of your work items, so the best component to act on this would be the component completing the work items. It might be better for that component to inform calling clients and components of that completion. Here it's also important to know whether you only write this data to the database for the sake of communication between components, or whether those rows have value beyond the completion of a given work item, such as for reporting purposes or key performance indicators (KPIs).
I think there can be valid reasons, though, why you would not want to have such a call, such as reducing coupling between components or lack of access to communicate with the other component in a direct manner. There are many communication primitives that allow such notification, such as MSMQ under Windows, or Queues in Windows Azure. There are also reasons against it, such as dependency on a third component for communication within your system, which could reduce the availability of your system and lead to outages. The questions you might want to ask yourself here are: "How much work can my component do when everything around it goes down?" and "What are my design priorities for this system in terms of reliability and availability?"
So I think the main problem you might want to try to solve first is a bit more abstract: what should the interface through which components of this distributed system communicate look like?
If after all of this you remain set on having the SQL database be the communication interface between those components, you could explore using INSERT and UPDATE triggers in SQL. You can easily look up the syntax of those statements and have them execute stored procedures. In those stored procedures you would want to check the completion flag of any new rows, and possibly restrict the number of rows you check by date, or keep an ID for the last processed work item. To then notify the other component, you could go as far as using the built-in stored procedure xp_cmdshell to execute command lines under Windows. The command you execute could be a simple tool that pings your service about completion of the task.
I'm sorry to have initially overlooked your suggestion to use SQL Query Notifications. That is also a feasible way and works through the Service Broker component. You would define a SqlCommand, as if normally querying your database, pass this to an instance of SqlDependency and then subscribe to the event called OnChange. Once you execute the SqlCommand, you should get calls to the event handler you added to OnChange.
I am not sure, however, how to get the exact changes to the database out of the SqlNotificationEventArgs object that will be passed to your event handler, so your query might need to be specific enough for the application to tell that the work item has completed whenever the query changes, or you might have to do another round-trip to the database from your application every time you are notified to be able to tell what exactly has changed.
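For reference, a rough sketch of the SqlDependency wiring described above, assuming SQL Server with Service Broker enabled and a notification-friendly query (explicit column list, two-part table name); the table and column names are invented:

using System.Data.SqlClient;

public class WorkItemWatcher
{
    private readonly string connectionString;

    public WorkItemWatcher(string connectionString)
    {
        this.connectionString = connectionString;
        SqlDependency.Start(connectionString);   // once per AppDomain
    }

    public void Subscribe()
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, IsProcessed FROM dbo.WorkItems WHERE IsProcessed = 1", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnWorkItemsChanged;

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // the command must actually be executed for the subscription to become active
            }
        }
    }

    private void OnWorkItemsChanged(object sender, SqlNotificationEventArgs e)
    {
        // The event args only say that something changed, not what; re-query the table
        // here to find the completed work item, then re-subscribe, since each
        // notification fires only once.
        Subscribe();
    }
}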
Are you referring to a Message Queue? The .NET Framework already provides this facility. I would say let the web service manage an application-level queue. The robot will request things to do from the same web service. Assuming that the data needed for the jobs is small, you can keep the whole thing in memory. I would rather not involve a database if you don't already have one.
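As an illustration of that in-memory, application-level queue, here is a small sketch using BlockingCollection; the Job type is a placeholder:

using System.Collections.Concurrent;

public class Job
{
    public string Id { get; set; }
    public string Payload { get; set; }
}

public class JobQueue
{
    private readonly BlockingCollection<Job> jobs = new BlockingCollection<Job>();

    // Called by the web service when a request arrives.
    public void Enqueue(Job job) => jobs.Add(job);

    // Called on behalf of the robot; blocks until a job is available instead of polling.
    public Job TakeNext() => jobs.Take();
}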
I have a running-order problem with two handlers, one for deleting and one for reordering pictures, and would like some advice on the best solution.
On the UI some pictures are deleted; the user clicks on the delete button. The whole flow, from the delete command up to an event handler which actually deletes the physical files, is started.
Then, immediately afterwards, the user sorts the remaining pictures. A new flow, from the reorder command up to the reordering event handler for the file system, fires again.
Already there is a concurrency problem: the reordering cannot be correctly applied without the deletion being done first. At the moment this problem is handled with some sort of lock: a temp file is created and then deleted at the end of the deletion flow. While that file exists, the other thread (reordering or deletion, depending on the user's actions) waits.
This is not an ideal solution and I would like to change it.
The potential solution must also be pretty fast (of course the current one is not), as the UI is updated through a JSON call at the end of ordering.
In a later implementation we are thinking of using a queue of events, but for the moment we are pretty stuck.
Any idea would be appreciated!
Thank you, mosu'!
Edit:
Other eventual consistency problems that we had were solved by using a JavaScript data manager on the client side. Basically being optimistic and tricking the user! :)
I'm starting to believe this is the way to go here as well. But then how would I know when the data has changed in the file system?
Max's suggestions are very welcome and normally they apply.
It is hard sometimes to explain all the details of an implementation but there is a detail that should be mentioned:
The way we store the pictures means that when they are reordered, all picture paths (and thus all links) change.
A colleague had the very good idea of simply removing this part. That means that even if the order changes, the path of the picture will remain the same. On the UI side there will be a mapping between the picture's index in the display order and its path, which means there is no need to change the file system anymore, except when deleting.
As we want to be as permissive as possible with our users this is the best solution for us.
I think, in general, it is also a good approach when there appears to be a concurrency issue. Can the concurrency be removed?
Here is one thought on this.
What exactly are you reordering? Pictures? Based on, say, date?
Why is there a command for this? Is the result of this command going to be seen by everyone or just by this particular user?
I can only guess, but it looks like you've got a presentation question here. There is no need to store pictures in some order on the write side; it's just a list of names and links to the file storage. What you should do is store a little field somewhere in the user settings or collection settings: date ascending or name descending, for example. So your Reorder command should change only this little field. Then, when you are loading the gallery, this field should be read first and, based on it, you load one view or another. Since storage is cheap nowadays, you can keep differently sorted collections on the read side for every sort parameter you need.
To sum up, the Delete command changes the collection on the write side, but the Reorder command is just a user or collection setting. Hence, there is no concurrency here.
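A tiny sketch of that idea - the Reorder command handler only persists a sort preference and never touches the write-side collection; the settings store interface and type names here are hypothetical:

using System;

public enum PictureSortOrder { DateAscending, DateDescending, NameAscending, NameDescending }

public class ReorderCollectionHandler
{
    private readonly ICollectionSettingsStore settings;   // hypothetical settings store

    public ReorderCollectionHandler(ICollectionSettingsStore settings)
    {
        this.settings = settings;
    }

    public void Handle(Guid collectionId, PictureSortOrder newOrder)
    {
        // No file system work, no aggregate mutation - just persist the preference.
        settings.SaveSortOrder(collectionId, newOrder);
    }
}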
Update
Based on your comments and clarifications.
Of course you can, and probably should, restrict user actions to one at a time, provided the time taken by deletion and reordering is reasonably short. It's always a question of the type of user experience you are asked to achieve. Take the usual example of an ordering system. After an order is placed, the user can almost immediately see it in the UI, with a status like InProcess. Most likely you won't let the user change the order in any way, which means you are not going to show any user controls like a Cancel button (of course, this is just an example). Hence, you can use this approach here.
If two users can modify the same physical collection, you have no choice here - you are working with shared data and there should be some kind of synchronization. For instance, if you are using sagas, there can be a couple of them: a collection-reordering saga and a deletion saga, and they can cooperate. If the deletion process starts first, the collection aggregate is marked as having a deletion in progress, and when the reordering saga starts right after, it will attempt to begin the reordering process; but since the deletion saga is in process, it should wait for the DeletedEvent and continue afterwards. The same applies if the reordering operation starts first - the deletion saga should wait until some event and continue after that event arrives.
Update
OK, so we agreed not to touch the file system itself, but rather the aggregate which represents the picture collection. The most important concurrency issues can be solved with an optimistic concurrency approach - in the data storage, a unique constraint based on aggregate id and aggregate version is usually used.
Here is the typical sequence of steps a command handler follows (a condensed sketch follows the list):
1. Validate the command on its own merits.
2. Load the aggregate.
3. Validate the command against the current state of the aggregate.
4. Create a new event and apply it to the aggregate in memory.
5. Attempt to persist the aggregate. If there's a concurrency conflict during this step, either give up or retry from step 2.
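A condensed sketch of those steps with a version-based optimistic concurrency check; the repository, aggregate, command, event and exception types are placeholders rather than any specific framework:

using System;

public class RenamePictureHandler
{
    private readonly IPictureCollectionRepository repository;   // hypothetical repository

    public RenamePictureHandler(IPictureCollectionRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(RenamePicture command)
    {
        if (string.IsNullOrWhiteSpace(command.NewName))
            throw new ArgumentException("A name is required");       // 1. validate the command itself

        for (var attempt = 0; attempt < 3; attempt++)
        {
            var collection = repository.Load(command.CollectionId);  // 2. load the aggregate
            collection.EnsureCanRename(command.PictureId);           // 3. validate against current state

            var renamed = new PictureRenamed(command.PictureId, command.NewName);
            collection.Apply(renamed);                               // 4. apply the event in memory

            try
            {
                // 5. persist; the store enforces a unique (aggregate id, version) constraint
                repository.Save(collection, expectedVersion: collection.Version);
                return;
            }
            catch (ConcurrencyException)                             // placeholder exception type
            {
                // someone else saved first - retry from step 2, give up after a few attempts
            }
        }
    }
}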
Here is the link which helped me a lot some time ago: http://www.cqrs.nu/
Imagine the following situation: a Windows service checks the data in a database table from time to time. When some new data appears, it starts processing each new row.
The processing consists of several logical stages, say:
get some additional data from a web service;
find an existing object via the web service using the data from stage 1, or create a new object;
inform an interested person of the actions taken (with details about the object that was found/created at stage 2);
do something else.
For now, if any exception happens, the service updates the DB row and sets a flag indicating that an error has occurred. Some time later the service will try to process the row once again... and here is the problem.
Processing will start from the very beginning, from stage 1. In this case, if the exception happened at stage 4 and it keeps happening, the interested person from stage 3 will be informed again and again and again...
Completely stopping row processing in case of an exception is neither possible nor desirable in my situation. Ideally it would be nice if there were a way to restart processing from the stage where it failed last time.
Now I need your advice on how all of this can be handled. In fact, everything is even more complicated, because there are several patterns of processing, different numbers of stages and so on.
Thanks in advance.
UPDATE
Yes, I have a State parameter in the data rows :) It is just not used for now. And exception handling is not a new thing for me.
The question is: what is the best way to handle state switching? In other words, how do I make a clear logical link between the stage number and the processing method? The flow of execution can be very different and include a varying number of stages and methods for different rows.
I hope there are more pleasant ways than writing endless switch/case blocks for every new situation?
There are a couple of patterns that can help with each issue in your description.
Windows service checking a queue. Have a timer in your service that runs every 1 minute or 5 minutes or whatever, check the queue, and if there are any new entries, start processing (see example here: Best Timer for using in a Windows service).
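A minimal sketch of that timer-driven check, assuming System.Timers.Timer inside a ServiceBase-derived service; the class and method names are placeholders:

using System.ServiceProcess;
using System.Timers;

public class RowProcessingService : ServiceBase
{
    private Timer timer;

    protected override void OnStart(string[] args)
    {
        timer = new Timer(60000);          // check the queue every minute
        timer.Elapsed += (s, e) => ProcessNewRows();
        timer.AutoReset = true;
        timer.Start();
    }

    protected override void OnStop() => timer.Stop();

    private void ProcessNewRows()
    {
        // query the table for unprocessed rows and dispatch each one by its current stage
    }
}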
Following a series of steps is generally called a workflow. In a workflow you have a current status, and you update that status at each stage. So every row would begin at stage = 1; after the first step, stage = 2, and so on. On an exception, the row stays at the stage it left off, so processing starts over from that stage (or whatever your logic is). This status would be stored with each row, and the dispatch code would check the status and send the row to the correct starting code for the current stage - i.e., think of a set of If statements (or a lookup table, as sketched below) based on the status.
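To avoid the endless switch/case blocks the question mentions, the dispatch can also be a lookup from stage number to handler method. A rough sketch, with Row and the stage methods as placeholders:

using System;
using System.Collections.Generic;

public class Row
{
    public int Stage { get; set; }
    // ...other columns
}

public class RowWorkflow
{
    private readonly Dictionary<int, Func<Row, int>> stages;

    public RowWorkflow()
    {
        // each stage returns the number of the next stage to run
        stages = new Dictionary<int, Func<Row, int>>
        {
            { 1, FetchAdditionalData },
            { 2, FindOrCreateObject },
            { 3, NotifyInterestedPerson },
            { 4, DoSomethingElse },
        };
    }

    public void Process(Row row)
    {
        // resume from whatever stage the row was left at
        while (stages.TryGetValue(row.Stage, out var stage))
        {
            row.Stage = stage(row);
            // persist row.Stage here so a later failure resumes at the right point
        }
    }

    private int FetchAdditionalData(Row row) => 2;
    private int FindOrCreateObject(Row row) => 3;
    private int NotifyInterestedPerson(Row row) => 4;
    private int DoSomethingElse(Row row) => 0;   // 0 = done, no entry in the dictionary
}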
Handling exceptions is very simple. Every unit of work should be wrapped in a try...catch block. On error, log the exception, and mark the row according to your business rules.
As far as the implementation, use programming best practices to keep your code clean, modular, neat and organized. As you develop the solution, bring specific questions back for more help.
Add a field to your database table which tracks the state of each row. You could call this new field ProcessingState for example.
As the row goes through each logical state you can update this ProcessingState field to identify which state the row is in.
Each logical step in your service should just work rows that are in the appropriate state.
Here is an example. Let's say you have five logical steps to work through. You could have the following states (sketched as an enum after the list):
Waiting State 1
State 1 complete
Waiting State 2
State 2 complete
etc.
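Sketched as an enum, the states above could look something like this (names and values are only an example):

public enum ProcessingState
{
    WaitingState1 = 0,
    State1Complete = 1,
    WaitingState2 = 2,
    State2Complete = 3,
    // ...one pair per logical step
    Completed = 10,
    Failed = -1
}

// Each worker then picks up only the rows in the state it handles, e.g.
// SELECT * FROM Rows WHERE ProcessingState = @waitingState2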
Good luck.
I was thinking of centralizing this functionality by having a single method that gets passed an AppState argument and it deals with changing the properties of all GUI elements based on this argument. Every time the app changes its state (ready, busy, downloading so partially busy, etc), this function is called with the appropriate state (or perhaps it's a bit field or something) and it does its magic.
If I scatter changing the state of GUI elements all over the place, then it becomes very easy to forget that when the app is in some state, this other widget over there needs to be disabled too, etc.
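For what it's worth, a bare-bones sketch of that single-method idea in WinForms; the AppState values and control names are made up for illustration:

using System.Windows.Forms;

public enum AppState { Ready, Busy, Downloading }

public class MainForm : Form
{
    // hypothetical controls; in a real form these would come from the designer
    private readonly Button saveButton = new Button();
    private readonly Button cancelButton = new Button();
    private readonly ProgressBar progressBar = new ProgressBar();

    // The one place allowed to touch the enabled/visible state of the controls.
    public void ApplyAppState(AppState state)
    {
        saveButton.Enabled = state == AppState.Ready;
        cancelButton.Enabled = state != AppState.Ready;
        progressBar.Visible = state == AppState.Busy || state == AppState.Downloading;
    }
}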
Any other ways to deal with this sort of thing?
Emrah,
Your idea is good. You need to limit the state structure, and this is the only way to ensure a reliable UI. On the other hand, do not follow the "one function" idea too strictly. Rather, follow its direction continuously: create a function and then progressively refactor all attribute changes into a single "setter" function. You need to remember a few things along the way:
Use only one-way communication. Do not read the state from controls since this is the source of all evil. First limit the number of property reads and then the number of property writes.
You need to incorporate some caching methodology. Ensure that caching does not inject property reading into main code.
Leave dialog boxes alone, just ensure that all dialog box communication is done during opening and closing and not in between (as much as you can).
Implement wrappers on most commonly used controls to ensure strict communication framework. Do not create any global control framework.
Do not use these ideas unless your UI is really complex. Otherwise, using regular WinForms or JavaScript events will lead to much smaller code.
The less code the better. Do not refactor unless you lose lines.
Good luck!
Yes, this is the most time-consuming part of GUI work: making a user-friendly application. Disable this, enable that, hide this, show that - making sure all controls have the right states when inserting/updating/deleting/selecting/deselecting things.
I think that's where you tell a good programmer from a bad programmer. A bad programmer has an active "Save" button when there is nothing changed to save; a good programmer enables the "Save" button only when there are things to save (just one example of many).
I like the idea of a UIControlstate-handler for this purpose.
Me.UIControlState = UIControlStates.EditMode, or something like that.
If we have such an object, it could raise events when the state changes, and that's where we put the code:
Sub UIControlStates_StateChanged(sender As Object, e As UIControlStateArgs)
    If e.OldState = UIControlStates.Edit And e.NewState = UIControlStates.Normal Then
        Rem Edit was aborted, reset fields
        ResetFields()
    End If
    Select Case e.NewState
        Case UIControlStates.Edit
            Rem enable/disable/hide/show, whatever
        Case UIControlStates.Normal
            Rem enable/disable/hide/show, whatever
        Case UIControlStates.Busy
            Rem enable/disable/hide/show, whatever
        Case Else
            Rem enable/disable/hide/show, whatever
    End Select
End Sub
#Stefan:
I think we are thinking along the same lines, i.e. a single piece of code that gets to modify all the widget states and everyone else has to call into it to make such changes. Except, I was picturing a direct method call while you are picturing raising/capturing events. Is there an advantage to using events vs just a simple method call?