I'm in the process of designing a system that will allow me to represent broad-scope tasks as workflows, which expose their workitems via an IEnumerable method. The intention here is to use C#'s 'yield' mechanism to allow me to write pseudo-procedural code that the workflow execution system can execute as it sees fit.
For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow:
public override IEnumerable<WorkItem> Workflow() {
    // These would probably be injected from elsewhere
    var db = new DB();
    var emailServer = new EmailServer();

    // other workitems here

    var ci = new FindLowInventoryItems(db);
    yield return ci;

    if (ci.LowInventoryItems.Any()) {
        var email = new SendEmailToWarehouse("Inventory is low.", ci.LowInventoryItems);
        yield return email;
    }

    // other workitems here
}
FindLowInventoryItems and SendEmailToWarehouse are objects deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called by the workflow framework - I have a WorkflowRunner class which enumerates the Workflow(), wraps pre- and post-events around each workitem, and calls Execute() in between the events. This allows the consuming application to do whatever it needs before or after each workitem, including cancelling it, changing workitem properties, etc.
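For concreteness, here is a minimal sketch of what such a runner might look like. The Cancelled flag and the event names are my assumptions for illustration, not part of the framework described above:

using System;
using System.Collections.Generic;

public abstract class WorkItem
{
    public bool Cancelled { get; set; }   // assumed cancellation mechanism
    public abstract void Execute();
}

public class WorkflowRunner
{
    // Hypothetical hooks; the real framework may expose richer pre-/post-events.
    public event Action<WorkItem> BeforeExecute;
    public event Action<WorkItem> AfterExecute;

    public void Run(IEnumerable<WorkItem> workflow)
    {
        foreach (var item in workflow)
        {
            if (BeforeExecute != null) BeforeExecute(item);  // consumer may cancel or tweak here
            if (!item.Cancelled)
                item.Execute();
            if (AfterExecute != null) AfterExecute(item);
        }
    }
}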
The benefit to all this, I think, is that I can express the core logic of a task in terms of the workitems responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also, because I'm using IEnumerable and the C# syntactic sugar that supports it, I can compose these workflows - higher-level workflows can consume and manipulate sub-workflows. For example, I wrote a simple workflow that just interleaves two child workflows together.
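As an illustration of that kind of composition, an interleaving combinator might look like this (a sketch, reusing the WorkItem type sketched above):

public static class WorkflowCombinators
{
    // Alternates workitems from two child workflows until both are exhausted.
    public static IEnumerable<WorkItem> Interleave(
        IEnumerable<WorkItem> first, IEnumerable<WorkItem> second)
    {
        using (var a = first.GetEnumerator())
        using (var b = second.GetEnumerator())
        {
            bool hasA = true, hasB = true;
            while (hasA || hasB)
            {
                if (hasA && (hasA = a.MoveNext()))
                    yield return a.Current;
                if (hasB && (hasB = b.MoveNext()))
                    yield return b.Current;
            }
        }
    }
}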
My question is this - does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me - self-documenting code (the workflow reads procedurally, so I know what will be executed in what steps), separation of concerns (finding low inventory items does not depend on sending email to the warehouse), etc. Also - are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before - am I just re-discovering this?
Personally, this would be a "buy before build" decision for me. I'd buy something before I'd write it.
I work for a company that's rather large and can be foolish with its money, so if you're writing this for yourself and can't buy something I'll retract the comment.
Here are a few random ideas:
I'd externalize the workflow into a configuration that I could read in on startup, maybe from a file or a database.
It'd look something like a finite state machine with states, transitions, events, and actions.
I'd want to be able to plug in different actions so I could customize different flows on the fly.
I'd want to be able to register different subscribers who would want to be notified when a particular event happened.
I wouldn't expect to see anything as hard-coded as that e-mail server. I'd rather encapsulate that into an EmailNotifier that I could plug into events that demanded it. What about a beeper notification? Or a cell phone? Blackberry? Same architecture, different notifier (see the sketch after this list).
Do you want to include a handler for human interaction? All the workflows that I deal with are a mix of human and automated processing.
Do you anticipate wanting to connect to other systems, like databases, other apps, web services?
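A rough sketch of that pluggable-notifier idea (all the names here are mine, purely illustrative):

using System.Collections.Generic;

public interface INotifier
{
    void Notify(string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string message) { /* send via SMTP */ }
}

public class SmsNotifier : INotifier
{
    public void Notify(string message) { /* send via an SMS gateway */ }
}

// An event fans out to whatever subscribers are registered, without
// knowing whether they e-mail, page, or text.
public class LowInventoryEvent
{
    private readonly List<INotifier> subscribers = new List<INotifier>();
    public void Subscribe(INotifier notifier) { subscribers.Add(notifier); }

    public void Raise(string message)
    {
        foreach (var n in subscribers) n.Notify(message);
    }
}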
It's a tough problem. Good luck.
@Erik: (Addressing a comment about the applicability of my answer.) If you enjoy the technical challenge of designing and building your own custom workflow system, then my answer is not helpful. But if you are trying to solve a real-world workflow problem with code that needs to be supported into the future, then I recommend using the built-in WF system.
Workflow support is now part of the .NET Framework and is called "Windows Workflow Foundation (WF)". It is almost certainly easier to learn how to use the built-in library than to write one of your own, as duffymo pointed out in his "buy before build" comment.
Workflows are expressed in XAML and are supported by a designer in Visual Studio.
There are three types of Workflows (from Wikipedia, link below):
Sequential Workflow (typically flow-chart based; progresses from one stage to the next and does not step back)
State Machine Workflow (progresses from 'State' to 'State'; these workflows are more complex and can return to a previous point if required)
Rules-driven Workflow (implemented on top of a Sequential/State Machine workflow; the rules dictate the progress of the workflow)
Wikipedia: Windows Workflow Foundation
MSDN: Getting Started with Workflow Foundation (WF)
Related
I am considering using Elsa workflows for a project, but I couldn't find any examples or documentation on how to use it in client applications (Xamarin.Forms/Blazor WASM). My idea is to basically define workflows that also include screen transitions in the client apps. Is this a relevant scenario for Elsa, or am I not getting it? I understand that there is some REST API available, but I have no idea how to use it.
This great article explains how to use it in ASP.NET/backend scenarios https://sipkeschoorstra.medium.com/building-workflow-driven-net-core-applications-with-elsa-139523aa4c50
That's a great use case for Elsa and is something I am planning to create a sample application + guide for. So far, there are guides and samples about executing long-running "back-end" processes using Elsa, but there is no reason one couldn't also use it to implement application navigation logic, such as wizards consisting of steps implemented as individual screens, for example.
So that's your answer: yes, it is a relevant scenario. But it is unfortunate that there are no concrete samples to point you to at the moment.
Barring any samples, here's how it might work in a client application:
The client application has Elsa services configured.
Whether you decide to store the workflow within the app (as code or JSON) or on a remote Elsa Server instance doesn't matter - once you have a workflow in memory, you can execute it.
Since your workflows will be driving UI, you have to think about how tightly coupled the workflow will be with that UI. For example, a tightly coupled workflow might include activities that represent the views (names) to present, including transition configuration if that is something to be configured, and outcomes based on which buttons were clicked. A loosely coupled workflow, on the other hand, might act more as a "conductor" or orchestrator of actions and events, where the workflow consists of nothing more than a bunch of primitives such as "SendCommand" and "Event Received". "SendCommand" simply raises some application event with a task name that your application then handles; a task might be a "Navigate" instruction with the next view name provided as a parameter. The "Event Received" activity handles the other direction: your application fires instructions to Elsa, and Elsa drives the workflow.
The "SendCommand" and "EventReceived" activities are very new and part of Elsa 2.1 preview packages. Right now they are directly coupled to webhook scenarios (where the commands are sent in the form of HTTP requests to an external application), but the goal is to have various strategies in place (HTTP out requests would just be one of them, another one might be a simple mediator pattern for in-process scenarios such as your client application one).
UPDATE
To retrieve workflows designed in the designer into your client app, you need to get the workflow definition via the following API endpoint:
http(s)://your-elsa-server/v1/workflow-definitions/{workflow-definition-id}/Published
What you'll get back is JSON representing the workflow definition, which you can deserialize using IContentSerializer.Deserialize<WorkflowDefinition>, giving you a WorkflowDefinition. But to be able to actually run a workflow, you need a workflow blueprint. To turn the workflow definition into a blueprint, use IWorkflowBlueprintMaterializer.CreateWorkflowBlueprintAsync(WorkflowDefinition), which will give you a blueprint that can then be executed using e.g. IStartsWorkflow.StartWorkflowAsync(IWorkflowBlueprint).
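Stitched together, that flow might look roughly like this. The service interfaces are the ones named above, but the HTTP call and the dependency-injection plumbing are my assumptions, so treat this as a sketch rather than the canonical API usage:

// Requires the relevant Elsa namespaces plus:
// using System; using System.Net.Http; using System.Threading.Tasks;
// using Microsoft.Extensions.DependencyInjection;
public async Task RunPublishedWorkflowAsync(
    IServiceProvider services, HttpClient http, string definitionId)
{
    // Fetch the published workflow definition from the Elsa Server.
    var json = await http.GetStringAsync(
        "https://your-elsa-server/v1/workflow-definitions/" + definitionId + "/Published");

    // Deserialize JSON -> WorkflowDefinition.
    var serializer = services.GetRequiredService<IContentSerializer>();
    var definition = serializer.Deserialize<WorkflowDefinition>(json);

    // Materialize WorkflowDefinition -> workflow blueprint.
    var materializer = services.GetRequiredService<IWorkflowBlueprintMaterializer>();
    var blueprint = await materializer.CreateWorkflowBlueprintAsync(definition);

    // Execute the blueprint.
    var starter = services.GetRequiredService<IStartsWorkflow>();
    await starter.StartWorkflowAsync(blueprint);
}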
There are various other services that make it more convenient to construct and run workflows.
To make this as frictionless as possible for your client app, you could consider simply implementing IWorkflowProvider, of which we currently have 3 out of the box:
ProgrammaticWorkflowProvider: provides workflow blueprints based on the workflows coded with the fluent Workflow Builder API.
DatabaseWorkflowProvider: provides blueprints based on those stored in the database (JSON models stored by the designer).
StorageWorkflowProvider: provides blueprints based on JSON files stored on some hard drive or blob storage such as Azure Blob Storage.
What you might do, and in fact what I think we should provide out of the box now that you made me think of it, is create a fourth provider that gets workflows from those API endpoints.
Then your client app should not have to be bothered with invoking the Elsa API - the provider does it for you.
I'm an intermediate C#/ASP.NET coder working on a fun MVC project. I'd like to implement an experiment framework that will allow objects to "jiggle" functionality in order to find better ways of doing things. Where can I read more about best practices or tutorials on this kind of thing?
e.g.:
// A helper class that might return different flavors of "mean".
public class MeanHelper
{
    public double Mean(IEnumerable<double> input)
    {
        // <shuffle between the 3 below> - this is the part I'm asking about
        throw new NotImplementedException();
    }

    public double OrdinaryMean(IEnumerable<double> input) { return input.Average(); }
    public double GeometricMean(IEnumerable<double> input) { return Math.Exp(input.Select(x => Math.Log(x)).Average()); }
    public double HarmonicMean(IEnumerable<double> input) { return input.Count() / input.Sum(x => 1.0 / x); }
}
Thanks so much!
What you describe sounds like the strategy pattern: you could, for instance, have an interface IMean with a Calculate method, and three implementations.
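A minimal sketch of that (the interface and method come from this answer; the random selection is just one way to do the shuffling):

using System;
using System.Collections.Generic;
using System.Linq;

public interface IMean
{
    double Calculate(IEnumerable<double> input);
}

public class OrdinaryMean : IMean
{
    public double Calculate(IEnumerable<double> input) { return input.Average(); }
}

public class GeometricMean : IMean
{
    public double Calculate(IEnumerable<double> input)
    {
        return Math.Exp(input.Select(x => Math.Log(x)).Average());
    }
}

public class HarmonicMean : IMean
{
    public double Calculate(IEnumerable<double> input)
    {
        return input.Count() / input.Sum(x => 1.0 / x);
    }
}

public class MeanHelper
{
    private static readonly IMean[] Strategies =
        { new OrdinaryMean(), new GeometricMean(), new HarmonicMean() };
    private static readonly Random Rng = new Random();

    // Shuffles between the three implementations on each call.
    public double Mean(IEnumerable<double> input)
    {
        return Strategies[Rng.Next(Strategies.Length)].Calculate(input);
    }
}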
I use my own experimentation framework with the following properties:
experiments are first-class entities
the application runs only as long as there are ongoing experiments
each experiment determines when it has collected enough data
the record is stored in a database
The application undergoing experiments exposes named properties for the experimentation framework to manipulate and monitor. The main application method is an event generator. Each experiment lists a property to manipulate and other properties to monitor. The experimentation framework iterates through the main application method, feeding the events to the ongoing experiments. Each experiment filters the events it receives, updating variance statistics and manipulating its associated application property. When the experiment signals that it has processed an event, the event, as well as the values of the properties it manipulates and monitors, are stored in the experiment database.
Experiment databases are analyzed through inspection or with Sho or R. The results of an experiment can be used to study the effects of individual property values, as well as the correlation and independence of different application properties.
I chose to use the generator pattern, instead of signals or inversion of control, because the experimentation framework should be in charge of when the application runs and stops. The compiler does all the hard work of creating a state machine in the main application method, based on the yield statement.
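A stripped-down sketch of that generator-driven loop (all names here are mine; the real framework described above is more involved):

using System.Collections.Generic;

public class AppEvent
{
    public string Name;
    public double Value;
}

public interface IExperiment
{
    bool IsComplete { get; }      // the experiment decides when it has enough data
    void Observe(AppEvent e);     // filter, update statistics, manipulate a property
}

public static class ExperimentRunner
{
    // The application's main method is written as a generator; the framework
    // pulls events from it only while experiments still need data.
    public static void Run(IEnumerable<AppEvent> mainLoop, List<IExperiment> experiments)
    {
        foreach (var e in mainLoop)
        {
            experiments.RemoveAll(x => x.IsComplete);
            if (experiments.Count == 0)
                return;           // stop the application when all experiments are done
            foreach (var experiment in experiments)
                experiment.Observe(e);
        }
    }
}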
I am still refining the framework and rethinking parts of it. I intend to make it an open-source project at some point in the future.
Google's "Overlapping Experiment Infrastructure" paper is an excellent read on the subject.
I think by "jiggle functionality" you mean substitute different functionality based on configuration so that you can experiment things.
If this is the case, then you should look into polymorphism (most likely via interfaces) and perhaps you'd also frameworks and design patterns associated with inversion of control.
Let's assume I am in charge of developing a Scrabble game, one of the client's principal requirements being the ability to later try out different ways and modes of the game. I have already made a design that is flexible enough to support those kinds of changes. The only question left is what to expose to the client (objects' access modifiers), and how to organize it (how to expose my objects in namespaces/packages).
How should I define things such that the client can easily use my standard implementation (a standard Scrabble game) and yet be able to make all the modifications that he wants? I guess what I need is a kind of framework, on which he can work.
I organized my classes/interfaces in a non-strict layered system:
Data Types
Contains basic data types that might be used in the whole system. This package and its members can be accessed by anyone in the system. All its members are public.
Domain
Contains all the interfaces I've defined that might be useful for building new Scrabble implementations on the client's side. Also contains value types, like Piece, that are used in the game. All its members are public.
Implementations
Contains all the classes/code needed to implement my standard Scrabble game, in an Implementations.StandardScrabble package. If the client decides to implement other variants of the game, he can create them in Implementations.XYZ, for example.
These classes are all package protected and the only thing that is available to the outside of the package is a Game façade. Uses both Domain and Data Types packages.
UI
Contains the UI class that I have implemented so that both the client and the users of the program can run the game (my implementation). Can access all the other layers.
There are several drawbacks to the way I am organizing things, the most obvious being that if the client wants to create his own version of the game, he will have to implement almost everything by himself (I share the interfaces in Domain, but he can do almost nothing with them). I feel I should maybe move all the Implementation classes into Domain, and then have only a façade in the Implementations namespace that builds up my standard Scrabble?
How would you approach this? Is there any recommended reading on how to build this kind of program (basically, frameworks)?
Thanks
I think you're trying to give too much freedom to the client; that must make things difficult for you to handle. Based on what you have described, it seems the client would be able to modify almost all parts of your game - model, logic, UI... I think it would be better to restrict the modifiable areas of your application but expose some of them via a general plugin interface set. This would make things easier for the user as well - he will only need to learn how the plugins work, not the entire application's logic. Define areas for your plugins if you want - a UI plugin, a game mode plugin, and so on. Many production applications and games work this way (recall Diablo II and that AMAZING variety of plugins it has!).
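As a sketch, such a plugin surface could be quite small (all the names here are illustrative):

public class Move { /* tiles placed, board squares, etc. */ }

public interface IGameModePlugin
{
    string Name { get; }
    bool IsMoveLegal(Move move);   // alternative placement rules
    int ScoreMove(Move move);      // alternative scoring rules
}

// The host game consults the active plugin instead of hard-coding the rules,
// so clients extend the game by writing plugins rather than rebuilding the model.
public class Game
{
    private readonly IGameModePlugin mode;
    public Game(IGameModePlugin mode) { this.mode = mode; }

    public bool TryPlay(Move move)
    {
        if (!mode.IsMoveLegal(move)) return false;
        int score = mode.ScoreMove(move);
        // ... apply the move and score to the game state ...
        return true;
    }
}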
For the algorithms and strategies, I would define interfaces and default implementations, and provide abstract superclasses extended by your own implementations, so that all the boilerplate code is in the abstract superclass. In addition, I would allow the client to subclass your implementations. Make more than one implementation yourself, and you'll see what to place where.
But most important: Give your client the code. If he needs to understand where to place his code, he should be able to see what you have coded, too. No need to hide stuff.
Whatever design you come up with, I would err on the side of hiding as much of the implementation as possible. Once you expose an implementation, you cannot take it back (unless you're ready to wage a flame war with your client base). You can always provide default implementations later as you see fit.
Generally, I'd start with only providing thin interfaces. Then, before providing abstract classes, I might offer utility classes (e.g., Factories, Builders, etc.).
I'd recommend reading Effective Java by Josh Bloch for useful general practices when designing object-oriented code.
MVC/Compound Pattern
You may release an earlier version of your package and upgrade it later based on user requirements.
If you use MVC or another compound pattern wisely, I believe you will also be able to upgrade your package easily.
I want to implement a workflow system on a new website which I am developing. Basically I have an order object (in the future there may be many more objects) which can have different statuses, i.e. initial, assigned, dispatched, cancelled, etc. An order can only go from one status to certain others, e.g. it can go from assigned to dispatched, but it can't go from initial to dispatched. I am hoping that someone can suggest the best approach to take for something like this.
Try Windows Workflow Foundation, though it might be overkill for your application.
If your workflow system is that simple and you do not expect it to evolve much, you could use regular objects with an enumerated type and a dictionary/list of statuses.
The type and value together will give you the current status and the list of available actions. Persistence of the workflow objects will also be very easy.
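A minimal sketch of that idea (the status names come from the question; the rest is illustrative):

using System;
using System.Collections.Generic;

public enum OrderStatus { Initial, Assigned, Dispatched, Cancelled }

public class Order
{
    // Allowed transitions: current status -> statuses it may move to.
    private static readonly Dictionary<OrderStatus, OrderStatus[]> Transitions =
        new Dictionary<OrderStatus, OrderStatus[]>
        {
            { OrderStatus.Initial,    new[] { OrderStatus.Assigned, OrderStatus.Cancelled } },
            { OrderStatus.Assigned,   new[] { OrderStatus.Dispatched, OrderStatus.Cancelled } },
            { OrderStatus.Dispatched, new OrderStatus[0] },
            { OrderStatus.Cancelled,  new OrderStatus[0] },
        };

    public OrderStatus Status { get; private set; }

    public Order() { Status = OrderStatus.Initial; }

    public void MoveTo(OrderStatus next)
    {
        if (Array.IndexOf(Transitions[Status], next) < 0)
            throw new InvalidOperationException(
                "Cannot go from " + Status + " to " + next + ".");
        Status = next;
    }
}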
We have a server written in C# (.NET Framework 3.5 SP1). Customers write client applications using our server API. Recently, we created several license levels, like Basic, Intermediate and All. If you have a Basic license then you can call only a few methods of our API. Similarly, if you have Intermediate you get some extra methods to call, and if you have All then you can call all the methods.
When the server starts, it gets the license type. Now, in each method, I have to check the type of license and decide whether to proceed with the function or return.
For example, a method InterMediateMethod() can only be used with an Intermediate or All license. So I have to do something like this:
public void InterMediateMethod()
{
    if (licenseType == "Basic")
    {
        throw new Exception("Access denied");
    }
}
This looks like a very lame approach to me. Is there a better way to do it? Is there a declarative way, perhaps by defining some custom attributes? I looked at creating a custom CodeAccessSecurityAttribute but did not have much success.
Since you are adding the "if" logic in every method (and god knows what else), you might find it easier to use PostSharp (an AOP framework) to achieve the same thing, but personally, I don't like either approach...
I think it would be much cleaner if you maintained three different branches (source code), one for each license. That may add a little overhead in terms of maintenance (maybe not), but at least it keeps things clean and simple.
I'm also interested what others have to say about it.
Good post, I like it...
Possibly one easy and clean approach would be to add a proxy API that duplicates all your API methods and exposes them to the client. When called, the proxy would either forward the call to your real method or return a "not licensed" error. The proxies could be built as three separate (basic, intermediate, all) classes, and your server would create an instance of the appropriate proxy for the client's licence. This has the advantage of minimal performance overhead, because you only check the licence once. You may not even need a proxy for the "all" level, so it gets maximum performance. It may be hard to slip this in depending on your existing design, though.
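A sketch of that proxy idea (the API shape here is invented for illustration):

using System;

public interface IServerApi
{
    void BasicMethod();
    void IntermediateMethod();
}

// The full implementation, handed out for licences that allow everything.
public class ServerApi : IServerApi
{
    public void BasicMethod() { /* real work */ }
    public void IntermediateMethod() { /* real work */ }
}

// The proxy handed out for Basic licences: the licence is enforced once,
// here, instead of being checked inside every method.
public class BasicLicenceProxy : IServerApi
{
    private readonly IServerApi _inner;
    public BasicLicenceProxy(IServerApi inner) { _inner = inner; }

    public void BasicMethod() { _inner.BasicMethod(); }

    public void IntermediateMethod()
    {
        throw new InvalidOperationException("Not licensed for this method.");
    }
}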
Another possibility may be to redesign and break up your APIs into basic/intermediate/all "sections", and put them in separate assemblies, so the entire assembly can be enabled/disabled by the licence, and attempting to call an unlicensed method can just return a "method not found" error (e.g. a TypeLoadException will occur automatically if you simply haven't loaded the needed assembly). This will make it much easier to test and maintain, and again avoids checking at the per-method level.
If you can't do this, at least try to use a more centralised system than an "if" statement hand-written into every method.
Examples (which may or may not be compatible with your existing design) would include:
Add a custom attribute to each method and have the server dispatch code check this attribute using reflection before it passes the call into the method (sketched after this list).
Add a custom attribute to mark the method, and use PostSharp to inject a standard bit of code into the method that will read and test the attribute against the licence.
Use PostSharp to add code to test the licence, but put the licence details for each method in a more data driven system (e.g. use an XML file rather than attributes to describe the method permissions). This will allow you to easily change the licensing across the entire server by editing a single file, and allow you to easily add whole new levels or types of licences in future.
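For the first idea, a minimal attribute-plus-reflection sketch might look like this (the attribute name and the dispatch shape are invented; the level comparison is deliberately simplistic):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresLicenseAttribute : Attribute
{
    public string Level { get; private set; }
    public RequiresLicenseAttribute(string level) { Level = level; }
}

public class Api
{
    [RequiresLicense("Intermediate")]
    public void IntermediateMethod() { /* real work */ }
}

public static class Dispatcher
{
    // Called by the server's dispatch code before invoking the target method.
    public static void CheckLicense(object target, string methodName, string currentLicense)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        object[] attrs = method.GetCustomAttributes(typeof(RequiresLicenseAttribute), true);
        foreach (RequiresLicenseAttribute attr in attrs)
        {
            // Illustrative check: a real system would rank licence levels properly.
            if (attr.Level != currentLicense && currentLicense != "All")
                throw new InvalidOperationException("Access denied: requires " + attr.Level);
        }
    }
}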
Hope that gives you some ideas.
You might really want to consider buying a licensing solution rather than rolling your own. We use Desaware and are pretty happy with it.
Doing licensing at the method level is going to take you into a world of hurt. Maintenance on that would be a nightmare, and it won't scale at all.
You should really look at componentizing your product. Your code should roughly fall into "features", which can be bundled into "components". The trick is to make each component do a license check, and have a licensing solution that knows if a license includes a component.
Components for our products are generally on the assembly level, but for our web products they can get down to the ASP.Net server control level.
I wonder how people license SOA services. They could be licensed per service or per endpoint.
That could be very hard to maintain.
You can try using the strategy pattern.
This can be your starting point.
I agree with the answer from @Ostati that you should keep 3 branches of your code.
What I would further add is that I would then expose 3 different services (preferably WCF services) and issue certificates that grant access to a specific service. That way, if anyone tried to access the higher-level functionality, they would simply not be able to access the service, period.