I am considering using Elsa Workflows for a project, but I couldn't find any examples or documentation on how to use it in client applications (Xamarin.Forms/Blazor WASM). My idea is basically to define workflows that also include screen transitions in the client apps. Is this a relevant scenario for Elsa, or am I not getting it? I understand that there is some REST API available, but I have no idea how to use it.
This great article explains how to use it in ASP.NET/back-end scenarios: https://sipkeschoorstra.medium.com/building-workflow-driven-net-core-applications-with-elsa-139523aa4c50
That's a great use case for Elsa and is something I am planning to create a sample application + guide for. So far, there are guides and samples about executing long-running "back-end" processes using Elsa, but there is no reason one couldn't also use it to implement application navigation logic, such as wizards whose steps are implemented as individual screens.
So that's your answer: yes, it is a relevant scenario. But it is unfortunate that there are no concrete samples to point you to at the moment.
Barring any samples, here's how it might work in a client application:
The client application has Elsa services configured.
Whether you decide to store workflows within the app (as code or JSON) or on a remote Elsa Server instance doesn't matter - once you have a workflow in memory, you can execute it.
Since your workflows will be driving UI, you have to think about how tightly coupled the workflow will be with that UI. For example, a tightly coupled workflow might include activities that represent the views (by name) to present, including transition configuration if that is something to be configured, and outcomes based on what buttons were clicked. A loosely coupled workflow, on the other hand, might act more as a "conductor" or orchestrator of actions and events, where the workflow consists of nothing more than a handful of primitives such as "SendCommand" and "Event Received": a "SendCommand" simply raises some application event with a task name that your application then handles, while "Event Received" works the other way around - your application fires instructions at Elsa, and Elsa drives the workflow forward. A task might be a "Navigate" instruction with the next view name provided as a parameter.
The "SendCommand" and "EventReceived" activities are very new and part of the Elsa 2.1 preview packages. Right now they are directly coupled to webhook scenarios (where the commands are sent in the form of HTTP requests to an external application), but the goal is to have various strategies in place: outgoing HTTP requests would just be one of them; another might be a simple mediator pattern for in-process scenarios such as your client application.
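To make that concrete, here is a minimal sketch of what the in-process, loosely coupled approach could look like on the application side. None of this is Elsa API - the interfaces and names below are purely hypothetical placeholders for whatever mediator your app ends up using:

    using System.Threading.Tasks;

    // Hypothetical application-side abstractions (not part of Elsa).
    public interface INavigationService
    {
        Task NavigateToAsync(string viewName);   // e.g. push a Xamarin.Forms page or change a Blazor route
    }

    public interface IAppCommandHandler
    {
        Task HandleAsync(string command, object argument);
    }

    // The workflow's "SendCommand" activity would ultimately call into a handler like this,
    // and the app would report button clicks etc. back to Elsa so "Event Received" can resume the workflow.
    public class NavigationCommandHandler : IAppCommandHandler
    {
        private readonly INavigationService _navigation;

        public NavigationCommandHandler(INavigationService navigation)
        {
            _navigation = navigation;
        }

        public Task HandleAsync(string command, object argument)
        {
            if (command == "Navigate")
                return _navigation.NavigateToAsync((string)argument);   // argument carries the next view name

            return Task.CompletedTask;
        }
    }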
UPDATE
To pull workflows created in the designer into your client app, you need to get the workflow definition via the following API endpoint:
http(s)://your-elsa-server/v1/workflow-definitions/{workflow-definition-id}/Published
What you'll get back is JSON representing the workflow definition, which you can deserialize using IContentSerializer.Deserialize<WorkflowDefinition>, giving you a WorkflowDefinition. But to actually run a workflow, you need a workflow blueprint. To turn the workflow definition into a blueprint, use IWorkflowBlueprintMaterializer.CreateWorkflowBlueprintAsync(WorkflowDefinition), which will give you a blueprint that can then be executed using e.g. IStartsWorkflow.StartWorkflowAsync(IWorkflowBlueprint).
There are various other services that make it more convenient to construct and run workflows.
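Putting those steps together, a rough sketch of what this could look like in the client app follows. The namespaces and exact method signatures are assumptions based on the Elsa 2 packages and may differ slightly in the version you use, and the HttpClient wiring is purely illustrative:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Elsa.Models;            // WorkflowDefinition (namespace assumed)
    using Elsa.Serialization;     // IContentSerializer (namespace assumed)
    using Elsa.Services;          // IWorkflowBlueprintMaterializer, IStartsWorkflow (namespace assumed)

    public class RemoteWorkflowRunner
    {
        private readonly HttpClient _http;
        private readonly IContentSerializer _serializer;
        private readonly IWorkflowBlueprintMaterializer _materializer;
        private readonly IStartsWorkflow _starter;

        public RemoteWorkflowRunner(
            HttpClient http,
            IContentSerializer serializer,
            IWorkflowBlueprintMaterializer materializer,
            IStartsWorkflow starter)
        {
            _http = http;
            _serializer = serializer;
            _materializer = materializer;
            _starter = starter;
        }

        public async Task RunAsync(string workflowDefinitionId)
        {
            // 1. Fetch the published workflow definition JSON from the Elsa server.
            var json = await _http.GetStringAsync($"v1/workflow-definitions/{workflowDefinitionId}/Published");

            // 2. Deserialize the JSON into a WorkflowDefinition.
            var definition = _serializer.Deserialize<WorkflowDefinition>(json);

            // 3. Materialize the definition into an executable blueprint.
            var blueprint = await _materializer.CreateWorkflowBlueprintAsync(definition);

            // 4. Run the blueprint.
            await _starter.StartWorkflowAsync(blueprint);
        }
    }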
To make this as frictionless as possible for your client app, you could consider simply implementing IWorkflowProvider, of which we currently have 3 out of the box:
ProgrammaticWorkflowProvider: provides workflow blueprints based on the workflows coded with the fluent Workflow Builder API.
DatabaseWorkflowProvider: provides blueprints based on those stored in the database (JSON models stored by the designer).
StorageWorkflowProvider: provides blueprints based on JSON files stored on some hard drive or blob storage such as Azure Blob Storage.
What you might do - and in fact what I think we should provide out of the box now that you made me think of it - is create a fourth provider that fetches workflows from those API endpoints.
Then your client app should not have to be bothered with invoking the Elsa API - the provider does it for you.
We have developed an application in C# .NET that synchronises data (customers, orders) to a PHP e-commerce application using SOAP.
The WSDL of the PHP application is added as a .NET 2.0 web reference to our application, so the .NET Framework generates classes and functions to communicate with the SOAP web service.
For instance, we are able to send stock information like this:
catalogInventoryStockItemUpdateEntity stock = new catalogInventoryStockItemUpdateEntity();
stock.is_in_stock = 1;
stock.is_in_stockSpecified = true;
stock.qty = "10";
webserv.catalogInventoryStockItemUpdate(sessionid, itemcode, stock);
This works fine; however, we frequently run into situations where one of our customers has additional (non-standard) fields defined in the WSDL and wants these fields to be used in the synchronisation.
Our current practice is to create a new branch of our code for this customer and update the web reference to use the specific WSDL of our customer.
To prevent us from ending up with a long, unmaintainable list of branches of our software, I'm planning to do a complete overhaul of the structure of our application.
Now I am wondering what would be the best structure to handle this. I was thinking of putting the web reference in its own class and loading this DLL dynamically, so if a customer has a non-standard WSDL we could create its own class and load it as a 'plug-in' into our software. But the additional fields in (for instance) catalogInventoryStockItemUpdate would then still not be available in the main part of our application.
Are there any tools that might help in achieving this? I would like to have one main application for synchronisation and put all customer specific mappings and references to the WSDL in a separate class/project.
First of all, for adding plugin support to your app you can use the Managed Extensibility Framework (MEF). If you're constrained to .NET 2.0, there are other custom ways of discovering and loading plugins (through separate app domains, or by loading them straight into the primary app domain).
As for the design, I would make each plugin:
Hold the service reference to its particular web service instance.
Apply any assignments or logic particular to that service; for example, assign 10 to stock.qty.
Provide callbacks/events the application could use to interfere with the logic implemented in the plugin. For example, you could have the plugins expose an event called BeforeStockSubmitted and you could do some validation or checks in the app on the data being submitted to the service.
Your plugin host (the application, or a module of it) should:
Expose a consistent object model for all plugins. You should offer a certain degree of abstraction for all entities the plugins will work with (such as sessionId, stock, etc).
Data coming into the plugins should be abstracted as well. So you can have an IStockInfo interface in the host and each plugin should be constrained to provide their own implementation. The host can populate the common properties of these objects while the plugin takes care of the specific part.
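To illustrate that shape, here is a small sketch of what the host-owned contracts might look like - all names are made up for illustration, not an existing framework:

    using System;

    // Abstraction the host populates with the common stock data.
    public interface IStockInfo
    {
        string ItemCode { get; set; }
        decimal Quantity { get; set; }
        bool IsInStock { get; set; }
    }

    public class StockSubmittingEventArgs : EventArgs
    {
        public IStockInfo Stock { get; set; }
        public bool Cancel { get; set; }   // lets the host veto or adjust the submission
    }

    // Each customer-specific plugin implements this against its own generated WSDL proxy.
    public interface ISyncPlugin
    {
        // Raised before the plugin calls the customer's web service (cf. BeforeStockSubmitted above).
        event EventHandler<StockSubmittingEventArgs> BeforeStockSubmitted;

        // The plugin maps IStockInfo onto its customer-specific SOAP types and submits it.
        void SubmitStock(string sessionId, string itemCode, IStockInfo stock);
    }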
What architecture and patterns can I use to share the most model and logic code between a WPF and an ASP.NET MVC application?
I am trying to achieve a bit more here than just separating my data entities from the two presentation projects. There is a lot more in common, e.g. UI logic about what gets displayed under what conditions, when something is required, etc., that I would like to keep in the shared code.
ADDED: I am just beginning to really like the concept of view models independent of my entity model driving my presentation. While some of the annotations used in these are located in assemblies specific to MVC, none of the metadata provided is actually web specific. I would very much like to explore using my MVC view models as data sources for binding to WPF views. Any suggestions on this front will be most appreciated.
My personal favorite configuration is similar to the one Adam King suggested above but I like to keep the logic DLL as part of the web project. I run a project called CT Terminal that follows this pattern. My Terminal.Domain project contains all the application logic and simply returns a CommandResult object with properties that act as instructions to tell the UI project what to do. The UI is completely dumb and only processes what it's told to by the Domain project.
Now, following Adam King's approach I would then slap that Domain DLL into a WPF app and then code the UI to follow the instructions in my returned CommandResult object. However, I prefer a different approach. I wrote the MVC 3 UI to expose a JSON API. This API can be consumed by any application. The JSON API was simple because it was basically a wrapper around my Terminal.Domain project CommandResult object. The JSON returned would have the same basic properties. In this way I would write the WPF app to consume this API rather than the DLL. Now if I make minor changes to internal application logic I just deploy the Web project to the live server. All clients using the API automatically get this new logic.
Obviously if the changes being made affect the properties being returned from the API then that would require a release of new client code, but at least for internal logic you wouldn't have to do that.
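A rough sketch of that JSON wrapper idea - the controller and member names here are invented, but the pattern is just "serialize the CommandResult the Domain project already returns":

    using System.Web.Mvc;

    public class CommandApiController : Controller
    {
        // Hypothetical entry point into the Terminal.Domain project.
        private readonly TerminalCore _terminal = new TerminalCore();

        public JsonResult Execute(string commandString)
        {
            // The Domain project does all the work and returns a CommandResult.
            var result = _terminal.ExecuteCommand(commandString);

            // Serialize it as-is; the WPF client reads the same properties the MVC UI does.
            return Json(result, JsonRequestBehavior.AllowGet);
        }
    }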
One of the most widely used patterns seems to be having the Entities in a separate DLL assembly, then having this referenced from each of the other projects.
MVC 3 suits the repository pattern very nicely, which can be a clean route to take in the first instance, and will work for both WPF and ASP.NET.
I actually found Rocky Lhotka's books, software, and videos on this topic very helpful. Here's a few links to his content:
http://www.lhotka.net/
http://channel9.msdn.com/Events/Speakers/Rockford-Lhotka
http://www.amazon.com/Expert-C-2008-Business-Objects/dp/1430210192/ref=sr_1_2?s=books&ie=UTF8&qid=1331834548&sr=1-2
Create a service layer for your application by specifying interfaces with methods that represent all of the operations you need to perform. Also, in this service layer, define all of the data types used by the application. Those data type classes should contain only properties, not operations. Put these interfaces and classes in an assembly all by itself. This assembly should be shared between your web app, WPF app, and the code that implements it.
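For example, the shared assembly might contain nothing more than the following (the names are made up for illustration):

    using System.Collections.Generic;

    // Data type: properties only, no operations.
    public class OrderDto
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    // Operations the application needs, independent of any UI technology.
    public interface IOrderService
    {
        IList<OrderDto> GetOpenOrders();
        void SubmitOrder(OrderDto order);
    }

Both the ASP.NET MVC controllers and the WPF view models then take a dependency on IOrderService, while the implementation lives in its own project.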
Finally, once you have this separation, you can freely develop the application's internal structure and leave the responsibility for UI operations (e.g. what happens when you click the xyz button) to the respective UI.
As an aside, you can expose your service layer via WCF and web services. You can use this to make calls from the web browser via JavaScript - things like client-side validation or even looking up values on the fly for drop-down population - all while reusing it between your two applications.
Starting with the obvious: encapsulate your business logic and domain model in a separate assembly.
In terms of presentation layers and shared UI behaviour, the closest you will get is the MVVM design paradigm; the implementation will be C#/XAML in WPF and JavaScript for your ASP.NET MVC web front end.
For the web front end you can get close to the WPF (MVVM) way of doing things with http://knockoutjs.com/, written by Steve Sanderson of Microsoft. It's MVVM for the browser. Also check out http://www.asp.net/mvc/mvc4 for more info.
Use Web API, and let both the WPF and the web application consume the services from Web API.
Done.
Did you try using Portable Class Libraries? With these you can build the data layer once and use it in ASP.NET MVC, WPF, Windows Phone, and Silverlight.
I'm working on a system right now that is in a single process space; we are breaking this up into several processes, initially to run on the same box but ultimately to distribute across several separate machines. I'm leaning towards using an ESB (NServiceBus, Rhino ESB) or possibly rolling my own with WCF + queues to handle the pub/sub and request/response scenarios our app has.
However, I'm struggling with the abstraction: I don't want the various components to know they are talking over the bus. The current APIs connecting the various services translate pretty well to this kind of model, but I want to hide that from the client and server sides. Short of writing a lot of custom proxy code for the client and server, is there a better way to approach this? I realize WCF can auto-generate proxies based on the service definition, but I really like some of the other stuff I get with (say) rhino servicebus.
Ideally, I'd like to be able to swap out different implementations (with and without an ESB/messaging layer) just using IoC (knowing there would have to be limits enforced by convention on what can be passed across the interfaces), but I'm not sure where to go with that. I'd really prefer to not have to change every method call on the current interfaces into its own discrete message class, either.
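To illustrate what I mean (the names below are invented), I'd like the components to depend only on a plain interface, with the bus hidden behind one of the registered implementations:

    // What the components see - no hint that a bus may be involved.
    public interface IInventoryService
    {
        void ReserveStock(string itemCode, int quantity);
    }

    // In-process implementation, used while everything lives in one process.
    public class LocalInventoryService : IInventoryService
    {
        public void ReserveStock(string itemCode, int quantity)
        {
            // direct call into the existing component
        }
    }

    // Message and transport abstractions - placeholders for NServiceBus/Rhino ESB/WCF + queues.
    public class ReserveStockMessage
    {
        public string ItemCode { get; set; }
        public int Quantity { get; set; }
    }

    public interface IMessageBus
    {
        void Send(object message);
    }

    // Bus-backed implementation I could swap in via the IoC container.
    public class BusInventoryService : IInventoryService
    {
        private readonly IMessageBus _bus;

        public BusInventoryService(IMessageBus bus)
        {
            _bus = bus;
        }

        public void ReserveStock(string itemCode, int quantity)
        {
            _bus.Send(new ReserveStockMessage { ItemCode = itemCode, Quantity = quantity });
        }
    }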
Any resources/patterns/tools to help me do this? Please ask questions if I'm not clear. Thanks.
There may not be a single off-the-shelf solution or component that helps you here.
Problem 1:
The basic problem can be solved via an ESB, as it provides location transparency and service aggregation. A regular ESB mediates/brokers requests between service consumer and service provider.
Take a simple example:
Service_A depends on Service_B
Service_C depends on Service_B
Service_B depends on Service_D
In this scenario, the best way to progress is this:
Define the contracts exposed by Service_B and Service_D as external dependencies (possibly as web services, though an ESB supports multiple protocols) in Service_A, Service_C and Service_B, and consume them via the ESB.
In the ESB, to start with, route these services (Service_B and Service_D) to the same instance.
If you later migrate Service_B and Service_D as Service_Bx and Service_Dx to a different location, the ESB can be reconfigured to route to the new location. An ESB can also be configured to route to Service_B or Service_Bx based on some set of parameters (e.g., test data to Service_B and production data to Service_Bx).
Problem 2:
The IoC part could be hard to achieve, and there may not be a need for it.
I presume the clients, instead of consuming from a known location, would be injected with the whereabouts of the service location. In reality this moves the configuration to the client side, so every new client added to the system needs separate configuration control, which might lead to logistical issues.
Please post your final solution, very interested to know your approach.
I have descriptions of my Application Services in my own classes (for simplification, a ServiceDescription class that contains a collection of ServiceMethod descriptions).
Now, I want to expose one Application Service as one WCF Service (one Contract). The current solution is rather lame - I have a console application that generates a *.svc file for each Application Service (ServiceDescription), with one method (Operation) generated per ServiceMethod.
This works well, but I would like to make it better. It could be improved using a T4 template, but I'm sure there is still a better way in WCF.
I would still like to have one *.svc file per Application Service, but I don't want to generate the methods (for the corresponding Application Service methods).
I'm sure there must be some interfaces that allow operations to be discovered dynamically, at runtime. Maybe IContractBehavior...
Thanks.
EDIT1:
I don't want to use a generic operation contract because I would like to keep the ability to generate a service proxy with all operations.
I'm sure that if I write a WCF service and its operations by hand, WCF uses reflection to discover the operations in the service.
Now, I would like to customize that point so that, instead of reflection, it uses my own "operation discovery code".
I think there is nothing wrong with static code generation in that case. In my opinion, it is a better solution than dynamic generation of contracts. Keep in mind that your contract is the only evidence you have/provide that a service is hosting a particular set of operations.
The main issue I see about the dynamic approach is about versioning and compatibility. If everything is dynamically generated, you may end up transparently pushing breaking changes into the system and create some problems with existing clients.
With a code generator in place, when you plan to implement changes in the application services it will be easier to remember that the changes you make to the services may have a huge impact.
But if you really want to dynamically handle messages, you could use a generic operation contract (with the Action property set to *), and manually route the messages to the application services.
Keep in mind that you would lose the ability to generate, from the service, a proxy containing the list of available operations.
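For reference, the catch-all contract mentioned above looks roughly like this; how you then dispatch the Message to your ServiceMethod descriptions is up to your own routing code:

    using System.ServiceModel;
    using System.ServiceModel.Channels;

    [ServiceContract]
    public interface IGenericApplicationService
    {
        // Action = "*" means this single operation receives every incoming message,
        // so the implementation can inspect it and route to the right application service method.
        [OperationContract(Action = "*", ReplyAction = "*")]
        Message ProcessMessage(Message request);
    }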
I'm in the process of designing a system that will allow me to represent broad-scope tasks as workflows, which expose their workitems via an IEnumerable method. The intention here is to use C#'s 'yield' mechanism to allow me to write pseudo-procedural code that the workflow execution system can execute as it sees fit.
For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow:
public override IEnumerable<WorkItem> Workflow() {
    // These would probably be injected from elsewhere
    var db = new DB();
    var emailServer = new EmailServer();
    // other workitems here
    var ci = new FindLowInventoryItems(db);
    yield return ci;
    if (ci.LowInventoryItems.Any()) {
        var email = new SendEmailToWarehouse("Inventory is low.", ci.LowInventoryItems);
        yield return email;
    }
    // other workitems here
}
FindLowInventoryItems and SendEmailToWarehouse are objects deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called by the workflow framework - I have a WorkflowRunner class that enumerates Workflow(), wraps pre- and post-events around each workitem, and calls Execute() between those events. This allows the consuming application to do whatever it needs before or after workitems, including cancelling, changing workitem properties, etc.
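A minimal sketch of those pieces, with the event mechanism simplified to plain delegates (the actual design in the question may differ):

    using System;
    using System.Collections.Generic;

    public abstract class WorkItem
    {
        public bool Cancelled { get; set; }   // the consuming app can set this in the pre-event
        public abstract void Execute();
    }

    public class WorkflowRunner
    {
        public event Action<WorkItem> BeforeExecute;
        public event Action<WorkItem> AfterExecute;

        public void Run(IEnumerable<WorkItem> workflow)
        {
            foreach (var item in workflow)         // lazily pulls the next workitem out of the iterator
            {
                BeforeExecute?.Invoke(item);       // consumer may tweak properties or cancel here
                if (item.Cancelled)
                    continue;

                item.Execute();
                AfterExecute?.Invoke(item);
            }
        }
    }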
The benefit to all this, I think, is that I can express the core logic of a task in terms of the workitems responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also, because I'm using IEnumerable and the C# syntactic sugar that supports it, I can compose these workflows - higher-level workflows that consume and manipulate sub-workflows. For example, I wrote a simple workflow that just interleaves two child workflows.
My question is this - does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me - self-documenting code (the workflow reads procedurally, so I know what will be executed in what steps), separation of concerns (finding low inventory items does not depend on sending email to the warehouse), etc. Also - are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before - am I just re-discovering this?
Personally, this would be a "buy before build" decision for me. I'd buy something before I'd write it.
I work for a company that's rather large and can be foolish with its money, so if you're writing this for yourself and can't buy something I'll retract the comment.
Here are a few random ideas:
I'd externalize the workflow into a configuration that I could read in on startup, maybe from a file or a database.
It'd look something like a finite state machine with states, transitions, events, and actions (see the sketch after this list).
I'd want to be able to plug in different actions so I could customize different flows on the fly.
I'd want to be able to register different subscribers who would want to be notified when a particular event happened.
I wouldn't expect to see anything as hard-coded as that e-mail server. I'd rather encapsulate that into an EmailNotifier that I could plug into events that demanded it. What about a beeper notification? Or a cell phone? Blackberry? Same architecture, different notifier.
Do you want to include a handler for human interaction? All the workflows that I deal with are a mix of human and automated processing.
Do you anticipate wanting to connect to other systems, like databases, other apps, web services?
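Here is a very rough illustration of the externalized, FSM-like shape described above; in practice the transition table would be loaded from a file or database, and the actions would be the pluggable notifiers/handlers mentioned in the list (all names are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Transition
    {
        public string FromState { get; set; }
        public string OnEvent { get; set; }
        public string ToState { get; set; }
        public Action Action { get; set; }   // pluggable action, e.g. an EmailNotifier or BeeperNotifier
    }

    public class StateMachineWorkflow
    {
        private readonly IList<Transition> _transitions;
        public string CurrentState { get; private set; }

        public StateMachineWorkflow(string initialState, IList<Transition> transitions)
        {
            CurrentState = initialState;
            _transitions = transitions;
        }

        public void Fire(string eventName)
        {
            // Find the transition matching the current state and the raised event.
            var transition = _transitions.FirstOrDefault(
                t => t.FromState == CurrentState && t.OnEvent == eventName);

            if (transition == null)
                return;

            transition.Action?.Invoke();          // run the configured action (notify, query, ...)
            CurrentState = transition.ToState;    // advance to the next state
        }
    }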
It's a tough problem. Good luck.
#Erik: (Addressing a comment about the applicability of my answer.) If you enjoy the technical challenge of designing and building your own custom workflow system then my answer is not helpful. But if you are trying to solve a real-world WF problem with code that needs to be supported into the future then I recommend using the built-in WF system.
Workflow support is now part of the .NET Framework and is called Windows Workflow Foundation (WF). It is almost certainly easier to learn how to use the built-in library than to write one of your own, as duffymo pointed out in his "buy before build" comment.
Workflows are expressed in XAML and are supported by a designer in Visual Studio.
There are three types of workflows (from Wikipedia, link below):
Sequential Workflow (typically flow-chart based; progresses from one stage to the next and does not step back)
State Machine Workflow (progresses from 'state' to 'state'; these workflows are more complex and can return to a previous point if required)
Rules-driven Workflow (implemented on top of a Sequential/State Machine workflow; the rules dictate the progress of the workflow)
Wikipedia: Windows Workflow Foundation
MSDN: Getting Started with Workflow Foundation (WF)
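For a taste of what WF code looks like without the XAML designer, here is a minimal code-only sequential workflow using the .NET 4 System.Activities API (the activities are trivial placeholders):

    using System.Activities;
    using System.Activities.Statements;

    class Program
    {
        static void Main()
        {
            // Build a simple sequential workflow in code.
            Activity workflow = new Sequence
            {
                Activities =
                {
                    new WriteLine { Text = "Step 1: check inventory" },
                    new WriteLine { Text = "Step 2: notify warehouse" }
                }
            };

            // Run it synchronously on the current thread.
            WorkflowInvoker.Invoke(workflow);
        }
    }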