We have developed an application in C# .NET that synchronises data (customers, orders) to a PHP e-commerce application using SOAP.
The WSDL of the PHP application is added as a .NET 2.0 web reference to our application, so the .NET Framework generates classes and functions to communicate with the SOAP web service.
For instance, we are able to send stock information like this:
// Types and members below are generated from the WSDL by the web reference.
catalogInventoryStockItemUpdateEntity stock = new catalogInventoryStockItemUpdateEntity();
stock.is_in_stock = 1;
stock.is_in_stockSpecified = true; // generated "Specified" flag; must be set for optional value-type fields
stock.qty = "10";
webserv.catalogInventoryStockItemUpdate(sessionid, itemcode, stock);
This works fine; however, we frequently run into situations where one of our customers has additional (non-standard) fields defined in the WSDL and wants these fields to be used in the synchronisation.
Our current practice is to create a new branch of our code for this customer and update the web reference to use the specific WSDL of our customer.
To prevent us from ending up with a long, unmaintainable list of branches of our software, I'm planning a complete overhaul of the application's structure.
Now I am wondering what would be the best structure to handle this. I was thinking of putting the web reference in its own class and loading this DLL dynamically, so if a customer has a non-standard WSDL we could create a separate class for it and load it as a 'plug-in' into our software. But the additional fields in (for instance) catalogInventoryStockItemUpdate would then still not be available in the main part of our application.
Are there any tools that might help in achieving this? I would like to have one main application for synchronisation and put all customer specific mappings and references to the WSDL in a separate class/project.
First of all, for adding plugin support to your app you can use the Managed Extensibility Framework (MEF). If you're constrained to .NET 2.0, there are other custom ways of discovering and loading plugins (through separate app domains, or by loading them straight into the primary app domain).
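As an illustration, plugin discovery with MEF can be as small as this; ISyncPlugin is a hypothetical plugin contract, sketched after the design points below:

using System.Collections.Generic;
using System.ComponentModel.Composition.Hosting;

public static class PluginLoader
{
    // Composes every ISyncPlugin export found in the given folder (MEF, .NET 4+).
    public static IEnumerable<ISyncPlugin> LoadPlugins(string path)
    {
        var catalog = new DirectoryCatalog(path);
        var container = new CompositionContainer(catalog);
        return container.GetExportedValues<ISyncPlugin>();
    }
}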
As for the design, I would make each plugin:
Hold the service reference to its particular web service instance.
Contain any assignments or logic particular to that service, for example assigning 10 to stock.qty.
Provide callbacks/events the application could use to interfere with the logic implemented in the plugin. For example, you could have the plugins expose an event called BeforeStockSubmitted and you could do some validation or checks in the app on the data being submitted to the service.
Your plugin host (the application, or a module of it) should:
Expose a consistent object model for all plugins. You should offer a certain degree of abstraction for all entities the plugins will work with (such as sessionId, stock, etc).
Data coming into the plugins should be abstracted as well. So you can have an IStockInfo interface in the host and each plugin should be constrained to provide their own implementation. The host can populate the common properties of these objects while the plugin takes care of the specific part.
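Putting those points together, the contracts might look roughly like this; all names here (ISyncPlugin, IStockInfo, the customer plugin) are illustrative, not a definitive design:

using System;
using System.ComponentModel.Composition;

// Abstractions owned by the host; plugins only see these.
public interface IStockInfo
{
    string ItemCode { get; set; } // common properties populated by the host
    decimal Qty { get; set; }
}

public class StockEventArgs : EventArgs
{
    public IStockInfo Stock { get; set; }
}

public interface ISyncPlugin
{
    // Raised before stock data is submitted, so the host can validate or veto.
    event EventHandler<StockEventArgs> BeforeStockSubmitted;

    IStockInfo CreateStockInfo(); // the plugin supplies its WSDL-specific implementation
    void UpdateStock(string sessionId, IStockInfo stock);
}

// A customer-specific plugin exports itself for MEF discovery.
[Export(typeof(ISyncPlugin))]
public class CustomerXPlugin : ISyncPlugin
{
    public event EventHandler<StockEventArgs> BeforeStockSubmitted;

    public IStockInfo CreateStockInfo() { return new CustomerXStockInfo(); }

    public void UpdateStock(string sessionId, IStockInfo stock)
    {
        var handler = BeforeStockSubmitted;
        if (handler != null) handler(this, new StockEventArgs { Stock = stock });
        // Map stock (including any customer-specific fields) onto the types
        // generated from this customer's WSDL and call the web service here.
    }

    private class CustomerXStockInfo : IStockInfo
    {
        public string ItemCode { get; set; }
        public decimal Qty { get; set; }
        public string SomeCustomField { get; set; } // non-standard WSDL field stays inside the plugin
    }
}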
I am considering using Elsa workflows for a project, but I couldn't find any examples or documentation on how to use it in client applications (Xamarin.Forms/Blazor WASM). My idea is basically to define workflows that also include screen transitions in the client apps. Is this a relevant scenario for Elsa, or am I not getting it? I understand that there is some REST API available, but I have no idea how to use it.
This great article explains how to use it in ASP.NET/backend scenarios https://sipkeschoorstra.medium.com/building-workflow-driven-net-core-applications-with-elsa-139523aa4c50
That's a great use case for Elsa and is something I am planning to create a sample application + guide for. So far, there are guides and samples about executing long-running "back-end" processes using Elsa, but there is no reason one couldn't also use it to implement application navigation logic, such as wizards consisting of steps implemented as individual screens, for example.
So that's your answer: yes, it is a relevant scenario. But it is unfortunate that there are no concrete samples to point you to at the moment.
Barring any samples, here's how it might work in a client application:
The client application has Elsa services configured.
Whether you decide to store workflows within the app (as code or JSON) or on a remote Elsa Server instance doesn't matter - once you have a workflow in memory, you can execute it.
Since your workflows will be driving UI, you have to think about how tightly coupled the workflow will be with that UI. For example, a tightly coupled workflow might include activities that represent the views (by name) to present, including transition configuration if that is something to be configured, and outcomes based on which buttons were clicked. A loosely coupled workflow, on the other hand, might act more as a "conductor" or orchestrator of actions and events, where the workflow consists of nothing more than a handful of primitives such as "SendCommand" and "Event Received": a "SendCommand" simply raises an application event with a task name that your application then handles. The "Event Received" activity works the other way around: your application fires instructions to Elsa, and Elsa drives the workflow. A task might be a "Navigate" instruction with the next view name provided as a parameter.
The "SendCommand" and "EventReceived" activities are very new and part of Elsa 2.1 preview packages. Right now they are directly coupled to webhook scenarios (where the commands are sent in the form of HTTP requests to an external application), but the goal is to have various strategies in place (HTTP out requests would just be one of them, another one might be a simple mediator pattern for in-process scenarios such as your client application one).
UPDATE
To retrieve workflows designed in the designer into your client app, you need to get the workflow definition via the following API endpoint:
http(s)://your-elsa-server/v1/workflow-definitions/{workflow-definition-id}/Published
What you'll get back is JSON representing the workflow definition, which you can deserialize using IContentSerializer.Deserialize<WorkflowDefinition>, giving you a WorkflowDefinition. But to actually run a workflow, you need a workflow blueprint. To turn the workflow definition into a blueprint, use IWorkflowBlueprintMaterializer.CreateWorkflowBlueprintAsync(WorkflowDefinition), which will give you a blueprint that can then be executed using e.g. IStartsWorkflow.StartWorkflowAsync(IWorkflowBlueprint).
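Putting those steps together, a hedged sketch (the service types are the ones named above and would come from Elsa's DI setup; exact signatures may differ between Elsa versions, and "my-definition-id" is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;

public class RemoteWorkflowRunner
{
    // Sketch: fetch a published workflow from an Elsa Server and run it locally.
    public async Task RunAsync(
        HttpClient httpClient,
        IContentSerializer serializer,
        IWorkflowBlueprintMaterializer materializer,
        IStartsWorkflow startsWorkflow)
    {
        var json = await httpClient.GetStringAsync(
            "https://your-elsa-server/v1/workflow-definitions/my-definition-id/Published");

        var definition = serializer.Deserialize<WorkflowDefinition>(json);          // JSON -> definition
        var blueprint = await materializer.CreateWorkflowBlueprintAsync(definition); // definition -> blueprint
        await startsWorkflow.StartWorkflowAsync(blueprint);                          // blueprint -> running workflow
    }
}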
There are various other services that make it more convenient to construct and run workflows.
To make this as frictionless as possible for your client app, you could consider simply implementing IWorkflowProvider, of which we currently have 3 out of the box:
ProgrammaticWorkflowProvider: provides workflow blueprints based on the workflows coded with the fluent Workflow Builder API.
DatabaseWorkflowProvider: provides blueprints based on those stored in the database (JSON models stored by the designer).
StorageWorkflowProvider: provides blueprints based on JSON files stored on some hard drive or blob storage such as Azure Blob Storage.
What you might do, and in fact what I think we should provide out of the box now that you made me think of it, is create a fourth provider that uses the API endpoints to fetch workflows.
Then your client app should not have to be bothered with invoking the Elsa API - the provider does it for you.
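Such a provider might look roughly like this; treat the IWorkflowProvider signature below as an assumption to verify against the Elsa version you use, and note that a real provider would enumerate definitions rather than fetch a single placeholder ID:

using System.Collections.Generic;
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Threading;

// Hypothetical HTTP-backed provider that pulls published workflows from an Elsa Server.
// Assumes httpClient.BaseAddress points at the Elsa Server.
public class RemoteWorkflowProvider : IWorkflowProvider
{
    private readonly HttpClient _httpClient;
    private readonly IContentSerializer _serializer;
    private readonly IWorkflowBlueprintMaterializer _materializer;

    public RemoteWorkflowProvider(
        HttpClient httpClient,
        IContentSerializer serializer,
        IWorkflowBlueprintMaterializer materializer)
    {
        _httpClient = httpClient;
        _serializer = serializer;
        _materializer = materializer;
    }

    public async IAsyncEnumerable<IWorkflowBlueprint> GetWorkflowsAsync(
        [EnumeratorCancellation] CancellationToken cancellationToken)
    {
        // "my-definition-id" is a placeholder.
        var json = await _httpClient.GetStringAsync("v1/workflow-definitions/my-definition-id/Published");
        var definition = _serializer.Deserialize<WorkflowDefinition>(json);
        yield return await _materializer.CreateWorkflowBlueprintAsync(definition);
    }
}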
What architecture and patterns can I use to share the most model and logic code between a WPF and an ASP.NET MVC application?
I am trying to achieve a bit more here than just separating my data entities from the two presentation projects. There is a lot more in common, e.g. UI logic on what gets displayed under what conditions, when something is required, etc., that I would like to keep in the shared code.
ADDED: I am just beginning to really like the concept of view models independent of my entity model driving my presentation. While some of the annotations used in these are located in assemblies specific to MVC, none of the metadata provided is actually web specific. I would very much like to explore using my MVC view models as data sources for binding to WPF views. Any suggestions on this front will be most appreciated.
My personal favorite configuration is similar to the one Adam King suggested above but I like to keep the logic DLL as part of the web project. I run a project called CT Terminal that follows this pattern. My Terminal.Domain project contains all the application logic and simply returns a CommandResult object with properties that act as instructions to tell the UI project what to do. The UI is completely dumb and only processes what it's told to by the Domain project.
Now, following Adam King's approach I would then slap that Domain DLL into a WPF app and then code the UI to follow the instructions in my returned CommandResult object. However, I prefer a different approach. I wrote the MVC 3 UI to expose a JSON API. This API can be consumed by any application. The JSON API was simple because it was basically a wrapper around my Terminal.Domain project CommandResult object. The JSON returned would have the same basic properties. In this way I would write the WPF app to consume this API rather than the DLL. Now if I make minor changes to internal application logic I just deploy the Web project to the live server. All clients using the API automatically get this new logic.
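As an illustration, that JSON wrapper can be as thin as a single MVC action; TerminalApi and ExecuteCommand are placeholder names standing in for the Terminal.Domain entry point described above:

using System.Web.Mvc;

public class CommandController : Controller
{
    private readonly TerminalApi _terminal = new TerminalApi(); // hypothetical domain entry point

    [HttpPost]
    public ActionResult Execute(string commandString)
    {
        // The domain returns UI-agnostic instructions in a CommandResult...
        CommandResult result = _terminal.ExecuteCommand(commandString);

        // ...which serializes naturally to JSON for any client (WPF, web, mobile).
        return Json(result);
    }
}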
Obviously if the changes being made affect the properties being returned from the API then that would require a release of new client code, but at least for internal logic you wouldn't have to do that.
One of the most widely used patterns seems to be having the Entities in a separate DLL assembly, which is then referenced from each of the other projects.
MVC 3 suits the repository pattern very nicely, which can be a clean route to take in the first instance, and will work for both WPF and ASP.NET.
I actually found Rocky Lhotka's books, software, and videos on this topic very helpful. Here's a few links to his content:
http://www.lhotka.net/
http://channel9.msdn.com/Events/Speakers/Rockford-Lhotka
http://www.amazon.com/Expert-C-2008-Business-Objects/dp/1430210192/ref=sr_1_2?s=books&ie=UTF8&qid=1331834548&sr=1-2
Create a service layer for your application by specifying interfaces with methods that represent all of the operations you need to perform. Also, in this service layer, define all of the data types used by the application. Those data type classes should contain only properties, not operations. Put these interfaces and classes in an assembly all by itself. This assembly should be shared between your web app, WPF app, and the code that implements it.
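A minimal sketch of that separation, with illustrative type names:

// Shared contracts assembly: interfaces plus plain data types, no operations.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool RequiresApproval { get; set; } // input for shared UI rules
}

public interface ICustomerService
{
    CustomerDto GetCustomer(int id);
    void SaveCustomer(CustomerDto customer);
}

The web app, the WPF app, and the implementation project all reference this assembly; only the implementation project knows how the operations are actually carried out.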
Finally, once you have this separation, you can freely develop the application's internal structure and leave the responsibility of UI operations (e.g. what happens when you click the xyz button) to the respective UI.
As an aside, you can expose your service layer via WCF and web services. You can use this to make calls from the web browser via JavaScript. You could do things like client-side validation or even look up values on the fly for drop-down population, all while reusing it between your two applications.
Starting with the obvious: encapsulate your business logic and domain model in a separate assembly.
In terms of presentation layers and shared UI behaviour, the closest you will get is the MVVM design paradigm; the implementation will be C# in WPF/XAML and JavaScript for your ASP.NET MVC web front end.
For the web front end you can get close to the WPF (MVVM) way of doing things with http://knockoutjs.com/, written by Steve Sanderson of Microsoft. It's MVVM for the browser. Also check out http://www.asp.net/mvc/mvc4 for more info.
Use Web API and let both the WPF and the web application consume the services from Web API.
Done.
Did you try using Portable Class Libraries? With these you can build the data layer and use it in ASP.NET MVC, WPF, Windows Phone, and Silverlight.
I am new to DDD and am currently trying to refactor a project towards a domain-driven architecture. The project has a client and a server side (ASMX web service). Now I have created a class library called "Domain" which is referenced by the client application as well as by the server.
Now I want my SOAP communication to be based on my domain model. But as you know, the ASMX web service creates some kind of proxy class library within the client as soon as a web reference is added.
This results in having each domain entity and value object twice under different namespaces.
Is it possible to use the domain model for communication directly and avoid the generation of the ASMX proxy classes?
How are DTOs used within DDD? As you know, some domain parts might not be serializable (e.g. NHibernate / IList usage), so in the past I often created simpler DTO versions of my entity classes. Is it common practice to define DTO entities within the domain?
If you were to use WCF, you could have the service interface and DTO classes in a shared assembly that both the client and the server referenced. Converting your project to use WCF may not be too hard, but there is a lot of learning to do before you get started.
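A hedged sketch of what that shared assembly could contain (type names are illustrative):

using System.Runtime.Serialization;
using System.ServiceModel;

// Shared assembly referenced by both client and server,
// so no proxy generation (and no duplicated namespaces) is needed.
[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int id);
}

On the client, ChannelFactory<IOrderService> can then create a typed channel directly against the shared interface, so no duplicate proxy types are generated.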
A shared interface assembly is only a good idea if you wish the client and server to be tightly coupled, hence it does not work well unless both sides are owned by the same team.
Often, trying to use the same classes on the server and client leads to design problems; however, sharing classes when it does fit the design saves a lot of work.
(Sorry, I don't think you can use a shared assembly with ASMX; it is more a concept from the remoting side of .NET history.)
It is not possible to use the original domain model (obtained by a regular assembly reference, not a web reference) for communication with the ASMX service: you must use the model exposed by the ASMX service, which is similar but not the same (e.g. IList is transformed into an array).
It's possible, and easy, to create many classes from many web services under the same namespace.
Create a .bat file like this
"C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\wsdl.exe" /sharetypes /o:C:\code\MyProxy.cs /namespace:MyProxies http://website/FirstService.asmx http://website/SecondService.asmx
Pause
Where
- "C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\wsdl.exe" is wsdl.exe with its full path (it must be present on your PC)
- /sharetypes means that you want only one namespace in the output
- /o:C:\code\MyProxy.cs is the .cs file that will contain all the generated classes
- /namespace:MyProxies is the namespace for all the generated classes
- http://website/FirstService.asmx http://website/SecondService.asmx is the list of web services that you need to call
- Pause keeps the command window open so you can read the output of the operations.
NOTE
1) All instructions must be on one line (don't press Enter); Pause goes on the second line.
2) If /o:C:\code\MyProxy.cs will be part of your solution (I think so) and you use source control, the file must be checked out so it can be overwritten by wsdl.exe.
Greetings,
We have built an extensive system and data framework api using interfaces and DI. For the data access, if the application is a Windows service/WCF service then a LINQ implementation of the repositories is injected at runtime using Castle. Client web/winform applications use the same Data controllers/domain objects but the implementation portions use injected WCF client classes for data access. The cool part about this setup is that the client and server code can reuse the same domain objects, services and system logic by including the appropriate assembly with a few translations.
I have just now created a Silverlight application using the "Silverlight Navigation Application" template in VS2010. It seems the only way I can reference my desktop CLR code is via linked classes (add existing item/linked). There is not a boatload of plumbing classes, but there are some core classes that handle routing interfaces for emails, SMS messages, logging, and data access using the Castle MicroKernel and application configuration files.
I can do grid displays and whatnot by binding the controls to the WCF service references. However, I would like to reuse the controller model for messaging, data access, logging, and so on. I cannot determine whether it is worth the time trying to fit all the existing classes into SL project classes, or whether to start thinking about creating a new lightweight API for SL. Has anyone had experience with Unity/Castle and Silverlight?
In regards to "It seems the only way I can reference my desktop CLR code is via linked classes", you could always use a Portable Class Library, which works on everything from the desktop CLR and SL through to Xbox 360.
I have a description of my Application Services using my own classes (a ServiceDescription class that contains a collection of ServiceMethod descriptions, simplified here).
Now, I want to expose one Application Service as one WCF service (one contract). The current solution is very lame - I have a console application that generates a *.svc file for each Application Service (ServiceDescription). One method (Operation) is generated for each ServiceMethod.
This works well, but I would like to make it better. It could be improved using a T4 template, but I'm sure there is still a better way in WCF.
I would still like to have one *.svc file per Application Service, but I don't want to generate methods (for the corresponding Application Service methods).
I'm sure that there must be some interfaces that allow operations to be discovered dynamically at runtime. Maybe IContractBehavior...
Thanks.
EDIT1:
I don't want to use a generic operation contract because I would like to keep the ability to generate a service proxy with all operations.
I'm sure that if I write a WCF service and its operations by hand, WCF uses reflection to discover the operations on the service.
Now, I would like to customize this point so that instead of reflection, my own "operations discovering code" is used.
I think there is nothing wrong with static code generation in that case. In my opinion, it is a better solution than dynamic generation of contracts. Keep in mind that your contract is the only evidence you have/provide that a service is hosting a particular set of operations.
The main issue I see about the dynamic approach is about versioning and compatibility. If everything is dynamically generated, you may end up transparently pushing breaking changes into the system and create some problems with existing clients.
If you have a code generator, then when you plan on implementing changes in the application services it will be easier to remember that those changes may have a huge impact.
But if you really want to dynamically handle messages, you could use a generic operation contract (with the Action property set to *), and manually route the messages to the application services.
Keep in mind that you would lose the ability to generate, from the service, a proxy containing the list of available operations.
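For reference, the generic ("universal") operation contract mentioned above looks like this in WCF:

using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IUniversalContract
{
    // Action = "*" makes this operation the catch-all for every incoming message;
    // ReplyAction = "*" lets the implementation construct replies itself.
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message ProcessMessage(Message request);
}

The implementation would inspect request.Headers.Action and dispatch to the matching application-service method by hand, which is exactly why the generated proxy loses its list of operations.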