Abstracting out the existence of a service bus / distributed messaging? - C#

I'm working on a system right now that is in a single process space; we are breaking this up into several processes, initially to run on the same box but ultimately to distribute across several separate machines. I'm leaning towards using an ESB (NServiceBus, Rhino ESB) or possibly rolling my own with WCF + queues to handle the pub/sub and request/response scenarios our app has.
However, I'm struggling with the abstraction: I don't want the various components to know they are talking over the bus. The current APIs connecting the various services translate pretty well to this kind of model, but I want to hide that from the client and server sides. Short of writing a lot of custom proxy code for the client and server, is there a better way to approach this? I realize WCF can auto-generate proxies based on the service definition, but I really like some of the other stuff I get with (say) Rhino Service Bus.
Ideally, I'd like to be able to swap out different implementations (with and without an ESB/messaging layer) just using IoC (knowing there would have to be limits enforced by convention on what can be passed across the interfaces), but I'm not sure where to go with that. I'd really prefer to not have to change every method call on the current interfaces into its own discrete message class, either.
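For illustration only (these names are purely hypothetical, not our real types), what I'm picturing is something like this, where the container decides whether the consumer gets the in-process implementation or a bus-backed one:

// Purely hypothetical sketch: the consumer only ever sees IOrderService.
public class OrderRequest { public int OrderId; }

public interface IOrderService
{
    void PlaceOrder(OrderRequest request);
}

// Implementation used today, in-process.
public class LocalOrderService : IOrderService
{
    public void PlaceOrder(OrderRequest request)
    {
        // direct call into the domain code
    }
}

// Hypothetical gateway wrapping whichever bus we pick (NServiceBus, Rhino ESB, WCF + queues).
public interface IServiceBusGateway
{
    void Send(object message);
}

// Bus-backed implementation the container could swap in later.
public class BusOrderService : IOrderService
{
    private readonly IServiceBusGateway _bus;
    public BusOrderService(IServiceBusGateway bus) { _bus = bus; }

    public void PlaceOrder(OrderRequest request)
    {
        _bus.Send(request); // the bit I'd rather not hand-write per method
    }
}

The consumer would only ever take an IOrderService dependency; the question is how to get there without hand-writing a message class and proxy pair for every method.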
Any resources/patterns/tools to help me do this? Please ask questions if I'm not clear. Thanks.

There may not be a single solution or off-the-shelf component that solves all of this.
Problem 1:
The basic problem can be solved via an ESB, as it provides location transparency and service aggregation. A regular ESB mediates/brokers requests between service consumer and service provider.
Take a simple example:
Service_A depends on Service_B
Service_C depends on Service_B
Service_B depends on Service_D
In this scenario, the best way to progress is this:
Treat the contracts exposed by Service_B and Service_D as external dependencies of Service_A, Service_C, and Service_B (exposed as web services, say, though an ESB supports multiple protocols), and consume them via the ESB.
In the ESB, to start with, route these services Service_B and Service_D to the same instance.
If you later migrate Service_D and Service_B to Service_Dx and Service_Bx at a different location, the ESB can simply be reconfigured to route to the new location. An ESB can also be configured to route to Service_B or Service_Bx based on some set of parameters (e.g., test data to Service_B and production data to Service_Bx).
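As a rough sketch (the contract, binding, and configuration key below are assumptions, not anything prescribed by a particular ESB), the consumers would depend only on the contract, and the address they point at - the ESB entry point - would come from configuration, so rerouting to Service_Bx never touches consumer code:

using System.Configuration;
using System.ServiceModel;

// Sketch only: the consumer binds to the contract; the address it points at
// (the ESB entry point) comes from configuration, so routing changes are
// configuration-only and never touch Service_A or Service_C code.
[ServiceContract]
public interface IServiceB
{
    [OperationContract]
    string Process(string input);
}

public static class ServiceBProxy
{
    public static IServiceB Create()
    {
        // "ServiceB.Endpoint" is an assumed app setting pointing at the ESB,
        // never at Service_B (or Service_Bx) directly.
        var address = new EndpointAddress(ConfigurationManager.AppSettings["ServiceB.Endpoint"]);
        var factory = new ChannelFactory<IServiceB>(new BasicHttpBinding(), address);
        return factory.CreateChannel();
    }
}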
Problem 2:
The IoC part of the problem could be hard to achieve, and there may not be a need for it.
I presume the clients, instead of consuming from a known location, would be injected with the whereabouts of the service. In reality this moves the configuration to the client side, so every new client added to the system needs separate configuration control, which might lead to logistical issues.
Please post your final solution; I'm very interested to know your approach.

Related

Add layer to abstract multiple proxies & services

I think elements of this question have been answered elsewhere, but I couldn't find an answer to my specific circumstance.
I work with an enterprise application. This application interfaces with various 3rd-party APIs and services via what is currently a single class and a plethora of proxy DLLs. This means that each of these DLLs is referenced in the main project. In addition, over time as we've added new service calls, a lot of code has been duplicated with only very small amendments. Most of the service calls do roughly the same thing and take largely the same objects as parameters.
As you can imagine, this presents us with a number of problems, not least of which is the time it takes to add a new service.
We have a task now to refactor and streamline this process to make it more manageable and resilient - I have an idea in my head and I've done a lot of research but I just wanted to see if anyone had any better ideas or similar experiences before I dive in.
What I want to do is add a facade layer so that the base code gets all the data commonly used and bundles it off to the facade along with a parameter stipulating which service it wants to call. The facade would then pass the data to either the proxies or a bridge, which would transform it into the right format for the target service, make the required calls in order, and return any responses back to the facade to pass on.
Although I have an idea of the architecture I want, I'm not 100% sure which way to go in terms of concrete C# code - whether to add the facade and bridge/adapter code in a new project which also has the proxy dlls referenced, or whether to go down the base/interface route and add the required transformation classes directly into the proxy dlls.
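To make the idea concrete (all of the names below are placeholders, not our real types), the sort of facade I have in mind is roughly this:

using System.Collections.Generic;

// Placeholder sketch of the facade idea.
public class CommonRequest { /* the data commonly used by most service calls */ }
public class ServiceResponse { /* normalised result handed back to the base code */ }

public interface IServiceAdapter
{
    // One adapter/bridge per 3rd-party service: transforms the common data
    // into the target service's format, makes the calls, maps the responses back.
    ServiceResponse Execute(CommonRequest request);
}

public class ServiceFacade
{
    private readonly IDictionary<string, IServiceAdapter> _adapters;

    public ServiceFacade(IDictionary<string, IServiceAdapter> adapters)
    {
        _adapters = adapters;
    }

    public ServiceResponse Call(string serviceKey, CommonRequest request)
    {
        // The base code only ever says "call this service with this common data".
        return _adapters[serviceKey].Execute(request);
    }
}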
Edited to add: I am unable to consider consolidating this functionality into a service or microservice due to wider infrastructure considerations that I am unable to discuss here.
Any suggestions appreciated!

Persist a variable in WCF application per instance

I am creating a WCF RESTful service and there is a need to persist a variable per user. Is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's progress throughout the process: whether their request failed or succeeded, IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
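A minimal sketch of that approach (assuming all you need is an ID scoped to each request; the wrapper class here is just an illustration):

using System;
using System.Diagnostics;

// Sketch: wrap each service request in a logical operation so anything logged
// further down the call stack can pick up the same operation ID.
public class RequestLogScope : IDisposable
{
    public RequestLogScope()
    {
        Trace.CorrelationManager.StartLogicalOperation(Guid.NewGuid());
    }

    public static object CurrentOperationId
    {
        get
        {
            var stack = Trace.CorrelationManager.LogicalOperationStack;
            return stack.Count > 0 ? stack.Peek() : null;
        }
    }

    public void Dispose()
    {
        Trace.CorrelationManager.StopLogicalOperation();
    }
}

You would create one of these at the start of the request (for example from an inspector or interceptor) and dispose it at the end; everything logged in between can read CurrentOperationId.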
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
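Very roughly (a Castle DynamicProxy interceptor sketch; what you log and what you pull from the logical operation stack is up to you):

using System.Diagnostics;
using Castle.DynamicProxy;

// Sketch of an interceptor that logs around a dependency's work without the
// dependency (or the service methods) knowing anything about logging.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stack = Trace.CorrelationManager.LogicalOperationStack;
        var operationId = stack.Count > 0 ? stack.Peek() : null;

        Trace.TraceInformation("Calling {0} (operation {1})", invocation.Method.Name, operationId);
        invocation.Proceed(); // let the real dependency do its work
        Trace.TraceInformation("Finished {0}", invocation.Method.Name);
    }
}

When registering a dependency with Windsor you attach the interceptor to the component, roughly Component.For<IDependency>().ImplementedBy<Dependency>().Interceptors<LoggingInterceptor>() (IDependency/Dependency being your own types).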
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons - interception just one. It helps with the way we compose services and dependencies.
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.

Q: How to build the most basic service aggregation pattern?

I have a set of services I want to be able to access via one end point altogether.
Now, I want to build something in WCF rather than use an existing framework or piece of software, so that is out of the question.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to; what direction should I go?
public partial class ServiceBus : ICardsService
{
    // Proxy to the cards service
    CMSClient cards = new CMSClient();

    public int methodExample()
    {
        return cards.methodExample();
    }
}
So far I've tried using a partial class "ServiceBus" that implements each contract, but then I have more than a few (60+) occurrences of identical function signatures, so I think I should approach this from a different angle.
Anyone got an idea of what I should do, or what direction to research? Currently I'm trying to use a normal WCF service that will be configured with a lot of client endpoints, each directing to one of the services it routes to, and one endpoint for the 'application' to consume.
I'm rather new to WCF, so anything that may seem too trivial to mention, please do mention it anyway.
Thanks in advance.
I have a set of services I want to be able to access via one end point altogether.
...
So far I've tried using a partial class "ServiceBus" that implements each contract
It's questionable whether this kind of "service aggregation" pattern should be achieved by condensing multiple endpoints into an uber facade endpoint. Even when implemented well, this will still leave a brittle single point of failure in your solution.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to; what direction should I go?
Stated broadly, your aim seems to be to decouple the caller and the service so that the caller makes a call and, based on the call context, the call is routed to the relevant service.
One approach would be to do this call mediation on the client side. This is an unusual approach but would involve creating a "service bus" assembly containing the capability to dynamically call a service at run-time, based on some kind of configurable metadata.
The client code would consume the assembly in-process and, at run-time, call into it; the assembly would then consult the metadata store, retrieve the contract, binding, and address information for the relevant service, construct a WCF channel, and return it to the client. The client can then happily make calls against the channel and dispose of it when finished.
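A bare-bones sketch of that idea (the metadata store and its lookup are placeholders for whatever configurable store you end up with):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

// Bare-bones sketch of client-side mediation: look up the endpoint details
// for a contract at run-time and hand back a channel.
public class ServiceEndpointInfo
{
    public Binding Binding;
    public EndpointAddress Address;
}

public static class MetadataStore
{
    public static ServiceEndpointInfo Lookup(Type contract)
    {
        // Placeholder: in reality this would read from a database/config keyed by
        // the contract, so endpoints can be changed without redeploying clients.
        return new ServiceEndpointInfo
        {
            Binding = new BasicHttpBinding(),
            Address = new EndpointAddress("http://example.org/cards") // placeholder address
        };
    }
}

public static class ClientSideBus
{
    public static TContract GetChannel<TContract>()
    {
        var endpoint = MetadataStore.Lookup(typeof(TContract));
        var factory = new ChannelFactory<TContract>(endpoint.Binding, endpoint.Address);
        return factory.CreateChannel();
    }
}

The caller would then do something like var cards = ClientSideBus.GetChannel<ICardsService>(); and cast the channel to IClientChannel to close it when finished.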
An alternative is to do the call mediation remotely, and luckily WCF does provide a Routing Service for this kind of thing. This allows you to achieve the service aggregation pattern you are proposing, but in a way which is fully configurable, so your overall solution will be less brittle. You will still have a single point of failure, however, unless you load-balance the router service.
I'm not sure about making it client side as I can't access some of the applications (external apis) that are connecting to our service
Well, any solution you choose will likely involve some consumer rewrite - this is almost unavoidable.
I need to make it simple for the programmers using our api
This does not rule out a client-side library approach. In fact, in some ways this will make it really easy for the developers: all they will need to do is grab a NuGet package, wire it up and start calling it. However, I agree it's an unusual approach and would also generate a lot of work for you.
I want to implement the aggregation service with one endpoint for a few contracts
Then you need to find a way to avoid having to implement multiple duplicate (or redundant) service operations in a single service implementation.
The simplest way would probably be to define a completely new service contract which exposes only those operations distinct to each of the services, and additionally a single instance of each of the redundant operations. Then you would need to have some internal routing logic to call the backing service operations depending on what the caller wanted to do. On second thoughts not so simple I think.
Do you have any examples of a distinct service operation and a redundant one?
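Hypothetically (operation names invented for illustration): a distinct operation is one that only a single backing service exposes, while a redundant one is an operation that several services expose in near-identical form. The aggregated contract from the previous paragraph might then look roughly like this:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IAggregatedService
{
    // "Distinct" operation: only the cards service exposes this (invented example).
    [OperationContract]
    int GetCardBalance(int cardId);

    // "Redundant" operation: several backing services expose something like this,
    // so the aggregated contract exposes it once and routes internally.
    [OperationContract]
    string GetStatus(string serviceKey, int id);
}

public class AggregatedService : IAggregatedService
{
    public int GetCardBalance(int cardId)
    {
        return new CMSClient().methodExample(); // delegate straight to the cards proxy (placeholder call)
    }

    public string GetStatus(string serviceKey, int id)
    {
        // Internal routing: choose the backing proxy based on what the caller asked for.
        if (serviceKey == "cards") return "card status";  // placeholder for the cards proxy call
        if (serviceKey == "loans") return "loan status";  // placeholder for another proxy call
        throw new ArgumentException("Unknown service", "serviceKey");
    }
}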

Propagate Application Service as WCF Service

I have a description of my Application Services using my own fancy classes (a ServiceDescription class that contains a collection of ServiceMethod descriptions, to simplify).
Now, I want to expose one Application Service as one WCF Service (one Contract). The current solution is very lame - I have a console application that generates a *.svc file for each Application Service (ServiceDescription). One method (operation) is generated per ServiceMethod.
This works well, but I would like to make it better. It could be improved using a T4 template, but I'm sure there is still a better way in WCF.
I would still like to have one *.svc file per Application Service, but I don't want to generate the methods (for the corresponding Application Service methods).
I'm sure there must be some interfaces that allow operations to be discovered dynamically, at runtime. Maybe IContractBehavior...
Thanks.
EDIT1:
I don't want to use a generic operation contract because I would like to keep the ability to generate a service proxy with all of the operations.
I'm sure that if I write a WCF service and its operations by hand, WCF uses reflection to discover the operations on the service.
Now, I would like to customize that point so that reflection is not used and my "operation discovery code" is used instead.
I think there is nothing wrong with static code generation in that case. In my opinion, it is a better solution than dynamic generation of contracts. Keep in mind that your contract is the only evidence you have/provide that a service is hosting a particular set of operations.
The main issue I see about the dynamic approach is about versioning and compatibility. If everything is dynamically generated, you may end up transparently pushing breaking changes into the system and create some problems with existing clients.
If you have a code generator, then when you plan on implementing changes in the application services it will be easier to remember that the changes you make to the services may have a huge impact.
But if you really want to dynamically handle messages, you could use a generic operation contract (with the Action property set to *), and manually route the messages to the application services.
Keep in mind that you would lose the ability to generate from the service a proxy containing a list of operations available.
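For reference, a minimal sketch of such a catch-all contract (the routing to your application services is only hinted at in a comment) would look something like this:

using System.ServiceModel;
using System.ServiceModel.Channels;

// Sketch: one untyped operation accepts every action and routes the raw
// Message to the right application service by hand.
[ServiceContract]
public interface IUniversalContract
{
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message Process(Message request);
}

public class UniversalService : IUniversalContract
{
    public Message Process(Message request)
    {
        string action = request.Headers.Action;
        // ...look up the ServiceMethod registered for this action and invoke it...
        return Message.CreateMessage(request.Version, action + "Response", "done"); // placeholder reply
    }
}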

Extensibility framework/pattern/good practice for Web services?

I'm currently working on a large real-time OLAP application. All data is held in RAM (a few gigabytes), and the common tasks involve brute-force scans over large quantities of that data (which is fine). The results of processing are exposed via a web service (singleton/multithreaded) and presented using a Silverlight-based client.
The problem is that various customers need different functionality/algorithms and I don't know how to provide extensibility on the server-side. For the client side (Silverlight) I can use MEF/Prism, but I'm not sure what would be a good approach to tackle this problem on the server.
Please note that, ideally, other web services should have direct access (i.e. without marshalling) to the data of the main/current service, which holds the large data model.
Are there any:
a) frameworks/libraries
b) patterns
c) good practices
which would help me to modularize the application and make the selection of desired modules and their deployment relatively easy?
Sounds to me like Dependency Inversion is required: isolate logical parts of the system (algorithms, etc) by defining interfaces, then use a DI / IoC framework to load the desired implementation at runtime (or on application start, etc).
I haven't used Ninject, but plenty of people love it, so you could try that; there's also Spring.Net.
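For example (a minimal Ninject sketch; the algorithm interface and implementation names are made up), the wiring could look like this:

using Ninject;

// Made-up example: the customer-specific behaviour sits behind an interface and
// the concrete implementation is chosen by the container at start-up.
public interface IAggregationAlgorithm
{
    decimal Aggregate(decimal[] values);
}

public class DefaultAggregation : IAggregationAlgorithm
{
    public decimal Aggregate(decimal[] values)
    {
        decimal sum = 0;
        foreach (var v in values) sum += v;
        return sum;
    }
}

public static class CompositionRoot
{
    public static IKernel Build()
    {
        var kernel = new StandardKernel();
        // Swap this binding (or load it from a customer-specific module/assembly)
        // to change behaviour without touching the calling code.
        kernel.Bind<IAggregationAlgorithm>().To<DefaultAggregation>();
        return kernel;
    }
}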
Good Practices:
Ensure you have clear precise logging so you know what's being used and when.
Think about whether you want a 'default' implementation to load if the desired one fails, or whether you deliberately want to fail so that the wrong data isn't returned by mistake (such as through the use of a different algorithm).
I've found that using attributes to decorate injectable modules is really helpful (especially in a web-based system that you don't have immediate access to); one reason for this is that you can build pages or controls that list all the known/available implementations at runtime.
You can also use the attribute approach to build a UI that lets users select which one they want; I use it for an open source web-application framework I built: http://www.morphological.geek.nz/Morphfolia/Capabilities/AttributeDriven.aspx
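As a sketch of that attribute idea (all names here are invented), something along these lines lets you enumerate the available implementations at runtime:

using System;
using System.Linq;
using System.Reflection;

// Invented example: decorate injectable modules with a descriptive attribute...
[AttributeUsage(AttributeTargets.Class)]
public class InjectableModuleAttribute : Attribute
{
    public string DisplayName { get; set; }
}

[InjectableModule(DisplayName = "Default aggregation")]
public class DefaultAggregationModule { /* ... */ }

// ...so a page or control can list everything that is known/available at runtime.
public static class ModuleCatalog
{
    public static string[] ListModules(Assembly assembly)
    {
        return assembly.GetTypes()
            .Select(t => t.GetCustomAttribute<InjectableModuleAttribute>())
            .Where(a => a != null)
            .Select(a => a.DisplayName)
            .ToArray();
    }
}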
