Using multiple connection strings with Rebus & AzureServiceBus - c#

We have been using Rebus to send commands to Azure Service Bus. We have a project that spans environments and needs to send commands to two different ASB namespaces (different connection strings).
The way we currently register Rebus doesn't allow us to create a factory or use multiple namespaces (that I'm aware of).
Inside Startup.cs ConfigureServices(...) method:
services.AddRebus(config =>
{
    var asbConfig = Configuration.GetSection("AzureServiceBusConfiguration").Get<AzureServiceBusConfiguration>();

    config
        .Logging(l => l.Serilog(Log.Logger))
        .Transport(t => t.UseAzureServiceBusAsOneWayClient(asbConfig.ConnectionString))
        .Routing(r => r.TypeBased().Map<MyCommand>($"{asbConfig.Environment}/myQueueName"));

    return config;
});
I've tried attacking this from several different directions, and all have fallen short. Is there a supported way to register more than one IBus configuration with different connection strings?
We basically need to spin this up per request scope so we can configure Rebus based on a request header value. Not sure where to start with this.

While Rebus has pretty good support for inserting itself into an IoC container via the "container adapter" concept, it doesn't necessarily make sense to always make it do so automatically.
In this case, I suggest you wrap the one-way clients in a dedicated class, e.g. a CommandSender, and then the command sender can initialize its one-way client in the constructor (and dispose it again in its Dispose method).
One-way clients are fairly inexpensive to create, so it might be ok to simply create/dispose every time you need them. If you need them often though, I suggest you use a ConcurrentDictionary to store the initialized instances – just remember to dispose them all when your application shuts down.
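A minimal sketch of such a wrapper, assuming the Rebus and Rebus.AzureServiceBus packages; the CommandSender class, its SendAsync signature, and the idea of keying cached clients by connection string are illustrative, not part of Rebus:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Bus;
using Rebus.Config;

// Illustrative wrapper: one cached one-way client per ASB connection string.
public sealed class CommandSender : IDisposable
{
    private readonly ConcurrentDictionary<string, IBus> _clients = new ConcurrentDictionary<string, IBus>();

    public Task SendAsync(string connectionString, string queueName, object command)
    {
        // Create (or reuse) a one-way client for this namespace.
        var bus = _clients.GetOrAdd(connectionString, cs =>
            Configure.With(new BuiltinHandlerActivator())
                .Transport(t => t.UseAzureServiceBusAsOneWayClient(cs))
                .Start());

        // Send straight to the named destination queue, bypassing type-based routing.
        return bus.Advanced.Routing.Send(queueName, command);
    }

    public void Dispose()
    {
        // Dispose all cached one-way clients when the application shuts down.
        foreach (var bus in _clients.Values) bus.Dispose();
    }
}

You could then register the wrapper as a singleton and pick the connection string and queue per request, e.g. from the request header value you mentioned.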

Related

.NET Maui how to make use of IOptionsSnapshot

Basically I'm trying to work around the fact that you can't really use IOptionsSnapshot in Maui since the appsettings.json file is set in stone once it's bundled in with the app.
Manually updating the IConfiguration with Configuration["key"] = myValue then requires notifying all scoped services or singletons to retrieve new instances of their IOptionsSnapshot properties.
Yep, I need to update those options at runtime. (Even Autofac moved away from this.)
So I either use ApiControllers, which are transient, locally in the app (though I don't know if Maui supports them) so that requests always have the updated options.
Or I make use of transient services and resolve them manually every time I need them with
using var scope = scopeFactory.CreateScope();
var service = scope.ServiceProvider.GetRequiredService<MyTransientService>();
OK, you need to do a few things.
First, make a settings service, that stores and reads small key-value pairs:
https://stackoverflow.com/a/74402836/6643940
Now you have to make sure that everyone is notified about changes.
In my case it is easy:
Using CommunityToolkit.Mvvm, I implement Messaging.
Setting a property sends a message for whoever cares about those changes. If something is running and has subscribed to that message, it will receive it.
Otherwise I fire something that no one listens to (and that is not a bad thing).
The good thing for me is that I don't even have this service in the places where I want to detect a change. Everything is decoupled.
The stuff that DOES use this service gets the new values anyway, and since it is a singleton, you can add other properties that will be updated for everyone.
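A minimal sketch of that messaging setup, assuming CommunityToolkit.Mvvm and MAUI's Preferences for the key-value storage; SettingChangedMessage, SettingsService and SomeViewModel are made-up names:

using CommunityToolkit.Mvvm.Messaging;
using CommunityToolkit.Mvvm.Messaging.Messages;
using Microsoft.Maui.Storage;

// Hypothetical message describing a changed setting (key + new value).
public sealed class SettingChangedMessage : ValueChangedMessage<string>
{
    public string Key { get; }
    public SettingChangedMessage(string key, string value) : base(value) => Key = key;
}

// Singleton settings service: writing a value also broadcasts the change.
public sealed class SettingsService
{
    public void Set(string key, string value)
    {
        Preferences.Set(key, value);
        WeakReferenceMessenger.Default.Send(new SettingChangedMessage(key, value));
    }

    public string Get(string key, string fallback = "") => Preferences.Get(key, fallback);
}

// Whoever cares subscribes; everyone else simply ignores the broadcast.
public sealed class SomeViewModel : IRecipient<SettingChangedMessage>
{
    public SomeViewModel() => WeakReferenceMessenger.Default.Register<SettingChangedMessage>(this);

    public void Receive(SettingChangedMessage message)
    {
        // React to the new value here, e.g. rebuild whatever depends on it.
    }
}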
The interesting part is the custom code you have to write. In one place you may have the BaseAddress setting of an HttpClient. Good luck remembering that you have to re-construct it when that setting changes.
People are not doing this during runtime for a reason. You will infest your code with bugs.

Persist a variable in WCF application per instance

I am creating a WCF RESTful service and there is a need to persist a variable per user. Is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's activity throughout the process: whether their request failed or succeeded, their IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
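For example, on a REST (webHttpBinding) endpoint, reading such a header on the service side can be as simple as the following sketch; "X-Client-Id" is a made-up header name:

using System.ServiceModel.Web;

// Minimal sketch: pull a custom header out of the incoming HTTP request.
public static class RequestHeaders
{
    public static string GetClientId()
    {
        var headers = WebOperationContext.Current.IncomingRequest.Headers;
        return headers["X-Client-Id"];   // null if the caller didn't send it
    }
}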
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
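A rough sketch of the CorrelationManager idea; the RequestLogging wrapper is hypothetical, the Trace.CorrelationManager calls are the point:

using System;
using System.Diagnostics;

// Wrap each service request in a logical operation so every trace written
// inside it carries the same correlation/activity ID.
public static class RequestLogging
{
    public static void Execute(Action handleRequest)
    {
        Trace.CorrelationManager.ActivityId = Guid.NewGuid();            // per-request ID
        Trace.CorrelationManager.StartLogicalOperation("ServiceRequest");
        try
        {
            handleRequest();   // the actual service operation body
        }
        finally
        {
            Trace.CorrelationManager.StopLogicalOperation();
        }
    }
}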
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
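As an illustration only, a logging interceptor along those lines could look like this (using Castle DynamicProxy's IInterceptor, which Windsor uses for interception); you would then attach it to your components through Windsor's interceptor registration:

using System.Diagnostics;
using Castle.DynamicProxy;

// Logs around every call to the intercepted dependency and tags the entries
// with the current activity ID from CorrelationManager.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Trace.TraceInformation("Calling {0} (activity {1})",
            invocation.Method.Name, Trace.CorrelationManager.ActivityId);

        invocation.Proceed();   // call through to the real dependency

        Trace.TraceInformation("Finished {0}", invocation.Method.Name);
    }
}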
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons - interception just one. It helps with the way we compose services and dependencies.
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.

Q: How to build the most basic service aggregation pattern?

I have a set of services I want to be able to access via one end point altogether.
I want to build this in WCF rather than use an existing framework or product, so those are out of the question.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to. What direction should I go?
public partial class ServiceBus : ICardsService
{
    // Proxy to the backing cards service
    CMSClient cards = new CMSClient();

    public int methodExample()
    {
        return cards.methodExample();
    }
}
So far I've tried using a partial class "ServiceBus" that implements each contract, but then I end up with more than a few (60+) occurrences of identical function signatures, so I think I should approach this from a different angle.
Anyone got an idea of what I should do, or what direction to research? Currently I'm trying to use a normal WCF service configured with a lot of client endpoints directing to each of the services it routes TO, and one endpoint for the 'application' to consume.
I'm rather new at WCF, so if something seems too trivial to mention, please do mention it anyway.
Thanks in advance.
I have a set of services I want to be able to access via one end point
altogether.
...
So far I've tried using a partial class "ServiceBus" that implements
each contract
It's questionable whether this kind of "service aggregation" pattern should be achieved by condensing multiple endpoints into an uber facade endpoint. Even when implemented well, this will still leave a brittle single point of failure in your solution.
Suppose I have 10 contracts each representing a contract of an
independent service that I want to "route" to, what direction should I
go?
Stated broadly, your aim seems to be to decouple the caller and the service, so that the caller makes a call and, based on the call context, the call is routed to the relevant services.
One approach would be to do this call mediation on the client side. This is an unusual approach but would involve creating a "service bus" assembly containing the capability to dynamically call a service at run-time, based on some kind of configurable metadata.
The client code would consume the assembly in-process, and at run-time call into the assembly, which would then make a call to the metadata store, retrieving the contract, binding, and address information for the relevant service, construct a WCF channel, and return it to the client. The client can then happily make calls against the channel and dispose it when finished.
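A minimal sketch of that client-side idea, assuming plain WCF ChannelFactory; ClientSideServiceBus and GetEndpointMetadata are made-up names, and the metadata lookup is just a placeholder:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

public static class ClientSideServiceBus
{
    // Hands back a typed channel built from metadata resolved at run-time.
    public static TContract CreateChannel<TContract>()
    {
        var (binding, address) = GetEndpointMetadata(typeof(TContract));
        var factory = new ChannelFactory<TContract>(binding, new EndpointAddress(address));
        return factory.CreateChannel();
    }

    // Placeholder: in a real implementation this would query your metadata store
    // for the contract, binding and address of the relevant service.
    private static (Binding, string) GetEndpointMetadata(Type contract)
    {
        return (new BasicHttpBinding(), "http://localhost:8080/" + contract.Name);
    }
}

// Usage: var cards = ClientSideServiceBus.CreateChannel<ICardsService>();
//        var result = cards.methodExample();
//        ((IClientChannel)cards).Close();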
An alternative is to do the call mediation remotely, and luckily WCF does provide a routing service for this kind of thing. This allows you to achieve the service aggregation pattern you are proposing, but in a way which is fully configurable, so your overall solution will be less brittle. You will still have a single point of failure, however, unless you load-balance the router service.
I'm not sure about making it client side as I can't access some of the
applications (external apis) that are connecting to our service
Well, any solution you choose will likely involve some consumer rewrite - this is almost unavoidable.
I need to make it simple for the programmers using our api
This does not rule out a client side library approach. In fact in some ways this will make it really easy for the developers, all they will need to do is grab a nuget package, wire it up and start calling it. However I agree it's an unusual approach and would also generate a lot of work for you.
I want to implement the aggregation service with one endpoint for a
few contracts
Then you need to find a way to avoid having to implement multiple duplicate (or redundant) service operations in a single service implementation.
The simplest way would probably be to define a completely new service contract which exposes only those operations distinct to each of the services, and additionally a single instance of each of the redundant operations. Then you would need to have some internal routing logic to call the backing service operations depending on what the caller wanted to do. On second thoughts not so simple I think.
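A minimal sketch of what that could look like; IAggregateService and the routing-by-name parameter are purely illustrative, with CMSClient standing in for one of the backing proxies from the question:

using System.ServiceModel;

// Single aggregate contract: distinct operations keep their own signatures,
// while the redundant ones collapse into one operation with a routing parameter.
[ServiceContract]
public interface IAggregateService
{
    [OperationContract]
    int MethodExample(string targetService);
}

public class AggregateService : IAggregateService
{
    public int MethodExample(string targetService)
    {
        // Internal routing logic: pick the backing proxy based on the caller's intent.
        switch (targetService)
        {
            case "cards":
                return new CMSClient().methodExample();
            // case "other": return new OtherServiceClient().MethodExample();
            default:
                throw new FaultException($"Unknown target service '{targetService}'");
        }
    }
}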
Do you have any examples of a distinct service operation and a redundant one?

Abstracting out existence of service bus/distributed messaging?

I'm working on a system right now that is in a single process space; we are breaking this up into several processes, initially to run on the same box but ultimately to distribute across several separate machines. I'm leaning towards using an ESB (NServiceBus, Rhino ESB) or possibly rolling my own with WCF + queues to handle the pub/sub and request/response scenarios our app has.
However, I'm struggling with the abstraction: I don't want the various components to know they are talking over the bus. The current APIs connecting the various services translate pretty well to this kind of model, but I want to hide that from the client and server sides. Short of writing a lot of custom proxy code for the client and server, is there a better way to approach this? I realize WCF can auto-generate proxies based on the service definition, but I really like some of the other stuff I get with (say) rhino servicebus.
Ideally, I'd like to be able to swap out different implementations (with and without an ESB/messaging layer) just using IoC (knowing there would have to be limits enforced by convention on what can be passed across the interfaces), but I'm not sure where to go with that. I'd really prefer to not have to change every method call on the current interfaces into its own discrete message class, either.
Any resources/patterns/tools to help me do this? Please ask questions if I'm not clear. Thanks.
There may not be a single solution or off-the-shelf component that covers all of this.
Problem 1:
The basic problem can be solved via an ESB, as it provides location transparency and service aggregation. A regular ESB mediates/brokers requests between service consumer and service provider.
Take a simple example:
Service_A depends on Service_B
Service_C depends on Service_B
Service_B depends on Service_D
In this scenario, the best way to progress is this:
Define contracts exposed by Service_B and Service_D as external dependencies (possibly as a web service, though an ESB supports multiple protocols) in services Service_A, Service_C and Service_B, and consume via an ESB.
In the ESB, to start with, route these services Service_B and Service_D to the same instance.
If you later migrate Service_D and Service_B as Service_Dx and Service_Bx to a different location, the ESB can be reconfigured to route to the new location. An ESB can also be configured to route to Service_B or Service_Bx based on some set of parameters (e.g., test data to Service_B and production data to Service_Bx).
Problem 2:
The IoC part could be hard to achieve, and there may not be a need for it.
I presume the clients, instead of consuming from a known location, are injected with the whereabouts of the service location. In reality this transfers the configuration to the client side. With this, every new client added to the system needs its own configuration control, which might lead to logistical issues.
Please post your final solution, very interested to know your approach.

How to do inter-process communication between two instances of the same application?

I was thinking of using WCF, but then the endpoints would collide.
What are the other options?
The endpoints will collide because the second instance will be created from the same executable file.
You could use any of the interprocess communication primitives (memory mapped files, message passing, pipes or just standard sockets)... or you could just define the end point dynamically based on the given instance (for example based on the process id).
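A minimal sketch of the dynamic-endpoint idea over named pipes; MyService, IMyService and the "myapp" path segment are placeholders:

using System;
using System.Diagnostics;
using System.ServiceModel;

class PerInstanceHost
{
    static void Main()
    {
        // Derive the endpoint address from this instance's process ID so two
        // copies of the same executable never collide.
        int pid = Process.GetCurrentProcess().Id;
        var baseAddress = new Uri($"net.pipe://localhost/myapp/{pid}");

        using (var host = new ServiceHost(typeof(MyService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IMyService), new NetNamedPipeBinding(), string.Empty);
            host.Open();

            // The peer instance can discover this PID (e.g. via
            // Process.GetProcessesByName) and connect to the same address.
            Console.ReadLine();
        }
    }
}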
Whatever IPC mechanism you choose, the basic issue is the same - you will have resource collisions unless you configure the instances individually to use disjoint local resources (though in such a way that each instance pair can connect as required). It makes a difference whether you need just point-to-point (and how the target for a given outbound message is determined), or the ability to broadcast to all active instances.
Seems to me that the answer to this question is really "use the one that best meets your requirements", with a harder followup question on how to configure the instances to make that work.
