I've encountered a weird issue with the naming of queues that are created in Azure Service Bus and used by MassTransit.
The URI is "sb://{namespace}.servicebus.windows.net/{path}/{queueName}".
E.g. if path equals dev and queueName contains dev as a substring, e.g. devices, then dev is removed and I see a queue created with the name ices.
The same happens if path = test.
I have not found any reserved words for queue naming, so I wonder if there are any.
This issue happens only while sending messages from the ASP.NET Web API process. For the Azure Worker Role everything works fine.
The methods used are:
public static Uri BuildQueueUri(string @namespace, string path, string queueName)
{
return new Uri($"sb://{@namespace}.servicebus.windows.net/{path}/{queueName}");
}
protected Task<ISendEndpoint> EstablishSendEndpoint(string queueName)
{
Uri uri = BusConfiguration.BuildQueueUri(
Settings.GetSetting(ConfigKeys.ServiceBusNamespace),
Settings.GetSetting(ConfigKeys.ServiceBusPath),
queueName);
return BusControl.GetSendEndpoint(uri);
}
public async Task<IHttpActionResult> SendGetIpsToInterceptCommand()
{
ISendEndpoint endpoint = await EstablishSendEndpoint(BusConfiguration.SendCommandsQueue);
var guid = Guid.NewGuid();
await endpoint.Send<ICGetItemsToInterceptCommand>(new
{
CommandId = guid
});
return Ok(guid);
}
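To make the reported behaviour concrete, this is roughly what the call produces (the namespace value here is made up for illustration; dev and devices are the values from the description above):
// Hypothetical values, for illustration only.
var uri = BuildQueueUri("mynamespace", "dev", "devices");
// uri -> sb://mynamespace.servicebus.windows.net/dev/devices
// Expected queue name: devices
// Actually created queue name: ices (the "dev" substring is stripped)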
MassTransit configuration (using Autofac)
private static void RegisterMicroserviceBus(ContainerBuilder builder)
{
builder.Register(c =>
Bus.Factory.CreateUsingAzureServiceBus(sbc =>
{
var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb",
Settings.GetSetting(ConfigKeys.ServiceBusNamespace),
Settings.GetSetting(ConfigKeys.ServiceBusPath));
sbc.Host(serviceUri, h =>
{
h.TokenProvider =
TokenProvider.CreateSharedAccessSignatureTokenProvider(
Settings.GetSetting(ConfigKeys.ServiceBusKeyName),
Settings.GetSetting(ConfigKeys.ServiceBusAccessKey),
TimeSpan.FromDays(365),
TokenScope.Namespace);
});
}))
.SingleInstance()
.As<IBusControl>()
.As<IBus>();
}
I would like to use the .NET variant of the Google Cloud Client Libraries (Resource Manager for creating a new project, for example).
I would prefer not to use either service account credentials or ADC.
Can I somehow pass existing OAuth credentials (an access token obtained for the appropriate scope) to the client library to authenticate the given user?
(Or) do I need some authentication client library?
I briefly looked at the ProjectsClientBuilder class, but it seems heavily generated (as does the documentation), which makes it a bit harder to find any hints.
The following example shows how to authorize the Google Cloud Resource Manager API using OAuth2 for an installed app.
// Key file from google developer console (INSTALLED APP)
var PathToInstalledKeyFile = @"C:\Development\FreeLance\GoogleSamples\Credentials\credentials.json";
// scope of authorization needed for the method in question.
var scopes = "https://www.googleapis.com/auth/cloud-platform";
// Installed app authorizaton.
var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(GoogleClientSecrets.FromFile(PathToInstalledKeyFile).Secrets,
new []{ scopes },
"userName",
CancellationToken.None,
new FileDataStore("usercreds", true)).Result;
var client = new ProjectsClientBuilder()
{
Credential = credential,
}.Build();
var projects = client.ListProjects(new FolderName("123"));
Note that for a web application the code will be different; web authorization is not the same with the client library. I haven't tried to connect any of the Cloud APIs via web OAuth before.
As mentioned above, the only thing needed is to initialize the Credential property on the project builder prior to calling Build().
Just for the completeness:
// when using Google.Apis.CloudResourceManager.v3
public class Program
{
private static async Task OlderMethod(string oAuthToken)
{
using var service = new CloudResourceManagerService();
var id = Guid.NewGuid().ToString("N")[..8];
var project = new Google.Apis.CloudResourceManager.v3.Data.Project
{
DisplayName = $"Prog Created {id}",
ProjectId = $"prog-created-{id}",
};
var createRequest = service.Projects.Create(project);
createRequest.Credential = new OlderCredential(oAuthToken);
var operation = await createRequest.ExecuteAsync();
// ...
}
}
public class OlderCredential : IHttpExecuteInterceptor
{
private readonly string oAuthToken;
public OlderCredential(string oAuthToken) { this.oAuthToken = oAuthToken; }
public Task InterceptAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", oAuthToken);
return Task.CompletedTask;
}
}
In the end the newer one is simpler, just returning the token, no need to modify the request itself:
// when using Google.Cloud.ResourceManager.V3
public class Program
{
private static async Task NewerMethod(string oAuthToken)
{
var client = await new ProjectsClientBuilder
{
Credential = new NewerCredential(oAuthToken),
}.BuildAsync();
var id = Guid.NewGuid().ToString("N")[..8];
var project = new Project
{
DisplayName = $"Prog Created New {id}",
ProjectId = $"prog-created-new-{id}",
};
var operation = await client.CreateProjectAsync(project);
}
}
public class NewerCredential : ICredential
{
private readonly string oAuthToken;
public NewerCredential(string oAuthToken) { this.oAuthToken = oAuthToken; }
public void Initialize(ConfigurableHttpClient httpClient) { }
public Task<string> GetAccessTokenForRequestAsync(string? authUri, CancellationToken cancellationToken) => Task.FromResult(oAuthToken);
}
I have a .NET Core backend with SignalR and a React frontend. I have a basic hub set up with a ConcurrentDictionary to manage connection ids:
namespace backend.Hubs
{
public class OrderHub : Hub, IOrderHub
{
private readonly IHubContext<OrderHub> hubContext;
public static ConcurrentDictionary<string, List<string>> ConnectedUsers = new ConcurrentDictionary<string, List<string>>();
public OrderHub(IHubContext<OrderHub> hubContext)
{
this.hubContext = hubContext;
}
public async Task SendMessage(OrderResponseDto order, string id)
{
List<string> connectedUser = null;
ConnectedUsers.TryGetValue(id, out connectedUser);
await hubContext.Clients.Clients(connectedUser).SendAsync("neworder",order);
}
public void AddMapping(string id)
{
List<string> existingUserConnectionIds;
ConnectedUsers.TryGetValue(id, out existingUserConnectionIds);
if (existingUserConnectionIds == null)
{
existingUserConnectionIds = new List<string>();
}
existingUserConnectionIds.Add(Context.ConnectionId);
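// Note: if the key already existed, the list retrieved above is the same instance stored in the dictionary,
// so the Add above is already visible and the TryAdd below is a no-op. List<string> itself is not
// thread-safe, so concurrent AddMapping calls for the same id can still race.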
ConnectedUsers.TryAdd(id, existingUserConnectionIds);
}
public override Task OnDisconnectedAsync(Exception e)
{
List<string> existingUserConnectionIds = null;
foreach(var val in ConnectedUsers.Values)
{
if (val.Contains(Context.ConnectionId))
{
existingUserConnectionIds = val;
break;
}
}
if (existingUserConnectionIds != null)
{
existingUserConnectionIds.Remove(Context.ConnectionId);
}
var keys = ConnectedUsers.Where(en => en.Value.Count == 0).Select(k => k.Key);
foreach(var key in keys)
{
List<string> garb = null;
ConnectedUsers.TryRemove(key, out garb);
}
return base.OnDisconnectedAsync(e);
}
}
}
On the frontend, I establish the connection and call the AddMapping method to save the client connection id in the concurrent dictionary. All works fine when no throttling is enabled in the developer console on the frontend. However, if I change the throttling to Slow or Fast 3G, I encounter a weird problem. The connection is established as usual and I invoke the method. From debugging in .NET Core I can see that the method is called on the backend, but the value isn't saved in the concurrent dictionary and nothing is returned. The React client thinks the connection is lost and reconnects. Specifically, I am getting the following errors:
This error only happens when I try to invoke a hub method from the client on a throttled connection. On an unthrottled connection, it works fine.
Additional client code:
const newConnection = new HubConnectionBuilder()
.withUrl(env.realtimeBaseUrl+'orderhub', {
// skipNegotiation: true,
// transport: HttpTransportType.WebSockets
})
.withAutomaticReconnect()
.build();
newConnection.start().then(r => {
fetchOrders()
setConnectionStatus(connectingForFirstTime);
newConnection.invoke("AddMapping",business.id.toString()).then(res=>{
console.log(res)
setConnection(newConnection);
setConnectionStatus(connected)
})
newConnection.on("neworder",(order)=>{
// data handling
})
newConnection.onreconnecting(()=>setConnectionStatus(reconnecting))
newConnection.onreconnected(()=>{
newConnection.invoke("AddMapping",business.id.toString()).then(res=>{
console.log(res)
setConnection(newConnection);
setConnectionStatus(connected)
})
});
newConnection.onclose(()=>setConnection(null));
}).catch(err => setConnectionStatus(off));
I just tried it on one of my sites, and it seems the way the dev tools perform the throttling disrupts WebSocket connections to the point that they don't work bi-directionally, whether on the Slow 3G or Fast 3G simulation. I can reproduce your error on my otherwise working site. My suspicion is the simulator, not your code.
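If you want to rule out the WebSocket transport itself (rather than your hub code), one thing you could try is temporarily restricting the hub to long polling on the server and retesting under throttling. A minimal sketch of the relevant part of Startup.Configure, assuming the orderhub route from the client code above (the rest is an assumption, not from the original post):
public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        // HttpTransportType lives in Microsoft.AspNetCore.Http.Connections.
        endpoints.MapHub<OrderHub>("/orderhub", options =>
        {
            // Force long polling so the throttled simulation only has to handle plain HTTP requests.
            options.Transports = HttpTransportType.LongPolling;
        });
    });
}
If the invoke succeeds with long polling but fails with WebSockets under throttling, that points at the simulated connection rather than the hub or the dictionary code.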
Update Oct. 20, 2021
It seems that publishing to a queue works as expected. When I publish to a queue, the message is persisted on the queue. Conversely, when I publish to a topic, the message is not persisted.
Updated
I have added a simple console app that reproduces the same behavior down below.
I am trying to send a message to a topic in Service Bus from an Azure Function. I have tried this with a managed identity using MassTransit. I have also tried this with a shared access key using the Azure.Messaging.ServiceBus NuGet package. Both methods complete without exception, but the message is not in the topic.
This is what I see when sending from my function:
Additional settings on the topic:
I am able to put messages on the topic using the Service Bus Explorer within the Azure portal.
There are no subscriptions on this topic. I did have one setup as a test earlier, but it has since been deleted.
MassTransit Setup (in Startup.cs)
private void ConfigureMassTransit(IServiceCollection services, IConfiguration config) {
const string KEY_QUEUE_SERVER = "REDACTED";
const string EMAIL_RETRY_TOPIC = "REDACTED";
const string EMAIL_SENT_TOPIC = "REDACTED";
services.AddMassTransit(x => {
x.UsingAzureServiceBus((context, cfg) => {
cfg.Host(new Uri(config[KEY_QUEUE_SERVER]), host => {
host.TokenProvider = TokenProvider.CreateManagedIdentityTokenProvider();
});
cfg.Message<EmailSentEvent>(m => m.SetEntityName(EMAIL_SENT_TOPIC));
cfg.Message<TransactionEmailFailedEvent>(m => m.SetEntityName(EMAIL_RETRY_TOPIC));
cfg.ConfigureEndpoints(context);
});
});
}
MassTransitQueueAdapter.cs
public class MassTransitQueueAdapter : IQueueAdapter {
#region attributes
private readonly IBus _bus;
#endregion
#region ctor
public MassTransitQueueAdapter(IBus bus) {
_bus = bus;
}
#endregion
#region methods
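// Note: Publish returns a Task; neither call below awaits it, so publishing is fire-and-forget.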
public void PublishFailure(TransactionEmailFailedEvent failedEvent) {
_bus.Publish(failedEvent);
}
public void PublishSuccess(EmailSentEvent sentEvent) {
_bus.Publish(sentEvent);
}
#endregion
}
ServiceBusQueueAdapter.cs
public class ServiceBusQueueAdapter : IQueueAdapter {
#region attributes
private readonly QueueContext _context;
#endregion
#region ctor
public ServiceBusQueueAdapter(QueueContext context) {
_context = context;
}
#endregion
#region methods
private static ServiceBusClient BuildClient(string connectionString) => new ServiceBusClient(connectionString);
public void PublishFailure(TransactionEmailFailedEvent failedEvent) {
throw new System.NotImplementedException();
}
public void PublishSuccess(EmailSentEvent sentEvent) {
ServiceBusClient client = BuildClient(_context.SentTopicConnectionString);
ServiceBusSender sender = client.CreateSender(_context.SentTopicName);
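// Note: Task.Run without awaiting makes this send fire-and-forget; the function invocation can
// complete before the message is actually sent, and the client/sender are never disposed.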
Task.Run(() => sender.SendMessageAsync(new ServiceBusMessage(JsonConvert.SerializeObject(sentEvent))));
}
#endregion
}
Simple Console App
class Program {
static void Main() {
string cs = "Endpoint=sb://REDACTED.servicebus.windows.net/;SharedAccessKeyName=test_with_manage;SharedAccessKey=REDACTED;";
ServiceBusClient client = new ServiceBusClient(cs);
ServiceBusSender sender = client.CreateSender("test_1");
sender.SendMessageAsync(new ServiceBusMessage("Hello World!"))
.Wait();
}
}
The problem here was with my understanding of topics in Azure Service Bus. I was expecting the topic to act as a type of storage for messages. I was wrong in that assumption. The topic will only forward messages to subscriptions; a subscription can either persist the message or forward it to another topic or queue.
With that knowledge in mind, I was able to get my Service Bus specific implementations working. I apparently still have a bit to learn about MassTransit.
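For anyone hitting the same thing: because a topic without subscriptions simply drops messages, creating at least one subscription before sending makes the message visible. Below is a minimal sketch built on the console app above, using the administration client from the Azure.Messaging.ServiceBus package; the subscription name test_subscription is an assumption, the topic and connection string are the ones from the sample:
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

class Program {
    static async Task Main() {
        string cs = "Endpoint=sb://REDACTED.servicebus.windows.net/;SharedAccessKeyName=test_with_manage;SharedAccessKey=REDACTED;";

        // Create a subscription so the topic has somewhere to deliver (and retain) the message.
        var admin = new ServiceBusAdministrationClient(cs);
        if (!(await admin.SubscriptionExistsAsync("test_1", "test_subscription")).Value) {
            await admin.CreateSubscriptionAsync("test_1", "test_subscription");
        }

        await using var client = new ServiceBusClient(cs);

        // Send to the topic as before.
        ServiceBusSender sender = client.CreateSender("test_1");
        await sender.SendMessageAsync(new ServiceBusMessage("Hello World!"));

        // The message is now retained by the subscription and can be received from it.
        ServiceBusReceiver receiver = client.CreateReceiver("test_1", "test_subscription");
        ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();
        Console.WriteLine(received.Body.ToString());
    }
}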
Azure WebJobs is now on V3, so this answer is not up to date anymore (How to integration test Azure Web Jobs?).
I imagine we need to do something like this:
var host = CreateHostBuilder(args).Build();
using (var scope = host.Services.CreateScope())
using (host)
{
var jobHost = host.Services.GetService(typeof(IJobHost)) as JobHost;
var arguments = new Dictionary<string, object>
{
// parameters of MyQueueTriggerMethodAsync
};
await host.StartAsync();
await jobHost.CallAsync("MyQueueTriggerMethodAsync", arguments);
await host.StopAsync();
}
QueueTrigger Function
public class MyService
{
private readonly ILogger<MyService> _logger;
public MyService(
ILogger<MyService> logger
)
{
_logger = logger;
}
public async Task MyQueueTriggerMethodAsync(
[QueueTrigger("MyQueue")] MyObj obj
)
{
_logger.LogInformation("ReadFromQueueAsync success");
}
}
But after that, how can I see what's happened?
What do you suggest to be able to do Integration Tests for Azure Webjobs V3?
I'm guessing this is a cross-post from GitHub. The product team recommends looking at their own end-to-end testing for ideas on how to handle integration testing.
To summarize:
You can configure an IHost as a TestHost and add your integrated services to it.
public TestFixture()
{
IHost host = new HostBuilder()
.ConfigureDefaultTestHost<TestFixture>(b =>
{
b.AddAzureStorage();
})
.Build();
var provider = host.Services.GetService<StorageAccountProvider>();
StorageAccount = provider.GetHost().SdkObject;
}
Tests would look something like this:
/// <summary>
/// Covers:
/// - queue binding to custom object
/// - queue trigger
/// - table writing
/// </summary>
public static void QueueToICollectorAndQueue(
[QueueTrigger(TestQueueNameEtag)] CustomObject e2equeue,
[Table(TableName)] ICollector<ITableEntity> table,
[Queue(TestQueueName)] out CustomObject output)
{
const string tableKeys = "testETag";
DynamicTableEntity result = new DynamicTableEntity
{
PartitionKey = tableKeys,
RowKey = tableKeys,
Properties = new Dictionary<string, EntityProperty>()
{
{ "Text", new EntityProperty("before") },
{ "Number", new EntityProperty("1") }
}
};
table.Add(result);
result.Properties["Text"] = new EntityProperty("after");
result.ETag = "*";
table.Add(result);
output = e2equeue;
}
The difficulty in setting up a specific test depends on which triggers and outputs you are using and whether or not an emulator is available for them.
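To answer the "how can I see what's happened?" part: after CallAsync returns, you can assert against whatever the trigger was supposed to produce, for example by reading the output queue (or table) directly with a storage client. A minimal sketch using the Azure.Storage.Queues package against the storage emulator; the connection string and queue name here are assumptions, so substitute whatever your output binding points at:
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class QueueAssertions
{
    // Reads up to 10 messages from the output queue so a test can assert on their contents.
    public static async Task DumpOutputQueueAsync()
    {
        var output = new QueueClient("UseDevelopmentStorage=true", "myqueue");
        var messages = await output.ReceiveMessagesAsync(maxMessages: 10);
        foreach (var message in messages.Value)
        {
            Console.WriteLine(message.MessageText); // assert on this in a real test
        }
    }
}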
I have one .NET 4.5.2 service publishing messages to RabbitMQ via MassTransit.
And multiple instances of a .NET Core 2.1 service consuming those messages.
At the moment, competing instances of the .NET Core consumer service steal messages from the others,
i.e. the first one to consume the message takes it off the queue and the rest of the service instances don't get to consume it.
I want ALL instances to consume the same message.
How can I achieve this?
Publisher Service is configured as follows:
builder.Register(context =>
{
MessageCorrelation.UseCorrelationId<MyWrapper>(x => x.CorrelationId);
return Bus.Factory.CreateUsingRabbitMq(configurator =>
{
configurator.Host(new Uri("rabbitmq://localhost:5671"), host =>
{
host.Username(***);
host.Password(***);
});
configurator.Message<MyWrapper>(x => { x.SetEntityName("my.exchange"); });
configurator.Publish<MyWrapper>(x =>
{
x.AutoDelete = true;
x.Durable = true;
x.ExchangeType = "topic";
});
});
})
.As<IBusControl>()
.As<IBus>()
.SingleInstance();
And the .NET Core Consumer Services are configured as follows:
serviceCollection.AddScoped<MyWrapperConsumer>();
serviceCollection.AddMassTransit(serviceConfigurator =>
{
serviceConfigurator.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri("rabbitmq://localhost:5671"), hostConfigurator =>
{
hostConfigurator.Username(***);
hostConfigurator.Password(***);
});
cfg.ReceiveEndpoint(host, "my.exchange", exchangeConfigurator =>
{
exchangeConfigurator.AutoDelete = true;
exchangeConfigurator.Durable = true;
exchangeConfigurator.ExchangeType = "topic";
exchangeConfigurator.Consumer<MyWrapperConsumer>(provider);
});
}));
});
serviceCollection.AddSingleton<IHostedService, BusService>();
And then MyWrapperConsumer looks like this:
public class MyWrapperConsumer :
IConsumer<MyWrapper>
{
.
.
public MyWrapperConsumer(...) => (..) = (..);
public async Task Consume(ConsumeContext<MyWrapper> context)
{
//Do Stuff
}
}
It sounds like you want to publish messages and have multiple consumer service instances receive them. In that case, each service instance needs to have its own queue. That way, every published message will result in a copy being delivered to each queue. Then, each receive endpoint will read that message from its own queue and consume it.
All that excessive configuration you're doing is going against what you want. To make it work, remove all that exchange type configuration, and just configure each service instance with a unique queue name (you can generate it from host, machine, whatever) and just call Publish on the message producer.
You can see how RabbitMQ topology is configured: https://masstransit-project.com/advanced/topology/rabbitmq.html
Thanks to the answer from Chris Patterson and the comment from Alexey Zimarev, I now believe I have this working.
They pointed out (from my understanding, correct me if I am wrong) that I should stop specifying the exchanges, queues, etc. myself and being so granular with my configuration,
and instead let MassTransit do the work of knowing which exchange to create and publish to, and which queues to create and bind to that exchange, based on my message type MyWrapper and my IConsumer implementation type MyWrapperConsumer.
Then, by giving each consumer service its own unique ReceiveEndpoint name, we end up with the exchange fanning out messages of type MyWrapper to each unique queue created from the names specified.
So, in my case..
THE PUBLISHER SERVICE config relevant lines of code changed FROM:
configurator.Message<MyWrapper>(x => { x.SetEntityName("my.exchange"); });
configurator.Publish<MyWrapper>(x =>
{
x.AutoDelete = true;
x.Durable = true;
x.ExchangeType = "topic";
});
TO THIS
configurator.Message<MyWrapper>(x => { });
configurator.AutoDelete = true;
AND EACH CONSUMERS SERVICE instance config relevant lines of code changed FROM:
cfg.ReceiveEndpoint(host, "my.exchange", exchangeConfigurator =>
{
exchangeConfigurator.AutoDelete = true;
exchangeConfigurator.Durable = true;
exchangeConfigurator.ExchangeType = "topic";
exchangeConfigurator.Consumer<MyWrapperConsumer>(provider);
});
TO THIS:
cfg.ReceiveEndpoint(host, Environment.MachineName, queueConfigurator =>
{
queueConfigurator.AutoDelete = true;
queueConfigurator.Consumer<MyWrapperConsumer>(provider);
});
Note: Environment.MachineName gives a unique queue name for each instance.
We can achieve this by having a separate queue for each consumer service, with each queue bound to a single exchange. When we publish a message to the exchange, it sends a copy of the message to each queue, and each copy is eventually received by its consumer service.
Messages :
namespace Masstransit.Message
{
public interface ICustomerRegistered
{
Guid Id { get; }
DateTime RegisteredUtc { get; }
string Name { get; }
string Address { get; }
}
}
namespace Masstransit.Message
{
public interface IRegisterCustomer
{
Guid Id { get; }
DateTime RegisteredUtc { get; }
string Name { get; }
string Address { get; }
}
}
Publisher Console App :
namespace Masstransit.Publisher
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("CUSTOMER REGISTRATION COMMAND PUBLISHER");
Console.Title = "Publisher window";
RunMassTransitPublisher();
}
private static void RunMassTransitPublisher()
{
string rabbitMqAddress = "rabbitmq://localhost:5672";
string rabbitMqQueue = "mycompany.domains.queues";
Uri rabbitMqRootUri = new Uri(rabbitMqAddress);
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(rabbitMqRootUri, settings =>
{
settings.Password("guest");
settings.Username("guest");
});
});
Task<ISendEndpoint> sendEndpointTask = rabbitBusControl.GetSendEndpoint(new Uri(string.Concat(rabbitMqAddress, "/", rabbitMqQueue)));
ISendEndpoint sendEndpoint = sendEndpointTask.Result;
Task sendTask = sendEndpoint.Send<IRegisterCustomer>(new
{
Address = "New Street",
Id = Guid.NewGuid(),
RegisteredUtc = DateTime.UtcNow,
Name = "Nice people LTD"
}, c =>
{
c.FaultAddress = new Uri("rabbitmq://localhost:5672/accounting/mycompany.queues.errors.newcustomers");
});
Console.ReadKey();
}
}
}
Receiver Management console app :
namespace Masstransit.Receiver.Management
{
class Program
{
static void Main(string[] args)
{
Console.Title = "Management consumer";
Console.WriteLine("MANAGEMENT");
RunMassTransitReceiver();
}
private static void RunMassTransitReceiver()
{
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(new Uri("rabbitmq://localhost:5672"), settings =>
{
settings.Password("guest");
settings.Username("guest");
});
rabbit.ReceiveEndpoint("mycompany.domains.queues.events.mgmt", conf =>
{
conf.Consumer<CustomerRegisteredConsumerMgmt>();
});
});
rabbitBusControl.Start();
Console.ReadKey();
rabbitBusControl.Stop();
}
}
}
Receiver Sales Console app:
namespace Masstransit.Receiver.Sales
{
class Program
{
static void Main(string[] args)
{
Console.Title = "Sales consumer";
Console.WriteLine("SALES");
RunMassTransitReceiver();
}
private static void RunMassTransitReceiver()
{
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(new Uri("rabbitmq://localhost:5672"), settings =>
{
settings.Password("guest");
settings.Username("guest");
});
rabbit.ReceiveEndpoint("mycompany.domains.queues.events.sales", conf =>
{
conf.Consumer<CustomerRegisteredConsumerSls>();
});
});
rabbitBusControl.Start();
Console.ReadKey();
rabbitBusControl.Stop();
}
}
}
You can find a working solution on https://github.com/prasantj409/Masstransit-PublishMultipleConsumer.git
By default, RabbitMQ dispatches each message to only one of the consumers reading from a queue, rotating through them in turn. This type of dispatching is called "round-robin" and is intended for load balancing (you can have multiple instances of your service sharing the work of consuming from the same queue).
As Chris pointed out, to ensure that each of your services always receives its own copy of a message, you need to give each one a unique queue name.
What you need to do:
Make sure that your consumers implement the IConsumer interface with the same generic type
Register all of these consumers
Use the Publish method to send the message
Generally there are two types of messages in MassTransit: events and commands; in this case your message is an event. When your message is a command, only one consumer receives it and you need to use the Send method (see the short sketch after the publishing example below).
Example of Event DTO:
public class OrderChecked
{
public Guid OrderId { get; set; }
}
Consumers:
public class OrderSuccessfullyCheckedConsumer : IConsumer<OrderChecked>
{
public async Task Consume(ConsumeContext<OrderChecked> context)
{
// some your consuming code
}
}
public class OrderSuccessfullyCheckedConsumer2 : IConsumer<OrderChecked>
{
public async Task Consume(ConsumeContext<OrderChecked> context)
{
// some your second consuming code
}
}
Configuring:
services.AddMassTransit(c =>
{
c.AddConsumer<OrderSuccessfullyCheckedConsumer>();
c.AddConsumer<OrderSuccessfullyCheckedConsumer2>();
c.SetKebabCaseEndpointNameFormatter();
c.UsingRabbitMq((context, cfg) =>
{
cfg.ConfigureEndpoints(context);
});
});
services.AddMassTransitHostedService(true);
Publishing the message:
var endpoint = await _bus.GetPublishSendEndpoint<OrderChecked>();
await endpoint.Send(new OrderChecked
{
OrderId = newOrder.Id
});
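For the command case mentioned above, the message would be sent to one specific endpoint instead of published. A minimal sketch, assuming a SubmitOrder command type and a submit-order queue (both are made-up names, not part of the original code):
// Command: delivered to exactly one consumer via a specific queue.
var sendEndpoint = await _bus.GetSendEndpoint(new Uri("queue:submit-order"));
await sendEndpoint.Send(new SubmitOrder
{
    OrderId = newOrder.Id
});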
I want to share a slightly different code example.
instanceId:
Specifies an identifier that uniquely identifies the endpoint
instance, which is appended to the end of the endpoint name.
services.AddMassTransit(x => {
x.SetKebabCaseEndpointNameFormatter();
Guid instanceId = Guid.NewGuid();
x.AddConsumer<MyConsumer>()
.Endpoint(c => c.InstanceId = instanceId.ToString());
x.UsingRabbitMq((context, cfg) => {
...
cfg.ConfigureEndpoints(context);
});
});