I have one .NET 4.5.2 service publishing messages to RabbitMQ via MassTransit,
and multiple instances of a .NET Core 2.1 service consuming those messages.
At the moment the competing instances of the .NET Core consumer service steal messages from one another,
i.e. the first one to consume a message takes it off the queue and the rest of the service instances never get to consume it.
I want ALL instances to consume the same message.
How can I achieve this?
Publisher Service is configured as follows:
builder.Register(context =>
{
MessageCorrelation.UseCorrelationId<MyWrapper>(x => x.CorrelationId);
return Bus.Factory.CreateUsingRabbitMq(configurator =>
{
configurator.Host(new Uri("rabbitmq://localhost:5671"), host =>
{
host.Username(***);
host.Password(***);
});
configurator.Message<MyWrapper>(x => { x.SetEntityName("my.exchange"); });
configurator.Publish<MyWrapper>(x =>
{
x.AutoDelete = true;
x.Durable = true;
x.ExchangeType = "topic";
});
});
})
.As<IBusControl>()
.As<IBus>()
.SingleInstance();
And the .NET Core Consumer Services are configured as follows:
serviceCollection.AddScoped<MyWrapperConsumer>();
serviceCollection.AddMassTransit(serviceConfigurator =>
{
serviceConfigurator.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri("rabbitmq://localhost:5671"), hostConfigurator =>
{
hostConfigurator.Username(***);
hostConfigurator.Password(***);
});
cfg.ReceiveEndpoint(host, "my.exchange", exchangeConfigurator =>
{
exchangeConfigurator.AutoDelete = true;
exchangeConfigurator.Durable = true;
exchangeConfigurator.ExchangeType = "topic";
exchangeConfigurator.Consumer<MyWrapperConsumer>(provider);
});
}));
});
serviceCollection.AddSingleton<IHostedService, BusService>();
And then MyWrapperConsumer looks like this:
public class MyWrapperConsumer :
IConsumer<MyWrapper>
{
.
.
public MyWrapperConsumer(...) => (..) = (..);
public async Task Consume(ConsumeContext<MyWrapper> context)
{
//Do Stuff
}
}
It sounds like you want to publish messages and have multiple consumer service instances receive them. In that case, each service instance needs to have its own queue. That way, every published message will result in a copy being delivered to each queue. Then, each receive endpoint will read that message from its own queue and consume it.
All that excessive configuration you're doing is going against what you want. To make it work, remove all that exchange type configuration, and just configure each service instance with a unique queue name (you can generate it from host, machine, whatever) and just call Publish on the message producer.
You can see how RabbitMQ topology is configured: https://masstransit-project.com/advanced/topology/rabbitmq.html
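For the publish side, a minimal sketch of what that could look like, assuming the container-registered bus is resolved as bus and that MyWrapper can be constructed as shown (neither detail appears in the question):

// Publish (rather than Send) is what causes a copy of the message to be delivered
// to every queue bound to the MyWrapper exchange, i.e. to every service instance.
var myWrapper = new MyWrapper(); // how MyWrapper is constructed/populated isn't shown in the question
await bus.Publish(myWrapper);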
Thanks to the answer from Chris Patterson and the comment from Alexey Zimarev, I now believe I have this working.
They pointed out (from my understanding, correct me if I am wrong) that I should stop specifying the exchanges and queues myself and stop being so granular with my configuration.
Instead, I should let MassTransit work out which exchange to create and publish to, and which queues to create and bind to that exchange, based on my message type MyWrapper and my IConsumer implementation MyWrapperConsumer.
Then, by giving each consumer service its own unique ReceiveEndpoint name, the exchange fans out messages of type MyWrapper to each of the uniquely named queues it creates.
So, in my case..
THE PUBLISHER SERVICE config's relevant lines of code changed FROM:
configurator.Message<MyWrapper>(x => { x.SetEntityName("my.exchange"); });
configurator.Publish<MyWrapper>(x =>
{
x.AutoDelete = true;
x.Durable = true;
x.ExchangeType = "topic";
});
TO THIS
configurator.Message<MyWrapper>(x => { });
configurator.AutoDelete = true;
AND EACH CONSUMER SERVICE instance's relevant config lines changed FROM:
cfg.ReceiveEndpoint(host, "my.exchange", exchangeConfigurator =>
{
exchangeConfigurator.AutoDelete = true;
exchangeConfigurator.Durable = true;
exchangeConfigurator.ExchangeType = "topic";
exchangeConfigurator.Consumer<MyWrapperConsumer>(provider);
});
TO THIS:
cfg.ReceiveEndpoint(host, Environment.MachineName, queueConfigurator =>
{
queueConfigurator.AutoDelete = true;
queueConfigurator.Consumer<MyWrapperConsumer>(provider);
});
Note, the Environment.MachineName gives the unique queue name for each instance
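If more than one consumer instance can run on the same machine, though, the machine name alone will collide and those instances will be back to competing on a single queue. One way around that is to append a per-instance suffix; the suffix scheme below is an assumption on my part, not something from the original setup:

// Hypothetical per-instance queue name: machine name plus a short random suffix,
// so two instances on the same host don't end up sharing (and competing on) a queue.
var instanceQueueName = $"{Environment.MachineName}-{Guid.NewGuid():N}";
cfg.ReceiveEndpoint(host, instanceQueueName, queueConfigurator =>
{
    // AutoDelete so the per-instance queue is removed once the instance disconnects.
    queueConfigurator.AutoDelete = true;
    queueConfigurator.Consumer<MyWrapperConsumer>(provider);
});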
We can achieve this by having a separate queue for each consumer service, with each queue bound to a single exchange. When we publish a message to the exchange, it sends a copy of the message to each queue, and each consumer service eventually receives it.
Messages :
namespace Masstransit.Message
{
public interface ICustomerRegistered
{
Guid Id { get; }
DateTime RegisteredUtc { get; }
string Name { get; }
string Address { get; }
}
}
namespace Masstransit.Message
{
public interface IRegisterCustomer
{
Guid Id { get; }
DateTime RegisteredUtc { get; }
string Name { get; }
string Address { get; }
}
}
Publisher Console App :
namespace Masstransit.Publisher
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("CUSTOMER REGISTRATION COMMAND PUBLISHER");
Console.Title = "Publisher window";
RunMassTransitPublisher();
}
private static void RunMassTransitPublisher()
{
string rabbitMqAddress = "rabbitmq://localhost:5672";
string rabbitMqQueue = "mycompany.domains.queues";
Uri rabbitMqRootUri = new Uri(rabbitMqAddress);
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(rabbitMqRootUri, settings =>
{
settings.Password("guest");
settings.Username("guest");
});
});
Task<ISendEndpoint> sendEndpointTask = rabbitBusControl.GetSendEndpoint(new Uri(string.Concat(rabbitMqAddress, "/", rabbitMqQueue)));
ISendEndpoint sendEndpoint = sendEndpointTask.Result;
Task sendTask = sendEndpoint.Send<IRegisterCustomer>(new
{
Address = "New Street",
Id = Guid.NewGuid(),
RegisteredUtc = DateTime.UtcNow,
Name = "Nice people LTD"
}, c =>
{
c.FaultAddress = new Uri("rabbitmq://localhost:5672/accounting/mycompany.queues.errors.newcustomers");
});
Console.ReadKey();
}
}
}
Receiver Management console app :
namespace Masstransit.Receiver.Management
{
class Program
{
static void Main(string[] args)
{
Console.Title = "Management consumer";
Console.WriteLine("MANAGEMENT");
RunMassTransitReceiver();
}
private static void RunMassTransitReceiver()
{
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(new Uri("rabbitmq://localhost:5672"), settings =>
{
settings.Password("guest");
settings.Username("guest");
});
rabbit.ReceiveEndpoint("mycompany.domains.queues.events.mgmt", conf =>
{
conf.Consumer<CustomerRegisteredConsumerMgmt>();
});
});
rabbitBusControl.Start();
Console.ReadKey();
rabbitBusControl.Stop();
}
}
}
Receiver Sales Console app:
namespace Masstransit.Receiver.Sales
{
class Program
{
static void Main(string[] args)
{
Console.Title = "Sales consumer";
Console.WriteLine("SALES");
RunMassTransitReceiver();
}
private static void RunMassTransitReceiver()
{
IBusControl rabbitBusControl = Bus.Factory.CreateUsingRabbitMq(rabbit =>
{
rabbit.Host(new Uri("rabbitmq://localhost:5672"), settings =>
{
settings.Password("guest");
settings.Username("guest");
});
rabbit.ReceiveEndpoint("mycompany.domains.queues.events.sales", conf =>
{
conf.Consumer<CustomerRegisteredConsumerSls>();
});
});
rabbitBusControl.Start();
Console.ReadKey();
rabbitBusControl.Stop();
}
}
}
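The consumer classes referenced above (CustomerRegisteredConsumerMgmt and CustomerRegisteredConsumerSls) aren't shown in this answer; a minimal sketch of the Sales one, assuming it simply logs the event (the full version is in the repository linked below):

namespace Masstransit.Receiver.Sales
{
    // Sketch only -- the real consumer lives in the linked GitHub repository.
    public class CustomerRegisteredConsumerSls : IConsumer<ICustomerRegistered>
    {
        public Task Consume(ConsumeContext<ICustomerRegistered> context)
        {
            Console.WriteLine($"Sales received customer {context.Message.Name} ({context.Message.Id})");
            return Task.CompletedTask;
        }
    }
}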
You can find a working solution on https://github.com/prasantj409/Masstransit-PublishMultipleConsumer.git
By default, RabbitMQ dispatches each message from a queue to only one of that queue's consumers, rotating through them in turn. This type of dispatching is called "round-robin" and is made for load balancing (multiple instances of your service share the work on a single queue).
As Chris pointed out, to ensure that each of your services always receives its own copy of a message, you need to give each one a unique queue name.
What you need to do:
Make sure that your consumers implement the IConsumer interface with the same generic type
Register all of these consumers
Use the Publish method to send the message
Generally there are two types of messages in MassTransit: events and commands, and in this case your message is an event. When your message is a command, only one consumer receives the message, and you need to use the Send method (see the sketch at the end of this answer).
Example of Event DTO:
public class OrderChecked
{
public Guid OrderId { get; set; }
}
Consumers:
public class OrderSuccessfullyCheckedConsumer : IConsumer<OrderChecked>
{
public async Task Consume(ConsumeContext<OrderChecked> context)
{
// some your consuming code
}
}
public class OrderSuccessfullyCheckedConsumer2 : IConsumer<OrderChecked>
{
public async Task Consume(ConsumeContext<OrderChecked> context)
{
// some your second consuming code
}
}
Configuring:
services.AddMassTransit(c =>
{
c.AddConsumer<OrderSuccessfullyCheckedConsumer>();
c.AddConsumer<OrderSuccessfullyCheckedConsumer2>();
c.SetKebabCaseEndpointNameFormatter();
c.UsingRabbitMq((context, cfg) =>
{
cfg.ConfigureEndpoints(context);
});
});
services.AddMassTransitHostedService(true);
Publishing the message:
var endpoint = await _bus.GetPublishSendEndpoint<OrderChecked>();
await endpoint.Send(new OrderChecked
{
OrderId = newOrder.Id
});
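For the command case mentioned above, where only one consumer should receive the message, a rough sketch using Send instead of Publish (the queue address here is hypothetical):

// Send targets one specific queue, so exactly one consuming endpoint handles the
// message, unlike Publish, which delivers a copy to every subscribed endpoint.
var sendEndpoint = await _bus.GetSendEndpoint(new Uri("queue:order-checking"));
await sendEndpoint.Send(new OrderChecked
{
    OrderId = newOrder.Id
});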
I want to share a slightly different code example. InstanceId specifies an identifier that uniquely identifies the endpoint instance, which is appended to the end of the endpoint name:
services.AddMassTransit(x => {
x.SetKebabCaseEndpointNameFormatter();
Guid instanceId = Guid.NewGuid();
x.AddConsumer<MyConsumer>()
.Endpoint(c => c.InstanceId = instanceId.ToString());
x.UsingRabbitMq((context, cfg) => {
...
cfg.ConfigureEndpoints(context);
});
});
Related
I have a .net core backend with SignalR and a react frontend. I have a basic hub set up with ConcurrentDictionary to manage connection ids:
namespace backend.Hubs
{
public class OrderHub : Hub, IOrderHub
{
private readonly IHubContext<OrderHub> hubContext;
public static ConcurrentDictionary<string, List<string>> ConnectedUsers = new ConcurrentDictionary<string, List<string>>();
public OrderHub(IHubContext<OrderHub> hubContext)
{
this.hubContext = hubContext;
}
public async Task SendMessage(OrderResponseDto order, string id)
{
List<string> connectedUser = null;
ConnectedUsers.TryGetValue(id, out connectedUser);
await hubContext.Clients.Clients(connectedUser).SendAsync("neworder",order);
}
public void AddMapping(string id)
{
List<string> existingUserConnectionIds;
ConnectedUsers.TryGetValue(id, out existingUserConnectionIds);
if (existingUserConnectionIds == null)
{
existingUserConnectionIds = new List<string>();
}
existingUserConnectionIds.Add(Context.ConnectionId);
ConnectedUsers.TryAdd(id, existingUserConnectionIds);
}
public override Task OnDisconnectedAsync(Exception e)
{
List<string> existingUserConnectionIds = null;
foreach(var val in ConnectedUsers.Values)
{
if (val.Contains(Context.ConnectionId))
{
existingUserConnectionIds = val;
break;
}
}
if (existingUserConnectionIds != null)
{
existingUserConnectionIds.Remove(Context.ConnectionId);
}
var keys = ConnectedUsers.Where(en => en.Value.Count == 0).Select(k => k.Key);
foreach(var key in keys)
{
List<string> garb = null;
ConnectedUsers.TryRemove(key, out garb);
}
return base.OnDisconnectedAsync(e);
}
}
}
On the frontend, I establish the connection and call the AddMapping method to save the client connection id in the concurrent dictionary. Everything works fine when no throttling is enabled in the browser's developer tools. However, if I set throttling to Slow 3G or Fast 3G, I run into a weird problem: the connection is established as usual and I invoke the method, and from debugging the .NET Core backend I can see the method is called, but the value isn't saved in the concurrent dictionary and nothing is returned. The React client then thinks the connection is lost and reconnects, and errors appear in the console.
These errors only happen when I invoke a hub method from the client on a throttled connection; on an unthrottled connection it works fine.
Additional client code :
const newConnection = new HubConnectionBuilder()
.withUrl(env.realtimeBaseUrl+'orderhub', {
// skipNegotiation: true,
// transport: HttpTransportType.WebSockets
})
.withAutomaticReconnect()
.build();
newConnection.start().then(r => {
fetchOrders()
setConnectionStatus(connectingForFirstTime);
newConnection.invoke("AddMapping",business.id.toString()).then(res=>{
console.log(res)
setConnection(newConnection);
setConnectionStatus(connected)
})
newConnection.on("neworder",(order)=>{
// data handling
})
newConnection.onreconnecting(()=>setConnectionStatus(reconnecting))
newConnection.onreconnected(()=>{
newConnection.invoke("AddMapping",business.id.toString()).then(res=>{
console.log(res)
setConnection(newConnection);
setConnectionStatus(connected)
})
});
newConnection.onclose(()=>setConnection(null));
}).catch(err => setConnectionStatus(off));
I just tried it on one of my sites, and it seems the way the dev tools perform throttling disrupts WebSocket connections to the point that they no longer work bi-directionally, on both the Slow 3G and Fast 3G simulations. I can reproduce your error on my otherwise working site. My suspicion is the simulator, not your code.
I'm using MassTransit.Kafka to produce and consume messages in batches. When I consume messages one by one everything works fine, but when I try to consume them in batches I get an error:
Confluent.Kafka.ConsumeException: Local: Value deserialization error
---> System.InvalidOperationException: Exception creating proxy (GreenPipes.DynamicInternal.MassTransit.Batch<Aiforfit.WSW.DataStructures.Events.UserEvent>) for MassTransit.Batch<Aiforfit.WSW.DataStructures.Events.UserEvent>
---> System.TypeLoadException: Method 'get_Item' in type 'GreenPipes.DynamicInternal.MassTransit.Batch<Aiforfit.WSW.DataStructures.Events.UserEvent>' from assembly 'MassTransitGreenPipes.DynamicInternal3c37dde6a7c744b796f7ac1cf544383b, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation.
It looks like a Newtonsoft deserialization error, but everything is set up according to the MassTransit documentation. I've tried converting UserEvent to an interface, since every model in the documentation is an interface, but it didn't help.
Configuration:
public static IServiceCollection AddKafka(this IServiceCollection services, IConfigurationSection section)
{
var config = section.Get<EventMessagingOptions>().Kafka;
services.AddMassTransitHostedService();
services.AddMassTransit(x =>
{
x.UsingInMemory((context, cfg) =>
{
cfg.ConfigureEndpoints(context);
cfg.UseRawJsonSerializer();
});
x.AddRider(rider =>
{
rider.AddConsumer<UserEventConsumer>(typeof(UserEventConsumerDefinition));
rider.UsingKafka((ctx, k) =>
{
k.SecurityProtocol = config.SecurityProtocol;
k.Host(config.Host, configurator =>
{
configurator.UseSasl(saslConfigurator =>
{
saslConfigurator.Username = config.Username;
saslConfigurator.Password = config.Password;
saslConfigurator.Mechanism = config.SaslMechanism;
});
});
k.TopicEndpoint<Batch<UserEvent>>(config.Topics.UserEvent, config.Topics.UserEventGroupId, e =>
{
e.AutoOffsetReset = AutoOffsetReset.Earliest;
e.ConfigureConsumer<UserEventConsumer>(ctx);
});
});
});
});
return services;
}
public class UserEventConsumerDefinition : ConsumerDefinition<UserEventConsumer>
{
public UserEventConsumerDefinition()
=> Endpoint(x => x.PrefetchCount = 500);
protected override void ConfigureConsumer(
IReceiveEndpointConfigurator endpointConfigurator,
IConsumerConfigurator<UserEventConsumer> consumerConfigurator)
{
consumerConfigurator.Options<BatchOptions>(options => options
.SetMessageLimit(500)
.SetConcurrencyLimit(25));
}
}
public class UserEventConsumer : IConsumer<Batch<UserEvent>>
{
private readonly ICluster _cluster;
public UserEventConsumer(ICluster cluster)
=> _cluster = cluster;
public async Task Consume(ConsumeContext<Batch<UserEvent>> context)
{
Console.WriteLine(context.Message.Length);
}
}
public class UserEvent
{
public Guid EventId { get; set; } = Guid.NewGuid();
public Guid UserId { get; set; }
public string Test { get; set; }
}
In the case of a topic endpoint, you still specify TopicEndpoint<UserEvent> for the topic endpoint, and consume Batch<UserEvent> in your consumer. When the consumer is configured on the topic endpoint using ConfigureConsumer<UserEventConsumer>(ctx), MassTransit will properly handle mapping a batch of events to your consumer.
Assuming you are on the latest version of MassTransit, it should increase the ConcurrentMessageLimit on the topic endpoint to match the batch message capacity.
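Combined with the question's own settings, that would look roughly like the sketch below (topic name, group id, and consumer are taken from the question; treat this as a sketch rather than a verified configuration):

// The topic endpoint is declared with the message type itself, not Batch<>:
k.TopicEndpoint<UserEvent>(config.Topics.UserEvent, config.Topics.UserEventGroupId, e =>
{
    e.AutoOffsetReset = AutoOffsetReset.Earliest;
    // The consumer still implements IConsumer<Batch<UserEvent>>; ConfigureConsumer
    // wires up the batching so the consumer receives batches of UserEvent.
    e.ConfigureConsumer<UserEventConsumer>(ctx);
});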
I've encountered a weird issue with the naming of queues that are created in Azure Service Bus and used by MassTransit.
The URI is "sb://{@namespace}.servicebus.windows.net/{path}/{queueName}".
E.g. if the path equals dev and queueName contains dev as a substring, e.g. devices, then dev is removed and I see a queue named ices created.
The same happens if path = test.
I have not found any reserved words for queue naming, so I wonder if there are any.
The issue happens only when sending messages from an ASP.NET Web API process. For an Azure Worker Role everything works fine.
The methods used are:
public static Uri BuildQueueUri(string @namespace, string path, string queueName)
{
return new Uri($"sb://{@namespace}.servicebus.windows.net/{path}/{queueName}");
}
protected Task<ISendEndpoint> EstablishSendEndpoint(string queueName)
{
Uri uri = BusConfiguration.BuildQueueUri(
Settings.GetSetting(ConfigKeys.ServiceBusNamespace),
Settings.GetSetting(ConfigKeys.ServiceBusPath),
queueName);
return BusControl.GetSendEndpoint(uri);
}
public async Task<IHttpActionResult> SendGetIpsToInterceptCommand()
{
ISendEndpoint endpoint = await EstablishSendEndpoint(BusConfiguration.SendCommandsQueue);
var guid = Guid.NewGuid();
await endpoint.Send<ICGetItemsToInterceptCommand>(new
{
CommandId = guid
});
return Ok(guid);
}
MassTransit configuration (using Autofac)
private static void RegisterMicroserviceBus(ContainerBuilder builder)
{
builder.Register(c =>
Bus.Factory.CreateUsingAzureServiceBus(sbc =>
{
var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb",
Settings.GetSetting(ConfigKeys.ServiceBusNamespace),
Settings.GetSetting(ConfigKeys.ServiceBusPath));
sbc.Host(serviceUri, h =>
{
h.TokenProvider =
TokenProvider.CreateSharedAccessSignatureTokenProvider(
Settings.GetSetting(ConfigKeys.ServiceBusKeyName),
Settings.GetSetting(ConfigKeys.ServiceBusAccessKey),
TimeSpan.FromDays(365),
TokenScope.Namespace);
});
}))
.SingleInstance()
.As<IBusControl>()
.As<IBus>();
}
I'm having a problem using events in my ServiceStack application.
I'm creating a SOA application based on ServiceStack. I've had no problem creating a simple GET/POST handler within the host.
Now I would like to add events.
I'm following an example, but the event is never received by the client.
Does anyone have an idea what's wrong?
This is my server:
ServiceStack.Text.JsConfig.EmitCamelCaseNames = true;
ServerEventsFeature serverEventsFeature = new ServerEventsFeature()
{
LimitToAuthenticatedUsers = false,
NotifyChannelOfSubscriptions = true,
OnPublish = (res, msg) =>
{
//fired after ever message is published
res.Write("\n\n\n\n\n\n\n\n\n\n");
res.Flush();
},
OnConnect = (eventSubscription, dictionary) =>
{
},
OnSubscribe = (eventSubscription) =>
{
}
};
Plugins.Add(serverEventsFeature);
container.Register<IServerEvents>(c => new MemoryServerEvents());
container.Register(c => new FrontendMessages(c.Resolve<IServerEvents>()));
container.Register<IWebServiceEventManager>(c => new WebServiceEventManager(DeviceManager, macroManager));
SetConfig(new HostConfig
{
DefaultContentType = MimeTypes.Json,
EnableFeatures = Feature.All.Remove(Feature.Html),
});
public class FrontendMessage
{
public string Level { get; set; }
public string Message { get; set; }
}
public class FrontendMessages
{
private readonly IServerEvents _serverEvents;
private Timer _timer;
public FrontendMessages(IServerEvents serverEvents)
{
if (serverEvents == null) throw new ArgumentNullException(nameof(serverEvents));
_serverEvents = serverEvents;
}
public void Start()
{
var ticks = 0;
_timer = new Timer(_ => {
Info($"Tick {ticks++}");
_timer.Change(500, Timeout.Infinite);
}, null, 500, Timeout.Infinite);
}
public void Info(string message, params object[] parameters)
{
var frontendMessage = new FrontendMessage
{
Level = "success",
Message = message
};
Console.WriteLine("Sending message: " + frontendMessage.Message);
_serverEvents.NotifyChannel("messages", frontendMessage);
}
}
This is my client:
public async void Connect()
{
try
{
Task.Delay(2000).Wait();
clientEvents = new ServerEventsClient("http://127.0.0.1:20001/", "messages");
clientEvents.OnConnect = (msg) =>
{
};
clientEvents.OnHeartbeat = () =>
{
};
clientEvents.OnCommand = (msg) =>
{
};
clientEvents.OnException = (msg) =>
{
};
clientEvents.OnMessage = (msg) =>
{
};
Dictionary<string, ServerEventCallback> handlers = new Dictionary<string, ServerEventCallback>();
handlers.Add("messages", (client, msg) =>
{
});
clientEvents.RegisterHandlers(handlers);
await clientEvents.Connect();
client = (IServiceClient)(clientEvents.ServiceClient);
}
catch (Exception e)
{
}
}
I'd first recommend looking at ServerEvents Examples and the docs for the C# ServerEventsClient for examples of working configurations.
Your extra ServerEventsFeature configuration isn't useful, as you're just specifying the defaults, and the OnPublish new-line hack isn't needed when you disable buffering in ASP.NET. So I would change it to:
Plugins.Add(new ServerEventsFeature());
The second issue is that your use of message event handlers is incorrect: your C# ServerEventsClient is already connected to the messages channel. Your handlers dictionary is used to listen for messages sent to the cmd.* selector (e.g. cmd.FrontendMessage).
Since you're publishing a DTO to a channel, i.e:
_serverEvents.NotifyChannel("messages", frontendMessage);
You can use a Global Receiver to handle it, e.g:
public class GlobalReceiver : ServerEventReceiver
{
public void Any(FrontendMessage request)
{
...
}
}
client.RegisterReceiver<GlobalReceiver>();
Thanks mythz!
It works correctly.
The next step is to replicate the same behaviour in a JavaScript client (events and GET/POST requests). Do you have any suggestions?
Thanks a lot!
Leo
When creating an NServiceBus SendOnly endpoint, the purpose is just fire-and-forget, i.e. send a message and let someone else take care of it. Which seems like exactly what I need: I don't want any communication between the bus and the system handling the messages. System "A" wants to notify system "B" about something.
Creating a SendOnly endpoint is very straightforward, but what about the system listening for messages from a SendOnly endpoint?
I'm trying to set up a listener in a command-line project that will handle the messages. The messages get sent to the queue, but they don't get handled by system "B".
Is this the wrong approach? Is a bus overkill for this type of functionality?
System A:
public class Program
{
static void Main(string[] args)
{
var container = new UnityContainer();
var bus = Configure.With()
.UnityBuilder(container)
.JsonSerializer()
.Log4Net()
.MsmqTransport()
.UnicastBus()
.SendOnly();
while(true)
{
Console.WriteLine("Send a message");
var message = new Message(Console.ReadLine());
bus.Send(message);
}
}
}
System B:
class Program
{
static void Main(string[] args)
{
var container = new UnityContainer();
var bus = Configure.With()
.UnityBuilder(container)
.JsonSerializer()
.Log4Net()
.MsmqTransport()
.UnicastBus()
.LoadMessageHandlers()
.CreateBus()
.Start();
Console.WriteLine("Waiting for messages...");
while(true)
{
}
}
}
public class MessageHandler : IHandleMessages<Message>
{
public void Handle(Message message)
{
Console.WriteLine(message.Data);
}
}
public class Message : IMessage
{
public Message()
{
}
public Message(string data)
{
Data = data;
}
public string Data { get; set; }
}
In the MessageEndpointMappings you need to update the mapping as follows:
Replace DLL with the name of the assembly containing your messages (e.g. "Messages")
Change the Endpoint to the name of the queue which System B is reading from (you can check the queue name by looking in the MSMQ snap-in under private queues).
<add Messages="Messages" Endpoint="SystemB" />
NServiceBus 3 automatically creates the queue name based upon the namespace of the hosting assembly.
Additionally, you may want to look at using the NServiceBus.Host to host your handlers instead of your own console application.