Multiple instances of the same subscriber on different machines - c#

I am trying to implement the publisher/subscriber pattern using NServiceBus. What I am trying to do is as follows:
I have a web application where users can write news and add documents using their accounts.
I have a Windows Forms desktop application that runs on users' machines; they can log in to this desktop app using the same credentials they use for the web application.
What I need is: when users add news or documents in the web application, the installed desktop app should receive a notification informing it about the added news.
I imagine that my desktop application will be the subscriber, subscribing to events published from the server.
What I need to know is:
when multiple users, for example 1000 users, install this desktop app on different machines, will NServiceBus consider them as multiple subscribers and send a copy of each message to every one of them?

As described in the documentation, it works like this:
If you use Send(ICommand command), the message is sent directly to the matching endpoint. In this case, if you have multiple consumers, they compete for the message and only one gets it. This is usually used to scale applications.
If you do Publish(IEvent @event), your message gets delivered to all subscribers.

When Pub/Sub happens in NServiceBus, there is one logical publisher that sends an event to multiple logical subscribers. Each logical subscriber will only receive one copy of each event that is published.
If many subscribers represent the same logical entity then each event will be processed by only one of the subscribers within that logical entity. Two subscribers are considered to represent the same logical entity if they share an Endpoint Name.
In this case, if every instance of the desktop application has the same endpoint name, then they are considered to represent the same logical entity. When an event is published, only one of those desktop application instances will get a copy of it. That is probably not what you want.
If you give each instance of the desktop application a different endpoint name then they each represent their own logical entity. When an event is published, all of the desktop instances will get a copy of the event. That is also probably not what you want.
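The two cases above can be sketched with a small in-memory simulation (plain C#, no NServiceBus involved; only the grouping-by-endpoint-name behaviour is the point, not the real API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Subscriber
{
    public string EndpointName;
    public List<string> Received = new List<string>();
}

public static class PubSubDemo
{
    // One copy of each published event per *logical* subscriber
    // (endpoint name); instances sharing a name compete for that copy.
    public static void Publish(string evt, IEnumerable<Subscriber> subscribers)
    {
        foreach (var group in subscribers.GroupBy(s => s.EndpointName))
            group.First().Received.Add(evt); // in reality: round-robin/random
    }

    public static void Main()
    {
        // Three desktop instances sharing one endpoint name:
        // one copy of the event between them.
        var shared = Enumerable.Range(1, 3)
            .Select(_ => new Subscriber { EndpointName = "Desktop" }).ToList();
        Publish("NewsAdded", shared);
        Console.WriteLine(shared.Sum(s => s.Received.Count)); // 1

        // Unique endpoint names: every instance gets its own copy.
        var unique = Enumerable.Range(1, 3)
            .Select(i => new Subscriber { EndpointName = "Desktop-" + i }).ToList();
        Publish("NewsAdded", unique);
        Console.WriteLine(unique.Sum(s => s.Received.Count)); // 3
    }
}
```

Neither behaviour matches the "notify every online desktop" requirement, which is why the answer below points toward SignalR instead.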
Another thing to consider is that pub/sub in NServiceBus is persistent and reliable. If you start a desktop application and it sets up a subscription, that subscription remains in effect even when the desktop application is offline. Messages may queue up waiting for the desktop application to come back online, even if you never run the desktop application again. Does your desktop application need those messages, or do you only care about events that occur while you are connected?
If you want to notify a specific subscriber, or the subset of subscribers that are online when an event occurs on the server, then I would use SignalR before using NServiceBus.
Additional Info
MSMQ and SQL Server do not support pub/sub out of the box, so they use a feature called message-driven subscriptions. Message-driven subscriptions work by having the subscriber send a message to the publisher when it starts up (or calls Subscribe()). This message contains a Reply-To header, which the publisher stores and uses to send a copy of each message that is published.
For SQL Server transport, the Reply To address will be the name of the Endpoint. This in turn is mapped to a table in the database that represents the queue of messages. If multiple subscribers share the same Endpoint name (and hence the same table) then they will get one copy of the event between them and will be competing to see which subscriber pulls it from the queue.
For MSMQ, the Reply-To address is the name of the endpoint and the name of the machine separated by an @ symbol. This means that each subscription is for a different address. When the event is published, each subscriber gets its own copy of it. If multiple subscribers represent the same logical entity (i.e. share an endpoint name), then you should have a Distributor in front of them. When you do this, the Reply-To header on outgoing subscribe messages will be the address of the Distributor. When an event is published, it always goes to the Distributor, which hands it off to one (and only one) of its workers for processing.
RabbitMQ and Azure Service Bus have built-in mechanisms for pub/sub, but they behave in a manner similar to the SQL Server transport: one instance of the logical subscriber gets a copy of the event.

Related

Have ServiceBus respect transaction scope?

Within code I need to atomically:
//write to a database table
//publish EventA to service bus
//write to another database table
//publish EventB to service bus
If either of the db writes fails, I can easily roll everything back within a transaction scope. However if that happens, it's essential the events are never published to service bus.
Ideally I need the service bus library to 'wait until the transaction scope successfully completes' before publishing the message onto the bus.
I'm working with legacy .net framework code - I could easily write something which holds events to raise in memory, and only raise these once the scope completes. The problem even then is that if EventB fails to publish, EventA has already been published.
What's the easiest way to include service bus events within a transaction scope?
I don't think there's an easy way to address this. Azure Service Bus will not share a transaction with any other service, just as a database transaction cannot include a web service call. Any way to go about it will require some additional complexity compared to the simple DTC transactions that were possible on-premises (to an extent).
But it can be done. The patterns that make it possible, Outbox (unit of work) and Inbox (idempotency), are described very well in this post:
Outbox Pattern - This pattern ensures that a message is sent (e.g. to a queue) successfully at least once. With this pattern, instead of directly publishing a message to the queue, we store it in temporary storage (e.g. a database table). We wrap the entity save and the message storage in one unit of work (transaction). By that, we make sure that if the application data was stored, the message won't be lost. It will be published later by a background process that checks whether there are any unsent events in the table. When the worker finds such messages, it tries to send them. After it gets confirmation of publishing (e.g. an ACK from the queue) it marks the event as sent.
Inbox Pattern - This is a pattern similar to the Outbox Pattern. It's used to handle incoming messages (e.g. from a queue). Accordingly, we have a table in which we store incoming events. In contrast to the outbox pattern, we first save the event in the database, then return the ACK to the queue. If the save succeeded but we didn't return the ACK, delivery will be retried; that's why we have at-least-once delivery again. After that, an outbox-like process runs, calling the message handlers that perform the business logic.
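A minimal in-memory sketch of the outbox flow described above (all names are hypothetical; a real implementation would use database tables and a real transaction):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OutboxMessage
{
    public Guid Id = Guid.NewGuid();
    public string Payload;
    public bool Sent;
}

public class OutboxDemo
{
    // Stand-ins for the business table, the outbox table, and the broker.
    public List<string> Entities = new List<string>();
    public List<OutboxMessage> Outbox = new List<OutboxMessage>();
    public List<string> Queue = new List<string>();

    // Step 1: save the entity and the outgoing event in ONE unit of work.
    // In a real system both inserts share a database transaction, so
    // either both are committed or neither is.
    public void SaveWithOutbox(string entity, string evt)
    {
        Entities.Add(entity);
        Outbox.Add(new OutboxMessage { Payload = evt });
    }

    // Step 2: a background worker publishes unsent messages and marks
    // each one sent only after the broker acknowledges it, giving
    // at-least-once delivery.
    public void DispatchPending()
    {
        foreach (var msg in Outbox.Where(m => !m.Sent))
        {
            Queue.Add(msg.Payload); // publish; assume the broker ACKs here
            msg.Sent = true;
        }
    }
}
```

If the process crashes between publishing and marking the message sent, the next dispatch run sends it again; that duplicate is exactly what the inbox/idempotency half on the consumer side is there to absorb.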
To answer your specific question.
What's the easiest way to include service bus events within a transaction scope?
The simplest would be to use a library that does that already, has been tested thoroughly, and can integrate into your system: MassTransit, NServiceBus, Jasper, etc. Alternatively, build your own; I'd not advise that unless it's a pet project or the core of your system.

Confusion about RabbitMQ and MQTT in C# environment

I am trying to play with MQTT and RabbitMQ but am running into some problems.
Introduction
I have several physical machines that generate data and I want to do something with this data, for simplicity imagine that I just want to write those messages to a separate file for each machine identified by an ID. (do not focus on the details, it is just an example)
What I have
I have an MQTT broker (i.e. mosquitto broker) that handles several messages coming from those machines.
I have a (complex) windows service written in C#. An object instantiation exists for each machine (i.e. machine with id = 1000 leads to an object that represents the physical machine in the service program and so on). This machine object has a RabbitMQ queue of messages that contains every message that is delivered by the machine.
The problem
How can I populate this queue?
I thought that there was a possibility, using the RabbitMQ MQTT plugin, of instantiating an exchange or something similar that listens for the MQTT topics and forwards the received messages to the appropriate queue as usual, but I cannot find anything on the net.
I hope I've been clear enough in describing the problem that I am facing.
Assume that I MUST use RabbitMQ since it is already used for the communication of the different modules of the service.
Hope you can help me understand if there is any possibility to use RabbitMQ to listen for messages from an external MQTT broker that I cannot decide on and then push those messages in an exchange that will route the message based on the routing key extracted from the message.
Practical case:
Real-world machine 10000 produces a message A
Real-world machine 10000 publishes on topic machines/10000 the message A
The service (which is subscribed to the machines/# topic) gets the message A
The service publishes message A in the exchange with routing key machines.10000
The machine 10000's callback processes the message A and does something
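From what I've read so far, RabbitMQ's own rabbitmq_mqtt plugin republishes incoming MQTT messages to the amq.topic exchange, translating the topic's / separators into . routing keys; since my broker is an external Mosquitto instance, I imagine my service has to do that translation itself. A self-contained sketch of just that mapping, and the topic-exchange matching it relies on (no broker involved; all names are illustrative):

```csharp
using System;
using System.Linq;

public static class TopicRouting
{
    // MQTT uses '/' separators; AMQP topic exchanges use '.'.
    public static string ToRoutingKey(string mqttTopic) =>
        mqttTopic.Replace('/', '.');

    // Minimal AMQP topic-exchange match: '*' matches exactly one word.
    // (The '#' multi-word wildcard is omitted for brevity.)
    public static bool Matches(string pattern, string routingKey)
    {
        var p = pattern.Split('.');
        var k = routingKey.Split('.');
        if (p.Length != k.Length) return false;
        return p.Zip(k, (pw, kw) => pw == "*" || pw == kw).All(x => x);
    }

    public static void Main()
    {
        var key = ToRoutingKey("machines/10000");
        Console.WriteLine(key);                            // machines.10000
        Console.WriteLine(Matches("machines.*", key));     // True
        Console.WriteLine(Matches("machines.10001", key)); // False
    }
}
```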
Thanks, please be as clear as possible since I need to understand the entire process (if possible)

Queue in producer side

I want to use RabbitMQ to send events to a server from a mobile C# app. The user records a lot of events over the whole day (number of products manufactured, water consumed, ...) and they need to be delivered to a server to be processed. I understand that RabbitMQ creates a queue on the server, but I would also like to have a queue on the client side, in the mobile app. It is common for the Internet to fail in some parts of the factory, so when the user records an event it should be sent using the RabbitMQ client, but if the Internet fails it should remain in an "internal" queue, waiting to be sent on the next synchronization.
What is the best approach for this problem? Does the RabbitMQ client library have a feature for this purpose?
No, RabbitMQ does not provide any such thing. Typically, for a use case like yours, it is best to use a local lightweight database such as SQLite.
Keep the data locally until it is synchronized, and once that is done you may delete it locally.
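That store-and-forward idea can be sketched like this (a plain local file stands in for SQLite here so the example is self-contained; the names and the tryPublish callback are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class LocalEventStore
{
    readonly string _path;
    public LocalEventStore(string path) => _path = path;

    // Always record locally first, so nothing is lost if the network is down.
    public void Record(string evt) => File.AppendAllLines(_path, new[] { evt });

    // On the next synchronization, try to push each pending event to the
    // broker; an event is removed locally only after its publish succeeds.
    public int Synchronize(Func<string, bool> tryPublish)
    {
        if (!File.Exists(_path)) return 0;
        var remaining = new List<string>();
        int sent = 0;
        foreach (var evt in File.ReadAllLines(_path))
        {
            if (tryPublish(evt)) sent++;  // e.g. BasicPublish + publisher confirm
            else remaining.Add(evt);      // keep for the next attempt
        }
        File.WriteAllLines(_path, remaining);
        return sent;
    }
}
```

With SQLite the shape is the same: an INSERT per event, and a DELETE per row only after the broker confirms the publish.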

Event publisher for ASP.NET Web Api

I have started to work with micro-services and I need to create an event publishing mechanism.
I plan to use Amazon SQS.
The idea is quite simple. I store events in the database in the same transaction as aggregates.
If user would change his email, event UserChangedEmail will be stored in the database.
I also have event handler, such as UserChangedEmailHandler, which will (in this case) be responsible to publish this event to SQS queue, so other services can know that user changed email.
My question is, what is the practice to achieve this? Should I have some kind of background timed process that scans the events table and publishes events to SQS?
Can this be a process within the WebAPI application (preferably), or should it be a separate process?
One of the ideas was to use Hangfire, but it does not support cron jobs that run more often than once a minute.
Any suggestions?
EDIT:
As suggested in one of the answers, I've looked into NServiceBus. One of the examples on the NServiceBus page shows the core of my concern.
In their example, they create a log entry saying that an order has been placed. What if the log or database entry is successfully committed, but the publish breaks and the event never gets published?
Here's the code for the event handler:
public class PlaceOrderHandler :
    IHandleMessages<PlaceOrder>
{
    static ILog log = LogManager.GetLogger<PlaceOrderHandler>();
    IBus bus;

    public PlaceOrderHandler(IBus bus)
    {
        this.bus = bus;
    }

    public void Handle(PlaceOrder message)
    {
        log.Info($"Order for Product:{message.Product} placed with id: {message.Id}");
        log.Info($"Publishing: OrderPlaced for Order Id: {message.Id}");
        var orderPlaced = new OrderPlaced
        {
            OrderId = message.Id
        };
        bus.Publish(orderPlaced); // <-- my concern
    }
}
Off the Shelf Suggestions
Rather than rolling your own, I recommend looking into off-the-shelf products, as there is a lot of complexity here that will not be apparent at the outset, e.g.
Managing event subscriber list - an SQS queue is more appropriately paired with an event consumer, rather than with an event producer as when a message is consumed it is no longer available on the queue - so if you want to support multiple subscribers for a given event (which is a massive benefit of event driven architectures), how do you know which SQS queues you push the event message onto when it is first raised?
Retry semantics, error forwarding queues - handling temporary errors due to ephemeral infrastructure issues vs permanent errors due to business logic semantic issues
Audit trails of which messages were raised when and sent where
Security of messages sent via SQS (does your business case require them to be encrypted? SQS is an application service offered by Amazon that doesn't provide storage-level encryption)
Size of messages - SQS has a message size limit so you may eventually need to handle out-of-band transmission of large messages
And that's just off the top of my head...
A few off the shelf systems that would assist:
NServiceBus provides a framework for managing command and event messaging, and it has a plugin framework permitting flexible transport types - NServiceBus.SQS offers SQS as a transport.
Offers comprehensive and flexible retry, audit and error handling
Opinionated use of commands vs events (command messages say "Do this" and are sent to a single service for processing, event messages say "Something happened" and are sent to an arbitrary number of flexible subscribers)
Outbox pattern provides transactionally consistent messaging even with non-transactionally consistent transports, such as SQS
Currently the SQS plugin uses default NServiceBus subscriber persistence, which requires an SQL Server for storing the event subscriber list (see below for an option that leverages SNS)
Built in support for sagas, offering a framework to ensure multi transaction eventual consistency with rollback via compensating actions
Timeouts supporting scheduled message handling
Commercial offering, so not free, but many plugins/extensions are open source
Mass Transit
Doesn't support SQS off the shelf, but does support Azure Service Bus and RabbitMQ, so it could be an alternative for you if that is an option
Similar offering to NServiceBus, but not 100% the same - NServiceBus vs MassTransit offers a comprehensive comparison
Fully open source/free
Just Saying
A lightweight open source messaging framework designed specifically for SQS/SNS-based messaging
SNS topic per event, SQS queue per microservice, using native SNS-to-SQS queue subscriptions to achieve fanout
Open source and free
There may be others, and I've most personal experience with NServiceBus, but I strongly recommend looking into the off the shelf solutions - they will free you up to start designing your system in terms of business events, rather than worrying about the mechanics of event transmission.
Even if you do want to build your own as a learning exercise, reviewing how the above work may give you some tips on what's needed for reliable event driven messaging.
Transactional Consistency and the Outbox Pattern
The question has been edited to ask about what happens if parts of the operation succeed but the publish operation fails. I've seen this referred to as the transactional consistency of the messaging, and it generally means that within a transaction all business side effects are committed, or none are. Business side effects may mean:
Database record updated
Another database record deleted
Message published to a message queue
Email sent
You generally don't want an email sent or a message published, if the database operation failed, and likewise, you don't want the database operation committed if the message publish failed.
So how to ensure consistency of messaging?
NServiceBus handles this in one of two ways:
Use a transactionally consistent message transport, such as MSMQ.
MSMQ is able to make use of Microsoft's DTC (Distributed Transaction Coordinator), and DTC can enroll the publishing of messages in a distributed transaction with SQL Server updates. This means that if your business transaction fails, your publish operation will be rolled back, and vice versa.
The Outbox Pattern
With the outbox pattern, messages are not dispatched immediately - they are added to an Outbox table in a database, ideally the same database as your business data, as part of the same transaction
AFTER the transaction is committed, it attempts to dispatch each message, and only removes it from the outbox on successful dispatch
In the event of a failure of the system after dispatch but before delete, the message will be transmitted a second time. To compensate for this, when Outbox is enabled, NServiceBus will also do de-duplication of inbound messages, by maintaining a record of all inbound messages and discarding duplicates.
De-duplication is especially useful with Amazon SQS, as it is itself eventually consistent, and the same messages may be received twice.
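The de-duplication NServiceBus performs when Outbox is enabled can be sketched as an idempotent consumer that remembers processed message ids (in-memory here for illustration; in the real implementation that record lives in the persistence database alongside the business data):

```csharp
using System;
using System.Collections.Generic;

public class IdempotentConsumer
{
    readonly HashSet<Guid> _processed = new HashSet<Guid>();
    public int HandledCount;

    // Returns true if the message was handled, false if it was a duplicate.
    public bool Handle(Guid messageId, Action businessLogic)
    {
        if (!_processed.Add(messageId)) return false; // seen before: discard
        businessLogic();                              // side effects run once
        HandledCount++;
        return true;
    }
}
```

Delivering the same message id twice, as an eventually consistent transport like SQS may do, runs the business logic only once.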
This is not far from the original concept in your question, but there are differences:
You proposed a background timed process to scan the events table (aka the outbox table) and publish events to SQS
NServiceBus executes handlers within a pipeline - with Outbox, the dispatch of messages to the transport (aka pushing messages into an SQS queue) is simply one of the last steps in the pipeline. So - whenever a message is handled, any outbound messages generated during the handling will be dispatched immediately after the business transaction is committed - no need for a timed scan of the events table.
Note: Outbox only works when there is an ambient NServiceBus handler transaction, i.e. when you are handling a message within the NServiceBus pipeline. This will NOT be the case in some contexts, e.g. a WebAPI request pipeline. For this reason, NServiceBus recommends using your API request to send a single command message only, and then combining business data operations with further messaging within a transactionally consistent command handler in a backend endpoint service. Although point 3 in their doc is more relevant to the MSMQ transport than to the SQS transport.
Handler Semantics
One more comment about your proposal - by convention, UserChangedEmailHandler would more commonly be associated with the service that does something in response to the email being changed, rather than simply participating in the propagation of the information that the email has changed. When you have 50 events being published by your system, do you want 50 different handlers just to push those messages onto different queues?
The systems above use a generic framework to propagate messages via the transport, so you can reserve UserChangedEmailHandler for the subscribing system and include in it the business logic that should happen whenever a user changes their email.
In any case I'd go with stateful services. If you want to go a tad hands-off, have a look at Azure Service Fabric.
In my case, where I had my own set of microservices, in a scenario like this I did the basic create operation on the db first (changing the email). I had an event entity and pushed an event into that collection (in this case MongoDB). A stateful service was polling the database and processing the events in batches.
Now in your case, if your web app process is persistent you can opt to enqueue the message right away and keep a field in the event that states whether it was actually processed later by any service or not. I used MongoDB as the database and Azure Service Bus as the message broker; I think Amazon SQS would be similar.
Now, if your web app is a vanilla ASP.NET Web API or MVC process, you should only record the event in the database and leave it there; that way you don't have to create a message broker listener every time you get a request. One service can poll the db and use the message broker to let the other services know.
If you want a totally event-driven paradigm, you might want to look at Event Hubs.
I strongly suggest keeping track of whether each resource from the message bus has been processed, just to make sure it's reliable.
Hope it helps. :)

Is it possible to capture MSMQ messages from a private queue or add a second destination?

The project that I'm working on uses a commercially available package to route audio to various destinations. With this package is a separate application that can be used to log events generated by the audio routing software to a database e.g. connect device 1 to device 3.
I have been tasked with writing an application that reacts to specific events generated by the audio routing software such as reacting to any connections to device 3.
I have noted that the audio routing software uses MSMQ to post event information to the event recorder. This means that event data can build up if the recorder software has not run for a while.
I have located the queue - ".\private$\AudioLog" and would like to perform the following actions:
Detect and process new messages as they are entered onto the queue.
Allow the current event recording software to continue to work as before; therefore messages cannot be removed by my application.
Ensure that I always get to see every message.
Now I note that I can use MessageQueue to Peek at the queue in order to read messages without deletion and also GetAllMessages() to peek at all messages not removed by the event recorder.
If the recording software isn't connected then I can see that I can gather message data easily enough, but I can't see how I can ensure that I get to see a message before the recorder removes a message when it is connected.
Ideally I would like to add my application as a second destination for the message queue. Is this possible programmatically?
If not as I have administrator privilege, access to the machine with the queue is it possible to configure the queue manually to branch a second copy of the queue to which I can connect my software?
MSMQ has a journaling feature. You can configure the queue to have a journal; then every message that is removed from the queue (by a receive operation) is moved to the journal queue instead of being deleted. You can then read (or peek) from the journal. If you use the peek operation, make sure that you have a job that purges the journal queue from time to time.
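A sketch of reading from the journal with System.Messaging (Windows/.NET Framework only; the timeout is illustrative, and enabling the journal requires sufficient permissions on the queue):

```csharp
using System;
using System.Messaging; // .NET Framework, Windows only

public class JournalReader
{
    // The journal of a queue is addressed by appending \Journal$ to its path.
    public static string JournalPath(string queuePath) => queuePath + @"\Journal$";

    public static void Main()
    {
        // One-time: turn journaling on for the original queue.
        using (var queue = new MessageQueue(@".\private$\AudioLog"))
        {
            queue.UseJournalQueue = true;
        }

        // Read copies of consumed messages from the journal; this does not
        // disturb the recorder, which keeps reading the main queue as before.
        using (var journal = new MessageQueue(JournalPath(@".\private$\AudioLog")))
        {
            journal.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            // Receive (rather than Peek) so the journal doesn't grow forever.
            var message = journal.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine(message.Body);
        }
    }
}
```

Note that only messages consumed after UseJournalQueue is set will appear in the journal; anything the recorder removed before that is gone.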
