Is it possible to use a topic exchange as a true event notification system?
I've created a topic exchange named Cherry. I have one publisher using the routing key cherry.user.created and many consumers bound with the same routing key, but when I publish an event, only one of the consumers receives it. I thought a topic exchange could be used for "real event broadcasting", where every consumer gets notified when a given event happens, but right now only one consumer consumes the event and the others never learn about it...
To clarify my comment about queues: in RabbitMQ, if multiple consumers read from the same queue, a message delivered to that queue is always dispatched in a round-robin manner, no matter what. So when you subscribe to a topic exchange, the best approach is to declare a new queue for each consumer (with any name, or better, a random name generated by RabbitMQ itself) and bind each of those queues to the exchange with the target routing key (cherry.user.created).
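A minimal consumer-side sketch of that setup, assuming the .NET RabbitMQ.Client package and a broker on localhost; the exchange and routing key come from the question, everything else is illustrative. Each consumer runs its own copy of this and therefore gets its own copy of every event:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare(exchange: "Cherry", type: ExchangeType.Topic);

// Server-named, exclusive queue: unique to this consumer, removed when it disconnects.
var queueName = channel.QueueDeclare().QueueName;
channel.QueueBind(queue: queueName, exchange: "Cherry", routingKey: "cherry.user.created");

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    Console.WriteLine($"Received: {Encoding.UTF8.GetString(ea.Body.ToArray())}");
};
channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);
Console.ReadLine();
```

Because every consumer has its own queue bound with the same routing key, the exchange delivers a copy of each matching message to every queue, which gives the broadcast behaviour the question is after.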
The situation is as follows. There are three services; one service is event sourced and publishes integration or notification events (outbox pattern) to the other two services (the subscribers) using an event bus (such as Azure Service Bus or ActiveMQ).
This design is inspired by .NET microservices - Architecture e-book - Subscribing to events.
I'm wondering what should happen if one of these events cannot be delivered due to an error, or if event handling simply wasn't implemented correctly.
Should I trust my message bus in case of an application error?
Is this a use case for dead letter queues?
On republishing events, should all messages be republished to all topics or would it be possible to only republish a subset?
Should the service republishing events be able to access publisher and subscriber databases to know the message offset?
Or should the subscribing microservices be able to read the outbox?
Should I trust my message bus in case of an application error?
Yes.
(Edit: after reading this answer, read @StuartLC's answer for more info)
The system you described is an eventually consistent one. It works under the assumption that if each component does its job, all components will eventually converge on a consistent state.
The Outbox's job is to ensure that any event persisted by the Event Source Microservice is durably and reliably delivered to the message bus (via the Event Publisher). Once that happens, the Event Source and the Event Publisher are done--they can assume that the event will eventually be delivered to all subscribers. It is then the message bus's job to ensure that that happens.
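To make that division of responsibility concrete, here is a rough sketch of an outbox relay loop; IOutboxStore, IEventBusPublisher, and OutboxEvent are hypothetical abstractions invented for illustration, not part of any particular library:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical abstractions for illustration only.
public record OutboxEvent(Guid Id, string Type, string Payload);

public interface IOutboxStore
{
    Task<IReadOnlyList<OutboxEvent>> GetUnpublishedAsync(int batchSize);
    Task MarkPublishedAsync(Guid eventId);
}

public interface IEventBusPublisher
{
    // Completes only once the bus has durably accepted the event.
    Task PublishAsync(OutboxEvent evt);
}

public class OutboxRelay
{
    private readonly IOutboxStore _store;
    private readonly IEventBusPublisher _bus;

    public OutboxRelay(IOutboxStore store, IEventBusPublisher bus) => (_store, _bus) = (store, bus);

    public async Task PumpOnceAsync()
    {
        foreach (var evt in await _store.GetUnpublishedAsync(batchSize: 100))
        {
            await _bus.PublishAsync(evt);            // hand the event to the bus
            await _store.MarkPublishedAsync(evt.Id); // only then clear it from the outbox
        }
    }
}
```

Marking the event as published only after the bus confirms receipt means a crash between the two steps produces a duplicate publish rather than a lost event, which is exactly the at-least-once behaviour the rest of the system has to tolerate.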
The message bus and its subscriptions can be configured for either "at least once" or "at most once" delivery. (Note that "exactly once" delivery generally cannot be guaranteed, so an application should be resilient against either duplicate or missed messages, depending on the subscription type.)
An "at least once" (called "Peek Lock" by Azure Service Bus) subscription will hold on to the message until the subscriber gives confirmation that it was handled. If the subscriber gives confirmation, the message bus's job is done. If the subscriber responds with an error code or doesn't respond in a timely manner, the message bus may retry delivery. If delivery fails multiple times, the message may be sent to a poison message or dead-letter queue. Either way, the message bus holds on to the message until it gets confirmation that it was received.
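As a sketch of what that looks like with the Azure.Messaging.ServiceBus SDK (the topic, subscription, and connection string are placeholders; the subscription is assumed to use the default PeekLock receive mode):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>");

// PeekLock is the default: the message stays locked on the subscription until it is settled.
var processor = client.CreateProcessor("orders-topic", "billing-subscription");

processor.ProcessMessageAsync += async args =>
{
    try
    {
        // ... handle the event ...
        await args.CompleteMessageAsync(args.Message); // confirm: the bus can now forget it
    }
    catch (Exception)
    {
        // Release the lock so the bus can redeliver; after MaxDeliveryCount failed
        // attempts the message is moved to the subscription's dead-letter queue.
        await args.AbandonMessageAsync(args.Message);
    }
};
processor.ProcessErrorAsync += args => Task.CompletedTask; // log in a real handler

await processor.StartProcessingAsync();
```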
On republishing events, should all messages be republished to all topics or would it be possible to only republish a subset?
I can't speak for all messaging systems, but I would expect a message bus to only republish to the subset of subscriptions that failed. Regardless, all subscribers should be prepared to handle duplicate and out-of-order messages.
Should the service republishing events be able to access publisher and subscriber databases to know the message offset?
I'm not sure I understand what you mean by "know the message offset", but as a general guideline, microservices should not share databases. A shared database schema is a contract. Once the contract is established, it is difficult to change unless you have total control over all of its consumers (both their code and deployments). It's generally better to share data through application APIs to allow more flexibility.
Or should the subscribing microservices be able to read the outbox?
The point of the message bus is to decouple the message subscribers from the message publisher. Making the subscribers explicitly aware of the publisher defeats that purpose, and will likely be difficult to maintain as the number of publishers and subscribers grows. Instead, rely on a dedicated monitoring service and/or the monitoring capabilities of the message bus to track delivery failures.
Just to add to @xander's excellent answer, I believe that you may be using an inappropriate technology for your event bus. You should find that Azure Event Hubs or Apache Kafka are better candidates for event publish/subscribe architectures. Benefits of a dedicated event bus technology over the older service bus approaches include:
There is only ever one copy of each event message (whereas Azure Service Bus or RabbitMQ make deep copies of each message for each subscriber)
Messages are not deleted after consumption by any one subscriber. Instead, messages are left on the topic for a defined period of time (which can be indefinite, in Kafka's case).
Each subscriber (consumer group) tracks its own committed offset. This allows a subscriber to reconnect and rewind if it has lost messages, independently of the publisher and of other subscribers (i.e. in isolation); see the sketch after this list.
New consumers can subscribe AFTER messages have been published, and will still be able to receive ALL messages available (i.e. rewind to the start of available events)
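To make the consumer-group and offset model concrete, here is a minimal sketch using the Confluent.Kafka .NET client; the broker address, group id, and topic name are placeholders:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "billing-service",                // each consumer group tracks its own offsets
    AutoOffsetReset = AutoOffsetReset.Earliest, // a brand-new group starts from the oldest retained event
    EnableAutoCommit = false
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("customer-events");

while (true)
{
    var result = consumer.Consume();
    Console.WriteLine($"{result.TopicPartitionOffset}: {result.Message.Value}");
    consumer.Commit(result); // advance this group's committed offset
}
```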
With this in mind:
Should I trust my message bus in case of an application error?
Yes, for the reasons xander provided. Once the publisher has confirmation that the event bus has accepted the event, the publisher's job is done and it should never send that same event again.
A nitpick, but since you are in a publish/subscribe architecture (i.e. 0..N subscribers), you should refer to the bus as an event bus (not a message bus), irrespective of the technology used.
Is this a use case for dead letter queues?
Dead letter queues are more usually an artifact of point-to-point queues or service bus delivery architecture, i.e. where there is a command message intended (transactionally) for a single, or possibly finite number of recipients. In a pub-sub event bus topology, it would be unfair to the publisher to expect it to monitor the delivery of all subscribers.
Instead, the subscriber should take on responsibility for resilient delivery. In technologies like Azure Event Hubs and Apache Kafka, events are uniquely numbered per consumer group, so the subscriber can be alerted to a missed message through monitoring of message offsets.
On republishing events, should all messages be republished to all topics or would it be possible to only republish a subset?
No, an event publisher should never republish an event, as this would corrupt the chain of events for all observing subscribers. Remember that there may be N subscribers to each published event, some of which may be external to your organisation / outside of your control. Events should be regarded as 'facts' which happened at a point in time. The event publisher shouldn't care whether there are zero or 100 subscribers to an event. It is up to each subscriber to decide how the event message should be interpreted.
e.g. Different types of subscribers could do any of the following with an event:
Simply log the event for analytics purposes
Translate the event into a command (or Actor Model message) to be handled as a transaction specific to the subscriber
Pass the event into a Rules engine to reason over the wider stream of events, e.g. trigger counter-fraud actions if a specific customer is performing an unusually large number of transactions
etc.
So you can see that republishing events for the benefit of one flaky subscriber would corrupt the data flow for the other subscribers.
Should the service republishing events be able to access publisher and subscriber databases to know the message offset?
As xander said, systems and microservices shouldn't share databases. However, systems can expose APIs (RESTful, gRPC, etc.).
The Event Bus itself should track which subscriber has read up to which offset (i.e. per consumer group, per topic and per partition). Each subscriber will be able to monitor and change its offsets, e.g. in case an event was lost and needs to be re-processed. (Again, the producer should never republish an event once it has confirmation that the event has been received by the bus)
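As an illustration of that rewind capability, a Kafka subscriber can re-point itself at an earlier offset without involving the publisher (again a Confluent.Kafka sketch; the topic, partition, and offset values are made up):

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig { BootstrapServers = "localhost:9092", GroupId = "billing-service" };
using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();

// Take manual control of one partition and re-read from a known-good offset
// (partition 0 and offset 4200 are illustrative values).
consumer.Assign(new TopicPartitionOffset("customer-events", new Partition(0), new Offset(4200)));

var recovered = consumer.Consume(TimeSpan.FromSeconds(5));
Console.WriteLine(recovered?.Message.Value);
```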
Or should the subscribing microservices be able to read the outbox?
There are at least two common approaches to event driven enterprise architectures:
'Minimal information' events, e.g. Customer Y has purchased Product Z. In this case, many of the subscribers will find the information contained in the event insufficient to complete downstream workflows, and will need to enrich the event data, typically by calling an API close to the publisher, in order to retrieve the rest of the data they require. This approach has security benefits (since the API can authenticate the request for more data), but can lead to high I/O load on the API.
'Deep graph' events, where each event message has all the information that any subscriber should ever hope to need (this is surprisingly difficult to future-proof!). Although the event message sizes will be bloated, it saves a lot of triggered I/O, as the subscribers shouldn't need to perform further enrichment from the producer.
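As an illustration of the two styles, sketched as C# records with made-up fields:

```csharp
using System;

// 'Minimal information' event: subscribers call back to an API near the publisher
// (e.g. a customer lookup endpoint) to enrich it with whatever else they need.
public record ProductPurchasedV1(Guid CustomerId, Guid ProductId, DateTimeOffset OccurredAt);

// 'Deep graph' event: the message carries everything a subscriber might need,
// at the cost of a bigger payload and a contract that is harder to future-proof.
public record ProductPurchasedDeepV1(
    Guid CustomerId,
    string CustomerName,
    string CustomerEmail,
    Guid ProductId,
    string ProductName,
    decimal PricePaid,
    string Currency,
    ShippingAddress DeliverTo,
    DateTimeOffset OccurredAt);

public record ShippingAddress(string Line1, string City, string PostalCode, string Country);
```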
I am new to Azure Service Bus and would like to know if I can have multiple subscribers to a queue or topic? In RabbitMQ I can have multiple subscribers to one publisher.
What I am trying to do is this: I am using CQRS, and when certain commands come into the system and the resulting event is handled, I want to push a message onto a queue.
I want two subscribers to be able to get the messages from that queue: one to process them internally, and another to process them and send them externally.
I am new to Azure Service Bus and would like to know if I can have multiple subscribers to a queue or topic?
Yes. This is possible with Azure Service Bus Topics. There can be multiple subscribers to a message sent to a topic. From this link:
In contrast to queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication, in a publish/subscribe pattern. Useful for scaling to very large numbers of recipients, each published message is made available to each subscription registered with the topic. Messages are sent to a topic and delivered to one or more associated subscriptions, depending on filter rules that can be set on a per-subscription basis.
The way it works is that you create a topic and then create multiple subscriptions in that topic. In each subscription, you can define message filtering rules. When a message is sent to a topic, Azure Service Bus matches that message against the filtering rules in each subscription and if a matching rule is found, then the message is sent to that subscription.
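A sketch of that setup with the Azure.Messaging.ServiceBus SDK; the topic, subscription, rule, and property names are placeholders, and it assumes the connection string has manage rights on the namespace:

```csharp
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

await adminClient.CreateTopicAsync("orders");

// Every subscription receives its own copy of each matching message.
await adminClient.CreateSubscriptionAsync("orders", "audit"); // default rule: matches everything

await adminClient.CreateSubscriptionAsync(
    new CreateSubscriptionOptions("orders", "high-value"),
    new CreateRuleOptions("OverThreshold", new SqlRuleFilter("Amount > 1000"))); // filtered subscription

// Publish once; Service Bus fans the message out to every matching subscription.
await using var client = new ServiceBusClient("<connection-string>");
var sender = client.CreateSender("orders");
var message = new ServiceBusMessage("order #42");
message.ApplicationProperties["Amount"] = 2500;
await sender.SendMessageAsync(message);
```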
I wanted to add something to the discussion: from the following article, it seems that each listener has to create its own subscription in Azure before listening to the topic. Basically, a topic will have multiple subscriptions (each subscription with a single listener), and so the Service Bus knows exactly which subscriptions need to receive the message.
https://medium.com/awesome-azure/azure-difference-between-azure-service-bus-queues-and-topics-comparison-azure-servicebus-queue-vs-topic-4cc97770b65
I successfully setup a Topic Exchange, and I'm able to deliver messages to several consumers at once.
I'd also like to deliver messages to competing consumers and keep using topic exchanges. I read that using the same queue name enables consumers to compete for messages. I might be mistaken, however, since I cannot make it work.
Setup for several listeners to the same topic:
Declare topic exchange
For each listener, declare a new queue with an autogenerated name
Bind this queue to the above exchange with a given topic routing key
How do I set up competing consumers on the same topic?
Is it even possible with Topic Exchanges?
Thanks.
Let's review a couple of points first.
First, remember that in RabbitMQ you always consume from queues. Exchanges are just your portals and you cannot directly consume from them.
Second, topic exchanges allow queues to be bound using routing key "patterns" (with wildcards); that is what the "topic" in "topic exchange" refers to.
Now this is what I understand from your question:
Multiple consumers/same routing key:
This is where you want multiple consumers to all consume the messages with the same routing key (or same routing key patterns in the case of Topic Exchanges). This is in fact doable. Just do this:
Declare your Topic Exchange
Declare a queue with some name
Bind that queue to your topic with your desired routing key pattern
Create multiple consumers and have them listen to that same queue.
What will happen is that RabbitMQ will load balance between your consumers in a round-robin manner. This means all consumers consume from the same queue. But remember that in this scenario a single message may, in theory, be delivered more than once (for example, if a consumer dies before acknowledging it).
What you were doing was creating multiple queues and having one consumer per queue. This means that every message coming to the exchange is duplicated across all of those queues. The end result is that a message gets processed multiple times.
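A sketch of that competing-consumers setup with the .NET RabbitMQ.Client; the exchange, queue, and routing key names are illustrative. Run the same program once per consumer and the broker will round-robin deliveries between them:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare(exchange: "events", type: ExchangeType.Topic);

// One shared, explicitly named queue that all competing consumers attach to.
channel.QueueDeclare(queue: "user-events-workers", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind(queue: "user-events-workers", exchange: "events", routingKey: "user.*");

// Give each consumer at most one unacknowledged message at a time.
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    Console.WriteLine($"Handling {Encoding.UTF8.GetString(ea.Body.ToArray())}");
    channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "user-events-workers", autoAck: false, consumer: consumer);
Console.ReadLine();
```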
I solved this by using exchange-to-exchange bindings.
The outer exchange is a topic exchange.
The inner exchange is a fanout exchange bound to a client-named queue.
The outer exchange is bound to the inner exchange with a routing key that includes a wildcard.
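A sketch of that exchange-to-exchange topology with the .NET RabbitMQ.Client; all of the names and the wildcard key are illustrative:

```csharp
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Outer exchange: topic routing, shared by all publishers.
channel.ExchangeDeclare(exchange: "events", type: ExchangeType.Topic);

// Inner exchange: fanout, owned by this group of consumers.
channel.ExchangeDeclare(exchange: "billing.events", type: ExchangeType.Fanout);

// Bind the inner exchange to the outer one with a wildcard routing key.
channel.ExchangeBind(destination: "billing.events", source: "events", routingKey: "user.*");

// Client-named queue bound to the inner fanout exchange; competing consumers share it.
channel.QueueDeclare(queue: "billing.user-events", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind(queue: "billing.user-events", exchange: "billing.events", routingKey: "");
```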
I'm creating an Azure worker role that will subscribe to some events published by a service that I don't have any control over. Both the publisher and subscriber will be hosted in Azure.
The role will be part of a cloud service that is already using NServiceBus to publish and subscribe to events that it owns. To be consistent, I'm looking to use the same libraries in the new role.
The publisher of the event is not using NServiceBus. They are using the WindowsAzure.ServiceBus package to create BrokeredMessage objects and JSON.Net to serialize the message payloads. They are also using a 'topic per event type' pattern.
There are 3 topics I need my role to subscribe to, ItemActivated, ItemDeactived and ItemExpired. Each topic has a single event type published to it, and the event definition is available to me through a shared nuget package. The events are ItemActivatedEventV1, ItemDeactivedEventV1 etc...
Using the event definitions I can write message handlers, but I'm not convinced NServiceBus is the correct choice in the subscriber. Since the publisher isn't using NServiceBus, none of the message headers will be present - will this impact the behaviour of the subscriber?
The 'topic per event' approach is also different to the NServiceBus 'endpoint' approach, where multiple events are sent to a single endpoint. Is it possible to configure NServiceBus to listen to multiple topics, e.g. using MessageHandlerMappings in app.config?
Finally, the subscriber can be configured using JsonSerializer, but this isn't going to be guaranteed to be exactly the same at both ends of the communication. Is this going to impact behaviour?
Has anyone had any experience with this scenario? Any advice is appreciated.
I have created a RabbitMQ producer and a RabbitMQ consumer.
Suppose my producer produces 10 messages. How can I get a particular message from those 10?
I want to know how I can uniquely identify a message and read or consume only that message.
There are several ways to do this, but the one I use most is to use a routing key that is unique to the type of message. Consumers, then, bind to that exchange using a specific routing key, which causes messages to go only to those consumers.
If you can avoid it, you should never just dump messages into a single queue and let the consumers sort them out. The routing keys and exchanges are powerful tools made specifically for routing messages. You should leverage that.
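For illustration, publishing with a type-specific routing key might look like this (a .NET RabbitMQ.Client sketch; exchange and key names are made up):

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare(exchange: "app.events", type: ExchangeType.Topic);

// Each message type gets its own routing key; consumers bind only to the keys they care about.
channel.BasicPublish(exchange: "app.events", routingKey: "order.created",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("order 42 created"));
channel.BasicPublish(exchange: "app.events", routingKey: "order.cancelled",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("order 17 cancelled"));

// A consumer interested only in cancellations binds its queue like this:
var queueName = channel.QueueDeclare().QueueName;
channel.QueueBind(queue: queueName, exchange: "app.events", routingKey: "order.cancelled");
```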
I have an example that shows how to do a topic queue in C#, which appears to be what you're looking for: RabbitMQ Tutorial. I also have one that shows how to use the EventingBasicConsumer to avoid blocking when getting messages: RabbitMQ EventingBasicConsumer.