How to accomplish FIFO with Azure Service Bus topics - C#

I've been looking for a message bus with publish/subscribe functionality. I found that AWS SQS does not support FIFO, so I had to give up on it. Working with Azure Service Bus, I found that queues do support FIFO, but it seems like topics do not. And topics are what we need, with their pub-to-many-sub model :(
Is it just a setting I am missing? I tried sending 100 messages from my C# client, and the subscribers got the messages in the wrong order. Any tips would be appreciated.
Thanks!

You can use sessions to get an Azure topic to provide FIFO ordering, but that's not quite the same thing as guaranteeing the order in which messages are processed.
Sessions are not enough of a guarantee here. For example, if you are using PeekLock mode, a message whose lock times out will return to the queue and be processed out of order. You can use ReceiveAndDelete mode to counter this behavior, but then you lose the transactional nature of message handling.
One of the reasons the documentation may be a little light in this area is that it isn't a common use case. Messaging is about decoupling through asynchronous communication, and ordering guarantees create temporal coupling between applications.
Ideally you should design your payloads so ordering doesn't matter. If that fails, use a timestamp that allows you to discard messages received out of order.
Discussed in more detail here: Don’t assume message ordering in Azure Service Bus
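If you do go the session route despite those caveats, here's a minimal sketch using the older Microsoft.ServiceBus.Messaging SDK; the topic, subscription, and session names are illustrative, and note it's the subscription, not the topic, that requires sessions:

```csharp
// Sketch: session-enabled subscription on a topic (older Microsoft.ServiceBus.Messaging SDK).
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// The subscription is what requires sessions.
namespaceManager.CreateSubscription(new SubscriptionDescription("TestTopic", "TestSubscription")
{
    RequiresSession = true
});

// Publisher: stamp related messages with the same SessionId;
// messages within one session are delivered in order.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "TestTopic");
topicClient.Send(new BrokeredMessage("payload") { SessionId = "order-123" });

// Subscriber: accept the session and receive in order.
var subClient = SubscriptionClient.CreateFromConnectionString(connectionString, "TestTopic", "TestSubscription");
MessageSession session = subClient.AcceptMessageSession("order-123");
BrokeredMessage msg;
while ((msg = session.Receive(TimeSpan.FromSeconds(5))) != null)
{
    msg.Complete(); // complete each message before handling the next in the session
}
```

Remember the caveat above: a PeekLock timeout can still cause redelivery out of order within the session's retry behavior, so this orders delivery, not end-to-end processing.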

You should be able to achieve this by setting the SupportOrdering property to true:
// Configure Topic Settings
TopicDescription td = new TopicDescription("TestTopic");
td.SupportOrdering = true;
// Create a new Topic with custom settings
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
namespaceManager.CreateTopic(td);


How to read retained messages for a Topic

I have a web application that publishes messages to a topic, and several Windows services that subscribe to those topics, some with multiple instances. If the services are running when the messages are published, everything works correctly; but if they are not, the messages are retained on the queue(s) subscribing to that topic and aren't read when the services start back up.
The desired behavior:
When a message is published to the topic string MyTopic, it is read from MyTopicQueue only once. I use some wildcard topics, so each message is sent to multiple queues; multiple instances of a service subscribe to the same topic string, and each message should be read by only one of those instances.
If the subscribers to the MyTopic topic aren't online when the message is published, the messages are retained on MyTopicQueue.
When the Windows services subscribing to a particular topic come back online, each retained message is read from MyTopicQueue by only a single subscriber.
I've found some [typically for IBM] spotty documentation about the MQSUBRQ and MQSO_PUBLICATIONS_ON_REQUEST options, but I'm not sure how I should set them. Can someone please help me figure out what I need to do to get my desired behavior? [Other than switching back to RabbitMQ, which I can't do, though I'd prefer it.]
My options:
private readonly int _openOptions = MQC.MQSO_CREATE | MQC.MQSO_FAIL_IF_QUIESCING | MQC.MQSO_MANAGED;
private readonly MQGetMessageOptions _messageOptions = new MQGetMessageOptions();
Code to open the Topic:
_topic = _queueManager.AccessTopic(_settings.TopicString, null,
MQC.MQTOPIC_OPEN_AS_SUBSCRIPTION, _openOptions);
The line of code that reads from the topic (taken from a loop):
_topic.Get(mqMessage, _messageOptions);
If you want messages to accumulate while you are not connected, you need to make the subscription durable by adding MQC.MQSO_DURABLE. To be able to resume an existing subscription, add MQC.MQSO_RESUME in addition to MQC.MQSO_CREATE.
Be careful with terminology: what you are describing as retained messages is actually a durable subscription.
Retained publications are something else: MQ can retain the most recently published message on each topic, and that message is delivered to new subscribers by default unless they use MQSO_NEW_PUBLICATIONS_ONLY to skip it.
MQSO_PUBLICATIONS_ON_REQUEST allows a subscriber to receive retained publications only on request; it will not receive non-retained publications.
If you want multiple consumers to work together on a single subscription, you have two options:
Look at shared subscribers in XMS.NET; see the CLONESUPP property.
Create a one-time durable subscription to a queue on the topics you want consumed, then have your consumers consume directly from the queue, not the topic.
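Applied to the asker's options field, a durable, resumable subscription might look like the sketch below. Note that a durable subscription also needs a fixed subscription name so MQ can match it on resume; check the MQQueueManager.AccessTopic overloads that take a subscription name.

```csharp
// Sketch: make the managed subscription durable and resumable.
private readonly int _openOptions =
    MQC.MQSO_CREATE            // create the subscription if it doesn't exist...
  | MQC.MQSO_RESUME            // ...or resume it if it already does
  | MQC.MQSO_DURABLE           // keep the subscription (and its messages) while disconnected
  | MQC.MQSO_FAIL_IF_QUIESCING
  | MQC.MQSO_MANAGED;          // let MQ manage the backing queue

// Opening the topic is unchanged from the question:
_topic = _queueManager.AccessTopic(_settings.TopicString, null,
    MQC.MQTOPIC_OPEN_AS_SUBSCRIPTION, _openOptions);
```

With MQSO_DURABLE set, messages published while the Windows services are offline accumulate on the managed queue and are delivered once each when a subscriber resumes.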

Send outbox message without having type

We are using MassTransit with RabbitMQ and part of our implementation includes an outbox pattern.
Now I'm trying to create a Docker container whose only purpose is to dispatch messages from outboxes in several databases.
The container gets a list of connection strings to the various databases and then starts to dispatch messages from their outboxes.
Currently we store the following information in our outbox (with examples):
MessageType: SomeNamespace.SomeType, SomeContract
MessageBody: {"SomeProperty":"MyValue"}
TransmitMethod: Send/Publish
QueueName: SomeQueueName
My question is whether it's possible to dispatch these messages without having access to the contract types.
I can add more information to the table if needed to make this happen.
You can look at how MassTransit's message scheduler support for Quartz.NET captures and ultimately sends the message on the transport. In that case, it saves the serialized message body from the transport and reloads the JSON into the message body when the message is sent.
You might also find useful details in the relational outbox draft PR.

How can I get Azure Service Bus to work in a FIFO manner?

We're not using topics for Azure Service Bus (which I understand have additional requirements to support ordering); my understanding was that each queue should operate in a FIFO manner. However, from analysing our logs just for today, we've had 384 of 15,442 messages dequeued in a different order from the one in which they were enqueued.
To illustrate with an example, we had two messages, d4350a6e68ad4c9fb1fb9ccebd766590 and 0e19fbd29ffd4c4693fff6dd57e4f683; these were enqueued at 2018-11-14 09:27:31.8870000 and 2018-11-14 09:27:35.5950000 respectively (so 0e... was 4ish seconds later than d4...) However, they were dequeued at 2018-11-14 09:30:12.0320000 and 2018-11-14 09:29:57.4850000 respectively (so d4... was 15ish seconds later than 0e...). Over this timescale, we only had a single host active doing both enqueueing and dequeueing.
Whilst the timings on this are relatively close in human terms, we've seen
As I understood the queues to be, well, queues, I'm a little surprised that I'm seeing this behaviour - do I need to do any additional magic to ensure they are dequeued in the order they were enqueued?
For reference, the code that is enqueueing looks a little like:
var brokeredMessage = new BrokeredMessage(objectToQueue, new DataContractJsonSerializer(typeof(T)));
var queueClient = QueueClient.CreateFromConnectionString(connectionString);
queueClient.RetryPolicy = new RetryExponential(TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(5), 5);
queueClient.Send(brokeredMessage);
And we're dequeueing with an Azure Webjob using a service bus trigger
This is expected behavior. To ensure that messages are processed in order, you should use sessions with Service Bus queues.
Sessions allow you to process messages in the sequence in which they were enqueued.
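Sketched with the same Microsoft.ServiceBus.Messaging API the question uses; the queue must be created with RequiresSession = true, and the session id here is illustrative:

```csharp
// Sending: all messages that must stay in order share a SessionId.
var brokeredMessage = new BrokeredMessage(objectToQueue, new DataContractJsonSerializer(typeof(T)))
{
    SessionId = "my-session"
};
queueClient.Send(brokeredMessage);

// Receiving: accept the session and drain it; messages arrive in enqueue order.
MessageSession session = queueClient.AcceptMessageSession("my-session");
BrokeredMessage received;
while ((received = session.Receive(TimeSpan.FromSeconds(5))) != null)
{
    received.Complete(); // complete each message before the next is processed
}
```

Note that ordering is only guaranteed within one session; messages in different sessions may still interleave.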

Apache NMS Getting pending message count

I am trying to get the current number of messages on an ActiveMQ queue using C#.
I have found this link (that is quite old now)
ActiveMQ with C# and Apache NMS - Count messages in queue
but enumerating the queue seems like a lot of work for this simple task.
Is this the only way to get the queue message count? If I do use this method is the queue locked while I am enumerating (I don't want to block other readers)?
Thanks,
Nick
You can either do the enumeration thing described in that other answer, which won't get you the correct answer in many cases, or you can use the statistics broker plugin and query that data from the broker.
With the statistics plugin, you send a message to a control queue and listen for a response on the replyTo destination you provide; the response contains the full statistics of the destination. The caveat is that you need to parse out the data, but that shouldn't be hard.
The enumeration method won't lock the queue, but it won't work the way you want: there is a limit to how deep the broker will go into a queue when feeding a QueueBrowser before stopping, so you can't be sure you got a correct count. Using the statistics plugin also results in less broker overhead and network traffic, since the broker only has to send you one response with the data in it versus sending you all the messages just for the sake of counting them.
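A sketch of the statistics-plugin approach with Apache.NMS, assuming the statisticsBrokerPlugin is enabled in activemq.xml and a queue named "TestQueue" (both assumptions; adjust to your broker):

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

var factory = new ConnectionFactory("tcp://localhost:61616");
using (IConnection connection = factory.CreateConnection())
using (ISession session = connection.CreateSession())
{
    connection.Start();

    // The statistics plugin replies to whatever destination we set as ReplyTo.
    IDestination replyTo = session.CreateTemporaryQueue();
    IMessageConsumer consumer = session.CreateConsumer(replyTo);

    // Sending any message to ActiveMQ.Statistics.Destination.<name>
    // triggers a MapMessage reply with that destination's statistics.
    IDestination statsQueue = session.GetQueue("ActiveMQ.Statistics.Destination.TestQueue");
    IMessageProducer producer = session.CreateProducer(statsQueue);
    IMessage request = session.CreateMessage();
    request.NMSReplyTo = replyTo;
    producer.Send(request);

    var reply = consumer.Receive(TimeSpan.FromSeconds(5)) as IMapMessage;
    if (reply != null)
    {
        long pending = reply.Body.GetLong("size"); // pending message count for the queue
    }
}
```

The reply map also carries fields such as enqueue/dequeue counts, so one request gives you more than just the depth.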

Pub/Sub Redis, can I monitor whether any published messages are consumed?

I have a redis instance that publishes messages via different topics. Instead of implementing a complex heartbeat mechanism (complex because the instance would stop publishing messages after some time if they are not consumed), is there a way to check whether pubs are consumed by anyone?
For example, instance RedisServer publishes messages to topic1 and topic2. RedisClient1 subscribes to topic1 and RedisClient2 subscribes to topic2. When RedisClient2, for whatever reason, stops consuming messages from topic2, I want RedisServer to know about it and decide when to stop publishing messages to topic2. The discontinuation of topic2 consumption is unpredictable, hence I am not able to inform RedisServer of the discontinuation/unsubscription.
I thought if there was a way for a redis instance to know whether messages of a certain topic are consumed or not then that would be very helpful information.
Any idea whether that is possible?
Given you are using a recent enough version of Redis (2.8.0 or later), these two commands may help you:
PUBSUB CHANNELS [pattern]
Which lists the currently active channels (i.e. channels with at least one subscriber) matching the pattern.
PUBSUB NUMSUB [chan1 ... chanN]
Which returns the number of subscribers for the specified channels (this doesn't work for patterns, however).
Note: neither command tells you whether a message was truly processed! If you need to know about completion of tasks (if your messages trigger something), I would recommend looking at a full-blown job queue (for example Resque, if you want to stick with Redis).
Edit: Here's the Redis doc. for all of the above: http://redis.io/commands/pubsub
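From C#, both commands are exposed on the server API of the StackExchange.Redis client (channel names here are illustrative):

```csharp
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
IServer server = redis.GetServer("localhost", 6379);

// PUBSUB CHANNELS: currently active channels matching a pattern
foreach (var channel in server.SubscriptionChannels("topic*"))
    Console.WriteLine(channel);

// PUBSUB NUMSUB: subscriber count for a specific channel
long subscribers = server.SubscriptionSubscriberCount("topic2");
```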
You can also use the result of PUBLISH. It will give you the number of subscribers that received the message: http://redis.io/commands/publish
This way you don't need to poll the PUBSUB command; just run your "stop publishing" logic after you publish a message.
At worst, you publish one message that nobody receives.
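With the StackExchange.Redis client (hypothetical channel name), that looks like:

```csharp
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
ISubscriber pub = redis.GetSubscriber();

// PUBLISH returns how many subscribers received the message.
long receivers = pub.Publish("topic2", "payload");
if (receivers == 0)
{
    // nobody is listening on topic2 -- trigger the "stop publishing" logic here
}
```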
