I have a web application that publishes messages to a topic, and several Windows services that subscribe to those topics, some with multiple instances. If the services are running when the messages are published, everything works correctly; but if they are not, the messages are retained on the queue(s) subscribing to that topic and aren't read when the services start back up.
The desired behavior:
- When a message is published to the topic string MyTopic, it is read from MyTopicQueue only once. I use some wildcard topics, so each message is sent to multiple queues, but multiple instances of a service subscribe to the same topic string and each message should be read by only one of those instances.
- If the subscribers to the MyTopic topic aren't online when the message is published, then the messages are retained on MyTopicQueue.
- When the Windows services subscribing to a particular topic come back online, each retained message is read from MyTopicQueue by only a single subscriber.
I've found some spotty documentation [typical for IBM] about the MQSUBRQ and MQSO_PUBLICATIONS_ON_REQUEST options, but I'm not sure how I should set them. Can someone please help me figure out what I need to do to get my desired behavior? [Other than switching back to RabbitMQ, which I can't do though I'd prefer it.]
My options:
private readonly int _openOptions = MQC.MQSO_CREATE | MQC.MQSO_FAIL_IF_QUIESCING | MQC.MQSO_MANAGED;
private readonly MQGetMessageOptions _messageOptions = new MQGetMessageOptions();
Code to open the Topic:
_topic = _queueManager.AccessTopic(_settings.TopicString, null,
MQC.MQTOPIC_OPEN_AS_SUBSCRIPTION, _openOptions);
The line of code that reads from the topic (taken from a loop):
_topic.Get(mqMessage, _messageOptions);
If you want the messages to accumulate while you are not connected you need to make the subscription durable by adding MQC.MQSO_DURABLE. In order to be able to resume an existing subscription add MQC.MQSO_RESUME in addition to MQC.MQSO_CREATE.
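As a minimal sketch (assuming the rest of the posted code stays the same), the open options from the question would become:
// Durable subscription: create it on the first run, resume it afterwards.
// MQSO_DURABLE keeps the subscription (and its queued publications)
// while the service is offline.
private readonly int _openOptions = MQC.MQSO_CREATE | MQC.MQSO_RESUME |
    MQC.MQSO_DURABLE | MQC.MQSO_FAIL_IF_QUIESCING | MQC.MQSO_MANAGED;
Note that a durable subscription also needs a fixed subscription name so the same subscription can be resumed after a restart; the MQ .NET client exposes an AccessTopic overload that takes one (check the overload list for your client version).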
Be careful with terminology: what you are describing as retained messages is a durable subscription.
Retained publications are something else, where MQ can retain the one most recently published message on each topic; that message is delivered to new subscribers by default unless they use MQSO_NEW_PUBLICATIONS_ONLY to skip receiving it.
MQSO_PUBLICATIONS_ON_REQUEST allows a subscriber to receive retained publications only on request (via MQSUBRQ); it will not receive non-retained publications.
If you want multiple consumers to work together on a single subscription you have two options:
- Look at shared subscribers in XMS .NET; see the CLONESUPP property.
- Create a one-time durable subscription to a queue on the topics you want consumed, then have your consumers consume directly from that queue rather than from a topic (a sketch of this follows).
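A rough sketch of the second option, assuming the subscription has already been defined administratively (for example with MQSC: DEFINE SUB('MYSUB') TOPICSTR('MyTopic') DEST('MYTOPIC.QUEUE')) and that the queue name below is a placeholder:
// Each service instance opens the subscription's destination queue directly;
// MQOO_INPUT_SHARED lets several instances read from it, and each message
// is delivered to only one of them.
var queue = _queueManager.AccessQueue("MYTOPIC.QUEUE",
    MQC.MQOO_INPUT_SHARED | MQC.MQOO_FAIL_IF_QUIESCING);

var message = new MQMessage();
var gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING;
gmo.WaitInterval = 5000; // milliseconds

queue.Get(message, gmo);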
We're using ActiveMQ locally to transfer data between 5 processes that run simultaneously.
I have some data I need to send to a process, both at runtime (which works perfectly fine) and as a default value on start. The thing is, the default value is published as the process starts, and it just isn't read because the process wasn't subscribed to the topic at the time the data was sent.
I have a couple of possible solutions: I could delay the first publish for a moment so that the process has time to launch (which doesn't seem very appealing); or is there a way to send all previously stored, unprocessed messages to a process that has just subscribed?
I'm coding in C#.
I don't have any experience with ActiveMQ, but other messaging systems usually have an option which marks the subscription as persistent, which means that, after the first subscription, the message queue itself checks whether a certain message is delivered to that system and retries with a timeout. In this scenario you need to start the receiver at least once.
If this is not an option and you want to plug in a receiver afterwards, you might want to consider a setup of your messages which allows you to retrieve the full state, i.e. sending total messages instead of differential messages.
After a little googling, I came upon this definition of durable subscribers; I hope this helps:
See:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
and
http://activemq.apache.org/manage-durable-subscribers.html
Since you are using the C# client I don't know if this is supported:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
http://activemq.apache.org/retroactive-consumer.html
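For reference, here's a rough sketch of what that could look like with the Apache.NMS.ActiveMQ client (the broker URL and topic name are placeholders):
// using Apache.NMS;
// using Apache.NMS.ActiveMQ;
// using Apache.NMS.ActiveMQ.Commands;

// Retroactive consumer: asks the broker to replay messages published before
// this consumer subscribed, subject to the broker's subscription recovery policy.
IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
using (IConnection connection = factory.CreateConnection())
using (ISession session = connection.CreateSession())
{
    ITopic topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
    IMessageConsumer consumer = session.CreateConsumer(topic);
    connection.Start();
    ITextMessage message = consumer.Receive() as ITextMessage;
}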
So, another solution is to configure this behavior on the broker side by adding the following to activemq.xml and restarting:
The subscription recovery policy allows you to go back in time when you subscribe to a topic.
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">">
        <subscriptionRecoveryPolicy>
          <timedSubscriptionRecoveryPolicy recoverDuration="10000" />
          <fixedCountSubscriptionRecoveryPolicy maximumSize="10000" />
        </subscriptionRecoveryPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
http://activemq.apache.org/subscription-recovery-policy.html
I worked around the issue by having each process send a message back to the main one when it launches, and only then sending the information I needed to send.
So I'm trying to get MSMQ messages forwarded from one machine to another (which is dead easy - I was surprised), but one of the requirements from the ops side of the house is that we need to be able to see a log entry somewhere when the remote server decides not to accept a message. For example, if I try to send to a nonexistent queue, like so:
MessageQueue remoteQueue = new MessageQueue(@"FormatName:Direct=OS:machinename\private$\notarealqueue");
remoteQueue.Send("Test", MessageQueueTransactionType.Single);
The message goes into the local delivery queue, and appears to get sent across the network, but because the queue doesn't exist, the remote MSMQ manager discards the message. However, there's no entry in the Event Log that I can find about the message being dropped on the floor, and that makes people nervous. The Microsoft/Windows/MSMQ/EndToEnd log only seems to involve successful messages, which doesn't seem particularly useful. Is there a log I'm not seeing somewhere?
You can use MSMQ dead letter queues for that.
message.UseDeadLetterQueue = true;
With that enabled, if a message can't be delivered it will be sent to one of two system dead-letter queues: one for transactional and one for non-transactional messages. There you'll also find the reason the message was not delivered, as well as the original destination queue, the full message body, the label, etc.
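Putting it together with the send from the question (the queue path is the same placeholder), a minimal sketch might look like this:
// using System.Messaging;
MessageQueue remoteQueue = new MessageQueue(@"FormatName:Direct=OS:machinename\private$\notarealqueue");

Message message = new Message("Test");
// Negative source journaling: if the message can't be delivered, MSMQ moves it
// to the system dead-letter queue on the sending machine instead of silently dropping it.
message.UseDeadLetterQueue = true;

remoteQueue.Send(message, MessageQueueTransactionType.Single);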
You can use one of the queue-management tools to resend or recover these messages.
The event log is solely for the health state of MSMQ. What happens to a single message is trivial and not logged in the event log. Imagine what would happen if a million messages were discarded and had to be logged in the event log.
Have been looking for a message bus with publish/subscribe functionality. Found that AWS SQS does not support FIFO, so I had to give up on it. Working with Azure Service Bus, I found that queues do support FIFO, but it seems like topics do not. And topics are what we need, with their publish-to-many-subscribers model :(
Is it just a setting I am missing? I tried sending 100 messages from my C# client, and the subscribers got the messages in the wrong order. Any tips would be appreciated.
Thanks!
You can use sessions to get an Azure topic to provide FIFO ordering, but that's not quite the same thing as guaranteeing the order in which messages are processed.
Sessions are not enough of a guarantee here. For example, if you are using PeekLock mode then a message that times out will return to the queue and be processed out of order. You can use ReceiveAndDelete mode to counter this behavior but that means you lose the transactional nature of message handling.
One of the reasons why documentation may be a little light in this area is that it isn't a common use case. Messaging is about decoupling through asynchronous communication, and ordering guarantees create temporal coupling between applications.
Ideally you should design your payloads so ordering doesn't matter. If that fails, use a timestamp that allows you to discard messages received out of order.
Discussed in more detail here: Don’t assume message ordering in Azure Service Bus
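As a rough sketch of the sessions approach with the Microsoft.ServiceBus.Messaging API (the topic, subscription, and session names are placeholders, and connectionString is obtained as in the answer below):
// The subscription must opt in to sessions.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
namespaceManager.CreateSubscription(new SubscriptionDescription("TestTopic", "OrderedSub")
{
    RequiresSession = true
});

// Sender: stamp every related message with the same SessionId.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "TestTopic");
topicClient.Send(new BrokeredMessage("payload") { SessionId = "order-1" });

// Receiver: accept the session and receive its messages in FIFO order.
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, "TestTopic", "OrderedSub");
MessageSession session = subscriptionClient.AcceptMessageSession("order-1");
BrokeredMessage received = session.Receive();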
You should be able to achieve this by setting the SupportOrdering property to true:
// Configure Topic Settings
TopicDescription td = new TopicDescription("TestTopic");
td.SupportOrdering = true;
// Create a new Topic with custom settings
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
namespaceManager.CreateTopic(td);
I have a redis instance that publishes messages via different topics. Instead of implementing a complex heartbeat mechanism (complex because the instance would stop publishing messages after some time if they are not consumed), is there a way to check whether pubs are consumed by anyone?
For example, instance RedisServer publishes messages to topic1 and topic2. RedisClient1 subscribes to topic1 and RedisClient2 subscribes to topic2. When RedisClient2, for whatever reason, stops consuming messages from topic2, I want RedisServer to know about it and decide when to stop publishing messages to topic2. The discontinuation of topic2 consumption is unpredictable, hence I am not able to inform RedisServer of the discontinuation/unsubscription.
I thought if there was a way for a redis instance to know whether messages of a certain topic are consumed or not then that would be very helpful information.
Any idea whether that is possible?
Given you are using a recent enough version of Redis (2.8.0 or later), these two commands may help you:
PUBSUB CHANNELS [pattern]
which lists the currently active channels (i.e. channels with at least one subscriber) matching the pattern.
PUBSUB NUMSUB [chan1 ... chanN]
which returns the number of subscribers for the specified channels (this doesn't work for patterns, however).
Note: neither solution will enable you to determine whether a message was truly processed! If you need to know about completion of tasks (if your messages are triggering something), then I would recommend looking for a full-blown job queue (for example Resque, if you want to stick with Redis).
Edit: Here's the Redis doc. for all of the above: http://redis.io/commands/pubsub
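If you're on StackExchange.Redis (an assumption; any client can issue the same PUBSUB commands), the checks might look roughly like this:
// using StackExchange.Redis;
var redis = ConnectionMultiplexer.Connect("localhost");
IServer server = redis.GetServer("localhost", 6379);

// PUBSUB CHANNELS: channels that currently have at least one subscriber.
RedisChannel[] activeChannels = server.SubscriptionChannels("topic*");

// PUBSUB NUMSUB: subscriber count for a specific channel.
long topic2Subscribers = server.SubscriptionSubscriberCount("topic2");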
You can also use the return value of PUBLISH. It will give you the number of subscribers that received the message: http://redis.io/commands/publish
This way you don't need to poll the PUBSUB command; just apply your "stop publishing" logic after you publish a message.
At most you will publish one message that no one is subscribed to.
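A minimal sketch of that idea, again assuming StackExchange.Redis:
// using StackExchange.Redis;
var redis = ConnectionMultiplexer.Connect("localhost");
ISubscriber publisher = redis.GetSubscriber();

// PUBLISH returns how many subscribers received the message.
long receivers = publisher.Publish("topic2", "some payload");
if (receivers == 0)
{
    // Nobody is consuming topic2 any more; stop publishing to it.
}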
I am finding that even when my NSB process does not handle messages for, say, DTOXXX, it is still sending an auto-subscribe message to the publisher queue for DTOXXX.
This is not the desired behavior. I want the process to publish and subscribe to messages for DTOYYY, but any communication using DTOXXX is strictly send only.
If that wasn't clear enough: I have 2 assemblies that contain my DTOs. I want to establish a pub/sub bus, but only for the DTOs in YYY.dll. As for the DTOs in the other assembly, I want the communication to be done via SEND only (not pub/sub).
The problem I am running across is that NSB is sending auto subscribe message to the other process even though:
There is no handler for the DTOs in the XXX assembly. It's being referenced only so that YYY NSB can send messages to XXX NSB.
The communication between the 2 modules is strictly SEND only. This is done to promote low coupling given the actual use case and business requirement.
How can I set up my modules properly? That is, I need to somehow tell NSB to auto-subscribe to messages, but only for the ones in a given namespace/assembly.
You can define your own rules for which messages are considered commands/events (or plain messages) by using DefiningEventsAs in the configure interface. NSB will only auto-subscribe to events. That may help you for your use case...
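For example, with the NServiceBus 3/4-style fluent configuration (the namespace filters below are assumptions based on the assembly names in the question):
// Only types matched by DefiningEventsAs are treated as events, so
// auto-subscribe will skip everything in the XXX assembly.
Configure.With()
    .DefiningEventsAs(t => t.Namespace != null && t.Namespace.StartsWith("YYY"))
    .DefiningCommandsAs(t => t.Namespace != null && t.Namespace.StartsWith("XXX"));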
There are a couple of ways to handle this, the first being that you can turn off auto-subscribe and subscribe manually. This is done via .DoNotAutoSubscribe() in your endpoint config. From there you will resolve an instance of IBus and then subscribe explicitly to the messages you care about.
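Roughly (again NSB 3/4-style configuration; SomeYyyEvent is a hypothetical event type from the YYY assembly):
// Disable auto-subscribe on the endpoint.
Configure.With()
    .DefaultBuilder()
    .UnicastBus()
        .DoNotAutoSubscribe();

// Later, with the resolved IBus instance, subscribe only to the events you want.
bus.Subscribe<SomeYyyEvent>();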
The second way is to separate your messages from all other code into different assemblies and only map the events (pub/sub) to the publisher via the app.config file.
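Something along these lines in app.config, where the assembly and endpoint names are placeholders (older NSB versions use a Messages attribute instead of Assembly):
<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- Only the event assembly is mapped to the publisher, so only its
         messages are auto-subscribed; the XXX assembly stays send-only. -->
    <add Assembly="YYY" Endpoint="PublisherEndpoint" />
  </MessageEndpointMappings>
</UnicastBusConfig>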