Do messages in dead letter queues in Azure Service Bus expire?
Some explanation
I have these queue settings:
var queueDescription = new QueueDescription("MyTestQueue")
{
    RequiresSession = false,
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(1),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 10
};

namespaceManager.CreateQueue(queueDescription);
When I place some messages in an Azure Service Bus queue (not an Azure Storage queue) and never consume them, they are moved to the dead letter queue automatically.
However, if I have no consumer for the dead letter queue either, will the messages ever be deleted from the dead letter queue or will they stay there forever? (Is there some official documentation stating how this is supposed to work?)
My Trials
In my trials, I placed 3 messages in the queue. They were dead lettered after 2 minutes or so. They remained in the dead letter queue for at least a day and weren't removed.
Although calling NamespaceManager.GetQueueAsync() returned the values above (notice how MessageCount is still 3 while DeadLetterMessageCount is, strangely, 0), I could still receive the messages from the dead letter queue, so they hadn't been removed.
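For reference, the aggregate count can be broken down per sub-queue. A minimal sketch, assuming the same namespaceManager and queue name as above (MessageCountDetails comes from the Microsoft.ServiceBus.Messaging QueueDescription):
var description = namespaceManager.GetQueue("MyTestQueue");
var details = description.MessageCountDetails;

// MessageCount is the aggregate; MessageCountDetails splits it per sub-queue.
Console.WriteLine("Total:       " + description.MessageCount);
Console.WriteLine("Active:      " + details.ActiveMessageCount);
Console.WriteLine("Dead letter: " + details.DeadLetterMessageCount);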
Sebastian, your observation is correct: messages placed in the DeadLetter sub-queue never expire. They remain there until they are removed explicitly from the DeadLetter sub-queue. As for the count discrepancy from the tooling/API, it could be a refresh issue: the call to GetQueueAsync() needs to be made after the messages have been dead-lettered, and that isn't a deterministic point in time. For example, if a queue held a thousand expired messages but no send/receive operations were being performed against it, the count may still be reported as Active until some operations occur.
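To remove them, a consumer has to drain the sub-queue explicitly. A rough sketch using the same Microsoft.ServiceBus.Messaging client as the question, with a placeholder connection string (FormatDeadLetterPath builds the "<queue>/$DeadLetterQueue" path):
var deadLetterPath = QueueClient.FormatDeadLetterPath("MyTestQueue");
var deadLetterClient = QueueClient.CreateFromConnectionString(connectionString, deadLetterPath);

BrokeredMessage message;
while ((message = deadLetterClient.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // Optionally inspect why the message was dead-lettered before removing it.
    object reason;
    message.Properties.TryGetValue("DeadLetterReason", out reason);
    Console.WriteLine(reason);

    message.Complete(); // removes the message from the dead letter sub-queue for good
}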
After doing some research I stumbled upon a fact I had missed completely:
Messages can expire even when dead lettering is disabled.
When messages expire while dead lettering is disabled (which is the default), they'll just get deleted.
So, Microsoft's reasoning for not auto-deleting messages from the dead letter queue is probably:
If you're enabling dead lettering, you explicitly want expired messages not to be thrown away but stored somewhere else (the dead letter queue) so that you can review them.
Related
We have some issues with messages from Azure Service Bus being read multiple times. We had a similar issue previously, which turned out to be caused by the lock timing out: as the lock expired, the messages were read again, and their deliveryCount increased by 1 each time. After that, we set the max delivery count to 1 to avoid resending of messages and increased the lock timeout to 5 minutes.
The current issue is a lot more strange.
First, messages are read at 10:45:34. The message locks are set to expire at 10:50:34, and deliveryCount is 1. The read is logged as successful at 10:45:35.0. All good so far.
But then, at 10:45:35.8, the same messages are read again, and the delivery count is still 1. Both the sequence number and message ID are the same in the two receive logs. This happens for a very small percentage of messages, roughly 0.02%.
From what I understand, reading a message should either result in a success, in which case the message is removed, or an increase of deliveryCount, which in my case should send the message to the DLQ. Here, neither happens.
I'm using ServiceBusTrigger, like this:
[FunctionName(nameof(ReceiveMessages))]
public async Task Run([ServiceBusTrigger(queueName: "%QueueName%", Connection = "ServiceBusConnectionString")]
string[] messages,
This seems like a bug in either Service Bus or the library; any thoughts on what it could be?
That’s not the SDK but rather the specific entity. It sounds like the entity is corrupted. Delete and recreate it. If that doesn’t help, then open a support case.
On a different note, a max delivery count of 1 is usually an indicator that something is off. If you truly need an at-most-once delivery guarantee, use ReceiveAndDelete mode instead of PeekLock.
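For illustration only, a sketch of at-most-once consumption with Microsoft.Azure.ServiceBus outside the Functions trigger, inside an async method; the connection string and queue name are placeholders, and MessageReceiver lives in Microsoft.Azure.ServiceBus.Core:
var receiver = new MessageReceiver(connectionString, "myqueue", ReceiveMode.ReceiveAndDelete);

var message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5));
if (message != null)
{
    // The message was deleted from the queue the moment it was handed to us:
    // there is no lock to renew and no Complete/Abandon to call, but a crash
    // here means the message is lost rather than redelivered.
    Console.WriteLine(Encoding.UTF8.GetString(message.Body));
}

await receiver.CloseAsync();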
We're not using topics in Azure Service Bus (which I understand have additional requirements to support ordering); my understanding was that each queue should operate in a FIFO manner. However, from analysing our logs just for today, we've had 384 of 15,442 messages dequeued in a different order from the one in which they were enqueued.
To illustrate with an example, we had two messages, d4350a6e68ad4c9fb1fb9ccebd766590 and 0e19fbd29ffd4c4693fff6dd57e4f683; these were enqueued at 2018-11-14 09:27:31.8870000 and 2018-11-14 09:27:35.5950000 respectively (so 0e... was roughly 4 seconds later than d4...). However, they were dequeued at 2018-11-14 09:30:12.0320000 and 2018-11-14 09:29:57.4850000 respectively (so d4... was roughly 15 seconds later than 0e...). Over this timescale we had only a single host active, doing both the enqueueing and the dequeueing.
Whilst the timings on this are relatively close in human terms, we've seen
As I understood the queues to be, well, queues, I'm a little surprised that I'm seeing this behaviour - do I need to do any additional magic to ensure they are dequeued in the order they were enqueued?
For reference, the code that is enqueueing looks a little like:
var brokeredMessage = new BrokeredMessage(objectToQueue, new DataContractJsonSerializer(typeof(T)));
var queueClient = QueueClient.CreateFromConnectionString(connectionString);
queueClient.RetryPolicy = new RetryExponential(TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(5), 5);
queueClient.Send(brokeredMessage);
And we're dequeueing with an Azure Webjob using a service bus trigger
This is expected behaviour. To ensure that messages are processed in order, you should use sessions with Service Bus queues.
Sessions allow you to process messages in the sequence in which they were enqueued.
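As a rough sketch against the same older client shown in the question (BrokeredMessage / QueueClient), assuming the queue was created with RequiresSession = true and using a placeholder session id:
// Sender: stamp a SessionId; messages sharing a SessionId are delivered in order.
var brokeredMessage = new BrokeredMessage(objectToQueue, new DataContractJsonSerializer(typeof(T)))
{
    SessionId = "my-session" // placeholder grouping key
};
queueClient.Send(brokeredMessage);

// Receiver: accept the session and drain it sequentially.
var session = queueClient.AcceptMessageSession("my-session");
BrokeredMessage received;
while ((received = session.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // process in enqueue order, then complete
    received.Complete();
}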
I'm working with Azure ServiceBus, standard tier.
I'm trying to figure out what has been happening over the last couple of weeks (it seems to have started when bus traffic increased to maybe 10-15 messages per second).
I create subscriptions automatically, using
subscriptionOpts.AutoDeleteOnIdle = TimeSpan.FromHours(3);
Over the last few weeks (since the traffic increase), some of our subscription clients have stopped receiving messages, and after 3 hours their subscriptions get deleted.
var messageOptions = new MessageHandlerOptions(args =>
{
    Emaillog.Warn(args.Exception, $"Client ExceptionReceived: {args.Exception}");
    return Task.CompletedTask;
}) { AutoComplete = true };

_subscriptionClient.RegisterMessageHandler(
    async (message, token) => await OnMessageReceived(message, $"{_subscriptionClient.SubscriptionName}", token),
    messageOptions);
Is it possible that a subscription client gets disconnected and doesn't connect anymore?
I have 4-5 clients processes that connect to this topic, each one with his own subscription.
When I find one of these subscriptions deleted, sometimes they have all been deleted, sometimes only some of them have been deleted.
Is it a bug? The only method I call on the subscription client is RegisterMessageHandler; I don't manually manage anything else.
Thank you in advance
The AutoDeleteOnIdle property deletes the subscription when there is no message activity within the subscription for the specified time span.
Since you mention that the message flow increased to 15 messages per second, the subscription should never be left without message flow, so there would seem to be no reason for the subscriptions to be deleted; the idleness of a subscription is determined by both incoming and outgoing messages.
One possibility is that, due to the heavy message traffic, the downstream application processing the messages went offline, leaving the messages unprocessed. Once the message flow later dropped off and there was still no receiver processing messages, the subscription sat idle for 3 hours and was deleted.
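If the subscriptions do keep disappearing, one workaround (a sketch only, using the ManagementClient from Microsoft.Azure.ServiceBus.Management; the topic and subscription names are placeholders) is to check for and re-create the subscription whenever the client starts or reconnects:
var management = new ManagementClient(connectionString);

if (!await management.SubscriptionExistsAsync("mytopic", "mysubscription"))
{
    await management.CreateSubscriptionAsync(new SubscriptionDescription("mytopic", "mysubscription")
    {
        AutoDeleteOnIdle = TimeSpan.FromHours(3) // or a larger value to make auto-delete less aggressive
    });
}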
I have an Azure WebJob, and I use a CloudQueue to communicate with it.
From my Web Application:
logger.Info("Writing Conversion Request to Document Queue " + JsonConvert.SerializeObject(blobInfo));
var queueMessage = new CloudQueueMessage(JsonConvert.SerializeObject(blobInfo));
documentQueue.AddMessage(queueMessage);
I verify in my log file that I see the INFO statement being written.
However, when I go to my queue:
What baffles me even more: this queue was full of messages before, including ones with timestamps from this evening.
I went and cleared out my queue, and after clearing it, it no longer receives messages.
Has anyone ever seen this before?
As Gaurav Mantri mentioned in the comments, the message should have been processed by your WebJob. When the maximum number of attempts is reached, it is moved to the poison queue.
You can also find more details about poison messages in the official Azure tutorials. The following is a snippet from the tutorials:
Messages whose content causes a function to fail are called poison messages. When the function fails, the queue message is not deleted and eventually is picked up again, causing the cycle to be repeated. The SDK can automatically interrupt the cycle after a limited number of iterations, or you can do it manually.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
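For completeness, a sketch of a WebJobs function that watches the poison queue (the SDK names it "<originalqueue>-poison"; "documentqueue-poison" below is a placeholder matching whatever queue the web application writes to):
public static void ProcessPoisonMessage(
    [QueueTrigger("documentqueue-poison")] string poisonMessage,
    TextWriter log)
{
    // Log (or alert on) the failed conversion request instead of letting it retry forever.
    log.WriteLine("Poison message received: " + poisonMessage);
}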
I'm working with Azure Service Bus Queues in a request/response pattern using two queues and in general it is working well. I'm using pretty simple code from some good examples I've found. My queues are between web and worker roles, using MVC4, Visual Studio 2012 and .NET 4.5.
During some stress testing, I end up overloading my system and some responses are not delivered before the client gives up (which I will fix, not the point of this question).
When this happens, I end up with many messages left in my response queue, all well beyond their ExpiresAtUtc time. My message TimeToLive is set for 5 minutes.
When I look at the properties for a message still in the queue, it is clearly set to expire in the past, with a TimeToLive of 5 minutes.
I create the queues if they don't exist with the following code:
namespaceManager.CreateQueue(
    new QueueDescription(RequestQueueName)
    {
        RequiresSession = true,
        DefaultMessageTimeToLive = TimeSpan.FromMinutes(5) // messages expire if not handled within 5 minutes
    });
What would cause a message to remain in a queue long after it is set to expire?
As I understand it, there is no background process cleaning these up; only the act of moving the queue cursor forward with a call to Receive causes the server to skip past and dispose of expired messages, returning the first message that is not expired (or none if all are expired).
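If that understanding is right, one workaround is a periodic "pump" receive against the response queue just to move the cursor forward. A sketch, assuming a placeholder ResponseQueueName and connection string (if the response queue also requires sessions, the same idea applies through AcceptMessageSession):
var responseClient = QueueClient.CreateFromConnectionString(connectionString, ResponseQueueName);

// A short receive makes the broker skip over (and drop) expired messages while
// it looks for the next live one.
var message = responseClient.Receive(TimeSpan.FromSeconds(1));
if (message != null)
{
    message.Abandon(); // we only wanted the cleanup side effect, not the message itself
}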