What is the role of "MaxAutoRenewDuration" in Azure Service Bus? - c#

I'm using Microsoft.Azure.ServiceBus. (doc)
I was getting an exception of:
The lock supplied is invalid. Either the lock expired, or the message
has already been removed from the queue.
With the help of these questions:
1, 2, 3,
I am able to avoid the exception by setting AutoComplete to false and by increasing Azure's queue lock duration to its maximum (from 30 seconds to 5 minutes).
_queueClient.RegisterMessageHandler(ProcessMessagesAsync,
    new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        MaxConcurrentCalls = 1,
        MaxAutoRenewDuration = TimeSpan.FromSeconds(10),
        AutoComplete = false
    });
private async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    await ProccesMessage(message);
}

private async Task ProccesMessage(Message message)
{
    // The message should be completed before the long-running process starts
    await _queueClient.CompleteAsync(message.SystemProperties.LockToken);
    await DoFoo(message.Body); // some long-running process
}
My questions are:
This answer suggested that the exception was raised because the lock expired before the long-running process finished, but in my case I mark the message as complete immediately (before the long-running process), so I'm not sure why changing the lock duration in Azure made any difference. When I change it back to 30 seconds, I can see the exception again.
Not sure if it's related to the question, but what is the purpose of MaxAutoRenewDuration? The official docs say it is "The maximum duration during which locks are automatically renewed." In my case I have only one receiver app dequeuing from this queue, so is it not needed, since I don't need to lock the message against another app? And why should this value be greater than the longest message lock duration?

There are a few things you need to consider:
Lock duration
Total time since a message was acquired from the broker
The lock duration is simple: it's how long a single competing consumer can hold a lease on a message without that message being leased to any other competing consumer.
The total time is a bit trickier. Your callback ProcessMessagesAsync, registered to receive messages, is not the only thing involved. In the code sample you've provided, you're setting the concurrency to 1. If prefetch is configured (the client fetches more than one message with every request), the lock duration clock on the server starts ticking for all those messages as soon as they are fetched. So even if each message is processed in slightly under MaxLockDuration, the last prefetched message may have waited in the client too long; even though its own processing takes less than the lock duration, its lock can expire, and the exception will be thrown when attempting to complete that message.
This is where MaxAutoRenewDuration comes into play. What it does is extend the message lease with the broker, "re-locking" it for the competing consumer that is currently handling the message. MaxAutoRenewDuration should be set to the maximum processing time for which a lease could possibly be required. In your sample it's set to TimeSpan.FromSeconds(10), which is extremely low. It needs to be at least as long as MaxLockDuration, and adjusted to the longest period of time ProccesMessage will need to run, taking prefetching into consideration.
To help visualize it, think of the client side as having an in-memory queue where messages are stored while you perform the serial processing of the messages one by one in your handler. The lease starts the moment a message arrives from the broker into that in-memory queue. If the total time in the in-memory queue plus the processing time exceeds the lock duration, the lease is lost. Your options are:
1. Enable concurrent processing by setting MaxConcurrentCalls > 1
2. Increase MaxLockDuration
3. Reduce message prefetch (if you use it)
4. Configure MaxAutoRenewDuration to renew the lock and overcome the MaxLockDuration constraint (see the sketch below)
Note about #4: it's not a guaranteed operation, so there's a chance a call to the broker will fail and the message lock will not be extended. I recommend designing your solution to work within the lock duration limit. Alternatively, persist message information so that your processing doesn't have to be constrained by the messaging.
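For illustration, here's a minimal sketch of option 4 applied to the registration from the question, assuming the same Microsoft.Azure.ServiceBus API; the 10-minute value is an illustrative guess, not a recommendation:
_queueClient.RegisterMessageHandler(ProcessMessagesAsync,
    new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        MaxConcurrentCalls = 1,
        // at least MaxLockDuration, stretched to cover the longest
        // expected run of ProccesMessage, prefetch included
        MaxAutoRenewDuration = TimeSpan.FromMinutes(10),
        AutoComplete = false
    });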

Related

C# Producer/Consumer queue with delays

The problem is the classic N producers (with N possibly large) and X consumers with limited resources (X is currently 4). The producers' messages come in (say, via MQTT) and get queued to be processed by consumers on a FIFO basis. The important part of the processing is that each consumer may need to send back to the producers one or more "replies", and such replies should be at least some time apart (the exact delay is not important). The classic solution, where one starts X tasks that wait on the message queue, process, and loop, is easy to implement using, for example, System.Threading.Channels:
while (!cancellationToken.IsCancellationRequested && await queue.Reader.WaitToReadAsync())
{
    while (queue.Reader.TryRead(out IncomingMessage item))
    {
        // Do some processing.
        SendResponse(1);
        // Do some more processing.
        if (needsToSend2Response)
        {
            await Task.Delay(500);
            SendResponse(2);
        }
    }
}
This works, and works well, except that while a task is delaying it can't process any more messages, and that's obviously bad.
Possible solutions I thought of:
Use an outbound queue that processes the replies and makes sure there is at least a minimum delay between messages sent to the same producer (see the sketch below).
Don't use a queue. Just start a new Task every time a new message comes in and arbitrate the limited resources using a semaphore: it works, but I don't see how to guarantee the FIFO requirement (sometimes messages from the same producer are processed in the wrong order).
Any other ideas?
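For what it's worth, here is a rough sketch of idea #1; the type and member names (OutboundDispatcher, SendToProducer) are hypothetical placeholders, not from any library. Each producer gets its own unbounded channel whose pump task enforces the minimum delay, so the consumer loop never has to await the delay itself and per-producer FIFO order is preserved:
using System;
using System.Collections.Concurrent;
using System.Threading.Channels;
using System.Threading.Tasks;

class OutboundDispatcher
{
    private readonly ConcurrentDictionary<string, Channel<string>> _perProducer = new();
    private readonly TimeSpan _minDelay;

    public OutboundDispatcher(TimeSpan minDelay) => _minDelay = minDelay;

    // Consumers call this instead of SendResponse; it returns immediately.
    public void Enqueue(string producerId, string response)
    {
        var channel = _perProducer.GetOrAdd(producerId, id =>
        {
            var ch = Channel.CreateUnbounded<string>();
            _ = PumpAsync(id, ch.Reader); // one pump task per producer
            return ch;
        });
        channel.Writer.TryWrite(response);
    }

    private async Task PumpAsync(string producerId, ChannelReader<string> reader)
    {
        await foreach (var response in reader.ReadAllAsync())
        {
            SendToProducer(producerId, response); // the real (e.g. MQTT) send
            await Task.Delay(_minDelay);          // space out replies to this producer
        }
    }

    private void SendToProducer(string producerId, string response)
    {
        // placeholder for the actual transport
    }
}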

I have a long running process which I call in my Service Bus Queue. I want it to continue beyond 5 minutes

I have a long-running process which performs matches between millions of records. I call this code using a Service Bus. However, when my process passes the 5-minute limit, Azure starts processing the already-processed records from the start again.
How can I avoid this?
Here is my code:
private static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    long receivedMessageTrasactionId = 0;
    try
    {
        IQueueClient queueClient = new QueueClient(serviceBusConnectionString, serviceBusQueueName, ReceiveMode.PeekLock);
        // Process the message
        receivedMessageTrasactionId = Convert.ToInt64(Encoding.UTF8.GetString(message.Body));
        // My very long running method
        await DataCleanse.PerformDataCleanse(receivedMessageTrasactionId);
        // Get Transaction and Metric details
        await queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }
    catch (Exception ex)
    {
        Log4NetErrorLogger(ex);
        throw ex;
    }
}
Messages are intended for notifications, not long-running processing.
You've got a few options:
1. Receive the message and rely on the receiver's RenewLock() operation to extend the lock.
2. Use the user-callback API and specify the maximum processing time, if known, via the MessageHandlerOptions.MaxAutoRenewDuration setting to auto-renew the message's lock (see the sketch below).
3. Record that processing has started, but do not complete the incoming message. Instead, leverage the message deferral feature, sending yourself a new delayed message with a reference to the deferred message's SequenceNumber. This will allow you to periodically receive a "reminder" message to check whether the work is finished. If it is, complete the deferred message by its SequenceNumber. Otherwise, complete the "reminder" message along with sending a new one. This approach would require some level of architecture redesign.
4. Similar to option 3, but offload processing to an external process that will report the status later. There are frameworks that can help you with that, such as MassTransit or NServiceBus. The latter has a sample you can download and play with.
Note that options 1 and 2 are not guaranteed, as those are client-side initiated operations.
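As a minimal sketch of option 2, assuming the Microsoft.Azure.ServiceBus user-callback API; the one-hour bound is an illustrative value you'd size to your longest data-cleanse run:
var options = new MessageHandlerOptions(ExceptionReceivedHandler)
{
    AutoComplete = false,   // complete explicitly once the work succeeds
    MaxConcurrentCalls = 1,
    // the client keeps renewing the message lock for up to this long
    MaxAutoRenewDuration = TimeSpan.FromHours(1)
};
queueClient.RegisterMessageHandler(ProcessMessagesAsync, options);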

How to implement exponential backoff in Azure Functions?

I have a function that depends on an external API. I would like to handle the unavailability of this service using a retry policy.
This function is triggered when a new message appears in the queue, and in this case a retry policy is turned on by default:
For most triggers, there is no built-in retry when errors occur during function execution. The two triggers that have retry support are Azure Queue storage and Azure Blob storage. By default, these triggers are retried up to five times. After the fifth retry, both triggers write a message to a special poison queue.
Unfortunately, the retry starts immediately after the exception (TimeSpan.Zero), and this is pointless in this case, because the service is most likely still unavailable.
Is there a way to dynamically modify the time after which the message becomes visible in the queue again?
I know that I can set visibilityTimeout (host.json reference), but it's set for all queues, and that is not what I want to achieve here.
I found one workaround, but it is far from an ideal solution. In case of an exception, we can add the message to the queue again and set visibilityTimeout for this message:
[FunctionName("Test")]
public static async Task Run([QueueTrigger("queue-test")]string myQueueItem, TraceWriter log,
ExecutionContext context, [Queue("queue-test")] CloudQueue outputQueue)
{
if (true)
{
log.Error("Error message");
await outputQueue.AddMessageAsync(new CloudQueueMessage(myQueueItem), TimeSpan.FromDays(7),
TimeSpan.FromMinutes(1), // <-- visibilityTimeout
null, null).ConfigureAwait(false);
return;
}
}
Unfortunately, this solution is weak because it has no context: I do not know which attempt it is, so I cannot limit the number of calls or modify the delay (exponential backoff).
An internal retry policy (retrying inside the function) is also unwelcome, because it can drastically increase costs (see the pricing model).
Microsoft added retry policies around November 2020 (preview), which support exponential backoff:
[FunctionName("Test")]
[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")] // retries with delays increasing from 4 seconds to 15 minutes
public static async Task Run([QueueTrigger("queue-test")]string myQueueItem, TraceWriter log, ExecutionContext context)
{
// ...
}
I had a similar problem and ended up using Durable Functions, which have an automatic retry feature built in. It can be used by wrapping your external API call in an activity; when calling this activity you can configure retry behavior through an options object. You can set the following options:
Max number of attempts: The maximum number of retry attempts.
First retry interval: The amount of time to wait before the first retry attempt.
Backoff coefficient: The coefficient used to determine rate of increase of backoff. Defaults to 1.
Max retry interval: The maximum amount of time to wait in between retry attempts.
Retry timeout: The maximum amount of time to spend doing retries. The default behavior is to retry indefinitely.
Handle: A user-defined callback can be specified to determine whether a function should be retried.
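A sketch of what that looks like with the Microsoft.Azure.WebJobs.Extensions.DurableTask API; the "CallExternalApi" activity name is a hypothetical placeholder, and the intervals are illustrative:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class Orchestrator
{
    [FunctionName("Orchestrator")]
    public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var retryOptions = new RetryOptions(
            firstRetryInterval: TimeSpan.FromSeconds(5), // wait before the first retry
            maxNumberOfAttempts: 5)                      // max number of attempts
        {
            BackoffCoefficient = 2.0,                    // exponential growth of the delay
            MaxRetryInterval = TimeSpan.FromMinutes(15), // cap on any single delay
            RetryTimeout = TimeSpan.FromHours(1)         // give up after this total time
        };

        // The activity wrapping the external API call is retried per the options above.
        await context.CallActivityWithRetryAsync("CallExternalApi", retryOptions, null);
    }
}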
One option to consider is to have your Function invoke a Logic App that has a delay set to your desired amount of time and then, after the delay, invokes the function again. You could also add other retry logic (like the number of attempts) to the Logic App, using some persistent storage to tally your attempts. You would only invoke the Logic App if there was a connection issue.
Alternatively, you could shift your process's starting point to Logic Apps, as it can also be triggered by (think: bound to) queue messages. In either case, Logic Apps adds the ability to pause and re-invoke the Function and/or process.
If you are explicitly completing/dead-lettering messages ("autoComplete": false), here's a helper function that will exponentially delay and retry until the max delivery count is reached:
public static async Task ExceptionHandler(IMessageSession MessageSession, string LockToken, int DeliveryCount)
{
    if (DeliveryCount < Globals.MaxDeliveryCount)
    {
        // back off exponentially before releasing the message for redelivery
        var DelaySeconds = Math.Pow(Globals.ExponentialBackoff, DeliveryCount);
        await Task.Delay(TimeSpan.FromSeconds(DelaySeconds));
        await MessageSession.AbandonAsync(LockToken);
    }
    else
    {
        // give up: move the message to the dead-letter queue
        await MessageSession.DeadLetterAsync(LockToken);
    }
}
As of November 2022, function-level retries are no longer supported for the queue trigger (source).
Instead, you must use the binding extensions, e.g. in host.json:
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "clientRetryOptions": {
                "mode": "exponential",
                "tryTimeout": "00:01:00",
                "delay": "00:00:00.80",
                "maxDelay": "00:01:00",
                "maxRetries": 3
            }
        }
    }
}

Azure Service Bus Topic Subscriber lock expired exception

I have a web job that consumes messages from an Azure Service Bus topic by registering an OnMessage callback. The message lock duration was set to 30 seconds and the lock renew timeout to 60 seconds. As a result, jobs taking more than 30 seconds to process a service bus message were getting a lock expired exception.
Now I have set the message lock duration to more than the lock renew timeout. But somehow it still throws the same exception. I also restarted my web job, but still no luck.
I tried running the same web job consuming messages from a different topic with the latter settings, and it works fine. Is this behaviour expected, and how long does it normally take for this kind of setting change to be reflected?
Any help will be great.
I have set the message lock duration to more than the lock renew timeout. But somehow it still throws the same exception.
The max value of the lock duration is 5 min. If you need less than 5 min to process the job, you can increase the lock duration of your message to meet your requirement.
If you need more than 5 min to process your job, you need to set the AutoRenewTimeout property of OnMessageOptions. It will renew the lock before it expires, for as long as the AutoRenewTimeout hasn't been reached. For example, if you set the lock duration to 1 min and AutoRenewTimeout to 5 min, the message will stay locked for up to 5 min if you don't release the lock.
Here is the sample code I used to test the lock duration and AutoRenewTimeout on my side. If the job takes more time than the lock duration and AutoRenewTimeout allow, an exception will be thrown when we complete the message (it means the lock expired). I also modified the lock duration in the portal, and the configuration was applied immediately the next time I received a message.
SubscriptionClient Client = SubscriptionClient.CreateFromConnectionString(connectionString, "topic name", "subscription name");

// Configure the callback options.
OnMessageOptions options = new OnMessageOptions();
options.AutoComplete = false;
options.AutoRenewTimeout = TimeSpan.FromSeconds(60);

Client.OnMessage((message) =>
{
    try
    {
        // Process the message here; the following loop simulates a long-running job
        for (int i = 0; i < 30; i++)
        {
            Thread.Sleep(3000);
        }
        // Remove message from subscription.
        message.Complete();
    }
    catch (Exception ex)
    {
        // Indicates a problem; unlock the message in the subscription.
        message.Abandon();
    }
}, options);
For your issue, please check how much time is spent on your job and choose the right way to set the lock duration and AutoRenewTimeout.
The settings should be reflected almost immediately. Also, the lock renewal timeout should probably be longer than the lock duration, or renewal should be disabled.
The lock renewal feature is an ASB client-side feature and it doesn't override the lock duration set on the entities. If you can reproduce this issue and share the repro, raise a support issue with Microsoft.

Scheduling Basic Consume in RabbitMQ

I have been using RabbitMQ successfully for the last month. Messages are read from the queue using the BasicConsume feature of RabbitMQ. Messages published to the queue are immediately consumed by the corresponding consumer.
Now I have created a new queue, DelayedMsg. Messages published to this queue have to be read only after a 5-minute delay. What should I do?
Add a current timestamp value to the message while publishing it to the main queue from the publisher/sender, e.g. 'published_on' => 1476424186.
At the consumer side, first check the difference between the current timestamp and published_on.
If the difference is less than 5 minutes, send the message to another queue (a DLX queue) with an expiration time set (use the 'expiration' property of the AMQP message).
This expiration value should be the remaining time, i.e. 5 minutes minus (current timestamp - published_on), in milliseconds.
The message will then expire in the DLX queue at exactly the 5-minute mark.
Make sure 'x-dead-letter-exchange' is set to your main queue's exchange and the DLX queue is bound to it, so that when the message expires it automatically gets queued back into the main queue. See Dead Letter Exchange for more details.
So the consumer now gets the message back after 5 min and processes it normally, since the difference between the current timestamp and published_on will now be greater than 5 min. A sketch of this check is shown below.
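Here is a rough sketch of that consumer-side check with the RabbitMQ.Client library; the queue name "delayed-msg-dlx" and the helper itself are hypothetical, and header encoding may differ in your setup:
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class DelayedConsumer
{
    const long DelaySeconds = 300; // the 5-minute delay

    // Returns true if the message was re-routed to the DLX queue because
    // its 5 minutes are not up yet; otherwise it should be processed now.
    public static bool DeferIfTooYoung(IModel channel, BasicDeliverEventArgs ea)
    {
        long publishedOn = Convert.ToInt64(ea.BasicProperties.Headers["published_on"]);
        long elapsed = DateTimeOffset.UtcNow.ToUnixTimeSeconds() - publishedOn;
        if (elapsed >= DelaySeconds)
            return false; // old enough: process normally

        var props = channel.CreateBasicProperties();
        props.Headers = ea.BasicProperties.Headers;  // keep published_on intact
        props.Expiration = ((DelaySeconds - elapsed) * 1000).ToString(); // remaining ms
        channel.BasicPublish("", "delayed-msg-dlx", props, ea.Body);
        return true;
    }
}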
Avoid using DLX to implement delayed messages. It is more of a workaround from before the "RabbitMQ Delayed Message Plugin" existed.
Now that we have this plugin, we should use it instead:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
Create 2 queues:
1. a work queue
2. a delay queue
Set the delay queue's x-dead-letter property to the work queue's name, and its TTL to 5 min.
Send messages to the delay queue; no consumer is needed for it. After 5 min, the broker dead-letters the message to the work queue, so you just consume the work queue and process the messages (see the sketch below).
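A sketch of that two-queue setup with the RabbitMQ.Client library; the queue names, host, and exact TTL are illustrative:
using System;
using System.Collections.Generic;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Work queue: the one your consumer actually reads.
channel.QueueDeclare("work", durable: true, exclusive: false, autoDelete: false);

// Delay queue: no consumer; expired messages are dead-lettered to "work".
channel.QueueDeclare("delay", durable: true, exclusive: false, autoDelete: false,
    arguments: new Dictionary<string, object>
    {
        ["x-dead-letter-exchange"] = "",        // default exchange
        ["x-dead-letter-routing-key"] = "work", // deliver expired messages to "work"
        ["x-message-ttl"] = 5 * 60 * 1000       // 5 minutes, in milliseconds
    });

// Publish to the delay queue; the message shows up on "work" 5 minutes later.
channel.BasicPublish(exchange: "", routingKey: "delay", basicProperties: null,
    body: Encoding.UTF8.GetBytes("hello after 5 minutes"));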
