I have an Azure WebJob function that listens to messages on an Azure ServiceBus queue. Usually when I encounter an exception in my code, the message is abandoned as per the Azure WebJobs SDK documentation:
The SDK receives a message in PeekLock mode and calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed.
According to the Azure ServiceBus documentation this should mean that the message becomes available again, and will be retried:
If the application is unable to process the message for some reason, it can call the AbandonAsync method on the received message (instead of CompleteAsync). This method enables Service Bus to unlock the message and make it available to be received again, either by the same consumer or by another competing consumer. Secondly, there is a timeout associated with the lock and if the application fails to process the message before the lock timeout expires (for example, if the application crashes), then Service Bus unlocks the message and makes it available to be received again (essentially performing an AbandonAsync operation by default).
The behavior described above is what usually happens, but I have found an exception to this rule. If my code throws a TaskCanceledException specifically, the message is not abandoned as it should be:
public void ProcessQueueMessage([ServiceBusTrigger("queue")] BrokeredMessage message, TextWriter log)
{
    throw new TaskCanceledException();
}
When running this function through a WebJob, I see the error message printed out clear as day, but the message is consumed without any retries and without entering the dead-letter queue. If I replace the TaskCanceledException above with an InvalidOperationException, the message is abandoned and retried as it should be (I have verified this against an actual Service Bus queue).
I have not been able to find any explanation for this behavior. Currently I am wrapping the TaskCanceledException in another exception to work around the issue.
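Roughly, the workaround looks like this (the wrapping exception type and the placeholder processing code are arbitrary):

public void ProcessQueueMessage([ServiceBusTrigger("queue")] BrokeredMessage message, TextWriter log)
{
    try
    {
        // actual processing that may throw a TaskCanceledException
    }
    catch (TaskCanceledException ex)
    {
        // Wrapping in a different exception type makes the SDK abandon the
        // message, so Service Bus retries it as expected.
        throw new InvalidOperationException("Processing was canceled.", ex);
    }
}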
The question
Is what I am experiencing a bug in the Azure WebJobs SDK? Is TaskCanceledException special in this regard, or do other types of exceptions have similar behavior?
I am using the following NuGet packages:
Microsoft.Azure.WebJobs 2.3.0
Microsoft.Azure.WebJobs.ServiceBus 2.3.0
Functions are supposed to abandon the message if execution was not successful. If you're saying the message was not abandoned and retried even though it should have been (assuming MaxDeliveryCount is set to more than 1 and the receive mode is PeekLock), then it's likely an issue with Functions and not Azure Service Bus. You could verify that by running a console application that performs the same operation and checking whether the message is completed and removed from the queue, or still on the queue and available for consumption.
Also, it looks like you're using an older version of WebJobs (and Azure Service Bus). When performing the verification, you'd need to use the older Azure Service Bus client (WindowsAzure.ServiceBus) and not the new one (Microsoft.Azure.ServiceBus).
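A minimal console sketch for that verification, using the older WindowsAzure.ServiceBus client (the connection string and queue name are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        var connectionString = "<service-bus-connection-string>";
        var client = QueueClient.CreateFromConnectionString(connectionString, "queue", ReceiveMode.PeekLock);

        var message = client.Receive(TimeSpan.FromSeconds(10));
        if (message != null)
        {
            Console.WriteLine($"DeliveryCount: {message.DeliveryCount}");

            // Abandon instead of Complete: the message should become visible
            // again, and DeliveryCount should increase on the next receive.
            message.Abandon();
        }

        client.Close();
    }
}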
Related
I have an Azure Function v1 triggered via a Service Bus topic. If any error occurs, I put the BrokeredMessage into the dead-letter queue. It seems to work, but afterwards I see the following in the Function's log streaming:
2019-11-19T10:49:31.382 [Error] MessageReceiver error (Action=Complete):
Microsoft.ServiceBus.Messaging.MessageLockLostException: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue.
Here is how I am putting the BrokeredMessage into the dead-letter queue:
myBrokeredMessage.DeadLetter(deadLetterReason, exception.Message);
// after this I have tried following but doesn't work:
// 1. do nothing
// 2. myBrokeredMessage.Complete();
// 3. myBrokeredMessage.Abandon();
My Function is running fine. But after it has run and executed the code above, that error appears in the log streaming. It seems to be doing what I want (putting the BrokeredMessage into the dead-letter queue), but the error doesn't look nice and I want to fix it. I guess there is some kind of lock that I'm not handling correctly.
What should I do to fix that error?
This is more of a warning than an error. The way Functions are designed, by default the runtime either completes or dead-letters the message itself. If you take control over what happens to the incoming message, the Functions runtime doesn't know about it and still tries to complete the message: from its perspective no exception was thrown from the user code, so the incoming message is considered successfully processed and should be completed.
With Functions 2.x there's a host setting you can turn on to allow manual completion and disable the automatic completion. That is not available in v1.0, so you'll either have to ignore the logged error or upgrade to 2.x.
I have an Azure Function which is triggered by adding a new message to a queue.
It should download a file from an FTP server, and the name of the file is part of the message that I push into the queue.
At some point, the server hosting the files might become inaccessible and I will get exceptions, of course.
I would like to know how the queue behaves in these cases. Does it pop the message and leave it at that? Or does it keep it and call the function again and again until the task completes without any exceptions?
From the docs:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed as long as the function is running.
So, if the function fails, the message will become available again for a next run, up to a maximum of 10 retries. After 10 retries it goes to the dead-letter queue (source):
Service Bus Queues and Subscriptions each have a QueueDescription.MaxDeliveryCount and SubscriptionDescription.MaxDeliveryCount property respectively. The default value is 10. Whenever a message has been delivered under a lock (ReceiveMode.PeekLock), but has been either explicitly abandoned or the lock has expired, the message's BrokeredMessage.DeliveryCount is incremented. When DeliveryCount exceeds MaxDeliveryCount, the message is moved to the DLQ, specifying the MaxDeliveryCountExceeded reason code.
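A minimal sketch of what that looks like in practice, using the v1-style bindings seen elsewhere in this thread (the queue name and FTP host are placeholders): just let the exception escape the function, and the retry/dead-letter behaviour described above kicks in.

using System.Net;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.ServiceBus.Messaging;

public static class FtpDownloadFunction
{
    public static void Run(
        [ServiceBusTrigger("download-requests")] BrokeredMessage message,
        TraceWriter log)
    {
        var fileName = message.GetBody<string>();

        // If the FTP server is unreachable this throws. The runtime then abandons
        // the message, Service Bus increments its DeliveryCount, and the message is
        // redelivered until MaxDeliveryCount is exceeded and it is dead-lettered.
        using (var client = new WebClient())
        {
            client.DownloadFile("ftp://ftp.example.com/" + fileName, fileName);
        }

        log.Info($"Downloaded {fileName}");
    }
}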
I'm new to Service Bus and not able to figure this out.
Basically I'm using an Azure Function app which is hooked onto a Service Bus queue. Let's say a trigger is fired from the Service Bus and I receive a message from the queue, and in the processing of that message something goes wrong in my code. In such cases, how do I make sure to put that message back in the queue again? Currently it just disappears into thin air, and when I restart my function app in VS, the next message from the queue is taken.
Ideally, only when all my data processing is done and I hit myMsg.Success() do I want it to be removed from the queue.
public static async Task RunAsync([ServiceBusTrigger("xx", "yy", AccessRights.Manage)] BrokeredMessage mySbMsg, TraceWriter log)
{
    try { /* do something with mySbMsg */ }
    catch { /* put mySbMsg back in the queue so it doesn't disappear */ throw; }
}
I was reading up on mySbMsg.Abandon(), but it looks like that puts the message in the dead-letter queue, and I am not sure how to access that. Is there a better way to handle errors?
Cloud queues are a bit different than in-memory queues because they need to be robust to the possibility of the client crashing after it received the queue message but before it finished processing the message.
When a queue message is received, the message becomes "invisible" so that other clients can't pick it up. This gives the client a chance to process it, and the client must mark the message as completed when it is done (Azure Functions does this automatically when you return from the function). That way, if the client were to crash in the middle of processing the message (we're in the cloud, so be robust to random machine crashes due to power loss, etc.), the server will notice the message was never completed, assume the client crashed, and eventually resend the message.
Practically, this means that if you receive a queue message and throw an exception (and thus don't mark the message as completed), it will be invisible for a few minutes, but then it will show up again and another client can attempt to handle it. Put another way, in Azure Functions, queue messages are automatically retried after exceptions, but the message will be invisible for a few minutes in between retries.
If you want the message to remain on the queue to be retried, the function should not swallow the exception, but rather rethrow it. That way the Functions runtime will not auto-complete the message, and it will be retried.
Keep in mind that this will cause the message to be retried and eventually, if the exception persists, to be moved to the dead-letter queue.
As per my understanding, what you are looking for is that if there is an error processing the message, it should be retried instead of being swallowed. If you are using Azure Functions v2.0, you define the message handler options in host.json:
"extensions": {
"serviceBus": {
"prefetchCount": 100,
"messageHandlerOptions": {
"autoComplete": false,
"maxConcurrentCalls": 1
}
}
}
prefetchCount - Gets or sets the number of messages that the message receiver can simultaneously request.
autoComplete - Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.
After the message has been retried n times (n defaults to 10, the queue's MaxDeliveryCount), it will be moved to the DLQ.
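With autoComplete set to false, the function has to settle the message itself. A hedged sketch for Functions v2, assuming the extension's MessageReceiver and lock-token bindings (the queue name and connection setting are placeholders):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ManualSettlementFunction
{
    [FunctionName("ManualSettlement")]
    public static async Task Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message message,
        MessageReceiver messageReceiver,
        string lockToken,
        ILogger log)
    {
        try
        {
            // ... process the message ...
            await messageReceiver.CompleteAsync(lockToken);
        }
        catch (Exception)
        {
            // Explicitly abandon so the message becomes visible again and is retried.
            await messageReceiver.AbandonAsync(lockToken);
            throw;
        }
    }
}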
How does the poison-message handling work for the Azure WebJobs SDK's ServiceBusTrigger? I am looking to push Service Bus queue messages that have been dequeued more than 'x' times to a different Service Bus (or storage) queue.
The online documentation here and here and the SDK samples from here do not have examples of how poison message handling works for ServiceBusTrigger. Is this work in progress?
I tried implementing custom poison message handling using a dequeueCount parameter, but it doesn't look like that is supported for ServiceBusTrigger, as I was getting a runtime exception: {"Cannot bind parameter 'dequeueCount' when using this trigger."}
public static void ProcessMessage(
    [ServiceBusTrigger(topicName: "abc", subscriptionName: "abc.gdp")] NotificationMessage message,
    [Blob("rox/{PayloadId}", FileAccess.Read)] Stream blobInput,
    Int32 dequeueCount)
{
    throw new ArgumentNullException();
}
While you cannot get a dequeueCount property for Service Bus messages, you can always bind to BrokeredMessage instead of NotificationMessage and read its DeliveryCount property.
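For example, a rough sketch of custom poison handling on that basis (the forwarding queue name and the threshold are arbitrary; a ServiceBus output binding is used to push the message to another queue):

public static void ProcessMessage(
    [ServiceBusTrigger(topicName: "abc", subscriptionName: "abc.gdp")] BrokeredMessage message,
    [ServiceBus("poison-messages")] out BrokeredMessage poisonMessage,
    TextWriter log)
{
    poisonMessage = null;

    if (message.DeliveryCount > 5)
    {
        // Dequeued too many times: forward a copy to a separate queue and stop retrying.
        poisonMessage = message.Clone();
        return;
    }

    // ... normal processing; throwing here makes the SDK abandon the message ...
}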
It looks like WebJobs handles this internally at the moment.
Reference: How to use Azure Service Bus with the WebJobs SDK
Specific section:
How ServiceBusTrigger works
The SDK receives a message in PeekLock mode and calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed.
Service Bus does its own poison queue handling, so that is neither controlled by, nor configurable in, the WebJobs SDK.
Additional Reference
Poison message handling can't be controlled or configured in Azure Functions. Service Bus handles poison messages itself.
To add to Brendan Green's answer: the WebJobs SDK calls Abandon on messages that fail to process, and after the maximum number of delivery attempts these messages are moved to the dead-letter queue by Service Bus. The properties that define when a message is moved to the dead-letter queue, such as the maximum delivery count, time to live, and lock (PeekLock) duration, can be changed under Service Bus -> Queue -> Properties.
You can find more information on SB dead letter queue here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues
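If you need to inspect what ended up in the dead-letter queue, the older client exposes it as a sub-queue of the main queue. A minimal sketch (the connection string and queue name are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class DeadLetterReader
{
    static void Main()
    {
        var connectionString = "<service-bus-connection-string>";
        var dlqPath = QueueClient.FormatDeadLetterPath("myqueue");
        var dlqClient = QueueClient.CreateFromConnectionString(connectionString, dlqPath, ReceiveMode.PeekLock);

        BrokeredMessage message;
        while ((message = dlqClient.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            // Messages dead-lettered by the broker carry the reason in their properties.
            Console.WriteLine($"{message.MessageId}: {message.Properties["DeadLetterReason"]}");
            message.Complete(); // remove from the DLQ after inspection
        }

        dlqClient.Close();
    }
}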
In an Azure Web Job you can have functions triggered by an Azure queue in a continuously running job.
When a message has been read from a queue, it is deleted.
But if for some reason my job crashes (think a VM restart), the job that was in progress will fail and I will lose the information in the message.
Is it possible to configure Azure WebJobs not to delete messages from the queue automatically, and to do it manually when my job finishes?
There are two cases:
The function failed because the message was bad and we couldn't bind it - for example, you bind to a Person object but the message body is invalid JSON. In that case, we delete the message from the queue. We are going to have a mechanism for handling poison messages in a future release. (related question)
The function failed because an exception was thrown after the message was bound - for example, your own code threw an exception. Whenever we get a message from a queue (except in case #1), we set a lease of, I think, 10 minutes:
If the function still runs after 10 minutes, we renew the lease.
If the function completes, the message is deleted.
If the function throws for any reason, we leave the message there and do not renew the lease. The message will show up in the queue again after the lease expires (at most 10 minutes).
To answer your question: if the VM restarts, you are in case #2 and the message should show up again after, at most, 10 minutes.
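To make case #2 concrete, here is a minimal sketch (the queue name is a placeholder): because the function throws, the message is not deleted and reappears once the lease expires, and the dequeueCount binding supported for storage queue triggers shows how many times it has come back.

using System;
using System.IO;
using Microsoft.Azure.WebJobs;

public static class RequeueDemo
{
    public static void ProcessQueueMessage(
        [QueueTrigger("myqueue")] string message,
        int dequeueCount,
        TextWriter log)
    {
        log.WriteLine($"Attempt #{dequeueCount} for message: {message}");

        // Throwing means the message is not deleted; it becomes visible again
        // once the lease expires and will be picked up for another attempt.
        throw new InvalidOperationException("Simulated processing failure.");
    }
}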