I have an Azure Function which is triggered by adding a new message to a queue.
It should download a file from an FTP server, and the name of the file is part of the message that I push onto the queue.
At some point, the server hosting the files might become inaccessible and I will, of course, get exceptions.
I would like to know how the queue behaves in these cases. Does it just pop the message and drop it? Or does it keep the message and call the function again and again until the task completes without any exceptions?
From the docs:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed as long as the function is running.
So, if the function fails, the message will be available again for the next run, up to a maximum of 10 attempts. After 10 attempts it goes to the dead-letter queue (source):
Service Bus queues and subscriptions each have a QueueDescription.MaxDeliveryCount and SubscriptionDescription.MaxDeliveryCount property respectively; the default value is 10. Whenever a message has been delivered under a lock (ReceiveMode.PeekLock) but has been either explicitly abandoned or the lock has expired, the message's BrokeredMessage.DeliveryCount is incremented. When DeliveryCount exceeds MaxDeliveryCount, the message is moved to the DLQ, specifying the MaxDeliveryCountExceeded reason code.
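For reference, if you want to inspect or change that limit, here is a minimal sketch using the ManagementClient from the newer Microsoft.Azure.ServiceBus package (the connection string environment variable and the queue name "incoming-files" are just assumptions for illustration):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

class Program
{
    static async Task Main()
    {
        // Hypothetical connection string source and queue name.
        var connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
        var client = new ManagementClient(connectionString);

        // Read the current queue description; MaxDeliveryCount defaults to 10.
        QueueDescription queue = await client.GetQueueAsync("incoming-files");
        Console.WriteLine($"MaxDeliveryCount: {queue.MaxDeliveryCount}");

        // Optionally allow more delivery attempts before dead-lettering.
        queue.MaxDeliveryCount = 20;
        await client.UpdateQueueAsync(queue);

        await client.CloseAsync();
    }
}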
Related
I have an Azure WebJob function that listens to messages on an Azure ServiceBus queue. Usually when I encounter an exception in my code, the message is abandoned as per the Azure WebJobs SDK documentation:
The SDK receives a message in PeekLock mode and calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed.
According to the Azure ServiceBus documentation this should mean that the message becomes available again, and will be retried:
If the application is unable to process the message for some reason, it can call the AbandonAsync method on the received message (instead of CompleteAsync). This method enables Service Bus to unlock the message and make it available to be received again, either by the same consumer or by another competing consumer. Secondly, there is a timeout associated with the lock and if the application fails to process the message before the lock timeout expires (for example, if the application crashes), then Service Bus unlocks the message and makes it available to be received again (essentially performing an AbandonAsync operation by default).
The behavior described above is what usually happens, but I have found an exception to this rule. If my code throws a TaskCanceledException specifically, the message is not abandoned as it should be:
public void ProcessQueueMessage([ServiceBusTrigger("queue")] BrokeredMessage message, TextWriter log)
{
    throw new TaskCanceledException();
}
When running this function through a WebJob, I see the error message printed out clear as day, but the message is consumed without any retries and without entering the dead-letter queue. If I replace the TaskCanceledException above with an InvalidOperationException, the message is abandoned and retried as it should be (I have verified this against an actual Service Bus queue).
I have not been able to find any explanation for this behavior. Currently I am wrapping the TaskCanceledException in another exception to work around the issue.
The question
Is what I am experiencing a bug in the Azure WebJobs SDK? Is TaskCanceledException special in this regard, or do other types of exceptions have similar behavior?
I am using the following NuGet packages:
Microsoft.Azure.WebJobs 2.3.0
Microsoft.Azure.WebJobs.ServiceBus 2.3.0
Functions are supposed to abandon the message if execution was not successful. If the message was not abandoned and retried even though it should have been (assuming MaxDeliveryCount is set to more than 1 and the receive mode is PeekLock), then the issue is likely with Functions and not Azure Service Bus. You could verify that by running a console application that performs the same operation and checking whether the message is completed and removed from the queue or still on the queue and available for consumption.
Also, it looks like you're using the older version of WebJobs (and of the Azure Service Bus client). When performing the verification, you'd need to use the older Azure Service Bus client (WindowsAzure.ServiceBus) and not the new one (Microsoft.Azure.ServiceBus).
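For example, a minimal sketch of such a console-app check using the older WindowsAzure.ServiceBus package (the connection string source is an assumption; "queue" matches the trigger in the question):

using System;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus package

class Program
{
    static void Main()
    {
        // Hypothetical connection string source.
        var connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
        var client = QueueClient.CreateFromConnectionString(connectionString, "queue", ReceiveMode.PeekLock);

        // Receive the message under a peek-lock and abandon it to simulate a failed run.
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(10));
        if (message != null)
        {
            Console.WriteLine("DeliveryCount: " + message.DeliveryCount);
            message.Abandon(); // message stays on the queue; DeliveryCount is incremented
        }

        // Receiving again should return the same message with a higher DeliveryCount
        // if abandon-and-retry is working as expected.
        client.Close();
    }
}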
I'm new to Service Bus and not able to figure this out.
Basically, I'm using an Azure Function app which is hooked onto a Service Bus queue. Let's say a trigger is fired from the Service Bus and I receive a message from the queue, and in the processing of that message something goes wrong in my code. In such cases, how do I make sure to put that message back in the queue? Currently it just disappears into thin air, and when I restart my Function app in VS, the next message from the queue is taken.
Ideally, only when all my data processing is done and I hit myMsg.Success() do I want it to be removed from the queue.
public static async Task RunAsync([ServiceBusTrigger("xx", "yy", AccessRights.Manage)] BrokedMessage mySbMsg, TraceWriter log)
{
    try
    {
        // do something with mySbMsg
    }
    catch
    {
        // put mySbMsg back in the queue so it doesn't disappear, and throw the exception
        throw;
    }
}
I was reading up on mySbMsg.Abandon(), but it looks like that puts the message in the dead-letter queue, and I am not sure how to access it. Is there a better way to handle errors?
Cloud queues are a bit different from in-memory queues because they need to be robust against the possibility of the client crashing after it has received the queue message but before it has finished processing it.
When a queue message is received, the message becomes "invisible" so that other clients can't pick it up. This gives the client a chance to process it, and the client must mark the message as completed when it is done (Azure Functions does this automatically when you return from the function). That way, if the client crashes in the middle of processing the message (we're in the cloud, so be robust to random machine crashes due to power loss, etc.), the server sees that the message was never completed, assumes the client crashed, and eventually makes the message visible again.
Practically, this means that if you receive a queue message and throw an exception (so the message is not marked as completed), it will be invisible for a few minutes, but then it will show up again and another client can attempt to handle it. Put another way, in Azure Functions queue messages are automatically retried after exceptions, but the message stays invisible for a few minutes between retries.
If you want the message to remain on the queue and be retried, the function should not swallow the exception but rather rethrow it. That way the Functions runtime will not auto-complete the message and will retry it.
Keep in mind that this will cause the message to be retried and eventually, if the exception persists, moved to the dead-letter queue.
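A minimal sketch of that pattern, reusing the signature from the question: log the failure and rethrow so the runtime abandons the message instead of auto-completing it:

public static async Task RunAsync([ServiceBusTrigger("xx", "yy", AccessRights.Manage)] BrokeredMessage mySbMsg, TraceWriter log)
{
    try
    {
        // ... do something with mySbMsg ...
        await Task.CompletedTask;
    }
    catch (Exception ex)
    {
        log.Error("Processing failed; message will be abandoned and retried: " + ex.Message);
        throw; // do not swallow the exception: rethrowing prevents auto-complete
    }
}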
As I understand it, what you are after is this: if there is an error while processing the message, the execution should be retried instead of the error being swallowed. If you are using Azure Functions v2.0, you define the message handler options in host.json:
"extensions": {
"serviceBus": {
"prefetchCount": 100,
"messageHandlerOptions": {
"autoComplete": false,
"maxConcurrentCalls": 1
}
}
}
prefetchCount - Gets or sets the number of messages that the message receiver can simultaneously request.
autoComplete - Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.
After retrying the message n times (n defaults to 10), it will move the message to the DLQ.
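With autoComplete set to false, the function has to settle the message itself. A minimal sketch, assuming a Functions v2 Service Bus extension version that supports binding a MessageReceiver alongside the Message (the queue and function names here are made up):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    [FunctionName("ProcessOrder")]
    public static async Task Run(
        [ServiceBusTrigger("orders")] Message message, // "orders" is an assumed queue name
        MessageReceiver messageReceiver,               // extra binding used for manual settlement
        ILogger log)
    {
        try
        {
            log.LogInformation($"Processing message {message.MessageId}");
            // ... process the message ...

            // Explicitly complete on success because autoComplete is false.
            await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
        }
        catch (Exception ex)
        {
            log.LogError(ex, "Processing failed, abandoning message so it is retried");
            // Abandon makes the message available again immediately; after MaxDeliveryCount
            // failed deliveries it is moved to the dead-letter queue.
            await messageReceiver.AbandonAsync(message.SystemProperties.LockToken);
        }
    }
}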
I have a WebJob getting messages from Service bus which is working fine. If the webjob fails to finish or throws exception the message is sent back in the queue which is fine in itself.
I have set MaxDequeueCount to 10, and it is not a problem if processing fails in some cases, as the message will be tried again. But the problem is that the message seems to be sent to the bottom of the queue, and other messages are handled before we get back to the first failed one.
I would like to handle one message at a time because I might have multiple updates on the same entity coming in a row. If the order changes it would update the entity in wrong order.
Is it possible to send the message back to the front of the queue on error, or to keep working on the same message until we reach the MaxDequeueCount?
Ideally, you should not be relying on message order.
Given your requirement, you could go with the FIFO semantics of Azure Service Bus sessions. When a message is handled within a session and the message is abandoned, it is handled again rather than going to the end of the queue. You need to keep in mind the following:
You can only process one message at a time.
Sessions must be enabled on the queue and used by the receiver.
If a message is neither completed nor abandoned, it will be left hanging in the queue and will be picked up when a session is restarted.
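A minimal sketch of session-based consumption using the Microsoft.Azure.ServiceBus client directly (the connection string source and queue name are hypothetical, and the queue is assumed to have sessions enabled):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class SessionConsumer
{
    static void Main()
    {
        // Hypothetical connection string source and session-enabled queue name.
        var connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
        var queueClient = new QueueClient(connectionString, "entity-updates");

        var options = new SessionHandlerOptions(args =>
        {
            Console.WriteLine("Receiver error: " + args.Exception);
            return Task.CompletedTask;
        })
        {
            MaxConcurrentSessions = 1, // one session, and therefore one message, at a time
            AutoComplete = false
        };

        queueClient.RegisterSessionHandler(async (session, message, token) =>
        {
            try
            {
                // ... apply the update for this entity ...
                await session.CompleteAsync(message.SystemProperties.LockToken);
            }
            catch
            {
                // Abandoning within a session makes the SAME message available again
                // before any later message in that session is delivered.
                await session.AbandonAsync(message.SystemProperties.LockToken);
            }
        }, options);

        Console.ReadLine(); // keep the process alive while messages are pumped
        queueClient.CloseAsync().GetAwaiter().GetResult();
    }
}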
I have an Azure WebJob and I use a CloudQueue to communicate with it.
From my Web Application:
logger.Info("Writing Conversion Request to Document Queue " + JsonConvert.SerializeObject(blobInfo));
var queueMessage = new CloudQueueMessage(JsonConvert.SerializeObject(blobInfo));
documentQueue.AddMessage(queueMessage);
I verify in my log file that I see the INFO statement being written.
However, when I go to my queue, the message is nowhere to be found.
What baffles me even more is that this queue was full of messages before, including some with timestamps from this evening.
I went and cleared out my queue, and after clearing it, it no longer seems to receive messages.
Has anyone ever seen this before?
As Gaurav Mantri mentioned in the comments, the message should have been processed by your WebJob. When it reaches the maximum number of attempts, it will be moved to the poison queue.
We can also get more details about poison messages from the official Azure tutorials. The following is a snippet from the tutorials:
Messages whose content causes a function to fail are called poison messages. When the function fails, the queue message is not deleted and eventually is picked up again, causing the cycle to be repeated. The SDK can automatically interrupt the cycle after a limited number of iterations, or you can do it manually.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
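A minimal sketch of changing that retry limit, assuming the WebJobs SDK 2.x JobHostConfiguration:

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Retry a storage queue message up to 10 times (the SDK default is 5)
        // before moving it to the <queuename>-poison queue.
        config.Queues.MaxDequeueCount = 10;

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}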
In an Azure Web Job you can have functions triggered by an Azure queue in a continuously running job.
When a message has been read from a queue, it is deleted.
But if for some reason my job crashes (think a VM restart), the current job that was not finished will be aborted and I will lose the information in the message.
Is it possible to configure Azure Web Jobs not to delete messages from queue automatically, and do it manually when my job finishes?
There are two cases:
The function failed because the message was bad and we couldn't bind it - for example, you bind to a Person object but the message body is invalid JSON. In that case, we delete the message from the queue. We are going to have a mechanism for handling poison messages in a future release. (related question)
The function failed because an exception was thrown after the message was bound - for example, your own code threw an exception. Whenever we get a message from a queue (except in case #1), we set a lease of, I think, 10 minutes:
If the function still runs after 10 minutes, we renew the lease.
If the function completes, the message is deleted.
If the function throws for any reason, we leave the message there and do not renew the lease. The message will show up in the queue again after the lease expires (at most 10 minutes).
To answer your question: if the VM restarts, you are in case #2 and the message should show up again after, at most, 10 minutes.
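To illustrate case #2, here is a minimal sketch of a queue-triggered WebJobs function (the queue name is made up) where returning normally deletes the message and throwing leaves it to reappear after the lease expires:

using System;
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    public static void ProcessQueueMessage(
        [QueueTrigger("orders")] string message, // "orders" is an assumed queue name
        TextWriter log)
    {
        log.WriteLine("Processing: " + message);

        // Throwing means the message is NOT deleted; it becomes visible again
        // after the lease expires and is retried (case #2 above).
        if (string.IsNullOrWhiteSpace(message))
        {
            throw new InvalidOperationException("Bad input; the message will reappear and be retried.");
        }

        // Returning normally marks the message as completed and it is deleted.
    }
}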