I have an Azure WebJob and I use the CloudQueue to communicate to it.
From my Web Application:
var payload = JsonConvert.SerializeObject(blobInfo);
logger.Info("Writing Conversion Request to Document Queue " + payload);
var queueMessage = new CloudQueueMessage(payload);
documentQueue.AddMessage(queueMessage);
I verify in my log file that I see the INFO statement being written.
However, when I go to my queue, no new messages appear.
What baffles me even more ... this Queue was full of messages before, including timestamps of this evening.
I went and cleared out my Queue, and after clearing it, it will no longer receive messages.
Has anyone ever seen this before?
As Gaurav Mantri mentioned in the comments, the messages are being processed (and deleted) by your WebJob. If processing a message fails up to the maximum number of attempts, the message is moved to the poison queue.
More details about poison messages are available in the official Azure tutorials. The following is a snippet from them:
Messages whose content causes a function to fail are called poison messages. When the function fails, the queue message is not deleted and eventually is picked up again, causing the cycle to be repeated. The SDK can automatically interrupt the cycle after a limited number of iterations, or you can do it manually.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
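As a sketch of what handling that poison queue can look like: the WebJobs SDK names the poison queue by appending "-poison" to the original queue name, so you can drain it with another queue-triggered function. The queue name "documentqueue" below is an assumption for illustration.

```csharp
// Sketch: a WebJobs function that logs messages landing in the poison queue.
// The SDK moves repeatedly failing messages to "<originalqueue>-poison";
// "documentqueue" is an assumed name for this example.
public static void HandlePoisonMessage(
    [QueueTrigger("documentqueue-poison")] string poisonMessage,
    TextWriter log)
{
    // Log (or persist) the failed payload for later inspection instead of losing it.
    log.WriteLine("Poison message received: " + poisonMessage);
}
```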
Related
We have some issues with messages from Azure ServiceBus being read multiple times. Previously we had the same issue, which turned out to be due to lock timeout. Then, as the lock timed out the messages were read again, and their deliveryCount increased by 1 for each time the message was read. After this, we set the max delivery count to 1 to avoid resending of messages, and also increased the lock timeout to 5 minutes.
The current issue is a lot more strange.
First, messages are read at 10:45:34. The message locks are set to expire at 10:50:34, and deliveryCount is 1. The receive reports success at 10:45:35.0. All good so far.
But then, at 10:45:35.8, the same messages are read again! And the delivery count is still 1. Both the sequence number and the message ID are the same in the two receive logs. This happens for a very small percentage of messages, something like 0.02%.
From what I understand, reading a message should either result in a success where the message should be removed, or an increase of deliveryCount, which in my case should send the message to DLQ. In these cases, neither happens.
I'm using ServiceBusTrigger, like this:
[FunctionName(nameof(ReceiveMessages))]
public async Task Run([ServiceBusTrigger(queueName: "%QueueName%", Connection = "ServiceBusConnectionString")]
string[] messages,
This seems to be like a bug in either the service bus or the library, any thoughts on what it could be?
That’s not the SDK but rather the specific entity. It sounds like the entity is corrupted. Delete and recreate it. If that doesn’t help, then open a support case.
On a different note, a MaxDeliveryCount of 1 is, most of the time, an indicator that something is off. If you truly need an at-most-once delivery guarantee, use ReceiveAndDelete mode instead of PeekLock.
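As a hedged sketch of that suggestion, using the classic Microsoft.ServiceBus.Messaging API to match the BrokeredMessage-era code elsewhere in this thread (the connection string and queue name are placeholders):

```csharp
// Sketch: at-most-once delivery with ReceiveAndDelete
// (classic Microsoft.ServiceBus.Messaging API; "connectionString" and "myqueue" are placeholders).
var client = QueueClient.CreateFromConnectionString(
    connectionString, "myqueue", ReceiveMode.ReceiveAndDelete);

// The message is deleted from the queue the moment it is handed to us;
// if processing crashes after this line, the message is gone. That is the trade-off.
BrokeredMessage message = client.Receive();
```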
I'm new to service bus and not able to figure this out.
Basically, I'm using an Azure Function app which is hooked onto the Service Bus queue. Let's say a trigger is fired from the Service Bus and I receive a message from the queue, and in the processing of that message something goes wrong in my code. In such cases, how do I make sure to put that message back in the queue again? Currently it just disappears into thin air, and when I restart my function app in VS, the next message from the queue is taken.
Ideally only when all my data processing is done and when i hit myMsg.Success() do I want it to be removed from the queue.
public static async Task RunAsync(
    [ServiceBusTrigger("xx", "yy", AccessRights.Manage)] BrokeredMessage mySbMsg,
    TraceWriter log)
{
    try
    {
        // do something with mySbMsg
    }
    catch
    {
        // put mySbMsg back in the queue so it doesn't disappear, and throw the exception
    }
}
I was reading up on mySbMsg.Abandon(), but it looks like that puts the message in the dead-letter queue, and I'm not sure how to access it. Is there a better way to handle errors?
Cloud queues are a bit different than in-memory queues because they need to be robust to the possibility of the client crashing after it received the queue message but before it finished processing the message.
When a queue message is received, it becomes "invisible" so that other clients can't pick it up. This gives the client a chance to process it, and the client must mark the message as completed when it is done (Azure Functions does this automatically when you return from the function). That way, if the client crashes in the middle of processing (we're in the cloud, so be robust to random machine crashes due to power loss, etc.), the server sees that the message was never completed, assumes the client crashed, and eventually makes the message visible again.
Practically, this means that if you receive a queue message and throw an exception (so the message is not marked as completed), it will be invisible for a few minutes, but will then show up again and another client can attempt to handle it. Put another way: in Azure Functions, queue messages are automatically retried after exceptions, but the message stays invisible for a few minutes in between retries.
If you want the message to remain on the queue to be retried, the function should not swallow the exception but rather rethrow it. That way the Function will not auto-complete the message, and the message will be retried.
Keep in mind that this will cause the message to be retried and eventually, if the exception persists, to be moved to the dead-letter queue.
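A minimal sketch of that pattern (the queue and connection names are assumptions): catch only to log, then rethrow so the runtime abandons the message instead of completing it.

```csharp
// Sketch: let the exception escape so the Functions runtime does not complete the message.
// Queue name and connection setting name are assumed for this example.
[FunctionName("ProcessMessage")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")] string message,
    ILogger log)
{
    try
    {
        // ... process the message ...
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Processing failed; the message will be retried.");
        throw; // rethrow: the message is abandoned and redelivered, not completed
    }
}
```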
As per my understanding, what you are after is this: if there is an error in processing the message, the execution should be retried instead of the error being swallowed. If you are using Azure Functions v2.0, you can define the message handler options in host.json:
"extensions": {
"serviceBus": {
"prefetchCount": 100,
"messageHandlerOptions": {
"autoComplete": false,
"maxConcurrentCalls": 1
}
}
}
prefetchCount - Gets or sets the number of messages that the message receiver can simultaneously request.
autoComplete - Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.
After the message has been retried n times (MaxDeliveryCount, which defaults to 10), it is moved to the DLQ.
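Note that MaxDeliveryCount is a property of the queue entity itself, not a host.json setting. As a sketch using the Azure.Messaging.ServiceBus.Administration client (connection string and queue name are assumptions), it can be set when the queue is created:

```csharp
// Sketch: setting MaxDeliveryCount at queue-creation time
// (Azure.Messaging.ServiceBus.Administration; "connectionString" and "myqueue" are assumed).
var admin = new ServiceBusAdministrationClient(connectionString);
await admin.CreateQueueAsync(new CreateQueueOptions("myqueue")
{
    MaxDeliveryCount = 10 // after 10 failed deliveries the message goes to the DLQ
});
```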
I have a Service Bus brokered message receiver configured, and when processing fails for any reason I call:
message.Abandon();
However, this means the message will be back in the queue/subscription right away.
Can I configure a delay after which the same message becomes available in the queue for processing again?
For example: if there is only one message in the queue and it keeps failing, it is not good to reprocess it every second or every minute. If I could configure something that ensures a failed/abandoned message only reappears after 30 minutes, that would be useful.
Any suggestions?
When you abandon a message, you cannot supply a "cool off" time. The message will be available right away. It won't be dead-lettered until MaxDeliveryCount attempts have been exhausted. Once all those processing attempts have been used up, the message will go to the dead-letter queue.
If you need to postpone message processing, there are several options.
Deferring a message
You could defer a message using BrokeredMessage.DeferAsync(). The message will go back to the queue for future processing, and the SequenceNumber of the message will be returned. The caveat with this approach is the need to hold on to the SequenceNumber in order to retrieve the message later. If you happen to lose the SequenceNumber, it is still possible to browse for deferred messages and retrieve them. More information here.
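A sketch of deferral with the classic BrokeredMessage API (where you store the SequenceNumber, e.g. in a database, is up to you):

```csharp
// Sketch: defer a message and pick it up again later by SequenceNumber
// (classic Microsoft.ServiceBus.Messaging API; queueClient is assumed to exist).
long sequenceNumber = message.SequenceNumber; // persist this somewhere durable
await message.DeferAsync();                   // message is now only retrievable by sequence number

// ... later, when you are ready to retry:
BrokeredMessage deferred = await queueClient.ReceiveAsync(sequenceNumber);
```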
Scheduling a new future message
Another option would be to clone the incoming failing message, schedule it for some time in the future using BrokeredMessage.ScheduledEnqueueTimeUtc, and complete the original message. With this option, I'd recommend sending the new scheduled message using the send-via feature (also known as transactional processing) to leverage an atomic operation: completing the incoming message and sending the outgoing one together. That way the code will not produce a "ghost" message if completion fails. More information here.
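A simplified sketch of the clone-and-reschedule approach, deliberately omitting the send-via transaction (which requires additional MessagingFactory setup); the 30-minute delay is an example value:

```csharp
// Sketch: clone the failing message, schedule the clone 30 minutes out,
// then complete the original (classic BrokeredMessage API; queueClient is assumed).
BrokeredMessage retry = message.Clone();
retry.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(30);

await queueClient.SendAsync(retry);  // NOTE: without send-via, a crash between these two
await message.CompleteAsync();       // calls can produce a duplicate ("ghost") message
```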
Scheduling using client, not message
Another option is to schedule using the client rather than a BrokeredMessage, via client.ScheduleMessageAsync(). It will return a SequenceNumber you need to hold on to, but with this API a message can be cancelled at any point in time without waiting for the scheduled time to arrive or receiving the message. More information here.
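A sketch of that client-side scheduling (the delay is an example value, and queueClient/newMessage are assumed):

```csharp
// Sketch: schedule via the client and keep the returned sequence number so the
// scheduled message can be cancelled before it is enqueued.
long sequenceNumber = await queueClient.ScheduleMessageAsync(
    newMessage, DateTimeOffset.UtcNow.AddMinutes(30));

// If the retry is no longer needed, cancel it at any time before the scheduled time:
await queueClient.CancelScheduledMessageAsync(sequenceNumber);
```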
I have a WebJob getting messages from Service Bus, which is working fine. If the WebJob fails to finish or throws an exception, the message is sent back to the queue, which is fine in itself.
I have set MaxDequeueCount to 10 and it is not a problem if it fails in some cases as it will try to process it again. But the problem is that the message seems to be sent to the bottom of the queue and other messages are handled before we get back to the first failed one.
I would like to handle one message at a time because I might have multiple updates on the same entity coming in a row. If the order changes it would update the entity in wrong order.
Is it possible to send the message back to the front of the queue on error, or to keep working on the same message until we reach the MaxDequeueCount?
Ideally, you should not be relying on message order.
Given your requirement, you could potentially go with the FIFO semantics of Azure Service Bus sessions. When a message is handled within a session and the message is abandoned, it will be handled again rather than going to the end of the queue. You need to keep in mind the following:
Can only process one message at a time
Requires session to be used
If a message is neither completed nor abandoned, it will be left hanging in the queue and will be picked up when a session is restarted.
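As a sketch, enabling sessions on a ServiceBusTrigger looks like the following. The queue must be created with sessions enabled, IsSessionsEnabled requires a sufficiently recent Service Bus extension, and the queue and connection names are assumptions:

```csharp
// Sketch: session-aware trigger; messages within a session are delivered in order,
// one at a time. The queue itself must be created with sessions enabled.
[FunctionName("ProcessInOrder")]
public static void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString",
        IsSessionsEnabled = true)] string message,
    ILogger log)
{
    log.LogInformation("Handling message in session order: {message}", message);
}
```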
In an Azure Web Job you can have functions triggered by an Azure queue in a continuously running job.
When a message has been read from a queue, it is deleted.
But if for some reason my job crashes (think a VM restart), the job that was running will be interrupted and I will lose the information in the message.
Is it possible to configure Azure Web Jobs not to delete messages from queue automatically, and do it manually when my job finishes?
There are two cases:
The function failed because the message was bad and we couldn't bind it - for example, you bind to a Person object but the message body is invalid JSON. In that case, we delete the message from the queue. We are going to have a mechanism for handling poison messages in a future release. (related question)
The function failed because an exception was thrown after the message was bound - for example, your own code threw an exception. Whenever we get a message from a queue (except in case #1), we set a lease of, I think, 10 minutes:
If the function still runs after 10 minutes, we renew the lease.
If the function completes, the message is deleted.
If the function throws for any reason, we leave the message there and do not renew the lease. The message will show up in the queue again after the lease expires (at most 10 minutes).
To answer your question: if the VM restarts, you are in case #2, and the message should show up again after, at most, 10 minutes.