I'm having an issue where I'm triggering an event which is being handled by an Azure Function Service Bus Trigger.
The function can run for anywhere up to an hour, which is fine, but I'm finding that after 5 minutes the message gets re-added to the queue, so it's being handled repeatedly.
I can hack around this by changing MaxDeliveryCount so that this specific topic only delivers the message once, but ideally I'd like the lock to have a longer expiry than the function's maximum run time (1 hour).
According to the Microsoft documentation it should already do this, yet the message still gets re-queued:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed as long as the function is running.
Any ideas?
Azure Service Bus can only lock a message for a maximum of 5 minutes at a time, but it can renew that lock, which technically allows a message to stay locked for as long as needed, provided the renewal requests keep succeeding. On top of that, there's a limit on how long a function can execute: on the Consumption plan, for example, a function won't run longer than the maximum of 10 minutes. For any processing longer than that, look into alternatives, which include but are not limited to the following (a sketch of the lock-renewal side follows the list):
Functions Premium
App Service
Containers (Container Apps Service looks very promising)
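To make the lock-renewal part concrete, here is a minimal sketch using the plain Azure.Messaging.ServiceBus processor rather than the Functions binding; the connection string, queue name, and one-hour renewal window are placeholders, not values from the question. With the Functions trigger itself, the corresponding knob is the auto lock renewal duration in host.json (maxAutoRenewDuration in older Service Bus extension versions, maxAutoLockRenewalDuration in v5).

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class Program
{
    static async Task Main()
    {
        // Connection string and queue name are placeholders.
        await using var client = new ServiceBusClient("<connection-string>");

        var options = new ServiceBusProcessorOptions
        {
            // Keep renewing the PeekLock for up to an hour of processing.
            MaxAutoLockRenewalDuration = TimeSpan.FromHours(1),
            AutoCompleteMessages = false
        };

        ServiceBusProcessor processor = client.CreateProcessor("<queue-name>", options);

        processor.ProcessMessageAsync += async args =>
        {
            await DoLongRunningWorkAsync(args.Message);    // may take up to an hour
            await args.CompleteMessageAsync(args.Message); // complete only on success
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        Console.ReadKey();
        await processor.StopProcessingAsync();
    }

    // Stand-in for the real long-running work.
    static Task DoLongRunningWorkAsync(ServiceBusReceivedMessage message) =>
        Task.Delay(TimeSpan.FromMinutes(30));
}
```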
Related
I have an Azure HttpTrigger function which processes POST requests and scales out under heavy load. The issue is that the caller of the function only waits 3 seconds for an HTTP 200 status code.
But when an Azure Function scales out, it takes 4-6 seconds until the request gets processed. If the caller sends a request during the scale-out, it's possible that the caller cancels the request and my service never gets to process it, which is the worst-case scenario.
Is there a way to prevent that? My ideal scenario would be an immediate HTTP 202 answer to the caller, but I'm afraid that isn't possible during a scale-out.
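To make the "immediate HTTP 202" idea concrete, here is a rough sketch of an HTTP-triggered function that only enqueues the payload and returns 202, leaving the real work to a queue-triggered function; the queue name and connection setting are assumptions, and this pattern by itself does not remove the scale-out delay described in the answer below.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class AcceptAndQueue
{
    [FunctionName("AcceptAndQueue")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        // Hypothetical queue and connection setting names.
        [ServiceBus("work-items", Connection = "ServiceBusConnection")] IAsyncCollector<string> queue)
    {
        string payload = await new StreamReader(req.Body).ReadToEndAsync();

        // Hand the payload off to the queue; a separate queue-triggered
        // function does the actual processing later.
        await queue.AddAsync(payload);

        // 202 Accepted: tell the caller the request was received, not processed.
        return new AcceptedResult();
    }
}
```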
A scale-out requires your app to be loaded onto another instance, so those requests incur some delay because of the time taken to load your app onto the new instance.
As described in the Official Documentation:
The Consumption plan is the true serverless hosting plan since it scales to zero when idle, but some requests might see additional latency at startup.
To get consistently low latency with autoscaling, you should move to the Premium hosting plan, which avoids cold starts by keeping perpetually warm instances.
Currently, I'm in a situation where an event needs to update a certain number of records in the DB. Usually this takes no more than a couple of seconds, but there are scenarios where it can take more than a minute. In that case, the consumer picks the same message up again after 30 seconds and retries it.
I was wondering if I can increase that wait, maybe up to 5 minutes, for those rare scenarios without using JobConsumers.
Are you configuring the LockTimeout on the receive endpoint for 30 seconds? MassTransit defaults to 5 minutes, but if the queue already has a lower configured lock timeout, Azure Service Bus will only wait that time period before redelivering the message. If the queue already exists, you'd need to update the queue properties using the Azure Portal, or delete the queue so that MassTransit recreates it (any messages would be lost).
You can also set MaxDeliveryCount to 1 and Azure will move the message to the dead-letter queue after one attempt. The first approach is better though.
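A minimal sketch of where that lock timeout is configured, assuming the newer MassTransit registration API with the Azure Service Bus transport; the queue name, consumer, and message type are hypothetical stand-ins, and LockDuration is the endpoint property behind the lock timeout the answer refers to. As noted above, if the queue already exists with a 30-second lock, the new value only applies once the queue is updated or recreated.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

public record UpdateRecords;          // hypothetical message contract

public class UpdateRecordsConsumer : IConsumer<UpdateRecords>
{
    public async Task Consume(ConsumeContext<UpdateRecords> context)
    {
        // Stand-in for DB updates that occasionally take more than a minute.
        await Task.Delay(TimeSpan.FromSeconds(90));
    }
}

public static class BusSetup
{
    public static void Configure(IServiceCollection services)
    {
        services.AddMassTransit(x =>
        {
            x.AddConsumer<UpdateRecordsConsumer>();

            x.UsingAzureServiceBus((context, cfg) =>
            {
                cfg.Host("<service-bus-connection-string>");

                cfg.ReceiveEndpoint("update-records", e =>
                {
                    // Ask Azure Service Bus for a 5-minute lock (its maximum)
                    // instead of a 30-second one.
                    e.LockDuration = TimeSpan.FromMinutes(5);
                    e.ConfigureConsumer<UpdateRecordsConsumer>(context);
                });
            });
        });
    }
}
```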
What I want is to place a message on something like a queue that has a trigger (it could be a function app trigger) which fires some specified time after the message is received, not immediately, as the Service Bus queue trigger currently works.
I do not want to implement waits within the function app during processing, since this is expensive.
Problem Description:
I have a system composed of two function apps and one Service Bus queue.
The first function app has an HTTP trigger that receives a transaction request, validates it, and sends the transaction message to the processing queue.
The second function app is a queue listener that receives the message from the processing queue and calls an external API for a resource. I have to keep polling that API to check my transaction's status, which means running a while loop inside the function that sleeps the thread for 5 seconds between checks. I don't terminate the function app, so it can keep making the requests, and this becomes costly since function apps are billed for the time taken during request execution.
The challenge with a Service Bus queue trigger is that it fires the listener function immediately: I want the queue listener to be triggered 5 seconds after the message arrives on the queue, not immediately when it is received, so that it then calls the external API to check the transaction's status (see the sketch further down).
Is there a service / component / Azure product / solution (not necessarily a Service Bus queue) that can help me achieve this?
Any extra advice / information is highly welcome.
Thank you
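One Service Bus capability worth knowing about here is scheduled messages: the sender (or the listener, when it wants to "check again later") can ask the broker to make a message visible only after a delay, so the queue trigger doesn't fire immediately. A minimal sketch with the Azure.Messaging.ServiceBus SDK follows; the connection string, queue name, and 5-second delay are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class DelayedEnqueue
{
    // Sends the transaction message so that it only becomes visible to the
    // queue-triggered listener after the requested delay.
    public static async Task SendWithDelayAsync(string payload)
    {
        await using var client = new ServiceBusClient("<connection-string>");
        ServiceBusSender sender = client.CreateSender("<processing-queue>");

        var message = new ServiceBusMessage(payload);

        // The broker holds the message and enqueues it at the scheduled time,
        // so the listener is not triggered the moment the send happens.
        await sender.ScheduleMessageAsync(message, DateTimeOffset.UtcNow.AddSeconds(5));
    }
}
```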
We have a WebAPI JSON REST service written in C# for .NET 4.0 running in AWS. The service has a /log endpoint which receives logs and forwards them to logstash via TCP for storage.
The /log endpoint uses Task.Factory.StartNew to send the logs to logstash asynchronously and returns StatusCode.OK immediately, because we don't want the client to wait for the log to be sent to logstash.
All exceptions are observed and handled, and we don't care if logs are lost when the service is shut down or recycled from time to time, as they are not critical.
At first the flow of logs was very low, probably 20 or 30 per hour at peak time. However, we have recently started sending much larger volumes through, well over a thousand per hour. So the question is: by using Task.Factory.StartNew, are we creating a large number of threads, i.e. one per request to the /log endpoint, or is this managed somehow by a thread pool?
We use NLog for internal logging, but are wondering if we can pass the logs from the /log endpoint to NLog to take advantage of its async batching features and have it send the logs to logstash? We have a custom target that will send logs to a TCP port.
Thanks in advance.
A Task in .NET does not equal one thread. It's safe to create as many as you need (almost). .NET will manage how many threads are created. .NET will not start more tasks than the hardware can handle.
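A small self-contained sketch (the names and numbers are mine, not from the question) illustrating that point: Task.Factory.StartNew without TaskCreationOptions.LongRunning queues the work to the ThreadPool, which reuses a bounded set of threads instead of creating one thread per request.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Simulate a burst of /log requests, each starting a fire-and-forget task.
        for (int i = 0; i < 1000; i++)
        {
            Task.Factory.StartNew(() => Thread.Sleep(100));
        }

        Thread.Sleep(500);

        // The process does not have 1000 extra threads: the ThreadPool throttles
        // how many tasks run concurrently and reuses its worker threads.
        ThreadPool.GetMaxThreads(out int maxWorkers, out _);
        ThreadPool.GetAvailableThreads(out int availableWorkers, out _);
        Console.WriteLine($"Pool worker threads in use: {maxWorkers - availableWorkers}");
    }
}
```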
I am working on a command processing application which uses an Azure Service Bus queue.
Commands are issued from a website and posted to the queue, and the queue messages are processed by a worker role. Processing involves fetching data from the DB and other sources based on the queue message values and sending it to different topics. The flow is:
Receive the message
Process the message
Mark the message as complete, or abandon it on a processing exception.
The challenge I face here is the processing time. Sometimes it exceeds the maximum message lock period (5 minutes, as configured), so the message is unlocked and reappears for a worker role to pick up (consider multiple instances of the worker role). This causes the same message to be processed again.
What options do I have to handle such a scenario?
I have thought about:
1. Receive the message, add it to a local variable, and mark the message complete. In case of an exception, send the message again to the queue or to a separate queue (say, a failed-message queue). A second queue also means another worker role to process it.
2. In the processing there is a foreach loop that runs, so I thought of using Parallel.ForEach instead (sketched below), but I'm not sure how much time it will save, and I've also read some posts about issues when using Parallel in Azure.
Suggestions and fixes welcome.
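For reference, a rough sketch of what the second idea looks like; WorkItem and HandleItem are hypothetical stand-ins for the actual per-message work, and any speed-up depends on whether the loop body is CPU-bound and on how the DB and topic calls cope with concurrency.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class WorkItem { }

class MessageProcessor
{
    public void ProcessItems(IEnumerable<WorkItem> items)
    {
        // Process the items fetched for a message in parallel instead of sequentially,
        // with a cap on concurrency so downstream systems aren't overwhelmed.
        Parallel.ForEach(
            items,
            new ParallelOptions { MaxDegreeOfParallelism = 4 },
            item => HandleItem(item));
    }

    void HandleItem(WorkItem item)
    {
        // ... fetch related data and send it to the appropriate topic ...
    }
}
```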
Aravind, you can absolutely use a Service Bus queue in this scenario. With the latest SDK you can renew the lock on your message for as long as you are continuing to process it. Details are at: http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.renewlock.aspx
This is similar to the Azure storage queue functionality of updating the visibility timeout: http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.storage.queue.cloudqueue.updatemessage.aspx
You may want to consider using an Azure storage queue: the maximum lease time for an Azure Queue message is 7 days, as opposed to the Azure Service Bus queue lease time of 5 minutes.
This MSDN article describes the differences between the two Azure queue types.
If the standard Azure Queue doesn't contain all the features you need you might consider using both types of Queue.
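A minimal sketch of that alternative with the classic Microsoft.WindowsAzure.Storage SDK the MSDN links above belong to; the queue name, connection string, and 30-minute visibility window are assumptions.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class StorageQueueWorker
{
    public void ProcessOne()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("commands");

        // Hide the message for 30 minutes while it is being processed
        // (storage queues allow a visibility timeout of up to 7 days).
        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(30));
        if (message == null) return;

        // ... long-running processing ...

        // If more time is needed, the visibility timeout can be extended:
        queue.UpdateMessage(message, TimeSpan.FromMinutes(30), MessageUpdateFields.Visibility);

        // Remove the message once processing has succeeded.
        queue.DeleteMessage(message);
    }
}
```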
You can fire off a Task with a heartbeat operation that keeps renewing the lock for you while you're processing it. This is exactly what I do. I described my approach at Creating a Task with a heartbeat
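Not the actual heartbeat code from that post, but a rough sketch of the idea with the classic Microsoft.ServiceBus.Messaging API the earlier MSDN link points at: while the message is being processed, a background task keeps calling RenewLock so the 5-minute PeekLock never expires. The 4-minute renewal interval and the method names around the real work are assumptions.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class Worker
{
    public void ProcessOne(QueueClient client)
    {
        BrokeredMessage message = client.Receive();
        if (message == null) return;

        using (var cts = new CancellationTokenSource())
        {
            // Heartbeat: renew the PeekLock every 4 minutes so the configured
            // 5-minute lock never expires while processing is still running.
            Task heartbeat = Task.Run(async () =>
            {
                while (!cts.IsCancellationRequested)
                {
                    try
                    {
                        await Task.Delay(TimeSpan.FromMinutes(4), cts.Token);
                        message.RenewLock();
                    }
                    catch (OperationCanceledException)
                    {
                        break; // processing finished, stop renewing
                    }
                }
            });

            try
            {
                ProcessMessage(message);   // may take well over 5 minutes
                message.Complete();
            }
            catch (Exception)
            {
                message.Abandon();
            }
            finally
            {
                cts.Cancel();
                try { heartbeat.Wait(); } catch { /* renewal failures no longer matter */ }
            }
        }
    }

    void ProcessMessage(BrokeredMessage message)
    {
        // ... fetch data from the DB and other sources, publish to topics ...
    }
}
```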