I have an Azure Function triggered by an Azure Service Bus queue; the Function App is hosted on the Consumption plan.
How long does it take at most to wake up the Azure Function when a new message appears in the queue (assuming there were no messages during the past 20 minutes)?
Is it specified anywhere? I've found in the Documentation that:
For non-HTTP triggers, new instances will only be allocated at most once every 30 seconds.
Does it mean that I won't be able to process the first message in the queue faster than within 30 seconds?
Will adding a timer-triggered Azure Function to the same Function App (along with the Service Bus triggered one) help keep the Azure Function instance up and running?
Another option for handling a "cold start" of the ServiceBusTrigger function is to use the eventing feature of Azure Event Grid, where an event is emitted immediately when a new message arrives in a Service Bus entity that has no listeners. See more details here.
Note that the event is emitted immediately when the first message arrives in the Service Bus entity. If messages are already present, the next event is emitted based on the idle time of the listener/receiver; this idle (watchdog) period is 120 seconds from the last time the listener/receiver was used on the entity.
This is a push model with no listener on the Service Bus entity, so the AEG subscriber (an EventGridTrigger function) receives messages side by side with the ServiceBusTrigger function (which has its own listener/receiver).
Using the REST API to delete/receive a message from the Service Bus entity (queue), the subscriber can obtain it in a very easy and straightforward way.
So, using the AEG eventing on the topic providers/Microsoft.ServiceBus/namespaces/myNamespace and filtering on eventType = "Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners", the messages can be received side by side with a ServiceBusTrigger function, which works around the "cold start" problem of the Azure Function.
Note that only the Azure Service Bus Premium tier is integrated with AEG.
The following code snippet shows an example of the AEG Subscriber for Service Bus using the EventGridTrigger function:
#r "Newtonsoft.Json"
using System;
using System.Threading.Tasks;
using System.Text;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
public static async Task Run(JObject eventGridEvent, ILogger log)
{
log.LogInformation(eventGridEvent.ToString());
string requestUri = $"{eventGridEvent["data"]?["requestUri"]?.Value<string>()}";
if(!string.IsNullOrEmpty(requestUri))
{
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("Authorization", Environment.GetEnvironmentVariable("AzureServiceBus_token"));
var response = await client.DeleteAsync(requestUri);
// status & headers
log.LogInformation(response.ToString());
// message body
log.LogInformation(await response.Content.ReadAsStringAsync());
}
}
await Task.CompletedTask;
}
First of all, on the Consumption plan, after about 20 minutes of inactivity the Function App goes idle. The next time you use the function, you will experience a cold start, which takes some time to wake the app up.
For your question:
Does it mean that I won't be able to process the first message in the
queue faster than within 30 seconds?
It depends on the following (link is here):
1. Which language you're using (e.g. a function written in C# starts faster than one in Java); the following is taken from the link above:
A typical cold start latency spans from 2 to 15 seconds. C# functions usually complete the start within 3 seconds, while JavaScript and Java have longer tails.
2. The number of dependencies in your function:
Adding dependencies and thus increasing the deployed package size will further increase the cold start durations.
will adding Timer Triggered Azure Function to the same Function App
(along with Service Bus triggered) help to keep the Azure Function
instance up and running ?
Yes, adding a timer-triggered Azure Function to the same Function App would keep the other functions warm (up and running). A minimal sketch is shown below.
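The following is only a rough sketch, not anything from the question: the function name, CRON schedule, and namespaces are assumptions, and any timer-triggered function in the app would have the same keep-warm effect.

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class KeepWarm
{
    // Fires every five minutes; the invocation alone keeps the Function App
    // instance loaded, so the Service Bus triggered function in the same app
    // avoids most cold starts.
    [FunctionName("KeepWarm")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Keep-warm ping at {DateTime.UtcNow:O}");
    }
}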
Azure Functions can run on either a Consumption Plan or a dedicated App Service Plan.
If you run in a dedicated mode, you need to turn on the Always On setting for your Function App to run properly. The Function runtime will go idle after a few minutes of inactivity, so only HTTP triggers will actually "wake up" your functions. This is similar to how WebJobs must have Always On enabled.
For more information, see Always On in the Azure Functions Documentation.
The timeout duration of a Function App is defined by the functionTimeout property in the host.json project file; the default and maximum values (in minutes) differ between the two plans and between the runtime versions.
You can read more about cold start here: https://azure.microsoft.com/en-in/blog/understanding-serverless-cold-start/
An HTTP trigger will certainly help in your case by warming up your instance, but the ideal way to meet a 24/7 type of requirement is to use a dedicated App Service plan. Hope it helps.
I have an Azure Function deployed that listens for Service Bus messages.
Whilst I was testing, I realised that the same Azure Function instance picked up the Service Bus messages (I spotted this from data written to a file), which I was not expecting.
What I thought would happen is that a new instance of the Azure Function would launch for each new message added to the Service Bus. Am I wrong in this understanding? Or can this be achieved, and do I need to set some type of flag so that once my function dequeues a message it stops listening, and any future Service Bus entries get picked up by a new Azure Function instance?
Thanks
EDIT:
This is the function declaration:
[FunctionName("ServiceBusShopifyTrigger")]
public async Task ServiceBusTrigger([ServiceBusTrigger("%serviceBusShopifyExtractQueue%", Connection = "serviceBusExtractConnectionString")] string msg, int deliveryCount, DateTime enqueuedTimeUtc, string messageId)
{
I have placed the following at the start of the function:
string instanceid = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
_logger.LogInformation($"****** Orch Shopify - Azure Function Instance ID: {instanceid}");
As I understand it, this should be unique for each Azure instance that spins up. However, I am seeing the same value each time the function executes.
I also have an HTTP trigger function, and again the instance ID is the same.
Looking at the monitoring, I am seeing the following for the HTTP trigger (note the operation ID on closely triggered functions).
These 2 instances were triggered from 2 separate HTTP calls.
For the Service Bus monitoring, I am seeing different operation IDs for closely triggered functions.
Something that I have just noticed is the following:
The calls at 13:26 share the same WEBSITE_INSTANCE_ID.
The call at 13:40 shares the same WEBSITE_INSTANCE_ID as the calls at 13:26.
The call at 15:14 has a different WEBSITE_INSTANCE_ID.
Does the durable function stay awake until an activity is invoked?
I'm about to implement a scheduler, and instead of using another library such as Hangfire or Quartz, I want to implement a durable function that will serve as the scheduler.
My missing piece is: what happens in the function? Does the function get shut down until the next activity invocation? Is each one counted as an execution?
[FunctionName("SchedulerRouter")]
public static async Task<HttpResponseMessage> HttpStart(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
[OrchestrationClient]DurableOrchestrationClient starter, ILogger log)
{
var data = await req.Content.ReadAsAsync<JObject>();
var instanceId = await starter.StartNewAsync(FunctionsConsts.MAIN_DURABLE_SCHEDULER_NAME, data);
return starter.CreateCheckStatusResponse(req, instanceId);
}
Looks like you are confusing execution time with the max inactivity time in Azure Functions.
Durable Functions are just related to the maximum execution time of a single call. For "out of the box" functions, that timeout is 10 min; for durable functions this limitation gets removed. They also introduce support for stateful executions, which means subsequent calls to the same function can share local variables and static members. This is an extension of the "out of the box" function patterns which needs some additional boilerplate code to make everything work as expected. More details here: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
Durable functions and normal functions share the same billing pattern, so cold starts will happen on durable functions as well especially when running in a consumption plan.
Azure Functions running on a Consumption plan will shut down during a period of inactivity and are then reallocated and restarted when a new request arrives; this is called a cold start. You can mitigate this by building a timer trigger function which wakes your function every 5 to 10 minutes, but you will still incur cold starts from time to time if your host gets scaled up or down automatically by Azure.
If you want to completely remove the chance of cold starts you will have to move to an App Service plan. As a side note, Function Apps in Azure are stateless by design, and you should implement your logic with this requirement in mind. A rough sketch of a durable scheduler orchestration is shown below.
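This is only an illustrative sketch of how a durable "eternal orchestration" could act as a scheduler; the function names, the 5-minute interval, and the activity are assumptions, not anything specified in the question. While the orchestrator waits on the durable timer it is unloaded, and it is replayed from its checkpointed history when the timer fires.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SchedulerOrchestration
{
    [FunctionName("SchedulerOrchestrator")]
    public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Sleep until the next tick; no compute is used while the durable timer is pending.
        var nextTick = context.CurrentUtcDateTime.AddMinutes(5);
        await context.CreateTimer(nextTick, CancellationToken.None);

        // Do the scheduled work in an activity function.
        await context.CallActivityAsync("RunScheduledJob", null);

        // Restart the orchestration with a fresh history ("eternal orchestration").
        context.ContinueAsNew(null);
    }

    [FunctionName("RunScheduledJob")]
    public static void RunScheduledJob([ActivityTrigger] object input, ILogger log)
    {
        log.LogInformation("Scheduled job executed.");
    }
}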
Did you look into timer triggers for Azure Functions? Maybe that is more suitable for your use case. Basically, a CRON timer trigger invokes the function according to the CRON setting.
The portal has an example for the timer trigger; a minimal sketch is shown below.
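The names and schedule below are only illustrative, not taken from the portal or the question; the six-field CRON expression here runs the job at the top of every hour.

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScheduledJob
{
    // "0 0 * * * *" = second 0, minute 0 of every hour, every day.
    [FunctionName("ScheduledJob")]
    public static void Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Scheduled work goes here.");
    }
}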
I have a Service Bus queue (SBQ) which gets a lot of message payloads.
I have a Service Bus trigger (SBT) with access rights (Manage) which continuously polls messages from the SBQ.
The problem I am facing is:
My SBT (16 instances at once) picks messages (16 messages individually) at a time and creates a request to another server (say S1) for each.
If the SBT continuously creates 500-600 requests, then the server S1 stops responding.
What I am expecting:
I could throttle/restrict how many messages are picked from the SBQ at once, so that I indirectly restrict how many requests are sent.
Please share your thoughts on what design I should follow. I couldn't google the exact solution.
Restrict the maximum concurrent calls of the Service Bus trigger.
In host.json, add configuration to throttle concurrency (the 16 messages at once you have seen is the default). Take a v2 function as an example:
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                "maxConcurrentCalls": 8
            }
        }
    }
}
Restrict the Function host instance count. When the host scales out, each instance has one Service Bus trigger which reads multiple messages concurrently, as set above.
If the trigger runs on a dedicated App Service plan, scale the instance count in to some small value. For functions on the Consumption plan, add the app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT with a reasonable value (<= 5). Of course, we can set the count to 1 in order to control the behavior strictly.
If we have control over how the messages are sent, schedule the incoming messages to help decrease the request rate.
Use static clients to reuse connections with the server S1, as in the sketch below.
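A minimal sketch of the static-client idea (the queue name, connection setting name, and S1 URL are placeholders, not values from the question):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ForwardToS1
{
    // A single static HttpClient is shared by all invocations on this instance,
    // so connections to S1 are reused instead of being opened per message.
    private static readonly HttpClient httpClient = new HttpClient();

    [FunctionName("ForwardToS1")]
    public static async Task Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        var response = await httpClient.PostAsync("https://s1.example.com/api/ingest",
            new StringContent(message));
        log.LogInformation($"S1 responded with {(int)response.StatusCode}");
    }
}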
I have an Azure Function triggered by queue messages. This function makes a request to a third-party API. Unfortunately this API has a limit of 10 transactions per second, but I might have more than 10 messages per second in the Service Bus queue. How can I limit the number of calls of the Azure Function to satisfy the third-party API limitations?
Unfortunately there is no built-in option for this.
The only reliable way to limit concurrent executions would be to run on a fixed App Service Plan (not Consumption Plan) with just 1 instance running all the time. You will have to pay for this instance.
Then set the option in host.json file:
"serviceBus": {
// The maximum number of concurrent calls to the callback the message
// pump should initiate. The default is 16.
"maxConcurrentCalls": 10
}
Finally, make sure each function execution takes at least a second (or some other minimum duration, and adjust the concurrent calls accordingly), as in the sketch below.
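A rough sketch of that padding, assuming the third-party call happens inside the function (the queue name, connection setting, and API URL are placeholders): with maxConcurrentCalls set to 10, stretching each execution to at least one second keeps the overall rate at roughly 10 calls per second.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class RateLimitedCall
{
    private static readonly HttpClient client = new HttpClient();

    [FunctionName("RateLimitedCall")]
    public static async Task Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message)
    {
        var started = DateTime.UtcNow;

        // Hypothetical third-party call; replace with the real API request.
        await client.PostAsync("https://api.example.com/endpoint", new StringContent(message));

        // Pad the execution so it never finishes in under one second.
        var elapsed = DateTime.UtcNow - started;
        if (elapsed < TimeSpan.FromSeconds(1))
        {
            await Task.Delay(TimeSpan.FromSeconds(1) - elapsed);
        }
    }
}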
As #SeanFeldman suggested, see some other ideas in this answer. It's about Storage Queues, but applies to Service Bus too.
You can try writing some custom logic, i.e. implement your own in-memory queue in the Azure Function to queue up requests and limit the calls to the third-party API. Until the call to the third-party API succeeds, you don't need to acknowledge the messages in the Service Bus queue; this way reliability is also maintained. A rough sketch is shown below.
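A very rough sketch of that idea, swapping the literal in-memory queue for a static SemaphoreSlim gate that limits in-flight calls (all names and the URL are illustrative); throwing on a failed call means the message is not completed, so Service Bus redelivers it later:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ThrottledForwarder
{
    // At most 10 calls to the third-party API may be in flight on this instance.
    private static readonly SemaphoreSlim gate = new SemaphoreSlim(10);
    private static readonly HttpClient client = new HttpClient();

    [FunctionName("ThrottledForwarder")]
    public static async Task Run(
        [ServiceBusTrigger("work-items", Connection = "ServiceBusConnection")] string message)
    {
        await gate.WaitAsync();
        try
        {
            var response = await client.PostAsync("https://api.example.com/v1/items",
                new StringContent(message));
            // Throwing here abandons the message, so it is only completed once the call succeeds.
            response.EnsureSuccessStatusCode();
        }
        finally
        {
            gate.Release();
        }
    }
}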
The best way to maintain the integrity of the system is to throttle the consumption of the Service Bus messages. You can control how your QueueClient processes messages; see: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues#4-receive-messages-from-the-queue
Check out MaxConcurrentCalls:
static void RegisterOnMessageHandlerAndReceiveMessages()
{
    // Configure the message handler options in terms of exception handling, number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(), set to 1 for simplicity.
        // Set it according to how many messages the application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the messages after returning from user callback.
        // False below indicates the complete operation is handled by the user callback as in ProcessMessagesAsync().
        AutoComplete = false
    };

    // Register the function that processes messages.
    queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}
Do you want to get rid of the N-10 messages you receive in a one-second interval, or do you want to treat every message with respect to the API throttling limit? For the latter, you can add the messages processed by your function to another queue, from which another function (timer trigger) reads a batch of 10 messages every second, as sketched below.
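A sketch of that draining function, assuming the Microsoft.Azure.ServiceBus SDK used elsewhere in this thread (the queue name and connection setting are placeholders): a timer fires every second and pulls at most 10 messages from the secondary queue, so the downstream API never sees more than roughly 10 requests per second.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DrainThrottledQueue
{
    private static readonly MessageReceiver receiver = new MessageReceiver(
        Environment.GetEnvironmentVariable("ServiceBusConnection"), "throttled-queue");

    // Six-field CRON "* * * * * *" fires once per second.
    [FunctionName("DrainThrottledQueue")]
    public static async Task Run([TimerTrigger("* * * * * *")] TimerInfo timer, ILogger log)
    {
        // Receive at most 10 messages per tick; fewer (or none) may be returned.
        var messages = await receiver.ReceiveAsync(10, TimeSpan.FromMilliseconds(500));
        if (messages == null) return;

        foreach (var message in messages)
        {
            // Call the rate-limited API here, then settle the message.
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
    }
}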
I'm working with Azure Service Bus Queues in a request/response pattern using two queues and in general it is working well. I'm using pretty simple code from some good examples I've found. My queues are between web and worker roles, using MVC4, Visual Studio 2012 and .NET 4.5.
During some stress testing, I end up overloading my system and some responses are not delivered before the client gives up (which I will fix, not the point of this question).
When this happens, I end up with many messages left in my response queue, all well beyond their ExpiresAtUtc time. My message TimeToLive is set for 5 minutes.
When I look at the properties for a message still in the queue, it is clearly set to expire in the past, with a TimeToLive of 5 minutes.
I create the queues if they don't exist with the following code:
namespaceManager.CreateQueue(
    new QueueDescription( RequestQueueName )
    {
        RequiresSession = true,
        DefaultMessageTimeToLive = TimeSpan.FromMinutes( 5 ) // messages expire if not handled within 5 minutes
    } );
What would cause a message to remain in a queue long after it is set to expire?
As I understand it, there is no background process cleaning these up; only the act of moving the queue cursor forward with a call to Receive causes the server to skip past and dispose of messages which are expired, and actually return the first message that is not expired (or none if all are expired).
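A small sketch to observe this, assuming the same WindowsAzure.ServiceBus SDK as the question (the connection string setting and response queue name are placeholders): Peek walks the queue without removing anything, so it still shows messages whose ExpiresAtUtc is in the past, whereas a Receive (or session accept, for session-enabled queues) advances the cursor and discards the expired messages it moves past.

using System;
using Microsoft.ServiceBus.Messaging;

class PeekExpiredMessages
{
    static void Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("ServiceBusConnectionString");
        var client = QueueClient.CreateFromConnectionString(connectionString, "response-queue");

        // Peek does not move the receive cursor, so expired messages are still visible.
        BrokeredMessage peeked;
        while ((peeked = client.Peek()) != null)
        {
            var expired = peeked.ExpiresAtUtc < DateTime.UtcNow;
            Console.WriteLine("MessageId={0} ExpiresAtUtc={1:O} expired={2}",
                peeked.MessageId, peeked.ExpiresAtUtc, expired);
        }
        // A Receive (or AcceptMessageSession for session queues) is what actually
        // skips and removes the expired messages as the cursor advances.
    }
}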