What's the difference between Azure Functions and Azure Durable Functions? - C#

Does the durable function stay awake until an activity is invoked?
I'm about to implement a scheduler, and instead of using another library such as Hangfire or Quartz, I want to implement a durable function that will serve as the scheduler.
The missing piece for me is: what happens inside the function? Is the function shut down until the next activity invocation? Is each one counted as a separate execution?
[FunctionName("SchedulerRouter")]
public static async Task<HttpResponseMessage> HttpStart(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
{
    var data = await req.Content.ReadAsAsync<JObject>();
    var instanceId = await starter.StartNewAsync(FunctionsConsts.MAIN_DURABLE_SCHEDULER_NAME, data);
    return starter.CreateCheckStatusResponse(req, instanceId);
}

It looks like you are confusing execution time with maximum inactivity time in Azure Functions:
Durable Functions mainly address the maximum execution time of a single call. For "out of the box" functions, that timeout is capped at 10 minutes on the Consumption plan; for durable functions this limitation is removed. Durable Functions also introduce support for stateful executions, which means subsequent calls to the same orchestration can share local variables and static members. This is an extension of the "out of the box" function patterns and needs some additional boilerplate code to make everything work as expected. More details here: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
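To relate this to your scheduler question: an orchestrator function does not stay resident between steps. Every await on a durable timer or activity checkpoints the orchestration's state to storage and unloads the function; when the timer fires or the activity finishes, the orchestrator is replayed from its history as a new, short execution. A minimal sketch of an "eternal" scheduler orchestration, using the Durable Functions 1.x API to match the starter above (the function name and the RunScheduledJob activity are placeholders), could look like this:

[FunctionName("MainDurableScheduler")]   // must match the name the starter passes to StartNewAsync
public static async Task RunScheduler(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // Durable timer: the orchestrator is checkpointed and unloaded here and
    // consumes no compute while waiting; it is replayed when the timer fires.
    var nextRun = context.CurrentUtcDateTime.AddMinutes(5);
    await context.CreateTimer(nextRun, CancellationToken.None);

    // Hypothetical activity function that performs the scheduled work.
    await context.CallActivityAsync("RunScheduledJob", null);

    // Restart with a clean history so the schedule runs forever ("eternal orchestration").
    context.ContinueAsNew(null);
}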
Durable functions and normal functions share the same billing pattern, so cold starts will happen on durable functions as well, especially when running on a Consumption plan.
Azure Functions running on a Consumption plan will shut down during a period of inactivity and then be reallocated and restarted when a new request arrives; this is called a cold start. You can mitigate it by building a timer-triggered function which wakes your function app every 5 to 10 minutes, but you will still incur cold starts from time to time if your host gets scaled up or down automatically by Azure.
If you want to completely remove the chance of cold starts you will have to move to an App Service plan. As a side note, Function Apps in Azure are stateless by design, and you should implement your logic with this requirement in mind.

Have you looked into timer triggers for Azure Functions? Maybe that is more suitable for your use case: basically a CRON timer trigger that invokes the function according to the CRON setting.
See the portal example for a timer trigger.
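For reference, a minimal timer-triggered function is sketched below; the six-field NCRONTAB expression runs it every five minutes, and the function name and body are placeholders:

[FunctionName("ScheduledJob")]
public static void RunScheduledJob(
    [TimerTrigger("0 */5 * * * *")] TimerInfo timer,   // second minute hour day month day-of-week
    ILogger log)
{
    // Placeholder for the actual scheduled work.
    log.LogInformation($"Timer fired at {DateTime.UtcNow:O}, past due: {timer.IsPastDue}");
}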

Related

"Wake up" time of Azure Function triggered by Service Bus Queue

I have an Azure Function triggered by an Azure Service Bus queue - the Function App is hosted on the Consumption plan.
How long does it take, at most, to wake up an Azure Function when a new message appears in the queue? (Assuming there were no messages during the past 20 minutes.)
Is it specified anywhere? I've found in the Documentation that:
For non-HTTP triggers, new instances will only be allocated at most once every 30 seconds.
Does it mean that I won't be able to process the first message in the queue faster than within 30 seconds?
Will adding a timer-triggered Azure Function to the same Function App (alongside the Service Bus triggered one) help to keep the Azure Function instance up and running?
Another option for handling a "cold start" of the ServiceBusTrigger function is to use the eventing feature of Azure Event Grid, where an event is emitted immediately when a new message arrives in the Service Bus entity and there is no listener on it. See more details here.
Note that the event is emitted immediately when the first message arrives in the Service Bus entity. When messages are already there, the next event is emitted based on the idle time of the listener/receiver. This idle (watchdog) period is 120 seconds from the last time a listener/receiver was used on the Service Bus entity.
This is a push model with no listener on the Service Bus entity, so the AEG subscriber (an EventGridTrigger function) ends up balancing message consumption with the ServiceBusTrigger function (and its listener/receiver).
Using the REST API to receive and delete a message from the Service Bus entity (queue), the subscriber can obtain it very easily and straightforwardly.
So, by using AEG eventing on the topic providers/Microsoft.ServiceBus/namespaces/myNamespace and filtering on eventType = "Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners", the messages can be received side by side with a ServiceBusTrigger function, which works around the Azure Functions "cold start" problem.
Note that only the Azure Service Bus Premium tier is integrated with AEG.
The following code snippet shows an example of the AEG subscriber for Service Bus using an EventGridTrigger function:
#r "Newtonsoft.Json"

using System;
using System.Threading.Tasks;
using System.Text;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static async Task Run(JObject eventGridEvent, ILogger log)
{
    log.LogInformation(eventGridEvent.ToString());

    // The event payload carries a ready-made REST URI for the receive-and-delete call.
    string requestUri = $"{eventGridEvent["data"]?["requestUri"]?.Value<string>()}";
    if (!string.IsNullOrEmpty(requestUri))
    {
        using (var client = new HttpClient())
        {
            // Authorization value (e.g. a SAS token) for the Service Bus REST API, stored in app settings.
            client.DefaultRequestHeaders.Add("Authorization", Environment.GetEnvironmentVariable("AzureServiceBus_token"));
            var response = await client.DeleteAsync(requestUri);

            // status & headers
            log.LogInformation(response.ToString());

            // message body
            log.LogInformation(await response.Content.ReadAsStringAsync());
        }
    }
    await Task.CompletedTask;
}
First of all, when using the Consumption plan and the app has been idle for about 20 minutes, the Function App goes idle. The next call will then experience a cold start, which takes some time while the app wakes up.
For your question:
Does it mean that I won't be able to process the first message in the
queue faster than within 30 seconds?
It depends on the following (link is here):
1. Which language you're using (e.g. a C# function typically starts faster than a Java one); the following is quoted from the link above:
A typical cold start latency spans from 2 to 15 seconds. C# functions usually complete the start within 3 seconds, while JavaScript and Java have longer tails.
2. The number of dependencies in your function:
Adding dependencies and thus increasing the deployed package size will further increase the cold start durations.
will adding Timer Triggered Azure Function to the same Function App
(along with Service Bus triggered) help to keep the Azure Function
instance up and running ?
Yes, adding a timer-triggered Azure Function to the same Function App will keep the others warmed up (up and running).
Azure Functions can run on either a Consumption Plan or a dedicated App Service Plan.
If you run in a dedicated mode, you need to turn on the Always On setting for your Function App to run properly. The Function runtime will go idle after a few minutes of inactivity, so only HTTP triggers will actually "wake up" your functions. This is similar to how WebJobs must have Always On enabled.
For more information, see Always On in the Azure Functions Documentation.
The timeout duration of a function app is defined by the functionTimeout property in the host.json project file (for example, "functionTimeout": "00:10:00"). The default and maximum values in minutes are the same in both runtime versions: on the Consumption plan the default is 5 and the maximum is 10, while on an App Service plan the default is 30 and the timeout can be unbounded.
You can read more about cold starts here: https://azure.microsoft.com/en-in/blog/understanding-serverless-cold-start/
An HTTP trigger will certainly help in your case by warming up your instance, but the ideal way to meet a 24/7 requirement is to use a dedicated App Service plan. Hope it helps.

Fire & Forget calls in Azure Functions

I have a long running task in an Azure function which I want to run in a background thread using Task.Run. I don't care about the result.
public static async Task Run(...)
{
    var taskA = await DoTaskA();
    Task.Run(new Action(MethodB));
    ....
    // return result based on taskA
}
Is this an acceptable pattern in Azure functions? (this is an HTTP trigger function)
I know this could also be done by adding a message to a queue and having another function execute the task, but I want to know the pros and cons of running long-running tasks in a background thread in Azure Functions.
It might be best to have an Azure Function running TaskA and have it post a message to a Service Bus queue, which would trigger another Azure Function running TaskB when something is posted to that queue, since no answer is needed anyway.
Here is the example shown on Microsoft's website:
[FunctionName("FunctionB")]
public static void Run(
    [ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "ServiceBusConnection")]
    string myQueueItem,
    TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    MethodB();
}
In that situation, you do not have to start a new task. Just call MethodB().
That will give you the flexibility to adjust the plan of your Azure Functions (App Service vs Consumption plan) and minimize the overall cost.
Depending on how complex your scenario is, you may want to look into Durable Functions. Durable Functions gives you greater control over a variety of scenarios, including long-running tasks.
No, no and no.
Have your HTTP triggered function return a 202 Accepted, with the results posted to a blob URL later on. The 202 should include a Location header that points to the soon-to-exist blob URL, and maybe a Retry-After header as well if you have a rough idea how long the processing takes.
The long processing task should be a queue-triggered function. Why? Because things don't always go according to plan and you may need to retry processing, so why not have the retries built in?
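A rough sketch of that shape, under assumed names (the work-items queue, the results container and storage URL, and the DoLongRunningWork helper are all placeholders), could look like:

[FunctionName("StartProcessing")]
public static async Task<HttpResponseMessage> StartProcessing(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [Queue("work-items")] IAsyncCollector<WorkItem> workItems,
    ILogger log)
{
    var workItem = new WorkItem
    {
        JobId = Guid.NewGuid().ToString("N"),
        Body = await req.Content.ReadAsStringAsync()
    };

    // Hand the long-running work off to a queue-triggered function.
    await workItems.AddAsync(workItem);

    // 202 Accepted, pointing at the blob where the result will eventually appear.
    var response = req.CreateResponse(HttpStatusCode.Accepted);
    response.Headers.Location = new Uri($"https://mystorage.blob.core.windows.net/results/{workItem.JobId}.json");
    response.Headers.RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(60));
    return response;
}

[FunctionName("ProcessWorkItem")]
public static void ProcessWorkItem(
    [QueueTrigger("work-items")] WorkItem workItem,
    [Blob("results/{JobId}.json")] TextWriter resultBlob,
    ILogger log)
{
    // Runs with the queue's built-in retries; repeated failures land in a poison queue.
    var result = DoLongRunningWork(workItem.Body);   // hypothetical long-running processing
    resultBlob.Write(JsonConvert.SerializeObject(result));
}

public class WorkItem
{
    public string JobId { get; set; }
    public string Body { get; set; }
}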

CancellationTokens in parallel threads

I am posting this partly out of interest in how the Task Parallel Library works and to spread knowledge, and also to investigate whether my "cancellation" updates are the reason for a new issue where the user is suddenly logged out.
The project I am working on have these components:
Web Forms site. A website that acts as a portal for administrating company vehicles. Referred to below as "Web".
WCF web service. A backend service on a separate machine. Referred to below as "Service".
Third-party service. Referred to below as "3rd".
Note: I am using .NET 4.0. Therefore the newer updates to the Task Parallel Library are not available.
The issue I was assigned to fix was that the login function was very slow and CPU intensive. This was later admitted to be a problem in the third-party service, but I tried to optimize the login behavior as well as I could.
The login request and response don't contain particularly much data, but gathering the response data requires several API calls to the third-party service.
1. Pre changes
The Web invokes a WCF method on the Service for gathering "session data".
This method would sometimes take so long that it would timeout (I think the timeout was set to 1 minute).
A pseudo representation of the "GetSessionData" method:
var agreements = getAgreements(request);
foreach (var agreement in agreements)
{
    getAgreementDetails(agreement);
    var customers = getCustomersWithAgreement(agreement);
    foreach (var customer in customers)
    {
        getCustomerInfo(customer);
        getCustomerAddress(customer);
        getCustomerBranches(customer);
    }
}

var person = getPerson(request);
var accounts = getAccount(person.Id);
foreach (var account in accounts)
{
    var accountDetail = getAccountDetail(account.Id);
    foreach (var vehicle in accountDetail.Vehicles)
    {
        getCurrentMilageReport(vehicle.Id);
    }
}

return sessionData;
See gist for code snippet.
This method quickly becomes heavy the more agreements and accounts the user has.
2. Parallel.ForEach
I figured that I could replace the foreach loops with Parallel.ForEach(). This greatly improved the speed of the method for larger users.
See gist for code snippet.
3. Cancel
Another problem we had was that when the web service's server is maxed out on CPU usage, all method calls become much slower and can result in a timeout for the user. A popular response to a timeout is to try again, so the user triggers another login attempt, which is queued due to the high CPU usage, all while the first request has not yet returned.
We discovered that the request is still alive even if the web site times out, so we decided to implement a similar timeout on the Service side.
See gist for code snippet.
The idea is that GetSessionData(..) is invoked with a CancellationToken that triggers cancellation after about the same time as the Web timeout, so that no work is done if no one is there to show or use the results.
I also implemented the cancellation for the method calls to the Third party service.
Is it correct to share the same CancellationToken for all of the loops and service calls? Could there be an issue when all threads are "aborted" by throwing the cancel exception?
See gist for code snippet.
Is it correct to share the same CancellationToken for all of the loops and service calls? Could there be an issue when all threads are "aborted" by throwing the cancel exception?
Yes, it is correct. And yes, there could be an issue with throwing a lot of exceptions at the same time, but only in specific situations and with a huge amount of parallel work.
Several hints:
Use one CancellationTokenSource per complete action, for example per request. Pass the same CancellationToken from this source to every asynchronous method
You can avoid throwing an exception and just return from the method. Later, to check that the work was done and nothing was cancelled, check IsCancellationRequested on the CancellationTokenSource
Check the token for cancellation inside loops on each iteration and just return if cancelled (see the sketch after this list)
Use threads only when there is IO work, for example when you query a database or make requests to other services; don't use them for CPU-bound work
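To illustrate the hints above, here is a minimal .NET 4.0-compatible sketch. The get* helpers are the ones from the question's pseudocode; the SessionData and Request type names and the 55-second timeout are assumptions:

public SessionData GetSessionData(Request request, CancellationToken token)
{
    var sessionData = new SessionData();
    var options = new ParallelOptions { CancellationToken = token };

    try
    {
        var agreements = getAgreements(request);
        Parallel.ForEach(agreements, options, agreement =>
        {
            // Cheap early exit instead of throwing from inside every iteration.
            if (token.IsCancellationRequested) return;

            getAgreementDetails(agreement);
            foreach (var customer in getCustomersWithAgreement(agreement))
            {
                if (token.IsCancellationRequested) return;
                getCustomerInfo(customer);
                getCustomerAddress(customer);
                getCustomerBranches(customer);
            }
        });
    }
    catch (OperationCanceledException)
    {
        // Parallel.ForEach throws once when the token in ParallelOptions is cancelled.
    }

    return token.IsCancellationRequested ? null : sessionData;
}

// Caller (Service side): one CancellationTokenSource per request, cancelled
// slightly before the Web timeout so abandoned requests stop doing work.
// var cts = new CancellationTokenSource();
// new Timer(_ => cts.Cancel(), null, TimeSpan.FromSeconds(55), TimeSpan.FromMilliseconds(-1)); // .NET 4.0 has no CancelAfter
// var data = GetSessionData(request, cts.Token);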
I was tired at the end of the working day and suggested a bad thing. Mainly, you don't need extra threads for IO-bound work, for example while waiting for a response from a database or a third-party service. Use threads only for CPU computations.
Also, I reviewed your code again and found several bottlenecks:
You can call GetAgreementDetail, GetFuelCards, GetServiceLevels and GetCustomers asynchronously; don't wait for each one before starting the next, run all four requests concurrently (see the sketch after this list)
You can call GetAddressByCustomer and GetBranches in parallel as well
I noticed that you use a mutex. I guess it is for protecting agreementDto.Customers and response.Customers on addition. If so, you can reduce the scope of the lock
You can start the work with Vehicles earlier, as you know the UserId at the beginning of the method; do it in parallel too
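As a rough illustration of the first bullet, here is a fragment meant to sit inside the existing per-agreement loop; the exact signatures live in the gist, so the calls are assumed to be synchronous, and token is the same per-request CancellationToken as in the previous sketch. This also works on .NET 4.0:

// Start the four independent third-party calls concurrently instead of sequentially.
var detailTask       = Task.Factory.StartNew(() => GetAgreementDetail(agreement), token);
var fuelCardsTask    = Task.Factory.StartNew(() => GetFuelCards(agreement), token);
var serviceLevelTask = Task.Factory.StartNew(() => GetServiceLevels(agreement), token);
var customersTask    = Task.Factory.StartNew(() => GetCustomers(agreement), token);

// Blocks until all four finish; failures surface as an AggregateException.
Task.WaitAll(detailTask, fuelCardsTask, serviceLevelTask, customersTask);

var customers = customersTask.Result;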

Azure Functions - run long operation in another thread

I am trying to implement file conversion using an Azure Functions solution. The conversion can take a lot of time, so I don't want the calling server to wait for the response.
I wrote a function that returns a response immediately (to indicate that the service is available and the conversion has started) and runs the conversion in a separate thread. A callback URL is used to send the conversion result.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Stream srcBlob, Binder binder, TraceWriter log)
{
    log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");

    // Get request model
    var input = await req.Content.ReadAsAsync<ConvertInputModel>();

    // Run convert in separate thread
    Task.Run(async () =>
    {
        // Read input blob -> convert -> upload output blob
        var convertResult = await ConvertAndUploadFile(input, srcBlob, binder, log);

        // Return result using HttpClient
        SendCallback(convertResult, input.CallbackUrl);
    });

    // Return response immediately
    return req.CreateResponse(HttpStatusCode.OK);
}
The problem is that the new task breaks the bindings: I get an exception while accessing the parameters. So how can I run a long-running operation in a separate thread? Or is such a solution totally wrong?
This pattern is not recommended (or supported) in Azure Functions, particularly when running in the Consumption plan, since the runtime won't be able to accurately manage your function's lifetime and will eventually shut down your service.
One of the recommended (and widely used) patterns here would be to queue up this work to be processed by another function, listening on that queue, and return the response to the client right away.
With this approach, you accomplish essentially the same thing, where the actual processing will be done asynchronously, but in a reliable and efficient way (benefiting from automatic scaling to properly handle increased loads, if needed).
Do keep in mind that, when using the Consumption plan, there's a function timeout (5 minutes by default). If the processing is expected to take longer, you'd need to run your function on a dedicated plan with Always On enabled.
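A rough sketch of that queue-based shape, reusing the question's ConvertInputModel, ConvertAndUploadFile and SendCallback, might look like the following; the conversion-requests queue name, the uploads/{SourceBlobName} blob path and the SourceBlobName property are assumptions:

[FunctionName("StartConversion")]
public static async Task<HttpResponseMessage> StartConversion(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [Queue("conversion-requests")] IAsyncCollector<ConvertInputModel> requests,
    TraceWriter log)
{
    var input = await req.Content.ReadAsAsync<ConvertInputModel>();
    await requests.AddAsync(input);                        // hand the conversion off to a queue
    return req.CreateResponse(HttpStatusCode.Accepted);    // respond immediately
}

[FunctionName("RunConversion")]
public static async Task RunConversion(
    [QueueTrigger("conversion-requests")] ConvertInputModel input,
    [Blob("uploads/{SourceBlobName}", FileAccess.Read)] Stream srcBlob,
    Binder binder,
    TraceWriter log)
{
    // Same logic as before, but now in a function whose lifetime the runtime
    // manages, with automatic retries if the conversion throws.
    var convertResult = await ConvertAndUploadFile(input, srcBlob, binder, log);
    SendCallback(convertResult, input.CallbackUrl);
}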
Your solution of running the background work inside the Azure Function is wrong, as you suspected. You need a second service that is designed to run these long-running tasks. Here is documentation on Microsoft's best practices for doing background jobs on Azure.

Web API 2 - how to properly invoke a long-running method async/in a new thread, and return a response to the client

I am developing a Web API that takes data from a client and saves it for later use. Now I have an external system that needs to know of all events, so I want to set up a notification component in my Web API.
What I do is, after the data is saved, I execute a SendNotification(message) method in my new component. Meanwhile, I don't want my client to wait or even know that we're sending notifications, so I want to return a 201 Created / 200 OK response as fast as possible to my clients.
Yes, this is a fire-and-forget scenario. I want the notification component to handle all exception cases (if the notification fails, the client of the API doesn't really care at all).
I have tried using async/await, but this does not work in the Web API, since when the request thread terminates, the async operation does so as well.
So I took a look at Task.Run().
My controller looks like so:
public IHttpActionResult PostData([FromBody] Data data)
{
    _dataService.saveData(data);

    // This could fail, and the retry strategy takes time.
    Task.Run(() => _notificationHandler.SendNotification(new Message(data)));

    return CreatedAtRoute<object>(...);
}
And the method in my NotificationHandler
public void SendNotification(Message message)
{
    // ..send stuff to a notification server somewhere, synchronously.
}
I am relatively new to the C# world, and I don't know if there is a more elegant (or proper) way of doing this. Are there any pitfalls with using this method?
It really depends how long it takes. Have you looked into the possibility of QueueBackgroundWorkItem, as detailed here? If you want to implement a very fast fire-and-forget, you might also want to consider a queue to pop these messages onto, so you can return from the controller immediately. You'd then have to have something which polls the queue and sends out the notifications, i.e. a scheduled task, Windows service etc. IIRC, if IIS recycles during a task, the process is killed, whereas with QueueBackgroundWorkItem there is a grace period during which ASP.NET will let the work item finish its job.
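A minimal sketch of the QueueBackgroundWorkItem approach inside the controller could look like the following (requires .NET 4.5.2+; the _dataService and _notificationHandler fields and the elided route are the ones from the question):

using System.Web.Hosting;
using System.Web.Http;

public class DataController : ApiController
{
    public IHttpActionResult PostData([FromBody] Data data)
    {
        _dataService.saveData(data);

        // ASP.NET tracks this work item and delays app-domain shutdown for a
        // grace period so the notification gets a chance to finish.
        HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
            _notificationHandler.SendNotification(new Message(data)));

        return CreatedAtRoute<object>(...);
    }
}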
I would take a look at Hangfire. It is fairly easy to set up, it should be able to run within your ASP.NET process, and it is easy to migrate to a standalone process in case your IIS load suddenly increases.
I experimented with Hangfire a while ago, but in standalone mode. It has enough docs and an easy-to-understand API.
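For completeness, enqueueing the notification with Hangfire is a one-liner once Hangfire has been configured with a storage backend at startup (an assumption here); NotificationHandler is the class from the question:

using Hangfire;

// Inside PostData, after saving the data:
BackgroundJob.Enqueue<NotificationHandler>(
    handler => handler.SendNotification(new Message(data)));

Note that Hangfire serializes the job and its arguments to storage, which is why it survives process recycles, but it also means Message needs to be serializable.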
