Let's say I have an orchestrator function that chains activities like this:
[FunctionName("E1")] // default timeout of 5 minutes
public static async Task<List<string>> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var outputs = new List<string>();
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Tokyo"));   // takes 5 minutes to complete
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Seattle")); // takes 5 minutes to complete
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello_DirectInput", "London")); // takes 5 minutes to complete
    // should return ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
    return outputs;
}
Now we have three functions.
Let's say each one needs 5 minutes to complete (on the default Azure Consumption plan). People say each function has its own timeout, so we should need a total of around 15 minutes (5+5+5) to complete them all. However, the top-level function E1 only has a timeout of 5 minutes. Will it time out before completing because the total of all sub-functions exceeds its 5-minute limit?
And if the E1 orchestrator does time out, do the activities (sub-functions) stop as well?
The beauty of Durable Functions is that the orchestrator is only active while it is actually orchestrating. When it reaches await context.CallActivityAsync it starts E1_SayHello but does not block waiting for its completion. Instead, the orchestrator unloads and resumes once E1_SayHello has completed.
What you are doing is called the function chaining pattern, and the behavior I described above is documented there like this:
Each time the code calls await, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding await call.
So no, the durable function won't be active the whole 15 minutes.
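For context, each activity such as E1_SayHello runs as its own function execution with its own timeout. A minimal sketch of what such an activity could look like (modeled on the standard chaining sample; the class name is illustrative):
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class E1_Activities
{
    // Each activity call is its own function execution, so the 5-minute
    // Consumption-plan timeout applies to each call separately, not to the
    // orchestration as a whole.
    [FunctionName("E1_SayHello")]
    public static string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return $"Hello {name}!";
    }
}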
Related
If an Azure Function executes for, say, 10 minutes, but has a period of 5 minutes, does the trigger still fire at the 5-minute mark?
The easiest way to answer this question is to test it yourself.
I created a small function that did nothing more than wait for 90 seconds when executed.
The timer is set to run every minute, so the test is to see whether the function still executes every minute on the minute, or whether each run is delayed by the extra 30 seconds.
The logs show that the answer to your question is ... NO
You can see that it has queued another execution, though, because as soon as it finishes, it starts again.
You'll also see that the start time of each invocation is delayed by the additional 30 seconds the function takes to run.
This is the function ...
using System;

public static async Task Run(TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    await Task.Delay(90000);
}
(Monitor log and invocation list screenshots omitted.)
One workaround I tried:
In host.json, I set the function timeout explicitly to test locally:
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        }
    },
    "functionTimeout": "00:05:00"
}
In the Azure Function, the timer trigger is scheduled to run every 10 minutes:
public void Run([TimerTrigger("0 */10 * * * *")] TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"C# Timer trigger function1 executed at: {DateTime.Now}");
}
Even when running in Azure (Consumption plan), no timeout occurred and the timer still fires every 10 minutes:
As mentioned in the MS docs and an Azure blog article, the timeout applies to each individual function execution: your business logic should complete before the function timeout occurs, and on every schedule the timer trigger simply runs the same logic again.
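To illustrate that last point, here is a minimal sketch (the function name and the ProcessNextBatch helper are placeholders, not from the original post) of a timer trigger that does a bounded amount of work per schedule and leaves the rest for the next run:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BatchWorker
{
    [FunctionName("BatchWorker")]
    public static void Run([TimerTrigger("0 */10 * * * *")] TimerInfo myTimer, ILogger log)
    {
        // Keep each invocation well inside functionTimeout; whatever is left
        // over is simply picked up by the next scheduled run.
        int processed = ProcessNextBatch(maxItems: 100);
        log.LogInformation($"Processed {processed} items at {DateTime.Now}");
    }

    // Placeholder for the real business logic.
    private static int ProcessNextBatch(int maxItems) => 0;
}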
Why the "runtimeStatus" is set to "Completed" only after 52 seconds not 30 as I set in context.CreateTimer() function when checking it with statusQueryGetUri http request?
The documentation that I used
My Code
[FunctionName("H")]
public static async Task<HttpResponseMessage> Start([HttpTrigger(AuthorizationLevel.Anonymous, "get","post",Route = "route/{route}")] HttpRequestMessage req, [DurableClient] IDurableOrchestrationClient client, string route)
{
string id = await client.StartNewAsync("Or1");
return client.CreateCheckStatusResponse(req, id);
}
[FunctionName("Or1")]
public static async Task<string> Or1([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger logger)
{
using (CancellationTokenSource cts = new CancellationTokenSource())
{
DateTime endTime = context.CurrentUtcDateTime.AddSeconds(30);
logger.LogInformation($"*********time now {context.CurrentUtcDateTime}");
logger.LogInformation($"*********end Time {endTime}");
await context.CreateTimer(endTime, cts.Token);
logger.LogInformation($"*********end Time finish {context.CurrentUtcDateTime}");
return "timer finished";
}
}
[FunctionName("Activity1")]
public static async Task A1([ActivityTrigger] IDurableActivityContext context)
{
//Do something
}
The Log
Functions:
H: [GET,POST] http://localhost:7071/api/route/{route}
Activity1: activityTrigger
Or1: orchestrationTrigger
For detailed output, run func with --verbose flag.
[2021-01-13T16:17:06.841Z] Host lock lease acquired by instance ID '000000000000000000000000EB8F9C93'.
[2021-01-13T16:17:24.767Z] Executing 'H' (Reason='This function was programmatically called via the host APIs.', Id=0aeee0e1-6148-4c21-9aa9-d17a43bce8d1)
[2021-01-13T16:17:24.925Z] Executed 'H' (Succeeded, Id=0aeee0e1-6148-4c21-9aa9-d17a43bce8d1, Duration=164ms)
[2021-01-13T16:17:24.995Z] Executing 'Or1' (Reason='(null)', Id=6aa97b04-d526-41b1-9532-afb21c088b18)
[2021-01-13T16:17:25.006Z] *********time now 1/13/2021 4:17:24 PM
[2021-01-13T16:17:25.007Z] *********endTime 1/13/2021 4:17:54 PM
[2021-01-13T16:17:25.017Z] Executed 'Or1' (Succeeded, Id=6aa97b04-d526-41b1-9532-afb21c088b18, Duration=23ms)
[2021-01-13T16:18:16.476Z] Executing 'Or1' (Reason='(null)', Id=9749d719-5789-419a-908f-6523cf497cca)
[2021-01-13T16:18:16.477Z] *********time now 1/13/2021 4:17:24 PM
[2021-01-13T16:18:16.478Z] *********endTime 1/13/2021 4:17:54 PM
[2021-01-13T16:18:16.481Z] *********endTime finish 1/13/2021 4:18:16 PM
[2021-01-13T16:18:16.485Z] Executed 'Or1' (Succeeded, Id=9749d719-5789-419a-908f-6523cf497cca, Duration=9ms)
The Azure Durable Functions orchestrator works on queue polling, which is implemented as a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs. When a message is found, the runtime immediately checks for another message; when no message is found, it waits for a period of time before trying again. After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to 30 seconds.
If you look at your logs, you can see that the orchestrator started the timer at 16:17:24, and when it fired at 16:17:54 a message was added to the storage queue. As mentioned above, because of queue polling it seems that the message was only picked up at 16:18:16 to resume the orchestration.
I believe that if you trigger the durable function multiple times, you will notice that the total time to finish the orchestration differs from instance to instance.
You can read about Azure Functions orchestration queue polling here.
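If that polling delay matters for your scenario, the maximum polling interval can be tuned in host.json. A sketch, assuming Durable Functions 2.x with the default Azure Storage provider:
{
    "version": "2.0",
    "extensions": {
        "durableTask": {
            "storageProvider": {
                "maxQueuePollingInterval": "00:00:05"
            }
        }
    }
}
A lower interval reduces the resume latency at the cost of more storage transactions.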
You can also check the history table to understand when a message was queued and when it was picked up; read about it here.
To see how queuing works, you can stop the function as soon as the timer is triggered. Below is the output from the queue in my local storage emulator, which shows that a message is queued when the timer fires.
When the orchestrator function resumes, it polls that message from the queue and picks it up to continue processing.
Note: in my local environment I ran your code a couple of times and noticed that all instances finished in ~30 seconds.
I have an AWS Lambda function written in C# which:
is triggered by a message on an SQS queue
makes 2 (slow/long duration) HTTP REST calls to external (non-AWS) services
sends a message to an SQS Queue
I have configured the Lambda Basic Settings Timeout to 2 minutes.
However, if the 2 HTTP REST calls take more than 30 seconds, the Lambda times out:
Here is the relevant code; you can see the matching log statements in the code and in the logs:
static void get1()
{
    using var client = new HttpClient();
    Console.WriteLine("Before get1");
    var task = Task.Run(() => client.GetAsync("http://slowwly.robertomurray.co.uk/delay/35000/url/http://www.google.co.uk"));
    Console.WriteLine("get1 initiated, about to wait");
    task.Wait();
    Console.WriteLine("get1 wait complete");
    var result = task.Result;
    Console.WriteLine("After get1, result: " + result.StatusCode);
}
This service, http://slowwly.robertomurray.co.uk/delay/35000/url/http://www.google.co.uk, just delays for 35000 milliseconds and then returns a response from http://www.google.co.uk.
If the HTTP REST calls take less than 30 seconds, the Lambda completes and writes a message to the output SQS queue. In this example, I changed the delay/sleep durations to 5 seconds instead of 35 seconds, so the total execution time was less than 30 seconds:
In case the issue was somehow related to the usage of C# GetAsync / task.Wait(), I just tested and found the same timeout behaviour if I instead call:
static void sleepSome(int durationInSeconds)
{
    Console.WriteLine("About to sleep for " + durationInSeconds + " seconds");
    Thread.Sleep(durationInSeconds * 1000);
    Console.WriteLine("Sleep over");
}
Which gives me log output of:
I am wondering if I should use an AWS SDK API from within my Lambda to log the configured timeout to the console, just to prove that the timeout I have configured is active/valid/heeded.
The full end-to-end orchestration, in case it is relevant, is:
Postman Test client ->
AWS API GW ->
AWS Lambda1 ->
AWS SQS ->
AWS Lambda2 ->
REST API Calls
AWS SQS
AWS Lambda2 is the one that is timing out prematurely, as shown in the logs.
I only seem to have a single version:
And a single alias:
While Lambda itself has a 2 minute timeout, the timeout you see occurring might actually be due to the AWS API Gateway limit of 30 seconds. https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
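To confirm which timeout is actually in play, you could log the remaining execution time from the Lambda context inside the handler, so no extra AWS SDK call is needed. A minimal sketch, assuming the usual Amazon.Lambda.Core / Amazon.Lambda.SQSEvents handler shape (your real handler will differ):
using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

public class Function
{
    public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
    {
        // RemainingTime reflects the configured timeout minus elapsed time,
        // so logging it shows which limit the function is running under.
        Console.WriteLine($"Remaining time at start: {context.RemainingTime}");
        await Task.Delay(TimeSpan.FromSeconds(35)); // stand-in for the slow REST calls
        Console.WriteLine($"Remaining time after the slow call: {context.RemainingTime}");
    }
}
If the 2-minute Lambda timeout is in effect, the first line will report roughly two minutes remaining.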
I have an Azure orchestration where the orchestration client, which triggers the orchestration, threw a timeout exception.
The orchestration client function only does two things: it starts two orchestrations, awaiting each one as most example code suggests.
await orchestrationClient.StartNewAsync("TableOrchestrator", updates);
await orchestrationClient.StartNewAsync("ClientOrchestrator", clientExport);
However, as I understand it, the orchestration client is not a special function like the orchestrator functions, so it can only run for a maximum of 10 minutes.
Obviously there is a high chance that the combined run time of my two orchestrations exceeds 10 minutes in total.
Questions:
Is the orchestration client's state saved like that of the actual orchestrator functions?
Do I need to await the orchestrations if they do not depend on previous orchestration results?
Update: I made a complete example of what my code does; the runtimes are shown below.
It seems that starting an orchestration is awaited if there is code written after it, but not if the orchestration is the last statement!
Updated Questions:
Will any code after calling StartNewAsync() make the function wait until the orchestration really finishes? Or will, e.g., log statements not trigger this behaviour?
Is it recommended practice that StartNewAsync() should only be called after all other code has executed?
public static class testOrchestration
{
    [FunctionName("Start")]
    public static async Task Start(
        [TimerTrigger("0 */30 * * * *", RunOnStartup = true, UseMonitor = false)] TimerInfo myStartTimer,
        [OrchestrationClient] DurableOrchestrationClient orchestrationClient,
        ILogger log)
    {
        var startTime = DateTime.Now;

        log.LogInformation(new EventId(0, "Startup"), "Starting Orchestror 1 ***");
        await orchestrationClient.StartNewAsync("Orchestrator", "ONE");
        log.LogInformation($"Elapsed time, await ONE: {DateTime.Now - startTime}");

        await Task.Delay(5000);
        log.LogInformation($"Elapsed time, await Delay: {DateTime.Now - startTime}");

        log.LogInformation(new EventId(0, "Startup"), "Starting Orchestror 2 ***");
        await orchestrationClient.StartNewAsync("Orchestrator", "TWO");
        log.LogInformation($"Elapsed time, await TWO: {DateTime.Now - startTime}");
    }

    [FunctionName("Orchestrator")]
    public static async Task<string> TestOrchestrator([OrchestrationTrigger] DurableOrchestrationContextBase context, ILogger log)
    {
        var input = context.GetInput<string>();
        log.LogInformation($"Running {input}");
        await Task.Delay(5000);
        return $"Done {input}";
    }
}
Running this gives me the following output:
Starting Orchestror 1 ***
Elapsed time, await ONE: 00:00:08.5445755
Running ONE
Elapsed time, await Delay: 00:00:13.5541264
Starting Orchestror 2 ***
Elapsed time, await TWO: 00:00:13.6211995
Running TWO
StartNewAsync() just schedules the orchestrators to be started (immediately). Awaiting those calls does not mean that your client function will actually wait for the orchestrators to run, or even to start and finish their work.
The StartNewAsync (.NET) or startNew (JavaScript) method on the orchestration client binding starts a new instance. Internally, this method enqueues a message into the control queue, which then triggers the start of a function with the specified name that uses the orchestration trigger binding. This async operation completes when the orchestration process is successfully scheduled.
Source
This async operation completes when the orchestration process is successfully scheduled.
So yes: you should await those calls (this can also be done in parallel, as Miguel suggested), but it will not take longer than a few milliseconds.
If they don't depend on each other, you can run them in parallel using:
var t1 = orchestrationClient.StartNewAsync("TableOrchestrator", updates);
var t2 = orchestrationClient.StartNewAsync("ClientOrchestrator", clientExport);
await Task.WhenAll(t1, t2);
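If the client really does need to wait for an orchestration to finish (StartNewAsync alone never does that), it can poll the instance status, keeping in mind that the client function itself is still subject to its own timeout. A sketch continuing the snippet above, using the same DurableOrchestrationClient:
// Start one orchestration and poll its status until it is no longer running.
string instanceId = await orchestrationClient.StartNewAsync("TableOrchestrator", updates);

DurableOrchestrationStatus status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    status = await orchestrationClient.GetStatusAsync(instanceId);
}
while (status.RuntimeStatus == OrchestrationRuntimeStatus.Running ||
       status.RuntimeStatus == OrchestrationRuntimeStatus.Pending);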
I have a program where I let the user create several functions, and once they have created all of them I run them every x milliseconds. In other words I have something like:
// functionsToExecute is of type List<Action>
// x = some integer
while (true)
{
    foreach (Action action in functionsToExecute)
    {
        action();
    }
    Thread.Sleep(x);
}
Now I would like the user to decide how long to wait per function. For example, if the user creates 2 functions, they might want the first function to run every 500 milliseconds and the next one every 1500. I was thinking about creating two threads for this scenario and then keeping the same implementation. But what if the user creates 50 functions? I would need 50 threads!
In short, I would like to execute x number of Actions, each every n milliseconds. What would be the best way to implement such an algorithm? For example, if I have 3 Actions, I would like to execute the first action every 200 milliseconds, the next one every 500 milliseconds and the last one every 1000 milliseconds.
Maybe I need something similar to the setTimeout function in JavaScript.
If you're using .NET 4.5 and your code is not time-critical, then you can easily do this with the Task Parallel Library:
static Task Repeat(List<Action> actions, CancellationToken token, int delay)
{
    var tasks = new List<Task>();
    var cts = CancellationTokenSource.CreateLinkedTokenSource(token);
    foreach (var action in actions)
    {
        // Task.Run (rather than Task.Factory.StartNew) unwraps the async lambda,
        // so the returned task actually represents the running loop.
        var task = Task.Run(async () =>
        {
            while (true)
            {
                cts.Token.ThrowIfCancellationRequested();
                await Task.Delay(delay, cts.Token).ConfigureAwait(false);
                action();
            }
        });
        tasks.Add(task);
    }
    return Task.WhenAll(tasks);
}
Ideally, you should also make your actions async to properly support cancellation.
The .NET runtime automatically takes care of thread scheduling, but there's no guarantee that your action will be executed after exactly the requested timeout. It will be executed after at least that time has elapsed and there's an idle thread available.
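Since the question asks for a different interval per action, the same idea can be extended by pairing each action with its own delay. A sketch (not from the original answer; the names are illustrative):
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

static class RecurringRunner
{
    public static Task Repeat(IEnumerable<(Action action, TimeSpan interval)> work, CancellationToken token)
    {
        var tasks = new List<Task>();
        foreach (var (action, interval) in work)
        {
            // One lightweight async loop per action; no dedicated thread is
            // blocked while the delay is pending.
            tasks.Add(Task.Run(async () =>
            {
                while (!token.IsCancellationRequested)
                {
                    await Task.Delay(interval, token).ConfigureAwait(false);
                    action();
                }
            }, token));
        }
        return Task.WhenAll(tasks);
    }
}
Called with, for example, (action1, 200 ms), (action2, 500 ms) and (action3, 1000 ms), each loop ticks on its own schedule without needing one thread per action.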
I would consider using a ThreadPool (walkthrough). Create each thread to do the processing and have it repeat based on the timeout it is given. You can also store a ManualResetEvent for when you need the thread(s) to stop.
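A sketch of that thread-pool idea, swapping the hand-managed threads and ManualResetEvent for System.Threading.Timer (whose callbacks already run on pool threads); disposing the returned timers stops the recurring work:
using System;
using System.Collections.Generic;
using System.Threading;

static class TimerScheduler
{
    // Returns the timers so the caller can Dispose() them to stop everything.
    public static List<Timer> Start(IEnumerable<(Action action, int periodMs)> work)
    {
        var timers = new List<Timer>();
        foreach (var (action, periodMs) in work)
        {
            timers.Add(new Timer(_ => action(), null, dueTime: periodMs, period: periodMs));
        }
        return timers;
    }
}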