I need to get some data from an external service and store it in a cache, but it takes some time. Sometimes the request is quite long.
I'd like to throw a TimeoutException and inform the user that they need to wait a couple of seconds and retry the request, but the method responsible for collecting the data should continue its work in the background.
Let's assume that I have a method:
public Task<CustomEntity> GetCustomEntity()
{
// time-consuming task, e.g. a call to an external service
return result;
}
Next:
public async Task<CustomEntity> SomeMethod()
{
return await GetCustomEntity();
}
I'd like to somehow measure the execution time of GetCustomEntity() and, after a defined time, throw a TimeoutException, but I'd like the method to continue executing in the background.
I tried to use Task.WaitAny with a timeout parameter and to write a wrapper with a CancellationToken, but all such solutions stop the task from executing in the background.
One kind of solution is to let the task keep running after the timeout is reached, e.g.:
// Wrapper
var taskToRun = Task.Factory.StartNew(() => GetCustomEntity());
if (Task.WaitAny(new Task[] { taskToRun }, TimeSpan.FromMilliseconds(timeout)) < 0)
{
Task.WhenAll(taskToRun);
throw new TimeoutException();
}
But the solution above is a bit silly.
In the described situation there is another question: how do I return a value from the wrapper if the request completes within the timeout?
Looking at your wrapper solution, you wait for the task to complete with a timeout. First, you do not need WaitAny here; you can simply use Task.Wait(timeout). Secondly, Task.WhenAll is not needed for the task to continue in the background, so this should be sufficient:
var taskToRun = Task.Run(() => GetCustomEntity());
if (!taskToRun.Wait(TimeSpan.FromMilliseconds(timeout)))
{
throw new TimeoutException();
// taskToRun will continue its execution in the background
}
Keep in mind that .Wait or .WaitAny will block the current thread and could lead to deadlocks! If you want to use async/await to avoid that, you could use Task.WhenAny:
var taskToRun = Task.Run(() => GetCustomEntity());
var completedTask = await Task.WhenAny(taskToRun, Task.Delay(timeout));
if (completedTask == taskToRun)
{
// No Timeout
}
else
{
// Timeout
throw new TimeoutException();
}
If you want to return the value when the request finishes in time, you could then do something like this:
var taskToRun = Task.Run(() => GetCustomEntity());
var completedTask = await Task.WhenAny(taskToRun, Task.Delay(timeout));
if (completedTask == taskToRun)
{
// No Timeout
return await taskToRun;
}
else
{
// Timeout
throw new TimeoutException();
}
or with the wait approach:
var taskToRun = Task.Run(() => GetCustomEntity());
if (!taskToRun.Wait(TimeSpan.FromMilliseconds(timeout)))
{
throw new TimeoutException();
// taskToRun will continue its execution in the background
}
else
{
return taskToRun.Result;
}
Related
I need to run many tasks in parallel as fast as possible. But if my program runs more than 30 tasks per 1 second, it will be blocked. How to ensure that tasks run no more than 30 per any 1-second interval?
In other words, we must prevent the new task from starting if 30 tasks were completed in the last 1-second interval.
My ugly possible solution:
private async Task Process(List<Task> taskList, int maxIntervalCount, int timeIntervalSeconds)
{
var timeList = new List<DateTime>();
var sem = new Semaphore(maxIntervalCount, maxIntervalCount);
var tasksToRun = taskList.Select(async task =>
{
do
{
sem.WaitOne();
}
while (HasAllowance(timeList, maxIntervalCount, timeIntervalSeconds));
await task;
timeList.Add(DateTime.Now);
sem.Release();
});
await Task.WhenAll(tasksToRun);
}
private bool HasAllowance(List<DateTime> timeList, int maxIntervalCount, int timeIntervalSeconds)
{
return timeList.Count <= maxIntervalCount
|| DateTime.Now.Subtract(TimeSpan.FromSeconds(timeIntervalSeconds)) > timeList[timeList.Count - maxIntervalCount];
}
User code should never have to control how tasks are scheduled directly. For one thing, it can't: controlling how tasks run is the job of the TaskScheduler. When user code calls .Start(), it simply adds a task to a thread-pool queue for execution. await doesn't start tasks; it awaits tasks that are already running.
The TaskScheduler samples show how to create limited concurrency schedulers, but again, there are better, high-level options.
The question's code doesn't throttle the queued tasks anyway, it limits how many of them can be awaited. They are all running already. This is similar to batching the previous asynchronous operation in a pipeline, allowing only a limited number of messages to pass to the next level.
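One such high-level option, shown here only as a minimal sketch, is a plain SemaphoreSlim used as an asynchronous throttle. Note that this caps how many operations are in flight concurrently rather than how many start per second (the timer-based variants below address that), and it assumes an HttpClient instance named httpClient like the blocks below:
var throttle = new SemaphoreSlim(30); // at most 30 requests in flight at any time
var tasks = urls.Select(async url =>
{
    await throttle.WaitAsync();
    try
    {
        return await httpClient.GetStringAsync(url);
    }
    finally
    {
        throttle.Release();
    }
});
var results = await Task.WhenAll(tasks);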
ActionBlock with delay
The easy, out-of-the-box way would be to use an ActionBlock with a limited MaxDegreeOfParallelism, to ensure no more than N concurrent operations can run at the same time. If we know how long each operation takes, we could add a bit of delay to ensure we don't overshoot the throttle limit.
In this case, 7 concurrent workers perform roughly 4 requests/second each, for a maximum of 28 requests per second. The BoundedCapacity means that only up to 7 items will be stored in the input buffer before downloader.SendAsync has to wait asynchronously. This way we avoid flooding the ActionBlock if the operations take too long.
var downloader = new ActionBlock<string>(
async url => {
await Task.Delay(250);
var response = await httpClient.GetStringAsync(url);
//Do something with it.
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 7, BoundedCapacity=7 }
);
//Start posting to the downloader
foreach(var item in urls)
{
await downloader.SendAsync(item);
}
downloader.Complete();
await downloader.Completion;
ActionBlock with SemaphoreSlim
Another option would be to combine this with a SemaphoreSlim that gets reset periodically by a timer.
var semaphore = new SemaphoreSlim(30); // assumed initial count; the timer below tops the permits up every second
var refreshTimer = new Timer(_ => semaphore.Release(30));
var downloader = new ActionBlock<string>(
async url => {
await semaphore.WaitAsync();
try
{
var response = await httpClient.GetStringAsync(url);
//Do something with it.
}
finally
{
semaphore.Release();
}
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 5, BoundedCapacity=5 }
);
//Start the timer right before we start posting
refreshTimer.Change(1000,1000);
foreach(....)
{
}
Here is a snippet that processes the tasks in batches of 100:
var tasks = new List<Task>();
foreach (var item in listNeedInsert)
{
var task = TaskToRun(item);
tasks.Add(task);
if(tasks.Count == 100)
{
await Task.WhenAll(tasks);
tasks.Clear();
}
}
// Wait for anything left to finish
await Task.WhenAll(tasks);
Notice that I add each task to a List<Task> and, after they have been added, I await them all in that same List<Task>.
What you do here:
var tasks = taskList.Select(async task =>
{
do
{
sem.WaitOne();
}
while (timeList.Count <= maxIntervalCount
|| DateTime.Now.Subtract(TimeSpan.FromSeconds(timeIntervalSeconds)) > timeList[timeList.Count - maxIntervalCount]);
await task;
is blocking until the task finishes its work, thus making this call:
Task.WhenAll(tasks).Wait();
completely redundant. Furthermore, this line Task.WhenAll(tasks).Wait(); is performing unnecessary blocking on the WhenAll method.
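If the surrounding method is (or can be made) async, the non-blocking equivalent of that line is simply:
await Task.WhenAll(tasks);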
Is the blocking due to some server/firewall/hardware limit, or is it based on observation?
You should try to use BlockingCollection<Task> or a similar thread-safe collection, especially if the work your tasks do is I/O-bound. You can even set the capacity to 30:
var collection = new BlockingCollection<Task>(30);
Then you can start two async methods:
var population = Task.Run(Populate);
var processing = Task.Run(Dequeue);
await Task.WhenAll(population, processing);
async Task Populate()
{
    foreach (...)
        collection.Add(...);
    collection.CompleteAdding();
}

async Task Dequeue()
{
    while (!collection.IsCompleted)
        await collection.Take(); //consider using TryTake()
}
If the limit persists due to some true limitation (which should be very rare), change Populate() as follows:
var stopper = Stopwatch.StartNew();
for (var i = ....) //instead of foreach
{
if (i % 30 == 0)
{
if (stopper.ElapsedMilliseconds < 1000)
    await Task.Delay((int)(1000 - stopper.ElapsedMilliseconds)); //note that this race condition should be avoided in your code
stopper.Restart();
}
collection.Add(...);
}
collection.CompleteAdding();
I think that this problem can be solved by a SemaphoreSlim limited to the number of maximum tasks per interval, and also by a Task.Delay that delays the release of the SemaphoreSlim after each task's completion, for an interval equal to the required throttling interval. Below is an implementation based on this idea. The rate limiting can be applied in two ways:
With includeAsynchronousDuration: false the rate limit affects how many operations can be started during the specified time span. The duration of each operation is not taken into account.
With includeAsynchronousDuration: true the rate limit affects how many operations can be counted as "active" during the specified time span, and is more restrictive (makes the enumeration slower). Instead of counting each operation as a moment in time (when started), it is counted as a time span (between start and completion). An operation is counted as "active" for a specified time span, if and only if its own time span intersects with the specified time span.
/// <summary>
/// Applies an asynchronous transformation for each element of a sequence,
/// limiting the number of transformations that can start or be active during
/// the specified time span.
/// </summary>
public static async Task<TResult[]> ForEachAsync<TSource, TResult>(
this IEnumerable<TSource> source,
Func<TSource, Task<TResult>> action,
int maxActionsPerTimeUnit,
TimeSpan timeUnit,
bool includeAsynchronousDuration = false,
bool onErrorContinue = false, /* Affects only asynchronous errors */
bool executeOnCapturedContext = false)
{
if (source == null) throw new ArgumentNullException(nameof(source));
if (action == null) throw new ArgumentNullException(nameof(action));
if (maxActionsPerTimeUnit < 1)
throw new ArgumentOutOfRangeException(nameof(maxActionsPerTimeUnit));
if (timeUnit < TimeSpan.Zero || timeUnit.TotalMilliseconds > Int32.MaxValue)
throw new ArgumentOutOfRangeException(nameof(timeUnit));
using var semaphore = new SemaphoreSlim(maxActionsPerTimeUnit,
maxActionsPerTimeUnit);
using var cts = new CancellationTokenSource();
var tasks = new List<Task<TResult>>();
var releaseTasks = new List<Task>();
try // Watch for exceptions thrown by the source enumerator
{
foreach (var item in source)
{
try
{
await semaphore.WaitAsync(cts.Token)
.ConfigureAwait(executeOnCapturedContext);
}
catch (OperationCanceledException) { break; }
// Exceptions thrown synchronously by invoking the action are breaking
// the loop unconditionally (the onErrorContinue has no effect on them).
var task = action(item);
if (!onErrorContinue) task = ObserveFailureAsync(task);
tasks.Add(task);
releaseTasks.Add(ScheduleSemaphoreReleaseAsync(task));
}
}
catch (Exception ex) { tasks.Add(Task.FromException<TResult>(ex)); }
cts.Cancel(); // Cancel all release tasks
Task<TResult[]> whenAll = Task.WhenAll(tasks);
try { return await whenAll.ConfigureAwait(false); }
catch (OperationCanceledException) when (whenAll.IsCanceled) { throw; }
catch { whenAll.Wait(); throw; } // Propagate AggregateException
finally { await Task.WhenAll(releaseTasks); }
async Task<TResult> ObserveFailureAsync(Task<TResult> task)
{
try { return await task.ConfigureAwait(false); }
catch { cts.Cancel(); throw; }
}
async Task ScheduleSemaphoreReleaseAsync(Task<TResult> task)
{
if (includeAsynchronousDuration)
try { await task.ConfigureAwait(false); } catch { } // Ignore exceptions
// Release only if the Task.Delay completed successfully
try { await Task.Delay(timeUnit, cts.Token).ConfigureAwait(false); }
catch (OperationCanceledException) { return; }
semaphore.Release();
}
}
Usage example:
int[] results = await ForEachAsync(Enumerable.Range(1, 100), async n =>
{
await Task.Delay(500); // Simulate some asynchronous I/O-bound operation
return n;
}, maxActionsPerTimeUnit: 30, timeUnit: TimeSpan.FromSeconds(1.0),
includeAsynchronousDuration: true);
The reasons for propagating an AggregateException using the catch+Wait technique are explained here.
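For reference, the catch+Wait trick in the code above boils down to this pattern (a sketch, with whenAll being the combined task inside an async method):
Task<TResult[]> whenAll = Task.WhenAll(tasks);
try { return await whenAll; }
catch
{
    // 'await' rethrows only the first inner exception;
    // .Wait() rethrows the AggregateException that carries all of them.
    whenAll.Wait();
    throw; // not reached, but keeps every code path returning or throwing
}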
I have an array of tasks and I am awaiting them with Task.WhenAll. My tasks are failing frequently, in which case I inform the user with a message box so that she can try again. My problem is that reporting the error is delayed until all tasks are completed. Instead, I would like to inform the user as soon as the first task has thrown an exception. In other words I want a version of Task.WhenAll that fails fast. Since no such built-in method exists I tried to make my own, but my implementation does not behave the way I want. Here is what I came up with:
public static async Task<TResult[]> WhenAllFailFast<TResult>(
params Task<TResult>[] tasks)
{
foreach (var task in tasks)
{
await task.ConfigureAwait(false);
}
return await Task.WhenAll(tasks).ConfigureAwait(false);
}
This generally throws faster than the native Task.WhenAll, but usually not fast enough. A faulted task #2 will not be observed before the completion of task #1. How can I improve it so that it fails as fast as possible?
Update: Regarding cancellation, it is not in my requirements right now, but let's say that for consistency the first cancelled task should stop the awaiting immediately. In this case the combining task returned from WhenAllFailFast should have Status == TaskStatus.Canceled.
Clarification: The cancellation scenario is about the user clicking a Cancel button to stop the tasks from completing. It is not about cancelling the incomplete tasks automatically in case of an exception.
Your best bet is to build your WhenAllFailFast method using TaskCompletionSource. You can .ContinueWith() every input task with a synchronous continuation that errors the TCS when the tasks end in the Faulted state (using the same exception object).
Perhaps something like (not fully tested):
using System;
using System.Threading;
using System.Threading.Tasks;
namespace stackoverflow
{
class Program
{
static async Task Main(string[] args)
{
var cts = new CancellationTokenSource();
cts.Cancel();
var arr = await WhenAllFastFail(
Task.FromResult(42),
Task.Delay(2000).ContinueWith<int>(t => throw new Exception("ouch")),
Task.FromCanceled<int>(cts.Token));
Console.WriteLine("Hello World!");
}
public static Task<TResult[]> WhenAllFastFail<TResult>(params Task<TResult>[] tasks)
{
if (tasks is null || tasks.Length == 0) return Task.FromResult(Array.Empty<TResult>());
// defensive copy.
var defensive = tasks.Clone() as Task<TResult>[];
var tcs = new TaskCompletionSource<TResult[]>();
var remaining = defensive.Length;
Action<Task> check = t =>
{
switch (t.Status)
{
case TaskStatus.Faulted:
// we 'try' as some other task may beat us to the punch.
tcs.TrySetException(t.Exception.InnerException);
break;
case TaskStatus.Canceled:
// we 'try' as some other task may beat us to the punch.
tcs.TrySetCanceled();
break;
default:
// we can safely set here as no other task remains to run.
if (Interlocked.Decrement(ref remaining) == 0)
{
// get the results into an array.
var results = new TResult[defensive.Length];
for (var i = 0; i < tasks.Length; ++i) results[i] = defensive[i].Result;
tcs.SetResult(results);
}
break;
}
};
foreach (var task in defensive)
{
task.ContinueWith(check, default, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default);
}
return tcs.Task;
}
}
}
Edit: Unwraps AggregateException, Cancellation support, return array of results. Defend against array mutation, null and empty. Explicit TaskScheduler.
I recently needed the WhenAllFailFast method once again, and I revised @ZaldronGG's excellent solution to make it a bit more performant (and more in line with Stephen Cleary's recommendations). The implementation below handles around 3,500,000 tasks per second on my PC.
public static Task<TResult[]> WhenAllFailFast<TResult>(params Task<TResult>[] tasks)
{
if (tasks is null) throw new ArgumentNullException(nameof(tasks));
if (tasks.Length == 0) return Task.FromResult(new TResult[0]);
var results = new TResult[tasks.Length];
var remaining = tasks.Length;
var tcs = new TaskCompletionSource<TResult[]>(
TaskCreationOptions.RunContinuationsAsynchronously);
for (int i = 0; i < tasks.Length; i++)
{
var task = tasks[i];
if (task == null) throw new ArgumentException(
$"The {nameof(tasks)} argument included a null value.", nameof(tasks));
HandleCompletion(task, i);
}
return tcs.Task;
async void HandleCompletion(Task<TResult> task, int index)
{
try
{
var result = await task.ConfigureAwait(false);
results[index] = result;
if (Interlocked.Decrement(ref remaining) == 0)
{
tcs.TrySetResult(results);
}
}
catch (OperationCanceledException)
{
tcs.TrySetCanceled();
}
catch (Exception ex)
{
tcs.TrySetException(ex);
}
}
}
Your loop awaits each of the tasks pseudo-serially, which is why it waits for task #1 to complete before checking whether task #2 failed.
You might find this article helpful on a pattern for aborting after the first failure: http://gigi.nullneuron.net/gigilabs/patterns-for-asynchronous-composite-tasks-in-c/
public static async Task<TResult[]> WhenAllFailFast<TResult>(
params Task<TResult>[] tasks)
{
var taskList = tasks.ToList();
while (taskList.Count > 0)
{
var task = await Task.WhenAny(taskList).ConfigureAwait(false);
if(task.Exception != null)
{
// Left as an exercise for the reader:
// properly unwrap the AggregateException;
// handle the exception(s);
// cancel the other running tasks.
throw task.Exception.InnerException;
}
taskList.Remove(task);
}
return await Task.WhenAll(tasks).ConfigureAwait(false);
}
I'm adding one more answer to this problem, not because I've found a faster solution, but because I am now a bit skeptical about starting multiple async void operations on an unknown SynchronizationContext. The solution I am proposing here is significantly slower: about 3 times slower than @ZaldronGG's excellent solution, and about 10 times slower than my previous async void-based implementation. It does, though, have the advantage that after the completion of the returned Task<TResult[]> it doesn't leak fire-and-forget continuations attached to the observed tasks. When this task is completed, all the continuations created internally by the WhenAllFailFast method have been cleaned up. This is desirable behavior for APIs in general, but in many scenarios it might not be important.
public static Task<TResult[]> WhenAllFailFast<TResult>(params Task<TResult>[] tasks)
{
ArgumentNullException.ThrowIfNull(tasks);
CancellationTokenSource cts = new();
Task<TResult> failedTask = null;
TaskContinuationOptions flags = TaskContinuationOptions.DenyChildAttach |
TaskContinuationOptions.ExecuteSynchronously;
Action<Task<TResult>> continuationAction = new(task =>
{
if (!task.IsCompletedSuccessfully)
if (Interlocked.CompareExchange(ref failedTask, task, null) is null)
cts.Cancel();
});
IEnumerable<Task> continuations = tasks.Select(task => task
.ContinueWith(continuationAction, cts.Token, flags, TaskScheduler.Default));
return Task.WhenAll(continuations).ContinueWith(allContinuations =>
{
cts.Dispose();
var localFailedTask = Volatile.Read(ref failedTask);
if (localFailedTask is not null)
return Task.WhenAll(localFailedTask);
// At this point all the tasks are completed successfully
Debug.Assert(tasks.All(t => t.IsCompletedSuccessfully));
Debug.Assert(allContinuations.IsCompletedSuccessfully);
return Task.WhenAll(tasks);
}, default, flags, TaskScheduler.Default).Unwrap();
}
This implementation is similar to ZaldronGG's in that it attaches one continuation on each task, with the difference being that these continuations are cancelable, and they are canceled en masse when the first non-successful task is observed. It also uses the Unwrap technique that I've discovered recently, which eliminates the need for the manual completion of a TaskCompletionSource<TResult[]> instance, and usually makes for a concise implementation.
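To illustrate the Unwrap technique in isolation: a ContinueWith whose delegate returns a task produces a nested Task<Task<T>>, and Unwrap turns it into a proxy task that tracks the inner one. A minimal sketch, where someTask is a hypothetical existing Task<int>:
Task<Task<int>> nested = someTask.ContinueWith(t => Task.FromResult(t.Result * 2));
Task<int> flattened = nested.Unwrap(); // completes when the inner task completes
int result = await flattened;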
I have a method that registers a background task that looks like this:
//snippet from task builder method
try
{
cancellationTokenSource.CancelAfter(10000);
btr = Task.Run(() => registerTask(builder, btr,cancellationTokenSource.Token), cancellationTokenSource.Token).Result;
}
catch (OperationCanceledException) // something went wrong
{
return null;
}
private BackgroundTaskRegistration registerTask(BackgroundTaskBuilder builder, BackgroundTaskRegistration btr, CancellationToken token)
{
CancellationTokenSource newToken = new CancellationTokenSource();
Task cancelledCheck = Task.Run(() =>
{
while (true)
{
if (token.IsCancellationRequested)
{
newToken.Cancel();
token.ThrowIfCancellationRequested();
}
}
}, newToken.Token);
btr = Task.Run(()=> builder.Register(),token).Result;
return btr;
}
My issue is that sometimes the builder.Register() method never returns. It's probably a Windows bug of some sort; the Register() method never finishes internally. Indeed, after 10 seconds token.ThrowIfCancellationRequested() is called, but the exception does not propagate to the try-catch statement shown above. Initially I was calling builder.Register() directly without a Task.Run(), but that didn't work, and neither does this.
However, if I replace btr = Task.Run(() =>... with a Task.Delay(ms) instead, where ms > 10000, my intended effect happens.
Am I doing something wrong? Or is there a better way to do this? Basically I just need code that will make the registerTask() method return null when builder.Register() does not finish after a few seconds.
Replacing the code with something like this worked for me:
btr = null;
cancellationTokenSource.CancelAfter(10000);
Task registerTask = Task.Factory.StartNew(() =>
{
btr = builder.Register();
});
Task cancellationTask = Task.Factory.StartNew(() =>
{
while (true)
{
if (cancellationTokenSource.Token.IsCancellationRequested) break;
}
}, cancellationTokenSource.Token);
Task[] tasks = new Task[2] { cancellationTask, registerTask };
Task.WaitAny(tasks);
Instead of handling the error, a cancellation request causes one of the two tasks to end, and when that happens I return the task registration variable whether it is null or not.
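In code, that tail end of the method is just this (a sketch, assuming the snippet above sits inside a method returning BackgroundTaskRegistration):
Task.WaitAny(tasks);
return btr; // still null if Register() never completed before the cancellation task ended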
I have a question about CancellationTokenSource, which I am using as shown in the code below:
void Process()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
MainTaskRoutine();
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
void MainTaskRoutine()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
//this method shows that a nested task is created
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
Edit: further elaboration
The end objective is: when a user cancels the operation, all pending tasks (whether children or grandchildren) should be cancelled immediately.
Scenario:
As per the above code:
1. I first check whether the user has asked for cancellation.
2. Only if the user has not asked for cancellation do I continue with the task (please see the Process method).
The sample code shows only one task here, but in reality there can be three or more.
Let's say the CPU started processing Task 1 while the other tasks are still in the task queue waiting to be executed.
The user requests cancellation: Tasks 2 and 3 in the Process method are cancelled immediately, but Task 1 will continue to run since it is already being processed.
Task 1 calls the method MainTaskRoutine, which in turn creates more tasks.
In MainTaskRoutine I have written: cancellationToken.ThrowIfCancellationRequested();
So the question is: is this the correct way of using CancellationTokenSource, given that it depends on Task.WaitAll()?
[EDITED] As you use an array in your code, I assume there could be multiple tasks, not just one. I also assume that within each task that you're starting from Process you want to do some CPU-bound work first (//do some work here), and then run MainTaskRoutine.
How you handle task cancellation exceptions is determined by your project's design workflow. E.g., you could do it inside the Process method, or from where you call Process. If your only concern is to remove Task objects from the array where you keep track of the pending tasks, this can be done using Task.ContinueWith. The continuation will be executed regardless of the task's completion status (Canceled, Faulted or RanToCompletion):
Task Process(CancellationToken cancellationToken)
{
var tArray = new List<Task>();
var tArrayLock = new Object();
var task = Task.Run(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
return MainTaskRoutine(cancellationToken);
}, cancellationToken);
// add the task to the array,
// use lock as we may remove tasks from this array on a different thread
lock (tArrayLock)
tArray.Add(task);
task.ContinueWith((antecedentTask) =>
{
if (antecedentTask.IsCanceled || antecedentTask.IsFaulted)
{
// handle cancellation or exception inside the task
// ...
}
// remove task from the array,
// could be on a different thread from the Process's thread, use lock
lock (tArrayLock)
tArray.Remove(antecedentTask);
}, TaskContinuationOptions.ExecuteSynchronously);
// add more tasks like the above
// ...
// Return aggregated task
Task[] allTasks = null;
lock (tArrayLock)
allTasks = tArray.ToArray();
return Task.WhenAll(allTasks);
}
Your MainTaskRoutine can be structured in exactly the same way as Process, and have the same method signature (return a Task).
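For illustration, a stripped-down MainTaskRoutine following that same shape might look like this (the inner work and the ContinueWith cleanup are elided for brevity):
Task MainTaskRoutine(CancellationToken cancellationToken)
{
    var tArray = new List<Task>();
    var tArrayLock = new Object();

    var task = Task.Run(() =>
    {
        cancellationToken.ThrowIfCancellationRequested();
        // do some work here
    }, cancellationToken);

    lock (tArrayLock)
        tArray.Add(task);
    // attach the same ContinueWith cleanup as in Process, and add more tasks as needed
    // ...

    Task[] allTasks = null;
    lock (tArrayLock)
        allTasks = tArray.ToArray();
    return Task.WhenAll(allTasks);
}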
Then you may want to perform a blocking wait on the aggregated task returned by Process, or handle its completion asynchronously, e.g:
// handle the completion synchronously with a blocking wait
void RunProcessSync()
{
try
{
Process(_cancellationTokenSource.Token).Wait();
MessageBox.Show("Process complete");
}
catch (Exception e)
{
MessageBox.Show("Process cancelled (or faulted): " + e.Message);
}
}
// handle the completion asynchronously using ContinueWith
Task RunProcessAsync()
{
return Process(_cancellationTokenSource.Token).ContinueWith((task) =>
{
// check task.Status here
MessageBox.Show("Process complete (or cancelled, or faulted)");
}, TaskScheduler.FromCurrentSynchronizationContext());
}
// handle the completion asynchronously with async/await
async Task RunProcessAsync()
{
try
{
await Process(_cancellationTokenSource.Token);
MessageBox.Show("Process complete");
}
catch (Exception e)
{
MessageBox.Show("Process cancelled (or faulted): " + e.Message);
}
}
After doing some research I found this link.
The code now looks like this:
See the usage of CancellationTokenSource.CreateLinkedTokenSource in the code below:
void Process()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
MainTaskRoutine(cancellationToken);
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
void MainTaskRoutine(CancellationToken cancellationToken)
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
//this method shows that a nested task is created
using (var cancellationTokenSource = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken))
{
var cancelToken = cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancelToken.ThrowIfCancellationRequested();
//do some work here
}, cancelToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
}
Note: I haven't used it yet, but I will let you know once it is done :)
I'm looking for an efficient way to throw a timeout exception if a synchronous method takes too long to execute. I've seen some samples but nothing that quite does what I want.
What I need to do is:
Check whether the sync method exceeds its SLA
If it does, throw a timeout exception
I do not have to terminate the sync method if it executes for too long. (Multiple failures will trip a circuit breaker and prevent cascading failure)
My solution so far is shown below. Note that I do pass a CancellationToken to the sync method in the hope that it will honor a cancellation request on timeout. Also, my solution returns a task that can then be awaited etc. as desired by my calling code.
My concern is that this code creates two tasks per method being monitored. I think the TPL will manage this well, but I would like to confirm.
Does this make sense? Is there a better way to do this?
private Task TimeoutSyncMethod( Action<CancellationToken> syncAction, TimeSpan timeout )
{
var cts = new CancellationTokenSource();
var outer = Task.Run( () =>
{
try
{
//Start the synchronous method - passing it a cancellation token
var inner = Task.Run( () => syncAction( cts.Token ), cts.Token );
if( !inner.Wait( timeout ) )
{
//Try to give the sync method a chance to abort gracefully
cts.Cancel();
//There was a timeout regardless of what the sync method does - so throw
throw new TimeoutException( "Timeout waiting for method after " + timeout );
}
}
finally
{
cts.Dispose();
}
}, cts.Token );
return outer;
}
Edit:
Using @Timothy's answer I'm now using this. While it is not significantly less code, it is a lot clearer. Thanks!
private Task TimeoutSyncMethod( Action<CancellationToken> syncAction, TimeSpan timeout )
{
var cts = new CancellationTokenSource();
var inner = Task.Run( () => syncAction( cts.Token ), cts.Token );
var delay = Task.Delay( timeout, cts.Token );
var timeoutTask = Task.WhenAny( inner, delay ).ContinueWith( t =>
{
try
{
if( !inner.IsCompleted )
{
cts.Cancel();
throw new TimeoutException( "Timeout waiting for method after " + timeout );
}
}
finally
{
cts.Dispose();
}
}, cts.Token );
return timeoutTask;
}
If you have a Task called task, you can do this:
var delay = Task.Delay(TimeSpan.FromSeconds(3));
var timeoutTask = Task.WhenAny(task, delay);
If timeoutTask.Result ends up being task, then it didn't timeout. Otherwise, it's delay and it did timeout.
I don't know if this is going to behave identically to what you have implemented, but it's the built-in way to do this.
I have re-written this solution for .NET 4.0, where some methods are not available, e.g. Task.Delay. This version monitors a method which returns object. How to implement Delay in .NET 4.0 comes from here: How to put a task to sleep (or delay) in C# 4.0?
public class OperationWithTimeout
{
public Task<object> Execute(Func<CancellationToken, object> operation, TimeSpan timeout)
{
var cancellationToken = new CancellationTokenSource();
// Two tasks are created.
// One which starts the requested operation and second which starts Timer.
// Timer is set to AutoReset = false so it runs only once after given 'delayTime'.
// When this 'delayTime' has elapsed then TaskCompletionSource.TrySetResult() method is executed.
// This method attempts to transition the 'delayTask' into the RanToCompletion state.
Task<object> operationTask = Task<object>.Factory.StartNew(() => operation(cancellationToken.Token), cancellationToken.Token);
Task delayTask = Delay(timeout.TotalMilliseconds);
// Then WaitAny() waits for any of the provided task objects to complete execution.
Task[] tasks = new Task[]{operationTask, delayTask};
Task.WaitAny(tasks);
try
{
if (!operationTask.IsCompleted)
{
// If operation task didn't finish within given timeout call Cancel() on token and throw 'TimeoutException' exception.
// If Cancel() was called then in the operation itself the property 'IsCancellationRequested' will be equal to 'true'.
cancellationToken.Cancel();
throw new TimeoutException("Timeout waiting for method after " + timeout + ". Method was too slow :-)");
}
}
finally
{
cancellationToken.Dispose();
}
return operationTask;
}
public static Task Delay(double delayTime)
{
var completionSource = new TaskCompletionSource<bool>();
Timer timer = new Timer(); // System.Timers.Timer
timer.Elapsed += (obj, args) => completionSource.TrySetResult(true);
timer.Interval = delayTime;
timer.AutoReset = false;
timer.Start();
return completionSource.Task;
}
}
How to use it in a console app:
public static void Main(string[] args)
{
var operationWithTimeout = new OperationWithTimeout();
TimeSpan timeout = TimeSpan.FromMilliseconds(10000);
Func<CancellationToken, object> operation = token =>
{
Thread.Sleep(9000); // 12000
if (token.IsCancellationRequested)
{
Console.Write("Operation was cancelled.");
return null;
}
return 123456;
};
try
{
var t = operationWithTimeout.Execute(operation, timeout);
var result = t.Result;
Console.WriteLine("Operation returned '" + result + "'");
}
catch (TimeoutException tex)
{
Console.WriteLine(tex.Message);
}
Console.WriteLine("Press enter to exit");
Console.ReadLine();
}
To elaborate on Timothy Shields' clean solution:
if (task == await Task.WhenAny(task, Task.Delay(TimeSpan.FromSeconds(3))))
{
return await task;
}
else
throw new TimeoutException();
This solution, I found, will also handle the case where the Task has a return value, i.e.:
async Task<T>
More to be found here: MSDN: Crafting a Task.TimeoutAfter Method
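A bare-bones version of such a TimeoutAfter extension, sketched in the spirit of that article rather than copied from it, could look like this:
public static async Task<TResult> TimeoutAfter<TResult>(this Task<TResult> task, TimeSpan timeout)
{
    var completed = await Task.WhenAny(task, Task.Delay(timeout));
    if (completed != task)
        throw new TimeoutException();
    // In this simplified sketch the pending Task.Delay is simply left to expire on its own.
    return await task; // propagates the task's result or its original exception
}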
Jasper's answer got me most of the way, but I specifically wanted a void function to call a non-task synchronous method with a timeout. Here's what I ended up with:
public static void RunWithTimeout(Action action, TimeSpan timeout)
{
var task = Task.Run(action);
try
{
var success = task.Wait(timeout);
if (!success)
{
throw new TimeoutException();
}
}
catch (AggregateException ex)
{
throw ex.InnerException;
}
}
Call it like:
RunWithTimeout(() => File.Copy(..), TimeSpan.FromSeconds(3));