C# async aggregate and dispatch

I'm having trouble assimilating the C# Task, async and await patterns.
Windows service, .NET v4.5.2 server-side.
I have a Windows service accepting a variety of sources of incoming records, arriving externally ad-hoc via a self-hosted web api. I would like to batch up these records and then forward them on to another service. If the number of batched records exceeds a threshold, the batch should be dispatched immediately. Furthermore, the batch as it stands should also be dispatched if a time interval has elapsed. This means that a record is never held for more than N seconds.
I'm struggling to fit this into a Task based async pattern.
In days gone by, I would have created a Thread, a ManualResetEvent and a System.Threading.Timer. The Thread would loop around a Wait on the reset event. The Timer would set the event when fired, as would the code doing the aggregation when the batch size exceeded the threshold. Following the Wait, the Thread would stop the Timer, do the dispatch (an HTTP POST), reset the Timer, clear the ManualResetEvent, then loop back and Wait.
However, I am seeing folk say that this is 'bad' as the Wait just blocks a valuable thread resource, and that async/await is my panacea.
First off, are they right? Is my way out-of-date and inefficient or can I JFDI?
I've found examples here for batching and here for tasks at intervals, but not a combination of the two.
Is this requirement actually compatible with async/await?

Actually, you're almost doing the right thing, and they are also partially right.
What you should know is that you should avoid idle threads: threads that spend a long time blocked waiting on events or waiting for I/O to complete (waiting on locks with low contention and short critical sections, or spin loops using compare-and-swap, are usually fine).
What most of them don't know is that tasks are not magic, for instance, Task.Delay uses a Timer (more exactly, a System.Threading.Timer) and waiting on a non-complete task ends up using a ManualResetEventSlim (an improvement over ManualResetEvent, as it doesn't create a Win32 event unless explicitly asked for, e.g. ((IAsyncResult)task).AsyncWaitHandle).
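For illustration only (this is not part of the answer's example), a minimal sketch of the difference between blocking on a task and awaiting it, and of the lazily created wait handle:
using System;
using System.Threading;
using System.Threading.Tasks;
static class TaskWaitSketch
{
    // Hypothetical demo method; Task.Delay stands in for any incomplete task
    public static async Task RunAsync()
    {
        Task pending = Task.Delay(TimeSpan.FromSeconds(1));
        // Blocking alternative (what a Thread + ManualResetEvent design does):
        // the calling thread sits parked until the task completes.
        // pending.Wait();
        // Awaiting instead schedules the rest of this method as a continuation,
        // so no thread is held while the underlying timer runs.
        await pending;
        // The Win32 event behind a task is only created when explicitly asked for:
        WaitHandle handle = ((IAsyncResult)pending).AsyncWaitHandle;
        Console.WriteLine(handle != null);
    }
}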
So yes, your requirements are achievable with async/await, or tasks in general.
Runnable example at .NET Fiddle:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
public class Record
{
private int n;
public Record(int n)
{
this.n = n;
}
public int N { get { return n; } }
}
public class RecordReceiver
{
// Arbitrary constants
// You should fetch value from configuration and define sensible defaults
private static readonly int threshold = 5;
// I chose a low value so the example wouldn't timeout in .NET Fiddle
private static readonly TimeSpan timeout = TimeSpan.FromMilliseconds(100);
// I'll use a Stopwatch to trace execution times
private readonly Stopwatch sw = Stopwatch.StartNew();
// Using a separate private object for locking
private readonly object lockObj = new object();
// The list of accumulated records to execute in a batch
private List<Record> records = new List<Record>();
// The most recent TCS to signal completion when:
// - the list count reached the threshold
// - enough time has passed
private TaskCompletionSource<IEnumerable<Record>> batchTcs;
// A CTS to cancel the timer-based task when the threshold is reached
// Not strictly necessary, but it reduces resource usage
private CancellationTokenSource delayCts;
// The task that will be completed when a batch of records has been dispatched
private Task dispatchTask;
// This method doesn't use async/await,
// because we're not doing an async flow here.
public Task ReceiveAsync(Record record)
{
Console.WriteLine("Received record {0} ({1})", record.N, sw.ElapsedMilliseconds);
lock (lockObj)
{
// When the list of records is empty, set up the next task
//
// TaskCompletionSource is just what we need, we'll complete a task
// not when we've finished some computation, but when we reach some criteria
//
// This is the main reason this method doesn't use async/await
if (records.Count == 0)
{
// I want the dispatch task to run on the thread pool
// In .NET 4.6, there's TaskCreationOptions.RunContinuationsAsynchronously
// .NET 4.6
//batchTcs = new TaskCompletionSource<IEnumerable<Record>>(TaskCreationOptions.RunContinuationsAsynchronously);
//dispatchTask = DispatchRecordsAsync(batchTcs.Task);
// Previously, we have to set up a continuation task using the default task scheduler
// .NET 4.5.2
batchTcs = new TaskCompletionSource<IEnumerable<Record>>();
var asyncContinuationsTask = batchTcs.Task
.ContinueWith(bt => bt.Result, TaskScheduler.Default);
dispatchTask = DispatchRecordsAsync(asyncContinuationsTask);
// Create a cancellation token source to be able to cancel the timer
//
// To be used when we reach the threshold, to release timer resources
delayCts = new CancellationTokenSource();
Task.Delay(timeout, delayCts.Token)
.ContinueWith(
dt =>
{
// When we hit the timer, take the lock and set the batch
// task as complete, moving the current records to its result
lock (lockObj)
{
// Avoid dispatching an empty list of records
//
// Also avoid a race condition by checking the cancellation token
//
// The race would be for the actual timer function to start before
// we had a chance to cancel it
if ((records.Count > 0) && !delayCts.IsCancellationRequested)
{
batchTcs.TrySetResult(new List<Record>(records));
records.Clear();
}
}
},
// Since our continuation function is fast, we want it to run
// ASAP on the same thread where the actual timer function runs
//
// Note: this is just a hint, but I trust it'll be favored most of the time
TaskContinuationOptions.ExecuteSynchronously);
// Remember that we want our batch task to have continuations
// running outside the timer thread, since dispatching records
// is probably too much work for a timer thread.
}
// Actually store the new record somewhere
records.Add(record);
// When we reach the threshold, set the batch task as complete,
// moving the current records to its result
//
// Also, cancel the timer task
if (records.Count >= threshold)
{
batchTcs.TrySetResult(new List<Record>(records));
delayCts.Cancel();
records.Clear();
}
// Return the last saved dispatch continuation task
//
// It'll start after either the timer or the threshold,
// but more importantly, it'll complete after it dispatches all records
return dispatchTask;
}
}
// This method uses async/await, since we want to use the async flow
internal async Task DispatchRecordsAsync(Task<IEnumerable<Record>> batchTask)
{
// We expect it to return a task right here, since the batch task hasn't had
// a chance to complete when the first record arrives
//
// Task.ConfigureAwait(false) allows us to run synchronously and on the same thread
// as the completer, but again, this is just a hint
//
// Remember we've set our task to run completions on the thread pool?
//
// With .NET 4.6, completing a TaskCompletionSource created with
// TaskCreationOptions.RunContinuationsAsynchronously will start scheduling
// continuations either on their captured SynchronizationContext or TaskScheduler,
// or forced to use TaskScheduler.Default
//
// Before .NET 4.6, completing a TaskCompletionSource could mean
// that continuations ran within the completer, especially when
// Task.ConfigureAwait(false) was used on an async awaiter, or when
// Task.ContinueWith(..., TaskContinuationOptions.ExecuteSynchronously) was used
// to set up a continuation
//
// That's why, before .NET 4.6, we need to actually run a task for that effect,
// and we used Task.ContinueWith without TaskContinuationOptions.ExecuteSynchronously
// and with TaskScheduler.Default, to ensure it gets scheduled
//
// So, why am I using Task.ConfigureAwait(false) here anyway?
// Because it'll make a difference if this method is run from within
// a Windows Forms or WPF thread, or any thread with a SynchronizationContext
// or TaskScheduler that schedules tasks on a dedicated thread
var batchedRecords = await batchTask.ConfigureAwait(false);
// Async methods are transformed into state machines,
// much like iterator methods, but with async specifics
//
// What await actually does is:
// - check if the awaitable is complete
// - if so, continue executing
// Note: if every awaited awaitable is complete along an async method,
// the method will complete synchronously
// This is usually only the case with tasks that have already completed
// or I/O that is always ready, e.g. MemoryStream
// - if not, return a task and schedule a continuation for just after the await expression
// Note: the continuation will resume the state machine on the next state
// Note: the returned task will complete on return or on exception,
// but that is something the compiled state machine will handle
foreach (var record in batchedRecords)
{
Console.WriteLine("Dispatched record {0} ({1})", record.N, sw.ElapsedMilliseconds);
// I used Task.Yield as a replacement for actual work
//
// It'll force the async state machine to always return here
// and schedule a continuation that re-enters the async state machine right afterwards
//
// This is not something you usually want on production code,
// so please replace this with the actual dispatch
await Task.Yield();
}
}
}
public class Program
{
public static void Main()
{
// Our main entry point is synchronous, so we run an async entry point and wait on it
//
// The difference between MainAsync().Result and MainAsync().GetAwaiter().GetResult()
// is in the way exceptions are thrown:
// - the former aggregates exceptions, throwing an AggregateException
// - the latter doesn't aggregate exceptions if it doesn't have to, throwing the actual exception
//
// Since I'm not combining tasks (e.g. Task.WhenAll), I'm not expecting multiple exceptions
//
// If my main method returned int, I could return the task's result
// and I'd make MainAsync return Task<int> instead of just Task
MainAsync().GetAwaiter().GetResult();
}
// Async entry point
public static async Task MainAsync()
{
var receiver = new RecordReceiver();
// I'll provide a few records:
// - a delay big enough between the 1st and the 2nd such that the 1st will be dispatched
// - 8 records in a row, such that 5 of them will be dispatched, and 3 of them will wait
// - again, a delay big enough that will provoke the last 3 records to be dispatched
// - and a final record, which will wait to be dispatched
//
// We await for Task.Delay between providing records,
// but we'll await for the records in the end only
//
// That is, we'll not await each record before the next,
// as that would mean each record would only be dispatched after at least the timeout
var t1 = receiver.ReceiveAsync(new Record(1));
await Task.Delay(TimeSpan.FromMilliseconds(300));
var t2 = receiver.ReceiveAsync(new Record(2));
var t3 = receiver.ReceiveAsync(new Record(3));
var t4 = receiver.ReceiveAsync(new Record(4));
var t5 = receiver.ReceiveAsync(new Record(5));
var t6 = receiver.ReceiveAsync(new Record(6));
var t7 = receiver.ReceiveAsync(new Record(7));
var t8 = receiver.ReceiveAsync(new Record(8));
var t9 = receiver.ReceiveAsync(new Record(9));
await Task.Delay(TimeSpan.FromMilliseconds(300));
var t10 = receiver.ReceiveAsync(new Record(10));
// I probably should have used a list of records, but this is just an example
await Task.WhenAll(t1, t2, t3, t4, t5, t6, t7, t8, t9, t10);
}
}
You can make this more interesting, like returning a distinct task, such as Task<RecordDispatchReport>, from ReceiveAsync, which is completed by the processing part of DispatchRecordsAsync, using a TaskCompletionSource for each record.
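If you go that route, a rough sketch might pair each record with its own completion source; the RecordDispatchReport and PendingRecord types below are made up for illustration and are not part of the example above:
using System;
using System.Threading.Tasks;
// Hypothetical result type describing how one record was dispatched
public class RecordDispatchReport
{
    public int RecordNumber { get; set; }
    public DateTimeOffset DispatchedAt { get; set; }
}
// Hypothetical pairing of a record with its own TaskCompletionSource
public class PendingRecord
{
    public PendingRecord(Record record)
    {
        Record = record;
        Completion = new TaskCompletionSource<RecordDispatchReport>();
    }
    public Record Record { get; private set; }
    public TaskCompletionSource<RecordDispatchReport> Completion { get; private set; }
}
ReceiveAsync would then accumulate PendingRecord instances and return pendingRecord.Completion.Task to each caller, while DispatchRecordsAsync would call TrySetResult (or TrySetException) on each record's Completion as it processes the batch.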

Related

Should/Could this "recursive Task" be expressed as a TaskContinuation?

In my application I have the need to continually process some piece(s) of Work on some set interval(s). I had originally written a Task to continually check a given Task.Delay to see if it had completed; if so, the Work corresponding to that Task.Delay would be processed. The drawback to this method is that the Task checking these Task.Delays sits in a pseudo-infinite loop while no Task.Delay has completed.
To solve this problem I found that I could create a "recursive Task" (I am not sure what the jargon for this would be) that processes the work at the given interval as needed.
// New Recurring Work can be added by simply creating
// the Task below and adding an entry into this Dictionary.
// Recurring Work can be removed/stopped by looking
// it up in this Dictionary and calling its CTS.Cancel method.
private readonly object _LockRecurWork = new object();
private Dictionary<Work, Tuple<Task, CancellationTokenSource>> RecurringWork { get; set; }
...
private Task CreateRecurringWorkTask(Work workToDo, CancellationTokenSource taskTokenSource)
{
return Task.Run(async () =>
{
// Do the Work, then wait the prescribed amount of time before doing it again
DoWork(workToDo);
await Task.Delay(workToDo.RecurRate, taskTokenSource.Token);
// If this Work's CancellationTokenSource is not
// cancelled then "schedule" the next Work execution
if (!taskTokenSource.IsCancellationRequested)
{
lock(_LockRecurWork)
{
RecurringWork[workToDo] = new Tuple<Task, CancellationTokenSource>
(CreateRecurringWorkTask(workToDo, taskTokenSource), taskTokenSource);
}
}
}, taskTokenSource.Token);
}
Should/Could this be represented with a chain of Task.ContinueWith? Would there be any benefit to such an implementation? Is there anything majorly wrong with the current implementation?
Yes!
Calling ContinueWith tells the Task to call your code as soon as it finishes. This is far faster than manually polling it.
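As a rough sketch, reusing the workToDo, DoWork and taskTokenSource names from your code, reacting to the delay with a continuation instead of polling could look like this:
// Runs DoWork as soon as the delay elapses; no checking loop required
Task.Delay(workToDo.RecurRate, taskTokenSource.Token)
    .ContinueWith(
        _ => DoWork(workToDo),
        taskTokenSource.Token,
        TaskContinuationOptions.OnlyOnRanToCompletion,
        TaskScheduler.Default);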

Execute set of tasks in parallel but with a group timeout

I'm currently trying to write a status checking tool with a reliable timeout value. One way I'd seen to do this was using Task.WhenAny() and including a Task.Delay; however, it doesn't seem to produce the results I expect:
public void DoIUnderstandTasksTest()
{
var checkTasks = new List<Task>();
// Create a list of dummy tasks that should just delay or "wait"
// for some multiple of the timeout
for (int i = 0; i < 10; i++)
{
checkTasks.Add(Task.Delay(_timeoutMilliseconds/2));
}
// Wrap the group of tasks in a task that will wait till they all finish
var allChecks = Task.WhenAll(checkTasks);
// I think WhenAny is supposed to return the first task that completes
bool didntTimeOut = Task.WhenAny(allChecks, Task.Delay(_timeoutMilliseconds)) == allChecks;
Assert.True(didntTimeOut);
}
What am I missing here?
I think you're confusing the workings of the When... calls with Wait....
Task.WhenAny doesn't return the first task to complete among those you pass to it. Rather, it returns a new Task<Task> that completes when any of the inner tasks finishes, and whose Result is the task that finished first. This means your equality check will always return false - the wrapper task will never be reference-equal to allChecks.
The behavior you're expecting is closer to Task.WaitAny, which blocks the current execution until any of the given tasks completes, and returns the index of the completed task.
Using WaitAny, your code will look like this:
// Wrap the group of tasks in a task that will wait till they all finish
var allChecks = Task.WhenAll(checkTasks);
var taskIndexThatCompleted = Task.WaitAny(allChecks, Task.Delay(_timeoutMilliseconds));
Assert.AreEqual(0, taskIndexThatCompleted);
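If you'd rather keep the check asynchronous, an alternative sketch (reusing the _timeoutMilliseconds field and assertion style from the question) is to await Task.WhenAny and compare its result, which is the first task to complete:
public async Task DoIUnderstandTasksTestAsync()
{
    var checkTasks = new List<Task>();
    for (int i = 0; i < 10; i++)
    {
        checkTasks.Add(Task.Delay(_timeoutMilliseconds / 2));
    }
    var allChecks = Task.WhenAll(checkTasks);
    // Awaiting WhenAny yields the task that finished first,
    // so this comparison behaves as originally intended
    var firstToComplete = await Task.WhenAny(allChecks, Task.Delay(_timeoutMilliseconds));
    Assert.True(firstToComplete == allChecks);
}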

Windows 8 Thread improvements - alternative to Task.Run()?

I have inherited a C#/XAML/Win 8 application. There is some code which is set to run every n seconds.
The code that sets that up is:
if(!_syncThreadStarted)
{
await Task.Run(() => SyncToDatabase());
_syncThreadStarted = true;
}
The above code is run once.
And then inside SyncToDatabase() we have:
while (true)
{
DatabaseSyncer dbSyncer = new DatabaseSyncer();
await dbSyncer.DeserializeAndUpdate();
await Task.Delay(10); // after elapsed time re-run above code
}
The method DeserializeAndUpdate queries an in-memory collection of objects and pushes those objects to a web service.
Sometimes the send request to the web service takes longer than expected, meaning duplicate items are sent.
Question: Is there a way to have a thread or some type of thread pool/background worker which I can stop/abort/destroy inside the method SyncToDatabase(), and then initialize/start it once we are done? This would ensure no subsequent requests are fired while a previous request is still pending.
Edit: I am not very knowledgeable when it comes to threads, but the logic I want is:
Create a thread which runs some method every x seconds; when that work starts, stop the "run every x seconds" part, and once the work has completed, start the "run every x seconds" part again.
E.g. if the thread kicks off at 10:01:30AM and does not complete until 10:01:39AM (9 seconds) the next thread should start at 10:01:44AM (5 seconds after work completed) - does that make sense? I do not want 2 or more threads running at the same time.
Here is my code for the above:
var period = TimeSpan.FromSeconds(5);
var completed = true;
ThreadPoolTimer syncTimer = null;
syncTimer = ThreadPoolTimer.CreatePeriodicTimer(async (source) =>
{
    // stop further threads from starting (in case this work takes longer than var period)
    syncTimer.Cancel();
    DatabaseSyncer dbSyncer = new DatabaseSyncer();
    await dbSyncer.DeserializeAndUpdate(); // makes webservices calls
    Dispatcher.RunAsync(CoreDispatcherPriority.High, async () =>
    {
        // Update UI
    });
    completed = true;
}, period,
(source) =>
{
    if (!completed)
    {
        syncTimer.Cancel(); // not sure if this is correct...
    }
});
Thanks,
Andrew
This is not specific to Windows 8. Usually Task.Run is used for CPU-bound work, to offload it to a pool thread and keep the UI (or the core service loop) responsive. In your case, as far as I can tell, the main payload is dbSyncer.DeserializeAndUpdate, which is already asynchronous and most likely network-IO bound, rather than CPU-bound.
Besides, the author of the original code does _syncThreadStarted = true after await Task.Run(() => SyncToDatabase()). That doesn't make sense, because the work on the pool thread would have been already done by the time _syncThreadStarted = true is executed, thanks to the await.
To cancel the loop inside SyncToDatabase you could use Task Cancellation Pattern. Is SyncToDatabase itself an async method? I presume so, because there's an await in the while loop. Given that, the code which calls it could look something like this:
if (_syncTask != null && !_syncTask.IsCompleted)
{
    _ct.Cancel();
    // here you may want to make sure that the pending task has been fully shut down,
    // keeping possible re-entrancy in mind
    // See: https://stackoverflow.com/questions/18999827/a-pattern-for-self-cancelling-and-restarting-task
    _syncTask = null;
}
_ct = new CancellationTokenSource();
// _syncTask = SyncToDatabase(_ct.Token); // do not await
// edited to run on another thread, as requested by the OP
_syncTask = Task.Run(async () => await SyncToDatabase(_ct.Token), _ct.Token);
_syncThreadStarted = true;
And SyncToDatabase could look like:
async Task SyncToDatabase(CancellationToken token)
{
while (true)
{
token.ThrowIfCancellationRequested();
DatabaseSyncer dbSyncer = new DatabaseSyncer();
await dbSyncer.DeserializeAndUpdate();
await Task.Delay(10, token); // after elapsed time re-run above code
}
}
Check this answer for more details on how to cancel and restart a task.
I may have misunderstood the question, but the execution of SyncToDatabase() will wait on the completion of await dbSyncer.DeserializeAndUpdate() (due to the await keyword, go figure ;)) before executing the continuation, which will then delay for 10 ms (do you want 10 ms, or did you mean 10 seconds? The parameter to Task.Delay is in milliseconds), then loop back to re-execute the DbSyncer method, so I don't see the problem.
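For what it's worth, a minimal sketch of the timing described in the edit (no overlap, and the pause starts only after the work completes), assuming a cancellation token is threaded through as in the answer above:
async Task SyncToDatabase(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        DatabaseSyncer dbSyncer = new DatabaseSyncer();
        await dbSyncer.DeserializeAndUpdate();            // never overlaps itself
        await Task.Delay(TimeSpan.FromSeconds(5), token); // 5 second pause measured from completion
    }
}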

Proper way to implement a never ending task. (Timers vs Task)

So, my app needs to perform an action almost continuously (with a pause of 10 seconds or so between each run) for as long as the app is running or a cancellation is requested. The work it needs to do has the possibility of taking up to 30 seconds.
Is it better to use a System.Timers.Timer and use AutoReset to make sure it doesn't perform the action before the previous "tick" has completed?
Or should I use a general Task in LongRunning mode with a cancellation token, and have a regular infinite while loop inside it calling the action doing the work with a 10 second Thread.Sleep between calls? As for the async/await model, I'm not sure it would be appropriate here as I don't have any return values from the work.
CancellationTokenSource wtoken;
Task task;
void StopWork()
{
wtoken.Cancel();
try
{
task.Wait();
} catch(AggregateException) { }
}
void StartWork()
{
wtoken = new CancellationTokenSource();
task = Task.Factory.StartNew(() =>
{
while (true)
{
wtoken.Token.ThrowIfCancellationRequested();
DoWork();
Thread.Sleep(10000);
}
}, wtoken.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
}
void DoWork()
{
// Some work that takes up to 30 seconds but isn't returning anything.
}
or just use a simple timer while using its AutoReset property, and call .Stop() to cancel it?
I'd use TPL Dataflow for this (since you're using .NET 4.5, and it uses Task internally). You can easily create an ActionBlock<TInput> which posts items to itself after it has processed its action and waited an appropriate amount of time.
First, create a factory that will create your never-ending task:
ITargetBlock<DateTimeOffset> CreateNeverEndingTask(
Action<DateTimeOffset> action, CancellationToken cancellationToken)
{
// Validate parameters.
if (action == null) throw new ArgumentNullException("action");
// Declare the block variable, it needs to be captured.
ActionBlock<DateTimeOffset> block = null;
// Create the block, it will call itself, so
// you need to separate the declaration and
// the assignment.
// Async so you can wait easily when the
// delay comes.
block = new ActionBlock<DateTimeOffset>(async now => {
// Perform the action.
action(now);
// Wait.
await Task.Delay(TimeSpan.FromSeconds(10), cancellationToken).
// Doing this here because synchronization context more than
// likely *doesn't* need to be captured for the continuation
// here. As a matter of fact, that would be downright
// dangerous.
ConfigureAwait(false);
// Post the action back to the block.
block.Post(DateTimeOffset.Now);
}, new ExecutionDataflowBlockOptions {
CancellationToken = cancellationToken
});
// Return the block.
return block;
}
I've chosen the ActionBlock<TInput> to take a DateTimeOffset structure; you have to pass a type parameter, and it might as well pass some useful state (you can change the nature of the state, if you want).
Also, note that the ActionBlock<TInput> by default processes only one item at a time, so you're guaranteed that only one action will be processed (meaning, you won't have to deal with reentrancy when it calls the Post extension method back on itself).
I've also passed the CancellationToken structure to both the constructor of the ActionBlock<TInput> and to the Task.Delay method call; if the process is cancelled, the cancellation will take place at the first possible opportunity.
From there, it's an easy refactoring of your code to store the ITargetBlock<DateTimeOffset> interface implemented by ActionBlock<TInput> (this is the higher-level abstraction representing blocks that are consumers, and you want to be able to trigger the consumption through a call to the Post extension method):
CancellationTokenSource wtoken;
ActionBlock<DateTimeOffset> task;
Your StartWork method:
void StartWork()
{
// Create the token source.
wtoken = new CancellationTokenSource();
// Set the task.
task = CreateNeverEndingTask(now => DoWork(), wtoken.Token);
// Start the task. Post the time.
task.Post(DateTimeOffset.Now);
}
And then your StopWork method:
void StopWork()
{
// CancellationTokenSource implements IDisposable.
using (wtoken)
{
// Cancel. This will cancel the task.
wtoken.Cancel();
}
// Set everything to null, since the references
// are on the class level and keeping them around
// is holding onto invalid state.
wtoken = null;
task = null;
}
Why would you want to use TPL Dataflow here? A few reasons:
Separation of concerns
The CreateNeverEndingTask method is now a factory that creates your "service" so to speak. You control when it starts and stops, and it's completely self-contained. You don't have to interweave state control of the timer with other aspects of your code. You simply create the block, start it, and stop it when you're done.
More efficient use of threads/tasks/resources
The default scheduler for the blocks in TPL data flow is the same for a Task, which is the thread pool. By using the ActionBlock<TInput> to process your action, as well as a call to Task.Delay, you're yielding control of the thread that you were using when you're not actually doing anything. Granted, this actually leads to some overhead when you spawn up the new Task that will process the continuation, but that should be small, considering you aren't processing this in a tight loop (you're waiting ten seconds between invocations).
If the DoWork function actually can be made awaitable (namely, in that it returns a Task), then you can (possibly) optimize this even more by tweaking the factory method above to take a Func<DateTimeOffset, CancellationToken, Task> instead of an Action<DateTimeOffset>, like so:
ITargetBlock<DateTimeOffset> CreateNeverEndingTask(
Func<DateTimeOffset, CancellationToken, Task> action,
CancellationToken cancellationToken)
{
// Validate parameters.
if (action == null) throw new ArgumentNullException("action");
// Declare the block variable, it needs to be captured.
ActionBlock<DateTimeOffset> block = null;
// Create the block, it will call itself, so
// you need to separate the declaration and
// the assignment.
// Async so you can wait easily when the
// delay comes.
block = new ActionBlock<DateTimeOffset>(async now => {
// Perform the action. Wait on the result.
await action(now, cancellationToken).
// Doing this here because synchronization context more than
// likely *doesn't* need to be captured for the continuation
// here. As a matter of fact, that would be downright
// dangerous.
ConfigureAwait(false);
// Wait.
await Task.Delay(TimeSpan.FromSeconds(10), cancellationToken).
// Same as above.
ConfigureAwait(false);
// Post the action back to the block.
block.Post(DateTimeOffset.Now);
}, new ExecutionDataflowBlockOptions {
CancellationToken = cancellationToken
});
// Return the block.
return block;
}
Of course, it would be good practice to weave the CancellationToken through to your method (if it accepts one), which is done here.
That means you would then have a DoWorkAsync method with the following signature:
Task DoWorkAsync(CancellationToken cancellationToken);
You'd have to change (only slightly, and you're not bleeding out separation of concerns here) the StartWork method to account for the new signature passed to the CreateNeverEndingTask method, like so:
void StartWork()
{
// Create the token source.
wtoken = new CancellationTokenSource();
// Set the task.
task = CreateNeverEndingTask((now, ct) => DoWorkAsync(ct), wtoken.Token);
// Start the task. Post the time.
task.Post(DateTimeOffset.Now);
}
I find the new Task-based interface to be very simple for doing things like this - even easier than using the Timer class.
There are some small adjustments you can make to your example. Instead of:
task = Task.Factory.StartNew(() =>
{
while (true)
{
wtoken.Token.ThrowIfCancellationRequested();
DoWork();
Thread.Sleep(10000);
}
}, wtoken.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
You can do this:
task = Task.Run(async () => // <- marked async
{
while (true)
{
DoWork();
await Task.Delay(10000, wtoken.Token); // <- await with cancellation
}
}, wtoken.Token);
This way the cancellation will take effect immediately if it happens while inside the Task.Delay, rather than having to wait for the Thread.Sleep to finish.
Also, using Task.Delay over Thread.Sleep means you aren't tying up a thread doing nothing for the duration of the sleep.
If you're able, you can also make DoWork() accept a cancellation token, and the cancellation will be much more responsive.
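For example, a sketch only, since the body of DoWork isn't shown (GetWorkSteps and ProcessStep below are hypothetical): the work itself can observe the token between steps, and the loop passes the token along:
void DoWork(CancellationToken token)
{
    foreach (var step in GetWorkSteps()) // hypothetical sub-steps of the ~30 second job
    {
        token.ThrowIfCancellationRequested();
        ProcessStep(step);               // hypothetical
    }
}
task = Task.Run(async () =>
{
    while (true)
    {
        DoWork(wtoken.Token);
        await Task.Delay(10000, wtoken.Token);
    }
}, wtoken.Token);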
Here is what I came up with:
Inherit from NeverEndingTask and override the ExecutionCore method with the work you want to do.
Changing ExecutionLoopDelayMs allows you to adjust the time between loops e.g. if you wanted to use a backoff algorithm.
Start/Stop provide a synchronous interface to start/stop task.
LongRunning means you will get one dedicated thread per NeverEndingTask.
This class does not allocate memory in a loop unlike the ActionBlock based solution above.
The code below is a sketch, not necessarily production code. :)
public abstract class NeverEndingTask
{
// Using a CTS allows NeverEndingTask to "cancel itself"
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
protected NeverEndingTask()
{
TheNeverEndingTask = new Task(
() =>
{
// Wait to see if we get cancelled...
while (!_cts.Token.WaitHandle.WaitOne(ExecutionLoopDelayMs))
{
// Otherwise execute our code...
ExecutionCore(_cts.Token);
}
// If we were cancelled, use the idiomatic way to terminate task
_cts.Token.ThrowIfCancellationRequested();
},
_cts.Token,
TaskCreationOptions.DenyChildAttach | TaskCreationOptions.LongRunning);
// Do not forget to observe faulted tasks - for NeverEndingTask faults are probably never desirable
TheNeverEndingTask.ContinueWith(x =>
{
Trace.TraceError(x.Exception.InnerException.Message);
// Log/Fire Events etc.
}, TaskContinuationOptions.OnlyOnFaulted);
}
protected int ExecutionLoopDelayMs = 0;
protected Task TheNeverEndingTask;
public void Start()
{
// Should throw if you try to start twice...
TheNeverEndingTask.Start();
}
protected abstract void ExecutionCore(CancellationToken cancellationToken);
public void Stop()
{
// This code should be reentrant...
_cts.Cancel();
TheNeverEndingTask.Wait();
}
}
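A minimal usage sketch (the PollingTask subclass is made up for illustration, and it assumes ExecutionLoopDelayMs is assignable by derived classes as above):
public class PollingTask : NeverEndingTask
{
    public PollingTask()
    {
        ExecutionLoopDelayMs = 10000; // run roughly every 10 seconds
    }
    protected override void ExecutionCore(CancellationToken cancellationToken)
    {
        // The recurring work; observe the token here too if the work is long-running
        Console.WriteLine("Working at {0}", DateTimeOffset.Now);
    }
}
// ...
var poller = new PollingTask();
poller.Start();
// ... later, on shutdown ...
poller.Stop();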

.NET Framework 4.0: Chaining tasks in a loop

I want to chain multiple Tasks, so that when one ends the next one starts. I know I can do this using ContinueWith. But what if I have a large number of tasks, so that:
t1 continues with t2
t2 continues with t3
t3 continues with t4
...
Is there a nice way to do it, other than creating this chain manually using a loop?
Well, assuming you have some sort of enumerable of Action<Task> delegates representing the things you want to do, you can easily use LINQ to do the following:
// Create the base task. Run synchronously.
var task = new Task(() => { });
task.RunSynchronously();
// Chain them all together.
var query =
// For each action
from action in actions
// Assign the task to the continuation and
// return that.
select (task = task.ContinueWith(action));
// Get the last task to wait on.
// Note that this cannot be changed to "Last"
// because the actions enumeration could have no
// elements, meaning that Last would throw.
// That means task can be null, so a check
// would have to be performed on it before
// waiting on it (unless you are assured that
// there are items in the action enumeration).
task = query.LastOrDefault();
The above code is really your loop, just in a fancier form. It does the same thing in that it takes the previous task (after primed with a dummy "noop" Task) and then adds a continuation in the form of ContinueWith (assigning the continuation to the current task in the process for the next iteration of the loop, which is performed when LastOrDefault is called).
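For comparison, the equivalent explicit loop (same assumption: actions is an enumerable of Action<Task> delegates):
// Prime the chain with a completed no-op task
var task = new Task(() => { });
task.RunSynchronously();
// Append each action as a continuation of the previous task
foreach (var action in actions)
{
    task = task.ContinueWith(action);
}
// task is now the last continuation in the chain
// (or the primed no-op task if actions was empty), so it is safe to wait on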
You may also use Task.Factory.ContinueWhenAll here, so you can pass multiple tasks.
Update
You can use a chaining extension such as this:
public static class MyTaskExtensions
{
public static Task BuildChain(this Task task,
IEnumerable<Action<Task>> actions)
{
if (!actions.Any())
return task;
else
{
Task continueWith = task.ContinueWith(actions.First());
return continueWith.BuildChain(actions.Skip(1));
}
}
}
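A usage sketch (the actions below are illustrative; note the extension class also needs using System.Linq for Any, First and Skip):
var actions = new List<Action<Task>>
{
    t => Console.WriteLine("first"),
    t => Console.WriteLine("second"),
    t => Console.WriteLine("third")
};
var start = new Task(() => { });
Task chain = start.BuildChain(actions); // wires up the continuations
start.Start();                          // kicks off the chain
chain.Wait();                           // completes after the last continuation runs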
