I'm using EF Core, in an ASP.NET Core environment. My context is registered in my DI container as per-request.
I need to perform extra work before the context's SaveChanges() or SaveChangesAsync(), such as validation, auditing, dispatching notifications, etc. Some of that work is sync, and some is async.
So I want to raise a sync or async event to allow listeners to do extra work, block until they are done (!), and then call the DbContext base class to actually save.
public class MyContext : DbContext
{
// sync: ------------------------------
// define sync event handler
public event EventHandler<EventArgs> SavingChanges;
// sync save
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
// raise event for sync handlers to do work BEFORE the save
var handler = SavingChanges;
if (handler != null)
handler(this, EventArgs.Empty);
// all work done, now save
return base.SaveChanges(acceptAllChangesOnSuccess);
}
// async: ------------------------------
// define async event handler
//public event /* ??? */ SavingChangesAsync;
// async save
public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default(CancellationToken))
{
// raise event for async handlers to do work BEFORE the save (block until they are done!)
//await ???
// all work done, now save
return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
}
As you can see, it's easy for SaveChanges(), but how do I do it for SaveChangesAsync()?
So I want to raise a sync or async event to allow listeners to do extra work, block until they are done (!), and then call the DbContext base class to actually save.
As you can see, it's easy for SaveChanges()
Not really... SaveChanges won't wait for any asynchronous handlers to complete. In general, blocking on async work isn't recommended; even in environments such as ASP.NET Core where you won't deadlock, it does impact your scalability. Since your MyContext allows asynchronous handlers, you'd probably want to override SaveChanges to just throw an exception. Or, you could choose to just block, and hope that users won't use asynchronous handlers with synchronous SaveChanges too much.
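For illustration, here is a minimal sketch of the "throw" option (my own addition, not part of the original code); the blocking option is shown in the full example below:
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    // Refuse synchronous saves entirely, so asynchronous handlers are never blocked on.
    throw new NotSupportedException("Use SaveChangesAsync; this context raises asynchronous pre-save events.");
}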
Regarding the implementation itself, there are a few approaches that I describe in my blog post on async events. My personal favorite is the deferral approach, which looks like this (using my Nito.AsyncEx.Oop library):
public class MyEventArgs: EventArgs, IDeferralSource
{
internal DeferralManager DeferralManager { get; } = new DeferralManager();
public IDisposable GetDeferral() => DeferralManager.DeferralSource.GetDeferral();
}
public class MyContext : DbContext
{
public event EventHandler<MyEventArgs> SavingChanges;
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
// You must decide to either throw or block here (see above).
// Example code for blocking.
var args = new MyEventArgs();
SavingChanges?.Invoke(this, args);
args.DeferralManager.WaitForDeferralsAsync().GetAwaiter().GetResult();
return base.SaveChanges(acceptAllChangesOnSuccess);
}
public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default(CancellationToken))
{
var args = new MyEventArgs();
SavingChanges?.Invoke(this, args);
await args.DeferralManager.WaitForDeferralsAsync();
return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
}
// Usage (synchronous handler):
myContext.SavingChanges += (sender, e) =>
{
Thread.Sleep(1000); // Synchronous code
};
// Usage (asynchronous handler):
myContext.SavingChanges += async (sender, e) =>
{
using (e.GetDeferral())
{
await Task.Delay(1000); // Asynchronous code
}
};
There is a simpler way (based on this).
Declare a multicast delegate which returns a Task:
namespace MyProject
{
public delegate Task AsyncEventHandler<TEventArgs>(object sender, TEventArgs e);
}
Update the context (I'm only showing async stuff, because sync stuff is unchanged):
public class MyContext : DbContext
{
public event AsyncEventHandler<EventArgs> SavingChangesAsync;
public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default(CancellationToken))
{
var delegates = SavingChangesAsync;
if (delegates != null)
{
var tasks = delegates
.GetInvocationList()
.Select(d => ((AsyncEventHandler<EventArgs>)d)(this, EventArgs.Empty))
.ToList();
await Task.WhenAll(tasks);
}
return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
}
The calling code looks like this:
context.SavingChanges += OnContextSavingChanges;
context.SavingChangesAsync += OnContextSavingChangesAsync;
public void OnContextSavingChanges(object sender, EventArgs e)
{
someSyncMethod();
}
public async Task OnContextSavingChangesAsync(object sender, EventArgs e)
{
await someAsyncMethod();
}
I'm not sure if this is a 100% safe way to do this. Async events are tricky. I tested with multiple subscribers, and it worked. My environment is ASP.NET Core, so I don't know if it works elsewhere.
I don't know how it compares with the other solution, or which is better, but this one is simpler and makes more sense to me.
EDIT: This works well if your handler doesn't change shared state. If it does, see the much more robust approach by @Stephen Cleary above.
I'd suggest a modification of this async event handler:
public AsyncEvent SavingChangesAsync;
usage
// async save
public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default(CancellationToken))
{
// Awaiting "SavingChangesAsync?.InvokeAsync(...)" directly would throw when no handler is
// attached, because awaiting a null Task is not allowed; check for null first.
if (SavingChangesAsync != null)
    await SavingChangesAsync.InvokeAsync(cancellationToken);
return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
where
public class AsyncEvent
{
private readonly List<Func<CancellationToken, Task>> invocationList;
private readonly object locker;
private AsyncEvent()
{
invocationList = new List<Func<CancellationToken, Task>>();
locker = new object();
}
public static AsyncEvent operator +(
AsyncEvent e, Func<CancellationToken, Task> callback)
{
if (callback == null) throw new ArgumentNullException(nameof(callback));
//Note: Thread safety issue- if two threads register to the same event (on the first time, i.e when it is null)
//they could get a different instance, so whoever was first will be overridden.
//A solution for that would be to switch to a public constructor and use it, but then we'll 'lose' the similar syntax to c# events
if (e == null) e = new AsyncEvent();
lock (e.locker)
{
e.invocationList.Add(callback);
}
return e;
}
public static AsyncEvent operator -(
AsyncEvent e, Func<CancellationToken, Task> callback)
{
if (callback == null) throw new ArgumentNullException(nameof(callback));
if (e == null) return null;
lock (e.locker)
{
e.invocationList.Remove(callback);
}
return e;
}
public async Task InvokeAsync(CancellationToken cancellation)
{
List<Func<CancellationToken, Task>> tmpInvocationList;
lock (locker)
{
tmpInvocationList = new List<Func<CancellationToken, Task>>(invocationList);
}
foreach (var callback in tmpInvocationList)
{
//Assuming we want a serial invocation, for a parallel invocation we can use Task.WhenAll instead
await callback(cancellation);
}
}
}
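Subscribing looks much like a normal event; a minimal usage sketch (handlers have the shape Func<CancellationToken, Task>):
// Subscribe an async handler to the AsyncEvent field on the context
context.SavingChangesAsync += async cancellationToken =>
{
    await Task.Delay(1000, cancellationToken); // placeholder for real async work
};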
In my case, a small tweak to @grokky's answer worked. I needed the event handlers not to run in parallel (as pointed out by @Stephen Cleary), so I invoked them one at a time in a foreach loop instead of using Task.WhenAll.
public delegate Task AsyncEventHandler(object sender, EventArgs e);
public abstract class DbContextBase:DbContext
{
public event AsyncEventHandler SavingChangesAsync;
public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default)
{
await OnSavingChangesAsync(acceptAllChangesOnSuccess);
return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
private async Task OnSavingChangesAsync(bool acceptAllChangesOnSuccess)
{
if (SavingChangesAsync != null)
{
var asyncEventHandlers = SavingChangesAsync.GetInvocationList().Cast<AsyncEventHandler>();
foreach (AsyncEventHandler asyncEventHandler in asyncEventHandlers)
{
await asyncEventHandler.Invoke(this, new SavingChangesEventArgs(acceptAllChangesOnSuccess));
}
}
}
}
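The SavingChangesEventArgs type used above isn't shown; here is my assumption of its minimal shape (adjust it to carry whatever your handlers need):
// Assumed event-args class referenced by OnSavingChangesAsync above.
public class SavingChangesEventArgs : EventArgs
{
    public SavingChangesEventArgs(bool acceptAllChangesOnSuccess)
    {
        AcceptAllChangesOnSuccess = acceptAllChangesOnSuccess;
    }

    public bool AcceptAllChangesOnSuccess { get; }
}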
Related
I have a class which creates a Task which runs during its whole lifetime, that is, until Dispose() is called on it:
In the constructor I call:
_worker = Task.Run(() => ProcessQueueAsync(_workerCancellation.Token), _workerCancellation.Token);
The way I currently do it (which I am also not sure is the right way) is cancelling via the CancellationTokenSource and then blocking on the task.
public void Dispose()
{
if (_isDisposed)
{
return;
}
_workerCancellation.Cancel();
_worker.GetAwaiter().GetResult();
_isDisposed = true;
}
When I do the same in the DisposeAsync method, like so:
public async ValueTask DisposeAsync()
{
await _worker;
}
I get this warning
How do I correctly dispose of such a worker? Thanks!
As requested, here is the full code of what I am trying to do:
public sealed class ActiveObjectWrapper<T, TS> : IAsyncDisposable
{
private bool _isDisposed = false;
private const int DefaultQueueCapacity = 1024;
private readonly Task _worker;
private readonly CancellationTokenSource _workerCancellation;
private readonly Channel<(T, TaskCompletionSource<TS>)> _taskQueue;
private readonly Func<T, TS> _onReceive;
public ActiveObjectWrapper(Func<T, TS> onReceive, int? queueCapacity = null)
{
_onReceive = onReceive;
_taskQueue = Channel.CreateBounded<(T, TaskCompletionSource<TS>)>(queueCapacity ?? DefaultQueueCapacity);
_workerCancellation = new CancellationTokenSource();
_worker = Task.Run(() => ProcessQueueAsync(_workerCancellation.Token), _workerCancellation.Token);
}
private async Task ProcessQueueAsync(CancellationToken cancellationToken)
{
await foreach (var (value, taskCompletionSource) in _taskQueue.Reader.ReadAllAsync(cancellationToken))
{
try
{
var result = _onReceive(value); // todo: do I need to propagate the cancellation token?
taskCompletionSource.SetResult(result);
}
catch (Exception exception)
{
taskCompletionSource.SetException(exception);
}
}
}
public async Task<TS> EnqueueAsync(T value)
{
// see: https://devblogs.microsoft.com/premier-developer/the-danger-of-taskcompletionsourcet-class/
var completionSource = new TaskCompletionSource<TS>(TaskCreationOptions.RunContinuationsAsynchronously);
await _taskQueue.Writer.WriteAsync((value, completionSource));
return await completionSource.Task;
}
public async ValueTask DisposeAsync()
{
if (_isDisposed)
{
return;
}
_taskQueue.Writer.Complete();
_workerCancellation.Cancel();
await _worker;
_isDisposed = true;
}
}
This is the pattern that I use when implementing both IDisposable and IAsyncDisposable. It isn't strictly compliant with the .NET recommendations. I found that implementing DisposeAsyncCore() was unnecessary as my classes are sealed.
public void Dispose()
{
Dispose(disposing: true);
//GC.SuppressFinalize(this);
Worker.GetAwaiter().GetResult();
}
public void Dispose(bool disposing)
{
if (isDisposed)
{
return;
}
if (disposing)
{
lock (isDisposing)
{
if (isDisposed)
{
return;
}
Cts.Cancel();
Cts.Dispose();
isDisposed = true;
}
}
}
public async ValueTask DisposeAsync()
{
Dispose(disposing: true);
//GC.SuppressFinalize(this);
await Worker;
}
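The fields referenced above (Worker, Cts, isDisposed, isDisposing) aren't shown; my assumption of the supporting declarations would be something like:
// Assumed supporting fields for the pattern above.
private readonly Task Worker;                        // the long-running worker task, assigned in the constructor
private readonly CancellationTokenSource Cts = new CancellationTokenSource();
private bool isDisposed;                             // set once disposal has completed
private readonly object isDisposing = new object();  // lock object guarding disposal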
This looks like an attempt to build a TransformBlock<TIn,TOut> on top of Channels. Using a Channel properly wouldn't generate such a warning. There's no reason to use a single processing task, or store it in a field.
Using Blocks
First, the equivalent code using a TransformBlock<TIn,TOut> would be:
var block=new TransformBlock<TIn,TOut>(msgIn=>func(msgIn));
foreach(....)
{
block.Post(someMessage);
}
To stop it, block.Complete() would be enough. Any pending messages would still be processed. A TransformBlock is meant to forward its output to other blocks, e.g. an ActionBlock or BufferBlock.
var finalBlock=new ActionBlock<TOut>(msgOut=>Console.WriteLine(msgOut));
block.LinkTo(finalBlock,new DataflowLinkOptions{PropagateCompletion = true});
...
block.Complete();
await finalBlock.Completion;
Using Channels
Doing something similar with channels doesn't need explicit classes or Enqueue/Dequeue methods. Channels are built with separate ChannelReader and ChannelWriter interfaces to make it easier to control ownership, concurrency and completion.
A similar pipeline using channels requires only a few methods:
ChannelReader<string> FolderToChannel(string path, CancellationToken token = default)
{
    // Channel of file paths; the caller only ever sees the reader side.
    var channel = Channel.CreateUnbounded<string>();
    var writer = channel.Writer;
    _ = Task.Run(async () =>
    {
        foreach (var file in Directory.EnumerateFiles(path))
        {
            await writer.WriteAsync(file, token);
            if (token.IsCancellationRequested)
            {
                return;
            }
        }
    }, token).ContinueWith(t => writer.TryComplete(t.Exception));
    return channel.Reader;
}
This produces a reader that can be passed to a processing method. One that could generate another reader with the results:
static ChannelReader<MyClass> ParseFile(this ChannelReader<string> reader, CancellationToken token)
{
    // Reads paths from the incoming channel, deserializes each file, and writes the results out.
    var channel = Channel.CreateUnbounded<MyClass>();
    var writer = channel.Writer;
    _ = Task.Run(async () =>
    {
        await foreach (var path in reader.ReadAllAsync(token))
        {
            var json = await File.ReadAllTextAsync(path);
            var dto = JsonSerializer.Deserialize<MyClass>(json);
            await writer.WriteAsync(dto, token);
        }
    }, token).ContinueWith(t => writer.TryComplete(t.Exception));
    return channel.Reader;
}
And a final step that only consumes a channel:
static async Task LogIt(this ChannelReader<MyClass> reader,CancellationToken token)
{
await Task.Run(async ()=>{
await foreach(var dto in reader.ReadAllAsync(token))
{
Console.WriteLine(dto);
}
},token);
}
The three steps can be combined very easily:
var cts=new CancellationTokenSource();
var complete=FolderToChannel(somePath,cts.Token)
.ParseFile(cts.Token)
.LogIt(cts.Token);
await complete;
By encapsulating the channel itself and the processing in a method, there's no ambiguity about who owns the channel, or who is responsible for completion or cancellation.
This is my class to handle cron jobs. So far it runs fine on its own. However, how can I modify it so that, when a job is already running, the same job is prevented from starting again?
I am not using any library for this.
public abstract class CronJob : IHostedService, IDisposable
{
private System.Timers.Timer timer;
private readonly CronExpression _expression;
private readonly TimeZoneInfo _timeZoneInfo;
protected CronJob(string cronExpression, TimeZoneInfo timeZoneInfo)
{
_expression = CronExpression.Parse(cronExpression, CronFormat.IncludeSeconds);
_timeZoneInfo = timeZoneInfo;
}
protected virtual async Task ScheduleJob(CancellationToken cancellationToken)
{
var next = _expression.GetNextOccurrence(DateTimeOffset.Now, _timeZoneInfo);
if (next.HasValue)
{
var delay = next.Value - DateTimeOffset.Now;
timer = new System.Timers.Timer(delay.TotalMilliseconds);
timer.Elapsed += async (sender, args) =>
{
timer.Stop(); // reset timer
await DoWork(cancellationToken);
await ScheduleJob(cancellationToken); // reschedule next
};
timer.Start();
}
await Task.CompletedTask;
}
public virtual async Task DoWork(CancellationToken cancellationToken)
{
await Task.Delay(5000, cancellationToken); // do the work
}
public virtual async Task StartAsync(CancellationToken cancellationToken)
{
await ScheduleJob(cancellationToken);
}
public virtual async Task StopAsync(CancellationToken cancellationToken)
{
timer?.Stop();
await Task.CompletedTask;
}
public virtual void Dispose()
{
timer?.Dispose();
}
}
You don't have to, because .NET Core registers hosted services as singletons. The exception is when you host multiple instances of the project that contains this hosted service; in that case you would have to handle multiple instances on your own.
In your case,
This means that ScheduleJob only ever has a single instance and will never have a second copy of itself, unless you run a separate instance of the project that contains it.
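That said, if you still want to guard against overlapping runs, here is a minimal sketch that skips an occurrence while the previous one is still in flight; the _isRunning field is something I'm introducing for illustration, not part of the original class:
private int _isRunning; // 0 = idle, 1 = running

timer.Elapsed += async (sender, args) =>
{
    timer.Stop(); // reset timer
    // Skip this occurrence if the previous DoWork is still in progress.
    if (Interlocked.CompareExchange(ref _isRunning, 1, 0) == 0)
    {
        try { await DoWork(cancellationToken); }
        finally { Interlocked.Exchange(ref _isRunning, 0); }
    }
    await ScheduleJob(cancellationToken); // reschedule next
};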
I'm implementing a worker engine with an upper limit to concurrency. I'm using a semaphore to wait until concurrency drops below the maximum, then use Task.Factory.StartNew to wrap the async handler in a try/catch, with a finally which releases the semaphore.
I realise this creates threads on the thread pool - but my question is, when one of those task-running threads actually awaits (on a real IO call or wait handle), is the thread returned to the pool, as I'd hope it would be?
If there's a better way to implement a task scheduler with limited concurrency where the work handler is an async method (returns Task), I'd love to hear it too. Ideally, if there's a way to queue up an async method (again, a Task-returning async method) that feels less dodgy than wrapping it in a synchronous delegate and passing it to Task.Factory.StartNew, that would be perfect.
(This also makes me think that there are two kinds of parallelism here: how many tasks are being processed overall, but also how many continuations are running on different threads concurrently. Might be cool to have configurable options for both, though not a fixed requirement..)
Edit: snippet:
concurrencySemaphore.Wait(cancelToken);
deferRelease = false;
try
{
var result = GetWorkItem();
if (result == null)
{ // no work, wait for new work or exit signal
signal = WaitHandle.WaitAny(signals);
continue;
}
deferRelease = true;
tasks.Add(Task.Factory.StartNew(() =>
{
try
{
DoWorkHereAsync(result); // guess I'd think to .GetAwaiter().GetResult() here.. not run this yet
}
finally
{
concurrencySemaphore.Release();
}
}, cancelToken));
}
finally
{
if (!deferRelease)
{
concurrencySemaphore.Release();
}
}
Here is an example of a TaskWorker that will not produce countless worker threads.
The magic is done by awaiting SemaphoreSlim.WaitAsync(), which waits asynchronously, so no thread is blocked while waiting.
class TaskWorker
{
private readonly SemaphoreSlim _semaphore;
public TaskWorker(int maxDegreeOfParallelism)
{
if (maxDegreeOfParallelism <= 0)
{
throw new ArgumentOutOfRangeException(nameof(maxDegreeOfParallelism));
}
_semaphore = new SemaphoreSlim(maxDegreeOfParallelism, maxDegreeOfParallelism);
}
public async Task RunAsync(Func<Task> taskFactory, CancellationToken cancellationToken = default(CancellationToken))
{
// No ConfigureAwait(false) here to keep the SyncContext if any
// for the real task
await _semaphore.WaitAsync(cancellationToken);
try
{
await taskFactory().ConfigureAwait(false);
}
finally
{
_semaphore.Release(1);
}
}
public async Task<T> RunAsync<T>(Func<Task<T>> taskFactory, CancellationToken cancellationToken = default(CancellationToken))
{
await _semaphore.WaitAsync(cancellationToken);
try
{
return await taskFactory().ConfigureAwait(false);
}
finally
{
_semaphore.Release(1);
}
}
}
And a simple console app to test it:
class Program
{
static void Main(string[] args)
{
var worker = new TaskWorker(1);
var cts = new CancellationTokenSource();
var token = cts.Token;
var tasks = Enumerable.Range(1, 10)
.Select(e => worker.RunAsync(() => SomeWorkAsync(e, token), token))
.ToArray();
Task.WhenAll(tasks).GetAwaiter().GetResult();
}
static async Task SomeWorkAsync(int id, CancellationToken cancellationToken)
{
Console.WriteLine($"Some Started {id}");
await Task.Delay(2000, cancellationToken).ConfigureAwait(false);
Console.WriteLine($"Some Finished {id}");
}
}
Update
TaskWorker implementing IDisposable
class TaskWorker : IDisposable
{
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
private readonly SemaphoreSlim _semaphore;
private readonly int _maxDegreeOfParallelism;
public TaskWorker(int maxDegreeOfParallelism)
{
if (maxDegreeOfParallelism <= 0)
{
throw new ArgumentOutOfRangeException(nameof(maxDegreeOfParallelism));
}
_maxDegreeOfParallelism = maxDegreeOfParallelism;
_semaphore = new SemaphoreSlim(maxDegreeOfParallelism, maxDegreeOfParallelism);
}
public async Task RunAsync(Func<Task> taskFactory, CancellationToken cancellationToken = default(CancellationToken))
{
ThrowIfDisposed();
using (var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, _cts.Token))
{
// No ConfigureAwait(false) here to keep the SyncContext if any
// for the real task
await _semaphore.WaitAsync(cts.Token);
try
{
await taskFactory().ConfigureAwait(false);
}
finally
{
_semaphore.Release(1);
}
}
}
public async Task<T> RunAsync<T>(Func<Task<T>> taskFactory, CancellationToken cancellationToken = default(CancellationToken))
{
ThrowIfDisposed();
using (var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, _cts.Token))
{
await _semaphore.WaitAsync(cts.Token);
try
{
return await taskFactory().ConfigureAwait(false);
}
finally
{
_semaphore.Release(1);
}
}
}
private void ThrowIfDisposed()
{
if (disposedValue)
{
throw new ObjectDisposedException(this.GetType().FullName);
}
}
#region IDisposable Support
private bool disposedValue = false;
protected virtual void Dispose(bool disposing)
{
if (!disposedValue)
{
if (disposing)
{
_cts.Cancel();
// consume all semaphore slots
for (int i = 0; i < _maxDegreeOfParallelism; i++)
{
_semaphore.WaitAsync().GetAwaiter().GetResult();
}
_semaphore.Dispose();
_cts.Dispose();
}
disposedValue = true;
}
}
public void Dispose()
{
Dispose(true);
}
#endregion
}
You can think of the thread as being returned to the ThreadPool, even though it is not actually a "return": the thread simply picks up the next queued work item when the async operation starts.
I would suggest you look at Task.Run instead of Task.Factory.StartNew (see Task.Run vs Task.Factory.StartNew).
And also have a look at TPL Dataflow. I think it will match your requirements.
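On the Task.Run point: it accepts an async delegate (Func<Task>) and unwraps the inner task for you, so the semaphore release can simply be awaited instead of wrapping the call in a synchronous delegate. A minimal sketch, reusing the names from the question's snippet:
// Task.Run takes a Func<Task> and returns the unwrapped task,
// so the semaphore is released only after the async work truly finishes.
tasks.Add(Task.Run(async () =>
{
    try
    {
        await DoWorkHereAsync(result);
    }
    finally
    {
        concurrencySemaphore.Release();
    }
}, cancelToken));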
I'm using a third-party class that has a long-running method called DoLongWork().
When the user wants to stop the long work, we need to call the method StopWork().
While DoLongWork is running, the UI needs to show a loading bar.
I want to create a third-party proxy class.
In this class I will create a method called StartWork. This method will return a Task, and when the user cancels the task using a CancellationToken, two things will happen:
1) the third-party method StopWork will be called
2) the UI will stop the loading bar.
I tried this, but there are some problems catching the cancellation in the third-party proxy class and bubbling the cancellation up to the ViewModel class.
public class MyViewModel
{
private CancellationTokenSource _cancellationTokenSource;
private ThirdPartyServiceProxy _thirdPartyServiceProxy = new ThirdPartyServiceProxy();
public bool IsUIInLoadingMode { get; set; }
public async void Start()
{
try
{
_cancellationTokenSource = new CancellationTokenSource();
IsUIInLoadingMode = true;
await _thirdPartyServiceProxy.StartWork(_cancellationTokenSource.Token);
_cancellationTokenSource = null;
}
catch (OperationCanceledException)/*Issue - This never called*/
{
IsUIInLoadingMode = false;
}
}
public void Stop()
{
_cancellationTokenSource?.Cancel();
}
}
public class ThirdPartyServiceProxy
{
private ThirdPartyService _thirdPartyService = new ThirdPartyService();
public Task StartWork(CancellationToken token)
{
var task = Task.Factory.StartNew(() =>
{
_thirdPartyService.DoLongWork();
},token);
//?? - Handle when task canceld - call _thirdPartyService.StopWork();
return task;
}
}
There's a couple of common ways to observe cancellation tokens: periodic polling with ThrowIfCancellationRequested and registering callbacks with Register.
In this case, polling isn't possible, since you don't control the code in DoLongWork. So you'll have to register a callback, which is more work.
public void DoWork(CancellationToken token)
{
token.ThrowIfCancellationRequested();
using (token.Register(() => _thirdPartyService.StopWork()))
_thirdPartyService.DoLongWork();
}
This wrapper assumes that DoLongWork will throw OperationCanceledException if canceled by StopWork.
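If the third-party method happens to return normally after StopWork instead of throwing, a small variation (my own addition, not part of the original wrapper) is to observe the token again after the call:
public void DoWork(CancellationToken token)
{
    token.ThrowIfCancellationRequested();
    using (token.Register(() => _thirdPartyService.StopWork()))
        _thirdPartyService.DoLongWork();
    // If DoLongWork returned normally after StopWork, surface the cancellation ourselves.
    token.ThrowIfCancellationRequested();
}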
The wrapper can then be invoked as:
await Task.Run(() => _thirdPartyServiceProxy.DoWork(_cancellationTokenSource.Token));
As a side note, I have switched to Task.Run; this is because StartNew is dangerous.
I have some methods returning Task<T> on which I can await at will. I'd like to have those Tasks executed on a custom TaskScheduler instead of the default one.
var task = GetTaskAsync ();
await task;
I know I can create a new TaskFactory (new CustomScheduler ()) and do a StartNew () from it, but StartNew () takes an action and create the Task, and I already have the Task (returned behind the scenes by a TaskCompletionSource)
How can I specify my own TaskScheduler for await ?
I think what you really want is to do a Task.Run, but with a custom scheduler. StartNew doesn't work intuitively with asynchronous methods; Stephen Toub has a great blog post about the differences between Task.Run and TaskFactory.StartNew.
So, to create your own custom Run, you can do something like this:
private static readonly TaskFactory myTaskFactory = new TaskFactory(
CancellationToken.None, TaskCreationOptions.DenyChildAttach,
TaskContinuationOptions.None, new MyTaskScheduler());
private static Task RunOnMyScheduler(Func<Task> func)
{
return myTaskFactory.StartNew(func).Unwrap();
}
private static Task<T> RunOnMyScheduler<T>(Func<Task<T>> func)
{
return myTaskFactory.StartNew(func).Unwrap();
}
private static Task RunOnMyScheduler(Action func)
{
return myTaskFactory.StartNew(func);
}
private static Task<T> RunOnMyScheduler<T>(Func<T> func)
{
return myTaskFactory.StartNew(func);
}
Then you can execute synchronous or asynchronous methods on your custom scheduler.
The TaskCompletionSource<T>.Task is constructed without any action and the scheduler
is assigned on the first call to ContinueWith(...) (from Asynchronous Programming with the Reactive Framework and the Task Parallel Library — Part 3).
Thankfully you can customize the await behavior slightly by implementing your own class deriving from INotifyCompletion and then using it in a pattern similar to await SomeTask.ConfigureAwait(false) to configure the scheduler that the task should start using in the OnCompleted(Action continuation) method (from await anything;).
Here is the usage:
TaskCompletionSource<object> source = new TaskCompletionSource<object>();
public async Task Foo() {
// Force await to schedule the task on the supplied scheduler
await SomeAsyncTask().ConfigureScheduler(scheduler);
}
public Task SomeAsyncTask() { return source.Task; }
Here is a simple implementation of ConfigureScheduler using a Task extension method with the important part in OnCompleted:
public static class TaskExtension {
public static CustomTaskAwaitable ConfigureScheduler(this Task task, TaskScheduler scheduler) {
return new CustomTaskAwaitable(task, scheduler);
}
}
public struct CustomTaskAwaitable {
CustomTaskAwaiter awaitable;
public CustomTaskAwaitable(Task task, TaskScheduler scheduler) {
awaitable = new CustomTaskAwaiter(task, scheduler);
}
public CustomTaskAwaiter GetAwaiter() { return awaitable; }
public struct CustomTaskAwaiter : INotifyCompletion {
Task task;
TaskScheduler scheduler;
public CustomTaskAwaiter(Task task, TaskScheduler scheduler) {
this.task = task;
this.scheduler = scheduler;
}
public void OnCompleted(Action continuation) {
// ContinueWith sets the scheduler to use for the continuation action
task.ContinueWith(x => continuation(), scheduler);
}
public bool IsCompleted { get { return task.IsCompleted; } }
public void GetResult() { }
}
}
Here's a working sample that will compile as a console application:
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;
namespace Example {
class Program {
static TaskCompletionSource<object> source = new TaskCompletionSource<object>();
static TaskScheduler scheduler = new CustomTaskScheduler();
static void Main(string[] args) {
Console.WriteLine("Main Started");
var task = Foo();
Console.WriteLine("Main Continue ");
// Continue Foo() using CustomTaskScheduler
source.SetResult(null);
Console.WriteLine("Main Finished");
}
public static async Task Foo() {
Console.WriteLine("Foo Started");
// Force await to schedule the task on the supplied scheduler
await SomeAsyncTask().ConfigureScheduler(scheduler);
Console.WriteLine("Foo Finished");
}
public static Task SomeAsyncTask() { return source.Task; }
}
public struct CustomTaskAwaitable {
CustomTaskAwaiter awaitable;
public CustomTaskAwaitable(Task task, TaskScheduler scheduler) {
awaitable = new CustomTaskAwaiter(task, scheduler);
}
public CustomTaskAwaiter GetAwaiter() { return awaitable; }
public struct CustomTaskAwaiter : INotifyCompletion {
Task task;
TaskScheduler scheduler;
public CustomTaskAwaiter(Task task, TaskScheduler scheduler) {
this.task = task;
this.scheduler = scheduler;
}
public void OnCompleted(Action continuation) {
// ContinueWith sets the scheduler to use for the continuation action
task.ContinueWith(x => continuation(), scheduler);
}
public bool IsCompleted { get { return task.IsCompleted; } }
public void GetResult() { }
}
}
public static class TaskExtension {
public static CustomTaskAwaitable ConfigureScheduler(this Task task, TaskScheduler scheduler) {
return new CustomTaskAwaitable(task, scheduler);
}
}
public class CustomTaskScheduler : TaskScheduler {
protected override IEnumerable<Task> GetScheduledTasks() { yield break; }
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) { return false; }
protected override void QueueTask(Task task) {
TryExecuteTask(task);
}
}
}
There is no way to embed rich async features into a custom TaskScheduler. This class was not designed with async/await in mind. The standard way to use a custom TaskScheduler is as an argument to the Task.Factory.StartNew method. This method does not understand async delegates. It is possible to provide an async delegate, but it is treated as any other delegate that returns some result. To get the actual awaited result of the async delegate one must call Unwrap() to the task returned.
This is not the problem though. The problem is that the TaskScheduler infrastructure does not treat the async delegate as a single unit of work. Each task is split into multiple mini-tasks (using every await as a separator), and each mini-task is processed individually. This severely restricts the asynchronous functionality that can be implemented on top of this class. As an example here is a custom TaskScheduler that is intended to queue the supplied tasks one at a time (to limit the concurrency in other words):
public class MyTaskScheduler : TaskScheduler
{
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1);
protected async override void QueueTask(Task task)
{
await _semaphore.WaitAsync();
try
{
await Task.Run(() => base.TryExecuteTask(task));
await task;
}
finally
{
_semaphore.Release();
}
}
protected override bool TryExecuteTaskInline(Task task,
bool taskWasPreviouslyQueued) => false;
protected override IEnumerable<Task> GetScheduledTasks() { yield break; }
}
The SemaphoreSlim should ensure that only one Task would run at a time. Unfortunately it doesn't work. The semaphore is released prematurely, because the Task passed in the call QueueTask(task) is not the task that represents the whole work of the async delegate, but only the part until the first await. The other parts are passed to the TryExecuteTaskInline method. There is no way to correlate these task-parts, because no identifier or other mechanism is provided. Here is what happens in practice:
var taskScheduler = new MyTaskScheduler();
var tasks = Enumerable.Range(1, 5).Select(n => Task.Factory.StartNew(async () =>
{
Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} Item {n} Started");
await Task.Delay(1000);
Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} Item {n} Finished");
}, default, TaskCreationOptions.None, taskScheduler))
.Select(t => t.Unwrap())
.ToArray();
Task.WaitAll(tasks);
Output:
05:29:58.346 Item 1 Started
05:29:58.358 Item 2 Started
05:29:58.358 Item 3 Started
05:29:58.358 Item 4 Started
05:29:58.358 Item 5 Started
05:29:59.358 Item 1 Finished
05:29:59.374 Item 5 Finished
05:29:59.374 Item 4 Finished
05:29:59.374 Item 2 Finished
05:29:59.374 Item 3 Finished
Disaster, all tasks are queued at once.
Conclusion: Customizing the TaskScheduler class is not the way to go when advanced async features are required.
Update: Here is another observation, regarding custom TaskSchedulers in the presence of an ambient SynchronizationContext. The await mechanism by default captures the current SynchronizationContext, or the current TaskScheduler, and invokes the continuation on either the captured context
or the scheduler. If both are present, the current SynchronizationContext is preferred, and the current TaskScheduler is ignored. Below is a demonstration of this behavior, in a WinForms application¹:
private async void Button1_Click(object sender, EventArgs e)
{
await Task.Factory.StartNew(async () =>
{
MessageBox.Show($"{Thread.CurrentThread.ManagedThreadId}, {TaskScheduler.Current}");
await Task.Delay(1000);
MessageBox.Show($"{Thread.CurrentThread.ManagedThreadId}, {TaskScheduler.Current}");
}, default, TaskCreationOptions.None,
TaskScheduler.FromCurrentSynchronizationContext()).Unwrap();
}
Clicking the button causes two messages to popup sequentially, with this information:
1, System.Threading.Tasks.SynchronizationContextTaskScheduler
1, System.Threading.Tasks.ThreadPoolTaskScheduler
This experiment shows that only the first part of the asynchronous delegate, the part before the first await, was scheduled on the non-default scheduler.
This behavior limits even further the practical usefulness of custom TaskSchedulers in an async/await-enabled environment.
¹ Windows Forms applications have a WindowsFormsSynchronizationContext installed automatically, when the Application.Run method is called.
Will this method call fit your needs?
await Task.Factory.StartNew(
() => { /* to do what you need */ },
CancellationToken.None, /* you can change as you need */
TaskCreationOptions.None, /* you can change as you need */
customScheduler);
After the comments, it looks like you want to control the scheduler on which the code after the await runs.
The compiler creates a continuation from the await that runs on the current SynchronizationContext by default. So your best shot is to set up the SynchronizationContext before calling await.
There are some ways to await a specific context. See Configure Await from Jon Skeet, especially the part about SwitchTo, for more information on how to implement something like this.
EDIT:
The SwitchTo method from TaskEx has been removed, as it was too easy to misuse. See the MSDN Forum for reasons.
I faced the same issue and tried to use LimitedConcurrencyLevelTaskScheduler, but it does not support async tasks. So...
I just wrote my own small, simple scheduler that runs async tasks on the global thread pool (via Task.Run) with the ability to limit the current maximum degree of parallelism. It is enough for my exact purposes; maybe it will help you too.
Main demo code (console app, .NET Core 3.1):
static async Task Main(string[] args)
{
//5 tasks to run per time
int concurrentLimit = 5;
var scheduler = new ThreadPoolConcurrentScheduler(concurrentLimit);
//catch all errors in separate event handler
scheduler.OnError += Scheduler_OnError;
// just monitor "live" state and output to console
RunTaskStateMonitor(scheduler);
// simulate adding new tasks "on the fly"
SimulateAddingTasksInParallel(scheduler);
Console.WriteLine("start adding 50 tasks");
//add 50 tasks
for (var i = 1; i <= 50; i++)
{
scheduler.StartNew(myAsyncTask);
}
Console.WriteLine("50 tasks added to scheduler");
Thread.Sleep(1000000);
}
Supporting code (place it in the same class):
private static void Scheduler_OnError(Exception ex)
{
Console.WriteLine(ex.ToString());
}
private static int currentTaskFinished = 0;
//your sample of async task
static async Task myAsyncTask()
{
Console.WriteLine("task started ");
using (HttpClient httpClient = new HttpClient())
{
//just make http request to ... wikipedia!
//sorry, Jimmy Wales! assume,guys, you will not DDOS wiki :)
var uri = new Uri("https://wikipedia.org/");
var response = await httpClient.GetAsync(uri);
string result = await response.Content.ReadAsStringAsync();
if (string.IsNullOrEmpty(result))
Console.WriteLine("error, await is not working");
else
Console.WriteLine($"task result : site length is {result.Length}");
}
//or simulate it using by sync sleep
//Thread.Sleep(1000);
//and for tesing exception :
//throw new Exception("my custom error");
Console.WriteLine("task finished ");
//just incrementing total ran tasks to output in console
Interlocked.Increment(ref currentTaskFinished);
}
static void SimulateAddingTasksInParallel(ThreadPoolConcurrentScheduler taskScheduler)
{
int runCount = 0;
Task.Factory.StartNew(() =>
{
while (true)
{
runCount++;
if (runCount > 5)
break;
//every 10 sec 5 times
Thread.Sleep(10000);
//adding new 5 tasks from outer task
Console.WriteLine("start adding new 5 tasks!");
for (var i = 1; i <= 5; i++)
{
taskScheduler.StartNew(myAsyncTask);
}
Console.WriteLine("new 5 tasks added!");
}
}, TaskCreationOptions.LongRunning);
}
static void RunTaskStateMonitor(ThreadPoolConcurrentScheduler taskScheduler)
{
int prev = -1;
int prevQueueSize = -1;
int prevFinished = -1;
Task.Factory.StartNew(() =>
{
while (true)
{
// getting current thread count in working state
var currCount = taskScheduler.GetCurrentWorkingThreadCount();
// getting inner queue state
var queueSize = taskScheduler.GetQueueTaskCount();
//just output overall state if something changed
if (prev != currCount || queueSize != prevQueueSize || prevFinished != currentTaskFinished)
{
Console.WriteLine($"Monitor : running tasks:{currCount}, queueLength:{queueSize}. total Finished tasks : " + currentTaskFinished);
prev = currCount;
prevQueueSize = queueSize;
prevFinished = currentTaskFinished;
}
// check it every 10 ms
Thread.Sleep(10);
}
}
, TaskCreationOptions.LongRunning);
}
Scheduler :
public class ThreadPoolConcurrentScheduler
{
private readonly int _limitParallelThreadsCount;
private int _threadInProgressCount = 0;
public delegate void onErrorDelegate(Exception ex);
public event onErrorDelegate OnError;
private ConcurrentQueue<Func<Task>> _taskQueue;
private readonly object _queueLocker = new object();
public ThreadPoolConcurrentScheduler(int limitParallelThreadsCount)
{
//set maximum parallel tasks to run
_limitParallelThreadsCount = limitParallelThreadsCount;
// thread-safe queue to store tasks
_taskQueue = new ConcurrentQueue<Func<Task>>();
}
//main method to start async task
public void StartNew(Func<Task> task)
{
lock (_queueLocker)
{
// checking limit
if (_threadInProgressCount >= _limitParallelThreadsCount)
{
//waiting new "free" threads in queue
_scheduleTask(task);
}
else
{
_startNewTask(task);
}
}
}
private void _startNewTask(Func<Task> task)
{
Interlocked.Increment(ref _threadInProgressCount);
Task.Run(async () =>
{
try
{
await task();
}
catch (Exception e)
{
//Console.WriteLine(e);
OnError?.Invoke(e);
}
}).ContinueWith(_onTaskEnded);
}
//will be called on task end
private void _onTaskEnded(Task task)
{
lock (_queueLocker)
{
Interlocked.Decrement(ref _threadInProgressCount);
//queue has more priority, so if thread is free - let's check queue first
if (!_taskQueue.IsEmpty)
{
if (_taskQueue.TryDequeue(out var result))
{
_startNewTask(result);
}
}
}
}
private void _scheduleTask(Func<Task> task)
{
_taskQueue.Enqueue(task);
}
//returning in progress task count
public int GetCurrentWorkingThreadCount()
{
return _threadInProgressCount;
}
//return number of tasks waiting to run
public int GetQueueTaskCount()
{
lock (_queueLocker) return _taskQueue.Count;
}
}
A few notes:
First, check the comments on it; maybe it is the worst code ever!
I did not test it in production.
I did not implement cancellation tokens or other functionality that should be there, because I'm too lazy. Sorry.