I have an MVC application in which action methods must be executed in a certain order. Recently I have been seeing some strange issues, and I suspect they are due to the fact that I do not do any thread synchronization. I have barely worked with multithreading and don't know much about it. I tried to implement some kind of locking where I lock according to an Id, so I implemented the class below to get the required lock objects.
public class ReportLockProvider
: IReportLockProvider
{
readonly ConcurrentDictionary<long, object> lockDictionary
= new ConcurrentDictionary<long, object>();
public object ProvideLockObject(long reportId)
{
return lockDictionary.GetOrAdd(reportId, new object());
}
}
I tried to use this as below:
ReportLockProvider lockProvider = new ReportLockProvider();
public async Task<ActionResult> MyAction(long reportId)
{
lock(lockProvider.ProvideLockObject(reportId))
{
// Some operations
await Something();
// Some operation
}
}
I hoped that it would work, but it didn't even compile because I used await inside the lock body. I searched a bit and came across SemaphoreSlim in this answer. Now the problem is that I have to get the lock object according to the Id. How can I do this? Is it OK to create multiple SemaphoreSlim objects? Is it OK if I modify the code like below:
public class ReportLockProvider
: IReportLockProvider
{
readonly ConcurrentDictionary<long, SemaphoreSlim> lockDictionary
= new ConcurrentDictionary<long, SemaphoreSlim>();
public SemaphoreSlim ProvideLockObject(long reportId)
{
return lockDictionary.GetOrAdd(reportId, new SemaphoreSlim(1, 1));
}
}
public async Task<ActionResult> MyAction(long reportId)
{
var lockObject = lockProvider.ProvideLockObject(reportId);
await lockObject.WaitAsync();
try
{
// Some operations
await Something();
// Some operation
}
finally
{
lockObject.Release();
}
}
The other question is: can I use SemaphoreSlim in non-async methods? Is there a better option?
I think you are missing a static keyword in front of your lockDictionary, but it depends on how you instantiate the provider.
Here is a sample, with a few small changes, that I cooked up in LINQPad:
async Task Main()
{
ReportLockProvider reportLockProvider = new ReportLockProvider();
List<Task> tasks = new List<Task>(10);
for (long i = 1; i <= 5; i++) {
var local = i;
tasks.Add(Task.Run(() => Enter(local) ));
tasks.Add(Task.Run(() => Enter(local) ));
}
async Task Enter(long id)
{
Console.WriteLine(id + " waiting to enter");
await reportLockProvider.WaitAsync(id);
Console.WriteLine(id + " entered!");
Thread.Sleep(1000 * (int)id);
Console.WriteLine(id + " releasing");
reportLockProvider.Release(id);
}
await Task.WhenAll(tasks.ToArray());
}
public class ReportLockProvider
{
static readonly ConcurrentDictionary<long, SemaphoreSlim> lockDictionary = new ConcurrentDictionary<long, SemaphoreSlim>();
public async Task WaitAsync(long reportId)
{
await lockDictionary.GetOrAdd(reportId, new SemaphoreSlim(1, 1)).WaitAsync();
}
public void Release(long reportId)
{
SemaphoreSlim semaphore;
if (lockDictionary.TryGetValue(reportId, out semaphore))
{
semaphore.Release();
}
}
}
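Two side notes on the provider above, offered as a hedged sketch rather than a change to the answer: the GetOrAdd overload that takes a factory avoids constructing a throw-away SemaphoreSlim on every lookup, and the same semaphore also answers the non-async part of the question, because synchronous code can simply call Wait() instead of WaitAsync(). In the second snippet, lockProvider stands for the shared provider field from the question.
using System.Collections.Concurrent;
using System.Threading;

public class ReportLockProvider
{
    // Shared across requests; the factory only runs when the key is missing
    // (two racing callers may each build a semaphore, but only one is kept).
    static readonly ConcurrentDictionary<long, SemaphoreSlim> lockDictionary =
        new ConcurrentDictionary<long, SemaphoreSlim>();

    public SemaphoreSlim ProvideLockObject(long reportId)
    {
        return lockDictionary.GetOrAdd(reportId, _ => new SemaphoreSlim(1, 1));
    }
}

// Synchronous caller: the same semaphore works, just block instead of awaiting.
public void MySyncAction(long reportId)
{
    // lockProvider is the shared ReportLockProvider field from the question
    var gate = lockProvider.ProvideLockObject(reportId);
    gate.Wait();
    try
    {
        // Some operations
    }
    finally
    {
        gate.Release();
    }
}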
I'm trying to understand the usage of async-await in C# 5. If I start two jobs in a method, is there a best way to wait for their completion in C# 5+? I've written the example below, but I fail to see what the async-await keywords bring here besides the free documentation of marking the method async.
In the following example I want "FINISHED !" to be printed last, but it is not. What did I miss? How can I make the async method wait until all jobs are finished? Is there a point in using async-await here? I could just call Task.WaitAll from a non-async method. I don't really understand what async brings when you want to wait.
class Program
{
static void Main(string[] args)
{
var fooWorker = new FooWorker();
var barWorker = new BarWorker();
var test = new Class1(fooWorker, barWorker);
test.SomeWork();
Console.ReadLine();
}
}
public class Foo
{
public Foo(string bar) => Bar = bar;
public string Bar { get; }
}
public class Class1
{
private IEnumerable<Foo> _foos;
private readonly FooWorker _fooWorker;
private readonly BarWorker _barWorker;
public Class1(FooWorker fooWorker, BarWorker barWorker)
{
_fooWorker = fooWorker;
_barWorker = barWorker;
}
public void SomeWork()
{
_foos = ProduceManyFoo();
MoreWork();
Console.WriteLine("FINISHED !");
}
private async void MoreWork()
{
if (_foos == null || !_foos.Any()) return;
var fooList = _foos.ToList();
Task fooWorkingTask = _fooWorker.Work(fooList);
Task barWorkingTask = _barWorker.Work(fooList);
await Task.WhenAll(fooWorkingTask, barWorkingTask);
}
private IEnumerable<Foo> ProduceManyFoo()
{
int i = 0;
while (++i < 100) yield return new Foo(DateTime.Now.ToString(CultureInfo.InvariantCulture));
}
}
public abstract class AWorker
{
protected virtual void DoStuff(IEnumerable<Foo> foos)
{
foreach (var foo in foos)
{
Console.WriteLine(foo.Bar);
}
}
public Task Work(IEnumerable<Foo> foos) => Task.Run(() => DoStuff(foos));
}
public class FooWorker : AWorker { }
public class BarWorker : AWorker { }
You are firing off tasks and just forgetting them, while the thread continues running. This fixes it.
Main:
static async Task Main(string[] args)
{
var fooWorker = new FooWorker();
var barWorker = new BarWorker();
var test = new Class1(fooWorker, barWorker);
await test.SomeWork();
Console.ReadLine();
}
SomeWork:
public async Task SomeWork()
{
_foos = ProduceManyFoo();
await MoreWork();
Console.WriteLine("FINISHED !");
}
MoreWork signature change:
private async Task MoreWork()
The obvious code smell, which should help make the problem clear, is the use of async void. Unless it is required, it should always be avoided.
When using async and await you'll usually want to chain the await calls to the top-level (in this case Main).
await is non-blocking, so anything that calls an async method should really care about the Task being returned.
Summary: I would like to call an asynchronous method in a constructor. Is this possible?
Details: I have a method called getWritings() that parses JSON data. Everything works fine if I just call getWritings() in an async method and put await to the left of it. However, when I create a LongListView in my page and try to populate it, I'm finding that getWritings() is surprisingly returning null and the LongListView is empty.
To address this problem, I tried changing the return type of getWritings() to Task<List<Writing>> and then retrieving the result in the constructor via getWritings().Result. However, doing that ends up blocking the UI thread.
public partial class Page2 : PhoneApplicationPage
{
List<Writing> writings;
public Page2()
{
InitializeComponent();
getWritings();
}
private async void getWritings()
{
string jsonData = await JsonDataManager.GetJsonAsync("1");
JObject obj = JObject.Parse(jsonData);
JArray array = (JArray)obj["posts"];
for (int i = 0; i < array.Count; i++)
{
Writing writing = new Writing();
writing.content = JsonDataManager.JsonParse(array, i, "content");
writing.date = JsonDataManager.JsonParse(array, i, "date");
writing.image = JsonDataManager.JsonParse(array, i, "url");
writing.summary = JsonDataManager.JsonParse(array, i, "excerpt");
writing.title = JsonDataManager.JsonParse(array, i, "title");
writings.Add(writing);
}
myLongList.ItemsSource = writings;
}
}
The best solution is to acknowledge the asynchronous nature of the download and design for it.
In other words, decide what your application should look like while the data is downloading. Have the page constructor set up that view, and start the download. When the download completes update the page to display the data.
I have a blog post on asynchronous constructors that you may find useful. Also, some MSDN articles; one on asynchronous data-binding (if you're using MVVM) and another on asynchronous best practices (i.e., you should avoid async void).
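A minimal sketch of that idea, applied to the page from the question (LoadWritingsAsync is an assumed name, and getWritings is assumed to be reworked into a GetWritingsAsync that returns Task<List<Writing>>):
public partial class Page2 : PhoneApplicationPage
{
    // Kept so the page could observe or await the load later if needed.
    private readonly Task loadTask;

    public Page2()
    {
        InitializeComponent();
        // The constructor only starts the download; the page shows its
        // "loading" view (spinner, placeholder text, ...) in the meantime.
        loadTask = LoadWritingsAsync();
    }

    private async Task LoadWritingsAsync()
    {
        // getWritings from the question, reworked to return Task<List<Writing>>
        List<Writing> writings = await GetWritingsAsync();

        // Back on the UI thread here, so it is safe to update the page.
        myLongList.ItemsSource = writings;
    }
}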
You can also just do this:
Task.Run(() => this.FunctionAsync()).Wait();
Note: Be careful about thread blocking!
I'd like to share a pattern that I've been using to solve these kinds of problems. It works rather well I think. Of course, it only works if you have control over what calls the constructor.
public class MyClass
{
public static async Task<MyClass> Create()
{
var myClass = new MyClass();
await myClass.Initialize();
return myClass;
}
private MyClass()
{
}
private async Task Initialize()
{
await Task.Delay(1000); // Do whatever asynchronous work you need to do
}
}
Basically, we make the constructor private and expose a public static async method that is responsible for creating an instance of MyClass. By making the constructor private and keeping the static method within the same class, we ensure that no one can "accidentally" create an instance of this class without calling the proper initialization method.
All the logic around the creation of the object is still contained within the class (just within a static method).
var myClass1 = new MyClass() // Cannot be done, the constructor is private
var myClass2 = MyClass.Create() // Returns a Task that promises an instance of MyClass once it's finished
var myClass3 = await MyClass.Create() // asynchronously creates and initializes an instance of MyClass
Applied to the current scenario, it would look something like this:
public partial class Page2 : PhoneApplicationPage
{
public static async Task<Page2> Create()
{
var page = new Page2();
await page.getWritings();
return page;
}
List<Writing> writings = new List<Writing>();
private Page2()
{
InitializeComponent();
}
private async Task getWritings()
{
string jsonData = await JsonDataManager.GetJsonAsync("1");
JObject obj = JObject.Parse(jsonData);
JArray array = (JArray)obj["posts"];
for (int i = 0; i < array.Count; i++)
{
Writing writing = new Writing();
writing.content = JsonDataManager.JsonParse(array, i, "content");
writing.date = JsonDataManager.JsonParse(array, i, "date");
writing.image = JsonDataManager.JsonParse(array, i, "url");
writing.summary = JsonDataManager.JsonParse(array, i, "excerpt");
writing.title = JsonDataManager.JsonParse(array, i, "title");
writings.Add(writing);
}
myLongList.ItemsSource = writings;
}
}
Instead of doing
var page = new Page2();
you would be using:
var page = await Page2.Create();
A quick way to execute a time-consuming operation in any constructor is to create an action and run it asynchronously:
new Action(async () => await InitializeThingsAsync())();
Running this piece of code will neither block your UI nor leave you with any loose threads. If you need to update the UI (assuming you are not using the MVVM approach), you can use the Dispatcher to do so, as many have suggested.
A note: this option only gives you a way to start the execution of a method from the constructor if you don't have any init, onload or navigated overrides. Most likely it will keep running even after construction has completed, so the result of the method call may NOT be available in the constructor itself.
My preferred approach:
// caution: fire and forget
Task.Run(async () => await someAsyncFunc());
Try to replace this:
myLongList.ItemsSource = writings;
with this
Dispatcher.BeginInvoke(() => myLongList.ItemsSource = writings);
To put it simply, referring to Stephen Cleary's answer https://stackoverflow.com/a/23051370/267000:
your page should create the tasks in its constructor, and you should declare those tasks as class members or put them in a task pool.
The data is fetched inside those tasks, but the tasks should be awaited later in the code, e.g. in some UI handler such as an OK click.
I developed such apps in WP; we had a whole bunch of tasks created on start.
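A hedged sketch of what that looks like (OkButton_Click is an assumed handler name, and the question's getWritings is assumed to be changed into GetWritingsAsync returning Task<List<Writing>>):
public partial class Page2 : PhoneApplicationPage
{
    private readonly Task<List<Writing>> writingsTask;

    public Page2()
    {
        InitializeComponent();
        // Start the work immediately; nothing is awaited in the constructor.
        writingsTask = GetWritingsAsync();
    }

    private async void OkButton_Click(object sender, RoutedEventArgs e)
    {
        // Await the already-running task at the point where the data is needed.
        myLongList.ItemsSource = await writingsTask;
    }
}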
You could try AsyncMVVM.
Page2.xaml:
<PhoneApplicationPage x:Class="Page2"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
<ListView ItemsSource="{Binding Writings}" />
</PhoneApplicationPage>
Page2.xaml.cs:
public partial class Page2
{
public Page2()
{
InitializeComponent();
DataContext = new ViewModel2();
}
}
ViewModel2.cs:
public class ViewModel2: AsyncBindableBase
{
public IEnumerable<Writing> Writings
{
get { return Property.Get(GetWritingsAsync); }
}
private async Task<IEnumerable<Writing>> GetWritingsAsync()
{
// yield return cannot be used in an async Task method, so build the list and return it
string jsonData = await JsonDataManager.GetJsonAsync("1");
JObject obj = JObject.Parse(jsonData);
JArray array = (JArray)obj["posts"];
var writings = new List<Writing>();
for (int i = 0; i < array.Count; i++)
{
Writing writing = new Writing();
writing.content = JsonDataManager.JsonParse(array, i, "content");
writing.date = JsonDataManager.JsonParse(array, i, "date");
writing.image = JsonDataManager.JsonParse(array, i, "url");
writing.summary = JsonDataManager.JsonParse(array, i, "excerpt");
writing.title = JsonDataManager.JsonParse(array, i, "title");
writings.Add(writing);
}
return writings;
}
}
Don't ever call .Wait() or .Result, as this will lock up your app.
Don't spin up a new Task either; just call ContinueWith:
public class myClass
{
public myClass()
{
GetMessageAsync().ContinueWith(GetResultAsync);
}
async Task<string> GetMessageAsync()
{
return await Service.GetMessageFromAPI();
}
private async Task GetResultAsync(Task<string> resultTask)
{
if (resultTask.IsFaulted)
{
Log(resultTask.Exception);
}
else
{
//do what ever you need from the result
}
}
}
https://learn.microsoft.com/en-us/dotnet/standard/asynchronous-programming-patterns/consuming-the-task-based-asynchronous-pattern
A little late to the party, but I think many are struggling with this...
I've been searching for this as well. To get your method/action running asynchronously without waiting for it or blocking the thread, you'll need to queue it via the SynchronizationContext, so I came up with this solution:
I've made a helper class for it.
public static class ASyncHelper
{
public static void RunAsync(Func<Task> func)
{
var context = SynchronizationContext.Current;
// you don't want to run it on a threadpool. So if it is null,
// you're not on a UI thread.
if (context == null)
throw new NotSupportedException(
"The current thread doesn't have a SynchronizationContext");
// post an Action as async and await the function in it.
context.Post(new SendOrPostCallback(async state => await func()), null);
}
public static void RunAsync<T>(Func<T, Task> func, T argument)
{
var context = SynchronizationContext.Current;
// you don't want to run it on a threadpool. So if it is null,
// you're not on a UI thread.
if (context == null)
throw new NotSupportedException(
"The current thread doesn't have a SynchronizationContext");
// post an Action as async and await the function in it.
context.Post(new SendOrPostCallback(async state => await func((T)state)), argument);
}
}
Usage/Example:
public partial class Form1 : Form
{
private async Task Initialize()
{
// replace code here...
await Task.Delay(1000);
}
private async Task Run(string myString)
{
// replace code here...
await Task.Delay(1000);
}
public Form1()
{
InitializeComponent();
// you don't have to await anything (the UI thread must be running)
ASyncHelper.RunAsync(Initialize);
ASyncHelper.RunAsync(Run, "test");
// In your case (getWritings would need to return Task instead of void)
ASyncHelper.RunAsync(getWritings);
}
}
This works for Windows.Forms and WPF
In order to use async within the constructor and ensure the data is available when you instantiate the class, you can use this simple pattern:
class FooClass : IFooAsync
{
public FooClass()
{
this.FooAsync = InitFooTask();
}
public Task FooAsync { get; }
private async Task InitFooTask()
{
await Task.Delay(5000);
}
}
The interface:
public interface IFooAsync
{
Task FooAsync { get; }
}
The usage:
FooClass foo = new FooClass();
if (foo is IFooAsync)
await foo.FooAsync;
Brian Lagunas has shown a solution that I really like. More info in his YouTube video.
Solution:
Add a TaskExtensions method
public static class TaskExtensions
{
public static async void Await(this Task task, Action completedCallback = null, Action<Exception> errorCallBack = null)
{
try
{
await task;
completedCallback?.Invoke();
}
catch (Exception e)
{
errorCallBack?.Invoke(e);
}
}
}
Usage:
public class MyClass
{
public MyClass()
{
DoSomething().Await();
// DoSomething().Await(Completed, HandleError);
}
async Task DoSomething()
{
await Task.Delay(3000);
//Some works here
//throw new Exception("Thrown in task");
}
private void Completed()
{
//some thing;
}
private void HandleError(Exception ex)
{
//handle error
}
}
The answer is simple: if you are developing a UWP app, add the async function to the Page_Loaded handler of the page.
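A minimal sketch of that, assuming a typical UWP page (MainPage and LoadDataAsync are placeholder names):
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();
        this.Loaded += Page_Loaded;   // fires once the page has been constructed and shown
    }

    private async void Page_Loaded(object sender, RoutedEventArgs e)
    {
        // async void is acceptable here because this is an event handler.
        await LoadDataAsync();
    }

    private async Task LoadDataAsync()
    {
        await Task.Delay(1000);   // stand-in for the real asynchronous work
    }
}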
If you want it to wait for the task to be done, you can improve madlars' code like below (I tried it on .NET Core 3.1 and it worked):
var taskVar = Task.Run(async () => await someAsyncFunc());
taskVar.Wait();
You could put the async calls in a separate method and call that method in the constructor.
Note, however, that this may lead to a situation where some variable values are not yet available at the time you expect them.
public NewTravelPageVM(){
GetVenues();
}
async void GetVenues(){
var locator = CrossGeolocator.Current;
var position = await locator.GetPositionAsync();
Venues = await Venue.GetVenues(position.Latitude, position.Longitude);
}
I've got this pattern for preventing an async method from being called again before a previous call has had a chance to complete.
My solution, which involves a flag and then locking around the flag, feels pretty verbose. Is there a more natural way of achieving this?
public class MyClass
{
private object SyncIsFooRunning = new object();
private bool IsFooRunning { get; set;}
public async Task FooAsync()
{
try
{
lock(SyncIsFooRunning)
{
if(IsFooRunning)
return;
IsFooRunning = true;
}
// Use a semaphore to enforce maximum number of Tasks which are able to run concurrently.
var semaphoreSlim = new SemaphoreSlim(5);
var trackedTasks = new List<Task>();
for(int i = 0; i < 100; i++)
{
await semaphoreSlim.WaitAsync();
trackedTasks.Add(Task.Run(() =>
{
// DoTask();
semaphoreSlim.Release();
}));
}
// Using await makes try/catch/finally possible.
await Task.WhenAll(trackedTasks);
}
finally
{
lock(SyncIsFooRunning)
{
IsFooRunning = false;
}
}
}
}
As noted in the comments, you can use Interlocked.CompareExchange() if you prefer:
public class MyClass
{
private int _flag;
public async Task FooAsync()
{
try
{
if (Interlocked.CompareExchange(ref _flag, 1, 0) == 1)
{
return;
}
// do stuff
}
finally
{
Interlocked.Exchange(ref _flag, 0);
}
}
}
That said, I think it's overkill. Nothing wrong with using lock in this type of scenario, especially if you don't expect a lot of contention on the method. What I do think would be better is to wrap the method so that the caller can always await on the result, whether a new asynchronous operation was started or not:
public class MyClass
{
private readonly object _lock = new object();
private Task _task;
public Task FooAsync()
{
lock (_lock)
{
return _task != null ? _task : (_task = FooAsyncImpl());
}
}
public async Task FooAsyncImpl()
{
try
{
// do async stuff
}
finally
{
lock (_lock) _task = null;
}
}
}
Finally, in the comments, you say this:
Seems a bit odd that all the return types are still valid for Task?
It's not clear to me what you mean by that. In your method, the only valid return types would be void and Task. If your return statement(s) returned an actual value, you'd have to use Task<T>, where T is the type returned by the return statement(s).
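For illustration, here is a sketch of a value-returning variant of the wrapper above (Task<int> stands in for whatever Task<T> your return values would require):
public class MyClass
{
    private readonly object _lock = new object();
    private Task<int> _task;

    public Task<int> FooAsync()
    {
        lock (_lock)
        {
            // Every caller gets the same in-flight task, whether it started it or not.
            return _task ?? (_task = FooAsyncImpl());
        }
    }

    private async Task<int> FooAsyncImpl()
    {
        try
        {
            await Task.Delay(1000);   // placeholder for the real asynchronous work
            return 42;                // returning a value is what forces Task<int>
        }
        finally
        {
            lock (_lock) _task = null;
        }
    }
}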
Let's say I have a method that gets data from a server:
Task<Result> GetDataFromServerAsync(...)
If there is a call already in progress, I don't want to start a new request to the server but wait for the original one to finish.
Let's say I have
var result = await objet.GetDataFromServerAsync(...);
and in a different place, called at almost the same time, I have a second call
var result2 = await objet.GetDataFromServerAsync(...);
I don't want the second call to start a new request to the server if the first hasn't finished. I want both calls to get the same result as soon as the first call finishes. This is a proof of concept; I have options, but I wanted to see how easy it is to do this.
Here is a quick example using Lazy<Task<T>>:
var lazyGetDataFromServer = new Lazy<Task<Result>>
(() => objet.GetDataFromServerAsync(...));
var result = await lazyGetDataFromServer.Value;
var result2 = await lazyGetDataFromServer.Value;
It doesn't matter if these two awaits are done from separate threads, as Lazy<T> is thread-safe by default, so if result2 runs second it will still wait for and use the same output as result.
Using the code from here you can wrap this up in a class called AsyncLazy<T> and add a custom GetAwaiter, so that you can just await it without needing to go through .Value. Very tidy =)
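For reference, a minimal AsyncLazy<T> along those lines might look like this (a sketch based on the commonly used wrapper, not the exact code behind the link):
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public class AsyncLazy<T> : Lazy<Task<T>>
{
    // Task.Run unwraps the Func<Task<T>>, so the factory starts on the thread pool
    // the first time the value is requested.
    public AsyncLazy(Func<Task<T>> taskFactory)
        : base(() => Task.Run(taskFactory)) { }

    // Lets callers write "await lazyInstance" directly, without ".Value".
    public TaskAwaiter<T> GetAwaiter() { return Value.GetAwaiter(); }
}

// Usage, mirroring the question's elided call:
// var lazyGetDataFromServer = new AsyncLazy<Result>(() => objet.GetDataFromServerAsync(...));
// var result  = await lazyGetDataFromServer;
// var result2 = await lazyGetDataFromServer;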
You can use anything for synchronization in your method.
For example, I used SemaphoreSlim:
public class PartyMaker
{
private bool _isProcessing;
private readonly SemaphoreSlim _slowStuffSemaphore = new SemaphoreSlim(1, 1);
private DateTime _something;
public async Task<DateTime> ShakeItAsync()
{
try
{
var needNewRequest = !_isProcessing;
await _slowStuffSemaphore.WaitAsync().ConfigureAwait(false);
if (!needNewRequest) return _something;
_isProcessing = true;
_something = await ShakeItSlowlyAsync().ConfigureAwait(false);
return _something;
}
finally
{
_isProcessing = false;
_slowStuffSemaphore.Release();
}
}
private async Task<DateTime> ShakeItSlowlyAsync()
{
await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
return DateTime.UtcNow;
}
}
Usage:
var maker = new PartyMaker();
var tasks = new[] {maker.ShakeItAsync(), maker.ShakeItAsync()};
Task.WaitAll(tasks);
foreach (var task in tasks)
{
Console.WriteLine(task.Result);
}
Console.WriteLine(maker.ShakeItAsync().Result);
Here is the result:
17.01.2017 22:28:39
17.01.2017 22:28:39
17.01.2017 22:28:41
UPD
With this modification you can call async methods with args:
public class PartyMaker
{
private readonly SemaphoreSlim _slowStuffSemaphore = new SemaphoreSlim(1, 1);
private readonly ConcurrentDictionary<int, int> _requestCounts = new ConcurrentDictionary<int, int>();
private readonly ConcurrentDictionary<int, DateTime> _cache = new ConcurrentDictionary<int, DateTime>();
public async Task<DateTime> ShakeItAsync(Argument argument)
{
var key = argument.GetHashCode();
DateTime result;
try
{
if (!_requestCounts.ContainsKey(key))
{
_requestCounts[key] = 1;
}
else
{
++_requestCounts[key];
}
var needNewRequest = _requestCounts[key] == 1;
await _slowStuffSemaphore.WaitAsync().ConfigureAwait(false);
if (!needNewRequest)
{
_cache.TryGetValue(key, out result);
return result;
}
_cache.TryAdd(key, default(DateTime));
result = await ShakeItSlowlyAsync().ConfigureAwait(false);
_cache[key] = result;
return result;
}
finally
{
_requestCounts[key]--;
if (_requestCounts[key] == 0)
{
int temp;
_requestCounts.TryRemove(key, out temp);
_cache.TryRemove(key, out result);
}
_slowStuffSemaphore.Release();
}
}
private async Task<DateTime> ShakeItSlowlyAsync()
{
await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
return DateTime.UtcNow;
}
}
public class Argument
{
public Argument(int value)
{
Value = value;
}
public int Value { get; }
public override int GetHashCode()
{
return Value.GetHashCode();
}
}
I would like to have a custom thread pool satisfying the following requirements:
Real threads are preallocated according to the pool capacity. The actual work is free to use the standard .NET thread pool, if needed to spawn concurrent tasks.
The pool must be able to return the number of idle threads. The returned number may be less than the actual number of the idle threads, but it must not be greater. Of course, the more accurate the number the better.
Queuing work to the pool should return a corresponding Task, which should play nicely with the Task-based API.
NEW The max job capacity (or degree of parallelism) should be adjustable dynamically. Trying to reduce the capacity does not have to take effect immediately, but increasing it should do so immediately.
The rationale for the first item is depicted below:
The machine is not supposed to be running more than N work items concurrently, where N is relatively small - between 10 and 30.
The work is fetched from the database and if K items are fetched then we want to make sure that there are K idle threads to start the work right away. A situation where work is fetched from the database, but remains waiting for the next available thread is unacceptable.
The last item also explains the reason for having the idle thread count - I am going to fetch that many work items from the database. It also explains why the reported idle thread count must never be higher than the actual one - otherwise I might fetch more work than can be immediately started.
Anyway, here is my implementation along with a small program to test it (BJE stands for Background Job Engine):
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace TaskStartLatency
{
public class BJEThreadPool
{
private sealed class InternalTaskScheduler : TaskScheduler
{
private int m_idleThreadCount;
private readonly BlockingCollection<Task> m_bus;
public InternalTaskScheduler(int threadCount, BlockingCollection<Task> bus)
{
m_idleThreadCount = threadCount;
m_bus = bus;
}
public void RunInline(Task task)
{
Interlocked.Decrement(ref m_idleThreadCount);
try
{
TryExecuteTask(task);
}
catch
{
// The action is responsible itself for the error handling, for the time being...
}
Interlocked.Increment(ref m_idleThreadCount);
}
public int IdleThreadCount
{
get { return m_idleThreadCount; }
}
#region Overrides of TaskScheduler
protected override void QueueTask(Task task)
{
m_bus.Add(task);
}
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
{
return TryExecuteTask(task);
}
protected override IEnumerable<Task> GetScheduledTasks()
{
throw new NotSupportedException();
}
#endregion
public void DecrementIdleThreadCount()
{
Interlocked.Decrement(ref m_idleThreadCount);
}
}
private class ThreadContext
{
private readonly InternalTaskScheduler m_ts;
private readonly BlockingCollection<Task> m_bus;
private readonly CancellationTokenSource m_cts;
public readonly Thread Thread;
public ThreadContext(string name, InternalTaskScheduler ts, BlockingCollection<Task> bus, CancellationTokenSource cts)
{
m_ts = ts;
m_bus = bus;
m_cts = cts;
Thread = new Thread(Start)
{
IsBackground = true,
Name = name
};
Thread.Start();
}
private void Start()
{
try
{
foreach (var task in m_bus.GetConsumingEnumerable(m_cts.Token))
{
m_ts.RunInline(task);
}
}
catch (OperationCanceledException)
{
}
m_ts.DecrementIdleThreadCount();
}
}
private readonly InternalTaskScheduler m_ts;
private readonly CancellationTokenSource m_cts = new CancellationTokenSource();
private readonly BlockingCollection<Task> m_bus = new BlockingCollection<Task>();
private readonly List<ThreadContext> m_threadCtxs = new List<ThreadContext>();
public BJEThreadPool(int threadCount)
{
m_ts = new InternalTaskScheduler(threadCount, m_bus);
for (int i = 0; i < threadCount; ++i)
{
m_threadCtxs.Add(new ThreadContext("BJE Thread " + i, m_ts, m_bus, m_cts));
}
}
public void Terminate()
{
m_cts.Cancel();
foreach (var t in m_threadCtxs)
{
t.Thread.Join();
}
}
public Task Run(Action<CancellationToken> action)
{
return Task.Factory.StartNew(() => action(m_cts.Token), m_cts.Token, TaskCreationOptions.DenyChildAttach, m_ts);
}
public Task Run(Action action)
{
return Task.Factory.StartNew(action, m_cts.Token, TaskCreationOptions.DenyChildAttach, m_ts);
}
public int IdleThreadCount
{
get { return m_ts.IdleThreadCount; }
}
}
class Program
{
static void Main()
{
const int THREAD_COUNT = 32;
var pool = new BJEThreadPool(THREAD_COUNT);
var tcs = new TaskCompletionSource<bool>();
var tasks = new List<Task>();
var allRunning = new CountdownEvent(THREAD_COUNT);
for (int i = pool.IdleThreadCount; i > 0; --i)
{
var index = i;
tasks.Add(pool.Run(cancellationToken =>
{
Console.WriteLine("Started action " + index);
allRunning.Signal();
tcs.Task.Wait(cancellationToken);
Console.WriteLine(" Ended action " + index);
}));
}
Console.WriteLine("pool.IdleThreadCount = " + pool.IdleThreadCount);
allRunning.Wait();
Debug.Assert(pool.IdleThreadCount == 0);
int expectedIdleThreadCount = THREAD_COUNT;
Console.WriteLine("Press [c]ancel, [e]rror, [a]bort or any other key");
switch (Console.ReadKey().KeyChar)
{
case 'c':
Console.WriteLine("Cancel All");
tcs.TrySetCanceled();
break;
case 'e':
Console.WriteLine("Error All");
tcs.TrySetException(new Exception("Failed"));
break;
case 'a':
Console.WriteLine("Abort All");
pool.Terminate();
expectedIdleThreadCount = 0;
break;
default:
Console.WriteLine("Done All");
tcs.TrySetResult(true);
break;
}
try
{
Task.WaitAll(tasks.ToArray());
}
catch (AggregateException exc)
{
Console.WriteLine(exc.Flatten().InnerException.Message);
}
Debug.Assert(pool.IdleThreadCount == expectedIdleThreadCount);
pool.Terminate();
Console.WriteLine("Press any key");
Console.ReadKey();
}
}
}
It is a very simple implementation and it appears to be working. However, there is a problem - the BJEThreadPool.Run method does not accept asynchronous methods. I.e. my implementation does not allow me to add the following overloads:
public Task Run(Func<CancellationToken, Task> action)
{
return Task.Factory.StartNew(() => action(m_cts.Token), m_cts.Token, TaskCreationOptions.DenyChildAttach, m_ts).Unwrap();
}
public Task Run(Func<Task> action)
{
return Task.Factory.StartNew(action, m_cts.Token, TaskCreationOptions.DenyChildAttach, m_ts).Unwrap();
}
The pattern I use in InternalTaskScheduler.RunInline does not work in this case.
So, my question is how to add the support for asynchronous work items? I am fine with changing the entire design as long as the requirements outlined at the beginning of the post are upheld.
EDIT
I would like to clarify the intended usage of the desired pool. Please observe the following code:
if (pool.IdleThreadCount == 0)
{
return;
}
foreach (var jobData in FetchFromDB(pool.IdleThreadCount))
{
pool.Run(CreateJobAction(jobData));
}
Notes:
The code is going to be run periodically, say every 1 minute.
The code is going to be run concurrently by multiple machines watching the same database.
FetchFromDB is going to use the technique described in Using SQL Server as a DB queue with multiple clients to atomically fetch and lock the work from the DB.
CreateJobAction is going to invoke the code denoted by jobData (the job code) and close the work upon the completion of that code. The job code is out of my control and it could be pretty much anything - heavy CPU-bound code, light asynchronous IO-bound code, badly written synchronous IO-bound code, or a mix of all of these. It could run for minutes and it could run for hours. Closing the work is my code and it would be asynchronous IO-bound code. Because of this, the signature of the returned job action is that of an asynchronous method.
Item 2 underlines the importance of correctly identifying the number of idle threads. If there are 900 pending work items and 10 agent machines, I cannot allow an agent to fetch 300 work items and queue them on the thread pool. Why? Because it is most unlikely that the agent will be able to run 300 work items concurrently. It will run some, sure enough, but others will be waiting in the thread pool work queue. Suppose it runs 100 and lets 200 wait (even though 100 is probably far-fetched). This yields 3 fully loaded agents and 7 idle ones, but only 300 work items out of 900 are actually being processed concurrently!
My goal is to maximize the spread of the work amongst the available agents. Ideally, I should evaluate the load of an agent and the "heaviness" of the pending work, but it is a formidable task and is reserved for the future versions. Right now, I wish to assign each agent the max job capacity with the intention to provide the means to increase/decrease it dynamically without restarting the agents.
Next observation: the work can take quite a long time to run and it could be all synchronous code. As far as I understand, it is undesirable to use thread pool threads for that kind of work.
EDIT 2
There is a statement that TaskScheduler is only for CPU-bound work. But what if I do not know the nature of the work? It is a general-purpose Background Job Engine and it runs thousands of different kinds of jobs. I have no means to tell "this job is CPU bound", "that one is synchronous IO bound" and "yet another one is asynchronous IO bound". I wish I could, but I cannot.
EDIT 3
In the end, I do not use the SemaphoreSlim, but neither do I use the TaskScheduler - it finally trickled through my thick skull that it is inappropriate and plain wrong, plus it makes the code overly complex.
Still, I fail to see how SemaphoreSlim is the way. The proposed pattern:
public async Task Enqueue(Func<Task> taskGenerator)
{
await semaphore.WaitAsync();
try
{
await taskGenerator();
}
finally
{
semaphore.Release();
}
}
expects taskGenerator either to be asynchronous IO-bound code or to open a new thread otherwise. However, I have no means to determine which kind of work is going to be executed. Plus, as I have learned from SemaphoreSlim.WaitAsync continuation code, if the semaphore is unlocked the code following WaitAsync() is going to run on the same thread, which is not very good for me.
Anyway, below is my implementation, in case anyone fancies it. Unfortunately, I have yet to understand how to reduce the pool thread count dynamically, but that is a topic for another question.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace TaskStartLatency
{
public interface IBJEThreadPool
{
void SetThreadCount(int threadCount);
void Terminate();
Task Run(Action action);
Task Run(Action<CancellationToken> action);
Task Run(Func<Task> action);
Task Run(Func<CancellationToken, Task> action);
int IdleThreadCount { get; }
}
public class BJEThreadPool : IBJEThreadPool
{
private interface IActionContext
{
Task Run(CancellationToken ct);
TaskCompletionSource<object> TaskCompletionSource { get; }
}
private class ActionContext : IActionContext
{
private readonly Action m_action;
public ActionContext(Action action)
{
m_action = action;
TaskCompletionSource = new TaskCompletionSource<object>();
}
#region Implementation of IActionContext
public Task Run(CancellationToken ct)
{
m_action();
return null;
}
public TaskCompletionSource<object> TaskCompletionSource { get; private set; }
#endregion
}
private class CancellableActionContext : IActionContext
{
private readonly Action<CancellationToken> m_action;
public CancellableActionContext(Action<CancellationToken> action)
{
m_action = action;
TaskCompletionSource = new TaskCompletionSource<object>();
}
#region Implementation of IActionContext
public Task Run(CancellationToken ct)
{
m_action(ct);
return null;
}
public TaskCompletionSource<object> TaskCompletionSource { get; private set; }
#endregion
}
private class AsyncActionContext : IActionContext
{
private readonly Func<Task> m_action;
public AsyncActionContext(Func<Task> action)
{
m_action = action;
TaskCompletionSource = new TaskCompletionSource<object>();
}
#region Implementation of IActionContext
public Task Run(CancellationToken ct)
{
return m_action();
}
public TaskCompletionSource<object> TaskCompletionSource { get; private set; }
#endregion
}
private class AsyncCancellableActionContext : IActionContext
{
private readonly Func<CancellationToken, Task> m_action;
public AsyncCancellableActionContext(Func<CancellationToken, Task> action)
{
m_action = action;
TaskCompletionSource = new TaskCompletionSource<object>();
}
#region Implementation of IActionContext
public Task Run(CancellationToken ct)
{
return m_action(ct);
}
public TaskCompletionSource<object> TaskCompletionSource { get; private set; }
#endregion
}
private readonly CancellationTokenSource m_ctsTerminateAll = new CancellationTokenSource();
private readonly BlockingCollection<IActionContext> m_bus = new BlockingCollection<IActionContext>();
private readonly LinkedList<Thread> m_threads = new LinkedList<Thread>();
private int m_idleThreadCount;
private static int s_threadCount;
public BJEThreadPool(int threadCount)
{
ReserveAdditionalThreads(threadCount);
}
private void ReserveAdditionalThreads(int n)
{
for (int i = 0; i < n; ++i)
{
var index = Interlocked.Increment(ref s_threadCount) - 1;
var t = new Thread(Start)
{
IsBackground = true,
Name = "BJE Thread " + index
};
Interlocked.Increment(ref m_idleThreadCount);
t.Start();
m_threads.AddLast(t);
}
}
private void Start()
{
try
{
foreach (var actionContext in m_bus.GetConsumingEnumerable(m_ctsTerminateAll.Token))
{
RunWork(actionContext).Wait();
}
}
catch (OperationCanceledException)
{
}
catch
{
// Should never happen - log the error
}
Interlocked.Decrement(ref m_idleThreadCount);
}
private async Task RunWork(IActionContext actionContext)
{
Interlocked.Decrement(ref m_idleThreadCount);
try
{
var task = actionContext.Run(m_ctsTerminateAll.Token);
if (task != null)
{
await task;
}
actionContext.TaskCompletionSource.SetResult(null);
}
catch (OperationCanceledException)
{
actionContext.TaskCompletionSource.TrySetCanceled();
}
catch (Exception exc)
{
actionContext.TaskCompletionSource.TrySetException(exc);
}
Interlocked.Increment(ref m_idleThreadCount);
}
private Task PostWork(IActionContext actionContext)
{
m_bus.Add(actionContext);
return actionContext.TaskCompletionSource.Task;
}
#region Implementation of IBJEThreadPool
public void SetThreadCount(int threadCount)
{
if (threadCount > m_threads.Count)
{
ReserveAdditionalThreads(threadCount - m_threads.Count);
}
else if (threadCount < m_threads.Count)
{
throw new NotSupportedException();
}
}
public void Terminate()
{
m_ctsTerminateAll.Cancel();
foreach (var t in m_threads)
{
t.Join();
}
}
public Task Run(Action action)
{
return PostWork(new ActionContext(action));
}
public Task Run(Action<CancellationToken> action)
{
return PostWork(new CancellableActionContext(action));
}
public Task Run(Func<Task> action)
{
return PostWork(new AsyncActionContext(action));
}
public Task Run(Func<CancellationToken, Task> action)
{
return PostWork(new AsyncCancellableActionContext(action));
}
public int IdleThreadCount
{
get { return m_idleThreadCount; }
}
#endregion
}
public static class Extensions
{
public static Task WithCancellation(this Task task, CancellationToken token)
{
return task.ContinueWith(t => t.GetAwaiter().GetResult(), token);
}
}
class Program
{
static void Main()
{
const int THREAD_COUNT = 16;
var pool = new BJEThreadPool(THREAD_COUNT);
var tcs = new TaskCompletionSource<bool>();
var tasks = new List<Task>();
var allRunning = new CountdownEvent(THREAD_COUNT);
for (int i = pool.IdleThreadCount; i > 0; --i)
{
var index = i;
tasks.Add(pool.Run(async ct =>
{
Console.WriteLine("Started action " + index);
allRunning.Signal();
await tcs.Task.WithCancellation(ct);
Console.WriteLine(" Ended action " + index);
}));
}
Console.WriteLine("pool.IdleThreadCount = " + pool.IdleThreadCount);
allRunning.Wait();
Debug.Assert(pool.IdleThreadCount == 0);
int expectedIdleThreadCount = THREAD_COUNT;
Console.WriteLine("Press [c]ancel, [e]rror, [a]bort or any other key");
switch (Console.ReadKey().KeyChar)
{
case 'c':
Console.WriteLine("ancel All");
tcs.TrySetCanceled();
break;
case 'e':
Console.WriteLine("rror All");
tcs.TrySetException(new Exception("Failed"));
break;
case 'a':
Console.WriteLine("bort All");
pool.Terminate();
expectedIdleThreadCount = 0;
break;
default:
Console.WriteLine("Done All");
tcs.TrySetResult(true);
break;
}
try
{
Task.WaitAll(tasks.ToArray());
}
catch (AggregateException exc)
{
Console.WriteLine(exc.Flatten().InnerException.Message);
}
Debug.Assert(pool.IdleThreadCount == expectedIdleThreadCount);
pool.Terminate();
Console.WriteLine("Press any key");
Console.ReadKey();
}
}
}
Asynchronous "work items" are often based on async IO. Async IO does not use threads while it runs. Task schedulers are used to execute CPU work (tasks based on a delegate). The concept TaskScheduler does not apply. You cannot use a custom TaskScheduler to influence what async code does.
Make your work items throttle themselves:
static SemaphoreSlim sem = new SemaphoreSlim(maxDegreeOfParallelism); //shared object
async Task MyWorkerFunction()
{
await sem.WaitAsync();
try
{
MyWork();
}
finally
{
sem.Release();
}
}
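A quick usage sketch (StartAllAsync is made up here; MyWorkerFunction, MyWork and maxDegreeOfParallelism come from the snippet above): even if all work items are started at once, the shared semaphore caps how many of them execute at the same time.
async Task StartAllAsync()
{
    // Start 100 work items up front; at most maxDegreeOfParallelism of them
    // run MyWork() at the same time, the rest wait inside sem.WaitAsync().
    var tasks = Enumerable.Range(0, 100)
                          .Select(_ => MyWorkerFunction())
                          .ToList();
    await Task.WhenAll(tasks);
}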
As mentioned in another answer by usr, you can't do this with a TaskScheduler, as that is only for CPU-bound work; it does not limit the degree of parallelism for all types of work, asynchronous or not. He also shows how you can use a SemaphoreSlim to asynchronously limit the degree of parallelism.
You can expand on this to generalize the concept in a few ways. The one that seems most beneficial to you would be to create a special type of queue that takes operations returning a Task and executes them in such a way that a given maximum degree of parallelism is maintained.
public class FixedParallelismQueue
{
private SemaphoreSlim semaphore;
public FixedParallelismQueue(int maxDegreesOfParallelism)
{
semaphore = new SemaphoreSlim(maxDegreesOfParallelism,
maxDegreesOfParallelism);
}
public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
{
await semaphore.WaitAsync();
try
{
return await taskGenerator();
}
finally
{
semaphore.Release();
}
}
public async Task Enqueue(Func<Task> taskGenerator)
{
await semaphore.WaitAsync();
try
{
await taskGenerator();
}
finally
{
semaphore.Release();
}
}
}
This allows you to create a queue for your application (you can even have several separate queues if you want) that has a fixed degree of parallelism. You can then hand it operations that return a Task when they complete; the queue will schedule them when it can and return a Task representing when each unit of work has finished.
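A possible usage sketch, where JobData and RunJobAsync stand in for the job representation and the job-running code from the question:
async Task ProcessPendingJobsAsync(IEnumerable<JobData> jobDataItems)
{
    // A single long-lived queue shared per agent would be the more typical setup;
    // it is created inline here just to keep the sketch short.
    var queue = new FixedParallelismQueue(16);

    // Each job is handed to the queue as a Task-returning delegate. At most 16
    // of them run at any time, and the caller still gets one Task per job to await.
    var jobs = jobDataItems
        .Select(jobData => queue.Enqueue(() => RunJobAsync(jobData)))
        .ToList();

    await Task.WhenAll(jobs);
}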