Is Task.Factory.StartNew() guaranteed to use another thread than the calling thread? - c#

I am starting a new task from a function but I would not want it to run on the same thread. I don't care which thread it runs on as long as it is a different one (so the information given in this question does not help).
Am I guaranteed that the below code will always exit TestLock before allowing Task t to enter it again? If not, what is the recommended design pattern to prevent re-entrancy?
object TestLock = new object();
public void Test(bool stop = false) {
    Task t;
    lock (this.TestLock) {
        if (stop) return;
        t = Task.Factory.StartNew(() => { this.Test(stop: true); });
    }
    t.Wait();
}
Edit: Based on the below answer by Jon Skeet and Stephen Toub, a simple way to deterministically prevent reentrancy would be to pass a CancellationToken, as illustrated in this extension method:
public static Task StartNewOnDifferentThread(this TaskFactory taskFactory, Action action)
{
return taskFactory.StartNew(action: action, cancellationToken: new CancellationToken());
}

I mailed Stephen Toub - a member of the PFX Team - about this question. He's come back to me really quickly, with a lot of detail - so I'll just copy and paste his text here. I haven't quoted it all, as reading a large amount of quoted text ends up getting less comfortable than vanilla black-on-white, but really, this is Stephen - I don't know this much stuff :) I've made this answer community wiki to reflect that all the goodness below isn't really my content:
If you call Wait() on a Task that's completed, there won't be any blocking (it'll just throw an exception if the task completed with a TaskStatus other than RanToCompletion, or otherwise return as a nop). If you call Wait() on a Task that's already executing, it must block as there’s nothing else it can reasonably do (when I say block, I'm including both true kernel-based waiting and spinning, as it'll typically do a mixture of both). Similarly, if you call Wait() on a Task that has the Created or WaitingForActivation status, it’ll block until the task has completed. None of those is the interesting case being discussed.
The interesting case is when you call Wait() on a Task in the WaitingToRun state, meaning that it’s previously been queued to a TaskScheduler but that TaskScheduler hasn't yet gotten around to actually running the Task's delegate yet. In that case, the call to Wait will ask the scheduler whether it's ok to run the Task then-and-there on the current thread, via a call to the scheduler's TryExecuteTaskInline method. This is called inlining. The scheduler can choose to either inline the task via a call to base.TryExecuteTask, or it can return 'false' to indicate that it is not executing the task (often this is done with logic like...
return SomeSchedulerSpecificCondition() ? false : TryExecuteTask(task);
The reason TryExecuteTask returns a Boolean is that it handles the synchronization to ensure a given Task is only ever executed once). So, if a scheduler wants to completely prohibit inlining of the Task during Wait, it can just be implemented as return false; If a scheduler wants to always allow inlining whenever possible, it can just be implemented as:
return TryExecuteTask(task);
In the current implementation (both .NET 4 and .NET 4.5, and I don’t personally expect this to change), the default scheduler that targets the ThreadPool allows for inlining if the current thread is a ThreadPool thread and if that thread was the one to have previously queued the task.
Note that there isn't arbitrary reentrancy here, in that the default scheduler won’t pump arbitrary threads when waiting for a task... it'll only allow that task to be inlined, and of course any inlining that task in turn decides to do. Also note that Wait won’t even ask the scheduler in certain conditions, instead preferring to block. For example, if you pass in a cancelable CancellationToken, or if you pass in a non-infinite timeout, it won’t try to inline because it could take an arbitrarily long amount of time to inline the task's execution, which is all or nothing, and that could end up significantly delaying the cancellation request or timeout. Overall, TPL tries to strike a decent balance here between wasting the thread that’s doing the Wait'ing and reusing that thread for too much. This kind of inlining is really important for recursive divide-and-conquer problems (e.g. QuickSort) where you spawn multiple tasks and then wait for them all to complete. If such were done without inlining, you’d very quickly deadlock as you exhaust all threads in the pool and any future ones it wanted to give to you.
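To make those scheduler hooks concrete, here is a minimal sketch (my own illustration, not from Stephen's mail) of a scheduler that queues everything to the thread pool and never allows a Wait()'ing thread to inline a task:
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class NoInliningTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        // TryExecuteTask handles the synchronization so the task runs exactly once.
        ThreadPool.QueueUserWorkItem(_ => TryExecuteTask(task));
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // prohibit inlining entirely
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return Enumerable.Empty<Task>(); // debugger support only; nothing is tracked here
    }
}
A TaskFactory built around such a scheduler (new TaskFactory(new NoInliningTaskScheduler())) would then never run a queued task inline inside Wait().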
Separate from Wait, it’s also (remotely) possible that the Task.Factory.StartNew call could end up executing the task then and there, iff the scheduler being used chose to run the task synchronously as part of the QueueTask call. None of the schedulers built into .NET will ever do this, and I personally think it would be a bad design for a scheduler, but it’s theoretically possible, e.g.:
protected override void QueueTask(Task task)
{
    // Hypothetical bad-design scheduler: runs the task synchronously as part of queueing it.
    TryExecuteTask(task);
}
The overload of Task.Factory.StartNew that doesn’t accept a TaskScheduler uses the scheduler from the TaskFactory, which in the case of Task.Factory targets TaskScheduler.Current. This means if you call Task.Factory.StartNew from within a Task queued to this mythical RunSynchronouslyTaskScheduler, it would also queue to RunSynchronouslyTaskScheduler, resulting in the StartNew call executing the Task synchronously. If you’re at all concerned about this (e.g. you’re implementing a library and you don’t know where you’re going to be called from), you can explicitly pass TaskScheduler.Default to the StartNew call, use Task.Run (which always goes to TaskScheduler.Default), or use a TaskFactory created to target TaskScheduler.Default.
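As a hedged sketch of that defensive pattern (DoWork is a placeholder for your own delegate):
// Explicitly target the thread-pool scheduler so a custom TaskScheduler.Current is not inherited.
Task t1 = Task.Factory.StartNew(
    () => DoWork(),
    CancellationToken.None,
    TaskCreationOptions.None,
    TaskScheduler.Default);

// Or, on .NET 4.5+, Task.Run always uses TaskScheduler.Default:
Task t2 = Task.Run(() => DoWork());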
EDIT: Okay, it looks like I was completely wrong, and a thread which is currently waiting on a task can be hijacked. Here's a simpler example of this happening:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            for (int i = 0; i < 10; i++)
            {
                Task.Factory.StartNew(Launch).Wait();
            }
        }

        static void Launch()
        {
            Console.WriteLine("Launch thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
            Task.Factory.StartNew(Nested).Wait();
        }

        static void Nested()
        {
            Console.WriteLine("Nested thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
        }
    }
}
Sample output:
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
As you can see, there are lots of times when the waiting thread is reused to execute the new task. This can happen even if the thread has acquired a lock. Nasty re-entrancy. I am suitably shocked and worried :(

Why not just design for it, rather than bend over backwards to ensure it doesn't happen?
The TPL is a red herring here; reentrancy can happen in any code, provided you can create a cycle, and you don't know for sure what's going to happen 'south' of your stack frame. Synchronous reentrancy is the best outcome here - at least you can't deadlock yourself (as easily).
Locks manage cross-thread synchronisation. They are orthogonal to managing reentrancy. Unless you are protecting a genuine single-use resource (probably a physical device, in which case you should probably use a queue), why not just ensure your instance state is consistent so reentrancy can 'just work'?
(Side thought: are Semaphores reentrant without decrementing?)

You could easily test this by writing a quick app that shares a socket connection between threads / tasks.
The task would acquire a lock before sending a message down the socket and waiting for a response. Once this blocks and becomes idle (IO block), set another task in the same block to do the same. It should block on acquiring the lock; if it does not, and the second task is allowed to pass the lock because it is run by the same thread, then you have a problem.

The solution with new CancellationToken() proposed by Erwin did not work for me; inlining happened anyway.
So I ended up using another condition advised by Jon and Stephen
(... or if you pass in a non-infinite timeout ...):
Task<TResult> task = Task.Run(func);
task.Wait(TimeSpan.FromHours(1)); // Whatever is enough for task to start
return task.Result;
Note: exception handling, etc. is omitted here for simplicity; you should mind those things in production code.
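Based on the same sentence from Stephen's mail, passing a cancelable token to Wait should also skip the inlining attempt; a sketch of that variant:
var cts = new CancellationTokenSource();
Task<TResult> task = Task.Run(func);
task.Wait(cts.Token); // a cancelable token makes Wait prefer blocking over inlining
return task.Result;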

Related

Two Tasks run on the same thread which invalidates lock

Edit
I found that Building Async Coordination Primitives, Part 1: AsyncManualResetEvent might be related to my topic.
In the case of TaskCompletionSource, that means that synchronous continuations can happen as part of a call to {Try}Set*, which means in our AsyncManualResetEvent example, those continuations could execute as part of the Set method. Depending on your needs (and whether callers of Set may be ok with a potentially longer-running Set call as all synchronous continuations execute), this may or may not be what you want.
Many thanks to all of the answers, thank you for your knowledge and patience!
Original Question
I know that Task.Run runs on a threadpool thread and threads can have re-entrancy. But I never knew that 2 tasks can run on the same thread when they are both alive!
My question is: is that reasonable by design? Does that mean a lock inside an async method is meaningless (or rather, that lock cannot be trusted in an async method block, if I'd like a method that doesn't allow reentrancy)?
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace TaskHijacking
{
    class Program
    {
        static TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();
        static object methodLock = new object();

        static void MethodNotAllowReetrance(string callerName)
        {
            lock (methodLock)
            {
                Console.WriteLine($"Enter MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
                if (callerName == "task1")
                {
                    tcs.SetException(new Exception("Terminate tcs"));
                }
                Thread.Sleep(1000);
                Console.WriteLine($"Exit MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
            }
        }

        static void Main(string[] args)
        {
            var task1 = Task.Run(async () =>
            {
                await Task.Delay(1000);
                MethodNotAllowReetrance("task1");
            });

            var task2 = Task.Run(async () =>
            {
                try
                {
                    await tcs.Task; // await here until task1 calls SetException on tcs
                }
                catch
                {
                    // Omit the exception
                }
                MethodNotAllowReetrance("task2");
            });

            Task.WaitAll(task1, task2);
            Console.ReadKey();
        }
    }
}
Output:
Enter MethodNotAllowReetrance, caller: task1, on thread: 6
Enter MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task1, on thread: 6
The control flow of thread 6 (figure omitted here): the same thread enters the lock for task1, completes the tcs, which synchronously runs task2's continuation, and so re-enters the method while the lock is still held.
You already have several solutions. I just want to describe the problem a bit more. There are several factors at play here that combine to cause the observed re-entrancy.
First, lock is re-entrant. lock is strictly about mutual exclusion of threads, which is not the same as mutual exclusion of code. I think re-entrant locks are a bad idea in the 99% case (as described on my blog), since developers generally want mutual exclusion of code and not threads. SemaphoreSlim, since it is not re-entrant, mutually excludes code. IMO re-entrant locks are a holdover from decades ago, when they were introduced as an OS concept, and the OS is just concerned about managing threads.
Next, TaskCompletionSource<T> by default invokes task continuations synchronously.
Also, await will schedule its method continuation as a synchronous task continuation (as described on my blog).
Task continuations will sometimes run asynchronously even if scheduled synchronously, but in this scenario they will run synchronously. The context captured by await is the thread pool context, and the completing thread (the one calling TCS.TrySet*) is a thread pool thread, and in that case the continuation will almost always run synchronously.
So, you end up with a thread that takes a lock, completes a TCS, thus executing the continuations of that task, which includes continuing another method, which is then able to take that same lock.
To repeat the existing solutions in other answers, to solve this you need to break that chain at some point:
(OK) Use a non-reentrant lock. SemaphoreSlim.WaitAsync will still execute the continuations while holding the lock (not a good idea), but since SemaphoreSlim isn't re-entrant, the method continuation will (asynchronously) wait for the lock to be available.
(Best) Create the TaskCompletionSource with TaskCreationOptions.RunContinuationsAsynchronously, which will force task continuations onto a (different) thread pool thread; see the sketch after this list. This is a better solution because your code is no longer invoking arbitrary code while holding a lock (i.e., the task continuations).
You can also break the chain by using a non-thread-pool context for the method awaiting the TCS. E.g., if that method had to resume on a UI thread, then it could not be run synchronously from a thread pool thread.
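A minimal sketch of the (Best) option above, assuming .NET 4.6 or later:
// Continuations attached to tcs.Task will be queued to the thread pool instead of
// running synchronously on the thread that calls SetResult/SetException.
static TaskCompletionSource<bool> tcs =
    new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
With this, the thread that calls tcs.SetException in the question no longer runs task2's continuation while it is still inside the lock.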
From a broader perspective, if you're mixing locks and TaskCompletionSource instances, it sounds like you may be building (or may need) an asynchronous coordination primitive. I have an open-source library that implements a bunch of them, if that helps.
A task is an abstraction over some amount of work. Usually this means that the work is split into parts, where the execution can be paused and resumed between parts. When resuming it may very well run on another thread. But the pausing/resuming may only be done at the await statements. Notably, while the task is 'paused', for example because it is waiting for IO, it does not consume any thread at all, it will only use a thread while it is actually running.
My Question is: is that reasonable by design? Does that mean lock inside an async method is meaningless?
A lock inside an async method is far from meaningless, since it allows you to ensure a section of code is only run by one thread at a time.
In your first example there can be only one thread that has the lock at a time. While the lock is held that task cannot be paused/resumed since await is not legal while in a lock body. So a single thread will execute the entire lock body, and that thread cannot do anything else until it completes the lock body. So there is no risk of re-entrancy unless you invoke some code that can call back to the same method.
In your updated example the problem occurs due to TaskCompletionSource.SetException, which is allowed to reuse the current thread to run any continuation of the task immediately. To avoid this, and many other issues, make sure you only hold the lock while running a limited amount of code. Any method call that may run arbitrary code risks causing deadlocks, reentrancy, and many other problems.
You can solve the specific problem by using a ManualResetEvent(Slim) to do the signaling between threads instead of using a TaskCompletionSource.
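A hedged sketch of that idea, adapted to the question's code (mres replaces tcs; note that mres.Wait() blocks a thread-pool thread until the signal arrives):
static ManualResetEventSlim mres = new ManualResetEventSlim(initialState: false);

// In MethodNotAllowReetrance, task1 signals the event instead of completing the tcs:
//     if (callerName == "task1") mres.Set();

// task2 blocks until the signal, so no continuation is inlined onto task1's thread:
var task2 = Task.Run(() =>
{
    mres.Wait();
    MethodNotAllowReetrance("task2");
});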
So your method is basically like this:
static void MethodNotAllowReetrance()
{
    lock (methodLock) tcs.SetResult(true);
}
...and the tcs.Task has a continuation attached that invokes the MethodNotAllowReetrance. What happens then is the same thing that would happen if your method was like this instead:
static void MethodNotAllowReetrance()
{
    lock (methodLock) MethodNotAllowReetrance();
}
The moral lesson is that you must be very careful every time you invoke any method inside a lock-protected region. In this particular case you have a couple of options:
Don't complete the TaskCompletionSource while holding the lock. Defer its completion until after you have exited the protected region:
static void MethodNotAllowReetrance()
{
    bool doComplete = false;
    lock (methodLock) doComplete = true;
    if (doComplete) tcs.SetResult(true);
}
Configure the TaskCompletionSource so that it invokes its continuations asynchronously, by passing TaskCreationOptions.RunContinuationsAsynchronously to its constructor. This is an option that you don't always have. For example, when you cancel a CancellationTokenSource, you don't have the option of invoking the callbacks registered to its associated CancellationToken asynchronously.
Refactor the MethodNotAllowReetrance method in a way that it can handle reentrancy.
Use SemaphoreSlim instead of lock, since, as the documentation says:
The SemaphoreSlim class doesn't enforce thread or task identity
In your case, it would look something like this:
// Semaphore only allows one request to enter at a time
private static readonly SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(1, 1);

void SyncMethod()
{
    _semaphoreSlim.Wait();
    try
    {
        // Do some sync work
    }
    finally
    {
        _semaphoreSlim.Release();
    }
}
The try/finally block is optional, but it makes sure that the semaphore is released even if an exception is thrown somewhere in your code.
Note that SemaphoreSlim also has a WaitAsync() method, if you want to wait asynchronously to enter the semaphore.
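For completeness, a sketch of the asynchronous counterpart of the method above:
async Task AsyncMethod()
{
    await _semaphoreSlim.WaitAsync();
    try
    {
        // Do some async work
        await Task.Delay(100);
    }
    finally
    {
        _semaphoreSlim.Release();
    }
}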

In Unity / C#, does .Net's async/await start, literally, another thread?

Important for anyone researching this difficult topic in Unity specifically,
be sure to see another question I asked which raised related key issues:
In Unity specifically, "where" does an await literally return to?
For C# experts, Unity is single-threaded [1]
It's common to do calculations and such on another thread.
When you do something on another thread, you often use async/await since, uh, all the good C# programmers say that's the easy way to do that!
void TankExplodes() {
    ShowExplosion();     // ordinary Unity thread
    SoundEffects();      // ordinary Unity thread
    SendExplosionInfo(); // it goes to another thread. let's use 'async/await'
}
using System.Net.WebSockets;
async void SendExplosionInfo() {
    cws = new ClientWebSocket();
    try {
        await cws.ConnectAsync(u, CancellationToken.None);
        ...
        Scene.NewsFromServer("done!"); // class function to go back to main thread
    }
    catch (Exception e) { ... }
}
OK, so when you do this, you do everything "just as you normally do" when you launch a thread in a more conventional way in Unity/C# (so using Thread or whatever or letting a native plugin do it or the OS or whatever the case may be).
Everything works out great.
As a lame Unity programmer who only knows enough C# to get to the end of the day, I have always assumed that the async/await pattern above literally launches another thread.
In fact, does the code above literally launch another thread, or does C#/.NET use some other approach to achieve tasks when you use the natty async/await pattern?
Maybe it works differently or specifically in the Unity engine from "using C# generally"? (IDK?)
Note that in Unity, whether or not it is a thread drastically affects how you have to handle the next steps. Hence the question.
Issue: I realize there's lots of discussion about "is await a thread", but, (1) I have never seen this discussed / answered in the Unity setting (does it make any difference? IDK?) (2) I simply have never seen a clear answer!
[1] Some ancillary calculations (eg, physics etc) are done on other threads, but the actual "frame based game engine" is one pure thread. (It's impossible to "access" the main engine frame thread in any way whatsoever: when programming, say, a native plugin or some calculation on another thread, you just leave markers and values for the components on the engine frame thread to look at and use when they run each frame.)
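As a sketch of the 'leave markers for the frame thread' pattern described in the footnote (the MainThreadDispatcher name and Enqueue helper are illustrative, not part of the Unity API):
using System;
using System.Collections.Concurrent;
using UnityEngine;

// Worker threads enqueue actions; the main (frame) thread drains them once per frame.
public class MainThreadDispatcher : MonoBehaviour
{
    static readonly ConcurrentQueue<Action> queue = new ConcurrentQueue<Action>();

    public static void Enqueue(Action action) => queue.Enqueue(action);

    void Update() // runs on the main frame thread
    {
        while (queue.TryDequeue(out var action))
            action();
    }
}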
This reading: Tasks are (still) not threads and async is not parallel might help you understand what's going on under the hood.
In short, in order for your task to run on a separate thread, you need to call:
Task.Run(() => { /* the work to be done on a separate thread */ });
Then you can await that task wherever needed.
To answer your question
"In fact, does the code above literally launch another thread, or does
c#/.Net use some other approach to achieve tasks when you use the
natty async/wait pattern?"
No - it doesn't.
If you did
await Task.Run(()=> cws.ConnectAsync(u, CancellationToken.None));
Then cws.ConnectAsync(u, CancellationToken.None) would run on a separate thread.
As an answer to the comment, here is the code modified with more explanations:
async void SendExplosionInfo() {
    cws = new ClientWebSocket();
    try {
        var myConnectTask = Task.Run(() => cws.ConnectAsync(u, CancellationToken.None));
        // more code running...
        await myConnectTask; // here's where it will actually stop to wait for the completion of your task
        Scene.NewsFromServer("done!"); // class function to go back to main thread
    }
    catch (Exception e) { ... }
}
You might not need it on a separate thread, though, because the async work you're doing is not CPU bound (or so it seems). Thus you should be fine with:
try {
    var myConnectTask = cws.ConnectAsync(u, CancellationToken.None);
    // more code running...
    await myConnectTask; // here's where it will actually stop to wait for the completion of your task
    Scene.NewsFromServer("done!"); // continue from here
}
catch (Exception e) { ... }
Sequentially it will do exactly the same thing as the code above, but on the same thread. It allows the code after ConnectAsync to execute, and only stops to wait for the completion of ConnectAsync at the await. Since ConnectAsync is not CPU bound (the work is done somewhere else, i.e. networking, making it somewhat parallel in that sense), you will have enough juice to run your tasks on, unless the code in "..." also requires a lot of CPU-bound work that you'd rather run in parallel.
Also, you might want to avoid using async void, as it's there only for top-level functions. Try using async Task in your method signature. You can read more on this here.
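For example, a sketch of the same method with an async Task signature (cws, u and Scene come from the question's snippet):
async Task SendExplosionInfoAsync()
{
    cws = new ClientWebSocket();
    try
    {
        await cws.ConnectAsync(u, CancellationToken.None);
        // ...
        Scene.NewsFromServer("done!");
    }
    catch (Exception e) { /* ... */ }
}
Callers can then await the returned Task (or at least observe it for exceptions), which async void does not allow.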
No, async/await does not mean another thread. It can start another thread, but it doesn't have to.
Here you can find quite interesting post about it: https://blogs.msdn.microsoft.com/benwilli/2015/09/10/tasks-are-still-not-threads-and-async-is-not-parallel/
Important notice
First of all, there's an issue with your question's first statement.
Unity is single-threaded
Unity is not single-threaded; in fact, Unity is a multi-threaded environment. Why? Just go to the official Unity web page and read there:
High-performance multithreaded system: Fully utilize the multicore processors available today (and tomorrow), without heavy programming. Our new foundation for enabling high-performance is made up of three sub-systems: the C# Job System, which gives you a safe and easy sandbox for writing parallel code; the Entity Component System (ECS), a model for writing high-performance code by default, and the Burst Compiler, which produces highly-optimized native code.
The Unity 3D engine uses a .NET Runtime called "Mono" which is multi-threaded by its nature. For some platforms, the managed code will be transformed into native code, so there will be no .NET Runtime. But the code itself will be multi-threaded anyway.
So please, don't state misleading and technically incorrect facts.
What you're arguing with is simply a statement that there is a main thread in Unity which processes the core workload in a frame-based way. This is true. But it isn't something new and unique! E.g. a WPF application running on .NET Framework (or .NET Core starting with 3.0) has a main thread too (often called the UI thread), and the workload is processed on that thread in a frame-based way using the WPF Dispatcher (dispatcher queue, operations, frames etc.) But all this doesn't make the environment single-threaded! It's just a way to handle the application's logic.
An answer to your question
Please note: my answer only applies to such Unity instances that run a .NET Runtime environment (Mono). For those instances that convert the managed C# code into native C++ code and build/run native binaries, my answer is most probably at least inaccurate.
You write:
When you do something on another thread, you often use async/await since, uh, all the good C# programmers say that's the easy way to do that!
The async and await keywords in C# are just a way to use the TAP (Task-Asynchronous Pattern).
The TAP is used for arbitrary asynchronous operations. Generally speaking, there is no thread. I strongly recommend to read this Stephen Cleary's article called "There is no thread". (Stephen Cleary is a renowned asynchronous programming guru if you don't know.)
The primary cause for using the async/await feature is an asynchronous operation. You use async/await not because "you do something on another thread", but because you have an asynchronous operation you have to wait for. Whether or not there is a background thread that this operation will run on - this does not matter to you (well, almost; see below). The TAP is an abstraction level that hides these details.
In fact, does the code above literally launch another thread, or does c#/.Net use some other approach to achieve tasks when you use the natty async/wait pattern?
The correct answer is: it depends.
if ClientWebSocket.ConnectAsync throws an argument validation exception right away (e.g. an ArgumentNullException when uri is null), no new thread will be started
if the code in that method completes very quickly, the result of the method will be available synchronously, no new thread will be started
if the implementation of the ClientWebSocket.ConnectAsync method is a pure asynchronous operation with no threads involved, your calling method will be "suspended" (due to await) - so no new thread will be started
if the method implementation involves threads and the current TaskScheduler is able to schedule this work item on a running thread pool thread, no new thread will be started; instead, the work item will be queued on an already running thread pool thread
if all thread pool threads are already busy, the runtime might spawn new threads depending on its configuration and current system state, so yes - a new thread might be started and the work item will be queued on that new thread
You see, this is pretty much complex. But that's exactly the reason why the TAP pattern and the async/await keyword pair were introduced into C#. These are usually the things a developer doesn't want to bother with, so let's hide this stuff in the runtime/framework.
@agfc states something that's not quite correct:
"This won't run the method on a background thread"
await cws.ConnectAsync(u, CancellationToken.None);
"But this will"
await Task.Run(()=> cws.ConnectAsync(u, CancellationToken.None));
If the synchronous part of ConnectAsync's implementation is tiny, the task scheduler might run that part synchronously in both cases. So both snippets might behave exactly the same, depending on the called method's implementation.
Note that ConnectAsync has an Async suffix and returns a Task. This is convention-based information that the method is truly asynchronous. In such cases, you should always prefer await MethodAsync() over await Task.Run(() => MethodAsync()).
Further interesting reading:
await vs await Task.Run
return Task.Run vs await Task.Run
I don't like answering my own question, but as it turns out none of the answers here is totally correct. (However many/all of the answers here are hugely useful in different ways).
In fact, the actual answer can be stated in a nutshell:
On which thread the execution resumes after an await is controlled by SynchronizationContext.Current.
That's it.
Thus in any particular version of Unity (and note that, as of writing 2019, they are drastically changing Unity - https://unity.com/dots) - or indeed any C#/.Net environment at all - the question on this page can be answered properly.
The full information emerged at this follow-up QA:
https://stackoverflow.com/a/55614146/294884
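A minimal console sketch of that rule (my own, not from the linked answer; requires C# 7.1+ for async Main): with no SynchronizationContext.Current installed, the continuation after an await resumes on a thread-pool thread.
using System;
using System.Threading;
using System.Threading.Tasks;

class SyncContextDemo
{
    static async Task Main()
    {
        Console.WriteLine("Context: " + (SynchronizationContext.Current?.GetType().Name ?? "null"));
        Console.WriteLine("Thread before await: " + Thread.CurrentThread.ManagedThreadId);
        await Task.Delay(100);
        // With a null SynchronizationContext (console app), the continuation runs on
        // whichever thread-pool thread completed the delay.
        Console.WriteLine("Thread after await: " + Thread.CurrentThread.ManagedThreadId);
    }
}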
The code after an await can continue on another thread-pool thread. This can have consequences when dealing with non-thread-safe references in a method, such as Unity objects, EF's DbContext, and many other classes, including your own custom code.
Take the following example:
[Test]
public async Task TestAsync()
{
    using (var context = new TestDbContext())
    {
        Console.WriteLine("Thread Before Async: " + Thread.CurrentThread.ManagedThreadId.ToString());
        var names = context.Customers.Select(x => x.Name).ToListAsync();
        Console.WriteLine("Thread Before Await: " + Thread.CurrentThread.ManagedThreadId.ToString());
        var result = await names;
        Console.WriteLine("Thread After Await: " + Thread.CurrentThread.ManagedThreadId.ToString());
    }
}
The output:
------ Test started: Assembly: EFTest.dll ------
Thread Before Async: 29
Thread Before Await: 29
Thread After Await: 12
1 passed, 0 failed, 0 skipped, took 3.45 seconds (NUnit 3.10.1).
Note that code before and after the ToListAsync is running on the same thread. So prior to awaiting any of the results we can continue processing, though the results of the async operation will not be available, just the Task that is created. (which can be aborted, awaited, etc.)
Once we put an await in, the code following will be effectively split off as a continuation, and will/can come back on a different thread.
This applies when awaiting for an async operation in-line:
[Test]
public async Task TestAsync2()
{
    using (var context = new TestDbContext())
    {
        Console.WriteLine("Thread Before Async/Await: " + Thread.CurrentThread.ManagedThreadId.ToString());
        var names = await context.Customers.Select(x => x.Name).ToListAsync();
        Console.WriteLine("Thread After Async/Await: " + Thread.CurrentThread.ManagedThreadId.ToString());
    }
}
Output:
------ Test started: Assembly: EFTest.dll ------
Thread Before Async/Await: 6
Thread After Async/Await: 33
1 passed, 0 failed, 0 skipped, took 4.38 seconds (NUnit 3.10.1).
Again, the code after the await is executed on another thread from the original.
If you want to ensure that the code calling async code remains on the same thread then you need to use the Result on the Task to block the thread until the async task completes:
[Test]
public void TestAsync3()
{
    using (var context = new TestDbContext())
    {
        Console.WriteLine("Thread Before Async: " + Thread.CurrentThread.ManagedThreadId.ToString());
        var names = context.Customers.Select(x => x.Name).ToListAsync();
        Console.WriteLine("Thread After Async: " + Thread.CurrentThread.ManagedThreadId.ToString());
        var result = names.Result;
        Console.WriteLine("Thread After Result: " + Thread.CurrentThread.ManagedThreadId.ToString());
    }
}
Output:
------ Test started: Assembly: EFTest.dll ------
Thread Before Async: 20
Thread After Async: 20
Thread After Result: 20
1 passed, 0 failed, 0 skipped, took 4.16 seconds (NUnit 3.10.1).
So as far as Unity, EF, etc. goes, you should be cautious about using async liberally where these classes are not thread safe. For instance the following code may lead to unexpected behaviour:
using (var context = new TestDbContext())
{
    var ids = await context.Customers.Select(x => x.CustomerId).ToListAsync();
    foreach (var id in ids)
    {
        var orders = await context.Orders.Where(x => x.CustomerId == id).ToListAsync();
        // do stuff with orders.
    }
}
As far as the code goes, this looks fine, but DbContext is not thread-safe, and the single DbContext reference will be running on a different thread when it is queried for Orders, due to the await on the initial Customer load.
Use async when there is a significant benefit to it over synchronous calls, and you're sure the continuation will access only thread-safe code.
I'm amazed there is no mention of ConfigureAwait in this thread. I was looking for a confirmation that Unity did async/await the same way that it is done for "regular" C# and from what I see above it seems to be the case.
The thing is, by default, an awaited task will resume in the same threading context after completion. If you await on the main thread, it will resume on the main thread. If you await on a thread from the ThreadPool, it will use any available thread from the thread pool. You can always create different contexts for different purposes, like a DB access context and whatnot.
This is where ConfigureAwait is interesting. If you chain a call to ConfigureAwait(false) after your await, you are telling the runtime that you do not need to resume in the same context and therefore it will resume on a thread from the ThreadPool. Omitting a call to ConfigureAwait plays it safe and will resume in the same context (main thread, DB thread, ThreadPool context, whatever the context the caller was on).
So, starting on the main thread, you can await and resume in the main thread like so:
// Main thread
await WhateverTaskAync();
// Main thread
or go to the thread pool like so:
// Main thread
await WhateverTaskAsync().ConfigureAwait(false);
// Pool thread
Likewise, starting from a thread in the pool:
// Pool thread
await WhateverTaskAsync();
// Pool thread
is equivalent to :
// Pool thread
await WhateverTaskAsync().ConfigureAwait(false);
// Pool thread
To go back to the main thread, you would use an API that transfers to the main thread:
// Pool thread
await WhateverTaskAsync().ConfigureAwait(false);
// Pool thread
RunOnMainThread(() =>
{
    // Main thread
    NextStep();
});
// Pool thread, possibly before NextStep() is run unless RunOnMainThread is synchronous (which it normally isn't)
This is why people say that calling Task.Run runs code on a pool thread. The await is superfluous though...
// Main Thread
await Task.Run(() =>
{
    // Pool thread
    WhateverTaskAsync();
    // Pool thread before WhateverTaskAsync completes because it is not awaited
});
// Main Thread before WhateverTaskAsync completes because it is not awaited
Now, calling ConfigureAwait(false) does not guarantee that the code inside the Async method is called in a separate thread. It only states that when returning from the await, you have no guarantee of being in the same threading context.
If your Async method looks like this:
private async Task WhateverTaskAsync()
{
    int blahblah = 0;
    for (int i = 0; i < 100000000; ++i)
    {
        blahblah += i;
    }
}
... because there is actually no await inside the Async method, you will get a compilation warning and it will all run within the calling context. Depending on its ConfigureAwait state, it might resume on the same or a different context. If you want the method to run on a pool thread, you would instead write the Async method as such:
private Task WhateverTaskAsync()
{
    return Task.Run(() =>
    {
        int blahblah = 0;
        for (int i = 0; i < 100000000; ++i)
        {
            blahblah += i;
        }
    });
}
Hopefully, that clears up some things for others.

Concurrency without multithreading Async/Await

There is a strong emphasis that async/await is unrelated to multi-threading in most tutorials; that a single thread can dispatch multiple I/O operations and then handle the results as they complete without creating new threads. The concept makes sense but I've never seen that actual behavior in practice.
Take the below example:
static void Main(string[] args)
{
    // Uncomment exactly one of the three cases below per run.

    // No Delay
    // var tasks = new List<int> { 3, 2, 1 }.Select(x => DelayedResult(x, 0));

    // Staggered delay
    // var tasks = new List<int> { 3, 2, 1 }.Select(x => DelayedResult(x, x));

    // Simultaneous Delay
    var tasks = new List<int> { 3, 2, 1 }.Select(x => DelayedResult(x, 1));

    var allTasks = Task.WhenAll(tasks);
    allTasks.Wait();
    Console.ReadLine();
}
static async Task<T> DelayedResult<T>(T result, int seconds = 0)
{
    ThreadPrint("Yield:" + result);
    await Task.Delay(TimeSpan.FromSeconds(seconds));
    ThreadPrint("Continuation:" + result);
    return result;
}

static void ThreadPrint(string message)
{
    int threadId = Thread.CurrentThread.ManagedThreadId;
    Console.WriteLine("Thread:" + threadId + "|" + message);
}
"No Delay" uses only one thread and executes the continuation immediately as though it were synchronous code. Looks good.
Thread:1|Yield:3
Thread:1|Continuation:3
Thread:1|Yield:2
Thread:1|Continuation:2
Thread:1|Yield:1
Thread:1|Continuation:1
"Staggered Delay" uses two threads. We have left the single-threaded world behind and there are absolutely new threads being created in the thread pool. At least the thread used for processing the continuations is reused and processing occurs in the order completed rather than the order invoked.
Thread:1|Yield:3
Thread:1|Yield:2
Thread:1|Yield:1
Thread:4|Continuation:1
Thread:4|Continuation:2
Thread:4|Continuation:3
"Simultaneous Delay" uses...4 threads! This is no better than regular old multi-threading; in fact, its worse since there is an ugly state machine hiding under the covers in the IL.
Thread:1|Yield:3
Thread:1|Yield:2
Thread:1|Yield:1
Thread:4|Continuation:1
Thread:7|Continuation:3
Thread:5|Continuation:2
Please provide a code example for the "Simultaneous Delay" that only uses one thread. I suspect there isn't one... which begs the question of why the async/await pattern is advertised as unrelated to multi-threading when it clearly either a) uses the ThreadPool and dispatches new threads as necessary, or b) in a UI or ASP.NET context, simply deadlocks on a single thread unless you await "all the way up", which just means that the magic additional thread is being handled by the framework (not that it does not exist).
IMHO, async/await is an awesome abstraction for using continuations everywhere for high availability without getting mired in callback hell...but let's not pretend we are somehow dodging multi-threading. What am I missing?
You are forcing the multithreading in the code you posted.
When you await Task.Delay, the current thread is freed to accomplish other tasks if the task scheduler decides it must be run asynchronously. In this case, after it's released from the three tasks, you block that thread with allTasks.Wait(), which is a synchronous call.
Also, when the task scheduler finds the Task.Delay in the tasks, it decides the task is going to be long-running, so it must be executed asynchronously, not synchronously like the "No Delay" case (yes, you also await Task.Delay in the "No Delay" case, but with a delay of 0 seconds; the task scheduler is smart enough to distinguish this case).
As all the tasks resume simultaneously, the task scheduler finds the first thread occupied, so it creates a new thread for the first task resumed; then the next task sees both threads occupied, and so on.
Basically, you are asking something impossible of the async mechanism: you want the methods to be executed in parallel while being executed on one thread.
Also, async is not announced as unrelated to multithreading; if someone says that, then he doesn't understand what async is. In fact, asynchronous implies multithreading, but the async mechanism in .NET is smart enough to complete some tasks synchronously to ensure maximum efficiency.
It can be announced as thread-efficient: if a thread is waiting for an I/O operation, for example, it can be used for other tasks without keeping that thread locked doing nothing. Take a TcpClient, for example, which uses a Socket: at the OS level the socket uses completion threads, so retaining that thread doing nothing is totally inefficient. Or, if you want to go more low-level, take a disk read/write, which uses DMA to transfer data without using the processor; in that case no other thread is needed at all, and retaining the thread is a waste of resources.
Just as a fact, take this description from Microsoft when they introduced async:
Visual Studio 2012 introduces a simplified approach, async programming, that leverages asynchronous support in the .NET Framework 4.5 and the Windows Runtime. The compiler does the difficult work that the developer used to do, and your application retains a logical structure that resembles synchronous code. As a result, you get all the advantages of asynchronous programming with a fraction of the effort.
Also, using async on a UI thread does not lock the thread; that's the benefit. The UI thread will be freed and keep the UI responsive while it's waiting for long tasks, and instead of manually programming the multithreading and synchronization functions, the async mechanism takes care of everything for you.

async await usages for CPU computing vs IO operation?

I already know that async/await keeps the thread context and also handles exception forwarding, etc. (which helps a lot).
But consider the following example :
public async Task<int> ExampleMethodAsync()
{
    var httpClient = new HttpClient();

    // start async task...
    Task<string> contentsTask = httpClient.GetStringAsync("http://msdn.microsoft.com");

    // wait and return...
    string contents = await contentsTask;

    // get the length...
    int exampleInt = contents.Length;

    // return the length...
    return exampleInt;
}
If the async method (httpClient.GetStringAsync) is an IO operation (like in my sample above), then I gain these things:
Caller Thread is not blocked
Worker thread is released because there is an IO operation (IO completion ports...) (GetStringAsync uses a TaskCompletionSource and does not open a new thread)
Preserved thread context
Exception is thrown back
But what if, instead of httpClient.GetStringAsync (an IO operation), I have a Task of CalcFirstMillionsDigitsOf_PI_Async (a heavy compute-bound operation on a separate thread)?
It seems that the only things I gain here are:
Preserved thread context
Exception is thrown back
Caller Thread is not blocked
But I still have another thread (a parallel thread) which executes the operation, and the CPU is switching between the main thread and the operation.
Is my diagnosis correct?
Actually, you only get the second set of advantages in both cases. await doesn't start asynchronous execution of anything, it's simply a keyword to the compiler to generate code for handling completion, context etc.
You can find a better explanation of this in '"Invoke the method with await"... ugh!' by Stephen Toub.
It's up to the asynchronous method itself to decide how it achieves the asynchronous execution:
Some methods will use a Task to run their code on a ThreadPool thread,
Some will use some IO-completion mechanism. There is even a special ThreadPool for that, which you can use with Tasks with a custom TaskScheduler
Some will wrap a TaskCompletionSource over another mechanism like events or callbacks.
In every case, it is the specific implementation that releases the thread (if one is used). The TaskScheduler releases the thread automatically when a Task finishes execution, so you get this functionality for cases #1 and #2 anyway.
What happens in case #3 for callbacks depends on how the callback is made. Most of the time the callback is made on a thread managed by some external library. In this case you have to quickly process the callback and return, to allow the library to reuse the thread.
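A hedged sketch of that third option, wrapping a callback-based API in a TaskCompletionSource (the Device class here is hypothetical, defined only to make the sketch self-contained):
using System;
using System.Threading.Tasks;

// Hypothetical callback-based API.
class Device
{
    public event Action<string> DataReceived;
    public event Action<Exception> Failed;
}

static class CallbackWrapper
{
    // No thread is created here; whichever thread raises the event completes the Task.
    public static Task<string> ReceiveAsync(Device device)
    {
        var tcs = new TaskCompletionSource<string>();
        device.DataReceived += data => tcs.TrySetResult(data);
        device.Failed += error => tcs.TrySetException(error);
        return tcs.Task;
    }
}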
EDIT
Using a decompiler, it's possible to see that GetStringAsync uses the third option: It creates a TaskCompletionSource that gets signalled when the operation finishes. Executing the operation is delegated to an HttpMessageHandler.
Your analysis is correct, though the wording on your second part makes it sound like async is creating a worker thread for you, which it is not.
In library code, you actually want to keep your synchronous methods synchronous. If you want to consume a synchronous method asynchronously (e.g., from a UI thread), then call it using await Task.Run(..)
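For example (a sketch; CalcFirstMillionDigitsOfPi stands in for the question's hypothetical CPU-bound method):
// On a UI thread: off-load the CPU-bound work and keep the UI responsive.
async Task<int> GetPiDigitsAsync()
{
    return await Task.Run(() => CalcFirstMillionDigitsOfPi());
}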
Yes, you're correct. I cannot find any wrong statement in your question. Just the term "Preserved thread context" is unclear to me. Do you mean the "logical control flow"? In that case I'd agree.
Regarding the CPU bound example: you'd normally not do it that way, because starting a CPU-based task and waiting for it increases overhead and decreases throughput. But this might be valid if you need the caller to be unblocked (in the case of a WinForms or WPF project, for example).
