I was writing a small console app to try to become familiar with using async/await. In this app, I accidentally created an infinite recursive loop (which I have now fixed). The behavior of this infinitely recursive loop surprised me though. Rather than throwing a StackOverflowException, it became deadlocked.
Consider the following example. If Foo() is called with runAsync set to false, it throws a StackOverflowException. But when runAsync is true, it becomes deadlocked (or at least appears to). Can anyone explain why the behavior is so different?
bool runAsync;

void Foo()
{
    Task.WaitAll(Bar(), Bar());
}

async Task Bar()
{
    if (runAsync)
        await Task.Run(Foo).ConfigureAwait(false);
    else
        Foo();
}
It's not really deadlocked. This quickly exhausts the available threads in the thread-pool. Then, one new thread is injected every 500ms. You can observe that when you put some Console.WriteLine logging in there.
Basically, this code is invalid because it overwhelms the thread pool; nothing in this spirit should be put into production.
If you make all waiting async instead of using Task.WaitAll, you turn the apparent deadlock into runaway memory growth instead. This might be an interesting experiment for you.
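For illustration, a sketch of what the fully asynchronous variant might look like (same shape as the code above, renamed as async methods; it still recurses without bound, so treat it purely as an experiment):

// Sketch: no thread-pool threads are blocked any more, so instead of an
// apparent deadlock the recursion keeps allocating tasks and state machines
// on the heap until memory runs out.
static async Task FooAsync()
{
    await Task.WhenAll(BarAsync(), BarAsync());
}

static async Task BarAsync()
{
    await Task.Run(FooAsync).ConfigureAwait(false);
}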
The async version doesn't deadlock (as usr explained) but it doesn't throw a StackOverflowException because it doesn't rely on the stack.
The stack is a memory area reserved for a thread (unlike the heap which is shared among all the threads).
When you call an async method it runs synchronously (i.e. using the same thread and stack) until it reaches an await on an uncompleted task. At that point the rest of the method is scheduled as a continuation and the thread is released (together with its stack).
So when you use Task.Run you are offloading Foo to another ThreadPool thread with a clean stack, so you'll never get a StackOverflowException.
You may, however, reach an OutOfMemoryException, because the async method's state machine is stored on the heap, available for all threads to resume on. This example will throw very quickly because you don't exhaust the ThreadPool:
static void Main()
{
    Foo().Wait();
}

static async Task Foo()
{
    await Task.Yield();
    await Foo();
}
Edit
I found that Building Async Coordination Primitives, Part 1: AsyncManualResetEvent might be related to my topic:
In the case of TaskCompletionSource, that means that synchronous continuations can happen as part of a call to {Try}Set*, which means in our AsyncManualResetEvent example, those continuations could execute as part of the Set method. Depending on your needs (and whether callers of Set may be ok with a potentially longer-running Set call as all synchronous continuations execute), this may or may not be what you want.
Many thanks for all of the answers; thank you for your knowledge and patience!
Original Question
I know that Task.Run runs on a thread-pool thread, and that threads can be re-entered. But I never knew that two tasks can run on the same thread while they are both alive!
My question is: is that reasonable by design? Does it mean that lock inside an async method is meaningless (or rather, that lock cannot be trusted inside an async method if I want a method that doesn't allow re-entrancy)?
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace TaskHijacking
{
    class Program
    {
        static TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();
        static object methodLock = new object();

        static void MethodNotAllowReetrance(string callerName)
        {
            lock (methodLock)
            {
                Console.WriteLine($"Enter MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
                if (callerName == "task1")
                {
                    tcs.SetException(new Exception("Terminate tcs"));
                }
                Thread.Sleep(1000);
                Console.WriteLine($"Exit MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
            }
        }

        static void Main(string[] args)
        {
            var task1 = Task.Run(async () =>
            {
                await Task.Delay(1000);
                MethodNotAllowReetrance("task1");
            });

            var task2 = Task.Run(async () =>
            {
                try
                {
                    await tcs.Task; // await here until task1 calls SetException on tcs
                }
                catch
                {
                    // Omit the exception
                }
                MethodNotAllowReetrance("task2");
            });

            Task.WaitAll(task1, task2);
            Console.ReadKey();
        }
    }
}
Output:
Enter MethodNotAllowReetrance, caller: task1, on thread: 6
Enter MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task1, on thread: 6
The control flow of thread 6 is shown in the figure.
You already have several solutions. I just want to describe the problem a bit more. There are several factors at play here that combine to cause the observed re-entrancy.
First, lock is re-entrant. lock is strictly about mutual exclusion of threads, which is not the same as mutual exclusion of code. I think re-entrant locks are a bad idea in the 99% case (as described on my blog), since developers generally want mutual exclusion of code and not threads. SemaphoreSlim, since it is not re-entrant, mutually excludes code. IMO re-entrant locks are a holdover from decades ago, when they were introduced as an OS concept, and the OS is just concerned about managing threads.
Next, TaskCompletionSource<T> by default invokes task continuations synchronously.
Also, await will schedule its method continuation as a synchronous task continuation (as described on my blog).
Task continuations will sometimes run asynchronously even if scheduled synchronously, but in this scenario they will run synchronously. The context captured by await is the thread pool context, and the completing thread (the one calling TCS.TrySet*) is a thread pool thread, and in that case the continuation will almost always run synchronously.
So, you end up with a thread that takes a lock, completes a TCS, thus executing the continuations of that task, which includes continuing another method, which is then able to take that same lock.
To repeat the existing solutions in other answers, to solve this you need to break that chain at some point:
(OK) Use a non-reentrant lock. SemaphoreSlim.WaitAsync will still execute the continuations while holding the lock (not a good idea), but since SemaphoreSlim isn't re-entrant, the method continuation will (asynchronously) wait for the lock to be available.
(Best) Use TaskCompletionSource.RunContinuationsAsynchronously, which will force task continuations onto a (different) thread pool thread. This is a better solution because your code is no longer invoking arbitrary code (i.e., the task continuations) while holding a lock; a minimal sketch follows this list.
You can also break the chain by using a non-thread-pool context for the method awaiting the TCS. E.g., if that method had to resume on a UI thread, then it could not be run synchronously from a thread pool thread.
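Here is that sketch, adapting the field declaration from the question (everything else stays the same):

// With this option, awaiters of tcs.Task are queued to the thread pool instead
// of being run synchronously inside the SetException call, so the thread
// holding the lock never executes task2's continuation itself.
static TaskCompletionSource<bool> tcs =
    new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);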
From a broader perspective, if you're mixing locks and TaskCompletionSource instances, it sounds like you may be building (or may need) an asynchronous coordination primitive. I have an open-source library that implements a bunch of them, if that helps.
A task is an abstraction over some amount of work. Usually this means that the work is split into parts, where the execution can be paused and resumed between parts. When resuming, it may very well run on another thread. But the pausing/resuming can only happen at await expressions. Notably, while the task is 'paused', for example because it is waiting for IO, it does not consume any thread at all; it only uses a thread while it is actually running.
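A minimal sketch of that (the thread ids are illustrative; the exact numbers will vary per run):

// The code before and after the await may run on different thread-pool threads,
// and no thread at all is consumed while the delay is pending.
static async Task ShowThreads()
{
    Console.WriteLine($"Before await: thread {Thread.CurrentThread.ManagedThreadId}");
    await Task.Delay(1000).ConfigureAwait(false); // nothing blocks here
    Console.WriteLine($"After await:  thread {Thread.CurrentThread.ManagedThreadId}");
}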
My Question is: is that reasonable by design? Does that mean lock inside an async method is meaningless?
A lock inside an async method is far from meaningless, since it allows you to ensure that a section of code is only run by one thread at a time.
In your first example there can be only one thread that has the lock at a time. While the lock is held that task cannot be paused/resumed since await is not legal while in a lock body. So a single thread will execute the entire lock body, and that thread cannot do anything else until it completes the lock body. So there is no risk of re-entrancy unless you invoke some code that can call back to the same method.
In your updated example the problem occurs because of TaskCompletionSource.SetException, which is allowed to reuse the current thread to run any continuation of the task immediately. To avoid this, and many other issues, make sure you only hold the lock while running a limited amount of code. Any method call that may run arbitrary code risks deadlocks, re-entrancy, and many other problems.
You can solve the specific problem by using a ManualResetEvent(Slim) to do the signaling between threads instead of using a TaskCompletionSource.
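A rough sketch of that idea, reusing the names from the question (the signal field is mine):

// Signal with a ManualResetEventSlim instead of completing a TaskCompletionSource
// while holding the lock. Set() only wakes the waiting thread; it never runs
// task2's continuation on the thread that currently holds methodLock.
static readonly ManualResetEventSlim signal = new ManualResetEventSlim(false);

static void MethodNotAllowReetrance(string callerName)
{
    lock (methodLock)
    {
        if (callerName == "task1")
            signal.Set(); // task2 wakes on its own thread and then blocks on the lock
        Thread.Sleep(1000);
    }
}

// ...and task2 calls signal.Wait() instead of awaiting tcs.Task.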
So your method is basically like this:
static void MethodNotAllowReetrance()
{
lock (methodLock) tcs.SetResult();
}
...and tcs.Task has a continuation attached that invokes MethodNotAllowReetrance. What happens then is the same thing that would happen if your method was like this instead:
static void MethodNotAllowReetrance()
{
    lock (methodLock) MethodNotAllowReetrance();
}
The moral lesson is that you must be very careful every time you invoke any method inside a lock-protected region. In this particular case you have a couple of options:
Don't complete the TaskCompletionSource while holding the lock. Defer its completion until after you have exited the protected region:
static void MethodNotAllowReetrance()
{
    bool doComplete = false;
    lock (methodLock) doComplete = true;
    if (doComplete) tcs.SetResult(true);
}
Configure the TaskCompletionSource so that it invokes its continuations asynchronously, by passing TaskCreationOptions.RunContinuationsAsynchronously to its constructor. This is an option that you don't have very often. For example, when you cancel a CancellationTokenSource, you don't have the option of invoking the callbacks registered to its associated CancellationToken asynchronously.
Refactor the MethodNotAllowReetrance method in a way that it can handle reentrancy.
Use SemaphoreSlim instead of lock, since, as the documentation says:
The SemaphoreSlim class doesn't enforce thread or task identity
In your case, it would look something like this:
// Semaphore only allows one request to enter at a time
private static readonly SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(1, 1);

void SyncMethod() {
    _semaphoreSlim.Wait();
    try {
        // Do some sync work
    } finally {
        _semaphoreSlim.Release();
    }
}
The try/finally block is optional, but it makes sure that the semaphore is released even if an exception is thrown somewhere in your code.
Note that SemaphoreSlim also has a WaitAsync() method, if you want to wait asynchronously to enter the semaphore.
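A hedged sketch of the asynchronous counterpart (the method name and the awaited work are illustrative):

// Same pattern, but awaiting entry so no thread is blocked while waiting.
// Unlike lock, awaiting inside the protected region is allowed here.
async Task AsyncMethod() {
    await _semaphoreSlim.WaitAsync();
    try {
        // Do some async work
        await Task.Delay(100);
    } finally {
        _semaphoreSlim.Release();
    }
}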
I am rewriting some of my component management to use async start methods. Sadly, it looks like a call to an async method WITHOUT await still ends up waiting for the result?
Can anyone enlighten me?
I am calling:
public async Task StartAsync() {
    await DoStartProcessingAsync();
}
which itself calls a slow implementation of protected abstract Task DoStartProcessingAsync(); - slow because it does some EF calls, then creates an AppDomain, etc. - it takes "ages".
The actual call is done in the form:
x.StartAsync().Forget();
with "Forget" being a dummy function just to avoid the "no await" warning:
public static void Forget(this Task task) {
}
Sadly, this sequence waits for the slow DoStartProcessingAsync method to complete, and I see no reason for that. I am quite old in C#, but quite new to async/await, and I was under the impression that, unless I await the async task, the call would return. As such, I expected the call to StartAsync().Forget() to return immediately. INSTEAD, the stack trace shows the thread going all the way into the DoStartProcessingAsync() method without any async processing happening.
Can anyone enlighten me on my mistake?
What you're trying to achieve here is a fire-and-forget mechanism, so async/await isn't really what you want; you're not wanting to await anything.
These are designed to free up threads for long-running processes. Right now you're returning a Task, awaiting it, and then "forgetting" it. So why return the Task at all?
You're freeing up the thread for the long-running process, but you're also queuing a continuation that ultimately does nothing (this adds overhead that you could likely do without).
Simply doing this, would probably make more sense:
public void StartAsync() {
    Task.Run(() => DoStartProcessingAsync());
}
One thing to bear in mind is that you're now using a ThreadPool thread, not a UI thread (depending on what is actually calling this).
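If you do keep a Forget-style helper, a hedged sketch that at least observes faults, so exceptions aren't silently lost (the Console call stands in for real logging):

// Fire-and-forget, but observe exceptions instead of dropping them.
public static void Forget(this Task task) {
    task.ContinueWith(
        t => Console.WriteLine(t.Exception), // stand-in for real logging
        TaskContinuationOptions.OnlyOnFaulted);
}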
I am starting a new task from a function but I would not want it to run on the same thread. I don't care which thread it runs on as long as it is a different one (so the information given in this question does not help).
Am I guaranteed that the below code will always exit TestLock before allowing Task t to enter it again? If not, what is the recommended design pattern to prevent re-entrancy?
object TestLock = new object();

public void Test(bool stop = false) {
    Task t;
    lock (this.TestLock) {
        if (stop) return;
        t = Task.Factory.StartNew(() => { this.Test(stop: true); });
    }
    t.Wait();
}
Edit: Based on the below answer by Jon Skeet and Stephen Toub, a simple way to deterministically prevent reentrancy would be to pass a CancellationToken, as illustrated in this extension method:
public static Task StartNewOnDifferentThread(this TaskFactory taskFactory, Action action)
{
    return taskFactory.StartNew(action: action, cancellationToken: new CancellationToken());
}
I mailed Stephen Toub - a member of the PFX Team - about this question. He's come back to me really quickly, with a lot of detail - so I'll just copy and paste his text here. I haven't quoted it all, as reading a large amount of quoted text ends up getting less comfortable than vanilla black-on-white, but really, this is Stephen - I don't know this much stuff :) I've made this answer community wiki to reflect that all the goodness below isn't really my content:
If you call Wait() on a Task that's completed, there won't be any blocking (it'll just throw an exception if the task completed with a TaskStatus other than RanToCompletion, or otherwise return as a nop). If you call Wait() on a Task that's already executing, it must block as there’s nothing else it can reasonably do (when I say block, I'm including both true kernel-based waiting and spinning, as it'll typically do a mixture of both). Similarly, if you call Wait() on a Task that has the Created or WaitingForActivation status, it’ll block until the task has completed. None of those is the interesting case being discussed.
The interesting case is when you call Wait() on a Task in the WaitingToRun state, meaning that it’s previously been queued to a TaskScheduler but that TaskScheduler hasn't yet gotten around to actually running the Task's delegate yet. In that case, the call to Wait will ask the scheduler whether it's ok to run the Task then-and-there on the current thread, via a call to the scheduler's TryExecuteTaskInline method. This is called inlining. The scheduler can choose to either inline the task via a call to base.TryExecuteTask, or it can return 'false' to indicate that it is not executing the task (often this is done with logic like...
return SomeSchedulerSpecificCondition() ? false : TryExecuteTask(task);
The reason TryExecuteTask returns a Boolean is that it handles the synchronization to ensure a given Task is only ever executed once). So, if a scheduler wants to completely prohibit inlining of the Task during Wait, it can just be implemented as return false; If a scheduler wants to always allow inlining whenever possible, it can just be implemented as:
return TryExecuteTask(task);
In the current implementation (both .NET 4 and .NET 4.5, and I don’t personally expect this to change), the default scheduler that targets the ThreadPool allows for inlining if the current thread is a ThreadPool thread and if that thread was the one to have previously queued the task.
Note that there isn't arbitrary reentrancy here, in that the default scheduler won’t pump arbitrary threads when waiting for a task... it'll only allow that task to be inlined, and of course any inlining that task in turn decides to do. Also note that Wait won’t even ask the scheduler in certain conditions, instead preferring to block. For example, if you pass in a cancelable CancellationToken, or if you pass in a non-infinite timeout, it won’t try to inline because it could take an arbitrarily long amount of time to inline the task's execution, which is all or nothing, and that could end up significantly delaying the cancellation request or timeout. Overall, TPL tries to strike a decent balance here between wasting the thread that’s doing the Wait'ing and reusing that thread for too much. This kind of inlining is really important for recursive divide-and-conquer problems (e.g. QuickSort) where you spawn multiple tasks and then wait for them all to complete. If such were done without inlining, you’d very quickly deadlock as you exhaust all threads in the pool and any future ones it wanted to give to you.
Separate from Wait, it’s also (remotely) possible that the Task.Factory.StartNew call could end up executing the task then and there, iff the scheduler being used chose to run the task synchronously as part of the QueueTask call. None of the schedulers built into .NET will ever do this, and I personally think it would be a bad design for scheduler, but it’s theoretically possible, e.g.:
protected override void QueueTask(Task task)
{
    TryExecuteTask(task);
}
The overload of Task.Factory.StartNew that doesn’t accept a TaskScheduler uses the scheduler from the TaskFactory, which in the case of Task.Factory targets TaskScheduler.Current. This means if you call Task.Factory.StartNew from within a Task queued to this mythical RunSynchronouslyTaskScheduler, it would also queue to RunSynchronouslyTaskScheduler, resulting in the StartNew call executing the Task synchronously. If you’re at all concerned about this (e.g. you’re implementing a library and you don’t know where you’re going to be called from), you can explicitly pass TaskScheduler.Default to the StartNew call, use Task.Run (which always goes to TaskScheduler.Default), or use a TaskFactory created to target TaskScheduler.Default.
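As an illustration of that knob (my sketch, not part of Stephen's email), a scheduler that queues work to the thread pool but never inlines might look roughly like this:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// A TaskScheduler that refuses to inline tasks into a Wait()'ing thread:
// TryExecuteTaskInline always returns false, so Wait() blocks instead of
// running the task in place.
class NoInliningTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        ThreadPool.QueueUserWorkItem(_ => TryExecuteTask(task));
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // prohibit inlining
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return new Task[0]; // for debugger support only; nothing is tracked here
    }
}

A Wait() on a task queued to this scheduler will then always block rather than run the task on the waiting thread.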
EDIT: Okay, it looks like I was completely wrong, and a thread which is currently waiting on a task can be hijacked. Here's a simpler example of this happening:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication1 {
    class Program {
        static void Main() {
            for (int i = 0; i < 10; i++)
            {
                Task.Factory.StartNew(Launch).Wait();
            }
        }

        static void Launch()
        {
            Console.WriteLine("Launch thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
            Task.Factory.StartNew(Nested).Wait();
        }

        static void Nested()
        {
            Console.WriteLine("Nested thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
        }
    }
}
Sample output:
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
As you can see, there are lots of times when the waiting thread is reused to execute the new task. This can happen even if the thread has acquired a lock. Nasty re-entrancy. I am suitably shocked and worried :(
Why not just design for it, rather than bend over backwards to ensure it doesn't happen?
The TPL is a red herring here; reentrancy can happen in any code provided you can create a cycle, and you don't know for sure what's going to happen 'south' of your stack frame. Synchronous reentrancy is the best outcome here - at least you can't deadlock yourself (as easily).
Locks manage cross thread synchronisation. They are orthogonal to managing reentrancy. Unless you are protecting a genuine single use resource (probably a physical device, in which case you should probably use a queue), why not just ensure your instance state is consistent so reentrancy can 'just work'.
(Side thought: are Semaphores reentrant without decrementing?)
You could easily test this by writing a quick app that shares a socket connection between threads/tasks.
The task would acquire a lock before sending a message down the socket and waiting for a response. Once this blocks and becomes idle (IO block), set another task in the same block to do the same. It should block on acquiring the lock; if it does not, and the second task is allowed past the lock because it is run by the same thread, then you have a problem.
The solution with new CancellationToken() proposed by Erwin did not work for me; inlining happened anyway.
So I ended up using another condition advised by Jon and Stephen
("... or if you pass in a non-infinite timeout ..."):
Task<TResult> task = Task.Run(func);
task.Wait(TimeSpan.FromHours(1)); // Whatever is enough for task to start
return task.Result;
Note: I'm omitting exception handling etc. here for simplicity; you should mind those in production code.
I'm evaluating the Async CTP.
How can I begin execution of an async function on another thread pool's thread?
static async Task Test()
{
    // Do something, await something
}
static void Main( string[] args )
{
    // Is there a more elegant way to write the line below?
    var t = TaskEx.Run( () => Test().Wait() );

    // Doing much more in this same thread

    t.Wait(); // Waiting for much more than just this single task, this is just an example
}
I'm new (my virginal post) to Stack Overflow, but I'm jazzed that you're asking about the Async CTP since I'm on the team working on it at Microsoft :)
I think I understand what you're aiming for, and there are a couple of things you're doing correctly to get you there.
What I think you want:
static async Task Test()
{
    // Do something, await something
}

static void Main(string[] args)
{
    // In the CTP, use TaskEx.RunEx(...) to run an Async Method or Async Lambda
    // on the .NET thread pool
    var t = TaskEx.RunEx(Test);

    // The above is just shorthand for
    //     var t = TaskEx.RunEx(new Func<Task>(Test));
    // because C# auto-wraps methods into delegates for you.

    // Doing much more in this same thread

    t.Wait(); // Waiting for much more than just this single task, this is just an example
}
Task.Run vs. Task.RunEx
Because this CTP installs on top of .NET 4.0, we didn't want to patch the actual System.Threading.Tasks.Task type in mscorlib. Instead, the playground APIs are named FooEx when they conflicted.
Why did we name some of them Run(...) and some of them RunEx(...)? The reason is redesigns in method overloading that we hadn't completed yet by the time we released the CTP. In our current working codebase, we've actually had to tweak the C# method overloading rules slightly so that the right thing happens for Async Lambdas - which can return void, Task, or Task<T>.
The issue is that when async methods or lambdas return Task or Task<T>, they don't actually have the outer task type in the return expression, because the task is generated for you automatically as part of the method or lambda's invocation. This strongly seems to us like the right experience for code clarity, though it does make things quite different from before, since typically the expression of a return statement is directly convertible to the return type of the method or lambda.
Thus both async void lambdas and async Task lambdas support return; without arguments - hence the need for a clarification in method overload resolution to decide which one to pick. The only reason you have both Run(...) and RunEx(...) was so that we could make sure to have higher-quality support for the other parts of the Async CTP by the time PDC 2010 hit.
How to think about async methods/lambdas
I'm not sure if this is a point of confusion, but I thought I'd mention it - when you are writing an async method or async lambda, it can take on certain characteristics of whoever is invoking it. This is predicated on two things:
The type on which you are awaiting
And possibly the synchronization context (depending on above)
The CTP design for await and our current internal design are both very pattern-based so that API providers can help flesh out a vibrant set of things that you can 'await' on. This can vary based on the type on which you're awaiting, and the common type for that is Task.
Task's await implementation is very reasonable, and defers to the current thread's SynchronizationContext to decide how to defer work. If you're already in a WinForms or WPF message loop, then your deferred execution will come back on the same message loop (as if you had used BeginInvoke() to run the "rest of your method"). If you await a Task and you're already on the .NET thread pool, then the "rest of your method" will resume on one of the thread-pool threads (but not necessarily exactly the same one), since they were pooled to begin with and most likely you're happy to go with the first available pool thread.
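A small sketch of that on the thread pool (using the CTP's TaskEx.Delay; on a UI message loop the continuation would come back on the UI thread instead):

// On a thread-pool thread SynchronizationContext.Current is null, so the rest
// of the method resumes on whichever pool thread is available.
static async Task ShowContext()
{
    SynchronizationContext ctx = SynchronizationContext.Current;
    Console.WriteLine("Context before await: {0}",
        ctx == null ? "none (thread pool)" : ctx.GetType().Name);

    await TaskEx.Delay(100);

    Console.WriteLine("Resumed on thread: {0}",
        Thread.CurrentThread.ManagedThreadId);
}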
Caution about using Wait() methods
In your sample you used:
var t = TaskEx.Run( () => Test().Wait() );
What that does is:
In the surrounding thread synchronously call TaskEx.Run(...) to execute a lambda on the thread pool.
A thread pool thread is designated for the lambda, and it invokes your async method.
The async method Test() is invoked from the lambda. Because the lambda was executing on the thread pool, any continuations inside Test() can run on any thread in the thread pool.
The lambda doesn't actually vacate that thread's stack, because it had no awaits in it. The TPL's behavior in this case depends on whether Test() actually finished before the Wait() call. However, there's a real possibility that you will be blocking a thread-pool thread while it waits for Test() to finish executing on a different thread.
That's the primary benefit of the 'await' operator: it allows you to add code that executes later, but without blocking the original thread. In the thread pool case, you can achieve better thread utilization.
Let me know if you have other questions about the Async CTP for VB or C#, I'd love to hear them :)
It's usually up to the method returning the Task to determine where it runs, if it's starting genuinely new work instead of just piggy-backing on something else.
In this case it doesn't look like you really want the Test() method to be async - at least, you're not using the fact that it's asynchronous. You're just starting stuff in a different thread... the Test() method could be entirely synchronous, and you could just use:
Task task = TaskEx.Run(Test);
// Do stuff
task.Wait();
That doesn't require any of the async CTP goodness.
There would be, if this wasn't a console application. For example, if you do this in a Windows Forms application, you could do:
// Added to a button click event, for example
public async void button1_Click(object sender, EventArgs e)
{
    // Do some stuff
    await Test();
    // Do some more stuff
}
However, there is no default SynchronizationContext in a console, so that won't work the way you'd expect. In a console application, you need to explicitly grab the task and then wait at the end.
If you're doing this in a UI thread in Windows Forms, WPF, or even a WCF service, there will be a valid SynchronizationContext that will be used to marshal back the results properly. In a console application, however, when control is "returned" at the await call, the program continues, and just exits immediately. This tends to mess up everything, and produce unexpected behavior.
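In other words, for a console app the pattern is roughly (a minimal sketch):

static void Main(string[] args)
{
    // No SynchronizationContext here, so hold on to the task and wait for it
    // before the process exits.
    Task t = Test(); // starts running; returns to Main at the first await

    // Doing much more in this same thread

    t.Wait(); // make sure Test() has finished before exiting
}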