Usage of ConfigureAwait in .NET - c#

I've read about ConfigureAwait in various places (including SO questions), and here are my conclusions:
ConfigureAwait(true): Runs the rest of the code on the same thread the code before the await was run on.
ConfigureAwait(false): Runs the rest of the code on the same thread the awaited code was run on.
If the await is followed by code that accesses the UI, the task should be appended with .ConfigureAwait(true). Otherwise, an InvalidOperationException will occur due to another thread accessing UI elements.
My questions are:
Are my conclusions correct?
When does ConfigureAwait(false) improve performance, and when doesn't it?
If I'm writing a GUI application, but the next lines don't access UI elements, should I use ConfigureAwait(false) or ConfigureAwait(true)?

To answer your questions more directly:
ConfigureAwait(true): Runs the rest of the code on the same thread the code before the await was run on.
Not necessarily the same thread, but the same synchronization context. The synchronization context can decide how to run the code. In a UI application, it will be the same thread. In ASP.NET, it may not be the same thread, but you will have the HttpContext available, just like you did before.
ConfigureAwait(false): Runs the rest of the code on the same thread the awaited code was run on.
This is not correct. ConfigureAwait(false) tells it that it does not need the context, so the code can be run anywhere. It could be any thread that runs it.
If the await is followed by code that accesses the UI, the task should be appended with .ConfigureAwait(true). Otherwise, an InvalidOperationException will occur due to another thread accessing UI elements.
It is not correct that it "should be appended with .ConfigureAwait(true)". ConfigureAwait(true) is the default. So if that's what you want, you don't need to specify it.
When does ConfigureAwait(false) improve performance, and when doesn't it?
Returning to the synchronization context might take time, because it may have to wait for something else to finish running. In reality, this rarely happens, or that waiting time is so minuscule that you'd never notice it.
If I'm writing a GUI application, but the next lines don't access UI elements, should I use ConfigureAwait(false) or ConfigureAwait(true)?
You could use ConfigureAwait(false), but I suggest you don't, for a few reasons:
I doubt you would notice any performance improvement.
It can introduce parallelism that you may not expect. If you use ConfigureAwait(false), the continuation can run on any thread, so you could have problems if you're accessing non-thread-safe objects. It is not common to have these problems, but it can happen.
You (or someone else maintaining this code) may add code that interacts with the UI later and exceptions will be thrown. Hopefully the ConfigureAwait(false) is easy to spot (it could be in a different method than where the exception is thrown) and you/they know what it does.
I find it's easier to not use ConfigureAwait(false) at all (except in libraries). In the words of Stephen Toub (a Microsoft employee) in the ConfigureAwait FAQ:
When writing applications, you generally want the default behavior (which is why it is the default behavior).
Edit: I've written an article of my own on this topic: .NET: Don’t use ConfigureAwait(false)
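To make the difference concrete, here is a minimal sketch (hypothetical WinForms handler; Button_Click, label1 and data.txt are made-up names, and File.ReadAllTextAsync assumes .NET Core 2.0 or later):
private async void Button_Click(object sender, EventArgs e)
{
    // Default behavior: the UI SynchronizationContext is captured, so the
    // continuation resumes on the UI thread and touching controls is safe.
    string text = await File.ReadAllTextAsync("data.txt");
    label1.Text = text;

    // With ConfigureAwait(false) the continuation may resume on a thread pool
    // thread, so accessing label1 here would risk a cross-thread exception.
    string more = await File.ReadAllTextAsync("data.txt").ConfigureAwait(false);
    // label1.Text = more; // unsafe here
}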

ConfigureAwait(false) may improve performance if there are not many worker threads available and if the thread that it would need to wait for is constantly busy.
ConfigureAwait(false) is recommended wherever coming back to the same SynchronizationContext (which is usually tied to a thread) is not needed, especially in libraries that await something internally: https://medium.com/bynder-tech/c-why-you-should-use-configureawait-false-in-your-library-code-d7837dce3d7f.
ConfigureAwait(true) (which is the default) is needed when you require the same context, but it may also lead to a deadlock in certain situations.
Consider this code:
void Main()
{
    // Creating a Windows Forms form attaches a synchronization context to the current thread.
    new System.Windows.Forms.Form();
    var task = DoSth();
    Console.WriteLine(task.Result);
}

async Task<int> DoSth()
{
    await Task.Delay(1000);
    return 1;
}
In this example, because the task returned by DoSth is not awaited, the main UI thread blocks waiting for task.Result; at the same time, DoSth cannot complete because its continuation wants to come back to the UI thread after the delay. This leads to a deadlock, and the code never runs to completion. Adding .ConfigureAwait(false) to the await inside DoSth solves the problem in this case.
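For reference, a sketch of the fixed method (the same code as above, with the continuation no longer returning to the UI context):
async Task<int> DoSth()
{
    // The continuation no longer needs the UI synchronization context,
    // so Main's blocking task.Result call can complete without deadlocking.
    await Task.Delay(1000).ConfigureAwait(false);
    return 1;
}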

Using ConfigureAwait(false) in application code is normally not going to boost your application's performance in any meaningful way, because you don't normally await inside loops in application code. For example, let's consider the case where your app has a button, an async operation is started every time the user clicks the button, and the async operation includes a single await. By typing the 22 characters of .ConfigureAwait(false) after this await, you have already lost an amount of your life comparable to the time you can hope to save from 10 users that click this button once every minute, 8 hours per day, for 20 years each (~35,000,000 context switches in total = some seconds of CPU processing time).
And this is before taking into account the time you need to think about whether you can safely include this configuration (depending on whether the continuation contains UI-related code), the time you'll need in order to reconfirm your previous assessment every time you have to maintain/modify the code, and the time you'll lose on debugging in case your assessment was wrong.
On the other hand if your Button_Click handler contains code like this:
private async void Button_Click(object sender, EventArgs e)
{
    var client = new WebClient();
    using var stream = await client.OpenReadTaskAsync("someUrl");
    var buffer = new byte[1024];
    while ((await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        //...
    }
}
...then by all means do spend the extra time to ConfigureAwait(false) the ReadAsync task. Also do consider refactoring the code by moving the stream-reading part to a separate asynchronous method, so that you can safely access UI elements anywhere inside the Button_Click handler, without being distracted by technicalities that don't belong to this layer of the app.
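A possible shape for that refactoring (a sketch only; ReadFromUrlAsync and label1 are made-up names) keeps the UI work in the handler and pushes the ConfigureAwait(false) calls into the helper:
private async void Button_Click(object sender, EventArgs e)
{
    // Awaits here keep the UI context, so UI access stays safe.
    long bytesRead = await ReadFromUrlAsync("someUrl");
    label1.Text = $"Read {bytesRead} bytes";
}

private static async Task<long> ReadFromUrlAsync(string url)
{
    // No UI access below this point, so ConfigureAwait(false) is safe everywhere.
    var client = new WebClient();
    using var stream = await client.OpenReadTaskAsync(url).ConfigureAwait(false);
    var buffer = new byte[1024];
    long total = 0;
    int read;
    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false)) > 0)
    {
        total += read;
        //...
    }
    return total;
}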

Related

Are non-thread-safe functions async safe?

Consider the following async function that modifies a non-thread-safe list:
async Task AddNewToList(List<Item> list)
{
    // Suppose load takes a few seconds
    Item item = await LoadNextItem();
    list.Add(item);
}
Simply put: Is this safe?
My concern is that one may invoke the async method, and then while it's loading (either on another thread, or as an I/O operation), the caller may modify the list.
Suppose that the caller is partway through the execution of list.Clear(), for example, and suddenly the Load method finishes! What will happen?
Will the task immediately interrupt and run the list.Add(item); code? Or will it wait until the main thread is done with all scheduled CPU tasks (i.e., wait for Clear() to finish) before running the code?
Edit: Since I've basically answered this for myself below, here's a bonus question: Why? Why does it immediately interrupt instead of waiting for CPU bound operations to complete? It seems counter-intuitive to not queue itself up, which would be completely safe.
Edit: Here's a different example I tested myself. The comments indicate the order of execution. I am disappointed!
TaskCompletionSource<bool> source;

private async void buttonPrime_click(object sender, EventArgs e)
{
    source = new TaskCompletionSource<bool>(); // 1
    await source.Task; // 2
    source = null; // 4
}

private void buttonEnd_click(object sender, EventArgs e)
{
    source.SetResult(true); // 3
    MessageBox.Show(source.ToString()); // 5 and exception is thrown
}
No, it's not safe. However, also consider that the caller might have spawned a thread and passed the List to that child thread before calling your code, even in a non-async environment, which would have the same detrimental effect.
So, although it's not safe, there is nothing inherently thread-safe about receiving a List from a caller anyway - there is no way of knowing whether the list is actually being processed by threads other than your own.
Short answer
You always need to be careful using async.
Longer answer
It depends on your SynchronizationContext and TaskScheduler, and what you mean by "safe."
When your code awaits something, it creates a continuation and wraps it in a task, which is then posted to the current SynchronizationContext's TaskScheduler. The context will then determine when and where the continuation will run. The default scheduler simply uses the thread pool, but different types of applications can extend the scheduler and provide more sophisticated synchronization logic.
If you are writing an application that has no SynchronizationContext (for example, a console application, or anything in .NET Core), the continuation is simply put on the thread pool and could execute in parallel with your main thread. In this case you must use lock or synchronized objects such as ConcurrentDictionary<> instead of Dictionary<>, for anything other than local references or references closed over by the task.
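For example, a minimal sketch (hypothetical type and names) of protecting shared state when continuations resume on thread pool threads:
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Counter
{
    // Safe to mutate from continuations running on arbitrary thread pool threads.
    private readonly ConcurrentDictionary<string, int> _counts = new ConcurrentDictionary<string, int>();

    public async Task CountAsync(string key)
    {
        await Task.Delay(100); // no SynchronizationContext: resumes on the thread pool
        _counts.AddOrUpdate(key, 1, (_, current) => current + 1);
    }
}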
If you are writing a WinForms application, the continuations are put in the message queue, and will all execute on the main thread. This makes it safe to use non-synchronized objects. However, there are other worries, such as deadlocks. And of course if you spawn any threads, you must make sure they use lock or Concurrent objects, and any UI invocations must be marshaled back to the UI thread. Also, if you are nutty enough to write a WinForms application with more than one message pump (this is highly unusual) you'd need to worry about synchronizing any common variables.
If you are writing an ASP.NET application, the SynchronizationContext will ensure that, for a given request, no two threads are executing at the same time. Your continuation might run on a different thread (due to a performance feature known as thread agility), but they will always have the same SynchronizationContext and you are guaranteed that no two threads will access your variables at the same time (assuming, of course, they are not static, in which case they span across HTTP requests and must be synchronized). In addition, the pipeline will block parallel requests for the same session so that they execute in series, so your session state is also protected from threading concerns. However you still need to worry about deadlocks.
And of course you can write your own SynchronizationContext and assign it to your threads, meaning that you specify your own synchronization rules that will be used with async.
See also How do yield and await implement flow of control in .NET?
Assuming the "invalid access" occurs in LoadNextItem(): the Task will throw an exception. Since the context is captured, the exception is propagated on the caller's thread, so list.Add will not be reached.
So, no it's not thread-safe.
Yes, I think that could be a problem.
I would return the item and add it to the list on the main thread.
private async void GetIntButton(object sender, RoutedEventArgs e)
{
    List<int> Ints = new List<int>();
    Ints.Add(await GetInt());
}

private async Task<int> GetInt()
{
    await Task.Delay(100);
    return 1;
}
But you have to call it from an async method, so I'm not sure this would work either.

Long task after await in an async method

While the following question is generally applicable to all usage of async/await in C#, it refers to Json.NET. The JsonConvert.DeserializeObjectAsync() method has been marked as obsolete by the development team as it would be difficult to maintain and not of much use since most JSON files are small (Refer this).
I have some code following this structure:
public async Task<CarObj> GetCarAsync()
{
    string json = await GetJsonStringFromRestEndpoint();
    // At this point, we should already be on a separate thread since we have awaited a long running task.

    // 1 - Running this relatively long task on this thread should be fine since we're already on a different thread than the caller.
    CarObj obj = JsonConvert.DeserializeObject<CarObj>(json);

    // 2 - Would this be better for some reason?
    CarObj obj2 = await Task.Run(() => JsonConvert.DeserializeObject<CarObj>(json));
}
Would option 1 or 2 in the code above be the better solution here?
Arguably this is primarily opinion-based. But…
Assuming the library authors are correct, your first option is better. But not for the reason you think.
When the await GetJsonStringFromRestEndpoint() completes, then assuming the GetCarAsync() method was called from a thread with a synchronization context, the call to DeserializeObject<CarObj>(json); will happen on that same thread.
The reason calling the method synchronously isn't a problem isn't because you're on a different thread (you're not), but rather because as the library authors point out, the input data isn't likely to be large enough for there to be any significant performance problem. You can probably parse the entire JSON data and construct your CarObj value in less time than it takes to queue up the thread pool work item, context-switch to that thread, and then context-switch back.
In other words, don't use worker threads to perform computationally inexpensive work.
// At this point, we should already be on a separate thread since we have awaited a long running task.
No, the caller thread is called back (resumed) instead.
But - if this is not your intended behavior - I'd advise adding
.ConfigureAwait(false);
That would save some synchronization work, and afterwards you can reasonably expect to be on a thread pool thread.
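For illustration, a sketch of the method above with that change applied (assuming the continuation really has no need for the caller's context):
public async Task<CarObj> GetCarAsync()
{
    // The continuation is now free to run on a thread pool thread instead of
    // being marshalled back to the caller's context.
    string json = await GetJsonStringFromRestEndpoint().ConfigureAwait(false);
    return JsonConvert.DeserializeObject<CarObj>(json);
}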

Using ThreadPool and Task.Wait inside an Async/Await Method

I just encountered this code. I immediately started cringing and talking to myself (not nice things). The thing is I don't really understand why and can't reasonably articulate it. It just looks really bad to me - maybe I'm wrong.
public async Task<IHttpActionResult> ProcessAsync()
{
    var userName = Username.LogonName(User.Identity.Name);
    var user = await _user.GetUserAsync(userName);

    ThreadPool.QueueUserWorkItem((arg) =>
    {
        Task.Run(() => _billing.ProcessAsync(user)).Wait();
    });

    return Ok();
}
This code looks to me like it's needlessly creating threads with ThreadPool.QueueUserWorkItem and Task.Run. Plus, it looks like it has the potential to deadlock or create serious resource issues when under heavy load. Am I correct?
The _billing.ProcessAsync() method is awaitable (async), so I would expect that a simple "await" keyword would be the right thing to do, not all this other baggage.
I believe Scott is correct with his guess that ThreadPool.QueueUserWorkItem should have been HostingEnvironment.QueueBackgroundWorkItem. The call to Task.Run and Wait, however, are entirely nonsensical - they're pushing work to the thread pool and blocking a thread pool thread on it, when the code is already on the thread pool.
The _billing.ProcessAsync() method is awaitable (async), so I would expect that a simple "await" keyword would be the right thing to do, not all this other baggage.
I strongly agree.
However, this will change the behavior of the action. It will now wait until Billing.ProcessAsync is completed, whereas before it would return early. Note that returning early on ASP.NET is almost always a mistake - and I would say that for any "billing" processing it is even more certainly a mistake. So, replacing this mess with await will make the app more correct, but it will cause the ProcessAsync action to take longer to return to the client.
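If fire-and-forget really is intended, a sketch of the HostingEnvironment.QueueBackgroundWorkItem variant mentioned above (assuming ASP.NET on System.Web, 4.5.2 or later) would be:
public async Task<IHttpActionResult> ProcessAsync()
{
    var userName = Username.LogonName(User.Identity.Name);
    var user = await _user.GetUserAsync(userName);

    // Registers the work with the ASP.NET runtime (which can delay shutdown while
    // it runs) instead of blocking a thread pool thread with Task.Run(...).Wait().
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        await _billing.ProcessAsync(user);
    });

    return Ok();
}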
It's strange, but depending on what the author is trying to achieve, it seems ok to me to queue a work item in the thread pool from inside an async method.
This is not the same as starting a thread; it just queues an action to be run on a ThreadPool thread when one is free. So the async method (ProcessAsync) can continue and doesn't need to care about the result.
The weird thing is the code inside the lambda that gets enqueued to the ThreadPool. Not only the Task.Run() (which is superfluous and just adds unnecessary overhead), but calling an async method without waiting for it to finish is bad inside a method that is meant to be run by the ThreadPool, because control returns to the caller as soon as it awaits something.
So the ThreadPool eventually thinks this method is finished (and the thread is free for the next action in the queue), while actually the method wants to be resumed later.
This may lead to very undefined behaviour. This code may have been working (in certain circumstances), but I would not rely on it or use it as production code.
(The same goes for calling a not-awaited async method inside Task.Run(), as the Task "thinks" it's finished while the method actually wants to be resumed later).
As a solution, I'd propose simply awaiting that async method, too:
await _billing.ProcessAsync(user);
But of course, without any knowledge of the context of the code snippet, I can't guarantee anything. Note that this would change the behaviour: until now the code did not wait for _billing.ProcessAsync() to finish, but now it would. So maybe leaving out the await and just firing and forgetting
_billing.ProcessAsync(user);
may be good enough, too.

What is the use case for async/await? [duplicate]

This question already has answers here:
Brief explanation of Async/Await in .Net 4.5
(3 answers)
Closed 7 years ago.
C# offers multiple ways to perform asynchronous execution such as threads, futures, and async.
In what cases is async the best choice?
I have read many articles about the how and what of async, but so far I have not seen any article that discusses the why.
Initially I thought async was a built-in mechanism to create a future. Something like
async int foo(){ return ..complex operation..; }
var x = await foo();
do_something_else();
bar(x);
Where the call to 'await foo' would return immediately, and the use of 'x' would wait on the return value of 'foo'. async does not do this. If you want this behavior you can use the futures library: https://msdn.microsoft.com/en-us/library/Ff963556.aspx
The above example would instead be something like
int foo(){ return ..complex operation..; }
var x = Task.Factory.StartNew<int>(() => foo());
do_something_else();
bar(x.Result);
Which isn't as pretty as I would have hoped, but it works nonetheless.
So if you have a problem where you want to have multiple threads operate on the work then use futures or one of the parallel operations, such as Parallel.For.
async/await, then, is probably not meant for the use case of performing work in parallel to increase throughput.
async solves the problem of scaling an application for a large number of asynchronous events, such as I/O, when creating many threads is expensive.
Imagine a web server where requests are processed immediately as they come in. The processing happens on a single thread where every function call is synchronous. Fully processing a request might take a few seconds, which means that an entire thread is consumed until the processing is complete.
A naive approach to server programming is to spawn a new thread for each request. In this way it does not matter how long each thread takes to complete because no thread will block any other. The problem with this approach is that threads are not cheap. The underlying operating system can only create so many threads before running out of memory, or some other kind of resource. A web server that uses 1 thread per request will probably not be able to scale past a few hundred/thousand requests per second. The c10k challenge asks that modern servers be able to scale to 10,000 simultaneous users. http://www.kegel.com/c10k.html
A better approach is to use a thread pool where the number of threads in existence is more or less fixed (or at least, does not expand past some tolerable maximum). In that scenario only a fixed number of threads are available for processing the incoming requests. If there are more requests than there are threads available for processing then some requests must wait. If a thread is processing a request and has to wait on a long running I/O process then effectively the thread is not being utilized to its fullest extent, and the server throughput will be much less than it otherwise could be.
The question is now, how can we have a fixed number of threads but still use them efficiently? One answer is to 'cut up' the program logic so that when a thread would normally wait on an I/O process, instead it will start the I/O process but immediately become free for any other task that wants to execute. The part of the program that was going to execute after the I/O will be stored in a thing that knows how to keep executing later on.
For example, the original synchronous code might look like
void process(){
    string name = get_user_name();
    string address = look_up_address(name);
    string tax_forms = find_tax_form(address);
    render_tax_form(name, address, tax_forms);
}
Where look_up_address and find_tax_form have to talk to a database and/or make requests to other websites.
The asynchronous version might look like
void process(){
    string name = get_user_name();
    invoke_after(() => look_up_address(name), (address) => {
        invoke_after(() => find_tax_form(address), (tax_forms) => {
            render_tax_form(name, address, tax_forms);
        });
    });
}
This is continuation passing style, where the next thing to do is passed as the second lambda to a function that will not block the current thread when the blocking operation (in the first lambda) is invoked. This works, but it quickly becomes very ugly and hard to follow the program logic.
What the programmer has manually done in splitting up their program can be automatically done by async/await. Any time there is a call to an I/O function the program can mark that function call with await to inform the caller of the program that it can continue to do other things instead of just waiting.
async void process(){
    string name = get_user_name();
    string address = await look_up_address(name);
    string tax_forms = await find_tax_form(address);
    render_tax_form(name, address, tax_forms);
}
The thread that executes process will break out of the function when it gets to look_up_address and continue to do other work, such as processing other requests. When look_up_address has completed and process is ready to continue, some thread (or the same thread) will pick up where the last thread left off and execute the next line, find_tax_form(address).
Since my current understanding is that async is about managing threads, I don't believe that async makes a lot of sense for UI programming. Generally UIs will not have that many simultaneous events that need to be processed. The use case for async with UIs is preventing the UI thread from being blocked. Even though async can be used with a UI, I would find it dangerous because omitting an await on some long-running function, due to either an accident or forgetfulness, would cause the UI to block.
async void button_callback(){
    await do_something_long();
    ....
}
This code won't block the UI because it uses an await for the long running function that it invokes. If later on another function call is added
async void button_callback(){
    do_another_thing();
    await do_something_long();
    ...
}
Where it wasn't clear to the programmer who added the call to do_another_thing just how long it would take to execute, the UI will now be blocked. It seems safer to just always execute all processing in a background thread.
void button_callback(){
    new Thread(() =>
    {
        do_another_thing();
        do_something_long();
        ....
    }).Start();
}
Now there is no possibility that the UI thread will be blocked, and the chances that too many threads will be created is very small.

Async/Await - is it *concurrent*?

I've been considering the new async stuff in C# 5, and one particular question came up.
I understand that the await keyword is a neat compiler trick/syntactic sugar to implement continuation passing, where the remainder of the method is broken up into Task objects and queued-up to be run in order, but where control is returned to the calling method.
My problem is that I've heard that currently this is all on a single thread. Does this mean that this async stuff is really just a way of turning continuation code into Task objects and then calling Application.DoEvents() after each task completes before starting the next one?
Or am I missing something? (This part of the question is rhetorical - I'm fully aware I'm missing something :) )
It is concurrent, in the sense that many outstanding asynchronous operations may be in progress at any time. It may or may not be multithreaded.
By default, await will schedule the continuation back to the "current execution context". The "current execution context" is defined as SynchronizationContext.Current if it is non-null, or TaskScheduler.Current if there's no SynchronizationContext.
You can override this default behavior by calling ConfigureAwait and passing false for the continueOnCapturedContext parameter. In that case, the continuation will not be scheduled back to that execution context. This usually means it will be run on a threadpool thread.
Unless you're writing library code, the default behavior is exactly what's desired. WinForms, WPF, and Silverlight (i.e., all the UI frameworks) supply a SynchronizationContext, so the continuation executes on the UI thread (and can safely access UI objects). ASP.NET also supplies a SynchronizationContext that ensures the continuation executes in the correct request context.
Other threads (including threadpool threads, Thread, and BackgroundWorker) do not supply a SynchronizationContext. So Console apps and Win32 services by default do not have a SynchronizationContext at all. In this situation, continuations execute on threadpool threads. This is why Console app demos using await/async include a call to Console.ReadLine/ReadKey or do a blocking Wait on a Task.
If you find yourself needing a SynchronizationContext, you can use AsyncContext from my Nito.AsyncEx library; it basically just provides an async-compatible "main loop" with a SynchronizationContext. I find it useful for Console apps and unit tests (VS2012 now has built-in support for async Task unit tests).
For more information about SynchronizationContext, see my Feb MSDN article.
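For example, a minimal console sketch using that AsyncContext (assuming the Nito.AsyncEx package; MainAsync is a made-up name):
using System.Threading.Tasks;
using Nito.AsyncEx;

class Program
{
    static void Main(string[] args)
    {
        // Provides a single-threaded SynchronizationContext, so continuations
        // inside MainAsync resume on this same "main loop" thread.
        AsyncContext.Run(() => MainAsync(args));
    }

    static async Task MainAsync(string[] args)
    {
        await Task.Delay(1000);
    }
}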
At no time is DoEvents or an equivalent called; rather, control flow returns all the way out, and the continuation (the rest of the function) is scheduled to be run later. This is a much cleaner solution because it doesn't cause reentrancy issues like you would have if DoEvents was used.
The whole idea behind async/await is that it performs continuation passing nicely and doesn't allocate a new thread for the operation. The continuation may occur on a new thread, or it may continue on the same thread.
The real "meat" (the asynchronous part) of async/await is normally done separately, and the communication back to the caller is done through TaskCompletionSource. As written here http://blogs.msdn.com/b/pfxteam/archive/2009/06/02/9685804.aspx
The TaskCompletionSource type serves two related purposes, both alluded to by its name: it is a source for creating a task, and the source for that task’s completion. In essence, a TaskCompletionSource acts as the producer for a Task and its completion.
and the example is quite clear:
public static Task<T> RunAsync<T>(Func<T> function)
{
    if (function == null) throw new ArgumentNullException("function");
    var tcs = new TaskCompletionSource<T>();
    ThreadPool.QueueUserWorkItem(_ =>
    {
        try
        {
            T result = function();
            tcs.SetResult(result);
        }
        catch (Exception exc) { tcs.SetException(exc); }
    });
    return tcs.Task;
}
Through the TaskCompletionSource you have access to a Task object that you can await, but it isn't through the async/await keywords that you created the multithreading.
Note that once many "slow" functions have been converted to the async/await syntax, you won't need to use TaskCompletionSource very much; they'll use it internally (but in the end, somewhere there must be a TaskCompletionSource to produce an asynchronous result).
The way I like to explain it is that the "await" keyword simply waits for a task to finish but yields execution to the calling thread while it waits. It then returns the result of the Task and continues from the statement after the "await" keyword once the Task is complete.
Some people, I have noticed, seem to think that the Task runs on the same thread as the calling thread; this is incorrect, and can be proved by trying to alter a Windows.Forms GUI element from within the method that await calls. However, the continuation is run on the calling thread wherever possible.
It's just a neat way of not having to have callback delegates or event handlers for when the Task completes.
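A small sketch of that experiment (hypothetical WinForms handler; button1_Click and label1 are made-up names):
private async void button1_Click(object sender, EventArgs e)
{
    await Task.Run(() =>
    {
        // Runs on a thread pool thread, not the UI thread:
        // label1.Text = "inside"; // would throw a cross-thread exception
        Thread.Sleep(500); // simulate work
    });

    // The continuation resumes on the calling (UI) thread, so this is safe.
    label1.Text = "done";
}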
I feel like this question needs a simpler answer for people. So I'm going to oversimplify.
The fact is, if you start the Tasks and save them instead of awaiting them right away, then async/await is "concurrent".
var a = await LongTask1(x);
var b = await LongTask2(y);
var c = ShortTask(a, b);
is not concurrent. LongTask1 will complete before LongTask2 starts.
var a = LongTask1(x);
var b = LongTask2(y);
var c = ShortTask(await a, await b);
is concurrent.
While I also urge people to get a deeper understanding and read up on this, you can use async/await for concurrency, and it's pretty simple.
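The same idea is often written with Task.WhenAll (a sketch, assuming LongTask1 and LongTask2 return Task<T>):
var taskA = LongTask1(x);
var taskB = LongTask2(y);          // both tasks are now in flight
await Task.WhenAll(taskA, taskB);  // wait for both without blocking either
var c = ShortTask(taskA.Result, taskB.Result);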
