In normal C# we write
int DoSomething() { /*...*/ }
lock(mutex)
{
return DoSomething();
}
to ensure that the mutex is released in all cases.
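For reference, the lock statement above expands to roughly the following Monitor-based pattern, which is why the attempt below uses Monitor.Enter/Monitor.Exit explicitly:
Monitor.Enter(mutex);
try
{
    return DoSomething();
}
finally
{
    Monitor.Exit(mutex);
}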
But if the signature of DoSomething changes to
Task<int> DoSomethingAsync() { /*...*/ }
Does the following code
return Task.Factory.StartNew(() =>
{
Monitor.Enter(mutex);
return DoSomethingAsync();
}).Unwrap().ContinueWith(t =>
{
Monitor.Exit(mutex);
return t;
}).Unwrap();
do a similar thing? Is it guaranteed to release the mutex whenever it was entered? Is there any simpler way to do so? (I am not able to use the async keyword, so please keep thinking in terms of the TPL only.)
You can't use Monitor in that way because Monitor is thread-affine and in your case the task and continuation may run on different threads.
The appropriate synchronization mechanism to use is a SemaphoreSlim (which isn't thread-affine) set to 1:
public SemaphoreSlim _semaphore = new SemaphoreSlim(1,1);
_semaphore.Wait();
return DoSomethingAsync().ContinueWith(t =>
{
_semaphore.Release();
return t.Result;
});
As long as you don't use one of the TaskContinuationOptions such as OnlyOnFaulted or OnlyOnCanceled, the continuation will always run after the task has completed, and so the semaphore is guaranteed to be released.
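If you also want to cover the case where DoSomethingAsync throws synchronously, before it ever returns a task, a minimal TPL-only sketch could look like this (the LockedDoSomethingAsync name is just for illustration):
public Task<int> LockedDoSomethingAsync()
{
    _semaphore.Wait();
    Task<int> inner;
    try
    {
        inner = DoSomethingAsync();
    }
    catch
    {
        // DoSomethingAsync threw before producing a task, so release here
        _semaphore.Release();
        throw;
    }
    return inner.ContinueWith(t =>
    {
        _semaphore.Release();
        return t.Result; // rethrows if the inner task faulted
    });
}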
Related
I'm trying to find out the difference between SemaphoreSlim's Wait and WaitAsync, used in this kind of context:
private SemaphoreSlim semaphore = new SemaphoreSlim(1);
public async Task<string> Get()
{
// What's the difference between using Wait and WaitAsync here?
this.semaphore.Wait(); // await this.semaphore.WaitAsync()
string result;
try {
result = await this.GetStringAsync();
}
finally {
this.semaphore.Release();
}
return result;
}
If you have an async method, you want to avoid any blocking calls if possible. SemaphoreSlim.Wait() is a blocking call. So what will happen if you use Wait() and the semaphore is not available at the moment? It will block the caller, which is a very unexpected thing for async methods:
// this will _block_ despite calling async method and using await
// until semaphore is available
var myTask = Get();
var myString = await Get(); // will block also
If you use WaitAsync, it will not block the caller if the semaphore is not available at the moment:
var myTask = Get();
// can continue with other things, even if semaphore is not available
Also, beware of using regular locking mechanisms together with async/await. After doing this:
result = await this.GetStringAsync();
you may be on another thread after the await, which means that when you try to release the lock you acquired, it might fail, because you are trying to release it from a different thread than the one that acquired it. Note this is NOT the case for a semaphore, because it does not have thread affinity (unlike constructs such as Monitor.Enter, ReaderWriterLock and so on).
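Putting both points together, the method from the question could be written like this sketch: WaitAsync instead of Wait, the await inside the try, and the release in finally (the semaphore does not care which thread releases it):
public async Task<string> Get()
{
    await this.semaphore.WaitAsync();
    try
    {
        return await this.GetStringAsync();
    }
    finally
    {
        this.semaphore.Release();
    }
}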
The difference is that Wait blocks the current thread until the semaphore is released, while WaitAsync does not.
I have the following code:
private static async Task SaveAsync()
{
if (Monitor.TryEnter(_SaveLock, 1000))
{
try
{
Logging.Info(" writing bitmex data to database");
await SomelenghthyDbUpdate1.ConfigureAwait(false);
await SomelenghthyDbUpdate2.ConfigureAwait(false);
await SomelenghthyDbUpdate3.ConfigureAwait(false);
await SomelenghthyDbUpdate4.ConfigureAwait(false);
}
finally
{
Monitor.Exit(_SaveLock);
}
}
}
First, ConfigureAwait(false) vs. true does yield a real speed difference.
So, since the finally block can execute on a different thread than the one that entered the lock, there is an issue when releasing it.
I am trying to prevent two save operations from happening at the same time, since they're event-driven AND can be skipped if needed, as they happen periodically.
Another option I was thinking about is to make an array of tasks and, in the caller thread, do a Task.WaitAll(tasks). In that scenario, is it guaranteed I would still be on the same thread at exit?
But is there a clean solution to this problem? Maybe setting a flag through a lock?
As per the docs: "The SemaphoreSlim class doesn't enforce thread or task identity on calls to the Wait, WaitAsync, and Release methods."
private static SemaphoreSlim sem = new SemaphoreSlim(1);
private static async Task SaveAsync()
{
if (await sem.WaitAsync(TimeSpan.FromSeconds(1))) // can also be combined with a cancellation token
{
try
{
Logging.Info(" writing bitmex data to database");
await SomelenghthyDbUpdate1.ConfigureAwait(false);
await SomelenghthyDbUpdate2.ConfigureAwait(false);
await SomelenghthyDbUpdate3.ConfigureAwait(false);
await SomelenghthyDbUpdate4.ConfigureAwait(false);
}
finally
{
sem.Release();
}
}
}
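Since the question says overlapping saves can simply be skipped, one variation (just a sketch, reusing the same sem field) is to try the semaphore with a zero timeout and return immediately if a save is already running:
private static async Task SaveAsync()
{
    if (!await sem.WaitAsync(0))
        return; // a save is already in progress; skip this one
    try
    {
        await SomelenghthyDbUpdate1.ConfigureAwait(false);
        // ... remaining updates ...
    }
    finally
    {
        sem.Release();
    }
}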
Given is a very common threading scenario:
Declaration
private Thread _thread;
private bool _isRunning = false;
Start
_thread = new Thread(() => NeverEndingProc());
_thread.Start();
Method
private void NeverEndingProc() {
while(_isRunning) {
do();
}
}
Possibly used in an asynchronous TCP listener that awaits callbacks until it gets stopped by letting the thread run out (_isRunning = false).
Now I'm wondering: Is it possible to do the same thing with Task? Using a CancellationToken? Or are Tasks only for procedures that are expected to end and report status?
You can certainly do this just by passing NeverEndingProc to Task.Run.
However, there is one important difference in functionality: if an exception is propagated out of NeverEndingProc on a bare Thread, it will crash the process. If it is in a Task, it will raise TaskScheduler.UnobservedTaskException and then be silently ignored (as of .NET 4.5).
That said, there are alternatives you can explore. Reactive Extensions, for example, pretty much removes any need for the "infinite thread loop".
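A minimal sketch of the first point, using the names from the question (the ContinueWith call is just one way to observe faults so they aren't silently ignored):
_isRunning = true;
Task loop = Task.Run(() => NeverEndingProc());
loop.ContinueWith(t => Console.WriteLine(t.Exception),
                  TaskContinuationOptions.OnlyOnFaulted);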
One reason to use Task + CancellationToken is to make the individual processes and their cancellation more independent of each other. In your example, notice how NeverEndingProc needs a direct reference to the _isRunning field in the same class. Instead, you could accept an external token:
Start:
public void StartNeverEndingProc(CancellationToken token) {
Task.Factory.StartNew(() => NeverEndingProc(token), token);
}
Method:
private void NeverEndingProc(CancellationToken token) {
while (true) {
token.ThrowIfCancellationRequested();
do();
}
}
Now cancellation is managed by the caller, and can be applied to multiple independent tasks:
var instance = new YourClass();
var cts = new CancellationTokenSource();
instance.StartNeverEndingProc(cts.Token); // start your task
StartOtherProc(cts.Token); // start another task
cts.Cancel(); // cancel both
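One extra note (my assumption, not from the answers above): for a loop that is expected to run for a long time, you can hint the scheduler with TaskCreationOptions.LongRunning so it doesn't tie up a regular thread-pool thread:
Task.Factory.StartNew(() => NeverEndingProc(token), token,
    TaskCreationOptions.LongRunning, TaskScheduler.Default);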
I've discovered that TaskCompletionSource.SetResult() invokes the code awaiting the task before returning. In my case that results in a deadlock.
This is a simplified version that is started in an ordinary Thread
void ReceiverRun()
{
while (true)
{
var msg = ReadNextMessage();
TaskCompletionSource<Response> task = requests[msg.RequestID];
if(msg.Error == null)
task.SetResult(msg);
else
task.SetException(new Exception(msg.Error));
}
}
The "async" part of the code looks something like this.
await SendAwaitResponse("first message");
SendAwaitResponse("second message").Wait();
The Wait is actually nested inside non-async calls.
SendAwaitResponse (simplified):
public static Task<Response> SendAwaitResponse(string msg)
{
var t = new TaskCompletionSource<Response>();
requests.Add(GetID(msg), t);
stream.Write(msg);
return t.Task;
}
My assumption was that the second SendAwaitResponse would execute in a ThreadPool thread but it continues in the thread created for ReceiverRun.
Is there any way to set the result of a task without continuing its awaited code?
The application is a console application.
I've discovered that TaskCompletionSource.SetResult() invokes the code awaiting the task before returning. In my case that results in a deadlock.
Yes, I have a blog post documenting this (AFAIK it's not documented on MSDN). The deadlock happens because of two things:
There's a mixture of async and blocking code (i.e., an async method is calling Wait).
Task continuations are scheduled using TaskContinuationOptions.ExecuteSynchronously.
I recommend starting with the simplest possible solution: removing the first thing (1). I.e., don't mix async and Wait calls:
await SendAwaitResponse("first message");
SendAwaitResponse("second message").Wait();
Instead, use await consistently:
await SendAwaitResponse("first message");
await SendAwaitResponse("second message");
If you need to, you can Wait at an alternative point further up the call stack (not in an async method).
That's my most-recommended solution. However, if you want to try removing the second thing (2), you can do a couple of tricks: either wrap the SetResult in a Task.Run to force it onto a separate thread (my AsyncEx library has *WithBackgroundContinuations extension methods that do exactly this), or give your thread an actual context (such as my AsyncContext type) and specify ConfigureAwait(false), which will cause the continuation to ignore the ExecuteSynchronously flag.
But those solutions are much more complex than just separating the async and blocking code.
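For completeness, the Task.Run trick from option (2) would look roughly like this inside ReceiverRun, forcing each completion (and therefore the awaiting continuation) onto a thread-pool thread:
if (msg.Error == null)
    Task.Run(() => task.SetResult(msg));
else
    Task.Run(() => task.SetException(new Exception(msg.Error)));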
As a side note, take a look at TPL Dataflow; it sounds like you may find it useful.
As your app is a console app, it runs on the default synchronization context, where the await continuation callback is invoked on the same thread on which the awaited task was completed. If you want to switch threads after await SendAwaitResponse, you can do so with await Task.Yield():
await SendAwaitResponse("first message");
await Task.Yield();
// will be continued on a pool thread
// ...
SendAwaitResponse("second message").Wait(); // so no deadlock
You could further improve this by storing Thread.CurrentThread.ManagedThreadId inside Task.Result and comparing it to the current thread's id after the await. If you're still on the same thread, do await Task.Yield().
While I understand that SendAwaitResponse is a simplified version of your actual code, it's still completely synchronous inside (the way you showed it in your question). Why would you expect any thread switch in there?
Anyway, you should probably redesign your logic so it doesn't make assumptions about which thread you are currently on. Avoid mixing await and Task.Wait(), and make all of your code asynchronous. Usually, it's possible to stick with just one Wait() somewhere at the top level (e.g. inside Main).
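For a console app, that usually means a single blocking wait in Main and an async "real main" (just a sketch; MainAsync is a hypothetical name):
static void Main(string[] args)
{
    MainAsync().Wait(); // the only blocking Wait() in the app
}
static async Task MainAsync()
{
    await SendAwaitResponse("first message");
    await SendAwaitResponse("second message");
}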
[EDITED] Calling task.SetResult(msg) from ReceiverRun actually transfers the control flow to the point where you await on the task - without a thread switch, because of the default synchronization context's behavior. So, your code which does the actual message processing is taking over the ReceiverRun thread. Eventually, SendAwaitResponse("second message").Wait() is called on the same thread, causing the deadlock.
Below is a console app code, modeled after your sample. It uses await Task.Yield() inside ProcessAsync to schedule the continuation on a separate thread, so the control flow returns to ReceiverRun and there's no deadlock.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApplication
{
class Program
{
class Worker
{
public struct Response
{
public string message;
public int threadId;
}
CancellationToken _token;
readonly ConcurrentQueue<string> _messages = new ConcurrentQueue<string>();
readonly ConcurrentDictionary<string, TaskCompletionSource<Response>> _requests = new ConcurrentDictionary<string, TaskCompletionSource<Response>>();
public Worker(CancellationToken token)
{
_token = token;
}
string ReadNextMessage()
{
// using Thread.Sleep(100) for test purposes here,
// should be using ManualResetEvent (or similar synchronization primitive),
// depending on how messages arrive
string message;
while (!_messages.TryDequeue(out message))
{
Thread.Sleep(100);
_token.ThrowIfCancellationRequested();
}
return message;
}
public void ReceiverRun()
{
LogThread("Enter ReceiverRun");
while (true)
{
var msg = ReadNextMessage();
LogThread("ReadNextMessage: " + msg);
var tcs = _requests[msg];
tcs.SetResult(new Response { message = msg, threadId = Thread.CurrentThread.ManagedThreadId });
_token.ThrowIfCancellationRequested(); // this is how we terminate the loop
}
}
Task<Response> SendAwaitResponse(string msg)
{
LogThread("SendAwaitResponse: " + msg);
var tcs = new TaskCompletionSource<Response>();
_requests.TryAdd(msg, tcs);
_messages.Enqueue(msg);
return tcs.Task;
}
public async Task ProcessAsync()
{
LogThread("Enter Worker.ProcessAsync");
var task1 = SendAwaitResponse("first message");
await task1;
LogThread("result1: " + task1.Result.message);
// avoid deadlock for task2.Wait() with Task.Yield()
// comment this out and task2.Wait() will dead-lock
if (task1.Result.threadId == Thread.CurrentThread.ManagedThreadId)
await Task.Yield();
var task2 = SendAwaitResponse("second message");
task2.Wait();
LogThread("result2: " + task2.Result.message);
var task3 = SendAwaitResponse("third message");
// still on the same thread as with result 2, no deadlock for task3.Wait()
task3.Wait();
LogThread("result3: " + task3.Result.message);
var task4 = SendAwaitResponse("fourth message");
await task4;
LogThread("result4: " + task4.Result.message);
// avoid deadlock for task5.Wait() with Task.Yield()
// comment this out and task5.Wait() will dead-lock
if (task4.Result.threadId == Thread.CurrentThread.ManagedThreadId)
await Task.Yield();
var task5 = SendAwaitResponse("fifth message");
task5.Wait();
LogThread("result5: " + task5.Result.message);
LogThread("Leave Worker.ProcessAsync");
}
public static void LogThread(string message)
{
Console.WriteLine("{0}, thread: {1}", message, Thread.CurrentThread.ManagedThreadId);
}
}
static void Main(string[] args)
{
Worker.LogThread("Enter Main");
var cts = new CancellationTokenSource(5000); // cancel after 5s
var worker = new Worker(cts.Token);
Task receiver = Task.Run(() => worker.ReceiverRun());
Task main = worker.ProcessAsync();
try
{
Task.WaitAll(main, receiver);
}
catch (Exception e)
{
Console.WriteLine("Exception: " + e.Message);
}
Worker.LogThread("Leave Main");
Console.ReadLine();
}
}
}
This is not much different from doing Task.Run(() => task.SetResult(msg)) inside ReceiverRun. The only advantage I can think of is that you have an explicit control over when to switch threads. This way, you can stay on the same thread for as long as possible (e.g., for task2, task3, task4, but you still need another thread switch after task4 to avoid a deadlock on task5.Wait()).
Both solutions would eventually make the thread pool grow, which is bad in terms of performance and scalability.
Now, if we replace task.Wait() with await task everywhere inside ProcessAsync in the above code, we will not have to use await Task.Yield and there still will be no deadlocks. However, the whole chain of await calls after the 1st await task1 inside ProcessAsync will actually be executed on the ReceiverRun thread. As long as we don't block this thread with other Wait()-style calls and don't do a lot of CPU-bound work as we're processing messages, this approach might work OK (asynchronous IO-bound await-style calls still should be OK, and they may actually trigger an implicit thread switch).
That said, I think you'd need a separate thread with a serializing synchronization context installed on it for processing messages (similar to WindowsFormsSynchronizationContext). That's where your asynchronous code containing awaits should run. You'd still need to avoid using Task.Wait on that thread. And if an individual message processing takes a lot of CPU-bound work, you should use Task.Run for such work. For async IO-bound calls, you could stay on the same thread.
You may want to look at ActionDispatcher/ActionDispatcherSynchronizationContext from @StephenCleary's Nito Asynchronous Library for your asynchronous message processing logic. Hopefully, Stephen jumps in and provides a better answer.
"My assumption was that the second SendAwaitResponse would execute in a ThreadPool thread but it continues in the thread created for ReceiverRun."
It depends entirely on what you do within SendAwaitResponse. Asynchrony and concurrency are not the same thing.
Check out: C# 5 Async/Await - is it *concurrent*?
A little late to the party, but here's my solution, which I think adds value.
I've been struggling with this as well; I solved it by capturing the SynchronizationContext in the method that creates the awaited task.
It would look something like:
// just a default sync context
private readonly SynchronizationContext _defaultContext = new SynchronizationContext();
void ReceiverRun()
{
while (true) // <-- I would replace this with a cancellation token
{
var msg = ReadNextMessage();
TaskWithContext<Response> task = requests[msg.RequestID];
// if the request wasn't created on a WinForms/WPF thread, the captured context is null,
// so we fall back to our default context (which posts to the thread pool)
var context = task.Context ?? _defaultContext;
// execute it on the context which was captured where it was added. So it won't get completed on this thread.
context.Post(state =>
{
if (msg.Error == null)
task.TaskCompletionSource.SetResult(msg);
else
task.TaskCompletionSource.SetException(new Exception(msg.Error));
}, null);
}
}
public static Task<Response> SendAwaitResponse(string msg)
{
// The key is here! Save the current synchronization context.
var t = new TaskWithContext<Response>(SynchronizationContext.Current);
requests.Add(GetID(msg), t);
stream.Write(msg);
return t.TaskCompletionSource.Task;
}
// class to hold a task and context
public class TaskWithContext<TResult>
{
public SynchronizationContext Context { get; }
public TaskCompletionSource<TResult> TaskCompletionSource { get; } = new TaskCompletionSource<TResult>();
public TaskWithContext(SynchronizationContext context)
{
Context = context;
}
}
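A note on the fallback (my observation, not part of the original post): the base SynchronizationContext.Post implementation queues the callback to the thread pool, so with _defaultContext the completion runs on a pool thread rather than on the ReceiverRun thread, which is roughly equivalent to:
ThreadPool.QueueUserWorkItem(_ =>
{
    if (msg.Error == null)
        task.TaskCompletionSource.SetResult(msg);
    else
        task.TaskCompletionSource.SetException(new Exception(msg.Error));
});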
I want to use lock or a similar synchronization to protect a critical section. At the same time I want to listen to a CancellationToken.
Right now I'm using a Mutex like this, but a Mutex doesn't perform very well. Can I use any of the other synchronization classes (including the new .NET 4.0 ones) instead of the mutex?
WaitHandle.WaitAny(new[] { CancelToken.WaitHandle, _mutex});
CancelToken.ThrowIfCancellationRequested();
Take a look at the new .NET 4.0 SemaphoreSlim class. It provides the SemaphoreSlim.Wait(CancellationToken) method:
Blocks the current thread until it can enter the SemaphoreSlim, while
observing a CancellationToken
From one point of view, using a semaphore in such a simple case could be overkill, because it was originally designed to provide access for multiple threads, but you might still find it useful.
EDIT: The code snippet
CancellationToken token = new CancellationToken();
SemaphoreSlim semaphore = new SemaphoreSlim(1,1);
bool tokenCanceled = false;
try {
try {
// block section entrance for other threads
semaphore.Wait(token);
}
catch (OperationCanceledException) {
// The token was canceled and the semaphore was NOT entered...
tokenCanceled = true;
}
// run the critical section only if the semaphore was actually entered
if (!tokenCanceled)
{
// critical section code
// ...
}
}
finally {
if (!tokenCanceled)
semaphore.Release();
}
private object _lockObject = new object();
lock (_lockObject)
{
// critical section
using (token.Register(() => token.ThrowIfCancellationRequested()))
{
// Do something that might need cancelling.
}
}
Calling Cancel() on a token will result in ThrowIfCancellationRequested() being invoked, as that is what is hooked up to the Register callback. You can put whatever cancellation logic you want in here. This approach is great because you can cancel blocking calls by forcing the conditions that will cause the call to complete.
ThrowIfCancellationRequested throws an OperationCanceledException. You need to handle this on the calling thread or your whole process could be brought down. A simple way of doing this is by starting your task using the Task class, which will aggregate all the exceptions for you to handle on the calling thread:
try
{
var t = new Task(() => LongRunningMethod());
t.Start();
t.Wait();
}
catch (AggregateException ex)
{
ex.Handle(x => true); // this effectively swallows any exceptions
}
Some good stuff here covering co-operative cancellation
You can use Monitor.TryEnter with a timeout to wait for the lock and check periodically for cancellation:
private bool TryEnterSyncLock(object syncObject)
{
while(!Monitor.TryEnter(syncObject, TimeSpan.FromMilliseconds(100)))
{
if (cts_.IsCancellationRequested)
return false;
}
return true;
}
Note that I would not recommend this in high-contention situations, as it can impact performance. I would use it as a safety mechanism against deadlocks in case you cannot use SemaphoreSlim, since SemaphoreSlim has different same-thread re-entrancy semantics than Monitor.Enter.
After this method returns true, the lock on syncObject has to be released using Monitor.Exit.
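A usage sketch of the caller's side (the critical-section body and the syncObject field are placeholders):
if (TryEnterSyncLock(syncObject))
{
    try
    {
        // critical section
    }
    finally
    {
        Monitor.Exit(syncObject);
    }
}
else
{
    // cancellation was requested while waiting for the lock
}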