Async version of Monitor.Pulse/Wait - C#

I'm trying to optimize an async version of something similar (in basic functionality) to the Monitor.Wait and Monitor.Pulse methods. The idea is to use this from an async method.
Requirements:
1) I have one Task running that is in charge of waiting until someone pulses my monitor.
2) That task may perform a complex (i.e. time-consuming) operation. In the meantime, the Pulse method could be called several times without doing anything (as the main task is already doing some processing).
3) Once the main task finishes, it starts to Wait again until another Pulse comes in.
The worst-case scenario is Wait>Pulse>Wait>Pulse>Wait..., but usually I have tens or hundreds of pulses for every wait.
So, I have the following class (working, but I think it can be optimized a bit based on my requirements):
internal sealed class Awaiter
{
private readonly ConcurrentQueue<TaskCompletionSource<byte>> _waiting = new ConcurrentQueue<TaskCompletionSource<byte>>();
public void Pulse()
{
TaskCompletionSource<byte> tcs;
if (_waiting.TryDequeue(out tcs))
{
tcs.TrySetResult(1);
}
}
public Task Wait()
{
TaskCompletionSource<byte> tcs;
if (_waiting.TryPeek(out tcs))
{
return tcs.Task;
}
tcs = new TaskCompletionSource<byte>();
_waiting.Enqueue(tcs);
return tcs.Task;
}
}
The problem with the above class is the baggage I'm using just for synchronization. Since I will be waiting from one and only one thread, there is really no need to have a ConcurrentQueue, as I always have only one item in it.
So, I simplified it a bit and wrote the following:
internal sealed class Awaiter2
{
private readonly object _mutex = new object();
private TaskCompletionSource<byte> _waiting;
public void Pulse()
{
var w = _waiting;
if (w == null)
{
return;
}
lock (_mutex)
{
w = _waiting;
if (w == null)
{
return;
}
_waiting = null;
w.TrySetResult(1);
}
}
public Task Wait()
{
var w = _waiting;
if (w != null)
{
return w.Task;
}
lock (_mutex)
{
w = _waiting;
if (w != null)
{
return w.Task;
}
w = _waiting = new TaskCompletionSource<byte>();
return w.Task;
}
}
}
That new version also works, but I still think it can be optimized a bit more by removing the locks.
I'm looking for suggestions on how I can optimize the second version. Any ideas?

If you don't need the Wait() call to return a Task, but are content with being able to await Wait(), then you can implement a custom awaiter/awaitable.
See this link for an overview of the await pattern used by the compiler.
When implementing custom awaitables you will just be dealing with delegates, and the actual "waiting" is left up to you. When you want to "await" a condition, it is often possible to keep a list of pending continuations and invoke those continuations whenever the condition becomes true. You just need to deal with the synchronization that arises because await can be called from arbitrary threads. If you know that you'll only ever await from one thread (say the UI thread), then you don't need any synchronization at all!
I'll try to give you a lock-free implementation but no guarantees that it is correct. If you don't understand why all race conditions are safe you should not use it and implement the async/await protocol using lock-statements or other techniques which you know how to debug.
public sealed class AsyncMonitor
{
private PulseAwaitable _currentWaiter;
public AsyncMonitor()
{
_currentWaiter = new PulseAwaitable();
}
public void Pulse()
{
// Optimize for the case when calling Pulse() when nobody is waiting.
//
// This has an inherent race condition when calling Pulse() and Wait()
// at the same time. The question this was written for did not specify
// how to resolve this, so it is a valid answer to tolerate either
// result and just allow the race condition.
//
if (_currentWaiter.HasWaitingContinuations)
Interlocked.Exchange(ref _currentWaiter, new PulseAwaitable()).Complete();
}
public PulseAwaitable Wait()
{
return _currentWaiter;
}
}
// This class maintains a list of waiting continuations to be executed when
// the owning AsyncMonitor is pulsed.
public sealed class PulseAwaitable : INotifyCompletion
{
// List of pending 'await' delegates.
private Action _pendingContinuations;
// Flag whether we have been pulsed. This is the primary variable
// around which we build the lock free synchronization.
private int _pulsed;
// AsyncMonitor creates instances as required.
internal PulseAwaitable()
{
}
// This check has a race condition which is tolerated.
// It is used to optimize for cases when the PulseAwaitable has no waiters.
internal bool HasWaitingContinuations
{
get { return Volatile.Read(ref _pendingContinuations) != null; }
}
// Called by the AsyncMonitor when it is pulsed.
internal void Complete()
{
// Set pulsed flag first because that is the variable around which
// we build the lock free protocol. Everything else this method does
// is free to have race conditions.
Interlocked.Exchange(ref _pulsed, 1);
// Execute pending continuations. This is free to race with calls
// of OnCompleted seeing the pulsed flag first.
Interlocked.Exchange(ref _pendingContinuations, null)?.Invoke();
}
#region Awaitable
// There is no need to separate the awaiter from the awaitable
// so we use one class to implement both parts of the protocol.
public PulseAwaitable GetAwaiter()
{
return this;
}
#endregion
#region Awaiter
public bool IsCompleted
{
// The return value of this property does not need to be up to date so we could omit the 'Volatile.Read' if we wanted to.
// What is not allowed is returning "true" when we are not completed, but this cannot happen since we never transition back to incomplete.
get { return Volatile.Read(ref _pulsed) == 1; }
}
public void OnCompleted(Action continuation)
{
// Protected against manual invocations. The compiler-generated code never passes null so you can remove this check in release builds if you want to.
if (continuation == null)
throw new ArgumentNullException(nameof(continuation));
// Standard pattern of maintaining a lock free immutable variable: read-modify-write cycle.
// See for example here: https://blogs.msdn.microsoft.com/oldnewthing/20140516-00/?p=973
// Again the 'Volatile.Read' is not really needed since outdated values will be detected at the first iteration.
var oldContinuations = Volatile.Read(ref _pendingContinuations);
for (;;)
{
var newContinuations = (oldContinuations + continuation);
var actualContinuations = Interlocked.CompareExchange(ref _pendingContinuations, newContinuations, oldContinuations);
if (actualContinuations == oldContinuations)
break;
oldContinuations = actualContinuations;
}
// Now comes the interesting part where the actual lock free synchronization happens.
// If we are completed then somebody needs to clean up remaining continuations.
// This happens last so the first part of the method can race with pulsing us.
if (IsCompleted)
Interlocked.Exchange(ref _pendingContinuations, null)?.Invoke();
}
public void GetResult()
{
// This is just to check against manual calls. The compiler will never call this when IsCompleted is false.
// (Assuming your OnCompleted implementation is bug-free and you don't execute continuations before IsCompleted becomes true.)
if (!IsCompleted)
throw new NotSupportedException("Synchronous waits are not supported. Use 'await' or OnCompleted to wait asynchronously");
}
#endregion
}
You usually don't need to care which thread the continuations run on, because if they are async methods the compiler has already inserted code (in the continuation) to switch back to the right thread; there is no need to do it manually in every awaitable implementation.
[edit]
As a starting point for how a locking implementation can look, I'll provide one using a lock statement. It should be easy to replace it with a spinlock or some other locking technique. Because it uses a struct as the awaitable, it even has the advantage of making no additional allocation beyond the initial object. (There are of course allocations in the async/await machinery on the calling side, but you can't get rid of those.)
Note that the iteration counter increments only for every Wait+Pulse pair and will eventually overflow into negative numbers, but that is ok. We just need to bridge the time from the continuation being invoked until it can call GetResult. Four billion Wait+Pulse pairs should be plenty of time for any pending continuation to call its GetResult method. If you don't want that risk you could use a long or a Guid for a more unique iteration counter, but IMHO an int is good for almost all scenarios.
public sealed class AsyncMonitor
{
public struct Awaitable : INotifyCompletion
{
// We use a struct to avoid allocations. Note that this means the compiler will copy
// the struct around in the calling code when doing 'await', so for your own debugging
// sanity make all variables readonly.
private readonly AsyncMonitor _monitor;
private readonly int _iteration;
public Awaitable(AsyncMonitor monitor)
{
lock (monitor)
{
_monitor = monitor;
_iteration = monitor._iteration;
}
}
public Awaitable GetAwaiter()
{
return this;
}
public bool IsCompleted
{
get
{
// We use the iteration counter as an indicator when we should be complete.
lock (_monitor)
{
return _monitor._iteration != _iteration;
}
}
}
public void OnCompleted(Action continuation)
{
// The compiler never passes null, but someone may call it manually.
if (continuation == null)
throw new ArgumentNullException(nameof(continuation));
lock (_monitor)
{
// Not calling IsCompleted since we already have a lock.
if (_monitor._iteration == _iteration)
{
_monitor._waiting += continuation;
// Null the continuation to indicate to the following code
// that it has been handled and should not be executed here.
continuation = null;
}
}
// If we were already completed then we didn't null the continuation.
// (We should invoke the continuation outside of the lock because it
// may want to Wait/Pulse again and we want to avoid reentrancy issues.)
continuation?.Invoke();
}
public void GetResult()
{
lock (_monitor)
{
// Not calling IsCompleted since we already have a lock.
if (_monitor._iteration == _iteration)
throw new NotSupportedException("Synchronous wait is not supported. Use await or OnCompleted.");
}
}
}
private Action _waiting;
private int _iteration;
public AsyncMonitor()
{
}
public void Pulse(bool executeAsync)
{
Action execute = null;
lock (this)
{
// If nobody is waiting we don't need to increment the iteration counter.
if (_waiting != null)
{
_iteration++;
execute = _waiting;
_waiting = null;
}
}
// Important: execute the callbacks outside the lock because they might Pulse or Wait again.
if (execute != null)
{
// If the caller doesn't want inlined execution (maybe he holds a lock)
// then execute it on the thread pool.
if (executeAsync)
Task.Run(execute);
else
execute();
}
}
public Awaitable Wait()
{
return new Awaitable(this);
}
}
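For completeness, here is a minimal usage sketch showing how the wait/pulse loop from the question could sit on top of this class. The worker type and the processing method are illustrative names, not part of the original code; the same shape also works with the lock-free version above, whose Pulse() takes no argument.
// Hypothetical consumer of the AsyncMonitor above.
public sealed class Worker
{
    private readonly AsyncMonitor _monitor = new AsyncMonitor();

    // Producers call this whenever new data arrives; pass 'true' if they hold
    // locks and therefore don't want the continuations inlined on their thread.
    public void NotifyDataAvailable() => _monitor.Pulse(executeAsync: false);

    public async Task RunAsync()
    {
        while (true)
        {
            await _monitor.Wait();   // resumes on the next Pulse
            DoExpensiveProcessing(); // pulses arriving during this call find no waiter and are ignored
        }
    }

    private void DoExpensiveProcessing() { /* placeholder for the time-consuming operation */ }
}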

Here is my simple async implementation that I use in my projects:
internal sealed class Pulsar
{
private static TaskCompletionSource<bool> Init() => new TaskCompletionSource<bool>();
private TaskCompletionSource<bool> _tcs = Init();
public void Pulse()
{
Interlocked.Exchange(ref _tcs, Init()).SetResult(true);
}
public Task AwaitPulse(CancellationToken token)
{
return token.CanBeCanceled ? _tcs.Task.WithCancellation(token) : _tcs.Task;
}
}
Add TaskCreationOptions.RunContinuationsAsynchronously to the TCS for async continuations.
WithCancellation can of course be omitted if you do not need cancellation.
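Note that WithCancellation is not a built-in Task method; it is typically a small extension along the following lines (a sketch of one common implementation, not necessarily the one used here):
// Sketch: race the task against the cancellation token.
public static class TaskExtensions
{
    public static async Task<T> WithCancellation<T>(this Task<T> task, CancellationToken token)
    {
        var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        // Complete 'tcs' when the token fires so it can be raced against the original task.
        using (token.Register(s => ((TaskCompletionSource<bool>)s).TrySetResult(true), tcs))
        {
            if (task != await Task.WhenAny(task, tcs.Task).ConfigureAwait(false))
                throw new OperationCanceledException(token);
        }
        return await task.ConfigureAwait(false); // propagates the result or the original exception
    }
}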

Because you only ever have one task waiting, your class can be simplified to:
internal sealed class Awaiter3
{
private volatile TaskCompletionSource<byte> _waiting;
public void Pulse()
{
var w = _waiting;
if (w == null)
{
return;
}
_waiting = null;
#if NET_46_OR_GREATER
w.TrySetResult(1);
#else
Task.Run(() => w.TrySetResult(1));
#endif
}
// This method is not thread safe and can only be called by one thread at a time.
// To make it thread safe, put a lock around the null check and the assignment;
// you do not need a lock in Pulse, "volatile" takes care of that side.
public Task Wait()
{
if(_waiting != null)
throw new InvalidOperationException("Only one waiter is allowed to exist at a time!");
#if NET_46_OR_GREATER
_waiting = new TaskCompletionSource<byte>(TaskCreationOptions.RunContinuationsAsynchronously);
#else
_waiting = new TaskCompletionSource<byte>();
#endif
return _waiting.Task;
}
}
One behavior I did change: if you are using .NET 4.6 or newer, use the code in the #if NET_46_OR_GREATER blocks; if older, use the #else blocks. When you call TrySetResult, the continuation can run synchronously, which can cause Pulse() to take a long time to complete. Using TaskCreationOptions.RunContinuationsAsynchronously on .NET 4.6, or wrapping the TrySetResult in a Task.Run on earlier versions, makes sure that Pulse() is not blocked by the continuation of the task.
See the SO question Detect target framework version at compile time on how to make a NET_46_OR_GREATER definition that works in your code.

A simple way to do this is to use SemaphoreSlim, which uses Monitor internally for its synchronization.
public class AsyncMonitor
{
private readonly SemaphoreSlim signal = new SemaphoreSlim(0, 1);
public void Pulse()
{
try
{
signal.Release();
}
catch (SemaphoreFullException) { }
}
public async Task WaitAsync(CancellationToken cancellationToken)
{
await signal.WaitAsync(cancellationToken).ConfigureAwait(false);
}
}
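A short usage sketch of the worker side follows; monitor is an instance of the AsyncMonitor above and DoExpensiveProcessing is a placeholder for the real work. Because the semaphore is created with a maximum count of 1, any number of Pulse() calls made while the worker is busy collapse into a single pending signal, which matches the original requirements.
// Illustrative worker loop (fragment, not a complete program):
while (!cancellationToken.IsCancellationRequested)
{
    await monitor.WaitAsync(cancellationToken); // completes on the next, or an already pending, Pulse
    DoExpensiveProcessing();                    // extra pulses arriving here collapse into one pending signal
}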

Related

Is it possible to replace yield by await?

Both yield and await are capable of interrupting the execution of a method, and both of them can be used to let the caller continue execution. However, await seems to me to be a stronger tool that can do not only this but much more. Is that true?
I would like to replace the following:
yield return item;
By this:
await buffer.SendAsync(item);
Is it possible? One thing I like about yield is that everything happens on one thread. Could it be the same with the await approach?
I tried to implement it as follows:
class Program
{
public static async Task Main()
{
await ConsumeAsync();
}
// instead of IEnumerable<int> Produce()
static async Task ProduceAsync(Buffer<int> buffer)
{
for (int i = 0; ; i++)
{
Console.WriteLine($"Produced {i} on thread {Thread.CurrentThread.ManagedThreadId}");
await buffer.SendAsync(i); // instead of yield return i;
}
}
static async Task ConsumeAsync()
{
Buffer<int> buffer = new Buffer<int>(ProduceAsync); // instead of enumerable.GetEnumerator()
while (true)
{
int i = await buffer.ProduceAsync(); // instead of enumerator.MoveNext(); enumerator.Current
Console.WriteLine($"Consumed {i} on thread {Thread.CurrentThread.ManagedThreadId}");
}
}
}
class Buffer<T>
{
private T item;
public Buffer(Func<Buffer<T>, Task> producer)
{
producer(this); // starts the producer
}
public async Task<T> ProduceAsync()
{
// block the consumer
// continue the execution of producer
// await until producer produces something
return item;
}
public async Task SendAsync(T item)
{
this.item = item;
// block the producer
// continue the execution of consumer
// await until the consumer is requesting next item
}
}
But I did not know how to do the synchronization to keep everything on one thread.
Yes, it is possible to use patterns like that. I have done so when writing state machines for UI interactions, where the user needs to place a sequence of clicks and different things should happen between each click. Using something like await mouse.GetDownTask() allows the code to be written in a fairly linear way.
But that does not necessarily mean that it was easy to read or understand. The underlying mechanisms to make it all work were quite horrible. So you should be aware that this is abuse of the system, and that other readers of your code will probably not expect such a usage. Use it with care, and make sure you are on top of your documentation.
To make this work you likely need to use TaskCompletionSource. Something like this might work:
class Buffer<T>
{
private TaskCompletionSource<T> tcs = new ();
public Buffer() { }
public Task<T> Receive() => tcs.Task;
public void Send(T item)
{
tcs.SetResult(item);
tcs = new TaskCompletionSource<T>();
}
}
You should also be aware of deadlocks. This is a potential problem when using fake async on a single thread like this.
You could just use a Channel<T>:
class Buffer<T>
{
private readonly Channel<T> _channel = Channel.CreateBounded<T>(1);
public Task<T> ProduceAsync()
{
// block the consumer
// continue the execution of producer
// await until producer produces something
return _channel.Reader.ReadAsync().AsTask();
}
public Task SendAsync(T item)
{
return _channel.Writer.WriteAsync(item).AsTask();
// block the producer
// continue the execution of consumer
// await until the consumer is requesting next item
}
}
The built-in Channel<T> implementations have a minimum bounded capacity of 1. My understanding is that you want a channel with zero capacity, like the channels in the Go language. Implementing one is doable, but not trivial. For a starting point, see this answer.
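If the capacity-of-one behavior is acceptable, the bounded channel can also be tuned for this single-producer/single-consumer scenario. Here is a sketch of an alternative construction for the _channel field above; the option values are suggestions, not requirements.
// Sketch: a bounded channel configured for exactly one producer and one consumer.
private readonly Channel<T> _channel = Channel.CreateBounded<T>(new BoundedChannelOptions(1)
{
    SingleReader = true,                    // only ProduceAsync reads
    SingleWriter = true,                    // only SendAsync writes
    FullMode = BoundedChannelFullMode.Wait  // the writer awaits until the reader takes the item (the default)
});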

How to Pause/Resume an asynchronous worker, without incurring memory allocation overhead?

I have an asynchronous method that contains an endless while loop. This loop is paused and resumed frequently by the main thread of the program. For this purpose I am currently using the PauseTokenSource class from the Nito.AsyncEx package:
private readonly PauseTokenSource _pausation = new();
private async Task StartWorker()
{
while (true)
{
await _pausation.Token.WaitWhilePausedAsync();
DoSomething();
}
}
This works pretty well, but I noticed that around 100 bytes are allocated each time the PauseTokenSource is paused and resumed. Since this happens many times per second, I am searching for a way to eliminate this overhead. My idea is to replace the PauseTokenSource with a home-made pausation mechanism that has a WaitWhilePausedAsync method that returns a ValueTask instead of Task. To make the implementation allocation-free, the ValueTask should be backed by something that implements the IValueTaskSource interface. Here is my current (failed) attempt to implement this mechanism:
public class Pausation : IValueTaskSource
{
private ManualResetValueTaskSourceCore<bool> _source;
private bool _paused;
public Pausation() => _source.RunContinuationsAsynchronously = true;
public void Pause()
{
_paused = true;
_source.Reset();
}
public void Resume()
{
_paused = false;
_source.SetResult(default);
}
public ValueTask WaitWhilePausedAsync()
{
if (!_paused) return ValueTask.CompletedTask;
return new ValueTask(this, _source.Version);
}
void IValueTaskSource.GetResult(short token)
{
_source.GetResult(token);
}
ValueTaskSourceStatus IValueTaskSource.GetStatus(short token)
{
return _source.GetStatus(token);
}
void IValueTaskSource.OnCompleted(Action<object> continuation, object state,
short token, ValueTaskSourceOnCompletedFlags flags)
{
_source.OnCompleted(continuation, state, token, flags);
}
}
My Pausation class is based on the ManualResetValueTaskSourceCore<T> struct, which is supposed to simplify the most common IValueTaskSource implementations. Apparently I am doing something wrong, because when my worker awaits the WaitWhilePausedAsync method, it crashes with an InvalidOperationException.
My question is: How can I fix the Pausation class, so that it works correctly? Should I add more state beyond the _paused field? Am I calling the ManualResetValueTaskSourceCore<T> methods in the wrong places? I am asking either for detailed fixing instructions, or for a complete and working implementation.
Specifics: The Pausation class is intended to be used in a single worker - single controller scenario. There is only one asynchronous worker (the StartWorker method), and only one controller thread issues Pause and Resume commands. Also there is no need for cancellation support. The termination of the worker is handled independently by a CancellationTokenSource (removed from the above snippet for brevity). The only functionality that is needed is the Pause, Resume and WaitWhilePausedAsync methods. The only requirement is that it works correctly, and it doesn't allocate memory.
A runnable online demo of my worker - controller scenario can be found here.
Output with the Nito.AsyncEx.PauseTokenSource class:
Controller loops: 112,748
Worker loops: 84, paused: 36 times
Output with my Pausation class:
Controller loops: 117,397
Unhandled exception. System.InvalidOperationException: Operation is not valid due to the current state of the object.
at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetStatus(Int16 token)
at Program.Pausation.System.Threading.Tasks.Sources.IValueTaskSource.GetStatus(Int16 token)
at Program.<>c__DisplayClass0_0.<<Main>g__StartWorker|0>d.MoveNext()
I believe you're misunderstanding Reset. ManualResetValueTaskSourceCore<T>.Reset is nothing at all like ManualResetEvent.Reset. For ManualResetValueTaskSourceCore<T>, Reset means "the previous operation has completed, and now I want to reuse the value task, so change the version". So, it should be called after GetResult (i.e., after the paused code is done awaiting), not within Pause. The easiest way to encapsulate this IMO is to Reset the state immediately before returning the ValueTask.
Similarly, your code shouldn't call SetResult unless there's a value task already returned. ManualResetValueTaskSourceCore<T> is really designed around sequential operations pretty strictly, and having a separate "controller" complicates things. You could probably get it working by keeping track of whether or not the consumer is waiting, and only attempting to complete if there is one:
public class Pausation : IValueTaskSource
{
private ManualResetValueTaskSourceCore<bool> _source;
private readonly object _mutex = new();
private bool _paused;
private bool _waiting;
public Pausation() => _source.RunContinuationsAsynchronously = true;
public void Pause()
{
lock (_mutex)
_paused = true;
}
public void Resume()
{
var wasWaiting = false;
lock (_mutex)
{
wasWaiting = _waiting;
_paused = _waiting = false;
}
if (wasWaiting)
_source.SetResult(default);
}
public ValueTask WaitWhilePausedAsync()
{
lock (_mutex)
{
if (!_paused) return ValueTask.CompletedTask;
_waiting = true;
_source.Reset();
return new ValueTask(this, _source.Version);
}
}
void IValueTaskSource.GetResult(short token)
{
_source.GetResult(token);
}
ValueTaskSourceStatus IValueTaskSource.GetStatus(short token)
{
return _source.GetStatus(token);
}
void IValueTaskSource.OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags)
{
_source.OnCompleted(continuation, state, token, flags);
}
}
I fixed some of the obvious race conditions using a mutex, but I can't guarantee there aren't more subtle ones remaining. It will certainly be limited to a single controller and single consumer, at least.
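To show how the class is meant to be driven, here is a rough sketch of a worker/controller pair along the lines of the question's demo; the loop bounds, the sleep, and DoSomething are made-up placeholders, not the actual demo code.
// Sketch only: one asynchronous worker awaiting the pausation, one controller toggling it.
var pausation = new Pausation();
var done = new CancellationTokenSource();

var worker = Task.Run(async () =>
{
    while (!done.IsCancellationRequested)
    {
        await pausation.WaitWhilePausedAsync(); // completes synchronously while not paused
        DoSomething();                          // placeholder for the worker's real work
    }
});

// Controller: pause and resume many times.
for (int i = 0; i < 1000; i++)
{
    pausation.Pause();
    Thread.Sleep(1);
    pausation.Resume();
}

done.Cancel();
await worker;

static void DoSomething() { /* placeholder */ }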
You may have misunderstood the purpose of ValueTask. Stephen Toub breaks down why it was introduced here: https://devblogs.microsoft.com/dotnet/understanding-the-whys-whats-and-whens-of-valuetask/
But the most relevant bit of information is that ValueTask does not allocate iff the asynchronous method completes synchronously and successfully.
If it does need to complete asynchronously, there is no getting around needing to allocate a Task class due to the state machine in the background. So this may need another solution entirely.

How to avoid SetResult after task cancellation when using CancelAfter

I converted an EAP implementation into TAP. However, the event which is handled may not be fired at all, which is why I add a timeout. See the example code:
class Request<T> {
TaskCompletionSource<T> tcs;
CancellationTokenSource cts;
public int Timeout { get; }
public bool IsCanceled => tcs.Task.IsCanceled;
public Request(int timeout = System.Threading.Timeout.Infinite) => Timeout = timeout;
public Task<T> StartRequestAsync() {
cts.Token.Register(() => tcs.SetCanceled());
cts.CancelAfter(Timeout);
return tcs.Task;
}
public void OnEvent(T result) {
tcs.SetResult(result);
}
}
Note that the CancellationTokenSource and CancellationTokenRegistration are both properly disposed in the class's IDisposable implementation. The StartRequestAsync method is also prevented, in a thread-safe way, from running more than once. The code is simplified for brevity.
Now, I'm externally setting the result of the task.
var req = new Request<int>();
if(!req.IsCanceled)
{
req.OnEvent(5);
}
But I think there is a race condition. I assume CancelAfter on the cancellation token works like an interrupt. If the token gets cancelled exactly after execution enters the if block, we could get InvalidOperationException: "An attempt was made to transition a task to a final state when it had already completed."
Documentation states that TaskCompletionSource is thread-safe, so does that mean that TrySetResult is the atomic version of what I'm trying to achieve here? Is that what I should use instead?
Is there any other way to solve this?
It might help to see how the SetResult and SetCanceled methods are implemented:
public void SetResult(TResult result)
{
if (!TrySetResult(result))
throw new InvalidOperationException(
Environment.GetResourceString("TaskT_TransitionToFinal_AlreadyCompleted"));
}
public void SetCanceled()
{
if(!TrySetCanceled())
throw new InvalidOperationException(
Environment.GetResourceString("TaskT_TransitionToFinal_AlreadyCompleted"));
}
As you can see, both methods delegate to their Try counterparts. You want the throwing versions when your scenario is deterministic, so they can surface bugs in your code. If it's not deterministic, and race conditions are inherent in the scenario, just use the Try versions.
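Applied to the Request<T> class from the question, that means switching to the Try methods where the cancellation/completion race is expected. A sketch under that assumption:
// Sketch: tolerate the race between the timeout and the event with the Try methods.
public Task<T> StartRequestAsync() {
    cts.Token.Register(() => tcs.TrySetCanceled()); // no-op if the result already arrived
    cts.CancelAfter(Timeout);
    return tcs.Task;
}

public void OnEvent(T result) {
    // Returns false instead of throwing if the timeout already cancelled the task.
    tcs.TrySetResult(result);
}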

How check "IsAlive" status of c# async Thread? [duplicate]

This question already has answers here:
How to wait for async method to complete?
(7 answers)
Let's say I have a MyThread class in my Windows C# app like this:
public class MyThread
{
Thread TheThread;
public MyThread()
{
TheThread = new Thread(MyFunc);
}
public void StartIfNecessary()
{
if (!TheThread.IsAlive)
TheThread.Start();
}
private void MyFunc()
{
for (;;)
{
if (ThereIsStuffToDo)
DoSomeStuff();
}
}
}
That works fine. But now I realize I can make my thread more efficient by using async/await:
public class MyThread
{
Thread TheThread;
public MyThread()
{
TheThread = new Thread(MyFunc);
}
public void StartIfNecessary()
{
if (!TheThread.IsAlive)
TheThread.Start();
}
private async void MyFunc()
{
for (;;)
{
DoSomeStuff();
await MoreStuffIsReady();
}
}
}
What I see now is that the second time I call StartIfNecessary(), TheThread.IsAlive is false (and ThreadState is Stopped, BTW), so it calls TheThread.Start(), which then throws the ThreadStateException "Thread is running or terminated; it cannot restart". But I can see that DoSomeStuff() is still getting called, so the function is in fact still executing.
I suspect what is happening is that when my thread hits the "await", the thread I created is stopped, and when the await on MoreStuffIsReady() completes, a thread from the thread pool is assigned to execute DoSomeStuff(). So it is technically true that the thread I created has been stopped, but the function I created that thread to process is still running.
So how can I tell if "MyFunc" is still active?
I can think of 3 ways to solve this:
1) Add a "bool IsRunning" which is set to true right before calling TheThread.Start(), and MyFunc() sets to false when it completes. This is simple, but requires me to wrap everything in a try/catch/finally which isn't awful but I was hoping there was a way to have the operating system or framework help me out here just in case "MyFunc" dies in some way I wasn't expecting.
2) Find some new function somewhere in System.Threading that will give me the information I need.
3) Rethink the whole thing - since my thread only sticks around for a few milliseconds, is there a way to accomplish this same functionality without creating a thread at all (outside of the thread pool)? Start "MyFunc" as a Task somehow?
Best practices in this case?
Sticking with a Plain Old Thread and using BlockingCollection to avoid a tight loop:
class MyThread
{
private readonly Thread worker;
private BlockingCollection<Action> stuff = new BlockingCollection<Action>();
public MyThread()
{
// MyFunc is an instance method, so the thread must be created here rather than in a field initializer.
worker = new Thread(MyFunc);
worker.Start();
}
void MyFunc()
{
foreach (var todo in stuff.GetConsumingEnumerable())
{
try
{
todo();
}
catch(Exception ex)
{
// Something went wrong in todo()
}
}
stuff.Dispose(); // should be disposed!
}
public void Shutdown()
{
stuff.CompleteAdding(); // No more adding, but will continue to serve until empty.
}
public void Add( Action stuffTodo )
{
stuff.Add(stuffTodo); // Will throw after Shutdown is called
}
}
The BlockingCollection documentation also shows examples with Task if you prefer to go down that road.
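For reference, a short usage sketch of the class above; the queued actions are illustrative.
// Illustrative usage of the BlockingCollection-based worker.
var worker = new MyThread();
worker.Add(() => Console.WriteLine("first job"));
worker.Add(() => Console.WriteLine("second job"));
// ...later, when shutting down:
worker.Shutdown(); // lets the worker drain the queue, exit MyFunc and dispose the collection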
Rethink the whole thing
This is definitely the best option. Get rid of the thread completely.
It seems like you have a "consumer" kind of scenario, and you need a consumer with a buffer of data items to work on.
One option is to use ActionBlock<T> from TPL Dataflow:
public class NeedsADifferentName
{
ActionBlock<MyDataType> _block;
public NeedsADifferentName() => _block = new ActionBlock<MyDataType>(MyFunc);
public void QueueData(MyDataType data) => _block.Post(data);
private void MyFunc(MyDataType data)
{
DoSomeStuff(data);
}
}
Alternatively, you can build your own pipeline using something like Channels.
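For completeness, here is a sketch of roughly what a Channels-based equivalent might look like; the class name mirrors the ActionBlock example above and is not from any library.
// Sketch: the same consumer built on System.Threading.Channels instead of TPL Dataflow.
public class ChannelConsumer
{
    private readonly Channel<MyDataType> _channel = Channel.CreateUnbounded<MyDataType>();

    public ChannelConsumer()
    {
        // Single long-running consumer loop reading from the channel.
        _ = Task.Run(async () =>
        {
            await foreach (var data in _channel.Reader.ReadAllAsync())
                DoSomeStuff(data);
        });
    }

    public void QueueData(MyDataType data) => _channel.Writer.TryWrite(data);

    private void DoSomeStuff(MyDataType data) { /* placeholder for the real work */ }
}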

How do I perform both a read and a write of a boolean in one atomic operation?

Let's say I have a method that gets called by multiple threads
public class MultiThreadClass
{
public void Gogogo()
{
// method implementation
}
private volatile bool running;
}
In Gogogo(), I want to check if running is true, and if so, return from the method. However, if it is false, I want to set it to true and continue the method. The solution I see is to do the following:
public class MultiThreadClass
{
public void Gogogo()
{
lock (this.locker)
{
if (this.running)
{
return;
}
this.running = true;
}
// rest of method
this.running = false;
}
private volatile bool running;
private readonly object locker = new object();
}
Is there another way to do this? I've found that if I leave out the lock, running could be read as false by two different threads, both would set it to true, and the rest of the method would execute on both threads simultaneously.
I guess my goal is to have the rest of my method execute on a single thread (I don't care which one) and not get executed by the other threads, even if all of them (2-4 in this case) call Gogogo() simultaneously.
I could also lock on the entire method, but would the method run slower then? It needs to run as fast as possible, but part of it on only one thread at a time.
(Details: I have a dictionary of ConcurrentQueues which contain "results" which have "job names". I am trying to dequeue one result per key in the dictionary (one result per job name) and call this a "complete result", which is sent by an event to subscribers. The results are sent via an event to the class, and that event is raised from multiple threads (one per job name; each job raises a "result ready" event on its own thread).)
You can use Interlocked.CompareExchange if you change your bool to an int:
private volatile int running = 0;
if(Interlocked.CompareExchange(ref running, 1, 0) == 0)
{
//running changed from false to true
}
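Put into the shape of the question's Gogogo() method, and with a try/finally so the flag is always released, that could look like this sketch (it assumes the int running field shown above):
// Sketch: Gogogo() guarded by Interlocked.CompareExchange instead of a lock.
public void Gogogo()
{
    // 0 = not running, 1 = running; only the thread that wins the exchange proceeds.
    if (Interlocked.CompareExchange(ref running, 1, 0) != 0)
        return;
    try
    {
        // rest of method
    }
    finally
    {
        Interlocked.Exchange(ref running, 0); // release the flag even if the work throws
    }
}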
I think Interlocked.Exchange should do the trick.
You can use Interlocked to handle this case without a lock, if you really want to:
public class MultiThreadClass
{
public void Gogogo()
{
if (Interlocked.Exchange(ref running, 1) == 0)
{
//Do stuff
running = 0;
}
}
private volatile int running = 0;
}
That said, unless there is a really high contention rate (which I would not expect), your code should be entirely adequate. Using Interlocked also suffers a bit in the readability department due to not having bool overloads for its methods.
You can use the Monitor class instead of a boolean flag. Use Monitor.TryEnter:
public void Gogogo()
{
if (Monitor.TryEnter(this.locker))
{
try
{
// Do stuff
}
finally
{
Monitor.Exit(this.locker);
}
}
}
