The documentation for LazyThreadSafetyMode states that using the value ExecutionAndPublication could cause deadlocks if the initialization method (or the default constructor, if there is no initialization method) uses locks internally. I am trying to get a better understanding of examples that could cause a deadlock when using this value. In my case, I am initializing a ChannelFactory. I cannot see the ChannelFactory's constructor using any internal locks (reviewing the class with Reflector), so I believe this scenario does not fit the possible deadlock situation, but I am curious what situations could cause a deadlock, as well as whether initializing the ChannelFactory could deadlock.
So, to summarize, my questions are:
Is it possible to cause a deadlock initializing the ChannelFactory using ExecutionAndPublication?
What are some possible ways to cause a deadlock initializing other objects using ExecutionAndPublication?
Suppose you have the following code:
class x
{
    static Lazy<ChannelFactory<ISomeChannel>> lcf =
        new Lazy<ChannelFactory<ISomeChannel>>(
            () => new ChannelFactory<ISomeChannel>("someEndPointConfig"),
            LazyThreadSafetyMode.ExecutionAndPublication
        );

    public static ISomeChannel Create()
    {
        return lcf.Value.CreateChannel();
    }
}
It's as documented – as long as the ChannelFactory constructor doesn't take any locks, this usage cannot cause any deadlocks.
Imagine that you have a lazy value that you initialize by reading from a database, but you want to make sure that only one thread is accessing the DB at any moment. If you have other code that accesses the DB, you could have a deadlock. Consider the following code:
void Main()
{
    Task otherThread = Task.Factory.StartNew(() => UpdateDb(43));
    Thread.Sleep(100);
    Console.WriteLine(lazyInt.Value);
}

static object l = new object();
Lazy<int> lazyInt = new Lazy<int>(Init, LazyThreadSafetyMode.ExecutionAndPublication);

static int Init()
{
    lock (l)
    {
        return ReadFromDb();
    }
}

void UpdateDb(int newValue)
{
    lock (l)
    {
        // to make sure deadlock occurs every time
        Thread.Sleep(1000);
        if (newValue != lazyInt.Value)
        {
            // some code that requires the lock
        }
    }
}
Init() reads from the DB, so it has to take the lock. UpdateDb() writes to the DB, so it needs the lock too; and since Lazy also uses a lock internally in this mode, that is enough for a deadlock: the main thread acquires the Lazy's internal lock and then waits for l inside Init(), while the other thread holds l and waits for the Lazy's internal lock when it reads lazyInt.Value.
In this case, it would be easy to fix the deadlock by moving the access to lazyInt.Value in UpdateDb() outside the lock statement, but it may not be so trivial (or obvious) in other cases.
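A minimal sketch of that fix (my illustration, based on the code above): read lazyInt.Value before taking the lock, so the Lazy's internal lock and l are never held at the same time.
void UpdateDb(int newValue)
{
    // reading lazyInt.Value here may trigger Init(), which takes 'l' on its
    // own while this thread holds no other lock - so no deadlock is possible
    int currentValue = lazyInt.Value;
    lock (l)
    {
        if (newValue != currentValue)
        {
            // some code that requires the lock
        }
    }
}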
Related
I have a function to clean up some objects as well as the ReaderWriterLockSlim. I need the ReaderWriterLockSlim to take the write lock, to prevent other threads from reading the data while I am doing the clean-up.
ConcurrentDictionary<string, ReaderWriterLockSlim> RwLocks = new ConcurrentDictionary<string, ReaderWriterLockSlim>();
private ReaderWriterLockSlim GetRwLock(string key)
{
    return RwLocks.GetOrAdd(key, _ => new ReaderWriterLockSlim());
}

public void CleanUp(string key)
{
    ReaderWriterLockSlim rwLock = this.GetRwLock(key);
    try
    {
        rwLock.EnterWriteLock();
        // do some other clean up
        this.RwLocks.TryRemove(key, out _);
    }
    finally
    {
        rwLock.ExitWriteLock();
        // Is it safe to dispose here?
        // Could another thread enter the read lock or write lock here,
        // so that the dispose throws an exception?
        // What is the best practice for the dispose?
        rwLock.Dispose();
    }
}
I have an idea to wrap the ReaderWriterLockSlim. Do you think it could solve the problem, or does it have any potential risks?
public class ReaderWriterLockSlimWrapper
{
    private ReaderWriterLockSlim rwLock;
    private volatile bool disposed;

    public ReaderWriterLockSlimWrapper()
    {
        rwLock = new ReaderWriterLockSlim();
        disposed = false;
    }

    private void DisposeInternal()
    {
        if (!rwLock.IsReadLockHeld && !rwLock.IsUpgradeableReadLockHeld && !rwLock.IsWriteLockHeld)
        {
            rwLock.Dispose();
        }
    }

    public void Dispose()
    {
        disposed = true;
        DisposeInternal();
    }

    public void EnterReadLock()
    {
        rwLock.EnterReadLock();
    }

    public void ExitReadLock()
    {
        rwLock.ExitReadLock();
        if (disposed)
        {
            DisposeInternal();
        }
    }

    public void EnterWriteLock()
    {
        rwLock.EnterWriteLock();
    }

    public void ExitWriteLock()
    {
        rwLock.ExitWriteLock();
        if (disposed)
        {
            DisposeInternal();
        }
    }
}
You haven't described the specific scenario where you intend to use your two mechanisms, neither for the first one with the CleanUp/GetRwLock methods, nor for the second one with the ReaderWriterLockSlimWrapper class. So I guess the question is:
Are my two mechanisms safe to use in all possible multithreaded scenarios, where thread-safety and atomicity of operations is mandatory?
The answer is no, both of your mechanisms are riddled with race conditions, and offer no guarantees about atomicity. Using them in a multithreaded scenario would result in undefined behavior, including but not limited to:
Unexpected exceptions.
Violations of the policies that a correctly used ReaderWriterLockSlim is normally expected to enforce. In other words, it is possible for two threads to acquire a writer lock for the same key concurrently with each other, or concurrently with other threads that have acquired a reader lock for the same key, or both.
Explaining why your mechanisms are flawed is quite involved. A general explanation is that whenever you use the pattern if (x.BooleanProperty) x.Method(); in a multithreaded environment, although the BooleanProperty and the Method might be individually thread-safe, you are allowing a second thread to preempt the current thread between the two invocations, and invalidate the result of the first check.
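To make that concrete, here is one hypothetical interleaving (my illustration, not code from the question) in which the check inside DisposeInternal is invalidated before the Dispose runs:

// Thread A calls wrapper.Dispose():
//   disposed = true;
//   DisposeInternal() sees no lock held, so the 'if' check passes...
// Thread B preempts and calls wrapper.EnterReadLock():
//   rwLock.EnterReadLock() succeeds - nothing is disposed yet
// Thread A resumes:
//   rwLock.Dispose() now disposes a lock that B has just acquired,
//   so B's subsequent ExitReadLock() fails on a disposed object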
As a side note, be aware that the ReaderWriterLockSlim is not a cross-process synchronization primitive. So even if you fix your mechanisms and then attempt to use them in a web application, the policies might still be violated, because the web server might decide at random moments to recycle the current process and start a new one. In that case the web application might be running concurrently on two processes, for a period that spans a few seconds or even minutes.
Sometimes I encounter async/await code that accesses fields of an object. For example this snippet of code from the Stateless project:
private readonly Queue<QueuedTrigger> _eventQueue = new Queue<QueuedTrigger>();
private bool _firing;

async Task InternalFireQueuedAsync(TTrigger trigger, params object[] args)
{
    if (_firing)
    {
        _eventQueue.Enqueue(new QueuedTrigger { Trigger = trigger, Args = args });
        return;
    }
    try
    {
        _firing = true;
        await InternalFireOneAsync(trigger, args).ConfigureAwait(false);
        while (_eventQueue.Count != 0)
        {
            var queuedEvent = _eventQueue.Dequeue();
            await InternalFireOneAsync(queuedEvent.Trigger, queuedEvent.Args).ConfigureAwait(false);
        }
    }
    finally
    {
        _firing = false;
    }
}
If I understand correctly, the await **.ConfigureAwait(false) indicates that the code executed after this await does not necessarily have to run on the same context. So the while loop here could be executed on a ThreadPool thread. I don't see what is making sure that the _firing and _eventQueue fields are synchronized - for example, what is creating the lock/memory-fence/barrier here? So my question is: do I need to make the fields thread-safe, or is something in the async/await structure taking care of this?
Edit: to clarify my question: in this case InternalFireQueuedAsync should always be called on the same thread. In that case only the continuation could run on a different thread, which makes me wonder: do I need synchronization mechanisms (like an explicit barrier) to make sure the values are synchronized, to avoid the issue described here: http://www.albahari.com/threading/part4.aspx
Edit 2: there is also a small discussion at stateless:
https://github.com/dotnet-state-machine/stateless/issues/294
I don't see what is making sure that the _firing and _eventQueue fields are synchronized - for example, what is creating the lock/memory-fence/barrier here? So my question is: do I need to make the fields thread-safe, or is something in the async/await structure taking care of this?
await will ensure all necessary memory barriers are in place. However, that doesn't make them "thread-safe".
in this case InternalFireQueuedAsync should always be called on the same thread.
Then _firing is fine, and doesn't need volatile or anything like that.
However, the usage of _eventQueue is incorrect. Consider what happens when a thread pool thread has resumed the code after the await: it is entirely possible that Queue<T>.Count or Queue<T>.Dequeue() will be called by a thread pool thread at the same time Queue<T>.Enqueue is called by the main thread. This is not threadsafe.
If the main thread calling InternalFireQueuedAsync is a thread with a single-threaded context (such as a UI thread), then one simple fix is to remove all the instances of ConfigureAwait(false) in this method.
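For illustration, here is a sketch of that fix (my rendering of the suggestion, assuming the method is always invoked from the UI thread):

async Task InternalFireQueuedAsync(TTrigger trigger, params object[] args)
{
    if (_firing)
    {
        _eventQueue.Enqueue(new QueuedTrigger { Trigger = trigger, Args = args });
        return;
    }
    try
    {
        _firing = true;
        // no ConfigureAwait(false): every continuation resumes on the UI thread,
        // so _firing and _eventQueue are only ever touched from that one thread
        await InternalFireOneAsync(trigger, args);
        while (_eventQueue.Count != 0)
        {
            var queuedEvent = _eventQueue.Dequeue();
            await InternalFireOneAsync(queuedEvent.Trigger, queuedEvent.Args);
        }
    }
    finally
    {
        _firing = false;
    }
}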
To be safe, you should mark the field _firing as volatile - that will guarantee the memory barrier and ensure that the continuation part, which might run on a different thread, reads the correct value. Without volatile, the compiler, the CLR or the JIT compiler, or even the CPU may perform optimizations that cause the code to read a stale value for it.
As for _eventQueue, you don't modify the field, so marking it as volatile is useless. If only one thread calls InternalFireQueuedAsync, you don't access it from multiple threads concurrently, so you are OK.
However, if multiple threads call InternalFireQueuedAsync, you will need to use a ConcurrentQueue instead, or lock your access to _eventQueue. You should then also lock your access to _firing, or access it using Interlocked, or replace it with a ManualResetEvent.
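For example, a minimal sketch of the Interlocked variant for _firing (my illustration; it replaces the bool with an int, since Interlocked has no bool overloads):

private int _firing; // 0 = idle, 1 = firing

// at the top of InternalFireQueuedAsync:
if (Interlocked.CompareExchange(ref _firing, 1, 0) != 0)
{
    // another call is already draining the queue
    _eventQueue.Enqueue(new QueuedTrigger { Trigger = trigger, Args = args });
    return;
}
// ...drain the queue as before...
// and in the finally block:
Interlocked.Exchange(ref _firing, 0);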
ConfigureAwait(false) means that the context is not captured to run the continuation. Using the thread-pool context does not mean that continuations run in parallel: using await before and within the while loop ensures that the continuations run sequentially, so there is no need to lock for that reason.
You may, however, have a race condition when checking the _firing value.
Use a lock or a ConcurrentQueue.
solution with lock:
private readonly Queue<QueuedTrigger> _eventQueue = new Queue<QueuedTrigger>();
private bool _firing;
private object _eventQueueLock = new object();

async Task InternalFireQueuedAsync(TTrigger trigger, params object[] args)
{
    if (_firing)
    {
        lock (_eventQueueLock)
            _eventQueue.Enqueue(new QueuedTrigger { Trigger = trigger, Args = args });
        return;
    }
    try
    {
        _firing = true;
        await InternalFireOneAsync(trigger, args).ConfigureAwait(false);
        while (true)
        {
            QueuedTrigger queuedEvent;
            // awaiting inside a lock statement is not allowed, so only the
            // queue access is locked and the await happens outside of it
            lock (_eventQueueLock)
            {
                if (_eventQueue.Count == 0)
                    break;
                queuedEvent = _eventQueue.Dequeue();
            }
            await InternalFireOneAsync(queuedEvent.Trigger, queuedEvent.Args).ConfigureAwait(false);
        }
    }
    finally
    {
        _firing = false;
    }
}
solution with ConcurrentQueue:
private readonly ConcurrentQueue<QueuedTrigger> _eventQueue = new ConcurrentQueue<QueuedTrigger>();
private bool _firing;

async Task InternalFireQueuedAsync(TTrigger trigger, params object[] args)
{
    if (_firing)
    {
        _eventQueue.Enqueue(new QueuedTrigger { Trigger = trigger, Args = args });
        return;
    }
    try
    {
        _firing = true;
        await InternalFireOneAsync(trigger, args).ConfigureAwait(false);
        while (!_eventQueue.IsEmpty)
        {
            if (!_eventQueue.TryDequeue(out QueuedTrigger queuedEvent))
                continue;
            await InternalFireOneAsync(queuedEvent.Trigger, queuedEvent.Args).ConfigureAwait(false);
        }
    }
    finally
    {
        _firing = false;
    }
}
In my quest to build a condition variable class, I stumbled on a trivially simple way of doing it that I'd like to share with the Stack Overflow community. I was googling for the better part of an hour and was unable to find a good tutorial or .NET-ish example that felt right; hopefully this can be of use to other people out there.
It's actually incredibly simple, once you know about the semantics of lock and Monitor.
But first, you do need an object reference. You can use this, but remember that this is public, in the sense that anyone with a reference to your class can lock on it. If you are uncomfortable with that, you can create a new private reference, like this:
readonly object syncPrimitive = new object(); // this is legal
Somewhere in your code where you'd like to be able to provide notifications, it can be accomplished like this:
void Notify()
{
    lock (syncPrimitive)
    {
        Monitor.Pulse(syncPrimitive);
    }
}
And the place where you'd do the actual work is a simple looping construct, like this:
void RunLoop()
{
    lock (syncPrimitive)
    {
        for (;;)
        {
            // do work here...
            Monitor.Wait(syncPrimitive);
        }
    }
}
Yes, this looks incredibly deadlock-ish, but the locking protocol for Monitor is such that it will release the lock during the Monitor.Wait. In fact, it's a requirement that you have obtained the lock before you call either Monitor.Pulse, Monitor.PulseAll or Monitor.Wait.
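As a quick illustration of that requirement (my example, not from the original post):

// this throws SynchronizationLockException - the calling
// thread does not own the lock on syncPrimitive
Monitor.Pulse(syncPrimitive);

// this is the correct form
lock (syncPrimitive)
{
    Monitor.Pulse(syncPrimitive);
}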
There's one caveat with this approach that you should know about. Since the lock must be held before calling the communication methods of Monitor, you should really only hang on to the lock for as short a duration as possible. A variation of the RunLoop that's more friendly towards long-running background tasks would look like this:
void RunLoop()
{
    for (;;)
    {
        // do work here...
        lock (syncPrimitive)
        {
            Monitor.Wait(syncPrimitive);
        }
    }
}
But now we've changed the problem a bit, because the lock is no longer protecting the shared resource throughout the processing. So, if some of your code in the do work here... bit needs to access a shared resource, you'll need a separate lock managing access to that.
We can leverage the above to create a simple thread-safe producer-consumer collection (although .NET already provides an excellent ConcurrentQueue<T> implementation; this is just to illustrate the simplicity of using Monitor to implement such mechanisms).
class BlockingQueue<T>
{
    // We base our queue on the (non-thread safe) .NET 2.0 Queue collection
    readonly Queue<T> q = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (q)
        {
            q.Enqueue(item);
            System.Threading.Monitor.Pulse(q);
        }
    }

    public T Dequeue()
    {
        lock (q)
        {
            for (;;)
            {
                if (q.Count > 0)
                {
                    return q.Dequeue();
                }
                System.Threading.Monitor.Wait(q);
            }
        }
    }
}
Now, the point here is not to build a blocking collection - that's also available in the .NET framework (see BlockingCollection). The point is to illustrate how simple it is to build an event-driven message system using the Monitor class in .NET to implement a condition variable. Hope you find this useful.
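For completeness, a minimal usage sketch of the BlockingQueue<T> above (the thread setup is my own illustration):

var queue = new BlockingQueue<string>();

var consumer = new Thread(() =>
{
    while (true)
    {
        // blocks inside Monitor.Wait until a producer pulses
        Console.WriteLine(queue.Dequeue());
    }
});
consumer.IsBackground = true;
consumer.Start();

queue.Enqueue("hello");
queue.Enqueue("world");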
Use ManualResetEvent
The class that is similar to a condition variable is ManualResetEvent; just the method names are slightly different.
The notify_one() in C++ would be named Set() in C#.
The wait() in C++ would be named WaitOne() in C#.
Moreover, ManualResetEvent also provides a Reset() method to set the state of the event to non-signaled.
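A minimal sketch of that mapping (my example, with a single waiting thread):

var mre = new ManualResetEvent(false); // starts non-signaled

var worker = new Thread(() =>
{
    mre.WaitOne();  // wait() in C++ terms
    Console.WriteLine("signaled");
});
worker.Start();

mre.Set();          // notify in C++ terms; wakes the waiter
worker.Join();
mre.Reset();        // back to non-signaled, ready for reuse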
The accepted answer is not a good one.
According to the Dequeue() code, Wait() gets called on each loop iteration, which causes unnecessary waiting and thus excessive context switches. The correct paradigm is that wait() is called only when the waiting condition is met; in this case, the waiting condition is q.Count == 0.
Here's a better pattern to follow when it comes to using a Monitor.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682052%28v=vs.85%29.aspx
Another comment on the C# Monitor: it does not make use of a condition variable, so a pulse will essentially wake up threads waiting on that lock regardless of the condition under which they went to wait; consequently, some threads may grab the lock and immediately go back to sleep when they find the waiting condition hasn't changed. It does not provide you with as fine-grained threading control as pthreads. But it's .NET anyway, so not completely unexpected.
Upon the request of John, here's an improved version:
class BlockingQueue<T>
{
    readonly Queue<T> q = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (q)
        {
            // condition predicate(s) for producer; none are needed in this
            // particular unbounded case, so the wait loop is never entered
            while (false)
            {
                System.Threading.Monitor.Wait(q);
            }
            // critical section
            q.Enqueue(item);
            // signaling outside the lock scope would generally be better,
            // but .NET's Monitor requires the lock to be held when pulsing
            System.Threading.Monitor.Pulse(q);
        }
    }

    public T Dequeue()
    {
        T t;
        lock (q)
        {
            while (q.Count == 0) // condition predicate(s) for consumer
            {
                System.Threading.Monitor.Wait(q);
            }
            // critical section
            t = q.Dequeue();
            // this pulse can be omitted in this particular case, but not if there's a
            // waiting condition for the producer, as the producer needs to be woken up.
            // And here's the problem caused by the missing condition variable in the C#
            // Monitor: all threads stay on the same waiting queue of the shared resource/lock.
            System.Threading.Monitor.Pulse(q);
        }
        return t;
    }
}
A few things I'd like to point out:
1, I think my solution captures the requirements & definitions more precisely than yours. Specifically, the consumer should be forced to wait if and only if there's nothing left in the queue; otherwise it's up to the OS/.NET runtime to schedule threads. In your solution, however, the consumer is forced to wait in each loop, regardless of whether it has actually consumed anything or not - this is the excessive waiting/context switching I was talking about.
2, My solution is symmetric in the sense that both the consumer and the producer code share the same pattern, while yours is not. If you did know the pattern and just omitted it for this particular case, then I take back this point.
3, Your solution signals inside the lock scope. Ideally the signal would happen outside the lock scope, but as the comments in the code above note, C#'s Monitor requires the lock to be held when pulsing, so the signal has to stay inside it. Please refer to this answer as to why signaling while still holding the lock is worse.
Why should we signal outside the lock scope?
I was talking about the flaw of the missing condition variable in the C# Monitor, and here's its impact: there's simply no way in C# to implement the optimization of moving the waiting thread from the condition queue to the lock queue. Therefore, the excessive context switching is doomed to take place in the three-thread scenario proposed by the answer in the link.
Also, the lack of a condition variable makes it impossible to distinguish between the various cases where threads wait on the same shared resource/lock, but for different reasons. All waiting threads are placed on one big waiting queue for that shared resource, which undermines efficiency.
"But it's .NET anyway, so not completely unexpected" - it's understandable that .NET does not pursue as high efficiency as C++, but that does not imply programmers should not know the differences and their impacts.
Go to deadlockempire.github.io. They have an amazing tutorial that will help you understand condition variables as well as locks, and will certainly help you write your desired class.
You can step through the following code at deadlockempire.github.io and trace it. The first two snippets are two consumer threads running the same code; the third is the producer. Here is the code:
while (true) {
    Monitor.Enter(mutex);
    if (queue.Count == 0) {
        Monitor.Wait(mutex);
    }
    queue.Dequeue();
    Monitor.Exit(mutex);
}

while (true) {
    Monitor.Enter(mutex);
    if (queue.Count == 0) {
        Monitor.Wait(mutex);
    }
    queue.Dequeue();
    Monitor.Exit(mutex);
}

while (true) {
    Monitor.Enter(mutex);
    queue.Enqueue(42);
    Monitor.PulseAll(mutex);
    Monitor.Exit(mutex);
}
As has been pointed out by h9uest's answer and the comments, the Monitor's Wait interface does not allow for proper condition variables (i.e. it does not allow waiting on multiple conditions per shared lock).
The good news is that the other synchronization primitives (e.g. SemaphoreSlim, lock keyword, Monitor.Enter/Exit) in .NET can be used to implement a proper condition variable.
The following ConditionVariable class will allow you to wait on multiple conditions using a shared lock.
class ConditionVariable
{
    private int waiters = 0;
    private object waitersLock = new object();
    private SemaphoreSlim sema = new SemaphoreSlim(0, Int32.MaxValue);

    public ConditionVariable()
    {
    }

    public void Pulse()
    {
        bool release;
        lock (waitersLock)
        {
            release = waiters > 0;
        }
        if (release)
        {
            sema.Release();
        }
    }

    public void Wait(object cs)
    {
        lock (waitersLock)
        {
            ++waiters;
        }
        Monitor.Exit(cs);
        sema.Wait();
        lock (waitersLock)
        {
            --waiters;
        }
        Monitor.Enter(cs);
    }
}
All you need to do is create an instance of the ConditionVariable class for each condition you want to be able to wait on.
object queueLock = new object();
private ConditionVariable notFullCondition = new ConditionVariable();
private ConditionVariable notEmptyCondition = new ConditionVariable();
And then just like in the Monitor class, the ConditionVariable's Pulse and Wait methods must be invoked from within a synchronized block of code.
T Take() {
    lock (queueLock) {
        while (queue.Count == 0) {
            // wait for queue to be not empty
            notEmptyCondition.Wait(queueLock);
        }
        T item = queue.Dequeue();
        if (queue.Count < 100) {
            // notify producer queue not full anymore
            notFullCondition.Pulse();
        }
        return item;
    }
}

void Add(T item) {
    lock (queueLock) {
        while (queue.Count >= 100) {
            // wait for queue to be not full
            notFullCondition.Wait(queueLock);
        }
        queue.Enqueue(item);
        // notify consumer queue not empty anymore
        notEmptyCondition.Pulse();
    }
}
Below is a link to the full source code of a proper Condition Variable class using 100% managed code in C#.
https://github.com/CodeExMachina/ConditionVariable
I think I found "the way" to handle the typical problem of a
List<string> log
used by multiple threads - one that fills it, one that processes it and one that empties it - while avoiding an empty polling loop like:
while (true)
{
    //stuff
    Thread.Sleep(100);
}
Variables used in Program:
public static readonly List<string> logList = new List<string>();
public static EventWaitHandle evtLogListFilled = new AutoResetEvent(false);
The processor thread works like this:
private void bw_DoWorkLog(object sender, DoWorkEventArgs e)
{
    StringBuilder toFile = new StringBuilder();
    while (true)
    {
        try
        {
            // wait for a signal
            Program.evtLogListFilled.WaitOne();
            try
            {
                // critical section
                Monitor.Enter(Program.logList);
                int max = Program.logList.Count;
                for (int i = 0; i < max; i++)
                {
                    SetText(Program.logList[0]);
                    toFile.Append(Program.logList[0]);
                    toFile.Append("\r\n");
                    Program.logList.RemoveAt(0);
                }
            }
            finally
            {
                Monitor.Exit(Program.logList);
                // end critical section
            }
            try
            {
                if (toFile.Length > 0)
                {
                    Logger.Log(toFile.ToString().Substring(0, toFile.Length - 2));
                    toFile.Clear();
                }
            }
            catch
            {
            }
        }
        catch (Exception ex)
        {
            Logger.Log(System.Reflection.MethodBase.GetCurrentMethod(), ex);
        }
        Thread.Sleep(100);
    }
}
On the filler thread we have
public static void logList_add(string str)
{
    try
    {
        try
        {
            // critical section
            Monitor.Enter(Program.logList);
            Program.logList.Add(str);
        }
        finally
        {
            Monitor.Exit(Program.logList);
            // end critical section
        }
        // signal the processor
        Program.evtLogListFilled.Set();
    }
    catch { }
}
This solution is fully tested. The instruction Program.evtLogListFilled.Set() releases a thread blocked on Program.evtLogListFilled.WaitOne(); and because the event is an AutoResetEvent, if no thread is currently waiting, the next WaitOne() call will pass straight through.
I think this is the simplest way.
I'm using a named mutex to lock access to a file (with path 'strFilePath') in a construction like this:
private void DoSomethingsWithAFile(string strFilePath)
{
    Mutex mutex = new Mutex(false, strFilePath.Replace("\\", ""));
    try
    {
        mutex.WaitOne();
        // do something with the file....
    }
    catch (Exception ex)
    {
        // handle exception
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
So, this way the code will only block the thread when the same file is being processed already.
Well, I tested this and seemed to work okay, but I really would like to know your thoughts about this.
Since you are talking about a producer-consumer situation with multiple threads, the standard solution would be to use BlockingCollection, which is part of .NET 4 and up - several links with information:
http://msdn.microsoft.com/en-us/library/dd997371.aspx
http://blogs.msdn.com/b/csharpfaq/archive/2010/08/12/blocking-collection-and-the-producer-consumer-problem.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/03/c.net-little-wonders-concurrentbag-and-blockingcollection.aspx
http://www.albahari.com/threading/part5.aspx
If you just want to make the locking process work, then:
use a ConcurrentDictionary in combination with the TryAdd method call... if it returns true then the file was not "locked" and is now "locked", so the thread can proceed - and "unlock" it by calling Remove at the end... any other thread gets false in the meantime and can decide what to do...
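A minimal sketch of that approach (the names are mine, not from any library):

static readonly ConcurrentDictionary<string, bool> FilesInUse =
    new ConcurrentDictionary<string, bool>();

static bool TryProcessFile(string strFilePath)
{
    // TryAdd is atomic: only one thread can "lock" a given path at a time
    if (!FilesInUse.TryAdd(strFilePath, true))
        return false; // someone else holds the "lock" - decide what to do

    try
    {
        // do something with the file....
        return true;
    }
    finally
    {
        FilesInUse.TryRemove(strFilePath, out _); // "unlock"
    }
}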
I would definitely recommend the BlockingCollection approach though!
I ran into the same problem with many threads that can write to the same file.
One of the reasons a mutex is not good is that it is slow:
duration of call mutexSyncTest: 00:00:08.9795826
duration of call NamedLockTest: 00:00:00.2565797
A BlockingCollection is a very good idea, but for my case with rare collisions, parallel writes are better than serialized writes. Also, the approach with a dictionary is much easier to implement.
I use this solution (UPDATED):
public class NamedLock
{
    private class LockAndRefCounter
    {
        public long refCount;
    }

    private ConcurrentDictionary<string, LockAndRefCounter> locksDictionary = new ConcurrentDictionary<string, LockAndRefCounter>();

    public void DoWithLockBy(string key, Action actionWithLock)
    {
        var lockObject = new LockAndRefCounter();
        var keyLock = locksDictionary.GetOrAdd(key, lockObject);
        Interlocked.Increment(ref keyLock.refCount);
        lock (keyLock)
        {
            actionWithLock();
            Interlocked.Decrement(ref keyLock.refCount);
            if (Interlocked.Read(ref keyLock.refCount) <= 0)
            {
                LockAndRefCounter removed;
                locksDictionary.TryRemove(key, out removed);
            }
        }
    }
}
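Usage would look something like this:

var namedLock = new NamedLock();

namedLock.DoWithLockBy(strFilePath, () =>
{
    // do something with the file....
});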
An alternative would be to make one consumer thread which works on a queue and blocks if it is empty. You can have several producer threads adding filepaths to this queue and informing the consumer.
Since .NET 4.0 there's a nice class for this: System.Collections.Concurrent.BlockingCollection<T>
A while ago I had the same issue here on Stack Overflow - How do I implement my own advanced Producer/Consumer scenario?
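A minimal sketch of that single-consumer setup with BlockingCollection<T> (names and setup are my illustration):

var filePaths = new BlockingCollection<string>();

var consumer = new Thread(() =>
{
    // GetConsumingEnumerable blocks while the collection is empty
    // and ends once CompleteAdding() has been called
    foreach (string path in filePaths.GetConsumingEnumerable())
    {
        // process the file; only this thread ever touches the files
    }
});
consumer.Start();

// any number of producer threads can add paths:
filePaths.Add(@"C:\logs\a.txt");

// when no more work will arrive:
filePaths.CompleteAdding();
consumer.Join();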
public void MyTest()
{
    bool eventFinished = false;
    myEventRaiser.OnEvent += delegate { doStuff(); eventFinished = true; };
    myEventRaiser.RaiseEventInSeperateThread();
    while (!eventFinished) Thread.Sleep(1);
    Assert.That(stuff);
}
Why can't eventFinished be volatile and does it matter?
It would seem to me that in this case the compiler or runtime could become too smart for its own good and 'know' in the while loop that eventFinished can only be false - especially when you consider that a lifted variable gets generated as a member of a compiler-generated class, and the delegate as a method of that same class, which hides from the optimizer the fact that eventFinished was once a local variable.
There exists a threading primitive, ManualResetEvent, to do precisely this task - you don't want to be using a boolean flag.
Something like this should do the job:
public void MyTest()
{
    var doneEvent = new ManualResetEvent(false);
    myEventRaiser.OnEvent += delegate { doStuff(); doneEvent.Set(); };
    myEventRaiser.RaiseEventInSeparateThread();
    doneEvent.WaitOne();
    Assert.That(stuff);
}
Regarding the lack of support for the volatile keyword on local variables, I don't believe there is any reason why this might not in theory be possible in C#. Most likely, it is not supported simply because there was no use for such a feature prior to C# 2.0. Now, with the existence of anonymous methods and lambda functions, such support could potentially become useful. Someone please clarify matters if I'm missing something here.
In most scenarios, local variables are specific to a thread, so the issues associated with volatile are completely unnecessary.
This changes when, like in your example, it is a "captured" variable - when it is silently implemented as a field on a compiler-generated class. So in theory it could be volatile, but in most cases it wouldn't be worth the extra complexity.
In particular, something like a Monitor (aka lock) with Pulse etc could do this just as well, as could any number of other threading constructs.
Threading is tricky, and an active loop is rarely the best way to manage it...
Re the edit... secondThread.Join() would be the obvious thing - but if you really want to use a separate token, see below. The advantage of this (over things like ManualResetEvent) is that it doesn't require anything from the OS - it is handled purely inside the CLI.
using System;
using System.Threading;

static class Program {
    static void WriteLine(string message) {
        Console.WriteLine(Thread.CurrentThread.Name + ": " + message);
    }
    static void Main() {
        Thread.CurrentThread.Name = "Main";
        object syncLock = new object();
        Thread thread = new Thread(DoStuff);
        thread.Name = "DoStuff";
        lock (syncLock) {
            WriteLine("starting second thread");
            thread.Start(syncLock);
            Monitor.Wait(syncLock);
        }
        WriteLine("exiting");
    }
    static void DoStuff(object lockHandle) {
        WriteLine("entered");
        for (int i = 0; i < 10; i++) {
            Thread.Sleep(500);
            WriteLine("working...");
        }
        lock (lockHandle) {
            Monitor.Pulse(lockHandle);
        }
        WriteLine("exiting");
    }
}
You could also use Volatile.Write if you want to make the local variable behave as volatile. As in:
public void MyTest()
{
    bool eventFinished = false;
    myEventRaiser.OnEvent += delegate { doStuff(); Volatile.Write(ref eventFinished, true); };
    myEventRaiser.RaiseEventInSeperateThread();
    while (!Volatile.Read(ref eventFinished)) Thread.Sleep(1);
    Assert.That(stuff);
}
What would happen if the Event raised didn't complete until after the process had exited the scope of that local variable? The variable would have been released and your thread would fail.
The sensible approach is to attach a delegate function that indicates to the parent thread that the sub-thread has completed.