Given the code:
object sync = new object();
string result = null;
var thread = new Thread(() => {
    var workResult = DoSomeWork();
    lock (sync) { result = workResult; }
});
thread.Start();
thread.Join();
lock (sync) {
    // No code in here - not 'atomic' wrt the thread,
    // as the thread has been terminated and joined.
}
// Is it SAFE to access the `result` here?
UseResultFromThread(result);
Does the empty lock establish a happens-before relationship, i.e. does it guarantee that this thread sees the value of result written from within the worker thread?
If not (and even if so), is there a better approach than using lock here, given the previously established thread-lifetime ordering?
Or (the real question): is the Join alone sufficient to make the modified variable(s) visible to this thread?
It will work, yes, as entering a lock involves a memory barrier. You could use Thread.MemoryBarrier instead to get just that effect. Performance would be almost identical; the change would mostly improve the semantics for the reader.
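For illustration, a minimal sketch of that substitution using the question's code: a full fence in place of the empty lock, just before reading result.

thread.Start();
thread.Join();

Thread.MemoryBarrier(); // full fence: ensures the worker's write to `result` is visible

UseResultFromThread(result);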
That said, the whole thing becomes a lot easier if you use tasks instead, as they're specifically designed to represent operations that have a result, and they will take care of the appropriate synchronization when accessing that result. Your code could be written as simply:
var result = Task.Run(() => DoSomeWork()).Result;
UseResultFromThread(result);
Of course, there isn't even much point in creating a new thread to do some work if you're only going to wait for it to finish. At that point you might as well just have the original thread do the work and not bother with a second thread in the first place; that greatly simplifies the whole thing:
UseResultFromThread(DoSomeWork());
And done.
Related
I have a multi-threaded application that implements async methods. The application utilizes resources that are not thread safe and need to be used on a single thread. The worker thread is guarded like this:
private void EnsureWorkerIsRunning()
{
    // First try without the lock.
    if (_processingRequests)
    {
        return;
    }
    lock (_processLock)
    {
        // Check again, now under the lock.
        if (_processingRequests)
        {
            return;
        }
        _processingRequests = true;
        DoWork();
        _processingRequests = false;
    }
}
That is
Check whether (the bool) _processingRequests is true, without any lock. If requests are being processed, return and be confident that the worker is running.
If _processingRequests is false, continue to the lock statement, which only allows one thread to enter at a time. The first thread that enters the block sets _processingRequests to true and starts the worker. Any subsequent threads that enter the lock block will bail out, since _processingRequests is now true.
Adding a lock directly introduces a performance hit that is not acceptable.
I'm looking for a more elegant way to achieve the same thing without affecting the performance. Any ideas?
The technique you are using is called double-checked locking, and it is a perfectly fine approach where suitable. It's widely used, and it has nothing to do with elegance: its main purpose is to reduce the performance degradation of entering the lock statement each time, by first checking the condition without the lock.
However in your particular case it's more suitable to use just Monitor.TryEnter, which returns false if some thread has already acquired the lock.
Also relevant: a blog post about the impact of processor context switches, which double-checked locking avoids where unnecessary.
A bool _processingRequests with an additional lock(_processLock) is nonsense.
Use proper synchronization, e.g. Monitor:
object _processLock = new object();

// Try to acquire the lock; if it is already held, TryEnter returns false immediately.
if (Monitor.TryEnter(_processLock))
{
    try
    {
        ...
    }
    finally
    {
        Monitor.Exit(_processLock);
    }
}
This will either do the job or, if _processLock is already held, do nothing (which seems to be the behavior you want); there is no need to check a separate flag.
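For illustration, a minimal sketch of how EnsureWorkerIsRunning could look with this approach (assuming the _processLock and DoWork members from the question, and no _processingRequests flag at all):

private readonly object _processLock = new object();

private void EnsureWorkerIsRunning()
{
    // TryEnter returns false immediately if another thread holds the lock,
    // which replaces the _processingRequests flag entirely.
    if (Monitor.TryEnter(_processLock))
    {
        try
        {
            DoWork();
        }
        finally
        {
            Monitor.Exit(_processLock);
        }
    }
}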
In a Parallel.For, is it possible to synchronize each threads with a 'WaitAll' ?
Parallel.For(0, maxIter, i =>
{
    // Do stuff
    // Synchronisation: wait for all threads => ???
    // Do more stuff
});
Parallel.For, in the background, batches the iterations of the loop into one or more Tasks, which can execute in parallel. Unless you take ownership of the partitioning, the number of tasks (and threads) is (and should be!) abstracted away. Control will only exit the Parallel.For loop once all the tasks have completed (i.e. there is no need for WaitAll).
The idea of course is that each loop iteration is independent and doesn't require synchronization.
If synchronization is required in the tight loop, then you haven't isolated the Tasks correctly, or it means that Amdahl's Law is in effect and the problem can't be sped up through parallelization.
However, for an aggregation type pattern, you may need to synchronize after completion of each Task - use the overload with the localInit / localFinally to do this, e.g.:
// allTheStrings is a shared resource which isn't thread safe
var allTheStrings = new List<string>();

Parallel.For( // for (
    0,                        // var i = 0;
    numberOfIterations,       // i < numberOfIterations;
    () => new List<string>(), // localInit - setup for each task: List<string> --> localStrings
    (i, parallelLoopState, localStrings) =>
    {
        // The "tight" loop. If you need to synchronize here, there is no point
        // using parallel at all.
        localStrings.Add(i.ToString());
        return localStrings;
    },
    (localStrings) => // localFinally, run once per task
    {
        // Synchronization is needed here - this runs once per task.
        lock (allTheStrings)
        {
            allTheStrings.AddRange(localStrings);
        }
    });
In the above example, you could also have just declared allTheStrings as
var allTheStrings = new ConcurrentBag<string>();
In which case, we wouldn't have required the lock in the localFinally.
You shouldn't (for the reasons stated by other users), but if you want to, you can use Barrier. It can be used to make threads wait (block) at a certain point until X participants have hit the barrier, at which point the barrier releases and the threads unblock. The downside of this approach, as others have said, is the risk of deadlocks.
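If you do go down that road, here is a minimal self-contained sketch with explicit threads (using Barrier inside Parallel.For itself is risky, since you don't control how many worker threads the partitioner uses):

using System;
using System.Threading;

class BarrierDemo
{
    static void Main()
    {
        const int participants = 4;

        // The barrier releases all waiting threads once `participants` of them
        // have called SignalAndWait; the optional post-phase action runs in between.
        using (var barrier = new Barrier(participants,
            b => Console.WriteLine("--- phase " + b.CurrentPhaseNumber + " complete ---")))
        {
            var threads = new Thread[participants];
            for (int i = 0; i < participants; i++)
            {
                int id = i; // capture a copy, not the loop variable
                threads[i] = new Thread(() =>
                {
                    Console.WriteLine("thread " + id + ": first stage");
                    barrier.SignalAndWait(); // block here until all participants arrive
                    Console.WriteLine("thread " + id + ": second stage");
                });
                threads[i].Start();
            }
            foreach (var t in threads)
                t.Join();
        }
    }
}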
I have a function in C# that can be called multiple times from multiple threads and I want it to be done only once so I thought about this:
class MyClass
{
    bool done = false;

    public void DoSomething()
    {
        lock (this)
            if (!done)
            {
                done = true;
                _DoSomething();
            }
    }
}
The problem is _DoSomething takes a long time and I don't want many threads to wait on it when they can just see that done is true.
Something like this can be a workaround:
class MyClass
{
    bool done = false;

    public void DoSomething()
    {
        bool doIt = false;
        lock (this)
            if (!done)
                doIt = done = true;
        if (doIt)
            _DoSomething();
    }
}
But just doing the locking and unlocking manually would be much better.
How can I manually lock and unlock just like the lock(object) does? I need it to use same interface as lock so that this manual way and lock will block each other (for more complex cases).
The lock keyword is just syntactic sugar for Monitor.Enter and Monitor.Exit:
Monitor.Enter(o);
try
{
    // put your code here
}
finally
{
    Monitor.Exit(o);
}
is the same as
lock (o)
{
    // put your code here
}
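(As an aside: starting with C# 4.0 the compiler emits a slightly different expansion, using the Monitor.Enter(object, ref bool) overload so that the lock is released only if it was actually taken. A sketch of that form:)

bool lockTaken = false;
try
{
    Monitor.Enter(o, ref lockTaken);
    // put your code here
}
finally
{
    if (lockTaken)
        Monitor.Exit(o);
}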
Thomas suggests double-checked locking in his answer. This is problematic. First off, you should not use low-lock techniques unless you have demonstrated that you have a real performance problem that is solved by the low-lock technique. Low-lock techniques are insanely difficult to get right.
Second, it is problematic because we don't know what "_DoSomething" does or what consequences of its actions we are going to rely on.
Third, as I pointed out in a comment above, it seems crazy to report that _DoSomething is "done" when another thread is in fact still in the process of doing it. I don't understand why you have that requirement, and I'm going to assume that it is a mistake. The problems with this pattern still exist even if we set "done" after _DoSomething does its thing.
Consider the following:
class MyClass
{
    readonly object locker = new object();
    bool done = false;

    public void DoSomething()
    {
        if (!done)
        {
            lock (locker)
            {
                if (!done)
                {
                    ReallyDoSomething();
                    done = true;
                }
            }
        }
    }

    int x;
    void ReallyDoSomething()
    {
        x = 123;
    }

    void DoIt()
    {
        DoSomething();
        int y = x;
        Debug.Assert(y == 123); // Can this fire?
    }
}
Is this threadsafe in all possible implementations of C#? I don't think it is. Remember, non-volatile reads may be moved around in time by the processor cache. The C# language guarantees that volatile reads are consistently ordered with respect to critical execution points like locks, and it guarantees that non-volatile reads are consistent within a single thread of execution, but it does not guarantee that non-volatile reads are consistent in any way across threads of execution.
Let's look at an example.
Suppose there are two threads, Alpha and Bravo. Both call DoIt on a fresh instance of MyClass. What happens?
On thread Bravo, the processor cache happens to do a (non-volatile!) fetch of the memory location for x, which contains zero. "done" happens to be on a different page of memory which is not fetched into the cache quite yet.
On thread Alpha at the "same time" on a different processor DoIt calls DoSomething. Thread Alpha now runs everything in there. When thread Alpha is done its work, done is true and x is 123 on Alpha's processor. Thread Alpha's processor flushes those facts back out to main memory.
Thread Bravo now runs DoSomething. It reads the page of main memory containing "done" into the processor cache and sees that it is true.
So now "done" is true, but "x" is still zero in the processor cache for thread Bravo. Thread Bravo is not required to invalidate the portion of the cache that contains "x" being zero because on thread Bravo neither the read of "done" nor the read of "x" were volatile reads.
The proposed version of double-checked locking is not actually double-checked locking at all. When you change the double-checked locking pattern you need to start over again from scratch and re-analyze everything.
The way to make this version of the pattern correct is to make at least the first read of "done" into a volatile read. Then the read of "x" will not be permitted to move "ahead" of the volatile read to "done".
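A minimal sketch of that fix, based on the class above: marking the field volatile makes every read of it a volatile read, including the first, lock-free check.

class MyClass
{
    readonly object locker = new object();
    volatile bool done = false; // volatile: later reads can't be reordered ahead of it

    public void DoSomething()
    {
        if (!done) // volatile read; the subsequent read of x can't move ahead of it
        {
            lock (locker)
            {
                if (!done)
                {
                    ReallyDoSomething();
                    done = true; // written only after the side effects are complete
                }
            }
        }
    }

    int x;
    void ReallyDoSomething() { x = 123; }
}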
You can check the value of done before entering the lock and again inside it:
if (!done)
{
    lock (this)
    {
        if (!done)
        {
            done = true;
            _DoSomething();
        }
    }
}
This way you won't enter the lock if done is true. The second check inside the lock is to cope with race conditions if two threads enter the first if at the same time.
BTW, you shouldn't lock on this, because it can cause deadlocks. Lock on a private field instead (like private readonly object _syncLock = new object())
The lock keyword is just syntactic sugar for the Monitor class; you can also call Monitor.Enter() and Monitor.Exit() yourself.
But the Monitor class itself has also the functions TryEnter() and Wait() which could help in your situation.
I know this answer comes several years late, but none of the current answers seem to address your actual scenario, which only became apparent after your comment:
The other threads don't need to use any information generated by ReallyDoSomething.
If the other threads don't need to wait for the operation to complete, the second code snippet in your question would work fine. You can optimize it further by eliminating your lock entirely and using an atomic operation instead:
private int done = 0;

public void DoSomething()
{
    if (Interlocked.Exchange(ref done, 1) == 0) // only evaluates to true ONCE
        _DoSomething();
}
Furthermore, if your _DoSomething() is a fire-and-forget operation, then you might not even need the first thread to wait for it, allowing it to run asynchronously in a task on the thread pool:
int done = 0;

public void DoSomething()
{
    if (Interlocked.Exchange(ref done, 1) == 0)
        Task.Factory.StartNew(_DoSomething);
}
I asked the question below a couple of weeks ago. Now, when reviewing my question and all the answers, a very important detail jumped out at me: in my second code example, isn't DoTheCodeThatNeedsToRunAsynchronously() executed in the main (UI) thread? Doesn't the timer just wait a second and then post an event to the main thread? That would mean the code-that-needs-to-run-asynchronously isn't run asynchronously at all?!
Original question:
I have recently faced a problem multiple times and solved it in different ways, always being uncertain whether it is thread safe or not: I need to execute a piece of C# code asynchronously. (Edit: I forgot to mention I'm using .NET 3.5!)
That piece of code works on an object that is provided by the main thread code. (Edit: Let's assume that object is thread-safe in itself.) I'll show you the two ways I tried (simplified), and I have these four questions:
What is the best way to achieve what I want? Is it one of the two or another approach?
Is one of the two ways not thread-safe (I fear both...) and why?
The first approach creates a thread and passes it the object in the constructor. Is that how I'm supposed to pass the object?
The second approach uses a timer which doesn't provide that possibility, so I just use the local variable in the anonymous delegate. Is that safe or is it possible in theory that the reference in the variable changes before it is evaluated by the delegate code? (This is a very generic question whenever one uses anonymous delegates). In Java you are forced to declare the local variable as final (i.e. it cannot be changed once assigned). In C# there is no such possibility, is there?
Approach 1: Thread
new Thread(new ParameterizedThreadStart(
    delegate(object parameter)
    {
        Thread.Sleep(1000); // wait a second (for a specific reason)
        MyObject myObject = (MyObject)parameter;
        DoTheCodeThatNeedsToRunAsynchronously();
        myObject.ChangeSomeProperty();
    })).Start(this.MyObject);
There is one problem I had with this approach: my main thread might crash, but the process still persists in memory due to the zombie thread.
Approach 2: Timer
MyObject myObject = this.MyObject;
System.Timers.Timer timer = new System.Timers.Timer();
timer.Interval = 1000;
timer.AutoReset = false; // i.e. only run the timer once.
timer.Elapsed += new System.Timers.ElapsedEventHandler(
    delegate(object sender, System.Timers.ElapsedEventArgs e)
    {
        DoTheCodeThatNeedsToRunAsynchronously();
        myObject.ChangeSomeProperty();
    });

DoSomeStuff();
myObject = that.MyObject; // hypothetical second assignment.
The local variable myObject is what I'm talking about in question 4. I've added a second assignment as an example. Imagine the timer elapses after the second assignment: will the delegate code operate on this.MyObject or that.MyObject?
Whether or not either of these pieces of code is safe has to do with the structure of MyObject instances. In both cases you are sharing the myObject variable between the foreground and background threads. There is nothing stopping the foreground thread from modifying myObject while the background thread is running.
This may or may not be safe and depends on the structure of MyObject. However if you haven't specifically planned for it then it's most certainly an unsafe operation.
I recommend using Task objects, and restructuring the code so that the background task returns its calculated value rather than changing some shared state.
I have a blog entry that discusses five different approaches to background tasks (Task, BackgroundWorker, Delegate.BeginInvoke, ThreadPool.QueueUserWorkItem, and Thread), with the pros and cons of each.
To answer your questions specifically:
What is the best way to achieve what I want? Is it one of the two or another approach? The best solution is to use the Task object instead of a specific Thread or timer callback. See my blog post for all the reasons why, but in summary: Task supports returning a result, callbacks on completion, proper error handling, and integration with the universal cancellation system in .NET.
Is one of the two ways not thread-safe (I fear both...) and why? As others have stated, this totally depends on whether MyObject.ChangeSomeProperty is threadsafe. When dealing with asynchronous systems, it's easier to reason about threadsafety when each asynchronous operation does not change shared state, and rather returns a result.
The first approach creates a thread and passes it the object in the constructor. Is that how I'm supposed to pass the object? Personally, I prefer using lambda binding, which is more type-safe (no casting necessary).
The second approach uses a timer which doesn't provide that possibility, so I just use the local variable in the anonymous delegate. Is that safe or is it possible in theory that the reference in the variable changes before it is evaluated by the delegate code? Lambdas (and delegate expressions) bind to variables, not to values, so the answer is yes: the reference may change before it is used by the delegate. If the reference may change, then the usual solution is to create a separate local variable that is only used by the lambda expression, like this:
MyObject myObject = this.MyObject;
...
timer.AutoReset = false; // i.e. only run the timer once.

var localMyObject = myObject; // copy for the lambda
timer.Elapsed += new System.Timers.ElapsedEventHandler(
    delegate(object sender, System.Timers.ElapsedEventArgs e)
    {
        DoTheCodeThatNeedsToRunAsynchronously();
        localMyObject.ChangeSomeProperty();
    });

// Now myObject can change without affecting timer.Elapsed.
Tools like ReSharper will try to detect whether local variables bound in lambdas may change, and will warn you if it detects this situation.
My recommended solution (using Task) would look something like this:
var ui = TaskScheduler.FromCurrentSynchronizationContext();
var localMyObject = this.myObject;

Task.Factory.StartNew(() =>
{
    // Run asynchronously on a ThreadPool thread.
    Thread.Sleep(1000); // TODO: review if you *really* need this
    return DoTheCodeThatNeedsToRunAsynchronously();
}).ContinueWith(task =>
{
    // Run on the UI thread when the ThreadPool thread returns a result.
    if (task.IsFaulted)
    {
        // Do some error handling with task.Exception
    }
    else
    {
        localMyObject.ChangeSomeProperty(task.Result);
    }
}, ui);
Note that since the UI thread is the one calling MyObject.ChangeSomeProperty, that method doesn't have to be threadsafe. Of course, DoTheCodeThatNeedsToRunAsynchronously still does need to be threadsafe.
"Thread-safe" is a tricky beast. With both of your approches, the problem is that the "MyObject" your thread is using may be modified/read by multiple threads in a way that makes the state appear inconsistent, or makes your thread behave in a way inconsistent with actual state.
For example, say your MyObject.ChangeSomeproperty() MUST be called before MyObject.DoSomethingElse(), or it throws. With either of your approaches, there is nothing to stop any other thread from calling DoSomethingElse() before the thread that will call ChangeSomeProperty() finishes.
Or, if ChangeSomeProperty() happens to be called by two threads, and it (internally) changes state, a thread context switch may happen while the first thread is in the middle of its work, and the end result is that the actual new state after both threads is "wrong".
However, by itself, neither of your approaches is inherently thread-unsafe, they just need to make sure that changing state is serialized and that accessing state is always giving a consistent result.
Personally, I wouldn't use the second approach. If you're having problems with "zombie" threads, set IsBackground to true on the thread.
Your first attempt is pretty good, but the thread continued to exist even after the application exits, because you didn't set the IsBackground property to true... here is a simplified (and improved) version of your code:
MyObject myObject = this.MyObject;
Thread t = new Thread(() =>
{
    Thread.Sleep(1000); // wait a second (for a specific reason)
    DoTheCodeThatNeedsToRunAsynchronously();
    myObject.ChangeSomeProperty();
});
t.IsBackground = true;
t.Start();
With regards to the thread safety: it's difficult to tell if your program functions correctly when multiple threads execute simultaneously, because you're not showing us any points of contention in your example. It's very possible that you will experience concurrency issues if your program has contention on MyObject.
Java has the final keyword and C# has a corresponding keyword called readonly, but neither final nor readonly ensures that the state of the object you're modifying will be consistent between threads. The only thing these keywords do is ensure that you do not change the reference the variable is pointing to. If two threads have read/write contention on the same object, then you should perform some type of synchronization or atomic operations on that object in order to ensure thread safety.
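A quick illustration with a hypothetical Holder class: readonly pins the reference, not the state of the object behind it.

class Holder
{
    public readonly List<int> Items = new List<int>();
}

var holder = new Holder();
// holder.Items = new List<int>(); // compile error: cannot reassign a readonly field
holder.Items.Add(42);              // fine: the object's state is still mutable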
Update
OK, if you modify the reference myObject is pointing to, then your contention is now on myObject itself. I'm sure that my answer will not match your actual situation 100%, but given the example code you've provided, I can tell you what will happen:
You will not be guaranteed which object gets modified: it can be that.MyObject or this.MyObject. That's true regardless of whether you're working with Java or C#. The scheduler may schedule your thread/timer to be executed before, after or during the second assignment. If you're counting on a specific order of execution, then you have to do something to ensure that order of execution. Usually that something is communication between the threads in the form of a signal: a ManualResetEvent, Join or something else.
Here is a join example:
MyObject myObject = this.MyObject;
Thread task = new Thread(() =>
{
    Thread.Sleep(1000); // wait a second (for a specific reason)
    DoTheCodeThatNeedsToRunAsynchronously();
    myObject.ChangeSomeProperty();
});
task.IsBackground = true;
task.Start();
task.Join(); // blocks the main thread until the task thread is finished

myObject = that.MyObject; // the assignment will happen after the task is complete
Here is a ManualResetEvent example:
ManualResetEvent done = new ManualResetEvent(false);

MyObject myObject = this.MyObject;
Thread task = new Thread(() =>
{
    Thread.Sleep(1000); // wait a second (for a specific reason)
    DoTheCodeThatNeedsToRunAsynchronously();
    myObject.ChangeSomeProperty();
    done.Set();
});
task.IsBackground = true;
task.Start();
done.WaitOne(); // blocks the main thread until the task thread signals it's done

myObject = that.MyObject; // the assignment will happen after the task is done
Of course, in this case it's pointless to even spawn multiple threads, since you're not going to allow them to run concurrently. One way to avoid this is by not changing the reference to myObject after you've started the thread, then you won't need to Join or WaitOne on the ManualResetEvent.
So this leads me to a question: why are you assigning a new object to myObject? Is this a part of a for-loop which is starting multiple threads to perform multiple asynchronous tasks?
What is the best way to achieve what I want? Is it one of the two or another approach?
Both look fine, but...
Is one of the two ways not thread-safe (I fear both...) and why?
...they are not thread safe unless MyObject.ChangeSomeProperty() is thread safe.
The first approach creates a thread and passes it the object in the constructor. Is that how I'm supposed to pass the object?
Yes. Using a closure (as in your second approach) is fine as well, with the additional advantage that you don't need to do a cast.
The second approach uses a timer which doesn't provide that possibility, so I just use the local variable in the anonymous delegate. Is that safe or is it possible in theory that the reference in the variable changes before it is evaluated by the delegate code? (This is a very generic question whenever one uses anonymous delegates).
Sure, if you add myObject = null; directly after setting timer.Elapsed, then the code in your thread will fail. But why would you want to do that? Note that changing this.MyObject will not affect the variable captured in your thread.
So, how to make this thread-safe? The problem is that myObject.ChangeSomeProperty(); might run in parallel with some other code that modifies the state of myObject. There are basically two solutions to that:
Option 1: Execute myObject.ChangeSomeProperty() in the main UI thead. This is the simplest solution if ChangeSomeProperty is fast. You can use the Dispatcher (WPF) or Control.Invoke (WinForms) to jump back to the UI thread, but the easiest way is to use a BackgroundWorker:
MyObject myObject = this.MyObject;
var bw = new BackgroundWorker();
bw.DoWork += (sender, args) =>
{
    // This will happen in a separate thread.
    Thread.Sleep(1000);
    DoTheCodeThatNeedsToRunAsynchronously();
};
bw.RunWorkerCompleted += (sender, args) =>
{
    // We are back in the UI thread here.
    if (args.Error != null) // if an exception occurred during DoWork,
        MessageBox.Show(args.Error.ToString()); // do your error handling here
    else
        myObject.ChangeSomeProperty();
};
bw.RunWorkerAsync(); // start the background worker
Option 2: Make the code in ChangeSomeProperty() thread-safe by using the lock keyword (inside ChangeSomeProperty as well as inside any other method modifying or reading the same backing field).
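A minimal sketch of option 2, assuming a hypothetical int backing field inside MyObject: every method that reads or writes the field takes the same private lock.

class MyObject
{
    private readonly object _stateLock = new object();
    private int _someProperty; // hypothetical backing field

    public void ChangeSomeProperty()
    {
        lock (_stateLock) // writers and readers serialize on the same lock
        {
            _someProperty++;
        }
    }

    public int ReadSomeProperty()
    {
        lock (_stateLock)
        {
            return _someProperty;
        }
    }
}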
The bigger thread-safety concern here, in my mind, may be the 1 second Sleep. If this is required in order to synchronize with some other operation (giving it time to complete), then I strongly recommend using a proper synchronization pattern rather than relying on the Sleep. Monitor.Pulse or AutoResetEvent are two common ways to achieve synchronization. Both should be used carefully, as it's easy to introduce subtle race conditions. However, using Sleep for synchronization is a race condition waiting to happen.
Also, if you want to use a thread (and don't have access to the Task Parallel Library in .NET 4.0), then ThreadPool.QueueUserWorkItem is preferable for short-running tasks. Thread pool threads are background threads, so they also won't keep the application alive if the main thread dies, as long as there is not some deadlock preventing a non-background thread from exiting.
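A minimal sketch of that alternative, reusing the captured localMyObject variable from the earlier examples:

ThreadPool.QueueUserWorkItem(state =>
{
    Thread.Sleep(1000); // the same one-second delay as in the question
    DoTheCodeThatNeedsToRunAsynchronously();
    localMyObject.ChangeSomeProperty(); // must be thread-safe when called from here
});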
One thing not mentioned so far: The choice of threading methods depends heavily on specifically what DoTheCodeThatNeedsToRunAsynchronously() does.
Different .NET threading approaches are suitable for different requirements. One very large concern is whether this method will complete quickly, or take some time (is it short-lived or long-running?).
Some .NET threading mechanisms, like ThreadPool.QueueUserWorkItem(), are for use by short-lived threads. They avoid the expense of creating a thread by using "recycled" threads--but the number of threads it will recycle is limited, so a long-running task shouldn't hog the ThreadPool's threads.
Other options to consider are using:
ThreadPool.QueueUserWorkItem() is a convenient means to fire-and-forget small tasks on a ThreadPool thread.
System.Threading.Tasks.Task is a new feature in .NET 4 which makes small tasks easy to run in async/parallel mode.
Delegate.BeginInvoke() and Delegate.EndInvoke(). BeginInvoke() will run the code asynchronously, but it's crucial that you ensure EndInvoke() is called as well, to avoid potential resource leaks. It's also based on ThreadPool threads, I believe.
System.Threading.Thread as shown in your example. Threads provide the most control but are also more expensive than the other methods--so they are ideal for long-running tasks or detail-oriented multithreading.
Overall my personal preference has been to use Delegate.BeginInvoke()/EndInvoke() -- it seems to strike a good balance between control and ease of use.
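For reference, a minimal self-contained sketch of that pattern (asynchronous delegates rely on remoting infrastructure that exists in the .NET Framework of this era, but was later dropped in .NET Core):

using System;

class Program
{
    static string DoWork(int input)
    {
        return "processed " + input;
    }

    static void Main()
    {
        Func<int, string> work = DoWork;

        // BeginInvoke starts the call on a ThreadPool thread.
        IAsyncResult ar = work.BeginInvoke(42, null, null);

        // ... the calling thread is free to do other work here ...

        // EndInvoke must always be paired with BeginInvoke: it blocks until the
        // call completes, returns the result, and rethrows any exception.
        string result = work.EndInvoke(ar);
        Console.WriteLine(result);
    }
}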
This isn't about the different methods I could or should be using to utilize the queues in the best manner, but rather about something I have seen happening that makes no sense to me.
void Runner()
{
    // member variable
    queue = Queue.Synchronized(new Queue());
    while (true)
    {
        if (0 < queue.Count)
        {
            queue.Dequeue();
        }
    }
}
This is run in a single thread:
var t = new Thread(Runner);
t.IsBackground = true;
t.Start();
Other events are "Enqueue"ing else where. What I've seen happen is over a period of time, the Dequeue will actually throw InvalidOperationException, queue empty. This should be impossible seeing as how the count guarantees there is something there, and I'm positive that nothing else is "Dequeue"ing.
The question(s):
Is it possible that the Enqueue actually increases the count before the item is fully on the queue (whatever that means...)?
Is it possible that the thread is somehow restarting (expiring, resetting...) at the Dequeue statement, but immediately after it has already removed an item?
Edit (clarification):
These code pieces are part of a Wrapper class that implements the background helper thread. The Dequeue here is the only Dequeue, and all Enqueue/Dequeue are on the Synchronized member variable (queue).
Using Reflector, you can see that no, the count does not get increased until after the item is added.
As Ben points out, it does seem as though you have multiple callers of Dequeue.
You say you are positive that nothing else is calling dequeue. Is that because you only have the one thread calling dequeue? Is dequeue called anywhere else at all?
EDIT:
I wrote a little sample code, but could not get the problem to reproduce. It just kept running and running without any exceptions.
How long was it running before you got errors? Maybe you can share a bit more of the code.
class Program
{
    static Queue q = Queue.Synchronized(new Queue());
    static bool running = true;

    static void Main()
    {
        Thread producer1 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(100);
            }
        });
        Thread producer2 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(25);
            }
        });
        Thread consumer = new Thread(() =>
        {
            while (running)
            {
                if (q.Count > 0)
                {
                    Guid g = (Guid)q.Dequeue();
                    Console.Write(g.ToString() + " ");
                }
                else
                {
                    Console.Write(" . ");
                }
                Thread.Sleep(1);
            }
        });
        consumer.IsBackground = true;
        consumer.Start();
        producer1.Start();
        producer2.Start();

        Console.ReadLine();
        running = false;
    }
}
Here is what I think the problematic sequence is:
1. (0 < queue.Count) evaluates to true: the queue is not empty.
2. This thread gets preempted and another thread runs.
3. The other thread removes an item from the queue, emptying it.
4. This thread resumes execution, but is now within the if block, and attempts to dequeue an empty queue.
However, you say nothing else is dequeuing...
Try outputting the count inside the if block. If you see the count jump numbers downwards, someone else is dequeuing.
Here's a possible answer from the MSDN page on this topic:
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
My guess is that you're correct - at some point, there's a race condition happening, and you end up dequeuing something that isn't there.
A Mutex or a Monitor lock is probably appropriate here.
Good luck!
Are the other areas that are "Enqueuing" data also using the same synchronized queue object? In order for the Queue.Synchronized to be thread-safe, all Enqueue and Dequeue operations must use the same synchronized queue object.
From MSDN:
To guarantee the thread safety of the Queue, all operations must be done through this wrapper only.
Edited:
If you are looping over many items that involve heavy computation or if you are using a long-term thread loop (communications, etc.), you should consider having a wait function such as System.Threading.Thread.Sleep, System.Threading.WaitHandle.WaitOne, System.Threading.WaitHandle.WaitAll, or System.Threading.WaitHandle.WaitAny in the loop, otherwise it might kill system performance.
Question 1: If you're using a synchronized queue, then no, you're safe! But you'll need to use the synchronized instance on both sides, the producer and the consumer.
Question 2: Terminating your worker thread when there is no work to do is a simple job. However, you need either a monitoring thread or to have the queue start a background worker thread whenever there is something to do. The latter sounds more like the ActiveObject pattern than a simple queue (whose single responsibility says it should only do queueing).
In addition, I'd go for a blocking queue instead of your code above. The way your code works requires CPU processing power even if there is no work to do. A blocking queue lets your worker thread sleep whenever there is nothing to do. You can have multiple sleeping threads running without using CPU processing power.
C# doesn't come with a blocking queue implementation, but there are many out there. See this example and this one.
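For reference, a minimal blocking-queue sketch in the spirit of those implementations, built on Monitor.Wait/Pulse (on .NET 4 and later you would reach for BlockingCollection<T> instead):

using System.Collections.Generic;
using System.Threading;

class BlockingQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _lock = new object();

    public void Enqueue(T item)
    {
        lock (_lock)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_lock); // wake one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (_lock)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_lock); // sleeps without burning CPU
            return _queue.Dequeue();
        }
    }
}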
Another option for making thread-safe use of queues is the ConcurrentQueue<T> class, which has been available since .NET Framework 4 (released shortly after this question was asked in 2009). This may help avoid having to write your own synchronization code, or at least help make it much simpler.
From .NET Framework 4.6 onward, ConcurrentQueue<T> also implements the interface IReadOnlyCollection<T>.
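A minimal sketch of the question's consumer loop rewritten with ConcurrentQueue<T>: TryDequeue combines the emptiness check and the removal into one atomic operation, so the Count/Dequeue race described above cannot occur.

using System;
using System.Collections.Concurrent;

class Runner
{
    static readonly ConcurrentQueue<Guid> queue = new ConcurrentQueue<Guid>();

    static void Run()
    {
        while (true)
        {
            Guid item;
            if (queue.TryDequeue(out item)) // check-and-remove in one atomic step
            {
                Console.WriteLine(item);
            }
        }
    }
}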