Release a lock before waiting, and re-acquire it after - C#

In Java, you can associate multiple Condition objects with a single ReentrantLock. What would the C# equivalent be?
Real-world example: the example implementation in the Java Condition documentation uses two Condition objects, notFull and notEmpty, tied to the same lock. How could that example be translated to C#?
Background: I often find Java code using two Condition objects to signal various states, associated with the same Lock; in C#, it seems that you can either
call Monitor.Enter on an object, and then Monitor.Wait/Monitor.Pulse, but that's just one condition; or
use multiple Auto/ManualResetEvent objects, but these cannot atomically re-acquire a given lock after waiting.
Note: I can think of one way: using Monitor.Wait/Monitor.PulseAll on a single object and checking the condition after waking up; that's what you do in Java as well, to protect against spurious wake-ups. It doesn't really do, though, because it forces you to call PulseAll instead of Pulse, since Pulse might wake up a thread waiting on another condition. Unfortunately, using PulseAll instead of Pulse has performance implications (threads competing for the same lock).

I think if you are doing new development and can target .NET 4 or above, you'll be better served by the new concurrent collection classes, like ConcurrentQueue<T>.
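For illustration, a minimal sketch (assuming .NET 4 or later) of how a single bounded BlockingCollection<T>, which by default wraps a ConcurrentQueue<T>, covers both the notFull and notEmpty conditions from the Java example:

using System.Collections.Concurrent;

// A bounded collection: Add blocks while the collection is full and
// Take blocks while it is empty, covering both conditions at once.
var buffer = new BlockingCollection<string>(boundedCapacity: 1000);

// producer side
buffer.Add("item");          // blocks until there is room

// consumer side
string item = buffer.Take(); // blocks until an item is available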
But if you can't make that move, and to strictly answer your question: in .NET this is somewhat simplified, IMHO. To implement a producer/consumer pattern you would just wait and then pulse, as below (note that I typed this in Notepad):
// max is 1000 items in queue
private int _count = 1000;
private Queue<string> _myQueue = new Queue<string>();
private static object _door = new object();

public void AddItem(string someItem)
{
    lock (_door)
    {
        while (_myQueue.Count == _count)
        {
            // reached max items; wait until there is room
            Monitor.Wait(_door);
        }
        _myQueue.Enqueue(someItem);
        // signal so that threads waiting for items to be inserted are woken up
        // one at a time, so they don't try to dequeue items that are not there
        Monitor.Pulse(_door);
    }
}

public string RemoveItem()
{
    string item = null;
    lock (_door)
    {
        while (_myQueue.Count == 0)
        {
            // no items in the queue; wait until there are items
            Monitor.Wait(_door);
        }
        item = _myQueue.Dequeue();
        // signal we've taken something out, so waiting threads
        // are woken up one at a time and we don't overfill the queue
        Monitor.Pulse(_door);
    }
    return item;
}
Update: To clear up any confusion, note that Monitor.Wait releases the lock while waiting and re-acquires it before returning, so you won't get a deadlock.

@Jason: if the queue is full and you wake only ONE thread, you are not guaranteed that thread is a consumer. It might be a producer, and you get stuck.
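One common mitigation, with exactly the performance cost the question already notes, is to wake every waiter and rely on the while loops to re-check their predicates. A sketch of the one-line change to the code above:

// Replaces Monitor.Pulse(_door) in both AddItem and RemoveItem.
// Every waiter wakes, re-checks its while condition, and goes back to
// waiting if the condition still doesn't hold, so a producer can never
// be the only thread woken when a consumer is the one that's needed.
Monitor.PulseAll(_door);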

I haven't come across much C# code that wants to share state within a lock like this. Without rolling your own, you could use a SemaphoreSlim (but I recommend ConcurrentQueue<T> or BlockingCollection<T>).
public class BoundedBuffer<T>
{
    private readonly SemaphoreSlim _locker = new SemaphoreSlim(1, 1);
    private readonly int _maxCount = 1000;
    private readonly Queue<T> _items;

    public int Count { get { return _items.Count; } }

    public BoundedBuffer()
    {
        _items = new Queue<T>(_maxCount);
    }

    public BoundedBuffer(int maxCount)
    {
        _maxCount = maxCount;
        _items = new Queue<T>(_maxCount);
    }

    public void Put(T item, CancellationToken token)
    {
        _locker.Wait(token);
        bool held = true;
        try
        {
            while (_maxCount == _items.Count)
            {
                // buffer is full: release the "lock", spin briefly, re-acquire
                _locker.Release();
                held = false;
                Thread.SpinWait(1000);
                _locker.Wait(token); // throws OperationCanceledException if cancelled
                held = true;
            }
            _items.Enqueue(item);
        }
        finally
        {
            // only release if we actually hold the semaphore
            // (a cancelled Wait means we don't)
            if (held)
            {
                _locker.Release();
            }
        }
    }

    public T Take(CancellationToken token)
    {
        _locker.Wait(token);
        bool held = true;
        try
        {
            while (0 == _items.Count)
            {
                // buffer is empty: release the "lock", spin briefly, re-acquire
                _locker.Release();
                held = false;
                Thread.SpinWait(1000);
                _locker.Wait(token); // throws OperationCanceledException if cancelled
                held = true;
            }
            return _items.Dequeue();
        }
        finally
        {
            if (held)
            {
                _locker.Release();
            }
        }
    }
}
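A hypothetical usage sketch of the class above (the producer/consumer tasks are mine, for illustration):

var buffer = new BoundedBuffer<string>(100);
var cts = new CancellationTokenSource();

// producer: blocks (spinning) while the buffer is full
var producer = Task.Run(() => buffer.Put("hello", cts.Token));

// consumer: blocks (spinning) while the buffer is empty
var consumer = Task.Run(() => Console.WriteLine(buffer.Take(cts.Token)));

// cts.Cancel() would abort either side with an OperationCanceledException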

Related

Implementing a Starve method ("Unrelease"/"Hold") for SemaphoreSlim

I'm using a SemaphoreSlim with FIFO behaviour, and now I want to add a Starve(int amount) method to it, to remove slots from the pool; sort of the opposite of Release().
If there are any running tasks, they will of course continue until they are done, since for the moment the semaphore is not keeping track of what is actually running and "owes" the semaphore a release call.
The reason is that the user will dynamically control how many processes are allowed at any time for a given semaphore.
The strategy I'm following is:
if there are slots available, i.e., CurrentCount > 0, then call Wait() on the SemaphoreSlim without releasing back.
if there are no more slots available, because presumably tasks are running and potentially even queuing, then ignore the next call(s) to Release() so that no slots are freed (an int variable keeps count).
I have added the code I have so far below. The main issues I'm struggling with are how to ensure thread safety, no deadlocks, and no surprising race conditions.
Given that I cannot access the private lock() of the semaphore, I created a new object to at least try to prevent several threads from manipulating the new variables (within the wrapper) at the same time.
However, I fear that other variables like CurrentCount, which live inside the SemaphoreSlim, could also change halfway through and mess things up... I would expect the lock in the Release() method to prevent changes to CurrentCount, but maybe I should also apply the lock to Wait and WaitAsync (which could potentially also change CurrentCount)? That would probably also result in unnecessary locking between two calls to Wait(?)
Is the call to semaphore.Wait() in this situation any better or worse than await semaphore.WaitAsync()?
Are there any better ways to extend the functionality of a class such as SemaphoreSlim, which contains many private variables that are potentially needed or that would be useful to have access to?
I briefly considered creating a new class which inherits from SemaphoreSlim, or looking at extension methods, maybe using reflection to access the private variables... but none of these seems obvious or valid.
public class SemaphoreQueue
{
    private SemaphoreSlim semaphore;
    private ConcurrentQueue<TaskCompletionSource<bool>> queue = new ConcurrentQueue<TaskCompletionSource<bool>>();
    private int releasesToIgnore;
    private object lockObj;
    private const int NO_MAXIMUM = Int32.MaxValue; // cannot access SemaphoreSlim.NO_MAXIMUM

    public SemaphoreQueue(int initialCount) : this(initialCount, NO_MAXIMUM) { }

    public SemaphoreQueue(int initialCount, int maxCount)
    {
        semaphore = new SemaphoreSlim(initialCount, maxCount);
        lockObj = new object();
        releasesToIgnore = 0;
    }

    public void Starve(int amount)
    {
        lock (lockObj)
        {
            // a maximum of CurrentCount slots can be immediately starved
            // by calling Wait without releasing
            while ((semaphore.CurrentCount > 0) && (amount > 0))
            {
                semaphore.Wait();
                amount -= 1;
            }
            // presumably there are still tasks running; the next Releases will be ignored
            if (amount > 0)
                releasesToIgnore += amount;
        }
    }

    public int Release()
    {
        return Release(1);
    }

    public int Release(int num)
    {
        lock (lockObj)
        {
            if (releasesToIgnore > num)
            {
                releasesToIgnore -= num;
                return semaphore.CurrentCount;
            }
            else
            {
                int oldReleasesToIgnore = releasesToIgnore;
                releasesToIgnore = 0;
                return semaphore.Release(num - oldReleasesToIgnore);
            }
        }
    }

    public void Wait(CancellationToken token)
    {
        WaitAsync(token).Wait();
    }

    public Task WaitAsync(CancellationToken token)
    {
        var tcs = new TaskCompletionSource<bool>();
        queue.Enqueue(tcs);
        QueuedAwait(token);
        return tcs.Task;
    }

    public int CurrentCount { get => this.semaphore.CurrentCount; }

    private void QueuedAwait(CancellationToken token)
    {
        semaphore.WaitAsync(token).ContinueWith(t =>
        {
            TaskCompletionSource<bool> popped;
            if (queue.TryDequeue(out popped))
                popped.SetResult(true);
        });
    }

    public void Dispose()
    {
        semaphore.Dispose();
    }
}
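To make the intent concrete, a hypothetical caller of the wrapper above might shrink the pool like this:

var semaphore = new SemaphoreQueue(4);   // allow 4 concurrent workers

// ... workers call semaphore.Wait(token) / semaphore.Release() ...

// Shrink the pool by 2: slots that are currently free are consumed
// immediately; otherwise the next 2 Release() calls are swallowed,
// so running tasks finish but fewer new ones may start.
semaphore.Starve(2);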

Notify a thread when data is added to a queue

I have one thread adding data to a queue, and I want another thread to be notified when data is added, so that it can start processing data from the queue.
One option is for the thread to poll the queue continuously to see if the count is more than zero, but I think this is not a good way; any other suggestion would be greatly appreciated.
Any suggestion on how I can achieve this? I am using .NET Framework 3.5.
And what if I have two threads, one doing q.Enqueue(data) and the other doing q.Dequeue(); do I need to manage the lock in this case?
You can use a ManualResetEvent to notify a thread.
ManualResetEvent e = new ManualResetEvent(false);
After each q.Enqueue(), call e.Set(); in the processing thread, wait for items with e.WaitOne().
If you do the processing inside a loop, you should call e.Reset() right after e.WaitOne().
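A minimal sketch of that pattern (the queue, lock object, and Process method are hypothetical placeholders):

var e = new ManualResetEvent(false);
var q = new Queue<string>();
var door = new object();

// producer
lock (door) { q.Enqueue("data"); }
e.Set();

// consumer loop: drain everything queued after each signal
while (true)
{
    e.WaitOne();   // block until the producer signals
    e.Reset();     // reset first, so a Set racing in during the drain still sticks
    while (true)
    {
        string item;
        lock (door)
        {
            if (q.Count == 0) break;
            item = q.Dequeue();
        }
        Process(item); // hypothetical handler
    }
}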
I don't use queues for this, because I'd rather batch-process the items. That is more useful when you have to open/close (log) files or open/close databases. Here is an example of how I create such a handler:
// J. van Langen
public abstract class QueueHandler<T> : IDisposable
{
    // some events to trigger
    ManualResetEvent _terminating = new ManualResetEvent(false);
    ManualResetEvent _terminated = new ManualResetEvent(false);
    AutoResetEvent _needProcessing = new AutoResetEvent(false);

    // my 'queue'
    private List<T> _queue = new List<T>();

    public QueueHandler()
    {
        new Thread(new ThreadStart(() =>
        {
            // the handles the worker should wait on
            WaitHandle[] handles = new WaitHandle[] { _terminating, _needProcessing };
            // while not terminating, loop (0 timeout)
            while (!_terminating.WaitOne(0))
            {
                // wait on the _terminating and the _needProcessing handle
                WaitHandle.WaitAny(handles);
                // a temporary array to store the current items
                T[] itemsCopy;
                // lock the queue
                lock (_queue)
                {
                    // create a 'copy'
                    itemsCopy = _queue.ToArray();
                    // clear the queue
                    _queue.Clear();
                }
                if (itemsCopy.Length > 0)
                    HandleItems(itemsCopy);
            }
            // the thread is done
            _terminated.Set();
        })).Start();
    }

    public abstract void HandleItems(T[] items);

    public void Enqueue(T item)
    {
        // lock the queue to add the item
        lock (_queue)
            _queue.Add(item);
        _needProcessing.Set();
    }

    // batch
    public void Enqueue(IEnumerable<T> items)
    {
        // lock the queue to add multiple items
        lock (_queue)
            _queue.AddRange(items);
        _needProcessing.Set();
    }

    public void Dispose()
    {
        // let the thread know it should stop
        _terminating.Set();
        // wait until the thread is stopped
        _terminated.WaitOne();
    }
}
For _terminating/_terminated I use a ManualResetEvent because those are only set once.
For _needProcessing I use an AutoResetEvent. It can't be done with a ManualResetEvent, because another thread could Set it again right after it's triggered; if you then Reset it after the WaitHandle.WaitAny, you could lose the signal for newly added items. (Hmm, if anyone can explain this more simply, be welcome. :)
Example:
public class QueueItem
{
}

public class MyQueue : QueueHandler<QueueItem>
{
    public override void HandleItems(QueueItem[] items)
    {
        // do your thing
    }
}

public void Test()
{
    MyQueue queue = new MyQueue();

    QueueItem item = new QueueItem();
    queue.Enqueue(item);

    QueueItem[] batch = new QueueItem[]
    {
        new QueueItem(),
        new QueueItem()
    };
    queue.Enqueue(batch);

    // even on Dispose, all queued items will be processed in order to stop the QueueHandler
    queue.Dispose();
}
Use the BlockingCollection class. The nice thing about it is that the Take method blocks (without polling) if the queue is empty. It is included in .NET 4.0+, or as part of the Reactive Extensions download, or maybe even the TPL backport via NuGet. If you want, you can use the following unoptimized variation of the class.
public class BlockingCollection<T>
{
    private readonly Queue<T> m_Queue = new Queue<T>();

    public void Add(T item)
    {
        lock (m_Queue)
        {
            m_Queue.Enqueue(item);
            Monitor.Pulse(m_Queue);
        }
    }

    public T Take()
    {
        lock (m_Queue)
        {
            while (m_Queue.Count == 0)
            {
                Monitor.Wait(m_Queue);
            }
            return m_Queue.Dequeue();
        }
    }

    public bool TryTake(out T item)
    {
        lock (m_Queue)
        {
            if (m_Queue.Count > 0)
            {
                item = m_Queue.Dequeue();
                return true;
            }
        }
        // report failure explicitly rather than via "item != null",
        // which would misreport for value types and null items
        item = default(T);
        return false;
    }
}
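Usage of the variation above mirrors the real class; a small sketch:

var queue = new BlockingCollection<string>();

// producer thread
queue.Add("job");

// consumer thread: blocks (without polling) until an item arrives
string job = queue.Take();

// non-blocking probe
string maybe;
if (queue.TryTake(out maybe))
{
    // got an item without waiting
}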
I think BlockingCollection would do better than Queue. Other than that, continuously checking the queue size (and pausing the thread when it's zero) is quite an OK approach.
By the way, we are talking about the producer-consumer pattern here; you can Google it for some other approaches.

Multiple producers / single consumer, processing async without locks?

I've got the following code (which doesn't work very well in a multi-threaded environment):
public class SomeClass
{
    private readonly ConcurrentQueue<ISocketWriterJob> _writeQueue = new ConcurrentQueue<ISocketWriterJob>();
    private ISocketWriterJob _currentJob;

    public void Send(ISocketWriterJob job)
    {
        if (_currentJob != null)
        {
            _writeQueue.Enqueue(job);
            return;
        }
        _currentJob = job;
        _currentJob.Write(_writeArgs);
        // The job is invoked asynchronously here
    }

    private void HandleWriteCompleted(SocketError error, int bytesTransferred)
    {
        // error checks etc. removed for this sample
        if (_currentJob.WriteCompleted(bytesTransferred))
        {
            _currentJob.Dispose();
            if (!_writeQueue.TryDequeue(out _currentJob))
            {
                _currentJob = null;
                return;
            }
        }
        _currentJob.Write(_writeArgs);
        // The job is invoked asynchronously here
    }
}
The Send method should invoke the job asynchronously if there isn't a current job being executed. It should enqueue the job if there is.
Putting a lock around the _currentJob assignment/check would make everything work just fine. But is there a lock-free way to solve it?
Update
I'm using a socket and its SendAsync method to send the information, which means that I do not know whether there is a write/job pending when the Send() method is invoked.
Consider using CompareExchange with a hypothesis about the intended state transitions. There is no need to use ConcurrentQueue, since we are now in control of our own synchronization.
Updated to use a state machine.
Updated again to remove an unneeded Interlocked.Exchange (for the state assignment).
public class SomeClass
{
    private readonly Queue<ISocketWriterJob> _writeQueue = new Queue<ISocketWriterJob>();
    private ISocketWriterJob _currentJob;

    private enum State { Idle, Active, Enqueue, Dequeue };
    // stored as an int, because Interlocked.CompareExchange has no overload for enums
    private int _state;

    public void Send(ISocketWriterJob job)
    {
        bool spin = true;
        while (spin)
        {
            switch ((State)_state)
            {
                case State.Idle:
                    if (Interlocked.CompareExchange(ref _state, (int)State.Active, (int)State.Idle) == (int)State.Idle)
                    {
                        spin = false;
                    }
                    // else consider new state
                    break;
                case State.Active:
                    if (Interlocked.CompareExchange(ref _state, (int)State.Enqueue, (int)State.Active) == (int)State.Active)
                    {
                        _writeQueue.Enqueue(job);
                        _state = (int)State.Active;
                        return;
                    }
                    // else consider new state
                    break;
                case State.Enqueue:
                case State.Dequeue:
                    // spin to wait for new state
                    Thread.Yield();
                    break;
            }
        }
        _currentJob = job;
        _currentJob.Write(_writeArgs);
        // The job is invoked asynchronously here
    }

    private void HandleWriteCompleted(SocketError error, int bytesTransferred)
    {
        // error checks etc. removed for this sample
        if (_currentJob.WriteCompleted(bytesTransferred))
        {
            _currentJob.Dispose();
            bool spin = true;
            while (spin)
            {
                switch ((State)_state)
                {
                    case State.Active:
                        if (Interlocked.CompareExchange(ref _state, (int)State.Dequeue, (int)State.Active) == (int)State.Active)
                        {
                            // TryDequeue is not on Queue<T> in older frameworks;
                            // see the extension-method sketch further down
                            if (!_writeQueue.TryDequeue(out _currentJob))
                            {
                                // _currentJob is left null; the Idle state represents "no job"
                                _state = (int)State.Idle;
                                return;
                            }
                            else
                            {
                                _state = (int)State.Active;
                                spin = false; // got the next job; stop spinning
                            }
                        }
                        // else consider new state
                        break;
                    case State.Enqueue:
                        // spin to wait for new state
                        Thread.Yield();
                        break;
                    // impossible states
                    case State.Idle:
                    case State.Dequeue:
                        break;
                }
            }
        }
        _currentJob.Write(_writeArgs);
        // The job is invoked asynchronously here
    }
}
At the moment the split between your producer and consumer is a little fuzzy; you have "produce a job into a queue or consume it immediately" and "consume a job from the queue or quit if there isn't one"; it would be clearer as "produce a job into a queue", "consume a job from the queue (initially)", and "consume a job from the queue (once a job finishes)".
The trick here is to use a BlockingCollection so you can wait for a job to appear:
BlockingCollection<ISocketWriterJob> _writeQueue =
    new BlockingCollection<ISocketWriterJob>();
Let threads calling Send literally just queue a job:
public void Send(ISocketWriterJob job)
{
    _writeQueue.Add(job);
}
Then have another thread that just consumes jobs.
public void StartConsumingJobs()
{
    // Get the first job, or wait for one to be queued.
    _currentJob = _writeQueue.Take();
    // Start job
}

private void HandleWriteCompleted(SocketError error, int bytesTransferred)
{
    if (_currentJob.WriteCompleted(bytesTransferred))
    {
        _currentJob.Dispose();
        // Get the next job, or wait for one to be queued.
        _currentJob = _writeQueue.Take();
    }
    _currentJob.Write(_writeArgs);
    // Start/continue the job as before
}
I don't think you will gain anything from using lock-free techniques here. Even with simple locks you'll be able to stay in user mode, because Monitor.Enter/Monitor.Exit spin first and only transition into kernel mode if the wait lasts longer.
This means that a lock-based technique will perform as well as any lock-free technique, because you can lock only for storing the job into the queue and getting it back out, and you'll have much clearer and more robust code that every developer can understand:
public class SomeClass
{
    // We don't have to use concurrent collections
    private readonly Queue<ISocketWriterJob> _writeQueue = new Queue<ISocketWriterJob>();
    private readonly object _syncRoot = new object();
    private ISocketWriterJob _currentJob;

    public void Send(ISocketWriterJob job)
    {
        lock (_syncRoot)
        {
            if (_currentJob != null)
            {
                _writeQueue.Enqueue(job);
                return;
            }
            _currentJob = job;
        }
        // Use the job local instead of shared state
        StartJob(job);
    }

    private void StartJob(ISocketWriterJob job)
    {
        job.Write(_writeArgs);
        // The job is invoked asynchronously here
    }

    private void HandleWriteCompleted(SocketError error, int bytesTransferred)
    {
        ISocketWriterJob currentJob = null;
        // error checks etc. removed for this sample
        lock (_syncRoot)
        {
            // I suppose this operation is pretty fast, as is Dispose
            if (_currentJob.WriteCompleted(bytesTransferred))
            {
                _currentJob.Dispose();
                // There is no TryDequeue method on Queue<T>,
                // but we can easily add one using an extension method
                if (!_writeQueue.TryDequeue(out _currentJob))
                {
                    // We don't have to set _currentJob to null;
                    // the out parameter has already done it
                    // _currentJob = null;
                    return;
                }
            }
            // Store the current job for further work
            currentJob = _currentJob;
        }
        StartJob(currentJob);
    }
}
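The TryDequeue used above does not exist on Queue<T> in older frameworks; a minimal extension method along the lines the comment suggests:

public static class QueueExtensions
{
    // TryDequeue for Queue<T>, as the comment above hints at.
    public static bool TryDequeue<T>(this Queue<T> queue, out T item)
    {
        if (queue.Count > 0)
        {
            item = queue.Dequeue();
            return true;
        }
        item = default(T);
        return false;
    }
}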
Lock-free is an optimization, and like any other optimization you should measure performance first to make sure you actually have an issue with your simple lock-based implementation; only if that's true should you reach for lower-level techniques like lock-free code. Performance versus maintainability is a classic tradeoff, and you should choose very carefully.
You can mark the current job as volatile, which should ensure the current thread gets the latest state. Generally, though, locking is preferable.
private volatile ISocketWriterJob _currentJob;

Weird C# threading ThreadInterruptedException

I am developing an application simulating a network comprising a number of nodes that exchange messages. I try to simulate the transmission channel with a Queue where every node can place a message. Then another entity takes over the message and delivers it to the specified node. Then I want to signal (with an event) the end of a transmission phase when the message queue has been idle for a certain amount of time, say X, namely when no new messages have been added to the queue for X milliseconds.
I understand that my case follows the consumer/producer paradigm. So far, I have done the following:
public class Com<T>
{
    private Thread dispatcher;
    private Queue<T> queue;
    private int waitTime;
    private Object locker;
    private Timer timer;

    public event EventHandler EmptyQueueEvent;

    public Com()
    {
        queue = new Queue<T>();
        locker = new Object();
        waitTime = X;
        timer = new Timer(FireEmpty, null, Timeout.Infinite, Timeout.Infinite);
        dispatcher = new Thread(Serve);
        dispatcher.IsBackground = true;
        dispatcher.Start();
    }

    private void Serve()
    {
        while (true)
        {
            try
            {
                if (queue.Count == 0)
                {
                    timer.Change(waitTime, 0);
                    Thread.Sleep(Timeout.Infinite);
                }
            }
            catch (ThreadInterruptedException)
            {
            }
            while (queue.Count != 0)
            {
                lock (locker)
                {
                    deliver(queue.Dequeue());
                }
            }
        }
    }

    private void deliver(T item)
    {
        // Do stuff
    }

    public void Add(T item)
    {
        timer.Change(Timeout.Infinite, Timeout.Infinite);
        lock (locker)
        {
            queue.Enqueue(item);
        }
        dispatcher.Interrupt();
    }

    private void FireEmpty(object o)
    {
        // Fire event
    }
}
However, running my simulations shows that my synchronization is not enough: I sometimes get a ThreadInterruptedException while trying to dequeue a message (in the Serve() method). Note that the exception does not occur every time I run the simulation, but rather rarely: approximately once every 850-1000 executions (I am running the executions iteratively).
Does anybody have an idea what is wrong with my code? :)
Have you tried locking before you attempt to get the Queue count? Like:
private void Serve()
{
    while (true)
    {
        try
        {
            int count = 0;
            lock (locker)
                count = queue.Count;
            if (count == 0)
            {
                timer.Change(waitTime, 0);
                Thread.Sleep(Timeout.Infinite);
            }
        }
        catch (ThreadInterruptedException)
        {
        }
        while (queue.Count != 0)
        {
            lock (locker)
            {
                deliver(queue.Dequeue());
            }
        }
    }
}
It's possible that an Add is getting called at the same time you're trying to count the number of items. Also, you might want to consider one of the collections from System.Collections.Concurrent if you're using .NET 4.0.
** UPDATE **
I just took a closer look at your code and had an "oh, duh" moment. You should be expecting a ThreadInterruptedException, because you're calling dispatcher.Interrupt(). Check the MSDN documentation on that. I think what you need to do is use something like a ManualResetEvent, and instead of calling Interrupt() do a WaitOne() on that event.
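A rough sketch of that suggestion, reusing the queue, locker, and deliver members from the question and omitting the idle-timer logic (an AutoResetEvent would also work here):

// new field replacing the Interrupt-based wake-up
private ManualResetEvent _workReady = new ManualResetEvent(false);

public void Add(T item)
{
    lock (locker)
    {
        queue.Enqueue(item);
    }
    _workReady.Set();          // wake the dispatcher without an exception
}

private void Serve()
{
    while (true)
    {
        _workReady.WaitOne();  // block until Add signals
        _workReady.Reset();
        while (true)
        {
            T item;
            lock (locker)
            {
                if (queue.Count == 0) break;
                item = queue.Dequeue();
            }
            deliver(item);
        }
    }
}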
** UPDATE2 **
Here's some sample code that includes my other locking suggestion as well (on Gist):
https://gist.github.com/1683547

C#: once the main thread sleeps, all threads stop

I have a class running the Producer-Consumer model like this:
public class SyncEvents
{
    public bool waiting;

    public SyncEvents()
    {
        waiting = true;
    }
}

public class Producer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private Object _waitAck;

    public Producer(Queue<Delegate> q, SyncEvents sync, Object obj)
    {
        _queue = q;
        _sync = sync;
        _waitAck = obj;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                Monitor.Wait(_sync, 0);
                if (_queue.Count > 0)
                {
                    _sync.waiting = false;
                }
                else
                {
                    _sync.waiting = true;
                    lock (_waitAck)
                    {
                        Monitor.Pulse(_waitAck);
                    }
                }
                Monitor.Pulse(_sync);
            }
        }
    }
}

public class Consumer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private int count = 0;

    public Consumer(Queue<Delegate> q, SyncEvents sync)
    {
        _queue = q;
        _sync = sync;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                while (_queue.Count == 0)
                {
                    Monitor.Wait(_sync);
                }
                Delegate query = _queue.Dequeue();
                query.DynamicInvoke(null);
                count++;
                Monitor.Pulse(_sync);
            }
        }
    }
}

/// <summary>
/// Acts as a consumer of the queries produced by the DataGridViewCustomCell
/// </summary>
public class QueryThread
{
    private SyncEvents _syncEvents = new SyncEvents();
    private Object waitAck = new Object();
    private Queue<Delegate> _queryQueue = new Queue<Delegate>();
    Producer queryProducer;
    Consumer queryConsumer;

    public QueryThread()
    {
        queryProducer = new Producer(_queryQueue, _syncEvents, waitAck);
        queryConsumer = new Consumer(_queryQueue, _syncEvents);
        Thread producerThread = new Thread(queryProducer.ThreadRun);
        Thread consumerThread = new Thread(queryConsumer.ThreadRun);
        producerThread.IsBackground = true;
        consumerThread.IsBackground = true;
        producerThread.Start();
        consumerThread.Start();
    }

    public bool isQueueEmpty()
    {
        return _syncEvents.waiting;
    }

    public void wait()
    {
        lock (waitAck)
        {
            while (_queryQueue.Count > 0)
            {
                Monitor.Wait(waitAck);
            }
        }
    }

    public void Enqueue(Delegate item)
    {
        _queryQueue.Enqueue(item);
    }
}
The code runs smoothly except for the wait() function.
In some cases I want to wait until all the functions in the queue have finished running, so I made the wait() function.
The producer will fire the waitAck pulse at a suitable time.
However, when the line "Monitor.Wait(waitAck);" runs in the wait() function, all threads stop, including the producer and consumer threads.
Why would this happen, and how can I solve it? Thanks!
It seems very unlikely that all the threads will actually stop, although I should point out that to avoid false wake-ups you should probably have a while loop instead of an if statement:
lock (waitAck)
{
    while (queryProducer.secondQueue.Count > 0)
    {
        Monitor.Wait(waitAck);
    }
}
The fact that you're calling Monitor.Wait means that waitAck should be released so it shouldn't prevent the consumer threads from locking...
Could you give more information about the way in which the producer/consumer threads are "stopping"? Does it look like they've just deadlocked?
Is your producer using Pulse or PulseAll? You've got an extra waiting thread now, so if you only use Pulse it's only going to release a single thread... it's hard to see whether or not that's a problem without the details of your Producer and Consumer classes.
If you could show a short but complete program to demonstrate the problem, that would help.
EDIT: Okay, now you've posted the code I can see a number of issues:
Having so many public variables is a recipe for disaster. Your classes should encapsulate their functionality so that other code doesn't have to go poking around for implementation bits and pieces. (For example, your calling code here really shouldn't have access to the queue.)
You're adding items directly to the second queue, which means you can't efficiently wake up the producer to add them to the first queue. Why do you even have multiple queues?
You're always waiting on _sync in the producer thread... why? What's going to notify it to start with? Generally speaking the producer thread shouldn't have to wait, unless you have a bounded buffer
You have a static variable (_waitAck) which is being overwritten every time you create a new instance. That's a bad idea.
You also haven't shown your SyncEvents class - is that meant to be doing anything interesting?
To be honest, it seems like you've got quite a strange design - you may well be best starting again from scratch. Try to encapsulate the whole producer/consumer queue in a single class, which has Produce and Consume methods, as well as WaitForEmpty (or something like that). I think you'll find the synchronization logic a lot easier that way.
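For instance, a minimal sketch of such a class (hypothetical, using a single lock and Monitor for both the "items available" and "queue empty" conditions, with while loops guarding each predicate):

public class WorkQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly object _lock = new object();

    public void Produce(T item)
    {
        lock (_lock)
        {
            _items.Enqueue(item);
            // Wake consumers (and anyone in WaitForEmpty); each waiter
            // re-checks its own predicate in a while loop.
            Monitor.PulseAll(_lock);
        }
    }

    public T Consume()
    {
        lock (_lock)
        {
            while (_items.Count == 0)
            {
                Monitor.Wait(_lock);
            }
            T item = _items.Dequeue();
            Monitor.PulseAll(_lock); // the queue may have just become empty
            return item;
        }
    }

    public void WaitForEmpty()
    {
        lock (_lock)
        {
            while (_items.Count > 0)
            {
                Monitor.Wait(_lock);
            }
        }
    }
}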
Here is my take on your code:
public class ProducerConsumer : IDisposable
{
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();

    public ProducerConsumer(Queue<Delegate> queue)
    {
        lock (_sync)
        {
            // Note: I would recommend that you don't even
            // bother with taking in a queue. You should be able
            // to just instantiate a new Queue<Delegate>()
            // and use it when you Enqueue. There is nothing that
            // you really need to pass into the constructor.
            _queue = queue;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public void Enqueue(Delegate value)
    {
        lock (_sync)
        {
            _queue.Enqueue(value);
            _ready.Set();
        }
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        Delegate query;
        try
        {
            while (true)
            {
                _ready.WaitOne();
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _ready.Reset();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
            }
        }
    }
}
I'm not exactly sure what you're trying to achieve with the wait function... I'm assuming you're trying to put some kind of limit on the number of items that can be queued. In that case, simply throw an exception or return a failure signal when you have too many items in the queue; the client calling Enqueue will keep retrying until the queue can take more items. Taking an optimistic approach will save you a LOT of headaches and simply helps you get rid of a lot of complex logic.
If you REALLY want to have the wait in there, then I can probably help you figure out a better approach. Let me know what are you trying to achieve with the wait and I'll help you out.
Note: I took this code from one of my projects, modified it a little and posted it here... there might be some minor syntax errors, but the logic should be correct.
UPDATE: Based on your comments I made some modifications: I added another ManualResetEvent to the class, so when you call BlockQueue() it gives you an event you can wait on, and it sets a flag to stop the Enqueue function from queuing more elements. Once all the queries in the queue are serviced, the flag is set back to true and the _wait event is set, so whoever is waiting on it gets the signal.
public class ProducerConsumer : IDisposable
{
    private bool _canEnqueue;
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();
    private static ManualResetEvent _wait = new ManualResetEvent(false);

    public ProducerConsumer()
    {
        lock (_sync)
        {
            _queue = new Queue<Delegate>();
            _canEnqueue = true;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public bool Enqueue(Delegate value)
    {
        lock (_sync)
        {
            // Only enqueue while the queue isn't blocked
            if (_canEnqueue)
            {
                _queue.Enqueue(value);
                _ready.Set();
                return true;
            }
        }
        // Whoever is calling Enqueue should try again later.
        return false;
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        try
        {
            while (true)
            {
                // Wait for a query to be enqueued
                _ready.WaitOne();

                // Process the query
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        Delegate query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _canEnqueue = true;
                        _ready.Reset();
                        _wait.Set();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    // Block the queue from enqueuing; returns null
    // if the queue is already empty.
    public ManualResetEvent BlockQueue()
    {
        lock (_sync)
        {
            if (_queue.Count > 0)
            {
                _canEnqueue = false;
                _wait.Reset();
            }
            else
            {
                // You need to tell the caller that they can't
                // block the queue while it's empty. The caller
                // should check whether the result is null before
                // calling WaitOne().
                return null;
            }
        }
        return _wait;
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
                // Set _wait when disposing the queue so that
                // nobody is left with a lingering wait.
                _wait.Set();
            }
        }
    }
}
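A hypothetical caller of BlockQueue(), checking for the null case the comments describe:

var pc = new ProducerConsumer();
pc.Enqueue((Action)(() => Console.WriteLine("work")));

// Stop new work and wait for the queue to drain.
ManualResetEvent drained = pc.BlockQueue();
if (drained != null)
{
    drained.WaitOne(); // returns once every queued delegate has run
}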
