Our existing implementation of domain events limits (by blocking) publishing to one thread at a time to avoid reentrant calls to handlers:
public interface IDomainEvent {} // Marker interface
public class Dispatcher : IDisposable
{
private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
// Subscribe code...
public void Publish(IDomainEvent domainEvent)
{
semaphore.Wait();
try
{
// Get event subscriber(s) from concurrent dictionary...
foreach (Action<IDomainEvent> subscriber in eventSubscribers)
{
subscriber(domainEvent);
}
}
finally
{
semaphore.Release();
}
}
// Dispose pattern...
}
If a handler publishes an event, this will deadlock.
How can I rewrite this to serialize calls to Publish? In other words, if subscribing handler A publishes event B, I'll get:
Handler A called
Handler B called
while preserving the condition of no reentrant calls to handlers in a multithreaded environment.
I do not want to change the public method signature; there's no place in the application to call a method to publish a queue, for instance.
We came up with a way to do it synchronously.
public class Dispatcher : IDisposable
{
private readonly ConcurrentQueue<IDomainEvent> queue = new ConcurrentQueue<IDomainEvent>();
private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
// Subscribe code...
public void Publish(IDomainEvent domainEvent)
{
queue.Enqueue(domainEvent);
if (IsPublishing)
{
return;
}
PublishQueue();
}
private void PublishQueue()
{
IDomainEvent domainEvent;
while (queue.TryDequeue(out domainEvent))
{
InternalPublish(domainEvent);
}
}
private void InternalPublish(IDomainEvent domainEvent)
{
semaphore.Wait();
try
{
// Get event subscriber(s) from concurrent dictionary...
foreach (Action<IDomainEvent> subscriber in eventSubscribers)
{
subscriber(domainEvent);
}
}
finally
{
semaphore.Release();
}
// Necessary, as calls to Publish during publishing could have queued events and returned.
PublishQueue();
}
private bool IsPublishing
{
get { return semaphore.CurrentCount < 1; }
}
// Dispose pattern for semaphore...
}
You will have to make Publish asynchronous to achieve that. A naive implementation could be as simple as:
public class Dispatcher : IDisposable {
private readonly BlockingCollection<IDomainEvent> _queue = new BlockingCollection<IDomainEvent>(new ConcurrentQueue<IDomainEvent>());
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
public Dispatcher() {
new Thread(Consume) {
IsBackground = true
}.Start();
}
private List<Action<IDomainEvent>> _subscribers = new List<Action<IDomainEvent>>();
public void AddSubscriber(Action<IDomainEvent> sub) {
_subscribers.Add(sub);
}
private void Consume() {
try {
foreach (var @event in _queue.GetConsumingEnumerable(_cts.Token)) {
try {
foreach (Action<IDomainEvent> subscriber in _subscribers) {
subscriber(@event);
}
}
catch (Exception ex) {
// log, handle
}
}
}
catch (OperationCanceledException) {
// expected
}
}
public void Publish(IDomainEvent domainEvent) {
_queue.Add(domainEvent);
}
public void Dispose() {
_cts.Cancel();
}
}
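For illustration, here is a minimal usage sketch of this dispatcher. The OrderCreated and EmailQueued event types, the Demo wrapper, and the closing sleep are illustrative additions, not part of the original code:
using System;
using System.Threading;

// Hypothetical event types for the sketch.
public class OrderCreated : IDomainEvent { }
public class EmailQueued : IDomainEvent { }

public static class Demo
{
    public static void Main()
    {
        using (var dispatcher = new Dispatcher())
        {
            dispatcher.AddSubscriber(e =>
            {
                Console.WriteLine("Handler called for " + e.GetType().Name);
                if (e is OrderCreated)
                {
                    // Re-entrant publish from a handler: it only enqueues,
                    // so there is no deadlock.
                    dispatcher.Publish(new EmailQueued());
                }
            });
            dispatcher.Publish(new OrderCreated());
            Thread.Sleep(100); // crude: give the consumer thread time to drain
        }
    }
}
Because Publish only enqueues and a single consumer thread drains the queue, the output is "Handler called for OrderCreated" followed by "Handler called for EmailQueued".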
It can't be done with that interface. You can process the event subscriptions asynchronously to remove the deadlock while still running them serially, but then you can't guarantee the order you described. Another call to Publish might enqueue something (event C) while the handler for event A is running but before it publishes event B. Then event B ends up behind event C in the queue.
As long as Handler A is on equal footing with other clients when it comes to getting an item in the queue, it either has to wait like everyone else (deadlock) or it has to play fairly (first come, first served). The interface you have there doesn't allow the two to be treated differently.
That's not to say you couldn't get up to some shenanigans in your logic to attempt to differentiate them (e.g. based on thread id or something else identifiable), but anything along those lines would be unreliable if you don't control the subscriber code as well.
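For what it's worth, a minimal sketch of that kind of shenanigan, assuming handlers run synchronously on the publishing thread (eventSubscribers and semaphore are the fields from the original Dispatcher):
// Sketch only: a thread-local flag detects re-entrant publishes so a handler's
// Publish enqueues instead of deadlocking on the semaphore it already holds.
private readonly ThreadLocal<bool> isDispatching = new ThreadLocal<bool>();
private readonly ConcurrentQueue<IDomainEvent> queue = new ConcurrentQueue<IDomainEvent>();

public void Publish(IDomainEvent domainEvent)
{
    queue.Enqueue(domainEvent);
    if (isDispatching.Value)
    {
        return; // re-entrant call from a handler: the outer loop will drain it
    }
    semaphore.Wait();
    isDispatching.Value = true;
    try
    {
        IDomainEvent next;
        while (queue.TryDequeue(out next))
        {
            foreach (Action<IDomainEvent> subscriber in eventSubscribers)
            {
                subscriber(next);
            }
        }
    }
    finally
    {
        isDispatching.Value = false;
        semaphore.Release();
    }
}
As said, this breaks down as soon as a subscriber hands the event off to another thread before publishing.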
I am working on a WPF application using Prism. I am using EventAggregator to communicate between viewmodels.
public class PublisherViewModel
{
    //publish from wherever appropriate, e.g. a hypothetical refresh command
    public void PublishRefresh()
    {
        _eventAggregator.GetEvent<RefreshEvent>().Publish("STOCKS");
    }
}
public class SubscriberViewModel
{
public SubscriberViewModel(IEventAggregator ea)
{
ea.GetEvent<RefreshEvent>().Subscribe(RefreshData);
}
void RefreshData(string category)
{
Task.Run(() =>
{
//Long Running Operation
Dispatcher.Invoke(() =>
{
//Refresh UI
});
});
}
}
PublisherViewModel can publish events one after another. However, since the SubscriberViewModel starts a long-running Task that is not awaited (and cannot be), a second request from the publisher starts executing right away. In the SubscriberViewModel I want to handle all incoming requests so that they execute one after another, in the order in which they arrive.
I am thinking of handling this using a queue-based mechanism.
Could you please suggest the best practice for this?
Thanks!!
Update:
I have used the below approach
public class BlockingQueue<T> where T : class
{
private readonly BlockingCollection<JobQueueItem<T>> _jobs;
public BlockingQueue(int upperBound)
{
_jobs = new BlockingCollection<JobQueueItem<T>>(upperBound);
var thread = new Thread(new ThreadStart(OnStart));
thread.IsBackground = true;
thread.Start();
}
public void Enqueue(T parameter, Func<T, Task> action)
{
_jobs.Add(new JobQueueItem<T> { Parameter = parameter, JobAction = action });
}
private void OnStart()
{
foreach (var job in _jobs.GetConsumingEnumerable(CancellationToken.None))
{
if (job != null && job.JobAction != null)
{
job.JobAction.Invoke(job.Parameter).Wait();
}
}
}
private class JobQueueItem<T>
{
internal T Parameter { get; set; }
internal Func<T, Task> JobAction { get; set; }
}
}
public class SubscriberViewModel
{
BlockingQueue<RefreshEventArgs> RefreshQueue = new ...;
//inside Subscribed method
RefreshQueue.Enqueue(args, RefreshData);
}
Please suggest. Thanks!
I am thinking of handling this using a queue-based mechanism.
This is the way to go. Set up a queue (probably an asynchronous queue), push the events in the subscriber and consume them from a worker task.
TPL Dataflow is one option to do this: create an ActionBlock<string> from the handler and post the events to it as they come in.
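A minimal sketch of that approach, assuming the RefreshEvent and IEventAggregator types from the question (the lambda body stands in for the long-running work):
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet package: System.Threading.Tasks.Dataflow

public class SubscriberViewModel
{
    private readonly ActionBlock<string> _refreshBlock;

    public SubscriberViewModel(IEventAggregator ea)
    {
        // ActionBlock's default MaxDegreeOfParallelism is 1, so posted
        // categories are processed strictly one at a time, in arrival order.
        _refreshBlock = new ActionBlock<string>(async category =>
        {
            await Task.Run(() => { /* long-running operation */ });
            // marshal back to the UI thread here to refresh the UI
        });

        ea.GetEvent<RefreshEvent>().Subscribe(category => _refreshBlock.Post(category));
    }
}
Completion and cancellation handling are omitted for brevity.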
I have a class in which a method is called on the basis of events received from external application.
public void ProcessItems(Store id,Items items)
{
//Some logic
UpdateValidItems(id, validItems);
}
public void UpdateValidItems(Store id,Items items)
{
//Save in DB
}
The external application may invoke ProcessItems while UpdateValidItems is still running. If an event arrives while UpdateValidItems is being processed, I want it to wait until UpdateValidItems has completed. Is there any way to do this?
Also, multiple stores can be processed at a time, so the waiting should be per store only: if the storeId is different, it should not wait.
I'd decouple incoming events and processing:
0.) Have a thread wait on a blocking queue
1.) An event writes to the blocking queue
2.) The thread from 0.) gets notified, dequeues one "row" (Id and Items)
3.) Said thread processes the items
4.) The thread waits again OR, if the event added more rows in the meantime, processes until the queue is empty, then waits again
This ensures that:
- Only one store is mutated at a time
- The event returns fast
- Subsequent events for the same store do not interfere with current processing
You may also have a look into DataFlow to implement a similar approach.
Edit / Basic Example:
public class Handler
{
private readonly BlockingCollection<QueueEntry> _queue = new BlockingCollection<QueueEntry>();
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
// I used a Form with a button to simulate events, so you'll have to adapt that..
public Handler(Form1 parent)
{
// register for incoming Items
parent.NewItems += Parent_NewItems;
// Start processing on a long-running thread. Note: QueueWorker is async,
// so StartNew returns a Task<Task> and only the code up to the first await
// runs on the dedicated thread; the rest continues on pool threads.
Task.Factory.StartNew(QueueWorker, TaskCreationOptions.LongRunning);
}
// Stop Processing
public void Shutdown(bool doitnow)
{
// Mark the queue "complete" - adding is now forbidden.
_queue.CompleteAdding();
// If you want to stop NOW, cancel all operations
if (doitnow) { _cts.Cancel(); }
// Else the Task will run until the queue has been processed.
}
// This is all that happens on the EDT / Main / UI Thread
private void Parent_NewItems(object sender, NewItemsEventArgs e)
{
try
{
_queue.Add(new QueueEntry { Sender = sender, Event = e });
}
catch (InvalidOperationException)
{
// dontcare ? I didn't - You may, though.
// Will be thrown if the queue has been marked complete.
}
}
private async Task QueueWorker()
{
// Run until the queue has been marked complete and fully drained
while (!_queue.IsCompleted)
{
QueueEntry entry = null;
try
{
// Wait until an entry is available or until canceled.
entry = _queue.Take(_cts.Token);
}
catch ( OperationCanceledException )
{
// dontcare
}
if (entry != null)
{
await Process(entry, _cts.Token);
}
}
}
private async Task Process(QueueEntry entry, CancellationToken cancel)
{
// Dummy Processing...
await Task.Delay(TimeSpan.FromSeconds(entry.Event.Items), cancel);
}
}
public class QueueEntry
{
public object Sender { get; set; }
public NewItemsEventArgs Event { get; set; }
}
Of course, this can be tuned to allow for some concurrency / parallel processing.
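A hedged sketch of one such tuning, assuming store ids can serve as dictionary keys and ProcessEntry stands in for the handling code: give each store its own queue and consumer, so different stores run in parallel while each store stays strictly sequential.
// Sketch: one queue plus one long-running consumer per store id.
private readonly ConcurrentDictionary<int, BlockingCollection<QueueEntry>> _storeQueues =
    new ConcurrentDictionary<int, BlockingCollection<QueueEntry>>();

private BlockingCollection<QueueEntry> GetQueueFor(int storeId)
{
    return _storeQueues.GetOrAdd(storeId, _ =>
    {
        var queue = new BlockingCollection<QueueEntry>();
        Task.Factory.StartNew(() =>
        {
            // Per-store consumer: items for this store are handled in order.
            foreach (QueueEntry entry in queue.GetConsumingEnumerable())
            {
                ProcessEntry(entry); // hypothetical handler
            }
        }, TaskCreationOptions.LongRunning);
        return queue;
    });
}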
private object lock_object = new object();
public void ProcessItems(Store id,Items items)
{
//Some logic
lock(lock_object)
{
UpdateValidItems(id, validItems);
}
}
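Note that a single lock object serializes all stores. To honor the per-store requirement from the question, one option (a sketch, assuming the store exposes a usable key such as a hypothetical id.StoreId) is one lock object per store:
// Sketch: lazily create one lock object per store id.
private readonly ConcurrentDictionary<int, object> _storeLocks =
    new ConcurrentDictionary<int, object>();

public void ProcessItems(Store id, Items items)
{
    //Some logic
    object storeLock = _storeLocks.GetOrAdd(id.StoreId, _ => new object());
    lock (storeLock)
    {
        // Calls for the same store wait here; other stores proceed in parallel.
        UpdateValidItems(id, validItems);
    }
}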
I guess this is sort of a code review, but here is my implementation of the producer/consumer pattern. What I would like to know is whether there is a case in which the while loops in the ReceivingThread() or SendingThread() methods might stop executing. Please note that EnqueueSend(DataSendEnqeueInfo info) is called from multiple different threads, and I probably can't use tasks here since I definitely have to consume commands in a separate thread.
private Thread mReceivingThread;
private Thread mSendingThread;
private Queue<DataRecievedEnqeueInfo> mReceivingThreadQueue;
private Queue<DataSendEnqeueInfo> mSendingThreadQueue;
private readonly object mReceivingQueueLock = new object();
private readonly object mSendingQueueLock = new object();
private bool mIsRunning;
EventWaitHandle mRcWaitHandle;
EventWaitHandle mSeWaitHandle;
private void ReceivingThread()
{
while (mIsRunning)
{
mRcWaitHandle.WaitOne();
DataRecievedEnqeueInfo item = null;
while (mReceivingThreadQueue.Count > 0)
{
lock (mReceivingQueueLock)
{
item = mReceivingThreadQueue.Dequeue();
}
ProcessReceivingItem(item);
}
mRcWaitHandle.Reset();
}
}
private void SendingThread()
{
while (mIsRunning)
{
mSeWaitHandle.WaitOne();
while (mSendingThreadQueue.Count > 0)
{
DataSendEnqeueInfo item = null;
lock (mSendingQueueLock)
{
item = mSendingThreadQueue.Dequeue();
}
ProcessSendingItem(item);
}
mSeWaitHandle.Reset();
}
}
internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
lock (mReceivingQueueLock)
{
mReceivingThreadQueue.Enqueue(info);
mRcWaitHandle.Set();
}
}
public void EnqueueSend(DataSendEnqeueInfo info)
{
lock (mSendingQueueLock)
{
mSendingThreadQueue.Enqueue(info);
mSeWaitHandle.Set();
}
}
P.S. The idea here is that I am using WaitHandles to put the threads to sleep when the queues are empty, and to signal them to start when new items are enqueued.
UPDATE
I am just going to leave this here for people who might be trying to implement the producer/consumer pattern using TPL or tasks: https://blogs.msdn.microsoft.com/benwilli/2015/09/10/tasks-are-still-not-threads-and-async-is-not-parallel/
Use a BlockingCollection instead of Queue, EventWaitHandle and lock objects:
public class DataInfo { }
private Thread mReceivingThread;
private Thread mSendingThread;
private BlockingCollection<DataInfo> queue = new BlockingCollection<DataInfo>();
private CancellationTokenSource receivingCts = new CancellationTokenSource();
private void ReceivingThread()
{
try
{
while (!receivingCts.IsCancellationRequested)
{
// This will block until an item is added to the queue or the cancellation token is cancelled
DataInfo item = queue.Take(receivingCts.Token);
ProcessReceivingItem(item);
}
}
catch (OperationCanceledException)
{
}
}
internal void EnqueueRecevingData(DataInfo info)
{
// When a new item is produced, just add it to the queue
queue.Add(info);
}
// To cancel the receiving thread, cancel the token
private void CancelReceivingThread()
{
receivingCts.Cancel();
}
Personally, for simple producer-consumer problems, I would just use BlockingCollection. There would be no need to manually code your own synchronization logic. The consuming threads will also block if there are no items present in the queue.
Here is what your code might look like if you use this class:
private BlockingCollection<DataRecievedEnqeueInfo> mReceivingThreadQueue = new BlockingCollection<DataRecievedEnqeueInfo>();
private BlockingCollection<DataSendEnqeueInfo> mSendingThreadQueue = new BlockingCollection<DataSendEnqeueInfo>();
public void Stop()
{
// No need for mIsRunning. Calling CompleteAdding() makes the enumerables
// in the GetConsumingEnumerable() calls below complete.
mReceivingThreadQueue.CompleteAdding();
mSendingThreadQueue.CompleteAdding();
}
private void ReceivingThread()
{
foreach (DataRecievedEnqeueInfo item in mReceivingThreadQueue.GetConsumingEnumerable())
{
ProcessReceivingItem(item);
}
}
private void SendingThread()
{
foreach (DataSendEnqeueInfo item in mSendingThreadQueue.GetConsumingEnumerable())
{
ProcessSendingItem(item);
}
}
internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
// You can also use TryAdd() if there is a possibility that you
// can add items after you have stopped. Otherwise, this can throw an
// an exception after CompleteAdding() has been called.
mReceivingThreadQueue.Add(info);
}
public void EnqueueSend(DataSendEnqeueInfo info)
{
mSendingThreadQueue.Add(info);
}
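For completeness, a sketch of how the consuming loops could be started; the Start method is an illustrative addition:
// Sketch: one dedicated consumer thread per queue. The loops above exit
// once Stop() calls CompleteAdding() and the queues are drained.
public void Start()
{
    new Thread(ReceivingThread) { IsBackground = true }.Start();
    new Thread(SendingThread) { IsBackground = true }.Start();
}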
As suggested in the comments, you can also give the TPL Dataflow blocks a try.
As far as I can see, you have two similar pipelines, for receive and send, so I assume that your class hierarchy is like this:
class EnqueueInfo { }
class DataRecievedEnqeueInfo : EnqueueInfo { }
class DataSendEnqeueInfo : EnqueueInfo { }
We can assemble an abstract class which encapsulates the logic for creating the pipeline and provides the interface for processing the items, like this:
abstract class EnqueueInfoProcessor<T>
where T : EnqueueInfo
{
// here we will store all the messages received before the handling
private readonly BufferBlock<T> _buffer;
// simple action block for actual handling the items
private ActionBlock<T> _action;
// cancellation token to cancel the pipeline
public EnqueueInfoProcessor(CancellationToken token)
{
_buffer = new BufferBlock<T>(new DataflowBlockOptions { CancellationToken = token });
_action = new ActionBlock<T>(item => ProcessItem(item), new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = Environment.ProcessorCount,
CancellationToken = token
});
// we link the two blocks so all the items from the buffer
// flow down to the action block in the order they were received
// (with MaxDegreeOfParallelism > 1 they may still complete out of order)
_buffer.LinkTo(_action, new DataflowLinkOptions { PropagateCompletion = true });
}
public void PostItem(T item)
{
// post synchronously; Post returns immediately (its bool result is ignored here)
_buffer.Post(item);
}
public async Task SendItemAsync(T item)
{
// asynchronously wait for message to be posted
await _buffer.SendAsync(item);
}
// abstract method to implement
protected abstract void ProcessItem(T item);
}
Note that you can also encapsulate the link between the two blocks by using the Encapsulate<TInput, TOutput> method, but in that case you have to handle the Completion of the buffer block properly, if you're using it.
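A sketch of that encapsulation (the int/string types here are illustrative): the returned block forwards input to the buffer and exposes the transform's output, and PropagateCompletion makes the encapsulated block's Complete() flow through to the inner blocks:
// Sketch: wrap a buffer + transform pair into a single propagator block.
private static IPropagatorBlock<int, string> BuildPipeline()
{
    var buffer = new BufferBlock<int>();
    var transform = new TransformBlock<int, string>(i => i.ToString());

    // Completing the buffer (via the encapsulated block) also completes
    // the transform, thanks to PropagateCompletion.
    buffer.LinkTo(transform, new DataflowLinkOptions { PropagateCompletion = true });

    return DataflowBlock.Encapsulate(buffer, transform);
}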
After this, we just need to implement the two processing methods for the receive and send logic:
public class SendEnqueueInfoProcessor : EnqueueInfoProcessor<DataSendEnqeueInfo>
{
public SendEnqueueInfoProcessor(CancellationToken token)
: base(token)
{
}
protected override void ProcessItem(DataSendEnqeueInfo item)
{
// send logic here
}
}
public class RecievedEnqueueInfoProcessor : EnqueueInfoProcessor<DataRecievedEnqeueInfo>
{
public RecievedEnqueueInfoProcessor(CancellationToken token)
: base(token)
{
}
protected override void ProcessItem(DataRecievedEnqeueInfo item)
{
// recieve logic here
}
}
You can also create a more complicated pipeline with a TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>, if your message flow is one where a DataRecievedEnqeueInfo message becomes a DataSendEnqeueInfo.
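A short sketch of such a pipeline; CreateSendInfo is a hypothetical mapping function:
// Sketch: received messages are transformed into send messages, then handled.
var process = new TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>(
    received => CreateSendInfo(received)); // hypothetical mapping
var send = new ActionBlock<DataSendEnqeueInfo>(info => { /* send logic */ });

process.LinkTo(send, new DataflowLinkOptions { PropagateCompletion = true });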
I have a listener, which receives work in the form of IPayload. The listener should push this work to observers who actually do the work. This is my first crude attempt to achieve this:
public interface IObserver
{
void DoWork(IPayload payload);
}
public interface IObservable
{
void RegisterObserver(IObserver observer);
void RemoveObserver(IObserver observer);
void NotifyObservers(IPayload payload);
}
public class Observer : IObserver
{
public void DoWork(IPayload payload)
{
// do some heavy lifting
}
}
public class Listener : IObservable
{
private readonly List<IObserver> _observers = new List<IObserver>();
public void PushIncomingPayLoad(IPayload payload)
{
NotifyObservers(payload);
}
public void RegisterObserver(IObserver observer)
{
_observers.Add(observer);
}
public void RemoveObserver(IObserver observer)
{
_observers.Remove(observer);
}
public void NotifyObservers(IPayload payload)
{
Parallel.ForEach(_observers, observer =>
{
observer.DoWork(payload);
});
}
}
Is this a valid approach that follows the observer/observable pattern (i.e. pub/sub)? My understanding is that NotifyObservers also spins up a thread for each payload. Is this correct? Any improvement suggestions are very much welcome.
Please note that all observers have to finish their work before new work in the form of a payload is passed on to them - the order of 'observation' does not matter. Basically, the listener has to act like a master whilst exploiting the cores of the host as much as possible using the TPL. IMHO this requires the explicit registration of observers with the listener/observable.
PS:
I think Parallel.ForEach does not create a thread for each observer (see "Why isn't Parallel.ForEach running multiple threads?"). If this is true, how can I ensure that a thread is created for each observer?
An alternative I have in mind is this:
public async void NotifyObservers(IPayload payload)
{
    // Start all observers in parallel, then wait for every one of them
    // to finish before returning (requires using System.Linq).
    var tasks = _observers.Select(observer => Task.Run(() => observer.DoWork(payload)));
    await Task.WhenAll(tasks);
}
Of course you can do it this way, but in .NET that is not needed if you don't want to reinvent the wheel :-)
In C# this could be done using events.
A short example:
//Your Listener who has a public eventhandler where others can add them as listeners
public class Listener{
//your eventhandler where others "add" them as listeners
public event EventHandler<PayLoadEventsArgs> IncomingPayload;
//a method where you process new data and want to notify the others
public void PushIncomingPayLoad(IPayload payload)
{
//check if there are any listeners
if(IncomingPayload != null)
//if so, then notify them with the data in the PayloadEventArgs
IncomingPayload(this, new PayloadEventArgs(payload));
}
}
//Your EventArgs class to hold the data
public class PayloadEventArgs : EventArgs {
    public Payload Payload { get; private set; }
    public PayloadEventArgs(Payload payload) {
        this.Payload = payload;
    }
}
public class Worker {
    public Worker() {
        //add this instance as an observer
        YourListenerInstance.IncomingPayload += DoWork;
    }
    //call this to remove this instance as an observer
    public void Unsubscribe() {
        YourListenerInstance.IncomingPayload -= DoWork;
    }
//This method gets called when the Listener notifies the IncomingPayload listeners
void DoWork(Object sender, PayloadEventArgs e){
Console.WriteLine(e.Payload);
}
}
EDIT: As the question asks for parallel execution, how about starting the new thread on the subscriber side? I think this is the easiest approach to achieve this.
//Inside the DoWork method of the subscriber start a new thread
Task.Factory.StartNew( () =>
{
//Do your work here
});
//If you want to make sure that a new thread is used for the task, then add the TaskCreationOptions.LongRunning parameter
Task.Factory.StartNew( () =>
{
//Do your work here
}, TaskCreationOptions.LongRunning);
Hopefully this answers your question? If not, please leave a comment.
I have a class running the Producer-Consumer model like this:
public class SyncEvents
{
public bool waiting;
public SyncEvents()
{
waiting = true;
}
}
public class Producer
{
private readonly Queue<Delegate> _queue;
private SyncEvents _sync;
private Object _waitAck;
public Producer(Queue<Delegate> q, SyncEvents sync, Object obj)
{
_queue = q;
_sync = sync;
_waitAck = obj;
}
public void ThreadRun()
{
lock (_sync)
{
while (true)
{
Monitor.Wait(_sync, 0);
if (_queue.Count > 0)
{
_sync.waiting = false;
}
else
{
_sync.waiting = true;
lock (_waitAck)
{
Monitor.Pulse(_waitAck);
}
}
Monitor.Pulse(_sync);
}
}
}
}
public class Consumer
{
private readonly Queue<Delegate> _queue;
private SyncEvents _sync;
private int count = 0;
public Consumer(Queue<Delegate> q, SyncEvents sync)
{
_queue = q;
_sync = sync;
}
public void ThreadRun()
{
lock (_sync)
{
while (true)
{
while (_queue.Count == 0)
{
Monitor.Wait(_sync);
}
Delegate query = _queue.Dequeue();
query.DynamicInvoke(null);
count++;
Monitor.Pulse(_sync);
}
}
}
}
/// <summary>
/// Act as a consumer to the queries produced by the DataGridViewCustomCell
/// </summary>
public class QueryThread
{
private SyncEvents _syncEvents = new SyncEvents();
private Object waitAck = new Object();
private Queue<Delegate> _queryQueue = new Queue<Delegate>();
Producer queryProducer;
Consumer queryConsumer;
public QueryThread()
{
queryProducer = new Producer(_queryQueue, _syncEvents, waitAck);
queryConsumer = new Consumer(_queryQueue, _syncEvents);
Thread producerThread = new Thread(queryProducer.ThreadRun);
Thread consumerThread = new Thread(queryConsumer.ThreadRun);
producerThread.IsBackground = true;
consumerThread.IsBackground = true;
producerThread.Start();
consumerThread.Start();
}
public bool isQueueEmpty()
{
return _syncEvents.waiting;
}
public void wait()
{
lock (waitAck)
{
while (_queryQueue.Count > 0)
{
Monitor.Wait(waitAck);
}
}
}
public void Enqueue(Delegate item)
{
_queryQueue.Enqueue(item);
}
}
The code runs smoothly except for the wait() function.
In some cases I want to wait until all the functions in the queue have finished running, so I made the wait() function.
The producer will fire the waitAck pulse at a suitable time.
However, when the line "Monitor.Wait(waitAck);" runs in the wait() function, all threads stop, including the producer and consumer threads.
Why would this happen and how can I solve it? Thanks!
It seems very unlikely that all the threads will actually stop, although I should point out that to avoid false wake-ups you should probably have a while loop instead of an if statement:
lock (waitAck)
{
while(queryProducer.secondQueue.Count > 0)
{
Monitor.Wait(waitAck);
}
}
The fact that you're calling Monitor.Wait means that waitAck should be released so it shouldn't prevent the consumer threads from locking...
Could you give more information about the way in which the producer/consumer threads are "stopping"? Does it look like they've just deadlocked?
Is your producer using Pulse or PulseAll? You've got an extra waiting thread now, so if you only use Pulse it's only going to release a single thread... it's hard to see whether or not that's a problem without the details of your Producer and Consumer classes.
If you could show a short but complete program to demonstrate the problem, that would help.
EDIT: Okay, now you've posted the code I can see a number of issues:
Having so many public variables is a recipe for disaster. Your classes should encapsulate their functionality so that other code doesn't have to go poking around for implementation bits and pieces. (For example, your calling code here really shouldn't have access to the queue.)
You're adding items directly to the second queue, which means you can't efficiently wake up the producer to add them to the first queue. Why do you even have multiple queues?
You're always waiting on _sync in the producer thread... why? What's going to notify it to start with? Generally speaking the producer thread shouldn't have to wait, unless you have a bounded buffer
You have a static variable (_waitAck) which is being overwritten every time you create a new instance. That's a bad idea.
You also haven't shown your SyncEvents class - is that meant to be doing anything interesting?
To be honest, it seems like you've got quite a strange design - you may well be best starting again from scratch. Try to encapsulate the whole producer/consumer queue in a single class, which has Produce and Consume methods, as well as WaitForEmpty (or something like that). I think you'll find the synchronization logic a lot easier that way.
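A rough sketch of that shape (names and details are illustrative, not a drop-in replacement): one class owns the queue, the lock, and the signalling, and WaitForEmpty only returns once the queue has been drained.
// Sketch: a single class encapsulating Produce, Consume, and WaitForEmpty.
public class WorkQueue
{
    private readonly Queue<Delegate> _queue = new Queue<Delegate>();
    private readonly object _lock = new object();

    public void Produce(Delegate work)
    {
        lock (_lock)
        {
            _queue.Enqueue(work);
            Monitor.PulseAll(_lock); // wake the consumer
        }
    }

    // Run this on a dedicated consumer thread.
    public void Consume()
    {
        while (true)
        {
            Delegate work;
            lock (_lock)
            {
                while (_queue.Count == 0)
                {
                    Monitor.Wait(_lock);
                }
                work = _queue.Dequeue();
            }
            work.DynamicInvoke(null); // invoke outside the lock
            lock (_lock)
            {
                if (_queue.Count == 0)
                {
                    Monitor.PulseAll(_lock); // wake WaitForEmpty callers
                }
            }
        }
    }

    // Blocks until the queue is empty. Note: the last dequeued item may
    // still be executing at the moment this returns.
    public void WaitForEmpty()
    {
        lock (_lock)
        {
            while (_queue.Count > 0)
            {
                Monitor.Wait(_lock);
            }
        }
    }
}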
Here is my take on your code:
public class ProducerConsumer : IDisposable
{
private ManualResetEvent _ready;
private Queue<Delegate> _queue;
private Thread _consumerService;
private static Object _sync = new Object();
public ProducerConsumer(Queue<Delegate> queue)
{
lock (_sync)
{
// Note: I would recommend that you don't even
// bother with taking in a queue. You should be able
// to just instantiate a new Queue<Delegate>()
// and use it when you Enqueue. There is nothing that
// you really need to pass into the constructor.
_queue = queue;
_ready = new ManualResetEvent(false);
_consumerService = new Thread(Run);
_consumerService.IsBackground = true;
_consumerService.Start();
}
}
public void Enqueue(Delegate value)
{
lock (_sync)
{
_queue.Enqueue(value);
_ready.Set();
}
}
// The consumer blocks until the producer puts something in the queue.
private void Run()
{
Delegate query;
try
{
while (true)
{
_ready.WaitOne();
lock (_sync)
{
if (_queue.Count > 0)
{
query = _queue.Dequeue();
query.DynamicInvoke(null);
}
else
{
_ready.Reset();
continue;
}
}
}
}
catch (ThreadInterruptedException)
{
_queue.Clear();
return;
}
}
public void Dispose()
{
lock (_sync)
{
if (_consumerService != null)
{
_consumerService.Interrupt();
}
}
}
}
I'm not exactly sure what you're trying to achieve with the wait function... I'm assuming you're trying to put some type of a limit to the number of items that can be queued. In that case simply throw an exception or return a failure signal when you have too many items in the queue, the client that is calling Enqueue will keep retrying until the queue can take more items. Taking an optimistic approach will save you a LOT of headaches and it simply helps you get rid of a lot of complex logic.
If you REALLY want to have the wait in there, then I can probably help you figure out a better approach. Let me know what are you trying to achieve with the wait and I'll help you out.
Note: I took this code from one of my projects, modified it a little and posted it here... there might be some minor syntax errors, but the logic should be correct.
UPDATE: Based on your comments I made some modifications: I added another ManualResetEvent to the class, so when you call BlockQueue() it gives you an event which you can wait on and sets a flag to stop the Enqueue function from queuing more elements. Once all the queries in the queue are serviced, the flag is set to true and the _wait event is set so whoever is waiting on it gets the signal.
public class ProducerConsumer : IDisposable
{
private bool _canEnqueue;
private ManualResetEvent _ready;
private Queue<Delegate> _queue;
private Thread _consumerService;
private static Object _sync = new Object();
private static ManualResetEvent _wait = new ManualResetEvent(false);
public ProducerConsumer()
{
lock (_sync)
{
_queue = new Queue<Delegate>();
_canEnqueue = true;
_ready = new ManualResetEvent(false);
_consumerService = new Thread(Run);
_consumerService.IsBackground = true;
_consumerService.Start();
}
}
public bool Enqueue(Delegate value)
{
lock (_sync)
{
// Only enqueue while the queue is not blocked
if (_canEnqueue)
{
_queue.Enqueue(value);
_ready.Set();
return true;
}
}
// Whoever is calling Enqueue should try again later.
return false;
}
// The consumer blocks until the producer puts something in the queue.
private void Run()
{
try
{
while (true)
{
// Wait for a query to be enqueued
_ready.WaitOne();
// Process the query
lock (_sync)
{
if (_queue.Count > 0)
{
Delegate query = _queue.Dequeue();
query.DynamicInvoke(null);
}
else
{
_canEnqueue = true;
_ready.Reset();
_wait.Set();
continue;
}
}
}
}
catch (ThreadInterruptedException)
{
_queue.Clear();
return;
}
}
// Block your queue from enqueuing, return null
// if the queue is already empty.
public ManualResetEvent BlockQueue()
{
lock(_sync)
{
if( _queue.Count > 0 )
{
_canEnqueue = false;
_wait.Reset();
}
else
{
// You need to tell the caller that they can't
// block your queue while it's empty. The caller
// should check if the result is null before calling
// WaitOne().
return null;
}
}
return _wait;
}
public void Dispose()
{
lock (_sync)
{
if (_consumerService != null)
{
_consumerService.Interrupt();
// Set wait when you're disposing the queue
// so that nobody is left with a lingering wait.
_wait.Set();
}
}
}
}
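A short usage sketch of the updated class; the null check mirrors the comment on BlockQueue above:
// Sketch: block the queue and wait until the consumer has drained it.
var pc = new ProducerConsumer();
pc.Enqueue(new Action(() => Console.WriteLine("query 1")));

ManualResetEvent drained = pc.BlockQueue();
if (drained != null)
{
    drained.WaitOne(); // returns once all queued queries have been serviced
}
// Enqueue is re-enabled by the consumer once the queue is empty.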