C# concurrent: Is it a good idea to use many AutoResetEvent?

Suppose there are many threads calling Do(), and only one worker thread handles the actual job.
void Do(Job job)
{
    concurrentQueue.Enqueue(job);
    // wait for job done
}

void workerThread()
{
    while (true)
    {
        Job job;
        if (concurrentQueue.TryDequeue(out job))
        {
            // do job
        }
    }
}
Do() should wait until the job is done before returning. So I wrote the following code:
class Task
{
    public Job job;
    public AutoResetEvent ev;
}

void Do(Job job)
{
    using (var ev = new AutoResetEvent(false))
    {
        concurrentQueue.Enqueue(new Task { job = job, ev = ev });
        ev.WaitOne();
    }
}

void workerThread()
{
    while (true)
    {
        Task task;
        if (concurrentQueue.TryDequeue(out task))
        {
            // do job
            task.ev.Set();
        }
    }
}
After some tests I found it works as expected. However, I'm not sure whether it's a good idea to allocate many AutoResetEvents, or whether there is a better way to accomplish this.

Since all clients must wait for a single thread to do the job, there is no real need for a queue. So I suggest using the Monitor class instead, specifically its Wait/Pulse functionality. It is a bit low-level and verbose, though.
class Worker<TResult> : IDisposable
{
    private readonly object _outerLock = new object();
    private readonly object _innerLock = new object();
    private Func<TResult> _currentJob;
    private TResult _currentResult;
    private Exception _currentException;
    private bool _disposed;

    public Worker()
    {
        var thread = new Thread(MainLoop);
        thread.IsBackground = true;
        thread.Start();
    }

    private void MainLoop()
    {
        lock (_innerLock)
        {
            while (true)
            {
                Monitor.Wait(_innerLock); // Wait for client requests
                if (_disposed) break;
                try
                {
                    _currentResult = _currentJob.Invoke();
                    _currentException = null;
                }
                catch (Exception ex)
                {
                    _currentException = ex;
                    _currentResult = default;
                }
                Monitor.Pulse(_innerLock); // Notify the waiting client that the job is done
            }
        } // We are done
    }

    public TResult DoWork(Func<TResult> job)
    {
        TResult result;
        Exception exception;
        lock (_outerLock) // Accept only one client at a time
        {
            lock (_innerLock) // Acquire inner lock
            {
                if (_disposed) throw new InvalidOperationException();
                _currentJob = job;
                Monitor.Pulse(_innerLock); // Notify worker thread about the new job
                Monitor.Wait(_innerLock); // Wait for worker thread to process the job
                result = _currentResult;
                exception = _currentException;
                // Clean up
                _currentJob = null;
                _currentResult = default;
                _currentException = null;
            }
        }
        // Throw the exception, if occurred, preserving the stack trace
        if (exception != null) ExceptionDispatchInfo.Capture(exception).Throw();
        return result;
    }

    public void Dispose()
    {
        lock (_outerLock)
        {
            lock (_innerLock)
            {
                _disposed = true;
                Monitor.Pulse(_innerLock); // Notify worker thread to exit loop
            }
        }
    }
}
Usage example:
var worker = new Worker<int>();
int result = worker.DoWork(() => 1); // Accepts a function as argument
Console.WriteLine($"Result: {result}");
worker.Dispose();
Output:
Result: 1
Update: The previous solution is not await-friendly, so here is one that allows proper awaiting. It uses a TaskCompletionSource for each job, stored in a BlockingCollection.
class Worker<TResult> : IDisposable
{
    private BlockingCollection<TaskCompletionSource<TResult>> _blockingCollection
        = new BlockingCollection<TaskCompletionSource<TResult>>();

    public Worker()
    {
        var thread = new Thread(MainLoop);
        thread.IsBackground = true;
        thread.Start();
    }

    private void MainLoop()
    {
        foreach (var tcs in _blockingCollection.GetConsumingEnumerable())
        {
            var job = (Func<TResult>)tcs.Task.AsyncState;
            try
            {
                var result = job.Invoke();
                tcs.SetResult(result);
            }
            catch (Exception ex)
            {
                tcs.TrySetException(ex);
            }
        }
    }

    public Task<TResult> DoWorkAsync(Func<TResult> job)
    {
        var tcs = new TaskCompletionSource<TResult>(job,
            TaskCreationOptions.RunContinuationsAsynchronously);
        _blockingCollection.Add(tcs);
        return tcs.Task;
    }

    public TResult DoWork(Func<TResult> job) // Synchronous call
    {
        var task = DoWorkAsync(job);
        try { task.Wait(); } catch { } // Swallow the AggregateException
        // Throw the original exception, if occurred, preserving the stack trace
        if (task.IsFaulted) ExceptionDispatchInfo.Capture(task.Exception.InnerException).Throw();
        return task.Result;
    }

    public void Dispose()
    {
        _blockingCollection.CompleteAdding();
    }
}
Usage example:
var worker = new Worker<int>();
int result = await worker.DoWorkAsync(() => 1); // Accepts a function as argument
Console.WriteLine($"Result: {result}");
worker.Dispose();
Output:
Result: 1

From a synchronization perspective this is working fine.
But it seems useless to do it this way. If you want to execute jobs one after the other you can just use a lock:
lock (lockObject) {
    RunJob();
}
What is your intention with this code?
There is also an efficiency question, because each task creates an OS event and waits on it. If you use the more modern TaskCompletionSource, a synchronous wait on its Task uses much the same thing under the hood. You can wait asynchronously (await myTCS.Task;) to possibly increase efficiency a bit; of course, this infects the entire call stack with async/await. If this is a fairly low-volume operation, you won't gain much.
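For illustration, here is a minimal sketch of that idea, with Action standing in for the question's Job type; the class and member names are hypothetical, not from the question:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class AwaitableWorker
{
    // Each queued job carries its own TaskCompletionSource, so callers can
    // await completion instead of blocking on an OS event.
    private readonly ConcurrentQueue<(Action Job, TaskCompletionSource<bool> Tcs)> _queue
        = new ConcurrentQueue<(Action, TaskCompletionSource<bool>)>();

    public AwaitableWorker() =>
        new Thread(WorkerLoop) { IsBackground = true }.Start();

    public Task DoAsync(Action job)
    {
        var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        _queue.Enqueue((job, tcs));
        return tcs.Task; // callers: await worker.DoAsync(...)
    }

    private void WorkerLoop()
    {
        while (true)
        {
            if (_queue.TryDequeue(out var item))
            {
                try { item.Job(); item.Tcs.SetResult(true); }
                catch (Exception ex) { item.Tcs.SetException(ex); }
            }
            else Thread.Sleep(1); // avoid a hard spin when idle
        }
    }
}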

In general I think this would work, although when you say "many" threads are calling Do(), it might not scale well: suspended threads use resources.
Another problem with this code is that at idle times you will have a "hard loop" in "workerThread", which will cause high CPU utilization. You may want to add this code to "workerThread":
if (concurrentQueue.IsEmpty) Thread.Sleep(1);
You might also want to introduce a timeout on the WaitOne call to avoid a logjam.
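For example (a sketch; the 30-second timeout is an arbitrary illustrative value):
// Hypothetical: fail loudly instead of blocking forever if the worker never signals.
if (!ev.WaitOne(TimeSpan.FromSeconds(30)))
    throw new TimeoutException("The worker did not complete the job in time.");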

Related

Task counter C#, Interlocked

I would like to run tasks in parallel, with no more than 10 instances running at a given time.
This is the code I have so far:
private void Listen()
{
    while (true)
    {
        var context = listener.GetContext();
        var task = Task.Run(() => HandleContextAsync(context));
        Interlocked.Increment(ref countTask);
        if (countTask > 10)
        {
            //I save tasks in the collection
        }
        else
        {
            task.ContinueWith(delegate { Interlocked.Decrement(ref countTask); }); //I accomplish the task and reduce the counter
        }
    }
}
I would suggest that you use a Parallel loop; for example:
Parallel.For(1, 10, a =>
{
    var context = listener.GetContext();
    ...
});
That will start a defined number of tasks without you needing to manage the process yourself.
If you want to continually execute code in parallel, with up to 10 instances at a time, this may be worth considering:
private void Listen()
{
    var options = new ParallelOptions() { MaxDegreeOfParallelism = 10 };
    Parallel.For(1, long.MaxValue - 1, options, (i) =>
    {
        var context = listener.GetContext();
        HandleContextAsync(context);
    });
}
Basically, it will run the code continually (well roughly long.MaxValue times). MaxDegreeOfParallelism ensures that it runs only 10 'instances' of the code at a time.
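As an aside, another common way to cap concurrency at 10, sketched here under the assumption that listener and HandleContextAsync exist as in the question, is a SemaphoreSlim throttle:
// Sketch only: the throttle caps the number of handlers in flight at 10.
private readonly SemaphoreSlim _throttle = new SemaphoreSlim(10);

private void Listen()
{
    while (true)
    {
        var context = listener.GetContext();
        _throttle.Wait(); // blocks dispatch once 10 handlers are running
        Task.Run(() =>
        {
            try { HandleContextAsync(context); }
            finally { _throttle.Release(); }
        });
    }
}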
I'm assuming that the result from GetContext is not created by you, so it's probably not useful to use a Parallel.For when you don't know how many times to run, or don't have all the contexts to handle right away.
So, probably the best way to resolve this would be by implementing your own TaskScheduler. This way you can add more tasks to be resolved on demand, with a fixed concurrency level.
Based on the example from the Microsoft Docs website, you can already achieve this.
I made an example program with some changes to the LimitedConcurrencyLevelTaskScheduler from the website.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

namespace parallel
{
    class Program
    {
        private static Random Rand = new Random();

        static void Main(string[] args)
        {
            var ts = new LimitedConcurrencyLevelTaskScheduler(10);
            var taskFactory = new TaskFactory(ts);
            while (true)
            {
                var context = GetContext(ts);
                if (context.Equals("Q", StringComparison.OrdinalIgnoreCase))
                    break;
                taskFactory.StartNew(() => HandleContextAsync(context));
            }
            Console.WriteLine("Waiting...");
            while (ts.CountRunning != 0)
            {
                Console.WriteLine("Now running {0}x tasks with {1}x queued.", ts.CountRunning, ts.CountQueued);
                Thread.Yield();
                Thread.Sleep(100);
            }
        }

        private static void HandleContextAsync(string context)
        {
            // delays for 1-10 seconds to make the example easier to understand
            Thread.Sleep(Rand.Next(1000, 10000));
            Console.WriteLine("Context: {0}, from thread: {1}", context, Thread.CurrentThread.ManagedThreadId);
        }

        private static string GetContext(LimitedConcurrencyLevelTaskScheduler ts)
        {
            Console.WriteLine("Now running {0}x tasks with {1}x queued.", ts.CountRunning, ts.CountQueued);
            return Console.ReadLine();
        }
    }

    // Provides a task scheduler that ensures a maximum concurrency level while
    // running on top of the thread pool.
    public class LimitedConcurrencyLevelTaskScheduler : TaskScheduler
    {
        // Indicates whether the current thread is processing work items.
        [ThreadStatic]
        private static bool _currentThreadIsProcessingItems;

        // The list of tasks to be executed
        private readonly LinkedList<Task> _tasks = new LinkedList<Task>(); // protected by lock(_tasks)

        public int CountRunning => _nowRunning;

        public int CountQueued
        {
            get
            {
                lock (_tasks)
                {
                    return _tasks.Count;
                }
            }
        }

        // The maximum concurrency level allowed by this scheduler.
        private readonly int _maxDegreeOfParallelism;

        // Indicates whether the scheduler is currently processing work items.
        private volatile int _delegatesQueuedOrRunning = 0;
        private volatile int _nowRunning;

        // Creates a new instance with the specified degree of parallelism.
        public LimitedConcurrencyLevelTaskScheduler(int maxDegreeOfParallelism)
        {
            if (maxDegreeOfParallelism < 1)
                throw new ArgumentOutOfRangeException("maxDegreeOfParallelism");
            _maxDegreeOfParallelism = maxDegreeOfParallelism;
        }

        // Queues a task to the scheduler.
        protected sealed override void QueueTask(Task task)
        {
            // Add the task to the list of tasks to be processed. If there aren't enough
            // delegates currently queued or running to process tasks, schedule another.
            lock (_tasks)
            {
                _tasks.AddLast(task);
                if (_delegatesQueuedOrRunning < _maxDegreeOfParallelism)
                {
                    Interlocked.Increment(ref _delegatesQueuedOrRunning);
                    NotifyThreadPoolOfPendingWork();
                }
            }
        }

        // Inform the ThreadPool that there's work to be executed for this scheduler.
        private void NotifyThreadPoolOfPendingWork()
        {
            ThreadPool.UnsafeQueueUserWorkItem(_ =>
            {
                // Note that the current thread is now processing work items.
                // This is necessary to enable inlining of tasks into this thread.
                _currentThreadIsProcessingItems = true;
                try
                {
                    // Process all available items in the queue.
                    while (true)
                    {
                        Task item;
                        lock (_tasks)
                        {
                            // When there are no more items to be processed,
                            // note that we're done processing, and get out.
                            if (_tasks.Count == 0)
                            {
                                Interlocked.Decrement(ref _delegatesQueuedOrRunning);
                                break;
                            }

                            // Get the next item from the queue
                            item = _tasks.First.Value;
                            _tasks.RemoveFirst();
                        }

                        // Execute the task we pulled out of the queue. Decrement
                        // in a finally block so the running count stays accurate
                        // even if TryExecuteTask returns false or throws.
                        Interlocked.Increment(ref _nowRunning);
                        try
                        {
                            base.TryExecuteTask(item);
                        }
                        finally
                        {
                            Interlocked.Decrement(ref _nowRunning);
                        }
                    }
                }
                // We're done processing items on the current thread
                finally { _currentThreadIsProcessingItems = false; }
            }, null);
        }

        // Attempts to execute the specified task on the current thread.
        protected sealed override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        {
            // If this thread isn't already processing a task, we don't support inlining
            if (!_currentThreadIsProcessingItems) return false;

            // If the task was previously queued, remove it from the queue
            if (taskWasPreviouslyQueued)
                // Try to run the task.
                if (TryDequeue(task))
                    return base.TryExecuteTask(task);
                else
                    return false;
            else
                return base.TryExecuteTask(task);
        }

        // Attempt to remove a previously scheduled task from the scheduler.
        protected sealed override bool TryDequeue(Task task)
        {
            lock (_tasks) return _tasks.Remove(task);
        }

        // Gets the maximum concurrency level supported by this scheduler.
        public sealed override int MaximumConcurrencyLevel { get { return _maxDegreeOfParallelism; } }

        // Gets an enumerable of the tasks currently scheduled on this scheduler.
        protected sealed override IEnumerable<Task> GetScheduledTasks()
        {
            bool lockTaken = false;
            try
            {
                Monitor.TryEnter(_tasks, ref lockTaken);
                if (lockTaken) return _tasks;
                else throw new NotSupportedException();
            }
            finally
            {
                if (lockTaken) Monitor.Exit(_tasks);
            }
        }
    }
}

How to create a FIFO/strong semaphore

I need to code my own FIFO/strong semaphore in C#, using a semaphore class of my own as a base. I found this example, but it's not quite right since I'm not supposed to be using Monitor.Enter/Exit yet.
These are the methods for my regular semaphore, and I was wondering if there was a simple way to adapt it to be FIFO.
public virtual void Acquire()
{
    lock (this)
    {
        while (uintTokens == 0)
        {
            Monitor.Wait(this);
        }
        uintTokens--;
    }
}

public virtual void Release(uint tokens = 1)
{
    lock (this)
    {
        uintTokens += tokens;
        Monitor.PulseAll(this);
    }
}
SemaphoreSlim gives us a good starting place, so we'll begin by wrapping one of those in a new class, and directing everything but the wait method to that semaphore.
To get a queue like behavior we'll want a queue object, and to make sure it's safe in the face of multithreaded access, we'll use a ConcurrentQueue.
In this queue we'll put TaskCompletionSource objects. When we want to have something start waiting, it can create a TCS, add it to the queue, and then tell the semaphore to asynchronously pop the next item off the queue and mark it as "completed" when the wait finishes. We know there will always be at most as many continuations as there are items in the queue.
Then we just wait on the Task from the TCS.
We can also trivially create a WaitAsync method that returns a task, by just returning it instead of waiting on it.
public class SemaphoreQueue
{
    private SemaphoreSlim semaphore;
    private ConcurrentQueue<TaskCompletionSource<bool>> queue =
        new ConcurrentQueue<TaskCompletionSource<bool>>();

    public SemaphoreQueue(int initialCount)
    {
        semaphore = new SemaphoreSlim(initialCount);
    }

    public SemaphoreQueue(int initialCount, int maxCount)
    {
        semaphore = new SemaphoreSlim(initialCount, maxCount);
    }

    public void Wait()
    {
        WaitAsync().Wait();
    }

    public Task WaitAsync()
    {
        var tcs = new TaskCompletionSource<bool>();
        queue.Enqueue(tcs);
        semaphore.WaitAsync().ContinueWith(t =>
        {
            TaskCompletionSource<bool> popped;
            if (queue.TryDequeue(out popped))
                popped.SetResult(true);
        });
        return tcs.Task;
    }

    public void Release()
    {
        semaphore.Release();
    }
}
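A possible usage sketch (the names and the work inside the gate are illustrative):
var gate = new SemaphoreQueue(1); // one holder at a time, served in FIFO order

async Task UseSharedResourceAsync()
{
    await gate.WaitAsync();
    try
    {
        // ... exclusive work ...
    }
    finally
    {
        gate.Release();
    }
}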
I have created a FifoSemaphore class and I am successfully using it in my solutions. Current limitation is that it behaves like a Semaphore(1, 1).
public class FifoSemaphore
{
    private readonly object lockObj = new object();
    private List<Semaphore> WaitingQueue = new List<Semaphore>();

    private Semaphore RequestNewSemaphore()
    {
        lock (lockObj)
        {
            Semaphore newSemaphore = new Semaphore(1, 1);
            newSemaphore.WaitOne();
            return newSemaphore;
        }
    }

    #region Public Functions

    public void Release()
    {
        lock (lockObj)
        {
            WaitingQueue.RemoveAt(0);
            if (WaitingQueue.Count > 0)
            {
                WaitingQueue[0].Release();
            }
        }
    }

    public void WaitOne()
    {
        Semaphore semaphore = RequestNewSemaphore();
        lock (lockObj)
        {
            WaitingQueue.Add(semaphore);
            semaphore.Release();
            if (WaitingQueue.Count > 1)
            {
                semaphore.WaitOne();
            }
        }
        semaphore.WaitOne();
    }

    #endregion
}
Usage is just like with a regular semaphore:
FifoSemaphore fifoSemaphore = new FifoSemaphore();
On each thread:
fifoSemaphore.WaitOne();
//do work
fifoSemaphore.Release();

AutoResetEvent Reset immediately after Set

Consider the following pattern:
private AutoResetEvent signal = new AutoResetEvent(false);

private void Work()
{
    while (true)
    {
        Thread.Sleep(5000);
        signal.Set();
        //has a waiting thread definitely been signaled by now?
        signal.Reset();
    }
}

public void WaitForNextEvent()
{
    signal.WaitOne();
}
The purpose of this pattern is to allow external consumers to wait for a certain event (e.g. - a message arriving). WaitForNextEvent is not called from within the class.
To give an example that should be familiar, consider System.Diagnostics.Process. It exposes an Exited event, but it also exposes a WaitForExit method, which allows the caller to wait synchronously until the process exits. This is what I am trying to achieve here.
The reason I need signal.Reset() is that if a thread calls WaitForNextEvent after signal.Set() has already been called (or in other words, if .Set was called when no threads were waiting), it returns immediately, as the event has already been previously signaled.
The question
Is it guaranteed that a thread calling WaitForNextEvent() will be signaled before signal.Reset() is called? If not, what are other solutions for implementing a WaitFor method?
Instead of using AutoResetEvent or ManualResetEvent, use this:
public sealed class Signaller
{
    public void PulseAll()
    {
        lock (_lock)
        {
            Monitor.PulseAll(_lock);
        }
    }

    public void Pulse()
    {
        lock (_lock)
        {
            Monitor.Pulse(_lock);
        }
    }

    public void Wait()
    {
        Wait(Timeout.Infinite);
    }

    public bool Wait(int timeoutMilliseconds)
    {
        lock (_lock)
        {
            return Monitor.Wait(_lock, timeoutMilliseconds);
        }
    }

    private readonly object _lock = new object();
}
Then change your code like so:
private Signaller signal = new Signaller();

private void Work()
{
    while (true)
    {
        Thread.Sleep(5000);
        signal.Pulse(); // Or signal.PulseAll() to signal ALL waiting threads.
    }
}

public void WaitForNextEvent()
{
    signal.Wait();
}
There is no guarantee. This:
AutoResetEvent flag = new AutoResetEvent(false);

new Thread(() =>
{
    Thread.CurrentThread.Priority = ThreadPriority.Lowest;
    Console.WriteLine("Work Item Started");
    flag.WaitOne();
    Console.WriteLine("Work Item Executed");
}).Start();

// For fast systems, you can help by occupying processors.
for (int ix = 0; ix < 2; ++ix)
{
    new Thread(() => { while (true) ; }).Start();
}

Thread.Sleep(1000);
Console.WriteLine("Sleeped");
flag.Set();
// Uncomment here to make it work
//Thread.Sleep(1000);
flag.Reset();
Console.WriteLine("Finished");
Console.ReadLine();
won't print "Work Item Executed" on my system. If I add a Thread.Sleep between the Set and the Reset, it prints it. Note that this is very processor dependent, so you might have to create tons of threads to "fill" the CPUs. On my PC it's reproducible 50% of the time :-)
For the Exited:
readonly object mylock = new object();
then somewhere:
lock (mylock)
{
    // Your code goes here
}
and the WaitForExit:
void WaitForExit()
{
    lock (mylock) ;
    // exited
}

bool IsExited()
{
    bool lockTaken = false;
    try
    {
        Monitor.TryEnter(mylock, ref lockTaken);
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(mylock);
        }
    }
    return lockTaken;
}
Note that the lock construct isn't compatible with async/await (nor are most of the other locking primitives in .NET).
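SemaphoreSlim is the usual await-friendly exception; a minimal sketch:
// Sketch: SemaphoreSlim(1, 1) used as an async-compatible mutex.
private readonly SemaphoreSlim _mutex = new SemaphoreSlim(1, 1);

async Task DoExclusiveAsync()
{
    await _mutex.WaitAsync();
    try
    {
        // ... critical section ...
    }
    finally
    {
        _mutex.Release();
    }
}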
I would use TaskCompletionSources:
private volatile TaskCompletionSource<int> signal = new TaskCompletionSource<int>();

private void Work()
{
    while (true)
    {
        Thread.Sleep(5000);
        var oldSignal = signal;
        signal = new TaskCompletionSource<int>();
        //has a waiting thread definitely been signaled by now?
        oldSignal.SetResult(0);
    }
}

public void WaitForNextEvent()
{
    signal.Task.Wait();
}
By the time that the code calls SetResult, no new code entering WaitForNextEvent can obtain the TaskCompletionSource that is being signalled.
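If you prefer the swap to be explicit rather than relying on the volatile write, an Interlocked variant might look like this (a sketch, not the code above; RunContinuationsAsynchronously is an extra safeguard not in the original):
// The field is left non-volatile so it can be passed by ref to Interlocked.Exchange.
private TaskCompletionSource<int> signal =
    new TaskCompletionSource<int>(TaskCreationOptions.RunContinuationsAsynchronously);

private void SignalWaiters()
{
    // Atomically swap in a fresh source before completing the old one, so no
    // caller can obtain an already-completed TaskCompletionSource.
    var old = Interlocked.Exchange(ref signal,
        new TaskCompletionSource<int>(TaskCreationOptions.RunContinuationsAsynchronously));
    old.TrySetResult(0);
}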
I believe it is not guaranteed.
However, I don't understand your logic flow. If your main thread sets the signal, why should it wait until that signal reaches its destination? Wouldn't it be better to continue the "after signal set" logic in the thread that was waiting?
If you cannot do that, I recommend using a second WaitHandle to signal the first thread that the second one has received the signal, as sketched below. But I cannot see any pros in such a strategy.
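A sketch of that two-handle handshake (names hypothetical; it assumes there is always exactly one waiter, otherwise the producer blocks on the acknowledgement):
private readonly AutoResetEvent _signal = new AutoResetEvent(false);
private readonly AutoResetEvent _ack = new AutoResetEvent(false);

private void Work()
{
    while (true)
    {
        Thread.Sleep(5000);
        _signal.Set();  // wake one waiter
        _ack.WaitOne(); // block until that waiter confirms receipt
    }
}

public void WaitForNextEvent()
{
    _signal.WaitOne();
    _ack.Set(); // confirm receipt back to the producer
}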

Synchronization across threads / atomic checks?

I need to create a method invoker that any thread (Thread B for example's sake) can call, which will execute on the main executing thread (Thread A) at a specific given point in its execution.
Example usage would be as follows:
static Invoker Invoker = new Invoker();

static void ThreadA()
{
    new Thread(ThreadB).Start();
    Thread.Sleep(...); // Hypothetic Alpha
    Invoker.Invoke(delegate { Console.WriteLine("Action"); }, true);
    Console.WriteLine("Done");
    Console.ReadLine();
}

static void ThreadB()
{
    Thread.Sleep(...); // Hypothetic Beta
    Invoker.Execute();
}
The Invoker class looks like this:
public class Invoker
{
    private Queue<Action> Actions { get; set; }

    public Invoker()
    {
        this.Actions = new Queue<Action>();
    }

    public void Execute()
    {
        while (this.Actions.Count > 0)
        {
            this.Actions.Dequeue()();
        }
    }

    public void Invoke(Action action, bool block = true)
    {
        ManualResetEvent done = new ManualResetEvent(!block);
        this.Actions.Enqueue(delegate
        {
            action();
            if (block) done.Set();
        });
        if (block)
        {
            done.WaitOne();
        }
    }
}
This works fine in most cases, although it won't if, for any reason, the execution (and therefore the Set) is done before the WaitOne, in which case it will just freeze (it allows for the thread to proceed, then blocks). That could be reproduced if Alpha >> Beta.
I can use booleans and whatnot, but I'm never getting a real atomic safety here. I tried some fixes, but they wouldn't work in the case where Beta >> Alpha.
I also thought of locking around both the Invoker.Execute and Invoker.Invoke methods so that we are guaranteed that the execution does not occur between enqueuing and waiting. However, the problem is that the lock also encompasses the WaitOne, and therefore never finishes (deadlock).
How should I go about getting absolute atomic safety in this paradigm?
Note: It really is a requirement that I work with this design, from external dependencies. So changing design is not a real option.
EDIT: I did forget to mention that I want a blocking behaviour (based on bool block) until the delegate is executed on the Invoke call.
Use a Semaphore(Slim) instead of the ManualResetEvent.
Create a semaphore with a maximum count of 1, call WaitOne() in the calling thread, and call Release() in the delegate.
If you've already called Release(), WaitOne() should return immediately.
Make sure to Dispose() it when you're done, preferably in a using block.
If block is false, you shouldn't create it in the first place (although for SemaphoreSlim, that's not so bad).
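A minimal sketch of that suggestion applied to the question's Invoke method (ignoring the disposal race that the next answer deals with):
public void Invoke(Action action, bool block = true)
{
    if (!block)
    {
        this.Actions.Enqueue(action);
        return;
    }
    using (var done = new SemaphoreSlim(0, 1)) // count 0..1; Release lets the waiter through
    {
        this.Actions.Enqueue(delegate
        {
            action();
            done.Release(); // if this runs before the Wait below, Wait returns immediately
        });
        done.Wait();
    }
}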
You can use my technique:
public void BlockingInvoke(Action action)
{
    // C# does not allow 'volatile' on locals, so these are plain locals;
    // the explicit Thread.MemoryBarrier calls provide the intended ordering.
    bool isCompleted = false;
    bool isWaiting = false;
    ManualResetEventSlim waiter = new ManualResetEventSlim();
    this.Actions.Enqueue(delegate
    {
        action();
        isCompleted = true;
        Thread.MemoryBarrier();
        if (!isWaiting)
            waiter.Dispose();
        else
            waiter.Set();
    });
    isWaiting = true;
    Thread.MemoryBarrier();
    if (!isCompleted)
        waiter.Wait();
    waiter.Dispose();
}
Untested
I'm answering only to show the implementation SLaks described and my solution to ensure proper and unique disposal with locks. It's open to improvement and criticism, but it actually works.
public class Invoker
{
    private Queue<Action> Actions { get; set; }

    public Invoker()
    {
        this.Actions = new Queue<Action>();
    }

    public void Execute()
    {
        while (this.Actions.Count > 0)
        {
            this.Actions.Dequeue()();
        }
    }

    public void Invoke(Action action, bool block = true)
    {
        if (block)
        {
            // Initial count 0: Wait blocks until the delegate calls Release.
            SemaphoreSlim semaphore = new SemaphoreSlim(0, 1);
            bool disposed = false;
            this.Actions.Enqueue(delegate
            {
                action();
                semaphore.Release();
                lock (semaphore)
                {
                    semaphore.Dispose();
                    disposed = true;
                }
            });
            lock (semaphore)
            {
                if (!disposed)
                {
                    semaphore.Wait();
                    semaphore.Dispose();
                }
            }
        }
        else
        {
            this.Actions.Enqueue(action);
        }
    }
}

C# once the main thread sleeps, all threads stop

I have a class running the Producer-Consumer model like this:
public class SyncEvents
{
    public bool waiting;

    public SyncEvents()
    {
        waiting = true;
    }
}

public class Producer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private Object _waitAck;

    public Producer(Queue<Delegate> q, SyncEvents sync, Object obj)
    {
        _queue = q;
        _sync = sync;
        _waitAck = obj;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                Monitor.Wait(_sync, 0);
                if (_queue.Count > 0)
                {
                    _sync.waiting = false;
                }
                else
                {
                    _sync.waiting = true;
                    lock (_waitAck)
                    {
                        Monitor.Pulse(_waitAck);
                    }
                }
                Monitor.Pulse(_sync);
            }
        }
    }
}

public class Consumer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private int count = 0;

    public Consumer(Queue<Delegate> q, SyncEvents sync)
    {
        _queue = q;
        _sync = sync;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                while (_queue.Count == 0)
                {
                    Monitor.Wait(_sync);
                }
                Delegate query = _queue.Dequeue();
                query.DynamicInvoke(null);
                count++;
                Monitor.Pulse(_sync);
            }
        }
    }
}

/// <summary>
/// Act as a consumer to the queries produced by the DataGridViewCustomCell
/// </summary>
public class QueryThread
{
    private SyncEvents _syncEvents = new SyncEvents();
    private Object waitAck = new Object();
    private Queue<Delegate> _queryQueue = new Queue<Delegate>();
    Producer queryProducer;
    Consumer queryConsumer;

    public QueryThread()
    {
        queryProducer = new Producer(_queryQueue, _syncEvents, waitAck);
        queryConsumer = new Consumer(_queryQueue, _syncEvents);
        Thread producerThread = new Thread(queryProducer.ThreadRun);
        Thread consumerThread = new Thread(queryConsumer.ThreadRun);
        producerThread.IsBackground = true;
        consumerThread.IsBackground = true;
        producerThread.Start();
        consumerThread.Start();
    }

    public bool isQueueEmpty()
    {
        return _syncEvents.waiting;
    }

    public void wait()
    {
        lock (waitAck)
        {
            while (_queryQueue.Count > 0)
            {
                Monitor.Wait(waitAck);
            }
        }
    }

    public void Enqueue(Delegate item)
    {
        _queryQueue.Enqueue(item);
    }
}
The code runs smoothly except for the wait() function.
In some cases I want to wait until all the functions in the queue have finished running, so I made the wait() function.
The producer will fire the waitAck pulse at a suitable time.
However, when the line "Monitor.Wait(waitAck);" is run in the wait() function, all threads stop, including the producer and consumer threads.
Why would this happen, and how can I solve it? Thanks!
It seems very unlikely that all the threads will actually stop, although I should point out that to avoid false wake-ups you should probably have a while loop instead of an if statement:
lock (waitAck)
{
    while (queryProducer.secondQueue.Count > 0)
    {
        Monitor.Wait(waitAck);
    }
}
The fact that you're calling Monitor.Wait means that waitAck should be released so it shouldn't prevent the consumer threads from locking...
Could you give more information about the way in which the producer/consumer threads are "stopping"? Does it look like they've just deadlocked?
Is your producer using Pulse or PulseAll? You've got an extra waiting thread now, so if you only use Pulse it's only going to release a single thread... it's hard to see whether or not that's a problem without the details of your Producer and Consumer classes.
If you could show a short but complete program to demonstrate the problem, that would help.
EDIT: Okay, now you've posted the code I can see a number of issues:
Having so many public variables is a recipe for disaster. Your classes should encapsulate their functionality so that other code doesn't have to go poking around for implementation bits and pieces. (For example, your calling code here really shouldn't have access to the queue.)
You're adding items directly to the second queue, which means you can't efficiently wake up the producer to add them to the first queue. Why do you even have multiple queues?
You're always waiting on _sync in the producer thread... why? What's going to notify it to start with? Generally speaking the producer thread shouldn't have to wait, unless you have a bounded buffer
You have a static variable (_waitAck) which is being overwritten every time you create a new instance. That's a bad idea.
You also haven't shown your SyncEvents class - is that meant to be doing anything interesting?
To be honest, it seems like you've got quite a strange design - you may well be best starting again from scratch. Try to encapsulate the whole producer/consumer queue in a single class, which has Produce and Consume methods, as well as WaitForEmpty (or something like that). I think you'll find the synchronization logic a lot easier that way.
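For example, a minimal sketch of such an encapsulated class (names are illustrative and error handling is omitted):
using System;
using System.Collections.Generic;
using System.Threading;

public class WorkQueue
{
    private readonly Queue<Action> _queue = new Queue<Action>();
    private readonly object _sync = new object();
    private int _pending; // queued + currently executing items

    public WorkQueue() =>
        new Thread(ConsumeLoop) { IsBackground = true }.Start();

    public void Produce(Action work)
    {
        lock (_sync)
        {
            _queue.Enqueue(work);
            _pending++;
            Monitor.PulseAll(_sync); // wake the consumer
        }
    }

    public void WaitForEmpty()
    {
        lock (_sync)
            while (_pending > 0)
                Monitor.Wait(_sync); // lock is released while waiting
    }

    private void ConsumeLoop()
    {
        while (true)
        {
            Action work;
            lock (_sync)
            {
                while (_queue.Count == 0)
                    Monitor.Wait(_sync);
                work = _queue.Dequeue();
            }
            work(); // execute outside the lock
            lock (_sync)
            {
                _pending--;
                Monitor.PulseAll(_sync); // wakes WaitForEmpty (and the consumer check)
            }
        }
    }
}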
Here is my take on your code:
public class ProducerConsumer : IDisposable
{
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();

    public ProducerConsumer(Queue<Delegate> queue)
    {
        lock (_sync)
        {
            // Note: I would recommend that you don't even
            // bother with taking in a queue. You should be able
            // to just instantiate a new Queue<Delegate>()
            // and use it when you Enqueue. There is nothing that
            // you really need to pass into the constructor.
            _queue = queue;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public void Enqueue(Delegate value)
    {
        lock (_sync)
        {
            _queue.Enqueue(value);
            _ready.Set();
        }
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        Delegate query;
        try
        {
            while (true)
            {
                _ready.WaitOne();
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _ready.Reset();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
            }
        }
    }
}
I'm not exactly sure what you're trying to achieve with the wait function... I'm assuming you're trying to put some type of limit on the number of items that can be queued. In that case, simply throw an exception or return a failure signal when you have too many items in the queue; the client calling Enqueue will keep retrying until the queue can take more items. Taking an optimistic approach will save you a LOT of headaches and simply helps you get rid of a lot of complex logic.
If you REALLY want to have the wait in there, then I can probably help you figure out a better approach. Let me know what are you trying to achieve with the wait and I'll help you out.
Note: I took this code from one of my projects, modified it a little and posted it here... there might be some minor syntax errors, but the logic should be correct.
UPDATE: Based on your comments I made some modifications: I added another ManualResetEvent to the class, so when you call BlockQueue() it gives you an event which you can wait on and sets a flag to stop the Enqueue function from queuing more elements. Once all the queries in the queue are serviced, the flag is set to true and the _wait event is set so whoever is waiting on it gets the signal.
public class ProducerConsumer : IDisposable
{
    private bool _canEnqueue;
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();
    private static ManualResetEvent _wait = new ManualResetEvent(false);

    public ProducerConsumer()
    {
        lock (_sync)
        {
            _queue = new Queue<Delegate>();
            _canEnqueue = true;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public bool Enqueue(Delegate value)
    {
        lock (_sync)
        {
            // Don't allow anybody to enqueue
            if (_canEnqueue)
            {
                _queue.Enqueue(value);
                _ready.Set();
                return true;
            }
        }
        // Whoever is calling Enqueue should try again later.
        return false;
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        try
        {
            while (true)
            {
                // Wait for a query to be enqueued
                _ready.WaitOne();

                // Process the query
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        Delegate query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _canEnqueue = true;
                        _ready.Reset();
                        _wait.Set();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    // Block your queue from enqueuing, return null
    // if the queue is already empty.
    public ManualResetEvent BlockQueue()
    {
        lock (_sync)
        {
            if (_queue.Count > 0)
            {
                _canEnqueue = false;
                _wait.Reset();
            }
            else
            {
                // You need to tell the caller that they can't
                // block your queue while it's empty. The caller
                // should check if the result is null before calling
                // WaitOne().
                return null;
            }
        }
        return _wait;
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
                // Set wait when you're disposing the queue
                // so that nobody is left with a lingering wait.
                _wait.Set();
            }
        }
    }
}
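A possible usage sketch of the optimistic Enqueue/BlockQueue pair (the retry interval is arbitrary):
var pc = new ProducerConsumer();

// Keep retrying while the queue is blocked.
while (!pc.Enqueue(new Action(() => Console.WriteLine("query"))))
    Thread.Sleep(10);

// Stop further enqueuing and wait until the queue drains.
var drained = pc.BlockQueue();
drained?.WaitOne(); // null means the queue was already empty
pc.Dispose();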
