Accessing a singleton object from another thread - C#

I call a service method using
ThreadPool.QueueUserWorkItem(o => service.Method(arg1, arg2));
The service has an object 'loggingService' that I got using Spring.Net:
private readonly ILoggingService loggingService = ObjectBuilder.GetObjectByName("LoggingService");
The 'LoggingService' class is a singleton. It writes log info to log.txt.
When I try to call loggingService.Info("test") in this service method, I get an exception: the file is being used by another process.
How can I access the loggingService?

Your singleton is apparently per-thread.
You will need some way of passing the LoggingService across threads.
For example, you could set service.loggingService in the original thread.
Alternatively, you might be able to configure Spring.Net to make it a non-thread-local singleton.
Note that your LoggingService must be thread-safe, or you'll get strange errors at runtime.
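For example, a minimal sketch of the first suggestion (assuming service.Method can take the logger as an extra argument; this is illustrative, not the poster's actual code): resolve the singleton once on the original thread and hand the same instance to the worker.

// Resolve the singleton once, on the original thread.
ILoggingService logger = ObjectBuilder.GetObjectByName("LoggingService");

// Pass the same instance into the pooled work item instead of
// re-resolving it on the worker thread.
ThreadPool.QueueUserWorkItem(o => service.Method(arg1, arg2, logger));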

I had a similar issue while writing a client-side application that used a bunch of threads.
Basically, you want your LoggingService to keep an internal queue (with access controlled by a lock), and every time you call the log method you only append the message to this queue. At the end of the log method, check whether the queue is currently being written to a file and, if not, start writing.
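A minimal sketch of that idea (the names and the file-writing part are illustrative, not the poster's actual code):

public class LoggingService : ILoggingService
{
    private readonly Queue<string> queue = new Queue<string>();
    private bool writing; // true while some thread is flushing to the file

    public void Info(string message)
    {
        bool startWriting = false;
        lock (queue)
        {
            queue.Enqueue(message);
            if (!writing)
            {
                writing = true;       // claim the writer role
                startWriting = true;
            }
        }
        if (startWriting)
            Flush();
    }

    private void Flush()
    {
        while (true)
        {
            string entry;
            lock (queue)
            {
                if (queue.Count == 0) { writing = false; return; }
                entry = queue.Dequeue();
            }
            // Only one thread is ever here, so the file is never contended.
            File.AppendAllText("log.txt", entry + Environment.NewLine);
        }
    }
}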

public static class SingletonLoggingService
{
    public static ILoggingService LoggingService = ObjectBuilder.GetObjectByName("LoggingService");
}
SingletonLoggingService.LoggingService.Info("Test");

I did it!
I used a Queue and a worker thread:
internal class LoggingService : ILoggingService
{
    // 'logWriter' (its declaration is not shown in the post) is the component
    // that actually writes entries to log.txt.
    private readonly Queue<LogEntry> queue = new Queue<LogEntry>();
    private Thread waiter;

    public LoggingService()
    {
        waiter = new Thread(AddLogEntry);
        waiter.Start();
    }

    public void Shutdown()
    {
        try
        {
            waiter.Abort();
        }
        catch {}
    }

    public void Error(string s, Exception e)
    {
        lock (queue)
        {
            queue.Enqueue(new LogEntry(s, e, LogEntryType.Error));
        }
    }

    public void Warning(string message)
    {
        lock (queue)
        {
            queue.Enqueue(new LogEntry(message, LogEntryType.Warning));
        }
    }

    public void Info(string message)
    {
        lock (queue)
        {
            queue.Enqueue(new LogEntry(message, LogEntryType.Info));
        }
    }

    private void AddLogEntry(object state)
    {
        while (true)
        {
            lock (queue)
            {
                if (queue.Count > 0)
                {
                    LogEntry logEntry = queue.Dequeue();
                    switch (logEntry.Type)
                    {
                        case LogEntryType.Error:
                            logWriter.Error(logEntry.Message, logEntry.Exception);
                            break;
                        case LogEntryType.Warning:
                            logWriter.Warning(logEntry.Message);
                            break;
                        case LogEntryType.Info:
                            logWriter.Info(logEntry.Message);
                            break;
                    }
                }
            }
            Thread.Sleep(100);
            // Abort() normally surfaces as a ThreadAbortException inside this
            // loop; this check is a fallback.
            if (waiter.ThreadState == ThreadState.Aborted)
            {
                waiter = null;
                break;
            }
        }
    }
}
I call Shutdown() at the end of the app:
protected override void OnExit(ExitEventArgs e)
{
    loggingService.Shutdown();
    base.OnExit(e);
}
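A hedged side note: Thread.Abort is generally discouraged (and unsupported on .NET Core and later), so a cooperative flag is a safer variant of the same design. A sketch of the changes:

// Cooperative variant of Shutdown(), as a sketch: no Thread.Abort.
private volatile bool shuttingDown;

public void Shutdown()
{
    shuttingDown = true;
    waiter.Join(); // let the loop drain and exit on its own
}

// ...and inside the AddLogEntry loop, replace the ThreadState check with:
// bool done;
// lock (queue) { done = shuttingDown && queue.Count == 0; }
// if (done) break;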

Related

Producer/Consumer pattern using threads and EventWaitHandle

I guess it is sort of a code review, but here is my implementation of the producer/consumer pattern. What I would like to know is whether there is a case in which the while loops in the ReceivingThread() or SendingThread() methods might stop executing. Please note that EnqueueSend(DataSendEnqeueInfo info) is called from multiple different threads, and I probably can't use tasks here since I definitely have to consume commands in a separate thread.
private Thread mReceivingThread;
private Thread mSendingThread;
private Queue<DataRecievedEnqeueInfo> mReceivingThreadQueue;
private Queue<DataSendEnqeueInfo> mSendingThreadQueue;
private readonly object mReceivingQueueLock = new object();
private readonly object mSendingQueueLock = new object();
private bool mIsRunning;
EventWaitHandle mRcWaitHandle;
EventWaitHandle mSeWaitHandle;

private void ReceivingThread()
{
    while (mIsRunning)
    {
        mRcWaitHandle.WaitOne();
        DataRecievedEnqeueInfo item = null;
        while (mReceivingThreadQueue.Count > 0)
        {
            lock (mReceivingQueueLock)
            {
                item = mReceivingThreadQueue.Dequeue();
            }
            ProcessReceivingItem(item);
        }
        mRcWaitHandle.Reset();
    }
}

private void SendingThread()
{
    while (mIsRunning)
    {
        mSeWaitHandle.WaitOne();
        while (mSendingThreadQueue.Count > 0)
        {
            DataSendEnqeueInfo item = null;
            lock (mSendingQueueLock)
            {
                item = mSendingThreadQueue.Dequeue();
            }
            ProcessSendingItem(item);
        }
        mSeWaitHandle.Reset();
    }
}

internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
    lock (mReceivingQueueLock)
    {
        mReceivingThreadQueue.Enqueue(info);
        mRcWaitHandle.Set();
    }
}

public void EnqueueSend(DataSendEnqeueInfo info)
{
    lock (mSendingQueueLock)
    {
        mSendingThreadQueue.Enqueue(info);
        mSeWaitHandle.Set();
    }
}
P.S. The idea here is that I am using WaitHandles to put the threads to sleep when the queues are empty, and to signal them to start when new items are enqueued.
UPDATE
I am just going to leave this here: https://blogs.msdn.microsoft.com/benwilli/2015/09/10/tasks-are-still-not-threads-and-async-is-not-parallel/ , for people who might be trying to implement the Producer/Consumer pattern using TPL or tasks.
Use a BlockingCollection instead of Queue, EventWaitHandle and lock objects:
public class DataInfo { }

private Thread mReceivingThread;
private Thread mSendingThread;
private BlockingCollection<DataInfo> queue = new BlockingCollection<DataInfo>();
private CancellationTokenSource receivingCts = new CancellationTokenSource();

private void ReceivingThread()
{
    try
    {
        while (!receivingCts.IsCancellationRequested)
        {
            // This will block until an item is added to the queue
            // or the cancellation token is cancelled
            DataInfo item = queue.Take(receivingCts.Token);
            ProcessReceivingItem(item);
        }
    }
    catch (OperationCanceledException)
    {
    }
}

internal void EnqueueRecevingData(DataInfo info)
{
    // When a new item is produced, just add it to the queue
    queue.Add(info);
}

// To cancel the receiving thread, cancel the token
private void CancelReceivingThread()
{
    receivingCts.Cancel();
}
Personally, for simple producer-consumer problems, I would just use BlockingCollection. There would be no need to manually code your own synchronization logic. The consuming threads will also block if there are no items present in the queue.
Here is what your code might look like if you use this class:
private BlockingCollection<DataRecievedEnqeueInfo> mReceivingThreadQueue = new BlockingCollection<DataRecievedEnqeueInfo>();
private BlockingCollection<DataSendEnqeueInfo> mSendingThreadQueue = new BlockingCollection<DataSendEnqeueInfo>();

public void Stop()
{
    // No need for mIsRunning. This makes the enumerables in the
    // GetConsumingEnumerable() calls below complete.
    mReceivingThreadQueue.CompleteAdding();
    mSendingThreadQueue.CompleteAdding();
}

private void ReceivingThread()
{
    foreach (DataRecievedEnqeueInfo item in mReceivingThreadQueue.GetConsumingEnumerable())
    {
        ProcessReceivingItem(item);
    }
}

private void SendingThread()
{
    foreach (DataSendEnqeueInfo item in mSendingThreadQueue.GetConsumingEnumerable())
    {
        ProcessSendingItem(item);
    }
}

internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
    // You can also use TryAdd() if there is a possibility that you
    // can add items after you have stopped. Otherwise, this can throw
    // an exception after CompleteAdding() has been called.
    mReceivingThreadQueue.Add(info);
}

public void EnqueueSend(DataSendEnqeueInfo info)
{
    mSendingThreadQueue.Add(info);
}
As suggested in the comments, you can also try the TPL Dataflow blocks.
As far as I can see, you have two similar pipelines, for receive and send, so I assume that your class hierarchy is like this:
class EnqueueInfo { }
class DataRecievedEnqeueInfo : EnqueueInfo { }
class DataSendEnqeueInfo : EnqueueInfo { }
We can assemble an abstract class which will encapsulate the logic for creating the pipeline, and providing the interface for processing the items, like this:
abstract class EnqueueInfoProcessor<T>
    where T : EnqueueInfo
{
    // here we will store all the messages received before the handling
    private readonly BufferBlock<T> _buffer;
    // simple action block for the actual handling of the items
    private ActionBlock<T> _action;

    // the cancellation token cancels the pipeline
    public EnqueueInfoProcessor(CancellationToken token)
    {
        _buffer = new BufferBlock<T>(new DataflowBlockOptions { CancellationToken = token });
        _action = new ActionBlock<T>(item => ProcessItem(item), new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount,
            CancellationToken = token
        });
        // we are linking the two blocks so all the items from the buffer
        // will flow down to the action block in the order they were received
        _buffer.LinkTo(_action, new DataflowLinkOptions { PropagateCompletion = true });
    }

    public void PostItem(T item)
    {
        // synchronously wait for posting to complete
        _buffer.Post(item);
    }

    public async Task SendItemAsync(T item)
    {
        // asynchronously wait for the message to be posted
        await _buffer.SendAsync(item);
    }

    // abstract method to implement
    protected abstract void ProcessItem(T item);
}
Note that you can also encapsulate the link between the two blocks by using the Encapsulate<TInput, TOutput> method, but in that case you have to properly handle the Completion of the buffer block, if you're using it.
After this, we just need to implement two subclasses for the receive and send handling logic:
public class SendEnqueueInfoProcessor : EnqueueInfoProcessor<DataSendEnqeueInfo>
{
    public SendEnqueueInfoProcessor(CancellationToken token)
        : base(token)
    {
    }

    protected override void ProcessItem(DataSendEnqeueInfo item)
    {
        // send logic here
    }
}

public class RecievedEnqueueInfoProcessor : EnqueueInfoProcessor<DataRecievedEnqeueInfo>
{
    public RecievedEnqueueInfoProcessor(CancellationToken token)
        : base(token)
    {
    }

    protected override void ProcessItem(DataRecievedEnqeueInfo item)
    {
        // receive logic here
    }
}
You can also create a more complicated pipeline with TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>, if your message flow is about a ReceiveInfo message becoming a SendInfo; see the sketch below.
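A hedged sketch of what such a pipeline could look like (CreateSendInfo is a hypothetical transformation; ProcessSendingItem is the method from the question):

var receiveBuffer = new BufferBlock<DataRecievedEnqeueInfo>();

// Each received message is transformed into a message to send.
var transform = new TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>(
    received => CreateSendInfo(received));

var send = new ActionBlock<DataSendEnqeueInfo>(item => ProcessSendingItem(item));

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
receiveBuffer.LinkTo(transform, linkOptions);
transform.LinkTo(send, linkOptions);

// Producers post into the head of the pipeline; completion flows to the end.
receiveBuffer.Post(new DataRecievedEnqeueInfo());
receiveBuffer.Complete();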

Simple in-memory message queue

Our existing implementation of domain events limits (by blocking) publishing to one thread at a time to avoid reentrant calls to handlers:
public interface IDomainEvent {} // Marker interface

public class Dispatcher : IDisposable
{
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    // Subscribe code...

    public void Publish(IDomainEvent domainEvent)
    {
        semaphore.Wait();
        try
        {
            // Get event subscriber(s) from concurrent dictionary...
            foreach (Action<IDomainEvent> subscriber in eventSubscribers)
            {
                subscriber(domainEvent);
            }
        }
        finally
        {
            semaphore.Release();
        }
    }

    // Dispose pattern...
}
If a handler publishes an event, this will deadlock.
How can I rewrite this to serialize calls to Publish? In other words, if subscribing handler A publishes event B, I'll get:
Handler A called
Handler B called
while preserving the condition of no reentrant calls to handlers in a multithreaded environment.
I do not want to change the public method signature; there's no place in the application to call a method to publish a queue, for instance.
We came up with a way to do it synchronously.
public class Dispatcher : IDisposable
{
    private readonly ConcurrentQueue<IDomainEvent> queue = new ConcurrentQueue<IDomainEvent>();
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    // Subscribe code...

    public void Publish(IDomainEvent domainEvent)
    {
        queue.Enqueue(domainEvent);
        if (IsPublishing)
        {
            return;
        }
        PublishQueue();
    }

    private void PublishQueue()
    {
        IDomainEvent domainEvent;
        while (queue.TryDequeue(out domainEvent))
        {
            InternalPublish(domainEvent);
        }
    }

    private void InternalPublish(IDomainEvent domainEvent)
    {
        semaphore.Wait();
        try
        {
            // Get event subscriber(s) from concurrent dictionary...
            foreach (Action<IDomainEvent> subscriber in eventSubscribers)
            {
                subscriber(domainEvent);
            }
        }
        finally
        {
            semaphore.Release();
        }
        // Necessary, as calls to Publish during publishing could have queued events and returned.
        PublishQueue();
    }

    private bool IsPublishing
    {
        get { return semaphore.CurrentCount < 1; }
    }

    // Dispose pattern for semaphore...
}
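To illustrate the intended behavior (the Subscribe code is elided above, so this wiring is hypothetical, including the EventA/EventB types and the generic Subscribe method):

// Hypothetical wiring; the real Subscribe code is elided above.
dispatcher.Subscribe<EventA>(e =>
{
    Console.WriteLine("Handler A called");
    dispatcher.Publish(new EventB()); // IsPublishing is true: B is queued, call returns
});
dispatcher.Subscribe<EventB>(e => Console.WriteLine("Handler B called"));

// Prints "Handler A called" and then "Handler B called": B is dispatched by
// the outer PublishQueue() loop after A's handler returns, never reentrantly.
dispatcher.Publish(new EventA());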
You will have to make Publish asynchronous to achieve that. A naive implementation would be as simple as:
public class Dispatcher : IDisposable
{
    private readonly BlockingCollection<IDomainEvent> _queue = new BlockingCollection<IDomainEvent>(new ConcurrentQueue<IDomainEvent>());
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public Dispatcher()
    {
        new Thread(Consume)
        {
            IsBackground = true
        }.Start();
    }

    private List<Action<IDomainEvent>> _subscribers = new List<Action<IDomainEvent>>();

    public void AddSubscriber(Action<IDomainEvent> sub)
    {
        _subscribers.Add(sub);
    }

    private void Consume()
    {
        try
        {
            foreach (var @event in _queue.GetConsumingEnumerable(_cts.Token))
            {
                try
                {
                    foreach (Action<IDomainEvent> subscriber in _subscribers)
                    {
                        subscriber(@event);
                    }
                }
                catch (Exception ex)
                {
                    // log, handle
                }
            }
        }
        catch (OperationCanceledException)
        {
            // expected
        }
    }

    public void Publish(IDomainEvent domainEvent)
    {
        _queue.Add(domainEvent);
    }

    public void Dispose()
    {
        _cts.Cancel();
    }
}
It can't be done with that interface. You can process the event subscriptions asynchronously to remove the deadlock while still running them serially, but then you can't guarantee the order you described. Another call to Publish might enqueue something (event C) while the handler for event A is running but before it publishes event B. Then event B ends up behind event C in the queue.
As long as Handler A is on equal footing with other clients when it comes to getting an item in the queue, it either has to wait like everyone else (deadlock) or it has to play fairly (first come, first served). The interface you have there doesn't allow the two to be treated differently.
That's not to say you couldn't get up to some shenanigans in your logic to attempt to differentiate them (e.g. based on thread id or something else identifiable), but anything along those lines would be unreliable if you don't control the subscriber code as well.
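For completeness, a hedged sketch of one such shenanigan, built on the fields from the self-answer above: mark the dispatching thread with a [ThreadStatic] flag so a re-entrant Publish from a handler enqueues instead of deadlocking. This assumes handlers run synchronously on the publishing thread, and is exactly as fragile as the warning above suggests.

public void Publish(IDomainEvent domainEvent)
{
    queue.Enqueue(domainEvent);
    if (_dispatching)
        return; // re-entrant call from a handler on this thread: already queued

    semaphore.Wait();
    _dispatching = true;
    try
    {
        IDomainEvent next;
        while (queue.TryDequeue(out next))
        {
            foreach (Action<IDomainEvent> subscriber in eventSubscribers)
            {
                subscriber(next);
            }
        }
    }
    finally
    {
        _dispatching = false;
        semaphore.Release();
    }
}

[ThreadStatic]
private static bool _dispatching;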

How to end a singleton-object queue when using Spring AOP cross-cutting concerns + log4net for logging

I have an application that uses the Spring.AOP library to apply a proxy object that logs what the methods of the program do (I use xml-configuration). Before, I used log4net to log messages with Spring.AOP (simplified class):
public class CommandLoggingAdvice : IMethodInterceptor
{
    // here I get an instance of log4net
    private ILog _Logger = null;
    protected ILog Logger
    {
        get
        {
            if (_Logger == null)
                _Logger = LogManager.GetLogger("Logger1");
            return _Logger;
        }
    }

    public object Invoke(IMethodInvocation invocation)
    {
        Logger.Info("Now we enter the method");
        // here I call the method
        object returnValue = invocation.Proceed();
        Logger.Info("Now we exit the method");
        return returnValue;
    }
}
But there was a problem: I needed a queue of messages that works in an independent thread, to distribute the program load over several threads.
Here is a new Spring.AOP class:
public class CommandLoggingAdvice : IMethodInterceptor
{
    private static ProducerConsumerClass LoggingQueue = ProducerConsumerClass.Instance;

    public object Invoke(IMethodInvocation invocation)
    {
        LoggingQueue.AddTask("Now we enter the method");
        // here I call the method
        object returnValue = invocation.Proceed();
        LoggingQueue.AddTask("Now we exit the method");
        return returnValue;
    }
}

/// <summary>
/// ProducerConsumerClass implements:
/// - a singleton object and a Producer/Consumer queue (the queue is a FIFO BlockingCollection).
///   I need this class to process all messages that come from the CommandLoggingAdvice class.
///   The reason is that I need to do it in an independent thread (.IsBackground = false)
/// - This version of the singleton class is thread-safe
/// </summary>
public sealed class ProducerConsumerClass : IDisposable
{
    // here I get an instance of log4net
    private ILog _Logger = null;
    protected ILog Logger
    {
        get
        {
            if (_Logger == null)
                _Logger = LogManager.GetLogger("Logger1");
            return _Logger;
        }
    }

    private BlockingCollection<string> tasks = new BlockingCollection<string>();
    private static volatile ProducerConsumerClass _instance;
    private static object locker = new object();
    Thread worker;

    private ProducerConsumerClass()
    {
        worker = new Thread(Work);
        worker.Name = "Queue thread";
        worker.IsBackground = false;
        worker.Start();
    }

    public static ProducerConsumerClass Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (locker)
                {
                    if (_instance == null)
                    {
                        _instance = new ProducerConsumerClass();
                    }
                }
            }
            return _instance;
        }
    }

    public void AddTask(string task)
    {
        tasks.Add(task);
    }

    // now this is an unused method
    // I need to call this method somehow at the end of the program, but the
    // cross-cutting concern doesn't allow me to do it straight ahead
    public void Dispose()
    {
        tasks.CompleteAdding();
        worker.Join();
        tasks.Dispose();
    }

    void Work()
    {
        while (true)
        {
            string task = null;
            if (!tasks.IsCompleted)
            {
                Thread.Sleep(1000);
                task = tasks.Take();
                Logger.Info(worker.Name + " " + task);
            }
            else
            {
                return;
            }
        }
    }
}
So this class is always running (and so is the "worker" thread);
if "tasks" is empty, tasks.Take() forces the "worker" thread to pause until something is added using tasks.Add().
But when all the functions of the program have ended and I need to exit from the program, "tasks" is empty and "worker" is paused, so I cannot exit from the infinite cycle => the program never ends.
As long as the Spring.AOP classes are cross-cutting and apply automatically, I don't know how to tell the "worker" thread (the Work() method) that it should complete (the CompleteAdding() method, or Dispose()).
Could you help me with this problem or suggest any other ways to do what I need:
cross-cutting concern with Spring.AOP for logging
a thread-safe implementation of a singleton class with a queue (or the Producer/Consumer pattern) in an independent thread, which lives as long as the application does, and a little longer: until the queue is empty.
You can use an independent thread to write your log entries from a queue, using a lock to synchronize access to the queue.
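To address the shutdown question directly, here is one hedged sketch: let Work() consume with GetConsumingEnumerable(), which ends cleanly once CompleteAdding() has been called and the queue drains, and call the existing Dispose() from the application's own exit path rather than from the AOP advice.

// Work() rewritten to end cleanly after CompleteAdding():
void Work()
{
    foreach (string task in tasks.GetConsumingEnumerable())
    {
        Logger.Info(worker.Name + " " + task);
    }
}

Then, at the end of Main() (or in OnExit of a WPF app), call ProducerConsumerClass.Instance.Dispose(); its CompleteAdding/Join/Dispose sequence lets the worker drain the queue and exit, so the foreground thread no longer keeps the process alive.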

How to let only one thread run a critical section while discarding the other threads, without hanging

I'm developing a windows service with .NET framework 4.0 and C#.
This service will open a socket to receive commands.
I have this socket listener class:
public class SocketListener
{
    private System.Net.Sockets.TcpListener m_server;

    public SocketListener()
    {
        IPEndPoint ip = new IPEndPoint(IPAddress.Any, 5445);
        m_server = new System.Net.Sockets.TcpListener(ip);
    }

    public void Start()
    {
        m_server.Start();
        m_server.BeginAcceptTcpClient(new AsyncCallback(Callback), m_server);
    }

    public void Stop()
    {
        if (m_server != null)
            m_server.Stop();
    }

    private void Callback(IAsyncResult ar)
    {
        if ((m_server.Server == null) ||
            !(m_server.Server.IsBound))
            return;

        TcpClient client;
        try
        {
            client = m_server.EndAcceptTcpClient(ar);
        }
        catch (ObjectDisposedException)
        {
            // Listener canceled
            return;
        }

        DataHandler dataHandler = new DataHandler(client);
        ThreadPool.QueueUserWorkItem(dataHandler.HandleClient, client);
        m_server.BeginAcceptTcpClient(new AsyncCallback(Callback), m_server);
    }
}
And this class to process the commands received through the socket:
class DataHandler
{
    private bool m_disposed = false;
    private TcpClient m_controlClient;
    private IPEndPoint m_remoteEndPoint;
    private string m_clientIP;
    private NetworkStream m_controlStream;
    private StreamReader m_controlReader;

    public DataHandler(TcpClient client)
    {
        m_controlClient = client;
    }

    public void HandleClient(object obj)
    {
        m_remoteEndPoint = (IPEndPoint)m_controlClient.Client.RemoteEndPoint;
        m_clientIP = m_remoteEndPoint.Address.ToString();
        m_controlStream = m_controlClient.GetStream();
        m_controlReader = new StreamReader(m_controlStream, true);

        string line;
        try
        {
            while (((line = m_controlReader.ReadLine()) != null) &&
                   (m_controlClient != null) &&
                   (m_controlClient.Connected))
            {
                CommandHandler.ProcessCommand(line);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("CodeServerService.DataHandler error: {0}", ex.Message);
        }
        finally
        {
            Dispose();
        }
    }
}
And, the CommandHandler:
class CommandHandler
{
    public static void ProcessCommand(string command)
    {
        switch (command)
        {
            case "GetNewCodes<EOF>":
                CodesIncremental.GetNewCodes();
                break;
        }
    }
}
And CodesIncremental:
public class CodesIncremental
{
    public static bool GetNewCodes()
    {
        [ ... ]
    }
}
My problem is that I can receive the GetNewCodes<EOF> command before the first one finishes. So I need to prevent GetNewCodes<EOF> from running if another GetNewCodes<EOF> is already running.
How can I avoid running CodesIncremental.GetNewCodes() if this code is already running in another thread?
I need something to discard the commands received while CodesIncremental.GetNewCodes() is running.
In pseudo code:
If CodesIncremental.GetNewCodes() is running, do nothing.
This version does not block. CompareExchange ensures atomicity, so only one thread will swap the value of the _running variable; the rest of the threads will just return immediately.
public class CodesIncremental
{
    static Int32 _running = 0;

    public static bool GetNewCodes()
    {
        if (Interlocked.CompareExchange(ref _running, 1, 0) == 1)
            return false;
        try
        {
            // Do stuff...
            return true;
        }
        finally
        {
            _running = 0;
        }
    }
}
Unlike monitors or other synchronization methods, there is little contention with this method, and it is quite fast.
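For illustration, a usage sketch tying this back to the CommandHandler above: the return value tells the caller whether the work ran or the command was discarded.

case "GetNewCodes<EOF>":
    // Returns false immediately if another thread is already inside
    // GetNewCodes(); the duplicate command is simply discarded.
    bool ran = CodesIncremental.GetNewCodes();
    break;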
Maybe like this, using AutoResetEvent:
public class CodesIncremental
{
    // static, so that the static GetNewCodes() method can use it
    private static AutoResetEvent _event = new AutoResetEvent(true);

    public static bool GetNewCodes()
    {
        if (!_event.WaitOne(0))
            return true; // is running
        try
        {
            /*
            actions in case it isn't running
            */
        }
        finally
        {
            _event.Set();
        }
        return false;
    }
}
EDIT: Update to address the modification of the question.
A simple way is to use Monitor.TryEnter and Monitor.Exit.
Just call ExecuteGetNewCodeCommand to process your "GetNewCode" command.
object _myLock = new object();

void ExecuteGetNewCodeCommand(ArgType args)
{
    bool result = false;
    try
    {
        result = Monitor.TryEnter(_myLock); // This method returns immediately
        if (!result) // check if the lock is acquired.
            return;
        // Execute your command code here
    }
    finally
    {
        if (result) // release the lock.
            Monitor.Exit(_myLock);
    }
}
Old answer (before the modification of the question):
Think about using a queue and a thread pool.
Every time you receive a new command (including "GetNewCode"), insert it into a queue. In addition, you will have a thread pool that reads requests from the queue and executes them.
If you use only one thread in the thread pool, or a dedicated thread for this type of command (with other threads for the other requests/commands in the queue/queues), then only one "GetNewCode" request will run at a time; a sketch follows below.
This way you can control the number of threads your server runs, and thus also the resources it uses.
If you just synchronize (via locks or another mechanism), there is a performance penalty, and maybe a denial of service if you reach a thread limit. Say the execution of a request takes too long (maybe a deadlock in your code): if you don't use a thread pool and instead execute the commands/requests on the same thread the client connected on, your server may hang.
However, if you synchronize the threads inside the thread pool, the server will not hang. It may be really slow to execute the requests, but it will still run and work.
There is a default .NET ThreadPool implementation documented on MSDN.
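A hedged sketch of the dedicated-thread variant described above (illustrative; note that this serializes "GetNewCode" commands rather than discarding duplicates):

public static class CommandQueue
{
    private static readonly BlockingCollection<string> commands =
        new BlockingCollection<string>();

    static CommandQueue()
    {
        // A single dedicated consumer thread: commands taken from this
        // queue can never run concurrently with each other.
        var worker = new Thread(() =>
        {
            foreach (string command in commands.GetConsumingEnumerable())
                CommandHandler.ProcessCommand(command);
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from the socket-handling code instead of ProcessCommand directly.
    public static void Enqueue(string command)
    {
        commands.Add(command);
    }
}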
Add a lock to your CodesIncremental Class:
public class CodesIncremental
{
    private object m_threadLock = new object();

    public static bool GetNewCodes()
    {
        lock (m_threadLock)
        {
            [ ... ]
        }
    }
}
http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
This way, when your GetNewCodes method is called the first time, the 'lock' statement will take an exclusive lock on the 'm_threadLock' object and only release it when execution leaves the lock block. If any other thread calls the method while the first thread is still inside the lock block, it will not be able to get an exclusive lock, and execution will suspend until it can.
Update:
Since you want to discard other calls try this:
public class CodesIncremental
{
    private static object m_threadLock = new object();
    private static bool m_running = false;

    public static bool GetNewCodes()
    {
        lock (m_threadLock)
        {
            if (m_running)
            {
                return false; // discard: another call is already running
            }
            m_running = true;
        }
        try
        {
            [ ... ]
        }
        finally
        {
            m_running = false;
        }
    }
}
There might be better ways, but this should do the trick.
Update 2: Hadn't seen the static

C#: once the main thread sleeps, all threads stop

I have a class running the Producer-Consumer model like this:
public class SyncEvents
{
    public bool waiting;

    public SyncEvents()
    {
        waiting = true;
    }
}

public class Producer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private Object _waitAck;

    public Producer(Queue<Delegate> q, SyncEvents sync, Object obj)
    {
        _queue = q;
        _sync = sync;
        _waitAck = obj;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                Monitor.Wait(_sync, 0);
                if (_queue.Count > 0)
                {
                    _sync.waiting = false;
                }
                else
                {
                    _sync.waiting = true;
                    lock (_waitAck)
                    {
                        Monitor.Pulse(_waitAck);
                    }
                }
                Monitor.Pulse(_sync);
            }
        }
    }
}

public class Consumer
{
    private readonly Queue<Delegate> _queue;
    private SyncEvents _sync;
    private int count = 0;

    public Consumer(Queue<Delegate> q, SyncEvents sync)
    {
        _queue = q;
        _sync = sync;
    }

    public void ThreadRun()
    {
        lock (_sync)
        {
            while (true)
            {
                while (_queue.Count == 0)
                {
                    Monitor.Wait(_sync);
                }
                Delegate query = _queue.Dequeue();
                query.DynamicInvoke(null);
                count++;
                Monitor.Pulse(_sync);
            }
        }
    }
}

/// <summary>
/// Act as a consumer to the queries produced by the DataGridViewCustomCell
/// </summary>
public class QueryThread
{
    private SyncEvents _syncEvents = new SyncEvents();
    private Object waitAck = new Object();
    private Queue<Delegate> _queryQueue = new Queue<Delegate>();
    Producer queryProducer;
    Consumer queryConsumer;

    public QueryThread()
    {
        queryProducer = new Producer(_queryQueue, _syncEvents, waitAck);
        queryConsumer = new Consumer(_queryQueue, _syncEvents);
        Thread producerThread = new Thread(queryProducer.ThreadRun);
        Thread consumerThread = new Thread(queryConsumer.ThreadRun);
        producerThread.IsBackground = true;
        consumerThread.IsBackground = true;
        producerThread.Start();
        consumerThread.Start();
    }

    public bool isQueueEmpty()
    {
        return _syncEvents.waiting;
    }

    public void wait()
    {
        lock (waitAck)
        {
            while (_queryQueue.Count > 0)
            {
                Monitor.Wait(waitAck);
            }
        }
    }

    public void Enqueue(Delegate item)
    {
        _queryQueue.Enqueue(item);
    }
}
The code runs smoothly except for the wait() function.
In some cases I want to wait until all the functions in the queue have finished running, so I made the wait() function.
The producer will fire the waitAck pulse at a suitable time.
However, when the line "Monitor.Wait(waitAck);" runs in the wait() function, all threads stop, including the producer and consumer threads.
Why would this happen and how can I solve it? Thanks!
It seems very unlikely that all the threads will actually stop, although I should point out that to avoid false wake-ups you should probably have a while loop instead of an if statement:
lock (waitAck)
{
    while (queryProducer.secondQueue.Count > 0)
    {
        Monitor.Wait(waitAck);
    }
}
The fact that you're calling Monitor.Wait means that waitAck should be released, so it shouldn't prevent the consumer threads from locking...
Could you give more information about the way in which the producer/consumer threads are "stopping"? Does it look like they've just deadlocked?
Is your producer using Pulse or PulseAll? You've got an extra waiting thread now, so if you only use Pulse it's only going to release a single thread... it's hard to see whether or not that's a problem without the details of your Producer and Consumer classes.
If you could show a short but complete program to demonstrate the problem, that would help.
EDIT: Okay, now you've posted the code I can see a number of issues:
Having so many public variables is a recipe for disaster. Your classes should encapsulate their functionality so that other code doesn't have to go poking around for implementation bits and pieces. (For example, your calling code here really shouldn't have access to the queue.)
You're adding items directly to the second queue, which means you can't efficiently wake up the producer to add them to the first queue. Why do you even have multiple queues?
You're always waiting on _sync in the producer thread... why? What's going to notify it to start with? Generally speaking, the producer thread shouldn't have to wait unless you have a bounded buffer.
You have a static variable (_waitAck) which is being overwritten every time you create a new instance. That's a bad idea.
You also haven't shown your SyncEvents class - is that meant to be doing anything interesting?
To be honest, it seems like you've got quite a strange design - you may well be best starting again from scratch. Try to encapsulate the whole producer/consumer queue in a single class, which has Produce and Consume methods, as well as WaitForEmpty (or something like that). I think you'll find the synchronization logic a lot easier that way.
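As a hedged sketch of that suggestion (illustrative names, not a drop-in replacement): one class owning the queue, the consumer thread, and the wait logic.

public sealed class WorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _items = new BlockingCollection<Action>();
    private readonly object _gate = new object();
    private int _pending; // items queued or currently executing
    private readonly Thread _consumer;

    public WorkQueue()
    {
        _consumer = new Thread(Consume);
        _consumer.IsBackground = true;
        _consumer.Start();
    }

    public void Produce(Action work)
    {
        lock (_gate) { _pending++; }
        _items.Add(work);
    }

    private void Consume()
    {
        foreach (Action work in _items.GetConsumingEnumerable())
        {
            work();
            lock (_gate)
            {
                _pending--;
                if (_pending == 0)
                    Monitor.PulseAll(_gate); // wake anyone in WaitForEmpty()
            }
        }
    }

    // Blocks until everything queued so far has been executed.
    public void WaitForEmpty()
    {
        lock (_gate)
        {
            while (_pending > 0)
                Monitor.Wait(_gate);
        }
    }

    public void Dispose()
    {
        _items.CompleteAdding(); // lets Consume() finish draining and exit
    }
}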
Here is my take on your code:
public class ProducerConsumer : IDisposable
{
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();

    public ProducerConsumer(Queue<Delegate> queue)
    {
        lock (_sync)
        {
            // Note: I would recommend that you don't even
            // bother with taking in a queue. You should be able
            // to just instantiate a new Queue<Delegate>()
            // and use it when you Enqueue. There is nothing that
            // you really need to pass into the constructor.
            _queue = queue;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public void Enqueue(Delegate value)
    {
        lock (_sync)
        {
            _queue.Enqueue(value);
            _ready.Set();
        }
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        Delegate query;
        try
        {
            while (true)
            {
                _ready.WaitOne();
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _ready.Reset();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
            }
        }
    }
}
I'm not exactly sure what you're trying to achieve with the wait function... I'm assuming you're trying to put some type of a limit to the number of items that can be queued. In that case simply throw an exception or return a failure signal when you have too many items in the queue, the client that is calling Enqueue will keep retrying until the queue can take more items. Taking an optimistic approach will save you a LOT of headaches and it simply helps you get rid of a lot of complex logic.
If you REALLY want to have the wait in there, then I can probably help you figure out a better approach. Let me know what are you trying to achieve with the wait and I'll help you out.
Note: I took this code from one of my projects, modified it a little and posted it here... there might be some minor syntax errors, but the logic should be correct.
UPDATE: Based on your comments I made some modifications: I added another ManualResetEvent to the class, so when you call BlockQueue() it gives you an event which you can wait on and sets a flag to stop the Enqueue function from queuing more elements. Once all the queries in the queue are serviced, the flag is set to true and the _wait event is set so whoever is waiting on it gets the signal.
public class ProducerConsumer : IDisposable
{
    private bool _canEnqueue;
    private ManualResetEvent _ready;
    private Queue<Delegate> _queue;
    private Thread _consumerService;
    private static Object _sync = new Object();
    private static ManualResetEvent _wait = new ManualResetEvent(false);

    public ProducerConsumer()
    {
        lock (_sync)
        {
            _queue = new Queue<Delegate>();
            _canEnqueue = true;
            _ready = new ManualResetEvent(false);
            _consumerService = new Thread(Run);
            _consumerService.IsBackground = true;
            _consumerService.Start();
        }
    }

    public bool Enqueue(Delegate value)
    {
        lock (_sync)
        {
            // Don't allow anybody to enqueue
            if (_canEnqueue)
            {
                _queue.Enqueue(value);
                _ready.Set();
                return true;
            }
        }
        // Whoever is calling Enqueue should try again later.
        return false;
    }

    // The consumer blocks until the producer puts something in the queue.
    private void Run()
    {
        try
        {
            while (true)
            {
                // Wait for a query to be enqueued
                _ready.WaitOne();

                // Process the query
                lock (_sync)
                {
                    if (_queue.Count > 0)
                    {
                        Delegate query = _queue.Dequeue();
                        query.DynamicInvoke(null);
                    }
                    else
                    {
                        _canEnqueue = true;
                        _ready.Reset();
                        _wait.Set();
                        continue;
                    }
                }
            }
        }
        catch (ThreadInterruptedException)
        {
            _queue.Clear();
            return;
        }
    }

    // Block your queue from enqueuing; return null
    // if the queue is already empty.
    public ManualResetEvent BlockQueue()
    {
        lock (_sync)
        {
            if (_queue.Count > 0)
            {
                _canEnqueue = false;
                _wait.Reset();
            }
            else
            {
                // You need to tell the caller that they can't
                // block your queue while it's empty. The caller
                // should check if the result is null before calling
                // WaitOne().
                return null;
            }
        }
        return _wait;
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_consumerService != null)
            {
                _consumerService.Interrupt();
                // Set wait when you're disposing the queue
                // so that nobody is left with a lingering wait.
                _wait.Set();
            }
        }
    }
}
