I am subscribed to a real-time data feed and am maintaining a state based on the received data. Normally, all data is received in order, but in the case where a message is dropped, I buffer the messages, receive a snapshot of the state through a REST API, and then play back the buffer, skipping any messages with an Id preceding the one specified in the snapshot. Currently, I am doing the following:
class StateManager
{
    private long _lastId;
    private bool _isSyncing;
    private object _syncLock;
    private Dictionary<decimal,decimal> _state;
    private ConcurrentQueue<SocketMessage> _messageBuffer;
    private ManualResetEvent _messageEvent;
    private ManualResetEvent _processingEvent;

    public StateManager( DataSocket socket )
    {
        _isSyncing = false;
        _syncLock = new object();
        _state = new Dictionary<decimal,decimal>();
        _messageBuffer = new ConcurrentQueue<SocketMessage>();
        _messageEvent = new ManualResetEvent( false );   // signaled while the buffer has messages
        _processingEvent = new ManualResetEvent( true ); // reset to pause processing during a re-sync
        socket.OnMessage += OnSocketMessage;
        Task.Factory.StartNew( MessageProcessingThread, TaskCreationOptions.LongRunning );
    }

    public void ApplySnapshot( Snapshot snapshot )
    {
        lock( _syncLock )
        {
            if( _isSyncing ) return;
            _isSyncing = true;
            _processingEvent.Reset();
        }
        // Apply the snapshot to the state...
        _isSyncing = false;
        _processingEvent.Set();
    }

    private void OnSocketMessage( object sender, SocketMessage msg )
    {
        _messageBuffer.Enqueue( msg );
        _messageEvent.Set();
    }

    private void MessageProcessingThread()
    {
        while( true )
        {
            _messageEvent.WaitOne();
            while( true )
            {
                _processingEvent.WaitOne();
                if( !_messageBuffer.TryDequeue( out var msg ) )
                {
                    _messageEvent.Reset();
                    break;
                }
                ApplyToState( msg );
            }
        }
    }
}
This works fine, but I feel like it's a bit sloppy and could perform better when under heavy loads. Thus, I am looking at transitioning to Microsoft.Tpl.Dataflow, which will handle the queueing and processing execution for me. However, I have used Dataflow before, and I have a concern:
Is there a way I can pause the execution of an ActionBlock such that it will buffer new tasks but not process them until I resume? In the case where I detect a dropped message, I need to be able to pause the processing until a fresh snapshot is applied, and then resume and process all of the buffered messages.
I could just use _processingEvent inside of the ActionBlock, but I feel like this would cause a bunch of problems. First, it would block the task, causing more tasks to be started, and those would block, quickly filling up TPL's internal task queue. Additionally, it would cause all of the blocked tasks to complete simultaneously, possibly out of order, causing another re-sync event to occur.
If this is not possible with TPL, is there a better way to go about this?
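For illustration, here's the shape of what I'm imagining: instead of blocking on a WaitHandle inside the delegate, the block awaits a TaskCompletionSource-based gate, so a paused block just buffers without tying up threads, and the default MaxDegreeOfParallelism of 1 keeps replay in order. This is only a rough sketch (the PausableProcessor wrapper is mine, not code from my project):

using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class PausableProcessor
{
    private readonly ActionBlock<SocketMessage> _block;
    private volatile TaskCompletionSource<bool> _gate;

    public PausableProcessor()
    {
        _gate = new TaskCompletionSource<bool>();
        _gate.SetResult( true ); // start in the "running" state

        // MaxDegreeOfParallelism defaults to 1, so buffered messages
        // are still processed one at a time, in order.
        _block = new ActionBlock<SocketMessage>( async msg =>
        {
            await _gate.Task; // awaiting releases the thread instead of blocking it
            ApplyToState( msg );
        } );
    }

    public void Post( SocketMessage msg ) { _block.Post( msg ); }

    public void Pause() { _gate = new TaskCompletionSource<bool>(); }

    public void Resume() { _gate.TrySetResult( true ); }

    private void ApplyToState( SocketMessage msg ) { /* ... */ }
}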
Related
[ This question needs to be reimagined. One of my thread queues MUST run on an STA thread, and the code below does not accommodate that. In particular it seems Task<> chooses its own thread and that just is not going to work for me. ]
I have a task queue (BlockingCollection) that I'm running through on a dedicated thread. That queue receives a series of Task<> objects that it runs sequentially within that thread via a while loop.
I need a means of Cancelling that series of tasks, and a means of knowing that the tasks are all complete. I have not been able to figure out how to do this.
Here's a fragment of my queuing class. ProcessQueue is run on a separate thread from main. QueueJob calls occur on the main thread.
using Job = Tuple<Task<bool>, string>;

public class JobProcessor
{
    private readonly BlockingCollection<Job> m_queue = new BlockingCollection<Job>();
    private readonly string name = "JobProcessor"; // declaration elided in the original post
    volatile bool cancel_queue = false;

    private bool ProcessQueue()
    {
        while (true)
        {
            if (m_queue.IsAddingCompleted)
                break;
            Job tuple;
            if (!m_queue.TryTake(out tuple, Timeout.Infinite))
                break;
            var task = tuple.Item1;
            var taskName = tuple.Item2;
            try
            {
                Console.WriteLine("Task {0}::{1} starting", this.name, taskName);
                task.RunSynchronously();
                Console.WriteLine("Task {0}::{1} completed", this.name, taskName);
            }
            catch (Exception e)
            {
                string message = e.Message;
            }
            if (cancel_queue) // CANCEL BY ERASING TASKS AND NOT RUNNING.
            {
                while (m_queue.TryTake(out tuple))
                {
                }
            }
        } // while(true)
        return true;
    }

    public Task<bool> QueueJob(Func<bool> input)
    {
        var task = new Task<bool>(input);
        try
        {
            m_queue.Add(Tuple.Create(task, input.Method.Name));
        }
        catch (InvalidOperationException)
        {
            Task<bool> dummy = new Task<bool>(() => false);
            dummy.Start();
            return dummy;
        }
        return task;
    }
Here are the functions that trouble me:
public void ClearQueue()
{
    cancel_queue = true;
    // wait for queue to become empty. HOW?
    cancel_queue = false;
}

public void WaitForCompletion()
{
    // wait for all tasks to be completed.
    // not sufficient to wait for empty queue because the last task
    // must also execute and finish. HOW?
}
}
Here is some usage:
class SomeClass
{
    void Test()
    {
        JobProcessor jp = new JobProcessor();
        // launch Processor loop on separate thread... code not shown.
        // send a bunch of jobs via QueueJob... code not shown.
        // launch dialog... code not shown.
        if (dialog_result == Result.Cancel)
            jp.ClearQueue();
        if (dialog_result == Result.Proceed)
            jp.WaitForCompletion();
    }
}
The idea is after the work is completed or cancelled, new work may be posted. In general though, new work may come in asynchronously. WaitForCompletion might in fact be "when all work is done, inform the user and then do other stuff", so it doesn't strictly have to be a synchronous function call like above, but I can't figure how to make these happen.
(One further complication, I expect to have several queues that interact. While I am careful to keep things parallelized in a way to prevent deadlocks, I am not confident what happens when cancellation is introduced into the mix, but this is probably beyond scope for this question.)
WaitForCompletion() sounds easy enough: create a semaphore or event, create a task whose only action is to signal the semaphore, queue up the task, and wait on the semaphore.
When the thread finishes the last 'real' task, the semaphore task will run, and so the thread that called WaitForCompletion will become ready/running. :)
Wouldn't a similar approach work for cancellation? Have a high-priority thread that you create/signal which drains the queue of all pending jobs, disposes them, queues up the semaphore task, and waits for the 'last task done' signal.
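For example, a rough sketch of WaitForCompletion along those lines, written against the JobProcessor above (untested, and using an event rather than a semaphore):

public void WaitForCompletion()
{
    using (var done = new ManualResetEventSlim(false))
    {
        // This sentinel job runs only after every job queued before it,
        // because the worker thread executes jobs one at a time, in order.
        QueueJob(() => { done.Set(); return true; });
        done.Wait();
    }
}

Note that if QueueJob falls into its InvalidOperationException branch the event is never set, so real code would need to handle that case.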
Is Looping inside a task really recommended?
example code:
public void doTask()
{
    Task.Factory.StartNew(() =>
    {
        do
        {
            // do tasks here.... call webservice
        } while (true); // until cancelled
    });
}
Any answers would be great! :)
I ask because this is the situation with my web service calls right now, and the memory consumption is going out of control.
So, is looping inside a task good practice, or not recommended at all?
As requested by SLC, here's the code:
CancellationTokenSource tokenSrc;
Task myTask;
bool isPressed; // toggled by the button

private void btnStart_Click(object sender, EventArgs e)
{
    isPressed = !isPressed;
    if (isPressed)
    {
        tokenSrc = new CancellationTokenSource();
        myTask = Task.Factory.StartNew(() =>
        {
            do
            {
                checkMatches(tokenSrc.Token);
            } while (tokenSrc.IsCancellationRequested != true);
        }, tokenSrc.Token);
    }
    else
    {
        try
        {
            tokenSrc.Cancel();
            // Log to notepad
        }
        catch (Exception err)
        {
            // Log to notepad
        }
        finally
        {
            if (myTask.IsCanceled || myTask.IsCompleted || myTask.IsFaulted)
            {
                myTask.Dispose();
            }
        }
    }
}

private void checkMatches(CancellationToken token)
{
    try
    {
        if (!token.IsCancellationRequested)
        {
            // Create Endpoint...
            // Bypass ServCertValidation for test purposes
            ServicePointManager.ServerCertificateValidationCallback =
                new RemoteCertificateValidationCallback(delegate { return true; });
            using (WebServiceAsmx.SoapClient client = new....)
            {
                var response = client.chkMatch();
                // if's here for the response then put to logs
            }
        }
    }
    catch (Exception err)
    {
        // err.toLogs
    }
}
It's perfectly fine to do this, especially if your task runs constantly, for example picking up a message queue.
while (not shutting down)
    get next email to send
    if exists next email to send
        send
    else
        wait for 10 seconds
wend
Ensure that you have a way to get out if you need to cancel it, like you've done with a flag, and you should be fine.
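In C#, that shape might look something like the following sketch (GetNextEmail and Send are placeholders for whatever your queue and sender actually are):

private volatile bool _shutdown; // set from your cancel path

private void ProcessOutbox()
{
    while (!_shutdown)
    {
        var email = GetNextEmail(); // placeholder: returns null when the queue is empty
        if (email != null)
            Send(email);
        else
            Thread.Sleep(TimeSpan.FromSeconds(10)); // idle; poll again shortly
    }
}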
Regarding webservices:
You should have no problem calling the webservice repeatedly, nor should it cause any memory spikes. However, you should make sure your initialisation code is not inside the loop:
BAD
while (notShuttingDown)
    make a new connection
    initialise
    make a call to the service
wend
GOOD
make a new connection
initialise
while (notShuttingDown)
    make a call to the service
wend
Depending on your webservice it might be more optimal to create a batch operation, for example if your service is HTTP then hitting it repeatedly involves a lot of overhead. A persistent TCP connection might be better because it could be creating and destroying a lot of objects to make the calls.
For example
slow, lots of overhead:
myRecords = { cat, dog, mouse }
foreach record in myRecords
    webservice check record
endforeach
faster:
myRecords = { cat, dog, mouse }
webservice check [myRecords] // array of records is passed instead of one by one
Debugging: The most likely risk is that somehow the task is not being disposed correctly - can you add this to your method to debug?
myTask = Task.Factory.StartNew(() =>
{
    Console.WriteLine("Task Started");
    do
    {
        checkMatches(tokenSrc.Token);
        Thread.Sleep(10); // brief pause so the loop doesn't pin the CPU at 100% (or 100 / number of cores %)
    } while (tokenSrc.IsCancellationRequested != true);
    Console.WriteLine("Task Stopped");
});
You might have to change that so it writes to a file or similar, depending on whether you have a console.
Then run it and make sure that only 1 task is being created.
I'm creating a project using ZeroMQ. I need functions to start and to kill a thread. The Start function seems to work fine, but there are problems with the Stop function.
private Thread _workerThread;
private object _locker = new object();
private bool _stop = false;

public void Start()
{
    _workerThread = new Thread(RunZeroMqServer);
    _workerThread.Start();
}

public void Stop()
{
    lock (_locker)
    {
        _stop = true;
    }
    _workerThread.Join();
    Console.WriteLine(_workerThread.ThreadState);
}

private void RunZeroMqServer()
{
    using (var context = ZmqContext.Create())
    using (ZmqSocket server = context.CreateSocket(SocketType.REP))
    {
        /*
        var bindingAddress = new StringBuilder("tcp://");
        bindingAddress.Append(_ipAddress);
        bindingAddress.Append(":");
        bindingAddress.Append(_port);
        server.Bind(bindingAddress.ToString());
        */
        //server.Bind("tcp://192.168.0.102:50000");
        server.Bind("tcp://*:12345");

        while (!_stop)
        {
            string message = server.Receive(Encoding.Unicode);
            if (message == null) continue;

            var response = ProcessMessage(message);
            server.Send(response, Encoding.Unicode);

            Thread.Sleep(100);
        }
    }
}
Does anyone have an idea why this Stop() function doesn't kill the thread?
I got a hint that I should use Thread.MemoryBarrier and volatile, but I have no idea how that works.
There is also a ProcessMessage() function for processing messages; I just didn't copy it in, to avoid clutter. :)
The problem seems to be that you're calling a blocking version of ZmqSocket.Receive. While it's waiting to receive a message it's not processing the rest of your code, so it never hits the loop condition.
The solution is to use one of the non-blocking methods, or one that has a timeout. Try this:
string message = server.Receive(Encoding.Unicode, TimeSpan.FromMilliseconds(100));
This should return after 100ms if no message is received, or earlier if a message arrives. Either way it will get a shot at the loop condition.
As to the _stop flag itself...
When you're accessing variables from multiple threads, locking is a good idea. In the case of a simple flag, however, both reading and writing are pretty much atomic operations. Here it's sufficient to declare the flag as volatile (private volatile bool _stop = false;) to tell the compiler to make sure it actually reads the current value each time you check it.
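Putting the two fixes together, the receive loop might look like this (a sketch using the timed Receive overload above plus the volatile flag):

private volatile bool _stop = false;

// ...inside RunZeroMqServer:
while (!_stop)
{
    // Returns null if nothing arrives within 100 ms, so the loop
    // gets a chance to re-check _stop several times a second.
    string message = server.Receive(Encoding.Unicode, TimeSpan.FromMilliseconds(100));
    if (message == null) continue;

    var response = ProcessMessage(message);
    server.Send(response, Encoding.Unicode);
}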
What I am trying to achieve is a producer/consumer arrangement. There can be many producers but only one consumer. There cannot be a dedicated consumer because of scalability, so the idea is for a producer to start the consuming process when there is data to be consumed and no currently active consumer.
1. Many threads can be producing messages. (Asynchronous)
2. Only one thread can be consuming messages. (Synchronous)
3. We should only have a consumer in process if there is data to be consumed.
4. A continuous consumer that waits for data would not be efficient if we add many of these classes.
In my example I have a set of methods that send data. Multiple threads can write data with Write(), but only one of those threads will loop and send data in SendNewData(). The reason that only one loop can send data is that the order of data must be preserved, and with a WriteAsync() out of our control we can only guarantee order by running one WriteAsync() at a time.
The problem I have is that if a thread calls Write() to produce, it will queue the data and check Interlocked.CompareExchange to see if there is a consumer. If it sees that another thread is already in the consuming loop, it will assume that this consumer will send the data. This is a problem if that looping consumer thread is at "Race Point A", since the consumer has already checked that there are no more messages to send and is about to shut down the consuming process.
Is there a way to prevent this race condition without locking a large part of the code? The real scenario has many queues and is a bit more complex than this.
In the real code List<INetworkSerializable> is actually a byte[] BufferPool. I used List for the example to make this block easier to read.
With 1000s of these classes being active at once, I cannot afford to have the SendNewData looping continuously with a dedicated thread. The looping thread should only be active if there is data to send.
public void Write(INetworkSerializable messageToSend)
{
    Queue.Enqueue(messageToSend);

    // Check if there are any current consumers. If not then we should instigate the consuming.
    if (Interlocked.CompareExchange(ref RunningWrites, 1, 0) == 0)
    {
        // We are now the thread that consumes and sends data
        SendNewData();
    }
}

// Only one thread should be looping here to keep consuming and sending data synchronously.
private void SendNewData()
{
    INetworkSerializable dataToSend;
    List<INetworkSerializable> dataToSendList = new List<INetworkSerializable>();
    while (true)
    {
        if (!Queue.TryDequeue(out dataToSend))
        {
            // Race Point A
            if (dataToSendList.Count == 0)
            {
                // All data is sent; return so that another thread can take responsibility.
                Interlocked.Decrement(ref RunningWrites);
                return;
            }
            // We have data in the list to send but nothing more to consume, so send what we have.
            break;
        }
        dataToSendList.Add(dataToSend);
    }

    // Async callback is WriteAsyncCallback()
    WriteAsync(dataToSendList);
}

// Callback after WriteAsync() has sent the data.
private void WriteAsyncCallback()
{
    // Data was written to the sockets; now loop back for more data.
    SendNewData();
}
It sounds like you would be better off with the producer-consumer pattern that is easily implemented with the BlockingCollection:
var toSend = new BlockingCollection<something>();

// producers
toSend.Add(something);

// when all producers are done
toSend.CompleteAdding();

// consumer -- this won't end until CompleteAdding is called
foreach (var item in toSend.GetConsumingEnumerable())
    Send(item);
To address the comment of knowing when to call CompleteAdding, I would launch the 1000s of producers as tasks, wait for all those tasks to complete (Task.WaitAll), and then call CompleteAdding. There are good overloads taking in CancellationTokens that would give you better control, if needed.
Also, TPL is pretty good about scheduling off blocked threads.
More complete code:
var toSend = new BlockingCollection<int>();
Parallel.Invoke(() => Produce(toSend), () => Consume(toSend));

...

private static void Consume(BlockingCollection<int> toSend)
{
    foreach (var value in toSend.GetConsumingEnumerable())
    {
        Console.WriteLine("Sending {0}", value);
    }
}

private static void Produce(BlockingCollection<int> toSend)
{
    Action<int> generateToSend = toSend.Add;
    var producers = Enumerable.Range(0, 1000)
                              .Select(n => new Task(value => generateToSend((int)value), n))
                              .ToArray();
    foreach (var p in producers)
    {
        p.Start();
    }
    Task.WaitAll(producers);
    toSend.CompleteAdding();
}
Check out this variant; there are some descriptive comments in the code.
Note that WriteAsyncCallback still calls SendNewData: _consuming stays set while an asynchronous write is in flight, so the callback has to resume consumption once the write completes.
private int _pendingMessages;
private int _consuming;

public void Write(INetworkSerializable messageToSend)
{
    Interlocked.Increment(ref _pendingMessages);
    Queue.Enqueue(messageToSend);

    // Check if there is anyone consuming messages;
    // if not, we will have to become the consumer and process our own message,
    // and any further messages, until we have drained the queue.
    if (Interlocked.CompareExchange(ref _consuming, 1, 0) == 0)
    {
        // We are now the thread that consumes and sends data
        SendNewData();
    }
}

// Only one thread should be looping here to keep consuming and sending data synchronously.
private void SendNewData()
{
    INetworkSerializable dataToSend;
    var dataToSendList = new List<INetworkSerializable>();
    int messagesLeft = 1;
    do
    {
        if (!Queue.TryDequeue(out dataToSend))
        {
            // There is one way to get here while _pendingMessages != 0:
            // another thread has just incremented _pendingMessages from 0 to 1
            // but hasn't enqueued its message yet.
            if (dataToSendList.Count == 0)
            {
                if (_pendingMessages == 0)
                {
                    _consuming = 0;
                    // No data and no pending producers, so we are safe to exit.
                    return;
                }
                continue; // the pending message will appear shortly; retry the dequeue
            }
            // We have data in the list to send but nothing more to consume,
            // so send the data that we do have.
            break;
        }
        dataToSendList.Add(dataToSend);
        messagesLeft = Interlocked.Decrement(ref _pendingMessages);
    }
    while (messagesLeft > 0);

    // Async callback is WriteAsyncCallback()
    WriteAsync(dataToSendList);
}

private void WriteAsync(List<INetworkSerializable> dataToSendList)
{
    // some code
}

// Callback after WriteAsync() has sent the data.
private void WriteAsyncCallback()
{
    // ...
    SendNewData();
}
The race condition can be prevented by adding the following and double-checking the Queue after we have declared that we are no longer the consumer.
if (dataToSendList.Count == 0)
{
    // Declare that we are no longer the consumer.
    Interlocked.Decrement(ref RunningWrites);

    // Double-check the queue to prevent race condition A.
    if (Queue.IsEmpty)
        return;

    // Race condition A occurred: there is data again.
    // Let's try to become a consumer.
    if (Interlocked.CompareExchange(ref RunningWrites, 1, 0) == 0)
        continue;

    // Another thread has nominated itself as the consumer. Our job is done.
    return;
}
break;
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread.
The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write.
new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null);
What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool.
How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated.
Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
I wrote this code a while back, feel free to use it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace MediaBrowser.Library.Logging {
    public abstract class ThreadedLogger : LoggerBase {

        Queue<Action> queue = new Queue<Action>();
        AutoResetEvent hasNewItems = new AutoResetEvent(false);
        volatile bool waiting = false;

        public ThreadedLogger() : base() {
            Thread loggingThread = new Thread(new ThreadStart(ProcessQueue));
            loggingThread.IsBackground = true;
            loggingThread.Start();
        }

        void ProcessQueue() {
            while (true) {
                waiting = true;
                hasNewItems.WaitOne(10000, true);
                waiting = false;

                Queue<Action> queueCopy;
                lock (queue) {
                    queueCopy = new Queue<Action>(queue);
                    queue.Clear();
                }

                foreach (var log in queueCopy) {
                    log();
                }
            }
        }

        public override void LogMessage(LogRow row) {
            lock (queue) {
                queue.Enqueue(() => AsyncLogMessage(row));
            }
            hasNewItems.Set();
        }

        protected abstract void AsyncLogMessage(LogRow row);

        public override void Flush() {
            while (!waiting) {
                Thread.Sleep(1);
            }
        }
    }
}
Some advantages:
It keeps the background logger alive, so it does not need to spin up and spin down threads.
It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue.
It copies the queue to ensure the queue is not blocked while the log operation is performed.
It uses an AutoResetEvent to ensure the bg thread is in a wait state.
It is, IMHO, very easy to follow.
Here is a slightly improved version, keep in mind I performed very little testing on it, but it does address a few minor issues.
public abstract class ThreadedLogger : IDisposable {

    Queue<Action> queue = new Queue<Action>();
    ManualResetEvent hasNewItems = new ManualResetEvent(false);
    ManualResetEvent terminate = new ManualResetEvent(false);
    ManualResetEvent waiting = new ManualResetEvent(false);
    Thread loggingThread;

    public ThreadedLogger() {
        loggingThread = new Thread(new ThreadStart(ProcessQueue));
        loggingThread.IsBackground = true;
        // this is performed from a bg thread, to ensure the queue is serviced from a single thread
        loggingThread.Start();
    }

    void ProcessQueue() {
        while (true) {
            waiting.Set();
            int i = WaitHandle.WaitAny(new WaitHandle[] { hasNewItems, terminate });
            // terminate was signaled
            if (i == 1) return;
            hasNewItems.Reset();
            waiting.Reset();

            Queue<Action> queueCopy;
            lock (queue) {
                queueCopy = new Queue<Action>(queue);
                queue.Clear();
            }

            foreach (var log in queueCopy) {
                log();
            }
        }
    }

    public void LogMessage(LogRow row) {
        lock (queue) {
            queue.Enqueue(() => AsyncLogMessage(row));
        }
        hasNewItems.Set();
    }

    protected abstract void AsyncLogMessage(LogRow row);

    public void Flush() {
        waiting.WaitOne();
    }

    public void Dispose() {
        terminate.Set();
        loggingThread.Join();
    }
}
Advantages over the original:
It's disposable, so you can get rid of the async logger.
The flush semantics are improved.
It will respond slightly better to a burst followed by silence.
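Usage is just a matter of subclassing and implementing the sink; for example (a hypothetical ConsoleLogger, not part of the original code):

class ConsoleLogger : ThreadedLogger {
    protected override void AsyncLogMessage(LogRow row) {
        // The actual sink work runs on the background logging thread.
        Console.WriteLine(row);
    }
}

// From any producer thread:
var logger = new ConsoleLogger();
logger.LogMessage(row);  // returns immediately
logger.Flush();          // blocks until the background thread is idle again
logger.Dispose();        // signals terminate and joins the logging thread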
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look at my "deadlocks / monitor methods" page you'll find the code in the second half.
There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a ConcurrentQueue<T> in a BlockingCollection<T>.
The version on that page is non-generic (it was written a long time ago) but you'd probably want to make it generic - it would be trivial to do.
You would call Produce from each "normal" thread, and Consume from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
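For instance, the .NET 4 shape might look roughly like this (a sketch; Logger.Write stands in for the Enterprise Library call from the question):

// One consumer thread drains a queue fed by many producers.
var queue = new BlockingCollection<LogEntry>(new ConcurrentQueue<LogEntry>());

var consumer = new Thread(() =>
{
    // Ends only if CompleteAdding is ever called; as a background
    // thread it otherwise just dies with the process.
    foreach (var entry in queue.GetConsumingEnumerable())
        Logger.Write(entry);
});
consumer.IsBackground = true;
consumer.Start();

// From any producer thread:
queue.Add(logEntry);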
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update.
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources
/// (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were queued
/// and that logging only utilizes one thread (a BackgroundWorker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
    //create singleton instance of logger queue
    public static AsyncLoggerQueue Current = new AsyncLoggerQueue();

    private static readonly object logEntryQueueLock = new object();
    private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
    private BackgroundWorker _Logger = new BackgroundWorker();

    private AsyncLoggerQueue()
    {
        //configure background worker
        _Logger.WorkerSupportsCancellation = false;
        _Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
    }

    public void Enqueue(LogEntry le)
    {
        //lock during write
        lock (logEntryQueueLock)
        {
            _LogEntryQueue.Enqueue(le);

            //while locked, check to see if the BW is running; if not, start it
            if (!_Logger.IsBusy)
                _Logger.RunWorkerAsync();
        }
    }

    private void _Logger_DoWork(object sender, DoWorkEventArgs e)
    {
        while (true)
        {
            LogEntry le = null;
            bool skipEmptyCheck = false;
            lock (logEntryQueueLock)
            {
                if (_LogEntryQueue.Count <= 0) //if the queue is empty then the BW is done
                    return;
                else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
                    skipEmptyCheck = true;

                //dequeue the LogEntry that will be written to the log
                le = _LogEntryQueue.Dequeue();
            }

            //pass the LogEntry to Enterprise Library
            Logger.Write(le);

            if (!skipEmptyCheck) //the queue held only one entry before we dequeued, so double-check whether anything arrived while we were writing
            {
                lock (logEntryQueueLock)
                {
                    if (_LogEntryQueue.Count <= 0) //if the queue is still empty then the BW is done
                        return;
                }
            }
        }
    }
}
I suggest starting by measuring the actual performance impact of logging on the overall system (i.e. by running a profiler) and optionally switching to something faster like log4net (I personally migrated to it from EntLib logging a long time ago).
If this does not work, you can try using this simple method from .NET Framework:
ThreadPool.QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
MSDN Details
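For example (a one-line sketch reusing the question's Logger.Write and a LogEntry named le):

// Hands the write off to a thread-pool thread; ordering is not guaranteed.
ThreadPool.QueueUserWorkItem(state => Logger.Write((LogEntry)state), le);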
If this does not work either, then you can resort to something like Jon Skeet has offered and actually code the async logging framework yourself.
In response to Sam Saffron's post, I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush:
public static void FlushLogs()
{
    bool queueHasValues = true;
    while (queueHasValues)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        lock (m_loggerQueueSync)
        {
            queueHasValues = m_loggerQueue.Count > 0;
        }
    }

    //force MEL to flush all its listeners
    foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values)
    {
        foreach (TraceListener listener in logSource.Listeners)
        {
            listener.Flush();
        }
    }
}
I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data.
Thanks for sharing your solution, it set me into a good direction!
--Johnny S
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that is set to true while the queue is being copied and the LogEntry writes are executing; then, in the flush routine, I check that boolean to make sure nothing is still in the queue and nothing is being processed before returning.
Now multiple threads can hit this thing in parallel, and when I call flush I know it is really flushed.
public static void FlushLogs()
{
    int queueCount;
    bool isProcessingLogs;
    while (true)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        //check to see if we are currently processing logs
        lock (m_isProcessingLogsSync)
        {
            isProcessingLogs = m_isProcessingLogs;
        }

        //check to see if more events were added while the logger was processing the last batch
        lock (m_loggerQueueSync)
        {
            queueCount = m_loggerQueue.Count;
        }

        if (queueCount == 0 && !isProcessingLogs)
            break;

        //something is still queued or being processed; wait a bit before checking again
        Thread.Sleep(400);
    }
}
Just an update:
Using Enterprise Library 5.0 with .NET 4.0, it can easily be done by:
static public void LogMessageAsync(LogEntry logEntry)
{
    Task.Factory.StartNew(() => LogMessage(logEntry));
}
See:
http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/
An extra level of indirection may help here.
Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks happen in the thread pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
If you log something on a separate thread, the message may not be written if the application crashes, which makes it rather useless.
That is why you should always flush after every written entry.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops.
But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
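A bare-bones version of that synchronization, just to illustrate the locking (in .NET 4+ a BlockingCollection handles this for you):

// Minimal shared queue: every push and pop takes the same lock.
private readonly Queue<LogEntry> _entries = new Queue<LogEntry>();
private readonly object _sync = new object();

public void Push(LogEntry entry)
{
    lock (_sync)
    {
        _entries.Enqueue(entry);
    }
}

public bool TryPop(out LogEntry entry)
{
    lock (_sync)
    {
        if (_entries.Count > 0)
        {
            entry = _entries.Dequeue();
            return true;
        }
        entry = null;
        return false;
    }
}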