C# Task class and memory leak

I have an application which handles data from a text file: it reads a line from the file, handles it, and then puts the result in another file. After handling one row it handles the next one, until the whole file is done. Some rows in the file are very time-consuming to handle, so I decided to put the handling logic in a separate thread and, if handling takes longer than 10 seconds, kill the thread. So my code is like this:
public class Handler
{
    public void Handle(string row)
    {
        // Perform handling
    }
}

public class Program
{
    // the 10-second limit referenced below
    private static readonly TimeSpan timeout = TimeSpan.FromSeconds(10);

    private static bool HandleRow(string row)
    {
        Task task = new Task(() => new Handler().Handle(row));
        task.Start(); // updated
        var waitResult = task.Wait(timeout); // timeout is 10 sec.
        if (waitResult == false || task.IsFaulted)
            return false;
        return true;
    }

    public static void Main()
    {
        foreach (var row in GetRowsToHandle())
            HandleRow(row);
    }
}
but somehow when running the program I get an OutOfMemoryException. It seems that memory is not released properly.
Does anyone know why this memory leak might happen?
UPDATED
I forgot to include task.Start() in my code snippet. I have now put it there.

Task is disposable: call task.Dispose();

Your 10s timeout only times out the wait. It doesn't stop Handle() from executing (if indeed it ever starts - I can't see a Start there). It just means you locally see a timeout on the task.
Also, it depends in part on what GetRowsToHandle() does - does it return a non-buffered (streaming) sequence, or a fully loaded list, etc.?
While Task does support cancellation, this requires co-operation from the implementation. To be honest, since you aren't doing anything async you might be better off just handling your own "have I taken too long" basic timeout inside Handle(). A thread abort (the other option) is not to be recommended.

Related

Is there a way to globally WaitAll() for all tasks created by a process?

I have a process which does logging by calling an external service. Because of the overhead involved (small, but builds up for many logging messages), my process logs asynchronously, in a "fire and forget" kind of way. I don't want to wait for each log message to go through before continuing, and I don't want to fail my process for a problem with the logger.
In order to accomplish this, I have wrapped the main log call in a Task - each call to logging fires off a Task, which just goes off and does its thing. Most of the time, my process loops through the things it needs to check, handles them, and then exits just fine, logging all the way. However, on those occasions when it finds only a single item to handle, the work completes so quickly that the process exits, killing all of its threads, before the logging actually happens, and I get almost nothing in the logs.
I have confirmed that this is what is happening by checking that the items are handled as expected even when they are not logged (they are), and by putting a short (100 millisecond) delay into the logging method (outside of the Task), so that the logging DOES actually block. In this case, everything logs as expected.
(Based on this, I actually believe that even when the process works as expected, we may be missing a couple of log entries from the end of each run, since it is exiting before the last entries can go through, but I haven't been able to tell for certain.)
I could just put a delay at the very end of the process, so that no matter what, it hangs around for at least a second or two to give these "fire and forget" Tasks time to complete, but that feels clunky.
Another option I'm considering is creating a global list of logging Tasks that will collect the Tasks as they are created, so that I can do a Task.WaitAll() on them. This feels like a bit of overhead I shouldn't have to deal with, but it may be the best solution.
What I'm looking for is some way to, at the end of my process, do a WaitAll() type of call that doesn't require me to know what Tasks I'm waiting for - just wait for any and all Tasks still hanging out there (except for the main Thread of the process, of course).
Does such a thing exist, or do I need to just keep track of all of my Tasks globally?
You could create a task aggregator, a task that completes when all observed tasks are completed. It would be a functionally equivalent version of Task.WhenAll, but much more lightweight since only the number of incomplete tasks would be stored, not the tasks themselves. Here is an implementation of this idea:
public class TaskAggregator
{
    private int _activeCount = 0;
    private int _isAddingCompleted = 0;
    private TaskCompletionSource<bool> _tcs = new TaskCompletionSource<bool>();

    public Task Task { get => _tcs.Task; }

    public int ActiveCount
    {
        get => Interlocked.CompareExchange(ref _activeCount, 0, 0);
    }

    public bool IsAddingCompleted
    {
        get => Interlocked.CompareExchange(ref _isAddingCompleted, 0, 0) != 0;
    }

    public void Add(Task task)
    {
        Interlocked.Increment(ref _activeCount);
        task.ContinueWith(_ =>
        {
            int localActiveCount = Interlocked.Decrement(ref _activeCount);
            if (localActiveCount == 0 && this.IsAddingCompleted)
                _tcs.TrySetResult(true);
        }, TaskContinuationOptions.ExecuteSynchronously);
    }

    public void CompleteAdding()
    {
        Interlocked.Exchange(ref _isAddingCompleted, 1);
        if (this.ActiveCount == 0) _tcs.TrySetResult(true);
    }
}
Usage example:
public static TaskAggregator LogTasksAggregator = new TaskAggregator();

public static void Log(string str)
{
    var logTask = Console.Out.WriteLineAsync(str);
    LogTasksAggregator.Add(logTask);
}

// End of program
LogTasksAggregator.CompleteAdding();
bool completedInTime = LogTasksAggregator.Task.Wait(5000);
if (!completedInTime)
{
    Console.WriteLine("LogTasksAggregator timed out");
}
I was going to suggest using the internal Task[] GetScheduledTasksForDebugger() method of System.Threading.Tasks.TaskScheduler, as documented under the TaskScheduler.GetScheduledTasks method, but it doesn't seem to return tasks that are currently running:
Task.Factory.StartNew(() =>
{
    Console.WriteLine("Sleeping");
    Thread.Sleep(1000);
    Console.WriteLine("Done");
});

// Retrieve method info for internal Task[] GetScheduledTasksForDebugger()
var typeInfo = typeof(System.Threading.Tasks.TaskScheduler);
var bindingAttr = BindingFlags.NonPublic | BindingFlags.Instance;
var methodInfo = typeInfo.GetMethod("GetScheduledTasksForDebugger", bindingAttr);
Task[] tasks = (Task[])methodInfo.Invoke(System.Threading.Tasks.TaskScheduler.Current, null);
Task.WaitAll(tasks);
I think you're going to have to manage a List<Task> and WaitAll on that.

Cancel lock on an object

This might be simple, but I couldn't figure it out yet.
Simply put:
I have a long-running operation (about 8 minutes) in my repo layer.
public static ReleaseSelection LoadedReleaseSelection = new ReleaseSelection();
private static object s_cacheLock = new object();

public long Load(ReleaseSelection releaseSelection)
{
    //check if the release passed in to load is different from the one previously loaded
    if (releaseSelection != LoadedReleaseSelection)
    {
        //do something to break the lock(s_cacheLock)
    }
    lock (s_cacheLock)
    {
        //Reads from TAB files and puts them into cache objects... runs for about 8 mins
        LoadedReleaseSelection = releaseSelection;
    }
}
A service layer calls Load asynchronously:
public Task<long> LoadAsync()
{
    ReleaseSelection releaseSelection = //get value from db through another repo call
    if (releaseSelection == null)
    {
        return null;
    }
    return Task.Factory.StartNew(() => m_releaseRepository.Load(releaseSelection));
}
Finally, the service is called by an API endpoint:
public async Task<IHttpActionResult> ReleaseLoadPost()
{
    await m_releaseService.LoadAsync();
    return Ok();
}
How can I go about canceling the lock(s_cacheLock) inside the Load operation (first code block) when the following condition is true
//check if the release passed in to load is different from the one previously loaded
if (releaseSelection != LoadedReleaseSelection)
{
    //do something to break the lock(s_cacheLock)
}
so that another thread won't have to wait until the previous load has completed?
Note: I need the lock(s_cacheLock) because I have other methods that read from the caches and really should not do that until all caches are loaded.
There is no need to use a lock to protect the 8-minute load process; you only need to lock the statement that swaps the new cache in after the load completes. You should also make the load process cancellable by using a CancellationToken and checking the token's cancellation status periodically during the load.
Use Monitor.Enter & Monitor.Exit instead of lock, and make sure to release the lock in a finally block so an exception cannot leave it held.
Example:
Monitor.Enter(s_cacheLock);
try
{
    // do work
}
finally { Monitor.Exit(s_cacheLock); }

How do I know when it's safe to call Dispose?

I have a search application that takes some time (10 to 15 seconds) to return results for some requests. It's not uncommon to have multiple concurrent requests for the same information. As it stands, I have to process those independently, which makes for quite a bit of unnecessary processing.
I've come up with a design that should allow me to avoid the unnecessary processing, but there's one lingering problem.
Each request has a key that identifies the data being requested. I maintain a dictionary of requests, keyed by the request key. The request object has some state information and a WaitHandle that is used to wait on the results.
When a client calls my Search method, the code checks the dictionary to see if a request already exists for that key. If so, the client just waits on the WaitHandle. If no request exists, I create one, add it to the dictionary, and issue an asynchronous call to get the information. Again, the code waits on the event.
When the asynchronous process has obtained the results, it updates the request object, removes the request from the dictionary, and then signals the event.
This all works great. Except I don't know when to dispose of the request object. That is, since I don't know when the last client is done with it, I can't call Dispose on it. I have to wait for the garbage collector to come along and clean up.
Here's the code:
class SearchRequest : IDisposable
{
    public readonly string RequestKey;
    public string Results { get; set; }
    public ManualResetEvent WaitEvent { get; private set; }

    public SearchRequest(string key)
    {
        RequestKey = key;
        WaitEvent = new ManualResetEvent(false);
    }

    public void Dispose()
    {
        WaitEvent.Dispose();
        GC.SuppressFinalize(this);
    }
}
ConcurrentDictionary<string, SearchRequest> Requests =
    new ConcurrentDictionary<string, SearchRequest>();

string Search(string key)
{
    SearchRequest req;
    bool addedNew = false;
    req = Requests.GetOrAdd(key, (s) =>
    {
        // Create a new request.
        var r = new SearchRequest(s);
        Console.WriteLine("Added new request with key {0}", key);
        addedNew = true;
        return r;
    });
    if (addedNew)
    {
        // A new request was created.
        // Start a search.
        ThreadPool.QueueUserWorkItem((obj) =>
        {
            // Get the results
            req.Results = DoSearch(req.RequestKey); // DoSearch takes several seconds
            // Remove the request from the pending list
            SearchRequest trash;
            Requests.TryRemove(req.RequestKey, out trash);
            // And signal that the request is finished
            req.WaitEvent.Set();
        });
    }
    Console.WriteLine("Waiting for results from request with key {0}", key);
    req.WaitEvent.WaitOne();
    return req.Results;
}
Basically, I don't know when the last client will be released. No matter how I slice it here, I have a race condition. Consider:
Thread A creates a new request, starts the worker (Thread B), and waits on the wait handle.
Thread B begins processing the request.
Thread C detects that there's a pending request, and then gets swapped out.
Thread B completes the request, removes the item from the dictionary, and sets the event.
Thread A's wait is satisfied, and it returns the result.
Thread C wakes up, calls WaitOne, is released, and returns the result.
If I use some kind of reference counting so that the "last" client calls Dispose, then the object would be disposed by Thread A in the above scenario. Thread C would then die when it tried to wait on the disposed WaitHandle.
The only way I can see to fix this is to use a reference counting scheme and protect access to the dictionary with a lock (in which case using ConcurrentDictionary is pointless) so that a lookup is always accompanied by an increment of the reference count. While that would work, it seems like an ugly hack.
Another solution would be to ditch the WaitHandle and use an event-like mechanism with callbacks. But that, too, would require me to protect the lookups with a lock, and I have the added complication of dealing with an event or a naked multicast delegate. That seems like a hack, too.
This probably isn't a problem currently, because this application doesn't yet get enough traffic for those abandoned handles to add up before the next GC pass comes and cleans them up. And maybe it won't ever be a problem? It worries me, though, that I'm leaving them to be cleaned up by the GC when I should be calling Dispose to get rid of them.
Ideas? Is this a potential problem? If so, do you have a clean solution?
Consider using Lazy<T> for SearchRequest.Results, maybe? But that would probably entail a bit of redesign. I haven't thought this out completely.
But what would probably be almost a drop-in replacement for your use case is to implement your own Wait() and Set() methods in SearchRequest. Something like:
private readonly object _resultLock = new object();
private bool _hasResult; // guarded by _resultLock

void Wait()
{
    lock (_resultLock)
    {
        while (!_hasResult)
            Monitor.Wait(_resultLock);
    }
}

void Set(string results)
{
    lock (_resultLock)
    {
        Results = results;
        _hasResult = true;
        Monitor.PulseAll(_resultLock);
    }
}
No need to dispose. :)
I think that your best bet to make this work is to use the TPL for all of your multi-threading needs. That's what it is good at.
As per my comment on your question, you need to keep in mind that ConcurrentDictionary does have side-effects. If multiple threads try to call GetOrAdd at the same time then the factory can be invoked for all of them, but only one will win. The values produced for the other threads will just be discarded; by then, however, the compute has been done.
Since you also said that doing searches is expensive, the cost of taking a lock and then using a standard dictionary would be minimal.
So this is what I suggest:
private Dictionary<string, Task<string>> _requests =
    new Dictionary<string, Task<string>>();

public string Search(string key)
{
    Task<string> task;
    lock (_requests)
    {
        if (_requests.ContainsKey(key))
        {
            task = _requests[key];
        }
        else
        {
            task = Task<string>
                .Factory
                .StartNew(() => DoSearch(key));
            _requests[key] = task;
            task.ContinueWith(t =>
            {
                lock (_requests)
                {
                    _requests.Remove(key);
                }
            });
        }
    }
    return task.Result;
}
This option nicely runs the search, remembers the task throughout the duration of the search and then removes it from the dictionary when it completes. All requests for the same key while a search is executing get the same task and so will get the same result once the task is complete.
I've tested the code and it works.

Queue a thread in .net

I have 2 functions that need to be executed one after the other. In these functions, async calls are made. How do I go about executing the second function after the async call in the first has completed?
For example:
public void main()
{
    executeFn("1");
    executeFn("2"); //I want this to be executed after 1 has finished.
}

private bool executeFn(string someval)
{
    runSomeAsyncCode(); //This is some async uploading function that is yet to be defined.
}
You can use Thread.Join. But then I do not see the point of async execution of those 2 functions, as they become sequential.
Let runSomeAsyncCode() return an IAsyncResult and implement the BeginX EndX methods similar to the CLR Asynchronous Programming Model. Use the EndX method to wait for the code to finish executing.
The async method you're calling must have something to notify the caller when it has completed, am I correct? (Otherwise it would be just execute-and-forget, which is unlikely.) If so, you simply have to wait for that notification to come and then execute the second method.
try this:
public void main()
{
    executeFn("1");
    executeFn("2");
}

List<string> QueuedCalls = new List<string>(); // contains the queued items
bool isRunning = false; // indicates if there is an async operation running

private void executeFn(string someval)
{
    // if there is an operation running, queue the call
    if (isRunning) { QueuedCalls.Add(someval); return; }
    // otherwise flag that an operation is now running
    else { isRunning = true; }

    runSomeAsyncCode(); //undefined async operation here<-

    // this block must run when the async operation completes (see note below)
    isRunning = false;
    if (QueuedCalls.Count != 0) //check if there is anything in the queue
    {
        //there is something in the queue, so remove it from the queue and execute it.
        string val = QueuedCalls[0];
        QueuedCalls.RemoveAt(0);
        executeFn(val);
    }
}
This way will not block any threads, and will simply execute the queued call when the first one finishes, which is what I believe you want. Happy coding! Note that I'd recommend running the last section (from where isRunning is set back to false) inside your async operation's completion callback, or triggering it with an event or something. The only catch is that that piece of code has to execute when your async operation completes; however you want to arrange that is up to you.
You can consider using generic delegates: execute the first method asynchronously, then in its callback execute the other method asynchronously, if you are really worried about executing them sequentially with respect to each other.
One simple way is to use a custom thread pool such as SmartThreadPool:
http://www.codeplex.com/smartthreadpool
You can instantiate a separate thread pool, set the pool size to 1, and queue the workers.

How to effectively log asynchronously?

I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing, and that I can mitigate it by doing the logging on a separate thread.
The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write.
new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null);
What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit would be that logging doesn't interfere with the executing operation, and not every logging operation results in a job being thrown on the thread pool.
How can I create a shared queue that supports many writers and one reader in a thread-safe way? Some examples of a queue implementation designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated.
Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
I wrote this code a while back, feel free to use it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace MediaBrowser.Library.Logging {
    public abstract class ThreadedLogger : LoggerBase {

        Queue<Action> queue = new Queue<Action>();
        AutoResetEvent hasNewItems = new AutoResetEvent(false);
        volatile bool waiting = false;

        public ThreadedLogger() : base() {
            Thread loggingThread = new Thread(new ThreadStart(ProcessQueue));
            loggingThread.IsBackground = true;
            loggingThread.Start();
        }

        void ProcessQueue() {
            while (true) {
                waiting = true;
                hasNewItems.WaitOne(10000, true);
                waiting = false;

                Queue<Action> queueCopy;
                lock (queue) {
                    queueCopy = new Queue<Action>(queue);
                    queue.Clear();
                }

                foreach (var log in queueCopy) {
                    log();
                }
            }
        }

        public override void LogMessage(LogRow row) {
            lock (queue) {
                queue.Enqueue(() => AsyncLogMessage(row));
            }
            hasNewItems.Set();
        }

        protected abstract void AsyncLogMessage(LogRow row);

        public override void Flush() {
            while (!waiting) {
                Thread.Sleep(1);
            }
        }
    }
}
Some advantages:
It keeps the background logger alive, so it does not need to spin up and spin down threads.
It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue.
It copies the queue to ensure the queue is not blocked while the log operation is performed
It uses an AutoResetEvent to ensure the bg thread is in a wait state
It is, IMHO, very easy to follow
Here is a slightly improved version; keep in mind I performed very little testing on it, but it does address a few minor issues.
public abstract class ThreadedLogger : IDisposable {

    Queue<Action> queue = new Queue<Action>();
    ManualResetEvent hasNewItems = new ManualResetEvent(false);
    ManualResetEvent terminate = new ManualResetEvent(false);
    ManualResetEvent waiting = new ManualResetEvent(false);
    Thread loggingThread;

    public ThreadedLogger() {
        loggingThread = new Thread(new ThreadStart(ProcessQueue));
        loggingThread.IsBackground = true;
        // this is performed from a bg thread, to ensure the queue is serviced from a single thread
        loggingThread.Start();
    }

    void ProcessQueue() {
        while (true) {
            waiting.Set();
            int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate });
            // terminate was signaled
            if (i == 1) return;
            hasNewItems.Reset();
            waiting.Reset();

            Queue<Action> queueCopy;
            lock (queue) {
                queueCopy = new Queue<Action>(queue);
                queue.Clear();
            }

            foreach (var log in queueCopy) {
                log();
            }
        }
    }

    public void LogMessage(LogRow row) {
        lock (queue) {
            queue.Enqueue(() => AsyncLogMessage(row));
        }
        hasNewItems.Set();
    }

    protected abstract void AsyncLogMessage(LogRow row);

    public void Flush() {
        waiting.WaitOne();
    }

    public void Dispose() {
        terminate.Set();
        loggingThread.Join();
    }
}
Advantages over the original:
It's disposable, so you can get rid of the async logger
The flush semantics are improved
It will respond slightly better to a burst followed by silence
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look at my "deadlocks / monitor methods" page you'll find the code in the second half.
There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a ConcurrentQueue<T> in a BlockingCollection<T>.
The version on that page is non-generic (it was written a long time ago) but you'd probably want to make it generic - it would be trivial to do.
You would call Produce from each "normal" thread, and Consume from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update.
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were executed and that logging only utilizes one thread (a BackgroundWorker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
    //create singleton instance of logger queue
    public static AsyncLoggerQueue Current = new AsyncLoggerQueue();

    private static readonly object logEntryQueueLock = new object();
    private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
    private BackgroundWorker _Logger = new BackgroundWorker();

    private AsyncLoggerQueue()
    {
        //configure background worker
        _Logger.WorkerSupportsCancellation = false;
        _Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
    }

    public void Enqueue(LogEntry le)
    {
        //lock during write
        lock (logEntryQueueLock)
        {
            _LogEntryQueue.Enqueue(le);
            //while locked check to see if the BW is running, if not start it
            if (!_Logger.IsBusy)
                _Logger.RunWorkerAsync();
        }
    }

    private void _Logger_DoWork(object sender, DoWorkEventArgs e)
    {
        while (true)
        {
            LogEntry le = null;
            bool skipEmptyCheck = false;
            lock (logEntryQueueLock)
            {
                if (_LogEntryQueue.Count <= 0) //if the queue is empty then the BW is done
                    return;
                else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
                    skipEmptyCheck = true;

                //dequeue the LogEntry that will be written to the log
                le = _LogEntryQueue.Dequeue();
            }

            //pass LogEntry to Enterprise Library
            Logger.Write(le);

            if (skipEmptyCheck) //if LogEntryQueue.Count was > 1 before we wrote the last LogEntry we know to continue without double checking
            {
                lock (logEntryQueueLock)
                {
                    if (_LogEntryQueue.Count <= 0) //if the queue is still empty then the BW is done
                        return;
                }
            }
        }
    }
}
I suggest starting by measuring the actual performance impact of logging on the overall system (i.e. by running a profiler) and optionally switching to something faster like log4net (I personally migrated to it from EntLib logging a long time ago).
If this does not work, you can try using this simple method from the .NET Framework:
ThreadPool.QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
MSDN Details
If this does not work either, then you can resort to something like Jon Skeet has offered and actually code the async logging framework yourself.
In response to Sam Saffron's post: I wanted to call Flush and make sure everything was really finished writing. In my case, I am writing to a database on the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything had finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share is the flush:
public static void FlushLogs()
{
    bool queueHasValues = true;
    while (queueHasValues)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        lock (m_loggerQueueSync)
        {
            queueHasValues = m_loggerQueue.Count > 0;
        }
    }

    //force MEL to flush all its listeners
    foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values)
    {
        foreach (TraceListener listener in logSource.Listeners)
        {
            listener.Flush();
        }
    }
}
I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data.
Thanks for sharing your solution; it set me in a good direction!
--Johnny S
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that is set to true while the queue is being copied and the LogEntry writes are executing, and then, in the flush routine, check that boolean to make sure nothing is still in the queue and nothing is being processed before returning.
Now multiple threads can hit this thing in parallel, and when I call Flush I know it is really flushed.
public static void FlushLogs()
{
    int queueCount;
    bool isProcessingLogs;
    while (true)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        //check to see if we are currently processing logs
        lock (m_isProcessingLogsSync)
        {
            isProcessingLogs = m_isProcessingLogs;
        }

        //check to see if more events were added while the logger was processing the last batch
        lock (m_loggerQueueSync)
        {
            queueCount = m_loggerQueue.Count;
        }

        if (queueCount == 0 && !isProcessingLogs)
            break;

        //something is still queued or being processed, so wait a little before checking again
        Thread.Sleep(400);
    }
}
Just an update:
Using Enterprise Library 5.0 with .NET 4.0 it can easily be done by:
static public void LogMessageAsync(LogEntry logEntry)
{
    Task.Factory.StartNew(() => LogMessage(logEntry));
}
See:
http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/
An extra level of indirection may help here.
Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks happen in the thread pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
If you log something on a separate thread, the message may not be written if the application crashes, which makes it rather useless.
That is why you should always flush after every written entry.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops.
But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
