Multithreaded unit test - notify calling thread of event to stop sleeping - C#

I wrote a test (NUnit) to test my Amazon SES email sending functionality, which should generate a bounce notification in my Amazon SQS queue. I poll this queue in a loop for 1 minute on a new thread to wait for and verify the bounce notification.
I'd like to increase the time to a few minutes to ensure I don't miss it. However, the response may come within seconds, in which case I'd want to just record how long it took and finish the test, as there is no point in continuing to wait once receipt is verified.
How can I accomplish this threading scenario in a clean way, by which I mean without polluting MethodInMainApp() with test code? In the main app this should not happen (it should continue polling indefinitely); it should only stop early in the test. I could probably pass in the ThreadStart function from both entry points, but that doesn't answer the question I'm asking.
[Test]
public async void SendAndLogBounceEmailNotification()
{
    Thread bouncesThread = Startup.MethodInMainApp();
    bouncesThread.Start();

    bool success = await _emailService.SendAsync(...);
    Assert.AreEqual(success, true);

    //Sleep this thread for 1 minute while the
    //bouncesThread polls for bounce notifications
    Thread.Sleep(60000);
}
public static Thread MethodInMainApp()
{
    ...
    Thread bouncesThread = new Thread(() =>
    {
        while (true)
        {
            ReceiveMessageResponse receiveMessageResponse = sqsBouncesClient.ReceiveMessage(bouncesQueueRequest);
            if (receiveMessageResponse.Messages.Count > 0)
            {
                ProcessQueuedBounce(receiveMessageResponse);
                //done for test
            }
        }
    });
    bouncesThread.SetApartmentState(ApartmentState.STA);
    return bouncesThread;
}

Use the Monitor class.
Instead of Thread.Sleep(60000), use this:
lock (_lockObject)
{
    Monitor.Wait(_lockObject, 60000);
}
Then where you want to signal the thread to continue:
lock (_lockObject)
{
    Monitor.Pulse(_lockObject);
}
Of course this requires adding static readonly object _lockObject = new object(); somewhere in the class.
I'm not saying that the overall strategy is actually the right approach. It seems like it would be better here for the unit test method to explicitly call some other method that does the work of waiting for and validating the response. But the above should address your specific question.
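To make that concrete, here is a minimal sketch of how it might fit together without putting test code in MethodInMainApp(); the OnBounceProcessed hook is an assumption (e.g. an event or callback the test subscribes to), not part of the original code:
static readonly object _lockObject = new object();

[Test]
public async void SendAndLogBounceEmailNotification()
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    Thread bouncesThread = Startup.MethodInMainApp();
    bouncesThread.Start();

    bool success = await _emailService.SendAsync(...);
    Assert.AreEqual(success, true);

    lock (_lockObject)
    {
        // Returns as soon as Pulse is called; otherwise times out after 3 minutes
        Monitor.Wait(_lockObject, TimeSpan.FromMinutes(3));
    }
    Console.WriteLine("Bounce verified after " + stopwatch.Elapsed);
}

// Subscribed by the test to whatever notification hook the main app exposes
// (hypothetical), so the polling loop itself stays free of test code:
static void OnBounceProcessed()
{
    lock (_lockObject)
    {
        Monitor.Pulse(_lockObject);
    }
}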


Always Running Threads on Windows Service

I'm writing a Windows Service that will kick off multiple worker threads that will listen to Amazon SQS queues and process messages. There will be about 20 threads listening to 10 queues.
The threads will have to be always running, and that's why I'm leaning towards using actual threads for the worker loops rather than thread-pool threads.
Here is a top-level implementation: the Windows service will kick off multiple worker threads, and each will listen to its queue and process messages.
protected override void OnStart(string[] args)
{
    for (int i = 0; i < _workers; i++)
    {
        new Thread(RunWorker).Start();
    }
}
Here is the implementation of the worker:
public async void RunWorker()
{
    while(true)
    {
        // .. get message from amazon sqs sync.. about 20ms
        var message = sqsClient.ReceiveMessage();
        try
        {
            await PerformWebRequestAsync(message);
            await InsertIntoDbAsync(message);
        }
        catch(SomeException)
        {
            // ... log
            //continue to retry
            continue;
        }
        sqsClient.DeleteMessage();
    }
}
I know I can perform the same operation with Task.Run and execute it on a thread-pool thread rather than starting an individual thread, but I don't see a reason for that since each thread will always be running.
Do you see any problems with this implementation? How reliable would it be to leave threads always running in this fashion and what can I do to make sure that each thread is always running?
One problem with your existing solution is that you call your RunWorker in a fire-and-forget manner, albeit on a new thread (i.e., new Thread(RunWorker).Start()).
RunWorker is an async method, so it will return to the caller as soon as the execution point hits the first await (i.e. await PerformWebRequestAsync(message)). If PerformWebRequestAsync returns a pending task, RunWorker returns and the new thread you just started terminates.
I don't think you need a new thread here at all, just use AmazonSQSClient.ReceiveMessageAsync and await its result. Another thing is that you shouldn't be using async void methods unless you really don't care about tracking the state of the asynchronous task. Use async Task instead.
Your code might look like this:
List<Task> _workers = new List<Task>();
CancellationTokenSource _cts = new CancellationTokenSource();

protected override void OnStart(string[] args)
{
    for (int i = 0; i < _MAX_WORKERS; i++)
    {
        _workers.Add(RunWorkerAsync(_cts.Token));
    }
}

public async Task RunWorkerAsync(CancellationToken token)
{
    while(true)
    {
        token.ThrowIfCancellationRequested();
        // .. get message from amazon sqs sync.. about 20ms
        var message = await sqsClient.ReceiveMessageAsync().ConfigureAwait(false);
        try
        {
            await PerformWebRequestAsync(message);
            await InsertIntoDbAsync(message);
        }
        catch(SomeException)
        {
            // ... log
            //continue to retry
            continue;
        }
        sqsClient.DeleteMessage();
    }
}
Now, to stop all pending workers, you could simply do this (from the main "request dispatcher" thread):
_cts.Cancel();
try
{
    Task.WaitAll(_workers.ToArray());
}
catch (AggregateException ex)
{
    ex.Handle(inner => inner is OperationCanceledException);
}
Note that ConfigureAwait(false) is optional for a Windows Service, because there's no synchronization context on the initial thread by default. However, I'd keep it that way to make the code independent of the execution environment (for cases where there is a synchronization context).
Finally, if for some reason you cannot use ReceiveMessageAsync, or you need to call another blocking API, or simply do a piece of CPU intensive work at the beginning of RunWorkerAsync, just wrap it with Task.Run (as opposed to wrapping the whole RunWorkerAsync):
var message = await Task.Run(
    () => sqsClient.ReceiveMessage()).ConfigureAwait(false);
Well, for one I'd use a CancellationTokenSource instantiated in the service and passed down to the workers. Your while statement would become:
while(!cancellationTokenSource.IsCancellationRequested)
{
    //rest of the code
}
This way you can cancel all your workers from the OnStop service method.
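For instance, a minimal OnStop might look like this (a sketch; the field names are assumptions):
private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
private readonly List<Thread> workerThreads = new List<Thread>();

protected override void OnStop()
{
    // Signal every worker loop to exit at its next iteration check
    cancellationTokenSource.Cancel();

    // Give the workers a chance to finish their current message
    foreach (var thread in workerThreads)
    {
        thread.Join(TimeSpan.FromSeconds(30));
    }
}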
Additionally, you should watch out for:
If you're playing with thread states from outside of the thread, then a ThreadStateException, ThreadInterruptedException or one of the others might be thrown. So you want to handle a proper thread restart (see the sketch after this list).
Do the workers need to run without pause between iterations? I would throw in a sleep there (even a few ms) just so they don't keep the CPU busy for nothing.
You need to handle ThreadStartException and restart the worker, if it occurs.
Other than that, there's no reason why those 10 threads can't run for as long as the service runs (days, weeks, months at a time).
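As a sketch of the restart handling mentioned above (RunWorker is the method from the question; the back-off is an arbitrary choice):
private void RunWorkerWithRestart()
{
    while (!cancellationTokenSource.IsCancellationRequested)
    {
        try
        {
            RunWorker(); // the worker loop from the question
        }
        catch (Exception)
        {
            // ... log the failure, then fall through and restart the loop
            Thread.Sleep(1000); // brief back-off before restarting
        }
    }
}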

C# : Monitor - Wait,Pulse,PulseAll

I am having hard time in understanding Wait(), Pulse(), PulseAll(). Will all of them avoid deadlock? I would appreciate if you explain how to use them?
Short version:
lock(obj) {...}
is short-hand for Monitor.Enter / Monitor.Exit (with exception handling etc). If nobody else has the lock, you can get it (and run your code) - otherwise your thread is blocked until the lock is acquired (by another thread releasing it).
Deadlock typically happens when either A: two threads lock things in different orders:
thread 1: lock(objA) { lock (objB) { ... } }
thread 2: lock(objB) { lock (objA) { ... } }
(here, if they each acquire the first lock, neither can ever get the second, since neither thread can exit to release their lock)
This scenario can be minimised by always locking in the same order; and you can recover (to a degree) by using Monitor.TryEnter (instead of Monitor.Enter/lock) and specifying a timeout.
or B: you can block yourself with things like winforms when thread-switching while holding a lock:
lock(obj) { // on worker
    this.Invoke((MethodInvoker) delegate { // switch to UI
        lock(obj) { // oopsiee!
            ...
        }
    });
}
The deadlock appears obvious above, but it isn't so obvious when you have spaghetti code; possible answers: don't thread-switch while holding locks, or use BeginInvoke so that you can at least exit the lock (letting the UI play).
Wait/Pulse/PulseAll are different; they are for signalling. I use this in this answer to signal so that:
Dequeue: if you try to dequeue data when the queue is empty, it waits for another thread to add data, which wakes up the blocked thread
Enqueue: if you try and enqueue data when the queue is full, it waits for another thread to remove data, which wakes up the blocked thread
Pulse only wakes up one thread - but I'm not brainy enough to prove that the next thread is always the one I want, so I tend to use PulseAll, and simply re-verify the conditions before continuing; as an example:
while (queue.Count >= maxSize)
{
    Monitor.Wait(queue);
}
With this approach, I can safely add other meanings of Pulse, without my existing code assuming that "I woke up, therefore there is data" - which is handy when (in the same example) I later needed to add a Close() method.
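To make the pattern concrete, a minimal bounded queue along those lines might look like this (a sketch, not the exact code from the linked answer):
class BoundedQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly int maxSize;

    public BoundedQueue(int maxSize) { this.maxSize = maxSize; }

    public void Enqueue(T item)
    {
        lock (queue)
        {
            // Re-check the condition in a loop: a Pulse is only a hint
            while (queue.Count >= maxSize)
                Monitor.Wait(queue);
            queue.Enqueue(item);
            Monitor.PulseAll(queue); // wake any blocked Dequeue callers
        }
    }

    public T Dequeue()
    {
        lock (queue)
        {
            while (queue.Count == 0)
                Monitor.Wait(queue);
            T item = queue.Dequeue();
            Monitor.PulseAll(queue); // wake any blocked Enqueue callers
            return item;
        }
    }
}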
Simple recipe for use of Monitor.Wait and Monitor.Pulse. It consists of a worker, a boss, and a phone they use to communicate:
object phone = new object();
A "Worker" thread:
lock(phone) // Sort of "Turn the phone on while at work"
{
    while(true)
    {
        Monitor.Wait(phone); // Wait for a signal from the boss
        DoWork();
        Monitor.PulseAll(phone); // Signal boss we are done
    }
}
}
A "Boss" thread:
PrepareWork();
lock(phone) // Grab the phone when I have something ready for the worker
{
    Monitor.PulseAll(phone); // Signal worker there is work to do
    Monitor.Wait(phone); // Wait for the work to be done
}
More complex examples follow...
A "Worker with something else to do":
lock(phone)
{
    while(true)
    {
        if(Monitor.Wait(phone,1000)) // Wait for one second at most
        {
            DoWork();
            Monitor.PulseAll(phone); // Signal boss we are done
        }
        else
            DoSomethingElse();
    }
}
An "Impatient Boss":
PrepareWork();
lock(phone)
{
    Monitor.PulseAll(phone); // Signal worker there is work to do
    if(Monitor.Wait(phone,1000)) // Wait for one second at most
        Console.WriteLine("Good work!");
}
No, they don't protect you from deadlocks. They are just more flexible tools for thread synchronization. Here is a very good explanation of how to use them, and a very important usage pattern - without this pattern you will break all the things:
http://www.albahari.com/threading/part4.aspx
Something that totally threw me here is that Pulse just gives a "heads up" to a thread in a Wait. The waiting thread will not continue until the thread that did the Pulse gives up the lock and the waiting thread successfully wins it.
lock(phone) // Grab the phone
{
    Monitor.PulseAll(phone); // Signal worker
    Monitor.Wait(phone); // ****** The lock on phone has been given up! ******
}
or
lock(phone) // Grab the phone when I have something ready for the worker
{
    Monitor.PulseAll(phone); // Signal worker there is work to do
    DoMoreWork();
} // ****** The lock on phone has been given up! ******
In both cases it's not until "the lock on phone has been given up" that another thread can get it.
There might be other threads waiting for that lock from Monitor.Wait(phone) or lock(phone). Only the one that wins the lock will get to continue.
They are tools for synchronizing and signaling between threads. As such they do nothing to prevent deadlocks, but if used correctly they can be used to synchronize and communicate between threads.
Unfortunately most of the work needed to write correct multithreaded code is currently the developer's responsibility in C# (and many other languages). Take a look at how F#, Haskell and Clojure handle this for an entirely different approach.
Unfortunately, none of Wait(), Pulse() or PulseAll() have the magical property which you are wishing for - which is that by using this API you will automatically avoid deadlock.
Consider the following code
object incomingMessages = new object(); //signal object

LoopOnMessages()
{
    lock(incomingMessages)
    {
        Monitor.Wait(incomingMessages);
    }
    if (canGrabMessage()) handleMessage();
    // loop
}

ReceiveMessagesAndSignalWaiters()
{
    awaitMessages();
    copyMessagesToReadyArea();
    lock(incomingMessages) {
        Monitor.PulseAll(incomingMessages); //or Monitor.Pulse
    }
    awaitReadyAreaHasFreeSpace();
}
This code will deadlock! Maybe not today, maybe not tomorrow. Most likely when your code is placed under stress because suddenly it has become popular or important, and you are being called to fix an urgent issue.
Why?
Eventually the following will happen:
All consumer threads are doing some work
Messages arrive, the ready area can't hold any more messages, and PulseAll() is called.
No consumer gets woken up, because none are waiting
All consumer threads call Wait() [DEADLOCK]
This particular example assumes that the producer thread is never going to call PulseAll() again because it has no more space to put messages in. But there are many, many broken variations of this code possible. People will try to make it more robust by changing a line, such as turning Monitor.Wait(incomingMessages); into:
if (!canGrabMessage()) Monitor.Wait(incomingMessages);
Unfortunately, that still isn't enough to fix it. To fix it you also need to change the locking scope where Monitor.PulseAll() is called:
LoopOnMessages()
{
    lock(incomingMessages)
    {
        if (!canGrabMessage()) Monitor.Wait(incomingMessages);
    }
    if (canGrabMessage()) handleMessage();
    // loop
}

ReceiveMessagesAndSignalWaiters()
{
    awaitMessagesArrive();
    lock(incomingMessages)
    {
        copyMessagesToReadyArea();
        Monitor.PulseAll(incomingMessages); //or Monitor.Pulse
    }
    awaitReadyAreaHasFreeSpace();
}
The key point is that in the fixed code, the locks restrict the possible sequences of events:
A consumer thread does its work and loops
That thread acquires the lock
And thanks to locking it is now true that either:
a. Messages haven't yet arrived in the ready area, and it releases the lock by calling Wait() BEFORE the message receiver thread can acquire the lock and copy more messages into the ready area, or
b. Messages have already arrived in the ready area and it receives the messages INSTEAD OF calling Wait(). (And while it is making this decision it is impossible for the message receiver thread to e.g. acquire the lock and copy more messages into the ready area.)
As a result the problem of the original code now never occurs:
3. When PulseAll() is called, no consumer gets woken up, because none are waiting
Now observe that in this code you have to get the locking scope exactly right. (If, indeed I got it right!)
And also, since you must hold the lock (via lock or Monitor.Enter() etc.) in order to use Monitor.PulseAll() or Monitor.Wait() in a deadlock-free fashion, you still have to worry about the possibility of other deadlocks which happen because of that locking.
Bottom line: these APIs are also easy to screw up and deadlock with, i.e. they are quite dangerous
This is a simple example of Monitor use:
using System;
using System.Threading;

namespace ConsoleApp4
{
    class Program
    {
        public static int[] X = new int[30];
        static readonly object _object = new object();
        public static int count = 0;

        public static void PutNumbers(int numbersS, int numbersE)
        {
            for (int i = numbersS; i < numbersE; i++)
            {
                Monitor.Enter(_object);
                try
                {
                    if (count < 30)
                    {
                        X[count] = i;
                        count++;
                        Console.WriteLine("Put in " + count + ": " + i);
                        Monitor.Pulse(_object);
                    }
                    else
                    {
                        Monitor.Wait(_object);
                    }
                }
                finally
                {
                    Monitor.Exit(_object);
                }
            }
        }

        public static void RemoveNumbers(int numbersS)
        {
            for (int i = 0; i < numbersS; i++)
            {
                Monitor.Enter(_object);
                try
                {
                    if (count > 0)
                    {
                        X[count - 1] = 0; // clear the last occupied slot
                        int x = count;
                        count--;
                        Console.WriteLine("Removed element " + x);
                        Monitor.Pulse(_object);
                    }
                    else
                    {
                        Monitor.Wait(_object);
                    }
                }
                finally
                {
                    Monitor.Exit(_object);
                }
            }
        }

        static void Main(string[] args)
        {
            Thread W1 = new Thread(() => PutNumbers(10, 50));
            Thread W2 = new Thread(() => PutNumbers(1, 10));
            Thread R1 = new Thread(() => RemoveNumbers(30));
            Thread R2 = new Thread(() => RemoveNumbers(20));
            W1.Start();
            R1.Start();
            W2.Start();
            R2.Start();
            W1.Join();
            R1.Join();
            W2.Join();
            R2.Join();
        }
    }
}

How to effectively log asynchronously?

I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging I am doing, which I can mitigate by doing the logging on a separate thread.
The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write.
new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null);
What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool.
How can I create a shared queue that supports many writers and one reader in a thread-safe way? Some examples of a queue implementation designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated.
Recommendations regarding alternative approaches would also be appreciated; I am not interested in changing logging frameworks though.
I wrote this code a while back, feel free to use it.
using System;
using System.Collections.Generic;
using System.Threading;

namespace MediaBrowser.Library.Logging {
    public abstract class ThreadedLogger : LoggerBase {

        Queue<Action> queue = new Queue<Action>();
        AutoResetEvent hasNewItems = new AutoResetEvent(false);
        volatile bool waiting = false;

        public ThreadedLogger() : base() {
            Thread loggingThread = new Thread(new ThreadStart(ProcessQueue));
            loggingThread.IsBackground = true;
            loggingThread.Start();
        }

        void ProcessQueue() {
            while (true) {
                waiting = true;
                hasNewItems.WaitOne(10000, true);
                waiting = false;

                Queue<Action> queueCopy;
                lock (queue) {
                    queueCopy = new Queue<Action>(queue);
                    queue.Clear();
                }

                foreach (var log in queueCopy) {
                    log();
                }
            }
        }

        public override void LogMessage(LogRow row) {
            lock (queue) {
                queue.Enqueue(() => AsyncLogMessage(row));
            }
            hasNewItems.Set();
        }

        protected abstract void AsyncLogMessage(LogRow row);

        public override void Flush() {
            while (!waiting) {
                Thread.Sleep(1);
            }
        }
    }
}
Some advantages:
It keeps the background logger alive, so it does not need to spin up and spin down threads.
It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue.
It copies the queue to ensure the queue is not blocked while the log operation is performed
It uses an AutoResetEvent so the background thread sits in a wait state until new items arrive
It is, IMHO, very easy to follow
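For illustration, a concrete subclass might be as simple as this (hypothetical; it assumes the LoggerBase/LogRow types referenced above):
public class ConsoleLogger : ThreadedLogger {
    // Runs on the single background thread, so slow I/O here never
    // blocks the threads that call LogMessage
    protected override void AsyncLogMessage(LogRow row) {
        Console.WriteLine(row.ToString());
    }
}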
Here is a slightly improved version; keep in mind I performed very little testing on it, but it does address a few minor issues.
public abstract class ThreadedLogger : IDisposable {

    Queue<Action> queue = new Queue<Action>();
    ManualResetEvent hasNewItems = new ManualResetEvent(false);
    ManualResetEvent terminate = new ManualResetEvent(false);
    ManualResetEvent waiting = new ManualResetEvent(false);

    Thread loggingThread;

    public ThreadedLogger() {
        loggingThread = new Thread(new ThreadStart(ProcessQueue));
        loggingThread.IsBackground = true;
        // this is performed from a bg thread, to ensure the queue is serviced from a single thread
        loggingThread.Start();
    }

    void ProcessQueue() {
        while (true) {
            waiting.Set();
            int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate });
            // terminate was signaled
            if (i == 1) return;
            hasNewItems.Reset();
            waiting.Reset();

            Queue<Action> queueCopy;
            lock (queue) {
                queueCopy = new Queue<Action>(queue);
                queue.Clear();
            }

            foreach (var log in queueCopy) {
                log();
            }
        }
    }

    public void LogMessage(LogRow row) {
        lock (queue) {
            queue.Enqueue(() => AsyncLogMessage(row));
        }
        hasNewItems.Set();
    }

    protected abstract void AsyncLogMessage(LogRow row);

    public void Flush() {
        waiting.WaitOne();
    }

    public void Dispose() {
        terminate.Set();
        loggingThread.Join();
    }
}
Advantages over the original:
It's disposable, so you can get rid of the async logger
The flush semantics are improved
It will respond slightly better to a burst followed by silence
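A usage sketch of the improved version (DatabaseLogger is a hypothetical subclass, and row some LogRow instance):
using (var logger = new DatabaseLogger()) {
    logger.LogMessage(row); // returns immediately; the write happens on the background thread
    logger.Flush();         // blocks until the background thread is idle again
} // Dispose signals terminate and joins the logging thread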
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look at my "deadlocks / monitor methods" page you'll find the code in the second half.
There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a ConcurrentQueue<T> in a BlockingCollection<T>.
The version on that page is non-generic (it was written a long time ago) but you'd probably want to make it generic - it would be trivial to do.
You would call Produce from each "normal" thread, and Consume from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
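For reference, the .NET 4.0 approach mentioned above might look roughly like this (a sketch; LogEntry and Logger.Write are the Enterprise Library types from the question):
using System.Collections.Concurrent;
using System.Threading;

public class BlockingCollectionLogger
{
    // Many writers, one reader; BlockingCollection wraps a ConcurrentQueue by default
    private readonly BlockingCollection<LogEntry> queue = new BlockingCollection<LogEntry>();

    public BlockingCollectionLogger()
    {
        var consumer = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and
            // completes only after CompleteAdding() is called
            foreach (var entry in queue.GetConsumingEnumerable())
                Logger.Write(entry);
        });
        consumer.IsBackground = true; // don't keep the app alive just for logging
        consumer.Start();
    }

    public void Enqueue(LogEntry entry)
    {
        queue.Add(entry); // safe to call from any number of threads
    }
}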
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update.
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources
/// (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were executed
/// and that logging is only utilizing one thread (backgroundworker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
    //create singleton instance of logger queue
    public static AsyncLoggerQueue Current = new AsyncLoggerQueue();

    private static readonly object logEntryQueueLock = new object();
    private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
    private BackgroundWorker _Logger = new BackgroundWorker();

    private AsyncLoggerQueue()
    {
        //configure background worker
        _Logger.WorkerSupportsCancellation = false;
        _Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
    }

    public void Enqueue(LogEntry le)
    {
        //lock during write
        lock (logEntryQueueLock)
        {
            _LogEntryQueue.Enqueue(le);

            //while locked check to see if the BW is running, if not start it
            if (!_Logger.IsBusy)
                _Logger.RunWorkerAsync();
        }
    }

    private void _Logger_DoWork(object sender, DoWorkEventArgs e)
    {
        while (true)
        {
            LogEntry le = null;
            bool skipEmptyCheck = false;

            lock (logEntryQueueLock)
            {
                if (_LogEntryQueue.Count <= 0) //if the queue is empty then the BW is done
                    return;
                else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
                    skipEmptyCheck = true;

                //dequeue the LogEntry that will be written to the log
                le = _LogEntryQueue.Dequeue();
            }

            //pass LogEntry to Enterprise Library
            Logger.Write(le);

            if (skipEmptyCheck) //if LogEntryQueue.Count was > 1 before we wrote the last LogEntry, we know to continue without double checking
            {
                lock (logEntryQueueLock)
                {
                    if (_LogEntryQueue.Count <= 0) //if the queue is still empty then the BW is done
                        return;
                }
            }
        }
    }
}
I suggest starting by measuring the actual performance impact of logging on the overall system (i.e. by running a profiler), and optionally switching to something faster like log4net (I personally migrated to it from EntLib logging a long time ago).
If this does not work, you can try using this simple method from the .NET Framework:
ThreadPool.QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
MSDN Details
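A minimal sketch of that approach, reusing the LogEntry and Logger.Write from the question:
public static void LogAsync(LogEntry le)
{
    // Hand the write off to a thread-pool thread; note that ordering
    // between entries is not guaranteed with this approach
    ThreadPool.QueueUserWorkItem(state => Logger.Write((LogEntry)state), le);
}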
If this does not work either, then you can resort to something like Jon Skeet has offered and actually code the async logging framework yourself.
In response to Sam Saffron's post: I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything was finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush:
public static void FlushLogs()
{
    bool queueHasValues = true;
    while (queueHasValues)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        lock (m_loggerQueueSync)
        {
            queueHasValues = m_loggerQueue.Count > 0;
        }
    }

    //force MEL to flush all its listeners
    foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values)
    {
        foreach (TraceListener listener in logSource.Listeners)
        {
            listener.Flush();
        }
    }
}
I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data.
Thanks for sharing your solution, it set me into a good direction!
--Johnny S
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that is set to true while the queue is being copied and the LogEntry writes are executing; in the flush routine I then check that boolean to make sure nothing is still in the queue and nothing is being processed before returning.
Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed.
public static void FlushLogs()
{
    int queueCount;
    bool isProcessingLogs;
    while (true)
    {
        //wait for the current iteration to complete
        m_waitingThreadEvent.WaitOne();

        //check to see if we are currently processing logs
        lock (m_isProcessingLogsSync)
        {
            isProcessingLogs = m_isProcessingLogs;
        }

        //check to see if more events were added while the logger was processing the last batch
        lock (m_loggerQueueSync)
        {
            queueCount = m_loggerQueue.Count;
        }

        if (queueCount == 0 && !isProcessingLogs)
            break;

        //something is still queued or being processed, so back off briefly before checking again
        Thread.Sleep(400);
    }
}
Just an update:
Using Enterprise Library 5.0 with .NET 4.0 it can easily be done by:
static public void LogMessageAsync(LogEntry logEntry)
{
    Task.Factory.StartNew(() => LogMessage(logEntry));
}
See:
http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/
An extra level of indirection may help here.
Your first async method call can put messages onto a synchronized Queue and set an event - so the locks are happening in the thread pool, not on your worker threads - and then have yet another thread pulling messages off the queue when the event is raised.
If you log something on a separate thread, the message may not be written if the application crashes, which makes it rather useless.
That is why you should always flush after every written entry.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops.
But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
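To illustrate, the synchronization for such a shared queue can be quite small (a sketch using the question's LogEntry type; a single reader calls Pop in a loop):
public class SharedLogQueue
{
    private readonly Queue<LogEntry> queue = new Queue<LogEntry>();
    private readonly object sync = new object();

    public void Push(LogEntry entry)
    {
        lock (sync)
        {
            queue.Enqueue(entry);
            Monitor.Pulse(sync); // wake the reader if it is waiting
        }
    }

    public LogEntry Pop()
    {
        lock (sync)
        {
            while (queue.Count == 0)
                Monitor.Wait(sync); // blocks only the reader; writers just enqueue
            return queue.Dequeue();
        }
    }
}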

Multithreading in WCF with accurate timer requirement

I have a WCF app that accepts requests to start a job. Each job needs to do something after exactly X minutes (e.g. 5 mins); job requests can also arrive at any time, and simultaneously.
This is what I have in mind,
// WCF class
public class RequestManager
{
    // WCF method
    public void StartNewJob()
    {
        // start a new thread with timer for each job?
    }
}

public class Job
{
    public Job()
    {
        // do some initializations
        // do something after x mins
        // sleep or timer?
    }

    private void DoSomething()
    {
        // do some follow-ups
    }
}
With my approach, I'm afraid there will be too many threads doing nothing for X minutes. Per-second accuracy would be a requirement as well (say a job starts at 0:05:01, the follow-up should run at 0:10:01).
What would be the best way to approach this?
I would suggest looking at the RegisterWaitForSingleObject function:
var waitObject = new AutoResetEvent(false);

// Execute the callback on a new thread 10 seconds after this call
// and execute it only once
ThreadPool.RegisterWaitForSingleObject(
    waitObject,
    (state, timeout) => { Console.WriteLine("ok"); },
    null,
    TimeSpan.FromSeconds(10),
    true);

// Execute the callback on a new thread 10 seconds after this call
// and continue executing it at 10 second intervals until the
// waitHandle is signaled.
ThreadPool.RegisterWaitForSingleObject(
    waitObject,
    (state, timeout) => { Console.WriteLine("ok"); },
    null,
    TimeSpan.FromSeconds(10),
    false);
Sounds like you need the services of the Timer class:
// WCF class
public class RequestManager
{
    // WCF method
    public void StartNewJob()
    {
        Job myJob = new Job();
        // Initialise myJob...
        myJob.Start(5); // e.g. follow up after 5 minutes
    }
}

public class Job
{
    private Timer myTimer = new Timer();

    public Job()
    {
        myTimer.Elapsed += new ElapsedEventHandler(this.OnTimedEvent);
    }

    public void Start(int minutes)
    {
        myTimer.Interval = 60000 * minutes;
        myTimer.Enabled = true;
    }

    private void OnTimedEvent(object source, ElapsedEventArgs e)
    {
        // Do something
    }
}
The above code assumes that:
You don't do anything silly like attempt to call Start() twice on the same instance of the timer.
There is some other non-background thread active elsewhere in the application preventing the application from closing.
It's not a full example, but hopefully it should give you the idea - the Timer class will deal with keeping time without needing a thread active for each job.
You need to use some timing/scheduling framework like Quartz.NET, or create your own (lightweight) one.
Using a timer seems good (and easier to implement) to me.
There are several timer classes you can use in .NET. Please see the following document (even though it's bit aged, but it seems to be a good start): Comparing the Timer Classes in the .NET Framework Class Library
However, you can still achieve this behavior with Thread.Sleep() as well, by calculating the offset from timestamps taken on each thread wake-up and on completion of Job.DoSomething().
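A sketch of that offset calculation (illustrative only; DoSomething stands in for the job's follow-up work):
// Compute each sleep from a fixed schedule instead of sleeping a constant
// interval, so the work's own duration does not accumulate as drift
DateTime next = DateTime.UtcNow.AddMinutes(5);
while (true)
{
    TimeSpan delay = next - DateTime.UtcNow;
    if (delay > TimeSpan.Zero)
        Thread.Sleep(delay);     // sleep only the remaining time
    DoSomething();               // however long this takes...
    next = next.AddMinutes(5);   // ...the next deadline stays on schedule
}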
You may want to consider the following carefully:
Are there any contentions between threads executing Job.DoSomething()?
You should be very careful in the following scenario: what if Job.DoSomething() sometimes takes more than the period (i.e. it starts at 0:05 and completes at 0:13 in the example above)? What does this mean for your application, and how will it be handled?
a. Total failure - abort the current (0:05) execution at 0:10 and launch the 0:10 execution.
b. Not a big deal (skip the 0:10 run and execute Job.DoSomething() again at 0:15).
c. Not a big deal, but the 0:10 execution needs to launch immediately after the 0:05 task finishes (what if it keeps taking more than 5 minutes?).
d. Need to launch the 0:10 execution even though the 0:05 execution is currently running.
e. Anything else?
For the policy you select above, does your choice of implementation (any of the timer classes listed above, or Thread.Sleep()) easily support that policy?
