Dedicated thread for logging (.NET / C#)

I am considering creating an asynchronous logging component with a dedicated thread that reads new items from a queue and writes them to a database, file, etc. If I create the thread as a background thread, it will be terminated as soon as the process ends and any items still in the queue will be lost. If I create it as a foreground thread, I have to figure out when to stop it, because it will prevent the application from closing. Is there any way to avoid making developers remember to 'stop' the logging functionality before the application exits?

I believe you can:
Subscribe to the AppDomain.ProcessExit event;
Use a volatile sentinel variable as a shutdown flag;
Set the flag when the ProcessExit event fires;
Monitor the state of the flag inside your thread, and gracefully shut down accordingly.
This way you can keep a foreground thread aware of impending doom. A minimal sketch of those steps is shown below.
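A rough sketch of those steps, with illustrative names (ConcurrentQueue assumes .NET 4 or later):
using System;
using System.Collections.Concurrent;
using System.Threading;

class ExitAwareLogger
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    private readonly Thread _worker;
    private volatile bool _shutdown;            // sentinel flag

    public ExitAwareLogger()
    {
        // Set the flag when the process is shutting down.
        AppDomain.CurrentDomain.ProcessExit += (s, e) => _shutdown = true;
        _worker = new Thread(Drain);            // foreground thread by default
        _worker.Start();
    }

    public void Log(string message)
    {
        _queue.Enqueue(message);
    }

    private void Drain()
    {
        // Keep draining until shutdown has been requested and the queue is empty.
        while (!_shutdown || !_queue.IsEmpty)
        {
            string item;
            while (_queue.TryDequeue(out item))
                Console.WriteLine(item);        // write to database/file here
            Thread.Sleep(50);
        }
    }
}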

First of all I have to agree with the comments above. I would just use something like NLog rather than trying to roll my own. While it may seem like there is a lot to learn at first, it is still better than writing and debugging your own.
If you really want to travel this road, my recommendation would be to use a 'using' statement and IDisposable to control the asynchronous behavior. Just start a normal thread in the ctor and signal & Join the thread on Dispose().
Example usage:
void Main()
{
    using (new Logging())
    {
        // ... application code that logs ...
    }
}
Example class (untested):
class Logging : IDisposable
{
    ManualResetEvent _stop = new ManualResetEvent(false);
    Thread _worker = null;

    public Logging()
    {
        _worker = new Thread(AsyncThread);
        _worker.Start();
    }

    public void Dispose()
    {
        // Signal the worker to finish and wait for it to drain.
        _stop.Set();
        _worker.Join();
    }

    public void AsyncThread()
    {
        // ... dequeue and write log entries until _stop is signalled ...
    }
}
In your logging routine, you will want to test whether the thread is running and then decide between queuing the log write and appending directly to the log output. That way, messages logged before the async thread starts and after it stops are still handled correctly, as in the sketch below.
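A rough sketch of that check, assuming the Logging class also keeps a _queue of pending writes; AppendDirect is a hypothetical helper that writes straight to the log output:
public void WriteLine(string text)
{
    if (_worker != null && _worker.IsAlive)
        _queue.Enqueue(text);      // the async thread will pick this up
    else
        AppendDirect(text);        // hypothetical: write synchronously to the log output
}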

Related

How to keep a thread alive in C#

I have a windows service that is designed to handle incoming data, process it, and alert users if necessary. One thing that I am having trouble figuring out is how to keep a thread alive.
I have a few classes that share a ConcurrentBag of Device objects. The DeviceManager class is tasked with populating this collection and updating the device objects if a parameter about a device changes in the database. So for example, in the database someone updates device 23 to have a normal high of 50F. The DeviceManager would update the appropriate device in memory to have this new value.
Oracle provides an event handler to be notified when a table changes (docs here). I want to attach an event handler so I can be notified when to update my devices in memory. The problem is, how can I create a thread for my DeviceManager to work in, and have it idle until the event occurs and is handled there? I would like the event to fire and be handled in this thread instead of the main one.
You can create a separate worker thread when your service starts up. The worker thread will connect to the database and listen for change notifications, and update your ConcurrentBag accordingly. When the service is shut down, you can gracefully terminate the thread.
MSDN has an example that I think will help you: How to: Create and Terminate Threads
There are a large number of synchronization techniques available in .NET, and covering them all here would be too broad. However, you should look at the Monitor class, with its Wait() and Pulse() methods.
For example:
private readonly object _lockObj = new object();

public void StartThread()
{
    new Thread(ThreadProc).Start();
}

public void SignalThread()
{
    lock (_lockObj)
    {
        // Initialize some data that the thread will use here...
        // Then signal the thread
        Monitor.Pulse(_lockObj);
    }
}

private void ThreadProc()
{
    lock (_lockObj)
    {
        // Wait for the signal
        Monitor.Wait(_lockObj);
        // Here, use data initialized by the other thread
    }
}
Of course, you can put the thread's locking/waiting code in a loop if you need the thread to repeat the operation, as in the sketch below.
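A sketch of that looping variant, assuming a _stopRequested field that is set under the same lock by a StopThread() method:
private bool _stopRequested;   // set under the lock by StopThread()

public void StopThread()
{
    lock (_lockObj)
    {
        _stopRequested = true;
        Monitor.Pulse(_lockObj);
    }
}

private void ThreadProc()
{
    lock (_lockObj)
    {
        while (!_stopRequested)
        {
            // Wait releases the lock and reacquires it when pulsed.
            Monitor.Wait(_lockObj);
            if (_stopRequested)
                break;
            // Use the data initialized by the signalling thread here.
        }
    }
}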
It looks like there's no shortage of other questions involving the Monitor class on SO:
https://stackoverflow.com/search?q=%5Bc%23%5D+monitor+pulse+wait
And of course, the documentation on MSDN has other examples as well.

How to terminate a thread when the worker can't check the termination string

I have the following code running in a Windows form. The method it calls takes about 40 seconds to complete, and I need to give the user the ability to click an 'Abort' button to stop the running thread.
Normally I would have the Worker() method poll to see whether _terminationMessage was set to "Stop", but I can't do this here because the long-running method, ThisMethodMightReturnSomethingAndICantChangeIt(), is out of my control.
How do I implement this feature, please?
Here is my thread code.
private const string TerminationValue = "Stop";
private volatile string _terminationMessage;
private volatile bool _successful;   // assumed field, set by the worker

private bool RunThread()
{
    try
    {
        var worker = new Thread(Worker);
        _terminationMessage = "carry on";
        _successful = false;
        worker.Start();
        worker.Join();
    }
    finally
    {
        // cleanup could go here
    }
    return _successful;
}

private void Worker()
{
    ThisMethodMightReturnSomethingAndICantChangeIt();
    _successful = true;
}
Well, the simple answer would be "you can't". There's no real thread abort that you can use to cancel any processing that's happening.
Thread.Abort will allow you to abort a managed thread that is running managed code at the moment, but it's really just a bad idea. It's very easy to end up in an inconsistent state just because you happened to be running a singleton constructor or something at that instant. In the end, there's quite a big chance you're going to blow something up.
A bit orthogonal to the question, but why are you still using threading code like this? Writing multi-threaded code is really hard, so you want to use as many high-level features as you can. The complexity can easily be seen already in your small snippet of code - you're Joining the newly created thread, which means that you're basically gaining no benefit whatsoever from starting the Worker method on a new thread - you start it, and then you just wait. It's just like calling Worker directly, except calling it directly would save an unnecessary thread.
Also, the try in RunThread will not catch exceptions that pop up on the separate thread. So any exception thrown inside Worker will simply kill your whole process. Not good.
The only way to implement reliable cancellation is through cooperative cancellation. .NET has had great constructs for this since 4.0, namely CancellationToken. It's easy to use, it's thread-safe (unlike your solution), and it can be propagated through the whole method chain so that you can implement cancellation at depth; a rough sketch follows. Sadly, if you simply can't modify the ThisMethodMightReturnSomethingAndICantChangeIt method, you're out of luck.
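A minimal sketch of cooperative cancellation with CancellationToken, assuming the work can be broken into chunks (DoOneChunkOfWork is a hypothetical placeholder):
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellableRunner
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public Task Start()
    {
        // The worker checks the token between chunks and stops cooperatively.
        return Task.Factory.StartNew(() =>
        {
            while (!_cts.Token.IsCancellationRequested)
            {
                DoOneChunkOfWork();   // hypothetical: a piece of work that finishes quickly
            }
        }, _cts.Token);
    }

    public void Cancel()              // call this from the 'Abort' button handler
    {
        _cts.Cancel();
    }

    private void DoOneChunkOfWork() { /* ... */ }
}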
The only "supported" "cancellation" pattern that just works is Process.Kill. You'd have to launch the processing method in a wholy separate process, not just a separate thread. That can be killed, and it will not hurt your own process. Of course, it means you have to separate that call into a new process - that's usually quite tricky, and it's not a very good design (though it seems like you have little choice).
So if the method doesn't support some form of cancellation, just treat it like so. It can't be aborted, period. Any way that does abort it is a dirty hack.
Well, here's my solution so far. I will definitely read up on the newer, higher-level .NET features as you suggest. Thanks for the pointers in the right direction.
private void RunThread()
{
    try
    {
        var worker = new Thread(Worker);
        SetFormEnabledStatus(false);
        _successful = false;
        worker.Start();
        // Give up if there is no response before the timeout.
        if (!worker.Join(60000)) // TODO - Add timeout to config
        {
            worker.Abort();
        }
    }
    finally
    {
        SetFormEnabledStatus(true);
    }
}

private void Worker()
{
    try
    {
        _successful = false;
        ThisMethodMightReturnSomethingAndICantChangeIt();
        _successful = true;
    }
    catch (ThreadAbortException ex)
    {
        // nlog.....
    }
    catch (Exception ex)
    {
        // nlog...
    }
}

Raise event from multiple worker threads?

I am using C# to create a Windows service application. I have a main object that creates worker threads to periodically conduct various tasks. Each worker completes a specific task, waits for a time, then repeats.
If one of those tasks should fail, I want that thread to alert the main to log that a task failed and then to exit.
I had thought about using a ManualResetEvent where Set would be called from each worker (and main would loop on checking it). Problem is, multiple workers could fail simultaneously and attempt to Set() the event at the same time.
Is there a thread-safe way to handle alerting from multiple worker threads? Only one alert is required, I don't need to handle any more than the first one received.
Why not use double-checked locking in your setter / event handler?
private static readonly object Locker = new object();
private volatile bool _closing = false;   // volatile so the unlocked read sees the latest value

private void YourErrorHandler(object sender, EventArgs args)
{
    if (!_closing)
    {
        lock (Locker)
        {
            if (!_closing)
            {
                _closing = true;
                // Whatever you need to do here
            }
        }
    }
}
If you need cross-process synchronization you will need to use a Mutex or something else, but I hope you get the idea.
Once one of the threads fails, you want to shut down the whole thing, correct? In that case, when you join all of the threads, you can check their status and report an error for each one that failed. Something like:
while (eventNotSet)
    sleep();

foreach (thread)
{
    thread.Join();
    checkStatus(thread);
}
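For the alerting itself, a single ManualResetEvent is safe to Set from any number of threads; the sketch below (with illustrative names) also uses Interlocked so only the first reported failure is recorded:
using System.Threading;

class WorkerSupervisor
{
    private readonly ManualResetEvent _failureEvent = new ManualResetEvent(false);
    private string _firstFailure;                 // name of the first worker that failed

    // Called by any worker thread; Set is safe to call concurrently,
    // and Interlocked keeps only the first reported failure.
    public void ReportFailure(string workerName)
    {
        Interlocked.CompareExchange(ref _firstFailure, workerName, null);
        _failureEvent.Set();
    }

    // Called by the main thread.
    public void WaitForFailure()
    {
        _failureEvent.WaitOne();
        // _firstFailure now identifies the first worker that reported a failure.
    }
}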

C# threading pattern that will let me flush

I have a class that implements the Begin/End invocation pattern, where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop, but it does take a bit of time to process, so the work itself is not easily stopped.
I now have the side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start new processing, yet they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm targeting the 2.0 framework. I'm currently thinking that a producer/consumer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class every time the caller uses the Begin with callback I create a whole new thread on the thread pool. This call does a very small amount of processing and is not where I want to cancel. It's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool will create 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100% not only with the work itself but with the creation of the work, because of the way I queued the threads.
I was thinking that by using the producer/consumer pattern I could put these work items into my own queue, which would allow me to moderate how many threads I create and avoid the CPU spike of creating all the concurrent threads. It might also let me allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself, but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
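For illustration, a minimal .NET 2.0-style producer/consumer queue with a FlushQueue() method might look roughly like this (a sketch only; FlushQueue abandons anything not yet started, while items already running still finish):
using System.Collections.Generic;
using System.Threading;

class ProducerConsumerWorkQueue
{
    public delegate void WorkItem();

    private readonly Queue<WorkItem> _queue = new Queue<WorkItem>();
    private readonly object _sync = new object();
    private readonly Thread[] _workers;
    private bool _shutdown;

    public ProducerConsumerWorkQueue(int workerCount)
    {
        // A fixed number of consumers moderates how many threads run concurrently.
        _workers = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            _workers[i] = new Thread(Consume);
            _workers[i].IsBackground = true;
            _workers[i].Start();
        }
    }

    public void Enqueue(WorkItem item)
    {
        lock (_sync)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_sync);
        }
    }

    // Abandon everything that has not started yet; running items still finish.
    public void FlushQueue()
    {
        lock (_sync)
        {
            _queue.Clear();
        }
    }

    public void Shutdown()
    {
        lock (_sync)
        {
            _shutdown = true;
            Monitor.PulseAll(_sync);
        }
    }

    private void Consume()
    {
        while (true)
        {
            WorkItem item;
            lock (_sync)
            {
                while (_queue.Count == 0 && !_shutdown)
                    Monitor.Wait(_sync);
                if (_shutdown && _queue.Count == 0)
                    return;
                item = _queue.Dequeue();
            }
            item();   // run the work outside the lock
        }
    }
}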
EDIT My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone who reads this question and is using 4.0.
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel framework, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed wrote an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread is running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting simply throws an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting. A sketch of the idea is shown below.
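A minimal sketch of that pattern, with illustrative names:
using System;

class AbortableWorker
{
    private volatile bool _abortRequested;

    public void RequestAbort()
    {
        _abortRequested = true;
    }

    private void ThrowIfAborting()
    {
        if (_abortRequested)
            throw new OperationCanceledException("Abort requested.");
    }

    public void ThreadMain()
    {
        try
        {
            while (true)
            {
                ThrowIfAborting();        // call once per iteration
                // ... do one unit of work ...
            }
        }
        catch (OperationCanceledException)
        {
            // Clean up resources here, then let the thread end.
        }
    }
}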
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean up resources as needed, and report that the operation was canceled as part of the End arguments.
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel, and another worker is used to execute your request immediately.
public class BackgroundWorkerReplaceable : IDisposable
{
    BackgroundWorker activeWorker = null;
    object activeWorkerSyncRoot = new object();
    List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
    DoWorkEventHandler doWork;
    RunWorkerCompletedEventHandler runWorkerCompleted;

    public bool IsBusy
    {
        get { return activeWorker != null && activeWorker.IsBusy; }
    }

    public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
    {
        this.doWork = doWork;
        this.runWorkerCompleted = runWorkerCompleted;
        ResetActiveWorker();
    }

    public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
    {
        try
        {
            lock (activeWorkerSyncRoot)
            {
                if (activeWorker.IsBusy)
                {
                    ResetActiveWorker();
                }
                // This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
                if (!activeWorker.IsBusy)
                {
                    // Optionally handle ProgressChangedEventHandler and other features (under the lock!)
                    // Work on this new param
                    activeWorker.RunWorkerAsync(param);
                }
                else
                { // This should never happen since we create new workers when there's none available!
                    throw new LogicException(...); // assert or similar
                }
            }
        }
        catch (...) // InvalidOperationException and Exception
        { // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
        }
    }

    public void Cancel()
    {
        ResetActiveWorker();
    }

    public void Dispose()
    { // You should implement a proper Dispose/Finalizer pattern
        if (activeWorker != null)
        {
            activeWorker.CancelAsync();
        }
        foreach (BackgroundWorker worker in workerPool)
        {
            worker.CancelAsync();
            worker.Dispose();
            // perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
        }
    }

    void ResetActiveWorker()
    {
        lock (activeWorkerSyncRoot)
        {
            if (activeWorker == null)
            {
                activeWorker = GetAvailableWorker();
            }
            else if (activeWorker.IsBusy)
            { // Current worker is busy - issue a cancel and set another active worker
                activeWorker.CancelAsync(); // WorkerSupportsCancellation must be set to true (see GenerateNewWorker)
                // Optionally handle ProgressEventHandler -=
                activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
            }
            // else - do nothing, activeWorker is already ready for work!
        }
    }

    BackgroundWorker GetAvailableWorker()
    {
        // Loop through workerPool and return a worker if IsBusy is false
        // if the loop exits without returning...
        if (activeWorker != null)
        {
            workerPool.Add(activeWorker); // Save the old worker for possible future use
        }
        return GenerateNewWorker();
    }

    BackgroundWorker GenerateNewWorker()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.WorkerSupportsCancellation = true; // required so CancelAsync works (see ResetActiveWorker)
        //worker.WorkerReportsProgress
        worker.DoWork += doWork;
        worker.RunWorkerCompleted += runWorkerCompleted;
        // Other stuff
        return worker;
    }
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost of using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit, so, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately canceled when the thread is - so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the canceled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being canceled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have Cancel/Cancelled properties which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up, as in the sketch below.
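A sketch of what that check might look like in a DoWork handler (the argument shape and the RunQuery helper are illustrative):
void DoWorkHandler(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;

    foreach (string query in (string[])e.Argument)   // illustrative argument shape
    {
        if (worker.CancellationPending)
        {
            e.Cancel = true;    // the End handler then sees RunWorkerCompletedEventArgs.Cancelled == true
            return;             // clean up any open command/connection before returning
        }
        RunQuery(query);        // hypothetical helper
    }
}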

Should a class with a Thread member implement IDisposable?

Let's say I have this class Logger that is logging strings in a low-priority worker thread, which isn't a background thread. Strings are queued in Logger.WriteLine and munched in Logger.Worker. No queued strings are allowed to be lost. Roughly like this (implementation, locking, synchronizing, etc. omitted for clarity):
public class Logger
{
    private Thread workerThread;
    private Queue<String> logTexts;
    private AutoResetEvent logEvent;
    private AutoResetEvent stopEvent;

    // Locks the queue, adds the text to it and sets the log event.
    public void WriteLine(String text);

    // Sets the stop event without waiting for the thread to stop.
    public void AsyncStop();

    // Waits for any of the log event or stop event to be signalled.
    // If log event is set, it locks the queue, grabs the texts and logs them.
    // If stop event is set, it exits the function and the thread.
    private void Worker();
}
Since the worker thread is a foreground thread, I have to be able to stop it deterministically for the process to be able to finish.
Question: Is the general recommendation in this scenario to let Logger implement IDisposable and stop the worker thread in Dispose()? Something like this:
public class Logger : IDisposable
{
    ...

    public void Dispose()
    {
        AsyncStop();
        this.workerThread.Join();
    }
}
Or are there better ways of handling it?
That would certainly work - a Thread qualifies as a resource, etc. The main benefit of IDisposable comes from the using statement, so it really depends on whether the typical use for the owner of the object is to use the object for a duration of time in a single method - i.e.
void Foo() {
    ...
    using (var obj = new YourObject()) {
        ... some loop?
    }
    ...
}
If that makes sense (perhaps a work pump), then fine; IDisposable would be helpful for the case when an exception is thrown. If that isn't the typical use then other than highlighting that it needs some kind of cleanup, it isn't quite so helpful.
That's usually the best approach, as long as you have a deterministic way to dispose the logger (a using block on the main part of the app, try/finally, a shutdown handler, etc.).
It may be a good idea to have the thread hold a WeakReference to the managing object, and periodically check to ensure that it still exists. In theory, you could use a finalizer to nudge your thread (note that the finalizer, unlike Dispose, should not do a Thread.Join), but it may be a good idea to allow for the possibility of the finalizer failing. A sketch of the WeakReference idea is shown below.
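A sketch of the WeakReference idea, with illustrative names; the worker is started with new Thread(Worker) and passed new WeakReference(this), and it exits once the owning Logger has been collected:
private static void Worker(object state)
{
    WeakReference ownerRef = (WeakReference)state;   // WeakReference to the Logger

    while (true)
    {
        Logger owner = (Logger)ownerRef.Target;
        if (owner == null)
            return;                      // the Logger was collected - stop the thread
        owner.DrainQueueOnce();          // hypothetical: write any pending entries
        owner = null;                    // drop the strong reference between iterations
        Thread.Sleep(100);
    }
}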
You should be aware that if the user doesn't call Dispose manually (via using or otherwise), the application will never exit, as the thread will hold a strong reference to your Logger. The answer provided by supercat is a much better general solution to this problem.
