Use Dispose() or finalizer to clean up managed threads? - c#

Suppose I have a message pump class in C++0x like the following (note: SynchronizedQueue is a queue of function<void()>; when you call receive() on the queue and it is empty, it blocks the calling thread until there is an item to return):
class MessagePump
{
private:
    bool done_;
    thread* thread_;
    SynchronizedQueue queue_;

    void Run()
    {
        while (!done_)
        {
            function<void()> msg = queue_.receive();
            msg();
        }
    }

public:
    MessagePump() :
        done_(false)
    {
        thread_ = new thread([=] { this->Run(); });
    }

    ~MessagePump()
    {
        Send([&] { done_ = true; });
        thread_->join();
    }

    void Send(function<void()> msg)
    {
        queue_.send(msg);
    }
};
I have converted this class into C#, but I have a question about the code in the destructor. According to the IDisposable pattern, I should provide only a Dispose() method in order to free managed and unmanaged resources.
Should I put the C++ destructor code into:
A custom CleanUp() method that the client needs to call when the application is exiting? What if the client forgets?
A Dispose() method of IDisposable so that the client can also call it? But again, what if the client forgets?
Inside the C# finalizer method so it will always execute? I read that if you do not have any unmanaged resources, you shouldn't include a finalizer method because it hurts performance.
Nowhere? Just ignore marking the done_ flag and just let GC handle it naturally since the Thread object is a managed resource? Will the thread be forcibly aborted in this way?
I have also found out that if I don't mark the message pump thread created inside the constructor as a background thread, my MessagePump object never gets GC'ed and the application just hangs when it exits. What's the reason for this?

At a high level, I would just suggest using the .NET thread pool (System.Threading.ThreadPool) for queueing and executing multiple work items, since that's what it was designed for (assuming the work items are allowed to be executed asynchronously). Specifically, check out the QueueUserWorkItem method.
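As a rough illustration (not from the original answer), here is a minimal sketch of queueing a work item onto the pool; the ManualResetEvent is only there so the console app doesn't exit before the pool thread runs:

using System;
using System.Threading;

class PoolExample
{
    static void Main()
    {
        using (var done = new ManualResetEvent(false))
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("Running on a thread-pool thread");
                done.Set();
            });

            // Pool threads are background threads, so wait for the work item
            // to finish before letting the process exit.
            done.WaitOne();
        }
    }
}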
To answer your questions, though:
Should I put the C++ destructor code into:
A custom CleanUp() method that the client needs to call when the application is exiting? What if the client forgets?
A Dispose() method of IDisposable so that the client can also call it? But again, what if the client forgets?
Always prefer implementing IDisposable over custom CleanUp methods (in the BCL, some Stream classes have a Close method that is really just an alias for Dispose). The IDisposable pattern is the way to do deterministic cleanup with C#. The client forgetting to call Dispose is always an issue, but this can often be detected by static analysis tools (e.g. FxCop).
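As a rough illustration, here is a minimal C# sketch of the pump above with the destructor logic moved into Dispose(); it assumes a BlockingCollection<Action> standing in for SynchronizedQueue:

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class MessagePump : IDisposable
{
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
    private readonly Thread thread;
    private bool done;

    public MessagePump()
    {
        thread = new Thread(Run);
        thread.Start();
    }

    public void Send(Action msg)
    {
        queue.Add(msg);
    }

    private void Run()
    {
        while (!done)
        {
            Action msg = queue.Take();   // blocks until a message is available
            msg();
        }
    }

    public void Dispose()
    {
        Send(() => done = true);   // the flag is set on the pump thread itself
        thread.Join();             // deterministic cleanup: wait for the pump to finish
    }
}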
Inside the C# finalizer method so it will always execute? I read that if you do not have any unmanaged resources, you shouldn't include a finalizer method because it hurts performance.
Finalizers are not guaranteed to execute (see this article), so a correct program cannot assume that they will execute. Performance won't be an issue here. I'm guessing you'll have a couple of MessagePump objects at most, so the cost of having a finalizer is insubstantial.
Nowhere? Just ignore marking the done_ flag and just let GC handle it naturally since the Thread object is a managed resource? Will the thread be forcibly aborted in this way?
The thread is managed by the CLR and will be properly cleaned up. If the thread returns from its entry point (Run here), it won't be aborted; it will just exit cleanly. This code still needs to go somewhere, though, so I would provide explicit cleanup through IDisposable.
I have also found out that if I don't mark the message pump thread created inside the constructor as a background thread, my MessagePump object never gets GC'ed and the application just hangs when it exits. What's the reason for this?
A .NET application runs until all foreground (non-background) threads terminate. So if you don't mark your MessagePump thread as a background thread, it will keep your application alive while it runs. If some object still references your MessagePump, then the MessagePump will never be GC'ed or finalized. Referencing the article above again, though, you can't assume that the finalizer will ever run.
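A tiny demo of that behaviour (hypothetical, not from the question): with IsBackground left at its default of false, this program never exits.

using System;
using System.Threading;

class BackgroundThreadDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (true)
                Thread.Sleep(1000);   // simulates a pump that never returns
        });

        worker.IsBackground = true;   // comment this out and the process never exits
        worker.Start();

        Console.WriteLine("Main is done; a background thread does not keep the process alive.");
    }
}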

One pattern that may be helpful is to have outside users of the message pump hold strong references to a "STILL IN USE" flag object, to which the pump itself holds only a weak reference (which will be invalidated as soon as the flag object becomes eligible for finalization). The finalizer for this object might be able to send the message pump a message, and the message pump could check the continued validity of its weak reference; if it has become invalid, the message pump could then shut down.
Note that one common difficulty with message pumps is that the thread that operates them will tend to keep alive a lot of objects which are used by nothing but that thread. One needs a separate object, to which the thread will avoid keeping a strong reference, to ensure that things can get cleaned up.
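A rough sketch of that idea, with hypothetical names (PumpToken is the "STILL IN USE" flag object held by callers; the pump keeps only a WeakReference to it):

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class PumpToken
{
    private readonly Action onCollected;
    internal PumpToken(Action onCollected) { this.onCollected = onCollected; }

    // Runs once no outside user holds a strong reference any more; it nudges the
    // pump so the pump notices that its weak reference has died.
    ~PumpToken() { onCollected(); }
}

public sealed class MessagePump
{
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
    private readonly WeakReference tokenRef;
    private readonly Thread worker;

    // The caller must hold on to 'token'; the pump itself keeps only a weak reference.
    public MessagePump(out PumpToken token)
    {
        token = new PumpToken(() => queue.Add(() => { /* wake-up message */ }));
        tokenRef = new WeakReference(token);
        worker = new Thread(Run);
        worker.Start();
    }

    public void Send(Action msg) { queue.Add(msg); }

    private void Run()
    {
        while (true)
        {
            Action msg = queue.Take();   // blocks until a message arrives
            msg();
            if (!tokenRef.IsAlive)
                break;                   // all users gone: shut the pump down
        }
    }
}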

Related

What happens if a new Entry is written to the Event Log while the application is inside the handler for a previous entry being written?

My application needs to review all new application Event Log entries as they come in.
private void eventLog_Application_EntryWritten(object sender, EntryWrittenEventArgs e)
{
    // Process e.Entry
}
What I would like to know is what happens if another Entry is written to the EventLog while a previous Entry is being handled?
The documentation for EventLog.EntryWritten Event provides an example of handling an entry written event which uses threading (which is why I am asking the question).
In this example they use System.Threading and call the WaitOne() and Set() methods on the AutoResetEvent class; however, I'm not sure precisely what this code is intended to achieve.
The documentation states that - WaitOne() "blocks the current thread until the current WaitHandle receives a signal", and that Set() "sets the state of the event to signaled, allowing one or more waiting threads to proceed". I'm not sure what the threading portion of this example is intended to demonstrate, and how this relates to how (or if) it needs to be applied in practice.
It appears that WaitOne() blocks the thread immediately after the entry has been written, until the entry has been handled, at which point the event is set to signaled (using Set()), allowing the thread to proceed. Is this the one and only thread for the application?
Most importantly, when my application is not responsible for writing the events which need to be read from the EventLog, how should this principle be applied? (If, indeed, it needs to be applied.)
What does happen if a new Entry is written while the application is inside the handler?
Nothing dramatic happens, it is serialized by the framework. The underlying winapi function that triggers the EventWritten event is NotifyChangeEventLog(). The .NET Framework uses the threadpool to watch for the event to get signaled with ThreadPool.RegisterWaitForSingleObject(). You can see it being used here.
Which is your cue as to why the MSDN sample uses an ARE (AutoResetEvent). The event handler runs on that threadpool thread; exactly when that happens is unpredictable. The sample uses a console mode app, and without that ARE it would immediately terminate. With the ARE, it displays one notification and quits. Not actually that useful of course; I would personally just have used Console.ReadLine() in the sample so it keeps running and continues to display info until you press the Enter key.
You don't need this if you use a service or a GUI app, something that's going to run for a long time until the user explicitly closes it. Note the EventLog.SynchronizingObject property; it makes it easy to avoid dealing with the threadpool thread in a Winforms app.
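A small sketch of that property in a Winforms app (assuming an EventLog component created in code rather than by the designer):

using System;
using System.Diagnostics;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly EventLog appLog = new EventLog("Application");

    public MainForm()
    {
        appLog.SynchronizingObject = this;   // raise EntryWritten on the UI thread
        appLog.EntryWritten += OnEntryWritten;
        appLog.EnableRaisingEvents = true;
    }

    private void OnEntryWritten(object sender, EntryWrittenEventArgs e)
    {
        Text = e.Entry.Source;               // safe: we are on the UI thread here
    }
}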
The example is not really helping to explain the way the AutoResetEvent works in a multi-threaded scenario, so I'll try to explain how I understand it to work.
The AutoResetEvent signal static variable is instantiated as a new AutoResetEvent with its signaled state set to false, or "non-signaled", meaning that a call to signal.WaitOne() will cause the calling thread to wait at that point until the signal variable is "set" by calling the signal.Set() method.
I found an explanation of AutoResetEvent that describes it very well in understandable real-world terms, which also included this excellent example below.
http://www.albahari.com/threading/part2.aspx#_AutoResetEvent
AutoResetEvent
An AutoResetEvent is like a ticket turnstile: inserting a ticket lets
exactly one person through. The “auto” in the class’s name refers to
the fact that an open turnstile automatically closes or “resets” after
someone steps through. A thread waits, or blocks, at the turnstile by
calling WaitOne (wait at this “one” turnstile until it opens), and a
ticket is inserted by calling the Set method. If a number of threads
call WaitOne, a queue builds up behind the turnstile. (As with locks,
the fairness of the queue can sometimes be violated due to nuances in
the operating system). A ticket can come from any thread; in other
words, any (unblocked) thread with access to the AutoResetEvent object
can call Set on it to release one blocked thread.
class BasicWaitHandle
{
    static EventWaitHandle _waitHandle = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(Waiter).Start();
        Thread.Sleep(1000);            // Pause for a second...
        _waitHandle.Set();             // Wake up the Waiter.
    }

    static void Waiter()
    {
        Console.WriteLine("Waiting...");
        _waitHandle.WaitOne();         // Wait for notification
        Console.WriteLine("Notified");
    }
}
According to https://msdn.microsoft.com/en-us/library/0680sfkd.aspx the event log components are not thread-safe, and that code is there to prevent unexpected behaviour during simultaneous interactions.
If multiple threads are executing these lines simultaneously, it is possible for one thread to change the EventLog.Source property of the event log, and for another thread to write a message after that property has been changed.
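If several of your own threads do write through the same EventLog instance, one simple guard is to take a single lock around the Source assignment and the write so the pair stays atomic (a sketch with hypothetical names, not from the linked page):

using System.Diagnostics;

static class SafeEventLogWriter
{
    private static readonly object logLock = new object();
    private static readonly EventLog log = new EventLog();

    public static void Write(string source, string message)
    {
        lock (logLock)
        {
            log.Source = source;       // without the lock, another thread could change this
            log.WriteEntry(message);   // between setting Source and writing the entry
        }
    }
}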

Should I implement IDisposable for a class containing a Thread

I have a class that uses the Thread class:
class A
{
    public Thread thread { get; set; }
}
Should I implement IDisposable and set Thread property to null?
class A : IDisposable
{
    public Thread Thread { get; set; }

    protected bool Disposed { get; set; }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!this.Disposed)
        {
            if (disposing)
            {
                if (Thread != null)
                    Thread = null;
            }
            Disposed = true;
        }
    }
}
Or not?
Why?
You implement IDisposable only when your class holds unmanaged objects, unmanaged resources, or other IDisposable objects. A Thread is not an unmanaged object and will get garbage collected when nothing references it or when the process hosting it is terminated. Since Thread does not implement IDisposable, your class referencing it does not need to implement it either.
Optionally, IDisposable objects used within the scope of a method can be wrapped in a using statement, and the Dispose() method is automatically called when the scope is exited.
It depends what your thread is doing. If your thread is performing a long running task that may run indefinitely, then I would consider that thread as a resource (which will not be garbage collected). For example consider if the thread is designed to poll some state indefinitely, or consume items from a queue (like a thread-pool thread consumes tasks or a TCP server consumes new connections) etc. In this case, I would say the natural effect of disposing your class would be to free up this thread resource. Setting it to null is not really useful in this case. Rather Dispose should probably involve flagging a synchronization event (or maybe a CancellationToken) to notify the thread that it should finish up its infinite task, and then the disposing thread should wait some time for the thread to finish (join). As always with joins, be careful of a deadlock scenario and consider some alternative action if the thread refuses to terminate. For obvious reasons I would not do this join in the finalizer.
As an example of what I mean, consider the scenario where your class A is actually class MyTcpListener, designed to listen and wait for new TCP connections on a given port indefinitely. Then consider what you would expect the following (somewhat unlikely) code to do:
using (MyTcpListener listener = new MyTcpListener(port: 1234))
{
    // Do something here
}

// Create another one. This would fail if the previous Dispose
// did not unbind from the port.
using (MyTcpListener listener = new MyTcpListener(port: 1234))
{
    // Do something else here
}
Assuming I know the constructor of MyTcpListener creates a listener thread, I would expect that after the Dispose call has returned, the MyTcpListener would no longer be bound to the TCP port - i.e. that the TCP listener thread would have fully terminated. It goes without saying that if you didn't provide some mechanism to stop the listener, there would be a resource leak. The stopping mechanism could be a call to some method "Stop", but I personally think the "Dispose" pattern fits this scenario more cleanly, since forgetting to stop something does not generally imply a resource leak.
Your code may call for different assumptions, so I would suggest judging it on the scenario. If your thread is short-running, e.g. it has some known finite task to complete and then it will terminate on its own, then I would say that disposing is less critical or perhaps useless.
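A minimal sketch of the long-running case described above: Dispose signals a CancellationTokenSource and then joins the worker with a timeout (the class and member names are illustrative):

using System;
using System.Threading;

public sealed class PollingWorker : IDisposable
{
    private readonly CancellationTokenSource cts = new CancellationTokenSource();
    private readonly Thread worker;

    public PollingWorker()
    {
        worker = new Thread(Run);
        worker.Start();
    }

    private void Run()
    {
        while (!cts.Token.IsCancellationRequested)
        {
            // ... poll some state, consume a queue item, etc. ...
            Thread.Sleep(100);
        }
    }

    public void Dispose()
    {
        cts.Cancel();                               // cooperative stop request
        if (!worker.Join(TimeSpan.FromSeconds(5)))
        {
            // The thread refused to stop within the timeout; decide on a fallback
            // (log, escalate, etc.) rather than blocking forever.
            return;
        }
        cts.Dispose();
    }
}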

Killing a Thread in C#

This is not about terminating a system process but killing "myself". I have several parallel threads, which CAN hang for different reasons.
I already created a watchdog when a thread is taking too long:
TimerCallback timerDelegate = new TimerCallback(CheckProcessStatus);
System.Threading.Timer watchDogTimer = new Timer(timerDelegate, new ProcessHealth(plog), 1000 * 60, 1000 * 60);
try
{
    // lots of code here
}
finally
{
    watchDogTimer.Dispose();
}
Watchdog:
public void CheckProcessStatus(Object timerState)
{
    ProcessHealth ph = (ProcessHealth)timerState;
    Console.WriteLine(string.Format("process runs for {0} minutes!", ph.WaitingTime));
    if (ph.WaitingTime > 60)
    {
        // KILL THE PROCESS
    }
}
When "lots of code here" takes too long I want to terminate the thread no matter what state it is in. (at "Kill the process").
What would be the best approach?
Thread.CurrentThread.Interrupt()
OR
Thread.CurrentThread.Abort()?
Or are there even better approaches? (I cannot use "simple" mechanisms like a boolean "stop" variable, as the "lots of code here" is very dynamic, calling other classes via reflection, etc.)
Does that even work? Or do I just kill the watchdog-thread, NOT the thread to be watched?
Thread.Abort attempts to terminate the target thread by injecting an out-of-band (asynchronous) exception. It is unsafe because the exception gets injected at unpredictable points in the execution sequence. This can (and often does) lead to some type of corruption in the application domain because of interrupted writes to data structures.
Thread.Interrupt causes most blocking calls in the BCL (like Thread.Sleep, WaitHandle.WaitOne, etc.) to bailout immediately. Unlike aborting a thread, interrupting a thread can be made completely safe because the exception is injected at predictable points in the execution sequence. A crafty programmer can make sure these points are considered "safe points".
So, if "lots of code here" will respond to Thread.Interrupt then that might be an acceptable approach to use. But, I would like to steer you more towards the cooperative cancellation pattern. Basically, this means your code must periodically poll for a cancellation request. The TPL already has a framework in place for doing this via CancellationToken. But, you could easily accomplish the same thing with a ManualResetEvent or a simple volatile bool variable.
Now, if "lots of code here" is not under your control or if the cooperative cancellation pattern will not work (perhaps because you are using a faulty 3rd party library) then you pretty much have no other choice but to spin up a completely separate process to run the risky code. Use WCF to communicate with the process and if it does not respond then you can kill it without corrupting the main process. It is a lot of work, but it may be your only option.
Aborting a thread when it is in an unknown state is not advisable. Say the thread is currently executing a static constructor. The static ctor will be aborted and never run again (because faulting static ctors never run again). You have effectively destroyed global state in your AppDomain without a way to ever recover.
There are lots of other hazards as well. That just doesn't fly.
There are two production-ready choices for stopping threads:
Cooperatively (set an event or a boolean flag in combination with Thread.MemoryBarrier)
Don't abort the thread, but the entire AppDomain or process. Thread-level granularity is too small. You need to delete all state related to that thread, too.
I want to stress that you cannot make this work any other way. You will have the strangest faults in production if you insist on aborting threads non-cooperatively.

Problems invoking methods on a COM thread from a WinForms GUI thread?

I'm having trouble with my COM component written in .NET throwing warnings that look like:
Context 0x15eec0 is disconnected. No
proxy will be used to service the
request on the COM component. This may
cause corruption or data loss. To
avoid this problem, please ensure that
all contexts/apartments stay alive
until the application is completely
done with the RuntimeCallableWrappers
that represent COM components that
live inside them.
It looks like this is caused by my GUI thread calling functions on the COM thread without the necessary synchronization. For reference, I'm using the guidelines set out in http://msdn.microsoft.com/en-us/library/ms229609%28VS.80%29.aspx for creating my GUI thread in the COM component.
My code looks something like:
class COMClass
{
    ComObject comObject;   // this is imported from a TLB; a field so SomeMethod can reach it

    // this is called before SomeMethod
    public void Init()
    {
        comObject = new ComObject();

        // I create my GUI thread and start it as in the MSDN sample
        Thread newThread = new Thread(new ThreadStart(delegate()
        {
            Application.Run(new GUIForm(comObject));
        }));
    }

    public void SomeMethod()
    {
        comObject.DoSomething(); // this is where the error occurs
    }
}
class GUIForm : Form
{
    ComObject comObject;

    public GUIForm(ComObject com) { comObject = com; }

    public void SomeButtonHandler(object sender, EventArgs e)
    {
        comObject.SomeMethod(); // called on the GUI thread, but the com object is bound to the COM thread...
    }
}
Is there an established method for dealing with this? Calls to the GUI are no problem (Invoke/BeginInvoke) but calling the other way seems to be more difficult...
edit: It is also not an option to modify the COM object in any way.
It isn't very clear from your snippet how the all-important Init() method is called and how the thread got started. Clearly, the thread on which the COM object is created is not the same thread as the one where the SomeMethod() call is made. Further assuming that the COM server is apartment threaded, COM needs to marshal the SomeMethod() call to the thread that created the object. The one that called Init(). If that thread is no longer running, hilarity ensues.
There's one glaring problem, you forgot to call Thread.SetApartmentState().
Given that COM already marshals inter-thread calls, you are probably not gaining anything by starting your own thread. You can't magically make a COM server multi-threaded if it refuses to support it.
I found the problem; it wasn't the cross-thread operation per se. In my GUIForm I created a subwindow and used SetParent() to parent it to the COM server's app window. This seems to have caused the issues with the COM proxy being disconnected (although a more experienced COM expert might have to enlighten me as to why it behaved like this).
Instead of parenting my control to the window I'm going to fully disconnect it and just hook WM_WINDOWPOSCHANGING to move my control with the main App window.
Since the COM object was created on the other thread, all calls to the COM object should be made from that thread. After launching the GUI thread, you'll need to have some kind of queueing mechanism set up to wait for calls to execute methods (probably a queue of delegates). Your GUI code can push a delegate into the queue and it will be executed (on the original thread) when the original thread processes the queue. See: http://www.yoda.arachsys.com/csharp/threads/deadlocks.shtml (producer/consumer example about mid-way down the page).
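A rough sketch of that queueing mechanism, using a BlockingCollection of delegates drained by the thread that created the COM object (ComObject and DoSomething are the hypothetical imported types from the question; the class name is illustrative):

using System;
using System.Collections.Concurrent;

class ComWorkQueue
{
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();

    // Called from the GUI thread: e.g. Post(() => comObject.DoSomething());
    public void Post(Action work) { queue.Add(work); }

    public void Shutdown() { queue.CompleteAdding(); }

    // Entry point of the thread that created the COM object; every queued delegate
    // runs here, on the COM object's own thread.
    public void Run()
    {
        foreach (Action work in queue.GetConsumingEnumerable())
        {
            work();
        }
    }
}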

Should a class with a Thread member implement IDisposable?

Let's say I have this class Logger that is logging strings in a low-priority worker thread, which isn't a background thread. Strings are queued in Logger.WriteLine and munched in Logger.Worker. No queued strings are allowed to be lost. Roughly like this (implementation, locking, synchronizing, etc. omitted for clarity):
public class Logger
{
    private Thread workerThread;
    private Queue<String> logTexts;
    private AutoResetEvent logEvent;
    private AutoResetEvent stopEvent;

    // Locks the queue, adds the text to it and sets the log event.
    public void WriteLine(String text);

    // Sets the stop event without waiting for the thread to stop.
    public void AsyncStop();

    // Waits for any of the log event or stop event to be signalled.
    // If log event is set, it locks the queue, grabs the texts and logs them.
    // If stop event is set, it exits the function and the thread.
    private void Worker();
}
Since the worker thread is a foreground thread, I have to be able to stop it deterministically so that the process can finish.
Question: Is the general recommendation in this scenario to let Logger implement IDisposable and stop the worker thread in Dispose()? Something like this:
public class Logger : IDisposable
{
    ...

    public void Dispose()
    {
        AsyncStop();
        this.workerThread.Join();
    }
}
Or are there better ways of handling it?
That would certainly work - a Thread qualifies as a resource, etc. The main benefit of IDisposable comes from the using statement, so it really depends on whether the typical use for the owner of the object is to use the object for a duration of time in a single method - i.e.
void Foo()
{
    ...
    using (var obj = new YourObject())
    {
        ... some loop?
    }
    ...
}
If that makes sense (perhaps a work pump), then fine; IDisposable would be helpful for the case when an exception is thrown. If that isn't the typical use then other than highlighting that it needs some kind of cleanup, it isn't quite so helpful.
That's usually the best, as long as you have a deterministic way to dispose the logger (using block on the main part of the app, try/finally, shutdown handler, etc).
It may be a good idea to have the thread hold a WeakReference to the managing object and periodically check that it still exists. In theory, you could use a finalizer to nudge your thread (note that the finalizer, unlike Dispose, should not do a Thread.Join), but you should allow for the possibility of the finalizer failing.
You should be aware that if the user doesn't call Dispose manually (via using or otherwise), the application will never exit, as the Thread object will hold a strong reference to your Logger. The answer provided by supercat is a much better general solution to this problem.
