Best practice regarding firing and handling of .NET events - c#

It is obvious that firing events inside of a lock (i.e. critical section) is prone to deadlocks due to the possibility that the event handler may block on some asynchronous operation that also needs to acquire the same lock. Now, design-wise there are two possible solutions that come to my mind:
If an event must be fired inside a lock, always fire it asynchronously. This can be done with the ThreadPool, for example, if the firing order of the events does not matter. If the order must be preserved, a single event-firing thread can be used to fire the events in order, but asynchronously (a sketch of this is at the end of the question).
The class/library that fires the events does not need to take any necessary precautions to prevent the deadlock and just fire the event inside the lock. In this case it is the event handler's responsibility to process the event asynchronously if it performs locking (or any other blocking operation) inside the event handler. If the event handler does not conform to this rule, then it should suffer the consequences in case a deadlock occurs.
I honestly think that the second option is better in terms of separation of concerns principle as the event-firing code should not guess what the event handler may or may not do.
However, practically, I am inclined to take the first route since the second option seems to converge to the point that every event handler must now run all event-handling code asynchronously, since most of the time it is not clear whether some series of calls performs a blocking operation or not. For complex event-handlers, tracing all possible paths (and, moreover, keeping track of it as the code evolves) is definitely not an easy task. Therefore, solving the problem in one place (where the event is fired) seems to be preferable.
I am interested in seeing if there are other alternative solutions that I may have overlooked and what possible advantages/disadvantages and pitfalls can be attributed to each possible solution.
Is there a best practice for this kind of situation?
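For concreteness, here is a minimal sketch of what I mean by the first option when ordering matters: a dedicated firing thread fed by a BlockingCollection. All names are illustrative, not from any existing library.
using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch only: a single consumer thread preserves firing order while the
// producer (which may hold a lock) never blocks on the handlers.
class OrderedAsyncEventRaiser : IDisposable
{
    private readonly BlockingCollection<Action> _pending = new BlockingCollection<Action>();
    private readonly Thread _firingThread;

    public OrderedAsyncEventRaiser()
    {
        _firingThread = new Thread(Drain) { IsBackground = true };
        _firingThread.Start();
    }

    // Safe to call while holding a lock: it only enqueues and returns.
    public void Raise(Action raiseEvent)
    {
        _pending.Add(raiseEvent);
    }

    private void Drain()
    {
        // Handlers run here, one at a time, in the order the events were raised.
        foreach (var raiseEvent in _pending.GetConsumingEnumerable())
            raiseEvent();
    }

    public void Dispose()
    {
        _pending.CompleteAdding();
        _firingThread.Join();
    }
}

// Usage from inside the critical section (SomethingChanged is a hypothetical event):
// _raiser.Raise(() => SomethingChanged?.Invoke(this, EventArgs.Empty));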

There is a third option: delay raising the event until the lock is released. Normally, locks are taken for a short time, and it is usually possible to delay raising the event until after the lock is released (but still on the same thread).
The BCL almost never calls user code under a lock; this is an explicit design principle of theirs. For example, ConcurrentDictionary.AddOrUpdate does not call the factory while under a lock. This counter-intuitive behavior causes many Stack Overflow questions because it can lead to multiple factory invocations for the same key.
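A minimal sketch of that third option (the Account/BalanceChanged names are just for illustration): notifications are recorded while holding the lock, but the handlers are invoked only after it is released, on the same thread and in order.
using System;
using System.Collections.Generic;

class Account
{
    private readonly object _gate = new object();
    private decimal _balance;

    public event EventHandler BalanceChanged;

    public void Deposit(decimal amount)
    {
        var toRaise = new List<Action>();
        lock (_gate)
        {
            _balance += amount;
            // Record the notification while holding the lock, but don't call handlers yet.
            toRaise.Add(() => BalanceChanged?.Invoke(this, EventArgs.Empty));
        }
        // Lock released: raise on the same thread, in order, with no lock held.
        foreach (var raise in toRaise)
            raise();
    }
}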

I honestly think that the second option is better in terms of separation of concerns principle as the event-firing code should not guess what the event handler may or may not do.
I don't think the first option violates separation of concerns. Extract an AsyncEventNotifier class and have your event-generating object delegate to it, something like (obviously not complete):
using System;
using System.Collections.Generic;
using System.Threading;

// A listener delegate type; the exact signature is illustrative.
delegate void EventListener(EventArgs args);

class AsyncEventNotifier
{
    private readonly List<EventListener> _listeners = new List<EventListener>();

    public void AddEventListener(EventListener listener) { _listeners.Add(listener); }

    public void NotifyListeners(EventArgs args)
    {
        // Queue one work item that calls the listeners in registration order,
        // off the caller's thread (and outside any lock the caller may hold).
        ThreadPool.QueueUserWorkItem(_ =>
        {
            foreach (var listener in _listeners)
                listener(args);
        });
    }
    // ...
}

class EventGeneratingClass
{
    private readonly AsyncEventNotifier _asyncEventNotifier = new AsyncEventNotifier();

    public void AddEventListener(EventListener listener) { _asyncEventNotifier.AddEventListener(listener); }

    private void FireSomeEvent()
    {
        var eventArgs = new EventArgs();
        // ...
        _asyncEventNotifier.NotifyListeners(eventArgs);
    }
}
Now your original class isn't responsible for anything it wasn't responsible for before. It knows how to add listeners to itself and it knows how to tell its listeners that an event has occurred. The new class knows the implementation details of "tell its listeners that an event has occurred". Listener order is also preserved if it's important. It probably shouldn't be, though. If listenerB can't handle an event until listenerA has already handled it then listenerB is probably more interested in an event generated by listenerA than by the object it was listening to. Either that or you should have another class whose responsibility is to know the order that an event needs to be processed in.

Related

What are the pros and cons of using CancellationTokens as alternatives to events?

Recently I came across a Microsoft interface with a quite unusual API:
public interface IHostApplicationLifetime
{
public CancellationToken ApplicationStarted { get; }
public CancellationToken ApplicationStopping { get; }
public CancellationToken ApplicationStopped { get; }
}
The documentation of the property ApplicationStopping suggests confusingly that this property is actually an event (emphasis added):
Triggered when the application host is performing a graceful shutdown. Shutdown will block until this event completes.
It seems that what should be a traditional EventHandler event has been replaced with a CancellationToken property. This is how I expected this interface to look:
public interface IHostApplicationLifetime
{
public event EventHandler ApplicationStarted;
public event EventHandler ApplicationStopping;
public event EventHandler ApplicationStopped;
}
My question is: are these two notification mechanisms equivalent? If not, what are the pros and cons of each approach, from the perspective of an API designer? In which circumstances is a CancellationToken property superior to a classic event?
Microsoft should have been more careful with the documentation. It is confusing to describe the CancellationTokens as events to be "triggered".
I'm certainly not a C# language expert, so hopefully the community will let us know if I'm missing important points. But let me take a stab at a few things...
So, your question: are they equivalent? Only in the sense that they both provide a way to register and invoke callbacks. For a lot of other reasons, no.
A CancellationToken is a wrapper around a CancellationTokenSource. They tie into the Task implementation. They are thread safe. They can be canceled externally or by a timer. You can chain multiple CancellationTokenSources together. They maintain state indicating whether or not they have been canceled, and you can query that state.
C# events, the language feature, are, in the words of the documentation, a special kind of multicast delegate. They can only be invoked from within the class where they are declared. They are not really thread safe. They are baked into XAML and WPF.
I'm not going to comment too much on where one is superior to the other, as they are very different. In the normal course I don't think you'd consider events in situations where you would consider CancellationTokens. They overlapped in the case of IHostApplicationLifetime mainly because of bad documentation and the redesign of the hosting infrastructure for .NET Core. As @EricLippert mentioned, this link provides a great overview of that.
The CancellationToken is not equivalent to classic events. The differences are numerous, the most obvious being that a CancellationToken can be triggered (canceled) only once. So in cases where it makes sense for an event to be raised more than once, there is no dilemma: a classic event is the only option. The comparison must therefore be narrowed to one-time events, where the CancellationToken has many advantages and only one potential disadvantage. First the advantages:
The CancellationToken notifies its subscribers not only about a cancellation that may happen in the future, but also about a cancellation that has already happened (see the sketch at the end of this answer). Trying to do the same in a multithreaded application with a classic event and a bool field creates a race condition: the event can be triggered between checking the bool field and subscribing to the event, in which case the notification is lost.
It is possible to attach a handler to, and later detach it from, a CancellationToken even if the handler is an anonymous lambda function. This is not possible with a classic event: the detach operator (-=) requires the same handler instance that was passed at subscription (+=), so the handler must be assigned to a variable or be a named function.
It is possible to pass a CancellationToken as an argument to a method of another class, to allow registering (and optionally unregistering) a callback. This is not possible with classic events; the C# language does not allow it. The only way to achieve this functionality is by passing attach/detach lambdas as arguments to the method: otherClass.SomeMethod(h => this.MyEvent += h, h => this.MyEvent -= h). This is complicated, awkward, and less readable than otherClass.SomeMethod(this.MyToken).
There are thousands of APIs that accept a CancellationToken parameter. This makes the consumption of this kind of notification very convenient in a multitude of scenarios. By contrast, APIs that can consume events by accepting addHandler/removeHandler arguments are extremely rare (example: Observable.FromEventPattern).
Unregistering a callback from a CancellationToken with the method Unregister¹ gives a bool feedback indicating whether the callback has already been invoked or not. By contrast, unsubscribing from a classic event with the -= operator gives no feedback. This means that in a multithreaded application the caller has no way of knowing whether the detached handler is already running on another thread, so that it can safely dispose any disposable resources referenced by the handler. This leaves the caller with awkward options like not disposing the resources, or catching possible ObjectDisposedExceptions inside the handler.
The disadvantage of using a CancellationToken as a one-time notification is purely semantic. This type is strongly associated with the concept of cancellation, so using it to notify that, for example, something started or stopped, has a good chance of creating confusion.
¹ Not available for the .NET Framework, so this advantage does not apply for this platform.
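To illustrate the first and third advantages above with a minimal sketch (the Publisher/Subscriber names are only for illustration): a callback registered on an already-canceled token runs immediately, and the token itself can be passed to another class.
using System;
using System.Threading;

class Publisher
{
    private readonly CancellationTokenSource _stoppingSource = new CancellationTokenSource();

    // The "event" exposed as a token, roughly as IHostApplicationLifetime does.
    public CancellationToken Stopping => _stoppingSource.Token;

    public void SignalStopping() => _stoppingSource.Cancel();
}

class Subscriber
{
    // The token can be handed to a method of another class, unlike a classic event.
    public Subscriber(CancellationToken stopping)
    {
        // If the cancellation has already happened, this callback runs right here,
        // synchronously, so a late subscriber cannot miss the notification.
        stopping.Register(() => Console.WriteLine("Stopping observed."));
    }
}

// Usage: a subscriber created after the signal is still notified.
// var publisher = new Publisher();
// publisher.SignalStopping();
// var lateSubscriber = new Subscriber(publisher.Stopping);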

Event Ordering in .NET

Got a quick question on event ordering in C#/.NET.
Let's say you have a while loop that reads a socket interface (TCP). The interface is definitely taking care of the ordering (it's TCP). Let's say your packet interface is written so that for each complete "packet" you get from the stream, you forward it to the next "layer" or the next object via an event callback.
So here is the pseudocode:
while (1) {
    readsocket();
    if (data received = complete packet)
        raiseEvent(packet);
}
My questions are:
Are the events generated in order? (i.e. preserve ordering)
I am assuming #1 is correct, so that means it will block the while loop until the event finishes processing?
You never know how the event is implemented. It's possible that the handlers will all be executed synchronously, in order, according to some meaningful ordering. It's also possible that they'll be executed synchronously in some arbitrary and inconsistent ordering. It's also possible that they won't be executed synchronously at all, and that the various event handlers will run on new threads (or thread pool threads). It's entirely up to the implementation of the event to determine all of that.
It's rather uncommon to see different event handlers executed in parallel (and by that I mean very, very rare), and almost all events that you come across will be backed by a single multicast delegate, meaning the handlers fire in the order in which they were added; but you have no way of actually knowing whether that's the case (barring decompiling the code). There is no indication from the public API of how it is implemented.
Regardless of all of this, from a conceptual perspective, it would be best to not rely on any ordering of event handler invocations, and it's generally best to program as if the various event handlers could be run concurrently because at a conceptual level, that is what an event represents even if the implementation details are more restrictive.
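For what it's worth, a plain field-like event raised in a loop behaves like the sketch below (all names are illustrative): handlers run synchronously, in subscription order, and the loop does not continue until they return. Just don't rely on this for events whose implementation you don't control.
using System;

class PacketSource
{
    public event EventHandler<string> PacketReceived;

    public void Pump()
    {
        for (int i = 0; i < 3; i++)
        {
            // The raise blocks until every subscribed handler has returned.
            PacketReceived?.Invoke(this, "packet " + i);
            Console.WriteLine("loop iteration " + i + " finished");
        }
    }
}

// Usage:
// var source = new PacketSource();
// source.PacketReceived += (s, p) => Console.WriteLine("handler A: " + p);
// source.PacketReceived += (s, p) => Console.WriteLine("handler B: " + p);
// source.Pump();   // A then B for each packet, in subscription order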

C# events change sender state

Is it considered bad practice to modify the sender's state in an event handler (beyond the fact that the sender is a mutable object)?
All the event examples I've found are very simple and only do something like Console.WriteLine("event!").
Simple code:
public void HandleEvent(object sender, EventArgs args)
{
    ClassA a = (ClassA)sender;
    a.doSomething(this.makeSomething());
}
It's not bad practice as such; you need to be careful, though.
For instance, would it matter whether doSomething was called from the event handler or directly?
Or: because you can't rely on when the event handler gets triggered, you are effectively asynchronous, so you can't assume doSomething has executed before you call doSomethingElse.
E.g. doSomething should change the state to 2 only if it's currently 1. If it's not 1, or it's already 2, more logic is required.
If you start disappearing into that hole, it might be better to queue a request to do a doSomething and then have an engine which deals with the current state and the request queue, as sketched below.
So have a think about how doSomething relates to any other methods you call on a. If it's self-contained, you are OK; if dependencies start proliferating, then it's a bad idea as opposed to a bad practice.
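A rough sketch of that queue-plus-engine idea (all names hypothetical): handlers only enqueue state-transition requests, and a single processing loop applies them against the current state.
using System;
using System.Collections.Concurrent;

class StateEngine
{
    private readonly ConcurrentQueue<Func<int, int>> _requests = new ConcurrentQueue<Func<int, int>>();
    private int _state = 1;

    // Event handlers call this instead of mutating the sender directly.
    public void Request(Func<int, int> transition) => _requests.Enqueue(transition);

    // Called from one place (a loop or timer), so transitions always see a consistent state.
    public void ProcessPending()
    {
        while (_requests.TryDequeue(out var transition))
            _state = transition(_state);
    }
}

// Usage inside an event handler:
// engine.Request(state => state == 1 ? 2 : state);   // "change to 2 only if it's currently 1"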
I would not consider it bad practice, as long as you do not make assumptions about the order in which the runtime calls the event handlers registered with your events. In fact, since that order is not guaranteed, you should not rely on it to change the state of your objects, including the sender.

How to implement a thread safe error-free event handler in C#?

Problem background
An event can have multiple subscribers (i.e. multiple handlers may be called when an event is raised). Since any one of the handlers could throw an error, and that would prevent the rest of them from being called, I want to ignore any errors thrown from each individual handler. In other words, I do not want an error in one handler to disrupt the execution of other handlers in the invocation list, since neither those other handlers nor the event publisher has any control over what any particular event handler's code does.
This can be accomplished easily with code like this:
public event EventHandler MyEvent;

public void RaiseEventSafely( object sender, EventArgs e )
{
    foreach (EventHandler handler in MyEvent.GetInvocationList())
        try { handler( sender, e ); } catch {}
}
A generic, thread-safe, error-free solution
Of course, I don't want to write all this generic code over and over every time I call an event, so I wanted to encapsulate it in a generic class. Furthermore, I'd actually need additional code to ensure thread-safety so that MyEvent's invocation list does not change while the list of methods is being executed.
I decided to implement this as a generic class where the generic type is constrained by the "where" clause to be a Delegate. I really wanted the constraint to be "delegate" or "event", but those are not valid, so using Delegate as a base class constraint is the best I can do. I then create a lock object and lock it in a public event's add and remove methods, which alter a private delegate variable called "event_handlers".
public class SafeEventHandler<EventType> where EventType : Delegate
{
    private object collection_lock = new object();
    private EventType event_handlers;

    public SafeEventHandler() {}

    public event EventType Handlers
    {
        add { lock (collection_lock) { event_handlers += value; } }
        remove { lock (collection_lock) { event_handlers -= value; } }
    }

    public void RaiseEventSafely( EventType event_delegate, object[] args )
    {
        lock (collection_lock)
            foreach (Delegate handler in event_delegate.GetInvocationList())
                try { handler.DynamicInvoke( args ); } catch {}
    }
}
Compiler issue with += operator, but two easy workarounds
One problem I ran into is that the line "event_handlers += value;" results in the compiler error "Operator '+=' cannot be applied to types 'EventType' and 'EventType'". Even though EventType is constrained to be a Delegate type, the compiler will not allow the += operator on it.
As a workaround, I just added the event keyword to "event_handlers", so the definition looks like this: "private event EventType event_handlers;", and that compiles fine. But I also figured that since the "event" keyword can generate code to handle this, I should be able to as well, so I eventually changed it to avoid the compiler's inability to recognize that '+=' SHOULD apply to a generic type constrained to be a Delegate. The private variable "event_handlers" is now typed as Delegate instead of the generic EventType, and the add/remove methods follow the pattern event_handlers = Delegate.Combine( event_handlers, value );
The final code looks like this:
public class SafeEventHandler<EventType> where EventType : Delegate
{
    private object collection_lock = new object();
    private Delegate event_handlers;

    public SafeEventHandler() {}

    public event EventType Handlers
    {
        add { lock (collection_lock) { event_handlers = Delegate.Combine( event_handlers, value ); } }
        remove { lock (collection_lock) { event_handlers = Delegate.Remove( event_handlers, value ); } }
    }

    public void RaiseEventSafely( EventType event_delegate, object[] args )
    {
        lock (collection_lock)
            foreach (Delegate handler in event_delegate.GetInvocationList())
                try { handler.DynamicInvoke( args ); } catch {}
    }
}
The Question
My question is... does this appear to do the job well? Is there a better way or is this basically the way it must be done? I think I've exhausted all the options. Using a lock in the add/remove methods of a public event (backed by a private delegate) and also using the same lock while executing the invocation list is the only way I can see to make the invocation list thread-safe, while also ensuring errors thrown by handlers don't interfere with the invocation of other handlers.
Since any one of the handlers could throw an error, and that would prevent the rest of them from being called,
You say that like it is a bad thing. That is a very good thing. When an unhandled, unexpected exception is thrown that means that the entire process is now in an unknown, unpredictable, possibly dangerously unstable state.
Running more code at this point is likely to make things worse, not better. The safest thing to do when this happens is to detect the situation and cause a failfast that takes down the entire process without running any more code. You don't know what awful thing running more code is going to do at this point.
I want to ignore any errors thrown from each individual handler.
This is a super dangerous idea. Those exceptions are telling you that something awful is happening, and you're ignoring them.
In other words, I do not want an error in one handler to disrupt the execution of other handlers in the invocation list, since neither those other handlers nor the event publisher has any control over what any particular event handler's code does.
Who's in charge here? Someone is adding those event handlers to this event. That is the code that is responsible for ensuring that the event handlers do the right thing should there be an exceptional situation.
I then create a lock object and lock it in a public event's add and remove methods, which alter a private delegate variable called "event_handlers".
Sure, that's fine. I question the necessity of the feature -- I very rarely have a situation where multiple threads are adding event handlers to an event -- but I'll take your word for it that you are in this situation.
But in that scenario this code is very, very, very dangerous:
lock (collection_lock)
    foreach (Delegate handler in event_delegate.GetInvocationList())
        try { handler.DynamicInvoke( args ); } catch {}
Let's think about what goes wrong there.
Thread Alpha enters the collection lock.
Suppose there is another resource, foo, which is also controlled by a different lock. Thread Beta enters the foo lock in order to obtain some data that it needs.
Thread Beta then takes that data and attempts to enter the collection lock, because it wants to use the contents of foo in an event handler.
Thread Beta is now waiting on thread Alpha. Thread Alpha now calls a delegate, which decides that it wants to access foo. So it waits on thread Beta, and now we have a deadlock.
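To make that scenario concrete, here is a stripped-down illustration (not the original SafeEventHandler code) of the two locks being taken in opposite orders by two threads; running it hangs forever.
using System;
using System.Threading;

class DeadlockDemo
{
    private static readonly object CollectionLock = new object();
    private static readonly object FooLock = new object();

    static void Main()
    {
        var alpha = new Thread(() =>
        {
            lock (CollectionLock)            // Alpha: raising under the collection lock
            {
                Thread.Sleep(100);           // give Beta time to take FooLock
                lock (FooLock)               // a handler wants foo -> waits on Beta
                    Console.WriteLine("alpha done");
            }
        });
        var beta = new Thread(() =>
        {
            lock (FooLock)                   // Beta: reads foo first
            {
                Thread.Sleep(100);
                lock (CollectionLock)        // then tries to touch the event -> waits on Alpha
                    Console.WriteLine("beta done");
            }
        });
        alpha.Start(); beta.Start();
        alpha.Join(); beta.Join();           // never returns: classic lock-order deadlock
    }
}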
But can't we avoid this by ordering the locks? No, because the very premise of your scenario is that you don't know what the event handlers are doing! If you already know that the event handlers are well-behaved with respect to their lock ordering then you can presumably also know that they are well-behaved with respect to not throwing exceptions, and the whole problem vanishes.
OK, so let's suppose that you do this instead:
Delegate copy;
lock (collection_lock)
    copy = event_delegate;
foreach (Delegate handler in copy.GetInvocationList())
    try { handler.DynamicInvoke( args ); } catch {}
Delegates are immutable and copied atomically by reference, so you now know that you're going to be invoking the contents of event_delegate but not holding the lock during the invocation. Does that help?
Not really. You've traded one problem for another one:
Thread Alpha takes the lock and makes a copy of the delegate list, and leaves the lock.
Thread Beta takes the lock, removes event handler X from the list, and destroys state necessary to prevent X from deadlocking.
Thread Alpha takes over again and starts up X from the copy. Because Beta just destroyed state necessary for the correct execution of X, X deadlocks. And once more, you are deadlocked.
Event handlers are required to not do that; they are required to be robust in the face of their suddenly becoming "stale". It sounds like you are in a scenario where you cannot trust your event handlers to be well-written. That's a horrid situation to be in; you then cannot trust any code to be reliable in the process. You seem to think that there is some level of isolation you can impose on an event handler by catching all its errors and muddling through, but there is not. Event handlers are just code, and they can affect arbitrary global state in the program like any other code.
In short, your solution is generic, but it is not threadsafe and it is not error-free. Rather, it exacerbates threading problems like deadlocks and it turns off safety systems.
You simply cannot abdicate responsibility for ensuring that event handlers are correct, so don't try. Write your event handlers so that they are correct -- so that they order locks correctly and never throw unhandled exceptions.
If they are not correct and end up throwing exceptions then take down the process immediately. Don't keep muddling through trying to run code that is now living in an unstable process.
Based on your comments on other answers it looks like you think that you should be able to take candy from strangers with no ill effects. You cannot, not without a whole lot more isolation. You can't just sign up random code willy-nilly to events in your process and hope for the best. If you have stuff that is unreliable because you're running third party code in your application, you need a managed add-in framework of some sort to provide isolation. Try looking up MEF or MAF.
The lock inside RaiseEventSafely is both unnecessary and dangerous.
It is unnecessary because delegates are immutable. Once you read it, the invocation list you obtained will not change. It doesn't matter whether the changes happen while the event code runs or whether they have to wait until after.
It is dangerous because you're calling external code while holding a lock. This can easily lead to lock order violations and thus deadlocks. Consider an event handler that spawns a new thread which tries to modify the event, and then waits for that thread to finish. Boom, deadlock.
Then you have an empty catch for exceptions. That's rarely a good idea, since it silently swallows the exception. At a minimum you should log the exception.
Your generic parameter doesn't start with a T. That's a bit confusing IMO.
where EventType:Delegate — I don't think this compiles. Delegate is not a valid generic constraint here. For some reason the C# specification forbids certain types as a generic constraint, and one of them is Delegate (no idea why). Note that this restriction was later lifted in C# 7.3, which allows where T : System.Delegate.
Have you looked into the PRISM EventAggregator or MVVMLight Messenger classes? Both of these classes fulfill all your requirements. MVVMLight's Messenger class uses WeakReferences to prevent memory leaks.
Aside from it being a bad idea to swallow exceptions, I suggest you consider not locking while invoking the list of delegates.
You'll need to put a remark in your class's documentation that delegates can be called after having been removed from the event.
The reason I'd do this is because otherwise you risk performance consequences and possibly deadlocks. You're holding a lock while calling into someone else's code. Let's call your internal lock Lock 'A'. If one of the handlers attempts to acquire a private lock 'B', and on a separate thread someone tries to register a handler while holding lock 'B', then one thread holds lock 'A' while trying to acquire 'B' and a different thread holds lock 'B' while trying to acquire lock 'A'. Deadlock.
Third-party libraries like yours are often written with no thread safety to avoid these kinds of issues, and it is up to the clients to protect methods that access internal variables. I think it is reasonable for an event class to provide thread safety, but I think the risk of a 'late' callback is better than a poorly-defined lock hierarchy prone to deadlocking.
Last nit-pick, do you think SafeEventHandler really describes what this class does? It looks like an event registrar and dispatcher to me.
It is a bad practice to swallow exceptions entirely. If you have a use case where you would like a publisher to recover gracefully from an error raised by a subscriber then this calls for the use of an event aggregator.
Moreover, I'm not sure I follow the code in SafeEventHandler.RaiseEventSafely. Why is there an event delegate as a parameter? It seems to have no relationship with the event_handlers field. As far as thread-safety, after the call to GetInvocationList, it does not matter if the original collection of delegates is modified because the array returned won't change.
If you must, I would suggest doing the following instead:
class MyClass
{
    event EventHandler myEvent;

    public event EventHandler MyEvent
    {
        add { this.myEvent += value.SwallowException(); }
        // Note: SwallowException returns a new delegate each time it is called,
        // so this remove will not match the wrapper added above unless the
        // wrapper is cached per handler (e.g. in a dictionary keyed by value).
        remove { this.myEvent -= value.SwallowException(); }
    }

    protected void OnMyEvent(EventArgs args)
    {
        var e = this.myEvent;
        if (e != null)
            e(this, args);
    }
}

public static class EventHandlerHelper
{
    public static EventHandler SwallowException(this EventHandler handler)
    {
        return (s, args) =>
        {
            try
            {
                handler(s, args);
            }
            catch { }
        };
    }
}
Juval Löwy provides an implementation of this in his book "Programming .NET components".
http://books.google.com/books?id=m7E4la3JAVcC&lpg=PA129&pg=PA143#v=onepage&q&f=false
I considered everything everyone said, and arrived at the following code for now:
public class SafeEvent<EventDelegateType> where EventDelegateType : class
{
    private object collection_lock = new object();
    private Delegate event_handlers;

    public SafeEvent()
    {
        if (!typeof(Delegate).IsAssignableFrom( typeof(EventDelegateType) ))
            throw new ArgumentException( "Generic parameter must be a delegate type." );
    }

    public Delegate Handlers
    {
        get
        {
            lock (collection_lock)
                return event_handlers == null ? null : (Delegate)event_handlers.Clone();
        }
    }

    public void AddEventHandler( EventDelegateType handler )
    {
        lock (collection_lock)
            event_handlers = Delegate.Combine( event_handlers, handler as Delegate );
    }

    public void RemoveEventHandler( EventDelegateType handler )
    {
        lock (collection_lock)
            event_handlers = Delegate.Remove( event_handlers, handler as Delegate );
    }

    public void Raise( object[] args, out List<Exception> errors )
    {
        lock (collection_lock)
        {
            errors = null;
            if (event_handlers == null)
                return; // no subscribers yet
            foreach (Delegate handler in event_handlers.GetInvocationList())
            {
                try { handler.DynamicInvoke( args ); }
                catch (Exception err)
                {
                    if (errors == null)
                        errors = new List<Exception>();
                    errors.Add( err );
                }
            }
        }
    }
}
This bypasses the compiler's special treatment of Delegate as an invalid base-class constraint. Also, events cannot be typed as Delegate, which is why subscribers are managed through the AddEventHandler and RemoveEventHandler methods instead.
Here is how a SafeEvent would be used to create an event in a class:
private SafeEvent<SomeEventHandlerType> a_safe_event = new SafeEvent<SomeEventHandlerType>();

public event SomeEventHandlerType MyEvent
{
    add { a_safe_event.AddEventHandler( value ); }
    remove { a_safe_event.RemoveEventHandler( value ); }
}
And here is how the event would be raised and errors handled:
List<Exception> event_handler_errors;
a_safe_event.Raise( new object[] {event_type, disk}, out event_handler_errors );
//Report errors however you want; they should not have occurred; examine logged errors and fix your broken handlers!
To summarize, this component's job is to publish events to a list of subscribers in an atomic manner (i.e. the event will not be re-raised and the invocation list will not be changed while the invocation list is executing). Deadlock is possible but easily avoided by controlling access to the SafeEvent, because a handler would have to spawn a thread that calls one of the public methods of the SafeEvent and then wait on that thread. In any other scenario, other threads would simply block until the lock-owning thread releases the lock. Also, while I do not believe in ignoring errors at all, I also do not believe that this component is in any place to handle subscriber errors intelligently, nor to make a judgement call about the severity of such errors, so rather than throw them and risk crashing the application, it reports errors to the caller of "Raise", since the caller is likely to be in a better position to handle such errors. With that said, this component provides a kind of stability to events that's lacking in the C# event system.
I think what people are worried about is that letting other subscribers run after an error has occurred means they are running in an unstable context. While that might be true, that means the application is in fact written incorrectly any way you look at it. Crashing is no better a solution than allowing the code to run, because allowing the code to run will allow errors to be reported, and will allow the full effects of the error to be manifest, and this, in turn, will assist engineers to more quickly and thoroughly understand the nature of the error and FIX THEIR CODE FASTER.

C# .NET 3.5 : How to invoke an event handler and wait for it to complete

I have a class containing a worker thread which receives data from a queue in a loop.
Another part of the app sinks an event from this class, which the class raises for each queue item.
These events are fired asynchronously, so at busy times the other part of the app can be processing several events at once.
This should be fine but we've discovered a scenario where this can cause problems.
We need a quick solution while the main issue gets addressed. Does the framework provide a simple way I can force the worker thread to wait while each event gets processed (so they are processed sequentially)? If not, what's the easiest way to implement this?
A simple answer would be to lock() on a single object in the event handler. All of the threads would wait to get the lock.
The ManualResetEvent class might help you here, unless I'm not understanding your question. You can use it to block the firing of the next event until the last one completes.
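A minimal sketch of that idea (all type and member names here are illustrative): the worker raises the event and then blocks on a ManualResetEvent until the subscriber signals completion, so items are processed sequentially even if the handler hands work to another thread.
using System;
using System.Threading;

class ItemEventArgs : EventArgs
{
    public ItemEventArgs(string item) { Item = item; }
    public string Item { get; private set; }
    public readonly ManualResetEvent Completed = new ManualResetEvent(false);
}

class QueueWorker
{
    public event EventHandler<ItemEventArgs> ItemReady;

    public void ProcessOne(string item)
    {
        var handler = ItemReady;
        if (handler == null) return;

        var args = new ItemEventArgs(item);
        handler(this, args);      // the handler may hand the work to another thread...
        args.Completed.WaitOne(); // ...but we don't move to the next queue item until it signals
    }
}

// A subscriber that processes asynchronously but still signals completion
// (Handle is the subscriber's own method, hypothetical here):
// worker.ItemReady += (s, e) => ThreadPool.QueueUserWorkItem(_ =>
// {
//     Handle(e.Item);
//     e.Completed.Set();
// });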
My guess is that you want to simply move away from triggering the action by raising an event, and call the method directly instead.
AFAIK these events are going to be async, and I am not aware of any "easy" ways of changing that.
Turns out there's another answer. You can just add the following attribute to the method.
[System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.Synchronized)]
There is no general way.
In the end the handlers need to provide a mechanism for tracking.
If you are using BeginInvoke, rather than raising the events directly, you can use a wrapper, within which you call the real event handler synchronously, then raise the wrapper asynchronously. The wrapper can maintain a counter (with Interlocked operations) or set an event as meets your needs.
Something like:
TheDelegate realHandler = theEvent;
var outer = this;
ThreadPool.QueueUserWorkItem(x =>
{
    // Set "start of handler" tracking here
    realHandler(outer, eventArgs);
    // Set "handler finished" tracking here
});
All of the event handlers sinking events raised by the queue-reading worker thread are called in the queue-reading worker thread. As long as the event handlers aren't spawning threads of their own, you should be able to wait for the event handlers to finish by calling Thread.Join() on the queue-reading worker thread.
