Invoking an EventHandler - C#

I have the following EventHandler:
private EventHandler<MyEventArgs> _myEventHandler;
public event EventHandler<MyEventArgs> MyEvent
{
    add { _myEventHandler += value; }
    remove { _myEventHandler -= value; }
}
Could somebody explain the difference between the following snippets?
Snippet EventHandler (A):
//Snippet A:
if (_myEventHandler != null)
{
    _myEventHandler(this, new MyEventArgs());
}
Snippet BeginInvoke (B):
//Snippet B:
if (_myEventHandler != null)
{
    _myEventHandler.BeginInvoke(this, new MyEventArgs(), ar =>
    {
        var del = (EventHandler<MyEventArgs>)ar.AsyncState;
        del.EndInvoke(ar);
    }, _myEventHandler);
}
For clarification: What's the difference between invoking an EventHandler "just as it is" and using BeginInvoke?

The BeginInvoke approach is asynchronous, meaning the handlers are invoked on a ThreadPool thread rather than the raising thread. This can be dangerous if subscribers don't expect it, and is pretty rare for events - but it can be useful.
Also, note that strictly speaking you should snapshot the event handler value - this is especially true if (via Begin*) you are dealing with threads.
var tmp = _myEventHandler;
if (tmp != null)
{
    tmp(sender, args);
}
Also - note that your event subscription itself is not thread-safe; again, this only matters if you are dealing with multi-threading, but the inbuilt field-like event is thread-safe:
public event EventHandler<MyEventArgs> MyEvent; // <===== done; nothing more
The issues avoided here are:
with the snapshot, we avoid the risk of the last subscriber unsubscribing between the null-check and the invoke (it does mean they might get an event they didn't expect, but it means we don't kill the raising thread)
with the field-like event change we avoid the risk of losing subscriptions / unsubscriptions when two threads are doing this at the same time
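On C# 6 and later, the snapshot-and-check pattern above collapses into one line with the null-conditional operator, which reads the delegate field exactly once before invoking it:

```csharp
// Equivalent to the snapshot pattern: the delegate is read once, so a
// subscriber unsubscribing concurrently cannot cause a
// NullReferenceException between the null check and the call.
_myEventHandler?.Invoke(this, new MyEventArgs());
```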

BeginInvoke() immediately returns control to the calling thread and runs the delegate on a separate thread from the ThreadPool, so this is a form of asynchronous execution.
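Note that delegate BeginInvoke/EndInvoke are not supported on .NET Core and later runtimes (they throw PlatformNotSupportedException there). If you want the same "raise asynchronously" effect on a modern runtime, a rough equivalent is to queue the invocation yourself; a sketch under that assumption:

```csharp
// Sketch: raise the event on a ThreadPool thread without delegate.BeginInvoke.
// Snapshotting into a local keeps the null-check race out of the lambda.
var handler = _myEventHandler;
if (handler != null)
{
    Task.Run(() => handler(this, new MyEventArgs()));
}
```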


Synchronization mechanism for an observable object

Let's imagine we have to synchronize read/write access to shared resources. Multiple threads will access that resource for both reading and writing (mostly reads, occasionally writes). Let's also assume that each write always triggers a read operation (the object is observable).
For this example I'll imagine a class like this (forgive syntax and style, it's just for illustration purposes):
class Container
{
    public ObservableCollection<Operand> Operands;
    public ObservableCollection<Result> Results;
}
I'm tempted to use a ReaderWriterLockSlim for this purpose; moreover, I'd put it at the Container level (imagine the object is not so simple and one read/write operation may involve multiple objects):
public ReaderWriterLockSlim Lock;
The implementation of Operand and Result is irrelevant for this example.
Now let's imagine some code that observes Operands and will produce a result to put in Results:
void AddNewOperand(Operand operand)
{
    _container.Lock.EnterWriteLock();
    try
    {
        _container.Operands.Add(operand);
    }
    finally
    {
        _container.Lock.ExitWriteLock();
    }
}
Our hypothetical observer will do something similar, but to consume a new element: it'll lock with EnterReadLock() to get operands and then EnterWriteLock() to add the result (let me omit the code for this). This will throw an exception because of recursion, but if I set LockRecursionPolicy.SupportsRecursion then I'll just open my code up to deadlocks (from MSDN):
By default, new instances of ReaderWriterLockSlim are created with the LockRecursionPolicy.NoRecursion flag and do not allow recursion. This default policy is recommended for all new development, because recursion introduces unnecessary complications and makes your code more prone to deadlocks.
I repeat relevant part for clarity:
Recursion [...] makes your code more prone to deadlocks.
If I'm not wrong, with LockRecursionPolicy.SupportsRecursion, if the same thread asks for, say, a read lock while someone else is waiting for a write lock, I'll get a deadlock, so what MSDN says makes sense. Moreover, recursion degrades performance in a measurable way (and that's not what I want if I'm using ReaderWriterLockSlim instead of ReaderWriterLock or Monitor).
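For the read-then-write case specifically, ReaderWriterLockSlim offers upgradeable read locks, which sidestep recursion entirely: only one thread may hold the upgradeable lock at a time, which is what makes the upgrade deadlock-free. A minimal sketch (the `_lock` field and the `NeedsUpdate`/`ApplyUpdate` methods are placeholders, not part of the question's code):

```csharp
// Sketch: read-then-conditionally-write without lock recursion.
// Only one thread can hold the upgradeable read lock at once;
// other readers may still enter plain read locks alongside it.
_lock.EnterUpgradeableReadLock();
try
{
    if (NeedsUpdate())              // read under the upgradeable lock
    {
        _lock.EnterWriteLock();     // upgrade; existing readers drain first
        try
        {
            ApplyUpdate();          // write
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}
finally
{
    _lock.ExitUpgradeableReadLock();
}
```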
Question(s)
Finally, my questions are (please note I'm not looking for a discussion about general synchronization mechanisms; I want to know what's wrong in this producer/observable/observer scenario):
What's better in this situation? Avoiding ReaderWriterLockSlim in favor of Monitor (even though in real-world code reads will greatly outnumber writes)?
Giving up on such coarse synchronization? This may even yield better performance, but it'll make the code much more complicated (of course not in this example, but in the real world).
Should I just make notifications (from observed collection) asynchronous?
Something else I can't see?
I know there is no single best synchronization mechanism, and the tool we use must be the right one for our case, but I wonder if there are some best practices, or if I'm just missing something very important about threads and observers (imagine using Microsoft Reactive Extensions, but the question is general, not tied to that framework).
Possible solutions?
What I would try is to make events (somehow) deferred:
1st solution
Each change won't fire any CollectionChanged event; instead it's kept in a queue. When the provider (the object that pushes data) has finished, it'll manually force the queue to be flushed (raising each event in sequence). This may be done on another thread, or even on the caller's thread (but outside the lock).
It may work, but it'll make everything less "automatic" (each change notification must be manually triggered by the producer itself; more code to write, more bugs all around).
2nd solution
Another solution may be to give the observable collection a reference to our lock. If I wrap ReaderWriterLockSlim in a custom object (useful to hide it in an easy-to-use IDisposable object), I may add a ManualResetEvent to signal that all locks have been released; this way the collection itself may raise events (again, on the same thread or on another thread).
3rd solution
Another idea could be to just make the events asynchronous. If an event handler needs a lock, it'll simply block until its turn comes. My worry here is the large number of threads that may be consumed (especially if they come from the thread pool).
Honestly, I don't know if any of these is applicable in a real-world application (personally, from a user's point of view, I prefer the second one, but it implies a custom collection for everything and it makes the collection aware of threading, which I'd like to avoid if possible). I wouldn't like to make the code more complicated than necessary.
This sounds like quite the multi-threading pickle. It's quite challenging to work with recursion in this chain-of-events pattern, whilst still avoiding deadlocks. You might want to consider designing around the problem entirely.
For example, you could make the addition of an operand asynchronous to the raising of the event:
private readonly BlockingCollection<Operand> _additions
    = new BlockingCollection<Operand>();

public void AddNewOperand(Operand operand)
{
    _additions.Add(operand);
}
And then have the actual addition happen in a background thread:
private void ProcessAdditions()
{
    foreach (var operand in _additions.GetConsumingEnumerable())
    {
        _container.Lock.EnterWriteLock();
        _container.Operands.Add(operand);
        _container.Lock.ExitWriteLock();
    }
}

public void Initialize()
{
    var pump = new Thread(ProcessAdditions)
    {
        Name = "Operand Additions Pump",
        IsBackground = true // don't keep the process alive just for the pump
    };
    pump.Start();
}
This separation sacrifices some consistency - code running after the add method won't know when the add has actually happened, and maybe that's a problem for your code. If so, this could be re-written to subscribe to the observation and use a Task to signal when the add completes:
public Task AddNewOperandAsync(Operand operand)
{
    var tcs = new TaskCompletionSource<byte>();
    // Compose an event handler for the completion of this task.
    NotifyCollectionChangedEventHandler onChanged = null;
    onChanged = (sender, e) =>
    {
        // Is this the event for the operand we have added?
        // (NewItems is null for Reset actions, so guard against that.)
        if (e.NewItems != null && e.NewItems.Contains(operand))
        {
            // Complete the task.
            tcs.SetResult(0);
            // Remove the event handler.
            _container.Operands.CollectionChanged -= onChanged;
        }
    };
    // Hook in the handler.
    _container.Operands.CollectionChanged += onChanged;
    // Perform the addition.
    _additions.Add(operand);
    // Return the task to be awaited.
    return tcs.Task;
}
The event-handler logic is raised on the background thread pumping the add messages, so there is no possibility of it blocking your foreground threads. If you await the add on the message-pump for the window, the synchronization context is smart enough to schedule the continuation on the message-pump thread as well.
Whether you go down the Task route or not, this strategy means that you can safely add more operands from an observable event without re-entering any locks.
I'm not sure if this is exactly the same issue, but when dealing with relatively small amounts of data (2k-3k entries), I have used the code below to facilitate cross-thread read/write access to collections bound to the UI. This code was originally found here.
public class BaseObservableCollection<T> : ObservableCollection<T>
{
    // Constructors
    public BaseObservableCollection() : base() { }
    public BaseObservableCollection(IEnumerable<T> items) : base(items) { }
    public BaseObservableCollection(List<T> items) : base(items) { }

    // Event
    public override event NotifyCollectionChangedEventHandler CollectionChanged;

    // Event Handler
    protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
    {
        // Be nice - use BlockReentrancy like MSDN says
        using (BlockReentrancy())
        {
            if (CollectionChanged != null)
            {
                // Walk through the invocation list
                foreach (NotifyCollectionChangedEventHandler handler in CollectionChanged.GetInvocationList())
                {
                    DispatcherObject dispatcherObject = handler.Target as DispatcherObject;
                    // If the subscriber is a DispatcherObject on a different thread
                    if (dispatcherObject != null && dispatcherObject.CheckAccess() == false)
                    {
                        // Invoke the handler on the target dispatcher's thread
                        dispatcherObject.Dispatcher.Invoke(DispatcherPriority.DataBind, handler, this, e);
                    }
                    else
                    {
                        // Execute the handler as-is
                        handler(this, e);
                    }
                }
            }
        }
    }
}
I have also used the code below (which inherits from the above) to support raising the CollectionChanged event when items inside the collection raise PropertyChanged.
public class BaseViewableCollection<T> : BaseObservableCollection<T>
    where T : INotifyPropertyChanged
{
    // Constructors
    public BaseViewableCollection() : base() { }
    public BaseViewableCollection(IEnumerable<T> items) : base(items) { }
    public BaseViewableCollection(List<T> items) : base(items) { }

    // Event Handlers
    private void ItemPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        var arg = new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Replace, sender, sender);
        base.OnCollectionChanged(arg);
    }

    protected override void ClearItems()
    {
        foreach (T item in Items) { if (item != null) { item.PropertyChanged -= ItemPropertyChanged; } }
        base.ClearItems();
    }

    protected override void InsertItem(int index, T item)
    {
        if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
        base.InsertItem(index, item);
    }

    protected override void RemoveItem(int index)
    {
        if (Items[index] != null) { Items[index].PropertyChanged -= ItemPropertyChanged; }
        base.RemoveItem(index);
    }

    protected override void SetItem(int index, T item)
    {
        // Unsubscribe from the item being replaced before subscribing to the new one.
        if (Items[index] != null) { Items[index].PropertyChanged -= ItemPropertyChanged; }
        if (item != null) { item.PropertyChanged += ItemPropertyChanged; }
        base.SetItem(index, item);
    }
}
Cross-Thread Collection Synchronization
Binding a ListBox to an ObservableCollection updates the ListBox when the data changes, because INotifyCollectionChanged is implemented.
The defect of ObservableCollection is that the data can only be changed by the thread that created it.
SynchronizedCollection does not have the multi-threading problem, but it does not update the ListBox because it does not implement INotifyCollectionChanged; even if you implement INotifyCollectionChanged yourself, CollectionChanged(this, e) can only be called from the thread that created the collection, so it does not work.
Conclusion
-If you want a list that auto-updates but is single-threaded, use ObservableCollection
-If you want a list that does not auto-update but is multi-threaded, use SynchronizedCollection
-If you want both, use .NET Framework 4.5, BindingOperations.EnableCollectionSynchronization and ObservableCollection, in this way:
// Create the lock object somewhere
private static object _lock = new object();
...
// Enable cross-thread access to this collection elsewhere
BindingOperations.EnableCollectionSynchronization(_persons, _lock);
The Complete Sample
http://10rem.net/blog/2012/01/20/wpf-45-cross-thread-collection-synchronization-redux
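A slightly fuller sketch of how those two pieces fit together in a WPF view model; the `Person` type and method names are placeholders, not part of the answer's code:

```csharp
// Sketch (WPF, .NET Framework 4.5+): one ObservableCollection shared
// between the UI thread and background writers.
public class MainViewModel
{
    private static readonly object _lock = new object();

    public ObservableCollection<Person> Persons { get; } =
        new ObservableCollection<Person>();

    public MainViewModel()
    {
        // Must be called on the UI thread that owns the binding.
        BindingOperations.EnableCollectionSynchronization(Persons, _lock);
    }

    public void AddFromBackgroundThread(Person p)
    {
        // Take the same lock WPF takes when it reads the collection.
        lock (_lock)
        {
            Persons.Add(p);
        }
    }
}
```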

c# events: on which thread [duplicate]

This question already has answers here:
In .NET, what thread will Events be handled in?
(4 answers)
Closed 8 years ago.
pseudo-code:
class A
{
    Dictionary<string, object> dic = new Dictionary<string, object>();

    public void Do()
    {
        some_a.SomeEvent += (s, e) =>
        {
            dic["some_string"] = new object();
        };
        dic["some_other_string"] = new object();
    }
}
Is this safe? It would be if both dictionary operations happened on the same thread. So do they?
EDIT: In my situation the event is fired on the same thread as Do, so it's safe.
An event is (usually) raised on the thread that invokes it. So we can't actually comment fully, since you don't show the code that causes the event to be raised!
Strictly speaking, the event could be on any thread - either because a random thread is calling OnSomeEvent (or whatever the trigger is), or if the OnSomeEvent implementation chooses to use BeginInvoke for some reason, but this is unlikely.
Ultimately, it comes down to: is there a reason to think multiple threads are involved in this code.
But what absolutely is not the case: no, there is nothing that will make that event automatically happen on the thread that subscribed to it. An event subscription is just an object+method pair; no thread is nominated in the subscription.
The code inside the event handler will execute on the thread the event is raised on.
class AsyncWorker
{
    public void BeginWork()
    {
        new TaskFactory().StartNew(() => RaiseMyEvent());
    }

    public event EventHandler MyEvent;

    private void RaiseMyEvent()
    {
        var myEvent = MyEvent;
        if (myEvent != null)
            myEvent(this, EventArgs.Empty);
    }
}
var worker = new AsyncWorker();
worker.MyEvent += (s, e) =>
{
    /* I will be executed on the thread
       started by the TaskFactory */
};
worker.BeginWork();
Unless you specifically start the event handler in a different thread, both operations will indeed run on the same thread (assuming a single threaded application).
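If you do need handlers to run on a particular thread (say, the UI thread), the raising side can capture a SynchronizationContext and post through it. A sketch of that idea; the `ContextAwareWorker` class is illustrative, not part of the question's code:

```csharp
// Sketch: marshal an event back to the context of the thread
// that created the object (e.g. a UI thread), if one exists.
class ContextAwareWorker
{
    // Captured at construction; null on plain thread-pool threads.
    private readonly SynchronizationContext _context =
        SynchronizationContext.Current;

    public event EventHandler MyEvent;

    protected void RaiseMyEvent()
    {
        var handler = MyEvent; // snapshot for thread safety
        if (handler == null) return;

        if (_context != null)
            _context.Post(_ => handler(this, EventArgs.Empty), null);
        else
            handler(this, EventArgs.Empty); // no context: raise inline
    }
}
```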

Should cancellable event args be passed to all handlers, if the first handler cancels it?

I've got an extension method that raises cancellable events, returning a bool if they're cancelled:
public static bool RaiseCancel<T>(this EventHandler<T> ev, object sender, T e) where T : CancelEventArgs
{
    if (ev == null)
    {
        return false;
    }

    foreach (Delegate del in ev.GetInvocationList())
    {
        try
        {
            ISynchronizeInvoke invoke = del.Target as ISynchronizeInvoke;
            if (invoke != null && invoke.InvokeRequired)
            {
                invoke.Invoke(del, new[] { sender, e });
            }
            else
            {
                del.DynamicInvoke(sender, e);
            }
        }
        catch (TargetInvocationException ex)
        {
            throw ex.InnerException;
        }

        // if (e.Cancel) return true;
    }
    return e.Cancel;
}
However, I can't help thinking it should return immediately when a handler cancels, for the sake of efficiency, rather than continue to call the remaining handlers. As far as I'm aware, no handler of a cancellable event should EVER take any action other than setting the Cancel property to true. That being the case, what's the point in asking more handlers to make a decision that's already been made? On the other hand, it seems wrong NOT to call an event handler when an object is listening for the event.
Should I uncomment that if statement (and replace the return at the end of the method with return false;) or not?
EDIT: I suppose that if I'm going to continue calling handlers, should I instead let the handlers themselves make the decision (i.e., they can have if (e.Cancel) return; at the beginning of the handler) if they want?
NOTE: what I describe here is only what I think makes sense. It might not be implemented this way in the .NET framework (see João Angelo's comment below)
Take an example: the FormClosing event. If a handler cancels this event, the form is not going to be closed, so it doesn't make sense to notify the other handlers that the form is being closed.
In the more general case, if you call the other handlers after e.Cancel is set to true, you're notifying them of something that isn't happening anymore...
So in my opinion you should stop invoking the handlers as soon as e.Cancel is set to true.
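Following that advice, the loop could bail out as soon as a handler cancels. A sketch of that variant; the method name `RaiseCancelFirstWins` is hypothetical, and the dispatch is simplified (no ISynchronizeInvoke handling, unlike the question's full method):

```csharp
// Sketch: stop invoking the remaining handlers once one cancels.
public static bool RaiseCancelFirstWins<T>(this EventHandler<T> ev, object sender, T e)
    where T : CancelEventArgs
{
    if (ev == null) return false;

    foreach (Delegate del in ev.GetInvocationList())
    {
        // Simplified dispatch; a real version would keep the
        // ISynchronizeInvoke marshaling from the original method.
        del.DynamicInvoke(sender, e);

        if (e.Cancel)
        {
            return true; // decision made; skip the remaining handlers
        }
    }
    return false;
}
```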

Does an event need to have at least one handler?

Why does an event need to have at least one handler?
I created custom event for my Control and somewhere inside of code of my control, I call this event:
this.MyCustomEvent(this, someArgs);
it throws a NullReferenceException if there is no handler subscribed to it.
When I added a single handler in the control's constructor, everything worked fine:
this.MyCustomEvent += myCutomEventHandler;
void myCustomEventHandler(object sender, EventArgs e)
{ /* do nothing */ }
Is this normal or maybe I'm doing something wrong?
Shouldn't it check automatically if there are any handlers subscribed? It's a little dumb, IMHO.
I recommend an extremely useful extension method:
public static void Raise<T>(this EventHandler<T> eventHandler, object sender, T e) where T : EventArgs
{
    if (eventHandler != null)
    {
        eventHandler(sender, e);
    }
}
which will do the check for you.
Usage:
MyCustomEvent.Raise(this, EventArgs.Empty);
Note that delegates are reference types, and their default value is null.
The solution proposed by others, i.e. to check for null before firing the event, is not thread-safe because listeners may unsubscribe from the event between the null check and the firing of the event.
I have seen solutions that involve copying the delegate to a local variable, and checking it for null before firing, such as
EventHandler myCustomEventCopy = MyCustomEvent;
if (myCustomEventCopy != null)
{
    myCustomEventCopy(this, someArgs);
}
But this has a race condition: handlers may fire even after they have unsubscribed from the event, which may corrupt the state of the application.
One solution that handles both problems is to initialize events to a blank handler, e.g.
public event EventHandler MyCustomEvent = delegate { };
and then fire them off without any checks, e.g.
MyCustomEvent(this, someArgs);
Edit: As others have pointed out, this is a complex issue.
http://blogs.msdn.com/b/ericlippert/archive/2009/04/29/events-and-races.aspx
Lippert points out that completely removing the "handler is fired after deregistration" problem requires that the handlers themselves are written in a robust manner.
Under the hood, an event is a MulticastDelegate, which is null if there is no method in the invocation list. Usually you use a RaiseEvent() method to raise an event; the pattern looks like the following:
public void RaiseEvent()
{
    var handler = MyEvent;
    if (handler != null)
        handler(this, new EventArgs());
}
Because you assign the event to a variable, this is thread-safe. You can still miss a method that was removed or added between the individual operations (assignment -> null check -> invocation), though.
This is normal behaviour. The conventional pattern for raising an event in .NET is to have a method called OnMyCustomEvent that you use to raise the event, like so:
protected void OnMyCustomEvent(MyCustomEventArgs e)
{
    EventHandler<MyCustomEventArgs> threadSafeCopy = MyCustomEvent;
    if (threadSafeCopy != null)
        threadSafeCopy(this, e);
}

public event EventHandler<MyCustomEventArgs> MyCustomEvent;
Then from your code, you would call this.OnMyCustomEvent(someArgs) instead.
This is the normal behavior. To avoid it, always test whether there's an event subscriber before calling it:
if (MyCustomEvent != null)
{
    MyCustomEvent(this, someArgs);
}
There is the "standard" approach of copying the reference and then checking for null (the value of the reference if no handlers are attached)—as given by other answers:
public event EventHandler<MyEventArgs> MyEvent;

protected virtual void OnMyEvent(MyEventArgs args)
{
    var copy = MyEvent;
    if (copy != null)
    {
        copy(this, args);
    }
}
This works because, in part, MulticastDelegate instances are immutable.
There is another approach: the Null Object Pattern:
public event EventHandler<MyEventArgs> MyEvent;

// In the constructor:
MyEvent += (s, e) => { }; // No-op, so it is always initialised
And there is no need to take a copy or check for null, because it won't be null. This works because there is no Clear method on an event.
Yes, you need to check for null. In fact, there's a right way to do it:
var myEvent = MyCustomEvent;
if (myEvent != null)
    myEvent(this, someArgs);
This is the official right way because it avoids a possible race condition when the event is changed after the null check but before the call.

Why is this event declared with an anonymous delegate?

I have seen people define their events like this:
public event EventHandler<EventArgs> MyEvent = delegate{};
Can somebody explain how this is different from defining it without it? Is it to avoid checking for null when raising the event?
You got it - adding the empty delegate lets you avoid this:
public void DoSomething()
{
    if (MyEvent != null) // Unnecessary!
        MyEvent(this, EventArgs.Empty);
}
This declaration ensures that MyEvent is never null, removing the tedious and error-prone task of having to check for null every time, at the cost of executing an extra empty delegate every time the event is fired.
