Delaying a method call - C#

I have a high rate of events that can occur for a specific entity, and I need to transfer them over a network. The problem is that those events can generate a high level of traffic and computation, which is not desired.
So my question is: what would be the best way to delay the execution of a calculation function for a specific amount of time? In my case the events don't carry any actual data that I need to buffer, and occurrence order doesn't matter, so basically it would just be a matter of starting a timer once an event occurs and firing it with the entity parameter once the delay expires.
I could build my own implementation with a timer, but it seems there are existing libraries that should support this, e.g. Reactive Extensions?
In any case, if somebody can point me to an existing implementation or framework, it would be greatly appreciated.
Edit
Ok, I have looked at the Rx observable pattern and it looks like it can do the job. I can see a simple implementation that I could use, e.g.:
IDisposable handlers;
Subject<int> subject = new Subject<int>();
handlers = subject.AsObservable()
    .Sample(TimeSpan.FromSeconds(10))
    .Subscribe(sample =>
    {
        Trace.WriteLine(sample);
    });
Now whenever I want to process an event I would call
subject.OnNext(someValue);
The Sample operator should limit how often the subscribers are called.
Can somebody comment on whether I am correct with this usage?

Here is an example to what you can do:
public class ExpiryDictionary
{
    Timer timer; // will handle the expiry
    ConcurrentDictionary<string, string> state; // holds the last event per key

    public ExpiryDictionary(int milliseconds)
    {
        state = new ConcurrentDictionary<string, string>();
        timer = new Timer(milliseconds);
        timer.Elapsed += new ElapsedEventHandler(Elapsed_Event);
        timer.Start();
    }

    private void Elapsed_Event(object sender, ElapsedEventArgs e)
    {
        foreach (var key in state.Keys)
        {
            // fire the calculation for each event in the dictionary
        }
        state.Clear();
    }

    public void Add(string key, string value)
    {
        // ConcurrentDictionary has no two-argument AddOrUpdate overload;
        // the indexer performs an atomic add-or-replace.
        state[key] = value;
    }
}
You can create a collection that saves all the events you receive; once the timer ticks, you fire all the events in the collection. Because we are using a dictionary keyed by entity, only the last event per key is saved, so you don't have to keep every event you get.
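A minimal usage sketch of the class above (spelling the class name ExpiryDictionary; the 500 ms flush interval and key/value strings are chosen purely for illustration):

```csharp
using System;

class ExpiryDictionaryUsage
{
    static void Main()
    {
        // Events for the same entity collapse into one pending entry,
        // and the internal timer flushes everything every 500 ms.
        var pending = new ExpiryDictionary(500);

        // High-rate event source: only the last value per key survives
        // until the next flush, so traffic stays bounded.
        pending.Add("entity-42", "event-payload-1");
        pending.Add("entity-42", "event-payload-2"); // overwrites the previous entry
    }
}
```

The point of the design choice is that the dictionary acts as a conflation buffer: N events per interval become at most one calculation per key per interval.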

I suggest you look into the Proxy design pattern. Your clients will know only about a proxy and trigger events on the proxy object. The proxy object will contain the logic that determines when to send the actual request over the wire. This logic may depend on your requirements. From what I understood, having a boolean switch isEventRaised and checking it within a configurable interval may be sufficient (you would reset the flag to false at the end of each interval).
Also, you may want to check throttling implementations first and figure out whether they suit your requirements. For example, here is a StackOverflow question about different throttling methods, which references among others the token bucket algorithm.
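As a rough sketch of the token bucket idea mentioned above (the class name and parameters are mine, not from any particular library): the bucket allows bursts up to its capacity, then admits events at the refill rate.

```csharp
using System;

// Minimal token bucket: permits bursts up to 'capacity', then admits events
// at 'refillPerSecond'. Illustrative only; not thread-tuned for production.
public class TokenBucket
{
    private readonly double capacity;
    private readonly double refillPerSecond;
    private double tokens;
    private DateTime lastRefill = DateTime.UtcNow;
    private readonly object gate = new object();

    public TokenBucket(double capacity, double refillPerSecond)
    {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity; // start full so an initial burst is allowed
    }

    // Returns true if the caller may process this event now.
    public bool TryConsume()
    {
        lock (gate)
        {
            var now = DateTime.UtcNow;
            tokens = Math.Min(capacity,
                tokens + (now - lastRefill).TotalSeconds * refillPerSecond);
            lastRefill = now;
            if (tokens < 1) return false; // over budget: drop or defer the event
            tokens -= 1;
            return true;
        }
    }
}
```

With a high-rate event source you would call `TryConsume()` per event and skip (or batch) the rejected ones, which caps both traffic and calculation load.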

Related

Is there a pattern or recommended way of waiting on a generic value to achieve a certain value?

In this brave new async world, I find myself time and time again in need of an awaitable method to wait for something to be at a certain state or meet a certain condition. For example (pseudocode)
await (myStateMachine.State == StateEnum.Ready);
await (myDownloadProgress == 100.0);
await (mySpiDeviceFifoLEvel != 0);
These scenarios arise because I need to hold off some asynchronously started code until a certain state is achieved in another part of the code. For example, the user fires up new part of the UI, but a background thread is still trying to establish communications with a piece of hardware. Or the state machine controlling one piece of hardware needs to wait until another state machine controlling another piece of hardware has got to a certain state of readiness.
I have come up with many wonderful and wacky ways of achieving this, and in doing so noticed certain patterns emerging, so the natural progression is to code up some helper class / generic to do this sort of behaviour in a re-usable fashion.
Before I go down this route, there must be others who have addressed this sort of issue, so I was interested whether anyone knows of a tried and tested pattern or recommended way of doing this. I have done some searching on the web but not found anything particularly conclusive. This SO question touches on the subject but the OP is asking for a different reason. This SO question asks for the same sort of thing but specific to task progress.
Ways I have achieved this so far
1. Don't do it! Use an event
When I am in control of the source (e.g. a state machine's state) that is changing, I often convince myself that I'm doing it wrong, and that instead of waiting for a value to be achieved, I should make the producer (state machine) generate an event when my condition is achieved. Any listener can then use an AutoResetEvent or ManualResetEvent and wait for the handler to set it:
{
    myEvent = new AutoResetEvent(false);
    myStateMachine.OnMyConditionAchieved += OnConditionAchievedEventHandler;
    myEvent.WaitOne();
}
void OnConditionAchievedEventHandler(object sender, EventArgs e)
{
    myEvent.Set();
}
The downside of this is that I don't really want to litter my producer code with events that are specific to the consumers' needs.
2. Use an Event, coding overhead vs performance tradeoff
If there isn't already a handy event to hook into (1), then the producer is forever being modified to meet the needs of the consumers. So the obvious natural progression is to make use of something like the INotifyPropertyChanged pattern. That way there is no endless extension of the producer, and the consumer does this:
void StateMachine_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (e.PropertyName == "State")
    {
        if (myStateMachine.State == State.TheStateThatIWant)
        {
            myEvent.Set();
        }
    }
}
This feels like a win because I use the NotifyPropertyChanged system a lot - it's required for data binding, so there's less code to add - but it feels dirty that we're listening to every change in the producer in order to filter out the condition that we want. Surely there's a better way?!
3. Use a Task and poll (ugh)
Spin up a task that checks the state and sleeps if the condition is not met, either indefinitely or until the task is cancelled. Callers then wait on the task to complete. The task completes when the condition is met or it is cancelled.
Pros - makes for neat code, especially when using the Task.Run(() => … ) lambda approach, and can take advantage of task cancellation techniques (tokens, timeouts, etc.), which is often also needed
Cons - polling feels dirty, and it seems a bit heavy-handed to build a whole new Task to do such a simple job
4. Use a task and wait on event
Better than polling, right? But this suffers from the same issue as 1) and 2) of needing an appropriate event to hook into, so 2) (INotifyPropertyChanged) is more common than 1). The implementation often ends up as: spin up a task, wait on a ManualResetEvent, listen to PropertyChanged and filter the changes, fire the event, return from the task.
5. The holy grail
I'm not 100% sure but something that is
1) lightweight
2) allows the condition to be specified at the time the wait is initiated
3) not going to be a huge resource burden if 10,000 things were waiting on various properties to achieve certain values
4) clean i.e. disposes of resources correctly
MagicValueWaiter waitForValue = new MagicValueWaiter(MyStateMachine, nameof(State), (s) => (s > 4) && (s < 8));
await waitForValue.WaitAsync();
or
await ValueWaiter.WaitAsync(MyObject, nameof(MyPropertyorField), (s) => (s == States.Init));
So basically, a generic class / method for waiting for a given property or field of a given object to meet a certain given condition in the form of a lambda returning bool.
This approach might at first glance suggest a polling technique; however, if I forced MyObject to conform to something like "must implement INotifyPropertyChanged" or some custom base class supporting this behaviour, e.g. ISupportValueWaiting, then we could hook into common behaviour, e.g. events on MyObject, and avoid polling.
Any obvious solutions I'm missing? Anyone got any novel ideas on how to do this? or comments on mine?
What you could do is use a TaskCompletionSource and the INotifyPropertyChanged interface to complete a Task as soon as some condition on the object is met.
So:
public static class ConditionWaiter
{
    public static Task WaitForAsync<T>(this T obj, string propertyName, Func<T, bool> pred)
        where T : INotifyPropertyChanged
    {
        obj = obj ?? throw new ArgumentNullException(nameof(obj));
        propertyName = propertyName ?? throw new ArgumentNullException(nameof(propertyName));
        pred = pred ?? throw new ArgumentNullException(nameof(pred));

        var taskCompletionSource = new TaskCompletionSource<bool>();
        void handler(object sender, PropertyChangedEventArgs e)
        {
            if (e.PropertyName == propertyName && pred(obj))
            {
                obj.PropertyChanged -= handler;
                taskCompletionSource.SetResult(true);
            }
        }
        obj.PropertyChanged += handler;
        return taskCompletionSource.Task;
    }
}
And you could use it like:
await someValue.WaitForAsync(nameof(SomeType.SomeProperty), s => ...);
What you're looking for is reactive extensions.
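To sketch what that might look like here (assuming the System.Reactive package and an object that implements INotifyPropertyChanged; the helper name RxConditionWaiter is mine, not from any library):

```csharp
using System;
using System.ComponentModel;
using System.Reactive.Linq;
using System.Threading.Tasks;

// Sketch: convert PropertyChanged into an observable stream and await the
// first notification for which the predicate holds.
public static class RxConditionWaiter
{
    public static async Task WaitForAsync<T>(this T obj, Func<T, bool> pred)
        where T : INotifyPropertyChanged
    {
        if (pred(obj)) return; // condition already satisfied: no waiting needed

        await Observable
            .FromEventPattern<PropertyChangedEventHandler, PropertyChangedEventArgs>(
                h => obj.PropertyChanged += h,
                h => obj.PropertyChanged -= h)
            .Where(_ => pred(obj))   // re-evaluate the condition on each change
            .FirstAsync();           // completes (and unsubscribes) on first match
    }
}
```

Usage would then be something like `await myStateMachine.WaitForAsync(s => s.State == StateEnum.Ready);`, and FirstAsync takes care of unsubscribing once the condition is met.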

C# Determine which of several timers has expired

I have an array of objects, and each object has its own timer. If, when constructing the arrayed objects, I pass my own timer event handler for use in the timers, is there any way to tell which object's timer has expired?
If not, it seems all the objects in my array would need to catch their own timers, and I'd need to implement a completely different delegate that took something like InnerObject as a parameter, so the inner object's own event handler could call it like this: OuterDelegate(this, eventArgs);
All the various ways along the same line are such a ridiculous amount of trouble that I can't help but think there must be a better way. On other systems the timer always takes a token that is included in the parameters to the event handler, but I can't find anything resembling that in .NET (Core).
The answer to your question is fairly simple. The handler of the System.Timers.Timer Elapsed event receives an argument called sender.
_timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
}
The sender is actually the instance of the System.Timers.Timer class that elapsed.
So with this you can know which timer elapsed.
Furthermore, this class can be extended/inherited, which means you can create your own custom Timer class with an extra property such as a name, which you can use to compare and know which timer elapsed.
Example:
class CustomTimer : System.Timers.Timer
{
    public string TimerName { get; set; }
    // more properties as you need
}

// common handler for an array of timers
private void _timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    var elapsedTimer = sender as CustomTimer;
    if (elapsedTimer.TimerName.Equals(/* whichever */))
    {
        // continue with logic
    }
    // continue with logic
}
Well, it turns out that System.Timers is complete junk, but it's not your only choice.
System.Threading.Timer has exactly the features I was looking for. I just didn't realize there were two versions until I stumbled on certain complaints that clued me in to the fact that they are not the same, and I finally looked at the threading version.
Edit:
System.Threading.Timer's callback looks like this
public delegate void TimerCallback(object state);
Where state is an arbitrary object passed to the timer during construction. It can encapsulate anything the event handler needs to properly handle the specific instance of the event. You can even set properties or call methods on the object during event handling thus controlling its state based on the timer.
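A minimal sketch of that state-token approach (the EntityContext type and all names here are invented for illustration):

```csharp
using System;
using System.Threading;

// Hypothetical per-entity context passed to the timer as its state token.
class EntityContext
{
    public string Name;
    public int FireCount;
}

class TimerTokenExample
{
    static void OnTimer(object state)
    {
        // The state object identifies exactly which entity's timer fired,
        // so one shared callback can serve an entire array of timers.
        var ctx = (EntityContext)state;
        Interlocked.Increment(ref ctx.FireCount);
    }

    static void Main()
    {
        var ctx = new EntityContext { Name = "sensor-1" };
        // First callback after 50 ms, then every 50 ms; Dispose stops it.
        using (var timer = new Timer(OnTimer, ctx, 50, 50))
        {
            Thread.Sleep(300);
        }
        Console.WriteLine($"{ctx.Name} fired {ctx.FireCount} times");
    }
}
```

Because the callback receives the context back, you can also mutate that object from the handler, which is the "controlling its state based on the timer" point made above.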
Edit-2
The only thing in the System.Timers implementation that is vaguely similar is the ability to attach a System.ComponentModel.ComponentCollection and the ability to point to a Component within the collection. These are COM objects belonging to System.Windows.Forms.Control. Even if you extend the class to meet your own needs, you drag support for unusable properties around with you.
You can extend the System.Threading.Timer just as easily without dragging unnecessary baggage along.

How to process (dynamically added) items at a given time?

I've got a (concurrent) priority queue with a timestamp (in the future) as the key, and as the value a function that should be called (an item that should be processed) when that time is reached. I don't want to attach a timer to each item, because there are lots of them. I'd rather go with a scheduler thread/task.
What would be a good strategy to do so?
With a thread running a scheduler... (pseudo-code follows)
// scheduler (pseudo-code: acquiring the lock around Monitor.Wait is omitted)
readonly object _threadLock = new object();
while (true)
{
    if (queue.Empty)
    {
        Monitor.Wait(_threadLock);
    }
    else
    {
        var time = GetWaitingTimeForNextElement();
        if (time > 0)
            Monitor.Wait(_threadLock, time);
        else
            // dequeue and process element
    }
}
...and pulsing when adding elements (to an empty queue or adding a new first element)?
// element enqueued
Monitor.Pulse(_threadLock);
Or with somehow chained (Task.ContinueWith(...)) tasks using Task.Delay(int, CancellationToken)? This would need some logic to abort the waiting if a new first element is enqueued, or to create a new task if none is running. It feels like there is a simpler solution I'm not seeing right now. :)
Or using a timer (very-pseudo-code, just to get the idea)...
var x = new System.Timers.Timer();
x.Elapsed += (sender, args) =>
{
    // dequeue and process item(s)
    x.Interval = GetWaitingTimeForNextElement(); // does this reset the timer anyway?
};
x.Start();
...and updating the interval when adding elements (like above).
// element enqueued
x.Interval = updatedTime;
I'm also concerned about the precision of the wait methods / timers: milliseconds is quite rough (although it might work). Is there a better alternative?
Ergo...
That's again a bunch of questions/thoughts (sorry for that), but there are so many options and concerns that it's hard to get an overview. So to summarize: what is the best way to implement a (precise) time scheduling system for dynamically incoming items?
I appreciate all hints and answers! Thanks a lot.
I would suggest doing it like this:
Create a class called TimedItemsConcurrentPriorityQueue<TKey, TValue> that inherits from ConcurrentPriorityQueue<TKey, TValue>.
Implement an event called ItemReady in your TimedItemsConcurrentPriorityQueue<TKey, TValue> class that gets fired whenever an item is ready (for being processed) according to the timestamp. You can use a single timer and update the timer as needed by shadowing the Enqueue, Insert, Remove and other methods as needed (Or by modifying the source of ConcurrentPriorityQueue<TKey, TValue> and make those methods virtual so you can override them).
Instantiate a single instance of TimedItemsConcurrentPriorityQueue<TKey, TValue>, let's call that variable itemsWaitingToBecomeReady.
Instantiate a single object of BlockingCollection<T>, let's call that variable itemsReady. Use the constructor that takes an IProducerConsumerCollection<T> and pass it a new instance of ConcurrentPriorityQueue<TKey, TValue> (it inherits IProducerConsumerCollection<KeyValuePair<TKey,TValue>>)
Whenever the event ItemReady is fired in itemsWaitingToBecomeReady, you deque that item and enqueue it to itemsReady.
Process the items in itemsReady using the BlockingCollection<T>.GetConsumingEnumerable method using a new task like this:
Task.Factory.StartNew(() =>
{
    foreach (var item in itemsReady.GetConsumingEnumerable())
    {
        ...
    }
});
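The single-timer "item ready" logic described in the steps above might be sketched like this (the class below is my own minimal stand-in, not the ConcurrentPriorityQueue from the answer; a SortedDictionary plays the role of the priority queue):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: one System.Threading.Timer stays armed for the earliest due item;
// enqueuing an earlier item simply re-arms it. Illustrative, not production code.
class TimedReadyQueue<T>
{
    private readonly SortedDictionary<DateTime, Queue<T>> items =
        new SortedDictionary<DateTime, Queue<T>>();
    private readonly Timer timer;
    private readonly object gate = new object();

    public event Action<T> ItemReady;

    public TimedReadyQueue()
    {
        timer = new Timer(_ => Flush(), null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Enqueue(DateTime dueUtc, T item)
    {
        lock (gate)
        {
            if (!items.TryGetValue(dueUtc, out var q)) items[dueUtc] = q = new Queue<T>();
            q.Enqueue(item);
            Rearm(); // the new item may be earlier than the current deadline
        }
    }

    private void Rearm() // call only while holding the lock
    {
        foreach (var first in items) // SortedDictionary: first key is the earliest
        {
            var delay = first.Key - DateTime.UtcNow;
            timer.Change(delay < TimeSpan.Zero ? TimeSpan.Zero : delay,
                         Timeout.InfiniteTimeSpan);
            break;
        }
    }

    private void Flush()
    {
        var ready = new List<T>();
        lock (gate)
        {
            var dueKeys = new List<DateTime>();
            foreach (var kv in items)
            {
                if (kv.Key > DateTime.UtcNow) break; // rest of the queue is in the future
                ready.AddRange(kv.Value);
                dueKeys.Add(kv.Key);
            }
            foreach (var k in dueKeys) items.Remove(k);
            Rearm(); // arm for the next future item, if any
        }
        foreach (var item in ready) ItemReady?.Invoke(item); // fire outside the lock
    }
}
```

The ItemReady handler is then the place to move items into the BlockingCollection<T> consumed by the processing task.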

How to enqueue using 1 Timer and Dequeue with another

I need to enqueue items into a Queue at roughly 4 to 8ms intervals.
Separately, my UI layer needs to dequeue, process, and display info from these items at roughly 33ms intervals (it may dequeue multiple times at that interval).
I'm not quite sure what combination of Timers and Queue I should use to get this working.
I think I should use the ConcurrentQueue class for the queue, but what timer mechanism should I use for the enqueueing and dequeuing?
UPDATE:
I ended up going with something like Brian Gideon's and Alberto's answers.
Without going into all the details here is what I did:
I used the following timer for both my 4ms timer and my 33ms timer. (http://www.codeproject.com/Articles/98346/Microsecond-and-Millisecond-NET-Timer)
My 4ms timer reads data from a high speed camera, does a small amount of processing and enqueues the data into a ConcurrentQueue.
My 33ms timer dequeues all items from the queue, does some more processing on each item and sends the data to another object that computes a rolling average over some given interval. (Queues are used to manage the rolling averages.)
Within the CompositionTarget.Rendering event, I grab the value(s) from the rolling average object and plot them on my custom line graph control.
I mentioned 33ms for the UI because this data is being fed into a realtime graph. 33ms is about 30 fps... anything slower than that and some smoothness is lost.
I did end up using the ConcurrentQueue as well. Works great.
CPU takes a bit of a hit. I think it's due to the high performance timers.
Thanks for the help everyone.
Those are some really tight timing requirements. I question the ~33ms value for the UI updates. The UI should not have to be updated any faster than a human can perceive it and even then that is probably overkill.
What I would do instead is to use a producer-consumer pipeline.
Producer -> Processor -> UI
In my primitive illustration above the Producer will do the step of generating the messages and queueing them. The processor will monitor this queue and do the non-UI related processing of the messages. After processing is complete it will generate messages with just enough information required to update the UI thread. Each step in this pipeline will run on a designated thread. I am assuming you do have a legitimate need for two distinct intervals (4ms and 33ms respectively). I am suggesting you add a 3rd for the UI. The polling intervals might be:
~4ms -> ~33ms -> 500ms
I used the tilde (~) intentionally to highlight the fact that lower interval timings are very hard to achieve in .NET. You might be able to hit 33ms occasionally, but the standard deviation on an arbitrary population of "ticks" will be very high using any of the timers built into the BCL. And, of course, 4ms is out of the question.
You will need to experiment with multimedia timers or other HPET (high performance event timer) mechanisms. Some of these mechanisms use special hardware. If you go this route then you might get closer to that 4ms target. Do not expect miracles though. The CLR is going to stack the deck against you from the very beginning (garbage collection).
See Jim Mischel's answer here for a pretty good write up on some of your options.
You can use one DispatcherTimer for dequeue elements and publish them to the UI and another Timer to enqueue.
For example:
class Producer
{
    public readonly Timer timer;
    public ConcurrentQueue<int> Queue { get; private set; }

    public Producer()
    {
        Queue = new ConcurrentQueue<int>();
        // create the timer last so the callback never sees a null queue
        timer = new Timer(Callback, null, 0, 8);
    }

    private void Callback(object state)
    {
        Queue.Enqueue(123);
    }
}
class Consumer
{
    private readonly Producer producer;
    private readonly DispatcherTimer timer;

    public Consumer(Producer p)
    {
        producer = p;
        timer = new DispatcherTimer();
        timer.Interval = TimeSpan.FromMilliseconds(33);
        timer.Tick += new EventHandler(dispatcherTimer_Tick);
        timer.Start();
    }

    private void dispatcherTimer_Tick(object sender, EventArgs e)
    {
        int value;
        if (producer.Queue.TryDequeue(out value))
        {
            // Update your UI here
        }
    }
}
Since you are dealing with the UI, you could use a couple of DispatcherTimer instances instead of the classic timers. This timer is designed for interaction with the UI, so your queue should be able to enqueue/dequeue without any problem.

C# Events and Thread Safety

I frequently hear/read the following advice:
Always make a copy of an event before you check it for null and fire it. This will eliminate a potential problem with threading where the event becomes null at the location right between where you check for null and where you fire the event:
// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;
if (copy != null)
copy(this, EventArgs.Empty); // Call any handlers on the copied list
Updated: I thought from reading about optimizations that this might also require the event member to be volatile, but Jon Skeet states in his answer that the CLR doesn't optimize away the copy.
But meanwhile, in order for this issue to even occur, another thread must have done something like this:
// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;
// Good, now we can be certain that OnTheEvent will not run...
The actual sequence might be this mixture:
// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;
// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;
// Good, now we can be certain that OnTheEvent will not run...
if (copy != null)
copy(this, EventArgs.Empty); // Call any handlers on the copied list
The point being that OnTheEvent runs after the author has unsubscribed, and yet they just unsubscribed specifically to avoid that happening. Surely what is really needed is a custom event implementation with appropriate synchronisation in the add and remove accessors. And in addition there is the problem of possible deadlocks if a lock is held while an event is fired.
So is this Cargo Cult Programming? It seems that way - a lot of people must be taking this step to protect their code from multiple threads, when in reality it seems to me that events require much more care than this before they can be used as part of a multi-threaded design. Consequently, people who are not taking that additional care might as well ignore this advice - it simply isn't an issue for single-threaded programs, and in fact, given the absence of volatile in most online example code, the advice may be having no effect at all.
(And isn't it a lot simpler to just assign the empty delegate { } on the member declaration so that you never need to check for null in the first place?)
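(For concreteness, the empty-delegate declaration being referred to is just this; whether it actually beats the copy-and-null-check idiom is exactly what the rest of the discussion is about:)

```csharp
using System;

public class Publisher
{
    // The no-op subscriber means the invocation list is never null,
    // so the raise site needs neither a null check nor a local copy.
    public event EventHandler Something = delegate { };

    public void DoSomething()
    {
        Something(this, EventArgs.Empty); // safe even with zero real subscribers
    }
}
```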
Updated: In case it wasn't clear, I did grasp the intention of the advice - to avoid a null reference exception under all circumstances. My point is that this particular null reference exception can only occur if another thread is delisting from the event, and the only reason for doing that is to ensure that no further calls will be received via that event, which clearly is NOT achieved by this technique. You'd be concealing a race condition - it would be better to reveal it! That null exception helps to detect an abuse of your component. If you want your component to be protected from abuse, you could follow the example of WPF - store the thread ID in your constructor and then throw an exception if another thread tries to interact directly with your component. Or else implement a truly thread-safe component (not an easy task).
So I contend that merely doing this copy/check idiom is cargo cult programming, adding mess and noise to your code. To actually protect against other threads requires a lot more work.
Update in response to Eric Lippert's blog posts:
So there's a major thing I'd missed about event handlers: "event handlers are required to be robust in the face of being called even after the event has been unsubscribed", and obviously therefore we only need to care about the possibility of the event delegate being null. Is that requirement on event handlers documented anywhere?
And so: "There are other ways to solve this problem; for example, initializing the handler to have an empty action that is never removed. But doing a null check is the standard pattern."
So the one remaining fragment of my question is, why is explicit-null-check the "standard pattern"? The alternative, assigning the empty delegate, requires only = delegate {} to be added to the event declaration, and this eliminates those little piles of stinky ceremony from every place where the event is raised. It would be easy to make sure that the empty delegate is cheap to instantiate. Or am I still missing something?
Surely it must be that (as Jon Skeet suggested) this is just .NET 1.x advice that hasn't died out, as it should have done in 2005?
UPDATE
As of C# 6, the answer to this question is:
SomeEvent?.Invoke(this, e);
The JIT isn't allowed to perform the optimization you're talking about in the first part, because of the condition. I know this was raised as a spectre a while ago, but it's not valid. (I checked it with either Joe Duffy or Vance Morrison a while ago; I can't remember which.)
Without the volatile modifier it's possible that the local copy taken will be out of date, but that's all. It won't cause a NullReferenceException.
And yes, there's certainly a race condition - but there always will be. Suppose we just change the code to:
TheEvent(this, EventArgs.Empty);
Now suppose that the invocation list for that delegate has 1000 entries. It's perfectly possible that the action at the start of the list will have executed before another thread unsubscribes a handler near the end of the list. However, that handler will still be executed because it'll be a new list. (Delegates are immutable.) As far as I can see this is unavoidable.
Using an empty delegate certainly avoids the nullity check, but doesn't fix the race condition. It also doesn't guarantee that you always "see" the latest value of the variable.
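That immutability is easy to demonstrate directly with plain delegates, outside of any event machinery:

```csharp
using System;

class DelegateSnapshotDemo
{
    static void Main()
    {
        int calls = 0;
        EventHandler handler = (s, e) => calls++;

        EventHandler theEvent = handler;
        EventHandler copy = theEvent;  // snapshot of the current invocation list

        theEvent -= handler;           // removal builds a new delegate (here: null);
                                       // the snapshot is untouched

        if (copy != null)
            copy(null, EventArgs.Empty); // the snapshot still calls the handler

        Console.WriteLine(calls);      // prints 1: unsubscribing did not affect the copy
    }
}
```

This is the race the answer describes: any handler already captured in a copied (or partially executed) invocation list can run after its owner unsubscribes.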
I see a lot of people going toward the extension method of doing this ...
public static class Extensions
{
    public static void Raise<T>(this EventHandler<T> handler,
        object sender, T args) where T : EventArgs
    {
        if (handler != null) handler(sender, args);
    }
}
That gives you nicer syntax to raise the event ...
MyEvent.Raise( this, new MyEventArgs() );
And also does away with the local copy since it is captured at method call time.
"Why is explicit-null-check the 'standard pattern'?"
I suspect the reason for this might be that the null-check is more performant.
If you always subscribe an empty delegate to your events when they are created, there will be some overheads:
Cost of constructing the empty delegate.
Cost of constructing a delegate chain to contain it.
Cost of invoking the pointless delegate every single time the event is raised.
(Note that UI controls often have a large number of events, most of which are never subscribed to. Having to create a dummy subscriber to each event and then invoke it would likely be a significant performance hit.)
I did some cursory performance testing to see the impact of the subscribe-empty-delegate approach, and here are my results:
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took: 432ms
OnClassicNullCheckedEvent took: 490ms
OnPreInitializedEvent took: 614ms <--
Subscribing an empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took: 674ms
OnClassicNullCheckedEvent took: 674ms
OnPreInitializedEvent took: 2041ms <--
Subscribing another empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took: 2011ms
OnClassicNullCheckedEvent took: 2061ms
OnPreInitializedEvent took: 2246ms <--
Done
Note that for the case of zero or one subscribers (common for UI controls, where events are plentiful), the event pre-initialised with an empty delegate is notably slower (over 50 million iterations...)
For more information and source code, visit this blog post on .NET Event invocation thread safety that I published just the day before this question was asked (!)
(My test set-up may be flawed so feel free to download the source code and inspect it yourself. Any feedback is much appreciated.)
I truly enjoyed this read - not! Even though I need it to work with the C# feature called events!
Why not fix this in the compiler? I know there are MS people who read these posts, so please don't flame this!
1 - the null issue) Why not make events be .Empty instead of null in the first place? How many lines of code would be saved from null checks or from having to stick = delegate {} onto the declaration? Let the compiler handle the Empty case, i.e. do nothing! If it matters to the creator of the event, they can check for .Empty and do whatever they care with it! Otherwise all the null checks / delegate adds are hacks around the problem!
Honestly I'm tired of having to do this with every event - aka boilerplate code!
public event Action<thisClass, string> Some;
protected virtual void DoSomeEvent(string someValue)
{
    var e = Some; // avoid race condition here!
    if (null != e) // avoid null condition here!
        e(this, someValue);
}
2 - the race condition issue) I read Eric's blog post, and I agree that the handler should cope with being invoked after it dereferences itself, but couldn't the event be made immutable/thread-safe? I.e., set a lock flag on its creation, so that whenever it is invoked, it locks all subscribing and unsubscribing while it's executing?
Conclusion,
Aren't modern-day languages supposed to solve problems like these for us?
With C# 6 and above, the code can be simplified using the new ?. operator, as in:
TheEvent?.Invoke(this, EventArgs.Empty);
Here is the MSDN documentation.
According to Jeffrey Richter in the book CLR via C#, the correct method is:
// Copy a reference to the delegate field now into a temporary field for thread safety
EventHandler<EventArgs> temp =
Interlocked.CompareExchange(ref NewMail, null, null);
// If any methods registered interest with our event, notify them
if (temp != null) temp(this, e);
Because it forces a reference copy.
For more information, see his Event section in the book.
I've been using this design pattern to ensure that event handlers aren't executed after they're unsubscribed. It's working pretty well so far, although I haven't tried any performance profiling.
private readonly object eventMutex = new object();
private event EventHandler _onEvent = null;

public event EventHandler OnEvent
{
    add
    {
        lock (eventMutex)
        {
            _onEvent += value;
        }
    }
    remove
    {
        lock (eventMutex)
        {
            _onEvent -= value;
        }
    }
}
private void HandleEvent(EventArgs args)
{
    lock (eventMutex)
    {
        if (_onEvent != null)
            _onEvent(this, args); // EventHandler requires a (sender, args) pair
    }
}
I'm mostly working with Mono for Android these days, and Android doesn't seem to like it when you try to update a View after its Activity has been sent to the background.
This practice is not about enforcing a certain order of operations. It's actually about avoiding a null reference exception.
The reasoning behind people caring about the null reference exception and not the race condition would require some deep psychological research. I think it has something to do with the fact that fixing the null reference problem is much easier. Once that is fixed, they hang a big "Mission Accomplished" banner on their code and unzip their flight suit.
Note: fixing the race condition probably involves using a synchronization flag to track whether the handler should run
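One way that flag-based fix might be sketched, with the consumer owning both the flag and the handler (all names here are illustrative, and this does not remove the need for care around what runs while the lock is held):

```csharp
using System;

// Sketch: the consumer gates its handler on its own subscription flag, so even
// if the publisher invokes a stale snapshot of the delegate after the consumer
// has unsubscribed, the handler body does nothing.
class Publisher
{
    public event EventHandler Something;

    public void Raise()
    {
        var copy = Something;            // the classic copy-and-check idiom
        if (copy != null) copy(this, EventArgs.Empty);
    }
}

class Consumer
{
    private readonly object gate = new object();
    private bool subscribed;
    public int HandledCount;

    public void Subscribe(Publisher p)
    {
        lock (gate) { subscribed = true; }
        p.Something += OnSomething;
    }

    public void Unsubscribe(Publisher p)
    {
        p.Something -= OnSomething;
        lock (gate) { subscribed = false; } // after this, stale calls are ignored
    }

    private void OnSomething(object sender, EventArgs e)
    {
        lock (gate)
        {
            if (!subscribed) return; // stale invocation after unsubscribe: ignore
            HandledCount++;          // ... real handling here ...
        }
    }
}
```

The trade-off is that the consumer, not the publisher, carries the synchronization, which matches the point above that the handler must be robust against late invocations.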
So I'm a little late to the party here. :)
As for the use of null rather than the null object pattern to represent events with no subscribers, consider this scenario. You need to invoke an event, but constructing the object (EventArgs) is non-trivial, and in the common case your event has no subscribers. It would be beneficial to you if you could optimize your code to check to see if you had any subscribers at all before you committed processing effort to constructing the arguments and invoking the event.
With this in mind, a solution is to say "well, zero subscribers is represented by null." Then simply perform the null check before performing your expensive operation. I suppose another way of doing this would have been to have a Count property on the Delegate type, so you'd only perform the expensive operation if myDelegate.Count > 0. Using a Count property is a somewhat nice pattern that solves the original problem of allowing optimization, and it also has the nice property of being able to be invoked without causing a NullReferenceException.
Keep in mind, though, that since delegates are reference types, they are allowed to be null. Perhaps there was simply no good way of hiding this fact under the covers and supporting only the null object pattern for events, so the alternative may have been forcing developers to check both for null and for zero subscribers. That would be even uglier than the current situation.
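The optimization described above — treating null as "zero subscribers" so you can skip building an expensive EventArgs — looks roughly like this (Sensor, ReadingEventArgs, and ComputeExpensiveReading are hypothetical names for illustration):

```csharp
using System;

class ReadingEventArgs : EventArgs
{
    public double Value { get; }
    public ReadingEventArgs(double value) { Value = value; }
}

class Sensor
{
    public event EventHandler<ReadingEventArgs> ReadingTaken;

    public void TakeReading()
    {
        // Null means no subscribers: bail out before paying for
        // the expensive argument construction and the invocation.
        var handler = ReadingTaken;
        if (handler == null)
            return;

        handler(this, new ReadingEventArgs(ComputeExpensiveReading()));
    }

    static double ComputeExpensiveReading() => 42.0; // stand-in for real work
}
```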
Note: This is pure speculation. I'm not involved with the .NET languages or CLR.
For single-threaded applications, you are correct: this is not an issue.
However, if you are making a component that exposes events, there is no guarantee that a consumer of your component is not going to go multithreading, in which case you need to prepare for the worst.
Using the empty delegate does solve the problem, but also causes a performance hit on every call to the event, and could possibly have GC implications.
You are right that the consumer must have tried to unsubscribe in order for this to happen, but if they made it past the temp copy, then consider the message already in transit.
If you don't use the temporary variable, and don't use the empty delegate, and someone unsubscribes, you get a null reference exception, which is fatal, so I think the cost is worth it.
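The temp-copy pattern being discussed is just this (the class and method names are illustrative; OnChanged is public here only so the sketch is easy to exercise):

```csharp
using System;

class Publisher
{
    public event EventHandler Changed;

    public void OnChanged()
    {
        // Copy the delegate to a local first: if another thread
        // unsubscribes between the null check and the invocation,
        // the local copy is unaffected, so no NullReferenceException.
        // The unsubscribed handler may still be called once — the
        // "message already in transit" case mentioned above.
        EventHandler handler = Changed;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```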
I've never really considered this to be much of an issue because I generally only protect against this sort of potential threading badness in static methods (etc) on my reusable components, and I don't make static events.
Am I doing it wrong?
Wire all your events at construction and leave them alone. The design of the Delegate class cannot possibly handle any other usage correctly, as I will explain in the final paragraph of this post.
First of all, there's no point in trying to intercept an event notification when your event handlers must already make a synchronized decision about whether/how to respond to the notification.
Anything that may be notified, should be notified. If your event handlers are properly handling the notifications (i.e. they have access to an authoritative application state and respond only when appropriate), then it will be fine to notify them at any time and trust they will respond properly.
The only time a handler shouldn't be notified that an event has occurred, is if the event in fact hasn't occurred! So if you don't want a handler to be notified, stop generating the events (i.e. disable the control or whatever is responsible for detecting and bringing the event into existence in the first place).
Honestly, I think the Delegate class is unsalvageable. The merger/transition to a MulticastDelegate was a huge mistake, because it effectively changed the (useful) definition of an event from something that happens at a single instant in time, to something that happens over a timespan. Such a change requires a synchronization mechanism that can logically collapse it back into a single instant, but the MulticastDelegate lacks any such mechanism. Synchronization should encompass the entire timespan or instant the event takes place, so that once an application makes the synchronized decision to begin handling an event, it finishes handling it completely (transactionally). With the black box that is the MulticastDelegate/Delegate hybrid class, this is near impossible, so adhere to using a single-subscriber and/or implement your own kind of MulticastDelegate that has a synchronization handle that can be taken out while the handler chain is being used/modified. I'm recommending this, because the alternative would be to implement synchronization/transactional-integrity redundantly in all your handlers, which would be ridiculously/unnecessarily complex.
Please take a look here: http://www.danielfortunov.com/software/%24daniel_fortunovs_adventures_in_software_development/2009/04/23/net_event_invocation_thread_safety
This is the correct solution and should always be used instead of all other workarounds.
“You can ensure that the internal invocation list always has at least one member by initializing it with a do-nothing anonymous method. Because no external party can have a reference to the anonymous method, no external party can remove the method, so the delegate will never be null”
— Programming .NET Components, 2nd Edition, by Juval Löwy
public static event EventHandler<EventArgs> PreInitializedEvent = delegate { };
public static void OnPreInitializedEvent(EventArgs e)
{
// No check required - event will never be null because
// we have subscribed an empty anonymous delegate which
// can never be unsubscribed. (But causes some overhead.)
PreInitializedEvent(null, e);
}
I don't believe the question is constrained to the c# "event" type. Removing that restriction, why not re-invent the wheel a bit and do something along these lines?
Raise event thread safely - best practice
Ability to sub/unsubscribe from any thread while within a raise (race condition removed)
Operator overloads for += and -= at the class level.
Generic caller-defined delegate
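A minimal sketch of such a hand-rolled event type, under the assumptions listed above (all names are illustrative): subscribe and unsubscribe take a lock, and Raise invokes a snapshot of the handler list so modification during a raise is safe.

```csharp
using System;
using System.Collections.Generic;

class SafeEvent<TArgs>
{
    readonly object gate = new object();
    readonly List<Action<TArgs>> handlers = new List<Action<TArgs>>();

    // Operator overloads give the familiar += / -= syntax at the class level.
    public static SafeEvent<TArgs> operator +(SafeEvent<TArgs> ev, Action<TArgs> h)
    {
        lock (ev.gate) ev.handlers.Add(h);
        return ev;
    }

    public static SafeEvent<TArgs> operator -(SafeEvent<TArgs> ev, Action<TArgs> h)
    {
        lock (ev.gate) ev.handlers.Remove(h);
        return ev;
    }

    public void Raise(TArgs args)
    {
        Action<TArgs>[] snapshot;
        lock (gate) snapshot = handlers.ToArray();
        // Handlers added or removed during this loop affect only
        // future raises, never the snapshot we are iterating.
        foreach (var h in snapshot)
            h(args);
    }
}
```

As with the temp-copy pattern, a handler removed mid-raise can still receive one final call from the snapshot; the race on the list itself is gone, but "message in transit" remains.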
Thanks for a useful discussion. I was working on this problem recently and made the following class, which is a bit slower but avoids calls to disposed objects.
The main point here is that the invocation list can be modified even while the event is being raised.
/// <summary>
/// Thread safe event invoker
/// </summary>
public sealed class ThreadSafeEventInvoker
{
/// <summary>
/// Dictionary of delegates
/// </summary>
readonly ConcurrentDictionary<Delegate, DelegateHolder> delegates = new ConcurrentDictionary<Delegate, DelegateHolder>();
/// <summary>
/// List of delegates to be called; we need it because it is relatively easy to implement a loop with list
/// modification inside of it
/// </summary>
readonly LinkedList<DelegateHolder> delegatesList = new LinkedList<DelegateHolder>();
/// <summary>
/// locker for delegates list
/// </summary>
private readonly ReaderWriterLockSlim listLocker = new ReaderWriterLockSlim();
/// <summary>
/// Add delegate to list
/// </summary>
/// <param name="value"></param>
public void Add(Delegate value)
{
var holder = new DelegateHolder(value);
if (!delegates.TryAdd(value, holder)) return;
listLocker.EnterWriteLock();
delegatesList.AddLast(holder);
listLocker.ExitWriteLock();
}
/// <summary>
/// Remove delegate from list
/// </summary>
/// <param name="value"></param>
public void Remove(Delegate value)
{
DelegateHolder holder;
if (!delegates.TryRemove(value, out holder)) return;
Monitor.Enter(holder);
holder.IsDeleted = true;
Monitor.Exit(holder);
}
/// <summary>
/// Raise an event
/// </summary>
/// <param name="args"></param>
public void Raise(params object[] args)
{
DelegateHolder holder = null;
try
{
// get root element
listLocker.EnterReadLock();
var cursor = delegatesList.First;
listLocker.ExitReadLock();
while (cursor != null)
{
// get its value and a next node
listLocker.EnterReadLock();
holder = cursor.Value;
var next = cursor.Next;
listLocker.ExitReadLock();
// lock holder and invoke if it is not removed
Monitor.Enter(holder);
if (!holder.IsDeleted)
holder.Action.DynamicInvoke(args);
else if (!holder.IsDeletedFromList)
{
listLocker.EnterWriteLock();
delegatesList.Remove(cursor);
holder.IsDeletedFromList = true;
listLocker.ExitWriteLock();
}
Monitor.Exit(holder);
cursor = next;
}
}
catch
{
// clean up
if (listLocker.IsReadLockHeld)
listLocker.ExitReadLock();
if (listLocker.IsWriteLockHeld)
listLocker.ExitWriteLock();
if (holder != null && Monitor.IsEntered(holder))
Monitor.Exit(holder);
throw;
}
}
/// <summary>
/// helper class
/// </summary>
class DelegateHolder
{
/// <summary>
/// delegate to call
/// </summary>
public Delegate Action { get; private set; }
/// <summary>
/// flag shows if this delegate removed from list of calls
/// </summary>
public bool IsDeleted { get; set; }
/// <summary>
/// flag shows if this instance was removed from all lists
/// </summary>
public bool IsDeletedFromList { get; set; }
/// <summary>
/// Constructor
/// </summary>
/// <param name="d"></param>
public DelegateHolder(Delegate d)
{
Action = d;
}
}
}
And the usage is:
private readonly ThreadSafeEventInvoker someEventWrapper = new ThreadSafeEventInvoker();
public event Action SomeEvent
{
add { someEventWrapper.Add(value); }
remove { someEventWrapper.Remove(value); }
}
public void RaiseSomeEvent()
{
someEventWrapper.Raise();
}
Test
I tested it in the following manner. I have a thread which creates and destroys objects like this:
var objects = Enumerable.Range(0, 1000).Select(x => new Bar(foo)).ToList();
Thread.Sleep(10);
objects.ForEach(x => x.Dispose());
In a Bar (a listener object) constructor I subscribe to SomeEvent (which is implemented as shown above) and unsubscribe in Dispose:
public Bar(Foo foo)
{
this.foo = foo;
foo.SomeEvent += Handler;
}
public void Handler()
{
if (disposed)
Console.WriteLine("Handler is called after object was disposed!");
}
public void Dispose()
{
foo.SomeEvent -= Handler;
disposed = true;
}
Also I have a couple of threads which raise the event in a loop.
All these actions are performed simultaneously: many listeners are created and destroyed while the event is being fired at the same time.
If there were a race condition I would see messages in the console, but it stays empty. If I use plain CLR events instead, the console fills with warning messages. So I conclude that it is possible to implement thread-safe events in C#.
What do you think?