Event Ordering in .NET (C#)

Got a quick question on event ordering in C#/.NET.
Let's say you have a while loop that reads a socket interface (TCP). The interface is definitely taking care of the ordering (it's TCP). Let's say your packet interface is written so that each "packet" you get in the stream, you will forward it to the next "layer" or the next object via an event callback.
So here is the pseudocode:
while (1) {
    readsocket();
    if (data received == complete packet)
        raiseEvent(packet);
}
My questions are:
Are the events generated in order? (i.e. preserve ordering)
I am assuming #1 is correct; does that mean raising the event blocks the while loop until the handlers finish processing?

You never know how the event is implemented. It's possible that the events will all be executed synchronously, in order, and based on some meaningful value. It's also possible that they'll be executed synchronously in some arbitrary and inconsistent ordering. It's also possible that they won't even be executed synchronously, and that the various event handlers will be executed in new threads (or thread pool threads). It's entirely up to the implementation of the event to determine all of that.
It's rather uncommon to see different event handlers executed in parallel (and by that I mean very, very rare), and almost all events that you come across will be backed by a single multicast delegate, meaning the handlers will be fired in the order in which they were added. But you have no way of actually knowing that's the case (barring decompiling the code); there is no indication from the public API of how an event is implemented.
Regardless of all of this, from a conceptual perspective it would be best not to rely on any ordering of event handler invocations, and it's generally best to program as if the various event handlers could run concurrently; at a conceptual level, that is what an event represents, even if the implementation details are more restrictive.
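To make the common case concrete, here is a minimal sketch (type and member names are illustrative) of a plain C# event backed by the default multicast delegate: handlers run synchronously on the raising thread, in subscription order.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: a plain C# event backed by the default multicast delegate.
public class PacketSource
{
    public event Action<int> PacketReceived = delegate { };

    // Raising the event blocks until every handler has returned.
    public void Raise(int packet) => PacketReceived(packet);
}

public static class OrderingDemo
{
    // Subscribes two handlers and returns the order in which they ran.
    public static List<string> Run()
    {
        var log = new List<string>();
        var source = new PacketSource();
        source.PacketReceived += p => log.Add("first:" + p);
        source.PacketReceived += p => log.Add("second:" + p);
        source.Raise(42);
        return log;
    }
}
```

That matches what this answer describes as the usual implementation, but nothing in an event's public API promises it.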

Related

Unity & Zenject - How to execute methods in sequence order from a TSignal.Fire()

I am trying to learn how to work with Zenject and Unity, and I have encountered a particular problem that I don't know whether it can be solved using only Zenject's API.
Let's assume I have MethodA, MethodB and MethodC, and a SignalA.
Is it possible to make this sequence:
SignalA.Fire() => MethodA (until released/finished)
=> MethodB (until released/finished)
=> MethodC (until released/finished)
Right now I have this piece of code:
private void BindSignalA()
{
    Container.DeclareSignal<SignalA>().RunSync();
    Container.BindSignal<SignalA>().ToMethod<MethodA>(handler => handler.Execute).FromNew();
    Container.BindSignal<SignalA>().ToMethod<MethodB>(handler => handler.Execute).FromNew();
    Container.BindSignal<SignalA>().ToMethod<MethodC>(handler => handler.Execute).FromNew();
}
And MethodA looks like this:
public class MethodA
{
    public async void Execute()
    {
        await new WaitUntil(() => false);
    }
}
So the expected result is that MethodB and MethodC will never be executed.
But the actual result is that all the methods are executed.
Is there a solution using Zenject's api to make this happen?
Thanks for the help, and in case any more information is needed I would love to know.
I am not familiar with signals, but checking the documentation, and mainly taking the third point below into consideration, maybe your case is not a good scenario for signal use.
When To Use Signals
Signals are most appropriate as a communication mechanism when:
There might be multiple interested receivers listening to the signal
The sender doesn't need to get a result back from the receiver
The sender doesn't even really care if it gets received. In other words, the sender should not rely on some state changing when the signal is called for subsequent sender logic to work correctly. Ideally signals can be thought as "fire and forget" events
The sender triggers the signal infrequently or at unpredictable times
On the other hand, you might be confusing decoupling the code with the concept of synchronous execution. Delegates, and events (which are a specific type of delegate), are pieces of logic from one class that you can hold on to so that they can be executed by another class: you "subscribe" or "listen" to something that might happen, so that your handler is invoked from another part of the code. However, that does not involve any asynchronous code or multithreading.
As far as I can guess, signals are used to handle event subscription/invocation in a decoupled manner via the Zenject signal object, in a similar way as dependencies between classes are handled in a decoupled manner with interfaces. So first I would check whether your case is suitable for their use by reading the documentation carefully and following the examples provided until the concept clicks.
Meanwhile, I would first try normal C# delegates, so that afterwards the purpose of the Zenject signal can be understood. Also, providing a simple working example of what you are trying to do without Zenject would be a very helpful starting point for achieving that same thing the Zenject way.
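As a starting point without Zenject (the names below are illustrative, not Zenject API), plain C# multicast delegates already give the behavior the question expects: subscribers run synchronously in subscription order, and an unhandled exception in one subscriber prevents the rest from running.

```csharp
using System;
using System.Collections.Generic;

public static class SequentialSignalDemo
{
    // Invokes three "handlers" through one multicast delegate.
    public static List<string> Run()
    {
        var log = new List<string>();
        Action signal = delegate { };
        signal += () => log.Add("A");
        signal += () =>
        {
            log.Add("B");
            throw new InvalidOperationException("B never finishes");
        };
        signal += () => log.Add("C"); // never runs: the exception aborts the chain

        try { signal(); }
        catch (InvalidOperationException) { /* chain stopped at B */ }
        return log;
    }
}
```

Once this synchronous behavior is clear, it becomes easier to judge whether a Zenject signal (which adds decoupling, not sequencing) is the right tool.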

.NET Disruptor async patterns

I am using Disruptor-net in a C# application. I'm having some trouble understanding how to do async operations in the disruptor pattern.
Assuming I have a few event handlers, and the last one in the chain hands a message off to my business logic processors, how do I handle async operations inside of my business logic processor? When my business logic needs to do some database insert, does it hand a message off to my output disruptor, which does the insert, then publishes a new message on my input disruptor with all the state to continue the transaction?
In addition, within my output disruptor, would I use Tasks? I'm 99.9% sure I'd want to use tasks so I don't have a ton of event handlers blocking on async operations. How does that fit in with the disruptor pattern, then? It seems kind of weird to just do something like this in my EventHandler:
void OnEvent(MyEvent evt, long sequence, bool endOfBatch)
{
    db.InsertAsync(evt).ContinueWith(task => inputDisruptor.Publish(task));
}
The Disruptor has the following features:
Dedicated threads, which can be pinned / shielded / prioritized for better performance.
Explicit queues, which can be monitored and generate backpressure.
In-order message processing.
No heap allocations, which can help reduce GC pauses, or even remove them if your own code does not generate heap allocations.
Your code sample does not really follow the Disruptor philosophy:
Task.ContinueWith runs asynchronously by default, so the continuation will use thread-pool threads.
Because you are using the thread-pool, you have no guarantee on the continuation execution order. Even if you use TaskContinuationOptions.ExecuteSynchronously, you have no guarantee that InsertAsync will invoke the continuations in-order.
You are creating an implicit queue with all the pending insert operations. This queue is hidden and does not generate backpressure.
I will put aside the fact that your code is generating heap allocations. You will not benefit from the "no GC pauses" effect but it is probably very acceptable for your use-case.
Also, please note that batching is crucial to support high-throughput for IO operations. You should really use the Disruptor batches in your event handler.
I will simplify the problem to 3 event handlers:
PreInsertEventHandler: pre-insert logic (not shown here)
InsertEventHandler: insert logic
PostInsertEventHandler: post-insert logic
Of course, I am assuming that the post-insert logic must be run only after insert completion.
If your goal is to save the events in InsertEventHandler and to block until completion before processing the event in the next handler, you should probably just wait in InsertEventHandler.
InsertEventHandler:
void OnEvent(MyEvent evt, long sequence, bool endOfBatch)
{
    _pendingInserts.Add((evt, task: db.InsertAsync(evt)));
    if (endOfBatch)
    {
        var insertSucceeded = Task.WaitAll(_pendingInserts.Select(x => x.task).ToArray(), _insertTimeout);
        foreach (var (pendingEvent, _) in _pendingInserts)
        {
            pendingEvent.InsertSucceeded = insertSucceeded;
        }
        _pendingInserts.Clear();
    }
}
Of course, if your DB API exposes a bulk-insert method, it might be better to add the events in a list and to save them all at the end of the batch.
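Assuming the DB API does expose such a bulk method, a sketch of that variant could look like the following; MyEvent and the bulk-insert delegate are hypothetical stand-ins, not real Disruptor-net or driver APIs.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical stand-ins: MyEvent and the bulk-insert delegate are illustrative only.
public class MyEvent
{
    public int Id;
}

public class BulkInsertEventHandler
{
    private readonly List<MyEvent> _batch = new List<MyEvent>();
    private readonly Func<IReadOnlyList<MyEvent>, Task> _bulkInsert;

    public BulkInsertEventHandler(Func<IReadOnlyList<MyEvent>, Task> bulkInsert)
    {
        _bulkInsert = bulkInsert;
    }

    public void OnEvent(MyEvent evt, long sequence, bool endOfBatch)
    {
        _batch.Add(evt);
        if (endOfBatch)
        {
            // One round-trip per Disruptor batch instead of one insert per event.
            _bulkInsert(_batch).Wait();
            _batch.Clear();
        }
    }
}
```

As with the per-event version, waiting at the end of the batch keeps the in-order guarantee while still amortizing the IO cost across the batch.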
There are many other options, like waiting in PostInsertEventHandler, or queueing the insert results in another Disruptor, each coming with its own pros and cons. A SO answer might not be the best place to discuss and analyze all of them.

Best practice regarding firing and handling of .NET events

It is obvious that firing events inside of a lock (i.e. critical section) is prone to deadlocks due to the possibility that the event handler may block on some asynchronous operation that also needs to acquire the same lock. Now, design-wise there are two possible solutions that come to my mind:
If it is needed to fire an event inside a lock, then always fire the event asynchronously. This can be performed by using ThreadPool, for example, if the firing order of the events does not matter. If the order of the events must be preserved, then a single event firing thread can be used to fire the events in order, but asynchronously.
The class/library that fires the events does not need to take any necessary precautions to prevent the deadlock and just fire the event inside the lock. In this case it is the event handler's responsibility to process the event asynchronously if it performs locking (or any other blocking operation) inside the event handler. If the event handler does not conform to this rule, then it should suffer the consequences in case a deadlock occurs.
I honestly think that the second option is better in terms of separation of concerns principle as the event-firing code should not guess what the event handler may or may not do.
However, practically, I am inclined to take the first route since the second option seems to converge to the point that every event handler must now run all event-handling code asynchronously, since most of the time it is not clear whether some series of calls performs a blocking operation or not. For complex event-handlers, tracing all possible paths (and, moreover, keeping track of it as the code evolves) is definitely not an easy task. Therefore, solving the problem in one place (where the event is fired) seems to be preferable.
I am interested in seeing if there are other alternative solutions that I may have overlooked and what possible advantages/disadvantages and pitfalls can be attributed to each possible solution.
Is there a best practice for this kind of situation?
There is a third option: Delay raising the event until the lock is released. Normally, locks are taken for a short time. It is usually possible to delay raising the event till after lock (but on the same thread).
The BCL almost never calls user code under a lock. This is an explicit design principle of theirs. For example ConcurrentDictionary.AddOrUpdate does not call the factory while under a lock. This counter-intuitive behavior causes many Stack Overflow questions because it can lead to multiple factory invocations for the same key.
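A minimal sketch of this third option (all names are illustrative): compute the new state inside the lock, then raise the event on the same thread after the lock is released.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: mutate state under the lock, raise the event after releasing it.
public class Inventory
{
    private readonly object _sync = new object();
    private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();

    public event Action<string, int> CountChanged = delegate { };

    public void Add(string item, int delta)
    {
        int newCount;
        lock (_sync)
        {
            _counts.TryGetValue(item, out var current);
            newCount = current + delta;
            _counts[item] = newCount;
        } // lock released here

        // Same thread and same ordering as option 2, but handlers may now
        // block or take other locks without risking a deadlock on _sync.
        CountChanged(item, newCount);
    }
}
```

The captured local (newCount) is what makes this safe: the handler sees a consistent snapshot even if another thread mutates _counts before the event fires.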
I honestly think that the second option is better in terms of separation of concerns principle as the event-firing code should not guess what the event handler may or may not do.
I don't think the first option violates separation of concerns. Extract an AsyncEventNotifier class and have your event-generating object delegate to it, something like (obviously not complete):
class AsyncEventNotifier
{
    private List<EventListener> _listeners = new List<EventListener>();
    public void AddEventListener(EventListener listener) { _listeners.Add(listener); }
    public void NotifyListeners(EventArgs args)
    {
        // spawn a new thread (or queue a work item) to call the listener methods
    }
    ....
}
class EventGeneratingClass
{
    private AsyncEventNotifier _asyncEventNotifier;
    public void AddEventListener(EventListener listener) { _asyncEventNotifier.AddEventListener(listener); }
    private void FireSomeEvent()
    {
        var eventArgs = new EventArgs();
        ...
        _asyncEventNotifier.NotifyListeners(eventArgs);
    }
}
Now your original class isn't responsible for anything it wasn't responsible for before. It knows how to add listeners to itself and it knows how to tell its listeners that an event has occurred. The new class knows the implementation details of "tell its listeners that an event has occurred". Listener order is also preserved if it's important. It probably shouldn't be, though. If listenerB can't handle an event until listenerA has already handled it then listenerB is probably more interested in an event generated by listenerA than by the object it was listening to. Either that or you should have another class whose responsibility is to know the order that an event needs to be processed in.

C# pub/sub service - how to fire events on background threads?

I've developed some code that receives a series of values from a hardware device, every 50ms in the form of name/value pairs. I want to develop a pub/sub service whereby subscribers can be notified when the value of a particular item changes. The Subscribe method might look something like this:-
public void Subscribe(string itemName, Action<string, long> callback)
The code that reads the hardware values will check if a value has changed since last time. If so, it will iterate through any subscribers for that item, calling their delegates. As it stands, the delegates will be called on the same thread which isn't ideal - I need to keep the polling as fast as possible. What's the best approach for calling the callback delegates on separate threads? Should the subscribers pass in (say) a task/thread, or should the publisher be responsible for spinning these up?
Note that I need to pass a couple of parameters to the delegate (the item name and its value), so this might affect the approach taken. I know you can pass a single "state" object to tasks but it feels a bit unintuitive requiring the subscribers to implement an Action callback delegate (which must then be cast to some other type containing the name and value).
Also, I'm assuming that creating a new task/thread each time a delegate is called will hurt performance, so some kind of "pool" might be required?
I would maintain the same structure that you have now and put the responsibility for prompt action onto the callbacks, i.e. the callbacks should not block or perform complex, lengthy actions directly.
If a particular callback needs to perform any lengthy action, it should queue off the Action data to a thread of its own and then return 'immediately', e.g. it might BeginInvoke/PostMessage the data to a GUI thread, queue it to a thread that inserts into a DB table, or queue it to a logger (or indeed, any combo chained together). These lengthy/blocking actions can then proceed in parallel while the device interface continues to poll.
This way, you keep the working structure you have and do not have to inflict any inter-thread comms onto callbacks that do not need it. The device interface remains encapsulated, just firing callbacks.
EDIT:
'creating a new task/thread each time a delegate is called will hurt performance' - yes, and it would also make it difficult to maintain state. Often, such threads are written as while(true) loops with some signaling call at the top, e.g. a blocking queue pop(), and so only need creating once, at startup, and never need terminating.
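In .NET, that "create once at startup, block at the top of the loop" worker maps naturally onto BlockingCollection<T>; a rough sketch with illustrative names:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch: one long-lived worker thread draining a blocking queue.
public class CallbackWorker
{
    private readonly BlockingCollection<(string Name, long Value)> _queue =
        new BlockingCollection<(string Name, long Value)>();
    private readonly Thread _thread;

    public List<(string, long)> Processed { get; } = new List<(string, long)>();

    public CallbackWorker()
    {
        // Created once at startup; Take() blocks while the queue is empty.
        _thread = new Thread(() =>
        {
            foreach (var item in _queue.GetConsumingEnumerable())
                Processed.Add(item); // lengthy/blocking work would go here
        });
        _thread.Start();
    }

    // Called from the fast polling loop's callback: enqueue and return immediately.
    public void Post(string name, long value) => _queue.Add((name, value));

    public void Shutdown()
    {
        _queue.CompleteAdding(); // drains remaining items, then the loop exits
        _thread.Join();
    }
}
```

The polling loop's callback only pays the cost of an enqueue, which keeps the device interface fast while preserving per-item ordering on the worker.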

How can I make my implementation of IObservable<T> multithreaded?

I wrote an implementation based on the examples at [Rx DEVHOL202] and http://rxwiki.wikidot.com/101samples#toc48
Here is my code: http://csharp.pastebin.com/pm2NAPx6
It works, but the calls to OnNext are blocking, and I would like to make them non-blocking to simulate a network read that asynchronously hands off each chunk of bytes, as it is read, to a handler (not shown here in full, but it might cache results and do further processing).
What is a good way of doing that?
Once the exception gets thrown, all the subsequent OnNext() calls are not processed, even if I don't explicitly exit the loop and indicate completion.
Why is this so?
I would strongly recommend against trying to implement your own IObservable. The implicit rules go beyond thread safety and into method call ordering.
Usually you would return IObservable instances from methods, but if you need a class that implements it directly, you should wrap a Subject:
public class SomeObservable<T> : IObservable<T>
{
    private Subject<T> subject = new Subject<T>();

    public IDisposable Subscribe(IObserver<T> observer)
    {
        return subject.Subscribe(observer);
    }
}
1. You need to be careful about how you support this from your observer (as you may have shared data), but you can make your handling of the calls asynchronous in a few ways:
Call ObserveOn(Scheduler.TaskPool) (or ThreadPool if you are pre-4.0) before you call Subscribe. This causes messages to be routed through a scheduler (in this case, a task)
Pass the IScheduler to the Observer constructor
Start an asynchronous task/thread from your subscriber
2. This is the expected functionality. The contract between IObservable and IObserver is (OnNext)* (OnCompleted | OnError)?, which is to say "zero or more calls to OnNext, optionally followed by EITHER OnCompleted or OnError". After OnCompleted|OnError, it is invalid to call OnNext.
All the operators in Rx (Where, Select, etc) enforce this rule, even if the source doesn't.
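That grammar can be sketched without Rx: a toy observer that ignores any notification after a terminal one, which is the rule Rx operators enforce (illustrative only, not the real System.Reactive implementation):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the (OnNext)* (OnCompleted | OnError)? contract,
// not the real System.Reactive implementation.
public class GrammarEnforcingObserver<T>
{
    private bool _terminated;
    public List<string> Log { get; } = new List<string>();

    public void OnNext(T value)
    {
        if (_terminated) return; // invalid after OnCompleted/OnError: ignored
        Log.Add("next:" + value);
    }

    public void OnError(Exception ex)
    {
        if (_terminated) return;
        _terminated = true; // terminal: nothing may follow
        Log.Add("error");
    }

    public void OnCompleted()
    {
        if (_terminated) return;
        _terminated = true; // terminal: nothing may follow
        Log.Add("completed");
    }
}
```

This is why the questioner's OnNext calls stop being processed after the exception: once OnError has fired, the sequence is over by contract.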
I'm not sure if I understand your question correctly, but why can't you just execute whatever logic you have on a different thread, or, if it's small enough, push it onto the ThreadPool?
Here is an example:
ThreadPool.QueueUserWorkItem(o =>
{
    _paidSubj.OnNext(this); // Raise PAID event
});
I'm confused about the data type on Subject, I have never seen that class in C#... is it something that you created? Is OnNext an event that gets raised or is it just a method? If OnNext is an event, then you can use BeginInvoke to invoke it asynchronously:
_paidSubj.OnNext.BeginInvoke(this, null, null);
Update:
An important thing to note if you implement this kind of asynchronous behavior: if you notify an IObserver by passing it the Order, you might get inconsistencies when the observer tries to read the data (namely the order buffer) while the Order continues to modify the buffer on its read thread. There are at least two ways to solve this:
Restrict access to the memory which will get modified by using locks.
Only notify the observer with the relevant information that you want it to see:
a. By passing the information as a value (not as a reference).
b. By creating an immutable structure that transmits the information.
P.S.
Where did you get Subject from? Is Subject supposed to be an OrderObserver?
