I have a listener, which receives work in the form of IPayload. The listener should push this work to observers who actually do the work. This is my first crude attempt to achieve this:
public interface IObserver
{
void DoWork(IPayload payload);
}
public interface IObservable
{
void RegisterObserver(IObserver observer);
void RemoveObserver(IObserver observer);
void NotifyObservers(IPayload payload);
}
public class Observer : IObserver
{
public void DoWork(IPayload payload)
{
// do some heavy lifting
}
}
public class Listener : IObservable
{
private readonly List<IObserver> _observers = new List<IObserver>();
public void PushIncomingPayLoad(IPayload payload)
{
NotifyObservers(payload);
}
public void RegisterObserver(IObserver observer)
{
_observers.Add(observer);
}
public void RemoveObserver(IObserver observer)
{
_observers.Remove(observer);
}
public void NotifyObservers(IPayload payload)
{
Parallel.ForEach(_observers, observer =>
{
observer.DoWork(payload);
});
}
}
Is this a valid approach that follows the observer/observable pattern (i.e. pub/sub)? My understanding is that NotifyObservers also spins up a thread for each observer. Is this correct? Any improvement suggestions are very much welcome.
Please note that all observers have to finish their work before new work in the form of a payload is passed on to them - the order of 'observation' does not matter. Basically, the listener has to act as a master while exploiting the cores of the host as much as possible using the TPL. IMHO this requires the explicit registration of observers with the listener/observable.
PS:
I think Parallel.ForEach does not create a thread for each observer (see "Why isn't Parallel.ForEach running multiple threads?"). If this is true, how can I ensure that a thread is created for each observer?
An alternative I have in mind is this:
public async void NotifyObservers(IPayload payload)
{
var tasks = new List<Task>();
foreach (var observer in _observers)
{
var observer1 = observer;
tasks.Add(Task.Run(() => observer1.DoWork(payload)));
}
await Task.WhenAll(tasks);
}
Of course you can do it this way, but in .NET that is not needed unless you want to reinvent the wheel :-)
In C# this could be done using events.
A short example :
//Your Listener, which has a public event where others can add themselves as listeners
public class Listener{
//your event where others "add" themselves as listeners
public event EventHandler<PayloadEventArgs> IncomingPayload;
//a method where you process new data and want to notify the others
public void PushIncomingPayLoad(IPayload payload)
{
//check if there are any listeners
if(IncomingPayload != null)
//if so, then notify them with the data in the PayloadEventArgs
IncomingPayload(this, new PayloadEventArgs(payload));
}
}
//Your EventArgs class to hold the data
public class PayloadEventArgs : EventArgs{
public IPayload Payload { get; private set; }
public PayloadEventArgs(IPayload payload){
this.Payload = payload;
}
}
public class Worker{
//subscribe this instance as an observer
public void Subscribe(Listener listener){
listener.IncomingPayload += DoWork;
}
//unsubscribe this instance
public void Unsubscribe(Listener listener){
listener.IncomingPayload -= DoWork;
}
//This method gets called when the Listener notifies the IncomingPayload listeners
void DoWork(Object sender, PayloadEventArgs e){
Console.WriteLine(e.Payload);
}
}
EDIT: As the question asks for parallel execution, how about starting the new thread on the subscriber side? I think this is the easiest way to achieve this.
//Inside the DoWork method of the subscriber start a new thread
Task.Factory.StartNew( () =>
{
//Do your work here
});
//If you want to make sure that a dedicated (non-thread-pool) thread is used for the task, add the TaskCreationOptions.LongRunning option
Task.Factory.StartNew( () =>
{
//Do your work here
}, TaskCreationOptions.LongRunning);
Hopefully this answers your question? If not, please leave a comment.
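If the listener itself must block until every subscriber has finished before the next payload is pushed (one of the requirements in the original question), a minimal sketch is to run the event's invocation list in parallel and wait for all handlers to return. This is only one possible variation built on the Listener and PayloadEventArgs types above, not part of the answer itself, and it assumes System.Linq and System.Threading.Tasks are available.
public void PushIncomingPayLoad(IPayload payload)
{
    var handlers = IncomingPayload;
    if (handlers == null)
        return;
    // Each handler runs on a thread-pool thread; Parallel.ForEach blocks until
    // all of them have returned, so the next payload is only pushed after every
    // observer has finished its work.
    Parallel.ForEach(
        handlers.GetInvocationList().Cast<EventHandler<PayloadEventArgs>>(),
        handler => handler(this, new PayloadEventArgs(payload)));
}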
I am working on a WPF application using Prism. I am using EventAggregator to communicate between viewmodels.
public class PublisherViewModel
{
_eventAggregator.GetEvent<RefreshEvent>().Publish("STOCKS");
}
public class SubscriberViewModel
{
public SubscriberViewModel(IEventAggregator ea)
{
ea.GetEvent<RefreshEvent>().Subscribe(RefreshData);
}
void RefreshData(string category)
{
Task.Run(() =>
{
//Long Running Operation
Dispatcher.Invoke(() =>
{
//Refresh UI
});
});
}
}
PublisherViewModel can publish events one after another. However, at the SubscriberViewModel I have a long-running Task that is not awaited (and cannot be), so the second request coming from the publisher starts executing right away. At the SubscriberViewModel I want to handle all incoming requests so that they are executed one after another, in the order in which they arrive.
I am thinking of handling this with a queue-based mechanism.
Could you please suggest the best practice for this?
Thanks!!
Update:-
I have used the below approach
public class BlockingQueue<T> where T : class
{
private readonly BlockingCollection<JobQueueItem<T>> _jobs;
public BlockingQueue(int upperBound)
{
_jobs = new BlockingCollection<JobQueueItem<T>>(upperBound);
var thread = new Thread(new ThreadStart(OnStart));
thread.IsBackground = true;
thread.Start();
}
public void Enqueue(T parameter, Func<T, Task> action)
{
_jobs.Add(new JobQueueItem<T> { Parameter = parameter, JobAction = action });
}
private void OnStart()
{
foreach (var job in _jobs.GetConsumingEnumerable(CancellationToken.None))
{
if (job != null && job.JobAction != null)
{
job.JobAction.Invoke(job.Parameter).Wait();
}
}
}
private class JobQueueItem<T>
{
internal T Parameter { get; set; }
internal Func<T, Task> JobAction { get; set; }
}
}
public class SubscriberViewModel
{
BlockingQueue<RefreshEventArgs> RefreshQueue = new ...;
//inside Subscribed method
RefreshQueue.Enqueue(args, RefreshData);
}
Please suggest. Thanks!
I am thinking to handle this using a queue based mechanism.
This is the way to go. Set up a queue (probably an asynchronous queue), push the events in the subscriber and consume them from a worker task.
TPL Dataflow is one option to do this: create an ActionBlock<string> from the handler and post the events to it as they come in.
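A minimal sketch of that ActionBlock idea (the wiring below is an assumption; only RefreshEvent, RefreshData and IEventAggregator come from the question): by default an ActionBlock processes posted items with MaxDegreeOfParallelism = 1, so the events run strictly one after another, in arrival order.
using System.Threading.Tasks.Dataflow;

public class SubscriberViewModel
{
    private readonly ActionBlock<string> refreshBlock;

    public SubscriberViewModel(IEventAggregator ea)
    {
        // Default options: single consumer, unbounded FIFO queue.
        refreshBlock = new ActionBlock<string>(category => RefreshData(category));
        // Post instead of calling RefreshData directly; Post never blocks the publisher.
        ea.GetEvent<RefreshEvent>().Subscribe(category => refreshBlock.Post(category));
    }

    private void RefreshData(string category)
    {
        // Long running operation; the next queued event is not started
        // until this call returns.
    }
}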
I've been studying the Observer pattern since this morning, but I can't seem to figure out how to implement it with the built-in interfaces. I already looked at some examples but couldn't find a simple one yet.
Here's my code so far, inspired by the Microsoft documentation:
class ObservableClass : IObservable<bool>, IDisposable
{
public bool observableBool;
public List<IObserver<bool>> observers;
public ObservableClass()
{
this.observableBool = false;
this.observers = new List<IObserver<bool>>();
}
public IDisposable Subscribe(IObserver<bool> observer)
{
if (!observers.Contains(observer))
{
AddObserver(observer);
}
return this;
}
public void Dispose()
{
Console.WriteLine("Disposing...");
}
public void AddObserver(IObserver<bool> obs)
{
this.observers.Add(obs);
}
public void RemoveObserver(IObserver<bool> obs)
{
this.observers.Remove(obs);
}
public void SwapBool()
{
observableBool = !observableBool;
}
}
The observable class contains an observableBool field. I want to notify the observer when that field changes value.
Here's my observer:
class ObserverClass : IObserver<bool>
{
public IDisposable observable;
public void OnCompleted()
{
Console.WriteLine("Completed");
}
public void OnError(Exception error)
{
Console.WriteLine("error");
}
public void OnNext(bool value)
{
Console.WriteLine("Next");
}
public virtual void Subscribe(IObservable<bool> obs)
{
if (obs != null)
observable = obs.Subscribe(this);
}
public void stopObserve()
{
observable.Dispose();
}
}
And finally my Program:
static void Main(string[] args)
{
ObservableClass observable = new ObservableClass();
ObserverClass observer = new ObserverClass();
observer.Subscribe(observable);
Console.WriteLine("subscribed observer");
observable.SwapBool();
Console.WriteLine("performed swapBool");
}
Expected output :
subscribed observer
Completed //Returned by ObserverClass.OnCompleted()
performed swapBool
How can I make this work?
How do I call OnCompleted and the other methods of ObserverClass every time observableBool changes?
I know there are other ways to do that, but my goal is to be able to use IObserver and IObservable.
You iterate over your observers to notify them:
public void SwapBool()
{
observableBool = !observableBool;
foreach (var observer in observers)
{
observer.OnNext(observableBool);
}
}
You are meant to call OnNext when there is a new value. OnCompleted is used to notify observers that there will be no more values.
I just noticed your observable is IDisposable...
First of all, disposing the result of Subscribe should unsubscribe that observer, not dispose the observable.
In fact, I would expect that disposing the observable means it will no longer send values (it calls OnCompleted on everybody and releases the list of observers).
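A minimal sketch of that idea, along the lines of the Unsubscriber helper in the Microsoft documentation (the nested class name is an assumption): Subscribe hands back a small IDisposable that only removes the observer from the list, instead of returning the observable itself.
// Inside ObservableClass: returned from Subscribe instead of "return this;"
private sealed class Unsubscriber : IDisposable
{
    private readonly List<IObserver<bool>> observers;
    private readonly IObserver<bool> observer;

    public Unsubscriber(List<IObserver<bool>> observers, IObserver<bool> observer)
    {
        this.observers = observers;
        this.observer = observer;
    }

    public void Dispose()
    {
        // Disposing the subscription removes only this observer;
        // the observable itself is left untouched.
        if (observer != null && observers.Contains(observer))
            observers.Remove(observer);
    }
}

public IDisposable Subscribe(IObserver<bool> observer)
{
    if (!observers.Contains(observer))
        observers.Add(observer);
    return new Unsubscriber(observers, observer);
}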
Other concerns include:
You probably want a set type so you can add and remove observers more efficiently.
List is not thread-safe (one way to guard it is sketched after this list).
Why are you exposing your fields?
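One hedged way to address the thread-safety concern (an illustration only, assuming the field stays a List as in the question) is to guard the collection with a lock and notify from a snapshot:
private readonly object gate = new object();

public void AddObserver(IObserver<bool> obs)
{
    lock (gate) { observers.Add(obs); }
}

public void RemoveObserver(IObserver<bool> obs)
{
    lock (gate) { observers.Remove(obs); }
}

public void SwapBool()
{
    observableBool = !observableBool;
    IObserver<bool>[] snapshot;
    lock (gate)
    {
        // Copy under the lock so observers can subscribe or unsubscribe
        // from inside OnNext without invalidating the iteration.
        snapshot = observers.ToArray();
    }
    foreach (var observer in snapshot)
    {
        observer.OnNext(observableBool);
    }
}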
I guess this is sort of a code review, but here is my implementation of the producer/consumer pattern. What I would like to know is whether there is a case in which the while loops in the ReceivingThread() or SendingThread() methods might stop executing. Please note that EnqueueSend(DataSendEnqeueInfo info) is called from multiple different threads, and I probably can't use tasks here since I definitely have to consume commands on a separate thread.
private Thread mReceivingThread;
private Thread mSendingThread;
private Queue<DataRecievedEnqeueInfo> mReceivingThreadQueue;
private Queue<DataSendEnqeueInfo> mSendingThreadQueue;
private readonly object mReceivingQueueLock = new object();
private readonly object mSendingQueueLock = new object();
private bool mIsRunning;
EventWaitHandle mRcWaitHandle;
EventWaitHandle mSeWaitHandle;
private void ReceivingThread()
{
while (mIsRunning)
{
mRcWaitHandle.WaitOne();
DataRecievedEnqeueInfo item = null;
while (mReceivingThreadQueue.Count > 0)
{
lock (mReceivingQueueLock)
{
item = mReceivingThreadQueue.Dequeue();
}
ProcessReceivingItem(item);
}
mRcWaitHandle.Reset();
}
}
private void SendingThread()
{
while (mIsRunning)
{
mSeWaitHandle.WaitOne();
while (mSendingThreadQueue.Count > 0)
{
DataSendEnqeueInfo item = null;
lock (mSendingQueueLock)
{
item = mSendingThreadQueue.Dequeue();
}
ProcessSendingItem(item);
}
mSeWaitHandle.Reset();
}
}
internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
lock (mReceivingQueueLock)
{
mReceivingThreadQueue.Enqueue(info);
mRcWaitHandle.Set();
}
}
public void EnqueueSend(DataSendEnqeueInfo info)
{
lock (mSendingQueueLock)
{
mSendingThreadQueue.Enqueue(info);
mSeWaitHandle.Set();
}
}
P.S. The idea here is that I am using WaitHandles to put the threads to sleep when the queues are empty, and to signal them to start when new items are enqueued.
UPDATE
I am just going to leave this here: https://blogs.msdn.microsoft.com/benwilli/2015/09/10/tasks-are-still-not-threads-and-async-is-not-parallel/ for people who might be trying to implement the Producer/Consumer pattern using TPL or tasks.
Use a BlockingCollection instead of Queue, EventWaitHandle and lock objects:
public class DataInfo { }
private Thread mReceivingThread;
private Thread mSendingThread;
private BlockingCollection<DataInfo> queue = new BlockingCollection<DataInfo>();
private CancellationTokenSource receivingCts = new CancellationTokenSource();
private void ReceivingThread()
{
try
{
while (!receivingCts.IsCancellationRequested)
{
// This will block until an item is added to the queue or the cancellation token is cancelled
DataInfo item = queue.Take(receivingCts.Token);
ProcessReceivingItem(item);
}
}
catch (OperationCanceledException)
{
}
}
internal void EnqueueRecevingData(DataInfo info)
{
// When a new item is produced, just add it to the queue
queue.Add(info);
}
// To cancel the receiving thread, cancel the token
private void CancelReceivingThread()
{
receivingCts.Cancel();
}
Personally, for simple producer-consumer problems, I would just use BlockingCollection. There would be no need to manually code your own synchronization logic. The consuming threads will also block if there are no items present in the queue.
Here is what your code might look like if you use this class:
private BlockingCollection<DataRecievedEnqeueInfo> mReceivingThreadQueue = new BlockingCollection<DataRecievedEnqeueInfo>();
private BlockingCollection<DataSendEnqeueInfo> mSendingThreadQueue = new BlockingCollection<DataSendEnqeueInfo>();
public void Stop()
{
// No need for mIsRunning. CompleteAdding() causes the enumerables in the
// GetConsumingEnumerable() calls below to complete.
mReceivingThreadQueue.CompleteAdding();
mSendingThreadQueue.CompleteAdding();
}
private void ReceivingThread()
{
foreach (DataRecievedEnqeueInfo item in mReceivingThreadQueue.GetConsumingEnumerable())
{
ProcessReceivingItem(item);
}
}
private void SendingThread()
{
foreach (DataSendEnqeueInfo item in mSendingThreadQueue.GetConsumingEnumerable())
{
ProcessSendingItem(item);
}
}
internal void EnqueueRecevingData(DataRecievedEnqeueInfo info)
{
// You can also use TryAdd() if there is a possibility that you
// can add items after you have stopped. Otherwise, this can throw an
// an exception after CompleteAdding() has been called.
mReceivingThreadQueue.Add(info);
}
public void EnqueueSend(DataSendEnqeueInfo info)
{
mSendingThreadQueue.Add(info);
}
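For completeness, a hedged usage sketch (the Start and WaitForShutdown method names are assumptions, not part of the answer): the two consuming loops are started once as background threads, and after Stop() both foreach loops drain the remaining items and exit on their own.
private Thread mReceivingThread;
private Thread mSendingThread;

public void Start()
{
    mReceivingThread = new Thread(ReceivingThread) { IsBackground = true };
    mSendingThread = new Thread(SendingThread) { IsBackground = true };
    mReceivingThread.Start();
    mSendingThread.Start();
}

// After Stop() has been called, GetConsumingEnumerable() completes once the
// queues are drained, so both threads end by themselves; Join() just waits for that.
public void WaitForShutdown()
{
    mReceivingThread.Join();
    mSendingThread.Join();
}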
As suggested in comments, you also can give a try to the TPL Dataflow blocks.
As far as I can see, you have two similar pipelines, for receive and send, so I assume that your class hierarchy is like this:
class EnqueueInfo { }
class DataRecievedEnqeueInfo : EnqueueInfo { }
class DataSendEnqeueInfo : EnqueueInfo { }
We can assemble an abstract class which will encapsulate the logic for creating the pipeline, and providing the interface for processing the items, like this:
abstract class EnqueueInfoProcessor<T>
where T : EnqueueInfo
{
// here we will store all the messages received before the handling
private readonly BufferBlock<T> _buffer;
// simple action block for actual handling the items
private ActionBlock<T> _action;
// cancellation token to cancel the pipeline
public EnqueueInfoProcessor(CancellationToken token)
{
_buffer = new BufferBlock<T>(new DataflowBlockOptions { CancellationToken = token });
_action = new ActionBlock<T>(item => ProcessItem(item), new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = Environment.ProcessorCount,
CancellationToken = token
});
// we are linking two blocks so all the items from buffer
// will flow down to action block in order they've been received
_buffer.LinkTo(_action, new DataflowLinkOptions { PropagateCompletion = true });
}
public void PostItem(T item)
{
// post the item synchronously; Post returns false if the block declines the message
_buffer.Post(item);
}
public async Task SendItemAsync(T item)
{
// asynchronously wait for message to be posted
await _buffer.SendAsync(item);
}
// abstract method to implement
protected abstract void ProcessItem(T item);
}
Note that you also can encapsulate the link between two blocks by using the Encapsulate<TInput, TOutput> method, but in that case you have to properly handle the Completion of the buffer block, if you're using it.
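A hedged sketch of what that encapsulation could look like, assuming a pipeline that turns received items into items to send (the CreatePipeline helper and the convert delegate are illustrative, not part of the answer):
// Wraps a buffer and a transform into a single propagator block.
static IPropagatorBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo> CreatePipeline(
    Func<DataRecievedEnqeueInfo, DataSendEnqeueInfo> convert)
{
    var buffer = new BufferBlock<DataRecievedEnqeueInfo>();
    var transform = new TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>(convert);

    // Completion has to be propagated explicitly: completing the encapsulated
    // block only completes the buffer, and the link passes that on to the transform.
    buffer.LinkTo(transform, new DataflowLinkOptions { PropagateCompletion = true });

    return DataflowBlock.Encapsulate(buffer, transform);
}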
After this, we just need to implement the two processors with the receive and send handling logic:
public class SendEnqueueInfoProcessor : EnqueueInfoProcessor<DataSendEnqeueInfo>
{
public SendEnqueueInfoProcessor(CancellationToken token)
: base(token)
{
}
protected override void ProcessItem(DataSendEnqeueInfo item)
{
// send logic here
}
}
public class RecievedEnqueueInfoProcessor : EnqueueInfoProcessor<DataRecievedEnqeueInfo>
{
public RecievedEnqueueInfoProcessor(CancellationToken token)
: base(token)
{
}
protected override void ProcessItem(DataRecievedEnqeueInfo item)
{
// recieve logic here
}
}
You can also create a more complicated pipeline with TransformBlock<DataRecievedEnqeueInfo, DataSendEnqeueInfo>, if your message flow is such that a ReceiveInfo message becomes a SendInfo message.
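A short usage sketch of the two processors above (assuming it runs inside an async method; the constructors are made public as shown, and the rest of the wiring is an assumption):
var cts = new CancellationTokenSource();
var receiveProcessor = new RecievedEnqueueInfoProcessor(cts.Token);
var sendProcessor = new SendEnqueueInfoProcessor(cts.Token);

// Producers can post or send items from any thread.
receiveProcessor.PostItem(new DataRecievedEnqeueInfo());
await sendProcessor.SendItemAsync(new DataSendEnqeueInfo());

// Cancelling the token tears down both pipelines.
cts.Cancel();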
At the moment I have a class for scanning the network:
public class Network {
public event NewDeviceHandler NewDevice;
public event ScanFinishedHandler ScanFinished;
//...
public void Scan() { /* ... */ }
}
I want to update my UI as soon as a new device is found.
What is the best practice in this case? Should I use events, or is it better to use something like an ObservableCollection?
And I have to call this method from my UI thread (WPF). How should I do that?
Create a new Task in my UI application
Create a new Task in the Scan method
Use async / await
Thank you very much for your help.
If you want to update your UI as soon as a new device is found, you will only need an ObservableCollection and a Task. For example:
In Network class:
public event NewDeviceHandler<Device> NewDevice;
public void StartScan()
{
Task.Run(() => Scan() );
}
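The NewDeviceHandler<T> delegate is not shown in the question; a minimal definition it might correspond to (purely an assumption, chosen to match the (sender, device) handler used below) is:
public delegate void NewDeviceHandler<TDevice>(object sender, TDevice device);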
In view-model:
public ObservableCollection<Device> DevicesCollection { get; set; }
In code-behind:
private Network networkService = new Network();
...
// Somewhere in initialization code:
networkService.NewDevice += (sender, device) => Dispatcher.Invoke(() => viewModel.DevicesCollection.Add(device) );
...
private void ScanButton_OnClick(object sender, RoutedEventArgs e)
{
viewModel.DevicesCollection.Clear();
networkService.StartScan();
}
Our existing implementation of domain events limits (by blocking) publishing to one thread at a time to avoid reentrant calls to handlers:
public interface IDomainEvent {} // Marker interface
public class Dispatcher : IDisposable
{
private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
// Subscribe code...
public void Publish(IDomainEvent domainEvent)
{
semaphore.Wait();
try
{
// Get event subscriber(s) from concurrent dictionary...
foreach (Action<IDomainEvent> subscriber in eventSubscribers)
{
subscriber(domainEvent);
}
}
finally
{
semaphore.Release();
}
}
// Dispose pattern...
}
If a handler publishes an event, this will deadlock.
How can I rewrite this to serialize calls to Publish? In other words, if subscribing handler A publishes event B, I'll get:
Handler A called
Handler B called
while preserving the condition of no reentrant calls to handlers in a multithreaded environment.
I do not want to change the public method signature; there's no place in the application to call a method to publish a queue, for instance.
We came up with a way to do it synchronously.
public class Dispatcher : IDisposable
{
private readonly ConcurrentQueue<IDomainEvent> queue = new ConcurrentQueue<IDomainEvent>();
private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
// Subscribe code...
public void Publish(IDomainEvent domainEvent)
{
queue.Enqueue(domainEvent);
if (IsPublishing)
{
return;
}
PublishQueue();
}
private void PublishQueue()
{
IDomainEvent domainEvent;
while (queue.TryDequeue(out domainEvent))
{
InternalPublish(domainEvent);
}
}
private void InternalPublish(IDomainEvent domainEvent)
{
semaphore.Wait();
try
{
// Get event subscriber(s) from concurrent dictionary...
foreach (Action<IDomainEvent> subscriber in eventSubscribers)
{
subscriber(domainEvent);
}
}
finally
{
semaphore.Release();
}
// Necessary, as calls to Publish during publishing could have queued events and returned.
PublishQueue();
}
private bool IsPublishing
{
get { return semaphore.CurrentCount < 1; }
}
// Dispose pattern for semaphore...
}
You will have to make Publish asynchronous to achieve that. A naive implementation would be as simple as:
public class Dispatcher : IDisposable {
private readonly BlockingCollection<IDomainEvent> _queue = new BlockingCollection<IDomainEvent>(new ConcurrentQueue<IDomainEvent>());
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
public Dispatcher() {
new Thread(Consume) {
IsBackground = true
}.Start();
}
private List<Action<IDomainEvent>> _subscribers = new List<Action<IDomainEvent>>();
public void AddSubscriber(Action<IDomainEvent> sub) {
_subscribers.Add(sub);
}
private void Consume() {
try {
foreach (var @event in _queue.GetConsumingEnumerable(_cts.Token)) {
try {
foreach (Action<IDomainEvent> subscriber in _subscribers) {
subscriber(@event);
}
}
catch (Exception ex) {
// log, handle
}
}
}
catch (OperationCanceledException) {
// expected
}
}
public void Publish(IDomainEvent domainEvent) {
_queue.Add(domainEvent);
}
public void Dispose() {
_cts.Cancel();
}
}
It can't be done with that interface. You can process the event subscriptions asynchronously to remove the deadlock while still running them serially, but then you can't guarantee the order you described. Another call to Publish might enqueue something (event C) while the handler for event A is running but before it publishes event B. Then event B ends up behind event C in the queue.
As long as Handler A is on equal footing with other clients when it comes to getting an item in the queue, it either has to wait like everyone else (deadlock) or it has to play fairly (first come, first served). The interface you have there doesn't allow the two to be treated differently.
That's not to say you couldn't get up to some shenanigans in your logic to attempt to differentiate them (e.g. based on thread ID or something else identifiable), but anything along those lines would be unreliable if you don't control the subscriber code as well. A sketch of that thread-ID idea, for illustration only, follows below.
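Here is what that thread-ID approach might look like (an illustrative sketch only; it suffers from exactly the caveat above, e.g. a handler that republishes from a different thread will still deadlock): a nested Publish from the thread that is currently publishing gets queued and drained by the outer call, while other threads keep waiting on the semaphore.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class Dispatcher : IDisposable
{
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
    private readonly ConcurrentQueue<IDomainEvent> nested = new ConcurrentQueue<IDomainEvent>();
    private readonly List<Action<IDomainEvent>> eventSubscribers = new List<Action<IDomainEvent>>();
    private volatile int publishingThreadId = -1;

    // Subscribe code...

    public void Publish(IDomainEvent domainEvent)
    {
        if (Environment.CurrentManagedThreadId == publishingThreadId)
        {
            // Reentrant call from a handler: queue it; the outer Publish drains it.
            nested.Enqueue(domainEvent);
            return;
        }

        semaphore.Wait();
        publishingThreadId = Environment.CurrentManagedThreadId;
        try
        {
            Dispatch(domainEvent);
            IDomainEvent queued;
            while (nested.TryDequeue(out queued))
            {
                Dispatch(queued);
            }
        }
        finally
        {
            publishingThreadId = -1;
            semaphore.Release();
        }
    }

    private void Dispatch(IDomainEvent domainEvent)
    {
        foreach (Action<IDomainEvent> subscriber in eventSubscribers)
        {
            subscriber(domainEvent);
        }
    }

    public void Dispose()
    {
        semaphore.Dispose();
    }
}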