I'm writing a listener for messages using the Rx framework.
The problem I'm facing is that the library I'm using relies on a consumer that publishes an event whenever a message arrives.
I've managed to consume the incoming messages via Observable.FromEventPattern, but I have a problem with the messages that are already on the server.
At the moment I have the following chain of commands:
1. Create a consumer
2. Create an observable sequence with FromEventPattern and apply needed transformations
3. Tell the consumer to start
4. Subscribe to the sequence
The easiest solution would be to swap steps 3 and 4, but since they happen in different components of the system, it's very hard for me to do so.
Ideally I would like to execute step 3 when step 4 happens (like an OnSubscribe method).
Thanks for your help :)
PS: to add more details, the events are coming from a RabbitMQ queue and I am using the EventingBasicConsumer class found in the RabbitMQ.Client package.
Here you can find the library I am working on. Specifically, this is the class/method giving me problems.
Edit
Here is a stripped version of the problematic code
void Main()
{
var engine = new Engine();
var messages = engine.Start();
messages.Subscribe(m => m.Dump());
Console.ReadLine();
engine.Stop();
}
public class Engine
{
IConnection _connection;
IModel _channel;
public IObservable<Message> Start()
{
var connectionFactory = new ConnectionFactory();
_connection = connectionFactory.CreateConnection();
_channel = _connection.CreateModel();
EventingBasicConsumer consumer = new EventingBasicConsumer(_channel);
var observable = Observable.FromEventPattern<BasicDeliverEventArgs>(
a => consumer.Received += a,
a => consumer.Received -= a)
.Select(e => e.EventArgs);
_channel.BasicConsume("a_queue", false, consumer);
return observable.Select(Transform);
}
private Message Transform(BasicDeliverEventArgs args) => new Message();
public void Stop()
{
_channel.Dispose();
_connection.Dispose();
}
}
public class Message { }
The symptom I experience is that since I invoke BasicConsume before subscribing to the sequence, any message that is in the RabbitMQ queue is fetched but not passed down the pipeline.
Since I don't have "autoack" on, the messages are returned to the queue as soon as the program stops.
As some have noted in the comments, and as you note in the question, the issue is due to the way you're using the RabbitMQ client.
To get around some of these issues, what I actually did was create an ObservableConsumer class as an alternative to the EventingBasicConsumer that is in use currently. One reason I did this was to deal with the issue described in the question; the other is that it lets you re-use the consumer object beyond a single connection/channel instance. This has the benefit of allowing your downstream reactive code to remain wired up despite the transient nature of connections and channels.
using System;
using System.Collections.Generic;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using RabbitMQ.Client;
namespace com.rabbitmq.consumers
{
public sealed class ObservableConsumer : IBasicConsumer
{
private readonly List<string> _consumerTags = new List<string>();
private readonly object _consumerTagsLock = new object();
private readonly Subject<Message> _subject = new Subject<Message>();
public ushort PrefetchCount { get; set; }
public IEnumerable<string> ConsumerTags { get { return new List<string>(_consumerTags); } }
/// <summary>
/// Registers this consumer on the given queue.
/// </summary>
/// <returns>The consumer tag assigned.</returns>
public string ConsumeFrom(IModel channel, string queueName)
{
Model = channel;
return Model.BasicConsume(queueName, false, this);
}
/// <summary>
/// Contains an observable of the incoming messages where messages are processed on a thread pool thread.
/// </summary>
public IObservable<Message> IncomingMessages
{
get { return _subject.ObserveOn(Scheduler.ThreadPool); }
}
///<summary>Retrieve the IModel instance this consumer is
///registered with.</summary>
public IModel Model { get; private set; }
///<summary>Returns true while the consumer is registered and
///expecting deliveries from the broker.</summary>
public bool IsRunning
{
get { return _consumerTags.Count > 0; }
}
/// <summary>
/// Run after a consumer is cancelled.
/// </summary>
/// <param name="consumerTag"></param>
private void OnConsumerCanceled(string consumerTag)
{
}
/// <summary>
/// Run after a consumer is added.
/// </summary>
/// <param name="consumerTag"></param>
private void OnConsumerAdded(string consumerTag)
{
}
public void HandleBasicConsumeOk(string consumerTag)
{
lock (_consumerTagsLock) {
if (!_consumerTags.Contains(consumerTag))
_consumerTags.Add(consumerTag);
}
}
public void HandleBasicCancelOk(string consumerTag)
{
lock (_consumerTagsLock) {
if (_consumerTags.Contains(consumerTag)) {
_consumerTags.Remove(consumerTag);
OnConsumerCanceled(consumerTag);
}
}
}
public void HandleBasicCancel(string consumerTag)
{
lock (_consumerTagsLock) {
if (_consumerTags.Contains(consumerTag)) {
_consumerTags.Remove(consumerTag);
OnConsumerCanceled(consumerTag);
}
}
}
public void HandleModelShutdown(IModel model, ShutdownEventArgs reason)
{
//Don't need to do anything.
}
public void HandleBasicDeliver(string consumerTag,
ulong deliveryTag,
bool redelivered,
string exchange,
string routingKey,
IBasicProperties properties,
byte[] body)
{
//Hack - prevents the broker from sending too many messages.
//if (PrefetchCount > 0 && _unackedMessages.Count > PrefetchCount) {
// Model.BasicReject(deliveryTag, true);
// return;
//}
var message = new Message(properties.HeaderFromBasicProperties()) { Content = body };
var deliveryData = new MessageDeliveryData()
{
ConsumerTag = consumerTag,
DeliveryTag = deliveryTag,
Redelivered = redelivered,
};
message.Tag = deliveryData;
if (AckMode != AcknowledgeMode.AckWhenReceived) {
message.Acknowledged += messageAcknowledged;
message.Failed += messageFailed;
}
_subject.OnNext(message);
}
void messageFailed(Message message, Exception ex, bool requeue)
{
try {
message.Acknowledged -= messageAcknowledged;
message.Failed -= messageFailed;
if (message.Tag is MessageDeliveryData) {
Model.BasicNack((message.Tag as MessageDeliveryData).DeliveryTag, false, requeue);
}
}
catch {}
}
void messageAcknowledged(Message message)
{
try {
message.Acknowledged -= messageAcknowledged;
message.Failed -= messageFailed;
if (message.Tag is MessageDeliveryData) {
var ackMultiple = AckMode == AcknowledgeMode.AckAfterAny;
Model.BasicAck((message.Tag as MessageDeliveryData).DeliveryTag, ackMultiple);
}
}
catch {}
}
}
}
I think there is no need to actually subscribe to the rabbit queue (via BasicConsume) until you have subscribers to your observable. Right now you start the rabbit subscription right away and push items into the observable even if no one has subscribed to it.
Suppose we have this sample class:
class Events {
public event Action<string> MessageArrived;
Timer _timer;
public void Start()
{
Console.WriteLine("Timer starting");
int i = 0;
_timer = new Timer(_ => {
this.MessageArrived?.Invoke(i.ToString());
i++;
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
}
public void Stop() {
_timer?.Dispose();
Console.WriteLine("Timer stopped");
}
}
What you are doing now is basically:
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
ev.Start();
return ob;
What you need instead is an observable which does exactly that, but only when someone subscribes:
return Observable.Create<string>(observer =>
{
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
// first subscribe
var sub = ob.Subscribe(observer);
// then start
ev.Start();
// when subscription is disposed - unsubscribe from rabbit
return new CompositeDisposable(sub, Disposable.Create(() => ev.Stop()));
});
Good, but now every subscription to the observable will result in a separate subscription to the rabbit queue, which is not what we need. We can solve that with Publish().RefCount():
return Observable.Create<string>(observer => {
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
var sub = ob.Subscribe(observer);
ev.Start();
return new CompositeDisposable(sub, Disposable.Create(() => ev.Stop()));
}).Publish().RefCount();
Now what will happen is that when the first subscriber subscribes to the observable (the ref count goes from 0 to 1), the body of Observable.Create is invoked and subscribes to the rabbit queue. This subscription is then shared by all subsequent subscribers. When the last one unsubscribes (the ref count goes to zero), the subscription is disposed, ev.Stop is called, and we unsubscribe from the rabbit queue.
If it so happens that you call Start() (which creates the observable in your code) and never subscribe to it, nothing happens and no subscription to rabbit is made at all.
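Applied to the Engine from the question, the whole thing could look roughly like this (a sketch only: the queue name, Transform and disposal mirror the posted code, and System.Reactive.Disposables is assumed for CompositeDisposable/Disposable):
public IObservable<Message> Start()
{
    return Observable.Create<BasicDeliverEventArgs>(observer =>
    {
        var connectionFactory = new ConnectionFactory();
        var connection = connectionFactory.CreateConnection();
        var channel = connection.CreateModel();
        var consumer = new EventingBasicConsumer(channel);

        var source = Observable.FromEventPattern<BasicDeliverEventArgs>(
                a => consumer.Received += a,
                a => consumer.Received -= a)
            .Select(e => e.EventArgs);

        // Subscribe first, then tell RabbitMQ to start delivering.
        var subscription = source.Subscribe(observer);
        channel.BasicConsume("a_queue", false, consumer);

        // Tear down in reverse order when the last subscriber lets go.
        return new CompositeDisposable(
            subscription,
            Disposable.Create(() => channel.Dispose()),
            Disposable.Create(() => connection.Dispose()));
    })
    .Select(Transform)
    .Publish()
    .RefCount();
}
With this shape Stop() becomes largely unnecessary, since the channel and connection are disposed when the last subscription is disposed.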
Related
Background
I'm in need of a queued message broker that dispatches messages in a distributed manner (spread over consecutive frames). In the example shown below it will process no more than 10 subscribers, and then wait for the next frame before processing further.
(For the sake of clarification for those not familiar with Unity3D, the Process() method is run using Unity's built-in StartCoroutine() method and - in this case - will last for the lifetime of the game, either waiting or processing the queue.)
So I have this relatively simple class:
public class MessageBus : IMessageBus
{
private const int LIMIT = 10;
private readonly WaitForSeconds Wait;
private Queue<IMessage> Messages;
private Dictionary<Type, List<Action<IMessage>>> Subscribers;
public MessageBus()
{
Wait = new WaitForSeconds(2f);
Messages = new Queue<IMessage>();
Subscribers = new Dictionary<Type, List<Action<IMessage>>>();
}
public void Submit(IMessage message)
{
Messages.Enqueue(message);
}
public IEnumerator Process()
{
var processed = 0;
while (true)
{
if (Messages.Count == 0)
{
yield return Wait;
}
else
{
while(Messages.Count > 0)
{
var message = Messages.Dequeue();
foreach (var subscriber in Subscribers[message.GetType()])
{
if (processed >= LIMIT)
{
processed = 0;
yield return null;
}
processed++;
subscriber?.Invoke(message);
}
}
processed = 0;
}
}
}
public void Subscribe<T>(Action<IMessage> handler) where T : IMessage
{
if (!Subscribers.ContainsKey(typeof(T)))
{
Subscribers[typeof(T)] = new List<Action<IMessage>>();
}
Subscribers[typeof(T)].Add(handler);
}
public void Unsubscribe<T>(Action<IMessage> handler) where T : IMessage
{
if (!Subscribers.ContainsKey(typeof(T)))
{
return;
}
Subscribers[typeof(T)].Remove(handler);
}
}
And it works and behaves just as expected, but there is one problem.
The problem
I would like to use it (from the subscriber's point of view) like this:
public void Run()
{
MessageBus.Subscribe<TestEvent>(OnTestEvent);
}
public void OnTestEvent(TestEvent message)
{
message.SomeTestEventMethod();
}
But this obviously fails because an Action<TestEvent> cannot be converted to an Action<IMessage>.
The only way I can use it is like this:
public void Run()
{
MessageBus.Subscribe<TestEvent>(OnTestEvent);
}
public void OnTestEvent(IMessage message)
{
((TestEvent)message).SomeTestEventMethod();
}
But this feels inelegant and very wasteful, as every subscriber needs to do the casting on its own.
What I have tried
I was experimenting with "casting" the actions like this:
public void Subscribe<T>(Action<T> handler) where T : IMessage
{
if (!Subscribers.ContainsKey(typeof(T)))
{
Subscribers[typeof(T)] = new List<Action<IMessage>>();
}
Subscribers[typeof(T)].Add((IMessage a) => handler((T)a));
}
And this works for the subscribe part, but obviously not for unsubscribing. I could cache the newly created handler-wrapper lambdas somewhere for use when unsubscribing, but I don't think that's the real solution, to be honest.
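For reference, the caching idea mentioned above could look roughly like this (the _wrappers field is a hypothetical addition; a real version might key on both the message type and the handler):
private readonly Dictionary<Delegate, Action<IMessage>> _wrappers =
    new Dictionary<Delegate, Action<IMessage>>();

public void Subscribe<T>(Action<T> handler) where T : IMessage
{
    if (!Subscribers.ContainsKey(typeof(T)))
    {
        Subscribers[typeof(T)] = new List<Action<IMessage>>();
    }
    // Cache the wrapper so Unsubscribe can remove the same instance later.
    Action<IMessage> wrapper = m => handler((T)m);
    _wrappers[handler] = wrapper;
    Subscribers[typeof(T)].Add(wrapper);
}

public void Unsubscribe<T>(Action<T> handler) where T : IMessage
{
    Action<IMessage> wrapper;
    if (Subscribers.ContainsKey(typeof(T)) && _wrappers.TryGetValue(handler, out wrapper))
    {
        Subscribers[typeof(T)].Remove(wrapper);
        _wrappers.Remove(handler);
    }
}
Note that the wrapper lambda still allocates per subscription, which works against the garbage-free goal mentioned below.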
The question
How can I make this work the way I would like? Preferably with some C# "magic" if possible, but I'm aware it may require a completely different approach.
Also, because this will be used in a game and run for its lifetime, I would like a garbage-free solution if possible.
So the problem is that you are trying to store lists of a different type as values in the subscriber dictionary.
One way to get around this might be to store a List<Delegate> and then to use Delegate.DynamicInvoke.
Here's some test code that summarizes the main points:
Dictionary<Type, List<Delegate>> Subscribers = new Dictionary<Type, List<Delegate>>();
void Main()
{
Subscribe<Evt>(ev => Console.WriteLine($"hello {ev.Message}"));
IMessage m = new Evt("spender");
foreach (var subscriber in Subscribers[m.GetType()])
{
subscriber?.DynamicInvoke(m);
}
}
public void Subscribe<T>(Action<T> handler) where T : IMessage
{
if (!Subscribers.ContainsKey(typeof(T)))
{
Subscribers[typeof(T)] = new List<Delegate>();
}
Subscribers[typeof(T)].Add(handler);
}
public interface IMessage{}
public class Evt : IMessage
{
public Evt(string message)
{
this.Message = message;
}
public string Message { get; }
}
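Because the original Action<T> instance is what gets stored (no wrapper lambda), unsubscribing can work against the same list; a minimal sketch:
public void Unsubscribe<T>(Action<T> handler) where T : IMessage
{
    List<Delegate> handlers;
    // Remove the exact delegate instance that Subscribe added.
    if (Subscribers.TryGetValue(typeof(T), out handlers))
        handlers.Remove(handler);
}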
I have a C# project working with an input audio Stream from Kinect 1, Kinect 2, a microphone, or anything else.
waveIn.DataAvailable += (object sender, WaveInEventArgs e) => {
lock(buffer){
var pos = buffer.Position;
buffer.Write(e.Buffer, 0, e.BytesRecorded);
buffer.Position = pos;
}
};
The buffer variable is a Stream from component A that will be processed by a SpeechRecognition component B working on Streams.
I will add new components C, D, E, working on Streams to compute pitch, detect sound, do finger printing, or anything else ...
How can I duplicate that Stream for components C, D, E ?
Component A sends an event saying "I have a Stream, do what you want with it"; I don't want to reverse the logic with a "Give me your streams" event.
I'm looking for a "MultiStream" class that could hand out Stream instances and handle the job.
Component A
var buffer = new MultiStream();
...
SendMyEventWith(buffer)
Component B, C, D, E
public void HandleMyEvent(MultiStream buffer){
var stream = buffer.GetNewStream();
var engine = new EngineComponentB()
engine.SetStream(stream);
}
Must the MultiStream be a Stream that wraps the Write() method (because Stream has no data-available mechanism)?
If a Stream is disposed by component B, should the MultiStream remove it from its array?
Should the MultiStream throw an exception on Read() to require the use of GetNewStream()?
EDIT: Kinect 1 provides a Stream itself ... :-( Should I use a Thread to pump it into the MultiStream?
Does anybody have that kind of MultiStream class?
Thanks
I'm not sure if this is the best way to do it or whether it's better than the previous answer, and I'm not guaranteeing that this code is perfect, but I coded something that is literally what you asked for because it was fun - a MultiStream class.
You can find the code for the class here: http://pastie.org/10289142
Usage Example:
MultiStream ms = new MultiStream();
Stream copy1 = ms.CloneStream();
ms.Read( ... );
Stream copy2 = ms.CloneStream();
ms.Read( ... );
copy1 and copy2 will contain identical data after the example is run, and they will continue to be updated as the MultiStream is written to. You can read, update position, and dispose of the cloned streams individually. If a cloned stream is disposed, it is removed from the MultiStream, and disposing of the MultiStream closes all related and cloned streams (you can change this if it's not the behavior you want). Trying to write to the cloned streams will throw a NotSupportedException.
Somehow I don't think streams really fit what you're trying to do. You're setting up a situation where a long run of the program is going to continually expand the data requirements for no apparent reason.
I'd suggest a pub/sub model that publishes the received audio data to subscribers, preferably using a multi-threaded approach to minimize the impact of a bad subscriber. Some ideas can be found here.
I've done this before with a processor class that implements IObserver<byte[]> and uses a Queue<byte[]> to store the sample blocks until the process thread is ready for them. Here are the base classes:
public abstract class BufferedObserver<T> : IObserver<T>, IDisposable
{
private object _lck = new object();
private IDisposable _subscription = null;
public bool Subscribed { get { return _subscription != null; } }
private bool _completed = false;
public bool Completed { get { return _completed; } }
protected readonly Queue<T> _queue = new Queue<T>();
protected bool DataAvailable { get { lock(_lck) { return _queue.Any(); } } }
protected int AvailableCount { get { lock (_lck) { return _queue.Count; } } }
protected BufferedObserver()
{
}
protected BufferedObserver(IObservable<T> observable)
{
SubscribeTo(observable);
}
public virtual void Dispose()
{
if (_subscription != null)
{
_subscription.Dispose();
_subscription = null;
}
}
public void SubscribeTo(IObservable<T> observable)
{
if (_subscription != null)
_subscription.Dispose();
_subscription = observable.Subscribe(this);
_completed = false;
}
public virtual void OnCompleted()
{
_completed = true;
}
public virtual void OnError(Exception error)
{ }
public virtual void OnNext(T value)
{
lock (_lck)
_queue.Enqueue(value);
}
protected bool GetNext(ref T buffer)
{
lock (_lck)
{
if (!_queue.Any())
return false;
buffer = _queue.Dequeue();
return true;
}
}
protected T NextOrDefault()
{
T buffer = default(T);
GetNext(ref buffer);
return buffer;
}
}
public abstract class Processor<T> : BufferedObserver<T>
{
private object _lck = new object();
private Thread _thread = null;
private object _cancel_lck = new object();
private bool _cancel_requested = false;
private bool CancelRequested
{
get { lock(_cancel_lck) return _cancel_requested; }
set { lock(_cancel_lck) _cancel_requested = value; }
}
public bool Running { get { return _thread == null ? false : _thread.IsAlive; } }
public bool Finished { get { return _thread == null ? false : !_thread.IsAlive; } }
protected Processor(IObservable<T> observable)
: base(observable)
{ }
public override void Dispose()
{
if (_thread != null && _thread.IsAlive)
{
//CancelRequested = true;
_thread.Join(5000);
}
base.Dispose();
}
public bool Start()
{
if (_thread != null)
return false;
_thread = new Thread(threadfunc);
_thread.Start();
return true;
}
private void threadfunc()
{
while (!CancelRequested && (!Completed || _queue.Any()))
{
if (DataAvailable)
{
T data = NextOrDefault();
if (data != null && !data.Equals(default(T)))
ProcessData(data);
}
else
Thread.Sleep(10);
}
}
// implement this in a sub-class to process the blocks
protected abstract void ProcessData(T data);
}
This way you're only keeping the data as long as you need it, and you can attach as many process threads as you need to the same observable data source.
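As an illustration, a minimal subclass might look like this (the class name and the body of ProcessData are just placeholders):
public class AudioBlockProcessor : Processor<byte[]>
{
    public AudioBlockProcessor(IObservable<byte[]> source)
        : base(source)
    { }

    // Runs on the processor's worker thread for each dequeued sample block.
    protected override void ProcessData(byte[] data)
    {
        // Placeholder: pitch detection, speech recognition, fingerprinting, ...
        Console.WriteLine("Processing {0} bytes", data.Length);
    }
}

// Usage: subscribe to the shared source, then start the worker thread.
// var processor = new AudioBlockProcessor(audioSource);
// processor.Start();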
And for the sake of completeness, here's a generic class that implements IObservable<T> so you can see how it all fits together. This one even has comments:
/// <summary>Generic IObservable implementation</summary>
/// <typeparam name="T">Type of messages being observed</typeparam>
public class Observable<T> : IObservable<T>
{
/// <summary>Subscription class to manage unsubscription of observers.</summary>
private class Subscription : IDisposable
{
/// <summary>Observer list that this subscription relates to</summary>
public readonly ConcurrentBag<IObserver<T>> _observers;
/// <summary>Observer to manage</summary>
public readonly IObserver<T> _observer;
/// <summary>Initialize subscription</summary>
/// <param name="observers">List of subscribed observers to unsubscribe from</param>
/// <param name="observer">Observer to manage</param>
public Subscription(ConcurrentBag<IObserver<T>> observers, IObserver<T> observer)
{
_observers = observers;
_observer = observer;
}
/// <summary>On disposal remove the subscriber from the subscription list</summary>
public void Dispose()
{
IObserver<T> observer;
if (_observers != null && _observers.Contains(_observer))
_observers.TryTake(out observer);
}
}
// list of subscribed observers
private readonly ConcurrentBag<IObserver<T>> _observers = new ConcurrentBag<IObserver<T>>();
/// <summary>Subscribe an observer to this observable</summary>
/// <param name="observer">Observer instance to subscribe</param>
/// <returns>A subscription object that unsubscribes on destruction</returns>
/// <remarks>Always returns a subscription. Ensure that previous subscriptions are disposed
/// before re-subscribing.</remarks>
public IDisposable Subscribe(IObserver<T> observer)
{
// only add observer if it doesn't already exist:
if (!_observers.Contains(observer))
_observers.Add(observer);
// ...but always return a new subscription.
return new Subscription(_observers, observer);
}
// delegate type for threaded invocation of IObserver.OnNext method
private delegate void delNext(T value);
/// <summary>Send <paramref name="data"/> to the OnNext methods of each subscriber</summary>
/// <param name="data">Data object to send to subscribers</param>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void Notify(T data)
{
foreach (var observer in _observers)
{
delNext handler = observer.OnNext;
handler.BeginInvoke(data, null, null);
}
}
// delegate type for asynchronous invocation of IObserver.OnComplete method
private delegate void delComplete();
/// <summary>Notify all subscribers that the observable has completed</summary>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void NotifyComplete()
{
foreach (var observer in _observers)
{
delComplete handler = observer.OnCompleted;
handler.BeginInvoke(null, null);
}
}
}
Now you can create an Observable<byte[]> to use as your transmitter for the Processor<byte[]> instances that are interested. Pull data blocks out of the input stream, audio reader, etc. and pass them to the Notify method. Just make sure that you clone the arrays beforehand...
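For instance, wiring it up to the DataAvailable handler from the question might look something like this (assuming NAudio's WaveInEventArgs as in the original snippet; the copy matters because the event's buffer is reused and every subscriber receives the same array reference):
var audioSource = new Observable<byte[]>();

waveIn.DataAvailable += (object sender, WaveInEventArgs e) =>
{
    // Copy the block before publishing it to the subscribers.
    var block = new byte[e.BytesRecorded];
    Buffer.BlockCopy(e.Buffer, 0, block, 0, e.BytesRecorded);
    audioSource.Notify(block);
};

// When the capture ends:
// audioSource.NotifyComplete();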
My earlier post didn't seem very clear, so after some testing I'm reopening it in much simpler words; I hope somebody can help.
My singleton observable is built from multiple sources of I/O events, which means they are raised concurrently in the underlying layer. Based on testing (done to prove Rx is not thread safe in this situation) and the Rx design guidelines, I serialized them myself - see the lock(...):
public class EventFireCenter
{
public static event EventHandler<GTCommandTerminalEventArg> OnTerminalEventArrived;
private static object syncObject = new object();
public static void TestFireDummyEventWithId(int id)
{
lock (syncObject)
{
var safe = OnTerminalEventArrived;
if (safe != null)
{
safe(null, new GTCommandTerminalEventArg(id));
}
}
}
}
This is the singleton Observable:
public class UnsolicitedEventCenter
{
private readonly static IObservable<int> publisher;
static UnsolicitedEventCenter()
{
publisher = Observable.FromEventPattern<GTCommandTerminalEventArg>(typeof(EventFireCenter), "OnTerminalEventArrived")
.Select(s => s.EventArgs.Id);
}
private UnsolicitedEventCenter() { }
/// <summary>
/// Gets the Publisher property to start observe an observable sequence.
/// </summary>
public static IObservable<int> Publisher { get { return publisher; } }
}
The Subscribe(...) scenario can be described by the following code; you can see that Subscribe(...) may be called concurrently on different threads:
for (var i = 0; i < concurrentCount; i++)
{
var safe = i;
Scheduler.Default.Schedule(() =>
{
IDisposable dsp = null;
dsp = UnsolicitedEventCenter.Publisher
.Timeout(TimeSpan.FromMilliseconds(8000))
.Where(incomingValue => incomingValue == safe)
.ObserveOn(Scheduler.Default)
//.Take(1)
.Subscribe((incomingEvent) =>
{
Interlocked.Increment(ref onNextCalledTimes);
dsp.Dispose();
}
, ex =>
{
Interlocked.Increment(ref timeoutExceptionOccurredTimes);
lock (timedOutEventIds)
{
// mark this id has been timed out, only for unit testing result check.
timedOutEventIds.Add(safe);
}
dsp.Dispose();
});
Interlocked.Increment(ref threadPoolQueuedTaskCount);
});
}
As experienced people have pointed out a number of times, calling Dispose() in OnNext(...) is not recommended, but let's ignore that here since the code comes from production.
Now the problem is that, randomly, the .Timeout(TimeSpan.FromMilliseconds(8000)) does not work - the ex handler is never called. Can anyone see anything abnormal in the code?
For testing I set up a stress test, but so far I haven't reproduced the issue, while in production it appears several times per day. Just in case, I've pasted all the testing code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace Rx
{
class Program
{
static void Main(string[] args)
{
// avoid thread creation delay in thread pool.
ThreadPool.SetMinThreads(200, 50);
// let the test run for 100 times
for (int t = 0; t < 100; t++)
{
Console.WriteLine("");
Console.WriteLine("======Current running times: " + t);
// meanwhile, 150 XXX.Subscribe(...) calls will be made.
const int concurrentCount = 150;
// how many fake events will be fired to satisfy those 150 XXX.Subscribe(...) calls.
const int fireFakeEventCount = 40;
int timeoutExceptionOccurredTimes = 0;
var timedOutEventIds = new List<int>();
int onNextCalledTimes = 0;
int threadPoolQueuedTaskCount = 0;
for (var i = 0; i < concurrentCount; i++)
{
var safe = i;
Scheduler.Default.Schedule(() =>
{
IDisposable dsp = null;
dsp = UnsolicitedEventCenter.Publisher
.Timeout(TimeSpan.FromMilliseconds(8000))
.Where(incomingValue => incomingValue == safe)
.ObserveOn(Scheduler.Default)
//.Take(1)
.Subscribe((incomingEvent) =>
{
Interlocked.Increment(ref onNextCalledTimes);
dsp.Dispose();
}
, ex =>
{
Interlocked.Increment(ref timeoutExceptionOccurredTimes);
lock (timedOutEventIds)
{
// mark this id has been timed out, only for unit testing result check.
timedOutEventIds.Add(safe);
}
dsp.Dispose();
});
Interlocked.Increment(ref threadPoolQueuedTaskCount);
});
}
Console.WriteLine("Starting fire event: " + DateTime.Now.ToString("HH:mm:ss.ffff"));
int threadPoolQueuedTaskCount1 = 0;
// simulate a concurrent event fire
for (int i = 0; i < fireFakeEventCount; i++)
{
var safe = i;
Scheduler.Default.Schedule(() =>
{
EventFireCenter.TestFireDummyEventWithId(safe);
Interlocked.Increment(ref threadPoolQueuedTaskCount1);
});
}
// make sure all preceding tasks have completed in the thread pool.
while (threadPoolQueuedTaskCount < concurrentCount)
{
Thread.Sleep(1000);
}
// make sure all preceding tasks have completed in the thread pool.
while (threadPoolQueuedTaskCount1 < fireFakeEventCount)
{
Thread.Sleep(100);
}
Console.WriteLine("Finished fire event: " + DateTime.Now.ToString("HH:mm:ss.ffff"));
// sleep at least as long as the 8000 ms timeout.
Thread.Sleep(8000);
Console.WriteLine("timeoutExceptionOccurredTimes: " + timeoutExceptionOccurredTimes);
Console.WriteLine("onNextCalledTimes: " + onNextCalledTimes);
if ((concurrentCount - fireFakeEventCount) != timeoutExceptionOccurredTimes)
{
try
{
Console.WriteLine("Non timeout fired for these ids: " +
Enumerable.Range(0, concurrentCount)
.Except(timedOutEventIds).Except(Enumerable.Range(0, fireFakeEventCount)).Select(i => i.ToString())
.Aggregate((acc, n) => acc + "," + n));
}
catch (Exception ex) { Console.WriteLine("failed to output timed-out ids..."); }
break;
}
if (fireFakeEventCount != onNextCalledTimes)
{
Console.WriteLine("onNextOccurredTimes assert failed");
break;
}
if ((concurrentCount - fireFakeEventCount) != timeoutExceptionOccurredTimes)
{
Console.WriteLine("timeoutExceptionOccurredTimes assert failed");
break;
}
}
Console.WriteLine("");
Console.WriteLine("");
Console.WriteLine("DONE!");
Console.ReadLine();
}
}
public class EventFireCenter
{
public static event EventHandler<GTCommandTerminalEventArg> OnTerminalEventArrived;
private static object syncObject = new object();
public static void TestFireDummyEventWithId(int id)
{
lock (syncObject)
{
var safe = OnTerminalEventArrived;
if (safe != null)
{
safe(null, new GTCommandTerminalEventArg(id));
}
}
}
}
public class UnsolicitedEventCenter
{
private readonly static IObservable<int> publisher;
static UnsolicitedEventCenter()
{
publisher = Observable.FromEventPattern<GTCommandTerminalEventArg>(typeof(EventFireCenter), "OnTerminalEventArrived")
.Select(s => s.EventArgs.Id);
}
private UnsolicitedEventCenter() { }
/// <summary>
/// Gets the Publisher property to start observe an observable sequence.
/// </summary>
public static IObservable<int> Publisher { get { return publisher; } }
}
public class GTCommandTerminalEventArg : System.EventArgs
{
public GTCommandTerminalEventArg(int id)
{
this.Id = id;
}
public int Id { get; private set; }
}
}
Most likely the Timeout is not triggering because you have it before the Where filter. This means that all events are flowing through and resetting the timer, and then most of the events get filtered by the Where clause. To your subscribing observer, it will seem like it never gets a result and the timeout never triggers. Move the Timeout to be after the Where and you should now have a system that times out individual observers if they do not get their expected event on time.
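In terms of the code in the question, only the operator order changes - something along these lines (handler bodies abbreviated):
dsp = UnsolicitedEventCenter.Publisher
    .Where(incomingValue => incomingValue == safe)   // filter first, so unrelated events don't reset the timer
    .Timeout(TimeSpan.FromMilliseconds(8000))        // then time out if the expected id never arrives
    .ObserveOn(Scheduler.Default)
    .Subscribe(
        incomingEvent => { /* expected event arrived */ },
        ex => { /* TimeoutException now fires for this subscriber */ });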
I'm struggling with trying to find the best way to implement WCF retries. I'm hoping to make the client experience as clean as possible. There are two approaches of which I'm aware (see below). My question is: "Is there a third approach that I'm missing? Maybe one that's the generally accepted way of doing this?"
Approach #1: Create a proxy that implements the service interface. For each call to the proxy, implement retries.
public class Proxy : ISomeWcfServiceInterface
{
public int Foo(int snurl)
{
return MakeWcfCall<int>(() => _channel.Foo(snurl));
}
public string Bar(string snuh)
{
return MakeWcfCall<string>(() => _channel.Bar(snuh));
}
private static T MakeWcfCall<T>(Func<T> func)
{
// Invoke the func and implement retries.
}
}
Approach #2: Change MakeWcfCall() (above) to public, and have the consuming code send the func directly.
What I don't like about approach #1 is having to update the Proxy class every time the interface changes.
What I don't like about approach #2 is the client having to wrap their call in a func.
Am I missing a better way?
EDIT
I've posted an answer here (see below), based on the accepted answer that pointed me in the right direction. I thought I'd share my code, in an answer, to help someone work through what I had to work through. Hope it helps.
I have done this very type of thing, and I solved the problem using .NET's RealProxy class.
Using RealProxy, you can create an actual proxy at runtime from your provided interface. From the calling code's perspective it is using an IFoo channel, but in fact the IFoo points at the proxy, which gives you a chance to intercept calls to any method, property, constructor, etc.
Deriving from RealProxy, you can then override the Invoke method to intercept calls to the WCF methods and handle CommunicationException, etc.
Note: This shouldn't be the accepted answer, but I wanted to post the solution in case it helps others. Jim's answer pointed me in this direction.
First, the consuming code, showing how it works:
static void Main(string[] args)
{
var channelFactory = new WcfChannelFactory<IPrestoService>(new NetTcpBinding());
var endpointAddress = ConfigurationManager.AppSettings["endpointAddress"];
// The call to CreateChannel() actually returns a proxy that can intercept calls to the
// service. This is done so that the proxy can retry on communication failures.
IPrestoService prestoService = channelFactory.CreateChannel(new EndpointAddress(endpointAddress));
Console.WriteLine("Enter some information to echo to the Presto service:");
string message = Console.ReadLine();
string returnMessage = prestoService.Echo(message);
Console.WriteLine("Presto responds: {0}", returnMessage);
Console.WriteLine("Press any key to stop the program.");
Console.ReadKey();
}
The WcfChannelFactory:
public class WcfChannelFactory<T> : ChannelFactory<T> where T : class
{
public WcfChannelFactory(Binding binding) : base(binding) {}
public T CreateBaseChannel()
{
return base.CreateChannel(this.Endpoint.Address, null);
}
public override T CreateChannel(EndpointAddress address, Uri via)
{
// This is where the magic happens. We don't really return a channel here;
// we return WcfClientProxy.GetTransparentProxy(). That class will now
// have the chance to intercept calls to the service.
this.Endpoint.Address = address;
var proxy = new WcfClientProxy<T>(this);
return proxy.GetTransparentProxy() as T;
}
}
The WcfClientProxy: (This is where we intercept and retry.)
public class WcfClientProxy<T> : RealProxy where T : class
{
private WcfChannelFactory<T> _channelFactory;
public WcfClientProxy(WcfChannelFactory<T> channelFactory) : base(typeof(T))
{
this._channelFactory = channelFactory;
}
public override IMessage Invoke(IMessage msg)
{
// When a service method gets called, we intercept it here and call it below with methodBase.Invoke().
var methodCall = msg as IMethodCallMessage;
var methodBase = methodCall.MethodBase;
// We can't call CreateChannel() because that creates an instance of this class,
// and we'd end up with a stack overflow. So, call CreateBaseChannel() to get the
// actual service.
T wcfService = this._channelFactory.CreateBaseChannel();
try
{
var result = methodBase.Invoke(wcfService, methodCall.Args);
return new ReturnMessage(
result, // Operation result
null, // Out arguments
0, // Out arguments count
methodCall.LogicalCallContext, // Call context
methodCall); // Original message
}
catch (FaultException)
{
// Need to specifically catch and rethrow FaultExceptions to bypass the CommunicationException catch.
// This is needed to distinguish between Faults and underlying communication exceptions.
throw;
}
catch (CommunicationException ex)
{
// Handle CommunicationException and implement retries here.
throw new NotImplementedException();
}
}
}
Sequence diagram of a call being intercepted by the proxy:
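The CommunicationException handler above is left as a stub. A bounded retry around the reflective call could look roughly like the helper below (InvokeWithRetries is a hypothetical name, the attempt count and delay are arbitrary, and Invoke would call it in place of methodBase.Invoke). Note that reflection wraps exceptions thrown by the service call in a TargetInvocationException, so that is what gets unwrapped here:
private object InvokeWithRetries(MethodBase method, object[] args)
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        // Each attempt gets a fresh channel; a faulted channel cannot be reused.
        T wcfService = this._channelFactory.CreateBaseChannel();
        try
        {
            return method.Invoke(wcfService, args);
        }
        catch (TargetInvocationException tie)
        {
            // Only retry plain communication failures, never service faults.
            bool retryable = tie.InnerException is CommunicationException
                             && !(tie.InnerException is FaultException);
            if (!retryable || attempt >= maxAttempts)
                throw;
            Thread.Sleep(TimeSpan.FromSeconds(1)); // simple back-off before the next attempt
        }
    }
}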
You can implement a generic proxy, for example using Castle DynamicProxy. There is a good article here: http://www.planetgeek.ch/2010/10/13/dynamic-proxy-for-wcf-with-castle-dynamicproxy/. This approach gives the user an object that implements the interface, and you have a single class responsible for communication.
Do not mess with the generated code because, as you mentioned, it will be generated again, so any customization will be overwritten.
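A rough sketch of that idea with Castle DynamicProxy (the RetryInterceptor class, the attempt count and the channel handling are illustrative, not taken from the linked article):
using System.Reflection;
using System.ServiceModel;
using Castle.DynamicProxy;

public class RetryInterceptor<T> : IInterceptor where T : class
{
    private readonly ChannelFactory<T> _factory;

    public RetryInterceptor(ChannelFactory<T> factory)
    {
        _factory = factory;
    }

    public void Intercept(IInvocation invocation)
    {
        for (int attempt = 1; ; attempt++)
        {
            // Forward the intercepted call to a real WCF channel.
            T channel = _factory.CreateChannel();
            try
            {
                invocation.ReturnValue = invocation.Method.Invoke(channel, invocation.Arguments);
                return;
            }
            catch (TargetInvocationException tie)
            {
                // Reflection wraps the real exception; only retry communication failures, not faults.
                bool retryable = tie.InnerException is CommunicationException
                                 && !(tie.InnerException is FaultException);
                if (!retryable || attempt >= 3)
                    throw;
            }
        }
    }
}

// Usage (sketch): the caller only ever sees the service interface.
// var generator = new ProxyGenerator();
// ISomeWcfServiceInterface client =
//     generator.CreateInterfaceProxyWithoutTarget<ISomeWcfServiceInterface>(
//         new RetryInterceptor<ISomeWcfServiceInterface>(factory));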
I see two ways:
Write/generate a partial class for each proxy that contains the retry variation. This is messy because you will still have to adjust it when the proxy changes
Make a custom version of svcutil that allows you to generate a proxy that derives from a different base class and put the retry code in that base class. This is quite some work but it can be done and solves the issue in a robust way.
Go with approach 1, then wrap all the context events (opening, opened, faulted, ...) into events exposed by your proxy class. Once the communication is faulted, re-create the proxy or call some recursive method inside the proxy class. I can share some work I have just tested.
public class DuplexCallBackNotificationIntegrationExtension : IExtension, INotificationPusherCallback {
#region - Field(s) -
private static Timer _Timer = null;
private static readonly object m_SyncRoot = new Object();
private static readonly Guid CMESchedulerApplicationID = Guid.NewGuid();
private static CancellationTokenSource cTokenSource = new CancellationTokenSource();
private static CancellationToken cToken = cTokenSource.Token;
#endregion
#region - Event(s) -
/// <summary>
/// Event fired during Duplex callback.
/// </summary>
public static event EventHandler<CallBackEventArgs> CallBackEvent;
public static event EventHandler<System.EventArgs> InstanceContextOpeningEvent;
public static event EventHandler<System.EventArgs> InstanceContextOpenedEvent;
public static event EventHandler<System.EventArgs> InstanceContextClosingEvent;
public static event EventHandler<System.EventArgs> InstanceContextClosedEvent;
public static event EventHandler<System.EventArgs> InstanceContextFaultedEvent;
#endregion
#region - Property(ies) -
/// <summary>
/// Interface extension designation.
/// </summary>
public string Name {
get {
return "Duplex Call Back Notification Integration Extension.";
}
}
/// <summary>
/// GUI Interface extension.
/// </summary>
public IUIExtension UIExtension {
get {
return null;
}
}
#endregion
#region - Constructor(s) / Finalizer(s) -
/// <summary>
/// Initializes a new instance of the DuplexCallBackNotificationIntegrationExtension class.
/// </summary>
public DuplexCallBackNotificationIntegrationExtension() {
CallDuplexNotificationPusher();
}
#endregion
#region - Delegate Invoker(s) -
void ICommunicationObject_Opening(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Info("context_Opening");
this.OnInstanceContextOpening(e);
}
void ICommunicationObject_Opened(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Opened");
this.OnInstanceContextOpened(e);
}
void ICommunicationObject_Closing(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Closing");
this.OnInstanceContextClosing(e);
}
void ICommunicationObject_Closed(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Closed");
this.OnInstanceContextClosed(e);
}
void ICommunicationObject_Faulted(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Error("context_Faulted");
this.OnInstanceContextFaulted(e);
if (_Timer != null) {
_Timer.Dispose();
}
IChannel channel = sender as IChannel;
if (channel != null) {
channel.Abort();
channel.Close();
}
DoWorkRoutine(cToken);
}
protected virtual void OnCallBackEvent(Notification objNotification) {
if (CallBackEvent != null) {
CallBackEvent(this, new CallBackEventArgs(objNotification));
}
}
protected virtual void OnInstanceContextOpening(System.EventArgs e) {
if (InstanceContextOpeningEvent != null) {
InstanceContextOpeningEvent(this, e);
}
}
protected virtual void OnInstanceContextOpened(System.EventArgs e) {
if (InstanceContextOpenedEvent != null) {
InstanceContextOpenedEvent(this, e);
}
}
protected virtual void OnInstanceContextClosing(System.EventArgs e) {
if (InstanceContextClosingEvent != null) {
InstanceContextClosingEvent(this, e);
}
}
protected virtual void OnInstanceContextClosed(System.EventArgs e) {
if (InstanceContextClosedEvent != null) {
InstanceContextClosedEvent(this, e);
}
}
protected virtual void OnInstanceContextFaulted(System.EventArgs e) {
if (InstanceContextFaultedEvent != null) {
InstanceContextFaultedEvent(this, e);
}
}
#endregion
#region - IDisposable Member(s) -
#endregion
#region - Private Method(s) -
/// <summary>
///
/// </summary>
void CallDuplexNotificationPusher() {
var routine = Task.Factory.StartNew(() => DoWorkRoutine(cToken), cToken);
cToken.Register(() => cancelNotification());
}
/// <summary>
///
/// </summary>
/// <param name="ct"></param>
void DoWorkRoutine(CancellationToken ct) {
lock (m_SyncRoot) {
var context = new InstanceContext(this);
var proxy = new NotificationPusherClient(context);
ICommunicationObject communicationObject = proxy as ICommunicationObject;
communicationObject.Opening += new System.EventHandler(ICommunicationObject_Opening);
communicationObject.Opened += new System.EventHandler(ICommunicationObject_Opened);
communicationObject.Faulted += new System.EventHandler(ICommunicationObject_Faulted);
communicationObject.Closed += new System.EventHandler(ICommunicationObject_Closed);
communicationObject.Closing += new System.EventHandler(ICommunicationObject_Closing);
try {
proxy.Subscribe(CMESchedulerApplicationID.ToString());
}
catch (Exception ex) {
Logger.HELogger.DefaultLogger.DUPLEXLogger.Error(ex);
switch (communicationObject.State) {
case CommunicationState.Faulted:
proxy.Close();
break;
default:
break;
}
cTokenSource.Cancel();
cTokenSource.Dispose();
cTokenSource = new CancellationTokenSource();
cToken = cTokenSource.Token;
CallDuplexNotificationPusher();
}
bool KeepAliveCallBackEnabled = Properties.Settings.Default.KeepAliveCallBackEnabled;
if (KeepAliveCallBackEnabled) {
_Timer = new Timer(new TimerCallback(delegate(object item) {
DefaultLogger.DUPLEXLogger.Debug(string.Format("._._._._._. New Iteration {0: yyyy MM dd hh mm ss ffff} ._._._._._.", DateTime.Now.ToUniversalTime().ToString()));
DBNotificationPusherService.Acknowledgment reply = DBNotificationPusherService.Acknowledgment.NAK;
try {
reply = proxy.KeepAlive();
}
catch (Exception ex) {
DefaultLogger.DUPLEXLogger.Error(ex);
switch (communicationObject.State) {
case CommunicationState.Faulted:
case CommunicationState.Closed:
proxy.Abort();
ICommunicationObject_Faulted(null, null);
break;
default:
break;
}
}
DefaultLogger.DUPLEXLogger.Debug(string.Format("Acknowledgment = {0}.", reply.ToString()));
_Timer.Change(Properties.Settings.Default.KeepAliveCallBackTimerInterval, Timeout.Infinite);
}), null, Properties.Settings.Default.KeepAliveCallBackTimerInterval, Timeout.Infinite);
}
}
}
/// <summary>
///
/// </summary>
void cancelNotification() {
DefaultLogger.DUPLEXLogger.Warn("Cancellation request made!!");
}
#endregion
#region - Public Method(s) -
/// <summary>
/// Fire OnCallBackEvent event and fill automatic-recording collection with newest
/// </summary>
/// <param name="action"></param>
public void SendNotification(Notification objNotification) {
// Fire event callback.
OnCallBackEvent(objNotification);
}
#endregion
#region - Callback(s) -
private void OnAsyncExecutionComplete(IAsyncResult result) {
}
#endregion
}
Just wrap all service calls in a function that takes a delegate and executes the passed delegate as many times as necessary:
internal R ExecuteServiceMethod<I, R>(Func<I, R> serviceCall, string userName, string password) {
//Note all clients have the name Manager, but this isn't a problem as they get resolved
//by type
ChannelFactory<I> factory = new ChannelFactory<I>("Manager");
factory.Credentials.UserName.UserName = userName;
factory.Credentials.UserName.Password = password;
I manager = factory.CreateChannel();
//Wrap below in a retry loop
return serviceCall.Invoke(manager);
}
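The "//Wrap below in a retry loop" part might be filled in roughly like this, replacing the final return statement (the attempt count is arbitrary, and an aborted channel has to be rebuilt before the next try):
const int maxAttempts = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        return serviceCall.Invoke(manager);
    }
    catch (FaultException)
    {
        throw; // service faults are not worth retrying
    }
    catch (CommunicationException)
    {
        if (attempt >= maxAttempts)
            throw;
        // A faulted channel cannot be reused; build a fresh one for the next attempt.
        factory.Abort();
        factory = new ChannelFactory<I>("Manager");
        factory.Credentials.UserName.UserName = userName;
        factory.Credentials.UserName.Password = password;
        manager = factory.CreateChannel();
    }
}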
I have a Sender class that sends a Message on an IChannel:
public class MessageEventArgs : EventArgs {
public Message Message { get; private set; }
public MessageEventArgs(Message m) { Message = m; }
}
public interface IChannel {
event EventHandler<MessageEventArgs> MessageReceived;
void Send(Message m);
}
public class Sender {
public const int MaxWaitInMs = 5000;
private IChannel _c = ...;
public Message Send(Message m) {
_c.Send(m);
// wait for MaxWaitInMs to get an event from _c.MessageReceived
// return the message or null if no message was received in response
}
}
When we send messages, the IChannel sometimes gives a response depending on what kind of Message was sent by raising the MessageReceived event. The event arguments contain the message of interest.
I want the Sender.Send() method to wait for a short time to see if this event is raised. If so, I'll return its MessageEventArgs.Message property. If not, I return a null Message.
How can I wait in this way? I'd prefer not to have do the threading legwork with ManualResetEvents and such, so sticking to regular events would be optimal for me.
Use an AutoResetEvent.
Gimme a few minutes and I'll throw together a sample.
Here it is:
public class Sender
{
public static readonly TimeSpan MaxWait = TimeSpan.FromMilliseconds(5000);
private IChannel _c;
private AutoResetEvent _messageReceived;
private Message _lastMessage;
public Sender()
{
// initialize _c
this._messageReceived = new AutoResetEvent(false);
this._c.MessageReceived += this.MessageReceived;
}
public Message Send(Message m)
{
this._c.Send(m);
// wait up to MaxWait for _c.MessageReceived to signal
// WaitOne returns false on timeout rather than throwing
if (this._messageReceived.WaitOne(MaxWait))
return this._lastMessage;
// no message was received in response
return null;
}
public void MessageReceived(object sender, MessageEventArgs e)
{
// store the message and signal the waiting Send() call
this._lastMessage = e.Message;
this._messageReceived.Set();
}
}
Have you tried assigning the function to call asynchronously to a delegate, then invoking the mydelegateinstance.BeginInvoke?
Linky for reference.
With the below example, just call
FillDataSet(ref table, ref dataset);
and it'll work as if by magic. :)
#region DataSet manipulation
///<summary>Fills a the distance table of a dataset</summary>
private void FillDataSet(ref DistanceDataTableAdapter taD, ref MyDataSet ds) {
using (var myMRE = new ManualResetEventSlim(false)) {
ds.EnforceConstraints = false;
ds.Distance.BeginLoadData();
Func<DistanceDataTable, int> distanceFill = taD.Fill;
distanceFill.BeginInvoke(ds.Distance, FillCallback<DistanceDataTable>, new object[] { distanceFill, myMRE });
WaitHandle.WaitAll(new []{ myMRE.WaitHandle });
ds.Distance.EndLoadData();
ds.EnforceConstraints = true;
}
}
/// <summary>
/// Callback used when filling a table asynchronously.
/// </summary>
/// <param name="result">Represents the status of the asynchronous operation.</param>
private void FillCallback<MyDataTable>(IAsyncResult result) where MyDataTable: DataTable {
var state = result.AsyncState as object[];
Debug.Assert((state != null) && (state.Length == 2), "State variable is either null or an invalid number of parameters were passed.");
var fillFunc = state[0] as Func<MyDataTable, int>;
var mre = state[1] as ManualResetEventSlim;
Debug.Assert((mre != null) && (fillFunc != null));
int rowsAffected = fillFunc.EndInvoke(result);
Debug.WriteLine(" Rows: " + rowsAffected.ToString());
mre.Set();
}
Perhaps your MessageReceived method should simply set a flag on a property of your IChannel implementation, with the implementation raising INotifyPropertyChanged, so that you would be advised when the property changes.
By doing so, your Sender class could loop until the maximum waiting time has elapsed or the PropertyChanged event fires, breaking the loop successfully. If the loop doesn't get broken, the message should be considered as never received.
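A rough sketch of that idea (the MessageAvailable flag and LastMessage property are hypothetical additions to the channel; they are not part of the posted IChannel):
public Message Send(Message m)
{
    _c.Send(m);
    var waited = 0;
    // Poll the hypothetical flag until it is set or the time budget runs out.
    while (!_c.MessageAvailable && waited < MaxWaitInMs)
    {
        Thread.Sleep(50);
        waited += 50;
    }
    return _c.MessageAvailable ? _c.LastMessage : null;
}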
Useful sample with AutoResetEvent:
using System;
using System.Threading;
class WaitOne
{
static AutoResetEvent autoEvent = new AutoResetEvent(false);
static void Main()
{
Console.WriteLine("Main starting.");
ThreadPool.QueueUserWorkItem(
new WaitCallback(WorkMethod), autoEvent);
// Wait for work method to signal.
autoEvent.WaitOne();
Console.WriteLine("Work method signaled.\nMain ending.");
}
static void WorkMethod(object stateInfo)
{
Console.WriteLine("Work starting.");
// Simulate time spent working.
Thread.Sleep(new Random().Next(100, 2000));
// Signal that work is finished.
Console.WriteLine("Work ending.");
((AutoResetEvent)stateInfo).Set();
}
}
WaitOne is really the right tool for this job. In short, you want to wait between 0 and MaxWaitInMs milliseconds for a job to complete. You really have two choices: poll for completion, or synchronize the threads with some construct that can wait an arbitrary amount of time.
Since you're well aware of the right way to do this, for posterity I'll post the polling version:
MessageEventArgs msgArgs = null;
EventHandler<MessageEventArgs> callback = (object o, MessageEventArgs args) => {
msgArgs = args;
};
_c.MessageReceived += callback;
_c.Send(m);
int msLeft = MaxWaitInMs;
// keep polling while no message has arrived and time remains
while (msgArgs == null && msLeft > 0) {
Thread.Sleep(100);
msLeft -= 100; // you should measure this instead with, say, Stopwatch
}
_c.MessageReceived -= callback;
return msgArgs == null ? null : msgArgs.Message;