I have a C# project that works with an input audio Stream from a Kinect 1, a Kinect 2, a microphone, or anything else.
waveIn.DataAvailable += (object sender, WaveInEventArgs e) => {
lock(buffer){
var pos = buffer.Position;
buffer.Write(e.Buffer, 0, e.BytesRecorded);
buffer.Position = pos;
}
};
The buffer variable is a Stream from component A that will be processed by a SpeechRecognition component B working on Streams.
I will add new components C, D, E, working on Streams to compute pitch, detect sound, do finger printing, or anything else ...
How can I duplicate that Stream for components C, D, E ?
Component A sends an event saying "I have a Stream, do what you want with it"; I don't want to reverse the logic with an event saying "Give me your streams".
I'm looking for a "MultiStream" that could hand out Stream instances and handle the rest.
Component A
MultiStream buffer = new MultiStream();
...
SendMyEventWith(buffer)
Component B, C, D, E
public void HandleMyEvent(MultiStream buffer){
var stream = buffer.GetNewStream();
var engine = new EngineComponentB();
engine.SetStream(stream);
}
Must the MultiStream itself be a Stream, wrapping the Write() method (because Stream has no data-available mechanism)?
If a Stream is disposed by component B, should the MultiStream remove it from its array?
Should the MultiStream throw an exception on Read() to force use of GetNewStream()?
EDIT: Kinect 1 provides a Stream itself ... :-( Should I use a Thread to pump it into the MultiStream?
Does anybody have that kind of MultiStream class?
Thanks
I'm not sure whether this is the best way to do it or whether it's better than the previous answer, and I'm not guaranteeing that this code is perfect, but I coded something that is literally what you asked for because it was fun - a MultiStream class.
You can find the code for the class here: http://pastie.org/10289142
Usage Example:
MultiStream ms = new MultiStream();
Stream copy1 = ms.CloneStream();
ms.Read( ... );
Stream copy2 = ms.CloneStream();
ms.Read( ... );
copy1 and copy2 will contain identical data after the example is run, and they will continue to be updated as the MultiStream is written to. You can read, reposition, and dispose of the cloned streams individually. A cloned stream that is disposed gets removed from the MultiStream, and disposing of the MultiStream closes all related and cloned streams (you can change this if it's not the behavior you want). Trying to write to a cloned stream will throw a NotSupportedException.
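Since the pastie link may no longer be reachable, here is a minimal sketch of what such a MultiStream could look like. This is my own reconstruction of the idea, not the code behind the link: Write() fans the incoming bytes out to every clone while preserving each clone's read position, CloneStream() hands out a new readable stream, and Read() on the MultiStream itself throws. The wrapping that makes clones read-only and removes them on Dispose() is left out for brevity.
using System;
using System.Collections.Generic;
using System.IO;
// Rough sketch only: fans written data out to every cloned stream.
public class MultiStream : Stream
{
    private readonly object _sync = new object();
    private readonly List<MemoryStream> _clones = new List<MemoryStream>();
    public Stream CloneStream()
    {
        var clone = new MemoryStream();
        lock (_sync) { _clones.Add(clone); }
        return clone;
    }
    public override void Write(byte[] buffer, int offset, int count)
    {
        lock (_sync)
        {
            foreach (var clone in _clones)
            {
                long pos = clone.Position;         // remember where the consumer is reading
                clone.Seek(0, SeekOrigin.End);     // append the new data at the end
                clone.Write(buffer, offset, count);
                clone.Position = pos;              // restore so reads continue where they left off
            }
        }
    }
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException("Use CloneStream() and read from the clone instead.");
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}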
Somehow I don't think streams really fit what you're trying to do. You're setting up a situation where a long run of the program is going to continually expand the data requirements for no apparent reason.
I'd suggest a pub/sub model that publishes the received audio data to subscribers, preferably using a multi-threaded approach to minimize the impact of a bad subscriber. Some ideas can be found here.
I've done this before with a processor class that implements IObserver<byte[]> and uses a Queue<byte[]> to store the sample blocks until the process thread is ready for them. Here are the base classes:
public abstract class BufferedObserver<T> : IObserver<T>, IDisposable
{
private object _lck = new object();
private IDisposable _subscription = null;
public bool Subscribed { get { return _subscription != null; } }
private bool _completed = false;
public bool Completed { get { return _completed; } }
protected readonly Queue<T> _queue = new Queue<T>();
protected bool DataAvailable { get { lock(_lck) { return _queue.Any(); } } }
protected int AvailableCount { get { lock (_lck) { return _queue.Count; } } }
protected BufferedObserver()
{
}
protected BufferedObserver(IObservable<T> observable)
{
SubscribeTo(observable);
}
public virtual void Dispose()
{
if (_subscription != null)
{
_subscription.Dispose();
_subscription = null;
}
}
public void SubscribeTo(IObservable<T> observable)
{
if (_subscription != null)
_subscription.Dispose();
_subscription = observable.Subscribe(this);
_completed = false;
}
public virtual void OnCompleted()
{
_completed = true;
}
public virtual void OnError(Exception error)
{ }
public virtual void OnNext(T value)
{
lock (_lck)
_queue.Enqueue(value);
}
protected bool GetNext(ref T buffer)
{
lock (_lck)
{
if (!_queue.Any())
return false;
buffer = _queue.Dequeue();
return true;
}
}
protected T NextOrDefault()
{
T buffer = default(T);
GetNext(ref buffer);
return buffer;
}
}
public abstract class Processor<T> : BufferedObserver<T>
{
private object _lck = new object();
private Thread _thread = null;
private object _cancel_lck = new object();
private bool _cancel_requested = false;
private bool CancelRequested
{
get { lock(_cancel_lck) return _cancel_requested; }
set { lock(_cancel_lck) _cancel_requested = value; }
}
public bool Running { get { return _thread == null ? false : _thread.IsAlive; } }
public bool Finished { get { return _thread == null ? false : !_thread.IsAlive; } }
protected Processor(IObservable<T> observable)
: base(observable)
{ }
public override void Dispose()
{
if (_thread != null && _thread.IsAlive)
{
//CancelRequested = true;
_thread.Join(5000);
}
base.Dispose();
}
public bool Start()
{
if (_thread != null)
return false;
_thread = new Thread(threadfunc);
_thread.Start();
return true;
}
private void threadfunc()
{
while (!CancelRequested && (!Completed || _queue.Any()))
{
if (DataAvailable)
{
T data = NextOrDefault();
if (data != null && !data.Equals(default(T)))
ProcessData(data);
}
else
Thread.Sleep(10);
}
}
// implement this in a sub-class to process the blocks
protected abstract void ProcessData(T data);
}
This way you're only keeping the data as long as you need it, and you can attach as many process threads as you need to the same observable data source.
And for the sake of completeness, here's a generic class that implements IObservable<T> so you can see how it all fits together. This one even has comments:
/// <summary>Generic IObservable implementation</summary>
/// <typeparam name="T">Type of messages being observed</typeparam>
public class Observable<T> : IObservable<T>
{
/// <summary>Subscription class to manage unsubscription of observers.</summary>
private class Subscription : IDisposable
{
/// <summary>Observer list that this subscription relates to</summary>
public readonly ConcurrentBag<IObserver<T>> _observers;
/// <summary>Observer to manage</summary>
public readonly IObserver<T> _observer;
/// <summary>Initialize subscription</summary>
/// <param name="observers">List of subscribed observers to unsubscribe from</param>
/// <param name="observer">Observer to manage</param>
public Subscription(ConcurrentBag<IObserver<T>> observers, IObserver<T> observer)
{
_observers = observers;
_observer = observer;
}
/// <summary>On disposal remove the subscriber from the subscription list</summary>
public void Dispose()
{
IObserver<T> observer;
if (_observers != null && _observers.Contains(_observer))
_observers.TryTake(out observer);
}
}
// list of subscribed observers
private readonly ConcurrentBag<IObserver<T>> _observers = new ConcurrentBag<IObserver<T>>();
/// <summary>Subscribe an observer to this observable</summary>
/// <param name="observer">Observer instance to subscribe</param>
/// <returns>A subscription object that unsubscribes on destruction</returns>
/// <remarks>Always returns a subscription. Ensure that previous subscriptions are disposed
/// before re-subscribing.</remarks>
public IDisposable Subscribe(IObserver<T> observer)
{
// only add observer if it doesn't already exist:
if (!_observers.Contains(observer))
_observers.Add(observer);
// ...but always return a new subscription.
return new Subscription(_observers, observer);
}
// delegate type for threaded invocation of IObserver.OnNext method
private delegate void delNext(T value);
/// <summary>Send <paramref name="data"/> to the OnNext methods of each subscriber</summary>
/// <param name="data">Data object to send to subscribers</param>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void Notify(T data)
{
foreach (var observer in _observers)
{
delNext handler = observer.OnNext;
handler.BeginInvoke(data, null, null);
}
}
// delegate type for asynchronous invocation of IObserver.OnComplete method
private delegate void delComplete();
/// <summary>Notify all subscribers that the observable has completed</summary>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void NotifyComplete()
{
foreach (var observer in _observers)
{
delComplete handler = observer.OnCompleted;
handler.BeginInvoke(null, null);
}
}
}
Now you can create an Observable<byte[]> to use as your transmitter for the Processor<byte[]> instances that are interested. Pull data blocks out of the input stream, audio reader, etc. and pass them to the Notify method. Just make sure that you clone the arrays beforehand...
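As a rough sketch of how this might be wired to the capture callback from the question (ByteCounter is a made-up example processor; waveIn is the capture source from the question):
// A trivial Processor<byte[]> subclass, just to show the shape of a concrete processor.
public class ByteCounter : Processor<byte[]>
{
    private long _total;
    public ByteCounter(IObservable<byte[]> source) : base(source) { }
    protected override void ProcessData(byte[] data) { _total += data.Length; }
}
// Wiring the capture callback to the observable:
var audioSource = new Observable<byte[]>();
var counter = new ByteCounter(audioSource);
counter.Start();
waveIn.DataAvailable += (sender, e) =>
{
    var block = new byte[e.BytesRecorded];        // clone, because the driver reuses e.Buffer
    Array.Copy(e.Buffer, block, e.BytesRecorded);
    audioSource.Notify(block);                    // fan the block out to every subscribed processor
};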
Related
I'm writing a listener for messages using the Rx framework.
The problem I'm facing is that the library I'm using uses a consumer that publishes events whenever a message has arrived.
I've managed to consume the incoming messages via Observable.FromEventPattern but I have a problem with the messages that are already in the server.
At the moment I have the following chain of commands
Create a consumer
Create an observable sequence with FromEventPattern and apply needed transformations
Tell the consumer to start
Subscribe to the sequence
The easiest solution would be to swap steps 3. and 4. but since they happen in different components of the system, it's very hard for me to do so.
Ideally I would like to execute step 3 when step 4 happens (like a OnSubscribe method).
Thanks for your help :)
PS: to add more details, the events are coming from a RabbitMQ queue and I am using the EventingBasicConsumer class found in the RabbitMQ.Client package.
Here you can find the library I am working on. Specifically, this is the class/method giving me problems.
Edit
Here is a stripped version of the problematic code
void Main()
{
var engine = new Engine();
var messages = engine.Start();
messages.Subscribe(m => m.Dump());
Console.ReadLine();
engine.Stop();
}
public class Engine
{
IConnection _connection;
IModel _channel;
public IObservable<Message> Start()
{
var connectionFactory = new ConnectionFactory();
_connection = connectionFactory.CreateConnection();
_channel = _connection.CreateModel();
EventingBasicConsumer consumer = new EventingBasicConsumer(_channel);
var observable = Observable.FromEventPattern<BasicDeliverEventArgs>(
a => consumer.Received += a,
a => consumer.Received -= a)
.Select(e => e.EventArgs);
_channel.BasicConsume("a_queue", false, consumer);
return observable.Select(Transform);
}
private Message Transform(BasicDeliverEventArgs args) => new Message();
public void Stop()
{
_channel.Dispose();
_connection.Dispose();
}
}
public class Message { }
The symptom I experience is that since I invoke BasicConsume before subscribing to the sequence, any message that is in the RabbitMQ queue is fetched but not passed down the pipeline.
Since I don't have "autoack" on, the messages are returned to the queue as soon as the program stops.
As some have noted in the comments, and as you note in the question, the issue is due to the way you're using the RabbitMQ client.
To get around some of these issues, what I actually did was create an ObservableConsumer class. This is an alternative to the EventingBasicConsumer which is in use currently. One reason I did this was to deal with the issue described in the question, but the other thing this does is allow you to re-use this consumer object beyond a single connection/channel instance. This has the benefit of allowing your downstream reactive code to remain wired in spite of transient connection/channel characteristics.
using System;
using System.Collections.Generic;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using RabbitMQ.Client;
namespace com.rabbitmq.consumers
{
public sealed class ObservableConsumer : IBasicConsumer
{
private readonly List<string> _consumerTags = new List<string>();
private readonly object _consumerTagsLock = new object();
private readonly Subject<Message> _subject = new Subject<Message>();
public ushort PrefetchCount { get; set; }
public IEnumerable<string> ConsumerTags { get { return new List<string>(_consumerTags); } }
/// <summary>
/// Registers this consumer on the given queue.
/// </summary>
/// <returns>The consumer tag assigned.</returns>
public string ConsumeFrom(IModel channel, string queueName)
{
Model = channel;
return Model.BasicConsume(queueName, false, this);
}
/// <summary>
/// Contains an observable of the incoming messages where messages are processed on a thread pool thread.
/// </summary>
public IObservable<Message> IncomingMessages
{
get { return _subject.ObserveOn(Scheduler.ThreadPool); }
}
///<summary>Retrieve the IModel instance this consumer is
///registered with.</summary>
public IModel Model { get; private set; }
///<summary>Returns true while the consumer is registered and
///expecting deliveries from the broker.</summary>
public bool IsRunning
{
get { return _consumerTags.Count > 0; }
}
/// <summary>
/// Run after a consumer is cancelled.
/// </summary>
/// <param name="consumerTag"></param>
private void OnConsumerCanceled(string consumerTag)
{
}
/// <summary>
/// Run after a consumer is added.
/// </summary>
/// <param name="consumerTag"></param>
private void OnConsumerAdded(string consumerTag)
{
}
public void HandleBasicConsumeOk(string consumerTag)
{
lock (_consumerTagsLock) {
if (!_consumerTags.Contains(consumerTag))
_consumerTags.Add(consumerTag);
}
}
public void HandleBasicCancelOk(string consumerTag)
{
lock (_consumerTagsLock) {
if (_consumerTags.Contains(consumerTag)) {
_consumerTags.Remove(consumerTag);
OnConsumerCanceled(consumerTag);
}
}
}
public void HandleBasicCancel(string consumerTag)
{
lock (_consumerTagsLock) {
if (_consumerTags.Contains(consumerTag)) {
_consumerTags.Remove(consumerTag);
OnConsumerCanceled(consumerTag);
}
}
}
public void HandleModelShutdown(IModel model, ShutdownEventArgs reason)
{
//Don't need to do anything.
}
public void HandleBasicDeliver(string consumerTag,
ulong deliveryTag,
bool redelivered,
string exchange,
string routingKey,
IBasicProperties properties,
byte[] body)
{
//Hack - prevents the broker from sending too many messages.
//if (PrefetchCount > 0 && _unackedMessages.Count > PrefetchCount) {
// Model.BasicReject(deliveryTag, true);
// return;
//}
var message = new Message(properties.HeaderFromBasicProperties()) { Content = body };
var deliveryData = new MessageDeliveryData()
{
ConsumerTag = consumerTag,
DeliveryTag = deliveryTag,
Redelivered = redelivered,
};
message.Tag = deliveryData;
if (AckMode != AcknowledgeMode.AckWhenReceived) {
message.Acknowledged += messageAcknowledged;
message.Failed += messageFailed;
}
_subject.OnNext(message);
}
void messageFailed(Message message, Exception ex, bool requeue)
{
try {
message.Acknowledged -= messageAcknowledged;
message.Failed -= messageFailed;
if (message.Tag is MessageDeliveryData) {
Model.BasicNack((message.Tag as MessageDeliveryData).DeliveryTag, false, requeue);
}
}
catch {}
}
void messageAcknowledged(Message message)
{
try {
message.Acknowledged -= messageAcknowledged;
message.Failed -= messageFailed;
if (message.Tag is MessageDeliveryData) {
var ackMultiple = AckMode == AcknowledgeMode.AckAfterAny;
Model.BasicAck((message.Tag as MessageDeliveryData).DeliveryTag, ackMultiple);
}
}
catch {}
}
}
}
I think there is no need to actually subscribe to the rabbit queue (via BasicConsume) until you have subscribers to your observable. Right now you are starting the rabbit subscription right away and pushing items to the observable even if no one has subscribed to it.
Suppose we have this sample class:
class Events {
public event Action<string> MessageArrived;
Timer _timer;
public void Start()
{
Console.WriteLine("Timer starting");
int i = 0;
_timer = new Timer(_ => {
this.MessageArrived?.Invoke(i.ToString());
i++;
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
}
public void Stop() {
_timer?.Dispose();
Console.WriteLine("Timer stopped");
}
}
What you are doing now is basically:
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
ev.Start();
return ob;
What you need instead is observable which does exactly that, but only when someone subscribes:
return Observable.Create<string>(observer =>
{
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
// first subscribe
var sub = ob.Subscribe(observer);
// then start
ev.Start();
// when subscription is disposed - unsubscribe from rabbit
return new CompositeDisposable(sub, Disposable.Create(() => ev.Stop()));
});
Good, but now every subscription to the observable will result in a separate subscription to the rabbit queue, which is not what we need. We can solve that with Publish().RefCount():
return Observable.Create<string>(observer => {
var ev = new Events();
var ob = Observable.FromEvent<string>(x => ev.MessageArrived += x, x => ev.MessageArrived -= x);
var sub = ob.Subscribe(observer);
ev.Start();
return new CompositeDisposable(sub, Disposable.Create(() => ev.Stop()));
}).Publish().RefCount();
Now what happens is that when the first subscriber subscribes to the observable (the ref count goes from 0 to 1), the body of Observable.Create is invoked and subscribes to the rabbit queue. That subscription is then shared by all subsequent subscribers. When the last one unsubscribes (the ref count goes back to zero), the subscription is disposed, ev.Stop is called, and we unsubscribe from the rabbit queue.
If it so happens that you call Start() (which creates the observable in your code) and never subscribe to it, nothing happens and no subscription to rabbit is made at all.
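Applied to the Engine class from the question, that could look roughly like the sketch below. It is only a sketch: error handling and channel settings are left out, it assumes using System.Reactive.Disposables, and the cleanup that Stop() did moves into the disposal of the subscription.
public IObservable<Message> Start()
{
    return Observable.Create<BasicDeliverEventArgs>(observer =>
    {
        var connectionFactory = new ConnectionFactory();
        var connection = connectionFactory.CreateConnection();
        var channel = connection.CreateModel();
        var consumer = new EventingBasicConsumer(channel);
        // subscribe first ...
        var subscription = Observable.FromEventPattern<BasicDeliverEventArgs>(
                a => consumer.Received += a,
                a => consumer.Received -= a)
            .Select(e => e.EventArgs)
            .Subscribe(observer);
        // ... then start consuming, so queued messages are not lost
        channel.BasicConsume("a_queue", false, consumer);
        return new CompositeDisposable(
            subscription,
            Disposable.Create(() =>
            {
                channel.Dispose();
                connection.Dispose();
            }));
    })
    .Select(Transform)
    .Publish()
    .RefCount();
}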
I have often run into situations where I need some sort of valve construct to control the flow of a reactive pipeline. Typically, in a network-based application I have had the requirement to open/close a request stream according to the connection state.
This valve subject should support opening/closing the stream, and output delivery in FIFO order. Input values should be buffered when the valve is closed.
A ConcurrentQueue or BlockingCollection are typically used in such scenarios, but that immediately introduces threading into the picture. I was looking for a purely reactive solution to this problem.
Here's an implementation mainly based on Buffer() and BehaviorSubject. The behavior subject tracks the open/closed state of the valve. Closing the valve opens a buffering window, and reopening the valve closes that window, releasing the buffered elements. The output of the buffer operator is "re-injected" onto the input (so that even observers themselves can close the valve):
/// <summary>
/// Subject offering Open() and Close() methods, with built-in buffering.
/// Note that closing the valve in the observer is supported.
/// </summary>
/// <remarks>As is the case with other Rx subjects, this class is not thread-safe, in that
/// order of elements in the output is indeterministic in the case of concurrent operation
/// of Open()/Close()/OnNext()/OnError(). To guarantee strict order of delivery even in the
/// case of concurrent access, <see cref="ValveSubjectExtensions.Synchronize{T}(NEXThink.Finder.Utils.Rx.IValveSubject{T})"/> can be used.</remarks>
/// <typeparam name="T">Elements type</typeparam>
public class ValveSubject<T> : IValveSubject<T>
{
private enum Valve
{
Open,
Closed
}
private readonly Subject<T> input = new Subject<T>();
private readonly BehaviorSubject<Valve> valveSubject = new BehaviorSubject<Valve>(Valve.Open);
private readonly Subject<T> output = new Subject<T>();
public ValveSubject()
{
var valveOperations = valveSubject.DistinctUntilChanged();
input.Buffer(
bufferOpenings: valveOperations.Where(v => v == Valve.Closed),
bufferClosingSelector: _ => valveOperations.Where(v => v == Valve.Open))
.SelectMany(t => t).Subscribe(input);
input.Where(t => valveSubject.Value == Valve.Open).Subscribe(output);
}
public bool IsOpen
{
get { return valveSubject.Value == Valve.Open; }
}
public bool IsClosed
{
get { return valveSubject.Value == Valve.Closed; }
}
public void OnNext(T value)
{
input.OnNext(value);
}
public void OnError(Exception error)
{
input.OnError(error);
}
public void OnCompleted()
{
output.OnCompleted();
input.OnCompleted();
valveSubject.OnCompleted();
}
public IDisposable Subscribe(IObserver<T> observer)
{
return output.Subscribe(observer);
}
public void Open()
{
valveSubject.OnNext(Valve.Open);
}
public void Close()
{
valveSubject.OnNext(Valve.Closed);
}
}
public interface IValveSubject<T>:ISubject<T>
{
void Open();
void Close();
}
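A quick usage sketch: values pushed while the valve is closed are buffered and then delivered, in order, when it reopens.
var valve = new ValveSubject<int>();
valve.Subscribe(x => Console.WriteLine(x));
valve.OnNext(1);   // valve starts open, so this is delivered immediately
valve.Close();
valve.OnNext(2);   // buffered while closed
valve.OnNext(3);   // buffered while closed
valve.Open();      // 2 and 3 are delivered now, in order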
An additional method for flushing out the valve can be useful at times, e.g. to eliminate remaining requests when they are no longer relevant. Here is an implementation that builds on the previous one, adapter-style:
/// <summary>
/// Subject with same semantics as <see cref="ValveSubject{T}"/>, but adding flushing out capability
/// which allows clearing the valve of any remaining elements before closing.
/// </summary>
/// <typeparam name="T">Elements type</typeparam>
public class FlushableValveSubject<T> : IFlushableValveSubject<T>
{
private readonly BehaviorSubject<ValveSubject<T>> valvesSubject = new BehaviorSubject<ValveSubject<T>>(new ValveSubject<T>());
private ValveSubject<T> CurrentValve
{
get { return valvesSubject.Value; }
}
public bool IsOpen
{
get { return CurrentValve.IsOpen; }
}
public bool IsClosed
{
get { return CurrentValve.IsClosed; }
}
public void OnNext(T value)
{
CurrentValve.OnNext(value);
}
public void OnError(Exception error)
{
CurrentValve.OnError(error);
}
public void OnCompleted()
{
CurrentValve.OnCompleted();
valvesSubject.OnCompleted();
}
public IDisposable Subscribe(IObserver<T> observer)
{
return valvesSubject.Switch().Subscribe(observer);
}
public void Open()
{
CurrentValve.Open();
}
public void Close()
{
CurrentValve.Close();
}
/// <summary>
/// Flushes out the remaining elements in the valve and resets the valve into a closed state
/// </summary>
/// <returns>Replayable observable with any remaining elements</returns>
public IObservable<T> FlushAndClose()
{
var previousValve = CurrentValve;
valvesSubject.OnNext(CreateClosedValve());
var remainingElements = new ReplaySubject<T>();
previousValve.Subscribe(remainingElements);
previousValve.Open();
return remainingElements;
}
private static ValveSubject<T> CreateClosedValve()
{
var valve = new ValveSubject<T>();
valve.Close();
return valve;
}
}
public interface IFlushableValveSubject<T> : IValveSubject<T>
{
IObservable<T> FlushAndClose();
}
As mentioned in the remarks above, these subjects are not "thread-safe", in the sense that the order of delivery is no longer guaranteed under concurrent operation. In a similar fashion to what exists for the standard Rx Subject (Subject.Synchronize(), https://msdn.microsoft.com/en-us/library/hh211643%28v=vs.103%29.aspx), we can introduce some extensions which provide locking around the valve:
public static class ValveSubjectExtensions
{
public static IValveSubject<T> Synchronize<T>(this IValveSubject<T> valve)
{
return Synchronize(valve, new object());
}
public static IValveSubject<T> Synchronize<T>(this IValveSubject<T> valve, object gate)
{
return new SynchronizedValveAdapter<T>(valve, gate);
}
public static IFlushableValveSubject<T> Synchronize<T>(this IFlushableValveSubject<T> valve)
{
return Synchronize(valve, new object());
}
public static IFlushableValveSubject<T> Synchronize<T>(this IFlushableValveSubject<T> valve, object gate)
{
return new SynchronizedFlushableValveAdapter<T>(valve, gate);
}
}
internal class SynchronizedValveAdapter<T> : IValveSubject<T>
{
private readonly object gate;
private readonly IValveSubject<T> valve;
public SynchronizedValveAdapter(IValveSubject<T> valve, object gate)
{
this.valve = valve;
this.gate = gate;
}
public void OnNext(T value)
{
lock (gate)
{
valve.OnNext(value);
}
}
public void OnError(Exception error)
{
lock (gate)
{
valve.OnError(error);
}
}
public void OnCompleted()
{
lock (gate)
{
valve.OnCompleted();
}
}
public IDisposable Subscribe(IObserver<T> observer)
{
return valve.Subscribe(observer);
}
public void Open()
{
lock (gate)
{
valve.Open();
}
}
public void Close()
{
lock (gate)
{
valve.Close();
}
}
}
internal class SynchronizedFlushableValveAdapter<T> : SynchronizedValveAdapter<T>, IFlushableValveSubject<T>
{
private readonly object gate;
private readonly IFlushableValveSubject<T> valve;
public SynchronizedFlushableValveAdapter(IFlushableValveSubject<T> valve, object gate)
: base(valve, gate)
{
this.valve = valve;
this.gate = gate;
}
public IObservable<T> FlushAndClose()
{
lock (gate)
{
return valve.FlushAndClose();
}
}
}
Here is my implementation, using the delay operator (RxJava):
source.delay(new Func1<Integer, Observable<Boolean>>() {
@Override
public Observable<Boolean> call(Integer integer) {
return valve.filter(new Func1<Boolean, Boolean>() {
@Override
public Boolean call(Boolean aBoolean) {
return aBoolean;
}
});
}
})
.toBlocking()
.subscribe(new Action1<Integer>() {
@Override
public void call(Integer integer) {
System.out.println("out: " + integer);
}
});
The idea is to delay all source emissions until the valve opens. If the valve is already open, items are emitted without delay.
Rx valve gist
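The snippet above is RxJava; a rough Rx.NET equivalent of the same delay-based valve uses the Delay overload that takes a per-element duration selector. This is only a sketch: source stands in for any IObservable<int>, and the valve is modelled here as a BehaviorSubject<bool> holding true while open.
var valve = new BehaviorSubject<bool>(false);              // false = closed
var gated = source.Delay(_ => valve.Where(open => open));  // each item waits until the valve reports open
gated.Subscribe(i => Console.WriteLine("out: " + i));
valve.OnNext(true);   // open the valve: buffered items are released, later items pass straight through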
I'm struggling with trying to find the best way to implement WCF retries. I'm hoping to make the client experience as clean as possible. There are two approaches of which I'm aware (see below). My question is: "Is there a third approach that I'm missing? Maybe one that's the generally accepted way of doing this?"
Approach #1: Create a proxy that implements the service interface. For each call to the proxy, implement retries.
public class Proxy : ISomeWcfServiceInterface
{
public int Foo(int snurl)
{
return MakeWcfCall<int>(() => _channel.Foo(snurl));
}
public string Bar(string snuh)
{
return MakeWcfCall<string>(() => _channel.Bar(snuh));
}
private static T MakeWcfCall<T>(Func<T> func)
{
// Invoke the func and implement retries.
}
}
Approach #2: Change MakeWcfCall() (above) to public, and have the consuming code send the func directly.
What I don't like about approach #1 is having to update the Proxy class every time the interface changes.
What I don't like about approach #2 is the client having to wrap their call in a func.
Am I missing a better way?
EDIT
I've posted an answer here (see below), based on the accepted answer that pointed me in the right direction. I thought I'd share my code, in an answer, to help someone work through what I had to work through. Hope it helps.
I have done this very type of thing, and I solved the problem using .NET's RealProxy class.
Using RealProxy, you can create an actual proxy at runtime from your provided interface. From the calling code it is as if they are using an IFoo channel, but in fact the IFoo they hold is the proxy, and you then get a chance to intercept the calls to any method, property, constructor, etc.
Deriving from RealProxy, you can then override the Invoke method to intercept calls to the WCF methods and handle CommunicationException, etc.
Note: This shouldn't be the accepted answer, but I wanted to post the solution in case it helps others. Jim's answer pointed me in this direction.
First, the consuming code, showing how it works:
static void Main(string[] args)
{
var channelFactory = new WcfChannelFactory<IPrestoService>(new NetTcpBinding());
var endpointAddress = ConfigurationManager.AppSettings["endpointAddress"];
// The call to CreateChannel() actually returns a proxy that can intercept calls to the
// service. This is done so that the proxy can retry on communication failures.
IPrestoService prestoService = channelFactory.CreateChannel(new EndpointAddress(endpointAddress));
Console.WriteLine("Enter some information to echo to the Presto service:");
string message = Console.ReadLine();
string returnMessage = prestoService.Echo(message);
Console.WriteLine("Presto responds: {0}", returnMessage);
Console.WriteLine("Press any key to stop the program.");
Console.ReadKey();
}
The WcfChannelFactory:
public class WcfChannelFactory<T> : ChannelFactory<T> where T : class
{
public WcfChannelFactory(Binding binding) : base(binding) {}
public T CreateBaseChannel()
{
return base.CreateChannel(this.Endpoint.Address, null);
}
public override T CreateChannel(EndpointAddress address, Uri via)
{
// This is where the magic happens. We don't really return a channel here;
// we return WcfClientProxy.GetTransparentProxy(). That class will now
// have the chance to intercept calls to the service.
this.Endpoint.Address = address;
var proxy = new WcfClientProxy<T>(this);
return proxy.GetTransparentProxy() as T;
}
}
The WcfClientProxy: (This is where we intercept and retry.)
public class WcfClientProxy<T> : RealProxy where T : class
{
private WcfChannelFactory<T> _channelFactory;
public WcfClientProxy(WcfChannelFactory<T> channelFactory) : base(typeof(T))
{
this._channelFactory = channelFactory;
}
public override IMessage Invoke(IMessage msg)
{
// When a service method gets called, we intercept it here and call it below with methodBase.Invoke().
var methodCall = msg as IMethodCallMessage;
var methodBase = methodCall.MethodBase;
// We can't call CreateChannel() because that creates an instance of this class,
// and we'd end up with a stack overflow. So, call CreateBaseChannel() to get the
// actual service.
T wcfService = this._channelFactory.CreateBaseChannel();
try
{
var result = methodBase.Invoke(wcfService, methodCall.Args);
return new ReturnMessage(
result, // Operation result
null, // Out arguments
0, // Out arguments count
methodCall.LogicalCallContext, // Call context
methodCall); // Original message
}
catch (FaultException)
{
// Need to specifically catch and rethrow FaultExceptions to bypass the CommunicationException catch.
// This is needed to distinguish between Faults and underlying communication exceptions.
throw;
}
catch (CommunicationException ex)
{
// Handle CommunicationException and implement retries here.
throw new NotImplementedException();
}
}
}
Sequence diagram of a call being intercepted by the proxy:
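The CommunicationException branch above is left unimplemented. One possible shape for the retries is a small bounded helper that the Invoke override could delegate to; the attempt count and delay below are arbitrary, and recreating the channel per attempt plus unwrapping TargetInvocationException are glossed over (assumes using System.ServiceModel and System.Threading).
// Sketch of a retry helper the proxy could call around the service invocation.
private static TResult CallWithRetries<TResult>(Func<TResult> call)
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return call();
        }
        catch (CommunicationException)
        {
            if (attempt >= maxAttempts) throw;
            Thread.Sleep(TimeSpan.FromSeconds(1));   // simple fixed back-off before retrying
        }
    }
}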
You can implement a generic proxy, for example using Castle. There is a good article here: http://www.planetgeek.ch/2010/10/13/dynamic-proxy-for-wcf-with-castle-dynamicproxy/. This approach gives the user an object that implements the interface, and you have one class responsible for communication.
Do not mess with generated code because, as you mentioned, it will be generated again so any customization will be overridden.
I see two ways:
Write/generate a partial class for each proxy that contains the retry variation. This is messy because you will still have to adjust it when the proxy changes
Make a custom version of svcutil that allows you to generate a proxy that derives from a different base class and put the retry code in that base class. This is quite some work but it can be done and solves the issue in a robust way.
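For the second option, the generated proxies would derive from a base class along these lines (a sketch only; RetryingClientBase is a made-up name, and recreating a faulted channel between attempts is glossed over):
public abstract class RetryingClientBase<TChannel> : ClientBase<TChannel> where TChannel : class
{
    // Generated operations call this instead of invoking the channel directly.
    protected TResult Retry<TResult>(Func<TChannel, TResult> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation(Channel);
            }
            catch (CommunicationException)
            {
                if (attempt >= maxAttempts) throw;
            }
        }
    }
}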
Go with approach 1, then wrap all the context events (open, opened, faulted, ...) into events exposed by your proxy class. Once the communication is faulted, re-create the proxy or call some recursive method inside the proxy class. I can share with you some work I have just tested.
public class DuplexCallBackNotificationIntegrationExtension : IExtension, INotificationPusherCallback {
#region - Field(s) -
private static Timer _Timer = null;
private static readonly object m_SyncRoot = new Object();
private static readonly Guid CMESchedulerApplicationID = Guid.NewGuid();
private static CancellationTokenSource cTokenSource = new CancellationTokenSource();
private static CancellationToken cToken = cTokenSource.Token;
#endregion
#region - Event(s) -
/// <summary>
/// Event fired during Duplex callback.
/// </summary>
public static event EventHandler<CallBackEventArgs> CallBackEvent;
public static event EventHandler<System.EventArgs> InstanceContextOpeningEvent;
public static event EventHandler<System.EventArgs> InstanceContextOpenedEvent;
public static event EventHandler<System.EventArgs> InstanceContextClosingEvent;
public static event EventHandler<System.EventArgs> InstanceContextClosedEvent;
public static event EventHandler<System.EventArgs> InstanceContextFaultedEvent;
#endregion
#region - Property(ies) -
/// <summary>
/// Interface extension designation.
/// </summary>
public string Name {
get {
return "Duplex Call Back Notification Integration Extension.";
}
}
/// <summary>
/// GUI Interface extension.
/// </summary>
public IUIExtension UIExtension {
get {
return null;
}
}
#endregion
#region - Constructor(s) / Finalizer(s) -
/// <summary>
/// Initializes a new instance of the DuplexCallBackNotificationIntegrationExtension class.
/// </summary>
public DuplexCallBackNotificationIntegrationExtension() {
CallDuplexNotificationPusher();
}
#endregion
#region - Delegate Invoker(s) -
void ICommunicationObject_Opening(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Info("context_Opening");
this.OnInstanceContextOpening(e);
}
void ICommunicationObject_Opened(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Opened");
this.OnInstanceContextOpened(e);
}
void ICommunicationObject_Closing(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Closing");
this.OnInstanceContextClosing(e);
}
void ICommunicationObject_Closed(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Debug("context_Closed");
this.OnInstanceContextClosed(e);
}
void ICommunicationObject_Faulted(object sender, System.EventArgs e) {
DefaultLogger.DUPLEXLogger.Error("context_Faulted");
this.OnInstanceContextFaulted(e);
if (_Timer != null) {
_Timer.Dispose();
}
IChannel channel = sender as IChannel;
if (channel != null) {
channel.Abort();
channel.Close();
}
DoWorkRoutine(cToken);
}
protected virtual void OnCallBackEvent(Notification objNotification) {
if (CallBackEvent != null) {
CallBackEvent(this, new CallBackEventArgs(objNotification));
}
}
protected virtual void OnInstanceContextOpening(System.EventArgs e) {
if (InstanceContextOpeningEvent != null) {
InstanceContextOpeningEvent(this, e);
}
}
protected virtual void OnInstanceContextOpened(System.EventArgs e) {
if (InstanceContextOpenedEvent != null) {
InstanceContextOpenedEvent(this, e);
}
}
protected virtual void OnInstanceContextClosing(System.EventArgs e) {
if (InstanceContextClosingEvent != null) {
InstanceContextClosingEvent(this, e);
}
}
protected virtual void OnInstanceContextClosed(System.EventArgs e) {
if (InstanceContextClosedEvent != null) {
InstanceContextClosedEvent(this, e);
}
}
protected virtual void OnInstanceContextFaulted(System.EventArgs e) {
if (InstanceContextFaultedEvent != null) {
InstanceContextFaultedEvent(this, e);
}
}
#endregion
#region - IDisposable Member(s) -
#endregion
#region - Private Method(s) -
/// <summary>
///
/// </summary>
void CallDuplexNotificationPusher() {
var routine = Task.Factory.StartNew(() => DoWorkRoutine(cToken), cToken);
cToken.Register(() => cancelNotification());
}
/// <summary>
///
/// </summary>
/// <param name="ct"></param>
void DoWorkRoutine(CancellationToken ct) {
lock (m_SyncRoot) {
var context = new InstanceContext(this);
var proxy = new NotificationPusherClient(context);
ICommunicationObject communicationObject = proxy as ICommunicationObject;
communicationObject.Opening += new System.EventHandler(ICommunicationObject_Opening);
communicationObject.Opened += new System.EventHandler(ICommunicationObject_Opened);
communicationObject.Faulted += new System.EventHandler(ICommunicationObject_Faulted);
communicationObject.Closed += new System.EventHandler(ICommunicationObject_Closed);
communicationObject.Closing += new System.EventHandler(ICommunicationObject_Closing);
try {
proxy.Subscribe(CMESchedulerApplicationID.ToString());
}
catch (Exception ex) {
Logger.HELogger.DefaultLogger.DUPLEXLogger.Error(ex);
switch (communicationObject.State) {
case CommunicationState.Faulted:
proxy.Close();
break;
default:
break;
}
cTokenSource.Cancel();
cTokenSource.Dispose();
cTokenSource = new CancellationTokenSource();
cToken = cTokenSource.Token;
CallDuplexNotificationPusher();
}
bool KeepAliveCallBackEnabled = Properties.Settings.Default.KeepAliveCallBackEnabled;
if (KeepAliveCallBackEnabled) {
_Timer = new Timer(new TimerCallback(delegate(object item) {
DefaultLogger.DUPLEXLogger.Debug(string.Format("._._._._._. New Iteration {0: yyyy MM dd hh mm ss ffff} ._._._._._.", DateTime.Now.ToUniversalTime().ToString()));
DBNotificationPusherService.Acknowledgment reply = DBNotificationPusherService.Acknowledgment.NAK;
try {
reply = proxy.KeepAlive();
}
catch (Exception ex) {
DefaultLogger.DUPLEXLogger.Error(ex);
switch (communicationObject.State) {
case CommunicationState.Faulted:
case CommunicationState.Closed:
proxy.Abort();
ICommunicationObject_Faulted(null, null);
break;
default:
break;
}
}
DefaultLogger.DUPLEXLogger.Debug(string.Format("Acknowledgment = {0}.", reply.ToString()));
_Timer.Change(Properties.Settings.Default.KeepAliveCallBackTimerInterval, Timeout.Infinite);
}), null, Properties.Settings.Default.KeepAliveCallBackTimerInterval, Timeout.Infinite);
}
}
}
/// <summary>
///
/// </summary>
void cancelNotification() {
DefaultLogger.DUPLEXLogger.Warn("Cancellation request made!!");
}
#endregion
#region - Public Method(s) -
/// <summary>
/// Fire OnCallBackEvent event and fill automatic-recording collection with newest
/// </summary>
/// <param name="action"></param>
public void SendNotification(Notification objNotification) {
// Fire event callback.
OnCallBackEvent(objNotification);
}
#endregion
#region - Callback(s) -
private void OnAsyncExecutionComplete(IAsyncResult result) {
}
#endregion
}
Just wrap all service calls in a function that takes a delegate and executes the passed delegate as many times as necessary:
internal R ExecuteServiceMethod<I, R>(Func<I, R> serviceCall, string userName, string password) {
//Note all clients have the name Manager, but this isn't a problem as they get resolved
//by type
ChannelFactory<I> factory = new ChannelFactory<I>("Manager");
factory.Credentials.UserName.UserName = userName;
factory.Credentials.UserName.Password = password;
I manager = factory.CreateChannel();
//Wrap below in a retry loop
return serviceCall.Invoke(manager);
}
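Filled out, the retry loop hinted at by the comment could look like this (a sketch; three attempts with a one-second pause is an arbitrary policy):
internal R ExecuteServiceMethod<I, R>(Func<I, R> serviceCall, string userName, string password) {
    ChannelFactory<I> factory = new ChannelFactory<I>("Manager");
    factory.Credentials.UserName.UserName = userName;
    factory.Credentials.UserName.Password = password;
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++) {
        I manager = factory.CreateChannel();
        try {
            return serviceCall.Invoke(manager);
        }
        catch (CommunicationException) {
            if (attempt >= maxAttempts) throw;
            Thread.Sleep(TimeSpan.FromSeconds(1));   // brief pause before recreating the channel
        }
    }
}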
I've created a new class called Actor which processes messages passed to it.
The problem I am running into is figuring out the most elegant way to pass related but different messages to the Actor. My first idea was to use inheritance; it seems bloated, but it is strongly typed, which is a definite requirement.
Have any ideas?
Example
private abstract class QueueMessage { }
private class ClearMessage : QueueMessage
{
public static readonly ClearMessage Instance = new ClearMessage();
private ClearMessage() { }
}
private class TryDequeueMessage : QueueMessage
{
public static readonly TryDequeueMessage Instance = new TryDequeueMessage();
private TryDequeueMessage() { }
}
private class EnqueueMessage : QueueMessage
{
public TValue Item { get; private set; }
private EnqueueMessage(TValue item)
{
Item = item;
}
}
Actor Class
/// <summary>Represents a callback method to be executed by an Actor.</summary>
/// <typeparam name="TReply">The type of reply.</typeparam>
/// <param name="reply">The reply made by the actor.</param>
public delegate void ActorReplyCallback<TReply>(TReply reply);
/// <summary>Represents an Actor which receives and processes messages in concurrent applications.</summary>
/// <typeparam name="TMessage">The type of message this actor accepts.</typeparam>
/// <typeparam name="TReply">The type of reply made by this actor.</typeparam>
public abstract class Actor<TMessage, TReply> : IDisposable
{
/// <summary>The default total number of threads to process messages.</summary>
private const Int32 DefaultThreadCount = 1;
/// <summary>Used to serialize access to the message queue.</summary>
private readonly Locker Locker;
/// <summary>Stores the messages until they can be processed.</summary>
private readonly System.Collections.Generic.Queue<Message> MessageQueue;
/// <summary>Signals the actor thread to process a new message.</summary>
private readonly ManualResetEvent PostEvent;
/// <summary>This tells the actor thread to stop reading from the queue.</summary>
private readonly ManualResetEvent DisposeEvent;
/// <summary>Processes the messages posted to the actor.</summary>
private readonly List<Thread> ActorThreads;
/// <summary>Initializes a new instance of the Genex.Concurrency<TRequest, TResponse> class.</summary>
public Actor() : this(DefaultThreadCount) { }
/// <summary>Initializes a new instance of the Genex.Concurrency<TRequest, TResponse> class.</summary>
/// <param name="thread_count"></param>
public Actor(Int32 thread_count)
{
if (thread_count < 1) throw new ArgumentOutOfRangeException("thread_count", thread_count, "Must be 1 or greater.");
Locker = new Locker();
MessageQueue = new System.Collections.Generic.Queue<Message>();
PostEvent = new ManualResetEvent(false);
DisposeEvent = new ManualResetEvent(true);
ActorThreads = new List<Thread>();
for (Int32 i = 0; i < thread_count; i++)
{
var thread = new Thread(ProcessMessages);
thread.IsBackground = true;
thread.Start();
ActorThreads.Add(thread);
}
}
/// <summary>Posts a message and waits for the reply.</summary>
/// <param name="value">The message to post to the actor.</param>
/// <returns>The reply from the actor.</returns>
public TReply PostWithReply(TMessage message)
{
using (var wrapper = new Message(message))
{
lock (Locker) MessageQueue.Enqueue(wrapper);
PostEvent.Set();
wrapper.Channel.CompleteEvent.WaitOne();
return wrapper.Channel.Value;
}
}
/// <summary>Posts a message to the actor and executes the callback when the reply is received.</summary>
/// <param name="value">The message to post to the actor.</param>
/// <param name="callback">The callback that will be invoked once the replay is received.</param>
public void PostWithAsyncReply(TMessage value, ActorReplyCallback<TReply> callback)
{
if (callback == null) throw new ArgumentNullException("callback");
ThreadPool.QueueUserWorkItem(state => callback(PostWithReply(value)));
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
if (DisposeEvent.WaitOne(10))
{
DisposeEvent.Reset();
PostEvent.Set();
foreach (var thread in ActorThreads)
{
thread.Join();
}
((IDisposable)PostEvent).Dispose();
((IDisposable)DisposeEvent).Dispose();
}
}
/// <summary>Processes a message posted to the actor.</summary>
/// <param name="message">The message to be processed.</param>
protected abstract void ProcessMessage(Message message);
/// <summary>Dequeues the messages and passes them to ProcessMessage.</summary>
private void ProcessMessages()
{
while (PostEvent.WaitOne() && DisposeEvent.WaitOne(10))
{
var message = (Message)null;
while (true)
{
lock (Locker)
{
message = MessageQueue.Count > 0 ?
MessageQueue.Dequeue() :
null;
if (message == null)
{
PostEvent.Reset();
break;
}
}
try
{
ProcessMessage(message);
}
catch
{
}
}
}
}
/// <summary>Represents a message that is passed to an actor.</summary>
protected class Message : IDisposable
{
/// <summary>The actual value of this message.</summary>
public TMessage Value { get; private set; }
/// <summary>The channel used to give a reply to this message.</summary>
public Channel Channel { get; private set; }
/// <summary>Initializes a new instance of Genex.Concurrency.Message class.</summary>
/// <param name="value">The actual value of the message.</param>
public Message(TMessage value)
{
Value = value;
Channel = new Channel();
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
Channel.Dispose();
}
}
/// <summary>Represents a channel used by an actor to reply to a message.</summary>
protected class Channel : IDisposable
{
/// <summary>The value of the reply.</summary>
public TReply Value { get; private set; }
/// <summary>Signifies that the message has been replied to.</summary>
public ManualResetEvent CompleteEvent { get; private set; }
/// <summary>Initializes a new instance of Genex.Concurrency.Channel class.</summary>
public Channel()
{
CompleteEvent = new ManualResetEvent(false);
}
/// <summary>Reply to the message received.</summary>
/// <param name="value">The value of the reply.</param>
public void Reply(TReply value)
{
Value = value;
CompleteEvent.Set();
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
((IDisposable)CompleteEvent).Dispose();
}
}
}
Steve Gilham summarized how the compiler actually handles discriminated unions. For your own code, you could consider a simplified version of that. Given the following F#:
type QueueMessage<'T> = ClearMessage | TryDequeueMessage | EnqueueMessage of 'T
Here's one way to emulate it in C#:
public enum MessageType { ClearMessage, TryDequeueMessage, EnqueueMessage }
public abstract class QueueMessage<T>
{
// prevents unwanted subclassing
private QueueMessage() { }
public abstract MessageType MessageType { get; }
/// <summary>
/// Only applies to EnqueueMessages
/// </summary>
public abstract T Item { get; }
public static QueueMessage<T> MakeClearMessage() { return new ClearMessage(); }
public static QueueMessage<T> MakeTryDequeueMessage() { return new TryDequeueMessage(); }
public static QueueMessage<T> MakeEnqueueMessage(T item) { return new EnqueueMessage(item); }
private sealed class ClearMessage : QueueMessage<T>
{
public ClearMessage() { }
public override MessageType MessageType
{
get { return MessageType.ClearMessage; }
}
/// <summary>
/// Not implemented by this subclass
/// </summary>
public override T Item
{
get { throw new NotImplementedException(); }
}
}
private sealed class TryDequeueMessage : QueueMessage<T>
{
public TryDequeueMessage() { }
public override MessageType MessageType
{
get { return MessageType.TryDequeueMessage; }
}
/// <summary>
/// Not implemented by this subclass
/// </summary>
public override T Item
{
get { throw new NotImplementedException(); }
}
}
private sealed class EnqueueMessage : QueueMessage<T>
{
private T item;
public EnqueueMessage(T item) { this.item = item; }
public override MessageType MessageType
{
get { return MessageType.EnqueueMessage; }
}
/// <summary>
/// Gets the item to be enqueued
/// </summary>
public override T Item { get { return item; } }
}
}
Now, in code that is given a QueueMessage, you can switch on the MessageType property in lieu of pattern matching, and make sure that you access the Item property only on EnqueueMessages.
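For example, handling code might look like this (a sketch; the comments mark where your queue logic would go):
void Handle(QueueMessage<string> msg)
{
    switch (msg.MessageType)
    {
        case MessageType.ClearMessage:
            // clear the queue
            break;
        case MessageType.TryDequeueMessage:
            // try to dequeue an item
            break;
        case MessageType.EnqueueMessage:
            string item = msg.Item;   // safe: only EnqueueMessage implements Item
            // enqueue the item
            break;
    }
}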
EDIT
Here's another alternative, based on Juliet's code. I've tried to streamline things so that it's got a more usable interface from C#, though. This is preferable to the previous version in that you can't get a NotImplementedException.
public abstract class QueueMessage<T>
{
// prevents unwanted subclassing
private QueueMessage() { }
public abstract TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase);
public static QueueMessage<T> MakeClearMessage() { return new ClearMessage(); }
public static QueueMessage<T> MakeTryDequeueMessage() { return new TryDequeueMessage(); }
public static QueueMessage<T> MakeEnqueueMessage(T item) { return new EnqueueMessage(item); }
private sealed class ClearMessage : QueueMessage<T>
{
public ClearMessage() { }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return clearCase();
}
}
private sealed class TryDequeueMessage : QueueMessage<T>
{
public TryDequeueMessage() { }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return tryDequeueCase();
}
}
private sealed class EnqueueMessage : QueueMessage<T>
{
private T item;
public EnqueueMessage(T item) { this.item = item; }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return enqueueCase(item);
}
}
}
You'd use this code like this:
public class MessageUserTest
{
public void Use()
{
// your code to get a message here...
QueueMessage<string> msg = null;
// emulate pattern matching, but without constructor names
int i =
msg.Match(
clearCase: () => -1,
tryDequeueCase: () => -2,
enqueueCase: s => s.Length);
}
}
In your example code, you implement PostWithAsyncReply in terms of PostWithReply. That isn't ideal, because it means that when you call PostWithAsyncReply and the actor takes a while to handle it, there are actually two threads tied up: the one executing the actor and the one waiting for it to finish. It would be better to have the one thread executing the actor and then calling the callback in the asynchronous case. (Obviously in the synchronous case there's no avoiding the tying up of two threads).
Update:
More on the above: you construct an actor with an argument telling it how many threads to run. For simplicity suppose every actor runs with one thread (actually quite a good situation because actors can then have internal state with no locking on it, as only one thread accesses it directly).
Actor A calls actor B, waiting for a response. In order to handle the request, actor B needs to call actor C. So now A and B's only threads are waiting, and C's is the only one actually giving the CPU any work to do. So much for multi-threading! But this is what you get if you wait for answers all the time.
Okay, you could increase the number of threads you start in each actor. But you'd be starting them so they could sit around doing nothing. A stack uses up a lot of memory, and context switching can be expensive.
So it's better to send messages asynchronously, with a callback mechanism so you can pick up the finished result. The problem with your implementation is that you grab another thread from the thread pool, purely to sit around and wait. So you basically apply the workaround of increasing the number of threads. You allocate a thread to the task of never running.
It would be better to implement PostWithReply in terms of PostWithAsyncReply, i.e. the opposite way round. The asynchronous version is low-level. Building on my delegate-based example (because it involves less typing of code!):
private bool InsertCoinImpl(int value)
{
// only accept dimes/10p/whatever it is in euros
return (value == 10);
}
public void InsertCoin(int value, Action<bool> accepted)
{
Submit(() => accepted(InsertCoinImpl(value)));
}
So the private implementation returns a bool. The public asynchronous method accepts an action that will receive the return value; both the private implementation and the callback action are executed on the same thread.
Hopefully the need to wait synchronously is going to be the minority case. But when you need it, it could be supplied by a helper method, totally general purpose and not tied to any specific actor or message type:
public static T Wait<T>(Action<Action<T>> activity)
{
T result = default(T);
var finished = new EventWaitHandle(false, EventResetMode.AutoReset);
activity(r =>
{
result = r;
finished.Set();
});
finished.WaitOne();
return result;
}
So now in some other actor we can say:
bool accepted = Helpers.Wait<bool>(r => chocMachine.InsertCoin(5, r));
The type argument to Wait may be unnecessary, haven't tried compiling any of this. But Wait basically magics-up a callback for you, so you can pass it to some asynchronous method, and on the outside you just get back whatever was passed to the callback as your return value. Note that the lambda you pass to Wait still actually executes on the same thread that called Wait.
We now return you to our regular program...
As for the actual problem you asked about, you send a message to an actor to get it to do something. Delegates are helpful here. They let you effectively get the compiler to generate you a class with some data, a constructor that you don't even have to call explicitly and also a method. If you're having to write a bunch of little classes, switch to delegates.
abstract class Actor
{
Queue<Action> _messages = new Queue<Action>();
protected void Submit(Action action)
{
// take out a lock of course
_messages.Enqueue(action);
}
// also a "run" that reads and executes the
// message delegates on background threads
}
Now a specific derived actor follows this pattern:
class ChocolateMachineActor : Actor
{
private void InsertCoinImpl(int value)
{
// whatever...
}
public void InsertCoin(int value)
{
Submit(() => InsertCoinImpl(value));
}
}
So to send a message to the actor, you just call the public methods. The private Impl method does the real work. No need to write a bunch of message classes by hand.
Obviously I've left out the stuff about replying, but that can all be done with more parameters. (See update above).
Union types and pattern matching map pretty directly to the visitor pattern, I've posted about this a few times before:
What task is best done in a functional programming style?
https://stackoverflow.com/questions/1883246/none-pure-functional-code-smells/1884256#1884256
So if you want to pass messages with lots of different types, you're stuck implementing the visitor pattern.
(Warning, untested code ahead, but should give you an idea of how its done)
Let's say we have something like this:
type msg =
| Add of int
| Sub of int
| Query of ReplyChannel<int>
let rec counts = function
| [] -> (0, 0, 0)
| Add(_)::xs -> let (a, b, c) = counts xs in (a + 1, b, c)
| Sub(_)::xs -> let (a, b, c) = counts xs in (a, b + 1, c)
| Query(_)::xs -> let (a, b, c) = counts xs in (a, b, c + 1)
You end up with this bulky C# code:
interface IMsgVisitor<T>
{
T Visit(Add msg);
T Visit(Sub msg);
T Visit(Query msg);
}
abstract class Msg
{
public abstract T Accept<T>(IMsgVisitor<T> visitor);
}
class Add : Msg
{
public readonly int Value;
public Add(int value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
class Sub : Msg
{
public readonly int Value;
public Sub(int value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
class Query : Msg
{
public readonly ReplyChannel<int> Value;
public Query(ReplyChannel<int> value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
Now whenever you want to do something with the message, you need to implement a visitor:
class MsgTypeCounter : IMsgVisitor<MsgTypeCounter>
{
public readonly Tuple<int, int, int> State;
public MsgTypeCounter(Tuple<int, int, int> state) { this.State = state; }
public MsgTypeCounter Visit(Add msg)
{
Console.WriteLine("got Add of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(1 + State.Item1, State.Item2, State.Item3));
}
public MsgTypeCounter Visit(Sub msg)
{
Console.WriteLine("got Sub of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(State.Item1, 1 + State.Item2, State.Item3));
}
public MsgTypeCounter Visit(Query msg)
{
Console.WriteLine("got Query of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(State.Item1, State.Item2, 1 + State.Item3));
}
}
Then finally you can use it like this:
var msgs = new Msg[] { new Add(1), new Add(3), new Sub(4), new Query(null) };
var counts = msgs.Aggregate(new MsgTypeCounter(Tuple.Create(0, 0, 0)),
(acc, x) => x.Accept(acc)).State;
Yes, it's as obtuse as it seems, but that's how you pass multiple message types to a class in a type-safe manner, and that's also why we don't implement unions in C# ;)
A long shot, but anyway..
I am assuming that a discriminated union is F#'s take on an ADT (algebraic data type), which means the value can be one of several things.
In case there are two, you could try and put it in a simple generic class with two type parameters:
public struct DiscriminatedUnion<T1,T2>
{
public DiscriminatedUnion(T1 t1) { value = t1; }
public DiscriminatedUnion(T2 t2) { value = t2; }
public static implicit operator T1(DiscriminatedUnion<T1,T2> du) {return (T1)du.value; }
public static implicit operator T2(DiscriminatedUnion<T1,T2> du) {return (T2)du.value; }
object value;
}
To make it work for 3 or more, we need to replicate this class a number of times.
Does anyone have a solution for function overloading depending on the runtime type?
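Usage would look something like this; the weakness of the emulation is that picking the wrong case only fails at runtime:
var du = new DiscriminatedUnion<int, string>("hello");
string s = du;   // fine: the union currently holds a string
int i = du;      // compiles, but throws InvalidCastException at runtime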
If you have this
type internal Either<'a, 'b> =
| Left of 'a
| Right of 'b
in F#, then the C# equivalent of what the compiler generates for the class Either<'a, 'b> has inner types like
internal class _Left : Either<a, b>
{
internal readonly a left1;
internal _Left(a left1);
}
each with a tag, a getter and a factory method
internal const int tag_Left = 0;
internal static Either<a, b> Left(a Left1);
internal a Left1 { get; }
plus a discriminator
internal int Tag { get; }
and a raft of methods to implement interfaces IStructuralEquatable, IComparable, IStructuralComparable
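From C#, consuming such a compiled union therefore comes down to switching on Tag and reading the matching case's getter. A rough sketch follows; the Right-side members and the tag values are assumed by symmetry with the Left members listed above, and since the generated members are internal they are only reachable from the F# assembly itself or via InternalsVisibleTo:
// Illustration only: member names follow the pattern shown above and may
// differ between F# compiler versions.
static string Describe(Either<int, string> e)
{
    switch (e.Tag)
    {
        case 0: return "Left: " + e.Left1;   // tag_Left
        case 1: return "Right: " + e.Right1; // tag_Right, assumed by symmetry
        default: throw new InvalidOperationException("Unknown tag");
    }
}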
There is a compile-time checked discriminated union type at Discriminated union in C#
private class ClearMessage
{
public static readonly ClearMessage Instance = new ClearMessage();
private ClearMessage() { }
}
private class TryDequeueMessage
{
public static readonly TryDequeueMessage Instance = new TryDequeueMessage();
private TryDequeueMessage() { }
}
private class EnqueueMessage
{
public TValue Item { get; private set; }
public EnqueueMessage(TValue item) { Item = item; }
}
Using the discriminated union could be done as follows:
// New file
// Create an alias
using Message = Union<ClearMessage, TryDequeueMessage, EnqueueMessage>;
int ProcessMessage(Message msg)
{
return msg.Match(
clear => 1,
dequeue => 2,
enqueue => 3);
}
This seems very noisy to me. Five lines of overhead is just too much.
m_Lock.EnterReadLock()
Try
Return m_List.Count
Finally
m_Lock.ExitReadLock()
End Try
So how would you simplify this?
I was thinking the same, but in C# ;-p
using System;
using System.Threading;
class Program
{
static void Main()
{
ReaderWriterLockSlim sync = new ReaderWriterLockSlim();
using (sync.Read())
{
// etc
}
}
}
public static class ReaderWriterExt
{
sealed class ReadLockToken : IDisposable
{
private ReaderWriterLockSlim sync;
public ReadLockToken(ReaderWriterLockSlim sync)
{
this.sync = sync;
sync.EnterReadLock();
}
public void Dispose()
{
if (sync != null)
{
sync.ExitReadLock();
sync = null;
}
}
}
public static IDisposable Read(this ReaderWriterLockSlim obj)
{
return new ReadLockToken(obj);
}
}
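The same token trick extends to the write lock. The counterpart below is not in the original snippet, just the obvious mirror of Read():
public static class ReaderWriterWriteExt
{
    sealed class WriteLockToken : IDisposable
    {
        private ReaderWriterLockSlim sync;
        public WriteLockToken(ReaderWriterLockSlim sync)
        {
            this.sync = sync;
            sync.EnterWriteLock();
        }
        public void Dispose()
        {
            // Null out the field so a double Dispose does not release twice.
            if (sync != null)
            {
                sync.ExitWriteLock();
                sync = null;
            }
        }
    }
    public static IDisposable Write(this ReaderWriterLockSlim obj)
    {
        return new WriteLockToken(obj);
    }
}
With that in place, using (sync.Write()) { /* mutate shared state */ } reads the same way as the read-lock version.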
All the solutions posted so far are at risk of deadlock.
A using block like this:
ReaderWriterLockSlim sync = new ReaderWriterLockSlim();
using (sync.Read())
{
// Do stuff
}
gets converted into something like this:
ReaderWriterLockSlim sync = new ReaderWriterLockSlim();
IDisposable d = sync.Read();
try
{
// Do stuff
}
finally
{
d.Dispose();
}
This means that a ThreadAbortException (or similar) could happen between sync.Read() and the try block. When this happens, the finally block never gets called, and the lock is never released!
For more information, and a better implementation see:
Deadlock with ReaderWriterLockSlim and other lock objects. In short, the better implementation comes down to moving the lock into the try block like so:
ReaderWriterLockSlim myLock = new ReaderWriterLockSlim();
try
{
myLock.EnterReadLock();
// Do stuff
}
finally
{
// Release the lock
myLock.ExitReadLock();
}
A wrapper class like the one in the accepted answer would be:
/// <summary>
/// Manager for a lock object that acquires and releases the lock in a manner
/// that avoids the common problem of deadlock within the using block
/// initialisation.
/// </summary>
/// <remarks>
/// This manager object is, by design, not itself thread-safe.
/// </remarks>
public sealed class ReaderWriterLockMgr : IDisposable
{
/// <summary>
/// Local reference to the lock object managed
/// </summary>
private ReaderWriterLockSlim _readerWriterLock = null;
private enum LockTypes { None, Read, Write, Upgradeable }
/// <summary>
/// The type of lock acquired by this manager
/// </summary>
private LockTypes _enteredLockType = LockTypes.None;
/// <summary>
/// Manager object construction that does not acquire any lock
/// </summary>
/// <param name="ReaderWriterLock">The lock object to manage</param>
public ReaderWriterLockMgr(ReaderWriterLockSlim ReaderWriterLock)
{
if (ReaderWriterLock == null)
throw new ArgumentNullException("ReaderWriterLock");
_readerWriterLock = ReaderWriterLock;
}
/// <summary>
/// Call EnterReadLock on the managed lock
/// </summary>
public void EnterReadLock()
{
if (_readerWriterLock == null)
throw new ObjectDisposedException(GetType().FullName);
if (_enteredLockType != LockTypes.None)
throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter");
// Allow exceptions by the Enter* call to propagate
// and prevent updating of _enteredLockType
_readerWriterLock.EnterReadLock();
_enteredLockType = LockTypes.Read;
}
/// <summary>
/// Call EnterWriteLock on the managed lock
/// </summary>
public void EnterWriteLock()
{
if (_readerWriterLock == null)
throw new ObjectDisposedException(GetType().FullName);
if (_enteredLockType != LockTypes.None)
throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter");
// Allow exceptions by the Enter* call to propagate
// and prevent updating of _enteredLockType
_readerWriterLock.EnterWriteLock();
_enteredLockType = LockTypes.Write;
}
/// <summary>
/// Call EnterUpgradeableReadLock on the managed lock
/// </summary>
public void EnterUpgradeableReadLock()
{
if (_readerWriterLock == null)
throw new ObjectDisposedException(GetType().FullName);
if (_enteredLockType != LockTypes.None)
throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter");
// Allow exceptions by the Enter* call to propagate
// and prevent updating of _enteredLockType
_readerWriterLock.EnterUpgradeableReadLock();
_enteredLockType = LockTypes.Upgradeable;
}
/// <summary>
/// Exit the lock, allowing re-entry later on whilst this manager is in scope
/// </summary>
/// <returns>Whether the lock was previously held</returns>
public bool ExitLock()
{
switch (_enteredLockType)
{
case LockTypes.Read:
_readerWriterLock.ExitReadLock();
_enteredLockType = LockTypes.None;
return true;
case LockTypes.Write:
_readerWriterLock.ExitWriteLock();
_enteredLockType = LockTypes.None;
return true;
case LockTypes.Upgradeable:
_readerWriterLock.ExitUpgradeableReadLock();
_enteredLockType = LockTypes.None;
return true;
}
return false;
}
/// <summary>
/// Dispose of the lock manager, releasing any lock held
/// </summary>
public void Dispose()
{
if (_readerWriterLock != null)
{
ExitLock();
// Tidy up managed resources
// Release reference to the lock so that it gets garbage collected
// when there are no more references to it
_readerWriterLock = null;
// Call GC.SuppressFinalize to take this object off the finalization
// queue and prevent finalization code for this object from
// executing a second time.
GC.SuppressFinalize(this);
}
}
~ReaderWriterLockMgr()
{
if (_readerWriterLock != null)
ExitLock();
// Leave references to managed resources so that the garbage collector can follow them
}
}
Making usage as follows:
ReaderWriterLockSlim myLock = new ReaderWriterLockSlim();
using (ReaderWriterLockMgr lockMgr = new ReaderWriterLockMgr(myLock))
{
lockMgr.EnterReadLock();
// Do stuff
}
Also, from Joe Duffy's Blog
Next, the lock is not robust to asynchronous exceptions such as thread aborts and out of memory conditions. If one of these occurs while in the middle of one of the lock’s methods, the lock state can be corrupt, causing subsequent deadlocks, unhandled exceptions, and (sadly) due to the use of spin locks internally, a pegged 100% CPU. So if you’re going to be running your code in an environment that regularly uses thread aborts or attempts to survive hard OOMs, you’re not going to be happy with this lock.
This is not my invention but it certainly has made my hair a little less gray.
internal static class ReaderWriteLockExtensions
{
private struct Disposable : IDisposable
{
private readonly Action m_action;
private Sentinel m_sentinel;
public Disposable(Action action)
{
m_action = action;
m_sentinel = new Sentinel();
}
public void Dispose()
{
m_action();
GC.SuppressFinalize(m_sentinel);
}
}
private class Sentinel
{
~Sentinel()
{
throw new InvalidOperationException("Lock not properly disposed.");
}
}
public static IDisposable AcquireReadLock(this ReaderWriterLockSlim @lock)
{
@lock.EnterReadLock();
return new Disposable(@lock.ExitReadLock);
}
public static IDisposable AcquireUpgradableReadLock(this ReaderWriterLockSlim @lock)
{
@lock.EnterUpgradeableReadLock();
return new Disposable(@lock.ExitUpgradeableReadLock);
}
public static IDisposable AcquireWriteLock(this ReaderWriterLockSlim @lock)
{
@lock.EnterWriteLock();
return new Disposable(@lock.ExitWriteLock);
}
}
How to use:
using (m_lock.AcquireReadLock())
{
// Do stuff
}
I ended up doing this, but I'm still open to better ways or flaws in my design.
Using m_Lock.ReadSection
Return m_List.Count
End Using
This uses this extension method/class:
<Extension()> Public Function ReadSection(ByVal lock As ReaderWriterLockSlim) As ReadWrapper
Return New ReadWrapper(lock)
End Function
Public NotInheritable Class ReadWrapper
Implements IDisposable
Private m_Lock As ReaderWriterLockSlim
Public Sub New(ByVal lock As ReaderWriterLockSlim)
m_Lock = lock
m_Lock.EnterReadLock()
End Sub
Public Sub Dispose() Implements IDisposable.Dispose
m_Lock.ExitReadLock()
End Sub
End Class
Since the point of a lock is to protect some piece of memory, I think it would be useful to wrap that memory in a "Locked" object, and only make it accessible through the various lock tokens (as mentioned by Mark):
// Stores a private List<T>, only accessible through lock tokens
// returned by Read, Write, and UpgradableRead.
var lockedList = new LockedList<T>( );
using( var r = lockedList.Read( ) ) {
foreach( T item in r.Reader )
...
}
using( var w = lockedList.Write( ) ) {
w.Writer.Add( new T( ) );
}
T t = ...;
using( var u = lockedList.UpgradableRead( ) ) {
if( !u.Reader.Contains( t ) )
using( var w = u.Upgrade( ) )
w.Writer.Add( t );
}
Now the only way to access the internal list is by calling the appropriate accessor.
This works particularly well for List<T>, since it already has the ReadOnlyCollection<T> wrapper. For other types, you could always create a Locked<T,T>, but then you lose out on the nice readable/writable type distinction.
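For instance, a hypothetical Locked<T,T> over a dictionary would use the Locked<W, R> class shown further down like this, with no read-only view left to stop accidental writes:
// Both Reader and Writer expose the same mutable dictionary here,
// so the compiler no longer distinguishes read access from write access.
var lockedMap = new Locked<Dictionary<string, int>, Dictionary<string, int>>(
    new Dictionary<string, int>(), d => d);
using (var r = lockedMap.Read())
{
    int value;
    r.Reader.TryGetValue("key", out value); // nothing stops r.Reader.Add(...) here
}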
One improvement might be to define the R and W types as disposable wrappers themselves, which would protect against (inadvertent) errors like:
List<T> list;
using( var w = lockedList.Write( ) )
list = w.Writer;
//BAD: "locked" object leaked outside of lock scope
list.MakeChangesWithoutHoldingLock( );
However, this would make Locked more complicated to use, and the current version does give you the same protection you have when manually locking a shared member.
sealed class LockedList<T> : Locked<List<T>, ReadOnlyCollection<T>> {
public LockedList( )
: base( new List<T>( ), list => list.AsReadOnly( ) )
{ }
}
public class Locked<W, R> where W : class where R : class {
private readonly LockerState state_;
public Locked( W writer, R reader ) { this.state_ = new LockerState( reader, writer ); }
public Locked( W writer, Func<W, R> getReader ) : this( writer, getReader( writer ) ) { }
public IReadable Read( ) { return new Readable( this.state_ ); }
public IWritable Write( ) { return new Writable( this.state_ ); }
public IUpgradable UpgradableRead( ) { return new Upgradable( this.state_ ); }
public interface IReadable : IDisposable { R Reader { get; } }
public interface IWritable : IDisposable { W Writer { get; } }
public interface IUpgradable : IReadable { IWritable Upgrade( );}
#region Private Implementation Details
sealed class LockerState {
public readonly R Reader;
public readonly W Writer;
public readonly ReaderWriterLockSlim Sync;
public LockerState( R reader, W writer ) {
Debug.Assert( reader != null && writer != null );
this.Reader = reader;
this.Writer = writer;
this.Sync = new ReaderWriterLockSlim( );
}
}
abstract class Accessor : IDisposable {
private LockerState state_;
protected LockerState State { get { return this.state_; } }
protected Accessor( LockerState state ) {
Debug.Assert( state != null );
this.Acquire( state.Sync );
this.state_ = state;
}
protected abstract void Acquire( ReaderWriterLockSlim sync );
protected abstract void Release( ReaderWriterLockSlim sync );
public void Dispose( ) {
if( this.state_ != null ) {
var sync = this.state_.Sync;
this.state_ = null;
this.Release( sync );
}
}
}
class Readable : Accessor, IReadable {
public Readable( LockerState state ) : base( state ) { }
public R Reader { get { return this.State.Reader; } }
protected override void Acquire( ReaderWriterLockSlim sync ) { sync.EnterReadLock( ); }
protected override void Release( ReaderWriterLockSlim sync ) { sync.ExitReadLock( ); }
}
sealed class Writable : Accessor, IWritable {
public Writable( LockerState state ) : base( state ) { }
public W Writer { get { return this.State.Writer; } }
protected override void Acquire( ReaderWriterLockSlim sync ) { sync.EnterWriteLock( ); }
protected override void Release( ReaderWriterLockSlim sync ) { sync.ExitWriteLock( ); }
}
sealed class Upgradable : Readable, IUpgradable {
public Upgradable( LockerState state ) : base( state ) { }
public IWritable Upgrade( ) { return new Writable( this.State ); }
protected override void Acquire( ReaderWriterLockSlim sync ) { sync.EnterUpgradeableReadLock( ); }
protected override void Release( ReaderWriterLockSlim sync ) { sync.ExitUpgradeableReadLock( ); }
}
#endregion
}