I have a C# project working with an input audio Stream from Kinect 1, Kinect 2, a microphone, or anything else.
waveIn.DataAvailable += (object sender, WaveInEventArgs e) => {
lock(buffer){
var pos = buffer.Position;
buffer.Write(e.Buffer, 0, e.BytesRecorded);
buffer.Position = pos;
}
};
The buffer variable is a Stream produced by component A that will be processed by a SpeechRecognition component B working on Streams.
I will add new components C, D, E, also working on Streams, to compute pitch, detect sound, do fingerprinting, or anything else...
How can I duplicate that Stream for components C, D, E?
Component A sends an event that says "I have a Stream, do what you want with it"; I don't want to reverse the logic into an event that says "Give me your streams".
I'm looking for a "MultiStream" that can hand out Stream instances and handle the rest of the job:
Component A
var buffer = new MultiStream();
...
SendMyEventWith(buffer)
Component B, C, D, E
public void HandleMyEvent(MultiStream buffer){
var stream = buffer.GetNewStream();
var engine = new EngineComponentB()
engine.SetStream(stream);
}
Must the MultiStream be a Stream itself, wrapping the Write() method (since Stream has no built-in data-available mechanism)?
If a Stream is Dispose()d by component B, should the MultiStream remove it from its internal array?
Should the MultiStream throw an exception on Read(), to force callers to use GetNewStream()?
EDIT: Kinect 1 provides a Stream itself ... :-( Should I use a Thread to pump it into the MultiStream?
Does anybody have that kind of MultiStream class?
Thanks
I'm not sure whether this is the best way to do it, or whether it's better than the previous answer, and I'm not guaranteeing that this code is perfect, but I coded something that is literally what you asked for because it was fun: a MultiStream class.
You can find the code for the class here: http://pastie.org/10289142
Usage Example:
MultiStream ms = new MultiStream();
Stream copy1 = ms.CloneStream();
ms.Read( ... );
Stream copy2 = ms.CloneStream();
ms.Read( ... );
copy1 and copy2 will contain identical data after the example is run, and they will continue to be updated as the MultiStream is written to. You can read, reposition, and dispose of the cloned streams individually. A disposed clone is removed from the MultiStream, and disposing of the MultiStream closes all related and cloned streams (you can change this if it's not the behavior you want). Trying to write to the cloned streams throws a NotSupportedException.
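For context, here is roughly how it could be wired into the DataAvailable handler from the question, assuming (as the description above suggests) that writes to the MultiStream are what propagate to the clones; the component names are just placeholders:
var buffer = new MultiStream();
// Component A: push each recorded block into the MultiStream.
waveIn.DataAvailable += (object sender, WaveInEventArgs e) =>
{
    buffer.Write(e.Buffer, 0, e.BytesRecorded);
};
// Components B, C, D, E: each one gets its own independent, read-only view.
Stream speechStream = buffer.CloneStream();
Stream pitchStream = buffer.CloneStream();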
Somehow I don't think streams really fit what you're trying to do. You're setting up a situation where a long run of the program is going to continually expand the data requirements for no apparent reason.
I'd suggest a pub/sub model that publishes the received audio data to subscribers, preferably using a multi-threaded approach to minimize the impact of a bad subscriber. Some ideas can be found here.
I've done this before with a processor class that implements IObserver<byte[]> and uses a Queue<byte[]> to store the sample blocks until the processing thread is ready for them. Here are the base classes:
public abstract class BufferedObserver<T> : IObserver<T>, IDisposable
{
private object _lck = new object();
private IDisposable _subscription = null;
public bool Subscribed { get { return _subscription != null; } }
private bool _completed = false;
public bool Completed { get { return _completed; } }
protected readonly Queue<T> _queue = new Queue<T>();
protected bool DataAvailable { get { lock(_lck) { return _queue.Any(); } } }
protected int AvailableCount { get { lock (_lck) { return _queue.Count; } } }
protected BufferedObserver()
{
}
protected BufferedObserver(IObservable<T> observable)
{
SubscribeTo(observable);
}
public virtual void Dispose()
{
if (_subscription != null)
{
_subscription.Dispose();
_subscription = null;
}
}
public void SubscribeTo(IObservable<T> observable)
{
if (_subscription != null)
_subscription.Dispose();
_subscription = observable.Subscribe(this);
_completed = false;
}
public virtual void OnCompleted()
{
_completed = true;
}
public virtual void OnError(Exception error)
{ }
public virtual void OnNext(T value)
{
lock (_lck)
_queue.Enqueue(value);
}
protected bool GetNext(ref T buffer)
{
lock (_lck)
{
if (!_queue.Any())
return false;
buffer = _queue.Dequeue();
return true;
}
}
protected T NextOrDefault()
{
T buffer = default(T);
GetNext(ref buffer);
return buffer;
}
}
public abstract class Processor<T> : BufferedObserver<T>
{
private object _lck = new object();
private Thread _thread = null;
private object _cancel_lck = new object();
private bool _cancel_requested = false;
private bool CancelRequested
{
get { lock(_cancel_lck) return _cancel_requested; }
set { lock(_cancel_lck) _cancel_requested = value; }
}
public bool Running { get { return _thread == null ? false : _thread.IsAlive; } }
public bool Finished { get { return _thread == null ? false : !_thread.IsAlive; } }
protected Processor(IObservable<T> observable)
: base(observable)
{ }
public override void Dispose()
{
if (_thread != null && _thread.IsAlive)
{
//CancelRequested = true;
_thread.Join(5000);
}
base.Dispose();
}
public bool Start()
{
if (_thread != null)
return false;
_thread = new Thread(threadfunc);
_thread.Start();
return true;
}
private void threadfunc()
{
while (!CancelRequested && (!Completed || _queue.Any()))
{
if (DataAvailable)
{
T data = NextOrDefault();
if (data != null && !data.Equals(default(T)))
ProcessData(data);
}
else
Thread.Sleep(10);
}
}
// implement this in a sub-class to process the blocks
protected abstract void ProcessData(T data);
}
This way you're only keeping the data as long as you need it, and you can attach as many process threads as you need to the same observable data source.
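For example, a concrete processor might look like this (an illustrative sketch only; it assumes the blocks are 16-bit little-endian PCM):
using System;
/// <summary>Computes the peak amplitude of each 16-bit PCM block it receives.</summary>
public class PeakLevelProcessor : Processor<byte[]>
{
    public PeakLevelProcessor(IObservable<byte[]> source)
        : base(source)
    { }
    protected override void ProcessData(byte[] data)
    {
        int peak = 0;
        for (int i = 0; i + 1 < data.Length; i += 2)
        {
            int sample = Math.Abs((int)BitConverter.ToInt16(data, i));
            if (sample > peak) peak = sample;
        }
        Console.WriteLine("Peak level: {0}", peak);
    }
}
Attach it to any IObservable<byte[]> with new PeakLevelProcessor(source).Start().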
And for the sake of completeness, here's a generic class that implements IObservable<T> so you can see how it all fits together. This one even has comments:
/// <summary>Generic IObservable implementation</summary>
/// <typeparam name="T">Type of messages being observed</typeparam>
public class Observable<T> : IObservable<T>
{
/// <summary>Subscription class to manage unsubscription of observers.</summary>
private class Subscription : IDisposable
{
/// <summary>Observer list that this subscription relates to</summary>
public readonly ConcurrentBag<IObserver<T>> _observers;
/// <summary>Observer to manage</summary>
public readonly IObserver<T> _observer;
/// <summary>Initialize subscription</summary>
/// <param name="observers">List of subscribed observers to unsubscribe from</param>
/// <param name="observer">Observer to manage</param>
public Subscription(ConcurrentBag<IObserver<T>> observers, IObserver<T> observer)
{
_observers = observers;
_observer = observer;
}
/// <summary>On disposal remove the subscriber from the subscription list</summary>
public void Dispose()
{
IObserver<T> observer;
if (_observers != null && _observers.Contains(_observer))
_observers.TryTake(out observer);
}
}
// list of subscribed observers
private readonly ConcurrentBag<IObserver<T>> _observers = new ConcurrentBag<IObserver<T>>();
/// <summary>Subscribe an observer to this observable</summary>
/// <param name="observer">Observer instance to subscribe</param>
/// <returns>A subscription object that unsubscribes on destruction</returns>
/// <remarks>Always returns a subscription. Ensure that previous subscriptions are disposed
/// before re-subscribing.</remarks>
public IDisposable Subscribe(IObserver<T> observer)
{
// only add observer if it doesn't already exist:
if (!_observers.Contains(observer))
_observers.Add(observer);
// ...but always return a new subscription.
return new Subscription(_observers, observer);
}
// delegate type for threaded invocation of IObserver.OnNext method
private delegate void delNext(T value);
/// <summary>Send <paramref name="data"/> to the OnNext methods of each subscriber</summary>
/// <param name="data">Data object to send to subscribers</param>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void Notify(T data)
{
foreach (var observer in _observers)
{
delNext handler = observer.OnNext;
handler.BeginInvoke(data, null, null);
}
}
// delegate type for asynchronous invocation of IObserver.OnComplete method
private delegate void delComplete();
/// <summary>Notify all subscribers that the observable has completed</summary>
/// <remarks>Uses delegate.BeginInvoke to send out notifications asynchronously.</remarks>
public void NotifyComplete()
{
foreach (var observer in _observers)
{
delComplete handler = observer.OnCompleted;
handler.BeginInvoke(null, null);
}
}
}
Now you can create an Observable<byte[]> to use as your transmitter for the Processor<byte[]> instances that are interested. Pull data blocks out of the input stream, audio reader, etc. and pass them to the Notify method. Just make sure that you clone the arrays beforehand...
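For instance, tying it back to the NAudio-style handler in the question (SpeechRecognitionProcessor is a hypothetical Processor<byte[]> subclass, PeakLevelProcessor is the one sketched earlier, and the copy matters because the event reuses its buffer):
var audioSource = new Observable<byte[]>();
var speech = new SpeechRecognitionProcessor(audioSource); // hypothetical subclass
var levels = new PeakLevelProcessor(audioSource);
speech.Start();
levels.Start();
waveIn.DataAvailable += (sender, e) =>
{
    var block = new byte[e.BytesRecorded];
    Array.Copy(e.Buffer, block, e.BytesRecorded);
    audioSource.Notify(block);
};
// When the capture is finished:
// audioSource.NotifyComplete();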
I have often run into situations where I need some sort of valve construct to control the flow of a reactive pipeline. Typically, in a network-based application I have had the requirement to open/close a request stream according to the connection state.
This valve subject should support opening/closing the stream, and output delivery in FIFO order. Input values should be buffered when the valve is closed.
A ConcurrentQueue or BlockingCollection are typically used in such scenarios, but that immediately introduces threading into the picture. I was looking for a purely reactive solution to this problem.
Here's an implementation based mainly on Buffer() and BehaviorSubject. The behavior subject tracks the open/closed state of the valve. Closing the valve opens a buffering window, and reopening the valve closes that window, releasing the buffered elements. The output of the buffer operator is "re-injected" onto the input (so that even observers themselves can close the valve):
/// <summary>
/// Subject offering Open() and Close() methods, with built-in buffering.
/// Note that closing the valve in the observer is supported.
/// </summary>
/// <remarks>As is the case with other Rx subjects, this class is not thread-safe, in that
/// order of elements in the output is indeterministic in the case of concurrent operation
/// of Open()/Close()/OnNext()/OnError(). To guarantee strict order of delivery even in the
/// case of concurrent access, <see cref="ValveSubjectExtensions.Synchronize{T}(NEXThink.Finder.Utils.Rx.IValveSubject{T})"/> can be used.</remarks>
/// <typeparam name="T">Elements type</typeparam>
public class ValveSubject<T> : IValveSubject<T>
{
private enum Valve
{
Open,
Closed
}
private readonly Subject<T> input = new Subject<T>();
private readonly BehaviorSubject<Valve> valveSubject = new BehaviorSubject<Valve>(Valve.Open);
private readonly Subject<T> output = new Subject<T>();
public ValveSubject()
{
var valveOperations = valveSubject.DistinctUntilChanged();
input.Buffer(
bufferOpenings: valveOperations.Where(v => v == Valve.Closed),
bufferClosingSelector: _ => valveOperations.Where(v => v == Valve.Open))
.SelectMany(t => t).Subscribe(input);
input.Where(t => valveSubject.Value == Valve.Open).Subscribe(output);
}
public bool IsOpen
{
get { return valveSubject.Value == Valve.Open; }
}
public bool IsClosed
{
get { return valveSubject.Value == Valve.Closed; }
}
public void OnNext(T value)
{
input.OnNext(value);
}
public void OnError(Exception error)
{
input.OnError(error);
}
public void OnCompleted()
{
output.OnCompleted();
input.OnCompleted();
valveSubject.OnCompleted();
}
public IDisposable Subscribe(IObserver<T> observer)
{
return output.Subscribe(observer);
}
public void Open()
{
valveSubject.OnNext(Valve.Open);
}
public void Close()
{
valveSubject.OnNext(Valve.Closed);
}
}
public interface IValveSubject<T>:ISubject<T>
{
void Open();
void Close();
}
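A quick usage sketch (values pushed while the valve is closed are buffered and released in order when it reopens):
var valve = new ValveSubject<int>();
valve.Subscribe(x => Console.WriteLine("out: " + x));
valve.OnNext(1);   // the valve starts open: emitted immediately
valve.Close();
valve.OnNext(2);   // buffered
valve.OnNext(3);   // buffered
valve.Open();      // 2 and 3 are emitted now, in order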
An additional method for flushing out the valve can be useful at times, e.g. to eliminate remaining requests when they are no longer relevant. Here is an implementation that builds upon the preceding one, adapter-style:
/// <summary>
/// Subject with same semantics as <see cref="ValveSubject{T}"/>, but adding flushing out capability
/// which allows clearing the valve of any remaining elements before closing.
/// </summary>
/// <typeparam name="T">Elements type</typeparam>
public class FlushableValveSubject<T> : IFlushableValveSubject<T>
{
private readonly BehaviorSubject<ValveSubject<T>> valvesSubject = new BehaviorSubject<ValveSubject<T>>(new ValveSubject<T>());
private ValveSubject<T> CurrentValve
{
get { return valvesSubject.Value; }
}
public bool IsOpen
{
get { return CurrentValve.IsOpen; }
}
public bool IsClosed
{
get { return CurrentValve.IsClosed; }
}
public void OnNext(T value)
{
CurrentValve.OnNext(value);
}
public void OnError(Exception error)
{
CurrentValve.OnError(error);
}
public void OnCompleted()
{
CurrentValve.OnCompleted();
valvesSubject.OnCompleted();
}
public IDisposable Subscribe(IObserver<T> observer)
{
return valvesSubject.Switch().Subscribe(observer);
}
public void Open()
{
CurrentValve.Open();
}
public void Close()
{
CurrentValve.Close();
}
/// <summary>
/// Discards remaining elements in the valve and reset the valve into a closed state
/// </summary>
/// <returns>Replayable observable with any remaining elements</returns>
public IObservable<T> FlushAndClose()
{
var previousValve = CurrentValve;
valvesSubject.OnNext(CreateClosedValve());
var remainingElements = new ReplaySubject<T>();
previousValve.Subscribe(remainingElements);
previousValve.Open();
return remainingElements;
}
private static ValveSubject<T> CreateClosedValve()
{
var valve = new ValveSubject<T>();
valve.Close();
return valve;
}
}
public interface IFlushableValveSubject<T> : IValveSubject<T>
{
IObservable<T> FlushAndClose();
}
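And a sketch of the flushing variant, where the buffered elements are diverted instead of reaching the regular subscribers:
var valve = new FlushableValveSubject<int>();
valve.Subscribe(x => Console.WriteLine("out: " + x));
valve.Close();
valve.OnNext(1);   // buffered
valve.OnNext(2);   // buffered
// Divert the buffered elements; the valve is left closed.
IObservable<int> flushed = valve.FlushAndClose();
flushed.Subscribe(x => Console.WriteLine("flushed: " + x)); // flushed: 1, flushed: 2
valve.Open();
valve.OnNext(3);   // out: 3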
As mentioned in the comment, these subjects are not "thread-safe", in the sense that the order of delivery is no longer guaranteed in the case of concurrent operation. In a similar fashion to what exists for the standard Rx Subject, Subject.Synchronize() (https://msdn.microsoft.com/en-us/library/hh211643%28v=vs.103%29.aspx), we can introduce some extensions which provide locking around the valve:
public static class ValveSubjectExtensions
{
public static IValveSubject<T> Synchronize<T>(this IValveSubject<T> valve)
{
return Synchronize(valve, new object());
}
public static IValveSubject<T> Synchronize<T>(this IValveSubject<T> valve, object gate)
{
return new SynchronizedValveAdapter<T>(valve, gate);
}
public static IFlushableValveSubject<T> Synchronize<T>(this IFlushableValveSubject<T> valve)
{
return Synchronize(valve, new object());
}
public static IFlushableValveSubject<T> Synchronize<T>(this IFlushableValveSubject<T> valve, object gate)
{
return new SynchronizedFlushableValveAdapter<T>(valve, gate);
}
}
internal class SynchronizedValveAdapter<T> : IValveSubject<T>
{
private readonly object gate;
private readonly IValveSubject<T> valve;
public SynchronizedValveAdapter(IValveSubject<T> valve, object gate)
{
this.valve = valve;
this.gate = gate;
}
public void OnNext(T value)
{
lock (gate)
{
valve.OnNext(value);
}
}
public void OnError(Exception error)
{
lock (gate)
{
valve.OnError(error);
}
}
public void OnCompleted()
{
lock (gate)
{
valve.OnCompleted();
}
}
public IDisposable Subscribe(IObserver<T> observer)
{
return valve.Subscribe(observer);
}
public void Open()
{
lock (gate)
{
valve.Open();
}
}
public void Close()
{
lock (gate)
{
valve.Close();
}
}
}
internal class SynchronizedFlushableValveAdapter<T> : SynchronizedValveAdapter<T>, IFlushableValveSubject<T>
{
private readonly object gate;
private readonly IFlushableValveSubject<T> valve;
public SynchronizedFlushableValveAdapter(IFlushableValveSubject<T> valve, object gate)
: base(valve, gate)
{
this.valve = valve;
this.gate = gate;
}
public IObservable<T> FlushAndClose()
{
lock (gate)
{
return valve.FlushAndClose();
}
}
}
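Usage then stays a one-liner; overload resolution picks the flushable adapter when the subject supports flushing:
IValveSubject<int> valve = new ValveSubject<int>().Synchronize();
IFlushableValveSubject<int> flushable = new FlushableValveSubject<int>().Synchronize();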
Here is my implementation with delay operator:
source.delay(new Func1<Integer, Observable<Boolean>>() {
    @Override
    public Observable<Boolean> call(Integer integer) {
        return valve.filter(new Func1<Boolean, Boolean>() {
            @Override
            public Boolean call(Boolean aBoolean) {
                return aBoolean;
            }
        });
    }
})
.toBlocking()
.subscribe(new Action1<Integer>() {
    @Override
    public void call(Integer integer) {
        System.out.println("out: " + integer);
    }
});
The idea is to delay all source emissions until the valve opens. If the valve is already open, items are emitted without delay.
Rx valve gist
If you profile a simple client application that uses SocketAsyncEventArgs, you will notice Thread and ExecutionContext allocations.
The source of the allocations is SocketAsyncEventArgs.StartOperationCommon that creates a copy of the ExecutionContext with ExecutionContext.CreateCopy().
ExecutionContext.SuppressFlow seems like a good way to suppress this allocation. However, this method itself generates allocations when run on a new thread.
How can I avoid these allocations?
SocketAsyncEventArgs
public class SocketAsyncEventArgs : EventArgs, IDisposable {
//...
// Method called to prepare for a native async socket call.
// This method performs the tasks common to all socket operations.
internal void StartOperationCommon(Socket socket) {
//...
// Prepare execution context for callback.
if (ExecutionContext.IsFlowSuppressed()) {
// This condition is what you need to pass.
// Fast path for when flow is suppressed.
m_Context = null;
m_ContextCopy = null;
} else {
// Flow is not suppressed.
//...
// If there is an execution context we need
//a fresh copy for each completion.
if(m_Context != null) {
m_ContextCopy = m_Context.CreateCopy();
}
}
// Remember current socket.
m_CurrentSocket = socket;
}
[Pure]
public static bool IsFlowSuppressed()
{
return Thread.CurrentThread.GetExecutionContextReader().IsFlowSuppressed;
}
//...
}
ExecutionContext
[Serializable]
public sealed class ExecutionContext : IDisposable, ISerializable
{
//...
// Misc state variables.
private ExecutionContext m_Context;
private ExecutionContext m_ContextCopy;
private ContextCallback m_ExecutionCallback;
//...
internal struct Reader
{
ExecutionContext m_ec;
//...
public bool IsFlowSuppressed
{
#if !FEATURE_CORECLR
[MethodImpl(MethodImplOptions.AggressiveInlining)]
#endif
get { return IsNull ? false : m_ec.isFlowSuppressed; }
}
} //end of Reader
internal bool isFlowSuppressed
{
get
{
return (_flags & Flags.IsFlowSuppressed) != Flags.None;
}
set
{
Contract.Assert(!IsPreAllocatedDefault);
if (value)
_flags |= Flags.IsFlowSuppressed;
else
_flags &= ~Flags.IsFlowSuppressed;
}
}
[System.Security.SecurityCritical] // auto-generated_required
public static AsyncFlowControl SuppressFlow()
{
if (IsFlowSuppressed())
{
throw new InvalidOperationException(Environment.GetResourceString("InvalidOperation_CannotSupressFlowMultipleTimes"));
}
Contract.EndContractBlock();
AsyncFlowControl afc = new AsyncFlowControl();
afc.Setup();
return afc;
}
//...
}//end of ExecutionContext.
AsyncFlowControl
public struct AsyncFlowControl: IDisposable
{
private bool useEC;
private ExecutionContext _ec;
//...
[SecurityCritical]
internal void Setup()
{
useEC = true;
Thread currentThread = Thread.CurrentThread;
_ec = currentThread.GetMutableExecutionContext();
_ec.isFlowSuppressed = true;
_thread = currentThread;
}
}
Thread
// deliberately not [serializable]
[ClassInterface(ClassInterfaceType.None)]
[ComDefaultInterface(typeof(_Thread))]
[System.Runtime.InteropServices.ComVisible(true)]
public sealed class Thread : CriticalFinalizerObject, _Thread
{
//...
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
internal ExecutionContext.Reader GetExecutionContextReader()
{
return new ExecutionContext.Reader(m_ExecutionContext);
}
}
The only way to set isFlowSuppressed to true, and thus pass the condition in the StartOperationCommon method, is by calling the Setup method, and the only call to Setup is in the SuppressFlow method, which you have already discussed.
As you can see, SuppressFlow is the only solution.
Actually, SuppressFlow doesn't allocate. It returns an AsyncFlowControl, which is a struct. The proper solution is basically to call SendAsync and ReceiveAsync as follows:
public static bool SendAsyncSuppressFlow(this Socket self, SocketAsyncEventArgs e)
{
var control = ExecutionContext.SuppressFlow();
try
{
return self.SendAsync(e);
}
finally
{
control.Undo();
}
}
public static bool ReceiveAsyncSuppressFlow(this Socket self, SocketAsyncEventArgs e)
{
var control = ExecutionContext.SuppressFlow();
try
{
return self.ReceiveAsync(e);
}
finally
{
control.Undo();
}
}
I created these extension methods to make this a bit simpler and more explicit.
Traces with dotMemory showed that memory allocations really do go down to zero.
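A usage sketch (the completion handling is elided and the socket and buffer are placeholders):
var args = new SocketAsyncEventArgs();
args.SetBuffer(new byte[8192], 0, 8192);
args.Completed += (sender, e) =>
{
    // handle e.BytesTransferred / e.SocketError, then post the next receive
};
// No ExecutionContext copy is captured for the callback.
if (!socket.ReceiveAsyncSuppressFlow(args))
{
    // The operation completed synchronously; process the result inline.
}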
I've created a new class called Actor which processes messages passed to it.
The problem I am running into is figuring out the most elegant way to pass related but different messages to the Actor. My first idea was to use inheritance, but it seems so bloated; on the other hand it is strongly typed, which is a definite requirement.
Have any ideas?
Example
private abstract class QueueMessage { }
private class ClearMessage : QueueMessage
{
public static readonly ClearMessage Instance = new ClearMessage();
private ClearMessage() { }
}
private class TryDequeueMessage : QueueMessage
{
public static readonly TryDequeueMessage Instance = new TryDequeueMessage();
private TryDequeueMessage() { }
}
private class EnqueueMessage : QueueMessage
{
public TValue Item { get; private set; }
private EnqueueMessage(TValue item)
{
Item = item;
}
}
Actor Class
/// <summary>Represents a callback method to be executed by an Actor.</summary>
/// <typeparam name="TReply">The type of reply.</typeparam>
/// <param name="reply">The reply made by the actor.</param>
public delegate void ActorReplyCallback<TReply>(TReply reply);
/// <summary>Represents an Actor which receives and processes messages in concurrent applications.</summary>
/// <typeparam name="TMessage">The type of message this actor accepts.</typeparam>
/// <typeparam name="TReply">The type of reply made by this actor.</typeparam>
public abstract class Actor<TMessage, TReply> : IDisposable
{
/// <summary>The default total number of threads to process messages.</summary>
private const Int32 DefaultThreadCount = 1;
/// <summary>Used to serialize access to the message queue.</summary>
private readonly Locker Locker;
/// <summary>Stores the messages until they can be processed.</summary>
private readonly System.Collections.Generic.Queue<Message> MessageQueue;
/// <summary>Signals the actor thread to process a new message.</summary>
private readonly ManualResetEvent PostEvent;
/// <summary>This tells the actor thread to stop reading from the queue.</summary>
private readonly ManualResetEvent DisposeEvent;
/// <summary>Processes the messages posted to the actor.</summary>
private readonly List<Thread> ActorThreads;
/// <summary>Initializes a new instance of the Genex.Concurrency<TRequest, TResponse> class.</summary>
public Actor() : this(DefaultThreadCount) { }
/// <summary>Initializes a new instance of the Genex.Concurrency<TRequest, TResponse> class.</summary>
/// <param name="thread_count"></param>
public Actor(Int32 thread_count)
{
if (thread_count < 1) throw new ArgumentOutOfRangeException("thread_count", thread_count, "Must be 1 or greater.");
Locker = new Locker();
MessageQueue = new System.Collections.Generic.Queue<Message>();
PostEvent = new ManualResetEvent(false);
DisposeEvent = new ManualResetEvent(true);
ActorThreads = new List<Thread>();
for (Int32 i = 0; i < thread_count; i++)
{
var thread = new Thread(ProcessMessages);
thread.IsBackground = true;
thread.Start();
ActorThreads.Add(thread);
}
}
/// <summary>Posts a message and waits for the reply.</summary>
/// <param name="value">The message to post to the actor.</param>
/// <returns>The reply from the actor.</returns>
public TReply PostWithReply(TMessage message)
{
using (var wrapper = new Message(message))
{
lock (Locker) MessageQueue.Enqueue(wrapper);
PostEvent.Set();
wrapper.Channel.CompleteEvent.WaitOne();
return wrapper.Channel.Value;
}
}
/// <summary>Posts a message to the actor and executes the callback when the reply is received.</summary>
/// <param name="value">The message to post to the actor.</param>
/// <param name="callback">The callback that will be invoked once the replay is received.</param>
public void PostWithAsyncReply(TMessage value, ActorReplyCallback<TReply> callback)
{
if (callback == null) throw new ArgumentNullException("callback");
ThreadPool.QueueUserWorkItem(state => callback(PostWithReply(value)));
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
if (DisposeEvent.WaitOne(10))
{
DisposeEvent.Reset();
PostEvent.Set();
foreach (var thread in ActorThreads)
{
thread.Join();
}
((IDisposable)PostEvent).Dispose();
((IDisposable)DisposeEvent).Dispose();
}
}
/// <summary>Processes a message posted to the actor.</summary>
/// <param name="message">The message to be processed.</param>
protected abstract void ProcessMessage(Message message);
/// <summary>Dequeues the messages passes them to ProcessMessage.</summary>
private void ProcessMessages()
{
while (PostEvent.WaitOne() && DisposeEvent.WaitOne(10))
{
var message = (Message)null;
while (true)
{
lock (Locker)
{
message = MessageQueue.Count > 0 ?
MessageQueue.Dequeue() :
null;
if (message == null)
{
PostEvent.Reset();
break;
}
}
try
{
ProcessMessage(message);
}
catch
{
}
}
}
}
/// <summary>Represents a message that is passed to an actor.</summary>
protected class Message : IDisposable
{
/// <summary>The actual value of this message.</summary>
public TMessage Value { get; private set; }
/// <summary>The channel used to give a reply to this message.</summary>
public Channel Channel { get; private set; }
/// <summary>Initializes a new instance of Genex.Concurrency.Message class.</summary>
/// <param name="value">The actual value of the message.</param>
public Message(TMessage value)
{
Value = value;
Channel = new Channel();
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
Channel.Dispose();
}
}
/// <summary>Represents a channel used by an actor to reply to a message.</summary>
protected class Channel : IDisposable
{
/// <summary>The value of the reply.</summary>
public TReply Value { get; private set; }
/// <summary>Signifies that the message has been replied to.</summary>
public ManualResetEvent CompleteEvent { get; private set; }
/// <summary>Initializes a new instance of Genex.Concurrency.Channel class.</summary>
public Channel()
{
CompleteEvent = new ManualResetEvent(false);
}
/// <summary>Reply to the message received.</summary>
/// <param name="value">The value of the reply.</param>
public void Reply(TReply value)
{
Value = value;
CompleteEvent.Set();
}
/// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
public void Dispose()
{
((IDisposable)CompleteEvent).Dispose();
}
}
}
Steve Gilham summarized how the compiler actually handles discriminated unions. For your own code, you could consider a simplified version of that. Given the following F#:
type QueueMessage<'a> = ClearMessage | TryDequeueMessage | EnqueueMessage of 'a
Here's one way to emulate it in C#:
public enum MessageType { ClearMessage, TryDequeueMessage, EnqueueMessage }
public abstract class QueueMessage<T>
{
// prevents unwanted subclassing
private QueueMessage() { }
public abstract MessageType MessageType { get; }
/// <summary>
/// Only applies to EnqueueMessages
/// </summary>
public abstract T Item { get; }
public static QueueMessage<T> MakeClearMessage() { return new ClearMessage(); }
public static QueueMessage<T> MakeTryDequeueMessage() { return new TryDequeueMessage(); }
public static QueueMessage<T> MakeEnqueueMessage(T item) { return new EnqueueMessage(item); }
private sealed class ClearMessage : QueueMessage<T>
{
public ClearMessage() { }
public override MessageType MessageType
{
get { return MessageType.ClearMessage; }
}
/// <summary>
/// Not implemented by this subclass
/// </summary>
public override T Item
{
get { throw new NotImplementedException(); }
}
}
private sealed class TryDequeueMessage : QueueMessage<T>
{
public TryDequeueMessage() { }
public override MessageType MessageType
{
get { return MessageType.TryDequeueMessage; }
}
/// <summary>
/// Not implemented by this subclass
/// </summary>
public override T Item
{
get { throw new NotImplementedException(); }
}
}
private sealed class EnqueueMessage : QueueMessage<T>
{
private T item;
public EnqueueMessage(T item) { this.item = item; }
public override MessageType MessageType
{
get { return MessageType.EnqueueMessage; }
}
/// <summary>
/// Gets the item to be enqueued
/// </summary>
public override T Item { get { return item; } }
}
}
Now, in code that is given a QueueMessage, you can switch on the MessageType property in lieu of pattern matching, and make sure that you access the Item property only on EnqueueMessages.
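For example, a consumer might look like this (Enqueue and the surrounding queue handling are hypothetical):
void Handle(QueueMessage<string> msg)
{
    switch (msg.MessageType)
    {
        case MessageType.ClearMessage:
            // clear the queue
            break;
        case MessageType.TryDequeueMessage:
            // try to dequeue an item
            break;
        case MessageType.EnqueueMessage:
            Enqueue(msg.Item); // Item is only safe to read for this case
            break;
    }
}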
EDIT
Here's another alternative, based on Juliet's code. I've tried to streamline things so that it has a more usable interface from C#, though. This is preferable to the previous version in that you can't get a NotImplementedException.
public abstract class QueueMessage<T>
{
// prevents unwanted subclassing
private QueueMessage() { }
public abstract TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase);
public static QueueMessage<T> MakeClearMessage() { return new ClearMessage(); }
public static QueueMessage<T> MakeTryDequeueMessage() { return new TryDequeueMessage(); }
public static QueueMessage<T> MakeEnqueueMessage(T item) { return new EnqueueMessage(item); }
private sealed class ClearMessage : QueueMessage<T>
{
public ClearMessage() { }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return clearCase();
}
}
private sealed class TryDequeueMessage : QueueMessage<T>
{
public TryDequeueMessage() { }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return tryDequeueCase();
}
}
private sealed class EnqueueMessage : QueueMessage<T>
{
private T item;
public EnqueueMessage(T item) { this.item = item; }
public override TReturn Match<TReturn>(Func<TReturn> clearCase, Func<TReturn> tryDequeueCase, Func<T, TReturn> enqueueCase)
{
return enqueueCase(item);
}
}
}
You'd use this code like this:
public class MessageUserTest
{
public void Use()
{
// your code to get a message here...
QueueMessage<string> msg = null;
// emulate pattern matching, but without constructor names
int i =
msg.Match(
clearCase: () => -1,
tryDequeueCase: () => -2,
enqueueCase: s => s.Length);
}
}
In your example code, you implement PostWithAsyncReply in terms of PostWithReply. That isn't ideal, because it means that when you call PostWithAsyncReply and the actor takes a while to handle it, there are actually two threads tied up: the one executing the actor and the one waiting for it to finish. It would be better to have the one thread executing the actor and then calling the callback in the asynchronous case. (Obviously in the synchronous case there's no avoiding the tying up of two threads).
Update:
More on the above: you construct an actor with an argument telling it how many threads to run. For simplicity suppose every actor runs with one thread (actually quite a good situation because actors can then have internal state with no locking on it, as only one thread accesses it directly).
Actor A calls actor B, waiting for a response. In order to handle the request, actor B needs to call actor C. So now A and B's only threads are waiting, and C's is the only one actually giving the CPU any work to do. So much for multi-threading! But this is what you get if you wait for answers all the time.
Okay, you could increase the number of threads you start in each actor. But you'd be starting them so they could sit around doing nothing. A stack uses up a lot of memory, and context switching can be expensive.
So it's better to send messages asynchronously, with a callback mechanism so you can pick up the finished result. The problem with your implementation is that you grab another thread from the thread pool, purely to sit around and wait. So you basically apply the workaround of increasing the number of threads. You allocate a thread to the task of never running.
It would be better to implement PostWithReply in terms of PostWithAsyncReply, i.e. the opposite way round. The asynchronous version is low-level. Building on my delegate-based example (because it involves less typing of code!):
private bool InsertCoinImpl(int value)
{
// only accept dimes/10p/whatever it is in euros
return (value == 10);
}
public void InsertCoin(int value, Action<bool> accepted)
{
Submit(() => accepted(InsertCoinImpl(value)));
}
So the private implementation returns a bool. The public asynchronous method accepts an action that will receive the return value; both the private implementation and the callback action are executed on the same thread.
Hopefully the need to wait synchronously is going to be the minority case. But when you need it, it could be supplied by a helper method, totally general purpose and not tied to any specific actor or message type:
public static T Wait<T>(Action<Action<T>> activity)
{
T result = default(T);
var finished = new EventWaitHandle(false, EventResetMode.AutoReset);
activity(r =>
{
result = r;
finished.Set();
});
finished.WaitOne();
return result;
}
So now in some other actor we can say:
bool accepted = Helpers.Wait<bool>(r => chocMachine.InsertCoin(5, r));
The type argument to Wait may be unnecessary; I haven't tried compiling any of this. But Wait basically magics up a callback for you, so you can pass it to some asynchronous method, and on the outside you just get back whatever was passed to the callback as your return value. Note that the lambda you pass to Wait still executes on the same thread that called Wait.
We now return you to our regular program...
As for the actual problem you asked about, you send a message to an actor to get it to do something. Delegates are helpful here. They let you effectively get the compiler to generate you a class with some data, a constructor that you don't even have to call explicitly and also a method. If you're having to write a bunch of little classes, switch to delegates.
abstract class Actor
{
Queue<Action> _messages = new Queue<Action>();
protected void Submit(Action action)
{
// take out a lock of course
_messages.Enqueue(action);
}
// also a "run" that reads and executes the
// message delegates on background threads
}
Now a specific derived actor follows this pattern:
class ChocolateMachineActor : Actor
{
private void InsertCoinImpl(int value)
{
// whatever...
}
public void InsertCoin(int value)
{
Submit(() => InsertCoinImpl(value));
}
}
So to send a message to the actor, you just call the public methods. The private Impl method does the real work. No need to write a bunch of message classes by hand.
Obviously I've left out the stuff about replying, but that can all be done with more parameters. (See update above).
Union types and pattern matching map pretty directly to the visitor pattern, I've posted about this a few times before:
What task is best done in a functional programming style?
https://stackoverflow.com/questions/1883246/none-pure-functional-code-smells/1884256#1884256
So if you want to pass messages with lots of different types, you're stuck implementing the visitor pattern.
(Warning: untested code ahead, but it should give you an idea of how it's done.)
Let's say we have something like this:
type msg =
| Add of int
| Sub of int
| Query of ReplyChannel<int>
let rec counts = function
| [] -> (0, 0, 0)
| Add(_)::xs -> let (a, b, c) = counts xs in (a + 1, b, c)
| Sub(_)::xs -> let (a, b, c) = counts xs in (a, b + 1, c)
| Query(_)::xs -> let (a, b, c) = counts xs in (a, b, c + 1)
You end up with this bulky C# code:
interface IMsgVisitor<T>
{
T Visit(Add msg);
T Visit(Sub msg);
T Visit(Query msg);
}
abstract class Msg
{
public abstract T Accept<T>(IMsgVisitor<T> visitor);
}
class Add : Msg
{
public readonly int Value;
public Add(int value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
class Sub : Msg
{
public readonly int Value;
public Sub(int value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
class Query : Msg
{
public readonly ReplyChannel<int> Value;
public Query(ReplyChannel<int> value) { this.Value = value; }
public override T Accept<T>(IMsgVisitor<T> visitor) { return visitor.Visit(this); }
}
Now whenever you want to do something with the message, you need to implement a visitor:
class MsgTypeCounter : IMsgVisitor<MsgTypeCounter>
{
public readonly Tuple<int, int, int> State;
public MsgTypeCounter(Tuple<int, int, int> state) { this.State = state; }
public MsgTypeCounter Visit(Add msg)
{
Console.WriteLine("got Add of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(1 + State.Item1, State.Item2, State.Item3));
}
public MsgTypeCounter Visit(Sub msg)
{
Console.WriteLine("got Sub of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(State.Item1, 1 + State.Item2, State.Item3));
}
public MsgTypeCounter Visit(Query msg)
{
Console.WriteLine("got Query of " + msg.Value);
return new MsgTypeCounter(Tuple.Create(State.Item1, State.Item2, 1 + State.Item3));
}
}
Then finally you can use it like this:
var msgs = new Msg[] { new Add(1), new Add(3), new Sub(4), new Query(null) };
var counts = msgs.Aggregate(new MsgTypeCounter(Tuple.Create(0, 0, 0)),
(acc, x) => x.Accept(acc)).State;
Yes, it's as obtuse as it seems, but that's how you pass multiple kinds of messages to a class in a type-safe manner, and that's also why we don't implement unions in C# ;)
A long shot, but anyway..
I am assuming that discriminated union is F#'s name for an algebraic data type (ADT), which means a value of the type can be one of several distinct cases.
If there are two cases, you could try putting it in a simple generic type with two type parameters:
public struct DiscriminatedUnion<T1,T2>
{
public DiscriminatedUnion(T1 t1) { value = t1; }
public DiscriminatedUnion(T2 t2) { value = t2; }
public static implicit operator T1(DiscriminatedUnion<T1,T2> du) {return (T1)du.value; }
public static implicit operator T2(DiscriminatedUnion<T1,T2> du) {return (T2)du.value; }
object value;
}
To make it work for 3 or more, we need to replicate this class a number of times.
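For example, a hypothetical three-parameter version would follow exactly the same pattern:
public struct DiscriminatedUnion<T1,T2,T3>
{
    public DiscriminatedUnion(T1 t1) { value = t1; }
    public DiscriminatedUnion(T2 t2) { value = t2; }
    public DiscriminatedUnion(T3 t3) { value = t3; }
    public static implicit operator T1(DiscriminatedUnion<T1,T2,T3> du) { return (T1)du.value; }
    public static implicit operator T2(DiscriminatedUnion<T1,T2,T3> du) { return (T2)du.value; }
    public static implicit operator T3(DiscriminatedUnion<T1,T2,T3> du) { return (T3)du.value; }
    object value;
}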
Does anyone have a solution for function overloading depending on the runtime type?
If you have this
type internal Either<'a, 'b> =
| Left of 'a
| Right of 'b
in F#, then the C# equivalent of what the compiler generates for the class Either<'a, 'b> has inner types like
internal class _Left : Either<a, b>
{
internal readonly a left1;
internal _Left(a left1);
}
each with a tag, a getter and a factory method
internal const int tag_Left = 0;
internal static Either<a, b> Left(a Left1);
internal a Left1 { get; }
plus a discriminator
internal int Tag { get; }
and a raft of methods to implement interfaces IStructuralEquatable, IComparable, IStructuralComparable
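So from C#, consuming a value of that compiled type looks roughly like this (using exactly the generated members listed above; whether you can call them at all depends on their internal accessibility):
Either<int, string> value = Either<int, string>.Left(42);
if (value.Tag == Either<int, string>.tag_Left)
{
    int x = value.Left1;  // only meaningful for the Left case
    Console.WriteLine("Left: " + x);
}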
There is a compile-time checked discriminated union type at Discriminated union in C#
private class ClearMessage
{
public static readonly ClearMessage Instance = new ClearMessage();
private ClearMessage() { }
}
private class TryDequeueMessage
{
public static readonly TryDequeueMessage Instance = new TryDequeueMessage();
private TryDequeueMessage() { }
}
private class EnqueueMessage
{
public TValue Item { get; private set; }
private EnqueueMessage(TValue item) { Item = item; }
}
Using the discriminated union could be done as follows:
// New file
// Create an alias
using Message = Union<ClearMessage, TryDequeueMessage, EnqueueMessage>;
int ProcessMessage(Message msg)
{
return msg.Match(
clear => 1,
dequeue => 2,
enqueue => 3);
}
I have an adapted implementation of a simple (no upgrades or timeouts) ReaderWriterLock for Silverlight, and I was wondering whether anyone with the right expertise can validate whether it is good or bad by design. To me it looks pretty alright, and it works as advertised, but I have limited experience with multi-threading code as such.
public sealed class ReaderWriterLock
{
private readonly object syncRoot = new object(); // Internal lock.
private int i = 0; // 0 or greater means readers can pass; -1 is active writer.
private int readWaiters = 0; // Readers waiting for writer to exit.
private int writeWaiters = 0; // Writers waiting for writer lock.
private ConditionVariable conditionVar; // Condition variable.
public ReaderWriterLock()
{
conditionVar = new ConditionVariable(syncRoot);
}
/// <summary>
/// Gets a value indicating if a reader lock is held.
/// </summary>
public bool IsReaderLockHeld
{
get
{
lock ( syncRoot )
{
if ( i > 0 )
return true;
return false;
}
}
}
/// <summary>
/// Gets a value indicating if the writer lock is held.
/// </summary>
public bool IsWriterLockHeld
{
get
{
lock ( syncRoot )
{
if ( i < 0 )
return true;
return false;
}
}
}
/// <summary>
/// Acquires the writer lock.
/// </summary>
public void AcquireWriterLock()
{
lock ( syncRoot )
{
writeWaiters++;
while ( i != 0 )
conditionVar.Wait(); // Wait until existing writer frees the lock.
writeWaiters--;
i = -1; // Thread has writer lock.
}
}
/// <summary>
/// Acquires a reader lock.
/// </summary>
public void AcquireReaderLock()
{
lock ( syncRoot )
{
readWaiters++;
// Defer to a writer (one time only) if one is waiting to prevent writer starvation.
if ( writeWaiters > 0 )
{
conditionVar.Pulse();
Monitor.Wait(syncRoot);
}
while ( i < 0 )
Monitor.Wait(syncRoot);
readWaiters--;
i++;
}
}
/// <summary>
/// Releases the writer lock.
/// </summary>
public void ReleaseWriterLock()
{
bool doPulse = false;
lock ( syncRoot )
{
i = 0;
// Decide if we pulse a writer or readers.
if ( readWaiters > 0 )
{
Monitor.PulseAll(syncRoot); // If multiple readers waiting, pulse them all.
}
else
{
doPulse = true;
}
}
if ( doPulse )
conditionVar.Pulse(); // Pulse one writer if one waiting.
}
/// <summary>
/// Releases a reader lock.
/// </summary>
public void ReleaseReaderLock()
{
bool doPulse = false;
lock ( syncRoot )
{
i--;
if ( i == 0 )
doPulse = true;
}
if ( doPulse )
conditionVar.Pulse(); // Pulse one writer if one waiting.
}
/// <summary>
/// Condition Variable (CV) class.
/// </summary>
public class ConditionVariable
{
private readonly object syncLock = new object(); // Internal lock.
private readonly object m; // The lock associated with this CV.
public ConditionVariable(object m)
{
lock (syncLock)
{
this.m = m;
}
}
public void Wait()
{
bool enter = false;
try
{
lock (syncLock)
{
Monitor.Exit(m);
enter = true;
Monitor.Wait(syncLock);
}
}
finally
{
if (enter)
Monitor.Enter(m);
}
}
public void Pulse()
{
lock (syncLock)
{
Monitor.Pulse(syncLock);
}
}
public void PulseAll()
{
lock (syncLock)
{
Monitor.PulseAll(syncLock);
}
}
}
}
If it is good, it might be helpful to others too as Silverlight currently lacks a reader-writer type of lock. Thanks.
I go into depth explaining Vance Morrison's ReaderWriterLock (which became ReaderWriterLockSlim in .NET 3.5) on my blog (down to the x86 level). This might be helpful in your design, especially for understanding how things really work.
Both of your IsReaderLockHeld and IsWriterLockHeld properties are flawed at a conceptual level. While it is possible to determine that at a given point in time a particular lock is or is not held, there is absolutely nothing you can safely do with this information unless you continue to hold the lock (which is not the case in your code).
These members would be more accurately named WasReadLockHeldInThePast and WasWriterLockHeldInThePast. Once you rename them to a more accurate representation of what they do, it becomes clearer that they are not very useful.
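To make that concrete, a check-then-act sequence like the following is inherently racy (DoSomethingThatAssumesNoWriter is just a placeholder):
if (!rwLock.IsWriterLockHeld)
{
    // Another thread is free to acquire the writer lock right here,
    // so the condition we just tested may no longer hold.
    DoSomethingThatAssumesNoWriter();
}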
This class seems simpler to me and provides the same functionality. It may be slightly less performant, since it always calls PulseAll(), but the logic is much simpler to understand, and I doubt the performance hit is that great.
public sealed class ReaderWriterLock
{
private readonly object internalLock = new object();
private int activeReaders = 0;
private bool activeWriter = false;
public void AcquireReaderLock()
{
lock (internalLock)
{
while (activeWriter)
Monitor.Wait(internalLock);
++activeReaders;
}
}
public void ReleaseReaderLock()
{
lock (internalLock)
{
// if activeReaders <= 0 do some error handling
--activeReaders;
Monitor.PulseAll(internalLock);
}
}
public void AcquireWriterLock()
{
lock (internalLock)
{
// first wait for any writers to clear
// This assumes writers have a higher priority than readers
// as it will force the readers to wait until all writers are done.
// you can change the conditionals in here to change that behavior.
while (activeWriter)
Monitor.Wait(internalLock);
// There are no more writers, set this to true to block further readers from acquiring the lock
activeWriter = true;
// Now wait till all readers have completed.
while (activeReaders > 0)
Monitor.Wait(internalLock);
// The writer now has the lock
}
}
public void ReleaseWriterLock()
{
lock (internalLock)
{
// if activeWriter != true handle the error
activeWriter = false;
Monitor.PulseAll(internalLock);
}
}
}