Performance and threading improvements in ASP.NET MVC (C#)

I'm totally new to threading and want to make use of a 6-core processor to gain some improvements.
I'm trying to find some quick wins. My little business is growing and I've noticed some performance hits (a couple of customers have advised me) when completing a few sections on my service. Some of it, I'm guessing, may be down to the need to send emails and wait for the third party to respond. Is there an easy way to pass this off onto another thread while not breaking the session/service?
I have an action that runs when an appointment is "Completed":
switch (appointment.State)
{
case DomainObjects.AppointmentState.Completed:
_clientService.SendMessageToClient(clientId, "Email title", EmailMessage(appointment, "AppointmentThankYou"), appointment.Id, userId);
break;
}
Is this better?
case DomainObjects.AppointmentState.Completed:
var emailThread = new Thread(() => _clientService.SendMessageToClient(clientId,"Email Subject",
EmailMessage(appointment, "AppointmentThankYou"),
appointment.Id, userId))
{
IsBackground = true
};
emailThread.Start();
break;
Constructive Feedback welcomed.

To be honest, I believe the approach outlined above needs a considerable amount of work before releasing to production. Imagine the situation where 1000 users click on your site: you now have 1000 background tasks all trying to send messages at the same time. This will probably bottleneck your system on disk and network IO.
While there are a number of ways to approach this problem, one of the most common is to use a producer-consumer queue, preferably built on a thread-safe collection such as ConcurrentQueue, with the number of long-running operations (e.g. email sends) in flight at any one time controlled by a synchronization mechanism such as SemaphoreSlim.
I've created a very simple application to demonstrate this approach, shown below. The key classes are:
The MessagingProcessor class, which maintains the queue and controls access both for adding items (the AddToQueue method) and for sending messages (ReadFromQueue). The class implements the Singleton pattern to ensure only one instance is ever present in the application (you don't want multiple queues). ReadFromQueue also uses a timer (set to 2 seconds) that controls how often a task is spawned to send messages.
The SingletonBase class is just an abstract class for implementing the Singleton pattern.
The MessageSender class does the actual work of sending the message.
The CreateMessagesForTest class simply simulates creating test messages for the purposes of this answer.
Hope this helps
using System;
using System.Collections.Concurrent;
using System.Globalization;
using System.Reactive.Linq;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApplication9
{
internal class Program
{
private static void Main(string[] args)
{
MessagingProcessor.Instance.ReadFromQueue(); // starts the message sending tasks
var createMessages = new CreateMessagesForTest();
createMessages.CreateTestMessages(); // creates sample test messages for processing
Console.ReadLine();
}
}
/// <summary>
/// Simply creates test messages every second for sending
/// </summary>
public class CreateMessagesForTest
{
public void CreateTestMessages()
{
IObservable<long> observable = Observable.Interval(TimeSpan.FromSeconds(1));
// Token for cancellation
var source = new CancellationTokenSource();
// Create task to execute.
Action action = CreateMessage;
// Subscribe the observable to the task on execution.
observable.Subscribe(x =>
{
var task = new Task(action);
task.Start();
}, source.Token);
}
private static void CreateMessage()
{
var message = new Message {EMailAddress = "aa@aa.com", MessageBody = "abcdefg"};
MessagingProcessor.Instance.AddToQueue(message);
}
}
/// <summary>
/// The contents of the email to send
/// </summary>
public class Message
{
public string EMailAddress { get; set; }
public string MessageBody { get; set; }
}
/// <summary>
/// Handles all aspects of processing the messages, only one instance of this class is allowed
/// at any time
/// </summary>
public class MessagingProcessor : SingletonBase<MessagingProcessor>
{
private MessagingProcessor()
{
}
private ConcurrentQueue<Message> _messagesQueue = new ConcurrentQueue<Message>();
// create a semaphore to limit the number of threads which can send an email at any given time
// In this case only allow 2 to be processed at any given time
private static readonly SemaphoreSlim Semaphore = new SemaphoreSlim(2, 2);
public void AddToQueue(Message message)
{
_messagesQueue.Enqueue(message);
}
/// <summary>
/// Used to start the process for sending emails
/// </summary>
public void ReadFromQueue()
{
IObservable<long> observable = Observable.Interval(TimeSpan.FromSeconds(2));
// Token for cancellation
var source = new CancellationTokenSource();
// Create task to execute.
Action action = SendMessages;
// Subscribe the observable to the task on execution.
observable.Subscribe(x =>
{
var task = new Task(action);
task.Start();
}, source.Token);
}
/// <summary>
/// Handles dequeuing and synchronising access to the queue
/// </summary>
public void SendMessages()
{
try
{
Semaphore.Wait();
Message message;
while (_messagesQueue.TryDequeue(out message)) // if we have a message to send
{
var messageSender = new MessageSender();
messageSender.SendMessage(message);
}
}
finally
{
Semaphore.Release();
}
}
}
/// <summary>
/// Sends the emails
/// </summary>
public class MessageSender
{
public void SendMessage(Message message)
{
// do some long running task
}
}
/// <summary>
/// Implements singleton pattern on all classes which derive from it
/// </summary>
/// <typeparam name="T">Derived class</typeparam>
public abstract class SingletonBase<T> where T : class
{
public static T Instance
{
get { return SingletonFactory.Instance; }
}
/// <summary>
/// The singleton class factory to create the singleton instance.
/// </summary>
private class SingletonFactory
{
static SingletonFactory()
{
}
private SingletonFactory()
{
}
internal static readonly T Instance = GetInstance();
private static T GetInstance()
{
var theType = typeof (T);
T inst;
try
{
inst = (T) theType
.InvokeMember(theType.Name,
BindingFlags.CreateInstance | BindingFlags.Instance
| BindingFlags.NonPublic,
null, null, null,
CultureInfo.InvariantCulture);
}
catch (MissingMethodException ex)
{
var exception = new TypeLoadException(string.Format(
CultureInfo.CurrentCulture,
"The type '{0}' must have a private constructor to " +
"be used in the Singleton pattern.", theType.FullName)
, ex);
//LogManager.LogException(LogManager.EventIdInternal, exception, "error in instantiating the singleton");
throw exception;
}
return inst;
}
}
}
}
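To tie this back to the original action, the Completed case would then just enqueue a message rather than spawning a thread. A rough sketch only; the email-address lookup and message body below are placeholders for whatever SendMessageToClient currently resolves:
case DomainObjects.AppointmentState.Completed:
    // Enqueue the work instead of starting a new Thread per request;
    // the single MessagingProcessor instance drains the queue at a controlled rate.
    MessagingProcessor.Instance.AddToQueue(new Message
    {
        EMailAddress = clientEmailAddress, // placeholder: resolved from clientId
        MessageBody = EmailMessage(appointment, "AppointmentThankYou")
    });
    break;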

Related

How to implement a sorted buffer?

I need to traverse a collection of disjoint folders; each folder is associated with a visited time configured somewhere in the folder.
I then sort the folders and process the one with the earliest visited time first. Note that the processing is generally slower than the traversing.
My code targets .NET Framework 4.8.1; currently my implementation is as follows:
public class BySeparateThread
{
ConcurrentDictionary<string, DateTime?> _dict = new ConcurrentDictionary<string, DateTime?>();
private object _lock = new object();
/// <summary>
/// this will be called by producer thread;
/// </summary>
/// <param name="address"></param>
/// <param name="time"></param>
public void add(string address,DateTime? time) {
_dict.TryAdd(address, time);
}
/// <summary>
/// called by subscriber thread;
/// </summary>
/// <returns></returns>
public string? next() {
lock (_lock) {
var r = _dict.FirstOrDefault();
//return sortedList.FirstOrDefault().Value;
if (r.Key is null)
{
return r.Key;
}
if (r.Value is null)
{
_dict.TryRemove(r.Key, out var _);
return r.Key;
}
var key = r.Key;
foreach (var item in _dict.Skip(1) )
{
if (item.Value is null)
{
_dict.TryRemove(item.Key, out var _);
return item.Key;
}
if (item.Value< r.Value)
{
r=item;
}
}
_dict.TryRemove(key, out var _);
return key;
}
}
/// <summary>
/// this will be set to false by the producer thread;
/// </summary>
public bool _notComplete = true;
/// <summary>
/// shared configuration for subscribers;
/// </summary>
fs.addresses_.disjoint.deV_._bak.Io io; //.io_._CfgX.Create(cancel, git)
/// <summary>
/// run this in a separate thread other than <see cref="add(string, DateTime?)"/>
/// </summary>
/// <param name="sln"></param>
/// <returns></returns>
public async Task _asyn_ofAddress(string sln)
{
while (_notComplete)
{
var f = next();
if (f is null )
{
await Task.Delay(30*1000);
//await Task.Yield();
continue;
}
/// degree of concurrency is controlled by a semaphore; for instance, at most 4 are tackled:
new dev.srcs.each.sln_.delvable.Bak_srcsInAddresses(io)._startTask_ofAddress(sln);
}
}
}
For the above, I'm concerned about the while (_notComplete) part, as it looks like it would spin through many iterations doing nothing. I think there should be a better way to remove the while loop by utilizing the fact that the collection can signal whether or not it's empty at various stages, such as when we add.
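For illustration, a minimal sketch of that idea (not the code I actually run; completion and cancellation handling omitted): pair the dictionary with a SemaphoreSlim that add releases and the consumer awaits, so the consumer only wakes when something has been added:
// Sketch only: the SemaphoreSlim counts queued items, so the consumer blocks
// on WaitAsync instead of polling with Task.Delay.
private readonly SemaphoreSlim _itemsAvailable = new SemaphoreSlim(0);

public void add(string address, DateTime? time)
{
    _dict.TryAdd(address, time);
    _itemsAvailable.Release(); // signal the consumer that one more item exists
}

public async Task _asyn_ofAddress(string sln)
{
    while (_notComplete)
    {
        await _itemsAvailable.WaitAsync(); // wakes only after add() has signalled
        var f = next();
        if (f is null) continue; // defensive: the item may already have been taken
        new dev.srcs.each.sln_.delvable.Bak_srcsInAddresses(io)._startTask_ofAddress(sln);
    }
}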
There is probably a better implementation based on some mature framework, such as the ones I've been considering these days, but I often get stuck on the implementation details:
BlockingCollection
for this one, I don't know how to keep the collection sorted dynamically while the producer and subscriber are both running;
Channel
again, I could not come up with one fitting my need after reading its examples;
Pipeline
I have not fully understood it;
Rx
I tried to implement an observable and an observer. It only gives me a high-level framework; when I get into the details, I end up with what I'm currently doing and begin to wonder whether I need Rx here at all;
Dataflow
should I implement my own BufferBlock or ActionBlock? It seems the built-in BufferBlock cannot be customized to sort items before releasing them to the next block.
"Sorting buffered Observables" seems similar to my problem, but it ends with a solution like the one I currently have and am not satisfied with, as stated above.
Could someone give me some sample code? Please make the code as concrete as you can; as you can see, I have researched some general ideas/paths, and what finally stops me short is the details, which are often glossed over in the docs.
I just found one solution which is better than my current one. I believe there are even better ones, so please do post your answers if you find some; my current one is just what I can hack together with what I know so far.
I found Prioritized queues in Task Parallel Library, and I wrote a similar one for my case:
using System;
using System.Collections;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Reactive.Subjects;
using System.Threading;
using System.Threading.Tasks;
namespace nilnul.dev.srcs.every.slns._bak
{
public class BySortedSet : IProducerConsumerCollection<(string, DateTime)>
{
private class _Comparer : IComparer<(string, DateTime)>
{
public int Compare((string, DateTime) first, (string, DateTime) second)
{
var returnValue = first.Item2.CompareTo(second.Item2);
if (returnValue == 0)
returnValue = first.Item1.CompareTo(second.Item1);
return returnValue;
}
static public _Comparer Singleton
{
get
{
return nilnul._obj.typ_.nilable_.unprimable_.Singleton<_Comparer>.Instance;// just some magic to get an instance
}
}
}
SortedSet<(string, DateTime)> _dict = new SortedSet<(string, DateTime)>(
_Comparer.Singleton
);
private object _lock=new object();
public int Count
{
get
{
lock(_lock){
return _dict.Count;
}
}
}
public object SyncRoot => _lock;
public bool IsSynchronized => true;
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
//throw new NotImplementedException();
}
public void CopyTo((string, DateTime)[] array, int index)
{
lock (_lock)
{
foreach (var item in _dict)
{
array[index++] = item;
}
}
}
public void CopyTo(Array array, int index)
{
lock (_lock)
{
foreach (var item in _dict)
{
array.SetValue(item, index++);
}
}
}
public bool TryAdd((string, DateTime) item)
{
lock (_lock)
{
return _dict.Add(item);
}
}
public bool TryTake(out (string, DateTime) item)
{
lock (_lock)
{
item = _dict.Min;
if (item==default)
{
return false;
}
return _dict.Remove(item);
}
}
public (string, DateTime)[] ToArray()
{
lock (_lock)
{
return this._dict.ToArray();
}
}
public IEnumerator<(string, DateTime)> GetEnumerator()
{
return ToArray().AsEnumerable().GetEnumerator();
}
/// <summary>
/// </summary>
/// <returns></returns>
public BlockingCollection<(string, DateTime)> asBlockingCollection() {
return new BlockingCollection<(string, DateTime)>(
this
);
}
}
}
Then I can use that like:
static public void ExampleUse(CancellationToken cancellationToken) {
var s = new BySortedSet().asBlockingCollection();
/// traversal thread:
s.Add(("", DateTime.MinValue));
//...
s.CompleteAdding();
/// tackler thread:
///
foreach (var item in s.GetConsumingEnumerable(cancellationToken))
{
/// process the item;
/// todo: degree of parallelism is controlled by the tackler; is there a better way, like in Dataflow or Rx or something else? (see the sketch after this example)
}
}
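For the degree-of-parallelism note in the comment above, one option (a sketch only; it assumes the System.Threading.Tasks.Dataflow package, and Tackle stands in for the real per-item work) is to drain the blocking collection into an ActionBlock with a bounded MaxDegreeOfParallelism:
// Sketch: bound the number of items tackled concurrently to 4.
var tackler = new ActionBlock<(string, DateTime)>(
    item => Tackle(item), // Tackle is a placeholder for the real per-item work
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cancellationToken
    });

foreach (var item in s.GetConsumingEnumerable(cancellationToken))
{
    tackler.Post(item);
}
tackler.Complete();
tackler.Completion.Wait(cancellationToken);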
Thanks!

Mocking Redlock.CreateAsync does not return mocked object

I am trying to mock Redlock.
I have the test below:
using Moq;
using RedLockNet;
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;
namespace RedLock.Tests
{
public class RedLockTests
{
[Fact]
public async Task TestMockingOfRedlock()
{
var redLockFactoryMock = new Mock<IDistributedLockFactory>();
var mock = new MockRedlock();
redLockFactoryMock.Setup(x => x.CreateLockAsync(It.IsAny<string>(),
It.IsAny<TimeSpan>(), It.IsAny<TimeSpan>(),
It.IsAny<TimeSpan>(), It.IsAny<CancellationToken>()))
.ReturnsAsync(mock);
var sut = new TestRedlockHandler(redLockFactoryMock.Object);
var data = new MyEventData();
await sut.Handle(data);
}
}
}
MockRedlock is a simple mock class that implements IRedLock
public class MockRedlock: IRedLock
{
public void Dispose()
{
}
public string Resource { get; }
public string LockId { get; }
public bool IsAcquired => true;
public RedLockStatus Status => RedLockStatus.Acquired;
public RedLockInstanceSummary InstanceSummary => new RedLockInstanceSummary();
public int ExtendCount { get; }
}
await sut.Handle(data); is a call into a separate event class.
I have shown this below. It has been simplified, but the null reference error can be reproduced using the code below and the test above.
public class MyEventData
{
public string Id { get; set; }
public MyEventData()
{
Id = Guid.NewGuid().ToString();
}
}
public class TestRedlockHandler
{
private IDistributedLockFactory _redLockFactory;
public TestRedlockHandler(IDistributedLockFactory redLockFactory)
{
_redLockFactory = redLockFactory;
}
public async Task Handle(MyEventData data)
{
var lockexpiry = TimeSpan.FromMinutes(2.5);
var waitspan = TimeSpan.FromMinutes(2);
var retryspan = TimeSpan.FromSeconds(20);
using (var redlock =
await _redLockFactory.CreateLockAsync(data.Id.ToString(), lockexpiry, waitspan, retryspan, null))
{
if (!redlock.IsAcquired)
{
string errorMessage =
$"Did not acquire Lock on Lead {data.Id.ToString()}. Aborting.\n " +
$"Acquired{redlock.InstanceSummary.Acquired} \n " +
$"Error{redlock.InstanceSummary.Error} \n" +
$"Conflicted {redlock.InstanceSummary.Conflicted} \n" +
$"Status {redlock.Status}";
throw new Exception(errorMessage);
}
}
}
}
When I try to call this I expect my object to be returned, but instead I get null.
On the line if (!redlock.IsAcquired), redlock is null.
What is missing?
The definition of CreateLockAsync:
/// <summary>
/// Gets a RedLock using the factory's set of redis endpoints. You should check the IsAcquired property before performing actions.
/// Blocks and retries up to the specified time limits.
/// </summary>
/// <param name="resource">The resource string to lock on. Only one RedLock should be acquired for any given resource at once.</param>
/// <param name="expiryTime">How long the lock should be held for.
/// RedLocks will automatically extend if the process that created the RedLock is still alive and the RedLock hasn't been disposed.</param>
/// <param name="waitTime">How long to block for until a lock can be acquired.</param>
/// <param name="retryTime">How long to wait between retries when trying to acquire a lock.</param>
/// <param name="cancellationToken">CancellationToken to abort waiting for blocking lock.</param>
/// <returns>A RedLock object.</returns>
Task<IRedLock> CreateLockAsync(string resource, TimeSpan expiryTime, TimeSpan waitTime, TimeSpan retryTime, CancellationToken? cancellationToken = null);
requires a nullable CancellationToken
CancellationToken? cancellationToken = null
But the setup of the mock uses
It.IsAny<CancellationToken>() //NOTE CancellationToken instead of CancellationToken?
Because the setup expects the non-nullable struct, while the actual invocation passes the nullable CancellationToken? (even though its value is null), the setup does not match the call that is actually made, and the mock returns null by default.
Once the correct type is used, the factory returns the desired mock:
//...
redLockFactoryMock
.Setup(x => x.CreateLockAsync(It.IsAny<string>(),
It.IsAny<TimeSpan>(), It.IsAny<TimeSpan>(),
It.IsAny<TimeSpan>(), It.IsAny<CancellationToken?>()))
.ReturnsAsync(mock);
//...

How to wait until item goes through pipeline?

So, I'm trying to wrap my head around Microsoft's Dataflow library. I've built a very simple pipeline consisting of just two blocks:
var start = new TransformBlock<Foo, Bar>(...);
var end = new ActionBlock<Bar>(...);
start.LinkTo(end);
Now I can asynchronously process Foo instances by calling:
start.SendAsync(new Foo());
What I do not understand is how to do the processing synchronously, when needed. I thought that waiting on SendAsync would be enough:
start.SendAsync(new Foo()).Wait();
But apparently it returns as soon as the item is accepted by the first block in the pipeline, not when the item has been fully processed. So is there a way to wait until a given item has been processed by the last (end) block, apart from passing a WaitHandle through the entire pipeline?
In short, that's not supported out of the box in Dataflow. Essentially what you need to do is tag the data so you can retrieve it when processing is done. I've written up a way to do this that lets the consumer await a Job as it gets processed by the pipeline. The only concession to pipeline design is that each block takes a KeyValuePair<Guid, T>. This is the basic JobManager and the post I wrote about it. Note the code in the post is a bit dated and needs some updates, but it should get you in the right direction.
namespace ConcurrentFlows.DataflowJobs {
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
/// <summary>
/// A generic interface defining that:
/// for a specified input type => an awaitable result is produced.
/// </summary>
/// <typeparam name="TInput">The type of data to process.</typeparam>
/// <typeparam name="TOutput">The type of data the consumer expects back.</typeparam>
public interface IJobManager<TInput, TOutput> {
Task<TOutput> SubmitRequest(TInput data);
}
/// <summary>
/// A TPL-Dataflow based job manager.
/// </summary>
/// <typeparam name="TInput">The type of data to process.</typeparam>
/// <typeparam name="TOutput">The type of data the consumer expects back.</typeparam>
public class DataflowJobManager<TInput, TOutput> : IJobManager<TInput, TOutput> {
/// <summary>
/// It is anticipated that jobHandler is an injected
/// singleton instance of a Dataflow based 'calculator', though this implementation
/// does not depend on it being a singleton.
/// </summary>
/// <param name="jobHandler">A singleton Dataflow block through which all jobs are processed.</param>
public DataflowJobManager(IPropagatorBlock<KeyValuePair<Guid, TInput>, KeyValuePair<Guid, TOutput>> jobHandler) {
if (jobHandler == null) { throw new ArgumentException("Argument cannot be null.", "jobHandler"); }
this.JobHandler = jobHandler;
if (!alreadyLinked) {
JobHandler.LinkTo(ResultHandler, new DataflowLinkOptions() { PropagateCompletion = true });
alreadyLinked = true;
}
}
private static bool alreadyLinked = false;
/// <summary>
/// Submits the request to the JobHandler and asynchronously awaits the result.
/// </summary>
/// <param name="data">The input data to be processd.</param>
/// <returns></returns>
public async Task<TOutput> SubmitRequest(TInput data) {
var taggedData = TagInputData(data);
var job = CreateJob(taggedData);
Jobs.TryAdd(job.Key, job.Value);
await JobHandler.SendAsync(taggedData);
return await job.Value.Task;
}
private static ConcurrentDictionary<Guid, TaskCompletionSource<TOutput>> Jobs {
get;
} = new ConcurrentDictionary<Guid, TaskCompletionSource<TOutput>>();
private static ExecutionDataflowBlockOptions Options {
get;
} = GetResultHandlerOptions();
private static ITargetBlock<KeyValuePair<Guid, TOutput>> ResultHandler {
get;
} = CreateReplyHandler(Options);
private IPropagatorBlock<KeyValuePair<Guid, TInput>, KeyValuePair<Guid, TOutput>> JobHandler {
get;
}
private KeyValuePair<Guid, TInput> TagInputData(TInput data) {
var id = Guid.NewGuid();
return new KeyValuePair<Guid, TInput>(id, data);
}
private KeyValuePair<Guid, TaskCompletionSource<TOutput>> CreateJob(KeyValuePair<Guid, TInput> taggedData) {
var id = taggedData.Key;
var jobCompletionSource = new TaskCompletionSource<TOutput>();
return new KeyValuePair<Guid, TaskCompletionSource<TOutput>>(id, jobCompletionSource);
}
private static ExecutionDataflowBlockOptions GetResultHandlerOptions() {
return new ExecutionDataflowBlockOptions() {
MaxDegreeOfParallelism = Environment.ProcessorCount,
BoundedCapacity = 1000
};
}
private static ITargetBlock<KeyValuePair<Guid, TOutput>> CreateReplyHandler(ExecutionDataflowBlockOptions options) {
return new ActionBlock<KeyValuePair<Guid, TOutput>>((result) => {
ReceiveOutput(result);
}, options);
}
private static void ReceiveOutput(KeyValuePair<Guid, TOutput> result) {
var jobId = result.Key;
TaskCompletionSource<TOutput> jobCompletionSource;
if (!Jobs.TryRemove(jobId, out jobCompletionSource)) {
throw new InvalidOperationException($"The jobId: {jobId} was not found.");
}
var resultValue = result.Value;
jobCompletionSource.SetResult(resultValue);
}
}
}
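For completeness, a minimal usage sketch of the job manager above (Foo, Bar and ProcessFoo are placeholders, not part of the original post):
// Usage sketch: the head block works on tagged items and must preserve the Guid tag.
var jobHandler = new TransformBlock<KeyValuePair<Guid, Foo>, KeyValuePair<Guid, Bar>>(
    pair => new KeyValuePair<Guid, Bar>(pair.Key, ProcessFoo(pair.Value))); // ProcessFoo is hypothetical

var jobManager = new DataflowJobManager<Foo, Bar>(jobHandler);

// The caller awaits only its own item's result.
Bar result = await jobManager.SubmitRequest(new Foo());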
I ended up using the following pipeline:
var start = new TransformBlock<FooBar, FooBar>(...);
var end = new ActionBlock<FooBar>(item => item.Complete());
start.LinkTo(end);
var input = new FooBar {Input = new Foo()};
start.SendAsync(input);
input.Task.Wait();
Where
class FooBar
{
public Foo Input { get; set; }
public Bar Result { get; set; }
public Task<Bar> Task { get { return _taskSource.Task; } }
public void Complete()
{
_taskSource.SetResult(Result);
}
private TaskCompletionSource<Bar> _taskSource = new TaskCompletionSource<Bar>();
}
Less than ideal, but it works.
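If the caller is itself async, the same wrapper can be awaited instead of blocked on (a usage sketch):
var input = new FooBar { Input = new Foo() };
await start.SendAsync(input);
Bar result = await input.Task; // completes when the end block calls Complete()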

What's the correct way to construct my message handlers so that they can be moved out of the appHost?

Given the following code for my RabbitMQ Request and Response messages:
public class AppHost : ServiceStackHost
{
public AppHost()
: base("LO.Leads.Processor", typeof(LeadService).Assembly) { }
public override void Configure(Container container)
{
//DataAccess
//RabbitMQ
container.Register<IMessageService>(c => new RabbitMqServer("cdev-9010.example.com", "test", "test")
{
AutoReconnect = true,
DisablePriorityQueues = true,
});
var mqServer = container.Resolve<IMessageService>();
var messageHandlers = new MessageHandlers(); // Is there a better way than newing up an instance?
mqServer.RegisterHandler<LeadInformation>(messageHandlers.OnProcessLeadInformation, messageHandlers.OnExceptionLeadInformation);
mqServer.Start();
}
}
public class MessageHandlers
{
readonly ILog _log = LogManager.GetLogger(typeof(MessageHandlers));
public object OnProcessLeadInformation(IMessage<LeadInformation> request)
{
_log.DebugFormat("Request message received {0}", request.Id);
try
{
// Log to the database
// Run rules against lead
// Log response to database
// return response
}
catch (Exception exception)
{
_log.Error(request, exception);
}
return new LeadInformationResponse();
}
public void OnExceptionLeadInformation(IMessage<LeadInformation> request, Exception exception)
{
_log.Error(request, exception);
}
}
Most of the ServiceStack documentation shows examples where an anonymous method is used in-line, but that quickly bloats the apphost file, and I'd like to move this code out closer to the service interface project. Is there any way to define the delegate and not have to instantiate the "MessageHandlers" class?
Updated May 29th, 2015
Having had a few months to reflect on this question I wanted to add that I never implemented the changes to break out the method call from anonymous to delegate.
But now it's time to commit, and to that end I have come up with the 'template' class below, which matches the signature of the RegisterHandler method.
public static class HelloHandler
{
private static ILog _log = LogManager.GetLogger("logger");
public static Func<IMessage<ServiceModel.Hello>, object> ProcessMessageFn
{
get { return OnProcessMessage; }
}
public static Action<IMessageHandler, IMessage<ServiceModel.Hello>, Exception> ProcessExceptionFn
{
get { return OnProcessException; }
}
/// <summary>
///
/// </summary>
/// <param name="message"></param>
/// <returns></returns>
private static object OnProcessMessage(IMessage<ServiceModel.Hello> message)
{
return message;
}
/// <summary>
///
/// </summary>
/// <param name="handler"></param>
/// <param name="message"></param>
/// <param name="exception"></param>
private static void OnProcessException(IMessageHandler handler, IMessage<ServiceModel.Hello> message, Exception exception)
{
/*
public interface IMessageHandler
{
Type MessageType { get; }
IMessageQueueClient MqClient { get; }
void Process(IMessageQueueClient mqClient);
int ProcessQueue(IMessageQueueClient mqClient, string queueName, Func<bool> doNext = null);
void ProcessMessage(IMessageQueueClient mqClient, object mqResponse);
IMessageHandlerStats GetStats();
}
*/
/*
public interface IMessage<T> : IMessage, IHasId<Guid>
{
T GetBody();
}
*/
/*
Exception
*/
}
}
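With that template in place, the registration in AppHost.Configure can reference the static delegates directly; a sketch, assuming the RegisterHandler overload that takes the three-argument exception handler shown above:
// No handler instance is newed up; the static properties expose the delegates directly.
mqServer.RegisterHandler<ServiceModel.Hello>(
    HelloHandler.ProcessMessageFn,
    HelloHandler.ProcessExceptionFn);
mqServer.Start();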
I'm leaning more towards this design to begin with, but it has me questioning the usage of the
OnProcessException
method. What is the first parameter being passed in for? When would it be relevant, and what can I do with it? The interface does have the GetStats method, which might be beneficial in diagnosing an issue, but I'm just guessing.
Thank you,
Stephen

Invoking operations asynchronously with expiration

I have a classic asynchronous message dispatching problem. Essentially, I need to asynchronously dispatch messages and then capture the message response when the dispatch is complete. The problem is, I can't seem to figure out how to make any one request cycle self-expire and short-circuit.
Here is a sample of the pattern I am using at the moment:
Define a delegate for invocation
private delegate IResponse MessageDispatchDelegate(IRequest request);
Dispatch messages with a callback
var dispatcher = new MessageDispatchDelegate(DispatchMessage);
dispatcher.BeginInvoke(requestMessage, DispatchMessageCallback, null);
Dispatch the message
private IResponse DispatchMessage(IRequest request)
{
//Dispatch the message and throw exception if it times out
}
Get results of dispatch as either a response or an exception
private void DispatchMessageCallback(IAsyncResult ar)
{
//Get result from EndInvoke(r) which could be IResponse or a Timeout Exception
}
What I can't figure out is how to cleanly implement the timeout/short-circuit process in the DispatchMessage method. Any ideas would be appreciated.
var dispatcher = new MessageDispatchDelegate(DispatchMessage);
var asyncResult = dispatcher.BeginInvoke(requestMessage, DispatchMessageCallback, null);
if (!asyncResult.AsyncWaitHandle.WaitOne(1000, false))
{
/*Timeout action*/
}
else
{
response = dispatcher.EndInvoke(asyncResult);
}
After lots of head-scratching I was finally able to find a solution to my original question. First off, let me say I got a lot of great responses and I tested all of them (commenting on each with the results). The main problem was that all the proposed solutions led either to deadlocks (resulting in 100% timeout scenarios) or made an otherwise asynchronous process synchronous. I don't like answering my own question (first time ever), but in this case I took the advice of the Stack Overflow FAQ, since I've truly learned my own lesson and wanted to share it with the community.
In the end, I combined the proposed solutions with invocation of the delegates in alternate AppDomains. It's a bit more code and it's a little more expensive, but this avoids the deadlocks and allows fully asynchronous invocations, which is what I required. Here are the bits...
First I needed something to invoke a delegate in another AppDomain
/// <summary>
/// Invokes actions in alternate AppDomains
/// </summary>
public static class DomainInvoker
{
/// <summary>
/// Invokes the supplied delegate in a new AppDomain and then unloads when it is complete
/// </summary>
public static T ExecuteInNewDomain<T>(Delegate delegateToInvoke, params object[] args)
{
AppDomain invocationDomain = AppDomain.CreateDomain("DomainInvoker_" + delegateToInvoke.GetHashCode());
T returnValue = default(T);
try
{
var context = new InvocationContext(delegateToInvoke, args);
invocationDomain.DoCallBack(new CrossAppDomainDelegate(context.Invoke));
returnValue = (T)invocationDomain.GetData("InvocationResult_" + invocationDomain.FriendlyName);
}
finally
{
AppDomain.Unload(invocationDomain);
}
return returnValue;
}
[Serializable]
internal sealed class InvocationContext
{
private Delegate _delegateToInvoke;
private object[] _arguments;
public InvocationContext(Delegate delegateToInvoke, object[] args)
{
_delegateToInvoke = delegateToInvoke;
_arguments = args;
}
public void Invoke()
{
if (_delegateToInvoke != null)
AppDomain.CurrentDomain.SetData("InvocationResult_" + AppDomain.CurrentDomain.FriendlyName,
_delegateToInvoke.DynamicInvoke(_arguments));
}
}
}
Second I needed something to orchestrate collection of the required parameters and collect/resolve the results. This also defines the timeout and the worker processes which will be called asynchronously in an alternate AppDomain.
Note: In my tests, I extended the dispatch worker method to take random amounts of time to observe that everything worked as expected in both timeout and non-timeout cases.
public delegate IResponse DispatchMessageWithTimeoutDelegate(IRequest request, int timeout = MessageDispatcher.DefaultTimeoutMs);
[Serializable]
public sealed class MessageDispatcher
{
public const int DefaultTimeoutMs = 500;
/// <summary>
/// Public method called on one or many threads to send a request with a timeout
/// </summary>
public IResponse SendRequest(IRequest request, int timeout)
{
var dispatcher = new DispatchMessageWithTimeoutDelegate(SendRequestWithTimeout);
return DomainInvoker.ExecuteInNewDomain<Response>(dispatcher, request, timeout);
}
/// <summary>
/// Worker method invoked by the <see cref="DomainInvoker.ExecuteInNewDomain<>"/> process
/// </summary>
private IResponse SendRequestWithTimeout(IRequest request, int timeout)
{
IResponse response = null;
var dispatcher = new DispatchMessageDelegate(DispatchMessage);
//Request Dispatch
var asyncResult = dispatcher.BeginInvoke(request, null, null);
//Wait for dispatch to complete or short-circuit if it takes too long
if (!asyncResult.AsyncWaitHandle.WaitOne(timeout, false))
{
/* Timeout action */
response = null;
}
else
{
/* Invoked call ended within the timeout period */
response = dispatcher.EndInvoke(asyncResult);
}
return response;
}
/// <summary>
/// Worker method to do the actual dispatch work while being monitored for timeout
/// </summary>
private IResponse DispatchMessage(IRequest request)
{
/* Do real dispatch work here */
return new Response();
}
}
Third I needed something to stand in for the actual thing that asynchronously triggers the dispatches.
Note: This is just to demonstrate the asynchronous behaviours I required. In reality, the first and second items above demonstrate the isolation of timeout behaviours on alternate threads; this just demonstrates how the above resources are used.
public delegate IResponse DispatchMessageDelegate(IRequest request);
class Program
{
static int _responsesReceived;
static void Main()
{
const int max = 500;
for (int i = 0; i < max; i++)
{
SendRequest(new Request());
}
while (_responsesReceived < max)
{
Thread.Sleep(5);
}
}
static void SendRequest(IRequest request, int timeout = MessageDispatcher.DefaultTimeoutMs)
{
var dispatcher = new DispatchMessageWithTimeoutDelegate(SendRequestWithTimeout);
dispatcher.BeginInvoke(request, timeout, SendMessageCallback, request);
}
static IResponse SendRequestWithTimeout(IRequest request, int timeout = MessageDispatcher.DefaultTimeoutMs)
{
var dispatcher = new MessageDispatcher();
return dispatcher.SendRequest(request, timeout);
}
static void SendMessageCallback(IAsyncResult ar)
{
var result = (AsyncResult)ar;
var caller = (DispatchMessageWithTimeoutDelegate)result.AsyncDelegate;
Response response;
try
{
response = (Response)caller.EndInvoke(ar);
}
catch (Exception)
{
response = null;
}
Interlocked.Increment(ref _responsesReceived);
}
}
In retrospect, this approach has some unintended consequences. Since the worker method runs in an alternate AppDomain, this adds additional protection against exceptions (although it can also hide them), allows you to load and unload other managed assemblies (if required), and allows you to define highly constrained or specialized security contexts. It requires a bit more work to productionize, but it provided the framework to answer my original question. Hope this helps someone.
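For comparison, on newer runtimes the same self-expire/short-circuit behaviour can be sketched without delegates or extra AppDomains by racing the dispatch against a delay (a sketch only, not part of the solution above):
// Sketch: race the real dispatch against a timeout with Task.WhenAny.
private static async Task<IResponse> DispatchWithTimeoutAsync(IRequest request, int timeoutMs)
{
    var dispatchTask = Task.Run(() => DispatchMessage(request)); // DispatchMessage as defined above
    var completed = await Task.WhenAny(dispatchTask, Task.Delay(timeoutMs));
    if (completed != dispatchTask)
    {
        return null; // timeout action: short-circuit, letting the dispatch finish (or fail) in the background
    }
    return await dispatchTask;
}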
