Pattern for Reusable asynchronous HttpHandler - c#

I'm currently developing a custom HttpHandler (for compressing/combining CSS, but that doesn't matter for this question).
I started with a simple synchronous HttpHandler with IsReusable = true, like we all know.
Now I'm trying to turn it into an asynchronous handler (as it does I/O work and is used on a very busy website).
My first attempt (and this seems to work ok):
Action<HttpContext> asyncProcessRequest;

public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
{
    asyncProcessRequest = new Action<HttpContext>(ProcessRequest);
    return asyncProcessRequest.BeginInvoke(context, cb, extraData);
}

public void EndProcessRequest(IAsyncResult result)
{
    asyncProcessRequest.EndInvoke(result);
}

public virtual void ProcessRequest(HttpContext context)
{
    // real work
}
This makes it a non-reusable HttpHandler: from what I've read, IsReusable should be false here, because the handler now has state (the asyncProcessRequest field).
Now I want to make this reusable. So my first thought was to create a dictionary of IAsyncResult / Action like this:
IDictionary<IAsyncResult, Action<HttpContext>> asyncProcessRequests;

public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
{
    if (asyncProcessRequests == null)
    {
        asyncProcessRequests = new Dictionary<IAsyncResult, Action<HttpContext>>();
    }

    var request = new Action<HttpContext>(ProcessRequest);
    var result = request.BeginInvoke(context, cb, extraData);
    asyncProcessRequests.Add(result, request);
    return result;
}

public void EndProcessRequest(IAsyncResult result)
{
    Action<HttpContext> action;
    if (asyncProcessRequests.TryGetValue(result, out action))
    {
        action.EndInvoke(result);
    }
}
Is this a correct pattern, or am I way off?
It seems to work (I'm not getting any errors or weird behavior), but before putting this into production I'd like to verify it with someone who has more experience writing these HTTP handlers.
Thanks in advance!

In general, for the async pattern, you should use the state parameter that you pass into the BeginXxx method as the last argument (you called it extraData).
So you may want to create a helper class that holds the (original) extraData as well as whatever additional state you need when the request ends.
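For illustration, a rough sketch of that helper might look like this (the class and member names are mine, not from the question); it also removes the instance field, so the handler could stay reusable:

// Hypothetical state holder: keeps the delegate we need for EndInvoke
// plus the caller's original extraData.
class AsyncRequestState
{
    public Action<HttpContext> Worker;
    public object OriginalExtraData;
}

public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
{
    var state = new AsyncRequestState
    {
        Worker = new Action<HttpContext>(ProcessRequest),
        OriginalExtraData = extraData
    };
    return state.Worker.BeginInvoke(context, cb, state);
}

public void EndProcessRequest(IAsyncResult result)
{
    // The helper travels back to us via AsyncState, so no instance field is needed.
    var state = (AsyncRequestState)result.AsyncState;
    state.Worker.EndInvoke(result);
}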
However, in your specific case, I believe you're not accelerating anything by using the async pattern. While it works, it basically only adds overhead: you're invoking a delegate asynchronously, which does nothing but dispatch the call to the thread pool. So unless you have multiple delegates running simultaneously via async calls, you're not going to benefit much. Since web requests are already served on multiple threads, I don't think this will help performance; on the contrary, you run the risk of thread-pool starvation.
Correct and efficient async handling is not easy. You benefit from it when you're doing inherently async things, such as reading data from a file or a network connection, or when calling external components that support async invocation (such as a web service or a database).
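To make the last point concrete, here is a rough sketch of what inherently async I/O could look like in a handler like this one. The file path and buffer handling are invented, and note that the fields make this particular version non-reusable again:

private FileStream m_Stream;
private byte[] m_Buffer;

public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
{
    // Hypothetical source file; in the real handler this would be the CSS to combine.
    string path = context.Server.MapPath("~/content/site.css");

    // The last argument (useAsync = true) requests overlapped I/O, so no thread-pool
    // thread sits blocked while the disk does its work.
    m_Stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true);
    m_Buffer = new byte[m_Stream.Length];
    return m_Stream.BeginRead(m_Buffer, 0, m_Buffer.Length, cb, extraData);
}

public void EndProcessRequest(IAsyncResult result)
{
    int read = m_Stream.EndRead(result);
    m_Stream.Dispose();
    // Compress/combine m_Buffer[0..read) here; a real handler would carry the
    // HttpContext along (e.g. in a state object) in order to write the response.
}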

If I remember correctly, IsReusable indicates to ASP.NET that your handler should not be destroyed after processing a request and that the same instance can be used for subsequent requests. In other words, one instance of the handler object is reused across requests, but it does not handle multiple requests simultaneously.
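In code that is just the property on the handler, e.g.:

public bool IsReusable
{
    // true tells ASP.NET it may keep this instance around and hand it later requests
    get { return true; }
}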

Related

Synchronous wrapper for WCF asynchronous calls

We are calling WCF services asynchronously.
public partial class ServiceClient : System.ServiceModel.ClientBase<MyService>, MyService
{
......
}
ServiceClient _serviceclient;

void Getproducts(string filter, string argument, EventHandler<GetCompletedEventArgs> callback)
{
    _serviceclient.GetAsyncGetproducts(filter, argument, callback);
}
I want the Getproducts function to be synchronous. What is the best way to achieve this? Something like the following:
void Getproducts(string filter, string argument, EventHandler<GetCompletedEventArgs> callback)
{
    _serviceclient.GetAsyncGetproducts(filter, argument, callback);
    // wait until the callback comes back, then return
}
EDIT: The proxy isn't providing any synchronous calls.
You cannot make synchronous networking requests in Silverlight from the UI thread. There's no way around that. Even if you try to trick the asynchronous methods into behaving synchronously, it will not work: if it were possible, the UI thread would be blocked and the application would appear frozen. Responses to networking requests in Silverlight are always delivered to the UI thread, so if you wait for a response on the UI thread itself, you create a deadlock.
You essentially have two options: the preferred one is to actually go the asynchronous route. It's hard at first, if you're only used to synchronous programming, but it's a very valuable skill to have. The other option is to make the call on a background thread. I've tried it and it works, and some people have blogged about it, so you can try it as well. But AFAIK it's not officially supported.
Rather than just passing the callback parameter straight through, you'll want to supply your own callback that invokes that method and then signals completion. You effectively just need to trigger an event of some sort. I've demonstrated one way using tasks, but you could just as easily use an AutoResetEvent or any number of other synchronization primitives.
void Getproducts(string filter, string argument, EventHandler<GetCompletedEventArgs> callback)
{
    var tcs = new TaskCompletionSource<bool>();
    _serviceclient.GetAsyncGetproducts(filter, argument, args =>
    {
        callback(args);
        tcs.SetResult(true);
    });
    tcs.Task.Wait();
}
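For comparison, a variant using an AutoResetEvent instead of a TaskCompletionSource might look like the following sketch. It reuses the proxy signature from the question, and the same warning applies: never block like this on the UI thread.

void Getproducts(string filter, string argument, EventHandler<GetCompletedEventArgs> callback)
{
    var done = new AutoResetEvent(false);
    _serviceclient.GetAsyncGetproducts(filter, argument, args =>
    {
        callback(args);
        done.Set();     // release the waiting caller once the callback has run
    });
    done.WaitOne();     // block this (non-UI) thread until the service replies
}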

Recursive WinRT Async Issues

I have some code that does something like this:
abstract class Data
{
    Data(string name, bool load) { if (load) { Load().Wait(); } }
    abstract Task Load();
}

class XmlData : Data
{
    XmlData(string name, bool load = true) : base(name, load) { }

    override async Task Load()
    {
        var file = await GetFileAsync(...);
        var xmlDoc = await LoadXmlDocAsync(file);
        ProcessXml(xmlDoc);
    }

    void ProcessXml(XmlDocument xmlDoc)
    {
        foreach (var element in xmlDoc.Nodes)
        {
            if (element.NodeName == "something")
                new XmlData(element.NodeText);
        }
    }
}
I seem to (sometimes) get weird timing issues, where the code ends up hanging at GetFileAsync(...). Is this caused by the recursive nature of the calls? When I change all the await calls to actually .Wait() for them to finish, essentially getting rid of all the asynchronous nature of the calls, my code executes fine.
Is this caused by the recursive nature of the calls? When I change all the await calls to actually do a .Wait() for them to finish, and essentially get rid of all the asynchronous nature of the calls, my code executes fine.
It really depends -
The most likely culprit is that your caller is somehow blocking the user interface thread (via a call to Wait(), etc.). The default behavior of await is to capture the calling synchronization context and post the continuation back to that context.
However, if the caller is blocking that context while it waits, you get a deadlock.
This is very likely what's happening here, caused by this line of code:
Data(string name, bool load) { if (load) { Load().Wait(); } }
This can easily be avoided by making your library code (like this XmlData class) explicitly not use the calling synchronization context; capturing it is typically only required for user interface code. By avoiding the capture, you do two things: first, you improve the overall performance (often dramatically), and second, you avoid this deadlock condition.
This can be done by using ConfigureAwait and changing your code like so:
override async Task Load()
{
    var file = await GetFileAsync(...).ConfigureAwait(false);
    var xmlDoc = await LoadXmlDocAsync(file).ConfigureAwait(false);
    ProcessXml(xmlDoc);
}
That being said - I would rethink this design a bit. There are really two problems here.
First, you're putting a virtual method call in your constructor, which is fairly dangerous and should be avoided when possible, as it can lead to unusual problems.
Second, you're turning your entire asynchronous operation into a synchronous one by blocking on it in the constructor. I would, instead, recommend rethinking this entire thing.
Perhaps you could rework this to provide some form of factory which returns your data loaded asynchronously? This could be as simple as making the public API for creation a factory method that returns Task<Data>, or even a generic public async Task<TData> Create<TData>(string name) where TData : Data method, which would allow you to keep the construction and loading asynchronous and avoid the blocking entirely.
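A rough sketch of that factory idea might look like the following; it assumes the base constructor no longer takes the load flag, and the CreateAsync name is illustrative, not from the original code:

class XmlData : Data
{
    private XmlData(string name) : base(name) { }

    public static async Task<XmlData> CreateAsync(string name)
    {
        var data = new XmlData(name);
        // Loading happens after construction: no virtual call and no blocking in the ctor.
        await data.Load().ConfigureAwait(false);
        return data;
    }

    // Load() and ProcessXml() stay as before, except ProcessXml would await
    // CreateAsync for child elements instead of constructing them directly.
}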

Truly asynchronous WCF service

I am implementing an asynchronous service. After evaluating Microsoft's example, I am wondering whether their approach is truly asynchronous. I am pretty sure it is, but some of the samples I've seen online, and the AsyncCallback parameter, cause me to wonder.
According to the example, we need to implement the Begin and End method pair like this:
public IAsyncResult BeginGetAcmeAnvil(AsyncCallback callback, object state)
{
    // Starts synchronous task
    var acmeAsyncResult = new AcmeAsyncResult<Anvil>
    {
        Data = new Anvil()
    };
    return acmeAsyncResult;
}

public Anvil EndGetAcmeAnvil(IAsyncResult result)
{
    var acmeAsyncResult = result as AcmeAsyncResult<Anvil>;
    return acmeAsyncResult != null
        ? acmeAsyncResult.Data
        : new Anvil();
}
Pretty straightforward, but why do we have an AsyncCallback parameter? Shouldn't we call the callback, which will in turn trigger the End method?
This is what I have in mind:
public delegate void AsyncMethodCaller(AcmeAsyncResult<Anvil> acmeAsyncResult,
                                       AsyncCallback callback);

public IAsyncResult BeginGetAcmeAnvil(AsyncCallback callback, object state)
{
    var acmeAsyncResult = new AcmeAsyncResult<Anvil>();
    var asyncMethodCaller = new AsyncMethodCaller(GetAcmeAnvilAsync);
    // Starts asynchronous task
    asyncMethodCaller.BeginInvoke(acmeAsyncResult, callback, null, null);
    return acmeAsyncResult;
}

private void GetAcmeAnvilAsync(AcmeAsyncResult<Anvil> acmeAsyncResult,
                               AsyncCallback callback)
{
    acmeAsyncResult.Data = new Anvil();
    callback(acmeAsyncResult); // Triggers EndGetAcmeAnvil
}

public Anvil EndGetAcmeAnvil(IAsyncResult result)
{
    var acmeAsyncResult = result as AcmeAsyncResult<Anvil>;
    return acmeAsyncResult != null
        ? acmeAsyncResult.Data
        : new Anvil();
}
I did some load testing using loadUI, but there were no obvious performance changes.
I found a good article explaining how to get the best performance out of your Async WCF service.
The gist is:
don't do heavy work in the Begin method, and
do make the callback to trigger the End method.
Here's an extract from the text:
For best performance, here are two principles when you call/implement the above asynchronous pattern:
Principle 1: Do not do heavy-weighted work inside the Begin method...
The reason for this is that you should return the calling thread as soon as possible so that the caller can schedule other work. If it’s a UI thread, the application needs to use the thread to respond to user inputs. You should always put heavy operations in a different thread if possible.
Principle 2: Avoid calling End method on the same thread of the Begin method.
The End method is normally blocking. It waits for the operation to complete. If you implement the End method, you would see that it actually calls IAsyncResult.WaitHandle.WaitOne(). On the other hand, as a normal implementation, this WaitHandle is a delay allocated ManualResetEvent. As long as you don’t call it, it would be not allocated at all. For fast operations, this is pretty cheap. However, once End is called, you would have to allocate it. The right place to call End is from the callback of the operation. When the callback is invoked, it means that the blocking work is really completed. At this point, you can call End to get data retrieved without sacrificing performance.
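On the client side, Principle 2 boils down to something like the following sketch (the client proxy variable is hypothetical; the Begin/End pair is the one from the contract above):

client.BeginGetAcmeAnvil(ar =>
{
    // The operation is already finished when the callback runs, so End returns
    // immediately and never has to allocate or block on the WaitHandle.
    Anvil anvil = client.EndGetAcmeAnvil(ar);
    // ... use anvil ...
}, null);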
I think the main reason it's separated like this is that the WCF runtime handles the thread synchronization, as opposed to you having to handle it manually.
If you were invoking the End method via the callback, you would have to handle the synchronization yourself, which makes the pattern quite a bit more complex (as you can see in your code examples). The goal of this pattern is that you don't really have to be aware of the threading details: you just code your long-running operation without having to think about how the threading is implemented.

How to force C# asynchronous operations to run in a...synchronized manner?

I have these two classes: LineWriter and AsyncLineWriter. My goal is to allow a caller to queue up multiple pending asynchronous invocations on AsyncLineWriter’s methods, but under the covers never truly allow them to run in parallel; i.e., they must somehow queue up and wait for one another. That's it. The rest of this question gives a complete example so there's absolutely no ambiguity about what I'm really asking.
LineWriter has a single synchronous method (WriteLine) that takes about 5 seconds to run:
public class LineWriter
{
    public void WriteLine(string line)
    {
        Console.WriteLine("1:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("2:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("3:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("4:" + line);
        Thread.Sleep(1000);
        Console.WriteLine("5:" + line);
        Thread.Sleep(1000);
    }
}
AsyncLineWriter just encapsulates LineWriter and provides an asynchronous interface (BeginWriteLine and EndWriteLine):
public class AsyncLineWriter
{
    public AsyncLineWriter()
    {
        // Async Stuff
        m_LineWriter = new LineWriter();
        DoWriteLine = new WriteLineDelegate(m_LineWriter.WriteLine);

        // Locking Stuff
        m_Lock = new object();
    }

    #region Async Stuff

    private LineWriter m_LineWriter;
    private delegate void WriteLineDelegate(string line);
    private WriteLineDelegate DoWriteLine;

    public IAsyncResult BeginWriteLine(string line, AsyncCallback callback, object state)
    {
        EnterLock();
        return DoWriteLine.BeginInvoke(line, callback, state);
    }

    public void EndWriteLine(IAsyncResult result)
    {
        DoWriteLine.EndInvoke(result);
        ExitLock();
    }

    #endregion

    #region Locking Stuff

    private object m_Lock;

    private void EnterLock()
    {
        Monitor.Enter(m_Lock);
        Console.WriteLine("----EnterLock----");
    }

    private void ExitLock()
    {
        Console.WriteLine("----ExitLock----");
        Monitor.Exit(m_Lock);
    }

    #endregion
}
As I said in the first paragraph, my goal is to allow only one pending asynchronous operation at a time. I'd really like it if I didn't need to use locks here; i.e., it would be great if BeginWriteLine always returned an IAsyncResult handle immediately and yet retained the expected behavior, but I can't figure out how to do that. So the next best way to explain what I'm after seemed to be to just use locks.
Still, my “Locking Stuff” section isn’t working as expected. I would expect the above code (which runs EnterLock before an async operation begins and ExitLock after an async operation ends) to only allow one pending async operation to run at any given time.
If I run the following code:
static void Main(string[] args)
{
    AsyncLineWriter writer = new AsyncLineWriter();

    var aresult = writer.BeginWriteLine("atest", null, null);
    var bresult = writer.BeginWriteLine("btest", null, null);
    var cresult = writer.BeginWriteLine("ctest", null, null);

    writer.EndWriteLine(aresult);
    writer.EndWriteLine(bresult);
    writer.EndWriteLine(cresult);

    Console.WriteLine("----Done----");
    Console.ReadLine();
}
I expect to see the following output:
----EnterLock----
1:atest
2:atest
3:atest
4:atest
5:atest
----ExitLock----
----EnterLock----
1:btest
2:btest
3:btest
4:btest
5:btest
----ExitLock----
----EnterLock----
1:ctest
2:ctest
3:ctest
4:ctest
5:ctest
----ExitLock----
----Done----
But instead I see the following, which means they’re all just running in parallel as if the locks had no effect:
----EnterLock----
----EnterLock----
----EnterLock----
1:atest
1:btest
1:ctest
2:atest
2:btest
2:ctest
3:atest
3:btest
3:ctest
4:atest
4:btest
4:ctest
5:atest
5:btest
5:ctest
----ExitLock----
----ExitLock----
----ExitLock----
----Done----
I’m assuming the locks are “ignored” because (and correct me if I’m wrong) I’m locking them all from the same thread. My question is: how can I get my expected behavior? And “don’t use asynchronous operations” is not an acceptable answer.
There is a bigger, more complex, real-life use case for this, using asynchronous operations, where sometimes it's not possible to have multiple truly parallel operations, but I need to at least emulate the behavior, even if that means queuing the operations and running them one after another. Specifically, the interface to AsyncLineWriter shouldn't change, but its internal behavior somehow needs to queue up any async operations it's given, in a thread-safe manner. Another gotcha is that I can't add locks to LineWriter's WriteLine, because in my real case that is a method I can't change (although, in this example, doing so actually does give me the expected output).
A link to some code designed to solve a similar issue might be good enough to get me on the right path. Or perhaps some alternative ideas. Thanks.
P.S. If you're wondering what kind of use case could possibly use such a thing: it's a class which maintains a network connection, upon which only one operation can be active at a time; I'm using asynchronous calls for each operation. There's no obvious way to have truly parallel lines of communication go on over a single network connection, so they will need to wait for each other in one way or another.
Locks cannot possibly work here. You're trying to enter the lock on one thread (the calling thread) and exit from it on a different thread (the ThreadPool).
Since .NET locks are re-entrant, entering the lock a second time on the caller thread doesn't wait. (Had it waited, it would deadlock, since you can only exit the lock on that thread.)
Instead, you should create a method that calls the synchronous version in a lock, then call that method asynchronously.
This way, you're entering the lock on the async thread rather than the caller thread, and exiting it on the same thread after the method finishes.
However, this is inefficient, since you're wasting threads waiting on the lock.
It would be better to make a single thread with a queue of delegates to execute on it, so that calling the async method would start one copy of that thread, or add it to the queue if the thread is already running.
When this thread finishes the queue, it would exit, and then be restarted next time you call an async method.
I've done this in a similar context; see my earlier question.
Note that you'll need to implement IAsyncResult yourself, or use a callback pattern (which is generally much simpler).
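A simplified sketch of that queue-thread idea is shown below. Unlike the approach described above, it keeps one worker thread alive for the object's lifetime instead of restarting it when the queue drains, and it reports completion through a plain callback rather than a hand-rolled IAsyncResult:

public sealed class WorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> m_Queue = new BlockingCollection<Action>();
    private readonly Thread m_Worker;

    public WorkQueue()
    {
        m_Worker = new Thread(() =>
        {
            // Items run strictly one after another, so queued operations never overlap.
            foreach (var work in m_Queue.GetConsumingEnumerable())
                work();
        });
        m_Worker.IsBackground = true;
        m_Worker.Start();
    }

    public void Enqueue(Action work, Action onCompleted)
    {
        m_Queue.Add(() => { work(); onCompleted(); });
    }

    public void Dispose()
    {
        m_Queue.CompleteAdding();
        m_Worker.Join();
    }
}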
I created the following "Asynchronizer" class:
public class Asynchronizer
{
    public readonly Action<Action> DoAction;

    public Asynchronizer()
    {
        m_Lock = new object();
        DoAction = new Action<Action>(ActionWrapper);
    }

    private object m_Lock;

    private void ActionWrapper(Action action)
    {
        lock (m_Lock)
        {
            action();
        }
    }
}
You can use it like this:
public IAsyncResult BeginWriteLine(string line, AsyncCallback callback, object state)
{
    Action action = () => m_LineWriter.WriteLine(line);
    return m_Asynchronizer.DoAction.BeginInvoke(action, callback, state);
}

public void EndWriteLine(IAsyncResult result)
{
    m_Asynchronizer.DoAction.EndInvoke(result);
}
This way it's easy to synchronize multiple asynchronous operations even if they have different method signatures. E.g., we could also do this:
public IAsyncResult BeginWriteLine2(string line1, string line2, AsyncCallback callback, object state)
{
    Action action = () => m_LineWriter.WriteLine(line1, line2);
    return m_Asynchronizer.DoAction.BeginInvoke(action, callback, state);
}

public void EndWriteLine2(IAsyncResult result)
{
    m_Asynchronizer.DoAction.EndInvoke(result);
}
A user can queue up as many BeginWriteLine/EndWriteLine and BeginWriteLine2/EndWriteLine2 calls as they want, and they will be executed in a staggered, synchronized fashion (although you will have as many threads open as there are pending operations, so this is only practical if you know you'll only ever have a few asynchronous operations pending at a time). A better, more complex solution is, as SLaks pointed out, to use a dedicated queue thread and queue the actions onto it.

AsyncCallBack CompletedSynchronously

I've noticed the following pattern recently, but I don't entirely grasp the usage of the CompletedSynchronously property:
IAsyncResult channelOpenResult = channel.BeginOpen(new AsyncCallback(OnOpenCompleteChannel), channel);
if (channelOpenResult.CompletedSynchronously)
{
    CompleteOpenChannel(channelOpenResult);
}
And then again, in the callback:
void OnOpenCompleteChannel(IAsyncResult result)
{
    if (result.CompletedSynchronously)
        return;
    else
        CompleteOpenChannel(result);
}
And somewhere in the code there is of course a function:
void CompleteOpenChannel(IAsyncResult result) ...
Is this a way to handle the asynchronous call differently depending on whether it completes directly or not? But why use it in this case, since the AsyncCallback will always be called (will it?)?
Could someone give an example where the call is made synchronously?
See this blog post. A common pattern does async work in a loop, and checking CompletedSynchronously helps avoid the case where you get 'unlucky': a run of async calls happens to complete synchronously and you risk a StackOverflowException. For example, if you're reading data over the network and the data you're reading has already come over the wire and is buffered, your async call may complete synchronously, which means your callback is invoked on the same thread (with a deeper stack), which means you'd better not schedule the next async call from inside it in a loop.
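For illustration, the loop pattern described there looks roughly like the sketch below (the stream, buffer, and method names are mine): the loop in Start keeps going while calls complete synchronously, and the callback only takes over when a call genuinely goes asynchronous.

class PumpReader
{
    private readonly Stream m_Stream;
    private readonly byte[] m_Buffer = new byte[4096];

    public PumpReader(Stream stream) { m_Stream = stream; }

    public void Start()
    {
        while (true)
        {
            IAsyncResult ar = m_Stream.BeginRead(m_Buffer, 0, m_Buffer.Length, OnRead, null);
            if (!ar.CompletedSynchronously)
                return;               // the callback will continue on another thread
            if (!Process(ar))
                return;               // end of stream
        }
    }

    private void OnRead(IAsyncResult ar)
    {
        if (ar.CompletedSynchronously)
            return;                   // already handled by the loop in Start()
        if (Process(ar))
            Start();                  // resume the loop; the stack has not grown
    }

    private bool Process(IAsyncResult ar)
    {
        int read = m_Stream.EndRead(ar);
        // ... consume m_Buffer[0 .. read) here ...
        return read > 0;
    }
}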
According to this document, you can supply the call with both a synchronous and an async callback, and only if the call was not handled synchronously will it invoke the async methods. I do not think this is really applicable to Silverlight (because all Silverlight calls are async to a degree), but it is probably used more for custom factories in other .NET applications.
Hope this helps.
