Asynchronous Streaming in WCF - C#

I am working with streaming in WCF, and I have a question about what the paragraph on "Enabling Asynchronous Streaming" means in the MSDN article on Large Data and Streaming in WCF.
To enable asynchronous streaming, add the DispatcherSynchronizationBehavior endpoint behavior to the service host and set its AsynchronousSendEnabled property to true. We have also added the capability of true asynchronous streaming on the send side. This improves scalability of the service in scenarios where it is streaming messages to multiple clients some of which are slow in reading possibly due to network congestion or are not reading at all. In these scenarios we now do not block individual threads on the service per client. This ensures that the service is able to process many more clients thereby improving the scalability of the service.
I understand that the above means I add

<behaviors>
  <endpointBehaviors>
    <behavior name="AsyncStreaming">
      <dispatcherSynchronization asynchronousSendEnabled="true" />
    </behavior>
  </endpointBehaviors>
...
to my web.config file and reference the AsyncStreaming behavior in my endpoint. However, I don't understand what those steps actually accomplish for me. Do I need to modify my code at all to take advantage of this asynchrony?
Also, on a similar topic (if it is too different I will move it to a new question): how does using async/await affect using Streams in WCF? Can I declare Task<Stream> Foo() in my service contract? I make some database calls whose results I eventually wrap in a custom stream that the WCF service returns. Being able to use methods like ExecuteReaderAsync() is very useful; can I still use them when dealing with streamed instead of buffered messages?
I have tested it and I know it "works" using Tasks, but I don't know whether doing so makes the operation fall back to "Buffered" mode, as happens when you give the method more than one parameter (see the third paragraph of "Programming Model for Streamed Transfers" on the same MSDN page), and I don't know how to check whether that is happening.

I traced it down to RequestContext via the .NET Reference Source. Apparently, the ChannelHandler.sendAsynchronously field controls whether the message reply is done asynchronously (via RequestContext.BeginReply/EndReply APM methods) or synchronously via RequestContext.Reply.
As far as I can tell, all this does is free a server-side thread back to the pool; otherwise that thread would sit inside RequestContext.Reply "pumping" the stream to the client for as long as the Stream object is alive on the server.
This appears to be totally transparent, so I think you can safely use async TAP-based contract methods and return Task<Stream>. In another contract method you could do await stream.WriteAsync(...), for example.
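For what it's worth, a minimal sketch of what such a TAP-based streamed operation could look like; the names and the data-loading helper here are my own illustrations, not from the question:

```csharp
using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

// Assumes TransferMode.Streamed is set on the binding.
[ServiceContract]
public interface IReportService
{
    // A single Stream (or Task<Stream>) return value is what keeps
    // the reply eligible for streamed mode.
    [OperationContract]
    Task<Stream> GetReportAsync(string reportId);
}

public class ReportService : IReportService
{
    public async Task<Stream> GetReportAsync(string reportId)
    {
        // Stand-in for the database calls from the question, e.g.
        // awaiting ExecuteReaderAsync() and wrapping the results in a
        // custom Stream.
        byte[] data = await LoadReportBytesAsync(reportId);
        return new MemoryStream(data);
    }

    static Task<byte[]> LoadReportBytesAsync(string reportId)
    {
        return Task.FromResult(new byte[0]); // placeholder
    }
}
```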
Please share your actual experience as your own answer when you get there, I'd be very interested in the details :)

Good question on a complex topic. I've implemented a streaming WCF service to accommodate huge (up to 2GB) downloads, but I'm a bit confused, too, about this asynchronousSendEnabled="true" business, since WCF already serves clients concurrently (each connected client gets its own thread to handle requests and responses) as long as you have
<ServiceBehavior(ConcurrencyMode:=ConcurrencyMode.Multiple, InstanceContextMode:=InstanceContextMode.PerCall)>
But you do have to alter your code to make streaming work. Even if you have Binding.TransferMode = TransferMode.Streamed, the program will revert to buffering unless you change the code so that (a) your uploading and downloading functions accept and return streams, and (b) your download function implements something like this:
// oBuffer is your content
if (oBuffer != null)
{
    oStream = new MemoryStream(oBuffer);
    if (oStream.CanSeek)
    {
        oStream.Seek(0, SeekOrigin.Begin);
    }
    return oStream;
}
This is a decent HowTo article I used as a guide: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
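For reference, the streamed transfer mode can also be configured in code rather than in config. A sketch, assuming self-hosting; DownloadService / IDownloadService are placeholder names, not from the answer:

```csharp
using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IDownloadService
{
    [OperationContract]
    Stream GetFile(string name);
}

class Program
{
    static void Main()
    {
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,   // otherwise replies are buffered
            MaxReceivedMessageSize = int.MaxValue,  // raise quota for large uploads
            SendTimeout = TimeSpan.FromMinutes(30)  // slow readers need time
        };

        using (var host = new ServiceHost(typeof(DownloadService)))
        {
            host.AddServiceEndpoint(typeof(IDownloadService), binding,
                "http://localhost:8080/downloads");
            host.Open();
            Console.ReadLine();
        }
    }
}
```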

Related

Web API 2 - how to properly invoke a long-running method asynchronously / in a new thread, and return a response to the client

I am developing a Web API that takes data from a client and saves it for later use. I also have an external system that needs to know about all events, so I want to set up a notification component in my Web API.
What I do is, after the data is saved, execute a SendNotification(message) method in my new component. Meanwhile, I don't want my client to wait for, or even know about, the notifications, so I want to return a 201 Created / 200 OK response to my clients as fast as possible.
Yes, this is a fire-and-forget scenario. I want the notification component to handle all exception cases (if a notification fails, the client of the API doesn't care at all).
I have tried using async/await, but this does not work in Web API, because when the request thread terminates, the async operation does as well.
So I took a look at Task.Run().
My controller looks like so:
public IHttpActionResult PostData([FromBody] Data data)
{
    _dataService.saveData(data);
    // This could fail, and the retry strategy takes time.
    Task.Run(() => _notificationHandler.SendNotification(new Message(data)));
    return CreatedAtRoute<object>(...);
}
And the method in my NotificationHandler
public void SendNotification(Message message)
{
    // ...send stuff to a notification server somewhere, synchronously.
}
I am relatively new to the C# world, and I don't know whether there is a more elegant (or proper) way of doing this. Are there any pitfalls in using this method?
It really depends how long the work takes. Have you looked into the possibility of QueueBackgroundWorkItem, as detailed here? If you want a very fast fire-and-forget, you might also want to consider pushing these messages onto a queue so you can return from the controller immediately. You'd then need something that polls the queue and sends out the notifications, e.g. a scheduled task, a Windows service, etc. IIRC, if IIS recycles during a plain task, the process is killed, whereas with QueueBackgroundWorkItem there is a grace period in which ASP.NET will let the work item finish its job.
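Assuming .NET 4.5.2 or later, the QueueBackgroundWorkItem variant of the question's controller might look roughly like this (a sketch; the service and handler fields are taken from the question's code):

```csharp
using System.Threading;
using System.Web.Hosting;

public IHttpActionResult PostData([FromBody] Data data)
{
    _dataService.saveData(data);

    // Unlike a bare Task.Run, ASP.NET tracks this work item and gives
    // it a grace period to finish during app-domain shutdown.
    HostingEnvironment.QueueBackgroundWorkItem((CancellationToken ct) =>
        _notificationHandler.SendNotification(new Message(data)));

    return CreatedAtRoute<object>(...);
}
```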
I would take a look at Hangfire. It is fairly easy to set up, it should be able to run within your ASP.NET process, and it is easy to migrate to a standalone process in case your IIS load suddenly increases.
I experimented with Hangfire a while ago, though only in standalone mode. It has good docs and an easy-to-understand API.

Prevent hung workers in ASP.NET when reading the posted data

I have an issue, caused by an external factor, in which my app pool queues up requests and hangs. It seems to occur when the client making the HTTP request somehow loses its TCP connection to my server while the server is trying to read all the data POSTed to it.
I am using an Asynchronous HTTP handler, and I am using the following code to read all the posted data:
string post_data = new StreamReader(context.Request.InputStream).ReadToEnd();
I believe what is happening is that ReadToEnd() is blocking my worker thread, and when the TCP connection is lost the thread is stuck there trying to read indefinitely. How can I prevent this from happening?
I am currently coding in .NET 2.0 but I can use a newer framework if this is required.
HttpRequest.InputStream synchronously reads the entire request, then returns the Stream as one huge chunk. You'll want something like this instead (requires .NET 4.5):
string body = await new StreamReader(request.GetBufferlessInputStream()).ReadToEndAsync();
GetBufferlessInputStream() won't read the entire request eagerly; it returns a Stream that reads the request on-demand. Then ReadToEndAsync() will asynchronously read from this Stream without tying up a worker thread.
To create a .NET 4.5-style async handler, subclass the HttpTaskAsyncHandler class. See this blog post for more information on async / await in .NET 4.5. You must also target .NET 4.5 in Web.config to get this functionality.
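Putting the pieces together, a minimal sketch of such a handler might look like this (the class name and response text are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using System.Web;

// Sketch of a .NET 4.5 async handler that reads the body without
// blocking a worker thread.
public class PostHandler : HttpTaskAsyncHandler
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        // GetBufferlessInputStream returns without reading the body;
        // ReadToEndAsync then pulls it in asynchronously, so a stalled
        // client does not pin a worker thread.
        using (var reader = new StreamReader(
            context.Request.GetBufferlessInputStream()))
        {
            string body = await reader.ReadToEndAsync();
            context.Response.Write("Received " + body.Length + " chars");
        }
    }
}
```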

Is HttpResponse.Write a blocking function

I'm working on an Asynchronous HTTP handler and trying to figure out if the HttpResponse.Write function blocks until it receives an ACK from the client.
The MSDN documentation doesn't specifically say; however, I do know that the MSDN documentation for the ISAPI WriteClient() function (a similar mechanism) mentions that the synchronous version does block while attempting to send data to the client.
I thought of three possible ways to determine the answer:
Have someone tell me it's non-blocking.
Write a low-level TCP test client and set a breakpoint on the acknowledgement (is this possible?).
Use reflection to inspect the inner workings of the HttpResponse.Write method (is this possible?).
It's not blocking, but it can use a buffer and send everything together.
Try setting HttpResponse.Buffer = false; to write directly to your client.
You can also use HttpResponse.Flush(); to force sending what you have so far to your client.
See the HttpResponse.Buffer property on MSDN.
And maybe this interests you: Web app blocked while processing another web app on sharing same session
HttpResponse operates in two distinct modes, buffered and unbuffered. In buffered mode, the various Write functions put their data into a memory region and the function returns as soon as the data is copied over. If you set Buffer to false, Write blocks until all of the data is sent to the client.
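A sketch of the difference inside a synchronous handler (an illustrative example, not from the question): with Buffer left at its default of true, each Write merely copies to memory and the response goes out at the end of the request; with Buffer set to false, each Write is handed to the transport and can block.

```csharp
using System.Web;

public class ProgressHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Unbuffered: each Write pushes data toward the client and can
        // block until it is handed off to the transport.
        context.Response.Buffer = false;
        context.Response.Write("step 1 done\n");  // goes out immediately
        context.Response.Write("step 2 done\n");  // goes out immediately
        // No Flush needed here, since nothing is buffered.
    }
}
```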

WCF one-way non-blocking operation

I need this scenario: the client sends a message to the server without waiting for a response, and does not care whether the message was sent properly.
using (host.RemoteService client = new host.RemoteService())
{
    client.Open();
    client.SendMessage("msg");
}
In the scenario where a firewall is on, or there is no Internet connection, the client dies at SendMessage; I mean the program stops responding. I want the program not to care about the result: if there is no connection, the program should just continue past SendMessage.
What should I do? Is there any solution for a non-blocking call?
Try something like this in your service contract:
[OperationContract(IsOneWay=true)]
void Send(string message);
See the following link:
One Way Operation in WCF
Edit: OP was already using my suggested solution.
Suggested approaches to solve the issue - taken from MSDN (One-Way Services):
Clients Blocking with One-Way Operations
It is important to realize that while some one-way applications return as soon as the outbound data is written to the network connection, in several scenarios the implementation of a binding or of a service can cause a WCF client to block using one-way operations. In WCF client applications, the WCF client object does not return until the outbound data has been written to the network connection. This is true for all message exchange patterns, including one-way operations; this means that any problem writing the data to the transport prevents the client from returning. Depending upon the problem, the result could be an exception or a delay in sending messages to the service.
You can mitigate some of this problem by inserting a buffer between the client object and the client transport's send operation. For example, using asynchronous calls or using an in-memory message queue can enable the client object to return quickly. Both approaches may increase functionality, but the size of the thread pool and the message queue still enforce limits.
It is recommended, instead, that you examine the various controls on the service as well as on the client, and then test your application scenarios to determine the best configuration on either side. For example, if the use of sessions is blocking the processing of messages on your service, you can set the System.ServiceModel.ServiceBehaviorAttribute.InstanceContextMode property to PerCall so that each message can be processed by a different service instance, and set the ConcurrencyMode to Multiple in order to allow more than one thread to dispatch messages at a time. Another approach is to increase the read quotas of the service and client bindings.
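A sketch of the configuration the quoted MSDN text recommends: per-call instancing plus multiple concurrent dispatch, so one slow one-way message does not hold up the others. The contract and class names here are illustrative:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMessageSink
{
    [OperationContract(IsOneWay = true)]
    void Send(string message);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MessageSink : IMessageSink
{
    public void Send(string message)
    {
        // One-way: nothing is returned, and exceptions thrown here
        // never reach the client.
    }
}
```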
Modify your attribute
[OperationContract(IsOneWay=true)]

Why am I forced to Close() a C# asynchronous client socket after every transaction?

I'm trying to write an asynchronous socket application that transfers complex objects between the two sides...
I used the example here...
Everything is fine until I try to send multi-package data. When the transferred data requires multiple package transfers, the server application hangs and goes out of control without any errors...
After many hours I found a solution: if I close the client sender socket after each EndSend callback, the problem goes away. But I couldn't understand why this is necessary. Is there any other solution for this situation?
My two projects are the same as the example above, except I changed the EndSend callback method to the following:
public void EndSendCallback(IAsyncResult result)
{
    Status status = (Status)result.AsyncState;
    int size = status.Socket.EndSend(result);
    status.Socket.Close(); // <--------------- This line solved the situation
    Console.Out.WriteLine("Send data: " + size + " bytes.");
    Console.ReadLine();
    allDone.Set();
}
Thanks..
This is due to the example code given not handling multiple packages (and being broken).
A few observations:
The server can only handle 1 client at a time.
The server simply checks whether the data from a single read is smaller than the requested size and, if so, assumes that it is the last part.
The server then ignores the client socket while leaving the connection open. This puts the responsibility for closing the connection on the client side, which can be confusing and wastes resources on the server.
Now, the first observation is an implementation detail and not really relevant in your case. The second observation is relevant for you, since it will likely result in unexplained bugs: probably not in development, but when this code is actually running somewhere in a real scenario. Sockets are stream-oriented, not message-oriented. When the client sends 1000 bytes, this might require one call to read on the server, or ten. A call to read simply returns as soon as there is 'some' data available. What you need to do is implement some sort of protocol that communicates either how much data is being sent over, or when all the data has been sent over. I really recommend sticking with the HTTP protocol, since it is a well-tested and well-supported protocol that suits most scenarios.
The third observation might also cause bugs where the server runs out of resources, since it leaves all connections open.
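One simple way to give a TCP stream the message boundaries described above is a length-prefix framing protocol: each message is preceded by a 4-byte length header, so the receiver knows exactly how many bytes make up one message no matter how many reads the stream splits it into. A sketch (names are my own):

```csharp
using System;
using System.IO;

public static class Framing
{
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        // 4-byte little-endian length header, then the payload.
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadFrame(Stream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    // Loop until 'count' bytes arrive; a single Read may return less.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```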
