Implementing a TCP/IP JSON-RPC connection in C# - need design advice

I'm working on implementing a JSON RPC connection over TCP/IP and I have one fundamental issue. Currently I'm using a synchronous approach, which works well.
I can send
{"id":1,"jsonrpc":"2.0","method":"Input.Home"}
and receive
{"id":1,"jsonrpc":"2.0","result":true}
This works with no problems. The issue arises when I receive notifications. These can arrive unpredictably and at any time. I am interacting with the XBMC JSON RPC API. If a notification has been sent by XBMC, I receive multiple JSON requests at once. E.g.
{"jsonrpc":"2.0","method":"GUI.OnScreensaverActivated","params":{"data":null,"sender":"xbmc"}}{"jsonrpc":"2.0","method":"GUI.OnScreensaverDeactivated","params":{"data":null,"sender":"xbmc"}}
This causes a crash in JSON.NET, understandably so, since the buffer no longer holds a single JSON document. My first instinct is that I need to receive these notifications asynchronously so that I don't have to wait until the next method call to pick them up. However, this complicates the simple example I showed above, because I can no longer use the synchronous calls, i.e.
SendJson(json);
result = ReceiveJson();
Is there a clean way to implement this without over complicating it? Any/All advice is appreciated.
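One option is JSON.NET's own `JsonTextReader` with `SupportMultipleContent = true`, which reads concatenated documents one at a time. But you can also frame the stream yourself before deserializing, so each complete document is handed to the parser separately. A minimal sketch (class and method names are mine) that splits back-to-back objects by tracking brace depth outside string literals:

```csharp
using System;
using System.Collections.Generic;

// Splits a buffer that may contain several concatenated JSON objects,
// e.g. {"a":1}{"b":2}, into individual documents by tracking brace
// depth and ignoring braces that appear inside string literals.
static class JsonFramer
{
    public static List<string> Split(string buffer)
    {
        var docs = new List<string>();
        int depth = 0, start = 0;
        bool inString = false, escaped = false;

        for (int i = 0; i < buffer.Length; i++)
        {
            char c = buffer[i];
            if (escaped) { escaped = false; continue; }
            if (c == '\\' && inString) { escaped = true; continue; }
            if (c == '"') { inString = !inString; continue; }
            if (inString) continue;

            if (c == '{')
            {
                if (depth == 0) start = i;
                depth++;
            }
            else if (c == '}' && --depth == 0)
            {
                docs.Add(buffer.Substring(start, i - start + 1));
            }
        }
        return docs; // a trailing partial object is simply not emitted yet
    }
}
```

Each returned string is then a single document you can pass to JSON.NET as before; a partial object at the end of the buffer stays unframed until more bytes arrive.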

Here is where all the reading is; it's worth it if you want to get into the details:
http://msdn.microsoft.com/en-us/library/ms228969.aspx
For a summary, though:
Basically what needs to happen is you need to create an object that implements IAsyncResult. This will store state about your async operation, and a callback for when it is complete.
Create a method that takes your IAsyncResult object as input and, before it returns, calls .SetCompleted() on that object (a method you define yourself; IAsyncResult doesn't declare one). It's inside this method that you do the work you normally do.
Then you will create an instance of your IAsyncResult object (set its data, and callback), then call Task.Factory.StartNew([YourNewMethod],[YourInstanceOf IAsyncResult]).
That is the core of what needs to happen. Just make sure to set your callback and handle exceptions.
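A minimal sketch of that shape (the type name, the SetCompleted method, and the worker are mine; only IAsyncResult and Task.Factory are framework pieces):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal IAsyncResult implementation: holds the operation's state, a
// wait handle, and a completion callback.
class SimpleAsyncResult : IAsyncResult
{
    private readonly ManualResetEvent _done = new ManualResetEvent(false);
    private readonly AsyncCallback _callback;

    public SimpleAsyncResult(object state, AsyncCallback callback)
    {
        AsyncState = state;
        _callback = callback;
    }

    public object AsyncState { get; }
    public WaitHandle AsyncWaitHandle => _done;
    public bool CompletedSynchronously => false;
    public bool IsCompleted { get; private set; }

    // The worker method calls this when it has finished its work.
    public void SetCompleted()
    {
        IsCompleted = true;
        _done.Set();
        _callback?.Invoke(this);
    }
}

class Worker
{
    // Does the "normal" synchronous work, then marks the result complete.
    public static void DoWork(object state)
    {
        var ar = (SimpleAsyncResult)state;
        // ... the work you would normally do goes here ...
        ar.SetCompleted();
    }
}
```

Kicking it off then looks like `Task.Factory.StartNew(Worker.DoWork, new SimpleAsyncResult(myState, myCallback))`; the caller can poll `IsCompleted`, block on `AsyncWaitHandle`, or just rely on the callback.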
If you want a working example of how I am using an async HTTP handler to do JSON-RPC, take a look at http://jsonrpc2.codeplex.com/SourceControl/changeset/view/13061#74311 and, for where Process is called and the async object is set up, http://jsonrpc2.codeplex.com/SourceControl/changeset/view/13061#61720
Hope that helps.

Related

How do asynchronous GET requests work?

I THINK I understand the basic principle of an asynchronous POST method from a high-level viewpoint. The big advantage is that the caller gets a quick response, even though the processing of the data from the POST might not be completed yet.
But I don't really understand how this applies to a GET method. The response needs to contain the data, so how can there BE a response before processing is completed? Why have an API with a GET request handler that utilizes asynchronous methods?
I don't think it matters for this general type of question, but I'm writing in C# using Web API.
On the network there is no such thing as an async HTTP call. It's just data flowing over TCP. The server can't tell whether the client is internally sync or async.
the caller gets a quick response
Indeed, the server can send the response line and headers early and data late. But that has nothing to do with async IO or an async .NET server implementation. It's just some bytes arriving early and some late.
So there is no difference between GET and POST here.
why ... utilize asynchronous methods?
They can have scalability benefits for the client and/or the server. There is no difference at the HTTP level.
So the app can do other things that don't need the data.
If you implement a GET synchronously (say, on a bad network where the request takes 20 seconds), you can't:
Show progress
Allow the user to cancel
Let them initiate other GETs
Let them use other parts of the app
EDIT (see comments): The server wants to respond asynchronously for mostly the same reason (to give up the thread). Actually getting the data might be asynchronous and take some time -- you wouldn't want to block the thread for the whole time.
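The client-side benefits above come from the fact that an async call returns a Task immediately, so several requests can be in flight at once. A sketch of that, with the network replaced by Task.Delay so it runs without a real server (the names here are mine):

```csharp
using System;
using System.Threading.Tasks;

class AsyncGetDemo
{
    // Stand-in for a slow GET; Task.Delay simulates network latency.
    public static async Task<string> FakeGetAsync(string url)
    {
        await Task.Delay(100); // the calling thread is free during this wait
        return "body-of-" + url;
    }

    // All three requests are in flight at once, so the total wait is
    // roughly one delay (~100 ms), not three.
    public static Task<string[]> FetchAllAsync() =>
        Task.WhenAll(FakeGetAsync("/a"), FakeGetAsync("/b"), FakeGetAsync("/c"));
}
```

With a synchronous GetString-style call, the same three fetches would run back to back, and the UI thread would be blocked for the whole time.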
It doesn't make any sense if you're building a RESTful API, in which GET should return a resource (or a collection) and be safe, making no changes.
But if you're not following those principles, a GET request could perform some work asynchronously. For example, GET /messages/3 could return the message to the client, but then asynchronously do other work like:
mark the message as read
send a push notification to the user's other clients indicating that the message is read
schedule a cronjob 30 days in the future to delete the message
etc.
None of these would be considered a RESTful API, but are still used from time to time.
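The "return the message, then do the bookkeeping asynchronously" idea can be sketched like this (MessageService and its members are hypothetical names; the TaskCompletionSource is exposed only so the side work can be observed):

```csharp
using System;
using System.Threading.Tasks;

class MessageService
{
    // Completed when the background bookkeeping finishes; exposed here
    // only so callers (and this example) can observe it.
    public readonly TaskCompletionSource<bool> SideWorkDone =
        new TaskCompletionSource<bool>();

    // Returns the message body immediately; the "mark as read" style
    // work continues on the thread pool after the response is available.
    public string GetMessage(int id)
    {
        string body = "message " + id;
        Task.Run(() =>
        {
            // mark as read, notify other clients, schedule cleanup ...
            SideWorkDone.SetResult(true);
        });
        return body;
    }
}
```

The caller gets its body right away; the side effects land whenever the queued work runs.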

Releasing a connection in .NET HTTPHandler

I am currently implementing an HTTP handler that will take a POST of some XML data from a 3rd party, then do some work with it. The processing I will be doing with the XML has the potential to take some time. Once I yank the XML out of the POST, there is no need to keep the connection open with the client while I process the data. As I don't have any control over when the client will time out posting to me, I just want to grab the XML and let the connection go.
Is there an easy way to go about this? Using Response.Close() isn't correct, as it doesn't close the connection properly, and Response.End() exits my HTTP handler altogether. I could throw the processing onto a background thread, but I've heard that can be risky in ASP.NET: the AppDomain can be torn down, which would kill my process mid-work.
Any thoughts would be much appreciated. Thanks for the help!
Save received data to some sort of permanent queue (MSMQ for example).
Exit handler.
Process data from queue in another application, for example in windows service.
This is not exactly "easy way", but safe and fast for customers.
Thanks everyone for your input. The way I went about this, so others can ponder it as a solution:
The queuing would probably be the most "correct" means, but it would take some extra implementation that really is just over the top for what I am intending to do. Using the information from http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
First I create my processing class and spin that up in a background thread to do the work after I get the XML from the client. This releases the connection while my worker thread continues in the background.
I registered my processing class as an IRegisteredObject and then implemented the Stop(bool immediate) method:
public void Stop(bool immediate)
{
    if (!immediate && _working)
        return; // don't unregister yet, give it some time

    if (immediate && _working)
    {
        //TODO: Log this instance
    }

    HostingEnvironment.UnregisterObject(this);
}
I set my _working variable to true while I am processing work, and clear it when done. In the rare case that I am mid-work and Stop gets called because the AppDomain is being torn down, the first call just returns without unregistering, which gives my process a bit more time to finish up. When the method is called a second time with the immediate flag set to true, it quickly logs the issue and then unregisters itself.
This may not be the ultimate solution, but for my purposes, this will take care of alerting me when the very rare condition happens, as well as not holding up the client's connection as I process the data.
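That two-phase handshake (ASP.NET calls Stop(false) first, then Stop(true) if the object hasn't unregistered) can be exercised outside IIS by standing in for HostingEnvironment.UnregisterObject with a flag. A sketch under that assumption (XmlProcessor and its members are illustrative names):

```csharp
using System;

// Plain-object model of the IRegisteredObject two-phase shutdown
// contract described above; Unregistered stands in for a call to
// HostingEnvironment.UnregisterObject(this).
class XmlProcessor
{
    private volatile bool _working;
    public bool Unregistered { get; private set; }

    public void BeginWork() => _working = true;
    public void EndWork() => _working = false;

    public void Stop(bool immediate)
    {
        if (!immediate && _working)
            return; // first pass: still busy, ask for more time

        if (immediate && _working)
        {
            // log that work was cut short
        }
        Unregistered = true;
    }
}
```

The first, non-immediate Stop leaves a busy worker registered; only the immediate second call (or a Stop while idle) actually unregisters it.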
If you're using .NET 4.5, check out HttpTaskAsyncHandler.
From the linked page:
Asynchronous HTTP handlers - The traditional approach to writing asynchronous handlers in ASP.NET is to implement the IHttpAsyncHandler interface. ASP.NET 4.5 introduces the HttpTaskAsyncHandler asynchronous base type that you can derive from, which makes it much easier to write asynchronous handlers. The HttpTaskAsyncHandler type is abstract and requires you to override the ProcessRequestAsync method. Internally ASP.NET takes care of integrating the return signature (a Task object) of ProcessRequestAsync with the older asynchronous programming model used by the ASP.NET pipeline. The following example shows how you can use Task and await as part of the implementation of an asynchronous HTTP handler:
public class MyAsyncHandler : HttpTaskAsyncHandler
{
    // ...
    // ASP.NET automatically takes care of integrating the Task based override
    // with the ASP.NET pipeline.
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        WebClient wc = new WebClient();
        var result = await wc.DownloadStringTaskAsync("http://www.microsoft.com");
        // Do something with the result
    }
}

Action<T> vs Standard Return

I'm not a C# guy I'm more an Objective-C guy but lately I've seen a lot of implementations of:
public void Method(Action<ReturnType> callback, params...)
Instead of:
public ReturnType Method(params...)
One example of this is the MVVM Light framework, where the developer implements the data service contract (and implementation) using the first approach. So my question is: why? Is it just a matter of taste, or is the first approach asynchronous by default (given the function pointer)? If that's true, is the standard return dead? I ask because I personally like the second approach; it's clearer to me when I read an API.
Unlike the API returning ReturnType directly, the callback version can return right away and perform the callback later. This matters when the value to return is not immediately available and getting it entails a considerable delay. For example, an API that requests data from a web service may take considerable time. When the result isn't required to proceed, you can initiate the call, provide an asynchronous callback, and let the caller carry on, processing the result when it becomes available.
Consider an API that takes a URL of an image, and returns an in-memory representation of the image. If your API is
Image GetImage(URL url)
and your users need to pull ten images, they would either need to wait for each image to finish loading before requesting the next one, or start multiple threads explicitly.
On the other hand, if your API is
void Method(Action<Image> callback, URL url)
then the users of your API would initiate all ten requests at the same time, and display the images as they become available asynchronously. This approach greatly simplifies the thread programming the users are required to do.
The first method is likely to be an asynchronous method, where the method returns immediately and the callback is called once the operation has finished.
The second method is the standard way of doing method returns for (synchronous) methods in C#.
Of course, API designers are free to pick whatever signature they see fit, and there might be other underlying details that justify the callback style. But, as a rule of thumb, if you see the callback style, expect the method to be asynchronous.
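The two signatures side by side, in a toy form (Calculator and its members are made-up names): the return-style version blocks until the value exists, while the callback-style version hands the work to the thread pool and returns immediately.

```csharp
using System;
using System.Threading.Tasks;

class Calculator
{
    // Standard return: the caller blocks until the value is available.
    public static int Add(int a, int b) => a + b;

    // Callback style: returns immediately; the result is delivered
    // later, possibly on another thread.
    public static void Add(Action<int> callback, int a, int b)
    {
        Task.Run(() => callback(a + b));
    }
}
```

A caller of the second overload can keep doing other work and handle the result whenever the callback fires, which is exactly the trade-off described above.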

Queue implementation in C#

I'm dealing with a hardware resource that can only handle one command at a time. I'm going to expose some of its API functions via a web interface, so obviously there's a good chance more than one command will be sent at a time. I have decided that queuing these commands as they're submitted is the best way to ensure serial processing.
I'm planning on implementing the queue in a static class. The web app code-behind will add a command by calling a method corresponding to the command they want. I want the calling method to wait until it gets the output of its command, so no async magic is required.
Am I doing this right? Is there a better way?
How do I start implementing the queue in C# (I usually work with Java)? I assume I'll need some sort of Event to signal that a job has been added, and a handler to initiate processing of the queue...
I'm using .NET Framework 4.
You can use the ConcurrentQueue class for your implementation and have a dedicated thread to process items in the queue.
For the waiting part you can use an AutoResetEvent: each producer passes an event instance to the singleton class along with the request, then calls WaitOne(), which blocks until the processor signals that processing is complete by calling Set().
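Put together, that looks roughly like the following (CommandProcessor, Job, and the fake hardware call are illustrative; the real hardware call would go where the stand-in is):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// One dedicated thread drains the queue, so the hardware only ever sees
// one command at a time; each caller blocks on its own AutoResetEvent
// until its result is ready.
class CommandProcessor
{
    private class Job
    {
        public string Command;
        public string Result;
        public readonly AutoResetEvent Done = new AutoResetEvent(false);
    }

    private readonly ConcurrentQueue<Job> _queue = new ConcurrentQueue<Job>();
    private readonly AutoResetEvent _jobAdded = new AutoResetEvent(false);

    public CommandProcessor()
    {
        new Thread(ProcessLoop) { IsBackground = true }.Start();
    }

    // Called from web request threads; blocks until this command's
    // result has been produced.
    public string Execute(string command)
    {
        var job = new Job { Command = command };
        _queue.Enqueue(job);
        _jobAdded.Set();
        job.Done.WaitOne();
        return job.Result;
    }

    private void ProcessLoop()
    {
        while (true)
        {
            _jobAdded.WaitOne();
            while (_queue.TryDequeue(out Job job))
            {
                job.Result = "OK:" + job.Command; // stand-in for the hardware call
                job.Done.Set();
            }
        }
    }
}
```

Because each job carries its own event, concurrent web requests can all block on Execute and still get exactly their own result back, in strict queue order.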
Sounds like a good approach EXCEPT: use the generic Queue&lt;T&gt; collection class. Do not write your own! You would be reinventing a well-built wheel.

Are calls synchronous in WCF?

I'm writing an App using WCF where clients subscribe to a server and then updates get pushed back to the clients.
The subscribers subscribe to the server using a DuplexPipeChannel calling a Subscribe() method on the server.
The server maintains a List<> of subscribers and when there is data to push out to the subscribers it calls a PushData() method.
My intention is to iterate through the list of subscribers calling the push method on each of them in turn.
What I want to know is: Is calling the push method on my Subscriber blocking? Will a failure of connectivity or delay in connecting to one of the subscribers cause the rest of the push calls to be delayed (or worse fail)?
I'm sorry if this is an obvious question, but I've been mostly a .Net 2.0 person up until now so I know very little about WCF.
My WCF code is loosely based on this tutorial.
Another Question
Assuming it is synchronous, am I better off spawning a new thread to deal with the client-side requests, or would I be better off spawning a new thread for each push server-side?
WCF calls are synchronous by default, although they can be configured to be asynchronous (see Jarrett's answer below). Every message you send will receive a result back, whether you are actually expecting data or not.
The call will block depending on what your server does. If PushData on the server actually iterates through the subscriber list and sends a message to each, it will. If PushData only inserts the data and another thread handles sending the data to the subscribers, it will only block while your server inserts the data and returns.
Hope this helps.
Edit: Regarding spawning threads client-side vs. server-side: server-side. If a client call takes a while, that's one thing; but if it takes a long time because the server is synchronously sending out calls to all the other clients inside that same call, then something is wrong. I wouldn't actually spawn a new thread each time, though. Just create a producer/consumer pattern on your server side so that whenever a data item is queued, a consumer picks it up. Hell, you can even have multiple consumers.
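The producer/consumer suggestion can be sketched with BlockingCollection, which avoids spawning a thread per push (PushDispatcher and its members are hypothetical names; the delegate stands in for the actual WCF callback to subscribers):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Service methods (producers) just enqueue the data; a fixed set of
// consumer tasks do the actual pushes to subscribers, so a slow client
// never blocks the service call itself.
class PushDispatcher
{
    private readonly BlockingCollection<string> _updates =
        new BlockingCollection<string>();

    public PushDispatcher(int consumers, Action<string> pushToSubscribers)
    {
        for (int i = 0; i < consumers; i++)
        {
            Task.Run(() =>
            {
                // Blocks until an item arrives; the loop ends once
                // CompleteAdding has been called and the queue drains.
                foreach (var update in _updates.GetConsumingEnumerable())
                    pushToSubscribers(update);
            });
        }
    }

    public void QueueUpdate(string update) => _updates.Add(update);
    public void Shutdown() => _updates.CompleteAdding();
}
```

PushData on the service then reduces to QueueUpdate, which returns almost immediately, matching the "insert the data and let another thread send it" shape described earlier in this answer.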
If you right-click on the Service Reference, you have the option to create Async calls. (There's a checkbox on the setup dialog.) I usually create Async methods and then listen for a result. While it is a bit more work, I can write a much more responsive application using async service operations.
