I'm not a C# guy, I'm more of an Objective-C guy, but lately I've seen a lot of implementations of:
public void Method(Action<ReturnType> callback, params...)
Instead of:
public ReturnType Method(params...)
One example of this is the MVVM Light framework, where the developer implements the data service contract (and implementation) using the first approach. So my question is: why? Is it just a matter of taste, or is the first approach asynchronous by default (given the function pointer)? And if so, is the standard return dead? I ask because I personally prefer the second approach; it is clearer to me when I read an API.
Unlike the API returning ReturnType, a version with a callback can return right away and perform the callback later. This may be important when the value to return is not immediately available and getting it entails a considerable delay. For example, an API that requests data from a web service may take considerable time. When the result data is not required to proceed, you could initiate the call and provide an asynchronous callback. This way the caller is able to proceed right away and process notifications when they become available.
Consider an API that takes a URL of an image, and returns an in-memory representation of the image. If your API is
Image GetImage(URL url)
and your users need to pull ten images, they would either need to wait for each image to finish loading before requesting the next one, or start multiple threads explicitly.
On the other hand, if your API is
void Method(Action<Image> callback, URL url)
then the users of your API would initiate all ten requests at the same time, and display the images as they become available asynchronously. This approach greatly simplifies the thread programming the users are required to do.
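For illustration, here is a minimal sketch of what such a callback-based method might look like; the ImageService name, the HttpClient-based download, and the Display/imageUrls names in the usage comment are my own placeholders, not MVVM Light's actual API.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ImageService
{
    private static readonly HttpClient Http = new HttpClient();

    // Callback style: the method returns immediately; the callback fires
    // later, once the bytes have actually arrived.
    public static void GetImage(Action<byte[]> callback, Uri url)
    {
        Task.Run(async () =>
        {
            byte[] data = await Http.GetByteArrayAsync(url);
            callback(data);
        });
    }
}

// Caller: start all ten requests at once and handle each result as it arrives, e.g.
// foreach (Uri url in imageUrls) ImageService.GetImage(bytes => Display(bytes), url);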
The first method is likely to be an asynchronous method, where the method returns immediately and the callback is called once the operation has finished.
The second method is the standard way of doing method returns for (synchronous) methods in C#.
Of course, API designers are free to use whatever signature they see fit, and there might be other underlying details that justify the callback style. But, as a rule of thumb, if you see the callback style, expect the method to be asynchronous.
I am a longtime user of Azure's Application Insights, and I use the TelemetryClient's TrackTrace() and TrackException() liberally in every enterprise application I write.
One thing that has always bothered me slightly is that these methods are synchronous. Since these methods communicate with an external API, it would seem there is an ever-present risk of blocking; e.g., if the network is down/slow, or if App Insights' own API is having issues.
In such cases, it seems possible (at least in theory) that an entire application could hang. If that ever occurs, I would like my applications to continue operating despite failing to trace within a reasonable time frame.
I've done some research online, and it appears that there is no built-in way to call these methods asynchronously. Do you know of any way to accomplish this? (Or.....does the App Insights API have an under-the-hood black-box way of automatically preventing these sorts of things?)
Of course, I know I could always wrap my calls in a Task (e.g., await Task.Run(() => myTelemetryClient.TrackTrace("my message"));), or write an async extension method that does this. I could also use a timer to cancel such a request. But it would be nice if there was a more integrated way of doing this.
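For reference, the kind of extension method I have in mind would look roughly like this; TrackTraceAsync and the two-second default timeout are purely my own sketch, not anything provided by the App Insights SDK.

using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public static class TelemetryClientExtensions
{
    // Hypothetical extension: run the synchronous call on the thread pool and
    // stop waiting after a timeout so a hung call can't block the caller.
    public static async Task TrackTraceAsync(this TelemetryClient client, string message, TimeSpan? timeout = null)
    {
        var trackTask = Task.Run(() => client.TrackTrace(message));
        await Task.WhenAny(trackTask, Task.Delay(timeout ?? TimeSpan.FromSeconds(2)));
        // If the delay wins, we simply give up waiting; the trace may still
        // complete later (or be lost), but the application keeps running.
    }
}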
Can anyone enlighten me? Is this really a potential problem that I should be concerned with? Or am I merely tilting at windmills?
Update: I just now saw this, which indicates that AI does indeed handle tracking in an asynchronous manner "under the hood". But how can this be reliable, given the truism that asynchronous operations really need to be made async all the way up and down the call stack in order to be blocking-proof?
Is this really a potential problem that I should be concerned with?
No. None of the TrackABC() methods communicate with any external API or do anything which would take a long time. Track() runs all telemetry initializers, and then queues the item into an in-memory queue.
While the built-in telemetry initializers are designed to finish quickly and make no I/O or HTTP calls, if a user adds a telemetry initializer that makes an HTTP call or does something similarly slow, then yes, it will affect your Track() calls. But with normal usage of telemetry initializers, this should not be a concern.
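For example, a well-behaved custom telemetry initializer does only in-memory work, something along these lines (the class name and role name value are just illustrative):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class RoleNameInitializer : ITelemetryInitializer
{
    // Purely in-memory: no I/O, no HTTP, so Track() stays fast.
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleName = "my-enterprise-app";
    }
}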
If it's anything like the JS API, the tracking events are placed in a queue, then dequeued and sent (possibly in batches at configurable intervals) independently of the TrackXXX methods. Enqueuing an event can be synchronous, but the sending end of the process can operate asynchronously. The queue decouples the two from one another. –
spender
I think #spender answered my question! Thanks!
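A rough sketch of the queue/sender decoupling spender describes might look like this; all names here are illustrative, not the actual SDK internals.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class BatchingChannel : IDisposable
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    private readonly Timer _flushTimer;

    public BatchingChannel(TimeSpan interval)
    {
        // The sending side runs on a timer, independently of callers.
        _flushTimer = new Timer(_ => Flush(), null, interval, interval);
    }

    // Track()-style calls only do this: a fast, in-memory enqueue.
    public void Enqueue(string item) => _queue.Enqueue(item);

    private void Flush()
    {
        var batch = new List<string>();
        while (_queue.TryDequeue(out var item)) batch.Add(item);
        if (batch.Count > 0) Send(batch);
    }

    // Network I/O happens here, off the caller's path.
    private void Send(List<string> batch) { /* POST the batch to the ingestion endpoint */ }

    public void Dispose() => _flushTimer.Dispose();
}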
I have an AWS Lambda function written in C#. This function is responsible for making 5-6 API calls (POST requests).
All of these API calls are independent of each other.
I do not care about the responses from any of these API calls.
Each of the API calls takes about 5 seconds to complete, even though I do not care about the follow-up response.
Question:
I want my Lambda function to execute and respond within a second. How can I make my API calls asynchronously so that the Lambda function can finish all of them within my time limit without waiting for the responses? Ideally, I want to implement a fire-and-forget API call system that sends the final response back without any delay.
According to the AWS Lambda documentation, I have to use the await operator with asynchronous calls in Lambda to avoid the function completing before the asynchronous calls are finished.
Am I missing something here? Or is there a way to accomplish this?
Thanks
You can't run code "outside" of a serverless request. To try to do so will only bring pain - since your serverless host has no idea that your code is incomplete, it will feel free to terminate your hosting process.
The proper solution is to have two lambdas separated by a queue. The first (externally-facing) lambda takes the POST request, drops a message on the queue, and returns its response to the caller.
The second (internal-only) lambda monitors the queue and does the API calls.
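A hedged sketch of the externally-facing Lambda, assuming an SQS queue and the API Gateway proxy event types; the queue URL and payload shape are placeholders.

using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.SQS;
using Amazon.SQS.Model;

public class EnqueueFunction
{
    private static readonly IAmazonSQS Sqs = new AmazonSQSClient();

    public async Task<APIGatewayProxyResponse> Handler(APIGatewayProxyRequest request)
    {
        // Drop the work on the queue; this is the only thing we wait for.
        await Sqs.SendMessageAsync(new SendMessageRequest
        {
            QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders", // placeholder
            MessageBody = request.Body
        });

        // Respond immediately; a second, queue-triggered Lambda makes the 5-6 API calls.
        return new APIGatewayProxyResponse { StatusCode = 202, Body = "accepted" };
    }
}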
For your use case, using AWS Step Functions will provide a fully managed solution. The steps are as follows.
Define your AWS Step Functions flow, based on whether you want to trigger the calls in parallel or one after another.
Integrate the initial step with an API Gateway POST method.
After starting the Step Functions state machine, it will return a success response immediately, without waiting for the end state.
There are a few benefits of Step Functions over a custom Lambda flow implementation.
You can configure each step to retry if it returns errors.
If any of the steps ends up with an error, you can trigger a callback.
You can visually identify and monitor which step had issues, and so on.
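If you start the state machine from your own code rather than wiring API Gateway to Step Functions directly, the call is a single, quick SDK request, roughly like this (the ARN and input here are placeholders):

using System.Threading.Tasks;
using Amazon.StepFunctions;
using Amazon.StepFunctions.Model;

public class StartOrderFlow
{
    private static readonly IAmazonStepFunctions Sfn = new AmazonStepFunctionsClient();

    public async Task StartAsync(string orderJson)
    {
        // StartExecution returns as soon as the state machine is started;
        // the parallel API calls then run inside Step Functions.
        await Sfn.StartExecutionAsync(new StartExecutionRequest
        {
            StateMachineArn = "arn:aws:states:us-east-1:123456789012:stateMachine:order-flow", // placeholder
            Input = orderJson
        });
    }
}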
If you just want fire-and-forget, then don't use await. Just use an HttpClient method (get, put, etc.) to call the API and you're done. Those methods return a Task<HttpResponseMessage> that you don't care about, so it's fine for your Lambda to exit at that point.
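A minimal sketch of that approach (the URLs and handler signature are placeholders); note that, as mentioned above, the serverless host may tear down the process once the handler returns, so unfinished requests are not guaranteed to complete.

using System.Net.Http;
using System.Text;

public class FireAndForgetFunction
{
    private static readonly HttpClient Http = new HttpClient();

    public string Handler(string input)
    {
        foreach (var url in new[] { "https://api-one.example/orders", "https://api-two.example/orders" })
        {
            // Deliberately not awaited: we start the POST and move on.
            _ = Http.PostAsync(url, new StringContent(input, Encoding.UTF8, "application/json"));
        }
        return "accepted";
    }
}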
I THINK I understand the basic principle of an asynchronous POST method from a high-level viewpoint. The big advantage is that the caller gets a quick response, even though the processing of the data from the POST might not be complete yet.
But I don't really understand how this applies to a GET method. The response needs to contain the data, so how can there BE a response before processing is completed? Why have an API with a GET request handler that utilizes asynchronous methods?
I don't think it matters for this general type of question, but I'm writing in C# using Web API.
On the network there is no such thing as an async HTTP call. It's just data flowing over TCP. The server can't tell whether the client is internally sync or async.
the caller gets a quick response
Indeed, the server can send the response line and headers early and data late. But that has nothing to do with async IO or an async .NET server implementation. It's just some bytes arriving early and some late.
So there is no difference between GET and POST here.
why ... utilize asynchronous methods?
They can have scalability benefits for the client and/or the server. There is no difference at the HTTP level.
So the app can do other things that don't need the data.
If you implement a GET as synchronous (and let's say you are on a bad network, where it takes 20 seconds to get it), you can't
Show progress
Allow the user to cancel
Let them initiate other GETs
Let them use other parts of the app
EDIT (see comments): The server wants to respond asynchronously for mostly the same reason (to give up the thread). Actually getting the data might be asynchronous and take some time -- you wouldn't want to block the thread for the whole time.
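Here is a minimal sketch of an async GET action in ASP.NET Web API; the controller, route, and backend URL are illustrative. The response still contains the data; the await just releases the request thread while the backend I/O is in flight.

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class OrdersController : ApiController
{
    private static readonly HttpClient Backend = new HttpClient();

    public async Task<IHttpActionResult> Get(int id)
    {
        // The request thread goes back to the pool here instead of blocking on I/O.
        string json = await Backend.GetStringAsync("https://backend.example/orders/" + id);
        return Ok(json);
    }
}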
It doesn't make any sense if you're building a RESTful API, in which GET should return a resource (or a collection) and be safe (not make any changes).
But if you're not following those principles, a GET request could perform some work asynchronously. For example, GET /messages/3 could return the message to the client, but then asynchronously do other work like:
mark the message as read
send a push notification to the user's other clients indicating that the message is read
schedule a cronjob 30 days in the future to delete the message
etc.
None of these would be considered RESTful, but such patterns are still used from time to time.
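Illustrative only (and, as noted, not RESTful): the handler returns the message right away and kicks off the side work without awaiting it. MessageStore and PushService are hypothetical helpers, stubbed here so the sketch compiles.

using System.Threading.Tasks;
using System.Web.Http;

public class MessagesController : ApiController
{
    // GET /messages/3
    public async Task<IHttpActionResult> Get(int id)
    {
        var message = await MessageStore.LoadAsync(id);

        // Fire-and-forget side effects; the response does not wait for them.
        _ = Task.Run(async () =>
        {
            await MessageStore.MarkReadAsync(id);          // mark the message as read
            await PushService.NotifyOtherClientsAsync(id); // tell the user's other clients
        });

        return Ok(message);
    }
}

public static class MessageStore
{
    public static Task<string> LoadAsync(int id) => Task.FromResult("message " + id); // stub
    public static Task MarkReadAsync(int id) => Task.CompletedTask;                   // stub
}

public static class PushService
{
    public static Task NotifyOtherClientsAsync(int id) => Task.CompletedTask;         // stub
}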
I'm not sure if this is technically a call for threading, but strangely enough in my many years of coding, I really haven't had to do this.
Scenario:
An API method is called by a user to kick off the processing of an order.
The processing method we need to call next can take a really long time, so we start it, but we don't need any reply or acknowledgement of whether it has completed, errored, etc., as our logging process takes care of all that.
However, the user just needs to know that the process has started, so we send back a positive response but kick off the other methods.
I think this is threading, but for the life of me I'm a bit unsure.
Long running task in WebAPI
It seems like running your process in the background after sending a response may be an issue.
"ASP.NET (and most other servers) work on the assumption that it's safe to tear down your service once all requests have completed."
As for your situation, what I would do is handle things on the front end: using JavaScript, jQuery, etc., create an on-click event for the order button and have it display an "order submitted" message or something to that effect.
I'm working on implementing a JSON RPC connection over TCP/IP and I have one fundamental issue. Currently I'm using a synchronous approach, which works well.
I can send
{"id":1,"jsonrpc":"2.0","method":"Input.Home"}
and receive
{"id":1,"jsonrpc":"2.0","result":true}
This works with no problems. The issue arises when I receive notifications. These can arrive unpredictably and at any time. I am interacting with the XBMC JSON RPC API. If a notification has been sent by XBMC, I receive multiple JSON requests at once. E.g.
{"jsonrpc":"2.0","method":"GUI.OnScreensaverActivated","params":{"data":null,"sender":"xbmc"}}{"jsonrpc":"2.0","method":"GUI.OnScreensaverDeactivated","params":{"data":null,"sender":"xbmc"}}
This causes a crash in JSON.NET, and understandably so. My first instinct is that I need to receive these notifications asynchronously so that I don't have to wait until the next method call to receive them. However, this complicates the simple example I showed above because I can no longer use the synchronous calls, i.e.
SendJson(json);
result = ReceiveJson();
Is there a clean way to implement this without overcomplicating it? Any/all advice is appreciated.
Here is where all the reading is. It's all worth it if you want to get into it.
http://msdn.microsoft.com/en-us/library/ms228969.aspx
For a summary though..
Basically what needs to happen is you need to create an object that implements IAsyncResult. This will store state about your async operation, and a callback for when it is complete.
Create a method that takes your IAsyncResult object as input, and before the method returns, it should call .SetCompleted() on your IAsyncResult object. It's inside this method that you do the work you normally do.
Then you will create an instance of your IAsyncResult object (set its data and callback), then call Task.Factory.StartNew([YourNewMethod], [YourInstanceOfIAsyncResult]).
That is the core of what needs to happen. Just make sure to set your callback and handle exceptions.
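A bare-bones sketch of that pattern follows; the class and method names are mine, and SetCompleted() is a method on the custom class, not part of IAsyncResult itself. The ReceiveJson body is a placeholder for your existing blocking receive.

using System;
using System.Threading;
using System.Threading.Tasks;

public class JsonRpcAsyncResult : IAsyncResult
{
    private readonly ManualResetEvent _done = new ManualResetEvent(false);
    private readonly AsyncCallback _callback;

    public JsonRpcAsyncResult(AsyncCallback callback, object state)
    {
        _callback = callback;
        AsyncState = state;
    }

    public object AsyncState { get; }
    public WaitHandle AsyncWaitHandle => _done;
    public bool CompletedSynchronously => false;
    public bool IsCompleted { get; private set; }
    public string ReceivedJson { get; set; } // the result of the receive

    // Mark the operation complete and fire the stored callback.
    public void SetCompleted()
    {
        IsCompleted = true;
        _done.Set();
        _callback?.Invoke(this);
    }
}

public static class JsonRpcReceiver
{
    // The worker: do the normal (blocking) receive, then signal completion.
    private static void ReceiveWork(object state)
    {
        var ar = (JsonRpcAsyncResult)state;
        ar.ReceivedJson = ReceiveJson(); // your existing synchronous receive
        ar.SetCompleted();
    }

    private static string ReceiveJson() => "{\"jsonrpc\":\"2.0\",\"result\":true}"; // placeholder

    public static IAsyncResult BeginReceive(AsyncCallback callback, object state)
    {
        var ar = new JsonRpcAsyncResult(callback, state);
        Task.Factory.StartNew(ReceiveWork, ar);
        return ar;
    }
}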
If you want a working example of how I am using an async HTTP handler to do JSON-RPC, take a look at http://jsonrpc2.codeplex.com/SourceControl/changeset/view/13061#74311 and also where Process is being called and the async object is being set up: http://jsonrpc2.codeplex.com/SourceControl/changeset/view/13061#61720
Hope that helps.