How to track performance of an MVC4 Web API REST call? - c#

I have an ASP.NET MVC4 Web API (REST) interface that is being called by numerous clients. Basically I serve up content based on certain params:
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
It gets hit about 50K times a day, and I am noticing timeouts and slow response times every now and then.
ASK: How can I include metrics for how long the request took to process (internal to my code) as well as how long it sat in the IIS queue? I'm not sure whether the latency is in my code or in IIS.
I'd like to add them back into the response somehow:
<StuffPayload>
    <Stuff id="1" url="http://myserv.x.com/img/1/" />
    <Response time="100ms" IIStime="10ms" MyServerCodeTime="90ms" />
</StuffPayload>

First, check what your method is doing: if there is any SQL or file operation, make sure you create and dispose of all resources correctly.
You could write a custom action filter for logging so that you have a reusable piece of code for all your tracing. You can then add additional content to the response within the OnActionExecuted method.
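For the in-code portion, here is a minimal sketch of such a filter using Web API's ActionFilterAttribute; the class name and the X-Server-Code-Time-Ms header are illustrative choices, not a standard:

using System.Diagnostics;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class TimingFilterAttribute : ActionFilterAttribute
{
    private const string StopwatchKey = "RequestStopwatch";

    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Start timing just before the action runs.
        actionContext.Request.Properties[StopwatchKey] = Stopwatch.StartNew();
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        var stopwatch = (Stopwatch)actionExecutedContext.Request.Properties[StopwatchKey];
        stopwatch.Stop();

        // Response can be null if the action threw an exception.
        if (actionExecutedContext.Response != null)
        {
            // Surface the server-side processing time as a response header;
            // you could instead append an element to the XML body here.
            actionExecutedContext.Response.Headers.Add(
                "X-Server-Code-Time-Ms", stopwatch.ElapsedMilliseconds.ToString());
        }
    }
}

Note that a filter only measures your code; time spent queued in IIS before ASP.NET picks up the request is not visible here, so for that side you would need IIS tooling such as Failed Request Tracing or the time-taken field in the IIS logs.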

Related

ClientConnectionFailure at forward-request

I have an Angular Web Application, that is backed by a C# Web Api, which facilitates speaking to an Azure Function App.
A rough example flow is like the following:
Angular Web App (press download with selected parameters) -> send GET request to API Management Service
API Management Service makes call to a C# Web Api
C# Web Api then responds back to the APIM, which in turn calls an Azure Function App to further process data from an external source
Once a csv is ready, the data payload is downloaded in the browser where the Web App is open
For larger payloads, the download request fails with the following error in Application Insights:
"ClientConnectionFailure at forward-request"
This error occurs at exactly 2 minutes, every time, unless the payload is sufficiently small.
This led me to believe that the Function App, which I understand as the client in this situation, was timing out and cancelling the request.
But when testing a GET with the exact same parameters against a local instance of the Azure Function App using Postman, the payload was successfully retrieved.
So the issue isn't the Azure Function App, because it did not time out in Postman the way it did when using the Web App.
This leads me to three different possibilities:
The C# WebApi is timing out and cancelling the request before the APIM can respond in full
The WebApp itself is timing out.
The internet browser (Chrome), is timing out. (Chrome has a hard unchangeable timeout of 5 minutes, so unlikely)
#1. To tackle the first option, I increased the timeout of the HttpClient created in the relevant download action:
public async Task<HttpResponseMessage> DownloadIt(blah)
{
    HttpClient client = getHttpClient();
    client.Timeout = TimeSpan.FromMinutes(10);
    var request = new HttpRequestMessage(HttpMethod.Get, buildQueryString(blah, client.BaseAddress));
    return await client.SendAsync(request);
}

private HttpClient getHttpClient()
{
    return _httpClientFactory.CreateClient("blah");
}
This had no effect as the same error was observed.
#2. There are a couple of timeout properties in protractor.conf.js, like allScriptsTimeout and defaultTimeoutInterval.
Increasing these had no effect.
There is one last possibility: that the APIM itself is timing out. But looking at the APIM policy for the relevant API, there is no forward-request property with a timeout, which according to Microsoft means that by default there is no timeout for the APIM.
https://learn.microsoft.com/en-us/azure/api-management/api-management-advanced-policies
I've tried a few different strategies but to no avail.
Indeed there's a timeout: ClientConnectionFailure indicates that the client closed the connection with API Management (APIM) while APIM had yet to return a response to it (the client), in this case while it was forwarding the request to the backend (forward-request).
To debug this kind of issue, the best approach is to collect an APIM inspector trace to inspect request processing inside the APIM pipeline, paying attention to the time spent in each section of the request: Inbound, Backend, Outbound. The section where the most time is spent is probably the culprit (or its dependencies). Hopefully this helps you track down the problem.
You can explicitly set a forward-request timeout on the entire function app or on a single endpoint, such as:
<backend>
    <forward-request timeout="1800" />
</backend>
where the timeout is in seconds (1800 seconds = 30 minutes here)
To do this in APIM:
1. Go to your APIM instance
2. Open APIs
3. Select your function app
4. Click on the Code icon </> under Inbound Processing
Alternatively, if you want to do this for just a single operation/endpoint, click on that individual operation/endpoint before performing step 4.
After testing each component of the solution locally (outside Azure): the web app (front end), the web API, and the function app (backend), it became clear that the issue was caused by Azure itself, namely the default 4-minute Idle Timeout on the Azure Load Balancer.
I double-checked by timing the requests that failed: they always failed at 4 minutes.
The backend code sends its requests all together, and for larger data sets this caused it to hit the load balancer's timeout.
The load balancer timeout is configurable, but it doesn't look like something I will be able to change.
So, the solution: write more efficient/better code in the backend.

How do you handle an Async Response from a Web Service

I was given the task of creating a web based client for a web service.
I began building it out in C# | .NET 4.0 | MVC3 (I can use 4.5 if necessary).
Sounded like a piece of cake until I found out that some of their responses would be asynchronous. This is the flow: you call a method and they return a response of ack or nack, letting you know if your request was valid. Upon an ack response you should expect an async response with the data you requested, which will be sent to a callback URL that you provide in your request.
Here are my questions:
If I'm building a web app and debugging on localhost:{portnum}, how can I give them a callback URL?
If I have already received a response (ack/nack) and my function finishes firing, isn't my connection to the client then over? How would I then get the data back to the client? My only thought is maybe using something like SignalR, but that seems crazy for a customer buy flow.
Do I have to treat their response like a webhook? Build something separate that just listens and has no knowledge of the initial request, save the data to a db, and then have the initial request loop until there is a record for the unique id sent from the webhook... oy vey.
This really has my brain hurting :-/
Any help would be greatly appreciated. Articles, best practices, anything.
Thanks in advance.
If you create your service reference, it will generate a *ServiceMethod*Completed delegate. Register an event handler on it to process your data.
Use the *ServiceMethod*Async() method to call the service.
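A rough sketch of that pattern, where MyServiceClient and ServiceMethod are placeholders for your generated proxy and operation:

// The generated proxy exposes the event-based async pattern.
var client = new MyServiceClient();
client.ServiceMethodCompleted += (sender, e) =>
{
    if (e.Error == null)
    {
        // Process the data the service returned.
        Console.WriteLine(e.Result);
    }
};
client.ServiceMethodAsync(); // returns immediately; the event fires later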
The way I perceived your question is as follows, though please correct me if I'm wrong about this:
1) You send a request to their endpoint with parameters filled with your data. In addition, you send a:
callback url that you provide in your request. (quoted from your question)
2) They (hopefully) send an ack for your valid request
3) They eventually send the completed data to your callback url (which you specified).
If this is the flow, it's not all that uncommon, especially if the operations on their side may take long periods of time. So let's say you have some method; we'll call it HandleResponse(data). If you had originally intended to do this synchronously, which rarely happens in the web world, you would presumably have called HandleResponse( http-webservice-call-tothem );
Instead, since it is they who initiate the call to HandleResponse, you need to set up a route in your web app like /myapp/givemebackmydata/{data} and hook that to HandleResponse. Then you specify the callback URL to them as /myapp/givemebackmydata/{data}. Keep in mind that without more information I can't say whether they will send the data as the body of a POST request to your handler or string-replace a portion of the URL with the actual data, in which case you'd need to substitute {data} in your callback URL with whatever placeholder they stipulate in their docs. Do they have docs? If they don't, none of this will help all that much.
Lastly, to get the data back to the client you will likely want some sort of polling loop in your web client, preferably via AJAX. It would run on a setInterval and periodically hit a page on your server that keeps state for whether or not their web service has called your callback URL yet. This is the gnarlier part, because you will need to keep state for each request: multiple people will presumably be waiting for a callback, and each callback URL hit must map back to one of the waiting clients. A GUID may be good for this.
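To make that concrete, here is a minimal sketch of the callback endpoint plus the polling endpoint, assuming Web API controllers and an in-memory store (the names PendingResults, CallbackController, and StatusController are illustrative, not from the vendor's docs):

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public static class PendingResults
{
    // Maps the correlation GUID from the original request to the data
    // that eventually arrives on the callback URL.
    public static readonly ConcurrentDictionary<Guid, string> Store =
        new ConcurrentDictionary<Guid, string>();
}

public class CallbackController : ApiController
{
    // The remote service POSTs the completed data here (the callback URL).
    public HttpResponseMessage Post(Guid id, [FromBody] string data)
    {
        PendingResults.Store[id] = data;
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}

public class StatusController : ApiController
{
    // The browser polls this endpoint on a setInterval via AJAX.
    public HttpResponseMessage Get(Guid id)
    {
        string data;
        return PendingResults.Store.TryRemove(id, out data)
            ? Request.CreateResponse(HttpStatusCode.OK, data)
            : Request.CreateResponse(HttpStatusCode.NoContent);
    }
}

An in-memory dictionary only works on a single server, of course; the database approach you describe is the durable, multi-server version of the same idea.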
Interesting question, by the way.

How would I extend WebAPI to support returning controller action results via an HTTP callback?

I'm trying to extend WebAPI to support returning a response through an HTTP callback.
Workflow:
WebAPI receives an HTTP request with a callback URL.
WebAPI handles the request normally, and if the operation completes in less time than a configured timeout, the result is sent synchronously.
If the timeout is exceeded, the server sends an HTTP response indicating that it went async, and processing continues.
When processing (eventually) completes the response of the controller is posted to the pre-negotiated callback url.
Controllers need to remain synchronous and unaware of the async/callback functionality.
It appears MessageHandlers are a likely candidate but returning multiple HTTP responses (one for the early 'long task' response and one for the callback) does not appear to be supported.
Can someone provide guidance on what areas of WebAPI are extensible and relevant to this scenario?
I think an HttpMessageHandler will do the trick but not the way I think you're asking for.
One URL will be the main one and will return either the result or the redirection; the other will handle the redirections.
This is a very common scenario. In some cases you'll ask for a list of something and receive a managed amount of results plus a continuation URL if there are more. Your requirement might be viewed as just that, where you either have only a continuation or the whole result.
Another way of looking at it is CQRS (Command Query Responsibility Segregation): you issue a command to one URL and retrieve the response from another. As an optimization, the result of invoking the command might be the response itself instead of the query URL.
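A rough controller-level sketch of that idea follows; the same decision could equally live in a DelegatingHandler to keep controllers unaware of it, and JobsController, SyncWindow, and RunJobAsync are illustrative names:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class JobsController : ApiController
{
    // How long we are willing to wait before going asynchronous.
    private static readonly TimeSpan SyncWindow = TimeSpan.FromSeconds(5);

    public async Task<HttpResponseMessage> Post()
    {
        var jobId = Guid.NewGuid();
        Task<string> work = RunJobAsync(jobId);

        // Finished inside the window: return the result synchronously.
        if (await Task.WhenAny(work, Task.Delay(SyncWindow)) == work)
        {
            return Request.CreateResponse(HttpStatusCode.OK, await work);
        }

        // Timed out: 202 Accepted plus the continuation URL to poll.
        var response = Request.CreateResponse(HttpStatusCode.Accepted);
        response.Headers.Location = new Uri(Request.RequestUri, "/api/jobs/" + jobId);
        return response;
    }

    private static async Task<string> RunJobAsync(Guid jobId)
    {
        // Stand-in for the real work; on the slow path the result would be
        // persisted under jobId so the continuation URL can serve it later.
        await Task.Delay(TimeSpan.FromSeconds(10));
        return "done";
    }
}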
Does this help you?

WCF Rest Asynchronous Calling Methods

I have a class library I developed that is rather processing intensive that I currently call through a WCF REST service.
The REST service directly accesses the DLLs of the class library; more or less, the WCF REST service is an interface to the system.
Let's say the following methods are defined:
Create Request: Starts a thread that takes five minutes, but immediately returns a session ID that the process generates; the thread uses it to report to the database when it has completed.
Check Status: Accepts a session ID and checks the database to see whether the process has completed.
I have to think that there is a better way to "manage" the running threads; however, my requirements state that the user should receive an immediate response from the REST service upon issuing a request.
I am using the WCF Message property to return XML to the browser, and since this application can be called from any programming language, I can't use classic WCF callbacks (I think; correct me if I am wrong).
Sometimes I run into an issue where an error occurs and the IsComplete flag never gets written to the database, so the "Check Status" method says it's processing forever.
Does anyone have any ideas about what is normally done and what can be done in this situation?
Thanks!
Jeffrey Kevin Pry
Your service should return a 202 Accepted to the initial request, with a way for the client to check the current status, either through the Location header or as part of the content.
As you indicate, the client then polls the indicated URL to check the current status. I would also suggest adding a bit of cache time to this response in case a client just starts looping.
How you handle things on the server is up to you and is in no way related to REST. For one thing, I would put all logic that executes on the background thread in a try/catch so you can return an error status if an error occurs, and possibly retry the action depending on the circumstances.
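For the try/catch point, a minimal sketch where RequestProcessor and ISessionStore are assumed names standing in for your DLL entry point and database access:

using System;
using System.Threading.Tasks;

public class RequestProcessor
{
    private readonly ISessionStore _store;

    public RequestProcessor(ISessionStore store) { _store = store; }

    public Guid StartRequest()
    {
        var sessionId = Guid.NewGuid();
        _store.SetStatus(sessionId, "Processing");

        Task.Run(() =>
        {
            try
            {
                DoExpensiveWork(sessionId); // the five-minute job
                _store.SetStatus(sessionId, "Complete");
            }
            catch (Exception ex)
            {
                // Record the failure so polling clients see a terminal
                // state instead of "processing forever".
                _store.SetStatus(sessionId, "Failed: " + ex.Message);
            }
        });

        return sessionId; // returned to the caller immediately
    }

    private void DoExpensiveWork(Guid sessionId) { /* ... */ }
}

public interface ISessionStore
{
    void SetStatus(Guid sessionId, string status);
}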
I implemented a similar process for importing/processing large files and, to be honest, I have never had a problem. Perhaps resolving the reason that IsComplete never gets set will make this more resilient.
Not much of an answer, but still..

IHttpHandler vs IHttpModule

My question is simple (although the answer will most likely not be): I'm trying to decide how to implement a server side upload handler in C# / ASP.NET.
I've used both HttpModules (IHttpModule interface) and HttpHandlers (IHttpHandler interface) and it occurs to me that I could implement this using either mechanism. It also occurs to me that I don't understand the differences between the two.
So my question is this: in what cases would I choose to use IHttpHandler instead of IHttpModule (and vice versa)?
Is one executed much higher in the pipeline? Is one much easier to configure in certain situations? Does one not work well with medium security?
An ASP.NET HTTP handler is the process (frequently referred to as the "endpoint") that runs in response to a request made to an ASP.NET Web application. The most common handler is an ASP.NET page handler that processes .aspx files. When users request an .aspx file, the request is processed by the page through the page handler. You can create your own HTTP handlers that render custom output to the browser.
Typical uses for custom HTTP handlers include the following:
RSS feeds: To create an RSS feed for a Web site, you can create a handler that emits RSS-formatted XML. You can then bind a file name extension such as .rss to the custom handler. When users send a request to your site that ends in .rss, ASP.NET calls your handler to process the request.
Image server: If you want a Web application to serve images in a variety of sizes, you can write a custom handler to resize images and then send them to the user as the handler's response.
An HTTP module is an assembly that is called on every request that is made to your application. HTTP modules are called as part of the ASP.NET request pipeline and have access to life-cycle events throughout the request. HTTP modules let you examine incoming and outgoing requests and take action based on the request.
Typical uses for HTTP modules include the following:
Security: Because you can examine incoming requests, an HTTP module can perform custom authentication or other security checks before the requested page, XML Web service, or handler is called. In Internet Information Services (IIS) 7.0 running in Integrated mode, you can extend forms authentication to all content types in an application.
Statistics and logging: Because HTTP modules are called on every request, you can gather request statistics and log information in a centralized module, instead of in individual pages.
Custom headers or footers: Because you can modify the outgoing response, you can insert content such as custom header information into every page or XML Web service response.
From: http://msdn.microsoft.com/en-us/library/bb398986.aspx
As stated here, HttpModules are simple classes that plug themselves into the request-processing pipeline, whereas HttpHandlers differ from HttpModules not only in their position in the request-processing pipeline, but also because they must be mapped to specific file extensions.
IHttpModule gives you much more control; you can basically control all of the traffic directed to your web application. IHttpHandler gives you less control (the traffic is filtered before it reaches your handler), but if that is sufficient for your needs, I see no reason to use IHttpModule.
Anyway, it's probably best to keep your custom logic in a separate class and then use that class from either the IHttpModule or the IHttpHandler. That way you don't really have to worry about choosing one or the other. In fact, you could create a class that implements both IHttpHandler and IHttpModule and then decide which to use via Web.config.
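A bare-bones sketch of that last idea, with one class implementing both interfaces so that Web.config registration alone decides how it participates (UploadComponent is an illustrative name; register it under either the modules or the handlers section):

using System.Web;

public class UploadComponent : IHttpModule, IHttpHandler
{
    // IHttpModule: runs on every request in the pipeline.
    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            // e.g. inspect app.Context.Request.ContentLength here.
        };
    }

    public void Dispose() { }

    // IHttpHandler: runs only for requests mapped to it.
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Upload endpoint");
    }
}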
15 Seconds has a nice small tutorial giving a practical example.
Modules are intended to handle events raised by the application before and after the request is actually processed by the handler. Handlers, on the other hand, aren't given the opportunity to subscribe to any application events; instead, they simply get their ProcessRequest method invoked to do the "main" work of processing a specific request.
Take a look at this documentation from Microsoft (about halfway down the page, in the "The request is processed by the HttpApplication pipeline" section):
http://msdn.microsoft.com/en-us/library/bb470252.aspx
You can see in step 15 where the handler gets its chance to execute. All of the events before and after that step are available for interception by modules, but not handlers.
Depending on what specific features you're trying to achieve, you could use either a handler or a module to implement an upload handler. You might even end up using both.
Something to consider might be to use an upload handler that's already written.
Here's a free and open source one:
http://www.brettle.com/neatupload
Here's a commercial one:
http://krystalware.com/Products/SlickUpload/
If you look at the documentation for NeatUpload, you'll see that it requires you to configure a module.
