HttpClient hanging in long-running service - c#

We have a service deployed in AKS that makes HTTP requests to an external API outside of the cluster. The path of the request is AKS -> API Management -> ExpressRoute -> API.
We have been encountering an issue where all requests from an instance of the service will hang when calling the external API. The requests never reach the API. Due to the timeouts that we have configured, the requests will be cancelled after 10 seconds. We most recently saw this on an instance of the service that had been running for over 30 days without any issues. Restarting the instance resolves the issue.
We use Microsoft.Extensions.Http (6.0.0) to register the API client class with the following code:
serviceCollection.AddHttpClient<IMyApiClient, MyApiClient>();
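For completeness, we have not changed any of the factory defaults; if we were to make the handler lifetime explicit, we believe the registration would look roughly like this (the two-minute value is just the documented Microsoft.Extensions.Http default, shown for illustration):
serviceCollection.AddHttpClient<IMyApiClient, MyApiClient>()
    // Handlers (and the connections they pool) are recycled after this interval;
    // two minutes is the library default, we have not overridden it.
    .SetHandlerLifetime(TimeSpan.FromMinutes(2));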
Our API client looks like this:
public class MyApiClient : IMyApiClient
{
    private readonly HttpClient _httpClient;

    public MyApiClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task CallApi(CancellationToken ctx)
    {
        await _httpClient.GetAsync("api-uri", ctx);
    }
}
Obviously there is complexity in the hops between AKS and the API; however, I first want to understand whether we are doing anything wrong with HttpClient. As far as I can tell, we are following the recommended usage.
We are unable to reproduce this issue locally, and we have no additional logs: the requests simply take a long time and are then cancelled, so there are no actual errors.

Related

ClientConnectionFailure at forward-request

I have an Angular Web Application, that is backed by a C# Web Api, which facilitates speaking to an Azure Function App.
A rough example flow is as follows:
Angular Web App (press download with selected parameters) -> send GET request to API Management Service
API Management Service makes call to a C# Web Api
C# Web Api then responds back to the APIM, which in turn calls an Azure Function App to further process
data from an external source
Once a csv is ready, the data payload is downloaded in the browser where the Web App is open
For larger payloads, the download request fails with the following error in Application Insights:
"ClientConnectionFailure at forward-request"
This error occurs at exactly 2 minutes, every time, unless the payload is sufficiently small.
This led me to believe that the Function App, which I understand as the client in this situation, is timing out and cancelling the request.
But testing a GET with the exact same parameters through a local instance of the Azure Function App using Postman, the payload is successfully retrieved.
So the issue isn't the Azure Function App, because it did not time out in Postman the way it does when using the WebApp.
This leads me to three different possibilities:
The C# WebApi is timing out and cancelling the request before the APIM can respond in full
The WebApp itself is timing out.
The internet browser (Chrome) is timing out. (Chrome has a hard, unchangeable timeout of 5 minutes, so this is unlikely.)
#1. To tackle the first option, I increased the timeout of the HttpClient created in the relevant download action:
public async Task<HttpResponseMessage> DownloadIt(blah)
{
    HttpClient client = getHttpClient();
    client.Timeout = TimeSpan.FromMilliseconds(600000); // 10 minutes
    var request = new HttpRequestMessage(HttpMethod.Get, buildQueryString(blah, client.BaseAddress));
    return await client.SendAsync(request);
}

private HttpClient getHttpClient()
{
    return _httpClientFactory.CreateClient("blah");
}
This had no effect as the same error was observed.
#2. There are a couple of Timeout properties in the protractor.conf.js, like allScriptsTimeout and defaultTimeoutInterval.
Increasing these had no effect.
There is one last possibility: that APIM itself is timing out. However, looking into the APIM policy for the relevant API, there is no forward-request element with a timeout, which according to Microsoft means that by default there is no timeout on the APIM side.
https://learn.microsoft.com/en-us/azure/api-management/api-management-advanced-policies
I've tried a few different strategies but to no avail.
Indeed there is a timeout: ClientConnectionFailure indicates that the client closed its connection to API Management (APIM) before APIM returned a response, in this case while APIM was still forwarding the request to the backend (forward-request).
To debug this kind of issue, the best approach is to collect an APIM inspector trace to inspect request processing inside the APIM pipeline, paying attention to the time spent in each section of the request: Inbound, Backend, Outbound. The section where the most time is spent is probably the culprit (or its dependencies). Hopefully this helps you track down the problem.
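As a sketch of how one might request such a trace from code (this assumes the classic Ocp-Apim-Trace mechanism and that tracing is allowed for the subscription; the URL and key are placeholders):
// Hypothetical example: ask APIM to record an inspector trace for this call.
// The trace location comes back in the Ocp-Apim-Trace-Location response header.
using var client = new HttpClient();
using var request = new HttpRequestMessage(HttpMethod.Get, "https://my-apim.azure-api.net/my-api/operation");
request.Headers.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");
request.Headers.Add("Ocp-Apim-Trace", "true");

var response = await client.SendAsync(request);
Console.WriteLine(response.Headers.TryGetValues("Ocp-Apim-Trace-Location", out var values)
    ? string.Join(",", values)
    : "No trace location returned");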
You can explicitly set a forward-request timeout on the entire function app or on a single endpoint, such as:
<backend>
    <forward-request timeout="1800" />
</backend>
where the timeout is in seconds (1800 seconds = 30 minutes here)
To do this in APIM:
1. Go to your APIM instance.
2. Select APIs.
3. Select your function app.
4. Click on the Code icon </> under Inbound Processing.
Alternatively, if you want to do this for just a single operation/endpoint, click on the individual operation/endpoint before performing step 4. A full policy example follows below.
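For context, a minimal sketch of where that element sits in the full policy document (the <base /> elements simply inherit the parent scope's policies):
<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <forward-request timeout="1800" />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>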
After testing each component of the solution locally (outside Azure): the web app (front end), the web API, and the function app (backend), it is clear that the issue was caused by Azure itself, namely the default 4-minute Idle Timeout on the Azure Load Balancer.
I double-checked by timing the requests that failed and always got 4 minutes.
The backend sends all of the requests together; for larger data sets this caused it to hit the load balancer's idle timeout.
It looks like the load balancer timeout is configurable, but this doesn't look like something I will be able to change.
So the solution: write more efficient/better code in the backend.
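As a rough sketch of what "better code" could mean here (the method and names are entirely hypothetical; the idea is just to keep each round trip well under the 4-minute idle timeout instead of waiting on everything at once):
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical sketch: fetch the work in small batches so partial results flow back
// well before the load balancer's 4-minute idle timeout.
private static async Task DownloadInBatchesAsync(HttpClient httpClient, IReadOnlyList<Uri> uris, Stream output)
{
    const int batchSize = 50;
    for (int i = 0; i < uris.Count; i += batchSize)
    {
        var batch = uris.Skip(i).Take(batchSize)
                        .Select(uri => httpClient.GetByteArrayAsync(uri));
        foreach (var payload in await Task.WhenAll(batch))
        {
            // Write each partial result as it arrives instead of buffering the full payload.
            await output.WriteAsync(payload, 0, payload.Length);
        }
    }
}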

Asp.net Azure SignalR service on a distributed system

I have a web app running on Asp.net 4.7.2. This app contains a few REST endpoints that are queried and then delegate work off to a windows service that sits on another machine.
When the work on this service is complete, I want to use SignalR to send a message to the client to indicate that the work is done. Because this service is on another machine, this is proving quite tricky.
I have tried using an Azure SignalR Service as an abstracted level above this. My intention was to use the REST capabilities to call this Azure service, have it run code from a Hub (which I have currently defined and created in the web app) and then broadcast a message to the clients.
At the moment, I am not sure if this is possible. The docs say that the REST support is only available in the ASP.NET Core version of the library. I have made calls to the endpoint and received an Accepted response, but no luck.
My question is, then, how do I get the following architecture working under my conditions? If I cannot, what other suggestions do you have?
Machine 1:
Windows service running bespoke code that takes an unpredictable length of time
When code completes, send message to SignalR hub via azure service
Machine 2:
Webapp containing the SignalR hub definitions and client logic.
You can actually accomplish this without the Azure SignalR Service, using just your existing web app. The key is to create an endpoint for your Windows service to call that is linked to your hub.
So if I create a simple SignalR hub in .NET 4.x:
public class NotificationHub : Hub
{
    // Broadcasts the message to every connected client's addNewMessageToPage handler.
    public void Send(string message)
    {
        Clients.All.addNewMessageToPage(message);
    }
}
I can access it in a WebApi controller:
public class NotificationController : ApiController
{
    private readonly IHubContext context;

    public NotificationController()
    {
        context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
    }

    // GET api/notification?message=...
    public HttpResponseMessage Get(string message)
    {
        // Invoke the client-side method directly on all connected clients.
        context.Clients.All.addNewMessageToPage(message);
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
Using a REST endpoint is going to be simplest in frameworks like MVC and WebApi, but it is possible to add a messaging service in between like Service Bus if you need to do more than simply return a message to the clients.
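On the Windows service side, the callback into that endpoint is then just a plain HTTP request; a minimal sketch (the host name and route are placeholders for wherever your web app exposes the controller above):
// Hypothetical notifier used by the Windows service once its long-running work completes.
private static readonly HttpClient NotificationClient = new HttpClient();

public static async Task NotifyClientsAsync(string message)
{
    // Calls the web app's NotificationController, which broadcasts via the hub.
    var url = "https://my-web-app.example.com/api/notification?message=" + Uri.EscapeDataString(message);
    var response = await NotificationClient.GetAsync(url);
    response.EnsureSuccessStatusCode();
}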

Update configuration and restart self-hosted ASP.NET Web API 2 on HTTP request

I have a self-hosted OWIN web app running ASP.NET Web API 2.2, which is executed as a Windows service. I want to achieve a remote configuration scenario: from a client, I want to make an HTTP request, and in the called Web API action I want to update a configuration file (e.g. to change a DB connection string) and then restart the service (so that the new configuration is loaded). However, here is the tricky part:
If I simply call Environment.Exit, the response won't finish and the client won't get the message that it worked. Is there any way to write to the response stream and finish it before restarting the service from within a Web API action? Or should I use something else for this, maybe a filter? Or is there any other way that you would rather suggest?
I am not interested in any security discussion - the service is only available in an intranet and the corresponding Web API action is secured with authentication and authorization.
Thanks for your help in advance.
It looks like a hack, but it works:
[HttpGet]
public async Task<IHttpActionResult> ActionMethod()
{
    var httpActionResult = Ok(); // do the real work here instead

    // Exit on a separate thread after a short delay so the response can complete first.
    new Thread(() =>
    {
        Thread.Sleep(500);
        Environment.Exit(1);
    }).Start();

    return httpActionResult;
}
I'm not sure, but if you exit with an exit code != 0, the service control manager will think that your service crashed and will try to restart it (if you set that up in the service's recovery settings).
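For reference, those recovery settings can also be configured on the service itself, for example with sc.exe (the service name is a placeholder; this restarts the service 5 seconds after a failure and resets the failure counter after a day):
sc failure "MyWindowsService" reset= 86400 actions= restart/5000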

HttpClient caching in UWP apps

I own both WebApi server (asp.net core app) and the client (UWP app).
I call the WebApi services using HttpClient from the UWP app.
Some resources are readonly and therefore can be cached:
[ResponseCache(Duration = 60*60*12, VaryByQueryKeys = new[] { "id" }, Location = ResponseCacheLocation.Client)]
[HttpGet("{id}")]
public IActionResult Get(string id) { ... }
Is it possible to enable caching in HttpClient in UWP app or do I have to do it on my own?
The UWP HttpClient (Windows.Web.Http) already has (good) caching built in. If the default caching behavior is not enough, you can further control the cache through the HttpCacheControl class, which separates read and write behavior.
More important is knowing how to use your HttpClient: even though it implements IDisposable, you should NOT dispose it, but keep a single HttpClient object alive for your whole application; it's designed for re-use.
What HttpClient doesn't do is return the cached result while you are disconnected. For that, there are other libraries such as Akavache, which creates an offline key-value store.
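If the defaults ever do get in the way, a minimal sketch of tweaking the cache behaviour (this assumes the Windows.Web.Http.HttpClient rather than System.Net.Http; the read/write behaviours shown are only examples):
using Windows.Web.Http;
using Windows.Web.Http.Filters;

public static async Task<string> GetWithExplicitCachingAsync(Uri uri)
{
    var filter = new HttpBaseProtocolFilter();
    // Prefer the cache but check the server for a more recent version.
    filter.CacheControl.ReadBehavior = HttpCacheReadBehavior.MostRecent;
    // Allow responses to be written to the cache (the default).
    filter.CacheControl.WriteBehavior = HttpCacheWriteBehavior.Default;

    var client = new HttpClient(filter);
    return await client.GetStringAsync(uri);
}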

HttpClient connection issue to WebAPI hosted using Asp.NetCore TestServer

In the unit test cases, TestServer is used for in-memory hosting of the Web API. After this, we are trying to make an HTTP connection to this hosted Web API using code like this:
HttpClient client = new HttpClient();
client.BaseAddress = new Uri("url");
var result = client.GetAsync("url").Result;
This code gives an exception, with the error message
"Connection refused".
But if we get the HttpClient object using the below code and follow the above steps, it works fine.
TestServer.CreateClient()
What could be the reason behind this? Is it because it's an in-memory hosted Web API, so no actual HTTP server is listening?
That's by design. When you call TestServer.CreateClient(), a client with a special HttpMessageHandler is created. That handler calls the API under test directly, without exposing it as an HTTP server:
public class TestServer : IServer, IDisposable
{
    public HttpClient CreateClient()
    {
        HttpClient httpClient = new HttpClient(this.CreateHandler());
        ...
    }
}
That should be faster and suitable enough for unit-testing. For integration testing, run a Kestrel server at test startup.
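A minimal sketch of that in-memory pattern (Startup and the route are placeholders for your own application):
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;

// The returned client routes requests straight into the in-memory pipeline;
// no socket is opened, so an ordinary HttpClient has nothing to connect to.
using (var server = new TestServer(new WebHostBuilder().UseStartup<Startup>()))
using (var client = server.CreateClient())
{
    var response = await client.GetAsync("/api/values");
    response.EnsureSuccessStatusCode();
}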
P.S. I hope your application design became better with DI instead of explicit construction. It's nice that you resolved the problem this way.
