ClientConnectionFailure at forward-request - c#

I have an Angular web application backed by a C# Web API, which facilitates speaking to an Azure Function App.
A rough example flow is the following:
Angular Web App (press download with selected parameters) -> sends a GET request to the API Management Service
The API Management Service makes a call to the C# Web API
The C# Web API responds back to APIM, which in turn calls an Azure Function App to further process data from an external source
Once a CSV is ready, the data payload is downloaded in the browser where the web app is open
For larger payloads, the download request fails with the following error in Application Insights:
"ClientConnectionFailure at forward-request"
This error occurs at exactly 2 minutes, every time, unless the payload is sufficiently small.
This led me to believe that the Function App, which I understand to be the client in this situation, is timing out and cancelling the request.
But when testing a GET with the exact same parameters against a local instance of the Azure Function App using Postman, the payload is successfully retrieved.
So the issue isn't the Azure Function App itself, because it did not time out in Postman the way it does when going through the web app.
This leads me to three different possibilities:
1. The C# Web API is timing out and cancelling the request before APIM can respond in full.
2. The web app itself is timing out.
3. The internet browser (Chrome) is timing out. (Chrome has a hard, unchangeable timeout of 5 minutes, so this is unlikely.)
#1. To tackle the first option, I increased the timeout of the HttpClient created in the relevant download action:
public async Task<HttpResponseMessage> DownloadIt(blah)
{
    HttpClient client = getHttpClient();
    client.Timeout = TimeSpan.FromMinutes(10); // 10 minutes
    var request = new HttpRequestMessage(HttpMethod.Get, buildQueryString(blah, client.BaseAddress));
    return await client.SendAsync(request);
}
private HttpClient getHttpClient()
{
    return _httpClientFactory.CreateClient("blah");
}
This had no effect as the same error was observed.
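For reference, the same timeout can also be applied once where the named client is registered, rather than per call. A minimal sketch, assuming an ASP.NET Core Startup.ConfigureServices and that "blah" is the named client from the snippet above (the base address is a placeholder):

services.AddHttpClient("blah", client =>
{
    client.BaseAddress = new Uri("https://example.com/"); // placeholder
    client.Timeout = TimeSpan.FromMinutes(10);            // applies to every client the factory hands out
});

Either way, this timeout only governs the HttpClient itself; it cannot extend a limit enforced further along the chain (APIM or a load balancer).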
#2. There are a couple of timeout properties in protractor.conf.js, such as allScriptsTimeout and defaultTimeoutInterval.
Increasing these had no effect.
** There is one last possibility: the APIM itself is timing out. But looking into the APIM policy for the relevant API, there is no forward-request element with a timeout attribute, which, according to Microsoft, means there is no timeout for APIM by default.
https://learn.microsoft.com/en-us/azure/api-management/api-management-advanced-policies
I've tried a few different strategies but to no avail.

Indeed there's a timeout: ClientConnectionFailure indicates that the client closed the connection with API Management (APIM) while APIM had yet to return a response to it (the client), in this case while it was forwarding the request to the backend (forward-request).
To debug this kind of issue, the best approach is to collect an APIM inspector trace to inspect request processing inside the APIM pipeline, paying attention to the time spent on each section of the request: Inbound, Backend, Outbound. The section where the most time is spent is probably the culprit (or its dependencies). Hopefully this helps you track down the problem.

You can explicitly set a forward-request timeout on the entire function app or on a single endpoint, such as:
<backend>
    <forward-request timeout="1800" />
</backend>
where the time is in seconds (1800 / 60 = 30 minutes here)
To do this in APIM:
1. Go to your APIM instance
2. Open APIs
3. Select your function app
4. Click on the Code icon </> under Inbound Processing
Alternatively, if you want to do this for just a single operation/endpoint, click on that individual operation/endpoint before performing step 4. A fuller policy sketch follows below.
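For orientation, here is roughly where the element sits in the policy document; a sketch assuming otherwise-default policy sections:

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <forward-request timeout="1800" />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>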

After testing each component of the solution locally (outside Azure), the web app (front end), the Web API, and the function app (backend), it became clear that the issue was caused by Azure itself, namely the default 4-minute Idle Timeout at the Azure Load Balancer.
I double-checked by timing the requests that failed, and they always failed at 4 minutes.
The backend sends the whole result in one go, and for larger data sets this caused the request to sit idle long enough to hit the load balancer's timeout.
The load balancer timeout is configurable, but it doesn't look like something I will be able to change.
So the solution: write more efficient/better code in the backend. One possible shape of that is sketched below.
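One hedged way to keep a long-running CSV download from idling out is to stream rows to the client as they are produced, so bytes keep flowing on the connection while the rest of the payload is still being generated. A minimal ASP.NET Web API sketch, assuming a hypothetical GetRows() data source; this illustrates the idea, not the poster's actual fix:

using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class DownloadController : ApiController
{
    [HttpGet]
    public HttpResponseMessage StreamCsv()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK);
        // PushStreamContent writes to the response as rows become available,
        // so the connection never sits idle while the full payload is built.
        response.Content = new PushStreamContent(async (stream, content, context) =>
        {
            using (var writer = new StreamWriter(stream))
            {
                foreach (var row in GetRows()) // hypothetical data source
                {
                    await writer.WriteLineAsync(row);
                    await writer.FlushAsync(); // push bytes out immediately
                }
            }
        });
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/csv");
        return response;
    }

    private static IEnumerable<string> GetRows()
    {
        yield return "id,value"; // placeholder rows
        yield return "1,example";
    }
}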

Related

Upload large file - 502 bad gateway error on Azure App Service

I have a problem when I upload large files on Azure. I am working on an ASP.NET Core 5.0 API project.
I have implemented the functionality following Microsoft's recommendation. Moreover, I added a polling mechanism so the frontend application has another endpoint to check the upload status.
Everything works fine when I run locally, but I have a problem with large files on Azure. My API is using an Azure App Service Premium P1v3 plan. It returns a 502 Bad Gateway for large files (above 1 GB).
I ran some tests and 98% of the time is consumed by reading the stream. From the Microsoft docs it is:
if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
{
    untrustedFileNameForStorage = contentDisposition.FileName.Value;
    // Don't trust the file name sent by the client. To display
    // the file name, HTML-encode the value.
    trustedFileNameForDisplay = WebUtility.HtmlEncode(
        contentDisposition.FileName.Value);
    streamedFileContent =
        await FileHelpers.ProcessStreamedFile(section, contentDisposition,
            ModelState, _permittedExtensions, _fileSizeLimit);
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
}
I know there is a load balancer timeout of 230 seconds on Azure App Service, but when I test using Postman, in most cases the 502 is returned after about 30 seconds.
Maybe I need to set some configuration feature on Azure App Service? Always On is enabled.
I would like to stay with Azure App Service, but I was also thinking about migrating, or about allowing the frontend application to upload files directly to Azure Blob Storage.
Do you have any idea how to solve it?
Uploading and Downloading large files in ASP.NET Core 3.1?
The previous answers are based only on using App Service storage, but it is not recommended to store large files in the App Service itself: first, future updates will become slower and slower, and second, the disk space will soon be used up.
So it is recommended to use Azure Storage. If you use Azure Storage, Suggestion 2 below is recommended for larger files, uploading them in chunks.
Please first confirm whether the large file can actually be transferred successfully even when a 500 error is returned.
I have studied this phenomenon before: each browser behaves differently, and the 500 error appears roughly between 230s and 300s. But looking through the logs, the program continues to run.
Related Post:
The request timed out. The web server failed to respond within the specified time
So there are two suggestions I can give; you can refer to them:
Suggestion 1:
It is recommended to create an HTTP endpoint (say getStatus) in your program to report file upload progress, similar to a progress bar. When the file transfer starts, the upload endpoint returns HttpCode 201 (accepted), and the status value is then obtained through getStatus; when it reaches 100%, it returns success. A minimal sketch follows below.
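A minimal sketch of that pattern, assuming ASP.NET Core and an in-memory progress store; all names (UploadController, getStatus, _progress) are made up:

using System;
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/upload")]
public class UploadController : ControllerBase
{
    // Progress per upload id; a real app might use a cache or database.
    private static readonly ConcurrentDictionary<string, int> _progress = new ConcurrentDictionary<string, int>();

    [HttpPost]
    public IActionResult StartUpload()
    {
        var id = Guid.NewGuid().ToString();
        _progress[id] = 0;
        // Kick off the actual read/processing in the background, updating
        // _progress[id] as bytes are consumed (omitted here).
        return Accepted(new { id }); // 202 here; any 2xx "upload started" signal works
    }

    [HttpGet("getStatus/{id}")]
    public IActionResult GetStatus(string id) =>
        _progress.TryGetValue(id, out var pct) ? Ok(new { percent = pct }) : (IActionResult)NotFound();
}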
Suggestion 2:
Use MultipartRequestHelper to cut/slice the large file; your usage may be wrong. Please refer to the post below.
Dealing with large file uploads on ASP.NET Core 1.0
The .NET Core version there is different, but the idea is the same. A hedged sketch of chunked upload to Blob Storage follows below.
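Tying the two ideas together (Azure Storage plus chunking), here is a sketch of uploading a large file in 4 MB blocks with the Azure.Storage.Blobs SDK; the connection string, container, and file names are placeholders:

using System;
using System.Collections.Generic;
using System.IO;
using Azure.Storage.Blobs.Specialized;

// Stage the file as fixed-size blocks, then commit them as one blob.
var blob = new BlockBlobClient("<connection-string>", "uploads", "large-file.bin");
var blockIds = new List<string>();
using (var file = File.OpenRead("large-file.bin"))
{
    var buffer = new byte[4 * 1024 * 1024];
    int read, index = 0;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Block ids must be base64 strings of equal length.
        var id = Convert.ToBase64String(BitConverter.GetBytes(index++));
        blob.StageBlock(id, new MemoryStream(buffer, 0, read));
        blockIds.Add(id);
    }
}
blob.CommitBlockList(blockIds); // assembles the staged blocks into the final blob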
Facing a similar issue when uploading documents of larger size (up to 100 MB) through an ASP.NET Core API hosted behind an Azure application gateway; I have set its timeout to 10 minutes and applied these attributes on the action:
[RequestFormLimits(MultipartBodyLengthLimit = 209715200)]
[RequestSizeLimit(209715200)]
Even Kestrel has been configured to accept 200 MB:
UseKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 209715200; // 200 MB
    options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(10);
});
The file content is in base64 format in the request object.
Any help on this problem would be appreciated.
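Since the document travels as base64 inside the JSON body, note that it inflates by roughly a third, so a 100 MB file arrives as ~133 MB of request. One more limit that can reject large bodies early when the app sits behind IIS (as on App Service) is IIS request filtering, which defaults to about 30 MB. A hedged web.config sketch, values illustrative:

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes; 209715200 = 200 MB -->
        <requestLimits maxAllowedContentLength="209715200" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>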

HttpClient.SendAsync call results in multiple triggers on service side, returns internal server error

I have an ASP.NET Web API deployed on Azure App Service.
I am experiencing the following error: for one specific business object my Web API's GET method returns an internal server error, while for other business objects the same GET method works fine.
When I debugged my Web API it turned out that a valid business object is returned, but the GET method was triggered multiple times (while on the client side I can see it is called only once).
This is an excerpt where the Web API is called from client code:
// Create HTTP transport objects
HttpRequestMessage httpRequest = new HttpRequestMessage();
httpRequest.Method = HttpMethod.Get;
httpRequest.RequestUri = new Uri(url);

// Set credentials
if (this.Credentials != null)
{
    cancellationToken.ThrowIfCancellationRequested();
    await this.Credentials.ProcessHttpRequestAsync(httpRequest, cancellationToken).ConfigureAwait(false);
}

HttpResponseMessage httpResponse = await this.HttpClient.SendAsync(httpRequest, cancellationToken).ConfigureAwait(false);
Besides, if I try to open that same URL from a browser (e.g. https://myservice.com/api/businessObject/xxx), the request is performed only once (as it should be) and the correct JSON result is displayed in the browser.
Any suggestions on what to try to figure out why the call from the client side (for this specific object) results in multiple Web API service executions, and how to fix this?
My Web API service derives from System.Web.Http.ApiController.
I got some information from the exception, but it doesn't seem to be very helpful:
Exception thrown: 'Microsoft.Rest.TransientFaultHandling.HttpRequestWithStatusException' in Microsoft.Rest.ClientRuntime.dll The thread 0x27fc has exited with code 0 (0x0)
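The HttpRequestWithStatusException coming from Microsoft.Rest suggests the generated client's transient-fault retry handler may be re-sending the request after a failed response, which would explain one client-side call turning into several server-side executions. As a hedged experiment, assuming the client derives from Microsoft.Rest.ServiceClient<T>, retries can be disabled to verify this:

using Microsoft.Rest.TransientFaultHandling;

// With a retry count of 0, a failing response surfaces immediately
// instead of being retried (and re-triggering the GET on the server).
client.SetRetryPolicy(new RetryPolicy<HttpStatusCodeErrorDetectionStrategy>(0));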
EDIT
I got some information from the Azure Log stream, but that information does not seem to make sense, because this problem happens for one specific business object (and only when requested from my application, not from a web browser); other business objects work fine, so I don't see how this could be related to access / the web.config file...
IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
IIS was not able to process configuration for the Web site or application.
The authenticated user does not have permission to use this DLL.
..
Ensure that the NTFS permissions for the web.config file are correct and allow access to the Web server's machine account.
Check the event logs to see if any additional information was logged.
Verify the permissions for the DLL.
Install the .NET Extensibility feature if the request is mapped to a managed handler.
Create a tracing rule to track failed requests for this HTTP status code

No response from server when POST request to asp.net web api reaches some limit

I am sending JSON data to a Web API. Everything works locally with IIS Express. However, there is an issue on the remote server, which has IIS 8 installed. I get no response when the request payload reaches a certain size; the response stays in pending status (no empty response, no errors). The limit varies a bit between controllers/functions, but is still pretty low, around 2.5-3.5 KB. It feels like the controller's method is not hit at all in such cases (to check this, I put a return statement right at the beginning of the method, like the following).
[HttpPost]
public async Task<IHttpActionResult> UpdateCity(int id, CityDto cityDto)
{
    return StatusCode(HttpStatusCode.NoContent);
}
I tried to set request limits as mentioned here https://stackoverflow.com/a/3853785/3507404 without luck.
My set up:
1) Amazon EC2
2) Windows Server 2012 r2
3) IIS 8
4) Net 4.5
I do not see any related info in IIS logs.
How can I debug this issue? Are there any other limits that I should set? Can it be related to Amazon EC2?
Sounds like a deadlock issue. Your method is marked as async but you aren't awaiting anything. Either keep async and await something, or remove async and return Task.FromResult<IHttpActionResult>(StatusCode(HttpStatusCode.NoContent)); see the sketch below.
Give this a try and let me know. This would explain the random failure times, as the deadlock would only occur under certain situations.
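A hedged sketch of that second option, using the method from the question:

[HttpPost]
public Task<IHttpActionResult> UpdateCity(int id, CityDto cityDto)
{
    // No async/await: wrap the result in an already-completed Task.
    return Task.FromResult<IHttpActionResult>(StatusCode(HttpStatusCode.NoContent));
}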
It turned out that this issue was not related to either IIS or ASP.NET. It was caused by either my internet provider or my Wi-Fi router (or both). It looks like some POST requests (at least POST; I did not check others) never reached my remote server at all when I was using the router (D-Link DIR-625). However, it may not be just the router; I suspect some configuration on my internet provider's side does not work well with it.

Facebook Graph API 403 Forbidden Error

This is similar to some questions on here, but none have produced an answer that has helped me. I'm calling the Graph API from a C#/.NET application to get photos for a particular album, and I'm receiving a 403 error... sometimes.
I've never received the error in my development environment, only in production. I'm also caching the responses for an hour, so the most the application would hit the API in a given hour would be around 20 times, and not all at once. I'm currently swallowing the exception when it errors out and simply not showing the images, but that isn't a long-term solution.
var request = WebRequest.Create("https://graph.facebook.com/ALBUM_ID/photos");
var stream = request.GetResponse().GetResponseStream();
This just started happening about a month ago but I didn't see anything in the breaking changes list that would suggest this behavior. Any insight would be appreciated.
Update
This was hidden away in the response stream.
{"error":{"message":"(#4) Application request limit
reached","type":"OAuthException","code":4}}
I don't see for the life of me how I could be hitting a limit, considering I'm only hitting the API a few times.
If you make a GET request to one of the FB Graph API endpoints that does not require an access_token, that does not mean you should not include one as a request parameter. If you do as the FB documentation says and omit the access_token, then on FB's side the request is registered against your server machine, so the limit (whatever amount it is exactly) can be reached very easily. If, however, you put a user access token into the request (&access_token=XXXXXX), the requests are registered against that specific user, so the limit will hardly ever be reached. You can test it with a simple script that makes 1000 requests with and without a user access_token.
NOTE: an FB app access token will not be sufficient, as you will face the same problem; requests are registered against the app access_token, which is akin to making requests without an access_token.
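Applied to the snippet from the question, that just means appending the token to the query string; a sketch where USER_ACCESS_TOKEN is a placeholder:

// Same call as in the question, but now attributed to a user rather than the server.
var request = WebRequest.Create(
    "https://graph.facebook.com/ALBUM_ID/photos?access_token=USER_ACCESS_TOKEN");
using (var stream = request.GetResponse().GetResponseStream())
{
    // read/deserialize the photo list as before
}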

How to track performance of mvc4 web api rest call?

I have an ASP.NET MVC4 Web API (REST) interface that is being called by numerous clients. Basically I serve up content per certain params:
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
My question: it gets hit about 50K times a day, and I am noticing timeouts and slow response times every now and then.
ASK: How can I include metrics of how long the request took to process (internal to my code) as well as how long it sat in the IIS queue? I'm not sure if the latency is in my code or in IIS.
I'd like to add them back within the response somehow:
<StuffPayload>
    <Stuff id="1" url="http://myserv.x.com/img/1/" />
    <Response time="100ms" IIStime="10ms" MyServerCodeTime="90ms" />
</StuffPayload>
First check what your method is doing: if there's any SQL/file operation, ensure you create/dispose all resources correctly.
You could write custom action filters for logging so you have a reusable piece of code for all your tracing. You can then add additional content to the response within the OnActionExecuted method, as sketched below.
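A minimal sketch of such a filter for Web API (System.Web.Http), timing the action with a Stopwatch and surfacing it as a response header; the header name is made up:

using System.Diagnostics;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class TimingFilterAttribute : ActionFilterAttribute
{
    private const string Key = "__timer";

    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Start the clock when the action is entered.
        actionContext.Request.Properties[Key] = Stopwatch.StartNew();
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        if (actionExecutedContext.Request.Properties.TryGetValue(Key, out var value)
            && value is Stopwatch timer && actionExecutedContext.Response != null)
        {
            timer.Stop();
            // X-Server-Time is a made-up header; time spent queued in IIS before
            // ASP.NET picked the request up is not captured by this filter.
            actionExecutedContext.Response.Headers.Add(
                "X-Server-Time", timer.ElapsedMilliseconds + "ms");
        }
    }
}

It can then be registered globally, e.g. config.Filters.Add(new TimingFilterAttribute()), so every action reports its server-side time; the IIS queue time itself would need IIS/HTTPERR logs or ETW tracing.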
