Uploading to Azure WebApp throws `ConnectionResetException: The client has disconnected` - c#

I'm working on a .NET Core 3.1 webapp using C#.
I use Blazor ServerSide as my front-end. The app is hosted by Azure.
On my page I have an upload component. Uploading 1 file works fine, and 2-3 files still work, but when I upload more files I get this error:
(Serilog.AspNetCore.RequestLoggingMiddleware) HTTP "POST" "/api/foo/2/bar" responded 500 in 18501.3265 ms
Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected
---> System.Runtime.InteropServices.COMException (0x800704CD): An operation was attempted on a nonexistent network connection. (0x800704CD)
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.IIS.Core.IO.AsyncIOOperation.GetResult(Int16 token)
Most likely it has something to do with the file size of the POST.
When running my webapp in Visual Studio I don't have any problems.
I also see this warning in my logging:
(Microsoft.AspNetCore.Server.IIS.Core.IISHttpServer) Increasing the MaxRequestBodySize conflicts with the max value for IIS limit maxAllowedContentLength. HTTP requests that have a content length greater than maxAllowedContentLength will still be rejected by IIS. You can disable the limit by either removing or setting the maxAllowedContentLength value to a higher limit.
I've tried numerous options to increase the limit:
In my controller:
[Authorize]
[DisableRequestSizeLimit]
public class UploadController : BaseApiController
In Startup.cs:
// To bypass the file limit:
services.Configure<FormOptions>(options =>
{
    options.ValueLengthLimit = int.MaxValue;
    options.MultipartBodyLengthLimit = long.MaxValue; // <-- !!! long.MaxValue
    options.MultipartBoundaryLengthLimit = int.MaxValue;
    options.MultipartHeadersCountLimit = int.MaxValue;
    options.MultipartHeadersLengthLimit = int.MaxValue;
});
services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodySize = null;
});
But still no luck.
Am I missing yet another setting or is this error not related to the upload size?

It took me weeks to solve this together with help from Microsoft Support.
It appears the problem was only with a few clients.
I had changed the WebApp setting in Azure to use HTTP/2 instead of HTTP/1.1.
It looks like one hop in the connection from my browser to Azure was dropping the HTTP/2 packets. When I reverted the setting, the problem went away.

Related

Multiple servers generating error 500 in HANGFIRE

When I put the application into production on a pod-managed Kubernetes architecture that can scale out, so that today two servers are running the same application, Hangfire recognizes both but the dashboard returns an error 500:
Unable to refresh the statistics: the server responded with 500 (error). Try reloading the page manually, or wait for automatic reload that will happen in a minute.
But on staging, the test environment where there is only one server, Hangfire works normally.
Hangfire Configuration:
Startup.cs
services.AddHangfire(x => x.UsePostgreSqlStorage(Configuration.GetConnectionString("DefaultConnection")));
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] { new AuthorizationFilterHangfire() }
});
app.UseHangfireServer();
You can now add IgnoreAntiforgeryToken to your dashboard options, which should resolve this issue.
According to this GitHub post, the issue occurred when you had multiple servers running the dashboard: due to load balancing, when your request went to a different server than the one that originally served you the page, you'd see the error.
Adding IgnoreAntiforgeryToken = true to the dashboard options should resolve the issue.
Excerpt taken from here:
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] {new HangfireAuthFilter()},
IgnoreAntiforgeryToken = true // <--This
});
We solved this issue by persisting our DataProtectionKeys in Redis for all replicas to read from.
services.AddDataProtection().PersistKeysToStackExchangeRedis(redis, $"{yourprefix}::DataProtectionKeys");
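For context, a minimal sketch of the full wiring (this assumes the StackExchange.Redis and Microsoft.AspNetCore.DataProtection.StackExchangeRedis packages; the connection string name and key prefix are illustrative, not from the original answer):
// In Startup.ConfigureServices
// using Microsoft.AspNetCore.DataProtection;
// using StackExchange.Redis;
var redis = ConnectionMultiplexer.Connect(Configuration.GetConnectionString("Redis"));
services.AddDataProtection()
    .PersistKeysToStackExchangeRedis(redis, "MyApp::DataProtectionKeys");
With the key ring shared across replicas, an antiforgery token issued by one pod can be validated by another, so the dashboard stops failing behind the load balancer.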

An internal response header size limit was exceeded

This error occurs when I perform a redirect to another part of the site in question.
return this.RedirectToActionPermanent("Index", "Dashboard");
The code does not step into the Index method in the Dashboard controller before the error occurs.
I think the Fiddler response header posted is not the internal response header referred to in the error message.
Could anyone explain how best to increase the size limit of the internal response header?
Or, Are we looking at needing to compress the internal response header as a solution?
For your information, the redirect call above was working day in, day out for about 3 months until it broke about 5 days ago.
[Edit 1]
The Program.cs file now looks like:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureKestrel((context, options) =>
        {
            options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(5);
            options.Limits.MaxResponseBufferSize = 1024000;
            options.Limits.Http2.HeaderTableSize = 16384;
        });
However, the same error is still occurring.
[Edit 2 18.03.19 2:28am]
The problem was caused by a User Claim not being filled in (Identity Table) on one of the profile users. I filled the data in manually in SSMS and the error stopped occurring.
TIA.
David
I have also faced a similar issue. For me it was caused by a bug in WinHTTP which existed a long time ago (.NET Core 2.0). When a redirect response occurs, WinHTTP prefers to reuse the same connection to follow the redirect.
The number of response bytes drained is controlled by WinHttpHandler.MaxResponseDrainSize.
When the response size exceeds MaxResponseDrainSize, the request is aborted with the specified error.
The issue disappeared when I upgraded to 2.1.
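If upgrading is not an option, a rough sketch of working around it by raising the drain size on the handler (this assumes the System.Net.Http.WinHttpHandler package; the URL is a placeholder):
// using System;
// using System.Net.Http;
// (inside an async method)
var handler = new WinHttpHandler
{
    // Allow larger responses to be drained before the connection is reused for the redirect.
    MaxResponseDrainSize = 1024 * 1024
};
using (var client = new HttpClient(handler))
{
    var response = await client.GetAsync("https://example.com/endpoint-that-redirects");
    Console.WriteLine(response.StatusCode);
}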
Update: Timeout Issue
The setting you added, KeepAliveTimeout = TimeSpan.FromMinutes(5);, applies to Kestrel but not to IIS.
There is a reverse proxy set up between IIS and Kestrel, so you raised the Kestrel timeout to 5 minutes, but IIS still sits at its default timeout, which is 120 seconds. You can change the timeout in
IIS -> Sites -> Select your Site -> Advanced Settings -> Limits -> Connection Time Out.
Try changing this setting to check whether the error still happens.
There is also another setting available in Configuration Editor.

Azure Blob Storage .NET client request timeout

I'm trying to understand the behavior of handling network errors in the Azure Storage .NET client. In short, my issue is:
If I pull my network cable while I'm downloading a blob from blob storage, my application will hang for at least 30 minutes (this is how long my patience lasted - it probably hangs longer).
For example, this happens if I use the following code (I have not configured any settings on the blob client itself).
...
var blockBlob = container.GetBlockBlobReference("myblob.data");
var blobRequestOptions = new BlobRequestOptions()
{
    RetryPolicy = new NoRetry(),
};
using (var stream = new MemoryStream())
{
    blockBlob.DownloadToStream(stream, null, blobRequestOptions);
}
I know that I can configure the MaximumExecutionTime property in BlobRequestOptions, but it seems a bit strange to me that the default behavior is to hang indefinitely if there's a drop in network connectivity. This makes me suspect that I'm missing something basic about how the client is supposed to be used. (The default value for MaximumExecutionTime appears to be infinite.)
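For reference, a hedged sketch of what that looks like on the options object from the snippet above (the 90-second value is arbitrary, chosen only to illustrate a bounded wait):
var blobRequestOptions = new BlobRequestOptions()
{
    RetryPolicy = new NoRetry(),
    // Cap the total time for the call so it fails instead of hanging on a dead connection.
    MaximumExecutionTime = TimeSpan.FromSeconds(90)
};
using (var stream = new MemoryStream())
{
    blockBlob.DownloadToStream(stream, null, blobRequestOptions);
}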
I also know I can pass in a ServerTimeout, but my understanding is that this is used internally in the Azure Storage service and wouldn't be applicable if there's a network drop.
What I think I'm looking for specifically is a per-request timeout for the HTTP calls made to blob storage. Something like Timeout on a HttpWebRequest.
(I've reproduced my issue in the Azure Storage Client version 9.3.2)
From my understanding of the SDK, the timeout is handled by default on the server side. I did not find anything regarding this on MSDN, but the Azure Java SDK (using the same HTTP endpoints) says:
The default maximum execution is set in the client and is by default null, indicating no maximum time.
You can check it here : https://azure.github.io/azure-storage-java/index.html?com/microsoft/azure/storage/RequestOptions.html
Look for the setMaximumExecutionTimeInMs method.
Since the timeouts seem to be handled by the server and the client has no default timeout value, it makes sense that your request never ends when you unplug the cable: you will never be able to catch the server-side timeout.
I found that the Storage SDK team had indeed acknowledged and addressed this bug in v8.1.3, as seen in the change log:
https://github.com/Azure/azure-storage-net/blob/dfc88329b56ef022e38f2d39d709ddc2b41fe6a0/Common/changelog.txt
Changes in 8.1.3 :
- Blobs (Desktop) : Fixed a bug where the MaximumExecutionTime was not honored, leading to infinite wait, if due to a failure, e.g., a network failure after receiving the response headers, server stopped sending partial response.
commit: https://github.com/Azure/azure-storage-net/pull/459/commits/ad8fd6ad3cdfad77cfe23afe16f1f96c04ad90ee
However, your claim is that you can reproduce this in 9.3.2. I, too, am seeing this issue with 11.1.1. I'm thinking that the bug was not fully addressed.

Getting timeouts while deleting files from Microsoft Azure

I'm trying to delete the backup files (which are older than 30 days) from Microsoft Azure using C# code, but unfortunately I'm getting timeout issues. For the error message, please see the "Error code" below. Can anyone please help me with that?
Please see the code below:
Microsoft.WindowsAzure.Storage.StorageException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
If you could locate this issue and confirm that the error is thrown at container.ListBlobs, I assume that you could set BlobRequestOptions.ServerTimeout to increase the server timeout for your request. Also, you could leverage BlobRequestOptions.RetryPolicy (LinearRetry or ExponentialRetry) to enable retries when a request fails. Here is the code snippet you could refer to:
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    ServerTimeout = TimeSpan.FromMinutes(5)
});
or
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    // the server timeout interval for the request
    ServerTimeout = TimeSpan.FromMinutes(5),
    // the maximum execution time across all potential retries for the request
    MaximumExecutionTime = TimeSpan.FromMinutes(15),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3) // retry 3 times
});
Additionally, you could leverage ListBlobsSegmented to list blobs in pages. For more details, you could refer to the "List blobs in pages asynchronously" section in this official tutorial.
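As a rough illustration of the paged approach combined with the original goal of deleting blobs older than 30 days (a sketch only, using the classic WindowsAzure.Storage client; the page size and timeout values are arbitrary):
// using System.Linq;
// using Microsoft.WindowsAzure.Storage.Blob;
// (inside an async method)
BlobContinuationToken continuationToken = null;
do
{
    var segment = await container.ListBlobsSegmentedAsync(
        prefix: null,
        useFlatBlobListing: true,
        blobListingDetails: BlobListingDetails.None,
        maxResults: 500, // list in pages of 500 instead of one huge call
        currentToken: continuationToken,
        options: new BlobRequestOptions { ServerTimeout = TimeSpan.FromMinutes(5) },
        operationContext: null);

    foreach (var blob in segment.Results.OfType<CloudBlockBlob>())
    {
        // Delete backups older than 30 days, as in the question.
        if (blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-30))
            await blob.DeleteIfExistsAsync();
    }

    continuationToken = segment.ContinuationToken;
} while (continuationToken != null);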

Azure ASP.NET WebApp: The request timed out

I have deployed an ASP.NET MVC web app to Azure App Service.
I do a GET request from my site to a controller method which gets data from the DB (DbContext). Sometimes getting the data may take more than 4 minutes, which means my request produces no response for more than 4 minutes. After that Azure kills the connection and I get the message:
500 - The request timed out. The web server failed
to respond within the specified time.
This is a method example:
[HttpGet]
public async Task<JsonResult> LongGet(string testString)
{
    var task = Task.Delay(360000);
    await task;
    return Json("Woke", JsonRequestBehavior.AllowGet);
}
I have seen a lot of questions like this, but I got no answer:
Not working 1
Can't give the other link - reputation is too low.
I have read this article - it's about the Azure Load Balancer, which is not available for web apps, but it says the common way of handling my problem in an Azure web app is using TCP keep-alive. So I changed my method:
[HttpGet]
public async Task<JsonResult> LongPost(string testString)
{
    ServicePointManager.SetTcpKeepAlive(true, 1000, 5000);
    ServicePointManager.MaxServicePointIdleTime = 400000;
    ServicePointManager.FindServicePoint(Request.Url).MaxIdleTime = 4000000;
    var task = Task.Delay(360000);
    await task;
    return Json("Woke", JsonRequestBehavior.AllowGet);
}
But I still get the same error.
I am using a simple GET request like:
GET /Home/LongPost?testString="abc" HTTP/1.1
Host: longgetrequest.azurewebsites.net
Cache-Control: no-cache
Postman-Token: bde0d996-8cf3-2b3f-20cd-d704016b29c6
So I am looking for an answer: what am I doing wrong, and how do I increase the request timeout in an Azure web app? Any help is appreciated.
Azure setting on portal:
Web sockets - On
Always On - On
App settings:
SCM_COMMAND_IDLE_TIMEOUT = 3600
WEBSITE_NODE_DEFAULT_VERSION = 4.2.3
230 seconds. That's it. That's the in-flight request timeout in Azure App Service. It's hardcoded in the platform, so TCP keep-alives or not, you're still bound by it.
Source -- see David Ebbo's answer here:
https://social.msdn.microsoft.com/Forums/en-US/17305ddc-07b2-436c-881b-286d1744c98f/503-errors-with-large-pdf-file?forum=windowsazurewebsitespreview
There is a 230 second (i.e. a little less than 4 mins) timeout for requests that are not sending any data back. After that, the client gets the 500 you saw, even though in reality the request is allowed to continue server side.
Without knowing more about your application it's difficult to suggest a different approach. However, what's clear is that you do need a different approach.
Maybe return a 202 Accepted instead, with a Location header to poll for the result later? (A rough sketch follows.)
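A sketch of that pattern in ASP.NET MVC 5, matching the question's controller style (the in-memory result store, the controller and action names, and the simulated long-running work are illustrative placeholders, not production code):
// using System;
// using System.Collections.Concurrent;
// using System.Net;
// using System.Threading.Tasks;
// using System.Web.Hosting;
// using System.Web.Mvc;
public class JobsController : Controller
{
    private static readonly ConcurrentDictionary<string, string> Results =
        new ConcurrentDictionary<string, string>();

    [HttpPost]
    public ActionResult StartLongJob(string testString)
    {
        var jobId = Guid.NewGuid().ToString("N");
        HostingEnvironment.QueueBackgroundWorkItem(async ct =>
        {
            await Task.Delay(TimeSpan.FromMinutes(6), ct); // stand-in for the slow DB query
            Results[jobId] = "Woke";
        });

        // Respond immediately, well under the 230-second limit, and tell the client where to poll.
        Response.AddHeader("Location", Url.Action("JobStatus", new { id = jobId }));
        return new HttpStatusCodeResult(HttpStatusCode.Accepted);
    }

    [HttpGet]
    public ActionResult JobStatus(string id)
    {
        string result;
        if (Results.TryGetValue(id, out result))
            return Json(result, JsonRequestBehavior.AllowGet); // finished
        return new HttpStatusCodeResult(HttpStatusCode.Accepted); // still running, poll again
    }
}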
I just changed my Azure Web Site from the Shared environment to Standard, and it works.
