This error occurs when I perform a redirect to another part of the site in question.
return this.RedirectToActionPermanent("Index", "Dashboard");
The code does not step into the Index method in the Dashboard controller before the error occurs.
I think the Fiddler response header posted is not the internal response header referred to in the error message.
Could anyone explain how best to increase the size limit of the internal response header?
Or are we looking at needing to compress the internal response header as a solution?
For your information, the redirect call above was working day in, day out for about three months until it broke about five days ago.
[Edit 1]
The program.cs file is now looking like:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureKestrel((context, options) =>
        {
            options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(5);
            options.Limits.MaxResponseBufferSize = 1024000;
            options.Limits.Http2.HeaderTableSize = 16384;
        });
However, the same error is still occurring.
[Edit 2 18.03.19 2:28am]
The problem was caused by a User Claim not being filled in (Identity table) for one of the profile users. I filled the data in manually in SSMS and the error stopped occurring.
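For anyone hitting the same thing who would rather backfill the missing claim in code than in SSMS, here is a minimal sketch using ASP.NET Core Identity (the claim type "FullName", the user name, and the injected UserManager<IdentityUser> are all assumptions, not the actual schema from my app):

// Requires: using System.Linq; using System.Security.Claims;
// userManager is an injected UserManager<IdentityUser>; the names below are placeholders.
var user = await userManager.FindByNameAsync("affected.user@example.com");
var claims = await userManager.GetClaimsAsync(user);
if (!claims.Any(c => c.Type == "FullName"))
{
    await userManager.AddClaimAsync(user, new Claim("FullName", "Jane Doe"));
}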
TIA.
David
I have also faced a similar issue. In my case it was caused by a bug in WinHTTP from a while back (.NET Core 2.0). When a redirect response occurs, WinHTTP prefers to reuse the same connection to follow the redirect.
The number of response bytes drained is controlled by WinHttpHandler.MaxResponseDrainSize.
When the response size exceeds MaxResponseDrainSize, the request is aborted with the specified error.
The issue disappeared when I upgraded to .NET Core 2.1.
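For anyone who can't move off 2.0 right away, raising the drain limit on the handler may work around it; a minimal sketch (the 1 MB value is an assumption, sized to exceed the redirect responses involved):

// Requires the System.Net.Http.WinHttpHandler package.
// Sketch: let WinHTTP drain larger redirect responses before reusing the
// connection, instead of aborting the request.
var handler = new WinHttpHandler
{
    MaxResponseDrainSize = 1024 * 1024 // bytes; assumed value, default is 64 KB
};
var client = new HttpClient(handler);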
Update: Timeout Issue
The KeepAliveTimeout = TimeSpan.FromMinutes(5); setting you added applies to Kestrel, but not to IIS.
There is a reverse proxy between IIS and Kestrel, so although you raised Kestrel's timeout to 5 minutes, IIS still sits at its default timeout of 120 seconds. You can change that timeout in
IIS -> Sites -> Select your Site -> Advanced Settings -> Limits -> Connection Time Out.
Try changing this setting to check whether the error still happens.
There is also a related setting available in the IIS Configuration Editor.
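If the app is hosted out-of-process behind the ASP.NET Core Module, the same 120-second default also appears as the module's requestTimeout in web.config; a sketch of raising it (the processPath/arguments values are placeholders, and the 5-minute value just mirrors the Kestrel setting):

<aspNetCore processPath="dotnet" arguments=".\MyApp.dll" requestTimeout="00:05:00" />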
Related
I'm working on a .NET Core 3.1 webapp using C#.
I use Blazor ServerSide as my front-end. The app is hosted by Azure.
On my page I have an upload component. When I upload 1 file it works fine, and 2-3 files still work, but when I upload more files than that I get this error:
(Serilog.AspNetCore.RequestLoggingMiddleware) HTTP "POST" "/api/foo/2/bar" responded 500 in 18501.3265 ms
Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected
---> System.Runtime.InteropServices.COMException (0x800704CD): An operation was attempted on a nonexistent network connection. (0x800704CD)
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.IIS.Core.IO.AsyncIOOperation.GetResult(Int16 token)
Most likely it has something to do with the file size of the POST.
When running my webapp in Visual Studio I don't have any problems.
I also see this warning in my logging:
(Microsoft.AspNetCore.Server.IIS.Core.IISHttpServer) Increasing the MaxRequestBodySize conflicts with the max value for IIS limit maxAllowedContentLength. HTTP requests that have a content length greater than maxAllowedContentLength will still be rejected by IIS. You can disable the limit by either removing or setting the maxAllowedContentLength value to a higher limit.
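For reference, that maxAllowedContentLength limit lives under requestFiltering in web.config; raising it would look something like this (the 200 MB value is just an example):

<system.webServer>
  <security>
    <requestFiltering>
      <!-- value is in bytes; 209715200 = 200 MB (example value) -->
      <requestLimits maxAllowedContentLength="209715200" />
    </requestFiltering>
  </security>
</system.webServer>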
I've tried numerous options to increase the limit:
In my controller:
[Authorize]
[DisableRequestSizeLimit]
public class UploadController : BaseApiController
In Startup.cs:
// To bypass the file limit:
services.Configure<FormOptions>(options =>
{
    options.ValueLengthLimit = int.MaxValue;
    options.MultipartBodyLengthLimit = long.MaxValue; // <-- !!! long.MaxValue
    options.MultipartBoundaryLengthLimit = int.MaxValue;
    options.MultipartHeadersCountLimit = int.MaxValue;
    options.MultipartHeadersLengthLimit = int.MaxValue;
});
services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodySize = null;
});
But still no luck.
Am I missing yet another setting or is this error not related to the upload size?
It took me weeks to solve this together with help from Microsoft Support.
It appears the problem was only with a few clients.
I had changed the WebApp setting in Azure to use HTTP v2 instead of HTTP v1, and it looks like one hop in the connection from my browser to Azure was dropping the HTTP v2 packets. When I reset the setting, the problem went away.
Background
I have an API that uses the ADOMD.Net client to retrieve data from an Azure Analysis Services (AAS) Model. A connection is created using a connection string that contains an access token, obtained as documented here.
Problem
The API works fine until the AAS Model is paused and resumed, which happens every night. After this, the Model returns a 307 Temporary Redirect response.
The remote server returned an error: (307) Temporary Redirect.
Technical Details:
RootActivityId: 4d85e2d9-e1ec-406d-92aa-3f3e33ac4ed4
Date (UTC): 3/9/2020 1:09:05 PM
The location in the header is:
https://asazureweu10-westeurope.asazure.windows.net/webapi/xmla
The original request (but made by the ADOMD.Net client, not me) is made to:
https://asazureweu5-westeurope.asazure.windows.net/webapi/xmla
After I restart the API, the client starts responding with the data and functions correctly. This leads me to believe something is being cached, and the cache is cleared once the API is restarted. Is this a connection pooling issue?
Does anyone know what is happening behind the scenes, and why?
This is a verified bug which has been reported to MS.
The MS support engineer said:
The client library indeed caches the cluster information in the
AppDomain-level cache, which has a timeout of 60 minutes. But there is
invalidation code that invalidates the cache on error codes >= 300
and <= 399. In this case, the error is 307, so the invalidation logic
should still work, as long as the connection is closed and then opened
again.
This bug is due to be patched in a late August/early September NuGet release.
Update: This bug has now been patched in the NuGet package microsoft.AnalysisServices.AdomdClient.NetCore.retail.amd64, version 19.9.0.1-Preview.
The code snippet I'm using is as below.
// Catch any redirection errors and retry. Due to the internal workings of the library,
// if the model is scaled up or down, or paused and restarted, it will come back on a
// different cluster, so we need to get a new connection.
catch (AdomdConnectionException ex)
{
    var statusCode = (int)((HttpWebResponse)((WebException)ex.InnerException).Response).StatusCode;
    if (statusCode >= 300 && statusCode <= 399)
    {
        aasConnection = await GetOpenAasConnectionAsync(customData, role);
    }
    else throw;
}
We have a self-hosted Apple Enterprise Developer based iOS app that was released around six weeks ago. We're working on the next version of the app and are seeing problems when people update to the new test version. An HttpRequestException is thrown with a "Connection reset by peer" message whenever calls over HTTPS that include a bearer token are made. The call stack shown in the MS App Center error report provides little info to go on.
Main thread
System.Net.Sockets
Socket.Receive (System.Byte[] buffer, System.Int32 offset, System.Int32 size, System.Net.Sockets.SocketFlags socketFlags)
System.Net.Sockets
NetworkStream.Read (System.Byte[] buffer, System.Int32 offset, System.Int32 size)
The initial web request to sign the user in appears to be working. It is the only request that doesn't include a bearer token in the headers. All subsequent requests in a given session cause this exception to be thrown.
We have found that users can resolve the error by removing the currently released version of the app from their phone before installing the new version. The error no longer occurs in the new version once that is done. We've also received a couple of reports of users getting the error even without upgrading the version of the app. It seems to occur for those users after not having used the app in a while.
We have tried to reproduce the error by following the same steps of installing the new test version on top of the released version on our devices and it works fine.
Our project manager has the issue on her personal device. She has been testing new versions without upgrading to see if the exception is no longer thrown. We have tried the following to resolve the problem without success:
Removing all local caching of previous web request results and bearer tokens, forcing live requests at all times
Switching from the iOS NSUrlSession implementation of HttpClient back to the default Xamarin managed implementation
Verifying the HttpClient base URI and individual request URIs are what we expect
Verifying the bearer token value has been assigned to the HttpClient Authorization default headers when the calls are made
Has anyone had this happen to them, or maybe can shed some light on possible causes of this exception being thrown? Thanks a ton for any help provided. It is driving us crazy!
Here's an example of one of the request calls. The HttpClient property is a static instance that is used for the lifetime of the app. The SaveContext property is a bool that is used to enable/disable context saving to test performing the requests in background threads without saving the context. Both options have been tried and had no impact on the error occurring. The HttpClient BaseAddress property has the API uri root assigned to it. The uri passed to GetAsync is a relative uri.
HttpResponseMessage serverResponse = await HttpClient.GetAsync(uri).ConfigureAwait(SaveContext);
Here's more of the setup:
var handler = new HttpClientHandler
{
    AllowAutoRedirect = true,
    AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip,
    MaxAutomaticRedirections = 20
};
var httpClient = new HttpClient(handler);
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(JsonSerializerHelper.JsonEncoding));
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
We've been having issues sending certain Docusign envelopes lately, specifically those with large file sizes.
The errors we've been getting are:
Error calling CreateEnvelope: The operation has timed out
And
The request was aborted: The request was canceled.
No inner exception with any additional information in either case.
These errors only occur on our production server; on my local development machine everything works fine, so I can only assume this is a connectivity issue, i.e. there simply isn't enough time to send the supplied data over the available connection before something times out. What I would like to know is: what is that something? Are these errors coming from my end, or Docusign's? If the former, is there any way to increase the timeout? I've got my HTTP execution timeout set to 300 seconds:
<httpRuntime maxRequestLength="30000" requestValidationMode="4.0" executionTimeout="300" targetFramework="4.5" />
... but that doesn't seem to affect anything; it always seems to time out at the default of 1 minute 50 seconds.
Is there anything more I can do to prevent these requests from timing out?
Thanks,
Adam
Our issue has been resolved. The timeouts were indeed being caused by something on our end: there is a "Timeout" property which can be set on the EnvelopesApi object before sending; it can also be passed into the constructor when declared. So our fix was as simple as:
EnvelopesApi envelopesApi = new EnvelopesApi();
envelopesApi.Configuration.Timeout = DocusignTimeout;
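For the constructor form mentioned above, something like the following should be equivalent (a sketch; the Configuration overload and the milliseconds unit are assumptions based on the eSign client version we're on):

// Sketch: set the timeout up front via Configuration instead of after construction.
// DocusignTimeout is our own value in milliseconds (e.g. 300000 for 5 minutes).
var config = new DocuSign.eSign.Client.Configuration { Timeout = DocusignTimeout };
EnvelopesApi envelopesApi = new EnvelopesApi(config);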
The crux of our issue was that the Timeout property was not exposed in older versions of eSign. We had upgraded to 2.1.0 (the current version) earlier this week, but something must not have taken, as the metadata still showed our DocuSign.eSign.Client.Configuration class at version 15.4.0.0. Uninstalling and reinstalling the eSign and RestSharp packages from NuGet gave us the correct version of this class and enabled us to set our own timeout.
Hope this is helpful!
I've got an Owin self-hosted Web API server, and I'm wondering whether I need to change timeout settings when there are huge file downloads.
The client I'm using reads the response with HttpCompletionOption.ResponseHeadersRead.
During debugging, after I stopped for some time in a breakpoint, I got an exception on client side while trying to read from a received stream:
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
While debugging I can reproduce this issue. It happens after waiting around 30 seconds in a breakpoint, after the GET request to the server has returned.
Is this due to some kind of idle timeout because I'm holding in a breakpoint and not working on the received stream? Or can it also happen while I'm reading from the stream, if my connection is slow and it takes too long?
Very old question, but this may help whoever hits the same wall.
I had the same problem with streaming content and found the initial clue inside the HTTPERR folder (C:\Windows\System32\LogFiles\HTTPERR):
2016-08-12 09:17:52 ::1%0 60095 ::1%0 8000 HTTP/1.1 GET /endpoint/audiostream/0/0/streamer.mp3 - - - Timer_MinBytesPerSecond -
2016-08-12 09:18:19 ::1%0 60118 ::1%0 8000 HTTP/1.1 GET /endpoint/audiostream/0/0/streamer.mp3 - - - Request_Cancelled -
Owin's HttpListener has a TimeoutManager property that allows you to change most timeouts/limits. The only way I found to get my web app's HttpListener instance was by accessing the app properties:
var listener = (OwinHttpListener)app.Properties["Microsoft.Owin.Host.HttpListener.OwinHttpListener"];
listener.Listener.TimeoutManager.MinSendBytesPerSecond = uint.MaxValue;
According to the Owin codebase, setting MinSendBytesPerSecond to uint.MaxValue simply disables the limit.