I'm trying to delete backup files (older than 30 days) from Microsoft Azure using C# code, but unfortunately I'm getting timeout issues. Can anyone please help me with that?
The error message is:
Microsoft.WindowsAzure.Storage.StorageException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
If you can confirm that the error is thrown at container.ListBlobs, I would assume you could set BlobRequestOptions.ServerTimeout to increase the server timeout for your request. You could also leverage BlobRequestOptions.RetryPolicy (LinearRetry or ExponentialRetry) to enable retries when a request fails. Here is a code snippet you can refer to:
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    ServerTimeout = TimeSpan.FromMinutes(5)
});
or
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    // the server timeout interval for the request
    ServerTimeout = TimeSpan.FromMinutes(5),
    // the maximum execution time across all potential retries for the request
    MaximumExecutionTime = TimeSpan.FromMinutes(15),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3) // retry 3 times
});
Additionally, you could leverage ListBlobsSegmented to list blobs in pages. For more details, you could refer to the "List blobs in pages asynchronously" section in the official tutorial.
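A minimal sketch of that approach, combined with the 30-day cleanup from the question (this assumes the same container object and the classic Microsoft.WindowsAzure.Storage SDK used above; it needs the Microsoft.WindowsAzure.Storage.Blob, Microsoft.WindowsAzure.Storage.RetryPolicies and System.Linq namespaces):
var options = new BlobRequestOptions
{
    ServerTimeout = TimeSpan.FromMinutes(5),
    MaximumExecutionTime = TimeSpan.FromMinutes(15),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3)
};
var cutoff = DateTimeOffset.UtcNow.AddDays(-30);
BlobContinuationToken continuationToken = null;
do
{
    // List at most 500 blobs per round trip instead of enumerating the whole container at once.
    var segment = container.ListBlobsSegmented(
        null,                     // prefix
        true,                     // flat listing, so nested blobs are returned directly
        BlobListingDetails.None,
        500,                      // page size
        continuationToken,
        options,
        null);                    // OperationContext

    foreach (var blob in segment.Results.OfType<CloudBlob>())
    {
        // Delete blobs whose last-modified timestamp is older than 30 days.
        if (blob.Properties.LastModified.HasValue && blob.Properties.LastModified.Value < cutoff)
        {
            blob.DeleteIfExists();
        }
    }

    continuationToken = segment.ContinuationToken;
} while (continuationToken != null);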
I am creating a Xamarin app that contains a functionality to upload large videos to Azure Blob Storage.
Below is the code used.
BlobClientOptions blobClientOptions = new BlobClientOptions();
blobClientOptions.Retry.NetworkTimeout = Timeout.InfiniteTimeSpan;
BlobClient blobClient = new BlobClient("ConnectionString", "ContainerName", "fileName", blobClientOptions);
//previewVideo is an array of bytes
MemoryStream memoryStream = new MemoryStream(previewVideo);
await blobClient.UploadAsync(memoryStream);
Sometimes I am getting the below error.
Retry failed after 6 tries. Retry settings can be adjusted in ClientOptions.Retry
(The operation was cancelled because it exceeded the configured timeout of 0:01:40.
Network timeout can be adjusted in ClientOptions.Retry.NetworkTimeout.)
As shown in the code above, I am setting the NetworkTimeout but it is always timing out after 100 seconds.
As per the Azure SDK repository on GitHub, this was a known issue and it was fixed last month in the latest package update.
I updated the package and now the timeout property is working fine, so the problem is solved.
I'm working on a .NET Core 3.1 webapp using C#.
I use Blazor ServerSide as my front-end. The app is hosted by Azure.
On my page I have an upload component. When I upload one file it works fine. Uploading 2-3 files still works, but when I upload more files I get this error:
(Serilog.AspNetCore.RequestLoggingMiddleware) HTTP "POST" "/api/foo/2/bar" responded 500 in 18501.3265 ms
Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected
---> System.Runtime.InteropServices.COMException (0x800704CD): An operation was attempted on a nonexistent network connection. (0x800704CD)
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.IIS.Core.IO.AsyncIOOperation.GetResult(Int16 token)
Most likely it has something to do with the file size of the POST.
When running my webapp in Visual Studio I don't have any problems.
I also see this warning in my logging:
(Microsoft.AspNetCore.Server.IIS.Core.IISHttpServer) Increasing the MaxRequestBodySize conflicts with the max value for IIS limit maxAllowedContentLength. HTTP requests that have a content length greater than maxAllowedContentLength will still be rejected by IIS. You can disable the limit by either removing or setting the maxAllowedContentLength value to a higher limit.
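For reference, the maxAllowedContentLength limit that this warning refers to is an IIS setting that lives in web.config rather than in code; a minimal sketch, with an illustrative 200 MB value:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- value is in bytes; 209715200 bytes = 200 MB (illustrative) -->
        <requestLimits maxAllowedContentLength="209715200" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>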
I've tried numerous options to increase the limit:
In my controller:
[Authorize]
[DisableRequestSizeLimit]
public class UploadController : BaseApiController
In Startup.cs:
// To bypass the file limit:
services.Configure<FormOptions>(options =>
{
    options.ValueLengthLimit = int.MaxValue;
    options.MultipartBodyLengthLimit = long.MaxValue; // <-- !!! long.MaxValue
    options.MultipartBoundaryLengthLimit = int.MaxValue;
    options.MultipartHeadersCountLimit = int.MaxValue;
    options.MultipartHeadersLengthLimit = int.MaxValue;
});
services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodySize = null;
});
But still no luck.
Am I missing yet another setting or is this error not related to the upload size?
It took me weeks to solve this together with help from Microsoft Support.
It appears the problem was only with a few clients.
I had changed the Web App setting in Azure to use HTTP/2 instead of HTTP/1.1.
It looks like one hop in the connection from my browser to Azure was dropping the HTTP/2 packets. When I reverted the setting, the problem went away.
I am logging some data to a Gen1 Azure Datalake Store, using the Microsoft.Azure.DataLake.Store driver.
I am authenticating and creating a client like so:
var adlCreds = await ApplicationTokenProvider.LoginSilentAsync(tenant, clientId, secret);
var adlClient = AdlsClient.CreateClient(dlUrl, adlCreds);
And then writing to a file using ConcurrentAppendAsync like so (file name just an example):
var textBytes = Encoding.UTF8.GetBytes(appendText);
await adlClient.ConcurrentAppendAsync("test/myFile.json", true, textBytes, 0, textBytes.Length);
This works most of the time; however, I am seeing intermittent failures logged with the error:
CONCURRENTAPPEND failed with Unknown Error: The underlying
connection was closed: A connection that was expected to be kept alive
was closed by the server.
When this occurs the concurrent append fails and the data is not saved.
The service this is running in is on .NET Framework 4.6.1. I am caching and reusing the credentials object (for 30 mins) and the AdlsClient (for 5 mins) which as far as I can tell is OK, so I'm not sure what the issue is.
One possible problem could be the use of "Create" and "ConcurrentAppend" on the same file. The ADLS documentation mentions that they can't be used on the same file. If you are using them together, try changing the "Create" call to "ConcurrentAppend", as the latter can also create the file if it doesn't exist.
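As an illustration only (the question does not show a Create call, so the commented-out line is an assumption about how the file might currently be created): drop the separate create step and let ConcurrentAppendAsync create the file on first write.
// Hypothetical: if the file is currently created with something like
//   await adlClient.CreateFileAsync("test/myFile.json", IfExists.Overwrite);
// remove that call and rely on ConcurrentAppendAsync alone.
var textBytes = Encoding.UTF8.GetBytes(appendText);
// The second argument (autoCreateFile: true) creates "test/myFile.json" if it does not exist,
// so no separate Create/CreateFile call is needed on this path.
await adlClient.ConcurrentAppendAsync("test/myFile.json", true, textBytes, 0, textBytes.Length);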
This error occurs when I perform a redirect to another part of the site in question.
return this.RedirectToActionPermanent("Index", "Dashboard");
The code does not step into the Index method in the Dashboard controller before the error occurs.
I think the Fiddler response header posted is not the internal response header mentioned in the error message.
Could anyone explain how best to increase the size limit of the internal response header?
Or, Are we looking at needing to compress the internal response header as a solution?
For your information the call to redirect above was working day in day out for about 3 months until it broke about 5 days ago.
[Edit 1]
The program.cs file is now looking like:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureKestrel((context, options) =>
        {
            options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(5);
            options.Limits.MaxResponseBufferSize = 1024000;
            options.Limits.Http2.HeaderTableSize = 16384;
        });
However, the same error is still occurring.
[Edit 2 18.03.19 2:28am]
The problem was caused by a User Claim not being filled in (Identity Table) on one of the profile users. I filled the data in manually in SSMS and the error stopped occurring.
TIA.
David
I have also faced a similar issue. For me, it was caused by a bug in WinHTTP from a while ago (.NET Core 2.0). When a redirect response occurs, WinHTTP prefers to reuse the same connection to follow the redirect.
The number of response bytes drained is controlled by WinHttpHandler.MaxResponseDrainSize.
When the response size exceeds MaxResponseDrainSize, the request is aborted with the specified error.
The issue disappeared when I upgraded to .NET Core 2.1.
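If you are stuck on the older runtime, a minimal sketch of raising that drain limit (the 1 MB value is illustrative; WinHttpHandler comes from the System.Net.Http.WinHttpHandler package):
var handler = new WinHttpHandler
{
    // Allow larger redirect responses to be drained instead of aborting the connection.
    MaxResponseDrainSize = 1024 * 1024
};
var httpClient = new HttpClient(handler);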
Update: Timeout Issue
The setting you added, KeepAliveTimeout = TimeSpan.FromMinutes(5);, applies to Kestrel but not to IIS.
There is a reverse proxy set up between IIS and Kestrel. You set the Kestrel timeout to 5 minutes, but IIS still sits at its default timeout, which is 120 seconds. You can change the timeout in
IIS -> Sites -> Select your site -> Advanced Settings -> Limits -> Connection Time-Out.
Try changing this setting to check whether the error still happens.
There is also a related setting available in the IIS Configuration Editor.
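For reference, the site-level Connection Time-Out above maps to the connectionTimeout attribute in applicationHost.config; a minimal sketch, with an illustrative 5-minute value:
<configuration>
  <system.applicationHost>
    <sites>
      <siteDefaults>
        <!-- the default is 00:02:00 (120 seconds); 00:05:00 is illustrative -->
        <limits connectionTimeout="00:05:00" />
      </siteDefaults>
    </sites>
  </system.applicationHost>
</configuration>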
Recently we've been having issues downloading envelope documents. We are getting the below exception after 6 minutes.
envelopesApi.GetDocument(accountId, envelopeId, documentId)
DocuSign.eSign.Client.ApiException: Error calling GetDocument: The operation has timed out.
The timeout has been set to 10 minutes as below:
var envelopesApi = new EnvelopesApi();
envelopesApi.Configuration.Timeout = 600000;
envelopesApi.Configuration.ApiClient.RestClient.Timeout = 600000;//also added this
But after receiving the error, retrying the same request through Postman succeeds.
Also, this error is intermittent.
Is there anything that we are missing?
Thanks,
Dula
Timeouts can occur for a variety of reasons, like internet latency and other TCP/IP issues on the way from DocuSign servers to your box. I would recommend that operations like retrieving large files are done in the background.
I would also suggest you update the SDK to the latest version, as some improvements were made in this area.
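For example, a minimal sketch of that suggestion (retry counts and delays are illustrative, and it assumes the envelopesApi, accountId, envelopeId and documentId variables from the question):
// Run the download off the request path and retry a couple of times on failures.
var documentStream = await Task.Run(async () =>
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return envelopesApi.GetDocument(accountId, envelopeId, documentId);
        }
        catch (DocuSign.eSign.Client.ApiException) when (attempt < maxAttempts)
        {
            // Back off briefly before the next attempt; the filter lets the final failure propagate.
            await Task.Delay(TimeSpan.FromSeconds(10 * attempt));
        }
    }
});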