DocuSign eSign: GetDocument request times out - C#

Recently we've been having issues downloading envelope documents. We get the below exception after 6 minutes.
envelopesApi.GetDocument(accountId, envelopeId, documentId)
DocuSign.eSign.Client.ApiException: Error calling GetDocument: The operation has timed out.
The timeout is set to 10 minutes, as below:
var envelopesApi = new EnvelopesApi();
envelopesApi.Configuration.Timeout = 600000;
envelopesApi.Configuration.ApiClient.RestClient.Timeout = 600000;//also added this
But after receiving the error, retrying the same request through Postman succeeds.
The error is also intermittent.
Is there anything we are missing?
Thanks,
Dula

Timeouts can occur for a variety of reasons, like internet latency and other TCP/IP issues on the way from DocuSign's servers to your box. I would recommend that operations like retrieving large files are done in the background.
I would also suggest updating the SDK to the latest version, as some improvements were made in this area.
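Since the failures are intermittent and a Postman retry succeeds, a simple retry loop around the call may also help. A minimal sketch, assuming the DocuSign.eSign SDK call from the question (the attempt count and delays are arbitrary choices, not SDK recommendations):

```csharp
using System;
using System.IO;
using System.Threading;

// Retry GetDocument a few times before giving up; intermittent network
// timeouts often succeed on a later attempt.
Stream document = null;
for (int attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        document = envelopesApi.GetDocument(accountId, envelopeId, documentId);
        break;
    }
    catch (DocuSign.eSign.Client.ApiException) when (attempt < 3)
    {
        // Back off a little longer after each failed attempt.
        Thread.Sleep(TimeSpan.FromSeconds(10 * attempt));
    }
}
```

Running this on a background thread or via Task.Run keeps the retries from blocking whatever is serving your users.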

Related

Microsoft Graph API - Exchange Online Messages call returns ServiceUnavailable

I am fetching messages from Exchange in Office365 using Microsoft Graph API.
However, for some folders I seem to get intermittent exceptions.
What we are using:
Microsoft.Graph Version 3.9.0 - Microsoft Graph Client Library for .Net
Microsoft.Graph.Core Version 1.21.0 - Microsoft Graph Core Client Library for .Net
This is the call being used:
'GET /v1.0/users/{id}/mailFolders/{id}/messages'
And this is the error (ServiceUnavailable with UnknownError as inner exception):
Status Code: ServiceUnavailable Microsoft.Graph.ServiceException:
Code: UnknownError Message: Error while processing response. Inner
error:
AdditionalData:
date: 2020-08-04T13:55:33
request-id: ** ClientRequestId: **
Code: UnknownError Message: Error while processing response. Inner error:
AdditionalData:
date: 2020-08-04T13:55:33
request-id: ** ClientRequestId: **
What I've tried:
Throttling:
These are usually the errors we would see with throttling. However, in this case there seems to be no indication of throttling being applied: there isn't any 'back-off' time returned in the result, and other requests to different folders return just fine. Applying our own 'back-off' time (ranging between 5 and 20 minutes) does not seem to make a difference either.
Beta endpoint:
The call posted above shows /v1.0 used. We've also switched to the /beta endpoint, with no difference.
Amount of mails retrieved:
Graph allows us to retrieve up to 999 mails at a time. We've reduced that all the way down to one or two mails at a time, but it still returns the same error.
Delta token:
We've also tried switching over to using the delta token in order to retrieve the mails. This also returns with the same error.
Graph downgrade:
Hoping that there is some difference in the last few versions, we downgraded Graph. There was no difference.
Check local sync issues:
I've noticed in the past (quite a while back), that when doing this call for a folder that has potential local sync issues, this is the same type of error response returned. In this case, there is no reason to believe that these are local sync issues.
Additional:
When setting up the httpProvider, I removed the default retry handlers as well. With the default retry handler in place, the 'ServiceException' would be caught internally and retried (without adhering to back-offs, not that any are returned), eventually resulting in a tooManyRetries or a timeout that hid the actual issue. By removing the default retry handler, we can see the actual 'ServiceException' returned by the server.
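For reference, removing the default retry handler can be sketched like this, assuming the Microsoft.Graph 3.x / Microsoft.Graph.Core 1.x versions from the question; `authProvider` stands in for your own IAuthenticationProvider, and if your SDK version lacks the HttpClient constructor you would pass a custom HttpProvider instead:

```csharp
using System.Linq;
using System.Net.Http;
using Microsoft.Graph;

// Build the default handler pipeline, then drop the RetryHandler so the
// raw ServiceException from the server surfaces instead of tooManyRetries.
var handlers = GraphClientFactory.CreateDefaultHandlers(authProvider);
var retry = handlers.FirstOrDefault(h => h is RetryHandler);
if (retry != null)
    handlers.Remove(retry);

HttpClient httpClient = GraphClientFactory.Create(handlers);
var graphClient = new GraphServiceClient(httpClient);
```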
When:
Based on our telemetry, this seems to have started happening a lot more frequently since around the 11th-13th of June. Before that we did not experience any issues.
There are days when the requests work, but they are few and far between.
This is quite a big issue for us, so any suggestions would be greatly appreciated. Any specific Microsoft Support channel that I can log this with would also help.
Thanks in advance.

DocuSign eSign: CreateEnvelope requests timing out

We've been having issues sending certain Docusign envelopes lately, specifically those with large file sizes.
The errors we've been getting are:
Error calling CreateEnvelope: The operation has timed out
And
The request was aborted: The request was canceled.
No inner exception with any additional information in either case.
These errors only occur on our production server; on my local development machine everything works fine, so I can only assume that this is a connectivity issue: there simply isn't enough time to send the supplied data over the available connection before something times out. What I would like to know is: what is the something that's timing out? Are these errors coming from my end, or DocuSign's? If the former, is there any way to increase the timeout? I've got my HTTP execution timeout set to 300 seconds:
<httpRuntime maxRequestLength="30000" requestValidationMode="4.0" executionTimeout="300" targetFramework="4.5" />
... but that doesn't seem to affect anything; it always seems to time out at the default 1 minute 50 seconds.
Is there anything more I can do to prevent these requests from timing out?
Thanks,
Adam
Our issue has been resolved. The timeouts were indeed being caused by something on our end; there is a "Timeout" property which can be set against the EnvelopesApi object before sending; it can also be passed into the constructor when declared. So our fix was as simple as:
EnvelopesApi envelopesApi = new EnvelopesApi();
envelopesApi.Configuration.Timeout = DocusignTimeout;
The crux of our issue was that the Timeout property was not exposed in older versions of eSign. We had upgraded to 2.1.0 (the current version) earlier this week, but something must not have taken, as the metadata still showed our DocuSign.eSign.Client.Configuration class at version 15.4.0.0. Uninstalling and then reinstalling the eSign and RestSharp packages from NuGet gave us the correct version of this class and enabled us to set our own timeout.
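For completeness, the constructor form mentioned above might look like this; a sketch assuming eSign 2.1.0's Configuration type, where DocusignTimeout is our own millisecond value defined elsewhere:

```csharp
using DocuSign.eSign.Api;
using DocuSign.eSign.Client;

// Timeout (in milliseconds) set on the Configuration handed to the API object.
var config = new Configuration { Timeout = DocusignTimeout };
EnvelopesApi envelopesApi = new EnvelopesApi(config);
```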
Hope this is helpful!

Getting timeouts while deleting files from Microsoft Azure

I'm trying to delete backup files (which are older than 30 days) from Microsoft Azure using C# code, but unfortunately I'm getting timeout issues. Can anyone please help me with that?
This is the error:
Microsoft.WindowsAzure.Storage.StorageException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
If you can confirm that the error is thrown at container.ListBlobs, I assume you could set BlobRequestOptions.ServerTimeout to increase the server-side timeout for your request. You could also leverage BlobRequestOptions.RetryPolicy (LinearRetry or ExponentialRetry) to enable retries when a request fails. Here is a code snippet you can refer to:
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    ServerTimeout = TimeSpan.FromMinutes(5)
});
or
container.ListBlobs(null, false, options: new BlobRequestOptions()
{
    // the server timeout interval for the request
    ServerTimeout = TimeSpan.FromMinutes(5),
    // the maximum execution time across all potential retries for the request
    MaximumExecutionTime = TimeSpan.FromMinutes(15),
    // retry up to 3 times with exponential back-off
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3)
});
Additionally, you could leverage ListBlobsSegmented to list blobs in pages. For more details, you could refer to List blobs in pages asynchronously section in this official tutorial.
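Putting the pieces together for the actual goal (deleting backups older than 30 days), here is a paged sketch assuming the classic Microsoft.WindowsAzure.Storage SDK from the question, where `container` is your own CloudBlobContainer:

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

var options = new BlobRequestOptions
{
    ServerTimeout = TimeSpan.FromMinutes(5),
    MaximumExecutionTime = TimeSpan.FromMinutes(15),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3)
};

BlobContinuationToken token = null;
do
{
    // List one page of (at most 500) blobs at a time instead of the whole container.
    var segment = container.ListBlobsSegmented(
        null, true, BlobListingDetails.None, 500, token, options, null);

    foreach (var blob in segment.Results.OfType<CloudBlockBlob>())
    {
        // Delete anything last modified more than 30 days ago.
        if (blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-30))
            blob.DeleteIfExists(options: options);
    }

    token = segment.ContinuationToken;
} while (token != null);
```

Paging this way keeps each request small, so a single slow call is far less likely to hit the timeout.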

Adjust HttpWebRequest Timeout Before Creating any HttpWebRequest

In C# 5 and WinForms, I used a library created by Telegram Company. In this library there is a function SendDocument(UserId, DocumentStream). I know this function uses HttpWebRequest internally, and its Timeout property is not handled, because sometimes it can't send large documents, and after exactly 100 seconds (the default timeout in .NET) the function throws a 'The task was canceled' exception.
According to Telegram Company's documentation, we can send files up to 50 MB, and my example file is about 15 MB.
OK, now I want to adjust the timeout of every HttpWebRequest on my server, but I don't have any feature for this.
Can I adjust the HttpWebRequest.Timeout property for all requests on my server?
This is not directly related to your question, but may help ease your mind.
My advice is, don't bother adjusting the timeout. It is not likely to help. Here is what I have gone through:
I have tried to upload a 20M mp4 video file using Telegram Bot API. From a Raspberry Pi, it took 5 minutes, then returned a 504 Gateway-Timeout error. From a hosted server, it took 1 minute, then returned a 504 Gateway-Timeout error. In both cases, however, the video did eventually reach the recipient 5 minutes later. So, the upload seemed somewhat successful, yet not quite successful.
I tried to fix the problem by streaming the upload. Same problem persisted.
I tried to adjust the HTTP timeout parameter. Same problem persisted.
I tried to use cURL to make the request (instead of using telepot, a Python library I author). Same problem persisted.
I suspect the problem lies with the Telegram servers, so I talked to Bot Support. They got back to me once, saying they have made some improvements and asked if I still have the same problem. But same problem still persists.
So, it seems the problem does lie with the Telegram servers. It's not your code.
I know it's a pretty old question, but maybe my answer will help somebody. When I tried to send considerably large files via my bot, I received Telegram.Bot.Exceptions.ApiRequestException: Request timed out, and the only solution I found was this issue, which wasn't really helpful, because if you check the source code you'll see that passing a cancellation token does nothing to the request timeout. But then I saw that you can pass an HttpClient to your bot client instance and do something like this:
_httpClient = new HttpClient();
_httpClient.Timeout = new TimeSpan(0, 5, 0); // 5 min
_client = new TelegramBotClient(botConfig.Token, _httpClient);
Hope this helps.

Google.Apis.Admin.Email_Migration_v2 [HTTP Status Code 412 – Limit Reached]

Edit 2:
Client Library: After reviewing it is not easily suggested that this is for the .NET client library.
DLL: Google.Apis.Admin.email_migration_v2.dll
What steps will reproduce the problem?
Generate a process which contains a Google.Apis.Admin.email_migration_v2.AdminService instance for each unique Google Apps Gmail mailbox that will have messages sent to it. All of the AdminService objects generated use the same OAuth 2.0 credentials and application name. Each AdminService object generated will only send messages to one Google Apps user's mailbox. For example, if we were sending messages to five different Google Apps Gmail mailboxes, we would generate five AdminService objects to send messages; one for each user's mailbox.
Biggest thing to note is that each AdminService object created is created on a separate process.
AdminService objects were given a FileDataStore object to change the location of where the refresh token is stored; C:\ProgramData\SomeFile\SomeFile.
Supplied appropriate scopes to the credentials.
Begin sending mail messages on each process. Using one thread to send messages in each process, so only one message is sent at a time to each user’s mailbox.
Each message sent gets its own instance of MailItem and MailResource.InsertMedia.
The MailResource.InsertMedia object is generated for each item by calling AdminService.Mail.Insert(MailItem, string, Stream, string) method.
The call to MailResource.InsertMediaUpload.UploadAsync(CancellationTokenSource).Result is where we can receive the error.
The error is caught and handled (logged) from the return type of the aforementioned call; the type is Google.Apis.Upload.IUploadProgress. The exception is handled using the IUploadProgress.Exception property.
What is the expected output? What do you see instead?
The expected output would be a successful message response or the exception property of the IUploadProgress to be null after the return of the task. Instead we are receiving the following error message:
The service admin has thrown an exception:
Google.GoogleApiException:Google.Apis.Requests.RequestError
Limit reached. [412]
Errors [Message[Limit reached.] Location[If-Match - header] Reason[conditionNotMet] Domain[global]]
at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task)
at Google.Apis.Upload.ResumableUpload`1.d__e.MoveNext()
What version of the product are you using?
Google.Apis.Admin.Email_Migration_v2 (1.8.1.20)
What is your operating system?
Windows Server 2008 R2 Enterprise (SP1)
What is your IDE?
Visual Studio 2013 Premium
What is the .NET framework version?
4.0.30319
Please provide any additional information below.
Non-consecutive messages can fail (with the 412 HTTP status code provided above) during the process of sending the messages. Once we receive this error, other messages sent after the failed message(s) can succeed. (Items can fail at any point during the process: beginning, middle, or end.)
Each message sent has nearly identical content. The size of the messages ranges from 1KB to 100KB including the size of all associated attachments; not all messages have attachments.
Reprocessing the failed items at a later time results in successful message responses, and the appropriate items are sent to the user's Google Apps Gmail Inbox.
The maximum number of Google Apps users' mailboxes sent to at one time was ten.
After checking the quotas of our Google Developers Console project:
We were nowhere near the specified limit of 20 requests a second for the Email Migration API; we maxed out at sending 7 requests a second. Only 2% of the maximum daily requests had been reached.
All messages sent had the same label; the label was well under the 225-character limit. Actually, all of the labels/sub-labels applied together only amounted to 40 characters.
This error message can still be received when sending to only one Google Apps user's mailbox, using only one process and one thread.
Each process normally sends anywhere from 1000-5000 messages.
I have not found a lot of specific documentation to explain this particular error in enough detail to remedy the problem at hand.
Questions:
So what exactly does this 412 http status code mean? What limit is being encountered that this message is referring to?
Shouldn’t we be receiving some form of 5XX error from the server if we are hitting a limit? In which case wouldn’t the built in exponential back off policy kick in?
a. Unless the server is checking the POST request for a pre-condition about a server side limit then telling the client to back off which is what a 412 error seems to typically indicate. In that case please give as much detail as possible for question 1.
Sorry for the extensive post! Thanks for your time! I will also be creating a defect/issue in Google's .NET issue tracker and providing a link.
Edit 1:
For anyone interested in following this issue here is a link to the submitted item in Google's issue tracker for .NET.
Submitted Issue
For reference it is issue 492.
I am not quite sure where you see "the specified limit of 20 requests a second for the Email Migration API". Reminder: the QPS limit you see in the Google Developers Console project is not the actual default limit. You can change that limit to anything you want, and thus it is not the actual limit for the API. It is really just for managing the consumption of the API quota (some APIs have a much higher QPS, which you can adjust lower for different projects across your console).
According to the Email Migration API documentation, the QPS limit is 1 request per second (the link is here: https://developers.google.com/admin-sdk/email-migration/v2/limits).
I have experienced 412 errors when the QPS limit is being hit, and I have also seen the 412 error returned when I am uploading too much data to a single domain. How much data are you loading all at once? I would suggest implementing an exponential back-off to see if the issue disappears.
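An exponential back-off wrapper could be sketched generically like this; the helper name, attempt count, and delays are illustrative choices, not part of any Google SDK:

```csharp
using System;
using System.Threading;

static class Backoff
{
    // Retry an action with exponentially growing delays plus random jitter,
    // rethrowing only after the final attempt fails.
    public static T Retry<T>(Func<T> action, int maxAttempts = 5)
    {
        var rng = new Random();
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception) when (attempt < maxAttempts - 1)
            {
                // Delays grow 1s, 2s, 4s, 8s..., plus up to 1s of jitter.
                var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt)) +
                            TimeSpan.FromMilliseconds(rng.Next(1000));
                Thread.Sleep(delay);
            }
        }
    }
}
```

You would then wrap the upload call, e.g. `Backoff.Retry(() => insertMediaUpload.Upload())`, so that a 412 triggers progressively longer waits instead of an immediate retry.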
I believe I have found an answer to this problem, though I will add a disclaimer: I do not work for Google and cannot be 100% sure of its accuracy; you've been warned. This should at least hold true for the .NET version of Google's Email Migration v2 API. I cannot guarantee how other APIs work, because I do not use them.
Through working with this API in spurts for well over eight months now, it appears that if an application or multiple applications are to send messages to a single Google Apps user/mailbox consistently, at a faster rate than which Google servers can process, then at some rate you should start to get a bunch of GoogleApiExceptions stating "412 - Limit Reached" when sending new messages. What we have gathered through using our application is that each Google Apps user/mailbox has its own pending items queue. When you send a message to Google Apps it is first put into this queue before being processed by a Google Server and put into the user's mailbox. If this queue becomes full and you attempt to send another message you will receive a 412 error.
One option is to wait before sending another message: you'll have to wait however long the Google server takes to process the next message in the user's queue, which is unpredictable. The better option, in my opinion, is to start sending messages to another Google Apps user, because each user appears to have their own message queue. Be sure to stop sending to the user who is consistently getting 412 errors; this gives the Google server some time to process that user's packed message queue. Note that each pending-messages queue appeared to hold about 100-150 items before throwing 412 errors.
503 errors appear to occur when sending messages into a user's mailbox queue at a higher rate than 1 request per second. As Emily has stated "the QPS limit you see in the Google Developers Console project is not the actual default limit" it is truly 1 QPS per Google Apps user.
As for the exponential back-off, it is supposed to be implemented automatically; see this. Note that Peleyal appears to be the gentleman in charge of the API, as can be seen from the download page for the API.
This took us a little while to figure out so cheers if you're having this issue! Please if you find any contradicting information correct any mistakes found in this answer or make your own!!
