I'm hosting an application on Azure as a continuous WebJob. The program frequently (around once per second) makes calls to a CosmosDB database by creating a DocumentClient instance (I use the DocumentClient method CreateDocumentQuery and a LINQ query on the resulting IEnumerable to retrieve objects from my database). When I run the program locally it behaves as expected without any issues. When I publish the program as an Azure WebJob and run it, my logs indicate that an HttpRequestException is being thrown with the message:
An error occurred while sending the request.
Additionally, I get the following stack trace:
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at System.Threading.Tasks.Task`1.get_Result()
at Microsoft.Azure.Documents.Linq.DocumentQuery`1.d__31.MoveNext()
at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
at ... my calling code...
This problem only seems to occur when I make frequent use of the DocumentClient and only on the WebJob side of things. Running an equivalent load locally does not faze my application. Why is this exception occurring in my WebJob? It might be worth noting that this problem occurs with both the S1 and P1V2 App Service tiers.
DocumentClient shouldn't be used on a per-request basis; instead, you should use it as a singleton instance in your application. Creating a client per request adds a lot of latency overhead.
So I'd declare the Client property as static and initialize it in the constructor of the Service. You could call await Client.OpenAsync() in the Connect method to "warm up" the client, and in each of your public methods use the Client instance directly to call the DocumentDB APIs.
Dispose of the Client in the Dispose method of the Service.
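Roughly along these lines (a minimal sketch only; the endpoint, key, database/collection names, and the Widget type are placeholders, not your actual code):

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public class Widget { public string Id { get; set; } }   // placeholder document type

public class Service : IDisposable
{
    // One client for the lifetime of the process.
    private static DocumentClient Client;

    public Service()
    {
        if (Client == null)
        {
            Client = new DocumentClient(
                new Uri("https://myaccount.documents.azure.com:443/"),   // placeholder endpoint
                "<auth-key>");                                           // placeholder key
        }
    }

    public async Task Connect()
    {
        // "Warm up" the client once so the first real query doesn't pay the setup cost.
        await Client.OpenAsync();
    }

    public Widget GetWidgetById(string id)
    {
        var collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "widgets");
        return Client.CreateDocumentQuery<Widget>(collectionUri)
                     .Where(w => w.Id == id)
                     .AsEnumerable()
                     .FirstOrDefault();
    }

    public void Dispose() => Client?.Dispose();
}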
Those clients are designed to be re-used, so it's recommended that you have a single static instance that you re-use across all functions.
Here you can find tips on performance issues:
https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips#sdk-usage
Hope that helps!
This error is logged occasionally in the function app logs. "An exception occurred while creating a ServiceBusSessionReceiver (Namespace '<servicebus namespace>.servicebus.windows.net', Entity path '<topic>/Subscriptions/<subscription>'). Error Message: 'Azure.Messaging.ServiceBus.ServiceBusException: Put token failed. status-code: 500, status-description: The service was unable to process the request; please retry the operation."
The function app uses managed identity to connect to the service bus.
There is no impact on the regular usage but just want to know the reason for this exception.
I checked online to find the reason for the exception but didn't find anything, even on StackOverflow. I want to know the reason for this exception so I will know the impact of the failure and try to resolve it.
There is no action needed for your application and nothing that you can do to resolve. This is something that is handled by the Service Bus infrastructure internally. Intermittent failures will not impact your application, though if you're seeing this in large clusters or seeing it frequently, I'd encourage you to open a support ticket for investigation.
To add some context, this exception indicates a service-side issue when passing authorization token over the CBS link, which is a background activity. The Service Bus client sends refreshes periodically with a large enough window that failures can be retried before the current authorization expires. In the worst case, a specific connection would fault and the Service Bus client would create a new one. So long as the service issue was transient, such as is common when a service node is rebooting or being moved, things will recover without noticeable impact to the application.
.NET Core Web API project.
I commonly use logs everywhere in my apps to have some additional tracking capabilities for overall system health. Currently, my "logging" happens synchronously, for instance:
void MyMethod()
{
    Log.Write("initiating");
    // Do something
    Log.Write("finished");
}
Now, Log.Write() will consume time in the main thread as it's, after all, a SQL insert.
How can I make Log.Write both asynchronous (Task.Run style, for which I need no return value, so no awaiting) AND resolve its own SQL connection? If Log.Write() uses the same connection my controller/method has, it will be disposed after the main execution and I risk not having an open connection when the async task runs. So Write() must resolve its own connection, and it is a method that might be called hundreds if not thousands of times a minute.
Thanks!
Microsoft themselves state that async logging methods should not be necessary, since you should not be logging to a slow store:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging#no-asynchronous-logger-methods
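That said, if you do want to take the SQL insert off the request path, one common pattern is a single background writer fed by an in-memory queue, where the writer opens its own connection for each insert so it never depends on the caller's (possibly disposed) connection. A rough sketch, not a drop-in implementation; the LogQueue class, the Log table, and the connection string are made up for the example:

using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class LogQueue
{
    private const string ConnectionString = "<your connection string>";   // placeholder
    private static readonly BlockingCollection<string> Messages = new BlockingCollection<string>();

    static LogQueue()
    {
        // Single long-running background writer drains the queue.
        Task.Run(() =>
        {
            foreach (var message in Messages.GetConsumingEnumerable())
            {
                // Each insert opens its own connection, independent of any request scope.
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("INSERT INTO Log (Message) VALUES (@m)", conn))
                {
                    cmd.Parameters.AddWithValue("@m", message);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        });
    }

    // Callers return immediately; the insert happens on the background thread.
    public static void Write(string message) => Messages.Add(message);
}

Note that a queue like this is in-memory only, so anything still queued is lost if the process stops.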
We have an Azure-based ASP.NET Web Service that accesses an Azure KeyVault. We are seeing two instances in which a method "hangs" on a first try, and then works a minute or so later.
In both instances, a KeyVault access occurs. In both instances the problem started when we started using the KeyVault in these methods.
We have done very careful logging in the first instance, and cannot see anything else in our code that could cause the hang. The KeyVault access is the primary suspect.
In addition, if we run the app from our local servers (from Visual Studio), the KeyVault access works fine on the "first try". It only produces the "hang" error when it runs in production on Azure, and only on that "first try".
By "hang" I mean that in one instance, which is triggered by an external API, it takes at least 60 seconds (we can tell that because the external API times out.) In the other instance, which is triggered by a page request, several minutes can pass and the page just spins, at which point we assume the DB request or something else has timed out.
When I say "a minute or so later", that's as fast as we have timed the retry.
Is there some kind of issue or function where the KeyVault needs to be "warmed up" before it works on the first try?
Update: I'm looking at the code more carefully, and I see at least a couple of places where we can insert still more logging to get a more exact picture of where the failure occurs. I'm going to do that, and then I'll report back here.
Update: See answer below - major newbie error, has been corrected.
Found the problem, and the solution.
Key Vault access needs to be called from an async task, because there is a multi-second delay.
private async Task<string> GetKeyVaultSecretValue(varSecretParms) {
I don't understand the underlying technology; however, apparently, if the call is made from within a standard synchronous code sequence, the server doesn't like to wait, and so the thread is abandoned/halts.
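For reference, an async retrieval with the current Azure SDK (Azure.Security.KeyVault.Secrets plus Azure.Identity; the vault URL and secret name are placeholders, and this is not necessarily the SDK the original code used) looks roughly like this:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public class SecretReader
{
    // Reuse one client; it handles its own connection pooling and token caching.
    private static readonly SecretClient Client = new SecretClient(
        new Uri("https://myvault.vault.azure.net/"),    // placeholder vault URL
        new DefaultAzureCredential());

    public async Task<string> GetKeyVaultSecretValue(string secretName)
    {
        // Awaiting keeps the request thread free instead of blocking on the
        // multi-second first call described above.
        KeyVaultSecret secret = await Client.GetSecretAsync(secretName);
        return secret.Value;
    }
}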
According to your description, it seems that this is due to the Web App not having Always On enabled.
By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time.
If possible, please try enabling Always On and run it again.
Can I use Proxy.Open() as an indication of whether the connection should work or not?
I would like to check if a connection is available first, and then, if it's not, I won't make any calls to the service during the run of the application.
Note: I only need to check connectivity, not necessarily an entire client-service round trip.
I ended up creating a Ping() method in the service, as suggested.
FYI, simply using Open() just didn't work - Open() doesn't raise any exceptions even if the service is offline!
Given the fact that there are so many variables which influence the success of a WCF service call, I tend to add a dummy void KnockKnock() method to the services to have real proof that the connection works.
This method can also serve a double purpose: you can call it asynchronously to notify the server that it has to be prepared for incoming requests. (Just the initial start-up of the service may take some time.) By invoking the KnockKnock() method, the server can start loading the service and give your clients better initial response performance.
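For illustration, a sketch of what that can look like (the contract name, binding, and address are made up; only KnockKnock() itself reflects the suggestion above):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    // Dummy no-op used only to probe reachability and warm up the service.
    [OperationContract]
    void KnockKnock();

    // ... the real operations ...
}

public static class ServiceProbe
{
    public static bool CanReachService()
    {
        var factory = new ChannelFactory<IMyService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://myserver/MyService.svc"));   // placeholder address
        IMyService proxy = factory.CreateChannel();
        try
        {
            proxy.KnockKnock();      // cheap round trip: proves the endpoint is reachable
            return true;
        }
        catch (CommunicationException)
        {
            return false;            // offline or misconfigured: skip service calls this run
        }
        finally
        {
            factory.Abort();         // don't leave the factory/channel hanging either way
        }
    }
}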
At the moment I have a windows service written in C# that waits for a message in SQL Server Service Broker, and upon receipt calls a webservice method with details passed in the message.
I want to replace this with an SQLCLR stored procedure that calls the webservice directly, which I've found to be quite simple. I can then call this using internal activation.
However, it needs to create a new instance of the webservice for each method, and from experience I've found this takes some time. In the Windows service I create it with lazy instantiation in a static class, but I understand static fields can't be used to store information in SQLCLR.
Is there a way to persist this webservice reference?
Edit: Here is the lazy instantiation code referencing the singleton I'd like to persist:
static class WsSingleton
{
    static MWs.MWS mWS = null;

    public static MWs.MWS GetMWS()
    {
        // Lazily create the proxy on first use, then hand back the same instance.
        if (mWS == null)
        {
            mWS = new MWs.MWS();
            mWS.Credentials = new System.Net.NetworkCredential("user", "password", "domain");
        }
        return mWS;
    }
}
it needs to create a new instance of the webservice for each method
Do you mean the client has to instantiate a proxy for each HTTP call it makes? If that's what you mean, you shouldn't have to persist any reference. An internally activated procedure is launched when there are messages to process and it can stay active and running. Such local state could be the proxy instance used to place the WWW calls. Typically the procedure runs a loop and keeps state on the stack as local variables of the loop method. See Writing Service Broker Procedures for more details. To be more precise, your code should not RECEIVE one message at a time, but a set of messages.
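To illustrate the shape of that (the queue name, message handling, and the web-service call are placeholders; MWs.MWS is the generated proxy from the question):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class ActivationProc
{
    [SqlProcedure]
    public static void ProcessQueue()
    {
        // Create the proxy once; it stays alive as long as this activated
        // procedure keeps draining the queue.
        var proxy = new MWs.MWS
        {
            Credentials = new System.Net.NetworkCredential("user", "password", "domain")
        };

        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            while (true)
            {
                // RECEIVE a batch of messages, not one at a time.
                using (var cmd = new SqlCommand(
                    "RECEIVE TOP (100) conversation_handle, message_type_name, message_body FROM MyQueue;",
                    conn))
                using (var reader = cmd.ExecuteReader())
                {
                    if (!reader.HasRows)
                        return;                    // queue drained: let the procedure exit
                    while (reader.Read())
                    {
                        // Parse reader["message_body"] and call the web service, e.g.
                        // proxy.SomeMethod(...);
                    }
                }
            }
        }
    }
}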
But I would advise against doing what you're doing. First and foremost, making HTTP calls from SQLCLR is a bad idea. Doing any sort of blocking in SQLCLR is bad, and blocking while waiting for bits to respond from the intertubez is particularly bad. The internal SQL Server resources, especially workers, are far too valuable and few to waste them waiting for some WWW service to respond. I would advise keeping things as they are now, namely having the HTTP call occur from an external process.
A second comment I have is that you may be better off using a table as a queue. See Using tables as Queues. One typical problem with queueing HTTP calls is that the WWW is very unreliable and you have to account for timeouts and retries. Using a table as a queue can achieve this more easily than a true Service Broker queue. With SSB you'd have to rely on conversation timers for reliable retries, which makes your activation logic significantly more complicated.