I have the following policy in a PolicyRegistry to be reused globally:
var fallbackPolicy = Policy
.Handle<DrmException>().OrInner<DrmException>()
.Fallback(
fallbackAction: () => { /* should commit or dispose the transaction here using a passed-in Func or Action */ },
onFallback: (exception) => { Log.Error().Exception(exception).Message($"Exception occurred, message: {exception.Message}.").Write(); }
);
I have the following code which I want to implement the fallbackPolicy in:
if(Settings.DRM_ENABLED)
drmManager.ExecuteAsync(new DeleteUser(123)).Wait(); // HTTP call, throws DrmException if unsuccessful
//in some cases, there is an if(transaction == null) here (if transaction was passed as a parameter and needs to be committed here)
transaction.Commit();//if not thrown - commits the transaction
I would like it to look something like this:
var fallbackPolicy = Policy
.Handle<DrmException>().OrInner<DrmException>()
.Fallback(
fallbackAction: (transaction) => { transaction.Dispose(); },
onFallback: (exception) => { Log.Error().Exception(exception).Message($"Exception occurred, message: {exception.Message}.").Write(); }
);
fallbackPolicy.Execute(() => drmManager.ExecuteAsync(new DeleteUser(123)).Wait(), transaction);
As far as I understand, fallbackPolicy.Execute takes an Action/Func to be carried out, which either succeeds, in which case the fallback is not hit, or fails, in which case the fallback kicks in with the predefined fallbackAction.
What I would like to do is to pass in two handlers (onFail(transaction) which disposes the transaction and onSuccess(transaction) which commits the transaction) when executing the policy. Is there an easier way of doing it instead of wrapping it or using a Polly's context?
Feels like there are a few separate questions here:
How can I make a centrally-defined FallbackPolicy do something dynamic?
How can I make one FallbackPolicy do two things?
With Polly in the mix, how can I do one thing on overall failure and another on overall success?
I'll answer these separately, to give you (and future readers) a full toolkit to build your own solution - but you'll probably not need all three to achieve your goal. Skip to 3. if you just want a solution.
1. How can I make a centrally-defined FallbackPolicy do something dynamic?
For any policy defined centrally, yes Context is the way you can pass in something specific for that execution. References: discussion in a Polly issue; blog post.
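As a sketch of that (the DrmException placeholder, the "transaction" key name, and the class/field names here are my own, not from your code), the centrally stored fallback can read the transaction out of the Context it is executed with:

```csharp
using System;
using System.Collections.Generic;
using Polly;
using Polly.Fallback;

public class DrmException : Exception { } // placeholder for the question's type

public static class CentralPolicies
{
    public const string TransactionKey = "transaction"; // illustrative key name

    public static readonly FallbackPolicy DrmFallback = Policy
        .Handle<DrmException>().OrInner<DrmException>()
        .Fallback(
            fallbackAction: (exception, context, token) =>
            {
                // Pull this execution's transaction out of the Context.
                if (context.TryGetValue(TransactionKey, out var value) && value is IDisposable tx)
                {
                    tx.Dispose();
                }
            },
            onFallback: (exception, context) => { /* central logging here */ });
}
```

At the execution site you would then supply the transaction for that particular execution, along the lines of: CentralPolicies.DrmFallback.Execute(ctx => drmManager.ExecuteAsync(new DeleteUser(123)).Wait(), new Context("DeleteUser", new Dictionary<string, object> { [CentralPolicies.TransactionKey] = transaction })).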
Part of your question seems to be about making the FallbackPolicy both log and deal with the transaction. So ...
2. How can I make one FallbackPolicy do two things?
You can pass in something dynamic (per above). Another option is use two different fallback policies. You can use the same kind of policy multiple times in a PolicyWrap. So you could define a centrally-stored FallbackPolicy to do just the logging, and keep it simple, non-dynamic:
var loggingFallbackPolicy = Policy
.Handle<DrmException>().OrInner<DrmException>()
.Fallback(fallbackAction: () => { /* maybe nothing, maybe rethrow - see discussion below */ },
onFallback: (exception) => { /* logging; */ });
Then you can define another FallbackPolicy locally to roll back the transaction on failure. Since it's defined locally, you can likely just pass the transaction variable into its fallbackAction using a closure (in which case you don't have to use Context).
Note: If using two FallbackPolicy instances in a PolicyWrap, you'd need to make the inner FallbackPolicy rethrow (not swallow) the handled exception, so that the outer FallbackPolicy also handles it.
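A sketch of that shape, using names from your question (transaction, drmManager, DeleteUser, DrmException) plus the centrally stored loggingFallbackPolicy defined above:

```csharp
using System;
using Polly;
using Polly.Wrap;

// Inner, locally defined policy: dispose the transaction captured by the
// closure, then rethrow so the outer (logging) fallback also handles it.
var transactionFallback = Policy
    .Handle<DrmException>().OrInner<DrmException>()
    .Fallback(
        fallbackAction: (exception, context, token) =>
        {
            transaction.Dispose();
            throw exception; // deliberate rethrow for the outer policy
        },
        onFallback: (exception, context) => { });

// Outer, centrally stored policy does only the logging.
PolicyWrap wrap = loggingFallbackPolicy.Wrap(transactionFallback);

wrap.Execute(() => drmManager.ExecuteAsync(new DeleteUser(123)).Wait());
```

The closure over transaction is what lets the inner policy stay free of Context plumbing; only the shared, central policy needs to remain generic.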
Re:
What I would like to do is to pass in two handlers (onFail(transaction) which
disposes the transaction and onSuccess(transaction) which commits the transaction)
There isn't any policy which offers special handling on success, but:
3. With Polly in the mix, how can I do one thing on overall failure and another on overall success?
Use .ExecuteAndCapture(...). This returns a PolicyResult with a property .Outcome == OutcomeType.Successful or OutcomeType.Failure (and other info; see the documentation).
So overall, something like:
var logAndRethrowFallbackPolicy = Policy
.Handle<DrmException>().OrInner<DrmException>()
.Fallback(fallbackAction: (exception, context, token) => {
throw exception; // intentional rethrow so that the 'capture' of ExecuteAndCapture reacts. Use ExceptionDispatchInfo if you care about the original call stack.
},
onFallback: (exception, context) => { /* logging */ });
At execution site:
PolicyResult result = myPolicies.ExecuteAndCapture(() => ... ); // where myPolicies is some PolicyWrap with logAndRethrowFallbackPolicy outermost
if (result.Outcome == OutcomeType.Successful)
{ transaction.Commit(); }
else
{ transaction.Dispose(); }
Related
During startup I basically add an HttpClient like this:
services.AddHttpClient<IHttpClient, MyHttpClient>().AddPolicyHandler(GetRetryPolicy());
public IAsyncPolicy<HttpResponseMessage> GetRetryPolicy()
{
    return HttpPolicyExtensions
        .HandleTransientHttpError()
        .OrResult(message => message.StatusCode == HttpStatusCode.NotFound)
        .WaitAndRetryAsync(GetBackOffDelay(options),
            onRetry: (result, timespan, retryAttempt, context) =>
            {
                context.GetLogger()?.LogWarning($"Failure with status code {result.Result.StatusCode}. Retry attempt {retryAttempt}. Retrying in {timespan}.");
            });
}
How can I test that the retry policy works as expected? I've tried writing a test like this:
public async Task MyTest()
{
var policy = GetMockRetryPolicy(); // returns the policy shown above.
var content = HttpStatusCode.InternalServerError;
var services = new ServiceCollection();
services.AddHttpClient<IHttpClient, MyFakeClient>()
.AddPolicyHandler(policy);
var client = (MyFakeClient)services.BuildServiceProvider().GetRequiredService<IHttpClient>();
await client.Post(new Uri("https://someurl.com"), content);
// Some asserts that don't work right now
}
For reference here's the bulk of my Post method on MyFakeClient:
if(Enum.TryParse(content.ToString(), out HttpStatusCode statusCode))
{
if(statusCode == HttpStatusCode.InternalServerError)
{
throw new HttpResponseException(new Exception("Internal Server Error"), (int)HttpStatusCode.InternalServerError);
}
}
The MyFakeClient has a Post method that checks to see if the content is an HttpStatusCode and if it's an internal server error throws an HttpResponseException. At the moment, this creates the correct client and triggers the post fine. It throws the HttpResponseException but in doing so exits the test rather than using the retry policy.
How can I get it to use the retry policy in the test?
Update
I followed Peter's advice and went down the integration test route, and managed to get this to work using HostBuilder and a stub delegating handler. In the stub handler I pass in a custom header to enable me to read the retry count back out of the response. The retry count is just a property on the handler that gets incremented every time it's called, so it's actually the first attempt plus all following retries. That means if the retry count is 3, you should expect 4 as the value.
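For future readers, such a stub delegating handler can be as small as this (a sketch - the class name and header name are my own, not from the question's code):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Terminal stub: always fails, counting how many times the HttpClient
// pipeline (first attempt + every retry) reaches it.
public class StubDelegatingHandler : DelegatingHandler
{
    public int AttemptCount { get; private set; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        AttemptCount++;
        var response = new HttpResponseMessage(HttpStatusCode.InternalServerError);
        // Echo the count back in a custom header so the test can read it.
        response.Headers.Add("X-Attempt-Count", AttemptCount.ToString());
        return Task.FromResult(response);
    }
}
```

Register it after the policy (e.g. .AddPolicyHandler(policy).AddHttpMessageHandler(() => stub)) so the retry policy wraps the stub; with a retry count of 3 the handler should be hit 4 times.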
The thing is you can't really unit test this:
The retry is registered on top of the HttpClient via DI. When you are unit testing, you are not relying on the DI but rather on individual components, so integration testing might be more suitable for this. I've already detailed how you can do that via WireMock.Net. The basic idea is to create a local HTTP server (mocking the downstream system) with a predefined response sequence.
After you have defined the retry policy (with the retry count and time penalties), you cannot easily retrieve those settings back. So, from a unit-testing perspective it is really hard to make sure that the policy has been defined correctly (e.g. that the delay is specified in seconds, not in minutes). I've already created a GitHub issue for that, but unfortunately the development of the new version has been stuck.
The Durable Functions documentation specifies the following pattern to set up automatic handling of retries when an exception is raised within an activity function:
public static async Task Run(DurableOrchestrationContext context)
{
    var retryOptions = new RetryOptions(
        firstRetryInterval: TimeSpan.FromSeconds(5),
        maxNumberOfAttempts: 3);
    await context.CallActivityWithRetryAsync("FlakyFunction", retryOptions, "ABC");
    // ...
}
However I can't see a way to check which retry you're up to within the activity function:
[FunctionName("FlakyFunction")]
public static string[] MyFlakyFunction(
[ActivityTrigger] string id,
ILogger log)
{
// Is there a built-in way to tell what retry attempt I'm up to here?
var retry = ??
DoFlakyStuffThatMayCauseException();
}
EDIT: I know it can probably be handled by mangling some sort of count into the RetryOptions.Handle delegate, but that's a horrible solution. It can be handled manually by maintaining an external state each time it's executed, but given that there's an internal count of retries I'm just wondering if there's any way to access that. Primary intended use is debugging and logging, but I can think of many other uses.
There does not seem to be a way to identify the retry. Activity functions are unaware of state and retries. When the CallActivityWithRetryAsync call is made the DurableOrchestrationContext calls the ScheduleWithRetry method of the OrchestrationContext class inside the DurableTask framework:
public virtual Task<T> ScheduleWithRetry<T>(string name, string version, RetryOptions retryOptions, params object[] parameters)
{
Task<T> RetryCall() => ScheduleTask<T>(name, version, parameters);
var retryInterceptor = new RetryInterceptor<T>(this, retryOptions, RetryCall);
return retryInterceptor.Invoke();
}
There the Invoke method on the RetryInterceptor class is called and that does a foreach loop over the maximum number of retries. This class does not expose properties or methods to obtain the number of retries.
Another workaround to help with debugging could be logging statements inside the activity function. And when you're running it locally you can put in a breakpoint there to see how often it stops there. Note that there is already a feature request to handle better logging for retries. You could add your feedback there or raise a new issue if you feel that's more appropriate.
To be honest, I think it's good that an activity is unaware of state and retries. That should be the responsibility of the orchestrator. It would be useful however if you could get some statistics on retries to see if there is a trend in performance degradation.
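If the attempt number really matters to the activity, one workaround consistent with that responsibility split is to skip CallActivityWithRetryAsync and drive the retries from the orchestrator yourself, passing the attempt number as part of the activity input. This is a sketch, not an official pattern; the function names and input shape are my own:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ManualRetryOrchestrator
{
    [FunctionName("ManualRetryOrchestrator")]
    public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext context)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                // The activity receives the attempt number inside its input,
                // so it can log "retry 2 of 3" itself.
                await context.CallActivityAsync("FlakyFunction", new { Id = "ABC", Attempt = attempt });
                break; // success - stop retrying
            }
            catch (FunctionFailedException) when (attempt < maxAttempts)
            {
                // Use a durable timer (not Task.Delay) to stay replay-safe.
                await context.CreateTimer(context.CurrentUtcDateTime.AddSeconds(5), CancellationToken.None);
            }
        }
    }
}
```

The trade-off is that you re-implement what RetryOptions gives you (backoff, max attempts), in exchange for the activity input carrying the attempt count.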
I am developing an application in which I need to consume a REST API that has a token associated with it. After a specific interval the token expires, so if I then try to call the API it throws an exception. To resolve this, should I refresh the token in the catch block and use GoTo to execute the try block again? I read a couple of articles, and most of them suggest avoiding GoTo.
Below are the links I referred to for the best approach to follow, but I'm still not convinced whether to go with it or not:
1> Is it possible to execute the code in the try block again after an exception in caught in catch block?
2> https://blogs.msdn.microsoft.com/abhinaba/2005/10/01/c-try-and-retry/
Just put a retry count and use continue to skip to the next iteration of a loop in your logic:
int maxRetry = 10;
for (int i = 0; i < maxRetry; i++)
{
    try
    {
        // DO YOUR STUFF
        break; // success - stop retrying
    }
    catch (Exception)
    {
        // OH NOES! ERROR!
        continue; // RETRY!
    }
}
When it has tried 10 times it exits and that's it.
You can unleash your fantasy with any loop you like: for, while, do-while, etc. Use the one that fits your needs.
If there is some really bad error that needs to stop the execution of the loop, rethrow the exception from a dedicated catch block:
catch(VeryWrongException ex)
{
throw;
}
catch (Exception)
{
//OH NOES! ERROR!
continue; //RETRY!
}
where VeryWrongException is the type of exception you actually want to let escape, instead of retrying it with the general catch clause below it.
Extra:
To get an idea of what kinds of exceptions your code can generate so you can catch them, use IntelliSense - it's your friend.
Catch statements which use GoTo to retry the same logic can be dangerous if they are not used properly.
A better way of dealing with this is to write some retry logic, that will attempt to perform your tasks a limited number of times, ideally allowing you to specify your exception.
If you don't want to write your own retry logic, I recommend using an external library such as Polly.
An example of its usage would be this:
// Set up the policy
var retryPolicy = Policy
.Handle<Exception>()
.WaitAndRetry(
3,
retryAttempt => TimeSpan.FromSeconds(5 * retryAttempt)
);
// Attempt to send the message, use Polly to retry a maximum of three times.
retryPolicy.Execute(() =>
{
// Your Code
});
I am afraid you are trying to solve this problem in the wrong place. If a request to the API fails because of an expired token, you should just throw an exception.
Another class, maybe the one that is responsible for initiating the request in the first place, could resolve the error (refreshing the token) and retry requesting data.
If you merge all this responsibility in one place, things could get complicated really fast.
I would like to implement an ASP.NET Core API, which is not responding to HTTP requests, but upon startup starts listening to Google Cloud Pub/Sub messages, and it keeps listening indefinitely throughout its lifetime.
What is the preferred way to implement this with the official Pub/Sub SDK?
I can think of two ways:
Approach 1: Just use a SimpleSubscriber, and in the Startup.Configure start listening to messages:
public void Configure(IApplicationBuilder app)
{
    // Configure cannot be async, hence .Result here.
    var simpleSubscriber = SimpleSubscriber.CreateAsync(subscriptionName).Result;
    simpleSubscriber.StartAsync((msg, cancellationToken) =>
    {
        // Process the message here.
        return Task.FromResult(SimpleSubscriber.Reply.Ack);
    });
    ...
}
Approach 2: Use a library specifically created to periodically run a job, for example Quartz, Hangfire or FluentScheduler, and every time the job is triggered, pull the new messages with a SubscriberClient.
Which one is the preferred approach? The first one seems simpler, but I'm not sure if it's really reliable.
The first approach is definitely how this is intended to be used.
However, see the docs for StartAsync:
Starts receiving messages. The returned Task completes when either
StopAsync(CancellationToken) is called or if an unrecoverable
fault occurs. This method cannot be called more than once per
SubscriberClient instance.
So you do need to handle unexpected StartAsync shutdown on an unrecoverable error. The simplest thing to do would be to use an outer loop, although given these errors are considered unrecoverable, it is likely that something about the call needs to change before it can succeed.
The code might look like this:
while (true)
{
// Each SubscriberClient instance must only be used once.
var subscriberClient = await SubscriberClient.CreateAsync(subscriptionName);
try
{
await subscriberClient.StartAsync((msg, cancellationToken) =>
{
// Process the message here.
return Task.FromResult(SubscriberClient.Reply.Ack);
});
}
catch (Exception e)
{
// Handle the unrecoverable error somehow...
}
}
If this doesn't work as expected, please let us know.
Edit: SimpleSubscriber was renamed to SubscriberClient in the library so the answer has been edited accordingly.
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server takes so-called shared locks, which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized request.
Now this is happening in a web application and I would like to have a mechanism that can do the reprocessing (let's say with a maximum of 5 times) when needed.
I've looked at the IHttpModule, which has BeginRequest() and EndRequest() events (among others) being called, but that does not give me the ability to reprocess the request.
In fact what I need is something that forces itself between the http handler and the process being called.
I could write something like this:
int maxtries = 5;
while(maxtries > 0)
{
try
{
using(var scope = Session.OpenTransaction())
{
// process
scope.Complete(); // commit
return result;
}
}
catch(DeadlockException dlex)
{
maxtries--;
}
catch(Exception ex)
{
throw;
}
}
but I would have to write that for all requests, which is tedious and error-prone. It would be nice if I could just configure a kind of reprocessing handler via the Web.Config that is automatically called and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer. You're missing indices or something similar, or you are doing out-of-sequence updates within transactions that are locking dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem not the symptoms.
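The "wrap the attempt in your own function" idea can be a one-off helper, so the retry loop from the question is written exactly once (a sketch; the DeadlockException placeholder stands in for whatever your DB layer actually throws):

```csharp
using System;

public class DeadlockException : Exception { } // placeholder for your DB layer's type

public static class DeadlockRetry
{
    // Runs the operation, retrying up to maxTries times when a
    // DeadlockException is thrown; any other exception propagates.
    public static T Execute<T>(Func<T> operation, int maxTries = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (DeadlockException) when (attempt < maxTries)
            {
                // chosen as deadlock victim - try again
            }
        }
    }
}
```

Usage would look like var result = DeadlockRetry.Execute(() => ProcessOrder(id)); where ProcessOrder is a hypothetical operation running inside its own transaction. The exception filter lets the final deadlock propagate instead of being swallowed.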
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers will inherit (if you don't already have one), and in it override OnActionExecuting, trap the specific exception(s), and retry. This way you'll have the code in only one place - but, again, this assumes it is safe to rerun the entire action in such a case.
Example:
public abstract class MyBaseController : Controller
{
protected override void OnActionExecuting(
ActionExecutingContext filterContext
)
{
int maxtries = 5;
while(maxtries > 0)
{
try
{
base.OnActionExecuting(filterContext);
return;
}
catch(DeadlockException dlex)
{
maxtries--;
}
catch(Exception ex)
{
throw;
}
}
throw new Exception("Persistent DB locking - max retries reached.");
}
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
EDIT: Btw, Bigtoe's answer is correct - deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be reviewing and (re-)structuring queries so as to avoid deadlock in the first place. Only if that is not practical should the above workaround be employed.